""" salt.utils.master ----------------- Utilities that can only be used on a salt master. """ import logging import os import signal from threading import Event, Thread import salt.cache import salt.client import salt.config import salt.payload import salt.pillar import salt.utils.atomicfile import salt.utils.files import salt.utils.minions import salt.utils.platform import salt.utils.stringutils import salt.utils.verify from salt.exceptions import SaltException from salt.utils.cache import CacheCli as cache_cli from salt.utils.process import Process from salt.utils.zeromq import zmq log = logging.getLogger(__name__) def get_running_jobs(opts): """ Return the running jobs on this minion """ ret = [] proc_dir = os.path.join(opts["cachedir"], "proc") if not os.path.isdir(proc_dir): return ret for fn_ in os.listdir(proc_dir): path = os.path.join(proc_dir, fn_) try: data = _read_proc_file(path, opts) if data is not None: ret.append(data) except OSError: # proc files may be removed at any time during this process by # the master process that is executing the JID in question, so # we must ignore ENOENT during this process log.trace("%s removed during processing by master process", path) return ret def _read_proc_file(path, opts): """ Return a dict of JID metadata, or None """ with salt.utils.files.fopen(path, "rb") as fp_: buf = fp_.read() fp_.close() if buf: data = salt.payload.loads(buf) else: # Proc file is empty, remove try: os.remove(path) except OSError: log.debug("Unable to remove proc file %s.", path) return None if not isinstance(data, dict): # Invalid serial object return None if not salt.utils.process.os_is_running(data["pid"]): # The process is no longer running, clear out the file and # continue try: os.remove(path) except OSError: log.debug("Unable to remove proc file %s.", path) return None if not _check_cmdline(data): pid = data.get("pid") if pid: log.warning("PID %s exists but does not appear to be a salt process.", pid) try: os.remove(path) except 
OSError: log.debug("Unable to remove proc file %s.", path) return None return data def _check_cmdline(data): """ In some cases where there are an insane number of processes being created on a system a PID can get recycled or assigned to a non-Salt process. On Linux this fn checks to make sure the PID we are checking on is actually a Salt process. For non-Linux systems we punt and just return True """ if not salt.utils.platform.is_linux(): return True pid = data.get("pid") if not pid: return False if not os.path.isdir("/proc"): return True path = os.path.join(f"/proc/{pid}/cmdline") if not os.path.isfile(path): return False try: with salt.utils.files.fopen(path, "rb") as fp_: return b"salt" in fp_.read() except OSError: return False class MasterPillarUtil: """ Helper utility for easy access to targeted minion grain and pillar data, either from cached data on the master or retrieved on demand, or (by default) both. The minion pillar data returned in get_minion_pillar() is compiled directly from salt.pillar.Pillar on the master to avoid any possible 'pillar poisoning' from a compromised or untrusted minion. ** However, the minion grains are still possibly entirely supplied by the minion. ** Example use case: For runner modules that need access minion pillar data, MasterPillarUtil.get_minion_pillar should be used instead of getting the pillar data by executing the "pillar" module on the minions: # my_runner.py tgt = 'web*' pillar_util = salt.utils.master.MasterPillarUtil(tgt, tgt_type='glob', opts=__opts__) pillar_data = pillar_util.get_minion_pillar() """ def __init__( self, tgt="", tgt_type="glob", saltenv=None, use_cached_grains=True, use_cached_pillar=True, grains_fallback=True, pillar_fallback=True, opts=None, ): log.debug("New instance of %s created.", self.__class__.__name__) if opts is None: log.error("%s: Missing master opts init arg.", self.__class__.__name__) raise SaltException( f"{self.__class__.__name__}: Missing master opts init arg." 
) else: self.opts = opts self.tgt = tgt self.tgt_type = tgt_type self.saltenv = saltenv self.use_cached_grains = use_cached_grains self.use_cached_pillar = use_cached_pillar self.grains_fallback = grains_fallback self.pillar_fallback = pillar_fallback self.cache = salt.cache.factory(opts) log.debug( "Init settings: tgt: '%s', tgt_type: '%s', saltenv: '%s', " "use_cached_grains: %s, use_cached_pillar: %s, " "grains_fallback: %s, pillar_fallback: %s", tgt, tgt_type, saltenv, use_cached_grains, use_cached_pillar, grains_fallback, pillar_fallback, ) def _get_cached_mine_data(self, *minion_ids): # Return one dict with the cached mine data of the targeted minions mine_data = {minion_id: {} for minion_id in minion_ids} if not self.opts.get("minion_data_cache", False) and not self.opts.get( "enforce_mine_cache", False ): log.debug( "Skipping cached mine data minion_data_cache" "and enfore_mine_cache are both disabled." ) return mine_data if not minion_ids: minion_ids = self.cache.list("minions") for minion_id in minion_ids: if not salt.utils.verify.valid_id(self.opts, minion_id): continue mdata = self.cache.fetch(f"minions/{minion_id}", "mine") if isinstance(mdata, dict): mine_data[minion_id] = mdata return mine_data def _get_cached_minion_data(self, *minion_ids): # Return two separate dicts of cached grains and pillar data of the # minions grains = {minion_id: {} for minion_id in minion_ids} pillars = grains.copy() if not self.opts.get("minion_data_cache", False): log.debug("Skipping cached data because minion_data_cache is not enabled.") return grains, pillars if not minion_ids: minion_ids = self.cache.list("minions") for minion_id in minion_ids: if not salt.utils.verify.valid_id(self.opts, minion_id): continue mdata = self.cache.fetch(f"minions/{minion_id}", "data") if not isinstance(mdata, dict): log.warning( "cache.fetch should always return a dict. 
ReturnedType: %s," " MinionId: %s", type(mdata).__name__, minion_id, ) continue if "grains" in mdata: grains[minion_id] = mdata["grains"] if "pillar" in mdata: pillars[minion_id] = mdata["pillar"] return grains, pillars def _get_live_minion_grains(self, minion_ids): # Returns a dict of grains fetched directly from the minions log.debug('Getting live grains for minions: "%s"', minion_ids) with salt.client.get_local_client(self.opts["conf_file"]) as client: return client.cmd( ",".join(minion_ids), "grains.items", timeout=self.opts["timeout"], tgt_type="list", ) def _get_live_minion_pillar(self, minion_id=None, minion_grains=None): # Returns a dict of pillar data for one minion if minion_id is None: return {} if not minion_grains: log.warning("Cannot get pillar data for %s: no grains supplied.", minion_id) return {} log.debug("Getting live pillar for %s", minion_id) pillar = salt.pillar.Pillar( self.opts, minion_grains, minion_id, self.saltenv, self.opts["ext_pillar"] ) log.debug("Compiling pillar for %s", minion_id) ret = pillar.compile_pillar() return ret def _get_minion_grains(self, *minion_ids, **kwargs): # Get the minion grains either from cache or from a direct query # on the minion. By default try to use cached grains first, then # fall back to querying the minion directly. 
ret = {} cached_grains = kwargs.get("cached_grains", {}) cret = {} lret = {} if self.use_cached_grains: cret = { minion_id: mcache for (minion_id, mcache) in cached_grains.items() if mcache } missed_minions = [ minion_id for minion_id in minion_ids if minion_id not in cret ] log.debug("Missed cached minion grains for: %s", missed_minions) if self.grains_fallback: lret = self._get_live_minion_grains(missed_minions) ret = dict( list({minion_id: {} for minion_id in minion_ids}.items()) + list(lret.items()) + list(cret.items()) ) else: lret = self._get_live_minion_grains(minion_ids) missed_minions = [ minion_id for minion_id in minion_ids if minion_id not in lret ] log.debug("Missed live minion grains for: %s", missed_minions) if self.grains_fallback: cret = { minion_id: mcache for (minion_id, mcache) in cached_grains.items() if mcache } ret = dict( list({minion_id: {} for minion_id in minion_ids}.items()) + list(lret.items()) + list(cret.items()) ) return ret def _get_minion_pillar(self, *minion_ids, **kwargs): # Get the minion pillar either from cache or from a direct query # on the minion. By default try use the cached pillar first, then # fall back to rendering pillar on demand with the supplied grains. 
ret = {} grains = kwargs.get("grains", {}) cached_pillar = kwargs.get("cached_pillar", {}) cret = {} lret = {} if self.use_cached_pillar: cret = { minion_id: mcache for (minion_id, mcache) in cached_pillar.items() if mcache } missed_minions = [ minion_id for minion_id in minion_ids if minion_id not in cret ] log.debug("Missed cached minion pillars for: %s", missed_minions) if self.pillar_fallback: lret = { minion_id: self._get_live_minion_pillar( minion_id, grains.get(minion_id, {}) ) for minion_id in missed_minions } ret = dict( list({minion_id: {} for minion_id in minion_ids}.items()) + list(lret.items()) + list(cret.items()) ) else: lret = { minion_id: self._get_live_minion_pillar( minion_id, grains.get(minion_id, {}) ) for minion_id in minion_ids } missed_minions = [ minion_id for minion_id in minion_ids if minion_id not in lret ] log.debug("Missed live minion pillars for: %s", missed_minions) if self.pillar_fallback: cret = { minion_id: mcache for (minion_id, mcache) in cached_pillar.items() if mcache } ret = dict( list({minion_id: {} for minion_id in minion_ids}.items()) + list(lret.items()) + list(cret.items()) ) return ret def _tgt_to_list(self): # Return a list of minion ids that match the target and tgt_type minion_ids = [] ckminions = salt.utils.minions.CkMinions(self.opts) _res = ckminions.check_minions(self.tgt, self.tgt_type) minion_ids = _res["minions"] if not minion_ids: log.debug( 'No minions matched for tgt="%s" and tgt_type="%s"', self.tgt, self.tgt_type, ) return {} log.debug( 'Matching minions for tgt="%s" and tgt_type="%s": %s', self.tgt, self.tgt_type, minion_ids, ) return minion_ids def get_minion_pillar(self): """ Get pillar data for the targeted minions, either by fetching the cached minion data on the master, or by compiling the minion's pillar data on the master. For runner modules that need access minion pillar data, this function should be used instead of getting the pillar data by executing the pillar module on the minions. 
By default, this function tries hard to get the pillar data: - Try to get the cached minion grains and pillar if the master has minion_data_cache: True - If the pillar data for the minion is cached, use it. - If there is no cached grains/pillar data for a minion, then try to get the minion grains directly from the minion. - Use the minion grains to compile the pillar directly from the master using salt.pillar.Pillar """ minion_pillars = {} minion_grains = {} minion_ids = self._tgt_to_list() if self.tgt and not minion_ids: return {} if any( arg for arg in [ self.use_cached_grains, self.use_cached_pillar, self.grains_fallback, self.pillar_fallback, ] ): log.debug("Getting cached minion data") cached_minion_grains, cached_minion_pillars = self._get_cached_minion_data( *minion_ids ) else: cached_minion_grains = {} cached_minion_pillars = {} log.debug("Getting minion grain data for: %s", minion_ids) minion_grains = self._get_minion_grains( *minion_ids, cached_grains=cached_minion_grains ) log.debug("Getting minion pillar data for: %s", minion_ids) minion_pillars = self._get_minion_pillar( *minion_ids, grains=minion_grains, cached_pillar=cached_minion_pillars ) return minion_pillars def get_minion_grains(self): """ Get grains data for the targeted minions, either by fetching the cached minion data on the master, or by fetching the grains directly on the minion. By default, this function tries hard to get the grains data: - Try to get the cached minion grains if the master has minion_data_cache: True - If the grains data for the minion is cached, use it. - If there is no cached grains data for a minion, then try to get the minion grains directly from the minion. 
""" minion_grains = {} minion_ids = self._tgt_to_list() if not minion_ids: return {} if any(arg for arg in [self.use_cached_grains, self.grains_fallback]): log.debug("Getting cached minion data.") cached_minion_grains, cached_minion_pillars = self._get_cached_minion_data( *minion_ids ) else: cached_minion_grains = {} log.debug("Getting minion grain data for: %s", minion_ids) minion_grains = self._get_minion_grains( *minion_ids, cached_grains=cached_minion_grains ) return minion_grains def get_cached_mine_data(self): """ Get cached mine data for the targeted minions. """ mine_data = {} minion_ids = self._tgt_to_list() log.debug("Getting cached mine data for: %s", minion_ids) mine_data = self._get_cached_mine_data(*minion_ids) return mine_data def clear_cached_minion_data( self, clear_pillar=False, clear_grains=False, clear_mine=False, clear_mine_func=None, ): """ Clear the cached data/files for the targeted minions. """ clear_what = [] if clear_pillar: clear_what.append("pillar") if clear_grains: clear_what.append("grains") if clear_mine: clear_what.append("mine") if clear_mine_func is not None: clear_what.append(f"mine_func: '{clear_mine_func}'") if not clear_what: log.debug("No cached data types specified for clearing.") return False minion_ids = self._tgt_to_list() log.debug("Clearing cached %s data for: %s", ", ".join(clear_what), minion_ids) if clear_pillar == clear_grains: # clear_pillar and clear_grains are both True or both False. # This means we don't deal with pillar/grains caches at all. grains = {} pillars = {} else: # Unless both clear_pillar and clear_grains are True, we need # to read in the pillar/grains data since they are both stored # in the same file, 'data.p' grains, pillars = self._get_cached_minion_data(*minion_ids) try: c_minions = self.cache.list("minions") for minion_id in minion_ids: if not salt.utils.verify.valid_id(self.opts, minion_id): continue if minion_id not in c_minions: # Cache bank for this minion does not exist. Nothing to do. 
continue bank = f"minions/{minion_id}" minion_pillar = pillars.pop(minion_id, False) minion_grains = grains.pop(minion_id, False) if ( (clear_pillar and clear_grains) or (clear_pillar and not minion_grains) or (clear_grains and not minion_pillar) ): # Not saving pillar or grains, so just delete the cache file self.cache.flush(bank, "data") elif clear_pillar and minion_grains: self.cache.store(bank, "data", {"grains": minion_grains}) elif clear_grains and minion_pillar: self.cache.store(bank, "data", {"pillar": minion_pillar}) if clear_mine: # Delete the whole mine file self.cache.flush(bank, "mine") elif clear_mine_func is not None: # Delete a specific function from the mine file mine_data = self.cache.fetch(bank, "mine") if isinstance(mine_data, dict): if mine_data.pop(clear_mine_func, False): self.cache.store(bank, "mine", mine_data) except OSError: return True return True class CacheTimer(Thread): """ A basic timer class the fires timer-events every second. This is used for cleanup by the ConnectedCache() """ def __init__(self, opts, event): Thread.__init__(self) self.opts = opts self.stopped = event self.daemon = True self.timer_sock = os.path.join(self.opts["sock_dir"], "con_timer.ipc") def run(self): """ main loop that fires the event every second """ context = zmq.Context() # the socket for outgoing timer events socket = context.socket(zmq.PUB) socket.setsockopt(zmq.LINGER, 100) socket.bind("ipc://" + self.timer_sock) count = 0 log.debug("ConCache-Timer started") while not self.stopped.wait(1): socket.send(salt.payload.dumps(count)) # pylint: disable=missing-kwoa count += 1 if count >= 60: count = 0 class CacheWorker(Process): """ Worker for ConnectedCache which runs in its own process to prevent blocking of ConnectedCache main-loop when refreshing minion-list """ def __init__(self, opts, **kwargs): """ Sets up the zmq-connection to the ConCache """ super().__init__(**kwargs) self.opts = opts def run(self): """ Gather currently connected minions and update 
the cache """ new_mins = list(salt.utils.minions.CkMinions(self.opts).connected_ids()) cc = cache_cli(self.opts) cc.get_cached() cc.put_cache([new_mins]) log.debug("ConCache CacheWorker update finished") class ConnectedCache(Process): """ Provides access to all minions ids that the master has successfully authenticated. The cache is cleaned up regularly by comparing it to the IPs that have open connections to the master publisher port. """ def __init__(self, opts, **kwargs): """ starts the timer and inits the cache itself """ super().__init__(**kwargs) log.debug("ConCache initializing...") # the possible settings for the cache self.opts = opts # the actual cached minion ids self.minions = [] self.cache_sock = os.path.join(self.opts["sock_dir"], "con_cache.ipc") self.update_sock = os.path.join(self.opts["sock_dir"], "con_upd.ipc") self.upd_t_sock = os.path.join(self.opts["sock_dir"], "con_timer.ipc") self.cleanup() # the timer provides 1-second intervals to the loop in run() # to make the cache system most responsive, we do not use a loop- # delay which makes it hard to get 1-second intervals without a timer self.timer_stop = Event() self.timer = CacheTimer(self.opts, self.timer_stop) self.timer.start() self.running = True def signal_handler(self, sig, frame): """ handle signals and shutdown """ self.stop() def cleanup(self): """ remove sockets on shutdown """ log.debug("ConCache cleaning up") if os.path.exists(self.cache_sock): os.remove(self.cache_sock) if os.path.exists(self.update_sock): os.remove(self.update_sock) if os.path.exists(self.upd_t_sock): os.remove(self.upd_t_sock) def secure(self): """ secure the sockets for root-only access """ log.debug("ConCache securing sockets") if os.path.exists(self.cache_sock): os.chmod(self.cache_sock, 0o600) if os.path.exists(self.update_sock): os.chmod(self.update_sock, 0o600) if os.path.exists(self.upd_t_sock): os.chmod(self.upd_t_sock, 0o600) def stop(self): """ shutdown cache process """ # avoid getting called twice 
self.cleanup() if self.running: self.running = False self.timer_stop.set() self.timer.join() def run(self): """ Main loop of the ConCache, starts updates in intervals and answers requests from the MWorkers """ context = zmq.Context() # the socket for incoming cache requests creq_in = context.socket(zmq.REP) creq_in.setsockopt(zmq.LINGER, 100) creq_in.bind("ipc://" + self.cache_sock) # the socket for incoming cache-updates from workers cupd_in = context.socket(zmq.SUB) cupd_in.setsockopt(zmq.SUBSCRIBE, b"") cupd_in.setsockopt(zmq.LINGER, 100) cupd_in.bind("ipc://" + self.update_sock) # the socket for the timer-event timer_in = context.socket(zmq.SUB) timer_in.setsockopt(zmq.SUBSCRIBE, b"") timer_in.setsockopt(zmq.LINGER, 100) timer_in.connect("ipc://" + self.upd_t_sock) poller = zmq.Poller() poller.register(creq_in, zmq.POLLIN) poller.register(cupd_in, zmq.POLLIN) poller.register(timer_in, zmq.POLLIN) # register a signal handler signal.signal(signal.SIGINT, self.signal_handler) # secure the sockets from the world self.secure() log.info("ConCache started") while self.running: # we check for new events with the poller try: socks = dict(poller.poll(1)) except KeyboardInterrupt: self.stop() except zmq.ZMQError as zmq_err: log.error("ConCache ZeroMQ-Error occurred") log.exception(zmq_err) self.stop() # check for next cache-request if socks.get(creq_in) == zmq.POLLIN: msg = salt.payload.loads(creq_in.recv()) log.debug("ConCache Received request: %s", msg) # requests to the minion list are send as str's if isinstance(msg, str): if msg == "minions": # Send reply back to client reply = salt.payload.dumps(self.minions) creq_in.send(reply) # pylint: disable=missing-kwoa # check for next cache-update from workers if socks.get(cupd_in) == zmq.POLLIN: new_c_data = salt.payload.loads(cupd_in.recv()) # tell the worker to exit # cupd_in.send(serial.dumps('ACK')) # check if the returned data is usable if not isinstance(new_c_data, list): log.error("ConCache Worker returned unusable 
result") del new_c_data continue # the cache will receive lists of minions # 1. if the list only has 1 item, its from an MWorker, we append it # 2. if the list contains another list, its from a CacheWorker and # the currently cached minions are replaced with that list # 3. anything else is considered malformed try: if not new_c_data: log.debug("ConCache Got empty update from worker") continue data = new_c_data[0] if isinstance(data, str): if data not in self.minions: log.debug( "ConCache Adding minion %s to cache", new_c_data[0] ) self.minions.append(data) elif isinstance(data, list): log.debug("ConCache Replacing minion list from worker") self.minions = data except IndexError: log.debug("ConCache Got malformed result dict from worker") del new_c_data log.info("ConCache %s entries in cache", len(self.minions)) # check for next timer-event to start new jobs if socks.get(timer_in) == zmq.POLLIN: sec_event = salt.payload.loads(timer_in.recv()) # update the list every 30 seconds if int(sec_event % 30) == 0: cw = CacheWorker(self.opts) cw.start() self.stop() creq_in.close() cupd_in.close() timer_in.close() context.term() log.debug("ConCache Shutting down") def ping_all_connected_minions(opts): """ Ping all connected minions. """ if opts["minion_data_cache"]: tgt = list(salt.utils.minions.CkMinions(opts).connected_ids()) form = "list" else: tgt = "*" form = "glob" with salt.client.LocalClient() as client: client.cmd_async(tgt, "test.ping", tgt_type=form) def get_master_key(key_user, opts, skip_perm_errors=False): """ Return the master key. """ if key_user == "root": if opts.get("user", "root") != "root": key_user = opts.get("user", "root") if key_user.startswith("sudo_"): key_user = opts.get("user", "root") if salt.utils.platform.is_windows(): # The username may contain '\' if it is in Windows # 'DOMAIN\username' format. Fix this for the keyfile path. 
key_user = key_user.replace("\\", "_") keyfile = os.path.join(opts["cachedir"], f".{key_user}_key") # Make sure all key parent directories are accessible salt.utils.verify.check_path_traversal(opts["cachedir"], key_user, skip_perm_errors) try: with salt.utils.files.fopen(keyfile, "r") as key: return key.read() except OSError: # Fall back to eauth return "" def get_values_of_matching_keys(pattern_dict, user_name): """ Check a whitelist and/or blacklist to see if the value matches it. """ ret = [] for expr in pattern_dict: if salt.utils.stringutils.expr_match(user_name, expr): ret.extend(pattern_dict[expr]) return ret # test code for the ConCache class if __name__ == "__main__": opts = salt.config.master_config("/etc/salt/master") conc = ConnectedCache(opts) conc.start()