Viewing File: /usr/lib/rads/venv/lib/python3.13/site-packages/jmespath/parser.py
"""Top down operator precedence parser.

This is an implementation of Vaughan R. Pratt's
"Top Down Operator Precedence" parser
(http://dl.acm.org/citation.cfm?doid=512927.512931).

These are some additional resources that help explain the
general idea behind a Pratt parser:

* http://effbot.org/zone/simple-top-down-parsing.htm
* http://javascript.crockford.com/tdop/tdop.html

A few notes on the implementation.

* All the nud/led tokens are on the Parser class itself, and are dispatched
  using getattr().  This keeps all the parsing logic contained to a single
  class.
* We use two passes through the data.  One to create a list of tokens, then
  one pass through the tokens to create the AST.  While the lexer actually
  yields tokens, we convert it to a list so we can easily implement two
  tokens of lookahead.  A previous implementation used a fixed circular
  buffer, but it was significantly slower.  Also, the average jmespath
  expression typically does not have a large number of tokens, so this is
  not an issue.  And interestingly enough, creating a token list first is
  actually faster than consuming from the token iterator one token at a
  time.

"""
import random

from jmespath import lexer
from jmespath.compat import with_repr_method
from jmespath import ast
from jmespath import exceptions
from jmespath import visitor


class Parser(object):
    BINDING_POWER = {
        'eof': 0,
        'unquoted_identifier': 0,
        'quoted_identifier': 0,
        'literal': 0,
        'rbracket': 0,
        'rparen': 0,
        'comma': 0,
        'rbrace': 0,
        'number': 0,
        'current': 0,
        'expref': 0,
        'colon': 0,
        'pipe': 1,
        'or': 2,
        'and': 3,
        'eq': 5,
        'gt': 5,
        'lt': 5,
        'gte': 5,
        'lte': 5,
        'ne': 5,
        'flatten': 9,
        # Everything above stops a projection.
        'star': 20,
        'filter': 21,
        'dot': 40,
        'not': 45,
        'lbrace': 50,
        'lbracket': 55,
        'lparen': 60,
    }
    # The maximum binding power for a token that can stop
    # a projection.
    _PROJECTION_STOP = 10
    # The _MAX_SIZE most recent expressions are cached in
    # _CACHE dict.
    _CACHE = {}
    _MAX_SIZE = 128

    def __init__(self, lookahead=2):
        self.tokenizer = None
        self._tokens = [None] * lookahead
        self._buffer_size = lookahead
        self._index = 0

    def parse(self, expression):
        cached = self._CACHE.get(expression)
        if cached is not None:
            return cached
        parsed_result = self._do_parse(expression)
        self._CACHE[expression] = parsed_result
        if len(self._CACHE) > self._MAX_SIZE:
            self._free_cache_entries()
        return parsed_result

    def _do_parse(self, expression):
        try:
            return self._parse(expression)
        except exceptions.LexerError as e:
            e.expression = expression
            raise
        except exceptions.IncompleteExpressionError as e:
            e.set_expression(expression)
            raise
        except exceptions.ParseError as e:
            e.expression = expression
            raise

    def _parse(self, expression):
        self.tokenizer = lexer.Lexer().tokenize(expression)
        self._tokens = list(self.tokenizer)
        self._index = 0
        parsed = self._expression(binding_power=0)
        if not self._current_token() == 'eof':
            t = self._lookahead_token(0)
            raise exceptions.ParseError(t['start'], t['value'], t['type'],
                                        "Unexpected token: %s" % t['value'])
        return ParsedResult(expression, parsed)

    def _expression(self, binding_power=0):
        left_token = self._lookahead_token(0)
        self._advance()
        nud_function = getattr(
            self, '_token_nud_%s' % left_token['type'],
            self._error_nud_token)
        left = nud_function(left_token)
        current_token = self._current_token()
        while binding_power < self.BINDING_POWER[current_token]:
            led = getattr(self, '_token_led_%s' % current_token, None)
            if led is None:
                error_token = self._lookahead_token(0)
                self._error_led_token(error_token)
            else:
                self._advance()
                left = led(left)
                current_token = self._current_token()
        return left

    def _token_nud_literal(self, token):
        return ast.literal(token['value'])

    def _token_nud_unquoted_identifier(self, token):
        return ast.field(token['value'])

    def _token_nud_quoted_identifier(self, token):
        field = ast.field(token['value'])
        # You can't have a quoted identifier as a function
        # name.
        if self._current_token() == 'lparen':
            t = self._lookahead_token(0)
            raise exceptions.ParseError(
                0, t['value'], t['type'],
                'Quoted identifier not allowed for function names.')
        return field

    def _token_nud_star(self, token):
        left = ast.identity()
        if self._current_token() == 'rbracket':
            right = ast.identity()
        else:
            right = self._parse_projection_rhs(self.BINDING_POWER['star'])
        return ast.value_projection(left, right)

    def _token_nud_filter(self, token):
        return self._token_led_filter(ast.identity())

    def _token_nud_lbrace(self, token):
        return self._parse_multi_select_hash()

    def _token_nud_lparen(self, token):
        expression = self._expression()
        self._match('rparen')
        return expression

    def _token_nud_flatten(self, token):
        left = ast.flatten(ast.identity())
        right = self._parse_projection_rhs(
            self.BINDING_POWER['flatten'])
        return ast.projection(left, right)

    def _token_nud_not(self, token):
        expr = self._expression(self.BINDING_POWER['not'])
        return ast.not_expression(expr)

    def _token_nud_lbracket(self, token):
        if self._current_token() in ['number', 'colon']:
            right = self._parse_index_expression()
            # We could optimize this and remove the identity() node.
            # We don't really need an index_expression node, we can
            # just emit an index node here if we're not dealing
            # with a slice.
            return self._project_if_slice(ast.identity(), right)
        elif self._current_token() == 'star' and \
                self._lookahead(1) == 'rbracket':
            self._advance()
            self._advance()
            right = self._parse_projection_rhs(self.BINDING_POWER['star'])
            return ast.projection(ast.identity(), right)
        else:
            return self._parse_multi_select_list()

    def _parse_index_expression(self):
        # We're here:
        # [<current>
        #  ^
        #  | current token
        if (self._lookahead(0) == 'colon' or
                self._lookahead(1) == 'colon'):
            return self._parse_slice_expression()
        else:
            # Parse the syntax [number]
            node = ast.index(self._lookahead_token(0)['value'])
            self._advance()
            self._match('rbracket')
            return node

    def _parse_slice_expression(self):
        # [start:end:step]
        # Where start, end, and step are optional.
        # The last colon is optional as well.
        parts = [None, None, None]
        index = 0
        current_token = self._current_token()
        while not current_token == 'rbracket' and index < 3:
            if current_token == 'colon':
                index += 1
                if index == 3:
                    self._raise_parse_error_for_token(
                        self._lookahead_token(0), 'syntax error')
                self._advance()
            elif current_token == 'number':
                parts[index] = self._lookahead_token(0)['value']
                self._advance()
            else:
                self._raise_parse_error_for_token(
                    self._lookahead_token(0), 'syntax error')
            current_token = self._current_token()
        self._match('rbracket')
        return ast.slice(*parts)

    def _token_nud_current(self, token):
        return ast.current_node()

    def _token_nud_expref(self, token):
        expression = self._expression(self.BINDING_POWER['expref'])
        return ast.expref(expression)

    def _token_led_dot(self, left):
        if not self._current_token() == 'star':
            right = self._parse_dot_rhs(self.BINDING_POWER['dot'])
            if left['type'] == 'subexpression':
                left['children'].append(right)
                return left
            else:
                return ast.subexpression([left, right])
        else:
            # We're creating a projection.
            self._advance()
            right = self._parse_projection_rhs(
                self.BINDING_POWER['dot'])
            return ast.value_projection(left, right)

    def _token_led_pipe(self, left):
        right = self._expression(self.BINDING_POWER['pipe'])
        return ast.pipe(left, right)

    def _token_led_or(self, left):
        right = self._expression(self.BINDING_POWER['or'])
        return ast.or_expression(left, right)

    def _token_led_and(self, left):
        right = self._expression(self.BINDING_POWER['and'])
        return ast.and_expression(left, right)

    def _token_led_lparen(self, left):
        if left['type'] != 'field':
            #  0 - first func arg or closing paren.
            # -1 - '(' token
            # -2 - invalid function "name".
            prev_t = self._lookahead_token(-2)
            raise exceptions.ParseError(
                prev_t['start'], prev_t['value'], prev_t['type'],
                "Invalid function name '%s'" % prev_t['value'])
        name = left['value']
        args = []
        while not self._current_token() == 'rparen':
            expression = self._expression()
            if self._current_token() == 'comma':
                self._match('comma')
            args.append(expression)
        self._match('rparen')
        function_node = ast.function_expression(name, args)
        return function_node

    def _token_led_filter(self, left):
        # Filters are projections.
        condition = self._expression(0)
        self._match('rbracket')
        if self._current_token() == 'flatten':
            right = ast.identity()
        else:
            right = self._parse_projection_rhs(self.BINDING_POWER['filter'])
        return ast.filter_projection(left, right, condition)

    def _token_led_eq(self, left):
        return self._parse_comparator(left, 'eq')

    def _token_led_ne(self, left):
        return self._parse_comparator(left, 'ne')

    def _token_led_gt(self, left):
        return self._parse_comparator(left, 'gt')

    def _token_led_gte(self, left):
        return self._parse_comparator(left, 'gte')

    def _token_led_lt(self, left):
        return self._parse_comparator(left, 'lt')

    def _token_led_lte(self, left):
        return self._parse_comparator(left, 'lte')

    def _token_led_flatten(self, left):
        left = ast.flatten(left)
        right = self._parse_projection_rhs(
            self.BINDING_POWER['flatten'])
        return ast.projection(left, right)

    def _token_led_lbracket(self, left):
        token = self._lookahead_token(0)
        if token['type'] in ['number', 'colon']:
            right = self._parse_index_expression()
            if left['type'] == 'index_expression':
                # Optimization: if the left node is an index expr,
                # we can avoid creating another node and instead just add
                # the right node as a child of the left.
                left['children'].append(right)
                return left
            else:
                return self._project_if_slice(left, right)
        else:
            # We have a projection
            self._match('star')
            self._match('rbracket')
            right = self._parse_projection_rhs(self.BINDING_POWER['star'])
            return ast.projection(left, right)

    def _project_if_slice(self, left, right):
        index_expr = ast.index_expression([left, right])
        if right['type'] == 'slice':
            return ast.projection(
                index_expr,
                self._parse_projection_rhs(self.BINDING_POWER['star']))
        else:
            return index_expr

    def _parse_comparator(self, left, comparator):
        right = self._expression(self.BINDING_POWER[comparator])
        return ast.comparator(comparator, left, right)

    def _parse_multi_select_list(self):
        expressions = []
        while True:
            expression = self._expression()
            expressions.append(expression)
            if self._current_token() == 'rbracket':
                break
            else:
                self._match('comma')
        self._match('rbracket')
        return ast.multi_select_list(expressions)

    def _parse_multi_select_hash(self):
        pairs = []
        while True:
            key_token = self._lookahead_token(0)
            # Before getting the token value, verify it's
            # an identifier.
            self._match_multiple_tokens(
                token_types=['quoted_identifier', 'unquoted_identifier'])
            key_name = key_token['value']
            self._match('colon')
            value = self._expression(0)
            node = ast.key_val_pair(key_name=key_name, node=value)
            pairs.append(node)
            if self._current_token() == 'comma':
                self._match('comma')
            elif self._current_token() == 'rbrace':
                self._match('rbrace')
                break
        return ast.multi_select_dict(nodes=pairs)

    def _parse_projection_rhs(self, binding_power):
        # Parse the right hand side of the projection.
        if self.BINDING_POWER[self._current_token()] < self._PROJECTION_STOP:
            # BP of 10 are all the tokens that stop a projection.
            right = ast.identity()
        elif self._current_token() == 'lbracket':
            right = self._expression(binding_power)
        elif self._current_token() == 'filter':
            right = self._expression(binding_power)
        elif self._current_token() == 'dot':
            self._match('dot')
            right = self._parse_dot_rhs(binding_power)
        else:
            self._raise_parse_error_for_token(self._lookahead_token(0),
                                              'syntax error')
        return right

    def _parse_dot_rhs(self, binding_power):
        # From the grammar:
        # expression '.' ( identifier /
        #                  multi-select-list /
        #                  multi-select-hash /
        #                  function-expression /
        #                  *
        # In terms of tokens that means that after a '.',
        # you can have:
        lookahead = self._current_token()
        # Common case "foo.bar", so first check for an identifier.
        if lookahead in ['quoted_identifier', 'unquoted_identifier', 'star']:
            return self._expression(binding_power)
        elif lookahead == 'lbracket':
            self._match('lbracket')
            return self._parse_multi_select_list()
        elif lookahead == 'lbrace':
            self._match('lbrace')
            return self._parse_multi_select_hash()
        else:
            t = self._lookahead_token(0)
            allowed = ['quoted_identifier', 'unquoted_identifier',
                       'lbracket', 'lbrace']
            msg = (
                "Expecting: %s, got: %s" % (allowed, t['type']))
            self._raise_parse_error_for_token(t, msg)

    def _error_nud_token(self, token):
        if token['type'] == 'eof':
            raise exceptions.IncompleteExpressionError(
                token['start'], token['value'], token['type'])
        self._raise_parse_error_for_token(token, 'invalid token')

    def _error_led_token(self, token):
        self._raise_parse_error_for_token(token, 'invalid token')

    def _match(self, token_type=None):
        # inline'd self._current_token()
        if self._current_token() == token_type:
            # inline'd self._advance()
            self._advance()
        else:
            self._raise_parse_error_maybe_eof(
                token_type, self._lookahead_token(0))

    def _match_multiple_tokens(self, token_types):
        if self._current_token() not in token_types:
            self._raise_parse_error_maybe_eof(
                token_types, self._lookahead_token(0))
        self._advance()

    def _advance(self):
        self._index += 1

    def _current_token(self):
        return self._tokens[self._index]['type']

    def _lookahead(self, number):
        return self._tokens[self._index + number]['type']

    def _lookahead_token(self, number):
        return self._tokens[self._index + number]

    def _raise_parse_error_for_token(self, token, reason):
        lex_position = token['start']
        actual_value = token['value']
        actual_type = token['type']
        raise exceptions.ParseError(lex_position, actual_value,
                                    actual_type, reason)

    def _raise_parse_error_maybe_eof(self, expected_type, token):
        lex_position = token['start']
        actual_value = token['value']
        actual_type = token['type']
        if actual_type == 'eof':
            raise exceptions.IncompleteExpressionError(
                lex_position, actual_value, actual_type)
        message = 'Expecting: %s, got: %s' % (expected_type, actual_type)
        raise exceptions.ParseError(
            lex_position, actual_value, actual_type, message)

    def _free_cache_entries(self):
        for key in random.sample(list(self._CACHE.keys()),
                                 int(self._MAX_SIZE / 2)):
            self._CACHE.pop(key, None)

    @classmethod
    def purge(cls):
        """Clear the expression compilation cache."""
        cls._CACHE.clear()


@with_repr_method
class ParsedResult(object):
    def __init__(self, expression, parsed):
        self.expression = expression
        self.parsed = parsed

    def search(self, value, options=None):
        interpreter = visitor.TreeInterpreter(options)
        result = interpreter.visit(self.parsed, value)
        return result

    def _render_dot_file(self):
        """Render the parsed AST as a dot file.

        Note that this is marked as an internal method because
        the AST is an implementation detail and is subject to change.
        This method can be used to help troubleshoot or for development
        purposes, but is not considered part of the public supported API.
        Use at your own risk.

        """
        renderer = visitor.GraphvizVisitor()
        contents = renderer.visit(self.parsed)
        return contents

    def __repr__(self):
        return repr(self.parsed)
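The module docstring describes the core Pratt-parsing idea: tokens carry binding powers, `nud` handlers say what a token means at the start of an expression, and `led` handlers say how it combines with the expression to its left. The following stand-alone sketch illustrates that structure on arithmetic instead of JMESPath; all names here (`PrattCalc`, `_nud_*`, `_led_*`) are hypothetical and not part of jmespath, but the two-pass token list, the `getattr()` dispatch, and the precedence-climbing loop mirror the `Parser` class above.

```python
import re


class PrattCalc:
    """Minimal Pratt parser/evaluator for + and * over integers."""

    # Binding powers: higher binds tighter, 0 stops the loop (like 'eof').
    BINDING_POWER = {'eof': 0, 'rparen': 0, 'plus': 10, 'star': 20}

    def parse(self, text):
        # Pass 1: build a token list (parser.py does this with its lexer).
        spec = [('number', r'\d+'), ('plus', r'\+'), ('star', r'\*'),
                ('lparen', r'\('), ('rparen', r'\)'), ('skip', r'\s+')]
        regex = '|'.join('(?P<%s>%s)' % pair for pair in spec)
        self._tokens = [(m.lastgroup, m.group())
                        for m in re.finditer(regex, text)
                        if m.lastgroup != 'skip'] + [('eof', '')]
        self._index = 0
        # Pass 2: walk the token list with the precedence-climbing loop.
        return self._expression(0)

    def _expression(self, binding_power):
        token_type, value = self._tokens[self._index]
        self._index += 1
        # nud: the token's meaning at the start of an (sub)expression.
        left = getattr(self, '_nud_%s' % token_type)(value)
        # Keep extending left while the next operator binds more tightly.
        while binding_power < self.BINDING_POWER[self._tokens[self._index][0]]:
            token_type = self._tokens[self._index][0]
            self._index += 1
            # led: combine the operator with the expression on its left.
            left = getattr(self, '_led_%s' % token_type)(left)
        return left

    def _nud_number(self, value):
        return int(value)

    def _nud_lparen(self, value):
        result = self._expression(0)
        self._index += 1  # consume the matching rparen
        return result

    def _led_plus(self, left):
        return left + self._expression(self.BINDING_POWER['plus'])

    def _led_star(self, left):
        return left * self._expression(self.BINDING_POWER['star'])
```

Because `star` has a higher binding power than `plus`, `PrattCalc().parse('2 + 3 * 4')` groups the multiplication first, exactly as the `while binding_power < ...` condition in `Parser._expression` groups JMESPath operators.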
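`Parser.parse` also shows a simple caching policy: memoize results per expression, and when the cache grows past `_MAX_SIZE`, evict half the keys at random rather than tracking recency. A stand-alone sketch of that policy, with hypothetical names (`RandomEvictionCache` is not part of jmespath):

```python
import random


class RandomEvictionCache:
    """Sketch of the Parser cache policy: when the dict exceeds
    _MAX_SIZE entries, drop half the keys chosen at random."""

    _MAX_SIZE = 4

    def __init__(self):
        self._cache = {}

    def put(self, key, value):
        self._cache[key] = value
        if len(self._cache) > self._MAX_SIZE:
            # Same shape as Parser._free_cache_entries: random.sample
            # over a snapshot of the keys, pop with a default so a
            # missing key is not an error.
            for k in random.sample(list(self._cache.keys()),
                                   self._MAX_SIZE // 2):
                self._cache.pop(k, None)

    def get(self, key):
        return self._cache.get(key)
```

Random eviction avoids any bookkeeping on the hit path (`get` is a plain dict lookup), which suits a cache whose entries are cheap to recompute.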