# File: /opt/imh-python/lib/python3.9/site-packages/IPython/lib/lexers.py
# -*- coding: utf-8 -*-
"""
Defines a variety of Pygments lexers for highlighting IPython code.

This includes:

    IPythonLexer, IPython3Lexer
        Lexers for pure IPython (python + magic/shell commands)

    IPythonPartialTracebackLexer, IPythonTracebackLexer
        Supports 2.x and 3.x via keyword `python3`.  The partial traceback
        lexer reads everything but the Python code appearing in a traceback.
        The full lexer combines the partial lexer with an IPython lexer.

    IPythonConsoleLexer
        A lexer for IPython console sessions, with support for tracebacks.

    IPyLexer
        A friendly lexer which examines the first line of text and from it,
        decides whether to use an IPython lexer or an IPython console lexer.
        This is probably the only lexer that needs to be explicitly added
        to Pygments.

"""
#-----------------------------------------------------------------------------
# Copyright (c) 2013, the IPython Development Team.
#
# Distributed under the terms of the Modified BSD License.
#
# The full license is in the file COPYING.txt, distributed with this software.
#-----------------------------------------------------------------------------

# Standard library
import re

# Third party
from pygments.lexers import (
    BashLexer, HtmlLexer, JavascriptLexer, RubyLexer, PerlLexer, PythonLexer,
    Python3Lexer, TexLexer)
from pygments.lexer import (
    Lexer, DelegatingLexer, RegexLexer, do_insertions, bygroups, using,
)
from pygments.token import (
    Generic, Keyword, Literal, Name, Operator, Other, Text, Error,
)
from pygments.util import get_bool_opt

# Local

line_re = re.compile('.*?\n')

__all__ = ['build_ipy_lexer', 'IPython3Lexer', 'IPythonLexer',
           'IPythonPartialTracebackLexer', 'IPythonTracebackLexer',
           'IPythonConsoleLexer', 'IPyLexer']


def build_ipy_lexer(python3):
    """Builds IPython lexers depending on the value of `python3`.

    The lexer inherits from an appropriate Python lexer and then adds
    information about IPython specific keywords (i.e. magic commands,
    shell commands, etc.)

    Parameters
    ----------
    python3 : bool
        If `True`, then build an IPython lexer from a Python 3 lexer.

    """
    # It would be nice to have a single IPython lexer class which takes
    # a boolean `python3`.  But since there are two Python lexer classes,
    # we will also have two IPython lexer classes.
    if python3:
        PyLexer = Python3Lexer
        name = 'IPython3'
        aliases = ['ipython3']
        doc = """IPython3 Lexer"""
    else:
        PyLexer = PythonLexer
        name = 'IPython'
        aliases = ['ipython2', 'ipython']
        doc = """IPython Lexer"""

    ipython_tokens = [
        (r'(?s)(\s*)(%%capture)([^\n]*\n)(.*)', bygroups(Text, Operator, Text, using(PyLexer))),
        (r'(?s)(\s*)(%%debug)([^\n]*\n)(.*)', bygroups(Text, Operator, Text, using(PyLexer))),
        (r'(?is)(\s*)(%%html)([^\n]*\n)(.*)', bygroups(Text, Operator, Text, using(HtmlLexer))),
        (r'(?s)(\s*)(%%javascript)([^\n]*\n)(.*)', bygroups(Text, Operator, Text, using(JavascriptLexer))),
        (r'(?s)(\s*)(%%js)([^\n]*\n)(.*)', bygroups(Text, Operator, Text, using(JavascriptLexer))),
        (r'(?s)(\s*)(%%latex)([^\n]*\n)(.*)', bygroups(Text, Operator, Text, using(TexLexer))),
        (r'(?s)(\s*)(%%perl)([^\n]*\n)(.*)', bygroups(Text, Operator, Text, using(PerlLexer))),
        (r'(?s)(\s*)(%%prun)([^\n]*\n)(.*)', bygroups(Text, Operator, Text, using(PyLexer))),
        (r'(?s)(\s*)(%%pypy)([^\n]*\n)(.*)', bygroups(Text, Operator, Text, using(PyLexer))),
        (r'(?s)(\s*)(%%python)([^\n]*\n)(.*)', bygroups(Text, Operator, Text, using(PyLexer))),
        (r'(?s)(\s*)(%%python2)([^\n]*\n)(.*)', bygroups(Text, Operator, Text, using(PythonLexer))),
        (r'(?s)(\s*)(%%python3)([^\n]*\n)(.*)', bygroups(Text, Operator, Text, using(Python3Lexer))),
        (r'(?s)(\s*)(%%ruby)([^\n]*\n)(.*)', bygroups(Text, Operator, Text, using(RubyLexer))),
        (r'(?s)(\s*)(%%time)([^\n]*\n)(.*)', bygroups(Text, Operator, Text, using(PyLexer))),
        (r'(?s)(\s*)(%%timeit)([^\n]*\n)(.*)', bygroups(Text, Operator, Text, using(PyLexer))),
        (r'(?s)(\s*)(%%writefile)([^\n]*\n)(.*)', bygroups(Text, Operator, Text, using(PyLexer))),
        (r'(?s)(\s*)(%%file)([^\n]*\n)(.*)', bygroups(Text,
                                                      Operator, Text, using(PyLexer))),
        (r"(?s)(\s*)(%%)(\w+)(.*)", bygroups(Text, Operator, Keyword, Text)),
        (r'(?s)(^\s*)(%%!)([^\n]*\n)(.*)', bygroups(Text, Operator, Text, using(BashLexer))),
        (r"(%%?)(\w+)(\?\??)$", bygroups(Operator, Keyword, Operator)),
        (r"\b(\?\??)(\s*)$", bygroups(Operator, Text)),
        (r'(%)(sx|sc|system)(.*)(\n)', bygroups(Operator, Keyword,
                                                using(BashLexer), Text)),
        (r'(%)(\w+)(.*\n)', bygroups(Operator, Keyword, Text)),
        (r'^(!!)(.+)(\n)', bygroups(Operator, using(BashLexer), Text)),
        (r'(!)(?!=)(.+)(\n)', bygroups(Operator, using(BashLexer), Text)),
        (r'^(\s*)(\?\??)(\s*%{0,2}[\w\.\*]*)', bygroups(Text, Operator, Text)),
        (r'(\s*%{0,2}[\w\.\*]*)(\?\??)(\s*)$', bygroups(Text, Operator, Text)),
    ]

    tokens = PyLexer.tokens.copy()
    tokens['root'] = ipython_tokens + tokens['root']

    attrs = {'name': name, 'aliases': aliases, 'filenames': [],
             '__doc__': doc, 'tokens': tokens}

    return type(name, (PyLexer,), attrs)


IPython3Lexer = build_ipy_lexer(python3=True)
IPythonLexer = build_ipy_lexer(python3=False)


class IPythonPartialTracebackLexer(RegexLexer):
    """
    Partial lexer for IPython tracebacks.

    Handles all the non-python output.

    """
    name = 'IPython Partial Traceback'

    tokens = {
        'root': [
            # Tracebacks for syntax errors have a different style.
            # For both types of tracebacks, we mark the first line with
            # Generic.Traceback.  For syntax errors, we mark the filename
            # as we mark the filenames for non-syntax tracebacks.
            #
            # These two regexps define how IPythonConsoleLexer finds a
            # traceback.
            #
            ## Non-syntax traceback
            (r'^(\^C)?(-+\n)', bygroups(Error, Generic.Traceback)),
            ## Syntax traceback
            (r'^(  File)(.*)(, line )(\d+\n)',
             bygroups(Generic.Traceback, Name.Namespace,
                      Generic.Traceback, Literal.Number.Integer)),

            # (Exception Identifier)(Whitespace)(Traceback Message)
            (r'(?u)(^[^\d\W]\w*)(\s*)(Traceback.*?\n)',
             bygroups(Name.Exception, Generic.Whitespace, Text)),
            # (Module/Filename)(Text)(Callee)(Function Signature)
            # Better options for callee and function signature?
            (r'(.*)( in )(.*)(\(.*\)\n)',
             bygroups(Name.Namespace, Text, Name.Entity, Name.Tag)),
            # Regular line: (Whitespace)(Line Number)(Python Code)
            (r'(\s*?)(\d+)(.*?\n)',
             bygroups(Generic.Whitespace, Literal.Number.Integer, Other)),
            # Emphasized line: (Arrow)(Line Number)(Python Code)
            # Using Exception token so arrow color matches the Exception.
            (r'(-*>?\s?)(\d+)(.*?\n)',
             bygroups(Name.Exception, Literal.Number.Integer, Other)),
            # (Exception Identifier)(Message)
            (r'(?u)(^[^\d\W]\w*)(:.*?\n)',
             bygroups(Name.Exception, Text)),
            # Tag everything else as Other, will be handled later.
            (r'.*\n', Other),
        ],
    }


class IPythonTracebackLexer(DelegatingLexer):
    """
    IPython traceback lexer.

    For doctests, the tracebacks can be snipped as much as desired, with the
    exception of the lines that designate a traceback.  For non-syntax error
    tracebacks, this is the line of hyphens.  For syntax error tracebacks,
    this is the line which lists the File and line number.

    """
    # The lexer inherits from DelegatingLexer.  The "root" lexer is an
    # appropriate IPython lexer, which depends on the value of the boolean
    # `python3`.  First, we parse with the partial IPython traceback lexer.
    # Then, any code marked with the "Other" token is delegated to the root
    # lexer.
    #
    name = 'IPython Traceback'
    aliases = ['ipythontb']

    def __init__(self, **options):
        """
        A subclass of `DelegatingLexer` which delegates the Python code in a
        traceback to the appropriate IPython lexer, after the
        IPythonPartialTracebackLexer has handled everything else.
        """
        # Note: we need an __init__ docstring here, as otherwise the method
        # inherits the docstring from the superclass, which fails the
        # documentation build because it references sections of the Pygments
        # docs that do not exist when building IPython's docs.
        self.python3 = get_bool_opt(options, 'python3', False)
        if self.python3:
            self.aliases = ['ipython3tb']
        else:
            self.aliases = ['ipython2tb', 'ipythontb']

        if self.python3:
            IPyLexer = IPython3Lexer
        else:
            IPyLexer = IPythonLexer

        DelegatingLexer.__init__(self, IPyLexer,
                                 IPythonPartialTracebackLexer, **options)


class IPythonConsoleLexer(Lexer):
    """
    An IPython console lexer for IPython code-blocks and doctests, such as:

    .. code-block:: rst

        .. code-block:: ipythonconsole

            In [1]: a = 'foo'

            In [2]: a
            Out[2]: 'foo'

            In [3]: print(a)
            foo

    Support is also provided for IPython exceptions:

    .. code-block:: rst

        .. code-block:: ipythonconsole

            In [1]: raise Exception
            Traceback (most recent call last):
            ...
            Exception

    """
    name = 'IPython console session'
    aliases = ['ipythonconsole']
    mimetypes = ['text/x-ipython-console']

    # The regexps used to determine what is input and what is output.
    # The default prompts for IPython are:
    #
    #     in           = 'In [#]: '
    #     continuation = '   .D.: '
    #     template     = 'Out[#]: '
    #
    # Where '#' is the 'prompt number' or 'execution count' and 'D'
    # is a number of dots matching the width of the execution count.
    #
    in1_regex = r'In \[[0-9]+\]: '
    in2_regex = r'   \.\.+\.: '
    out_regex = r'Out\[[0-9]+\]: '

    #: The regex to determine when a traceback starts.
    ipytb_start = re.compile(r'^(\^C)?(-+\n)|^(  File)(.*)(, line )(\d+\n)')

    def __init__(self, **options):
        """Initialize the IPython console lexer.

        Parameters
        ----------
        python3 : bool
            If `True`, then the console inputs are parsed using a Python 3
            lexer.  Otherwise, they are parsed using a Python 2 lexer.
        in1_regex : RegexObject
            The compiled regular expression used to detect the start
            of inputs.  Although the IPython configuration setting may have
            a trailing whitespace, do not include it in the regex.  If
            `None`, then the default input prompt is assumed.
        in2_regex : RegexObject
            The compiled regular expression used to detect the continuation
            of inputs.
            Although the IPython configuration setting may have a trailing
            whitespace, do not include it in the regex.  If `None`, then the
            default input prompt is assumed.
        out_regex : RegexObject
            The compiled regular expression used to detect outputs.  If
            `None`, then the default output prompt is assumed.

        """
        self.python3 = get_bool_opt(options, 'python3', False)
        if self.python3:
            self.aliases = ['ipython3console']
        else:
            self.aliases = ['ipython2console', 'ipythonconsole']

        in1_regex = options.get('in1_regex', self.in1_regex)
        in2_regex = options.get('in2_regex', self.in2_regex)
        out_regex = options.get('out_regex', self.out_regex)

        # So that we can work with input and output prompts which have been
        # rstrip'd (possibly by editors) we also need rstrip'd variants.  If
        # we do not do this, then such prompts will be tagged as 'output'.
        # The reason we can't just use the rstrip'd variants instead is that
        # we want any whitespace associated with the prompt to be inserted
        # with the token.  This allows formatted code to be modified so as to
        # hide the appearance of prompts, with the whitespace included.  One
        # example use of this is in copybutton.js from the standard lib
        # Python docs.
        in1_regex_rstrip = in1_regex.rstrip() + '\n'
        in2_regex_rstrip = in2_regex.rstrip() + '\n'
        out_regex_rstrip = out_regex.rstrip() + '\n'

        # Compile and save them all.
        attrs = ['in1_regex', 'in2_regex', 'out_regex',
                 'in1_regex_rstrip', 'in2_regex_rstrip', 'out_regex_rstrip']
        for attr in attrs:
            self.__setattr__(attr, re.compile(locals()[attr]))

        Lexer.__init__(self, **options)

        if self.python3:
            pylexer = IPython3Lexer
            tblexer = IPythonTracebackLexer
        else:
            pylexer = IPythonLexer
            tblexer = IPythonTracebackLexer

        self.pylexer = pylexer(**options)
        self.tblexer = tblexer(**options)

        self.reset()

    def reset(self):
        self.mode = 'output'
        self.index = 0
        self.buffer = u''
        self.insertions = []

    def buffered_tokens(self):
        """
        Generator of unprocessed tokens after doing insertions and before
        changing to a new state.
""" if self.mode == 'output': tokens = [(0, Generic.Output, self.buffer)] elif self.mode == 'input': tokens = self.pylexer.get_tokens_unprocessed(self.buffer) else: # traceback tokens = self.tblexer.get_tokens_unprocessed(self.buffer) for i, t, v in do_insertions(self.insertions, tokens): # All token indexes are relative to the buffer. yield self.index + i, t, v # Clear it all self.index += len(self.buffer) self.buffer = u'' self.insertions = [] def get_mci(self, line): """ Parses the line and returns a 3-tuple: (mode, code, insertion). `mode` is the next mode (or state) of the lexer, and is always equal to 'input', 'output', or 'tb'. `code` is a portion of the line that should be added to the buffer corresponding to the next mode and eventually lexed by another lexer. For example, `code` could be Python code if `mode` were 'input'. `insertion` is a 3-tuple (index, token, text) representing an unprocessed "token" that will be inserted into the stream of tokens that are created from the buffer once we change modes. This is usually the input or output prompt. In general, the next mode depends on current mode and on the contents of `line`. """ # To reduce the number of regex match checks, we have multiple # 'if' blocks instead of 'if-elif' blocks. # Check for possible end of input in2_match = self.in2_regex.match(line) in2_match_rstrip = self.in2_regex_rstrip.match(line) if (in2_match and in2_match.group().rstrip() == line.rstrip()) or \ in2_match_rstrip: end_input = True else: end_input = False if end_input and self.mode != 'tb': # Only look for an end of input when not in tb mode. # An ellipsis could appear within the traceback. 
            mode = 'output'
            code = u''
            insertion = (0, Generic.Prompt, line)
            return mode, code, insertion

        # Check for output prompt
        out_match = self.out_regex.match(line)
        out_match_rstrip = self.out_regex_rstrip.match(line)
        if out_match or out_match_rstrip:
            mode = 'output'
            if out_match:
                idx = out_match.end()
            else:
                idx = out_match_rstrip.end()
            code = line[idx:]
            # Use the 'heading' token for output.  We cannot use Generic.Error
            # since it would conflict with exceptions.
            insertion = (0, Generic.Heading, line[:idx])
            return mode, code, insertion

        # Check for input or continuation prompt (non stripped version)
        in1_match = self.in1_regex.match(line)
        if in1_match or (in2_match and self.mode != 'tb'):
            # New input or when not in tb, continued input.
            # We do not check for continued input when in tb since it is
            # allowable to replace a long stack with an ellipsis.
            mode = 'input'
            if in1_match:
                idx = in1_match.end()
            else: # in2_match
                idx = in2_match.end()
            code = line[idx:]
            insertion = (0, Generic.Prompt, line[:idx])
            return mode, code, insertion

        # Check for input or continuation prompt (stripped version)
        in1_match_rstrip = self.in1_regex_rstrip.match(line)
        if in1_match_rstrip or (in2_match_rstrip and self.mode != 'tb'):
            # New input or when not in tb, continued input.
            # We do not check for continued input when in tb since it is
            # allowable to replace a long stack with an ellipsis.
            mode = 'input'
            if in1_match_rstrip:
                idx = in1_match_rstrip.end()
            else: # in2_match
                idx = in2_match_rstrip.end()
            code = line[idx:]
            insertion = (0, Generic.Prompt, line[:idx])
            return mode, code, insertion

        # Check for traceback
        if self.ipytb_start.match(line):
            mode = 'tb'
            code = line
            insertion = None
            return mode, code, insertion

        # All other stuff...
        if self.mode in ('input', 'output'):
            # We assume all other text is output.  Multiline input that
            # does not use the continuation marker cannot be detected.
            # For example, the 3 in the following is clearly output:
            #
            #    In [1]: print 3
            #    3
            #
            # But the following second line is part of the input:
            #
            #    In [2]: while True:
            #        print True
            #
            # In both cases, the 2nd line will be 'output'.
            #
            mode = 'output'
        else:
            mode = 'tb'

        code = line
        insertion = None

        return mode, code, insertion

    def get_tokens_unprocessed(self, text):
        self.reset()
        for match in line_re.finditer(text):
            line = match.group()
            mode, code, insertion = self.get_mci(line)

            if mode != self.mode:
                # Yield buffered tokens before transitioning to new mode.
                for token in self.buffered_tokens():
                    yield token
                self.mode = mode

            if insertion:
                self.insertions.append((len(self.buffer), [insertion]))
            self.buffer += code

        for token in self.buffered_tokens():
            yield token


class IPyLexer(Lexer):
    r"""
    Primary lexer for all IPython-like code.

    This is a simple helper lexer.  If the first line of the text begins with
    "In \[[0-9]+\]:", then the entire text is parsed with an IPython console
    lexer.  If not, then the entire text is parsed with an IPython lexer.

    The goal is to reduce the number of lexers that are registered
    with Pygments.

    """
    name = 'IPy session'
    aliases = ['ipy']

    def __init__(self, **options):
        """
        Create a new IPyLexer instance, which dispatches to either an
        IPythonConsoleLexer (if ``In`` prompts are present) or an
        IPythonLexer (if they are not).
        """
        # An __init__ docstring is necessary so the docs build does not fail
        # due to the parent docstring referencing a section in the Pygments
        # docs.
        self.python3 = get_bool_opt(options, 'python3', False)
        if self.python3:
            self.aliases = ['ipy3']
        else:
            self.aliases = ['ipy2', 'ipy']

        Lexer.__init__(self, **options)

        self.IPythonLexer = IPythonLexer(**options)
        self.IPythonConsoleLexer = IPythonConsoleLexer(**options)

    def get_tokens_unprocessed(self, text):
        # Search for the input prompt anywhere...this allows code blocks to
        # begin with comments as well.
        if re.match(r'.*(In \[[0-9]+\]:)', text.strip(), re.DOTALL):
            lex = self.IPythonConsoleLexer
        else:
            lex = self.IPythonLexer
        for token in lex.get_tokens_unprocessed(text):
            yield token
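A minimal sketch of the dispatch rule `IPyLexer.get_tokens_unprocessed` applies: the regex below is copied verbatim from that method, while `choose_lexer_name` is a hypothetical helper used only for illustration and is not part of this module.

```python
import re

def choose_lexer_name(text):
    """Return the name of the lexer IPyLexer would dispatch to for `text`.

    Mirrors IPyLexer.get_tokens_unprocessed: if an "In [N]:" prompt appears
    anywhere in the stripped text (re.DOTALL lets '.*' cross newlines, so
    leading comments are fine), the console lexer is chosen; otherwise the
    plain IPython lexer handles the whole text.
    """
    if re.match(r'.*(In \[[0-9]+\]:)', text.strip(), re.DOTALL):
        return 'IPythonConsoleLexer'
    return 'IPythonLexer'

print(choose_lexer_name("In [1]: a = 'foo'"))   # console session
print(choose_lexer_name("x = 1\nprint(x)"))     # plain code
```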