"""
    pygments.formatters.other
    ~~~~~~~~~~~~~~~~~~~~~~~~~

    Other formatters: NullFormatter, RawTokenFormatter.

    :copyright: Copyright 2006-2024 by the Pygments team, see AUTHORS.
    :license: BSD, see LICENSE for details.
"""

from pip._vendor.pygments.formatter import Formatter
from pip._vendor.pygments.util import get_choice_opt
from pip._vendor.pygments.token import Token
from pip._vendor.pygments.console import colorize

__all__ = ['NullFormatter', 'RawTokenFormatter', 'TestcaseFormatter']


class NullFormatter(Formatter):
    """
    Output the text unchanged without any formatting.
    """
    name = 'Text only'
    aliases = ['text', 'null']
    filenames = ['*.txt']

    def format(self, tokensource, outfile):
        enc = self.encoding
        for ttype, value in tokensource:
            if enc:
                outfile.write(value.encode(enc))
            else:
                outfile.write(value)


class RawTokenFormatter(Formatter):
    r"""
    Format tokens as a raw representation for storing token streams.

    The format is ``tokentype<TAB>repr(tokenstring)\n``. The output can later
    be converted to a token stream with the `RawTokenLexer`, described in the
    :doc:`lexer list <lexers>`.

    Only two options are accepted:

    `compress`
        If set to ``'gz'`` or ``'bz2'``, compress the output with the given
        compression algorithm after encoding (default: ``''``).
    `error_color`
        If set to a color name, highlight error tokens using that color.  If
        set but with no value, defaults to ``'red'``.

        .. versionadded:: 0.11

    """
    name = 'Raw tokens'
    aliases = ['raw', 'tokens']
    filenames = ['*.raw']

    unicodeoutput = False

    def __init__(self, **options):
        Formatter.__init__(self, **options)
        # We ignore self.encoding if it is set, since it gets set for lexer
        # and formatter if given with -Oencoding on the command line.
        # The RawTokenFormatter outputs only ASCII.  Override here.
        self.encoding = 'ascii'  # let pygments.format() do the right thing
        self.compress = get_choice_opt(options, 'compress',
                                       ['', 'none', 'gz', 'bz2'], '')
        self.error_color = options.get('error_color', None)
        if self.error_color is True:
            self.error_color = 'red'
        if self.error_color is not None:
            try:
                colorize(self.error_color, '')
            except KeyError:
                raise ValueError(f"Invalid color {self.error_color!r} specified")

    def format(self, tokensource, outfile):
        try:
            outfile.write(b'')
        except TypeError:
            raise TypeError('The raw tokens formatter needs a binary '
                            'output file')
        if self.compress == 'gz':
            import gzip
            outfile = gzip.GzipFile('', 'wb', 9, outfile)

            write = outfile.write
            flush = outfile.close
        elif self.compress == 'bz2':
            import bz2
            compressor = bz2.BZ2Compressor(9)

            def write(text):
                outfile.write(compressor.compress(text))

            def flush():
                outfile.write(compressor.flush())
                outfile.flush()
        else:
            write = outfile.write
            flush = outfile.flush

        if self.error_color:
            for ttype, value in tokensource:
                line = b"%r\t%r\n" % (ttype, value)
                if ttype is Token.Error:
                    write(colorize(self.error_color, line))
                else:
                    write(line)
        else:
            for ttype, value in tokensource:
                write(b"%r\t%r\n" % (ttype, value))
        flush()


TESTCASE_BEFORE = '''\
    def testNeedsName(lexer):
        fragment = %r
        tokens = [
'''
TESTCASE_AFTER = '''\
        ]
        assert list(lexer.get_tokens(fragment)) == tokens
'''


class TestcaseFormatter(Formatter):
    """
    Format tokens as appropriate for a new testcase.

    .. versionadded:: 2.0
    """
    name = 'Testcase'
    aliases = ['testcase']

    def __init__(self, **options):
        Formatter.__init__(self, **options)
        if self.encoding is not None and self.encoding != 'utf-8':
            raise ValueError("Only None and utf-8 are allowed encodings.")

    def format(self, tokensource, outfile):
        indentation = ' ' * 12
        rawbuf = []
        outbuf = []
        for ttype, value in tokensource:
            rawbuf.append(value)
            outbuf.append(f'{indentation}({ttype}, {value!r}),\n')

        before = TESTCASE_BEFORE % (''.join(rawbuf),)
        during = ''.join(outbuf)
        after = TESTCASE_AFTER
        if self.encoding is None:
            outfile.write(before + during + after)
        else:
            outfile.write(before.encode('utf-8'))
            outfile.write(during.encode('utf-8'))
            outfile.write(after.encode('utf-8'))
        outfile.flush()
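

# Usage example (a minimal sketch, not part of the original module): it assumes
# the vendored ``pip._vendor.pygments`` package and its ``PythonLexer`` are
# importable, and shows ``RawTokenFormatter`` writing ``tokentype<TAB>repr(value)``
# lines to a binary stream.  Guarded so it only runs when this file is executed
# directly, never on import.
if __name__ == '__main__':
    import io

    from pip._vendor.pygments import highlight
    from pip._vendor.pygments.lexers import PythonLexer

    buf = io.BytesIO()  # RawTokenFormatter requires a binary output file
    highlight('print("hello")\n', PythonLexer(), RawTokenFormatter(), buf)
    print(buf.getvalue().decode('ascii'))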