token
--- Constants used with Python parse trees
Source code: Lib/token.py
This module provides constants which represent the numeric values of leaf nodes
of the parse tree (terminal tokens). Refer to the file Grammar/Tokens
in the Python distribution for the definitions of the names in the context of
the language grammar. The specific numeric values which the names map to may
change between Python versions.
The module also provides a mapping from numeric codes to names and some functions. The functions mirror definitions in the Python C header files.
- token.tok_name
Dictionary mapping the numeric values of the constants defined in this module back to name strings, allowing more human-readable representation of parse trees to be generated (see the example after these entries).
- token.ISTERMINAL(x)
Return True for terminal token values.
- token.ISNONTERMINAL(x)
Return True for non-terminal token values.
- token.ISEOF(x)
Return True if x is the marker indicating the end of input.
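For example, a brief interactive sketch using these helpers (only names and booleans are shown, since the underlying numeric codes vary between Python versions):

    >>> import token
    >>> token.tok_name[token.NAME]
    'NAME'
    >>> token.ISTERMINAL(token.NUMBER)
    True
    >>> token.ISNONTERMINAL(token.NUMBER)
    False
    >>> token.ISEOF(token.ENDMARKER)
    True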
The token constants are:
- token.ENDMARKER
- token.NAME
- token.NUMBER
- token.STRING
- token.NEWLINE
- token.INDENT
- token.DEDENT
- token.LPAR
Token value for "(".
- token.RPAR
Token value for ")".
- token.LSQB
Token value for "[".
- token.RSQB
Token value for "]".
- token.COLON
Token value for ":".
- token.COMMA
Token value for ",".
- token.SEMI
Token value for ";".
- token.PLUS
Token value for "+".
- token.MINUS
Token value for "-".
- token.STAR
Token value for "*".
- token.SLASH
Token value for "/".
- token.VBAR
Token value for "|".
- token.AMPER
Token value for "&".
- token.LESS
Token value for "<".
- token.GREATER
Token value for ">".
- token.EQUAL
Token value for "=".
- token.DOT
Token value for ".".
- token.PERCENT
Token value for "%".
- token.LBRACE
Token value for "{".
- token.RBRACE
Token value for "}".
- token.EQEQUAL
Token value for "==".
- token.NOTEQUAL
Token value for "!=".
- token.LESSEQUAL
Token value for "<=".
- token.GREATEREQUAL
Token value for ">=".
- token.TILDE
Token value for "~".
- token.CIRCUMFLEX
Token value for "^".
- token.LEFTSHIFT
Token value for "<<".
- token.RIGHTSHIFT
Token value for ">>".
- token.DOUBLESTAR
Token value for "**".
- token.PLUSEQUAL
Token value for "+=".
- token.MINEQUAL
Token value for "-=".
- token.STAREQUAL
Token value for "*=".
- token.SLASHEQUAL
Token value for "/=".
- token.PERCENTEQUAL
Token value for "%=".
- token.AMPEREQUAL
Token value for "&=".
- token.VBAREQUAL
Token value for "|=".
- token.CIRCUMFLEXEQUAL
Token value for "^=".
- token.LEFTSHIFTEQUAL
Token value for "<<=".
- token.RIGHTSHIFTEQUAL
Token value for ">>=".
- token.DOUBLESTAREQUAL
Token value for "**=".
- token.DOUBLESLASH
Token value for "//".
- token.DOUBLESLASHEQUAL
Token value for "//=".
- token.AT
Token value for "@".
- token.ATEQUAL
Token value for "@=".
- token.RARROW
Token value for "->".
- token.ELLIPSIS
Token value for "...".
- token.COLONEQUAL
Token value for ":=".
- token.EXCLAMATION
Token value for "!".
- token.OP
- token.TYPE_IGNORE
- token.TYPE_COMMENT
- token.SOFT_KEYWORD
- token.FSTRING_START
- token.FSTRING_MIDDLE
- token.FSTRING_END
- token.COMMENT
- token.NL
- token.ERRORTOKEN
- token.N_TOKENS
- token.NT_OFFSET
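The tokenize module reports operators and delimiters with the generic OP type and exposes the specific constant through the exact_type attribute of its TokenInfo tuples. As a small sketch:

    >>> import io, token, tokenize
    >>> plus = [tok for tok in tokenize.generate_tokens(io.StringIO("1 + 2\n").readline)
    ...         if tok.string == "+"][0]
    >>> plus.type == token.OP
    True
    >>> plus.exact_type == token.PLUS
    True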
The following token type values aren't used by the C tokenizer but are needed for the tokenize module.
- token.COMMENT
Token value used to indicate a comment.
- token.NL
Token value used to indicate a non-terminating newline. The NEWLINE token indicates the end of a logical line of Python code; NL tokens are generated when a logical line of code is continued over multiple physical lines (see the example after these entries).
- token.ENCODING
Token value that indicates the encoding used to decode the source bytes into text. The first token returned by tokenize.tokenize() will always be an ENCODING token.
- token.TYPE_COMMENT
Token value indicating that a type comment was recognized. Such tokens are only produced when ast.parse() is invoked with type_comments=True.
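As a sketch of how these tokenize-only values appear in practice, the session below tokenizes a comment-only line followed by an assignment (the exact stream shown reflects current CPython behaviour and may differ slightly between versions):

    >>> import io, token, tokenize
    >>> source = b"# a comment\nx = 1\n"
    >>> for tok in tokenize.tokenize(io.BytesIO(source).readline):
    ...     print(token.tok_name[tok.type], repr(tok.string))
    ENCODING 'utf-8'
    COMMENT '# a comment'
    NL '\n'
    NAME 'x'
    OP '='
    NUMBER '1'
    NEWLINE '\n'
    ENDMARKER ''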
- token.EXACT_TOKEN_TYPES
A dictionary mapping the string representation of a token to its numeric code.
Added in version 3.8.
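For example, the exact operator constants can be looked up by their string form:

    >>> import token
    >>> token.EXACT_TOKEN_TYPES["("] == token.LPAR
    True
    >>> token.EXACT_TOKEN_TYPES[":="] == token.COLONEQUAL
    True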
Changed in version 3.5: Added AWAIT and ASYNC tokens.
Changed in version 3.7: Removed AWAIT and ASYNC tokens. "async" and "await" are now tokenized as NAME tokens.
Changed in version 3.8: Added TYPE_COMMENT, TYPE_IGNORE, COLONEQUAL. Added AWAIT and ASYNC tokens back (they're needed to support parsing older Python versions for ast.parse() with feature_version set to 6 or lower).
Changed in version 3.13: Removed AWAIT and ASYNC tokens again.