2. Lexical analysis¶
A Python program is read by a parser. Input to the parser is a stream of tokens, generated by the lexical analyzer (also known as the tokenizer). This chapter describes how the lexical analyzer produces these tokens.
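For illustration only (a CPython convenience, not part of the language definition), the standard library's tokenize module exposes a token stream similar to the one described in this chapter:
import io
import tokenize

# Print the token name and text for a tiny piece of source code.
for tok in tokenize.generate_tokens(io.StringIO("x = 1 + 2\n").readline):
    print(tokenize.tok_name[tok.type], repr(tok.string))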
The lexical analyzer determines the program text's encoding
(UTF-8 by default), and decodes the text into
source characters.
If the text cannot be decoded, a SyntaxError is raised.
Next, the lexical analyzer uses the source characters to generate a stream of tokens. The type of a generated token generally depends on the next source character to be processed. Similarly, other special behavior of the analyzer depends on the first source character that hasn't yet been processed. The following table gives a quick summary of these source characters, with links to sections that contain more information.
| Character | Next token (or other relevant documentation) |
|---|---|
2.1. Line structure¶
A Python program is divided into a number of logical lines.
2.1.1. Logical lines¶
The end of a logical line is represented by the token NEWLINE.
Statements cannot cross logical line boundaries except where NEWLINE
is allowed by the syntax (e.g., between statements in compound statements).
A logical line is constructed from one or more physical lines by following
the explicit or implicit
line joining rules.
2.1.2. Physical lines¶
A physical line is a sequence of characters terminated by one of the following end-of-line sequences:
the Unix form using ASCII LF (linefeed),
the Windows form using the ASCII sequence CR LF (return followed by linefeed),
the 'Classic Mac OS' form using the ASCII CR (return) character.
Regardless of platform, each of these sequences is replaced by a single ASCII LF (linefeed) character. (This is done even inside string literals.) Each line can use any of the sequences; they do not need to be consistent within a file.
The end of input also serves as an implicit terminator for the final physical line.
Formally:
newline: <ASCII LF> | <ASCII CR> <ASCII LF> | <ASCII CR>
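As an illustrative check (assuming the standard compile() built-in), both line-ending conventions yield the same string value, even inside a triple-quoted literal:
# CR LF in the source text is normalized to LF, including inside the string literal.
src_unix = 's = """a\nb"""\n'
src_windows = 's = """a\r\nb"""\r\n'
ns_unix, ns_windows = {}, {}
exec(compile(src_unix, "<unix>", "exec"), ns_unix)
exec(compile(src_windows, "<windows>", "exec"), ns_windows)
print(ns_unix["s"] == ns_windows["s"] == "a\nb")   # True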
2.1.4. Encoding declarations¶
If a comment in the first or second line of the Python script matches the regular expression coding[=:]\s*([-\w.]+), this comment is processed as an encoding declaration; the first group of this expression names the encoding of the source code file. The encoding declaration must appear on a line of its own and, if it is on the second line, the first line must also be a comment-only line. The recommended forms of an encoding expression are:
# -*- coding: <encoding-name> -*-
which is recognized also by GNU Emacs, and
# vim:fileencoding=<encoding-name>
which is recognized by Bram Moolenaar's VIM.
If no encoding declaration is found, the default encoding is UTF-8. If the
implicit or explicit encoding of a file is UTF-8, an initial UTF-8 byte-order
mark (b'\xef\xbb\xbf') is ignored rather than being a syntax error.
If an encoding is declared, the encoding name must be recognized by Python (see Standard Encodings). The encoding is used for all lexical analysis, including string literals, comments and identifiers.
All lexical analysis, including string literals, comments and identifiers, works on Unicode text decoded using the source encoding. Any Unicode code point, except the NUL control character, can appear in Python source.
source_character: <any Unicode code point, except NUL>
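As an illustrative sketch, compile() accepts byte strings and honours an encoding declaration when decoding them (the file name and variable below are arbitrary):
# 0xE9 decodes to 'é' under the declared Latin-1 encoding.
source_bytes = b"# -*- coding: latin-1 -*-\ns = '\xe9'\n"
namespace = {}
exec(compile(source_bytes, "<latin-1 example>", "exec"), namespace)
print(namespace["s"])   # é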
2.1.5. Explicit line joining¶
Two or more physical lines may be joined into logical lines using backslash characters (\), as follows: when a physical line ends in a backslash that is not part of a string literal or comment, it is joined with the following line to form a single logical line, deleting the backslash and the following end-of-line character. For example:
if 1900 < year < 2100 and 1 <= month <= 12 \
   and 1 <= day <= 31 and 0 <= hour < 24 \
   and 0 <= minute < 60 and 0 <= second < 60:   # Looks like a valid date
        return 1
A line ending in a backslash cannot carry a comment. A backslash does not continue a comment. A backslash does not continue a token except for string literals (i.e., tokens other than string literals cannot be split across physical lines using a backslash). A backslash is illegal elsewhere on a line outside a string literal.
2.1.6. Implicit line joining¶
Expressions in parentheses, square brackets or curly braces can be split over more than one physical line without using backslashes. For example:
month_names = ['Januari', 'Februari', 'Maart',      # These are the
               'April',   'Mei',      'Juni',       # Dutch names
               'Juli',    'Augustus', 'September',  # for the months
               'Oktober', 'November', 'December']   # of the year
Implicitly continued lines can carry comments. The indentation of the continuation lines is not important. Blank continuation lines are allowed. There is no NEWLINE token between implicit continuation lines. Implicitly continued lines can also occur within triple-quoted strings (see below); in that case they cannot carry comments.
2.1.7. Blank lines¶
A logical line that contains only spaces, tabs, formfeeds and possibly a
comment, is ignored (i.e., no NEWLINE token is generated).
During interactive input of statements, handling of a blank line may differ
depending on the implementation of the read-eval-print loop.
In the standard interactive interpreter, an entirely blank logical line (that
is, one containing not even whitespace or a comment) terminates a multi-line
statement.
2.1.8. Indentation¶
Leading whitespace (spaces and tabs) at the beginning of a logical line is used to compute the indentation level of the line, which in turn is used to determine the grouping of statements.
Tabs are replaced (from left to right) by one to eight spaces such that the total number of characters up to and including the replacement is a multiple of eight (this is intended to be the same rule as used by Unix). The total number of spaces preceding the first non-blank character then determines the line's indentation. Indentation cannot be split over multiple physical lines using backslashes; the whitespace up to the first backslash determines the indentation.
Indentation is rejected as inconsistent if a source file mixes tabs and spaces in a way that makes the meaning dependent on the worth of a tab in spaces; a TabError is raised in that case.
Cross-platform compatibility note: because of the nature of text editors on non-UNIX platforms, it is unwise to use a mixture of spaces and tabs for the indentation in a single source file. It should also be noted that different platforms may explicitly limit the maximum indentation level.
A formfeed character may be present at the start of the line; it is ignored for the indentation calculations above. Formfeed characters occurring elsewhere in the leading whitespace have an undefined effect (for instance, they may reset the space count to zero).
The indentation levels of consecutive lines are used to generate
INDENT and DEDENT tokens, using a stack,
as follows.
Before the first line of the file is read, a single zero is pushed on the stack;
this will never be popped off again. The numbers pushed on the stack will
always be strictly increasing from bottom to top. At the beginning of each
logical line, the line's indentation level is compared to the top of the stack.
If it is equal, nothing happens. If it is larger, it is pushed on the stack, and
one INDENT token is generated. If it is smaller, it must be one of the
numbers occurring on the stack; all numbers on the stack that are larger are
popped off, and for each number popped off a DEDENT token is generated.
At the end of the file, a DEDENT token is generated for each number
remaining on the stack that is larger than zero.
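The following sketch (an illustration, not CPython's actual tokenizer) applies this algorithm to a sequence of per-line indentation levels:
def indent_tokens(indent_levels):
    stack = [0]                      # a single zero is pushed before the first line
    for level in indent_levels:
        if level > stack[-1]:
            stack.append(level)
            yield "INDENT"
        else:
            while stack[-1] > level:
                stack.pop()
                yield "DEDENT"
            if stack[-1] != level:
                raise IndentationError("unindent does not match any outer level")
    while stack[-1] > 0:             # at the end of the file
        stack.pop()
        yield "DEDENT"

print(list(indent_tokens([0, 4, 4, 8, 0])))   # ['INDENT', 'INDENT', 'DEDENT', 'DEDENT']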
Here is an example of a correctly (though confusingly) indented piece of Python code:
def perm(l):
        # Compute the list of all permutations of l
    if len(l) <= 1:
                  return [l]
    r = []
    for i in range(len(l)):
             s = l[:i] + l[i+1:]
             p = perm(s)
             for x in p:
                  r.append(l[i:i+1] + x)
    return r
The following example shows various indentation errors:
 def perm(l):                       # error: first line indented
for i in range(len(l)):             # error: not indented
    s = l[:i] + l[i+1:]
        p = perm(l[:i] + l[i+1:])   # error: unexpected indent
        for x in p:
                r.append(l[i:i+1] + x)
    return r                # error: inconsistent dedent
(Actually, the first three errors are detected by the parser; only the last error is found by the lexical analyzer: the indentation of return r does not match a level popped off the stack.)
2.1.9. Whitespace between tokens¶
Except at the beginning of a logical line or in string literals, the whitespace characters space, tab and formfeed can be used interchangeably to separate tokens:
whitespace: ' ' | tab | formfeed
Whitespace is needed between two tokens only if their concatenation
could otherwise be interpreted as a different token. For example, ab is one
token, but a b is two tokens. However, +a and + a both produce
two tokens, + and a, as +a is not a valid token.
2.1.10. End marker¶
At the end of non-interactive input, the lexical analyzer generates an
ENDMARKER token.
2.2. Other tokens¶
Besides NEWLINE, INDENT and DEDENT,
the following categories of tokens exist:
identifiers and keywords (NAME), literals (such as
NUMBER and STRING), and other symbols
(operators and delimiters, OP).
Whitespace characters (other than logical line terminators, discussed earlier)
are not tokens, but serve to delimit tokens.
Where ambiguity exists, a token comprises the longest possible string that
forms a legal token, when read from left to right.
2.3. Names (identifiers and keywords)¶
NAME tokens represent identifiers, keywords, and
soft keywords.
Names are composed of the following characters:
uppercase and lowercase letters (A-Z and a-z),
the underscore (_),
digits (0 through 9), which cannot appear as the first character, and
non-ASCII characters. Valid names may only contain "letter-like" and "digit-like" characters; see Non-ASCII characters in names for details.
Names must contain at least one character, but have no upper length limit. Case is significant.
Formally, names are described by the following lexical definitions:
NAME: name_start name_continue*
name_start: "a"..."z" | "A"..."Z" | "_" | <non-ASCII character>
name_continue: name_start | "0"..."9"
identifier: <NAME, except keywords>
Note that not all names matched by this grammar are valid; see Non-ASCII characters in names for details.
2.3.1. Keywords¶
The following names are used as reserved words, or keywords of the language, and cannot be used as ordinary identifiers. They must be spelled exactly as written here:
False await else import pass
None break except in raise
True class finally is return
and continue for lambda try
as def from nonlocal while
assert del global not with
async elif if or yield
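For illustration, using a keyword as an ordinary name is rejected when the source is compiled (a minimal check, not taken from the reference itself):
try:
    compile("class = 'oops'", "<keyword example>", "exec")
except SyntaxError as exc:
    print("rejected:", exc.msg)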
2.3.2. Soft keywords¶
Added in version 3.10.
Some names are only reserved under specific contexts. These are known as soft keywords:
match, case, and _, when used in the match statement.
type, when used in the type statement.
These syntactically act as keywords in their specific contexts, but this distinction is done at the parser level, not when tokenizing.
As soft keywords, their use in the grammar is possible while still preserving compatibility with existing code that uses these names as identifier names.
Changed in version 3.12: type is now a soft keyword.
2.3.3. Reserved classes of identifiers¶
Certain classes of identifiers (besides keywords) have special meanings. These classes are identified by the patterns of leading and trailing underscore characters:

_*
Not imported by from module import *.

_
In a case pattern within a match statement, _ is a soft keyword that denotes a wildcard.
Separately, the interactive interpreter makes the result of the last evaluation available in the variable _. (It is stored in the builtins module, alongside built-in functions like print.)
Elsewhere, _ is a regular identifier. It is often used to name "special" items, but it is not special to Python itself.
Note
The name _ is often used in conjunction with internationalization; refer to the documentation for the gettext module for more information on this convention.
It is also commonly used for unused variables.

__*__
System-defined names, informally known as "dunder" names. These names are defined by the interpreter and its implementation (including the standard library). Current system names are discussed in the Special method names section and elsewhere. More will likely be defined in future versions of Python. Any use of __*__ names, in any context, that does not follow explicitly documented use, is subject to breakage without warning.

__*
Class-private names. Names in this category, when used within the context of a class definition, are re-written to use a mangled form to help avoid name clashes between "private" attributes of base and derived classes. See section Identifiers (Names).
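As an illustration of the class-private name mangling described above (class and attribute names here are arbitrary):
class Base:
    def __init__(self):
        self.__token = "base"        # stored as _Base__token

class Derived(Base):
    def __init__(self):
        super().__init__()
        self.__token = "derived"     # stored as _Derived__token; no clash with Base

d = Derived()
print(d._Base__token, d._Derived__token)   # base derived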
2.3.4. Non-ASCII characters in names¶
Names that contain non-ASCII characters need additional normalization
and validation beyond the rules and grammar explained
above.
For example, ř_1, 蛇, or साँप are valid names, but r〰2,
€, or 🐍 are not.
This section explains the exact rules.
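These rules are also what str.isidentifier() checks at run time (note that keywords pass this check as well; keyword.iskeyword() can be used to detect them). An illustrative check:
print("ř_1".isidentifier())    # True
print("साँप".isidentifier())    # True
print("🐍".isidentifier())     # False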
All names are converted into the normalization form NFKC while parsing.
This means that, for example, some typographic variants of characters are
converted to their "basic" form. For example, fiⁿₐˡᵢᶻₐᵗᵢᵒₙ normalizes to
finalization, so Python treats them as the same name:
>>> fiⁿₐˡᵢᶻₐᵗᵢᵒₙ = 3
>>> finalization
3
Note
Normalization is done at the lexical level only.
Run-time functions that take names as strings generally do not normalize
their arguments.
For example, the variable defined above is accessible at run time in the
globals() dictionary as globals()["finalization"] but not
globals()["fiⁿₐˡᵢᶻₐᵗᵢᵒₙ"].
Similarly to how ASCII-only names must contain only letters, digits and
the underscore, and cannot start with a digit, a valid name must
start with a character in the "letter-like" set xid_start,
and the remaining characters must be in the "letter- and digit-like" set
xid_continue.
These sets are based on the XID_Start and XID_Continue sets as defined by the
Unicode standard annex UAX-31.
Python's xid_start additionally includes the underscore (_).
Note that Python does not necessarily conform to UAX-31.
A non-normative listing of characters in the XID_Start and XID_Continue
sets as defined by Unicode is available in the DerivedCoreProperties.txt
file in the Unicode Character Database.
For reference, the construction rules for the xid_* sets are given below.
The set id_start is defined as the union of:
Unicode category <Lu> - uppercase letters (includes A to Z)
Unicode category <Ll> - lowercase letters (includes a to z)
Unicode category <Lt> - titlecase letters
Unicode category <Lm> - modifier letters
Unicode category <Lo> - other letters
Unicode category <Nl> - letter numbers
{"_"} - the underscore
<Other_ID_Start> - an explicit set of characters in PropList.txt to support backwards compatibility
The set xid_start then closes this set under NFKC normalization, by
removing all characters whose normalization is not of the form
id_start id_continue*.
The set id_continue is defined as the union of:
id_start (see above)
Unicode category <Nd> - decimal numbers (includes 0 to 9)
Unicode category <Pc> - connector punctuations
Unicode category <Mn> - nonspacing marks
Unicode category <Mc> - spacing combining marks
<Other_ID_Continue> - another explicit set of characters in PropList.txt to support backwards compatibility
Again, xid_continue closes this set under NFKC normalization.
Unicode categories use the version of the Unicode Character Database as
included in the unicodedata module.
2.4. Literals¶
Literals are notations for constant values of some built-in types.
In terms of lexical analysis, Python has string, bytes and numeric literals.
Other "literals" are lexically denoted using keywords
(None, True, False) and the special
ellipsis token (...).
2.5. String and bytes literals¶
String literals are text enclosed in single quotes (') or double
quotes ("). For example:
"spam"
'eggs'
The quote used to start the literal also terminates it, so a string literal can only contain the other quote (except with escape sequences, see below). For example:
'Say "Hello", please.'
"Don't do that!"
Except for this limitation, the choice of quote character (' or ")
does not affect how the literal is parsed.
Inside a string literal, the backslash (\) character introduces an
escape sequence, which has special meaning depending on the character
after the backslash.
For example, \" denotes the double quote character, and does not end
the string:
>>> print("Say \"Hello\" to everyone!")
Say "Hello" to everyone!
See escape sequences below for a full list of such sequences, and more details.
2.5.1. Triple-quoted strings¶
Strings can also be enclosed in matching groups of three single or double quotes. These are generally referred to as triple-quoted strings:
"""This is a triple-quoted string."""
In triple-quoted literals, unescaped quotes are allowed (and are
retained), except that three unescaped quotes in a row terminate the literal,
if they are of the same kind (' or ") used at the start:
"""This string has "quotes" inside."""
Unescaped newlines are also allowed and retained:
'''This triple-quoted string
continues on the next line.'''
2.5.2. String prefixes¶
String literals can have an optional prefix that influences how the content of the literal is parsed, for example:
b"data"
f'{result=}'
The allowed prefixes are:
r: Raw string
f: Formatted string literal ("f-string")
t: Template string literal ("t-string")
u: No effect (allowed for backwards compatibility)
See the linked sections for details on each type.
Prefixes are case-insensitive (for example, 'B' works the same as 'b').
The 'r' prefix can be combined with 'f', 't' or 'b', so 'fr',
'rf', 'tr', 'rt', 'br', and 'rb' are also valid prefixes.
Added in version 3.3: The 'rb' prefix was added as a synonym of 'br' for bytes literals.
Support for the legacy unicode literal prefix (u'string') was reintroduced to simplify the maintenance of code that is compatible with both Python 2.x and 3.x. See PEP 414 for more information.
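As an illustrative check, prefix letters are case-insensitive and combined prefixes may be written in either order:
print(rb"\n" == Rb"\n" == bR"\n" == b"\\n")   # True: all denote two bytes, a backslash and 'n'
print(F"{1 + 1}" == f"{1 + 1}")               # True: 'F' behaves like 'f'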
2.5.3. Formal grammar¶
String literals, except "f-strings" and "t-strings", are described by the following lexical definitions.
These definitions use negative lookaheads (!)
to indicate that an ending quote ends the literal.
STRING: [stringprefix] stringcontent
stringprefix: <("r" | "u" | "b" | "br" | "rb"), case-insensitive>
stringcontent:
   | "'''" ( !"'''" longstringitem)* "'''"
   | '"""' ( !'"""' longstringitem)* '"""'
   | "'" ( !"'" stringitem)* "'"
   | '"' ( !'"' stringitem)* '"'
stringitem: stringchar | stringescapeseq
stringchar: <any source_character, except backslash and newline>
longstringitem: stringitem | newline
stringescapeseq: "\" <any source_character>
Note that as in all lexical definitions, whitespace is significant. In particular, the prefix (if any) must be immediately followed by the starting quote.
2.5.4. Escape sequences¶
Unless an 'r' or 'R' prefix is present, escape sequences in string and
bytes literals are interpreted according to rules similar to those used by
Standard C. The recognized escape sequences are:
| Escape sequence | Meaning |
|---|---|
| \<newline> | Ignored end of line |
| \\ | Backslash (\) |
| \' | Single quote (') |
| \" | Double quote (") |
| \a | ASCII Bell (BEL) |
| \b | ASCII Backspace (BS) |
| \f | ASCII Formfeed (FF) |
| \n | ASCII Linefeed (LF) |
| \r | ASCII Carriage Return (CR) |
| \t | ASCII Horizontal Tab (TAB) |
| \v | ASCII Vertical Tab (VT) |
| \ooo | Octal character |
| \xhh | Hexadecimal character |
| \N{name} | Named Unicode character |
| \uxxxx, \Uxxxxxxxx | Hexadecimal Unicode characters |
2.5.4.1. Ignored end of line¶
A backslash can be added at the end of a line to ignore the newline:
>>> 'This string will not include \
... backslashes or newline characters.'
'This string will not include backslashes or newline characters.'
The same result can be achieved using triple-quoted strings, or parentheses and string literal concatenation.
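For example, using parentheses and implicit string literal concatenation (an equivalent, illustrative form):
>>> ('This string will not include '
...  'backslashes or newline characters.')
'This string will not include backslashes or newline characters.'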
2.5.4.2. Escaped characters¶
To include a backslash in a non-raw Python string
literal, it must be doubled. The \\ escape sequence denotes a single
backslash character:
>>> print('C:\\Program Files')
C:\Program Files
Similarly, the \' and \" sequences denote the single and double
quote character, respectively:
>>> print('\' and \"')
' and "
2.5.4.3. Octal character¶
The sequence \ooo denotes a character with the octal (base 8)
value ooo:
>>> '\120'
'P'
Up to three octal digits (0 through 7) are accepted.
In a bytes literal, character means a byte with the given value. In a string literal, it means a Unicode character with the given value.
Changed in version 3.11: Octal escapes with value larger than 0o377 (255) produce a
DeprecationWarning.
Changed in version 3.12: Octal escapes with value larger than 0o377 (255) produce a
SyntaxWarning.
In a future Python version they will raise a SyntaxError.
2.5.4.4. Hexadecimal character¶
The sequence \xhh denotes a character with the hex (base 16)
value hh:
>>> '\x50'
'P'
Unlike in Standard C, exactly two hex digits are required.
In a bytes literal, character means a byte with the given value. In a string literal, it means a Unicode character with the given value.
2.5.4.5. Named Unicode character¶
The sequence \N{name} denotes a Unicode character
with the given name:
>>> '\N{LATIN CAPITAL LETTER P}'
'P'
>>> '\N{SNAKE}'
'🐍'
This sequence cannot appear in bytes literals.
Changed in version 3.3: Support for name aliases has been added.
2.5.4.6. Hexadecimal Unicode characters¶
These sequences \uxxxx and \Uxxxxxxxx denote the
Unicode character with the given hex (base 16) value.
Exactly four digits are required for \u; exactly eight digits are
required for \U.
The latter can encode any Unicode character.
>>> '\u1234'
'ሴ'
>>> '\U0001f40d'
'🐍'
These sequences cannot appear in bytes literals.
2.5.4.7. Unrecognized escape sequences¶
Unlike in Standard C, all unrecognized escape sequences are left in the string unchanged, that is, the backslash is left in the result:
>>> print('\q')
\q
>>> list('\q')
['\\', 'q']
Note that for bytes literals, the escape sequences only recognized in string
literals (\N..., \u..., \U...) fall into the category of
unrecognized escapes.
Changed in version 3.6: Unrecognized escape sequences produce a DeprecationWarning.
Changed in version 3.12: Unrecognized escape sequences produce a SyntaxWarning.
In a future Python version they will raise a SyntaxError.
2.5.5. Bytes literals¶
Bytes literals are always prefixed with 'b' or 'B'; they produce an
instance of the bytes type instead of the str type.
They may only contain ASCII characters; bytes with a numeric value of 128
or greater must be expressed with escape sequences (typically
Hexadecimal character or Octal character):
>>> b'\x89PNG\r\n\x1a\n'
b'\x89PNG\r\n\x1a\n'
>>> list(b'\x89PNG\r\n\x1a\n')
[137, 80, 78, 71, 13, 10, 26, 10]
Similarly, a zero byte must be expressed using an escape sequence (typically
\0 or \x00).
2.5.6. Raw string literals¶
Both string and bytes literals may optionally be prefixed with a letter 'r'
or 'R'; such constructs are called raw string literals
and raw bytes literals respectively and treat backslashes as
literal characters.
As a result, in raw string literals, escape sequences
are not treated specially:
>>> r'\d{4}-\d{2}-\d{2}'
'\\d{4}-\\d{2}-\\d{2}'
Even in a raw literal, quotes can be escaped with a backslash, but the backslash remains in the result; for example, r"\"" is a valid string literal consisting of two characters: a backslash and a double quote; r"\" is not a valid string literal (even a raw string cannot end in an odd number of backslashes). Specifically, a raw literal cannot end in a single backslash (since the backslash would escape the following quote character). Note also that a single backslash followed by a newline is interpreted as those two characters as part of the literal, not as a line continuation.
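For example (an illustrative session):
>>> r"\""       # two characters: a backslash and a double quote
'\\"'
>>> len(r"\"")
2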
2.5.7. f-strings¶
Added in version 3.6.
Changed in version 3.8: Added the debug specifier (=).
Changed in version 3.12: Many restrictions on expressions within f-strings have been removed. Notably, nested strings, comments, and backslashes are now permitted.
A formatted string literal or f-string is a string literal
that is prefixed with 'f' or 'F'.
Unlike other string literals, f-strings do not have a constant value.
They may contain replacement fields delimited by curly braces {}.
Replacement fields contain expressions which are evaluated at run time.
For example:
>>> who = 'nobody'
>>> nationality = 'Spanish'
>>> f'{who.title()} expects the {nationality} Inquisition!'
'Nobody expects the Spanish Inquisition!'
Any doubled curly braces ({{ or }}) outside replacement fields
are replaced with the corresponding single curly brace:
>>> print(f'{{...}}')
{...}
Other characters outside replacement fields are treated like in ordinary string literals. This means that escape sequences are decoded (except when a literal is also marked as a raw string), and newlines are possible in triple-quoted f-strings:
>>> name = 'Galahad'
>>> favorite_color = 'blue'
>>> print(f'{name}:\t{favorite_color}')
Galahad: blue
>>> print(rf"C:\Users\{name}")
C:\Users\Galahad
>>> print(f'''Three shall be the number of the counting
... and the number of the counting shall be three.''')
Three shall be the number of the counting
and the number of the counting shall be three.
Expressions in formatted string literals are treated like regular
Python expressions.
Each expression is evaluated in the context where the formatted string literal
appears, in order from left to right.
An empty expression is not allowed, and both lambda and
assignment expressions := must be surrounded by explicit parentheses:
>>> f'{(half := 1/2)}, {half * 42}'
'0.5, 21.0'
Reusing the outer f-string quoting type inside a replacement field is permitted:
>>> a = dict(x=2)
>>> f"abc {a["x"]} def"
'abc 2 def'
Backslashes are also allowed in replacement fields and are evaluated the same way as in any other context:
>>> a = ["a", "b", "c"]
>>> print(f"List a contains:\n{"\n".join(a)}")
List a contains:
a
b
c
It is possible to nest f-strings:
>>> name = 'world'
>>> f'Repeated:{f' hello {name}' * 3}'
'Repeated: hello world hello world hello world'
Portable Python programs should not use more than 5 levels of nesting.
CPython implementation detail: CPython does not limit the nesting of f-strings.
Replacement expressions can contain newlines in both single-quoted and
triple-quoted f-strings and they can contain comments.
Everything that comes after a # inside a replacement field
is a comment (even closing braces and quotes).
This means that replacement fields with comments must be closed in a
different line:
>>> a = 2
>>> f"abc{a # This comment }" continues until the end of the line
... + 3}"
'abc5'
After the expression, replacement fields may optionally contain:
a debug specifier -- an equal sign (=), optionally surrounded by whitespace on one or both sides;
a conversion specifier -- !s, !r or !a; and/or
a format specifier prefixed with a colon (:).
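For example, in an illustrative session, the debug specifier keeps the expression text and whitespace and formats the value with repr(), the !a conversion applies ascii(), and a format specifier is passed to format():
>>> value = 'café'
>>> f'{value = }'
"value = 'café'"
>>> f'{value!a}'
"'caf\\xe9'"
>>> f'{value:>10}'
'      café'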
See the Standard Library section on f-strings for details on how these fields are evaluated.
As that section explains, format specifiers are passed as the second argument
to the format() function to format a replacement field value.
For example, they can be used to specify a field width and padding characters
using the Format Specification Mini-Language:
>>> number = 14.3
>>> f'{number:20.7f}'
' 14.3000000'
Top-level format specifiers may include nested replacement fields:
>>> field_size = 20
>>> precision = 7
>>> f'{number:{field_size}.{precision}f}'
' 14.3000000'
These nested fields may include their own conversion fields and format specifiers:
>>> number = 3
>>> f'{number:{field_size}}'
' 3'
>>> f'{number:{field_size:05}}'
'00000000000000000003'
However, these nested fields may not include more deeply nested replacement fields.
Formatted string literals cannot be used as docstrings, even if they do not include expressions:
>>> def foo():
... f"Not a docstring"
...
>>> print(foo.__doc__)
None
See also
PEP 498 -- Literal String Interpolation
PEP 701 -- Syntactic formalization of f-strings
str.format(), which uses a related format string mechanism.
2.5.8. t-strings¶
Added in version 3.14.
A template string literal or t-string is a string literal
that is prefixed with 't' or 'T'.
These strings follow the same syntax rules as
formatted string literals.
For differences in evaluation rules, see the
Standard Library section on t-strings.
2.5.9. Formal grammar for f-strings¶
F-strings are handled partly by the lexical analyzer, which produces the
tokens FSTRING_START, FSTRING_MIDDLE
and FSTRING_END, and partly by the parser, which handles
expressions in the replacement field.
The exact way the work is split is a CPython implementation detail.
Correspondingly, the f-string grammar is a mix of lexical and syntactic definitions.
Whitespace is significant in these situations:
There may be no whitespace in FSTRING_START (between the prefix and the quote).
Whitespace in FSTRING_MIDDLE is part of the literal string contents.
In fstring_replacement_field, if f_debug_specifier is present, all whitespace after the opening brace until the f_debug_specifier, as well as whitespace immediately following f_debug_specifier, is retained as part of the expression.
CPython implementation detail: The expression is not handled in the tokenization phase; it is retrieved from the source code using the locations of the { token and the token after =.
The FSTRING_MIDDLE definition uses
negative lookaheads (!)
to indicate special characters (backslash, newline, {, }) and
sequences (f_quote).
fstring: FSTRING_START fstring_middle* FSTRING_END
FSTRING_START: fstringprefix ("'" | '"' | "'''" | '"""')
FSTRING_END: f_quote
fstringprefix: <("f" | "fr" | "rf"), case-insensitive>
f_debug_specifier: '='
f_quote: <the quote character(s) used in FSTRING_START>
fstring_middle:
   | fstring_replacement_field
   | FSTRING_MIDDLE
FSTRING_MIDDLE:
   | (!"\" !newline !'{' !'}' !f_quote) source_character
   | stringescapeseq
   | "{{"
   | "}}"
   | <newline, in triple-quoted f-strings only>
fstring_replacement_field:
   | '{' f_expression [f_debug_specifier] [fstring_conversion] [fstring_full_format_spec] '}'
fstring_conversion:
   | "!" ("s" | "r" | "a")
fstring_full_format_spec:
   | ':' fstring_format_spec*
fstring_format_spec:
   | FSTRING_MIDDLE
   | fstring_replacement_field
f_expression:
   | ','.(conditional_expression | "*" or_expr)+ [","]
   | yield_expression
Note
In the above grammar snippet, the f_quote and FSTRING_MIDDLE rules
are context-sensitive -- they depend on the contents of FSTRING_START
of the nearest enclosing fstring.
Constructing a more traditional formal grammar from this template is left as an exercise for the reader.
The grammar for t-strings is identical to the one for f-strings, with t instead of f at the beginning of rule and token names and in the prefix.
tstring: TSTRING_START tstring_middle* TSTRING_END
<rest of the t-string grammar is omitted; see above>
2.6. Numeric literals¶
NUMBER tokens represent numeric literals, of which there are
three types: integers, floating-point numbers, and imaginary numbers.
NUMBER: integer | floatnumber | imagnumber
The numeric value of a numeric literal is the same as if it were passed as a
string to the int, float or complex class
constructor, respectively.
Note that not all valid inputs for those constructors are also valid literals.
Numeric literals do not include a sign; a phrase like -1 is
actually an expression composed of the unary operator '-' and the literal
1.
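For example (illustrative), since the sign is a separate operator token, unary minus binds less tightly than exponentiation:
>>> -2 ** 2     # parsed as -(2 ** 2); the literal is 2, the minus sign is an operator
-4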
2.6.1. Integer literals¶
Integer literals denote whole numbers. For example:
7
3
2147483647
There is no limit for the length of integer literals apart from what can be stored in available memory:
7922816251426433759354395033679228162514264337593543950336
Underscores can be used to group digits for enhanced readability, and are ignored for determining the numeric value of the literal. For example, the following literals are equivalent:
100_000_000_000
100000000000
1_00_00_00_00_000
Underscores can only occur between digits.
For example, _123, 321_, and 123__321 are not valid literals.
Integers can be specified in binary (base 2), octal (base 8), or hexadecimal
(base 16) using the prefixes 0b, 0o and 0x, respectively.
Hexadecimal digits 10 through 15 are represented by letters A-F,
case-insensitive. For example:
0b100110111
0b_1110_0101
0o177
0o377
0xdeadbeef
0xDead_Beef
An underscore can follow the base specifier.
For example, 0x_1f is a valid literal, but 0_x1f and 0x__1f are
not.
Leading zeros in a non-zero decimal number are not allowed.
For example, 0123 is not a valid literal.
This is for disambiguation with C-style octal literals, which Python used
before version 3.0.
Formally, integer literals are described by the following lexical definitions:
integer: decinteger | bininteger | octinteger | hexinteger | zerointeger
decinteger: nonzerodigit (["_"] digit)*
bininteger: "0" ("b" | "B") (["_"] bindigit)+
octinteger: "0" ("o" | "O") (["_"] octdigit)+
hexinteger: "0" ("x" | "X") (["_"] hexdigit)+
zerointeger: "0"+ (["_"] "0")*
nonzerodigit: "1"..."9"
digit: "0"..."9"
bindigit: "0" | "1"
octdigit: "0"..."7"
hexdigit: digit | "a"..."f" | "A"..."F"
Changed in version 3.6: Underscores are now allowed for grouping purposes in literals.
2.6.2. Floating-point literals¶
Floating-point (float) literals, such as 3.14 or 1.5, denote
approximations of real numbers.
They consist of integer and fraction parts, each composed of decimal digits.
The parts are separated by a decimal point, .:
2.71828
4.0
Unlike in integer literals, leading zeros are allowed.
For example, 077.010 is legal, and denotes the same number as 77.01.
As in integer literals, single underscores may occur between digits to help readability:
96_485.332_123
3.14_15_93
Either of these parts, but not both, can be empty. For example:
10. # (equivalent to 10.0)
.001 # (equivalent to 0.001)
Optionally, the integer and fraction may be followed by an exponent:
the letter e or E, followed by an optional sign, + or -,
and a number in the same format as the integer and fraction parts.
The e or E represents "times ten raised to the power of":
1.0e3 # (represents 1.0×10³, or 1000.0)
1.166e-5 # (represents 1.166×10⁻⁵, or 0.00001166)
6.02214076e+23 # (represents 6.02214076×10²³, or 602214076000000000000000.)
In floats with only integer and exponent parts, the decimal point may be omitted:
1e3 # (equivalent to 1.e3 and 1.0e3)
0e0 # (equivalent to 0.)
Formally, floating-point literals are described by the following lexical definitions:
floatnumber:
   | digitpart "." [digitpart] [exponent]
   | "." digitpart [exponent]
   | digitpart exponent
digitpart: digit (["_"] digit)*
exponent: ("e" | "E") ["+" | "-"] digitpart
Changed in version 3.6: Underscores are now allowed for grouping purposes in literals.
2.6.3. Imaginary literals¶
Python has complex number objects, but no complex literals. Instead, imaginary literals denote complex numbers with a zero real part.
For example, in math, the complex number 3+4.2i is written
as the real number 3 added to the imaginary number 4.2i.
Python uses a similar syntax, except the imaginary unit is written as j
rather than i:
3+4.2j
This is an expression composed
of the integer literal 3,
the operator '+',
and the imaginary literal 4.2j.
Since these are three separate tokens, whitespace is allowed between them:
3 + 4.2j
No whitespace is allowed within each token.
In particular, the j suffix may not be separated from the number
before it.
The number before the j has the same syntax as a floating-point literal.
Thus, the following are valid imaginary literals:
4.2j
3.14j
10.j
.001j
1e100j
3.14e-10j
3.14_15_93j
Unlike in a floating-point literal, the decimal point can be omitted if the imaginary number only has an integer part. The number is still evaluated as a floating-point number, not an integer:
10j
0j
1000000000000000000000000j # equivalent to 1e+24j
The j suffix is case-insensitive.
That means you can use J instead:
3.14J # equivalent to 3.14j
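As an illustration, an imaginary literal evaluates to a complex object whose real part is zero:
>>> z = 3.14J
>>> type(z), z.real, z.imag
(<class 'complex'>, 0.0, 3.14)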
Formally, imaginary literals are described by the following lexical definition:
imagnumber: (floatnumber | digitpart) ("j" | "J")
2.7. Operators and delimiters¶
The following grammar defines operator and delimiter tokens,
that is, the generic OP token type.
A list of these tokens and their names
is also available in the token module documentation.
OP:
   | assignment_operator
   | bitwise_operator
   | comparison_operator
   | enclosing_delimiter
   | other_delimiter
   | arithmetic_operator
   | "..."
   | other_op
assignment_operator:
   | "+=" | "-=" | "*=" | "**=" | "/=" | "//=" | "%=" | "&=" | "|=" | "^=" | "<<=" | ">>=" | "@=" | ":="
bitwise_operator: "&" | "|" | "^" | "~" | "<<" | ">>"
comparison_operator: "<=" | ">=" | "<" | ">" | "==" | "!="
enclosing_delimiter: "(" | ")" | "[" | "]" | "{" | "}"
other_delimiter: "," | ":" | "!" | ";" | "=" | "->"
arithmetic_operator: "+" | "-" | "**" | "*" | "//" | "/" | "%"
other_op: "." | "@"
Note
Generally, operators are used to combine expressions, while delimiters serve other purposes. However, there is no clear, formal distinction between the two categories.
Some tokens can serve as either operators or delimiters, depending on usage.
For example, * is both the multiplication operator and a delimiter used
for sequence unpacking, and @ is both the matrix multiplication and
a delimiter that introduces decorators.
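A small illustration of the same token playing both roles (the names here are arbitrary):
product = 3 * 7              # '*' as the multiplication operator
first, *rest = [1, 2, 3]     # '*' as a delimiter: sequence unpacking
print(product, first, rest)  # 21 1 [2, 3]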
For some tokens, the distinction is unclear.
For example, some people consider ., (, and ) to be delimiters, while others
see the getattr() operator and the function call operator(s).
Some of Python's operators, like and, or, and not in, use
keyword tokens rather than "symbols" (operator tokens).
A sequence of three consecutive periods (...) has a special
meaning as an Ellipsis literal.
2.1.3. Comments¶
A comment starts with a hash character (#) that is not part of a string literal, and ends at the end of the physical line. A comment signifies the end of the logical line unless the implicit line joining rules are invoked. Comments are ignored by the syntax; they are not tokens.