What's New in Python 2.0
- Authors
A.M. Kuchling and Moshe Zadka
Introduction
A new release of Python, version 2.0, was released on October 16, 2000. This article covers the exciting new features in this release, highlights some other useful changes, and points out a few incompatible changes that may require rewriting code.
Python's development never completely stops between releases, and a steady flow of bug fixes and improvements is always being submitted. A host of minor fixes, a few optimizations, additional docstrings, and better error messages went into 2.0; listing them all would be impossible, but they are certainly significant. Consult the publicly available CVS logs if you want to see the full list. This progress is due both to the five developers working for PythonLabs, who are now paid to spend their days fixing bugs, and to the improved communication resulting from the move to SourceForge.
What About Python 1.6?
Python 1.6 can be thought of as the Contractual Obligations release. After the development team left CNRI in May 2000, CNRI requested that a 1.6 release be created, containing all the work on Python that had been done at CNRI. Python 1.6 therefore represents the state of the CVS tree as of May 2000, with the most significant new feature being Unicode support. Development continued after May, of course, so the 1.6 branch received a few fixes to ensure that it is compatible with Python 2.0. 1.6 is therefore part of Python's evolution, and not a separate branch.
So, should you take much interest in Python 1.6? Probably not. The 1.6final and 2.0beta1 releases were made on the same day (September 5, 2000), the plan being to finalize Python 2.0 about a month later. If you have applications to maintain, there seems little point in breaking things by moving to 1.6, fixing them, and then having things break again barely a month later by moving to 2.0; you're better off going straight to 2.0. Most of the really interesting features described in this document are only in 2.0, because a lot of work was done between May and September.
New Development Process
The most important change in Python 2.0 may not be to the code at all, but to how Python is developed: in May 2000 the Python developers began using the tools made available by SourceForge for storing source code, tracking bug reports, and managing the queue of patch submissions. To report bugs or submit patches for Python 2.0, use the bug tracking and patch manager tools available from Python's project page, located at https://sourceforge.net/projects/python/.
The most important of the services now hosted at SourceForge is the Python CVS tree, the version-controlled repository containing the source code for Python. Previously, roughly 7 people had write access to the CVS tree, and all patches had to be inspected and checked in by one of the people on this short list. Obviously, this wasn't very scalable. By moving the CVS tree to SourceForge, it became possible to grant write access to more people; as of September 2000 there were 27 people able to check in changes, a fourfold increase. This makes possible large-scale changes that wouldn't be attempted if they had to be filtered through the small group of core developers. For example, one day Peter Schneider-Kamp took it into his head to drop K&R C compatibility and convert the C source for Python to ANSI C. After getting approval on the python-dev mailing list, he launched into a flurry of checkins that lasted about a week, other developers joined in to help, and the job was done. If there had been only 5 people with write access, that task would probably have been viewed as "nice, but not worth the time and effort needed" and it would never have gotten done.
The shift to using SourceForge's services has resulted in a remarkable increase in the speed of development. Patches are now submitted, commented on, revised by people other than the original author, and bounced back and forth between people until the patch is deemed worth checking in. Bugs are tracked in one central location and can be assigned to a specific person for fixing, and we can count the number of open bugs to measure progress. This didn't come without a cost: developers now have more e-mail to deal with, more mailing lists to follow, and special tools had to be written for the new environment. For example, SourceForge sends default patch and bug notification e-mail messages that are completely unhelpful, so Ka-Ping Yee wrote an HTML screen-scraper that sends more useful messages.
The ease of adding code caused a few initial growing pains, such as code being checked in before it was ready or without clear agreement from the developer group. The approval process that has emerged is somewhat similar to that used by the Apache group. Developers can vote +1, +0, -0, or -1 on a patch; +1 and -1 denote acceptance or rejection, while +0 and -0 mean the developer is mostly indifferent to the change, though with a slight positive or negative slant. The most significant change from the Apache model is that the voting is essentially advisory, letting Guido van Rossum, who has Benevolent Dictator For Life status, know what the general opinion is. He can still ignore the result of a vote, and approve or reject a change even if the community disagrees with him.
Producing an actual patch is the last step in adding a new feature, and is usually easy compared to the earlier task of coming up with a good design. Discussions of new features can often explode into lengthy mailing list threads, making the discussion hard to follow, and no one can read every posting to python-dev. Therefore, a relatively formal process has been set up to write Python Enhancement Proposals (PEPs), modelled on the Internet RFC process. PEPs are draft documents that describe a proposed new feature, and are continually revised until the community reaches a consensus, either accepting or rejecting the proposal. Quoting from the introduction to PEP 1, "PEP Purpose and Guidelines":
PEP stands for Python Enhancement Proposal. A PEP is a design document providing information to the Python community, or describing a new feature for Python. The PEP should provide a concise technical specification of the feature and a rationale for it.
We intend PEPs to be the primary mechanisms for proposing new features, for collecting community input on an issue, and for documenting the design decisions that have gone into Python. The PEP author is responsible for building consensus within the community and documenting dissenting opinions.
Read the rest of PEP 1 for the details of the PEP editorial process, style, and format. PEPs are kept in the Python CVS tree on SourceForge, though they're not part of the Python 2.0 distribution, and are also available in HTML form from https://www.python.org/dev/peps/. As of September 2000, there were 25 PEPs, ranging from PEP 201, "Lockstep Iteration", to PEP 225, "Elementwise/Objectwise Operators".
Unicode
The largest new feature in Python 2.0 is a new fundamental data type: Unicode strings. Unicode uses 16-bit numbers to represent characters instead of the 8-bit numbers used by ASCII, which means that 65,536 distinct characters can be supported.
The final interface for Unicode support was arrived at through many often-heated discussions on the python-dev mailing list, and was mostly implemented by Marc-André Lemburg, based on a Unicode string type implementation by Fredrik Lundh. A detailed explanation of the interface was written up as PEP 100, "Python Unicode Integration". This article will simply cover the most significant points of the Unicode interfaces.
In Python source code, Unicode strings are written as u"string". Arbitrary Unicode characters can be written using a new escape sequence, \uHHHH, where HHHH is a 4-digit hexadecimal number from 0000 to FFFF. The \xHHHH escape sequence can also be used, and octal escapes can be used for characters up to U+01FF, which is represented by \777.
Unicode strings, just like regular strings, are an immutable sequence type; they can be indexed and sliced, but not modified in place. Unicode strings have an encode([encoding]) method that returns an 8-bit string in the desired encoding. Encodings are named by strings, such as 'ascii', 'utf-8', 'iso-8859-1', or whatever. A codec API is defined for implementing and registering new encodings that are then available throughout a Python program. If an encoding isn't specified, the default encoding is usually 7-bit ASCII, though it can be changed for your Python installation by calling the sys.setdefaultencoding(encoding) function in a customized version of site.py.
Combining 8-bit and Unicode strings always coerces to Unicode, using the default ASCII encoding; the result of 'a' + u'bc' is u'abc'.
New built-in functions have been added, and existing built-ins modified to support Unicode:
unichr(ch) returns a Unicode string 1 character long, containing the character ch.
ord(u), where u is a 1-character regular or Unicode string, returns the number of the character as an integer.
unicode(string [, encoding] [, errors]) creates a Unicode string from an 8-bit string. encoding is a string naming the encoding to use. The errors parameter specifies the treatment of characters that are invalid for the current encoding; passing 'strict' as the value causes an exception to be raised on any encoding error, while 'ignore' causes errors to be silently ignored and 'replace' uses U+FFFD, the official replacement character, in case of any problems.
The exec statement and various built-ins such as eval(), getattr() and setattr() will also accept Unicode strings as well as regular strings. (It's possible that the process of fixing this missed some built-ins; if you find a built-in function that accepts strings but doesn't accept Unicode strings at all, please report it as a bug.)
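As a quick interactive illustration of converting between 8-bit and Unicode strings with unicode() and encode(), here is a minimal sketch; the byte values and encodings are arbitrary examples:
>>> unicode('abc')
u'abc'
>>> unicode('caf\xe9', 'iso-8859-1')
u'caf\xe9'
>>> unicode('caf\xe9', 'ascii', 'replace')
u'caf\ufffd'
>>> u'caf\xe9'.encode('utf-8')
'caf\xc3\xa9'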
A new module, unicodedata, provides an interface to Unicode character properties. For example, unicodedata.category(u'A') returns the 2-character string 'Lu', the 'L' denoting it's a letter, and 'u' meaning that it's uppercase. unicodedata.bidirectional(u'\u0660') returns 'AN', meaning that U+0660 is an Arabic number.
The codecs module contains functions to look up existing encodings and register new ones. Unless you want to implement a new encoding, you'll most often use the codecs.lookup(encoding) function, which returns a 4-element tuple: (encode_func, decode_func, stream_reader, stream_writer).
encode_func is a function that takes a Unicode string and returns a 2-tuple (string, length). string is an 8-bit string containing a portion (perhaps all) of the Unicode string converted into the given encoding, and length tells you how much of the Unicode string was converted.
decode_func is the opposite of encode_func, taking an 8-bit string and returning a 2-tuple (ustring, length), consisting of the resulting Unicode string ustring and the integer length telling how much of the 8-bit string was consumed.
stream_reader is a class that supports decoding input from a stream. stream_reader(file_obj) returns an object that supports the read(), readline(), and readlines() methods. These methods will all translate from the given encoding and return Unicode strings.
stream_writer, similarly, is a class that supports encoding output to a stream. stream_writer(file_obj) returns an object that supports the write() and writelines() methods. These methods expect Unicode strings, translating them to the given encoding on output.
For example, the following code writes a Unicode string into a file, encoding it as UTF-8:
import codecs
unistr = u'\u0660\u2000ab ...'
(UTF8_encode, UTF8_decode,
UTF8_streamreader, UTF8_streamwriter) = codecs.lookup('UTF-8')
output = UTF8_streamwriter( open( '/tmp/output', 'wb') )
output.write( unistr )
output.close()
The following code would then read UTF-8 input from the file:
input = UTF8_streamreader( open( '/tmp/output', 'rb') )
print repr(input.read())
input.close()
Unicode-aware regular expressions are available through the re module, which has a new underlying implementation called SRE written by Fredrik Lundh of Secret Labs AB.
A -U command line option was added which causes the Python compiler to interpret all string literals as Unicode string literals. This is intended to be used in testing and future-proofing your Python code, since some future version of Python may drop support for 8-bit strings and provide only Unicode strings.
List Comprehensions
Lists are a workhorse data type in Python, and many programs manipulate a list at some point. Two common operations on lists are to loop over them, and either pick out the elements that meet a certain criterion, or apply some function to each element. For example, given a list of strings, you might want to pull out all the strings containing a given substring, or strip off trailing whitespace from each line.
The existing map() and filter() functions can be used for this purpose, but they require a function as one of their arguments. This is fine if there's an existing built-in function that can be passed directly, but if there isn't, you have to create a little function to do the required work, and Python's scoping rules make the result ugly if the little function needs additional information. Take the first example in the previous paragraph, finding all the strings in the list containing a given substring. You could write the following to do it:
# Given the list L, make a list of all strings
# containing the substring S.
sublist = filter( lambda s, substring=S:
string.find(s, substring) != -1,
L)
Because of Python's scoping rules, a default argument is used so that the anonymous function created by the lambda expression knows what substring is being searched for. List comprehensions make this cleaner:
sublist = [ s for s in L if string.find(s, S) != -1 ]
List comprehensions have the form:
[ expression for expr in sequence1
for expr2 in sequence2 ...
for exprN in sequenceN
if condition ]
The for...in clauses contain the sequences to be iterated over. The sequences do not have to be the same length, because they are not iterated over in parallel, but from left to right; this is explained more clearly in the following paragraphs. The elements of the generated list will be the successive values of expression. The final if clause is optional; if present, expression is only evaluated and added to the result when condition is true.
To make the semantics very clear, a list comprehension is equivalent to the following Python code:
for expr1 in sequence1:
    for expr2 in sequence2:
        ...
            for exprN in sequenceN:
                if (condition):
                    # Append the value of
                    # the expression to the
                    # resulting list.
This means that when there are multiple for...in clauses, the resulting list will be as long as the product of the lengths of all the sequences. If you have two lists of length 3, the output list is 9 elements long:
seq1 = 'abc'
seq2 = (1,2,3)
>>> [ (x,y) for x in seq1 for y in seq2]
[('a', 1), ('a', 2), ('a', 3), ('b', 1), ('b', 2), ('b', 3), ('c', 1),
('c', 2), ('c', 3)]
To avoid introducing an ambiguity into Python's grammar, expression must be enclosed in parentheses if it creates a tuple. The first list comprehension below is a syntax error, while the second one is valid:
# Syntax error
[ x,y for x in seq1 for y in seq2]
# Correct
[ (x,y) for x in seq1 for y in seq2]
The idea of list comprehensions originally comes from the functional programming language Haskell (https://www.haskell.org). Greg Ewing argued most effectively for adding them to Python and wrote the initial list comprehension patch, which was then discussed for a seemingly endless time on the python-dev mailing list and kept up to date by Skip Montanaro.
Augmented Assignment
Augmented assignment operators, another long-requested feature, have been added to Python 2.0. Augmented assignment operators include +=, -=, *=, and so forth. For example, the statement a += 2 increments the value of the variable a by 2, equivalent to a = a + 2.
The full list of supported assignment operators is +=, -=, *=, /=, %=, **=, &=, |=, ^=, >>=, and <<=. Python classes can override the augmented assignment operators by defining methods named __iadd__(), __isub__(), and so forth. For example, the following Number class stores a number and supports using +=, creating a new instance with an incremented value.
class Number:
    def __init__(self, value):
        self.value = value
    def __iadd__(self, increment):
        return Number( self.value + increment)

n = Number(5)
n += 3
print n.value
The __iadd__() special method is called with the value of the increment, and should return a new instance with an appropriately modified value; this return value is bound as the new value of the variable on the left-hand side.
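For a mutable object, __iadd__() can instead update the object in place and return self, so that += does not create a new instance each time. A minimal sketch, where the Accumulator class is invented for illustration:
class Accumulator:
    def __init__(self):
        self.value = 0
    def __iadd__(self, increment):
        # Update this instance in place; returning self means
        # 'a += n' rebinds a to the same, now-modified object.
        self.value = self.value + increment
        return self

a = Accumulator()
a += 10
a += 32
print a.value        # prints 42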
Augmented assignment operators were first introduced in the C programming language, and most C-derived languages, such as awk, C++, Java, Perl, and PHP also support them. The augmented assignment patch was implemented by Thomas Wouters.
String Methods
Until now string-manipulation functionality was in the string module, which was usually a front-end for the strop module written in C. The addition of Unicode posed a difficulty for the strop module, because the functions would all need to be rewritten in order to accept either 8-bit or Unicode strings. For functions such as string.replace(), which takes 3 string arguments, that means eight possible permutations, and correspondingly complicated code.
Instead, Python 2.0 pushes the problem onto the string type, making string manipulation functionality available through methods on both 8-bit strings and Unicode strings.
>>> 'andrew'.capitalize()
'Andrew'
>>> 'hostname'.replace('os', 'linux')
'hlinuxtname'
>>> 'moshe'.find('sh')
2
One thing that hasn't changed, a noteworthy April Fools' joke notwithstanding, is that Python strings are immutable. Thus, the string methods return new strings, and do not modify the string on which they operate.
The old string module is still around for backwards compatibility, but it mostly acts as a front-end to the new string methods.
Two methods which have no parallel in pre-2.0 versions, although they did exist in JPython for quite some time, are startswith() and endswith(). s.startswith(t) is equivalent to s[:len(t)] == t, while s.endswith(t) is equivalent to s[-len(t):] == t.
One other method which deserves special mention is join(). The join() method of a string receives one parameter, a sequence of strings, and is equivalent to the string.join() function from the old string module, with the arguments reversed. In other words, s.join(seq) is equivalent to the old string.join(seq, s).
Garbage Collection of Cycles
The C implementation of Python uses reference counting to implement garbage collection. Every Python object maintains a count of the number of references pointing to itself, and adjusts the count as references are created or destroyed. Once the reference count reaches zero, the object is no longer accessible, since you need to have a reference to an object to access it, and if the count is zero, no references exist any longer.
Reference counting has some pleasant properties: it's easy to understand and implement, and the resulting implementation is portable, fairly fast, and reacts well with other libraries that implement their own memory handling schemes. The major problem with reference counting is that it sometimes doesn't realise that objects are no longer accessible, resulting in a memory leak. This happens when there are cycles of references.
Consider the simplest possible cycle, a class instance which has a reference to itself:
instance = SomeClass()
instance.myself = instance
After the above two lines of code have been executed, the reference count of instance is 2; one reference is from the variable named 'instance', and the other is from the myself attribute of the instance.
If the next line of code is del instance, what happens? The reference count of instance is decreased by 1, so it has a reference count of 1; the reference in the myself attribute still exists. Yet the instance is no longer accessible through Python code, and it could be deleted. Several objects can participate in a cycle if they have references to each other, causing all of the objects to be leaked.
Python 2.0 fixes this problem by periodically executing a cycle detection algorithm which looks for inaccessible cycles and deletes the objects involved. A new gc module provides functions to perform a garbage collection, obtain debugging statistics, and tune the collector's parameters.
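A small sketch of using the new module; the exact numbers returned will of course vary from program to program:
import gc

# Force a collection and report how many objects
# the cycle detector found to be unreachable.
print gc.collect()

# Cyclic objects that could not be freed (for example, because
# they define __del__ methods) are kept in gc.garbage.
print gc.garbage

# The collection thresholds can be read and adjusted.
print gc.get_threshold()
gc.set_threshold(1000, 10, 10)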
Running the cycle detection algorithm takes some time, and therefore will result in some additional overhead. It is hoped that after we've gotten experience with the cycle collection from using 2.0, Python 2.1 will be able to minimize the overhead with careful tuning. It's not yet obvious how much performance is lost, because benchmarking this is tricky and depends crucially on how often the program creates and destroys objects. The detection of cycles can be disabled when Python is compiled, if you can't afford even a tiny speed penalty or suspect that the cycle collection is buggy, by specifying the --without-cycle-gc switch when running the configure script.
Several people tackled this problem and contributed to a solution. An early implementation of the cycle detection approach was written by Toby Kelsey. The current algorithm was suggested by Eric Tiedemann during a visit to CNRI, and Guido van Rossum and Neil Schemenauer wrote two different implementations, which were later integrated by Neil. Lots of other people offered suggestions along the way; the March 2000 archives of the python-dev mailing list contain most of the relevant discussion, especially in the threads titled "Reference cycle collection for Python" and "Finalization again".
Other Core Changes
Various minor changes have been made to Python's syntax and built-in functions. None of the changes are very far-reaching, but they're handy conveniences.
Minor Language Changes
A new syntax makes it more convenient to call a given function with a tuple of arguments and/or a dictionary of keyword arguments. In Python 1.5 and earlier, you'd use the apply() built-in function: apply(f, args, kw) calls the function f() with the argument tuple args and the keyword arguments in the dictionary kw. apply() is the same in 2.0, but thanks to a patch from Greg Ewing, f(*args, **kw) is a shorter and clearer way to achieve the same effect. This syntax is symmetrical with the syntax for defining functions:
def f(*args, **kw):
    # args is a tuple of positional args,
    # kw is a dictionary of keyword args
    ...
The print statement can now have its output directed to a file-like object by following the print with >> file, similar to the redirection operator in Unix shells. Previously you'd either have to use the write() method of the file-like object, which lacks the convenience and simplicity of print, or you could assign a new value to sys.stdout and then restore the old value. For sending output to standard error, it's much easier to write this:
print >> sys.stderr, "Warning: action field not supplied"
Modules can now be renamed on importing them, using the syntax import module as name or from module import name as othername. The patch was submitted by Thomas Wouters.
A new format style is available when using the % operator; '%r' will insert the repr() of its argument. This was also added from symmetry considerations, this time for symmetry with the existing '%s' format style, which inserts the str() of its argument. For example, '%r %s' % ('abc', 'abc') returns a string containing 'abc' abc.
Previously there was no way to implement a class that overrode Python's built-in in operator and implemented a custom version. obj in seq returns true if obj is present in the sequence seq; Python computes this by simply trying every index of the sequence until either obj is found or an IndexError is encountered. Moshe Zadka contributed a patch which adds a __contains__() magic method for providing a custom implementation for in. Additionally, new built-in objects written in C can define what in means for them via a new slot in the sequence protocol.
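A short sketch of a class that provides its own __contains__() implementation; the EvenNumbers class is invented for illustration:
class EvenNumbers:
    def __contains__(self, x):
        # Called to evaluate 'x in EvenNumbers()'; no sequence
        # indexing takes place at all.
        return x % 2 == 0

evens = EvenNumbers()
print 4 in evens        # prints 1
print 7 in evens        # prints 0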
Earlier versions of Python used a recursive algorithm for deleting objects. Deeply nested data structures could cause the interpreter to fill up the C stack and crash; Christian Tismer rewrote the deletion logic to fix this problem. On a related note, comparing recursive objects recursed infinitely and crashed; Jeremy Hylton rewrote the code to no longer crash, producing a useful result instead. For example, after this code:
a = []
b = []
a.append(a)
b.append(b)
The comparison a==b returns true, because the two recursive data structures are isomorphic. See the thread "trashcan and PR#7" in the April 2000 archives of the python-dev mailing list for the discussion leading up to this implementation, and some useful relevant links. Note that comparisons can now also raise exceptions. In earlier versions of Python, a comparison operation such as cmp(a,b) would always produce an answer, even if a user-defined __cmp__() method encountered an error, since the resulting exception would simply be silently swallowed.
Work has been done on porting Python to 64-bit Windows on the Itanium processor, mostly by Trent Mick of ActiveState. (Confusingly, sys.platform is still 'win32' on Win64 because it seems that for ease of porting, MS Visual C++ treats code as 32 bit on Itanium.) PythonWin also supports Windows CE; see the Python CE page at http://pythonce.sourceforge.net/ for more information.
Another new platform is Darwin/MacOS X; initial support for it is in Python 2.0. Dynamic loading works, if you specify "configure --with-dyld --with-suffix=.x". Consult the README in the Python source distribution for more instructions.
An attempt has been made to alleviate one of Python's warts, the often-confusing NameError exception when code refers to a local variable before the variable has been assigned a value. For example, the following code raises an exception on the print statement in both 1.5.2 and 2.0; in 1.5.2 a NameError exception is raised, while 2.0 raises a new UnboundLocalError exception. UnboundLocalError is a subclass of NameError, so any existing code that expects NameError to be raised should still work.
def f():
    print "i=",i
    i = i + 1
f()
Two new exceptions, TabError and IndentationError, have been introduced. They're both subclasses of SyntaxError, and are raised when Python code is found to be improperly indented.
Changes to Built-in Functions
A new built-in, zip(seq1, seq2, ...), has been added. zip() returns a list of tuples where each tuple contains the i-th element from each of the argument sequences. The difference between zip() and map(None, seq1, seq2) is that map() pads the sequences with None if the sequences aren't all of the same length, while zip() truncates the returned list to the length of the shortest argument sequence.
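An interactive comparison of the two behaviours with sequences of different lengths, as a quick illustration:
>>> zip('abc', (1, 2, 3, 4))
[('a', 1), ('b', 2), ('c', 3)]
>>> map(None, 'abc', (1, 2, 3, 4))
[('a', 1), ('b', 2), ('c', 3), (None, 4)]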
The int() and long() functions now accept an optional "base" parameter when the first argument is a string. int('123', 10) returns 123, while int('123', 16) returns 291. int(123, 16) raises a TypeError exception with the message "can't convert non-string with explicit base".
A new variable holding more detailed version information has been added to the sys module. sys.version_info is a tuple (major, minor, micro, level, serial). For example, in a hypothetical 2.0.1beta1, sys.version_info would be (2, 0, 1, 'beta', 1). level is a string such as "alpha", "beta", or "final" for a final release.
Dictionaries have an odd new method, setdefault(key, default), which behaves similarly to the existing get() method. However, if the key is missing, setdefault() both returns the value of default as get() would do, and also inserts it into the dictionary as the value for key. Thus, the following lines of code:
if dict.has_key( key ):
    return dict[key]
else:
    dict[key] = []
    return dict[key]
can be reduced to a single return dict.setdefault(key, []) statement.
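setdefault() is also handy for building dictionaries of lists without first testing whether a key is present; a small sketch with made-up data:
# Build an index mapping each word to the line numbers
# on which it appears.
index = {}
for word, lineno in [('to', 1), ('be', 1), ('to', 3)]:
    index.setdefault(word, []).append(lineno)

print index            # e.g. {'to': [1, 3], 'be': [1]}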
The interpreter sets a maximum recursion depth in order to catch runaway recursion before filling the C stack and causing a core dump or GPF. Previously this limit was fixed when you compiled Python, but in 2.0 the maximum recursion depth can be read and modified using sys.getrecursionlimit() and sys.setrecursionlimit(). The default value is 1000, and a rough maximum value for a given platform can be found by running a new script, Misc/find_recursionlimit.py.
Porting to 2.0
New Python releases try hard to be compatible with previous releases, and the record has been pretty good. However, some changes are considered useful enough, usually because they fix initial design decisions that turned out to be actively mistaken, that breaking backward compatibility can't always be avoided. This section lists the changes in Python 2.0 that may cause old Python code to break.
The change which will probably break the most code is tightening up the arguments accepted by some methods. Some methods would take multiple arguments and treat them as a tuple, particularly various list methods such as append() and insert(). In earlier versions of Python, if L is a list, L.append( 1,2 ) appends the tuple (1,2) to the list. In Python 2.0 this causes a TypeError exception to be raised, with the message: 'append requires exactly 1 argument; 2 given'. The fix is to simply add an extra set of parentheses to pass both values as a tuple: L.append( (1,2) ).
The earlier versions of these methods were more forgiving because they used an old function in Python's C interface to parse their arguments; 2.0 modernizes them to use PyArg_ParseTuple(), the current argument parsing function, which provides more helpful error messages and treats multi-argument calls as errors. If you absolutely must use 2.0 but can't fix your code, you can edit Objects/listobject.c and define the preprocessor symbol NO_STRICT_LIST_APPEND to preserve the old behaviour; this isn't recommended.
Some of the functions in the socket module are still forgiving in this way. For example, socket.connect( ('hostname', 25) ) is the correct form, passing a tuple representing an IP address, but socket.connect( 'hostname', 25 ) also works. socket.connect_ex() and socket.bind() are similarly easy-going. 2.0alpha1 tightened these functions up, but because the documentation actually used the erroneous multiple argument form, many people wrote code which would break with the stricter checking. GvR backed out the changes in the face of public reaction, so for the socket module, the documentation was fixed and the multiple argument form is simply marked as deprecated; it will be tightened up again in a future Python version.
The \x escape in string literals now takes exactly 2 hex digits. Previously it would consume all the hex digits following the 'x' and take the lowest 8 bits of the result, so \x123456 was equivalent to \x56.
The AttributeError and NameError exceptions have a more friendly error message, whose text will be something like 'Spam' instance has no attribute 'eggs' or name 'eggs' is not defined. Previously the error message was just the missing attribute name eggs, and code written to take advantage of this fact will break in 2.0.
Some work has been done to make integers and long integers a bit more interchangeable. In 1.5.2, large-file support was added for Solaris, to allow reading files larger than 2 GiB; this made the tell() method of file objects return a long integer instead of a regular integer. Some code would subtract two file offsets and attempt to use the result to multiply a sequence or slice a string, but this raised a TypeError. In 2.0, long integers can be used to multiply or slice a sequence, and it'll behave as you'd intuitively expect it to; 3L * 'abc' produces 'abcabcabc', and (0,1,2,3)[2L:4L] produces (2,3). Long integers can also be used in various contexts where previously only integers were accepted, such as in the seek() method of file objects, and in the formats supported by the % operator (%d, %i, %x, etc.). For example, "%d" % 2L**64 will produce the string 18446744073709551616.
The subtlest long integer change of all is that the str() of a long integer no longer has a trailing 'L' character, though repr() still includes it. The 'L' annoyed many people who wanted to print long integers that looked just like regular integers, since they had to go out of their way to chop off the character. This is no longer a problem in 2.0, but code which does str(longval)[:-1] and assumes the 'L' is there, will now lose the final digit.
Taking the repr() of a float now uses a different formatting precision than str(). repr() uses %.17g format string for C's sprintf(), while str() uses %.12g as before. The effect is that repr() may occasionally show more decimal places than str(), for certain numbers. For example, the number 8.1 can't be represented exactly in binary, so repr(8.1) is '8.0999999999999996', while str(8.1) is '8.1'.
The -X command-line option, which turned all standard exceptions into strings instead of classes, has been removed; the standard exceptions will now always be classes. The exceptions module containing the standard exceptions was translated from Python to a built-in C module, written by Barry Warsaw and Fredrik Lundh.
Extending/Embedding Changes
Some of the changes are under the covers, and will only be apparent to people writing C extension modules or embedding a Python interpreter in a larger application. If you aren't dealing with Python's C API, you can safely skip this section.
The version number of the Python C API was incremented, so C extensions compiled for 1.5.2 must be recompiled in order to work with 2.0. On Windows, it's not possible for Python 2.0 to import a third party extension built for Python 1.5.x due to how Windows DLLs work, so Python will raise an exception and the import will fail.
Users of Jim Fulton's ExtensionClass module will be pleased to find out that hooks have been added so that ExtensionClasses are now supported by isinstance() and issubclass(). This means you no longer have to remember to write code such as if type(obj) == myExtensionClass, but can use the more natural if isinstance(obj, myExtensionClass).
The Python/importdl.c file, which was a mass of #ifdefs to support dynamic loading on many different platforms, was cleaned up and reorganised by Greg Stein. importdl.c is now quite small, and platform-specific code has been moved into a bunch of Python/dynload_*.c files. Another cleanup: there were also a number of my*.h files in the Include/ directory that held various portability hacks; they've been merged into a single file, Include/pyport.h.
Vladimir Marangozov's long-awaited malloc restructuring was completed, to make it easy to have the Python interpreter use a custom allocator instead of C's standard malloc(). For documentation, read the comments in Include/pymem.h and Include/objimpl.h. For the lengthy discussions during which the interface was hammered out, see the Web archives of the 'patches' and 'python-dev' lists at python.org.
Recent versions of the GUSI development environment for MacOS support POSIX threads. Therefore, Python's POSIX threading support now works on the Macintosh. Threading support using the user-space GNU pth library was also contributed.
Threading support on Windows was enhanced, too. Windows supports thread locks that use kernel objects only in case of contention; in the common case when there's no contention, they use simpler functions which are an order of magnitude faster. A threaded version of Python 1.5.2 on NT is twice as slow as an unthreaded version; with the 2.0 changes, the difference is only 10%. These improvements were contributed by Yakov Markovitch.
Python 2.0's source now uses only ANSI C prototypes, so compiling Python now requires an ANSI C compiler, and can no longer be done using a compiler that only supports K&R C.
Previously the Python virtual machine used 16-bit numbers in its bytecode, limiting the size of source files. In particular, this affected the maximum size of literal lists and dictionaries in Python source; occasionally people who are generating Python code would run into this limit. A patch by Charles G. Waldman raises the limit from 2^16 to 2^32.
Three new convenience functions intended for adding constants to a module's dictionary at module initialization time were added: PyModule_AddObject(), PyModule_AddIntConstant(), and PyModule_AddStringConstant(). Each of these functions takes a module object, a null-terminated C string containing the name to be added, and a third argument for the value to be assigned to the name. This third argument is, respectively, a Python object, a C long, or a C string.
A wrapper API was added for Unix-style signal handlers. PyOS_getsig() gets a signal handler and PyOS_setsig() will set a new handler.
Distutils: Making Modules Easy to Install
Before Python 2.0, installing modules was a tedious affair -- there was no way to figure out automatically where Python is installed, or what compiler options to use for extension modules. Software authors had to go through an arduous ritual of editing Makefiles and configuration files, which only really work on Unix and leave Windows and MacOS unsupported. Python users faced wildly differing installation instructions which varied between different extension packages, which made administering a Python installation something of a chore.
The SIG for distribution utilities, shepherded by Greg Ward, has created the Distutils, a system to make package installation much easier. They form the distutils package, a new part of Python's standard library. In the best case, installing a Python module from source will require the same steps: first you simply unpack the tarball or zip archive, and then run "python setup.py install". The platform will be automatically detected, the compiler will be recognized, C extension modules will be compiled, and the distribution installed into the proper directory. Optional command-line arguments provide more control over the installation process, and the distutils package offers many places to override defaults -- separating the build from the install, building or installing in non-default directories, and more.
In order to use the Distutils, you need to write a setup.py script. For the simple case, when the software contains only .py files, a minimal setup.py can be just a few lines long:
from distutils.core import setup
setup (name = "foo", version = "1.0",
py_modules = ["module1", "module2"])
The setup.py file isn't much more complicated if the software consists of a few packages:
from distutils.core import setup
setup (name = "foo", version = "1.0",
packages = ["package", "package.subpackage"])
A C extension can be the most complicated case; here's an example taken from the PyXML package:
from distutils.core import setup, Extension
expat_extension = Extension('xml.parsers.pyexpat',
define_macros = [('XML_NS', None)],
include_dirs = [ 'extensions/expat/xmltok',
'extensions/expat/xmlparse' ],
sources = [ 'extensions/pyexpat.c',
'extensions/expat/xmltok/xmltok.c',
'extensions/expat/xmltok/xmlrole.c', ]
)
setup (name = "PyXML", version = "0.5.4",
ext_modules =[ expat_extension ] )
The Distutils can also take care of creating source and binary distributions. The "sdist" command, run by "python setup.py sdist", builds a source distribution such as foo-1.0.tar.gz. Adding new commands isn't difficult; "bdist_rpm" and "bdist_wininst" commands have already been contributed to create an RPM distribution and a Windows installer for the software, respectively. Commands to create other distribution formats such as Debian packages and Solaris .pkg files are in various stages of development.
All this is documented in a new manual, Distributing Python Modules, that joins the basic set of Python documentation.
XML Modules
Python 1.5.2 included a simple XML parser in the form of the xmllib module, contributed by Sjoerd Mullender. Since 1.5.2's release, two different interfaces for processing XML have become common: SAX2 (version 2 of the Simple API for XML) provides an event-driven interface with some similarities to xmllib, and the DOM (Document Object Model) provides a tree-based interface, transforming an XML document into a tree of nodes that can be traversed and modified. Python 2.0 includes a SAX2 interface and a stripped-down DOM interface as part of the xml package. Here we will give a brief overview of these new interfaces; consult the Python documentation or the source code for complete details. The Python XML SIG is also working on improved documentation.
SAX2 Support
SAX defines an event-driven interface for parsing XML. To use SAX, you must write a SAX handler class. Handler classes inherit from various classes provided by SAX, and override various methods that will then be called by the XML parser. For example, the startElement() and endElement() methods are called for every starting and end tag encountered by the parser, the characters() method is called for every chunk of character data, and so forth.
The advantage of the event-driven approach is that the whole document doesn't have to be resident in memory at any one time, which matters if you are processing really huge documents. However, writing the SAX handler class can get very complicated if you're trying to modify the document structure in some elaborate way.
For example, this little example program defines a handler that prints a message for every starting and ending tag, and then parses the file hamlet.xml using it:
from xml import sax

class SimpleHandler(sax.ContentHandler):
    def startElement(self, name, attrs):
        print 'Start of element:', name, attrs.keys()

    def endElement(self, name):
        print 'End of element:', name

# Create a parser object
parser = sax.make_parser()

# Tell it what handler to use
handler = SimpleHandler()
parser.setContentHandler( handler )

# Parse a file!
parser.parse( 'hamlet.xml' )
For more information, consult the Python documentation, or the XML HOWTO at http://pyxml.sourceforge.net/topics/howto/xml-howto.html.
DOM Support
The Document Object Model is a tree-based representation for an XML document. A top-level Document instance is the root of the tree, and has a single child which is the top-level Element instance. This Element has children nodes representing character data and any sub-elements, which may have further children of their own, and so forth. Using the DOM you can traverse the resulting tree any way you like, access element and attribute values, insert and delete nodes, and convert the tree back into XML.
The DOM is useful for modifying XML documents, because you can create a DOM tree, modify it by adding new nodes or rearranging subtrees, and then produce a new XML document as output. You can also construct a DOM tree manually and convert it to XML, which can be a more flexible way of producing XML output than simply writing <tag1>...</tag1> to a file.
The DOM implementation included with Python lives in the xml.dom.minidom module. It's a lightweight implementation of the Level 1 DOM with support for XML namespaces. The parse() and parseString() convenience functions are provided for generating a DOM tree:
from xml.dom import minidom
doc = minidom.parse('hamlet.xml')
doc is a Document instance. Document, like all the other DOM classes such as Element and Text, is a subclass of the Node base class. All the nodes in a DOM tree therefore support certain common methods, such as toxml() which returns a string containing the XML representation of the node and its children. Each class also has special methods of its own; for example, Element and Document instances have a method to find all child elements with a given tag name.
Continuing from the previous 2-line example:
perslist = doc.getElementsByTagName( 'PERSONA' )
print perslist[0].toxml()
print perslist[1].toxml()
For the Hamlet XML file, the above few lines output:
<PERSONA>CLAUDIUS, king of Denmark. </PERSONA>
<PERSONA>HAMLET, son to the late, and nephew to the present king.</PERSONA>
The root element of the document is available as doc.documentElement, and its children can be easily modified by deleting, adding, or removing nodes:
root = doc.documentElement
# Remove the first child
root.removeChild( root.childNodes[0] )
# Move the new first child to the end
root.appendChild( root.childNodes[0] )
# Insert the new first child (originally,
# the third child) before the 20th child.
root.insertBefore( root.childNodes[0], root.childNodes[20] )
Again, I will refer you to the Python documentation for a complete listing of the different Node classes and their various methods.
Relationship to PyXML
The XML Special Interest Group has been working on XML-related Python code for a while. Its code distribution, called PyXML, is available from the SIG's Web pages at https://www.python.org/community/sigs/current/xml-sig. The PyXML distribution also used the package name xml. If you've written programs that used PyXML, you're probably wondering about its compatibility with the 2.0 xml package.
The answer is that Python 2.0's xml package isn't compatible with PyXML, but can be made compatible by installing a recent version of PyXML. Many applications can get by with the XML support that is included with Python 2.0, but more complicated applications will require that the full PyXML package be installed. When installed, PyXML versions 0.6.0 or greater will replace the xml package shipped with Python, and will be a strict superset of the standard package, adding a bunch of additional features. Some of the additional features in PyXML include:
4DOM, a full DOM implementation from FourThought, Inc.
The xmlproc validating parser, written by Lars Marius Garshol.
The sgmlop parser accelerator module, written by Fredrik Lundh.
Module changes
Lots of improvements and bugfixes were made to Python's extensive standard library; some of the affected modules include readline, ConfigParser, cgi, calendar, posix, readline, xmllib, aifc, chunk, wave, random, shelve, and nntplib. Consult the CVS logs for the exact patch-by-patch details.
Brian Gallew contributed OpenSSL support for the socket module. OpenSSL is an implementation of the Secure Socket Layer, which encrypts the data being sent over a socket. When compiling Python, you can edit Modules/Setup to include SSL support, which adds an additional function to the socket module: socket.ssl(socket, keyfile, certfile), which takes a socket object and returns an SSL socket. The httplib and urllib modules were also changed to support https:// URLs, though no one has implemented FTP or SMTP over SSL.
The httplib module has been rewritten by Greg Stein to support HTTP/1.1. Backward compatibility with the 1.5 version of httplib is provided, though using HTTP/1.1 features such as pipelining will require rewriting code to use a different set of interfaces.
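As a rough sketch of the newer, connection-oriented interface (the host name is only a placeholder, and the exact class and method names should be checked against the httplib documentation):
import httplib

# Connection-oriented interface from the HTTP/1.1 rewrite.
conn = httplib.HTTPConnection('www.example.com')
conn.request('GET', '/index.html')
response = conn.getresponse()
print response.status, response.reason
data = response.read()
conn.close()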
The Tkinter module now supports Tcl/Tk version 8.1, 8.2, or 8.3, and support for the older 7.x versions has been dropped. The Tkinter module now supports displaying Unicode strings in Tk widgets. Also, Fredrik Lundh contributed an optimization which makes operations like create_line and create_polygon much faster, especially when using lots of coordinates.
The curses module has been greatly extended, starting from Oliver Andrich's enhanced version, to provide many additional functions from ncurses and SYSV curses, such as colour, alternative character set support, pads, and mouse support. This means the module is no longer compatible with operating systems that only have BSD curses, but there don't seem to be any currently maintained OSes that fall into this category.
As mentioned in the earlier discussion of 2.0's Unicode support, the underlying implementation of the regular expressions provided by the re module has been changed. SRE, a new regular expression engine written by Fredrik Lundh and partially funded by Hewlett Packard, supports matching against both 8-bit strings and Unicode strings.
New modules
A number of new modules were added. We'll simply list them with brief descriptions; consult the 2.0 documentation for the details of a particular module.
atexit: For registering functions to be called before the Python interpreter exits. Code that currently sets sys.exitfunc directly should be changed to use the atexit module instead, importing atexit and calling atexit.register() with the function to be called on exit. (Contributed by Skip Montanaro; see the sketch after this list for an example.)
codecs, encodings, unicodedata: Added as part of the new Unicode support.
filecmp: Supersedes the old cmp, cmpcache and dircmp modules, which have now become deprecated. (Contributed by Gordon MacMillan and Moshe Zadka.)
gettext: This module provides internationalization (I18N) and localization (L10N) support for Python programs by providing an interface to the GNU gettext message catalog library. (Integrated by Barry Warsaw, from separate contributions by Martin von Löwis, Peter Funk, and James Henstridge.)
linuxaudiodev: Support for the /dev/audio device on Linux, a twin to the existing sunaudiodev module. (Contributed by Peter Bosch, with fixes by Jeremy Hylton.)
mmap: An interface to memory-mapped files on both Windows and Unix. A file's contents can be mapped directly into memory, at which point it behaves like a mutable string, so its contents can be read and modified. They can even be passed to functions that expect ordinary strings, such as the re module. (Contributed by Sam Rushing, with some extensions by A.M. Kuchling.)
pyexpat: An interface to the Expat XML parser. (Contributed by Paul Prescod.)
robotparser: Parse a robots.txt file, which is used for writing Web spiders that politely avoid certain areas of a Web site. The parser accepts the contents of a robots.txt file, builds a set of rules from it, and can then answer questions about the fetchability of a given URL. (Contributed by Skip Montanaro.)
tabnanny: A module/script to check Python source code for ambiguous indentation. (Contributed by Tim Peters.)
UserString: A base class useful for deriving objects that behave like strings.
webbrowser: A module that provides a platform independent way to launch a web browser on a specific URL. For each platform, various browsers are tried in a specific order. The user can alter which browser is launched by setting the BROWSER environment variable. (Originally inspired by Eric S. Raymond's patch to urllib which added similar functionality, but the final module comes from code originally implemented by Fred Drake as Tools/idle/BrowserControl.py, and adapted for the standard library by Fred.)
_winreg: An interface to the Windows registry. _winreg is an adaptation of functions that have been part of PythonWin since 1995, but has now been added to the core distribution, and enhanced to support Unicode. _winreg was written by Bill Tutt and Mark Hammond.
zipfile: A module for reading and writing ZIP-format archives. These are archives produced by PKZIP on DOS/Windows or zip on Unix, not to be confused with gzip-format files (which are supported by the gzip module). (Contributed by James C. Ahlstrom.)
imputil: A module that provides a simpler way for writing customized import hooks, in comparison to the existing ihooks module. (Implemented by Greg Stein, with much discussion on python-dev along the way.)
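As referenced in the atexit entry above, here is a minimal sketch of registering an exit handler; the function name is invented for illustration:
import atexit

def report_done():
    # Run automatically when the interpreter exits.
    print 'Processing complete.'

atexit.register(report_done)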
IDLE Improvements
IDLE is the official Python cross-platform IDE, written using Tkinter. Python 2.0 includes IDLE 0.6, which adds a number of new features and improvements. A partial list:
UI improvements and optimizations, especially in the area of syntax highlighting and auto-indentation.
The class browser now shows more information, such as the top level functions in a module.
Tab width is now a user settable option. When opening an existing Python file, IDLE automatically detects the indentation conventions, and adapts.
There is now support for calling browsers on various platforms, used to open the Python documentation in a browser.
IDLE now has a command line, which is largely similar to the vanilla Python interpreter.
Call tips were added in many places.
IDLE can now be installed as a package.
In the editor window, there is now a line/column bar at the bottom.
Three new keystroke commands: Check module (Alt-F5), Import module (F5) and Run script (Ctrl-F5).
Deleted and Deprecated Modules
A few modules have been dropped because they're obsolete, or because there are now better ways to do the same thing. The stdwin module is gone; it was for a platform-independent windowing toolkit that's no longer developed.
A number of modules have been moved to the lib-old subdirectory: cmp, cmpcache, dircmp, dump, find, grep, packmail, poly, util, whatsound, zmod. If you have code which relies on a module that's been moved to lib-old, you can simply add that directory to sys.path to get them back, but you're encouraged to update any code that uses these modules.
Acknowledgements
The authors would like to thank the following people for offering suggestions on various drafts of this article: David Bolen, Mark Hammond, Gregg Hauser, Jeremy Hylton, Fredrik Lundh, Detlef Lannert, Aahz Maruch, Skip Montanaro, Vladimir Marangozov, Tobias Polzin, Guido van Rossum, Neil Schemenauer, and Russ Schmidt.