Library and Extension FAQ

Contents

General Library Questions

How do I find a module or application to perform task X?

Check the Library Reference to see if there's a relevant standard library module. (Eventually you'll learn what's in the standard library and will be able to skip this step.)

For third-party packages, search the Python Package Index or try Google or another web search engine. Searching for “Python” plus a keyword or two for your topic of interest will usually find something helpful.

Where is the math.py (socket.py, regex.py, etc.) source file?

If you can't find a source file for a module, it may be a built-in or dynamically loaded module implemented in C, C++ or another compiled language. In that case you may not have the source file, or it may be something like mathmodule.c, somewhere in a C source directory (not on the Python Path).

There are (at least) three kinds of modules in Python:

  1. modules written in Python (.py);

  2. modules written in C and dynamically loaded (.dll, .pyd, .so, .sl, etc.);

  3. modules written in C and linked with the interpreter; to get a list of these, type:

    import sys
    print sys.builtin_module_names
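
As a quick way to tell which kind a given module is, one can check sys.builtin_module_names and the module's __file__ attribute. A small sketch (the helper name module_kind is made up for illustration):

```python
import sys

def module_kind(name):
    # Modules linked into the interpreter appear in sys.builtin_module_names
    if name in sys.builtin_module_names:
        return "built-in"
    # Otherwise the imported module records where it was loaded from
    module = __import__(name)
    return getattr(module, "__file__", "unknown")

print(module_kind("sys"))  # built-in on CPython
```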
    

How do I make a Python script executable on Unix?

You need to do two things: the script file's mode must be executable, and the first line must begin with “#!” followed by the path of the Python interpreter.

The first is done by executing chmod +x scriptfile or, perhaps, chmod 755 scriptfile.

The second can be done in a number of ways. The most straightforward way is to write

#!/usr/local/bin/python

as the very first line of your file, using the pathname for where the Python interpreter is installed on your platform.

If you would like the script to be independent of where the Python interpreter lives, you can use the env program. Almost all Unix variants support the following, assuming the Python interpreter is in a directory on the user's PATH:

#!/usr/bin/env python

Don't do this for CGI scripts. The PATH variable for CGI scripts is often very minimal, so you need to use the actual absolute pathname of the interpreter.

Occasionally, a user’s environment is so full that the /usr/bin/env program fails; or there’s no env program at all. In that case, you can try the following hack (due to Alex Rezinsky):

#! /bin/sh
""":"
exec python $0 ${1+"$@"}
"""

A minor disadvantage is that this defines the script's __doc__ string. However, you can fix that by adding

__doc__ = """...Whatever..."""

Is there a curses/termcap package for Python?

For Unix variants the standard Python source distribution comes with a curses module in the Modules subdirectory, though it’s not compiled by default. (Note that this is not available in the Windows distribution – there is no curses module for Windows.)

The curses module supports basic curses features as well as many additional functions from ncurses and SYSV curses such as colour, alternative character set support, pads, and mouse support. This means the module isn’t compatible with operating systems that only have BSD curses, but there don’t seem to be any currently maintained OSes that fall into this category.

For Windows: use the consolelib module (http://effbot.org/zone/console-index.htm).

Is there an equivalent to C's onexit() in Python?

The atexit module provides a register function that is similar to C's onexit().
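
For example, a minimal sketch:

```python
import atexit

def goodbye(name):
    print("Goodbye, %s" % name)

# The registered function is called with the given arguments
# when the interpreter exits normally.
atexit.register(goodbye, "world")
```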

Why don't my signal handlers work?

The most common problem is that the signal handler is declared with the wrong argument list. It is called as

handler(signum, frame)

so it should be declared with two arguments:

def handler(signum, frame):
    ...
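
Installing the handler is then done with signal.signal(); for example, for SIGINT (a sketch; Ctrl-C handling is shown only as an illustration):

```python
import signal

def handler(signum, frame):
    print("Received signal", signum)

# Install the handler; signal.signal() returns the previous handler
previous = signal.signal(signal.SIGINT, handler)
```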

Common tasks

How do I test a Python program or component?

Python comes with two testing frameworks. The doctest module finds examples in the docstrings of a module and runs them, comparing the output against the expected output given in the docstring.

The unittest module is a fancier testing framework modelled on the Java and Smalltalk testing frameworks.
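
As a small illustration of doctest, here is a hypothetical factorial function whose docstring examples double as tests:

```python
import doctest

def factorial(n):
    """Return n! computed iteratively.

    >>> factorial(4)
    24
    >>> factorial(0)
    1
    """
    result = 1
    for i in range(2, n + 1):
        result *= i
    return result

# doctest.testmod() runs every example found in this module's docstrings
results = doctest.testmod()
print(results)
```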

To make testing easier, you should use good modular design in your program. Your program should have almost all functionality encapsulated in either functions or class methods – and this sometimes has the surprising and delightful effect of making the program run faster (because local variable accesses are faster than global accesses). Furthermore the program should avoid depending on mutating global variables, since this makes testing much more difficult to do.

The main logic of your program may be as simple as

if __name__ == "__main__":
    main_logic()

at the bottom of the main module of your program.

Once your program is organized as a tractable collection of functions and class behaviours you should write test functions that exercise the behaviours. A test suite that automates a sequence of tests can be associated with each module. This sounds like a lot of work, but since Python is so terse and flexible it’s surprisingly easy. You can make coding much more pleasant and fun by writing your test functions in parallel with the “production code”, since this makes it easy to find bugs and even design flaws earlier.

“Support modules” that are not intended to be the main module of a program may include a self-test of the module.

if __name__ == "__main__":
    self_test()

Even programs that interact with complex external interfaces may be tested when the external interfaces are unavailable, by using ‘fake’ interfaces implemented in Python.

How do I create documentation from doc strings?

The pydoc module can create HTML from the doc strings in your Python source code. An alternative for creating API documentation purely from docstrings is epydoc. Sphinx can also include docstring content.
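
pydoc can also render documentation programmatically; render_doc() returns the same text that help() would page through. A quick sketch:

```python
import pydoc

# Generate the plain-text documentation for the built-in len()
# (pydoc.plain strips the terminal bold/underline overstriking)
text = pydoc.plain(pydoc.render_doc(len))
print(text.splitlines()[0])
```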

How do I get a single keypress at a time?

For Unix variants there are several solutions. It’s straightforward to do this using curses, but curses is a fairly large module to learn. Here’s a solution without curses:

import termios, fcntl, sys, os
fd = sys.stdin.fileno()

oldterm = termios.tcgetattr(fd)
newattr = termios.tcgetattr(fd)
newattr[3] = newattr[3] & ~termios.ICANON & ~termios.ECHO
termios.tcsetattr(fd, termios.TCSANOW, newattr)

oldflags = fcntl.fcntl(fd, fcntl.F_GETFL)
fcntl.fcntl(fd, fcntl.F_SETFL, oldflags | os.O_NONBLOCK)

try:
    while 1:
        try:
            c = sys.stdin.read(1)
            print "Got character", repr(c)
        except IOError: pass
finally:
    termios.tcsetattr(fd, termios.TCSAFLUSH, oldterm)
    fcntl.fcntl(fd, fcntl.F_SETFL, oldflags)

You need the termios and the fcntl module for any of this to work, and I’ve only tried it on Linux, though it should work elsewhere. In this code, characters are read and printed one at a time.

termios.tcsetattr() turns off stdin's echoing and disables canonical mode. fcntl.fcntl() is used to obtain stdin's file descriptor flags and modify them for non-blocking mode. Since reading stdin when it is empty results in an IOError, this error is caught and ignored.

Threads

How do I program using threads?

Be sure to use the threading module and not the thread module. The threading module builds convenient abstractions on top of the low-level primitives provided by the thread module.

Aahz has a set of helpful threading tutorials; see http://www.pythoncraft.com/OSCON2001/.

None of my threads seem to run: why?

As soon as the main thread exits, all threads are killed. Your main thread is running too quickly, giving the threads no time to do any work.

A simple fix is to add a sleep to the end of the program that's long enough for all the threads to finish:

import threading, time

def thread_task(name, n):
    for i in range(n): print name, i

for i in range(10):
    T = threading.Thread(target=thread_task, args=(str(i), i))
    T.start()

time.sleep(10) # <----------------------------!

But now (on many platforms) the threads don’t run in parallel, but appear to run sequentially, one at a time! The reason is that the OS thread scheduler doesn’t start a new thread until the previous thread is blocked.

A simple fix is to add a tiny sleep to the start of the function:

def thread_task(name, n):
    time.sleep(0.001) # <---------------------!
    for i in range(n): print name, i

for i in range(10):
    T = threading.Thread(target=thread_task, args=(str(i), i))
    T.start()

time.sleep(10)

Instead of trying to guess a good delay value for time.sleep(), it’s better to use some kind of semaphore mechanism. One idea is to use the Queue module to create a queue object, let each thread append a token to the queue when it finishes, and let the main thread read as many tokens from the queue as there are threads.
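
A minimal sketch of the token idea (the import fallback covers the Python 3 module name queue; in Python 2 the module is called Queue):

```python
import threading
try:
    import Queue as queue   # Python 2 name
except ImportError:
    import queue            # Python 3 name

done = queue.Queue()

def thread_task(name, n):
    for i in range(n):
        pass                # the real work would go here
    done.put(name)          # token: this thread has finished

num_threads = 3
for i in range(num_threads):
    threading.Thread(target=thread_task, args=(str(i), i)).start()

# Read one token per thread; get() blocks until all threads are done
for _ in range(num_threads):
    done.get()
print("all threads finished")
```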

How do I parcel out work among a bunch of worker threads?

Use the Queue module to create a queue containing a list of jobs. The Queue class maintains a list of objects and has a .put(obj) method that adds items to the queue and a .get() method to return them. The class will take care of the locking necessary to ensure that each job is handed out exactly once.

Here's a trivial example:

import threading, Queue, time

# The worker thread gets jobs off the queue.  When the queue is empty, it
# assumes there will be no more work and exits.
# (Realistically workers will run until terminated.)
def worker():
    print 'Running worker'
    time.sleep(0.1)
    while True:
        try:
            arg = q.get(block=False)
        except Queue.Empty:
            print 'Worker', threading.currentThread(),
            print 'queue empty'
            break
        else:
            print 'Worker', threading.currentThread(),
            print 'running with argument', arg
            time.sleep(0.5)

# Create queue
q = Queue.Queue()

# Start a pool of 5 workers
for i in range(5):
    t = threading.Thread(target=worker, name='worker %i' % (i+1))
    t.start()

# Begin adding work to the queue
for i in range(50):
    q.put(i)

# Give threads time to run
print 'Main thread sleeping'
time.sleep(5)

When run, this will produce the following output:

Running worker
Running worker
Running worker
Running worker
Running worker
Main thread sleeping
Worker <Thread(worker 1, started)> running with argument 0
Worker <Thread(worker 2, started)> running with argument 1
Worker <Thread(worker 3, started)> running with argument 2
Worker <Thread(worker 4, started)> running with argument 3
Worker <Thread(worker 5, started)> running with argument 4
Worker <Thread(worker 1, started)> running with argument 5
...

Consult the module’s documentation for more details; the Queue class provides a featureful interface.

What kinds of global value mutation are thread-safe?

A global interpreter lock (GIL) is used internally to ensure that only one thread runs in the Python VM at a time. In general, Python offers to switch among threads only between bytecode instructions; how frequently it switches can be set via sys.setcheckinterval(). Each bytecode instruction and therefore all the C implementation code reached from each instruction is therefore atomic from the point of view of a Python program.

In theory, this means an exact accounting requires an exact understanding of the PVM bytecode implementation. In practice, it means that operations on shared variables of built-in data types (ints, lists, dicts, etc) that “look atomic” really are.

For example, the following operations are all atomic (L, L1, L2 are lists, D, D1, D2 are dicts, x, y are objects, i, j are ints):

L.append(x)
L1.extend(L2)
x = L[i]
x = L.pop()
L1[i:j] = L2
L.sort()
x = y
x.field = y
D[x] = y
D1.update(D2)
D.keys()

These aren't:

i = i+1
L.append(L[-1])
L[i] = L[j]
D[x] = D[x] + 1

Operations that replace other objects may invoke those other objects’ __del__() method when their reference count reaches zero, and that can affect things. This is especially true for the mass updates to dictionaries and lists. When in doubt, use a mutex!
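
For instance, the non-atomic i = i+1 update above can be made safe with a threading.Lock (a minimal sketch):

```python
import threading

counter = 0
lock = threading.Lock()

def increment(n):
    global counter
    for _ in range(n):
        # counter = counter + 1 is a read-modify-write, so guard it
        with lock:
            counter += 1

threads = [threading.Thread(target=increment, args=(10000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter)  # 40000 with the lock; without it, often less
```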

Can't we get rid of the Global Interpreter Lock?

The global interpreter lock (GIL) is often seen as a hindrance to Python’s deployment on high-end multiprocessor server machines, because a multi-threaded Python program effectively only uses one CPU, due to the insistence that (almost) all Python code can only run while the GIL is held.

Back in the days of Python 1.5, Greg Stein actually implemented a comprehensive patch set (the “free threading” patches) that removed the GIL and replaced it with fine-grained locking. Unfortunately, even on Windows (where locks are very efficient) this ran ordinary Python code about twice as slow as the interpreter using the GIL. On Linux the performance loss was even worse because pthread locks aren’t as efficient.

Since then, the idea of getting rid of the GIL has occasionally come up but nobody has found a way to deal with the expected slowdown, and users who don’t use threads would not be happy if their code ran at half the speed. Greg’s free threading patch set has not been kept up-to-date for later Python versions.

This doesn’t mean that you can’t make good use of Python on multi-CPU machines! You just have to be creative with dividing the work up between multiple processes rather than multiple threads. Judicious use of C extensions will also help; if you use a C extension to perform a time-consuming task, the extension can release the GIL while the thread of execution is in the C code and allow other threads to get some work done.

It has been suggested that the GIL should be a per-interpreter-state lock rather than truly global; interpreters then wouldn’t be able to share objects. Unfortunately, this isn’t likely to happen either. It would be a tremendous amount of work, because many object implementations currently have global state. For example, small integers and short strings are cached; these caches would have to be moved to the interpreter state. Other object types have their own free list; these free lists would have to be moved to the interpreter state. And so on.

And I doubt that it can even be done in finite time, because the same problem exists for 3rd party extensions. It is likely that 3rd party extensions are being written at a faster rate than you can convert them to store all their global state in the interpreter state.

And, finally, once you have multiple interpreters not sharing any state, what have you gained over running each interpreter in a separate process?

Input and Output

How do I delete a file? (And other file questions…)

Use os.remove(filename) or os.unlink(filename); for documentation, see the os module. The two functions are identical; unlink() is simply the name of the Unix system call for this function.

To remove a directory, use os.rmdir(); use os.mkdir() to create one. os.makedirs(path) will create any intermediate directories in path that don’t exist. os.removedirs(path) will remove intermediate directories as long as they’re empty; if you want to delete an entire directory tree and its contents, use shutil.rmtree().
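
For example, a minimal sketch using a temporary directory:

```python
import os
import shutil
import tempfile

base = tempfile.mkdtemp()
nested = os.path.join(base, "a", "b", "c")

os.makedirs(nested)               # creates a/, a/b/ and a/b/c/ as needed
print(os.path.isdir(nested))      # True

shutil.rmtree(base)               # deletes the whole tree and its contents
print(os.path.exists(base))       # False
```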

To rename a file, use os.rename(old_path, new_path).

To truncate a file, open it using f = open(filename, "r+"), and use f.truncate(offset); offset defaults to the current seek position. There’s also os.ftruncate(fd, offset) for files opened with os.open(), where fd is the file descriptor (a small integer).
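
For example, truncating a file down to its first four bytes (a sketch using a temporary file; mkstemp avoids the race condition of mktemp):

```python
import os
import tempfile

fd, path = tempfile.mkstemp()
os.write(fd, b"0123456789")
os.close(fd)

f = open(path, "r+")
f.truncate(4)                      # keep only the first 4 bytes
f.close()

content = open(path).read()
os.remove(path)
print(content)  # 0123
```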

The shutil module also contains a number of functions to work on files including copyfile(), copytree(), and rmtree().

How do I copy a file?

The shutil module contains a copyfile() function. Note that on MacOS 9 it doesn’t copy the resource fork and Finder info.

How do I read (or write) binary data?

To read or write complex binary data formats, it’s best to use the struct module. It allows you to take a string containing binary data (usually numbers) and convert it to Python objects; and vice versa.

For example, the following code reads two 2-byte integers and one 4-byte integer in big-endian format from a file:

import struct

f = open(filename, "rb")  # Open in binary mode for portability
s = f.read(8)
x, y, z = struct.unpack(">hhl", s)

The ‘>’ in the format string forces big-endian data; the letter ‘h’ reads one “short integer” (2 bytes), and ‘l’ reads one “long integer” (4 bytes) from the string.
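
The reverse direction uses struct.pack() with the same format string; packing and unpacking round-trip, as this short sketch shows:

```python
import struct

# Pack two 2-byte shorts and one 4-byte long, big-endian
packed = struct.pack(">hhl", 1, 2, 3)
print(len(packed))  # 8 bytes in total

x, y, z = struct.unpack(">hhl", packed)
print((x, y, z))    # (1, 2, 3)
```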

For data that is more regular (e.g. a homogeneous list of ints or floats), you can also use the array module.

Why doesn't os.read() work on a pipe created with os.popen()?

os.read() is a low-level function which takes a file descriptor, a small integer representing the opened file. os.popen() creates a high-level file object, the same type returned by the built-in open() function. Thus, to read n bytes from a pipe p created with os.popen(), you need to use p.read(n).
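
For example (a sketch that assumes a Unix-like shell with an echo command):

```python
import os

p = os.popen("echo hello")   # p is a file object, not a file descriptor
output = p.read().strip()
p.close()
print(output)  # hello
```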

How do I run a subprocess with pipes connected to both input and output?

Use the popen2 module. For example:

import popen2
fromchild, tochild = popen2.popen2("command")
tochild.write("input\n")
tochild.flush()
output = fromchild.readline()

Warning: in general it is unwise to do this because you can easily cause a deadlock where your process is blocked waiting for output from the child while the child is blocked waiting for input from you. This can be caused by the parent expecting the child to output more text than it does or by data being stuck in stdio buffers due to lack of flushing. The Python parent can of course explicitly flush the data it sends to the child before it reads any output, but if the child is a naive C program it may have been written to never explicitly flush its output, even if it is interactive, since flushing is normally automatic.

Note that a deadlock is also possible if you use popen3() to read stdout and stderr. If one of the two is too large for the internal buffer (increasing the buffer size does not help) and you read() the other one first, there is a deadlock, too.

Note on a bug in popen2: unless your program calls wait() or waitpid(), finished child processes are never removed, and eventually calls to popen2 will fail because of a limit on the number of child processes. Calling os.waitpid() with the os.WNOHANG option can prevent this; a good place to insert such a call would be before calling popen2 again.

In many cases, all you really need is to run some data through a command and get the result back. Unless the amount of data is very large, the easiest way to do this is to write it to a temporary file and run the command with that temporary file as input. The standard module tempfile exports a mktemp() function to generate unique temporary file names.

import tempfile
import os

class Popen3:
    """
    This is a deadlock-safe version of popen that returns
    an object with errorlevel, out (a string) and err (a string).
    (capturestderr may not work under windows.)
    Example: print Popen3('grep spam','\n\nhere spam\n\n').out
    """
    def __init__(self,command,input=None,capturestderr=None):
        outfile=tempfile.mktemp()
        command="( %s ) > %s" % (command,outfile)
        if input:
            infile=tempfile.mktemp()
            open(infile,"w").write(input)
            command=command+" <"+infile
        if capturestderr:
            errfile=tempfile.mktemp()
            command=command+" 2>"+errfile
        self.errorlevel=os.system(command) >> 8
        self.out=open(outfile,"r").read()
        os.remove(outfile)
        if input:
            os.remove(infile)
        if capturestderr:
            self.err=open(errfile,"r").read()
            os.remove(errfile)

Note that many interactive programs (e.g. vi) don’t work well with pipes substituted for standard input and output. You will have to use pseudo ttys (“ptys”) instead of pipes. Or you can use a Python interface to Don Libes’ “expect” library. A Python extension that interfaces to expect is called “expy” and available from http://expectpy.sourceforge.net. A pure Python solution that works like expect is pexpect.

How do I access the serial (RS232) port?

For Win32, POSIX (Linux, BSD, etc.), Jython:

For Unix, see a Usenet post by Mitch Chapman:

Why doesn't closing sys.stdout (stdin, stderr) really close it?

Python file objects are a high-level layer of abstraction on top of C streams, which in turn are a medium-level layer of abstraction on top of (among other things) low-level C file descriptors.

For most file objects you create in Python via the built-in file constructor, f.close() marks the Python file object as being closed from Python’s point of view, and also arranges to close the underlying C stream. This also happens automatically in f’s destructor, when f becomes garbage.

But stdin, stdout and stderr are treated specially by Python, because of the special status also given to them by C. Running sys.stdout.close() marks the Python-level file object as being closed, but does not close the associated C stream.

To close the underlying C stream for one of these three, you should first be sure that’s what you really want to do (e.g., you may confuse extension modules trying to do I/O). If it is, use os.close:

os.close(0)   # close C's stdin stream
os.close(1)   # close C's stdout stream
os.close(2)   # close C's stderr stream

Network/Internet Programming

What WWW tools are there for Python?

See the chapters titled Internet Protocols and Support and Internet Data Handling in the Library Reference Manual. Python has many modules that will help you build server-side and client-side web systems.

A summary of available frameworks is maintained by Paul Boddie at https://wiki.python.org/moin/WebProgramming.

Cameron Laird maintains a useful set of pages about Python web technologies at http://phaseit.net/claird/comp.lang.python/web_python.

How can I mimic CGI form submission (METHOD=POST)?

I would like to retrieve web pages that are the result of POSTing a form. Is there existing code that would let me do this easily?

Yes. Here’s a simple example that uses httplib:

#!/usr/local/bin/python

import httplib, sys, time

# build the query string
qs = "First=Josephine&MI=Q&Last=Public"

# connect and send the server a path
httpobj = httplib.HTTP('www.some-server.out-there', 80)
httpobj.putrequest('POST', '/cgi-bin/some-cgi-script')
# now generate the rest of the HTTP headers...
httpobj.putheader('Accept', '*/*')
httpobj.putheader('Connection', 'Keep-Alive')
httpobj.putheader('Content-type', 'application/x-www-form-urlencoded')
httpobj.putheader('Content-length', '%d' % len(qs))
httpobj.endheaders()
httpobj.send(qs)
# find out what the server said in response...
reply, msg, hdrs = httpobj.getreply()
if reply != 200:
    sys.stdout.write(httpobj.getfile().read())

Note that in general for percent-encoded POST operations, query strings must be quoted using urllib.urlencode(). For example, to send name=Guy Steele, Jr.:

>>> import urllib
>>> urllib.urlencode({'name': 'Guy Steele, Jr.'})
'name=Guy+Steele%2C+Jr.'
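
The query string from the POST example above could thus be built safely as follows (a sketch; the import fallback covers the Python 3 module layout, where the function lives in urllib.parse):

```python
try:
    from urllib import urlencode          # Python 2 location
except ImportError:
    from urllib.parse import urlencode    # Python 3 location

# A sequence of pairs keeps the field order predictable
qs = urlencode([('First', 'Josephine'), ('MI', 'Q'), ('Last', 'Public')])
print(qs)  # First=Josephine&MI=Q&Last=Public
```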

What module should I use to help with generating HTML?

You can find a collection of useful links on the Web Programming wiki page (https://wiki.python.org/moin/WebProgramming).

How do I send mail from a Python script?

Use the standard library module smtplib.

Here's a very simple interactive mail sender that uses it. This method will work on any host that supports the SMTP protocol.

import sys, smtplib

fromaddr = raw_input("From: ")
toaddrs  = raw_input("To: ").split(',')
print "Enter message, end with ^D:"
msg = ''
while True:
    line = sys.stdin.readline()
    if not line:
        break
    msg += line

# The actual mail send
server = smtplib.SMTP('localhost')
server.sendmail(fromaddr, toaddrs, msg)
server.quit()

A Unix-only alternative uses sendmail. The location of the sendmail program varies between systems; sometimes it is /usr/lib/sendmail, sometimes /usr/sbin/sendmail. The sendmail manual page will help you out. Here’s some sample code:

import os

SENDMAIL = "/usr/sbin/sendmail"  # sendmail location
p = os.popen("%s -t -i" % SENDMAIL, "w")
p.write("To: receiver@example.com\n")
p.write("Subject: test\n")
p.write("\n") # blank line separating headers from body
p.write("Some text\n")
p.write("some more text\n")
sts = p.close()
if sts != 0:
    print "Sendmail exit status", sts

How do I avoid blocking in the connect() method of a socket?

The select module is commonly used to help with asynchronous I/O on sockets.

To prevent the TCP connect from blocking, you can set the socket to non-blocking mode. Then when you do the connect(), you will either connect immediately (unlikely) or get an exception that contains the error number as .errno. errno.EINPROGRESS indicates that the connection is in progress, but hasn’t finished yet. Different OSes will return different values, so you’re going to have to check what’s returned on your system.

You can use the connect_ex() method to avoid creating an exception. It will just return the errno value. To poll, you can call connect_ex() again later – 0 or errno.EISCONN indicate that you’re connected – or you can pass this socket to select to check if it’s writable.
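
A polling sketch (the host and port are placeholders, and the exact error numbers returned vary by OS):

```python
import errno
import socket

s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s.setblocking(False)

# connect_ex() returns an error number instead of raising an exception;
# port 9 (discard) on loopback usually has nothing listening.
err = s.connect_ex(("127.0.0.1", 9))

in_progress = err in (errno.EINPROGRESS, errno.EWOULDBLOCK, errno.EALREADY)
connected = err in (0, errno.EISCONN)
refused = err == errno.ECONNREFUSED
print(in_progress or connected or refused)
s.close()
```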

Databases

Are there any interfaces to database packages in Python?

Yes.

Python 2.3 includes the bsddb package which provides an interface to the BerkeleyDB library. Interfaces to disk-based hashes such as DBM and GDBM are also included with standard Python.

Support for most relational databases is available. See the database programming wiki page at https://wiki.python.org/moin/DatabaseProgramming for details.

How do you implement persistent objects in Python?

The pickle library module solves this in a very general way (though you still can’t store things like open files, sockets or windows), and the shelve library module uses pickle and (g)dbm to create persistent mappings containing arbitrary Python objects. For better performance, you can use the cPickle module.
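
A minimal pickle round-trip, including a class instance (a sketch):

```python
import pickle

class Point(object):
    def __init__(self, x, y):
        self.x = x
        self.y = y

# dumps() serializes to a byte string; loads() reconstructs the object
p = pickle.loads(pickle.dumps(Point(1, 2)))
print(p.x, p.y)
```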

A more awkward way of doing things is to use pickle’s little sister, marshal. The marshal module provides very fast ways to store noncircular basic Python types to files and strings, and back again. Although marshal does not do fancy things like store instances or handle shared references properly, it does run extremely fast. For example, loading a half megabyte of data may take less than a third of a second. This often beats doing something more complex and general such as using gdbm with pickle/shelve.
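
A marshal round-trip looks much the same for the basic types it supports (a sketch):

```python
import marshal

data = {"spam": [1, 2, 3], "eggs": (4.0, "five")}

blob = marshal.dumps(data)    # fast, version-specific binary format
restored = marshal.loads(blob)
print(restored == data)  # True
```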

Why is cPickle so slow?

By default pickle uses a relatively old and slow format for backward compatibility. You can however specify other protocol versions that are faster:

import cPickle

largeString = 'z' * (100 * 1024)
myPickle = cPickle.dumps(largeString, protocol=1)

If my program crashes with a bsddb (or anydbm) database open, it gets corrupted. How come?

Databases opened for write access with the bsddb module (and often by the anydbm module, since it will preferentially use bsddb) must explicitly be closed using the .close() method of the database. The underlying library caches database contents which need to be converted to on-disk form and written.

If you have initialized a new bsddb database but not written anything to it before the program crashes, you will often wind up with a zero-length file and encounter an exception the next time the file is opened.

I tried to open Berkeley DB file, but bsddb produces bsddb.error: (22, ‘Invalid argument’). Help! How can I restore my data?

Don’t panic! Your data is probably intact. The most frequent cause for the error is that you tried to open an earlier Berkeley DB file with a later version of the Berkeley DB library.

Many Linux systems now have all three versions of Berkeley DB available. If you are migrating from version 1 to a newer version use db_dump185 to dump a plain text version of the database. If you are migrating from version 2 to version 3 use db2_dump to create a plain text version of the database. In either case, use db_load to create a new native database for the latest version installed on your computer. If you have version 3 of Berkeley DB installed, you should be able to use db2_load to create a native version 2 database.

You should move away from Berkeley DB version 1 files because the hash file code contains known bugs that can corrupt your data.

Mathematics and Numerics

How do I generate random numbers in Python?

The standard module random implements a random number generator. Usage is simple:

import random
random.random()

This returns a random floating point number in the range [0, 1).

There are also many other specialized generators in this module, such as:

  • randrange(a, b) chooses an integer in the range [a, b).

  • uniform(a, b) chooses a floating point number in the range [a, b).

  • normalvariate(mean, sdev) samples the normal (Gaussian) distribution.

Some higher-level functions operate on sequences directly, such as:

  • choice(S) chooses a random element from a given sequence.

  • shuffle(L) shuffles a list in-place, i.e. permutes it randomly.

There's also a Random class you can instantiate to create independent multiple random number generators.
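
For example, two Random instances seeded identically produce the same sequence, independently of each other (a quick sketch):

```python
import random

rng1 = random.Random(42)
rng2 = random.Random(42)

# Independent generators: the same seed gives the same sequence
print(rng1.random() == rng2.random())  # True

# randrange(a, b) stays within [a, b)
print(1 <= rng1.randrange(1, 10) < 10)  # True
```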