Library and Extension FAQ

General Library Questions

How do I find a module or application to perform task X?

Check the Library Reference to see if there's a relevant standard library module. (Eventually you'll learn what's in the standard library and will be able to skip this step.)

For third-party packages, search the Python Package Index or try Google or another web search engine. Searching for "Python" plus a keyword or two for your topic of interest will usually find something helpful.

Where is the math.py (socket.py, regex.py, etc.) source file?

If you can't find a source file for a module it may be a built-in or dynamically loaded module implemented in C, C++ or other compiled language. In this case you may not have the source file or it may be something like mathmodule.c, somewhere in a C source directory (not on the Python Path).

There are (at least) three kinds of modules in Python:

  1. modules written in Python (.py);

  2. modules written in C and dynamically loaded (.dll, .pyd, .so, .sl, etc.);

  3. modules written in C and linked with the interpreter; to get a list of these, type:

    import sys
    print(sys.builtin_module_names)
    

How do I make a Python script executable on Unix?

You need to do two things: the script file's mode must be executable and the first line must begin with #! followed by the path of the Python interpreter.

The first is done by executing chmod +x scriptfile or perhaps chmod 755 scriptfile.

The second can be done in a number of ways. The most straightforward way is to write

#!/usr/local/bin/python

as the very first line of your file, using the pathname for where the Python interpreter is installed on your platform.

If you would like the script to be independent of where the Python interpreter lives, you can use the env program. Almost all Unix variants support the following, assuming the Python interpreter is in a directory on the user's PATH:

#!/usr/bin/env python

Don't do this for CGI scripts. The PATH variable for CGI scripts is often very minimal, so you need to use the actual absolute pathname of the interpreter.

Occasionally, a user's environment is so full that the /usr/bin/env program fails; or there's no env program at all. In that case, you can try the following hack (due to Alex Rezinsky):

#! /bin/sh
""":"
exec python $0 ${1+"$@"}
"""

The minor disadvantage is that this defines the script's __doc__ string. However, you can fix that by adding

__doc__ = """...Whatever..."""

Is there a curses/termcap package for Python?

For Unix variants: The standard Python source distribution comes with a curses module in the Modules subdirectory, though it's not compiled by default. (Note that this is not available in the Windows distribution -- there is no curses module for Windows.)

The curses module supports basic curses features as well as many additional functions from ncurses and SYSV curses such as colour, alternative character set support, pads, and mouse support. This means the module isn't compatible with operating systems that only have BSD curses, but there don't seem to be any currently maintained OSes that fall into this category.

Is there an equivalent to C's onexit() in Python?

The atexit module provides a register function that is similar to C's onexit().
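
For example, a cleanup function can be registered like this (a minimal sketch; the cleanup name is illustrative):

import atexit

def cleanup():
    print('Shutting down')

atexit.register(cleanup)   # cleanup() runs when the interpreter exits normally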

Why don't my signal handlers work?

The most common problem is that the signal handler is declared with the wrong argument list. It is called as

handler(signum, frame)

so it should be declared with two parameters:

def handler(signum, frame):
    ...
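
A complete handler also has to be installed with signal.signal(); here is a minimal Unix-only sketch (the two-second alarm is only an illustration):

import signal

def handler(signum, frame):
    print('Received signal', signum)

signal.signal(signal.SIGALRM, handler)   # install the handler for SIGALRM
signal.alarm(2)                          # ask the OS to deliver SIGALRM in 2 seconds
signal.pause()                           # sleep until a signal arrives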

Common tasks

How do I test a Python program or component?

Python comes with two testing frameworks. The doctest module finds examples in the docstrings for a module and runs them, comparing the output with the expected output given in the docstring.

The unittest module is a fancier testing framework modelled on Java and Smalltalk testing frameworks.
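
As a minimal sketch (the average function is illustrative, not part of any library), the same behaviour can be checked both ways:

import unittest

def average(values):
    """Return the arithmetic mean of a sequence of numbers.

    >>> average([20, 30, 70])
    40.0
    """
    return sum(values) / len(values)

class AverageTest(unittest.TestCase):
    def test_average(self):
        self.assertEqual(average([20, 30, 70]), 40.0)

if __name__ == "__main__":
    import doctest
    doctest.testmod()    # checks the example embedded in the docstring
    unittest.main()      # runs the unittest test case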

To make testing easier, you should use good modular design in your program. Your program should have almost all functionality encapsulated in either functions or class methods -- and this sometimes has the surprising and delightful effect of making the program run faster (because local variable accesses are faster than global accesses). Furthermore the program should avoid depending on mutating global variables, since this makes testing much more difficult to do.

The "global main logic" of your program may be as simple as

if __name__ == "__main__":
    main_logic()

at the bottom of the main module of your program.

Once your program is organized as a tractable collection of function and class behaviours, you should write test functions that exercise the behaviours. A test suite that automates a sequence of tests can be associated with each module. This sounds like a lot of work, but since Python is so terse and flexible it's surprisingly easy. You can make coding much more pleasant and fun by writing your test functions in parallel with the "production code", since this makes it easy to find bugs and even design flaws earlier.

"Support modules" that are not intended to be the main module of a program may include a self-test of the module.

if __name__ == "__main__":
    self_test()

Even programs that interact with complex external interfaces may be tested when the external interfaces are unavailable by using "fake" interfaces implemented in Python.

How do I create documentation from doc strings?

The pydoc module can create HTML from the doc strings in your Python source code. An alternative for creating API documentation purely from docstrings is epydoc. Sphinx can also include docstring content.
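
For example, assuming a module named mymodule is importable, pydoc can write mymodule.html to the current directory (a minimal sketch):

import pydoc
import mymodule            # hypothetical module; substitute your own

pydoc.writedoc(mymodule)   # writes mymodule.html in the current directory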

How do I get a single keypress at a time?

For Unix variants there are several solutions. It's straightforward to do this using curses, but curses is a fairly large module to learn.
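
A lighter-weight, Unix-only alternative uses the termios and tty standard modules to put the terminal into raw mode for a single read (a minimal sketch; the getch name is illustrative):

import sys
import termios
import tty

def getch():
    """Read one character from stdin without waiting for Enter."""
    fd = sys.stdin.fileno()
    old_settings = termios.tcgetattr(fd)
    try:
        tty.setraw(fd)                  # raw mode: keypresses are delivered immediately
        ch = sys.stdin.read(1)
    finally:
        termios.tcsetattr(fd, termios.TCSADRAIN, old_settings)  # restore terminal settings
    return ch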

Threads

How do I program using threads?

Be sure to use the threading module and not the _thread module. The threading module builds convenient abstractions on top of the low-level primitives provided by the _thread module.

None of my threads seem to run: why?

As soon as the main thread exits, all threads are killed. Your main thread is running too quickly, giving the threads no time to do any work.

A simple fix is to add a sleep to the end of the program that's long enough for all the threads to finish:

import threading, time

def thread_task(name, n):
    for i in range(n):
        print(name, i)

for i in range(10):
    T = threading.Thread(target=thread_task, args=(str(i), i))
    T.start()

time.sleep(10)  # <---------------------------!

But now (on many platforms) the threads don't run in parallel, but appear to run sequentially, one at a time! The reason is that the OS thread scheduler doesn't start a new thread until the previous thread is blocked.

A simple fix is to add a tiny sleep to the start of the thread function:

def thread_task(name, n):
    time.sleep(0.001)  # <--------------------!
    for i in range(n):
        print(name, i)

for i in range(10):
    T = threading.Thread(target=thread_task, args=(str(i), i))
    T.start()

time.sleep(10)

Instead of trying to guess a good delay value for time.sleep(), it's better to use some kind of semaphore mechanism. One idea is to use the queue module to create a queue object, let each thread append a token to the queue when it finishes, and let the main thread read as many tokens from the queue as there are threads.
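
A minimal sketch of that idea (the done_queue name is illustrative):

import threading, queue

done_queue = queue.Queue()

def thread_task(name, n):
    for i in range(n):
        print(name, i)
    done_queue.put(None)                # signal that this thread has finished

threads = [threading.Thread(target=thread_task, args=(str(i), i)) for i in range(10)]
for t in threads:
    t.start()

for _ in threads:
    done_queue.get()                    # wait for one token per thread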

How do I parcel out work among a bunch of worker threads?

The easiest way is to use the concurrent.futures module, especially the ThreadPoolExecutor class.
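
For example (a minimal sketch; the work function is illustrative):

from concurrent.futures import ThreadPoolExecutor

def work(n):
    return n * n

with ThreadPoolExecutor(max_workers=4) as executor:
    results = list(executor.map(work, range(10)))   # dispatch the calls across the pool
print(results)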

Or, if you want fine control over the dispatching algorithm, you can write your own logic manually. Use the queue module to create a queue containing a list of jobs. The Queue class maintains a list of objects and has a .put(obj) method that adds items to the queue and a .get() method to return them. The class will take care of the locking necessary to ensure that each job is handed out exactly once.

Here's a trivial example:

import threading, queue, time

# The worker thread gets jobs off the queue.  When the queue is empty, it
# assumes there will be no more work and exits.
# (Realistically workers will run until terminated.)
def worker():
    print('Running worker')
    time.sleep(0.1)
    while True:
        try:
            arg = q.get(block=False)
        except queue.Empty:
            print('Worker', threading.current_thread(), end=' ')
            print('queue empty')
            break
        else:
            print('Worker', threading.current_thread(), end=' ')
            print('running with argument', arg)
            time.sleep(0.5)

# Create queue
q = queue.Queue()

# Start a pool of 5 workers
for i in range(5):
    t = threading.Thread(target=worker, name='worker %i' % (i+1))
    t.start()

# Begin adding work to the queue
for i in range(50):
    q.put(i)

# Give threads time to run
print('Main thread sleeping')
time.sleep(5)

When run, this will produce the following output:

Running worker
Running worker
Running worker
Running worker
Running worker
Main thread sleeping
Worker <Thread(worker 1, started 130283832797456)> running with argument 0
Worker <Thread(worker 2, started 130283824404752)> running with argument 1
Worker <Thread(worker 3, started 130283816012048)> running with argument 2
Worker <Thread(worker 4, started 130283807619344)> running with argument 3
Worker <Thread(worker 5, started 130283799226640)> running with argument 4
Worker <Thread(worker 1, started 130283832797456)> running with argument 5
...

Consult the module's documentation for more details; the Queue class provides a featureful interface.

What kinds of global value mutation are thread-safe?

A global interpreter lock (GIL) is used internally to ensure that only one thread runs in the Python VM at a time. In general, Python offers to switch among threads only between bytecode instructions; how frequently it switches can be set via sys.setswitchinterval(). Each bytecode instruction, and therefore all the C implementation code reached from each instruction, is atomic from the point of view of a Python program.

In theory, this means an exact accounting requires an exact understanding of the PVM bytecode implementation. In practice, it means that operations on shared variables of built-in data types (ints, lists, dicts, etc) that "look atomic" really are.

For example, the following operations are all atomic (L, L1, L2 are lists, D, D1, D2 are dicts, x, y are objects, i, j are ints):

L.append(x)
L1.extend(L2)
x = L[i]
x = L.pop()
L1[i:j] = L2
L.sort()
x = y
x.field = y
D[x] = y
D1.update(D2)
D.keys()

These aren't:

i = i+1
L.append(L[-1])
L[i] = L[j]
D[x] = D[x] + 1

Operations that replace other objects may invoke those other objects' __del__() method when their reference count reaches zero, and that can affect things. This is especially true for the mass updates to dictionaries and lists. When in doubt, use a mutex!
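
For instance, the non-atomic increment shown above can be protected with a threading.Lock (a minimal sketch):

import threading

counter = 0
counter_lock = threading.Lock()

def increment():
    global counter
    with counter_lock:          # only one thread at a time may run this block
        counter = counter + 1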

Can't we get rid of the Global Interpreter Lock?

The global interpreter lock (GIL) is often seen as a hindrance to Python's deployment on high-end multiprocessor server machines, because a multi-threaded Python program effectively only uses one CPU, due to the insistence that (almost) all Python code can only run while the GIL is held.

With the approval of PEP 703 work is now underway to remove the GIL from the CPython implementation of Python. Initially it will be implemented as an optional compiler flag when building the interpreter, and so separate builds will be available with and without the GIL. Long-term, the hope is to settle on a single build, once the performance implications of removing the GIL are fully understood. Python 3.13 is likely to be the first release containing this work, although it may not be completely functional in this release.

The current work to remove the GIL is based on a fork of Python 3.9 with the GIL removed by Sam Gross. Prior to that, in the days of Python 1.5, Greg Stein actually implemented a comprehensive patch set (the "free threading" patches) that removed the GIL and replaced it with fine-grained locking. Adam Olsen did a similar experiment in his python-safethread project. Unfortunately, both of these earlier experiments exhibited a sharp drop in single-thread performance (at least 30% slower), due to the amount of fine-grained locking necessary to compensate for the removal of the GIL. The Python 3.9 fork is the first attempt at removing the GIL with an acceptable performance impact.

The presence of the GIL in current Python releases doesn't mean that you can't make good use of Python on multi-CPU machines! You just have to be creative with dividing the work up between multiple processes rather than multiple threads. The ProcessPoolExecutor class in the concurrent.futures module provides an easy way of doing so; the multiprocessing module provides a lower-level API in case you want more control over dispatching of tasks.
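
For example (a minimal sketch; the cpu_bound function is illustrative):

from concurrent.futures import ProcessPoolExecutor

def cpu_bound(n):
    return sum(i * i for i in range(n))

if __name__ == "__main__":
    with ProcessPoolExecutor() as executor:
        # each call runs in its own process, so all CPUs can be used
        results = list(executor.map(cpu_bound, [10**6] * 4))
    print(results)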

Judicious use of C extensions will also help; if you use a C extension to perform a time-consuming task, the extension can release the GIL while the thread of execution is in the C code and allow other threads to get some work done. Some standard library modules such as zlib and hashlib already do this.

An alternative approach to reducing the impact of the GIL is to make the GIL a per-interpreter-state lock rather than truly global. This was first implemented in Python 3.12 and is available in the C API. A Python interface to it is expected in Python 3.13. The main limitation to it at the moment is likely to be 3rd party extension modules, since these must be written with multiple interpreters in mind in order to be usable, so many older extension modules will not be usable.

Input and Output

How do I delete a file? (And other file questions...)

Use os.remove(filename) or os.unlink(filename); for documentation, see the os module. The two functions are identical; unlink() is simply the name of the Unix system call for this function.

To remove a directory, use os.rmdir(); use os.mkdir() to create one. os.makedirs(path) will create any intermediate directories in path that don't exist. os.removedirs(path) will remove intermediate directories as long as they're empty; if you want to delete an entire directory tree and its contents, use shutil.rmtree().
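
For example (a minimal sketch; the directory names are illustrative):

import os
import shutil

os.makedirs("a/b/c")        # creates a, a/b and a/b/c as needed
os.removedirs("a/b/c")      # removes c, then b, then a, because they are all empty
shutil.rmtree("data")       # removes the directory "data" and everything inside it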

To rename a file, use os.rename(old_path, new_path).

To truncate a file, open it using f = open(filename, "rb+"), and use f.truncate(offset); offset defaults to the current seek position. There's also os.ftruncate(fd, offset) for files opened with os.open(), where fd is the file descriptor (a small integer).

The shutil module also contains a number of functions to work on files including copyfile(), copytree(), and rmtree().

How do I copy a file?

The shutil module contains a copyfile() function. Note that on Windows NTFS volumes, it does not copy alternate data streams nor resource forks on macOS HFS+ volumes, though both are now rarely used. It also doesn't copy file permissions and metadata, though using shutil.copy2() instead will preserve most (though not all) of it.

How do I read (or write) binary data?

To read or write complex binary data formats, it's best to use the struct module. It allows you to take a string containing binary data (usually numbers) and convert it to Python objects; and vice versa.

For example, the following code reads two 2-byte integers and one 4-byte integer in big-endian format from a file:

import struct

with open(filename, "rb") as f:
    s = f.read(8)
    x, y, z = struct.unpack(">hhl", s)

The '>' in the format string forces big-endian data; the letter 'h' reads one "short integer" (2 bytes), and 'l' reads one "long integer" (4 bytes) from the string.

For data that is more regular (e.g. a homogeneous list of ints or floats), you can also use the array module.
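
For example, a file of raw machine-format integers can be read back with the array module like this (a minimal sketch; the filename is illustrative):

from array import array

values = array("i")                  # an array of C ints
with open("numbers.bin", "rb") as f:
    values.frombytes(f.read())       # reinterpret the raw bytes as ints
print(values[:5])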

Note

To read and write binary data, it is mandatory to open the file in binary mode (here, passing "rb" to open()). If you use "r" instead (the default), the file will be open in text mode and f.read() will return str objects rather than bytes objects.

I can't seem to use os.read() on a pipe created with os.popen(); why?

os.read() is a low-level function which takes a file descriptor, a small integer representing the opened file. os.popen() creates a high-level file object, the same type returned by the built-in open() function. Thus, to read n bytes from a pipe p created with os.popen(), you need to use p.read(n).

How do I access the serial (RS232) port?

For Win32, OSX, Linux, BSD, Jython, IronPython: the third-party pyserial package.

For Unix, see a Usenet post by Mitch Chapman:

Why doesn't closing sys.stdout (stdin, stderr) really close it?

Python file objects are a high-level layer of abstraction on low-level C file descriptors.

For most file objects you create in Python via the built-in open() function, f.close() marks the Python file object as being closed from Python's point of view, and also arranges to close the underlying C file descriptor. This also happens automatically in f's destructor, when f becomes garbage.

But stdin, stdout and stderr are treated specially by Python, because of the special status also given to them by C. Running sys.stdout.close() marks the Python-level file object as being closed, but does not close the associated C file descriptor.

To close the underlying C file descriptor for one of these three, you should first be sure that's what you really want to do (e.g., you may confuse extension modules trying to do I/O). If it is, use os.close():

import os, sys

os.close(sys.stdin.fileno())
os.close(sys.stdout.fileno())
os.close(sys.stderr.fileno())

Or you can use the numeric constants 0, 1 and 2, respectively.

Network/Internet Programming

What WWW tools are there for Python?

See the chapters titled Internet Protocols and Support and Internet Data Handling in the Library Reference Manual. Python has many modules that will help you build server-side and client-side web systems.

A summary of available frameworks is maintained by Paul Boddie at https://wiki.python.org/moin/WebProgramming.

What module should I use to help with generating HTML?

You can find a collection of useful links on the Web Programming wiki page.

How do I send mail from a Python script?

Use the standard library module smtplib.

Here's a very simple interactive mail sender that uses it. This method will work on any host that supports an SMTP listener.

import sys, smtplib

fromaddr = input("From: ")
toaddrs  = input("To: ").split(',')
print("Enter message, end with ^D:")
msg = ''
while True:
    line = sys.stdin.readline()
    if not line:
        break
    msg += line

# The actual mail send
server = smtplib.SMTP('localhost')
server.sendmail(fromaddr, toaddrs, msg)
server.quit()

A Unix-only alternative uses sendmail. The location of the sendmail program varies between systems; sometimes it is /usr/lib/sendmail, sometimes /usr/sbin/sendmail. The sendmail manual page will help you out. Here's some sample code:

import os

SENDMAIL = "/usr/sbin/sendmail"  # sendmail location
p = os.popen("%s -t -i" % SENDMAIL, "w")
p.write("To: receiver@example.com\n")
p.write("Subject: test\n")
p.write("\n")  # blank line separating headers from body
p.write("Some text\n")
p.write("some more text\n")
sts = p.close()
if sts != 0:
    print("Sendmail exit status", sts)

How do I avoid blocking in the connect() method of a socket?

The select module is commonly used to help with asynchronous I/O on sockets.

To prevent the TCP connect from blocking, you can set the socket to non-blocking mode. Then when you do the connect(), you will either connect immediately (unlikely) or get an exception that contains the error number as .errno. errno.EINPROGRESS indicates that the connection is in progress, but hasn't finished yet. Different OSes will return different values, so you're going to have to check what's returned on your system.

You can use the connect_ex() method to avoid creating an exception. It will just return the errno value. To poll, you can call connect_ex() again later -- 0 or errno.EISCONN indicate that you're connected -- or you can pass this socket to select.select() to check if it's writable.
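
A minimal sketch of that polling approach (the host, port and timeout are illustrative):

import errno
import select
import socket

s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s.setblocking(False)                       # switch the socket to non-blocking mode

err = s.connect_ex(("www.python.org", 80))
if err not in (0, errno.EINPROGRESS, errno.EWOULDBLOCK):
    raise OSError(err, "connect failed")

# Wait (with a timeout) until the socket becomes writable, i.e. connected.
_, writable, _ = select.select([], [s], [], 5.0)
if writable and s.connect_ex(("www.python.org", 80)) in (0, errno.EISCONN):
    print("connected")
s.close()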

Note

The asyncio module provides a general purpose single-threaded and concurrent asynchronous library, which can be used for writing non-blocking network code. The third-party Twisted library is a popular and feature-rich alternative.

Databases

Are there any interfaces to database packages in Python?

Yes.

Interfaces to disk-based hashes such as DBM and GDBM are also included with standard Python. There is also the sqlite3 module, which provides a lightweight disk-based relational database.
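
For example, the sqlite3 module can be used like this (a minimal sketch; the file and table names are illustrative):

import sqlite3

con = sqlite3.connect("example.db")
con.execute("CREATE TABLE IF NOT EXISTS person (name TEXT, age INTEGER)")
con.execute("INSERT INTO person VALUES (?, ?)", ("Alice", 30))
con.commit()
for row in con.execute("SELECT name, age FROM person"):
    print(row)
con.close()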

Support for most relational databases is available. See the DatabaseProgramming wiki page for details.

How do you implement persistent objects in Python?

The pickle library module solves this in a very general way (though you still can't store things like open files, sockets or windows), and the shelve library module uses pickle and (g)dbm to create persistent mappings containing arbitrary Python objects.
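
For example (a minimal sketch; the file name and keys are illustrative):

import shelve

with shelve.open("appdata") as db:     # opens (or creates) a persistent mapping on disk
    db["settings"] = {"volume": 7, "users": ["alice", "bob"]}

with shelve.open("appdata") as db:
    print(db["settings"])              # comes back as an ordinary Python dictionary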

Mathematics and Numerics

How do I generate random numbers in Python?

The standard module random implements a random number generator. Usage is simple:

import random
random.random()

This returns a random floating-point number in the range [0, 1).

There are also many other specialized generators in this module, for example:

  • randrange(a, b) chooses an integer in the range [a, b).

  • uniform(a, b) chooses a floating-point number in the range [a, b).

  • normalvariate(mean, sdev) samples the normal (Gaussian) distribution.

Some higher-level functions operate on sequences directly, for example:

  • choice(S) chooses a random element from a given sequence.

  • shuffle(L) shuffles a list in-place, i.e. permutes it randomly.

There's also a Random class you can instantiate to create independent multiple random number generators.
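
For example (a minimal sketch; the seed value is illustrative):

import random

gen1 = random.Random(42)     # independent generator with its own internal state
gen2 = random.Random(42)
print(gen1.random(), gen2.random())              # identical, since both use seed 42
print(random.choice(["red", "green", "blue"]))   # pick a random element
print(random.uniform(1.0, 2.0))                  # float in the range [1.0, 2.0)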