16.6. multiprocessing
— Process-based “threading” interface¶
New in version 2.6.
16.6.1. Introduction¶
multiprocessing is a package that supports spawning processes using an API similar to the threading module. The multiprocessing package offers both local and remote concurrency, effectively side-stepping the Global Interpreter Lock by using subprocesses instead of threads. Due to this, the multiprocessing module allows the programmer to fully leverage multiple processors on a given machine. It runs on both Unix and Windows.
The multiprocessing
module also introduces APIs which do not have
analogs in the threading
module. A prime example of this is the
Pool
object which offers a convenient means of parallelizing the
execution of a function across multiple input values, distributing the
input data across processes (data parallelism). The following example
demonstrates the common practice of defining such functions in a module so
that child processes can successfully import that module. This basic example
of data parallelism using Pool,
from multiprocessing import Pool
def f(x):
return x*x
if __name__ == '__main__':
p = Pool(5)
print(p.map(f, [1, 2, 3]))
will print to standard output
[1, 4, 9]
16.6.1.1. The Process class¶
In multiprocessing, processes are spawned by creating a Process object and then calling its start() method. Process follows the API of threading.Thread. A trivial example of a multi-process program is
from multiprocessing import Process
def f(name):
print 'hello', name
if __name__ == '__main__':
p = Process(target=f, args=('bob',))
p.start()
p.join()
To show the individual process IDs involved, here is an expanded example:
from multiprocessing import Process
import os
def info(title):
print title
print 'module name:', __name__
if hasattr(os, 'getppid'): # only available on Unix
print 'parent process:', os.getppid()
print 'process id:', os.getpid()
def f(name):
info('function f')
print 'hello', name
if __name__ == '__main__':
info('main line')
p = Process(target=f, args=('bob',))
p.start()
p.join()
For an explanation of why (on Windows) the if __name__ == '__main__' part is necessary, see Programming guidelines.
16.6.1.2. Exchanging objects between processes¶
multiprocessing supports two types of communication channel between processes:
Queues
The Queue class is a near clone of Queue.Queue. For example:
from multiprocessing import Process, Queue

def f(q):
    q.put([42, None, 'hello'])

if __name__ == '__main__':
    q = Queue()
    p = Process(target=f, args=(q,))
    p.start()
    print q.get()    # prints "[42, None, 'hello']"
    p.join()
Queues are thread and process safe.
Pipes
The Pipe() function returns a pair of connection objects connected by a pipe which by default is duplex (two-way). For example:
from multiprocessing import Process, Pipe

def f(conn):
    conn.send([42, None, 'hello'])
    conn.close()

if __name__ == '__main__':
    parent_conn, child_conn = Pipe()
    p = Process(target=f, args=(child_conn,))
    p.start()
    print parent_conn.recv()   # prints "[42, None, 'hello']"
    p.join()
The two connection objects returned by Pipe() represent the two ends of the pipe. Each connection object has send() and recv() methods (among others). Note that data in a pipe may become corrupted if two processes (or threads) try to read from or write to the same end of the pipe at the same time. Of course there is no risk of corruption from processes using different ends of the pipe at the same time.
16.6.1.3. Synchronization between processes¶
multiprocessing contains equivalents of all the synchronization primitives from threading. For instance one can use a lock to ensure that only one process prints to standard output at a time:
from multiprocessing import Process, Lock
def f(l, i):
l.acquire()
print 'hello world', i
l.release()
if __name__ == '__main__':
lock = Lock()
for num in range(10):
Process(target=f, args=(lock, num)).start()
Without using the lock, output from the different processes is liable to get all mixed up.
16.6.1.4. Sharing state between processes¶
As mentioned above, when doing concurrent programming it is usually best to avoid using shared state as far as possible. This is particularly true when using multiple processes.
However, if you really do need to use some shared data then multiprocessing provides two ways of doing so.
Shared memory
Data can be stored in a shared memory map using Value or Array. For example, the following code:
from multiprocessing import Process, Value, Array

def f(n, a):
    n.value = 3.1415927
    for i in range(len(a)):
        a[i] = -a[i]

if __name__ == '__main__':
    num = Value('d', 0.0)
    arr = Array('i', range(10))
    p = Process(target=f, args=(num, arr))
    p.start()
    p.join()
    print num.value
    print arr[:]
will print
3.1415927
[0, -1, -2, -3, -4, -5, -6, -7, -8, -9]
The 'd' and 'i' arguments used when creating num and arr are typecodes of the kind used by the array module: 'd' indicates a double precision float and 'i' indicates a signed integer. These shared objects will be process and thread safe.
For more flexibility in using shared memory one can use the multiprocessing.sharedctypes module which supports the creation of arbitrary ctypes objects allocated from shared memory.
Server process
A manager object returned by Manager() controls a server process which holds Python objects and allows other processes to manipulate them using proxies. A manager returned by Manager() will support types list, dict, Namespace, Lock, RLock, Semaphore, BoundedSemaphore, Condition, Event, Queue, Value and Array. For example,
from multiprocessing import Process, Manager

def f(d, l):
    d[1] = '1'
    d['2'] = 2
    d[0.25] = None
    l.reverse()

if __name__ == '__main__':
    manager = Manager()
    d = manager.dict()
    l = manager.list(range(10))
    p = Process(target=f, args=(d, l))
    p.start()
    p.join()
    print d
    print l
will print
{0.25: None, 1: '1', '2': 2}
[9, 8, 7, 6, 5, 4, 3, 2, 1, 0]
Server process managers are more flexible than using shared memory objects because they can be made to support arbitrary object types. Also, a single manager can be shared by processes on different computers over a network. They are, however, slower than using shared memory.
16.6.1.5. Using a pool of workers¶
The Pool class represents a pool of worker processes. It has methods which allow tasks to be offloaded to the worker processes in a few different ways.
For example:
from multiprocessing import Pool, TimeoutError
import time
import os
def f(x):
return x*x
if __name__ == '__main__':
pool = Pool(processes=4) # start 4 worker processes
# print "[0, 1, 4,..., 81]"
print pool.map(f, range(10))
# print same numbers in arbitrary order
for i in pool.imap_unordered(f, range(10)):
print i
# evaluate "f(20)" asynchronously
res = pool.apply_async(f, (20,)) # runs in *only* one process
print res.get(timeout=1) # prints "400"
# evaluate "os.getpid()" asynchronously
res = pool.apply_async(os.getpid, ()) # runs in *only* one process
print res.get(timeout=1) # prints the PID of that process
# launching multiple evaluations asynchronously *may* use more processes
multiple_results = [pool.apply_async(os.getpid, ()) for i in range(4)]
print [res.get(timeout=1) for res in multiple_results]
# make a single worker sleep for 10 secs
res = pool.apply_async(time.sleep, (10,))
try:
print res.get(timeout=1)
except TimeoutError:
print "We lacked patience and got a multiprocessing.TimeoutError"
Note that the methods of a pool should only ever be used by the process which created it.
Note
Functionality within this package requires that the __main__ module be importable by the children. This is covered in Programming guidelines, however it is worth pointing out here. This means that some examples, such as the Pool examples, will not work in the interactive interpreter. For example:
>>> from multiprocessing import Pool
>>> p = Pool(5)
>>> def f(x):
... return x*x
...
>>> p.map(f, [1,2,3])
Process PoolWorker-1:
Process PoolWorker-2:
Process PoolWorker-3:
Traceback (most recent call last):
AttributeError: 'module' object has no attribute 'f'
AttributeError: 'module' object has no attribute 'f'
AttributeError: 'module' object has no attribute 'f'
(If you try this it will actually output three full tracebacks interleaved in a semi-random fashion, and then you may have to stop the master process somehow.)
16.6.2. Reference¶
The multiprocessing package mostly replicates the API of the threading module.
16.6.2.1. Process and exceptions¶
-
class
multiprocessing.
Process
(group=None, target=None, name=None, args=(), kwargs={})¶ Process objects represent activity that is run in a separate process. The Process class has equivalents of all of the methods of threading.Thread.
。The constructor should always be called with keyword arguments. group should always be
None
; it exists solely for compatibility withthreading.Thread
. target is the callable object to be invoked by therun()
method. It defaults toNone
, meaning nothing is called. name is the process name. By default, a unique name is constructed of the form ‘Process-N1:N2:…:Nk’ where N1,N2,…,Nk is a sequence of integers whose length is determined by the generation of the process. args is the argument tuple for the target invocation. kwargs is a dictionary of keyword arguments for the target invocation. By default, no arguments are passed to target. If a subclass overrides the constructor, it must make sure it invokes the base class constructor (Process.__init__()) before doing anything else to the process.
-
run
()¶ Method representing the process's activity.
You may override this method in a subclass. The standard run() method invokes the callable object passed to the object's constructor as the target argument, if any, with sequential and keyword arguments taken from the args and kwargs arguments, respectively.
-
join
([timeout])¶ Block the calling thread until the process whose
join()
method is called terminates or until the optional timeout occurs.If timeout is
None
then there is no timeout.一个进程可以合并多次。
进程无法并入自身,因为这会导致死锁。尝试在启动进程之前合并进程是错误的。
-
name
¶ The process’s name.
The name is a string used for identification purposes only. It has no semantics. Multiple processes may be given the same name. The initial name is set by the constructor.
-
daemon
¶ The process's daemon flag, a Boolean value. This must be set before start() is called. The initial value is inherited from the creating process.
When a process exits, it attempts to terminate all of its daemonic child processes.
Note that a daemonic process is not allowed to create child processes. Otherwise a daemonic process would leave its children orphaned if it gets terminated when its parent process exits. Additionally, these are not Unix daemons or services, they are normal processes that will be terminated (and not joined) if non-daemonic processes have exited.
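For instance, a minimal sketch of the daemon flag in use (the looping worker is purely illustrative):
from multiprocessing import Process
import time

def loop():
    # a child that would otherwise run forever
    while True:
        time.sleep(1)

if __name__ == '__main__':
    p = Process(target=loop)
    p.daemon = True      # must be set before start()
    p.start()
    time.sleep(2)
    # when the main process exits, the daemonic child is terminated with it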
In addition to the threading.Thread API, Process objects also support the following attributes and methods:
对象还支持以下属性和方法:-
pid
¶ Return the process ID. Before the process is spawned, this will be None.
-
exitcode
¶ The child's exit code. This will be None if the process has not yet terminated. A negative value -N indicates that the child was terminated by signal N.
-
authkey
¶ The process's authentication key (a byte string).
When multiprocessing is initialized the main process is assigned a random string using os.urandom().
When a Process object is created, it will inherit the authentication key of its parent process, although this may be changed by setting authkey to another byte string.
See Authentication keys.
-
terminate
()¶ Terminate the process. On Unix this is done using the SIGTERM signal; on Windows TerminateProcess() is used. Note that exit handlers and finally clauses, etc., will not be executed. Note that descendant processes of the process will not be terminated - they will simply become orphaned.
Warning
If this method is used when the associated process is using a pipe or queue then the pipe or queue is liable to become corrupted and may become unusable by another process. Similarly, if the process has acquired a lock or semaphore etc. then terminating it is liable to cause other processes to deadlock.
Note that the start(), join(), is_alive(), terminate() and exitcode methods should only be called by the process that created the process object.
Example usage of some of the methods of Process:
>>> import multiprocessing, time, signal
>>> p = multiprocessing.Process(target=time.sleep, args=(1000,))
>>> print p, p.is_alive()
<Process(Process-1, initial)> False
>>> p.start()
>>> print p, p.is_alive()
<Process(Process-1, started)> True
>>> p.terminate()
>>> time.sleep(0.1)
>>> print p, p.is_alive()
<Process(Process-1, stopped[SIGTERM])> False
>>> p.exitcode == -signal.SIGTERM
True
-
-
exception
multiprocessing.
BufferTooShort
¶ Exception raised by Connection.recv_bytes_into() when the supplied buffer object is too small for the message read.
If e is an instance of BufferTooShort then e.args[0] will give the message as a byte string.
16.6.2.2. Pipes and Queues¶
When using multiple processes, one generally uses message passing for communication between processes and avoids having to use any synchronization primitives like locks.
For passing messages one can use Pipe() (for a connection between two processes) or a queue (which allows multiple producers and consumers).
The Queue
, multiprocessing.queues.SimpleQueue
and JoinableQueue
types are multi-producer,
multi-consumer FIFO queues modelled on the Queue.Queue
class in the
standard library. They differ in that Queue
lacks the
task_done()
and join()
methods introduced
into Python 2.5’s Queue.Queue
class.
If you use JoinableQueue then you must call JoinableQueue.task_done() for each task removed from the queue or else the semaphore used to count the number of unfinished tasks may eventually overflow, raising an exception.
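A minimal sketch of this discipline might look as follows (the worker body is illustrative only):
from multiprocessing import JoinableQueue, Process

def worker(q):
    while True:
        item = q.get()
        # ... process item here ...
        q.task_done()        # one call per item removed from the queue

if __name__ == '__main__':
    q = JoinableQueue()
    p = Process(target=worker, args=(q,))
    p.daemon = True          # let the worker die with the parent
    p.start()
    for i in range(5):
        q.put(i)
    q.join()                 # blocks until every item has been task_done()'d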
Note that one can also create a shared queue by using a manager object - see Managers.
Note
multiprocessing
uses the usual Queue.Empty
and
Queue.Full
exceptions to signal a timeout. They are not available in
the multiprocessing
namespace so you need to import them from
Queue
.
Note
When an object is put on a queue, the object is pickled and a background thread later flushes the pickled data to an underlying pipe. This has some consequences which are a little surprising, but should not cause any practical difficulties - if they really bother you then you can instead use a queue created with a manager.
After putting an object on an empty queue there may be an infinitesimal delay before the queue's empty() method returns False and get_nowait() can return without raising Queue.Empty.
If multiple processes are enqueuing objects, it is possible for the objects to be received at the other end out-of-order. However, objects enqueued by the same process will always be in the expected order with respect to each other.
Warning
If a process is killed using Process.terminate()
or os.kill()
while it is trying to use a Queue
, then the data in the queue is
likely to become corrupted. This may cause any other process to get an
exception when it tries to use the queue later on.
Warning
As mentioned above, if a child process has put items on a queue (and it has not used JoinableQueue.cancel_join_thread), then that process will not terminate until all buffered items have been flushed to the pipe.
This means that if you try joining that process you may get a deadlock unless you are sure that all items which have been put on the queue have been consumed. Similarly, if the child process is non-daemonic then the parent process may hang on exit when it tries to join all its non-daemonic children.
Note that a queue created using a manager does not have this issue. See Programming guidelines.
For an example of the usage of queues for interprocess communication see Examples.
-
multiprocessing.
Pipe
([duplex])¶ Returns a pair
(conn1, conn2)
of Connection objects representing the ends of a pipe.
If duplex is True (the default) then the pipe is bidirectional. If duplex is False then the pipe is unidirectional: conn1 can only be used for receiving messages and conn2 can only be used for sending messages.
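A short sketch of a one-way pipe under these rules:
from multiprocessing import Pipe

# with duplex=False, the first connection is receive-only and the second send-only
recv_conn, send_conn = Pipe(duplex=False)
send_conn.send('ping')
print recv_conn.recv()       # prints "ping"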
-
class
multiprocessing.
Queue
([maxsize])¶ Returns a process shared queue implemented using a pipe and a few locks/semaphores. When a process first puts an item on the queue a feeder thread is started which transfers objects from a buffer into the pipe.
The usual
Queue.Empty
andQueue.Full
exceptions from the standard library’sQueue
module are raised to signal timeouts.Queue
implements all the methods ofQueue.Queue
except fortask_done()
andjoin()
.-
qsize
()¶ Return the approximate size of the queue. Because of multithreading/multiprocessing semantics, this number is not reliable.
Note that this may raise NotImplementedError on Unix platforms like Mac OS X where sem_getvalue() is not implemented.
-
empty
()¶ Return True if the queue is empty, False otherwise. Because of multithreading/multiprocessing semantics, this is not reliable.
-
full
()¶ Return True if the queue is full, False otherwise. Because of multithreading/multiprocessing semantics, this is not reliable.
-
put
(obj[, block[, timeout]])¶ Put obj into the queue. If the optional argument block is
True
(the default) and timeout isNone
(the default), block if necessary until a free slot is available. If timeout is a positive number, it blocks at most timeout seconds and raises theQueue.Full
exception if no free slot was available within that time. Otherwise (block isFalse
), put an item on the queue if a free slot is immediately available, else raise theQueue.Full
exception (timeout is ignored in that case).
-
put_nowait
(obj)¶ Equivalent to put(obj, False).
-
get
([block[, timeout]])¶ Remove and return an item from the queue. If optional args block is
True
(the default) and timeout isNone
(the default), block if necessary until an item is available. If timeout is a positive number, it blocks at most timeout seconds and raises theQueue.Empty
exception if no item was available within that time. Otherwise (block isFalse
), return an item if one is immediately available, else raise theQueue.Empty
exception (timeout is ignored in that case).
-
get_nowait
()¶ Equivalent to get(False).
Queue
has a few additional methods not found inQueue.Queue
. These methods are usually unnecessary for most code:-
close
()¶ Indicate that no more data will be put on this queue by the current process. The background thread will quit once it has flushed all buffered data to the pipe. This is called automatically when the queue is garbage collected.
-
join_thread
()¶ Join the background thread. This can only be used after close() has been called. It blocks until the background thread exits, ensuring that all data in the buffer has been flushed to the pipe.
By default if a process is not the creator of the queue then on exit it will attempt to join the queue's background thread. The process can call cancel_join_thread() to make join_thread() do nothing.
-
cancel_join_thread
()¶ Prevent join_thread() from blocking. In particular, this prevents the background thread from being joined automatically when the process exits - see join_thread().
A better name for this method might be allow_exit_without_flush(). It is likely to cause enqueued data to be lost, and you almost certainly will not need to use it. It is really only there if you need the current process to exit immediately without waiting to flush enqueued data to the underlying pipe, and you don't care about lost data.
Note
This class's functionality requires a functioning shared semaphore implementation on the host operating system. Without one, the functionality in this class will be disabled, and attempts to instantiate a Queue will result in an ImportError. See bpo-3770 for additional information. The same holds true for any of the specialized queue types listed below.
-
class
multiprocessing.queues.
SimpleQueue
¶ It is a simplified
Queue
type, very close to a lockedPipe
.-
empty
()¶ Return True if the queue is empty, False otherwise.
-
get
()¶ Remove and return an item from the queue.
-
put
(item)¶ Put item into the queue.
-
-
class
multiprocessing.
JoinableQueue
([maxsize])¶ JoinableQueue
, aQueue
subclass, is a queue which additionally hastask_done()
andjoin()
methods.-
task_done
()¶ Indicate that a formerly enqueued task is complete. Used by queue consumer threads. For each
get()
used to fetch a task, a subsequent call totask_done()
tells the queue that the processing on the task is complete.If a
join()
is currently blocking, it will resume when all items have been processed (meaning that atask_done()
call was received for every item that had beenput()
into the queue).
Raises a ValueError if called more times than there were items placed in the queue.
-
join
()¶ Block until all items in the queue have been gotten and processed.
The count of unfinished tasks goes up whenever an item is added to the queue. The count goes down whenever a consumer thread calls
task_done()
to indicate that the item was retrieved and all work on it is complete. When the count of unfinished tasks drops to zero,join()
unblocks.
-
16.6.2.3. Miscellaneous¶
-
multiprocessing.
active_children
()¶ Return list of all live children of the current process.
Calling this has the side effect of "joining" any processes which have already finished.
-
multiprocessing.
cpu_count
()¶ Return the number of CPUs in the system. May raise
NotImplementedError
.
-
multiprocessing.
freeze_support
()¶ Add support for when a program which uses multiprocessing has been frozen to produce a Windows executable. (Has been tested with py2exe, PyInstaller and cx_Freeze.)
One needs to call this function straight after the if __name__ == '__main__' line of the main module. For example:
from multiprocessing import Process, freeze_support

def f():
    print 'hello world!'

if __name__ == '__main__':
    freeze_support()
    Process(target=f).start()
If the freeze_support() line is omitted then trying to run the frozen executable will raise RuntimeError.
Calling freeze_support() has no effect when invoked on any operating system other than Windows. In addition, if the module is being run normally by the Python interpreter on Windows (the program has not been frozen), then freeze_support() has no effect.
-
multiprocessing.
set_executable
()¶ Sets the path of the Python interpreter to use when starting a child process. (By default sys.executable is used.) Embedders will probably need to do something like
set_executable(os.path.join(sys.exec_prefix, 'pythonw.exe'))
before they can create child processes. (Windows only)
Note
multiprocessing contains no analogues of threading.active_count(), threading.enumerate(), threading.settrace(), threading.setprofile(), threading.Timer, or threading.local.
16.6.2.4. Connection Objects¶
Connection objects allow the sending and receiving of picklable objects or strings. They can be thought of as message oriented connected sockets.
Connection objects are usually created using Pipe - see also Listeners and Clients.
-
class
Connection
¶ -
send
(obj)¶ Send an object to the other end of the connection which should be read using recv().
The object must be picklable. Very large pickles (approximately 32 MB+, though it depends on the OS) may raise a
ValueError
exception.
-
fileno
()¶ Return the file descriptor or handle used by the connection.
-
close
()¶ Close the connection.
This is called automatically when the connection is garbage collected.
-
poll
([timeout])¶ Return whether there is any data available to be read.
If timeout is not specified then it will return immediately. If timeout is a number then this specifies the maximum time in seconds to block. If timeout is None then an infinite timeout is used.
-
send_bytes
(buffer[, offset[, size]])¶ Send byte data from an object supporting the buffer interface as a complete message.
If offset is given then data is read from that position in buffer. If size is given then that many bytes will be read from buffer. Very large buffers (approximately 32 MB+, though it depends on the OS) may raise a
ValueError
exception.
-
recv_bytes
([maxlength])¶ Return a complete message of byte data sent from the other end of the connection as a string. Blocks until there is something to receive. Raises EOFError if there is nothing left to receive and the other end has closed.
If maxlength is specified and the message is longer than maxlength then
IOError
is raised and the connection will no longer be readable.
-
recv_bytes_into
(buffer[, offset])¶ Read into buffer a complete message of byte data sent from the other end of the connection and return the number of bytes in the message. Blocks until there is something to receive. Raises EOFError if there is nothing left to receive and the other end was closed.
buffer must be an object satisfying the writable buffer interface. If offset is given then the message will be written into the buffer from that position. Offset must be a non-negative integer less than the length of buffer (in bytes).
If the buffer is too short then a BufferTooShort exception is raised and the complete message is available as e.args[0] where e is the exception instance.
-
For example:
>>> from multiprocessing import Pipe
>>> a, b = Pipe()
>>> a.send([1, 'hello', None])
>>> b.recv()
[1, 'hello', None]
>>> b.send_bytes('thank you')
>>> a.recv_bytes()
'thank you'
>>> import array
>>> arr1 = array.array('i', range(5))
>>> arr2 = array.array('i', [0] * 10)
>>> a.send_bytes(arr1)
>>> count = b.recv_bytes_into(arr2)
>>> assert count == len(arr1) * arr1.itemsize
>>> arr2
array('i', [0, 1, 2, 3, 4, 0, 0, 0, 0, 0])
Warning
The Connection.recv() method automatically unpickles the data it receives, which can be a security risk unless you can trust the process which sent the message.
Therefore, unless the connection object was produced using Pipe() you should only use the recv() and send() methods after performing some sort of authentication. See Authentication keys.
Warning
If a process is killed while it is trying to read or write to a pipe then the data in the pipe is likely to become corrupted, because it may become impossible to be sure where the message boundaries lie.
16.6.2.5. Synchronization primitives¶
Generally synchronization primitives are not as necessary in a multiprocess program as they are in a multithreaded program. See the documentation for the threading module.
Note that one can also create synchronization primitives by using a manager object - see Managers.
-
class
multiprocessing.
BoundedSemaphore
([value])¶ A bounded semaphore object: a close analog of threading.BoundedSemaphore.
A solitary difference from its close analog exists: its
acquire
method’s first argument is named block and it supports an optional second argument timeout, as is consistent withLock.acquire()
Note
On Mac OS X, this is indistinguishable from Semaphore because sem_getvalue() is not implemented on that platform.
-
class
multiprocessing.
Condition
([lock])¶ A condition variable: a clone of
threading.Condition
If lock is specified then it should be a Lock or RLock object from multiprocessing.
-
class
multiprocessing.
Event
¶ A clone of
threading.Event
. This method returns the state of the internal semaphore on exit, so it will always returnTrue
except if a timeout is given and the operation times out.在 2.7 版更改: Previously, the method always returned
None
.
-
class
multiprocessing.
Lock
¶ A primitive lock (non-recursive) object: a close analog of threading.Lock. Once a process or thread has acquired a lock, subsequent attempts to acquire it from any process or thread will block until it is released; any process or thread may release it. The concepts and behaviors of threading.Lock as it applies to threads are replicated here in multiprocessing.Lock as it applies to either processes or threads, except as noted.
Note that Lock is actually a factory function which returns an instance of multiprocessing.synchronize.Lock initialized with a default context.
Lock
supports the context manager protocol and thus may be used inwith
statements.-
acquire
(block=True, timeout=None)¶ Acquire a lock, blocking or non-blocking.
With the block argument set to True (the default), the method call will block until the lock is in an unlocked state, then set it to locked and return True. Note that the name of this first argument differs from that in threading.Lock.acquire().
With the block argument set to False, the method call does not block. If the lock is currently in a locked state, return False; otherwise set the lock to a locked state and return True.
When invoked with a positive, floating-point value for timeout, block for at most the number of seconds specified by timeout as long as the lock can not be acquired. Invocations with a negative value for timeout are equivalent to a timeout of zero. Invocations with a timeout value of
None
(the default) set the timeout period to infinite. The timeout argument has no practical implications if the block argument is set toFalse
and is thus ignored. ReturnsTrue
if the lock has been acquired orFalse
if the timeout period has elapsed. Note that the timeout argument does not exist in this method’s analog,threading.Lock.acquire()
.
-
release
()¶ Release a lock. This can be called from any process or thread, not only the process or thread which originally acquired the lock.
Behavior is the same as in threading.Lock.release() except that when invoked on an unlocked lock, a ValueError is raised.
-
-
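A small sketch of the context manager usage mentioned above:
from multiprocessing import Lock

lock = Lock()
with lock:                   # acquire() on entry, release() on exit
    pass  # code here runs while holding the lock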
class
multiprocessing.
RLock
¶ A recursive lock object: a close analog of threading.RLock. A recursive lock must be released by the process or thread that acquired it. Once a process or thread has acquired a recursive lock, the same process or thread may acquire it again without blocking; that process or thread must release it once for each time it has been acquired.
Note that RLock is actually a factory function which returns an instance of multiprocessing.synchronize.RLock initialized with a default context.
RLock supports the context manager protocol and thus may be used in with statements.
acquire
(block=True, timeout=None)¶ Acquire a lock, blocking or non-blocking.
When invoked with the block argument set to True, block until the lock is in an unlocked state (not owned by any process or thread) unless the lock is already owned by the current process or thread. The current process or thread then takes ownership of the lock (if it does not already have ownership) and the recursion level inside the lock increments by one, resulting in a return value of True. Note that there are several differences in this first argument's behavior compared to the implementation of threading.RLock.acquire(), starting with the name of the argument itself.
When invoked with the block argument set to False, do not block. If the lock has already been acquired (and thus is owned) by another process or thread, the current process or thread does not take ownership and the recursion level within the lock is not changed, resulting in a return value of False. If the lock is in an unlocked state, the current process or thread takes ownership and the recursion level is incremented, resulting in a return value of True.
Use and behaviors of the timeout argument are the same as in
Lock.acquire()
. Note that the timeout argument does not exist in this method’s analog,threading.RLock.acquire()
.
-
release
()¶ Release a lock, decrementing the recursion level. If after the decrement the recursion level is zero, reset the lock to unlocked (not owned by any process or thread) and if any other processes or threads are blocked waiting for the lock to become unlocked, allow exactly one of them to proceed. If after the decrement the recursion level is still nonzero, the lock remains locked and owned by the calling process or thread.
Only call this method when the calling process or thread owns the lock. An AssertionError is raised if this method is called by a process or thread other than the owner or if the lock is in an unlocked (unowned) state. Note that the type of exception raised in this situation differs from the implemented behavior in threading.RLock.release().
-
-
class
multiprocessing.
Semaphore
([value])¶ A semaphore object: a close analog of threading.Semaphore.
A solitary difference from its close analog exists: its
acquire
method’s first argument is named block and it supports an optional second argument timeout, as is consistent withLock.acquire()
.
Note
The acquire()
method of BoundedSemaphore
, Lock
,
RLock
and Semaphore
has a timeout parameter not supported
by the equivalents in threading
. The signature is
acquire(block=True, timeout=None)
with keyword parameters being
acceptable. If block is True
and timeout is not None
then it
specifies a timeout in seconds. If block is False
then timeout is
ignored.
On Mac OS X, sem_timedwait is unsupported, so calling acquire() with a timeout will emulate that function's behavior using a sleeping loop.
Note
If the SIGINT signal generated by Ctrl-C arrives while the main thread is blocked by a call to BoundedSemaphore.acquire(), Lock.acquire(), RLock.acquire(), Semaphore.acquire(), Condition.acquire() or Condition.wait() then the call will be immediately interrupted and KeyboardInterrupt will be raised.
This differs from the behaviour of threading where SIGINT will be ignored while the equivalent blocking calls are in progress.
Note
Some of this package's functionality requires a functioning shared semaphore implementation on the host operating system. Without one, the multiprocessing.synchronize module will be disabled, and attempts to import it will result in an ImportError. See bpo-3770 for additional information.
16.6.2.7. Managers¶
Managers provide a way to create data which can be shared between different processes. A manager object controls a server process which manages shared objects. Other processes can access the shared objects by using proxies.
multiprocessing.Manager()¶
Returns a started SyncManager object which can be used for sharing objects between processes. The returned manager object corresponds to a spawned child process and has methods which will create shared objects and return corresponding proxies.
Manager processes will be shutdown as soon as they are garbage collected or their parent process exits. The manager classes are defined in the multiprocessing.managers module:
-
class
multiprocessing.managers.
BaseManager
([address[, authkey]])¶ Create a BaseManager object.
Once created one should call start() or get_server().serve_forever() to ensure that the manager object refers to a started manager process.
address is the address on which the manager process listens for new connections. If address is None then an arbitrary one is chosen.
authkey is the authentication key which will be used to check the validity of incoming connections to the server process. If authkey is None then current_process().authkey is used. Otherwise authkey is used and it must be a string.
start
([initializer[, initargs]])¶ Start a subprocess to start the manager. If initializer is not None then the subprocess will call initializer(*initargs) when it starts.
-
get_server
()¶ Returns a Server object which represents the actual server under the control of the Manager. The Server object supports the serve_forever() method:
>>> from multiprocessing.managers import BaseManager
>>> manager = BaseManager(address=('', 50000), authkey='abc')
>>> server = manager.get_server()
>>> server.serve_forever()
Server additionally has an address attribute.
-
connect
()¶ Connect a local manager object to a remote manager process:
>>> from multiprocessing.managers import BaseManager
>>> m = BaseManager(address=('127.0.0.1', 5000), authkey='abc')
>>> m.connect()
-
register
(typeid[, callable[, proxytype[, exposed[, method_to_typeid[, create_method]]]]])¶ A classmethod which can be used for registering a type or callable with the manager class.
typeid is a "type identifier" which is used to identify a particular type of shared object. This must be a string.
callable is a callable used for creating objects for this type identifier. If a manager instance will be created using the from_address() classmethod or if the create_method argument is False then this can be left as None.
proxytype is a subclass of BaseProxy which is used to create proxies for shared objects with this typeid. If None then a proxy class is created automatically.
exposed is used to specify a sequence of method names which proxies for this typeid should be allowed to access using BaseProxy._callmethod(). (If exposed is None then proxytype._exposed_ is used instead if it exists.) In the case where no exposed list is specified, all "public methods" of the shared object will be accessible. (Here a "public method" means any attribute which has a __call__() method and whose name does not begin with '_'.)
method_to_typeid is a mapping used to specify the return type of those exposed methods which should return a proxy. (If method_to_typeid is None then proxytype._method_to_typeid_ is used instead if it exists.) If a method's name is not a key of this mapping or if the mapping is None then the object returned by the method will be copied by value.
create_method determines whether a method should be created with name typeid which can be used to tell the server process to create a new shared object and return a proxy for it. By default it is True.
BaseManager
instances also have one read-only property:
-
address
¶ The address used by the manager.
-
-
class
multiprocessing.managers.
SyncManager
¶ A subclass of BaseManager which can be used for the synchronization of processes. Objects of this type are returned by multiprocessing.Manager().
It also supports creation of shared lists and dictionaries.
-
BoundedSemaphore
([value])¶ Create a shared threading.BoundedSemaphore object and return a proxy for it.
-
Condition
([lock])¶ Create a shared threading.Condition object and return a proxy for it.
If lock is supplied then it should be a proxy for a threading.Lock or threading.RLock object.
-
Event
()¶ Create a shared threading.Event object and return a proxy for it.
-
Lock
()¶ Create a shared threading.Lock object and return a proxy for it.
-
Namespace
()¶ Create a shared Namespace object and return a proxy for it.
-
Queue
([maxsize])¶ Create a shared
Queue.Queue
object and return a proxy for it.
-
RLock
()¶ Create a shared threading.RLock object and return a proxy for it.
-
Semaphore
([value])¶ Create a shared threading.Semaphore object and return a proxy for it.
-
Array
(typecode, sequence)¶ Create an array and return a proxy for it.
-
Value
(typecode, value)¶ Create an object with a writable value attribute and return a proxy for it.
-
dict
()¶ -
dict
(mapping) -
dict
(sequence) Create a shared
dict
object and return a proxy for it.
-
list
()¶ -
list
(sequence) Create a shared
list
object and return a proxy for it.
Note
Modifications to mutable values or items in dict and list proxies will not be propagated through the manager, because the proxy has no way of knowing when its values or items are modified. To modify such an item, you can re-assign the modified object to the container proxy:
# create a list proxy and append a mutable object (a dictionary)
lproxy = manager.list()
lproxy.append({})
# now mutate the dictionary
d = lproxy[0]
d['a'] = 1
d['b'] = 2
# at this point, the changes to d are not yet synced, but by
# reassigning the dictionary, the proxy is notified of the change
lproxy[0] = d
-
-
class
multiprocessing.managers.
Namespace
¶ A type that can register with SyncManager.
A namespace object has no public methods, but does have writable attributes. Its representation shows the values of its attributes.
However, when using a proxy for a namespace object, an attribute beginning with '_' will be an attribute of the proxy and not an attribute of the referent:
>>> manager = multiprocessing.Manager()
>>> Global = manager.Namespace()
>>> Global.x = 10
>>> Global.y = 'hello'
>>> Global._z = 12.3    # this is an attribute of the proxy
>>> print Global
Namespace(x=10, y='hello')
16.6.2.7.1. Customized managers¶
To create one's own manager, one creates a subclass of BaseManager and uses the register() classmethod to register new types or callables with the manager class. For example:
from multiprocessing.managers import BaseManager
class MathsClass(object):
def add(self, x, y):
return x + y
def mul(self, x, y):
return x * y
class MyManager(BaseManager):
pass
MyManager.register('Maths', MathsClass)
if __name__ == '__main__':
manager = MyManager()
manager.start()
maths = manager.Maths()
print maths.add(4, 3) # prints 7
print maths.mul(7, 8) # prints 56
16.6.2.7.2. Using a remote manager¶
It is possible to run a manager server on one machine and have clients use it from other machines (assuming that the firewalls involved allow it).
Running the following commands creates a server for a single shared queue which remote clients can access:
>>> from multiprocessing.managers import BaseManager
>>> import Queue
>>> queue = Queue.Queue()
>>> class QueueManager(BaseManager): pass
>>> QueueManager.register('get_queue', callable=lambda:queue)
>>> m = QueueManager(address=('', 50000), authkey='abracadabra')
>>> s = m.get_server()
>>> s.serve_forever()
One client can access the server as follows:
>>> from multiprocessing.managers import BaseManager
>>> class QueueManager(BaseManager): pass
>>> QueueManager.register('get_queue')
>>> m = QueueManager(address=('foo.bar.org', 50000), authkey='abracadabra')
>>> m.connect()
>>> queue = m.get_queue()
>>> queue.put('hello')
Another client can also use it:
>>> from multiprocessing.managers import BaseManager
>>> class QueueManager(BaseManager): pass
>>> QueueManager.register('get_queue')
>>> m = QueueManager(address=('foo.bar.org', 50000), authkey='abracadabra')
>>> m.connect()
>>> queue = m.get_queue()
>>> queue.get()
'hello'
Local processes can also access that queue, using the code from above on the client to access it remotely:
>>> from multiprocessing import Process, Queue
>>> from multiprocessing.managers import BaseManager
>>> class Worker(Process):
... def __init__(self, q):
... self.q = q
... super(Worker, self).__init__()
... def run(self):
... self.q.put('local hello')
...
>>> queue = Queue()
>>> w = Worker(queue)
>>> w.start()
>>> class QueueManager(BaseManager): pass
...
>>> QueueManager.register('get_queue', callable=lambda: queue)
>>> m = QueueManager(address=('', 50000), authkey='abracadabra')
>>> s = m.get_server()
>>> s.serve_forever()
16.6.2.8. Proxy Objects¶
A proxy is an object which refers to a shared object which lives (presumably) in a different process. The shared object is said to be the referent of the proxy. Multiple proxy objects may have the same referent.
A proxy object has methods which invoke corresponding methods of its referent (although not every method of the referent will necessarily be available through the proxy). A proxy can usually be used in most of the same ways that its referent can:
>>> from multiprocessing import Manager
>>> manager = Manager()
>>> l = manager.list([i*i for i in range(10)])
>>> print l
[0, 1, 4, 9, 16, 25, 36, 49, 64, 81]
>>> print repr(l)
<ListProxy object, typeid 'list' at 0x...>
>>> l[4]
16
>>> l[2:5]
[4, 9, 16]
Note that applying str() to a proxy will return the representation of the referent, whereas applying repr() will return the representation of the proxy.
An important feature of proxy objects is that they are picklable so they can be passed between processes. Note, however, that if a proxy is sent to the corresponding manager’s process then unpickling it will produce the referent itself. This means, for example, that one shared object can contain a second:
>>> a = manager.list()
>>> b = manager.list()
>>> a.append(b) # referent of a now contains referent of b
>>> print a, b
[[]] []
>>> b.append('hello')
>>> print a, b
[['hello']] ['hello']
Note
The proxy types in multiprocessing do nothing to support comparisons by value. So, for instance, we have:
>>> manager.list([1,2,3]) == [1,2,3]
False
One should just use a copy of the referent instead when making comparisons.
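For example, a comparison done on a local copy behaves as expected (a sketch using the _getvalue() method described below):
>>> l = manager.list([1, 2, 3])
>>> l._getvalue() == [1, 2, 3]
True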
-
class
multiprocessing.managers.
BaseProxy
¶ Proxy objects are instances of subclasses of BaseProxy.
-
_callmethod
(methodname[, args[, kwds]])¶ Call and return the result of a method of the proxy's referent.
If proxy is a proxy whose referent is obj then the expression
proxy._callmethod(methodname, args, kwds)
will evaluate the expression
getattr(obj, methodname)(*args, **kwds)
in the manager's process.
The returned value will be a copy of the result of the call or a proxy to a new shared object - see the documentation for the method_to_typeid argument of BaseManager.register().
If an exception is raised by the call, then it is re-raised by _callmethod(). If some other exception is raised in the manager's process then this is converted into a RemoteError exception and is raised by _callmethod().
Note in particular that an exception will be raised if methodname has not been exposed.
An example of the usage of _callmethod():
>>> l = manager.list(range(10))
>>> l._callmethod('__len__')
10
>>> l._callmethod('__getslice__', (2, 7))   # equiv to `l[2:7]`
[2, 3, 4, 5, 6]
>>> l._callmethod('__getitem__', (20,))     # equiv to `l[20]`
Traceback (most recent call last):
...
IndexError: list index out of range
-
_getvalue
()¶ Return a copy of the referent.
If the referent is unpicklable then this will raise an exception.
-
__repr__
()¶ Return a representation of the proxy object.
-
__str__
()¶ Return the representation of the referent.
-
16.6.2.9. Process Pools¶
One can create a pool of processes which will carry out tasks submitted to it with the Pool class.
-
class
multiprocessing.
Pool
([processes[, initializer[, initargs[, maxtasksperchild]]]])¶ A process pool object which controls a pool of worker processes to which jobs can be submitted. It supports asynchronous results with timeouts and callbacks and has a parallel map implementation.
processes is the number of worker processes to use. If processes is
None
then the number returned bycpu_count()
is used. If initializer is notNone
then each worker process will callinitializer(*initargs)
when it starts.
Note that the methods of the pool object should only be called by the process which created the pool.
New in version 2.7: maxtasksperchild is the number of tasks a worker process can complete before it will exit and be replaced with a fresh worker process, to enable unused resources to be freed. The default maxtasksperchild is None, which means worker processes will live as long as the pool.
Note
Worker processes within a Pool typically live for the complete duration of the Pool's work queue. A frequent pattern found in other systems (such as Apache, mod_wsgi, etc) to free resources held by workers is to allow a worker within a pool to complete only a set amount of work before exiting, being cleaned up and a new process spawned to replace the old one. The maxtasksperchild argument to the Pool exposes this ability to the end user.
-
apply
(func[, args[, kwds]])¶ Equivalent of the
apply()
built-in function. It blocks until the result is ready, soapply_async()
is better suited for performing work in parallel. Additionally, func is only executed in one of the workers of the pool.
-
apply_async
(func[, args[, kwds[, callback]]])¶ A variant of the apply() method which returns a result object.
If callback is specified then it should be a callable which accepts a single argument. When the result becomes ready callback is applied to it (unless the call failed). callback should complete immediately since otherwise the thread which handles the results will get blocked.
-
map
(func, iterable[, chunksize])¶ A parallel equivalent of the
map()
built-in function (it supports only one iterable argument though). It blocks until the result is ready.这个方法会将可迭代对象分割为许多块,然后提交给进程池。可以将 chunksize 设置为一个正整数从而(近似)指定每个块的大小可以。
-
map_async
(func, iterable[, chunksize[, callback]])¶ 和
map()
方法类似,但是返回一个结果对象。If callback is specified then it should be a callable which accepts a single argument. When the result becomes ready callback is applied to it (unless the call failed). callback should complete immediately since otherwise the thread which handles the results will get blocked.
-
imap
(func, iterable[, chunksize])¶ An equivalent of
itertools.imap()
.chunksize 参数的作用和
map()
方法的一样。对于很长的迭代器,给 chunksize 设置一个很大的值会比默认值1
极大 地加快执行速度。同样,如果 chunksize 是
1
, 那么imap()
方法所返回的迭代器的next()
方法拥有一个可选的 timeout 参数: 如果无法在 timeout 秒内执行得到结果,则``next(timeout)`` 会抛出multiprocessing.TimeoutError
异常。
-
imap_unordered
(func, iterable[, chunksize])¶ The same as imap() except that the ordering of the results from the returned iterator should be considered arbitrary. (Only when there is only one worker process is the order guaranteed to be "correct".)
-
close
()¶ Prevents any more tasks from being submitted to the pool. Once all the tasks have been completed the worker processes will exit.
-
terminate
()¶ Stops the worker processes immediately without completing outstanding work. When the pool object is garbage collected terminate() will be called immediately.
-
join
()¶ Wait for the worker processes to exit. One must call close() or terminate() before using join().
-
-
class
multiprocessing.pool.
AsyncResult
¶ The class of the result returned by Pool.apply_async() and Pool.map_async().
get
([timeout])¶ Return the result when it arrives. If timeout is not None and the result does not arrive within timeout seconds then multiprocessing.TimeoutError is raised. If the remote call raised an exception then that exception will be re-raised by get().
-
wait
([timeout])¶ Wait until the result is available or until timeout seconds pass.
-
ready
()¶ Return whether the call has completed.
-
successful
()¶ Return whether the call completed without raising an exception. Will raise
AssertionError
if the result is not ready.
-
The following example demonstrates the use of a pool:
from multiprocessing import Pool
import time
def f(x):
return x*x
if __name__ == '__main__':
pool = Pool(processes=4) # start 4 worker processes
result = pool.apply_async(f, (10,)) # evaluate "f(10)" asynchronously in a single process
print result.get(timeout=1) # prints "100" unless your computer is *very* slow
print pool.map(f, range(10)) # prints "[0, 1, 4,..., 81]"
it = pool.imap(f, range(10))
print it.next() # prints "0"
print it.next() # prints "1"
print it.next(timeout=1) # prints "4" unless your computer is *very* slow
result = pool.apply_async(time.sleep, (10,))
print result.get(timeout=1) # raises multiprocessing.TimeoutError
16.6.2.10. Listeners and Clients¶
Usually message passing between processes is done using queues or by using
Connection
objects returned by Pipe()
.
However, the multiprocessing.connection
module allows some extra
flexibility. It basically gives a high level message oriented API for dealing
with sockets or Windows named pipes, and also has support for digest
authentication using the hmac
module.
-
multiprocessing.connection.
deliver_challenge
(connection, authkey)¶ Send a randomly generated message to the other end of the connection and wait for a reply.
If the reply matches the digest of the message using authkey as the key then a welcome message is sent to the other end of the connection. Otherwise
AuthenticationError
is raised.
-
multiprocessing.connection.
answer_challenge
(connection, authkey)¶ Receive a message, calculate the digest of the message using authkey as the key, and then send the digest back.
If a welcome message is not received, then
AuthenticationError
is raised.
-
multiprocessing.connection.
Client
(address[, family[, authenticate[, authkey]]])¶ Attempt to set up a connection to the listener which is using address address, returning a
Connection
The type of the connection is determined by family argument, but this can generally be omitted since it can usually be inferred from the format of address. (See Address Formats.)
If authenticate is
True
or authkey is a string then digest authentication is used. The key used for authentication will be either authkey or current_process().authkey if authkey is None. If authentication fails then AuthenticationError is raised. See Authentication keys.
-
class
multiprocessing.connection.
Listener
([address[, family[, backlog[, authenticate[, authkey]]]]])¶ A wrapper for a bound socket or Windows named pipe which is "listening" for connections.
address is the address to be used by the bound socket or named pipe of the listener object.
Note
If an address of '0.0.0.0' is used, the address will not be a connectable end point on Windows. If you require a connectable end-point, you should use '127.0.0.1'.
family is the type of socket (or named pipe) to use. This can be one of the strings 'AF_INET' (for a TCP socket), 'AF_UNIX' (for a Unix domain socket) or 'AF_PIPE' (for a Windows named pipe). Of these only the first is guaranteed to be available. If family is None then the family is inferred from the format of address. If address is also None then a default is chosen. This default is the family which is assumed to be the fastest available. See Address Formats. Note that if family is 'AF_UNIX' and address is None then the socket will be created in a private temporary directory created using tempfile.mkstemp().
If the listener object uses a socket then backlog (1 by default) is passed to the listen() method of the socket once it has been bound.
True
(False
by default) or authkey is notNone
then digest authentication is used.If authkey is a string then it will be used as the authentication key; otherwise it must be
None
.If authkey is
None
and authenticate isTrue
thencurrent_process().authkey
is used as the authentication key. If authkey isNone
and authenticate isFalse
then no authentication is done. If authentication fails thenAuthenticationError
is raised. See Authentication keys.
-
accept
()¶ Accept a connection on the bound socket or named pipe of the listener object and return a
Connection
object. If authentication is attempted and fails, thenAuthenticationError
is raised.
-
close
()¶ Close the bound socket or named pipe of the listener object. This is called automatically when the listener is garbage collected. However it is advisable to call it explicitly.
Listener objects have the following read-only properties:
-
address
¶ The address which is being used by the Listener object.
-
last_accepted
¶ The address from which the last accepted connection came. If this is unavailable then it is None.
-
The module defines the following exceptions:
-
exception
multiprocessing.connection.
ProcessError
¶ 所有
multiprocessing
异常的基类。
-
exception
multiprocessing.connection.
BufferTooShort
¶ Exception raised by Connection.recv_bytes_into() when the supplied buffer object is too small for the message read.
-
exception
multiprocessing.connection.
AuthenticationError
¶ Raised when there is an authentication error.
-
exception
multiprocessing.connection.
TimeoutError
¶ Raised by methods with a timeout when the timeout expires.
Examples
The following server code creates a listener which uses 'secret password' as an authentication key. It then waits for a connection and sends some data to the client:
from multiprocessing.connection import Listener
from array import array
address = ('localhost', 6000) # family is deduced to be 'AF_INET'
listener = Listener(address, authkey='secret password')
conn = listener.accept()
print 'connection accepted from', listener.last_accepted
conn.send([2.25, None, 'junk', float])
conn.send_bytes('hello')
conn.send_bytes(array('i', [42, 1729]))
conn.close()
listener.close()
The following code connects to the server and receives some data from it:
from multiprocessing.connection import Client
from array import array
address = ('localhost', 6000)
conn = Client(address, authkey='secret password')
print conn.recv() # => [2.25, None, 'junk', float]
print conn.recv_bytes() # => 'hello'
arr = array('i', [0, 0, 0, 0, 0])
print conn.recv_bytes_into(arr) # => 8
print arr # => array('i', [42, 1729, 0, 0, 0])
conn.close()
16.6.2.11. Authentication keys¶
When one uses Connection.recv()
, the
data received is automatically
unpickled. Unfortunately unpickling data from an untrusted source is a security
risk. Therefore Listener
and Client()
use the hmac
module
to provide digest authentication.
An authentication key is a string which can be thought of as a password: once a connection is established both ends will demand proof that the other knows the authentication key. (Demonstrating that both ends are using the same key does not involve sending the key over the connection.)
If authentication is requested but no authentication key is specified then the return value of current_process().authkey is used (see Process). This value will be automatically inherited by any Process object that the current process creates. This means that (by default) all processes of a multi-process program will share a single authentication key which can be used when setting up connections between themselves.
Suitable authentication keys can also be generated by using os.urandom().
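For instance, a minimal sketch of installing a fresh random key in the current process:
import os
from multiprocessing import current_process

current_process().authkey = os.urandom(32)   # 32 random bytes as the shared secret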
16.6.2.12. Logging¶
Some support for logging is available. Note, however, that the logging package does not use process shared locks so it is possible (depending on the handler type) for messages from different processes to get mixed up.
-
multiprocessing.
get_logger
()¶ Returns the logger used by multiprocessing. If necessary, a new one will be created.
When first created the logger has level logging.NOTSET and no default handler. Messages sent to this logger will not by default propagate to the root logger.
Note that on Windows child processes will only inherit the level of the parent process's logger - any other customization of the logger will not be inherited.
-
multiprocessing.
log_to_stderr
()¶ This function performs a call to get_logger() but in addition to returning the logger created by get_logger, it adds a handler which sends output to sys.stderr using format '[%(levelname)s/%(processName)s] %(message)s'.
Below is an example session with logging turned on:
>>> import multiprocessing, logging
>>> logger = multiprocessing.log_to_stderr()
>>> logger.setLevel(logging.INFO)
>>> logger.warning('doomed')
[WARNING/MainProcess] doomed
>>> m = multiprocessing.Manager()
[INFO/SyncManager-...] child process calling self.run()
[INFO/SyncManager-...] created temp directory /.../pymp-...
[INFO/SyncManager-...] manager serving at '/.../listener-...'
>>> del m
[INFO/MainProcess] sending shutdown message to manager
[INFO/SyncManager-...] manager exiting with exitcode 0
In addition to having these two logging functions, the multiprocessing also exposes two additional logging level attributes. These are SUBWARNING and SUBDEBUG. The table below illustrates where these fit in the normal level hierarchy.
Level | Numeric value
---|---
SUBWARNING | 25
SUBDEBUG | 5
For a full table of logging levels, see the logging module.
These additional logging levels are used primarily for certain debug messages
within the multiprocessing module. Below is the same example as above, except
with SUBDEBUG
enabled:
>>> import multiprocessing, logging
>>> logger = multiprocessing.log_to_stderr()
>>> logger.setLevel(multiprocessing.SUBDEBUG)
>>> logger.warning('doomed')
[WARNING/MainProcess] doomed
>>> m = multiprocessing.Manager()
[INFO/SyncManager-...] child process calling self.run()
[INFO/SyncManager-...] created temp directory /.../pymp-...
[INFO/SyncManager-...] manager serving at '/.../pymp-djGBXN/listener-...'
>>> del m
[SUBDEBUG/MainProcess] finalizer calling ...
[INFO/MainProcess] sending shutdown message to manager
[DEBUG/SyncManager-...] manager received shutdown message
[SUBDEBUG/SyncManager-...] calling <Finalize object, callback=unlink, ...
[SUBDEBUG/SyncManager-...] finalizer calling <built-in function unlink> ...
[SUBDEBUG/SyncManager-...] calling <Finalize object, dead>
[SUBDEBUG/SyncManager-...] finalizer calling <function rmtree at 0x5aa730> ...
[INFO/SyncManager-...] manager exiting with exitcode 0
16.6.2.13. The multiprocessing.dummy module¶
multiprocessing.dummy replicates the API of multiprocessing but is no more than a wrapper around the threading module.
16.6.3. Programming guidelines¶
There are certain guidelines and idioms which should be adhered to when using multiprocessing.
16.6.3.1. All platforms¶
Avoid shared state
As far as possible one should try to avoid shifting large amounts of data between processes.
It is probably best to stick to using queues or pipes for communication between processes rather than using the lower level synchronization primitives from the
threading
module.
Picklability
Ensure that the arguments to the methods of proxies are picklable.
Thread safety of proxies
Do not use a proxy object from more than one thread unless you protect it with a lock.
(There is never a problem with different processes using the same proxy.)
Joining zombie processes
On Unix when a process finishes but has not been joined it becomes a zombie. There should never be very many because each time a new process starts (or active_children() is called) all completed processes which have not yet been joined will be joined. Also calling a finished process's Process.is_alive will join the process. Even so it is probably good practice to explicitly join all the processes that you start.
Better to inherit than pickle/unpickle
On Windows many types from
multiprocessing
need to be picklable so that child processes can use them. However, one should generally avoid sending shared objects to other processes using pipes or queues. Instead you should arrange the program so that a process which needs access to a shared resource created elsewhere can inherit it from an ancestor process.
Avoid terminating processes
Using the Process.terminate method to stop a process is liable to cause any shared resources (such as locks, semaphores, pipes and queues) currently being used by the process to become broken or unavailable to other processes.
Therefore it is probably best to only consider using Process.terminate on processes which never use any shared resources.
Joining processes that use queues
Bear in mind that a process that has put items in a queue will wait before terminating until all the buffered items are fed by the “feeder” thread to the underlying pipe. (The child process can call the
cancel_join_thread()
method of the queue to avoid this behaviour.)这意味着,任何使用队列的时候,你都要确保在进程join之前,所有存放到队列中的项将会被其他进程、线程完全消费。否则不能保证这个写过队列的进程可以正常终止。记住非精灵进程会自动 join 。
下面是一个会导致死锁的例子:
from multiprocessing import Process, Queue def f(q): q.put('X' * 1000000) if __name__ == '__main__': queue = Queue() p = Process(target=f, args=(queue,)) p.start() p.join() # this deadlocks obj = queue.get()交换最后两行可以修复这个问题(或者直接删掉
p.join()
)。
Explicitly pass resources to child processes
On Unix a child process can make use of a shared resource created in a parent process using a global resource. However, it is better to pass the object as an argument to the constructor for the child process.
Apart from making the code (potentially) compatible with Windows this also ensures that as long as the child process is still alive the object will not be garbage collected in the parent process. This might be important if some resource is freed when the object is garbage collected in the parent process.
So for instance
from multiprocessing import Process, Lock

def f():
    ... do something using "lock" ...

if __name__ == '__main__':
    lock = Lock()
    for i in range(10):
        Process(target=f).start()
should be rewritten as
from multiprocessing import Process, Lock

def f(l):
    ... do something using "l" ...

if __name__ == '__main__':
    lock = Lock()
    for i in range(10):
        Process(target=f, args=(lock,)).start()
Beware of replacing sys.stdin with a "file like object"
multiprocessing originally unconditionally called:
os.close(sys.stdin.fileno())
in the multiprocessing.Process._bootstrap() method - this resulted in issues with processes-in-processes. This has been changed to:
sys.stdin.close()
sys.stdin = open(os.devnull)
Which solves the fundamental issue of processes colliding with each other resulting in a bad file descriptor error, but introduces a potential danger to applications which replace sys.stdin() with a "file-like object" with output buffering. This danger is that if multiple processes call close() on this file-like object, it could result in the same data being flushed to the object multiple times, resulting in corruption.
If you write a file-like object and implement your own caching, you can make it fork-safe by storing the pid whenever you append to the cache, and discarding the cache when the pid changes. For example:
@property
def cache(self):
    pid = os.getpid()
    if pid != self._pid:
        self._pid = pid
        self._cache = []
    return self._cache
16.6.3.2. Windows¶
Since Windows lacks os.fork()
it has a few extra restrictions:
More picklability
Ensure that all arguments to
Process.__init__()
are picklable. This means, in particular, that bound or unbound methods cannot be used directly as thetarget
argument on Windows — just define a function and use that instead.Also, if you subclass
Process
then make sure that instances will be picklable when theProcess.start
method is called.
Global variables
Bear in mind that if code run in a child process tries to access a global variable, then the value it sees (if any) may not be the same as the value in the parent process at the time that Process.start was called.
However, global variables which are just module level constants cause no problems.
Safe importing of main module
Make sure that the main module can be safely imported by a new Python interpreter without causing unintended side effects (such as starting a new process).
For example, under Windows running the following module would fail with a RuntimeError:
from multiprocessing import Process

def foo():
    print 'hello'

p = Process(target=foo)
p.start()
Instead one should protect the "entry point" of the program by using if __name__ == '__main__': as follows:
from multiprocessing import Process, freeze_support

def foo():
    print 'hello'

if __name__ == '__main__':
    freeze_support()
    p = Process(target=foo)
    p.start()
(The freeze_support() line can be omitted if the program will be run normally instead of frozen.)
This allows the newly spawned Python interpreter to safely import the module and then run the module's foo() function.
Similar restrictions apply if a pool or manager is created in the main module.
16.6.4. Examples¶
Demonstration of how to create and use customized managers and proxies:
#
# This module shows how to use arbitrary callables with a subclass of
# `BaseManager`.
#
# Copyright (c) 2006-2008, R Oudkerk
# All rights reserved.
#
from multiprocessing import freeze_support
from multiprocessing.managers import BaseManager, BaseProxy
import operator
##
class Foo(object):
def f(self):
print 'you called Foo.f()'
def g(self):
print 'you called Foo.g()'
def _h(self):
print 'you called Foo._h()'
# A simple generator function
def baz():
for i in xrange(10):
yield i*i
# Proxy type for generator objects
class GeneratorProxy(BaseProxy):
_exposed_ = ('next', '__next__')
def __iter__(self):
return self
def next(self):
return self._callmethod('next')
def __next__(self):
return self._callmethod('__next__')
# Function to return the operator module
def get_operator_module():
return operator
##
class MyManager(BaseManager):
pass
# register the Foo class; make `f()` and `g()` accessible via proxy
MyManager.register('Foo1', Foo)
# register the Foo class; make `g()` and `_h()` accessible via proxy
MyManager.register('Foo2', Foo, exposed=('g', '_h'))
# register the generator function baz; use `GeneratorProxy` to make proxies
MyManager.register('baz', baz, proxytype=GeneratorProxy)
# register get_operator_module(); make public functions accessible via proxy
MyManager.register('operator', get_operator_module)
##
def test():
manager = MyManager()
manager.start()
print '-' * 20
f1 = manager.Foo1()
f1.f()
f1.g()
assert not hasattr(f1, '_h')
assert sorted(f1._exposed_) == sorted(['f', 'g'])
print '-' * 20
f2 = manager.Foo2()
f2.g()
f2._h()
assert not hasattr(f2, 'f')
assert sorted(f2._exposed_) == sorted(['g', '_h'])
print '-' * 20
it = manager.baz()
for i in it:
print '<%d>' % i,
print
print '-' * 20
op = manager.operator()
print 'op.add(23, 45) =', op.add(23, 45)
print 'op.pow(2, 94) =', op.pow(2, 94)
print 'op.getslice(range(10), 2, 6) =', op.getslice(range(10), 2, 6)
print 'op.repeat(range(5), 3) =', op.repeat(range(5), 3)
print 'op._exposed_ =', op._exposed_
##
if __name__ == '__main__':
freeze_support()
test()
Using Pool:
#
# A test of `multiprocessing.Pool` class
#
# Copyright (c) 2006-2008, R Oudkerk
# All rights reserved.
#
import multiprocessing
import time
import random
import sys
#
# Functions used by test code
#
def calculate(func, args):
result = func(*args)
return '%s says that %s%s = %s' % (
multiprocessing.current_process().name,
func.__name__, args, result
)
def calculatestar(args):
return calculate(*args)
def mul(a, b):
time.sleep(0.5*random.random())
return a * b
def plus(a, b):
time.sleep(0.5*random.random())
return a + b
def f(x):
return 1.0 / (x-5.0)
def pow3(x):
return x**3
def noop(x):
pass
#
# Test code
#
def test():
print 'cpu_count() = %d\n' % multiprocessing.cpu_count()
#
# Create pool
#
PROCESSES = 4
print 'Creating pool with %d processes\n' % PROCESSES
pool = multiprocessing.Pool(PROCESSES)
print 'pool = %s' % pool
print
#
# Tests
#
TASKS = [(mul, (i, 7)) for i in range(10)] + \
[(plus, (i, 8)) for i in range(10)]
results = [pool.apply_async(calculate, t) for t in TASKS]
imap_it = pool.imap(calculatestar, TASKS)
imap_unordered_it = pool.imap_unordered(calculatestar, TASKS)
print 'Ordered results using pool.apply_async():'
for r in results:
print '\t', r.get()
print
print 'Ordered results using pool.imap():'
for x in imap_it:
print '\t', x
print
print 'Unordered results using pool.imap_unordered():'
for x in imap_unordered_it:
print '\t', x
print
print 'Ordered results using pool.map() --- will block till complete:'
for x in pool.map(calculatestar, TASKS):
print '\t', x
print
#
# Simple benchmarks
#
N = 100000
print 'def pow3(x): return x**3'
t = time.time()
A = map(pow3, xrange(N))
print '\tmap(pow3, xrange(%d)):\n\t\t%s seconds' % \
(N, time.time() - t)
t = time.time()
B = pool.map(pow3, xrange(N))
print '\tpool.map(pow3, xrange(%d)):\n\t\t%s seconds' % \
(N, time.time() - t)
t = time.time()
C = list(pool.imap(pow3, xrange(N), chunksize=N//8))
print '\tlist(pool.imap(pow3, xrange(%d), chunksize=%d)):\n\t\t%s' \
' seconds' % (N, N//8, time.time() - t)
assert A == B == C, (len(A), len(B), len(C))
print
L = [None] * 1000000
print 'def noop(x): pass'
print 'L = [None] * 1000000'
t = time.time()
A = map(noop, L)
print '\tmap(noop, L):\n\t\t%s seconds' % \
(time.time() - t)
t = time.time()
B = pool.map(noop, L)
print '\tpool.map(noop, L):\n\t\t%s seconds' % \
(time.time() - t)
t = time.time()
C = list(pool.imap(noop, L, chunksize=len(L)//8))
print '\tlist(pool.imap(noop, L, chunksize=%d)):\n\t\t%s seconds' % \
(len(L)//8, time.time() - t)
assert A == B == C, (len(A), len(B), len(C))
print
del A, B, C, L
#
# Test error handling
#
print 'Testing error handling:'
try:
print pool.apply(f, (5,))
except ZeroDivisionError:
print '\tGot ZeroDivisionError as expected from pool.apply()'
else:
raise AssertionError('expected ZeroDivisionError')
try:
print pool.map(f, range(10))
except ZeroDivisionError:
print '\tGot ZeroDivisionError as expected from pool.map()'
else:
raise AssertionError('expected ZeroDivisionError')
try:
print list(pool.imap(f, range(10)))
except ZeroDivisionError:
print '\tGot ZeroDivisionError as expected from list(pool.imap())'
else:
raise AssertionError('expected ZeroDivisionError')
it = pool.imap(f, range(10))
for i in range(10):
try:
x = it.next()
except ZeroDivisionError:
if i == 5:
pass
except StopIteration:
break
else:
if i == 5:
raise AssertionError('expected ZeroDivisionError')
assert i == 9
print '\tGot ZeroDivisionError as expected from IMapIterator.next()'
print
#
# Testing timeouts
#
print 'Testing ApplyResult.get() with timeout:',
res = pool.apply_async(calculate, TASKS[0])
while 1:
sys.stdout.flush()
try:
sys.stdout.write('\n\t%s' % res.get(0.02))
break
except multiprocessing.TimeoutError:
sys.stdout.write('.')
print
print
print 'Testing IMapIterator.next() with timeout:',
it = pool.imap(calculatestar, TASKS)
while 1:
sys.stdout.flush()
try:
sys.stdout.write('\n\t%s' % it.next(0.02))
except StopIteration:
break
except multiprocessing.TimeoutError:
sys.stdout.write('.')
print
print
#
# Testing callback
#
print 'Testing callback:'
A = []
B = [56, 0, 1, 8, 27, 64, 125, 216, 343, 512, 729]
r = pool.apply_async(mul, (7, 8), callback=A.append)
r.wait()
r = pool.map_async(pow3, range(10), callback=A.extend)
r.wait()
if A == B:
print '\tcallbacks succeeded\n'
else:
print '\t*** callbacks failed\n\t\t%s != %s\n' % (A, B)
#
# Check there are no outstanding tasks
#
assert not pool._cache, 'cache = %r' % pool._cache
#
# Check close() methods
#
print 'Testing close():'
for worker in pool._pool:
assert worker.is_alive()
result = pool.apply_async(time.sleep, [0.5])
pool.close()
pool.join()
assert result.get() is None
for worker in pool._pool:
assert not worker.is_alive()
print '\tclose() succeeded\n'
#
# Check terminate() method
#
print 'Testing terminate():'
pool = multiprocessing.Pool(2)
DELTA = 0.1
ignore = pool.apply(pow3, [2])
results = [pool.apply_async(time.sleep, [DELTA]) for i in range(100)]
pool.terminate()
pool.join()
for worker in pool._pool:
assert not worker.is_alive()
print '\tterminate() succeeded\n'
#
# Check garbage collection
#
print 'Testing garbage collection:'
pool = multiprocessing.Pool(2)
DELTA = 0.1
processes = pool._pool
ignore = pool.apply(pow3, [2])
results = [pool.apply_async(time.sleep, [DELTA]) for i in range(100)]
results = pool = None
time.sleep(DELTA * 2)
for worker in processes:
assert not worker.is_alive()
print '\tgarbage collection succeeded\n'
if __name__ == '__main__':
multiprocessing.freeze_support()
assert len(sys.argv) in (1, 2)
if len(sys.argv) == 1 or sys.argv[1] == 'processes':
print ' Using processes '.center(79, '-')
elif sys.argv[1] == 'threads':
print ' Using threads '.center(79, '-')
import multiprocessing.dummy as multiprocessing
else:
print 'Usage:\n\t%s [processes | threads]' % sys.argv[0]
raise SystemExit(2)
test()
Synchronization types like locks, conditions and queues:
#
# A test file for the `multiprocessing` package
#
# Copyright (c) 2006-2008, R Oudkerk
# All rights reserved.
#
import time, sys, random
from Queue import Empty
import multiprocessing # may get overwritten
#### TEST_VALUE
def value_func(running, mutex):
random.seed()
time.sleep(random.random()*4)
mutex.acquire()
print '\n\t\t\t' + str(multiprocessing.current_process()) + ' has finished'
running.value -= 1
mutex.release()
def test_value():
TASKS = 10
running = multiprocessing.Value('i', TASKS)
mutex = multiprocessing.Lock()
for i in range(TASKS):
p = multiprocessing.Process(target=value_func, args=(running, mutex))
p.start()
while running.value > 0:
time.sleep(0.08)
mutex.acquire()
print running.value,
sys.stdout.flush()
mutex.release()
print
print 'No more running processes'
#### TEST_QUEUE
def queue_func(queue):
for i in range(30):
time.sleep(0.5 * random.random())
queue.put(i*i)
queue.put('STOP')
def test_queue():
q = multiprocessing.Queue()
p = multiprocessing.Process(target=queue_func, args=(q,))
p.start()
o = None
while o != 'STOP':
try:
o = q.get(timeout=0.3)
print o,
sys.stdout.flush()
except Empty:
print 'TIMEOUT'
print
#### TEST_CONDITION
def condition_func(cond):
cond.acquire()
print '\t' + str(cond)
time.sleep(2)
print '\tchild is notifying'
print '\t' + str(cond)
cond.notify()
cond.release()
def test_condition():
cond = multiprocessing.Condition()
p = multiprocessing.Process(target=condition_func, args=(cond,))
print cond
cond.acquire()
print cond
cond.acquire()
print cond
p.start()
print 'main is waiting'
cond.wait()
print 'main has woken up'
print cond
cond.release()
print cond
cond.release()
p.join()
print cond
#### TEST_SEMAPHORE
def semaphore_func(sema, mutex, running):
sema.acquire()
mutex.acquire()
running.value += 1
print running.value, 'tasks are running'
mutex.release()
random.seed()
time.sleep(random.random()*2)
mutex.acquire()
running.value -= 1
print '%s has finished' % multiprocessing.current_process()
mutex.release()
sema.release()
def test_semaphore():
sema = multiprocessing.Semaphore(3)
mutex = multiprocessing.RLock()
running = multiprocessing.Value('i', 0)
processes = [
multiprocessing.Process(target=semaphore_func,
args=(sema, mutex, running))
for i in range(10)
]
for p in processes:
p.start()
for p in processes:
p.join()
#### TEST_JOIN_TIMEOUT
def join_timeout_func():
print '\tchild sleeping'
time.sleep(5.5)
print '\n\tchild terminating'
def test_join_timeout():
p = multiprocessing.Process(target=join_timeout_func)
p.start()
print 'waiting for process to finish'
while 1:
p.join(timeout=1)
if not p.is_alive():
break
print '.',
sys.stdout.flush()
#### TEST_EVENT
def event_func(event):
print '\t%r is waiting' % multiprocessing.current_process()
event.wait()
print '\t%r has woken up' % multiprocessing.current_process()
def test_event():
event = multiprocessing.Event()
processes = [multiprocessing.Process(target=event_func, args=(event,))
for i in range(5)]
for p in processes:
p.start()
print 'main is sleeping'
time.sleep(2)
print 'main is setting event'
event.set()
for p in processes:
p.join()
#### TEST_SHAREDVALUES
def sharedvalues_func(values, arrays, shared_values, shared_arrays):
for i in range(len(values)):
v = values[i][1]
sv = shared_values[i].value
assert v == sv
for i in range(len(values)):
a = arrays[i][1]
sa = list(shared_arrays[i][:])
assert a == sa
print 'Tests passed'
def test_sharedvalues():
values = [
('i', 10),
('h', -2),
('d', 1.25)
]
arrays = [
('i', range(100)),
('d', [0.25 * i for i in range(100)]),
('H', range(1000))
]
shared_values = [multiprocessing.Value(id, v) for id, v in values]
shared_arrays = [multiprocessing.Array(id, a) for id, a in arrays]
p = multiprocessing.Process(
target=sharedvalues_func,
args=(values, arrays, shared_values, shared_arrays)
)
p.start()
p.join()
assert p.exitcode == 0
####
def test(namespace=multiprocessing):
global multiprocessing
multiprocessing = namespace
for func in [ test_value, test_queue, test_condition,
test_semaphore, test_join_timeout, test_event,
test_sharedvalues ]:
print '\n\t######## %s\n' % func.__name__
func()
ignore = multiprocessing.active_children() # cleanup any old processes
if hasattr(multiprocessing, '_debug_info'):
info = multiprocessing._debug_info()
if info:
print info
raise ValueError('there should be no positive refcounts left')
if __name__ == '__main__':
multiprocessing.freeze_support()
assert len(sys.argv) in (1, 2)
if len(sys.argv) == 1 or sys.argv[1] == 'processes':
print ' Using processes '.center(79, '-')
namespace = multiprocessing
elif sys.argv[1] == 'manager':
print ' Using processes and a manager '.center(79, '-')
namespace = multiprocessing.Manager()
namespace.Process = multiprocessing.Process
namespace.current_process = multiprocessing.current_process
namespace.active_children = multiprocessing.active_children
elif sys.argv[1] == 'threads':
print ' Using threads '.center(79, '-')
import multiprocessing.dummy as namespace
else:
print 'Usage:\n\t%s [processes | manager | threads]' % sys.argv[0]
raise SystemExit(2)
test(namespace)
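The namespace-swapping driver above works because multiprocessing.dummy exposes the same API backed by threads, so identical calling code can exercise either backend. A minimal sketch of that interchangeability (the hello() function is illustrative, not part of the test file):

import multiprocessing
import multiprocessing.dummy

def hello():
    print 'hello'

def run_with(namespace):
    # Identical calling code drives either real processes or threads.
    p = namespace.Process(target=hello)
    p.start()
    p.join()

if __name__ == '__main__':
    multiprocessing.freeze_support()
    run_with(multiprocessing)        # real processes
    run_with(multiprocessing.dummy)  # threads behind the same API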
An example showing how to use queues to feed tasks to a collection of worker processes and collect the results:
#
# Simple example which uses a pool of workers to carry out some tasks.
#
# Notice that the results will probably not come out of the output
# queue in the same order as the corresponding tasks were
# put on the input queue. If it is important to get the results back
# in the original order then consider using `Pool.map()` or
# `Pool.imap()` (which will save on the amount of code needed anyway).
#
# Copyright (c) 2006-2008, R Oudkerk
# All rights reserved.
#
import time
import random
from multiprocessing import Process, Queue, current_process, freeze_support
#
# Function run by worker processes
#
def worker(input, output):
for func, args in iter(input.get, 'STOP'):
result = calculate(func, args)
output.put(result)
#
# Function used to calculate result
#
def calculate(func, args):
result = func(*args)
return '%s says that %s%s = %s' % \
(current_process().name, func.__name__, args, result)
#
# Functions referenced by tasks
#
def mul(a, b):
time.sleep(0.5*random.random())
return a * b
def plus(a, b):
time.sleep(0.5*random.random())
return a + b
#
#
#
def test():
NUMBER_OF_PROCESSES = 4
TASKS1 = [(mul, (i, 7)) for i in range(20)]
TASKS2 = [(plus, (i, 8)) for i in range(10)]
# Create queues
task_queue = Queue()
done_queue = Queue()
# Submit tasks
for task in TASKS1:
task_queue.put(task)
# Start worker processes
for i in range(NUMBER_OF_PROCESSES):
Process(target=worker, args=(task_queue, done_queue)).start()
# Get and print results
print 'Unordered results:'
for i in range(len(TASKS1)):
print '\t', done_queue.get()
# Add more tasks using `put()`
for task in TASKS2:
task_queue.put(task)
# Get and print some more results
for i in range(len(TASKS2)):
print '\t', done_queue.get()
# Tell child processes to stop
for i in range(NUMBER_OF_PROCESSES):
task_queue.put('STOP')
if __name__ == '__main__':
freeze_support()
test()
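As the comment at the top of this example notes, Pool.map() returns results in input order and needs far less code when ordering matters. A minimal sketch of that alternative (mul7() is illustrative, mirroring the mul() task above with the second operand fixed):

from multiprocessing import Pool, freeze_support

def mul7(x):
    return x * 7

if __name__ == '__main__':
    freeze_support()
    pool = Pool(4)
    # Unlike done_queue above, results arrive in input order.
    print pool.map(mul7, range(20))
    pool.close()
    pool.join()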
An example of how a pool of worker processes can each run a BaseHTTPServer.HTTPServer instance while sharing a single listening socket.
#
# Example where a pool of http servers share a single listening socket
#
# On Windows this module depends on the ability to pickle a socket
# object so that the worker processes can inherit a copy of the server
# object. (We import `multiprocessing.reduction` to enable this pickling.)
#
# Not sure if we should synchronize access to `socket.accept()` method by
# using a process-shared lock -- does not seem to be necessary.
#
# Copyright (c) 2006-2008, R Oudkerk
# All rights reserved.
#
import os
import sys
from multiprocessing import Process, current_process, freeze_support
from BaseHTTPServer import HTTPServer
from SimpleHTTPServer import SimpleHTTPRequestHandler
if sys.platform == 'win32':
    import multiprocessing.reduction    # make sockets picklable/inheritable
def note(format, *args):
sys.stderr.write('[%s]\t%s\n' % (current_process().name, format%args))
class RequestHandler(SimpleHTTPRequestHandler):
# we override log_message() to show which process is handling the request
def log_message(self, format, *args):
note(format, *args)
def serve_forever(server):
note('starting server')
try:
server.serve_forever()
except KeyboardInterrupt:
pass
def runpool(address, number_of_processes):
# create a single server object -- children will each inherit a copy
server = HTTPServer(address, RequestHandler)
# create child processes to act as workers
for i in range(number_of_processes-1):
Process(target=serve_forever, args=(server,)).start()
# main process also acts as a worker
serve_forever(server)
def test():
DIR = os.path.join(os.path.dirname(__file__), '..')
ADDRESS = ('localhost', 8000)
NUMBER_OF_PROCESSES = 4
print 'Serving at http://%s:%d using %d worker processes' % \
(ADDRESS[0], ADDRESS[1], NUMBER_OF_PROCESSES)
print 'To exit press Ctrl-' + ['C', 'Break'][sys.platform=='win32']
os.chdir(DIR)
runpool(ADDRESS, NUMBER_OF_PROCESSES)
if __name__ == '__main__':
freeze_support()
test()
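A quick way to see the socket sharing in action is to issue a few requests while the server above is running; each one may be logged by a different worker, visible in the [Process-N] prefixes on stderr. A minimal client-side sketch (assumes the server is already running on localhost:8000 in another shell):

import urllib2

for i in range(4):
    # Each request may be handled by a different worker process.
    urllib2.urlopen('http://localhost:8000/').read()
print 'fetched 4 pages'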
Some simple benchmarks comparing multiprocessing with threading:
#
# Simple benchmarks for the multiprocessing package
#
# Copyright (c) 2006-2008, R Oudkerk
# All rights reserved.
#
import time, sys, multiprocessing, threading, Queue, gc
if sys.platform == 'win32':
_timer = time.clock
else:
_timer = time.time
delta = 1
#### TEST_QUEUESPEED
def queuespeed_func(q, c, iterations):
a = '0' * 256
c.acquire()
c.notify()
c.release()
for i in xrange(iterations):
q.put(a)
q.put('STOP')
def test_queuespeed(Process, q, c):
elapsed = 0
iterations = 1
while elapsed < delta:
iterations *= 2
p = Process(target=queuespeed_func, args=(q, c, iterations))
c.acquire()
p.start()
c.wait()
c.release()
result = None
t = _timer()
while result != 'STOP':
result = q.get()
elapsed = _timer() - t
p.join()
print iterations, 'objects passed through the queue in', elapsed, 'seconds'
print 'average number/sec:', iterations/elapsed
#### TEST_PIPESPEED
def pipe_func(c, cond, iterations):
a = '0' * 256
cond.acquire()
cond.notify()
cond.release()
for i in xrange(iterations):
c.send(a)
c.send('STOP')
def test_pipespeed():
c, d = multiprocessing.Pipe()
cond = multiprocessing.Condition()
elapsed = 0
iterations = 1
while elapsed < delta:
iterations *= 2
p = multiprocessing.Process(target=pipe_func,
args=(d, cond, iterations))
cond.acquire()
p.start()
cond.wait()
cond.release()
result = None
t = _timer()
while result != 'STOP':
result = c.recv()
elapsed = _timer() - t
p.join()
    print iterations, 'objects passed through connection in', elapsed, 'seconds'
print 'average number/sec:', iterations/elapsed
#### TEST_SEQSPEED
def test_seqspeed(seq):
elapsed = 0
iterations = 1
while elapsed < delta:
iterations *= 2
t = _timer()
for i in xrange(iterations):
a = seq[5]
elapsed = _timer()-t
print iterations, 'iterations in', elapsed, 'seconds'
print 'average number/sec:', iterations/elapsed
#### TEST_LOCK
def test_lockspeed(l):
elapsed = 0
iterations = 1
while elapsed < delta:
iterations *= 2
t = _timer()
for i in xrange(iterations):
l.acquire()
l.release()
elapsed = _timer()-t
print iterations, 'iterations in', elapsed, 'seconds'
print 'average number/sec:', iterations/elapsed
#### TEST_CONDITION
def conditionspeed_func(c, N):
c.acquire()
c.notify()
for i in xrange(N):
c.wait()
c.notify()
c.release()
def test_conditionspeed(Process, c):
elapsed = 0
iterations = 1
while elapsed < delta:
iterations *= 2
c.acquire()
p = Process(target=conditionspeed_func, args=(c, iterations))
p.start()
c.wait()
t = _timer()
for i in xrange(iterations):
c.notify()
c.wait()
elapsed = _timer()-t
c.release()
p.join()
print iterations * 2, 'waits in', elapsed, 'seconds'
print 'average number/sec:', iterations * 2 / elapsed
####
def test():
manager = multiprocessing.Manager()
gc.disable()
print '\n\t######## testing Queue.Queue\n'
test_queuespeed(threading.Thread, Queue.Queue(),
threading.Condition())
print '\n\t######## testing multiprocessing.Queue\n'
test_queuespeed(multiprocessing.Process, multiprocessing.Queue(),
multiprocessing.Condition())
print '\n\t######## testing Queue managed by server process\n'
test_queuespeed(multiprocessing.Process, manager.Queue(),
manager.Condition())
print '\n\t######## testing multiprocessing.Pipe\n'
test_pipespeed()
print
print '\n\t######## testing list\n'
test_seqspeed(range(10))
print '\n\t######## testing list managed by server process\n'
test_seqspeed(manager.list(range(10)))
print '\n\t######## testing Array("i", ..., lock=False)\n'
test_seqspeed(multiprocessing.Array('i', range(10), lock=False))
print '\n\t######## testing Array("i", ..., lock=True)\n'
test_seqspeed(multiprocessing.Array('i', range(10), lock=True))
print
print '\n\t######## testing threading.Lock\n'
test_lockspeed(threading.Lock())
print '\n\t######## testing threading.RLock\n'
test_lockspeed(threading.RLock())
print '\n\t######## testing multiprocessing.Lock\n'
test_lockspeed(multiprocessing.Lock())
print '\n\t######## testing multiprocessing.RLock\n'
test_lockspeed(multiprocessing.RLock())
print '\n\t######## testing lock managed by server process\n'
test_lockspeed(manager.Lock())
print '\n\t######## testing rlock managed by server process\n'
test_lockspeed(manager.RLock())
print
print '\n\t######## testing threading.Condition\n'
test_conditionspeed(threading.Thread, threading.Condition())
print '\n\t######## testing multiprocessing.Condition\n'
test_conditionspeed(multiprocessing.Process, multiprocessing.Condition())
print '\n\t######## testing condition managed by a server process\n'
test_conditionspeed(multiprocessing.Process, manager.Condition())
gc.enable()
if __name__ == '__main__':
multiprocessing.freeze_support()
test()
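All of the benchmarks above share one calibration idiom: keep doubling the iteration count until a run takes at least delta seconds, then report the rate, so that fast operations are still timed over a meaningful interval. A standalone sketch of that idiom (the work() placeholder is illustrative):

import time

def work():
    pass  # placeholder for the operation being timed

def calibrated_rate(delta=1.0):
    elapsed = 0
    iterations = 1
    while elapsed < delta:
        # Double the workload until the measurement is long enough
        # to dominate timer resolution and startup noise.
        iterations *= 2
        t = time.time()
        for i in xrange(iterations):
            work()
        elapsed = time.time() - t
    return iterations / elapsed

if __name__ == '__main__':
    print 'average number/sec:', calibrated_rate()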