What is the fundamental difference between queues and pipes in Python's multiprocessing package?
In what scenarios should you choose one over the other? When is it advantageous to use Pipe()?
When should you use Queue()?
Answer 0 (score: 223)
When to use each
Use Queue() if you need more than two points to communicate.
Use Pipe() if you need absolute performance: it is much faster, since Queue() is built on top of Pipe().
Performance benchmarking
Suppose you want to spawn two processes and send messages between them as fast as possible. These are the timing results of a drag race between similar tests using Pipe() and Queue()... This is on a Thinkpad T61 running Ubuntu 11.10 and Python 2.7.2.
FYI, I threw in results for JoinableQueue() as a bonus; JoinableQueue() accounts for tasks when queue.task_done() is called (it doesn't even know about the specific task, it just counts unfinished tasks in the queue), so that queue.join() knows the work is finished.
The code for each is at the bottom of this answer...
mpenning@mpenning-T61:~$ python multi_pipe.py
Sending 10000 numbers to Pipe() took 0.0369849205017 seconds
Sending 100000 numbers to Pipe() took 0.328398942947 seconds
Sending 1000000 numbers to Pipe() took 3.17266988754 seconds
mpenning@mpenning-T61:~$ python multi_queue.py
Sending 10000 numbers to Queue() took 0.105256080627 seconds
Sending 100000 numbers to Queue() took 0.980564117432 seconds
Sending 1000000 numbers to Queue() took 10.1611330509 seconds
mpenning@mpenning-T61:~$ python multi_joinablequeue.py
Sending 10000 numbers to JoinableQueue() took 0.172781944275 seconds
Sending 100000 numbers to JoinableQueue() took 1.5714070797 seconds
Sending 1000000 numbers to JoinableQueue() took 15.8527247906 seconds
mpenning@mpenning-T61:~$
In summary, Pipe() is about three times faster than Queue(). Don't even think about JoinableQueue() unless you really must have its benefits.
BONUS MATERIAL 2
Multiprocessing introduces a subtle shift in information flow that makes debugging hard unless you know some shortcuts. For instance, you might have a script that works fine when indexing through a dictionary under many conditions, but infrequently fails with certain inputs.
Normally we get clues to a failure when the entire python process crashes; however, unsolicited crash tracebacks are not printed to the console when a multiprocessing function crashes. Tracking down unknown multiprocessing crashes is hard without a clue to what crashed the process.
The simplest way I have found to track down multiprocessing crash information is to wrap the entire multiprocessing function in a try / except and use traceback.print_exc():
import traceback

def reader(args):
    try:
        # Insert stuff to be multiprocessed here
        return args[0]['that']
    except Exception:
        print("FATAL: reader({0}) exited while multiprocessing".format(args))
        traceback.print_exc()
Now, when you find a crash, you will see something like:
FATAL: reader([{'crash', 'this'}]) exited while multiprocessing
Traceback (most recent call last):
  File "foo.py", line 19, in __init__
    self.run(task_q, result_q)
  File "foo.py", line 46, in run
    raise ValueError
ValueError
Source code:
"""
multi_pipe.py
"""
from multiprocessing import Process, Pipe
import time
def reader_proc(pipe):
## Read from the pipe; this will be spawned as a separate Process
p_output, p_input = pipe
p_input.close() # We are only reading
while True:
msg = p_output.recv() # Read from the output pipe and do nothing
if msg=='DONE':
break
def writer(count, p_input):
for ii in xrange(0, count):
p_input.send(ii) # Write 'count' numbers into the input pipe
p_input.send('DONE')
if __name__=='__main__':
for count in [10**4, 10**5, 10**6]:
# Pipes are unidirectional with two endpoints: p_input ------> p_output
p_output, p_input = Pipe() # writer() writes to p_input from _this_ process
reader_p = Process(target=reader_proc, args=((p_output, p_input),))
reader_p.daemon = True
reader_p.start() # Launch the reader process
p_output.close() # We no longer need this part of the Pipe()
_start = time.time()
writer(count, p_input) # Send a lot of stuff to reader_proc()
p_input.close()
reader_p.join()
print("Sending {0} numbers to Pipe() took {1} seconds".format(count,
(time.time() - _start)))
"""
multi_queue.py
"""
from multiprocessing import Process, Queue
import time
import sys
def reader_proc(queue):
## Read from the queue; this will be spawned as a separate Process
while True:
msg = queue.get() # Read from the queue and do nothing
if (msg == 'DONE'):
break
def writer(count, queue):
## Write to the queue
for ii in range(0, count):
queue.put(ii) # Write 'count' numbers into the queue
queue.put('DONE')
if __name__=='__main__':
pqueue = Queue() # writer() writes to pqueue from _this_ process
for count in [10**4, 10**5, 10**6]:
### reader_proc() reads from pqueue as a separate process
reader_p = Process(target=reader_proc, args=((pqueue),))
reader_p.daemon = True
reader_p.start() # Launch reader_proc() as a separate python process
_start = time.time()
writer(count, pqueue) # Send a lot of stuff to reader()
reader_p.join() # Wait for the reader to finish
print("Sending {0} numbers to Queue() took {1} seconds".format(count,
(time.time() - _start)))
"""
multi_joinablequeue.py
"""
from multiprocessing import Process, JoinableQueue
import time
def reader_proc(queue):
## Read from the queue; this will be spawned as a separate Process
while True:
msg = queue.get() # Read from the queue and do nothing
queue.task_done()
def writer(count, queue):
for ii in xrange(0, count):
queue.put(ii) # Write 'count' numbers into the queue
if __name__=='__main__':
for count in [10**4, 10**5, 10**6]:
jqueue = JoinableQueue() # writer() writes to jqueue from _this_ process
# reader_proc() reads from jqueue as a different process...
reader_p = Process(target=reader_proc, args=((jqueue),))
reader_p.daemon = True
reader_p.start() # Launch the reader process
_start = time.time()
writer(count, jqueue) # Send a lot of stuff to reader_proc() (in different process)
jqueue.join() # Wait for the reader to finish
print("Sending {0} numbers to JoinableQueue() took {1} seconds".format(count,
(time.time() - _start)))
Answer 1 (score: 3)
Another noteworthy feature of Queue() is the feeder thread. This section of the documentation notes that "when a process first puts an item on the queue a feeder thread is started which transfers objects from a buffer into the pipe." An infinite number of (or maxsize) items can be inserted into Queue() without any calls to queue.put() blocking. This lets you store multiple items in a Queue() until your program is ready to process them.
Pipe(), on the other hand, has finite storage for items that have been sent to one connection but not yet received at the other. After that storage is used up, calls to connection.send() will block until there is space to write the entire item, stalling the writing thread until some other thread reads from the pipe. Connection objects give you access to the underlying file descriptor, and on *nix systems you can keep connection.send() calls from blocking by using the os.set_blocking() function. However, this causes problems if you then try to send a single item that does not fit in the pipe's file. Recent versions of Linux let you increase the size of that file, but the maximum allowed size varies with system configuration. You should therefore never rely on Pipe() to buffer data: a call to connection.send() could block until data is read from the pipe somewhere else.
In conclusion, when you need to buffer data, a queue is a better choice than a pipe, even if you only need communication between two points.