I have a multiprocessing job where I queue read-only numpy arrays as part of a producer-consumer pipeline. Currently they are being pickled, because that is the default behaviour of multiprocessing.Queue, and it hurts performance.

Is there any pythonic way to pass references to shared memory instead of pickling the arrays?

Unfortunately the arrays are generated after the consumer is started, and there is no easy way around that. (So the global-variable approach would be ugly...)

[Note that in the following code we do not expect h(x0) and h(x1) to be computed in parallel. Instead we see h(x0) and g(h(x1)) computed in parallel (like pipelining in a CPU).]
from multiprocessing import Process, Queue
import numpy as np

class __EndToken(object):
    pass

def parallel_pipeline(buffer_size=50):
    def parallel_pipeline_with_args(f):
        def consumer(xs, q):
            for x in xs:
                q.put(x)
            q.put(__EndToken())
        def parallel_generator(f_xs):
            q = Queue(buffer_size)
            consumer_process = Process(target=consumer, args=(f_xs, q))
            consumer_process.start()
            while True:
                x = q.get()
                if isinstance(x, __EndToken):
                    break
                yield x
        def f_wrapper(xs):
            return parallel_generator(f(xs))
        return f_wrapper
    return parallel_pipeline_with_args

@parallel_pipeline(3)
def f(xs):
    for x in xs:
        yield x + 1.0

@parallel_pipeline(3)
def g(xs):
    for x in xs:
        yield x * 3

@parallel_pipeline(3)
def h(xs):
    for x in xs:
        yield x * x

def xs():
    for i in range(1000):
        yield np.random.uniform(0, 1, (500, 2000))

if __name__ == "__main__":
    rs = f(g(h(xs())))
    for r in rs:
        print r
Answer 0 (score: 13)
Since you're using numpy, you can take advantage of the fact that the global interpreter lock is released during numpy computations. This means you can do parallel processing with standard threads and shared memory, instead of multiprocessing and inter-process communication. Here's a version of your code, tweaked to use threading.Thread and Queue.Queue instead of multiprocessing.Process and multiprocessing.Queue. This passes a numpy ndarray through a queue without pickling it. On my computer, this runs about 3 times faster than your code. (However, it's only about 20% faster than the serial version of your code. I've suggested some other approaches further down.)
from threading import Thread
from Queue import Queue
import numpy as np

class __EndToken(object):
    pass

def parallel_pipeline(buffer_size=50):
    def parallel_pipeline_with_args(f):
        def consumer(xs, q):
            for x in xs:
                q.put(x)
            q.put(__EndToken())
        def parallel_generator(f_xs):
            q = Queue(buffer_size)
            consumer_process = Thread(target=consumer, args=(f_xs, q))
            consumer_process.start()
            while True:
                x = q.get()
                if isinstance(x, __EndToken):
                    break
                yield x
        def f_wrapper(xs):
            return parallel_generator(f(xs))
        return f_wrapper
    return parallel_pipeline_with_args

@parallel_pipeline(3)
def f(xs):
    for x in xs:
        yield x + 1.0

@parallel_pipeline(3)
def g(xs):
    for x in xs:
        yield x * 3

@parallel_pipeline(3)
def h(xs):
    for x in xs:
        yield x * x

def xs():
    for i in range(1000):
        yield np.random.uniform(0, 1, (500, 2000))

rs = f(g(h(xs())))
%time print sum(r.sum() for r in rs)   # 12.2s
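If you want to sanity-check that numpy really does release the GIL on your machine, a minimal timing sketch along these lines (two threads running a large BLAS-backed matrix product) should finish in well under twice the single-run wall-clock time on a multi-core box:

from threading import Thread
import time
import numpy as np

a = np.random.uniform(0, 1, (2000, 2000))

def work():
    for _ in range(5):
        a.dot(a)   # BLAS call; numpy releases the GIL while it runs

t0 = time.time()
work(); work()
print "serial:   %.2fs" % (time.time() - t0)

t0 = time.time()
threads = [Thread(target=work) for _ in range(2)]
for t in threads: t.start()
for t in threads: t.join()
print "threaded: %.2fs" % (time.time() - t0)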
Another option, closer to what you asked for, is to keep using the multiprocessing package but pass data between processes using arrays stored in shared memory. The code below creates a new ArrayQueue class to do that. An ArrayQueue object should be created before the child processes are spawned. It creates and manages a pool of numpy arrays backed by shared memory. When a result array is pushed onto the queue, ArrayQueue copies the data from that array into an existing shared-memory array, then passes the id of the shared-memory array through the queue. This is much faster than sending the whole array through the queue, since it avoids pickling the arrays. It has similar performance to the threaded version above (about 10% slower), and may scale better if the global interpreter lock is an issue (i.e., there's a lot of python code running in the functions).
from multiprocessing import Process, Queue, Array
import numpy as np

class ArrayQueue(object):
    def __init__(self, template, maxsize=0):
        if type(template) is not np.ndarray:
            raise ValueError('ArrayQueue(template, maxsize) must use a numpy.ndarray as the template.')
        if maxsize == 0:
            # this queue cannot be infinite, because it will be backed by real objects
            raise ValueError('ArrayQueue(template, maxsize) must use a finite value for maxsize.')

        # find the size and data type for the arrays
        # note: every ndarray put on the queue must be this size
        self.dtype = template.dtype
        self.shape = template.shape
        self.byte_count = len(template.data)

        # make a pool of numpy arrays, each backed by shared memory,
        # and create a queue to keep track of which ones are free
        self.array_pool = [None] * maxsize
        self.free_arrays = Queue(maxsize)
        for i in range(maxsize):
            buf = Array('c', self.byte_count, lock=False)
            self.array_pool[i] = np.frombuffer(buf, dtype=self.dtype).reshape(self.shape)
            self.free_arrays.put(i)

        self.q = Queue(maxsize)

    def put(self, item, *args, **kwargs):
        if type(item) is np.ndarray:
            if item.dtype == self.dtype and item.shape == self.shape and len(item.data) == self.byte_count:
                # get the ID of an available shared-memory array
                id = self.free_arrays.get()
                # copy item to the shared-memory array
                self.array_pool[id][:] = item
                # put the array's id (not the whole array) onto the queue
                new_item = id
            else:
                raise ValueError(
                    'ndarray does not match type or shape of template used to initialize ArrayQueue'
                )
        else:
            # not an ndarray
            # put the original item on the queue (as a tuple, so we know it's not an ID)
            new_item = (item,)
        self.q.put(new_item, *args, **kwargs)

    def get(self, *args, **kwargs):
        item = self.q.get(*args, **kwargs)
        if type(item) is tuple:
            # unpack the original item
            return item[0]
        else:
            # item is the id of a shared-memory array
            # copy the array
            arr = self.array_pool[item].copy()
            # put the shared-memory array back into the pool
            self.free_arrays.put(item)
            return arr

class __EndToken(object):
    pass

def parallel_pipeline(buffer_size=50):
    def parallel_pipeline_with_args(f):
        def consumer(xs, q):
            for x in xs:
                q.put(x)
            q.put(__EndToken())
        def parallel_generator(f_xs):
            # the template must match the arrays that will pass through the queue
            q = ArrayQueue(template=np.zeros((500, 2000)), maxsize=buffer_size)
            consumer_process = Process(target=consumer, args=(f_xs, q))
            consumer_process.start()
            while True:
                x = q.get()
                if isinstance(x, __EndToken):
                    break
                yield x
        def f_wrapper(xs):
            return parallel_generator(f(xs))
        return f_wrapper
    return parallel_pipeline_with_args

@parallel_pipeline(3)
def f(xs):
    for x in xs:
        yield x + 1.0

@parallel_pipeline(3)
def g(xs):
    for x in xs:
        yield x * 3

@parallel_pipeline(3)
def h(xs):
    for x in xs:
        yield x * x

def xs():
    for i in range(1000):
        yield np.random.uniform(0, 1, (500, 2000))

print "multiprocessing with shared-memory arrays:"
%time print sum(r.sum() for r in f(g(h(xs()))))   # 13.5s
The code above is only about 20% faster than the single-threaded version (12.2s vs. the 14.8s serial version shown below). That is because each function runs in a single thread or process, and most of the work is done by xs(). The execution time of the example above is nearly the same as if you had just run %time print sum(1 for x in xs()).

If your real project has more intermediate functions and/or they are more complex than the ones you showed, then the workload may spread better among the processors, and this may not be a problem. However, if your workload really does resemble the code you provided, then you may want to refactor the code to allocate one sample to each thread instead of one function to each thread. That would look like the code below (both threading and multiprocessing versions are shown):
import multiprocessing
import threading, Queue
import numpy as np

def f(x):
    return x + 1.0

def g(x):
    return x * 3

def h(x):
    return x * x

def final(i):
    return f(g(h(x(i))))

def final_sum(i):
    return f(g(h(x(i)))).sum()

def x(i):
    # produce sample number i
    return np.random.uniform(0, 1, (500, 2000))

def rs_serial(func, n):
    for i in range(n):
        yield func(i)

def rs_parallel_threaded(func, n):
    todo = range(n)
    q = Queue.Queue(2*n_workers)
    def worker():
        while True:
            try:
                # the global interpreter lock ensures only one thread does this at a time
                i = todo.pop()
                q.put(func(i))
            except IndexError:
                # none left to do
                q.put(None)
                break
    threads = []
    for j in range(n_workers):
        t = threading.Thread(target=worker)
        t.daemon = False
        threads.append(t)   # in case it's needed later
        t.start()
    while True:
        x = q.get()
        if x is None:
            break
        else:
            yield x

def rs_parallel_mp(func, n):
    pool = multiprocessing.Pool(n_workers)
    return pool.imap_unordered(func, range(n))

n_workers = 4
n_samples = 1000

print "serial:"              # 14.8s
%time print sum(r.sum() for r in rs_serial(final, n_samples))
print "threaded:"            # 10.1s
%time print sum(r.sum() for r in rs_parallel_threaded(final, n_samples))
print "mp return arrays:"    # 19.6s
%time print sum(r.sum() for r in rs_parallel_mp(final, n_samples))
print "mp return results:"   # 8.4s
%time print sum(r_sum for r_sum in rs_parallel_mp(final_sum, n_samples))
The threaded version of this code is only slightly faster than the first example I gave, and only about 30% faster than the serial version. That's less of a speedup than I would have expected; maybe Python is still getting partly bogged down by the GIL?

The multiprocessing version performs significantly faster than the original multiprocessing code, mainly because all of the functions are chained together in a single process, rather than queueing (and pickling) intermediate results. However, it is still slower than the serial version, because all of the result arrays have to be pickled (in the worker process) and unpickled (in the main process) before being returned by imap_unordered. But if you can arrange it so that your pipeline returns aggregated results instead of the complete arrays, then you can avoid the pickling overhead, and the multiprocessing version is fastest: about 43% faster than the serial version.
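To get a feel for how much of that time is pure serialization, here is a rough sketch (timings will vary by machine) that round-trips a full 500x2000 float array through pickle a hundred times, versus round-tripping just an aggregated scalar:

import cPickle as pickle
import time
import numpy as np

arr = np.random.uniform(0, 1, (500, 2000))

t0 = time.time()
for _ in range(100):
    pickle.loads(pickle.dumps(arr, pickle.HIGHEST_PROTOCOL))
print "100 array round-trips:  %.2fs" % (time.time() - t0)

t0 = time.time()
for _ in range(100):
    pickle.loads(pickle.dumps(arr.sum(), pickle.HIGHEST_PROTOCOL))
print "100 scalar round-trips: %.2fs" % (time.time() - t0)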
OK, now for the sake of completeness, here's a version of the second example that uses multiprocessing with your original generator functions instead of the finer-grained functions shown above. This uses some tricks to spread the samples among multiple processes, which may make it unsuitable for many workflows. But using generators does seem to be slightly faster than using the finer-grained functions, and this method can get you up to a 54% speedup vs. the serial version shown above. However, that is only available if you don't need to return the full arrays from the worker functions.
import multiprocessing, itertools, math
import numpy as np

def f(xs):
    for x in xs:
        yield x + 1.0

def g(xs):
    for x in xs:
        yield x * 3

def h(xs):
    for x in xs:
        yield x * x

def xs():
    for i in range(1000):
        yield np.random.uniform(0, 1, (500, 2000))

def final():
    return f(g(h(xs())))

def final_sum():
    for x in f(g(h(xs()))):
        yield x.sum()

def get_chunk(args):
    """Retrieve n values (n=args[1]) from a generator function (f=args[0]) and return them as a list.
    This runs in a worker process and does all the computation."""
    return list(itertools.islice(args[0](), args[1]))

def parallelize(gen_func, max_items, n_workers=4, chunk_size=50):
    """Pull up to max_items items from several copies of gen_func, in small groups in parallel processes.
    chunk_size should be big enough to improve efficiency (one copy of gen_func will be run for each chunk)
    but small enough to avoid exhausting memory (each worker will keep chunk_size items in memory)."""
    pool = multiprocessing.Pool(n_workers)

    # how many chunks will be needed to yield at least max_items items?
    n_chunks = int(math.ceil(float(max_items)/float(chunk_size)))

    # generate a suitable series of arguments for get_chunk()
    args_list = itertools.repeat((gen_func, chunk_size), n_chunks)

    # chunk_gen will yield a series of chunks (lists of results) from the generator function,
    # totaling n_chunks * chunk_size items (which is >= max_items)
    chunk_gen = pool.imap_unordered(get_chunk, args_list)

    # parallel_gen flattens the chunks, and yields individual items
    parallel_gen = itertools.chain.from_iterable(chunk_gen)

    # limit the output to max_items items
    return itertools.islice(parallel_gen, max_items)

# in this case, the parallel version is slower than a single process, probably
# due to overhead of gathering numpy arrays in imap_unordered (via pickle?)
print "serial, return arrays:"     # 15.3s
%time print sum(r.sum() for r in final())
print "parallel, return arrays:"   # 24.2s
%time print sum(r.sum() for r in parallelize(final, max_items=1000))

# in this case, the parallel version is more than twice as fast as the single-thread version
print "serial, return result:"     # 15.1s
%time print sum(r for r in final_sum())
print "parallel, return result:"   # 6.8s
%time print sum(r for r in parallelize(final_sum, max_items=1000))
Answer 1 (score: 0)
Your example doesn't seem to run on my computer, although that may be related to the fact that I'm running Windows (issues pickling anything not in the __main__ namespace, i.e. anything decorated)... would something like this help? (You would have to put a pack and unpack inside each of f(), g() and h().)

Note: I'm not sure this would actually be any faster... just a stab at what others have suggested.

from multiprocessing import Process, freeze_support
from multiprocessing.sharedctypes import Value, Array
import numpy as np

def package(arr):
    shape = Array('i', arr.shape, lock=False)
    if arr.dtype == float:
        ctype = Value('c', b'd')   # d for double, f for single
    if arr.dtype == int:
        ctype = Value('c', b'i')   # if statements could be avoided if data is always the same
    data = Array(ctype.value, arr.reshape(-1), lock=False)
    return data, shape

def unpack(data, shape):
    return np.array(data[:]).reshape(shape[:])

# test
def f(args):
    print(unpack(*args))

if __name__ == '__main__':
    freeze_support()
    a = np.array([1, 2, 3, 4, 5])
    a_packed = package(a)
    print('array has been packaged')
    p = Process(target=f, args=(a_packed,))
    print('passing to parallel process')
    p.start()
    print('joining to parent process')
    p.join()
    print('finished')
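To get a stage's result back to the parent without pickling it, one option is to package an output buffer up front and let the child write into it. A rough, untested sketch (f_stage and the pre-allocated output buffer are my own illustration, built on the package()/unpack() helpers above):

def f_stage(in_packed, out_packed):
    # illustrative only: unpack -> compute -> write back into shared memory
    x = unpack(*in_packed)             # copy the input out of shared ctypes
    y = x + 1.0                        # the actual work of f()
    out_packed[0][:] = y.reshape(-1)   # fill the pre-allocated shared data array

if __name__ == '__main__':
    freeze_support()
    a = np.array([1., 2., 3., 4., 5.])
    a_packed = package(a)
    out_packed = package(np.zeros_like(a))   # pre-allocate the result buffer
    p = Process(target=f_stage, args=(a_packed, out_packed))
    p.start()
    p.join()
    print(unpack(*out_packed))               # [ 2.  3.  4.  5.  6.]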
Answer 2 (score: 0)
Check out the Pathos-multiprocessing project, which avoids the standard multiprocessing reliance on pickling. This should allow you to get around both the inefficiencies of pickling and to access common memory for read-only shared resources. Note that while Pathos is nearing deployment in a full pip package, in the interim I'd recommend installing with pip install git+https://github.com/uqfoundation/pathos
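Once installed, a minimal sketch of what usage might look like (this assumes pathos's ProcessingPool API; pathos serializes with dill instead of pickle, so even decorated or interactively defined functions can be shipped to workers). It returns aggregated results to sidestep array serialization, in the spirit of the fastest variant above:

from pathos.multiprocessing import ProcessingPool
import numpy as np

def final_sum(i):
    x = np.random.uniform(0, 1, (500, 2000))   # produce sample i
    return ((x * x) * 3 + 1.0).sum()           # h, then g, then f, aggregated

pool = ProcessingPool(4)
print sum(pool.uimap(final_sum, range(1000)))  # unordered map over the workers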