No improvement with multiprocessing

Date: 2016-07-21 20:20:21

Tags: python multithreading multiprocessing threadpool python-multiprocessing

I tested the performance of map, mp.dummy.Pool.map and mp.Pool.map:

import itertools
from multiprocessing import Pool
from multiprocessing.dummy import Pool as ThreadPool
import numpy as np
# wrapper function
def wrap(args): return args[0](*args[1:])
# make data arrays
x = np.random.rand(30, 100000)
y = np.random.rand(30, 100000)
# map
%timeit -n10 map(wrap, itertools.izip(itertools.repeat(np.correlate), x, y))
# mp.dummy.Pool.map
for i in range(2, 16, 2):
    print 'Thread Pool ', i, ' : ',
    t = ThreadPool(i)
    %timeit -n10 t.map(wrap, itertools.izip(itertools.repeat(np.correlate), x, y))
    t.close()
    t.join()
# mp.Pool.map
for i in range(2, 16, 2):
    print 'Process Pool ', i, ' : ',
    p = Pool(i)
    %timeit -n10 p.map(wrap, itertools.izip(itertools.repeat(np.correlate), x, y))
    p.close()
    p.join()

Output

 # in this case, one CPU core usage reaches 100%
 10 loops, best of 3: 3.16 ms per loop

 # in this case, all CPU core usages reach ~80%
 Thread Pool   2  : 10 loops, best of 3: 4.03 ms per loop
 Thread Pool   4  : 10 loops, best of 3: 3.3 ms per loop
 Thread Pool   6  : 10 loops, best of 3: 3.16 ms per loop
 Thread Pool   8  : 10 loops, best of 3: 4.48 ms per loop
 Thread Pool  10  : 10 loops, best of 3: 4.19 ms per loop
 Thread Pool  12  : 10 loops, best of 3: 4.03 ms per loop
 Thread Pool  14  : 10 loops, best of 3: 4.61 ms per loop

 # in this case, all CPU core usages reach 80-100%
 Process Pool   2  : 10 loops, best of 3: 71.7 ms per loop
 Process Pool   4  : 10 loops, best of 3: 128 ms per loop
 Process Pool   6  : 10 loops, best of 3: 165 ms per loop
 Process Pool   8  : 10 loops, best of 3: 145 ms per loop
 Process Pool  10  : 10 loops, best of 3: 259 ms per loop
 Process Pool  12  : 10 loops, best of 3: 176 ms per loop
 Process Pool  14  : 10 loops, best of 3: 176 ms per loop
  • Multithreading does improve the speed. Given the locking, that is acceptable.

  • Multiprocessing slows things down dramatically, which is surprising. I have 8 CPUs at 3.78 GHz, each with 4 cores.

If the shapes of x and y are increased to (300, 10000), i.e. 10 times as many rows, similar results can be seen.

But for small arrays of shape (20, 1000):

 10 loops, best of 3: 28.9 µs per loop

 Thread Pool  2  : 10 loops, best of 3: 429 µs per loop
 Thread Pool  4  : 10 loops, best of 3: 632 µs per loop
 ...

 Process Pool  2  : 10 loops, best of 3: 525 µs per loop
 Process Pool  4  : 10 loops, best of 3: 660 µs per loop
 ...
  • Multiprocessing and multithreading have similar performance.
  • A single process is much faster. (Because of the overhead of multiprocessing and multithreading? A rough way to measure it is sketched below.)
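
To put a number on that overhead, here is a minimal sketch (my own addition, not part of the original post) that maps a no-op function through both kinds of pool; since the tasks do essentially no work, the measured time per task is almost pure dispatch and pickling cost:

import time
from multiprocessing import Pool
from multiprocessing.dummy import Pool as ThreadPool

def noop(i):
    return i

for make_pool, name in [(ThreadPool, 'Thread Pool'), (Pool, 'Process Pool')]:
    p = make_pool(4)
    p.map(noop, range(30))      # warm up the workers first
    t0 = time.time()
    p.map(noop, range(30))      # 30 tasks that do essentially nothing
    dt = time.time() - t0
    print '%s: %.1f us per task' % (name, dt / 30 * 1e6)
    p.close()
    p.join()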

Anyway, I really did not expect multiprocessing to perform this badly, even when mapping such a simple function. How can this happen?

As @TrevorMerrifield suggested, I modified the code to avoid passing large arrays to wrap:

from multiprocessing import Pool
from multiprocessing.dummy import Pool as ThreadPool
import numpy as np
n = 30
m = 1000
# make data in wrap
def wrap(i):
    x = np.random.rand(m)
    y = np.random.rand(m)
    return np.correlate(x, y)
# map
print 'Single process :',
%timeit -n10 map(wrap, range(n))
# mp.dummy.Pool.map
print '---'
print 'Thread Pool %2d : '%(4),
t = ThreadPool(4)
%timeit -n10 t.map(wrap, range(n))
t.close()
t.join()
print '---'
# mp.Pool.map, function must be defined before making Pool
print 'Process Pool %2d : '%(4),
p = Pool(4)
%timeit -n10 p.map(wrap, range(n))
p.close()
p.join()

Output

Single process :10 loops, best of 3: 688 µs per loop
 ---
Thread Pool  4 : 10 loops, best of 3: 1.67 ms per loop
 ---
Process Pool  4 : 10 loops, best of 3: 854 µs per loop
  • No improvement.

I tried another way: passing indices to wrap, which then fetches the data from the global arrays x and y.

from multiprocessing import Pool
from multiprocessing.dummy import Pool as ThreadPool
import numpy as np
# make data arrays
n = 30
m = 10000
x = np.random.rand(n, m)
y = np.random.rand(n, m)
def wrap(i):   return np.correlate(x[i], y[i])
# map
print 'Single process :',
%timeit -n10 map(wrap, range(n))
# mp.dummy.Pool.map
print '---'
print 'Thread Pool %2d : '%(4),
t = ThreadPool(4)
%timeit -n10 t.map(wrap, range(n))
t.close()
t.join()
print '---'
# mp.Pool.map, function must be defined before making Pool
print 'Process Pool %2d : '%(4),
p = Pool(4)
%timeit -n10 p.map(wrap, range(n))
p.close()
p.join()

Output

Single process :10 loops, best of 3: 133 µs per loop
 ---
Thread Pool  4 : 10 loops, best of 3: 2.23 ms per loop
 ---
Process Pool  4 : 10 loops, best of 3: 10.4 ms per loop
  • That is terrible... (one thing to try is sketched below)
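
One knob worth trying at this point (my own suggestion; the original post does not test it, so the benefit here is an assumption) is the optional chunksize argument of Pool.map, which batches several indices into each message, so the inter-process round trip is paid once per chunk instead of once per element:

p = Pool(4)
%timeit -n10 p.map(wrap, range(n), 10)   # third argument is chunksize: the 30 indices travel in 3 messages
p.close()
p.join()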

I tried yet another simple example (with a different wrap).

from multiprocessing import Pool
from multiprocessing.dummy import Pool as ThreadPool
# make data arrays
n = 30
m = 10000
# No big arrays passed to wrap
def wrap(i):   return sum(range(i, i+m))
# map
print 'Single process :',
%timeit -n10 map(wrap, range(n))
# mp.dummy.Pool.map
print '---'
i = 4
print 'Thread Pool %2d : '%(i),
t = ThreadPool(i)
%timeit -n10 t.map(wrap, range(n))
t.close()
t.join()
print '---'
# mp.Pool.map, function must be defined before making Pool
print 'Process Pool %2d : '%(i),
p = Pool(i)
%timeit -n10 p.map(wrap, range(n))
p.close()
p.join()

Timings:

 10 loops, best of 3: 4.28 ms per loop
 ---
 Thread Pool  4 : 10 loops, best of 3: 5.8 ms per loop
 ---
 Process Pool  4 : 10 loops, best of 3: 2.06 ms per loop
  • Now multiprocessing is faster.

But if m is increased 10-fold (i.e. to 100000):

 Single process :10 loops, best of 3: 48.2 ms per loop
 ---
 Thread Pool  4 : 10 loops, best of 3: 61.4 ms per loop
 ---
 Process Pool  4 : 10 loops, best of 3: 43.3 ms per loop
  • Again, no improvement.

1 Answer:

Answer 0 (score: 2):

You are mapping wrap onto tuples (a, b, c), where a is a function and b and c are 100K-element vectors. All of this data gets pickled when it is sent to the chosen process in the pool, then unpickled when it arrives there. This is to ensure that processes have exclusive access to the data.

Your problem is that the pickling process is more expensive than the correlation. As a guideline, you want to minimize the amount of information that is sent between processes, and to maximize the amount of work each process does, while still spreading the work across the cores of the system.
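
As an illustration of that guideline applied to the earlier index-based version (a sketch of my own, not the answerer's code; it assumes a fork-based Unix start method, so the global arrays are inherited by the workers rather than pickled), each worker can be handed one short list of row indices and return one short list of results, so almost nothing crosses the process boundary:

import numpy as np
from multiprocessing import Pool

n, m = 30, 100000
x = np.random.rand(n, m)
y = np.random.rand(n, m)

def correlate_block(rows):
    # one small message in (a list of row indices), one small message out (a few floats)
    return [np.correlate(x[i], y[i])[0] for i in rows]

p = Pool(4)                                   # forked workers inherit x and y; they are never pickled
blocks = [range(i, n, 4) for i in range(4)]   # 4 blocks of 7-8 rows each
results = p.map(correlate_block, blocks)      # 4 tasks in total instead of 30
p.close()
p.join()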

How you do that depends on the actual problem you are trying to solve. In your toy example, by making the vectors bigger (1 million elements) and generating them randomly inside the wrap function, I was able to get a 2x speedup over a single core, using a process pool of 4 workers. The code looks like this:

import numpy as np
from multiprocessing import Pool

def wrap(a):
    # generate the data inside the worker, so nothing big is pickled
    x = np.random.rand(1000000)
    y = np.random.rand(1000000)
    return np.correlate(x, y)

p = Pool(4)
p.map(wrap, range(30))
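
What makes this version scale is that each task now carries a trivial payload (one small integer in, one short result array out) while doing a comparatively large amount of numerical work inside the worker, which is exactly the ratio the guideline above asks for.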