I have a sum that I'm trying to compute, and I'm having difficulty parallelizing the code. The calculation I'm trying to parallelize is fairly complex (it uses numpy arrays and scipy sparse matrices). It spits out a numpy array, and I want to sum the output arrays from about 1000 calculations. Ideally, I would keep a running sum over all the iterations. However, I haven't been able to figure out how to do this.
So far, I've tried using joblib's Parallel function and the pool.map function from python's multiprocessing package. For both of these, I use an inner function that returns a numpy array. These functions return a list, which I convert to a numpy array and then sum.
However, after the joblib Parallel function finishes all iterations, the main program never continues running (it looks like the original job is in a suspended state, using 0% CPU). When I use pool.map, I get memory errors after all the iterations are complete.
Is there a way to simply run the sum of the arrays in parallel?
EDIT: The goal is to do the following, but in parallel.
def summers(num_iters):
    sumArr = np.zeros((1,512*512)) #initialize sum
    for index in range(num_iters):
        sumArr = sumArr + computation(index) #computation returns a 1 x 512^2 numpy array
    return sumArr
Answer 0 (score: 5)
I figured out how to parallelize the sum of arrays with multiprocessing, apply_async, and callbacks, so I'm posting this here for other people. I used the example page for Parallel Python for the Sum callback class, although I did not actually use that package for the implementation. It did, however, give me the idea of using callbacks. Here's the simplified code I ended up using, and it does what I wanted it to do.
import multiprocessing
import threading
import numpy as np

class Sum: #again, this class is from ParallelPython's example code (I modified it for an array and added comments)
    def __init__(self):
        self.value = np.zeros((1,512*512)) #this is the initialization of the sum
        self.lock = threading.Lock()
        self.count = 0

    def add(self, value):
        self.lock.acquire() #lock so the sum is correct if two results come back at the same time
        self.count += 1
        self.value += value #the actual summation
        self.lock.release()

def computation(index):
    array1 = np.ones((1,512*512))*index #this is where the array-returning computation goes
    return array1

def summers(num_iters):
    pool = multiprocessing.Pool(processes=8)

    sumArr = Sum() #create an instance of the callback class and zero the sum
    for index in range(num_iters):
        singlepoolresult = pool.apply_async(computation, (index,), callback=sumArr.add)

    pool.close()
    pool.join() #waits for all the processes to finish
    return sumArr.value
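For anyone who wants to run this pattern end to end, here is a self-contained sketch of the same apply_async-plus-callback idea. The array size `N` and the trivial `computation` are hypothetical stand-ins for the real 1 x 512^2 work:

```python
import multiprocessing
import threading

import numpy as np

N = 1000  # hypothetical array length, kept small so the demo runs quickly

class Sum:
    """Accumulate arrays handed to apply_async callbacks."""
    def __init__(self, shape):
        self.value = np.zeros(shape)
        # callbacks run in the pool's result-handler thread, so a
        # plain threading lock is enough to serialize the updates
        self.lock = threading.Lock()

    def add(self, value):
        with self.lock:
            self.value += value

def computation(index):
    return np.ones(N) * index  # stand-in for the real array-returning work

def parallel_sum(num_iters, processes=4):
    total = Sum(N)
    pool = multiprocessing.Pool(processes=processes)
    for index in range(num_iters):
        pool.apply_async(computation, (index,), callback=total.add)
    pool.close()
    pool.join()  # all callbacks have fired once join returns
    return total.value
```

With this stand-in, parallel_sum(8) returns an array where every element equals 0+1+...+7 = 28.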
I was also able to get this working with a parallelized map, which was suggested in another answer. I had tried this earlier, but I wasn't implementing it correctly. Both ways work, and I think this answer explains pretty well the question of which method to use (map or apply_async). For the map version, you don't need to define the class Sum, and the summers function becomes
def summers(num_iters):
    pool = multiprocessing.Pool(processes=8)

    outputArr = np.zeros((num_iters,1,512*512)) #you wouldn't have to initialize these
    sumArr = np.zeros((1,512*512)) #but I do to make sure I have the memory

    outputArr = np.array(pool.map(computation, range(num_iters)))
    sumArr = outputArr.sum(0)

    pool.close() #not sure if this is still needed since map waits for all iterations
    return sumArr
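One caveat with the map version: pool.map materializes all num_iters result arrays at once (num_iters x 512^2 floats), which is likely what triggered the original memory error. A possible alternative, not from the answer above, is to stream results with imap_unordered and fold each one into a running sum as it arrives; `N` and `computation` are again hypothetical stand-ins:

```python
import multiprocessing

import numpy as np

N = 1000  # hypothetical array length for the demo

def computation(index):
    return np.ones(N) * index  # stand-in for the real computation

def summers_streaming(num_iters, processes=4):
    sum_arr = np.zeros(N)
    pool = multiprocessing.Pool(processes=processes)
    # imap_unordered yields each result as it finishes, so only a
    # handful of result arrays are ever in memory at the same time
    for arr in pool.imap_unordered(computation, range(num_iters)):
        sum_arr += arr
    pool.close()
    pool.join()
    return sum_arr
```

Since addition is commutative, the unordered delivery doesn't change the final sum.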
Answer 1 (score: 1)
I'm not sure I understand the problem. Are you just trying to partition a list onto a pool of workers, have them keep a running sum of their calculations, and then sum the results?
#!/bin/env python

import random
import time
import multiprocessing
import numpy as np

numpows = 5
numitems = 25
nprocs = 4

def expensiveComputation(i):
    time.sleep(random.random() * 10)
    return np.array([i**j for j in range(numpows)])

def listsum(l):
    sum = np.zeros_like(l[0])
    for item in l:
        sum = sum + item
    return sum

def partition(lst, n):
    division = len(lst) / float(n)
    return [lst[int(round(division * i)): int(round(division * (i + 1)))] for i in range(n)]

def myRunningSum(l):
    sum = np.zeros(numpows)
    for item in l:
        sum = sum + expensiveComputation(item)
    return sum

if __name__ == '__main__':
    random.seed(1)
    data = list(range(numitems))

    pool = multiprocessing.Pool(processes=nprocs)
    calculations = pool.map(myRunningSum, partition(data, nprocs))

    print('Answer is:', listsum(calculations))
    print('Expected answer:', np.array([25., 300., 4900., 90000., 1763020.]))
(The partition function comes from Python: Slicing a list into n nearly-equal-length partitions)
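As a side note, pool.map can do this batching itself via its chunksize argument, so the manual partition step is optional. A sketch under the same setup (the helper names here are mine, not from the answer above):

```python
import multiprocessing

import numpy as np

NUMPOWS = 5

def expensive_computation(i):
    return np.array([i ** j for j in range(NUMPOWS)])

def summed(numitems, nprocs=4):
    pool = multiprocessing.Pool(processes=nprocs)
    # chunksize hands each worker a batch of inputs, much like the
    # manual partition() above, but the pool does the splitting
    results = pool.map(expensive_computation, range(numitems),
                       chunksize=max(1, numitems // nprocs))
    pool.close()
    pool.join()
    return np.sum(results, axis=0)
```

summed(25) reproduces the expected answer from the script above, [25., 300., 4900., 90000., 1763020.]. The trade-off versus manual partitioning is that chunksize controls only batch size, not which worker gets which batch.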