Parallelizing a function in NumPy

Asked: 2014-02-25 18:42:40

Tags: python numpy multiprocessing

I have a function written in NumPy, ComparePatchMany, that performs some basic matrix operations (dot products, diagonals, etc.) and is too slow for the size of the matrices I'm working with. To get some speedup, I want to run calls to this function in parallel. Because of memory limits, I can't seem to call it on more than about 100 stacked matrices at a time, so simply running ComparePatchMany on one giant matrix is out (even though that works in MATLAB).
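For anyone reproducing this: ComparePatchMany.main takes two equal-length column vectors and returns a scalar metric, so a trivial stand-in like the one below (a simplified placeholder, not my real function) is enough to exercise the parallel scaffolding:

# Simplified placeholder with the same call signature as ComparePatchMany.main:
# two equal-length column vectors in, one scalar metric out.
import numpy as np

def main(a, b):
    d = a - b
    return float(np.dot(d.T, d))  # sum of squared differences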

What I have right now is:

import multiprocessing

import numpy as np

import ComparePatchMany  # module providing the metric routine

def comparePatchManyRunner(tex_flat,imMask_flat,s_tex,metrics,i):
    # Compare the masked texture column with candidate column i and store
    # the scalar result in the shared metrics array
    metrics[i] = ComparePatchMany.main(tex_flat[imMask_flat==1,:],np.reshape(s_tex[:,i],(-1,1)))

# N = 100
def main(TexLib,tex,OperationMask,N,gpu=0):

    if gpu:
        print 'ERROR: GPU Capability not set'
    else:
        tex_flat = np.array([tex.flatten('F')]).T

    # One grid cell per patch; patch and metric maps start as NaN
    CreateGrid = np.ones((TexLib.Gr.l_y.shape[1],TexLib.Gr.l_x.shape[1]))
    PatchMap = np.nan*CreateGrid
    MetricMap = np.nan*CreateGrid
    list_of_patches = np.argwhere(CreateGrid>0)

    for i in range(list_of_patches.shape[0]):
        y,x = list_of_patches[i]
        imMask = TexLib.obtainMask(y,x)
        Box = [TexLib.Gr.l_x[0,x],TexLib.Gr.l_x[-1,x],TexLib.Gr.l_y[0,y],TexLib.Gr.l_y[-1,y]]

        imMaskO = imMask
        # Restrict the patch mask to the region selected for processing
        imMask = imMask & OperationMask

        # Replicate the mask across the three colour channels
        imMask_flat = np.dstack((imMask,imMask,imMask))

        if gpu:
            print 'ERROR! GPU Capability not yet implemented'
            # TODO
        else:
            imMask_flat = imMask_flat.flatten('F')

        # Skip patches with too few pixels left after masking
        if np.sum(imMask)<8:
            continue

        # Draw N*5 random candidate textures from the library
        indd_s = np.random.randint(TexLib.NumTexs,size=(1,N*5))

        s_tex = TexLib.ImW[imMask_flat==1][:,np.squeeze(indd_s)]
        s_tex = s_tex.astype('float32')

        if gpu:
            print 'ERROR! GPU Capability not yet implemented'
            # TODO
        else:
            metrics = np.zeros((N*5,1))
            # Process-safe shared array for the workers to write results into
            shared_arr = multiprocessing.Array('d',metrics)

            # One process per candidate column -- with N = 100 this is 500 at once
            processes = [multiprocessing.Process(target=comparePatchManyRunner, args=(tex_flat,imMask_flat,s_tex,shared_arr,i)) for i in xrange(N*5)]
            for p in processes:
                p.start()
            for p in processes:
                p.join()
            metrics = shared_arr
            print metrics

I think this may create 500 processes, which could be a problem. One error I keep hitting, with this version and a previous one, is IOError: [Errno 32] Broken pipe, raised from p.start().

I'm developing on Windows with Python 2.7, NumPy 1.8, and SciPy 0.13.2.
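One workaround I'm considering is launching the workers in fixed-size batches so only a handful of processes are alive at a time (untested sketch below; CHUNK is an arbitrary batch size I picked). On Windows the spawning code also has to run under an if __name__ == '__main__': guard, because each child process re-imports the main module:

# Untested sketch: start at most CHUNK workers at a time instead of all N*5
CHUNK = 8  # arbitrary batch size
for start in xrange(0, N*5, CHUNK):
    batch = [multiprocessing.Process(target=comparePatchManyRunner,
                                     args=(tex_flat,imMask_flat,s_tex,shared_arr,j))
             for j in xrange(start, min(start+CHUNK, N*5))]
    for p in batch:
        p.start()
    for p in batch:
        p.join()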

Edit:

The comments suggested I use a pool, so I'm trying this:

metrics = np.zeros((N*5,1))
shared_arr = multiprocessing.Array('d',metrics,lock=False)
po = multiprocessing.Pool(processes=2)
po.map_async(comparePatchManyRunner,((tex_flat,imMask_flat,s_tex,shared_arr,idex) for idex in xrange(N*5)))

But it doesn't seem to write anything to shared_arr, and I keep getting a PicklingError:

Exception in thread Thread-29:
Traceback (most recent call last):
  File "C:\Python27\lib\threading.py", line 810, in __bootstrap_inner
    self.run()
  File "C:\Python27\lib\threading.py", line 763, in run
    self.__target(*self.__args, **self.__kwargs)
  File "C:\Python27\lib\multiprocessing\pool.py", line 342, in _handle_tasks
    put(task)
PicklingError: Can't pickle <class 'multiprocessing.sharedctypes.c_double_Array_500'>: attribute lookup multiprocessing.sharedctypes.c_double_Array_500 failed
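From what I can tell, the task tuples go through the pool's internal queue with the ordinary pickler, which cannot handle a sharedctypes array; shared arrays can only be transferred when a worker process is created. On top of that, map_async hands each tuple to the function as a single argument rather than unpacking it. The pattern I'm now considering (untested sketch; initWorker and runnerWrapper are placeholder names I made up) passes the shared array once through the pool's initializer and unpacks the task tuple in a module-level wrapper:

# At module level:
_shared = None

def initWorker(shared_arr):
    # Runs once per worker process; keeps a reference to the shared array
    global _shared
    _shared = shared_arr

def runnerWrapper(args):
    # Pool.map passes each task as one tuple, so unpack it here
    tex_flat, imMask_flat, s_tex, i = args
    comparePatchManyRunner(tex_flat, imMask_flat, s_tex, _shared, i)

# And in place of the Process loop above:
shared_arr = multiprocessing.Array('d', N*5, lock=False)
po = multiprocessing.Pool(processes=2,
                          initializer=initWorker,
                          initargs=(shared_arr,))
po.map(runnerWrapper, [(tex_flat,imMask_flat,s_tex,i) for i in xrange(N*5)])
po.close()
po.join()
metrics = np.frombuffer(shared_arr, dtype='d')  # view the results as a NumPy array

On Windows the script still has to call main() from under an if __name__ == '__main__': guard, and the two helpers have to live at module top level so the worker processes can import them.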
