Block-wise array writing with Python multiprocessing

Date: 2018-05-04 08:04:18

Tags: python arrays multiprocessing

I know there are many topics around similar questions (e.g. How do I make processes able to write in an array of the main program?, Multiprocessing - Shared Array, Multiprocessing a loop of a function that writes to an array in python), but I just don't get it... so sorry for asking again.

I need to do some stuff with a huge array and want to speed it up by splitting it into blocks and running my function on those blocks, each in its own process. The trouble is: the blocks are "cut" from one array and the result is then written into a new, common array. This is what I have so far (minimal working example; never mind the array reshaping, that is necessary for my real-world case):

import time
import numpy as np
import multiprocessing as mp

def calcArray(array, blocksize, n_cores=1):
    in_shape = (array.shape[0] * array.shape[1], array.shape[2])
    input_array = array[:, :, :array.shape[2]].reshape(in_shape)
    result_array = np.zeros(in_shape)  # same shape as the flattened input
    # blockwise loop over the rows of the flattened array
    pix_count = in_shape[0]
    for position in range(0, pix_count, blocksize):
        if position + blocksize < pix_count:
            num = blocksize
        else:
            num = pix_count - position
        result_part = input_array[position:position + num, :] * 2
        result_array[position:position + num] = result_part
    # finalize result
    final_result = result_array.reshape(array.shape)
    return final_result

if __name__ == '__main__':
    start = time.time()
    img = np.ones((4000, 4000, 4))
    result = calcArray(img, blocksize=100, n_cores=4)
    print('Input:\n', img)
    print('\nOutput:\n', result)

How can I now implement multiprocessing so that I set a number of cores and calcArray then assigns a process to each block until n_cores is reached?
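For reference, the general shape of what is being asked for is a Pool of n_cores workers consuming chunks of the flattened array. A minimal sketch, assuming np.array_split for the chunking and a stand-in worker function double (neither is from the original code):

import numpy as np
from multiprocessing import Pool

def double(block):
    # stand-in for the real per-block computation
    return block * 2

if __name__ == '__main__':
    img = np.ones((4000, 4000, 4))
    flat = img.reshape(-1, img.shape[2])   # (16000000, 4) rows
    chunks = np.array_split(flat, 4)       # one chunk per core
    with Pool(processes=4) as pool:
        parts = pool.map(double, chunks)   # each chunk is pickled to a worker
    result = np.concatenate(parts).reshape(img.shape)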

With the much-appreciated help of @Blownhither Ma, the code now looks like this:

import time, datetime
import numpy as np
from multiprocessing import Pool

def calculate(array):
    return array * 2

if __name__ == '__main__':
    start = time.time()
    CORES = 4
    BLOCKSIZE = 100
    ARRAY = np.ones((4000, 4000, 4))
    pool = Pool(processes=CORES)
    in_shape = (ARRAY.shape[0] * ARRAY.shape[1], ARRAY.shape[2])
    input_array = ARRAY[:, :, :ARRAY.shape[2]].reshape(in_shape)
    result_array = np.zeros(input_array.shape)
    # do it
    pix_count = input_array.shape[0]  # number of rows in the flattened array, not ARRAY.size (total elements)
    handles = []
    for position in range(0, pix_count, BLOCKSIZE):
        if position + BLOCKSIZE < pix_count:
            num = BLOCKSIZE
        else:
            num = pix_count - position
        ### OLD APPROACH WITH NO PARALLELIZATION ###
        # part = calculate(input_array[position:position + num, :])
        # result_array[position:position + num] = part
        ### NEW APPROACH WITH PARALLELIZATION ###
        handle = pool.apply_async(func=calculate, args=(input_array[position:position + num, :],))
        handles.append(handle)
    # finalize result
    ### OLD APPROACH WITH NO PARALLELIZATION ###
    # final_result = result_array.reshape(ARRAY.shape)
    ### NEW APPROACH WITH PARALLELIZATION ###
    final_result = [h.get() for h in handles]
    final_result = np.concatenate(final_result, axis=0)
    print('Done!\nDuration (hh:mm:ss): {duration}'.format(duration=datetime.timedelta(seconds=time.time() - start)))

The code runs and really does start the number of processes I specified, but it takes much longer than the old approach of just using the loop "as-is" (3 seconds compared to 1 minute!). There must be something wrong here.
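A likely explanation (my reading, not stated in the original post): the slowdown comes from task granularity, not from the Pool itself. With BLOCKSIZE = 100, the loop submits one apply_async call per 100 rows, and every call pickles its argument slice and its result across the process boundary; that fixed overhead is paid hundreds of thousands of times for an operation NumPy finishes almost instantly:

BLOCKSIZE = 100
rows = 4000 * 4000                  # rows of the flattened (16000000, 4) array
n_tasks = -(-rows // BLOCKSIZE)     # ceil division: 160,000 separate tasks
# each task pays a fixed pickle/IPC cost on both the argument and the result,
# which dwarfs the near-free '* 2' computation
print('tasks submitted:', n_tasks)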

1 Answer:

Answer 0 (score: 1)

The core functions are pool.apply_async and handler.get.

I have been working on the same functionality lately and found it useful to build standard utility functions. balanced_parallel applies a function fn to a matrix a in parallel, silently; assigned_parallel explicitly applies fn to each element of a list.

i. I split the array with np.array_split. You could use your block scheme instead.
ii. I concatenate the results instead of assigning them into an empty matrix, so no shared memory is needed.

import numpy as np
from multiprocessing import cpu_count, Pool

def balanced_parallel(fn, a, processes=None, timeout=None):
    """ apply fn on slice of a, return concatenated result """
    if processes is None:
        processes = cpu_count()
    print('Parallel:\tstarting {} processes on input with shape {}'.format(processes, a.shape))
    results = assigned_parallel(fn, np.array_split(a, processes), timeout=timeout, verbose=False)
    return np.concatenate(results, 0)


def assigned_parallel(fn, l, processes=None, timeout=None, verbose=True):
    """ apply fn on each element of l, return list of results """
    if processes is None:
        processes = min(cpu_count(), len(l))
    pool = Pool(processes=processes)
    if verbose:
        print('Parallel:\tstarting {} processes on {} elements'.format(processes, len(l)))

    # add jobs to the pool
    handler = [pool.apply_async(fn, args=x if isinstance(x, tuple) else (x, )) for x in l]

    # pool running, join all results
    results = [h.get(timeout=timeout) for h in handler]  # collect one result per element of l

    pool.close()
    return results

In your case, fn would be:

def _fn(matrix_part):
    return matrix_part * 2

result = balanced_parallel(_fn, img)
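A fuller usage sketch, assuming balanced_parallel, assigned_parallel and _fn are defined in the same module as above (the __main__ guard matters because multiprocessing re-imports the module in each worker on spawn-based platforms):

import time
import numpy as np

if __name__ == '__main__':
    start = time.time()
    img = np.ones((4000, 4000, 4))
    result = balanced_parallel(_fn, img)   # np.array_split slices along axis 0
    print('Output shape:', result.shape)
    print('Duration: {:.2f} s'.format(time.time() - start))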

Follow-up: your loop should look like this to achieve the parallelization.

handles = []
for position in range(0, pix_count, BLOCKSIZE):
    if position + BLOCKSIZE < pix_count:
        num = BLOCKSIZE
    else:
        num = pix_count - position
    handle = pool.apply_async(func=calculate, args=(input_array[position:position + num, :], ))
    handles.append(handle)

# multiple handlers exist at this moment!! Don't `.get()` yet
results = [h.get() for h in handles]
results = np.concatenate(results, axis=0)
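Note that this loop still submits pix_count / BLOCKSIZE separate tasks, so with BLOCKSIZE = 100 it inherits the granularity problem described above. A sketch of a sizing rule that keeps only a few blocks per worker (the rule itself is a suggestion, not part of the answer; calculate, input_array, CORES and pool are the names from the snippets above):

rows = input_array.shape[0]            # 16,000,000 flattened rows
BLOCKSIZE = -(-rows // (CORES * 4))    # ceil division: ~4 blocks per core

handles = []
for position in range(0, rows, BLOCKSIZE):
    num = min(BLOCKSIZE, rows - position)
    handle = pool.apply_async(func=calculate, args=(input_array[position:position + num, :],))
    handles.append(handle)

results = np.concatenate([h.get() for h in handles], axis=0)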