How can I use Dask to perform parallel operations on slices of NumPy arrays?

Asked: 2016-10-15 00:49:31

Tags: python arrays numpy parallel-processing dask

I have a numpy array of coordinates of size n_slice x 2048 x 3, where n_slice is in the tens of thousands. I want to apply the following operation to each 2048 x 3 slice separately:

import numpy as np
from scipy.spatial.distance import pdist

# load coor from a binary xyz file, dcd format

n_slice, n_coor, _ = coor.shape
r = np.arange(n_coor)
dist = np.zeros([n_slice, n_coor, n_coor])

# this loop is what I want to parallelize, each slice is completely independent
for i in xrange(n_slice): 
    dist[i, r[:, None] < r] = pdist(coor[i])
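
For reference, the boolean mask r[:, None] < r selects the strict upper triangle of each 2048 x 2048 matrix, and the condensed vector returned by pdist fills those positions in the same row-major pair order. As a small sketch (not part of the original question), an equivalent per-slice computation can use scipy's squareform:

from scipy.spatial.distance import pdist, squareform

# squareform expands the condensed pdist vector into the full symmetric
# n_coor x n_coor matrix; keeping only the strict upper triangle gives
# exactly what the loop above stores in dist[i]
full = squareform(pdist(coor[0]))
upper = np.triu(full, k=1)   # equals dist[0] from the loop above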

I have tried turning coor into a dask.array in order to use Dask:

import dask.array as da
dcoor = da.from_array(coor, chunks=(1, 2048, 3))

But simply replacing coor with dcoor does not expose the parallelism. I can see setting up parallel threads for each slice myself, but how do I leverage Dask to handle the parallelism?

Here is a parallel implementation using concurrent.futures:

import concurrent.futures
import multiprocessing

n_cpu = multiprocessing.cpu_count()

def get_dist(coor, dist, r):
    dist[r[:, None] < r] = pdist(coor)

# load coor from a binary xyz file, dcd format

n_slice, n_coor, _ = coor.shape
r = np.arange(n_coor)
dist = np.zeros([n_slice, n_coor, n_coor])

with concurrent.futures.ThreadPoolExecutor(max_workers=n_cpu) as executor:
    for i in xrange(n_slice):
        executor.submit(get_dist, coor[i], dist[i], r)
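
Note that a ThreadPoolExecutor is used here because the threads share memory and write directly into the common dist array; with a ProcessPoolExecutor the arguments would be pickled into each worker process, so the writes to dist[i] would land in a copy and be lost unless results were returned and collected explicitly. Whether the threads actually run concurrently also depends on pdist releasing the GIL inside its compiled code.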

This problem may not be well suited to Dask, since there is no inter-chunk computation.

1 Answer:

Answer 0 (score: 4)

map_blocks

The map_blocks method may be of help:

dcoor.map_blocks(pdist)

Uneven arrays

It looks like you are doing some fancy slicing to insert particular values into particular locations of an output array. This is probably somewhat awkward to do with dask.arrays. Instead, I recommend making a function that produces a numpy array:

def myfunc(chunk):
    values = pdist(chunk[0, :, :])
    output = np.zeros((2048, 2048))
    r = np.arange(2048)
    output[r[:, None] < r] = values
    return output

dcoor.map_blocks(myfunc)
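
Because myfunc turns each (1, 2048, 3) input block into a 2-D 2048 x 2048 array, dask generally needs to be told what the output chunks look like. Below is a minimal self-contained sketch of that idea (my own filling-in of the details, not part of the original answer); it keeps a leading length-1 axis so each output block maps one-to-one onto an input block, and uses random coordinates as stand-in data:

import numpy as np
import dask.array as da
from scipy.spatial.distance import pdist

# hypothetical stand-in data; in the original post coor comes from a dcd file
n_slice, n_coor = 20, 512
coor = np.random.random((n_slice, n_coor, 3))
dcoor = da.from_array(coor, chunks=(1, n_coor, 3))

def per_slice(chunk):
    # chunk normally has shape (1, n_coor, 3); dask may also call this on a
    # tiny probe block to infer metadata, so derive all sizes from the chunk
    n = chunk.shape[1]
    out = np.zeros((chunk.shape[0], n, n), dtype=chunk.dtype)
    if chunk.shape[0]:
        r = np.arange(n)
        out[0, r[:, None] < r] = pdist(chunk[0])
    return out

# the output blocks have shape (1, n_coor, n_coor), so tell map_blocks
# about the new chunk shape and dtype explicitly
dist = dcoor.map_blocks(per_slice, chunks=(1, n_coor, n_coor), dtype=coor.dtype)
result = dist.compute()   # shape (n_slice, n_coor, n_coor)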

delayed

In the worst case, you can always use dask.delayed:

from dask import delayed, compute
coor2 = delayed(coor)
slices = [coor2[i] for i in range(coor.shape[0])]
slices2 = [delayed(pdist)(slice) for slice in slices]
results = compute(*slices2)
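
results is then a tuple of condensed distance vectors, one per slice. If the dist array from the question is still wanted, a possible follow-up (a sketch assuming n_slice, n_coor, and r from the question are in scope) is to scatter them back in a plain loop:

# assemble the condensed vectors back into the dist array from the question
dist = np.zeros([n_slice, n_coor, n_coor])
for i, values in enumerate(results):
    dist[i, r[:, None] < r] = values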