Distance between points in a numpy array with periodic boundaries

Asked: 2018-10-04 14:56:56

Tags: python numpy scipy distance

I know how to calculate the Euclidean distance between points in an array using scipy.spatial.distance.cdist:

import scipy.spatial.distance as scidist
import numpy as np

n=4
a=np.zeros([n,n])
i=np.argwhere(a>-1)
dist1=scidist.cdist(i,i,metric='euclidean')

Similar to the answer to this question: Calculate Distances Between One Point in Matrix From All Other Points

However, I would like to do the calculation with periodic boundary conditions, so that, for example, point [0,0] is at distance 1 from point [0,n-1], not at distance n-1.
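As a minimal illustration of the wrapped metric (my sketch, not part of the original question): along one axis of period n, the periodic separation is the shorter of the two ways around the ring.

```python
n = 4
d = abs(0 - (n - 1))       # raw index separation between [0,0] and [0,n-1]: 3
d_per = min(d, n - d)      # periodic separation: the shorter way around: 1
```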

The only way I can think of is to repeat the calculation 4 times, with the domain indices offset by n in the x, y, and x & y directions, then stack the results and take the minimum over the 4 slices:

distl=[]
for xoff in [0,n]:
    for yoff in [0,n]:
        j=i.copy()
        j[:,0]-=xoff
        j[:,1]-=yoff
        distl.append(scidist.cdist(i,j,metric='euclidean'))
dist=np.amin(np.dstack(distl),2)

I think(!) this works, but it is slow for large arrays and a bit ugly. Is there a better/faster way?

5 Answers:

Answer 0 (score: 2)

I will show an alternative approach from an image-processing perspective, which may interest you regardless of whether it is the fastest. For convenience, I have only implemented it for odd n.

Instead of considering a set i of nxn points, let us take the nxn box and treat it as a binary image. Let each point in j be a positive pixel in this image. For n=5, it looks like:

binary image

Now let us consider another concept from image processing: dilation. For any input pixel, the output pixel is 1 if there is a positive pixel in its neighborhood. That neighborhood is defined by a Structuring Element: a boolean kernel indicating which neighbors to consider.
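A minimal illustration of dilation with an explicit structuring element (my sketch, using scipy.ndimage rather than the answer's own implementation below):

```python
import numpy as np
from scipy.ndimage import binary_dilation

img = np.zeros((5, 5), dtype=bool)
img[2, 2] = True                        # a single positive pixel
se = np.array([[0, 1, 0],
               [1, 1, 1],
               [0, 1, 0]], dtype=bool)  # 4-connected structuring element
out = binary_dilation(img, structure=se)  # positive wherever the SE overlaps a positive pixel
```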

This is how I define the SE for this problem:

Y, X = np.ogrid[-n:n+1, -n:n+1]
SQ = X*X+Y*Y   # squared distance of every grid offset from the center

H = SQ == r    # r is the squared radius we will iterate over below

Intuitively, H is a mask of all the points around the center that satisfy the condition x*x+y*y == r. That is, all points in H are at distance sqrt(r) from the center. Another visualization makes it absolutely clear:

H1 H2 H4 H5

It is an ever-expanding circle of pixels. Each white pixel in each mask represents a point at exactly distance sqrt(r) from the center pixel. You may also be able to see that if we iteratively increase the value of r, we steadily cover all pixel locations around a particular location, eventually covering the entire image. (Some values of r give no response, because no pair of points is separated by such a distance sqrt(r); we skip those values of r, such as 3.)
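The skipped radii can be checked directly; this small sketch (mine, not from the answer) lists the squared distances that no pair of grid offsets realizes:

```python
import numpy as np

n = 5
Y, X = np.ogrid[-n:n + 1, -n:n + 1]
SQ = X * X + Y * Y                  # every realizable squared distance on the grid
present = set(np.unique(SQ))
# r values up to 10 that are not a sum of two squares, hence give no response
missing = [r for r in range(11) if r not in present]
```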

That is what the main algorithm does:

  • We increment the value of r, starting from 0, up to some high upper limit.
  • At each step, if any location (x,y) of the image gives a response to the dilation, it means there is a j point at a distance of exactly sqrt(r) from it!
  • A location can match several times; we keep only the first match and discard the later ones, until every pixel (x,y) has found its minimum distance / first match.

So you could say that the number of iterations of this algorithm depends on the number of unique pairwise distances in the nxn image.

This also means that the algorithm actually gets faster the more points there are in j, which goes against common sense!

The worst case for this dilation algorithm is when you have the fewest points (exactly 1 point in j), because then you need to iterate r up to a very high value to get a match from a far-away point.

On the implementation side:

n=5 # size of 2D box (n X n points)
np.random.seed(1) # to make reproducible
a=np.random.uniform(size=(n,n)) 
I=np.argwhere(a>-1)  # all points, for each loc we want distance to nearest J 
J=np.argwhere(a>0.85)

Y, X = np.ogrid[-n:n+1, -n:n+1]
SQ = X*X+Y*Y

point_space = np.zeros((n, n))
point_space[J[:,0], J[:,1]] = 1


C1 = point_space[:, :n//2]
C2 = point_space[:, n//2+1:]
C = np.hstack([C2, point_space, C1])

D1 = point_space[:n//2, :]
D2 = point_space[n//2+1:, :]
D2_ = np.hstack([point_space[n//2+1:, n//2+1:],D2,point_space[n//2+1:, :n//2]])
D1_ = np.hstack([point_space[:n//2:, n//2+1:],D1,point_space[:n//2, :n//2]])
D = np.vstack([D2_, C, D1_])
p = (3*n-len(D))//2
D = np.pad(D, (p,p), constant_values=(0,0))

import matplotlib.pyplot as plt

plt.imshow(D, cmap='gray')
plt.title(f'n={n}')

D1 D2

If you look at the image for n=5, you can tell what I did: I simply padded the image with its four quadrants to represent the periodic space, and then added some extra zero padding to account for the worst-case search boundary.
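As an aside (my observation, not part of the answer): for odd n, the quadrant construction above is equivalent to NumPy's built-in periodic padding, which is more compact:

```python
import numpy as np

n = 5
np.random.seed(1)
a = np.random.uniform(size=(n, n))
J = np.argwhere(a > 0.85)
point_space = np.zeros((n, n))
point_space[J[:, 0], J[:, 1]] = 1

D_wrap = np.pad(point_space, n // 2, mode='wrap')  # periodic images on all sides
p = (3 * n - len(D_wrap)) // 2
D_wrap = np.pad(D_wrap, p)                         # extra zero padding, as in the answer
```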

import numba as nb

@nb.jit
def dilation(image, output, kernel, N, i0, i1):
    for i in range(i0,i1):
        for j in range(i0, i1):
            a_0 = i-(N//2)
            a_1 = a_0+N
            b_0 = j-(N//2)
            b_1 = b_0+N
            neighborhood = image[a_0:a_1, b_0:b_1]*kernel
            if np.any(neighborhood):
                output[i-i0,j-i0] = 1
    return output

@nb.njit(cache=True)
def progressive_dilation(point_space, out, total, dist, SQ, n, N_):
    for i in range(N_):
        if not np.any(total): 
            break
            
        H = SQ == i

        rows, cols = np.nonzero(H)
        if len(rows) == 0: continue
        
        rmin, rmax = rows.min(), rows.max()
        cmin, cmax = cols.min(), cols.max()

        H_ = H[rmin:rmax+1, cmin:cmax+1]
        
        out[:] = False 
        out = dilation(point_space, out, H_, len(H_), n, 2*n)
        
        idx = np.logical_and(out, total)
        
        for a, b in  zip(*np.where(idx)):
            dist[a, b] = i
        
        total = total * np.logical_not(out)
    return dist

def dilateWrap(D, SQ, n):
    out = np.zeros((n,n), dtype=bool)
    total = np.ones((n,n), dtype=bool)
    dist=-1*np.ones((n,n))
    dist = progressive_dilation(D, out, total, dist, SQ, n, 2*n*n+1)
    return dist

dout = dilateWrap(D, SQ, n)

If we visualize dout, we actually get an awesome visual representation of the distances.

O1 O2

The dark spots are basically the locations where j points exist. The brightest points naturally mean the points farthest from any j. Note that I keep the values squared to get an integer image; the actual distances are still one square root away. The results match the output of the ball-tree algorithm.

# after resetting n = 501 and rerunning the first block

N = J.copy()
NE = J.copy()
E = J.copy()
SE = J.copy()
S = J.copy()
SW = J.copy()
W = J.copy()
NW = J.copy()

N[:,1] = N[:,1] - n
NE[:,0] = NE[:,0] - n
NE[:,1] = NE[:,1] - n
E[:,0] = E[:,0] - n
SE[:,0] = SE[:,0] - n
SE[:,1] = SE[:,1] + n
S[:,1] = S[:,1] + n
SW[:,0] = SW[:,0] + n
SW[:,1] = SW[:,1] + n
W[:,0] = W[:,0] + n
NW[:,0] = NW[:,0] + n
NW[:,1] = NW[:,1] - n

from sklearn.neighbors import BallTree

def distBP(I,J):
    tree = BallTree(np.concatenate([J,N,E,S,W,NE,SE,SW,NW]), leaf_size=15, metric='euclidean')
    dist = tree.query(I, k=1, return_distance=True)
    minimum_distance = dist[0].reshape(n,n)
    return minimum_distance

print(np.array_equal(distBP(I,J), np.sqrt(dilateWrap(D, SQ, n))))

Out:

True

Now for a timing check, with n = 501.

from timeit import timeit
nl=1
print("ball tree:",timeit(lambda: distBP(I,J),number=nl))
print("dilation:",timeit(lambda: dilateWrap(D, SQ, n),number=nl))

Out:

ball tree: 1.1706031339999754
dilation: 1.086665302000256

I would say they are roughly equal, though dilation has a slight edge. Actually, dilation is still missing the square-root operation, so let's add it.

from timeit import timeit
nl=1
print("ball tree:",timeit(lambda: distBP(I,J),number=nl))
print("dilation:",timeit(lambda: np.sqrt(dilateWrap(D, SQ, n)),number=nl))

Out:

ball tree: 1.1712950239998463
dilation: 1.092416919000243

The square root has a basically negligible impact on the timing.

Now, I said earlier that dilation is faster when there are more points in j. So let's increase the number of points in j.

n=501 # size of 2D box (n X n points)
np.random.seed(1) # to make reproducible
a=np.random.uniform(size=(n,n)) 
I=np.argwhere(a>-1)  # all points, for each loc we want distance to nearest J 
J=np.argwhere(a>0.4) # previously a>0.85

Now check the timings:

from timeit import timeit
nl=1
print("ball tree:",timeit(lambda: distBP(I,J),number=nl))
print("dilation:",timeit(lambda: np.sqrt(dilateWrap(D, SQ, n)),number=nl))

Out:

ball tree: 3.3354218500007846
dilation: 0.2178608220001479

The ball tree actually got slower, while the dilation got faster! That is because with many j points we can quickly find all the distances with just a few repeated dilations. I find this effect quite interesting: normally you would expect the runtime to get worse as the number of points increases, but the opposite happens here.

Conversely, the dilation gets slower if we reduce j:

#Setting a>0.9
print("ball tree:",timeit(lambda: distBP(I,J),number=nl))
print("dilation:",timeit(lambda: np.sqrt(dilateWrap(D, SQ, n)),number=nl))

Out:

ball tree: 1.010353464000218
dilation: 1.4776274510004441

I think we can safely say that convolution- or kernel-based methods offer much better gains on this particular problem than pair-based, point-based, or tree-based approaches.
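In the same image-processing spirit, one more alternative worth noting (my addition, not benchmarked in this answer): SciPy's exact Euclidean distance transform applied to a 3x3 periodic tiling gives the same periodic distance field in a few lines:

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

n = 15
np.random.seed(1)
a = np.random.uniform(size=(n, n))
J = np.argwhere(a > 0.85)

mask = np.ones((n, n), dtype=bool)
mask[J[:, 0], J[:, 1]] = False            # EDT measures distance to the nearest zero
tiled = np.tile(mask, (3, 3))             # periodic images of the box
dist = distance_transform_edt(tiled)[n:2 * n, n:2 * n]  # keep the central copy
```

The 3x3 tiling is sufficient because the nearest periodic image of any source lies within one box length of the central copy.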

Finally, as I mentioned at the start and will mention again: this whole implementation only covers odd n; I did not have the patience to work out the proper padding for even n. (If you are familiar with image processing, you have probably faced this before: everything is easier with odd sizes.)

And since I am only an occasional dabbler in numba, this can probably be optimized further.

Answer 1 (score: 1)

Here is a fixed version of your code and a different, faster approach. They give identical results, so I am reasonably confident they are correct:

import numpy as np
from scipy.spatial.distance import squareform, pdist, cdist
from numpy.linalg import norm

def pb_OP(A, p=1.0):
    distl = []
    for *offs, ct in [(0, 0, 0), (0, p, 1), (p, 0, 1), (p, p, 1), (-p, p, 1)]:
        B = A - offs
        distl.append(cdist(B, A, metric='euclidean'))
        if ct:
            distl.append(distl[-1].T)
    return np.amin(np.dstack(distl), 2)

def pb_pp(A, p=1.0):
    out = np.empty((2, A.shape[0]*(A.shape[0]-1)//2))
    for o, i in zip(out, A.T):
        pdist(i[:, None], 'cityblock', out=o)
    out[out > p/2] -= p
    return squareform(norm(out, axis=0))

test = np.random.random((1000, 2))

assert np.allclose(pb_OP(test), pb_pp(test))

from timeit import timeit
t_OP = timeit(lambda: pb_OP(test), number=10)*100
t_pp = timeit(lambda: pb_pp(test), number=10)*100
print('OP', t_OP)
print('pp', t_pp)

Sample run, 1000 points:

OP 210.11001259903423
pp 22.288734700123314

We see my method is ~9x faster, which coincidentally is roughly the number of offset configurations the OP's version has to check. It uses pdist on the individual coordinates to get the absolute differences; where these are larger than half the grid spacing, we subtract one period. It then remains to take the Euclidean norm and to unpack the storage.
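The wrapping step can be illustrated on a single coordinate pair (my worked example, using the same period p as the code above):

```python
import numpy as np

p = 1.0                        # grid period
x = np.array([0.1, 0.9])
d = np.abs(x[0] - x[1])        # raw separation: 0.8, more than half the period
if d > p / 2:
    d -= p                     # subtract one period: -0.2; the norm uses only |d| = 0.2
```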

Answer 2 (score: 1)

These are the 8 different solutions I have timed, some my own and some posted in response to my question; they use 4 broad approaches:

  1. spatial cdist
  2. spatial KDTree
  3. sklearn BallTree
  4. kernel methods

Here is the code with the 8 test routines:

import numpy as np
from scipy import spatial
from sklearn.neighbors import BallTree

n=500 # size of 2D box
f=200./(n*n) # first number is rough number of target cells...

np.random.seed(1) # to make reproducible
a=np.random.uniform(size=(n,n))
i=np.argwhere(a>-1)  # all points, we want to know distance to nearest point
j=np.argwhere(a>1.0-f) # set of locations to find distance to.

# long array of 3x3 j points:
for xoff in [0,n,-n]:
    for yoff in [0,-n,n]:
        if xoff==0 and yoff==0:
            j9=j.copy()
        else:
            jo=j.copy()
            jo[:,0]+=xoff
            jo[:,1]+=yoff
            j9=np.vstack((j9,jo))

global maxdist
maxdist=10
overlap=5.2
kernel_size=int(np.sqrt(overlap*n**2/j.shape[0])/2)

print("no points",len(j))

# repeat cdist over each member of the 3x3 block
def dist_v1(i,j):
    dist=[]
    # 3x3 search required for periodic boundaries.
    for xoff in [-n,0,n]:
        for yoff in [-n,0,n]:
            jo=j.copy()
            jo[:,0]+=xoff
            jo[:,1]+=yoff
            dist.append(np.amin(spatial.distance.cdist(i,jo,metric='euclidean'),1))
    dist=np.amin(np.stack(dist),0).reshape([n,n])
    #dmask=np.where(dist<=maxdist,1,0)
    return(dist)

# same as v1, but taking one amin function at the end
def dist_v2(i,j):
    dist=[]
    # 3x3 search required for periodic boundaries.
    for xoff in [-n,0,n]:
        for yoff in [-n,0,n]:
            jo=j.copy()
            jo[:,0]+=xoff
            jo[:,1]+=yoff
            dist.append(spatial.distance.cdist(i,jo,metric='euclidean'))
    dist=np.amin(np.dstack(dist),(1,2)).reshape([n,n])
    #dmask=np.where(dist<=maxdist,1,0)
    return(dist)

# using a KDTree query ball points, looping over j9 points as in online example
def dist_v3(n,j):
    x,y=np.mgrid[0:n,0:n]
    points=np.c_[x.ravel(), y.ravel()]
    tree=spatial.KDTree(points)
    mask=np.zeros([n,n])
    for results in tree.query_ball_point((j), 2.1):
        mask[points[results][:,0],points[results][:,1]]=1
    return(mask)

# using ckdtree query on the j9 long array
def dist_v4(i,j):
    tree=spatial.cKDTree(j)
    dist,minid=tree.query(i)
    return(dist.reshape([n,n]))

# back to using Cdist, but on the long j9 3x3 array, rather than on each element separately
def dist_v5(i,j):
    # 3x3 search required for periodic boundaries.
    dist=np.amin(spatial.distance.cdist(i,j,metric='euclidean'),1)
    #dmask=np.where(dist<=maxdist,1,0)
    return(dist)

def dist_v6(i,j):
    tree = BallTree(j,leaf_size=5,metric='euclidean')
    dist = tree.query(i, k=1, return_distance=True)
    mindist = dist[0].reshape(n,n)
    return(mindist)

def sq_distance(x1, y1, x2, y2, n):
    # computes the pairwise squared distance between 2 sets of points (with periodicity)
    # x1, y1 : coordinates of the first set of points (source)
    # x2, y2 : same
    dx = np.abs((np.subtract.outer(x1, x2) + n//2)%(n) - n//2)
    dy = np.abs((np.subtract.outer(y1, y2) + n//2)%(n) - n//2)
    d  = (dx*dx + dy*dy)
    return d

def apply_kernel1(sources, sqdist, kern_size, n, mask):
    ker_i, ker_j = np.meshgrid(np.arange(-kern_size, kern_size+1), np.arange(-kern_size, kern_size+1), indexing="ij")
    kernel = np.add.outer(np.arange(-kern_size, kern_size+1)**2, np.arange(-kern_size, kern_size+1)**2)
    mask_kernel = kernel > kern_size**2

    for pi, pj in sources:
        ind_i = (pi+ker_i)%n
        ind_j = (pj+ker_j)%n
        sqdist[ind_i,ind_j] = np.minimum(kernel, sqdist[ind_i,ind_j])
        mask[ind_i,ind_j] *= mask_kernel

def apply_kernel2(sources, sqdist, kern_size, n, mask):
    ker_i = np.arange(-kern_size, kern_size+1).reshape((2*kern_size+1,1))
    ker_j = np.arange(-kern_size, kern_size+1).reshape((1,2*kern_size+1))

    kernel = np.add.outer(np.arange(-kern_size, kern_size+1)**2, np.arange(-kern_size, kern_size+1)**2)
    mask_kernel = kernel > kern_size**2

    for pi, pj in sources:

        imin = pi-kern_size
        jmin = pj-kern_size
        imax = pi+kern_size+1
        jmax = pj+kern_size+1
        if imax < n and jmax < n and imin >=0 and jmin >=0: # we are inside
            sqdist[imin:imax,jmin:jmax] = np.minimum(kernel, sqdist[imin:imax,jmin:jmax])
            mask[imin:imax,jmin:jmax] *= mask_kernel

        elif imax < n and imin >=0:
            ind_j = (pj+ker_j.ravel())%n
            sqdist[imin:imax,ind_j] = np.minimum(kernel, sqdist[imin:imax,ind_j])
            mask[imin:imax,ind_j] *= mask_kernel

        elif jmax < n and jmin >=0:
            ind_i = (pi+ker_i.ravel())%n
            sqdist[ind_i,jmin:jmax] = np.minimum(kernel, sqdist[ind_i,jmin:jmax])
            mask[ind_i,jmin:jmax] *= mask_kernel

        else :
            ind_i = (pi+ker_i)%n
            ind_j = (pj+ker_j)%n
            sqdist[ind_i,ind_j] = np.minimum(kernel, sqdist[ind_i,ind_j])
            mask[ind_i,ind_j] *= mask_kernel


def dist_v7(sources, n, kernel_size,method):
    sources = np.asfortranarray(sources) #for memory contiguity
    kernel_size = min(kernel_size, n//2)
    kernel_size = max(kernel_size, 1)
    sqdist = np.full((n,n), 10*n**2, dtype=np.int32) #preallocate with a huge distance (>max**2)
    mask   = np.ones((n,n), dtype=bool)              #which points have not been reached?
    #main code
    if (method==1):
        apply_kernel1(sources, sqdist, kernel_size, n, mask)
    else:
        apply_kernel2(sources, sqdist, kernel_size, n, mask)
      
    #remaining points
    rem_i, rem_j = np.nonzero(mask)
    if len(rem_i) > 0:
        sq_d = sq_distance(sources[:,0], sources[:,1], rem_i, rem_j, n).min(axis=0)
        sqdist[rem_i, rem_j] = sq_d
    return np.sqrt(sqdist)



from timeit import timeit
nl=10
print ("-----------------------")
print ("Timings for ",nl,"loops")
print ("-----------------------")
print("1. cdist looped amin:",timeit(lambda: dist_v1(i,j),number=nl))
print("2. cdist single amin:",timeit(lambda: dist_v2(i,j),number=nl))
print("3. KDtree ball pt:", timeit(lambda: dist_v3(n,j9),number=nl))
print("4. KDtree query:",timeit(lambda: dist_v4(i,j9),number=nl))
print("5. cdist long list:",timeit(lambda: dist_v5(i,j9),number=nl))
print("6. ball tree:",timeit(lambda: dist_v6(i,j9),number=nl))
print("7. kernel orig:", timeit(lambda: dist_v7(j, n, kernel_size,1), number=nl))
print("8. kernel optimised:", timeit(lambda: dist_v7(j, n, kernel_size,2), number=nl))

The output (times in seconds) on my 12-core linux desktop (with 48GB RAM) for n=350 and 63 points was:

no points 63
-----------------------
Timings for  10 loops
-----------------------
1. cdist looped amin: 3.2488364999881014
2. cdist single amin: 6.494611179979984
3. KDtree ball pt: 5.180531410995172
4. KDtree query: 0.9377906009904109
5. cdist long list: 3.906166430999292
6. ball tree: 3.3540162370190956
7. kernel orig: 0.7813036740117241
8. kernel optimised: 0.17046571199898608

and for n=500 and npts=176:

no points 176
-----------------------
Timings for  10 loops
-----------------------
1. cdist looped amin: 16.787221198988846
2. cdist single amin: 40.97849371898337
3. KDtree ball pt: 9.926229109987617
4. KDtree query: 0.8417396580043714
5. cdist long list: 14.345821461000014
6. ball tree: 1.8792325239919592
7. kernel orig: 1.0807358759921044
8. kernel optimised: 0.5650744160229806

So, in summary, I conclude the following:

  1. Avoid cdist if your problem is large.
  2. If your problem is not too constrained by compute time, I would recommend the "KDtree query" approach, since it is just 2 lines without periodic boundaries, plus a few more to set up the j9 array when you have periodic boundaries.
  3. For maximum performance (e.g. a long integration of a model where this is needed at every step, as in my case), the kernel solution is now the fastest by far.

Answer 3 (score: 1)

For computing many distances, I think it is hard to beat a simple BallTree (or similar).

I did not quite get the circular boundaries, or at least why you need the 3x3 loop, since as I see it the box behaves like a torus and 5 copies would be enough. Update: indeed you need the 3x3 for the edges. I updated the code.

I made sure my minimum_distance is correct with an n=200 test checking that np.all( minimum_distance == dist_v1(i,j) ) is True.

For n=500 generated with the provided code, a cold-start %%time gave:

CPU times: user 1.12 s, sys: 0 ns, total: 1.12 s
Wall time: 1.11 s

So I generated 500 data points as in the post

import numpy as np

n=500 # size of 2D box (n X n points)
np.random.seed(1) # to make reproducible
a=np.random.uniform(size=(n,n)) 
i=np.argwhere(a>-1)  # all points, for each loc we want distance to nearest J 
j=np.argwhere(a>0.85) # set of J locations to find distance to.

and used a BallTree:

import numpy as np
from sklearn.neighbors import BallTree

N = j.copy()
NE = j.copy()
E = j.copy()
SE = j.copy()
S = j.copy()
SW = j.copy()
W = j.copy()
NW = j.copy()

N[:,1] = N[:,1] - n
NE[:,0] = NE[:,0] - n
NE[:,1] = NE[:,1] - n
E[:,0] = E[:,0] - n
SE[:,0] = SE[:,0] - n
SE[:,1] = SE[:,1] + n
S[:,1] = S[:,1] + n
SW[:,0] = SW[:,0] + n
SW[:,1] = SW[:,1] + n
W[:,0] = W[:,0] + n
NW[:,0] = NW[:,0] + n
NW[:,1] = NW[:,1] - n

tree = BallTree(np.concatenate([j,N,E,S,W,NE,SE,SW,NW]), leaf_size=15, metric='euclidean')

dist = tree.query(i, k=1, return_distance=True)

minimum_distance = dist[0].reshape(n,n)

Update:

Note that here I copied the data to the N, E, S, W, NE, SE, SW and NW of the box to handle the boundary conditions. Again, for n=200 this gives the same results. You can tweak leaf_size, but I think this setting is fine.
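The eight hand-written copies above can be generated more compactly (my sketch, equivalent to the block above up to ordering):

```python
import numpy as np

n = 10
np.random.seed(1)
a = np.random.uniform(size=(n, n))
j = np.argwhere(a > 0.85)

# all 9 periodic images of j (the original plus its 8 shifted copies)
offsets = np.array([(dx, dy) for dx in (-n, 0, n) for dy in (-n, 0, n)])
j9 = (j[None, :, :] + offsets[:, None, :]).reshape(-1, 2)
```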

Performance is sensitive to the number of points in j.

Answer 4 (score: 1)

[EDIT] - I found a bug in the way the code tracked which points were already done, and fixed it with mask_kernel. The pure-python version of the newer code is about 1.5x slower, but the numba version is slightly faster (thanks to some other optimizations).

[Current best: about 100x to 120x the original speed]

First of all, thank you for submitting this problem; optimizing it was a lot of fun!

My current best solution relies on the assumption that the grid is regular and that the "source" points (the points from which we need to compute distances) are roughly evenly distributed.

The idea here is that all the distances will be 1, sqrt(2), sqrt(3), ..., so we can precompute them numerically. We then just put these values in a matrix and copy that matrix around each source point (making sure to keep the minimum value found at each point). This covers the vast majority of points (>99%). Then we apply another, more "classic", method to the remaining 1%.

Here is the code:

import numpy as np

def sq_distance(x1, y1, x2, y2, n): 
    # computes the pairwise squared distance between 2 sets of points (with periodicity)
    # x1, y1 : coordinates of the first set of points (source)
    # x2, y2 : same
    dx = np.abs((np.subtract.outer(x1, x2) + n//2)%(n) - n//2)
    dy = np.abs((np.subtract.outer(y1, y2) + n//2)%(n) - n//2)
    d  = (dx*dx + dy*dy)
    return d

def apply_kernel(sources, sqdist, kern_size, n, mask):
    ker_i, ker_j = np.meshgrid(np.arange(-kern_size, kern_size+1), np.arange(-kern_size, kern_size+1), indexing="ij")
    kernel = np.add.outer(np.arange(-kern_size, kern_size+1)**2, np.arange(-kern_size, kern_size+1)**2)
    mask_kernel = kernel > kern_size**2

    for pi, pj in sources:
        ind_i = (pi+ker_i)%n
        ind_j = (pj+ker_j)%n
        sqdist[ind_i,ind_j] = np.minimum(kernel, sqdist[ind_i,ind_j])
        mask[ind_i,ind_j] *= mask_kernel

def dist_vf(sources, n, kernel_size):
    sources = np.asfortranarray(sources) #for memory contiguity

    kernel_size = min(kernel_size, n//2)
    kernel_size = max(kernel_size, 1)

    sqdist = np.full((n,n), 10*n**2, dtype=np.int32) #preallocate with a huge distance (>max**2)
    mask   = np.ones((n,n), dtype=bool)              #which points have not been reached?

    #main code
    apply_kernel(sources, sqdist, kernel_size, n, mask) 

    #remaining points
    rem_i, rem_j = np.nonzero(mask)
    if len(rem_i) > 0:
        sq_d = sq_distance(sources[:,0], sources[:,1], rem_i, rem_j, n).min(axis=0)
        sqdist[rem_i, rem_j] = sq_d

    #eff = 1-rem_i.size/n**2
    #print("covered by kernel :", 100*eff, "%")
    #print("overlap :", sources.shape[0]*(1+2*kernel_size)**2/n**2)
    #print()

    return np.sqrt(sqdist)

Testing this version with

n=500  # size of 2D box (n X n points)
np.random.seed(1) # to make reproducible
a=np.random.uniform(size=(n,n)) 
all_points=np.argwhere(a>-1)  # all points, for each loc we want distance to nearest J 
source_points=np.argwhere(a>1-70/n**2) # set of J locations to find distance to.

#
# code for dist_v1 and dist_vf
#

overlap=5.2
kernel_size = int(np.sqrt(overlap*n**2/source_points.shape[0])/2)

from timeit import timeit
print("cdist v1      :", timeit(lambda: dist_v1(all_points,source_points), number=1)*1000, "ms")
print("kernel version:", timeit(lambda: dist_vf(source_points, n, kernel_size), number=10)*100, "ms")

gives

cdist v1      : 1148.6694 ms
kernel version: 69.21876999999998 ms

That is already a ~17x speedup! I also implemented numba versions of sq_distance and apply_kernel: [this is the new, corrected version]

from numba import njit

@njit(cache=True)
def sq_distance(x1, y1, x2, y2, n):
    m1 = x1.size
    m2 = x2.size
    n2 = n//2
    d = np.empty((m1,m2), dtype=np.int32)
    for i in range(m1):
        for j in range(m2):
            dx = np.abs(x1[i] - x2[j] + n2)%n - n2
            dy = np.abs(y1[i] - y2[j] + n2)%n - n2
            d[i,j]  = (dx*dx + dy*dy)
    return d

@njit(cache=True)
def apply_kernel(sources, sqdist, kern_size, n, mask):
    # creating the kernel
    kernel = np.empty((2*kern_size+1, 2*kern_size+1))
    vals = np.arange(-kern_size, kern_size+1)**2
    for i in range(2*kern_size+1):
        for j in range(2*kern_size+1):
            kernel[i,j] = vals[i] + vals[j]
    mask_kernel = kernel > kern_size**2

    I = sources[:,0]
    J = sources[:,1]

    # applying the kernel for each point
    for l in range(sources.shape[0]):
        pi = I[l]
        pj = J[l]

        if pj - kern_size >= 0 and pj + kern_size<n: #if we are in the middle, no need to do the modulo for j
            for i in range(2*kern_size+1):
                ind_i = np.mod((pi+i-kern_size), n)
                for j in range(2*kern_size+1):
                    ind_j = (pj+j-kern_size)
                    sqdist[ind_i,ind_j] = np.minimum(kernel[i,j], sqdist[ind_i,ind_j])
                    mask[ind_i,ind_j] = mask_kernel[i,j] and mask[ind_i,ind_j]

        else:
            for i in range(2*kern_size+1):
                ind_i = np.mod((pi+i-kern_size), n)
                for j in range(2*kern_size+1):
                    ind_j = np.mod((pj+j-kern_size), n)
                    sqdist[ind_i,ind_j] = np.minimum(kernel[i,j], sqdist[ind_i,ind_j])
                    mask[ind_i,ind_j] = mask_kernel[i,j] and mask[ind_i,ind_j]
    return

and tested it with:
overlap=5.2
kernel_size = int(np.sqrt(overlap*n**2/source_points.shape[0])/2)

print("cdist v1                :", timeit(lambda: dist_v1(all_points,source_points), number=1)*1000, "ms")
print("kernel numba (first run):", timeit(lambda: dist_vf(source_points, n, kernel_size), number=1)*1000, "ms") # first run includes JIT compilation = long
print("kernel numba            :", timeit(lambda: dist_vf(source_points, n, kernel_size), number=10)*100, "ms")

which gives the following results:

cdist v1                : 1163.0742 ms
kernel numba (first run): 2060.0802 ms
kernel numba            : 8.80377000000001 ms

The first run is quite slow because of JIT compilation, but after that it is 120x faster!

More can be gained from this algorithm by tuning the kernel_size parameter (or overlap). The current choice of kernel_size is only efficient for a small number of source points. For example, this choice fails with source_points=np.argwhere(a>0.85) (13s), while manually setting kernel_size=5 gives the answer in 22ms.
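For reference, the heuristic behind kernel_size can be spelled out (my reading of the code above, matching its commented "overlap" formula): pick k so that the total area stamped by all the source kernels is about overlap times the grid area.

```python
import numpy as np

n = 500
num_sources = 70
overlap = 5.2
kernel_size = int(np.sqrt(overlap * n**2 / num_sources) / 2)
# each source stamps a (2k+1)^2 patch; total stamped area relative to the grid:
coverage = num_sources * (2 * kernel_size + 1)**2 / n**2
```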

I hope my post is not (needlessly) too complicated; I do not really know how to organize it better.

[编辑2]

I paid more attention to the non-numba parts of the code and achieved a substantial speedup, quite close to what numba can reach. Here is the new version of the apply_kernel function:

def apply_kernel(sources, sqdist, kern_size, n, mask):
    ker_i = np.arange(-kern_size, kern_size+1).reshape((2*kern_size+1,1))
    ker_j = np.arange(-kern_size, kern_size+1).reshape((1,2*kern_size+1))

    kernel = np.add.outer(np.arange(-kern_size, kern_size+1)**2, np.arange(-kern_size, kern_size+1)**2)
    mask_kernel = kernel > kern_size**2

    for pi, pj in sources:

        imin = pi-kern_size
        jmin = pj-kern_size
        imax = pi+kern_size+1
        jmax = pj+kern_size+1
        if imax < n and jmax < n and imin >=0 and jmin >=0: # we are inside
            sqdist[imin:imax,jmin:jmax] = np.minimum(kernel, sqdist[imin:imax,jmin:jmax])
            mask[imin:imax,jmin:jmax] *= mask_kernel

        elif imax < n and imin >=0:
            ind_j = (pj+ker_j.ravel())%n
            sqdist[imin:imax,ind_j] = np.minimum(kernel, sqdist[imin:imax,ind_j])
            mask[imin:imax,ind_j] *= mask_kernel

        elif jmax < n and jmin >=0:
            ind_i = (pi+ker_i.ravel())%n
            sqdist[ind_i,jmin:jmax] = np.minimum(kernel, sqdist[ind_i,jmin:jmax])
            mask[ind_i,jmin:jmax] *= mask_kernel

        else :
            ind_i = (pi+ker_i)%n
            ind_j = (pj+ker_j)%n
            sqdist[ind_i,ind_j] = np.minimum(kernel, sqdist[ind_i,ind_j])
            mask[ind_i,ind_j] *= mask_kernel

The main optimizations are:

  • Indexing with slices (rather than dense index arrays)
  • Use of sparse indexing (which I had not thought of earlier)

Testing with

overlap=5.4
kernel_size = int(np.sqrt(overlap*n**2/source_points.shape[0])/2)

print("cdist v1  :", timeit(lambda: dist_v1(all_points,source_points), number=1)*1000, "ms")
print("kernel v2 :", timeit(lambda: dist_vf(source_points, n, kernel_size), number=10)*100, "ms")

gives

cdist v1  : 1209.8163000000002 ms
kernel v2 : 11.319049999999997 ms

This is a 100x improvement over cdist, about a 5.5x improvement over the previous numpy-only version, and about 25% slower than what numba achieves.