How to average all coordinates within a given distance in a vectorized way

Asked: 2017-10-18 11:22:36

Tags: python arrays numpy scipy vectorization

I did find a way to compute the centre coordinates of a set of points. However, my method becomes very slow as the number of initial coordinates grows (I have roughly 100,000 coordinates).

The bottleneck is the for loop in the code. I tried to remove it with np.apply_along_axis, but found that it is nothing more than a hidden Python loop.

Is it possible to detect and average clusters of too-close points of varying sizes in a vectorized way?

import numpy as np
from scipy.spatial import cKDTree
np.random.seed(7)
max_distance=1

#Example input points (the seed is set above, but these points are fixed)
points = np.array([[1,1],[1,2],[2,1],[3,3],[3,4],[5,5],[8,8],[10,10],[8,6],[6,5]])

#Create the tree and detect the points and neighbours which need to be fused
tree = cKDTree(points)
rows_to_fuse = np.array(list(tree.query_pairs(r=max_distance))).astype('uint64')

#Split the points and neighbours into two groups
points_to_fuse = points[rows_to_fuse[:,0], :2]
neighbours = points[rows_to_fuse[:,1], :2]

#get unique points_to_fuse
nonduplicate_points = np.ascontiguousarray(points_to_fuse)
unique_points = np.unique(nonduplicate_points.view([('', nonduplicate_points.dtype)]\
                                                 *nonduplicate_points.shape[1]))
unique_points = unique_points.view(nonduplicate_points.dtype).reshape(\
                                          (unique_points.shape[0],\
                                           nonduplicate_points.shape[1]))
#Empty array to store fused points
fused_points = np.empty((len(unique_points), 2))

####BOTTLENECK LOOP####
for i, point in enumerate(unique_points):
    #Detect all locations where a unique point occurs
    locs=np.where(np.logical_and((points_to_fuse[:,0] == point[0]), (points_to_fuse[:,1]==point[1])))
    #Select all neighbours at these locations and take the average
    fused_points[i,:] = (np.average(np.hstack((point[0],neighbours[locs,0][0]))),np.average(np.hstack((point[1],neighbours[locs,1][0]))))

#Get original points that didn't need to be fused
points_without_fuse = np.delete(points, np.unique(rows_to_fuse.reshape((1, -1))), axis=0)

#Stack result
points = np.row_stack((points_without_fuse, fused_points))

Expected output

>>> points
array([[  8.        ,   8.        ],
       [ 10.        ,  10.        ],
       [  8.        ,   6.        ],
       [  1.33333333,   1.33333333],
       [  3.        ,   3.5       ],
       [  5.5       ,   5.        ]])

Edit 1: a worked example of one loop iteration with the desired result

Step 1: create the variables for the loop

#outside loop
points_to_fuse = np.array([[100,100],[101,101],[100,100]])
neighbours = np.array([[103,105],[109,701],[99,100]])
unique_points = np.array([[100,100],[101,101]])

#inside loop
point = np.array([100,100])
i = 0

Step 2: detect all locations where the unique point occurs in the points_to_fuse array

locs=np.where(np.logical_and((points_to_fuse[:,0] == point[0]), (points_to_fuse[:,1]==point[1])))
>>> (array([0, 2], dtype=int64),)

Step 3: build an array of the point and its neighbours at these locations, and compute the average

array_of_points = np.column_stack((np.hstack((point[0],neighbours[locs,0][0])),np.hstack((point[1],neighbours[locs,1][0]))))
>>> array([[100, 100],
           [103, 105],
           [ 99, 100]])
fused_points[i, :] = np.average(array_of_points, 0)
>>> array([ 100.66666667,  101.66666667])

Loop output after the full run

>>> print(fused_points)
>>> array([[ 100.66666667,  101.66666667],
           [ 105.        ,  401.        ]])
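The three steps above can be run end to end as a self-contained snippet (a slightly simplified sketch of the same logic, using a row-wise equality test instead of two separate column comparisons):

```python
import numpy as np

# variables from Step 1
points_to_fuse = np.array([[100, 100], [101, 101], [100, 100]])
neighbours = np.array([[103, 105], [109, 701], [99, 100]])
unique_points = np.array([[100, 100], [101, 101]])

fused_points = np.empty((len(unique_points), 2))
for i, point in enumerate(unique_points):
    # Step 2: rows where this unique point occurs in points_to_fuse
    locs = np.where((points_to_fuse == point).all(axis=1))[0]
    # Step 3: stack the point with its neighbours and average per column
    cluster = np.vstack((point[None, :], neighbours[locs]))
    fused_points[i] = cluster.mean(axis=0)

print(fused_points)
# → [[100.66666667 101.66666667]
#    [105.         401.        ]]
```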

1 answer:

Answer 0 (score: 2)

The necessary loop itself is not the bottleneck, since the neighbourhoods do not all have the same size.

The pitfall is the points_to_fuse[:,0] == point[0] inside the loop, which makes the complexity quadratic. You can avoid it by sorting the points by index.

An example of doing so, even though it does not solve the whole problem (it runs after rows_to_fuse has been generated):

#Sort the pairs so that pairs sharing the same second index are contiguous
sorter = np.lexsort(rows_to_fuse.T)
sorted_points = rows_to_fuse[sorter]
uniques, counts = np.unique(sorted_points[:, 1], return_counts=True)
indices = counts.cumsum()
#Split the sorted pairs into one group per neighbourhood
neighbourhood = np.split(sorted_points, indices)[:-1]
means = [(points[ne[:, 0]].sum(axis=0) + points[ne[0, 1]]) / (len(ne) + 1)
         for ne in neighbourhood]  # a simple Python loop.
# + manage unfused points.

Another improvement, if you want to speed the code up further, would be to compute the means with numba, but I think the complexity is optimal now.
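If even the short Python loop over neighbourhoods is undesirable, the per-group sums can be computed in one vectorized call with np.add.reduceat on the sorted pairs. This is a sketch of that idea on made-up index pairs (the sorted_points and points arrays below are hypothetical, already grouped by their second column as in the answer above):

```python
import numpy as np

# hypothetical (neighbour, point) index pairs, sorted by the second column
sorted_points = np.array([[0, 1], [2, 1], [0, 3], [4, 3]])
points = np.array([[0., 0.], [1., 1.], [2., 0.], [3., 3.], [4., 4.]])

uniques, counts = np.unique(sorted_points[:, 1], return_counts=True)
# start offset of each group within the sorted pair array
starts = np.concatenate(([0], counts.cumsum()[:-1]))

# sum the neighbour coordinates of every group in one vectorized call
group_sums = np.add.reduceat(points[sorted_points[:, 0]], starts, axis=0)
# add each group's own point and divide by the group size + 1
means = (group_sums + points[uniques]) / (counts + 1)[:, None]
print(means)
```

np.add.reduceat sums each slice between consecutive start offsets, so the result matches the list comprehension in the answer without a Python-level loop over neighbourhoods.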