NumPy grouping by the range of differences between elements

Date: 2018-01-16 08:09:37

Tags: python numpy grouping

I have a set of angles that I want to group into arrays with a maximum difference of 2 degrees between them.

For example, input:

angles = np.array([[1],[2],[3],[4],[4],[5],[10]])

Output:

('group', 1)
[[1]
 [2]
 [3]]
('group', 2)
[[4]
 [4]
 [5]]
('group', 3)
[[10]]

numpy.diff takes the difference between each element and the next; I need the difference of each element from the first element of the current group.

itertools.groupby groups elements, but not within a definable tolerance.

numpy.digitize groups elements by predefined ranges, not by ranges derived from the array's own elements. (Maybe I could use it by taking the unique angle values, grouping those by their differences, and using the result as the predefined ranges?)
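To make the difference between the two rules concrete, here is a small sketch of why splitting on consecutive differences (the np.diff approach) does not reproduce the desired grouping:

```python
import numpy as np

angles = np.array([1, 2, 3, 4, 4, 5, 10])

# Splitting where consecutive differences exceed 2 merges too much:
# every consecutive diff up to 5 is <= 2, so 1..5 become one group.
consec = np.split(angles, np.flatnonzero(np.diff(angles) > 2) + 1)
# -> [array([1, 2, 3, 4, 4, 5]), array([10])]

# The required rule instead restarts a group whenever an element is
# more than 2 away from the group's FIRST element, so 4 must start
# a new group (4 - 1 > 2).
```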

My approach works, but it is extremely inefficient and unpythonic. (I am using expand_dims and vstack because I am actually working with 1-D arrays, not just single angles, but I have reduced them to simplify this question.)

angles = np.array([[1],[2],[3],[4],[4],[5],[10]])

groupedangles = []
idx1 = 0
diffAngleMax = 2

while(idx1 < len(angles)):
    angleA = angles[idx1]
    group = np.expand_dims(angleA, axis=0)
    for idx2 in xrange(idx1+1,len(angles)):
        angleB = angles[idx2]
        diffAngle = angleB - angleA
        if abs(diffAngle) <= diffAngleMax:
            group = np.vstack((group,angleB))
        else:
            idx1 = idx2
            groupedangles.append(group)
            break
    if idx2 == len(angles) - 1:
        if idx1 == idx2:
            angleA = angles[idx1]
            group = np.expand_dims(angleA, axis=0)
        groupedangles.append(group)
        break

for idx, x in enumerate(groupedangles):
    print('group', idx+1)
    print(x)

What is a better, faster way to do this?

2 Answers:

Answer 0 (score: 4)

Update: here is a Cython treatment.

In [1]: import cython

In [2]: %load_ext Cython

In [3]: %%cython
   ...: import numpy as np
   ...: cimport numpy as np
   ...: def cluster(np.ndarray array, np.float64_t maxdiff):
   ...:     cdef np.ndarray[np.float64_t, ndim=1] flat = np.sort(array.flatten())
   ...:     cdef list breakpoints = []
   ...:     cdef np.float64_t seed = flat[0]
   ...:     cdef np.int64_t i = 0
   ...:     for i in range(0, len(flat)):
   ...:         if (flat[i] - seed) > maxdiff:
   ...:             breakpoints.append(i)
   ...:             seed = flat[i]
   ...:     return np.split(array, breakpoints)
   ...: 

A sparsity test:

In [4]: angles = np.random.choice(np.arange(5000), 500).astype(np.float64)[:, None]

In [5]: %timeit cluster(angles, 2)
422 µs ± 12.6 µs per loop (mean ± std. dev. of 7 runs, 1000 loops each)

A duplicates test:

In [6]: angles = np.random.choice(np.arange(500), 1500).astype(np.float64)[:, None]

In [7]: %timeit cluster(angles, 2)
263 µs ± 14.6 µs per loop (mean ± std. dev. of 7 runs, 1000 loops each)

Both tests show a significant improvement. The algorithm now sorts the input and makes a single pass over the sorted array, which makes it a stable O(N * log(N)).
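For readers without Cython, the same sort-then-single-pass idea can be sketched in plain Python/NumPy (cluster_py is an illustrative name; it splits a sorted copy of the input rather than the original array):

```python
import numpy as np

def cluster_py(array, maxdiff):
    """Sort, then split whenever an element is more than `maxdiff`
    above the first element (seed) of the current run."""
    flat = np.sort(array.flatten())
    breakpoints = []
    seed = flat[0]
    for i, value in enumerate(flat):
        if value - seed > maxdiff:
            breakpoints.append(i)
            seed = value
    return np.split(flat, breakpoints)

angles = np.array([[1], [2], [3], [4], [4], [5], [10]], dtype=np.float64)
for idx, group in enumerate(cluster_py(angles, 2)):
    print('group', idx + 1, group)
```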

Pre-update

Here is a variant of seed clustering. It does not require sorting:

def cluster(array, maxdiff):
    tmp = array.copy()
    groups = []
    while len(tmp):
        # select seed
        seed = tmp.min()
        mask = (tmp - seed) <= maxdiff
        groups.append(tmp[mask, None])
        tmp = tmp[~mask]
    return groups

Example:

In [27]: cluster(angles, 2)
Out[27]: 
[array([[1],
        [2],
        [3]]), array([[4],
        [4],
        [5]]), array([[10]])]

Benchmarks for 500, 1000 and 1500 angles:

In [4]: angles = np.random.choice(np.arange(500), 500)[:, None]

In [5]: %timeit cluster(angles, 2)
1.25 ms ± 60.3 µs per loop (mean ± std. dev. of 7 runs, 1000 loops each)

In [6]: angles = np.random.choice(np.arange(500), 1000)[:, None]

In [7]: %timeit cluster(angles, 2)
1.46 ms ± 37 µs per loop (mean ± std. dev. of 7 runs, 1000 loops each)

In [8]: angles = np.random.choice(np.arange(500), 1500)[:, None]

In [9]: %timeit cluster(angles, 2)
1.99 ms ± 72.5 µs per loop (mean ± std. dev. of 7 runs, 1000 loops each)

Although the algorithm is O(N^2) in the worst case and O(N) in the best case, the benchmarks above clearly show near-linear time growth, because the actual running time depends on the structure of the data: its sparsity and duplication rate. In most cases you will not hit the worst case.
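To see that data dependence concretely, a strictly ascending input whose gaps all exceed maxdiff forces one pass per element (the O(N^2) worst case), while heavy repetition collapses to a single pass (the cluster definition is repeated from above so this snippet runs standalone):

```python
import numpy as np

def cluster(array, maxdiff):
    tmp = array.copy()
    groups = []
    while len(tmp):
        # select seed
        seed = tmp.min()
        mask = (tmp - seed) <= maxdiff
        groups.append(tmp[mask, None])
        tmp = tmp[~mask]
    return groups

# Worst case: every gap is 3 > maxdiff=2, so every element is its
# own group -> one pass over the remaining data per element.
sparse = np.arange(0, 50, 3)
print(len(cluster(sparse, 2)))   # 17

# Best case: all values identical -> everything removed in one pass.
dense = np.zeros(50)
print(len(cluster(dense, 2)))    # 1
```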

Some sparsity benchmarks:

In [4]: angles = np.random.choice(np.arange(500), 500)[:, None]

In [5]: %timeit cluster(angles, 2)
1.06 ms ± 27.9 µs per loop (mean ± std. dev. of 7 runs, 1000 loops each)

In [6]: angles = np.random.choice(np.arange(1000), 500)[:, None]

In [7]: %timeit cluster(angles, 2)
1.79 ms ± 117 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)

In [8]: angles = np.random.choice(np.arange(1500), 500)[:, None]

In [9]: %timeit cluster(angles, 2)
2.16 ms ± 90.3 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)

In [10]: angles = np.random.choice(np.arange(5000), 500)[:, None]

In [11]: %timeit cluster(angles, 2)
3.21 ms ± 139 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)

Answer 1 (score: 1)

Here is a sorting-based solution. One could try to be a bit cleverer and use bincount and argpartition to avoid the sort, but at N <= 500 it is not worth the trouble.

import numpy as np

def flexibin(a):
    # Split the values of `a` into bins in which max - min <= 2.
    idx0 = np.argsort(a)
    as_ = a[idx0]
    A = np.r_[as_, as_+2]
    idx = np.argsort(A)
    uinv = np.flatnonzero(idx >= len(a))
    linv = np.empty_like(idx)
    linv[np.flatnonzero(idx < len(a))] = np.arange(len(a))
    bins = [0]
    curr = 0
    while True:
        for j in range(uinv[idx[curr]], len(idx)):
            if idx[j] < len(a) and A[idx[j]] > A[idx[curr]] + 2:
                bins.append(j)
                curr = j
                break
        else:
            return np.split(idx0, linv[bins[1:]])

a = 180 * np.random.random((500,))
bins = flexibin(a)

mn, mx = zip(*((np.min(a[b]), np.max(a[b])) for b in bins))
assert np.all(np.diff(mn) > 2)
assert np.all(np.subtract(mx, mn) <= 2)
print('all ok')