Operating on slices of a sequential array of varying sizes

Time: 2018-04-20 10:38:41

Tags: python numpy time-series

Suppose I have the following signal array, where each value corresponds to a time in the time array:

import numpy as np

np.random.seed(123)
time = np.array([0,2,4,7,10,11,12,17,21,25,29,30,31,40])  # in seconds
signal = np.random.randint(1,5,len(time))

What I want to do is slice the signal array into smaller arrays such that each slice spans at least 10 seconds of time, and then sum the signal within each slice. Visually:

         |-----sum------||-----sum------||---sum----||--X--
time   = 0,  2,  4,  7, 10, 11, 12, 17, 21, 25, 29, 31, 40
signal = 3,  2,  3,  3,  1,  3,  3,  2,  4,  3,  4,  3,  2

The output I want is a list containing the sum of signal for each 10-second slice:

[12,  # 3+2+3+3+1
 13,  # 1+3+3+2+4
 14]  # 4+3+4+3

Note that the final 2 signal elements cannot be summed, because their time span is less than 10 seconds.

I wrote the following function:

def count(x, time, epoch=60):
    # calculate time diff
    time = time - time[0]

    # get indices at time boundaries
    num_bins = int(max(time) / epoch)
    inds = [0]

    for i in range(num_bins):
        upper_ind = np.argmax(time >= time[inds[-1]] + epoch)

        if time[upper_ind] - time[inds[-1]] >= epoch:
            inds.append(upper_ind)

    # calculate sums between each boundary
    counts = []
    for i in range(len(inds) - 1):
        lower = inds[i]
        upper = inds[i+1] + 1

        cur_signal = x[lower:upper]

        counts.append(sum(cur_signal))

    return counts

Called like so:

counts = count(signal, time, epoch=10)

It works, but it is slow for large arrays and rather hacky. Is there a more efficient way to do this, perhaps with some numpy magic, so that I don't need one pass to determine the boundaries and then a second pass to compute the sums?

Bonus points if there is a way to linearly interpolate between 2 time points (i.e. when one is slightly under 10 seconds and the next slightly over), so that the exact signal value at the 10-second mark can be estimated.
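Just to illustrate what I mean by that interpolation (a rough sketch, not something my current function attempts), np.interp could estimate the signal value at the exact 10-second mark:

# Rough sketch of the interpolation idea: a slice starting at t=10 would
# ideally end at exactly t=20, which falls between the samples at t=17 and
# t=21, so the signal value there can be estimated linearly.
t_target = 10 + 10                         # exact 10-second mark for that slice
est = np.interp(t_target, time, signal)    # -> 3.5 with the 13-value example arrays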

Edit:

Just pulling some extra information out of the comments...

"At least 10 seconds" means a slice cannot be shorter than 10 seconds, but it can be longer. I take the first time point that is more than 10 seconds past the slice start. See the second slice in the example above.

The signal value at a boundary gets counted twice. In other words, the end value of one slice is the start value of the next.
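To make that concrete, here is a small illustration using the 13-value arrays shown in the diagram above (not the seeded 14-value arrays from the code block):

# The element at each slice boundary belongs to both neighbouring slices.
sig = np.array([3, 2, 3, 3, 1, 3, 3, 2, 4, 3, 4, 3, 2])
slices = [sig[0:5], sig[4:9], sig[8:12]]   # sig[4] and sig[8] each appear twice
print([int(s.sum()) for s in slices])      # -> [12, 13, 14]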

5 Answers:

Answer 0 (score: 1)

Edit

After thinking about this some more, I realized your best bet is probably not elegant numpy code, especially if you care about performance. Even @PaperPanzer's code, pretty as it is, relies on calling searchsorted (which is based on a relatively expensive binary search) in a loop.

Instead, you can do the entire algorithm in a single-pass loop with no searching at all:

signal = np.array([3,  2,  3,  3,  1,  3,  3,  2,  4,  3,  4,  3,  2])
time = np.array([0,  2,  4,  7, 10, 11, 12, 17, 21, 25, 29, 31, 40])

def count(signal, time, epoch=10):
    counts = []
    total = 0
    timestart = time[0]

    for x, t in zip(signal, time):
        total += x

        if t - timestart >= epoch:
            counts.append(total)
            total = x       # the boundary value also starts the next slice
            timestart = t

    return counts

count(signal, time)

Output:

[12, 13, 14]

Timings

It looks like the simple loop really is quite a bit faster than the numpy/searchsorted/where approaches.

My code:

%%timeit        
count(signal, time)

5.88 µs ± 165 ns per loop (mean ± std. dev. of 7 runs, 100000 loops each)

@PaperPanzer's code:

%%timeit
idx = np.fromiter(iter(accumulate(chain((0,), repeat(10)), lambda now, delta: time.searchsorted(time[now] + delta)).__next__, len(time)), int)
np.add.reduceat(signal[:idx[-1]], idx[:-1]) + signal[idx[1:]]

9.63 µs ± 182 ns per loop (mean ± std. dev. of 7 runs, 100000 loops each)

@Brenlla's code:

%%timeit
out=[]
prev=0
# need to reinitialize the time array since the loop eats it
time = np.array([0,  2,  4,  7, 10, 11, 12, 17, 21, 25, 29, 31, 40])
while True:
    try:
        idx10 = np.where(time >=10)[0][0]
        time-=time[idx10]
        out.append(np.sum(signal[prev:idx10+1]))
        prev=idx10
    except:
        break

30.1 µs ± 502 ns per loop (mean ± std. dev. of 7 runs, 10000 loops each)

Answer 1 (score: 1)

Here is a way using a combination of itertools and numpy:

>>> import numpy as np
>>> from itertools import accumulate, chain, repeat
>>> 
>>> time   = 0,  2,  4,  7, 10, 11, 12, 17, 21, 25, 29, 31, 40
>>> signal = 3,  2,  3,  3,  1,  3,  3,  2,  4,  3,  4,  3,  2
>>> time, signal = map(np.array, (time, signal))
>>> 
>>> idx = np.fromiter(iter(accumulate(chain((0,), repeat(10)), lambda now, delta: time.searchsorted(time[now] + delta)).__next__, len(time)), int)
>>> np.add.reduceat(signal[:idx[-1]], idx[:-1]) + signal[idx[1:]]
array([12, 13, 14])
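In case the one-liner is hard to read, here is a sketch of the same index computation written out as an explicit loop (added purely for readability; it produces the same idx as above):

idx = [0]
while True:
    nxt = time.searchsorted(time[idx[-1]] + 10)   # first index at least 10 s after the slice start
    if nxt == len(time):                          # nothing is 10 s later; stop
        break
    idx.append(nxt)
idx = np.array(idx)                               # array([ 0,  4,  8, 11]) for the example data
counts = np.add.reduceat(signal[:idx[-1]], idx[:-1]) + signal[idx[1:]]
# signal[idx[1:]] adds each boundary element back so it is counted in both slices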

Answer 2 (score: 1)

Here is a vectorized and NumPythonic approach:

# time is array([ 2,  2,  4,  7, 10, 11, 12, 17, 21, 25, 29, 30, 31, 40])

# Using broadcasting you can get a 2d array of the difference of all items
# from other items within your array
In [115]: arr = time[:, None] - time
# Then find indices where the difference is less than or equal to -10
In [116]: x, y = np.where(arr <= -10)
# find, for each item, the first occurrence where the difference is less than or equal to -10
In [117]: first_acc = np.concatenate(([0], np.where(np.diff(x) != 0)[0]  + 1, [x.size]))

# use a recursive generator function to retrieve all the expected indices.
In [118]: def get_ind_rec(ind=0):
     ...:     try:
     ...:         ind = y[first_acc[ind]]
     ...:         yield ind
     ...:         yield from get_ind_rec(ind)
     ...:     except: # IndexError
     ...:         pass
     ...:     
     ...:     

In [119]: list(get_ind_rec())
Out[119]: [6, 9, 13]

Now you can simply use np.split() to split signal at these indices and then use map to apply sum over all the slices, for example:
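A rough sketch of that final step (note that a plain np.split does not count a boundary element in both slices, so it would need to be added back to match the behaviour described in the question's edit):

inds = list(get_ind_rec())             # [6, 9, 13] for the arrays used above
pieces = np.split(signal, inds)        # signal[0:6], signal[6:9], signal[9:13], signal[13:]
sums = list(map(np.sum, pieces))       # the last piece is the incomplete remainder
# To also count each boundary element in the preceding slice, add
# signal[inds[i]] to sums[i] for each i, and drop the final remainder.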

Answer 3 (score: 1)

Also a bit hacky, but I find it quite easy to understand. The try ... except should probably be replaced with something more robust/elegant.

import numpy as np

time   = 0,  2,  4,  7, 10, 11, 12, 17, 21, 25, 29, 31, 40
signal = 3,  2,  3,  3,  1,  3,  3,  2,  4,  3,  4,  3,  2
time, signal = map(np.array, (time, signal))

out=[]
prev=0
while True:
    try:
        idx10 = np.where(time >=10)[0][0]
        time-=time[idx10]
        out.append(sum(signal[prev:idx10+1]))
        prev=idx10
    except:
        break
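With the example arrays above, this leaves out == [12, 13, 14].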

Answer 4 (score: 1)

If a compiler is an option

If solving the problem with numpy and broadcasting isn't easy, this can be another option. Even when the problem is easy to vectorize, you can still get a significant speedup.

Loop as much as you like

import numpy as np
import numba as nb

@nb.njit(fastmath=True)
def count(x, time, epoch=10):
  max_bins=int((time[-1]-time[0]))//epoch
  # one extra slot so the boundary element of a final, incomplete bin
  # cannot write past the end of the array
  sum_arr=np.zeros((max_bins+1),dtype=x.dtype)

  start_time=time[0]
  ii=0
  for i in range(x.shape[0]):
    if (time[i]-start_time) < epoch:
      sum_arr[ii]+=x[i]
    else:
      # the boundary element is counted in both the finished and the new bin
      sum_arr[ii]+=x[i]
      ii+=1
      sum_arr[ii]+=x[i]
      start_time=time[i]

  return sum_arr[0:ii]

Compilation

In this example I use numba because it is simple to use. The import and the function decorator are all you need to get a speedup of a few orders of magnitude.

Measuring performance

import time

#create some data
t=np.arange(0,1e6,2)
signal = np.random.randint(1,5,len(t))
sum_arr=count(signal, t, epoch=10)

t1=time.time()
sum_arr_1=your_count(signal, t, epoch=10)  # your_count: the pure-Python count() from the question
print(time.time()-t1)

#The first call gets about 0.2s compilation overhead
sum_arr_2=count(signal, t, epoch=10)

t1=time.time()
for i in range(1000):
  sum_arr_2=count(signal, t, epoch=10)

print((time.time()-t1)/1000)
np.allclose(sum_arr_1,sum_arr_2)

Results

your_version:13.6s
compiled_version: 0.6ms
np.allclose: True

All in all, that is a speedup of about 20200x.