I'm doing some log analysis, checking the length of a queue every few minutes. I know when files enter the "queue" (a plain filesystem directory) and when they leave it, so I can plot the queue length at a given interval. So far so good, although the code is a bit procedural:
ts = pd.date_range(start='2012-12-05 10:15:00', end='2012-12-05 15:45', freq='5t')
tmpdf = df.copy()
for d in ts:
    tmpdf[d] = (tmpdf.date_in < d) & (tmpdf.date_out > d)
queue_length = tmpdf[list(ts)].apply(func=np.sum)
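The same per-interval count can be written without mutating a copy of the frame. A minimal runnable sketch, using hypothetical data since `df` isn't shown (three files with made-up in/out times):

```python
import pandas as pd

# Hypothetical queue data: three files entering and leaving the directory.
df = pd.DataFrame({
    'date_in':  pd.to_datetime(['2012-12-05 10:16:00',
                                '2012-12-05 10:17:30',
                                '2012-12-05 10:21:00']),
    'date_out': pd.to_datetime(['2012-12-05 10:26:00',
                                '2012-12-05 10:19:00',
                                '2012-12-05 10:22:00']),
})

ts = pd.date_range('2012-12-05 10:15:00', '2012-12-05 10:30:00', freq='5min')

# For each sample time, count files whose (date_in, date_out) interval contains it.
queue_length = pd.Series(
    [((df.date_in < t) & (df.date_out > t)).sum() for t in ts],
    index=ts,
)
print(queue_length.tolist())  # → [0, 1, 1, 0]
```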
However, I'd like to compare the actual length against the length under a given consumption rate (e.g. 1 per second). I can't just subtract a constant, because the queue can never go below zero.
I got it working, but in a very procedural way. I tried to use pandas window functions, with little success, because there is no way to access the result already computed for the previous element. This was my first attempt, which fails badly:
imagenes_min = 60 * imagenes_sec

def roll(window_vals):
    return max(0.0, window_vals[-1] + window_vals[-2] - imagenes_min)

pd.rolling_apply(arg=imagenes_introducidas, func=roll, window=2, min_periods=2)
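The reason this attempt fails is that a rolling window only ever sees the raw input values, never the previously computed outputs, while the recurrence `acc = max(0, acc + x - rate)` needs the prior result fed back in. A scan does exactly that; a minimal sketch with `itertools.accumulate` (the `rate` and `arrivals` values are made up for illustration):

```python
from itertools import accumulate

rate = 3.0                    # hypothetical consumption per interval
arrivals = [5, 0, 1, 8, 0]    # hypothetical arrivals per interval

# accumulate feeds each previous result back into the next step,
# which a rolling window cannot do.  initial=0.0 starts the queue empty
# (Python 3.8+); drop that seed value from the output.
est = list(accumulate(arrivals,
                      lambda acc, x: max(0.0, acc + x - rate),
                      initial=0.0))[1:]
print(est)  # → [2.0, 0.0, 0.0, 5.0, 2.0]
```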
The real code looks like this; I find it too verbose and slow:
imagenes_sec = 1.05
imagenes_min = imagenes_sec * 60 * 5
imagenes_introducidas = df3.aet.resample(rule='5t', how='count')
imagenes_introducidas.head()

def accum_minus(serie, rate):
    acc = 0
    retval = np.zeros(len(serie))
    for i, a in enumerate(serie.values):
        acc = max(0, a + acc - rate)
        retval[i] = acc
    return Series(data=retval, index=serie.index)

est_1 = accum_minus(imagenes_introducidas, imagenes_min)
comparativa = DataFrame(data={'real': queue_length, 'est_1_sec': est_1})
comparativa.plot()
This looks like it should be easy, but I don't know how to do it properly. Maybe pandas isn't the right tool here, and some numpy or scipy magic is.
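There is in fact a fully vectorized numpy formulation. The clipped recurrence `y[i] = max(0, y[i-1] + a[i] - rate)` is Lindley's recursion, and with `S = cumsum(a - rate)` its closed form is `y[n] = S[n] - min(0, min_{k<=n} S[k])`. A sketch under that identity (`clipped_cumsum` and its inputs are illustrative names, not from the question):

```python
import numpy as np

def clipped_cumsum(arrivals, rate):
    """Vectorized y[i] = max(0, y[i-1] + arrivals[i] - rate), y[-1] = 0.

    Uses the Lindley-recursion identity:
    y[n] = S[n] - min(0, min_{k<=n} S[k]), where S = cumsum(arrivals - rate).
    """
    s = np.cumsum(np.asarray(arrivals, dtype=float) - rate)
    return s - np.minimum.accumulate(np.minimum(s, 0.0))

print(clipped_cumsum([3, 0, 4, 10, 0], rate=2))  # → [ 1.  0.  2. 10.  8.]
```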
Update: df3 looks like this (some columns unnamed):
aet date_out
date_in
2012-12-05 10:08:59.318600 Z2XG17 2012-12-05 10:09:37.172300
2012-12-05 10:08:59.451300 Z2XG17 2012-12-05 10:09:38.048800
2012-12-05 10:08:59.587400 Z2XG17 2012-12-05 10:09:39.044100
Update 2: this seems faster, though still not very elegant:
imagenes_sec = 1.05
imagenes_min = imagenes_sec * 60 * 5
imagenes_introducidas = df3.aet.resample(rule='5t', how='count')

def add_or_zero(x, y):
    return max(0.0, x + y - imagenes_min)

v_add_or_zero = np.frompyfunc(add_or_zero, 2, 1)
xx = v_add_or_zero.accumulate(imagenes_introducidas.values, dtype=np.object)
dd = DataFrame(data={'est_1_sec': xx, 'real': queue_length}, index=imagenes_introducidas.index)
dd.plot()
Answer 0 (score: 2)
How about interleaving the inbound and outbound events into a single frame?
In [15]: df
Out[15]:
date_in aet date_out
0 2012-12-05 10:08:59.318600 Z2XG17 2012-12-05 10:09:37.172300
1 2012-12-05 10:08:59.451300 Z2XG17 2012-12-05 10:09:38.048800
2 2012-12-05 10:08:59.587400 Z2XG17 2012-12-05 10:09:39.044100
In [16]: inbnd = pd.DataFrame({'event': 1}, index=df.date_in)
In [17]: outbnd = pd.DataFrame({'event': -1}, index=df.date_out)
In [18]: real_stream = pd.concat([inbnd, outbnd]).sort()
In [19]: real_stream
Out[19]:
event
date
2012-12-05 10:08:59.318600 1
2012-12-05 10:08:59.451300 1
2012-12-05 10:08:59.587400 1
2012-12-05 10:09:37.172300 -1
2012-12-05 10:09:38.048800 -1
2012-12-05 10:09:39.044100 -1
In this format (one increment or decrement per event), the queue depth can be computed easily with cumsum().
In [20]: real_stream['depth'] = real_stream.event.cumsum()
In [21]: real_stream
Out[21]:
event depth
date
2012-12-05 10:08:59.318600 1 1
2012-12-05 10:08:59.451300 1 2
2012-12-05 10:08:59.587400 1 3
2012-12-05 10:09:37.172300 -1 2
2012-12-05 10:09:38.048800 -1 1
2012-12-05 10:09:39.044100 -1 0
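The transcript above can be reproduced end to end; note that `DataFrame.sort()` was removed in later pandas versions, where `sort_index()` does the same job here. A self-contained sketch using the three rows from the question (the `aet` column is omitted since it plays no role in the depth calculation):

```python
import pandas as pd

# The three in/out timestamps shown in the question's df3 sample.
date_in = pd.to_datetime(['2012-12-05 10:08:59.318600',
                          '2012-12-05 10:08:59.451300',
                          '2012-12-05 10:08:59.587400'])
date_out = pd.to_datetime(['2012-12-05 10:09:37.172300',
                           '2012-12-05 10:09:38.048800',
                           '2012-12-05 10:09:39.044100'])

inbnd = pd.DataFrame({'event': 1}, index=date_in)    # +1 on arrival
outbnd = pd.DataFrame({'event': -1}, index=date_out)  # -1 on departure

# sort_index() replaces the long-removed DataFrame.sort() from the answer.
real_stream = pd.concat([inbnd, outbnd]).sort_index()
real_stream['depth'] = real_stream.event.cumsum()
print(real_stream.depth.tolist())  # → [1, 2, 3, 2, 1, 0]
```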
To simulate a different consumption rate, replace all the real outbound timestamps with a series of fabricated outbound timestamps at a fixed frequency. Since cumsum() doesn't work in that case, I created a counting function that takes a floor.
In [53]: outbnd_1s = pd.DataFrame({'event': -1},
   ....:                          index=real_stream.event.resample("S").index)

In [54]: fixed_stream = pd.concat([inbnd, outbnd_1s]).sort()

In [55]: def make_floor_counter(floor):
   ....:     count = [0]
   ....:     def process(n):
   ....:         count[0] += n
   ....:         if count[0] < floor:
   ....:             count[0] = floor
   ....:         return count[0]
   ....:     return process
   ....:
In [56]: fixed_stream['depth'] = fixed_stream.event.map(make_floor_counter(0))
In [57]: fixed_stream.head(8)
Out[57]:
event depth
2012-12-05 10:08:59 -1 0
2012-12-05 10:08:59.318600 1 1
2012-12-05 10:08:59.451300 1 2
2012-12-05 10:08:59.587400 1 3
2012-12-05 10:09:00 -1 2
2012-12-05 10:09:01 -1 1
2012-12-05 10:09:02 -1 0
2012-12-05 10:09:03 -1 0
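The floor-counter closure can be checked in isolation. A minimal sketch that replays the event pattern from `fixed_stream.head(8)` above as a plain list (the event values are transcribed from the transcript, not recomputed from timestamps):

```python
# The answer's stateful counter: clamps the running sum at `floor`.
def make_floor_counter(floor):
    count = [0]                      # mutable cell so the closure keeps state
    def process(n):
        count[0] = max(floor, count[0] + n)
        return count[0]
    return process

# +1 arrivals and -1 fixed-rate drains, in the order shown in Out[57].
events = [-1, 1, 1, 1, -1, -1, -1, -1]
depths = list(map(make_floor_counter(0), events))
print(depths)  # → [0, 1, 2, 3, 2, 1, 0, 0]
```

The key design point is that `map` applies the closure left to right, so the state carried in `count[0]` plays the role that cumsum() cannot: a running total that never drops below the floor.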