I have a time series in a Pandas DataFrame. The timestamps can be unevenly spaced (one every 1 to 5 minutes), but there will always be one every 5 minutes (timestamps whose minute ends in 0, 5, 10, 15, 20, 25, 30, 35, 40, 45, 50, 55).
Example:
2017-01-01 2:05:00 32.90
2017-01-01 2:07:30 29.83
2017-01-01 2:10:00 45.76
2017-01-01 2:15:00 16.22
2017-01-01 2:20:00 17.33
2017-01-01 2:25:00 23.40
2017-01-01 2:28:45 150.12
2017-01-01 2:30:00 100.29
2017-01-01 2:35:00 38.45
2017-01-01 2:40:00 67.12
2017-01-01 2:45:00 20.00
2017-01-01 2:50:00 58.41
2017-01-01 2:55:00 58.32
2017-01-01 3:00:00 59.89
I want to compute 15-minute time-weighted averages. A row whose timestamp falls exactly on a 15-minute mark (minute ending in 0, 15, 30, 45) closes an interval, so the data is grouped as follows:
Group 1 (interval 2017-01-01 2:00:00):
2017-01-01 2:05:00 32.90
2017-01-01 2:07:30 29.83
2017-01-01 2:10:00 45.76
2017-01-01 2:15:00 16.22
Group 2 (interval 2017-01-01 2:15:00):
2017-01-01 2:20:00 17.33
2017-01-01 2:25:00 23.40
2017-01-01 2:28:45 150.12
2017-01-01 2:30:00 100.29
Group 3 (interval 2017-01-01 2:30:00):
2017-01-01 2:35:00 38.45
2017-01-01 2:40:00 67.12
2017-01-01 2:45:00 20.00
Group 4 (interval 2017-01-01 2:45:00):
2017-01-01 2:50:00 58.41
2017-01-01 2:55:00 58.32
2017-01-01 3:00:00 59.89
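For reference, this grouping can be reproduced with pandas' resample (an illustrative sketch, not part of the original question), using `closed='right'` so a row landing exactly on a 15-minute mark closes its interval, and `label='left'` so each group is named by its interval start:

```python
import pandas as pd

# First five rows of the sample data, for illustration
idx = pd.to_datetime([
    "2017-01-01 2:05:00", "2017-01-01 2:07:30", "2017-01-01 2:10:00",
    "2017-01-01 2:15:00", "2017-01-01 2:20:00",
])
s = pd.Series([32.90, 29.83, 45.76, 16.22, 17.33], index=idx)

# closed='right' assigns the 2:15:00 row to the interval it closes;
# label='left' names each group by its interval start, matching the groups above
for start, group in s.resample("15min", closed="right", label="left"):
    print(start, list(group.values))
```

This prints one line per group: the 2:00:00 group contains the first four rows, and 2:20:00 falls into the next group.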
The averages must be time-weighted, so not simply the standard mean of all values in the group.
For example, the time-weighted average of Group 2 is not 72.785, which would be the regular mean of its 4 values. Instead it should be:
(5 minutes / 15 minutes) * 17.33 = 5.776667 ==> The 5 minutes is taken from the difference between this timestamp and the previous timestamp
+(5 minutes / 15 minutes) * 23.40 = 7.8
+(3.75 minutes / 15 minutes) * 150.12 = 37.53
+(1.25 minutes / 15 minutes) * 100.29 = 8.3575
= **59.46417**
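As a sanity check, the arithmetic above can be reproduced in a few lines of plain Python (the minute gaps are transcribed from the example):

```python
# Group 2 values and the minutes elapsed since the previous timestamp
# (or since the interval start, for the first row)
values = [17.33, 23.40, 150.12, 100.29]
minutes = [5, 5, 3.75, 1.25]

# Each value is weighted by its share of the 15-minute interval
twa = sum(v * m / 15 for v, m in zip(values, minutes))
print(round(twa, 5))  # 59.46417
```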
Ideally the 15 minutes would also be parameterized, since it may change to 60 minutes (hourly) in the future, but I don't expect that to be a problem.
Also, performance matters a lot here. My dataset will have around 10k rows, so iterating over each record one by one would be very slow.
I tried looking at Pandas' df.rolling() function, but couldn't figure out how to apply it to my particular scenario.
Any help is much appreciated!
Update 1:
Following Simon's excellent solution, I modified it slightly. I made a few adjustments to adapt it to my specific case:
```python
def func(df):
    if df.size == 0:
        return
    timestep = 15 * 60
    indexes = df.index - (df.index[-1] - pd.Timedelta(seconds=timestep))
    seconds = indexes.seconds
    weight = [seconds[n] / timestep if n == 0 else (seconds[n] - seconds[n - 1]) / timestep
              for n, k in enumerate(seconds)]
    return np.sum(weight * df.values)
```
This is to cope with possibly empty 15-minute intervals (missing rows in the database).
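For illustration, here is one way this updated function might be applied via resample (a sketch, not from the original post; the Series below is just Group 2 from the example):

```python
import numpy as np
import pandas as pd

def func(df):
    if df.size == 0:
        return
    timestep = 15 * 60
    # Offsets of each timestamp from the interval start (interval end minus timestep)
    indexes = df.index - (df.index[-1] - pd.Timedelta(seconds=timestep))
    seconds = indexes.seconds
    weight = [seconds[n] / timestep if n == 0 else (seconds[n] - seconds[n - 1]) / timestep
              for n, k in enumerate(seconds)]
    return np.sum(weight * df.values)

# Group 2 from the example: interval (2:15, 2:30]
s = pd.Series(
    [17.33, 23.40, 150.12, 100.29],
    index=pd.to_datetime([
        "2017-01-01 2:20:00", "2017-01-01 2:25:00",
        "2017-01-01 2:28:45", "2017-01-01 2:30:00",
    ]),
)
result = s.resample("15min", closed="right", label="right").apply(func)
print(result)  # 2017-01-01 02:30:00 -> 59.464167
```

Note this relies on the question's guarantee that every interval's closing timestamp is present in the data, since the interval start is reconstructed from `df.index[-1]`.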
Answer 0 (score: 5)
This one is tricky. I'd love to see another commenter do it more efficiently, since I have a hunch there's a better way.
I also skipped the part about parameterizing the 15-minute value, but I've pointed out where you could do that in the comments. It's left as an exercise for the reader :D It should be parameterized, though, since right now there are lots of magic '* 15' and '* 60' values scattered around the place, which looks clumsy.
I'm also tired, and my wife wants to watch a movie, so I haven't cleaned up my code. It's a bit messy and should be written more cleanly - that may or may not be worth doing, depending on whether someone else can redo all this in 6 lines of code. If there's no answer by tomorrow morning, I'll come back and do it better.
```python
def func(df):
    timestep = 15 * 60
    seconds = (df.index.minute * 60 + df.index.second) - timestep
    weight = [k / timestep if n == 0 else (seconds[n] - seconds[n - 1]) / timestep
              for n, k in enumerate(seconds)]
    return np.sum(weight * df.values)

df.resample('15min', closed='right').apply(func)
```
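The parameterization left as an exercise could be sketched as follows. This is a hypothetical variant (not from the answer): it uses a closure so resample can still call the inner function, and it anchors offsets to the interval end, as in the question's Update 1, so the same code works for a 60-minute step:

```python
import numpy as np
import pandas as pd

def make_func(minutes):
    # `minutes` replaces the hard-coded 15 in the answer above
    timestep = minutes * 60

    def func(s):
        if s.size == 0:
            return None
        # Offset of each timestamp from the interval start, in seconds
        offsets = (s.index - (s.index[-1] - pd.Timedelta(seconds=timestep))).seconds
        weight = [offsets[n] / timestep if n == 0
                  else (offsets[n] - offsets[n - 1]) / timestep
                  for n in range(len(offsets))]
        return np.sum(weight * s.values)

    return func

# e.g. hourly intervals: df.resample('60min', closed='right').apply(make_func(60))
```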
Answer 1 (score: 0)
Let the first column be labeled ts and the next column be labeled value:
```python
from datetime import timedelta
import pandas as pd

def tws(df, length):
    df['ts'] = pd.to_datetime(df['ts'])
    interval = [0]
    df1 = df
    for i in range(1, len(df1)):
        delta = df1.loc[i, 'ts'] - df1.loc[i - 1, 'ts']
        interval.append((delta.days * 24 * 60 + delta.seconds) / 60)
    df1['time_interval'] = interval
    start = pd.to_datetime('2017-01-01 2:00:00')
    TWS = []
    ave = 0
    for i in range(1, len(df1) + 1):
        try:
            if df1.loc[i, 'ts'] <= (start + timedelta(minutes=length)):
                ave = ave + df1.loc[i, 'value'] * df1.loc[i, 'time_interval']
            else:
                TWS.append(ave / length)
                ave = df1.loc[i, 'value'] * df1.loc[i, 'time_interval']
                start = df1.loc[i - 1, 'ts']
        except KeyError:  # i has run past the last row
            TWS.append(ave / length)
    return TWS

tws(df, 15)
```
The output is a list of the time-weighted averages for each interval.
Answer 2 (score: 0)
Another option is to multiply the values by the fractional time between ticks and then sum the results. The following function takes a Series or DataFrame with the values, plus the requested target index:
```python
import numpy as np
import pandas as pd

def resample_time_weighted_mean(x, target_index, closed=None, label=None):
    shift = 1 if closed == "right" else -1
    fill = "bfill" if closed == "right" else "ffill"

    # Determine the length of each interval (daylight-saving aware)
    extended_index = target_index.union(
        [target_index[0] - target_index.freq, target_index[-1] + target_index.freq]
    )
    interval_lengths = -extended_index.to_series().diff(periods=shift)

    # Create a combined index of the source index and target index,
    # and reindex both the values and the interval lengths to it
    combined_index = x.index.union(extended_index)
    x = x.reindex(index=combined_index, method=fill)
    interval_lengths = interval_lengths.reindex(index=combined_index, method=fill)

    # Determine the weight of each value and multiply the source values
    weights = -x.index.to_series().diff(periods=shift) / interval_lengths
    x = x.mul(weights, axis=0)

    # Resample to the new index; the final reindex is necessary because
    # resample might return more rows based on the frequency
    return (
        x.resample(target_index.freq, closed=closed, label=label)
        .sum()
        .reindex(target_index)
    )
```
Applying this to the example data:
```python
x = pd.Series(
    [
        32.9, 29.83, 45.76, 16.22, 17.33, 23.4, 150.12,
        100.29, 38.45, 67.12, 20.0, 58.41, 58.32, 59.89,
    ],
    index=pd.to_datetime(
        [
            "2017-01-01 2:05:00",
            "2017-01-01 2:07:30",
            "2017-01-01 2:10:00",
            "2017-01-01 2:15:00",
            "2017-01-01 2:20:00",
            "2017-01-01 2:25:00",
            "2017-01-01 2:28:45",
            "2017-01-01 2:30:00",
            "2017-01-01 2:35:00",
            "2017-01-01 2:40:00",
            "2017-01-01 2:45:00",
            "2017-01-01 2:50:00",
            "2017-01-01 2:55:00",
            "2017-01-01 3:00:00",
        ]
    ),
)

opts = dict(closed="right", label="right")
resample_time_weighted_mean(
    x, pd.DatetimeIndex(x.resample("15T", **opts).groups.keys(), freq="infer"), **opts
)
```
This returns:

```text
2017-01-01 02:15:00    18.005000
2017-01-01 02:30:00    59.464167
2017-01-01 02:45:00    41.856667
2017-01-01 03:00:00    58.873333
Freq: 15T, dtype: float64
```
Regarding the performance concern raised in simon's answer, this approach performs well even on millions of rows, since the weights are computed in a single vectorized pass rather than in a relatively slow Python loop:
```python
new_index = pd.date_range("2017-01-01", "2021-01-01", freq="1T")
new_index = new_index + pd.TimedeltaIndex(
    np.random.rand(*new_index.shape) * 60 - 30, "s"
)
values = pd.Series(np.random.rand(*new_index.shape), index=new_index)
print(values.shape)
# (2103841,)
```

```python
%%timeit
resample_time_weighted_mean(
    values, pd.date_range("2017-01-01", "2021-01-01", freq="15T"), closed="right"
)
```

4.93 s ± 48.4 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)