Python Pandas - Getting attributes related to consecutive datetimes

Date: 2018-01-14 17:26:18

Tags: python pandas

I have a dataframe containing a list of datetimes at minute resolution (usually in hourly increments), e.g. 2018-01-14 03:00, 2018-01-14 04:00, and so on.

What I would like to do is capture the number of consecutive records at a minute increment that I define (for some it may be 60 minutes, for others 15, etc.). Then I want to associate the first and last reading time with each block.

Take the following data as an example:

id             reading_time     type
1              1/6/2018 00:00   Interval
1              1/6/2018 01:00   Interval
1              1/6/2018 02:00   Interval
1              1/6/2018 03:00   Interval
1              1/6/2018 06:00   Interval
1              1/6/2018 07:00   Interval
1              1/6/2018 09:00   Interval
1              1/6/2018 10:00   Interval
1              1/6/2018 14:00   Interval
1              1/6/2018 15:00   Interval

I would like the output to look like this:

id  first_reading_time  last_reading_time   number_of_records   type
1   1/6/2018 00:00      1/6/2018 03:00      4                   Received
1   1/6/2018 04:00      1/6/2018 05:00      2                   Missed
1   1/6/2018 06:00      1/6/2018 07:00      2                   Received
1   1/6/2018 08:00      1/6/2018 08:00      1                   Missed
1   1/6/2018 09:00      1/6/2018 10:00      2                   Received
1   1/6/2018 11:00      1/6/2018 13:00      3                   Missed
1   1/6/2018 14:00      1/6/2018 15:00      2                   Received

Now, in this example there is only one day, and I could write code that handles a single day, but many of the rows span multiple days.

At the moment I am able to capture this aggregation up to the point where the first run of consecutive records ends, but not the runs after it, using this code:

import pandas as pd

df = pd.DataFrame(data=d)              # 'd' holds the raw records shown above
df.reading_time = pd.to_datetime(df.reading_time)
df = df.sort_values('reading_time', ascending=True)
delta = pd.Timedelta(60, 'm')          # the expected reading increment
# True where the gap to the previous reading is within the increment
consecutive = df.reading_time.diff().fillna(pd.Timedelta(0)).abs().le(delta)
df['consecutive'] = consecutive
idx_loc = df.index.get_loc(consecutive.idxmin())   # position of the first gap
df.iloc[:idx_loc]                      # only the first consecutive block
first_reading_time = df['reading_time'].iloc[0]
last_reading_time = df['reading_time'].iloc[idx_loc - 1]

where the dataframe 'd' holds the more granular data shown at the top. The line that sets the variable 'consecutive' flags each record as True or False based on the difference in minutes between the current row and the previous one. The variable idx_loc captures the number of consecutive rows, but only for the first run (in this case 1/6/2018 00:00 through 1/6/2018 03:00).
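In other words, consecutive.idxmin() only ever returns the label of the first False, which is why only the first run is captured; a minimal sketch on the sample data above (trimmed to six rows) shows this:

import pandas as pd

times = pd.to_datetime(['1/6/2018 00:00', '1/6/2018 01:00', '1/6/2018 02:00',
                        '1/6/2018 03:00', '1/6/2018 06:00', '1/6/2018 07:00'])
df = pd.DataFrame({'reading_time': times})

delta = pd.Timedelta(60, 'm')
consecutive = df.reading_time.diff().fillna(pd.Timedelta(0)).abs().le(delta)
# consecutive: True True True True False True
idx_loc = df.index.get_loc(consecutive.idxmin())   # -> 4, the first gap
print(df.iloc[:idx_loc])                           # only the first block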

Any help is appreciated.

1 Answer:

Answer 0 (score: 1)

import pandas as pd 
df = pd.DataFrame({
    'id': [1, 1, 1, 1, 1, 1, 1, 1, 1, 1],
    'reading_time': ['1/6/2018 00:00', '1/6/2018 01:00', '1/6/2018 02:00',
                     '1/6/2018 03:00', '1/6/2018 06:00', '1/6/2018 07:00',
                     '1/6/2018 09:00', '1/6/2018 10:00', '1/6/2018 14:00',
                     '1/6/2018 15:00'],
    'type': ['Interval'] * 10})
df['reading_time'] = pd.to_datetime(df['reading_time'])
df = df.set_index('reading_time')
df = df.asfreq('1H')
df = df.reset_index()
df['group'] = (pd.isnull(df['id']).astype(int).diff() != 0).cumsum()

result = df.groupby('group')['reading_time'].agg(['first','last','count'])
types = pd.Categorical(['Missed', 'Received'])
result['type'] = types[result.index % 2]

This yields

                    first                last  count      type
group                                                         
1     2018-01-06 00:00:00 2018-01-06 03:00:00      4  Received
2     2018-01-06 04:00:00 2018-01-06 05:00:00      2    Missed
3     2018-01-06 06:00:00 2018-01-06 07:00:00      2  Received
4     2018-01-06 08:00:00 2018-01-06 08:00:00      1    Missed
5     2018-01-06 09:00:00 2018-01-06 10:00:00      2  Received
6     2018-01-06 11:00:00 2018-01-06 13:00:00      3    Missed
7     2018-01-06 14:00:00 2018-01-06 15:00:00      2  Received

You can use asfreq to expand the DataFrame to include the missing rows:

df = df.set_index('reading_time')
df = df.asfreq('1H')
df = df.reset_index()

#           reading_time   id      type
# 0  2018-01-06 00:00:00  1.0  Interval
# 1  2018-01-06 01:00:00  1.0  Interval
# 2  2018-01-06 02:00:00  1.0  Interval
# 3  2018-01-06 03:00:00  1.0  Interval
# 4  2018-01-06 04:00:00  NaN       NaN
# 5  2018-01-06 05:00:00  NaN       NaN
# 6  2018-01-06 06:00:00  1.0  Interval
# 7  2018-01-06 07:00:00  1.0  Interval
# 8  2018-01-06 08:00:00  NaN       NaN
# 9  2018-01-06 09:00:00  1.0  Interval
# 10 2018-01-06 10:00:00  1.0  Interval
# 11 2018-01-06 11:00:00  NaN       NaN
# 12 2018-01-06 12:00:00  NaN       NaN
# 13 2018-01-06 13:00:00  NaN       NaN
# 14 2018-01-06 14:00:00  1.0  Interval
# 15 2018-01-06 15:00:00  1.0  Interval

Next, use the NaNs in the id column to identify the groups:

df['group'] = (pd.isnull(df['id']).astype(int).diff() != 0).cumsum()
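Unpacking that one-liner step by step (my breakdown, equivalent to the line above):

# 1 where the row was inserted by asfreq (reading missed), 0 where present
missing = pd.isnull(df['id']).astype(int)
# True at every transition between present and missing; the first row's
# NaN diff also compares unequal to 0, so group numbering starts at 1
boundary = missing.diff() != 0
# a running count of the transitions gives each block its own number
df['group'] = boundary.cumsum()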

Then group by the group values to find the first and last reading_time for each group:

result = df.groupby('group')['reading_time'].agg(['first','last','count'])

#                     first                last  count
# group                                               
# 1     2018-01-06 00:00:00 2018-01-06 03:00:00      4
# 2     2018-01-06 04:00:00 2018-01-06 05:00:00      2
# 3     2018-01-06 06:00:00 2018-01-06 07:00:00      2
# 4     2018-01-06 08:00:00 2018-01-06 08:00:00      1
# 5     2018-01-06 09:00:00 2018-01-06 10:00:00      2
# 6     2018-01-06 11:00:00 2018-01-06 13:00:00      3
# 7     2018-01-06 14:00:00 2018-01-06 15:00:00      2

Since the Missed and Received values alternate, they can be generated from the group index:

types = pd.Categorical(['Missed', 'Received'])
result['type'] = types[result.index % 2]
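The parity trick relies on the first group always being a 'Received' block, which holds here because the data starts with an observed reading. As a variation of my own (not strictly needed here), the label can instead be read off whether the group's id is missing:

import numpy as np

# groups whose id is entirely NaN were created by asfreq, i.e. missed
result['type'] = np.where(df.groupby('group')['id'].first().isnull(),
                          'Missed', 'Received')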

To handle a different frequency for each id, you could use:

import pandas as pd 
df = pd.DataFrame({
    'id': [1, 1, 1, 1, 1, 2, 2, 2, 2, 2],
    'reading_time': ['1/6/2018 00:00', '1/6/2018 01:00', '1/6/2018 02:00',
                     '1/6/2018 03:00', '1/6/2018 06:00', '1/6/2018 07:00',
                     '1/6/2018 09:00', '1/6/2018 10:00', '1/6/2018 14:00',
                     '1/6/2018 15:00'],
    'type': ['Interval'] * 10})

df['reading_time'] = pd.to_datetime(df['reading_time'])
df = df.sort_values(by='reading_time')
df = df.set_index('reading_time')
# map each id to its expected reporting frequency
freqmap = {1: '1H', 2: '15T'}
# fill in missing rows per id, each at that id's own frequency
df = df.groupby('id', group_keys=False).apply(
    lambda grp: grp.asfreq(freqmap[grp['id'].iloc[0]]))
df = df.reset_index(level='reading_time')

df['group'] = (pd.isnull(df['id']).astype(int).diff() != 0).cumsum()
grouped = df.groupby('group')
result = grouped['reading_time'].agg(['first','last','count'])
result['id'] = grouped['id'].agg('first')
types = pd.Categorical(['Missed', 'Received'])
result['type'] = types[result.index % 2]

which yields

                    first                last  count   id      type
group                                                              
1     2018-01-06 00:00:00 2018-01-06 03:00:00      4  1.0  Received
2     2018-01-06 04:00:00 2018-01-06 05:00:00      2  NaN    Missed
3     2018-01-06 06:00:00 2018-01-06 07:00:00      2  1.0  Received
4     2018-01-06 07:15:00 2018-01-06 08:45:00      7  NaN    Missed
5     2018-01-06 09:00:00 2018-01-06 09:00:00      1  2.0  Received
6     2018-01-06 09:15:00 2018-01-06 09:45:00      3  NaN    Missed
7     2018-01-06 10:00:00 2018-01-06 10:00:00      1  2.0  Received
8     2018-01-06 10:15:00 2018-01-06 13:45:00     15  NaN    Missed
9     2018-01-06 14:00:00 2018-01-06 14:00:00      1  2.0  Received
10    2018-01-06 14:15:00 2018-01-06 14:45:00      3  NaN    Missed
11    2018-01-06 15:00:00 2018-01-06 15:00:00      1  2.0  Received

Arguably the 'Missed' rows should not be associated with any id, but to bring the result closer to the one you posted, you can forward-fill the NaN id values with ffill:

result['id'] = result['id'].ffill()

which changes the result to

                    first                last  count  id      type
group                                                             
1     2018-01-06 00:00:00 2018-01-06 03:00:00      4   1  Received
2     2018-01-06 04:00:00 2018-01-06 05:00:00      2   1    Missed
3     2018-01-06 06:00:00 2018-01-06 07:00:00      2   1  Received
4     2018-01-06 07:15:00 2018-01-06 08:45:00      7   1    Missed
5     2018-01-06 09:00:00 2018-01-06 09:00:00      1   2  Received
6     2018-01-06 09:15:00 2018-01-06 09:45:00      3   2    Missed
7     2018-01-06 10:00:00 2018-01-06 10:00:00      1   2  Received
8     2018-01-06 10:15:00 2018-01-06 13:45:00     15   2    Missed
9     2018-01-06 14:00:00 2018-01-06 14:00:00      1   2  Received
10    2018-01-06 14:15:00 2018-01-06 14:45:00      3   2    Missed
11    2018-01-06 15:00:00 2018-01-06 15:00:00      1   2  Received
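
Finally, to match the column names in the desired output, a rename along these lines could be applied (my addition; the mapping is inferred from the output posted in the question):

result = result.rename(columns={'first': 'first_reading_time',
                                'last': 'last_reading_time',
                                'count': 'number_of_records'})
result = result[['id', 'first_reading_time', 'last_reading_time',
                 'number_of_records', 'type']].reset_index(drop=True)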