Pandas: filter a DataFrame by the time intervals of another DataFrame

Date: 2021-02-02 23:31:18

Tags: python pandas dataframe time filter

If I have a DataFrame (df_data) such as:

ID        Time                X        Y        Z        H
05  2020-06-26 14:13:16    0.055    0.047    0.039    0.062
05  2020-06-26 14:13:21    0.063    0.063    0.055    0.079
05  2020-06-26 14:13:26    0.063    0.063    0.063    0.079
05  2020-06-26 14:13:31    0.095    0.102    0.079    0.127
...    ..    ...     ...     ...      ...      ...      ...
01  2020-07-01 08:59:43    0.063    0.063    0.047    0.079
01  2020-07-01 08:59:48    0.055    0.055    0.055    0.079
01  2020-07-01 08:59:53    0.071    0.063    0.055    0.082
01  2020-07-01 08:59:58    0.063    0.063    0.047    0.082
01  2020-07-01 08:59:59    0.047    0.047    0.047    0.071

[17308709 rows x 8 columns]

I want to filter it by the intervals of another DataFrame (df_intervals), such as:

int_id         start               end
1            2020-02-03 18:11:59   2020-02-03 18:42:00
2            2020-02-03 19:36:59   2020-02-03 20:06:59
3            2020-02-03 21:00:59   2020-02-03 21:31:00
4            2020-02-03 22:38:00   2020-02-03 23:08:00
5            2020-02-04 05:55:00   2020-02-04 06:24:59
...                         ...                   ...
1804         2021-01-10 13:50:00   2021-01-10 14:20:00
1805         2021-01-10 18:10:00   2021-01-10 18:40:00
1806         2021-01-10 19:40:00   2021-01-10 20:10:00
1807         2021-01-10 21:25:00   2021-01-10 21:55:00
1808         2021-01-10 22:53:00   2021-01-10 23:23:00

[1808 rows x 2 columns]

What is the most efficient way to do this? I have a large dataset, and if I try to iterate over it, for example:

for i in range(len(intervals)):
    df_filtered = df[df['Time'].between(intervals['start'][i], intervals['end'][i])]
    ...
    ...
    ...

it takes forever! I know I shouldn't iterate over a large DataFrame, but I don't know how to filter it by each interval of the second DataFrame.

The steps I am trying to do are:

1- Get all the intervals (start/end columns) from df_intervals;

2- Using those intervals, create a new DataFrame (df_stats) containing statistics for the columns within those time ranges. Example:

      start                  end             ID    X_max    X_min    X_mean    Y_max    Y_min    Y_mean    ....
2020-02-03 18:11:59   2020-02-03 18:42:00    01    ...    ...    ...     ...   ...    ...    ...     ...
2020-02-03 18:11:59   2020-02-03 18:42:00    02    ...    ...    ...     ...   ...    ...    ...     ...
2020-02-03 18:11:59   2020-02-03 18:42:00    03    ...    ...    ...     ...   ...    ...    ...     ...
2020-02-03 18:11:59   2020-02-03 18:42:00    04    ...    ...    ...     ...   ...    ...    ...     ...
2020-02-03 18:11:59   2020-02-03 18:42:00    05    ...    ...    ...     ...   ...    ...    ...     ...
2020-02-03 19:36:59   2020-02-03 20:06:59    01    ...    ...    ...     ...   ...    ...    ...     ...
2020-02-03 19:36:59   2020-02-03 20:06:59    02    ...    ...    ...     ...   ...    ...    ...     ...
2020-02-03 19:36:59   2020-02-03 20:06:59    03    ...    ...    ...     ...   ...    ...    ...     ...

2 Answers:

Answer 0 (score: 1)

Here is the complete code to accomplish this. I created some sample data to check that it works. Please run it against your full dataset and see whether it gives you the desired result.

  1. Step 1: Create a temporary list to store the intermediate DataFrames.

    temp_list = []

  2. Step 2: Iterate over DataFrame 2. For each row, do the following:

    • Filter the rows of DataFrame 1 between the start and end dates

      temp = df1[df1.Time.between(row.start,row.end)]

    • Group by ID and get the statistics for X, Y, Z and H, one groupby per column

      x = temp.groupby('ID')['X'].agg(['min', 'max', 'mean', 'median']).add_prefix('X_').reset_index()

    • Merge all the X, Y, Z, H pieces into a single DataFrame.

    • Add the start and end dates to the merged DataFrame

    • Append that DataFrame to temp_list

  3. Step 3: Build the final DataFrame from temp_list

  4. Step 4: Rearrange the columns as needed: start and end dates as the first two columns, then ID, then the X values, Y values, Z values, and finally the H values.

  5. Step 5: Print the DataFrame

The complete code to do this:

c1 = ['ID','Time','X','Y','Z','H']
d1 = [
['01','2020-02-03 18:13:16',0.011,0.012,0.013,0.014],
['01','2020-02-03 18:13:21',0.015,0.016,0.017,0.018],
['01','2020-02-03 18:13:26',0.013,0.013,0.013,0.013],
['01','2020-02-03 18:13:31',0.015,0.015,0.015,0.015],
     
['02','2020-02-03 18:13:16',0.021,0.022,0.023,0.024],
['02','2020-02-03 18:13:21',0.025,0.026,0.027,0.028],
['02','2020-02-03 18:13:26',0.023,0.023,0.023,0.023],
['02','2020-02-03 18:13:31',0.025,0.025,0.025,0.025],
     
['03','2020-02-03 18:13:16',0.031,0.032,0.033,0.034],
['03','2020-02-03 18:13:21',0.035,0.036,0.037,0.038],
['03','2020-02-03 18:13:26',0.033,0.033,0.033,0.033],
['03','2020-02-03 18:13:31',0.035,0.035,0.035,0.035],

['04','2020-02-03 18:13:16',0.041,0.042,0.043,0.044],
['04','2020-02-03 18:13:21',0.045,0.046,0.047,0.048],
['04','2020-02-03 18:13:26',0.043,0.043,0.043,0.043],
['04','2020-02-03 18:13:31',0.045,0.045,0.045,0.045],
     
['05','2020-02-03 18:13:16',0.055,0.047,0.039,0.062],
['05','2020-02-03 18:13:21',0.063,0.063,0.055,0.079],
['05','2020-02-03 18:13:26',0.063,0.063,0.063,0.079],
['05','2020-02-03 18:13:31',0.095,0.102,0.079,0.127],
     
['01','2020-02-03 20:03:16',0.011,0.012,0.013,0.014],
['01','2020-02-03 20:03:21',0.015,0.016,0.017,0.018],
['01','2020-02-03 20:03:26',0.013,0.013,0.013,0.013],
['01','2020-02-03 20:03:31',0.015,0.015,0.015,0.015],
     
['02','2020-02-03 20:03:16',0.021,0.022,0.023,0.024],
['02','2020-02-03 20:03:21',0.025,0.026,0.027,0.028],
['02','2020-02-03 20:03:26',0.023,0.023,0.023,0.023],
['02','2020-02-03 20:03:31',0.025,0.025,0.025,0.025],
     
['03','2020-02-03 20:03:16',0.031,0.032,0.033,0.034],
['03','2020-02-03 20:03:21',0.035,0.036,0.037,0.038],
['03','2020-02-03 20:03:26',0.033,0.033,0.033,0.033],
['03','2020-02-03 20:03:31',0.035,0.035,0.035,0.035],

['04','2020-02-03 20:03:16',0.041,0.042,0.043,0.044],
['04','2020-02-03 20:03:21',0.045,0.046,0.047,0.048],
['04','2020-02-03 20:03:26',0.043,0.043,0.043,0.043],
['04','2020-02-03 20:03:31',0.045,0.045,0.045,0.045],
     
['05','2020-02-03 20:03:16',0.055,0.047,0.039,0.062],
['05','2020-02-03 20:03:21',0.063,0.063,0.055,0.079],
['05','2020-02-03 20:03:26',0.063,0.063,0.063,0.079],
['05','2020-02-03 20:03:31',0.095,0.102,0.079,0.127],
     
['01','2020-07-01 08:59:43',0.063,0.063,0.047,0.079],
['01','2020-07-01 08:59:48',0.055,0.055,0.055,0.079],
['01','2020-07-01 08:59:53',0.071,0.063,0.055,0.082],
['01','2020-07-01 08:59:58',0.063,0.063,0.047,0.082],
['01','2020-07-01 08:59:59',0.047,0.047,0.047,0.071]]

import pandas as pd
df1 = pd.DataFrame(d1,columns=c1)
df1.Time = pd.to_datetime(df1.Time)

c2 = ['int_id','start','end']
d2 = [[1,'2020-02-03 18:11:59','2020-02-03 18:42:00'],
[2,'2020-02-03 19:36:59','2020-02-03 20:06:59'],
[3,'2020-02-03 21:00:59','2020-02-03 21:31:00'],
[4,'2020-02-03 22:38:00','2020-02-03 23:08:00'],
[5,'2020-02-04 05:55:00','2020-02-04 06:24:59'],
[1804,'2021-01-10 13:50:00','2021-01-10 14:20:00'],
[1805,'2021-01-10 18:10:00','2021-01-10 18:40:00'],
[1806,'2021-01-10 19:40:00','2021-01-10 20:10:00'],
[1807,'2021-01-10 21:25:00','2021-01-10 21:55:00'],
[1808,'2021-01-10 22:53:00','2021-01-10 23:23:00']]

import pandas as pd
from functools import reduce

df2 = pd.DataFrame(d2,columns=c2)

df2.start = pd.to_datetime(df2.start)
df2.end = pd.to_datetime(df2.end)

temp_list = []

for i, row in df2.iterrows():

    temp = df1[df1.Time.between(row.start,row.end)]

    x = temp.groupby('ID')['X'].agg(['min','max','mean','median']).add_prefix('X_').reset_index()
    y = temp.groupby('ID')['Y'].agg(['min','max','mean','median']).add_prefix('Y_').reset_index()
    z = temp.groupby('ID')['Z'].agg(['min','max','mean','median']).add_prefix('Z_').reset_index()
    h = temp.groupby('ID')['H'].agg(['min','max','mean','median']).add_prefix('H_').reset_index()

    data_frames = [x,y,z,h]

    df_merged = reduce(lambda left,right: pd.merge(left,right,on=['ID'],
                            how='outer'), data_frames).fillna('void')

    df_merged['start'] = row.start
    df_merged['end'] = row.end
    
    temp_list.append(df_merged)


df_final = pd.concat(temp_list, ignore_index=True)

column_names = ['start','end','ID',
                    'X_min','X_max','X_mean','X_median',
                    'Y_min','Y_max','Y_mean','Y_median',
                    'Z_min','Z_max','Z_mean','Z_median',
                    'H_min','H_max','H_mean','H_median']

df_final = df_final[column_names]

print (df_final)

The output is:

                start                 end  ID  ...  H_max   H_mean  H_median
0 2020-02-03 18:11:59 2020-02-03 18:42:00  01  ...  0.018  0.01500    0.0145
1 2020-02-03 18:11:59 2020-02-03 18:42:00  02  ...  0.028  0.02500    0.0245
2 2020-02-03 18:11:59 2020-02-03 18:42:00  03  ...  0.038  0.03500    0.0345
3 2020-02-03 18:11:59 2020-02-03 18:42:00  04  ...  0.048  0.04500    0.0445
4 2020-02-03 18:11:59 2020-02-03 18:42:00  05  ...  0.127  0.08675    0.0790
5 2020-02-03 19:36:59 2020-02-03 20:06:59  01  ...  0.018  0.01500    0.0145
6 2020-02-03 19:36:59 2020-02-03 20:06:59  02  ...  0.028  0.02500    0.0245
7 2020-02-03 19:36:59 2020-02-03 20:06:59  03  ...  0.038  0.03500    0.0345
8 2020-02-03 19:36:59 2020-02-03 20:06:59  04  ...  0.048  0.04500    0.0445
9 2020-02-03 19:36:59 2020-02-03 20:06:59  05  ...  0.127  0.08675    0.0790
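
As a side note (not part of the original answer), the four per-column groupby calls and the reduce/merge step inside the loop could presumably be collapsed into a single multi-column aggregation. A minimal sketch, reusing df1, df2 and temp_list from the code above:

temp_list = []

for i, row in df2.iterrows():
    # filter the rows that fall inside the current interval
    temp = df1[df1.Time.between(row.start, row.end)]

    # aggregate all four value columns at once; the result has MultiIndex columns
    stats = temp.groupby('ID')[['X', 'Y', 'Z', 'H']].agg(['min', 'max', 'mean', 'median'])
    stats.columns = ['_'.join(col) for col in stats.columns]   # e.g. 'X_min', 'X_max', ...
    stats = stats.reset_index()

    stats['start'] = row.start
    stats['end'] = row.end
    temp_list.append(stats)

df_final = pd.concat(temp_list, ignore_index=True)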

Answer 1 (score: 1)

If Joe's answer doesn't give you the speed you want, I think it can be improved by moving the statistics calculation out of the for loop. (I'm stealing his df creation, since he was the hero who put it into an answer.) Ideally you would get rid of the for loop as well, but I think the duplicated timestamps (across ID numbers) make it tricky to merge the two DataFrames directly.

Here is my attempt, which still uses iteration over the start/end times. First, I apply int_id to the parent df. Adding it to the parent DataFrame means the groupby can be done without creating a "temp" DataFrame and running the stats on it.

for index, row in df2.iterrows():
    
    df1.loc[df1.Time.between(row.start,row.end), 'int_id'] = row.int_id

    ID                Time      X      Y      Z      H  int_id
0   01 2020-02-03 18:13:16  0.011  0.012  0.013  0.014     1.0
1   01 2020-02-03 18:13:21  0.015  0.016  0.017  0.018     1.0
2   01 2020-02-03 18:13:26  0.013  0.013  0.013  0.013     1.0
3   01 2020-02-03 18:13:31  0.015  0.015  0.015  0.015     1.0
4   02 2020-02-03 18:13:16  0.021  0.022  0.023  0.024     1.0
5   02 2020-02-03 18:13:21  0.025  0.026  0.027  0.028     1.0
6   02 2020-02-03 18:13:26  0.023  0.023  0.023  0.023     1.0
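
If the intervals never overlap, this assignment loop could presumably be replaced by a vectorized lookup with pd.IntervalIndex (a sketch, not part of the original answer; it assumes non-overlapping intervals):

import numpy as np

# build an IntervalIndex from the start/end columns of df2
bins = pd.IntervalIndex.from_arrays(df2['start'], df2['end'], closed='both')
pos = bins.get_indexer(df1['Time'])   # -1 where Time falls outside every interval
df1['int_id'] = np.where(pos >= 0, df2['int_id'].to_numpy()[pos], np.nan)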

Then I define the aggregations so that everything is computed in one shot after the loop completes.

aggs = {'X':['sum', 'max', 'mean', 'median'], 
        'Y':['sum', 'max', 'mean', 'median'], 
        'Z':['sum', 'max', 'mean', 'median'], 
        'H':['sum', 'max', 'mean', 'median']}

df_final = df1.groupby('int_id').agg(aggs)

            X                            Y                             Z                            H                        
          sum    max    mean median    sum    max     mean median    sum    max    mean median    sum    max     mean  median
int_id                                                                                                                       
1.0     0.732  0.095  0.0366  0.034  0.739  0.102  0.03695  0.034  0.708  0.079  0.0354  0.034  0.827  0.127  0.04135  0.0345
2.0     0.732  0.095  0.0366  0.034  0.739  0.102  0.03695  0.034  0.708  0.079  0.0354  0.034  0.827  0.127  0.04135  0.0345

Note: the columns here have a MultiIndex. You can flatten them as follows.

df_final.columns = ['_'.join(col).strip() for col in df_final.columns.values]

        X_sum  X_max  X_mean  X_median  Y_sum  Y_max   Y_mean  Y_median  Z_sum  Z_max  Z_mean  Z_median  H_sum  H_max   H_mean  H_median
int_id                                                                                                                                  
1.0     0.732  0.095  0.0366     0.034  0.739  0.102  0.03695     0.034  0.708  0.079  0.0354     0.034  0.827  0.127  0.04135    0.0345
2.0     0.732  0.095  0.0366     0.034  0.739  0.102  0.03695     0.034  0.708  0.079  0.0354     0.034  0.827  0.127  0.04135    0.0345
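
Note that grouping on int_id alone gives one row per interval across all IDs; to reproduce the per-ID layout of the desired df_stats, the same aggregation could presumably be grouped on both keys and joined back to the start/end times. A sketch, not part of the original answer, reusing df1, df2 and aggs from above:

df_stats = (df1.dropna(subset=['int_id'])      # keep only rows that fell inside some interval
               .astype({'int_id': 'int64'})
               .groupby(['int_id', 'ID'])
               .agg(aggs))
df_stats.columns = ['_'.join(col) for col in df_stats.columns]
df_stats = df_stats.reset_index().merge(df2[['int_id', 'start', 'end']], on='int_id')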