Slow pandas DataFrame MultiIndex reindex

Date: 2019-12-05 03:23:33

Tags: python pandas numpy dataframe

I have a pandas DataFrame in the following format:

                       id                start_time  sequence_no    value
0                      71 2018-10-17 20:12:43+00:00       114428        3
1                      71 2018-10-17 20:12:43+00:00       114429        3
2                      71 2018-10-17 20:12:43+00:00       114431       79
3                      71 2019-11-06 00:51:14+00:00       216009      100
4                      71 2019-11-06 00:51:14+00:00       216011      150
5                      71 2019-11-06 00:51:14+00:00       216013      180
6                      92 2019-12-01 00:51:14+00:00       114430       19
7                      92 2019-12-01 00:51:14+00:00       114433       79
8                      92 2019-12-01 00:51:14+00:00       114434      100

What I'm trying to do is fill in the missing sequence_no values for each id / start_time combination. For example, the id / start_time pairing of 71 and 2018-10-17 20:12:43+00:00 is missing sequence_no 114430. For each missing sequence_no that gets added, I also need to linearly interpolate the missing value column. The final result for the data above should therefore look like this (** marks the newly inserted rows, added for easier reading):

                       id                start_time  sequence_no    value
0                      71 2018-10-17 20:12:43+00:00       114428        3
1                      71 2018-10-17 20:12:43+00:00       114429        3
2                      71 2018-10-17 20:12:43+00:00       114430       41  **
3                      71 2018-10-17 20:12:43+00:00       114431       79
4                      71 2019-11-06 00:51:14+00:00       216009      100  
5                      71 2019-11-06 00:51:14+00:00       216010      125  **
6                      71 2019-11-06 00:51:14+00:00       216011      150
7                      71 2019-11-06 00:51:14+00:00       216012      165  **
8                      71 2019-11-06 00:51:14+00:00       216013      180
9                      92 2019-12-01 00:51:14+00:00       114430       19
10                     92 2019-12-01 00:51:14+00:00       114431       39  **
11                     92 2019-12-01 00:51:14+00:00       114432       59  **
12                     92 2019-12-01 00:51:14+00:00       114433       79
13                     92 2019-12-01 00:51:14+00:00       114434      100

My original solution relied heavily on Python loops over the large data table, so it seemed like the ideal place for numpy and pandas to shine. Leaning on answers like Pandas: create rows to fill numeric gaps, I came up with a vectorized approach: compute the min/max sequence_no for each id / start_time group, expand each range into the full sequence, reindex the DataFrame against that expanded MultiIndex, and interpolate (the same construction shown in the update code below). The output is correct, but it runs almost as slowly as my loop-heavy solution. I'm sure a few steps could be trimmed, but the slowest part in testing appears to be the reindex itself. Given that the real-world data runs to almost a million rows (and is operated on frequently), are there any obvious ways to get better performance than what I've already written? Is there any way to speed up this transformation?

Update 9/12/2019

When tested on a sufficiently large dataset, combining the merge solution from this answer with the original construction of the expanded DataFrame gives the fastest results so far:

import pandas as pd
import numpy as np

# Generate dummy data
df = pd.DataFrame([
    (71, '2018-10-17 20:12:43+00:00', 114428, 3),
    (71, '2018-10-17 20:12:43+00:00', 114429, 3),
    (71, '2018-10-17 20:12:43+00:00', 114431, 79),
    (71, '2019-11-06 00:51:14+00:00', 216009, 100),
    (71, '2019-11-06 00:51:14+00:00', 216011, 150),
    (71, '2019-11-06 00:51:14+00:00', 216013, 180),
    (92, '2019-12-01 00:51:14+00:00', 114430, 19),
    (92, '2019-12-01 00:51:14+00:00', 114433, 79),
    (92, '2019-12-01 00:51:14+00:00', 114434, 100),   
], columns=['id', 'start_time', 'sequence_no', 'value'])

# create a new DataFrame with the min/max `sequence_no` values for each `id`/`start_time` pairing
by_start = df.groupby(['start_time', 'id'])
ranges = by_start.agg(
    sequence_min=('sequence_no', np.min), sequence_max=('sequence_no', np.max)
)
reset = ranges.reset_index()

mins = reset['sequence_min']
maxes = reset['sequence_max']

# Use those min/max values to generate a sequence with ALL values in that range
expanded = pd.DataFrame(dict(
    start_time=reset['start_time'].repeat(maxes - mins + 1),
    id=reset['id'].repeat(maxes - mins + 1),
    sequence_no=np.concatenate([np.arange(mins, maxes + 1) for mins, maxes in zip(mins, maxes)])
))

# Use the above generated DataFrame as an index to generate the missing rows, then interpolate
expanded_index = pd.MultiIndex.from_frame(expanded)
df.set_index(
    ['start_time', 'id', 'sequence_no']
).reindex(expanded_index).interpolate()
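
For reference, here is a minimal sketch of what combining the two pieces could look like once the merge answer below is taken into account: the final set_index / reindex step above is swapped for a left merge against the expanded frame, and only the value column is interpolated. The name merged is illustrative; expanded and df are the objects built in the snippet above.

# Sketch only: replace the reindex step with a left merge against the expanded
# frame (mirroring the merge-based answer below), then interpolate the gaps.
merged = expanded.merge(df, on=['start_time', 'id', 'sequence_no'], how='left')
merged['value'] = merged['value'].interpolate()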

3 Answers:

Answer 0 (score: 7):

Using merge instead of reindex speeds things up. In addition, you can use map instead of a list comprehension.

import pandas as pd
import numpy as np

# Generate dummy data
df = pd.DataFrame([
    (71, '2018-10-17 20:12:43+00:00', 114428, 3),
    (71, '2018-10-17 20:12:43+00:00', 114429, 3),
    (71, '2018-10-17 20:12:43+00:00', 114431, 79),
    (71, '2019-11-06 00:51:14+00:00', 216009, 100),
    (71, '2019-11-06 00:51:14+00:00', 216011, 150),
    (71, '2019-11-06 00:51:14+00:00', 216013, 180),
    (92, '2019-12-01 00:51:14+00:00', 114430, 19),
    (92, '2019-12-01 00:51:14+00:00', 114433, 79),
    (92, '2019-12-01 00:51:14+00:00', 114434, 100),   
], columns=['id', 'start_time', 'sequence_no', 'value'])

# create a ranges DataFrame with groupby and agg
ranges = df.groupby(['start_time', 'id'])['sequence_no'].agg(
    [('sequence_min', np.min), ('sequence_max', np.max)]
)
# map with range to create the full sequence_no range for each group
ranges['sequence_no'] = list(
    map(lambda x, y: range(x, y), ranges.pop('sequence_min'), ranges.pop('sequence_max') + 1)
)
# explode the DataFrame so each sequence_no gets its own row
new_df = ranges.explode('sequence_no')
# merge new_df and df
merge = new_df.reset_index().merge(df, on=['start_time', 'id', 'sequence_no'], how='left')
# interpolate and assign values
merge['value'] = merge['value'].interpolate()

                   start_time  id sequence_no  value
0   2018-10-17 20:12:43+00:00  71      114428    3.0
1   2018-10-17 20:12:43+00:00  71      114429    3.0
2   2018-10-17 20:12:43+00:00  71      114430   41.0
3   2018-10-17 20:12:43+00:00  71      114431   79.0
4   2019-11-06 00:51:14+00:00  71      216009  100.0
5   2019-11-06 00:51:14+00:00  71      216010  125.0
6   2019-11-06 00:51:14+00:00  71      216011  150.0
7   2019-11-06 00:51:14+00:00  71      216012  165.0
8   2019-11-06 00:51:14+00:00  71      216013  180.0
9   2019-12-01 00:51:14+00:00  92      114430   19.0
10  2019-12-01 00:51:14+00:00  92      114431   39.0
11  2019-12-01 00:51:14+00:00  92      114432   59.0
12  2019-12-01 00:51:14+00:00  92      114433   79.0
13  2019-12-01 00:51:14+00:00  92      114434  100.0
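
One small caveat, not part of the original answer: explode typically leaves sequence_no as an object-dtype column, so depending on the pandas version it can be worth casting it back to an integer type before the merge to keep the join keys' dtypes consistent, for example:

# Optional (assumption): explode usually yields an object-dtype column; casting
# back to int64 keeps the merge key the same dtype as df['sequence_no'].
new_df['sequence_no'] = new_df['sequence_no'].astype('int64')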

Answer 1 (score: 3):

A simplified version of the merge solution:

df.groupby(['start_time', 'id'])['sequence_no']\
    .apply(lambda x: np.arange(x.min(), x.max() + 1))\
    .explode().reset_index()\
    .merge(df, on=['start_time', 'id', 'sequence_no'], how='left')\
    .interpolate()

Output:

                   start_time  id sequence_no  value
0   2018-10-17 20:12:43+00:00  71      114428    3.0
1   2018-10-17 20:12:43+00:00  71      114429    3.0
2   2018-10-17 20:12:43+00:00  71      114430   41.0
3   2018-10-17 20:12:43+00:00  71      114431   79.0
4   2019-11-06 00:51:14+00:00  71      216009  100.0
5   2019-11-06 00:51:14+00:00  71      216010  125.0
6   2019-11-06 00:51:14+00:00  71      216011  150.0
7   2019-11-06 00:51:14+00:00  71      216012  165.0
8   2019-11-06 00:51:14+00:00  71      216013  180.0
9   2019-12-01 00:51:14+00:00  92      114430   19.0
10  2019-12-01 00:51:14+00:00  92      114431   39.0
11  2019-12-01 00:51:14+00:00  92      114432   59.0
12  2019-12-01 00:51:14+00:00  92      114433   79.0
13  2019-12-01 00:51:14+00:00  92      114434  100.0
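
If you only want the value column interpolated (mirroring the explicit assignment in the earlier answer) rather than every numeric column of the merged frame, the end of the chain can be split out, for example (result is just an illustrative name):

# Same pipeline, but interpolating only the value column at the end.
result = (
    df.groupby(['start_time', 'id'])['sequence_no']
      .apply(lambda x: np.arange(x.min(), x.max() + 1))
      .explode().reset_index()
      .merge(df, on=['start_time', 'id', 'sequence_no'], how='left')
)
result['value'] = result['value'].interpolate()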

Answer 2 (score: 1):

Another solution, using reindex without explode:

result = (df.groupby(["id","start_time"])
          .apply(lambda d: d.set_index("sequence_no")
          .reindex(range(min(d["sequence_no"]),max(d["sequence_no"])+1)))
          .drop(["id","start_time"],axis=1).reset_index()
          .interpolate())

print(result)

    id                 start_time  sequence_no  value
0   71  2018-10-17 20:12:43+00:00       114428    3.0
1   71  2018-10-17 20:12:43+00:00       114429    3.0
2   71  2018-10-17 20:12:43+00:00       114430   41.0
3   71  2018-10-17 20:12:43+00:00       114431   79.0
4   71  2019-11-06 00:51:14+00:00       216009  100.0
5   71  2019-11-06 00:51:14+00:00       216010  125.0
6   71  2019-11-06 00:51:14+00:00       216011  150.0
7   71  2019-11-06 00:51:14+00:00       216012  165.0
8   71  2019-11-06 00:51:14+00:00       216013  180.0
9   92  2019-12-01 00:51:14+00:00       114430   19.0
10  92  2019-12-01 00:51:14+00:00       114431   39.0
11  92  2019-12-01 00:51:14+00:00       114432   59.0
12  92  2019-12-01 00:51:14+00:00       114433   79.0
13  92  2019-12-01 00:51:14+00:00       114434  100.0