Pandas function run on a groupby() object far more times than there are groups

Date: 2019-03-12 15:37:56

Tags: python pandas

I've inherited some pandas code that I'm trying to optimize. A DataFrame, results, was created with
results = pd.DataFrame(columns=['plan','volume','avg_denial_increase','std_dev_impact', 'avg_idr_increase', 'std_dev_idr_increase'])
for plan in my_df['plan_name'].unique():
    df1 = my_df[my_df['plan_name'] == plan].copy()
    df1['volume'].fillna(0, inplace=True)
    df1['change'] = df1['idr'] - df1['idr'].shift(1)
    df1['change'].fillna(0, inplace=True)
    df1['impact'] = df1['change'] * df1['volume']
    describe_impact = df1['impact'].describe()
    describe_change = df1['change'].describe()
    results = results.append({'plan': plan,
                              'volume': df1['volume'].mean(),
                              'avg_denial_increase': describe_impact['mean'],
                              'std_dev_impact': describe_impact['std'],
                              'avg_idr_increase': describe_change['mean'],
                              'std_dev_idr_increase': describe_change['std']}, 
                             ignore_index=True)

My first thought was to move everything inside the for loop into a separate function, get_results_for_plan, and use pandas' groupby() and apply() methods. But that turned out to be much slower. Running

%lprun -f get_results_for_plan my_df.groupby('plan_name', sort=False, as_index=False).apply(get_results_for_plan)

returns

Timer unit: 1e-06 s

Total time: 0.77167 s
File: <ipython-input-46-7c36b3902812>
Function: get_results_for_plan at line 1

Line #      Hits         Time  Per Hit   % Time  Line Contents
==============================================================
     1                                           def get_results_for_plan(plan_df):
     2        94      33221.0    353.4      4.3      plan = plan_df.iloc[0]['plan_name']
     3        94      25901.0    275.5      3.4      plan_df['volume'].fillna(0, inplace=True)
     4        94      75765.0    806.0      9.8      plan_df['change'] = plan_df['idr'] - plan_df['idr'].shift(1)
     5        93      38653.0    415.6      5.0      plan_df['change'].fillna(0, inplace=True)
     6        93      57088.0    613.8      7.4      plan_df['impact'] = plan_df['change'] * plan_df['volume']
     7        93     204828.0   2202.5     26.5      describe_impact = plan_df['impact'].describe()
     8        93     201127.0   2162.7     26.1      describe_change = plan_df['change'].describe()
     9        93        129.0      1.4      0.0      return pd.DataFrame({'plan': plan,
    10        93      21703.0    233.4      2.8                           'volume': plan_df['volume'].mean(),
    11        93       4291.0     46.1      0.6                           'avg_denial_increase': describe_impact['mean'],
    12        93       1957.0     21.0      0.3                           'std_dev_impact': describe_impact['std'],
    13        93       2912.0     31.3      0.4                           'avg_idr_increase': describe_change['mean'],
    14        93       1783.0     19.2      0.2                           'std_dev_idr_increase': describe_change['std']},
    15        93     102312.0   1100.1     13.3                         index=[0])

The most obvious problem I see is the hit count on each line. The number of groups, as given by

len(my_df.groupby('plan_name', sort=False, as_index=False).groups)

is 72. So why are those lines being hit 94 or 93 times each? (This may be related to this question, but in that case I'd expect the hit count to be num_groups + 1.)
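One way to investigate this directly is to count the calls yourself: on some pandas versions, groupby().apply() invokes the function an extra time on the first group in order to decide how to assemble the result, so the call count can exceed the group count. A minimal probe, using a made-up sample frame (not the question's data):

```python
import pandas as pd

# Made-up sample data, just to count apply() invocations
df = pd.DataFrame({'plan_name': ['x', 'x', 'y'], 'idr': [1.0, 2.0, 3.0]})

n_calls = 0
def probe(group):
    global n_calls
    n_calls += 1          # record every invocation of the applied function
    return group['idr'].mean()

df.groupby('plan_name').apply(probe)
n_groups = len(df.groupby('plan_name').groups)
# On older pandas, n_calls can come out greater than n_groups because
# apply() evaluates the first group twice to infer the result shape.
print(n_calls, n_groups)
```

The exact excess is version-dependent, so comparing n_calls against n_groups on your own pandas version is the reliable check.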

Update: removing {{1}} from the groupby() call in the %lprun invocation above drops the hits for lines 2-6 to 80 and the rest to 79. That's still more hits than I'd expect, but somewhat better.

Second question: is there a better way to optimize this particular code?
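On the second question, one common approach is to drop the per-group Python function entirely: compute change and impact as vectorized column operations (with a per-group diff), then collect all the statistics in a single groupby().agg() pass. A sketch, assuming my_df has the plan_name, idr and volume columns shown above (named aggregation requires pandas >= 0.25; the sample frame here is made up):

```python
import pandas as pd

# Made-up stand-in for my_df, with the columns the question uses
my_df = pd.DataFrame({
    'plan_name': ['a', 'a', 'a', 'b', 'b'],
    'idr':       [1.0, 1.5, 1.2, 2.0, 2.6],
    'volume':    [10,  None, 30, 40, 50],
})

df = my_df.copy()
df['volume'] = df['volume'].fillna(0)
# diff() within each plan; the first row of each group becomes 0,
# matching the shift(1)-then-fillna(0) logic in the loop
df['change'] = df.groupby('plan_name')['idr'].diff().fillna(0)
df['impact'] = df['change'] * df['volume']

results = (df.groupby('plan_name', sort=False)
             .agg(volume=('volume', 'mean'),
                  avg_denial_increase=('impact', 'mean'),
                  std_dev_impact=('impact', 'std'),
                  avg_idr_increase=('change', 'mean'),
                  std_dev_idr_increase=('change', 'std'))
             .reset_index()
             .rename(columns={'plan_name': 'plan'}))
```

Because the arithmetic runs once over the whole frame and the aggregation is a single cythonized pass, this typically scales much better than calling a Python function once (or more) per group.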

0 Answers