Pandas resample for-loop performance issue

Asked: 2018-09-24 06:54:51

标签: python pandas performance resampling

I have the following DataFrame:

import pandas as pd
import numpy as np

df = pd.DataFrame({"id": [0]*5 + [1]*5,
                   'time': ['2015-01-01', '2015-01-03', '2015-01-04', '2015-01-08', '2015-01-10',
                            '2015-02-02', '2015-02-04', '2015-02-06', '2015-02-11', '2015-02-13'],
                   'hit': [0, 3, 8, 2, 5, 6, 12, 0, 7, 3]})
df.time = df.time.astype('datetime64[ns]')
df = df[['id', 'time', 'hit']]
df

Output:

    id        time  hit
0   0   2015-01-01  0
1   0   2015-01-03  3
2   0   2015-01-04  8
3   0   2015-01-08  2
4   0   2015-01-10  5
5   1   2015-02-02  6
6   1   2015-02-04  12
7   1   2015-02-06  0
8   1   2015-02-11  7
9   1   2015-02-13  3

And the function that performs the resampling:

def subset(df):
    '''Select the first 14 rows of each group.'''
    return df.iloc[:14]

def dailyCount(df, member_id, values, time):
    '''Transform a time-series df into daily counts per group.'''
    # container for the resulting dataframe
    ts = pd.DataFrame()
    for i in df[member_id].unique():
        # prepare a series and upsample it within the same id
        chunk = pd.Series(df.loc[df[member_id] == i, values])
        chunk = chunk.resample('1D').asfreq()

        # create a dataframe and construct some additional columns
        chunk = pd.DataFrame(chunk, columns=[values]).reset_index().fillna(0)
        chunk[values] = chunk[values].astype(int)
        chunk[member_id] = i
        chunk['daily_count'] = chunk.groupby(member_id).cumcount() + 1

        # accumulate id-wise dataframes one by one, vertically
        ts = pd.concat([ts, chunk], axis=0, ignore_index=True)

    ts = ts.set_index([member_id, time])
    ts = (ts.reset_index(level=0)
            .groupby(member_id).apply(subset)
            .drop(member_id, axis=1)
            .reset_index()
            .drop(time, axis=1)
            .set_index([member_id, 'daily_count'])
            .unstack()
            .fillna(0))
    ts.columns = pd.Index(['dailyCount_' + e[0] + '_' + str(e[1]) for e in ts.columns.tolist()])
    ts = ts.astype(np.int32)
    return ts

Input:

df.rename(columns={'id': 'member_id'}, inplace=True)
df = df.set_index('time')
dailyCount(df, 'member_id', 'hit', 'time')

Output:

    dailyCount_hit_1    dailyCount_hit_2    dailyCount_hit_3    dailyCount_hit_4    dailyCount_hit_5    dailyCount_hit_6    dailyCount_hit_7    dailyCount_hit_8    dailyCount_hit_9    dailyCount_hit_10   dailyCount_hit_11   dailyCount_hit_12
member_id                                               
0   0   0   3   8   0   0   0   2   0   5   0   0
1   6   0   12  0   0   0   0   0   0   7   0   3

When I use this function on a DataFrame of about 180,000 rows, it takes 6 minutes to run on my 2.3 GHz i5 MacBook Pro. I know my machine is slow, but I need to reuse this function on a variety of datasets. Is there a way to perform the same transformation without the for loop?
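For reference, pandas can express the same per-id upsampling with a groupby/resample chain, which avoids building one DataFrame per id inside an explicit Python loop. The following is only an untested sketch, assuming df has already been renamed and indexed by 'time' as in the call above; the names daily and wide are my own:

# upsample each id onto its own daily grid; missing days become 0 hits
daily = (df.groupby('member_id')['hit']
           .resample('D')
           .asfreq()
           .fillna(0)
           .astype(int)
           .reset_index())

# 1-based day number within each id
daily['daily_count'] = daily.groupby('member_id').cumcount() + 1

# keep at most the first 14 days per id and pivot to one column per day
wide = (daily[daily['daily_count'] <= 14]
        .pivot(index='member_id', columns='daily_count', values='hit')
        .fillna(0)
        .astype(int)
        .add_prefix('dailyCount_hit_'))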

1 Answer:

Answer 0 (score: 1):

Here is another possible solution using pandas.date_range, Index.reindex and DataFrame.pivot_table:

# starting again from the original df (with 'time' still a regular column)
df.rename(columns={'id': 'member_id'}, inplace=True)
df = df.set_index('time')
members = []

for _, g in df.groupby('member_id'):
    # build a complete daily date range per member and align the group onto it
    dt_idx = pd.date_range(start=g.index.min(), end=g.index.max(), freq='D')
    g = g.reindex(dt_idx).reset_index(drop=True)
    members.append(g)

# concatenate once, then fill the gaps introduced by the reindexing
resampled_df = pd.concat(members)
resampled_df['member_id'] = resampled_df['member_id'].ffill()
resampled_df['hit'] = resampled_df['hit'].fillna(0)
resampled_df.index += 1          # 1-based day number within each member
resampled_df = (resampled_df.pivot_table(values='hit',
                                         index='member_id',
                                         columns=resampled_df.index,
                                         fill_value=0)
                .add_prefix('dailyCount_hit_'))
resampled_df.index = resampled_df.index.astype(int)
resampled_df.iloc[:, :14]
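Since the question mentions reusing this on several datasets, the steps above could be packaged into a function. The sketch below is only a hypothetical wrapper (the name dailyCountFast and the n_days parameter are my own); it assumes df is indexed by a DatetimeIndex, with the member and value column names passed in:

def dailyCountFast(df, member_id='member_id', values='hit', n_days=14):
    '''Reshape a datetime-indexed frame into at most n_days daily-count columns per id.'''
    pieces = []
    for _, g in df.groupby(member_id):
        # complete daily grid per member, then align the group onto it
        dt_idx = pd.date_range(start=g.index.min(), end=g.index.max(), freq='D')
        pieces.append(g.reindex(dt_idx).reset_index(drop=True))

    out = pd.concat(pieces)
    out[member_id] = out[member_id].ffill()
    out[values] = out[values].fillna(0)
    out.index += 1                      # 1-based day number within each member
    out = (out.pivot_table(values=values,
                           index=member_id,
                           columns=out.index,
                           fill_value=0)
              .add_prefix('dailyCount_' + values + '_'))
    out.index = out.index.astype(int)
    return out.iloc[:, :n_days]

Collecting the per-id pieces in a list and concatenating once also avoids the repeated pd.concat inside the original loop, which copies the accumulated frame on every iteration.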