Pandas too slow - optimize with dict / numpy?

Asked: 2018-06-04 03:07:27

Tags: python pandas numpy optimization

I have ~10 large DataFrames (5mil+ rows each, and growing) that I want to perform calculations on. Doing this with raw pandas is painfully slow, even on a high-spec AWS machine. Most of the functions I need are basic, so I figure I could export from pandas to a dict(?), run my calculations, and then send the result back to a df?
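Roughly what I have in mind, sketched (the middle step stands in for my actual calculations):

records = df.reset_index().to_dict("records")    # list of plain row dicts
# ... run the calculations over `records` in pure Python ...
result = pd.DataFrame.from_records(records).set_index("time")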

The raw df is just a capture of trade prices and sizes, and looks like this (millions of rows, as mentioned above):

                            size      price
time        
2018-05-18 12:05:11.521 -0.026600   8100.000000
2018-05-18 12:05:11.674 -0.115616   8100.000000
2018-05-18 12:05:11.677 -0.026611   8100.000000
2018-05-18 12:05:11.678 -0.074000   8098.400000
2018-05-18 12:05:11.680 -0.783772   8096.600000
2018-05-18 12:05:11.807 -1.000000   8096.600000
2018-05-18 12:05:12.024 -0.100600   8096.600000
2018-05-18 12:05:12.198 -0.899400   8096.600000
2018-05-18 12:05:12.199 -1.600600   8095.100000
2018-05-18 12:05:14.949 1.000000    8092.600000
2018-05-18 12:05:14.951 0.258350    8092.600000
2018-05-18 12:05:30.191 -0.017330   8092.500000
2018-05-18 12:05:30.192 -0.161670   8088.300000
2018-05-18 12:05:30.712 -0.002000   8088.300000
2018-05-18 12:05:30.773 -0.002000   8088.300000
2018-05-18 12:05:34.688 0.003328    8088.400000
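If it helps, here is a runnable snippet that rebuilds a few of those rows (df_sample stands in for the real df):

import pandas as pd

rows = [
    ("2018-05-18 12:05:11.521", -0.026600, 8100.0),
    ("2018-05-18 12:05:11.674", -0.115616, 8100.0),
    ("2018-05-18 12:05:11.677", -0.026611, 8100.0),
    ("2018-05-18 12:05:14.949",  1.000000, 8092.6),
]
df_sample = pd.DataFrame(rows, columns=["time", "size", "price"])
df_sample["time"] = pd.to_datetime(df_sample["time"])
df_sample = df_sample.set_index("time")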

Now I want to apply the following (it aggregates the millions of rows into 5-second windows):

df = df.groupby(pd.Grouper(freq='5S')).apply(tick_features).shift()[1:]

where tick_features() is:

def tick_features(x):
    if not x.empty:
        open = x['price'].iloc[0]
        close = x['price'].iloc[-1]
    else:
        open = np.nan
        close = np.nan
    high = x['price'].max()
    low = x['price'].min()
    volume = np.abs(x['size']).sum()
    buy_volume = x['size'][x['size'] > 0].sum()
    sell_volume = np.abs(x['size'][x['size'] < 0].sum())
    pct_buy_volume = buy_volume / (buy_volume + sell_volume)
    pct_sell_volume = sell_volume / (buy_volume + sell_volume)
    num_trades = x['size'].count()
    num_buy_trades = (x['size'] > 0).sum()
    num_sell_trades = (x['size'] < 0).sum()
    pct_buy_trades = (x['size'] > 0).mean() * 100
    pct_sell_trades = (x['size'] < 0).mean() * 100

    return pd.Series([open,high,low,close,volume,buy_volume,sell_volume,pct_buy_volume,pct_sell_volume,
                      num_trades,num_buy_trades,num_sell_trades,pct_buy_trades,pct_sell_trades], 
                     index=['open','high','low','close','volume','buy_volume','sell_volume','pct_buy_volume','pct_sell_volume',
                            'num_trades','num_buy_trades','num_sell_trades','pct_buy_trades','pct_sell_trades'])

This kind of optimization is out of my league, so any explanation of whether this is doable, and how, would be much appreciated.

1 Answer:

Answer 0 (score: 3)

The code is slow because there are many groups, and for every group pandas has to create a DataFrame object and pass it to tick_features(); that loop runs in Python.

To speed up the calculation, you can instead call the built-in aggregation methods, whose loops run in Cython:

First, prepare some dummy data:

import pandas as pd
import numpy as np

idx = pd.date_range("2018-05-01", "2018-06-02", freq="0.1S")
x = np.random.randn(idx.shape[0], 2)

df = pd.DataFrame(x, index=idx, columns=["size", "price"]) 
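To see the per-group Python overhead, you can time a trivial apply() against the equivalent built-in aggregation on one day of this data (just a sketch; absolute numbers depend on the machine):

import time

sub = df.loc["2018-05-01"]              # one day of the dummy data
g = sub.groupby(pd.Grouper(freq="5s"))

t0 = time.perf_counter()
g.apply(lambda x: x["price"].min())     # one Python call per group
print("apply:", time.perf_counter() - t0)

t0 = time.perf_counter()
g["price"].min()                        # single Cython pass over all groups
print("min:  ", time.perf_counter() - t0)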

Add some extra columns first, so that each per-group statistic becomes a plain column aggregation; if you have enough memory, these are fast to compute:

df["time"] = df.index
df["volume"] = df["size"].abs()
df["buy_volume"] = np.clip(df["size"], 0, np.inf)
df["sell_volume"] = np.clip(df["size"], -np.inf, 0)
df["buy_trade"] = df["size"] > 0
df["sell_trade"] = df["size"] < 0    

Then group the DataFrame and call the aggregation methods:

g = df.groupby(pd.Grouper(freq="5s"))
df2 = pd.DataFrame(
    dict(
        open=g["price"].first(),    # first price in each 5s window
        close=g["price"].last(),    # last price in each 5s window
        high=g["price"].max(),
        low=g["price"].min(),
        volume=g["volume"].sum(),
        buy_volume=g["buy_volume"].sum(),
        sell_volume=-g["sell_volume"].sum(),  # flip the sign back to positive
        num_trades=g["size"].count(),
        num_buy_trades=g["buy_trade"].sum(),
        num_sell_trades=g["sell_trade"].sum(),
        pct_buy_trades=g["buy_trade"].mean() * 100,
        pct_sell_trades=g["sell_trade"].mean() * 100,
    )
)
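As an aside, on pandas 0.25+ the same table can be built with a single agg() call using named aggregation, sketched below; the pct_buy_volume/pct_sell_volume step that follows still applies unchanged:

g = df.groupby(pd.Grouper(freq="5s"))
df2 = g.agg(
    open=("price", "first"),
    close=("price", "last"),
    high=("price", "max"),
    low=("price", "min"),
    volume=("volume", "sum"),
    buy_volume=("buy_volume", "sum"),
    sell_volume=("sell_volume", "sum"),   # still negative at this point
    num_trades=("size", "count"),
    num_buy_trades=("buy_trade", "sum"),
    num_sell_trades=("sell_trade", "sum"),
    pct_buy_trades=("buy_trade", "mean"),
    pct_sell_trades=("sell_trade", "mean"),
)
df2["sell_volume"] = -df2["sell_volume"]             # flip sign back
df2[["pct_buy_trades", "pct_sell_trades"]] *= 100    # fractions -> percent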

d = df2.eval("buy_volume + sell_volume")            # total volume per window
df2["pct_buy_volume"] = df2.eval("buy_volume / @d")
df2["pct_sell_volume"] = df2.eval("sell_volume / @d")