I just updated pandas to 0.13.1, and now one line of code (which was already slow under 0.12.0) has become unbearably slow. I'm wondering whether there is a faster alternative.
I'm working with a DataFrame. Suppose I have something like this:
import pandas as pd
df = pd.DataFrame({'A': ['one', 'one', 'two', 'three', 'three', 'one'], 'B': range(6)})
print df
       A  B
0    one  0
1    one  1
2    two  2
3  three  3
4  three  4
5    one  5
First I group by 'A' and take the last value of B in each group to create a third column 'C':
df['C'] = df.groupby('A')['B'].transform(lambda x: x.iloc[-1])
print df
       A  B  C
0    one  0  5
1    one  1  5
2    two  2  2
3  three  3  4
4  three  4  4
5    one  5  5
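The snippets above use Python 2 print statements; for completeness, here is the same example as a self-contained Python 3 script:

```python
import pandas as pd

df = pd.DataFrame({'A': ['one', 'one', 'two', 'three', 'three', 'one'],
                   'B': range(6)})

# For each row, 'C' gets the last B value of that row's 'A' group:
# group 'one' ends with B=5, 'two' with B=2, 'three' with B=4.
df['C'] = df.groupby('A')['B'].transform(lambda x: x.iloc[-1])
print(df)
```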
The question: is there a faster way to do this with pandas 0.13.1?
Thanks
Answer (score: 3)
Yes, this is pending implementation: https://github.com/pydata/pandas/issues/6496
But you can do it like this:
Generate the data/groups:
In [31]: np.random.seed(0)
In [32]: N = 120000
In [33]: N_TRANSITIONS = 1400
In [35]: transition_points = np.random.permutation(np.arange(N))[:N_TRANSITIONS]
In [36]: transition_points.sort()
In [37]: transitions = np.zeros((N,), dtype=np.bool)
In [38]: transitions[transition_points] = True
In [39]: g = transitions.cumsum()
In [40]: df = pd.DataFrame({ "signal" : np.random.rand(N)})
In [41]: grp = df["signal"].groupby(g)
Here is the actual transform:
In [42]: result2 = grp.transform(lambda x: x.iloc[-1])
In [43]: result1 = pd.concat([ Series([r]*len(grp.groups[i])) for i, r in enumerate(grp.tail(1).values) ],ignore_index=True)
In [44]: result1.equals(result2)
Out[44]: True
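The concat-based workaround can also be written by mapping each group's tail value back through the group labels. This variant is not from the original answer (the `tails` name is mine); because the labels in `g` are non-decreasing, the rows returned by `tail(1)` line up with the sorted unique labels, so a single `.map()` broadcasts the values back:

```python
import numpy as np
import pandas as pd

# Same synthetic data as in the answer's session.
np.random.seed(0)
N = 120000
N_TRANSITIONS = 1400
transition_points = np.sort(np.random.permutation(np.arange(N))[:N_TRANSITIONS])
transitions = np.zeros(N, dtype=bool)
transitions[transition_points] = True
g = transitions.cumsum()

df = pd.DataFrame({"signal": np.random.rand(N)})
grp = df["signal"].groupby(g)

# tail(1) yields one row per group in order of appearance; re-indexing
# those values by group label lets .map() broadcast them back without
# building an intermediate Series per group.
tails = pd.Series(grp.tail(1).values, index=np.unique(g))
result = pd.Series(g).map(tails)

result2 = grp.transform(lambda x: x.iloc[-1])
print(result.equals(result2))
```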
Timings:
In [26]: %timeit pd.concat([ Series([r]*len(grp.groups[i])) for i, r in enumerate(grp.tail(1).values) ],ignore_index=True)
10 loops, best of 3: 123 ms per loop
In [27]: %timeit grp.transform(lambda x: x.iloc[-1])
1 loops, best of 3: 472 ms per loop
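As a follow-up: in later pandas releases the fast path tracked in the issue above was implemented, and passing the name of a builtin aggregation to `transform()` avoids the per-group Python lambda entirely. A sketch against the question's small frame (behavior on 0.13.1 itself may differ):

```python
import pandas as pd

df = pd.DataFrame({'A': ['one', 'one', 'two', 'three', 'three', 'one'],
                   'B': range(6)})

# Passing the string 'last' dispatches to a cythonized groupby
# implementation instead of calling a Python function once per group.
df['C'] = df.groupby('A')['B'].transform('last')
print(df)
```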