Happy New Year.

I am looking for a way to compute the correlation between a rolling window and a fixed window (a 'patch') with pandas. The end goal is pattern matching. From what I have read in the docs, and I may very well be missing something, corr() or corrwith() do not let you lock one of the Series/DataFrames in place.

The best solution I can currently come up with is shown below. When it runs on 50K rows with a patch of 30 samples, processing time enters the Ctrl+C range.

I would very much appreciate suggestions and alternatives. Thank you.

If you run the code below, it should be quite clear what I am trying to do:
import numpy as np
import pandas as pd
from pandas import DataFrame

# Create test DataFrame df and a patch to be found.
n = 10
rng = pd.date_range('1/1/2000 00:00:00', periods=n, freq='5min')
df = DataFrame(np.random.rand(n, 1), columns=['a'], index=rng)

n = 4
rng = pd.date_range('1/1/2000 00:10:00', periods=n, freq='5min')
patch = DataFrame(np.arange(n), columns=['a'], index=rng)

print()
print(' *** Start corr example ***')
# To avoid the automatic alignment between df and patch,
# I need to reset the index.
patch.reset_index(inplace=True, drop=True)
# Cannot do:
# df.reset_index(inplace=True, drop=True)
df['corr'] = np.nan

for i in range(df.shape[0]):
    window = df[i : i + patch.shape[0]]
    # If the slice has only two rows, I have a line between two points.
    # When I corr that with two points in the patch, I start getting
    # misleading values like 1 or -1.
    if window.shape[0] != patch.shape[0]:
        break
    else:
        # I need to reset_index for the window, which is less efficient
        # than doing it once outside the for loop, as is done for the
        # patch. If I did df.reset_index up there, I would still get
        # automatic realignment, just by the new index.
        window = window.reset_index(drop=True)
        # On top of the obvious inefficiency of this method, I cannot
        # corrwith() only between specific columns of the DataFrames;
        # corrwith() runs for all columns.
        # Alternatively I could create new DataFrames with only the
        # needed columns:
        # df_col = DataFrame(df.a)
        # patch_col = DataFrame(patch.a)
        # Or I could join the patch to df and shift it.
        corr = window.corrwith(patch)

        print()
        print('===========================')
        print('window:')
        print(window)
        print('---------------------------')
        print('patch:')
        print(patch)
        print('---------------------------')
        print('Corr for this window')
        print(corr)
        print('============================')

        df.loc[df.index[i], 'corr'] = corr['a']

print()
print(' *** End corr example ***')
print(" Please inspect var 'df'")
print()
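For comparison, the same per-window correlation can be written more compactly with pandas' rolling().apply(). It still evaluates a Python callable per window, so it will not by itself fix the 50K-row performance problem, but it avoids the manual slicing and reset_index dance. This is a sketch of mine, not part of the question; patch_vals is a name I introduce here:

```python
import numpy as np
import pandas as pd

np.random.seed(1)
n, m = 10, 4
rng = pd.date_range('1/1/2000 00:00:00', periods=n, freq='5min')
df = pd.DataFrame(np.random.rand(n, 1), columns=['a'], index=rng)
patch_vals = np.arange(m, dtype=float)

# Correlate each m-sample window with the fixed patch. min_periods=m makes
# incomplete windows yield NaN instead of misleading +/-1 correlations.
df['corr'] = (df['a']
              .rolling(window=m, min_periods=m)
              .apply(lambda w: np.corrcoef(w, patch_vals)[0, 1], raw=True))

# rolling() labels each window by its last row; the question's loop labels
# it by its first row, so shift the result back to match.
df['corr'] = df['corr'].shift(-(m - 1))
```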
Answer 0 (score: 2)
Apparently, the heavy use of reset_index is a sign that we are fighting Pandas' indexing and automatic alignment. Oh, how much easier things would be if we could simply forget the index! Indeed, that is what NumPy is for. (As a rule of thumb: use Pandas when you need alignment or grouping by index; use NumPy when computing on N-dimensional arrays.)

Using NumPy makes the calculation much faster, because we can drop the for-loop and express everything done inside it as a single calculation over a NumPy array of rolling windows.

We can look inside pandas/core/frame.py's DataFrame.corrwith to see how the calculation is done, and then translate it into the corresponding code on NumPy arrays, adjusted so that the computation runs over the whole array of rolling windows at once instead of one window at a time, while keeping patch fixed. (Note: the Pandas corrwith method handles NaNs. To keep the code a bit simpler, I assume there are no NaNs in the input.)
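To make explicit which formula is being vectorized: for a single window, what corrwith computes is the plain Pearson correlation, which the answer's code then applies row-wise to all windows at once. A minimal single-window sketch (my own illustration, not the pandas source):

```python
import numpy as np

rng = np.random.default_rng(0)
w = rng.random(4)               # one rolling window
p = np.arange(4, dtype=float)   # the fixed patch

# Pearson r: sum of demeaned products, divided by (n - 1) times the
# product of the sample standard deviations.
num = ((w - w.mean()) * (p - p.mean())).sum()
dom = (len(w) - 1) * np.sqrt(w.var(ddof=1) * p.var(ddof=1))
r = num / dom

# Matches NumPy's built-in correlation coefficient.
assert np.isclose(r, np.corrcoef(w, p)[0, 1])
```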
import numpy as np
import pandas as pd
from pandas import DataFrame
import numpy.lib.stride_tricks as stride

np.random.seed(1)
n = 10
rng = pd.date_range('1/1/2000 00:00:00', periods=n, freq='5min')
df = DataFrame(np.random.rand(n, 1), columns=['a'], index=rng)

m = 4
rng = pd.date_range('1/1/2000 00:10:00', periods=m, freq='5min')
patch = DataFrame(np.arange(m), columns=['a'], index=rng)

def orig(df, patch):
    patch.reset_index(inplace=True, drop=True)
    df['corr'] = np.nan
    for i in range(df.shape[0]):
        window = df[i : i + patch.shape[0]]
        if window.shape[0] != patch.shape[0]:
            break
        window = window.reset_index(drop=True)
        corr = window.corrwith(patch)
        df.loc[df.index[i], 'corr'] = corr['a']
    return df

def using_numpy(df, patch):
    left = df['a'].values
    itemsize = left.itemsize
    # View df['a'] as an (n - m + 1) x m array of overlapping windows
    # without copying the data.
    left = stride.as_strided(left, shape=(n - m + 1, m),
                             strides=(itemsize, itemsize))
    right = patch['a'].values
    ldem = left - left.mean(axis=1)[:, None]
    rdem = right - right.mean()
    num = (ldem * rdem).sum(axis=1)
    dom = (m - 1) * np.sqrt(left.var(axis=1, ddof=1) * right.var(ddof=1))
    correl = num / dom
    df.loc[df.index[:len(correl)], 'corr'] = correl
    return df

expected = orig(df.copy(), patch.copy())
result = using_numpy(df.copy(), patch.copy())
print(expected)
print(result)
This confirms that orig and using_numpy produce the same values:

assert np.allclose(expected['corr'].dropna(), result['corr'].dropna())
Technical note: to build the array full of rolling windows in a memory-friendly way, I used a striding trick I learned here.
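On NumPy 1.20+, the same windows array can be built without hand-computing strides via sliding_window_view, a safer equivalent of the as_strided call in using_numpy above; a small sketch:

```python
import numpy as np
from numpy.lib.stride_tricks import sliding_window_view

a = np.arange(10.0)
m = 4
# Each row is one length-m rolling window; like as_strided, this is a
# zero-copy view onto the original array, so no data is duplicated.
windows = sliding_window_view(a, m)
print(windows.shape)  # (7, 4)
```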
Here is a benchmark using n, m = 1000, 4 (many rows and a small patch, to generate a large number of windows):

In [77]: %timeit orig(df.copy(), patch.copy())
1 loops, best of 3: 3.56 s per loop

In [78]: %timeit using_numpy(df.copy(), patch.copy())
1000 loops, best of 3: 1.35 ms per loop

That is a speedup of over 2600x.