I am trying to run grangercausalitytests on two time series:
import numpy as np
import pandas as pd
from statsmodels.tsa.stattools import grangercausalitytests
n = 1000
ls = np.linspace(0, 2*np.pi, n)
df1 = pd.DataFrame(np.sin(ls))
df2 = pd.DataFrame(2*np.sin(1+ls))
df = pd.concat([df1, df2], axis=1)
df.plot()
grangercausalitytests(df, maxlag=20)
However, I am getting the following output, which ends in an error:
Granger Causality
number of lags (no zero) 1
ssr based F test: F=272078066917221398041264652288.0000, p=0.0000 , df_denom=996, df_num=1
ssr based chi2 test: chi2=272897579166972095424217743360.0000, p=0.0000 , df=1
likelihood ratio test: chi2=60811.2671, p=0.0000 , df=1
parameter F test: F=272078066917220553616334520320.0000, p=0.0000 , df_denom=996, df_num=1
Granger Causality
number of lags (no zero) 2
ssr based F test: F=7296.6976, p=0.0000 , df_denom=995, df_num=2
ssr based chi2 test: chi2=14637.3954, p=0.0000 , df=2
likelihood ratio test: chi2=2746.0362, p=0.0000 , df=2
parameter F test: F=13296850090491009488285469769728.0000, p=0.0000 , df_denom=995, df_num=2
...
/usr/local/lib/python3.5/dist-packages/numpy/linalg/linalg.py in _raise_linalgerror_singular(err, flag)
88
89 def _raise_linalgerror_singular(err, flag):
---> 90 raise LinAlgError("Singular matrix")
91
92 def _raise_linalgerror_nonposdef(err, flag):
LinAlgError: Singular matrix
I am not sure why this is happening.
Answer 0 (score: 12)
The problem arises because of the perfect correlation between the two series in your data. As you can see from the traceback, a Wald test is used internally to compute the maximum likelihood estimates for the parameters of the lagged time series. To do this, it needs the parameter covariance matrix (which is close to zero here) and its inverse (as you can also see from the line invcov = np.linalg.inv(cov_p) in the traceback). For some maximum number of lags (>= 5) this near-zero matrix becomes singular, and the test breaks down. If you add a little noise to your data, the error disappears:
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from statsmodels.tsa.stattools import grangercausalitytests
n = 1000
ls = np.linspace(0, 2*np.pi, n)
df1Clean = pd.DataFrame(np.sin(ls))
df2Clean = pd.DataFrame(2*np.sin(ls+1))
dfClean = pd.concat([df1Clean, df2Clean], axis=1)
dfDirty = dfClean+0.00001*np.random.rand(n, 2)
grangercausalitytests(dfClean, maxlag=20, verbose=False) # Raises LinAlgError
grangercausalitytests(dfDirty, maxlag=20, verbose=False) # Runs fine
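To see why the noise helps, here is a rough illustrative check (not part of the original answer; it only assumes the same clean series as above). The lagged regressors of the noise-free sines are exactly collinear, because every shifted sine is a linear combination of sin(ls) and cos(ls), so their Gram matrix is rank-deficient until noise is added:
import numpy as np

n = 1000
ls = np.linspace(0, 2*np.pi, n)
x = np.sin(ls)
y = 2*np.sin(1+ls)

maxlag = 5
# Stack lagged copies of both series, similar in spirit to the regressors the test builds
lags = np.column_stack([x[maxlag-k:n-k] for k in range(1, maxlag+1)] +
                       [y[maxlag-k:n-k] for k in range(1, maxlag+1)])
# Enormous: every column lies in the 2-dimensional space spanned by sin(ls) and cos(ls)
print(np.linalg.cond(lags.T @ lags))
# Much better conditioned once a little noise breaks the exact collinearity
noisy = lags + 0.00001*np.random.rand(*lags.shape)
print(np.linalg.cond(noisy.T @ noisy))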
Answer 1 (score: 1)
Another thing to watch out for is duplicated columns. Duplicated columns have a correlation of 1.0, which leads to a singular matrix. You may also have two features that are perfectly correlated with each other. The simplest way to check is to call df.corr() and look for pairs of columns with a correlation of 1.0.
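As an illustration, a minimal sketch of that check (the toy DataFrame here is made up purely for demonstration; substitute the frame you pass to grangercausalitytests):
import numpy as np
import pandas as pd

# Toy frame with a duplicated column, only to demonstrate the check
df = pd.DataFrame({"a": np.random.rand(100)})
df["b"] = df["a"]              # duplicate column -> correlation of 1.0
df["c"] = np.random.rand(100)

corr = df.corr().abs()
# Ignore the diagonal: every column is trivially correlated with itself
off_diag = corr.where(~np.eye(len(corr), dtype=bool))
print(off_diag.stack()[off_diag.stack() > 0.9999])  # the offending column pairs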