LinAlgError when fitting an ARIMA model

Date: 2018-02-19 04:42:31

Tags: python statsmodels arima

I ran into a strange problem while fitting an ARIMA model and checking its MSE.

Here is the code I tried:

from statsmodels.tsa.arima_model import ARIMA
from sklearn.metrics import mean_squared_error
import sys

split_point = int(len(value_series) * 0.66)
train, test = value_series.values[0:split_point], value_series.values[split_point:]
history = [float(x) for x in train]
predictions = list()

# walk-forward validation: refit on the full history at each step,
# then forecast one step ahead
for t in range(len(test)):
    try:
        model = ARIMA(history, order=(2,1,2))
        model_fit = model.fit(disp=0)
        output = model_fit.forecast()
        yhat = output[0]
        predictions.append(yhat)
        obs = test[t]
        history.append(obs)
        print('# %s predicted=%f, expected=%f' % (t, yhat, obs))
    except:
        print("Unexpected error:", sys.exc_info()[0])
        pass

error = mean_squared_error(test, predictions)
print('Test MSE: %.3f' % error)

The error I get is Unexpected error: <class 'numpy.linalg.linalg.LinAlgError'>, raised at model_fit = model.fit(disp=0). It occurs from index 282 through the end of the data (the test list has length 343), and I have not been able to find the cause or a solution.

In any case, the lengths of predictions and test come out as 282 and 343 respectively. I don't understand why yhat cannot be appended to predictions, i.e. why it cannot be assigned from the output of model_fit.forecast()...

+) The underlying error is SVD did not converge.
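
For reference, one way to keep test and predictions the same length even when a fit fails is to catch numpy.linalg.LinAlgError explicitly and append a naive fallback instead of silently skipping the step. A minimal sketch, assuming a last-observed-value fallback is acceptable:

import numpy as np
from statsmodels.tsa.arima_model import ARIMA

for t in range(len(test)):
    try:
        model_fit = ARIMA(history, order=(2,1,2)).fit(disp=0)
        yhat = float(model_fit.forecast()[0])
    except np.linalg.LinAlgError:
        # the fit failed (e.g. "SVD did not converge"); fall back to the
        # last observed value so test and predictions stay aligned
        yhat = history[-1]
    predictions.append(yhat)
    history.append(float(test[t]))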

1 Answer:

Answer 0 (score: 1)

Try:

from pandas import DataFrame
from sklearn.metrics import mean_squared_error
from statsmodels.tsa.arima_model import ARIMA

X = value_series.values
size = int(len(X) * 0.66)
trn, tst = X[0:size], X[size:len(X)]
hsty = [x.astype(float) for x in trn]
pred = []
for i in range(len(tst)):
    try:
        model = ARIMA(hsty, order=(3,1,1))
        # start_ar_lags=None leaves the choice of AR lags for the
        # starting parameters to statsmodels
        model_fit = model.fit(disp=0, start_ar_lags=None)
        residuals = DataFrame(model_fit.resid)
        out = model_fit.forecast()
        yhat = out[0]
        pred.append(yhat)
        obs = tst[i]
        hsty.append(obs)
        print('predicted=%f, expected=%f' % (yhat, obs))
    except:
        # a failed fit is simply skipped; the resulting length mismatch
        # is reconciled before computing the MSE below
        pass
# pred may be shorter than tst if some fits failed, so trim to the
# common length before scoring
if len(tst) > len(pred):
    err = mean_squared_error(tst[:len(pred)], pred)
else:
    err = mean_squared_error(tst, pred[:len(tst)])
print('Test MSE: %.3f' % err)
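
If some steps still fail with order=(3,1,1), another option is to retry each step with progressively simpler orders before skipping it. A short sketch of that idea (the fit_with_fallback helper and the candidate orders are assumptions, not part of the answer):

import numpy as np
from statsmodels.tsa.arima_model import ARIMA

def fit_with_fallback(history, orders=((3,1,1), (1,1,1), (0,1,1))):
    # try each candidate order in turn; return None if none converges
    for order in orders:
        try:
            return ARIMA(history, order=order).fit(disp=0, start_ar_lags=None)
        except (np.linalg.LinAlgError, ValueError):
            continue
    return None

A step then only has to be skipped when every candidate order fails to converge.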