I'm new to Python. I'm using sklearn to fit a linear regression:
lm = LinearRegression()
lm.fit(x, y)
How can I get the variance of the residuals?
Answer 0 (score: -1)
Let's define
import numpy as np
y_true = np.array([3, -0.5, 2, 7])
y_pred = np.array([2.5, 0.0, 2, 8])
The mean absolute error can be computed as
np.mean(np.abs(y_true - y_pred))  # 0.5, same as sklearn.metrics.mean_absolute_error(y_true, y_pred)
The variance of the absolute errors is
np.var(np.abs(y_true - y_pred)) # 0.125
The variance of the errors (i.e. of the residuals) is
np.var(y_true - y_pred)  # 0.3125
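Note that np.var computes the population variance by default (it divides by n). If you want the unbiased sample variance instead, pass ddof=1 so it divides by n - 1:
np.var(y_true - y_pred, ddof=1)  # ~0.4167, divides by n - 1 instead of n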
Now, how do we do this with scikit-learn?
import numpy as np
from sklearn import datasets
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_absolute_error
# load the feature matrix X and the target y, then split into train and test sets
boston = datasets.load_boston()  # note: load_boston is deprecated/removed in newer scikit-learn versions
X, y = boston.data, boston.target
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.33, random_state=42)
# initialize the model, fit it on the training data, and predict on the test data
clf = LinearRegression()
clf.fit(X_train, y_train)
preds = clf.predict(X_test)
# evaluate: sklearn's mean_absolute_error agrees with the manual computation
mean_absolute_error(y_test, preds) == np.mean(np.abs(y_test - preds))
# get the variance of the (absolute) residuals
np.var(np.abs(y_test - preds))  # variance of the absolute residuals
np.var(y_test - preds)          # variance of the residuals
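Going back to the original question, the same idea applies to your already-fitted model (a minimal sketch, assuming x and y are the arrays that lm was fitted on):
residuals = y - lm.predict(x)   # residuals of the fitted model
np.var(residuals)               # variance of the residuals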