There is a potential bug in sklearn.cross_validation when using LeaveOneOut: x_test and y_test do not appear to be used by LeaveOneOut at all. Instead, the validation seems to be done with x_train and y_train.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.cross_validation import LeaveOneOut, cross_val_predict

x = np.array([[1, 2], [3, 4], [5, 6], [7, 8], [9, 10]])
y = np.array([12, 13, 19, 18, 15])
clf = LinearRegression().fit(x, y)
cv = LeaveOneOut(len(y))
for train, test in cv:
    x_train, y_train = x[train], y[train]
    x_test, y_test = x[test], y[test]
    y_pred_USING_x_test = clf.predict(x_test)
    y_pred_USING_x_train = clf.predict(x_train)
    print('y_pred_USING_x_test: ', y_pred_USING_x_test,
          'y_pred_USING_x_train: ', y_pred_USING_x_train)
y_pred_USING_x_test: [ 13.2] y_pred_USING_x_train: [ 14.3 15.4 16.5 17.6]
y_pred_USING_x_test: [ 14.3] y_pred_USING_x_train: [ 13.2 15.4 16.5 17.6]
y_pred_USING_x_test: [ 15.4] y_pred_USING_x_train: [ 13.2 14.3 16.5 17.6]
y_pred_USING_x_test: [ 16.5] y_pred_USING_x_train: [ 13.2 14.3 15.4 17.6]
y_pred_USING_x_test: [ 17.6] y_pred_USING_x_train: [ 13.2 14.3 15.4 16.5]
y_pred_USING_x_test yields only a single value in each iteration of the for loop, which makes no sense! y_pred_USING_x_train looks more like what I would expect from LeaveOneOut. The result of the following code seems completely unrelated as well!
bug = cross_val_predict(clf, x, y, cv=cv)
print('bug: ', bug)
bug: [ 15. 14.85714286 14.5 15.85714286 21.5 ]
Any explanation is welcome.
Answer 0 (score: 2)
Each sample is used once as the test set (a singleton). This means x_test will be an array with one element, and clf.predict(x_test) will return an array with one (predicted) element. This can be seen in the output. x_train will be the training set with the one element chosen for x_test left out. This can be confirmed by adding the following lines inside the for loop:
for train, test in cv:
    x_train, y_train = x[train], y[train]
    x_test, y_test = x[test], y[test]
    if len(x_test) != 1 or (len(x_train) + 1 != len(x)):  # Confirmation
        raise Exception
    y_pred_USING_x_test = clf.predict(x_test)
    y_pred_USING_x_train = clf.predict(x_train)
    print('predicting for', x_test, 'and expecting', y_test, 'and got', y_pred_USING_x_test)
    print('predicting for', x_train, 'and expecting', y_train, 'and got', y_pred_USING_x_train)
    print()
    print()
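The singleton claim can also be checked directly from the index arrays the splitter yields. Here is a minimal sketch, written against the newer sklearn.model_selection LeaveOneOut as an assumption, since the sklearn.cross_validation module used in the question has since been removed:

```python
import numpy as np
from sklearn.model_selection import LeaveOneOut

x = np.array([[1, 2], [3, 4], [5, 6], [7, 8], [9, 10]])

# split() yields (train_indices, test_indices); the test side is always one index
splits = list(LeaveOneOut().split(x))
for train, test in splits:
    print(train, test)  # e.g. [1 2 3 4] [0] on the first iteration
```

Note that in this API the splitter takes no argument; the number of samples is inferred from the array passed to split().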
Note: since you train and test the model on the same data, this is not a proper validation. You should create a new LinearRegression object in each iteration of the for loop and train it with x_train and y_train. Use it to predict x_test, and then compare y_test with y_pred_USING_x_test:
x = np.array([[1, 2], [3, 4], [5, 6], [7, 8], [9, 10]])
y = np.array([12, 13, 19, 18, 15])
cv = LeaveOneOut(len(y))
for train, test in cv:
    x_train, y_train = x[train], y[train]
    x_test, y_test = x[test], y[test]
    if len(x_test) != 1 or (len(x_train) + 1 != len(x)):
        raise Exception
    clf = LinearRegression()
    clf.fit(x_train, y_train)
    y_pred_USING_x_test = clf.predict(x_test)
    print('predicting for', x_test, 'and expecting', y_test, 'and got', y_pred_USING_x_test)
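Building on this corrected loop, the per-fold predictions can be combined into an overall leave-one-out error. A sketch of that idea, written against the newer sklearn.model_selection API as an assumption (the cross_validation module used above has since been removed from scikit-learn):

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import LeaveOneOut

x = np.array([[1, 2], [3, 4], [5, 6], [7, 8], [9, 10]])
y = np.array([12, 13, 19, 18, 15])

squared_errors = []
for train, test in LeaveOneOut().split(x):
    fold_clf = LinearRegression().fit(x[train], y[train])  # fresh fit per fold
    pred = fold_clf.predict(x[test])[0]
    squared_errors.append((pred - y[test][0]) ** 2)

loo_mse = np.mean(squared_errors)  # leave-one-out mean squared error
print(loo_mse)
```

Averaging the held-out errors like this is the usual way to turn a leave-one-out loop into a single model-quality number.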
Answer 1 (score: 2)
There is no bug. Two things:

1. You are performing the cross-validation split, but you never train on the training set! You need to call clf.fit(x_train, y_train) before calling predict() for it to behave as expected.

2. By design, the test set in LeaveOneOut is a single sample (i.e., one is left out), so the prediction will also be a single number. The cross_val_predict() function is a convenience routine that stitches these single outputs together.

Once you account for these two things, I believe the output of your code will make more sense. Here is the result:
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.cross_validation import LeaveOneOut, cross_val_predict

x = np.array([[1, 2], [3, 4], [5, 6], [7, 8], [9, 10]])
y = np.array([12, 13, 19, 18, 15])
clf = LinearRegression().fit(x, y)
cv = LeaveOneOut(len(y))
for train, test in cv:
    x_train, y_train = x[train], y[train]
    x_test, y_test = x[test], y[test]
    clf.fit(x_train, y_train)  # <--------------- note added line!
    y_pred_USING_x_test = clf.predict(x_test)
    y_pred_USING_x_train = clf.predict(x_train)
    print('y_pred_USING_x_test: ', y_pred_USING_x_test,
          'y_pred_USING_x_train: ', y_pred_USING_x_train)
print()
print(cross_val_predict(clf, x, y, cv=cv))
Output:
y_pred_USING_x_test: [ 15.] y_pred_USING_x_train: [ 15.5 16. 16.5 17. ]
y_pred_USING_x_test: [ 14.85714286] y_pred_USING_x_train: [ 13.94285714 15.77142857 16.68571429 17.6 ]
y_pred_USING_x_test: [ 14.5] y_pred_USING_x_train: [ 12.3 13.4 15.6 16.7]
y_pred_USING_x_test: [ 15.85714286] y_pred_USING_x_train: [ 13.2 14.08571429 14.97142857 16.74285714]
y_pred_USING_x_test: [ 21.5] y_pred_USING_x_train: [ 11.9 14.3 16.7 19.1]
[ 15. 14.85714286 14.5 15.85714286 21.5 ]
As you can see, the test outputs from the manual loop match the output of cross_val_predict().
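As a cross-check, the stitching that cross_val_predict() performs can be reproduced by hand by writing each held-out prediction into that sample's position. A minimal sketch, using the newer sklearn.model_selection API as an assumption (sklearn.cross_validation has been removed in current scikit-learn releases):

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import LeaveOneOut, cross_val_predict

x = np.array([[1, 2], [3, 4], [5, 6], [7, 8], [9, 10]])
y = np.array([12, 13, 19, 18, 15])

# Place each held-out prediction at the index of the sample that was left out
manual = np.empty(len(y))
for train, test in LeaveOneOut().split(x):
    fold_clf = LinearRegression().fit(x[train], y[train])
    manual[test] = fold_clf.predict(x[test])

auto = cross_val_predict(LinearRegression(), x, y, cv=LeaveOneOut())
print(np.allclose(manual, auto))
```

Since cross_val_predict clones and refits the estimator on every fold, the two routes produce the same array.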
Answer 2 (score: 1)
Doing clf = LinearRegression().fit(x, y) after the for loop gives the same answer as cross_val_predict(clf, x, y, cv=cv). It is not a bug after all: the program predicts each left-out sample of the loop.