Gradient Boosting Classifier loss function with sklearn - operands could not be broadcast together

Asked: 2015-09-02 18:35:03

Tags: python scikit-learn operands loss boosting

I'm running into a problem with the estimator.loss_ method of sklearn's Gradient Boosting Classifier. I'm trying to compare the test error against the training error over time. Here is some of my data preparation:

# convert data to numpy array
train = np.array(shuffled_ds)

#label encode neighborhoods
for i in range(train.shape[1]):
    if i in [1,2]:
        print(i, list(train[1:5, i]))
        lbl = preprocessing.LabelEncoder()
        lbl.fit(list(train[:, i]))
        train[:, i] = lbl.transform(train[:, i])
print('neighborhoods & crimes encoded')
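For reference, LabelEncoder maps each distinct category to an integer index (in sorted order). A minimal sketch with made-up neighborhood names:

```python
from sklearn import preprocessing

# hypothetical neighborhood names, just to illustrate the mapping
lbl = preprocessing.LabelEncoder()
lbl.fit(["Mission", "Tenderloin", "Mission", "Bayview"])

print(list(lbl.classes_))                           # ['Bayview', 'Mission', 'Tenderloin']
print(list(lbl.transform(["Mission", "Bayview"])))  # [1, 0]
```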

#create target vector
y_crimes = train[::,1]
train=np.delete(train,1,1)
print(y_crimes)

#arrays to float
train = train.astype(float)
y_crimes = y_crimes.astype(float)

#data holdout for testing
X_train, X_test, y_train, y_test = cross_validation.train_test_split(
    train, y_crimes, test_size=0.4, random_state=0)
print('test data created')

#train model and check train vs test error
print('begin training...')
est=GBC(n_estimators = 3000,learning_rate=.1,max_depth=4,max_features=1,min_samples_leaf=3)
est.fit(X_train,y_train)
print('done training')

At this point I print out my array shapes with:
print(X_train.shape)
print(X_test.shape)
print(y_train.shape)
print(y_test.shape)

and I get:

(18000, 9)
(12000, 9)
(18000,)
(12000,)

respectively.

So my shapes are compatible according to the sklearn documentation. But then, when I try to fill a vector of test scores so I can compare them against my training error:

test_score=np.empty(len(est.estimators_))
for i, pred in enumerate(est.staged_predict(X_test)):
    test_score[i] = est.loss_(y_test,pred)

I get the following error:

    return np.sum(-1 * (Y * pred).sum(axis=1) +
ValueError: operands could not be broadcast together with shapes (12000,47) (12000,)

I'm not sure where that 47 is coming from. I have used this same procedure on another dataset before without any problems. Any help would be appreciated.
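For what it's worth, the 47 is presumably the number of distinct crime classes: for multiclass problems, loss_ one-hot encodes y into an (n_samples, n_classes) matrix Y and multiplies it elementwise with pred, so it expects pred to be an (n_samples, n_classes) score matrix, not the (n_samples,) label vector that staged_predict yields. A minimal numpy reproduction of the failing broadcast (the 47 is an assumption here):

```python
import numpy as np

n_samples, n_classes = 12000, 47      # 47 assumed to be the number of crime classes
Y = np.zeros((n_samples, n_classes))  # one-hot labels built inside loss_
pred = np.zeros(n_samples)            # shape of a staged_predict output: (12000,)

try:
    (Y * pred).sum(axis=1)            # the elementwise product loss_ attempts
except ValueError as e:
    print(e)  # operands could not be broadcast together with shapes (12000,47) (12000,)
```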

1 answer:

Answer 0 (score: 0)

You are getting this error because you must pass the result of the staged_decision_function method (not staged_predict) to loss_.

See here: Gradient Boosting regularization

clf = ensemble.GradientBoostingClassifier(**params)
clf.fit(X_train, y_train)

# compute test set deviance
test_deviance = np.zeros((params['n_estimators'],), dtype=np.float64)

for i, y_pred in enumerate(clf.staged_decision_function(X_test)):
    # clf.loss_ assumes that y_test[i] in {0, 1}
    test_deviance[i] = clf.loss_(y_test, y_pred)
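Note that loss_ was a private attribute and has since been removed in recent scikit-learn releases. A version-independent way to track the same staged test error is log_loss on staged_predict_proba. A self-contained sketch on synthetic data (the dataset and parameters below are made up for illustration):

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import log_loss
from sklearn.model_selection import train_test_split

# synthetic multiclass data standing in for the crime dataset
X, y = make_classification(n_samples=400, n_features=10, n_informative=5,
                           n_classes=3, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.4,
                                                    random_state=0)

clf = GradientBoostingClassifier(n_estimators=50, random_state=0)
clf.fit(X_train, y_train)

# per-stage test deviance via predicted probabilities instead of loss_
test_loss = np.zeros(clf.n_estimators)
for i, proba in enumerate(clf.staged_predict_proba(X_test)):
    test_loss[i] = log_loss(y_test, proba)

print(test_loss.shape)  # (50,)
```

This gives one deviance value per boosting stage, which can be plotted against clf.train_score_ just like test_deviance in the snippet above.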