I'm trying to use sklearn's grid search with a model created by xgboost. To do that, I'm building a custom scorer based on the NDCG evaluation metric. I got Snippet 1 working, but it's messy/hacky and I'd rather use good old sklearn to simplify the code. I tried to implement GridSearchCV, and the results are completely off: for the same X and y sets I get NDCG@k = 0.8 with Snippet 1 versus 0.5 with Snippet 2. Obviously there's something I'm not doing right here...
The following two snippets return dramatically different results:
Snippet 1:

import numpy as np
from sklearn.cross_validation import StratifiedKFold
from xgboost import XGBClassifier

# ndcg_scorer is a custom NDCG@k metric function defined elsewhere
kf = StratifiedKFold(y, n_folds=5, shuffle=True, random_state=42)
max_depth = [6]
learning_rate = [0.22]
n_estimators = [43]
reg_alpha = [0.1]
reg_lambda = [10]
for md in max_depth:
    for lr in learning_rate:
        for ne in n_estimators:
            for ra in reg_alpha:
                for rl in reg_lambda:
                    xgb = XGBClassifier(objective='multi:softprob',
                                        max_depth=md,
                                        learning_rate=lr,
                                        n_estimators=ne,
                                        reg_alpha=ra,
                                        reg_lambda=rl,
                                        subsample=0.6, colsample_bytree=0.6, seed=0)
                    print([md, lr, ne])
                    score = []
                    for train_index, test_index in kf:
                        X_train, X_test = X[train_index], X[test_index]
                        y_train, y_test = y[train_index], y[test_index]
                        xgb.fit(X_train, y_train)
                        y_pred = xgb.predict_proba(X_test)
                        score.append(ndcg_scorer(y_test, y_pred))
                    print('all scores: %s' % score)
                    print('average score: %s' % np.mean(score))
Snippet 2:

from sklearn.grid_search import GridSearchCV
from sklearn.metrics import make_scorer

params = {
    'max_depth': [6],
    'learning_rate': [0.22],
    'n_estimators': [43],
    'reg_alpha': [0.1],
    'reg_lambda': [10],
    'subsample': [0.6],
    'colsample_bytree': [0.6]
}
xgb = XGBClassifier(objective='multi:softprob', seed=0)
scorer = make_scorer(ndcg_scorer, needs_proba=True)
gs = GridSearchCV(xgb, params, cv=5, scoring=scorer, verbose=10, refit=False)
gs.fit(X, y)
gs.best_score_
While Snippet 1 gives the results I expect, Snippet 2 returns scores that are inconsistent with ndcg_scorer.
Answer 0 (score: 0)
The problem lies in the cv argument of GridSearchCV(xgb, params, cv=5, scoring=scorer, verbose=10, refit=False). It can receive a KFold/StratifiedKFold object instead of an int. Unlike what the docs say, it seems that by default an int argument does not invoke StratifiedKFold with the settings used in Snippet 1; a different splitter, probably KFold, is used instead.
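The fix above can be sketched by handing the fold object itself to cv. A minimal sketch, with assumptions: it uses the newer sklearn.model_selection imports, synthetic data from make_classification in place of the question's X and y, and a plain DecisionTreeClassifier standing in for XGBClassifier, since only the cv argument is at issue. With the same StratifiedKFold object, GridSearchCV and a manual cross_val_score pass score identical splits:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import GridSearchCV, StratifiedKFold, cross_val_score

# Toy multi-class data standing in for the question's X and y
X, y = make_classification(n_samples=200, n_classes=3, n_informative=6,
                           random_state=42)

# Build the folds explicitly, with the same shuffle/seed as Snippet 1,
# and pass the object to GridSearchCV instead of cv=5
kf = StratifiedKFold(n_splits=5, shuffle=True, random_state=42)

gs = GridSearchCV(DecisionTreeClassifier(random_state=0),
                  {'max_depth': [6]}, cv=kf, refit=False)
gs.fit(X, y)

# The same folds scored by hand: the averages now agree
manual = cross_val_score(DecisionTreeClassifier(max_depth=6, random_state=0),
                         X, y, cv=kf).mean()
print(gs.best_score_, manual)
```

Since shuffle=True with a fixed random_state makes the splits deterministic, both runs see the exact same train/test indices, which is what Snippet 2 was missing relative to Snippet 1.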