What does the GridSearchCV best_score_ attribute mean? (it differs from the mean of the cross-validation score array)

Date: 2015-09-17 14:26:22

标签: machine-learning scikit-learn decision-tree cross-validation grid-search

I'm confused by my results; maybe I haven't properly understood the concepts of cross-validation and GridSearch. I followed the logic behind this article: https://randomforests.wordpress.com/2014/02/02/basics-of-k-fold-cross-validation-and-gridsearchcv-in-scikit-learn/

# Imports assumed by the snippet (scikit-learn 0.16-era API)
import numpy as np
import pandas as pd
from sklearn.cross_validation import KFold
from sklearn.grid_search import GridSearchCV
from sklearn.tree import DecisionTreeClassifier

argd = CommandLineParser(argv)
folder, fname = argd['dir'], argd['fname']

df = pd.read_csv('../../'+folder+'/Results/'+fname, sep=";")

explanatory_variable_columns = set(df.columns.values)
response_variable_column = df['A']
explanatory_variable_columns.remove('A')
y = np.array([1 if e else 0 for e in response_variable_column])

X = df[list(explanatory_variable_columns)].as_matrix()

kf_total = KFold(len(X), n_folds=5, indices=True, shuffle=True, random_state=4)

dt=DecisionTreeClassifier(criterion='entropy')

min_samples_split_range=[x for x in range(1,20)]
dtgs=GridSearchCV(estimator=dt, param_grid=dict(min_samples_split=min_samples_split_range), n_jobs=1)

scores=[dtgs.fit(X[train],y[train]).score(X[test],y[test]) for train, test in kf_total]
# SAME AS DOING: cross_validation.cross_val_score(dtgs, X, y, cv=kf_total, n_jobs = 1)

print scores
print np.mean(scores)
print dtgs.best_score_

Results obtained:

# score [0.81818181818181823, 0.78181818181818186, 0.7592592592592593, 0.7592592592592593, 0.72222222222222221]
# mean score 0.768
# .best_score_ 0.683486238532

Additional note:

I ran it with a different combination of explanatory variables (using only some of them) and got the opposite problem: now .best_score_ is higher than every value in the cross-validation array.

# score [0.74545454545454548, 0.70909090909090911, 0.79629629629629628, 0.7407407407407407, 0.64814814814814814]
# mean score 0.728
# .best_score_ 0.802752293578

1 Answer:

Answer 0 (score: 3):

The code is conflating several things. dtgs.fit(X[train], y[train]) runs an internal 3-fold cross-validation for every parameter combination in param_grid, producing a grid of 19 results (one per min_samples_split value in range(1, 20)), which you can inspect by calling dtgs.grid_scores_.
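A minimal sketch of inspecting that internal grid, using synthetic stand-in data (the question's CSV isn't available). Note this uses the current scikit-learn API, where the estimator moved to sklearn.model_selection and grid_scores_ was replaced by cv_results_, and min_samples_split must now be at least 2:

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import GridSearchCV
from sklearn.tree import DecisionTreeClassifier

# Synthetic stand-in for the question's data
X, y = make_classification(n_samples=200, random_state=4)

dt = DecisionTreeClassifier(criterion='entropy', random_state=0)
# 19 parameter values, mirroring the question's 19-value grid
param_grid = {'min_samples_split': list(range(2, 21))}
dtgs = GridSearchCV(estimator=dt, param_grid=param_grid, cv=3)
dtgs.fit(X, y)

# One mean internal-CV score per parameter combination
print(len(dtgs.cv_results_['mean_test_score']))  # 19
```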

[dtgs.fit(X[train], y[train]).score(X[test], y[test]) for train, test in kf_total] — this line therefore fits the grid search five times and scores each fitted model on the corresponding held-out fold of the outer 5-fold cross-validation. The result is an array of five outer-validation scores.

When you call dtgs.best_score_, you get the best score from the grid of hyperparameter-validation results of the last (fifth) fit only, so there is no reason for it to match the mean of the outer cross-validation array.
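To see the distinction concretely, here is a hedged sketch with synthetic data and the current scikit-learn API: the outer scores come from the five held-out test folds, while best_score_ is the mean internal 3-fold CV score computed on the last training fold only.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import GridSearchCV, KFold
from sklearn.tree import DecisionTreeClassifier

# Synthetic stand-in for the question's data
X, y = make_classification(n_samples=200, random_state=4)
kf = KFold(n_splits=5, shuffle=True, random_state=4)

dtgs = GridSearchCV(
    DecisionTreeClassifier(criterion='entropy', random_state=0),
    param_grid={'min_samples_split': list(range(2, 21))},
    cv=3,
)

# Outer 5-fold loop: refit the grid search per fold, score on the held-out fold
outer_scores = [dtgs.fit(X[tr], y[tr]).score(X[te], y[te]) for tr, te in kf.split(X)]

print(np.mean(outer_scores))   # mean outer-CV score over all 5 folds
print(dtgs.best_score_)        # inner-CV score from the LAST fit only
```

Because the two numbers are computed on different data splits (inner validation folds of one training set vs. outer test folds averaged over five fits), best_score_ can land either above or below the outer mean, exactly as observed in the question.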