I built a simple stacking classifier with mlxtend, tried a few different base classifiers, and ran into an interesting situation. From all my research, stacking classifiers are always supposed to outperform their base classifiers.

In my case, when I cross-validate the stacking classifier on the training set, I get a lower score than some of the base estimators. Moreover, the stacking classifier's mean CV score often comes out equal to the lowest of the base estimators' mean CV scores.

Isn't that odd? Stranger still, once I run GridSearchCV on the stacking classifier, pick the best parameters, retrain on the whole training set, and finally compute accuracy on the test set, I actually get a decent score.

I know this approach is prone to leakage, and that there are various techniques for properly cross-validating a stacked classifier, but they seem very slow, and from my research the approach above seems acceptable. (Regarding this potential leak, the Kaggle stacking guide even says: "In practice everyone ignores this theoretical hole (and frankly I think most people are unaware it even exists!)"; see the parameter-tuning paragraph of http://blog.kaggle.com/2016/12/27/a-kagglers-guide-to-model-stacking-in-practice/)
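One of the "correct but slow" techniques I have in mind is nested cross-validation, where the whole grid search is re-run inside every outer fold. A rough sketch, using a placeholder estimator and grid rather than my actual setup:

from sklearn.model_selection import GridSearchCV, cross_val_score
from sklearn.tree import DecisionTreeClassifier

# Nested CV: the grid search (inner CV) is repeated inside each outer fold,
# so tuned parameters are never evaluated on the data they were tuned on.
# The estimator and grid below are placeholders, not my actual setup.
inner_search = GridSearchCV(DecisionTreeClassifier(), {'max_depth': [3, 5, 10]}, cv=3)
nested_scores = cross_val_score(inner_search, X_train, y_train, cv=5)
print("Nested CV accuracy: %0.2f (+/- %0.2f)" % (nested_scores.mean(), nested_scores.std()))

My actual code: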
import numpy as np

from mlxtend.classifier import StackingCVClassifier
from sklearn import model_selection, preprocessing
from sklearn.model_selection import train_test_split, GridSearchCV
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.naive_bayes import BernoulliNB
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.metrics import accuracy_score, classification_report

RANDOM_SEED = 12

# df imported in a separate code snippet
y = df['y']
X = df.drop(columns=['y'])

# Standardize the features, then hold out a test set
scaler = preprocessing.StandardScaler().fit(X)
X_transformed = scaler.transform(X)
X_train, X_test, y_train, y_test = train_test_split(X_transformed, y, random_state=4)
def gridSearch_clf(clf, param_grid, X_train, y_train):
    gs = GridSearchCV(clf, param_grid).fit(X_train, y_train)
    print("Best Parameters")
    print(gs.best_params_)
    return gs.best_estimator_

def gs_report(y_test, X_test, best_estimator):
    print(classification_report(y_test, best_estimator.predict(X_test)))
    print("Overall Accuracy Score: ")
    print(accuracy_score(y_test, best_estimator.predict(X_test)))
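
# print_cv: prints each classifier's mean/std CV accuracy, roughly following
# the mlxtend docs example (the fold count here is illustrative)
def print_cv(clfs, clf_names):
    for clf, name in zip(clfs, clf_names):
        scores = model_selection.cross_val_score(clf, X_train, y_train,
                                                 cv=3, scoring='accuracy')
        print("Accuracy: %0.2f (+/- %0.2f) [%s]" % (scores.mean(), scores.std(), name))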
lr = LogisticRegression()

np.random.seed(RANDOM_SEED)
# best_clf1, best_clf2, best_clf3 are the already-tuned base estimators
# (a decision tree, a KNN, and a Bernoulli naive Bayes, per the CV output below)
sclf = StackingCVClassifier(classifiers=[best_clf1, best_clf2, best_clf3],
                            meta_classifier=lr)

clfs = [best_clf1, best_clf2, best_clf3, sclf]
clf_names = [i.__class__.__name__ for i in clfs]

print_cv(clfs, clf_names)
Accuracy: 0.68 (+/- 0.30) [Decision Tree Classifier]
Accuracy: 0.55 (+/- 0.26) [K Neighbors Classifier]
Accuracy: 0.67 (+/- 0.32) [Bernoulli Naive Bayes]
Accuracy: 0.55 (+/- 0.26) [StackingClassifier]
## StackingClassifier Accuracy = KNN Classifier Accuracy
param_grid = {'meta-logisticregression__C': np.logspace(-2, 3, num=6, base=10)}

best_sclf = gridSearch_clf(sclf, param_grid, X_train, y_train)
gs_report(y_test, X_test, best_sclf)
Best Parameters
{'meta-logisticregression__C': 0.1}

             precision    recall  f1-score   support

          0       0.91      0.99      0.95      9131
          1       0.68      0.22      0.33      1166

avg / total       0.88      0.90      0.88     10297

Overall Accuracy Score:
0.9000679809653297
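
For scale, the test split is quite imbalanced (9131 vs. 1166), so the majority class alone already accounts for roughly 9131/10297 ≈ 0.887 accuracy. That can be sanity-checked with a dummy baseline like this:

from sklearn.dummy import DummyClassifier

# Majority-class baseline, for comparison with the 0.90 test accuracy above
baseline = DummyClassifier(strategy='most_frequent').fit(X_train, y_train)
print(accuracy_score(y_test, baseline.predict(X_test)))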