Why does CalibratedClassifierCV underperform a direct classifier?

Asked: 2015-05-17 09:48:19

Tags: python scikit-learn

I noticed that sklearn's new CalibratedClassifierCV seems to underperform the direct base_estimator when the base_estimator is GradientBoostingClassifier (I haven't tested other classifiers). Interestingly, if make_classification's parameters are:

n_features = 10
n_informative = 3
n_classes = 2

then CalibratedClassifierCV seems to be the slight outperformer (log loss evaluation).
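For reference, a minimal call producing that easier setting might look like the sketch below; note the n_samples value is my assumption, since the question does not state it:

from sklearn.datasets import make_classification

X, y = make_classification(n_samples=1000,  # assumed; not given above
                           n_features=10,
                           n_informative=3,
                           n_classes=2,
                           random_state=0)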

However, with the classification dataset below, CalibratedClassifierCV generally seems to be the underperformer:

from sklearn.datasets import make_classification
from sklearn import ensemble
from sklearn.calibration import CalibratedClassifierCV
from sklearn.metrics import log_loss
from sklearn import cross_validation
# Build a classification task using 30 informative features

X, y = make_classification(n_samples=1000,
                           n_features=100,
                           n_informative=30,
                           n_redundant=0,
                           n_repeated=0,
                           n_classes=9,
                           random_state=0,
                           shuffle=False)

skf = cross_validation.StratifiedShuffleSplit(y, 5)

for train, test in skf:

    X_train, X_test = X[train], X[test]
    y_train, y_test = y[train], y[test]

    clf = ensemble.GradientBoostingClassifier(n_estimators=100)
    clf_cv = CalibratedClassifierCV(clf, cv=3, method='isotonic')
    clf_cv.fit(X_train, y_train)
    probas_cv = clf_cv.predict_proba(X_test)
    cv_score = log_loss(y_test, probas_cv)

    clf = ensemble.GradientBoostingClassifier(n_estimators=100)
    clf.fit(X_train, y_train)
    probas = clf.predict_proba(X_test)
    clf_score = log_loss(y_test, probas) 

    print 'calibrated score:', cv_score
    print 'direct clf score:', clf_score
    print

One run produced:

[Image: calibrated vs. direct log-loss scores from one run]

Maybe I'm missing something about how CalibratedClassifierCV works, or I'm not using it correctly, but I was under the impression that, if anything, passing a classifier to CalibratedClassifierCV would result in improved performance relative to the base_estimator alone.

Can anyone explain this observed underperformance?

3 Answers:

Answer 0 (score: 8):

Probability calibration itself requires cross-validation, so CalibratedClassifierCV trains one calibrated classifier per fold (in this case using StratifiedKFold) and, when you call predict_proba(), takes the mean of the predicted probabilities from each classifier. This can explain the effect.
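A minimal sketch of that mechanism for a binary problem (an approximation of the idea, not sklearn's actual implementation; the function name is mine):

import numpy as np
from sklearn import ensemble
from sklearn.cross_validation import StratifiedKFold
from sklearn.isotonic import IsotonicRegression

def calibrated_cv_sketch(X_train, y_train, X_test, cv=3):
    # Roughly what CalibratedClassifierCV(cv=cv, method='isotonic') does
    # for a binary task: one (base model, calibrator) pair per fold.
    fold_probas = []
    for fit_idx, cal_idx in StratifiedKFold(y_train, cv):
        # The base model sees only (cv-1)/cv of the training data...
        clf = ensemble.GradientBoostingClassifier(n_estimators=100)
        clf.fit(X_train[fit_idx], y_train[fit_idx])
        # ...and the calibrator is fit only on the held-out fold.
        iso = IsotonicRegression(out_of_bounds='clip')
        iso.fit(clf.predict_proba(X_train[cal_idx])[:, 1], y_train[cal_idx])
        fold_probas.append(iso.predict(clf.predict_proba(X_test)[:, 1]))
    # predict_proba() then averages over the per-fold calibrated models.
    return np.mean(fold_probas, axis=0)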

My hypothesis is that if the training set is small relative to the number of features and classes, the reduced training set for each sub-classifier hurts performance and the ensembling does not make up for it (or makes it worse). Also, the GradientBoostingClassifier may already provide pretty good probability estimates from the start, since its loss function is optimized for probability estimation.

If that's correct, ensembling classifiers the same way CalibratedClassifierCV does, but without calibration, should be worse than the single classifier. Also, the effect should vanish when using a larger number of folds for calibration.

To test this, I extended your script to increase the number of folds and include the ensembled classifier without calibration, and I was able to confirm my predictions. A 10-fold calibrated classifier always performed better than the single classifier, and the uncalibrated ensemble was significantly worse. In my run, the 3-fold calibrated classifier also did not really perform worse than the single classifier, so this may be an unstable effect as well. These are the detailed results on the same dataset:

[Image: Log-loss results from cross-validation]

This is my experiment code:

import numpy as np
from sklearn.datasets import make_classification
from sklearn import ensemble
from sklearn.calibration import CalibratedClassifierCV
from sklearn.metrics import log_loss
from sklearn import cross_validation

X, y = make_classification(n_samples=1000,
                           n_features=100,
                           n_informative=30,
                           n_redundant=0,
                           n_repeated=0,
                           n_classes=9,
                           random_state=0,
                           shuffle=False)

skf = cross_validation.StratifiedShuffleSplit(y, 5)

for train, test in skf:

    X_train, X_test = X[train], X[test]
    y_train, y_test = y[train], y[test]

    clf = ensemble.GradientBoostingClassifier(n_estimators=100)
    clf_cv = CalibratedClassifierCV(clf, cv=3, method='isotonic')
    clf_cv.fit(X_train, y_train)
    probas_cv = clf_cv.predict_proba(X_test)
    cv_score = log_loss(y_test, probas_cv)
    print 'calibrated score (3-fold):', cv_score


    clf = ensemble.GradientBoostingClassifier(n_estimators=100)
    clf_cv = CalibratedClassifierCV(clf, cv=10, method='isotonic')
    clf_cv.fit(X_train, y_train)
    probas_cv = clf_cv.predict_proba(X_test)
    cv_score = log_loss(y_test, probas_cv)
    print 'calibrated score (10-fold:)', cv_score

    # Train 3 classifiers on sub-folds of the training set and average their probabilities
    skf2 = cross_validation.StratifiedKFold(y_train, 3)
    probas_list = []
    for sub_train, sub_test in skf2:
        X_sub_train, X_sub_test = X_train[sub_train], X_train[sub_test]
        y_sub_train, y_sub_test = y_train[sub_train], y_train[sub_test]
        clf = ensemble.GradientBoostingClassifier(n_estimators=100)
        clf.fit(X_sub_train, y_sub_train)
        probas_list.append(clf.predict_proba(X_test))
    probas = np.mean(probas_list, axis=0)
    clf_ensemble_score = log_loss(y_test, probas)
    print 'uncalibrated ensemble clf (3-fold) score:', clf_ensemble_score

    clf = ensemble.GradientBoostingClassifier(n_estimators=100)
    clf.fit(X_train, y_train)
    probas = clf.predict_proba(X_test)
    score = log_loss(y_test, probas)
    print 'direct clf score:', score
    print

Answer 1 (score: 5):

There are a couple of issues with the isotonic regression method (and its implementation in sklearn) that make it a suboptimal choice for calibration.

Specifically:

1) It fits a piecewise constant function rather than a smoothly varying curve for the calibration function (see the short illustration after this list).

2) The cross-validation averages the models/calibrations it obtains from each fold. However, each of those models is still fit and calibrated only on its respective fold.
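To make issue 1) concrete, here is a small self-contained illustration (the toy data is mine) of how sklearn's IsotonicRegression produces flat plateaus rather than a smooth curve:

import numpy as np
from sklearn.isotonic import IsotonicRegression

# Noisy monotone data: isotonic regression recovers a step function.
rng = np.random.RandomState(0)
x = np.sort(rng.uniform(0, 1, 50))
y = np.clip(x + 0.2 * rng.randn(50), 0, 1)

iso = IsotonicRegression(out_of_bounds='clip').fit(x, y)
fitted = iso.transform(x)
# Far fewer distinct levels than points: the fit is piecewise constant.
print(len(np.unique(fitted)), 'distinct levels for', len(x), 'points')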

Often, a better choice is the SplineCalibratedClassifierCV class in the ML-insights package (disclaimer: I am an author of that package). The package's github repo is here.

It has the following advantages:

1) It fits a cubic smoothing spline rather than a piecewise constant function.

2) It uses the entire (cross-validated) answer set for calibration and refits the base model on the full data set. Thus, both the calibration function and the base model are effectively trained on the full data set (a sketch of this idea follows).
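A rough sketch of the idea behind 2) for a binary problem: pool the out-of-fold predictions, fit a single calibration map on all of them, and refit the base model on the full training set. Here I approximate the package's spline with isotonic regression, and the function name is my own:

import numpy as np
from sklearn import ensemble
from sklearn.cross_validation import StratifiedKFold
from sklearn.isotonic import IsotonicRegression

def pooled_calibration_sketch(X_train, y_train, cv=3):
    # Out-of-fold probability predictions for every training sample.
    oof = np.zeros(len(y_train))
    for fit_idx, oof_idx in StratifiedKFold(y_train, cv):
        clf = ensemble.GradientBoostingClassifier(n_estimators=100)
        clf.fit(X_train[fit_idx], y_train[fit_idx])
        oof[oof_idx] = clf.predict_proba(X_train[oof_idx])[:, 1]
    # One calibration map, fit on the pooled out-of-fold predictions...
    calibrator = IsotonicRegression(out_of_bounds='clip').fit(oof, y_train)
    # ...and one base model, refit on the full training set.
    base = ensemble.GradientBoostingClassifier(n_estimators=100)
    base.fit(X_train, y_train)
    # Predict with: calibrator.predict(base.predict_proba(X)[:, 1])
    return base, calibrator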

You can see examples of comparisons here and here.

From the first example, here is a plot that shows the binned probabilities of a training set (red dots) and an independent test set (green + signs), along with the calibrations computed by the ML-insights spline method (blue line) and the isotonic sklearn method (grey dots/line).

[Image: Spline vs Isotonic Calibration]

I modified your code to compare the methods (and increased the number of samples). It shows that the spline approach typically performs better (as do the examples I linked above).

Here are the code and the results:

Code (you must pip install ml_insights first):

import numpy as np
from sklearn.datasets import make_classification
from sklearn import ensemble
from sklearn.calibration import CalibratedClassifierCV
from sklearn.metrics import log_loss
from sklearn import cross_validation
import ml_insights as mli

X, y = make_classification(n_samples=10000,
                           n_features=100,
                           n_informative=30,
                           n_redundant=0,
                           n_repeated=0,
                           n_classes=9,
                           random_state=0,
                           shuffle=False)

skf = cross_validation.StratifiedShuffleSplit(y, 5)

for train, test in skf:

    X_train, X_test = X[train], X[test]
    y_train, y_test = y[train], y[test]

    # Spline calibration (ML-insights), 3-fold
    clf = ensemble.GradientBoostingClassifier(n_estimators=100)
    clf_cv_mli = mli.SplineCalibratedClassifierCV(clf, cv=3)
    clf_cv_mli.fit(X_train, y_train)
    probas_cv_mli = clf_cv_mli.predict_proba(X_test)
    cv_score_mli = log_loss(y_test, probas_cv_mli)

    # Isotonic calibration (sklearn), 3-fold
    clf = ensemble.GradientBoostingClassifier(n_estimators=100)
    clf_cv = CalibratedClassifierCV(clf, cv=3, method='isotonic')
    clf_cv.fit(X_train, y_train)
    probas_cv = clf_cv.predict_proba(X_test)
    cv_score = log_loss(y_test, probas_cv)

    # Uncalibrated baseline
    clf = ensemble.GradientBoostingClassifier(n_estimators=100)
    clf.fit(X_train, y_train)
    probas = clf.predict_proba(X_test)
    clf_score = log_loss(y_test, probas)

    # Spline calibration (ML-insights), 10-fold
    clf = ensemble.GradientBoostingClassifier(n_estimators=100)
    clf_cv_mli = mli.SplineCalibratedClassifierCV(clf, cv=10)
    clf_cv_mli.fit(X_train, y_train)
    probas_cv_mli = clf_cv_mli.predict_proba(X_test)
    cv_score_mli_10 = log_loss(y_test, probas_cv_mli)

    # Isotonic calibration (sklearn), 10-fold
    clf = ensemble.GradientBoostingClassifier(n_estimators=100)
    clf_cv = CalibratedClassifierCV(clf, cv=10, method='isotonic')
    clf_cv.fit(X_train, y_train)
    probas_cv = clf_cv.predict_proba(X_test)
    cv_score_10 = log_loss(y_test, probas_cv)

    print('\nuncalibrated score: {}'.format(clf_score))
    print('\ncalibrated score isotonic-sklearn (3-fold): {}'.format(cv_score))
    print('calibrated score mli (3-fold): {}'.format(cv_score_mli))
    print('\ncalibrated score isotonic-sklearn (10-fold): {}'.format(cv_score_10))
    print('calibrated score mli (10-fold): {}\n'.format(cv_score_mli_10))

Results:

uncalibrated score: 1.4475396740876696

calibrated score isotonic-sklearn (3-fold): 1.465140552847886
calibrated score mli (3-fold): 1.3651638065446683

calibrated score isotonic-sklearn (10-fold): 1.4158622673607426
calibrated score mli (10-fold): 1.3620771116522705

uncalibrated score: 1.5097320476479625

calibrated score isotonic-sklearn (3-fold): 1.5189534673089442
calibrated score mli (3-fold): 1.4386253950100405

calibrated score isotonic-sklearn (10-fold): 1.4976505139437257
calibrated score mli (10-fold): 1.4408912879989917

uncalibrated score: 1.4654527691892194

calibrated score isotonic-sklearn (3-fold): 1.493355643575107
calibrated score mli (3-fold): 1.388789694535648

calibrated score isotonic-sklearn (10-fold): 1.419760490609242
calibrated score mli (10-fold): 1.3830851694161692

uncalibrated score: 1.5163851866969407

calibrated score isotonic-sklearn (3-fold): 1.5532628847926322
calibrated score mli (3-fold): 1.459797287154743

calibrated score isotonic-sklearn (10-fold): 1.4748100659449732
calibrated score mli (10-fold): 1.4620173012979816

uncalibrated score: 1.4760935523959617

calibrated score isotonic-sklearn (3-fold): 1.469434735152088
calibrated score mli (3-fold): 1.402024502986732

calibrated score isotonic-sklearn (10-fold): 1.4702032019673137
calibrated score mli (10-fold): 1.3983943648572212

Answer 2 (score: 1):

The point of using a calibrated classifier is to produce a probability prediction that behaves somewhat more smoothly than that of a regular classifier. It is not to improve your base estimator's performance.

So there is no guarantee that the probabilities or the log loss will be the same (same neighborhood, but not the same). But if you plot your samples together with their probabilities, you will probably see a much nicer distribution.

What will mostly be conserved is the number of samples above and below the decision threshold (0.5); see the illustration below.
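A self-contained binary illustration of both points, using sklearn's calibration_curve (the dataset and all numbers here are my own, not from the question):

import numpy as np
from sklearn.datasets import make_classification
from sklearn.cross_validation import train_test_split
from sklearn.calibration import CalibratedClassifierCV, calibration_curve
from sklearn import ensemble

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

raw = ensemble.GradientBoostingClassifier().fit(X_tr, y_tr)
cal = CalibratedClassifierCV(ensemble.GradientBoostingClassifier(),
                             cv=3, method='isotonic').fit(X_tr, y_tr)

p_raw = raw.predict_proba(X_te)[:, 1]
p_cal = cal.predict_proba(X_te)[:, 1]

# Reliability: mean gap between observed positive rate and predicted
# probability per bin (smaller means a better-behaved distribution).
for name, p in [('raw', p_raw), ('calibrated', p_cal)]:
    frac_pos, mean_pred = calibration_curve(y_te, p, n_bins=10)
    print(name, np.abs(frac_pos - mean_pred).mean())

# The split around the 0.5 decision threshold is largely conserved:
print((p_raw > 0.5).sum(), (p_cal > 0.5).sum())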