How to make SGDClassifier reflect uncertainty

Date: 2014-04-11 02:42:11

Tags: python scikit-learn

How do you get scikit-learn's SGDClassifier to express uncertainty in its predictions?

I'm trying to confirm that SGDClassifier reports a 50% probability for input data that does not strictly correspond to either label. However, I find that the classifier is always 100% certain.

I'm testing this with the following script:

from sklearn.linear_model import SGDClassifier

c = SGDClassifier(loss="log")
#c = SGDClassifier(loss="modified_huber")

X = [
    # always -1
    [1,0,0],
    [1,0,0],
    [1,0,0],
    [1,0,0],

    # always +1
    [0,0,1],
    [0,0,1],
    [0,0,1],
    [0,0,1],

    # uncertain
    [0,1,0],
    [0,1,0],
    [0,1,0],
    [0,1,0],
    [0,1,0],
    [0,1,0],
    [0,1,0],
    [0,1,0],
]
y = [
    -1,
    -1,
    -1,
    -1,
    +1,
    +1,
    +1,
    +1,

    -1,
    +1,
    -1,
    +1,
    -1,
    +1,
    -1,
    +1,
]

def lookup_prob_class(c, dist):
    # Return the highest-probability class and its probability for one sample.
    a = sorted(zip(dist, c.classes_))
    best_prob, best_class = a[-1]
    return best_prob, best_class

c.fit(X, y)

probs = c.predict_proba(X)
print('probs:')
for dist, true_value in zip(probs, y):
    prob, value = lookup_prob_class(c, dist)
    # predicted probability, predicted label, true label
    print('%.02f' % prob, value, true_value)

As you can see, my training data always associates -1 with the input [1,0,0], +1 with [0,0,1], and splits [0,1,0] 50/50.

I therefore expect predict_proba() to return 0.5 for the input [0,1,0]. Instead, it reports a probability of 100%. Why is this, and how do I fix it?

Interestingly, swapping SGDClassifier out for DecisionTreeClassifier or RandomForestClassifier produces the output I expect.
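
For reference, here is a minimal sketch of that swap, reusing the X and y defined above; RandomForestClassifier averages the trees' predicted class probabilities, so the ambiguous input comes out near 0.5 (the n_estimators value is just an illustrative choice):

from sklearn.ensemble import RandomForestClassifier

# Same data as the script above; n_estimators=100 is an illustrative choice.
rf = RandomForestClassifier(n_estimators=100)
rf.fit(X, y)
print(rf.predict_proba([[0, 1, 0]]))  # roughly [[0.5, 0.5]]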

1 Answer:

Answer 0 (score: 4)

It does show some uncertainty:

>>> c.predict_proba(X)
array([[  9.97254333e-01,   2.74566740e-03],
       [  9.97254333e-01,   2.74566740e-03],
       [  9.97254333e-01,   2.74566740e-03],
       [  9.97254333e-01,   2.74566740e-03],
       [  1.61231111e-06,   9.99998388e-01],
       [  1.61231111e-06,   9.99998388e-01],
       [  1.61231111e-06,   9.99998388e-01],
       [  1.61231111e-06,   9.99998388e-01],
       [  1.24171982e-04,   9.99875828e-01],
       [  1.24171982e-04,   9.99875828e-01],
       [  1.24171982e-04,   9.99875828e-01],
       [  1.24171982e-04,   9.99875828e-01],
       [  1.24171982e-04,   9.99875828e-01],
       [  1.24171982e-04,   9.99875828e-01],
       [  1.24171982e-04,   9.99875828e-01],
       [  1.24171982e-04,   9.99875828e-01]])

If you want the model to be more uncertain, you have to regularize it more strongly. This is done by tuning the alpha parameter:
>>> c = SGDClassifier(loss="log", alpha=1)
>>> c.fit(X, y)
SGDClassifier(alpha=1, class_weight=None, epsilon=0.1, eta0=0.0,
       fit_intercept=True, l1_ratio=0.15, learning_rate='optimal',
       loss='log', n_iter=5, n_jobs=1, penalty='l2', power_t=0.5,
       random_state=None, shuffle=False, verbose=0, warm_start=False)
>>> c.predict_proba(X)
array([[ 0.58782817,  0.41217183],
       [ 0.58782817,  0.41217183],
       [ 0.58782817,  0.41217183],
       [ 0.58782817,  0.41217183],
       [ 0.53000442,  0.46999558],
       [ 0.53000442,  0.46999558],
       [ 0.53000442,  0.46999558],
       [ 0.53000442,  0.46999558],
       [ 0.55579239,  0.44420761],
       [ 0.55579239,  0.44420761],
       [ 0.55579239,  0.44420761],
       [ 0.55579239,  0.44420761],
       [ 0.55579239,  0.44420761],
       [ 0.55579239,  0.44420761],
       [ 0.55579239,  0.44420761],
       [ 0.55579239,  0.44420761]])

alpha is a penalty on large feature weights, so the higher alpha is, the less the weights are allowed to grow, the less extreme the linear model's values become, and the closer the logistic probability estimates get to ½. This parameter is usually tuned using cross-validation.
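
For completeness, a minimal sketch of that cross-validation step, reusing the X and y from the question; the alpha grid and cv=4 below are illustrative choices, not part of the original answer:

from sklearn.linear_model import SGDClassifier
from sklearn.model_selection import GridSearchCV

# Illustrative grid; note that in older scikit-learn versions GridSearchCV
# lives in sklearn.grid_search rather than sklearn.model_selection.
params = {"alpha": [1e-4, 1e-2, 1e-1, 1, 10]}
search = GridSearchCV(SGDClassifier(loss="log"), params, cv=4)
search.fit(X, y)
print(search.best_params_)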