I'm using the Support Vector Classifier from sklearn on the Iris dataset. When I call decision_function it returns negative values. But after classification, all samples in the test dataset are assigned the correct class. I thought decision_function should return a positive value when the sample is an inlier and a negative value when the sample is an outlier. Where am I wrong?
from sklearn import datasets
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split
iris = datasets.load_iris()
X = iris.data[:,:]
y = iris.target
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=.3,
random_state=0)
clf = SVC(probability=True)
print(clf.fit(X_train,y_train).decision_function(X_test))
print(clf.predict(X_test))
print(y_test)
This is the output:
[[-0.76231668 -1.03439531 -1.40331645]
[-1.18273287 -0.64851109 1.50296097]
[ 1.10803774 1.05572833 0.12956269]
[-0.47070432 -1.08920859 -1.4647051 ]
[ 1.18767563 1.12670665 0.21993744]
[-0.48277866 -0.98796232 -1.83186272]
[ 1.25020033 1.13721691 0.15514536]
[-1.07351583 -0.84997114 0.82303659]
[-1.04709616 -0.85739411 0.64601611]
[-1.23148923 -0.69072989 1.67459938]
[-0.77524787 -1.00939817 -1.08441968]
[-1.12212245 -0.82394879 1.11615504]
[-1.14646662 -0.91238712 0.80454974]
[-1.13632316 -0.8812114 0.80171542]
[-1.14881866 -0.95169643 0.61906248]
[ 1.15821271 1.10902205 0.22195304]
[-1.19311709 -0.93149873 0.78649126]
[-1.21653084 -0.90953622 0.78904491]
[ 1.16829526 1.12102515 0.20604678]
[ 1.18446364 1.1080255 0.15199149]
[-0.93911991 -1.08150089 -0.8026332 ]
[-1.15462733 -0.95603159 0.5713605 ]
[ 0.93278883 0.99763184 0.34033663]
[ 1.10999556 1.04596018 0.14791409]
[-1.07285663 -1.01864255 -0.10701465]
[ 1.21200422 1.01284263 0.0416991 ]
[ 0.9462457 1.01076579 0.36620915]
[-1.2108146 -0.79124775 1.43264808]
[-1.02747495 -0.25741977 1.13056021]
...
[ 1.16066886 1.11212424 0.22506538]]
[2 1 0 2 0 2 0 1 1 1 2 1 1 1 1 0 1 1 0 0 2 1 0 0 2 0 0 1 1 0 2 1 0 2 2 1 0
2 1 1 2 0 2 0 0]
[2 1 0 2 0 2 0 1 1 1 2 1 1 1 1 0 1 1 0 0 2 1 0 0 2 0 0 1 1 0 2 1 0 2 2 1 0
1 1 1 2 0 2 0 0]
Answer 0 (score: 2)
You need to consider the decision_function and the prediction separately. The decision value is the distance from the hyperplane to the sample. That means by looking at the sign you can tell which side of the hyperplane the sample lies on. So negative values are perfectly fine and indicate the negative class ("the other side of the hyperplane").
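As a minimal sketch of this (binary case, with toy data made up for illustration), the sign of decision_function just picks the side of the hyperplane, and predict follows that sign:

```python
import numpy as np
from sklearn.svm import SVC

# Toy, linearly separable 1-D data (made up for this sketch)
X = np.array([[0.0], [1.0], [2.0], [8.0], [9.0], [10.0]])
y = np.array([0, 0, 0, 1, 1, 1])

clf = SVC(kernel='linear').fit(X, y)
dec = clf.decision_function(X)

# Negative distance -> class-0 side, positive distance -> class-1 side
print(dec)
print(clf.predict(X))
print((dec > 0).astype(int))  # agrees with predict in the binary case
```

Samples on the class-0 side get negative values even though they are classified correctly, which is exactly what you observed.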
With the Iris dataset you have a multi-class problem. Since the SVM is a binary classifier, there is no inherent multi-class classification; the "one-vs-rest" (OvR) and "one-vs-one" (OvO) approaches construct a multi-class classifier out of binary "units".

Now that you already know OvR, OvO is not that hard to grasp. You basically build one classifier for every pair of classes (A, B). In your case: 0 vs 1, 0 vs 2, 1 vs 2.

Note: the values for (A, B) and (B, A) can be obtained from a single binary classifier. You only change what is considered the positive class, so you have to invert the sign.
Doing this gives you the following matrix:
+-------+------+-------+-------+
| A / B | #0 | #1 | #2 |
+-------+------+-------+-------+
| | | | |
| #0 | -- | -1.18 | -0.64 |
| | | | |
| #1 | 1.18 | -- | 1.50 |
| | | | |
| #2 | 0.64 | -1.50 | -- |
+-------+------+-------+-------+
Read this as: the decision function value when class A (row) competes against class B (column).

To extract a result, a vote is cast. In its basic form you can imagine this as a single vote each classifier can give: Yes or No. Since this could lead to ties, we use the whole decision function values instead.
+-------+------+-------+-------+-------+
| A / B | #0 | #1 | #2 | SUM |
+-------+------+-------+-------+-------+
| | | | | |
| #0 | - | -1.18 | -0.64 | -1.82 |
| | | | | |
| #1 | 1.18 | - | 1.50 | 2.68 |
| | | | | |
| #2    | 0.64 | -1.50 |   -   | -0.86 |
+-------+------+-------+-------+-------+
The resulting column gives you a vector again, [-1.82, 2.68, -0.86]. Now apply arg max to it and it matches your prediction.
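For the first test sample above, the whole procedure can be sketched in a few lines (the OvO column order 0 vs 1, 0 vs 2, 1 vs 2 is an assumption matching sklearn's lexicographic ordering):

```python
import numpy as np

# OvO decision values of one sample, assumed order: 0 vs 1, 0 vs 2, 1 vs 2
dec = np.array([-1.18273287, -0.64851109, 1.50296097])

# Antisymmetric matrix M[a, b]: decision value of "class a vs class b";
# the (B, A) entry is just the sign-flipped (A, B) entry.
M = np.zeros((3, 3))
M[0, 1], M[0, 2], M[1, 2] = dec
M[1, 0], M[2, 0], M[2, 1] = -dec

scores = M.sum(axis=1)    # row sums: roughly [-1.83, 2.69, -0.85]
print(np.argmax(scores))  # -> 1, the predicted class
```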
I keep this section to avoid further confusion. The scikit-learn SVC classifier (libsvm) has a decision_function_shape parameter, which deceived me into thinking it was OvR (I am using liblinear most of the time).
For a real OvR response you would get one value from each classifier's decision function, e.g.

[-1.18273287 -0.64851109 1.50296097]

To obtain a prediction from it you would just apply arg max, which returns the last index, with a value of 1.50296097. From this point on, the decision function's value is no longer needed (for this single prediction). That's why you noticed that your predictions are fine.
However, you also specified probability=True, which takes the value of the decision_function and passes it through a sigmoid function. Same principle as above, but now you also get confidence values between 0 and 1 (I prefer this term over probability, since it only describes the distance to the hyperplane).
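Roughly speaking, a sigmoid squashes each distance into (0, 1). Note that sklearn actually uses Platt scaling, with sigmoid coefficients fitted via internal cross-validation, so the plain sigmoid below only illustrates the idea and is not what probability=True computes exactly:

```python
import numpy as np

def sigmoid(x):
    # Maps any real-valued distance into the open interval (0, 1)
    return 1.0 / (1.0 + np.exp(-x))

# Decision values of the second test sample from the question
f = np.array([-1.18273287, -0.64851109, 1.50296097])
print(sigmoid(f))  # each value now lies between 0 and 1
```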
Edit: Oops, sascha is right. LibSVM uses one-vs-one (despite the shape of the decision function).
Answer 1 (score: 2)
Christopher is correct, but he assumes OvR here. You are actually using the OvO scheme without noticing it!

Below is an example that demonstrates this. The OvO prediction theory comes from the references linked in the code comments.

Code:
from sklearn import datasets
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split
import numpy as np
iris = datasets.load_iris()
X = iris.data[:,:]
y = iris.target
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=.3,
random_state=0)
clf = SVC(decision_function_shape='ovo') # EXPLICIT OVO-usage!
clf.fit(X, y)
def predict(dec):
    # OVO prediction-scheme
    # hardcoded for 3 classes!
    # OVO order assumption: 0 vs 1; 0 vs 2; 1 vs 2 (lexicographic!)
    # theory: http://www.stat.ucdavis.edu/~chohsieh/teaching/ECS289G_Fall2015/lecture9.pdf page 18
    # and: http://www.mit.edu/~9.520/spring09/Classes/multiclass.pdf page 8
    class0 = dec[0] + dec[1]
    class1 = -dec[0] + dec[2]
    class2 = -dec[1] - dec[2]
    return np.argmax([class0, class1, class2])

dec_vals = clf.decision_function(X_test)
pred_vals = clf.predict(X_test)
pred_vals_own = np.array([predict(x) for x in dec_vals])

for i in range(len(X_test)):
    print('decision_function vals : ', dec_vals[i])
    print('sklearns prediction : ', pred_vals[i])
    print('own prediction using dec: ', pred_vals_own[i])
Output:
decision_function vals : [-0.76867027 -1.04536032 -1.60216452]
sklearns prediction : 2
own prediction using dec: 2
decision_function vals : [-1.19939987 -0.64932285 1.6951256 ]
sklearns prediction : 1
own prediction using dec: 1
decision_function vals : [ 1.11946664 1.05573131 0.06261988]
sklearns prediction : 0
own prediction using dec: 0
decision_function vals : [-0.46107656 -1.09842529 -1.50671611]
sklearns prediction : 2
own prediction using dec: 2
decision_function vals : [ 1.2094164 1.12827802 0.1415261 ]
sklearns prediction : 0
own prediction using dec: 0
decision_function vals : [-0.47736819 -0.99988924 -2.15027278]
sklearns prediction : 2
own prediction using dec: 2
decision_function vals : [ 1.25467104 1.13814461 0.07643985]
sklearns prediction : 0
own prediction using dec: 0
decision_function vals : [-1.07557745 -0.87436887 0.93179222]
sklearns prediction : 1
own prediction using dec: 1
decision_function vals : [-1.05047139 -0.88027404 0.80181305]
sklearns prediction : 1
own prediction using dec: 1
decision_function vals : [-1.24310627 -0.70058067 1.906847 ]
sklearns prediction : 1
own prediction using dec: 1
decision_function vals : [-0.78440125 -1.00630434 -0.99963088]
sklearns prediction : 2
own prediction using dec: 2
decision_function vals : [-1.12586024 -0.84193093 1.25542752]
sklearns prediction : 1
own prediction using dec: 1
decision_function vals : [-1.15639222 -0.91555677 1.07438865]
sklearns prediction : 1
own prediction using dec: 1
decision_function vals : [-1.14345638 -0.90050709 0.95795276]
sklearns prediction : 1
own prediction using dec: 1
decision_function vals : [-1.15790163 -0.95844647 0.83046875]
sklearns prediction : 1
own prediction using dec: 1
decision_function vals : [ 1.17805731 1.11063472 0.1333462 ]
sklearns prediction : 0
own prediction using dec: 0
decision_function vals : [-1.20283096 -0.93961585 0.98410451]
sklearns prediction : 1
own prediction using dec: 1
decision_function vals : [-1.22782802 -0.90725712 1.05316513]
sklearns prediction : 1
own prediction using dec: 1
decision_function vals : [ 1.16903803 1.12221984 0.11367107]
sklearns prediction : 0
own prediction using dec: 0
decision_function vals : [ 1.17145967 1.10832227 0.08212776]
sklearns prediction : 0
own prediction using dec: 0
decision_function vals : [-0.9506135 -1.08467062 -0.79851794]
sklearns prediction : 2
own prediction using dec: 2
decision_function vals : [-1.16266048 -0.9573001 0.79179457]
sklearns prediction : 1
own prediction using dec: 1
decision_function vals : [ 0.99991983 0.99976567 0.27258784]
sklearns prediction : 0
own prediction using dec: 0
decision_function vals : [ 1.14009372 1.04646327 0.05173163]
sklearns prediction : 0
own prediction using dec: 0
decision_function vals : [-1.08080806 -1.03404209 -0.06411027]
sklearns prediction : 2
own prediction using dec: 2
decision_function vals : [ 1.23515997 1.01235174 -0.03884014]
sklearns prediction : 0
own prediction using dec: 0
decision_function vals : [ 0.99958361 1.0123953 0.31647776]
sklearns prediction : 0
own prediction using dec: 0
decision_function vals : [-1.21958703 -0.8018796 1.67844367]
sklearns prediction : 1
own prediction using dec: 1
decision_function vals : [-1.03327108 -0.25946619 1.1567434 ]
sklearns prediction : 1
own prediction using dec: 1
decision_function vals : [ 1.12368215 1.11169071 0.20956223]
sklearns prediction : 0
own prediction using dec: 0
decision_function vals : [-0.82416303 -1.07792277 -1.1580516 ]
sklearns prediction : 2
own prediction using dec: 2
decision_function vals : [-1.13071754 -0.96096255 0.65828256]
sklearns prediction : 1
own prediction using dec: 1
decision_function vals : [ 1.194643 1.12966124 0.15746621]
sklearns prediction : 0
own prediction using dec: 0
decision_function vals : [-1.04070512 -1.04532308 -0.20319486]
sklearns prediction : 2
own prediction using dec: 2
decision_function vals : [-0.70170723 -1.09340841 -1.9323473 ]
sklearns prediction : 2
own prediction using dec: 2
decision_function vals : [-1.24655214 -0.74489305 1.15450078]
sklearns prediction : 1
own prediction using dec: 1
decision_function vals : [ 0.99984598 1.03781258 0.2790073 ]
sklearns prediction : 0
own prediction using dec: 0
decision_function vals : [-0.99993896 -1.06846079 -0.44496083]
sklearns prediction : 2
own prediction using dec: 2
decision_function vals : [-1.22495071 -0.83041964 1.41965874]
sklearns prediction : 1
own prediction using dec: 1
decision_function vals : [-1.286798 -0.72689128 1.72244026]
sklearns prediction : 1
own prediction using dec: 1
decision_function vals : [-0.75503345 -1.09561165 -1.44344022]
sklearns prediction : 2
own prediction using dec: 2
decision_function vals : [ 1.24778268 1.11179415 0.05277115]
sklearns prediction : 0
own prediction using dec: 0
decision_function vals : [-0.79577073 -1.00004599 -0.99974376]
sklearns prediction : 2
own prediction using dec: 2
decision_function vals : [ 1.07018075 1.0831253 0.22181655]
sklearns prediction : 0
own prediction using dec: 0
decision_function vals : [ 1.16705531 1.11326796 0.15604895]
sklearns prediction : 0
own prediction using dec: 0