I'm trying to understand conceptually how scikit-learn's ROC function produces the true positive and false positive rates. I used the breast cancer dataset from scikit-learn and built a decision tree on two arbitrary features.
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from sklearn import tree
import numpy as np
data = load_breast_cancer()
X = data.data[:, [1,3]]
y = data.target
# Splitting data
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.33, random_state=0)
# Training tree
bc_tree = tree.DecisionTreeClassifier(criterion="entropy").fit(X_train, y_train)
# Predictions
bc_pred = bc_tree.predict(X_test)
# Score
bc_tree.score(X_test, y_test)
# Confusion matrix
from sklearn.metrics import confusion_matrix
confusion_matrix(y_test, bc_pred)  # True positive = 0.83
# ROC curve
from sklearn.metrics import roc_curve
fpr_tree, tpr_tree, thresholds_tree = roc_curve(y_test, bc_pred)
# True positive rate ROC
tpr_tree # 0.91
The confusion matrix looks like this:
[[ 55, 12]
[ 11, 110]]
Based on my calculation, the true positive rate is:
55 / (55 + 11) = 0.83
According to the ROC curve from scikit-learn's implementation, the true positive rate is 0.92. How is it computing that number, and why doesn't my calculation match? What am I missing?
Answer 0 (score: 1)
It's because you are reading the output of confusion_matrix incorrectly.
The matrix returned by confusion_matrix has the format
[[TN, FP]
 [FN, TP]]
So by the formula for TPR, TP / (TP + FN), the value should be 110 / (110 + 11) = 0.9090... Your 55 / (55 + 11) = 0.83 divides TN by (TN + FN), which is not a standard rate.
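A quick way to check this is to unpack the matrix with `ravel()`, which yields the four cells in sklearn's TN, FP, FN, TP order. This is just a sketch using the matrix values from the question, not a rerun of the original model:

```python
import numpy as np

# The confusion matrix from the question, in sklearn's layout:
# [[TN, FP],
#  [FN, TP]]
cm = np.array([[55, 12],
               [11, 110]])

tn, fp, fn, tp = cm.ravel()

tpr = tp / (tp + fn)  # 110 / 121 ≈ 0.909, matching roc_curve
fpr = fp / (fp + tn)  # 12 / 67 ≈ 0.179
print(tpr, fpr)
```

Note that because `bc_pred` contains hard 0/1 predictions rather than probabilities, `roc_curve` can only produce this single operating point (plus the trivial (0, 0) and (1, 1) endpoints); passing `predict_proba` output instead would give a full curve.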