library(rpart)
library(rpart.plot)  # prp() comes from rpart.plot, not rpart

train <- data.frame(ClaimID = c(1,2,3,4,5,6,7,8,9,10),
                    RearEnd = c(TRUE, TRUE, TRUE, FALSE, FALSE, FALSE, FALSE, TRUE, TRUE, FALSE),
                    Whiplash = c(TRUE, TRUE, TRUE, TRUE, TRUE, FALSE, FALSE, FALSE, FALSE, TRUE),
                    Activity = factor(c("active", "very active", "very active", "inactive", "very inactive", "inactive", "very inactive", "active", "active", "very active"),
                                      levels = c("very inactive", "inactive", "active", "very active"),
                                      ordered = TRUE),
                    Fraud = c(FALSE, TRUE, TRUE, FALSE, FALSE, TRUE, TRUE, FALSE, FALSE, TRUE))

mytree <- rpart(Fraud ~ RearEnd + Whiplash + Activity, data = train,
                method = "class", minsplit = 2, minbucket = 1, cp = -1)

prp(mytree, type = 4, extra = 101, leaf.round = 0, fallen.leaves = TRUE,
    varlen = 0, tweak = 1.2)
Then, using printcp, I can see the cross-validation results:
> printcp(mytree)
Classification tree:
rpart(formula = Fraud ~ RearEnd + Whiplash + Activity, data = train,
method = "class", minsplit = 2, minbucket = 1, cp = -1)
Variables actually used in tree construction:
[1] Activity RearEnd Whiplash
Root node error: 5/10 = 0.5
n= 10
CP nsplit rel error xerror xstd
1 0.6 0 1.0 2.0 0.0
2 0.2 1 0.4 0.4 0.3
3 -1.0 3 0.0 0.4 0.3
So the root node error is 0.5, which I understand to be the misclassification error. But I'm having trouble calculating the sensitivity (the proportion of true positives) and the specificity (the proportion of true negatives). How can I compute these from the rpart output?
(The example above comes from http://gormanalysis.com/decision-trees-in-r-using-rpart/)
Answer 0 (score: 2)
You can use the caret package to do this.

Data:
library(rpart)
train <- data.frame(ClaimID = c(1,2,3,4,5,6,7,8,9,10),
                    RearEnd = c(TRUE, TRUE, TRUE, FALSE, FALSE, FALSE, FALSE, TRUE, TRUE, FALSE),
                    Whiplash = c(TRUE, TRUE, TRUE, TRUE, TRUE, FALSE, FALSE, FALSE, FALSE, TRUE),
                    Activity = factor(c("active", "very active", "very active", "inactive", "very inactive", "inactive", "very inactive", "active", "active", "very active"),
                                      levels = c("very inactive", "inactive", "active", "very active"),
                                      ordered = TRUE),
                    Fraud = c(FALSE, TRUE, TRUE, FALSE, FALSE, TRUE, TRUE, FALSE, FALSE, TRUE))

mytree <- rpart(Fraud ~ RearEnd + Whiplash + Activity, data = train,
                method = "class", minsplit = 2, minbucket = 1, cp = -1)
Solution:
library(caret)

# calculate predictions; for method = "class", predict() returns a
# probability matrix, and column 2 holds P(Fraud = TRUE)
preds <- predict(mytree, train)

# calculate sensitivity
> sensitivity(factor(preds[,2]), factor(as.numeric(train$Fraud)))
[1] 1

# calculate specificity
> specificity(factor(preds[,2]), factor(as.numeric(train$Fraud)))
[1] 1
Both sensitivity and specificity take the predictions as their first argument and the observed values (the response variable, i.e. train$Fraud) as their second. According to the documentation, both the predictions and the observed values must be supplied to the functions as factors with the same levels. In this case both sensitivity and specificity are 1, because the predictions are 100% accurate.
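Equivalently, the same two numbers can be computed by hand from a confusion matrix in base R, with no caret dependency. A minimal sketch, where the hard-coded vectors below are assumptions standing in for the example's observed Fraud values and the tree's (perfect) class predictions:

```r
# Observed outcomes and predicted classes; hard-coded here to mirror the
# example, where the overfit tree classifies every case correctly.
obs  <- c(FALSE, TRUE, TRUE, FALSE, FALSE, TRUE, TRUE, FALSE, FALSE, TRUE)
pred <- obs  # stands in for predict(mytree, train, type = "class") == "TRUE"

# 2x2 confusion matrix: rows = predicted, columns = observed
tab <- table(Predicted = pred, Observed = obs)

TP <- tab["TRUE",  "TRUE"]   # true positives
TN <- tab["FALSE", "FALSE"]  # true negatives
FP <- tab["TRUE",  "FALSE"]  # false positives
FN <- tab["FALSE", "TRUE"]   # false negatives

sens <- TP / (TP + FN)  # sensitivity: true positive rate
spec <- TN / (TN + FP)  # specificity: true negative rate
```

With the perfect predictions of this example, both come out to 1, matching the caret result above.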
Answer 1 (score: 0)
The root node error is the misclassification error at the root of the tree, that is, the misclassification error before any splits are added. It is not the misclassification error of the final tree.
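A short base-R sketch of that quantity for the example's data (here Fraud is split 5/5, so predicting either class for every observation misclassifies half the cases):

```r
# Root node error: misclassification rate when every observation is
# assigned the majority class at the root, i.e. before any splits.
fraud <- c(FALSE, TRUE, TRUE, FALSE, FALSE, TRUE, TRUE, FALSE, FALSE, TRUE)

majority   <- names(which.max(table(fraud)))      # ties resolve to the first level
root_error <- mean(fraud != as.logical(majority)) # 5/10 = 0.5
```

This reproduces the "Root node error: 5/10 = 0.5" line printed by printcp.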