Extract the training and test AUROC from caret 10-fold CV

Asked: 2018-01-06 19:27:10

Tags: r classification cross-validation r-caret xgboost

Say I am performing a classification like the following:

library(mlbench)
data(Sonar)

library(caret)
set.seed(998)

my_data <- Sonar

fitControl <-
  trainControl(
    method = "cv",
    number = 10,
    classProbs = T,
    savePredictions = T,
    summaryFunction = twoClassSummary
  )


model <- train(
  Class ~ .,
  data = my_data,
  method = "xgbTree",
  trControl = fitControl,
  metric = "ROC"
)

For each of the 10 folds, 10% of the data is used for validation. For the optimal parameters caret settles on, I can get the average validation AUC over all 10 folds with getTrainPerf(model), or the individual AUC of each fold from model$resample. What I cannot get, however, is the AUC obtained when the training data are fed back into the same model. It would be great if I could also get an individual AUC value for each training set.
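For reference, these are the two calls I mean:

getTrainPerf(model)   # mean cross-validated ROC/Sens/Spec for the best tune
model$resample        # per-fold held-out ROC/Sens/Spec for the best tune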

How can I extract this information? I want to make sure my model is not overfitting (the dataset I am working with is very small).

Thanks!

1 Answer:

Answer 0 (score: 1)

As requested in the comments, here is a custom approach for evaluating the per-fold training AUC alongside the cross-validation test AUC. I am not sure this can be extracted directly from the caret train object.

After running caret's train, extract the fold assignments for the best tune:

library(tidyverse)

# keep only the predictions made with the best tuning parameters and map
# each row of the data to the fold in which it was held out
model$bestTune %>%
  left_join(model$pred) %>%
  select(rowIndex, Resample) %>%
  mutate(Resample = as.numeric(gsub(".*(\\d$)", "\\1", Resample)),  # keep the last digit of "FoldXX"
         Resample = ifelse(Resample == 0, 10, Resample)) %>%        # "Fold10" ends in 0, so map it back to 10
  arrange(rowIndex) -> resamples
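
As a quick sanity check, this mapping should cover every row of the data exactly once, spread over 10 folds of roughly equal size:

nrow(resamples)            # should equal nrow(my_data), i.e. 208 rows of Sonar
table(resamples$Resample)  # roughly 20-21 rows held out per fold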

Construct a cross-validation function that uses the same folds as caret:

library(xgboost)
train <- my_data[, !names(my_data) %in% "Class"]
label <- as.numeric(my_data$Class) - 1   # M = 0, R = 1

test_auc <- lapply(1:10, function(x){
  # refit xgboost on the 9 training folds with caret's best tune
  # (parameters not listed here are left at their xgboost defaults)
  fold_model <- xgboost(data = data.matrix(train[resamples[, 2] != x, ]),
                        label = label[resamples[, 2] != x],
                        nrounds = model$bestTune$nrounds,
                        max_depth = model$bestTune$max_depth,
                        gamma = model$bestTune$gamma,
                        colsample_bytree = model$bestTune$colsample_bytree,
                        objective = "binary:logistic",
                        eval_metric = "auc",
                        print_every_n = 50)
  # AUC on the training folds and on the held-out fold
  preds_train <- predict(fold_model, data.matrix(train[resamples[, 2] != x, ]))
  preds_test  <- predict(fold_model, data.matrix(train[resamples[, 2] == x, ]))
  auc_train <- pROC::auc(pROC::roc(response = label[resamples[, 2] != x], predictor = preds_train, levels = c(0, 1)))
  auc_test  <- pROC::auc(pROC::roc(response = label[resamples[, 2] == x], predictor = preds_test, levels = c(0, 1)))
  data.frame(fold = x, auc_train, auc_test)
})

do.call(rbind, test_auc)
#output
   fold auc_train  auc_test
1     1         1 0.9909091
2     2         1 0.9797980
3     3         1 0.9090909
4     4         1 0.9629630
5     5         1 0.9363636
6     6         1 0.9363636
7     7         1 0.9181818
8     8         1 0.9636364
9     9         1 0.9818182
10   10         1 0.8888889

arrange(model$resample, Resample)
#output
         ROC      Sens      Spec Resample
1  0.9909091 1.0000000 0.8000000   Fold01
2  0.9898990 0.9090909 0.8888889   Fold02
3  0.9909091 0.9090909 1.0000000   Fold03
4  0.9444444 0.8333333 0.8888889   Fold04
5  0.9545455 0.9090909 0.8000000   Fold05
6  0.9272727 1.0000000 0.7000000   Fold06
7  0.9181818 0.9090909 0.9000000   Fold07
8  0.9454545 0.9090909 0.8000000   Fold08
9  0.9909091 0.9090909 0.9000000   Fold09
10 0.8888889 0.9090909 0.7777778   Fold10

Why the test-fold AUC from my function and from caret are not identical, I cannot say. I am fairly confident the same parameters and folds are used, so I can only assume it has something to do with the random seed. When I compute the AUC of caret's own saved test-fold predictions, I get the same output as caret:

# per-fold AUC computed directly from caret's saved out-of-fold predictions
model$bestTune %>%
  left_join(model$pred) %>%
  arrange(rowIndex) %>%
  select(M, Resample, obs) %>%
  mutate(Resample = as.numeric(gsub(".*(\\d$)", "\\1", Resample)),
         Resample = ifelse(Resample == 0, 10, Resample),
         obs = as.numeric(obs) - 1) %>%
  group_by(Resample) %>%
  do(auc = as.vector(pROC::auc(pROC::roc(response = .$obs, predictor = .$M)))) %>%
  unnest()
#output
   Resample   auc
      <dbl> <dbl>
 1     1.00 0.991
 2     2.00 0.990
 3     3.00 0.991
 4     4.00 0.944
 5     5.00 0.955
 6     6.00 0.927
 7     7.00 0.918
 8     8.00 0.945
 9     9.00 0.991
10    10.0  0.889

But I will stress once more that the training error on its own tells you very little here, and you should rely on the test error. If you want the two to be closer, consider tuning the gamma, alpha and lambda regularization parameters.
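
For illustration, here is a sketch of how those regularization parameters could be passed when refitting with xgboost; the values below are arbitrary placeholders (not tuned), and reg_model is just an illustrative name:

# sketch only: arbitrary, untuned regularization values for illustration
reg_model <- xgboost(data = data.matrix(train),
                     label = label,
                     nrounds = model$bestTune$nrounds,
                     max_depth = model$bestTune$max_depth,
                     gamma = 1,      # minimum loss reduction required to make a split
                     alpha = 0.5,    # L1 regularization on leaf weights
                     lambda = 2,     # L2 regularization on leaf weights
                     objective = "binary:logistic",
                     eval_metric = "auc",
                     print_every_n = 50)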

With such a small dataset I would still try a train:test = 80:20 split and use the independent test set to verify that the CV error is close to the test error.
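
A minimal sketch of such a split, reusing the fitControl defined above (object names here are just illustrative):

set.seed(998)
in_train  <- createDataPartition(my_data$Class, p = 0.8, list = FALSE)
train_dat <- my_data[in_train, ]
test_dat  <- my_data[-in_train, ]

split_model <- train(Class ~ ., data = train_dat, method = "xgbTree",
                     trControl = fitControl, metric = "ROC")

# compare the CV estimate on the 80% with the AUC on the held-out 20%
getTrainPerf(split_model)
test_probs <- predict(split_model, test_dat, type = "prob")[, "M"]
pROC::auc(pROC::roc(response = test_dat$Class, predictor = test_probs, levels = c("R", "M")))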