Variable importance for support vector machine and naive Bayes classifiers in R

Posted: 2016-04-25 15:46:58

Tags: r machine-learning svm naivebayes

I am working on building a predictive classifier in R on a cancer dataset. I am using random forest, support vector machine and naive Bayes classifiers. I am unable to calculate variable importance for the SVM and NB models.

I keep getting the following error:

Error in UseMethod("varImp") : 
  no applicable method for 'varImp' applied to an object of class "c('svm.formula', 'svm')"

I would greatly appreciate it if someone could help me.

3 answers:

Answer 0 (score: 4)

Given

library(e1071)
model <- svm(Species ~ ., data = iris)
class(model)
# [1] "svm.formula" "svm"     

library(caret)
varImp(model)
# Error in UseMethod("varImp") : 
#   no applicable method for 'varImp' applied to an object of class "c('svm.formula', 'svm')"

methods(varImp)
#  [1] varImp.bagEarth      varImp.bagFDA        varImp.C5.0*         varImp.classbagg*   
#  [5] varImp.cubist*       varImp.dsa*          varImp.earth*        varImp.fda*         
#  [9] varImp.gafs*         varImp.gam*          varImp.gbm*          varImp.glm*         
# [13] varImp.glmnet*       varImp.JRip*         varImp.lm*           varImp.multinom*    
# [17] varImp.mvr*          varImp.nnet*         varImp.pamrtrained*  varImp.PART*        
# [21] varImp.plsda         varImp.randomForest* varImp.RandomForest* varImp.regbagg*     
# [25] varImp.rfe*          varImp.rpart*        varImp.RRF*          varImp.safs*        
# [29] varImp.sbf*          varImp.train*  

There is no varImp.svm function among methods(varImp), hence the error. You may also want to have a look at this post on Cross Validated.
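One possible workaround (my own suggestion, not part of the answer above): fit the SVM through caret::train instead of e1071::svm. The returned object has class "train", for which a varImp method does exist; for models with no model-specific importance measure, caret falls back to a filter-based importance (per-predictor ROC analysis). A minimal sketch, assuming the caret and kernlab packages are installed:

```r
library(caret)  # train() with method = "svmLinear" uses kernlab under the hood

set.seed(1)
# Fit the same kind of linear SVM via caret::train; class(svm_fit) is "train",
# so varImp.train applies and no "no applicable method" error occurs.
svm_fit <- train(Species ~ ., data = iris, method = "svmLinear")
varImp(svm_fit)  # filter-based importance: ROC-curve area per predictor
```

Note that this is not an SVM-specific importance measure; it is computed from the predictors alone, independently of the fitted model.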

Answer 1 (score: 2)

If you are using R, variable importance can be calculated with the Importance method of the rminer package. This is my sample code:

library(rminer)
M <- fit(y~., data=train, model="svm", kpar=list(sigma=0.10), C=2)
svm.imp <- Importance(M, data=train)

For details, see the following link: https://cran.r-project.org/web/packages/rminer/rminer.pdf
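The snippet above assumes an existing train data frame with a response y. A self-contained adaptation on iris (my own sketch, assuming the rminer package is installed):

```r
library(rminer)

# Fit an SVM and run rminer's sensitivity-analysis importance;
# the result list's $imp component holds one importance value per column
# of the data (including the response column itself).
M <- fit(Species ~ ., data = iris, model = "svm")
svm.imp <- Importance(M, data = iris)
round(svm.imp$imp, 3)
```

Unlike caret's filter approach, this measure is computed from the fitted model, by probing how its predictions respond to changes in each input.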

Answer 2 (score: 1)

I created a loop that iteratively removes one predictor variable at a time and captures, in a data frame, various performance measures derived from the confusion matrix. This is not meant to be a one-size-fits-all solution, I didn't have the time for that, but it should not be hard to apply your own modifications.

Make sure that the variable being predicted is the last column in the data frame.

I mainly needed the specificity values of the models, and by removing one predictor at a time I could evaluate the importance of each one: the lowest specificity of a model (the one fitted without predictor number i) indicates that that predictor is the most important. You need to decide which metric you base importance on.

You could also add another for loop inside to switch between kernels, i.e. linear, polynomial, radial, but then you may have to account for other parameters such as gamma. Change "label_fake" to your target variable and df_final to your data frame.

SVM version:

library(e1071)  # svm(), tune()
library(caret)  # confusionMatrix()

set.seed(1)
varimp_df <- NULL   # df with results
ptm1 <- proc.time() # Start the clock!
for(i in 1:(ncol(df_final)-1)) { # the last var is the dep var, hence the -1
  smp_size <- floor(0.70 * nrow(df_final)) # 70/30 split
  train_ind <- sample(seq_len(nrow(df_final)), size = smp_size)
  training <- df_final[train_ind, -c(i)] # receives all of the df less 1 var
  testing <- df_final[-train_ind, -c(i)]

  tune.out.linear <- tune(svm, label_fake ~ .,
                          data = training,
                          kernel = "linear",
                          ranges = list(cost = 10^seq(1, 3, by = 0.5))) # you can choose any range you see fit

  svm.linear <- svm(label_fake ~ .,
                    kernel = "linear",
                    data = training,
                    cost = tune.out.linear[["best.parameters"]][["cost"]])

  train.pred.linear <- predict(svm.linear, testing)
  testing_y <- as.factor(testing$label_fake)
  conf.matrix.svm.linear <- caret::confusionMatrix(train.pred.linear, testing_y)
  varimp_df <- rbind(varimp_df, data.frame(
                     var_no=i,
                     variable=colnames(df_final)[i], # note: colnames(df_final[,i]) would return NULL
                     cost_param=tune.out.linear[["best.parameters"]][["cost"]],
                     accuracy=conf.matrix.svm.linear[["overall"]][["Accuracy"]],
                     kappa=conf.matrix.svm.linear[["overall"]][["Kappa"]],
                     sensitivity=conf.matrix.svm.linear[["byClass"]][["Sensitivity"]],
                     specificity=conf.matrix.svm.linear[["byClass"]][["Specificity"]]))
  runtime1 <- as.data.frame(t(data.matrix(proc.time() - ptm1)))$elapsed # time for running this loop
  runtime1 # divide by 60 and you get minutes, /3600 you get hours
}
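Once the loop has filled varimp_df, the ranking itself is just a sort on whichever metric you chose. Shown here on a toy table with made-up numbers (hypothetical predictor names, only to illustrate the step):

```r
# Toy version of the table the loop produces (fabricated values):
varimp_df <- data.frame(var_no      = 1:3,
                        variable    = c("age", "stage", "grade"),
                        specificity = c(0.71, 0.88, 0.80))

# Lowest specificity first: the predictor whose removal hurt specificity
# the most, i.e. the most important one by this criterion.
varimp_df[order(varimp_df$specificity), ]
</imports>
```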

Naive Bayes version:

library(caret) # train(); method "nb" also requires the klaR package

ctrl <- trainControl(method = "cv") # 'ctrl' was used but never defined; plain cross-validation as a placeholder

varimp_nb_df <- NULL
ptm1 <- proc.time() # Start the clock!
for(i in 1:(ncol(df_final)-1)) {
  smp_size <- floor(0.70 * nrow(df_final))
  train_ind <- sample(seq_len(nrow(df_final)), size = smp_size)
  training <- df_final[train_ind, -c(i)]
  testing <- df_final[-train_ind, -c(i)]

  x <- training[, names(training) != "label_fake"]
  y <- training$label_fake

  model_nb_var <- train(x, y, 'nb', trControl = ctrl)

  predict_nb_var <- predict(model_nb_var, newdata = testing)

  confusion_matrix_nb_1 <- caret::confusionMatrix(predict_nb_var, testing$label_fake)

  varimp_nb_df <- rbind(varimp_nb_df, data.frame(
    var_no=i,
    variable=colnames(df_final)[i], # note: colnames(df_final[,i]) would return NULL
    accuracy=confusion_matrix_nb_1[["overall"]][["Accuracy"]],
    kappa=confusion_matrix_nb_1[["overall"]][["Kappa"]],
    sensitivity=confusion_matrix_nb_1[["byClass"]][["Sensitivity"]],
    specificity=confusion_matrix_nb_1[["byClass"]][["Specificity"]]))
  runtime1 <- as.data.frame(t(data.matrix(proc.time() - ptm1)))$elapsed # time for running this loop
  runtime1 # divide by 60 and you get minutes, /3600 you get hours
}

Have fun!