I'm practicing SVM in R using the iris dataset, and I want to get the feature weights/coefficients from my model, but I think I may have misunderstood something, since my output gives me 32 support vectors. I assumed I would get four, one for each of the four variables being analyzed. I know there is a way to do this when using the svm()
function, but I'm trying to use the train()
function from caret to produce my SVM.
library(caret)
# Define fitControl
fitControl <- trainControl(## 5-fold CV
method = "cv",
number = 5,
classProbs = TRUE,
summaryFunction = twoClassSummary )
# Define Tune
grid<-expand.grid(C=c(2^-5,2^-3,2^-1))
##########
df <- iris
head(df)
df<-df[df$Species!='setosa',]
df$Species<-as.character(df$Species)
df$Species<-as.factor(df$Species)
# set random seed and run the model
set.seed(321)
svmFit1 <- train(x = df[-5],
y=df$Species,
method = "svmLinear",
trControl = fitControl,
preProc = c("center","scale"),
metric="ROC",
tuneGrid=grid )
svmFit1
I thought it would simply be svmFit1$finalModel@coef
, but I get 32 vectors when I expected to get 4. Why is that?
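For context, the svm() route I had in mind is roughly the sketch below using e1071 (illustrative only, not part of the caret code above):
library(e1071)
# Illustrative: weights of a linear SVM fitted with e1071::svm() on the same two-class data
fit_e1071 <- svm(Species ~ ., data = df, kernel = "linear", scale = TRUE)
# For a linear kernel, the weight vector is t(coefs) %*% SV
w_e1071 <- t(fit_e1071$coefs) %*% fit_e1071$SV
w_e1071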
Answer 0 (score: 5)
So coef is not the weight vector W of the support vectors. Here is the relevant part of the documentation for the ksvm class:
coef: The corresponding coefficients times the training labels.
Because, for a linear kernel, the weight vector is the sum of the support vectors weighted by those coefficients, you can recover W by multiplying coef with the matrix of support vectors. To get what you are looking for, you need to do the following:
coefs <- svmFit1$finalModel@coef[[1]]
mat <- svmFit1$finalModel@xmatrix[[1]]
coefs %*% mat
See below for a reproducible example.
library(caret)
#> Loading required package: lattice
#> Loading required package: ggplot2
#> Warning: package 'ggplot2' was built under R version 3.5.2
# Define fitControl
fitControl <- trainControl(
method = "cv",
number = 5,
classProbs = TRUE,
summaryFunction = twoClassSummary
)
# Define Tune
grid <- expand.grid(C = c(2^-5, 2^-3, 2^-1))
##########
df <- iris
df <- df[df$Species != 'setosa', ]
df$Species <- as.character(df$Species)
df$Species <- as.factor(df$Species)
# set random seed and run the model
set.seed(321)
svmFit1 <- train(x = df[-5],
y=df$Species,
method = "svmLinear",
trControl = fitControl,
preProc = c("center","scale"),
metric="ROC",
tuneGrid=grid )
coefs <- svmFit1$finalModel@coef[[1]]
mat <- svmFit1$finalModel@xmatrix[[1]]
coefs %*% mat
#> Sepal.Length Sepal.Width Petal.Length Petal.Width
#> [1,] -0.1338791 -0.2726322 0.9497457 1.027411
Created on 2019-06-11 by the reprex package (v0.2.1.9000)
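If all you need is a ranking of the predictors, one option (my own addition, not part of the original answer) is to sort them by the absolute value of these weights:
w <- drop(coefs %*% mat)
# Rank predictors by the magnitude of their weight in the separating hyperplane
sort(abs(w), decreasing = TRUE)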
Answer 1 (score: 0)
As more and more people start moving from caret to tidymodels, I thought I would post a tidymodels version of the above solution (as of August 2020), since I have not seen much discussion of it so far and it is not entirely straightforward to do.
The main steps are outlined here, but see the links at the end for more detail on why it was done this way.
1. Get the final model
set.seed(2020)
# Assuming kernlab linear SVM
# Grid Search Parameters
tune_rs <- tune_grid(
model_wf,
train_folds,
grid = param_grid,
metrics = classification_measure,
control = control_grid(save_pred = TRUE)
)
# Finalise workflow with the parameters for best accuracy
best_accuracy <- select_best(tune_rs, "accuracy")
svm_wf_final <- finalize_workflow(
model_wf,
best_accuracy
)
# Fit your final model on all available data at the end of the experiment
final_model <- fit(svm_wf_final, data)
# fit() executes the parsnip model fitting routine for the finalized
# workflow on the supplied data
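The snippet above assumes that model_wf, train_folds, param_grid and classification_measure already exist. As a rough sketch (my own illustration, not from the original answer, and assuming a parsnip version that provides svm_linear()), they could be set up for the same two-class iris subset along these lines:
library(tidymodels)
# Illustrative setup for the objects assumed above (object names kept from the snippet)
df <- iris[iris$Species != 'setosa', ]
df$Species <- factor(as.character(df$Species))
data <- df                                   # data used for the final fit
set.seed(2020)
train_folds <- vfold_cv(data, v = 5, strata = Species)
svm_rec <- recipe(Species ~ ., data = data) %>%
  step_normalize(all_predictors())           # all predictors are numeric here
svm_spec <- svm_linear(cost = tune()) %>%    # kernlab engine so the fit is a ksvm object
  set_engine("kernlab") %>%
  set_mode("classification")
model_wf <- workflow() %>%
  add_recipe(svm_rec) %>%
  add_model(svm_spec)
param_grid <- grid_regular(cost(range = c(-5, -1)), levels = 3)  # 2^-5, 2^-3, 2^-1 (log2 scale)
classification_measure <- metric_set(accuracy, roc_auc)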
2. Extract the ksvm object, pull out the information we need, and compute variable importance
ksvm_obj <- pull_workflow_fit(final_model)$fit
# pull_workflow_fit() returns the parsnip model fit object
# $fit returns the object produced by the fitting fn (which is what we need! and is dependent on the engine)
coefs <- ksvm_obj@coef[[1]]
# first bit of info we need are the coefficients from the linear fit
mat <- ksvm_obj@xmatrix[[1]]
# xmatrix that we need to matrix multiply against
var_impt <- coefs %*% mat
# var importance
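As a small follow-up (my own addition, not from the original answer): var_impt is a 1 x p matrix, so it can be turned into a sorted importance table for inspection or plotting:
# Turn the 1 x p weight matrix into a sorted importance table
var_impt_tbl <- tibble::tibble(
  variable = colnames(var_impt),
  weight   = as.numeric(var_impt)
) %>%
  dplyr::arrange(dplyr::desc(abs(weight)))
var_impt_tbl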
References:
Extracting the weights of the support vectors with caret: Linear SVM and extracting the weights
Variable importance (last section of the post): http://www.rebeccabarter.com/blog/2020-03-25_machine_learning/#finalize-the-workflow