I'm using the glmnet() package to fit a lasso regression model. Here is some information about my data:
the_data is a data frame with 62 rows and 2001 columns. The last column is the response, resp. The predictor columns are named G(column number), so G120 is the 120th column.
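For anyone who wants to run the code below without my data, a data frame of the same shape can be simulated like this (purely synthetic values, not my actual data; the response here is just a random 0/1 vector):

> set.seed(42)
> the_data = as.data.frame(matrix(rnorm(62 * 2000), nrow = 62,
                                  dimnames = list(NULL, paste0("G", 1:2000))))
> the_data$resp = sample(c(0, 1), 62, replace = TRUE)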
Here is the code I use to fit the lasso model:
> library(glmnet)
> x = model.matrix(resp ~ ., data = the_data)[,-1]
> y = the_data$resp
> grid = 10^seq(10, -2, length = 100)
# making the training and test sets, and fitting the lasso model
> set.seed(1)
> train = sort(sample(1:nrow(the_data), nrow(the_data)/2))
> test = (1:nrow(the_data))[-train]
> lasso_mod = glmnet(x[train,], as.factor(y[train]), alpha = 1, lambda = grid,
family = "binomial")
# choosing the optimal lambda using cross-validation
> set.seed(1)
> cv_out = cv.glmnet(x[train,], as.factor(y[train]), alpha = 1, family = "binomial")
> best_lambda = cv_out$lambda.min
# fitting the lasso with the optimal lambda
> best_lasso_mod = glmnet(x[train,], as.factor(y[train]), alpha = 1,
lambda = best_lambda, family = "binomial")
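One thing worth noting about the fits above (I'm not sure whether it matters for the question below): cv.glmnet was called without lambda = grid, so it generates its own lambda sequence, and lambda.min is therefore not guaranteed to be one of the 100 values that lasso_mod was fitted on. That can be checked directly:

# is the CV-chosen lambda actually one of the values in grid?
> best_lambda %in% grid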
I currently have two ways of extracting the nonzero coefficients with the coef.glmnet() command:
# method 1
> best_coef_1 = coef(lasso_mod, s = best_lambda)
# method 2
> best_coef_2 = coef(best_lasso_mod)
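(As an aside, a third way to list the selected variables is predict() with type = "nonzero", which gives the indices of the nonzero coefficients at s = best_lambda; the result comes back wrapped in a list/data frame, so [[1]] pulls out the index vector either way:)

> nz_idx = predict(lasso_mod, s = best_lambda, type = "nonzero")
> colnames(x)[nz_idx[[1]]]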
The two methods give similar but not identical sets of nonzero coefficients.
# nonzero coefficients from method 1
> rownames(best_coef_1[best_coef_1[, 1] != 0, 0])[-1]
[1] "G258" "G281" "G698" "G822" "G1153" "G1346" "G1423" "G1582" "G1870"
[10] "G1899"
# nonzero coefficients from method 2
> rownames(best_coef_2[best_coef_2[, 1] != 0, 0])[-1]
[1] "G249" "G258" "G281" "G698" "G822" "G1153" "G1423" "G1582" "G1870"
The first method gives 10 nonzero coefficients, the second gives 9. Why do they differ?
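To make the disagreement concrete, the two name vectors can be compared directly (nz_1 and nz_2 below are just the two vectors printed above):

> nz_1 = rownames(best_coef_1[best_coef_1[, 1] != 0, 0])[-1]
> nz_2 = rownames(best_coef_2[best_coef_2[, 1] != 0, 0])[-1]
> setdiff(nz_1, nz_2)   # in method 1 only: "G1346" "G1899"
> setdiff(nz_2, nz_1)   # in method 2 only: "G249"

So the two selections share 8 variables, with method 1 additionally picking G1346 and G1899, and method 2 additionally picking G249.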
Here is the coefficient plot for the model, with the optimal lambda marked:
> plot(lasso_mod, xvar = "lambda", las = 1, xlim = c(-4, -1))
> abline(v = log(best_lambda), col = "black", lty = 2)
It may be hard to see, so I went through and highlighted every coefficient path that is nonzero at the optimal lambda; there are 10 of them.
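In case it's useful, here is roughly how such highlighting can be done (a sketch, not exactly the code I used): pull out the full coefficient-path matrix, flag the rows that are nonzero at best_lambda, and replot with matplot so those paths get their own colour.

> coef_paths = as.matrix(coef(lasso_mod))[-1, ]    # drop the intercept row
> nz_at_best = best_coef_1[-1, 1] != 0             # the 10 highlighted variables
> matplot(log(lasso_mod$lambda), t(coef_paths), type = "l", lty = 1,
          col = ifelse(nz_at_best, "red", "grey"),
          xlab = "log(lambda)", ylab = "Coefficients",
          xlim = c(-4, -1), las = 1)
> abline(v = log(best_lambda), col = "black", lty = 2)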