Why do the MASS::lm.ridge coefficients differ from manually computed ones?

Time: 2020-02-13 17:14:36

Tags: r regression mass

When I perform ridge regression manually, by definition

solve(t(X) %*% X + lbd*I) %*% t(X) %*% y

I get results that differ from what MASS::lm.ridge computes. Why? For ordinary linear regression the manual approach (computing the pseudoinverse) works fine.

Here is my minimal reproducible example:

library(tidyverse)

ridgeRegression = function(X, y, lbd) {
  Rinv = solve(t(X) %*% X + lbd*diag(ncol(X)))
  t(Rinv %*% t(X) %*% y)
}

# generate some data:
set.seed(0)
tb1 = tibble(
  x0 = 1,
  x1 = seq(-1, 1, by=.01),
  x2 = x1 + rnorm(length(x1), 0, .1),
  y  = x1 + x2 + rnorm(length(x1), 0, .5)
)
X = as.matrix(tb1 %>% select(x0, x1, x2))

# sanity check: force ordinary linear regression
# and compare it with the built-in linear regression:
ridgeRegression(X, tb1$y, 0) - coef(summary(lm(y ~ x1 + x2, data=tb1)))[, 1]
# looks the same: -2.94903e-17 1.487699e-14 -2.176037e-14

# compare manual ridge regression to MASS ridge regression:
ridgeRegression(X, tb1$y, 10) - coef(MASS::lm.ridge(y ~ x0 + x1 + x2 - 1, data=tb1, lambda = 10))
# noticeably different: -0.0001407148 0.003689412 -0.08905392

1 answer:

Answer 0 (score: 3)

MASS::lm.ridge scales the data before modelling - this explains the difference in the coefficients.

You can confirm this by inspecting the function's code: just type MASS::lm.ridge into the R console to print its source.

Here is the body of the lm.ridge function, with the scaling part commented out:

X = as.matrix(tb1 %>% select(x0, x1, x2))
n <- nrow(X); p <- ncol(X)
#Xscale <- drop(rep(1/n, n) %*% X^2)^0.5
#X <- X/rep(Xscale, rep(n, p))
Xs <- svd(X)
rhs <- t(Xs$u) %*% tb1$y
d <- Xs$d
lscoef <-  Xs$v %*% (rhs/d)
lsfit <- X %*% lscoef
resid <- tb1$y - lsfit
s2 <- sum(resid^2)/(n - p)
HKB <- (p-2)*s2/sum(lscoef^2)
LW <- (p-2)*s2*n/sum(lsfit^2)
k <- 1
dx <- length(d)
div <- d^2 + rep(10, rep(dx,k))  # 10 is the lambda value, hard-coded for this example
a <- drop(d*rhs)/div
dim(a) <- c(dx, k)
coef <- Xs$v %*% a
coef
#             x0        x1        x2
#[1,] 0.01384984 0.8667353 0.9452382
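You can also confirm the explanation in the other direction: scale the columns of X by their root-mean-squares (the same Xscale factors lm.ridge uses), run the plain ridge formula on the scaled matrix, and divide the resulting coefficients by the scale factors. A minimal self-contained sketch - naiveRidge here is my own helper, equivalent to the ridgeRegression function in the question:

```r
library(MASS)

# rebuild the data from the question
set.seed(0)
x1 <- seq(-1, 1, by = .01)
x2 <- x1 + rnorm(length(x1), 0, .1)
y  <- x1 + x2 + rnorm(length(x1), 0, .5)
X  <- cbind(x0 = 1, x1 = x1, x2 = x2)

# textbook ridge estimate, same as ridgeRegression in the question
naiveRidge <- function(X, y, lbd)
  solve(t(X) %*% X + lbd * diag(ncol(X))) %*% t(X) %*% y

n <- nrow(X); p <- ncol(X)
Xscale <- drop(rep(1/n, n) %*% X^2)^0.5   # root-mean-square of each column
Xsc    <- X / rep(Xscale, rep(n, p))      # column-scaled design matrix

b_scaled <- naiveRidge(Xsc, y, 10)        # ridge on the scaled matrix
b        <- drop(b_scaled) / Xscale       # map back to the original scale

b - coef(lm.ridge(y ~ x0 + x1 + x2 - 1,
                  data = data.frame(x0 = 1, x1, x2, y), lambda = 10))
# differences should now be down at floating-point noise level
```

The division by Xscale at the end mirrors what coef() does for a ridgelm object: lm.ridge fits on the scaled matrix internally and unscales the coefficients on the way out.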