Linear regression analysis based on subgroups

Asked: 2016-11-23 13:30:15

Tags: r linear-regression

Suppose I have data (t, y) for which I expect a linear dependence y(t). In addition, each observation carries the attributes par1, par2 and par3. Is there an algorithm or technique to decide whether one, two, or all of these parameters are relevant for the fit? I tried leaps::regsubsets(y ~ t + par1 + par2 + par3, data = mydata, nbest = 10), but could not extract the most suitable formula from it.
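
A minimal sketch of how that regsubsets output can be inspected (assuming the mydata from the reproducible example below); summary() reports which predictors enter each candidate model and its BIC:

# Inspect the candidate models returned by regsubsets:
subs <- leaps::regsubsets(y ~ t + par1 + par2 + par3, data = mydata, nbest = 10)
s <- summary(subs)
s$which  # which predictors are included in each candidate model
s$bic    # BIC of each candidate (lower is better)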

The final result, when plotted, should look like the following. For the data, see below.

(figure: the expected fit for three different groups)
So the information I am looking for is:

  • that adding par1 and par2 is the most suitable choice, and
  • the model y_i = a_i * t_i + b_i together with the fitted a_i and b_i (see the sketch after this list).
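
As a point of reference only (a sketch, assuming the correct grouping were already known as a factor g, which is exactly what I am looking for), the per-group intercepts b_i and slopes a_i could be read off a nested fit:

# Hypothetical reference fit with a known grouping factor `g`:
# y ~ g/t - 1 expands to g + g:t - 1, i.e. one intercept and one slope per group.
ref <- lm(y ~ g / t - 1, data = mydata)
coef(ref)  # entries named g<level> (the intercepts b_i) and g<level>:t (the slopes a_i)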

Reproducible example

t <- seq(0,10, length.out = 1000) # large sample of x values
# Create 3 linear equations of the form y_i = a*t_i + b
a <- c(1, 0.3, 0.2) # slope
b <- c(-0.5, 0.5, 0.1) # offset

# create t_i, the noiseless y_t_i and y_i (including noise)
d <- list()
y <- list()
y_t <- list()
for (i in 1:3) {
  set.seed(33*i)
  d[[i]] <- sort(sample(t, 50, replace = F))
  set.seed(33*i)
  noise <- rnorm(10) # note: this length-10 noise vector is recycled over the 50 sampled points
  y[[i]] <- a[i]*d[[i]] + b[i] + noise
  y_t[[i]] <- a[i]*d[[i]] + b[i]
}
# Final data set
df1 <- data.frame(t=d[[1]], y=y[[1]], par1=rep(1), par2=rep(10), par3=sample(c(100, 200, 300), length(d[[1]]), replace = T))
df2 <- data.frame(t=d[[2]], y=y[[2]], par1=rep(2), par2=rep(20), par3=sample(c(100, 200, 300), length(d[[2]]), replace = T))
df3 <- data.frame(t=d[[3]], y=y[[3]], par1=rep(2), par2=rep(30), par3=sample(c(100, 200, 300), length(d[[3]]), replace = T))
mydata <- rbind(df1, df2, df3)
mydata <- mydata[sample(nrow(mydata)), ]

# This is what the data looks like:
plot(mydata$t, mydata$y)

# This is the result I am looking for (ideally):
plot(d[[1]], y[[1]], col = "black", xlim = c(0, 10), ylim = c(-2, 10), xlab = "t", ylab = "y",
     main = "Fit for three different groups")
points(d[[2]], y[[2]], col = "red")
points(d[[3]], y[[3]], col = "blue")
lines(d[[1]], y_t[[1]],col = "black")
lines(d[[2]], y_t[[2]], col = "red")
lines(d[[3]], y_t[[3]], col = "blue")

Comments and follow-up questions on @Roland's answer

I understand that, with the three given parameters, there are 2^3 = 8 groups given the 2 * 3 * 3 = 18 factor levels. But I would expect only those 8 groups to be relevant, since I can always choose either "include parameter x" or "do not include it"; to me it makes no sense to include only "level x of parameter y".
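
A quick check (a sketch on the mydata from above) of how many of those combinations actually occur in the data:

# Distinct (par1, par2, par3) combinations present in mydata:
combos <- unique(mydata[, c("par1", "par2", "par3")])
nrow(combos)  # at most 2 * 3 * 3 = 18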

I tried the following:

g <- 0 # one of the encoded group labels (mydata$g is created in the answer below)
t_lin1 <- mydata$t[mydata$g == g]
y_lin1 <- mydata$y[mydata$g == g]
plot(mydata$t, mydata$y)
points(t_lin1, y_lin1, col = "red")
abline(lm(y_lin1 ~ t_lin1), col = "red")
points(pred.1se ~ t, data = mydata, col = as.integer(mydata$g), pch = 16)

and realized that the fit is off. In hindsight this is clear, because

  • I included the wrong factor levels (most likely par3 is not relevant)
  • and therefore used the wrong subset of the data.

So my final questions are:

  • Where do I find which relevant groups are included in the best model?
  • What are the corresponding fitted parameters of the regression?

Sorry if this is obvious, but to me it is a mystery.

1 answer:

Answer 0 (score: 3)

The LASSO gets quite close (although it identifies too many effects):

#I assume these are supposed to be factors:
mydata$par1 <- factor(mydata$par1)
mydata$par2 <- factor(mydata$par2)
mydata$par3 <- factor(mydata$par3)

#create model matrix, remove intercept since glmnet adds it
x <- model.matrix(y ~ (par1 * par2 * par3) * t, data = mydata)[,-1]

#cross-validated LASSO
library(glmnet)
set.seed(42)
fit <- cv.glmnet(x, mydata$y, intercept = TRUE, nfolds = 10, alpha = 1)
plot(fit)

resulting plot
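
For reference, the cross-validation object stores the two penalty values one typically chooses between; the sparser lambda.1se is used below:

# The two candidate penalties from cross-validation:
# lambda.min minimizes the CV error; lambda.1se is the largest lambda whose
# CV error stays within one standard error of that minimum (sparser model).
c(lambda.min = fit$lambda.min, lambda.1se = fit$lambda.1se)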

coef <- as.matrix(coef(fit, s = "lambda.1se"))
coef[coef != 0,]
#(Intercept)      par230           t     par12:t    par230:t   par3300:t 
# 0.47542479 -0.27612966  0.75497711 -0.42493030 -0.15044371  0.03033057

#The groups:
mydata$g <- factor((mydata$par2 == 30) + 10 * (mydata$par1 == 2) + 100 * (mydata$par3 == 300))
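
A quick sanity check (a sketch, run after the line above) of which parameter combination each encoded group label corresponds to:

# Map each encoded group label back to its (par1, par2, par3) combination:
unique(mydata[, c("g", "par1", "par2", "par3")])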



mydata$pred.1se <- predict(fit, newx = x, s = "lambda.1se")

library(ggplot2)
ggplot(mydata, aes(x = t, color = g)) +
  geom_point(aes(y = y)) +
  geom_line(aes(y = pred.1se))

resulting plot

You can then calculate the desired intercepts and slopes from the coefficients.
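
For example (a sketch based only on the non-zero coefficients printed above), the fitted line for the group with par1 == 2 and par2 == 30 (and par3 != 300) is obtained by summing the relevant terms:

# Intercept and slope for the group par1 == 2, par2 == 30, par3 != 300,
# read off the non-zero coefficients printed above:
cf <- coef[coef != 0, ]
intercept <- cf["(Intercept)"] + cf["par230"]          # about 0.475 - 0.276 = 0.199
slope     <- cf["t"] + cf["par12:t"] + cf["par230:t"]  # about 0.755 - 0.425 - 0.150 = 0.180
c(intercept = unname(intercept), slope = unname(slope))
# close to the true b = 0.1 and a = 0.2 used to simulate this group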