How can I run a Welch t-test on the slopes from two regressions in R?

Asked: 2017-01-13 17:02:24

Tags: r regression linear-regression t-test

I am running regressions for two groups using the same independent variables. I would then like to test whether the slopes of the two regressions differ significantly.

I have read that Welch's t-test is recommended when sample sizes and variances are unequal between the two groups. I found the t.test() function, but I do not know how to apply it to the slopes.

Data <- data.frame(
  gender = sample(c("men", "women"), 2000, replace = TRUE),
  var1 = sample(c("value1", "value2"), 2000, replace = TRUE),
  var2 = sample(c("valueA", "valueB"), 2000, replace = TRUE),
  var3 = sample(c("valueY", "valueZ"), 2000, replace = TRUE),
  y = sample(0:10, 2000, replace = TRUE)
)

My two regressions:

lm.male <- lm(y ~ var1 + var2 + var3, data = subset(Data, gender == "men"))
summary(lm.male)

lm.women <- lm(y ~ var1 + var2 + var3, data = subset(Data, gender == "women"))
summary(lm.women)

In Stata, I would use the suest command to perform the test.

Does anyone know how to code a Welch t-test on the slopes in R?

1 Answer:

Answer 0 (score: 4)

Rather than answering your exact question, I will answer the more general one: in R, how would I test the hypothesis of a difference in slopes between two groups, when there is suspected unequal variance in the response variable?

Overview

There are several options, two of which I will go into. All of the good options involve combining the two datasets into a single modelling strategy, and comparing a "full" model, which includes an interaction between gender and the slopes, against a "no interaction" model, which has an additive gender effect but the same slopes for the other variables.

If we were prepared to assume the variance is the same in the two gender groups, we would just fit both models to the combined data by ordinary least squares and use a classical F test:

Data <- data.frame(
  gender = sample(c("men", "women"), 2000, replace = TRUE),
  var1 = sample(c("value1", "value2"), 2000, replace = TRUE),
  var2 = sample(c("valueA", "valueB"), 2000, replace = TRUE),
  var3 = sample(c("valueY", "valueZ"), 2000, replace = TRUE),
  y = sample(0:10, 2000, replace = TRUE)
)


lm_full <- lm(y ~ (var1 + var2 + var3) * gender, data = Data)
lm_nointeraction <- lm(y ~ var1 + var2 + var3 + gender, data = Data)

# If the variance were equal we could just do an F-test:
anova(lm_full, lm_nointeraction)
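
If the resulting p-value is small, the interaction terms add explanatory power - that is, at least one slope differs between men and women. (With this randomly simulated data we would of course expect no difference.)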

However, this assumption is not acceptable here, so we need an alternative. I found this discussion on Cross-Validated useful.
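
(As an aside, a literal Welch-style t test on a single slope can be pieced together from the two separate fits the question describes: take the difference of the two coefficient estimates, scale by the combined standard error, and use the Welch-Satterthwaite approximation for the degrees of freedom. Here is a minimal sketch, assuming the coefficient estimates are approximately normal; the row name var1value2 is just the coefficient R generates for var1 from the simulated data.)

# Aside: a direct Welch-style comparison of one slope across two separate fits
lm_men   <- lm(y ~ var1 + var2 + var3, data = subset(Data, gender == "men"))
lm_women <- lm(y ~ var1 + var2 + var3, data = subset(Data, gender == "women"))

ct_men   <- coef(summary(lm_men))["var1value2", ]
ct_women <- coef(summary(lm_women))["var1value2", ]

b_diff  <- ct_men["Estimate"] - ct_women["Estimate"]
se_comb <- sqrt(ct_men["Std. Error"] ^ 2 + ct_women["Std. Error"] ^ 2)

t_stat <- b_diff / se_comb

# Welch-Satterthwaite approximation for the degrees of freedom
df_ws <- se_comb ^ 4 /
  (ct_men["Std. Error"] ^ 4 / df.residual(lm_men) +
   ct_women["Std. Error"] ^ 4 / df.residual(lm_women))

2 * pt(-abs(t_stat), df_ws)  # two-sided p-value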

Option 1 - weighted least squares

I am not sure whether this is identical to Welch's t-test; I suspect it is a higher-level generalization of it. It is a pretty simple parametric approach to the problem. Basically, we model the variance of the response at the same time as its mean. Then, in the fitting process (which becomes iterative), points that are expected to have higher variance, i.e. more randomness, are given less weight. The gls function in the nlme package - generalized least squares - does this for us.

# Option 1 - modelling the variance, and making weights inversely proportional to it
library(nlme)
# method = "ML" (rather than the REML default) so that AICs of models with
# different fixed effects are comparable
gls_full <- gls(y ~ (var1 + var2 + var3) * gender, data = Data,
                weights = varPower(), method = "ML")
gls_nointeraction <- gls(y ~ var1 + var2 + var3 + gender, data = Data,
                         weights = varPower(), method = "ML")

# test the two models against each other (preferred):
AIC(gls_full, gls_nointeraction) # lower value wins

# or test the individual interaction terms:
summary(gls_full)$tTable
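
Because the two models are fitted by maximum likelihood (the method = "ML" argument above), the nested pair can also be compared with a likelihood-ratio test:

anova(gls_nointeraction, gls_full)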

Option 2 - robust regression, compared via the bootstrap

The second option is to use M-estimation, which is designed to be robust to unequal variance in groups within the data. Good practice when comparing two robust regression models is to choose some kind of validation statistic and use the bootstrap to see which model does better on that statistic on average.

This is a little more complicated, but here is a worked example with the simulated data:

# Option 2 - use robust regression and the bootstrap
library(MASS)
library(boot)
rlm_full <- rlm(y ~ (var1 + var2 + var3) * gender, data = Data)
rlm_nointeraction <- rlm(y ~ var1 + var2 + var3 + gender, data = Data)

# Could just test to see which one fits best (lower value wins)
AIC(rlm_full, rlm_nointeraction)

# or - preferred - use the bootstrap to validate each model and pick the best one.
# First we need a function to give us a performance statistic on how good
# a model is at predicting values compared to actuality.  Let's use root 
#  mean squared error:
RMSE <- function(predicted, actual){
  sqrt(mean((actual - predicted) ^ 2))
}

# This function takes a dataset full_data, "scrambled" by the resampling vector i.
# It fits both models to the resampled/scrambled version of the data, and uses them
# to predict the values of y in the full, original, unscrambled dataset.  This is
# described as the "simple bootstrap" in Harrell, *Regression Modeling Strategies*,
# building on Efron and Tibshirani.
simple_bootstrap <- function(full_data, i){
  sampled_data <- full_data[i, ]

  rlm_full <- rlm(y ~ (var1 + var2 + var3) * gender, data = sampled_data)
  rlm_nointeraction <- rlm(y ~ var1 + var2 + var3 + gender, data = sampled_data)

  pred_full <- predict(rlm_full, newdata = full_data)
  pred_nointeraction <- predict(rlm_nointeraction, newdata = full_data)

  rmse_full <- RMSE(pred_full, full_data$y)
  rmse_nointeraction <- RMSE(pred_nointeraction, full_data$y)
  return(rmse_full - rmse_nointeraction)
}

rlm_boot <- boot(Data, statistic = simple_bootstrap, R = 500, strata = Data$gender)

# Confidence interval for the improvement from the full model, compared to the one with no interaction:
boot.ci(rlm_boot, type = "perc")
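
The statistic being bootstrapped is rmse_full - rmse_nointeraction, so a percentile interval lying entirely below zero would mean the interaction model predicts reliably better - evidence that the slopes differ by gender - while an interval that straddles zero means no such evidence.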

Conclusion

Any one of the above would be suitable. When I suspect changing variance, I usually regard the bootstrap as an important part of inference, and it can be used even with nlme::gls. The bootstrap is more robust, and renders redundant many of the older, named statistical tests for handling specific situations.