I want to compute marginal effects of an "mlogit" object where the explanatory variable is categorical (a factor). While effects() returns something when the data are numeric, it throws an error when they are categorical. For simplicity, I show a bivariate example below.
# with mlogit
library(mlogit)
ml.dat <- mlogit.data(df3, choice="y", shape="wide")
fit.mnl <- mlogit(y ~ 1 | x, data=ml.dat)
head(effects(fit.mnl, covariate="x", data=ml.dat))
# FALSE TRUE
# 1 -0.01534581 0.01534581
# 2 -0.01534581 0.01534581
# 3 -0.20629452 0.20629452
# 4 -0.06903946 0.06903946
# 5 -0.24174312 0.24174312
# 6 -0.39306240 0.39306240
# with glm
fit.glm <- glm(y ~ x, df3, family = binomial)
head(effects(fit.glm))
# (Intercept) x
# -0.2992979 -4.8449254 2.3394989 0.2020127 0.4616640 1.0499595
# transform to factor
df3F <- within(df3, x <- factor(x))
class(df3F$x) == "factor"
# [1] TRUE
glm() still yields something,
# with glm
fit.glmF <- glm(y ~ x, df3F, family = binomial)
head(effects(fit.glmF))
# (Intercept) x2 x3 x4 x5 x6
# 0.115076511 -0.002568206 -0.002568206 -0.003145397 -0.003631992 -0.006290794
whereas the mlogit() approach
# with mlogit
ml.datF <- mlogit.data(df3F, choice="y", shape="wide")
fit.mnlF <- mlogit(y ~ 1 | x, data=ml.datF)
head(effects(fit.mnlF, covariate="x", data=ml.datF))
throws this error:
Error in `contrasts<-`(`*tmp*`, value = contr.funs[1 + isOF[nn]]) :
  contrasts can be applied only to factors with 2 or more levels
In addition: Warning message:
In Ops.factor(data[, covariate], eps) : '+' not meaningful for factors
How can I solve this?
I have already tried manipulating effects.mlogit() along the lines of this answer, but it did not help with my problem.
Note: This question is related to this solution, which I would like to apply to categorical explanatory variables.
(The following demonstrates the problem when applying the given solution to the underlying issue of the question linked above. See the comments there.)
# new example ----
library(mlogit)
ml.d <- mlogit.data(df1, choice="y", shape="wide")
ml.fit <- mlogit(y ~ 1 | factor(x), reflevel="1", data=ml.d)
AME.fun2 <- function(betas) {
  aux <- model.matrix(y ~ x, df1)[, -1]
  ml.datF <- mlogit.data(data.frame(y=df1$y, aux),
                         choice="y", shape="wide")
  frml <- mFormula(formula(paste("y ~ 1 |", paste(colnames(aux),
                                                  collapse=" + "))))
  fit.mnlF <- mlogit(frml, data=ml.datF)
  fit.mnlF$coefficients <- betas  # probably?
  colMeans(effects(fit.mnlF, covariate="x2", data=ml.datF))  # first co-factor?
}
(AME.mnl <- AME.fun2(ml.fit$coefficients))
require(numDeriv)
grad <- jacobian(AME.fun2, ml.fit$coef)
(AME.mnl.se <- matrix(sqrt(diag(grad %*% vcov(ml.fit) %*% t(grad))),
nrow=3, byrow=TRUE))
AME.mnl / AME.mnl.se
# doesn't work yet though...
# probably "true" values, obtained from Stata:
# # ame
# 1 2 3 4 5
# 1. NA NA NA NA NA
# 2. -0.400 0.121 0.0971 0.113 0.0686
# 3. -0.500 -0.179 0.0390 0.166 0.474
#
# # z-values
# 1 2 3 4 5
# 1. NA NA NA NA NA
# 2. -3.86 1.25 1.08 1.36 0.99
# 3. -5.29 -2.47 0.37 1.49 4.06
df3 <- structure(list(x = c(11, 11, 7, 10, 9, 8, 9, 6, 9, 9, 8, 9, 11,
7, 8, 11, 12, 5, 8, 8, 11, 6, 13, 12, 5, 8, 7, 11, 8, 10, 9,
10, 7, 9, 2, 10, 3, 6, 11, 9, 7, 8, 4, 12, 8, 12, 11, 9, 12,
9, 7, 7, 7, 10, 4, 10, 9, 6, 7, 8, 9, 13, 10, 8, 10, 6, 7, 10,
9, 6, 4, 6, 6, 8, 6, 9, 3, 7, 8, 2, 8, 6, 7, 9, 10, 8, 6, 5,
5, 7, 9, 1, 6, 11, 11, 9, 7, 8, 9, 9), y = c(TRUE, TRUE, TRUE,
TRUE, TRUE, TRUE, TRUE, FALSE, FALSE, TRUE, FALSE, TRUE, TRUE,
TRUE, FALSE, TRUE, TRUE, FALSE, TRUE, TRUE, TRUE, FALSE, TRUE,
TRUE, FALSE, TRUE, FALSE, TRUE, FALSE, TRUE, TRUE, TRUE, FALSE,
TRUE, FALSE, TRUE, FALSE, FALSE, TRUE, TRUE, TRUE, FALSE, FALSE,
TRUE, FALSE, TRUE, TRUE, FALSE, TRUE, TRUE, TRUE, FALSE, FALSE,
TRUE, FALSE, TRUE, TRUE, FALSE, FALSE, TRUE, TRUE, TRUE, TRUE,
FALSE, TRUE, FALSE, FALSE, TRUE, FALSE, FALSE, FALSE, FALSE,
FALSE, FALSE, FALSE, TRUE, FALSE, FALSE, FALSE, FALSE, TRUE,
FALSE, FALSE, TRUE, TRUE, FALSE, FALSE, FALSE, FALSE, FALSE,
TRUE, FALSE, FALSE, TRUE, TRUE, TRUE, FALSE, FALSE, TRUE, FALSE
)), class = "data.frame", row.names = c(NA, -100L))
summary(df3)
x y
Min. : 1.00 Mode :logical
1st Qu.: 7.00 FALSE:48
Median : 8.00 TRUE :52
Mean : 8.08
3rd Qu.:10.00
Max. :13.00
df1 <- structure(list(y = c(5, 4, 2, 2, 2, 3, 5, 4, 1, 1, 2, 4, 1, 4,
5, 5, 2, 3, 3, 5, 5, 3, 2, 4, 5, 1, 3, 3, 4, 3, 5, 2, 4, 4, 5,
5, 5, 2, 1, 5, 1, 3, 1, 4, 1, 2, 2, 4, 3, 1, 4, 3, 1, 1, 5, 2,
5, 4, 2, 2, 4, 2, 3, 5, 4, 1, 2, 2, 3, 5, 2, 5, 3, 3, 3, 1, 3,
1, 1, 4, 3, 4, 5, 2, 1, 1, 3, 1, 5, 4, 4, 2, 5, 3, 4, 4, 3, 1,
5, 2), x = structure(c(2L, 2L, 1L, 2L, 1L, 2L, 1L, 2L, 2L, 2L,
2L, 1L, 1L, 2L, 2L, 3L, 2L, 2L, 2L, 2L, 3L, 2L, 2L, 3L, 3L, 2L,
3L, 2L, 2L, 2L, 3L, 2L, 1L, 3L, 2L, 3L, 3L, 1L, 1L, 3L, 2L, 2L,
1L, 2L, 1L, 1L, 1L, 2L, 2L, 1L, 1L, 2L, 1L, 1L, 2L, 2L, 3L, 2L,
2L, 2L, 3L, 2L, 3L, 1L, 2L, 1L, 2L, 2L, 1L, 3L, 2L, 2L, 1L, 2L,
2L, 1L, 3L, 1L, 1L, 2L, 2L, 3L, 3L, 2L, 2L, 1L, 1L, 1L, 3L, 2L,
3L, 2L, 3L, 1L, 2L, 3L, 3L, 1L, 2L, 2L), .Label = c("1", "2",
"3"), class = "factor")), row.names = c(NA, -100L), class = "data.frame")
Answer 0 (score: 1)
That effects does not work with factors is to be expected, because otherwise the output would contain another dimension, complicating the results somewhat, and it is quite reasonable to instead, as in my solution below, ask for the effects of a particular factor level only, rather than of all levels at once. Also, as I explain below, marginal effects are not uniquely defined in the case of categorical variables, which would be an extra complication for effects.
A natural workaround is to manually convert the factor variable into a set of dummy variables, as in
aux <- model.matrix(y ~ x, df3F)[, -1]
head(aux)
# x2 x3 x4 x5 x6 x7 x8 x9 x10 x11 x12 x13
# 1 0 0 0 0 0 0 0 0 0 1 0 0
# 2 0 0 0 0 0 0 0 0 0 1 0 0
# 3 0 0 0 0 0 1 0 0 0 0 0 0
# 4 0 0 0 0 0 0 0 0 1 0 0 0
# 5 0 0 0 0 0 0 0 1 0 0 0 0
# 6 0 0 0 0 0 0 1 0 0 0 0 0
so that the data becomes
ml.datF <- mlogit.data(data.frame(y = df3F$y, aux), choice = "y", shape = "wide")
We also need to construct the formula manually:
frml <- mFormula(formula(paste("y ~ 1 |", paste(colnames(aux), collapse = " + "))))
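To see what this step produces, the pasted formula string for the twelve dummies of df3F looks as follows (a self-contained illustration, with the dummy names written out rather than taken from aux):

```r
# The dummy-column names of df3F, as shown by head(aux) above:
nms <- paste0("x", 2:13)
paste("y ~ 1 |", paste(nms, collapse = " + "))
# "y ~ 1 | x2 + x3 + x4 + x5 + x6 + x7 + x8 + x9 + x10 + x11 + x12 + x13"
```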
So far so good. Now, if we run
fit.mnlF <- mlogit(frml, data = ml.datF)
head(effects(fit.mnlF, covariate = "x2", data = ml.datF))
# FALSE TRUE
# 1 -1.618544e-15 0.000000e+00
# 2 -1.618544e-15 0.000000e+00
# 3 -7.220891e-08 7.221446e-08
# 4 -1.618544e-15 0.000000e+00
# 5 -5.881129e-08 5.880851e-08
# 6 -8.293366e-08 8.293366e-08
then the results are incorrect. What effects did here is treat x2 as a continuous variable and compute the usual marginal effects for that case. That is, if b2 is the coefficient corresponding to x2 and our model is f(x, b2), then effects computes the derivative of f with respect to x2, evaluated at each observed vector x_i. That is wrong because x2 only takes the values 0 and 1, never values around 0 or around 1, which is what differentiation assumes (the concept of a limit)! For instance, consider the other dataset, df1. In that case we incorrectly get
colMeans(effects(fit.mnlF, covariate = "x2", data = ml.datF))
# 1 2 3 4 5
# -0.25258378 0.07364406 0.05336283 0.07893391 0.04664298
Here is another way (using a derivative approximation) to obtain the same incorrect result:
temp <- ml.datF
temp$x2 <- temp$x2 + 0.0001
colMeans(predict(fit.mnlF, newdata = temp, type = "probabilities") -
predict(fit.mnlF, newdata = ml.datF, type = "probabilities")) / 0.0001
# 1 2 3 4 5
# -0.25257597 0.07364089 0.05336032 0.07893273 0.04664202
Instead of using effects, here I computed the same wrong marginal effects by hand by using predict twice: the result is the mean of ({fitted probability with x2new = x2old + 0.0001} - {fitted probability with x2new = x2old}) / 0.0001. That is, we look at the change in predicted probabilities from moving x2 upwards by 0.0001, either from 0 to 0.0001 or from 1 to 1.0001. Neither of these makes sense. But of course, we should not expect anything else from effects, since x2 in the data is numeric.
So, the question remains how to compute the correct (average) marginal effects. As I said, the marginal effect of a categorical variable is not uniquely defined. Suppose x_i is whether individual i has a job and y_i is whether they have a car. Then there are at least the following six quantities to consider: the effect on the probability of y_i = 1 when going (1) from x_i = 0 to x_i = 1, (2) from x_i = 0 to the observed x_i, or (3) from the observed x_i to 1; and, since for average marginal effects we may want to average only over those individuals for whom the change in 1-3 actually makes a change, the correspondingly restricted versions (4)-(6) of those three quantities.
Judging from your results, Stata uses option 5 (from x_i = 0 to the observed x_i, averaged over those individuals for whom this makes a change), so I will reproduce the same results, but implementing any of the other options is straightforward, and I suggest thinking about which of them is interesting in your particular application.
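For instance, a minimal sketch of the "going from the reference level to level 3 for everyone, averaged over all individuals" variant would only change the counterfactual datasets; AME.option1, aux0 and aux1 are names of my own, and fit and dat are assumed to be a fitted mlogit model and long-format data built from df1's dummies as inside AME.fun2 below:

```r
# Sketch: average change in fitted probabilities when moving every
# individual from the reference level (x2 = 0, x3 = 0) to level 3
# (x2 = 0, x3 = 1). Assumes fit and dat are built as in AME.fun2.
AME.option1 <- function(fit, dat) {
  aux0 <- dat; aux0$x2 <- 0; aux0$x3 <- 0  # everyone at the reference level
  aux1 <- dat; aux1$x2 <- 0; aux1$x3 <- 1  # everyone at level 3
  colMeans(predict(fit, newdata = aux1) - predict(fit, newdata = aux0))
}
```

Standard errors for this variant would follow exactly as below, by wrapping the computation in a function of the coefficients and applying jacobian.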
AME.fun2 <- function(betas) {
  aux <- model.matrix(y ~ x, df1)[, -1]
  ml.datF <- mlogit.data(data.frame(y = df1$y, aux), choice="y", shape="wide")
  frml <- mFormula(formula(paste("y ~ 1 |", paste(colnames(aux), collapse=" + "))))
  fit.mnlF <- mlogit(frml, data = ml.datF)
  fit.mnlF$coefficients <- betas
  aux <- ml.datF                                    # Auxiliary dataset
  aux$x3 <- 0                                       # Going from 0 to the observed x_i
  idx <- unique(aux[aux$x3 != ml.datF$x3, "chid"])  # Where does it make a change?
  actual <- predict(fit.mnlF, newdata = ml.datF)
  counterfactual <- predict(fit.mnlF, newdata = aux)
  colMeans(actual[idx, ] - counterfactual[idx, ])
}
(AME.mnl <- AME.fun2(ml.fit$coefficients))
# 1 2 3 4 5
# -0.50000000 -0.17857142 0.03896104 0.16558441 0.47402597
require(numDeriv)
grad <- jacobian(AME.fun2, ml.fit$coef)
AME.mnl.se <- matrix(sqrt(diag(grad %*% vcov(ml.fit) %*% t(grad))), nrow = 1, byrow = TRUE)
AME.mnl / AME.mnl.se
# [,1] [,2] [,3] [,4] [,5]
# [1,] -5.291503 -2.467176 0.36922 1.485058 4.058994
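As a side note, the standard errors above come from the delta method: writing the vector of average marginal effects as a function g of the estimated coefficients,

```latex
\operatorname{Var}\bigl(g(\hat\beta)\bigr) \approx J \,\operatorname{Var}(\hat\beta)\, J^{\top},
\qquad
J = \left.\frac{\partial g(\beta)}{\partial \beta^{\top}}\right|_{\beta = \hat\beta},
```

where jacobian(AME.fun2, ml.fit$coef) computes J numerically and vcov(ml.fit) supplies Var of the coefficients; the square roots of the diagonal entries are the reported standard errors.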