R2jags skips the adaptation phase with normal priors in a simple Poisson regression

Asked: 2019-07-10 11:54:25

Tags: r, r2jags

I am running a classical Bayesian Poisson regression on the 'roaches' dataset from the 'rstanarm' package, using JAGS, Stan and base R's glm, and all three give the same final results. To get a better output I tried R2jags, and something strange happens: when I set normal priors, the adaptation phase is skipped (there is no progress bar made of '+' signs) and every chain converges to the same wrong values. If instead I set, for example, Cauchy priors or uniform priors in R2jags, I get the correct values.

I tried changing the number of iterations, the thinning and the precision of the normal priors, but nothing made a difference.

Below is the full, runnable code:

library(R2jags)
library(rstanarm)   # provides the 'roaches' dataset

data <- roaches
colnames(data)[1] <- "Y"       # rename the response column
data.list <- as.list(data)
data.list$N <- nrow(data)      # number of observations for the JAGS loop

#normal priors

modelString_1="

model {
  for (i in 1:N) {
     Y[i] ~ dpois(lambda[i])
     log(lambda[i]) <-offset[i]+beta_0+ beta_1*roach1[i]+beta_2*treatment[i]+beta_3*senior[i]
     offset[i]<-log(exposure2[i])
  }
beta_0~dnorm(0.0,tau)
beta_1~dnorm(0.0,tau)
beta_2~dnorm(0.0,tau)
beta_3~dnorm(0.0,tau)
tau<-1/100000


} 
"
writeLines(modelString_1,con='classical_poisson_1.txt')
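
Note that dnorm() in JAGS is parameterised by a precision, not a standard deviation, so tau <- 1/100000 gives the coefficients a Normal(0, sd ≈ 316) prior. A quick check in plain R (for illustration only):

# JAGS dnorm(mean, precision): precision = 1 / sd^2
tau <- 1 / 100000        # precision used in modelString_1
sqrt(1 / tau)            # implied prior standard deviation, about 316.23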

#cauchy priors

modelString_2="

model {
  for (i in 1:N) {
     Y[i] ~ dpois(lambda[i])
     log(lambda[i]) <-offset[i]+beta_0+ beta_1*roach1[i]+beta_2*treatment[i]+beta_3*senior[i]
     offset[i]<-log(exposure2[i])
  }
beta_0~dt(0,0.00001,1)  
beta_1~dt(0,0.00001,1)
beta_2~dt(0,0.00001,1)
beta_3~dt(0,0.00001,1)


} 
"
writeLines(modelString_2,con='classical_poisson_2.txt')
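
Likewise, dt(mu, tau, k) in JAGS takes a precision-like second argument and k degrees of freedom; with k = 1 it is a Cauchy distribution with scale 1/sqrt(tau), so dt(0, 0.00001, 1) is roughly as wide as the normal prior above (illustrative check):

# JAGS dt(mu, tau, k): k = 1 gives a Cauchy with scale = 1 / sqrt(tau)
tau_t <- 0.00001
sqrt(1 / tau_t)          # implied Cauchy scale, about 316.23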

params<-c("beta_0","beta_1","beta_2","beta_3")

Model_1<-jags(model.file='classical_poisson_1.txt', parameters.to.save=params,n.iter=20000,data=data.list,n.chains=3,inits = NULL)

#correct values from MCMC

Model_2<-jags(model.file='classical_poisson_2.txt', parameters.to.save=params,n.iter=20000,data=data.list,n.chains=3,inits = NULL)
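
The attempted fixes mentioned above were of the following kind; this is only an illustrative sketch using the standard R2jags::jags() arguments (the exact values tried are not shown in the question). When n.burnin is not supplied, jags() discards the first half of n.iter and picks n.thin so that roughly 1000 draws per chain are kept, which is why the output below reports "first 10000 discarded, n.thin = 10".

# Illustrative only: more iterations, explicit burn-in, heavier thinning
# (these particular values are hypothetical)
Model_1_long <- jags(model.file = 'classical_poisson_1.txt', parameters.to.save = params,
                     n.iter = 100000, n.burnin = 50000, n.thin = 50,
                     data = data.list, n.chains = 3, inits = NULL)

# Changing the normal prior precision means editing modelString_1, e.g.
# tau <- 1/100 (prior sd = 10) instead of tau <- 1/100000 (prior sd ≈ 316)

None of these changed the behaviour of the first model.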

R output:

Output from the first model:

Compiling model graph
   Resolving undeclared variables
   Allocating nodes
Graph information:
   Observed stochastic nodes: 262
   Unobserved stochastic nodes: 4
   Total graph size: 1877

Initializing model

  |**************************************************| 100%
Inference for Bugs model at "classical_poisson_1.txt", fit using jags,
 3 chains, each with 20000 iterations (first 10000 discarded), n.thin = 10
 n.sims = 3000 iterations saved
           mu.vect sd.vect      2.5%       25%       50%       75%     97.5%  Rhat n.eff
beta_0       4.038   0.044     3.990     4.023     4.040     4.055     4.088 1.003  3000
beta_1       0.005   0.000     0.005     0.005     0.005     0.005     0.005 1.002  1100
beta_2      -0.740   0.028    -0.793    -0.758    -0.740    -0.722    -0.688 1.001  3000
beta_3      -0.047   0.038    -0.117    -0.072    -0.046    -0.022     0.025 1.001  3000
deviance 17417.012 275.140 16959.270 17261.288 17414.916 17578.130 17919.022 1.001  3000

For each parameter, n.eff is a crude measure of effective sample size,
and Rhat is the potential scale reduction factor (at convergence, Rhat=1).

DIC info (using the rule, pD = var(deviance)/2)
pD = 37874.0 and DIC = 55291.0
DIC is an estimate of expected predictive error (lower deviance is better).
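
For reference, the pD reported here follows the rule printed in the DIC info (pD = var(deviance)/2), so it can be approximately reproduced from the saved posterior draws. A small sketch, using the Model_1 object fitted above (R2jags may combine chains slightly differently, so the value need not match exactly):

# Recompute var(deviance)/2 from the stored draws (DIC = TRUE is the R2jags default)
dev_draws <- Model_1$BUGSoutput$sims.list$deviance
var(dev_draws) / 2       # compare with the pD reported above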

Output from the second model (correct):

Compiling model graph
   Resolving undeclared variables
   Allocating nodes
Graph information:
   Observed stochastic nodes: 262
   Unobserved stochastic nodes: 4
   Total graph size: 1876

Initializing model

  |++++++++++++++++++++++++++++++++++++++++++++++++++| 100%
  |**************************************************| 100%
Inference for Bugs model at "classical_poisson_2.txt", fit using jags,
 3 chains, each with 20000 iterations (first 10000 discarded), n.thin = 10
 n.sims = 3000 iterations saved
           mu.vect sd.vect      2.5%       25%       50%       75%     97.5%  Rhat n.eff
beta_0       3.089   0.021     3.046     3.074     3.089     3.104     3.130 1.001  3000
beta_1       0.007   0.000     0.007     0.007     0.007     0.007     0.007 1.001  3000
beta_2      -0.516   0.025    -0.566    -0.533    -0.516    -0.500    -0.468 1.001  2200
beta_3      -0.379   0.034    -0.446    -0.402    -0.379    -0.356    -0.314 1.001  3000
deviance 12187.995   2.839 12184.462 12185.888 12187.338 12189.416 12195.288 1.001  3000

For each parameter, n.eff is a crude measure of effective sample size,
and Rhat is the potential scale reduction factor (at convergence, Rhat=1).

DIC info (using the rule, pD = var(deviance)/2)
pD = 4.0 and DIC = 12192.0
DIC is an estimate of expected predictive error (lower deviance is better).

0 Answers:

No answers