How can I improve the integration and parameterization of a convolved distribution?

Asked: 2016-03-18 00:19:09

Tags: r integration modeling convolution parameterization

I am trying to solve for the parameters of a gamma distribution that is convolved with both a normal and a lognormal distribution. I can derive the parameters of the normal and lognormal components experimentally, so I only want to solve for the gamma parameters.

I have tried three approaches to this problem:

1) Generate convolved random datasets (i.e. rnorm() + rlnorm() + rgamma()) and use least-squares regression on linearly or logarithmically binned histograms of the data (not shown, but a rough sketch follows this list; the RNG-based fits were badly biased and did not optimize well).

2) "Brute force" numerical integration of the convolved functions (example code #1).

3) Numerical integration with the distr package (example code #2).
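
For concreteness, here is a rough sketch of approach 1 (illustrative only; the bin range, bin width, and starting values are arbitrary assumptions):

# approach 1, sketched: least-squares fit of binned densities, simulated vs. "observed" data
set.seed(1)
obs <- rnorm(1e4, 0, 5) + rlnorm(1e4, 2.5, .5) + rgamma(1e4, 2, scale=7) # pretend data
brks <- seq(-50, 500, by=1) # generous fixed bins so every simulation shares the same binning
binned <- function(x) hist(x[x > -50 & x < 500], breaks=brks, plot=FALSE)$density
d.obs <- binned(obs)
ssqFUN <- function(par) {
  if (any(par <= 0)) return(1e10) # keep the simplex away from invalid gamma params
  sim <- rnorm(1e4, 0, 5) + rlnorm(1e4, 2.5, .5) + rgamma(1e4, par[1], scale=par[2])
  sum((d.obs - binned(sim))^2)
}
optim(c(1.5, 5), ssqFUN) # noisy objective -> biased, unstable estimates, as noted above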

I have had limited success with all three approaches. Importantly, they all seem to work for "nominal" values of the gamma parameters, but they all begin to fail when k (shape) is low and theta (scale) is high, which is exactly where my experimental data lie. Please see the examples below.

Direct numerical integration

# make the functions
f.N <- function(n) dnorm(n, N[1], N[2])
f.L <- function(l) dlnorm(l, L[1], L[2])
f.G <- function(g) dgamma(g, G[1], scale=G[2])

# make convolved functions
f.Z <- function(z) integrate(function(x,z) f.L(z-x)*f.N(x), -Inf, Inf, z)$value # L+N
f.Z <- Vectorize(f.Z)
f.Z1 <- function(z) integrate(function(x,z) f.G(z-x)*f.Z(x), -Inf, Inf, z)$value # G+(L+N)
f.Z1 <- Vectorize(f.Z1)

# params of Norm, Lnorm, and Gamma
N <- c(0,5)
L <- c(2.5,.5)
G <- c(2,7) # this distribution is the one we ultimately want to solve for.
# G <- c(.5,10) # 0<k<1
# G <- c(.25,5e4) # ballpark params of experimental data 

# generate some data
set.seed(1)
rN <- rnorm(1e4, N[1], N[2])
rL <- rlnorm(1e4, L[1], L[2])
rG <- rgamma(1e4, G[1], scale=G[2])
Z <- rN + rL
Z1 <- rN + rL + rG

# check the fit
hist(Z,freq=F,breaks=100, xlim=c(-10,50), col=rgb(0,0,1,.25))
hist(Z1,freq=F,breaks=100, xlim=c(-10,50), col=rgb(1,0,0,.25), add=T)
z <- seq(-10,50,1)
lines(z,f.Z(z),lty=2,col="blue", lwd=2) # looks great... convolution performs as expected.
lines(z,f.Z1(z),lty=2,col="red", lwd=2) # this works perfectly so long as k(shape)>=1

# I'm guessing the failure to compute when shape 0 < k < 1 is due to 
# numerical integration problems, but I don't know how to fix it.
integrate(dgamma, -Inf, Inf, shape=1, scale=1) # ==1
integrate(dgamma, 0, Inf, shape=1, scale=1) # ==1
integrate(dgamma, -Inf, Inf, shape=.5, scale=1) # !=1
integrate(dgamma, 0, Inf, shape=.5, scale=1) # != 1
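# (Not from the original post: one workaround is to split the integral at a finite point;
# on a finite interval, integrate() copes with the integrable x -> 0 singularity of
# dgamma() when 0 < shape < 1.)
integrate(dgamma, 0, 1, shape=.5, scale=1)$value +
  integrate(dgamma, 1, Inf, shape=.5, scale=1)$value # ~= 1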

# Let's try to estimate gamma anyway, supposing k>=1
optimFUN <- function(par, N, L) {
  print(par)
  f.G <<- function(g) dgamma(g, par[1], scale=par[2]) # rebuild f.G with the trial params so f.Z1 sees them
  -sum(log(f.Z1(Z1[1:4e2])))
}
fitresult <- optim(c(1.6,5), optimFUN, N=N, L=L)
par <- fitresult$par
f.G <- function(g) dgamma(g, par[1], scale=par[2]) # f.G with the fitted params, for plotting
lines(z,f.Z1(z),lty=2,col="green3", lwd=2) # not so great... likely better w/ more data, 
# but it is SUPER slow and I observe large step sizes.
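
A possible refinement of the optimization above (a sketch, not one of the original three attempts): optimize over log(shape) and log(scale), so the parameters stay positive and the step sizes are better scaled.

# same likelihood as optimFUN above, but in log-parameter space
optimFUN.log <- function(logpar, N, L) {
  f.G <<- function(g) dgamma(g, exp(logpar[1]), scale=exp(logpar[2]))
  -sum(log(f.Z1(Z1[1:4e2])))
}
fitresult.log <- optim(log(c(1.6,5)), optimFUN.log, N=N, L=L)
exp(fitresult.log$par) # back-transform to shape and scale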

Attempted convolution via the distr package

# params of Norm, Lnorm, and Gamma
N <- c(0,5)
L <- c(2.5,.5)
G <- c(2,7) # this distribution is the one we ultimately want to solve for.
# G <- c(.5,10) # 0<k<1
# G <- c(.25,5e4) # ballpark params of experimental data 

# make the distributions and "convolvings"
dN <- Norm(N[1], N[2])
dL <- Lnorm(L[1], L[2])
dG <- Gammad(G[1], G[2])
d.NL <- d(convpow(dN+dL,1))
d.NLG <- d(convpow(dN+dL+dG,1)) # for large values of theta, no matter how I change 
# getdistrOption("DefaultNrFFTGridPointsExponent"), grid size is always wrong.
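# (Not from the original post: besides the number of FFT grid points, distr's
# TruncQuantile option controls how far out the discretisation grid extends, which may be
# the limiting factor when theta is very large; the values below are guesses to experiment with.)
# distroptions(TruncQuantile = 1e-8)
# distroptions(DefaultNrFFTGridPointsExponent = 15)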

# Generate some data
set.seed(1)
rN <- r(dN)(1e4)
rL <- r(dL)(1e4)
rG <- r(dG)(1e4)
r.NL <- rN + rL
r.NLG <- rN + rL + rG

# check the fit
hist(r.NL, freq=F, breaks=100, xlim=c(-10,50), col=rgb(0,0,1,.25))
hist(r.NLG, freq=F, breaks=100, xlim=c(-10,50), col=rgb(1,0,0,.25), add=T)
z <- seq(-10,50,1)
lines(z,d.NL(z), lty=2, col="blue", lwd=2) # looks great... convolution performs as expected.
lines(z,d.NLG(z), lty=2, col="red", lwd=2) # this appears to work perfectly 
# for most values of K and low values of theta

# this is looking a lot more promising... how about estimating gamma params?
optimFUN <- function(par, dN, dL) {
  tG <- Gammad(par[1],par[2])
  d.NLG <- d(convpow(dN+dL+tG,1))
  p <- d.NLG(r.NLG)
  p[p==0] <- 1e-15 # because sometimes very low probabilities evaluate to 0... 
# ...and logs don't like that.
  -sum(log(p))
}
fitresult <- optim(c(1,1e4), optimFUN, dN=dN, dL=dL)
fdG <- Gammad(fitresult$par[1], fitresult$par[2])
fd.NLG <- d(convpow(dN+dL+fdG,1))
lines(z,fd.NLG(z), lty=2, col="green3", lwd=2) ## this works perfectly when ~k>1 & ~theta<100... but throws
## "Error in validityMethod(object) : shape has to be positive" when k decreases and/or theta increases 
## (boundary subject to RNG).
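
One possible way around the "shape has to be positive" error (a sketch, assuming the failure comes from optim() proposing non-positive parameters): constrain the search with box constraints so the optimizer never leaves the valid region.

fitresult <- optim(c(1, 1e4), optimFUN, dN=dN, dL=dL,
                   method="L-BFGS-B", lower=c(1e-3, 1e-3))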

Can I speed up the integration in example 1? Can I increase the grid size in example 2 (the distr package)? How do I solve the k < 1 problem? Can I rescale my data in a better way so that it can be evaluated at high values of theta? Is there a better way altogether? Help!

1 Answer:

Answer 0 (score: 1)

Well, convolution of a function with a Gaussian kernel calls for Gauss–Hermite quadrature. In R it is implemented in a dedicated package: https://cran.r-project.org/web/packages/gaussquad/gaussquad.pdf

UPDATE

For convolution involving the gamma distribution, the Gauss–Laguerre quadrature in this package might also be useful.
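
(Illustrative check, not from the original answer: a quick Gauss–Laguerre sanity test, assuming the laguerre.quadrature.rules()/laguerre.quadrature() interface of gaussquad mirrors the Hermite one used below; the Laguerre weight exp(-x) on [0, Inf) is the gamma kernel for shape = 1.)

library(gaussquad)

n.quad.gl <- 40 # modest order; very high orders can be numerically delicate
rule.gl <- laguerre.quadrature.rules(n.quad.gl)[[n.quad.gl]]

# integrating 1 against the weight exp(-x) over [0, Inf) should give 1
q.gl <- laguerre.quadrature(function(x) rep(1.0, length(x)), rule.gl)
print(q.gl - 1.0)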

UPDATE II

Here is quick code for convolving a Gaussian with a lognormal; hopefully it does not contain too many errors, and it prints a reasonable-looking graph.

library(gaussquad)

n.quad <- 170 # integration order

# get the particular weights/abscissas as data frame with 2 observables and n.quad observations
rule <- ghermite.h.quadrature.rules(n.quad, mu = 0.0)[[n.quad]]

# test function - integrate 1 against the weight exp(-x^2) from -Inf to Inf
# should get sqrt(pi) as an answer
f <- function(x) {
    1.0
}

q <- ghermite.h.quadrature(f, rule)
print(q - sqrt(pi))

# convolution of lognormal with gaussian
# because of the G-H rules, we have to make our own function
# for simplicity, sigmas are one and mus are zero

sqrt2 <- sqrt(2.0)

c.LG <- function(z) {
   #print(z)

   f.LG <- function(x) {
        t <- (z - x*sqrt2)
        q <- 0.0
        if (t > 0.0) {
            l <- log(t)
            q <- exp( - 0.5*l*l ) / t
        }
        q
   }

   ghermite.h.quadrature(Vectorize(f.LG), rule) / (pi*sqrt2)
}

library(ggplot2)

p <- ggplot(data = data.frame(x = 0), mapping = aes(x = x))
p <- p + stat_function(fun = Vectorize(c.LG))
p <- p + xlim(-1.0, 5.0)
print(p)
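
(A quick check of the sketch above, added for illustration: compare c.LG against a direct simulation of standard normal plus standard lognormal.)

set.seed(1)
r.check <- rnorm(1e5) + rlnorm(1e5) # mu = 0, sigma = 1 for both, matching c.LG
hist(r.check, freq=FALSE, breaks=200, xlim=c(-1,5))
curve(Vectorize(c.LG)(x), from=-1, to=5, add=TRUE, col="red", lwd=2)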