I need to draw random samples from a fairly complicated probability density function (PDF) whose cumulative distribution function (CDF) is known, and I am trying to use inverse transform sampling. This should be easy to do, since I have the CDF and only need to invert it numerically for a given uniform random number (an algebraic inversion is not possible). However, the resulting distribution has a lower variance than expected, and I cannot find any error in the CDF.
So I simplified the problem and tested my algorithm by sampling from a normal distribution. The outcome is the same: the location is fine, but the scale is wrong. I know there are better built-in methods for Gaussian sampling, but this is just a test of the sampling algorithm.
The problem originally appeared in Fortran, but I have since reproduced it in Python, so I must be doing something fundamentally wrong or be running into a numerical issue.
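To be clear about the intended approach: for a fixed probability level p, inverting the CDF numerically does work. For example, solving F(x) - 0.7 = 0 for a Gaussian with mu = 5 and sigma = 2 (the commented-out test case in my script below) gives about 6.0488. A minimal stand-alone version of that sanity check, using scipy's CDF just like in my script:

from scipy.optimize import brentq
from scipy.stats import norm

# Invert the CDF numerically for a single fixed probability level p = 0.7.
# The bracket [-100, 100] safely contains the root for mu = 5, sigma = 2.
x70 = brentq(lambda x: norm.cdf(x, 5, 2) - 0.7, -100.0, 100.0)
print(x70)  # about 6.04880, matching norm.ppf(0.7, 5, 2)

For comparison, here is my actual Python test script: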
import numpy as np
from scipy.special import erf
from scipy.optimize import brentq
import matplotlib.pyplot as plt
from scipy.stats import norm


def testfunc(x):
    ## Test case, result should be 6.04880103
    # out = 0.5 * (1. + erf((x - 5.) / (2. * np.sqrt(2.)))) - 0.7
    r = np.random.uniform()
    # hand-built cdf:
    # out = 0.5 * (1. + erf((x - 5.) / (2. * np.sqrt(2.)))) - r
    # scipy cdf:
    out = norm.cdf(x, 5, 2) - r
    return out


if __name__ == '__main__':
    n = 10000
    sol_array = np.zeros(n)
    for i in range(0, n):
        sol_array[i] = brentq(testfunc, -100., 100.)

    print('mean = ' + str(np.mean(sol_array)))
    print('std = ' + str(np.std(sol_array)))

    plt.hist(sol_array, normed=True, bins='fd')
    x = np.linspace(-1, 11, 1000)
    plt.plot(x, norm.pdf(x, 5, 2))
    plt.show()
As expected, the mean of the sampled values is about 5, but the standard deviation is about 1.28 when it should be 2, and this happens both with my hand-built CDF and with scipy's CDF.
This is also visible in the histogram of the sampled values.
The problem is the same in Fortran, although the value of the resulting standard deviation is different. The code is longer because it includes the solver, which is a Fortran 90 translation (by Alan Miller) of the old FORTRAN 77 netlib routine zeroin.f.
implicit none

integer, parameter :: dp = selected_real_kind(15, 307)
integer, parameter :: n = 1000000
real, dimension(n) :: v
real :: mean, std
integer, dimension(:), allocatable :: seed
integer :: i, seedsize, clock

! seed the PRNG
call random_seed(size=seedsize)
allocate(seed(seedsize))
call system_clock(count=clock)
seed = clock + 37 * (/ (i - 1, i=1, seedsize) /)
call random_seed(put=seed)
deallocate(seed)

do i = 1, n
  v(i) = real(zeroin(testfunc, -100._dp, 100._dp, 1e-20_dp, 1e-10_dp))
end do

mean = sum(v) / n
std = sum((v - mean)**2) / n
print*, mean, std

contains

  function testfunc(v)
    implicit none
    real(dp), intent(in) :: v
    real(dp) :: testfunc, r
    call random_number(r)
    ! testfunc = 0.5 * (1. + erf((v-5.)/(2.*sqrt(2.)))) - 0.7  ! should be 6.04880
    testfunc = 0.5 * (1. + erf((v-5.)/(2.*sqrt(2.)))) - r      ! Gaussian test with mu=5 and sigma=2
  end function testfunc

  function zeroin(f, ax, bx, aerr, rerr) result(fn_val)
    ! original zeroin.f from netlib.org
    ! code converted using to_f90 by alan miller
    ! date: 2003-07-14  time: 12:32:54
    !-----------------------------------------------------------------------
    ! finding a zero of the function f(x) in the interval (ax,bx)
    ! ------------------------
    ! INPUT:
    !   f     function subprogram which evaluates f(x) for any x in the
    !         closed interval (ax,bx). it is assumed that f is continuous,
    !         and that f(ax) and f(bx) have different signs.
    !   ax    left endpoint of the interval
    !   bx    right endpoint of the interval
    !   aerr  the absolute error tolerance to be satisfied
    !   rerr  the relative error tolerance to be satisfied
    ! OUTPUT:
    !         abcissa approximating a zero of f in the interval (ax,bx)
    !-----------------------------------------------------------------------
    ! zeroin is a slightly modified translation of the algol procedure
    ! zero given by Richard Brent in "Algorithms for Minimization without
    ! Derivatives", Prentice-Hall, Inc. (1973).
    implicit none
    real(dp), intent(in) :: ax
    real(dp), intent(in) :: bx
    real(dp), intent(in) :: aerr
    real(dp), intent(in) :: rerr
    real(dp) :: fn_val
    real(dp) :: a, b, c, d, e, eps, fa, fb, fc, tol, xm, p, q, r, s, atol, rtol

    interface
      real(selected_real_kind(15, 307)) function f(x)
        real(selected_real_kind(15, 307)), intent(in) :: x
      end function f
    end interface

    ! compute eps, the relative machine precision
    eps = epsilon(0.0_dp)

    ! initialization
    a = ax
    b = bx
    fa = f(a)
    fb = f(b)
    if (fb*fa > 0.) then
      print*, 'a, b, fa, fb', a, b, fa, fb
      stop
    end if
    atol = 0.5 * aerr
    rtol = max(0.5_dp*rerr, 2.0_dp*eps)

    ! begin step
10  c = a
    fc = fa
    d = b - a
    e = d

20  if (abs(fc) < abs(fb)) then
      a = b
      b = c
      c = a
      fa = fb
      fb = fc
      fc = fa
    end if

    ! convergence test
    tol = rtol * max(abs(b), abs(c)) + atol
    xm = 0.5 * (c - b)
    if (abs(xm) > tol) then
      if (fb /= 0.0) then

        ! is bisection necessary
        if (abs(e) >= tol) then
          if (abs(fa) > abs(fb)) then

            ! is quadratic interpolation possible
            if (a == c) then
              ! linear interpolation
              s = fb / fc
              p = (c-b) * s
              q = 1.0 - s
            else
              ! inverse quadratic interpolation
              q = fa / fc
              r = fb / fc
              s = fb / fa
              p = s * ((c-b)*q*(q-r) - (b-a)*(r-1.0))
              q = (q-1.0) * (r-1.0) * (s-1.0)
            end if

            ! adjust signs
            if (p > 0.0) q = -q
            p = abs(p)

            ! is interpolation acceptable
            if (2.0*p < (3.0*xm*q - abs(tol*q))) then
              if (p < abs(0.5*e*q)) then
                e = d
                d = p / q
                go to 30
              end if
            end if
          end if
        end if

        ! bisection
        d = xm
        e = d

        ! complete step
30      a = b
        fa = fb
        if (abs(d) > tol) b = b + d
        if (abs(d) <= tol) b = b + sign(tol, xm)
        fb = f(b)
        if (fb*(fc/abs(fc)) > 0.0) go to 10
        go to 20
      end if
    end if

    ! done
    fn_val = b
  end function zeroin

end
The mean of the resulting samples is again about 5, while the standard deviation is about 1.64.
Does anyone see where my algorithm might be running into numerical problems? The fact that the Python and Fortran versions show the same problem, but to different degrees, makes me suspect some floating-point rounding issue, yet I cannot imagine where. Even if the solver returned rounded values, that difference should not show up in a simple histogram.
Does anyone see an error in my algorithm? Am I misunderstanding something?
Answer (score: 2)
I have only checked the Python version, and there is indeed an error. Namely, your testfunc, i.e. the objective function of the brentq root-finding routine, does not behave deterministically. During a single root-finding run (i.e. one call to brentq() until it returns), brentq has to call the same callback several times until convergence is reached. However, every time brentq calls this callback the target equation changes, because r gets a new pseudorandom value. As a result, the root-finding procedure cannot converge to the solution you want.
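You can see the non-determinism directly: two evaluations of the callback at the same x give different values, so brentq is chasing a moving target. A minimal reproduction using the scipy-CDF variant of your testfunc (the printed numbers will differ from run to run):

import numpy as np
from scipy.stats import norm

def testfunc(x):
    # a new uniform draw on every call, as in the question
    r = np.random.uniform()
    return norm.cdf(x, 5, 2) - r

# Same argument, different results: the "equation" changes between calls.
print(testfunc(5.0))
print(testfunc(5.0))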
Conceptually, what you need to do is first generate a sample of uniform random variates and then apply the same deterministic transformation (i.e. the inverse distribution function) to all of them. Of course, you do not even need root-finding for that, because you can use the ppf method (the percent point function, i.e. the inverse of the cdf) of the scipy.stats random variable classes.
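For instance, a minimal sketch of the direct ppf approach, with the same parameters (mu = 5, sigma = 2) as in your test and an arbitrary seed:

import numpy
from scipy.stats import norm

numpy.random.seed(0x5eed)
u = numpy.random.uniform(size=10000)
samples = norm.ppf(u, 5, 2)  # exact inverse CDF, no root-finding needed
print(samples.mean(), samples.std())  # should come out close to 5 and 2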
As a proof of concept, you can run the following code, which applies the (unnecessarily expensive and less precise) root-finding transformation to an array of standard uniform samples:
import numpy
import numpy.random
from scipy.optimize import brentq
from scipy.stats import norm

# Setup
n = 10000
numpy.random.seed(0x5eed)
ran_array = numpy.random.uniform(size=n)
sol_array = numpy.empty_like(ran_array)


# Target function for root-finding: F(x) - p = 0, where p is the probability
# level for which the quantile is to be found
def targetfunc(x, p):
    return norm.cdf(x, 5, 2) - p


for i in range(n):
    sol_array[i] = brentq(targetfunc, -100.0, 100.0, args=(ran_array[i],))

print("mean = %10f" % sol_array.mean())
print("std = %10f" % sol_array.std())
Output:
mean = 5.011041
std = 2.009365