How can I ensure numerical stability when minimizing an NLL?

Date: 2018-08-26 15:06:49

Tags: python scipy minimization

If we take the scipy.optimize.curve_fit example and modify it slightly so that the best-fit parameters are instead chosen as the maximum likelihood estimators (MLE), using scipy.optimize.minimize with the negative log-likelihood (NLL) as the loss function, there appear to be numerical stability problems:

import numpy as np
from scipy.optimize import curve_fit
from scipy.optimize import minimize

np.random.seed(1729)


def func(x, a, b, c):
    return a * np.exp(-b * x) + c


def NLL(p, data):
    # Negative log-likelihood: -sum over the data of log(a*exp(-b*x) + c)
    log_likelihood = np.vectorize(
        lambda x: np.log(p[0] * np.exp(-p[1] * x) + p[2]))
    return -1. * np.array(log_likelihood(data)).sum()

def main():
    # Generate noisy data from the model, as in the curve_fit example
    x_data = np.linspace(0, 4, 50)
    y = func(x_data, 2.5, 1.3, 0.5)
    y_noise = 0.2 * np.random.normal(size=x_data.size)
    y_data = y + y_noise

    init_params = [3., 1.5, 0.5]
    print('\n### Using minimize\n')
    minimize_result = minimize(NLL, x0=init_params, args=(x_data,),
                               method='BFGS', options={'disp': True})
    print('\n')
    print(minimize_result)
    print('\n### Using curve_fit\n')

    init_params = [3., 1.5, 0.5]
    popt, pcov = curve_fit(func, x_data, y_data,
                           p0=init_params, bounds=(0, [4., 2., 0.5]))
    print('fit values: {}'.format(popt))
    print('covariance matrix:\n{}'.format(pcov))
    print('uncertainties: {}\n'.format(np.sqrt(np.diag(pcov))))


if __name__ == '__main__':
    main()

produces

### Using minimize

/home/mcf/anaconda3/lib/python3.5/site-packages/scipy/optimize/optimize.py:663: RuntimeWarning: invalid value encountered in double_scalars
  grad[k] = (f(*((xk + d,) + args)) - f0) / d[k]
curve_fitting_example.py:16: RuntimeWarning: overflow encountered in double_scalars
  lambda x: np.log(p[0] * np.exp(-p[1] * x) + p[2]))
/home/mcf/anaconda3/lib/python3.5/site-packages/scipy/optimize/optimize.py:663: RuntimeWarning: invalid value encountered in double_scalars
  grad[k] = (f(*((xk + d,) + args)) - f0) / d[k]
Warning: Desired error not necessarily achieved due to precision loss.
         Current function value: -34.366008
         Iterations: 1
         Function evaluations: 552
         Gradient evaluations: 108


      fun: -34.36600756246744
 hess_inv: array([[ 0.99953426,  0.00164311, -0.04118523],
       [ 0.00164311,  0.99432624,  0.12712279],
       [-0.04118523,  0.12712279,  0.04248063]])
      jac: array([ -3.61366653,  10.76308966, -26.28632689])
  message: 'Desired error not necessarily achieved due to precision loss.'
     nfev: 552
      nit: 1
     njev: 108
   status: 2
  success: False
        x: array([3.07825766, 1.2641401 , 1.4789514 ])

### Using curve_fit

fit values: [2.55424137 1.35192223 0.4745096 ]
covariance matrix:
[[ 0.01588964  0.00681668 -0.00076153]
 [ 0.00681668  0.02019715  0.00541932]
 [-0.00076153  0.00541932  0.0028263 ]]
uncertainties: [0.12605411 0.14211667 0.05316297]

My naive assumption is that the c parameter means the NLL cannot be expressed in a nice form without the exponential (unlike log(a*exp(-b*x)) = log(a) - b*x, the term log(a*exp(-b*x) + c) does not simplify), and that this causes the minimization to fail because some of the regions of parameter space that get explored lead to infs, for example when the argument of the log goes non-positive or the exponential overflows.
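
To make that concrete, here is a quick check. The parameter values are purely illustrative of a point an unconstrained line search might probe; they are not taken from the actual run:

import numpy as np

# A point the unconstrained line search might try: a negative c.
# For large x the argument of the log goes non-positive, np.log returns
# nan, and the finite-difference gradient picks up the invalid values.
p_bad = [3., 1.5, -0.5]
x = np.linspace(0, 4, 50)
arg = p_bad[0] * np.exp(-p_bad[1] * x) + p_bad[2]
print(np.any(arg <= 0))   # True
print(np.log(arg)[-3:])   # RuntimeWarning: invalid value encountered in log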

If that assumption is correct (is it?), then given how simple this toy function is, how does one guard against this kind of problem in general? And if my assumption is wrong, what should I be doing instead?
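
For concreteness, one safeguard I can think of is to keep the optimizer out of the invalid region entirely, for example with a bounded method and/or by returning a large penalty wherever the model goes non-positive, but I don't know whether that is the recommended approach. A minimal sketch of that idea (the L-BFGS-B method, the bound values, and the np.inf penalty are my own choices, not part of the example above):

import numpy as np
from scipy.optimize import minimize

def NLL_safe(p, data):
    # Same toy NLL as above, but treat any point where the model goes
    # non-positive as infinitely bad instead of letting log() return nan.
    arg = p[0] * np.exp(-p[1] * data) + p[2]
    if np.any(arg <= 0):
        return np.inf
    return -np.log(arg).sum()

x_data = np.linspace(0, 4, 50)
init_params = [3., 1.5, 0.5]

# A bounded method keeps the search in a region where the model stays positive.
result = minimize(NLL_safe, x0=init_params, args=(x_data,),
                  method='L-BFGS-B',
                  bounds=[(0, 4), (0, 2), (1e-6, 0.5)])
print(result.x)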

0 Answers:

No answers yet.