Single-scalar optimization with scipy's minimize()

Date: 2020-05-05 11:00:03

Tags: python numpy scipy scipy-optimize-minimize

I am having trouble with scipy's minimize() function, and I don't know enough about optimization to understand what is going wrong here.

I have a function that calls scipy.optimize.minimize(). It works fine and gives me the output I need when the array passed as x0 has more than one element, but it fails when x0 has exactly one. The documentation says x0 must be an np.ndarray of shape (n,), but it never says n must be greater than 1, so I assumed this was fine. Here is a smaller version of my code, calling the function with the known optimal value:

import numpy as np
from scipy.optimize import minimize

def to_freq(*arrays):
    # Better version of `convert_to_freq()`
    out = []
    for a in arrays:
        converted = np.array([(x + i / len(a)) / (max(a) + 1)
                              for i, x in enumerate(a, start=1)])
        out.append(converted)
    return out

def likelihood(x, x_freq, expected, x_max):
    # Better version, supports vectorisation
    a = 2 * x * np.log(x_freq / expected)
    b = 2 * (x_max - x) * np.log((1 - x_freq) / (1 - expected))
    return a + b

def objective(x0, labels, a, b):
    R = x0[labels == 'R'].item()
    a_c, b_c = np.cumsum(a), np.cumsum(b)
    a_f, b_f = to_freq(a_c, b_c)

    # Get the expected values for signals and noises
    exp_a = ((1 - R) * b_f + R)[:-1]
    exp_b = b_f[:-1]

    # Compute the g-squared using the dual process model parameters.
    # Still getting runtime warnings about division. Function only works
    # with numpy, so can't use math.
    a_lrat = likelihood(x=a_c[:-1], x_freq=a_f[:-1], expected=exp_a, x_max=a_c.max())
    b_lrat = likelihood(x=b_c[:-1], x_freq=b_f[:-1], expected=exp_b, x_max=b_c.max())

    return sum(a_lrat + b_lrat)

# Observations
a = [508, 224, 172, 135, 119, 63]
b = [102, 161, 288, 472, 492, 308]

x0 = np.array([0.520274590415736])  # Optimal value for the variable
labels = np.array(['R'])

# Gives the correct optimized value of 163.27525607890783
objective(x0, labels, a, b)
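As an aside on the single-parameter setup above: since there is only one free variable here, scipy also offers minimize_scalar, which takes no x0 array at all. This is only a minimal sketch with a made-up toy objective (minimum placed at 0.52 by construction), not the g-squared function from the post:

```python
from scipy.optimize import minimize_scalar

# Hypothetical stand-in for the real objective: one free parameter,
# with its minimum at r = 0.52 by construction (illustration only).
def f(r):
    return (r - 0.52) ** 2

# method='bounded' keeps every trial value inside the bracket (0, 1),
# so there is no need for a random one-element x0 array.
res = minimize_scalar(f, bounds=(0.0, 1.0), method='bounded')
print(res.x)  # ≈ 0.52
```

Whether this is preferable depends on the surrounding code; it only applies while the model has exactly one free parameter.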

Now, with the optimal value unknown, I randomly initialize x0:

x0 = np.random.uniform(-.5, 0.5, len(labels))  # random initialization

# Without method='nelder-mead' this occasionally gives the correct value of fun, but frequently fails
opt = minimize(fun=objective, x0=x0, args=(labels, a, b), tol=1e-4)
print(opt)

A failed optimization result looks like this:

      fun: nan
 hess_inv: array([[1]])
      jac: array([nan])
  message: 'Desired error not necessarily achieved due to precision loss.'
     nfev: 336
      nit: 1
     njev: 112
   status: 2
  success: False
        x: array([1034.74])

But if I keep re-running it with random initial values, it occasionally spits out a good result:

      fun: 163.27525607888913
 hess_inv: array([[4.14149525e-05]])
      jac: array([-1.90734863e-05])
  message: 'Optimization terminated successfully.'
     nfev: 27
      nit: 7
     njev: 9
   status: 0
  success: True
        x: array([0.52027462])

And if I specify method='nelder-mead' in the minimize() call in the larger function (a solution to a possibly unrelated problem), it also gives me the expected result:

 final_simplex: (array([[0.52026029],
       [0.52031204]]), array([163.27525856, 163.27527298]))
           fun: 163.2752585612531
       message: 'Optimization terminated successfully.'
          nfev: 32
           nit: 16
        status: 0
       success: True
             x: array([0.52026029])

I don't really understand the best way to go about this, because I am very inexperienced with optimization.

[Footnote]: the minimization algorithm sometimes tries values that are incompatible with my function (e.g. < 0 or > 1), and the resulting calls to np.log end up raising warnings. I usually just suppress these, because it seems to work regardless...
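One way around the footnote problem (a sketch only, using a toy objective rather than the post's g-squared function) is to pass bounds= so a method such as L-BFGS-B never steps outside the valid domain in the first place:

```python
import numpy as np
from scipy.optimize import minimize

def f(x):
    # Toy objective that, like the one above, is only defined for 0 < r < 1:
    # outside that range np.log would return nan and derail the optimizer.
    r = x.item()
    return -(np.log(r) + np.log(1 - r))

# Bounds slightly inside (0, 1) keep the log arguments strictly positive.
eps = 1e-6
opt = minimize(f, x0=np.array([0.9]), method='L-BFGS-B',
               bounds=[(eps, 1 - eps)])
print(opt.x)  # ≈ 0.5, the true minimum of f
```

With the search constrained to the valid domain this way, the RuntimeWarnings from np.log should disappear without having to suppress them.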

0 Answers:

There are no answers yet.