I have some questions and issues with scipy's optimize.minimize routine. I want to minimize the function

f(eta) = sum_i |eta * x_i - y_i|

with respect to eta. Since I'm not familiar with the minimization routines and their corresponding methods, I tried a few of them. However, using the method BFGS raises the following error:
File "/usr/local/lib/python3.4/dist-packages/scipy/optimize/_minimize.py", line 441, in minimize
    return _minimize_bfgs(fun, x0, args, jac, callback, **options)
File "/usr/local/lib/python3.4/dist-packages/scipy/optimize/optimize.py", line 904, in _minimize_bfgs
    A1 = I - sk[:, numpy.newaxis] * yk[numpy.newaxis, :] * rhok
IndexError: 0-d arrays can only use a single () or a list of newaxes (and a single ...) as an index
I cannot resolve this. The code that produces the error is given below. I'm using Python 3 with scipy 0.17.0 and numpy 1.8.2 on Ubuntu 14.04.3 LTS.
Furthermore, the conjugate gradient method seems to perform worse than the other methods.
Last but not least, I'm inclined to estimate the minimum by finding the zero of the first derivative via scipy.optimize.brentq. Is this a good approach, or would you recommend another one? I prefer robustness over speed.
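As an aside on this particular objective: each term can be rewritten as |eta * b_i - c_i| = |b_i| * |eta - c_i/b_i|, so the function is piecewise linear in eta and its minimum lies at a weighted median of the ratios c_i/b_i with weights |b_i|. A minimal sketch of this closed-form approach (my own helper function, not part of scipy):

```python
import numpy as np

def weighted_median_minimizer(bs, cs):
    """Minimize f(eta) = sum_i |eta*b_i - c_i| exactly.

    Since f(eta) = sum_i |b_i| * |eta - c_i/b_i| is piecewise linear,
    a minimizer is a weighted median of the ratios c_i/b_i with
    weights |b_i| (assuming all b_i are nonzero).
    """
    ratios = cs / bs
    weights = np.abs(bs)
    order = np.argsort(ratios)
    ratios, weights = ratios[order], weights[order]
    # The subgradient changes sign at the first ratio where the
    # cumulative weight reaches half the total weight.
    cum = np.cumsum(weights)
    idx = np.searchsorted(cum, 0.5 * cum[-1])
    return ratios[idx]

np.random.seed(1000)
bs = np.random.rand(10)
cs = np.random.rand(10)
eta = weighted_median_minimizer(bs, cs)
print(eta)
```

This avoids iterative minimization entirely and is exact up to floating-point error, which may be the most robust option of all for this specific problem.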
Here is some code illustrating the problems and questions:
from scipy import optimize
import numpy as np

def function(x, bs, cs):
    # Objective: f(x) = sum_i |x*b_i - c_i|
    total = 0.
    for b, c in zip(bs, cs):
        total += np.abs(x*b - c)
    return total

def derivativeFunction(x, bs, cs):
    # (Sub)derivative: +b_i where x*b_i > c_i, else -b_i
    total = 0.
    for b, c in zip(bs, cs):
        if x*b > c:
            total += b
        else:
            total -= b
    return total
np.random.seed(1000)
bs = np.random.rand(10)
cs = np.random.rand(10)
eta0 = 0.5
res = optimize.minimize(fun=function, x0=eta0, args=(bs, cs), method='Nelder-Mead', tol=1e-6)
print('Nelder-Mead:\t', res.x[0], function(res.x[0], bs, cs))
res = optimize.minimize(fun=function, x0=eta0, args=(bs, cs,), method='CG', jac=derivativeFunction, tol=1e-6)
print('CG:\t', res.x[0], function(res.x[0], bs, cs))
x = optimize.brentq(f=derivativeFunction, a=0, b=2., args=(bs, cs), xtol=1e-6, maxiter=100)
print('Brentq:\t', x, function(x, bs, cs))
# Throws the error shown above
res = optimize.minimize(fun=function, x0=eta0, args=(bs, cs), method='BFGS', jac=derivativeFunction, tol=1e-6)
print('BFGS:\t', res.x[0], function(res.x[0], bs, cs))
Its output is:
Nelder-Mead: 0.493537902832 3.71986334101
CG: 0.460178525461 3.72659733011
Brentq: 0.49353725172947666 3.71986347245
where the first value is the location of the minimum and the second value is the minimum itself. The output above does not include the error message, since the BFGS run aborts.

Thanks for your help!
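For what it's worth, in my own experiments the IndexError goes away if x0 is passed as a one-element array and the jacobian returns an array rather than a scalar. This is an assumption based on the 0-d array mentioned in the traceback (the internal sk/yk vectors become 0-dimensional when everything is scalar), not a confirmed scipy fix:

```python
import numpy as np
from scipy import optimize

def function(x, bs, cs):
    # x may arrive as a length-1 array; broadcasting handles it
    return np.sum(np.abs(x * bs - cs))

def derivativeFunction(x, bs, cs):
    # Return a 1-element array (not a scalar) so that BFGS's internal
    # vectors stay 1-d instead of collapsing to 0-d arrays
    return np.array([np.sum(np.where(x * bs > cs, bs, -bs))])

np.random.seed(1000)
bs = np.random.rand(10)
cs = np.random.rand(10)

res = optimize.minimize(fun=function, x0=np.array([0.5]), args=(bs, cs),
                        method='BFGS', jac=derivativeFunction, tol=1e-6)
print('BFGS:\t', res.x[0], function(res.x[0], bs, cs))
```

Note that since the objective is non-smooth (the derivative jumps at each c_i/b_i), gradient-based methods such as BFGS and CG may still struggle to converge tightly even when they no longer raise an error.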