How to display the progress of a scipy.optimize function?

Asked: 2013-05-24 15:57:23

Tags: python numpy scipy output mathematical-optimization

I use scipy.optimize to minimize a function of 12 arguments.

I started the optimization a while ago and am still waiting for results.

Is there a way to force scipy.optimize to display its progress (such as how much has already been done, or what the current best point is)?

7 Answers:

Answer 0 (score: 27)

As mg007 suggested, some of the scipy.optimize routines allow for a callback function (unfortunately leastsq does not permit this at the moment). Below is an example using the fmin_bfgs routine, where I use a callback function to display the current value of the arguments and the value of the objective function at each iteration.

import numpy as np
from scipy.optimize import fmin_bfgs

Nfeval = 1

def rosen(X): #Rosenbrock function
    return (1.0 - X[0])**2 + 100.0 * (X[1] - X[0]**2)**2 + \
           (1.0 - X[1])**2 + 100.0 * (X[2] - X[1]**2)**2

def callbackF(Xi):
    global Nfeval
    print('{0:4d}   {1: 3.6f}   {2: 3.6f}   {3: 3.6f}   {4: 3.6f}'.format(Nfeval, Xi[0], Xi[1], Xi[2], rosen(Xi)))
    Nfeval += 1

print('{0:4s}   {1:9s}   {2:9s}   {3:9s}   {4:9s}'.format('Iter', ' X1', ' X2', ' X3', 'f(X)'))
x0 = np.array([1.1, 1.1, 1.1], dtype=np.double)
[xopt, fopt, gopt, Bopt, func_calls, grad_calls, warnflg] = \
    fmin_bfgs(rosen, 
              x0, 
              callback=callbackF, 
              maxiter=2000, 
              full_output=True, 
              retall=False)

The output looks like this:

Iter    X1          X2          X3         f(X)      
   1    1.031582    1.062553    1.130971    0.005550
   2    1.031100    1.063194    1.130732    0.004973
   3    1.027805    1.055917    1.114717    0.003927
   4    1.020343    1.040319    1.081299    0.002193
   5    1.005098    1.009236    1.016252    0.000739
   6    1.004867    1.009274    1.017836    0.000197
   7    1.001201    1.002372    1.004708    0.000007
   8    1.000124    1.000249    1.000483    0.000000
   9    0.999999    0.999999    0.999998    0.000000
  10    0.999997    0.999995    0.999989    0.000000
  11    0.999997    0.999995    0.999989    0.000000
Optimization terminated successfully.
         Current function value: 0.000000
         Iterations: 11
         Function evaluations: 85
         Gradient evaluations: 17

At least this way you can watch as the optimizer tracks the minimum.

Answer 1 (score: 7)

Following up on @Joel's example, there is a neat and efficient way to do a similar thing. The following example shows how we can get rid of the global variable, the call_back function and re-evaluating the target function multiple times.

import numpy as np
from scipy.optimize import fmin_bfgs

def rosen(X, info): #Rosenbrock function
    res = (1.0 - X[0])**2 + 100.0 * (X[1] - X[0]**2)**2 + \
           (1.0 - X[1])**2 + 100.0 * (X[2] - X[1]**2)**2


    # display information
    if info['Nfeval']%100 == 0:
        print('{0:4d}   {1: 3.6f}   {2: 3.6f}   {3: 3.6f}   {4: 3.6f}'.format(info['Nfeval'], X[0], X[1], X[2], res))
    info['Nfeval'] += 1
    return res

print('{0:4s}   {1:9s}   {2:9s}   {3:9s}   {4:9s}'.format('Iter', ' X1', ' X2', ' X3', 'f(X)'))
x0 = np.array([1.1, 1.1, 1.1], dtype=np.double)
[xopt, fopt, gopt, Bopt, func_calls, grad_calls, warnflg] = \
    fmin_bfgs(rosen, 
              x0, 
              args=({'Nfeval':0},), 
              maxiter=1000, 
              full_output=True, 
              retall=False,
              )

This produces output like:

Iter    X1          X2          X3         f(X)     
   0    1.100000    1.100000    1.100000    2.440000
 100    1.000000    0.999999    0.999998    0.000000
 200    1.000000    0.999999    0.999998    0.000000
 300    1.000000    0.999999    0.999998    0.000000
 400    1.000000    0.999999    0.999998    0.000000
 500    1.000000    0.999999    0.999998    0.000000
Warning: Desired error not necessarily achieved due to precision loss.
         Current function value: 0.000000
         Iterations: 12
         Function evaluations: 502
         Gradient evaluations: 98

However, there is no free lunch: here I use the number of function evaluations rather than the number of algorithmic iterations as the counter, and some algorithms evaluate the target function several times within a single iteration. A per-iteration variant is sketched below.
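
If you would rather count iterations than function evaluations, one possible variant (not from the original answer) keeps the counter in a closure and uses the callback argument of scipy.optimize.minimize instead. A minimal sketch: make_callback is a made-up helper name, and the displayed objective value costs one extra evaluation of rosen per iteration.

import numpy as np
from scipy.optimize import minimize

def rosen(X):  # Rosenbrock function in three variables
    return (1.0 - X[0])**2 + 100.0 * (X[1] - X[0]**2)**2 + \
           (1.0 - X[1])**2 + 100.0 * (X[2] - X[1]**2)**2

def make_callback(f):
    """Return a callback that keeps its own iteration counter in a closure."""
    iteration = [0]  # mutable container instead of a global variable
    def callback(Xk):
        iteration[0] += 1
        # print iteration number, current point and objective value
        print('{0:4d}   {1: 3.6f}   {2: 3.6f}   {3: 3.6f}   {4: 3.6f}'.format(
            iteration[0], Xk[0], Xk[1], Xk[2], f(Xk)))
    return callback

x0 = np.array([1.1, 1.1, 1.1])
res = minimize(rosen, x0, method='BFGS', callback=make_callback(rosen))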

Answer 2 (score: 5)

Try using:

options={'disp': True} 

to force scipy.optimize.minimize to print intermediate results.
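
For example, a minimal sketch using scipy's built-in rosen test function (the Nelder-Mead method and the maxiter value are only an illustration; how much gets printed depends on the chosen method and scipy version):

import numpy as np
from scipy.optimize import minimize, rosen

x0 = np.array([1.3, 0.7, 0.8, 1.9, 1.2])
# disp=True asks the solver to print its convergence report;
# maxiter is included only to show where other options go.
res = minimize(rosen, x0, method='Nelder-Mead',
               options={'disp': True, 'maxiter': 1000})
print(res.x)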

Answer 3 (score: 3)

Which minimization function are you using exactly?

Most of the functions have progress reporting built in, including several levels of reports showing exactly the data you want, via the disp flag (for example, see scipy.optimize.fmin_l_bfgs_b); a short sketch follows.
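
For instance, a small sketch with scipy's built-in rosen function (an illustration only: approx_grad=True makes the routine estimate the gradient numerically, and disp=1 requests iteration output, with the exact verbosity controlled by the underlying L-BFGS-B print levels):

import numpy as np
from scipy.optimize import fmin_l_bfgs_b, rosen

x0 = np.array([1.1, 1.1, 1.1])
# disp overrides the lower-level iprint flag; a positive value enables output
xopt, fopt, info = fmin_l_bfgs_b(rosen, x0, approx_grad=True, disp=1)
print(xopt, fopt)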

Answer 4 (score: 2)

You can also include a simple print() statement in the function to be minimized. If you import the function, you can create a wrapper (a wrapper sketch follows the example below).

import numpy as np
from scipy.optimize import minimize


def rosen(X): #Rosenbrock function
    print(X)
    return (1.0 - X[0])**2 + 100.0 * (X[1] - X[0]**2)**2 + \
           (1.0 - X[1])**2 + 100.0 * (X[2] - X[1]**2)**2

x0 = np.array([1.1, 1.1, 1.1], dtype=np.double)
minimize(rosen, 
         x0)
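
If the objective is imported from elsewhere and cannot be edited, a wrapper can do the printing instead. A minimal sketch, using scipy's own rosen as a stand-in for an unmodifiable objective; verbose_objective is an illustrative name:

import numpy as np
from scipy.optimize import minimize
from scipy.optimize import rosen as imported_objective  # stands in for a function you cannot edit

def verbose_objective(X):
    # wrapper: print the current point, then delegate to the real objective
    print(X)
    return imported_objective(X)

x0 = np.array([1.1, 1.1, 1.1], dtype=np.double)
minimize(verbose_objective, x0)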

Answer 5 (score: 0)

Here is a solution that worked for me:

from scipy import optimize

def f_(x):   # The Rosenbrock function
    return (1 - x[0])**2 + 100 * (x[1] - x[0]**2)**2

def conjugate_gradient(x0, f):
    all_x_i = [x0[0]]
    all_y_i = [x0[1]]
    all_f_i = [f(x0)]
    def store(X):
        x, y = X
        all_x_i.append(x)
        all_y_i.append(y)
        all_f_i.append(f(X))
    optimize.minimize(f, x0, method="CG", callback=store, options={"gtol": 1e-12})
    return all_x_i, all_y_i, all_f_i

Call it, for example, like this:

conjugate_gradient([2, -1], f_)
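
The stored lists can then be inspected after the run, for example (an illustrative snippet, not part of the original answer):

all_x_i, all_y_i, all_f_i = conjugate_gradient([2, -1], f_)
for i, (x, y, fval) in enumerate(zip(all_x_i, all_y_i, all_f_i)):
    # one row per stored point along the optimization path
    print(f"iter {i:3d}: x = {x: .6f}, y = {y: .6f}, f = {fval: .6e}")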

Source

Answer 6 (score: 0)

Many of the optimizers in scipy indeed lack verbose output (the 'trust-constr' method of scipy.optimize.minimize being an exception). I faced a similar problem and solved it by creating a wrapper around the objective function and using a callback function. No additional function evaluations are performed here, so this should be an efficient solution.

import numpy as np

class Simulator:
    def __init__(self, function):
        self.f = function # actual objective function
        self.num_calls = 0 # how many times f has been called
        self.callback_count = 0 # number of times callback has been called, also measures iteration count
        self.list_calls_inp = [] # input of all calls
        self.list_calls_res = [] # result of all calls
        self.decreasing_list_calls_inp = [] # input of calls that resulted in decrease
        self.decreasing_list_calls_res = [] # result of calls that resulted in decrease
        self.list_callback_inp = [] # only appends inputs on callback, as such they correspond to the iterations
        self.list_callback_res = [] # only appends results on callback, as such they correspond to the iterations

    def simulate(self, x):
        """Executes the actual simulation and returns the result, while
        updating the lists too. Pass to optimizer without arguments or
        parentheses."""
        result = self.f(x) # the actual evaluation of the function
        if not self.num_calls: # first call is stored in all lists
            self.decreasing_list_calls_inp.append(x)
            self.decreasing_list_calls_res.append(result)
            self.list_callback_inp.append(x)
            self.list_callback_res.append(result)
        elif result < self.decreasing_list_calls_res[-1]:
            self.decreasing_list_calls_inp.append(x)
            self.decreasing_list_calls_res.append(result)
        self.list_calls_inp.append(x)
        self.list_calls_res.append(result)
        self.num_calls += 1
        return result

    def callback(self, xk, *_):
        """Callback function that can be used by optimizers of scipy.optimize.
        The third argument "*_" makes sure that it still works when the
        optimizer calls the callback function with more than one argument. Pass
        to optimizer without arguments or parentheses."""
        s1 = ""
        xk = np.atleast_1d(xk)
        # search backwards in input list for input corresponding to xk
        for i, x in reversed(list(enumerate(self.list_calls_inp))):
            x = np.atleast_1d(x)
            if np.allclose(x, xk):
                break

        for comp in xk:
            s1 += f"{comp:10.5e}\t"
        s1 += f"{self.list_calls_res[i]:10.5e}"

        self.list_callback_inp.append(xk)
        self.list_callback_res.append(self.list_calls_res[i])

        if not self.callback_count:
            s0 = ""
            for j, _ in enumerate(xk):
                tmp = f"Comp-{j+1}"
                s0 += f"{tmp:10s}\t"
            s0 += "Objective"
            print(s0)
        print(s1)
        self.callback_count += 1

A simple test can be defined:

from scipy.optimize import minimize, rosen
ros_sim = Simulator(rosen)
minimize(ros_sim.simulate, [0, 0], method='BFGS', callback=ros_sim.callback, options={"disp": True})

print(f"Number of calls to Simulator instance {ros_sim.num_calls}")

resulting in:

Comp-1          Comp-2          Objective
1.76348e-01     -1.31390e-07    7.75116e-01
2.85778e-01     4.49433e-02     6.44992e-01
3.14130e-01     9.14198e-02     4.75685e-01
4.26061e-01     1.66413e-01     3.52251e-01
5.47657e-01     2.69948e-01     2.94496e-01
5.59299e-01     3.00400e-01     2.09631e-01
6.49988e-01     4.12880e-01     1.31733e-01
7.29661e-01     5.21348e-01     8.53096e-02
7.97441e-01     6.39950e-01     4.26607e-02
8.43948e-01     7.08872e-01     2.54921e-02
8.73649e-01     7.56823e-01     2.01121e-02
9.05079e-01     8.12892e-01     1.29502e-02
9.38085e-01     8.78276e-01     4.13206e-03
9.73116e-01     9.44072e-01     1.55308e-03
9.86552e-01     9.73498e-01     1.85366e-04
9.99529e-01     9.98598e-01     2.14298e-05
9.99114e-01     9.98178e-01     1.04837e-06
9.99913e-01     9.99825e-01     7.61051e-09
9.99995e-01     9.99989e-01     2.83979e-11
Optimization terminated successfully.
         Current function value: 0.000000
         Iterations: 19
         Function evaluations: 96
         Gradient evaluations: 24
Number of calls to Simulator instance 96

Of course, this is only a template and can be adjusted to your needs. It does not provide all information about the status of the optimizer (as, for example, the Optimization Toolbox of MATLAB does), but at least you get some idea of the progress of the optimization.

A similar approach can be achieved without using a callback function. In my approach, the callback function is used to print output exactly when the optimizer has finished an iteration, rather than on every single function call.
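
As noted at the top of this answer, the 'trust-constr' method of scipy.optimize.minimize does have built-in verbose output. A minimal sketch (the starting point is only illustrative):

import numpy as np
from scipy.optimize import minimize, rosen

x0 = np.array([1.3, 0.7, 0.8, 1.9, 1.2])
# verbose=2 prints a progress table with one row per iteration;
# verbose=3 prints a more detailed report.
res = minimize(rosen, x0, method='trust-constr', options={'verbose': 2})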