Implementing a backtracking line search algorithm for an unconstrained optimization problem

Date: 2018-09-06 12:33:31

Tags: python algorithm

I can't work out how to implement the backtracking line search algorithm in Python. The algorithm itself is: here

An alternative form of the algorithm is: here

In theory, the two are exactly equivalent.
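
Since the linked images are not reproduced here, for reference: what is being described is presumably the standard backtracking (Armijo) line search, as stated e.g. in Boyd & Vandenberghe's Convex Optimization (Algorithm 9.2). Given a descent direction Δx at x and parameters α ∈ (0, 0.5), β ∈ (0, 1):

t := 1
while f(x + t·Δx) > f(x) + α·t·∇f(x)ᵀΔx:
    t := β·t

With the gradient-descent direction Δx = −∇f(x), the loop condition becomes f(x − t·∇f(x)) > f(x) − α·t·‖∇f(x)‖².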

I'm trying to implement this in Python to solve an unconstrained optimization problem from a given starting point. Here is my attempt so far:

import numpy as np

def func(x):
    return  # my function with inputs x1, x2

def grad_func(x):
    df1 = ...  # derivative with respect to x1
    df2 = ...  # derivative with respect to x2
    return np.array([df1, df2])

def backtrack(x, gradient, t, a, b):
    '''
    x: the initial values given
    gradient: the gradient direction at the given initial value
    t: the step size, initialized at t = 1
    a: alpha value in (0, 0.5). I set it to 0.3
    b: beta value in (0, 1). I set it to 0.8
    '''
    return t

# Define the initial point, step size, and alpha/beta constants
x0, t0, alpha, beta = [x1, x2], 1, .3, .8  # x1, x2: the given starting coordinates

# Find the gradient at the initial point to determine the initial slope
direction = grad_func(x0)

t = backtrack(x0, direction, t0, alpha, beta)

Can anyone offer guidance on how best to implement the backtracking algorithm? I feel like I have all the pieces I need, but I just don't understand how to put them together in code.

1 answer:

Answer 0: (score: 1)
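
To connect the code below to the algorithm: with g = ∇f(x₀), the while loop tests the Armijo sufficient-decrease condition in rearranged form, shrinking t while

f(x₀) − ( f(x₀ − t·g) + α·t·gᵀg ) < 0,

which is equivalent to f(x₀ − t·g) > f(x₀) − α·t·‖g‖². The printed "Inequality" value is exactly this quantity, so the search stops at the first t that makes it nonnegative.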

import numpy as np

alpha = 0.3
beta = 0.8

# Example objective and its partial derivatives
f = lambda x: x[0]**2 + 3*x[0]*x[1] + 12
dfx1 = lambda x: 2*x[0] + 3*x[1]   # partial derivative w.r.t. x1
dfx2 = lambda x: 3*x[0]            # partial derivative w.r.t. x2

t = 1
count = 1
x0 = np.array([2, 3])


def backtrack(x0, dfx1, dfx2, t, alpha, beta, count):
    g = np.array([dfx1(x0), dfx2(x0)])  # gradient at x0 (constant during the search)
    # Armijo condition, rearranged: shrink t while
    # f(x0) - ( f(x0 - t*g) + alpha*t*g.g ) < 0
    while f(x0) - (f(x0 - t*g) + alpha * t * np.dot(g, g)) < 0:
        t *= beta
        print("""

########################
###   iteration {}   ###
########################
""".format(count))
        print("Inequality: ", f(x0) - (f(x0 - t*g) + alpha * t * np.dot(g, g)))
        count += 1
    return t


t = backtrack(x0, dfx1, dfx2, t, alpha, beta, count)

print("\nfinal step size :", t)

Output:

########################
###   iteration 1   ###
########################

Inequality:  -143.12


########################
###   iteration 2   ###
########################

Inequality:  -73.22880000000006


########################
###   iteration 3   ###
########################

Inequality:  -32.172032000000044


########################
###   iteration 4   ###
########################

Inequality:  -8.834580480000021


########################
###   iteration 5   ###
########################

Inequality:  3.7502844927999845

final step size : 0.32768000000000014
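
Note that backtrack only chooses the step size for a single descent step; in practice it sits inside an outer gradient-descent loop that recomputes the gradient and the step at every iterate. One caveat: the example objective f(x) = x1² + 3·x1·x2 + 12 has an indefinite Hessian ([[2, 3], [3, 0]]), so it has no minimizer and a full descent run on it would diverge. The sketch below is therefore only a minimal, hypothetical demo that swaps in a convex variant (an extra 3·x2² term) so that the outer loop actually converges:

import numpy as np

# Convex stand-in objective: the extra 3*x2**2 term makes the Hessian positive definite
f = lambda x: x[0]**2 + 3*x[0]*x[1] + 3*x[1]**2 + 12
grad = lambda x: np.array([2*x[0] + 3*x[1], 3*x[0] + 6*x[1]])

def backtrack(x, alpha=0.3, beta=0.8):
    """Shrink t until the Armijo sufficient-decrease condition holds at x."""
    g = grad(x)
    t = 1.0
    while f(x - t*g) > f(x) - alpha * t * g.dot(g):
        t *= beta
    return t

# Outer loop: plain gradient descent with one backtracking search per step
x = np.array([2.0, 3.0])
for i in range(5000):
    g = grad(x)
    if np.linalg.norm(g) < 1e-6:  # stop once the gradient is (almost) zero
        break
    x = x - backtrack(x) * g

print("approximate minimizer:", x)  # approaches [0, 0], where f = 12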