I'm writing a basic Newton's method algorithm for an unconstrained optimization problem, but the results I get from it are not what I expect. The objective is a simple function (the Rosenbrock function), so the algorithm should clearly converge to (1, 1). A gradient descent algorithm I created earlier confirms this:
def grad_descent(x, t, count, magnitude):
    xvalues.append(x)
    gradvalues.append(np.array([dfx1(x), dfx2(x)]))
    fvalues.append(f(x))
    temp = x - t*dfx(x)
    x = temp
    magnitude = mag(dfx(x))
    count += 1
    return xvalues, gradvalues, fvalues, count
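(The routine above appends to xvalues, gradvalues, and fvalues defined at module scope, and the driver loop around it isn't shown. For reference, here is a self-contained sketch of the same idea; the step size, iteration cap, and tolerance are my assumptions, and f, dfx, and mag match the definitions given further down the post:)

import numpy as np

# Self-contained gradient descent sketch on the same objective.
f = lambda x: 100*np.square(x[1] - np.square(x[0])) + np.square(1 - x[0])
dfx = lambda x: np.array([-400*x[0]*x[1] + 400*np.power(x[0], 3) + 2*x[0] - 2,
                          200*(x[1] - np.square(x[0]))])
mag = lambda g: np.sqrt(np.sum(np.square(g)))

x = np.array([-1.1, 1.1])
t = 0.0005                        # Rosenbrock is badly scaled, so keep t small
for count in range(200000):
    x = x - t*dfx(x)
    if mag(dfx(x)) < 1e-6:        # stop once the gradient is tiny
        break
print(count, x)                   # slowly approaches (1, 1)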
My attempt at an algorithm for Newton's method is here:
def newton(x, t, count, magnitude):
    xvalues = []
    gradvalues = []
    fvalues = []
    temp = x - f(x)/dfx(x)
    while count < 10:
        xvalues.append(x)
        gradvalues.append(dfx(x))
        fvalues.append(f(x))
        temp = x - t*f(x)/dfx(x)
        x = temp
        magnitude = mag(dfx(x))
        count += 1
        if count > 100:
            break
    return xvalues, gradvalues, fvalues, count
Here are the objective function and its gradient:
import numpy as np

f = lambda x: 100*np.square(x[1]-np.square(x[0])) + np.square((1-x[0]))
dfx = lambda x: np.array([-400*x[0]*x[1]+400*np.power(x[0],3)+2*x[0]-2, 200*(x[1]-np.square(x[0]))])
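(As a quick sanity check on the derivatives, not part of the original post, the analytic gradient can be compared against a central-difference approximation; num_grad and x_test are hypothetical names:)

# Central-difference check of dfx at the starting point.
def num_grad(func, x, h=1e-6):
    g = np.zeros_like(x, dtype=float)
    for i in range(len(x)):
        e = np.zeros_like(x, dtype=float)
        e[i] = h
        g[i] = (func(x + e) - func(x - e)) / (2*h)
    return g

x_test = np.array([-1.1, 1.1])
print(dfx(x_test))            # analytic: [-52.6, -22.0]
print(num_grad(f, x_test))    # should agree to several decimal places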
Here are the initial conditions. Note that alpha and beta are not used by the Newton method:
x0, t0, alpha, beta, count = np.array([-1.1, 1.1]), 1, .15, .7, 1
magnitude = mag(np.array([dfx1(x0), dfx2(x0)]))
Calling the function:
xvalues, gradvalues, fvalues, iterations = newton(x0, t0, count, magnitude)
This produces very strange results. Here are the x values, the gradient values, and the objective-function values at those x inputs for the first 10 iterations:
[array([-1.1, 1.1]), array([-0.99315589, 1.35545455]), array([-1.11651296, 1.11709035]), array([-1.01732476, 1.35478987]), array([-1.13070578, 1.13125051]), array([-1.03603697, 1.35903467]), array([-1.14368874, 1.14364506]), array([-1.05188162, 1.36561528]), array([-1.15600558, 1.15480705]), array([-1.06599492, 1.37360245])]
[array([-52.6, -22. ]), array([142.64160215, 73.81918332]), array([-62.07323963, -25.90216846]), array([126.11789251, 63.96803995]), array([-70.85773749, -29.44900758]), array([114.31050737, 57.13241151]), array([-79.48668009, -32.87577304]), array([104.93863096, 51.83206539]), array([-88.25737032, -36.308371 ]), array([97.03403558, 47.45145765])]
[5.620000000000003, 17.59584998020613, 6.156932949106968, 14.29937453260906, 6.7080172227439725, 12.305727666787176, 7.297442528545537, 10.926625703722639, 7.944104584786208, 9.89743708419569]
Here is the final output:
final_value = print('Final set of x values: ', xvalues[-1])
final_grad = print('Final gradient values: ', gradvalues[-1])
final_f = print('Final value of the object function with optimized inputs: ', fvalues[-1])
final_grad_mag = print('Final magnitude of the gradient with optimized inputs: ', mag(np.array([dfx1(xvalues[-1]), dfx2(xvalues[-1])])))
total_iterations = print('Total iterations: ', iterations)
The 3D plot is shown here; the code for it:
import matplotlib.pyplot as plt

x = np.array([i[0] for i in xvalues])
y = np.array([i[1] for i in xvalues])
z = np.array(fvalues)
fig = plt.figure()
ax = fig.add_subplot(projection='3d')  # fig.gca(projection='3d') is removed in newer matplotlib
ax.scatter(x, y, z, label='Newton Method')
ax.legend()
plt.show()
Is it because the initial guess is so close to the optimal point, or is there some bug in my algorithm that I'm missing? Any advice would be greatly appreciated. It looks like the solution may even be oscillating, but it's hard to tell.
Answer 0 (score: 0)
I think I've found part of the problem: I was using an incorrect Newton update. Before, I was using:
x_{k+1} = x_k - f(x_k) / ∇f(x_k)
The correct update is:
x_{k+1} = x_k - [f''(x_k)]^{-1} f'(x_k)
When I changed this, the results still came out different, but slightly better. The new functions are here:
from numpy.linalg import inv

f = lambda x: 100*np.square(x[1]-np.square(x[0])) + np.square((1-x[0]))
dfx1 = lambda x: -400*x[0]*x[1]+400*np.power(x[0],3)+2*x[0]-2
dfx2 = lambda x: 200*(x[1]-np.square(x[0]))
dfx = lambda x: np.array([-400*x[0]*x[1]+400*np.power(x[0],3)+2*x[0]-2, 200*(x[1]-np.square(x[0]))])
dfx11 = lambda x: -400*(x[1])+1200*np.square(x[0])+2
dfx12 = lambda x: -400*x[0]
dfx21 = lambda x: -400*x[0]
dfx22 = lambda x: 200
hessian = lambda x: np.array(([dfx11(x0), dfx12(x0)], [dfx21(x0), dfx22(x0)]))
inv_hessian = lambda x: inv(np.array(([dfx11(x0), dfx12(x0)], [dfx21(x0), dfx22(x0)])))
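(Note that hessian and inv_hessian as written evaluate at the global x0 rather than at the lambda argument x, so the Hessian stays frozen at the starting point. A version that re-evaluates it at each iterate, which is what the working code in the final answer does, would be:)

# Re-evaluate the Hessian at the current iterate x instead of the fixed x0.
hessian = lambda x: np.array([[dfx11(x), dfx12(x)], [dfx21(x), dfx22(x)]])
inv_hessian = lambda x: inv(hessian(x))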
def newton(x, t, count, magnitude):
    xvalues = []
    gradvalues = []
    fvalues = []
    temp = x - (inv_hessian(x).dot(dfx(x)))
    while count < 25:
        xvalues.append(x)
        gradvalues.append(dfx(x))
        fvalues.append(f(x))
        temp = x - (inv_hessian(x).dot(dfx(x)))
        x = temp
        magnitude = mag(dfx(x))
        count += 1
        if count > 100:
            break
    return xvalues, gradvalues, fvalues, count
The closest it gets to the solution is after the first step, when it reaches (-1.05, 1.1); after that it still diverges. I have never used Newton's method before, so I'm not sure whether this is as accurate as the algorithm is supposed to get.
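(As an aside, a pure Newton step can overshoot and diverge when started far from the minimum. One standard remedy is damped Newton with a backtracking line search. The sketch below assumes the otherwise unused alpha and beta from the question are the usual Armijo sufficient-decrease and shrink parameters, and it reuses f, dfx, mag, and the corrected inv_hessian from above:)

# Hedged sketch: damped Newton with Armijo backtracking.
# Assumes the Newton direction is a descent direction (Hessian positive
# definite along the path); the t > 1e-10 guard keeps the inner loop finite.
def newton_damped(x, alpha=0.15, beta=0.7, tol=1e-6, max_iter=100):
    for _ in range(max_iter):
        g = dfx(x)
        step = -inv_hessian(x).dot(g)   # full Newton direction
        t = 1.0
        while f(x + t*step) > f(x) + alpha*t*g.dot(step) and t > 1e-10:
            t *= beta                   # shrink until sufficient decrease
        x = x + t*step
        if mag(dfx(x)) < tol:           # stop once the gradient is tiny
            break
    return x

(Called as newton_damped(np.array([-1.1, 1.1]), alpha, beta), this should step down the valley toward (1, 1) instead of overshooting.)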
Answer 1 (score: 0)
I'm now certain there is a problem with the Python code. I decided to implement the algorithm in Matlab, and it seems to work well. Here is the code:
clear; clc;
x = [-1.1, 1.1]';
t = 1;
count = 1;
xvalues = [];
temp = x - inv([(-400*x(2)+1200*x(1)^2+2), -400*x(1); -400*x(1), 200]);
disp(x - inv([(-400*x(2)+1200*x(1)^2+2), -400*x(1); -400*x(1), 200]) * [-400*x(1)*x(2)+400*x(1)^3+2*x(1)-2; 200*(x(2)-x(1)^2)])
while count < 10
    xvalues(count,:) = x;
    temp = x - inv([(-400*x(2)+1200*x(1)^2+2), -400*x(1); -400*x(1), 200]) * [-400*x(1)*x(2)+400*x(1)^3+2*x(1)-2; 200*(x(2)-x(1)^2)];
    x = temp;
    count = count + 1;
end
disp(xvalues)
Answer 2 (score: 0)
So I finally figured out what was going on. It all came down to which data structures Python was storing my variables as. I set all of the values to 'float32' and initialized the variable being iterated over. The working code is here:
import math
import numpy as np
from numpy.linalg import inv

f = lambda x: 100*np.square(x[1]-np.square(x[0])) + np.square((1-x[0]))
dfx = lambda x: np.array([-400*x[0]*x[1]+400*np.power(x[0],3)+2*x[0]-2, 200*(x[1]-np.square(x[0]))], dtype='float32')
dfx11 = lambda x: -400*(x[1])+1200*np.square(x[0])+2
dfx12 = lambda x: -400*x[0]
dfx21 = lambda x: -400*x[0]
dfx22 = lambda x: 200
hessian = lambda x: np.array([[dfx11(x), dfx12(x)], [dfx21(x), dfx22(x)]], dtype='float32')
inv_hessian = lambda x: inv(hessian(x))
mag = lambda x: math.sqrt(sum(i**2 for i in x))
def newton(x, t, count, magnitude):
    xvalues = []
    gradvalues = []
    fvalues = []
    temp = np.zeros((2,1))
    while magnitude > .000005:
        xvalues.append(x)
        gradvalues.append(dfx(x))
        fvalues.append(f(x))
        deltaX = np.array(np.dot(-inv_hessian(x), dfx(x)))
        temp = np.array(x + t*deltaX)
        x = temp
        magnitude = mag(deltaX)
        count += 1
    return xvalues, gradvalues, fvalues, count
x0, t0, alpha, beta, count = np.array([[-1.1], [1.1]]), 1, .15, .7, 1
magnitude = mag(dfx(x0))  # initialize so the while loop has a starting value
xvalues, gradvalues, fvalues, iterations = newton(x0, t0, count, magnitude)
final_value = print('Final set of x values: ', xvalues[-1])
final_grad = print('Final gradient values: ', gradvalues[-1])
final_f = print('Final value of the object function with optimized inputs: ', fvalues[-1])
final_grad_mag = print('Final magnitude of the gradient with optimized inputs: ', mag(np.array([dfx1(xvalues[-1]), dfx2(xvalues[-1])])))
total_iterations = print('Total iterations: ', iterations)
print(xvalues)
Output:
Final set of x values: [[0.99999995]
[0.99999987]]
Final gradient values: [[ 9.1299416e-06]
[-4.6193604e-06]]
Final value of the object function with optimized inputs: [5.63044182e-14]
Final magnitude of the gradient with optimized inputs: 1.02320249276675e-05
Total iterations: 9
[array([[-1.1],
[ 1.1]]), array([[-1.00869558],
[ 1.00913081]]), array([[-0.25557778],
[-0.50186648]]), array([[-0.24460602],
[ 0.05971173]]), array([[ 0.97073805],
[-0.53472879]]), array([[0.97083687],
[0.94252417]]), array([[0.99999957],
[0.99914868]]), array([[0.99999995],
[0.99999987]])]
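(As a final cross-check, not in the original answer: SciPy ships the Rosenbrock function with its exact gradient and Hessian, so the hand-rolled result can be compared against a library Newton-type solver:)

# Cross-check against SciPy's built-in Rosenbrock helpers and Newton-CG.
from scipy.optimize import minimize, rosen, rosen_der, rosen_hess

res = minimize(rosen, [-1.1, 1.1], method='Newton-CG',
               jac=rosen_der, hess=rosen_hess)
print(res.x)  # should print approximately [1. 1.]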