多变量优化可行性条件

时间:2019-02-03 18:09:15

标签: python-3.x nonlinear-optimization

在过去的一周中,我一直在尝试解决Python的多变量优化问题,并试图理解和实现它。我终于到了某种程度上,我拥有对我来说有意义的代码(最后提供了代码)。但是,当我运行它时,它不起作用。

考虑了一会儿之后,我认为问题来自以下错误/位置: -在function_evaluation中:-np.log(1-a)不能保证1-a是正数,因为A是用randrn随机生成的矩阵 -我没有实现这个问题的一部分,因为无论我读了多少书并用谷歌搜索了它(这是第3点),我都无法理解它的含义。 -在p = np.linalg.solve(hessian(x,A),-gradient(x,A))中生成的矩阵引发错误“ raise LinAlgError(“ Singular matrix”)“。我在Google上搜索了一下,发现我正在处理一个奇异矩阵。

我的主要问题是第3点的含义是什么,如何确定1-a是正数,以便日志不会引发错误?

这是问题的完整屏幕截图。 https://gyazo.com/d7642672583e48a4374d437eb2d66734?token=67d83a711f7dbe77ebce50d9419d44e7

#libraries import
from mpl_toolkits.mplot3d import Axes3D
from matplotlib import cm
from matplotlib.colors import LogNorm
import matplotlib.pyplot as plt
import numpy as np

#Generate random vectors
#size: how much components in a  (a = [1,2] => size = 2)
#number: how much vectors to create (a1,a2,a3 => number = 3)
def generate_vectors_a(size,number):
    coefficient_matrix = np.random.randn(size,number)
    return coefficient_matrix

def generate_vectors_x(size):
    X = np.random.randn(size)
    return X

#Calculates function value at a given point p
def function_evaluation(p,A):
    t_1 = 0
    t_2 = 0

    for x in p:
        t_2 += - np.log(1 - x**2)
    for i in range(500):
        a = np.dot(A[:,i],p)
        t_1 += - np.log(1 - a)
    f= t_1 + t_2
    return f


#Calculates gradient at a given point p
def gradient(p,A):
    g = np.zeros(len(p))
    Index = 0
    for x in p:
        t_2 = 0
        t_1 = 0
        t_2 = (2 * x) / (1 - x**2) 
        for n in range(500):
            t_1 += A[Index,n] / (1 - np.dot(A[:,n],p))
        g[Index] = t_1 + t_2
        Index +=1
    return g


#Calculates hessian at a given point p
def hessian(p,A):
    #create the hessian matrix
    r, c = 100, 100; #row and col
    h = np.zeros((r,c))
    #calculating entries of Hessian
    for j in range(100):
        t_1 = 0
        t_2 = 0 
        for k in range(100):
            if j == k:
                for i in range(500):
                    t_1 = A[j][i]**2 / (1 - np.dot(A[:,i],p)**2)
                t_2  = 2 * ( 1  + p[j]**2) / (1 - p[j]**2)**2
            else:
                for i in range(500):
                    t_1 = A[j][i]* A[k][i] / (1 - np.dot(A[:,i],p)**2)
        h[j][k] = t_1 + t_2
    return h


#Backtrack line search
def backtrack_linesearch(gk, pk, xk, alpha = 0.1, beta = 0.8):
    t = 1
    while (function_evaluation((xk + t*pk),A) > function_evaluation(xk,A) + alpha * t * gk @ pk ):
        t *= beta
    return t 

#Newton
def newton_bt(A, x, tol = 1e-5, ):

    #initialize the point x which is p here
    history = np.array([x])
    while (np.linalg.norm(gradient(x,A)) > tol):
        p = np.linalg.solve(hessian(x,A), -gradient(x,A))
        t = backtrack_linesearch(gradient(x,A), p, x)
        x += t * p
        history = np.vstack( (history, x))
    return x, history

#Draws a graph representing the performance (as in number of interations)
def VisualPerformance(h):
    nsteps = h.shape[0]
    fhist = np.zeros(nsteps)
    for i in range(nsteps):
        fhist[i] = function_evaluation(h[i,:])
    plt.figure()
    plt.autoscale(enable=True, axis='x', tight=True)
    plt.semilogy(np.arange(0, nsteps), fhist, linewidth=1)
    plt.xlabel('Iteration count')
    plt.ylabel(r'$|f^k - f^*|$')
    plt.savefig('Hundred_Variables_Newton_Performance.png',bbox_inches='tight')
    plt.show()


#prepare Coffecient Matrix
A = generate_vectors_a(100,500)
#prepare 100 variables
X = generate_vectors_x(100)
#do newton
xstar, hist = newton_bt(A,X)
#visualize performance
VisualPerformance(hist)

0 个答案:

没有答案