Lack of convergence with scipy.optimize.minimize

Date: 2019-04-10 19:06:55

Tags: python optimization scipy

In short, I am trying to "scale" an array so that its integral is 1, i.e. so that the sum of the array elements divided by the number of elements equals 1. However, this has to be done by changing a parameter alpha, not by simply multiplying the array by a scale factor. For that I am using scipy.optimize.minimize. The problem is that the code runs and reports "Optimization terminated successfully", but the displayed current function value is not 0, so the optimization clearly did not actually succeed.

This is a screenshot from the paper that defines the equation.

import numpy as np
from scipy.optimize import minimize

# just defining some parameters
N = 100
g = np.ones(N)
eta = np.array([i/100 for i in range(N)])
g_at_one = 0.01   

def my_minimization_func(alpha):
    d = eta[:] - eta[N-1]
    g[:] = alpha*(1 + (1 - g_at_one/alpha)*np.exp(d/2)
                  * (np.sin(np.sqrt(3)/2*d)/np.sqrt(3) - np.cos(np.sqrt(3)/2*d)))
    to_be_minimized = np.sum(g[:])/N - 1
    return to_be_minimized

result_of_minimization = minimize(my_minimization_func, 0.1, options={'gtol': 1e-8, 'disp': True})
alpha_at_min = result_of_minimization.x
print(alpha_at_min)

1 Answer:

Answer 0 (score: 0)

It is not clear to me why you want to use minimization for this kind of problem. You could simply normalize the array and then compute alpha from the normalized and the original array. For array normalization, have a look here.
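As a minimal sketch of that normalization idea (using a made-up array, since in the actual problem g depends on alpha affinely rather than as a pure scale factor, so this only illustrates the normalization step itself):

```python
import numpy as np

# a made-up positive array standing in for g
g = np.linspace(0.5, 2.0, 100)

# divide by the mean so that sum(g_norm)/len(g_norm) == 1 exactly
g_norm = g / (np.sum(g) / len(g))

print(np.sum(g_norm) / len(g_norm))  # 1.0 up to floating-point error
```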

In your code, the objective function contains the division (1 - g_at_one/alpha), so the function is not defined at alpha = 0, which is why I assume scipy is skipping that point.

Edit: So I simply reformulated your problem using a constraint, and added some prints for better visualization. I hope this helps:

import numpy as np
from scipy.optimize import minimize

# just defining some parameters
N   = 100
g   = np.ones(N)
eta = np.array([i/100 for i in range(N)])
g_at_one = 0.01   

def f(alpha):
    d = eta[:] - eta[N-1]
    g = alpha*(1 + (1 - g_at_one/alpha)*np.exp(d/2)
               * (np.sin(np.sqrt(3)/2*d)/np.sqrt(3) - np.cos(np.sqrt(3)/2*d)))
    to_be_minimized = np.sum(g[:])/N
    print("+ For alpha: %7s => f(alpha): %7s" % ( round(alpha[0],3),
                                                  round(to_be_minimized,3) ))
    return to_be_minimized

cons = {'type': 'ineq', 'fun': lambda alpha:  f(alpha) - 1}
result_of_minimization = minimize(f, 
                                  x0 = 0.1,
                                  constraints = cons,
                                  tol = 1e-8,
                                  options = {'disp': True})

alpha_at_min = result_of_minimization.x

# verify 
print("\nAlpha at min: ", alpha_at_min[0])
alpha = alpha_at_min
d = eta[:] - eta[N-1]
g = alpha*(1 + (1 - g_at_one/alpha)*np.exp(d/2)
           * (np.sin(np.sqrt(3)/2*d)/np.sqrt(3) - np.cos(np.sqrt(3)/2*d)))
print("Verification: ", round(np.sum(g[:])/N - 1) == 0)

Output:

+ For alpha:     0.1 => f(alpha):   0.021
+ For alpha:     0.1 => f(alpha):   0.021
+ For alpha:     0.1 => f(alpha):   0.021
+ For alpha:     0.1 => f(alpha):   0.021
+ For alpha:     0.1 => f(alpha):   0.021
+ For alpha:     0.1 => f(alpha):   0.021
+ For alpha:     0.1 => f(alpha):   0.021
+ For alpha:   7.962 => f(alpha):     1.0
+ For alpha:   7.962 => f(alpha):     1.0
+ For alpha:   7.962 => f(alpha):     1.0
+ For alpha:   7.962 => f(alpha):     1.0
+ For alpha:   7.962 => f(alpha):     1.0
+ For alpha:   7.962 => f(alpha):     1.0
+ For alpha:   7.962 => f(alpha):     1.0
+ For alpha:   7.962 => f(alpha):     1.0
+ For alpha:   7.962 => f(alpha):     1.0
+ For alpha:   7.962 => f(alpha):     1.0
+ For alpha:   7.962 => f(alpha):     1.0
+ For alpha:   7.962 => f(alpha):     1.0
Optimization terminated successfully.    (Exit mode 0)
            Current function value: 1.0000000000000004
            Iterations: 3
            Function evaluations: 9
            Gradient evaluations: 3

Alpha at min:  7.9620687892224264
Verification:  True
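Since the actual goal is to make np.sum(g)/N - 1 exactly zero, this is really a scalar root-finding problem rather than a minimization. A sketch using scipy.optimize.brentq instead of minimize (the bracket [0.1, 20] is an assumption based on the function values printed above, where f(0.1) < 1 and f grows roughly linearly in alpha):

```python
import numpy as np
from scipy.optimize import brentq

# same setup as in the question
N = 100
eta = np.arange(N) / 100
g_at_one = 0.01

def g_of_alpha(alpha):
    d = eta - eta[N - 1]
    return alpha * (1 + (1 - g_at_one / alpha) * np.exp(d / 2)
                    * (np.sin(np.sqrt(3) / 2 * d) / np.sqrt(3)
                       - np.cos(np.sqrt(3) / 2 * d)))

def residual(alpha):
    # zero exactly when the mean of g equals 1
    return np.sum(g_of_alpha(alpha)) / N - 1

# find the alpha where the residual crosses zero within the bracket
alpha_root = brentq(residual, 0.1, 20.0)
print(alpha_root)  # close to the 7.962 found by the constrained minimization
```

Because the residual is a smooth scalar function of a single alpha, a bracketing root finder converges in a handful of evaluations and needs no constraints or gradient options.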