Scipy's shgo optimizer fails to minimize the variance

Time: 2019-07-09 01:36:58

Tags: optimization scipy

To get familiar with global optimization methods, in particular the shgo optimizer in scipy.optimize v1.3.0, I tried to minimize the variance var(x) of a vector x = [x1, ..., xN] with 0 <= xi <= 1, under the constraint that x has a given mean value:

import numpy as np
from scipy.optimize import shgo

# Constraint
avg = 0.5  # Given average value of x
cons = {'type': 'eq', 'fun': lambda x: np.mean(x)-avg}

# Minimize the variance of x under the given constraint
res = shgo(lambda x: np.var(x), bounds=6*[(0, 1)], constraints=cons)
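
As a sanity check of the setup (my addition, not part of the original question), the expected optimum satisfies the constraint exactly and attains the smallest possible variance:

```python
import numpy as np

avg = 0.5
cons_fun = lambda x: np.mean(x) - avg  # same constraint function as above
x_opt = np.full(6, avg)                # the uniform vector [0.5]*6

print(cons_fun(x_opt))  # 0.0 -> feasible
print(np.var(x_opt))    # 0.0 -> variance at its lower bound
```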

The shgo method fails on this problem:

>>> res
     fun: 0.0
 message: 'Failed to find a feasible minimiser point. Lowest sampling point = 0.0'
    nfev: 65
     nit: 2
   nlfev: 0
   nlhev: 0
   nljev: 0
 success: False
       x: array([0., 0., 0., 0., 0., 0.])

The correct solution would be the uniform distribution x = [0.5, 0.5, 0.5, 0.5, 0.5, 0.5], which is easily found by the local optimizer minimize from scipy.optimize:

from scipy.optimize import minimize
from numpy.random import random

x0 = random(6)  # Random start vector
res2 = minimize(lambda x: np.var(x), x0, bounds=6*[(0, 1)], constraints=cons)

The minimize method gives the correct result for any starting vector:

>>> res2.success
True

>>> res2.x
array([0.5, 0.5, 0.5, 0.5, 0.5, 0.5])

My question is: why does shgo fail on this relatively simple task? Am I making a mistake, or is shgo simply unable to solve this problem? Any help would be greatly appreciated.

1 answer:

Answer 0 (score: 0)

A very detailed answer to this question was provided by Stefan-Endres on the scipy project page on GitHub. Many thanks to Stefan-Endres!
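
For readers who do not follow the GitHub discussion, one generic workaround (my own sketch, not necessarily what Stefan-Endres suggested) is to fold the equality constraint into the objective as a quadratic penalty, so that shgo only has to handle the box bounds:

```python
import numpy as np
from scipy.optimize import shgo

avg = 0.5
w = 1e3  # penalty weight; an arbitrary choice for this sketch

# Penalized objective: variance plus a quadratic penalty on the mean deviation.
# The exact minimum of this function is still the uniform vector [0.5]*6.
obj = lambda x: np.var(x) + w * (np.mean(x) - avg) ** 2

res = shgo(obj, bounds=6 * [(0, 1)])
print(res.success)
print(res.x)  # should lie close to [0.5, 0.5, 0.5, 0.5, 0.5, 0.5]
```

With the penalty formulation there are no infeasible sampling points to reject, so shgo's local minimization stage can run normally; the trade-off is that the constraint is only satisfied approximately, with accuracy controlled by the weight w.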