OpenMDAO cache_linear_solution not updating the initial guess

Date: 2019-06-20 12:45:29

Tags: python openmdao

I'd like to save time on an expensive linear solve during an optimization by using the previous linear solution as the initial guess for subsequent iterations. I'm looking at OpenMDAO's cache_linear_solution feature, which seems to have been developed for this purpose (here), and the code shown below:

from distutils.version import LooseVersion
import numpy as np
import scipy
from scipy.sparse.linalg import gmres

import openmdao.api as om


class QuadraticComp(om.ImplicitComponent):
    """
    A Simple Implicit Component representing a Quadratic Equation.

    R(a, b, c, x) = ax^2 + bx + c

    Solution via Quadratic Formula:
    x = (-b + sqrt(b^2 - 4ac)) / 2a
    """

    def setup(self):
        self.add_input('a', val=1.)
        self.add_input('b', val=1.)
        self.add_input('c', val=1.)
        self.add_output('states', val=[0,0])

        self.declare_partials(of='*', wrt='*')

    def apply_nonlinear(self, inputs, outputs, residuals):
        a = inputs['a']
        b = inputs['b']
        c = inputs['c']
        x = outputs['states'][0]
        y = outputs['states'][1]

        residuals['states'][0] = a * x ** 2 + b * x + c
        residuals['states'][1] = a * y + b

    def solve_nonlinear(self, inputs, outputs):
        a = inputs['a']
        b = inputs['b']
        c = inputs['c']
        outputs['states'][0] = (-b + (b ** 2 - 4 * a * c) ** 0.5) / (2 * a)
        outputs['states'][1] = -b/a

    def linearize(self, inputs, outputs, partials):
        a = inputs['a'][0]
        b = inputs['b'][0]
        c = inputs['c'][0]
        x = outputs['states'][0]
        y = outputs['states'][1]

        partials['states', 'a'] = [[x**2],[y]]
        partials['states', 'b'] = [[x],[1]]
        partials['states', 'c'] = [[1.0],[0]]
        partials['states', 'states'] = [[2*a*x+b, 0],[0, a]]

        self.state_jac = np.array([[2*a*x+b, 0],[0, a]])

    def solve_linear(self, d_outputs, d_residuals, mode):

        if mode == 'fwd':
            print("incoming initial guess", d_outputs['states'])
            if LooseVersion(scipy.__version__) < LooseVersion("1.1"):
                d_outputs['states'] = gmres(self.state_jac, d_residuals['states'], x0=d_outputs['states'])[0]
            else:
                d_outputs['states'] = gmres(self.state_jac, d_residuals['states'], x0=d_outputs['states'], atol='legacy')[0]
        elif mode == 'rev':
            if LooseVersion(scipy.__version__) < LooseVersion("1.1"):
                d_residuals['states'] = gmres(self.state_jac, d_outputs['states'], x0=d_residuals['states'])[0]
            else:
                d_residuals['states'] = gmres(self.state_jac, d_outputs['states'], x0=d_residuals['states'], atol='legacy')[0]

p = om.Problem()
indeps = p.model.add_subsystem('indeps', om.IndepVarComp(), promotes_outputs=['a', 'b', 'c'])
indeps.add_output('a', 1.)
indeps.add_output('b', 4.)
indeps.add_output('c', 1.)
p.model.add_subsystem('quad', QuadraticComp(), promotes_inputs=['a', 'b', 'c'], promotes_outputs=['states'])

p.model.add_design_var('a', cache_linear_solution=True)
p.model.add_constraint('states', upper=10)


p.setup(mode='fwd')
p.run_model()

print(p['states'])

derivs = p.compute_totals(of=['states'], wrt=['a'])
print(derivs['states', 'a'])

p['a'] = 4
derivs = p.compute_totals(of=['states'], wrt=['a'])
print(derivs['states', 'a'])

The code above gives the following printed output:

[-0.26794919 -4.        ]
incoming initial guess [0. 0.]
[[-0.02072594]
 [ 4.        ]]
incoming initial guess [0. 0.]
[[-0.02072594]
 [ 4.        ]]

Judging from the printed results of this example, the initial guess for the linear solve does not seem to be actually updated. Am I missing something? I also tried running the code with cache_linear_solution set to False, and the results appear to be the same.

1 Answer:

Answer 0: (score: 2)

Currently, the caching of linear solutions only happens when total derivatives are computed during the execution of a driver, so to check that it would happen during an optimization (i.e. inside a run_driver call), change

derivs = p.compute_totals(of=['states'], wrt=['a'])

to

derivs = p.driver._compute_totals(of=['states'], wrt=['a'], global_names=False)

Doing that with your code gives the following output:

[-0.26794919 -4.        ]
incoming initial guess [0. 0.]
[[-0.02072594]
 [ 4.        ]]
incoming initial guess [-0.02072594  4.        ]
[[-0.02072594]
 [ 4.        ]]

Note that the global_names=False argument is only needed if you use promoted names for your of and wrt variables.
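
As a hedged illustration (the absolute names quad.states and indeps.a are inferred from the subsystem layout in the example above, not taken from the original answer), the equivalent call with absolute names, where the default global_names=True applies, would look something like:

# Hypothetical equivalent call using absolute (full-path) variable names,
# so global_names can be left at its default value.
derivs = p.driver._compute_totals(of=['quad.states'], wrt=['indeps.a'])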

I'll update the example code to reflect the correct way of doing this.
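
For reference, here is a minimal sketch (not part of the original answer) of how the same component could be exercised inside an actual optimization, where the caching then happens automatically while the driver computes total derivatives. It reuses QuadraticComp and the openmdao.api import from the script above; the objective, design-variable bounds, and SLSQP settings are assumptions added only so that run_driver has something to iterate on:

# Minimal sketch, assuming QuadraticComp and `import openmdao.api as om`
# from the question's script are already available.
p = om.Problem()
indeps = p.model.add_subsystem('indeps', om.IndepVarComp(), promotes_outputs=['a', 'b', 'c'])
indeps.add_output('a', 1.)
indeps.add_output('b', 4.)
indeps.add_output('c', 1.)
p.model.add_subsystem('quad', QuadraticComp(),
                      promotes_inputs=['a', 'b', 'c'], promotes_outputs=['states'])

# Request caching of the linear solution on the design variable, as in the question.
p.model.add_design_var('a', lower=0.5, upper=3., cache_linear_solution=True)
# Arbitrary objective (an assumption) so the optimizer has something to minimize.
p.model.add_objective('states', index=0)

p.driver = om.ScipyOptimizeDriver()
p.driver.options['optimizer'] = 'SLSQP'
p.driver.options['maxiter'] = 5

p.setup(mode='fwd')
p.run_driver()
# From the second driver iteration onward, the "incoming initial guess" printed
# in solve_linear should be nonzero, showing the cached solution being reused.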