Semi-total derivatives of a group within a group do not use the same inputs

Date: 2018-11-02 11:36:04

Tags: openmdao

Here is the N2 diagram of an example. I have a group (gr1) with a linear and a nonlinear solver attached (DirectSolver and NonlinearBlockGS).

If I use the setup shown as version 1 with a gradient-based optimizer on the whole problem, the finite differences are applied to the variables D1 and D2 (which are floats).

If, with the same setup, I wrap gr1 inside gr2 (so that approx_totals now sits on gr2), the finite differences are applied to the ndarrays t and d instead, which leads to a minimum of n*2 function evaluations.

Of course, I do not have to add gr2 in this setup, but my goal is the version marked as needed in the future in the figure below.

[N2 diagram]
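
To make the two versions concrete, here is a minimal sketch of the structure I am describing; gr1, gr2, D1 and D2 only mirror the names in the N2 diagram, and the ExecComps are placeholders rather than my real disciplines:

from openmdao.api import Group, ExecComp, DirectSolver, NonlinearBlockGS

def make_gr1():
    # group with the coupled placeholder disciplines and the solvers from the N2 diagram
    gr1 = Group()
    gr1.add_subsystem('D1', ExecComp('y1 = x - 0.2*y2'), promotes=['*'])
    gr1.add_subsystem('D2', ExecComp('y2 = 0.5*y1'), promotes=['*'])
    gr1.nonlinear_solver = NonlinearBlockGS()
    gr1.linear_solver = DirectSolver()
    return gr1

# version 1: approx_totals sits directly on gr1
gr1 = make_gr1()
gr1.approx_totals()

# version 2: gr1 is wrapped inside gr2 and approx_totals sits on gr2 instead
gr2 = Group()
gr2.add_subsystem('gr1', make_gr1(), promotes=['*'])
gr2.approx_totals()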

I cannot reproduce the same issue with SellarMDA, which is strange, because I tried to build the same setup. Nevertheless, I have added the SellarMDA code below to illustrate my question. A variable named "ver1", set to True or False, switches between the two setups.

ver1 = False -> This is the case with a single cycle group. In this setup, FD is applied to the global design variables, which is exactly what I want.

ver1 = True -> This is the case of a group within a group. FD is applied to the larger arrays of coupling parameters.

from openmdao.api import Problem, ScipyOptimizeDriver, ExecComp, IndepVarComp, DirectSolver, ExplicitComponent, NonlinearBlockGS, Group
import numpy as np

class SellarDis1(ExplicitComponent):
    """
    Component containing Discipline 1 -- no derivatives version.
    """

    def setup(self):

        # Global Design Variable
        self.add_input('z', val=np.zeros(2))



        # Coupling parameter
        self.add_input('y2', val=1.0)

        # Coupling output
        self.add_output('y1', val=1.0)

        # Finite difference all partials.
        self.declare_partials('*', '*', method='fd')

    def compute(self, inputs, outputs):
        """
        Evaluates the equation
        y1 = z1**2 + z2 - 0.2*y2
        """
        z1 = inputs['z'][0]
        z2 = inputs['z'][1]
        y2 = inputs['y2']
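        # print z on every evaluation so the FD perturbations are visible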
        print(inputs['z'])

        outputs['y1'] = z1**2 + z2 - 0.2*y2


class SellarDis2(ExplicitComponent):
    """
    Component containing Discipline 2 -- no derivatives version.
    """

    def setup(self):
        # Global Design Variable
        self.add_input('z', val=np.zeros(2))

        # Coupling parameter
        self.add_input('y1', val=1.0)

        # Coupling output
        self.add_output('y2', val=1.0)

        # Finite difference all partials.
        self.declare_partials('*', '*', method='fd')

    def compute(self, inputs, outputs):
        """
        Evaluates the equation
        y2 = y1**(.5) + z1 + z2
        """
        z1 = inputs['z'][0]
        z2 = inputs['z'][1]
        y1 = inputs['y1']

        # Note: this may cause some issues. However, y1 is constrained to be
        # above 3.16, so let's just let it converge, and the optimizer will
        # throw it out
        if y1.real < 0.0:
            y1 *= -1

        outputs['y2'] = y1**.5 + z1 + z2




class SellarMDA(Group):
    """
    Group containing the Sellar MDA.
    """

    def setup(self):
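        # ver1 toggles the two setups described above:
        #   True  -> the cycle is a nested group inside this group
        #   False -> the components are added directly to this group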
        ver1 = False
        if ver1:
            cycle = self.add_subsystem('cycle', Group(), promotes=['*'])
            cycle.add_subsystem('d1', SellarDis1(), promotes_inputs=[ 'z', 'y2'], promotes_outputs=['y1'])
            cycle.add_subsystem('d2', SellarDis2(), promotes_inputs=['z', 'y1'], promotes_outputs=['y2'])
            # Nonlinear Block Gauss Seidel is a gradient free solver
            cycle.nonlinear_solver = NonlinearBlockGS()
        else:            
            self.add_subsystem('d1', SellarDis1(), promotes_inputs=[ 'z', 'y2'], promotes_outputs=['y1'])
            self.add_subsystem('d2', SellarDis2(), promotes_inputs=['z', 'y1'], promotes_outputs=['y2'])
            self.nonlinear_solver = NonlinearBlockGS()


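        # approximate the semi-total derivatives across this group with FD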
        self.approx_totals()


prob = Problem()
indeps = prob.model.add_subsystem('indeps', IndepVarComp(), promotes=['*'])

indeps.add_output('z', np.array([5.0, 2.0]))
SellarMDA11 = SellarMDA()
prob.model.add_subsystem('SellarMDA', SellarMDA11, promotes=['*'])
# SellarMDA11.approx_totals()
prob.model.add_subsystem('obj_cmp', ExecComp('obj = z[1] + y1 + exp(-y2)',
                                             z=np.array([0.0, 0.0])),
                         promotes=['z', 'y1', 'y2', 'obj'])
prob.model.add_subsystem('con_cmp1', ExecComp('con1 = 3.16 - y1'), promotes=['con1', 'y1'])
prob.model.add_subsystem('con_cmp2', ExecComp('con2 = y2 - 24.0'), promotes=['con2', 'y2'])

prob.driver = ScipyOptimizeDriver()
prob.driver.options['optimizer'] = 'SLSQP'
# prob.driver.options['maxiter'] = 100
prob.driver.options['tol'] = 1e-8

prob.model.add_design_var('z', lower=0, upper=10)
prob.model.add_objective('obj')
prob.model.add_constraint('con1', upper=0)
prob.model.add_constraint('con2', upper=0)

prob.setup()
prob.set_solver_print(level=0)

# Ask OpenMDAO to finite-difference across the model to compute the gradients for the optimizer
#prob.model.approx_totals()

prob.run_driver()

print('minimum found at')
print(prob['z'])

print('minimum objective')
print(prob['obj'][0])
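
To see which inputs the finite difference actually sweeps, I also request the totals once directly after the run and watch the print output of z inside SellarDis1.compute (the of/wrt names below simply match the script above):

totals = prob.compute_totals(of=['obj', 'con1', 'con2'], wrt=['z'])
print(totals['obj', 'z'])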

1 Answer:

Answer 0: (score: 0)

We looked carefully through the code and your example, but we cannot reproduce any problem like the one you describe. Without a test case, we won't be able to make more progress on this.
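
If it helps, one sketch of what such a test case could look like is to count the compute calls while the totals are being approximated; the counter class below is purely illustrative and has not been run against your actual model:

# Hypothetical counter wrapped around the SellarDis1 from the question above.
class CountingDis1(SellarDis1):
    n_evals = 0

    def compute(self, inputs, outputs):
        CountingDis1.n_evals += 1
        super().compute(inputs, outputs)

# Swap CountingDis1 in for SellarDis1, call prob.compute_totals(of=['obj'], wrt=['z']),
# and compare n_evals between ver1 = True and ver1 = False.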