Gradient of a kriged function in OpenMDAO

Asked: 2016-01-20 17:07:19

Tags: gradient kriging openmdao

I am currently writing a multiple-gradient descent algorithm that uses kriged functions. My problem is that I cannot figure out how to obtain the gradient of a kriged function (I tried using `linearize`, but I don't know how to make it work).

from __future__ import print_function

from six import moves
from random import shuffle
import sys
import numpy as np
from numpy import linalg as LA
import math
from openmdao.braninkm import F, G, DF, DG

from openmdao.api import Group, Component,IndepVarComp
from openmdao.api import MetaModel
from openmdao.api import KrigingSurrogate, FloatKrigingSurrogate

def rand_lhc(b, k):
    # Generates a random Latin hypercube of 2*b points in k dimensions,
    # scaled to roughly the [-1.2, 1.2]^k hypercube.
    arr = np.zeros((2*b, k))
    row = list(moves.xrange(-b, b))
    for i in moves.xrange(k):
        shuffle(row)
        arr[:, i] = row
    return arr/b*1.2


class TrigMM(Group):
    ''' FloatKriging gives responses as floats '''

    def __init__(self):
        super(TrigMM, self).__init__()

        # Create meta_model for f_x as the response
        F_mm = self.add("F_mm", MetaModel())
        F_mm.add_param('X', val=np.array([0., 0.]))
        F_mm.add_output('f_x:float', val=0., surrogate=FloatKrigingSurrogate())
       # F_mm.add_output('df_x:float', val=0., surrogate=KrigingSurrogate().linearize)


        #F_mm.linearize('X', 'f_x:float')
        #F_mm.add_output('g_x:float', val=0., surrogate=FloatKrigingSurrogate())
        print('init ok')
        self.add('p1', IndepVarComp('X', val=np.array([0., 0.])))
        self.connect('p1.X', 'F_mm.X')

        # Create meta_model for g_x as the response
        G_mm = self.add("G_mm", MetaModel())
        G_mm.add_param('X', val=np.array([0., 0.]))
        G_mm.add_output('g_x:float', val=0., surrogate=FloatKrigingSurrogate())
        #G_mm.add_output('df_x:float', val=0., surrogate=KrigingSurrogate().linearize)

        #G_mm.linearize('X', 'g_x:float')
        self.add('p2', IndepVarComp('X', val=np.array([0., 0.])))
        self.connect('p2.X', 'G_mm.X')

from openmdao.api import Problem

prob = Problem()
prob.root = TrigMM()
prob.setup()

u=4 
v=3 

# training with a Latin hypercube

prob['F_mm.train:X'] = rand_lhc(20,2)
prob['G_mm.train:X'] = rand_lhc(20,2)

#prob['F_mm.train:X'] = rand_lhc(10,2)
#prob['G_mm.train:X'] = rand_lhc(10,2)
#prob['F_mm.linearize:X'] = rand_lhc(10,2)
#prob['G_mm.linearize:X'] = rand_lhc(10,2)
datF=[]
datG=[]
datDF=[]
datDG=[]

for i in range(len(prob['F_mm.train:X'])):
    datF.append(F(np.array([prob['F_mm.train:X'][i]]),u))
    #datG.append(G(np.array([prob['F_mm.train:X'][i]]),v))
data_trainF = np.fromiter(datF, float)  # np.float is deprecated; plain float works

for i in range(len(prob['G_mm.train:X'])):
    datG.append(G(np.array([prob['G_mm.train:X'][i]]),v))   
data_trainG = np.fromiter(datG, float)

prob['F_mm.train:f_x:float'] = data_trainF
#prob['F_mm.train:g_x:float'] = data_trainG
prob['G_mm.train:g_x:float'] = data_trainG
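For a framework-independent intuition of where a Kriging surrogate's analytic gradient comes from, here is a minimal pure-NumPy sketch of a Gaussian-process-style interpolator and its analytic gradient. All names here (`rbf_kernel`, `fit_gp`, `gp_gradient`, the `theta` parameter) are invented for illustration; this is not OpenMDAO's implementation, just the general idea that a kernel-based prediction can be differentiated in closed form.

```python
import numpy as np

def rbf_kernel(A, B, theta=10.0):
    # Squared-exponential correlation between point sets of shape (na, d) and (nb, d).
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-theta * d2)

def fit_gp(X_train, y_train, theta=10.0, nugget=1e-10):
    # Solve K @ alpha = y once; predictions are then weighted kernel sums.
    K = rbf_kernel(X_train, X_train, theta) + nugget * np.eye(len(X_train))
    return np.linalg.solve(K, y_train)

def gp_predict(x, X_train, alpha, theta=10.0):
    k = rbf_kernel(x[None, :], X_train, theta)[0]
    return k @ alpha

def gp_gradient(x, X_train, alpha, theta=10.0):
    # d/dx exp(-theta * |x - xi|^2) = -2 * theta * (x - xi) * k_i,
    # so the prediction gradient is the alpha-weighted sum of these terms.
    k = rbf_kernel(x[None, :], X_train, theta)[0]
    return (-2.0 * theta * (x[None, :] - X_train) * k[:, None]).T @ alpha

# Train on a smooth 2-D function, then take the analytic gradient at a point.
grid = np.linspace(-1.0, 1.0, 5)
X_train = np.array([[a, b] for a in grid for b in grid])
y_train = (X_train ** 2).sum(axis=1)
alpha = fit_gp(X_train, y_train)

x0 = np.array([0.2, -0.1])
g = gp_gradient(x0, X_train, alpha)
```

A central finite difference of `gp_predict` itself reproduces `g` to high accuracy, which is the same consistency check one would apply to any surrogate gradient.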

1 Answer:

Answer 0 (score: 1)

Are you trying to write a multiple-gradient descent driver? If so, OpenMDAO computes gradients from the parameters to the outputs at the `Problem` level using the `calc_gradient` method.

If you take a look at the source code for the pyoptsparse driver:

https://github.com/OpenMDAO/OpenMDAO/blob/master/openmdao/drivers/pyoptsparse_driver.py

The `_gradfunc` method is a callback function that returns the gradients of the constraints and objectives with respect to the design variables. The `MetaModel` component provides built-in analytic gradients for all (I believe) of its surrogates, so you don't even have to declare anything there.

If this isn't what you're trying to do, then I may need more information about your application.
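Since the question mentions a multiple-gradient descent algorithm: once you have per-objective gradients (from `calc_gradient` or anywhere else), the MGDA-style common descent direction for two objectives is the negative of the minimum-norm point on the segment between the two gradient vectors. A hedged, pure-NumPy sketch (the function name `mgda_direction` is invented for illustration, not part of OpenMDAO):

```python
import numpy as np

def mgda_direction(g1, g2):
    # Minimum-norm convex combination d = t*g1 + (1-t)*g2 with t in [0, 1];
    # -d is a descent direction for both objectives whenever d != 0.
    diff = g1 - g2
    denom = diff @ diff
    t = 0.5 if denom < 1e-12 else float(np.clip((g2 - g1) @ g2 / denom, 0.0, 1.0))
    return -(t * g1 + (1.0 - t) * g2)
```

For example, with `g1 = [1, 0]` and `g2 = [0, 1]` the returned direction is `[-0.5, -0.5]`, which has a negative dot product with both gradients, so a small step along it decreases both objectives.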