Pyomo parameter estimation with time-varying input signals

Asked: 2018-08-08 15:58:37

Tags: python optimization estimation pyomo

I would like to try Pyomo for a parameter estimation problem; this is what I have so far. First, the parameters and variables are created. The unknowns of the estimation problem are p1 to p6. The time-varying inputs are TVL, mdot and TU.

import pyomo.environ as pyo
import pyomo.dae as dae

model = pyo.ConcreteModel()
model.t = dae.ContinuousSet(initialize=time)  # `time`: measurement time points

model.p1 = pyo.Var(domain=pyo.NonNegativeReals, initialize=5.993867814123688)
model.p2 = pyo.Var(domain=pyo.NonNegativeReals, initialize=0.5254928953213035)
model.p3 = pyo.Var(domain=pyo.NonNegativeReals, initialize=50.507139006670045)
model.p4 = pyo.Var(domain=pyo.NonNegativeReals, initialize=50.349545087852945)
model.p5 = pyo.Var(domain=pyo.NonNegativeReals, initialize=0.03248392142362977)
model.p6 = pyo.Var(domain=pyo.NonNegativeReals, initialize=0.10106006227941483)

# time-varying inputs, here held at their default values
model.TU = pyo.Param(model.t, default=273.15)
model.TVL = pyo.Param(model.t, default=333.15)
model.mdot = pyo.Param(model.t, default=0.01)

# differential states
model.TR = pyo.Var(model.t)
model.TRL = pyo.Var(model.t)
model.TW = pyo.Var(model.t)

model.dTRdt = dae.DerivativeVar(model.TR, wrt=model.t)
model.dTRLdt = dae.DerivativeVar(model.TRL, wrt=model.t)
model.dTWdt = dae.DerivativeVar(model.TW, wrt=model.t)

model.t_meas = pyo.Set(within=model.t, initialize=time)      # measurement times
model.TR_meas = pyo.Param(model.t_meas, initialize=TR_dict)  # measured TR values
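Here `time` and `TR_dict` come from my measurement data and are not shown. Purely for illustration, a hypothetical construction of such data could look like this (names and values made up):

import numpy as np

# hypothetical: one TR measurement every 10 s over one hour
time = list(np.linspace(0.0, 3600.0, 361))
TR_values = 293.15 + 5.0 * np.tanh(np.array(time) / 600.0)  # made-up profile
TR_dict = dict(zip(time, TR_values.tolist()))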

The system consists of three ODEs.

def _diffeq1(model, t):
    return model.dTRdt[t] == model.p1 * (model.TRL[t] - model.TR[t]) - model.p2 * (model.TR[t] - model.TW[t])

def _diffeq2(model, t):
    return model.dTRLdt[t] == model.p3 * model.mdot[t] * (model.TVL[t] - model.TRL[t]) - model.p4 * (model.TRL[t] - model.TR[t])

def _diffeq3(model, t):
    return model.dTWdt[t] == model.p5 * (model.TR[t] - model.TW[t]) - model.p6 * (model.TW[t] - model.TU[t])

model.diffeq1 = pyo.Constraint(model.t, rule=_diffeq1)
model.diffeq2 = pyo.Constraint(model.t, rule=_diffeq2)
model.diffeq3 = pyo.Constraint(model.t, rule=_diffeq3)
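Written out, the three ODEs encoded by the constraints above are

$$
\begin{aligned}
\frac{dT_R}{dt} &= p_1\,(T_{RL} - T_R) - p_2\,(T_R - T_W) \\
\frac{dT_{RL}}{dt} &= p_3\,\dot{m}\,(T_{VL} - T_{RL}) - p_4\,(T_{RL} - T_R) \\
\frac{dT_W}{dt} &= p_5\,(T_R - T_W) - p_6\,(T_W - T_U)
\end{aligned}
$$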

This is the objective function.

def _obj(model):
    return sum((model.TR[i] - model.TR_meas[i]) ** 2 for i in model.t_meas)

model.obj = pyo.Objective(rule=_obj)  # least-squares fit; minimized by default

Following the documentation, the simulation works fine:

model.var_input = pyo.Suffix(direction=pyo.Suffix.LOCAL)
model.var_input[model.TU] = TU_dict
model.var_input[model.TVL] = TVL_dict
model.var_input[model.mdot] = mdot_dict

sim = dae.Simulator(model, package="casadi")
tsim, profiles = sim.simulate(numpoints=3600, integrator="cvodes", varying_inputs=model.var_input)
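The input dictionaries map time points to input values; as far as I understand, the Simulator holds each input constant between the given time points. A hypothetical example (made-up values):

TU_dict = {0: 273.15, 1800: 278.15}   # step change at t = 1800 s
TVL_dict = {0: 333.15}                # constant over the horizon
mdot_dict = {0: 0.01, 900: 0.02}      # step change at t = 900 s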

But I am struggling to use this for the optimization. Is there a recommended way to optimize with time-varying inputs?

Edit:

Here is the code I use for the optimization.

discretizer = pyo.TransformationFactory("dae.finite_difference")
discretizer.apply_to(model, wrt=model.t, nfe=200, scheme="BACKWARD")
solver = pyo.SolverFactory("ipopt")
results = solver.solve(model, tee=True)
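As a side note, based on my reading of the pyomo.dae Simulator API, the simulated profiles can also be used to initialize the discretized variables before calling the solver:

sim.initialize_model()  # copies the simulation results onto the discretized model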

I changed the code above from model.TU = pyo.Var(model.t) to model.TU = pyo.Param(model.t, default=273.15) (likewise for TVL and mdot). Otherwise, ipopt would try to find optimal trajectories for TU, TVL and mdot as well. With this new implementation, however, only the default values are used in the optimization. I added the figure below, showing TU, to illustrate my point.

[figure: plot of TU over model.t after optimization, flat at the default value 273.15 instead of a measured profile]
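For what it's worth, one pattern for getting the measured input profiles into the discretized model, rather than only the defaults, would be to declare the inputs as mutable Params and assign a value at every discretization point after applying the transformation. The following is only a sketch of that idea, assuming mutable Params and linear interpolation of the input dictionaries, not a verified fix:

import numpy as np

# assumes the inputs were declared mutable, e.g.
#   model.TU   = pyo.Param(model.t, default=273.15, mutable=True)
#   model.TVL  = pyo.Param(model.t, default=333.15, mutable=True)
#   model.mdot = pyo.Param(model.t, default=0.01,   mutable=True)

discretizer = pyo.TransformationFactory("dae.finite_difference")
discretizer.apply_to(model, wrt=model.t, nfe=200, scheme="BACKWARD")

def set_input(param, data):
    # interpolate a {time: value} dict onto every discretization point;
    # linear interpolation here -- a zero-order hold would match the
    # Simulator's piecewise-constant inputs more closely
    t_pts = sorted(data)
    vals = [data[k] for k in t_pts]
    for t in param.index_set():  # model.t now contains the finite-element points
        param[t] = float(np.interp(t, t_pts, vals))

set_input(model.TU, TU_dict)
set_input(model.TVL, TVL_dict)
set_input(model.mdot, mdot_dict)

solver = pyo.SolverFactory("ipopt")
results = solver.solve(model, tee=True)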

0 Answers
