"scipy.optimize.minimize" ignores the constraint (derivative should be positive everywhere)

Asked: 2019-03-03 11:43:55

Tags: python optimization scipy constraints

I have a function z(T,x,p). I want to fit this function to the given data points and obtain its coefficients. My constraint is that the derivative of z with respect to x must be positive everywhere, dz/dx > 0. But in the code below the constraint has no effect, and I don't know why.
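For context (not part of the original post): SciPy's `'ineq'` constraint type means `fun(x) >= 0` must hold at the solution, and COBYLA supports only inequality constraints. A minimal toy sketch of the dict form used below:

```python
import numpy as np
from scipy.optimize import minimize

# Toy problem: minimize (v - 3)^2 subject to v <= 1,
# written in SciPy's convention as fun(v) = 1 - v >= 0.
con = dict(type='ineq', fun=lambda v: 1.0 - v[0])
res = minimize(lambda v: (v[0] - 3.0) ** 2, np.array([0.0]),
               method='COBYLA', constraints=con)
print(res.x)  # the constrained optimum lies at v = 1
```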

import numpy as np
from scipy.optimize import minimize
import matplotlib.pyplot as plt

T = np.array([262,257,253,261,260,243,300,283,282], dtype=float)
p = np.array([25,22,19,24,24,14,62,45,44], dtype=float)
x = np.array([0.1,0.1,0.2,0.2,0.3,0.3,1,0.3,0.2], dtype=float)
z = np.array([10,9,13,16,20,12,62,37,28], dtype=float)

def func(pars, T, x, p):    #my actual function
    a,b,c,d,e,f = pars
    return x * p + x * (1 - x) * (a + b * T + c * T ** 2 + d * x + e * x * T + f * x * T ** 2) * p

def resid(pars):   #residual function
    return ((func(pars, T, x, p) - z) ** 2).sum()

def der(pars): # constraint function: derivative of func() with respect to x, required to be positive everywhere
    a,b,c,d,e,f = pars
    return p + p*(2*x*a + 2*x*b*T + 2*x*c*T**2 + 3*x**2*d + 3*x**2*e*T + 3*x**2*f*T**2) + p*(a + b*T + c*T**2 + 2*x*d + 2*e*x*T + 2*f*x*T**2)

con1 = dict(type='ineq', fun=der)
pars0 = np.array([0, 0, 0, 0, 0, 0], dtype=float)
res = minimize(resid, pars0, method='COBYLA', options={'maxiter': 5000000}, constraints=con1)
print("a = %f , b = %f, c = %f, d = %f, e = %f, f = %f" % (res.x[0], res.x[1], res.x[2], res.x[3], res.x[4], res.x[5]))
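One way to sanity-check a hand-derived constraint like `der` is to compare it against a central finite difference of the model at a few sample points. A minimal scalar sketch of that technique (pure Python, model formula rewritten scalar-wise; not part of the original post):

```python
def z_scalar(pars, T, x, p):
    # Scalar version of the model in func()
    a, b, c, d, e, f = pars
    return x * p + x * (1 - x) * (a + b * T + c * T ** 2
                                  + d * x + e * x * T + f * x * T ** 2) * p

def dz_dx_numeric(pars, T, x, p, h=1e-6):
    # Central finite difference with respect to x
    return (z_scalar(pars, T, x + h, p) - z_scalar(pars, T, x - h, p)) / (2 * h)

# With all coefficients zero the model reduces to z = x * p, so dz/dx = p.
print(dz_dx_numeric((0, 0, 0, 0, 0, 0), 300.0, 0.5, 62.0))
```

Evaluating this next to `der(pars)` at the same (T, x, p) points shows whether the analytic expression matches the model before the constraint is handed to the optimizer.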

Attempt to plot an example:

x0 = np.linspace(0, 1, 100) # plot two example graphs z(x) for a certain T and p
fig, ax = plt.subplots()
fig.dpi = 80
ax.plot(x,z,'ro', label='data')
ax.plot(x0, func(res.x, 300, x0, 62), '-', label='fit T=300, p=62')
ax.plot(x0, func(res.x, 283, x0, 45), '-', label='fit T=283, p=45')
plt.xlabel('x')
plt.ylabel('z')
plt.legend()
plt.show()

(Example plot: the data points and the two fitted curves z(x) for T=300, p=62 and T=283, p=45.)

As you can see, the derivative (slope) of the fitted curves is not positive everywhere. I don't understand why the constraint is being ignored. Maybe someone can help me.
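Worth noting about the setup above (based on SciPy's documented behavior): a constraint `fun` may return an array, in which case the optimizer treats every component as a separate `>= 0` condition, which is what the vector-valued `der` relies on. A toy sketch:

```python
import numpy as np
from scipy.optimize import minimize

# Two inequality components at once: v >= 0 and v <= 1,
# i.e. fun(v) = [v, 1 - v] with every entry required to be >= 0.
con = dict(type='ineq', fun=lambda v: np.array([v[0], 1.0 - v[0]]))
res = minimize(lambda v: (v[0] - 5.0) ** 2, np.array([0.5]),
               method='COBYLA', constraints=con)
print(res.x)  # stays in the feasible interval, near v = 1
```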

0 Answers