I want to fit two Lorentzians to my experimental data. I split my equation into the two simple Lorentzian forms, lorentz1 and lorentz2, and then defined two more functions, L1 and L2, which just multiply each of them by a constant cnst. So there are four fit parameters: cnst1, cnst2, tau1 and tau2.
I use lmfit, both Model and minimize (presumably both use the same underlying method).
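For reference, the total model being fitted, written with the parameter names used in the code below, is:
L_total(x) = cnst1 * tau1 / (1 + (x*tau1)**2) + cnst2 * tau2**2 / (1 + (x*tau2)**2)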
The initial fit parameters are set so that they are already close to a good fit. Nevertheless the lmfit fit wanders off (first picture below) when using the following parameters:
params.add('cnst1', value=1e3 , min=1e2, max=1e5)
params.add('cnst2', value=3e5, min=1e2, max=1e6)
params.add('tau1', value=2e0, min=0, max=1e2)
params.add('tau2', value=5e-3, min=0, max=10)
Yet the reported error percentages are low:
cnst1: 117.459806 +/- 14.67188 (12.49%) (init= 1000)
cnst2: 413.959032 +/- 44.21042 (10.68%) (init= 300000)
tau1: 11.0343531 +/- 1.065570 (9.66%) (init= 2)
tau2: 1.55259664 +/- 0.125853 (8.11%) (init= 0.005)
On the other hand, if I keep the parameter bounds very tight around the initial values (forcing the fit to stay near the initial guess), using these parameters:
#params.add('cnst1', value=1e3 , min=0.1e3, max=1e3)
#params.add('cnst2', value=3e5, min=1e3, max=1e6)
#params.add('tau1', value=2e0, min=0, max=2)
#params.add('tau2', value=5e-3, min=0, max=10)
the fit is visually much better, but the error values are huge:
[[Variables]]
cnst1: 752.988629 +/- 221.3098 (29.39%) (init= 1000)
cnst2: 3.0159e+05 +/- 3.05e+07 (10113.40%) (init= 300000)
tau1: 1.99684317 +/- 0.600748 (30.08%) (init= 2)
tau2: 0.00497806 +/- 0.289651 (5818.56%) (init= 0.005)
Here is the full code:
import numpy as np
from lmfit import Model, minimize, Parameters, report_fit
import matplotlib.pyplot as plt
x = np.array([0.02988, 0.07057,0.19365,0.4137,0.91078,1.85075,3.44353,6.39428,\
11.99302,24.37024,52.58804,121.71927,221.53799,358.27392,464.70405])
y = 1.0 / np.array([4.60362E-4,5.63559E-4,8.44538E-4,0.00138,0.00287,0.00657,0.01506,\
0.03119,0.0584,0.09153,0.12538,0.19389,0.34391,0.68869,1.0])
def lorentz1(x, tau):
    L = tau / (1 + (x*tau)**2)
    return L

def lorentz2(x, tau):
    L = tau**2 / (1 + (x*tau)**2)
    return L
def L1(x, cnst1, tau1):
    L1 = cnst1 * lorentz1(x, tau1)
    return L1

def L2(x, cnst2, tau2):
    L2 = cnst2 * lorentz2(x, tau2)
    return L2
def L_min(params, x, y):
    # residual function for lmfit.minimize
    cnst1 = params['cnst1'].value
    cnst2 = params['cnst2'].value
    tau1 = params['tau1'].value
    tau2 = params['tau2'].value
    L_total = L1(x, cnst1, tau1) + L2(x, cnst2, tau2)
    resids = L_total - y
    return resids
#params = mod.make_params( cnst1=10e2, cnst2=3e5, tau1=2e0, tau2=0.5e-2)
params = Parameters()
#params.add('cnst1', value=1e3 , min=0.1e3, max=1e3)
#params.add('cnst2', value=3e5, min=1e3, max=1e6)
#params.add('tau1', value=2e0, min=0, max=2)
#params.add('tau2', value=5e-3, min=0, max=10)
params.add('cnst1', value=1e3 , min=1e2, max=1e5)
params.add('cnst2', value=3e5, min=1e2, max=1e6)
params.add('tau1', value=2e0, min=0, max=1e2)
params.add('tau2', value=5e-3, min=0, max=10)
#1-----Model--------------------
mod = Model(L1) + Model(L2)
result_mod = mod.fit(y, params, x=x)
print('---results from lmfit.Model----')
print(result_mod.fit_report())
#2---minimize-----------
result_min = minimize(L_min, params, args=(x, y))
final_min = y + result_min.residual
print('---results from lmfit.minimize----')
# fitted values live in result_min.params (the input params object is not
# updated in place in lmfit >= 0.9)
report_fit(result_min.params)
#-------Plot------
plt.close('all')
plt.loglog(x, y,'bo' , label='experimental data')
plt.loglog(x, result_mod.init_fit, 'k--', label='initial')
plt.loglog(x, result_mod.best_fit, 'r-', label='final')
plt.legend()
plt.show()
Answer (score: 3)
While searching for something else, Google turned up this question that I had asked a while ago. Now that I know the answer, I'm posting it here; I hope it helps someone. :)
I stick with the lmfit.minimize approach, so the change I made is to also plot the lmfit.minimize result. And to deal with the logarithmic y scale (which was also the main issue @mdurant pointed out), I simply divide the residuals by the y values. This normalizes the data so that all points contribute comparably to the residual; in effect the fit minimizes relative rather than absolute errors. I call these weighted residuals.
def L_min(params, x, y):
    ...
    resids = L_total - y
    weighted_resids = resids / y
    return weighted_resids
The result is shown as the blue line:
Full code:
import numpy as np
from lmfit import Model, minimize, Parameters, report_fit
import matplotlib.pyplot as plt
x = np.array([0.02988, 0.07057,0.19365,0.4137,0.91078,1.85075,3.44353,6.39428,\
11.99302,24.37024,52.58804,121.71927,221.53799,358.27392,464.70405])
y = 1.0 / np.array([4.60362E-4,5.63559E-4,8.44538E-4,0.00138,0.00287,0.00657,0.01506,\
0.03119,0.0584,0.09153,0.12538,0.19389,0.34391,0.68869,1.0])
def lorentz1(x, tau):
    L = tau / (1 + (x*tau)**2)
    return L

def lorentz2(x, tau):
    L = tau**2 / (1 + (x*tau)**2)
    return L
def L1(x, cnst1, tau1):
    L1 = cnst1 * lorentz1(x, tau1)
    return L1

def L2(x, cnst2, tau2):
    L2 = cnst2 * lorentz2(x, tau2)
    return L2
def L_min(params, x, y):
    # residual function for lmfit.minimize, weighted by 1/y
    cnst1 = params['cnst1'].value
    cnst2 = params['cnst2'].value
    tau1 = params['tau1'].value
    tau2 = params['tau2'].value
    L_total = L1(x, cnst1, tau1) + L2(x, cnst2, tau2)
    resids = L_total - y
    weighted_resids = resids / y
    return weighted_resids
    # return resids
#params = mod.make_params( cnst1=10e2, cnst2=3e5, tau1=2e0, tau2=0.5e-2)
params = Parameters()
#params.add('cnst1', value=1e3 , min=0.1e3, max=1e3)
#params.add('cnst2', value=3e5, min=1e3, max=1e6)
#params.add('tau1', value=2e0, min=0, max=2)
#params.add('tau2', value=5e-3, min=0, max=10)
params.add('cnst1', value=1e3 , min=1e2, max=1e5)
params.add('cnst2', value=3e5, min=1e2, max=1e6)
params.add('tau1', value=2e0, min=0, max=1e2)
params.add('tau2', value=5e-3, min=0, max=10)
#1-----Model--------------------
mod = Model(L1) + Model(L2)
result_mod = mod.fit(y, params, x=x)
print('---results from lmfit.Model----')
print(result_mod.fit_report())
#2---minimize-----------
result_min = minimize(L_min, params, args=(x, y))
# the residual returned by L_min is weighted by 1/y, so undo the weighting
# to recover the fitted curve (final_min is not used in the plot below)
final_min = y * (1.0 + result_min.residual)
print('---results from lmfit.minimize----')
# fitted values live in result_min.params (the input params object is not
# updated in place in lmfit >= 0.9)
report_fit(result_min.params)
#-------Plot------
plt.close('all')
plt.loglog(x, y, 'bo', label='experimental data')
plt.loglog(x, result_mod.init_fit, 'k--', label='initial')
plt.loglog(x, result_mod.best_fit, 'r-', label='lmfit.Model')
# rebuild the curve from the parameters found by lmfit.minimize
min_result = L1(x, result_min.params['cnst1'].value, result_min.params['tau1'].value) + \
             L2(x, result_min.params['cnst2'].value, result_min.params['tau2'].value)
plt.loglog(x, min_result, 'b-', label='lmfit.Minimize')
plt.legend()
plt.show()
And the fit errors are good:
cnst1: 832.592441 +/- 77.32939 (9.29%) (init= 1000)
cnst2: 2.0836e+05 +/- 3.55e+04 (17.04%) (init= 300000)
tau1: 1.64355457 +/- 0.221466 (13.47%) (init= 2)
tau2: 0.00700899 +/- 0.000935 (13.34%) (init= 0.005)
[[Correlations]] (unreported correlations are < 0.100)
C(tau1, tau2) = 0.151
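As a side note, lmfit's Model.fit also accepts a weights argument, so the same 1/y weighting can be applied without a hand-written residual function. A minimal sketch, reusing mod, params, x and y exactly as defined above:
# equivalent 1/y weighting with lmfit.Model: the weights array multiplies
# (data - model) before the sum of squares is formed
result_w = mod.fit(y, params, x=x, weights=1.0/y)
print(result_w.fit_report())
plt.loglog(x, result_w.best_fit, 'g-', label='Model, weights=1/y')
Since this minimizes the same weighted sum of squares, it should give essentially the same parameter values and uncertainties as the minimize version with weighted residuals.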