I have always used Matlab, but my goal is to eventually switch to doing all of my analysis in Python, since it is an actual programming language, among other reasons.
The problem I have been struggling with recently is least-squares minimization of complex data. I am an engineer, and we deal with complex impedances frequently; I am trying to fit a simple circuit model to measured data using curve fitting.
The impedance equation is:
Z(w) = 1/(1/R + j*w*C) + j*w*L
I am then trying to find the values of R, C, and L that give the least-squares best fit.
I have tried optimization packages such as optimize.curve_fit and optimize.leastsq, but they do not work with complex numbers.
I then tried making the residual function return the magnitude of the complex data, but that did not work either.
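Roughly, what I tried looked like the sketch below (the names and details here are illustrative, not my exact code):

import numpy as np
from scipy import optimize

def model(w, R, C, L):
    # parallel RC branch in series with an inductor
    return 1.0/(1.0/R + 1j*w*C) + 1j*w*L

def residuals_mag(params, w, Z):
    # |model - Z| is a real 1D array, but the phase information is lost
    R, C, L = params
    return np.abs(model(w, R, C, L) - Z)

# p_guess = (1, 1, 1)
# params, ier = optimize.leastsq(residuals_mag, p_guess, args=(w, Z))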
Answer 0 (score: 5)
Following up on unutbu's answer: there is no need to reduce the available information by taking the squared magnitude of the residuals, because leastsq does not care what the numbers represent, only that they are expressed as a one-dimensional array of real values. Interleaving the real and imaginary parts keeps the functional relationship intact.
Here is the replacement residual function:
def residuals(params, w, Z):
    R, C, L = params
    diff = model(w, R, C, L) - Z
    # interleave real and imaginary parts into one real-valued 1D array
    z1d = np.zeros(Z.size*2, dtype=np.float64)
    z1d[0:z1d.size:2] = diff.real
    z1d[1:z1d.size:2] = diff.imag
    return z1d
This is the only change that needs to be made. For seed 2013, the answer is [2.96564781, 1.99929516, 4.00106534], and the errors relative to unutbu's answer are reduced by roughly an order of magnitude.
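A minimal sketch of how this residual function plugs into leastsq, assuming w, Z, and model are defined as in the question (variable names here are illustrative):

import scipy.optimize as optimize

p_guess = (1.0, 1.0, 1.0)  # initial guesses for R, C, L
params, ier = optimize.leastsq(residuals, p_guess, args=(w, Z))
R_fit, C_fit, L_fit = params  # ier in {1, 2, 3, 4} indicates a solution was found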
Answer 1 (score: 4)
Ultimately, the goal is to minimize the absolute value of the sum of the squared differences between the model and the observed Z:
abs(((model(w, *params) - Z)**2).sum())
My original answer suggested applying leastsq to a residuals function that returns, for each data point, a real value representing the sum of the squares of the real and imaginary parts of the difference:
def residuals(params, w, Z):
    R, C, L = params
    diff = model(w, R, C, L) - Z
    # one real value per data point: the squared magnitude of the complex difference
    return diff.real**2 + diff.imag**2
Mike Sulzer suggested using a residual function that returns a vector of floats instead.
Here is a comparison of the results obtained with these residual functions:
from __future__ import print_function
import numpy as np
import scipy.optimize as optimize

j = 1j

def model1(w, R, C, L):
    Z = 1.0/(1.0/R + j*w*C) + j*w*L
    return Z

def model2(w, R, C, L):
    Z = 1.0/(1.0/R + j*w*C) + j*w*L
    # make Z non-contiguous and of a different complex dtype
    Z = np.repeat(Z, 2)
    Z = Z[::2]
    Z = Z.astype(np.complex64)
    return Z

def make_data(R, C, L):
    N = 10000
    w = np.linspace(0.1, 2, N)
    Z = model(w, R, C, L) + 0.1*(np.random.random(N) + j*np.random.random(N))
    return w, Z

def residuals(params, w, Z):
    # one real value per data point: the squared magnitude of the complex difference
    R, C, L = params
    diff = model(w, R, C, L) - Z
    return diff.real**2 + diff.imag**2

def MS_residuals(params, w, Z):
    """
    https://stackoverflow.com/a/20104454/190597 (Mike Sulzer)
    """
    # interleave real and imaginary parts into a real-valued 1D array
    R, C, L = params
    diff = model(w, R, C, L) - Z
    z1d = np.zeros(Z.size*2, dtype=np.float64)
    z1d[0:z1d.size:2] = diff.real
    z1d[1:z1d.size:2] = diff.imag
    return z1d

def alt_residuals(params, w, Z):
    # same interleaving via a dtype view; astype guards against complex64 input
    R, C, L = params
    diff = model(w, R, C, L) - Z
    return diff.astype(np.complex128).view(np.float64)

def compare(*funcs):
    fmt = '{:15} | {:37} | {:17} | {:6}'
    header = fmt.format('name', 'params', 'sum(residuals**2)', 'ncalls')
    print('{}\n{}'.format(header, '-'*len(header)))
    # {!s:37} converts the ndarray to str before padding (needed on Python 3)
    fmt = '{:15} | {!s:37} | {:17.2f} | {:6}'
    for resfunc in funcs:
        # params, cov = optimize.leastsq(resfunc, p_guess, args=(w, Z))
        params, cov, infodict, mesg, ier = optimize.leastsq(
            resfunc, p_guess, args=(w, Z),
            full_output=True)
        ssr = abs(((model(w, *params) - Z)**2).sum())
        print(fmt.format(resfunc.__name__, params, ssr, infodict['nfev']))
    print(end='\n')

R, C, L = 3, 2, 4
p_guess = 1, 1, 1
seed = 2013

model = model1
np.random.seed(seed)
w, Z = make_data(R, C, L)
assert np.allclose(model1(w, R, C, L), model2(w, R, C, L))
print('Using model1')
compare(residuals, MS_residuals, alt_residuals)

model = model2
print('Using model2')
compare(residuals, MS_residuals, alt_residuals)
yields
Using model1
name | params | sum(residuals**2) | ncalls
------------------------------------------------------------------------------------
residuals | [ 2.86950167 1.94245378 4.04362841] | 9.41 | 89
MS_residuals | [ 2.85311972 1.94525477 4.04363883] | 9.26 | 29
alt_residuals | [ 2.85311972 1.94525477 4.04363883] | 9.26 | 29
Using model2
name | params | sum(residuals**2) | ncalls
------------------------------------------------------------------------------------
residuals | [ 2.86590332 1.9326829 4.0450271 ] | 7.81 | 483
MS_residuals | [ 2.85422448 1.94853383 4.04333851] | 9.78 | 754
alt_residuals | [ 2.85422448 1.94853383 4.04333851] | 9.78 | 754
So the residual function that works best may depend on the model function. I am at a loss to explain the difference in results, given the similarity of model1 and model2.
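As an aside on the mechanics of alt_residuals above: viewing a complex128 array as float64 reinterprets each complex value as an adjacent (real, imag) pair, so it yields the same interleaved vector as MS_residuals without an explicit copy loop. The astype(np.complex128) call makes the array contiguous and guards against inputs such as model2's complex64 output, where a float64 view would scramble the data. A quick sketch:

import numpy as np

diff = np.array([1+2j, 3+4j])   # complex128 by default
print(diff.view(np.float64))    # [ 1.  2.  3.  4.] -- real/imag parts interleaved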