scipy curve_fit does not converge even though I repeatedly change the initial guess

Asked: 2019-05-16 08:58:06

Tags: python scipy curve-fitting

I have some data points obtained from an experiment.

These points should follow a theoretical function of the form:

f(x) = A * (1 - e^(-x/B))

I am trying to use the curve_fit function from scipy.optimize to find the parameters A and B that best fit this exponential.

I have to perform this fit on almost 100 different samples.

Moreover, from experience I know that A should lie roughly between 0.5 and 2.
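
As an illustration only, a known range like this could also be handed straight to curve_fit through its bounds argument; the sketch below assumes the acceptable range for A is 0.5 to 2 and that B merely has to be positive (x and y stand for one sample's data):

import numpy as np
from scipy.optimize import curve_fit

def exponential(x, a, b):
    return a * (1 - np.exp(-x / b))

# (lower, upper) limits for (A, B); the limits on B are assumptions for this sketch
param_bounds = ([0.5, 1e-6], [2.0, np.inf])
# best_vals, covar = curve_fit(exponential, x, y, p0=[1.0, 1.0], bounds=param_bounds)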

My problem is that curve_fit fails to converge to good values of A and B.

Here is the code I wrote. First I import the required packages and define the exponential function, then I define the fit function, in which I impose some constraints on the value of A. I do this because in some cases (roughly 10% of the time), without any further check, curve_fit returns completely unrealistic values of A, such as A = 10^5 or even larger. If the fitted value of A falls outside the expected range, curve_fit is called again with a different initial guess.

from scipy.optimize import curve_fit
import pandas as pd
import numpy as np

initial_guess = [8, 1]

def exponential(x, a, b):
    return a*(1 - np.exp(-(x)/b))

def fit(x, y, i):
    # first attempt with the initial guess passed in
    best_vals, covar = curve_fit(exponential, x, y, p0=i)
    variance = np.sqrt(np.diag(covar))
    # if A (best_vals[0]) falls outside the plausible range, retry with new guesses
    if best_vals[0] < 0.5 or best_vals[0] > 2:
        i2 = np.array([1.0, 0.8])
        while best_vals[0] < 0.5 or best_vals[0] > 2:
            i2 = i2 + [0.5, 0.1]
            best_vals, covar = curve_fit(exponential, x, y, p0=i2)
            print(best_vals)
            variance = np.sqrt(np.diag(covar))
    A = best_vals[0]
    B = best_vals[1]
    return variance, A, B

df = pd.read_csv('data.csv')
v, a, b = fit(df['x'], df['y'], initial_guess)

Unfortunately, with this code I sometimes fail to converge to a value of A between 0.5 and 2.0.

Can anyone suggest a different approach to this fit that takes my constraints into account? Perhaps there is a better way to write the fitting function, or a better way to choose the initial guess given the constraints I have.

Thanks to anyone who can help.

Andrea

1 Answer:

Answer 0 (score: 1):

Here is an example graphical fitter that uses scipy's differential_evolution genetic algorithm to determine initial parameter estimates for curve_fit(). The scipy implementation uses the Latin Hypercube algorithm to ensure a thorough search of parameter space, and that requires bounds within which to search. In this example I used your equation with an added offset so that it works with my test data, and I made the genetic algorithm's search bounds for A and B somewhat wider than the ranges you gave, as a margin of error on the search bounds.

[plot: test data points with the fitted curve overlaid]

import numpy, scipy, matplotlib
import matplotlib.pyplot as plt
from scipy.optimize import curve_fit
from scipy.optimize import differential_evolution
import warnings

xData = numpy.array([19.1647, 18.0189, 16.9550, 15.7683, 14.7044, 13.6269, 12.6040, 11.4309, 10.2987, 9.23465, 8.18440, 7.89789, 7.62498, 7.36571, 7.01106, 6.71094, 6.46548, 6.27436, 6.16543, 6.05569, 5.91904, 5.78247, 5.53661, 4.85425, 4.29468, 3.74888, 3.16206, 2.58882, 1.93371, 1.52426, 1.14211, 0.719035, 0.377708, 0.0226971, -0.223181, -0.537231, -0.878491, -1.27484, -1.45266, -1.57583, -1.61717])
yData = numpy.array([0.644557, 0.641059, 0.637555, 0.634059, 0.634135, 0.631825, 0.631899, 0.627209, 0.622516, 0.617818, 0.616103, 0.613736, 0.610175, 0.606613, 0.605445, 0.603676, 0.604887, 0.600127, 0.604909, 0.588207, 0.581056, 0.576292, 0.566761, 0.555472, 0.545367, 0.538842, 0.529336, 0.518635, 0.506747, 0.499018, 0.491885, 0.484754, 0.475230, 0.464514, 0.454387, 0.444861, 0.437128, 0.415076, 0.401363, 0.390034, 0.378698])


# exponential equation + offset
def func(x, a, b, offset):
    return a*(1.0 - numpy.exp(-(x)/b)) + offset


# function for genetic algorithm to minimize (sum of squared error)
def sumOfSquaredError(parameterTuple):
    warnings.filterwarnings("ignore") # do not print warnings by genetic algorithm
    val = func(xData, *parameterTuple)
    return numpy.sum((yData - val) ** 2.0)


def generate_Initial_Parameters():
    minY = min(yData)
    maxY = max(yData)

    parameterBounds = []
    parameterBounds.append([0.0, 5.0]) # search bounds for a
    parameterBounds.append([5.0, 15.0]) # search bounds for b
    parameterBounds.append([minY, maxY]) # search bounds for offset

    # "seed" the numpy random number generator for repeatable results
    result = differential_evolution(sumOfSquaredError, parameterBounds, seed=3)
    return result.x

# by default, differential_evolution polishes its best result with scipy.optimize.minimize (L-BFGS-B), respecting the parameter bounds
geneticParameters = generate_Initial_Parameters()

# now call curve_fit without passing bounds from the genetic algorithm,
# just in case the best fit parameters are outside those bounds
fittedParameters, pcov = curve_fit(func, xData, yData, geneticParameters)
print('Fitted parameters:', fittedParameters)
print()

modelPredictions = func(xData, *fittedParameters) 

absError = modelPredictions - yData

SE = numpy.square(absError) # squared errors
MSE = numpy.mean(SE) # mean squared errors
RMSE = numpy.sqrt(MSE) # Root Mean Squared Error, RMSE
Rsquared = 1.0 - (numpy.var(absError) / numpy.var(yData))

print()
print('RMSE:', RMSE)
print('R-squared:', Rsquared)

print()


##########################################################
# graphics output section
def ModelAndScatterPlot(graphWidth, graphHeight):
    f = plt.figure(figsize=(graphWidth/100.0, graphHeight/100.0), dpi=100)
    axes = f.add_subplot(111)

    # first the raw data as a scatter plot
    axes.plot(xData, yData,  'D')

    # create data for the fitted equation plot
    xModel = numpy.linspace(min(xData), max(xData))
    yModel = func(xModel, *fittedParameters)

    # now the model as a line plot
    axes.plot(xModel, yModel)

    axes.set_xlabel('X Data') # X axis data label
    axes.set_ylabel('Y Data') # Y axis data label

    plt.show()
    plt.close('all') # clean up after using pyplot

graphWidth = 800
graphHeight = 600
ModelAndScatterPlot(graphWidth, graphHeight)
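
Since the question mentions fitting nearly 100 samples, one possible way to reuse this two-stage approach per sample is sketched below. fit_one_sample is a hypothetical helper, not part of the code above; it relies on func, differential_evolution, curve_fit and numpy exactly as imported and defined above, and simply turns the sum-of-squared-error objective into a closure over each sample's data.

def fit_one_sample(x, y):
    # per-sample sum-of-squared-error objective for the genetic algorithm
    def sse(params):
        return numpy.sum((y - func(x, *params)) ** 2.0)

    bounds = [(0.0, 5.0), (5.0, 15.0), (min(y), max(y))]  # search bounds for a, b, offset
    seeded = differential_evolution(sse, bounds, seed=3)   # genetic seeding
    return curve_fit(func, x, y, p0=seeded.x)              # refined fit

# example usage with the test data above:
# fittedParameters, pcov = fit_one_sample(xData, yData)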