Optimal parameters not found: Number of calls to function has reached maxfev = 100

Date: 2019-10-25 21:29:16

Tags: curve-fitting

I am new to Python and I am trying to fit some data, but when I produce the plot only the original data is shown, along with the message "Optimal parameters not found: Number of calls to function has reached maxfev = 1000." Can you help me find my mistake?

%matplotlib inline
import matplotlib.pylab as m
from scipy.optimize import curve_fit
import numpy as num
import scipy.optimize as optimize


xData=num.array([0,0,100,200,250,300,400], dtype="float")
yData=num.array([0,0,0,0,75,100,100], dtype="float")

m.plot(xData, yData, 'ro', label='Datos originales')

def fun(x, a, b):
  return a + b * num.log(x)

popt,pcov=optimize.curve_fit(fun, xData, yData,p0=[1,1], maxfev=1000)
print(popt)

x=num.linspace(1,400,7)

m.plot(x,fun(x, *popt), label='Función ajustada')

m.xlabel('concentración')
m.ylabel('% mortalidad')
m.legend()
m.grid()

1 Answer:

Answer 0 (score: 0)

The model in your code is "a + b * num.log(x)". Because your data contains x values of 0.0, evaluating log(0.0) raises an error and the fitting routine cannot run. Sometimes a 0.0 value of x can be replaced with a very small number, since log(small number) does not fail - but in this case the equation and the data do not appear to match, so that technique alone is not enough to solve the problem.
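As a quick illustration of that point (a minimal sketch, not part of the original answer): replacing the zero x values with a small positive epsilon lets the logarithm be evaluated, so curve_fit can run, but a logarithmic curve still follows this data poorly.

import numpy as num
from scipy.optimize import curve_fit

xData = num.array([0, 0, 100, 200, 250, 300, 400], dtype="float")
yData = num.array([0, 0, 0, 0, 75, 100, 100], dtype="float")

# hypothetical workaround: substitute a tiny positive value for x = 0 so log() is defined
xSafe = num.where(xData == 0.0, 1e-6, xData)

def fun(x, a, b):
    return a + b * num.log(x)

# the fit now runs without the maxfev error, but the logarithmic model remains a poor fit
popt, pcov = curve_fit(fun, xSafe, yData, p0=[1, 1], maxfev=1000)
print(popt)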

My thought is that a different equation would be a better model for this data. I ran an equation search using your data and found that several different sigmoid equations fit this data set suspiciously well - not surprising given the small number of data points.

The sigmoid equations I tried were all very sensitive to the initial parameter estimates. Here is a graphical Python fitter that uses scipy's differential_evolution genetic algorithm module to determine the initial parameter estimates for curve_fit's non-linear solver. That scipy module uses the Latin Hypercube algorithm to ensure a thorough search of parameter space, which requires bounds within which to search; here the bounds are taken from the data maximum and minimum values.

Personally, I would not use exactly this fit, because the small number of data points yields such a suspicious fit, so I strongly recommend adding more data points if at all possible. However, I could not find an equation with fewer than three parameters that fits the data.

(plot: original data with the fitted sigmoid model)

import numpy, scipy, matplotlib
import matplotlib.pyplot as plt
from scipy.optimize import curve_fit
from scipy.optimize import differential_evolution
import warnings

xData=numpy.array([0,0,100,200,250,300,400], dtype="float")
yData=numpy.array([0,0,0,0,75,100,100], dtype="float")


def func(x, a, b, c): # Sigmoid B equation from zunzun.com
    return  a / (1.0 + numpy.exp(-1.0 * (x - b) / c))


# function for genetic algorithm to minimize (sum of squared error)
def sumOfSquaredError(parameterTuple):
    warnings.filterwarnings("ignore") # do not print warnings by genetic algorithm
    val = func(xData, *parameterTuple)
    return numpy.sum((yData - val) ** 2.0)


def generate_Initial_Parameters():
    # min and max used for bounds
    maxX = max(xData)
    minX = min(xData)

    parameterBounds = []
    parameterBounds.append([minX, maxX]) # search bounds for a
    parameterBounds.append([minX, maxX]) # search bounds for b
    parameterBounds.append([0.0, 2.0]) # search bounds for c

    # "seed" the numpy random number generator for repeatable results
    result = differential_evolution(sumOfSquaredError, parameterBounds, seed=3)
    return result.x

# by default, differential_evolution finishes with a local "polish" step (L-BFGS-B) within the bounds
geneticParameters = generate_Initial_Parameters()

# now call curve_fit without passing bounds from the genetic algorithm,
# just in case the best fit parameters are outside those bounds
fittedParameters, pcov = curve_fit(func, xData, yData, geneticParameters)
print('Fitted parameters:', fittedParameters)
print()

modelPredictions = func(xData, *fittedParameters) 

absError = modelPredictions - yData

SE = numpy.square(absError) # squared errors
MSE = numpy.mean(SE) # mean squared errors
RMSE = numpy.sqrt(MSE) # Root Mean Squared Error, RMSE
Rsquared = 1.0 - (numpy.var(absError) / numpy.var(yData))

print()
print('RMSE:', RMSE)
print('R-squared:', Rsquared)

print()


##########################################################
# graphics output section
def ModelAndScatterPlot(graphWidth, graphHeight):
    f = plt.figure(figsize=(graphWidth/100.0, graphHeight/100.0), dpi=100)
    axes = f.add_subplot(111)

    # first the raw data as a scatter plot
    axes.plot(xData, yData,  'D')

    # create data for the fitted equation plot
    xModel = numpy.linspace(min(xData), max(xData), 100)
    yModel = func(xModel, *fittedParameters)

    # now the model as a line plot
    axes.plot(xModel, yModel)

    axes.set_xlabel('X Data') # X axis data label
    axes.set_ylabel('Y Data') # Y axis data label

    plt.show()
    plt.close('all') # clean up after using pyplot

graphWidth = 800
graphHeight = 600
ModelAndScatterPlot(graphWidth, graphHeight)
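As a brief usage note (not in the original answer, and assuming the code above has been run): once fittedParameters is available, the fitted sigmoid can be evaluated at any concentration to estimate the expected % mortality.

# hypothetical example concentration; evaluates the fitted Sigmoid B model
newX = 225.0
predictedY = func(newX, *fittedParameters)
print('Predicted % mortality at concentration', newX, ':', predictedY)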