Optimizing piecewise linear regression

Date: 2019-05-28 11:35:44

Tags: scipy linear-regression mathematical-optimization

I have written a function that, given the parameters, applies a piecewise linear fit with an arbitrary number of segments to some data.

I am trying to fit the function to my data with scipy.optimize.curve_fit, but I get an "OptimizeWarning: Covariance of the parameters could not be estimated" warning. I believe this may be caused by the nested lambda functions I use to define the piecewise segments.

Is there a simple way to adjust my code to get around this, or is there a different scipy optimization function that would be better suited?

import numpy as np
import scipy.optimize

#The piecewise function
def piecewise_linear(x, *params):

    #params packs the offset c, the N-1 bin boundaries and the N
    #gradients, so its length must be even
    N = len(params) / 2
    if N.is_integer():
        N = int(N)
    else:
        raise ValueError("piecewise_linear requires an even number of parameters")

    c = params[0]
    xbounds = params[1:N]
    grads = params[N:]


    #First we define our conditions, which are true if x is a member of a given
    #bin.
    conditions=[]
    #first and last bins are a special case:
    cond0=lambda x: x<xbounds[0]
    condl=lambda x: x>=xbounds[-1]
    conditions.append(cond0(x))

    for i in range(len(xbounds)-1):
        cond=lambda x : (x >= xbounds[i]) & (x < xbounds[i+1])
        conditions.append(cond(x))

    conditions.append(condl(x))

    #Next we define our linear regression function for each bin. The offset
    #for each bin depends on where the previous bin ends, so we define
    #the regression functions recursively:

    functions=[]
    func0 = lambda x: grads[0]*x +c

    functions.append(func0)

    for i in range(len(grads)-1):
        func = (lambda j: lambda x: grads[j+1]*(x-xbounds[j])\
               +functions[j](xbounds[j]))(i)

        functions.append(func)

    return np.piecewise(x, conditions, functions)

#Some data

x = np.arange(100)
y = np.array([*np.arange(0, 19, 1), *np.arange(20, 59, 2),
              *np.arange(60, 20, -1), *np.arange(21, 42, 1)]) + np.random.randn(100)

#A first guess of parameters
cguess=0
boundguess=[20,30,50]
gradguess=[1,1,1,1]
p0=[cguess,*boundguess,*gradguess]

fit=scipy.optimize.curve_fit(piecewise_linear,x,y,p0=p0)
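
A likely cause, though this is my assumption rather than something established above: np.piecewise allocates its output with the dtype of x, and x = np.arange(100) is an integer array, so the model output is truncated to whole numbers. Small parameter perturbations then disappear in the truncation, the finite-difference Jacobian that curve_fit builds degenerates, and the covariance cannot be estimated. A minimal sketch of the workaround:

#Sketch (assumption): casting x to float lets np.piecewise return float
#output, so curve_fit can estimate meaningful gradients
fit = scipy.optimize.curve_fit(piecewise_linear, x.astype(float), y, p0=p0)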

1 Answer:

Answer 0 (score: 0)

Here is example code that fits two straight lines, joined at a fitted breakpoint, to a curved data set; all of the line parameters and the breakpoint itself are fitted. This example uses scipy's differential_evolution genetic algorithm to determine initial parameter estimates for the regression. That module uses the Latin Hypercube algorithm to ensure a thorough search of parameter space, and it requires bounds within which to search; in this example those search bounds are derived from the data itself. Note that it is much easier to find ranges for the initial parameter estimates than to supply specific values.

[plot: data scatter with the fitted two-segment model]

import numpy, scipy, matplotlib
import matplotlib.pyplot as plt
from scipy.optimize import curve_fit
from scipy.optimize import differential_evolution
import warnings

xData = numpy.array([19.1647, 18.0189, 16.9550, 15.7683, 14.7044, 13.6269, 12.6040, 11.4309, 10.2987, 9.23465, 8.18440, 7.89789, 7.62498, 7.36571, 7.01106, 6.71094, 6.46548, 6.27436, 6.16543, 6.05569, 5.91904, 5.78247, 5.53661, 4.85425, 4.29468, 3.74888, 3.16206, 2.58882, 1.93371, 1.52426, 1.14211, 0.719035, 0.377708, 0.0226971, -0.223181, -0.537231, -0.878491, -1.27484, -1.45266, -1.57583, -1.61717])
yData = numpy.array([0.644557, 0.641059, 0.637555, 0.634059, 0.634135, 0.631825, 0.631899, 0.627209, 0.622516, 0.617818, 0.616103, 0.613736, 0.610175, 0.606613, 0.605445, 0.603676, 0.604887, 0.600127, 0.604909, 0.588207, 0.581056, 0.576292, 0.566761, 0.555472, 0.545367, 0.538842, 0.529336, 0.518635, 0.506747, 0.499018, 0.491885, 0.484754, 0.475230, 0.464514, 0.454387, 0.444861, 0.437128, 0.415076, 0.401363, 0.390034, 0.378698])


def func(xArray, breakpoint, slopeA, offsetA, slopeB, offsetB):
    # line A to the left of the breakpoint, line B to the right
    return numpy.where(xArray < breakpoint,
                       slopeA * xArray + offsetA,
                       slopeB * xArray + offsetB)


# function for genetic algorithm to minimize (sum of squared error)
def sumOfSquaredError(parameterTuple):
    warnings.filterwarnings("ignore") # do not print warnings by genetic algorithm
    val = func(xData, *parameterTuple)
    return numpy.sum((yData - val) ** 2.0)


def generate_Initial_Parameters():
    # min and max used for bounds
    maxX = max(xData)
    minX = min(xData)
    maxY = max(yData)
    minY = min(yData)
    slope = 10.0 * (maxY - minY) / (maxX - minX) # times 10 for safety margin

    parameterBounds = []
    parameterBounds.append([minX, maxX]) # search bounds for breakpoint
    parameterBounds.append([-slope, slope]) # search bounds for slopeA
    parameterBounds.append([minY, maxY]) # search bounds for offsetA
    parameterBounds.append([-slope, slope]) # search bounds for slopeB
    parameterBounds.append([minY, maxY]) # search bounds for offsetB


    result = differential_evolution(sumOfSquaredError, parameterBounds, seed=3)
    return result.x

# by default, differential_evolution polishes its best candidate with
# scipy's L-BFGS-B minimizer, respecting the parameter bounds
geneticParameters = generate_Initial_Parameters()

# call curve_fit, using the genetic algorithm's estimates as the starting point (no bounds passed)
fittedParameters, pcov = curve_fit(func, xData, yData, geneticParameters)
print('Parameters:', fittedParameters)
print()

modelPredictions = func(xData, *fittedParameters) 

absError = modelPredictions - yData

SE = numpy.square(absError) # squared errors
MSE = numpy.mean(SE) # mean squared errors
RMSE = numpy.sqrt(MSE) # Root Mean Squared Error, RMSE
Rsquared = 1.0 - (numpy.var(absError) / numpy.var(yData))

print()
print('RMSE:', RMSE)
print('R-squared:', Rsquared)

print()


##########################################################
# graphics output section
def ModelAndScatterPlot(graphWidth, graphHeight):
    f = plt.figure(figsize=(graphWidth/100.0, graphHeight/100.0), dpi=100)
    axes = f.add_subplot(111)

    # first the raw data as a scatter plot
    axes.plot(xData, yData,  'D')

    # create data for the fitted equation plot
    xModel = numpy.linspace(min(xData), max(xData))
    yModel = func(xModel, *fittedParameters)

    # now the model as a line plot
    axes.plot(xModel, yModel)

    axes.set_xlabel('X Data') # X axis data label
    axes.set_ylabel('Y Data') # Y axis data label

    plt.show()
    plt.close('all') # clean up after using pyplot

graphWidth = 800
graphHeight = 600
ModelAndScatterPlot(graphWidth, graphHeight)
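
The same seeding idea should carry over to the multi-segment piecewise_linear from the question. Here is a minimal, untested sketch under a few assumptions: piecewise_linear, x and y from the question are in scope alongside this answer's imports, x is cast to float for the dtype reason noted under the question, and the search bounds (offset in [-10, 10], breakpoints in [0, 100], gradients in [-5, 5]) are illustrative guesses for that data, not values from the thread.

#Sketch: seed the question's fit with differential_evolution instead of
#a hand-picked p0 (the bounds below are illustrative assumptions)
def piecewise_sse(params):
    #sum of squared error for the multi-segment model; an unsorted set of
    #breakpoints simply scores badly, which steers the search away from it
    return numpy.sum((y - piecewise_linear(x.astype(float), *params)) ** 2.0)

bounds = [(-10, 10)] + [(0, 100)] * 3 + [(-5, 5)] * 4  # c, 3 breakpoints, 4 gradients
seedParams = differential_evolution(piecewise_sse, bounds, seed=3).x
fittedParams, pcov = curve_fit(piecewise_linear, x.astype(float), y, p0=seedParams)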