Constrained linear regression: how to do it with scikit-learn?

Date: 2020-10-13 14:10:06

Tags: python machine-learning scikit-learn linear-regression

I am trying to do linear regression with some constraints in order to get a particular kind of prediction. I would like the model to predict linearly over the first half of the prediction range and then, for the second half, to stay within a very narrow band (enforced by constraints) around the last value of the first half, similar to the green line in the figure below.

Full code:

import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split
pd.options.mode.chained_assignment = None  # default='warn'
data = [5.269, 5.346, 5.375, 5.482, 5.519, 5.57, 5.593999999999999, 5.627000000000001, 5.724, 5.818, 5.792999999999999, 5.817, 5.8389999999999995, 5.882000000000001, 5.92, 6.025, 6.064, 6.111000000000001, 6.1160000000000005, 6.138, 6.247000000000001, 6.279, 6.332000000000001, 6.3389999999999995, 6.3420000000000005, 6.412999999999999, 6.442, 6.519, 6.596, 6.603, 6.627999999999999, 6.76, 6.837000000000001, 6.781000000000001, 6.8260000000000005, 6.849, 6.875, 6.982, 7.018, 7.042000000000001, 7.068, 7.091, 7.204, 7.228, 7.261, 7.3420000000000005, 7.414, 7.44, 7.516, 7.542000000000001, 7.627000000000001, 7.667000000000001, 7.821000000000001, 7.792999999999999, 7.756, 7.871, 8.006, 8.078, 7.916, 7.974, 8.074, 8.119, 8.228, 7.976, 8.045, 8.312999999999999, 8.335, 8.388, 8.437999999999999, 8.456, 8.227, 8.266, 8.277999999999999, 8.289, 8.299, 8.318, 8.332, 8.34, 8.349, 8.36, 8.363999999999999, 8.368, 8.282, 8.283999999999999]
time = range(1, 85)
df = pd.DataFrame(list(zip(time, data)), columns=['time', 'data'])
x = int(0.7 * len(df))  # index of the 70/30 train/validation split
train = df[:x]
valid = df[x:]
models = []
tr_x_ax = []  # training series values
va_x_ax = []  # validation series values
pr_x_ax = []  # predictions over the validation range
models.append(('LR', LinearRegression()))

for name, model in models:
    x_train=df.iloc[:, 0][:x].values
    y_train=df.iloc[:, 1][:x].values
    x_valid=df.iloc[:, 0][x:].values
    y_valid=df.iloc[:, 1][x:].values

    x_train = x_train.reshape(-1, 1)
    y_train = y_train.reshape(-1, 1)
    x_valid = x_valid.reshape(-1, 1)
    y_valid = y_valid.reshape(-1, 1)
    model.fit(x_train, y_train.ravel())
    preds = model.predict(x_valid)
    tr_x_ax.extend(train['data'])
    va_x_ax.extend(valid['data'])
    pr_x_ax.extend(preds)

    valid['Predictions'] = preds
    valid.index = df[x:].index
    train.index = df[:x].index
    plt.figure(figsize=(5, 5))
    plt.plot(valid['Predictions'], label='Predictions before', color='orange')



y = range(0, 58)    # x positions of the training series
y1 = range(58, 84)  # x positions of the validation/prediction series
# manual "constraint": from index 13 of the predictions (time 71) onward,
# hold the prediction flat at the value it reached at index 13
for index, item in enumerate(pr_x_ax):
    if index > 13:
        pr_x_ax[index] = pr_x_ax[13]
pr_x_ax = [float(i) for i in pr_x_ax]
va_x_ax = [float(i) for i in va_x_ax]
tr_x_ax = [float(i) for i in tr_x_ax]
plt.plot(y,tr_x_ax,  label='train' , color='red',  linewidth=2)
plt.plot(y1,va_x_ax,  label='validation1' , color='blue',  linewidth=2)
plt.plot(y1,pr_x_ax,  label='Predictions after' , color='green',  linewidth=2)
plt.xlabel("time")
plt.ylabel("data")
plt.xticks(rotation=45)
plt.legend()
plt.show()

Looking at the figure below:

The line labeled Predictions before is what the model predicts without any constraints (this is not the result I want).

The line labeled Predictions after is the prediction "within the constraints", but I only got it by post-processing the model's output: after the model predicted, I set every value from index = 71 onward equal to the value at that point (item = 8.56).

I produced that flat part with the loop for index, item in enumerate(pr_x_ax): in the code above; as you can see, the curve is a straight horizontal line from time 71 to 85, which shows how I need the model to behave.

Can I build a model that gives this same result directly, instead of using the for loop?

[figure: train (red), validation (blue), Predictions before (orange), Predictions after (green)]

Any suggestions are welcome.

1 answer:

Answer 0 (score: 2)

I believe that by drawing the green line in your question you really want the trained model to predict a linear curve that turns horizontal on the right-hand side. But right now the trained model draws only the straight orange line.

This is true for any algorithm and any kind of trained model: in order to learn some unusual change in the modeled behavior, the model needs at least a few training samples exhibiting that change, or at least some hidden cue in the observed data pointing to it.

In other words, for your model to learn the right-turning green line, it has to see right-turning points in its training set. But with train = df[:int(0.7 * len(df))] you take just the first (left-most) 70% of the data for training, and that part contains no such right turn; it looks almost like a straight line.

So you need to resample the data differently for training and validation: draw 70% of the samples at random from the whole range of X for training and keep the rest for validation. That way the right turn also appears in your training sample.
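A minimal sketch of such a random resampling, using sklearn's train_test_split (the full code further down achieves the same thing with a random boolean mask instead):

import numpy as np
from sklearn.model_selection import train_test_split

# `data` is the series from the question's code; the split is random over
# the WHOLE time range, so right-turn points can land in the training set
x = np.arange(1, 85, dtype=np.float64)
y = np.array(data)
x_train, x_valid, y_train, y_valid = train_test_split(
    x, y, train_size=0.7, random_state=0)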

The second thing is that a LinearRegression model always models its prediction as a single straight line, and a single straight line cannot turn right. To get the right turn you need some more complex model.
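As an aside, one family of more complex models is polynomial regression; a minimal sketch with sklearn's PolynomialFeatures, reusing x_train and y_train from the random split above (note a polynomial bends smoothly rather than making the sharp two-segment turn drawn in green):

from sklearn.linear_model import LinearRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PolynomialFeatures

# degree-5 polynomial fit over the randomly sampled training points
poly_model = make_pipeline(PolynomialFeatures(degree=5), LinearRegression())
poly_model.fit(x_train[:, None], y_train)
poly_preds = poly_model.predict(x_valid[:, None])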

One kind of model that can turn right is a piecewise linear one, i.e. several connected straight segments. I did not find a ready-made piecewise linear model inside sklearn itself, only in other pip packages. So I decided to implement my own simple class PieceWiseLinearRegression, which uses np.piecewise() and scipy.optimize.curve_fit() to model a piecewise linear function.
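For reference, a sketch using one of those pip packages, pwlf, assuming it is installed; this is an alternative to the hand-rolled class below, not part of this answer's code:

# pip install pwlf  -- third-party continuous piecewise linear fit package
import numpy as np
import pwlf

pw = pwlf.PiecewiseLinFit(x_train, y_train)  # piecewise linear model over the training points
breakpoints = pw.fit(2)                      # fit 2 segments; returns the fitted breakpoint locations
pw_preds = pw.predict(np.linspace(x_train.min(), x_train.max(), 200))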

The next image shows the result of applying both of the points above, followed by the code that resamples the dataset differently and models a piecewise linear function. Your current linear model LR still predicts with just a single straight blue line, while my piecewise linear PWLR2's orange line consists of two segments and correctly predicts the right turn:

[figure: LR vs PWLR2 predictions plotted over the train and validation series]

To see the PWLR2 plot clearly on its own, I also made the next picture:

[figure: PWLR2 prediction alone over the train and validation series]

The class PieceWiseLinearRegression accepts just one parameter at object creation, n, the number of linear segments used for prediction. For the pictures above, n = 2 was used.

import numpy as np, pandas as pd
import matplotlib.pyplot as plt
from sklearn.linear_model import LinearRegression
np.random.seed(0)

class PieceWiseLinearRegression:
    @classmethod
    def nargs_func(cls, f, n):
        # Wrap f into a lambda taking exactly n positional arguments, because
        # scipy.optimize.curve_fit infers the number of fit parameters from
        # the wrapped function's signature.
        return eval('lambda ' + ', '.join([f'a{i}' for i in range(n)]) + ': f(' + ', '.join([f'a{i}' for i in range(n)]) + ')', locals())
        
    @classmethod
    def piecewise_linear(cls, n):
        # condlist: boolean masks assigning each x to one of the n segments
        # delimited by the breakpoint abscissas xs
        condlist = lambda xs, xa: [(lambda x: (
            (xs[i] <= x if i > 0 else np.full_like(x, True, dtype = np.bool_)) &
            (x < xs[i + 1] if i < n - 1 else np.full_like(x, True, dtype = np.bool_))
        ))(xa) for i in range(n)]
        # funclist: linear interpolation between consecutive breakpoints
        # (xs[i], ys[i]) and (xs[i + 1], ys[i + 1]); the denominator is clamped
        # away from zero to handle coincident breakpoints
        funclist = lambda xs, ys: [(lambda i: (
            lambda x: (
                (x - xs[i]) * (ys[i + 1] - ys[i]) / (
                    (xs[i + 1] - xs[i]) if abs(xs[i + 1] - xs[i]) > 10 ** -7 else 10 ** -7 * (-1, 1)[xs[i + 1] - xs[i] >= 0]
                ) + ys[i]
            )
        ))(j) for j in range(n)]
        def f(x, *pargs):
            # pargs interleaves the (n + 1) breakpoint coordinates: x0, y0, x1, y1, ...
            assert len(pargs) == (n + 1) * 2, (n, pargs)
            xs, ys = pargs[0::2], pargs[1::2]
            xa = x.ravel().astype(np.float64)
            ya = np.piecewise(x = xa, condlist = condlist(xs, xa), funclist = funclist(xs, ys)).ravel()
            return ya
        return cls.nargs_func(f, 1 + (n + 1) * 2)
        
    def __init__(self, n):
        self.n = n
        self.f = self.piecewise_linear(self.n)

    def fit(self, x, y):
        from scipy import optimize
        # Initial guess p0: breakpoint xs spread evenly over [min(x), max(x)], all ys = 1
        self.p, self.e = optimize.curve_fit(self.f, x, y, p0 = [j for i in range(self.n + 1) for j in (np.amin(x) + i * (np.amax(x) - np.amin(x)) / self.n, 1)])
        
    def predict(self, x):
        return self.f(x, *self.p)

data = [5.269, 5.346, 5.375, 5.482, 5.519, 5.57, 5.593999999999999, 5.627000000000001, 5.724, 5.818, 5.792999999999999, 5.817, 5.8389999999999995, 5.882000000000001, 5.92, 6.025, 6.064, 6.111000000000001, 6.1160000000000005, 6.138, 6.247000000000001, 6.279, 6.332000000000001, 6.3389999999999995, 6.3420000000000005, 6.412999999999999, 6.442, 6.519, 6.596, 6.603, 6.627999999999999, 6.76, 6.837000000000001, 6.781000000000001, 6.8260000000000005, 6.849, 6.875, 6.982, 7.018, 7.042000000000001, 7.068, 7.091, 7.204, 7.228, 7.261, 7.3420000000000005, 7.414, 7.44, 7.516, 7.542000000000001, 7.627000000000001, 7.667000000000001, 7.821000000000001, 7.792999999999999, 7.756, 7.871, 8.006, 8.078, 7.916, 7.974, 8.074, 8.119, 8.228, 7.976, 8.045, 8.312999999999999, 8.335, 8.388, 8.437999999999999, 8.456, 8.227, 8.266, 8.277999999999999, 8.289, 8.299, 8.318, 8.332, 8.34, 8.349, 8.36, 8.363999999999999, 8.368, 8.282, 8.283999999999999]
time = list(range(1, 85))
df = pd.DataFrame(list(zip(time, data)), columns = ['time', 'data'])

choose_train = np.random.uniform(size = (len(df),)) < 0.8  # random ~80/20 train/valid split
choose_valid = ~choose_train

x_all = df.iloc[:, 0].values
y_all = df.iloc[:, 1].values
x_train = df.iloc[:, 0][choose_train].values
y_train = df.iloc[:, 1][choose_train].values
x_valid = df.iloc[:, 0][choose_valid].values
y_valid = df.iloc[:, 1][choose_valid].values
x_all_lin = np.linspace(np.amin(x_all), np.amax(x_all), 500)

models = []
models.append(('LR', LinearRegression()))
models.append(('PWLR2', PieceWiseLinearRegression(2)))
        
for name, model in models:
    model.fit(x_train[:, None], y_train)
    x_all_lin_pred = model.predict(x_all_lin[:, None])
    plt.plot(x_all_lin, x_all_lin_pred, label = f'pred {name}')

plt.plot(x_train, y_train, label='train')
plt.plot(x_valid, y_valid, label='valid')
plt.xlabel('time')
plt.ylabel('data')
plt.legend()
plt.show()