Linear regression loss keeps increasing

Date: 2020-01-17 13:55:25

Tags: python machine-learning regression linear-regression

I wrote linear regression from scratch, but the loss keeps increasing. My data is the living area and the price (as the label) from the Houston housing dataset. I have tried many learning rates (from 10 down to 0.00000000001), but it still doesn't work. With every epoch my fit line/function moves further away from the data points. I guess there must be something wrong with the function, but I don't know what. Here is an example of the loss:

loss: 0.5977188541860982
loss: 0.6003449724263221
loss: 0.6029841845821928
loss: 0.6056365560589673
loss: 0.6083021525886172
loss: 0.6109810402314608
loss: 0.6136732853778034
loss: 0.6163789547495854
loss: 0.6190981154020385
loss: 0.6218308347253524
loss: 0.6245771804463445

Here is the code:

from preprocessing import load_csv
import pandas as pd
import numpy as np
import random
import matplotlib.pyplot as plt

# mean squared error
def MSE(y_prediction, y_true, deriv=(False, 1)):
    if deriv[0]:
        # deriv[1] is the derivative of the fit_function
        return 2 * np.mean(np.subtract(y_true, y_prediction) * deriv[1])
    return np.mean(np.square(np.subtract(y_true, y_prediction)))

# linear function
def fit_function(theta_0, theta_1, x):
    return theta_0 + (theta_1 * x)

# train model
def train(dataset, epochs=10, lr=0.01):
    # loading and normalizing the data
    x = (v := np.array(dataset["GrLivArea"].tolist()[:100])) / max(v)
    y = (l := np.array(dataset["SalePrice"].tolist()[:100])) / max(l)

    # y-intercept
    theta_0 = random.uniform(min(y), max(y))
    # slope
    theta_1 = random.uniform(-1, 1)

    for epoch in range(epochs):

        predictions = fit_function(theta_0, theta_1, x)
        loss = MSE(predictions, y)

        delta_theta_0 = MSE(predictions, y, deriv=(True, 1))
        delta_theta_1 = MSE(predictions, y, deriv=(True, x))

        theta_0 -= lr * delta_theta_0
        theta_1 -= lr * delta_theta_1

        print("\nloss:", loss)


    plt.style.use("ggplot")
    plt.scatter(x, y)
    x, predictions = map(list, zip(*sorted(zip(x, predictions))))
    plt.plot(x, predictions, "b--")

    plt.show()


train(load_csv("dataset/houston_housing/single_variable_dataset/train.csv"), epochs=500, lr=0.001)

Here is the plot after 500 epochs: plot

Thanks for your help :)

1 Answer:

Answer 0 (score: 0)

Pretty old post, but I figured I'd give an answer anyway.

You flipped the sign in the MSE derivative:

def MSE(y_prediction, y_true, deriv=(False, 1)):
    if deriv[0]:
        return 2 * np.mean(np.subtract(y_prediction, y_true) * deriv[1])
    return np.mean(np.square(np.subtract(y_true, y_prediction)))

The partial derivatives with respect to your parameters are:

∂MSE/∂θ₀ = (2/N) Σ (ŷᵢ − yᵢ)
∂MSE/∂θ₁ = (2/N) Σ (ŷᵢ − yᵢ) · xᵢ


Or, more concisely:

def MSE(y_prediction, y_true, deriv=None):
    if deriv is not None:
        return 2 * np.mean((y_prediction - y_true)*deriv)
    return np.mean((y_prediction - y_true)**2)

This lets you compute the derivatives without passing in a tuple with a flag (deriv is the derivative of the prediction with respect to the parameter: 1 for theta_0 and x for theta_1):

delta_theta_0 = MSE(predictions, y, deriv=1)
delta_theta_1 = MSE(predictions, y, deriv=x)
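
A quick way to see that the sign really is the whole problem: one gradient step with the corrected derivative lowers the loss, while stepping with the flipped sign raises it, which matches the growing losses in the question. Below is a minimal sketch on a small synthetic dataset (this demo is not part of the original answer):

import numpy as np

def MSE(y_prediction, y_true, deriv=None):
    if deriv is not None:
        return 2 * np.mean((y_prediction - y_true) * deriv)
    return np.mean((y_prediction - y_true) ** 2)

# tiny synthetic dataset: y is roughly 0.5 * x + 0.1
rng = np.random.default_rng(0)
x = np.linspace(0, 1, 20)
y = 0.5 * x + 0.1 + rng.normal(0, 0.01, x.shape)

theta_0, theta_1, lr = 0.0, -1.0, 0.1
pred = theta_0 + theta_1 * x
print("initial loss:", MSE(pred, y))

# one step with the corrected gradient: the loss goes down
t0 = theta_0 - lr * MSE(pred, y, deriv=1)
t1 = theta_1 - lr * MSE(pred, y, deriv=x)
print("after corrected step:", MSE(t0 + t1 * x, y))

# one step with the flipped sign from the question (equivalent to adding
# the corrected gradient instead of subtracting it): the loss goes up
t0 = theta_0 + lr * MSE(pred, y, deriv=1)
t1 = theta_1 + lr * MSE(pred, y, deriv=x)
print("after flipped-sign step:", MSE(t0 + t1 * x, y))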

Here is an example using sklearn.datasets.load_boston, with LSTAT (% lower status of the population) and MEDV (median home value in $1000's), the last two features of the dataset, as input and target respectively.
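
The answer only shows the resulting plot, not the code for this example. A minimal sketch of such a run, reusing the concise MSE above, might look like the following (this is an assumption about the setup, not the answerer's exact code; also note that load_boston was deprecated and removed in scikit-learn 1.2, so this assumes an older version):

import numpy as np
import matplotlib.pyplot as plt
from sklearn.datasets import load_boston  # removed in scikit-learn >= 1.2

def MSE(y_prediction, y_true, deriv=None):
    if deriv is not None:
        return 2 * np.mean((y_prediction - y_true) * deriv)
    return np.mean((y_prediction - y_true) ** 2)

# LSTAT (last feature column) as input, MEDV (the target) as output, both normalized
boston = load_boston()
x = boston.data[:, -1] / boston.data[:, -1].max()
y = boston.target / boston.target.max()

theta_0, theta_1 = 0.0, 0.0
lr, epochs = 0.001, 10000

for epoch in range(epochs):
    predictions = theta_0 + theta_1 * x
    theta_0 -= lr * MSE(predictions, y, deriv=1)
    theta_1 -= lr * MSE(predictions, y, deriv=x)

plt.style.use("ggplot")
plt.scatter(x, y)
line_x = np.linspace(x.min(), x.max(), 100)
plt.plot(line_x, theta_0 + theta_1 * line_x, "b--")
plt.show()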

Training with epochs=10000 and lr=0.001:

(plot: fitted line over the normalized LSTAT/MEDV scatter after training)