Gradient descent implementation - mean absolute error problem

Asked: 2017-11-18 20:09:10

Tags: python machine-learning gradient-descent

I am trying to implement the gradient descent algorithm in Python. When I plot the history of the cost function it appears to converge, but the mean absolute error of my implementation is worse than what I get from sklearn's linear_model. I can't figure out what is wrong with my implementation.

import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
from sklearn import linear_model
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_absolute_error


def gradient_descent(x, y, theta, alpha, num_iters):
    m = len(y)
    cost_history = np.zeros(num_iters)

    for iter in range(num_iters):
        h = np.dot(x, theta)

        for i in range(len(theta)):
            theta[i] = theta[i] - (alpha/m) * np.sum((h - y) * x[:,i])

        #save the cost in every iteration
        cost_history[iter] = np.sum(np.square((h - y))) / (2 * m)

    return theta, cost_history


attributes = [...]
class_field = [...]

x_df = pd.read_csv('train.csv', usecols = attributes)
y_df = pd.read_csv('train.csv', usecols = class_field)

#normalize
x_df = (x_df - x_df.mean()) / x_df.std()

#gradient descent
alpha = 0.01
num_iters = 1000

err = 0
i = 10
for i in range(i):
    x_train, x_test, y_train, y_test = train_test_split(x_df, y_df, test_size=0.2)
    x_train = np.array(x_train)
    y_train = np.array(y_train).flatten()
    theta = np.random.sample(len(x_df.columns))
    theta, cost_history = gradient_descent(x_train, y_train, theta, alpha, num_iters)
    err = err + mean_absolute_error(y_test, np.dot(x_test, theta))
    print(np.dot(x_test, theta))
    #plt.plot(cost_history)
    #plt.show()
print(err/i)

regr = linear_model.LinearRegression()
regr.fit(x_train, y_train)
y_pred = regr.predict(x_test)
print(mean_absolute_error(y_test, y_pred))

1 answer:

Answer 0 (score: 1):

You seem to be missing the bias/intercept column and its coefficient.

The hypothesis of the linear function should look like this:

H = theta_0 + theta_1 * x

In your implementation it looks like this:

H = theta_1 * x
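
A minimal sketch of the fix, assuming the rest of the setup stays as in the question: prepend a column of ones to the feature matrix so that the first component of theta learns the intercept theta_0. The helper name add_intercept and the *_b variable names are illustrative, not from the original code.

def add_intercept(x):
    # prepend a column of ones so that theta[0] acts as the bias/intercept theta_0
    return np.hstack([np.ones((x.shape[0], 1)), x])

# inside the evaluation loop, replacing the original training lines:
x_train_b = add_intercept(np.array(x_train))
x_test_b = add_intercept(np.array(x_test))
theta = np.random.sample(x_train_b.shape[1])  # one extra weight for the intercept
theta, cost_history = gradient_descent(x_train_b, y_train, theta, alpha, num_iters)
err = err + mean_absolute_error(y_test, np.dot(x_test_b, theta))

For reference, sklearn's LinearRegression fits an intercept by default (fit_intercept=True), which is likely why its mean absolute error comes out lower than the no-intercept hypothesis above.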