Gradient descent is diverging

Asked: 2017-09-07 00:41:05

Tags: python machine-learning linear-regression gradient-descent

I'm following Andrew Ng's Coursera course, and I'm trying to write a basic Python implementation of gradient descent using the housing data I believe he also uses in the slides (it can be found here). I'm not using numpy or scikit-learn or anything; I'm just trying to get the code to fit a line of the form theta0 + theta1 * x (two parameters) to 1D inputs and outputs. My code is very simple, but it still manages to diverge even if I raise or lower the learning rate or let it run for more iterations. I've looked up and tried several other formulas, and it still diverges. I've made sure the data loads correctly. Here's the code:
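For reference, the simultaneous update rule I'm trying to implement is the standard batch gradient descent update, where alpha is the learning rate:

theta0 := theta0 - alpha * (1/m) * sum over i of (theta0 + theta1*x[i] - y[i])
theta1 := theta1 - alpha * (1/m) * sum over i of (theta0 + theta1*x[i] - y[i]) * x[i]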

dataset_f = open("housing_prices.csv", "r")

dataset = dataset_f.read().split("\n")

xs = []
ys = []

for line in dataset:
    split = line.split(",")
    xs.append(int(split[0]))
    ys.append(int(split[2]))

m = float(len(xs))

learning_rate = 1e-5

theta0 = 0
theta1 = 0

n_steps = 1


def converged():
    return n_steps > 1000


while not converged():
    print("Step #" + str(n_steps))
    print("θ Naught: {}".format(theta0))
    print("θ One: {}".format(theta1))

    theta0_gradient = (1.0 / m) * sum([(theta0 + theta1 * xs[i] - ys[i]) for i in range(int(m))])
    theta1_gradient = (1.0 / m) * sum([(theta0 + theta1 * xs[i] - ys[i]) * xs[i] for i in range(int(m))])

    theta0_temp = theta0 - learning_rate * theta0_gradient
    theta1_temp = theta1 - learning_rate * theta1_gradient

    theta0 = theta0_temp
    theta1 = theta1_temp

    n_steps += 1

print(theta0)
print(theta1)

Theta naught and theta one both quickly become nan because they blow up toward infinity. What I've noticed is that they oscillate between positive and negative while growing larger in magnitude with every step. For example:

Step #1
θ Naught: 0
θ One: 0

Step #2
θ Naught: 3.4041265957446813
θ One: 7642.091281914894

Step #3
θ Naught: -146.0856377478662
θ One: -337844.5760108272

Step #4
θ Naught: 6616.511688310662
θ One: 15281052.424862152

Step #5
θ Naught: -299105.2400554526
θ One: -690824180.132845

Step #6
θ Naught: 13522088.241560074
θ One: 31231058614.54401

Step #7
θ Naught: -611311852.8608981
θ One: -1411905961438.4395

Step #8
θ Naught: 27636426469.18927
θ One: 63829999475126.086

Step #9
θ Naught: -1249398426624.6619
θ One: -2885651696197370.0

Step #10
θ Naught: 56483294981582.41
θ One: 1.304556757051869e+17

Step #11
θ Naught: -2553518992810967.5
θ One: -5.89769144561785e+18

Step #12
θ Naught: 1.1544048994968486e+17
θ One: 2.6662515218056607e+20

Step #13
θ Naught: -5.218879028251596e+18
θ One: -1.2053694641507752e+22

1 Answer:

Answer 0 (score: 1):

I've gotten your code working with a few small changes. Ignore my imports; they're purely for my own plotting purposes. This should work with your new dataset. The main changes are just tuning the learning rates and removing some unnecessary casts. Note that the two parameters get separate learning rates: because the x values are large, the gradient for theta1 is orders of magnitude bigger than the gradient for theta0, so a single rate small enough to keep theta1 stable would barely move theta0.

import matplotlib.pyplot as plt
import numpy as np

dataset_f = open("actual_housing_prices.csv", "r")

dataset = dataset_f.read().split("\n")

xs = []
ys = []

for line in dataset:
    if not line:  # skip blank lines (e.g. a trailing newline in the file)
        continue
    split = line.split(",")
    xs.append(int(split[0]))
    ys.append(int(split[2]))

m = len(xs)

learning_rate1 = 1e-7
learning_rate2 = 1e-3

theta0 = 0
theta1 = 0

n_steps = 1


def converged():
    return n_steps > 100000


while not converged():
    print("Step #" + str(n_steps))
    print("Theta Naught: {}".format(theta0))
    print("Theta One: {}".format(theta1))

    theta0_gradient = (1.0 / m) * sum([theta0 + theta1*xs[i] - ys[i] for i in range(m)])
    theta1_gradient = (1.0 / m) * sum([(theta0 + theta1*xs[i] - ys[i])* xs[i] for i in range(m)])

    theta0_temp = theta0 - learning_rate2 * theta0_gradient
    theta1_temp = theta1 - learning_rate1 * theta1_gradient

    theta0 = theta0_temp
    theta1 = theta1_temp

    n_steps += 1

print(theta0)
print(theta1)

print("Error: {}".format(sum([ys[i]-theta0+theta1*xs[i] for i in range(m)])))
plt.plot(xs, ys, 'ro')
plt.axis([0, max(xs), 0, max(ys)])
my_vals = list(np.arange(0, max(xs), 0.02))
plt.plot(my_vals, [theta0 + theta1 * q for q in my_vals], '-bo')
plt.show()

Here is the resulting line using the two optimized weights: [image: plot of the fitted regression line over the data points]
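As a side note, an alternative to using two separate learning rates (not something the code above does, just the feature-scaling trick from the same course) is to mean-normalize x so both gradients end up on a similar scale, at which point a single learning rate works. A minimal sketch, reusing the xs and ys loaded above; the alpha value is only illustrative:

# Mean-normalize the feature so a single learning rate suffices
x_mean = sum(xs) / len(xs)
x_range = max(xs) - min(xs)
xs_scaled = [(x - x_mean) / x_range for x in xs]

m = len(xs_scaled)
alpha = 0.1  # illustrative; stable here only because the feature is normalized
theta0, theta1 = 0.0, 0.0

for step in range(10000):
    grad0 = (1.0 / m) * sum(theta0 + theta1 * xs_scaled[i] - ys[i] for i in range(m))
    grad1 = (1.0 / m) * sum((theta0 + theta1 * xs_scaled[i] - ys[i]) * xs_scaled[i] for i in range(m))
    theta0, theta1 = theta0 - alpha * grad0, theta1 - alpha * grad1

# The fit in the original units is y = theta0 + theta1 * (x - x_mean) / x_range
print(theta0, theta1)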