Gradient Descent implementation problem in Python

Date: 2017-08-19 07:19:10

Tags: python machine-learning gradient-descent

Hey, I'm trying to understand this algorithm for a linear hypothesis. I can't figure out whether my implementation is correct. I suspect it isn't, but I can't see what I'm missing.

# x and y are my data arrays, le is the number of data points
theta0 = 1
theta1 = 1
alpha = 0.01
for i in range(0, le * 10):    # le*10 passes over the data set
    for j in range(0, le):     # one update per data point
        temp0 = theta0 - alpha * (theta1 * x[j] + theta0 - y[j])
        temp1 = theta1 - alpha * (theta1 * x[j] + theta0 - y[j]) * x[j]
        theta0 = temp0
        theta1 = temp1

print("Values of slope and y intercept derived using gradient descent ", theta1, theta0)

It gives me the right answer to four decimal places of precision, but when I compare it with other programs online I get confused.
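
For reference, le, x, and y just need to be defined before the loop; a toy setup like the one below (values picked arbitrarily) is enough to test it, and with it theta1 and theta0 come out close to 2 and 5:

import numpy as np

# toy data generated from the line y = 2x + 5
le = 50
x = np.linspace(0, 1, le)
y = 2 * x + 5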

Thanks in advance!

1 Answer:

Answer 0 (score: 1)

An implementation of the gradient descent algorithm, applied to a one-dimensional test function (step against the derivative until the step size falls below a tolerance):

import numpy as np

cur_x = 1.0         # initial value
gamma = 1e-2        # step size multiplier (learning rate)
precision = 1e-10   # stop once the step size falls below this
prev_step_size = cur_x

# test function: f(x) = (sin(x) + x^2)^2
def foo_func(x):
    return (np.sin(x) + x**2)**2

# its derivative, which is the gradient used in the update step
def foo_grad(x):
    return 2 * (np.sin(x) + x**2) * (np.cos(x) + 2 * x)

# Iterate until the change between successive iterates
# is smaller than the required precision
while prev_step_size > precision:
    prev_x = cur_x
    cur_x -= gamma * foo_grad(prev_x)    # step against the gradient
    prev_step_size = abs(cur_x - prev_x)

print("The local minimum occurs at %f" % cur_x)