Why isn't my gradient descent algorithm working?

Date: 2018-02-07 04:29:29

Tags: python machine-learning linear-regression gradient-descent

I am trying to port the gradient descent algorithm for linear regression from Andrew NG's machine learning course (Octave) to Python, but for some reason my implementation isn't working correctly.

Here is my implementation in Octave, which works correctly:

function [theta, J_history] = gradientDescent(X, y, theta, alpha, num_iters)

m = length(y);  % number of training examples
J_history = zeros(num_iters, 1);

for iter = 1:num_iters

    prediction = X*theta;
    margin_error = prediction - y;

    gradient = 1/m * (alpha * (X' * margin_error));
    theta = theta - gradient;

    J_history(iter) = computeCost(X, y, theta);

end

end

However, when I translate it to Python, for some reason it doesn't give me accurate results. The cost seems to go up rather than down.

Here is my implementation in Python:

def gradientDescent(x, y, theta, alpha, iters):
    m = len(y)

    J_history = np.matrix(np.zeros((iters,1)))

    for i in range(iters):
        prediction = x*theta.T
        margin_error = prediction - y

        gradient = 1/m * (alpha * (x.T * margin_error))
        theta = theta - gradient

        J_history[i] = computeCost(x,y,theta)

    return theta,J_history

My code runs without any errors. Note that this is theta:

theta = np.matrix(np.array([0,0]))

alpha and iters are set to:

alpha = 0.01
iters = 1000

When I run opt_theta, cost = gradientDescent(x, y, theta, alpha, iters) and print out opt_theta, I get:

matrix([[  2.36890383e+16,  -1.40798902e+16],
        [  2.47503758e+17,  -2.36890383e+16]])

when I should be getting this:

matrix([[-3.24140214, 1.1272942 ]])

What am I doing wrong?

Edit:

Cost function:

def computeCost(x, y, theta):
    # Get length of data set
    m = len(y)

    # We take theta transpose because theta is a 1x2 row matrix, e.g. [0,0]
    prediction = x * theta.T

    J = 1/(2*m) * np.sum(np.power((prediction - y), 2))

    return J
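For reference, here is a quick sanity check of computeCost on a tiny made-up dataset (the numbers are hypothetical, and computeCost above is assumed to be in scope):

import numpy as np

x = np.matrix([[1, 1], [1, 2], [1, 3]])   # m x 2 design matrix with bias column
y = np.matrix([[1], [2], [3]])            # m x 1 targets
theta = np.matrix([0, 0])                 # 1 x 2 row of parameters

# With theta = [0, 0] every prediction is 0, so
# J = 1/(2*3) * (1^2 + 2^2 + 3^2) = 14/6, about 2.33
print(computeCost(x, y, theta))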

1 Answer:

Answer 0 (score: 1):

Look at this:

>>> A = np.matrix([3,3,3])
>>> B = np.matrix([[1,1,1], [2,2,2]])
>>> A-B
matrix([[2, 2, 2],
        [1, 1, 1]])

The two matrices broadcast against each other.

"it's because np.matrix inherits from np.array. np.matrix overrides multiplication, but not addition and subtraction"

In your case, theta (1x2) minus the gradient (2x1) silently broadcasts to a 2x2 result, which is exactly the shape of the matrix you printed. Try transposing the gradient before subtracting:

theta = theta - gradient.T
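Putting it together, a minimal sketch of the corrected loop (assuming x is an m x 2 np.matrix, y is m x 1, theta is 1 x 2, and computeCost is defined as in the question):

import numpy as np

def gradientDescent(x, y, theta, alpha, iters):
    m = len(y)
    J_history = np.matrix(np.zeros((iters, 1)))

    for i in range(iters):
        prediction = x * theta.T              # m x 1 predictions
        margin_error = prediction - y         # m x 1 residuals

        gradient = 1/m * (alpha * (x.T * margin_error))  # 2 x 1
        theta = theta - gradient.T            # 1x2 minus 1x2: shapes now match

        J_history[i] = computeCost(x, y, theta)

    return theta, J_history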