Gradient descent does not update the theta values

Date: 2016-05-14 17:10:29

Tags: matlab machine-learning neural-network gradient-descent

I am using the vectorized version of the gradient update, as described here: gradient descent seems to fail

theta = theta - (alpha/m *  (X * theta-y)' * X)';

The theta values are not updated, so no matter what the initial theta is, this is the value it is set to after running gradient descent:

example1:

m = 1
X = [1]
y = [0]
theta = 2
theta = theta - (alpha/m .* (X .* theta-y)' * X)'

theta =

    2.0000

example2:

m = 1
X = [1;1;1]
y = [1;0;1]
theta = [1;2;3]
theta = theta - (alpha/m .* (X .* theta-y)' * X)'

theta =

    1.0000
    2.0000
    3.0000

Is theta = theta - (alpha/m * (X * theta-y)' * X)'; a correct vectorized implementation of gradient descent?

1 answer:

Answer 0 (score: 0)

theta = theta - (alpha/m * (X * theta-y)' * X)'; is indeed a correct vectorized implementation of gradient descent.

You simply forgot to set the learning rate alpha.

After setting alpha = 0.01, your code becomes:

m = 1                % number of training examples
X = [1;1;1]
y = [1;0;1]
theta = [1;2;3]
alpha = 0.01
theta = theta - (alpha/m .* (X .* theta-y)' * X)'
theta =

   0.96000
   1.96000
   2.96000
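The effect is easy to check outside MATLAB as well. Below is a minimal NumPy sketch (the `gd_step` helper and the data shapes are my own, chosen to mirror example2 with a 3x1 design matrix) showing that the vectorized update only moves theta when alpha is positive:

```python
import numpy as np

def gd_step(theta, X, y, alpha):
    # Vectorized gradient-descent step for linear regression:
    # theta := theta - (alpha/m) * X' * (X*theta - y)
    m = len(y)
    return theta - (alpha / m) * (X.T @ (X @ theta - y))

X = np.array([[1.0], [1.0], [1.0]])  # 3 training examples, 1 feature
y = np.array([1.0, 0.0, 1.0])
theta = np.array([2.0])

# With alpha = 0 (or alpha never assigned in MATLAB), theta never moves.
print(gd_step(theta, X, y, alpha=0.0))   # theta unchanged
# With a positive learning rate the update takes effect.
print(gd_step(theta, X, y, alpha=0.01))  # theta decreases
```

An unset alpha would actually raise an "Undefined function or variable" error in MATLAB, so in practice alpha was most likely set to 0 somewhere earlier, which makes the update term vanish exactly as the question describes.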