Gradient descent with multiple variables without matrices

Date: 2015-10-17 18:46:38

Tags: matlab matrix machine-learning gradient-descent

I'm new to Matlab and machine learning, and I'm trying to write a gradient descent function without using matrices.

  • m is the number of examples in my training set
  • n is the number of features for each example

The function gradientDescentMulti takes 5 parameters (see the example call after the list):

  • X: an m×n matrix
  • y: an m-dimensional vector
  • theta: an n-dimensional vector
  • alpha: a real number (the learning rate)
  • nb_iters: the number of iterations
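
For concreteness, a minimal call could look like this (the toy data below is invented purely for illustration):

% Hypothetical toy data, purely for illustration
X = [1 0.5; 1 1.0; 1 1.5];  % m = 3 examples, n = 2 features (first column = intercept term)
y = [1; 2; 3];              % m-dimensional vector of targets
theta = zeros(2, 1);        % n-dimensional initial parameter vector
alpha = 0.1;                % learning rate
nb_iters = 1000;            % number of iterations
theta = gradientDescentMulti(X, y, theta, alpha, nb_iters);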

I already have a solution that uses matrix multiplication:

function theta = gradientDescentMulti(X, y, theta, alpha, num_iters)
  m = length(y); % number of training examples
  for iter = 1:num_iters
    gradJ = 1/m * (X'*X*theta - X'*y); % gradient of the least-squares cost
    theta = theta - alpha * gradJ;     % simultaneous update of all parameters
  end
end
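
For reference, this computes the batch gradient of the usual least-squares cost for linear regression (assuming that is the cost function in play here):

$$J(\theta) = \frac{1}{2m}\,(X\theta - y)^\top (X\theta - y), \qquad \nabla J(\theta) = \frac{1}{m}\,X^\top (X\theta - y) = \frac{1}{m}\left(X^\top X\,\theta - X^\top y\right)$$

which is exactly what the gradJ line evaluates.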

The result after the iterations:

theta =
   1.0e+05 *

    3.3430
    1.0009
    0.0367

But now I'm trying to do the same thing without matrix multiplication. Here is the function:

function theta = gradientDescentMulti(X, y, theta, alpha, num_iters)
  m = length(y); % number of training examples
  n = size(X, 2); % number of features

  for iter = 1:num_iters
    new_theta = zeros(1, n);
    %// for each feature, find the new theta
    for t = 1:n
      S = 0;
      for example = 1:m
        h = 0;
        for example_feature = 1:n
          h = h + (theta(example_feature) * X(example, example_feature));
        end
        S = S + ((h - y(example)) * X(example, n)); %// Sum each feature for this example
      end
      new_theta(t) = theta(t) - alpha * (1/m) * S; %// Calculate the new theta for this feature
    end 
    %// only at the end of the function, update all theta simultaneously
    theta = new_theta'; %// Transpose new_theta (horizontal vector) to theta (vertical vector)
  end
end

As a result, all the thetas come out identical :/

theta =
   1.0e+04 *

    3.5374
    3.5374
    3.5374

1 Answer:

Answer 0 (score: 1)

If you look at the gradient update rule, it may be more efficient to first compute the hypothesis for all of the training examples, then subtract the ground-truth value of each example from it, and store these differences in an array or vector. Once you do this, you can compute the update rule very easily. To me, it doesn't appear that your code is doing this. (In fact, the immediate bug is in the accumulation line: S = S + ((h - y(example)) * X(example, n)) uses the last feature, column n, for every value of t rather than X(example, t), so every parameter receives exactly the same update and all the thetas end up equal.)
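
For completeness, the minimal fix to your original inner loop would be to index the feature column by t (shown here as a sketch of just the corrected accumulation line):

        S = S + ((h - y(example)) * X(example, t)); %// use feature t, not the last feature n

That said, the rewrite below also avoids recomputing the hypothesis once per feature in every iteration, and is easier to follow.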

As such, I rewrote the code, but with a separate array that stores the differences between the hypothesis and the ground-truth value for each training example. Once those are computed, I calculate the update rule for each feature separately:

for iter = 1 : num_iters

    %// Compute hypothesis differences with ground truth first
    h = zeros(1, m);
    for t = 1 : m
        %// Compute hypothesis
        for tt = 1 : n
            h(t) = h(t) + theta(tt)*X(t,tt);
        end
        %// Compute difference between hypothesis and ground truth
        h(t) = h(t) - y(t);
    end

    %// Now update parameters
    new_theta = zeros(1, n);    
    %// for each feature, find the new theta
    for tt = 1 : n
        S = 0;
        %// For each sample, compute products of hypothesis difference
        %// and the right feature of the sample and accumulate
        for t = 1 : m
            S = S + h(t)*X(t,tt);
        end

        %// Compute gradient descent step
        new_theta(tt) = theta(tt) - (alpha/m)*S;
    end

    theta = new_theta'; %// Transpose new_theta (horizontal vector) to theta (vertical vector)    

end

When I do this, I get the same answers as with the matrix formulation.
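
One quick way to check the equivalence yourself is to run both versions on the same random data. A sketch (gradientDescentMultiLoops is a hypothetical rename of the loop version above, so the two functions can coexist):

% Sanity check on random data: vectorized vs. loop implementation
m = 50; n = 3;
X = [ones(m, 1), rand(m, n - 1)];  % random features plus an intercept column
y = rand(m, 1);                    % random targets
theta0 = zeros(n, 1);
t_vec  = gradientDescentMulti(X, y, theta0, 0.1, 500);       % matrix version
t_loop = gradientDescentMultiLoops(X, y, theta0, 0.1, 500);  % loop version (renamed)
disp(norm(t_vec - t_loop));        % should print (near) zero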