我用于线性回归的梯度下降不会收敛

时间:2019-09-19 23:33:21

标签: machine-learning linear-regression gradient-descent

我已经花了好几个小时试图弄清楚为什么我的线性回归梯度下降的代码无法收敛。 尝试使用很小的alpha和很大的迭代次数,仍然无法正常工作。

function [theta, J_history] = gradientDescent(X, y, theta, alpha, num_iters)
%GRADIENTDESCENT Performs gradient descent
% t to learn theta
%   theta = GRADIENTDESCENT(X, y, theta, alpha, num_iters) updates theta by 
%   taking num_iters gradient steps with learning rate alpha

% Initialize some useful values
m = length(y) % number of training examples

ST0=0;
ST1=0;

for iter = 1:num_iters
   for i=1:m
       ST0=ST0+((theta(1)+theta(2)*X(i))-y(i));
       ST1=ST1+(((theta(1)+theta(2)*X(i))-y(i)))*X(i,2);
   end

  ST0=ST0/m;
  ST0=ST0*alpha;
  ST1=ST1/m;
  ST1=ST1*alpha;  
  theta(1)=theta(1)-ST0;
  theta(2)=theta(2)-ST1;
    J= computeCost(X, y, theta);

    J_history(iter) = J;


end

end

0 个答案:

没有答案