Gradient descent in Matlab

Asked: 2014-04-10 19:21:08

Tags: matlab gradient

I am stuck on gradient descent in Matlab. I don't know how to set up the function.

Default settings:

  max_iter = 1000;
  learing = 1;
  degree = 1;

My logistic regression cost function (is this correct???):

function [Jval, Jgrad] = logcost(theta, matrix, y)
    mb = matrix * theta;
    p = sigmoid(mb);

    Jval = sum(-y' * log(p) - (1 - y') * log(1 - p)) / length(matrix);

    if nargout > 1
        Jgrad = matrix' * (p - y) / length(matrix);
    end
end

Now my gradient descent function:

function [theta, Jval] = graddescent(logcost, learing, theta, max_iter)

[Jval, Jgrad] = logcost(theta);
for iter = 1:max_iter 
  theta = theta - learing * Jgrad; % is this correct?
  Jval[iter] = ???

end

Thanks for any help :), Hans

1 Answer:

Answer 0 (score: 1)

You can put the cost function's code in a regular Matlab function:

function [Jval, Jgrad] = logcost(theta, matrix, y)
    mb = matrix * theta;
    p = sigmoid(mb);

    Jval = sum(-y' * log(p) - (1 - y')*log(1 - p)) / length(matrix);

    if nargout > 1
        Jgrad = matrix' * (p - y) / length(matrix);
    end
end
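As a sanity check on the vectorized cost and gradient formulas above, here is the same computation written out in plain Python (a minimal sketch without NumPy; the function and variable names mirror the Matlab version, and any example values are made up for illustration):

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def logcost(theta, X, y):
    """Logistic-regression cost and gradient, mirroring the Matlab logcost.

    X is a list of feature rows, y a list of 0/1 labels, theta a weight list.
    """
    m = len(X)
    # p = sigmoid(X * theta), computed row by row.
    p = [sigmoid(sum(xj * tj for xj, tj in zip(row, theta))) for row in X]
    # Cross-entropy cost, averaged over the m training examples.
    Jval = sum(-yi * math.log(pi) - (1 - yi) * math.log(1 - pi)
               for yi, pi in zip(y, p)) / m
    # Gradient: X' * (p - y) / m, one component per parameter.
    Jgrad = [sum((pi - yi) * row[j] for row, pi, yi in zip(X, p, y)) / m
             for j in range(len(theta))]
    return Jval, Jgrad
```

With theta at zero every prediction is 0.5, so the cost comes out to log(2) regardless of the labels, which is a handy quick check.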

Then create the gradient descent method ( Jgrad is updated automatically on each loop iteration):

function [theta, Jval] = graddescent(logcost, learing, theta, max_iter)
    for iter = 1:max_iter 
        [Jval, Jgrad] = logcost(theta);
        theta = theta - learing * Jgrad;
    end
end

And call it with a function handle that can be used to evaluate the cost:

% Initialize 'matrix' and 'y' ...
matrix = randn(2,2);
y = randn(2,1);

% Create function object.
fLogcost = @(theta)(logcost(theta, matrix, y));

% Perform gradient descent.
[theta, Jval] = graddescent(fLogcost, 1e-3, [0 0]', 10);
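For comparison, the whole pipeline (cost function, descent loop, function handle, driver call) can be mirrored in self-contained plain Python. This is a sketch, not the answer's code: the tiny dataset, the learning rate of 0.1, and the 100 iterations are made-up illustration values.

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def logcost(theta, X, y):
    # Cost and gradient for logistic regression (same formulas as the Matlab logcost).
    m = len(X)
    p = [sigmoid(sum(xj * tj for xj, tj in zip(row, theta))) for row in X]
    J = sum(-yi * math.log(pi) - (1 - yi) * math.log(1 - pi)
            for yi, pi in zip(y, p)) / m
    grad = [sum((pi - yi) * row[j] for row, pi, yi in zip(X, p, y)) / m
            for j in range(len(theta))]
    return J, grad

def graddescent(cost_fn, learning_rate, theta, max_iter):
    # Re-evaluate the gradient and step against it, as in the Matlab graddescent.
    for _ in range(max_iter):
        J, grad = cost_fn(theta)
        theta = [t - learning_rate * g for t, g in zip(theta, grad)]
    return theta, J

# Tiny made-up dataset: first column is a bias term, larger x means label 1.
X = [[1.0, 0.0], [1.0, 1.0], [1.0, 2.0], [1.0, 3.0]]
y = [0, 0, 1, 1]

# The lambda plays the role of the Matlab function handle fLogcost.
f = lambda theta: logcost(theta, X, y)
theta, J = graddescent(f, 0.1, [0.0, 0.0], 100)
```

After a few iterations the cost drops below its starting value of log(2), and the slope weight turns positive, matching the direction of the made-up labels.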

You can also look at fminunc, Matlab's built-in method for function optimization, which includes an implementation of gradient descent among other minimization techniques.

Regards.