Gradient descent - logistic regression - weird thetas

Time: 2019-10-30 01:23:23

Tags: matlab machine-learning gradient-descent

Running my gradient descent function on the training data produces thetas of [0.3157; 0.0176; 0.0148]. The first value is noticeably larger than the others. When I predict probabilities on my test data, the result is always around 0.42 ± 0.01, which always puts the probability close to 0. I believe the error lies in the gradient descent function.

Gradient descent function

function [theta] = GradientDescent(x, y)
    m = size(x,1);  
    n = size(x,2);  

    theta = zeros(n,1);
    alpha = .005; 
    iterations = 10000; 
    J=[];

    for i = 1:iterations
        h = x * theta;
        theta = theta - (alpha/m)* x' * (h-y);
        J_old = J;
        J = -(1/m) * sum(y .* log(h) + (1 - y) .* log(1-h));
        if((i>2 && abs(J_old - J) < 10^-5))
             break;
        end
        if(any(isnan(theta)))
            disp("breaking the iterations since theta returns NaN values");
            break;
        end
    end
    disp("Performing Gradient descent -  with "+n+" features");
end
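The NaN guard inside the loop hints at the underlying problem: with `h = x * theta` the "hypothesis" is unbounded, so `log(h)` and `log(1-h)` in the cost are undefined whenever `h` falls outside (0, 1). A small NumPy check (the values below are illustrative, not taken from the actual dataset):

```python
import numpy as np

# Illustrative linear outputs; with h = x * theta these can be any real number.
h_linear = np.array([-0.5, 0.3, 1.7])

# log(h) is undefined for h <= 0 (and log(1-h) for h >= 1), producing NaN.
with np.errstate(invalid="ignore"):
    cost_terms = np.log(h_linear)
print(np.isnan(cost_terms[0]))  # log of a negative number -> nan

# Squashing through the sigmoid keeps every value strictly inside (0, 1).
h_sigmoid = 1.0 / (1.0 + np.exp(-h_linear))
print(np.all((h_sigmoid > 0) & (h_sigmoid < 1)))
```

Once NaN enters the cost or theta, it propagates through every later update, which is exactly what the `isnan` break is catching.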

Main code - loading the data and computing probabilities

[X, Y] = LoadData("train_q1.csv");
scatter(X(:, 2), X(:, 3), 25, Y);
% 1 is buy - on the ends
% 0 is sell - in the middle
%============ 1b.
thetas = ones(3, 1);
[theta] = GradientDescent(X, Y);
disp(theta);
% get accuracy
[trainX, trainY] = LoadData("test_q1.csv");

correct = 0;
%probability
for i=1:length(trainY)
    disp(1 ./ (1 + exp(trainX(i, :) * theta)));
    probability = round(1 ./ (1 + exp(trainX(i, :) * theta)));
    if trainY(i) == probability
        correct = correct + 1;
    end
end
disp(correct);
% print accuracy
disp("The model is " + (correct/length(trainY) * 100) + "% correct");

1 Answer:

Answer 0 (score: 0)

  • You are computing an incorrect hypothesis with h = x * theta inside the GradientDescent(..) function. It should be h = 1./(1 + exp(-x*theta)) (note the minus sign).
  • Consequently, when computing the probability it should be disp(1 ./ (1 + exp(-(trainX(i, :) * theta)))). Note that you have not included the minus sign (-) inside exp().

  • The names in [trainX, trainY] = LoadData("test_q1.csv") are also misleading. Since you are loading the test dataset (test_q1.csv), they should be testX, testY rather than trainX, trainY.
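Putting both fixes together, the corrected training loop can be sketched as follows. This is a NumPy translation of the MATLAB function above; the toy data, learning rate, and iteration count are illustrative assumptions, not the asker's actual setup:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def gradient_descent(x, y, alpha=0.005, iterations=10000, tol=1e-5):
    """Logistic-regression gradient descent with the corrected hypothesis."""
    m, n = x.shape
    theta = np.zeros(n)
    j_old = np.inf
    for _ in range(iterations):
        h = sigmoid(x @ theta)            # the fix: sigmoid, not x @ theta alone
        theta -= (alpha / m) * x.T @ (h - y)
        # Cross-entropy cost; h stays strictly inside (0,1), so the logs are defined.
        j = -np.mean(y * np.log(h) + (1 - y) * np.log(1 - h))
        if abs(j_old - j) < tol:
            break
        j_old = j
    return theta

# Toy separable data: an intercept column plus one feature.
x = np.array([[1.0, f] for f in [-2.0, -1.0, 1.0, 2.0]])
y = np.array([0.0, 0.0, 1.0, 1.0])
theta = gradient_descent(x, y)
# Prediction also uses the minus sign, via sigmoid(x @ theta).
preds = np.round(sigmoid(x @ theta))
print(preds)
```

With the sigmoid in place, the predicted probabilities for the positive class rise above 0.5 on the positive side of the boundary, instead of collapsing toward a constant value near 0.42 as in the original run.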