Gradient descent linear regression in Java

Asked: 2013-02-19 01:49:27

Tags: java machine-learning linear-regression

This is a bit of a long shot, but I was wondering if someone could look this over. Am I doing batch gradient descent for linear regression correctly here? It gives the expected answer for a single independent variable plus intercept, but not for multiple independent variables.

/**
 * (using Colt Matrix library)
 * @param alpha Learning Rate
 * @param thetas Current Thetas
 * @param independent Matrix of independent variables (one row per example)
 * @param dependent Vector of dependent (target) values
 * @return new Thetas
 */
public DoubleMatrix1D descent(double         alpha,
                              DoubleMatrix1D thetas,
                              DoubleMatrix2D independent,
                              DoubleMatrix1D dependent ) {
    Algebra algebra     = new Algebra();

    // ALPHA*(1/M) in one.
    double  modifier    = alpha / (double)independent.rows();

    //I think this can just skip the transpose of theta.
    //This is the result of running every Xi through theta (the hypothesis fn):
    //each Xj feature is multiplied by its Theta to get the hypothesis values.
    DoubleMatrix1D hypotheses = algebra.mult( independent, thetas );

    //hypothesis - Y
    //Now we have, for each Xi, the difference between the value predicted by the hypothesis and the actual Yi
    hypotheses.assign(dependent, Functions.minus);

    //Transpose Examples(MxN) to NxM so we can matrix multiply by hypothesis Nx1
    DoubleMatrix2D transposed = algebra.transpose(independent);

    DoubleMatrix1D deltas     = algebra.mult(transposed, hypotheses );


    // Scale the deltas by 1/m and learning rate alpha.  (alpha/m)
    deltas.assign(Functions.mult(modifier));

    //Theta = Theta - Deltas
    thetas.assign( deltas, Functions.minus );

    return( thetas );
}
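For readers without the Colt library on hand, here is a dependency-free sketch of the same update step using plain Java arrays. The class and method names (`BatchStep`, `step`) are my own, not from the question; the arithmetic mirrors the Colt code above: theta := theta − (alpha/m) · Xᵀ(Xθ − y).

```java
public class BatchStep {

    /** One batch gradient-descent step: theta := theta - (alpha/m) * X^T (X*theta - y). */
    public static double[] step(double alpha, double[] thetas, double[][] x, double[] y) {
        int m = x.length;          // number of examples
        int n = thetas.length;     // number of features, including the intercept column

        // hypothesis h_i = x_i . theta, then residual h_i - y_i
        double[] residual = new double[m];
        for (int i = 0; i < m; i++) {
            double h = 0.0;
            for (int j = 0; j < n; j++) h += x[i][j] * thetas[j];
            residual[i] = h - y[i];
        }

        // deltas = (alpha/m) * X^T * residual, subtracted from theta
        double[] next = thetas.clone();
        for (int j = 0; j < n; j++) {
            double d = 0.0;
            for (int i = 0; i < m; i++) d += x[i][j] * residual[i];
            next[j] -= alpha / m * d;
        }
        return next;
    }

    public static void main(String[] args) {
        double[][] x = {{1, 1}, {1, 2}};   // first column is the intercept term
        double[] y = {1, 2};
        double[] t = step(0.1, new double[]{0.0, 0.0}, x, y);
        System.out.println(t[0] + " " + t[1]);  // prints "0.15 0.25"
    }
}
```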

2 Answers:

Answer 0 (score: 1)

There is nothing wrong with your implementation; based on your comment, the problem is the collinearity you induce when generating x2. This is problematic in regression estimation.

To test the algorithm, you can generate two independent columns of random numbers. Pick values for w0, w1 and w2, i.e. the coefficients for the intercept, x1 and x2 respectively. Compute the dependent values y.

Then see whether your stochastic/batch gradient descent algorithm can recover w0, w1 and w2.
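That recovery check can be sketched in plain Java (no Colt) as follows. This is only an illustration under my own assumptions: the class and method names, the fixed seed, the true coefficients (2, −1, 0.5), the learning rate, and the iteration count are all my choices, and the data is noiseless so exact recovery is possible.

```java
import java.util.Random;

public class RecoveryTest {

    /** One batch gradient-descent step: theta := theta - (alpha/m) * X^T (X*theta - y). */
    public static double[] step(double alpha, double[] theta, double[][] x, double[] y) {
        int m = x.length, n = theta.length;
        double[] residual = new double[m];
        for (int i = 0; i < m; i++) {
            double h = 0.0;
            for (int j = 0; j < n; j++) h += x[i][j] * theta[j];
            residual[i] = h - y[i];
        }
        double[] next = theta.clone();
        for (int j = 0; j < n; j++) {
            double d = 0.0;
            for (int i = 0; i < m; i++) d += x[i][j] * residual[i];
            next[j] -= alpha / m * d;
        }
        return next;
    }

    public static void main(String[] args) {
        Random rnd = new Random(42);
        int m = 200;
        double w0 = 2.0, w1 = -1.0, w2 = 0.5;   // ground-truth coefficients to recover
        double[][] x = new double[m][];
        double[] y = new double[m];
        for (int i = 0; i < m; i++) {
            double x1 = rnd.nextDouble();       // x1 and x2 drawn independently: no collinearity
            double x2 = rnd.nextDouble();
            x[i] = new double[]{1.0, x1, x2};   // leading 1.0 is the intercept column
            y[i] = w0 + w1 * x1 + w2 * x2;      // noiseless dependent values
        }
        double[] theta = new double[3];
        for (int k = 0; k < 10000; k++) theta = step(0.5, theta, x, y);
        // theta should converge toward (2, -1, 0.5)
        System.out.printf("%.3f %.3f %.3f%n", theta[0], theta[1], theta[2]);
    }
}
```

With collinear columns (e.g. x2 generated as a multiple of x1) the normal equations become ill-conditioned and the recovered coefficients are no longer uniquely determined, which is the failure mode the answer describes.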

Answer 1 (score: 0)

I would like to add that

  // ALPHA*(1/M) in one.
double  modifier    = alpha / (double)independent.rows();

is a bad idea, because it mixes the gradient computation into the gradient descent algorithm itself. It is better to keep the gradient in a method of its own and expose a public gradientDescent algorithm, e.g. in Java:

import org.la4j.Matrix;
import org.la4j.Vector;

public Vector gradientDescent(Matrix x, Matrix y, int kmax, double alpha)
{
    int k=1;
    Vector  thetas = Vector.fromArray(new double[] { 0.0, 0.0});
    while (k<kmax)
    {
        thetas = thetas.subtract(gradient(x, y, thetas).multiply(alpha));
        k++;
    }
    return thetas;
}
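The snippet above calls a `gradient` method it never shows. Under the usual least-squares cost J(θ) = (1/2m)‖Xθ − y‖², that gradient is (1/m)·Xᵀ(Xθ − y). Here is a dependency-free sketch of such a method: the la4j `Matrix`/`Vector` types are swapped for plain `double[]` arrays to keep it self-contained, and the class name is my own.

```java
public class LinearGradient {

    /** Gradient of the mean-squared-error cost: (1/m) * X^T (X*theta - y). */
    public static double[] gradient(double[][] x, double[] y, double[] theta) {
        int m = x.length, n = theta.length;

        // residual_i = x_i . theta - y_i
        double[] residual = new double[m];
        for (int i = 0; i < m; i++) {
            double h = 0.0;
            for (int j = 0; j < n; j++) h += x[i][j] * theta[j];
            residual[i] = h - y[i];
        }

        // grad_j = (1/m) * sum_i x_ij * residual_i
        double[] grad = new double[n];
        for (int j = 0; j < n; j++) {
            double s = 0.0;
            for (int i = 0; i < m; i++) s += x[i][j] * residual[i];
            grad[j] = s / m;
        }
        return grad;
    }

    public static void main(String[] args) {
        double[][] x = {{1, 1}, {1, 2}};
        double[] y = {1, 2};
        double[] g = gradient(x, y, new double[]{0.0, 0.0});
        System.out.println(g[0] + " " + g[1]);  // prints "-1.5 -2.5"
    }
}
```

With the gradient factored out like this, the descent loop stays a one-liner per iteration (`theta = theta - alpha * gradient(...)`), which is exactly the separation the answer argues for.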