Gradient descent converges towards the wrong values

Asked: 2016-02-07 04:57:20

Tags: c++ linear-regression gradient-descent convergence

I am trying to implement a gradient descent algorithm in C++. Here is the code I have so far:

#include <iostream>

double X[] {163,169,158,158,161,172,156,161,154,145};
double Y[] {52, 68, 49, 73, 71, 99, 50, 82, 56, 46 };
double m, p;
int n = sizeof(X)/sizeof(X[0]);

// Forward declarations so the functions defined below main compile
double Loss_function(void);
void gradientStep(double alpha);

int main(void) {
    double alpha = 0.00004; // 0.00007;
    // Initial guess: the line through the first two data points
    m = (Y[1] - Y[0]) / (X[1] - X[0]);
    p = Y[0] - m * X[0];
    for (int i = 1; i <= 8; i++) {
        gradientStep(alpha);
        std::cout << "m=" << m << " p=" << p
                  << " loss=" << Loss_function() << std::endl;
    }
    return 0;
}

double Loss_function(void) {
    double res = 0;
    double tmp;
    for (int i = 0; i < n; i++) {
        tmp =  Y[i] - m * X[i] - p;
        res += tmp * tmp;
    }
    return res / 2.0 / (double)n;
}

void gradientStep(double alpha) {
    double pg = 0, mg = 0;
    for (int i = 0; i < n; i++) {
        pg += Y[i] - m * X[i] - p;
        mg += X[i] * (Y[i] - m * X[i] - p);
    }
    p += alpha * pg / n;
    m += alpha * mg / n;
}

The code converges to m = 2.79822, p = -382.666, with an error of 102.88. But if I use my calculator to work out the correct linear regression model, I find that the correct values of m and p should be 1.601 and -191.1 respectively.

I also noticed that the algorithm does not converge for alpha > 0.00007, which seems rather low, and that the value of p barely changes during the 8 iterations (or even after 2000 iterations).

What is wrong with my code?

Here is a good overview of the algorithm I am trying to implement. The values called theta0 and theta1 there are called p and m in my program.
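Spelled out with that mapping (theta0 = p, theta1 = m), the simultaneous update the code is meant to perform is:

```latex
\theta_0 := \theta_0 - \frac{\alpha}{n}\sum_{i=1}^{n}\bigl(m x_i + p - y_i\bigr)
\qquad
\theta_1 := \theta_1 - \frac{\alpha}{n}\sum_{i=1}^{n}\bigl(m x_i + p - y_i\bigr)\,x_i
```

This matches the code: `pg` accumulates `Y[i] - m*X[i] - p`, i.e. minus the residual, so `p += alpha * pg / n` has the correct sign.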

Other implementation in Python

More about the algorithm

1 Answer:

Answer 0 (score: 0)

This link provides a comprehensive view of the algorithm; it turns out I was taking a completely wrong approach.

The following code does not work properly (and I have no plans to work on it further), but it should put anyone who faces the same problem as me on the right track:

#include <vector>
#include <iostream>

typedef std::vector<double> vect;

std::vector<double> y, omega(2, 0), omega2(2, 0);
std::vector<std::vector<double>> X;
int n = 10;

// Forward declarations so the functions defined below main compile
double f_function(const std::vector<double> &x);
void gradientStep(double alpha);
void display(void);

int main(void) {
    /* Initialize X so that each member contains (1, x_i) */
    /* Initialize y so that each member contains y_i */
    double alpha = 0.00001;
    display();
    for (int i = 1; i <= 8; i++) {
        gradientStep(alpha);
        display();
    }
    return 0;
}

double f_function(const std::vector<double> &x) {
    double c = 0;
    for (unsigned int i = 0; i < omega.size(); i++) {
        c += omega[i] * x[i];
    }
    return c;
}

void gradientStep(double alpha) {
    for (int i = 0; i < n; i++) {
        for (unsigned int j = 0; j < X[0].size(); j++) {
            omega2[j] -= alpha/(double)n * (f_function(X[i]) - y[i]) * X[i][j];
        }
    }
    omega = omega2;
}

void display(void) {
    double res = 0, tmp = 0;
    for (int i = 0; i < n; i++) {
        tmp = y[i] - f_function(X[i]);
        res += tmp * tmp; // Loss function
    }

    std::cout << "omega = ";
    for (unsigned int i = 0; i < omega.size(); i++) {
        std::cout << "[" << omega[i] << "] ";
    }
    std::cout << "\tError : " << res * .5/(double)n << std::endl;
}