How do I get the right change of the slope for linear regression?

Date: 2019-05-10 12:50:13

Tags: java processing linear-regression gradient-descent

I want to program linear regression in Processing, but I am confused about which parameters I have to multiply and then add to or subtract from the slope.

I have tried changing the parameters (making them negative, changing the learning rate). The intercept b works fine, but I have some trouble getting the slope right.

//Data
float[] P1 = {100,100};
float[] P2 = {200,300};
float[] P3 = {300,250};

float[][] allData = {P1,P2,P3};

//random start values
float w1 = random(0,3);
float b = random(-100,100);

float learningRate = 0.01;
int i = 0;

void setup(){
    size(1000,1000);
}

void draw(){
    background(255);
    axes();

    //Draw Points
    for(int j=0; j<allData.length; j+=1){
        float[] point = allData[j];
        advancedPoint(point[0], point[1], color(181, 16, 32), 10);
    }

    //Gradient descent, that's the confusing part...
    if(i<10000){
        i += 1;
        float dcost_dreg = 0;
        float dcost_dtar = 0;
        for(int j=0;j<allData.length;j+=1){
            float[] point = allData[j];
            float yTarget = point[1];
            float yRegression = w1*point[0] + b;
            dcost_dreg += -2*(yRegression-yTarget);  //I don't understand these lines
            dcost_dtar += -2*(yRegression-yTarget)*point[0];
        }
        w1 += learningRate * (dcost_dtar/allData.length);
        b  += learningRate * (dcost_dreg/allData.length); //until here
    }

    //Draw Regression
    linearPoints(w1, b);
}

void linearPoints (float w1, float b){
    float y;
    for(float x=-width; x<width; x=x+0.25){
        y = w1*x + b;
        strokeWeight(3);
        stroke(100,100);
        point(x+width/2, -y + height/2);
    }
}

void axes(){
    for(float a=0; a<height; a=a+0.25){
        strokeWeight(1);
        stroke(255,100,0);
        point(width/2,a);
    }
    for(float b=0; b<width; b=b+0.25){
        stroke(255,100,0);
        point(b,height/2);
    } 
}

void advancedPoint(float x,float y, color c, int size){
    strokeWeight(size);
    stroke(c);
    point(x+width/2,-y+height/2);
}

In theory, the program should fit a regression line through my data.

1 Answer:

Answer 0 (score: 0):

Linear regression is based on a line equation of the form

y = w1 * x + b
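
where w1 is the slope of the line and b is its y-intercept.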

The terms

dcost_dreg += -2*(yRegression-yTarget); 
dcost_dtar += -2*(yRegression-yTarget)*point[0];

are supposed to compute the error of the line equation relative to the sample points, but your calculation is wrong.

The constant error (the b error) is the difference between the sample's y coordinate and the y coordinate computed by the line equation at the sample's x coordinate.
The linear error (the w1 error) is calculated from the difference of the gradients. A gradient is the quotient of height and width (y / x), not their product.
This means the calculation has to be:

dcost_dreg += (yTarget-yRegression);
dcost_dtar += (yTarget-yRegression)/point[0];
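
For example, with hypothetical start values w1 = 1 and b = 0 (chosen purely for illustration), one pass over the three sample points accumulates:

// P1 = (100,100): yRegression = 100, dcost_dreg +=   0, dcost_dtar +=   0/100 =  0
// P2 = (200,300): yRegression = 200, dcost_dreg += 100, dcost_dtar += 100/200 =  0.5
// P3 = (300,250): yRegression = 300, dcost_dreg += -50, dcost_dtar += -50/300 ≈ -0.167

so after the loop dcost_dreg = 50 and dcost_dtar ≈ 0.333.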

The expressions

w1 += learningRate * (dcost_dtar/allData.length);
b  += learningRate * (dcost_dreg/allData.length);

average the error over the samples and, scaled by the learning rate, apply the correction to the line equation.
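
Continuing the hypothetical numbers from above, one frame would apply w1 += 0.01 * (0.333 / 3) ≈ 0.0011 and b += 0.01 * (50 / 3) ≈ 0.17, nudging the line slightly toward the sample points on every call of draw().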

Changing the function draw as follows fixes the issue:

void draw(){
    background(255);
    axes();

    //Draw Points
    for(int j=0;j<allData.length;j+=1){
        float[] point = allData[j];
        advancedPoint(point[0],point[1],color(181, 16, 32),10);
    }

    //Gradient descent, that's the confusing part...
    if(i<10000){
        i += 1;
        float dcost_dreg = 0;
        float dcost_dtar = 0;
        for(int j=0;j<allData.length;j+=1){
            float[] point = allData[j];
            float yTarget = point[1];
            float yRegression = w1*point[0] + b;
            dcost_dreg += (yTarget-yRegression);
            dcost_dtar += (yTarget-yRegression)/point[0];
        }
        w1 += learningRate * (dcost_dtar/allData.length); 
        b  += learningRate * (dcost_dreg/allData.length);
    }

    //Draw Regression
    linearPoints(w1, b);
}
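
As a quick sanity check, you could print the current coefficients inside the if block, for example every 500 frames (a hypothetical debugging aid, not required for the fix):

if (i % 500 == 0) {
    println("i = " + i + "  w1 = " + w1 + "  b = " + b);
}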

By the way, I recommend using line() to draw the axes and the current regression line:

void linearPoints (float w1, float b){
    strokeWeight(3);
    stroke(100, 100, 255);
    // evaluate the line at the left and right edge and connect the two endpoints
    float x0 = -width;
    float x1 = width;
    float y0 = x0 * w1 + b;
    float y1 = x1 * w1 + b;
    line(x0+width/2, -y0+height/2, x1+width/2, -y1+height/2);
}
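
Drawing the regression line from its two endpoints with line() is far cheaper than plotting thousands of individual points per frame, and the stroke has no gaps.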

void axes(){
    strokeWeight(1);
    stroke(255,100,0);
    // horizontal and vertical axis through the center of the window
    line(0, height/2, width, height/2);
    line(width/2, 0, width/2, height);
}