Why does simple gradient descent diverge?

Date: 2016-12-28 01:24:20

Tags: python regression gradient

This is my second attempt at implementing gradient descent in one variable, and it always diverges. Any ideas?

This is simple linear regression, minimizing the residual sum of squares in one variable.
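For reference, the quantity being minimized and its gradient are shown below; the code that follows steps opposite the gradient, absorbing the constant factor of 2 into the step size:

$$\mathrm{RSS}(m,b) = \sum_{i=1}^{n} \big(y_i - (m x_i + b)\big)^2, \qquad \frac{\partial \mathrm{RSS}}{\partial b} = -2\sum_{i=1}^{n} r_i, \qquad \frac{\partial \mathrm{RSS}}{\partial m} = -2\sum_{i=1}^{n} r_i x_i,$$

where $r_i = y_i - (m x_i + b)$ is the i-th residual.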

def gradient_descent_wtf(xvalues, yvalues):
    tolerance = 0.1

    #y=mx+b
    #some line to predict y values from x values
    m=1.
    b=1.

    #a predicted y-value has value mx + b

    for i in range(0,10):

        #calculate y-value predictions for all x-values
        predicted_yvalues = list()
        for x in xvalues:
            predicted_yvalues.append(m*x + b)

        # predicted_yvalues holds the predicted y-values

        #now calculate the residuals = y-value - predicted y-value for each point
        residuals = list()
        number_of_points = len(yvalues)
        for n in range(0,number_of_points):
            residuals.append(yvalues[n] - predicted_yvalues[n])

        ## calculate the residual sum of squares from the residuals, that is,
        ## square each residual and add them all up. we will try to minimize
        ## the residual sum of squares later.
        residual_sum_of_squares = 0.
        for r in residuals:
            residual_sum_of_squares += r**2
        print("RSS = %s" % residual_sum_of_squares)
        ##
        ##
        ##

        #now make a version of the residuals which is multiplied by the x-values
        residuals_times_xvalues = list()
        for n in range(0,number_of_points):
            residuals_times_xvalues.append(residuals[n] * xvalues[n])

        #now create the sums for the residuals and for the residuals times the x-values
        residuals_sum = sum(residuals)

        residuals_times_xvalues_sum = sum(residuals_times_xvalues)

        # now multiply the sums by a positive scalar and add each to m and b.

        residuals_sum *= 0.1
        residuals_times_xvalues_sum *= 0.1

        b += residuals_sum
        m += residuals_times_xvalues_sum

        #and repeat until convergence.
        #convergence occurs when ||sum vector|| < some tolerance.
        # ||sum vector|| = sqrt( residuals_sum**2 + residuals_times_xvalues_sum**2 )

        #check for convergence
        magnitude_of_sum_vector = (residuals_sum**2 + residuals_times_xvalues_sum**2)**0.5
        if magnitude_of_sum_vector < tolerance:
            break

    return (b, m)

Results:

gradient_descent_wtf([1,2,3,4,5,6,7,8,9,10],[6,23,8,56,3,24,234,76,59,567])
RSS = 370433.0
RSS = 300170125.7
RSS = 4.86943013045e+11
RSS = 7.90447409339e+14
RSS = 1.28312217794e+18
RSS = 2.08287421094e+21
RSS = 3.38110045417e+24
RSS = 5.48849288217e+27
RSS = 8.90939341376e+30
RSS = 1.44624932026e+34
Out[108]:
(-3.475524066284303e+16, -2.4195981188763203e+17)

2 answers:

Answer 0 (score: 2)

The gradients are huge, so you are following large vectors for long distances (0.1 times a large number is still a large number). Find the unit vector in the appropriate direction instead. Something like this (with comprehensions replacing your loops):

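The code block from this answer was lost to extraction debris; here is a minimal sketch of what the text describes, reusing the question's names (gradient_descent, step, and iterations are illustrative): the loops become comprehensions, and each update moves a fixed distance along the unit vector of the gradient.

def gradient_descent(xvalues, yvalues, step=0.1, tolerance=0.1, iterations=10000):
    # same model as the question: predict y = m*x + b
    m, b = 1., 1.

    for _ in range(iterations):
        residuals = [y - (m * x + b) for x, y in zip(xvalues, yvalues)]
        print("RSS = %s" % sum(r ** 2 for r in residuals))

        # gradient components, up to a constant factor
        residuals_sum = sum(residuals)
        residuals_times_xvalues_sum = sum(r * x for r, x in zip(residuals, xvalues))

        # stop once the raw gradient is small
        magnitude = (residuals_sum ** 2 + residuals_times_xvalues_sum ** 2) ** 0.5
        if magnitude < tolerance:
            break

        # normalize to a unit vector: every update moves a fixed
        # distance `step`, no matter how large the gradient is
        b += step * residuals_sum / magnitude
        m += step * residuals_times_xvalues_sum / magnitude

    return (b, m)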

For example:

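The example snippet that originally followed was also lost; assuming the sketch above, it presumably showed a call on the question's data, along these lines:

gradient_descent([1,2,3,4,5,6,7,8,9,10], [6,23,8,56,3,24,234,76,59,567])

with the printed RSS now decreasing instead of blowing up.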

This is, of course, much more reasonable.

Making a numerically stable gradient descent algorithm is not trivial. You may want to consult a decent textbook on numerical analysis.

Answer 1 (score: 1)

Firstly, your code is right.

But you should consider the math when you do linear regression.

For example, if the residual is -205.8 and your learning rate is 0.1, you get a huge descent step of -20.58.

That is such a big step that you cannot get back to the correct m and b. You have to make your step small enough.

There are two ways to make the gradient descent step reasonable (see the sketch after this list):

  1. Use a smaller learning rate, such as 0.001 or 0.0003.
  2. Divide your step by the total of your input values, i.e., average the gradient over the data points instead of summing it.
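A minimal sketch combining both suggestions, reusing the question's setup (the name gradient_descent_mean and the learning_rate and iterations parameters are illustrative):

def gradient_descent_mean(xvalues, yvalues, learning_rate=0.001, iterations=100000):
    # same model as the question: predict y = m*x + b
    m, b = 1., 1.
    n = len(xvalues)

    for _ in range(iterations):
        residuals = [y - (m * x + b) for x, y in zip(xvalues, yvalues)]

        # mean gradient: dividing by n keeps the step size
        # independent of the number of data points
        b += learning_rate * sum(residuals) / n
        m += learning_rate * sum(r * x for r, x in zip(residuals, xvalues)) / n

    return (b, m)

With the step scaled down this way, the RSS decreases slowly but steadily rather than diverging.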