How do I correctly code linear regression with gradient descent in Python?

Date: 2017-02-20 14:07:23

Tags: python-3.x linear-regression gradient-descent

import pandas as pd
import matplotlib.pyplot as plt

# I'm trying to code the bare-basic functionality of LinearRegression myself
# from sklearn.linear_model import LinearRegression

dataframe = pd.read_fwf('brain_body.txt')      # link given below

x_values = dataframe[['Brain']]
y_values = dataframe[['Body']]

lr = LinearRegression(0.0001, 10)             # sending learning_rate and iterations
lr.fit(x_values, y_values)

# commenting out because the values are insane
# plt.scatter(x_values, y_values)
# plt.plot(x_values, lr.predict(x_values))    # fixed: the object is lr, not clf
# plt.show()

Link to brain_body.txt

Here is the class I wrote:

class LinearRegression:
    def __init__(self, learning_rate, iterations):
        self.b = 0                               # b as in y=mx+b
        self.m = 0                               # m as in y=mx+b
        self.learning_rate = learning_rate
        self.iterations = iterations

    def get_y(self, x):
        return self.m * float(x) + self.b

    def step_gradient(self, x_values, y_values):
        print()
        print("Values before: m =", self.m, " b =", self.b)

        m_gradient = 0
        b_gradient = 0
        N = float(len(x_values))                 # number of rows; .ix is deprecated

        print('%11s' % "d(m)", '%11s' % "m_gradient", '%11s' % "d(b)", '%11s' % "b_gradient")

        for i in range(int(N)):
            x = x_values.iloc[i, 0]
            y = y_values.iloc[i, 0]

            # EDIT: I missed a * -1 here.
            # But fixing that alone doesn't solve everything; adjusting the learning rate does.

            pm = (y - self.get_y(x)) * x             # partial derivative of m
            pb = (y - self.get_y(x)) * -1            # partial derivative of b

            m_gradient += pm * 2 / N
            b_gradient += pb * 2 / N

            print('%11s' % pm, '%11s' % m_gradient, '%11s' % pb, '%11s' % b_gradient)

        self.m -= self.learning_rate * m_gradient     # adjust current m
        self.b -= self.learning_rate * b_gradient     # adjust current b

        print("Values after: m =", self.m, " b =", self.b)
        print()

    def fit(self, x_values, y_values):                # equivalent to train_model
        for i in range(self.iterations):
            self.step_gradient(x_values, y_values)
        return

    def predict(self, x_values):                      # equivalent to get_output
        predictions = []
        for x in x_values.iloc[:, 0]:             # .iloc replaces the deprecated .ix
            predictions.append(self.get_y(x))
        return predictions
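
For reference, here is a minimal, self-contained sketch of one corrected gradient step (plain NumPy; the function name is mine and the * -1 sign fix matches the EDIT comment above, so this is an illustration rather than the original code):

import numpy as np

def gradient_step(m, b, x, y, learning_rate):
    # One gradient-descent step on MSE = (1/N) * sum((y - (m*x + b))**2)
    #   dE/dm = -(2/N) * sum(x * (y - (m*x + b)))
    #   dE/db = -(2/N) * sum(y - (m*x + b))
    n = len(x)
    error = y - (m * x + b)
    m_gradient = (-2.0 / n) * np.sum(x * error)
    b_gradient = (-2.0 / n) * np.sum(error)
    return m - learning_rate * m_gradient, b - learning_rate * b_gradient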

I watched Siraj Raval's How to do Linear Regression the right way and followed along, doing almost exactly what he did. I do understand what partial derivatives and gradient descent are, but I don't know what values the partial derivatives should come out to (or how to guess them). The numbers go insane in the very first iteration:

Values before: m = 0  b = 0
       d(m)  m_gradient        d(b)  b_gradient
   150.6325 4.85911290323       -44.5 -1.43548387097
       7.44 5.09911290323       -15.5 -1.93548387097
     10.935 5.45185483871        -8.1 -2.19677419355
   196695.0 6350.45185484      -423.0 -15.8419354839
   4341.435 6490.49814516      -119.5 -19.6967741935
     3180.9 6593.10782258      -115.0 -23.4064516129
   1456.306 6640.08543548       -98.2 -26.5741935484
       5.72 6640.26995161        -5.5 -26.7516129032
     243.02 6648.10930645       -58.0 -28.6225806452
       2.72 6648.19704839        -6.4 -28.8290322581
      0.404 6648.21008065        -4.0 -28.9580645161
      5.244 6648.37924194        -5.7 -29.1419354839
        6.6 6648.59214516        -6.6 -29.3548387097
     0.0007 6648.59216774       -0.14 -29.3593548387
       0.06 6648.59410323        -1.0 -29.3916129032
       37.8 6649.81345806       -10.8      -29.74
       24.6 6650.60700645       -12.3 -30.1367741935
      10.71 6650.95249032        -6.3      -30.34
 11723841.0 384839.371845     -4603.0 -178.823870968
     0.0069 384839.372068        -0.3 -178.833548387
    78394.9 387368.23981      -419.0 -192.349677419
   341255.0 398376.465616      -655.0 -213.478709677
     2.7475 398376.554245        -3.5 -213.591612903
     1150.0 398413.651019      -115.0 -217.301290323
      84.48 398416.376181       -25.6 -218.127096774
        1.0 398416.408439        -5.0 -218.288387097
     24.675 398417.204406       -17.5 -218.852903226
   359720.0 410021.075374      -680.0 -240.788387097
    84042.0 412732.107632      -406.0 -253.88516129
    27625.0 413623.236665      -325.0 -264.369032258
      9.225 413623.534245       -12.3 -264.765806452
    81840.0 416263.534245     -1320.0 -307.346451613
 38007648.0 1642316.69554     -5712.0 -491.604516129
      13.65 1642317.13586        -3.9 -491.730322581
     1217.2 1642356.40037      -179.0 -497.504516129
     1960.0 1642419.62618       -56.0 -499.310967742
      68.85 1642421.84715       -17.0 -499.859354839
       0.12 1642421.85102        -1.0 -499.891612903
     0.0092 1642421.85132        -0.4 -499.904516129
     0.0025 1642421.8514       -0.25 -499.912580645
       17.5 1642422.41591       -12.5 -500.315806452
   122500.0 1646374.02882      -490.0 -516.122258065
      30.25 1646375.00462       -12.1 -516.512580645
     9712.5 1646688.31107      -175.0 -522.157741935
    15700.0 1647194.76269      -157.0 -527.222258065
    22950.4 1647935.09817      -440.0 -541.415806452
   1893.725 1647996.18607      -179.5 -547.206129032
       1.32 1647996.22865        -2.4 -547.283548387
     4860.0 1648153.00285       -81.0 -549.896451613
       75.6 1648155.44156       -21.0 -550.573870968
   168.0896 1648160.8638       -39.2 -551.838387097
      0.532 1648160.88096        -1.9 -551.899677419
       0.09 1648160.88387        -1.2 -551.938387097
      0.366 1648160.89567        -3.0 -552.03516129
    0.01584 1648160.89619       -0.33 -552.045806452
    34560.0 1649275.73489      -180.0 -557.852258065
       75.0 1649278.15425       -25.0 -558.658709677
    27040.0 1650150.41231      -169.0 -564.110322581
       2.34 1650150.4878        -2.6 -564.194193548
     18.468 1650151.08354       -11.4 -564.561935484
       0.26 1650151.09193        -2.5 -564.642580645
    213.444 1650157.97722       -50.4 -566.268387097
Values after: m = -165.015797722  b = 0.0566268387097

Values after 10 iterations: m = -1.76899770934e+22  b = 4.21166966984e+18
  

How do I do this correctly from scratch?

1 answer:

Answer 0 (score: 1)

This may not be a real answer, since it uses R (I could probably work it out in Python, but that would take longer). I think your problem is simply the learning_rate. I'm taking this machine learning class at the moment, so I'm familiar with what you're doing and tried to implement it myself. Here is my code:

library(ggplot2)

## create test data
data <- data.frame(x = 1:10, y = 1:10)    
n <- nrow(data)

## initialize values
m <- 0
b <- 0
alpha <- 0.1     # deliberately too large; see the diverging output below
iters <- 100
results <- data.frame(i = 1:iters,
                      pm = 1:iters,
                      pb = 1:iters,
                      m = 1:iters,
                      b = 1:iters)

for (i in 1:iters) {

  y_hat <- (m * data$x) + b
  pm <- (1/n) * sum((y_hat - data$y) * data$x)
  pb <- (1/n) * sum(y_hat - data$y)
  m <- m - (alpha * pm)
  b <- b - (alpha * pb)

  ##  uncomment if you want; shows "animated" change
  ##  p <- ggplot(data, aes(x = x, y = y)) + geom_point()
  ##  p <- p + geom_abline(intercept = b, slope = m)
  ##  print(p)

  ## this turned out to be key for looking at output
  results[i, 2:5] <- c(pm, pb, m, b)

}

Now, note the huge numbers at the end of results with alpha = 0.1:

> tail(results)
      i            pm            pb             m             b
95   95 -2.864612e+45 -4.114745e+44  2.135518e+44  3.067470e+43
96   96  8.390457e+45  1.205210e+45 -6.254938e+44 -8.984628e+43
97   97 -2.457567e+46 -3.530062e+45  1.832073e+45  2.631600e+44
98   98  7.198218e+46  1.033956e+46 -5.366146e+45 -7.707961e+44
99   99 -2.108360e+47 -3.028460e+46  1.571745e+46  2.257664e+45
100 100  6.175391e+47  8.870365e+46 -4.603646e+46 -6.612702e+45

See how m and b flip-flop back and forth? The learning rate alpha is so high that each alpha * derivative step jumps right over the minimum! In the linked class this is shown in the gradient descent videos, but the concept is the same as in this image I found:

(image: example of divergence)
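
To see the same overshooting in the question's language, here is a small Python sketch (my own illustration, not part of the original answer) on the same toy data, y = x for x = 1..10:

import numpy as np

x = np.arange(1, 11, dtype=float)   # toy data: y = x, matching the R test set
y = x.copy()

def run(alpha, iters=100):
    # Same update rule as the R loop above
    m = b = 0.0
    for _ in range(iters):
        error = (m * x + b) - y
        m -= alpha * np.mean(error * x)
        b -= alpha * np.mean(error)
    return m, b

print(run(0.1))    # diverges: each step overshoots the minimum and grows
print(run(0.01))   # converges toward m = 1, b = 0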

For comparison, look at results with alpha = 0.01:
> tail(results)
      i           pm         pb         m         b
95   95 -0.003483741 0.02425319 0.9834438 0.1152615
96   96 -0.003476426 0.02420226 0.9834785 0.1150195
97   97 -0.003469127 0.02415144 0.9835132 0.1147780
98   98 -0.003461842 0.02410073 0.9835478 0.1145370
99   99 -0.003454573 0.02405012 0.9835824 0.1142965
100 100 -0.003447319 0.02399962 0.9836169 0.1140565

It's slow going, but we're homing in on m = 1 and b = 0 as expected. With your real data I ran into a similar problem. The main body of the code is the same; this replaces the data <- data.frame() line at the top:

data <- read.table(file = "https://raw.githubusercontent.com/llSourcell/linear_regression_demo/master/brain_body.txt",
                   header = T, sep = "", stringsAsFactors = F)
names(data) <- c("y", "x")

Everything else is the same except that I played with alpha and iters. Here is what I found!

## your learning rate; diverging/flip-flopping
## alpha <- 0.0001
> tail(results)
      i             pm             pb              m              b
95   95 -3.842565e+190 -1.167811e+187  3.801319e+186  1.155276e+183
96   96  3.541406e+192  1.076285e+189 -3.503393e+188 -1.064732e+185
97   97 -3.263851e+194 -9.919315e+190  3.228817e+190  9.812842e+186
98   98  3.008048e+196  9.141894e+192 -2.975760e+192 -9.043766e+188
99   99 -2.772294e+198 -8.425404e+194  2.742537e+194  8.334966e+190
100 100  2.555018e+200  7.765068e+196 -2.527592e+196 -7.681718e+192

## 1/10 as big; still diverging!
## alpha <- 0.00001
> tail(results)
      i            pm            pb             m             b
95   95 -2.453089e+92 -7.455293e+88  2.189776e+87  6.655047e+83
96   96  2.040052e+93  6.200012e+89 -1.821074e+88 -5.534508e+84
97   97 -1.696559e+94 -5.156089e+90  1.514452e+89  4.602638e+85
98   98  1.410902e+95  4.287936e+91 -1.259457e+90 -3.827672e+86
99   99 -1.173342e+96 -3.565957e+92  1.047397e+91  3.183190e+87
100 100  9.757815e+96  2.965541e+93 -8.710418e+91 -2.647222e+88

## even smaller; that's better!
## alpha <- 0.000001
> tail(results)
      i          pm       pb         m            b
95   95 -0.01579109 51.95899 0.8856351 -0.004667159
96   96 -0.01579107 51.95894 0.8856352 -0.004719118
97   97 -0.01579106 51.95889 0.8856352 -0.004771077
98   98 -0.01579104 51.95885 0.8856352 -0.004823036
99   99 -0.01579103 51.95880 0.8856352 -0.004874995
100 100 -0.01579102 51.95875 0.8856352 -0.004926953

Using that final result, I plotted a fit that looks reasonable:

p <- ggplot(data, aes(x = x, y = y)) + geom_point()
p <- p + geom_abline(intercept = b, slope = m)
print(p)

(image: data and fit line plot)

So, to wrap up:

  • I did not verify/check your Python code
  • I did implement my own understanding of gradient descent in R and used test data to verify its behavior
  • I then re-ran it with your actual data and found that it appears to work correctly
  • My suggestion, therefore, is to re-try your approach with simplified data (which it sounds like you already have), then watch the initial steps with a very small learning rate to see if that fixes it, as in the sketch after this list. If not, there may still be a problem in your math.
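
A minimal sketch of that suggestion in Python (this assumes the LinearRegression class from the question with the * -1 sign fix applied to pm; the alpha and iteration values are just starting points to experiment with, not tuned answers):

import pandas as pd

dataframe = pd.read_fwf('brain_body.txt')
x_values = dataframe[['Brain']]
y_values = dataframe[['Body']]

lr = LinearRegression(0.000001, 100)   # much smaller learning rate, more iterations
lr.fit(x_values, y_values)
print("m =", lr.m, " b =", lr.b)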

Hope that helps!