Gradient descent with multiple variables

Date: 2014-06-25 14:24:39

Tags: python machine-learning linear-regression gradient-descent

I am learning gradient descent for calculating coefficients. Below is what I am doing:

    #!/usr/bin/python

    import numpy as np


    # m denotes the number of examples here, not the number of features
    def gradientDescent(x, y, theta, alpha, m, numIterations):
        xTrans = x.transpose()
        for i in range(0, numIterations):
            hypothesis = np.dot(x, theta)
            loss = hypothesis - y
            # avg cost per example (the 2 in 2*m doesn't really matter here.
            # But to be consistent with the gradient, I include it)
            cost = np.sum(loss ** 2) / (2 * m)
            #print("Iteration %d | Cost: %f" % (i, cost))
            # avg gradient per example
            gradient = np.dot(xTrans, loss) / m
            # update
            theta = theta - alpha * gradient
        return theta


    X = np.array([41.9,43.4,43.9,44.5,47.3,47.5,47.9,50.2,52.8,53.2,56.7,57.0,63.5,65.3,71.1,77.0,77.8])
    y = np.array([251.3,251.3,248.3,267.5,273.0,276.5,270.3,274.9,285.0,290.0,297.0,302.5,304.5,309.3,321.7,330.7,349.0])
    n = np.max(X.shape)
    x = np.vstack([np.ones(n), X]).T
    m, n = np.shape(x)
    numIterations = 100000
    alpha = 0.0005
    theta = np.ones(n)
    theta = gradientDescent(x, y, theta, alpha, m, numIterations)
    print(theta)

Now, my code above works fine. If I now try multiple variables and replace X with X1, like this:

    X1 = np.array([[41.9,43.4,43.9,44.5,47.3,47.5,47.9,50.2,52.8,53.2,56.7,57.0,63.5,65.3,71.1,77.0,77.8],
                   [29.1,29.3,29.5,29.7,29.9,30.3,30.5,30.7,30.8,30.9,31.5,31.7,31.9,32.0,32.1,32.5,32.9]])

then my code fails and shows me the following error:

    JustTestingSGD.py:14: RuntimeWarning: overflow encountered in square
      cost = np.sum(loss ** 2) / (2 * m)
    JustTestingSGD.py:19: RuntimeWarning: invalid value encountered in subtract
      theta = theta - alpha * gradient
    [ nan  nan  nan]

Can somebody tell me how to do gradient descent using X1? My expected output using X1 is:

    [-153.5 1.24 12.08]

I am also open to other Python implementations. I just want the coefficients (also called thetas) for X1.

1 Answer:

Answer 0 (score: 2)

The problem is that your algorithm does not converge; it diverges. The first warning:

    JustTestingSGD.py:14: RuntimeWarning: overflow encountered in square
      cost = np.sum(loss ** 2) / (2 * m)

comes from the point where the square can no longer be computed, because a 64-bit float cannot hold the number (i.e. it has grown past the float64 maximum of about 1.8 × 10^308) and overflows to inf.

The second warning,

    JustTestingSGD.py:19: RuntimeWarning: invalid value encountered in subtract
      theta = theta - alpha * gradient

is just a consequence of the first error: once inf appears, the arithmetic produces nan, and those values are useless for any further computation.
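The cascade from overflow to nan can be reproduced in a couple of lines (a minimal illustration of the mechanism, not part of the original script):

```python
import numpy as np

# Squaring a large 64-bit float overflows, just like loss ** 2 does
# once theta blows up; the result is inf, with the same RuntimeWarning.
big = np.float64(1e200)
squared = big ** 2        # 1e400 exceeds the float64 maximum -> inf
print(squared)            # inf

# Arithmetic that mixes inf values is undefined and yields nan,
# which is what ends up filling theta.
print(squared - squared)  # nan
```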

You can actually see the divergence by uncommenting the debug print line: the cost keeps growing, because there is no convergence.

If you try your function with X1 and a smaller value of alpha, it converges.
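As a sketch of that suggestion (the value alpha = 1e-4 below is my own choice, not from the original answer), rerunning the same update rule on the stacked X1 design matrix now stays finite; the call to np.linalg.lstsq is added only as a cross-check of the true least-squares coefficients:

```python
import numpy as np

def gradientDescent(x, y, theta, alpha, m, numIterations):
    # Same update rule as in the question, debug prints removed.
    xTrans = x.transpose()
    for i in range(numIterations):
        loss = np.dot(x, theta) - y
        gradient = np.dot(xTrans, loss) / m
        theta = theta - alpha * gradient
    return theta

X1 = np.array([[41.9,43.4,43.9,44.5,47.3,47.5,47.9,50.2,52.8,53.2,56.7,57.0,63.5,65.3,71.1,77.0,77.8],
               [29.1,29.3,29.5,29.7,29.9,30.3,30.5,30.7,30.8,30.9,31.5,31.7,31.9,32.0,32.1,32.5,32.9]])
y = np.array([251.3,251.3,248.3,267.5,273.0,276.5,270.3,274.9,285.0,290.0,297.0,302.5,304.5,309.3,321.7,330.7,349.0])

n = X1.shape[1]
x = np.vstack([np.ones(n), X1]).T   # shape (17, 3): intercept column + two features
m = x.shape[0]

# alpha = 0.0005 diverges on this data; a smaller step stays stable.
theta = gradientDescent(x, y, np.ones(3), alpha=1e-4, m=m, numIterations=100000)
print(theta)                        # finite values now, no nan

# Cross-check: the closed-form least-squares solution this should approach
# (convergence is slow here, since the design matrix is ill-conditioned).
theta_ls, *_ = np.linalg.lstsq(x, y, rcond=None)
print(theta_ls)
```

A complementary fix, not shown here, is to rescale the features to comparable magnitudes, which lets you keep a larger alpha.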