Implementing gradient descent in Python

Asked: 2019-07-04 12:01:06

Tags: python numpy machine-learning gradient-descent

I am trying to implement gradient descent in Python, but the result my code returns appears to be completely wrong.

Here is the code I wrote:

import numpy as np
import pandas

dataset = pandas.read_csv(r'D:\ML Data\house-prices-advanced-regression-techniques\train.csv')

X = np.empty((0, 1), int)
Y = np.empty((0, 1), int)

# collect the feature (LotArea) and the target (SalePrice) row by row
for i in range(dataset.shape[0]):
  X = np.append(X, dataset.at[i, 'LotArea'])
  Y = np.append(Y, dataset.at[i, 'SalePrice'])

X = np.c_[np.ones(len(X)), X]  # prepend a bias column of ones
Y = Y.reshape(len(Y), 1)       # column vector of targets

def gradient_descent(X, Y, theta, iterations=100, learningRate=0.000001):
  m = len(X)
  for i in range(iterations):
    prediction = np.dot(X, theta)  # hypothesis: X @ theta
    # batch gradient step on the mean squared error
    theta = theta - (1/m) * learningRate * (X.T.dot(prediction - Y))

  return theta

theta = np.random.randn(2, 1)
theta = gradient_descent(X, Y, theta)
print('theta', theta)

The result I get after running the program is:


theta [[-5.23237458e+228]
 [-1.04560188e+233]]

These values are astronomically large. Can someone point out the mistake in my implementation?

The second problem is that I have to set the learning rate to a very small value (here 0.000001); otherwise the program throws an error.

Please help me diagnose the problem.

1 Answer:

Answer 0 (score: 1)

Try decreasing the learning rate across the iterations; otherwise it will not be able to reach the optimal minimum.

import numpy as np
import pandas

dataset = pandas.read_csv('start.csv')

X = np.empty((0, 1),int)
Y = np.empty((0, 1), int)

for i in range(dataset.shape[0]):
  X = np.append(X, dataset.at[i, 'R&D Spend'])
  Y = np.append(Y, dataset.at[i, 'Profit'])

X = np.c_[np.ones(len(X)), X]
Y = Y.reshape(len(Y), 1)

def gradient_descent(X, Y, theta, iterations=50, learningRate=0.01):
  m = len(X)
  for i in range(iterations):
    prediction = np.dot(X, theta)
    theta = theta - (1/m) * learningRate * (X.T.dot(prediction - Y))
    learningRate /= 10  # decay the learning rate after every iteration

  return theta

theta = np.random.randn(2,1)
theta = gradient_descent(X, Y, theta)
print('theta',theta)
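A side note on the root cause: LotArea is on the order of 10^4 and SalePrice on the order of 10^5, so the gradient X.T.dot(prediction - Y) is enormous, and even a learning rate of 0.000001 can make theta overflow (hence the e+228 values in the question). Also, dividing the learning rate by 10 on every iteration shrinks it to roughly 1e-51 after 50 steps, so later iterations barely move theta. A common alternative is to standardize the feature before running gradient descent. Below is a minimal sketch of that approach; it is not part of the original answer, and the z-score scaling, the learning rate of 0.1, and the iteration count of 1000 are my assumptions. The CSV path and column names are reused from the question.

import numpy as np
import pandas

# Sketch: standardize the feature, then run plain batch gradient descent.
dataset = pandas.read_csv(r'D:\ML Data\house-prices-advanced-regression-techniques\train.csv')

X_raw = dataset['LotArea'].to_numpy(dtype=float)
Y = dataset['SalePrice'].to_numpy(dtype=float).reshape(-1, 1)

# z-score normalization: zero mean, unit variance (assumed fix, not from the answer)
X_scaled = (X_raw - X_raw.mean()) / X_raw.std()
X = np.c_[np.ones(len(X_scaled)), X_scaled]  # bias column + scaled feature

def gradient_descent(X, Y, theta, iterations=1000, learningRate=0.1):
  m = len(X)
  for i in range(iterations):
    prediction = np.dot(X, theta)
    theta = theta - (1/m) * learningRate * (X.T.dot(prediction - Y))
  return theta

theta = np.random.randn(2, 1)
theta = gradient_descent(X, Y, theta)
print('theta', theta)

With the feature on a unit scale, a fixed learning rate around 0.1 is stable: the intercept converges toward the mean sale price and the slope toward the price change per standard deviation of lot area, instead of overflowing.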