Stochastic Gradient Descent in Python

Time: 2019-04-09 14:20:33

Tags: python numpy machine-learning gradient-descent

I'm trying to implement stochastic gradient descent from scratch in Python in order to fit a particular polynomial function. I think my overall structure is right, but my weights (thetas) are clearly not being updated correctly. Here is my code:

from matplotlib import pyplot as plt
import math
import numpy as np

def epsilon():
    '''Adds noise to the data points'''
    return np.random.normal(0, 0.3, 100)

def yFunction(x):
    '''Function to predict'''
    return np.sin(2 * math.pi * x) + epsilon()

def predict(x, thetas):
    '''Predict value of x with the given thetas'''
    prediction = 0
    for i in range(thetas.size):
        prediction += (x ** i) * thetas[i]
    return prediction

# learning rate
alpha = 0.1

# generate random data points
X = np.random.random_sample(100)
y = yFunction(X)

# init weights
thetas = np.random.normal(0, 0.5, 3)

# init loss history
lossHistory = []

for epoch in range(1000):
    # predict
    prediction = predict(X[epoch % 100], thetas)

    # calculate loss
    error = prediction - y[epoch % 100]
    loss = np.sum(error ** 2)

    # update thetas
    if error <= 0:
        thetas += alpha * loss
    else:
        thetas -= alpha * loss

    # log current loss
    lossHistory.append(loss)

# final predictions based on trained model
Y = predict(X, thetas)

# plot the original data along with our line of best fit
fig = plt.figure()
plt.scatter(X, y)
plt.plot(X, Y, "r-")
plt.suptitle("Prediction line over actual values")

# construct a figure that plots the loss over time
fig = plt.figure()
plt.plot(np.arange(0, len(lossHistory)), lossHistory)
fig.suptitle("Training Loss")
plt.show()

These are the resulting plots:

[Image: Python script results, showing the prediction line over the actual values and the training loss]

I think I have to update each weight independently, rather than adding/subtracting the same amount to every theta each epoch, but I'm not sure how to distribute the loss appropriately.
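
Based on the derivative of the squared error (prediction - y)^2 with respect to each theta_i, which I believe is 2 * (prediction - y) * x^i, my current guess is that the update step should look something like the sketch below (reusing alpha, thetas, X, y, and predict from the code above), but I haven't verified it:

# sketch of the per-weight update I have in mind
x = X[epoch % 100]
error = predict(x, thetas) - y[epoch % 100]
# each theta_i moves along its own gradient component,
# 2 * error * x**i, instead of all sharing one loss value
for i in range(thetas.size):
    thetas[i] -= alpha * 2 * error * (x ** i)

Is this the right way to distribute the error across the weights?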

0 Answers:

No answers yet.