Training a simple machine learning regression model gives NaN

Date: 2018-02-02 09:47:33

Tags: python numpy machine-learning linear-regression

I'm trying to get started with machine learning.

I wrote a simple example:

import numpy as np

# Prepare the data
input = np.array(list(range(100)))
output = np.array([x**2 + 2 for x in list(range(100))])

# Visualize Data
import matplotlib.pyplot as plt
plt.plot(input, output, 'ro')
plt.show()

# Define your Model
a = 1
b = 1

# y = ax + b # we put a bias in the model based on our knowledge

# Train your model == optimize the parameters so that the loss becomes very small
for e in range(10):
    for x, y in zip(input, output):
        y_hat = a*x + b
        loss =  0.5*(y_hat-y)**2

        # Now that we have loss, we want gradient of the parameters a and b
        # derivative of loss wrt a = (-x)(y-(ax+b))
        # so gradient descent: a = a - (learning_rate)*(derivative wrt a)

        a = a - 0.1*(-x)*(y_hat-y)
        b = b - 0.1*(-1)*(y_hat-y)
    print("Epoch {0} Training loss = {1}".format(e, loss))


# Make Predictions on new data

test_input = np.array(list(range(101,150))) 
test_output = np.array([x**2.0 + 2 for x in list(range(101,150))])
model_predictions = np.array([a*x + b for x in list(range(101,150))])

plt.plot(test_input, test_output, 'ro')
plt.plot(test_input, model_predictions, '-')
plt.show()

Now when I run the code, I get:

ml_zero.py:22: RuntimeWarning: overflow encountered in double_scalars
  loss =  0.5*(y_hat-y)**2
Epoch 0 Training loss = inf
ml_zero.py:21: RuntimeWarning: overflow encountered in double_scalars
  y_hat = a*x + b
Epoch 1 Training loss = inf
ml_zero.py:21: RuntimeWarning: invalid value encountered in double_scalars
  y_hat = a*x + b
Epoch 2 Training loss = nan
Epoch 3 Training loss = nan
Epoch 4 Training loss = nan
Epoch 5 Training loss = nan
Epoch 6 Training loss = nan
Epoch 7 Training loss = nan
Epoch 8 Training loss = nan
Epoch 9 Training loss = nan

Why is the error NaN? I wrote the simplest possible model, yet with plain Python lists I originally got:

Traceback (most recent call last):
  File "ml_zero.py", line 20, in <module>
    loss = (y_hat-y)**2
OverflowError: (34, 'Result too large')

Then I converted all the Python lists to numpy arrays. Now I get the NaN errors instead, and I just don't understand why these values produce these errors.

Following Daniele's answer and replacing the loss with the mean squared loss, i.e. dividing the loss by the total number of inputs, I get this output:

Epoch 0 Training loss = 1.7942781420994678e+36
Epoch 1 Training loss = 9.232837400842652e+70
Epoch 2 Training loss = 4.751367833814119e+105
Epoch 3 Training loss = 2.4455835946216386e+140
Epoch 4 Training loss = 1.2585275201812707e+175
Epoch 5 Training loss = 6.4767849625200624e+209
Epoch 6 Training loss = 3.331617554363007e+244
Epoch 7 Training loss = 1.714758503849272e+279
ml_zero.py:22: RuntimeWarning: overflow encountered in double_scalars
  loss =  0.5*(y-y_hat)**2
Epoch 8 Training loss = inf
Epoch 9 Training loss = inf

At least it runs now, but I was trying to learn the linear function with stochastic gradient descent, which updates the parameters after the loss at every single point.

I still don't understand how people actually work with these models: the loss is supposed to decrease, so why does it increase under gradient descent?

1 Answer:

Answer 0 (score: 4)

Your math is off. When you compute the gradient update for GD, you have to divide by the number of samples in your dataset: that's why it is called mean squared error and not just squared error. Also, you may want to use smaller inputs: the target you are fitting, y = x**2 + 2, grows quadratically with x, so with x up to 100 the values (and hence the gradients) get huge. Check this post for an intro to LR and GD.
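
For reference, this is the standard derivation behind those updates (plain mean squared error for a linear model, nothing beyond what the answer already uses):

L(a, b) = \frac{1}{N} \sum_{i=1}^{N} \left( y_i - (a x_i + b) \right)^2

\frac{\partial L}{\partial a} = -\frac{2}{N} \sum_{i=1}^{N} x_i (y_i - \hat{y}_i), \qquad
\frac{\partial L}{\partial b} = -\frac{2}{N} \sum_{i=1}^{N} (y_i - \hat{y}_i)

a \leftarrow a - \eta \, \frac{\partial L}{\partial a}, \qquad
b \leftarrow b - \eta \, \frac{\partial L}{\partial b}

where \eta is the learning rate; the 2/N factor is exactly what the update lines in the code below carry.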

I took the liberty of rewriting your code; this should work:

import numpy as np
import matplotlib.pyplot as plt

# Prepare the data
input_ = np.linspace(0, 10, 100)  # Don't assign user data to Python's input builtin
output = np.array([x**2 + 2 for x in input_])

# Define model
a = 1
b = 1

# Train model
N = input_.shape[0]  # Number of samples
for e in range(10):
    loss = 0.
    for x, y in zip(input_, output):
        y_hat = a * x + b                            # current prediction
        a = a - 0.1 * (2. / N) * (-x) * (y - y_hat)  # dMSE/da = -(2/N) * x * (y - y_hat)
        b = b - 0.1 * (2. / N) * (-1) * (y - y_hat)  # dMSE/db = -(2/N) * (y - y_hat)
        loss += 0.5 * ((y - y_hat) ** 2)             # accumulate squared error
    loss /= N                                        # mean loss over the epoch

    print("Epoch {:2d}\tLoss: {:4f}".format(e, loss))


# Predict on test data
test_input = np.linspace(0, 15, 150) # Training data [0-10] + test data [10 - 15]
test_output = np.array([x**2.0 + 2 for x in test_input])
model_predictions = np.array([a*x + b for x in test_input])

plt.plot(test_input, test_output, 'ro')
plt.plot(test_input, model_predictions, '-')
plt.show()

This should give you output like this:

Epoch  0    Loss: 33.117127
Epoch  1    Loss: 42.949756
Epoch  2    Loss: 40.733332
Epoch  3    Loss: 38.657764
Epoch  4    Loss: 36.774646
Epoch  5    Loss: 35.067299
Epoch  6    Loss: 33.520409
Epoch  7    Loss: 32.119958
Epoch  8    Loss: 30.853112
Epoch  9    Loss: 29.708126
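
As a side note (a sketch of mine, not part of the original answer): the same gradient can be computed over all samples at once with vectorized numpy, instead of updating inside the per-sample loop. A true full-batch step is larger than the per-sample steps above, so a smaller learning rate is needed to keep it stable, and the printed losses will differ from those of the loop:

import numpy as np

# Same data as above
input_ = np.linspace(0, 10, 100)
output = input_ ** 2 + 2

a, b = 1.0, 1.0
lr = 0.01                      # 0.1 overshoots for a full-batch step on this data
N = input_.shape[0]

for e in range(10):
    y_hat = a * input_ + b                          # predictions for all samples
    error = output - y_hat                          # residuals y - y_hat
    a -= lr * (2.0 / N) * np.sum(-input_ * error)   # batch dMSE/da
    b -= lr * (2.0 / N) * np.sum(-error)            # batch dMSE/db
    loss = np.mean(0.5 * error ** 2)
    print("Epoch {:2d}\tLoss: {:4f}".format(e, loss))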

The output plot shows the test data as red dots and the model predictions as a line.


Cheers

EDIT: The OP asked about SGD. The answer above is still valid code, but it implements standard GD (iterating over the whole dataset at once).
For SGD, the main loop has to be changed slightly:

for e in range(10):
    for x, y in zip(input_, output):
        y_hat = a * x + b
        loss = 0.5 * ((y - y_hat) ** 2)           # loss of the current sample only
        a = a - 0.01 * (2.) * (-x) * (y - y_hat)  # parameters move after every sample
        b = b - 0.01 * (2.) * (-1) * (y - y_hat)

    print("Epoch {:2d}\tLoss: {:4f}".format(e, loss))

Note that I had to lower the learning rate to avoid divergence. When you train with a batch size of 1, avoiding this kind of gradient explosion becomes really important, because a single sample can throw your descent far away from the optimum.
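
To make that concrete (a back-of-the-envelope illustration of mine, not from the original answer): for the squared loss 0.5 * (y_hat - y)**2, one correctly-signed SGD step on a single sample x turns the error e = y_hat - y into e * (1 - lr * (x**2 + 1)). Whenever that multiplier exceeds 1 in magnitude, the error grows on every step:

# Hypothetical check of the per-step error multiplier, assuming the updates
# a -= lr * x * e and b -= lr * e with e = y_hat - y (correct gradient sign).
lr = 0.1
for x in (1.0, 10.0, 99.0):
    factor = 1 - lr * (x**2 + 1)
    print("x = {:5.1f} -> error multiplier per step: {:8.1f}".format(x, factor))
# x =  1.0 -> multiplier    0.8 (error shrinks)
# x = 99.0 -> multiplier ~ -979 (error grows ~1000x per step: overflow, then nan)

With the question's original inputs (x up to 99) and lr = 0.1, this matches the overflow-then-nan trace above. The question's update also flips the sign of (y_hat - y) relative to its own derivative comment, which makes the step ascend rather than descend, but at that learning rate the step magnitude alone already guarantees divergence.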

Sample output:

Epoch  0    Loss: 0.130379
Epoch  1    Loss: 0.123007
Epoch  2    Loss: 0.117352
Epoch  3    Loss: 0.112991
Epoch  4    Loss: 0.109615
Epoch  5    Loss: 0.106992
Epoch  6    Loss: 0.104948
Epoch  7    Loss: 0.103353
Epoch  8    Loss: 0.102105
Epoch  9    Loss: 0.101127