Fully connected neural network does not predict correctly

Time: 2021-06-05 09:04:41

Tags: python machine-learning deep-learning neural-network pytorch

I'm new to ML and DL, but I decided to give it a try, and I found that my network does not predict correctly.

I have a fully connected neural network with a single dense (linear) layer. When I use SGD as the optimizer it predicts 9.9 instead of 10, but when I use Adam it predicts 10. The expected result is 10, since the training data follow Y = 2X and the test input is 5. I'm confused; can someone explain why this happens?

!pip install -Uqq tqdm

import torch
import torch.nn as nn
import torch.optim as optim
from tqdm import tqdm

My training data as samples:

X = torch.tensor([[1], [2], [3], [4]], dtype=torch.float32)
Y = torch.tensor([[2], [4], [6], [8]], dtype=torch.float32)
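
Note that the targets encode Y = 2 * X, so a perfect fit is weight 2 and bias 0, and the prediction for 5 should be 10. A quick hand-check (a sketch, using the tensors above):

# The targets are exactly twice the inputs.
print(torch.equal(Y, 2 * X))  # True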

My forward pass and the neural network model:

class SimpleNeuralNetwork(nn.Module):
  def __init__(self, num_input, num_output):
    super(SimpleNeuralNetwork, self).__init__()
    self.fc = nn.Linear(num_input, num_output)

  def forward(self, x):
    x = self.fc(x)
    return x

Input features and number of samples:

in_samples, in_features = X.shape

Defining and initializing my loss function:

criterion = nn.MSELoss()
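
For reference, nn.MSELoss with its default reduction computes the mean of the squared errors over the batch; a minimal hand-check with hypothetical values (a sketch, using the imports above):

# Hand-computed MSE matches nn.MSELoss (default reduction='mean').
y_hat = torch.tensor([[2.1], [3.9]])
y_true = torch.tensor([[2.0], [4.0]])
manual = ((y_hat - y_true) ** 2).mean()
print(torch.isclose(manual, nn.MSELoss()(y_hat, y_true)))  # True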

Parameters of the training process:

learning_rate = 0.01
ePoch = 1000

Initializing my model:

sNN = SimpleNeuralNetwork(in_features, in_features)

Initializing my optimizer:

optimiser = optim.SGD(sNN.parameters(), lr=learning_rate)
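
For the Adam run described above, the only change would be this one line (a sketch; the rest of the script stays the same):

# Swapping SGD for Adam - everything else is unchanged.
optimiser = optim.Adam(sNN.parameters(), lr=learning_rate)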

Training my network:

for i in tqdm(range(ePoch)):
  # prediction - forward pass in the model
  y_pred = sNN(X)

  # loss - measure how far the predictions are from the targets
  # (nn.MSELoss takes (input, target) in that order)
  loss = criterion(y_pred, Y)

  # gradient - do a backward propagation (backward pass)
  loss.backward()

  # update weights - take an optimizer step scaled by the learning rate
  optimiser.step()

  # zero gradients - clear accumulated gradients so they do not carry over to the next iteration
  optimiser.zero_grad()

  # if i % 10 == 0:
  #   [w, b] = sNN.parameters()
  #   print(f'epoch: {i + 1}, weight: {w[0][0].item()}, bias: {b[0].item()}, pred: {y_pred}')

The actual prediction:

predict = sNN(torch.tensor([5], dtype=torch.float32))
print(f'prediction for 5: {predict[0].item()}')
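
Extending the commented-out debugging lines in the loop, the learned parameters can be inspected directly; for this data a perfect fit would be weight 2 and bias 0 (a sketch):

# Inspect the learned weight and bias; the ideal fit for Y = 2X
# is weight = 2.0, bias = 0.0.
[w, b] = sNN.parameters()
print(f'weight: {w[0][0].item():.4f}, bias: {b[0].item():.4f}')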
