PyTorch model doesn't seem to be optimizing

Date: 2021-05-18 10:18:32

Tags: python pytorch

I have the following code, which runs without errors, but the model's predictions always stay close to 0.5 and never move far from there. I know I'm definitely running enough epochs, so what's going wrong?

import torch
import torch.nn as nn
import torch.nn.functional as F
import matplotlib.pyplot as plt
import sys

# Pick the device (GPU if available)
devicet = 'cuda' if torch.cuda.is_available() else 'cpu'
device = torch.device(devicet)
if devicet == 'cpu':
    print('Using CPU')
else:
    print('Using GPU')

class Net(nn.Module):
    def __init__(self):
        super(Net, self).__init__()
        # Stack of fully connected layers: 5 -> 25 -> 50 -> 100 -> 100 -> 10 -> 1
        self.step1 = nn.Linear(5, 25)
        self.step2 = nn.Linear(25, 50)
        self.step3 = nn.Linear(50, 100)
        self.step4 = nn.Linear(100, 100)
        self.step5 = nn.Linear(100, 10)
        self.step6 = nn.Linear(10, 1)

    def forward(self, x):
        # ReLU is applied only to the input; the linear layers below
        # are chained with no activation between them
        x = F.relu(x)
        x = self.step1(x)
        x = self.step2(x)
        x = self.step3(x)
        x = self.step4(x)
        x = self.step5(x)
        x = self.step6(x)
        return x

net = Net()
# Random training data: 50 samples with 5 features, 50 scalar targets
x = torch.rand(50, 5)
y = torch.rand(50, 1)
# Note: Tensor.to() is not in-place; without reassignment these stay on the CPU
x.to(devicet)
y.to(devicet)

learning_rate = 1e-4
optimizer = torch.optim.Adam(net.parameters(), lr=learning_rate)
loss_fn = torch.nn.MSELoss()

acc_list = []
for i in range(1000):
    y_pred = net(x)
    loss = loss_fn(y_pred, y)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    # Track the absolute difference between prediction and target for the first sample
    acc_list.append(abs(net(x).detach().numpy()[0] - y.detach().numpy()[0]))
    sys.stdout.write("\rEpoch: {0}, Tensor Difference: {1}".format(
        len(acc_list), net(x).detach().numpy()[0] - y.detach().numpy()[0]))
    sys.stdout.flush()
    # A second, manual SGD-style update applied on top of the Adam step
    with torch.no_grad():
        for param in net.parameters():
            param -= learning_rate * param.grad

print('\nFinished training in {} epochs.'.format(len(acc_list)))
plt.plot(range(len(acc_list)), acc_list)
plt.show()
print(net(x).detach().numpy()[0:5])
print(y.detach().numpy()[0:5])

Keep in mind that I'm quite new to PyTorch, and this is the first model I've designed myself rather than just a built-in Sequential.

1 Answer:

Answer 0 (score: 0)

In the forward function, all that was needed was to add x = F.relu(x) after each linear layer, instead of applying it only once to the input. Thanks again to Shai.
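For reference, a minimal sketch of what the corrected forward pass might look like, using the layer names from the question's Net. Leaving the final step6 layer without an activation is an assumption here, made so the regression output isn't clipped to non-negative values:

def forward(self, x):
    # Apply a ReLU non-linearity after each hidden linear layer;
    # without these, the stacked nn.Linear layers collapse into a
    # single affine map and the network cannot fit non-linear targets.
    x = F.relu(self.step1(x))
    x = F.relu(self.step2(x))
    x = F.relu(self.step3(x))
    x = F.relu(self.step4(x))
    x = F.relu(self.step5(x))
    return self.step6(x)  # final layer kept linear for the regression output

The key point is that composing linear layers with no activation in between is mathematically equivalent to a single linear layer, so the original network had almost no expressive power regardless of how many epochs it trained for.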
