PyTorch CNN never converges (implementation problem suspected)

Asked: 2019-11-20 18:10:25

Tags: pytorch

I cannot get this network to work. I have tried many iterations of this model, but I still cannot get a reasonable error (it never fits the data; it cannot even overfit).

Where am I going wrong? Any help would be greatly appreciated.

For reference, there are 12 input "images" of shape 49,9 (actually water surface elevations at 9 stations in an estuary) and 12 labels of shape 1,9.

For a complete sample of the data, see https://gitlab.com/jb4earth/effonn/
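For anyone trying to reproduce this without pulling the GitLab data, here is a minimal sketch of stand-in tensors matching the shapes described above; the get_xy helper is not shown in the post, so the version here is a hypothetical placeholder for it (used by the training loop further down):

  import torch

  # 12 random samples standing in for the real estuary data:
  # each input "image" is 49 x 9, each label is 1 x 9 (batched with singleton dims)
  input_data = [torch.randn(1, 1, 49, 9) for _ in range(12)]
  output_data = [torch.randn(1, 1, 1, 9) for _ in range(12)]

  def get_xy(inputs, outputs, cnt):
      # hypothetical stand-in for the asker's get_xy helper
      return inputs[cnt], outputs[cnt]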

  import torch
  import torch.nn as nn

  class Net(torch.nn.Module):
      def __init__(self, kernel_size):
          super(Net, self).__init__()
          mid_size = (49*49*9)
          self.predict = torch.nn.Sequential(
              nn.Conv2d(
                  in_channels=1,
                  out_channels=mid_size,
                  kernel_size=kernel_size,
                  stride=1,
                  padding=(0, 0)
              ),
              nn.ReLU(),
              nn.MaxPool2d(1),
              nn.ReLU(),
              nn.Conv2d(
                  in_channels=mid_size,
                  out_channels=1,
                  kernel_size=kernel_size,
                  stride=1,
                  padding=(0, 0)
              ),
              nn.ReLU()
          )


      def forward(self, x):
          x = self.predict(x)
          return x

  def train_network(x,y,optimizer,loss_func):
      prediction = net(x)    
      loss = loss_func(prediction, y.squeeze())     
      optimizer.zero_grad()  
      loss.backward()     
      optimizer.step()    
      return prediction, loss


  net = Net((1,1))
  optimizer = torch.optim.Adam(net.parameters(), lr=0.01)
  loss_func = torch.nn.MSELoss()
  cnt = 0
  while True:
      # get_xy in place of DataLoader
      (x,y) = get_xy(input_data,output_data,cnt)
      # x.shape is 1,1,49,9
      # y.shape is 1,1,1,9

      # train and predict
      (prediction,loss) = train_network(x,y,optimizer,loss_func)

      # prediction shape different than desired so averaging all results
      prediction_ = torch.mean(prediction)

      # only 12 IO's so loop through 
      cnt += 1
      if cnt > 11:
          cnt = 0
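
A quick check of the shapes involved (not part of the original post; this sketch assumes the Net class and the input shapes above) shows why the prediction gets averaged: the network's output keeps the 49,9 spatial shape, while the squeezed label is just 9 values, and MSELoss will broadcast that mismatch (usually with a warning) rather than compare element for element.

  with torch.no_grad():
      x_check = torch.randn(1, 1, 49, 9)
      y_check = torch.randn(1, 1, 1, 9)
      net_check = Net((1, 1))
      print(net_check(x_check).shape)   # torch.Size([1, 1, 49, 9]) with a (1, 1) kernel
      print(y_check.squeeze().shape)    # torch.Size([9]) -- the target MSELoss receives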

1 Answer:

Answer 0 (score: 1)

Take a look here, this part looks suspicious. You are computing the loss and then zeroing the gradients. Zeroing the gradients should happen before you compute the loss. So move optimizer.zero_grad() to the top, and I think it will work properly. I couldn't reproduce your example, which is why I'm guessing this is your bug.

  loss = loss_func(prediction, y.squeeze())     
  optimizer.zero_grad()   # switch this to the top  
  loss.backward()     
  optimizer.step()
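
Applied to the asker's train_network, the reordering described above would look like this (a sketch of the suggested change, not a tested fix):

  def train_network(x, y, optimizer, loss_func):
      optimizer.zero_grad()   # clear stale gradients before the forward/backward pass
      prediction = net(x)
      loss = loss_func(prediction, y.squeeze())
      loss.backward()
      optimizer.step()
      return prediction, loss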