PyTorch RuntimeError: The size of tensor a (416) must match the size of tensor b at non-singleton dimension 3

Asked: 2020-04-06 16:27:40

Tags: runtime-error size pytorch tensor dimension

I'm fairly new to PyTorch. I have developed a training package for density estimation. I created a base class called `Model` that inherits from `nn.Module`; for each CNN model I want to implement, I extend the base class and override the net-build method (which creates the CNN's architecture). I have already run training with an MCNN model, and now I am running training with a CSR model. When the loss is computed with `loss=train_params.criterion(est_dmap,gt_dmap)`, I get the following error:

          File "ML_package/process/main.py", line 213, in <module>
            epochs_list,train_loss_list,test_error_list,min_epoch,min_MAE,train_time=model.train_model(merged_train_dataset,merged_test_dataset,train_params,resume=True)
          File "/content/drive/My Drive/pfe-documentations/ML_package/models/model.py", line 90, in train_model
            loss=train_params.criterion(est_dmap,gt_dmap)
          File "/usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py", line 532, in __call__
            result = self.forward(*input, **kwargs)
          File "/usr/local/lib/python3.6/dist-packages/torch/nn/modules/loss.py", line 431, in forward
            return F.mse_loss(input, target, reduction=self.reduction)
          File "/usr/local/lib/python3.6/dist-packages/torch/nn/functional.py", line 2215, in mse_loss
            expanded_input, expanded_target = torch.broadcast_tensors(input, target)
          File "/usr/local/lib/python3.6/dist-packages/torch/functional.py", line 52, in broadcast_tensors
            return torch._C._VariableFunctions.broadcast_tensors(tensors)
        RuntimeError: The size of tensor a (512) must match the size of tensor b (256) at non-singleton dimension 3 
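For reference, this error occurs whenever the two tensors handed to the loss cannot be broadcast together: at each dimension the sizes must either match or one of them must be 1. A minimal sketch (using the density-map shapes reported in the update below) reproduces it:

```python
import torch
import torch.nn.functional as F

# Shapes from the question: estimated vs. ground-truth density maps.
est_dmap = torch.zeros(1, 1, 316, 494)
gt_dmap = torch.zeros(1, 1, 158, 247)

try:
    F.mse_loss(est_dmap, gt_dmap)
except RuntimeError as e:
    # Neither 316 vs 158 nor 494 vs 247 matches, and neither is 1,
    # so torch.broadcast_tensors fails inside mse_loss.
    print(e)
```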

I have seen several threads about this error, but none of them worked for me. The code of the base `Model` class is as follows:

class Model(nn.Module):
   def train_model(self,train_dataloader,test_dataloader,train_params:TrainParams,resume=False):

        self.params=train_params
        # Initialize variables and parameters

        start=time.time()        

            # Start Train
        for epoch in range(start_epoch,train_params.maxEpochs):
                # Set the Model on training mode
            self.train()
            epoch_loss=0

            for i,(img,gt_dmap) in enumerate(train_dataloader):
                img=img.to(device)
                gt_dmap=gt_dmap.to(device)
                    # forward propagation
                est_dmap=self(img)

                    # calculate loss
                loss=train_params.criterion(est_dmap,gt_dmap)
                epoch_loss+=loss.item()

                self.optimizer.zero_grad()
                    # Backpropagation
                loss.backward()
                self.optimizer.step()
                del img,gt_dmap,est_dmap
            print("\t epoch:"+str(epoch)+"\n","\t loss:",epoch_loss/len(train_dataloader))




                # Set the Model on validation mode
            self.eval()
            MAE=0
            MSE=0
            for i,(img,gt_dmap) in enumerate(test_dataloader):
                img=img.to(device)
                gt_dmap=gt_dmap.to(device)
                    # forward propagation
                est_dmap=self(img)
                MAE+=abs(est_dmap.data.sum()-gt_dmap.data.sum()).item()
                MSE+=np.math.pow(est_dmap.data.sum()-gt_dmap.data.sum(),2)
                del img,gt_dmap,est_dmap
            MAE=MAE/len(test_dataloader)  
            MSE=np.math.sqrt(MSE/len(test_dataloader))

            if MAE<self.min_MAE:
                self.min_MAE=MAE
                self.min_epoch=epoch
            test_error_list.append(MAE)


The model I am implementing has a front-end, a back-end, and an output layer; the forward() method passes the image through the three in sequence. The architecture is:

front-end: VGG16 blocks
back-end: [512, 512, 512, 256, 128, 64] (where the numbers are the out_channels, with in_channels = 512 and dilation = 2)
output layer: Conv2d(64, 1, kernel_size=1)
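As a sanity check on the description above, here is a hedged sketch of such a back-end (the helper name and input size are assumptions; only the out_channels list, in_channels = 512, and dilation = 2 come from the question). With 3x3 dilated convolutions and padding equal to the dilation, the spatial size is unchanged, so only the front-end's pooling determines the output resolution:

```python
import torch
import torch.nn as nn

def make_backend(cfg=(512, 512, 512, 256, 128, 64), in_ch=512, dilation=2):
    # padding=dilation keeps H and W constant for a 3x3 dilated conv:
    # out = H + 2*dilation - dilation*(3 - 1) - 1 + 1 = H
    layers = []
    for out_ch in cfg:
        layers += [nn.Conv2d(in_ch, out_ch, kernel_size=3,
                             padding=dilation, dilation=dilation),
                   nn.ReLU(inplace=True)]
        in_ch = out_ch
    return nn.Sequential(*layers)

backend = make_backend()
x = torch.randn(1, 512, 40, 62)   # hypothetical front-end output
y = backend(x)
print(y.shape)  # channels become 64; H and W are preserved
```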

The criterion is the MSELoss function.

UPDATE: the sizes of gt_dmap and est_dmap are, respectively: gt: torch.Size([1, 1, 158, 247]), est: torch.Size([1, 1, 316, 494]).
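The estimate is exactly twice the ground truth in both spatial dimensions, which suggests the ground-truth density maps were generated at a different downsampling factor than the network's output stride. One possible workaround (an assumption, not necessarily the right fix for this training package; regenerating the ground-truth maps at the model's output stride is the other option) is to resize one map to match the other before the loss. Because rescaling a density map changes its integral, the total count should be preserved:

```python
import torch
import torch.nn.functional as F

est_dmap = torch.rand(1, 1, 316, 494)   # model output (sizes from the update)
gt_dmap = torch.rand(1, 1, 158, 247)    # ground-truth density map

# Resize the estimate down to the ground-truth resolution...
resized = F.interpolate(est_dmap, size=gt_dmap.shape[2:],
                        mode='bilinear', align_corners=False)
# ...then rescale so the summed count is unchanged by the interpolation.
resized = resized * (est_dmap.sum() / resized.sum())

loss = F.mse_loss(resized, gt_dmap)
print(resized.shape)
```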

Please, can somebody help me? I have been struggling with this for days.

0 Answers:

There are no answers yet.