I changed the expected object to scalar type Float, but still get Long in PyTorch

Asked: 2019-04-13 12:54:36

Tags: python pytorch

I am doing binary classification. I use binary cross-entropy as the loss function (nn.BCELoss()), and the last layer has a single output unit.
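For reference, a minimal sketch of this kind of setup (the LSTM cell, hidden size, and TIME_STEP value below are illustrative assumptions, not the actual model):

import torch
import torch.nn as nn

TIME_STEP, INPUT_SIZE, HIDDEN_SIZE = 28, 1, 64     # assumed values for illustration

class RNN(nn.Module):
    def __init__(self):
        super().__init__()
        self.rnn = nn.LSTM(INPUT_SIZE, HIDDEN_SIZE, batch_first=True)
        self.out = nn.Linear(HIDDEN_SIZE, 1)        # single output unit

    def forward(self, x):                           # x: (batch, time_step, input_size)
        r_out, _ = self.rnn(x)
        logits = self.out(r_out[:, -1, :])          # use the last time step
        return torch.sigmoid(logits).squeeze(1)     # BCELoss expects probabilities

rnn = RNN()
loss_func = nn.BCELoss()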

Before passing (input, target) to the loss function, I cast the target from Long to Float. The error only appears at the final step of the DataLoader enumeration, with the message "RuntimeError: Expected object of scalar type Float but got scalar type Long for argument #2 'target'". The DataLoader is defined in the code with drop_last=True (the last batch is dropped if its size does not match), and I am not sure whether that is related to the error.
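The error can be reproduced independently of any DataLoader; this standalone snippet (random data, purely for illustration) shows that nn.BCELoss raises a dtype RuntimeError for a Long target and works once the target is cast to Float:

import torch
import torch.nn as nn

loss_func = nn.BCELoss()
output = torch.sigmoid(torch.randn(4))          # float probabilities in (0, 1)
target_long = torch.randint(0, 2, (4,))         # torch.int64 (Long), as labels often are

try:
    loss_func(output, target_long)              # raises a dtype RuntimeError
except RuntimeError as e:
    print(e)

loss = loss_func(output, target_long.float())   # cast the target to Float -> works
print(loss.item())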

I tried printing the types of both the target and the input (the network's output), and both are Float. The type output and my code are below.

trainloader = torch.utils.data.DataLoader(trainset, batch_size=BATCH_SIZE,
                                          shuffle=True, drop_last=True)
loss_func = nn.BCELoss() 

# training 
for epoch in range(EPOCH):
    test_loss = 0
    train_loss = 0

    for step, (b_x, b_y) in enumerate(trainloader):        # gives batch data
        b_x = b_x.view(-1, TIME_STEP, 1)              # reshape x to (batch, time_step, input_size)
        print("step: ", step)
        b_x = b_x.to(device) 
        print("BEFORE|b_y type: ",b_y.type())
        b_y = b_y.to(device, dtype=torch.float)
        print("AFTER|b_y type: ",b_y.type())
        output = rnn(b_x)                               # rnn output
        print("output type:", output.type())
        loss = loss_func(output, b_y)  # !!!error occurs when trainloader enumerate the final step!!!                 

        train_loss = train_loss + loss

        optimizer.zero_grad()                           
        loss.backward()                                 
        optimizer.step()  
#### type result and the error message ####
... 
step:  6
BEFORE|b_y type:  torch.LongTensor
AFTER|b_y type:  torch.cuda.FloatTensor
output type: torch.cuda.FloatTensor
---------------------------------------------------------------------------
RuntimeError                              Traceback (most recent call last)
<ipython-input-18-e028fcb6b840> in <module>
     30         b_y = b_y.to(device)
     31         output = rnn(b_x)
---> 32         loss = loss_func(output, b_y)
     33         test_loss = test_loss + loss
     34         rnn.train()

~/venvs/tf1.12/lib/python3.5/site-packages/torch/nn/modules/module.py in __call__(self, *input, **kwargs)
    487             result = self._slow_forward(*input, **kwargs)
    488         else:
--> 489             result = self.forward(*input, **kwargs)
    490         for hook in self._forward_hooks.values():
    491             hook_result = hook(self, input, result)

~/venvs/tf1.12/lib/python3.5/site-packages/torch/nn/modules/loss.py in forward(self, input, target)
    502     @weak_script_method
    503     def forward(self, input, target):
--> 504         return F.binary_cross_entropy(input, target, weight=self.weight, reduction=self.reduction)
    505 
    506 

~/venvs/tf1.12/lib/python3.5/site-packages/torch/nn/functional.py in binary_cross_entropy(input, target, weight, size_average, reduce, reduction)
   2025 
   2026     return torch._C._nn.binary_cross_entropy(
-> 2027         input, target, weight, reduction_enum)
   2028 
   2029 

RuntimeError: Expected object of scalar type Float but got scalar type Long for argument #2 'target'

1 Answer:

Answer 0 (score: 1):

It does seem that the type is being changed correctly, since you observe the change when printing the types, and the PyTorch documentation for Tensor.to states:

    Returns a Tensor with the specified device and (optional) dtype. If dtype is None it is inferred to be self.dtype. When non_blocking, tries to convert asynchronously with respect to the host if possible, e.g., converting a CPU Tensor with pinned memory to a CUDA Tensor. When copy is set, a new Tensor is created even when the Tensor already matches the desired conversion.

Other approaches like

    b_y = b_y.to(device).float()

should not differ much, since .float() is equivalent to .to(torch.float32). Could you verify the type of b_y right before the error is raised and edit the question? (I would have left this as a comment, but I wanted to add more detail. I will try to help once that information is provided.)
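A small sketch of how one might check that equivalence and catch a stray Long target right at the loss call (the assert is illustrative and can be dropped once the issue is found):

import torch

b_y = torch.randint(0, 2, (8,))                  # Long target, as typically produced by a Dataset
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

via_to = b_y.to(device, dtype=torch.float)
via_float = b_y.to(device).float()               # .float() is shorthand for .to(torch.float32)
assert via_to.dtype == via_float.dtype == torch.float32

# Right before the loss call in the training/eval loop:
# assert b_y.dtype == torch.float32, f"target dtype is {b_y.dtype}"
# loss = loss_func(output, b_y)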