Expected object of type torch.LongTensor for argument #2 'target' in PyTorch's F.nll_loss()

Date: 2018-07-20 19:03:21

Tags: python pytorch loss-function

Why does this error occur?

I am trying to write a custom loss function that would ultimately be a negative log likelihood.

As per my understanding, NLL is calculated between two probability values?

>>> loss = F.nll_loss(sigm, trg_, ignore_index=250, weight=None, size_average=True)
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/home//lib/python3.5/site-packages/torch/nn/functional.py", line 1332, in nll_loss
    return torch._C._nn.nll_loss(input, target, weight, size_average, ignore_index, reduce)
RuntimeError: Expected object of type torch.LongTensor but found type torch.FloatTensor for argument #2 'target'

My input here is as follows:

>>> sigm.size()
torch.Size([151414, 80])
>>> sigm
tensor([[ 0.3283,  0.6472,  0.8278,  ...,  0.6756,  0.2168,  0.5659],
        [ 0.6603,  0.5957,  0.8375,  ...,  0.2274,  0.4523,  0.4665],
        [ 0.5262,  0.4223,  0.5009,  ...,  0.5734,  0.3151,  0.2076],
        ...,
        [ 0.4083,  0.2479,  0.5996,  ...,  0.8355,  0.6681,  0.7900],
        [ 0.6373,  0.3771,  0.6568,  ...,  0.4356,  0.8143,  0.4704],
        [ 0.5888,  0.4365,  0.8587,  ...,  0.2233,  0.8264,  0.5411]])

And my target tensor is:

>>> trg_.size()
torch.Size([151414])
>>> trg_
tensor([-7.4693e-01,  3.5152e+00,  2.9679e-02,  ...,  1.6316e-01,
         3.6594e+00,  1.3366e-01])

If I convert these to long, I lose all the data:

>>> sigm.long()
tensor([[ 0,  0,  0,  ...,  0,  0,  0],
        [ 0,  0,  0,  ...,  0,  0,  0],
        [ 0,  0,  0,  ...,  0,  0,  0],
        ...,
        [ 0,  0,  0,  ...,  0,  0,  0],
        [ 0,  0,  0,  ...,  0,  0,  0],
        [ 0,  0,  0,  ...,  0,  0,  0]])
>>> trg_.long()
tensor([ 0,  3,  0,  ...,  0,  3,  0])

If I also pass the raw target values through a sigmoid:

>>> F.sigmoid(trg_)
tensor([ 0.3215,  0.9711,  0.5074,  ...,  0.5407,  0.9749,  0.5334])
>>> loss = F.nll_loss(sigm, F.sigmoid(trg_), ignore_index=250, weight=None, size_average=True)
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/home/lib/python3.5/site-packages/torch/nn/functional.py", line 1332, in nll_loss
    return torch._C._nn.nll_loss(input, target, weight, size_average, ignore_index, reduce)
RuntimeError: Expected object of type torch.LongTensor but found type torch.FloatTensor for argument #2 'target'

This does compute a loss value, but since I lose the data in the conversion to long, I again don't believe it is correct:

>>> loss = F.nll_loss(sigm, F.sigmoid(trg_).long(), ignore_index=250, weight=None, size_average=True)
>>> loss 
tensor(-0.5010)

>>> F.sigmoid(trg_).long()
tensor([ 0,  0,  0,  ...,  0,  0,  0])

2 Answers:

Answer 0 (score: 3)

"As per my understanding, NLL is calculated between two probability values?"

No, NLL is not calculated between two probability values. As per the PyTorch documentation (see the shape section), it is usually used to implement cross entropy loss. It takes an input which is expected to contain log-probabilities and has size (N, C), where N is the data size and C is the number of classes. The target is a long tensor of size (N,) which gives the true class index of each sample.
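For example, here is a minimal sketch of the expected shapes and dtypes, and of how nll_loss relates to cross entropy (the tensor names and sizes are made up for illustration; C = 80 is chosen to match the second dimension in the question):

import torch
import torch.nn.functional as F

# N = 4 samples, C = 80 classes
scores = torch.randn(4, 80)                # raw, unnormalized scores
log_probs = F.log_softmax(scores, dim=1)   # nll_loss expects log-probabilities of size (N, C)
labels = torch.randint(0, 80, (4,))        # LongTensor of class indices, 0 <= value < C

loss_nll = F.nll_loss(log_probs, labels)
loss_ce = F.cross_entropy(scores, labels)  # log_softmax + nll_loss in a single call
print(torch.allclose(loss_nll, loss_ce))   # True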

Since in your case the target is clearly not a true class index, you might have to implement your own version of the loss, and you may not be able to use NLLLoss. If you add more details about the loss you are trying to code, I can help/explain more about how to do it (if possible, using existing functions in torch).
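As one possible direction, if the targets turned out to be soft probability distributions over the C classes (an assumption here, since the question does not say what the target values represent), a hand-rolled NLL-style loss might look like the following sketch; soft_nll_loss is a hypothetical helper, not a torch function:

import torch
import torch.nn.functional as F

def soft_nll_loss(scores, target_probs):
    # sketch: per-sample cross entropy against a soft target distribution,
    # averaged over the batch
    log_probs = F.log_softmax(scores, dim=1)
    return -(target_probs * log_probs).sum(dim=1).mean()

# made-up example with N = 8, C = 80
scores = torch.randn(8, 80)
target_probs = torch.softmax(torch.randn(8, 80), dim=1)  # each row sums to 1
print(soft_nll_loss(scores, target_probs))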

Answer 1 (score: 2)

I'll leave here some minimal, runnable, commented code that lets you see the dimensions at each step and understand how this (or another) loss works:

import torch
import torch.nn as nn

m = nn.LogSoftmax(dim=1)  # log-softmax over the class dimension
loss = nn.NLLLoss()

# input is of size N x C = 3 x 5
# this is a FloatTensor of raw scores; LogSoftmax above turns them
# into log-probabilities for each item in the batch and each class
input = torch.randn(3, 5)

# target is LongTensor for index of true class for each item in batch
# each element in target has to have 0 <= value < C
target = torch.tensor([1, 0, 4])

# output is a 0-dimensional tensor, i.e., a scalar wrapped in a tensor
output = loss(m(input), target)
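
# printing the shapes makes the dimensions at each step explicit:
# input is (N, C) = (3, 5), target is (N,) = (3,), output is 0-dimensional
print(input.shape, target.shape, output.shape)
print(output)  # the exact value varies, since input is random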