I'm training a model with PyTorch, but I hit a runtime error when computing the cross-entropy loss.
Traceback (most recent call last):
File "deparser.py", line 402, in <module>
d.train()
File "deparser.py", line 331, in train
total, correct, avgloss = self.train_util()
File "deparser.py", line 362, in train_util
loss = self.step(X_train, Y_train, correct, total)
File "deparser.py", line 214, in step
loss = nn.CrossEntropyLoss()(out.long(), y)
File "/home/summer2018/TF/lib/python3.5/site-packages/torch/nn/modules/module.py", line 477, in __call__
result = self.forward(*input, **kwargs)
File "/home/summer2018/TF/lib/python3.5/site-packages/torch/nn/modules/loss.py", line 862, in forward
ignore_index=self.ignore_index, reduction=self.reduction)
File "/home/summer2018/TF/lib/python3.5/site-packages/torch/nn/functional.py", line 1550, in cross_entropy
return nll_loss(log_softmax(input, 1), target, weight, None, ignore_index, None, reduction)
File "/home/summer2018/TF/lib/python3.5/site-packages/torch/nn/functional.py", line 975, in log_softmax
return input.log_softmax(dim)
RuntimeError: "host_softmax" not implemented for 'torch.cuda.LongTensor'
I think it has something to do with the .cuda() call or the conversion between torch.Float and torch.Long. I've already tried changing the variables in many combinations of .cpu() / .cuda() and .long() / .float(), but it still doesn't work. Searching Google for this error message turns up nothing. Can anyone help me? Thanks!!!
Here is the code that causes the error:
def step(self, x, y, correct, total):
    self.optimizer.zero_grad()
    out = self.forward(*x)
    loss = nn.CrossEntropyLoss()(out.long(), y)
    loss.backward()
    self.optimizer.step()
    _, predicted = torch.max(out.data, 1)
    total += y.size(0)
    correct += int((predicted == y).sum().data)
    return loss.data
The caller of this step() function is:
def train_util(self):
    total = 0
    correct = 0
    avgloss = 0
    for i in range(self.step_num_per_epoch):
        X_train, Y_train = self.trainloader()
        self.optimizer.zero_grad()
        if torch.cuda.is_available():
            self.cuda()
            for i in range(len(X_train)):
                X_train[i] = Variable(torch.from_numpy(X_train[i]))
                X_train[i].requires_grad = False
                X_train[i] = X_train[i].cuda()
            Y_train = torch.from_numpy(Y_train)
            Y_train.requires_grad = False
            Y_train = Y_train.cuda()
        loss = self.step(X_train, Y_train, correct, total)
        avgloss += float(loss)*Y_train.size(0)
        self.optimizer.step()
        if i % 100 == 99:
            print('STEP %d, Loss: %.4f, Acc: %.4f'%(i+1, loss, correct/total))
    return total, correct, avgloss/self.data_len
The input data X_train, Y_train = self.trainloader() are numpy arrays to begin with. Here is a data sample:
>>> X_train, Y_train = d.trainloader()
>>> X_train[0].dtype
dtype('int64')
>>> X_train[1].dtype
dtype('int64')
>>> X_train[2].dtype
dtype('int64')
>>> Y_train.dtype
dtype('float32')
>>> X_train[0]
array([[ 0, 6, 0, ..., 0, 0, 0],
[ 0, 1944, 8168, ..., 0, 0, 0],
[ 0, 815, 317, ..., 0, 0, 0],
...,
[ 0, 0, 0, ..., 0, 0, 0],
[ 0, 23, 6, ..., 0, 0, 0],
[ 0, 0, 297, ..., 0, 0, 0]])
>>> X_train[1]
array([ 6, 7, 8, 21, 2, 34, 3, 4, 19, 14, 15, 2, 13, 3, 11, 22, 4,
13, 34, 10, 13, 3, 48, 18, 16, 19, 16, 17, 48, 3, 3, 13])
>>> X_train[2]
array([ 4, 5, 8, 36, 2, 33, 5, 3, 17, 16, 11, 0, 9, 3, 10, 20, 1,
14, 33, 25, 19, 1, 46, 17, 14, 24, 15, 15, 51, 2, 1, 14])
>>> Y_train
array([[0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,
0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 1.],
[0., 0., 0., 1., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,
0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.],
[1., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,
...,
[0., 0., 0., 0., 0., 0., 1., 0., 0., 0., 0., 0., 0., 0., 0., 0.,
0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.]],
dtype=float32)
I tried all the possible combinations:
Case 1:
loss = nn.CrossEntropyLoss()(out, y)
I got:
RuntimeError: Expected object of type torch.cuda.LongTensor but found type torch.cuda.FloatTensor for argument #2 'target'
Case 2:
loss = nn.CrossEntropyLoss()(out.long(), y)
I got the error in the traceback above.
Case 3:
loss = nn.CrossEntropyLoss()(out.float(), y)
I got:
RuntimeError: Expected object of type torch.cuda.LongTensor but found type torch.cuda.FloatTensor for argument #2 'target'
Case 4:
loss = nn.CrossEntropyLoss()(out, y.long())
I got:
RuntimeError: multi-target not supported at /pytorch/aten/src/THCUNN/generic/ClassNLLCriterion.cu:15
Case 5:
loss = nn.CrossEntropyLoss()(out.long(), y.long())
I got:
RuntimeError: "host_softmax" not implemented for 'torch.cuda.LongTensor'
Case 6:
loss = nn.CrossEntropyLoss()(out.float(), y.long())
I got:
RuntimeError: multi-target not supported at /pytorch/aten/src/THCUNN/generic/ClassNLLCriterion.cu:15
Case 7:
loss = nn.CrossEntropyLoss()(out, y.float())
I got:
RuntimeError: Expected object of type torch.cuda.LongTensor but found type torch.cuda.FloatTensor for argument #2 'target'
Case 8:
loss = nn.CrossEntropyLoss()(out.long(), y.float())
I got:
RuntimeError: "host_softmax" not implemented for 'torch.cuda.LongTensor'
Case 9:
loss = nn.CrossEntropyLoss()(out.float(), y.float())
I got:
RuntimeError: Expected object of type torch.cuda.LongTensor but found type torch.cuda.FloatTensor for argument #2 'target'
Answer 0 (score: 2)
I found out where the problem is. y should NOT be one-hot encoded; its dtype should be torch.int64, i.e. plain class indices. nn.CrossEntropyLoss effectively handles the one-hot part internally (while out is the predicted score distribution over classes, which already plays the "one-hot-like" role). It works now!
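A minimal sketch of that fix (the batch size, number of classes, and the sample class indices are made up for illustration; the key point is that the target is a 1-D LongTensor of class indices while the logits stay float):

import torch
import torch.nn as nn

criterion = nn.CrossEntropyLoss()

batch_size, num_classes = 4, 30                          # illustrative sizes only
out = torch.randn(batch_size, num_classes,               # float logits straight from the model
                  requires_grad=True)

# Y_train arrives one-hot encoded as float32; collapse it to class indices
y_onehot = torch.zeros(batch_size, num_classes)
y_onehot[torch.arange(batch_size), torch.tensor([29, 3, 0, 6])] = 1.0
y = torch.max(y_onehot, dim=1)[1]                        # shape (batch,), dtype torch.int64

loss = criterion(out, y)                                 # no .long()/.float() on the logits
loss.backward()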
Answer 1 (score: 0)
In my case it was because I had flipped the targets and the logits when calling the loss, and since the logits are obviously not torch.int64, the error was raised.