While trying to train a PyTorch model, I get the following error during the autograd step: the loss object is not callable. (The relevant code is shown below.)
optimizer = torch.optim.Adam(model.parameters(), lr=lr,
                             betas=(0.0, 0.9))
def train(epoch, shuffle, wisdom_model, optim, loss):
    print('train')
    accuracy = 0
    batch_num = 0
    wisdom_model.train()
    for batch in data.train_dl:
        optim.zero_grad()
        result = model(batch[0])
        loss = nn.CrossEntropyLoss()(result, batch[1].long())
        loss.backward()
        accuracy += accuracy(result, batch[1])
        print(accuracy)
        pdb.set_trace()
        batch_num += 1
    return accuracy / batch_num
TypeError Traceback (most recent call last)
<ipython-input-28-5b9c9fe3b320> in <module>
----> 1 run(1, False)
<ipython-input-27-d0d67dbf6eb2> in run(num_models, dropout)
71
72 for epoch in range(10):
---> 73 train_accuracy = train(epoch, False, model, optimizer, loss)
74 accuracy.append(validate(epoch, model))
75
<ipython-input-27-d0d67dbf6eb2> in train(epoch, shuffle, model, optim, loss)
24 pdb.set_trace()
25
---> 26 loss.backward()
27 optim.step()
28
TypeError: 'int' object is not callable
Answer 0 (score: 1)
The problem is in this line:

loss = nn.CrossEntropyLoss()(result, batch[1].long())

It should not take the form

nn.CrossEntropyLoss()()

but rather

nn.CrossEntropyLoss()
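A minimal sketch of that pattern: instantiate the loss module once, then call the instance on (output, target). The tensors below are toy stand-ins for the question's model output and batch[1], so the names and shapes are illustrative only:

import torch
import torch.nn as nn

criterion = nn.CrossEntropyLoss()                 # instantiate the loss module once, outside the loop

logits = torch.randn(4, 3, requires_grad=True)    # stand-in for model(batch[0]): 4 samples, 3 classes
targets = torch.tensor([0, 2, 1, 0])              # stand-in for batch[1]: class indices as a LongTensor

loss = criterion(logits, targets)                 # call the instance on (output, target)
loss.backward()                                   # gradients flow back into logits
print(loss.item())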
Answer 1 (score: 0)
The problem may be the data type of your target, i.e. batch[1]. Check whether it is a Tensor; a simple torch.tensor(batch[1]) should do the job.
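A short sketch of that check and conversion; since batch[1] is not shown in the question, a plain Python list stands in for it here:

import torch
import torch.nn as nn

raw_targets = [0, 2, 1, 0]                        # stand-in for batch[1] if it is a plain list/array
if not isinstance(raw_targets, torch.Tensor):     # verify the target is already a Tensor
    targets = torch.tensor(raw_targets)           # convert it; CrossEntropyLoss expects class indices
else:
    targets = raw_targets

logits = torch.randn(4, 3, requires_grad=True)    # stand-in for the model output
loss = nn.CrossEntropyLoss()(logits, targets.long())
loss.backward()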