I transfer the output tensor to the GPU at different points in the code (before vs. after modifying its values), and I get different results. What is the reason?
The failing code can be simplified to:
def Network(self):
    ........
    A = self.model(input)                    # model output, lives on the GPU
    indexlist = self.indexlist
    output = torch.zeros(A.size(0))          # accumulator created on the CPU
    for i, li in enumerate(indexlist):
        if li:
            s, e = li
            output[i] += sum(A[i, s:e])      # in-place writes into the CPU tensor
    # moved to the GPU only after the in-place writes
    output = output if self.no_cuda else output.cuda(device=self.gpu, async=True)
    return output

pred = Network()
loss = F.nll_loss(pred, target)
loss.backward()
and it fails with: RuntimeError: Function torch::autograd::CopySlices returned an invalid gradient at index 1 - expected type torch.cuda.FloatTensor but got torch.FloatTensor
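(As an aside, not from the original code: in this version the in-place writes output[i] += ... happen while output is still a CPU tensor, whereas A comes off the GPU. A minimal device check, using torch.randn as a stand-in for self.model(input) and assuming a CUDA build:)

import torch

A = torch.randn(3, 5, device="cuda", requires_grad=True)  # stand-in for self.model(input)
output = torch.zeros(A.size(0))                            # created as in the failing version
print(A.device, output.device)                             # cuda:0 cpu  -> mixed devices
print(output.cuda().device)                                # cuda:0      -> only after the copy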
If I move that one line as follows, it runs fine:
def Network(self):
    ........
    A = self.model(input)
    indexlist = self.indexlist
    output = torch.zeros(A.size(0))
    # moved to the GPU *before* the in-place writes
    output = output if self.no_cuda else output.cuda(device=self.gpu, async=True)
    for i, li in enumerate(indexlist):
        if li:
            s, e = li
            output[i] += sum(A[i, s:e])      # now every tensor involved is on the GPU
    return output

pred = Network()
loss = F.nll_loss(pred, target)
loss.backward()
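(For completeness, not part of the original code: a sketch of an alternative that avoids mixing devices in the first place, by allocating the accumulator on A's device and using the tensor method .sum() instead of Python's built-in sum. In newer PyTorch releases the async= keyword of .cuda() is spelled non_blocking=. The function name slice_sums and all shapes below are made up for illustration:)

import torch

def slice_sums(A, indexlist):
    # Accumulator allocated on the same device as A, so the in-place
    # slice writes never mix CPU and CUDA tensors in the autograd graph.
    output = torch.zeros(A.size(0), device=A.device)
    for i, li in enumerate(indexlist):
        if li:
            s, e = li
            output[i] += A[i, s:e].sum()     # tensor .sum(), stays on A's device
    return output

# Illustrative usage (runs on CPU too; pass a CUDA tensor to exercise the GPU path):
A = torch.randn(3, 5, requires_grad=True)
indexlist = [(0, 2), None, (1, 4)]
out = slice_sums(A, indexlist)
out.sum().backward()                         # gradients flow back into A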