I have a ground-truth tensor containing zeros and integers, and I want to weight the loss by multiplying the output at the non-zero target positions by 5 inside the loss function. However, doing so interferes with backpropagation and raises an error. Is there another way to achieve this through the loss function?
def MSE(output, truth, batch_size):
    for i in range(batch_size):
        for pos, j in enumerate(truth[i]):
            if j != 0:
                # scale the prediction at non-zero target positions (in-place modification of the network output)
                output[i][pos] = output[i][pos] * 5
    mse = ((truth.sub(output)) ** 2).mean()
    return mse
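For clarity, this is the weighting I have in mind, expressed with a mask instead of in-place assignment (the factor 5 matches my use case; the name weighted_mse and this formulation are only a sketch, and I have not confirmed that it keeps the gradients intact):

def weighted_mse(output, truth):
    # 0/1 mask marking the non-zero target positions
    mask = (truth != 0).float()
    # equals output*5 where truth != 0 and output elsewhere, without writing into `output`
    weighted_output = output * (1 + 4 * mask)
    return ((truth - weighted_output) ** 2).mean()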
The training part is below:
for epoch in range(EPOCH):
    losses = []
    for step, (im, pointlist) in enumerate(train_loader):
        im = im.to(device)
        output = cnn(im)
        y = torch.stack(pointlist, dim=1)
        y = y.type(torch.FloatTensor)
        y = y.to(device)
        loss = MSE(output, y, batch_size)
        optimizer.zero_grad()
        loss.backward()  # backpropagation, compute gradients
        optimizer.step()
        losses.append(loss.item())
    scheduler.step()
    print("Epoch={0},Loss={1:.6f}".format(epoch, np.average(np.array(losses))))
The error occurs at loss.backward():
Traceback (most recent call last):
File "CNN_Icarus.py", line 213, in <module>
main()
File "CNN_Icarus.py", line 203, in main
loss.backward()# backpropagation, compute gradients
File "C:\Users\et302\AppData\Local\Programs\Python\Python36\lib\site-packages\torch\tensor.py", line 166, in backward
torch.autograd.backward(self, gradient, retain_graph, create_graph)
File "C:\Users\et302\AppData\Local\Programs\Python\Python36\lib\site-packages\torch\autograd\__init__.py", line 99, in backward
allow_unreachable=True) # allow_unreachable flag
RuntimeError: one of the variables needed for gradient computation has been modified by an inplace operation: [torch.cuda.FloatTensor [4, 20]], which is output 0 of SigmoidBackward, is at version 54; expected version 0 instead. Hint: enable anomaly detection to find the operation that failed to compute its gradient, with torch.autograd.set_detect_anomaly(True).
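Following the hint at the end of the traceback, anomaly detection can be turned on before the training loop to pinpoint the operation that breaks the gradient computation, for example:

import torch

# make backward() report the forward operation responsible for the failing gradient
torch.autograd.set_detect_anomaly(True)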