In my train.py:
criteon = nn.CrossEntropyLoss()
loss = criteon(binary_output_c1,labels)
optimizer.zero_grad()
loss.backward()
optimizer.step()
binary_output_c1 and labels both have size [4, 224, 224], where 4 is the batch size and 224 is h and w. It raises this error:
Traceback (most recent call last):
File "D:\Anaconda3\lib\site-packages\IPython\core\interactiveshell.py", line 3296, in run_code
exec(code_obj, self.user_global_ns, self.user_ns)
File "<ipython-input-2-78553e2886de>", line 1, in <module>
runfile('F:/experiment_code/U-net/train.py', wdir='F:/experiment_code/U-net')
File "D:\pycharm\PyCharm Community Edition 2019.1.1\helpers\pydev\_pydev_bundle\pydev_umd.py", line 197, in runfile
pydev_imports.execfile(filename, global_vars, local_vars) # execute the script
File "D:\pycharm\PyCharm Community Edition 2019.1.1\helpers\pydev\_pydev_imps\_pydev_execfile.py", line 18, in execfile
exec(compile(contents+"\n", file, 'exec'), glob, loc)
File "F:/experiment_code/U-net/train.py", line 77, in <module>
loss = criteon(binary_output_c1,labels)
File "D:\Anaconda3\lib\site-packages\torch\nn\modules\module.py", line 493, in __call__
result = self.forward(*input, **kwargs)
File "D:\Anaconda3\lib\site-packages\torch\nn\modules\loss.py", line 942, in forward
ignore_index=self.ignore_index, reduction=self.reduction)
File "D:\Anaconda3\lib\site-packages\torch\nn\functional.py", line 2056, in cross_entropy
return nll_loss(log_softmax(input, 1), target, weight, None, ignore_index, None, reduction)
File "D:\Anaconda3\lib\site-packages\torch\nn\functional.py", line 1881, in nll_loss
out_size, target.size()))
ValueError: Expected target size (4, 224), got torch.Size([4, 224, 224])
I don't know whether a 3D tensor can be used with cross-entropy loss. The network is for semantic segmentation.
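(For reference, a minimal sketch of the shapes nn.CrossEntropyLoss expects for segmentation; the tensors below are random stand-ins, not the real train.py variables. The prediction must keep a class dimension, [N, C, H, W], while the target is an integer class-index map of shape [N, H, W].)

import torch
import torch.nn as nn

criteon = nn.CrossEntropyLoss()
logits = torch.randn(4, 2, 224, 224)           # [N, C, H, W] raw scores, one channel per class
target = torch.randint(0, 2, (4, 224, 224))    # [N, H, W] class indices, dtype long
loss = criteon(logits, target)                 # accepted: C matches the number of classes

# Dropping the class dimension reproduces the error above:
# criteon(logits[:, 1, :, :], target)  ->  ValueError: Expected target size (4, 224), ...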
I set the labels' size to [4, 256, 224, 224], where 256 is the number of classes. The code is here:
model.train()
outputs = model(imgs) # output: B * C * H * W
output_c1 = outputs[:,1,:,:] # 2 channels; I choose the second channel
Rounding_output_c1 = torch.round(output_c1)
labelss = torch.stack([(labels == i).long() for i in range(256)])
labelss = labelss.permute(1,0,2,3)
Rounding_output_c11 = torch.stack([(Rounding_output_c1 == i).float() for i in range(256)])
Rounding_output_c11 = Rounding_output_c11.permute(1,0,2,3)
loss = criteon(Rounding_output_c11,labelss)
optimizer.zero_grad()
loss.backward()
It also raises an error:
Traceback (most recent call last):
File "F:/experiment_code/U-net/train_2.py", line 76, in <module>
loss = criteon(Rounding_output_c11,labelss)
File "D:\Anaconda3\lib\site-packages\torch\nn\modules\module.py", line 493, in __call__
result = self.forward(*input, **kwargs)
File "D:\Anaconda3\lib\site-packages\torch\nn\modules\loss.py", line 942, in forward
ignore_index=self.ignore_index, reduction=self.reduction)
File "D:\Anaconda3\lib\site-packages\torch\nn\functional.py", line 2056, in cross_entropy
return nll_loss(log_softmax(input, 1), target, weight, None, ignore_index, None, reduction)
File "D:\Anaconda3\lib\site-packages\torch\nn\functional.py", line 1873, in nll_loss
ret = torch._C._nn.nll_loss2d(input, target, weight, _Reduction.get_enum(reduction), ignore_index)
RuntimeError: 1only batches of spatial targets supported (non-empty 3D tensors) but got targets of size: : [4, 256, 224, 224]
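(This second error is about the target format: nll_loss2d only takes a spatial map of class indices, shape [N, H, W], not a one-hot tensor of shape [N, C, H, W]. A minimal sketch with made-up, shrunken shapes, mirroring the stacking above, of converting the one-hot target back to index form:)

import torch
import torch.nn as nn

criteon = nn.CrossEntropyLoss()
idx = torch.randint(0, 256, (2, 8, 8))                          # ground-truth class indices, [N, H, W]
one_hot = torch.stack([(idx == i).long() for i in range(256)])  # [C, N, H, W], built as in the question
one_hot = one_hot.permute(1, 0, 2, 3)                           # [N, C, H, W] -- this is what nll_loss2d rejects

index_labels = one_hot.argmax(dim=1)                            # back to [N, H, W] class indices (equal to idx)
logits = torch.randn(2, 256, 8, 8, requires_grad=True)          # raw model output with all C channels
loss = criteon(logits, index_labels)                            # accepted
loss.backward()

# Note: in the question's code, torch.round() and the (Rounding_output_c1 == i) comparisons
# are applied to the network output; both are non-differentiable, so gradients would not
# reach the model even if the shapes matched.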
Answer 0 (score: 0)
If you are using nn.CrossEntropyLoss, then your prediction should have two channels: one for predicting 1 and the other for predicting 0. This is a bit redundant, but the loss expects the prediction to have #channels == #labels.
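(A minimal sketch of that two-channel setup applied to this case; the tensors are random stand-ins for the question's outputs and labels, and nothing is sliced, rounded, or one-hot encoded before the loss.)

import torch
import torch.nn as nn

N, H, W = 4, 224, 224
outputs = torch.randn(N, 2, H, W, requires_grad=True)   # raw logits from the model, both channels kept
labels = torch.randint(0, 2, (N, H, W))                  # 0/1 class indices, [N, H, W], dtype long

criteon = nn.CrossEntropyLoss()
loss = criteon(outputs, labels)
loss.backward()

# A common alternative for a single foreground class (my addition, not necessarily
# what this answer goes on to suggest): predict one channel and use
# nn.BCEWithLogitsLoss with a float mask of the same [N, H, W] shape.
# bce = nn.BCEWithLogitsLoss()
# loss = bce(outputs[:, 1, :, :], labels.float())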
Alternatively, you can make the prediction before the loss: