I have a convolutional neural network whose output is a 4-channel 2D image. I want to apply a sigmoid activation to the first two channels and then use BCECriterion to compute the loss between the generated image and the ground-truth image. I want to apply a squared-error loss to the last two channels, then compute the gradients and do the backward pass. I also want to multiply the squared-loss cost of each of the last two channels by a desired scalar.
So the cost has the following form:
cost = crossEntropyCh[{1, 2}] + l1 * squaredLossCh_3 + l2 * squaredLossCh_4
The way I was considering doing this is as follows:
criterion1 = nn.BCECriterion()
criterion2 = nn.MSECriterion()
error = criterion1:forward(model.output[{{}, {1, 2}}], groundTruth1)
      + l1 * criterion2:forward(model.output[{{}, {3}}], groundTruth2)
      + l2 * criterion2:forward(model.output[{{}, {4}}], groundTruth3)
However, I don't think this is the correct way of doing it, because I would have to do three separate backward steps, one for each cost term. So I was wondering: can anyone give me a better solution to do this in Torch?
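For reference, the manual approach the question describes can be made to work with a single model:backward by accumulating the per-term criterion gradients into one gradient tensor. This is only a sketch under the question's assumptions (model, input, l1, l2, and the groundTruth tensors are placeholders); the criterion gradients are copied out because each criterion reuses its internal gradInput buffer:

```lua
require 'nn'

criterion1 = nn.BCECriterion()
criterion2 = nn.MSECriterion()

local out = model.output
-- One gradient tensor covering all 4 channels, filled term by term.
local gradOut = out.new():resizeAs(out):zero()

local err = criterion1:forward(out[{{}, {1, 2}}], groundTruth1)
gradOut[{{}, {1, 2}}] = criterion1:backward(out[{{}, {1, 2}}], groundTruth1)

err = err + l1 * criterion2:forward(out[{{}, {3}}], groundTruth2)
gradOut[{{}, {3}}] = criterion2:backward(out[{{}, {3}}], groundTruth2) * l1

err = err + l2 * criterion2:forward(out[{{}, {4}}], groundTruth3)
gradOut[{{}, {4}}] = criterion2:backward(out[{{}, {4}}], groundTruth3) * l2

-- Single backward pass through the network with the combined gradient.
model:backward(input, gradOut)
```

This avoids three backward passes through the network itself; only the cheap criterion backwards run three times.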
Answer 0 (score: 2)
SplitTable and ParallelCriterion may help with your problem.
Follow your current output layer with nn.SplitTable, which splits your output channels and converts the output tensor into a table. You can also use nn.ParallelCriterion to combine different criteria, so that each criterion is applied to the corresponding entry of the output table.
For details, I recommend reading the Torch documentation on table layers.
Following the comments, I have added the code snippet below to solve the original problem.
M = 100
C = 4
H = 64
W = 64
dataIn = torch.rand(M, C, H, W)
layerOfTables = nn.Sequential()
-- Because SplitTable discards the dimension it is applied on, we insert
-- an additional dimension.
layerOfTables:add(nn.Reshape(M,C,1,H,W))
-- We want to split over the second dimension (i.e. channels).
layerOfTables:add(nn.SplitTable(2, 5))
-- We use ConcatTable in order to create paths accessing the data for
-- an arbitrary number of criteria. Each branch of the ConcatTable will
-- have access to the data (i.e. the output table).
criterionPath = nn.ConcatTable()
-- Starting from offset 1, NarrowTable will select 2 elements. Since you
-- want to use this portion as a 2-channel tensor, we need to combine
-- them by using JoinTable. Without JoinTable, the output would again be
-- a table with 2 elements.
criterionPath:add(nn.Sequential():add(nn.NarrowTable(1, 2)):add(nn.JoinTable(2)))
-- SelectTable is a simplified version of NarrowTable; it fetches the desired element.
criterionPath:add(nn.SelectTable(3))
criterionPath:add(nn.SelectTable(4))
layerOfTables:add(criterionPath)
-- Here goes the criterion container. You can use this as if it is a regular
-- criterion function (Please see the examples on documentation page).
criterionContainer = nn.ParallelCriterion()
criterionContainer:add(nn.BCECriterion())
criterionContainer:add(nn.MSECriterion())
criterionContainer:add(nn.MSECriterion())
It looks a bit nasty since I have used almost every possible table operation, but this is the only way I could solve the problem. I hope it helps you and others suffering from the same problem. The result looks as follows:
dataOut = layerOfTables:forward(dataIn)
print(dataOut)
{
1 : DoubleTensor - size: 100x2x64x64
2 : DoubleTensor - size: 100x1x64x64
3 : DoubleTensor - size: 100x1x64x64
}
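To fold the scalars l1 and l2 from the original cost into the criterion, note that ParallelCriterion:add accepts an optional weight argument. A usage sketch (the target tensors groundTruth1/2/3 are placeholders, assumed to match the shapes of the three table entries above):

```lua
require 'nn'

criterionContainer = nn.ParallelCriterion()
criterionContainer:add(nn.BCECriterion())      -- channels 1-2
criterionContainer:add(nn.MSECriterion(), l1)  -- channel 3, weighted by l1
criterionContainer:add(nn.MSECriterion(), l2)  -- channel 4, weighted by l2

-- Targets go in a table matching the output table structure.
target = {groundTruth1, groundTruth2, groundTruth3}

err = criterionContainer:forward(dataOut, target)
gradCriterion = criterionContainer:backward(dataOut, target)
-- Backpropagate through the table layers (and, in a full setup, the
-- preceding network) with the table of gradients.
layerOfTables:backward(dataIn, gradCriterion)
```

With the weights passed to add, no manual scaling of the MSE terms is needed.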