I have a Keras model that is composed of 3 other Keras models (nested models). My question is about the meaning of the loss values shown in the Keras training logs.
Here is a summary of my global model:
__________________________________________________________________________________________________
Layer (type) Output Shape Param # Connected to
==================================================================================================
input_16 (InputLayer) (None, 256, 256, 3) 0
__________________________________________________________________________________________________
model_1 (Model) (None, 16, 16, 128) 690368 input_16[0][0]
__________________________________________________________________________________________________
model_4 (Model) [(None, 17, 4), (None, 17, 4), (None, 16, 16, 128)] 5103826 input_16[0][0]
__________________________________________________________________________________________________
concatenate_8 (Concatenate) (None, 16, 16, 256) 0 model_1[1][0]
model_4[1][2]
__________________________________________________________________________________________________
decoder (Model) (None, 256, 256, 3) 582843 concatenate_8[0][0]
==================================================================================================
These nested models are 2 encoders (model_1 and model_4) and 1 decoder (decoder).
I also have 3 losses: 2 losses applied directly to 2 of the outputs of model_4, and one loss applied to the output of the decoder.
When training the full model, I only see a single loss for model_4, called model_4_loss:
Epoch 34/60
13548/19512 [===================>..........] - ETA: 34:57 - loss: 0.6764 - decoder_loss: 0.0944 - model_4_loss: 0.2797
But when I train model_4 on its own, I clearly see 2 losses in the training logs (here the concatenate_xxx losses correspond to the first two outputs of model_4):
Epoch 35/60
5430/19512 [=======>......................] - ETA: 1:20:14 - loss: 0.8475 - concatenate_5_loss: 0.2998 - concatenate_7_loss: 0.2767
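For context, the standalone training of model_4 is compiled along these lines (a minimal sketch, assuming no loss is attached to the third output, which is only consumed by the decoder in the full model):

encoder2 = build_encoder2()  # the same 3-output `Model` used as model_4 below
encoder2.compile(
    loss=[loss2(), loss2(), None],   # losses matched positionally; None = no loss on the 3rd output
    optimizer=RMSprop(lr=start_lr),
)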
I have a few questions about this:

Is it normal to see only 1 loss for model_4 and 1 loss for the decoder in the training logs, even though model_4 has 2 losses and the decoder has 1?
What does model_4_loss represent? The average of the 2 losses coming from model_4? Their sum? Only one of the two?
Is there a way to display the two losses of model_4 separately instead of some aggregated value?

To give more context, here is a summary of how I build the whole model:
from keras.layers import Input, concatenate
from keras.models import Model
from keras.optimizers import RMSprop

encoder1 = build_encoder1()  # returns an object of type `Model` with a single (None, 16, 16, 128) output
encoder2 = build_encoder2()  # returns an object of type `Model` with a list of 3 tensors as output
decoder = build_decoder()    # returns a `Model` with a single (None, 256, 256, 3) output

inp = Input(shape=input_shape)      # input_shape is (256, 256, 3), so the resulting tensor is (None, 256, 256, 3)
z_1 = encoder1(inp)                 # (None, 16, 16, 128)
out1, out2, z_2 = encoder2(inp)     # [(None, 17, 4), (None, 17, 4), (None, 16, 16, 128)]
concat = concatenate([z_1, z_2])    # (None, 16, 16, 256)
out3 = decoder(concat)              # (None, 256, 256, 3)

outputs = [out3, out1, out2]
losses = [loss1(), loss2(), loss2()]  # loss1 is a custom loss for the (None, 256, 256, 3) output and loss2 is another custom loss for the (None, 17, 4) outputs

model = Model(inputs=inp, outputs=outputs)
model.compile(loss=losses, optimizer=RMSprop(lr=start_lr))
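Training is then launched with a plain fit call, the targets being matched to the outputs positionally (loss1 with out3, loss2 with out1 and out2). A minimal sketch, where x_train, y_images, y_out1, y_out2 and batch_size are placeholders for my actual data and settings:

model.fit(x_train,
          [y_images, y_out1, y_out2],  # same order as `outputs`: decoder target first, then the two (None, 17, 4) targets
          batch_size=batch_size,
          epochs=60)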
Thanks a lot!