Jointly training two separate models using intermediate-layer outputs

Posted: 2019-11-25 15:28:07

Tags: tensorflow keras deep-learning autoencoder

I want to train two autoencoders:

X (input of first model) -> Y (output of first model (Yo: output of network)) 

Y (input of second model) -> X (output of second model (Xo: output of network)) 

I want a loss term on the latent spaces that ties the two networks together:

X (size: 1,64,2048) -> Z1  (size: 512,8,16 - channels_first) -> Y (size: 1,128,256)

Y (size: 1,128,256) -> Z2 (size: 512,8,16 - channels_first) -> X (size: 1,64,2048)
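
For concreteness, here is a minimal sketch of what this setup looks like in Keras; the conv stack, filter counts and strides below are only placeholders that reproduce the shapes listed above, not the actual network:

from keras import backend as K
from keras.layers import Input, Conv2D, Conv2DTranspose
from keras.models import Model

K.set_image_data_format('channels_first')

# --- first autoencoder: X -> Z1 -> Yo ---
X = Input(shape=(1, 64, 2048))
h = Conv2D(64, 3, strides=(2, 4), padding='same', activation='relu')(X)             # (64, 32, 512)
h = Conv2D(256, 3, strides=(2, 8), padding='same', activation='relu')(h)            # (256, 16, 64)
Z1 = Conv2D(512, 3, strides=(2, 4), padding='same', activation='relu')(h)           # (512, 8, 16)
h = Conv2DTranspose(128, 3, strides=(4, 4), padding='same', activation='relu')(Z1)  # (128, 32, 64)
Yo = Conv2DTranspose(1, 3, strides=(4, 4), padding='same', activation='sigmoid')(h) # (1, 128, 256)

# --- second autoencoder: Y -> Z2 -> Xo ---
Y = Input(shape=(1, 128, 256))
h = Conv2D(128, 3, strides=(4, 4), padding='same', activation='relu')(Y)             # (128, 32, 64)
Z2 = Conv2D(512, 3, strides=(4, 4), padding='same', activation='relu')(h)            # (512, 8, 16)
h = Conv2DTranspose(64, 3, strides=(2, 8), padding='same', activation='relu')(Z2)    # (64, 16, 128)
Xo = Conv2DTranspose(1, 3, strides=(4, 16), padding='same', activation='sigmoid')(h) # (1, 64, 2048)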

The idea is to connect the activation layers of the two autoencoders and train them jointly. After reading this question: How to use hidden layer activations to construct loss function and provide y_true during fitting in Keras?, I managed to do this with the two networks inside a single model:

diffLR = Lambda(lambda x: x[0] - x[1])([Z1, Z2])   # difference of the two latent tensors Z1 and Z2
model = Model(inputs=[X, Y], outputs=[diffLR, Yo, Xo])
yM1 = np.zeros((datasize_size, 512, 8, 16))        # all-zero target that drives the latent difference towards zero
history = model.fit([X, Y], [yM1, Y, X], batch_size=10, epochs=50, validation_split=0.2, shuffle=True)
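
The joint model is compiled with one loss per output; the choice of MSE everywhere and equal weights below is just an example of such a compile call, not something the setup requires:

model.compile(optimizer='adam',
              loss=['mse', 'mse', 'mse'],      # latent difference, Y reconstruction, X reconstruction
              loss_weights=[1.0, 1.0, 1.0])    # relative weighting of the three terms is a free choice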

The problem with this implementation is that when I actually want to predict the output for test data after training, I need to provide a Y matrix (which I don't have). So I would like to split the two networks into two separate models:

model1: X (input of first model) -> Y (output of first model (Yo:output of network)) 

model2: Y (input of second model) -> X (output of second model (Xo:output of network))

Then I connect the intermediate layers in the same way, but I get an error about the two graphs being disconnected.

model1 = Model(inputs=X, outputs=[Yo])
model2 = Model(inputs=Y, outputs=[Xo])
model1.fit(X, Y, epochs=1, batch_size=10, validation_split=0.2, shuffle=True)
model2.fit(Y, X, epochs=1, batch_size=10, validation_split=0.2,shuffle=True)

#Initialize the mid-layers for new network: 
Model1BN7 = model1.layers[24].output
Model2BN5 = model2.layers[19].output
diff12 = Lambda(lambda x: x[0] - x[1])([Model1BN7, Model2BN5])
diff21 = Lambda(lambda x: x[0] - x[1])([Model2BN5, Model1BN7])

newmodel1 = Model(inputs=X, outputs=[diff12, Yo])
newmodel2 = Model(inputs=Y, outputs=[diff21, Xo])
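
As far as I can tell, the disconnected-graph error happens because diff12 depends on Model2BN5, which can only be computed from model2's input Y, so a model whose only input is X cannot evaluate it. A variant like the sketch below does build, but it brings back the requirement of feeding Y at prediction time:

# Builds, because every output tensor is reachable from the listed inputs,
# but it again needs a Y matrix when predicting:
joint = Model(inputs=[X, Y], outputs=[diff12, diff21, Yo, Xo])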

Is there a way to do this?

0 Answers:

No answers yet.