Splitting a trained autoencoder into an encoder and a decoder

Date: 2021-02-06 09:24:14

Tags: tensorflow keras keras-layer

I realize now that it would have been a good idea to implement it like this from the start. However, I already have a trained and fine-tuned autoencoder that looks like this:

Model: "autoencoder"
_________________________________________________________________
Layer (type)                 Output Shape              Param #   
=================================================================
user_input (InputLayer)      [(None, 5999)]            0         
_________________________________________________________________
user_e00 (Dense)             (None, 64)                384000    
_________________________________________________________________
user_e01 (Dense)             (None, 64)                4160      
_________________________________________________________________
user_e02 (Dense)             (None, 64)                4160      
_________________________________________________________________
user_e03 (Dense)             (None, 64)                4160      
_________________________________________________________________
user_out (Dense)             (None, 32)                2080      
_________________________________________________________________
emb_dropout (Dropout)        (None, 32)                0         
_________________________________________________________________
user_d00 (Dense)             (None, 64)                2112      
_________________________________________________________________
user_d01 (Dense)             (None, 64)                4160      
_________________________________________________________________
user_d02 (Dense)             (None, 64)                4160      
_________________________________________________________________
user_d03 (Dense)             (None, 64)                4160      
_________________________________________________________________
user_res (Dense)             (None, 5999)              389935    
=================================================================
Total params: 803,087
Trainable params: 0
Non-trainable params: 803,087
_________________________________________________________________

Now I want to split it into an encoder and a decoder. I believe I have found the right way to get the encoder, which is:

encoder_in = model.input
encoder_out = model.get_layer(name='user_out').output
encoder = Model(encoder_in, encoder_out, name='encoder')

For the decoder, I wanted to do something similar:

decoder_in = model.get_layer("user_d00").input
decoder_out = model.output
decoder = Model(decoder_in, decoder_out, name='decoder')

But this throws:

WARNING:tensorflow:Functional inputs must come from `tf.keras.Input` (thus holding past layer metadata), they cannot be the output of a previous non-Input layer. Here, a tensor specified as input to "decoder" was not an Input tensor, it was generated by layer emb_dropout.
Note that input tensors are instantiated via `tensor = tf.keras.Input(shape)`.
The tensor that caused the issue was: emb_dropout/cond_3/Identity:0 

I believe I have to create an Input layer with the output shape of emb_dropout and connect it to user_d00 (the Dropout layer itself is no longer needed, since training is finished). Does anyone know how to do this correctly?
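A minimal sketch of the approach described above, using a toy model with the same layer names as in the summary (the real `model` comes from training; the layer sizes and dropout rate here are assumptions, and the decoder here re-applies only two of the trained layers for brevity):

```python
import tensorflow as tf
from tensorflow.keras import Input, Model
from tensorflow.keras.layers import Dense, Dropout

# Stand-in autoencoder mirroring the summary in the question.
inp = Input(shape=(5999,), name='user_input')
x = Dense(64, name='user_e00')(inp)
bottleneck = Dense(32, name='user_out')(x)
x = Dropout(0.2, name='emb_dropout')(bottleneck)
x = Dense(64, name='user_d00')(x)
out = Dense(5999, name='user_res')(x)
model = Model(inp, out, name='autoencoder')

# Encoder: reuse the existing graph up to the bottleneck layer.
encoder = Model(model.input,
                model.get_layer('user_out').output,
                name='encoder')

# Decoder: create a fresh Input matching the bottleneck's output shape,
# then re-apply the trained decoder layers to it, skipping emb_dropout
# (Dropout is an identity at inference time anyway).
decoder_in = Input(shape=(32,), name='decoder_input')
x = decoder_in
for name in ['user_d00', 'user_res']:
    x = model.get_layer(name)(x)   # reuses the trained weights
decoder = Model(decoder_in, x, name='decoder')
```

Because the decoder calls the original layer objects, it shares their trained weights, and `decoder(encoder(data))` should reproduce `model(data)` at inference time.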

0 answers:

No answers