How do I save part of a network?

Time: 2019-05-14 12:59:03

Tags: tensorflow keras

I have built an autoencoder that consists of an encoder part and a decoder part. I managed to separate the encoder from the full network, but I am having some trouble with the decoder part.

This part works:

encoder = tf.keras.Model(inputs=autoencoder.input, outputs=autoencoder.layers[5].output)

This part does not:

decoder = tf.keras.Model(inputs=autoencoder.layers[6].input, outputs=autoencoder.output)

Error:

W0514 14:57:48.965506 78976 network.py:1619] Model inputs must come from `tf.keras.Input` (thus holding past layer metadata), they cannot be the output of a previous non-Input layer. Here, a tensor specified as input to "model_15" was not an Input tensor, it was generated by layer flatten. Note that input tensors are instantiated via `tensor = tf.keras.Input(shape)`. The tensor that caused the issue was: flatten/Reshape:0

Any ideas on what to try?

Thanks

/ mikael

Edit: for kruxx

autoencoder = tf.keras.models.Sequential()

# Encoder Layers
autoencoder.add(tf.keras.layers.Conv2D(16, (3, 3), activation='relu', padding='same', input_shape=x_train_tensor.shape[1:]))
autoencoder.add(tf.keras.layers.MaxPooling2D((2, 2), padding='same'))
autoencoder.add(tf.keras.layers.Conv2D(8, (3, 3), activation='relu', padding='same'))
autoencoder.add(tf.keras.layers.MaxPooling2D((2, 2), padding='same'))
autoencoder.add(tf.keras.layers.Conv2D(8, (3, 3), strides=(2, 2), activation='relu', padding='same'))

# Flatten encoding for visualization
autoencoder.add(tf.keras.layers.Flatten())
autoencoder.add(tf.keras.layers.Reshape((4, 4, 8)))

# Decoder Layers
autoencoder.add(tf.keras.layers.Conv2D(8, (3, 3), activation='relu', padding='same'))
autoencoder.add(tf.keras.layers.UpSampling2D((2, 2)))
autoencoder.add(tf.keras.layers.Conv2D(8, (3, 3), activation='relu', padding='same'))
autoencoder.add(tf.keras.layers.UpSampling2D((2, 2)))
autoencoder.add(tf.keras.layers.Conv2D(16, (3, 3), activation='relu'))
autoencoder.add(tf.keras.layers.UpSampling2D((2, 2)))
autoencoder.add(tf.keras.layers.Conv2D(1, (3, 3), activation='sigmoid', padding='same'))
> Model: "sequential"
> _________________________________________________________________ 
> Layer (type).................Output Shape..............Param #   
> ================================================================= 
> conv2d (Conv2D)..............(None, 28, 28, 16)........160       
> _________________________________________________________________
> max_pooling2d (MaxPooling2D).(None, 14, 14, 16)........0         
> _________________________________________________________________ 
> conv2d_1 (Conv2D)............(None, 14, 14, 8).........1160      
> _________________________________________________________________
> max_pooling2d_1 (MaxPooling2.(None, 7, 7, 8)...........0         
> _________________________________________________________________ 
> conv2d_2 (Conv2D)............(None, 4, 4, 8)...........584       
> _________________________________________________________________ 
> flatten (Flatten)............(None, 128)...............0         
> _________________________________________________________________ 
> reshape (Reshape)............(None, 4, 4, 8)...........0         
> _________________________________________________________________ 
> conv2d_3 (Conv2D)............(None, 4, 4, 8)...........584       
> _________________________________________________________________ 
> up_sampling2d (UpSampling2D).(None, 8, 8, 8)...........0         
> _________________________________________________________________ 
> conv2d_4 (Conv2D)............(None, 8, 8, 8)...........584       
> _________________________________________________________________ 
> up_sampling2d_1 (UpSampling2 (None, 16, 16, 8).........0         
> _________________________________________________________________ 
> conv2d_5 (Conv2D)............(None, 14, 14, 16)........1168      
> _________________________________________________________________ 
> up_sampling2d_2 (UpSampling2.(None, 28, 28, 16)........0         
> _________________________________________________________________ 
> conv2d_6 (Conv2D)............(None, 28, 28, 1).........145       
> ================================================================= 
> Total params: 4,385 
> Trainable params: 4,385 
> Non-trainable params: 0
> ______________________________________

1 Answer:

Answer 0: (score: 0)

I would approach this problem in a different way:

# Encoder model:
encoder_input = Input(...)

# Encoder Hidden Layers
encoded = Dense()(...)

encoder_model = Model(inputs=[encoder_input], outputs=encoded)

# Decoder model:
decoder_input = Input(...)

# Decoder Hidden Layers

decoded = Dense()(...)

decoder_model = Model(inputs=[decoder_input], outputs=decoded)

Then define the autoencoder as:

autoencoder = Model(inputs=[encoder_input], outputs=decoder_model(encoder_model(encoder_input)))
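
Applied to the architecture posted in the question, a minimal sketch of this approach could look like the following. It assumes 28x28x1 inputs (as the model summary suggests) and fills in the layer sizes from the posted Sequential model; the names encoder_model and decoder_model follow the pseudocode above:

import tensorflow as tf

# Encoder as its own model, ending at the flat 128-dim code
encoder_input = tf.keras.Input(shape=(28, 28, 1))
x = tf.keras.layers.Conv2D(16, (3, 3), activation='relu', padding='same')(encoder_input)
x = tf.keras.layers.MaxPooling2D((2, 2), padding='same')(x)
x = tf.keras.layers.Conv2D(8, (3, 3), activation='relu', padding='same')(x)
x = tf.keras.layers.MaxPooling2D((2, 2), padding='same')(x)
x = tf.keras.layers.Conv2D(8, (3, 3), strides=(2, 2), activation='relu', padding='same')(x)
encoded = tf.keras.layers.Flatten()(x)
encoder_model = tf.keras.Model(inputs=encoder_input, outputs=encoded)

# Decoder as its own model, starting from a proper tf.keras.Input
decoder_input = tf.keras.Input(shape=(128,))
y = tf.keras.layers.Reshape((4, 4, 8))(decoder_input)
y = tf.keras.layers.Conv2D(8, (3, 3), activation='relu', padding='same')(y)
y = tf.keras.layers.UpSampling2D((2, 2))(y)
y = tf.keras.layers.Conv2D(8, (3, 3), activation='relu', padding='same')(y)
y = tf.keras.layers.UpSampling2D((2, 2))(y)
y = tf.keras.layers.Conv2D(16, (3, 3), activation='relu')(y)
y = tf.keras.layers.UpSampling2D((2, 2))(y)
decoded = tf.keras.layers.Conv2D(1, (3, 3), activation='sigmoid', padding='same')(y)
decoder_model = tf.keras.Model(inputs=decoder_input, outputs=decoded)

# Compose the two sub-models into the full autoencoder; training it trains both
# parts, and encoder_model / decoder_model can then be saved and used separately.
autoencoder = tf.keras.Model(inputs=encoder_input,
                             outputs=decoder_model(encoder_model(encoder_input)))

Because each sub-model has its own Input layer, there is no need to slice layers out of the trained network afterwards, which is what triggered the "Model inputs must come from tf.keras.Input" warning in the question.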