Deep convolutional autoencoder problem - encoding size too big

Posted: 2019-06-08 15:29:10

Tags: python-3.x keras deep-learning conv-neural-network autoencoder

I recently built a convolutional autoencoder and have since built a slew of other networks on top of it. I've only now realized I made a fundamental mistake (which I should have spotted much earlier). I assumed my encoding layer (the output of the MaxPooling layer named 'encoder', see below) had dimension 'encoding_dim'. It actually has far more: I wanted 144, but I got 144 x 12 x 12 (which is actually larger than the 48x48x3 input).
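The arithmetic behind the mismatch: Conv2D's first argument sets the number of output channels, while each (2, 2) MaxPooling2D only halves the spatial dimensions, so two poolings turn 48x48 into 12x12 but the channel count stays at encoding_dim. A quick check of the sizes involved:

```python
image_dim = 48
encoding_dim = 144

input_size = image_dim * image_dim * 3               # 48*48*3 = 6912 values in
# two (2, 2) max-poolings halve height and width twice: 48 -> 24 -> 12
encoded_h = encoded_w = image_dim // 2 // 2          # 12
encoded_size = encoded_h * encoded_w * encoding_dim  # 12*12*144 = 20736 values out

print(input_size, encoded_size)  # 6912 20736 -- the "code" is 3x the input
```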

Here is the code for the autoencoder:

Architecture

from keras.models import Sequential
from keras.layers import BatchNormalization, Conv2D, MaxPooling2D, UpSampling2D

image_dim = 48      # images are 48x48x3
encoding_dim = 144  # intended encoding size (also used as the filter count)

input_shape = (image_dim, image_dim, 3)

# Build model
autoencoder = Sequential()

# Encoder: two conv + pool blocks; each pooling halves the spatial dims
autoencoder.add(Conv2D(encoding_dim, (3, 3), padding='same', activation='relu', input_shape=input_shape,
                       kernel_initializer='random_uniform', bias_initializer='zeros'))
autoencoder.add(BatchNormalization())
autoencoder.add(MaxPooling2D((2, 2), padding='same'))

autoencoder.add(Conv2D(encoding_dim, (3, 3), padding='same', activation='relu',
                       kernel_initializer='random_uniform', bias_initializer='zeros'))
autoencoder.add(BatchNormalization())
autoencoder.add(MaxPooling2D((2, 2), padding='same', name='encoder'))  # output: (12, 12, encoding_dim)

# Decoder: mirrors the encoder; each upsampling doubles the spatial dims
autoencoder.add(Conv2D(encoding_dim, (3, 3), padding='same', activation='relu',
                       kernel_initializer='random_uniform', bias_initializer='zeros'))
autoencoder.add(BatchNormalization())
autoencoder.add(UpSampling2D((2, 2)))

autoencoder.add(Conv2D(encoding_dim, (3, 3), padding='same', activation='relu',
                       kernel_initializer='random_uniform', bias_initializer='zeros'))
autoencoder.add(BatchNormalization())
autoencoder.add(UpSampling2D((2, 2)))

# Output layer: back to 3 channels, with a sigmoid for pixel values in [0, 1]
autoencoder.add(Conv2D(3, (10, 10), padding='same', activation='sigmoid',
                       kernel_initializer='random_uniform', bias_initializer='zeros'))
autoencoder.add(BatchNormalization())
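For comparison, one common way to pin the code to exactly encoding_dim values is to use far fewer conv filters, then Flatten the pooled feature map and project it through a Dense(encoding_dim) bottleneck. This is a shape-bookkeeping sketch of that idea (the filter count of 32 and the Flatten + Dense layers are assumptions, not part of the code above):

```python
image_dim, encoding_dim = 48, 144

def conv_same(shape, filters):
    h, w, _ = shape
    return (h, w, filters)          # 'same' padding keeps spatial dims

def maxpool(shape):
    h, w, c = shape
    return (h // 2, w // 2, c)      # (2, 2) pooling halves height and width

shape = (image_dim, image_dim, 3)
shape = maxpool(conv_same(shape, 32))  # (24, 24, 32) -- modest filter count
shape = maxpool(conv_same(shape, 32))  # (12, 12, 32)

flat = shape[0] * shape[1] * shape[2]  # Flatten: 4608 values
code = encoding_dim                    # Dense(encoding_dim): exactly 144 values

print(shape, flat, code)  # (12, 12, 32) 4608 144
```

The decoder would then mirror this with Dense back to 4608 units, a Reshape to (12, 12, 32), and the upsampling blocks.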

Here is the model summary (image_dim is 48, i.e. 48x48x3 images, and encoding_dim is 144):

Model summary

_________________________________________________________________
Layer (type)                 Output Shape              Param #   
=================================================================
conv2d_1 (Conv2D)            (None, 48, 48, 144)       4032      
_________________________________________________________________
batch_normalization_1 (Batch (None, 48, 48, 144)       576       
_________________________________________________________________
max_pooling2d_1 (MaxPooling2 (None, 24, 24, 144)       0         
_________________________________________________________________
conv2d_2 (Conv2D)            (None, 24, 24, 144)       186768    
_________________________________________________________________
batch_normalization_2 (Batch (None, 24, 24, 144)       576       
_________________________________________________________________
encoder (MaxPooling2D)       (None, 12, 12, 144)       0         
_________________________________________________________________
conv2d_3 (Conv2D)            (None, 12, 12, 144)       186768    
_________________________________________________________________
batch_normalization_3 (Batch (None, 12, 12, 144)       576       
_________________________________________________________________
up_sampling2d_1 (UpSampling2 (None, 24, 24, 144)       0         
_________________________________________________________________
conv2d_4 (Conv2D)            (None, 24, 24, 144)       186768    
_________________________________________________________________
batch_normalization_4 (Batch (None, 24, 24, 144)       576       
_________________________________________________________________
up_sampling2d_2 (UpSampling2 (None, 48, 48, 144)       0         
_________________________________________________________________
conv2d_5 (Conv2D)            (None, 48, 48, 3)         43203     
_________________________________________________________________
batch_normalization_5 (Batch (None, 48, 48, 3)         12        
=================================================================
Total params: 609,855
Trainable params: 608,697
Non-trainable params: 1,158
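The Param # column follows from (kernel_h * kernel_w * in_channels + 1) * filters for each Conv2D and 4 * channels for each BatchNormalization; checking a few rows against the summary:

```python
def conv_params(kh, kw, c_in, filters):
    # one (kh x kw x c_in) kernel plus one bias per filter
    return (kh * kw * c_in + 1) * filters

def bn_params(channels):
    # gamma, beta, moving mean, moving variance per channel
    return 4 * channels

print(conv_params(3, 3, 3, 144))    # conv2d_1: 4032
print(conv_params(3, 3, 144, 144))  # conv2d_2/3/4: 186768
print(bn_params(144))               # batch_normalization_1..4: 576
print(conv_params(10, 10, 144, 3))  # conv2d_5: 43203
```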

This also breaks the other networks I built on top of it, so I will need to adjust the architecture and retrain everything.

Could someone explain where I went wrong and, more importantly, how to adjust the filters/kernels to make sure my encoding layer really does have 'encoding_dim' dimensions?

0 Answers:

There are no answers yet.