Autoencoder in the Keras examples

Time: 2019-06-07 06:34:10

Tags: keras autoencoder

In the Keras documentation there is a DAE (denoising autoencoder) example. Here is the link: https://keras.io/examples/mnist_denoising_autoencoder/

As we all know, an autoencoder consists of an encoder network and a decoder network, and the encoder's output is the decoder's input. But when I went through the code again and again, I found that in this example the decoder's input (named latent_inputs) is also created with Input, just like the encoder's input. This confused me.

Here is the relevant code segment:
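(For context, the snippet relies on imports and hyperparameters defined earlier in the example. The following is a sketch reconstructed from the linked page; the exact values there may differ.)

# Imports and hyperparameters assumed from the linked example (sketch):
from keras.layers import Activation, Dense, Input
from keras.layers import Conv2D, Flatten, Reshape, Conv2DTranspose
from keras.models import Model
from keras import backend as K

input_shape = (28, 28, 1)   # MNIST images, one channel (assumed)
kernel_size = 3             # assumed example value
latent_dim = 16             # assumed example value
layer_filters = [32, 64]    # Conv2D filters per encoder layer (assumed)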

# Build the Autoencoder Model
# First build the Encoder Model
inputs = Input(shape=input_shape, name='encoder_input')
x = inputs
# Stack of Conv2D blocks
# Notes:
# 1) Use Batch Normalization before ReLU on deep networks
# 2) Use MaxPooling2D as alternative to strides>1
# - faster but not as good as strides>1
for filters in layer_filters:
    x = Conv2D(filters=filters,
               kernel_size=kernel_size,
               strides=2,
               activation='relu',
               padding='same')(x)

# Shape info needed to build Decoder Model
shape = K.int_shape(x)

# Generate the latent vector
x = Flatten()(x)
latent = Dense(latent_dim, name='latent_vector')(x)

# Instantiate Encoder Model
encoder = Model(inputs, latent, name='encoder')
encoder.summary()

# Build the Decoder Model
latent_inputs = Input(shape=(latent_dim,), name='decoder_input')
x = Dense(shape[1] * shape[2] * shape[3])(latent_inputs)
x = Reshape((shape[1], shape[2], shape[3]))(x)
# Stack of Transposed Conv2D blocks
# Notes:
# 1) Use Batch Normalization before ReLU on deep networks
# 2) Use UpSampling2D as alternative to strides>1
# - faster but not as good as strides>1
for filters in layer_filters[::-1]:
    x = Conv2DTranspose(filters=filters,
                        kernel_size=kernel_size,
                        strides=2,
                        activation='relu',
                        padding='same')(x)

x = Conv2DTranspose(filters=1,
                    kernel_size=kernel_size,
                    padding='same')(x)

outputs = Activation('sigmoid', name='decoder_output')(x)

# Instantiate Decoder Model
decoder = Model(latent_inputs, outputs, name='decoder')
decoder.summary()

Note that the decoder takes latent_inputs as its input, but latent_inputs comes from Input, not from latent, the encoder's output.

Can someone tell me why it is done this way? Or is this a mistake in the documentation? Thanks a lot.

1 Answer:

Answer 0 (score: 0)

You are confusing the naming of the tensors passed to Model(...) with the input the decoder actually receives.

In this code, two Model(...) instances are created, one for the encoder and one for the decoder. When you create the final autoencoder model, as in the figure, you need to feed the encoder's output into the decoder's input. [figure: encoder and decoder composed into an autoencoder]

As you noted, "the decoder uses latent_inputs as its input, but latent_inputs comes from Input" — that Input is the input of the standalone decoder model, not of the autoencoder model.

encoder = Model(inputs, latent, name='encoder') builds the encoder model, and decoder = Model(latent_inputs, outputs, name='decoder') builds the decoder model. latent_inputs is only a placeholder with the same shape as the encoder's output latent; when the two models are composed, the encoder's actual output is substituted for it.

The final autoencoder model is then created with:

autoencoder = Model(inputs, decoder(encoder(inputs)), name='autoencoder')

Here, the autoencoder's input is inputs, the same tensor the encoder model was built on, and the decoder model's output is the autoencoder's final output. To compute it, inputs is first fed through encoder(...), and the encoder's output is then fed to the decoder via decoder(encoder(inputs)).
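The key point is that a Keras Model is itself callable, like a layer. Here is a minimal, self-contained sketch (toy shapes, not part of the example) showing how a decoder's Input placeholder is substituted by the encoder's actual output when the two models are composed:

# A Keras Model can be called on a tensor, just like a layer.
from keras.layers import Input, Dense
from keras.models import Model

enc_in = Input(shape=(8,))
enc_out = Dense(2, name='code')(enc_in)
toy_encoder = Model(enc_in, enc_out, name='toy_encoder')

dec_in = Input(shape=(2,))            # placeholder matching the code shape
dec_out = Dense(8, name='recon')(dec_in)
toy_decoder = Model(dec_in, dec_out, name='toy_decoder')

# dec_in is never fed directly; toy_encoder's output replaces it here:
ae_in = Input(shape=(8,))
toy_autoencoder = Model(ae_in, toy_decoder(toy_encoder(ae_in)))
toy_autoencoder.summary()

The summary of the composed model shows toy_encoder and toy_decoder as single nodes, mirroring the figure above.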

For simplicity, you can also build a single model directly, like this:

# Build the Autoencoder Model
# Encoder
inputs = Input(shape=input_shape, name='encoder_input')
x = inputs
for filters in layer_filters:
    x = Conv2D(filters=filters,
               kernel_size=kernel_size,
               strides=2,
               activation='relu',
               padding='same')(x)
shape = K.int_shape(x)
x = Flatten()(x)
latent = Dense(latent_dim, name='latent_vector')(x)

# Decoder

x = Dense(shape[1] * shape[2] * shape[3])(latent)
x = Reshape((shape[1], shape[2], shape[3]))(x)

for filters in layer_filters[::-1]:
    x = Conv2DTranspose(filters=filters,
                        kernel_size=kernel_size,
                        strides=2,
                        activation='relu',
                        padding='same')(x)

x = Conv2DTranspose(filters=1,
                    kernel_size=kernel_size,
                    padding='same')(x)

outputs = Activation('sigmoid', name='decoder_output')(x)


autoencoder = Model(inputs, outputs, name='autoencoder')
autoencoder.summary()
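For completeness, here is a hedged sketch of how such a model is typically trained as a denoising autoencoder: corrupt the inputs with noise and use the clean images as reconstruction targets. The noise level, loss, and training settings below are illustrative choices, not necessarily those of the linked example.

import numpy as np
from keras.datasets import mnist

# Load MNIST, scale to [0, 1], and add a channel axis -> (N, 28, 28, 1)
(x_train, _), _ = mnist.load_data()
x_train = x_train.astype('float32') / 255.
x_train = np.expand_dims(x_train, -1)

# Corrupt the inputs; the clean images remain the targets
noise = np.random.normal(loc=0.0, scale=0.5, size=x_train.shape)  # illustrative noise level
x_train_noisy = np.clip(x_train + noise, 0., 1.)

autoencoder.compile(loss='mse', optimizer='adam')
autoencoder.fit(x_train_noisy, x_train, epochs=10, batch_size=128)  # illustrative settings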