Keras Variational Autoencoder - latent data

Date: 2019-03-09 15:16:16

Tags: python tensorflow keras deep-learning autoencoder

I am struggling with the Keras Variational Autoencoder from their GitHub. I am trying to send data through the bottleneck and then feed it into the decoder's input. Am I correct that, after training the model, you encode the input data and then look at what the `z` variable below produces?

# imports from the Keras VAE example (assumed; the excerpt below also relies on
# input_shape, filters, kernel_size, latent_dim, original_dim, batch_size,
# epochs, and the MNIST arrays x_train/x_test being defined as in that example)
from keras.layers import Input, Dense, Lambda, Flatten, Reshape, Conv2D, Conv2DTranspose
from keras.models import Model
from keras.losses import mse
from keras import backend as K

# VAE model = encoder + decoder
# build encoder model
inputs = Input(shape=input_shape, name='encoder_input')
x = inputs
for i in range(2):
    filters *= 2
    x = Conv2D(filters=filters,
               kernel_size=kernel_size,
               activation='relu',
               strides=2,
               padding='same')(x)

# shape info needed to build decoder model
shape = K.int_shape(x)

# generate latent vector Q(z|X)
x = Flatten()(x)
x = Dense(16, activation='relu')(x)
z_mean = Dense(latent_dim, name='z_mean')(x)
z_log_var = Dense(latent_dim, name='z_log_var')(x)

# reparameterization trick: z = mu + sigma * eps, eps ~ N(0, 1)
# (this function is referenced below; definition taken from the Keras VAE example)
def sampling(args):
    z_mean, z_log_var = args
    batch = K.shape(z_mean)[0]
    dim = K.int_shape(z_mean)[1]
    epsilon = K.random_normal(shape=(batch, dim))
    return z_mean + K.exp(0.5 * z_log_var) * epsilon

# use reparameterization trick to push the sampling out as input
# note that "output_shape" isn't necessary with the TensorFlow backend
z = Lambda(sampling, output_shape=(latent_dim,), name='z')([z_mean, z_log_var])

# instantiate encoder model
encoder = Model(inputs, [z_mean, z_log_var, z], name='encoder')

# build decoder model
latent_inputs = Input(shape=(latent_dim,), name='z_sampling')
x = Dense(shape[1] * shape[2] * shape[3], activation='relu')(latent_inputs)
x = Reshape((shape[1], shape[2], shape[3]))(x)

for i in range(2):
    x = Conv2DTranspose(filters=filters,
                        kernel_size=kernel_size,
                        activation='relu',
                        strides=2,
                        padding='same')(x)
    filters //= 2

outputs = Conv2DTranspose(filters=1,
                          kernel_size=kernel_size,
                          activation='sigmoid',
                          padding='same',
                          name='decoder_output')(x)

# instantiate decoder model
decoder = Model(latent_inputs, outputs, name='decoder')

# instantiate VAE model
outputs = decoder(encoder(inputs)[2])
vae = Model(inputs, outputs, name='vae')

# __main__
models = (encoder, decoder)
reconstruction_loss = mse(K.flatten(inputs), K.flatten(outputs))

reconstruction_loss *= original_dim
kl_loss = 1 + z_log_var - K.square(z_mean) - K.exp(z_log_var)
kl_loss = K.sum(kl_loss, axis=-1)
kl_loss *= -0.5
vae_loss = K.mean(reconstruction_loss + kl_loss)
vae.add_loss(vae_loss)
vae.compile(optimizer='rmsprop')

vae.fit(x_train, epochs=epochs, batch_size=batch_size, validation_data=(x_test, None))
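For intuition, the KL term in the loss above can be checked with a small numpy sketch (hypothetical values, not the trained model): a posterior equal to the N(0, I) prior has zero KL divergence, and it grows as the posterior moves away from the prior.

```python
import numpy as np

# numpy version of the KL term used in the loss above:
# kl = -0.5 * sum(1 + log_var - mu^2 - exp(log_var))
def kl_divergence(z_mean, z_log_var):
    return -0.5 * np.sum(1 + z_log_var - np.square(z_mean) - np.exp(z_log_var),
                         axis=-1)

# posterior N(0, I) matches the prior exactly, so its KL is zero
kl_at_prior = kl_divergence(np.zeros((1, 2)), np.zeros((1, 2)))

# shifting the mean to 1 in each of the 2 dimensions gives 0.5 per dimension
kl_shifted = kl_divergence(np.ones((1, 2)), np.zeros((1, 2)))
```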

Thank you.

1 answer:

Answer 0 (score: 1)

Please follow this autoencoder tutorial written by the Keras author. The last section covers the VAE and explains it step by step.

https://blog.keras.io/building-autoencoders-in-keras.html

Since the distribution of z is learned, you can sample from it to generate new digits.
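Concretely, a minimal sketch of that generation step (assuming the `decoder` model and `latent_dim` from the question's code; the decoder call is commented out because it needs the trained model):

```python
import numpy as np

# draw latent vectors from the N(0, I) prior that the KL term
# pushes the learned posterior towards
latent_dim = 2  # assumed value; must match the latent_dim the model was trained with
z_samples = np.random.normal(size=(10, latent_dim))

# with the trained decoder from the question's code, decoding these
# samples would produce new digit images:
# generated = decoder.predict(z_samples)
```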