Different results for an MNIST autoencoder due to different placement of the activation function

Date: 2018-05-23 09:58:16

Tags: tensorflow mnist autoencoder activation-function

I stumbled upon a strange phenomenon while playing around with variational autoencoders. The problem is quite simple to describe:

When defining the loss function of a VAE, you have to use some kind of reconstruction error. I decided to use my own implementation of cross-entropy, because I wasn't able to get reasonable results with any of the functions provided by tensorflow. It looks like this:

x_hat = tf.contrib.layers.fully_connected(fc2,
                                  input_dim,
                                  activation_fn=tf.sigmoid)

## Define the loss

reconstruction_loss = -tf.reduce_sum(
    x * tf.log(epsilon + x_hat) + 
    (1 - x) * tf.log(epsilon + 1 - x_hat),
    axis=1) 

It uses the output of the reconstruction layer, which applies the sigmoid function to squash its input into the [0, 1] range. Now, I wanted to apply the sigmoid inside the loss function instead and changed it to

x_hat = tf.contrib.layers.fully_connected(fc2,
                                  input_dim,
                                  activation_fn=None)

## Define the loss

reconstruction_loss = -tf.reduce_sum(
    x * tf.log(epsilon + tf.sigmoid(x_hat)) + 
    (1 - x) * tf.log(epsilon + 1 - tf.sigmoid(x_hat)),
    axis=1) 
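(As an aside, with the output layer emitting raw logits like in this second variant, the usual numerically stable way to get the same pixel-wise cross-entropy in TensorFlow would presumably be the built-in `tf.nn.sigmoid_cross_entropy_with_logits`. This is only a sketch for comparison, assuming `x_hat` holds raw logits; it is not the code I actually ran.)

## Sketch only: assumes activation_fn=None on the output layer, i.e. x_hat
## contains raw logits. The built-in op folds the sigmoid into the
## cross-entropy in a numerically stable way, so no epsilon term is needed.
reconstruction_loss = tf.reduce_sum(
    tf.nn.sigmoid_cross_entropy_with_logits(labels=x, logits=x_hat),
    axis=1)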

I was convinced that this should give nearly identical results. In practice, however, this second attempt produces strange grey pictures. The originals also appear blurry and much brighter. First the good version, then the "wrong" version:

[Images: reconstructions for the original code and for the 2nd attempt]

Can somebody explain to me what causes this strange behavior?

If you want to test it yourself, my source code is below. You have to comment the respective blocks in or out to get the results. Thanks!

import tensorflow as tf
from tensorflow.examples.tutorials.mnist import input_data
import matplotlib.pyplot as plt
import numpy as np

mnist = input_data.read_data_sets('MNIST_data/', one_hot=True)
n_samples = mnist.train.num_examples
input_dim = mnist.train.images[0].shape[0]
inter_dim = 256
encoding_dim = 5
epsilon = 1e-10
learning_rate = 1e-4
n_epochs = 20
batch_size = 100
width = 28

## Define the variational autoencoder model 

x = tf.placeholder(dtype=tf.float32,
               shape=[None, input_dim],
               name='x')

fc1 = tf.contrib.layers.fully_connected(x,
                                   inter_dim,
                                   activation_fn=tf.nn.relu)

z_mean = tf.contrib.layers.fully_connected(fc1,
                                       encoding_dim,
                                       activation_fn=None)
z_log_var = tf.contrib.layers.fully_connected(fc1,
                                          encoding_dim,
                                          activation_fn=None)

## Reparameterization trick: z = z_mean + sigma * eps with eps ~ N(0, 1)
eps = tf.random_normal(shape=tf.shape(z_log_var),
                   mean=0,
                   stddev=1,
                   dtype=tf.float32)
z = z_mean + tf.exp(z_log_var / 2) * eps

fc2 = tf.contrib.layers.fully_connected(z,
                                    inter_dim,
                                    activation_fn=tf.nn.relu)

x_hat = tf.contrib.layers.fully_connected(fc2,
                                      input_dim,
                                      activation_fn=tf.sigmoid)
                                     #activation_fn=None)
## Define the loss

reconstruction_loss = -tf.reduce_sum(
    x * tf.log(epsilon + x_hat) + 
    (1 - x) * tf.log(epsilon + 1 - x_hat),
    axis=1) 

## ALTERNATIVE LOSS W/ APPLYING SIGMOID, REMOVED ACTIVATION FROM OUTPUT LAYER
'''
reconstruction_loss = -tf.reduce_sum(
    x * tf.log(epsilon + tf.sigmoid(x_hat)) + 
    (1 - x) * tf.log(epsilon + 1 - tf.sigmoid(x_hat)),
    axis=1)
'''

## Closed-form KL divergence between N(z_mean, exp(z_log_var)) and the N(0, 1) prior
KL_div = -.5 * tf.reduce_sum(
    1 + z_log_var - tf.square(z_mean) - tf.exp(z_log_var),
    axis=1)

total_loss = tf.reduce_mean(reconstruction_loss + KL_div)

## Define the training operator

train_op = tf.train.AdamOptimizer(learning_rate=learning_rate).minimize(total_loss)

## Run it

with tf.Session() as sess:

    sess.run(tf.global_variables_initializer())

    for epoch in range(n_epochs):
        for _ in range(n_samples // batch_size):
            batch = mnist.train.next_batch(batch_size)

            _, loss, recon_loss, KL_loss = sess.run([train_op,
                                                total_loss,
                                                reconstruction_loss,
                                                KL_div],
                                        feed_dict={x:batch[0]})
        print('[Epoch {}] loss: {}'.format(epoch, loss))
    print('Training Done')

    ## Reconstruct a few samples to validate the training

    batch = mnist.train.next_batch(100)

    x_reconstructed = sess.run(x_hat, feed_dict={x:batch[0]})

    n = np.sqrt(batch_size).astype(np.int32)
    I_reconstructed = np.empty((width*n, 2*width*n))
    for i in range(n):
        for j in range(n):
            x = np.concatenate(
                (x_reconstructed[i*n+j, :].reshape(width, width),
                 batch[0][i*n+j, :].reshape(width, width)),
                axis=1
            )
            I_reconstructed[i*width:(i+1)*width, j*2*width:(j+1)*2*width] = x

    fig = plt.figure()
    plt.imshow(I_reconstructed, cmap='gray')
    plt.show()

EDIT 1: Solution

Thanks to @xdurch0, I realized that the reconstructed output is no longer rescaled via the sigmoid function. This means the sigmoid has to be applied to the images before plotting them. Simply modify the output:

x_reconstructed = sess.run(tf.sigmoid(x_hat), feed_dict={x:batch[0]})
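Alternatively (a sketch under the same assumption that the output layer uses `activation_fn=None`), a dedicated sigmoid node can be defined purely for visualization, so the loss keeps working on raw logits while the plotted reconstructions are squashed into [0, 1]; `x_decoded` is a hypothetical name and is not part of the code above:

## Hypothetical visualization-only node: the loss still uses the raw logits in
## x_hat, while x_decoded maps them to [0, 1] for plotting.
x_decoded = tf.sigmoid(x_hat)

## ...and later, when reconstructing samples:
x_reconstructed = sess.run(x_decoded, feed_dict={x: batch[0]})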

0 Answers:

No answers yet.