Variational autoencoder: implementing warm-up in Keras

时间:2017-03-14 13:21:22

标签: deep-learning keras autoencoder

I recently read this paper, which introduces a procedure called "Warm-Up" (WU) that consists of multiplying the KL-divergence term of the loss by a variable whose value depends on the epoch number (it evolves linearly from 0 to 1).
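In other words, the loss being optimized is xent_loss + beta * kl_loss, with beta following a linear ramp. As a standalone sketch of just the schedule (the function name and the 10-epoch horizon are my own choices, matching the code below):

def beta_schedule(epoch, n_warmup=10):
    # linear warm-up: 0 at epoch 0, 1 from epoch n_warmup onward
    return min(epoch / float(n_warmup), 1.0)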

I was wondering if this is a good way of doing that:

from keras import backend as K
from keras import objectives
import tensorflow as tf

beta = K.variable(value=0.0)

def vae_loss(x, x_decoded_mean):
    # reconstruction term: cross-entropy
    xent_loss = K.mean(objectives.categorical_crossentropy(x, x_decoded_mean))

    # KL divergence, estimated by Monte-Carlo over n_sample draws
    for k in range(n_sample):
        epsilon = K.random_normal(shape=(batch_size, latent_dim), mean=0.,
                                  std=1.0)  # used for every z_i sampling
        # sample each layer of latent variables (reparameterization trick)
        for mean, var in zip(means, variances):
            z_ = mean + K.exp(K.log(var) / 2) * epsilon

            # build z by concatenating the per-layer samples
            try:
                z = tf.concat([z, z_], -1)
            except (NameError, TypeError):  # first layer of this draw
                z = z_

            # accumulate log q(z|x); log_normal2 is a Gaussian
            # log-density helper defined elsewhere
            try:
                loss += K.sum(log_normal2(z_, mean, K.log(var)), -1)
            except NameError:  # first term
                loss = K.sum(log_normal2(z_, mean, K.log(var)), -1)
        print("z", z)
        # subtract log p(z) under the standard-normal prior
        loss -= K.sum(log_stdnormal(z), -1)
        z = None
    kl_loss = loss / n_sample
    print('kl loss:', kl_loss)

    # weight the KL term by the warm-up coefficient
    result = beta * kl_loss + xent_loss
    return result

# define callback to change the value of beta at each epoch
def warmup(epoch):
    # beta ramps linearly from 0 to 1 over the first 10 epochs, then stays at 1
    value = (epoch / 10.0) * (epoch <= 10.0) + 1.0 * (epoch > 10.0)
    print("beta:", value)
    beta = K.variable(value=value)

from keras.callbacks import LambdaCallback
wu_cb = LambdaCallback(on_epoch_end=lambda epoch, log: warmup(epoch))


# train model
vae.fit(
    padded_X_train[:last_train,:,:],
    padded_X_train[:last_train,:,:],
    batch_size=batch_size,
    nb_epoch=nb_epoch,
    verbose=0,
    callbacks=[tb, wu_cb],
    validation_data=(padded_X_test[:last_test,:,:], padded_X_test[:last_test,:,:])
)

1 Answer:

Answer 0 (score: 4)

This will not work. I tested it to figure out exactly why it was not working. The key thing to remember is that Keras creates a static graph once at the beginning of training.

Therefore, the vae_loss function is only called once, to create the loss tensor, which means that the reference to the beta variable stays the same every time the loss is computed. Your warmup function, however, reassigns beta to a new K.variable, so the beta used to compute the loss is a different object from the one being updated, and its value will always be 0.
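To make that concrete, here is a minimal standalone sketch (my own illustration, not code from the question) of how a compiled expression keeps pointing at the original variable:

from keras import backend as K

beta = K.variable(0.0)
loss = beta * 2.0          # the graph holds a reference to THIS variable

beta = K.variable(1.0)     # rebinds the Python name only
print(K.eval(loss))        # still 0.0 -- the graph never sees the new variable

beta = K.variable(0.0)
loss = beta * 2.0
K.set_value(beta, 1.0)     # mutates the variable the graph holds
print(K.eval(loss))        # 2.0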

Here is a simple fix. Just change this line in your warmup callback:

beta = K.variable(value=value)

to:

K.set_value(beta, value)

That way the actual value stored in beta is updated "in place" instead of a new variable being created, and the loss will be correctly recomputed.
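Putting it together, a minimal corrected version of your callback (same 10-epoch schedule as in the question):

from keras import backend as K
from keras.callbacks import LambdaCallback

beta = K.variable(value=0.0)

def warmup(epoch):
    value = (epoch / 10.0) * (epoch <= 10.0) + 1.0 * (epoch > 10.0)
    print("beta:", value)
    K.set_value(beta, value)  # update in place; do NOT rebind beta

wu_cb = LambdaCallback(on_epoch_end=lambda epoch, log: warmup(epoch))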