TensorFlow Eager Execution does not work with learning rate decay

Date: 2018-04-30 13:22:57

Tags: tensorflow

I've been trying to get eager execution to work with learning rate decay, but with no success. It seems like a bug, since the learning rate decay tensor does not appear to get updated. If I'm missing something, a hand here would be appreciated. Thanks.

The code below learns some word embeddings. However, the learning rate decay part does not work at all.

import os

import tensorflow as tf
import tensorflow.contrib.eager as tfe

import word2vec_utils

tf.enable_eager_execution()

# Hyperparameters (VOCAB_SIZE, EMBED_SIZE, BATCH_SIZE, NUM_SAMPLED, SKIP_WINDOW,
# SKIP_STEP, NUM_TRAIN_STEPS, DOWNLOAD_URL, EXPECTED_BYTES, VISUAL_FLD) are
# module-level constants defined elsewhere and are omitted here.


class Word2Vec(tf.keras.Model):
    def __init__(self, vocab_size, embed_size, num_sampled=NUM_SAMPLED):
        super(Word2Vec, self).__init__()
        self.vocab_size = vocab_size
        self.num_sampled = num_sampled
        self.embed_matrix = tfe.Variable(tf.random_uniform(
            [vocab_size, embed_size]), name="embedding_matrix")
        self.nce_weight = tfe.Variable(tf.truncated_normal(
            [vocab_size, embed_size],
            stddev=1.0 / (embed_size ** 0.5)), name="weights")
        self.nce_bias = tfe.Variable(tf.zeros([vocab_size]), name="biases")

    def compute_loss(self, center_words, target_words):
        """Computes the forward pass of word2vec with the NCE loss."""
        embed = tf.nn.embedding_lookup(self.embed_matrix, center_words)
        loss = tf.reduce_mean(tf.nn.nce_loss(weights=self.nce_weight,
                                             biases=self.nce_bias,
                                             labels=target_words,
                                             inputs=embed,
                                             num_sampled=self.num_sampled,
                                             num_classes=self.vocab_size))
        return loss


def gen():
    yield from word2vec_utils.batch_gen(DOWNLOAD_URL, EXPECTED_BYTES,
                                        VOCAB_SIZE, BATCH_SIZE, SKIP_WINDOW,
                                        VISUAL_FLD)


def main():
    dataset = tf.data.Dataset.from_generator(gen, (tf.int32, tf.int32),
                                             (tf.TensorShape([BATCH_SIZE]),
                                              tf.TensorShape([BATCH_SIZE, 1])))

    global_step = tf.train.get_or_create_global_step()
    starter_learning_rate = 1.0
    end_learning_rate = 0.01
    decay_steps = 1000
    learning_rate = tf.train.polynomial_decay(starter_learning_rate, global_step.numpy(),
                                              decay_steps, end_learning_rate,
                                              power=0.5)

    train_writer = tf.contrib.summary.create_file_writer('./checkpoints')
    train_writer.set_as_default()

    optimizer = tf.train.MomentumOptimizer(learning_rate, momentum=0.95)
    model = Word2Vec(vocab_size=VOCAB_SIZE, embed_size=EMBED_SIZE)
    grad_fn = tfe.implicit_value_and_gradients(model.compute_loss)
    total_loss = 0.0  # for average loss in the last SKIP_STEP steps

    checkpoint_dir = "./checkpoints/"
    checkpoint_prefix = os.path.join(checkpoint_dir, "ckpt")
    root = tfe.Checkpoint(optimizer=optimizer,
                          model=model,
                          optimizer_step=tf.train.get_or_create_global_step())

    while global_step < NUM_TRAIN_STEPS:

        for center_words, target_words in tfe.Iterator(dataset):

            with tf.contrib.summary.record_summaries_every_n_global_steps(100):

                if global_step >= NUM_TRAIN_STEPS:
                    break

                loss_batch, grads = grad_fn(center_words, target_words)
                tf.contrib.summary.scalar('loss', loss_batch)
                tf.contrib.summary.scalar('learning_rate', learning_rate)

                # print(grads)
                # print(len(grads))
                total_loss += loss_batch
                optimizer.apply_gradients(grads, global_step)
                if (global_step.numpy() + 1) % SKIP_STEP == 0:
                    print('Average loss at step {}: {:5.1f}'.format(
                        global_step.numpy(), total_loss / SKIP_STEP))
                    total_loss = 0.0

        root.save(file_prefix=checkpoint_prefix)

if __name__ == '__main__':
    main()

1 Answer:

Answer 0: (score: 4)

Note that when eager execution is enabled, tf.Tensor objects represent concrete values (as opposed to symbolic handles for computation that will occur on Session.run() calls).

As a result, in the snippet above, the line:

learning_rate = tf.train.polynomial_decay(starter_learning_rate, global_step.numpy(),
                                          decay_steps, end_learning_rate,
                                          power=0.5)

computes the decayed value exactly once, using the value of global_step at the time it was called, so when the optimizer is created with:

optimizer = tf.train.MomentumOptimizer(learning_rate, momentum=0.95)

it is given a fixed learning rate.
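To see this concretely, here is a minimal sketch (not from the original post; it assumes eager execution is already enabled and reuses the constants above) illustrating that the decay is evaluated once, at call time:

# Minimal illustration (assumption: eager execution already enabled).
# Under eager execution, polynomial_decay runs immediately and returns a
# concrete value; it does not re-evaluate when global_step changes later.
global_step = tf.train.get_or_create_global_step()
lr = tf.train.polynomial_decay(1.0, global_step.numpy(), 1000, 0.01, power=0.5)
print(float(lr))            # 1.0 at step 0

global_step.assign_add(500)
print(float(lr))            # still 1.0 -- the value was frozen when computed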

To decay the learning rate, you need to call tf.train.polynomial_decay repeatedly (with an updated value of global_step). One way to do this would be to replicate what is done in the RNN example, using something like the following:

starter_learning_rate = 1.0
learning_rate = tfe.Variable(starter_learning_rate)
optimizer = tf.train.MomentumOptimizer(learning_rate, momentum=0.95)
while global_step < NUM_TRAIN_STEPS:
    # ....
    learning_rate.assign(tf.train.polynomial_decay(
        starter_learning_rate, global_step, decay_steps,
        end_learning_rate, power=0.5))

This way you've captured the learning_rate in a variable that can be updated. Furthermore, it is simple to include the current learning_rate in the checkpoint as well (by including it when creating the Checkpoint object).
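For example, a possible sketch of the Checkpoint creation with the learning rate variable included (an assumption on my part, reusing the names from the question's code):

# Sketch: saving the learning_rate variable alongside the model and optimizer,
# so the current decayed value is restored together with the rest.
root = tfe.Checkpoint(optimizer=optimizer,
                      model=model,
                      learning_rate=learning_rate,
                      optimizer_step=tf.train.get_or_create_global_step())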

Hope that helps.