How to change the learning rate of Adam in TF2?

Asked: 2019-08-01 04:06:40

Tags: tensorflow tensorflow2.0

How do I change the learning rate of the Adam optimizer during training in TF2? There are some answers floating around, but they apply to TF1, e.g. using feed_dict.

4 Answers:

Answer 0 (score: 13)

If you use a custom training loop (instead of keras.fit()), you can simply do:

# Assign a new value to the optimizer's learning-rate variable directly
new_learning_rate = 0.01
my_optimizer.lr.assign(new_learning_rate)
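
For context, here is a minimal sketch of how that assignment might sit inside a custom training loop (the toy model, loss, and dataset below are made-up placeholders, not part of the original answer):

import tensorflow as tf

# Toy setup so the loop below is self-contained; names are illustrative only.
model = tf.keras.Sequential([tf.keras.layers.Dense(1)])
loss_fn = tf.keras.losses.MeanSquaredError()
dataset = tf.data.Dataset.from_tensor_slices(
    (tf.random.normal([64, 4]), tf.random.normal([64, 1]))).batch(16)
my_optimizer = tf.keras.optimizers.Adam(learning_rate=1e-3)

for epoch in range(3):
    # Change the learning rate by hand at the start of each epoch.
    my_optimizer.lr.assign(1e-3 * (0.95 ** epoch))
    for x_batch, y_batch in dataset:
        with tf.GradientTape() as tape:
            loss = loss_fn(y_batch, model(x_batch, training=True))
        grads = tape.gradient(loss, model.trainable_variables)
        my_optimizer.apply_gradients(zip(grads, model.trainable_variables))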

Answer 1 (score: 4)

You can read and assign the learning rate via a callback. So you can use something like this:

class LearningRateReducerCb(tf.keras.callbacks.Callback):

  def on_epoch_end(self, epoch, logs=None):
    # Read the current learning rate, shrink it by 1%, and write it back.
    old_lr = self.model.optimizer.lr.read_value()
    new_lr = old_lr * 0.99
    print("\nEpoch: {}. Reducing Learning Rate from {} to {}".format(epoch, old_lr, new_lr))
    self.model.optimizer.lr.assign(new_lr)

For example, using the MNIST demo, it can be applied like this:

mnist = tf.keras.datasets.mnist

(x_train, y_train), (x_test, y_test) = mnist.load_data()
x_train, x_test = x_train / 255.0, x_test / 255.0

model = tf.keras.models.Sequential([
  tf.keras.layers.Flatten(input_shape=(28, 28)),
  tf.keras.layers.Dense(128, activation='relu'),
  tf.keras.layers.Dropout(0.2),
  tf.keras.layers.Dense(10, activation='softmax')
])

model.compile(optimizer='adam',
              loss='sparse_categorical_crossentropy',
              metrics=['accuracy'])

model.fit(x_train, y_train, callbacks=[LearningRateReducerCb()], epochs=5)

model.evaluate(x_test, y_test)

Which gives output like this:

Train on 60000 samples
Epoch 1/5
59744/60000 [============================>.] - ETA: 0s - loss: 0.2969 - accuracy: 0.9151
Epoch: 0. Reducing Learning Rate from 0.0010000000474974513 to 0.0009900000877678394
60000/60000 [==============================] - 6s 92us/sample - loss: 0.2965 - accuracy: 0.9152
Epoch 2/5
59488/60000 [============================>.] - ETA: 0s - loss: 0.1421 - accuracy: 0.9585
Epoch: 1. Reducing Learning Rate from 0.0009900000877678394 to 0.000980100128799677
60000/60000 [==============================] - 5s 91us/sample - loss: 0.1420 - accuracy: 0.9586
Epoch 3/5
59968/60000 [============================>.] - ETA: 0s - loss: 0.1056 - accuracy: 0.9684
Epoch: 2. Reducing Learning Rate from 0.000980100128799677 to 0.0009702991228550673
60000/60000 [==============================] - 5s 91us/sample - loss: 0.1056 - accuracy: 0.9684
Epoch 4/5
59520/60000 [============================>.] - ETA: 0s - loss: 0.0856 - accuracy: 0.9734
Epoch: 3. Reducing Learning Rate from 0.0009702991228550673 to 0.0009605961386114359
60000/60000 [==============================] - 5s 89us/sample - loss: 0.0857 - accuracy: 0.9733
Epoch 5/5
59712/60000 [============================>.] - ETA: 0s - loss: 0.0734 - accuracy: 0.9772
Epoch: 4. Reducing Learning Rate from 0.0009605961386114359 to 0.0009509901865385473
60000/60000 [==============================] - 5s 87us/sample - loss: 0.0733 - accuracy: 0.9772
10000/10000 [==============================] - 0s 43us/sample - loss: 0.0768 - accuracy: 0.9762
[0.07680597708942369, 0.9762]

Answer 2 (score: 2)

If you want low-level control rather than the fit functionality with callbacks, take a look at tf.optimizers.schedules. Here is some example code:

train_steps = 25000
lr_fn = tf.optimizers.schedules.PolynomialDecay(1e-3, train_steps, 1e-5, 2)
opt = tf.optimizers.Adam(lr_fn)

This decays the learning rate from 1e-3 to 1e-5 over 25000 steps using a power-2 polynomial decay.
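
As a quick sanity check (my own sketch, not part of the original answer), the schedule object is callable, so you can evaluate it at a few step counts before ever attaching it to an optimizer:

import tensorflow as tf

train_steps = 25000
lr_fn = tf.optimizers.schedules.PolynomialDecay(1e-3, train_steps, 1e-5, 2)

# The schedule maps a step count to a learning rate.
print(float(lr_fn(0)))        # 1e-3 at step 0
print(float(lr_fn(12500)))    # ~2.6e-4 halfway through (quadratic decay)
print(float(lr_fn(25000)))    # 1e-5 at the end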

Notes:

  • This doesn't really "store" a learning rate as in the other answers; rather, the learning rate is now a function that is called every time the current learning rate needs to be computed.
  • Optimizer instances have an internal step counter that is incremented by one each time apply_gradients is called (as far as I can tell...). This lets the procedure work properly in a low-level context (typically with tf.GradientTape); see the sketch after these notes.
  • Unfortunately this feature is not well documented (the docs just say the learning rate argument has to be a float or a tensor...), but it works. You can also write your own decay schedules. I think they just need to be functions that take some current "state" of the optimizer (probably the number of training steps) and return a float to be used as the learning rate.
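
As a minimal sketch of that low-level usage (my own illustration, not from the original answer; the scalar variable and loss are made up), each apply_gradients call advances opt.iterations, and the schedule can be evaluated at that step:

import tensorflow as tf

lr_fn = tf.optimizers.schedules.PolynomialDecay(1e-3, 25000, 1e-5, 2)
opt = tf.optimizers.Adam(lr_fn)

var = tf.Variable(1.0)  # toy parameter to optimize
for _ in range(3):
    with tf.GradientTape() as tape:
        loss = var ** 2
    opt.apply_gradients([(tape.gradient(loss, var), var)])
    # opt.iterations has advanced by one; the schedule yields the matching LR.
    print(int(opt.iterations), float(lr_fn(opt.iterations)))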

Answer 3 (score: 0)

You have 3 solutions: the LearningRateScheduler callback (as in the previous callback answer), the prebuilt schedules in tf.keras.optimizers.schedules (as in the answer above), or a custom subclass of tf.keras.optimizers.schedules.LearningRateSchedule.

Here is an example of the last approach, from this tutorial:

class CustomSchedule(tf.keras.optimizers.schedules.LearningRateSchedule):
    def __init__(self, d_model, warmup_steps=4000):
        super(CustomSchedule, self).__init__()

        # d_model is the model (embedding) dimension; cast it for rsqrt below.
        self.d_model = tf.cast(d_model, tf.float32)

        self.warmup_steps = warmup_steps

    def __call__(self, step):
        # Linear warmup for the first warmup_steps, then decay as 1/sqrt(step).
        arg1 = tf.math.rsqrt(step)
        arg2 = step * (self.warmup_steps ** -1.5)

        return tf.math.rsqrt(self.d_model) * tf.math.minimum(arg1, arg2)

Then pass it to the optimizer:

learning_rate = CustomSchedule(d_model)

optimizer = tf.keras.optimizers.Adam(learning_rate, beta_1=0.9, beta_2=0.98, 
                                     epsilon=1e-9)

This way, the CustomSchedule becomes part of the graph and updates the learning rate as the model trains.
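
To see what the schedule produces before training (my own quick check, not part of the tutorial; d_model=128 is an arbitrary choice here), you can evaluate it at a few step values:

import tensorflow as tf

schedule = CustomSchedule(d_model=128)
# The LR ramps up linearly during warmup, then decays as 1/sqrt(step).
for step in [1.0, 1000.0, 4000.0, 20000.0]:
    print(step, float(schedule(step)))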