Is there a way to use global_step in Keras?

Asked: 2019-06-22 01:33:20

Tags: tensorflow machine-learning keras deep-learning

I am trying to replicate in Keras a polynomial decay of the learning rate, implemented in TensorFlow as follows.

import numpy as np
import tensorflow as tf


def poly_decay(step, initial_value, decay_period_images_seen):
    """
    Decays a variable using a polynomial law.
    :param step: number of images seen by the network since the beginning of the training.
    :param initial_value: the initial value of the variable to decay.
    :param decay_period_images_seen: the decay period in terms of images seen by the network
    (1 epoch of 10 batches of 6 images each means that 1 epoch = 60 images seen).
    Thus this value must be a multiple of the number of batches.
    :return: the decayed variable.
    """

    factor = 1.0 - (tf.cast(step, tf.float32) / float(decay_period_images_seen))
    lrate = initial_value * tf.pow(factor, 0.9)  # tf.pow instead of np.power, which can fail on graph tensors

    return lrate
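As a plain-NumPy sanity check of the same formula (the initial rate of 0.01 and the decay period of 60000 images below are made-up numbers, not from the question):

```python
import numpy as np

def poly_decay_np(step, initial_value, decay_period_images_seen):
    # Same polynomial law as above, without the TensorFlow cast.
    factor = 1.0 - (step / float(decay_period_images_seen))
    return initial_value * np.power(factor, 0.9)

print(poly_decay_np(0, 0.01, 60000))      # full initial rate at step 0
print(poly_decay_np(30000, 0.01, 60000))  # about 0.0054 half-way through the period
```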

Does Keras provide any hidden parameter for the global step (that perhaps I am not aware of), or is there an equivalent of the global step in Keras? Or is there some other way to implement polynomial learning rate decay in Keras?

1 Answer:

Answer 0 (score: 0)

Basically, this is a parameter of the optimizers themselves.

Take a look at the optimizers:

from keras import optimizers

sgd = optimizers.SGD(lr=0.01, decay=1e-6, momentum=0.9, nesterov=True)
model.compile(loss='mean_squared_error', optimizer=sgd)

So here, you would simply pass the value computed by poly_decay() as the parameter.
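Note that SGD's decay argument expects a number, not a schedule function; the usual way to plug a custom schedule such as poly_decay into Keras is the LearningRateScheduler callback, which calls a function of the epoch index before each epoch. A minimal sketch (the epoch-based rewrite of the formula and the constants are assumptions, not from the question):

```python
import numpy as np
# from keras.callbacks import LearningRateScheduler  # with Keras installed

def poly_decay_schedule(epoch, initial_value=0.01, decay_epochs=100):
    # Epoch-based rewrite of poly_decay; clamp so the base never goes negative.
    factor = max(1.0 - epoch / float(decay_epochs), 0.0)
    return initial_value * np.power(factor, 0.9)

# lr_callback = LearningRateScheduler(poly_decay_schedule)
# model.fit(x_train, y_train, epochs=100, callbacks=[lr_callback])
```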

Usually, time-based decay is used instead of polynomial decay:

from keras.optimizers import SGD

epochs = 100  # total number of training epochs
learning_rate = 0.1
decay_rate = learning_rate / epochs
momentum = 0.8
sgd = SGD(lr=learning_rate, momentum=momentum, decay=decay_rate, nesterov=False)
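For reference, the decay argument above is applied per batch update, not per epoch: Keras's SGD effectively computes lr / (1 + decay * iterations). A small sketch of that formula (the iteration counts are arbitrary, and the 100-epoch assumption matches the snippet above):

```python
learning_rate = 0.1
decay_rate = learning_rate / 100  # decay_rate = learning_rate / epochs, assuming 100 epochs

def decayed_lr(iteration, lr=learning_rate, decay=decay_rate):
    # Time-based decay as applied by Keras's SGD at each batch update.
    return lr / (1.0 + decay * iteration)

print(decayed_lr(0))     # 0.1 on the first update
print(decayed_lr(1000))  # 0.05 after 1000 updates
```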

Check this blog for more reference!