How to reference the loss value in LearningRateScheduler

Time: 2019-06-28 10:12:18

Tags: python machine-learning keras

I have a working step_decay:

import numpy as np
import keras

def step_decay(epoch):
    # Drop the learning rate by a factor of 10 every `step_size` epochs
    initial_lr = 0.01
    decay_factor = 0.1
    step_size = 1
    new_lr = initial_lr * (decay_factor ** np.floor(epoch / step_size))
    print("Learning rate: " + str(new_lr))
    return new_lr

lr_sched = keras.callbacks.LearningRateScheduler(step_decay)

But I want to stop lowering the learning rate once the loss drops below 0.1.

How can I get the loss value for the epoch?

1 answer:

Answer 0: (score: 0)

Record the loss history and learning rate during training via keras.callbacks.Callback.

learning-rate-schedules - Access to loss by step_decay with history loss

import math
from keras.callbacks import LearningRateScheduler

def step_decay(epoch):
    # Halve the learning rate every `epochs_drop` epochs
    initial_lrate = 0.1
    drop = 0.5
    epochs_drop = 10.0
    lrate = initial_lrate * math.pow(drop,
            math.floor((1 + epoch) / epochs_drop))
    return lrate

lrate = LearningRateScheduler(step_decay)

import keras

class LossHistory(keras.callbacks.Callback):
    def on_train_begin(self, logs={}):
        self.losses = []
        self.lr = []

    def on_epoch_end(self, epoch, logs={}):
        # Record the loss of the epoch that just finished and the
        # learning rate that the schedule produces for the next step
        self.losses.append(logs.get('loss'))
        self.lr.append(step_decay(len(self.losses)))

loss_history = LossHistory()
lrate = LearningRateScheduler(step_decay)
callbacks_list = [loss_history, lrate]
history = model.fit(X_train, y_train, 
   validation_data=(X_test, y_test), 
   epochs=epochs, 
   batch_size=batch_size, 
   callbacks=callbacks_list, 
   verbose=2)
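
To stop lowering the rate once the loss is below 0.1 (the asker's goal), the schedule can look at the losses recorded by the `loss_history` callback above. A minimal sketch along those lines; `step_decay_with_floor` is a hypothetical name, and `model` is assumed to be the compiled model passed to `fit`:

def step_decay_with_floor(epoch):
    # Hypothetical variant of step_decay: once the last recorded training
    # loss is below 0.1, keep the current learning rate instead of decaying.
    initial_lrate = 0.1
    drop = 0.5
    epochs_drop = 10.0
    if loss_history.losses and loss_history.losses[-1] < 0.1:
        return float(keras.backend.get_value(model.optimizer.lr))
    return initial_lrate * math.pow(drop, math.floor((1 + epoch) / epochs_drop))

lrate = LearningRateScheduler(step_decay_with_floor)
callbacks_list = [loss_history, lrate]

Note that `loss_history.lr` still records what the plain `step_decay` would have returned; the rate actually applied each epoch is whatever the scheduler sets on the optimizer.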

To access the epoch loss value:

import numpy as np
from keras.layers import Input, Dense
from keras.models import Model
from keras.optimizers import Adam
from keras.callbacks import LearningRateScheduler, TensorBoard

def get_learningrate_metric(optimizer):
    # Expose the optimizer's current learning rate as a metric
    def learningrate(y_true, y_pred):
        return optimizer.lr
    return learningrate

x = Input((50,))
out = Dense(1, activation='sigmoid')(x)
model = Model(x, out)

optimizer = Adam(lr=0.001)
learningrate_metric = get_learningrate_metric(optimizer)
model.compile(loss='binary_crossentropy', optimizer=optimizer,
              metrics=['acc', learningrate_metric])

# reduce the learning rate by half every 2 epochs
cbks = [LearningRateScheduler(lambda epoch: 0.001 * 0.5 ** (epoch // 2)),
        TensorBoard(write_graph=False)]
X = np.random.rand(1000, 50)
Y = np.random.randint(2, size=1000)
model.fit(X, Y, epochs=10, callbacks=cbks)
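
With the metric compiled in, the per-epoch learning rate should show up next to `loss` and `acc` in the console output, in the `History` object returned by `model.fit` (keyed by the metric function's name, here `learningrate`), and in TensorBoard, which is one way to check what rate the scheduler actually applied.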

from keras import optimizers

adadelta = optimizers.Adadelta(lr=1.0, rho=0.95, epsilon=1e-08, decay=0.1)
  

Adadelta is an extension of Adagrad that seeks to reduce its aggressive, monotonically decreasing learning rate.
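
If the optimizer's own decay is enough, the scheduler callback can simply be left out. A minimal sketch reusing the `model`, `X` and `Y` from the snippet above (the choice of 10 epochs is arbitrary):

model.compile(loss='binary_crossentropy', optimizer=adadelta, metrics=['acc'])
# No LearningRateScheduler needed: decay=0.1 shrinks the rate on every update
model.fit(X, Y, epochs=10)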

Maybe get the learning rate value after every epoch or EarlyStopping can also help you?
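
A minimal sketch of how EarlyStopping could sit next to the callbacks from the first snippet, stopping training once the training loss has stopped improving (the patience of 3 epochs is an arbitrary choice, not from the question):

from keras.callbacks import EarlyStopping

early_stop = EarlyStopping(monitor='loss', min_delta=0, patience=3, verbose=1)

model.fit(X_train, y_train,
          validation_data=(X_test, y_test),
          epochs=epochs,
          batch_size=batch_size,
          callbacks=[loss_history, lrate, early_stop],
          verbose=2)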