I'm using ReduceLROnPlateau as a fit callback to reduce the LR, with patience=10, so by the time the LR reduction actually triggers, the model may no longer be at its best weights.
Is there a way to go back to the point of minimum acc_loss and restart training from there with the new LR?
Does that make sense?
I can do it manually with EarlyStopping and a ModelCheckpoint('best.hdf5', save_best_only=True, monitor='val_loss', mode='min') callback (roughly as sketched below), but I don't know whether that makes sense.
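Roughly, what I mean by the manual approach (the model and data names here are placeholders, and the 0.1 factor is just an example):

from tensorflow.keras import backend as K
from tensorflow.keras.callbacks import EarlyStopping, ModelCheckpoint

ckpt = ModelCheckpoint('best.hdf5', save_best_only=True, monitor='val_loss', mode='min')
stop = EarlyStopping(monitor='val_loss', patience=10, mode='min')

# first run: stop once val_loss stalls, keeping the best weights on disk
model.fit(X_train, y_train, validation_data=(X_val, y_val),
          epochs=100, callbacks=[stop, ckpt])

# reload the best weights, shrink the LR, and train again
model.load_weights('best.hdf5')
K.set_value(model.optimizer.lr, K.get_value(model.optimizer.lr) * 0.1)
model.fit(X_train, y_train, validation_data=(X_val, y_val),
          epochs=100, callbacks=[stop, ckpt])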
Answer 0 (score: 1)
You could create a custom callback that inherits from ReduceLROnPlateau, something along these lines:
from tensorflow.keras.callbacks import ReduceLROnPlateau

class CheckpointLR(ReduceLROnPlateau):
    def __init__(self, *args, **kwargs):
        super().__init__(*args, **kwargs)
        self.last_weights = None  # no snapshot exists before the first epoch

    # override on_epoch_end()
    def on_epoch_end(self, epoch, logs=None):
        if not self.in_cooldown():
            temp = self.model.get_weights()
            if self.last_weights is not None:
                self.model.set_weights(self.last_weights)
            self.last_weights = temp
        super().on_epoch_end(epoch, logs)  # actually reduce LR
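You'd wire it in like a normal ReduceLROnPlateau; a rough usage sketch (the model, data, and hyperparameter values below are placeholders, not part of the original answer):

# hypothetical fit call: drop CheckpointLR in wherever ReduceLROnPlateau would go
lr_cb = CheckpointLR(monitor='val_loss', factor=0.5, patience=10, mode='min')
model.fit(X_train, y_train,
          validation_data=(X_val, y_val),
          epochs=100,
          callbacks=[lr_cb])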
Answer 1 (score: 0)
Here is a working example that follows @nuric's guidance:
The ModelCheckpoint callback can be used to keep the dump of the best model up to date. For example, pass the two callbacks (ModelCheckpoint plus the ReduceLRBacktrack class below) to model.fit(), as sketched after the class definition:
from tensorflow.python.keras.callbacks import ReduceLROnPlateau
from tensorflow.python.platform import tf_logging as logging

class ReduceLRBacktrack(ReduceLROnPlateau):
    def __init__(self, best_path, *args, **kwargs):
        super(ReduceLRBacktrack, self).__init__(*args, **kwargs)
        self.best_path = best_path  # checkpoint file written by ModelCheckpoint

    def on_epoch_end(self, epoch, logs=None):
        current = logs.get(self.monitor)
        if current is None:
            logging.warning('Reduce LR on plateau conditioned on metric `%s` '
                            'which is not available. Available metrics are: %s',
                            self.monitor, ','.join(list(logs.keys())))
        elif not self.monitor_op(current, self.best):  # not a new best
            if not self.in_cooldown():                 # and we're not in cooldown
                if self.wait + 1 >= self.patience:     # going to reduce lr
                    # load the best model seen so far before the LR drop
                    print("Backtracking to best model before reducing LR")
                    self.model.load_weights(self.best_path)
        super().on_epoch_end(epoch, logs)  # actually reduce LR
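A rough sketch of wiring the two callbacks together, assuming a compiled model and placeholder data X_train/y_train/X_val/y_val; the factor, patience, and epoch count are illustrative:

from tensorflow.keras.callbacks import ModelCheckpoint

best_path = 'best.hdf5'  # ModelCheckpoint writes here, ReduceLRBacktrack reloads from here
callbacks = [
    ModelCheckpoint(best_path, save_best_only=True, monitor='val_loss', mode='min'),
    ReduceLRBacktrack(best_path=best_path, monitor='val_loss', factor=0.5,
                      patience=10, mode='min'),
]
model.fit(X_train, y_train,
          validation_data=(X_val, y_val),
          epochs=100,
          callbacks=callbacks)

Listing ModelCheckpoint first means the best-model file has already been refreshed by the time ReduceLRBacktrack runs in the same epoch.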