TensorFlow Keras does not resume training from the correct initial epoch when restoring from a checkpoint file

Asked: 2019-05-21 03:00:35

Tags: python tensorflow keras callback checkpoint

I am loading a Keras model in TensorFlow to resume training. I want to continue from the epoch where training stopped, so that epoch numbers stay unique and I can keep track of the total number of epochs. The model is loaded from a checkpoint file created by a callback that saves the weights with the highest accuracy. When I resume training with model.fit(), I set initial_epoch to 52 and epochs to 52 + 5. However, training starts at 1/57 instead of 53/57, and it runs all the way to 57 even though I only want 5 more epochs. Am I loading the model incorrectly? Training does resume "normally" in the sense that accuracy picks up where I left off, but the epoch count does not continue from where I want it to; it restarts at 1.
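A minimal sketch of the kind of resume call described above (the file name, monitored metric, and dataset variables are placeholders, not the exact code from the question):

import tensorflow as tf

# Placeholder names: 'best_model.hdf5', train_x and train_y stand in for the real ones.
model = tf.keras.models.load_model('best_model.hdf5')

# Re-create the callback that keeps the best weights by validation accuracy.
checkpoint = tf.keras.callbacks.ModelCheckpoint(
    'best_model.hdf5', monitor='val_acc', save_best_only=True, verbose=1)
callbacks_list = [checkpoint]

# 52 epochs are already done. `epochs` is the absolute final epoch, not the
# number of additional epochs, so 52 + 5 = 57 should display 53/57 ... 57/57.
model.fit(train_x, train_y,
          initial_epoch=52,
          epochs=52 + 5,
          callbacks=callbacks_list)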

I tried removing the checkpoint-callback initialization when loading from the checkpoint file, but that raises a NameError because the callbacks list is not defined.


When resuming from the saved file, I expect to see 53/57 and 5 training epochs. Instead I get 1/57 and 57 training epochs.

2 Answers:

Answer 0 (score: 0)

I noticed that you left out the underscore in the epoch argument: in model.fit() the keyword is initial_epoch. That might be the cause.

Answer 1 (score: 0)

I had the same problem. To solve it, I extended the ModelCheckpoint callback class.

I added a dedicated TensorFlow checkpoint that tracks the epoch count and save it in the on_epoch_begin callback method.

The network doesn't store its training progress with respect to training data - this is not part of its state, because at any point you could decide to change what data set to feed it.

import os
import tensorflow as tf


class EpochModelCheckpoint(tf.keras.callbacks.ModelCheckpoint):
    """ModelCheckpoint that also persists the number of completed epochs."""

    def __init__(self, filepath, monitor='val_loss', verbose=1,
                 save_best_only=True, save_weights_only=True,
                 mode='auto'):
        super(EpochModelCheckpoint, self).__init__(
            filepath=filepath, monitor=monitor, verbose=verbose,
            save_best_only=save_best_only,
            save_weights_only=save_weights_only, mode=mode)

        # Dedicated tf.train.Checkpoint that holds only the epoch counter,
        # stored next to the weights file under tf_ckpts/.
        self.ckpt = tf.train.Checkpoint(
            completed_epochs=tf.Variable(0, trainable=False, dtype='int32'))
        ckpt_dir = f'{os.path.dirname(filepath)}/tf_ckpts'
        self.manager = tf.train.CheckpointManager(self.ckpt, ckpt_dir, max_to_keep=3)

    def on_epoch_begin(self, epoch, logs=None):
        # `epoch` is the 0-based index Keras passes in; at the start of epoch e,
        # exactly e epochs have already completed.
        self.ckpt.completed_epochs.assign(epoch)
        self.manager.save()
        print(f"Epoch checkpoint {self.ckpt.completed_epochs.numpy()} "
              f"saved to: {self.manager.latest_checkpoint}")
        print(logs)

def callbacks(checkpoint_dir, model_name):

    best_model = os.path.join(checkpoint_dir, '{}_best.hdf5'.format(model_name))
    save_best = EpochModelCheckpoint( best_model  )
    return [ save_best ]

def train():

    ...

    model = get_compiled_model()
    checkpoint_dir = "./checkpoint_dir"
    model_name = "my_model"
    # Init checkpoint value
    ckpt = tf.train.Checkpoint(completed_epochs=tf.Variable(0,trainable=False,dtype='int32'))
    manager = tf.train.CheckpointManager(ckpt, f'{checkpoint_dir}/tf_ckpts', max_to_keep=3)    

    best_weights = os.path.join(checkpoint_dir, f'{model_name}_best.hdf5') 
    if os.path.exists(best_weights):
        print(f'Loading model {best_weights}')
        model.load_weights(best_weights)

        # Restore last Epoch
        ckpt.restore(manager.latest_checkpoint)
        if manager.latest_checkpoint:
            print(f"Restored epoch ckpt from {manager.latest_checkpoint}, value is ",ckpt.completed_epochs.numpy())
        else:
            print("Initializing from scratch.")

    ...
    # Set initial_epoch in the model fit to last seen Epoch
    completed_epochs=ckpt.completed_epochs.numpy()
    history = model.fit(
        x=train_ds,
        epochs=cfg.epochs,
        steps_per_epoch=cfg.steps,
        callbacks=callbacks( checkpoint_dir, model_name ),        
        validation_data=val_ds,
        validation_steps=cfg.val_steps,
        initial_epoch=completed_epochs )
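A note on the design: Keras passes a 0-based epoch index to on_epoch_begin, so the value saved when epoch e starts equals the number of epochs already finished. Feeding it back through initial_epoch therefore makes the progress display resume at e+1/epochs instead of 1/epochs, which is the behaviour asked for above.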