How do I set a checkpoint for fine-tuning?

Date: 2019-05-24 10:06:10

Tags: tensorflow object-detection-api

When I retrain a model (ssd_mobilenet_v2) from the model zoo, the loss at the start of training is very large, even though the accuracy on the validation_set is good. The training log is shown further below.

This log cannot have come from an already trained model, so I suspect the checkpoint is not being loaded for fine-tuning. Please help me figure out how to fine-tune the trained model on the same dataset. I have not modified the network structure at all.

I set the checkpoint path in pipeline.config as follows: fine_tune_checkpoint: "/ssd_mobilenet_v2_coco_2018_03_29/model.ckpt". If I set model_dir to the download directory, no training happens because the global_train_step is already larger than max_step. If I then enlarge max_step, I can see log messages about parameters being restored from the checkpoint, but the run fails with errors about some parameters that cannot be restored. So I set model_dir to an empty directory instead. That trains normally, but the loss at step 0 is very large and the validation results are very poor.

In pipeline.config:

fine_tune_checkpoint: "/ssd_mobilenet_v2_coco_2018_03_29/model.ckpt"
num_steps: 200000
fine_tune_checkpoint_type: "detection"

Training script:

# Imports assumed for this script (TensorFlow 1.x Object Detection API).
import tensorflow as tf

from object_detection import model_hparams
from object_detection import model_lib

model_dir = '/ssd_mobilenet_v2_coco_2018_03_29/retrain0524'
pipeline_config_path = '/ssd_mobilenet_v2_coco_2018_03_29/pipeline.config'
checkpoint_dir = '/ssd_mobilenet_v2_coco_2018_03_29/model.ckpt'
num_train_steps = 300000

# Assumed values for names the snippet uses but never defines
# (model_main.py normally takes these from command-line flags).
hparams_overrides = None
sample_1_of_n_eval_examples = 1
sample_1_of_n_eval_on_train_examples = 5

config = tf.estimator.RunConfig(model_dir=model_dir)
train_and_eval_dict = model_lib.create_estimator_and_inputs(
    run_config=config,
    hparams=model_hparams.create_hparams(hparams_overrides),
    pipeline_config_path=pipeline_config_path,    
    sample_1_of_n_eval_examples=sample_1_of_n_eval_examples,
    sample_1_of_n_eval_on_train_examples=(sample_1_of_n_eval_on_train_examples))
estimator = train_and_eval_dict['estimator']
train_input_fn = train_and_eval_dict['train_input_fn']
eval_input_fns = train_and_eval_dict['eval_input_fns']
eval_on_train_input_fn = train_and_eval_dict['eval_on_train_input_fn']
predict_input_fn = train_and_eval_dict['predict_input_fn']
train_steps = train_and_eval_dict['train_steps']

train_spec, eval_specs = model_lib.create_train_and_eval_specs(
        train_input_fn,
        eval_input_fns,
        eval_on_train_input_fn,
        predict_input_fn,
        train_steps,
        eval_on_train_data=False)

tf.estimator.train_and_evaluate(estimator, train_spec, eval_specs[0])

INFO:tensorflow:loss = 356.25497, step = 0
INFO:tensorflow:global_step/sec: 1.89768
INFO:tensorflow:loss = 11.221423, step = 100 (52.700 sec)
INFO:tensorflow:global_step/sec: 2.21685
INFO:tensorflow:loss = 10.329516, step = 200 (45.109 sec)

1 Answer:

Answer 0: (score: 0)

If the initial training loss is around 400, the model was most likely restored from the checkpoint successfully; it is just not exactly the same as the checkpoint.
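To check what the checkpoint actually contains (and therefore what could have been restored), you can list its variables with tf.train.list_variables. A minimal sketch, assuming the checkpoint path from the question:

import tensorflow as tf

# Checkpoint prefix assumed from the question; adjust to your own path.
ckpt_path = '/ssd_mobilenet_v2_coco_2018_03_29/model.ckpt'

# tf.train.list_variables returns (name, shape) pairs for every variable
# stored in the checkpoint, so you can compare them with the variables in
# your training graph.
for name, shape in tf.train.list_variables(ckpt_path):
    print(name, shape)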

Here is the restore_map function of the SSD model. Note that even if you set fine_tune_checkpoint_type: "detection" and provide a checkpoint of exactly the same model, only the variables under the feature_extractor scope are restored. To restore as many variables as possible from the checkpoint, you have to set load_all_detection_checkpoint_vars: true in the config file.

def restore_map(self,
                fine_tune_checkpoint_type='detection',
                load_all_detection_checkpoint_vars=False):
  if fine_tune_checkpoint_type not in ['detection', 'classification']:
    raise ValueError('Not supported fine_tune_checkpoint_type: {}'.format(
        fine_tune_checkpoint_type))

  if fine_tune_checkpoint_type == 'classification':
    return self._feature_extractor.restore_from_classification_checkpoint_fn(
        self._extract_features_scope)

  if fine_tune_checkpoint_type == 'detection':
    variables_to_restore = {}
    for variable in tf.global_variables():
      var_name = variable.op.name
      if load_all_detection_checkpoint_vars:
        variables_to_restore[var_name] = variable
      else:
        if var_name.startswith(self._extract_features_scope):
          variables_to_restore[var_name] = variable

  return variables_to_restore
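
In pipeline.config this flag sits next to the other fine-tuning options in the train_config block. A minimal sketch, reusing the paths and values from the question:

train_config {
  fine_tune_checkpoint: "/ssd_mobilenet_v2_coco_2018_03_29/model.ckpt"
  fine_tune_checkpoint_type: "detection"
  load_all_detection_checkpoint_vars: true
  num_steps: 200000
}

With this set, restore_map takes the load_all_detection_checkpoint_vars branch above and returns every global variable for restoring, not just those under the feature extractor scope.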