Getting more summaries from object_detection/eval.py (same as in train.py)

Time: 2017-07-11 09:30:17

Tags: python tensorflow object-detection tensorboard

I am using the new TensorFlow Object Detection API on my own data. It works quite well, but I am a bit disappointed that some of the statistics shown in TensorBoard are only available for the training run and not for eval, and vice versa. For example, I think it would be great to get global_step/sec and the losses during evaluation, and precision/performance metrics for the training steps (without having to run evaluation by hand).

Is there an easy way to do this?

I am training and evaluating with the scripts provided by the API, using fairly standard config files (SSD and Faster R-CNN):

    python tensorflow_models_dir/object_detection/train.py --logtostderr --pipeline_config_path=../models/SSD_v1/config_v1.config --train_dir=../models/SSD_v1/train
    python tensorflow_models_dir/object_detection/eval.py --logtostderr --pipeline_config_path=../models/SSD_v1/config_v1.config --checkpoint_dir=../models/SSD_v1/train --eval_dir=../models/SSD_v1/eval
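
For reference, I look at both runs in a single TensorBoard by pointing it at the parent directory of the train and eval folders (path taken from the commands above), so that train and eval show up as separate runs:

    tensorboard --logdir=../models/SSD_v1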

So far I have tried adding the summaries from trainer.py to the tensor_dict in evaluator.py, but it fails at runtime. To do so, I added the following lines to _extract_prediction_tensors in evaluator.py:

    # Gather initial summaries.
    summaries = set(tf.get_collection(tf.GraphKeys.SUMMARIES))
    global_summaries = set([])


    # Add summaries.
    for model_var in slim.get_model_variables():
      global_summaries.add(tf.summary.histogram(model_var.op.name, model_var))
    for loss_tensor in tf.losses.get_losses():
      global_summaries.add(tf.summary.scalar(loss_tensor.op.name, loss_tensor))
    # global_summaries.add(
    #     tf.summary.scalar('TotalLoss', tf.losses.get_total_loss()))  # Crashes

    # Add the summaries from the first clone. These contain the summaries
    # created by model_fn and either optimize_clones() or _gather_clone_loss().
    summaries |= set(tf.get_collection(tf.GraphKeys.SUMMARIES))
    summaries |= global_summaries

    # Merge all summaries together.
    summary_op = tf.summary.merge(list(summaries), name='summary_op')
    tensor_dict['summary_op'] = summary_op
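
As a possible workaround I am also considering writing extra scalars into the eval event files by hand, outside of evaluator.py. This is only a minimal sketch using the TF 1.x summary APIs; eval_dir is the --eval_dir from above, while loss_value and global_step are placeholders for values that would have to be obtained from the evaluation run:

    import tensorflow as tf

    # Placeholders: in practice these would come from the evaluation run.
    eval_dir = '../models/SSD_v1/eval'  # same directory as --eval_dir
    loss_value = 0.42                   # a loss computed during evaluation (placeholder)
    global_step = 12345                 # global step of the evaluated checkpoint (placeholder)

    # Append an extra scalar to the eval event files so TensorBoard plots it
    # next to the standard eval metrics.
    writer = tf.summary.FileWriter(eval_dir)
    summary = tf.Summary(value=[
        tf.Summary.Value(tag='Losses/EvalLoss', simple_value=loss_value),
    ])
    writer.add_summary(summary, global_step)
    writer.flush()
    writer.close()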

0 Answers:

There are no answers yet.