TensorFlow: exporting a frozen graph from a checkpoint with multiple ckpt.data files

Asked: 2019-03-13 19:27:05

Tags: tensorflow

I have finished training an object detection network, starting from the pre-trained ssd_mobilenet_v2_coco model and retraining it to detect 10 classes of my own. I first trained the model on my laptop and everything went as planned, but training was too slow to be practical. I restarted training on Google Cloud, and now the output files are

model.ckpt-*.data-00000-of-00003
model.ckpt-*.data-00001-of-00003
model.ckpt-*.data-00002-of-00003
model.ckpt-*.index
model.ckpt-*.meta

(instead of a single model.ckpt-*.data-00000-of-00001 file)
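For context, the three .data files are shards of a single checkpoint: distributed training (as on Google Cloud) writes one shard per worker/device, and the checkpoint is still addressed by the common model.ckpt-* prefix. A small stdlib-only helper (hypothetical, for illustration) that recovers that prefix from any of the five file names:

```python
import re

def checkpoint_prefix(filename):
    """Strip the shard/extension suffix from a checkpoint file name.

    'model.ckpt-200000.data-00001-of-00003' -> 'model.ckpt-200000'
    'model.ckpt-200000.index'               -> 'model.ckpt-200000'
    """
    # The suffix is either '.data-NNNNN-of-NNNNN', '.index', or '.meta'.
    return re.sub(r'\.(data-\d{5}-of-\d{5}|index|meta)$', '', filename)
```

It is this prefix, not any individual shard file, that TensorFlow's Saver (and scripts built on it) expect as the checkpoint path.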

I ran the export_inference_graph.py script on this set of files as usual and obtained a frozen_inference_graph.pb file, but when I try to plug it into the rest of my code (which works fine with the pre-trained model) I get this error:

libc++abi.dylib: terminating with uncaught exception of type cv::Exception:OpenCV(4.0.1) 
/tmp/opencv-20190105-31032-o160to/opencv-4.0.1/modules/dnn/src/tensorflow/tf_importer.cpp:530: 
error: (-2:Unspecified error) Const input blob for weights not found in function 'getConstBlob'

I believe this is because the three .data files were not fully/correctly merged when the model was exported. I would like to know how to export the frozen graph using all three files, or how to combine/merge the three files into a single .data file.
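For reference, a sharded checkpoint can be consolidated into a single-shard one by restoring from the prefix and re-saving. This is a minimal sketch, assuming the TF 1.x Saver API (via tf.compat.v1) and hypothetical paths; the Saver finds every .data shard itself through the .index file, so only the prefix is passed in:

```python
import tensorflow as tf

tf1 = tf.compat.v1
tf1.disable_eager_execution()  # Saver/Session need graph mode under TF 2.x

def consolidate_checkpoint(ckpt_prefix, out_prefix):
    """Restore a (possibly sharded) checkpoint and re-save it in one shard.

    ckpt_prefix: e.g. 'gsc/model.ckpt-200000' (hypothetical path); the
    matching .meta, .index and .data-* files must sit next to it.
    The output directory for out_prefix must already exist.
    A plain single-process save writes one .data-00000-of-00001 file.
    """
    tf1.reset_default_graph()
    # Rebuild the graph from the .meta file, then load the weights.
    saver = tf1.train.import_meta_graph(ckpt_prefix + '.meta')
    with tf1.Session() as sess:
        saver.restore(sess, ckpt_prefix)        # reads all shards
        return saver.save(sess, out_prefix)     # single-shard re-save
```

Note that because restore only needs the prefix, the export script should already be able to read all three shards; the merge step is mainly useful for ruling that out as the cause.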

Update: contents of export_inference_graph.py:

import tensorflow as tf
from google.protobuf import text_format
from object_detection import exporter
from object_detection.protos import pipeline_pb2

slim = tf.contrib.slim
flags = tf.app.flags

flags.DEFINE_string('input_type', 'image_tensor', 'Type of input node. Can be '
                    'one of [`image_tensor`, `encoded_image_string_tensor`, '
                    '`tf_example`]')
flags.DEFINE_string('input_shape', None,
                    'If input_type is `image_tensor`, this can explicitly set '
                    'the shape of this input tensor to a fixed size. The '
                    'dimensions are to be provided as a comma-separated list '
                    'of integers. A value of -1 can be used for unknown '
                    'dimensions. If not specified, for an `image_tensor`, the '
                    'default shape will be partially specified as '
                    '`[None, None, None, 3]`.')
flags.DEFINE_string('pipeline_config_path', None,
                    'Path to a pipeline_pb2.TrainEvalPipelineConfig config '
                    'file.')
flags.DEFINE_string('trained_checkpoint_prefix', None,
                    'Path to trained checkpoint, typically of the form '
                    'path/to/model.ckpt')
flags.DEFINE_string('output_directory', None, 'Path to write outputs.')
flags.DEFINE_string('config_override', '',
                    'pipeline_pb2.TrainEvalPipelineConfig '
                    'text proto to override pipeline_config_path.')
flags.DEFINE_boolean('write_inference_graph', False,
                     'If true, writes inference graph to disk.')
tf.app.flags.mark_flag_as_required('pipeline_config_path')
tf.app.flags.mark_flag_as_required('trained_checkpoint_prefix')
tf.app.flags.mark_flag_as_required('output_directory')
FLAGS = flags.FLAGS

def main(_):
  pipeline_config = pipeline_pb2.TrainEvalPipelineConfig()
  with tf.gfile.GFile(FLAGS.pipeline_config_path, 'r') as f:
    text_format.Merge(f.read(), pipeline_config)
  text_format.Merge(FLAGS.config_override, pipeline_config)
  if FLAGS.input_shape:
    input_shape = [
        int(dim) if dim != '-1' else None
        for dim in FLAGS.input_shape.split(',')
    ]
  else:
    input_shape = None
  exporter.export_inference_graph(
      FLAGS.input_type, pipeline_config, FLAGS.trained_checkpoint_prefix,
      FLAGS.output_directory, input_shape=input_shape,
      write_inference_graph=FLAGS.write_inference_graph)


if __name__ == '__main__':
  tf.app.run()
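The --input_shape parsing in main() (a comma-separated list of ints, with -1 standing for an unknown dimension) can be exercised in isolation. This standalone copy (the function name is hypothetical) mirrors the list comprehension in the script:

```python
def parse_input_shape(flag_value):
    """Parse a comma-separated shape string as main() does.

    '-1' entries become None (unknown dimension); an empty or missing
    flag yields None, letting the exporter pick its default shape.
    """
    if not flag_value:
        return None
    return [int(dim) if dim != '-1' else None
            for dim in flag_value.split(',')]
```

For example, `parse_input_shape('1,-1,-1,3')` gives a fixed batch and channel count with unknown height and width.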

The command used to run the script with its flags:

python scripts/export_inference_graph.py \
    --input_type image_tensor \
    --pipeline_config_path training/ssd_mobilenet_v3_coco.config \
    --trained_checkpoint_prefix gsc/model.ckpt-${CHECKPOINT_NUMBER} \
    --output_directory gsc/output_inference_graph

0 Answers:

No answers yet.