How do I load a saved model from object detection to run inference?

Date: 2018-12-27 23:23:22

Tags: tensorflow

I'm still new to TensorFlow and have been experimenting with SSD using the TensorFlow Object Detection API. I can train a model successfully, but by default it only keeps the last n checkpoints. Instead, I would like to keep the n checkpoints with the lowest loss (which I assume is the best metric to use).

I found tf.estimator.BestExporter, which exports a saved_model.pb along with the variables. However, I haven't figured out how to load that saved model and run inference with it. After running models/research/object_detection/export_inference_graph.py on a checkpoint, I can easily load the checkpoint and run inference on it using the object detection Jupyter notebook: https://github.com/tensorflow/models/blob/master/research/object_detection/object_detection_tutorial.ipynb

I found the documentation on loading saved models, and I can load a graph like this:

import tensorflow as tf
from tensorflow.python.saved_model import tag_constants

with tf.Session(graph=tf.Graph()) as sess:
    tags = [tag_constants.SERVING]
    # PATH_TO_SAVED_MODEL points at the exported SavedModel directory
    meta_graph = tf.saved_model.loader.load(sess, tags, PATH_TO_SAVED_MODEL)
    detection_graph = tf.get_default_graph()

However, when I use that graph in the Jupyter notebook above, I get this error:

---------------------------------------------------------------------------
KeyError                                  Traceback (most recent call last)
<ipython-input-17-9e48f0d04df2> in <module>
      7   image_np_expanded = np.expand_dims(image_np, axis=0)
      8   # Actual detection.
----> 9   output_dict = run_inference_for_single_image(image_np, detection_graph)
     10   # Visualization of the results of a detection.
     11   vis_util.visualize_boxes_and_labels_on_image_array(

<ipython-input-16-0df86999596e> in run_inference_for_single_image(image, graph)
     31             detection_masks_reframed, 0)
     32 
---> 33       image_tensor = tf.get_default_graph().get_tensor_by_name('image_tensor:0')
     34       # image_tensor = tf.get_default_graph().get_tensor_by_name('serialized_example')
     35 

~/anaconda3/envs/sb/lib/python3.6/site-packages/tensorflow/python/framework/ops.py in get_tensor_by_name(self, name)
   3664       raise TypeError("Tensor names are strings (or similar), not %s." %
   3665                       type(name).__name__)
-> 3666     return self.as_graph_element(name, allow_tensor=True, allow_operation=False)
   3667 
   3668   def _get_tensor_by_tf_output(self, tf_output):

~/anaconda3/envs/sb/lib/python3.6/site-packages/tensorflow/python/framework/ops.py in as_graph_element(self, obj, allow_tensor, allow_operation)
   3488 
   3489     with self._lock:
-> 3490       return self._as_graph_element_locked(obj, allow_tensor, allow_operation)
   3491 
   3492   def _as_graph_element_locked(self, obj, allow_tensor, allow_operation):

~/anaconda3/envs/sb/lib/python3.6/site-packages/tensorflow/python/framework/ops.py in _as_graph_element_locked(self, obj, allow_tensor, allow_operation)
   3530           raise KeyError("The name %s refers to a Tensor which does not "
   3531                          "exist. The operation, %s, does not exist in the "
-> 3532                          "graph." % (repr(name), repr(op_name)))
   3533         try:
   3534           return op.outputs[out_n]

KeyError: "The name 'image_tensor:0' refers to a Tensor which does not exist. The operation, 'image_tensor', does not exist in the graph."

Is there a better way to load the saved model, or to convert it to an inference graph?

Thanks!

1 Answer:

Answer 0 (score: 1)

The TensorFlow detection API supports several input formats during export, as described in the documentation of the file export_inference_graph.py (an example export command follows the list below):

  • image_tensor: accepts a uint8 4-D tensor of shape [None, None, None, 3].
  • encoded_image_string_tensor: accepts a 1-D string tensor of shape [None] containing encoded PNG or JPEG images. Image resolutions are expected to be the same if more than one image is provided.
  • tf_example: accepts a 1-D string tensor of shape [None] containing serialized TFExample protos. Image resolutions are expected to be the same if more than one image is provided.
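
For instance, an export run that uses the image_tensor input type might look like the following (the pipeline config, checkpoint prefix and output directory paths are placeholders for your own training run):

python models/research/object_detection/export_inference_graph.py \
    --input_type image_tensor \
    --pipeline_config_path path/to/pipeline.config \
    --trained_checkpoint_prefix path/to/model.ckpt-XXXX \
    --output_directory path/to/exported_model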

So you should check whether you used the image_tensor input_type. In the exported model, the chosen input node will be named "inputs". Therefore, I suppose that replacing image_tensor:0 with inputs:0 (or maybe inputs) will solve your problem.
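
As a rough sketch of that change, you can also read the tensor names from the SavedModel's serving signature instead of hard-coding them. The 'serving_default' and 'inputs' keys below are assumptions about how the model was exported, and PATH_TO_SAVED_MODEL / image_np_expanded come from your own code; verify the actual keys with saved_model_cli if in doubt.

import tensorflow as tf
from tensorflow.python.saved_model import tag_constants

with tf.Session(graph=tf.Graph()) as sess:
    # load the SavedModel and grab its serving signature
    meta_graph = tf.saved_model.loader.load(
        sess, [tag_constants.SERVING], PATH_TO_SAVED_MODEL)
    signature = meta_graph.signature_def['serving_default']  # assumed signature key

    # map signature keys (e.g. 'inputs', 'detection_boxes') to real tensor names
    input_name = signature.inputs['inputs'].name              # e.g. 'inputs:0'
    output_names = {key: val.name for key, val in signature.outputs.items()}

    image_tensor = sess.graph.get_tensor_by_name(input_name)
    output_tensors = {key: sess.graph.get_tensor_by_name(name)
                      for key, name in output_names.items()}

    # image_np_expanded is the batched uint8 image from the tutorial notebook
    output_dict = sess.run(output_tensors,
                           feed_dict={image_tensor: image_np_expanded})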

I would also like to recommend a useful tool for running an exported model with just a couple of lines of code: tf.contrib.predictor.from_saved_model. Here is an example of how to use it:

import numpy as np
import cv2
import tensorflow as tf

# load a test image and convert BGR (OpenCV default) to RGB
img = cv2.imread("test.jpg")
img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)
img_rgb = np.expand_dims(img, 0)  # add a batch dimension

# build a predictor from the exported SavedModel directory
predict_fn = tf.contrib.predictor.from_saved_model("./saved_model")
output_data = predict_fn({"inputs": img_rgb})
print(output_data)  # dictionary with the detector's outputs
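
If you are not sure which input and output keys your exported model actually uses, the saved_model_cli tool bundled with TensorFlow can print the serving signature (assuming ./saved_model is the export directory):

saved_model_cli show --dir ./saved_model --tag_set serve --signature_def serving_default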