How do I correctly serve an object detection model from the Tensorflow Object Detection API?

Asked: 2017-07-27 23:46:06

Tags: tensorflow object-detection tensorflow-serving

I am working on an object detection task with the Tensorflow Object Detection API (github.com/tensorflow/models/tree/master/object_detection). Now I am having trouble serving the trained detection model with Tensorflow Serving (tensorflow.github.io/serving/).

1. The first problem I ran into was exporting the model as a servable. The Object Detection API includes an export script, so I was able to convert the ckpt files into a pb file plus a 'variables' folder. However, the 'variables' folder in the output was empty. I thought this was a bug and reported it on Github, but it turns out they actually convert the variables into constants, so no variables are left. Details can be found HERE.
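
For reference, freezing in TF 1.x works roughly like the sketch below, which is why the exported model ends up without anything in the variables folder. This is only a conceptual illustration using tf.graph_util.convert_variables_to_constants; the output node name is a placeholder, and the real export script chooses the detection output nodes itself.

    import tensorflow as tf

    # Conceptual sketch: restore the checkpoint, then bake the variable values
    # into the GraphDef as constants, so no variables/ folder is needed.
    with tf.Session(graph=tf.Graph()) as sess:
        saver = tf.train.import_meta_graph('resnet_ckpt/model.ckpt-17586.meta')
        saver.restore(sess, 'resnet_ckpt/model.ckpt-17586')
        frozen_graph_def = tf.graph_util.convert_variables_to_constants(
            sess, sess.graph_def,
            output_node_names=['detection_boxes'])  # placeholder node name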

The flags I used when exporting the saved model are as follows:

    CUDA_VISIBLE_DEVICES=0 python export_inference_graph.py \
        --input_type image_tensor \
        --pipeline_config_path configs/rfcn_resnet50_car_Jul_20.config \
        --checkpoint_path resnet_ckpt/model.ckpt-17586 \
        --inference_graph_path serving_model/1 \
        --export_as_saved_model True

When I switch --export_as_saved_model to False, it runs perfectly fine in Python.
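
Before pointing the model server at the export, it can also help to inspect what was actually written out. Assuming a TensorFlow build that ships the saved_model_cli tool, something like the following prints the available signature defs together with their input and output tensor names:

    saved_model_cli show --dir serving_model/1 --all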

However, I ran into problems when serving the model.

When I tried to run:

~/serving$ bazel-bin/tensorflow_serving/model_servers/tensorflow_model_server --port=9000 --model_name=gan --model_base_path=<my_model_path>

I got:

2017-07-27 16:11:53.222439: I external/org_tensorflow/tensorflow/cc/saved_model/loader.cc:155] Restoring SavedModel bundle.
2017-07-27 16:11:53.222497: I external/org_tensorflow/tensorflow/cc/saved_model/loader.cc:165] The specified SavedModel has no variables; no checkpoints were restored.
2017-07-27 16:11:53.222502: I external/org_tensorflow/tensorflow/cc/saved_model/loader.cc:190] Running LegacyInitOp on SavedModel bundle.
2017-07-27 16:11:53.229463: I external/org_tensorflow/tensorflow/cc/saved_model/loader.cc:284] Loading SavedModel: success. Took 281805 microseconds.
2017-07-27 16:11:53.229508: I tensorflow_serving/core/loader_harness.cc:86] Successfully loaded servable version {name: gan version: 1}
2017-07-27 16:11:53.244716: I tensorflow_serving/model_servers/main.cc:290] Running ModelServer at 0.0.0.0:9000 ...

I think the model is not loaded correctly, because it prints "The specified SavedModel has no variables; no checkpoints were restored."

But since we have already converted all the variables into constants, this seems reasonable. I am not sure.

2. The second problem is that I cannot use a client to call the server and run detection on a sample image.

The client script is listed below:

from __future__ import print_function
from __future__ import absolute_import

# Communication to TensorFlow server via gRPC
from grpc.beta import implementations
import tensorflow as tf
import numpy as np
from PIL import Image
# TensorFlow serving stuff to send messages
from tensorflow_serving.apis import predict_pb2
from tensorflow_serving.apis import prediction_service_pb2


# Command line arguments
tf.app.flags.DEFINE_string('server', 'localhost:9000',
                       'PredictionService host:port')
tf.app.flags.DEFINE_string('image', '', 'path to image in JPEG format')
FLAGS = tf.app.flags.FLAGS


def load_image_into_numpy_array(image):
    (im_width, im_height) = image.size
    return np.array(image.getdata()).reshape(
        (im_height, im_width, 3)).astype(np.uint8)

def main(_):
    host, port = FLAGS.server.split(':')
    channel = implementations.insecure_channel(host, int(port))
    stub = prediction_service_pb2.beta_create_PredictionService_stub(channel)

    # Send request
    request = predict_pb2.PredictRequest()
    image = Image.open(FLAGS.image)
    image_np = load_image_into_numpy_array(image)
    image_np_expanded = np.expand_dims(image_np, axis=0)
    # Call GAN model to make prediction on the image
    request.model_spec.name = 'gan'
    request.model_spec.signature_name = 'predict_images'
    request.inputs['inputs'].CopyFrom(
        tf.contrib.util.make_tensor_proto(image_np_expanded))

    result = stub.Predict(request, 60.0)  # 60 secs timeout
    print(result)


if __name__ == '__main__':
    tf.app.run()

To match request.model_spec.signature_name = 'predict_images', I modified the exporter.py script of the Object Detection API (github.com/tensorflow/models/blob/master/object_detection/exporter.py), starting at line 289, from:

      signature_def_map={
          signature_constants.DEFAULT_SERVING_SIGNATURE_DEF_KEY:
              detection_signature,
      },

to:

      signature_def_map={
          'predict_images': detection_signature,
          signature_constants.DEFAULT_SERVING_SIGNATURE_DEF_KEY:
              detection_signature,
      },

because I did not know how to call the default signature key.
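
For reference, the default serving signature key is simply the string 'serving_default', so the unmodified exporter output could also be called by setting the signature name in the client, roughly like this (a sketch that reuses the request object from the client script above):

    from tensorflow.python.saved_model import signature_constants

    # DEFAULT_SERVING_SIGNATURE_DEF_KEY is the string 'serving_default'
    request.model_spec.signature_name = \
        signature_constants.DEFAULT_SERVING_SIGNATURE_DEF_KEY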

When I run the following command:

bazel-bin/tensorflow_serving/example/client --server=localhost:9000 --image=<my_image_file>

I get the following error message:

Traceback (most recent call last):
  File "/home/xinyao/serving/bazel-bin/tensorflow_serving/example/client.runfiles/tf_serving/tensorflow_serving/example/client.py", line 54, in <module>
    tf.app.run()
  File "/home/xinyao/serving/bazel-bin/tensorflow_serving/example/client.runfiles/org_tensorflow/tensorflow/python/platform/app.py", line 48, in run
    _sys.exit(main(_sys.argv[:1] + flags_passthrough))
  File "/home/xinyao/serving/bazel-bin/tensorflow_serving/example/client.runfiles/tf_serving/tensorflow_serving/example/client.py", line 49, in main
    result = stub.Predict(request, 60.0)  # 60 secs timeout
  File "/usr/local/lib/python2.7/dist-packages/grpc/beta/_client_adaptations.py", line 324, in __call__
    self._request_serializer, self._response_deserializer)
  File "/usr/local/lib/python2.7/dist-packages/grpc/beta/_client_adaptations.py", line 210, in _blocking_unary_unary
    raise _abortion_error(rpc_error_call)
grpc.framework.interfaces.face.face.AbortionError: AbortionError(code=StatusCode.NOT_FOUND, details="FeedInputs: unable to find feed output ToFloat:0")

I am not quite sure what is happening here.

Initially, I found that the AbortionError comes from github.com/tensorflow/tensorflow/blob/f488419cd6d9256b25ba25cbe736097dfeee79f9/tensorflow/core/graph/subgraph.cc, but my client script may also be wrong. It looks like I hit this error while the graph was being built, so it may be caused by my first problem.

I am new to this, so I am really confused. I suspect I went wrong right at the start. Is there a way to correctly export and serve a detection model? Any suggestions would be a great help!

4 Answers:

Answer 0 (score: 1):

The current exporter code does not populate the signature field correctly, so serving with the model server does not work. Apologies for that. A new version with better support for exporting models should be out soon. It includes some important fixes and improvements needed for serving, especially serving on Cloud ML Engine. If you want to try an early version, see the github issue.

The "The specified SavedModel has no variables; no checkpoints were restored." message is expected, for exactly the reason you stated: all the variables are converted into constants in the graph. For the "FeedInputs: unable to find feed output ToFloat:0" error, make sure you build the model server with TF 1.2.

Answer 1 (score: 1):

  1. Your approach is fine. It is okay to have that warning.

  2. The problem is that the input needs to be converted to uint8, as the model expects. The snippet below works for me:

    # In addition to the imports in the client script above, this needs:
    # from tensorflow.python.saved_model import signature_constants
    request = predict_pb2.PredictRequest()
    request.model_spec.name = 'gan'
    request.model_spec.signature_name = \
        signature_constants.DEFAULT_SERVING_SIGNATURE_DEF_KEY

    image = Image.open('any.jpg')
    image_np = load_image_into_numpy_array(image)
    image_np_expanded = np.expand_dims(image_np, axis=0)

    request.inputs['inputs'].CopyFrom(
        tf.contrib.util.make_tensor_proto(image_np_expanded,
            shape=image_np_expanded.shape, dtype='uint8'))

The part that matters for you is shape=image_np_expanded.shape, dtype='uint8'. Also, make sure you pull the latest serving updates.
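
As a quick sanity check that the explicit dtype is actually picked up, the tensor proto can be inspected offline before sending the request. A minimal sketch, where the dummy array only stands in for a real image batch:

    import numpy as np
    import tensorflow as tf

    # Hypothetical stand-in for image_np_expanded: a batch of one 480x640 RGB image.
    dummy = np.zeros((1, 480, 640, 3), dtype=np.uint8)

    proto = tf.contrib.util.make_tensor_proto(dummy, shape=dummy.shape, dtype='uint8')
    print(tf.as_dtype(proto.dtype))  # <dtype: 'uint8'>
    print(proto.tensor_shape)        # dims 1, 480, 640, 3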

Answer 2 (score: 0):

I was struggling with exactly the same problem. I was trying to host a pretrained SSDMobileNet-COCO checkpoint from the Tensorflow Object Detection API model zoo.

原来我使用的是tensorflow / models的旧提交,恰好是服务的默认子模块

I simply pulled the most recent commit with:

    cd serving/tf_models
    git pull origin master
    git checkout master

After that, build the model server again:

bazel build //tensorflow_serving/model_servers:tensorflow_model_server

The error went away and I was able to get accurate predictions.

Answer 3 (score: 0):

For the error:

grpc.framework.interfaces.face.face.AbortionError: AbortionError(code=StatusCode.NOT_FOUND, details="FeedInputs: unable to find feed output ToFloat:0")

simply upgrade tf_models to the latest version and then re-export the model.

See https://github.com/tensorflow/tensorflow/issues/11863