TensorFlow Serving unable to pass predictions

Posted: 2019-07-11 06:49:06

Tags: tensorflow tensorflow-serving

Following the tutorial at https://www.tensorflow.org/tutorials/images/hub_with_keras, I obtained a SavedModel that I want to use with TF Serving.

Currently I am unable to get a prediction through TF Serving 1.14; I get:

{ "error": "JSON Parse error: Invalid value. at offset: 0" }

Original image:

curl -o image.jpg https://storage.googleapis.com/cloud-samples-data/ml-engine/flowers/tulips/4520577328_a94c11e806_n.jpg

Python code:

import numpy as np
import json
from PIL import Image

# Input size expected by the model (see the serving signature below: (-1, 224, 224, 3)).
IMAGE_SHAPE = (224, 224)

def convert_to_json(image_file):
  """Open image, convert it to numpy and create JSON request"""
  img = Image.open(image_file).resize(IMAGE_SHAPE)
  img_array = np.array(img)
  # Wrap the single image in a batch and key it by the input tensor name.
  predict_request = {"keras_layer_input": [img_array.tolist()]}
  with open('result.json', 'w') as fp:
    json.dump(predict_request, fp)
  return predict_request

prediction_data = convert_to_json('image.jpg')
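
For reference, a request like this would typically be posted to the model server's REST endpoint along the following lines (a sketch, not the exact call used here; it assumes the default model name and the REST port 8502 from the server command shown below):

import requests

# Sketch of the REST call; assumes model name "default" and REST port 8502 as used below.
url = 'http://localhost:8502/v1/models/default:predict'
response = requests.post(url, json=prediction_data)
print(response.status_code, response.text)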

TensorFlow saved_model_cli:

signature_def['serving_default']:
  The given SavedModel SignatureDef contains the following input(s):
    inputs['keras_layer_input'] tensor_info:
        dtype: DT_FLOAT
        shape: (-1, 224, 224, 3)
        name: keras_layer_input:0
  The given SavedModel SignatureDef contains the following output(s):
    outputs['dense'] tensor_info:
        dtype: DT_FLOAT
        shape: (-1, 5)
        name: dense/Softmax:0
  Method name is: tensorflow/serving/predict
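
(The signature above can be inspected with saved_model_cli, roughly as follows, assuming the model is exported under ${MODEL_DIR}/1:)

saved_model_cli show --dir "${MODEL_DIR}/1" --tag_set serve --signature_def serving_default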

TensorFlow Model Server:

tensorflow_model_server --rest_api_port=8502 --model_base_path="${MODEL_DIR}"

2019-07-11 06:43:07.124549: I tensorflow_serving/model_servers/server.cc:82] Building single TensorFlow model file config:  model_name: default model_base_path: gs://dpe-sandbox/saved_models/
2019-07-11 06:43:07.124906: I tensorflow_serving/model_servers/server_core.cc:461] Adding/updating models.
2019-07-11 06:43:07.124925: I tensorflow_serving/model_servers/server_core.cc:558]  (Re-)adding model: default
2019-07-11 06:43:09.199345: I tensorflow_serving/core/basic_manager.cc:739] Successfully reserved resources to load servable {name: default version: 1}
2019-07-11 06:43:09.199478: I tensorflow_serving/core/loader_harness.cc:66] Approving load for servable version {name: default version: 1}
2019-07-11 06:43:09.199546: I tensorflow_serving/core/loader_harness.cc:74] Loading servable version {name: default version: 1}
2019-07-11 06:43:09.199630: I external/org_tensorflow/tensorflow/contrib/session_bundle/bundle_shim.cc:363] Attempting to load native SavedModelBundle in bundle-shim from: gs://dpe-sandbox/saved_models/1
2019-07-11 06:43:09.199644: I external/org_tensorflow/tensorflow/cc/saved_model/reader.cc:31] Reading SavedModel from: gs://dpe-sandbox/saved_models/1
2019-07-11 06:43:09.422041: I external/org_tensorflow/tensorflow/cc/saved_model/reader.cc:54] Reading meta graph with tags { serve }
2019-07-11 06:43:09.473872: I external/org_tensorflow/tensorflow/core/platform/cpu_feature_guard.cc:141] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX2 FMA
2019-07-11 06:43:10.852346: I external/org_tensorflow/tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:998] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2019-07-11 06:43:10.852833: I external/org_tensorflow/tensorflow/core/common_runtime/gpu/gpu_device.cc:1433] Found device 0 with properties: 
name: Tesla T4 major: 7 minor: 5 memoryClockRate(GHz): 1.59
pciBusID: 0000:00:04.0
totalMemory: 14.73GiB freeMemory: 14.60GiB
2019-07-11 06:43:10.967813: I external/org_tensorflow/tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:998] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2019-07-11 06:43:10.968300: I external/org_tensorflow/tensorflow/core/common_runtime/gpu/gpu_device.cc:1433] Found device 1 with properties: 
name: Tesla T4 major: 7 minor: 5 memoryClockRate(GHz): 1.59
pciBusID: 0000:00:05.0
totalMemory: 14.73GiB freeMemory: 14.60GiB
2019-07-11 06:43:10.971176: I external/org_tensorflow/tensorflow/core/common_runtime/gpu/gpu_device.cc:1512] Adding visible gpu devices: 0, 1
2019-07-11 06:43:11.998039: I external/org_tensorflow/tensorflow/core/common_runtime/gpu/gpu_device.cc:984] Device interconnect StreamExecutor with strength 1 edge matrix:
2019-07-11 06:43:11.998112: I external/org_tensorflow/tensorflow/core/common_runtime/gpu/gpu_device.cc:990]      0 1 
2019-07-11 06:43:11.998119: I external/org_tensorflow/tensorflow/core/common_runtime/gpu/gpu_device.cc:1003] 0:   N Y 
2019-07-11 06:43:11.998125: I external/org_tensorflow/tensorflow/core/common_runtime/gpu/gpu_device.cc:1003] 1:   Y N 
2019-07-11 06:43:11.998781: I external/org_tensorflow/tensorflow/core/common_runtime/gpu/gpu_device.cc:1115] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 14103 MB memory) -> physical GPU (device: 0, name: Tesla T4, pci bus id: 0000:00:04.0, compute capability: 7.5)
2019-07-11 06:43:11.999365: I external/org_tensorflow/tensorflow/core/common_runtime/gpu/gpu_device.cc:1115] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:1 with 14103 MB memory) -> physical GPU (device: 1, name: Tesla T4, pci bus id: 0000:00:05.0, compute capability: 7.5)
2019-07-11 06:43:12.092750: I external/org_tensorflow/tensorflow/cc/saved_model/loader.cc:182] Restoring SavedModel bundle.
2019-07-11 06:43:13.436094: I external/org_tensorflow/tensorflow/cc/saved_model/loader.cc:132] Running initialization op on SavedModel bundle.
2019-07-11 06:43:13.517255: I external/org_tensorflow/tensorflow/cc/saved_model/loader.cc:285] SavedModel load for tags { serve }; Status: success. Took 4317544 microseconds.
2019-07-11 06:43:13.663286: I tensorflow_serving/servables/tensorflow/saved_model_warmup.cc:101] No warmup data file found at gs://dpe-sandbox/saved_models/1/assets.extra/tf_serving_warmup_requests
2019-07-11 06:43:15.562663: I tensorflow_serving/core/loader_harness.cc:86] Successfully loaded servable version {name: default version: 1}
2019-07-11 06:43:15.582902: I tensorflow_serving/model_servers/server.cc:313] Running gRPC ModelServer at 0.0.0.0:8500 ...
2019-07-11 06:43:15.584199: I tensorflow_serving/model_servers/server.cc:333] Exporting HTTP/REST API at:localhost:8502 ...
[evhttp_server.cc : 237] RAW: Entering the event loop ...
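
(For completeness: once the server reports it is running, whether the model actually loaded can be checked through the REST model-status endpoint. A sketch, assuming model name default and port 8502:)

import requests

# Model status endpoint of the TF Serving REST API; assumes name "default" and port 8502.
status = requests.get('http://localhost:8502/v1/models/default')
print(status.json())  # the servable state should be AVAILABLE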

I am not sure whether I am encoding the image correctly? I copied and pasted the values and they appear to be valid JSON.

1 Answer:

Answer 0 (score: 0)

Even though the solution is mentioned in the comments section, it is provided in this answer section for the benefit of the community.

The issue was resolved by replacing keras_layer_input with instances in the line of code below. This is because, per the request format of the predict SignatureDef, the input for inference has to be keyed as instances (or inputs), not as the name of the input tensor.

The working code is:

predict_request = {"instances": [img_array.tolist()]}
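
Putting it together, a minimal end-to-end sketch of the corrected request (it assumes IMAGE_SHAPE = (224, 224) to match the serving signature, rescaling to [0, 1] as in the hub_with_keras tutorial, and the REST endpoint used above):

import json
import numpy as np
import requests
from PIL import Image

IMAGE_SHAPE = (224, 224)  # matches the (-1, 224, 224, 3) input in the signature

img = Image.open('image.jpg').resize(IMAGE_SHAPE)
# The hub_with_keras tutorial rescales pixel values to [0, 1]; do the same for inference.
img_array = np.array(img) / 255.0

# The TF Serving REST predict API expects the batch under the "instances" key.
predict_request = {"instances": [img_array.tolist()]}

response = requests.post('http://localhost:8502/v1/models/default:predict',
                         data=json.dumps(predict_request))
print(response.json())  # expected: {"predictions": [[p0, ..., p4]]} from the 5-class softmax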