How do I make a REST call with JSON to a served TensorFlow model?

Date: 2018-04-12 08:04:59

Tags: tensorflow tensorflow-serving

I have built and trained a TensorFlow model, deployed using the tf.Estimator paradigm. I wrote a serving function as follows:

import tensorflow as tf

def serving_input_fn(params):
    # A single placeholder for a batch of int64 inputs
    feature_placeholders = {
        'inputs': tf.placeholder(tf.int64, [None], name='inputs')
    }
    features = {
        key: tensor
        for key, tensor in feature_placeholders.items()
    }
    return tf.estimator.export.ServingInputReceiver(features, feature_placeholders)

Now I want to be able to call it with application/json as the content type. So I built a JSON payload, following the example I found in this question:

import json

payload = {'instances': [{'inputs': [1039]}]}
json_string = json.dumps(payload)
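For reference, a minimal sketch of sending this payload to a TensorFlow Serving REST endpoint; the host, port, and model name `my_model` are assumptions and must be adjusted to the actual deployment:

```python
import json
import urllib.request

# Hypothetical endpoint -- adjust host, port, and model name to your deployment.
url = 'http://localhost:8501/v1/models/my_model:predict'

payload = {'instances': [{'inputs': [1039]}]}
body = json.dumps(payload).encode('utf-8')

req = urllib.request.Request(
    url,
    data=body,
    headers={'Content-Type': 'application/json'},
)

# Uncomment to actually issue the request against a running server:
# with urllib.request.urlopen(req) as resp:
#     print(json.load(resp))
```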

When I call the model, I get back:

ERROR in serving: Unsupported request data format: {u'instances': [{u'inputs': [1039]}]}.
Valid formats: tensor_pb2.TensorProto, dict<string, tensor_pb2.TensorProto> and predict_pb2.PredictRequest

Any ideas how to achieve my goal?

1 Answer:

Answer 0 (score: 0)

It turns out the JSON should be:

import json

# TensorProto-style JSON: dtype, shape, and a typed value list
request = {'dtype': 'DT_INT64',
           'tensorShape': {'dim': [{'size': 1}]},
           'int64Val': [1039]}

json_string = json.dumps(request)
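As a sketch, the same request can be assembled programmatically. The helper below is hypothetical (not a TensorFlow API); it just makes the dtype / tensorShape / int64Val layout explicit and keeps the declared shape in sync with the number of values:

```python
import json

def int64_tensor_json(values):
    """Build a TensorProto-style JSON dict for a 1-D int64 tensor.

    Hypothetical helper, not part of TensorFlow: it mirrors the
    dtype / tensorShape / int64Val structure shown above.
    """
    return {
        'dtype': 'DT_INT64',
        'tensorShape': {'dim': [{'size': len(values)}]},
        'int64Val': list(values),
    }

request = int64_tensor_json([1039])
json_string = json.dumps(request)
```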