TensorFlow ServingInputReceiver: wrong input shape on the client side

Time: 2018-05-03 21:27:17

Tags: tensorflow tensorflow-serving tensorflow-estimator

I am currently working with the TensorFlow Estimator API and am struggling with the confusing range of serving options that are available. My confusion comes from the TensorFlow documentation, which does not explain them in much detail.

Here is my goal: send a serialized proto message as a string, via tensorflow-serving's prediction_service_pb2, to the ServingInputReceiver function of an exported Estimator model. I want the ServingInputReceiver function to receive the serialized proto string on the "input" tensor and then deserialize it into the features "ink" (a variable-length float array) and "shape" (a fixed-length int64 array).

Here is my Estimator input function (from my implementation of the Google Quick, Draw! model):

def _parse_tfexample_fn(example_proto, mode):
    """Parse a single record which is expected to be a tensorflow.Example."""
    feature_to_type = {
        "ink": tf.VarLenFeature(dtype=tf.float32),
        "shape": tf.FixedLenFeature([2], dtype=tf.int64)
    }
    if mode != tf.estimator.ModeKeys.PREDICT:
        # The labels won't be available at inference time, so don't add them
        # to the list of feature_columns to be read.
        feature_to_type["class_index"] = tf.FixedLenFeature([1], dtype=tf.int64)

    parsed_features = tf.parse_single_example(example_proto, feature_to_type)
    parsed_features["ink"] = tf.sparse_tensor_to_dense(parsed_features["ink"])

    if mode != tf.estimator.ModeKeys.PREDICT:
        labels = parsed_features["class_index"]
        return parsed_features, labels
    else:
        return parsed_features  # In prediction, we have no labels
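
For context, a parse function like this would typically be wired into a tf.data-based input_fn roughly as in the Quick, Draw! tutorial. This is only a sketch; the TFRecord pattern and batch size are placeholders, not my actual values:

def get_input_fn(mode, tfrecord_pattern, batch_size):
    """Hypothetical input_fn builder; tfrecord_pattern and batch_size are placeholders."""
    def _input_fn():
        dataset = tf.data.TFRecordDataset(tf.gfile.Glob(tfrecord_pattern))
        if mode == tf.estimator.ModeKeys.TRAIN:
            dataset = dataset.shuffle(buffer_size=10000).repeat()
        # Reuse the same per-record parsing logic as above.
        dataset = dataset.map(lambda proto: _parse_tfexample_fn(proto, mode))
        # "ink" is variable length, so pad each batch to its longest sequence.
        dataset = dataset.padded_batch(batch_size, padded_shapes=dataset.output_shapes)
        return dataset
    return _input_fn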

Here is my serving input function:

def serving_input_receiver_fn():
    """An input receiver that expects a serialized tf.Example."""
    feature_to_type = {
        "ink": tf.VarLenFeature(dtype=tf.float32),
        "shape": tf.FixedLenFeature([2], dtype=tf.int64)
    }

    serialized_tf_example = tf.placeholder(dtype=tf.string, shape=[None], name='input')

    parsed_features = tf.parse_example(serialized_tf_example, feature_to_type)
    parsed_features["ink"] = tf.sparse_tensor_to_dense(parsed_features["ink"])

    return tf.estimator.export.ServingInputReceiver(parsed_features, serialized_tf_example)
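
The export itself happens along these lines (the export path and the estimator variable are placeholders). Loading the SavedModel back and printing its signature_def is one way to check which input aliases the exported signature actually expects:

export_dir = estimator.export_savedmodel("export_base", serving_input_receiver_fn)

# Load the exported model and print its serving signature; the input aliases
# come from the receiver_tensors argument of ServingInputReceiver.
with tf.Session(graph=tf.Graph()) as sess:
    meta_graph = tf.saved_model.loader.load(
        sess, [tf.saved_model.tag_constants.SERVING], export_dir)
    signature = meta_graph.signature_def["serving_default"]
    print(signature.inputs)
    print(signature.outputs)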

Here is my client.py request:

features = {}
features["ink"] = tf.train.Feature(float_list=tf.train.FloatList(value=np_ink.flatten()))
features["shape"] = tf.train.Feature(int64_list=tf.train.Int64List(value=np_ink.shape))
f = tf.train.Features(feature=features)
data = tf.train.Example(features=f)
serialized = data.SerializeToString()  # serialize the Example proto to a byte string
request.inputs['input'].CopyFrom(tf.contrib.util.make_tensor_proto(serialized, shape=[1], verify_shape=True))
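
For reference, the surrounding client setup that is omitted above looks roughly like this; the host, port, model name, and signature name are assumptions on my part:

from grpc.beta import implementations
from tensorflow_serving.apis import predict_pb2, prediction_service_pb2

# Host, port, model name, and signature name are placeholders.
channel = implementations.insecure_channel("localhost", 9000)
stub = prediction_service_pb2.beta_create_PredictionService_stub(channel)

request = predict_pb2.PredictRequest()
request.model_spec.name = "quickdraw"
request.model_spec.signature_name = "serving_default"

# ... fill request.inputs['input'] as shown above ...
result = stub.Predict(request, 10.0)  # 10 second timeout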

Here is the error I get after calling the Predict function in client.py:

grpc.framework.interfaces.face.face.AbortionError: AbortionError(code=StatusCode.INVALID_ARGUMENT, details="input tensor alias not found in signature: ink. Inputs expected to be in the set {input}.")

I have tried the following serving functions: ServingInputReceiver and build_raw_serving_input_receiver_fn give me the same gRPC error. When I use build_parsing_serving_input_receiver_fn, it does not even export my model. I have tried to wrap my head around the documentation, but it is very sparse, and I do not understand when to use which serving input function.
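
For reference, this is roughly how I understand build_parsing_serving_input_receiver_fn is meant to be used (the export path is a placeholder). As far as I can tell it builds essentially the same graph as my hand-written serving_input_receiver_fn, except that it does not densify "ink" and the resulting signature uses "examples" as the input alias instead of "input":

feature_spec = {
    "ink": tf.VarLenFeature(dtype=tf.float32),
    "shape": tf.FixedLenFeature([2], dtype=tf.int64),
}

# Builds a receiver that takes serialized tf.Example strings and runs
# tf.parse_example with feature_spec; the signature input alias is 'examples'.
serving_fn = tf.estimator.export.build_parsing_serving_input_receiver_fn(feature_spec)
estimator.export_savedmodel("export_base", serving_fn)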

0 Answers:

No answers yet