How do I use a TensorFlow model saved with saved_model.simple_save?

Asked: 2019-10-22 21:30:54

Tags: python tensorflow keras tf.keras

I saved the model with the following function:

import tensorflow as tf
from tensorflow.python.estimator.export import export as export_helpers
import keras.backend as K

def save_for_serving(self):
    K.clear_session()  # to reset input tensor names
    model = self.build_model(serving=True)
    model.load_weights(self._train_config['model_checkpoint_path'])

    K.set_learning_phase(0)  # inference mode
    with K.get_session() as sess:
        tf.saved_model.simple_save(
            sess,
            export_helpers.get_timestamped_export_dir(self._output_model_dir),
            # Signature keys: tensor name without the ':0' suffix for inputs,
            # op name before the first '/' for outputs.
            inputs=dict(zip([t.name.split(':')[0] for t in model.model.input], model.model.input)),
            outputs=dict(zip([t.name.split('/')[0] for t in model.model.outputs], model.model.outputs))
        )
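
As an aside, the two dict(zip(...)) expressions above only derive the signature keys from the tensor names: inputs drop the ':0' suffix, and outputs keep the part before the first '/'. A minimal, self-contained sketch of that key derivation, using tensor names that reappear in the saved_model_cli output further down:

# Hypothetical tensor names, copied from the saved_model_cli output shown below.
input_names = ["name_counts:0"]
output_names = ["o1/Sigmoid:0", "o2/BiasAdd:0"]

# Inputs: 'name_counts:0' -> signature key 'name_counts'
input_keys = [n.split(':')[0] for n in input_names]
# Outputs: 'o1/Sigmoid:0' -> signature key 'o1'
output_keys = [n.split('/')[0] for n in output_names]

print(input_keys)   # ['name_counts']
print(output_keys)  # ['o1', 'o2']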

This produces the following directory structure:

│   ├── 1571292052
│   │   ├── saved_model.pb
│   │   └── variables
│   │       ├── variables.data-00000-of-00001
│   │       └── variables.index

How can I use it to predict the output for new cases? I have a pandas DataFrame with all of the input variables and the preprocessing functions, but I don't know how to run inference with the model.

I started with the snippet below, but I don't know how to pass the preprocessed pandas rows to the loaded model to get the outputs:

import tensorflow as tf
import keras.backend as K

with K.get_session() as sess:
    i = tf.saved_model.load(sess, tags={'serve'}, export_dir='./exp_model/1571292052/')

>>> print(type(i))

<class 'tensorflow.core.protobuf.meta_graph_pb2.MetaGraphDef'>

>>> print(dir(i))

['ByteSize', 'Clear', 'ClearExtension', 'ClearField', 'CollectionDefEntry', 'CopyFrom', 'DESCRIPTOR', 'DiscardUnknownFields', 'Extensions', 'FindInitializationErrors', 'FromString', 'HasExtension', 'HasField', 'IsInitialized', 'ListFields', 'MergeFrom', 'MergeFromString', 'MetaInfoDef', 'ParseFromString', 'RegisterExtension', 'SerializePartialToString', 'SerializeToString', 'SetInParent', 'SignatureDefEntry', 'UnknownFields', 'WhichOneof', '_CheckCalledFromGeneratedFile', '_SetListener', '__class__', '__deepcopy__', '__delattr__', '__dir__', '__doc__', '__eq__', '__format__', '__ge__', '__getattribute__', '__getstate__', '__gt__', '__hash__', '__init__', '__init_subclass__', '__le__', '__lt__', '__module__', '__ne__', '__new__', '__reduce__', '__reduce_ex__', '__repr__', '__setattr__', '__setstate__', '__sizeof__', '__slots__', '__str__', '__subclasshook__', '__unicode__', '_extensions_by_name', '_extensions_by_number', '_tf_api_names', '_tf_api_names_v1', 'asset_file_def', 'collection_def', 'graph_def', 'meta_info_def', 'object_graph_def', 'saver_def', 'signature_def']
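
The MetaGraphDef returned by tf.saved_model.load in TF 1.x is not callable directly, but its signature_def map holds the tensor names needed for inference. A minimal sketch of reading it, continuing the session above ('serving_default' is the signature name reported below):

# 'i' is the MetaGraphDef returned by tf.saved_model.load above.
sig = i.signature_def['serving_default']

# Each entry maps a signature key to a TensorInfo proto; its .name field is
# the tensor name to feed or fetch in the graph.
for key, tensor_info in sig.inputs.items():
    print('input ', key, '->', tensor_info.name)
for key, tensor_info in sig.outputs.items():
    print('output', key, '->', tensor_info.name)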

Then, the output of saved_model_cli:

$ saved_model_cli show --dir ./exp_model/1571292052 --all
MetaGraphDef with tag-set: 'serve' contains the following SignatureDefs:

signature_def['serving_default']:
  The given SavedModel SignatureDef contains the following input(s):
    inputs['name_counts'] tensor_info:
        dtype: DT_FLOAT
        shape: (-1, 1)
        name: name_counts:0
    ...
  The given SavedModel SignatureDef contains the following output(s):
    outputs['o1'] tensor_info:
        dtype: DT_FLOAT
        shape: (-1, 1)
        name: o1/Sigmoid:0
    outputs['o2'] tensor_info:
        dtype: DT_FLOAT
        shape: (-1, 1)
        name: o2/BiasAdd:0
  Method name is: tensorflow/serving/predict

1 Answer:

Answer 0 (score: 0)

Let me propose a solution that worked for me. In my case, the model has a single input and a single output:

MetaGraphDef with tag-set: 'serve' contains the following SignatureDefs:

signature_def['serving_default']:
  The given SavedModel SignatureDef contains the following input(s):
    inputs['text'] tensor_info:
        dtype: DT_STRING
        shape: (-1, 1)
        name: input_1:0
  The given SavedModel SignatureDef contains the following output(s):
    outputs['dense_2/Softmax:0'] tensor_info:
        dtype: DT_FLOAT
        shape: (-1, 2)
        name: dense_2/Softmax:0
  Method name is: tensorflow/serving/predict

Here is the code, tested with TF 1.14.0 on Ubuntu 18.04:

import tensorflow as tf
import numpy as np

def extract_tensors(signature_def, graph):
    output = dict()

    for key in signature_def:
        value = signature_def[key]

        if isinstance(value, tf.TensorInfo):
            output[key] = graph.get_tensor_by_name(value.name)

    return output

def extract_input_name(signature_def, graph):
    input_tensors = extract_tensors(signature_def['serving_default'].inputs, graph)
    # Assuming one input in the model.
    key = list(input_tensors.keys())[0]
    return input_tensors.get(key).name

def extract_output_name(signature_def, graph):
    output_tensors = extract_tensors(signature_def['serving_default'].outputs, graph)
    # Assuming one output in the model.
    key = list(output_tensors.keys())[0]
    return output_tensors.get(key).name

messages = [ "Some input text", "Another text" ]
new_text = np.array(messages, dtype=object)[:, np.newaxis]

model_dir = "./models/use/1"

with tf.Session(graph=tf.Graph()) as session:
    serve = tf.saved_model.load(session, tags={'serve'}, export_dir=model_dir)

    input_tensor_name = extract_input_name(serve.signature_def, session.graph)
    output_tensor_name = extract_output_name(serve.signature_def, session.graph)

    result = session.run([output_tensor_name], feed_dict={input_tensor_name: new_text})
    print(result)

Output:

INFO:tensorflow:Restoring parameters from ./models/use/1/variables/variables
[array([[0.76517266, 0.2348273 ],
       [0.1170708 , 0.8829292 ]], dtype=float32)]
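
A follow-up note: the helpers above assume a single input and a single output, while the model in the question has several inputs and two outputs. The same signature_def can drive a feed_dict built from the pandas DataFrame columns. Below is a sketch under the assumption that the DataFrame columns are named after the signature input keys (e.g. name_counts) and that each input is a single float column; the DataFrame contents here are hypothetical:

import numpy as np
import pandas as pd
import tensorflow as tf

model_dir = './exp_model/1571292052/'
df = pd.DataFrame({'name_counts': [1.0, 3.0]})  # hypothetical preprocessed rows

with tf.Session(graph=tf.Graph()) as session:
    serve = tf.saved_model.load(session, tags={'serve'}, export_dir=model_dir)
    sig = serve.signature_def['serving_default']

    # One DataFrame column per signature input, reshaped to (-1, 1) to match
    # the shapes reported by saved_model_cli.
    feed_dict = {
        info.name: df[key].to_numpy(dtype=np.float32).reshape(-1, 1)
        for key, info in sig.inputs.items()
    }
    # Fetch every output tensor named in the signature (o1/Sigmoid:0, o2/BiasAdd:0).
    fetches = {key: info.name for key, info in sig.outputs.items()}

    results = session.run(fetches, feed_dict=feed_dict)
    print(results)  # e.g. {'o1': array(...), 'o2': array(...)}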