How do I export the best model from an Estimator?

Date: 2019-06-12 01:39:22

Tags: python tensorflow tensorflow-serving tensorflow-estimator

I am training a simple CNN with a custom Estimator on TFRecords. During train_and_evaluate I am trying to export the model that is best with respect to validation loss.

According to the documentation of tf.estimator.BestExporter, I should provide a function that returns a ServingInputReceiver, but after doing so the train_and_evaluate phase crashes with NotFoundError: model/m01/eval; No such file or directory.

It looks as if BestExporter prevents the evaluation results from being saved, which does not happen when no exporter is used. I have tried different ServingInputReceivers but always get the same error.

The first one is defined following here:

feature_spec = {
        'shape': tf.VarLenFeature(tf.int64),
        'image_raw': tf.FixedLenFeature((), tf.string),
        'label_raw': tf.FixedLenFeature((43), tf.int64)
    }

def serving_input_receiver_fn():
  serialized_tf_example = tf.placeholder(dtype=tf.string,
                                         shape=[120, 120, 3],
                                         name='input_example_tensor')
  receiver_tensors = {'image': serialized_tf_example}
  features = tf.parse_example(serialized_tf_example, feature_spec)
  return tf.estimator.export.ServingInputReceiver(features, receiver_tensors)

and this one follows here:

def serving_input_receiver_fn():
    feature_spec = {
            'image': tf.FixedLenFeature((), tf.string)
        }
    return tf.estimator.export.build_parsing_serving_input_receiver_fn(feature_spec)
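
As an aside, build_parsing_serving_input_receiver_fn itself already returns a receiver function, so it can also be handed to the exporter directly rather than wrapped in another function. A sketch of that variant (not the code that produced the error above):

feature_spec = {'image': tf.FixedLenFeature((), tf.string)}

# build_parsing_serving_input_receiver_fn returns a serving_input_receiver_fn,
# so its result can be passed straight to BestExporter below.
serving_input_receiver_fn = (
    tf.estimator.export.build_parsing_serving_input_receiver_fn(feature_spec))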

Here are my exporter and the training routine:

exporter = tf.estimator.BestExporter(
    name="best_exporter",
    serving_input_receiver_fn=serving_input_receiver_fn,
    exports_to_keep=5)

train_spec = tf.estimator.TrainSpec(
    input_fn=lambda: imgs_input_fn(train_path, True, epochs, batch_size))

eval_spec = tf.estimator.EvalSpec(
    input_fn=lambda: imgs_input_fn(eval_path, perform_shuffle=False, batch_size=1),
    exporters=exporter)

tf.estimator.train_and_evaluate(ben_classifier, train_spec, eval_spec)

This is a gist with the output. What is the correct way to define the ServingInputReceiver for BestExporter?

1 answer:

Answer 0: (score: 2)

Could you try the code shown below?

def serving_input_receiver_fn():
    """
    This is used to define inputs to serve the model.
    :return: ServingInputReceiver
    """
    receiver_tensors = {
        # The size of the input image is flexible.
        'image': tf.placeholder(tf.float32, [None, None, None, 1]),
    }

    # Convert the given inputs to match what the model expects.
    features = {
        # Reshape the given images; INPUT_SHAPE is the input size the model
        # expects, as defined in the linked example.
        'image': tf.reshape(receiver_tensors['image'], [-1, INPUT_SHAPE])
    }
    return tf.estimator.export.ServingInputReceiver(receiver_tensors=receiver_tensors,
                                                    features=features)
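
If your network consumes the 120x120 RGB images from your question directly, a variant with concrete shapes could look like this (just a sketch, untested against your model):

def serving_input_receiver_fn():
    receiver_tensors = {
        # Batch of 120x120 RGB images, taken from the shape in your question.
        'image': tf.placeholder(tf.float32, [None, 120, 120, 3],
                                name='input_image'),
    }
    # The model is assumed to consume the images as-is, so no reshape is needed.
    features = {'image': receiver_tensors['image']}
    return tf.estimator.export.ServingInputReceiver(features, receiver_tensors)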

Then use tf.estimator.BestExporter as shown below:

best_exporter = tf.estimator.BestExporter(
    serving_input_receiver_fn=serving_input_receiver_fn,
    exports_to_keep=1)
exporters = [best_exporter]

eval_input_fn = tf.estimator.inputs.numpy_input_fn(
    x={input_name: eval_data},
    y=eval_labels,
    num_epochs=1,
    shuffle=False)

eval_spec = tf.estimator.EvalSpec(
    input_fn=eval_input_fn,
    throttle_secs=10,
    start_delay_secs=10,
    steps=None,
    exporters=exporters)

# Train and evaluate the model.
tf.estimator.train_and_evaluate(classifier, train_spec=train_spec, eval_spec=eval_spec)

For more information, see this link: https://github.com/yu-iskw/tensorflow-serving-example/blob/master/python/train/mnist_keras_estimator.py
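
Once an export succeeds, the best model should end up under <model_dir>/export/best_exporter/<timestamp>. A minimal sketch (TF 1.x; the export path, the 'image' input key, and the image shape are assumptions based on the receiver above) for loading it back and running a quick sanity check:

import numpy as np
from tensorflow.contrib import predictor

# Hypothetical export path: <model_dir>/export/best_exporter/<timestamp>.
export_dir = 'model/m01/export/best_exporter/1560300000'

predict_fn = predictor.from_saved_model(export_dir)

# The dummy batch must match the receiver placeholder's shape; adjust the
# number of channels to whatever your model expects.
dummy_images = np.zeros((1, 120, 120, 3), dtype=np.float32)
print(predict_fn({'image': dummy_images}))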