How to set up Textsum for TensorFlow Serving

Posted: 2016-11-15 02:00:14

Tags: tensorflow tensorflow-serving textsum

I'm trying to set up the Textsum decode functionality for TensorFlow Serving, but working through the MNIST tutorial hasn't made it fully clear to me what is actually required. Has anyone come across any other tutorials on setting up a model for TensorFlow Serving, or even something closer to Textsum? Any help or direction would be great. Thanks!

Ultimately, I'm trying to export the decode functionality from a model trained via 'train' in seq2seq_attention.py: https://github.com/tensorflow/models/blob/master/textsum/seq2seq_attention.py

When comparing the following two files to understand what I need to do for the Textsum model above, I'm having a hard time understanding what needs to be assigned for default_graph_signature, input_tensor, classes_tensor, and so on. I realize these may not map directly onto the Textsum model, but that's exactly what I'm trying to clear up; it would probably make more sense if I could see some other models being exported for TensorFlow Serving.

Compared: https://github.com/tensorflow/tensorflow/blob/r0.11/tensorflow/examples/tutorials/mnist/mnist_softmax.py

https://github.com/tensorflow/serving/blob/master/tensorflow_serving/example/mnist_export.py
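For anyone comparing the same two files: the signature assignment in the linked mnist_export.py looks roughly like the fragment below (reproduced from memory, so names may differ slightly from the actual file; all tensors here come from the MNIST graph built earlier in that script, not from Textsum).

```python
# Fragment in the style of the linked mnist_export.py. Not runnable on its own:
# serialized_tf_example, prediction_classes, values, and x are tensors defined
# earlier in that script's MNIST graph.
classification_signature = exporter.classification_signature(
    input_tensor=serialized_tf_example,
    classes_tensor=prediction_classes,
    scores_tensor=values)
named_graph_signature = {
    'inputs': exporter.generic_signature({'images': x}),
    'outputs': exporter.generic_signature({'scores': values})}
model_exporter.init(
    sess.graph.as_graph_def(),
    init_op=init_op,
    default_graph_signature=classification_signature,
    named_graph_signatures=named_graph_signature)
```

For Textsum, the analogous step would presumably be a generic_signature mapping the article input tensor to the decoded abstract output tensor, but that mapping is exactly what this question is asking about.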

------------------ EDIT ------------------

Below is what I have so far, but I'm running into some issues. I'm trying to set up the Textsum Eval functionality for serving. First, when the Saver(sharded=True) assignment happens, I get an error stating "No variables to save". Beyond that, I also don't understand what I should assign to the 'classification_signature' and 'named_graph_signature' variables in order to return results via Textsum decoding.

Any help with whatever I'm missing here would be appreciated... and I'm sure it's quite a bit.
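For context on the first error: tf.train.Saver snapshots the variables that exist in the default graph at the moment it is constructed, so building it before the seq2seq model graph exists raises "No variables to save". A minimal sketch of that failure mode against the TF 0.x/1.x API (the variable name below is made up):

```python
import tensorflow as tf

# On an empty default graph, Saver has nothing to snapshot and raises ValueError.
try:
    tf.train.Saver(sharded=True)
except ValueError as e:
    print(e)  # e.g. "No variables to save"

# After at least one variable exists (in Textsum's case, after the
# seq2seq_attention model graph has been built), the same call succeeds.
v = tf.Variable(0, name='dummy_var')
saver = tf.train.Saver(sharded=True)
```

In the script below, this suggests the model graph from seq2seq_attention.py would need to be constructed inside Export() before the Saver is created.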

from __future__ import absolute_import
from __future__ import division
from __future__ import print_function

import sys
import tensorflow as tf
from tensorflow.contrib.session_bundle import exporter

tf.app.flags.DEFINE_string("export_dir", "exports/textsum",
                           "Directory where to export textsum model.")

tf.app.flags.DEFINE_string('checkpoint_dir', 'log_root',
                           "Directory where to read training checkpoints.")
tf.app.flags.DEFINE_integer('export_version', 1, 'version number of the model.')
tf.app.flags.DEFINE_bool("use_checkpoint_v2", False,
                     "If true, write v2 checkpoint files.")
FLAGS = tf.app.flags.FLAGS

def Export():
    try:
        # NOTE: this is where the "No variables to save" error occurs -- the
        # Saver is constructed before any model graph has been built.
        saver = tf.train.Saver(sharded=True)
        with tf.Session() as sess:
            # Restore variables from training checkpoints.
            ckpt = tf.train.get_checkpoint_state(FLAGS.checkpoint_dir)
            if ckpt and ckpt.model_checkpoint_path:
                saver.restore(sess, ckpt.model_checkpoint_path)
                global_step = ckpt.model_checkpoint_path.split('/')[-1].split('-')[-1]
                print('Successfully loaded model from %s at step=%s.' %
                    (ckpt.model_checkpoint_path, global_step))
            else:
                print('No checkpoint file found at %s' % FLAGS.checkpoint_dir)
                return

            # Export model
            print('Exporting trained model to %s' % FLAGS.export_dir)
            init_op = tf.group(tf.initialize_all_tables(), name='init_op')
            model_exporter = exporter.Exporter(saver)

            classification_signature = None  # <-- Unsure what should be assigned here

            named_graph_signature = None  # <-- Unsure what should be assigned here

            model_exporter.init(
                init_op=init_op,
                default_graph_signature=classification_signature,
                named_graph_signatures=named_graph_signature)

            model_exporter.export(FLAGS.export_dir, tf.constant(global_step), sess)
            print('Successfully exported model to %s' % FLAGS.export_dir)
    except Exception:
        err = sys.exc_info()
        print('Unexpected error:', err[0], ' - ', err[1])


def main(_):
    Export()

if __name__ == "__main__":
    tf.app.run()
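As a side note on the script above, the global_step recovery is plain string manipulation on the checkpoint path, which TensorFlow writes as `<name>-<step>`; the path below is just a made-up example:

```python
# Checkpoint paths look like "log_root/model.ckpt-112300"; the step number is
# everything after the last "-" in the final path component.
model_checkpoint_path = 'log_root/model.ckpt-112300'  # hypothetical path
global_step = model_checkpoint_path.split('/')[-1].split('-')[-1]
print(global_step)  # -> 112300
```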

0 Answers:

There are no answers yet.