How to use a restored meta graph in TensorFlow when the graph is fed by a TFRecord input pipeline (no placeholders)

Asked: 2017-06-26 21:42:50

Tags: tensorflow

I trained a network using a TFRecord input pipeline. In other words, there are no placeholders. A simplified example:

input, truth = _get_next_batch()  # TFRecord. `input` is not a tf.placeholder
net = Model(input)
net.set_loss(truth)
optimizer = tf...(net.loss)

Say I end up with three checkpoint files: ckpt-20000.meta, ckpt-20000.data-00000-of-00001, and ckpt-20000.index. I understand that the .meta file can later be used to import the meta graph and access saved tensors, e.g.:

new_saver = tf.train.import_meta_graph('ckpt-20000.meta')
new_saver.restore(sess, 'ckpt-20000')
logits = tf.get_collection("logits")[0]

However, the meta graph contains no placeholders, because the pipeline fed the graph from the start. Is there a way to use the meta graph and run inference on an input I supply at query time?

For reference, in the inference application (or script) I used to redefine the model with a placeholder and restore the trained weights into it (see below). I am wondering whether I can use the meta graph directly without redefining the model, since that would be much simpler.

input = tf.placeholder(...)
net = Model(input)
saver = tf.train.Saver()
saver.restore(sess, 'ckpt-20000')
lgt = sess.run(net.logits, feed_dict = {input:img})

3 Answers:

Answer 0 (score: 9)

You can build the graph with placeholder_with_default() as its input, so that it works with both the TFRecord input pipeline and feed_dict.

An example:

input, truth = _get_next_batch()
_x = tf.placeholder_with_default(input, shape=[...], name='input')
_y = tf.placeholder_with_default(truth, shape=[...], name='label')

net = Model(_x)
net.set_loss(_y)
optimizer = tf...(net.loss)

Then, during inference:

loaded_graph = tf.Graph()
with tf.Session(graph=loaded_graph) as sess:
  new_saver = tf.train.import_meta_graph('ckpt-20000.meta')
  new_saver.restore(sess, 'ckpt-20000')

  # Get the tensors by their variable name
  input = loaded_graph.get_tensor_by_name('input:0')
  logits = loaded_graph.get_tensor_by_name(...)

  # Now you can feed the inputs to your tensors
  lgt = sess.run(logits, feed_dict = {input:img})

In the example above, if you do not feed input, it is read from the TFRecord input pipeline instead.
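A minimal, runnable sketch of this fallback behavior, using tf.compat.v1 and a constant standing in for the real TFRecord batch (the names and values here are illustrative, not from the original question):

```python
import tensorflow.compat.v1 as tf

tf.disable_eager_execution()

# A constant standing in for the batch produced by the TFRecord pipeline.
pipeline_batch = tf.constant([[1.0, 2.0]])
x = tf.placeholder_with_default(pipeline_batch, shape=[None, 2], name='input')
y = x * 10.0

with tf.Session() as sess:
    from_pipeline = sess.run(y)                           # no feed: default used
    from_feed = sess.run(y, feed_dict={x: [[3.0, 4.0]]})  # feed overrides it

print(from_pipeline)  # [[10. 20.]]
print(from_feed)      # [[30. 40.]]
```

The same tensor thus serves both roles: during training it passes the pipeline batch through untouched, and at query time it accepts a feed_dict.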

Answer 1 (score: 4)

Is there a way to test without a placeholder at all? It should be possible to reuse the graph with a new input pipeline rather than feeding through a slow placeholder (the test dataset may be very large). In that case, placeholder_with_default is a suboptimal solution.
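One way to do what this answer asks, reusing the saved graph with a new input and no placeholder_with_default, is the input_map argument of tf.train.import_meta_graph, which remaps a tensor in the saved graph to a tensor in the importing graph. A self-contained toy sketch (the op names pipeline_batch and logits, and the tiny model, are assumptions for illustration; new_input could just as well be the output of a tf.data iterator over the test set):

```python
import os
import tempfile

import tensorflow.compat.v1 as tf

tf.disable_eager_execution()

ckpt = os.path.join(tempfile.mkdtemp(), 'toy')

# Toy "training" graph: the input comes from a pipeline-like op, no placeholder.
train_graph = tf.Graph()
with train_graph.as_default():
    batch = tf.constant([[1.0, 2.0]], name='pipeline_batch')
    w = tf.Variable(3.0, name='w')
    logits = tf.multiply(batch, w, name='logits')
    saver = tf.train.Saver()
    with tf.Session() as sess:
        sess.run(tf.global_variables_initializer())
        saver.save(sess, ckpt)  # writes toy.meta / toy.index / toy.data-*

# Inference graph: remap the saved pipeline tensor to a new input via input_map.
infer_graph = tf.Graph()
with infer_graph.as_default():
    new_input = tf.placeholder(tf.float32, shape=[None, 2], name='new_input')
    new_saver = tf.train.import_meta_graph(
        ckpt + '.meta', input_map={'pipeline_batch:0': new_input})
    logits = infer_graph.get_tensor_by_name('logits:0')
    with tf.Session() as sess:
        new_saver.restore(sess, ckpt)
        out = sess.run(logits, feed_dict={new_input: [[5.0, 6.0]]})

print(out)  # [[15. 18.]]
```

The saved graph is imported unchanged except that every consumer of pipeline_batch:0 now reads from new_input, so the old pipeline op is simply never evaluated.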

Answer 2 (score: 0)

The recommended approach is to save two meta graphs: one for training/validation/testing, and another for inference.

See Building a SavedModel:

export_dir = ...
...
builder = tf.saved_model.builder.SavedModelBuilder(export_dir)
with tf.Session(graph=tf.Graph()) as sess:
  ...
  builder.add_meta_graph_and_variables(sess,
                                       [tag_constants.TRAINING],
                                       signature_def_map=foo_signatures,
                                       assets_collection=foo_assets)
...
# Add a second MetaGraphDef for inference.
with tf.Session(graph=tf.Graph()) as sess:
  ...
  builder.add_meta_graph([tag_constants.SERVING])
...
builder.save()
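For completeness, here is a self-contained round trip under tf.compat.v1 showing how a graph exported this way is loaded back for inference with tf.saved_model.loader.load (the toy model, the names x/y/w, and the export directory are illustrative assumptions, not part of the answer above):

```python
import os
import tempfile

import tensorflow.compat.v1 as tf

tf.disable_eager_execution()

export_dir = os.path.join(tempfile.mkdtemp(), 'toy_savedmodel')

# Export: a toy graph with one variable, tagged for serving.
with tf.Graph().as_default(), tf.Session() as sess:
    x = tf.placeholder(tf.float32, shape=[None], name='x')
    w = tf.Variable(2.0, name='w')
    y = tf.multiply(x, w, name='y')
    sess.run(tf.global_variables_initializer())
    builder = tf.saved_model.builder.SavedModelBuilder(export_dir)
    builder.add_meta_graph_and_variables(
        sess, [tf.saved_model.tag_constants.SERVING])
    builder.save()

# Inference: load only the SERVING meta graph and feed the placeholder.
with tf.Graph().as_default() as graph, tf.Session() as sess:
    tf.saved_model.loader.load(
        sess, [tf.saved_model.tag_constants.SERVING], export_dir)
    x = graph.get_tensor_by_name('x:0')
    y = graph.get_tensor_by_name('y:0')
    out = sess.run(y, feed_dict={x: [1.0, 3.0]})

print(out)  # [2. 6.]
```

Because the serving meta graph is saved separately, the inference process never sees the training-only ops (input pipeline, loss, optimizer).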

The NMT tutorial also gives a detailed example of creating multiple graphs with shared variables: Neural Machine Translation (seq2seq) Tutorial - Building Training, Eval, and Inference Graphs:

train_graph = tf.Graph()
eval_graph = tf.Graph()
infer_graph = tf.Graph()

with train_graph.as_default():
  train_iterator = ...
  train_model = BuildTrainModel(train_iterator)
  initializer = tf.global_variables_initializer()

with eval_graph.as_default():
  eval_iterator = ...
  eval_model = BuildEvalModel(eval_iterator)

with infer_graph.as_default():
  infer_iterator, infer_inputs = ...
  infer_model = BuildInferenceModel(infer_iterator)

checkpoints_path = "/tmp/model/checkpoints"

train_sess = tf.Session(graph=train_graph)
eval_sess = tf.Session(graph=eval_graph)
infer_sess = tf.Session(graph=infer_graph)

train_sess.run(initializer)
train_sess.run(train_iterator.initializer)

for i in itertools.count():

  train_model.train(train_sess)

  if i % EVAL_STEPS == 0:
    checkpoint_path = train_model.saver.save(train_sess, checkpoints_path, global_step=i)
    eval_model.saver.restore(eval_sess, checkpoint_path)
    eval_sess.run(eval_iterator.initializer)
    while data_to_eval:
      eval_model.eval(eval_sess)

  if i % INFER_STEPS == 0:
    checkpoint_path = train_model.saver.save(train_sess, checkpoints_path, global_step=i)
    infer_model.saver.restore(infer_sess, checkpoint_path)
    infer_sess.run(infer_iterator.initializer, feed_dict={infer_inputs: infer_input_data})
    while data_to_infer:
      infer_model.infer(infer_sess)