How to get the global_step in a MonitoredTrainingSession?

Asked: 2017-12-29 12:22:28

Tags: tensorflow distributed-computing

I am running a distributed MNIST model in distributed TensorFlow. I want to monitor the evolution of the global_step "manually" for debugging purposes. What is the best and cleanest way to get the global step in a distributed TensorFlow setup?

My code:

 ...

with tf.device(device):
  images = tf.placeholder(tf.float32, [None, 784], name='image_input')
  labels = tf.placeholder(tf.float32, [None], name='label_input')
  data = read_data_sets(FLAGS.data_dir,
          one_hot=False,
          fake_data=False)
  logits = mnist.inference(images, FLAGS.hidden1, FLAGS.hidden2)
  loss = mnist.loss(logits, labels)
  loss = tf.Print(loss, [loss], message="Loss = ")
  train_op = mnist.training(loss, FLAGS.learning_rate)

hooks=[tf.train.StopAtStepHook(last_step=FLAGS.nb_steps)]

with tf.train.MonitoredTrainingSession(
    master=target,
    is_chief=(FLAGS.task_index == 0),
    checkpoint_dir=FLAGS.log_dir,
    hooks = hooks) as sess:


  while not sess.should_stop():
    xs, ys = data.train.next_batch(FLAGS.batch_size, fake_data=False)
    sess.run([train_op], feed_dict={images:xs, labels:ys})

    global_step_value = # ... what is the clean way to get this variable

1 Answer:

Answer 0 (score: 0)

In general, it is good practice to create the global step variable during graph construction, e.g. global_step = tf.Variable(0, trainable=False, name='global_step'), or more idiomatically with tf.train.get_or_create_global_step(). You can then easily retrieve the global step with graph.get_tensor_by_name("global_step:0") (or tf.train.get_global_step()) and evaluate it with sess.run.