Using TensorFlow, I am trying to resume CIFAR10 training from a checkpoint file. Following some other posts, I tried `tf.train.Saver().restore`, without success. Can someone tell me how to proceed?

Code snippet from the TensorFlow CIFAR10 example:
```python
def train():
    # Build the graph, as in cifar10_train.py.
    global_step = tf.Variable(0, trainable=False)
    images, labels = cifar10.distorted_inputs()
    logits = cifar10.inference(images)
    loss = cifar10.loss(logits, labels)
    train_op = cifar10.train(loss, global_step)
    saver = tf.train.Saver(tf.all_variables())
    summary_op = tf.merge_all_summaries()
    init = tf.initialize_all_variables()
    sess = tf.Session(config=tf.ConfigProto(
        log_device_placement=FLAGS.log_device_placement))
    sess.run(init)
    print("FLAGS.checkpoint_dir is %s" % FLAGS.checkpoint_dir)
    if FLAGS.checkpoint_dir is None:
        # Start the queue runners.
        tf.train.start_queue_runners(sess=sess)
        summary_writer = tf.train.SummaryWriter(FLAGS.train_dir, sess.graph)
    else:
        # Restore from the checkpoint file.
        ckpt = tf.train.get_checkpoint_state(FLAGS.checkpoint_dir)
        tf.train.Saver().restore(sess, ckpt.model_checkpoint_path)
        # cur_step prints the checkpointed variable value correctly.
        cur_step = sess.run(global_step)
        print("current step is %s" % cur_step)
        for step in xrange(cur_step, FLAGS.max_steps):
            start_time = time.time()
            # **It gets stuck at this call**
            _, loss_value = sess.run([train_op, loss])
            # below same as original
```
Answer 0 (score: 2)

The problem seems to be this line:

    tf.train.start_queue_runners(sess=sess)

...which is only executed when `FLAGS.checkpoint_dir is None`. You still need to start the queue runners when restoring from a checkpoint.

Note that I would recommend starting the queue runners *after* creating the `tf.train.Saver` (due to a race condition in the published version of the code), so a better structure would be:
```python
if FLAGS.checkpoint_dir is not None:
    # Restore from the checkpoint file.
    ckpt = tf.train.get_checkpoint_state(FLAGS.checkpoint_dir)
    tf.train.Saver().restore(sess, ckpt.model_checkpoint_path)

# Start the queue runners.
tf.train.start_queue_runners(sess=sess)

# ...

for step in xrange(cur_step, FLAGS.max_steps):
    start_time = time.time()
    _, loss_value = sess.run([train_op, loss])

# ...
```
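The key idea in the fix is the resume pattern itself: persist a step counter, restore it on restart, and continue the training loop from the restored value rather than from zero. Here is a minimal TensorFlow-free sketch of that pattern in plain Python (the helpers `save_step` and `restore_step` are made up for illustration; they are not part of any TensorFlow API):

```python
import json
import os
import tempfile

def save_step(path, step):
    # Persist the current global step, analogous to saver.save().
    with open(path, "w") as f:
        json.dump({"global_step": step}, f)

def restore_step(path):
    # Restore the saved step if a checkpoint exists, else start at 0,
    # analogous to get_checkpoint_state() + saver.restore().
    if os.path.exists(path):
        with open(path) as f:
            return json.load(f)["global_step"]
    return 0

ckpt_path = os.path.join(tempfile.mkdtemp(), "ckpt.json")
save_step(ckpt_path, 40)                # simulate a previous run stopping at step 40
cur_step = restore_step(ckpt_path)      # restored counter, like sess.run(global_step)
max_steps = 50
steps_run = list(range(cur_step, max_steps))  # loop resumes at the restored step
```

The CIFAR10 code above does the same thing: `sess.run(global_step)` recovers the counter from the checkpoint, and `xrange(cur_step, FLAGS.max_steps)` continues from there instead of repeating finished steps.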