Unable to load model checkpoint to resume training, TensorSliceReader constructor failed: Failed to find any matching files

Asked: 2019-03-31 17:25:20

Tags: tensorflow neural-network deep-learning recurrent-neural-network

I am trying to load a model so that I can continue training it, but I get the following error:

    NotFoundError: Unsuccessful TensorSliceReader constructor: Failed to find any matching files for ./drive/My Drive/DLSRL/Model/

    [[Node: save/RestoreV2_81 = RestoreV2[dtypes=[DT_FLOAT], _device="/job:localhost/replica:0/task:0/cpu:0"](_arg_save/Const_0_0, save/RestoreV2_81/tensor_names, save/RestoreV2_81/shape_and_slices)]]

    [[Node: save/RestoreV2_3/_189 = _Recv[client_terminated=false, recv_device="/job:localhost/replica:0/task:0/gpu:0", send_device="/job:localhost/replica:0/task:0/cpu:0", send_device_incarnation=1, tensor_name="edge_396_save/RestoreV2_3", tensor_type=DT_FLOAT, _device="/job:localhost/replica:0/task:0/gpu:0"]]

    Caused by op 'save/RestoreV2_81', defined at:
      File "/usr/lib/python3.6/runpy.py", line 193, in _run_module_as_main
        "__main__", mod_spec)

My folder structure: the Model folder contains checkpoint, 03-27-09-15_epoch_29.ckpt.data-00000-of-00001, 03-27-09-15_epoch_29.ckpt.index, and 03-27-09-15_epoch_29.ckpt.meta.

Here is the code:

    saver = tf.train.import_meta_graph('./drive/My Drive/DLSRL/Model/03-27-09-15_epoch_39.ckpt.meta')
    g = tf.get_default_graph()
    with g.as_default():
      model = Model(config, embeddings, label_dict.size(), g)
      sess = tf.Session(graph=g, config=tf.ConfigProto(allow_soft_placement=True,
                                                       log_device_placement=False))
      saver.restore(sess,'./drive/My Drive/DLSRL/Model/')
      #sess.run(tf.global_variables_initializer())
      ckpt_saver = tf.train.Saver(max_to_keep=config.max_epochs)
      for epoch in range(39,config.max_epochs):
          # save checkpoint from which to load model
          path = runs_dir / "{}_epoch_{}.ckpt".format(time_of_init, epoch)
          ckpt_saver.save(sess, str(path))
          print('Saved checkpoint.')
          evaluate(dev_data, model, sess, epoch, global_step)
          x1, x2, y = shuffle_stack_pad(train_data, config.train_batch_size)
          epoch_start = time.time()
          for x1_b, x2_b, y_b in get_batches(x1, x2, y, config.train_batch_size):
              feed_dict = make_feed_dict(x1_b, x2_b, y_b, model, config.keep_prob)
              if epoch_step % LOSS_INTERVAL == 0:
                  # tensorboard
                  run_options = tf.RunOptions(trace_level=tf.RunOptions.NO_TRACE)
                  scalar_summaries = sess.run(model.scalar_summaries,
                                     feed_dict=feed_dict,
                                     options=run_options)
                  model.train_writer.add_summary(scalar_summaries, global_step)
                  # print info
                  print("step {:>6} epoch {:>3}: loss={:1.3f}, epoch sec={:3.0f}, total hrs={:.1f}".format(
                      epoch_step,
                      epoch,
                      epoch_loss_sum / max(epoch_step, 1),
                      (time.time() - epoch_start),
                      (time.time() - global_start) / 3600))
              loss, _ = sess.run([model.nonzero_mean_loss, model.update], feed_dict=feed_dict)

              epoch_loss_sum += loss
              epoch_step += 1
              global_step += 1
          epoch_step = 0
          epoch_loss_sum = 0.0

Can you suggest a fix?

1 Answer:

Answer 0 (score: 1)

You did not specify which checkpoint to restore. Change it to:

saver.restore(sess, tf.train.latest_checkpoint('./drive/My Drive/DLSRL/Model/'))
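
For reference, tf.train.latest_checkpoint reads the checkpoint index file in the given directory and returns the full prefix of the most recent checkpoint (or None if no index file is found), which is what saver.restore expects rather than a bare directory path. Below is a minimal sketch of the restore step, assuming the saver and sess from the question are already built; the model_dir variable is only for illustration:

    model_dir = './drive/My Drive/DLSRL/Model/'

    # Returns e.g. './drive/My Drive/DLSRL/Model/03-27-09-15_epoch_29.ckpt',
    # or None if the directory has no 'checkpoint' index file.
    ckpt_prefix = tf.train.latest_checkpoint(model_dir)

    if ckpt_prefix is not None:
        saver.restore(sess, ckpt_prefix)
    else:
        # Fall back to an explicit prefix (without the .data-*/.index/.meta suffix).
        saver.restore(sess, model_dir + '03-27-09-15_epoch_29.ckpt')

Note that the folder listing in the question shows 03-27-09-15_epoch_29.ckpt files while the meta graph is imported for epoch 39, so it is worth confirming that the checkpoint prefix you restore actually exists on disk.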