I am trying to restore the graph of a model I trained with the TensorFlow tutorials, and then restore the model itself:
import tensorflow as tf

import reader
from ptb_word_lm import PTBInput, PTBModel, get_config, run_epoch


def main(_):
    checkpoint_path = "/Users/roger/data/ptb_out"
    checkpoint_path = tf.train.latest_checkpoint(checkpoint_path)

    raw_data = reader.ptb_raw_data("/Users/roger/data/simple-examples/small_data")
    train_data, valid_data, test_data, _ = raw_data

    config = get_config()
    eval_config = get_config()
    eval_config.batch_size = 1
    eval_config.num_steps = 1

    with tf.Session() as session:
        initializer = tf.random_uniform_initializer(-config.init_scale,
                                                    config.init_scale)
        saver = tf.train.import_meta_graph(checkpoint_path + ".meta")
        saver.restore(session, checkpoint_path)

        with tf.name_scope("Test"):
            test_input = PTBInput(config=eval_config, data=test_data, name="TestInput")
            with tf.variable_scope("Model", reuse=True, initializer=initializer):
                mtest = PTBModel(is_training=False, config=eval_config,
                                 input_=test_input)

        test_perplexity = run_epoch(session, mtest)
        print("Test Perplexity: %.3f" % test_perplexity)


if __name__ == "__main__":
    tf.app.run()
However, I find that the variable Model/embedding created here is not restored into the graph, so I get an error like this:
ValueError: Variable Model/embedding does not exist, or was not created with tf.get_variable(). Did you mean to set reuse=None in VarScope?
So how can I restore the model correctly?
Answer 0 (score: 0):
I think that, because you set reuse=True on the variable scope, it tries to look up the variable when you call PTBModel() instead of creating it. When get_variable() is called inside a scope with reuse=True, it will never create a variable.
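For illustration, here is a minimal sketch of the usual pattern, assuming the same PTBInput, PTBModel, get_config, run_epoch helpers, the same data and checkpoint paths as in your code, and that the checkpoint stores its variables under the "Model" variable scope (as the tutorial's training script does): build the evaluation graph first, so that get_variable() actually creates Model/embedding and the other variables, and only then restore the checkpoint values into them with tf.train.Saver, instead of import_meta_graph followed by a reuse=True rebuild.

# A sketch under the assumptions above; paths and helper names are taken
# from the question and may need adjusting for your setup.
import tensorflow as tf

import reader
from ptb_word_lm import PTBInput, PTBModel, get_config, run_epoch

raw_data = reader.ptb_raw_data("/Users/roger/data/simple-examples/small_data")
_, _, test_data, _ = raw_data

eval_config = get_config()
eval_config.batch_size = 1
eval_config.num_steps = 1

with tf.Graph().as_default():
    initializer = tf.random_uniform_initializer(-eval_config.init_scale,
                                                eval_config.init_scale)
    with tf.name_scope("Test"):
        test_input = PTBInput(config=eval_config, data=test_data, name="TestInput")
        # No reuse here: get_variable() creates Model/embedding and the other
        # variables instead of looking them up.
        with tf.variable_scope("Model", reuse=None, initializer=initializer):
            mtest = PTBModel(is_training=False, config=eval_config,
                             input_=test_input)

    # The Saver maps checkpoint tensors onto the variables created above,
    # matching them by name (Model/embedding, Model/..., etc.).
    saver = tf.train.Saver()
    with tf.Session() as session:
        saver.restore(session, tf.train.latest_checkpoint("/Users/roger/data/ptb_out"))
        test_perplexity = run_epoch(session, mtest)
        print("Test Perplexity: %.3f" % test_perplexity)

If you would rather keep import_meta_graph, the other common approach is not to rebuild PTBModel at all, but to fetch the tensors you need from the restored graph by name (e.g. tf.get_default_graph().get_tensor_by_name(...)), since import_meta_graph already recreates the whole graph structure for you.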