I'm trying to generate text with an LSTM that I trained earlier. I found an existing solution, but the problem is that it throws some exceptions. As far as I can tell, that's because it was written against an older version of the library. After a few fixes, this is my final text-generation function:
import numpy as np
import tensorflow as tf

# get_config, PTBInput, PTBModel, reader and sample are project-specific
# helpers (these names match TensorFlow's PTB language-model example).

def generate_text(train_path, num_sentences, rnn_data):
    gen_config = get_config()
    gen_config.num_steps = 1   # feed one word at a time
    gen_config.batch_size = 1  # generate a single sequence

    with tf.Graph().as_default(), tf.Session() as session:
        initializer = tf.random_uniform_initializer(-gen_config.init_scale,
                                                    gen_config.init_scale)
        with tf.name_scope("Generate"):
            rnn_input = PTBInput(config=gen_config, data=rnn_data, name="GenOut")
            with tf.variable_scope("OutModel", reuse=None, initializer=initializer):
                mout = PTBModel(is_training=False, config=gen_config, input_=rnn_input)

        # Restore variables from disk. TODO: save/load trained models
        # saver = tf.train.Saver()
        # saver.restore(session, model_path)
        # print("Model restored from file " + model_path)

        print('Getting Vocabulary')
        words = reader.get_vocab(train_path)

        mout.initial_state = tf.convert_to_tensor(mout.initial_state)
        state = mout.initial_state.eval()
        # state = session.run(mout.initial_state)

        x = 0  # the id for '<eos>' from the training set //TODO: fix this
        word_input = np.matrix([[x]])  # a 2D numpy matrix

        text = ""
        count = 0
        while count < num_sentences:
            output_probs, state = session.run([mout.output_probs, mout.final_state],
                                              {mout.input.input_data: word_input,
                                               mout.initial_state: state})
            print('Output Probs = ' + str(output_probs[0]))
            x = sample(output_probs[0], 0.9)
            if words[x] == "<eos>":
                text += ".\n\n"
                count += 1
            else:
                text += " " + words[x]
            # now feed this new word as input into the next iteration
            word_input = np.matrix([[x]])

        print(text)
    return
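Here sample is not a library function; a minimal temperature-sampling helper consistent with the call sample(output_probs[0], 0.9) above might look like this (the exact behavior of the original helper is an assumption):

import numpy as np

def sample(probs, temperature=1.0):
    # Assumed behavior: reweight the probability vector by temperature,
    # renormalize, and draw a single word id from the result.
    probs = np.asarray(probs, dtype=np.float64).ravel()
    logits = np.log(probs + 1e-12) / temperature
    weights = np.exp(logits - np.max(logits))
    weights /= weights.sum()
    return int(np.random.choice(len(weights), p=weights))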
But when I run it, I get an exception:
FailedPreconditionError (see above for traceback): Attempting to use uninitialized value OutModel/softmax_b
[[Node: OutModel/softmax_b/read = Identity[T=DT_FLOAT, _class=["loc:@OutModel/softmax_b"], _device="/job:localhost/replica:0/task:0/cpu:0"](OutModel/softmax_b)]]
How can I fix this? Are there any other problems in my code?
Answer 0 (score: 0)
The problem is an uninitialized variable. You can solve it either by initializing each variable individually or by using the helper tf.global_variables_initializer().
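In TensorFlow 1.x that means running the initializer op once, after the graph is built and before the first session.run() that reads model variables. A minimal sketch of the fix inside generate_text:

# right after building mout, inside the `with ... tf.Session()` block:
session.run(tf.global_variables_initializer())

Note, however, that this sets softmax_b (and every other weight) to fresh random values, so the generated text will be gibberish. Since the goal is to sample from a previously trained LSTM, what you most likely want instead is to un-comment the saver code in the question and restore the trained weights, which also leaves no variable uninitialized:

saver = tf.train.Saver()
saver.restore(session, model_path)  # model_path: your training checkpoint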