tensorflow: shared variable error in a simple LSTM network

Date: 2016-04-29 14:48:51

Tags: python tensorflow neural-network lstm

I am trying to build the simplest possible LSTM network. I just want it to predict the next value in the sequence np_input_data.

import tensorflow as tf
from tensorflow.python.ops import rnn_cell
import numpy as np

num_steps = 3
num_units = 1
np_input_data = [np.array([[1.],[2.]]), np.array([[2.],[3.]]), np.array([[3.],[4.]])]

batch_size = 2

graph = tf.Graph()

with graph.as_default():
    tf_inputs = [tf.placeholder(tf.float32, [batch_size, 1]) for _ in range(num_steps)]

    lstm = rnn_cell.BasicLSTMCell(num_units)
    initial_state = state = tf.zeros([batch_size, lstm.state_size])
    loss = 0

    for i in range(num_steps-1):
        output, state = lstm(tf_inputs[i], state)
        loss += tf.reduce_mean(tf.square(output - tf_inputs[i+1]))

with tf.Session(graph=graph) as session:
    tf.initialize_all_variables().run()

    feed_dict={tf_inputs[i]: np_input_data[i] for i in range(len(np_input_data))}

    loss = session.run(loss, feed_dict=feed_dict)

    print(loss)

The interpreter returns:

ValueError: Variable BasicLSTMCell/Linear/Matrix already exists, disallowed. Did you mean to set reuse=True in VarScope? Originally defined at:
    output, state = lstm(tf_inputs[i], state)

What am I doing wrong?

3 Answers:

Answer 0 (score: 5):

The call to lstm here:

for i in range(num_steps-1):
  output, state = lstm(tf_inputs[i], state)

will try to create variables with the same name on each iteration, unless you tell it otherwise. You can do this using tf.variable_scope:
with tf.variable_scope("myrnn") as scope:
  for i in range(num_steps-1):
    if i > 0:
      scope.reuse_variables()
    output, state = lstm(tf_inputs[i], state)     

The first iteration creates the variables that represent your LSTM parameters, and every subsequent iteration (after the call to reuse_variables) looks them up in the scope by name.
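
Applied to the graph-building loop from the question, the fix could look like this. This is only a sketch using the same TF 0.x-era rnn_cell import as in the question; the only change is the variable scope wrapped around the loop:

with graph.as_default():
    tf_inputs = [tf.placeholder(tf.float32, [batch_size, 1]) for _ in range(num_steps)]

    lstm = rnn_cell.BasicLSTMCell(num_units)
    initial_state = state = tf.zeros([batch_size, lstm.state_size])
    loss = 0

    with tf.variable_scope("myrnn") as scope:
        for i in range(num_steps - 1):
            if i > 0:
                # Reuse the LSTM weight variables created on the first iteration
                scope.reuse_variables()
            output, state = lstm(tf_inputs[i], state)
            loss += tf.reduce_mean(tf.square(output - tf_inputs[i + 1]))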

Answer 1 (score: 5):

I ran into a similar issue in TensorFlow v1.0.1 using tf.nn.dynamic_rnn. It turned out the error only appeared if I had to retrain, or cancel and restart, my training process partway through. Basically the graph was not being reset.

Long story short: put a tf.reset_default_graph() at the start of your code and it should help, at least when using tf.nn.dynamic_rnn and retraining.
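
For example, when re-running the training script in the same Python process (e.g. in a notebook session), a minimal sketch would be:

import tensorflow as tf

# Discard any graph left over from a previous (possibly interrupted) run,
# so variables such as BasicLSTMCell/Linear/Matrix can be created fresh.
tf.reset_default_graph()

# ... build the model and train as usual from here ...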

Answer 2 (score: 1):

Use tf.nn.rnn or tf.nn.dynamic_rnn, which do this, and many other nice things, for you.
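
As a sketch of that approach against the question's code, assuming the TF 0.x-era tf.nn.rnn that takes a list of per-time-step tensors (in later versions this API became tf.nn.static_rnn / tf.nn.dynamic_rnn):

with graph.as_default():
    tf_inputs = [tf.placeholder(tf.float32, [batch_size, 1]) for _ in range(num_steps)]

    lstm = rnn_cell.BasicLSTMCell(num_units)

    # tf.nn.rnn unrolls the cell over the input list and handles
    # variable creation and reuse across time steps internally.
    outputs, final_state = tf.nn.rnn(lstm, tf_inputs[:-1], dtype=tf.float32)

    loss = 0
    for i, output in enumerate(outputs):
        loss += tf.reduce_mean(tf.square(output - tf_inputs[i + 1]))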