RNN TensorFlow error

Asked: 2017-09-18 15:35:11

Tags: tensorflow recurrent-neural-network

I am trying to apply an RNN built with TensorFlow to my own data. I followed many tutorials on implementing one, all of which use the MNIST data. One tutorial worked for me, but when I tried to run it on my own data I got this error:

ValueError: Attempt to have a second RNNCell use the weights of a variable scope that already has weights: 'rnn/basic_lstm_cell'; and the cell was not constructed as BasicLSTMCell(..., reuse=True).  To share the weights of an RNNCell, simply reuse it in your second calculation, or create a new one with the argument reuse=True.

I have seen many proposed solutions, but none of them worked; adding reuse=True did not fix the problem. Could you explain this error to me, and how to use the function tf.contrib.rnn.BasicLSTMCell?

The RNN function is as follows:

def RNN(X, weights, biases):
    # hidden layer for input to cell
    ########################################
    X = tf.reshape(X, [-1, n_inputs])
    X_in = tf.matmul(X, weights['in']) + biases['in']
    X_in = tf.reshape(X_in, [-1, n_steps, n_hidden_units])

    # cell
    ##########################################
    # basic LSTM cell (tf.nn.rnn_cell moved to tf.contrib.rnn in TF 0.12)
    if int(tf.__version__.split('.')[0]) < 1 and int(tf.__version__.split('.')[1]) < 12:
        cell = tf.nn.rnn_cell.BasicLSTMCell(n_hidden_units, forget_bias=1.0, state_is_tuple=True)
    else:
        cell = tf.contrib.rnn.BasicLSTMCell(n_hidden_units)
    # the LSTM state is a tuple (c_state, h_state)
    init_state = cell.zero_state(batch_size, dtype=tf.float32)

    outputs, final_state = tf.nn.dynamic_rnn(cell, X_in, initial_state=init_state, time_major=False)

    # tf.unpack was renamed tf.unstack in TF 0.12
    if int(tf.__version__.split('.')[0]) < 1 and int(tf.__version__.split('.')[1]) < 12:
        outputs = tf.unpack(tf.transpose(outputs, [1, 0, 2]))
    else:
        outputs = tf.unstack(tf.transpose(outputs, [1, 0, 2]))
    results = tf.matmul(outputs[-1], weights['out']) + biases['out']
    return results
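Incidentally, the version test duplicated inside RNN above is fragile: it parses tf.__version__ twice and compares major and minor fields with separate conditions. It can be factored into one small pure-Python helper; a sketch, where the name uses_legacy_rnn_api is my own invention:

```python
def uses_legacy_rnn_api(version):
    """True for TensorFlow releases before 0.12, where the cell classes
    lived in tf.nn.rnn_cell and tf.unpack had not yet become tf.unstack."""
    major, minor = (int(part) for part in version.split('.')[:2])
    # Tuple comparison orders (major, minor) pairs correctly, e.g.
    # (1, 4) < (0, 12) is False even though 4 < 12.
    return (major, minor) < (0, 12)
```

Calling this once at the top of the file, instead of repeating the raw comparison, keeps the two API branches in sync.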

Edit:

I would like to add a follow-up question. I made some modifications that resolved the error above, and the code now runs, but I am not sure whether it is correct. Here is my correction:

tf.reset_default_graph()
with tf.variable_scope("conv1"):
    cell = tf.contrib.rnn.LSTMCell(n_hidden_units, forget_bias=0.0, state_is_tuple=True,
                                   reuse=tf.get_variable_scope().reuse)
    outputs, final_state = tf.nn.dynamic_rnn(cell, X_in, initial_state=init_state,
                                             time_major=False, scope="conv1")

This part works, but I want to ask whether it is correct. I use this code to compute results on both the training and the validation data. What I observe is that at some point the training loss keeps decreasing while the validation loss increases. [screenshot of the training/validation loss curves omitted]
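On the observation itself: a training loss that keeps falling while the validation loss rises is the classic sign of overfitting, not necessarily a bug in the graph code. One common response is early stopping: remember the epoch with the best validation loss and stop once it has not improved for a few epochs. A plain-Python sketch, where the helper name early_stop_epoch and the patience value are my own:

```python
def early_stop_epoch(val_losses, patience=3):
    """Return the index of the epoch at which training should stop:
    `patience` epochs after the best validation loss so far, or the
    last epoch if validation loss never stalls that long."""
    best, best_epoch = float("inf"), 0
    for epoch, loss in enumerate(val_losses):
        if loss < best:
            best, best_epoch = loss, epoch
        elif epoch - best_epoch >= patience:
            return epoch
    return len(val_losses) - 1
```

In practice one would also save a checkpoint at each new best epoch and restore it after stopping, so the final model is the one with the lowest validation loss.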

0 Answers:

There are no answers yet.