The following code (taken from https://github.com/dennybritz/tf-rnn/blob/master/bidirectional_rnn.ipynb):
import tensorflow as tf
import numpy as np
tf.reset_default_graph()
# Create input data
X = np.random.randn(2, 10, 8)
# The second example is of length 6
X[1,6:] = 0
X_lengths = [10, 6]
cell = tf.contrib.rnn.LSTMCell(num_units=64, state_is_tuple=True)
outputs, states = tf.nn.bidirectional_dynamic_rnn(
    cell_fw=cell,
    cell_bw=cell,
    dtype=tf.float64,
    sequence_length=X_lengths,
    inputs=X)
output_fw, output_bw = outputs
states_fw, states_bw = states
gives the following error with tensorflow 1.1 on both python 2.7 and 3.5:
ValueError: Attempt to reuse RNNCell <tensorflow.contrib.rnn.python.ops.core_rnn_cell_impl.LSTMCell object at 0x10ce0c2b0>
with a different variable scope than its first use. First use of cell was with scope
'bidirectional_rnn/fw/lstm_cell', this attempt is with scope 'bidirectional_rnn/bw/lstm_cell'.
Please create a new instance of the cell if you would like it to use a different set of weights.
If before you were using: MultiRNNCell([LSTMCell(...)] * num_layers), change to:
MultiRNNCell([LSTMCell(...) for _ in range(num_layers)]). If before you were using the same cell
instance as both the forward and reverse cell of a bidirectional RNN, simply create two instances
(one for forward, one for reverse). In May 2017, we will start transitioning this cell's behavior to use
existing stored weights, if any, when it is called with scope=None (which can lead to silent model degradation,
so this error will remain until then.)
However, it works in tensorflow 1.0 with python 3.5 (not tested on python 2.7).
I have tried multiple code samples I found online, but tf.nn.bidirectional_dynamic_rnn gives the same error with tensorflow 1.1.
Is this a bug in tensorflow 1.1, or am I missing something?
Answer 0 (score: 1)
Sorry you ran into this. I can confirm that the error appears in 1.1 (docker run -it gcr.io/tensorflow/tensorflow:1.1.0 python) but not in 1.2 RC0 (docker run -it gcr.io/tensorflow/tensorflow:1.2.0-rc0 python).
So it looks like your options for now are 1.2-rc0 or 1.0.1.