I have a stacked MultiRNNCell defined as follows:
import tensorflow as tf

# embedding_matrix and ques_placeholder are assumed to be defined elsewhere
batch_size = 256
rnn_size = 512
keep_prob = 0.5

lstm_1 = tf.nn.rnn_cell.LSTMCell(rnn_size)
lstm_dropout_1 = tf.nn.rnn_cell.DropoutWrapper(lstm_1, output_keep_prob=keep_prob)
lstm_2 = tf.nn.rnn_cell.LSTMCell(rnn_size)
lstm_dropout_2 = tf.nn.rnn_cell.DropoutWrapper(lstm_2, output_keep_prob=keep_prob)
stacked_lstm = tf.nn.rnn_cell.MultiRNNCell([lstm_dropout_1, lstm_dropout_2])

rnn_inputs = tf.nn.embedding_lookup(embedding_matrix, ques_placeholder)
init_state = stacked_lstm.zero_state(batch_size, tf.float32)
rnn_outputs, final_state = tf.nn.dynamic_rnn(stacked_lstm, rnn_inputs, initial_state=init_state)
In this code there are two RNN layers. I just want to process the final state of this dynamic RNN. I expected the state to be a 2D tensor of shape [batch_size, rnn_size*2].
Instead, the shape of final_state is 4D: [2, 2, 256, 512].
Can someone explain why I get this? Also, how can I process this tensor so that I can pass it through a fully connected layer?
Answer 0 (score: 1)
I cannot reproduce the [2, 2, 256, 512] shape. But with this code:
import tensorflow as tf

rnn_size = 512
batch_size = 256
time_size = 5
input_size = 2
keep_prob = 0.5

lstm_1 = tf.nn.rnn_cell.LSTMCell(rnn_size)
lstm_dropout_1 = tf.nn.rnn_cell.DropoutWrapper(lstm_1, output_keep_prob=keep_prob)
lstm_2 = tf.nn.rnn_cell.LSTMCell(rnn_size)
stacked_lstm = tf.nn.rnn_cell.MultiRNNCell([lstm_dropout_1, lstm_2])

# Shape of rnn_inputs is (batch_size, time_size, input_size)
rnn_inputs = tf.placeholder(tf.float32, shape=[None, time_size, input_size])
init_state = stacked_lstm.zero_state(batch_size, tf.float32)
rnn_outputs, final_state = tf.nn.dynamic_rnn(stacked_lstm, rnn_inputs, initial_state=init_state)

print(rnn_outputs)
print(final_state)
I get the correct shape for rnn_outputs, namely (batch_size, time_size, rnn_size):
Tensor("rnn/transpose_1:0", shape=(256, 5, 512), dtype=float32)
And final_state is indeed a pair of LSTMStateTuple (one for each of the two cells, lstm_dropout_1 and lstm_2):
(LSTMStateTuple(c=<tf.Tensor 'rnn/while/Exit_3:0' shape=(256, 512) dtype=float32>, h=<tf.Tensor 'rnn/while/Exit_4:0' shape=(256, 512) dtype=float32>),
LSTMStateTuple(c=<tf.Tensor 'rnn/while/Exit_5:0' shape=(256, 512) dtype=float32>, h=<tf.Tensor 'rnn/while/Exit_6:0' shape=(256, 512) dtype=float32>))
As stated in the docstring of tf.nn.dynamic_rnn:
# 'outputs' is a tensor of shape [batch_size, max_time, 256]
# 'state' is a N-tuple where N is the number of LSTMCells containing a
# tf.contrib.rnn.LSTMStateTuple for each cell
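To actually feed this into a fully connected layer (the second part of the question), you can index into the tuple instead of treating it as a single 4D tensor. Here is a minimal sketch, assuming the stacked_lstm / final_state variables from the code above; fc_size and the use of tf.layers.dense are only illustrative choices:

# final_state is a tuple with one LSTMStateTuple per layer; -1 is the top layer
top_c, top_h = final_state[-1]             # each has shape (batch_size, rnn_size)

# Option 1: use only the top layer's hidden state h -> (batch_size, rnn_size)
fc_input = top_h

# Option 2: concatenate c and h to get the (batch_size, rnn_size * 2) tensor
# the question expected
fc_input = tf.concat([top_c, top_h], axis=1)

fc_size = 128                               # hypothetical output size
logits = tf.layers.dense(fc_input, fc_size)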
Answer 1 (score: 0)
Not enough reputation to comment. The final state is indexed as:
[depth, LSTMStateTuple (c and h), batch_size, rnn_size]
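In other words, the 4D shape reported in the question comes from indexing the layer first and then the c/h pair of each LSTMStateTuple. A short sketch of what that looks like, assuming the final_state from the code above:

# final_state[layer] is an LSTMStateTuple; .c and .h each have shape (batch_size, rnn_size)
layer_0_cell_state = final_state[0].c   # first LSTM layer, cell state
layer_1_hidden     = final_state[1].h   # second (top) LSTM layer, hidden state

# Converting the whole nested tuple to an array is what yields shape (2, 2, 256, 512):
# (num_layers, [c, h], batch_size, rnn_size)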