How to manipulate the encoder state in a multi-layer bidirectional RNN with an attention mechanism

Date: 2019-01-17 09:06:48

Tags: python tensorflow recurrent-neural-network bidirectional attention-model

I am implementing a Seq2Seq model with a multi-layer bidirectional RNN and an attention mechanism. While following this tutorial https://github.com/tensorflow/nmt I got confused about how to correctly manipulate the encoder_state after the bidirectional layers.

Quoting the tutorial: "For multiple bidirectional layers, we need to manipulate the encoder_state a bit, see model.py, method _build_bidirectional_rnn() for more details." This is the relevant part of the code (https://github.com/tensorflow/nmt/blob/master/nmt/model.py line 770):

encoder_outputs, bi_encoder_state = (
    self._build_bidirectional_rnn(
        inputs=self.encoder_emb_inp,
        sequence_length=sequence_length,
        dtype=dtype,
        hparams=hparams,
        num_bi_layers=num_bi_layers,
        num_bi_residual_layers=num_bi_residual_layers))

if num_bi_layers == 1:
    encoder_state = bi_encoder_state
else:
    # alternatively concat forward and backward states
    encoder_state = []
    for layer_id in range(num_bi_layers):
        encoder_state.append(bi_encoder_state[0][layer_id])  # forward
        encoder_state.append(bi_encoder_state[1][layer_id])  # backward
    encoder_state = tuple(encoder_state)
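
To make this manipulation concrete, here is a small illustration (not from the tutorial, just assuming num_bi_layers = 2) of the structure the loop above produces:

# Illustration only: bi_encoder_state is a pair (fw_states, bw_states),
# each a tuple of num_bi_layers LSTMStateTuples. With num_bi_layers = 2
# the loop interleaves them into one flat tuple:
#
#   encoder_state == (fw_states[0], bw_states[0], fw_states[1], bw_states[1])
#
# i.e. 2 * num_bi_layers LSTMStateTuples in total.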

This is what I have so far:

def get_a_cell(lstm_size):
    lstm = tf.nn.rnn_cell.BasicLSTMCell(lstm_size)
    # drop = tf.nn.rnn_cell.DropoutWrapper(lstm,
    #                                      output_keep_prob=keep_prob)
    return lstm


encoder_FW = tf.nn.rnn_cell.MultiRNNCell(
    [get_a_cell(num_units) for _ in range(num_layers)])
encoder_BW = tf.nn.rnn_cell.MultiRNNCell(
    [get_a_cell(num_units) for _ in range(num_layers)])

bi_outputs, bi_encoder_state = tf.nn.bidirectional_dynamic_rnn(
    encoder_FW, encoder_BW, encoderInput,
    sequence_length=x_lengths, dtype=tf.float32)
encoder_output = tf.concat(bi_outputs, -1)

encoder_state = []

for layer_id in range(num_layers):
    encoder_state.append(bi_encoder_state[0][layer_id])  # forward
    encoder_state.append(bi_encoder_state[1][layer_id])  # backward
encoder_state = tuple(encoder_state)

#DECODER -------------------

decoder_cell = tf.nn.rnn_cell.MultiRNNCell(
    [get_a_cell(num_units) for _ in range(num_layers)])

# Create an attention mechanism
attention_mechanism = tf.contrib.seq2seq.LuongAttention(
    num_units_attention, encoder_output,
    memory_sequence_length=x_lengths)

decoder_cell = tf.contrib.seq2seq.AttentionWrapper(
    decoder_cell, attention_mechanism,
    attention_layer_size=num_units_attention)

decoder_initial_state = decoder_cell.zero_state(batch_size, tf.float32).clone(
    cell_state=encoder_state)

The problem is that I get the error message:

The two structures don't have the same nested structure.

First structure: type=AttentionWrapperState 
str=AttentionWrapperState(cell_state=(LSTMStateTuple(c=, h=), 
LSTMStateTuple(c=, h=)), attention=, time=, alignments=, alignment_history=
(), attention_state=)

Second structure: type=AttentionWrapperState 
str=AttentionWrapperState(cell_state=(LSTMStateTuple(c=, h=), 
LSTMStateTuple(c=, h=), LSTMStateTuple(c=, h=), LSTMStateTuple(c=, h=)), 
attention=, time=, alignments=, alignment_history=(), attention_state=)

This makes sense to me, because we are not including all the layers' outputs but (I guess) only the last layer, while for the states we are actually concatenating all the layers.

So, as I expected, only when I concatenate just the states of the last layer, like this:

encoder_state = []
encoder_state.append(bi_encoder_state[0][num_layers-1])  # forward
encoder_state.append(bi_encoder_state[1][num_layers-1])  # backward
encoder_state = tuple(encoder_state)

does it run without errors.

As far as I can tell, there is no part of their code that transforms the encoder state again before passing it to the attention layer, so how does their code work? And, more importantly, does my fix break the correct behaviour of the attention mechanism?

1 Answer:

Answer 0: (score: 0)

Here is the problem:

Only the encoder is bidirectional, but you are feeding bidirectional states to the decoder (which is always unidirectional).

Solution:

All you have to do is simply concatenate the forward and backward states, so that you are again handling "unidirectional" data!

encoder_state = []

for layer_id in range(num_layers):
    state_fw = bi_encoder_state[0][layer_id]
    state_bw = bi_encoder_state[1][layer_id]

    # Merge the fw state and the bw state
    cell_state = tf.concat([state_fw.c, state_bw.c], 1)
    hidden_state = tf.concat([state_fw.h, state_bw.h], 1)

    # This state has the same structure as a uni-directional encoder state
    state = tf.nn.rnn_cell.LSTMStateTuple(c=cell_state, h=hidden_state)

    encoder_state.append(state)

encoder_state = tuple(encoder_state)
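
One practical consequence of this merge (not spelled out above): since c and h are concatenated along the feature dimension, each merged LSTMStateTuple is now 2 * num_units wide, so the decoder cells have to be built with 2 * num_units as well, otherwise .clone(cell_state=encoder_state) would still fail on the per-layer sizes even though the nested structure matches. A minimal sketch, reusing get_a_cell, num_units, num_layers, batch_size and x_lengths from the question:

# Decoder cells are twice as wide as the encoder cells, because each merged
# state concatenates a forward and a backward state of size num_units.
decoder_cell = tf.nn.rnn_cell.MultiRNNCell(
    [get_a_cell(2 * num_units) for _ in range(num_layers)])

# Luong-style scoring expects the attention depth to match the decoder
# output depth, hence 2 * num_units here as well.
attention_mechanism = tf.contrib.seq2seq.LuongAttention(
    2 * num_units, encoder_output,
    memory_sequence_length=x_lengths)

decoder_cell = tf.contrib.seq2seq.AttentionWrapper(
    decoder_cell, attention_mechanism,
    attention_layer_size=2 * num_units)

# Both structures are now tuples of num_layers LSTMStateTuples with
# matching per-layer sizes, so the clone succeeds.
decoder_initial_state = decoder_cell.zero_state(batch_size, tf.float32).clone(
    cell_state=encoder_state)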