How to add self-attention to a seq2seq model in Keras

Asked: 2021-07-24 12:41:22

Tags: python tensorflow keras attention-model seq2seq

I have this model with a dot-product attention layer; I have commented out that part in the code below. How can I use self-attention instead of the attention layer I currently have? In other words, I want to replace the commented-out section with a self-attention layer.

I'm open to using the keras-self-attention package or a manually added layer. Anything that works.

from tensorflow.keras.layers import (Input, Embedding, LSTM, Bidirectional,
                                     Dense, TimeDistributed, Concatenate,
                                     Activation, dot)
from tensorflow.keras.models import Model

# Encoder
encoder_inputs = Input(shape=(max_text_len,))

# Embedding layer
enc_emb = Embedding(x_voc, embedding_dim,
                    trainable=True)(encoder_inputs)

# Encoder LSTM 1
encoder_lstm1 = Bidirectional(LSTM(latent_dim, return_sequences=True,
                     return_state=True, dropout=0.4,
                     recurrent_dropout=0.4))
(encoder_output1, forward_h1, forward_c1, backward_h1, backward_c1) = encoder_lstm1(enc_emb)

# Encoder LSTM 2
encoder_lstm2 = Bidirectional(LSTM(latent_dim, return_sequences=True,
                     return_state=True, dropout=0.4,
                     recurrent_dropout=0.4))
(encoder_output2, forward_h2, forward_c2, backward_h2, backward_c2) = encoder_lstm2(encoder_output1)

# Encoder LSTM 3
encoder_lstm3 = Bidirectional(LSTM(latent_dim, return_state=True,
                     return_sequences=True, dropout=0.4,
                     recurrent_dropout=0.4))
(encoder_outputs, forward_h, forward_c, backward_h, backward_c) = encoder_lstm3(encoder_output2)

state_h = Concatenate()([forward_h, backward_h])
state_c = Concatenate()([forward_c, backward_c])

# Set up the decoder, using encoder_states as the initial state
decoder_inputs = Input(shape=(None, ))

# Embedding layer
dec_emb_layer = Embedding(y_voc, embedding_dim, trainable=True)
dec_emb = dec_emb_layer(decoder_inputs)


# Decoder LSTM
decoder_lstm = LSTM(latent_dim*2, return_sequences=True,
                    return_state=True, dropout=0.4,
                    recurrent_dropout=0.2)
(decoder_outputs, decoder_state_h, decoder_state_c) = \
    decoder_lstm(dec_emb, initial_state=[state_h, state_c])

# Start of (dot-product) attention layer -- the part I want to replace
# attention = dot([decoder_outputs, encoder_outputs], axes=[2, 2])
# attention = Activation('softmax')(attention)
# context = dot([attention, encoder_outputs], axes=[2, 1])
# decoder_outputs = Concatenate()([context, decoder_outputs])
# End of attention layer

# Dense layer
decoder_dense = TimeDistributed(Dense(y_voc, activation='softmax'))(decoder_outputs)

# Define the model
model = Model([encoder_inputs, decoder_inputs], decoder_dense)
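For reference, here is a minimal sketch of one way to swap the commented-out dot-product attention for self-attention over the decoder states, using tf.keras's built-in MultiHeadAttention layer (available in TF >= 2.4). To keep the sketch self-contained it uses a single unidirectional encoder layer and small hypothetical dimensions (the concrete values of max_text_len, x_voc, y_voc, embedding_dim, latent_dim are placeholders, not taken from the question); the three-layer bidirectional encoder above would plug in the same way.

```python
from tensorflow.keras.layers import (Input, Embedding, LSTM, Dense,
                                     TimeDistributed, MultiHeadAttention,
                                     Concatenate)
from tensorflow.keras.models import Model

# Hypothetical small sizes, for illustration only
max_text_len, x_voc, y_voc = 10, 50, 40
embedding_dim, latent_dim = 16, 32

# Encoder (simplified to one unidirectional LSTM layer)
encoder_inputs = Input(shape=(max_text_len,))
enc_emb = Embedding(x_voc, embedding_dim)(encoder_inputs)
encoder_outputs, state_h, state_c = LSTM(
    latent_dim, return_sequences=True, return_state=True)(enc_emb)

# Decoder
decoder_inputs = Input(shape=(None,))
dec_emb = Embedding(y_voc, embedding_dim)(decoder_inputs)
decoder_outputs, _, _ = LSTM(
    latent_dim, return_sequences=True, return_state=True)(
    dec_emb, initial_state=[state_h, state_c])

# Self-attention: query, key and value all come from decoder_outputs,
# so each decoder position attends over the other decoder positions
self_attn = MultiHeadAttention(num_heads=2, key_dim=latent_dim)
context = self_attn(query=decoder_outputs, value=decoder_outputs,
                    key=decoder_outputs)

# Same concatenation pattern as the original dot-product attention
decoder_outputs = Concatenate()([context, decoder_outputs])

outputs = TimeDistributed(Dense(y_voc, activation='softmax'))(decoder_outputs)
model = Model([encoder_inputs, decoder_inputs], outputs)
```

Note that for autoregressive training you would typically also mask future positions (newer TF versions accept use_causal_mask=True in the attention call). If you prefer the keras-self-attention package mentioned above, its SeqSelfAttention layer can be applied to decoder_outputs at the same spot; the MultiHeadAttention route avoids the extra dependency.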

0 Answers:

There are no answers yet.