Seq2seq encoder-decoder model with stacked LSTMs

Date: 2019-09-19 02:00:58

Tags: python tensorflow keras lstm seq2seq

I have tried this Keras seq2seq example of an encoder-decoder model for language translation. The model definition:

from keras.models import Model
from keras.layers import Input, Dense, CuDNNLSTM

# Define an input sequence and process it.
encoder_inputs = Input(shape=(None, num_encoder_tokens))
encoder = CuDNNLSTM(latent_dim, return_state=True)
encoder_outputs, state_h, state_c = encoder(encoder_inputs)
# We discard `encoder_outputs` and only keep the states.
encoder_states = [state_h, state_c]

# Set up the decoder, using `encoder_states` as initial state.
decoder_inputs = Input(shape=(None, num_decoder_tokens))
# We set up our decoder to return full output sequences,
# and to return internal states as well. We don't use the
# return states in the training model, but we will use them in inference.
decoder_lstm = CuDNNLSTM(latent_dim, return_sequences=True, return_state=True)
decoder_outputs, _, _ = decoder_lstm(decoder_inputs,
                                     initial_state=encoder_states)
decoder_dense = Dense(num_decoder_tokens, activation='softmax')
decoder_outputs = decoder_dense(decoder_outputs)

# Define the model that will turn
# `encoder_input_data` & `decoder_input_data` into `decoder_target_data`
model = Model([encoder_inputs, decoder_inputs], decoder_outputs)

The model successfully learns character-level English-to-Russian translation. But as you can see, the encoder and the decoder each consist of only a single LSTM layer. Conventional wisdom suggests that stacking several LSTM layers should give better results.
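For reference, the Keras example script trains this model roughly as follows (a sketch; the one-hot arrays `encoder_input_data`, `decoder_input_data`, and `decoder_target_data` are assumed to be prepared as in that script):

# Train on (encoder input, decoder input) -> decoder target pairs.
model.compile(optimizer='rmsprop', loss='categorical_crossentropy')
model.fit([encoder_input_data, decoder_input_data], decoder_target_data,
          batch_size=64, epochs=100, validation_split=0.2)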

Is stacking LSTM layers a good idea here? If so, how should the stacking be done in this architecture?
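One common approach (not from the original post; a minimal sketch reusing `num_encoder_tokens`, `num_decoder_tokens`, and `latent_dim` from the snippet above) is to have each stacked encoder layer return its full output sequence to the next layer, and to initialise each decoder layer with the states of the corresponding encoder layer:

from keras.models import Model
from keras.layers import Input, Dense, CuDNNLSTM

# Stacked encoder: layer 1 feeds its full sequence into layer 2;
# both layers expose their final (h, c) states.
encoder_inputs = Input(shape=(None, num_encoder_tokens))
encoder_l1 = CuDNNLSTM(latent_dim, return_sequences=True, return_state=True)
enc_seq, enc_h1, enc_c1 = encoder_l1(encoder_inputs)
encoder_l2 = CuDNNLSTM(latent_dim, return_state=True)
_, enc_h2, enc_c2 = encoder_l2(enc_seq)

# Stacked decoder: each layer starts from the states of the
# matching encoder layer.
decoder_inputs = Input(shape=(None, num_decoder_tokens))
decoder_l1 = CuDNNLSTM(latent_dim, return_sequences=True, return_state=True)
dec_seq, _, _ = decoder_l1(decoder_inputs, initial_state=[enc_h1, enc_c1])
decoder_l2 = CuDNNLSTM(latent_dim, return_sequences=True, return_state=True)
dec_seq2, _, _ = decoder_l2(dec_seq, initial_state=[enc_h2, enc_c2])

decoder_dense = Dense(num_decoder_tokens, activation='softmax')
decoder_outputs = decoder_dense(dec_seq2)

model = Model([encoder_inputs, decoder_inputs], decoder_outputs)

Note that the inference (sampling) models would then also need to carry the states of both decoder layers from step to step, not just one pair of (h, c) tensors.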

0 Answers:

There are no answers yet.