Keras: Why doesn't the official seq2seq example use TimeDistributed instead of Dense?

Date: 2018-02-09 19:36:54

Tags: python deep-learning keras

I am currently trying to build a seq2seq model on top of Keras.

I referred to this official seq2seq example, but I would like to know why a TimeDistributed layer is not used instead of Dense.

# Set up the decoder, using `encoder_states` as initial state.
decoder_inputs = Input(shape=(None, num_decoder_tokens))
# We set up our decoder to return full output sequences,
# and to return internal states as well. We don't use the
# return states in the training model, but we will use them in inference.
decoder_lstm = LSTM(latent_dim, return_sequences=True, return_state=True)
decoder_outputs, _, _ = decoder_lstm(decoder_inputs,
                                     initial_state=encoder_states)
decoder_dense = Dense(num_decoder_tokens, activation='softmax') # <- Here!
decoder_outputs = decoder_dense(decoder_outputs)

# Define the model that will turn
# `encoder_input_data` & `decoder_input_data` into `decoder_target_data`
model = Model([encoder_inputs, decoder_inputs], decoder_outputs)

A seq2seq model has to handle a many-to-many problem, so each time step must be processed individually. For that reason, I thought this model should use a TimeDistributed(Dense()) layer, but in fact it uses a plain Dense layer.
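
To make my question concrete, below is a minimal sketch of the decoder I had expected to see. The encoder part and the initial_state=encoder_states argument are omitted for brevity, and latent_dim / num_decoder_tokens are placeholder values I chose for illustration, not the ones used in the example:

from keras.layers import Input, LSTM, Dense, TimeDistributed

latent_dim = 256          # assumed value for illustration
num_decoder_tokens = 93   # assumed value for illustration

decoder_inputs = Input(shape=(None, num_decoder_tokens))
decoder_lstm = LSTM(latent_dim, return_sequences=True, return_state=True)
decoder_outputs, _, _ = decoder_lstm(decoder_inputs)
# Wrap Dense in TimeDistributed so it is explicitly applied at every time step
decoder_outputs = TimeDistributed(
    Dense(num_decoder_tokens, activation='softmax'))(decoder_outputs)
# decoder_outputs has shape (batch, timesteps, num_decoder_tokens),
# the same output shape as the plain Dense version in the example

This version builds and produces the same output shape as the snippet above, so I am unsure what the practical difference is.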

Can someone explain the reason to me?

0 Answers:

There are no answers yet.