How to connect LSTM layers in Keras with RepeatVector or return_sequences=True?

Asked: 2018-08-08 14:38:01

Tags: tensorflow keras deep-learning lstm autoencoder

I am trying to build an encoder model for time series in Keras. The data has shape (5039, 28, 1), meaning my seq_len is 28 and I have one feature. For the first layer of the encoder I use 112 hidden units, the second layer has 56 hidden units, and to get back to the decoder's input shape I have to add a third layer with 28 hidden units (this autoencoder is supposed to reconstruct its input). But I don't know what the correct way of connecting LSTM layers together is. AFAIK, I can either add RepeatVector or use return_seq=True. You can see both of my models in the code below. I don't know what the difference would be and which approach is the correct one.

First model, using return_sequences=True:

from keras.layers import Input, LSTM, RepeatVector, Reshape
from keras.models import Model

inputEncoder = Input(shape=(28, 1))
firstEncLayer = LSTM(112, return_sequences=True)(inputEncoder)
snd = LSTM(56, return_sequences=True)(firstEncLayer)
outEncoder = LSTM(28)(snd)

context = RepeatVector(1)(outEncoder)
context_reshaped = Reshape((28,1))(context)

encoder_model = Model(inputEncoder, outEncoder)
firstDecoder = LSTM(112, return_sequences=True)(context_reshaped)
outDecoder = LSTM(1, return_sequences=True)(firstDecoder)

autoencoder = Model(inputEncoder, outDecoder)

Second model, with RepeatVector:

inputEncoder = Input(shape=(28, 1))
firstEncLayer = LSTM(112)(inputEncoder)
firstEncLayer = RepeatVector(1)(firstEncLayer)
snd = LSTM(56)(firstEncLayer)
snd = RepeatVector(1)(snd)
outEncoder = LSTM(28)(snd)
encoder_model = Model(inputEncoder, outEncoder)

context = RepeatVector(1)(outEncoder)
context_reshaped = Reshape((28, 1))(context)

firstDecoder = LSTM(112)(context_reshaped)
firstDecoder = RepeatVector(1)(firstDecoder)
sndDecoder = LSTM(28)(firstDecoder)

outDecoder = RepeatVector(1)(sndDecoder)
outDecoder = Reshape((28, 1))(outDecoder)

autoencoder = Model(inputEncoder, outDecoder)

1 Answer:

Answer 0 (score: 12)

You will probably have to check for yourself which one works better, since that depends on the problem you are solving. That said, here is the difference between the two approaches.

Difference between return_sequences=True and RepeatVector

Essentially, return_sequences=True returns every output the encoder has observed across the timesteps, whereas RepeatVector repeats only the encoder's final output.
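The difference is easiest to see in the output shapes. Below is a minimal sketch (not from the answer, assuming TensorFlow's bundled Keras): both paths end up with shape (batch, 28, 112), but return_sequences=True yields 28 different hidden states while RepeatVector copies the single final state 28 times.

```python
import numpy as np
from tensorflow.keras.layers import Input, LSTM, RepeatVector
from tensorflow.keras.models import Model

inp = Input(shape=(28, 1))

# Path 1: one hidden state per timestep
seq = LSTM(112, return_sequences=True)(inp)

# Path 2: only the final hidden state, copied 28 times
last = LSTM(112)(inp)
rep = RepeatVector(28)(last)

m = Model(inp, [seq, rep])
a, b = m.predict(np.zeros((1, 28, 1)), verbose=0)
print(a.shape)  # (1, 28, 112) – the full sequence of hidden states
print(b.shape)  # (1, 28, 112) – the final hidden state repeated
```

Note that in path 2 every timestep of `b` is identical, whereas in path 1 the timesteps of `a` generally differ, which is exactly the distinction the answer describes.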