I'm trying to develop an encoder model in Keras for time series. The data has shape (5039, 28, 1), meaning my seq_len is 28 and I have one feature. For the first layer of the encoder I use 112 hidden units, the second layer has 56, and to be able to get back to the decoder's input shape I have to add a third layer with 28 hidden units (this autoencoder is supposed to reconstruct its input). But I don't know what the correct way to connect the LSTM layers together is. AFAIK, I can either add a RepeatVector or set return_sequences=True. You can see both of my models in the code below. I don't know what the difference would be and which approach is the correct one.

First model, using return_sequences=True:
from keras.layers import Input, LSTM, RepeatVector, Reshape
from keras.models import Model

inputEncoder = Input(shape=(28, 1))
firstEncLayer = LSTM(112, return_sequences=True)(inputEncoder)
snd = LSTM(56, return_sequences=True)(firstEncLayer)
outEncoder = LSTM(28)(snd)                    # last encoder layer returns only the final state
context = RepeatVector(1)(outEncoder)
context_reshaped = Reshape((28, 1))(context)  # (1, 28) -> (28, 1) to feed the decoder
encoder_model = Model(inputEncoder, outEncoder)
firstDecoder = LSTM(112, return_sequences=True)(context_reshaped)
outDecoder = LSTM(1, return_sequences=True)(firstDecoder)
autoencoder = Model(inputEncoder, outDecoder)
Second model, with RepeatVector:
inputEncoder = Input(shape=(28, 1))
firstEncLayer = LSTM(112)(inputEncoder)
firstEncLayer = RepeatVector(1)(firstEncLayer)  # restore a time axis of length 1
snd = LSTM(56)(firstEncLayer)
snd = RepeatVector(1)(snd)
outEncoder = LSTM(28)(snd)
encoder_model = Model(inputEncoder, outEncoder)
context = RepeatVector(1)(outEncoder)
context_reshaped = Reshape((28, 1))(context)    # (1, 28) -> (28, 1)
firstDecoder = LSTM(112)(context_reshaped)
firstDecoder = RepeatVector(1)(firstDecoder)
sndDecoder = LSTM(28)(firstDecoder)
outDecoder = RepeatVector(1)(sndDecoder)
outDecoder = Reshape((28, 1))(outDecoder)       # (1, 28) -> (28, 1) to match the input
autoencoder = Model(inputEncoder, outDecoder)
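For context, the layout I've seen in Keras seq2seq autoencoder examples differs from both of my attempts: the stacked encoder layers keep return_sequences=True, the bottleneck layer returns a single vector, and a single RepeatVector(seq_len) at the bottleneck copies that vector once per output timestep. A minimal sketch of that pattern (layer sizes are my own choice, using tensorflow.keras):

```python
from tensorflow.keras.layers import Input, LSTM, RepeatVector, TimeDistributed, Dense
from tensorflow.keras.models import Model

seq_len, n_features = 28, 1

inp = Input(shape=(seq_len, n_features))
# stacked encoder: intermediate layers return the full sequence
x = LSTM(112, return_sequences=True)(inp)
x = LSTM(56)(x)                      # bottleneck: only the final hidden state
x = RepeatVector(seq_len)(x)         # copy the context vector for each output step
# decoder mirrors the encoder and returns sequences throughout
x = LSTM(56, return_sequences=True)(x)
x = LSTM(112, return_sequences=True)(x)
out = TimeDistributed(Dense(n_features))(x)  # project each timestep back to 1 feature

autoencoder = Model(inp, out)
autoencoder.compile(optimizer="adam", loss="mse")
```

In this layout the decoder's output shape is (None, 28, 1), matching the input, without any Reshape tricks.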