How do I add more layers to an LSTM autoencoder?

Asked: 2019-07-18 15:57:25

Tags: python keras deep-learning lstm autoencoder

I am trying to implement an LSTM autoencoder in Keras, but I am not sure how to add more layers. Also, the bottleneck layer currently has 100 neurons; since I essentially want to compress this data, having fewer neurons than the input would help.

# lstm autoencoder recreate sequence
from numpy import array
from keras.models import Sequential
from keras.models import Model
from keras.layers import LSTM
from keras.layers import Dense
from keras.layers import RepeatVector
from keras.layers import TimeDistributed
from keras.utils import plot_model
# define input sequence
sequence = array([0.9, 0.9, 0.9, 0.9, 0.9, 0.9, 0.9, 0.9, 0.9])
# reshape input into [samples, timesteps, features]
n_in = len(sequence)
sequence = sequence.reshape((1, n_in, 1))
# define model
model = Sequential()
model.add(LSTM(10, activation='relu', input_shape=(n_in,1)))
model.add(RepeatVector(n_in))
model.add(LSTM(10, activation='relu', return_sequences=True))
model.add(TimeDistributed(Dense(1)))
model.compile(optimizer='adam', loss='mse')
# fit model
model.fit(sequence, sequence, epochs=300, verbose=1)
# connect the encoder LSTM as the output layer
model = Model(inputs=model.inputs, outputs=model.layers[0].output)
plot_model(model, show_shapes=True, to_file='lstm_encoder.png')
# get the feature vector for the input sequence
yhat = model.predict(sequence)
print(yhat.shape)
print(yhat)
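
For reference, here is a minimal sketch of one way to deepen this model: every encoder LSTM except the last sets return_sequences=True so the next LSTM receives the full sequence of hidden states, the last encoder LSTM returns a single fixed-length vector (the bottleneck), and the decoder mirrors the encoder. The layer sizes (100 and 1) follow the numbers mentioned in the question and are purely illustrative, not a definitive design:

# stacked lstm autoencoder - a sketch, layer sizes are illustrative
from numpy import array
from keras.models import Sequential
from keras.layers import LSTM
from keras.layers import Dense
from keras.layers import RepeatVector
from keras.layers import TimeDistributed
# same toy sequence as above
sequence = array([0.9, 0.9, 0.9, 0.9, 0.9, 0.9, 0.9, 0.9, 0.9])
n_in = len(sequence)
sequence = sequence.reshape((1, n_in, 1))
model = Sequential()
# encoder: return_sequences=True passes per-timestep hidden states to the
# next LSTM; the final encoder LSTM returns one fixed-length vector
model.add(LSTM(100, activation='relu', return_sequences=True, input_shape=(n_in, 1)))
model.add(LSTM(1, activation='relu'))
# bridge: repeat the 1-dimensional code once per output timestep
model.add(RepeatVector(n_in))
# decoder: mirror of the encoder, widening back out to the input size
model.add(LSTM(1, activation='relu', return_sequences=True))
model.add(LSTM(100, activation='relu', return_sequences=True))
model.add(TimeDistributed(Dense(1)))
model.compile(optimizer='adam', loss='mse')
model.fit(sequence, sequence, epochs=300, verbose=0)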

So, ideally, I would like to go from 10 inputs to 100 neurons, and then down to 1 neuron, to reduce the dimensionality of the data.

I am also not sure where the bottleneck is, or how to change its dimensionality.
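For what it's worth, in the original model the bottleneck appears to be the output of the first LSTM: it is the fixed-length vector that RepeatVector copies out for the decoder, so its dimensionality is simply that layer's unit count (10 here), and the Model(inputs=model.inputs, outputs=model.layers[0].output) step already extracts exactly that vector. In the stacked sketch above, the bottleneck would instead be the second LSTM, at layer index 1:

# a sketch: extract the 1-dimensional code from the stacked model above
from keras.models import Model
encoder = Model(inputs=model.inputs, outputs=model.layers[1].output)
print(encoder.predict(sequence).shape)  # (1, 1): one scalar code per input sequence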

0 Answers:

There are no answers yet.