How to use a TimeDistributed layer to predict sequences of dynamic length? PYTHON 3

Asked: 2020-06-03 06:48:05

Tags: keras lstm keras-layer autoencoder seq2seq

So I'm trying to build an LSTM-based autoencoder that I want to use for time series data. The data is split into sequences of different lengths, so the input to the model has shape [None, None, n_features], where the first None stands for the number of samples and the second for the time_steps of each sequence. The sequences are processed by an LSTM with return_sequences=False, the encoded dimension is then recreated by the RepeatVector layer and run through an LSTM again. Finally, I want to apply a TimeDistributed layer, but how do I tell python that the time_steps dimension is dynamic? See my code:

from keras import backend as K  
.... other dependencies .....
input_ae = Input(shape=(None, 2))  # shape: time_steps, n_features
LSTM1 = LSTM(units=128, return_sequences=False)(input_ae)
code = RepeatVector(n=K.shape(input_ae)[1])(LSTM1) # bottleneck layer -- fails: RepeatVector expects a static integer n
LSTM2 = LSTM(units=128, return_sequences=True)(code)
output = TimeDistributed(Dense(units=2))(LSTM2) # ???????  HOW TO ????

# no problem here so far: 
model = Model(input_ae, outputs=output) 
model.compile(optimizer='adam', loss='mse')

1 Answer:

Answer 0 (score: 1)

This function seems to do the trick:

import tensorflow as tf

def repeat(x_inp):

    x, inp = x_inp
    x = tf.expand_dims(x, 1)  # (batch, units) -> (batch, 1, units)
    x = tf.repeat(x, [tf.shape(inp)[1]], axis=1)  # tile along the dynamic time axis

    return x
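The shape bookkeeping behind repeat can be sketched with a NumPy analogue (the concrete batch size, units and time_steps below are illustrative, not from the answer):

```python
import numpy as np

# Encoded vector from the first LSTM: (batch, units)
x = np.zeros((4, 128))
# Reference input whose time axis we want to copy: (batch, time_steps, n_features)
inp = np.zeros((4, 30, 2))

# Same two steps as the Lambda: add a time axis, then tile it
x = np.expand_dims(x, 1)                # (4, 1, 128)
x = np.repeat(x, inp.shape[1], axis=1)  # (4, 30, 128)

print(x.shape)  # (4, 30, 128)
```

The key difference in the Keras version is that `inp.shape[1]` is replaced by `tf.shape(inp)[1]`, which is evaluated at run time, so it works even when the time dimension is None at graph-build time.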

Example:

import numpy as np
from tensorflow.keras.layers import Input, LSTM, Lambda, TimeDistributed, Dense
from tensorflow.keras.models import Model

input_ae = Input(shape=(None, 2))
LSTM1 = LSTM(units=128, return_sequences=False)(input_ae)
code = Lambda(repeat)([LSTM1, input_ae])
LSTM2 = LSTM(units=128, return_sequences=True)(code)
output = TimeDistributed(Dense(units=2))(LSTM2)

model = Model(input_ae, output) 
model.compile(optimizer='adam', loss='mse')

X = np.random.uniform(0,1, (100,30,2))
model.fit(X, X, epochs=5)

I'm using tf.keras on TF 2.2.
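As a sanity check of my own (not part of the original answer): because the repeat length is read via tf.shape at run time, the same model should reconstruct sequences of lengths it was never built with, for example:

```python
import numpy as np
import tensorflow as tf
from tensorflow.keras.layers import Input, LSTM, Lambda, TimeDistributed, Dense
from tensorflow.keras.models import Model

def repeat(x_inp):
    x, inp = x_inp
    x = tf.expand_dims(x, 1)                      # (batch, units) -> (batch, 1, units)
    x = tf.repeat(x, [tf.shape(inp)[1]], axis=1)  # tile to (batch, time_steps, units)
    return x

input_ae = Input(shape=(None, 2))
LSTM1 = LSTM(units=128, return_sequences=False)(input_ae)
code = Lambda(repeat)([LSTM1, input_ae])
LSTM2 = LSTM(units=128, return_sequences=True)(code)
output = TimeDistributed(Dense(units=2))(LSTM2)
model = Model(input_ae, output)

# Two batches with different sequence lengths both come back at their own length
short = model.predict(np.random.uniform(0, 1, (5, 30, 2)))
long_ = model.predict(np.random.uniform(0, 1, (5, 50, 2)))
print(short.shape, long_.shape)  # (5, 30, 2) (5, 50, 2)
```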