Cannot use TimeDistributed with LSTM

Time: 2019-05-31 03:19:54

Tags: python keras

I am trying to use Keras' TimeDistributed layer, but I am running into some problems.

Dataset shapes:

Training set: (800, 7, 231), where 7 is the number of timesteps.

Training labels: (800, 7)

Validation set: (700, 7, 231)

Validation labels: (700, 7)

My goal is binary classification. I have information for seven consecutive days (which explains why my timestep is 7). I also have labels for those seven days, and I want to use them to predict only the last day (the seventh).
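
For reference, here is a minimal sketch of placeholder arrays with the same shapes as my data (random values, names chosen only to match the code below):

import numpy as np

# Dummy arrays with the same shapes as my data (random values, illustration only)
training_set = np.random.rand(800, 7, 231)         # 800 samples, 7 timesteps, 231 features
labels_train = np.random.randint(0, 2, (800, 7))   # one binary label per day

validation_set = np.random.rand(700, 7, 231)
labels_validation = np.random.randint(0, 2, (700, 7))

# The label I ultimately want to predict is the one for the last (7th) day:
last_day_labels = labels_train[:, -1]              # shape (800,)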

Here is my code:

from keras.models import Sequential
from keras.layers import LSTM, Dense, TimeDistributed, Activation

model = Sequential()
# LSTM over the 7 timesteps, returning the full sequence
model.add(LSTM(120, input_shape=(final_dataset.shape[1], final_dataset.shape[2]), return_sequences=True))
print('ok')
model.add(TimeDistributed(Dense(15, activation='softmax')))

model.add(Dense(1))
model.add(Activation('sigmoid'))

model.compile(loss='sparse_categorical_crossentropy', optimizer='adam', metrics=['accuracy'])
# fit network
history = model.fit(training_set, labels_train, epochs=1, validation_data=(validation_set, labels_validation))

My error: Error when checking target: expected activation_1 to have 3 dimensions, but got array with shape (800, 7)
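
If it helps, this is what I see when I print the shapes right after building the model above (the reshape at the end is only an idea, I have not verified that it fixes the problem):

import numpy as np

print(model.output_shape)    # (None, 7, 1): return_sequences=True keeps the time axis, so the output is 3-D
print(labels_train.shape)    # (800, 7): only 2-D, which seems to be what triggers the error

# One idea: give the labels a trailing feature axis so they match the 3-D output
labels_train_3d = labels_train[..., np.newaxis]   # shape (800, 7, 1)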

Edit:

I tried a different version with an encoder-decoder, which does not work for now either:

from keras.layers import LSTM
from keras.models import Sequential, Model
from keras.layers import Dense, Input, TimeDistributed, Flatten



# Define an input sequence and process it.
# Input layer of the encoder :
encoder_input = Input(shape=(final_dataset.shape[1], final_dataset.shape[2]))

# Hidden layers of the encoder :
encoder_LSTM = LSTM(120, input_shape=(final_dataset.shape[1], final_dataset.shape[2]), return_sequences=True)(encoder_input)

# Output layer of the encoder :
encoder_LSTM2_layer = LSTM(120, return_state=True)
encoder_outputs, state_h, state_c = encoder_LSTM2_layer(encoder_LSTM)

# We discard `encoder_outputs` and only keep the states.
encoder_states = [state_h, state_c]


# Set up the decoder, using `encoder_states` as initial state.
# Input layer of the decoder :
decoder_input = Input(shape=(6,))

# Hidden layers of the decoder :


decoder_LSTM_layer = LSTM(120, input_shape=(6,), return_sequences=True)
decoder_LSTM = decoder_LSTM_layer(decoder_input, initial_state=encoder_states)

decoder_LSTM_2_layer = LSTM(120, return_sequences=True, return_state=True)
decoder_LSTM_2, _, _ = decoder_LSTM_2_layer(decoder_LSTM)

# Output layer of the decoder :
decoder_dense = Dense(2, activation='sigmoid')
decoder_outputs = decoder_dense(decoder_LSTM_2)


# Define the model that will turn
# `encoder_input_data` & `decoder_input_data` into `decoder_target_data`
model = Model([encoder_input, decoder_input], decoder_outputs)

model.summary()

model.compile(loss='sparse_categorical_crossentropy', optimizer='adam', metrics=['accuracy'])
model.fit([final_dataset, labels_train[:, :6]],
          labels_train[:, 6])
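
One thing I am unsure about is the shape of the decoder input: an LSTM expects 3-D input (batch, timesteps, features), while I pass Input(shape=(6,)). Here is a sketch of what I think the reshaped version would look like (illustrative only, not verified):

import numpy as np
from keras.layers import Input, LSTM

# Dummy labels with the same shape as mine, only to illustrate the reshape
labels_train = np.random.randint(0, 2, (800, 7))

# Give the six previous-day labels a trailing feature axis: (800, 6) -> (800, 6, 1)
decoder_input_data = labels_train[:, :6, np.newaxis]

# The decoder input layer would then declare 6 timesteps with 1 feature each
decoder_input = Input(shape=(6, 1))
decoder_LSTM = LSTM(120, return_sequences=True)(decoder_input)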

0 Answers