Classifying long time series with an LSTM

Posted: 2018-06-27 13:54:20

Tags: python keras lstm

I am trying to train an LSTM model to classify long time series with 700 time steps each.
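For context, Keras recurrent layers expect 3-D input of shape (samples, timesteps, features), so a univariate series of length 700 contributes one feature per step. A minimal sketch with hypothetical random data (the `xtrain` here is a stand-in, not the question's actual dataset):

```python
import numpy as np

# Hypothetical dataset: 1000 univariate series, 700 steps each.
n_samples, timesteps = 1000, 700
xtrain = np.random.rand(n_samples, timesteps)

# Keras LSTM layers expect (samples, timesteps, features),
# so add a trailing feature axis of size 1.
xtrain = xtrain.reshape(n_samples, timesteps, 1)
print(xtrain.shape)  # (1000, 700, 1)
```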

An example time series belonging to class 1:


An example time series belonging to class 2:


The model:

from keras.models import Sequential
from keras.layers import LSTM, Dense
from keras.optimizers import SGD

model = Sequential()
model.add(LSTM(100, input_shape=(700, 1), return_sequences=False))
model.add(Dense(1, activation='sigmoid'))
sgd = SGD(lr=0.001, decay=1e-6, momentum=0.9, nesterov=True)
model.compile(loss='binary_crossentropy', optimizer=sgd, metrics=['accuracy'])
history = model.fit(xtrain, y,
                    batch_size=10,
                    epochs=20,
                    validation_split=0.2,
                    shuffle=True)

The training loss stays constant from the first epoch onward, while the validation loss increases. Any ideas what is going on here?
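One common difficulty with 700-step sequences is that gradients through a plain LSTM can vanish long before reaching early time steps, which can leave the training loss flat. A frequently suggested workaround is to shorten the sequence before the recurrent layer. Below is a sketch of time-axis average pooling in plain numpy (the `downsample` helper and the factor of 5 are illustrative choices, not from the question):

```python
import numpy as np

def downsample(series, factor):
    # Average-pool a (timesteps, features) array along time by `factor`,
    # dropping any trailing steps that don't fill a full window.
    t, c = series.shape
    t_trim = t - (t % factor)
    return series[:t_trim].reshape(t_trim // factor, factor, c).mean(axis=1)

x = np.linspace(0.0, 1.0, 700).reshape(700, 1)
x_short = downsample(x, 5)
print(x_short.shape)  # (140, 1) -- 140 steps is far easier for an LSTM
```

The same effect can be had inside the model with an `AveragePooling1D` layer placed before the LSTM.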

I tried adding convolutional layers alongside the LSTM layer. That improves the model's accuracy nicely, but the test loss and accuracy fluctuate sharply.
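Some epoch-to-epoch swing is expected with `batch_size=10`, since each gradient step is noisy; smoothing the recorded metrics makes the underlying trend easier to judge. A sketch, where the `val_loss` list below is a hypothetical stand-in for `history.history['val_loss']`:

```python
import numpy as np

def smooth(values, window=3):
    # Simple moving average over per-epoch metric values.
    kernel = np.ones(window) / window
    return np.convolve(np.asarray(values, dtype=float), kernel, mode='valid')

val_loss = [0.90, 0.40, 0.80, 0.30, 0.70, 0.35]  # hypothetical history
print(smooth(val_loss))  # [0.7  0.5  0.6  0.45]
```

If the smoothed curve still trends upward, the model is genuinely overfitting rather than merely oscillating.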


The new model architecture:

from keras.models import Model
from keras.layers import (Input, LSTM, Dropout, Permute, Conv1D,
                          BatchNormalization, Activation,
                          GlobalAveragePooling1D, concatenate, Dense)

def generate_model():
    ip = Input(shape=(700, 1))

    # Recurrent branch
    x = LSTM(8)(ip)
    x = Dropout(0.8)(x)

    # Convolutional branch
    y = Permute((2, 1))(ip)
    y = Conv1D(128, 8, padding='same', kernel_initializer='he_uniform')(y)
    y = BatchNormalization()(y)
    y = Activation('relu')(y)
    y = Conv1D(256, 5, padding='same', kernel_initializer='he_uniform')(y)
    y = BatchNormalization()(y)
    y = Activation('relu')(y)
    y = Conv1D(128, 3, padding='same', kernel_initializer='he_uniform')(y)
    y = BatchNormalization()(y)
    y = Activation('relu')(y)
    y = GlobalAveragePooling1D()(y)

    x = concatenate([x, y])
    out = Dense(1, activation='sigmoid')(x)
    model = Model(ip, out)
    model.summary()
    return model
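One thing worth checking in this architecture: `Permute((2, 1))` turns the (700, 1) input into shape (1, 700), so the convolutional branch sees a single time step with 700 channels, and `GlobalAveragePooling1D` then averages over just that one position. The shape effect can be verified outside Keras with a plain transpose (the batch of zeros is only illustrative):

```python
import numpy as np

batch = np.zeros((10, 700, 1))             # (samples, timesteps, features)
permuted = np.transpose(batch, (0, 2, 1))  # what Permute((2, 1)) produces
print(permuted.shape)  # (10, 1, 700)
```

If the intent was to convolve along the 700 time steps, the `Permute` layer on the convolutional branch may not be doing what is expected.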

model = generate_model()
sgd = SGD(lr=0.0001, decay=1e-6, momentum=0.9, nesterov=True)
model.compile(loss='binary_crossentropy',
              optimizer=sgd,
              metrics=['accuracy'])
callbacks = []  # e.g. EarlyStopping or ReduceLROnPlateau instances
history1 = model.fit(xtrain, y,
                     batch_size=10,
                     epochs=20,
                     validation_split=0.2,
                     shuffle=True,
                     callbacks=callbacks)

Why is this happening?

0 Answers:

No answers yet.