I am using the following model to classify a time series. With a small training set (around 5,000 rows), the model reaches an accuracy of about 0.9 and a loss of about 0.02. However, with a training set of more than 900,000 rows, accuracy drops to about 0.3 and loss rises to about 1.5. Is there an explanation for this behavior? Should I increase LSTM_units or batch_size, and if so, to roughly what range of values?
```
# using keras (tensorflow backend)
from keras.layers import Input, BatchNormalization, LSTM, Dropout, Dense
from keras.models import Model

timesteps = 6
LSTM_units = 45
dropout = 0.2
epochs = 100
batch_size = 100

# `features` (number of input features) and `generator` (the batch
# generator) are defined elsewhere from the training data.
model_input = Input(shape=(timesteps, features))
batchNormalization = BatchNormalization()(model_input)
lstm1 = LSTM(units=LSTM_units, return_sequences=True)(batchNormalization)
dropout1 = Dropout(dropout)(lstm1)
lstm2 = LSTM(units=LSTM_units)(dropout1)
dropout2 = Dropout(dropout)(lstm2)
dense1 = Dense(150, activation='relu')(dropout2)
outputs = Dense(7, activation='softmax')(dense1)  # 7 output classes

model = Model(inputs=[model_input], outputs=outputs)
model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])

# Note: steps_per_epoch is the number of batches drawn per epoch, not the
# batch size; as written, each epoch sees only 100 batches (10,000 rows)
# regardless of how large the dataset is.
model.fit_generator(generator, epochs=epochs, steps_per_epoch=batch_size, workers=2, verbose=2)
```
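
For context on the batch_size / steps_per_epoch question: steps_per_epoch is conventionally the number of batches needed to cover the dataset once (roughly n_samples / batch_size), not the batch size itself. Below is a minimal sketch of that arithmetic, assuming the data is fed through Keras's built-in TimeseriesGenerator; the names X, y, and n_samples and the array sizes are illustrative placeholders, not taken from the code above.

```
# Minimal sketch (assumed setup, not the original `generator`)
import numpy as np
from keras.preprocessing.sequence import TimeseriesGenerator

n_samples, features = 900000, 10  # illustrative sizes
timesteps, batch_size = 6, 100

X = np.random.rand(n_samples, features)            # placeholder inputs
y = np.eye(7)[np.random.randint(0, 7, n_samples)]  # placeholder one-hot labels

generator = TimeseriesGenerator(X, y, length=timesteps, batch_size=batch_size)

# One full pass over the data takes len(generator) batches,
# roughly n_samples / batch_size (about 9,000 here), far more
# than the 100 steps used in the code above.
steps_per_epoch = len(generator)
print(steps_per_epoch)
```

With a Sequence-based generator like this, fit_generator can also infer the number of steps on its own if steps_per_epoch is omitted.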