Validation accuracy fluctuating while training accuracy increases?

Time: 2019-12-04 12:57:50

Tags: machine-learning keras deep-learning lstm

I have a multi-class classification problem that depends on historical data. I am trying an LSTM with loss='sparse_categorical_crossentropy'. The training accuracy and loss increase and decrease respectively, as expected. However, my test accuracy starts to fluctuate wildly.

What am I doing wrong?

Input data:

X = np.reshape(X, (X.shape[0], X.shape[1], 1))
X.shape
(200146, 13, 1)

My model:

# imports needed for this snippet
import numpy as np
from matplotlib import pyplot
from sklearn.model_selection import StratifiedKFold
from keras.models import Sequential
from keras.layers import LSTM, Dropout, Dense
from keras import regularizers
from keras.callbacks import EarlyStopping, ModelCheckpoint

# fix random seed for reproducibility
seed = 7
np.random.seed(seed)

# define 10-fold cross validation test harness
# (random_state has no effect when shuffle=False, and recent scikit-learn versions reject the combination)
kfold = StratifiedKFold(n_splits=10, shuffle=False)
cvscores = []
for train, test in kfold.split(X, y):
    regressor = Sequential()

    # units = number of LSTM cells in this first layer -> more units give higher dimensionality
    # return_sequences = True because we are stacking another LSTM layer after this one
    # input_shape = (timesteps, features) of each sample
    regressor.add(LSTM(units=50, return_sequences=True, input_shape=(X[train].shape[1], 1)))
    regressor.add(Dropout(0.2))

    # Extra LSTM layer
    regressor.add(LSTM(units=50, return_sequences=True))
    regressor.add(Dropout(0.2))
    # 3rd LSTM layer
    regressor.add(LSTM(units=50, return_sequences=True))
    regressor.add(Dropout(0.2))

    # 4th (final) LSTM layer; return_sequences stays at its default False before the Dense output
    regressor.add(LSTM(units=50))
    regressor.add(Dropout(0.2))

    # output layer
    regressor.add(Dense(4, activation='softmax', kernel_regularizer=regularizers.l2(0.001)))

    # Compile the RNN
    regressor.compile(optimizer='adam', loss='sparse_categorical_crossentropy',metrics=['accuracy'])

    # Set callback functions to early stop training and save the best model so far
    callbacks = [EarlyStopping(monitor='val_loss', patience=9),
                 ModelCheckpoint(filepath='best_model.h5', monitor='val_loss', save_best_only=True)]


    history = regressor.fit(X[train], y[train], epochs=250, callbacks=callbacks, 
                        validation_data=(X[test], y[test]))

    # plot train and validation loss
    pyplot.plot(history.history['loss'])
    pyplot.plot(history.history['val_loss'])
    pyplot.title('model train vs validation loss')
    pyplot.ylabel('loss')
    pyplot.xlabel('epoch')
    pyplot.legend(['train', 'validation'], loc='upper right')
    pyplot.show()


    # evaluate the model
    scores = regressor.evaluate(X[test], y[test], verbose=0)
    print("%s: %.2f%%" % (regressor.metrics_names[1], scores[1]*100))
    cvscores.append(scores[1] * 100)
print("%.2f%% (+/- %.2f%%)" % (np.mean(cvscores), np.std(cvscores)))

Results:

[image: trainingmodel]

[image: Plot]

2 Answers:

Answer 0 (score: 1)

It looks like you have stacked too many LSTM layers on top of each other, which eventually leads to overfitting. You should probably reduce the number of layers.
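
For illustration, a minimal sketch of a shallower variant of the question's model, keeping the same (13, 1) input shape and 4-class softmax output; the single LSTM layer and 50 units are just example values, not tuned for this data:

from keras.models import Sequential
from keras.layers import LSTM, Dropout, Dense
from keras import regularizers

# shallower variant: one LSTM layer instead of four
model = Sequential()
# return_sequences is not needed because no recurrent layer follows
model.add(LSTM(units=50, input_shape=(13, 1)))
model.add(Dropout(0.2))
model.add(Dense(4, activation='softmax', kernel_regularizer=regularizers.l2(0.001)))
model.compile(optimizer='adam', loss='sparse_categorical_crossentropy', metrics=['accuracy'])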

Answer 1 (score: 0)

What you are describing here is overfitting. It means your model keeps learning the training data without generalizing, or, put another way, it is memorizing the exact features of your training set. This is one of the main problems you deal with in deep learning, and there is no single solution: you have to try different architectures, different hyperparameters, and so on.

You can start with a small model that underfits (i.e., both training and validation accuracy are low) and keep increasing its capacity until it overfits. Then you can play with the optimizer and other hyperparameters; see the sketch below.

By a smaller model I mean one with fewer hidden units or fewer layers.
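
As a rough sketch of that workflow, assuming you already have a train/validation split in X_train, y_train, X_val, y_val (hypothetical names) shaped like the data in the question, you could grow the capacity step by step and watch the train/validation gap; the candidate sizes below are arbitrary examples, not recommendations:

from keras.models import Sequential
from keras.layers import LSTM, Dropout, Dense

# candidate (layers, units) pairs, smallest first
for n_layers, n_units in [(1, 8), (1, 32), (2, 32), (2, 64)]:
    model = Sequential()
    for i in range(n_layers):
        # intermediate LSTM layers must return sequences; the last one must not
        if i == 0:
            model.add(LSTM(units=n_units, return_sequences=(i < n_layers - 1),
                           input_shape=(13, 1)))
        else:
            model.add(LSTM(units=n_units, return_sequences=(i < n_layers - 1)))
        model.add(Dropout(0.2))
    model.add(Dense(4, activation='softmax'))
    model.compile(optimizer='adam', loss='sparse_categorical_crossentropy',
                  metrics=['accuracy'])
    history = model.fit(X_train, y_train, epochs=50, verbose=0,
                        validation_data=(X_val, y_val))
    # compare final train vs. validation loss to see where overfitting starts
    print(n_layers, n_units,
          history.history['loss'][-1], history.history['val_loss'][-1])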