How do I prevent overfitting in a Keras Sequential model?

Asked: 2018-11-15 00:48:12

Tags: python machine-learning scikit-learn keras text-classification

I have already added dropout regularization. I am trying to build a multiclass text-classification multilayer perceptron. My model:

from keras.models import Sequential
from keras.layers import Dense, Dropout

# `features` is the input shape tuple, (20000,) per the summary below
model = Sequential([
                Dropout(rate=0.2, input_shape=features),
                Dense(units=64, activation='relu'),
                Dropout(rate=0.2),
                Dense(units=64, activation='relu'),
                Dropout(rate=0.2),
                Dense(units=16, activation='softmax')])

The output of model.summary():

_________________________________________________________________
Layer (type)                 Output Shape              Param #
=================================================================
dropout_1 (Dropout)          (None, 20000)             0
_________________________________________________________________
dense_1 (Dense)              (None, 64)                1280064
_________________________________________________________________
dropout_2 (Dropout)          (None, 64)                0
_________________________________________________________________
dense_2 (Dense)              (None, 64)                4160
_________________________________________________________________
dropout_3 (Dropout)          (None, 64)                0
_________________________________________________________________
dense_3 (Dense)              (None, 16)                1040
=================================================================
Total params: 1,285,264
Trainable params: 1,285,264
Non-trainable params: 0
_________________________________________________________________
None
Train on 6940 samples, validate on 1735 samples

I get:

Epoch 16/1000
 - 4s - loss: 0.4926 - acc: 0.8719 - val_loss: 1.2640 - val_acc: 0.6640
Validation accuracy: 0.6639769498140736, loss: 1.2639631692545559

Validation accuracy is about 20% lower than training accuracy, and validation loss is far higher than training loss.

I am already using dropout regularization, training with epochs=1000 and batch_size=512, and early stopping on val_loss.
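For clarity, early stopping on val_loss means halting training once the validation loss has stopped improving for some number of epochs (the patience). A minimal pure-Python sketch of that logic, using a hypothetical val_loss curve rather than my actual training run:

def early_stop_epoch(val_losses, patience=3, min_delta=0.0):
    """Return the epoch index at which training would stop:
    the first epoch after `patience` epochs with no improvement
    of more than `min_delta` over the best val_loss seen so far."""
    best = float('inf')
    wait = 0
    for epoch, loss in enumerate(val_losses):
        if loss < best - min_delta:
            best = loss   # new best: reset the patience counter
            wait = 0
        else:
            wait += 1
            if wait >= patience:
                return epoch
    return len(val_losses) - 1  # never triggered: ran to the end

# Hypothetical val_loss curve: improves, then plateaus and rises.
losses = [1.5, 1.2, 1.0, 1.05, 1.1, 1.2, 1.3]
print(early_stop_epoch(losses))  # -> 5 (three epochs after the best, at index 2)

In Keras this behaviour corresponds to the EarlyStopping callback with monitor='val_loss' and a patience value, passed to model.fit via callbacks=[...].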

Any suggestions?

0 Answers:

No answers yet