How to prevent overfitting

Time: 2020-04-17 14:08:11

Tags: python keras deep-learning

I am working on my final project and I am new to ConvNets. I want to classify whether an image is real or spoofed. I have roughly 8,000 samples in total (both classes combined). Here is part of my training log:

Epoch 7/100
311/311 [==============================] - 20s 63ms/step - loss: 0.3274 - accuracy: 0.8675 - val_loss: 0.2481 - val_accuracy: 0.9002
Epoch 8/100
311/311 [==============================] - 20s 63ms/step - loss: 0.3189 - accuracy: 0.8691 - val_loss: 0.3015 - val_accuracy: 0.8684
Epoch 9/100
311/311 [==============================] - 19s 62ms/step - loss: 0.3201 - accuracy: 0.8667 - val_loss: 0.2460 - val_accuracy: 0.9036
Epoch 10/100
311/311 [==============================] - 19s 62ms/step - loss: 0.3063 - accuracy: 0.8723 - val_loss: 0.2752 - val_accuracy: 0.8901
Epoch 11/100
311/311 [==============================] - 19s 62ms/step - loss: 0.3086 - accuracy: 0.8749 - val_loss: 0.2717 - val_accuracy: 0.8988
[INFO] evaluating network...
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import (Activation, BatchNormalization, Conv2D,
                                     Dense, Dropout, Flatten, MaxPooling2D)
from tensorflow.keras import backend as K

model = Sequential()
inputShape = (height, width, depth)
chanDim = -1

if K.image_data_format() == "channels_first":
    inputShape = (depth, height, width)
    chanDim = 1

model.add(Conv2D(16, (3, 3), padding="same", input_shape=inputShape))
model.add(Activation("relu"))
model.add(BatchNormalization(axis=chanDim))
model.add(Conv2D(16, (3, 3), padding="same"))
model.add(Activation("relu"))
model.add(BatchNormalization(axis=chanDim))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Dropout(0.3))

model.add(Conv2D(32, (5, 5), padding="same"))
model.add(Activation("relu"))
model.add(BatchNormalization(axis=chanDim))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Dropout(0.3))

model.add(Flatten())
model.add(Dense(128))
model.add(Activation("relu"))
model.add(BatchNormalization())
model.add(Dropout(0.6))

model.add(Dense(classes))
model.add(Activation("softmax"))

The input is 32x32 and there are two classes. I use EarlyStopping in Keras to prevent overfitting. I keep changing the learning rate and the number of neurons per layer, but training always stops before 20 epochs. Any suggestions on how to prevent overfitting? I am a beginner with convolutional neural networks. Thanks in advance!

PS: LR: 0.001, BS: 20, epochs: 100
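For context on why training halts early: an EarlyStopping-style callback stops training once the monitored metric (here `val_loss`) has not improved for `patience` consecutive epochs, so a run ending before epoch 20 usually means a small patience value, not necessarily overfitting. A minimal sketch of that stopping rule in plain Python (the `should_stop` helper and its defaults are illustrative, not the actual Keras implementation):

```python
def should_stop(val_losses, patience=3, min_delta=0.0):
    """Return True if val_loss fails to improve for `patience` epochs in a row.

    Mirrors the patience logic of an early-stopping callback: track the best
    loss seen so far, count epochs without improvement, stop at `patience`.
    """
    best = float("inf")
    wait = 0  # epochs since the last improvement
    for loss in val_losses:
        if loss < best - min_delta:
            best = loss
            wait = 0
        else:
            wait += 1
            if wait >= patience:
                return True
    return False

# With the val_loss values from the log above, patience=3 has not yet
# triggered, but a bumpy loss curve and patience=1 or 2 would stop quickly:
print(should_stop([0.2481, 0.3015, 0.2460, 0.2752, 0.2717], patience=3))
print(should_stop([0.2481, 0.3015, 0.2460, 0.2752, 0.2717], patience=2))
```

Raising `patience` (and setting `restore_best_weights=True` in Keras's `EarlyStopping`) lets training ride out noisy validation epochs before giving up.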

0 Answers:

No answers yet