Validation loss fluctuates randomly over a wide range while training loss decreases steadily

Date: 2020-05-06 08:16:35

Tags: python tensorflow machine-learning keras deep-learning

I am training a CNN model that is overfitting, so I visualized the validation loss and accuracy and found that they somehow come out as seemingly random numbers, like this (a sketch of the plotting code follows the log):

Train on 111003 samples, validate on 10357 samples
Epoch 1/20
111003/111003 [==============================] - 4121s 37ms/step - loss: 0.1805 - accuracy: 0.9561 - val_loss: 1.4469 - val_accuracy: 0.8522
Epoch 2/20
111003/111003 [==============================] - 4108s 37ms/step - loss: 0.0653 - accuracy: 0.9816 - val_loss: 4.2320 - val_accuracy: 0.5754
Epoch 3/20
111003/111003 [==============================] - 4114s 37ms/step - loss: 0.0468 - accuracy: 0.9872 - val_loss: 1.8273 - val_accuracy: 0.7318
Epoch 4/20
111003/111003 [==============================] - 4128s 37ms/step - loss: 0.0351 - accuracy: 0.9898 - val_loss: 7.4632 - val_accuracy: 0.6724
Epoch 5/20
111003/111003 [==============================] - 4127s 37ms/step - loss: 0.0288 - accuracy: 0.9919 - val_loss: 0.7178 - val_accuracy: 0.8104
Epoch 6/20
111003/111003 [==============================] - 4127s 37ms/step - loss: 0.0223 - accuracy: 0.9941 - val_loss: 8.4583 - val_accuracy: 0.401
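For reference, curves like these can be plotted from the History object that model.fit returns; a minimal sketch, assuming matplotlib is available:

import matplotlib.pyplot as plt

history = model.fit(...)  # the fit call shown further below

# training vs. validation loss per epoch
plt.plot(history.history['loss'], label='train loss')
plt.plot(history.history['val_loss'], label='val loss')
plt.xlabel('epoch')
plt.ylabel('loss')
plt.legend()
plt.show()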

I wanted to use early stopping, but it does not help in this situation.
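For reference, a minimal sketch of how early stopping is usually attached in Keras (the patience value here is an illustrative assumption, not from the original post):

# stop once val_loss has not improved for 3 epochs, then roll back
# to the best weights seen so far
early_stop = tf.keras.callbacks.EarlyStopping(
    monitor='val_loss',
    patience=3,
    restore_best_weights=True)

# passed to model.fit(..., callbacks=[early_stop])

With a validation loss this noisy, the callback would trigger on an essentially arbitrary epoch, which is presumably why it does not help here.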

My model and hyperparameters:

import tensorflow as tf
from tensorflow.keras.layers import (Input, Conv2D, BatchNormalization,
                                     MaxPooling2D, Flatten, Dense)
from tensorflow.keras.models import Model

def cnn(input_img):
    conv1 = Conv2D(32, (3, 3), activation='relu', padding='same')(input_img) # 280 x 252 x 32
    conv1 = BatchNormalization()(conv1)
    conv1 = Conv2D(32, (3, 3), activation='relu', padding='same')(conv1)
    conv1 = BatchNormalization()(conv1)
    pool1 = MaxPooling2D(pool_size=(2, 2))(conv1) # 140 x 126 x 32
    conv2 = Conv2D(64, (3, 3), activation='relu', padding='same')(pool1) # 140 x 126 x 64
    conv2 = BatchNormalization()(conv2)
    conv2 = Conv2D(64, (3, 3), activation='relu', padding='same')(conv2)
    conv2 = BatchNormalization()(conv2)
    pool2 = MaxPooling2D(pool_size=(2, 2))(conv2) # 70 x 63 x 64
    conv3 = Conv2D(128, (3, 3), activation='relu', padding='same')(pool2) # 70 x 63 x 128 (small and thick)
    conv3 = BatchNormalization()(conv3)
    conv3 = Conv2D(128, (3, 3), activation='relu', padding='same')(conv3)
    conv3 = BatchNormalization()(conv3)
    conv4 = Conv2D(256, (3, 3), activation='relu', padding='same')(conv3) # 70 x 63 x 256 (small and thick)
    conv4 = BatchNormalization()(conv4)
    conv4 = Conv2D(256, (3, 3), activation='relu', padding='same')(conv4)
    conv4 = BatchNormalization()(conv4)
    return conv4

def fc(enco):
    flat = Flatten()(enco)
    den = Dense(128, activation='relu')(flat)
    out = Dense(5, activation='softmax')(den)
    return out
# model assembly (not shown in the original post; reconstructed from the
# cnn/fc definitions above and the input reshape used in fit() below)
input_img = Input(shape=(280, 252, 1))
model = Model(inputs=input_img, outputs=fc(cnn(input_img)))

opt = tf.keras.optimizers.Adam(learning_rate=0.00005, beta_1=0.9, beta_2=0.999, amsgrad=False)
model.compile(optimizer=opt, loss='categorical_crossentropy', metrics=['accuracy'])

model.fit(x=train.reshape(train.shape[0], 280, 252, 1),
          y=train_Y_one_hot,
          epochs=20,
          batch_size=32,
          validation_data=(x_val.reshape(x_val.shape[0], 280, 252, 1), val_Y_one_hot),
          verbose=1)

Please tell me what I should do!

1 Answer:

Answer 0 (score: 0)

First, try using the default values.

Change this:

tf.keras.optimizers.Adam(beta_1=0.9, beta_2=0.999, amsgrad=False, learning_rate=0.00005)

to this:

tf.keras.optimizers.Adam()
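In context, that means compiling with Adam's defaults; a minimal sketch:

# defaults: learning_rate=0.001, beta_1=0.9, beta_2=0.999
opt = tf.keras.optimizers.Adam()
model.compile(optimizer=opt, loss='categorical_crossentropy', metrics=['accuracy'])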