Interpreting a deep neural network's training trajectory: very low training loss and even lower validation loss

Time: 2017-01-28 11:40:38

Tags: python machine-learning neural-network deep-learning keras

I am a bit suspicious of the log below, which I got while training a deep neural network to regress target values between -1.0 and 1.0, with a learning rate of 0.001 and 19200/4800 training/validation samples:

____________________________________________________________________________________________________
Layer (type)                     Output Shape          Param #     Connected to
====================================================================================================
cropping2d_1 (Cropping2D)        (None, 138, 320, 3)   0           cropping2d_input_1[0][0]
____________________________________________________________________________________________________
lambda_1 (Lambda)                (None, 66, 200, 3)    0           cropping2d_1[0][0]
____________________________________________________________________________________________________
lambda_2 (Lambda)                (None, 66, 200, 3)    0           lambda_1[0][0]
____________________________________________________________________________________________________
convolution2d_1 (Convolution2D)  (None, 31, 98, 24)    1824        lambda_2[0][0]
____________________________________________________________________________________________________
spatialdropout2d_1 (SpatialDropo (None, 31, 98, 24)    0           convolution2d_1[0][0]
____________________________________________________________________________________________________
convolution2d_2 (Convolution2D)  (None, 14, 47, 36)    21636       spatialdropout2d_1[0][0]
____________________________________________________________________________________________________
spatialdropout2d_2 (SpatialDropo (None, 14, 47, 36)    0           convolution2d_2[0][0]
____________________________________________________________________________________________________
convolution2d_3 (Convolution2D)  (None, 5, 22, 48)     43248       spatialdropout2d_2[0][0]
____________________________________________________________________________________________________
spatialdropout2d_3 (SpatialDropo (None, 5, 22, 48)     0           convolution2d_3[0][0]
____________________________________________________________________________________________________
convolution2d_4 (Convolution2D)  (None, 3, 20, 64)     27712       spatialdropout2d_3[0][0]
____________________________________________________________________________________________________
spatialdropout2d_4 (SpatialDropo (None, 3, 20, 64)     0           convolution2d_4[0][0]
____________________________________________________________________________________________________
convolution2d_5 (Convolution2D)  (None, 1, 18, 64)     36928       spatialdropout2d_4[0][0]
____________________________________________________________________________________________________
spatialdropout2d_5 (SpatialDropo (None, 1, 18, 64)     0           convolution2d_5[0][0]
____________________________________________________________________________________________________
flatten_1 (Flatten)              (None, 1152)          0           spatialdropout2d_5[0][0]
____________________________________________________________________________________________________
dropout_1 (Dropout)              (None, 1152)          0           flatten_1[0][0]
____________________________________________________________________________________________________
activation_1 (Activation)        (None, 1152)          0           dropout_1[0][0]
____________________________________________________________________________________________________
dense_1 (Dense)                  (None, 100)           115300      activation_1[0][0]
____________________________________________________________________________________________________
dropout_2 (Dropout)              (None, 100)           0           dense_1[0][0]
____________________________________________________________________________________________________
dense_2 (Dense)                  (None, 50)            5050        dropout_2[0][0]
____________________________________________________________________________________________________
dense_3 (Dense)                  (None, 10)            510         dense_2[0][0]
____________________________________________________________________________________________________
dropout_3 (Dropout)              (None, 10)            0           dense_3[0][0]
____________________________________________________________________________________________________
dense_4 (Dense)                  (None, 1)             11          dropout_3[0][0]
====================================================================================================
Total params: 252,219
Trainable params: 252,219
Non-trainable params: 0
____________________________________________________________________________________________________
None
Epoch 1/5
19200/19200 [==============================] - 795s - loss: 0.0292 - val_loss: 0.0128
Epoch 2/5
19200/19200 [==============================] - 754s - loss: 0.0169 - val_loss: 0.0120
Epoch 3/5
19200/19200 [==============================] - 753s - loss: 0.0161 - val_loss: 0.0114
Epoch 4/5
19200/19200 [==============================] - 723s - loss: 0.0154 - val_loss: 0.0100
Epoch 5/5
19200/19200 [==============================] - 1597s - loss: 0.0151 - val_loss: 0.0098

Both training and validation loss are decreasing, which looks like good news at first sight. But how can the training loss be so low already in the first epoch? And how can the validation loss be even lower? Is this an indication of a systematic error somewhere in my model or training setup?

1 Answer:

Answer 0 (score: 6)

Actually, a validation loss smaller than the training loss is not as rare as one might think. It can happen, for example, when all the examples in your validation data are well covered by examples in your training set, so that your network has simply learned the actual structure of the dataset.

This happens frequently when the structure of the data is not very complex. In fact, the surprisingly small loss value after the first epoch may be a clue that this is what is happening in your case.
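One quick way to test the "validation is covered by training" hypothesis is to check for exact duplicates across the two splits. A minimal sketch, where the arrays are synthetic stand-ins for the real image tensors (in practice, load your actual training and validation images):

```python
import numpy as np

# Hypothetical arrays; replace with your real training/validation images.
rng = np.random.default_rng(0)
X_train = rng.integers(0, 256, size=(192, 66, 200, 3), dtype=np.uint8)
X_val = X_train[:10].copy()  # simulate 10 leaked duplicates

# Hash each image's raw bytes to detect exact duplicates across splits.
train_hashes = {hash(img.tobytes()) for img in X_train}
n_leaked = sum(hash(img.tobytes()) in train_hashes for img in X_val)
print(f"{n_leaked} of {len(X_val)} validation images also appear in the training set")
```

This only catches byte-identical duplicates; near-duplicates (e.g. consecutive video frames) would need a perceptual hash or feature-distance check.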

As for the loss: you have not stated which loss function you are using, but assuming your task is regression I would guess it is MSE. In that case, a mean squared error of 0.01 means that the root-mean-square distance between predicted and true values equals 0.1, i.e. 5% of the diameter of your value range [-1, 1]. So, is this error actually that small?
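The arithmetic behind that 5% figure, using the final val_loss from the log above:

```python
import math

# val_loss ~ 0.0098 under MSE corresponds to a root-mean-square error of:
mse = 0.0098
rmse = math.sqrt(mse)        # ~0.099

# The targets live in [-1, 1], a range of diameter 2, so the typical
# prediction error is about 5% of that range:
relative_error = rmse / 2.0  # ~0.0495
print(f"RMSE = {rmse:.3f} ({relative_error:.1%} of the [-1, 1] range)")
```

Whether an average error of ~5% of the target range is "good" depends entirely on the application, which is the point of the question above.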

You also have not specified the number of batches analysed during one epoch. Perhaps, if your data structure is not that complex and your batch size is small, one epoch is enough for your network to learn the data well.

To check whether your model is well trained, I advise you to make a correlation plot: plot y_pred against y_true, with y_true on the X axis and y_pred on the Y axis. Then you will really see how well your model has actually been trained.
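A sketch of such a correlation plot; the data here is simulated (replace y_true with your validation targets and y_pred with the output of your model's predict call on the validation set):

```python
import numpy as np
import matplotlib
matplotlib.use("Agg")  # render off-screen; drop this line for interactive use
import matplotlib.pyplot as plt

# Hypothetical data: replace with your real validation targets/predictions.
rng = np.random.default_rng(0)
y_true = rng.uniform(-1, 1, size=4800)
y_pred = y_true + rng.normal(scale=0.1, size=4800)  # simulate RMSE ~ 0.1

# A well-trained model hugs the diagonal y_pred == y_true.
plt.scatter(y_true, y_pred, s=2, alpha=0.3)
plt.plot([-1, 1], [-1, 1], "r--")  # ideal diagonal
plt.xlabel("y_true")
plt.ylabel("y_pred")
plt.savefig("correlation_plot.png")

# Pearson correlation is a one-number summary of the same plot.
r = np.corrcoef(y_true, y_pred)[0, 1]
print(f"Pearson r = {r:.3f}")
```

Systematic deviations from the diagonal (e.g. predictions squashed toward 0) are immediately visible in such a plot but invisible in a single loss number.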

EDIT: As Neil mentioned, there may be more reasons for a small validation error, e.g. validation cases that are not well separated from the training cases. I would also add that, given that 5 epochs take no more than 90 minutes, it may be worth checking your model's results using a classic cross-validation scheme, e.g. 5-fold. That could assure you that your model performs well given your dataset.
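A skeleton of that 5-fold check; the data is a synthetic stand-in for the real 19200-image set, and `build_model` is a hypothetical function returning a freshly initialised model (the commented lines show where the real fit/evaluate calls would go):

```python
import numpy as np
from sklearn.model_selection import KFold

# Synthetic stand-ins for the real images and steering targets.
rng = np.random.default_rng(0)
X = rng.random((100, 66, 200, 3), dtype=np.float32)
y = rng.uniform(-1, 1, size=100)

kf = KFold(n_splits=5, shuffle=True, random_state=0)
fold_losses = []
for train_idx, val_idx in kf.split(X):
    # model = build_model()                       # fresh weights per fold
    # model.fit(X[train_idx], y[train_idx], ...)
    # loss = model.evaluate(X[val_idx], y[val_idx])
    loss = float(np.mean(y[val_idx] ** 2))  # placeholder for the real val loss
    fold_losses.append(loss)

print(f"per-fold losses: {[round(l, 3) for l in fold_losses]}")
print(f"mean: {np.mean(fold_losses):.3f} +/- {np.std(fold_losses):.3f}")
```

If the per-fold losses agree with each other and with the 0.0098 from the log, the small validation error is a property of the data rather than a lucky split.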