Loss does not change regardless of the learning rate

Date: 2018-08-17 18:56:13

Tags: python tensorflow keras deep-learning

I have built a deep learning model somewhat similar to the VGG network, using Keras with the TensorFlow backend. The model definition is as follows:

from keras.models import Sequential
from keras.layers import Conv2D, MaxPooling2D, Flatten, Dropout, Dense

model = Sequential()
# Convolutional blocks: 3x3 kernels, 'same' padding, 2x2 max pooling
model.add(Conv2D(64, 3, padding='same', activation='relu', input_shape=(180, 320, 3)))
model.add(Conv2D(64, 3, padding='same', activation='relu'))
model.add(MaxPooling2D(pool_size=(2, 2), strides=2))
model.add(Conv2D(64, 3, padding='same', activation='relu'))
model.add(Conv2D(64, 3, padding='same', activation='relu'))
model.add(MaxPooling2D(pool_size=(2, 2), strides=2))
model.add(Conv2D(128, 3, padding='same', activation='relu'))
model.add(Conv2D(128, 3, padding='same', activation='relu'))
model.add(MaxPooling2D(pool_size=(2, 2), strides=2))
model.add(Conv2D(128, 3, padding='same', activation='relu'))
model.add(Conv2D(128, 3, padding='same', activation='relu'))
# Fully connected head
model.add(Flatten())
model.add(Dropout(0.5))
model.add(Dense(1024, activation='relu'))
model.add(Dropout(0.5))
# Output layer: 9 real-valued predictions (note the relu activation here)
model.add(Dense(9, activation='relu'))

I have tried different combinations of optimizers (SGD, Adam, etc.), losses (MSE, MAE, etc.), and batch sizes (32 and 64). I even experimented with learning rates ranging from 0.001 to 10000. Yet even after 20 epochs, the validation loss stays exactly the same no matter which loss function I use, and the training loss barely changes. What am I doing wrong?
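The compile/fit call is not included in the question. A minimal sketch consistent with the terminal output below (MAE as both loss and metric, 100 epochs) might look like the following; the Adam optimizer, the 0.001 learning rate, and the x_train/y_train/x_val/y_val names are assumptions, not part of the original code:

from keras.optimizers import Adam

# Assumed setup (not shown in the question): MAE as both loss and metric,
# matching the 'loss' and 'mean_absolute_error' columns in the logs below.
model.compile(optimizer=Adam(lr=0.001), loss='mae', metrics=['mae'])

# x_train: (N, 180, 320, 3) images; y_train: (N, 9) real-valued targets
model.fit(x_train, y_train,
          batch_size=32,
          epochs=100,
          validation_data=(x_val, y_val))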

What my network should be trained to do: given an input image, the network needs to predict the set of 9 real values that can be derived from that image.

Terminal output during training:

    Epoch 1/100
    4800/4800 [==============================] - 96s 20ms/step - loss: 133.6534 - mean_absolute_error: 133.6534 - val_loss: 132.3744 - val_mean_absolute_error: 132.3744
    Epoch 2/100
    4800/4800 [==============================] - 49s 10ms/step - loss: 132.8033 - mean_absolute_error: 132.8033 - val_loss: 132.3744 - val_mean_absolute_error: 132.3744
    Epoch 3/100
    4800/4800 [==============================] - 51s 11ms/step - loss: 132.8033 - mean_absolute_error: 132.8033 - val_loss: 132.3744 - val_mean_absolute_error: 132.3744
    Epoch 4/100
    4800/4800 [==============================] - 51s 11ms/step - loss: 132.8033 - mean_absolute_error: 132.8033 - val_loss: 132.3744 - val_mean_absolute_error: 132.3744
    Epoch 5/100
    4800/4800 [==============================] - 51s 11ms/step - loss: 132.8033 - mean_absolute_error: 132.8033 - val_loss: 132.3744 - val_mean_absolute_error: 132.3744
    Epoch 6/100
    4800/4800 [==============================] - 51s 11ms/step - loss: 132.8033 - mean_absolute_error: 132.8033 - val_loss: 132.3744 - val_mean_absolute_error: 132.3744
    Epoch 7/100
    4800/4800 [==============================] - 51s 11ms/step - loss: 132.8033 - mean_absolute_error: 132.8033 - val_loss: 132.3744 - val_mean_absolute_error: 132.3744
    Epoch 8/100
    4800/4800 [==============================] - 51s 11ms/step - loss: 132.8033 - mean_absolute_error: 132.8033 - val_loss: 132.3744 - val_mean_absolute_error: 132.3744
    Epoch 9/100
    4800/4800 [==============================] - 51s 11ms/step - loss: 132.8033 - mean_absolute_error: 132.8033 - val_loss: 132.3744 - val_mean_absolute_error: 132.3744
    Epoch 10/100
    4800/4800 [==============================] - 51s 11ms/step - loss: 132.8033 - mean_absolute_error: 132.8033 - val_loss: 132.3744 - val_mean_absolute_error: 132.3744
    Epoch 11/100
    4800/4800 [==============================] - 51s 11ms/step - loss: 132.8033 - mean_absolute_error: 132.8033 - val_loss: 132.3744 - val_mean_absolute_error: 132.3744
    Epoch 12/100
    4800/4800 [==============================] - 51s 11ms/step - loss: 132.8033 - mean_absolute_error: 132.8033 - val_loss: 132.3744 - val_mean_absolute_error: 132.3744
    Epoch 13/100
    4800/4800 [==============================] - 50s 10ms/step - loss: 132.8033 - mean_absolute_error: 132.8033 - val_loss: 132.3744 - val_mean_absolute_error: 132.3744
    Epoch 14/100
    4800/4800 [==============================] - 51s 11ms/step - loss: 132.8033 - mean_absolute_error: 132.8033 - val_loss: 132.3744 - val_mean_absolute_error: 132.3744
    Epoch 15/100
    4800/4800 [==============================] - 51s 11ms/step - loss: 132.8033 - mean_absolute_error: 132.8033 - val_loss: 132.3744 - val_mean_absolute_error: 132.3744
    Epoch 16/100
    4800/4800 [==============================] - 51s 11ms/step - loss: 132.8033 - mean_absolute_error: 132.8033 - val_loss: 132.3744 - val_mean_absolute_error: 132.3744
    Epoch 17/100
    4800/4800 [==============================] - 51s 11ms/step - loss: 132.8033 - mean_absolute_error: 132.8033 - val_loss: 132.3744 - val_mean_absolute_error: 132.3744
    Epoch 18/100
    4800/4800 [==============================] - 51s 11ms/step - loss: 132.8033 - mean_absolute_error: 132.8033 - val_loss: 132.3744 - val_mean_absolute_error: 132.3744
    Epoch 19/100
    4800/4800 [==============================] - 51s 11ms/step - loss: 132.8033 - mean_absolute_error: 132.8033 - val_loss: 132.3744 - val_mean_absolute_error: 132.3744
    Epoch 20/100
    4800/4800 [==============================] - 51s 11ms/step - loss: 132.8033 - mean_absolute_error: 132.8033 - val_loss: 132.3744 - val_mean_absolute_error: 132.3744
    Epoch 21/100
    4800/4800 [==============================] - 50s 10ms/step - loss: 132.8033 - mean_absolute_error: 132.8033 - val_loss: 132.3744 - val_mean_absolute_error: 132.3744

1 Answer:

Answer 0 (score: 3)

Please don't use relu indiscriminately! It has a constant zero region with no gradient, so getting stuck is perfectly normal.

  • The worst mistake is having relu in the last layer.
    • If you want outputs from 0 to infinity, use relu (or better, 'softplus', which does not get stuck).
    • If you want outputs between 0 and 1, use 'sigmoid'.
    • If you want outputs between -1 and +1, use 'tanh'.
  • Your learning rates are huge. With relu you need small learning rates:
    • Start with 0.00001 and go down from there.
  • Try other activations that don't get stuck.
  • Try adding batch normalization before the activations, so there is always something above zero (see the sketch after this list):
    • This also allows you to use larger learning rates.
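For concreteness, here is a minimal sketch of how these suggestions could be applied to the model from the question: batch normalization before each activation (shown for the first block; the remaining blocks would repeat the same pattern), no relu on the output layer, and a small learning rate. The linear output assumes the 9 targets are unbounded; per the list above, substitute 'sigmoid' or 'tanh' for bounded targets.

from keras.models import Sequential
from keras.layers import (Conv2D, MaxPooling2D, Flatten, Dropout,
                          Dense, BatchNormalization, Activation)
from keras.optimizers import Adam

model = Sequential()

# Batch normalization inserted before the activation, as suggested above.
model.add(Conv2D(64, 3, padding='same', input_shape=(180, 320, 3)))
model.add(BatchNormalization())
model.add(Activation('relu'))
model.add(Conv2D(64, 3, padding='same'))
model.add(BatchNormalization())
model.add(Activation('relu'))
model.add(MaxPooling2D(pool_size=(2, 2), strides=2))
# ... the remaining convolutional blocks would repeat this pattern ...

model.add(Flatten())
model.add(Dropout(0.5))
model.add(Dense(1024))
model.add(BatchNormalization())
model.add(Activation('relu'))
model.add(Dropout(0.5))

# No relu on the output layer: linear activation for unbounded real targets
# ('sigmoid' for [0, 1], 'tanh' for [-1, +1]).
model.add(Dense(9))

# A small learning rate, per the advice above; larger rates may work
# once batch normalization is in place.
model.compile(optimizer=Adam(lr=0.00001), loss='mae', metrics=['mae'])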