My convolutional network's loss does not change and stays stalled throughout training. How do I fix this?

Asked: 2019-04-15 22:25:38

Tags: python-3.x tensorflow keras conv-neural-network relu

I am trying to train a convolutional network, but no matter what I do, the loss does not change. I would like to know where I went wrong, and I would welcome any friendly advice, since this is my first time working with data this large.

I have tried many combinations of optimizers (Adam, SGD, Adadelta, ...), loss functions (mean squared error, binary cross-entropy, ...), and activations (ReLU, ELU, SELU, ...), but the problem persists.

The nature of my project: it is my attempt to train a simple self-driving car in a simulation.

Training data: the training data is split across roughly 4,000 .h5 files. Each file holds exactly 200 images, and each image has its own associated data such as speed, acceleration, and so on.

Given the nature of the data, I decided to train in mini-batches of 200 and loop through all the files.

# model (I am a beginner so forgive my sloppy code)
from keras.layers import Input, Conv2D, Flatten, Dense, concatenate
from keras.models import Model

# image branch: five stacked convolutions over the 88x200 RGB frames
rgb_in = Input(batch_shape=(200, 88, 200, 3), name='rgb_in')
conv_1 = Conv2D(filters=10, kernel_size=5, activation="elu", data_format="channels_last", kernel_initializer="he_normal")(rgb_in)
conv_2 = Conv2D(filters=16, kernel_size=5, activation="elu", data_format="channels_last", kernel_initializer="he_normal")(conv_1)
conv_3 = Conv2D(filters=24, kernel_size=5, activation="elu", data_format="channels_last", kernel_initializer="he_normal")(conv_2)
conv_4 = Conv2D(filters=32, kernel_size=3, activation="elu", data_format="channels_last", kernel_initializer="he_normal")(conv_3)
conv_5 = Conv2D(filters=32, kernel_size=3, activation="elu", data_format="channels_last", kernel_initializer="he_normal")(conv_4)
flat = Flatten(data_format="channels_last")(conv_5)

# telemetry branch: 14 scalar features concatenated with the flattened image features
t_in = Input(batch_shape=(200, 14), name='t_in')
x = concatenate([flat, t_in])
dense_1 = Dense(100, activation="elu", kernel_initializer="he_normal")(x)
dense_2 = Dense(50, activation="elu", kernel_initializer="he_normal")(dense_1)
dense_3 = Dense(25, activation="elu", kernel_initializer="he_normal")(dense_2)
out = Dense(5, activation="elu", kernel_initializer="he_normal")(dense_3)
model = Model(inputs=[rgb_in, t_in], outputs=[out])
model.compile(optimizer='Adadelta', loss='binary_crossentropy')



import h5py
import numpy as np

# buffers for the network's inputs and targets (not shown in the original
# post; shapes inferred from the model's t_in and out layers)
input_target = np.zeros((200, 14))
output = np.zeros((200, 5))

for i in range(3663, 6951):
    filename = 'data_0' + str(i) + '.h5'
    f = h5py.File(filename, 'r')
    rgb = f["rgb"][:, :, :, :]
    targets = f["targets"][:, :]
    # standardize each file's images with that file's own mean and std
    rgb = (rgb - rgb.mean()) / rgb.std()
    # telemetry features: targets columns 10-13, 16-18, 21-23
    input_target[:, 0] = targets[:, 10]
    input_target[:, 1] = targets[:, 11]
    input_target[:, 2] = targets[:, 12]
    input_target[:, 3] = targets[:, 13]
    input_target[:, 4] = targets[:, 16]
    input_target[:, 5] = targets[:, 17]
    input_target[:, 6] = targets[:, 18]
    input_target[:, 7] = targets[:, 21]
    input_target[:, 8] = targets[:, 22]
    input_target[:, 9] = targets[:, 23]
    # column 24 is a categorical command, one-hot encoded to 6 classes
    a = one_hot(targets[:, 24].astype(int), 6)
    input_target[:, 10] = a[:, 2]
    input_target[:, 11] = a[:, 3]
    input_target[:, 12] = a[:, 4]
    input_target[:, 13] = a[:, 5]
    # regression targets: columns 0-2, 4, 5
    output[:, 0] = targets[:, 0]
    output[:, 1] = targets[:, 1]
    output[:, 2] = targets[:, 2]
    output[:, 3] = targets[:, 4]
    output[:, 4] = targets[:, 5]
    model.fit([rgb, input_target], output, epochs=10, batch_size=200)
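
The loop above calls a one_hot helper that is not defined in the post; here is a minimal NumPy sketch of what it presumably does (the name and behavior are inferred from how it is used, not taken from the original code):

import numpy as np

def one_hot(labels, num_classes):
    # hypothetical reconstruction: map an integer label array to one-hot rows
    encoded = np.zeros((labels.shape[0], num_classes))
    encoded[np.arange(labels.shape[0]), labels] = 1
    return encoded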

Results:

Epoch 1/10
200/200 [==============================] - 7s 35ms/step - loss: 6.1657
Epoch 2/10
200/200 [==============================] - 0s 2ms/step - loss: 2.3812
Epoch 3/10
200/200 [==============================] - 0s 2ms/step - loss: 2.2955
Epoch 4/10
200/200 [==============================] - 0s 2ms/step - loss: 2.2778
Epoch 5/10
200/200 [==============================] - 0s 2ms/step - loss: 2.2778
Epoch 6/10
200/200 [==============================] - 0s 2ms/step - loss: 2.2778
Epoch 7/10
200/200 [==============================] - 0s 2ms/step - loss: 2.2778
Epoch 8/10
200/200 [==============================] - 0s 2ms/step - loss: 2.2778
Epoch 9/10
200/200 [==============================] - 0s 2ms/step - loss: 2.2778
Epoch 10/10
200/200 [==============================] - 0s 2ms/step - loss: 2.2778
Epoch 1/10
200/200 [==============================] - 0s 2ms/step - loss: 1.9241
Epoch 2/10
200/200 [==============================] - 0s 2ms/step - loss: 1.9241
Epoch 3/10
200/200 [==============================] - 0s 2ms/step - loss: 1.9241
Epoch 4/10
200/200 [==============================] - 0s 2ms/step - loss: 1.9241
Epoch 5/10
200/200 [==============================] - 0s 2ms/step - loss: 1.9241
Epoch 6/10
200/200 [==============================] - 0s 2ms/step - loss: 1.9241
Epoch 7/10
200/200 [==============================] - 0s 2ms/step - loss: 1.9241
Epoch 8/10
200/200 [==============================] - 0s 2ms/step - loss: 1.9241
Epoch 9/10
200/200 [==============================] - 0s 2ms/step - loss: 1.9241
Epoch 10/10
200/200 [==============================] - 0s 2ms/step - loss: 1.9241

Finally, I would be grateful for any suggestions about the project.

2 Answers:

Answer 0 (score: 0):

How about using the ReduceLROnPlateau callback?

from keras.callbacks import ReduceLROnPlateau

# cut the learning rate when the training loss stops improving
reduce_lr = ReduceLROnPlateau(monitor='loss', patience=6)

model.fit(X, y, epochs=666, callbacks=[reduce_lr])
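
With Keras's default factor=0.1, this multiplies the learning rate by 0.1 each time the monitored loss has failed to improve for patience epochs.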

Answer 1 (score: 0):

I used a cyclical learning rate and it solved the problem. For anyone who runs into a similar issue, here is a link:

https://github.com/bckenstler/CLR
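
For reference, a minimal sketch of how that repository's CyclicLR callback is wired in, following its README; the base_lr, max_lr, and step_size values here are illustrative, not tuned for this model:

from clr_callback import CyclicLR  # clr_callback.py from the linked repo

# cycle the learning rate between base_lr and max_lr every 2*step_size batches
clr = CyclicLR(base_lr=0.001, max_lr=0.006, step_size=2000., mode='triangular')

model.fit([rgb, input_target], output, epochs=10, batch_size=200, callbacks=[clr])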