What's wrong with my metrics?

Date: 2018-12-07 10:18:03

Tags: python machine-learning neural-network computer-vision conv-neural-network

I'm trying a simple CNN classification of road pictures (one-way / two-way). My dataset consists of ~4k images of class 1 and ~4k images of class 2, so the classes are balanced, and each class is stored in its own folder.

But the metrics keep "jumping" around. I've tried different input_shape sizes, different optimizers ('adam', 'rmsprop'), and batch sizes (10, 16, 20), and I get the same results... Does anyone know what causes this behavior?

Code:

from keras.models import Sequential
from keras.layers import Conv2D, MaxPooling2D, Activation, Flatten, Dense, Dropout
from keras.preprocessing.image import ImageDataGenerator

model = Sequential()
model.add(Conv2D(32, (3, 3), input_shape=(300, 300,3)))
model.add(Conv2D(32, (3, 3)))
model.add(Activation('relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))

model.add(Conv2D(64, (3, 3)))
model.add(Conv2D(64, (3, 3)))
model.add(Activation('relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))

model.add(Conv2D(128, (3, 3)))
model.add(Activation('relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))

model.add(Flatten())  
model.add(Dense(64))
model.add(Activation('relu'))
model.add(Dropout(0.5))
model.add(Dense(1))
model.add(Activation('sigmoid'))

model.compile(loss='binary_crossentropy',
              optimizer='rmsprop',
              metrics=['accuracy'])

batch_size = 10
# NOTE: featurewise_std_normalization has no effect unless you first call
# train_datagen.fit(sample_data) on a representative sample of images.
train_datagen = ImageDataGenerator(
    featurewise_std_normalization=True,
    rotation_range=40,
    width_shift_range=0.2,
    height_shift_range=0.2,
    rescale=1./255,
    shear_range=0.2,
    zoom_range=0.2,
    horizontal_flip=True,
    fill_mode='nearest')


test_datagen = ImageDataGenerator(featurewise_std_normalization=True,
                                  rescale=1./255)


train_generator = train_datagen.flow_from_directory(
    'data/train',  
    target_size=(300, 300),  
    batch_size=batch_size,
    class_mode='binary')


validation_generator = test_datagen.flow_from_directory(
    'data/validation',
    target_size=(300, 300),
    batch_size=batch_size,
    class_mode='binary')

model.fit_generator(
    train_generator,
    steps_per_epoch=2000 // batch_size,
    epochs=50,
    validation_data=validation_generator,
    validation_steps=800 // batch_size)

When I run this code, I get the following results:

Epoch 1/50
125/125 [==============================] - 253s 2s/step - loss: 0.8142 - acc: 0.5450 - val_loss: 0.4937 - val_acc: 0.8662
Epoch 2/50
125/125 [==============================] - 254s 2s/step - loss: 0.6748 - acc: 0.5980 - val_loss: 0.5782 - val_acc: 0.7859
Epoch 3/50
125/125 [==============================] - 255s 2s/step - loss: 0.6679 - acc: 0.6580 - val_loss: 0.5068 - val_acc: 0.8562
Epoch 4/50
125/125 [==============================] - 255s 2s/step - loss: 0.6438 - acc: 0.6780 - val_loss: 0.5018 - val_acc: 0.8766
Epoch 5/50
125/125 [==============================] - 257s 2s/step - loss: 0.6427 - acc: 0.7245 - val_loss: 0.3760 - val_acc: 0.9213
Epoch 6/50
125/125 [==============================] - 256s 2s/step - loss: 0.5635 - acc: 0.7435 - val_loss: 0.6140 - val_acc: 0.6398
Epoch 7/50
125/125 [==============================] - 254s 2s/step - loss: 0.6226 - acc: 0.7320 - val_loss: 0.1852 - val_acc: 0.9433
Epoch 8/50
125/125 [==============================] - 252s 2s/step - loss: 0.4858 - acc: 0.7765 - val_loss: 0.1617 - val_acc: 0.9437
Epoch 9/50
125/125 [==============================] - 253s 2s/step - loss: 0.4433 - acc: 0.8120 - val_loss: 0.5577 - val_acc: 0.6788
Epoch 10/50
125/125 [==============================] - 252s 2s/step - loss: 0.4621 - acc: 0.7935 - val_loss: 0.1000 - val_acc: 0.9762
Epoch 11/50
125/125 [==============================] - 254s 2s/step - loss: 0.4572 - acc: 0.8035 - val_loss: 0.3797 - val_acc: 0.8161
Epoch 12/50
125/125 [==============================] - 257s 2s/step - loss: 0.4707 - acc: 0.8105 - val_loss: 0.0903 - val_acc: 0.9761
Epoch 13/50
125/125 [==============================] - 254s 2s/step - loss: 0.4134 - acc: 0.8390 - val_loss: 0.1587 - val_acc: 0.9437
Epoch 14/50
125/125 [==============================] - 252s 2s/step - loss: 0.4023 - acc: 0.8355 - val_loss: 0.1149 - val_acc: 0.9584
Epoch 15/50
125/125 [==============================] - 253s 2s/step - loss: 0.4286 - acc: 0.8255 - val_loss: 0.0897 - val_acc: 0.9700
Epoch 16/50
125/125 [==============================] - 253s 2s/step - loss: 0.4665 - acc: 0.8140 - val_loss: 0.6411 - val_acc: 0.8136
Epoch 17/50
125/125 [==============================] - 252s 2s/step - loss: 0.4010 - acc: 0.8315 - val_loss: 0.1205 - val_acc: 0.9736
Epoch 18/50
125/125 [==============================] - 253s 2s/step - loss: 0.3790 - acc: 0.8550 - val_loss: 0.0993 - val_acc: 0.9613
Epoch 19/50
125/125 [==============================] - 251s 2s/step - loss: 0.3717 - acc: 0.8620 - val_loss: 0.1154 - val_acc: 0.9748
Epoch 20/50
125/125 [==============================] - 250s 2s/step - loss: 0.4434 - acc: 0.8405 - val_loss: 0.1251 - val_acc: 0.9537
Epoch 21/50
125/125 [==============================] - 253s 2s/step - loss: 0.4535 - acc: 0.7545 - val_loss: 0.6766 - val_acc: 0.3640
Epoch 22/50
125/125 [==============================] - 252s 2s/step - loss: 0.7482 - acc: 0.7140 - val_loss: 0.4803 - val_acc: 0.7950
Epoch 23/50
125/125 [==============================] - 252s 2s/step - loss: 0.3712 - acc: 0.8585 - val_loss: 0.1056 - val_acc: 0.9685
Epoch 24/50
125/125 [==============================] - 251s 2s/step - loss: 0.3836 - acc: 0.8545 - val_loss: 0.1267 - val_acc: 0.9673
Epoch 25/50
125/125 [==============================] - 250s 2s/step - loss: 0.3879 - acc: 0.8805 - val_loss: 0.8669 - val_acc: 0.8100
Epoch 26/50
125/125 [==============================] - 250s 2s/step - loss: 0.3735 - acc: 0.8825 - val_loss: 0.1472 - val_acc: 0.9685
Epoch 27/50
125/125 [==============================] - 250s 2s/step - loss: 0.4577 - acc: 0.8620 - val_loss: 0.3285 - val_acc: 0.8925
Epoch 28/50
125/125 [==============================] - 252s 2s/step - loss: 0.3805 - acc: 0.8875 - val_loss: 0.3930 - val_acc: 0.7821
Epoch 29/50
125/125 [==============================] - 250s 2s/step - loss: 0.3565 - acc: 0.8930 - val_loss: 0.1087 - val_acc: 0.9647
Epoch 30/50
125/125 [==============================] - 250s 2s/step - loss: 0.4680 - acc: 0.8845 - val_loss: 0.1012 - val_acc: 0.9688
Epoch 31/50
125/125 [==============================] - 250s 2s/step - loss: 0.3293 - acc: 0.9080 - val_loss: 0.0700 - val_acc: 0.9811
Epoch 32/50
125/125 [==============================] - 250s 2s/step - loss: 0.4197 - acc: 0.9060 - val_loss: 0.1464 - val_acc: 0.9700
Epoch 33/50
125/125 [==============================] - 251s 2s/step - loss: 0.3656 - acc: 0.9005 - val_loss: 8.8236 - val_acc: 0.4307
Epoch 34/50
125/125 [==============================] - 249s 2s/step - loss: 0.4593 - acc: 0.9015 - val_loss: 4.3916 - val_acc: 0.6826
Epoch 35/50
125/125 [==============================] - 250s 2s/step - loss: 0.4824 - acc: 0.8605 - val_loss: 0.0748 - val_acc: 0.9850
Epoch 36/50
125/125 [==============================] - 250s 2s/step - loss: 0.4629 - acc: 0.8875 - val_loss: 0.2257 - val_acc: 0.8728
Epoch 37/50
125/125 [==============================] - 250s 2s/step - loss: 0.3708 - acc: 0.9075 - val_loss: 0.1196 - val_acc: 0.9537
Epoch 38/50
125/125 [==============================] - 250s 2s/step - loss: 0.9151 - acc: 0.8605 - val_loss: 0.1266 - val_acc: 0.9559
Epoch 39/50
125/125 [==============================] - 250s 2s/step - loss: 0.3700 - acc: 0.9035 - val_loss: 0.1038 - val_acc: 0.9812
Epoch 40/50
125/125 [==============================] - 249s 2s/step - loss: 0.5900 - acc: 0.8625 - val_loss: 0.0838 - val_acc: 0.9887
Epoch 41/50
125/125 [==============================] - 250s 2s/step - loss: 0.4409 - acc: 0.9065 - val_loss: 0.0828 - val_acc: 0.9773
Epoch 42/50
125/125 [==============================] - 250s 2s/step - loss: 0.3415 - acc: 0.9115 - val_loss: 0.8084 - val_acc: 0.8788
Epoch 43/50
125/125 [==============================] - 250s 2s/step - loss: 0.5181 - acc: 0.8440 - val_loss: 0.0998 - val_acc: 0.9786
Epoch 44/50
125/125 [==============================] - 249s 2s/step - loss: 0.3270 - acc: 0.8970 - val_loss: 0.1155 - val_acc: 0.9625
Epoch 45/50
125/125 [==============================] - 250s 2s/step - loss: 0.3810 - acc: 0.9125 - val_loss: 0.2881 - val_acc: 0.9484
Epoch 46/50
125/125 [==============================] - 249s 2s/step - loss: 0.3499 - acc: 0.9220 - val_loss: 0.3109 - val_acc: 0.8564
Epoch 47/50
125/125 [==============================] - 250s 2s/step - loss: 0.3505 - acc: 0.9160 - val_loss: 0.0861 - val_acc: 0.9788
Epoch 48/50
125/125 [==============================] - 250s 2s/step - loss: 0.3073 - acc: 0.9225 - val_loss: 0.0999 - val_acc: 0.9874
Epoch 49/50
125/125 [==============================] - 250s 2s/step - loss: 0.4418 - acc: 0.9000 - val_loss: 0.0301 - val_acc: 0.9925
Epoch 50/50
125/125 [==============================] - 250s 2s/step - loss: 0.3501 - acc: 0.9190 - val_loss: 0.0351 - val_acc: 0.9861

Is it overfitting? Or some randomness in the parameter initialization affecting my loss? I'll try to find other pictures to build a new validation dataset...

1 answer:

Answer 0 (score: 0)

"each class is stored in its own folder"

So do you mean one class is in the "train" folder and the other class is in the "validation" folder?

Try setting the batch size to 32, and split your data into training and validation sets at a ratio of 0.8 to 0.2.
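A minimal sketch of that 0.8/0.2 split using only the standard library, assuming the images sit in per-class subfolders under a single source directory (the `data/all` layout below is hypothetical; adjust the paths to your setup):

```python
import os
import random
import shutil

def split_dataset(src_root, dst_root, val_fraction=0.2, seed=42):
    """Copy files from src_root/<class>/ into dst_root/train/<class>/ and
    dst_root/validation/<class>/, holding out val_fraction per class."""
    rng = random.Random(seed)  # fixed seed so the split is reproducible
    for cls in sorted(os.listdir(src_root)):
        files = sorted(os.listdir(os.path.join(src_root, cls)))
        rng.shuffle(files)
        n_val = int(len(files) * val_fraction)
        splits = {"validation": files[:n_val], "train": files[n_val:]}
        for split, names in splits.items():
            out_dir = os.path.join(dst_root, split, cls)
            os.makedirs(out_dir, exist_ok=True)
            for name in names:
                shutil.copy(os.path.join(src_root, cls, name),
                            os.path.join(out_dir, name))

# split_dataset("data/all", "data")  # then point the generators at data/train and data/validation
```

Splitting per class (rather than over the pooled file list) keeps both splits balanced, which matters when you read accuracy as the metric.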

Edit

I found a link you might find useful:
https://stats.stackexchange.com/questions/187335/validation-error-less-than-training-error

Edit

Try to get more samples.
If it's difficult to get more samples,
try creating/modifying new ones from the existing examples.