Improving the efficiency of an object detection model in Keras

Asked: 2019-02-26 15:05:08

Tags: python-3.x keras object-detection

The model I am using:

# Imports are assumed (they were not shown in the original post); these match
# the standalone Keras 2.x API the rest of the code appears to use.
from keras.models import Sequential
from keras.layers import Conv2D, MaxPooling2D, Dropout, Flatten, Dense
from keras.constraints import maxnorm
from keras.optimizers import Adam, SGD
from keras import backend as K

# The input shape (3, 56, 56) is channels-first. Without this setting, Keras'
# default channels-last format would treat the images as 3 pixels tall, and the
# spatial size would collapse to zero after two pooling layers.
K.set_image_data_format('channels_first')

num_classes = 20
INIT_LR = 1e-3

model = Sequential()
# Four conv blocks (conv-dropout-conv-pool), doubling the filter count each block.
model.add(Conv2D(32, (3, 3), input_shape=(3, 56, 56), activation='relu', padding='same'))
model.add(Dropout(0.2))
model.add(Conv2D(32, (3, 3), activation='relu', padding='same'))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Conv2D(64, (3, 3), activation='relu', padding='same'))
model.add(Dropout(0.2))
model.add(Conv2D(64, (3, 3), activation='relu', padding='same'))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Conv2D(128, (3, 3), activation='relu', padding='same'))
model.add(Dropout(0.2))
model.add(Conv2D(128, (3, 3), activation='relu', padding='same'))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Conv2D(256, (3, 3), activation='relu', padding='same'))
model.add(Dropout(0.2))
model.add(Conv2D(256, (3, 3), activation='relu', padding='same'))
model.add(MaxPooling2D(pool_size=(2, 2)))
# Classifier head: two max-norm-constrained dense layers, then a 20-way softmax.
model.add(Flatten())
model.add(Dropout(0.2))
model.add(Dense(1024, activation='relu', kernel_constraint=maxnorm(3)))
model.add(Dropout(0.2))
model.add(Dense(512, activation='relu', kernel_constraint=maxnorm(3)))
model.add(Dropout(0.2))
model.add(Dense(num_classes, activation='softmax'))

epochs = 40
lrate = 0.01
decay = lrate / epochs
opt = Adam(lr=INIT_LR, decay=INIT_LR / epochs)
# Note: this SGD optimizer is built but never used; the model compiles with Adam.
sgd = SGD(lr=lrate, momentum=0.9, decay=decay, nesterov=False)
model.compile(loss='categorical_crossentropy', optimizer=opt, metrics=['accuracy'])
model.summary()  # summary() prints itself; wrapping it in print() only adds "None"
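
The fit call itself is not shown in the post. A minimal sketch consistent with the log below, where 4014 of 40138 samples corresponds to a 10% validation split; X_train and y_train are assumed names, filled with placeholder data here:

import numpy as np

# Placeholder data standing in for the poster's real dataset; the names
# X_train / y_train are assumptions, not from the original post.
X_train = np.random.rand(40138, 3, 56, 56).astype('float32')
y_train = np.eye(num_classes)[np.random.randint(0, num_classes, 40138)]

model.fit(X_train, y_train,
          validation_split=0.1,  # 40138 samples -> 36124 train / 4014 validation
          epochs=epochs,
          batch_size=32)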

The accuracy I am getting:

Train on 36124 samples, validate on 4014 samples
Epoch 1/40
36124/36124 [==============================] - 2161s 60ms/step - loss: 2.1642 - acc: 0.4387 - val_loss: 1.8971 - val_acc: 0.4584
Epoch 2/40
36124/36124 [==============================] - 2185s 60ms/step - loss: 1.8403 - acc: 0.4813 - val_loss: 1.6874 - val_acc: 0.4983
Epoch 3/40
36124/36124 [==============================] - 3774s 104ms/step - loss: 1.6476 - acc: 0.5231 - val_loss: 1.5375 - val_acc: 0.5451
Epoch 4/40
36124/36124 [==============================] - 2194s 61ms/step - loss: 1.5143 - acc: 0.5572 - val_loss: 1.4662 - val_acc: 0.5688
Epoch 5/40
36124/36124 [==============================] - 2079s 58ms/step - loss: 1.4169 - acc: 0.5792 - val_loss: 1.3685 - val_acc: 0.5952
Epoch 6/40
36124/36124 [==============================] - 2203s 61ms/step - loss: 1.3441 - acc: 0.6011 - val_loss: 1.4403 - val_acc: 0.5850
Epoch 7/40
36124/36124 [==============================] - 2212s 61ms/step - loss: 1.2922 - acc: 0.6140 - val_loss: 1.2964 - val_acc: 0.6168
Epoch 8/40
36124/36124 [==============================] - 2179s 60ms/step - loss: 1.2490 - acc: 0.6254 - val_loss: 1.2622 - val_acc: 0.6243
Epoch 9/40
36124/36124 [==============================] - 2169s 60ms/step - loss: 1.2033 - acc: 0.6377 - val_loss: 1.2622 - val_acc: 0.6206
Epoch 10/40
36124/36124 [==============================] - 2171s 60ms/step - loss: 1.1762 - acc: 0.6460 - val_loss: 1.3887 - val_acc: 0.6001
Epoch 11/40
36124/36124 [==============================] - 2168s 60ms/step - loss: 1.1313 - acc: 0.6577 - val_loss: 1.1599 - val_acc: 0.6452
Epoch 12/40
36124/36124 [==============================] - 2168s 60ms/step - loss: 1.1002 - acc: 0.6658 - val_loss: 1.2067 - val_acc: 0.6390
Epoch 13/40
36124/36124 [==============================] - 2170s 60ms/step - loss: 1.0932 - acc: 0.6676 - val_loss: 1.2386 - val_acc: 0.6335
Epoch 14/40
36124/36124 [==============================] - 2169s 60ms/step - loss: 1.0518 - acc: 0.6768 - val_loss: 1.1448 - val_acc: 0.6490
Epoch 15/40
36124/36124 [==============================] - 2168s 60ms/step - loss: 1.0342 - acc: 0.6832 - val_loss: 1.1420 - val_acc: 0.6522
Epoch 16/40
36124/36124 [==============================] - 2170s 60ms/step - loss: 1.0104 - acc: 0.6894 - val_loss: 1.2271 - val_acc: 0.6385
Epoch 17/40
36124/36124 [==============================] - 2168s 60ms/step - loss: 0.9855 - acc: 0.6964 - val_loss: 1.1793 - val_acc: 0.6517
Epoch 18/40
36124/36124 [==============================] - 2184s 60ms/step - loss: 0.9635 - acc: 0.7029 - val_loss: 1.1647 - val_acc: 0.6574
Epoch 19/40
36124/36124 [==============================] - 2074s 57ms/step - loss: 0.9517 - acc: 0.7071 - val_loss: 1.1118 - val_acc: 0.6639
Epoch 20/40
36124/36124 [==============================] - 2063s 57ms/step - loss: 0.9276 - acc: 0.7144 - val_loss: 1.1187 - val_acc: 0.6662
Epoch 21/40
36124/36124 [==============================] - 2104s 58ms/step - loss: 0.9111 - acc: 0.7202 - val_loss: 1.1444 - val_acc: 0.6637
Epoch 22/40
36124/36124 [==============================] - 2156s 60ms/step - loss: 0.8872 - acc: 0.7231 - val_loss: 1.1062 - val_acc: 0.6684
Epoch 23/40
36124/36124 [==============================] - 2181s 60ms/step - loss: 0.8716 - acc: 0.7279 - val_loss: 1.1912 - val_acc: 0.6540
Epoch 24/40
36124/36124 [==============================] - 2100s 58ms/step - loss: 0.8596 - acc: 0.7336 - val_loss: 1.1339 - val_acc: 0.6664
Epoch 25/40
36124/36124 [==============================] - 3357s 93ms/step - loss: 0.8412 - acc: 0.7380 - val_loss: 1.1295 - val_acc: 0.6627
Epoch 26/40
36124/36124 [==============================] - 2170s 60ms/step - loss: 0.8104 - acc: 0.7475 - val_loss: 1.1511 - val_acc: 0.6572
Epoch 27/40
36124/36124 [==============================] - 2131s 59ms/step - loss: 0.8091 - acc: 0.7468 - val_loss: 1.1501 - val_acc: 0.6679
Epoch 28/40
36124/36124 [==============================] - 2107s 58ms/step - loss: 0.7791 - acc: 0.7569 - val_loss: 1.1579 - val_acc: 0.6637
Epoch 29/40
36124/36124 [==============================] - 2247s 62ms/step - loss: 0.7665 - acc: 0.7598 - val_loss: 1.1310 - val_acc: 0.6724
Epoch 30/40
36124/36124 [==============================] - 2019s 56ms/step - loss: 0.7575 - acc: 0.7615 - val_loss: 1.1065 - val_acc: 0.6766
Epoch 31/40
36124/36124 [==============================] - 2098s 58ms/step - loss: 0.7344 - acc: 0.7705 - val_loss: 1.1025 - val_acc: 0.6751
Epoch 32/40
36124/36124 [==============================] - 2170s 60ms/step - loss: 0.7246 - acc: 0.7726 - val_loss: 1.1563 - val_acc: 0.6694
Epoch 33/40
36124/36124 [==============================] - 4057s 112ms/step - loss: 0.7133 - acc: 0.7777 - val_loss: 1.1328 - val_acc: 0.6714
Epoch 34/40
36124/36124 [==============================] - 2177s 60ms/step - loss: 0.6873 - acc: 0.7832 - val_loss: 1.1047 - val_acc: 0.6886
Epoch 35/40
36124/36124 [==============================] - 2175s 60ms/step - loss: 0.6816 - acc: 0.7860 - val_loss: 1.1477 - val_acc: 0.6662
Epoch 36/40
36124/36124 [==============================] - 2177s 60ms/step - loss: 0.6684 - acc: 0.7885 - val_loss: 1.1006 - val_acc: 0.6886
Epoch 37/40
36124/36124 [==============================] - 2179s 60ms/step - loss: 0.6622 - acc: 0.7951 - val_loss: 1.1352 - val_acc: 0.6814
Epoch 38/40
36124/36124 [==============================] - 2177s 60ms/step - loss: 0.6393 - acc: 0.7976 - val_loss: 1.1688 - val_acc: 0.6707
Epoch 39/40
36124/36124 [==============================] - 2137s 59ms/step - loss: 0.6263 - acc: 0.8018 - val_loss: 1.1279 - val_acc: 0.6896
Epoch 40/40
 8160/36124 [=====>........................] - ETA: 26:35 - loss: 0.5668 - acc: 0.8205

Can anyone suggest a way to improve the model? I have tried increasing the number of layers and the number of epochs, but the accuracy I get stays around 65% to 68%.

1 Answer:

Answer 0 (score: 0)

In Ng's course, he says:

If you have **high bias**, you must:

  • build a bigger NN (a sketch follows this list), or
  • train longer, or
  • change the architecture
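
For instance, "build a bigger NN" could mean adding a fifth conv block and widening the dense head of the question's model. A minimal, untested sketch under that assumption (same channels-first 56x56 input as the question; the layer sizes are illustrative, not tuned):

from keras.models import Sequential
from keras.layers import Conv2D, MaxPooling2D, Dropout, Flatten, Dense
from keras import backend as K

K.set_image_data_format('channels_first')  # matches the question's (3, 56, 56) input

bigger = Sequential()
bigger.add(Conv2D(32, (3, 3), input_shape=(3, 56, 56), activation='relu', padding='same'))
bigger.add(Dropout(0.2))
bigger.add(Conv2D(32, (3, 3), activation='relu', padding='same'))
bigger.add(MaxPooling2D(pool_size=(2, 2)))
# Same conv-dropout-conv-pool pattern as the question, with a fifth block added;
# the 56x56 input shrinks 28 -> 14 -> 7 -> 3 -> 1 through the pooling layers.
for filters in [64, 128, 256, 512]:
    bigger.add(Conv2D(filters, (3, 3), activation='relu', padding='same'))
    bigger.add(Dropout(0.2))
    bigger.add(Conv2D(filters, (3, 3), activation='relu', padding='same'))
    bigger.add(MaxPooling2D(pool_size=(2, 2)))
bigger.add(Flatten())
bigger.add(Dense(2048, activation='relu'))  # wider head than the original 1024
bigger.add(Dropout(0.5))
bigger.add(Dense(20, activation='softmax'))
bigger.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])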

If you have **high variance**, then:

  • collect more data (a sketch follows this list)
  • regularize your NN
  • change the architecture
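
The question's log hints at this case: training accuracy reaches ~0.80 while validation accuracy stalls near 0.69. A minimal sketch of the first two options, using image augmentation as a stand-in for "collect more data" and L2 weight decay as one way to regularize (X_train / y_train are assumed names, as above):

from keras.preprocessing.image import ImageDataGenerator
from keras.regularizers import l2
from keras.layers import Dense

# "Collect more data" by synthesizing it: random rotations, shifts, and flips.
# data_format matches the question's channels-first (3, 56, 56) input.
datagen = ImageDataGenerator(rotation_range=15,
                             width_shift_range=0.1,
                             height_shift_range=0.1,
                             horizontal_flip=True,
                             data_format='channels_first')

# "Regularize your NN": an L2-penalized drop-in for the question's Dense(1024, ...).
regularized_dense = Dense(1024, activation='relu', kernel_regularizer=l2(1e-4))

# Training against the augmented stream would then look like
# (X_val / y_val being a held-out split):
# model.fit_generator(datagen.flow(X_train, y_train, batch_size=64),
#                     steps_per_epoch=len(X_train) // 64,
#                     epochs=40,
#                     validation_data=(X_val, y_val))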