Keras prediction accuracy does not match training accuracy

Asked: 2020-06-09 21:26:13

Tags: python tensorflow opencv machine-learning keras

I'm using some of my spare time to pick up Python and Keras. I created an image set containing 4,050 images of class A (clover) and 2,358 images of class B (grass). There may be more classes later, so I did not use the binary class_mode.

For each class, the images are organized into subfolders, and I randomly split them into 70% training and 30% test data using the corresponding folder structure. The training and test data have not been normalized.

I trained the model and saved the results. The training accuracy I get is around 90%. Now, when I try to predict single images (which is the intended use case), the average prediction accuracy is about 64%, which is very close to the overall share of class-A images (4,050 / (4,050 + 2,358) ≈ 63%). For this test I used random images from the actual dataset, but the same poor results show up with genuinely new data. Judging by the predictions, it outputs mostly class A and only occasionally class B. Why is that? I can't figure out what's wrong. Could you take a look?
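That 64% figure can be checked against a simple baseline: a degenerate classifier that always predicts the majority class would score about the same. A quick check in plain Python, using the counts from the question:

```python
# Image counts from the question: class A (clover) and class B (grass)
n_clover = 4050
n_grass = 2358
total = n_clover + n_grass

# Accuracy of a classifier that always predicts "clover"
majority_baseline = n_clover / total
print(round(majority_baseline * 100, 1))  # ~63.2%
```

The closeness of the observed 64% to this baseline suggests the model's predictions are collapsing toward the majority class at inference time.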

So, the model is built here:

epochs = 50
IMG_HEIGHT = 50
IMG_WIDTH = 50

train_image_generator = ImageDataGenerator(
                    rescale=1./255,
                    rotation_range=45,
                    width_shift_range=.15,
                    height_shift_range=.15,
                    horizontal_flip=True,
                    zoom_range=0.1)


validation_image_generator = ImageDataGenerator(rescale=1./255)
train_path = os.path.join(global_dir,"Train")
validate_path = os.path.join(global_dir,"Validate")

train_data_gen = train_image_generator.flow_from_directory(directory=train_path,
                                                               shuffle=True,
                                                               target_size=(IMG_HEIGHT, IMG_WIDTH),
                                                               class_mode='categorical')
val_data_gen = validation_image_generator.flow_from_directory(directory=validate_path,
                                                               shuffle=True,
                                                               target_size=(IMG_HEIGHT, IMG_WIDTH),
                                                               class_mode='categorical')


model = Sequential([
        Conv2D(16, 3, padding='same', activation='relu',
               input_shape=(IMG_HEIGHT, IMG_WIDTH, 3)),
        MaxPooling2D(),
        Conv2D(32, 3, padding='same', activation='relu'),
        MaxPooling2D(),
        Dropout(0.2),
        Conv2D(64, 3, padding='same', activation='relu'),
        MaxPooling2D(),
        Dropout(0.2),
        Flatten(),
        Dense(512, activation='relu'),
        Dense(64, activation='relu'),
        Dense(2, activation='softmax')
    ])

model.compile(optimizer='adam',
              loss=keras.losses.categorical_crossentropy,
              metrics=['accuracy'])

model.summary()

history = model.fit(
    train_data_gen,
    batch_size=200,
    epochs=epochs,
    validation_data=val_data_gen
)

model.save(global_dir + "/Model/1")

The training results look like this:

Model: "sequential"
_________________________________________________________________
Layer (type)                 Output Shape              Param #   
=================================================================
conv2d (Conv2D)              (None, 50, 50, 16)        448       
_________________________________________________________________
max_pooling2d (MaxPooling2D) (None, 25, 25, 16)        0         
_________________________________________________________________
conv2d_1 (Conv2D)            (None, 25, 25, 32)        4640      
_________________________________________________________________
max_pooling2d_1 (MaxPooling2 (None, 12, 12, 32)        0         
_________________________________________________________________
dropout (Dropout)            (None, 12, 12, 32)        0         
_________________________________________________________________
conv2d_2 (Conv2D)            (None, 12, 12, 64)        18496     
_________________________________________________________________
max_pooling2d_2 (MaxPooling2 (None, 6, 6, 64)          0         
_________________________________________________________________
dropout_1 (Dropout)          (None, 6, 6, 64)          0         
_________________________________________________________________
flatten (Flatten)            (None, 2304)              0         
_________________________________________________________________
dense (Dense)                (None, 512)               1180160   
_________________________________________________________________
dense_1 (Dense)              (None, 64)                32832     
_________________________________________________________________
dense_2 (Dense)              (None, 2)                 130       
=================================================================
Total params: 1,236,706
Trainable params: 1,236,706
Non-trainable params: 0
_________________________________________________________________
Epoch 1/50
141/141 [==============================] - 14s 102ms/step - loss: 0.6216 - accuracy: 0.6468 - val_loss: 0.5396 - val_accuracy: 0.7120
Epoch 2/50
141/141 [==============================] - 12s 86ms/step - loss: 0.5129 - accuracy: 0.7488 - val_loss: 0.4427 - val_accuracy: 0.8056
Epoch 3/50
141/141 [==============================] - 12s 86ms/step - loss: 0.4917 - accuracy: 0.7624 - val_loss: 0.5004 - val_accuracy: 0.7705
Epoch 4/50
141/141 [==============================] - 15s 104ms/step - loss: 0.4510 - accuracy: 0.7910 - val_loss: 0.4226 - val_accuracy: 0.8198
Epoch 5/50
141/141 [==============================] - 12s 85ms/step - loss: 0.4056 - accuracy: 0.8219 - val_loss: 0.3439 - val_accuracy: 0.8514
Epoch 6/50
141/141 [==============================] - 12s 84ms/step - loss: 0.3904 - accuracy: 0.8295 - val_loss: 0.3207 - val_accuracy: 0.8646
Epoch 7/50
141/141 [==============================] - 12s 85ms/step - loss: 0.3764 - accuracy: 0.8304 - val_loss: 0.3185 - val_accuracy: 0.8702
Epoch 8/50
141/141 [==============================] - 12s 87ms/step - loss: 0.3695 - accuracy: 0.8362 - val_loss: 0.2958 - val_accuracy: 0.8743
Epoch 9/50
141/141 [==============================] - 12s 84ms/step - loss: 0.3455 - accuracy: 0.8574 - val_loss: 0.3096 - val_accuracy: 0.8687
Epoch 10/50
141/141 [==============================] - 12s 84ms/step - loss: 0.3483 - accuracy: 0.8473 - val_loss: 0.3552 - val_accuracy: 0.8412
Epoch 11/50
141/141 [==============================] - 12s 84ms/step - loss: 0.3362 - accuracy: 0.8616 - val_loss: 0.3004 - val_accuracy: 0.8804
Epoch 12/50
141/141 [==============================] - 12s 85ms/step - loss: 0.3277 - accuracy: 0.8616 - val_loss: 0.2974 - val_accuracy: 0.8733
Epoch 13/50
141/141 [==============================] - 12s 85ms/step - loss: 0.3243 - accuracy: 0.8589 - val_loss: 0.2732 - val_accuracy: 0.8931
Epoch 14/50
141/141 [==============================] - 12s 84ms/step - loss: 0.3324 - accuracy: 0.8563 - val_loss: 0.2568 - val_accuracy: 0.8941
Epoch 15/50
141/141 [==============================] - 12s 84ms/step - loss: 0.3071 - accuracy: 0.8701 - val_loss: 0.2706 - val_accuracy: 0.8911
Epoch 16/50
141/141 [==============================] - 12s 84ms/step - loss: 0.3114 - accuracy: 0.8696 - val_loss: 0.2503 - val_accuracy: 0.9059
Epoch 17/50
141/141 [==============================] - 12s 85ms/step - loss: 0.2978 - accuracy: 0.8794 - val_loss: 0.2853 - val_accuracy: 0.8896
Epoch 18/50
141/141 [==============================] - 12s 85ms/step - loss: 0.3029 - accuracy: 0.8725 - val_loss: 0.2458 - val_accuracy: 0.9033
Epoch 19/50
141/141 [==============================] - 12s 84ms/step - loss: 0.2988 - accuracy: 0.8721 - val_loss: 0.2713 - val_accuracy: 0.8916
Epoch 20/50
141/141 [==============================] - 12s 88ms/step - loss: 0.2960 - accuracy: 0.8747 - val_loss: 0.2649 - val_accuracy: 0.8926
Epoch 21/50
141/141 [==============================] - 13s 92ms/step - loss: 0.2901 - accuracy: 0.8819 - val_loss: 0.2611 - val_accuracy: 0.8957
Epoch 22/50
141/141 [==============================] - 12s 89ms/step - loss: 0.2879 - accuracy: 0.8821 - val_loss: 0.2497 - val_accuracy: 0.8947
Epoch 23/50
141/141 [==============================] - 12s 88ms/step - loss: 0.2831 - accuracy: 0.8817 - val_loss: 0.2396 - val_accuracy: 0.9069
Epoch 24/50
141/141 [==============================] - 12s 89ms/step - loss: 0.2856 - accuracy: 0.8799 - val_loss: 0.2386 - val_accuracy: 0.9059
Epoch 25/50
141/141 [==============================] - 12s 87ms/step - loss: 0.2834 - accuracy: 0.8817 - val_loss: 0.2472 - val_accuracy: 0.9048
Epoch 26/50
141/141 [==============================] - 12s 88ms/step - loss: 0.3038 - accuracy: 0.8768 - val_loss: 0.2792 - val_accuracy: 0.8835
Epoch 27/50
141/141 [==============================] - 13s 91ms/step - loss: 0.2786 - accuracy: 0.8854 - val_loss: 0.2326 - val_accuracy: 0.9079
Epoch 28/50
141/141 [==============================] - 12s 86ms/step - loss: 0.2692 - accuracy: 0.8846 - val_loss: 0.2325 - val_accuracy: 0.9115
Epoch 29/50
141/141 [==============================] - 12s 88ms/step - loss: 0.2770 - accuracy: 0.8841 - val_loss: 0.2507 - val_accuracy: 0.8972
Epoch 30/50
141/141 [==============================] - 13s 92ms/step - loss: 0.2751 - accuracy: 0.8886 - val_loss: 0.2329 - val_accuracy: 0.9104
Epoch 31/50
141/141 [==============================] - 12s 88ms/step - loss: 0.2902 - accuracy: 0.8785 - val_loss: 0.2901 - val_accuracy: 0.8758
Epoch 32/50
141/141 [==============================] - 13s 94ms/step - loss: 0.2665 - accuracy: 0.8915 - val_loss: 0.2314 - val_accuracy: 0.9089
Epoch 33/50
141/141 [==============================] - 13s 91ms/step - loss: 0.2797 - accuracy: 0.8805 - val_loss: 0.2708 - val_accuracy: 0.8921
Epoch 34/50
141/141 [==============================] - 13s 90ms/step - loss: 0.2895 - accuracy: 0.8799 - val_loss: 0.2332 - val_accuracy: 0.9140
Epoch 35/50
141/141 [==============================] - 13s 93ms/step - loss: 0.2696 - accuracy: 0.8857 - val_loss: 0.2512 - val_accuracy: 0.8972
Epoch 36/50
141/141 [==============================] - 13s 90ms/step - loss: 0.2641 - accuracy: 0.8868 - val_loss: 0.2304 - val_accuracy: 0.9104
Epoch 37/50
141/141 [==============================] - 13s 94ms/step - loss: 0.2675 - accuracy: 0.8895 - val_loss: 0.2706 - val_accuracy: 0.8830
Epoch 38/50
141/141 [==============================] - 12s 88ms/step - loss: 0.2699 - accuracy: 0.8839 - val_loss: 0.2285 - val_accuracy: 0.9053
Epoch 39/50
141/141 [==============================] - 12s 87ms/step - loss: 0.2577 - accuracy: 0.8917 - val_loss: 0.2469 - val_accuracy: 0.9043
Epoch 40/50
141/141 [==============================] - 12s 87ms/step - loss: 0.2547 - accuracy: 0.8948 - val_loss: 0.2205 - val_accuracy: 0.9074
Epoch 41/50
141/141 [==============================] - 12s 86ms/step - loss: 0.2553 - accuracy: 0.8930 - val_loss: 0.2494 - val_accuracy: 0.9038
Epoch 42/50
141/141 [==============================] - 14s 97ms/step - loss: 0.2705 - accuracy: 0.8883 - val_loss: 0.2263 - val_accuracy: 0.9109
Epoch 43/50
141/141 [==============================] - 12s 88ms/step - loss: 0.2521 - accuracy: 0.8926 - val_loss: 0.2319 - val_accuracy: 0.9084
Epoch 44/50
141/141 [==============================] - 12s 84ms/step - loss: 0.2694 - accuracy: 0.8850 - val_loss: 0.2199 - val_accuracy: 0.9109
Epoch 45/50
141/141 [==============================] - 12s 83ms/step - loss: 0.2601 - accuracy: 0.8901 - val_loss: 0.2318 - val_accuracy: 0.9079
Epoch 46/50
141/141 [==============================] - 12s 83ms/step - loss: 0.2535 - accuracy: 0.8917 - val_loss: 0.2342 - val_accuracy: 0.9089
Epoch 47/50
141/141 [==============================] - 12s 84ms/step - loss: 0.2584 - accuracy: 0.8897 - val_loss: 0.2238 - val_accuracy: 0.9089
Epoch 48/50
141/141 [==============================] - 12s 83ms/step - loss: 0.2580 - accuracy: 0.8944 - val_loss: 0.2219 - val_accuracy: 0.9120
Epoch 49/50
141/141 [==============================] - 12s 83ms/step - loss: 0.2514 - accuracy: 0.8895 - val_loss: 0.2225 - val_accuracy: 0.9150
Epoch 50/50
141/141 [==============================] - 12s 83ms/step - loss: 0.2483 - accuracy: 0.8977 - val_loss: 0.2370 - val_accuracy: 0.9084

The training history looks like this: [Training history plot]

The prediction is done with the following code:

model = tf.keras.models.load_model(global_dir + "/Model/1")

image = cv.resize(image, (50, 50))
image = image.astype('float32') / 255

image = np.expand_dims(image, axis=0)

predictions = model.predict(image)
top = np.array(tf.argmax(predictions, 1))

result = top[0]
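The preprocessing in the snippet above can be reproduced with plain NumPy to confirm what the model actually receives; a minimal sketch using a random stand-in array instead of a real image (no model or OpenCV needed, mirroring the 50×50 size and /255 scaling from the code):

```python
import numpy as np

# Stand-in for a 50x50 BGR image, as cv.resize would return it
image = np.random.randint(0, 256, size=(50, 50, 3), dtype=np.uint8)

# Same preprocessing as in the prediction code
image = image.astype('float32') / 255
image = np.expand_dims(image, axis=0)

print(image.shape)  # (1, 50, 50, 3) -- a batch containing one image
```

Note that nothing in this pipeline touches the channel order, which becomes relevant in the accepted answer below.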

This function collects all input images with their class labels (0, 1), then shuffles the array. After that, I iterate over the array, predict each image, and compare the result with the actual class.

def test_model():
    dir_good = os.fsencode(global_dir + "/Contours/Clover")
    dir_bad = os.fsencode(global_dir + "/Contours/Grass")
    test = []
    for file2 in os.listdir(dir_good):
        filename2 = os.fsdecode(file2)
        if (filename2.endswith(".jpg")):
            test.append([0,os.path.join(global_dir + "/Contours/Clover", filename2)])
    for file2 in os.listdir(dir_bad):
        filename2 = os.fsdecode(file2)
        if (filename2.endswith(".jpg")):
            test.append([1,os.path.join(global_dir + "/Contours/Grass", filename2)])

    random.shuffle(test)
    count = 0
    right = 0
    for i in range(0,len(test)):
        tmp = cv.imread(test[i][1])
        result = predict_image(tmp) #<--- this function is already quoted above
        count += 1
        right += (1 if result == test[i][0] else 0)
        print(str(test[i][0]) + "->" + str(result),count,right,round(right/count*100,1))
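When a model collapses toward the majority class, an overall accuracy number like the one printed above hides the problem; tracking accuracy per class makes it obvious. A small sketch (the pair list here is toy data standing in for the real label/prediction pairs from the loop above):

```python
from collections import Counter

def per_class_accuracy(pairs):
    """pairs: list of (true_label, predicted_label) tuples."""
    totals, correct = Counter(), Counter()
    for true, pred in pairs:
        totals[true] += 1
        if true == pred:
            correct[true] += 1
    return {label: correct[label] / totals[label] for label in totals}

# Toy example: the model predicts class 0 for everything
pairs = [(0, 0)] * 63 + [(1, 0)] * 37
print(per_class_accuracy(pairs))  # {0: 1.0, 1: 0.0} despite 63% overall
```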

Thanks in advance! Cheers

2 Answers:

Answer 0 (score: 1)

As mentioned in our conversation, you are loading images with cv2.imread, which reads the color channels in BGR order. The Keras data generators internally load images in RGB order. You have to reverse the channels before inference:

tmp = tmp[...,::-1]
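The `[..., ::-1]` slice reverses the last (channel) axis, turning BGR into RGB in one step; a quick NumPy check with distinct per-channel values:

```python
import numpy as np

# A 1x1 "image" with distinct channel values: B=10, G=20, R=30
bgr = np.array([[[10, 20, 30]]], dtype=np.uint8)

rgb = bgr[..., ::-1]  # reverse channel order: now R=30, G=20, B=10
print(rgb[0, 0].tolist())  # [30, 20, 10]
```

An equivalent alternative in OpenCV itself is `cv.cvtColor(tmp, cv.COLOR_BGR2RGB)`, which returns a contiguous copy rather than a reversed view.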

Answer 1 (score: 0)

Well, it looks like you have an overfitting problem. You can diagnose it by plotting the loss on the training and validation batches after training the model.

import matplotlib.pyplot as plt

plt.plot(history.history['loss'], label='train loss')
plt.plot(history.history['val_loss'], label='val loss')
plt.legend()
plt.show()

A few things could fix it, but that depends on the diagnosis above. See the excellent answer linked here on the topic.
