I am building a convolutional network to predict 3 classes of images: cats, dogs and people. I trained it over and over, but when I run a prediction on a cat image it always gives the wrong output. I have tried other cat pictures and the result does not change. There is no problem with people or dogs, only with cats.
# Keras imports used throughout
from keras.models import Sequential
from keras.layers import Conv2D, MaxPooling2D, Dropout, Flatten, Dense
from keras.callbacks import ModelCheckpoint
from keras.preprocessing.image import ImageDataGenerator
from keras.preprocessing import image
import numpy as np

cnn = Sequential()
#------------------- Convolution and pooling layers
cnn.add(Conv2D(32, (3, 3), input_shape = (64, 64, 3), activation = 'relu'))
cnn.add(Dropout(0.5))
cnn.add(MaxPooling2D(pool_size = (2, 2)))
cnn.add(Conv2D(32, (3, 3), activation = 'relu'))
cnn.add(Dropout(0.5))
cnn.add(MaxPooling2D(pool_size = (2, 2)))
cnn.add(Conv2D(64, (3, 3), activation = 'relu'))
cnn.add(MaxPooling2D(pool_size = (2, 2)))
cnn.add(Conv2D(64, (3, 3), activation = 'relu'))
cnn.add(Dropout(0.5))
cnn.add(MaxPooling2D(pool_size = (2, 2)))
# Fully connected layers
cnn.add(Flatten())
cnn.add(Dense(units = 128, activation = 'relu'))
cnn.add(Dense(units = 4, activation = 'softmax'))
# Compiling the CNN
cnn.compile(optimizer = OPTIMIZER, loss = 'categorical_crossentropy', metrics = ['accuracy'])
filepath="LPT-{epoch:02d}-{loss:.4f}.h5"
checkpoint = ModelCheckpoint(filepath, monitor='loss', verbose=1, save_best_only=True, mode='min')
callbacks_list = [checkpoint]
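(For reference, the new_model used in the prediction code further down is presumably one of these saved checkpoints loaded back; a minimal sketch, with a hypothetical filename:)

from keras.models import load_model

# Reload the best checkpoint saved by ModelCheckpoint above
# (hypothetical filename; use whichever .h5 file was actually written).
new_model = load_model('LPT-40-0.6410.h5')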
12,000 training images and 3,000 test images.
train_datagen = ImageDataGenerator(rescale = 1./255,
                                   shear_range = 0.2,
                                   zoom_range = 0.2,
                                   horizontal_flip = True)
test_datagen = ImageDataGenerator(rescale = 1./255)
training_set = train_datagen.flow_from_directory('data/train',
                                                 target_size = tgt_size,
                                                 batch_size = batch_size,
                                                 class_mode = 'categorical')
test_set = test_datagen.flow_from_directory('data/test',
                                            target_size = tgt_size,
                                            batch_size = batch_size,
                                            class_mode = 'categorical')
cnn.fit_generator(training_set,
                  #steps_per_epoch = 12000,
                  steps_per_epoch = nb_train_samples // batch_size,
                  epochs = EPOCHS,
                  verbose = VERBOSE,
                  validation_data = test_set,
                  validation_steps = nb_validation_samples // batch_size,
                  callbacks = callbacks_list)
My best training result:
loss: 0.6410 - acc: 0.7289 - val_loss: 0.6308 - val_acc: 0.7293
Class indices:
{'.ipynb_checkpoints': 0, 'cats': 1, 'dogs': 2, 'person': 3}
(I could not delete that .ipynb_checkpoints folder.)
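If the hidden folder sits directly inside data/train (and data/test), it can be removed programmatically and the generators re-created so that only the three real classes are indexed; a minimal sketch, assuming those paths, using the shutil.rmtree() approach mentioned in the update below:

import os, shutil

# Delete the hidden Jupyter checkpoint folders that flow_from_directory
# picked up as an extra class (the paths are assumptions).
for d in ['data/train/.ipynb_checkpoints', 'data/test/.ipynb_checkpoints']:
    if os.path.isdir(d):
        shutil.rmtree(d)

# Re-create the generator and verify that only the 3 real classes remain.
training_set = train_datagen.flow_from_directory('data/train',
                                                 target_size = tgt_size,
                                                 batch_size = batch_size,
                                                 class_mode = 'categorical')
print(training_set.class_indices)  # expected: {'cats': 0, 'dogs': 1, 'person': 2}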
Prediction:
pred1 = 'single_prediction/ct.jpg'
pred2 = 'single_prediction/ps.jpg'
pred3 = 'data/single_prediction/dg.jpg'
test_img = image.load_img(pred1, target_size = tgt_size)
test_img = image.img_to_array(test_img)
test_img = np.expand_dims(test_img, axis = 0)
pred = new_model.predict(test_img)
print(pred)
if pred[0][1] == 1:
    print('It is a cat!')
elif pred[0][2] == 1:
    print('It is a dog!')
elif pred[0][3] == 1:
    print('It is a Person!')
And the output for the cat image:
[[0.000000e+00 0.000000e+00 8.265931e-34 1.000000e+00]]
What I have already tried: changing the number of layers (adding and removing them), increasing the epochs, reducing the batch size... I have also tried using np.argmax(). Can anyone shed some light on this?
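Since the softmax outputs will rarely be exactly 1, a more robust lookup is to take np.argmax over the prediction and map the index back through the generator's class_indices; a minimal sketch, assuming training_set is still in scope:

import numpy as np

pred = new_model.predict(test_img)
# Invert the class_indices mapping so an index can be turned back into a label.
idx_to_class = {v: k for k, v in training_set.class_indices.items()}
print('Predicted class:', idx_to_class[int(np.argmax(pred[0]))])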
UPDATE: I deleted the hidden Jupyter notebook folder with shutil.rmtree(), trained for about 40 epochs until it stopped improving, and finally rescaled the prediction image so it is preprocessed correctly:
test_img = image.img_to_array(test_img)/255
Thanks for all the help!
Answer 0 (score: 1)
The problem is the .ipynb_checkpoints folder. It is a hidden folder, and you need to delete it first. Then change your output Dense layer to 3 units (one per class). Change this
cnn.add(Dense(units = 4, activation = 'softmax'))
to
cnn.add(Dense(units = 3, activation = 'softmax'))