Keras CNN model predicts only one class for all test images

Date: 2019-02-20 16:02:46

Tags: python keras conv-neural-network

I am trying to build an image classification model with 2 classes: with (1) or without (0). I can build the model and it reaches an accuracy of 1. This is too good to be true (which is a problem in itself), but when I use predict_generator on my saved test images it returns only class 0 (the "without" class). There seems to be a problem, but I can't work it out; I have read a lot of articles and still can't fix it.

image_shape = (220, 525, 3) #height, width, channels
img_width = 96
img_height = 96
channels = 3

epochs = 10

no_train_images = 11957              #!ls ../data/train/* | wc -l
no_test_images = 652                 #!ls ../data/test/* | wc -l
no_valid_images = 6156               #!ls ../data/valid/* | wc -l

train_dir = '../data/train/'
test_dir = '../data/test/'
valid_dir = '../data/valid/'


The test folder structure is the following:
test/test_folder/images_from_both_classes.jpg


#!ls ../data/train/without/ | wc -l 5606        #there's no class imbalance
#!ls ../data/train/with/ | wc -l 6351

#!ls ../data/valid/without/ | wc -l 2899
#!ls ../data/valid/with/ | wc -l 3257

# Imports assumed for the snippets below
from keras.models import Sequential
from keras.layers import Conv2D, MaxPooling2D, Activation, Dropout, Flatten, Dense
from keras.preprocessing.image import ImageDataGenerator
import numpy as np
import pandas as pd

classification_model = Sequential()

# First 2D convolution layer: 32 filters, 3x3 kernel
classification_model.add(Conv2D(32, (3, 3), input_shape=(img_width, img_height, channels)))
# Activation Function = ReLu increases the non-linearity
classification_model.add(Activation('relu'))
# Max-Pooling layer with the size of the grid 2x2
classification_model.add(MaxPooling2D(pool_size=(2, 2)))
# Randomly disconnects some nodes between this layer and the next
classification_model.add(Dropout(0.2))

classification_model.add(Conv2D(32, (3, 3)))
classification_model.add(Activation('relu'))
classification_model.add(MaxPooling2D(pool_size=(2, 2)))
classification_model.add(Dropout(0.2))

classification_model.add(Conv2D(64, (3, 3)))
classification_model.add(Activation('relu'))
classification_model.add(MaxPooling2D(pool_size=(2, 2)))
classification_model.add(Dropout(0.25))

classification_model.add(Conv2D(64, (3, 3)))
classification_model.add(Activation('relu'))
classification_model.add(MaxPooling2D(pool_size=(2, 2)))
classification_model.add(Dropout(0.3))

classification_model.add(Flatten())
classification_model.add(Dense(64))
classification_model.add(Activation('relu'))
classification_model.add(Dropout(0.5))
classification_model.add(Dense(1))
classification_model.add(Activation('sigmoid'))

# Using binary_crossentropy as we only have 2 classes
classification_model.compile(loss='binary_crossentropy', optimizer='rmsprop', metrics=['accuracy'])



batch_size = 32

# this is the augmentation configuration we will use for training
train_datagen = ImageDataGenerator(
    rescale=1. / 255,
    zoom_range=0.2)

# this is the augmentation configuration we will use for testing:
# only rescaling
valid_datagen = ImageDataGenerator(rescale=1. / 255)
test_datagen = ImageDataGenerator()

train_generator = train_datagen.flow_from_directory(
    train_dir,
    target_size = (img_width, img_height),
    batch_size = batch_size,
    class_mode = 'binary',
    shuffle = True)

valid_generator = valid_datagen.flow_from_directory(
    valid_dir,
    target_size = (img_width, img_height),
    batch_size = batch_size,
    class_mode = 'binary',
    shuffle = False)

test_generator = test_datagen.flow_from_directory(
    test_dir,
    target_size = (img_width, img_height),
    batch_size = 1,
    class_mode = None,
    shuffle = False)

mpd = classification_model.fit_generator(
    train_generator,
    steps_per_epoch = no_train_images // batch_size,         # number of batches per epoch
    epochs = epochs,                                         # number of iterations over the entire data
    validation_data = valid_generator,
    validation_steps = no_valid_images // batch_size)  

Epoch 1/10 373/373 [==============================] - 119s 320ms/step - loss: 0.5214 - acc: 0.7357 - val_loss: 0.2720 - val_acc: 0.8758

Epoch 2/10 373/373 [==============================] - 120s 322ms/step - loss: 0.2485 - acc: 0.8935 - val_loss: 0.0568 - val_acc: 0.9829

Epoch 3/10 373/373 [==============================] - 130s 350ms/step - loss: 0.1427 - acc: 0.9435 - val_loss: 0.0410 - val_acc: 0.9796

Epoch 4/10 373/373 [==============================] - 127s 341ms/step - loss: 0.1053 - acc: 0.9623 - val_loss: 0.0197 - val_acc: 0.9971

Epoch 5/10 373/373 [==============================] - 126s 337ms/step - loss: 0.0817 - acc: 0.9682 - val_loss: 0.0136 - val_acc: 0.9948

Epoch 6/10 373/373 [==============================] - 123s 329ms/step - loss: 0.0665 - acc: 0.9754 - val_loss: 0.0116 - val_acc: 0.9985

Epoch 7/10 373/373 [==============================] - 140s 376ms/step - loss: 0.0518 - acc: 0.9817 - val_loss: 0.0035 - val_acc: 0.9997

Epoch 8/10 373/373 [==============================] - 144s 386ms/step - loss: 0.0539 - acc: 0.9832 - val_loss: 8.9459e-04 - val_acc: 1.0000

Epoch 9/10 373/373 [==============================] - 122s 327ms/step - loss: 0.0434 - acc: 0.9850 - val_loss: 0.0023 - val_acc: 0.9997

Epoch 10/10 373/373 [==============================] - 125s 336ms/step - loss: 0.0513 - acc: 0.9844 - val_loss: 0.0014 - val_acc: 1.0000

valid_generator.batch_size=1
score = classification_model.evaluate_generator(valid_generator, 
                                                no_test_images/batch_size, pickle_safe=False)
test_generator.reset()
scores=classification_model.predict_generator(test_generator, len(test_generator))

print("Loss: ", score[0], "Accuracy: ", score[1])

predicted_class_indices=np.argmax(scores,axis=1)
print(predicted_class_indices)

labels = (train_generator.class_indices)
labelss = dict((v,k) for k,v in labels.items())
predictions = [labelss[k] for k in predicted_class_indices]

filenames=test_generator.filenames
results=pd.DataFrame({"Filename":filenames,
                      "Predictions":predictions})

print(results)

Loss: 5.404246180551993e-06 Accuracy: 1.0

print(predicted_class_indices) returns all zeros:

[0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0  0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0  0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0  0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0  0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0  0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0  0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0  0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0  0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0  0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0  0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0  0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0  0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0  0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0  0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0  0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0  0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0  0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0]

                              Filename Predictions
0      test_folder/video_3_frame10.jpg   without
1    test_folder/video_3_frame1001.jpg   without
2    test_folder/video_3_frame1006.jpg   without
3    test_folder/video_3_frame1008.jpg   without
4    test_folder/video_3_frame1009.jpg   without
5    test_folder/video_3_frame1010.jpg   without
6    test_folder/video_3_frame1013.jpg   without
7    test_folder/video_3_frame1014.jpg   without
8    test_folder/video_3_frame1022.jpg   without
9    test_folder/video_3_frame1023.jpg   without
10    test_folder/video_3_frame103.jpg   without
11   test_folder/video_3_frame1036.jpg   without
12   test_folder/video_3_frame1039.jpg   without
13    test_folder/video_3_frame104.jpg   without
14   test_folder/video_3_frame1042.jpg   without
15   test_folder/video_3_frame1043.jpg   without
16   test_folder/video_3_frame1048.jpg   without
17    test_folder/video_3_frame105.jpg   without
18   test_folder/video_3_frame1051.jpg   without
19   test_folder/video_3_frame1052.jpg   without
20   test_folder/video_3_frame1054.jpg   without
21   test_folder/video_3_frame1055.jpg   without
22   test_folder/video_3_frame1057.jpg   without
23   test_folder/video_3_frame1059.jpg   without
24   test_folder/video_3_frame1060.jpg   without

...this is just part of the output, but all 650+ images are classified as "without".

This is the output; as you can see, every prediction is 0, the "without" class.

This is my first time using Keras and CNNs, so any help would be greatly appreciated.

UPDATE

I solved the problem. I am currently working on improving the accuracy, but the main issue is now fixed.

This is the line that caused the problem:

predicted_class_indices=np.argmax(scores,axis=1)

argmax returns the index position of the maximum value in each row of the results, but because I used a binary setup, my final Dense layer had only 1 unit. It returns a single value per image, so argmax always returns the first (and only) index position, 0. The network was simply set up to return one score per image.
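The effect is easy to reproduce with plain NumPy (a minimal sketch with made-up scores, independent of Keras): with a single sigmoid output, the scores array has shape `(n, 1)`, so argmax over axis 1 can only ever return index 0.

```python
import numpy as np

# With a single sigmoid unit, predict_generator returns one column per image.
scores_1col = np.array([[0.1], [0.9], [0.7]])   # shape (3, 1)
print(np.argmax(scores_1col, axis=1))           # [0 0 0] -- there is only one column

# With two softmax units, each row holds one score per class,
# so argmax picks the index of the higher-scoring class.
scores_2col = np.array([[0.9, 0.1], [0.2, 0.8], [0.3, 0.7]])
print(np.argmax(scores_2col, axis=1))           # [0 1 1]

# An alternative fix: keep the single sigmoid output and threshold it at 0.5.
print((scores_1col > 0.5).astype(int).ravel())  # [0 1 1]
```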

Changing the following solved my problem:

  1. Changed class_mode to 'categorical' for the train and test generators.
  2. Changed the final Dense layer from 1 unit to 2, so it returns a score/probability for both classes. Then when you use argmax, it returns the index position of the top score, indicating which class it predicted.

2 Answers:

Answer 0 (score: 0):

You should change this line:

test_datagen = ImageDataGenerator()

to:

test_datagen = ImageDataGenerator(rescale=1. / 255)

If you do not preprocess the test set the same way as the train/validation sets, you will not get the expected results.
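A quick NumPy sketch of why the mismatch matters (hypothetical pixel values; `rescale` simply multiplies every pixel by the given factor):

```python
import numpy as np

pixels = np.array([[0.0, 128.0, 255.0]])  # hypothetical raw 8-bit pixel values

train_input = pixels * (1. / 255)  # what the network saw during training: values in [0, 1]
test_input = pixels                # what predict_generator fed it: values in [0, 255]

print(train_input.max())  # close to 1.0
print(test_input.max())   # 255.0 -- far outside the distribution the network was trained on
```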

Answer 1 (score: 0):

Give it more time by trying more epochs (e.g. 50). Also change the learning rate (divide it by 10 each time you try) and the other regularization parameters.
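One way to organize that search (a minimal sketch; the specific values are illustrative, not prescribed by the answer) is to enumerate the combinations up front, dividing the learning rate by 10 each trial as suggested:

```python
from itertools import product

epochs_options = [10, 30, 50]
lr_options = [1e-3, 1e-4, 1e-5]   # divide by 10 each trial
dropout_options = [0.3, 0.5]      # example regularization values to vary

# Every (epochs, learning rate, dropout) combination to train and compare
# by validation loss.
trials = list(product(epochs_options, lr_options, dropout_options))
print(len(trials))  # 18 configurations
```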