I am currently building a CNN to classify CT images (grayscale, 1024 x 1024 pixels) into 2 different classes. I know my dataset is small: the training/validation set has 100 samples (50 "class 0" and 50 "class 1") and the test set has 27 samples (6 and 21, respectively). After many runs, this network configuration seemed to be the best solution:
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Conv2D, MaxPooling2D, BatchNormalization, Dropout, Flatten, Dense

model = Sequential()
model.add(Conv2D(32, (3,3), activation='relu', input_shape=(1024, 1024, 1)))
model.add(BatchNormalization())
model.add(MaxPooling2D(pool_size=(2,2)))
model.add(Dropout(0.7))
model.add(Conv2D(32, (3,3), activation='relu'))
model.add(BatchNormalization())
model.add(MaxPooling2D(pool_size=(2,2)))
model.add(Dropout(0.7))
model.add(Conv2D(32, (3,3), activation='relu'))
model.add(BatchNormalization())
model.add(MaxPooling2D(pool_size=(2,2)))
model.add(Flatten())
model.add(Dropout(0.7))
model.add(Dense(16, activation='relu'))
model.add(BatchNormalization())
model.add(Dense(8, activation='relu'))
model.add(BatchNormalization())
model.add(Dense(4, activation='relu'))
model.add(BatchNormalization())
model.add(Dense(2, activation='softmax'))
model.summary()
Below you can see the other settings:
from numpy.random import seed
from tensorflow import set_random_seed
from tensorflow.keras.callbacks import EarlyStopping, ReduceLROnPlateau

# compile model
model.compile(loss='categorical_crossentropy', optimizer='SGD', metrics=['accuracy'])
# callbacks
earlyStopping = EarlyStopping(min_delta=0.01, monitor='accuracy', patience=40, mode='max', restore_best_weights=True)
reduce_lr = ReduceLROnPlateau()
# fitting (seeds fixed for reproducibility)
seed(100)
set_random_seed(100)
model.fit(X_train, y_train, batch_size=10, validation_split=0.1, epochs=50, shuffle=True, callbacks=[reduce_lr, earlyStopping])
As shown below, the performance seemed good; in fact, the loss and validation loss kept decreasing as the epochs went by, while the accuracies kept increasing.
Epoch 1/50
90/90 [==============================] - 43s 478ms/step - loss: 0.8159 - accuracy: 0.4889 - val_loss: 0.6816 - val_accuracy: 0.6000
Epoch 2/50
90/90 [==============================] - 42s 470ms/step - loss: 0.7045 - accuracy: 0.6000 - val_loss: 0.5591 - val_accuracy: 1.0000
Epoch 3/50
90/90 [==============================] - 42s 467ms/step - loss: 0.6565 - accuracy: 0.6667 - val_loss: 0.4171 - val_accuracy: 1.0000
Epoch 4/50
90/90 [==============================] - 43s 474ms/step - loss: 0.6253 - accuracy: 0.6111 - val_loss: 0.0789 - val_accuracy: 1.0000
Epoch 5/50
90/90 [==============================] - 43s 474ms/step - loss: 0.6086 - accuracy: 0.6889 - val_loss: 0.0068 - val_accuracy: 1.0000
Epoch 6/50
90/90 [==============================] - 43s 474ms/step - loss: 0.6311 - accuracy: 0.6444 - val_loss: 0.0067 - val_accuracy: 1.0000
Epoch 7/50
90/90 [==============================] - 42s 468ms/step - loss: 0.5163 - accuracy: 0.7444 - val_loss: 5.7406e-04 - val_accuracy: 1.0000
Epoch 8/50
90/90 [==============================] - 43s 473ms/step - loss: 0.5752 - accuracy: 0.7222 - val_loss: 7.9747e-05 - val_accuracy: 1.0000
Epoch 9/50
90/90 [==============================] - 43s 475ms/step - loss: 0.5440 - accuracy: 0.7444 - val_loss: 1.9908e-06 - val_accuracy: 1.0000
Epoch 10/50
90/90 [==============================] - 42s 468ms/step - loss: 0.5481 - accuracy: 0.7444 - val_loss: 1.5497e-07 - val_accuracy: 1.0000
Epoch 11/50
90/90 [==============================] - 42s 471ms/step - loss: 0.5032 - accuracy: 0.7778 - val_loss: 0.0000e+00 - val_accuracy: 1.0000
Epoch 12/50
90/90 [==============================] - 43s 475ms/step - loss: 0.5036 - accuracy: 0.7889 - val_loss: 0.0000e+00 - val_accuracy: 1.0000
Epoch 13/50
90/90 [==============================] - 42s 471ms/step - loss: 0.4714 - accuracy: 0.8222 - val_loss: 0.0000e+00 - val_accuracy: 1.0000
Epoch 14/50
90/90 [==============================] - 42s 470ms/step - loss: 0.4353 - accuracy: 0.8667 - val_loss: 0.0000e+00 - val_accuracy: 1.0000
Epoch 15/50
90/90 [==============================] - 42s 467ms/step - loss: 0.4243 - accuracy: 0.8556 - val_loss: 0.0000e+00 - val_accuracy: 1.0000
Epoch 16/50
90/90 [==============================] - 42s 471ms/step - loss: 0.4046 - accuracy: 0.8444 - val_loss: 0.0000e+00 - val_accuracy: 1.0000
Epoch 17/50
90/90 [==============================] - 43s 479ms/step - loss: 0.4068 - accuracy: 0.8556 - val_loss: 0.0000e+00 - val_accuracy: 1.0000
Epoch 18/50
90/90 [==============================] - 43s 476ms/step - loss: 0.4336 - accuracy: 0.8111 - val_loss: 0.0000e+00 - val_accuracy: 1.0000
Epoch 19/50
90/90 [==============================] - 42s 470ms/step - loss: 0.3909 - accuracy: 0.8778 - val_loss: 0.0000e+00 - val_accuracy: 1.0000
Epoch 20/50
90/90 [==============================] - 42s 470ms/step - loss: 0.3343 - accuracy: 0.9111 - val_loss: 0.0000e+00 - val_accuracy: 1.0000
Epoch 21/50
90/90 [==============================] - 43s 475ms/step - loss: 0.4284 - accuracy: 0.8444 - val_loss: 0.0000e+00 - val_accuracy: 1.0000
Epoch 22/50
90/90 [==============================] - 42s 469ms/step - loss: 0.4517 - accuracy: 0.8111 - val_loss: 0.0000e+00 - val_accuracy: 1.0000
Epoch 23/50
90/90 [==============================] - 42s 465ms/step - loss: 0.3613 - accuracy: 0.9000 - val_loss: 0.0000e+00 - val_accuracy: 1.0000
Epoch 24/50
90/90 [==============================] - 43s 474ms/step - loss: 0.3489 - accuracy: 0.9111 - val_loss: 0.0000e+00 - val_accuracy: 1.0000
Epoch 25/50
90/90 [==============================] - 43s 474ms/step - loss: 0.3843 - accuracy: 0.8778 - val_loss: 0.0000e+00 - val_accuracy: 1.0000
Epoch 26/50
90/90 [==============================] - 42s 471ms/step - loss: 0.3456 - accuracy: 0.8889 - val_loss: 0.0000e+00 - val_accuracy: 1.0000
Epoch 27/50
90/90 [==============================] - 43s 474ms/step - loss: 0.3643 - accuracy: 0.8778 - val_loss: 0.0000e+00 - val_accuracy: 1.0000
Epoch 28/50
90/90 [==============================] - 43s 474ms/step - loss: 0.3856 - accuracy: 0.8889 - val_loss: 0.0000e+00 - val_accuracy: 1.0000
Epoch 29/50
90/90 [==============================] - 42s 471ms/step - loss: 0.3988 - accuracy: 0.8667 - val_loss: 0.0000e+00 - val_accuracy: 1.0000
Epoch 30/50
90/90 [==============================] - 43s 475ms/step - loss: 0.3948 - accuracy: 0.8778 - val_loss: 0.0000e+00 - val_accuracy: 1.0000
Epoch 31/50
90/90 [==============================] - 42s 470ms/step - loss: 0.4920 - accuracy: 0.7333 - val_loss: 0.0000e+00 - val_accuracy: 1.0000
Epoch 32/50
90/90 [==============================] - 42s 472ms/step - loss: 0.3573 - accuracy: 0.8889 - val_loss: 0.0000e+00 - val_accuracy: 1.0000
Epoch 33/50
90/90 [==============================] - 42s 465ms/step - loss: 0.4097 - accuracy: 0.8333 - val_loss: 0.0000e+00 - val_accuracy: 1.0000
Epoch 34/50
90/90 [==============================] - 42s 469ms/step - loss: 0.3378 - accuracy: 0.9111 - val_loss: 0.0000e+00 - val_accuracy: 1.0000
Epoch 35/50
90/90 [==============================] - 42s 472ms/step - loss: 0.3782 - accuracy: 0.8889 - val_loss: 0.0000e+00 - val_accuracy: 1.0000
Epoch 36/50
90/90 [==============================] - 43s 475ms/step - loss: 0.3891 - accuracy: 0.8333 - val_loss: 0.0000e+00 - val_accuracy: 1.0000
Epoch 37/50
90/90 [==============================] - 42s 470ms/step - loss: 0.3851 - accuracy: 0.8889 - val_loss: 0.0000e+00 - val_accuracy: 1.0000
Epoch 38/50
90/90 [==============================] - 42s 472ms/step - loss: 0.3569 - accuracy: 0.9000 - val_loss: 0.0000e+00 - val_accuracy: 1.0000
Epoch 39/50
90/90 [==============================] - 43s 474ms/step - loss: 0.3795 - accuracy: 0.8889 - val_loss: 0.0000e+00 - val_accuracy: 1.0000
Epoch 40/50
90/90 [==============================] - 42s 466ms/step - loss: 0.4399 - accuracy: 0.7667 - val_loss: 0.0000e+00 - val_accuracy: 1.0000
Epoch 41/50
90/90 [==============================] - 42s 466ms/step - loss: 0.4298 - accuracy: 0.8556 - val_loss: 0.0000e+00 - val_accuracy: 1.0000
Epoch 42/50
90/90 [==============================] - 43s 473ms/step - loss: 0.3553 - accuracy: 0.8778 - val_loss: 0.0000e+00 - val_accuracy: 1.0000
Epoch 43/50
90/90 [==============================] - 43s 474ms/step - loss: 0.3967 - accuracy: 0.8889 - val_loss: 0.0000e+00 - val_accuracy: 1.0000
Epoch 44/50
90/90 [==============================] - 43s 475ms/step - loss: 0.3783 - accuracy: 0.8556 - val_loss: 0.0000e+00 - val_accuracy: 1.0000
Epoch 45/50
90/90 [==============================] - 43s 474ms/step - loss: 0.3379 - accuracy: 0.9444 - val_loss: 0.0000e+00 - val_accuracy: 1.0000
Epoch 46/50
90/90 [==============================] - 43s 473ms/step - loss: 0.3679 - accuracy: 0.9000 - val_loss: 0.0000e+00 - val_accuracy: 1.0000
Epoch 47/50
90/90 [==============================] - 43s 475ms/step - loss: 0.3074 - accuracy: 0.9333 - val_loss: 0.0000e+00 - val_accuracy: 1.0000
Epoch 48/50
90/90 [==============================] - 43s 475ms/step - loss: 0.3210 - accuracy: 0.9333 - val_loss: 0.0000e+00 - val_accuracy: 1.0000
Epoch 49/50
90/90 [==============================] - 43s 476ms/step - loss: 0.4226 - accuracy: 0.8556 - val_loss: 0.0000e+00 - val_accuracy: 1.0000
Epoch 50/50
90/90 [==============================] - 43s 476ms/step - loss: 0.3811 - accuracy: 0.8889 - val_loss: 0.0000e+00 - val_accuracy: 1.0000
However, when I tried to predict the classes for both datasets (the training/validation set and the test set), I got an unexpected and surprising result: the CNN predicted only one class.
model.predict(X_train)
array([[0., 1.],
[0., 1.],
[0., 1.],
[0., 1.],
[0., 1.],
[0., 1.],
[0., 1.],
[0., 1.],
[0., 1.],
[0., 1.],
[0., 1.],
[0., 1.],
[0., 1.],
[0., 1.],
[0., 1.],
[0., 1.],
[0., 1.],
[0., 1.],
[0., 1.],
[0., 1.],
[0., 1.],
[0., 1.],
[0., 1.],
[0., 1.],
[0., 1.],
[0., 1.],
[0., 1.],
[0., 1.],
[0., 1.],
[0., 1.],
[0., 1.],
[0., 1.],
[0., 1.],
[0., 1.],
[0., 1.],
[0., 1.],
[0., 1.],
[0., 1.],
[0., 1.],
[0., 1.],
[0., 1.],
[0., 1.],
[0., 1.],
[0., 1.],
[0., 1.],
[0., 1.],
[0., 1.],
[0., 1.],
[0., 1.],
[0., 1.],
[0., 1.],
[0., 1.],
[0., 1.],
[0., 1.],
[0., 1.],
[0., 1.],
[0., 1.],
[0., 1.],
[0., 1.],
[0., 1.],
[0., 1.],
[0., 1.],
[0., 1.],
[0., 1.],
[0., 1.],
[0., 1.],
[0., 1.],
[0., 1.],
[0., 1.],
[0., 1.],
[0., 1.],
[0., 1.],
[0., 1.],
[0., 1.],
[0., 1.],
[0., 1.],
[0., 1.],
[0., 1.],
[0., 1.],
[0., 1.],
[0., 1.],
[0., 1.],
[0., 1.],
[0., 1.],
[0., 1.],
[0., 1.],
[0., 1.],
[0., 1.],
[0., 1.],
[0., 1.],
[0., 1.],
[0., 1.],
[0., 1.],
[0., 1.],
[0., 1.],
[0., 1.],
[0., 1.],
[0., 1.],
[0., 1.],
[0., 1.]], dtype=float32)
model.predict(X_test)
array([[0., 1.],
[0., 1.],
[0., 1.],
[0., 1.],
[0., 1.],
[0., 1.],
[0., 1.],
[0., 1.],
[0., 1.],
[0., 1.],
[0., 1.],
[0., 1.],
[0., 1.],
[0., 1.],
[0., 1.],
[0., 1.],
[0., 1.],
[0., 1.],
[0., 1.],
[0., 1.],
[0., 1.],
[0., 1.],
[0., 1.],
[0., 1.],
[0., 1.],
[0., 1.],
[0., 1.]], dtype=float32)
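As a quick sanity check, here is a minimal sketch (assuming `X_train` and the predictions shown above) that collapses the softmax probabilities into hard labels and counts the samples per class:

import numpy as np

# Collapse the softmax probabilities into hard class labels (0 or 1).
pred_labels = np.argmax(model.predict(X_train), axis=1)

# Count the samples assigned to each class; given the output above,
# this prints [  0 100], i.e. every sample is assigned to class 1.
print(np.bincount(pred_labels, minlength=2))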
Here I found that the BatchNormalization() layer may be the source of this problem. The solution proposed in that post is to predict the samples with the following lines:
import numpy as np

# Prediction on training set
learning_phase = 1  # 1 = training mode, so BatchNormalization uses batch statistics
sample_weights_TR = np.ones(100)
ins_TR = [X_train, y_train, sample_weights_TR, learning_phase]
model.test_function(ins_TR)
# [0.39475524, 0.8727273] -> [loss, accuracy]; average accuracy = 0.873

# Prediction on test set
sample_weights_TS = np.ones(27)
ins_TS = [X_test, y_test, sample_weights_TS, learning_phase]
model.test_function(ins_TS)
# [0.7853608, 0.79562044] -> [loss, accuracy]; average accuracy = 0.796
However, if I run model.test_function(ins_TR) or model.test_function(ins_TS) over and over, the result is different every time; the accuracy keeps decreasing!
model.test_function(ins_TS)
[0.7853608, 0.79562044]
model.test_function(ins_TS)
[0.88033366, 0.75]
model.test_function(ins_TS)
[0.9703789, 0.7068063]
model.test_function(ins_TS)
[0.86600196, 0.69266057]
model.test_function(ins_TS)
[0.79449946, 0.67755103]
model.test_function(ins_TS)
[0.77942985, 0.6691176]
model.test_function(ins_TS)
[0.83834535, 0.6588629]
So, my question is: why does the CNN predict only one class, and why does model.test_function return different (and steadily worse) results on every call?
Thanks in advance, Mattia
Answer (score: 1):
The main problem is that, as @Masoud Erfani pointed out, you want to train a binary classifier (a classifier with only two possible outputs, 0 or 1), but you are using a softmax activation function and the categorical cross-entropy loss function, which are meant for multi-class classification.
Change the activation function of the last layer to sigmoid and the loss function to binary_crossentropy; this will solve your problem.
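A minimal sketch of that change (one possible reading, which assumes a single output unit and plain 0/1 labels instead of the one-hot vectors used by categorical_crossentropy):

from tensorflow.keras.layers import Dense

# Replace the final Dense(2, activation='softmax') with a single sigmoid unit.
model.add(Dense(1, activation='sigmoid'))

# binary_crossentropy expects one probability per sample and 0/1 targets,
# so one-hot labels would first need collapsing, e.g. y_binary = y_train[:, 1].
model.compile(loss='binary_crossentropy', optimizer='SGD', metrics=['accuracy'])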
Keep me posted!