I'm working with the following data subset in Keras:
5000 images of class A
5000 images of class B
1000 images from each class are used for validation. The images are resized to 96x96 with 3 channels and normalised to the 0-1 range. I'm using the following model:
from keras.models import Sequential
from keras.layers import Conv2D, MaxPooling2D, Dropout, Flatten, Dense

inputshape = (96, 96, 3)  # 96x96 RGB images, as described above
model = Sequential()
model.add(Conv2D(32, (3, 3), activation='relu', input_shape=inputshape))
model.add(Conv2D(32, (3, 3), activation='relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Dropout(0.25))
model.add(Conv2D(64, (3, 3), activation='relu'))
model.add(Conv2D(64, (3, 3), activation='relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Dropout(0.25))
model.add(Flatten())
model.add(Dense(256, activation='relu'))
model.add(Dropout(0.5))
model.add(Dense(1, activation='sigmoid'))
The model is then trained as follows:
from keras.optimizers import SGD

sgd = SGD(lr=0.01, decay=1e-6, momentum=0.9, nesterov=True)
model.compile(loss='binary_crossentropy', optimizer=sgd, metrics=['accuracy'])
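The fit call isn't shown here; a minimal sketch consistent with the logs below (8000 training samples, 2000 validation samples, 100 epochs) would look roughly like this, with the batch size being an assumption:

# Sketch of the training call; batch_size=32 is an assumption.
# validation_split=0.2 holds out 2000 of the 10000 images.
model.fit(data, labels, epochs=100, batch_size=32, validation_split=0.2)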
But the accuracy rarely gets above 50% (i.e. chance):
Epoch 1/100
8000/8000 [==============================] - 23s 3ms/step - loss: 0.6939 - acc: 0.5011 - val_loss: 0.6932 - val_acc: 0.5060
Epoch 2/100
8000/8000 [==============================] - 22s 3ms/step - loss: 0.6938 - acc: 0.4941 - val_loss: 0.6941 - val_acc: 0.4940
Epoch 3/100
8000/8000 [==============================] - 22s 3ms/step - loss: 0.6937 - acc: 0.4981 - val_loss: 0.6932 - val_acc: 0.4915
Epoch 4/100
8000/8000 [==============================] - 22s 3ms/step - loss: 0.6933 - acc: 0.5056 - val_loss: 0.6931 - val_acc: 0.5060
Epoch 5/100
8000/8000 [==============================] - 22s 3ms/step - loss: 0.6935 - acc: 0.4970 - val_loss: 0.6932 - val_acc: 0.4940
I don't think the problem is with the data itself, because I used an alternative machine-learning method on exactly the same images and got 94% accuracy (except that it used only 5 training images per class, but that's beside the point).
Any help would be greatly appreciated.
Oh! In case it matters: I'm using the CNTK backend.
Edit: here is the code I use to read in the images, which also normalises the pixel values to the 0-1 range:
import os

import cv2
import numpy as np
from keras.preprocessing.image import img_to_array

healthy_files = sorted(os.listdir("../../uninfected/"))
healthy_imgs = [cv2.imread("../../uninfected/" + x) for x in healthy_files]

data = []
labels = []
for img in healthy_imgs[:5000]:
    resized = cv2.resize(img, (96, 96)).astype(np.float32) / 255.0  # normalise data to 0..1 range
    arr = img_to_array(resized)
    data += [arr]
    labels += [0]
# The for loop above is then repeated over the other half of the dataset,
# with the labels line using the label [1] instead
data = np.array(data, np.float32)
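As the comment notes, the loop is repeated for the second class. A sketch of that repeated loop, assuming the infected images live in an "../../infected/" directory (the actual path isn't shown), which would run just before the final data = np.array(...) line:

# Hypothetical path; mirrors the loop above for the second class.
infected_files = sorted(os.listdir("../../infected/"))
infected_imgs = [cv2.imread("../../infected/" + x) for x in infected_files]
for img in infected_imgs[:5000]:
    resized = cv2.resize(img, (96, 96)).astype(np.float32) / 255.0
    data += [img_to_array(resized)]
    labels += [1]
labels = np.array(labels, np.float32)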
Edit 2: here is the output of model.summary():
Model built:
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
conv2d_1 (Conv2D) (None, 94, 94, 32) 896
_________________________________________________________________
conv2d_2 (Conv2D) (None, 92, 92, 32) 9248
_________________________________________________________________
max_pooling2d_1 (MaxPooling2 (None, 46, 46, 32) 0
_________________________________________________________________
conv2d_3 (Conv2D) (None, 44, 44, 64) 18496
_________________________________________________________________
conv2d_4 (Conv2D) (None, 42, 42, 64) 36928
_________________________________________________________________
max_pooling2d_2 (MaxPooling2 (None, 21, 21, 64) 0
_________________________________________________________________
flatten_1 (Flatten) (None, 28224) 0
_________________________________________________________________
dense_1 (Dense) (None, 256) 7225600
_________________________________________________________________
dense_2 (Dense) (None, 1) 257
=================================================================
Total params: 7,291,425
Trainable params: 7,291,425
Non-trainable params: 0
I noticed that the activation layers aren't explicitly listed in this summary, so I changed the model to:
model.add(Conv2D(32, (3, 3), input_shape=inputshape))
model.add(Activation("relu"))
model.add(Conv2D(32, (3, 3)))
model.add(Activation("relu"))
model.add(MaxPooling2D(pool_size=(2, 2)))
#model.add(Dropout(0.25))
model.add(Conv2D(64, (3, 3)))
model.add(Activation("relu"))
model.add(Conv2D(64, (3, 3)))
model.add(Activation("relu"))
model.add(MaxPooling2D(pool_size=(2, 2)))
#model.add(Dropout(0.25))
model.add(Flatten())
model.add(Dense(64))
model.add(Activation("relu"))
#model.add(Dropout(0.5))
#model.add(Dense(10, activation="relu"))
model.add(Dense(1))
model.add(Activation("sigmoid"))
which gives the following summary output:
Model built:
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
conv2d_1 (Conv2D) (None, 94, 94, 32) 896
_________________________________________________________________
activation_1 (Activation) (None, 94, 94, 32) 0
_________________________________________________________________
conv2d_2 (Conv2D) (None, 92, 92, 32) 9248
_________________________________________________________________
activation_2 (Activation) (None, 92, 92, 32) 0
_________________________________________________________________
max_pooling2d_1 (MaxPooling2 (None, 46, 46, 32) 0
_________________________________________________________________
conv2d_3 (Conv2D) (None, 44, 44, 64) 18496
_________________________________________________________________
activation_3 (Activation) (None, 44, 44, 64) 0
_________________________________________________________________
conv2d_4 (Conv2D) (None, 42, 42, 64) 36928
_________________________________________________________________
activation_4 (Activation) (None, 42, 42, 64) 0
_________________________________________________________________
max_pooling2d_2 (MaxPooling2 (None, 21, 21, 64) 0
_________________________________________________________________
flatten_1 (Flatten) (None, 28224) 0
_________________________________________________________________
dense_1 (Dense) (None, 64) 1806400
_________________________________________________________________
activation_5 (Activation) (None, 64) 0
_________________________________________________________________
dense_2 (Dense) (None, 1) 65
_________________________________________________________________
activation_6 (Activation) (None, 1) 0
=================================================================
Total params: 1,872,033
Trainable params: 1,872,033
Non-trainable params: 0
Needless to say, the results remained the same...
Answer 0 (score: 2)
So, after trying all the suggestions the wonderful people in the comments made, I had no luck. I decided to go back to the drawing board, or in this case, to try it on a different machine. And my original code worked!
In the end, I narrowed it down to the backend: I was using CNTK on the first machine and TensorFlow on the second. I tried CNTK on the second machine and it ran perfectly... so I decided to reinstall CNTK on the first machine. This time, the code ran fine. So I don't know what originally went wrong, but it had something to do with my CNTK installation. I suppose in the end this whole Q&A didn't really help anyone... but if anyone runs into a similar problem, try the suggestions in the comments on the question; there are some very good ones there. If that doesn't work, try changing your backend!
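For anyone wanting to check which backend their Keras installation is actually using, a quick check from Python:

# Prints the active backend name, e.g. "tensorflow" or "cntk".
# The backend is selected in ~/.keras/keras.json (the "backend" key)
# or via the KERAS_BACKEND environment variable.
from keras import backend as K
print(K.backend())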
Cheers
Answer 1 (score: -1)
Using dropout in convolutional layers is usually a bad idea; use batch normalisation instead.
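A minimal sketch of that suggestion applied to the first convolutional block of the model in the question, replacing the conv-block Dropout with BatchNormalization (untested here, just illustrating the idea):

from keras.models import Sequential
from keras.layers import Conv2D, MaxPooling2D, BatchNormalization

model = Sequential()
model.add(Conv2D(32, (3, 3), activation='relu', input_shape=(96, 96, 3)))
model.add(BatchNormalization())
model.add(Conv2D(32, (3, 3), activation='relu'))
model.add(BatchNormalization())
model.add(MaxPooling2D(pool_size=(2, 2)))
# ... remaining layers as in the question; Dropout can stay on the
# Dense layer, where the answer does not object to it.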