I have been working on an image classification problem. I used ImageDataGenerator to load and preprocess the data, then trained my CNN model on the image dataset, but the accuracy stays stuck at 51%.

My dataset contains 1,000 signatures, with 4,000 genuine sample images and 4,000 forged sample images; in total I have 8,000 images.

Both models I tried either stay at 51% accuracy or drop even lower, and both overfit. This is what I am using:
import tensorflow as tf
from tensorflow.keras.preprocessing.image import ImageDataGenerator
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Conv2D, MaxPooling2D, Flatten, Dense

batch_size = 128
epochs = 15
IMG_HEIGHT = 150
IMG_WIDTH = 150

train_datagen = ImageDataGenerator(
    rescale=1./255,
    rotation_range=20,
    width_shift_range=0.15,
    height_shift_range=0.15,
    shear_range=0.15,
    zoom_range=0.15,
    horizontal_flip=True,
    fill_mode='nearest',
    validation_split=0.4)
train_data_gen = train_datagen.flow_from_directory(
    train_dir,
    target_size=(IMG_HEIGHT, IMG_WIDTH),
    batch_size=batch_size,
    class_mode='binary',
    subset='training')
val_data_gen = train_datagen.flow_from_directory(
    train_dir,  # same directory as the training data
    target_size=(IMG_HEIGHT, IMG_WIDTH),
    batch_size=batch_size,
    class_mode='binary',
    subset='validation')
model = Sequential([
    Conv2D(16, 3, padding='same', activation='relu',
           input_shape=(IMG_HEIGHT, IMG_WIDTH, 3)),
    MaxPooling2D(),
    Conv2D(32, 3, padding='same', activation='relu'),
    MaxPooling2D(),
    Conv2D(64, 3, padding='same', activation='relu'),
    MaxPooling2D(),
    Flatten(),
    Dense(512, activation='relu'),
    Dense(1)])  # no activation: outputs a raw logit, paired with from_logits=True below
model.compile(optimizer='adam',
              loss=tf.keras.losses.BinaryCrossentropy(from_logits=True),
              metrics=['accuracy'])
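For context on what the flat numbers in the log below mean: the final Dense(1) layer has no activation, so the model outputs raw logits and the loss applies the sigmoid internally (from_logits=True). A minimal sketch (plain Python, not part of my training script) of why a collapsed model whose logits hover near 0 gives ~50% accuracy and a loss of ~0.6931:

```python
import math

def sigmoid(z: float) -> float:
    # Map a raw logit to a probability in (0, 1).
    return 1.0 / (1.0 + math.exp(-z))

# A logit of 0 maps to a probability of exactly 0.5; thresholding such
# outputs at 0.5 on a balanced binary dataset gives roughly 50% accuracy.
print(sigmoid(0.0))  # 0.5

# The binary cross-entropy of always predicting 0.5 is -ln(0.5) = ln 2,
# which matches the 0.6931 loss plateau in the training log.
print(round(-math.log(0.5), 4))  # 0.6931
```

So a loss stuck at 0.6931 with ~50% accuracy means the model is predicting essentially the same probability for every image rather than learning the classes.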
history_more = model.fit_generator(
    train_data_gen,
    steps_per_epoch=train_data_gen.samples // batch_size,
    epochs=epochs,
    validation_data=val_data_gen,
    validation_steps=val_data_gen.samples // batch_size)
Epoch 1/15
37/37 [==============================] - 2886s 78s/step - loss: 0.8010 - accuracy: 0.4994 - val_loss: 0.6933 - val_accuracy: 0.5000
Epoch 2/15
37/37 [==============================] - 985s 27s/step - loss: 0.6934 - accuracy: 0.5015 - val_loss: 0.6931 - val_accuracy: 0.5000
Epoch 3/15
37/37 [==============================] - 986s 27s/step - loss: 0.6931 - accuracy: 0.4991 - val_loss: 0.6931 - val_accuracy: 0.5000
Epoch 4/15
37/37 [==============================] - 985s 27s/step - loss: 0.6931 - accuracy: 0.4998 - val_loss: 0.6931 - val_accuracy: 0.5000
Epoch 5/15
37/37 [==============================] - 988s 27s/step - loss: 0.6930 - accuracy: 0.4961 - val_loss: 0.6927 - val_accuracy: 0.5000
Epoch 6/15
37/37 [==============================] - 991s 27s/step - loss: 0.6934 - accuracy: 0.5021 - val_loss: 0.6923 - val_accuracy: 0.5000
Epoch 7/15
37/37 [==============================] - 979s 26s/step - loss: 0.6917 - accuracy: 0.5028 - val_loss: 0.6909 - val_accuracy: 0.5000
Epoch 8/15
37/37 [==============================] - 974s 26s/step - loss: 0.6858 - accuracy: 0.4998 - val_loss: 0.6897 - val_accuracy: 0.4991
Epoch 9/15
37/37 [==============================] - 967s 26s/step - loss: 0.6802 - accuracy: 0.5078 - val_loss: 0.6909 - val_accuracy: 0.5003
Epoch 10/15
37/37 [==============================] - 970s 26s/step - loss: 0.6808 - accuracy: 0.5045 - val_loss: 0.6943 - val_accuracy: 0.5081
Epoch 11/15
37/37 [==============================] - 967s 26s/step - loss: 0.6741 - accuracy: 0.5103 - val_loss: 0.7072 - val_accuracy: 0.5131
Epoch 12/15
37/37 [==============================] - 950s 26s/step - loss: 0.6732 - accuracy: 0.5128 - val_loss: 0.7064 - val_accuracy: 0.5041
Epoch 13/15
37/37 [==============================] - 947s 26s/step - loss: 0.6707 - accuracy: 0.5171 - val_loss: 0.6996 - val_accuracy: 0.5078
Epoch 14/15
37/37 [==============================] - 951s 26s/step - loss: 0.6675 - accuracy: 0.5103 - val_loss: 0.7122 - val_accuracy: 0.5016
Epoch 15/15
37/37 [==============================] - 952s 26s/step - loss: 0.6724 - accuracy: 0.5197 - val_loss: 0.7105 - val_accuracy: 0.5119
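For reference, the 37 steps per epoch in the log are consistent with the 60/40 split of my 8,000 images. A quick arithmetic check (the variable names below are just illustrative, not from the script above):

```python
# Reproduce the step counts implied by validation_split=0.4 on 8000 images.
total_images = 8000
validation_split = 0.4
batch_size = 128

# round() avoids float artifacts from 8000 * 0.6 / 8000 * 0.4.
train_samples = round(total_images * (1 - validation_split))  # 4800
val_samples = round(total_images * validation_split)          # 3200

steps_per_epoch = train_samples // batch_size   # 37, matching "37/37"
validation_steps = val_samples // batch_size    # 25

print(steps_per_epoch, validation_steps)  # 37 25
```

So each epoch really does cover the full 4,800-image training subset; the problem is not a truncated epoch but the model failing to learn.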