Keras image classification: validation accuracy higher than training accuracy

Posted: 2017-06-17 20:39:02

Tags: python image deep-learning classification keras

I am running an image classification model, and my problem is that my validation accuracy is higher than my training accuracy. The data (training/validation) was split randomly. I am using InceptionV3 as the pre-trained model. The ratio between training accuracy and validation accuracy stays the same over 100 epochs. I have already tried a lower learning rate and an additional batch normalization layer.

Does anyone have an idea what might be causing this? I would appreciate your help, thanks!

from keras.applications.inception_v3 import InceptionV3
from keras.preprocessing.image import ImageDataGenerator
from keras.models import Model
from keras.layers import Dense, Dropout, GlobalAveragePooling2D
from keras.optimizers import Adam

# load InceptionV3 without its classification head as the pre-trained base
base_model = InceptionV3(weights='imagenet', include_top=False)
# add a global spatial average pooling layer
x = base_model.output
x = GlobalAveragePooling2D()(x)
# add a fully-connected layer
x = Dense(468, activation='relu')(x)
x = Dropout(0.5)(x)

# and a logistic layer
predictions = Dense(468, activation='softmax')(x)

# this is the model we will train
model = Model(base_model.input, predictions)

# first: train only the top layers (which were randomly initialized)
# i.e. freeze all convolutional InceptionV3 layers
for layer in base_model.layers:
    layer.trainable = False

# compile the model (should be done *after* setting layers to non-trainable)
adam = Adam(lr=0.0001, beta_1=0.9)
model.compile(optimizer=adam, loss='categorical_crossentropy', metrics=['accuracy'])

# train the model on the new data for a few epochs
batch_size = 64
epochs = 100
img_height = 224
img_width = 224
train_samples = 127647
val_samples = 27865

train_datagen = ImageDataGenerator(
    rescale=1./255,
    #shear_range=0.2,
    zoom_range=0.2,
    # note: zca_whitening only takes effect after calling fit() on a sample
    # of the data; without that, Keras just warns and skips the whitening
    zca_whitening=True,
    #rotation_range=0.5,
    horizontal_flip=True)
test_datagen = ImageDataGenerator(rescale=1./255)

train_generator = train_datagen.flow_from_directory(
    'AD/AutoDetect/',
    target_size=(img_height, img_width),
    batch_size=batch_size,
    class_mode='categorical')

validation_generator = test_datagen.flow_from_directory(
    'AD/validation/',
    target_size=(img_height, img_width),
    batch_size=batch_size,
    class_mode='categorical')

# fine-tune the model
model.fit_generator(
    train_generator,
    samples_per_epoch=train_samples // batch_size,
    nb_epoch=epochs,
    validation_data=validation_generator,
    nb_val_samples=val_samples // batch_size)
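
A side note on this last call: with the Keras 1 style arguments used here, samples_per_epoch and nb_val_samples are counted in individual samples, so passing train_samples // batch_size means each epoch covers only about 1994 samples (which is why the log below shows 2048/1994). Under the Keras 2 API the same intent would be written with step counts instead; the following is only a minimal sketch, assuming the same generators and one full pass over the data per epoch:

# Keras 2 naming: steps_per_epoch / validation_steps are counted in batches
model.fit_generator(
    train_generator,
    steps_per_epoch=train_samples // batch_size,
    epochs=epochs,
    validation_data=validation_generator,
    validation_steps=val_samples // batch_size)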

Found 127647 images belonging to 468 classes.
Found 27865 images belonging to 468 classes.
Epoch 1/100
2048/1994 [==============================] - 48s - loss: 6.2839 - acc: 0.0073 - val_loss: 5.8506 - val_acc: 0.0179
Epoch 2/100
2048/1994 [==============================] - 44s - loss: 5.8338 - acc: 0.0430 - val_loss: 5.4865 - val_acc: 0.1004
Epoch 3/100
2048/1994 [==============================] - 45s - loss: 5.5147 - acc: 0.0786 - val_loss: 5.1474 - val_acc: 0.1161
Epoch 4/100
2048/1994 [==============================] - 44s - loss: 5.1921 - acc: 0.1074 - val_loss: 4.8049 - val_acc: 0.1786

1 answer:

Answer 0 (score: -2)

see this answer

This occurs because you added a dropout layer to your model, which prevents the training accuracy from reaching 1.0. Dropout is only active during training and is switched off when the validation data is evaluated, so the validation accuracy can come out higher than the training accuracy.
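
A quick way to check this explanation is to score the trained model on the training data itself: Keras evaluates in inference mode with dropout disabled, so the result can be compared with the acc value printed during training. The following is only a minimal sketch, reusing the model and train_generator from the question and the Keras 1 style evaluate_generator(generator, val_samples) signature that matches the rest of the code:

# evaluate 2048 training samples with dropout disabled (inference mode)
eval_loss, eval_acc = model.evaluate_generator(train_generator, 2048)
print('accuracy on training data without dropout: %.4f' % eval_acc)

If eval_acc comes out noticeably higher than the training-time acc, the gap between the two curves is indeed explained by dropout.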