Transfer learning with Keras: validation accuracy never improves from the outset (beyond the naive baseline) while training accuracy improves

Asked: 2019-11-03 01:13:21

Tags: machine-learning keras deep-learning conv-neural-network resnet

I am building a classifier for the Food-101 dataset (an image dataset with 101 classes and 1,000 images per class). My approach is transfer learning in Keras with ResNet50 (weights pretrained on ImageNet).

While training, training accuracy improves over a few epochs (30% -> 45%), but validation accuracy stays essentially flat at 0.9-1.0%. I have tried simplifying the model, swapping optimizers, reducing and increasing the units in the hidden layer, stripping out all image augmentation, and setting a consistent random seed in flow_from_directory().

When I look at the predictions the model makes on the validation set, it always predicts the same class.
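A minimal sketch of that kind of check (assuming the model and val_datagen defined in the snippets below):

import numpy as np

# run the model over the whole validation split and inspect the predicted classes
preds = model.predict_generator(val_datagen, steps=val_steps)
pred_classes = np.argmax(preds, axis=1)
print(np.unique(pred_classes))  # a single value here means every image gets the same class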

My feeling is that the model is not overfitting so heavily that overfitting alone could explain the lack of validation accuracy.

Any suggestions for improving the validation accuracy would be greatly appreciated.

For reference, here are the relevant code snippets:

# imports needed for the snippet below
from keras.preprocessing.image import ImageDataGenerator
from keras.applications.resnet50 import ResNet50
from keras.models import Sequential

datagen = ImageDataGenerator(rescale=1./255, validation_split=0.2)

train_datagen = datagen.flow_from_directory('data/train/', seed=42, class_mode='categorical', subset='training', target_size=(256,256))
# prints "60603 images belonging to 101 classes"
val_datagen = datagen.flow_from_directory('data/train/', seed=42, class_mode='categorical', subset='validation', target_size=(256,256)) 
# prints "15150 images belonging to 101 classes"

train_steps = len(train_datagen) #1894
val_steps = len(val_datagen) #474
classes = len(list(train_datagen.class_indices.keys())) #101

conv_base = ResNet50(weights='imagenet', include_top=False, pooling='avg', input_shape=(256, 256, 3))

from keras.layers import Dense
from keras.layers import Dropout
from keras.layers import BatchNormalization

model = Sequential()

model.add(conv_base)
model.add(BatchNormalization())
model.add(Dropout(0.2))
model.add(Dense(512, activation='relu'))
model.add(Dropout(0.5))
model.add(Dense(classes, activation='softmax'))

conv_base.trainable = False  # freeze the convolutional base; only the new head trains

from keras.optimizers import Adam

model.compile(loss='categorical_crossentropy',
              optimizer=Adam(),
              metrics=['acc','top_k_categorical_accuracy'])

history = model.fit_generator(
    train_datagen,
    steps_per_epoch=train_steps,
    epochs=5,
    verbose=2,
    validation_data=val_datagen,
    validation_steps=val_steps
)

Here are the results of .fit_generator():

Epoch 1/5
724s - loss: 3.1305 - acc: 0.3059 - top_k_categorical_accuracy: 0.5629 - val_loss: 6.5914 - val_acc: 0.0099 - val_top_k_categorical_accuracy: 0.0494
Epoch 2/5
715s - loss: 2.4812 - acc: 0.4021 - top_k_categorical_accuracy: 0.6785 - val_loss: 7.4093 - val_acc: 0.0099 - val_top_k_categorical_accuracy: 0.0495
Epoch 3/5
714s - loss: 2.3559 - acc: 0.4248 - top_k_categorical_accuracy: 0.7026 - val_loss: 8.9146 - val_acc: 0.0094 - val_top_k_categorical_accuracy: 0.0495
Epoch 4/5
714s - loss: 2.2661 - acc: 0.4459 - top_k_categorical_accuracy: 0.7200 - val_loss: 8.0597 - val_acc: 0.0100 - val_top_k_categorical_accuracy: 0.0494
Epoch 5/5
715s - loss: 2.1870 - acc: 0.4583 - top_k_categorical_accuracy: 0.7348 - val_loss: 7.5171 - val_acc: 0.0100 - val_top_k_categorical_accuracy: 0.0483

And here is model.summary():

Layer (type)                 Output Shape              Param #   
=================================================================
resnet50 (Model)             (None, 2048)              23587712  
_________________________________________________________________
batch_normalization_1 (Batch (None, 2048)              8192      
_________________________________________________________________
dropout_1 (Dropout)          (None, 2048)              0         
_________________________________________________________________
dense_1 (Dense)              (None, 512)               1049088   
_________________________________________________________________
dropout_2 (Dropout)          (None, 512)               0         
_________________________________________________________________
dense_2 (Dense)              (None, 101)               51813     
=================================================================
Total params: 24,696,805
Trainable params: 1,104,997
Non-trainable params: 23,591,808
_________________________________________________________________

3 Answers:

Answer 0 (score: 4):

The reason for the poor validation accuracy has to do with the way the model was built. It is reasonable to expect transfer learning to work well in this scenario. However, your top-1 and top-5 accuracies are hovering around 1/101 and 5/101 respectively. This indicates that the model is classifying at chance and has not learned the underlying signal (features) of the dataset. Hence, transfer learning did not work in this instance. That does not mean it can never work, though.
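Concretely, those chance baselines work out to almost exactly the validation numbers in the logs above:

n_classes = 101
print(1.0 / n_classes)  # ~0.0099, the observed val_acc
print(5.0 / n_classes)  # ~0.0495, the observed val_top_k_categorical_accuracy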

I repeated your experiment and got the same results, i.e. top-1 and top-5 accuracies that reflect classification by random choice. However, I then unfroze the layers of the ResNet50 model and repeated the experiment. This is just a slightly different way of doing transfer learning. After 10 epochs of training I got the following result:

Epoch 10/50 591/591 [==============================] - 1492s 3s/step - loss: 1.0594 - accuracy: 0.7459 - val_loss: 1.1397 - val_accuracy: 0.7143

This is not perfect. However, the model has not converged yet, and there are some preprocessing steps that could be applied to improve the result further.
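One candidate preprocessing step (my assumption; the answer does not name a specific one) is to replace the bare rescale=1./255 with the ResNet50-specific preprocess_input, matching how the ImageNet weights were originally trained:

from keras.applications.resnet50 import preprocess_input
from keras.preprocessing.image import ImageDataGenerator

# use ResNet50's own input preprocessing instead of a plain 1/255 rescale
datagen = ImageDataGenerator(preprocessing_function=preprocess_input,
                             validation_split=0.2)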

The reason for your observation is that the frozen ResNet50 model was trained on a fundamentally different distribution of images than the Food-101 dataset. This mismatch in data distributions causes the poor performance, because the transformations performed by the frozen network are not tuned to the Food-101 images. Unfreezing the network lets the neurons actually learn the Food-101 images, which accounts for the better result.
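For reference, a minimal sketch of the unfrozen variant described above (the learning rate is an assumption; the answer does not specify one):

from keras.optimizers import Adam

conv_base.trainable = True  # let the ResNet50 weights adapt to the Food-101 distribution

# recompile with a small learning rate so the pretrained weights change gradually
model.compile(loss='categorical_crossentropy',
              optimizer=Adam(lr=1e-5),
              metrics=['acc', 'top_k_categorical_accuracy'])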

Hope this helps you out.

Answer 1 (score: 1):

Reduce the number of frozen layers in your model (or increase the number of trainable layers). I ran into the same problem; after I made half of the layers trainable on my data, the accuracy improved greatly.
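A rough sketch of that kind of partial unfreezing (the cut-off of 30 layers and the learning rate are arbitrary choices for illustration):

from keras.optimizers import Adam

# unfreeze only the later, more task-specific layers of the base model
conv_base.trainable = True
for layer in conv_base.layers[:-30]:  # keep all but the last 30 layers frozen
    layer.trainable = False

# recompile so the new trainable flags take effect
model.compile(loss='categorical_crossentropy',
              optimizer=Adam(lr=1e-5),
              metrics=['acc', 'top_k_categorical_accuracy'])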

Answer 2 (score: 0):

Try this:

# imports needed for the snippet below
from keras.preprocessing.image import ImageDataGenerator
from keras.applications.resnet50 import ResNet50
from keras.models import Sequential

datagen = ImageDataGenerator(rescale=1./255, validation_split=0.4)

train_datagen = datagen.flow_from_directory('data/train/', seed=42, class_mode='categorical', subset='training', target_size=(256,256))
# prints "60603 images belonging to 101 classes"
val_datagen = datagen.flow_from_directory('data/train/', seed=42, class_mode='categorical', subset='validation', target_size=(256,256)) 
# prints "15150 images belonging to 101 classes"

train_steps = len(train_datagen)
val_steps = len(val_datagen)
classes = len(list(train_datagen.class_indices.keys())) #101

conv_base = ResNet50(weights='imagenet', include_top=False, pooling='avg', input_shape=(256, 256, 3))

from keras.layers import Dense
from keras.layers import Dropout

model = Sequential()

model.add(conv_base)
model.add(Dropout(0.2))
model.add(Dense(128, activation='relu'))
model.add(Dropout(0.3))
model.add(Dense(classes, activation='softmax'))

conv_base.trainable = False

from keras.optimizers import Adam

model.compile(loss='categorical_crossentropy',
              optimizer=Adam(),
              metrics=['acc','top_k_categorical_accuracy'])

history = model.fit_generator(
    train_datagen,
    steps_per_epoch=train_steps,
    epochs=50,
    verbose=2,
    validation_data=val_datagen,
    validation_steps=val_steps
)