I'm trying to fine-tune a VGG19 model for classification on a set of images. There are 18 classes, with roughly 6,000 curated images per class. Using Keras 2.2.4.
Model:
INIT_LR = 0.00001
BATCH_SIZE = 128
IMG_SIZE = (256, 256)
epochs = 150
model_base = keras.applications.vgg19.VGG19(include_top=False, input_shape=(*IMG_SIZE, 3), weights='imagenet')
output = Flatten()(model_base.output)
output = BatchNormalization()(output)
output = Dropout(0.5)(output)
output = Dense(64, activation='relu')(output)
output = BatchNormalization()(output)
output = Dropout(0.5)(output)
output = Dense(len(all_character_names), activation='softmax')(output)
model = Model(model_base.input, output)
for layer in model_base.layers[:-10]:
    layer.trainable = False
opt = optimizers.Adam(lr=INIT_LR, decay=INIT_LR / epochs)
model.compile(optimizer=opt,
              loss='categorical_crossentropy',
              metrics=['accuracy', 'top_k_categorical_accuracy'])
Data augmentation:
image_datagen = ImageDataGenerator(
    rotation_range=15,
    width_shift_range=.15,
    height_shift_range=.15,
    # rescale=1./255,
    shear_range=0.15,
    zoom_range=0.15,
    channel_shift_range=1,
    vertical_flip=True,
    horizontal_flip=True)
Train the model:
validation_steps = data_generator.validation_samples/BATCH_SIZE
steps_per_epoch = data_generator.train_samples/BATCH_SIZE
model.fit_generator(
    generator,
    steps_per_epoch=steps_per_epoch,
    epochs=epochs,
    validation_data=validation_data,
    validation_steps=validation_steps
)
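As a side note on the step counts above: `fit_generator` expects whole-number steps, and plain division gives a float whenever the sample count is not a multiple of the batch size. A minimal sketch using `math.ceil` (the sample counts here are hypothetical placeholders standing in for `data_generator.train_samples` and `data_generator.validation_samples`):

```python
import math

# Hypothetical counts: 18 classes * 6000 images with a 90/10 split.
train_samples = 97200
validation_samples = 10800
BATCH_SIZE = 128

# ceil rather than floor, so the final partial batch is still used.
steps_per_epoch = math.ceil(train_samples / BATCH_SIZE)
validation_steps = math.ceil(validation_samples / BATCH_SIZE)
print(steps_per_epoch, validation_steps)  # 760 85
```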
Model summary:
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
input_1 (InputLayer) (None, 256, 256, 3) 0
_________________________________________________________________
block1_conv1 (Conv2D) (None, 256, 256, 64) 1792
_________________________________________________________________
block1_conv2 (Conv2D) (None, 256, 256, 64) 36928
_________________________________________________________________
block1_pool (MaxPooling2D) (None, 128, 128, 64) 0
_________________________________________________________________
block2_conv1 (Conv2D) (None, 128, 128, 128) 73856
_________________________________________________________________
block2_conv2 (Conv2D) (None, 128, 128, 128) 147584
_________________________________________________________________
block2_pool (MaxPooling2D) (None, 64, 64, 128) 0
_________________________________________________________________
block3_conv1 (Conv2D) (None, 64, 64, 256) 295168
_________________________________________________________________
block3_conv2 (Conv2D) (None, 64, 64, 256) 590080
_________________________________________________________________
block3_conv3 (Conv2D) (None, 64, 64, 256) 590080
_________________________________________________________________
block3_conv4 (Conv2D) (None, 64, 64, 256) 590080
_________________________________________________________________
block3_pool (MaxPooling2D) (None, 32, 32, 256) 0
_________________________________________________________________
block4_conv1 (Conv2D) (None, 32, 32, 512) 1180160
_________________________________________________________________
block4_conv2 (Conv2D) (None, 32, 32, 512) 2359808
_________________________________________________________________
block4_conv3 (Conv2D) (None, 32, 32, 512) 2359808
_________________________________________________________________
block4_conv4 (Conv2D) (None, 32, 32, 512) 2359808
_________________________________________________________________
block4_pool (MaxPooling2D) (None, 16, 16, 512) 0
_________________________________________________________________
block5_conv1 (Conv2D) (None, 16, 16, 512) 2359808
_________________________________________________________________
block5_conv2 (Conv2D) (None, 16, 16, 512) 2359808
_________________________________________________________________
block5_conv3 (Conv2D) (None, 16, 16, 512) 2359808
_________________________________________________________________
block5_conv4 (Conv2D) (None, 16, 16, 512) 2359808
_________________________________________________________________
block5_pool (MaxPooling2D) (None, 8, 8, 512) 0
_________________________________________________________________
flatten_1 (Flatten) (None, 32768) 0
_________________________________________________________________
batch_normalization_1 (Batch (None, 32768) 131072
_________________________________________________________________
dropout_1 (Dropout) (None, 32768) 0
_________________________________________________________________
dense_1 (Dense) (None, 64) 2097216
_________________________________________________________________
batch_normalization_2 (Batch (None, 64) 256
_________________________________________________________________
dropout_2 (Dropout) (None, 64) 0
_________________________________________________________________
dense_2 (Dense) (None, 19) 1235
=================================================================
Total params: 22,254,163
Trainable params: 19,862,931
Non-trainable params: 2,391,232
_________________________________________________________________
<keras.engine.input_layer.InputLayer object at 0x00000224568D0D68> False
<keras.layers.convolutional.Conv2D object at 0x00000224568D0F60> False
<keras.layers.convolutional.Conv2D object at 0x00000224568F0438> False
<keras.layers.pooling.MaxPooling2D object at 0x00000224570A5860> False
<keras.layers.convolutional.Conv2D object at 0x00000224570A58D0> False
<keras.layers.convolutional.Conv2D object at 0x00000224574196D8> False
<keras.layers.pooling.MaxPooling2D object at 0x0000022457524048> False
<keras.layers.convolutional.Conv2D object at 0x0000022457524D30> False
<keras.layers.convolutional.Conv2D object at 0x0000022457053160> False
<keras.layers.convolutional.Conv2D object at 0x00000224572E15C0> False
<keras.layers.convolutional.Conv2D object at 0x000002245707B080> False
<keras.layers.pooling.MaxPooling2D object at 0x0000022457088400> False
<keras.layers.convolutional.Conv2D object at 0x0000022457088E10> True
<keras.layers.convolutional.Conv2D object at 0x00000224575DB240> True
<keras.layers.convolutional.Conv2D object at 0x000002245747A320> True
<keras.layers.convolutional.Conv2D object at 0x0000022457486160> True
<keras.layers.pooling.MaxPooling2D object at 0x00000224574924E0> True
<keras.layers.convolutional.Conv2D object at 0x0000022457492D68> True
<keras.layers.convolutional.Conv2D object at 0x00000224574AD320> True
<keras.layers.convolutional.Conv2D object at 0x00000224574C6400> True
<keras.layers.convolutional.Conv2D object at 0x00000224574D2240> True
<keras.layers.pooling.MaxPooling2D object at 0x00000224574DAF98> True
<keras.layers.core.Flatten object at 0x00000224574EA080> True
<keras.layers.normalization.BatchNormalization object at 0x00000224574F82B0> True
<keras.layers.core.Dropout object at 0x000002247134BA58> True
<keras.layers.core.Dense object at 0x000002247136A7B8> True
<keras.layers.normalization.BatchNormalization object at 0x0000022471324438> True
<keras.layers.core.Dropout object at 0x00000224713249B0> True
<keras.layers.core.Dense object at 0x00000224713BF7F0> True
batchsize:128
LR:1e-05
Attempts:
I'm adding about 2,000 new images per day, trying to reach roughly 20,000 images per class, since I thought the dataset size might be my problem.
Answer 0 (score: 0)
In its lower layers, the network learns low-level features such as edges and contours; in the higher layers, these are combined into more abstract features. In your case you need finer-grained features, such as hair color or a person's build, so one thing to try is fine-tuning only the last few layers (from blocks 4-5 onward). You can also use different learning rates: a very low one for the VGG blocks, and a slightly higher one for the brand-new Dense layers. For the implementation, this thread should help.
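The block-wise unfreezing suggested here can be expressed as a small helper. This is only a sketch under assumptions: layer names follow Keras's VGG19 naming convention (`block4_conv1`, ...), and `freeze_before_block` is a hypothetical helper name, not part of the Keras API:

```python
def freeze_before_block(layer_names, first_trainable_block):
    """Map each layer name to a trainable flag: every layer before the
    first layer of `first_trainable_block` is frozen."""
    flags = {}
    trainable = False
    for name in layer_names:
        if name.startswith(first_trainable_block):
            trainable = True
        flags[name] = trainable
    return flags

# Layer names as in the model summary above (truncated for brevity).
names = ['input_1', 'block1_conv1', 'block2_conv1', 'block3_conv1',
         'block3_pool', 'block4_conv1', 'block5_conv1', 'flatten_1', 'dense_1']
flags = freeze_before_block(names, 'block4')

# Applied to the real model, it would look like:
# for layer in model.layers:
#     layer.trainable = flags.get(layer.name, True)
```

For the different-learning-rates idea, plain Keras 2.2.4 has no per-layer learning rates out of the box; a common workaround is two-phase training: first train only the new Dense head at a higher rate with the base frozen, then unfreeze the top VGG blocks and recompile with a much lower rate.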