Transfer learning with Keras and VGG16 on a small dataset

Asked: 2019-06-15 15:54:28

Tags: machine-learning keras neural-network face-recognition

I have to build a neural network that can recognize the faces of 15 people. I am using Keras. My dataset consists of 300 images in total, split into training, validation, and test sets. For each of the 15 people I have the following breakdown:

  • Training: 13
  • Validation: 3
  • Test: 4
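As a quick sanity check (plain Python, using only the numbers from the split above), this breakdown does account for all 300 images:

```python
# Per-person image counts from the split above
people = 15
split = {"training": 13, "validation": 3, "test": 4}

per_person = sum(split.values())                       # images per person
totals = {name: people * count for name, count in split.items()}

print(per_person)           # 20
print(totals)               # {'training': 195, 'validation': 45, 'test': 60}
print(people * per_person)  # 300
```

With only 13 training images per class, a network trained from scratch has far too few examples, which is what motivates transfer learning here.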

Since I am not able to build an efficient neural network from scratch, and also because my dataset is small, I tried to solve my problem with transfer learning. I used the VGG16 network. During the training and validation phases I got decent results, but when I ran the tests the results were disastrous.

I don't know what my problem is. Here is the code I used:

from keras import applications, optimizers
from keras.models import Model
from keras.layers import Flatten, Dense, Dropout
from keras.preprocessing.image import ImageDataGenerator
from keras.callbacks import ModelCheckpoint, EarlyStopping

img_width, img_height = 256, 256
train_data_dir = 'dataset_biometria/face/training_set'
validation_data_dir = 'dataset_biometria/face/validation_set'
nb_train_samples = 20
nb_validation_samples = 20
batch_size = 16
epochs = 5

model = applications.VGG19(weights = "imagenet", include_top=False, input_shape = (img_width, img_height, 3))

for layer in model.layers:
    layer.trainable = False

#Adding custom Layers 
x = model.output
x = Flatten()(x)
x = Dense(1024, activation="relu")(x)
x = Dropout(0.4)(x)
x = Dense(1024, activation="relu")(x)
predictions = Dense(15, activation="softmax")(x)

# creating the final model 
model_final = Model(inputs=model.input, outputs=predictions)

# compile the model 
model_final.compile(loss = "categorical_crossentropy", optimizer = optimizers.SGD(lr=0.0001, momentum=0.9), metrics=["accuracy"])

# Initiate the train and test generators with data augmentation
train_datagen = ImageDataGenerator(
    rescale=1. / 255,
    horizontal_flip=True,
    fill_mode="nearest",
    zoom_range=0.3,
    width_shift_range=0.3,
    height_shift_range=0.3,
    rotation_range=30)

test_datagen = ImageDataGenerator(
    rescale=1. / 255,
    horizontal_flip=True,
    fill_mode="nearest",
    zoom_range=0.3,
    width_shift_range=0.3,
    height_shift_range=0.3,
    rotation_range=30)

train_generator = train_datagen.flow_from_directory(
    train_data_dir,
    target_size=(img_height, img_width),
    batch_size=batch_size,
    class_mode="categorical")

validation_generator = test_datagen.flow_from_directory(
    validation_data_dir,
    target_size=(img_height, img_width),
    batch_size=batch_size,
    class_mode="categorical")

# Save the model according to the conditions  
checkpoint = ModelCheckpoint("vgg16_1.h5", monitor='val_acc', verbose=1, save_best_only=True, save_weights_only=False, mode='auto', period=1)
early = EarlyStopping(monitor='val_acc', min_delta=0, patience=10, verbose=1, mode='auto')


# Train the model 
model_final.fit_generator(
    train_generator,
    steps_per_epoch=nb_train_samples // batch_size,
    epochs=epochs,
    validation_data=validation_generator,
    validation_steps=nb_validation_samples // batch_size,
    callbacks=[checkpoint, early])

model_final.save('model_face_classification.h5')

I also tried training some of the layers instead of freezing them all, as in the following example:

for layer in model.layers[:10]:
    layer.trainable = False

I also tried changing the number of epochs, the batch size, nb_train_samples, and nb_validation_samples.

Unfortunately, the results did not change: in the test phase my network cannot recognize the faces correctly.

1 Answer:

Answer 0 (score: 0):

Without seeing the actual results or errors, I can't say what the problem is here.
It is true that a small dataset is a problem, but there are many ways to work around it.
You can use image augmentation to increase the number of samples. You can refer to augement.py
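As a minimal illustration of the augmentation idea (independent of augement.py, which is not shown here; the `augment` helper below is hypothetical), each source photo can be turned into several training variants:

```python
import numpy as np

def augment(img):
    """Yield simple variants of one (H, W, C) image array:
    the original, a horizontal flip, and two small horizontal shifts."""
    yield img
    yield img[:, ::-1]            # horizontal flip (mirror the face)
    for shift in (-2, 2):         # shift a few pixels left and right
        yield np.roll(img, shift, axis=1)

# A tiny stand-in "image" so the sketch runs without real data
img = np.arange(4 * 4 * 3, dtype=np.uint8).reshape(4, 4, 3)
variants = list(augment(img))
print(len(variants))  # 4 images derived from one source image
```

In practice you would apply such transforms (or Keras's ImageDataGenerator, as in the question) only to the training set, so that validation and test images stay representative of real inputs.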

However, instead of modifying your network above, there is a very cool model for this: the siamese network / one-shot learning. It does not need many photos, and its accuracy is high.
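The core of the one-shot approach is to compare embeddings of two faces rather than classify into fixed classes. A minimal sketch of that decision rule, assuming a trained embedding network already exists (the embeddings, `same_person` helper, and threshold below are all hypothetical values for illustration):

```python
import numpy as np

def embedding_distance(a, b):
    """Euclidean distance between two face embedding vectors."""
    return float(np.linalg.norm(np.asarray(a) - np.asarray(b)))

def same_person(a, b, threshold=0.8):
    """One-shot decision rule: two faces match if their embeddings
    are closer than a threshold tuned on a validation set."""
    return embedding_distance(a, b) < threshold

# Hypothetical 4-d embeddings standing in for a real network's output
anchor   = [0.10, 0.90, 0.30, 0.50]
positive = [0.12, 0.88, 0.33, 0.49]   # same person, slightly different photo
negative = [0.90, 0.10, 0.70, 0.20]   # a different person

print(same_person(anchor, positive))  # True
print(same_person(anchor, negative))  # False
```

Because the network only has to learn "same vs. different" rather than 15 separate classes, a handful of photos per person can be enough, which fits the dataset sizes in the question.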

So you can look at the following links for some help:

  1. Facial-Recognition-Using-FaceNet-Siamese-One-Shot-Learning
  2. Face-recognition-using-deep-learning