Keras autoencoder model for face recognition trains very slowly and has very poor accuracy

Asked: 2019-11-18 06:14:06

Tags: python-3.x tensorflow keras face-recognition autoencoder

I am trying to build a face recognition model in Keras using an autoencoder. My dataset consists of 20 people, with 20 training images and 10 validation images per person. The dataset is a single folder containing one subfolder per person, named after that person. Here is the code used to build the face recognition model.
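For reference, `flow_from_directory` expects exactly one subfolder per class. A layout like the following (the person names here are placeholders, not from the original post) matches the description above:

```
train/
├── alice/
│   ├── img_01.jpg
│   └── ...          (20 images per person)
├── bob/
│   └── ...
└── ...              (20 person folders in total)
test/
├── alice/
│   └── ...          (10 images per person)
└── ...
```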

import numpy as np
from keras.models import Model, load_model
from keras.layers import Input, Conv2D, MaxPooling2D, UpSampling2D
from keras.preprocessing.image import ImageDataGenerator
from keras.callbacks import ModelCheckpoint

def fixed_generator(generator):
    # An autoencoder reconstructs its own input, so yield each batch as (x, x)
    for batch in generator:
        yield (batch, batch)

# dimensions of our images.
img_width, img_height = 512, 512

train_data_dir = 'train'
test_data_dir = 'test'
nb_train_samples = 400
nb_validation_samples = 200
nb_epoch = 70
batch_size = 32

input_img = Input(shape=( img_width, img_height,3))

x = Conv2D(128, (3, 3), activation='relu', padding='same')(input_img)
x = MaxPooling2D((2, 2), padding='same')(x)
x = Conv2D(64, (3, 3), activation='relu', padding='same')(x)
x = MaxPooling2D((2, 2), padding='same')(x)
x = Conv2D(32, (3, 3), activation='relu', padding='same')(x)
x = MaxPooling2D((2, 2), padding='same')(x)
x = Conv2D(16, (3, 3), activation='relu', padding='same')(x)
x = MaxPooling2D((2, 2), padding='same')(x)

x = Conv2D(8, (3, 3), activation='relu', padding='same')(x)
encoded = MaxPooling2D((2, 2), padding='same')(x)
# at this point the representation is (16, 16, 8) for 512x512 inputs, i.e. 2048-dimensional

# Decoder
x = Conv2D(8, (3, 3), activation='relu', padding='same')(encoded)
x = UpSampling2D((2, 2))(x)
x = Conv2D(16, (3, 3), activation='relu', padding='same')(x)
x = UpSampling2D((2, 2))(x)
x = Conv2D(32, (3, 3), activation='relu', padding='same')(x)
x = UpSampling2D((2, 2))(x)
x = Conv2D(64, (3, 3), activation='relu', padding='same')(x)
x = UpSampling2D((2, 2))(x)
x = Conv2D(128, (3, 3), activation='relu', padding='same')(x)
x = UpSampling2D((2, 2))(x)
decoded = Conv2D(3, (3, 3), activation='sigmoid', padding='same')(x)

autoencoder = Model(input_img, decoded)
autoencoder.compile(optimizer='adadelta', loss='binary_crossentropy',metrics=['accuracy'])

# this is the augmentation configuration we will use for training
train_datagen = ImageDataGenerator(
        rescale=1./255,
        shear_range=0.2,
        zoom_range=0.2,
        horizontal_flip=True)

# this is the augmentation configuration we will use for testing:
# only rescaling
test_datagen = ImageDataGenerator(rescale=1./255)

train_generator = train_datagen.flow_from_directory(
        train_data_dir,
        target_size=(img_width, img_height),
        batch_size=batch_size,
        class_mode=None)

test_generator = test_datagen.flow_from_directory(
        test_data_dir,
        target_size=(img_width, img_height),
        batch_size=batch_size,
        class_mode=None)

checkpoint = ModelCheckpoint('model_best_weights.h5', monitor='loss', verbose=1, save_best_only=True, mode='min', period=1)

# resume from the checkpoint saved by a previous run (fails if the file does not exist yet)
autoencoder = load_model('model_best_weights.h5')

autoencoder.fit_generator(
        fixed_generator(train_generator),
        steps_per_epoch=nb_train_samples // batch_size,
        epochs=nb_epoch,
        validation_data=fixed_generator(test_generator),
        validation_steps=nb_validation_samples // batch_size,
        callbacks=[checkpoint],
        initial_epoch=30
        )

I have run this model for up to 70 epochs, but the accuracy it reports is very poor, and a single epoch takes almost an hour to run.
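As a sanity check on the cost (my own back-of-envelope arithmetic, not from the original post): at 512×512 input, the five 2×2 poolings leave a 16×16×8 bottleneck, and the first convolution alone runs 128 filters of size 3×3 over a 512×512×3 image, which is already on the order of a billion multiply-accumulates per image:

```python
# Back-of-envelope sizing for the network above (illustrative arithmetic only)
side = 512
for _ in range(5):          # five MaxPooling2D((2, 2)) stages
    side //= 2
bottleneck = side * side * 8    # 8 channels at the encoded layer
print(side, bottleneck)          # 16 per side, a 2048-dimensional code

# Multiply-accumulates in the first conv layer alone (3x3 kernels, 3 -> 128 channels)
first_conv_macs = 512 * 512 * 128 * 3 * 3 * 3
print(first_conv_macs)           # roughly 0.9 billion MACs per image
```

This is one reason an epoch is so slow, especially on CPU; it also suggests why `accuracy` looks poor, since pixel-wise accuracy is not a meaningful metric for reconstruction.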


Any help would be greatly appreciated.

0 answers:

No answers yet