Epoch does not start when training a CNN with the Keras VGGFace framework

Time: 2018-06-10 21:13:24

Tags: python machine-learning keras transfer-learning

I am trying to apply transfer learning with the Keras VGGFace framework on my own dataset, which consists of facial images in 12 classes. I have applied augmentation to some of the classes that have very little data in the training set.

After setting up fine-tuning with resnet50, when I try to train my model it gets stuck at the epoch, i.e. it never starts training and just keeps showing Epoch 1/50. This is what it looks like:

Layer (type)                 Output Shape              Param #   
=================================================================
model_1 (Model)              (None, 12)                23585740  
=================================================================
Total params: 23,585,740
Trainable params: 23,532,620
Non-trainable params: 53,120
_________________________________________________________________
Found 1774 images belonging to 12 classes.
Found 313 images belonging to 12 classes.
Epoch 1/50

Here is my code:

from keras.layers import Flatten, Dense
from keras.models import Model
from keras.preprocessing.image import ImageDataGenerator
from keras import models, optimizers
from keras_vggface.vggface import VGGFace

train_data_path = 'dataset_cfps/train'
validation_data_path = 'dataset_cfps/validation'

# Parameters
img_width, img_height = 224, 224

vggface = VGGFace(model='resnet50', include_top=False, input_shape=(img_width, img_height, 3))

#vgg_model = VGGFace(include_top=False, input_shape=(224, 224, 3))
last_layer = vggface.get_layer('avg_pool').output
x = Flatten(name='flatten')(last_layer)
out = Dense(12, activation='sigmoid', name='classifier')(x)
custom_vgg_model = Model(vggface.input, out)


# Create the model
model = models.Sequential()

# Add the convolutional base model
model.add(custom_vgg_model)

# Add new layers
# model.add(layers.Flatten())
# model.add(layers.Dense(1024, activation='relu'))
# model.add(BatchNormalization())
# model.add(layers.Dropout(0.5))
# model.add(layers.Dense(12, activation='sigmoid'))

# Show a summary of the model. Check the number of trainable parameters
model.summary()

train_datagen = ImageDataGenerator(
      rescale=1./255,
      rotation_range=20,
      width_shift_range=0.2,
      height_shift_range=0.2,
      horizontal_flip=True,
      fill_mode='nearest')

validation_datagen = ImageDataGenerator(rescale=1./255)


train_batchsize = 16
val_batchsize = 16

train_generator = train_datagen.flow_from_directory(
        train_data_path,
        target_size=(img_width, img_height),
        batch_size=train_batchsize,
        class_mode='categorical')

validation_generator = validation_datagen.flow_from_directory(
        validation_data_path,
        target_size=(img_width, img_height),
        batch_size=val_batchsize,
        class_mode='categorical',
        shuffle=True)

# Compile the model
model.compile(loss='categorical_crossentropy',
              optimizer=optimizers.SGD(lr=1e-3),
              metrics=['acc'])
# Train the model
history = model.fit_generator(
      train_generator,
      steps_per_epoch=train_generator.samples/train_generator.batch_size,
      epochs=50,
      validation_data=validation_generator,
      validation_steps=validation_generator.samples/validation_generator.batch_size,
      verbose=1)

# Save the model
model.save('facenet_resnet.h5')

Does anyone know what the problem might be? And how can I make my model better (if there is anything I can do)? Feel free to suggest improvements.
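Since the question explicitly invites suggestions, one conventional refinement (a general observation, not a confirmed cause of the hang): with 12 mutually exclusive classes trained under categorical_crossentropy, the output layer is usually softmax rather than sigmoid, so the predicted class probabilities sum to 1. A minimal NumPy sketch of the behaviour softmax provides:

```python
import numpy as np

def softmax(z):
    # Subtract the max for numerical stability, then normalize so the
    # outputs form a probability distribution over the classes.
    e = np.exp(z - np.max(z))
    return e / e.sum()

logits = np.array([2.0, 1.0, 0.1])  # illustrative 3-class logits
probs = softmax(logits)
# probs sums to 1 and the largest logit gets the largest probability,
# which is what categorical_crossentropy expects of the output layer.
```

In the model above this would mean `Dense(12, activation='softmax', ...)` for the classifier layer; independent sigmoids are the usual choice only for multi-label problems.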

2 Answers:

Answer 0 (score: 0)

Wait a few hours (depending on your GPU). Eventually it will print the loss and val_loss for each epoch.

Answer 1 (score: 0)

Waiting did not solve the problem; I solved it by restarting the whole program.
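One more thing worth ruling out (my own assumption, not something either answer verifies): in the question's code, `steps_per_epoch` and `validation_steps` are floats, because `samples/batch_size` uses true division, and some Keras versions behave oddly with non-integer step counts. A small sketch of computing integer steps, using the sample counts from the `Found ... images` log lines above:

```python
import math

# Sample counts taken from the generator output in the question:
# "Found 1774 images" (train) and "Found 313 images" (validation).
train_samples = 1774
val_samples = 313
batch_size = 16

# math.ceil guarantees an integer step count that still covers every
# sample, including the final partial batch.
steps_per_epoch = math.ceil(train_samples / batch_size)
validation_steps = math.ceil(val_samples / batch_size)
```

These integers would then be passed to `fit_generator` in place of the float expressions in the question.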