How to improve training and testing accuracy

Asked: 2020-05-18 00:14:11

Tags: image-processing keras conv-neural-network pre-trained-model

I am very new to deep learning. I am fine-tuning a pre-trained model on my own dataset, but I cannot improve the training and test accuracy; both hover around 62 from the start of training to the end. I use Xception as the pre-trained model, combined with GlobalAveragePooling2D, a Dense layer, and a Dropout of 0.2.

The dataset consists of 3522 training images belonging to 2 classes and 881 test images belonging to the same 2 classes. The problem is that I cannot add more images to the dataset; this is the maximum number of images I can collect. I tried ImageDataGenerator, but it still did not help. Given this constraint, and the fact that the images of the two classes look somewhat similar, can the accuracy still be improved?
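For reference, the ImageDataGenerator attempt followed the usual flow_from_directory pattern; the sketch below is only illustrative, and the augmentation values, directory paths and batch size are placeholders, not the exact settings from my run:

from tensorflow.keras.preprocessing.image import ImageDataGenerator
from tensorflow.keras.applications.xception import preprocess_input

batch_size = 32  # placeholder value

# Light augmentation for training, only preprocessing for testing.
train_datagen = ImageDataGenerator(
    preprocessing_function=preprocess_input,
    rotation_range=15,
    width_shift_range=0.1,
    height_shift_range=0.1,
    horizontal_flip=True)
test_datagen = ImageDataGenerator(preprocessing_function=preprocess_input)

# Placeholder paths; one sub-folder per class.
train_data_gen = train_datagen.flow_from_directory(
    'data/train', target_size=(299, 299),
    batch_size=batch_size, class_mode='categorical')
test_data_gen = test_datagen.flow_from_directory(
    'data/test', target_size=(299, 299),
    batch_size=batch_size, class_mode='categorical')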

Code:
# Imports (not shown in the original post); the standalone Keras
# equivalents (from keras...) would work equally well.
from tensorflow.keras.applications import Xception
from tensorflow.keras.layers import GlobalAveragePooling2D, Dense, Dropout
from tensorflow.keras.models import Model
from tensorflow.keras.optimizers import SGD

# Xception base plus a new classification head.
base_model = Xception(include_top=False, weights='imagenet')
x = base_model.output
x = GlobalAveragePooling2D()(x)
x = Dense(512, activation="relu")(x)
x = Dropout(0.2)(x)
predictions = Dense(2, activation='sigmoid')(x)

model = Model(inputs=base_model.input, outputs=predictions)

# Phase 1: freeze the entire base and train only the new head.
for layer in base_model.layers:
    layer.trainable = False
model.compile(optimizer='rmsprop', loss='binary_crossentropy', metrics=['accuracy'])

# batch_size, train_data_gen and test_data_gen are defined earlier (not shown).
num_training_img = 3522
num_test_img = 881
stepsPerEpoch = num_training_img / batch_size
validationSteps = num_test_img / batch_size
history = model.fit_generator(
        train_data_gen,
        steps_per_epoch=stepsPerEpoch,
        epochs=20,
        validation_data=test_data_gen,
        validation_steps=validationSteps
        )

# Phase 2: unfreeze the top layers (from index 129 onwards) and fine-tune
# with a small learning rate.
layer_num = len(model.layers)
for layer in model.layers[:129]:
    layer.trainable = False

for layer in model.layers[129:]:
    layer.trainable = True

# Recompile so the updated trainable flags take effect, then update the weights.
model.compile(optimizer=SGD(lr=0.0001, momentum=0.9), loss='binary_crossentropy', metrics=['accuracy'])
num_training_img = 3522
num_test_img = 881
stepsPerEpoch = num_training_img / batch_size
validationSteps = num_test_img / batch_size
history = model.fit_generator(
        train_data_gen,
        steps_per_epoch=stepsPerEpoch,
        epochs=20,
        validation_data=test_data_gen,
        validation_steps=validationSteps
        )
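One small detail in the snippet above: 3522 and 881 are not necessarily divisible by batch_size, so the step counts come out as floats. A minimal sketch of the integer version, assuming the same variables, would be:

import math

# Round up so every image is seen at least once per epoch.
stepsPerEpoch = math.ceil(num_training_img / batch_size)
validationSteps = math.ceil(num_test_img / batch_size)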

1 Answer:

Answer 0 (score: 0)

You should make the layers non-trainable before creating the model.

base_model = Xception(include_top=False, weights='imagenet')
for layer in base_model.layers:
    layer.trainable = False
x = base_model.output
x = GlobalAveragePooling2D()(x)
x = Dense(512, activation="relu")(x)
x = Dropout(0.2)(x)
predictions = Dense(2, activation='softmax')(x)

model = Model(inputs=base_model.input, outputs=predictions)

Your last layer has 2 units, which suggests that softmax is more appropriate.

predictions = Dense(2, activation='softmax')(x)

Try with the Adam optimizer and change the loss.

model.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy'])
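One caveat with that change: categorical_crossentropy expects one-hot targets. If your data comes from flow_from_directory, class_mode='categorical' (the default) already provides them; a minimal sketch with a placeholder path:

from tensorflow.keras.preprocessing.image import ImageDataGenerator
from tensorflow.keras.applications.xception import preprocess_input

train_datagen = ImageDataGenerator(preprocessing_function=preprocess_input)
# class_mode='categorical' yields one-hot labels, matching the 2-unit
# softmax output and categorical_crossentropy.
train_data_gen = train_datagen.flow_from_directory(
    'data/train',               # placeholder path
    target_size=(299, 299),
    batch_size=32,
    class_mode='categorical')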