Unable to explain model behavior in transfer learning

Asked: 2020-03-31 10:05:15

Tags: tensorflow keras deep-learning

My image dataset is very small (747 images for training and 250 for testing, all resized to 256 x 256). The task is multi-label classification (two infections can in principle occur at the same time, although my training data contains no such cases).

Since my dataset is small, I decided to use transfer learning with VGG16 and InceptionV3. When I train VGG16, everything behaves as the theory predicts: training loss and validation loss keep decreasing and their values never differ much, as shown in Figure 1.

VGG16: batch_size=32, learning_rate=0.00001

When I train InceptionV3, the model seems to overfit, but I am not sure, because the training loss is around 0.6 while the validation loss is roughly 10 times the training loss, as shown in Figure 2.

InceptionV3: batch_size=8, learning_rate=0.00001

Both models have 3 dense layers added on top. I attach the code for reference. I cannot explain why the much larger model (VGG) does not overfit this dataset while InceptionV3 does. May I have some suggestions on what is wrong with InceptionV3?

# Keras imports used by both model builders
from keras.applications import VGG16, InceptionV3
from keras.layers import (Activation, BatchNormalization, Dense, Dropout,
                          Flatten, GlobalAveragePooling2D, Input)
from keras.models import Model
from keras.optimizers import Adam

def xvgg16(self, height, width, depth, num_class, hparams):
        """
        This function defines transfer learning for vgg16

        Parameters
        ----------
        height : Integer
            Image height (pixel)
        width : Integer
            Image width (pixel)
        depth : Integer
            Image channel
        num_class : Integer
            Number of class labels
        hparams: Dictionary
            Hyperparameters

        Returns
        -------
        model : Keras model object
            The transfer model

        """
        input_tensor = Input(shape=(height, width, depth))
        pretrain = VGG16(weights="imagenet", include_top=False, input_tensor=input_tensor)

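        # Grab handles to the pretrained conv/pool layers so the graph can be
        # rebuilt below with a BatchNormalization layer after every conv layer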
        conv1_1 = pretrain.layers[1]
        conv1_2 = pretrain.layers[2]
        pool1 = pretrain.layers[3]
        conv2_1 = pretrain.layers[4]
        conv2_2 = pretrain.layers[5]
        pool2 = pretrain.layers[6]
        conv3_1 = pretrain.layers[7]
        conv3_2 = pretrain.layers[8]
        conv3_3 = pretrain.layers[9]
        pool3 = pretrain.layers[10]
        conv4_1 = pretrain.layers[11]
        conv4_2 = pretrain.layers[12]
        conv4_3 = pretrain.layers[13]
        pool4 = pretrain.layers[14]
        conv5_1 = pretrain.layers[15]
        conv5_2 = pretrain.layers[16]
        conv5_3 = pretrain.layers[17]
        pool5 = pretrain.layers[18]

        x = BatchNormalization(axis=-1)(conv1_1.output)
        x = conv1_2(x)
        x = BatchNormalization(axis=-1)(x)
        x = pool1(x)
        x = conv2_1(x)
        x = BatchNormalization(axis=-1)(x)
        x = conv2_2(x)
        x = BatchNormalization(axis=-1)(x)
        x = pool2(x)
        x = conv3_1(x)
        x = BatchNormalization(axis=-1)(x)
        x = conv3_2(x)
        x = BatchNormalization(axis=-1)(x)
        x = conv3_3(x)
        x = BatchNormalization(axis=-1)(x)
        x = pool3(x)
        x = conv4_1(x)
        x = BatchNormalization(axis=-1)(x)
        x = conv4_2(x)
        x = BatchNormalization(axis=-1)(x)
        x = conv4_3(x)
        x = BatchNormalization(axis=-1)(x)
        x = pool4(x)
        x = conv5_1(x)
        x = BatchNormalization(axis=-1)(x)
        x = conv5_2(x)
        x = BatchNormalization(axis=-1)(x)
        x = conv5_3(x)
        x = BatchNormalization(axis=-1)(x)
        x = pool5(x)

        x = Flatten()(x)
        x = Dense(64, use_bias=False)(x)
        x = Dropout(0.25)(x)
        x = BatchNormalization(axis=-1)(x)
        x = Activation("relu")(x)

        x = Dense(64, use_bias=False)(x)
        x = Dropout(0.25)(x)
        x = BatchNormalization(axis=-1)(x)
        x = Activation("relu")(x)

        x = Dense(64, use_bias=False)(x)
        x = Dropout(0.25)(x)
        x = BatchNormalization(axis=-1)(x)
        x = Activation("relu")(x)

        x = Dense(num_class)(x)
        x = Activation("sigmoid")(x)

        model = Model(inputs=pretrain.layers[0].input, outputs=x)

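        # Freeze only the pretrained conv layers; the interleaved BatchNorm
        # layers and the new Dense head stay trainable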
        for layer in model.layers:
            if "conv" in layer.name:
                layer.trainable = False    

        model.compile(loss="binary_crossentropy", optimizer=Adam(lr=hparams["learning_rate"]), metrics=["binary_accuracy"])

        return model

def inception3(self, height, width, depth, num_class, hparams):
        """
        This function defines transfer learning for InceptionV3

        Parameters
        ----------
        height : Integer
            Image height (pixel)
        width : Integer
            Image width (pixel)
        depth : Integer
            Image channel
        num_class : Integer
            Number of class labels
        hparams: Dictionary
            Hyperparameters

        Returns
        -------
        model : Keras model object
            The transfer model
        """
        input_tensor = Input(shape=(height, width, depth))
        pretrain = InceptionV3(weights="imagenet", include_top=False, input_tensor=input_tensor)

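        # Keep the pretrained Inception graph unchanged and attach a new
        # classification head on top of global average pooling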
        x = pretrain.output
        x = GlobalAveragePooling2D()(x)

        x = Dense(64, use_bias=False)(x)
        x = Dropout(0.25)(x)
        x = BatchNormalization(axis=-1)(x)
        x = Activation("relu")(x)

        x = Dense(64, use_bias=False)(x)
        x = Dropout(0.25)(x)
        x = BatchNormalization(axis=-1)(x)
        x = Activation("relu")(x)

        x = Dense(64, use_bias=False)(x)
        x = Dropout(0.25)(x)
        x = BatchNormalization(axis=-1)(x)
        x = Activation("relu")(x)

        x = Dense(num_class)(x)
        x = Activation("sigmoid")(x)

        model = Model(inputs=pretrain.input, outputs=x)

        for layer in pretrain.layers:
            layer.trainable = False

        model.compile(loss="binary_crossentropy", optimizer=Adam(lr=hparams["learning_rate"]), metrics=["binary_accuracy"])

        return model
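
For reference, a minimal sketch of how the two builders can be invoked and trained with the hyperparameters quoted above; the Trainer class name, the epoch count and the X_train/y_train/X_val/y_val arrays are placeholders, not part of my actual pipeline:

# Hypothetical driver code; Trainer is assumed to be the class holding
# the xvgg16/inception3 methods, and num_class=2 matches the two infections.
trainer = Trainer()
hparams = {"learning_rate": 0.00001}

model = trainer.inception3(256, 256, 3, num_class=2, hparams=hparams)
model.fit(X_train, y_train, batch_size=8, epochs=50,
          validation_data=(X_val, y_val))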

1 Answer:

Answer 0 (score: 0)

You should know that both the VGG and Inception Keras models are pretrained on ImageNet, but they use different preprocessing functions.

While VGG's preprocessing keeps pixel values on roughly the original (0, 255) scale (it only swaps channels to BGR and subtracts the per-channel ImageNet means), Inception_v3's preprocessing rescales pixel values to the (-1, 1) range.

So when training VGG, you should first preprocess the input images as follows:

from keras.applications.vgg16 import preprocess_input
X_train = ... # read your training images
X_train = preprocess_input(X_train)
print(X_train.max(), X_train.min(), X_train.mean())

You will see that the max, min and mean pixel values stay on roughly the original (0, 255) scale rather than being rescaled.

For Inception_v3, you should do the following instead:

from keras.applications.inception_v3 import preprocess_input
X_train = ... # read your training images
X_train = preprocess_input(X_train)
print(X_train.max(), X_train.min(), X_train.mean())

Here, the values will be between -1 and 1.

In your current code, this is why VGG works: your images are on the 0 to 255 pixel scale that the VGG model expects. It does not work for InceptionV3, because that model expects pixel values between -1 and 1.
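
A minimal sketch of one way to apply this, assuming you feed your arrays through an ImageDataGenerator (the generator and the X_train/y_train names are placeholders; adapt them to however you actually read your images):

from keras.applications.inception_v3 import preprocess_input
from keras.preprocessing.image import ImageDataGenerator

# Give each model the preprocessing it was pretrained with; for the VGG
# model, import preprocess_input from keras.applications.vgg16 instead.
datagen = ImageDataGenerator(preprocessing_function=preprocess_input)
train_gen = datagen.flow(X_train, y_train, batch_size=8)
model.fit_generator(train_gen, steps_per_epoch=len(X_train) // 8, epochs=50)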

Hope this helps.