Keras U-Net + VGG16: all predictions are identical

Date: 2019-05-12 17:29:18

Tags: python keras deep-learning

I am training a U-Net in Keras with VGG16 as the encoder. The model trains well and is learning; I can see the validation scores improving.

However, when I call predict on an image, the matrix I get back contains the same value everywhere.

Here is the model:

import numpy as np
from keras import backend as K
from keras.models import Model
from keras.layers import (Layer, Input, Conv2D, Conv2DTranspose,
                          concatenate, BatchNormalization, Dropout)
from keras.applications.vgg16 import VGG16


class Gray2VGGInput(Layer):
    """Convert a 1-channel input into a 3-channel, ImageNet-mean-centered tensor."""
    def build(self, input_shape):
        self.image_mean = K.variable(value=np.array([103.939, 116.779, 123.68]).reshape([1,1,1,3]).astype('float32'), 
                                     dtype='float32', 
                                     name='imageNet_mean' )
        self.built = True
        return

    def call(self, x):
        #Replicate the grayscale channel three times, then subtract the ImageNet mean
        rgb_x = K.concatenate([x, x, x], axis=-1)
        norm_x = rgb_x - self.image_mean
        return norm_x

    def compute_output_shape(self, input_shape):
        return input_shape[:3] + (3,)


def UNET1_VGG16(img_rows=864, img_cols=1232):
    ''' 
    UNET with pretrained layers from VGG16
    '''
    def upsampleLayer(in_layer, concat_layer, input_size):
        '''
        Upsampling (=Decoder) layer building block
        Parameters
        ----------
        in_layer: input layer
        concat_layer: layer with which to concatenate
        input_size: number of filters used in the convolutions
        '''
        upsample = Conv2DTranspose(input_size, (2, 2), strides=(2, 2), padding='same')(in_layer)    
        upsample = concatenate([upsample, concat_layer])
        conv = Conv2D(input_size, (1, 1), activation='relu', kernel_initializer='he_normal', padding='same')(upsample)
        conv = BatchNormalization()(conv)
        conv = Dropout(0.2)(conv)
        conv = Conv2D(input_size, (1, 1), activation='relu', kernel_initializer='he_normal', padding='same')(conv)
        conv = BatchNormalization()(conv)
        return conv

    #--------
    #INPUT
    #--------
    #batch, height, width, channels
    inputs_1 = Input((img_rows, img_cols, 1))

    #-----------------------
    #INPUT CONVERTER & VGG16
    #-----------------------
    inputs_3 = Gray2VGGInput(name='gray_to_rgb')(inputs_1)  #shape=(img_rows, img_cols, 3)
    base_VGG16 = VGG16(include_top=False, weights='imagenet', input_tensor=inputs_3)

    #--------
    #ENCODER (VGG16 feature maps reused as skip connections)
    #--------
    c1 = base_VGG16.get_layer("block1_conv2").output #(None, 864, 1232, 64)
    c2 = base_VGG16.get_layer("block2_conv2").output #(None, 432, 616, 128) 
    c3 = base_VGG16.get_layer("block3_conv2").output #(None, 216, 308, 256) 
    c4 = base_VGG16.get_layer("block4_conv2").output #(None, 108, 154, 512) 

    #--------
    #BOTTLENECK
    #--------
    c5 = base_VGG16.get_layer("block5_conv2").output #(None, 54, 77, 512)

    #--------
    #DECODER
    #--------
    c6 = upsampleLayer(in_layer=c5, concat_layer=c4, input_size=512)
    c7 = upsampleLayer(in_layer=c6, concat_layer=c3, input_size=256)
    c8 = upsampleLayer(in_layer=c7, concat_layer=c2, input_size=128)
    c9 = upsampleLayer(in_layer=c8, concat_layer=c1, input_size=64)

    #--------
    #DENSE OUTPUT
    #--------
    outputs = Conv2D(1, (1, 1), activation='sigmoid')(c9)

    model = Model(inputs=inputs_1, outputs=outputs)

    #Freeze layers
    for layer in model.layers[:16]:
        layer.trainable = False

    model.summary()

    #fr: helper module (not shown here) providing the Dice loss and metric
    model.compile(optimizer='adam', 
                  loss=fr.diceCoefLoss, 
                  metrics=[fr.diceCoef])

    return model

Then I load the model and call predict:

model = un.UNET1_VGG16()

pth_to_model = PTH_OUTPUT + 'weights__L_01.h5'
model.load_weights(pth_to_model) 

preds = model.predict(X_image_test, verbose=1)

However, the result looks like this:

[[0.4567569 0.4567569 0.4567569 ... 0.4567569 0.4567569 0.4567569]
 [0.4567569 0.4567569 0.4567569 ... 0.4567569 0.4567569 0.4567569]
 [0.4567569 0.4567569 0.4567569 ... 0.4567569 0.4567569 0.4567569]
 ...
 [0.4567569 0.4567569 0.4567569 ... 0.4567569 0.4567569 0.4567569]
 [0.4567569 0.4567569 0.4567569 ... 0.4567569 0.4567569 0.4567569]
 [0.4567569 0.4567569 0.4567569 ... 0.4567569 0.4567569 0.4567569]]

I use the same pipeline with other models that do not contain VGG16, and everything works fine. So I suspect the problem is something related to VGG16, perhaps the input layer where I convert the image into a "fake" RGB image?
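A minimal way to check that suspicion is to build a sub-model that stops at the conversion layer and inspect its output for one test image. The sketch below assumes the model and X_image_test from above are already in memory; 'gray_to_rgb' is the layer name used in the model definition:

import numpy as np
from keras.models import Model

#Sub-model that ends at the custom grayscale-to-RGB layer
probe = Model(inputs=model.input,
              outputs=model.get_layer('gray_to_rgb').output)

fake_rgb = probe.predict(X_image_test[:1])

#If the conversion is correct, the output should show real image structure
#and the three channels should differ only by the per-channel ImageNet mean.
print(fake_rgb.shape, fake_rgb.min(), fake_rgb.max())
print(np.allclose(fake_rgb[..., 0] + 103.939, fake_rgb[..., 1] + 116.779))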

2 Answers:

Answer 0 (score: 2)

The problem lies in the frozen VGG layers. If your dataset is very different from ImageNet, you should probably train the whole model end to end. Also, BatchNormalization layers are known to behave strangely when frozen. For reference, see this discussion.
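As an illustration, here is a minimal sketch of what training end to end could look like with the UNET1_VGG16 builder from the question; the optimizer settings, epoch count, and the X_train/y_train/X_val/y_val arrays are placeholders, not part of the original answer:

from keras.optimizers import Adam

model = UNET1_VGG16()

#Unfreeze everything, including the pretrained VGG16 blocks
for layer in model.layers:
    layer.trainable = True

#Recompile so the new trainable flags take effect; a small learning rate
#helps keep the pretrained weights from being destroyed early on
model.compile(optimizer=Adam(lr=1e-5),
              loss=fr.diceCoefLoss,
              metrics=[fr.diceCoef])

#X_train/y_train/X_val/y_val stand in for your own training data
model.fit(X_train, y_train,
          validation_data=(X_val, y_val),
          batch_size=1, epochs=20)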

Answer 1 (score: 1)

If you train the network with a specific preprocessing step (for example, subtracting a mean), then at test time you must apply exactly the same preprocessing (subtract that same mean), because prediction also runs a forward pass through the network just like training does.

This should solve your problem.
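In this particular model the mean subtraction happens inside the Gray2VGGInput layer, so what has to match between training and prediction is the raw value range of the grayscale input. Below is a small sanity check, under the assumption that the training images were fed in as raw values in [0, 255] (the training pipeline is not shown in the question):

#The test batch should be in the same range the model saw during training;
#data already rescaled to [0, 1] would shift the VGG16 features drastically.
print(X_image_test.dtype, X_image_test.min(), X_image_test.max())

#Example: undoing an accidental 1/255 rescale before predicting
if X_image_test.max() <= 1.0:
    X_image_test = (X_image_test * 255.0).astype('float32')

preds = model.predict(X_image_test, verbose=1)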