U-Net: how to improve the accuracy of multi-class segmentation?

Date: 2020-02-01 18:24:38

Tags: python tensorflow keras conv-neural-network image-segmentation

I have been using U-Net for a while, and I notice that in most of my applications it over-estimates around one specific class.

For example, here is a grayscale input image:

[image: grayscale input]

And the manual segmentation into 3 classes (lesion [green], tissue [magenta], background [everything else]):

[image: manual 3-class segmentation]

The problem I notice in the prediction (over-estimation at the boundaries):

[image: prediction with over-estimation at the boundaries]

The typical architecture I use looks like this:

from keras.models import Model
from keras.layers import (Input, Conv2D, MaxPooling2D, Dropout,
                          UpSampling2D, concatenate, Reshape, Activation)
from keras import callbacks

def get_unet(dim=128, dropout=0.5, n_classes=3):

 inputs = Input((dim, dim, 1))
 conv1 = Conv2D(64, 3, activation='relu', padding='same', kernel_initializer='he_normal')(inputs)
 conv1 = Conv2D(64, 3, activation='relu', padding='same', kernel_initializer='he_normal')(conv1)
 pool1 = MaxPooling2D(pool_size=(2, 2))(conv1)

 conv2 = Conv2D(128, 3, activation='relu', padding='same', kernel_initializer='he_normal')(pool1)
 conv2 = Conv2D(128, 3, activation='relu', padding='same', kernel_initializer='he_normal')(conv2)
 pool2 = MaxPooling2D(pool_size=(2, 2))(conv2)

 conv3 = Conv2D(256, 3, activation='relu', padding='same', kernel_initializer='he_normal')(pool2)
 conv3 = Conv2D(256, 3, activation='relu', padding='same', kernel_initializer='he_normal')(conv3)
 pool3 = MaxPooling2D(pool_size=(2, 2))(conv3)

 conv4 = Conv2D(512, 3, activation='relu', padding='same', kernel_initializer='he_normal')(pool3)
 conv4 = Conv2D(512, 3, activation='relu', padding='same', kernel_initializer='he_normal')(conv4)
 conv4 = Dropout(dropout)(conv4)
 pool4 = MaxPooling2D(pool_size=(2, 2))(conv4)

 conv5 = Conv2D(1024, 3, activation='relu', padding='same', kernel_initializer='he_normal')(pool4)
 conv5 = Conv2D(1024, 3, activation='relu', padding='same', kernel_initializer='he_normal')(conv5)
 conv5 = Dropout(dropout)(conv5)

 up6 = concatenate([UpSampling2D(size=(2, 2))(conv5), conv4], axis=3)
 conv6 = Conv2D(512, 3, activation='relu', padding='same', kernel_initializer='he_normal')(up6)
 conv6 = Conv2D(512, 3, activation='relu', padding='same', kernel_initializer='he_normal')(conv6)

 up7 = concatenate([UpSampling2D(size=(2, 2))(conv6), conv3], axis=3)
 conv7 = Conv2D(256, 3, activation='relu', padding='same', kernel_initializer='he_normal')(up7)
 conv7 = Conv2D(256, 3, activation='relu', padding='same', kernel_initializer='he_normal')(conv7)

 up8 = concatenate([UpSampling2D(size=(2, 2))(conv7), conv2], axis=3)
 conv8 = Conv2D(128, 3, activation='relu', padding='same', kernel_initializer='he_normal')(up8)
 conv8 = Conv2D(128, 3, activation='relu', padding='same', kernel_initializer='he_normal')(conv8)

 up9 = concatenate([UpSampling2D(size=(2, 2))(conv8), conv1], axis=3)
 conv9 = Conv2D(64, 3, activation='relu', padding='same', kernel_initializer='he_normal')(up9)
 conv9 = Conv2D(64, 3, activation='relu', padding='same', kernel_initializer='he_normal')(conv9)

 conv10 = Conv2D(n_classes, (1, 1), activation='relu', padding='same', kernel_initializer='he_normal')(conv9)
 conv10 = Reshape((dim * dim, n_classes))(conv10)

 output = Activation('softmax')(conv10)

 model = Model(inputs=[inputs], outputs=[output])

 return model

Plus:

mgpu_model.compile(optimizer='adadelta', loss='categorical_crossentropy',
                   metrics=['accuracy'], sample_weight_mode='temporal')  

open(p, 'w').write(json_string)  # save the model architecture (JSON) to path p

model_checkpoint = callbacks.ModelCheckpoint(f, save_best_only=True)
reduce_lr_cback = callbacks.ReduceLROnPlateau(
    monitor='val_loss', factor=0.2,
    patience=5, verbose=1,
    min_lr=0.05 * 0.0001)

h = mgpu_model.fit(train_gray, train_masks,
                   batch_size=64, epochs=50,
                   verbose=1, shuffle=True, validation_split=0.2, sample_weight=sample_weights,
                   callbacks=[model_checkpoint, reduce_lr_cback])
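The `sample_weights` array passed to `fit` above is not shown. With `sample_weight_mode='temporal'`, Keras expects one weight per pixel, i.e. shape `(n_samples, dim * dim)` to match the flattened output. A minimal NumPy sketch of how such per-pixel weights might be built from one-hot masks (`make_sample_weights` and the toy class weights are illustrative, not my actual code):

```python
import numpy as np

def make_sample_weights(masks, class_weights):
    """Per-pixel weights for sample_weight_mode='temporal'.

    masks: one-hot ground truth, shape (n_samples, dim*dim, n_classes)
    class_weights: one weight per class, length n_classes
    Returns an array of shape (n_samples, dim*dim).
    """
    class_idx = masks.argmax(axis=-1)            # class index of each pixel
    return np.asarray(class_weights)[class_idx]  # look up each pixel's weight

# Toy example: 1 image of 4 pixels, 3 classes, background down-weighted
masks = np.array([[[1, 0, 0], [0, 1, 0], [0, 0, 1], [1, 0, 0]]], dtype=float)
w = make_sample_weights(masks, [0.1, 1.0, 2.0])
print(w.shape)  # (1, 4)
```

The weights themselves would come from inverse class frequencies (or any re-balancing scheme of your choice) computed over the training masks.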

My question: do you have any insight or suggestions on how to change the architecture or hyperparameters to mitigate the over-estimation? This could even include using a different architecture that may be better suited to more precise segmentation. (Note that I already do class balancing/weighting to compensate for the imbalance in class frequencies.)

1 Answer:

Answer 0 (score: 1)

You can try various loss functions in place of cross-entropy. For multi-class segmentation, overlap-based losses such as Dice loss are a common choice.
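As one concrete example, here is a NumPy sketch of the standard soft Dice formulation (illustrative only; in training you would implement the same formula with Keras backend ops so it is differentiable):

```python
import numpy as np

def soft_dice_loss(y_true, y_pred, eps=1e-7):
    """Soft Dice loss averaged over classes.

    y_true: one-hot ground truth, shape (n_pixels, n_classes)
    y_pred: softmax probabilities, same shape
    Returns 1 - mean Dice coefficient (0 for a perfect prediction).
    """
    intersection = (y_true * y_pred).sum(axis=0)
    denom = y_true.sum(axis=0) + y_pred.sum(axis=0)
    dice_per_class = (2.0 * intersection + eps) / (denom + eps)
    return 1.0 - dice_per_class.mean()

# Perfect prediction -> loss of 0
y = np.array([[1.0, 0.0], [0.0, 1.0]])
print(soft_dice_loss(y, y))
```

Because Dice is computed per class and then averaged, it is less dominated by the majority background class than plain pixel-wise cross-entropy, which is why it is often tried when boundaries are being over-segmented.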

The winner of BraTS 2018 used autoencoder regularization (https://github.com/IAmSuyogJadhav/3d-mri-brain-tumor-segmentation-using-autoencoder-regularization). You could also try that. The idea of the paper is that the model simultaneously learns to encode the features better in a latent space, which in turn helps the model segment.
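The regularization itself amounts to adding a decoder branch that reconstructs the input image, and penalizing its error alongside the segmentation loss (the paper's branch is variational, so it also carries a KL term omitted here). A rough NumPy sketch of the combined objective, where the 0.1 weight mirrors the paper's weighting but should be treated as an assumption to tune:

```python
import numpy as np

def ae_regularized_loss(seg_loss, y_img, y_recon, weight=0.1):
    """Total loss = segmentation loss + weighted reconstruction (MSE) term.

    seg_loss: scalar segmentation loss (e.g. cross-entropy or Dice)
    y_img: the input image
    y_recon: the decoder branch's reconstruction of the input
    weight: strength of the reconstruction regularizer (assumed 0.1)
    """
    recon_loss = np.mean((y_img - y_recon) ** 2)
    return seg_loss + weight * recon_loss
```

In Keras this would correspond to giving the model a second output for the reconstruction and passing per-output losses and `loss_weights` to `compile`.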