Multi-class weighted loss for semantic image segmentation in keras/tensorflow

Date: 2019-12-29 15:40:16

Tags: tensorflow keras deep-learning semantic-segmentation

Given a batch of RGB images as input, shape = (batch_size, width, height, 3),

a multi-class target represented as one-hot, shape = (batch_size, width, height, n_classes),

and a model (Unet, DeepLab) with softmax activation in the last layer,

I am looking for a weighted categorical cross-entropy loss function in Keras/TensorFlow.

The class_weight argument of fit_generator does not seem to work, and I could not find an answer here or in https://github.com/keras-team/keras/issues/2115.

def weighted_categorical_crossentropy(weights):
    # weights = [0.9,0.05,0.04,0.01]
    def wcce(y_true, y_pred):
        # y_true, y_pred shape is (batch_size, width, height, n_classes)
        loss = ?...
        return loss

    return wcce
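
For reference, the one-hot target described above can be built from integer label masks with keras.utils.to_categorical; a minimal sketch with illustrative shapes (not part of the original question):

import numpy as np
from keras.utils import to_categorical

# Hypothetical integer masks, shape (batch_size, width, height), values in [0, n_classes).
masks = np.random.randint(0, 4, size=(8, 128, 128))
y_true = to_categorical(masks, num_classes=4)  # shape (8, 128, 128, 4)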

3 Answers:

Answer 0 (score: 1)

I will answer my own question:

from keras import backend as K

def weighted_categorical_crossentropy(weights):
    # weights = [0.9, 0.05, 0.04, 0.01]  # one weight per class
    def wcce(y_true, y_pred):
        Kweights = K.constant(weights)
        if not K.is_tensor(y_pred):
            y_pred = K.constant(y_pred)
        y_true = K.cast(y_true, y_pred.dtype)
        # Per-pixel cross-entropy, scaled by the weight of each pixel's true
        # class: K.sum(y_true * Kweights, axis=-1) selects that weight from
        # the one-hot target.
        return K.categorical_crossentropy(y_true, y_pred) * K.sum(y_true * Kweights, axis=-1)
    return wcce

Usage:

import keras

weights = [0.9, 0.05, 0.04, 0.01]  # one weight per class
loss = weighted_categorical_crossentropy(weights)
optimizer = keras.optimizers.Adam(lr=0.01)
model.compile(optimizer=optimizer, loss=loss)
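
As a quick sanity check of this loss (not part of the original answer; the toy shapes and values below are illustrative), the weighted loss should equal the ordinary per-pixel cross-entropy scaled by the weight of each pixel's true class:

import numpy as np
from keras import backend as K

# Toy batch: 1 image, 2x2 pixels, 4 classes; every pixel belongs to class 0.
y_true = np.zeros((1, 2, 2, 4), dtype="float32")
y_true[..., 0] = 1.0
y_pred = np.full((1, 2, 2, 4), 0.25, dtype="float32")  # uniform softmax output

loss_fn = weighted_categorical_crossentropy([0.9, 0.05, 0.04, 0.01])
# Unweighted cross-entropy per pixel is -log(0.25) ≈ 1.386; class 0 has
# weight 0.9, so each pixel should contribute ≈ 0.9 * 1.386 ≈ 1.25.
print(K.eval(loss_fn(K.constant(y_true), K.constant(y_pred))))  # shape (1, 2, 2)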

Answer 1 (score: 1)

I am using Generalized Dice Loss. In my case it worked better than weighted categorical cross-entropy. My implementation is in PyTorch, but it should be straightforward to translate.

import torch
import torch.nn as nn

class GeneralizedDiceLoss(nn.Module):
    def __init__(self):
        super(GeneralizedDiceLoss, self).__init__()

    def forward(self, inp, targ):
        # inp holds class probabilities (e.g., after softmax), targ is one-hot.
        # Move from (N, C, H, W) to channels-last (N, H, W, C).
        inp = inp.contiguous().permute(0, 2, 3, 1)
        targ = targ.contiguous().permute(0, 2, 3, 1)

        # Per-class weight: inverse squared volume of each class in the batch.
        w = 1. / (torch.sum(targ, (0, 1, 2))**2 + 1e-9)

        numerator = targ * inp
        numerator = w * torch.sum(numerator, (0, 1, 2))
        numerator = torch.sum(numerator)

        denominator = targ + inp
        denominator = w * torch.sum(denominator, (0, 1, 2))
        denominator = torch.sum(denominator)

        dice = 2. * (numerator + 1e-9) / (denominator + 1e-9)

        return 1. - dice
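
Since the question asks for Keras/TensorFlow, here is one possible translation of the loss above into Keras backend ops. This is a sketch, not part of the original answer, and it assumes y_pred already holds channels-last softmax probabilities of shape (batch_size, width, height, n_classes):

from keras import backend as K

def generalized_dice_loss(y_true, y_pred):
    # Per-class weight: inverse squared volume of each class in the batch.
    w = 1. / (K.square(K.sum(y_true, axis=(0, 1, 2))) + 1e-9)

    numerator = K.sum(w * K.sum(y_true * y_pred, axis=(0, 1, 2)))
    denominator = K.sum(w * K.sum(y_true + y_pred, axis=(0, 1, 2)))

    return 1. - 2. * (numerator + 1e-9) / (denominator + 1e-9)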

Answer 2 (score: 0)

This problem may be similar to Unbalanced data and weighted cross entropy, which has an accepted answer.
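
As a final note, the per-class weights used in these approaches are typically derived from the label frequencies of the training set. A minimal sketch of inverse-frequency weighting (the helper name and the normalization choice are illustrative assumptions, not from the linked answer):

import numpy as np

def class_weights_from_masks(one_hot_masks):
    # one_hot_masks shape: (n_images, width, height, n_classes)
    pixel_counts = one_hot_masks.sum(axis=(0, 1, 2))  # pixels per class
    inv_freq = 1.0 / (pixel_counts + 1e-9)            # rarer class -> larger weight
    return inv_freq / inv_freq.sum()                  # normalize to sum to 1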