Loss function is decreasing, but the metric stays the same?

Asked: 2019-01-18 06:16:55

Tags: python tensorflow keras image-segmentation semantic-segmentation

I am working on medical image segmentation. I have two classes: class 0 is the background and class 1 is the lesion. Since the dataset is highly imbalanced, I use (1 - weighted Dice coefficient) as my loss function and the Dice coefficient as my metric. I have normalized the dataset from 0-255 to 0-1. I am using Keras with the TensorFlow backend to train the model. While training a UNet++ model, my loss decreases every epoch, but my metric stays constant. I cannot understand why the metric stays constant while the loss decreases as expected. Also, I cannot understand why the loss is greater than 1 when the Dice coefficient returns a value between 0 and 1.
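For reference, a minimal sketch of the normalization step mentioned above, assuming the images are stored as 8-bit NumPy arrays (the function name is illustrative, not from the original post):

import numpy as np

def normalize(images: np.ndarray) -> np.ndarray:
    """Scale 8-bit intensities from [0, 255] to [0, 1]."""
    return images.astype("float32") / 255.0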

Here is my loss function:

from keras import backend as K

def dice_loss(y_true, y_pred):
    smooth = 1.
    w1 = 0.3  # weight for the background class (channel 0)
    w2 = 0.7  # weight for the lesion class (channel 1)

    # Dice coefficient for channel 0 (background)
    y_true_f = K.flatten(y_true[..., 0])
    y_pred_f = K.flatten(y_pred[..., 0])
    intersect = K.abs(K.sum(y_true_f * y_pred_f, axis=-1))
    denom = K.abs(K.sum(y_true_f, axis=-1)) + K.abs(K.sum(y_pred_f, axis=-1))
    coef1 = (2 * intersect + smooth) / (denom + smooth)

    # Dice coefficient for channel 1 (lesion)
    y_true_f1 = K.flatten(y_true[..., 1])
    y_pred_f1 = K.flatten(y_pred[..., 1])
    intersect1 = K.abs(K.sum(y_true_f1 * y_pred_f1, axis=-1))
    denom1 = K.abs(K.sum(y_true_f1, axis=-1)) + K.abs(K.sum(y_pred_f1, axis=-1))
    coef2 = (2 * intersect1 + smooth) / (denom1 + smooth)

    weighted_dice_coef = w1 * coef1 + w2 * coef2
    return 1 - weighted_dice_coef

Training loss vs. epochs:

[Figure: training loss vs. epoch]

And here is the metric function:

def dsc(y_true, y_pred):
    """
    Dice similarity coefficient: DSC = 2 * |X ∩ Y| / (|X| + |Y|),
    computed on the lesion channel (channel 1).
    """
    smooth = 1.
    y_true_f = K.flatten(y_true[..., 1])
    y_pred_f = K.flatten(y_pred[..., 1])
    intersect = K.abs(K.sum(y_true_f * y_pred_f, axis=-1))
    denom = K.abs(K.sum(y_true_f, axis=-1)) + K.abs(K.sum(y_pred_f, axis=-1))
    coef = (2 * intersect + smooth) / (denom + smooth)

    return coef
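
For completeness, a minimal sketch of how these two functions might be wired into training. The stand-in model, input shape, and learning rate are illustrative assumptions (the original post uses UNet++), not the poster's actual setup:

from keras.models import Sequential
from keras.layers import Conv2D
from keras.optimizers import Adam

# Hypothetical stand-in model: a single conv layer producing
# 2 channels (background, lesion) with a softmax over channels.
model = Sequential([
    Conv2D(2, (1, 1), activation="softmax", input_shape=(128, 128, 1)),
])

# dice_loss drives the gradients; dsc is only reported as a metric.
model.compile(optimizer=Adam(lr=1e-4), loss=dice_loss, metrics=[dsc])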

2 Answers:

Answer 0 (score: 4)

It looks like you have taken the model code and used it almost intact. Your switch from sigmoid to softmax is a bit suspicious. Are you comparing a one-hot-encoded y_pred from the network against a y_true that is not one-hot encoded? Perhaps you could print the shape of the output layer and compare it to the shape of y_true.
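
A minimal sketch of that shape check, assuming y_true holds integer class indices (the mask array here is made up for illustration; to_categorical is the standard Keras one-hot helper):

import numpy as np
from keras.utils import to_categorical

# Illustrative integer mask batch: 4 images of 128x128, classes {0, 1}.
y_true = np.random.randint(0, 2, size=(4, 128, 128))

# If y_true holds integer class indices, one-hot encode it so its last
# dimension matches the network's 2-channel softmax output.
y_true_onehot = to_categorical(y_true, num_classes=2)

print("y_true shape:        ", y_true.shape)          # (4, 128, 128)
print("one-hot y_true shape:", y_true_onehot.shape)   # (4, 128, 128, 2)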

I used the Tversky index in my semantic segmentation solution because it is a generalization of both the Intersection-over-Union and Sørensen-Dice coefficient calculations, and it lets you emphasize false positives or false negatives more elegantly than the weighted-Dice-coefficient approach, without having to use axis=-1, which I think is the root of your problem. For the loss, I simply negated the Tversky index metric.

from keras import backend as K
from keras.optimizers import Adam

def tversky_index(y_true, y_pred):
    # Generalization of the Dice coefficient:
    #   alpha weights the False Positives
    #   beta weights the False Negatives (our focus)
    #   alpha = beta = 0.5 reduces to Dice
    #   alpha = beta = 1.0 reduces to IoU/Jaccard
    alpha = 0.5
    beta = 0.5
    y_true_f = K.flatten(y_true)
    y_pred_f = K.flatten(y_pred)
    intersection = K.sum(y_true_f * y_pred_f)
    false_pos = K.sum(y_pred_f * (1. - y_true_f))
    false_neg = K.sum((1. - y_pred_f) * y_true_f)
    return intersection / (intersection + alpha * false_pos + beta * false_neg)

def tversky_index_loss(y_true, y_pred):
    return -tversky_index(y_true, y_pred)

learning_rate = 5e-5  # also try 5e-4, 5e-3, depending on your network
optimizer = Adam(lr=learning_rate)
unet_model.compile(optimizer=optimizer,
                   loss=tversky_index_loss,
                   metrics=['accuracy', 'sparse_categorical_accuracy', tversky_index])
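
Given the class imbalance described in the question, one might bias the index toward penalizing false negatives. A hedged sketch of such a variant (the 0.3/0.7 split is an illustrative choice, not from this answer):

def tversky_index_fn_weighted(y_true, y_pred):
    # Illustrative variant: beta > alpha penalizes missed lesion
    # pixels (false negatives) more than false alarms.
    alpha = 0.3
    beta = 0.7
    y_true_f = K.flatten(y_true)
    y_pred_f = K.flatten(y_pred)
    intersection = K.sum(y_true_f * y_pred_f)
    false_pos = K.sum(y_pred_f * (1. - y_true_f))
    false_neg = K.sum((1. - y_pred_f) * y_true_f)
    return intersection / (intersection + alpha * false_pos + beta * false_neg)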

Answer 1 (score: 0)

1. The value of the metric only changes noticeably once the loss has fallen to a low enough level; in image segmentation problems, the two are not positively correlated.

2. The Dice loss can be greater than 1 because the total loss is the sum of the individual losses within a batch.
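
A small sketch illustrating that claim with made-up numbers: if each per-sample loss 1 - dice lies in [0, 1] but the framework sums rather than averages over the batch, the reported value can exceed 1.

import numpy as np

# Hypothetical per-sample Dice losses for a batch of 4 images.
per_sample_loss = np.array([0.9, 0.8, 0.95, 0.85])

print(per_sample_loss.mean())  # 0.875 -> averaging stays within [0, 1]
print(per_sample_loss.sum())   # 3.5   -> summing over the batch exceeds 1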