Keras: Dice coefficient loss function is negative and increasing with epochs

Date: 2018-04-11 22:11:11

Tags: python machine-learning deep-learning keras loss-function

According to this Keras implementation of the Dice coefficient loss function, the loss is the negative of the computed Dice coefficient. The loss should decrease over epochs, but with this implementation I am, naturally, getting a negative loss that decreases with each epoch, i.e. it moves away from 0 toward the negative side instead of approaching 0. Would it be wrong to use (1 - dice coefficient) instead of (-dice coefficient) as the loss? This is the full Keras implementation I am referring to: https://github.com/jocicmarko/ultrasound-nerve-segmentation/blob/master/train.py

from keras import backend as K

# Smoothing term to avoid division by zero when both masks are empty
smooth = 1.

def dice_coef(y_true, y_pred):
    # Flatten both masks and compute the smoothed Dice coefficient
    y_true_f = K.flatten(y_true)
    y_pred_f = K.flatten(y_pred)
    intersection = K.sum(y_true_f * y_pred_f)
    return (2. * intersection + smooth) / (K.sum(y_true_f) + K.sum(y_pred_f) + smooth)


def dice_coef_loss(y_true, y_pred):
    # Loss is the negative Dice coefficient, so it lies in [-1, 0]
    return -dice_coef(y_true, y_pred)

I have shared my experiment log below, although it covers only 2 epochs:

Train on 2001 samples, validate on 501 samples
Epoch 1/2
Epoch 00001: loss improved from inf to -0.73789, saving model to unet.hdf5
 - 3229s - loss: -7.3789e-01 - dice_coef: 0.7379 - val_loss: -7.9304e-01 - val_dice_coef: 0.7930
Epoch 2/2
Epoch 00002: loss improved from -0.73789 to -0.81037, saving model to unet.hdf5
 - 3077s - loss: -8.1037e-01 - dice_coef: 0.8104 - val_loss: -8.2842e-01 - val_dice_coef: 0.8284
predict test data
9/9 [==============================] - 4s 429ms/step
dict_keys(['val_dice_coef', 'loss', 'val_loss', 'dice_coef'])

2 Answers:

Answer 0 (score: 3):

Using 1 - dice_coef or -dice_coef makes no difference for convergence. However, 1 - dice_coef makes monitoring more familiar, because the values lie in the range [0, 1] instead of [-1, 0].
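A minimal sketch of that rescaled loss, assuming the same dice_coef function defined in the question (only the sign convention of dice_coef_loss changes):

def dice_coef_loss(y_true, y_pred):
    # 1 - Dice stays in [0, 1] and still decreases as the predicted overlap improves
    return 1. - dice_coef(y_true, y_pred)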

Answer 1 (score: 0):

I think the correct loss is 1 - dice_coef(y_true, y_pred).
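For reference, a rough sketch of how such a loss and metric are typically passed to Keras; it assumes dice_coef and the 1 - dice_coef version of dice_coef_loss from above, that model is the U-Net built in the linked script, and an illustrative learning rate:

from keras.optimizers import Adam

# dice_coef is monitored as a metric, while 1 - dice_coef drives the optimization
model.compile(optimizer=Adam(lr=1e-5),
              loss=dice_coef_loss,
              metrics=[dice_coef])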