I am trying to optimize a neural network using deviance as a custom loss function. I tried:
# building the model
from tensorflow import keras
from tensorflow.keras.layers import Dense
from tensorflow.keras import backend as KB

model = keras.Sequential()
model.add(Dense(10, input_dim=6, activation="relu"))
model.add(Dense(5, activation="relu"))
model.add(Dense(1, activation="sigmoid"))

# defining the custom loss
def custom_loss():
    def loss(y_true, y_pred):
        return 2. * (KB.log(y_true) - KB.log(y_pred))
    return loss

model.compile(loss=custom_loss(), optimizer='sgd')
model.fit(factorsTrain, yTrain, epochs=2)
But it gives a loss of -inf, so I guess it is not working at all. What am I doing wrong?
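A quick way to see where the -inf comes from: any sample with y_true == 0 puts log(0) = -inf into the loss. This is a minimal NumPy sketch of the same formula, with hypothetical y_true/y_pred values:

```python
import numpy as np

# any y_true entry equal to 0 makes log(y_true) = -inf,
# and that single term poisons the whole loss
y_true = np.array([0.0, 1.0, 0.0])
y_pred = np.array([0.2, 0.9, 0.1])
with np.errstate(divide="ignore"):
    loss_terms = 2.0 * (np.log(y_true) - np.log(y_pred))
# terms 0 and 2 are -inf; only the y_true == 1 term is finite
```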
Edit: I changed the activation of the last layer to exponential to make sure the values stay between 0 and 1. I also noticed that since some of my y_true values (actually, most of them) are equal to 0, I changed the loss function as follows (I also added epsilon, which is 1e-07, to make sure I never compute ln(0)):
# defining the custom loss
from tensorflow.keras import backend as KB

def custom_loss():
    def loss(y_true, y_pred):
        return KB.sqrt(KB.square(2 * (KB.log(y_true + KB.epsilon()) - KB.log(y_pred + KB.epsilon()))))
    return loss
Now I no longer get -inf, but I still get NaN.
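One plausible source of the remaining NaN (an assumption, not confirmed by the post): sqrt(square(x)) equals |x| in the forward pass, but its chain-ruled gradient, x / sqrt(x**2), is 0/0 at x == 0, which autodiff frameworks such as TensorFlow evaluate to NaN during backprop. A NumPy sketch of both facts:

```python
import numpy as np

# forward pass: sqrt(square(x)) is just the absolute value
x = np.array([-3.0, 0.0, 2.5])
forward = np.sqrt(np.square(x))

# backward pass: the chain rule gives x / sqrt(x**2),
# which is 0/0 = NaN exactly where x == 0
with np.errstate(invalid="ignore", divide="ignore"):
    grad = x / np.sqrt(np.square(x))
```

Using KB.abs (or dropping the sqrt/square wrapper entirely) would avoid this undefined gradient at zero.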
Answer 0 (score: 0)
Managed to fix it by changing the formula slightly, to force the values inside the log to be >= 0:
def Deviance_loss():
    def loss(y_true, y_pred):
        # KB.maximum is the elementwise clip; KB.max(y_true, 0) would
        # instead reduce along axis 0 and change the tensor shape
        y_true = KB.maximum(y_true, 0.)
        return KB.sqrt(KB.square(2 * (KB.log(y_true + KB.epsilon()) - KB.log(y_pred + KB.epsilon()))))
    return loss
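To check that the fixed formula stays finite on zero-valued targets, here is a NumPy re-implementation of the same expression (the y_true/y_pred values are made up for illustration):

```python
import numpy as np

EPS = 1e-07  # same value as KB.epsilon()

def deviance_loss_np(y_true, y_pred):
    # clip y_true at 0 elementwise, then |2 * (log(...) - log(...))|
    y_true = np.maximum(y_true, 0.0)
    return np.sqrt(np.square(2.0 * (np.log(y_true + EPS) - np.log(y_pred + EPS))))

y_true = np.array([0.0, 1.0, 0.0])
y_pred = np.array([0.2, 0.9, 0.1])
vals = deviance_loss_np(y_true, y_pred)
# every term is finite, even where y_true == 0
```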