I implemented an MLP with a custom loss function; here is the code:
def custom_loss(groups_id_count):
    print('Computing loss...')
    def listnet_loss(real_labels, predicted_labels):
        start_range = 0
        for group in groups_id_count:
            end_range = start_range + group[1]
            batch_real_labels = real_labels[start_range:end_range]
            batch_predicted_labels = predicted_labels[start_range:end_range]
            loss = -K.sum(get_top_one_probability(batch_real_labels)) * tf.math.log(get_top_one_probability(batch_predicted_labels))
            start_range = end_range
            print('loss: ', loss)
        return loss
    return listnet_loss
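For reference, groups_id_count appears to pair each group id with that group's row count, and the loop slices the batch group by group. A plain-Python sketch of that slicing (the group ids and label values here are made up for illustration):

```python
# Hypothetical groups_id_count: (group_id, number_of_rows_in_group)
groups_id_count = [('q1', 2), ('q2', 3)]
labels = [0.1, 0.9, 0.3, 0.5, 0.2]

start = 0
slices = []
for group in groups_id_count:
    end = start + group[1]          # group[1] is the row count of this group
    slices.append(labels[start:end])
    start = end                     # next group starts where this one ended
```

After the loop, `slices` holds one sub-list per group, covering the batch exactly once.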
The loss printed at every epoch is always 0.0000e+00, and the print statement for the loss variable outputs Tensor("listnet_loss/mul_24:0", shape=(None, None), dtype=float32).
Here is the get_top_one_probability function:
def get_top_one_probability(vector):
    return (K.exp(vector) / K.sum(K.exp(vector)))
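This function is just the softmax over the score vector, i.e. ListNet's top-one probability. A NumPy sketch of the same computation (the max-shift for numerical stability is an addition not in the original):

```python
import numpy as np

def top_one_probability(vector):
    # Softmax: subtracting the max before exp avoids overflow,
    # without changing the result
    shifted = vector - np.max(vector)
    exps = np.exp(shifted)
    return exps / np.sum(exps)

scores = np.array([1.0, 2.0, 3.0])
probs = top_one_probability(scores)
# probs sums to 1 and preserves the ordering of the scores
```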
Update:
The output of get_top_one_probability(batch_predicted_labels) is always: Tensor("listnet_loss/truediv_36:0", shape=(None, 1), dtype=float32)
The output of real_labels is: Tensor("ExpandDims:0", shape=(None, 1), dtype=float32)
The outputs of batch_real_labels and batch_predicted_labels are always: Tensor("listnet_loss/strided_slice:0", shape=(None, 1), dtype=float32)
Update 2:
Using K.shape(real_labels), I noticed the shape is (2,), but I expected the shape to correspond to the number of labels passed to the fit function. Is that right?
Is there a problem with my loss function? Thanks in advance.
Answer 0 (score: 0):
I think the problem is the scope of the loss variable: you never accumulate the loss across the iterations of the loop. Try this code:
def listnet_loss(real_labels, predicted_labels):
    start_range = 0
    loss = 0
    for group in groups_id_count:
        end_range = start_range + group[1]
        batch_real_labels = real_labels[start_range:end_range]
        batch_predicted_labels = predicted_labels[start_range:end_range]
        loss += -K.sum(get_top_one_probability(batch_real_labels)) * tf.math.log(get_top_one_probability(batch_predicted_labels))
        start_range = end_range
    print('loss: ', loss)
    return loss
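As a side note, the ListNet top-one cross-entropy sums over the elementwise product of the two distributions; in the expression above, K.sum wraps only the real-label term, so each group's contribution remains a per-element tensor rather than a scalar. A NumPy sketch of the per-group loss as defined in the ListNet paper (this is an assumption about the intended math, not the answerer's code):

```python
import numpy as np

def softmax(v):
    e = np.exp(v - np.max(v))
    return e / e.sum()

def listnet_group_loss(real, predicted):
    # ListNet top-one cross-entropy: -sum_i P_real(i) * log(P_pred(i)).
    # The sum wraps the elementwise product of the two distributions.
    p_real = softmax(real)
    p_pred = softmax(predicted)
    return -np.sum(p_real * np.log(p_pred))

real = np.array([3.0, 1.0, 0.0])
pred = np.array([2.5, 1.2, 0.1])
loss = listnet_group_loss(real, pred)
```

Cross-entropy is minimized when the predicted distribution matches the real one, so listnet_group_loss(real, real) gives the smallest possible value for a given real vector.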