When I compute binary cross-entropy by hand, I apply the sigmoid to get probabilities, then use the cross-entropy formula and take the mean of the results:
import tensorflow as tf

logits = tf.constant([-1, -1, 0, 1, 2.])
labels = tf.constant([0, 0, 1, 1, 1.])
probs = tf.nn.sigmoid(logits)
# Per-element binary cross-entropy: -y*log(p) - (1-y)*log(1-p)
loss = labels * (-tf.math.log(probs)) + (1 - labels) * (-tf.math.log(1 - probs))
print(tf.reduce_mean(loss).numpy()) # 0.35197204
cross_entropy = tf.keras.losses.BinaryCrossentropy(from_logits=True)
loss = cross_entropy(labels, logits)
print(loss.numpy()) # 0.35197204
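For reference, the same value should also be reproducible with the lower-level tf.nn.sigmoid_cross_entropy_with_logits, which fuses the sigmoid and the log so it stays numerically stable for large-magnitude logits (a cross-check sketch, reusing the logits and labels above):

# Fused sigmoid + cross-entropy; avoids computing log(0) explicitly
loss = tf.nn.sigmoid_cross_entropy_with_logits(labels=labels, logits=logits)
print(tf.reduce_mean(loss).numpy())  # ~0.35197204, matching the manual result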
How is categorical cross-entropy computed when the logits and labels have different shapes?
logits = tf.constant([[-3.27133679, -22.6687183, -4.15501118, -5.14916372, -5.94609261,
-6.93373299, -5.72364092, -9.75725174, -3.15748906, -4.84012318],
[-11.7642536, -45.3370094, -3.17252636, 4.34527206, -17.7164974,
-0.595088899, -17.6322937, -2.36941719, -6.82157373, -3.47369862],
[-4.55468369, -1.07379043, -3.73261762, -7.08982277, -0.0288562477,
-5.46847963, -0.979336262, -3.03667569, -3.29502845, -2.25880361]])
labels = tf.constant([2, 3, 4])
loss_object = tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True,
                                                            reduction='none')
loss = loss_object(labels, logits)
print(loss.numpy()) # [2.0077195 0.00928135 0.6800677 ]
print(tf.reduce_mean(loss).numpy()) # 0.8990229
What I mean is: how can I get the same result ([2.0077195 0.00928135 0.6800677 ]) by hand?
@OverLordGoldDragon's answer is correct. In TF 2.0 it looks like this:
loss_object = tf.keras.losses.SparseCategoricalCrossentropy(
    from_logits=True, reduction='none')
loss = loss_object(labels, logits)
print(f'{loss.numpy()}\n{tf.math.reduce_sum(loss).numpy()}')
one_hot_labels = tf.one_hot(labels, 10)
preds = tf.nn.softmax(logits)
preds /= tf.math.reduce_sum(preds, axis=-1, keepdims=True)  # no-op here: softmax already sums to 1
loss = tf.math.reduce_sum(tf.math.multiply(one_hot_labels, -tf.math.log(preds)), axis=-1)
print(f'{loss.numpy()}\n{tf.math.reduce_sum(loss).numpy()}')
# [2.0077195 0.00928135 0.6800677 ]
# 2.697068691253662
# [2.0077198 0.00928142 0.6800677 ]
# 2.697068929672241
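The tiny differences in the last digits come from computing log(softmax(x)) in two separate steps. A numerically stabler manual variant, sketched below with the same labels and logits, uses tf.nn.log_softmax, or lets tf.nn.sparse_softmax_cross_entropy_with_logits gather the correct class directly:

# Stabler manual version: fused log-softmax instead of log(softmax(x))
log_probs = tf.nn.log_softmax(logits, axis=-1)
loss = -tf.math.reduce_sum(one_hot_labels * log_probs, axis=-1)
print(loss.numpy())  # should match the library result to full float32 precision

# Equivalent low-level op that takes the integer labels directly
loss = tf.nn.sparse_softmax_cross_entropy_with_logits(labels=labels, logits=logits)
print(loss.numpy())  # expected: [2.0077195 0.00928135 0.6800677 ]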
For a language model:
vocab_size = 9
seq_len = 6
batch_size = 2
labels = tf.reshape(tf.range(batch_size*seq_len), (batch_size,seq_len))  # (2, 6); values 9-11 exceed vocab_size - 1
logits = tf.random.normal((batch_size,seq_len,vocab_size))  # (2, 6, 9); outputs below are from one random draw
loss_object = tf.keras.losses.SparseCategoricalCrossentropy(
    from_logits=True, reduction='none')
loss = loss_object(labels, logits)
print(f'{loss.numpy()}\n{tf.math.reduce_sum(loss).numpy()}')
one_hot_labels = tf.one_hot(labels, vocab_size)  # out-of-range labels become all-zero rows
preds = tf.nn.softmax(logits)
preds /= tf.math.reduce_sum(preds, axis=-1, keepdims=True)
loss = tf.math.reduce_sum(tf.math.multiply(one_hot_labels, -tf.math.log(preds)), axis=-1)
print(f'{loss.numpy()}\n{tf.math.reduce_sum(loss).numpy()}')
# [[1.341706 3.2518263 2.6482694 3.039099 1.5835983 4.3498387]
# [2.67237 3.3978183 2.8657475 nan nan nan]]
# nan
# [[1.341706 3.2518263 2.6482694 3.039099 1.5835984 4.3498387]
# [2.67237 3.3978183 2.8657475 0. 0. 0. ]]
# 25.1502742767334
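The nan entries appear because tf.range(batch_size*seq_len) produces the labels 9, 10 and 11, which are out of range for vocab_size = 9: tf.one_hot silently turns them into all-zero rows (hence the 0. entries in the manual version), while the sparse loss returns nan. In real language-model code such positions (e.g. padding) are masked out rather than averaged in; a minimal sketch, assuming any label >= vocab_size marks an invalid position:

# Mask out positions whose labels are invalid (here: >= vocab_size)
valid = tf.cast(labels < vocab_size, tf.float32)                   # (2, 6) mask of 0./1.
safe_labels = tf.where(labels < vocab_size, labels, tf.zeros_like(labels))
loss = loss_object(safe_labels, logits) * valid                    # zero loss at masked positions
mean_loss = tf.math.reduce_sum(loss) / tf.math.reduce_sum(valid)   # average over valid tokens only
print(mean_loss.numpy())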
Answer 0 (score: 2):
SparseCategoricalCrossentropy is CategoricalCrossentropy that takes integer labels rather than one-hot labels. Example from the source code; the two below are equivalent:
import numpy as np
import tensorflow as tf
import tensorflow.keras.backend as K

# The backend functions accept from_logits at call time and return per-sample losses
scce = K.sparse_categorical_crossentropy
cce = K.categorical_crossentropy

labels_scce = K.variable([[0, 1, 2]])
labels_cce = K.variable([[1, 0, 0], [0, 1, 0], [0, 0, 1]])
preds = K.variable([[.90, .05, .05], [.50, .89, .60], [.05, .01, .94]])

loss_cce = cce(labels_cce, preds, from_logits=False)
loss_scce = scce(labels_scce, preds, from_logits=False)

with tf.Session() as sess:  # TF1-style graph execution; see the TF 2.0 version above
    sess.run(tf.global_variables_initializer())
    print(sess.run(loss_cce))
    print(sess.run(loss_scce))
# [0.10536055 0.8046684  0.0618754 ]
# [0.10536055 0.8046684  0.0618754 ]
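The "Sparse" prefix amounts to nothing more than the label encoding; the two formats convert into each other with tf.one_hot and tf.argmax (a small standalone illustration):

sparse_labels = tf.constant([0, 1, 2])
onehot_labels = tf.one_hot(sparse_labels, depth=3)  # [[1,0,0],[0,1,0],[0,0,1]]
recovered = tf.argmax(onehot_labels, axis=-1)       # [0, 1, 2]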
As for how to do it "by hand", we can refer to the Numpy backend:
np_labels = K.get_value(labels_cce)
np_preds = K.get_value(preds)

losses = []
for label, pred in zip(np_labels, np_preds):
    pred /= pred.sum(axis=-1, keepdims=True)  # normalize each row to a probability distribution
    losses.append(np.sum(label * -np.log(pred), axis=-1, keepdims=False))
print(losses)
# [0.10536055 0.8046684 0.0618754]
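The loop can also be vectorized in plain NumPy; a one-step sketch over the same arrays:

normed = np_preds / np_preds.sum(axis=-1, keepdims=True)  # row-wise normalization
losses = np.sum(np_labels * -np.log(normed), axis=-1)
print(losses)  # [0.10536055 0.8046684  0.0618754 ]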
- from_logits = True: preds is the model output before it is passed through softmax (so we apply the softmax ourselves)
- from_logits = False: preds is the model output after softmax (so we skip that step); a quick numeric check of the two modes follows the list below

So, in summary, to do the computation by hand:
- pred /= ... normalizes the predictions before the log is taken; this way, high probabilities assigned to zero-labels penalize a correct prediction on the one-label. If from_logits = False, this step is skipped, since softmax already performs the normalization. See this snippet. Further reading
- log (base e) only contributes where label==1, since multiplying by the one-hot labels zeroes out every other term
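To check the two from_logits modes against each other, one can feed raw logits with from_logits=True and their softmax with from_logits=False; a quick sketch with made-up numbers (TF 2.x eager mode):

demo_logits = tf.constant([[2.0, 1.0, 0.1]])
demo_labels = tf.constant([0])
scce_logits = tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True)
scce_probs = tf.keras.losses.SparseCategoricalCrossentropy(from_logits=False)
print(scce_logits(demo_labels, demo_logits).numpy())                # ~0.417
print(scce_probs(demo_labels, tf.nn.softmax(demo_logits)).numpy())  # same value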
Finally, the mathematical formula for categorical cross-entropy is:

$$-\frac{1}{N}\sum_{i=1}^{N}\sum_{c=1}^{C}\mathbb{1}_{y_i \in C_c}\,\log p_{\text{model}}[y_i \in C_c]$$

where:
- i iterates over the N observations
- c iterates over the C classes
- \mathbb{1} is the indicator function: 1 if observation i belongs to class c, and 0 otherwise (this is what the one-hot label vectors implement)
- p_model[y_i \in C_c] is the predicted probability that observation i belongs to class c