I wrote a custom loss function for my Keras model:
import tensorflow as tf

def sparse_cross_entropy(y_true, y_pred):
    """
    Calculate the cross-entropy loss between y_true and y_pred.

    y_true is a 2-rank tensor with the desired output.
    The shape is [batch_size, sequence_length] and it
    contains sequences of integer-tokens.

    y_pred is the decoder's output which is a 3-rank tensor
    with shape [batch_size, sequence_length, num_words]
    so that for each sequence in the batch there is a one-hot
    encoded array of length num_words.
    """
    # Calculate the loss. This outputs a
    # 2-rank tensor of shape [batch_size, sequence_length].
    loss = tf.nn.sparse_softmax_cross_entropy_with_logits(labels=y_true,
                                                          logits=y_pred)

    # Keras may reduce this across the first axis (the batch),
    # but the semantics are unclear, so to be sure we use
    # the loss across the entire 2-rank tensor, we reduce it
    # to a single scalar with the mean function.
    loss_mean = tf.reduce_mean(loss)

    return loss_mean
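For context, the loss is attached to the model roughly like this (a minimal sketch; the model object name `model`, the optimizer, and the file name are assumptions, not my exact code):

# Compile the model with the custom loss and save it to an HDF5 file.
# `model` stands in for the actual (assumed) Keras model here.
model.compile(optimizer='rmsprop', loss=sparse_cross_entropy)
model.save('model.h5')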
But when I save the model to a .h5 file and then load it back, it reports that 'sparse_cross_entropy' is not defined.
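From what I understand, Keras cannot resolve a custom loss by name on its own, so the function has to be passed via custom_objects when loading. A minimal sketch of what I mean (assuming tf.keras and the file name model.h5; with standalone Keras the import would come from keras.models instead):

from tensorflow.keras.models import load_model

# Loading without custom_objects triggers the "sparse_cross_entropy is
# not defined" error; passing the function explicitly tells Keras how
# to deserialize the custom loss.
model = load_model('model.h5',
                   custom_objects={'sparse_cross_entropy': sparse_cross_entropy})

Is this the right way to handle it, or am I missing something else?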