How to implement multi-class hinge loss in TensorFlow

Asked: 2016-04-28 02:39:07

Tags: neural-network tensorflow

I want to implement a multi-class hinge loss in TensorFlow. The formula is as follows:

[image: multi-class hinge loss function]

I find it hard to get the second-highest predicted score when the prediction is already correct. I tried to compute it with tf.nn.top_k, but unfortunately tf.nn.top_k did not implement a gradient op. So how can I implement this?
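The loss image is not reproduced in this copy of the page. From the description above (only the highest-scoring wrong class matters, hence the need for the second maximum when the prediction is correct), the intended loss is presumably the Crammer–Singer multi-class hinge:

```latex
\ell(x, y) = \max\Bigl(0,\; 1 + \max_{j \neq y} s_j(x) - s_y(x)\Bigr)
```

where \(s_j(x)\) is the unscaled score for class \(j\) and \(y\) is the true class.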

3 Answers:

Answer 0 (score: 3)

top_k does have a gradient; it was added in version 0.8 here.

Answer 1 (score: 2)

Adding another implementation in three lines of code.

scores: unscaled scores, tensor, shape=(n_classes, batch_size), dtype=float32
classes: one-hot labels, tensor, shape=(n_classes, batch_size), dtype=float32

This implements the loss above by selecting the most-violating (hardest negative) class instead of considering all classes:

# H - hardest negative (highest-scoring wrong class) for each sample
H = tf.reduce_max(scores * (1 - classes), 0)
# hinge margin, kept only at the true-class position by the one-hot mask
L = tf.nn.relu((1 - scores + H) * classes)
# reduce_max picks out the true-class entry; then average over the batch
final_loss = tf.reduce_mean(tf.reduce_max(L, 0))
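As a sanity check, here is a NumPy sketch of the same computation (the function name and example values are mine, not from the answer):

```python
import numpy as np

def hard_negative_hinge(scores, classes):
    """Max-violation multi-class hinge loss, mirroring the TF code above.

    scores:  (n_classes, batch_size) unscaled scores
    classes: (n_classes, batch_size) one-hot labels
    """
    # highest score among the wrong classes, per sample
    H = np.max(scores * (1 - classes), axis=0)
    # hinge margin, nonzero only at the true-class position
    L = np.maximum(0.0, (1 - scores + H) * classes)
    # max over the class axis picks the true-class entry; mean over the batch
    return np.mean(np.max(L, axis=0))

# two samples, three classes (columns are samples)
scores = np.array([[2.0, 1.0],
                   [1.0, 1.2],
                   [0.0, 0.3]])
classes = np.array([[1.0, 0.0],
                    [0.0, 1.0],
                    [0.0, 0.0]])
# sample 0: margin 2.0 - 1.0 >= 1, loss 0
# sample 1: margin 1.2 - 1.0 < 1, loss 1 - 1.2 + 1.0 = 0.8
loss = hard_negative_hinge(scores, classes)   # -> 0.4
```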

Another implementation, in which we sum over all the negative classes:

# implements loss as sum_(j~=y) max(0, 1 - s(x, y) + s(x, j))
def multiclasshingeloss1(scores, classes):
    true_classes = tf.argmax(classes, 0)
    # flat index of the true-class score for each sample
    idx_flattened = tf.range(0, scores.get_shape()[1]) * scores.get_shape()[0] + \
                    tf.cast(true_classes, dtype=tf.int32)
    true_scores = tf.gather(tf.reshape(tf.transpose(scores), [-1]),
                            idx_flattened)
    # margin against every class; (1 - classes) masks out the true class
    L = tf.nn.relu((1 - true_scores + scores) * (1 - classes))
    final_loss = tf.reduce_mean(L)
    return final_loss

Depending on your implementation, you can cut down on the transposes here.
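The gather-based indexing above can again be checked with a NumPy sketch (function name and example values are mine). Note that the TF code averages over all (class, sample) entries rather than summing per sample:

```python
import numpy as np

def sum_negatives_hinge(scores, classes):
    """All-negatives multi-class hinge loss, mirroring the TF code above.

    scores:  (n_classes, batch_size) unscaled scores
    classes: (n_classes, batch_size) one-hot labels
    """
    true_classes = np.argmax(classes, axis=0)                     # (batch_size,)
    # score of the true class for each sample (fancy indexing replaces
    # the flattened-index gather in the TF version)
    true_scores = scores[true_classes, np.arange(scores.shape[1])]
    # margin against every class; (1 - classes) masks out the true class
    L = np.maximum(0.0, (1 - true_scores + scores) * (1 - classes))
    return np.mean(L)

# same toy example: columns are samples
scores = np.array([[2.0, 1.0],
                   [1.0, 1.2],
                   [0.0, 0.3]])
classes = np.array([[1.0, 0.0],
                    [0.0, 1.0],
                    [0.0, 0.0]])
# nonzero margins: 0.8 and 0.1 (sample 1), averaged over all 6 entries
loss = sum_negatives_hinge(scores, classes)   # -> 0.15
```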

Answer 2 (score: 1)

My implementation is below, but I think there must be a more efficient one.

logits: unscaled scores, tensor, shape=(batch_size, n_classes)
label: tensor, shape=(batch_size,)
batch_size, n_classes: int

def multi_class_hinge_loss(logits, label, batch_size, n_classes):
    # get the logit of the correct class for each sample
    flat_logits = tf.reshape(logits, (-1,))
    correct_id = tf.range(0, batch_size) * n_classes + label
    correct_logit = tf.gather(flat_logits, correct_id)

    # get the maximum wrong logit: if the top score belongs to the true
    # class, take the second-highest score instead
    max_label = tf.argmax(logits, 1)
    top2, _ = tf.nn.top_k(logits, k=2, sorted=True)
    top2 = tf.split(1, 2, top2)   # TF 0.x split signature
    for i in xrange(2):
        top2[i] = tf.reshape(top2[i], (batch_size, ))
    wrong_max_logit = tf.select(tf.equal(max_label, label), top2[1], top2[0])

    # multi-class hinge loss against the hardest negative
    return tf.reduce_mean(tf.maximum(0., 1. + wrong_max_logit - correct_logit))
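A NumPy sketch of the same second-maximum trick (function name and example values are mine); on the same inputs it agrees with the hard-negative formulation from Answer 1:

```python
import numpy as np

def multi_class_hinge_loss_np(logits, label):
    """NumPy sketch of the top-2 approach above.

    logits: (batch_size, n_classes) unscaled scores
    label:  (batch_size,) integer class ids
    """
    batch = np.arange(logits.shape[0])
    correct_logit = logits[batch, label]
    # top-2 scores per row, descending
    top2 = np.sort(logits, axis=1)[:, ::-1][:, :2]
    max_label = np.argmax(logits, axis=1)
    # if the prediction is already correct, the hardest wrong score
    # is the second maximum; otherwise it is the maximum itself
    wrong_max_logit = np.where(max_label == label, top2[:, 1], top2[:, 0])
    return np.mean(np.maximum(0.0, 1.0 + wrong_max_logit - correct_logit))

# rows are samples here (note the transposed layout vs. Answer 1)
logits = np.array([[2.0, 1.0, 0.0],
                   [1.0, 1.2, 0.3]])
label = np.array([0, 1])
loss = multi_class_hinge_loss_np(logits, label)   # -> 0.4
```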