Weighted cost function in TensorFlow

Time: 2017-08-16 14:31:17

Tags: tensorflow

I am trying to introduce weighting into the following cost function:

_cost = tf.reduce_mean(tf.nn.sparse_softmax_cross_entropy_with_logits(logits=_logits, labels=y))

But without having to implement the softmax cross-entropy myself. So I was considering splitting the cost calculation into cost1 and cost2, and feeding a modified version of the logits and y values into each.

I want to do something like the following, but am not sure what the correct code is:

mask = (y == 0)
y0 = tf.boolean_mask(y, mask) * y1Weight

(This gives an error saying that the mask cannot be a scalar.)
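A likely cause, assuming y is a tf.Tensor: in TF 1.x the Python expression y == 0 does not build an element-wise comparison op, it just evaluates to a single Python bool, so tf.boolean_mask receives a scalar mask. A minimal sketch that removes that particular error by using tf.equal instead (this only fixes the mask; the answers below weight the per-example losses rather than the labels):

mask = tf.equal(y, 0)                      # element-wise comparison, shape (batch_size,), dtype bool
y0 = tf.boolean_mask(y, mask) * y1Weight   # the label-0 entries, scaled by y1Weight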

2 Answers:

Answer 0 (score: 1)

The weight mask can be computed using tf.where. Here is an example of a weighted cost:

batch_size = 100
y1Weight = 0.25
y0Weight = 0.75


_logits = tf.Variable(tf.random_normal(shape=(batch_size, 2), stddev=1.))
y = tf.random_uniform(shape=(batch_size,), maxval=2, dtype=tf.int32)

_cost = tf.nn.sparse_softmax_cross_entropy_with_logits(logits=_logits, labels=y)

# Weight mask: the weight for label=0 is y0Weight and for label=1 is y1Weight
y_w = tf.where(tf.cast(y, tf.bool), tf.ones((batch_size,)) * y1Weight, tf.ones((batch_size,)) * y0Weight)

# New weighted cost
cost_w = tf.reduce_mean(tf.multiply(_cost, y_w))

As @user1761806 suggested, the simpler solution is to use tf.losses.sparse_softmax_cross_entropy(), which allows weighting of the classes.
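A minimal sketch of that approach, assuming the same y, _logits, y0Weight, and y1Weight as above. Note that the weights argument of tf.losses.sparse_softmax_cross_entropy is per-example, so the class weights are first gathered by label:

# Per-class weights, indexed by label value (class 0 -> y0Weight, class 1 -> y1Weight)
class_weights = tf.constant([y0Weight, y1Weight])

# Look up each example's class weight by its integer label
sample_weights = tf.gather(class_weights, y)  # shape (batch_size,)

# The built-in loss applies the per-example weights and reduces over the batch
cost_w = tf.losses.sparse_softmax_cross_entropy(labels=y, logits=_logits,
                                                weights=sample_weights)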

Answer 1 (score: 0)

You can compute the weighted cost as follows: use a predefined weights_per_class tensor of shape (num_classes, 1), and one_hot encoding for the labels.

# Labels here should be one-hot with shape (batch_size, num_classes),
# e.g. y_one_hot = tf.one_hot(y, num_classes)
_cost = tf.nn.softmax_cross_entropy_with_logits(logits=_logits, labels=y_one_hot)

# Here you can define a deterministic weights tensor.
# weights_per_class = tf.constant(np.array([y0weights, y1weights, ...]))
weights_per_class = tf.random_normal(shape=(num_classes, 1), dtype=tf.float32)

# Each row of y_one_hot has a single 1, so this matmul selects
# each example's class weight, giving shape (batch_size, 1)
sample_weights = tf.matmul(y_one_hot, weights_per_class)

# Use the per-example weights to compute the weighted loss
_weighted_cost = tf.reduce_mean(tf.squeeze(sample_weights, axis=1) * _cost)
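The matmul here is effectively a table lookup: it produces the same per-example weights as the tf.gather lookup in the first answer, just via the one-hot labels instead of the integer labels.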