Weighting the DeepLabV3 loss function

Time: 2019-11-15 17:27:08

Tags: tensorflow deeplab

I am using the DeepLabV3+ repository, and I noticed that loss_weight is set to 1.0, which means all classes are weighted the same way. However, I have a very imbalanced dataset, roughly 80% negative and 20% positive pixels.
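For reference, the 0.2 / 0.8 weights requested below are exactly the normalized inverse class frequencies for an 80/20 split; a quick sanity-check sketch (plain NumPy, not part of the DeepLab code):

import numpy as np

# Pixel fraction per class: [negative, positive], i.e. the 80/20 imbalance above.
class_freq = np.array([0.8, 0.2])
inv_freq = 1.0 / class_freq                 # [1.25, 5.0]
class_weights = inv_freq / inv_freq.sum()   # [0.2, 0.8]
print(class_weights)                        # -> [0.2 0.8]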


Here is the loss function they use, and you can see that loss_weight is 1.0:

def add_softmax_cross_entropy_loss_for_each_scale(scales_to_logits,
                                                  labels,
                                                  num_classes,
                                                  ignore_label,
                                                  loss_weight=1.0,
                                                  upsample_logits=True,
                                                  hard_example_mining_step=0,
                                                  top_k_percent_pixels=1.0,
                                                  scope=None):
  """Adds softmax cross entropy loss for logits of each scale.

  Args:
    scales_to_logits: A map from logits names for different scales to logits.
      The logits have shape [batch, logits_height, logits_width, num_classes].
    labels: Groundtruth labels with shape [batch, image_height, image_width, 1].
    num_classes: Integer, number of target classes.
    ignore_label: Integer, label to ignore.
    loss_weight: Float, loss weight.
    upsample_logits: Boolean, upsample logits or not.
    hard_example_mining_step: An integer, the training step in which the hard
      example mining kicks off. Note that we gradually reduce the mining
      percent to the top_k_percent_pixels. For example, if
      hard_example_mining_step = 100K and top_k_percent_pixels = 0.25, then
      mining percent will gradually reduce from 100% to 25% until 100K steps
      after which we only mine top 25% pixels.
    top_k_percent_pixels: A float, the value lies in [0.0, 1.0]. When its value
      < 1.0, only compute the loss for the top k percent pixels (e.g., the top
      20% pixels). This is useful for hard pixel mining.
    scope: String, the scope for the loss.

  Raises:
    ValueError: Label or logits is None.
  """
  if labels is None:
    raise ValueError('No label for softmax cross entropy loss.')

  for scale, logits in six.iteritems(scales_to_logits):
    loss_scope = None
    if scope:
      loss_scope = '%s_%s' % (scope, scale)

    if upsample_logits:
      # Label is not downsampled, and instead we upsample logits.
      logits = tf.image.resize_bilinear(
          logits,
          preprocess_utils.resolve_shape(labels, 4)[1:3],
          align_corners=True)
      scaled_labels = labels
    else:
      # Label is downsampled to the same size as logits.
      scaled_labels = tf.image.resize_nearest_neighbor(
          labels,
          preprocess_utils.resolve_shape(logits, 4)[1:3],
          align_corners=True)

    scaled_labels = tf.reshape(scaled_labels, shape=[-1])
    not_ignore_mask = tf.to_float(
        tf.not_equal(scaled_labels, ignore_label)) * loss_weight
    one_hot_labels = tf.one_hot(
        scaled_labels, num_classes, on_value=1.0, off_value=0.0)
    if top_k_percent_pixels == 1.0:
      # Compute the loss for all pixels.
      tf.compat.v1.losses.softmax_cross_entropy(
          one_hot_labels,
          tf.reshape(logits, shape=[-1, num_classes]),
          weights=not_ignore_mask,
          scope=loss_scope)
    else:
      logits = tf.reshape(logits, shape=[-1, num_classes])
      weights = not_ignore_mask
      with tf.name_scope(loss_scope, 'softmax_hard_example_mining',
                         [logits, one_hot_labels, weights]):
        one_hot_labels = tf.stop_gradient(
            one_hot_labels, name='labels_stop_gradient')
        pixel_losses = tf.nn.softmax_cross_entropy_with_logits_v2(
            labels=one_hot_labels,
            logits=logits,
            name='pixel_losses')
        weighted_pixel_losses = tf.multiply(pixel_losses, weights)
        num_pixels = tf.to_float(tf.shape(logits)[0])
        # Compute the top_k_percent pixels based on current training step.
        if hard_example_mining_step == 0:
          # Directly focus on the top_k pixels.
          top_k_pixels = tf.to_int32(top_k_percent_pixels * num_pixels)
        else:
          # Gradually reduce the mining percent to top_k_percent_pixels.
          global_step = tf.to_float(tf.train.get_or_create_global_step())
          ratio = tf.minimum(1.0, global_step / hard_example_mining_step)
          top_k_pixels = tf.to_int32(
              (ratio * top_k_percent_pixels + (1.0 - ratio)) * num_pixels)
        top_k_losses, _ = tf.nn.top_k(weighted_pixel_losses,
                                      k=top_k_pixels,
                                      sorted=True,
                                      name='top_k_percent_pixels')
        total_loss = tf.reduce_sum(top_k_losses)
        num_present = tf.reduce_sum(
            tf.to_float(tf.not_equal(top_k_losses, 0.0)))
        loss = _div_maybe_zero(total_loss, num_present)
        tf.losses.add_loss(loss)


I would like to give my negative class a weight of 0.2 and my positive (predicted) class a weight of 0.8. Does anyone know how to do this, or any repo/example where it has been done before?

Thanks

1 Answer:

Answer 0: (score: 3)

You can change the weights in "train_utils.py" in the "utils" folder.

In "def add_softmax_cross_entropy_loss_for_each_scale(...)"

Something like this:

for scale, logits in six.iteritems(scales_to_logits):
  loss_scope = None
  ignore_weight = 0
  label0_weight = 0.2  # I don't know your labels order...
  label1_weight = 0.8  # I don't know your labels order...

You can also change not_ignore_mask like this:

not_ignore_mask = (
    tf.to_float(tf.equal(scaled_labels, 0)) * label0_weight +
    tf.to_float(tf.equal(scaled_labels, 1)) * label1_weight +
    tf.to_float(tf.equal(scaled_labels, ignore_label)) * ignore_weight)
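The mask built this way ends up as the weights=not_ignore_mask argument of tf.compat.v1.losses.softmax_cross_entropy (see the loss function quoted in the question), so each pixel's loss is scaled by the weight of its class, with 0 for ignored pixels. A minimal standalone sketch with toy, hypothetical values (not code from the repo; tf.cast is used in place of the deprecated tf.to_float):

import tensorflow as tf

# Hypothetical flattened labels for 6 pixels: 0 = negative, 1 = positive, 255 = ignore.
scaled_labels = tf.constant([0, 0, 1, 1, 0, 255])
logits = tf.random.uniform([6, 2])  # 6 pixels, 2 classes
ignore_label = 255
label0_weight, label1_weight, ignore_weight = 0.2, 0.8, 0.0

# Same construction as above: pick each pixel's weight from its class.
not_ignore_mask = (
    tf.cast(tf.equal(scaled_labels, 0), tf.float32) * label0_weight +
    tf.cast(tf.equal(scaled_labels, 1), tf.float32) * label1_weight +
    tf.cast(tf.equal(scaled_labels, ignore_label), tf.float32) * ignore_weight)

one_hot_labels = tf.one_hot(scaled_labels, 2)  # ignore_label becomes an all-zero row
loss = tf.compat.v1.losses.softmax_cross_entropy(
    one_hot_labels, logits, weights=not_ignore_mask)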

I hope this helps.