TensorFlow: How to ignore specific labels during semantic segmentation?

Date: 2017-01-31 10:36:49

Tags: tensorflow

I am using TensorFlow for semantic segmentation. How can I tell TensorFlow to ignore a specific label when computing the pixel-wise loss?

I have read in this post that, for image classification, one can set a label to -1 and it will then be ignored. If that is true, how can I modify my label tensor so that certain values in it are changed to -1?

In Matlab, it would be something like:

ignore_label = 255
myLabelTensor(myLabelTensor == ignore_label) = -1

But I don't know how to do this in TF.
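For reference, the element-wise replacement itself can be written with tf.select (renamed tf.where in TF 1.0). A minimal sketch, assuming label is the label tensor loaded below; note that tf.image.decode_png yields uint8, which cannot represent -1, hence the cast:

ignore_label = 255
# uint8 cannot hold -1, so cast to a signed type first
label = tf.cast(label, tf.int32)
# Element-wise equivalent of the Matlab line above
label = tf.select(tf.equal(label, ignore_label),
                  -1 * tf.ones_like(label),
                  label)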

Some background information:
This is how the labels are loaded:

# Load the ground-truth mask as a single-channel PNG
label_contents = tf.read_file(input_queue[1])
label = tf.image.decode_png(label_contents, channels=1)

This is how the loss is currently computed:

# Logits of the last layer, flattened to one row of scores per pixel
raw_output = net.layers['fc1_voc12']
prediction = tf.reshape(raw_output, [-1, n_classes])
# Resize and one-hot encode the labels, then flatten them the same way
label_proc = prepare_label(label_batch, tf.pack(raw_output.get_shape()[1:3]), n_classes)
gt = tf.reshape(label_proc, [-1, n_classes])

# Pixel-wise softmax loss.
loss = tf.nn.softmax_cross_entropy_with_logits(prediction, gt)
reduced_loss = tf.reduce_mean(loss)

def prepare_label(input_batch, new_size, n_classes):
    """Resize masks and perform one-hot encoding.

    Args:
      input_batch: input tensor of shape [batch_size H W 1].
      new_size: a tensor with the new height and width.
      n_classes: number of classes (depth of the one-hot encoding).

    Returns:
      Outputs a tensor of shape [batch_size h w n_classes]
      with the last dimension comprised of 0's and 1's only.
    """
    with tf.name_scope('label_encode'):
        input_batch = tf.image.resize_nearest_neighbor(input_batch, new_size) # as labels are integer numbers, need to use NN interp.
        input_batch = tf.squeeze(input_batch, squeeze_dims=[3]) # reducing the channel dimension.
        input_batch = tf.one_hot(input_batch, depth=n_classes)
    return input_batch

I am using the tensorflow-deeplab-resnet model, which ports the ResNet model implemented in Caffe to TensorFlow using caffe-tensorflow.

2 Answers:

Answer 0 (score: 0)

According to the documentation, tf.nn.softmax_cross_entropy_with_logits must be called with a valid probability distribution in labels, or the computation will be incorrect, and calling tf.nn.sparse_softmax_cross_entropy_with_logits (which might be more convenient in your case) with negative labels will either raise an error or return NaN values. I wouldn't rely on it to ignore some labels.
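One way around this, not part of the original answer, is to drop the ignored pixels before computing the loss. A minimal sketch, assuming the labels are kept as flat integer class ids (no one-hot encoding), already resized to the logits' spatial size, and with ignore_label holding the id to drop:

raw_gt = tf.reshape(label_batch, [-1])                    # flat integer class ids
raw_prediction = tf.reshape(raw_output, [-1, n_classes])  # flat logits
# Keep only the pixels whose label differs from the ignored one
indices = tf.squeeze(tf.where(tf.not_equal(raw_gt, ignore_label)), [1])
gt = tf.cast(tf.gather(raw_gt, indices), tf.int32)
prediction = tf.gather(raw_prediction, indices)
loss = tf.nn.sparse_softmax_cross_entropy_with_logits(prediction, gt)
reduced_loss = tf.reduce_mean(loss)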

What I would do is replace the logits of the ignored class with "infinity" at those pixels whose correct class is the ignored one, so that they contribute nothing to the loss:

ignore_label = ...
# Zero out every channel of the one-hot labels except the ignored one,
# so the mask is 1 exactly where the true class is the ignored label
input_batch_ignored = tf.concat(input_batch.get_shape().ndims - 1,
    [tf.zeros_like(input_batch[:, :, :, :ignore_label]),
     tf.expand_dims(input_batch[:, :, :, ignore_label], -1),
     tf.zeros_like(input_batch[:, :, :, ignore_label + 1:])])
# Flatten the mask to match the [-1, n_classes] shape of the logits
mask = tf.reshape(input_batch_ignored, [-1, n_classes])
# Make the corresponding logits "infinity" (a big enough number)
prediction_fix = tf.select(mask > 0,
    1e30 * tf.ones_like(prediction), prediction)
# Compute the loss with the fixed logits
loss = tf.nn.softmax_cross_entropy_with_logits(prediction_fix, gt)

The only problem is that this treats pixels of the ignored class as always predicted correctly, which means that the loss of images containing many of those pixels will be artificially smaller. Depending on the case this may or may not matter, but if you want to be really accurate, you would have to weight the loss of each image by its number of non-ignored pixels, instead of just taking the mean:

# Count the relevant (non-ignored) entries in each image
input_batch_relevant = 1 - input_batch_ignored
input_batch_weight = tf.reduce_sum(input_batch_relevant, [1, 2, 3])
# Compute relative weights (they sum to one over the batch)
input_batch_weight = input_batch_weight / tf.reduce_sum(input_batch_weight)
# Average the per-pixel loss within each image (batch_size assumed known),
# then combine the per-image losses according to the weights
loss_per_image = tf.reduce_mean(tf.reshape(loss, [batch_size, -1]), 1)
reduced_loss = tf.reduce_sum(loss_per_image * input_batch_weight)

Answer 1 (score: -1)

Sorry, I'm new to this, but I believe, per https://github.com/tensorflow/models/blob/master/research/deeplab/g3doc/faq.md, a new dataset needs to be added here. In "segmentation_dataset.py", you can specify an ignore_label for each dataset. For example:

_PASCAL_VOC_SEG_INFORMATION = DatasetDescriptor(
    splits_to_sizes={
        'train': 1464,
        'trainval': 2913,
        'val': 1449,
    },
    num_classes=21,
    ignore_label=255,
)