Channel-wise CrossEntropyLoss for image segmentation in PyTorch

Date: 2018-06-17 12:00:42

Tags: image-segmentation pytorch loss-function cross-entropy semantic-segmentation

I am doing an image segmentation task. There are 7 classes in total, so the final output is a tensor like [batch, 7, height, width], which is a softmax output. Now intuitively I wanted to use CrossEntropy loss, but the PyTorch implementation doesn't work with channel-wise one-hot encoded targets.
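For concreteness, a minimal sketch of the shapes described above (the sizes are illustrative, not my real data):

import torch
import torch.nn.functional as F

batch, classes, height, width = 2, 7, 32, 32
logits = torch.randn(batch, classes, height, width)  # raw network output
probs = F.softmax(logits, dim=1)  # (batch, 7, height, width); channel dim sums to 1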

So I planned to write a function of my own. With some help from StackOverflow, my code so far looks like this:

from torch.autograd import Variable
import torch
import torch.nn.functional as F


def cross_entropy2d(input, target, weight=None, size_average=True):
    # input: (n, c, w, z), target: (n, w, z)
    n, c, w, z = input.size()
    # log_p: (n, c, w, z)
    log_p = F.log_softmax(input, dim=1)
    # log_p: (n*w*z, c)
    log_p = log_p.permute(0, 3, 2, 1).contiguous().view(-1, c)  # make class dimension last dimension
    log_p = log_p[
       target.view(n, w, z, 1).repeat(0, 0, 0, c) >= 0]  # this looks wrong -> Should rather be a one-hot vector
    log_p = log_p.view(-1, c)
    # target: (n*w*z,)
    mask = target >= 0
    target = target[mask]
    loss = F.nll_loss(log_p, target.view(-1), weight=weight, size_average=False)
    if size_average:
        loss /= mask.data.sum()
    return loss


images = Variable(torch.randn(5, 3, 4, 4))
labels = Variable(torch.LongTensor(5, 3, 4, 4).random_(3))
cross_entropy2d(images, labels)

I get two errors. One is mentioned in the code itself, where it expects a one-hot vector. The second one says the following:

RuntimeError: invalid argument 2: size '[5 x 4 x 4 x 1]' is invalid for input with 3840 elements at ..\src\TH\THStorage.c:41

For example purposes, I was trying to make it work on a 3-class problem. So the target and labels are (excluding the batch parameter for simplification!):

Target:

 Channel 1    Channel 2    Channel 3

[[0 1 1 0]   [[0 0 0 1]   [[1 0 0 0]
 [0 0 1 1]    [0 0 0 0]    [1 1 0 0]
 [0 0 0 1]    [0 0 0 0]    [1 1 1 0]
 [0 0 0 0]]   [0 0 0 1]]   [1 1 1 0]]

Labels:

 Channel 1    Channel 2    Channel 3

[[0 1 1 0]   [[0 0 0 1]   [[1 0 0 0]
 [0 0 1 1]    [.2 0 0 0]   [.8 1 0 0]
 [0 0 0 1]    [0 0 0 0]    [1 1 1 0]
 [0 0 0 0]]   [0 0 0 1]]   [1 1 1 0]]

So, how can I fix my code to compute the channel-wise CrossEntropy loss?

3 Answers:

Answer 0 (score: 2):

As stated in Shai's answer, the documentation for the torch.nn.CrossEntropyLoss() function can be found here, and the code can be found here. The built-in function does indeed already support K-dimensional (KD) cross-entropy loss.

In the 3D case, the torch.nn.CrossEntropyLoss() function expects two arguments: a 4D input tensor and a 3D target tensor. The input tensor should have the shape (Minibatch, Classes, H, W). The target tensor should have the shape (Minibatch, H, W) and contain numbers ranging from 0 to (Classes - 1). If you start from a one-hot encoded tensor, you have to convert it with np.argmax().

An example with three classes and a minibatch size of 1:

import torch
import numpy as np

# raw network output (logits): (minibatch, classes, H, W)
input_torch = torch.randn(1, 3, 2, 5, requires_grad=True)

# one-hot encoded target, classes first: (classes, H, W)
one_hot = np.array([[[1, 1, 1, 0, 0], [0, 0, 0, 0, 0]],
                    [[0, 0, 0, 0, 0], [1, 1, 1, 0, 0]],
                    [[0, 0, 0, 1, 1], [0, 0, 0, 1, 1]]])

# collapse the one-hot encoding to class indices: (H, W)
target = np.argmax(one_hot, axis=0)
# add the minibatch dimension: (1, H, W)
target_torch = torch.tensor(target, dtype=torch.long).unsqueeze(0)

loss = torch.nn.CrossEntropyLoss()
output = loss(input_torch, target_torch)
output.backward()
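The same one-hot-to-index conversion can also be done without leaving PyTorch; a small sketch using torch.argmax on the same one_hot array as above:

# equivalent to the np.argmax() conversion, staying in torch
target_torch = torch.from_numpy(one_hot).argmax(dim=0).unsqueeze(0)  # (1, H, W)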

Answer 1 (score: 0):

2D (or KD) cross-entropy is a very basic building block in neural networks. It is very unlikely that PyTorch would not have an "out-of-the-box" implementation of it.
Looking at torch.nn.CrossEntropyLoss and the underlying torch.nn.functional.cross_entropy, you will see that the loss can handle 2D inputs (that is, 4D input prediction tensors).
Moreover, you can look at the code that actually implements it here and see how it handles the different cases depending on the dimensionality of the input tensor.

So don't bother, it has already been done for you!
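To illustrate with the 7-class shapes from the question (the sizes here are assumed for illustration): note that torch.nn.CrossEntropyLoss applies log-softmax internally, so it should be fed the raw logits rather than the softmax output.

import torch
import torch.nn as nn

criterion = nn.CrossEntropyLoss()
logits = torch.randn(2, 7, 32, 32, requires_grad=True)  # (batch, classes, H, W), raw scores
target = torch.randint(0, 7, (2, 32, 32))               # (batch, H, W), class indices in [0, 6]
loss = criterion(logits, target)
loss.backward()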

Answer 2 (score: 0):

Here is my code:

import numpy as np
import torch.optim as optim
from sklearn.metrics import jaccard_similarity_score

# assumes net, trainloader, criterion, device, lr, num_of_class,
# epoch and running_loss are defined elsewhere

# create the optimizer once, before the batch loop
# (recreating it every batch would reset Adam's running statistics)
optimizer = optim.Adam(net.parameters(), lr=lr)

for batch, data in enumerate(trainloader, 0):
    inputs, labels = data
    labels = labels.long()
    inputs, labels = inputs.to(device), labels.to(device)

    # flatten the per-pixel class indices: (N*H*W,)
    labels = labels.view([-1, ])

    optimizer.zero_grad()
    outputs = net(inputs)

    # flatten the predictions to (N*H*W, num_of_class)
    outputs = outputs.view(-1, num_of_class)

    loss = criterion(outputs, labels)
    loss.backward()
    optimizer.step()
    running_loss += loss.item()

    # sanity check
    # print('epoch ', epoch, 'batch ', batch, " inputs", inputs.shape,
    #       "labels", labels.shape, "outputs", outputs.shape)

    outputs = outputs.to('cpu')
    outputs = outputs.data.numpy()
    outputs = outputs.reshape([-1, num_of_class])

    # per-pixel predicted class from the flattened scores
    mask = np.zeros([outputs.shape[0]])
    for i in range(len(outputs)):
        mask[i] = np.argmax(outputs[i])

    mask = mask.reshape([-1, 1])

    IoU = jaccard_similarity_score(labels.to('cpu').data, mask)
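As a side note, the per-pixel argmax loop above can be vectorized; a minimal sketch using the same outputs array:

mask = outputs.argmax(axis=1).reshape([-1, 1])  # predicted class per pixel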