Do all variables in the loss function have to be tensors with gradients in PyTorch?

Time: 2019-10-25 14:17:55

Tags: python pytorch autograd

I have the following function:


def msfe(ys, ts):
    ys=ys.detach().numpy() #output from the network
    ts=ts.detach().numpy() #Target (true labels)
    pred_class = (ys>=0.5) 
    n_0 = sum(ts==0) #Number of true negatives
    n_1 = sum(ts==1) #Number of true positives
    FPE = sum((ts==0)[[bool(p) for p in (pred_class==1)]])/n_0 #False positive error
    FNE = sum((ts==1)[[bool(p) for p in (pred_class==0)]])/n_1 #False negative error
    loss= FPE**2+FNE**2

    loss=torch.tensor(loss,dtype=torch.float64,requires_grad=True)


    return loss

I am wondering whether autograd in PyTorch works properly here, since ys and ts do not have a grad flag.

So my question is: do all of the variables (FPE, FNE, ys, ts, n_1, n_0) have to be tensors for optimizer.step() to work properly, or is it enough that only the final function (loss) is one?

1 answer:

Answer 0 (score: 3)

All the variables you want to optimize via optimizer.step() need to have a gradient.

In your case that is the prediction y made by the network, so you should not detach it (from the graph).

Usually you do not change your targets, so those do not need gradients. You do not have to detach them either, though; tensors do not require gradients by default and will not be backpropagated through.

Loss will have a gradient if its ingredients (at least one of them) have a gradient.

Overall, you rarely need to take care of this manually.
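
A minimal sketch of that behaviour (the tensor names here are just illustrative):

import torch

prediction = torch.rand(4, requires_grad=True)      # stands in for the network output
target = torch.randint(high=2, size=(4,)).float()   # targets, no gradient needed

loss = ((prediction - target) ** 2).mean()  # simple MSE-style loss
print(loss.requires_grad)    # True - inherited from `prediction`
print(target.requires_grad)  # False - and that's fine

loss.backward()          # gradient flows back to `prediction` only
print(prediction.grad)   # populated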

BTW. Do not use numpy with PyTorch; you rarely need to. You can perform most of the operations you would do on a numpy array directly on a PyTorch tensor.
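
For instance, the counting that the question does with Python's built-in sum can stay entirely in PyTorch (a small sketch with made-up targets):

import torch

ts = torch.tensor([0., 1., 1., 0., 1.])  # example targets
n_0 = torch.sum(ts == 0)  # instead of sum(ts == 0) on a numpy array
n_1 = torch.sum(ts == 1)
print(n_0.item(), n_1.item())  # 2 3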

BTW2. There is no such thing as Variable in pytorch anymore, only tensors which require gradient and those which don't.

Non-differentiability

1.1 Issues with the existing code

Indeed, you are using non-differentiable functions (namely >= and ==). Those only give you trouble when applied to the outputs, because those need a gradient (you are fine using >= and == on the targets, though).
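
A quick way to see this in isolation (just an illustration, not part of the original code):

import torch

outputs = torch.rand(5, requires_grad=True)  # pretend network outputs
pred_class = outputs >= 0.5                  # comparison returns a bool tensor

print(pred_class.dtype)          # torch.bool
print(pred_class.requires_grad)  # False - the graph stops here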

Below I have attached your loss function with the problems outlined in the comments:

# Gradient can't propagate if you detach and work in another framework
# Most Python constructs should be fine, detaching will ruin it though.
def msfe(outputs, targets):
    # outputs=outputs.detach().numpy() # Do not detach, no need to do that
    # targets=targets.detach().numpy() # No need for numpy either
    pred_class = outputs >= 0.5  # This one is non-differentiable
    # n_0 = sum(targets==0) # Do not use sum, there is pytorch function for that
    # n_1 = sum(targets==1)

    n_0 = torch.sum(targets == 0)  # Those are not differentiable, but...
    n_1 = torch.sum(targets == 1)  # It does not matter as those are targets

    # FPE = sum((targets==0)[[bool(p) for p in (pred_class==1)]])/n_0 # Do not use Python bools
    # FNE = sum((targets==1)[[bool(p) for p in (pred_class==0)]])/n_1 # Stay within PyTorch
    # Those two below are non-differentiable due to == sign as well
    FPE = torch.sum((targets == 0.0) * (pred_class == 1.0)).float() / n_0
    FNE = torch.sum((targets == 1.0) * (pred_class == 0.0)).float() / n_1
    # This is obviously fine
    loss = FPE ** 2 + FNE ** 2

    # Loss should be a tensor already, don't do things like that
    # Gradient will not be propagated, you will have a new tensor
    # Always returning gradient of `1` and that's all
    # loss = torch.tensor(loss, dtype=torch.float64, requires_grad=True)

    return loss
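
To see why that last commented-out line would break things, here is a small standalone sketch (the names are illustrative):

import torch

w = torch.rand(3, requires_grad=True)   # pretend parameter
proper_loss = (w ** 2).sum()            # connected to `w` through the graph
print(proper_loss.grad_fn is not None)  # True

rewrapped = torch.tensor(proper_loss.item(), requires_grad=True)  # fresh leaf tensor
print(rewrapped.grad_fn)  # None - no link back to `w`
rewrapped.backward()
print(w.grad)             # None - nothing was propagated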

1.2 Possible solution

So you need to get rid of the 3 non-differentiable parts. In principle you can try to approximate them with the continuous outputs from your network (provided you use sigmoid as the activation). Here is my take:

def msfe_approximation(outputs, targets):
    n_0 = torch.sum(targets == 0)  # Gradient does not flow through it, it's okay
    n_1 = torch.sum(targets == 1)  # Same as above
    FPE = torch.sum((targets == 0) * outputs).float() / n_0
    FNE = torch.sum((targets == 1) * (1 - outputs)).float() / n_1

    return FPE ** 2 + FNE ** 2

Notice that, to minimize FPE, outputs will try to be zero on the indices where targets are zero. Similarly for FNE: if a target is 1, the network will try to output 1 as well.

Notice that this idea is similar to BCELoss (binary cross-entropy).
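
For comparison, this is roughly how the same kind of outputs and targets would be fed to the built-in BCELoss (a sketch, assuming the outputs already went through a sigmoid):

import torch

outputs = torch.rand(8, 1, requires_grad=True)        # stand-in for sigmoid outputs
targets = torch.randint(high=2, size=(8, 1)).float()  # binary targets as floats

criterion = torch.nn.BCELoss()
loss = criterion(outputs, targets)
loss.backward()  # gradients flow to `outputs`, just like with msfe_approximation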

Finally, here is an example you can run it on, just as a sanity check:

if __name__ == "__main__":
    model = torch.nn.Sequential(
        torch.nn.Linear(30, 100),
        torch.nn.ReLU(),
        torch.nn.Linear(100, 200),
        torch.nn.ReLU(),
        torch.nn.Linear(200, 1),
        torch.nn.Sigmoid(),
    )
    optimizer = torch.optim.Adam(model.parameters())
    targets = torch.randint(high=2, size=(64, 1)) # random targets
    inputs = torch.rand(64, 30) # random data
    for _ in range(1000):
        optimizer.zero_grad()
        outputs = model(inputs)
        loss = msfe_approximation(outputs, targets)
        print(loss)
        loss.backward()
        optimizer.step()

    print(((model(inputs) >= 0.5) == targets).float().mean())