PyTorch - backward() not called in custom layer

Date: 2019-06-07 22:23:30

Tags: conv-neural-network pytorch backpropagation autograd

What I want to do is modify the weights of a Conv2d after loss.backward() and before optimizer.step(). One solution is to modify the weights of the relevant layer directly after loss.backward(); I would just rather package this as a custom layer to keep the train() function clean. Here is the code snippet.

import torch
import torch.nn as nn

class CustomConv(nn.Conv2d):
    def __init__(self, *kargs, **kwargs):
        super(CustomConv, self).__init__(*kargs, **kwargs)

    def forward(self, input):
        # Pass the input through WeightModifier so that its backward()
        # gets a chance to touch self.weight during backpropagation.
        input = WeightModifier(self.weight)(input)
        out = nn.functional.conv2d(
            input,
            self.weight,
            None,  # no bias
            self.stride,
            self.padding,
            self.dilation,
            self.groups)
        return out

class WeightModifier(torch.autograd.Function):
    def __init__(self, weight):
        # Keep a reference to the conv weight so backward() can modify it.
        self.weight = weight

    def forward(self, input):
        # Identity in the forward pass.
        return input.clone()

    def backward(self, grad):
        # Some op to change self.weight.
        # In other words, change the weights of the following conv2d
        # after loss.backward() and before optimizer.step().
        return grad.clone()

hidden = CustomConv(3, 16, 3)(input_tensor)  # channel/kernel args are placeholders; backward() not called here
hidden = CustomConv(16, 16, 3)(hidden)       # backward() is called here
loss = cal_loss(hidden, gt)
optimizer.zero_grad()
loss.backward()
optimizer.step()
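
(As an aside, the snippet above uses the legacy autograd.Function style with __init__ and instance methods, which has been deprecated in favor of static methods plus Function.apply, and raises an error on newer PyTorch releases. A rough sketch of the same layer in the static-method form, passing the weight in as an extra input; a side effect is that, because self.weight requires gradients, autograd then always reaches this backward(), even when the layer input does not:)

import torch

class WeightModifier(torch.autograd.Function):
    @staticmethod
    def forward(ctx, input, weight):
        # Stash the conv weight so backward() can reach it.
        ctx.save_for_backward(weight)
        return input.clone()

    @staticmethod
    def backward(ctx, grad_output):
        (weight,) = ctx.saved_tensors
        # Some op to change the weight here, i.e. after loss.backward()
        # reaches this node and before optimizer.step().
        return grad_output.clone(), None  # one gradient per forward() input

# Inside CustomConv.forward() the call then becomes:
#     input = WeightModifier.apply(input, self.weight)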

The problem is that the backward() of the WeightModifier in the first CustomConv is never called (it is called in the second CustomConv). My guess is that PyTorch sees that input_tensor does not require gradients and that the WeightModifier layer has no parameters of its own, so it skips that backward(). Is there any way to force or "trick" PyTorch into executing backward()?
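
(A minimal sketch of the kind of "trick" I mean, with a made-up input shape: marking the network input itself as requiring gradients should make autograd build the graph through the first CustomConv as well, so its WeightModifier backward() also runs; the extra gradient that ends up in input_tensor.grad can simply be ignored.)

import torch

# Hypothetical shape; the real input_tensor comes from the data loader.
input_tensor = torch.randn(1, 3, 32, 32)
input_tensor.requires_grad_(True)  # the graph now extends through the first layer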

Thanks!
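
(For completeness: the effect described at the top, changing weights after loss.backward() and before optimizer.step(), also seems achievable without a custom Function by registering a gradient hook on the weight itself; the layer shape and the clamp op below are placeholders.)

import torch
import torch.nn as nn

conv = nn.Conv2d(3, 16, kernel_size=3, padding=1)  # placeholder layer

def modify_weight(grad):
    # Fires during loss.backward(), once the gradient w.r.t. conv.weight
    # has been computed -- i.e. before optimizer.step().
    conv.weight.data.clamp_(-1.0, 1.0)  # placeholder op on the weight
    return grad  # leave the gradient itself untouched

conv.weight.register_hook(modify_weight)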

0 Answers:

There are no answers yet.