Understanding the backward mechanism of LSTMCell in PyTorch

Date: 2019-05-07 03:47:10

Tags: neural-network lstm pytorch recurrent-neural-network

I want to understand the backward pass of LSTMCell in PyTorch, so during initialization I do the following (num_layers = 4, hidden_size = 1000, input_size = 1000):

self.layers = nn.ModuleList([
        LSTMCell(
            input_size=input_size,
            hidden_size=hidden_size,
        )
        for layer in range(num_layers)
    ])

for l in self.layers:
    l.register_backward_hook(backward_hook)

In the forward pass I simply iterate over the sequence length and over num_layers to step through the LSTMCells, like this:

for j in range(seqlen):            
    input = ...  # some tensor of size (batch_size, input_size)
    for i, rnn in enumerate(self.layers):
        # recurrent cell
        hidden, cell = rnn(input, (prev_hiddens[i], prev_cells[i]))

where input has size (batch_size, input_size), prev_hiddens[i] has size (batch_size, hidden_size), and prev_cells[i] has size (batch_size, hidden_size).
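
For completeness, the state bookkeeping around this loop looks roughly like this (a simplified sketch; x is just a placeholder name for my input batch of shape (batch_size, seqlen, input_size), and the states are zero-initialized):

prev_hiddens = [torch.zeros(batch_size, hidden_size) for _ in self.layers]
prev_cells = [torch.zeros(batch_size, hidden_size) for _ in self.layers]

for j in range(seqlen):
    input = x[:, j]  # (batch_size, input_size)
    for i, rnn in enumerate(self.layers):
        # recurrent cell
        hidden, cell = rnn(input, (prev_hiddens[i], prev_cells[i]))
        # this layer's hidden state is the next layer's input
        # (works here because hidden_size == input_size == 1000)
        input = hidden
        # keep the states for the next time step
        prev_hiddens[i], prev_cells[i] = hidden, cell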

backward_hook中,打印此函数输入的张量的大小:

def backward_hook(module, grad_input, grad_output):
    for grad in grad_output:
        print ("grad_output {}".format(grad))

    for grad in grad_input:
         print ("grad_input.size () {}".format(grad.size()))

As a result, the first call to backward_hook prints, for example:

[A] For grad_output I get 2 tensors, where the second one is None. This makes sense because in the backward phase we have the gradient of the internal state (c) and the gradient of the output (h). For the last iteration in the time dimension there is no future hidden state, so its gradient is None.

[B] For grad_input I get 5 tensors (batch_size = 9):

grad_input.size () torch.Size([9, 4000])
grad_input.size () torch.Size([9, 4000])
grad_input.size () torch.Size([9, 1000])
grad_input.size () torch.Size([4000])
grad_input.size () torch.Size([4000])

My questions are:

(1) Is my understanding of [A] correct?

(2) How should I interpret the 5 tensors in the grad_input tuple? I thought there should be only 3, since the forward() of LSTMCell takes only 3 inputs?

Thanks

1 Answer:

Answer 0: (score: 1)

Your understanding of grad_input and grad_output is wrong here. I will try to explain it with a simple example.

def backward_hook(module, grad_input, grad_output):
    for grad in grad_output:
        print ("grad_output.size {}".format(grad.size()))

    for grad in grad_input:
        if grad is None:
            print('None')
        else:
            print ("grad_input.size: {}".format(grad.size()))
    print()

import torch
import torch.nn as nn

model = nn.Linear(10, 20)
model.register_backward_hook(backward_hook)

input = torch.randn(8, 3, 10)
Y = torch.randn(8, 3, 20)

Y_pred = []
for i in range(input.size(1)):
    out = model(input[:, i])
    Y_pred.append(out)

loss = torch.norm(Y - torch.stack(Y_pred, dim=1), 2)
loss.backward()

The output is:

grad_output.size torch.Size([8, 20])
grad_input.size: torch.Size([8, 20])
None
grad_input.size: torch.Size([10, 20])

grad_output.size torch.Size([8, 20])
grad_input.size: torch.Size([8, 20])
None
grad_input.size: torch.Size([10, 20])

grad_output.size torch.Size([8, 20])
grad_input.size: torch.Size([8, 20])
None
grad_input.size: torch.Size([10, 20])

Explanation

  • grad_output: the gradient of the loss w.r.t. the layer output, Y_pred.

  • grad_input: the gradient of the loss w.r.t. the layer inputs. For the Linear layer, the inputs are the input tensor as well as the weight and bias.

So in the output you will see:

grad_input.size: torch.Size([8, 20])  # for the `bias`
None                                  # for the `input`
grad_input.size: torch.Size([10, 20]) # for the `weight`
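
As a side check, one can also look at the gradients that actually accumulate on the parameters after loss.backward(); the interpretation in the comments below is my reading of why the hook shapes differ from the parameter shapes:

# continuing the example above, after loss.backward()
print(model.weight.grad.size())  # torch.Size([20, 10])
print(model.bias.grad.size())    # torch.Size([20])

# The hook reported [10, 20] for the weight: the underlying op works with
# weight.t(), so the hook sees that gradient in the transposed layout.
# The [8, 20] entry is a per-example gradient; summed over the batch
# dimension it gives the [20] gradient stored in model.bias.grad.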

The Linear layer in PyTorch uses a LinearFunction along the following lines.

from torch.autograd import Function

class LinearFunction(Function):

    # Note that both forward and backward are @staticmethods
    @staticmethod
    # bias is an optional argument
    def forward(ctx, input, weight, bias=None):
        ctx.save_for_backward(input, weight, bias)
        output = input.mm(weight.t())
        if bias is not None:
            output += bias.unsqueeze(0).expand_as(output)
        return output

    # This function has only a single output, so it gets only one gradient
    @staticmethod
    def backward(ctx, grad_output):
        # This is a pattern that is very convenient - at the top of backward
        # unpack saved_tensors and initialize all gradients w.r.t. inputs to
        # None. Thanks to the fact that additional trailing Nones are
        # ignored, the return statement is simple even when the function has
        # optional inputs.
        input, weight, bias = ctx.saved_tensors
        grad_input = grad_weight = grad_bias = None

        # These needs_input_grad checks are optional and there only to
        # improve efficiency. If you want to make your code simpler, you can
        # skip them. Returning gradients for inputs that don't require it is
        # not an error.
        if ctx.needs_input_grad[0]:
            grad_input = grad_output.mm(weight)
        if ctx.needs_input_grad[1]:
            grad_weight = grad_output.t().mm(input)
        if bias is not None and ctx.needs_input_grad[2]:
            grad_bias = grad_output.sum(0).squeeze(0)

        return grad_input, grad_weight, grad_bias
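
For reference, a custom Function like this is invoked through its apply method rather than by calling forward directly; a minimal usage sketch (the tensor names are illustrative):

import torch

x = torch.randn(8, 10, requires_grad=True)
weight = torch.randn(20, 10, requires_grad=True)
bias = torch.randn(20, requires_grad=True)

# apply() records the op in the autograd graph, so backward() later calls
# LinearFunction.backward with the incoming grad_output
out = LinearFunction.apply(x, weight, bias)
out.sum().backward()

print(x.grad.size())       # torch.Size([8, 10])
print(weight.grad.size())  # torch.Size([20, 10])
print(bias.grad.size())    # torch.Size([20])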

For an LSTM there are four sets of weight parameters per layer (an LSTMCell carries the same four, just without the _l0 suffix: weight_ih, weight_hh, bias_ih, bias_hh).

weight_ih_l0
weight_hh_l0
bias_ih_l0
bias_hh_l0

So in your case grad_input is a tuple of 5 tensors, and, as you mentioned, grad_output is a tuple of two tensors.
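
A quick way to see these four parameter sets and their shapes for the cell in the question is to list the named parameters of a single LSTMCell (the 4000 below is 4 * hidden_size, which matches the sizes in your printout):

import torch.nn as nn

# a single cell with the sizes from the question: input_size = hidden_size = 1000
cell = nn.LSTMCell(input_size=1000, hidden_size=1000)
for name, param in cell.named_parameters():
    print(name, tuple(param.size()))
# weight_ih (4000, 1000)
# weight_hh (4000, 1000)
# bias_ih (4000,)
# bias_hh (4000,)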