PyTorch: LSTM input and output dimensions

Date: 2019-12-08 08:52:22

Tags: python deep-learning nlp pytorch lstm

I am a bit confused about LSTM input and output dimensions:

Here is my network:
Intent_LSTM(
  (embedding): Embedding(41438, 400)
  (lstm): LSTM(400, 512, num_layers=2, batch_first=True, dropout=0.5)
  (dropout): Dropout(p=0.5, inplace=False)
  (fc): Linear(in_features=512, out_features=3, bias=True)
)

The embedding output here has shape [50, 150, 400]: 50 is the batch size, 150 is the sequence length of the input, and 400 is my embedding dimension. I feed this into my LSTM. But when I went through the PyTorch documentation, it states that the input must be of the following form:

input of shape (seq_len, batch, input_size)

So, should I convert my input into that format ([150, 50, 400])?

If so, how do I do that?
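(As a side note: if the LSTM were not created with batch_first=True, a tensor stored as (batch, seq_len, input_size) could be rearranged into (seq_len, batch, input_size) with a single permute. This is only a generic sketch using the shapes from the question, not code from the original model:

import torch

# hypothetical tensor with the shapes from the question:
# batch_size=50, seq_len=150, input_size=400
embeds = torch.randn(50, 150, 400)

# swap the batch and sequence dimensions -> (seq_len, batch, input_size)
embeds_seq_first = embeds.permute(1, 0, 2).contiguous()
print(embeds_seq_first.shape)  # torch.Size([150, 50, 400])
)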

Here is my forward pass:

def forward(self, x):
    """
    Perform a forward pass.
    """
    batch_size = x.size(0)

    # token indices must be LongTensor for the embedding lookup
    x = x.long()
    embeds = self.embedding(x)            # (batch, seq_len, embedding_dim)

    # batch_first=True, so the LSTM takes (batch, seq_len, input_size)
    lstm_out, hidden = self.lstm(embeds)  # (batch, seq_len, hidden_dim)

    # flatten the time dimension so the linear layer sees one row per timestep
    lstm_out = lstm_out.contiguous().view(-1, self.hidden_dim)

    out = self.dropout(lstm_out)
    out = self.fc(out)                    # (batch * seq_len, 3)

    # reshape so that batch is the first dimension again
    out = out.view(batch_size, -1, 3)
    out = out[:, -1, :]                   # keep only the last timestep's logits

    # return the logits for the last timestep
    return out
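For context, a minimal __init__ consistent with the module printed above might look like the following sketch; the constructor argument names are my assumptions, only the layer sizes come from the printed repr:

import torch.nn as nn

class Intent_LSTM(nn.Module):
    def __init__(self, vocab_size=41438, embedding_dim=400, hidden_dim=512,
                 output_size=3, n_layers=2, drop_prob=0.5):
        super().__init__()
        self.hidden_dim = hidden_dim
        # layer sizes taken from the printed module repr above
        self.embedding = nn.Embedding(vocab_size, embedding_dim)
        self.lstm = nn.LSTM(embedding_dim, hidden_dim, num_layers=n_layers,
                            batch_first=True, dropout=drop_prob)
        self.dropout = nn.Dropout(drop_prob)
        self.fc = nn.Linear(hidden_dim, output_size)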

1 Answer:

Answer 0 (score: 0)

You can use the batch_first parameter so that the input has the batch dimension first. Your LSTM is already constructed with batch_first=True, so it expects input of shape (batch, seq_len, input_size) and no conversion is needed.

See the docs for reference.
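In other words, because the LSTM above was built with batch_first=True, the [50, 150, 400] embeddings can be passed in unchanged. A minimal standalone sketch of that behaviour, using random data with the shapes from the question:

import torch
import torch.nn as nn

# with batch_first=True the LSTM expects (batch, seq_len, input_size)
lstm = nn.LSTM(input_size=400, hidden_size=512, num_layers=2,
               batch_first=True, dropout=0.5)

embeds = torch.randn(50, 150, 400)   # (batch, seq_len, input_size)
lstm_out, (h_n, c_n) = lstm(embeds)

print(lstm_out.shape)  # torch.Size([50, 150, 512]) -> (batch, seq_len, hidden_size)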