PyTorch: how to save the hidden state before prediction

Asked: 2019-06-03 09:04:26

Tags: neural-network nlp pytorch

I am new to PyTorch and I am trying to do the following:

  • Create a function that takes my text as input and returns a list of indices for my tokens (a sketch follows this list).

  • Create a function that takes a batch of shape (batch_size, seq_len) as input and returns the hidden states, of shape (batch_size, hidden_states).
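
For the first bullet, a minimal sketch, assuming a plain whitespace tokenizer and a vocabulary dictionary built elsewhere (both `word2idx` and `unk_idx` are illustrative names, not part of the notebook):

def text_to_indices(text, word2idx, unk_idx=0):
    # Map a raw string to a list of token indices; unknown tokens fall back to unk_idx.
    # word2idx is assumed to map each known token to its row in the embedding matrix.
    return [word2idx.get(token, unk_idx) for token in text.lower().split()]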

I have included my model below. I believe that, starting from its forward function, I need to create a forward_prime function that stops before the prediction layer and saves the hidden state, but I do not know how to do that (a sketch of such a function follows the model code below).

import torch
import torch.nn as nn
import torch.nn.functional as F

class ClassificationMultiLabel(nn.Module):
    def __init__(self, op_size, n_tokens, pretrained_vectors, nl=2, bidirectional=True, emb_sz=300, n_hiddenUnits=100):
        super(ClassificationMultiLabel, self).__init__()
        self.n_hidden = n_hiddenUnits
        self.bidirectional = bidirectional
        self.embeddings = nn.Embedding(n_tokens, emb_sz)
        self.embeddings.weight.data.copy_(pretrained_vectors)
        # self.embeddings.weight.requires_grad = False
        self.rnn = nn.LSTM(emb_sz, n_hiddenUnits, num_layers=2, bidirectional=bidirectional, dropout=0.2)
        self.lArr = []
        if bidirectional:
            n_hiddenUnits = 2 * n_hiddenUnits
        self.bn1 = nn.BatchNorm1d(num_features=n_hiddenUnits)
        # The first linear layer sees avg pool + max pool + last time step, hence the factor of 3.
        for i in range(nl):
            if i == 0:
                self.lArr.append(nn.Linear(n_hiddenUnits * 3, n_hiddenUnits))
            else:
                self.lArr.append(nn.Linear(n_hiddenUnits, n_hiddenUnits))
        self.lArr = nn.ModuleList(self.lArr)
        self.l1 = nn.Linear(n_hiddenUnits, op_size)

    def init_hidden(self, bs):
        # One state tensor per layer and direction: (num_layers * num_directions, bs, n_hidden).
        n_dir = 2 if self.bidirectional else 1
        weight = next(self.parameters())
        return weight.new_zeros(2 * n_dir, bs, self.n_hidden)

    def forward(self, data, lengths):
        torch.cuda.empty_cache()
        # data arrives as (seq_len, batch_size); the LSTM is not batch_first.
        bs = data.shape[1]
        self.h = self.init_hidden(bs)
        embedded = self.embeddings(data)
        # F.dropout respects self.training; an nn.Dropout built inside forward would also fire at eval time.
        embedded = F.dropout(embedded, p=0.5, training=self.training)
        # embedded = pack_padded_sequence(embedded, torch.as_tensor(lengths))
        # The same zero tensor is used for both the initial hidden and cell states.
        rnn_out, self.h = self.rnn(embedded, (self.h, self.h))
        # rnn_out, lengths = pad_packed_sequence(rnn_out, padding_value=1)
        # Pool over the time dimension: (batch, features, seq_len) -> (batch, features).
        avg_pool = F.adaptive_avg_pool1d(rnn_out.permute(1, 2, 0), 1).view(bs, -1)
        max_pool = F.adaptive_max_pool1d(rnn_out.permute(1, 2, 0), 1).view(bs, -1)
        ipForLinearLayer = torch.cat([avg_pool, max_pool, rnn_out[-1]], dim=1)
        for linearlayer in self.lArr:
            outp = linearlayer(ipForLinearLayer)
            ipForLinearLayer = self.bn1(F.relu(outp))
            ipForLinearLayer = F.dropout(ipForLinearLayer, p=0.6, training=self.training)
        outp = self.l1(ipForLinearLayer)
        return outp
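
One way to get the second function is to add a method that repeats forward up to, but not including, the prediction layer self.l1, and returns the hidden representation that feeds it. A minimal sketch under that reading of the question, added inside the class above (the name forward_prime comes from the question itself; for the bidirectional setup the returned tensor has shape (batch_size, 2 * n_hiddenUnits)):

    def forward_prime(self, data, lengths):
        # Same pipeline as forward, but stops before self.l1 and returns
        # the hidden representation that would feed the prediction layer.
        bs = data.shape[1]
        self.h = self.init_hidden(bs)
        embedded = self.embeddings(data)
        embedded = F.dropout(embedded, p=0.5, training=self.training)
        rnn_out, self.h = self.rnn(embedded, (self.h, self.h))
        avg_pool = F.adaptive_avg_pool1d(rnn_out.permute(1, 2, 0), 1).view(bs, -1)
        max_pool = F.adaptive_max_pool1d(rnn_out.permute(1, 2, 0), 1).view(bs, -1)
        ipForLinearLayer = torch.cat([avg_pool, max_pool, rnn_out[-1]], dim=1)
        for linearlayer in self.lArr:
            outp = linearlayer(ipForLinearLayer)
            ipForLinearLayer = self.bn1(F.relu(outp))
            ipForLinearLayer = F.dropout(ipForLinearLayer, p=0.6, training=self.training)
        return ipForLinearLayer  # everything forward computes, minus self.l1

Extracting hidden states for a batch could then look like this (batch and lengths stand in for your own tensors):

model.eval()  # disable dropout so the representation is deterministic
with torch.no_grad():
    hidden = model.forward_prime(batch, lengths)  # (batch_size, 2 * n_hiddenUnits)

To avoid the duplicated code, forward itself could also be refactored to return self.l1(self.forward_prime(data, lengths)).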

The full code is available here: https://colab.research.google.com/drive/1dKvMQWukbhujgXNWJxKRJSqcfBU1IWZl

0 answers:

No answers yet.