Padding a PyTorch tensor in text processing

Time: 2019-07-18 08:30:14

Tags: python deep-learning nlp pytorch

I am new to neural networks and PyTorch. I am interested in classifying text with a CNN, and the code I am using is shown below. However, when I run it on a different dataset, the line

conved = [F.relu(conv(embedded)) for conv in self.convs]

returns the following error:

RuntimeError: Calculated padded input size per channel: (1 x 2). Kernel size: (1 x 3). Kernel size can't be greater than actual input size

I think the problem is that the input sentence is shorter than the kernel size. Could you suggest an efficient, clever way to work around this?
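
For example, here is a minimal standalone snippet (the shapes are made up to mirror the error message) that reproduces the same failure:

import torch
import torch.nn as nn

# A length-2 "sentence" fed to a convolution with kernel_size = 3:
conv = nn.Conv1d(in_channels = 100, out_channels = 64, kernel_size = 3)
x = torch.randn(1, 100, 2)  # [batch size, emb dim, sent len], sent len < kernel size
conv(x)  # raises RuntimeError: kernel size can't be greater than actual input size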

import torch
import torch.nn as nn
import torch.nn.functional as F

class CNN1d(nn.Module):

    def __init__(self, vocab_size, embedding_dim, n_filters, filter_sizes, output_dim,
                 dropout, pad_idx):

        super().__init__()

        self.embedding = nn.Embedding(vocab_size, embedding_dim, padding_idx = pad_idx)

        self.convs = nn.ModuleList([
                                    nn.Conv1d(in_channels = embedding_dim,
                                              out_channels = n_filters,
                                              kernel_size = fs)
                                    for fs in filter_sizes
                                    ])

        self.fc = nn.Linear(len(filter_sizes) * n_filters, output_dim)

        self.dropout = nn.Dropout(dropout)

    def forward(self, text):
        #text = [sent len, batch size]

        text = text.permute(1, 0)
        #text = [batch size, sent len]

        embedded = self.embedding(text)
        #embedded = [batch size, sent len, emb dim]

        embedded = embedded.permute(0, 2, 1)
        #embedded = [batch size, emb dim, sent len]

        conved = [F.relu(conv(embedded)) for conv in self.convs]
        #conved_n = [batch size, n_filters, sent len - filter_sizes[n] + 1]

        pooled = [F.max_pool1d(conv, conv.shape[2]).squeeze(2) for conv in conved]
        #pooled_n = [batch size, n_filters]

        cat = self.dropout(torch.cat(pooled, dim = 1))
        #cat = [batch size, n_filters * len(filter_sizes)]

        return self.fc(cat)
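
For reference, this is roughly how the model is instantiated and called (the hyperparameter values below are placeholders, not my real settings):

import torch

model = CNN1d(vocab_size = 25000, embedding_dim = 100, n_filters = 100,
              filter_sizes = [3, 4, 5], output_dim = 1, dropout = 0.5, pad_idx = 1)
text = torch.randint(0, 25000, (7, 32))  # [sent len, batch size], sent len >= max(filter_sizes)
logits = model(text)                     # [batch size, output_dim] = [32, 1]

With a batch whose sent len is below max(filter_sizes) (here 5), the forward pass raises the RuntimeError above.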

Edit: Would it be correct to replace conved = [F.relu(conv(embedded)) for conv in self.convs] with the following?
conved = []
for conv in self.convs:
    # If the sentence is shorter than this conv's kernel, right-pad the
    # sequence dimension with zeros until the kernel fits. Note that
    # embedded is reassigned, so later (larger) kernels see the padded tensor.
    if embedded.shape[2] < conv.kernel_size[0]:
        padding_length = conv.kernel_size[0] - embedded.shape[2] + 1
        embedded = F.pad(embedded, (0, padding_length), 'constant', 0)
    conved.append(F.relu(conv(embedded)))
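
A slightly simpler variant of the same idea would be to pad once, before the loop, up to the largest kernel size (again just a sketch; I have not verified that this is the best approach):

max_kernel = max(conv.kernel_size[0] for conv in self.convs)
if embedded.shape[2] < max_kernel:
    # Right-pad the sequence dimension with zero vectors so every kernel fits.
    embedded = F.pad(embedded, (0, max_kernel - embedded.shape[2]), 'constant', 0)
conved = [F.relu(conv(embedded)) for conv in self.convs]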

0 Answers:

No answers yet.