I'm trying to apply a conv2d layer to a batch of sequences with unequal lengths. I tried passing a packed sequence, but that doesn't work...
import torch
import torch.nn as nn
conv1 = nn.Conv2d(1, 16, 5)
x = torch.randn([16, 1580, 201])  # x has shape (batch, sequence, feats)
lengths = torch.tensor(
[1580, 959, 896, 881, 881, 881, 881, 881, 881, 881, 881, 881, 881, 335, 254, 219]
)
# So only the first element in the batch has size 1580, the rest have been padded
x_pack = torch.nn.utils.rnn.pack_padded_sequence(x, lengths, batch_first=True)
x_pack = conv1(x_pack)  # fails: Conv2d expects a 4D (batch, channels, H, W) tensor, not a PackedSequence
print(x_pack)
# ... then feed x_pack to an LSTM...
Does anyone know how to avoid applying the convolution to the padded values of the input? Do I have to iterate over each element in the batch?

Thanks.
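
The only workaround I've come up with so far is to run the convolution on the padded tensor (adding a channel dimension so Conv2d accepts it), then zero out the padded time steps with a mask built from lengths before packing the result for the LSTM. Below is a rough sketch of that idea; the shrinking of the lengths by kernel_size - 1 and the channel/feature flattening are my own assumptions about how to line things up, so I'm not sure it's the right way:

import torch
import torch.nn as nn

conv1 = nn.Conv2d(1, 16, 5)
x = torch.randn(16, 1580, 201)   # (batch, sequence, feats)
lengths = torch.tensor(
    [1580, 959, 896, 881, 881, 881, 881, 881, 881, 881, 881, 881, 881, 335, 254, 219]
)

# Add a channel dimension so Conv2d sees (batch, 1, sequence, feats)
out = conv1(x.unsqueeze(1))      # -> (batch, 16, 1576, 197) with a 5x5 kernel, no padding

# Assumption: valid lengths shrink by kernel_size - 1 along the time axis
out_lengths = (lengths - 4).clamp(min=1)

# Zero out every time step that came from padding
time_idx = torch.arange(out.size(2))                      # (1576,)
mask = time_idx.unsqueeze(0) < out_lengths.unsqueeze(1)   # (batch, 1576)
out = out * mask[:, None, :, None]

# Collapse channels and feats into one feature vector per time step for the LSTM
out = out.permute(0, 2, 1, 3).flatten(2)                  # (batch, 1576, 16 * 197)
packed = torch.nn.utils.rnn.pack_padded_sequence(
    out, out_lengths, batch_first=True
)
# ... packed could then go into the LSTM

This still computes the convolution over the padded region (and masks it afterwards), so it's not quite "not applying the convolution to padding", which is why I'm wondering whether a per-element loop is the only real alternative.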