PyTorch: TypeError 'torch.LongTensor' object is not reversible

Date: 2018-02-26 00:24:48

Tags: pytorch

I am trying to perform an NLP task with PyTorch, and I use the following code to pack my batches of sentences.

for iter in range(0, n_iters, batch_size):
    # batch size * max length Variable
    input_batch = input_data[iter:iter + batch_size]
    target_batch = target_data[iter:iter + batch_size]

    # batch size * 1 LongTensor
    input_batch_length = input_length[iter:iter + batch_size]
    target_batch_length = target_length[iter:iter + batch_size]

    # sort batch for pack_padded_sequence
    input_batch_length, input_indices = input_batch_length.sort(0, descending=True)
    input_batch = input_batch[input_indices]

    target_batch_length, target_indices = target_batch_length.sort(0, descending=True)
    target_batch = target_batch[target_indices]

    # TypeError: 'torch.LongTensor' object is not reversible
    packed_input = nn.utils.rnn.pack_padded_sequence(input_batch, input_batch_length, batch_first=True)
    packed_target = nn.utils.rnn.pack_padded_sequence(target_batch, target_batch_length, batch_first=True)

    loss = train(packed_input, packed_target, input_batch_length, target_batch_length, encoder, decoder,
                 encoder_optimizer, decoder_optimizer, criterion)

However, I get the error "'torch.LongTensor' object is not reversible" on the lines that pack the batches. Can anyone help me?
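For context on the message itself: Python's built-in `reversed()` requires its argument either to define `__reversed__` or to support the sequence protocol (`__len__` plus `__getitem__`), and the `pack_padded_sequence` implementation in this PyTorch version appears to iterate the lengths with `reversed()` internally, hence the error when a tensor/Variable is passed. A minimal, framework-free sketch of the failure mode (the `FakeLengths` class is a made-up stand-in, not PyTorch code):

```python
# Made-up stand-in for an object that is iterable but, like the
# offending tensor here, does not support the sequence protocol
# (__len__ + __getitem__) that the built-in reversed() requires.
class FakeLengths:
    def __init__(self, values):
        self.values = values

    def __iter__(self):
        return iter(self.values)


lengths = FakeLengths([5, 4, 2])

try:
    reversed(lengths)
except TypeError as err:
    # e.g. "'FakeLengths' object is not reversible"
    # (exact wording varies by Python version)
    print(err)

# A plain Python list supports the sequence protocol, so this works:
assert list(reversed(list(lengths))) == [2, 4, 5]
```

This is why converting the lengths to a plain list (or similar) before packing avoids the error.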

1 Answer:

Answer 0 (score: 0)

pack_padded_sequence expects the lengths to be a plain Python list of longs/ints.

If you are using a Variable(LongTensor([list])).cuda() as the lengths, you have to bring it back to the CPU and convert it to a NumPy array (or list) before passing it in:

from torch.nn.utils.rnn import pack_padded_sequence, pad_packed_sequence

packed_in = pack_padded_sequence(embedded, seqs_len.data.cpu().numpy(), batch_first=True)
# throw them through your LSTM (remember to give
# batch_first=True to the LSTM constructor if you packed with it)
packed_output, (ht, ct) = lstm(packed_in)
unpacked_output, _ = pad_packed_sequence(packed_output, batch_first=True)
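A quick sanity check of why this conversion helps, as a sketch with made-up values (the NumPy array below stands in for `seqs_len.data.cpu().numpy()`): unlike the CUDA Variable, both a NumPy array and a plain list of ints support the sequence protocol that `reversed()` needs.

```python
import numpy as np

# Illustrative stand-in for seqs_len.data.cpu().numpy(); the values
# are made up, not taken from the original post.
lengths = np.array([7, 5, 5, 2])

# A NumPy array supports len() and indexing, so reversed() works on it:
assert list(reversed(lengths)) == [2, 5, 5, 7]

# Converting to plain Python ints is equally safe for pack_padded_sequence:
assert list(reversed(lengths.tolist())) == [2, 5, 5, 7]
assert all(isinstance(l, int) for l in lengths.tolist())
```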

This is obviously something that could be improved in PyTorch.