How do I put single tensors into a batch and process them in PyTorch?

Time: 2020-09-02 18:02:58

Tags: python machine-learning pytorch

My network processes tensors one at a time: it generates an input/target pair, feeds the input through the network, gets an output for that single pair, and compares it against the target.

An example pair would be a tensor of shape [10, 1] as the input and [12, 1] as the target, and so on. No padding is involved. This is the general shape of the generated pairs:

Input Tensor:  torch.Size([6, 1])
Target Tensor:  torch.Size([7, 1])

Input Tensor:  torch.Size([8, 1])
Target Tensor:  torch.Size([10, 1])

Input Tensor:  torch.Size([8, 1])
Target Tensor:  torch.Size([10, 1])

A closer look at the generated tensors:

Input Tensor:  
tensor([[22],
        [ 7],
        [18],
        [ 5],
        [ 1]])

Target Tensor: 
tensor([[23],
        [ 8],
        [ 6],
        [19],
        [ 6],
        [ 5],
        [ 1]])

What I want to do now is add batching to this process. Instead of generating a single pair and feeding it forward on its own, I want to generate a batch of pairs (32, 64, ...) and feed that in.

How would I create the input batches and target batches? I believe they should look something like [40, 32] or [40, 64].
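For reference, this is a minimal sketch of the batch construction I have in mind, assuming 0 can be reserved as a padding index and using torch.nn.utils.rnn.pad_sequence (the dummy sequences below stand in for pairs like the ones printed above):

import torch
from torch.nn.utils.rnn import pad_sequence

# Dummy variable-length sequences shaped like the pairs printed above.
input_seqs  = [torch.randint(1, 25, (n,)) for n in (6, 8, 8)]
target_seqs = [torch.randint(1, 25, (n,)) for n in (7, 10, 10)]

# pad_sequence pads with 0 by default and stacks along a new batch
# dimension; with the default batch_first=False the result has shape
# (max_len, batch_size), matching the [40, 32] / [40, 64] layout above.
input_batch = pad_sequence(input_seqs)    # torch.Size([8, 3])
target_batch = pad_sequence(target_seqs)  # torch.Size([10, 3])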

This is my current program:

import random

import torch
import torch.nn as nn
from torch import optim

# MAX_LENGTH, SOS_token, EOS_token, teacher_forcing_ratio, device, pairs and
# tensorsFromPair are defined elsewhere in my script.

def trainIters(encoder, decoder, n_iters, print_every=1000, plot_every=100, learning_rate=0.001, batch_size=64):
    # create the optimizers that train() below expects
    encoder_optimizer = optim.SGD(encoder.parameters(), lr=learning_rate)
    decoder_optimizer = optim.SGD(decoder.parameters(), lr=learning_rate)
    training_pairs = [tensorsFromPair(random.choice(pairs))
                      for i in range(n_iters)]
    criterion = nn.CrossEntropyLoss()  # switched from nn.NLLLoss()

    for it in range(1, n_iters + 1):
        training_pair = training_pairs[it - 1]
        input_tensor = training_pair[0]
        target_tensor = training_pair[1]

        print("Input Tensor: ", input_tensor.shape)
        print("Target Tensor: ",target_tensor.shape)

        loss = train(input_tensor, target_tensor, encoder,
                     decoder, encoder_optimizer, decoder_optimizer, criterion)

def train(input_tensor, target_tensor, encoder, decoder, encoder_optimizer, decoder_optimizer, criterion, max_length=MAX_LENGTH):
    encoder_hidden = encoder.initHidden()

    encoder_optimizer.zero_grad()
    decoder_optimizer.zero_grad()

    input_length = input_tensor.size(0)
    target_length = target_tensor.size(0)

    encoder_outputs = torch.zeros(max_length, encoder.hidden_size, device=device)

    loss = 0
    for ei in range(input_length):
        encoder_output, encoder_hidden = encoder(
            input_tensor[ei], encoder_hidden)
        encoder_outputs[ei] = encoder_output[0, 0]

    decoder_input = torch.tensor([[SOS_token]], device=device)

    decoder_hidden = encoder_hidden

    use_teacher_forcing = random.random() < teacher_forcing_ratio

    if use_teacher_forcing:
        # Teacher forcing: Feed the target as the next input
        for di in range(target_length):
            decoder_output, decoder_hidden, decoder_attention = decoder(
                decoder_input, decoder_hidden, encoder_outputs)
            loss += criterion(decoder_output, target_tensor[di])
            decoder_input = target_tensor[di]  # Teacher forcing

    else:
        # Without teacher forcing: use its own predictions as the next input
        for di in range(target_length):
            decoder_output, decoder_hidden, decoder_attention = decoder(
                decoder_input, decoder_hidden, encoder_outputs)
            topv, topi = decoder_output.topk(1)
            decoder_input = topi.squeeze().detach()  # detach from history as input

            loss += criterion(decoder_output, target_tensor[di])
            if decoder_input.item() == EOS_token:
                break

    loss.backward()

    encoder_optimizer.step()
    decoder_optimizer.step()

    return loss.item() 
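For what it's worth, this is the direction I imagine the batched loop going. It is only a sketch, assuming a hypothetical batched initHidden(batch_size), a decoder that takes (1, batch_size) inputs without the attention arguments, and 0 reserved as the padding index; my current modules only accept single tokens:

def train_batch(input_batch, target_batch, encoder, decoder,
                encoder_optimizer, decoder_optimizer):
    # input_batch: (input_len, batch_size); target_batch: (target_len, batch_size)
    batch_size = input_batch.size(1)
    encoder_hidden = encoder.initHidden(batch_size)  # hypothetical batched init

    encoder_optimizer.zero_grad()
    decoder_optimizer.zero_grad()

    # ignore_index makes the loss skip padded target positions
    criterion = nn.CrossEntropyLoss(ignore_index=0)  # assumes 0 = pad index

    loss = 0
    for ei in range(input_batch.size(0)):
        # one time step for the whole batch: a (1, batch_size) slice of token ids
        encoder_output, encoder_hidden = encoder(
            input_batch[ei].unsqueeze(0), encoder_hidden)

    decoder_input = torch.full((1, batch_size), SOS_token,
                               dtype=torch.long, device=device)
    decoder_hidden = encoder_hidden

    for di in range(target_batch.size(0)):
        # decoder_output: (batch_size, vocab_size) logits for this time step
        decoder_output, decoder_hidden = decoder(decoder_input, decoder_hidden)
        loss += criterion(decoder_output, target_batch[di])
        decoder_input = target_batch[di].unsqueeze(0)  # teacher forcing

    loss.backward()
    encoder_optimizer.step()
    decoder_optimizer.step()
    return loss.item()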

0 Answers