PyTorch: backpropagating more than one loss

Date: 2020-05-24 06:09:41

Tags: python neural-network pytorch reinforcement-learning

I want to backpropagate multiple samples, which means more than one PyTorch loss, and I want to do it at a specific timestep. This is what I am trying:

        losso = 0
        for g, logprob in zip(G, self.action_memory):
            losso += -g * logprob
        self.buffer.append(losso)

        if (self.game_counter > self.pre_training_games):
            for element in self.buffer:
                self.policy.optimizer.zero_grad()
                element.backward(retain_graph=True)
                self.policy.optimizer.step()

But I get a runtime error:

RuntimeError: one of the variables needed for gradient computation has been modified by an inplace operation: [torch.FloatTensor [91, 9]], which is output 0 of TBackward, is at version 2; expected version 1 instead. Hint: enable anomaly detection to find the operation that failed to compute its gradient, with torch.autograd.set_detect_anomaly(True).

1 Answer:

Answer 0 (score: 0)

You seem to be reusing the same computation graph across iterations.
On one hand, every losso you append to self.buffer shares the intermediate tensors of the policy network;
on the other hand, you call backward(retain_graph=True) on each buffered loss in turn, with an optimizer.step() in between.

That step() modifies the network's parameters in place, so by the second backward() the saved tensors in the retained graph no longer match the versions recorded at the forward pass. This is likely what is causing your error. Either sum the buffered losses and call backward() once, or run all the backward passes before a single optimizer.step().
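A minimal sketch of the first fix, assuming a toy linear policy and made-up returns G (the real policy network, G, and action_memory from the question are not shown): summing the per-sample losses and calling backward() once means the parameters are never modified between backward passes, so retain_graph is not needed at all.

```python
import torch

# Hypothetical stand-ins for the asker's policy network and data.
policy = torch.nn.Linear(4, 2)
optimizer = torch.optim.SGD(policy.parameters(), lr=0.01)

states = torch.randn(3, 4)                    # toy batch of states
logprobs = torch.log_softmax(policy(states), dim=1)[:, 0]  # toy log-probs
G = torch.tensor([1.0, 0.5, 0.25])            # toy returns

# One combined loss over all samples, mirroring the losso accumulation
# in the question, but followed by a single backward/step.
loss = (-G * logprobs).sum()

optimizer.zero_grad()
loss.backward()        # single backward pass; no retain_graph, no in-place conflict
optimizer.step()
```

If the losses really must be kept separate (e.g. scaled differently later), the alternative is to call backward(retain_graph=True) on every buffered loss first and only then call optimizer.step() once; the gradients accumulate in .grad either way.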