Is PyTorch backprop slower than TensorFlow?

Time: 2019-01-01 12:36:59

Tags: python tensorflow pycharm pytorch

I implemented a simple DDQN network in both PyTorch and TensorFlow. The network is shallow. Although the forward pass in PyTorch is much faster than in TF, the backpropagation step is much slower than in TF. Both backpropagation steps run on the CPU. Any ideas on how to improve this?
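Roughly, this is how I timed the two phases (a minimal sketch, assuming the 64x500 batch shape used in the training code; `QNetwork` is the class shown below):

import time
import torch
import torch.nn.functional as F

net = QNetwork()                        # the network defined below
x = torch.rand([64, 500])
target = torch.rand([64, 24])

t0 = time.perf_counter()
out = net(x)                            # forward pass
t1 = time.perf_counter()
loss = F.mse_loss(out, target)
loss.backward()                         # backward pass
t2 = time.perf_counter()
print(f"forward: {t1 - t0:.4f}s  backward: {t2 - t1:.4f}s")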

The network part is:

import numpy as np
import torch
import torch.nn as nn


class QNetwork(nn.Module):  # enclosing class name is assumed; only the methods are shown here

    def __init__(self, hidden_size_IP=100, hidden_size_rest=100, alpha=0.01, state_size=27, action_size=8,
                 learning_rate=1e-6):
        super().__init__()

        # build hidden layers (layer sizes are hardcoded; the size arguments above are not used here)
        self.l1 = nn.Sequential(nn.Linear(in_features=500, out_features=400),
                                nn.LeakyReLU(negative_slope=alpha))
        self.l2 = nn.Sequential(nn.Linear(in_features=400, out_features=200),
                                nn.LeakyReLU(negative_slope=alpha))
        self.l3 = nn.Sequential(nn.Linear(in_features=200, out_features=200),
                                nn.LeakyReLU(negative_slope=alpha))
        # build output layer
        self.Qval = nn.Linear(in_features=200, out_features=24)

    def forward(self, observation):
        # accept NumPy arrays as well as tensors
        if isinstance(observation, np.ndarray):
            observation = torch.from_numpy(observation).float()
        out1 = self.l1(observation)
        out2 = self.l2(out1)
        out3 = self.l3(out2)
        qval = self.Qval(out3)
        return qval

and the backpropagation code is, for example:

import torch.optim as optim
from torch.nn.functional import mse_loss

self.optimizer = optim.Adam(self.q_net.parameters(), lr=1e-4)
self.optimizer.zero_grad()

state_batch = torch.rand([64, 500])
act_batch = np.random.randint(0, 24, [64, 1])            # random action indices in [0, 24)
act_batch_torch = torch.as_tensor(act_batch)             # int64, as gather() requires
label_batch = torch.rand([64, 1])                        # one target per gathered Q-value
Q = self.q_net(state_batch).gather(1, act_batch_torch)   # q_net is an instance of the network above
loss = mse_loss(input=Q, target=label_batch.detach())
loss.backward()

self.optimizer.step()
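For context, `gather(1, act_batch_torch)` selects, from each row of the [64, 24] Q-matrix, the single entry indexed by that row's action; a minimal illustration:

import torch

q = torch.tensor([[1.0, 2.0, 3.0],
                  [4.0, 5.0, 6.0]])
idx = torch.tensor([[2], [0]])      # chosen action index per row
print(q.gather(1, idx))             # tensor([[3.], [6.]])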

Note that I run backpropagation on the CPU as well, since inference is much faster there. I have tried moving the network to the GPU and running backprop on the GPU, but that turned out to be even slower.
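One thing I have not verified, but which may matter for small networks on CPU, is PyTorch's intra-op thread count, since threading overhead can dominate for small layers:

import torch

print(torch.get_num_threads())   # current intra-op thread count
torch.set_num_threads(1)         # worth sweeping 1..num_cores for small models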

Any ideas why PyTorch is slower here? How can I speed up such a shallow network?

0 Answers:

No answers yet.