TypeError: 'NoneType' object is not iterable on backward() in PyTorch

Date: 2019-11-01 07:40:05

Tags: pytorch

I am trying to implement a Deep Q-Network (DQN) that uses a graph convolutional network (GCN) built with the Deep Graph Library (DGL). The base code comes from this repository. However, after I compute the loss between the policy network and the target network and run loss.backward(), I get TypeError: 'NoneType' object is not iterable. I have printed the loss value, and it is not None.

I ran the original code from the repository and it works fine. I also implemented the GCN part in DGL, and it appears to run. I additionally visualized the computation graph with torchviz, but I could not find the reason for the error.

The code snippet is as follows:

# target Q-values computed from the target network for the current observation
target = reward_tens + self.gamma * torch.max(self.model_target(observation_tens, self.G) + observation_tens * (-1e5), dim=1)[0]

# Q-values from the policy network for the previous observation
current_q_values = self.model(last_observation_tens, self.G)
next_q_values = current_q_values.clone()
current_q_values[range(self.minibatch_length), action_tens, :] = target

L = self.criterion(current_q_values, next_q_values)
print('loss:', L.item())
self.optimizer.zero_grad()
L.backward(retain_graph=True)

self.optimizer.step()
  

loss: 1461729.125

TypeError                                 Traceback (most recent call last)
<ipython-input-17-cd5e862dd609> in <module>()
     62 
     63 if __name__ == "__main__":
---> 64     main()

7 frames
<ipython-input-17-cd5e862dd609> in main()
     55         print("Running a single instance simulation...")
     56         my_runner = Runner(env_class, agent_class, args.verbose)
---> 57         final_reward = my_runner.loop(graph_dic,args.ngames,args.epoch, args.niter)
     58         print("Obtained a final reward of {}".format(final_reward))
     59         agent_class.save_model()

<ipython-input-14-45cfc883a37b> in loop(self, graphs, games, nbr_epoch, max_iter)
     45                         # if self.verbose:
     46                         #   print("Simulation step {}:".format(i))
---> 47                         (obs, act, rew, done) = self.step()
     48                         action_list.append(act)
     49 

<ipython-input-14-45cfc883a37b> in step(self)
     16         #reward = torch.tensor([reward], device=device)
     17 
---> 18         self.agent.reward(observation, action, reward,done)
     19 
     20         return (observation, action, reward, done)

<ipython-input-16-76d612e8663c> in reward(self, observation, action, reward, done)
    129               print('loss:',L.item())
    130               self.optimizer.zero_grad()
--> 131               L.backward(retain_graph=True)
    132               self.optimizer.step()
    133 

/usr/local/lib/python3.6/dist-packages/torch/tensor.py in backward(self, gradient, retain_graph, create_graph)
    148                 products. Defaults to ``False``.
    149         """
--> 150         torch.autograd.backward(self, gradient, retain_graph, create_graph)
    151 
    152     def register_hook(self, hook):

/usr/local/lib/python3.6/dist-packages/torch/autograd/__init__.py in backward(tensors, grad_tensors, retain_graph, create_graph, grad_variables)
     97     Variable._execution_engine.run_backward(
     98         tensors, grad_tensors, retain_graph, create_graph,
---> 99         allow_unreachable=True)  # allow_unreachable flag
    100 
    101 

/usr/local/lib/python3.6/dist-packages/torch/autograd/function.py in apply(self, *args)
     75 
     76     def apply(self, *args):
---> 77         return self._forward_cls.backward(self, *args)
     78 
     79 

/usr/local/lib/python3.6/dist-packages/dgl/backend/pytorch/tensor.py in backward(ctx, grad_out)
    394     def backward(ctx, grad_out):
    395         reducer, graph, target, in_map, out_map, in_data_nd, out_data_nd, degs \
--> 396             = ctx.backward_cache
    397         ctx.backward_cache = None
    398         grad_in = None

TypeError: 'NoneType' object is not iterable

1 Answer:

Answer 0 (score: 0)

Update: the problem was solved by building from source on the master branch. See this issue for details.

So I ran into the same problem when generating a toy dataset of random graphs in DGL. For each graph I computed the corresponding target with G.update_all(fn.copy_e('msg'), fn.sum('msg', 'c')) and target = dgl.sum_nodes(G, 'c'). When I called loss.backward(), I got the same error as you.

I fixed this by adding torch.no_grad() around the creation of my DataLoader object, so that loss.backward() does not call the backward() inside target, where ctx.backward_cache = None if the forward() of the CopyReduce() class in dgl/tensor.py has not been called immediately before it.
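A minimal sketch of that idea (the toy graph, the edge-feature name 'msg', and the two-argument fn.copy_e call are assumptions for illustration, not the original code):

import dgl
import dgl.function as fn
import torch

# Hypothetical toy graph with a scalar edge feature 'msg'.
G = dgl.graph(([0, 1, 2], [1, 2, 3]))
G.edata['msg'] = torch.randn(G.number_of_edges(), 1)

# Computing the target under torch.no_grad() keeps the DGL reduce op out of
# the autograd graph, so loss.backward() never reaches its backward() while
# ctx.backward_cache is unset.
with torch.no_grad():
    G.update_all(fn.copy_e('msg', 'm'), fn.sum('m', 'c'))
    target = dgl.sum_nodes(G, 'c')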

I am not sure my solution applies directly to your problem, but until then you should check whether you have a backward pass without a forward pass before it, or whether the tensors in your loss function refer to the same computation in the graph and therefore call CopyReduce() twice.
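Applied to the snippet from the question, the same idea might look roughly like this; it is only a sketch built on the variables shown above and has not been tested against the original repository:

# Compute the bootstrap target without tracking gradients, so that only one
# differentiable path (through self.model) runs through the DGL ops.
with torch.no_grad():
    target = reward_tens + self.gamma * torch.max(
        self.model_target(observation_tens, self.G) + observation_tens * (-1e5), dim=1)[0]

current_q_values = self.model(last_observation_tens, self.G)
next_q_values = current_q_values.detach().clone()   # plain copy, no autograd history
current_q_values[range(self.minibatch_length), action_tens, :] = target

L = self.criterion(current_q_values, next_q_values)
self.optimizer.zero_grad()
L.backward()   # retain_graph=True may no longer be needed
self.optimizer.step()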

I hope this helps.