I am following the PyTorch tutorial example:
https://pytorch.org/tutorials/beginner/pytorch_with_examples.html
The example runs without any problems, but when I switch it to my own dataset (a sparse tensor, since it is too large to use as a dense tensor), I get this error:
RuntimeError Traceback (most recent call last)
<ipython-input-127-8b4999644085> in <module>()
41 # Backward pass: compute gradient of the loss with respect to model
42 # parameters
---> 43 loss.backward()
44
45 # Calling the step function on an Optimizer makes an update to its
~/miniconda3/envs/py3/lib/python3.6/site-packages/torch/tensor.py in backward(self, gradient, retain_graph, create_graph)
91 products. Defaults to ``False``.
92 """
---> 93 torch.autograd.backward(self, gradient, retain_graph, create_graph)
94
95 def register_hook(self, hook):
~/miniconda3/envs/py3/lib/python3.6/site-packages/torch/autograd/__init__.py in backward(tensors, grad_tensors, retain_graph, create_graph, grad_variables)
87 Variable._execution_engine.run_backward(
88 tensors, grad_tensors, retain_graph, create_graph,
---> 89 allow_unreachable=True) # allow_unreachable flag
90
91
RuntimeError: Expected object of type torch.FloatTensor but found type torch.sparse.FloatTensor for argument #2 'mat2'
I have tried switching optimizers (Adagrad, Adam), but that does not seem to help.
Edit: added more of the error output. The error occurs in backward().
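For reference, the core operation I need — multiplying a sparse input by a dense weight matrix with gradients flowing to the weight — does work for me in isolation when I go through `torch.sparse.mm` instead of plain `mm`. This is a minimal sketch with toy values (not my real data or shapes):

```python
import torch

# Toy sparse input (2x3, COO format) -- placeholder values, not my real data
i = torch.tensor([[0, 1, 1], [2, 0, 2]])   # row/column indices
v = torch.tensor([3.0, 4.0, 5.0])          # non-zero values
x = torch.sparse_coo_tensor(i, v, (2, 3))

# Dense weight matrix we want gradients for
w = torch.randn(3, 4, requires_grad=True)

# torch.sparse.mm supports autograd with respect to the dense argument
y = torch.sparse.mm(x, w)
loss = y.sum()
loss.backward()
print(w.grad.shape)  # torch.Size([3, 4])
```

So the failure seems specific to how the tutorial's `x.mm(w1)`-style forward pass interacts with a sparse `x` during the backward pass.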