PyTorch autograd - grad can be implicitly created only for scalar outputs

Asked: 2018-09-13 15:49:57

Tags: python pytorch autograd

I am working with PyTorch's autograd tooling, and I have found myself in a situation where I need to access the values in a 1D tensor by integer index, like this:

def basic_fun(x_cloned):
    res = []
    for i in range(len(x)):
        res.append(x_cloned[i] * x_cloned[i])
    print(res)
    return Variable(torch.FloatTensor(res))


def get_grad(inp, grad_var):
    A = basic_fun(inp)
    A.backward()
    return grad_var.grad


x = Variable(torch.FloatTensor([1, 2, 3, 4, 5]), requires_grad=True)
x_cloned = x.clone()
print(get_grad(x_cloned, x))

I get the following error message:

[tensor(1., grad_fn=<ThMulBackward>), tensor(4., grad_fn=<ThMulBackward>), tensor(9., grad_fn=<ThMulBackward>), tensor(16., grad_fn=<ThMulBackward>), tensor(25., grad_fn=<ThMulBackward>)]
Traceback (most recent call last):
  File "/home/mhy/projects/pytorch-optim/predict.py", line 74, in <module>
    print(get_grad(x_cloned, x))
  File "/home/mhy/projects/pytorch-optim/predict.py", line 68, in get_grad
    A.backward()
  File "/home/mhy/.local/lib/python3.5/site-packages/torch/tensor.py", line 93, in backward
    torch.autograd.backward(self, gradient, retain_graph, create_graph)
  File "/home/mhy/.local/lib/python3.5/site-packages/torch/autograd/__init__.py", line 90, in backward
    allow_unreachable=True)  # allow_unreachable flag
RuntimeError: element 0 of tensors does not require grad and does not have a grad_fn

In general, I am somewhat skeptical about how using a cloned version of a variable is supposed to keep that variable in the gradient computation. The variable itself is effectively not used in the computation of A, so when you call A.backward(), it should not be part of that operation at all.

I would appreciate help with this approach, or with a better way to avoid losing the gradient history while still being able to index through a 1D tensor with requires_grad=True!

**Edit (Sep 15):**

res is a list of zero-dimensional tensors holding the squared values of 1 to 5. To combine them into a single tensor containing [1.0, 4.0, ..., 25.0], I changed `return Variable(torch.FloatTensor(res))` to `return torch.stack(res, dim=0)`, which produces `tensor([ 1., 4., 9., 16., 25.], grad_fn=<StackBackward>)`.
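
In other words, basic_fun now looks roughly like this (a sketch of the change described above; the loop is written over x_cloned here so the function stands on its own):

def basic_fun(x_cloned):
    res = []
    for i in range(len(x_cloned)):
        # each x_cloned[i] is a 0-dim tensor that carries a grad_fn
        res.append(x_cloned[i] * x_cloned[i])
    # stacking keeps the gradient history, unlike wrapping in a new FloatTensor
    return torch.stack(res, dim=0)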

However, I now get this new error, caused by the A.backward() line (A is now a five-element tensor rather than a scalar):

Traceback (most recent call last):
  File "<project_path>/playground.py", line 22, in <module>
    print(get_grad(x_cloned, x))
  File "<project_path>/playground.py", line 16, in get_grad
    A.backward()
  File "/home/mhy/.local/lib/python3.5/site-packages/torch/tensor.py", line 93, in backward
    torch.autograd.backward(self, gradient, retain_graph, create_graph)
  File "/home/mhy/.local/lib/python3.5/site-packages/torch/autograd/__init__.py", line 84, in backward
    grad_tensors = _make_grads(tensors, grad_tensors)
  File "/home/mhy/.local/lib/python3.5/site-packages/torch/autograd/__init__.py", line 28, in _make_grads
    raise RuntimeError("grad can be implicitly created only for scalar outputs")
RuntimeError: grad can be implicitly created only for scalar outputs

2 Answers:

Answer 0 (score: 1)

In the basic_fun function, the values you append to res are already torch-autograd Variables, so you don't need to convert them again. IMHO:

def basic_fun(x_cloned):
    res = []
    for i in range(len(x)):
        res.append(x_cloned[i] * x_cloned[i])
    print(res)
    #return Variable(torch.FloatTensor(res))
    return res[0]  # already a 0-dim autograd tensor, so backward() needs no arguments

def get_grad(inp, grad_var):
    A = basic_fun(inp)
    A.backward()
    return grad_var.grad


x = Variable(torch.FloatTensor([1, 2, 3, 4, 5]), requires_grad=True)
x_cloned = x.clone()
print(get_grad(x_cloned, x))
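
Returning res[0] makes the output a scalar, so backward() runs, but it only backpropagates through the first squared element. If the gradient with respect to every entry is wanted, one common pattern (a sketch, not part of the original answer; basic_fun_all is a hypothetical name) is to stack the results and reduce to a scalar, or to pass an explicit gradient:

def basic_fun_all(x_cloned):
    # hypothetical variant: square every entry and stack into one 1-D tensor
    res = [x_cloned[i] * x_cloned[i] for i in range(len(x_cloned))]
    return torch.stack(res, dim=0)

x.grad = None                      # clear the gradient accumulated by the call above
A = basic_fun_all(x.clone())
A.sum().backward()                 # option 1: reduce to a scalar before backward()
# A.backward(torch.ones_like(A))   # option 2: keep A 1-D and pass an explicit gradient
print(x.grad)                      # tensor([ 2.,  4.,  6.,  8., 10.])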

Answer 1 (score: 0)

I changed basic_fun to the following, which solved my problem:

def basic_fun(x_cloned):
    res = torch.FloatTensor([0])
    for i in range(len(x)):
        # adding a tensor that requires grad makes res part of the autograd graph
        res += x_cloned[i] * x_cloned[i]
    return res

This version returns a scalar value (a single-element tensor), so backward() can be called on it without an explicit gradient argument.
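
Assuming the same x as in the question, a quick check of this version (a sketch; the expected gradient follows from d(sum of x_i^2)/dx_i = 2*x_i):

import torch
from torch.autograd import Variable

x = Variable(torch.FloatTensor([1, 2, 3, 4, 5]), requires_grad=True)
out = basic_fun(x.clone())
out.backward()    # out has a single element, so no gradient argument is needed
print(x.grad)     # expected: tensor([ 2.,  4.,  6.,  8., 10.])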