The following code gives me "RuntimeError: grad can be implicitly created only for scalar outputs" on the call to autograd.grad:
import torch
import torch.autograd as autograd

# Note: `x` and `other_x` are both torch tensors of shape torch.Size([1, 20]), with values in [-1.0, 1.0],
# and `y` is a torch tensor of shape torch.Size([7, 1]), with values in [-1.0, 1.0]

def my_func(x):
    # compute something that's a different shape from the input
    return y

def closure():
    minimizer.zero_grad()
    y = my_func(x)
    autograd.backward(other_x, autograd.grad(y, other_x))
    return y

minimizer.step(closure)
return x.detach()
But why? For scalar inputs (all tensors of shape torch.Size([1])), this runs as expected. I can't figure out from the autograd docs what the problem is. Thanks in advance!
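In case it helps, here is a minimal standalone sketch (simplified shapes of my own choosing, not my actual code) that reproduces the error with a non-scalar output, plus the two workarounds I've stumbled on, though I don't understand why they're needed:

```python
import torch
import torch.autograd as autograd

x = torch.rand(1, 20, requires_grad=True)
y = x * 2.0  # non-scalar output, shape (1, 20)

# This raises: "grad can be implicitly created only for scalar outputs"
try:
    autograd.grad(y, x)
except RuntimeError as e:
    print(e)

# Workaround 1: pass grad_outputs explicitly
(g,) = autograd.grad(y, x, grad_outputs=torch.ones_like(y), retain_graph=True)
print(g.shape)  # torch.Size([1, 20])

# Workaround 2: reduce the output to a scalar first
(g2,) = autograd.grad(y.sum(), x)
```

Both workarounds produce a gradient of the same shape as x, but I'd like to understand why the scalar case behaves differently.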