Resizing a PyTorch tensor to a smaller size

Asked: 2020-03-13 03:33:14

Tags: python pytorch

I am trying to shrink a tensor from (3, 3) down to (1, 1), but I want to keep the original tensor:

import torch

a = torch.rand(3, 3)
a_copy = a.clone()
a_copy.resize_(1, 1)
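
Without requires_grad this works as intended; a quick check of the shapes (my own addition, not in the original snippet) confirms the original is untouched:

print(a.shape)       # torch.Size([3, 3]) -- original keeps its size
print(a_copy.shape)  # torch.Size([1, 1]) -- only the copy has been shrunk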

My initial tensor needs requires_grad=True, but PyTorch forbids me from resizing the copy:

a = torch.rand(3, 3, requires_grad=True)
a_copy = a.clone()
a_copy.resize_(1, 1)

which raises this error:

Traceback (most recent call last):
  File "pytorch_test.py", line 7, in <module>
    a_copy.resize_(1, 1)
RuntimeError: cannot resize variables that require grad

Clone and detach

I also tried .clone().detach():

a = torch.rand(3, 3, requires_grad=True)
a_copy = a.clone().detach()

with torch.no_grad():
    a_copy.resize_(1, 1)

which gives this error instead:

Traceback (most recent call last):
  File "pytorch_test.py", line 14, in <module>
    a_copy.resize_(1, 1)
RuntimeError: set_sizes_contiguous is not allowed on a Tensor created from .data or .detach().
If your intent is to change the metadata of a Tensor (such as sizes / strides / storage / storage_offset)
without autograd tracking the change, remove the .data / .detach() call and wrap the change in a `with torch.no_grad():` block.
For example, change:
    x.data.set_(y)
to:
    with torch.no_grad():
        x.set_(y)

This behavior is noted in the docs and in #15070.

Using no_grad()

So, following what the error message suggests, I removed .detach() and used no_grad() instead:

a = torch.rand(3, 3, requires_grad=True)
a_copy = a.clone()

with torch.no_grad():
    a_copy.resize_(1, 1)

But it still gives me an error about grad:

Traceback (most recent call last):
  File "pytorch_test.py", line 21, in <module>
    a_copy.resize_(1, 1)
RuntimeError: cannot resize variables that require grad

Similar questions

I have looked at Resize PyTorch Tensor, but in that example the tensor retains all of its original values. I have also looked at Pytorch preferred way to copy a tensor, which is the method I am using to copy the tensor.

I am using PyTorch version 1.4.0.

2 Answers:

Answer 0 (score: 0)

There is a narrow() function:

import torch

def samestorage(x, y):
    # Two tensors share storage if their data pointers match
    if x.storage().data_ptr() == y.storage().data_ptr():
        print("same storage")
    else:
        print("different storage")

def contiguous(y):
    if y.is_contiguous():
        print("contiguous")
    else:
        print("non contiguous")

# narrow => same storage, contiguous tensors
x = torch.randn(3, 3, requires_grad=True)
y = x.narrow(0, 1, 2)  # dim, start, length
print(x)
print(y)
contiguous(y)
samestorage(x, y)

Output:

tensor([[ 1.1383, -1.2937,  0.8451],
        [ 0.0151,  0.8608,  1.4623],
        [ 0.8490, -0.0870, -0.0254]], requires_grad=True)
tensor([[ 0.0151,  0.8608,  1.4623],
        [ 0.8490, -0.0870, -0.0254]], grad_fn=<SliceBackward>)
contiguous
same storage
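
Applied to the (3, 3) → (1, 1) case from the question, narrow can be chained over both dimensions. This is a minimal sketch along the same lines; the chained call is an illustration, not part of the original answer:

# rows 0..0 of dim 0, then cols 0..0 of dim 1 => a (1, 1) view that shares storage with x
z = x.narrow(0, 0, 1).narrow(1, 0, 1)
print(z.shape)       # torch.Size([1, 1]); z still has a grad_fn, so autograd keeps tracking it
contiguous(z)        # contiguous
samestorage(x, z)    # same storage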

Answer 1 (score: -1)

I think you should detach first, and then clone:

a = torch.rand(3, 3, requires_grad=True)
a_copy = a.detach().clone()
a_copy.resize_(1, 1)

Note: a.detach() returns a new tensor detached from the current graph (it does not detach a itself from the graph, as a.detach_() would). But since it shares storage with a, you should also clone it. That way, whatever you do to a_copy will not affect a. However, I am not sure why a.detach().clone() works while a.clone().detach() gives an error.
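
A quick check (a minimal sketch, not part of the original answer) that the resized copy really is independent of a:

a = torch.rand(3, 3, requires_grad=True)
a_copy = a.detach().clone()
a_copy.resize_(1, 1)
a_copy.fill_(0.0)                     # in-place change on the copy only
print(a.shape, a.requires_grad)       # torch.Size([3, 3]) True -- original untouched
print(a_copy.shape)                   # torch.Size([1, 1])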

Edit

The following code also works (and is probably a better solution):

a = torch.rand(3, 3, requires_grad=True)

with torch.no_grad():
    a_copy = a.clone()
    a_copy.resize_(1, 1)
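
As a sanity check (my own sketch): under no_grad the clone does not require grad, which is why the in-place resize is allowed here, while the original tensor keeps its shape and requires_grad:

print(a.shape, a.requires_grad)            # torch.Size([3, 3]) True
print(a_copy.shape, a_copy.requires_grad)  # torch.Size([1, 1]) False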