Copying sub-tensors in PyTorch

Date: 2019-04-19 05:57:23

Tags: python deep-learning pytorch tensor

I have a tensor "image_features" with shape torch.Size([100, 1024, 14, 14]). I need to replicate each of its sub-tensors (1024, 14, 14) 10 times, obtaining a tensor with shape torch.Size([1000, 1024, 14, 14]).

Basically, the first ten rows of the resulting tensor should correspond to the first row of the original one, the following ten rows to the second row, and so on. If possible, I would prefer not to create a copy (each replicated sub-tensor could share memory with the one it replicates), but if there is no other way, a copy is fine.
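To make the desired ordering concrete, here is a naive construction of the result, added as an illustration with small stand-in shapes (the names and sizes below are placeholders, not from the original question):

import torch

# small stand-in shapes so this runs cheaply; the real case is (100, 1024, 14, 14) -> (1000, 1024, 14, 14)
image_features = torch.randn(5, 3, 2, 2)
reps = 10

# naive construction that pins down the desired ordering
result = torch.stack([image_features[i] for i in range(image_features.size(0)) for _ in range(reps)])

assert result.shape == (5 * reps, 3, 2, 2)
assert torch.equal(result[0], image_features[0])     # rows 0..9 are copies of row 0
assert torch.equal(result[reps], image_features[1])  # rows 10..19 are copies of row 1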

How can I do this?

Thank you very much.

3 Answers:

Answer 0: (score: 1)

Another way to solve your problem is:

orig_shape = (100, 1024, 14, 14)
new_shape = (100, 10, 1024, 14, 14)
input = torch.randn(orig_shape) # [100, 1024, 14, 14]
input = input.unsqueeze(1) # [100, 1, 1024, 14, 14]
input = input.expand(*new_shape) # [100, 10, 1024, 14, 14]
input = input.transpose(0, 1).contiguous() # [10, 100, 1024, 14, 14]
input = input.view(-1, *orig_shape[1:]) # [1000, 1024, 14, 14]

We can verify this:

orig_shape = (2, 3, 4)
new_shape = (2, 2, 3, 4)
input = torch.randn(orig_shape)
print(input)
input = input.unsqueeze(1)
input = input.expand(*new_shape)
input = input.transpose(0, 1).contiguous()
input = input.view(-1, *orig_shape[1:])
print(input)

The snippet prints:

tensor([[[-1.1728,  1.0421, -1.0716,  0.6456],
         [-1.2214,  1.1484, -0.1436,  1.2353],
         [-0.4395, -0.9473, -0.1382, -0.9357]],

        [[-0.4735, -1.4329, -0.0025, -0.6384],
         [ 0.5102,  0.7813,  1.2810, -0.6013],
         [ 0.6152,  1.1734, -0.4591, -1.7447]]])

tensor([[[-1.1728,  1.0421, -1.0716,  0.6456],
         [-1.2214,  1.1484, -0.1436,  1.2353],
         [-0.4395, -0.9473, -0.1382, -0.9357]],

        [[-0.4735, -1.4329, -0.0025, -0.6384],
         [ 0.5102,  0.7813,  1.2810, -0.6013],
         [ 0.6152,  1.1734, -0.4591, -1.7447]],

        [[-1.1728,  1.0421, -1.0716,  0.6456],
         [-1.2214,  1.1484, -0.1436,  1.2353],
         [-0.4395, -0.9473, -0.1382, -0.9357]],

        [[-0.4735, -1.4329, -0.0025, -0.6384],
         [ 0.5102,  0.7813,  1.2810, -0.6013],
         [ 0.6152,  1.1734, -0.4591, -1.7447]]])
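Two remarks on the above (my own additions, not part of the original answer): the expand step is a view and allocates no new memory, and the copy only happens at contiguous(). Also, with the transpose in place the result is batch-tiled (row 0, row 1, row 0, row 1, ... as the printout shows) rather than keeping the ten copies of each sub-tensor adjacent. If the adjacent ordering from the question is required, a sketch that simply skips the transpose should give it:

import torch

orig_shape = (100, 1024, 14, 14)
reps = 10

x = torch.randn(orig_shape)          # [100, 1024, 14, 14]
x = x.unsqueeze(1)                   # [100, 1, 1024, 14, 14]
x = x.expand(-1, reps, -1, -1, -1)   # [100, 10, 1024, 14, 14], still a view, no data copied
x = x.reshape(-1, *orig_shape[1:])   # [1000, 1024, 14, 14]; rows i*10 .. i*10+9 all equal original row i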

Answer 1: (score: 0)

Here's one way to do it using tensor.repeat(), which involves copying the data:

# sample tensor for us to work with
In [89]: shp = (100, 1024, 14, 14)
In [90]: t = torch.randn(shp)

# number of desired repetitions
In [91]: reps = 10

# all the magic happens here
# 10 -> we wish to repeat the entries `reps` times along first dimension
# 1 -> we don't want to repeat along the rest of the dimensions
In [92]: rep_tensor = t.repeat(reps, 1, 1, 1).view(-1, *shp[1:])

In [93]: rep_tensor.shape
Out[93]: torch.Size([1000, 1024, 14, 14])

Here's a simple example as a sanity check:

In [109]: shp = (1, 3, 2)
In [110]: t = torch.randn(shp)

In [111]: t
Out[111]: 
tensor([[[-0.8974,  0.7790],
         [-0.0637, -1.0532],
         [-0.1682, -0.1921]]])

# repeat 3 times along axis 0
In [112]: rep_tensor = t.repeat(3, 1, 1).view(-1, *shp[1:])

In [113]: rep_tensor
Out[113]: 
tensor([[[-0.8974,  0.7790],
         [-0.0637, -1.0532],
         [-0.1682, -0.1921]],

        [[-0.8974,  0.7790],
         [-0.0637, -1.0532],
         [-0.1682, -0.1921]],

        [[-0.8974,  0.7790],
         [-0.0637, -1.0532],
         [-0.1682, -0.1921]]])
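One caveat (my own check, not part of the answer above): t.repeat(reps, 1, 1, 1) tiles the whole batch, so the copies of a given sub-tensor are not adjacent; the sanity check above cannot show this because its first dimension is 1. Inserting a singleton dimension before repeating gives the adjacent ordering the question asks for. A sketch with tiny shapes:

import torch

shp = (2, 3, 2)
reps = 3
t = torch.randn(shp)

# tiles the whole batch: t[0], t[1], t[0], t[1], t[0], t[1]
tiled = t.repeat(reps, 1, 1)
assert torch.equal(tiled[1], t[1])

# a singleton dim first keeps each row's copies adjacent: t[0], t[0], t[0], t[1], t[1], t[1]
adjacent = t.unsqueeze(1).repeat(1, reps, 1, 1).view(-1, *shp[1:])
assert torch.equal(adjacent[1], t[0])
assert torch.equal(adjacent[reps], t[1])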

Answer 2: (score: 0)

PyTorch has a one-liner that achieves this:


a = torch.randn(100, 1024, 14, 14)
b = torch.repeat_interleave(a, 10, dim=0)

# a.size() --> torch.Size([100, 1024, 14, 14])
# b.size() --> torch.Size([1000, 1024, 14, 14])

It repeats the values of the tensor along the given axis the given number of times - here, 10 times along axis 0. It has the following function definition:

torch.repeat_interleave(input, repeats, dim=None) → Tensor
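As a quick check (my own addition, with small stand-in shapes) that repeat_interleave gives exactly the ordering described in the question; note that it allocates a new tensor, so the copies cannot share memory with the original:

import torch

a = torch.randn(4, 3, 2, 2)                # stand-in for the real [100, 1024, 14, 14] tensor
b = torch.repeat_interleave(a, 10, dim=0)  # [40, 3, 2, 2]

# rows i*10 .. i*10+9 of b are all copies of a[i]
for i in range(a.size(0)):
    assert torch.equal(b[i * 10 : (i + 1) * 10], a[i].unsqueeze(0).expand(10, -1, -1, -1))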