How to convert a list of tensors of different sizes into a single tensor?

Asked: 2020-04-03 00:46:33

Tags: python pytorch tensor

I want to convert a list of tensors with different sizes into a single tensor.

I tried torch.stack, but it raises an error.

---------------------------------------------------------------------------
RuntimeError                              Traceback (most recent call last)
<ipython-input-237-76c3ff6f157f> in <module>
----> 1 torch.stack(t)

RuntimeError: invalid argument 0: Sizes of tensors must match except in dimension 0. Got 5 and 6 in dimension 1 at C:\w\1\s\tmp_conda_3.7_105232\conda\conda-bld\pytorch_1579085620499\work\aten\src\TH/generic/THTensor.cpp:612

My list of tensors:

[tensor([-0.1873, -0.6180, -0.3918, -0.5849, -0.3607]),
 tensor([-0.6873, -0.3918, -0.5849, -0.9768, -0.7590, -0.6707]),
 tensor([-0.6686, -0.7022, -0.7436, -0.8231, -0.6348, -0.4040, -0.6074, -0.6921])]

I also tried another approach: instead of tensors, I used a list of plain Python lists with the same values and tried to build a tensor from that. That raises an error too.

list: [[-0.18729999661445618, -0.6179999709129333, -0.3917999863624573, -0.5849000215530396, -0.36070001125335693], [-0.6873000264167786, -0.3917999863624573, -0.5849000215530396, -0.9768000245094299, -0.7590000033378601, -0.6707000136375427], [-0.6686000227928162, -0.7021999955177307, -0.7436000108718872, -0.8230999708175659, -0.6348000168800354, -0.40400001406669617, -0.6074000000953674, -0.6920999884605408]]

The error:

---------------------------------------------------------------------------
ValueError                                Traceback (most recent call last)
<ipython-input-245-489aea87f307> in <module>
----> 1 torch.FloatTensor(t)

ValueError: expected sequence of length 5 at dim 1 (got 6)

Apparently, if I understand it correctly, it expects all the lists to have the same length.

Can anyone help here?

3 Answers:

Answer 0 (score: 0)

Try:

>>> import torch
>>> data = [torch.tensor([-0.1873, -0.6180, -0.3918, -0.5849, -0.3607]),
...         torch.tensor([-0.6873, -0.3918, -0.5849, -0.9768, -0.7590, -0.6707]),
...         torch.tensor([-0.6686, -0.7022, -0.7436, -0.8231, -0.6348, -0.4040, -0.6074, -0.6921])]

>>> dataTensor = torch.cat(data).reshape(x, y)  # valid only when x * y == torch.cat(data).numel()

>>> print(type(dataTensor))
<class 'torch.Tensor'>

torch.stack stacks a sequence of tensors that all have the same size.

torch.cat concatenates a sequence of tensors along an existing dimension.

From the torch.cat documentation:

Concatenates the given sequence of tensors in the given dimension. All tensors must either have the same shape (except in the concatenating dimension) or be empty.
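The difference between the two can be illustrated with a minimal sketch (the small example tensors here are chosen for illustration, not taken from the question):

```python
import torch

# torch.stack adds a new dimension, so all inputs must have the same shape.
a = torch.tensor([1.0, 2.0])
b = torch.tensor([3.0, 4.0])
stacked = torch.stack([a, b])  # shape (2, 2)

# torch.cat joins along an existing dimension; for 1-D tensors the lengths
# may differ, since only the non-concatenated dimensions must match.
c = torch.tensor([5.0, 6.0, 7.0])
concatenated = torch.cat([a, b, c])  # shape (7,)

print(stacked.shape)       # torch.Size([2, 2])
print(concatenated.shape)  # torch.Size([7])
```

This is why torch.cat succeeds on the question's list while torch.stack fails: cat flattens everything into one 1-D tensor instead of building a 2-D batch.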

Answer 1 (score: 0)

I agree with @helloswift123: you cannot stack tensors of different lengths.

Also, @helloswift123's answer works only when the total number of elements is divisible by the shape you want. Here the total number of elements is 19, which is prime, so it cannot be reshaped into anything useful beyond a 1×19 or 19×1 tensor.

torch.cat()

data = [torch.tensor([-0.1873, -0.6180, -0.3918, -0.5849, -0.3607]),
        torch.tensor([-0.6873, -0.3918, -0.5849, -0.9768, -0.7590, -0.6707]),
        torch.tensor([-0.6686, -0.7022, -0.7436, -0.8231, -0.6348, -0.4040, -0.6074, -0.6921])]
dataTensor = torch.cat(data)
dataTensor.numel()

Output:

tensor([-0.1873, -0.6180, -0.3918, -0.5849, -0.3607, -0.6873, -0.3918, -0.5849,
        -0.9768, -0.7590, -0.6707, -0.6686, -0.7022, -0.7436, -0.8231, -0.6348,
        -0.4040, -0.6074, -0.6921])
19 

A possible solution:

This is not a perfect solution either, but it works around the problem.

# Have a list of tensors (which can be of different lengths) 
data = [torch.tensor([-0.1873, -0.6180, -0.3918, -0.5849, -0.3607]),
        torch.tensor([-0.6873, -0.3918, -0.5849, -0.9768, -0.7590, -0.6707]),
        torch.tensor([-0.6686, -0.7022, -0.7436, -0.8231, -0.6348, -0.4040, -0.6074, -0.6921])]

# Determine maximum length
max_len = max([x.squeeze().numel() for x in data])

# pad all tensors to have same length
data = [torch.nn.functional.pad(x, pad=(0, max_len - x.numel()), mode='constant', value=0) for x in data]

# stack them
data = torch.stack(data)

print(data)
print(data.shape)

Output:

tensor([[-0.1873, -0.6180, -0.3918, -0.5849, -0.3607,  0.0000,  0.0000,  0.0000],
        [-0.6873, -0.3918, -0.5849, -0.9768, -0.7590, -0.6707,  0.0000,  0.0000],
        [-0.6686, -0.7022, -0.7436, -0.8231, -0.6348, -0.4040, -0.6074, -0.6921]])
torch.Size([3, 8])

This appends zeros to the end of any tensor with fewer elements, so that all tensors have the same length and you can use torch.stack() as usual.

I hope this helps!

Answer 2 (score: 0)

Another possible solution, using torch.nn.utils.rnn.pad_sequence.
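A minimal sketch of this approach, using the values from the question (pad_sequence right-pads each tensor to the length of the longest one, which achieves the same result as the manual padding above):

```python
import torch
from torch.nn.utils.rnn import pad_sequence

data = [torch.tensor([-0.1873, -0.6180, -0.3918, -0.5849, -0.3607]),
        torch.tensor([-0.6873, -0.3918, -0.5849, -0.9768, -0.7590, -0.6707]),
        torch.tensor([-0.6686, -0.7022, -0.7436, -0.8231, -0.6348, -0.4040, -0.6074, -0.6921])]

# batch_first=True puts the batch dimension first, giving shape (3, 8);
# padding_value=0.0 fills the tail of the shorter tensors with zeros.
padded = pad_sequence(data, batch_first=True, padding_value=0.0)

print(padded.shape)  # torch.Size([3, 8])
```

This replaces the explicit max-length computation and per-tensor F.pad loop with a single call.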