OpenAI's REINFORCE and actor-critic examples for reinforcement learning contain the following code:
policy_loss = torch.cat(policy_loss).sum()
One of them uses loss = torch.stack(policy_losses).sum() + torch.stack(value_losses).sum(), and the other uses torch.cat.
As far as I can tell, the docs don't draw any obvious distinction between them. I would be glad to know the difference between these functions.
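A minimal sketch of how the two patterns compare, assuming policy_losses is simply a Python list of one-element loss tensors (the values below are made up for illustration):

import torch

# hypothetical per-step losses, each a 1-element tensor, as such training loops typically collect
policy_losses = [torch.tensor([0.5]), torch.tensor([1.5]), torch.tensor([2.0])]

# cat joins them along the existing dimension -> shape [3]
loss_from_cat = torch.cat(policy_losses).sum()

# stack adds a new dimension -> shape [3, 1]; the summed value is the same
loss_from_stack = torch.stack(policy_losses).sum()

print(loss_from_cat, loss_from_stack)  # tensor(4.) tensor(4.)

# note: if the losses were 0-dim scalars, torch.cat would raise an error, while torch.stack would still work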
Answer 0 (score: 13)
stack: concatenates a sequence of tensors along a new dimension.
cat: concatenates the given sequence of tensors in the given dimension.
So if A and B have shape (3, 4), then torch.cat([A, B], dim=0) has shape (6, 4), while torch.stack([A, B], dim=0) has shape (2, 3, 4).
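A quick sketch verifying those shapes (not part of the original answer):

import torch

A = torch.randn(3, 4)
B = torch.randn(3, 4)

print(torch.cat([A, B], dim=0).shape)    # torch.Size([6, 4]): rows are concatenated
print(torch.cat([A, B], dim=1).shape)    # torch.Size([3, 8]): columns are concatenated
print(torch.stack([A, B], dim=0).shape)  # torch.Size([2, 3, 4]): a new leading dimension is added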
Answer 1 (score: 8)
import torch

t1 = torch.tensor([[1, 2],
                   [3, 4]])
t2 = torch.tensor([[5, 6],
                   [7, 8]])
These functions are analogous to numpy.stack and numpy.concatenate.
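The answer stops after defining t1 and t2; here is a small sketch of what the two calls (and their NumPy counterparts) would produce:

import torch
import numpy as np

t1 = torch.tensor([[1, 2], [3, 4]])
t2 = torch.tensor([[5, 6], [7, 8]])

print(torch.cat([t1, t2]).shape)    # torch.Size([4, 2]): t2's rows appended below t1's
print(torch.stack([t1, t2]).shape)  # torch.Size([2, 2, 2]): a new leading dimension

# the NumPy counterparts behave the same way
print(np.concatenate([t1.numpy(), t2.numpy()]).shape)  # (4, 2)
print(np.stack([t1.numpy(), t2.numpy()]).shape)        # (2, 2, 2)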
Answer 2 (score: 1)
The original answers were missing a good self-contained example, so here is one:
import torch
# stack vs cat
# cat "extends" a list in the given dimension e.g. adds more rows or columns
x = torch.randn(2, 3)
print(f'{x.size()}')
# add more rows (dim 0 grows from 2 to 6, i.e. columns now live in a 6-dimensional space)
xnew_from_cat = torch.cat((x, x, x), 0)
print(f'{xnew_from_cat.size()}')
# add more columns (dim 1 grows from 3 to 9, i.e. rows now live in a 9-dimensional space)
xnew_from_cat = torch.cat((x, x, x), 1)
print(f'{xnew_from_cat.size()}')
print()
# stack serves the same role as append in lists, i.e. it doesn't change the original
# tensors but adds a new dimension to the new tensor, so you retain the ability
# to get each original tensor back by indexing along that new dimension
xnew_from_stack = torch.stack((x, x, x, x), 0)
print(f'{xnew_from_stack.size()}')
xnew_from_stack = torch.stack((x, x, x, x), 1)
print(f'{xnew_from_stack.size()}')
xnew_from_stack = torch.stack((x, x, x, x), 2)
print(f'{xnew_from_stack.size()}')
# by default the new dimension is added at the front (dim=0)
xnew_from_stack = torch.stack((x, x, x, x))
print(f'{xnew_from_stack.size()}')
print('I like to think of xnew_from_stack as a \"tensor list\" that you can pop from the front')
Output:
torch.Size([2, 3])
torch.Size([6, 3])
torch.Size([2, 9])
torch.Size([4, 2, 3])
torch.Size([2, 4, 3])
torch.Size([2, 3, 4])
torch.Size([4, 2, 3])
I like to think of xnew_from_stack as a "tensor list" that you can pop from the front
For reference, here are the definitions:
cat: concatenates the given sequence of tensors in the given dimension. The consequence is that one specific dimension changes size, e.g. with dim=0 you are adding elements to the rows, which increases the dimensionality of the column space.
stack: concatenates a sequence of tensors along a new dimension. I like to think of it as the torch "append" operation, since you can index/get back each original tensor by "popping" it from the front of that new dimension. With no dim argument, the new dimension is added at the front.
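To illustrate the "pop from the front" idea, a small sketch (not from the original answer): indexing the new dimension of a stacked result gives back one of the original tensors unchanged.

import torch

x = torch.randn(2, 3)
stacked = torch.stack((x, x, x, x))  # shape [4, 2, 3]; the new dimension is at the front by default

# indexing along the new front dimension "pops" one original tensor back out
print(torch.equal(stacked[0], x))  # True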
Related:
tensor.torch — converts a nested list of tensors into one big tensor with many dimensions, respecting the depth of the nested list:

def tensorify(lst):
    """
    List must be a nested list of tensors (with no varying lengths within a dimension).
    Nested list of lengths [D1, D2, ..., DN] -> tensor of shape [D1, D2, ..., DN]
    :return: one big tensor built from the nested list
    """
    # base case: if the current list is not nested anymore, make it into a tensor
    if type(lst[0]) != list:
        if type(lst) == torch.Tensor:
            return lst
        elif type(lst[0]) == torch.Tensor:
            return torch.stack(lst, dim=0)
        else:  # if the elements of lst are floats or something like that
            return torch.tensor(lst)
    current_dimension_i = len(lst)
    for d_i in range(current_dimension_i):
        tensor = tensorify(lst[d_i])
        lst[d_i] = tensor
    # at the end of the loop each lst[d_i] is a tensor of shape [D2, ..., DN]
    tensor_lst = torch.stack(lst, dim=0)
    return tensor_lst
Here are some unit tests (I didn't write more tests, but it worked with my real code so I trust it's fine. Feel free to help me by adding more tests):
def test_tensorify():
    t = [1, 2, 3]
    print(tensorify(t).size())
    tt = [t, t, t]
    print(tensorify(tt))
    ttt = [tt, tt, tt]
    print(tensorify(ttt))

if __name__ == '__main__':
    test_tensorify()
    print('Done\a')