Is it possible to create a FIFO queue with PyTorch?

Asked: 2018-08-09 08:04:42

Tags: python machine-learning queue pytorch tensor

I need a fixed-length tensor in PyTorch that behaves like a FIFO queue.

I have this function:

def push_to_tensor(tensor, x):
    # Shift every element one slot to the left (in place),
    # then write the new value into the freed last slot.
    tensor[:-1] = tensor[1:]
    tensor[-1] = x
    return tensor

For example, given:

tensor = torch.Tensor([1, 2, 3, 4])

>> tensor([ 1.,  2.,  3.,  4.])

calling the function then gives:

push_to_tensor(tensor, 5)

>> tensor([ 2.,  3.,  4.,  5.])

However, I would like to know:

  • Does PyTorch have a native method for this?
  • If not, is there a smarter way to do it?

2 Answers:

Answer 0 (score: 2)

I implemented another FIFO push:

def push_to_tensor_alternative(tensor, x):
    # Drop the oldest element and concatenate the new one at the end.
    return torch.cat((tensor[1:], torch.Tensor([x])))

Both do the same thing, but then I checked their performance:

# Small tensor
tensor = torch.Tensor([1, 2, 3, 4])

%timeit push_to_tensor(tensor, 5)
>> 30.9 µs ± 1.26 µs per loop (mean ± std. dev. of 7 runs, 10000 loops each)

%timeit push_to_tensor_alternative(tensor, 5)
>> 22.1 µs ± 2.25 µs per loop (mean ± std. dev. of 7 runs, 10000 loops each)

# Larger tensor
tensor = torch.arange(10000)

%timeit push_to_tensor(tensor, 5)
>> 57.7 µs ± 4.88 µs per loop (mean ± std. dev. of 7 runs, 10000 loops each)

%timeit push_to_tensor_alternative(tensor, 5)
>> 28.9 µs ± 570 ns per loop (mean ± std. dev. of 7 runs, 10000 loops each)

It seems that push_to_tensor_alternative, which uses torch.cat instead of shifting every element to the left, is faster.
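Regarding the "native method" part of the question: newer PyTorch releases (more recent than the 2018 version this question targets) include torch.roll, which can express the same left shift. A minimal sketch, not benchmarked here:

```python
import torch

def push_to_tensor_roll(tensor, x):
    # torch.roll(tensor, -1) moves every element one position to the
    # left, wrapping the first element around to the end; we then
    # overwrite that wrapped-around slot with the new value.
    tensor = torch.roll(tensor, -1)
    tensor[-1] = x
    return tensor

t = torch.tensor([1., 2., 3., 4.])
t = push_to_tensor_roll(t, 5)
# t is now tensor([2., 3., 4., 5.])
```

Whether this beats torch.cat likely depends on tensor size and device, so it is worth timing in your own setup.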

Answer 1 (score: 0)

Maybe a little late, but I found another way to do this and save some time. In my case, I needed a similar FIFO structure, but I only had to actually parse the FIFO tensor once every N iterations; that is, I needed a FIFO holding n integers, and once every n iterations I had to pass that tensor through my model. I found it much faster to implement the FIFO with a collections.deque and only then cast the deque to a torch tensor.

import time
import torch
from collections import deque

length = 5000

que = deque([0] * 200)
ten = torch.tensor(que)

s = time.time()
for i in range(length):
    for j in range(200):
        que.pop()
        que.appendleft(j * 10)
    # After appending/popping elements, cast to a tensor once per outer loop.
    torch.tensor(que)
print("finish deque:", time.time() - s)

s = time.time()
for i in range(length):
    for j in range(200):
        newelem = torch.tensor([j * 10])
        # Using the tensor itself as the FIFO, so no cast is needed.
        ten = torch.cat((ten[1:], newelem))
print("finish tensor:", time.time() - s)

The results were:

finish deque: 0.15857529640197754
finish tensor: 9.483643531799316
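As a side note on the deque approach: collections.deque accepts a maxlen argument, which evicts the oldest element automatically on append, so the explicit pop() is not needed. A small sketch (pushing on the right instead of the left, which is an equivalent FIFO):

```python
from collections import deque
import torch

# With maxlen set, appending to a full deque silently drops the
# element at the opposite end, so each append is a complete FIFO push.
que = deque([0] * 200, maxlen=200)
que.append(7)            # evicts the oldest (leftmost) element
ten = torch.tensor(que)  # cast only when the model actually needs a tensor
```

This keeps the fast-path work in pure Python data structures and defers the tensor cast, which is the point of this answer.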

I also noticed that even when using a deque and casting to torch.tensor on every single push, it is still about 20% faster than using push_to_tensor_alternative.

s = time.time()
for j in range(length):
    que.pop()
    que.appendleft(j * 10)
    torch.tensor(que)
print("finish queue:", time.time() - s)

s = time.time()
for j in range(length):
    newelem = torch.tensor([j * 10])
    ten = torch.cat((ten[1:], newelem))
print("finish tensor:", time.time() - s)

Results:

finish queue: 8.422480821609497
finish tensor: 11.169137477874756
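One more option that neither answer benchmarks: since the length is fixed, you can preallocate the tensor once and treat it as a ring buffer, so a push is a single O(1) element write with no torch.cat and no cast. The class and method names below (TensorRingBuffer, snapshot) are my own for illustration; this is a sketch, not a measured result:

```python
import torch

class TensorRingBuffer:
    """Fixed-size FIFO over a preallocated tensor: O(1) push per element."""

    def __init__(self, size):
        self.buf = torch.zeros(size)
        self.idx = 0  # index of the oldest element (next write position)

    def push(self, x):
        # Overwrite the oldest element and advance the write index.
        self.buf[self.idx] = x
        self.idx = (self.idx + 1) % self.buf.numel()

    def snapshot(self):
        # Materialize the contents in FIFO order (oldest first).
        return torch.cat((self.buf[self.idx:], self.buf[:self.idx]))

buf = TensorRingBuffer(4)
for v in [1, 2, 3, 4, 5]:
    buf.push(v)
# buf.snapshot() is tensor([2., 3., 4., 5.])
```

The snapshot still pays one torch.cat, but only when you actually read the queue, which matches the every-N-iterations access pattern described in this answer.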