NumPy: faster computation of a triple-nested loop involving sum-reductions

Date: 2017-12-03 17:02:56

Tags: python numpy parallel-processing gpu jit

My goal is to compute the following nested loop efficiently:

import numpy as np

d = 100
Ab = np.random.randn(1000, d)
Tb = np.zeros((d, d, d))

for i in range(d):
    for j in range(d):
        for k in range(d):
            Tb[i, j, k] = np.sum(Ab[:, i] * Ab[:, j] * Ab[:, k])

I found a faster way to execute the nested loops by looping over index combinations instead (exploiting the symmetry of `Tb`):

import itertools

for i, j, k in itertools.combinations_with_replacement(range(100), 3):
    Abijk = np.sum(Ab[:, i] * Ab[:, j] * Ab[:, k])

    Tb[i, j, k] = Abijk
    Tb[i, k, j] = Abijk

    Tb[j, i, k] = Abijk
    Tb[j, k, i] = Abijk

    Tb[k, j, i] = Abijk
    Tb[k, i, j] = Abijk
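The six explicit assignments work because `Tb` is fully symmetric in `(i, j, k)`: the product `Ab[:, i] * Ab[:, j] * Ab[:, k]` is invariant under any permutation of the indices. A quick sanity check of this fill against the brute-force loop, on small arbitrary sizes (a sketch, not part of the original question):

```python
import itertools
import numpy as np

# Small sizes for a quick check (arbitrary; the question uses m=1000, d=100)
m, d = 20, 5
Ab = np.random.randn(m, d)

# Brute-force reference
ref = np.zeros((d, d, d))
for i in range(d):
    for j in range(d):
        for k in range(d):
            ref[i, j, k] = np.sum(Ab[:, i] * Ab[:, j] * Ab[:, k])

# Symmetric fill: each unique (i, j, k) computed once, copied to all 6 permutations
Tb = np.zeros((d, d, d))
for i, j, k in itertools.combinations_with_replacement(range(d), 3):
    v = np.sum(Ab[:, i] * Ab[:, j] * Ab[:, k])
    for p in itertools.permutations((i, j, k)):
        Tb[p] = v

assert np.allclose(Tb, ref)
```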

Is there an even more efficient way?

I am hoping there is a way to exploit NumPy's BLAS, Numba's JIT, or a PyTorch GPU implementation.

Thanks!

1 answer:

Answer 0 (score: 3)

方法#1

We can translate the loop indices directly into einsum string notation and use NumPy's built-in np.einsum. Thus, the solution is a single einsum call -

Tb = np.einsum('ai,aj,ak->ijk', Ab, Ab, Ab)
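The subscript string maps one-to-one onto the loop: `a` is the summed sample axis and `i, j, k` are the output axes. A quick check against the brute-force definition on small, arbitrary sizes (a sketch for verification only):

```python
import numpy as np

# Small arbitrary sizes to verify the einsum call against the loop definition
m, d = 50, 6
Ab = np.random.randn(m, d)

Tb = np.einsum('ai,aj,ak->ijk', Ab, Ab, Ab)

# One entry of the brute-force definition: sum over the sample axis a
i, j, k = 1, 2, 3
assert np.isclose(Tb[i, j, k], np.sum(Ab[:, i] * Ab[:, j] * Ab[:, k]))
```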

方法#2

We can use a combination of broadcasted elementwise multiplication followed by a sum-reduction with np.tensordot or np.matmul.

Thus, the first step uses either einsum or explicit dimension expansion with broadcasting -

# Broadcasted elementwise multiplication
parte1 = np.einsum('ai,aj->aij', Ab, Ab)
parte1 = Ab[:, None, :] * Ab[:, :, None]

Then, the sum-reduction with tensordot or np.matmul -

Tb = np.tensordot(parte1, Ab, axes=((0,), (0,)))
Tb = np.matmul(parte1.T, Ab)  # or parte1.T @ Ab on Python 3.x
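Combining the two ways of forming `parte1` with the two reductions gives the four variants. A quick check that they all agree (a sketch on small, arbitrary sizes; the `matmul` form relies on `parte1` being symmetric in its last two axes, so transposing it only reorders equal entries):

```python
import numpy as np

m, d = 50, 6  # arbitrary small sizes
Ab = np.random.randn(m, d)

# Step 1: broadcasted elementwise multiplication, two equivalent forms
p_einsum = np.einsum('ai,aj->aij', Ab, Ab)
p_bcast = Ab[:, None, :] * Ab[:, :, None]
assert np.allclose(p_einsum, p_bcast)

# Step 2: sum-reduction over the sample axis a, two equivalent forms
T_tensordot = np.tensordot(p_einsum, Ab, axes=((0,), (0,)))
T_matmul = np.matmul(p_einsum.T, Ab)  # (j, i, a) @ (a, k); valid since p is symmetric in i, j
assert np.allclose(T_tensordot, T_matmul)
```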

Thus, the second approach has four variants in total (two ways to form parte1 times two ways to reduce).

Runtime tests

In [140]: d = 100
     ...: m = 1000
     ...: Ab = np.random.randn(m,d)

In [148]: %%timeit  # original faster method
     ...: d = 100
     ...: Tb = np.zeros((d,d,d))
     ...: for i,j,k in itertools.combinations_with_replacement(np.arange(100), 3):
     ...:     Abijk = np.sum(Ab[:, i] * Ab[:, j] * Ab[:, k])
     ...: 
     ...:     Tb[i, j, k] = Abijk
     ...:     Tb[i, k, j] = Abijk
     ...: 
     ...:     Tb[j, i, k] = Abijk
     ...:     Tb[j, k, i] = Abijk
     ...: 
     ...:     Tb[k, j, i] = Abijk
     ...:     Tb[k, i, j] = Abijk
1 loop, best of 3: 2.08 s per loop

In [141]: %timeit np.einsum('ai,aj,ak->ijk',Ab,Ab,Ab)
1 loop, best of 3: 3.08 s per loop

In [142]: %timeit np.tensordot(np.einsum('ai,aj->aij',Ab,Ab),Ab,axes=((0),(0)))
     ...: %timeit np.tensordot(Ab[:,None,:]*Ab[:,:,None],Ab,axes=((0),(0)))
     ...: %timeit np.matmul(np.einsum('ai,aj->ija',Ab,Ab), Ab)
     ...: %timeit np.matmul(Ab.T[None,:,:]*Ab.T[:,None,:], Ab)


10 loops, best of 3:  56.8 ms per loop
10 loops, best of 3:  59.2 ms per loop
 1 loop,  best of 3: 673   ms per loop
 1 loop,  best of 3: 670   ms per loop

The fastest variants seem to be the tensordot-based ones, giving a 35x+ speedup over the faster itertools-based method.
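One variant not timed in the answer: `np.einsum` accepts an `optimize` argument (available since NumPy 1.12) that lets it factor the three-operand contraction into pairwise, BLAS-friendly steps, which typically brings the single-call einsum close to the tensordot route. A sketch, assuming a recent NumPy:

```python
import numpy as np

m, d = 1000, 100
Ab = np.random.randn(m, d)

# Let einsum pick a pairwise contraction order instead of the default
# single-pass evaluation of the three-operand contraction
Tb_opt = np.einsum('ai,aj,ak->ijk', Ab, Ab, Ab, optimize=True)

# Reference: the tensordot-based variant from Approach #2
Tb_ref = np.tensordot(np.einsum('ai,aj->aij', Ab, Ab), Ab, axes=((0,), (0,)))
assert np.allclose(Tb_opt, Tb_ref)
```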