How do you align axes in products of multidimensional arrays, with and without summation?

时间:2016-10-10 15:00:20

标签: numpy scipy lapack intel-mkl numpy-einsum

What is the best way to perform array operations when some repeated indices are summed over while others are not? It seems I may have to use einsum for these operations, but a tensordot-style alternative with a way to mark dimensions that are aligned but not summed would be better.

Does anyone know of a fast numerical routine (perhaps in LAPACK?) that behaves like tensordot, except that certain axes can be aligned without being summed over?

==

Here is some example code showing the kind of array operation I need. The operation I want is performed by method_sum, method_einsum, and method_matmul. A similar operation that also sums over the matched j axis is performed by method2_einsum and method2_tensordot.

Comparing the timings, it seems tensordot should be able to beat einsum on the first problem. However, it has no way to align axes without summing over them.
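To make that limitation concrete, here is a small sketch (not from the original post) showing that tensordot necessarily contracts every pair of axes it is given, so aligning the shared j axis forces a sum over it:

```python
import numpy as np

a = np.ones((4, 3, 5))  # (I, J, L)
b = np.ones((3, 2, 5))  # (J, K, L)

# Pairing a's axes (1, 2) with b's axes (0, 2) contracts both j and l;
# there is no option to keep j as an aligned, un-summed axis.
c = np.tensordot(a, b, axes=[(1, 2), (0, 2)])
print(c.shape)  # (4, 2) -- the j axis has been summed away
```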

# scipy formerly re-exported the numpy namespace; these functions live in numpy
import numpy as sp

# Shapes of arrays
I = 200
J = 50
K = 200
L = 100

a = sp.ones((I, J, L))
b = sp.ones((J, K, L))


# The desired product has a sum over the l-axis

## Use broadcasting to multiply and sum over the last dimension
def method_sum(a, b):
    "Multiply arrays and sum over last dimension."   
    c = (a[:, :, None, :] * b[None, :, :, :]).sum(-1)
    return c

## Use einsum to multiply arrays and sum over the l-axis
def method_einsum(a, b):
    "Multiply arrays and sum over last dimension."
    c = sp.einsum('ijl,jkl->ijk', a, b)
    return c

## Use matmul to multiply arrays and sum over one of the axes
def method_matmul(a, b):
    "Multiply arrays using the new matmul operation."
    c = sp.matmul(a[:, :, None, None, :], 
                  b[None, :, :, :, None])[:, :, :, 0, 0]
    return c


# Compare einsum vs tensordot on summation over j and l

## Einsum takes about the same amount of time when j is not summed over
def method2_einsum(a, b):
    "Multiply arrays and sum over last dimension."
    c = sp.einsum('ijl,jkl->ik', a, b)
    return c

## Tensor dot can do this faster but it always sums over the aligned axes
def method2_tensordot(a, b):
    "Multiply and sum over all overlapping dimensions."
    c = sp.tensordot(a, b, axes=[(1, 2,), (0, 2,)])
    return c
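One further approach, not shown in the question (a sketch, assuming the aligned j axis is small relative to the others): loop over j and issue one BLAS matrix product per slice. The hypothetical method_dot_loop below reproduces the ijk result of method_einsum:

```python
import numpy as np

def method_dot_loop(a, b):
    "Align j without summing: one (I, L) @ (L, K) product per j slice."
    I, J, L = a.shape
    K = b.shape[1]
    c = np.empty((I, J, K))
    for j in range(J):
        # a[:, j, :] is (I, L) and b[j].T is (L, K), so each slice
        # is a single BLAS gemm call
        c[:, j, :] = a[:, j, :].dot(b[j].T)
    return c

a = np.random.rand(20, 5, 30)
b = np.random.rand(5, 10, 30)
assert np.allclose(method_dot_loop(a, b), np.einsum('ijl,jkl->ijk', a, b))
```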

Below are some timings of the various routines on my computer. Tensordot beats einsum on method2 because it uses multiple cores. I would like to achieve tensordot-like performance for the computation in which the J and L axes are aligned but only the L axis is summed over.

Time for method_sum:
1 loops, best of 3: 744 ms per loop

Time for method_einsum:
10 loops, best of 3: 95.1 ms per loop

Time for method_matmul:
10 loops, best of 3: 93.8 ms per loop

Time for method2_einsum:
10 loops, best of 3: 90.4 ms per loop

Time for method2_tensordot:
100 loops, best of 3: 10.9 ms per loop

1 answer:

Answer 0 (score: 1)

In [85]: I,J,K,L=2,3,4,5
In [86]: a=np.ones((I,J,L))
In [87]: b=np.ones((J,K,L))

In [88]: np.einsum('ijl,jkl->ijk',a,b).shape
Out[88]: (2, 3, 4)

With the new @ operator, I find that I can produce:

In [91]: (a[:,:,None,None,:]@b[None,:,:,:,None]).shape
Out[91]: (2, 3, 4, 1, 1)

In [93]: (a[:,:,None,None,:]@b[None,:,:,:,None])[...,0,0].shape
Out[93]: (2, 3, 4)

The shapes are right, but I haven't checked the values. Some of the None insertions line up the ijk axes, and the other two produce the regular dot behavior (last axis of the first operand against the second-to-last axis of the second).
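The values can be checked with a quick sketch (not in the original answer) comparing the @ construction against einsum on random data:

```python
import numpy as np

I, J, K, L = 2, 3, 4, 5
a = np.random.rand(I, J, L)
b = np.random.rand(J, K, L)

via_einsum = np.einsum('ijl,jkl->ijk', a, b)
# Batch dims (2,3,1) and (1,3,4) broadcast to (2,3,4); the trailing
# (1,5) @ (5,1) matrix product performs the sum over l
via_matmul = (a[:, :, None, None, :] @ b[None, :, :, :, None])[..., 0, 0]
assert np.allclose(via_einsum, via_matmul)
```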

With your dimensions, the times are roughly the same:

In [97]: np.einsum('ijl,jkl->ijk',a,b).shape
Out[97]: (200, 50, 200)
In [98]: (a[:,:,None,None,:]@b[None,:,:,:,None])[...,0,0].shape
Out[98]: (200, 50, 200)
In [99]: timeit np.einsum('ijl,jkl->ijk',a,b).shape
1 loop, best of 3: 2 s per loop
In [100]: timeit (a[:,:,None,None,:]@b[None,:,:,:,None])[...,0,0].shape
1 loop, best of 3: 2.06 s per loop