Speeding up Kronecker products with numpy

Date: 2019-05-09 21:11:59

Tags: python numpy linear-algebra

So, I am trying to compute the Kronecker product of two matrices of arbitrary dimensions. (I am only using square matrices of the same dimension for the purpose of the example.)
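
As a quick illustration of the layout np.kron produces (the reshape tricks below rely on exactly this layout): for A of shape (i, j) and B of shape (k, l), np.kron(A, B) has shape (i*k, j*l) with entry [n*k + m, p*l + q] equal to A[n, p] * B[m, q]. A small check, with arbitrarily chosen shapes and indices:

import numpy as np

# shapes and indices below are only illustrative
A = np.random.random((2, 3))
B = np.random.random((4, 5))
K = np.kron(A, B)
n, p, m, q = 1, 2, 3, 4
assert np.isclose(K[n*4 + m, p*5 + q], A[n, p] * B[m, q])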

Initially, I tried using kron:

import numpy as np
import time

a = np.random.random((60,60))
b = np.random.random((60,60))

start = time.time()
a = np.kron(a,b)
end = time.time()

Output: 0.160096406936645

To try to speed this up, I used tensordot:

a = np.random.random((60,60))
b = np.random.random((60,60))

start = time.time()
a = np.tensordot(a,b,axes=0)
a = np.transpose(a,(0,2,1,3))
a = np.reshape(a,(3600,3600))
end = time.time()

Output: 0.11808371543884277

After searching around online, I found (or at least this is my understanding) that NumPy makes an extra copy when it has to reshape a transposed tensor.
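
One quick way to see this (the shapes here are only illustrative): reshaping a contiguous array returns a view, while reshaping it after a transpose no longer shares memory with the original buffer.

import numpy as np

t = np.random.random((6, 6, 6, 6))
plain = t.reshape(36, 36)                          # reshape of a contiguous array: a view
print(np.shares_memory(t, plain))                  # True
swapped = t.transpose(0, 2, 1, 3).reshape(36, 36)  # reshape after a transpose: forces a copy
print(np.shares_memory(t, swapped))                # False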

So I then tried the following (this code obviously does not give the Kronecker product of a and b, I was just testing):

a = np.random.random((60,60))
b = np.random.random((60,60))

start = time.time()
a = np.tensordot(a,b,axes=0)
a = np.reshape(a,(3600,3600))
end = time.time()

Output: 0.052041053771972656

My question is: how can I compute the Kronecker product without running into the problem associated with the transpose?

I am just looking for speed, so the solution does not have to use tensordot.

EDIT

I just found, in the following Stack post: speeding up numpy kronecker products, that there is another way to do this:

a = np.random.random((60,60))
b = np.random.random((60,60))

c = a

start = time.time()
a = a[:,np.newaxis,:,np.newaxis]
a = a*b[np.newaxis,:,np.newaxis,:]
a.shape = (3600,3600)
end = time.time()

test = np.kron(c,b)
print(np.array_equal(a,test))
print(end-start)


Output: True
0.05503702163696289

I am still interested in whether the computation can be sped up even further.

2 answers:

Answer 0: (score: 2)

einsum seems to work:

>>> a = np.random.random((60,60))
>>> b = np.random.random((60,60))
>>> ab = np.kron(a,b)
>>> abe = np.einsum('ik,jl', a, b).reshape(3600,3600)
>>> (abe==ab).all()
True
>>> timeit(lambda: np.kron(a, b), number=10)
1.0697475590277463
>>> timeit(lambda: np.einsum('ik,jl', a, b).reshape(3600,3600), number=10)
0.42500176999601535

Simple broadcasting is even faster:

>>> abb = (a[:, None, :, None]*b[None, :, None, :]).reshape(3600,3600)
>>> (abb==ab).all()
True
>>> timeit(lambda:  (a[:, None, :, None]*b[None, :, None, :]).reshape(3600,3600), number=10)
0.28011218502069823

UPDATE: Using blas and cython, we can get another modest (30%) speedup. Decide for yourself whether it is worth the hassle.

[setup.py]

from distutils.core import setup
from Cython.Build import cythonize

setup(name='kronecker',
      ext_modules=cythonize("cythkrn.pyx"))

[cythkrn.pyx]

import cython
cimport scipy.linalg.cython_blas as blas
import numpy as np

@cython.boundscheck(False)
@cython.wraparound(False)
def kron(double[:, ::1] a, double[:, ::1] b):
    cdef int i = a.shape[0]
    cdef int j = a.shape[1]
    cdef int k = b.shape[0]
    cdef int l = b.shape[1]
    cdef int onei = 1
    cdef double oned = 1
    cdef int m, n
    result = np.zeros((i*k, j*l), float)
    cdef double[:, ::1] result_v = result
    for n in range(i):
        for m in range(k):
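            # each dger call fills row n*k + m of the result with the outer
            # product of a[n, :] and b[m, :], laid out as
            # [a[n, 0]*b[m, :], a[n, 1]*b[m, :], ...]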
            blas.dger(&l, &j, &oned, &b[m, 0], &onei, &a[n, 0], &onei, &result_v[m+k*n, 0], &l)
    return result

To build, first run cython cythkrn.pyx and then run python3 setup.py build.
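(If the import cythkrn in the next snippet cannot find the module, building in place, e.g. with python3 setup.py build_ext --inplace, puts the compiled extension next to the source file; whether that extra step is needed depends on your setup.)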

>>> from timeit import timeit
>>> import cythkrn
>>> import numpy as np
>>> 
>>> a = np.random.random((60,60))
>>> b = np.random.random((60,60))
>>>
>>> np.all(cythkrn.kron(a, b)==np.kron(a, b))
True
>>> 
>>> timeit(lambda: cythkrn.kron(a, b), number=10)
0.18925874299020506

Answer 1: (score: 1)

Speeding up memory-bound computations

  • Avoiding it completely is possible (e.g. the kron_and_sum example)
  • Blocked execution, combined with other computations (a sketch follows this list)
  • Maybe float32 instead of float64 is also sufficient
  • If this computation sits inside a loop, allocate the memory only once
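
As a sketch of what "blocked execution combined with other computations" can look like (the function name, the block size of one row of a, and the matrix-vector use case are just illustrative assumptions): each row block of the Kronecker product is built and consumed immediately, so the full (3600, 3600) matrix is never allocated.

import numpy as np

def kron_matvec_blocked(a, b, x):
    # Computes np.kron(a, b) @ x one row block at a time,
    # without ever materializing the full Kronecker product.
    i, j = a.shape
    k, l = b.shape
    out = np.empty(i * k)
    for n in range(i):
        # rows n*k ... (n+1)*k - 1 of kron(a, b), as a (k, j*l) block
        block = np.einsum('p,mq->mpq', a[n], b).reshape(k, j * l)
        out[n * k:(n + 1) * k] = block @ x
    return out

a = np.random.random((60, 60))
b = np.random.random((60, 60))
x = np.random.random(3600)
print(np.allclose(kron_matvec_blocked(a, b, x), np.kron(a, b) @ x))  # True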

The timings of this code and of @Paul Panzer's implementation are exactly the same, but I see the same weird behaviour with both implementations: with preallocated memory there is absolutely no speedup if the computation is parallelized (which is expected), but without preallocated memory there is a quite significant speedup.

Code

import numba as nb
import numpy as np


@nb.njit(fastmath=True,parallel=True)
def kron(A,B):
    out=np.empty((A.shape[0],B.shape[0],A.shape[1],B.shape[1]),dtype=A.dtype)
    for i in nb.prange(A.shape[0]):
        for j in range(B.shape[0]):
            for k in range(A.shape[1]):
                for l in range(B.shape[1]):
                    out[i,j,k,l]=A[i,k]*B[j,l]
    return out

@nb.njit(fastmath=True,parallel=False)
def kron_preallocated(A,B,out):
    for i in nb.prange(A.shape[0]):
        for j in range(B.shape[0]):
            for k in range(A.shape[1]):
                for l in range(B.shape[1]):
                    out[i,j,k,l]=A[i,k]*B[j,l]
    return out

@nb.njit(fastmath=True,parallel=True)
def kron_and_sum(A,B):
    out=0.
    for i in nb.prange(A.shape[0]):
        TMP=np.float32(0.)
        for j in range(B.shape[0]):
            for k in range(A.shape[1]):
                for l in range(B.shape[1]):
                    out+=A[i,k]*B[j,l]
    return out

Timings

#Create some data
a_float64 = np.random.random((60,60))
b_float64 = np.random.random((60,60))
a_float32 = a_float64.astype(np.float32)
b_float32 = b_float64.astype(np.float32)

out_float64=np.empty((a_float64.shape[0],b_float64.shape[0],a_float64.shape[1],b_float64.shape[1]),dtype=np.float64)
out_float32=np.empty((a_float32.shape[0],b_float32.shape[0],a_float32.shape[1],b_float32.shape[1]),dtype=np.float32)


#Reference
%timeit np.kron(a_float64,b_float64)
147 ms ± 1.22 ms per loop (mean ± std. dev. of 7 runs, 10 loops each)

#If you have to allocate memory for every calculation (float64)
%timeit B=kron(a_float64,b_float64).reshape(3600,3600)
17.6 ms ± 244 µs per loop (mean ± std. dev. of 7 runs, 1 loop each)

#If you don't have to allocate memory for every calculation (float64)
%timeit B=kron_preallocated(a_float64,b_float64,out_float64).reshape(3600,3600)
8.08 ms ± 269 µs per loop (mean ± std. dev. of 7 runs, 1 loop each)

#If you have to allocate memory for every calculation (float32)
%timeit B=kron(a_float32,b_float32).reshape(3600,3600)
9.27 ms ± 185 µs per loop (mean ± std. dev. of 7 runs, 1 loop each)

#If you don't have to allocate memory for every calculation (float32)
%timeit B=kron_preallocated(a_float32,b_float32,out_float32).reshape(3600,3600)
3.95 ms ± 155 µs per loop (mean ± std. dev. of 7 runs, 1 loop each)

#Example for a joined operation (sum of kroncker product)
#which isn't memory bottlenecked
%timeit B=kron_and_sum(a_float64,b_float64)
881 µs ± 104 µs per loop (mean ± std. dev. of 7 runs, 1 loop each)