NumPy "vectorized" row-wise dot product runs slower than a for loop

Asked: 2017-05-13 23:46:49

Tags: python arrays numpy matrix vectorization

Given a matrix A of shape (n,k) and a vector s of size n, I want to compute a (k,k) matrix G as follows:

G += s[i] * A[i].T * A[i], for all i in {0, ..., n-1}

I tried implementing it with a for loop (Method 1) and in a vectorized way (Method 2), but the for-loop implementation is faster for large values of k (in particular, when k > 500).

I wrote the code as follows:

import numpy as np
k = 200
n = 50000
A = np.random.randint(0, 1000, (n,k)) # generates random data for the matrix A (n,k)
G1 = np.zeros((k,k)) # initialize G1 as a (k,k) matrix
s = np.random.randint(0, 1000, n) * 1.0 # initialize a random vector of size n

# METHOD 1: explicit loop, accumulating s[i] times the outer product of row i with itself
for i in xrange(n):
    G1 += s[i] * np.dot(np.array([A[i]]).T, np.array([A[i]]))

# METHOD 2: single np.dot call; A[:,np.newaxis].T is a 3D array of shape (k,1,n)
G2 = np.dot(A[:,np.newaxis].T, s[:,np.newaxis]*A)
G2 = np.squeeze(G2) # reduces dimension from (k,1,k) to (k,k)

The matrices G1 and G2 are identical (both equal the matrix G); the only difference is how they are computed. Is there a smarter, more efficient way to compute it?

Finally, these are the timings I got for random sizes of k and n:

Test #: 1
k,n: (866, 45761)
Method1: 337.457569838s
Method2: 386.290487051s
--------------------
Test #: 2
k,n: (690, 48011)
Method1: 152.999140978s
Method2: 226.080267191s
--------------------
Test #: 3
k,n: (390, 5317)
Method1: 5.28722500801s
Method2: 4.86999702454s
--------------------
Test #: 4
k,n: (222, 5009)
Method1: 1.73456382751s
Method2: 0.929286956787s
--------------------
Test #: 5
k,n: (915, 16561)
Method1: 101.782826185s
Method2: 159.167108059s
--------------------
Test #: 6
k,n: (130, 11283)
Method1: 1.53138184547s
Method2: 0.64450097084s
--------------------
Test #: 7
k,n: (57, 37863)
Method1: 1.44776391983s
Method2: 0.494270086288s
--------------------
Test #: 8
k,n: (110, 34599)
Method1: 3.51851701736s
Method2: 1.61688089371s

1 Answer:

Answer 0 (score: 5)

Two further improved versions would be -

(A.T*s).dot(A)
(A.T).dot(A*s[:,None])
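
To see why these match the loop (a minimal sanity check, with made-up small sizes): multiplying A.T by s broadcasts s across the columns of A.T, i.e. it scales row i of A by s[i] before the final matrix product -

import numpy as np

n, k = 6, 4                                     # tiny made-up sizes, for illustration only
A = np.random.rand(n, k)
s = np.random.rand(n)

G = np.zeros((k, k))                            # reference: the explicit loop
for i in range(n):
    G += s[i] * np.outer(A[i], A[i])

print(np.allclose(G, (A.T*s).dot(A)))           # True
print(np.allclose(G, (A.T).dot(A*s[:,None])))   # True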

The problem with method2:

With method2, we are creating A[:,np.newaxis].T, which has shape (k,1,n) and is a 3D array. I think that with 3D arrays, np.dot falls back to some sort of loop and isn't truly vectorized (the source code might show more there).
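
A quick way to see the shapes involved (a small illustrative check, with made-up sizes):

import numpy as np

n, k = 6, 4                          # tiny made-up sizes
A = np.random.rand(n, k)
s = np.random.rand(n)

B = A[:,np.newaxis].T                # 3D array of shape (k, 1, n)
print(B.shape)                       # (4, 1, 6)

C = np.dot(B, s[:,np.newaxis]*A)     # np.dot with a 3D first operand
print(C.shape)                       # (4, 1, 4) -- hence the squeeze down to (k, k)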

For such 3D tensor multiplications, it's better to use the tensor equivalent: np.tensordot. So, the improved version of method2 becomes:

G2 = np.tensordot(A[:,np.newaxis].T, s[:,np.newaxis]*A, axes=((2),(0)))
G2 = np.squeeze(G2)

Since we are sum-reducing only one axis from each input with np.tensordot, we don't really need tensordot here; a squeezed-in version using np.dot would suffice. That brings us back to method4.
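
Spelled out (a small sketch, not part of the original answer): squeezing the length-1 axis out of A[:,np.newaxis].T just recovers A.T, so the tensordot collapses to an ordinary 2D np.dot, which is exactly method4 -

import numpy as np

n, k = 6, 4                          # tiny made-up sizes
A = np.random.rand(n, k)
s = np.random.rand(n)

G_td = np.squeeze(np.tensordot(A[:,np.newaxis].T, s[:,np.newaxis]*A, axes=((2),(0))))
G4 = (A.T).dot(A*s[:,None])          # method4
print(np.allclose(G_td, G4))         # True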

Runtime test

Approaches -

def method1(A, s):
    G1 = np.zeros((k,k)) # initialize G1 as a (k,k) matrix
    for i in xrange(n):
        G1 += s[i] * np.dot(np.array([A[i]]).T, np.array([A[i]]))
    return G1

def method2(A, s):
    G2 = np.dot(A[:,np.newaxis].T, s[:,np.newaxis]*A)
    G2 = np.squeeze(G2) # reduces dimension from (k,1,k) to (k,k)
    return G2

def method3(A, s):
    return (A.T*s).dot(A)

def method4(A, s):
    return (A.T).dot(A*s[:,None])

def method2_improved(A, s):
    G2 = np.tensordot(A[:,np.newaxis].T, s[:,np.newaxis]*A, axes=((2),(0)))
    G2 = np.squeeze(G2)
    return G2

Timings and verification -

In [56]: k = 200
    ...: n = 5000
    ...: A = np.random.randint(0, 1000, (n,k))
    ...: s = np.random.randint(0, 1000, n) * 1.0
    ...: 

In [72]: print np.allclose(method1(A, s), method2(A, s))
    ...: print np.allclose(method1(A, s), method3(A, s))
    ...: print np.allclose(method1(A, s), method4(A, s))
    ...: print np.allclose(method1(A, s), method2_improved(A, s))
    ...: 
True
True
True
True

In [73]: %timeit method1(A, s)
    ...: %timeit method2(A, s)
    ...: %timeit method3(A, s)
    ...: %timeit method4(A, s)
    ...: %timeit method2_improved(A, s)
    ...: 
1 loops, best of 3: 1.12 s per loop
1 loops, best of 3: 693 ms per loop
100 loops, best of 3: 8.12 ms per loop
100 loops, best of 3: 8.17 ms per loop
100 loops, best of 3: 8.28 ms per loop