Different computation times in CUDA for the same float32 type

Date: 2017-05-13 22:15:16

Tags: numpy parallel-processing numba autojit

I compute a simple matrix multiplication with the following script:

import numpy as np
import math
from timeit import default_timer as timer
from numba import cuda
from numba import *
from numba import autojit

@autojit
def mult2(a,b):
    return a*b
@autojit
def mult_cpu(a,b,c):
    Ni=c.shape[0]
    Nj=c.shape[1]
    Nk=c.shape[2]
    for i in range(Ni):
        for j in range(Nj):
            for k in range(Nk):
                c[i,j,k]=mult2(a[i,k],b[j,k])

dimx=20
dimy=3072
dimz=50000

print "\ntest1"
A=np.ones((dimx,dimz),dtype=np.float32)
B=np.ones((dimy,dimz),dtype=np.float32)
C=np.ones((dimx,dimy,dimz),dtype=np.float32)
print A.shape,A.dtype
print B.shape,B.dtype
print C.shape,C.dtype
start=timer()
mult_cpu(A,B,C)
dt=timer()-start
print "Computation autojit done in %f s"%(dt)
print 'C[:3,1,1] = ',C[:3,1,1]
print 'C[-3:,1,1] = ',C[-3:,1,1]
del A
del B
del C
del start
del dt


print "\ntest2"
A=np.zeros((dimx,dimz),dtype=np.float32)
B=np.zeros((dimy,dimz),dtype=np.float32)
C=np.zeros((dimx,dimy,dimz),dtype=np.float32)
print A.shape,A.dtype
print B.shape,B.dtype
print C.shape,C.dtype
start=timer()
mult_cpu(A,B,C)
dt=timer()-start
print "Computation autojit done in %f s"%(dt)
print 'C[:3,1,1] = ',C[:3,1,1]
print 'C[-3:,1,1] = ',C[-3:,1,1]
del A
del B
del C
del start
del dt


print "\ntest3"
A=0.0001*np.random.randn(dimx,dimz).astype(np.float32)
B=0.0001*np.random.randn(dimy,dimz).astype(np.float32)
C=0.0001*np.random.randn(dimx,dimy,dimz).astype(np.float32)
print A.shape,A.dtype
print B.shape,B.dtype
print C.shape,C.dtype
start=timer()
mult_cpu(A,B,C)
dt=timer()-start
print "Computation autojit done in %f s"%(dt)
print 'C[:3,1,1] = ',C[:3,1,1]
print 'C[-3:,1,1] = ',C[-3:,1,1]

Each test is identical except for the initialization of A, B, and C. The output is:

test1
(20, 50000) float32
(3072, 50000) float32
(20, 3072, 50000) float32
Computation autojit done in 4.485923 s
C[:3,1,1] =  [ 1.  1.  1.]
C[-3:,1,1] =  [ 1.  1.  1.]

test2
(20, 50000) float32
(3072, 50000) float32
(20, 3072, 50000) float32
Computation autojit done in 7.031277 s
C[:3,1,1] =  [ 0.  0.  0.]
C[-3:,1,1] =  [ 0.  0.  0.]

test3
(20, 50000) float32
(3072, 50000) float32
(20, 3072, 50000) float32
Computation autojit done in 45.372899 s
C[:3,1,1] =  [ -3.09475023e-09   4.71271910e-09   2.36787634e-09]
C[-3:,1,1] =  [ -7.29189642e-09  -3.03451442e-09   1.95249439e-09]

So the multiplication is faster with np.ones initialization than with np.zeros, and with random initialization it is far slower still. How can this behavior be explained?

Without the @autojit optimization, the computation times are almost equal.
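As a side note, the triple loop above does not compute a true matrix multiplication: it fills C[i,j,k] = A[i,k] * B[j,k], which is an element-wise broadcast product. A minimal pure-NumPy sketch of the same computation, with small made-up dimensions for illustration:

```python
import numpy as np

dimx, dimy, dimz = 4, 5, 6  # small illustrative sizes, not the question's
A = np.random.randn(dimx, dimz).astype(np.float32)
B = np.random.randn(dimy, dimz).astype(np.float32)

# C[i, j, k] = A[i, k] * B[j, k], broadcasting over the middle axis
C = A[:, None, :] * B[None, :, :]
print(C.shape)  # (4, 5, 6)
```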

1 Answer:

Answer 0 (score: 0)

The autojit compiler realizes that you are multiplying nothing but zeros, removes the multiplication entirely, and simply returns a matrix of all zeros. With the ones, it skips the actual multiply part and only performs the summation part of the matrix multiplication, which is slightly slower than just returning all zeros. The last test finally forces the compiler to do the real matrix multiplication, because it cannot assume the answer in advance.

This is a case of the compiler being smarter than you expected.