For the past few days I have been trying to understand why NumbaPro (Accelerate from Continuum Analytics, Inc.; I am running the 30-day trial) gives no speedup on my MacBook Pro (Intel Core i7, 2.6 GHz, 16 GB RAM, with an NVIDIA GeForce GT 650M, 1 GB, on the PCI bus).
I took an example of (NxM)x(MxN) matrix multiplication from code with which Continuum Analytics, Inc. claims to accelerate computation via CUDA, and compared the timings of CUDA.JIT against numpy. The idea is to run, say, 1e4 iterations, with matrix B randomized on every iteration. Below is the code I used, followed by the timings I obtained. Is there any solution? Thanks!
from numbapro import cuda
from numba import void, float32
import numpy as np
from timeit import default_timer as timer

m = 1000
n = 1000
A = np.array(np.random.random((n, m)), dtype=np.float32)
# C must be float32 to match the kernel signature below
C = np.zeros((n, n), dtype=np.float32)
iterations = 10000

# Time numpy: re-randomize B and multiply on every iteration
start = timer()
for i in range(iterations):
    B = np.array(np.random.random((m, n)), dtype=np.float32)
    X = np.dot(A, B)
numpy_time = timer() - start
@cuda.jit(void(float32[:, :], float32[:, :], float32[:, :]))
def cu_square_matrix_mul(A, B, C):
    # Global thread coordinates
    tx = cuda.threadIdx.x
    ty = cuda.threadIdx.y
    bx = cuda.blockIdx.x
    by = cuda.blockIdx.y
    bw = cuda.blockDim.x
    bh = cuda.blockDim.y
    x = tx + bx * bw
    y = ty + by * bh

    n = C.shape[0]
    if x >= n or y >= n:
        return

    # Each thread computes one element of C
    cs = 0.0
    for i in range(n):
        cs += A[y, i] * B[i, x]
    C[y, x] = cs
# Launch configuration
# NOTE: this spans x = 0..2559 but only y = 0..8, so just 9 rows
# of the 1000x1000 result matrix are actually computed
blockdim = 256, 3
griddim = 10, 3

stream = cuda.stream()
dA = cuda.to_device(A, stream)
dC = cuda.to_device(C, stream)

# Time the CUDA version: same per-iteration work, plus host<->device copies
start = timer()
for i in range(iterations):
    B = np.array(np.random.random((m, n)), dtype=np.float32)
    dB = cuda.to_device(B, stream)
    cu_square_matrix_mul[griddim, blockdim, stream](dA, dB, dC)
    dC.to_host()
stream.synchronize()
cuda_time = timer() - start

print()
print("Numpy took %f seconds" % numpy_time)
print("CUDA JIT took %f seconds, %.5fx speedup" % (cuda_time, numpy_time / cuda_time))
Results:
Vendor: Continuum Analytics, Inc.
Package: mkl
Message: trial mode expires in 30 days
Vendor: Continuum Analytics, Inc.
Package: mkl
Message: trial mode expires in 30 days
Vendor: Continuum Analytics, Inc.
Package: numbapro
Message: trial mode expires in 30 days
Numpy took 378.328881 seconds
CUDA JIT took 342.723757 seconds, 1.10389x speedup
Answer 0 (score: 3)
This is a completely naive matrix multiplication routine on the GPU, whereas the numpy routine is actually a library call:
X=np.dot(A,B)
which is probably highly optimized. That the GPU comes out faster at all is impressive.
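Given the MKL trial banners in the question's output, that numpy build dispatches np.dot to Intel MKL; you can confirm which BLAS backend numpy is linked against with:
import numpy as np
np.__config__.show()  # lists the BLAS/LAPACK libraries numpy was built against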
"解决方案"将make a call to CUBLAS用于矩阵多重复制,而不是编写自己的内核。