Does pycuda's gpuarray.dot() function work correctly?

Date: 2019-07-03 18:11:48

Tags: python numpy pycuda

Pycuda's gpuarray.dot() behaves differently from numpy.dot(). Is this intentional?

For example, the code below runs numpy.dot() first and then gpuarray.dot(). The former returns a 5x5 array, while the latter returns a single number.

import numpy as np
import pycuda.autoinit
import pycuda.gpuarray as gpuarray
np.random.seed(1)

print ("\nNUMPY: result of np.dot - OK")
a = np.array(2 * np.random.random((5, 5)) - 1)
b = np.array(2 * np.random.random((5, 5)) - 1)
a_b_dot = np.dot(a, b)
print (type(a_b_dot), a_b_dot.shape)
print (a_b_dot)

print ("\nPYCUDA: result of gpuarray.dot - NOT OK")
a_gpu = gpuarray.to_gpu(a)
b_gpu = gpuarray.to_gpu(b)
a_b_dot = gpuarray.dot(a_gpu, b_gpu)
print (type(a_b_dot), a_b_dot.shape)
print (a_b_dot)

The output is:

NUMPY: result of np.dot - OK
<class 'numpy.ndarray'> (5, 5)
[[-0.4289689  -1.07826831  0.35264673  1.17316284  0.37989478]
 [-0.23539466  0.62140658  0.02890465  0.64194572 -0.90554719]
 [ 0.6308665  -0.5418927   0.15072667  1.53949101 -0.17648109]
 [-0.28165967 -1.06345895  0.17784186 -0.50902276  1.27061422]
 [ 0.15769648  0.01993701 -0.42621895 -0.07254009 -0.23463897]]

PYCUDA: result of gpuarray.dot - NOT OK
<class 'pycuda.gpuarray.GPUArray'> ()
-0.3611777016515303

1 answer:

Answer 0: (score: 1)

I'm fairly sure pycuda behaves this way because a true matrix dot product would require it either to depend on cusparse/cublas or to reimplement that functionality itself. The simplest way to extend dot() to arbitrary arrays is to apply the dot product over the entire flattened object, and to let end users find their own matrix-multiplication library if they need something more advanced.
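This is easy to check: for same-shaped inputs, gpuarray.dot reduces the elementwise product to a single scalar, which equals the dot product of the flattened host arrays. A minimal sketch, reusing the seeded a and b from the question:

import numpy as np
import pycuda.autoinit
import pycuda.gpuarray as gpuarray
np.random.seed(1)

a = np.array(2 * np.random.random((5, 5)) - 1)
b = np.array(2 * np.random.random((5, 5)) - 1)
a_gpu = gpuarray.to_gpu(a)
b_gpu = gpuarray.to_gpu(b)

# gpuarray.dot reduces the elementwise product to a single scalar ...
gpu_scalar = gpuarray.dot(a_gpu, b_gpu).get()
# ... which matches the dot product of the flattened host arrays.
print(np.allclose(gpu_scalar, np.dot(a.ravel(), b.ravel())))  # True
print(np.allclose(gpu_scalar, np.sum(a * b)))                 # True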

If what you actually want is the matrix product, just use matrix multiplication; the following example demonstrates that np.dot and np.matmul agree for 2-d arrays:

import numpy as np

print("\nNUMPY: result of np.dot - OK")
a = np.array(2 * np.random.random((5, 5)) - 1)
b = np.array(2 * np.random.random((5, 5)) - 1)
a_b_dot = np.dot(a, b)
a_mul_b = np.matmul(a, b)
print(type(a_b_dot), a_b_dot.shape)
print(a_b_dot)

print(type(a_mul_b), a_mul_b.shape)
print(a_mul_b)

NUMPY: result of np.dot - OK
<class 'numpy.ndarray'> (5, 5)
[[-0.12441477 -0.28175903  0.36632673  0.35687491 -0.25773564]
 [-0.57845471 -0.4097741   0.3505651  -0.23822489  1.17375904]
 [-0.19920533 -0.43918224  0.62438656  0.6326451  -0.27798801]
 [ 0.67128494  0.44472894 -0.57700879 -0.57246653 -0.0336262 ]
 [ 0.49149948 -0.65774616  1.09320886  0.76179777 -0.76590202]]
<class 'numpy.ndarray'> (5, 5)
[[-0.12441477 -0.28175903  0.36632673  0.35687491 -0.25773564]
 [-0.57845471 -0.4097741   0.3505651  -0.23822489  1.17375904]
 [-0.19920533 -0.43918224  0.62438656  0.6326451  -0.27798801]
 [ 0.67128494  0.44472894 -0.57700879 -0.57246653 -0.0336262 ]
 [ 0.49149948 -0.65774616  1.09320886  0.76179777 -0.76590202]]

To do a real matrix multiplication on the GPU, you will either need to (A) implement your own kernel, or (B) use scikit-cuda (which itself depends on pycuda and interoperates with it). A sketch of option A follows below.
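For option A, here is a minimal sketch of a hand-written kernel compiled with pycuda's SourceModule. It is a deliberately naive, untiled multiply for square double-precision matrices, meant only to show what "implement your own" involves; a serious implementation would use shared-memory tiling or simply call cuBLAS:

import numpy as np
import pycuda.autoinit
import pycuda.gpuarray as gpuarray
from pycuda.compiler import SourceModule

# Naive O(n^3) kernel: one thread per output element, no tiling.
mod = SourceModule("""
__global__ void matmul(const double *a, const double *b, double *c, int n)
{
    int row = blockIdx.y * blockDim.y + threadIdx.y;
    int col = blockIdx.x * blockDim.x + threadIdx.x;
    if (row < n && col < n) {
        double acc = 0.0;
        for (int k = 0; k < n; ++k)
            acc += a[row * n + k] * b[k * n + col];
        c[row * n + col] = acc;
    }
}
""")
matmul = mod.get_function("matmul")

n = 5
a = np.random.rand(n, n)
b = np.random.rand(n, n)
a_gpu = gpuarray.to_gpu(a)
b_gpu = gpuarray.to_gpu(b)
c_gpu = gpuarray.empty((n, n), np.float64)

# One 16x16 thread block per tile of the output matrix.
block = (16, 16, 1)
grid = ((n + block[0] - 1) // block[0], (n + block[1] - 1) // block[1], 1)
matmul(a_gpu.gpudata, b_gpu.gpudata, c_gpu.gpudata, np.int32(n),
       block=block, grid=grid)

print(np.allclose(np.dot(a, b), c_gpu.get()))  # True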

In scikit-cuda this is nearly a drop-in replacement (the following is taken straight from the scikit-cuda docs):

>>> import pycuda.autoinit
>>> import pycuda.gpuarray as gpuarray
>>> import numpy as np
>>> import skcuda.linalg as linalg
>>> import skcuda.misc as misc
>>> linalg.init()
>>> a = np.asarray(np.random.rand(4, 2), np.float32)
>>> b = np.asarray(np.random.rand(2, 2), np.float32)
>>> a_gpu = gpuarray.to_gpu(a)
>>> b_gpu = gpuarray.to_gpu(b)
>>> c_gpu = linalg.dot(a_gpu, b_gpu)
>>> np.allclose(np.dot(a, b), c_gpu.get())
True
>>> d = np.asarray(np.random.rand(5), np.float32)
>>> e = np.asarray(np.random.rand(5), np.float32)
>>> d_gpu = gpuarray.to_gpu(d)
>>> e_gpu = gpuarray.to_gpu(e)
>>> f = linalg.dot(d_gpu, e_gpu)
>>> np.allclose(np.dot(d, e), f)
True

When you use scikit-cuda, you are going through its CUDA DLL backend, which wraps the libraries with ctypes and the like, and you will notice that its multiplication primitives are much lower level than numpy's (they are mostly limited to 2 dimensions). If you really do need matrix multiplication over n-d arrays, the primitives stay 2-d, but you can batch the work yourself, either with the backend's batched functions or by looping over 2-d slices the way numpy's matmul does, as sketched below.
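A minimal sketch of the slicing approach, assuming 3-d inputs whose leading axis is the batch dimension (skcuda.cublas also exposes batched-gemm bindings, but their setup is considerably more involved):

import numpy as np
import pycuda.autoinit
import pycuda.gpuarray as gpuarray
import skcuda.linalg as linalg
linalg.init()

# Three stacked 4x4 matrices; np.matmul multiplies them pairwise.
lhs = np.asarray(np.random.rand(3, 4, 4), np.float32)
rhs = np.asarray(np.random.rand(3, 4, 4), np.float32)
out = np.empty_like(lhs)

# linalg.dot is a 2-d primitive, so loop over the batch axis.
for i in range(lhs.shape[0]):
    a_gpu = gpuarray.to_gpu(lhs[i])
    b_gpu = gpuarray.to_gpu(rhs[i])
    out[i] = linalg.dot(a_gpu, b_gpu).get()

print(np.allclose(np.matmul(lhs, rhs), out, atol=1e-5))  # True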