Precision of PyCUDA matrix multiplication code

Time: 2014-01-15 05:59:51

Tags: python cuda pycuda

I am trying to learn CUDA and am using PyCUDA to write a simple matrix multiplication code. For two randomly generated 4x4 matrices I get the following results:

Cuda:
[[ -5170.86181641 -21146.49609375  20690.02929688 -35413.9296875 ]
 [-18998.5         -3199.53271484  13364.62890625   7141.36816406]
 [ 31197.43164062  21095.02734375   1750.64453125  11304.63574219]
 [  -896.64978027  18424.33007812 -17135.00390625   7418.28417969]]

Python:
[[ -5170.86035156 -21146.49609375  20690.02929688 -35413.9296875 ]
 [-18998.5         -3199.53271484  13364.62695312   7141.36816406]
 [ 31197.43164062  21095.02929688   1750.64404297  11304.63574219]
 [  -896.64941406  18424.33007812 -17135.00390625   7418.28417969]]

Cuda-Python:
[[-0.00146484  0.          0.          0.        ]
 [ 0.          0.          0.00195312  0.        ]
 [ 0.         -0.00195312  0.00048828  0.        ]
 [-0.00036621  0.          0.          0.        ]]

The error is on the order of 1e-3 and grows as I increase the size of the matrices. I am not sure whether this is a bug. My question is: is an error this "large" normal for float32, or am I doing something wrong?

Here is the source code:

matmul.py

import numpy as np
import time
# import pycuda stuff
import pycuda.driver as cuda
import pycuda.autoinit
from pycuda.compiler import SourceModule

BLOCK_SIZE = 16

n = 4
ni = np.int32(n)

# matrix A 
a = np.random.randn(n, n)*100
a = a.astype(np.float32)

# matrix B
b = np.random.randn(n, n)*100
b = b.astype(np.float32)

# matrix C (result)
c = np.empty([n, n])
c = c.astype(np.float32)

# allocate memory on device
a_gpu = cuda.mem_alloc(a.nbytes)
b_gpu = cuda.mem_alloc(b.nbytes)
c_gpu = cuda.mem_alloc(c.nbytes)

# copy matrix to memory
cuda.memcpy_htod(a_gpu, a)
cuda.memcpy_htod(b_gpu, b)

# compile kernel
mod = SourceModule(open("kernels.cu", "r").read())

# get function
matmul = mod.get_function("matmul");


# set grid size
if n%BLOCK_SIZE != 0:
    grid=(n/BLOCK_SIZE+1,n/BLOCK_SIZE+1,1)
else:
    grid=(n/BLOCK_SIZE,n/BLOCK_SIZE,1)

# call gpu function
start = time.time()
matmul(ni, a_gpu, b_gpu, c_gpu, block=(BLOCK_SIZE,BLOCK_SIZE,1), grid=grid);
end = time.time()
print "Time: %.5f s"%(end-start)

# copy back the result
cuda.memcpy_dtoh(c, c_gpu)

print np.linalg.norm(c - np.dot(a,b))
print c
print np.dot(a,b)
print c - np.dot(a,b)

kernels.cu

__global__ void matmul(int n, const float *A, const float *B, float *C){

  int tx = threadIdx.x;
  int ty = threadIdx.y;

  int bx = blockIdx.x;
  int by = blockIdx.y;

  int row = by*blockDim.y + ty;
  int col = bx*blockDim.x + tx;

  if(row < n && col < n){
    float val = 0.0;
    for(int i=0; i<n; ++i){
      val += A[row*n + i]*B[n*i + col];
    }
    C[row*n + col] = val;
  }
}

1 Answer:

Answer 0 (score: 3)

Just to add to what Warren said: I don't think there is any bug here. You are comparing floating-point results generated by two different machines (a CPU and a GPU). They are not guaranteed to be bitwise identical for operations at the level you are considering, partly because the order of operations on the GPU is not necessarily the same as the order of operations on the CPU. As you increase the size of the matrices, you increase the number of values being summed, and the absolute error grows because you are adding a bunch of small rounding errors together.
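
To make the order-of-operations point concrete, here is a small NumPy-only sketch (illustrative, not the asker's code): accumulating the same float32 products in a different order can already produce differences of the magnitude seen above, and the differences grow with the matrix size.

import numpy as np

np.random.seed(0)

for n in (4, 64):
    a = (np.random.randn(n, n) * 100).astype(np.float32)
    b = (np.random.randn(n, n) * 100).astype(np.float32)

    # float32 product from NumPy's BLAS routine (one particular summation order).
    c_blas = np.dot(a, b)

    # The same product accumulated naively one term at a time in float32,
    # mimicking the simple kernel in the question.
    c_naive = np.zeros((n, n), dtype=np.float32)
    for i in range(n):
        for j in range(n):
            s = np.float32(0.0)
            for k in range(n):
                s += a[i, k] * b[k, j]
            c_naive[i, j] = s

    # Double-precision reference, to see how far each float32 result drifts.
    ref = np.dot(a.astype(np.float64), b.astype(np.float64))

    print("n=%d  max|blas-naive|=%g  max|naive-ref|=%g"
          % (n,
             np.abs(c_blas - c_naive).max(),   # two float32 orders vs each other
             np.abs(c_naive - ref).max()))     # naive float32 vs float64 reference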

In general, these kinds of considerations should always be taken into account when comparing floating-point results. Bitwise-identical results can rarely be expected from two different computations, and even something as simple as changing the order of operations makes it a different computation as far as floating-point arithmetic is concerned. You may want to read this paper, especially section 2.2.
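
Following that advice, here is a minimal sketch of how such results could be compared in practice (the simulated GPU result and the tolerances are illustrative assumptions, not part of the original code): look at the relative rather than the absolute error, and use np.allclose with tolerances appropriate for float32 instead of expecting bitwise equality.

import numpy as np

np.random.seed(0)
n = 4
a = (np.random.randn(n, n) * 100).astype(np.float32)
b = (np.random.randn(n, n) * 100).astype(np.float32)

cpu = np.dot(a, b)   # float32 CPU product, as in matmul.py

# Stand-in for the array copied back from the GPU: the double-precision
# product rounded to float32, which differs from `cpu` only by rounding.
gpu_like = np.dot(a.astype(np.float64), b.astype(np.float64)).astype(np.float32)

# Relative error is the meaningful quantity for values of magnitude ~1e4.
rel_err = np.abs(gpu_like - cpu) / np.maximum(np.abs(cpu), np.float32(1e-6))
print("max relative error: %g" % rel_err.max())

# Tolerance-based check instead of exact comparison (tolerances illustrative).
print("match to float32 precision: %s"
      % np.allclose(gpu_like, cpu, rtol=1e-4, atol=1e-2))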