PyCUDA with curandState*

Asked: 2019-06-23 00:39:31

Tags: cuda pycuda curand

I am studying the spread of an invasive species and am trying to generate random numbers inside a PyCUDA kernel using the XORWOW random number generator. The matrices I need to be able to use as input in my research are large (up to 8,000 x 8,000).

The error seems to occur inside get_random_number when indexing the XORWOW generator's curandState*. The code executes without error on smaller matrices and produces correct results. I am running the code on two NVIDIA Tesla K20X GPUs.

Kernel code and setup:

import numpy as np
import pycuda.autoinit
import pycuda.gpuarray as gpuarray
import pycuda.curandom as curandom
from pycuda.compiler import SourceModule

kernel_code = '''
    #include <curand_kernel.h>
    #include <math.h>

    extern "C" {

    __device__ float get_random_number(curandState* global_state, int thread_id) {

        curandState local_state = global_state[thread_id];
        float num = curand_uniform(&local_state);
        global_state[thread_id] = local_state;
        return num;
    }

    __global__ void survival_of_the_fittest(float* grid_a, float* grid_b, curandState* global_state, int grid_size, float* survival_probabilities) {

        int x = threadIdx.x + blockIdx.x * blockDim.x;             // column index of cell
        int y = threadIdx.y + blockIdx.y * blockDim.y;             // row index of cell

        // make sure this cell is within bounds of grid
        if (x < grid_size && y < grid_size) {

            int thread_id = y * grid_size + x;                      // thread index
            grid_b[thread_id] = grid_a[thread_id];                  // copy current cell
            float num;

            // ignore cell if it is not already populated
            if (grid_a[thread_id] > 0.0) {

                num = get_random_number(global_state, thread_id);

                // agents in this cell die
                if (num < survival_probabilities[thread_id]) {
                    grid_b[thread_id] = 0.0;                        // cell dies
                    //printf("Cell (%d,%d) died (probability of death was %f)\\n", x, y, survival_probabilities[thread_id]);
                }
            }
        }
    }
    }
'''

mod = SourceModule(kernel_code, no_extern_c = True)
survival = mod.get_function('survival_of_the_fittest')

Data setup:

matrix_size = 2000
block_dims = 32
grid_dims = (matrix_size + block_dims - 1) // block_dims

grid_a = gpuarray.to_gpu(np.ones((matrix_size,matrix_size)).astype(np.float32))
grid_b = gpuarray.to_gpu(np.zeros((matrix_size,matrix_size)).astype(np.float32))
generator = curandom.XORWOWRandomNumberGenerator()
grid_size = np.int32(matrix_size)
survival_probabilities = gpuarray.to_gpu(np.random.uniform(0,1,(matrix_size,matrix_size)).astype(np.float32))

Kernel invocation:

survival(grid_a, grid_b, generator.state, grid_size, survival_probabilities, 
    grid = (grid_dims, grid_dims), block = (block_dims, block_dims, 1))

I expect to be able to generate random numbers in the range (0, 1] for matrices up to (8,000 x 8,000), but executing my code on large matrices produces an illegal memory access error:

pycuda._driver.LogicError: cuMemcpyDtoH failed: an illegal memory access was encountered
PyCUDA WARNING: a clean-up operation failed (dead context maybe?)
cuMemFree failed: an illegal memory access was encountered

Am I indexing the curandState* incorrectly in get_random_number? If not, what else could be causing this error?

1 Answer

Answer 0 (score: 2)

The problem here is the disconnect between this code, which determines how much state the PyCUDA curandom interface allocates internally, and the following code from your post:

matrix_size = 2000
block_dims = 32
grid_dims = (matrix_size + block_dims - 1) // block_dims

You seem to be assuming that PyCUDA will somehow magically allocate enough states for whatever block and grid dimensions you choose in your code. That is clearly not possible, particularly at large grid sizes. You either need to

  • modify your code to use the same block and grid sizes that the curandom module uses internally for whichever generator you have chosen, or
  • allocate and manage your own state scratch space, so that enough states are allocated to service the block and grid sizes you have selected
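To see the scale of the mismatch, here is a back-of-envelope check in plain Python (no GPU required). It mirrors the grid arithmetic from the question; the state count follows from the one-state-per-thread indexing in get_random_number, where each in-bounds thread dereferences global_state[y * grid_size + x]:

```python
def states_required(matrix_size, block_dims=32):
    """Count the curandState slots the kernel dereferences.

    One state per in-bounds thread, i.e. one per grid cell;
    out-of-bounds threads are masked off by the bounds check.
    Returns (states_dereferenced, threads_launched).
    """
    grid_dims = (matrix_size + block_dims - 1) // block_dims
    threads_launched = (grid_dims * block_dims) ** 2
    return matrix_size ** 2, threads_launched

print(states_required(2000))  # (4000000, 4064256)
print(states_required(8000))  # (64000000, 64000000)
```

Whatever default allocation the XORWOW generator makes (it is sized from device properties, not from your launch configuration), it will be far smaller than the tens of millions of states an 8,000 x 8,000 launch indexes, so threads read past the end of the state array.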

I will leave it as an exercise to the reader to work out which of these two approaches will work better in your application.
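If you take the second route, the dominant memory cost is the state array itself. A sizing sketch, assuming one state per grid cell and a curandStateXORWOW_t of 48 bytes (its size in recent CUDA toolkits; verify with sizeof against your curand_kernel.h):

```python
SIZEOF_CURAND_STATE_XORWOW = 48  # bytes; assumption -- check sizeof(curandStateXORWOW_t)

def state_buffer_bytes(matrix_size):
    # One state per grid cell, matching thread_id = y * grid_size + x.
    return matrix_size ** 2 * SIZEOF_CURAND_STATE_XORWOW

for n in (2000, 8000):
    print(f"{n} x {n}: {state_buffer_bytes(n) / 2**30:.2f} GiB of curandState")
```

Under these assumptions the 8,000 x 8,000 case needs roughly 2.9 GiB of state, which still fits in a K20X's 6 GB alongside the grids themselves, but the states would also need to be initialized (e.g. with a curand_init kernel) before first use.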