CUDA address out of bounds

Asked: 2016-04-22 06:12:06

Tags: c cuda gpu

I have been playing with a simple CUDA program that just zeroes out global memory. Here are the device code and host code:

#include <stdio.h>

__global__ void kernel(float *data, int width) {
    int x = blockDim.x * blockIdx.x + threadIdx.x;
    int y = blockDim.y * blockIdx.y + threadIdx.y;

    if (x > (width-1)) {
        printf("x = %d\n", x);
        printf("blockDim.x = %d\n", blockDim.x);
        printf("blockIdx.x = %d\n", blockIdx.x);
        printf("threadIdx.x = %d\n", threadIdx.x);
    }   

    if (y > (width-1)) {
        printf("y = %d\n", y);
        printf("blockDim.y = %d\n", blockDim.y);
        printf("blockIdx.y = %d\n", blockIdx.y);
        printf("threadIdx.y = %d\n", threadIdx.y);
    }   

    data[y * width + x] = 0.0;
}

int main(void) {
    const int MATRIX_SIZE = 256;
    float *data, *dataGPU;
    int sizeOfMem;
    int x = MATRIX_SIZE;
    int y = MATRIX_SIZE;

    cudaDeviceReset();
    cudaDeviceSynchronize();

    sizeOfMem = sizeof(float) * x * y;

    data = (float *)malloc(sizeOfMem);
    cudaMalloc((void **)&dataGPU, sizeOfMem);

    cudaMemcpy(dataGPU, data, sizeOfMem, cudaMemcpyHostToDevice);

    //int threads = 256;
    //int blocks = ((x * y) + threads - 1) / threads;

    dim3 threads(16, 16);
    dim3 blocks(x / 16, y / 16);

    kernel<<<blocks, threads>>>(dataGPU, MATRIX_SIZE);
    cudaThreadSynchronize();

    cudaMemcpy(data, dataGPU, sizeOfMem, cudaMemcpyDeviceToHost);

    cudaFree(dataGPU);

    free(data);

    return 0;
}

When I run the code with cuda-memcheck, I keep getting address-out-of-bounds error messages, but only when the matrix dimension is 128 or larger. With dimensions smaller than 128, errors are far less frequent (I almost never get one). You may notice the print statements I included in my kernel function. They only fire when I get an error message, because x and y should never be greater than width-1, which is 255 in this case, if I have done the math correctly. Here is the error output I get from cuda-memcheck:

  ========= CUDA-MEMCHECK
  ========= Invalid __global__ write of size 4
  =========     at 0x00000298 in kernel(float*, int)
  =========     by thread (3,10,0) in block (15,1,0)
  =========     Address 0x2300da6bcc is out of bounds
  =========     Saved host backtrace up to driver entry point at kernel launch time
  =========     Host Frame:/usr/lib64/nvidia/libcuda.so.1 (cuLaunchKernel + 0x2c5) [0x472225]
  =========     Host Frame:./test_reg_memory [0x16c41]
  =========     Host Frame:./test_reg_memory [0x31453]
  =========     Host Frame:./test_reg_memory [0x276d]
  =========     Host Frame:./test_reg_memory [0x24f0]
  =========     Host Frame:/lib64/libc.so.6 (__libc_start_main + 0xf5) [0x21b15]
  =========     Host Frame:./test_reg_memory [0x25cd]
  =========
  y = 2074
  blockDim.y = 16
  blockIdx.y = 1
  threadIdx.y = 10

This output makes no sense to me, because if I do the math,

y = blockDim.y * blockIdx.y + threadIdx.y = 16 * 1 + 10 = 26 (not 2074)

I have spent some time reading the CUDA programming forums, and nothing there seems to help. One thread I read suggested that I may have corrupted register memory; however, the person who started that thread had this problem on a different GPU. The thread is somewhat unrelated, but I have included the link anyway.

https://devtalk.nvidia.com/default/topic/498784/memory-corruption-on-a-fermi-class-gpu-error-only-on-fermis-program-works-on-non-fermis-/?offset=6

Below I have included my nvcc version.

 nvcc: NVIDIA (R) Cuda compiler driver
 Copyright (c) 2005-2015 NVIDIA Corporation
 Built on Tue_Aug_11_14:27:32_CDT_2015
 Cuda compilation tools, release 7.5, V7.5.17

Also, this is the GPU I am using.

 Device 0: "GeForce GT 640"
 CUDA Driver Version / Runtime Version 8.0 / 7.5
 CUDA Capability Major/Minor version number: 3.0

Can anyone with CUDA experience point out what I might be doing wrong?

1 Answer:

Answer 0 (score: 0)

This problem appears to be limited to one specific system and caused by some kind of hardware issue. The code itself is fine, and moving to a different system confirmed that it works correctly.

[This answer has been assembled from the comments and added as a community wiki entry in order to get the question off the unanswered queue for the CUDA tag.]