OpenCL Long Overflowing

Date: 2015-11-04 22:13:02

Tags: opencl integer-overflow

Before I start: I am a C beginner, and I may be going about this OpenCL work the wrong way. Here is my kernel code:

__kernel void collatz(__global int* in, __global int* out)
{
    uint id = get_global_id(0);
    unsigned long n = (unsigned long)id;
    uint count = 0;

    while (n > 1) { 
        if (n % 2 == 0) {
            n = n / 2; 
        } else { 
            if(n == 1572066143) {
                unsigned long test = n;
                printf("BEFORE - %lu\n", n);
                test = (3 * test) + 1; 
                printf("AFTER  - %lu\n", test);

                n = (3 * n) + 1; 
            } else {
                n = (3 * n) + 1;
            }
        }

        count = count + 1;
    }

    out[id] = count;

}

And the output:

BEFORE - 1572066143
AFTER  - 421231134

To me it looks like n has overflowed, but I can't work out why it is happening.

Interestingly, if I create a new variable that stores the same value as n, it seems to work correctly:

unsigned long test = 1572066143;
printf("BEFORE - %lu\n", test);
test = (3 * test) + 1; 
printf("AFTER  - %lu\n", test);

Output:

BEFORE - 1572066143
AFTER  - 4716198430

As I said, I'm a C beginner, so I could be doing something very stupid! Any help would be appreciated, as I've been pulling my hair out for hours!

Thanks, Stephen

Update

Here is my host code, in case I'm doing something stupid there:

int _tmain(int argc, _TCHAR* argv[])
{
    /*Step1: Getting platforms and choose an available one.*/
    cl_uint numPlatforms;   //the NO. of platforms
    cl_platform_id platform = NULL; //the chosen platform
    cl_int  status = clGetPlatformIDs(0, NULL, &numPlatforms);

    cl_platform_id* platforms = (cl_platform_id*)malloc(numPlatforms * sizeof(cl_platform_id));
    status = clGetPlatformIDs(numPlatforms, platforms, NULL);
    platform = platforms[0];
    free(platforms);

    /*Step 2:Query the platform and choose the first GPU device if has one.*/
    cl_device_id        *devices;
    devices = (cl_device_id*)malloc(1 * sizeof(cl_device_id));
    clGetDeviceIDs(platform, CL_DEVICE_TYPE_GPU, 1, devices, NULL);

    /*Step 3: Create context.*/
    cl_context context = clCreateContext(NULL, 1, devices, NULL, NULL, NULL);

    /*Step 4: Creating command queue associate with the context.*/
    cl_command_queue commandQueue = clCreateCommandQueue(context, devices[0], 0, NULL);

    /*Step 5: Create program object */
    const char *filename = "HelloWorld_Kernel.cl";
    std::string sourceStr;
    status = convertToString(filename, sourceStr);
    const char *source = sourceStr.c_str();
    size_t sourceSize[] = { strlen(source) };
    cl_program program = clCreateProgramWithSource(context, 1, &source, sourceSize, NULL);

    status = clBuildProgram(program, 1, devices, NULL, NULL, NULL);

    /*Step 7: Initial input,output for the host and create memory objects for the kernel*/
    cl_ulong max = 2000000;
    cl_ulong *numbers = NULL;
    numbers = new cl_ulong[max];
    for (cl_ulong i = 0; i < max; i++) {  /* i <= max would write one element past the end */
        numbers[i] = i;
    }

    int *output = (int*)malloc(sizeof(cl_ulong) * max);

    cl_mem inputBuffer = clCreateBuffer(context, CL_MEM_READ_ONLY | CL_MEM_COPY_HOST_PTR, max * sizeof(cl_ulong), (void *)numbers, NULL);
    cl_mem outputBuffer = clCreateBuffer(context, CL_MEM_WRITE_ONLY, max * sizeof(cl_ulong), NULL, NULL);

    /*Step 8: Create kernel object */
    cl_kernel kernel = clCreateKernel(program, "collatz", NULL);

    /*Step 9: Sets Kernel arguments.*/
    status = clSetKernelArg(kernel, 0, sizeof(cl_mem), (void *)&inputBuffer);


    // Determine the size of the log
    size_t log_size;
    clGetProgramBuildInfo(program, devices[0], CL_PROGRAM_BUILD_LOG, 0, NULL, &log_size);

    // Allocate memory for the log
    char *log = (char *)malloc(log_size);

    // Get the log
    clGetProgramBuildInfo(program, devices[0], CL_PROGRAM_BUILD_LOG, log_size, log, NULL);

    // Print the log
    printf("%s\n", log);


    status = clSetKernelArg(kernel, 1, sizeof(cl_mem), (void *)&outputBuffer);

    /*Step 10: Running the kernel.*/
    size_t global_work_size[] = { max };
    status = clEnqueueNDRangeKernel(commandQueue, kernel, 1, NULL, global_work_size, NULL, 0, NULL, NULL);

    /*Step 11: Read the data put back to host memory.*/
    status = clEnqueueReadBuffer(commandQueue, outputBuffer, CL_TRUE, 0, max * sizeof(cl_ulong), output, 0, NULL, NULL);

    return SUCCESS;

}

2 Answers:

Answer 0 (score: 0)

Host-side and device-side values can have different sizes. On the host, long varies between 32 and 64 bits depending on the platform. On the device, long always means exactly 64 bits.

The printf() function, as defined in C, uses %ld to print long (host-side long) numbers. You are using printf inside the kernel, so... it may be that a C-like parser is being used, which would print the variable as a 32-bit long.

Could you try printing it as %lld, or as a floating-point value?

Answer 1 (score: 0)

I finally found the source of the problem.

I was running the code on my Intel HD Graphics 4600 chip, which produced the strange behaviour shown in the original question. I switched to my AMD card, and it started working as expected!

Very strange. Thanks everyone for your help!