"Expected expression" error when compiling a GPU kernel on an NVIDIA GPU

Date: 2020-01-24 00:00:27

Tags: opencl nvidia pyopencl

When I try to run an OpenCL kernel on a server with an NVIDIA GPU, I hit the following problem; on my Mac there is no issue. This line of code seems to be the culprit:

float largest_0 = max(float (sin_i_angle), float (cos_i_angle));

Here is the error message:

  File "threed_dp.py", line 918, in gpu_calculate_segment_costs_orig
    bld = prg.build()
  File "/work/mrdrygal/.local/lib/python3.6/site-packages/pyopencl/__init__.py", line 510, in build
    options_bytes=options_bytes, source=self._source)
  File "/work/mrdrygal/.local/lib/python3.6/site-packages/pyopencl/__init__.py", line 554, in _build_and_catch_errors
    raise err
pyopencl._cl.RuntimeError: clBuildProgram failed: BUILD_PROGRAM_FAILURE - clBuildProgram failed: BUILD_PROGRAM_FAILURE - clBuildProgram failed: BUILD_PROGRAM_FAILURE

Build on <pyopencl.Device 'Tesla P100-PCIE-16GB' on 'NVIDIA CUDA' at 0x3767e50>:

<kernel>:82:33: error: expected expression
          float largest_0 = max(float (sin_i_angle), float (cos_i_angle));

1 Answer:

Answer 0 (score: 2)

float (sin_i_angle)

is not a valid expression in C. It is valid in C++ (an explicit call to the float() constructor), which is probably why Apple's OpenCL compiler accepts it. You should change the line to:

float largest_0 = max((float)sin_i_angle, (float)cos_i_angle);
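
As a sanity check, here is a minimal, self-contained pyopencl sketch that builds and runs a kernel using the C-style cast. The kernel body, buffer sizes, and the largest_angle name are hypothetical stand-ins, since the question's full code is not shown.

import numpy as np
import pyopencl as cl

KERNEL_SRC = """
__kernel void largest_angle(__global const float *angles,
                            __global float *out)
{
    int i = get_global_id(0);
    float sin_i_angle = sin(angles[i]);
    float cos_i_angle = cos(angles[i]);
    /* C-style casts compile on every OpenCL C compiler; the C++-style
       float(x) form is what NVIDIA's compiler rejects. The casts are
       redundant here because the values are already float, but they
       show the accepted syntax. */
    float largest_0 = max((float)sin_i_angle, (float)cos_i_angle);
    out[i] = largest_0;
}
"""

ctx = cl.create_some_context()
queue = cl.CommandQueue(ctx)

angles = np.linspace(0, np.pi, 16).astype(np.float32)
out = np.empty_like(angles)

mf = cl.mem_flags
angles_buf = cl.Buffer(ctx, mf.READ_ONLY | mf.COPY_HOST_PTR, hostbuf=angles)
out_buf = cl.Buffer(ctx, mf.WRITE_ONLY, out.nbytes)

prg = cl.Program(ctx, KERNEL_SRC).build()   # the call that failed in the traceback
prg.largest_angle(queue, angles.shape, None, angles_buf, out_buf)
cl.enqueue_copy(queue, out, out_buf)

If you prefer an explicit conversion function over a cast, OpenCL C also provides convert_float(x), which is equally portable across compilers.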