Segmentation fault in a CUDA kernel median filter

Date: 2018-07-26 19:16:00

Tags: c++ linux cuda

I am programming a kernel for a 3x3 median filter and want to apply it to an image. My image is stored as a flat float array in interleaved RGB.

I launch one thread per pixel and compute all 3 colors within each thread.

I tried different image sizes, with different results:

  • 512x512: no CUDA errors, and no errors when running under cuda-memcheck
  • 1024x1024 and larger: CUDA errors and memcheck errors

The image buffer is allocated as:

float *myImage = new float[pixelCount * channelCount];

The CUDA error I receive:

an illegal memory access was encountered

The (first) cuda-memcheck output:

========= CUDA-MEMCHECK  
========= Invalid __global__ read of size 4  
=========     at 0x00001410 in   BackwardMappingCUDAUtils::parallelMedianInImage(float*, float*, unsigned int, unsigned int, int)  
=========     by thread (257,0,0) in block (127,0,0)  
=========     Address 0x7f535e5c0000 is out of bounds  
=========     Saved host backtrace up to driver entry point at kernel launch time  
=========     Host Frame:/usr/lib/x86_64-linux-gnu/libcuda.so.1   (cuLaunchKernel + 0x2cd) [0x22b3fd]  
=========     Host Frame:/path/to/libcudart.so.9.1 [0x15f70]  
=========     Host Frame:/path/to/libcudart.so.9.1 (cudaLaunch + 0x14e) [0x347be]  
=========     Host Frame:/path/to/build_debug/lib/libBackwardMappingCudaUtilsD.so [0x23fc]  
=========     Host Frame:/path/to/build_debug/lib/libBackwardMappingCudaUtilsD.so (_Z75__device_stub__ZN24BackwardMappingCUDAUtils21parallelMedianInImageEPfS0_jjiPfS_jji + 0xd6) [0x20f2]  
=========     Host Frame:/path/to/build_debug/lib/libBackwardMappingCudaUtilsD.so (_ZN24BackwardMappingCUDAUtils21parallelMedianInImageEPfS0_jji + 0x36) [0x2139]  
=========     Host Frame:./CUDAStream (main + 0x1476) [0xf211]  
=========     Host Frame:/path/to/libc.so.6 (__libc_start_main + 0xe7) [0x21b97]  
=========     Host Frame:./CUDAStream (_start + 0x2a) [0xd63a]  

The valid array is used to check that every neighbor index lies within the image bounds.
The kernel:

__global__
void parallelMedianInImage(float *source, float *sink, unsigned int width, unsigned int pixelCount, int channelCount)
{
  unsigned int pixelID = blockIdx.x * blockDim.x + threadIdx.x; 
  unsigned int colorIndexRed = pixelID * channelCount;
  unsigned int colorWidth = width * channelCount;
  unsigned int valueCount = pixelCount * channelCount;

  if(pixelID<pixelCount)
  {
    int validValues = 0;
    bool valid[9];
    int indizes[9];

    indizes[0] = colorIndexRed - colorWidth - channelCount;
    indizes[1] = colorIndexRed - colorWidth;
    indizes[2] = colorIndexRed - colorWidth + channelCount;
    indizes[3] = colorIndexRed - channelCount;
    indizes[4] = colorIndexRed;
    indizes[5] = colorIndexRed + channelCount;
    indizes[6] = colorIndexRed + colorWidth - channelCount;
    indizes[7] = colorIndexRed + colorWidth;
    indizes[8] = colorIndexRed + colorWidth + channelCount;

    for(int u=0;u<9;u++)
    {
      valid[u] = true;

      if(u/3==0&&((indizes[u] / colorWidth) != (colorIndexRed / colorWidth) - 1)) valid[u] = false;
      if(u/3==1&&((indizes[u] / colorWidth) != (colorIndexRed / colorWidth))) valid[u] = false;
      if(u/3==2&&((indizes[u] / colorWidth) != (colorIndexRed / colorWidth) + 1)) valid[u] = false;
      if(indizes[u]<0 || indizes[u]>valueCount) valid[u] = false;

      if(valid[u]) validValues++;
    }

    for(int channel=0;channel<channelCount;channel++)
    {
      float values[9];
      for(int u=0;u<9;u++)
      {
        if(valid[u])
          values[u]=source[indizes[u] + channel];
        else
          values[u]=0.0;

      }
      insertionSortFloatArray(values, 9);
      int middleIndex = 8 - (validValues/2);
      sink[colorIndexRed + channel] = values[middleIndex];
    }
  }
}

The sorting function:

__device__
void insertionSortFloatArray(float array[], int length)
{
  float swapper;
  for(int i=1;i<length;i++)
  {
    swapper = array[i];
    for(int u=i-1;u>=0;u--)
    {
      if(array[u]>swapper)
      {
        array[u+1] = array[u];
        array[u] = swapper;
      }
    }
  }
}

The kernel call runs in a loop, since I have several images:

cudaMalloc((void**)&smallUndistortedDeviceImages[reducedIndex], sizeSmall);
parallelMedianShrinking<<<(pixelCountSmall+TPB-1)/TPB,TPB>>>(undistortedDeviceImages[reducedIndex], smallUndistortedDeviceImages[reducedIndex], widthSmall, pixelCountSmall, channelCount);
error = cudaGetLastError();
if(error != cudaSuccess)
{
  printf(" ### CUDA error: %s\n", cudaGetErrorString(error));
}
//removed code that copies the result to the devices and stores it as an image
cudaFree(smallLightMaskStep1Images[reducedIndex]);

I tried different numbers of threads per block. Since the kernel sits in a dynamically linked library, cuda-memcheck cannot tell me the exact line of the segfault. Because it always occurs in thread (1,0,0), I added a printf guarded by if(threadIdx.x==1) that printed the addresses of every variable I could think of, so I could compare them afterwards with the addresses in the cuda-memcheck output, but I never managed to find which variable it was. By commenting out individual lines I could trace it to roughly the last three lines of the kernel. The sorting function works and is used in another kernel. Yet when I comment out the sorting call, it works (it just copies the image without applying the filter).

I cannot figure this out. Thanks in advance. This is my first post, so I hope I included everything relevant; sorry if I missed anything.

Specs: Ubuntu 18.04, CUDA V9.1.85, GeForce GTX 1080 with 8 GB RAM

1 Answer:

Answer 0 (score: 1)

The error is in the line if(indizes[u]<0 || indizes[u]>valueCount) valid[u] = false;. It should say indizes[u]>=valueCount, since of course we count from zero. That caused the segfault. It works fine now.