"Out of resources" error when unrolling a loop

Date: 2011-09-28 14:37:44

Tags: cuda pycuda loop-unrolling

When I increase the unrolling in my kernel from 8 to 9 iterations, it breaks with an out of resources error.

I read in "How do I diagnose a CUDA launch failure due to being out of resources?" that a mismatched parameter list and heavy register usage can cause this, but that doesn't seem to be the case here.

My kernel computes the distance between n points and m centroids and picks the nearest centroid for each point. It works for 8 dimensions but not for 9. When I set dimensions=9 and uncomment the two lines of the distance computation, I get pycuda._driver.LaunchError: cuLaunchGrid failed: launch out of resources.

What do you think could cause this behavior? What other factors can lead to an out of resources error?

I'm using a Quadro FX 580. Here is a minimal(ish) example. For the unrolling in the real code I use templates.

import numpy as np
from pycuda import driver, compiler, gpuarray, tools
import pycuda.autoinit


## preference
np.random.seed(20)
points = 512
dimensions = 8
nclusters = 1

## init data
data = np.random.randn(points,dimensions).astype(np.float32)
clusters = data[:nclusters]

## init cuda
kernel_code = """

      // the kernel definition 
    __device__ __constant__ float centroids[16384];

    __global__ void kmeans_kernel(float *idata,float *g_centroids,
    int * cluster, float *min_dist, int numClusters, int numDim) {
    int valindex = blockIdx.x * blockDim.x + threadIdx.x ;
    float increased_distance,distance, minDistance;
    minDistance = 10000000 ;
    int nearestCentroid = 0;
    for(int k=0;k<numClusters;k++){
      distance = 0.0;
      increased_distance = idata[valindex*numDim] -centroids[k*numDim];
      distance = distance +(increased_distance * increased_distance);
      increased_distance =  idata[valindex*numDim+1] -centroids[k*numDim+1];
      distance = distance +(increased_distance * increased_distance);
      increased_distance =  idata[valindex*numDim+2] -centroids[k*numDim+2];
      distance = distance +(increased_distance * increased_distance);
      increased_distance =  idata[valindex*numDim+3] -centroids[k*numDim+3];
      distance = distance +(increased_distance * increased_distance);
      increased_distance =  idata[valindex*numDim+4] -centroids[k*numDim+4];
      distance = distance +(increased_distance * increased_distance);
      increased_distance =  idata[valindex*numDim+5] -centroids[k*numDim+5];
      distance = distance +(increased_distance * increased_distance);
      increased_distance =  idata[valindex*numDim+6] -centroids[k*numDim+6];
      distance = distance +(increased_distance * increased_distance);
      increased_distance =  idata[valindex*numDim+7] -centroids[k*numDim+7];
      distance = distance +(increased_distance * increased_distance);
      //increased_distance =  idata[valindex*numDim+8] -centroids[k*numDim+8];
      //distance = distance +(increased_distance * increased_distance);

      if(distance <minDistance) {
        minDistance = distance ;
        nearestCentroid = k;
        } 
      }
      cluster[valindex]=nearestCentroid;
      min_dist[valindex]=sqrt(minDistance);
    } 
 """
mod = compiler.SourceModule(kernel_code)
centroids_adrs = mod.get_global('centroids')[0]    
kmeans_kernel = mod.get_function("kmeans_kernel")
clusters_gpu = gpuarray.to_gpu(clusters)
cluster = gpuarray.zeros(points, dtype=np.int32)
min_dist = gpuarray.zeros(points, dtype=np.float32)

driver.memcpy_htod(centroids_adrs,clusters)

distortion = gpuarray.zeros(points, dtype=np.float32)
block_size= 512

## start kernel
kmeans_kernel(
    driver.In(data),driver.In(clusters),cluster,min_dist,
    np.int32(nclusters),np.int32(dimensions),
    grid = (points/block_size,1),
    block = (block_size, 1, 1),
)
print cluster
print min_dist

1 Answer:

Answer 0 (score: 8)

You are running out of registers because your block_size (512) is too large.

ptxas reports that your kernel uses 16 registers with the two lines commented out:

$ nvcc test.cu -Xptxas --verbose
ptxas info    : Compiling entry function '_Z13kmeans_kernelPfS_PiS_ii' for 'sm_10'
ptxas info    : Used 16 registers, 24+16 bytes smem, 65536 bytes cmem[0]

Uncommenting the lines increases register usage to 17, and the kernel fails at runtime:

$ nvcc test.cu -run -Xptxas --verbose
ptxas info    : Compiling entry function '_Z13kmeans_kernelPfS_PiS_ii' for 'sm_10'
ptxas info    : Used 17 registers, 24+16 bytes smem, 65536 bytes cmem[0]
error: too many resources requested for launch

The number of physical registers used by each thread of your kernel limits the size of the blocks you can launch at runtime. An SM 1.0 device has 8K registers usable by a single thread block. Compare that to your kernel's register demands: 17 * 512 = 8704 > 8K. At 16 registers, your original commented version just squeaks by: 16 * 512 = 8192 == 8K.
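You can check this arithmetic at runtime. The following is a minimal sketch (it assumes the kmeans_kernel object and the imports from your script are in scope, and it ignores register-allocation granularity, so it gives an upper bound rather than an exact limit):

# Sketch: estimate the largest launchable block size from the kernel's
# per-thread register count and the device's per-block register file.
regs_per_thread = kmeans_kernel.num_regs  # 16 or 17 in the builds above
regs_per_block = pycuda.autoinit.device.get_attribute(
    driver.device_attribute.MAX_REGISTERS_PER_BLOCK)  # 8192 on SM 1.x

# Hardware rounds register allocation up, so the true limit may be lower.
print(regs_per_block // regs_per_thread)  # 8192 // 17 = 481, i.e. < 512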

If no architecture is specified, nvcc compiles kernels for SM 1.0 devices by default. PyCUDA may work the same way.
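A sketch of how to inspect the card and override the target architecture (this reuses kernel_code and the imports from your script; the arch keyword is forwarded to nvcc, and sm_20 is only valid if the device really is an SM 2.0 part):

# What compute capability does the driver report for the card?
print(pycuda.autoinit.device.compute_capability())  # e.g. (2, 0)

# Ask nvcc to target that architecture instead of its sm_10 default.
mod = compiler.SourceModule(kernel_code, arch="sm_20")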

To solve your problem, you can either reduce block_size (to, say, 256) or find a way to configure PyCUDA to compile your kernel for an SM 2.0 device, as sketched above. An SM 2.0 device such as the Quadro FX 580 provides 32K registers, more than enough for your original block_size of 512.
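A sketch of the simpler fix, reusing the launch code from the question (halving block_size doubles the grid, so all 512 points are still covered):

block_size = 256  # 256 threads * 17 registers = 4352 <= 8192

kmeans_kernel(
    driver.In(data),driver.In(clusters),cluster,min_dist,
    np.int32(nclusters),np.int32(dimensions),
    grid = (points/block_size,1),   # now (2, 1)
    block = (block_size, 1, 1),
)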