CUDA + OpenGL: unknown error code = 4 (cudaErrorLaunchFailure)

Asked: 2013-06-17 11:16:15

Tags: opengl cuda

I am writing a simple n-body simulation in CUDA and trying to visualize it with OpenGL.

After the particle data has been initialized on the CPU, the corresponding memory allocated, and the data transferred to the GPU, the program enters the following loop:

1) Compute the force acting on each particle (CUDA part)

2) Update the particle positions (CUDA part)

3) Display the particles for this time step (OpenGL part)

4) Go back to 1)

The interface between CUDA and OpenGL is implemented through the following declarations:

GLuint dataBufferID;
particle_t* Particles_d;
particle_t* Particles_h;
cudaGraphicsResource *resources[1];

I allocate space in an OpenGL array buffer and register the latter as a cudaGraphicsResource with the following code:

void createVBO()
{
    // create buffer object
    glGenBuffers(1, &dataBufferID);
    glBindBuffer(GL_ARRAY_BUFFER, dataBufferID);
    glBufferData(GL_ARRAY_BUFFER, bufferStride*N*sizeof(float), 0, GL_DYNAMIC_DRAW);
    glBindBuffer(GL_ARRAY_BUFFER, 0);

    // register the buffer object with CUDA
    checkCudaErrors(cudaGraphicsGLRegisterBuffer(resources, dataBufferID, cudaGraphicsMapFlagsNone));
}
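A note on when this setup has to run (not shown in the question, so the surrounding names are assumptions): cudaGraphicsGLRegisterBuffer requires a current OpenGL context, so createVBO() only needs to run once after the window and context exist. A rough sketch of a plausible initialization order, where initParticles() and display() are hypothetical stand-ins for code not shown:

```cuda
// Sketch only: assumes GLUT/GLEW; initParticles() and display()
// are hypothetical names for code the question does not show.
int main(int argc, char** argv)
{
    glutInit(&argc, argv);
    glutInitDisplayMode(GLUT_DOUBLE | GLUT_RGBA);
    glutCreateWindow("n-body");        // GL context must exist before registering
    glewInit();                        // load GL extension entry points

    createVBO();                       // allocate + register the buffer ONCE
    initParticles();                   // allocate and fill Particles_d

    glutDisplayFunc(display);
    glutTimerFunc(milisec, update, 0); // kick off the simulation loop
    glutMainLoop();
    return 0;
}
```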

Finally, the program loop I described (steps 1 to 4) is implemented by the following function update(int):

void update(int value)
{
    // map OpenGL buffer object for writing from CUDA
    float* dataPtr;
    checkCudaErrors(cudaGraphicsMapResources(1, resources, 0));
    size_t num_bytes;
    // get a pointer to that buffer object for manipulation with CUDA!
    checkCudaErrors(cudaGraphicsResourceGetMappedPointer((void **)&dataPtr, &num_bytes, resources[0]));

    // fill the graphics resource with particle position data!
    launch_kernel<<<NUM_BLOCKS,NUM_THREADS>>>(Particles_d, dataPtr, 1);

    // unmap buffer object
    checkCudaErrors(cudaGraphicsUnmapResources(1, resources, 0));
    glutPostRedisplay();
    glutTimerFunc(milisec, update, 0);
}
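One diagnostic worth knowing here: a cudaErrorLaunchFailure reported by cudaGraphicsMapResources usually originates in an *earlier* failed kernel launch, because launch errors only surface at the next CUDA runtime call. Checking immediately after the launch pins the error to its real source; a minimal sketch, assuming the same globals as the update() above:

```cuda
// Sketch: report a launch failure at the launch site instead of
// at the next unrelated API call (e.g. cudaGraphicsMapResources).
launch_kernel<<<NUM_BLOCKS,NUM_THREADS>>>(Particles_d, dataPtr, 1);
checkCudaErrors(cudaGetLastError());       // catches invalid launch configurations
checkCudaErrors(cudaDeviceSynchronize());  // surfaces errors raised inside the kernel
```

cudaDeviceSynchronize() costs performance, so it is typically used while debugging and removed afterwards.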

When the program runs, I get the following errors:

CUDA error at src/main.cu:390 code=4 (cudaErrorLaunchFailure) "cudaGraphicsMapResources(1, resources, 0)"

CUDA error at src/main.cu:392 code=4 (cudaErrorLaunchFailure) "cudaGraphicsResourceGetMappedPointer((void **)&dataPtr, &num_bytes, resources[0])"

CUDA error at src/main.cu:397 code=4 (cudaErrorLaunchFailure) "cudaGraphicsUnmapResources(1, resources, 0)"

Does anyone know what the cause of this error might be? Should I instead re-create the data buffer with createVBO() before every call to update(int)?

P.S. For clarity, my kernel function is the following:

__global__ void launch_kernel(particle_t* Particles, float* data, int KernelMode){

    int i = blockIdx.x*THREADS_PER_BLOCK + threadIdx.x;

    if(KernelMode == 1){
        // N_d is allocated in device memory
        if(i > N_d)
            return;
        // update X and write the result into the data buffer!
        updateX(Particles+i);

        for(int d=0; d<DIM_d; d++){
            data[i*bufferStride_d+d] = Particles[i].p[d]; // update the new coordinate positions in the data buffer!
        }
        // fill in also the RGB data and the radius. In general THIS IS NOT NECESSARY!! NEEDS TO BE DONE ONCE! REFACTOR!!!
        data[i*bufferStride_d+DIM_d]   = Particles[i].r;
        data[i*bufferStride_d+DIM_d+1] = Particles[i].g;
        data[i*bufferStride_d+DIM_d+2] = Particles[i].b;
        data[i*bufferStride_d+DIM_d+3] = Particles[i].radius;

    }else{
        // if KernelMode == 2 then update V
        float* Fold = new float[DIM_d];
        for(int d=0; d<DIM_d; d++)
            Fold[d] = Particles[i].force[d];

        // of course in parallel :)
        computeForces(Particles, i);
        updateV(Particles+i, Fold);
        delete [] Fold;
    }
    // in either case wait for all threads to finish!
    __syncthreads();
}

1 Answer:

Answer 0 (score: 1)

As I mentioned in one of the comments above, it turned out that I had gotten the compute-capability compiler option wrong. I ran cuda-memcheck and saw that the CUDA API launches were failing. After finding the right compiler option, everything worked like a charm.
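The likely connection, stated as an assumption since the answer does not show the build line: the kernel uses new/delete in device code, which requires compute capability 2.0 or higher, so compiling for an older sm_1x target would make such a kernel fail at launch, with the error then reported by the next CUDA call (here cudaGraphicsMapResources). A build line of the kind that fixes this (the sm_20 target and library list are illustrative, adjust to the actual GPU and setup):

```shell
# Hypothetical build line: target compute capability 2.0 so that
# in-kernel new/delete is supported; adjust sm_20 to the real device.
nvcc -arch=sm_20 -o nbody src/main.cu -lGL -lGLU -lglut -lGLEW
```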