I want to find a way to dynamically calculate the grid and block sizes needed for a computation. The problem I've run into is that the problem I want to process is too large to handle in a single run on the GPU, from a thread-limit standpoint. Here is a sample kernel setup that reproduces the error I'm encountering:
__global__ void populateMatrixKernel(char * outMatrix, const int pointsToPopulate)
{
int i = blockIdx.x * blockDim.x + threadIdx.x;
if (i < pointsToPopulate)
{
outMatrix[i] = 'A';
}
}
cudaError_t populateMatrixCUDA(char * outMatrix, const int pointsToPopulate, cudaDeviceProp &deviceProp)
{
//Device arrays to be used
char * dev_outMatrix = 0;
cudaError_t cudaStatus;
//THIS IS THE CODE HERE I'M WANTING TO REPLACE
//Calculate the block and grid parameters
auto gridDiv = div(pointsToPopulate, deviceProp.maxThreadsPerBlock);
auto gridX = gridDiv.quot;
if (gridDiv.rem != 0)
gridX++; //Round up if we have straggling points to populate
auto blockSize = deviceProp.maxThreadsPerBlock;
int gridSize = min(16 * deviceProp.multiProcessorCount, gridX);
//END REPLACE CODE
//Allocate GPU buffers
cudaStatus = cudaMalloc((void**)&dev_outMatrix, pointsToPopulate * sizeof(char));
if (cudaStatus != cudaSuccess)
{
cerr << "cudaMalloc failed!" << endl;
goto Error;
}
populateMatrixKernel<<<gridSize, blockSize>>>(dev_outMatrix, pointsToPopulate);
//Check for errors launching the kernel
cudaStatus = cudaGetLastError();
if (cudaStatus != cudaSuccess)
{
cerr << "Population launch failed: " << cudaGetErrorString(cudaStatus) << endl;
goto Error;
}
//Wait for threads to finish
cudaStatus = cudaDeviceSynchronize();
if (cudaStatus != cudaSuccess) {
cerr << "cudaDeviceSynchronize returned error code " << cudaStatus << " after launching visit and bridger analysis kernel!" << endl;
cout << "Cuda failure " << __FILE__ << ":" << __LINE__ << " '" << cudaGetErrorString(cudaStatus);
goto Error;
}
//Copy output to host memory
cudaStatus = cudaMemcpy(outMatrix, dev_outMatrix, pointsToPopulate * sizeof(char), cudaMemcpyDeviceToHost);
if (cudaStatus != cudaSuccess) {
cerr << "cudaMemcpy failed!" << endl;
goto Error;
}
Error:
cudaFree(dev_outMatrix);
return cudaStatus;
}
Now, when I test this code with the following test setup:
//Make sure we can use the graphics card (This calculation would be unreasonable otherwise)
if (cudaSetDevice(0) != cudaSuccess) {
cerr << "cudaSetDevice failed! Do you have a CUDA-capable GPU installed?" << endl;
}
cudaDeviceProp deviceProp;
cudaError_t cudaResult;
cudaResult = cudaGetDeviceProperties(&deviceProp, 0);
if (cudaResult != cudaSuccess)
{
cerr << "cudaGetDeviceProperties failed!" << endl;
}
int pointsToPopulate = 250000 * 300;
auto gpuMatrix = new char[pointsToPopulate];
fill(gpuMatrix, gpuMatrix + pointsToPopulate, 'B');
populateMatrixCUDA(gpuMatrix, pointsToPopulate, deviceProp);
for (int i = 0; i < pointsToPopulate; ++i)
{
if (gpuMatrix[i] != 'A')
{
cout << "ERROR: " << i << endl;
cin.get();
}
}
I get an error at i = 81920. Furthermore, if I inspect the memory before and after execution, every memory value past 81920 goes from 'B' to null. The error appears to originate from this line in the kernel execution parameter code:
int gridSize = min(16 * deviceProp.multiProcessorCount, gridX);
For my graphics card (a GTX 980M), deviceProp.multiProcessorCount is 5; multiplying that by 16, and then by 1024 (the maximum number of threads per block), gives 81920. So it seems I'm fine in terms of memory space, but I'm being choked by the number of threads that can run. Now, the 16 was just set as an arbitrary value (after looking at some sample code a friend of mine made), and I'd like to know whether there is a way to actually calculate what that 16 should be based on GPU properties, rather than setting it arbitrarily. I'd like to write iterative code that determines the maximum number of computations that can be performed at one time, and then fills in the matrix piece by piece accordingly, but I need to know that maximum value to do so. Does anyone know a way to calculate these parameters? I'm happy to provide more information if needed. Thanks!
Answer 0 (score: 1)
There is fundamentally nothing wrong with the code you have posted, and it is probably close to best practice. But it isn't compatible with the design idiom of your kernel.
As you can see here, your GPU is capable of running 2^31 - 1, or 2147483647, blocks. So you could change the relevant code to this:
unsigned int gridSize = min(2147483647u, gridX);
and it should work. Better still, don't change that code at all, and instead change the kernel to this:
__global__ void populateMatrixKernel(char * outMatrix, const int pointsToPopulate)
{
int i = blockIdx.x * blockDim.x + threadIdx.x;
for(; i < pointsToPopulate; i += blockDim.x * gridDim.x)
{
outMatrix[i] = 'A';
}
}
That way, your kernel emits multiple outputs per thread, and everything should work as intended.