I want to generate some decision trees on CUDA. Below is pseudocode (the code is very rough, it is only meant to illustrate what I wrote):
class Node
{
public:
    Node* father;   // parent node (NULL for the root)
    Node** sons;    // array of child pointers, filled in by the kernel
    int countSons;  // number of entries in sons

    // Default the father to NULL so the root can be built with new Node()
    __device__ __host__ Node(Node* father = NULL)
    {
        this->father = father;
        sons = NULL;
        countSons = 0;  // was left uninitialized before
    }
};
__global__ void GenerateSons(Node** fathers, int* countFathers, Node** sons, int* countSons)
{
int Thread_Index = (blockDim.x * blockIdx.x) + threadIdx.x;
if(Thread_Index < *(countFathers))
{
Node* Thread_Father = fathers[Thread_Index];
Node** Thread_Sons;
int Thread_countSons;
//Now we are creating new sons for our Thread_Father
/*
* Generating Thread_Sons for Thread_Father;
*/
Thread_Father->sons = Thread_Sons;
Thread_Father->countSons = Thread_countSons;
//Wait for others: accumulate the total number of sons generated so far.
/*A plain += would race between threads, so use atomicAdd; note that
__syncthreads only synchronizes the threads of the same block.
*/
atomicAdd(countSons, Thread_countSons);
__syncthreads();
//Gather all sons generated by the whole block and copy them into sons.
//Note: this assignment only changes the thread-local copy of the
//parameter, so it is not visible to the other threads or to the host.
if(threadIdx.x == 0)
{
    sons = new Node*[*(countSons)];
}
/*__syncthreads here so that the sons array is allocated before any
thread writes into it
*/
__syncthreads();
int Thread_Offset;
/*
* Compute the correct offset into sons for the current thread
*/
for(int i = 0; i < Thread_countSons; i++)
    sons[Thread_Offset + i] = Thread_Sons[i];
}
}
int main()
{
Node* root = new Node();
//transfer root to kernel by cudaMalloc and cudaMemcpy
Node* root_d = root->transfer();
Node** fathers_d;
/*
* prepare an array containing the root as the only father and copy it to the device
*/
int *countFathers, *countSons;
/*
* prepare the int pointers for the device and set the value behind countFathers to 1
*/
for(int i = 0; i < LevelTree; i++)
{
Node** sons = NULL;
int threadsPerBlock = 256;
int blocksPerGrid = (*(countFathers)/*get count of fathers*/ + threadsPerBlock - 1) / threadsPerBlock;
GenerateSons<<<blocksPerGrid , threadsPerBlock >>>(fathers_d, countFathers, sons, countSons);
//Wait for end of kernel call
cudaDeviceSynchronize();
//replace
fathers_d = sons;
countFathers = countSons;
}
}
So, it works for 5 levels (generating a decision tree for checkers), but at level 6 I get an error: somewhere in the kernel code, malloc returns NULL, which I read as some threads in the block being unable to allocate any more memory. I'm fairly sure that at the end of every kernel call I clean up every object I no longer need. I suspect I'm misunderstanding some fact about how memory is used in CUDA: if I create objects in a thread's local memory and the kernel finishes its work, then at the next launch of the kernel I can still see the Nodes created by the first call. So my question is: where are the Node objects from the first kernel call stored? Are they kept in the local memory of the threads in the block? And if so, does every call of my kernel function shrink the local memory space available to those threads?
I'm working on a GT 555M (compute capability 2.1) with CUDA SDK 5.0, Nsight 3.0 and Visual Studio 2010 Premium.
Answer 0 (score: 2)
Okay, I found out that new and malloc calls inside a kernel are allocated in global memory on the device.
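A minimal sketch of what that means in practice (the kernel names and the pointer slot below are only illustrative): a pointer returned by device-side new/malloc lives in the device heap in global memory, so it stays valid after the kernel returns and can be read, and freed, by a later kernel:

#include <cstdio>

// Global-memory slot used to hand the device-heap pointer
// from the first kernel launch to the second one.
__device__ int* slot;

__global__ void Allocate()
{
    // Allocated on the device heap (global memory), not in
    // thread-local memory: it survives the end of this kernel.
    slot = new int[4];
    for(int i = 0; i < 4; i++)
        slot[i] = i;
}

__global__ void ConsumeAndFree()
{
    // The allocation made by the previous launch is still there.
    printf("slot[3] = %d\n", slot[3]);
    // Device-heap memory must be released with device-side delete/free;
    // the host cannot cudaFree() it.
    delete[] slot;
}

int main()
{
    Allocate<<<1, 1>>>();
    ConsumeAndFree<<<1, 1>>>();
    cudaDeviceSynchronize();
    return 0;
}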
I also found this:
"By default, CUDA creates a heap of 8MB."
CUDA Application Design and Development, page 128
So I used cudaDeviceSetLimit(cudaLimitMallocHeapSize, 128*1024*1024); to increase the device heap to 128 MB, and the program now correctly generates a 6-level tree (22110 sons), but I'm actually getting some memory leaks... which I still need to find.
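For reference, a minimal sketch of raising the limit and reading it back (the error handling and the 128 MB figure are just for illustration); the limit has to be set before the first kernel that uses the device heap is launched, and every device-side new still needs a matching device-side delete, which is where the remaining leak most likely hides:

#include <cstdio>

int main()
{
    // Must run before the first kernel that calls new/malloc on the
    // device; trying to change it afterwards returns an error.
    cudaError_t err = cudaDeviceSetLimit(cudaLimitMallocHeapSize,
                                         128 * 1024 * 1024);
    if(err != cudaSuccess)
        printf("cudaDeviceSetLimit failed: %s\n", cudaGetErrorString(err));

    // Read the limit back to see what was actually granted.
    size_t heapSize = 0;
    cudaDeviceGetLimit(&heapSize, cudaLimitMallocHeapSize);
    printf("device malloc heap: %llu bytes\n", (unsigned long long)heapSize);

    // ... build the tree level by level with GenerateSons here ...

    return 0;
}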