Optimizing host-to-GPU transfers

Date: 2014-10-26 04:19:51

Tags: opencl gpu

I'm using OpenCL to offload work (a variant of matrix multiplication) to the GPU. The matrix code itself performs really well, but the cost of moving the data to the GPU is prohibitive.

I've moved from using clEnqueueRead/clEnqueueWrite to memory-mapped buffers, as follows:

d_a = clCreateBuffer(context, CL_MEM_READ_ONLY | CL_MEM_ALLOC_HOST_PTR,
                     sizeof(char) * queryVector_size,
                     NULL, NULL);
checkErr(err, "Buf A");

d_b = clCreateBuffer(context, CL_MEM_READ_ONLY | CL_MEM_ALLOC_HOST_PTR,
                     sizeof(char) * segment_size,
                     NULL, NULL);
checkErr(err, "Buf B");

err  = clSetKernelArg(ko_smat, 0, sizeof(cl_mem), &d_c);
checkErr(err,"Compute Kernel");
err = clSetKernelArg(ko_smat, 1, sizeof(cl_mem), &d_a);
checkErr(err,"Compute Kernel");
err = clSetKernelArg(ko_smat, 2, sizeof(cl_mem), &d_b);
checkErr(err,"Compute Kernel");

query_vector = (char*) clEnqueueMapBuffer(commands, d_a, CL_TRUE, CL_MAP_READ, 0, sizeof(char) * queryVector_size, 0, NULL, NULL, &err);
checkErr(err, "Write A");

segment_data = (char*) clEnqueueMapBuffer(commands, d_b, CL_TRUE, CL_MAP_READ, 0, sizeof(char) * segment_size, 0, NULL, NULL, &err);
checkErr(err, "Write B");

// code which initialises buffers using ptrs (segment_data and queryV)

err = clEnqueueUnmapMemObject(commands, d_a, query_vector, 0, NULL, NULL);
checkErr(err, "Unmap Buffer");

err = clEnqueueUnmapMemObject(commands, d_b, segment_data, 0, NULL, NULL);
checkErr(err, "Unmap Buff");

err = clEnqueueNDRangeKernel(commands, ko_smat, 2, NULL, globalWorkItems, localWorkItems, 0, NULL, NULL);

err = clFinish(commands);
checkErr(err, "Execute Kernel");

result = (char*) clEnqueueMapBuffer(commands, d_c, CL_TRUE, CL_MAP_WRITE, 0, sizeof(char) * result_size, 0, NULL, NULL, &err);
checkErr(err, "Write C");

printMatrix(result, result_row, result_col);

This code works fine when I use the clEnqueueRead/clEnqueueWrite approach to initialise d_a, d_b and d_c, but when I use the mapped buffers the result is 0, because d_a and d_b are null when the kernel runs.

What is the proper way to map/unmap buffers?

EDIT: The core of the problem seems to be here:

segment_data = (char*) clEnqueueMapBuffer(commands, d_b, CL_TRUE, CL_MAP_READ, 0, sizeof(char) * segment_width * segment_length, 0, NULL, NULL, &err);

// INITIALISE

printMatrix(segment_data, segment_length, segment_width);

// ALL GOOD

err = clEnqueueUnmapMemObject(commands, d_b, segment_data, 0, NULL, NULL);
checkErr(err, "Unmap Buff");

segment_data = (char*) clEnqueueMapBuffer(commands, d_b, CL_TRUE, CL_MAP_READ, 0, sizeof(char) * segment_width * segment_length, 0, NULL, NULL, &err);

printMatrix(segment_data, segment_length, segment_width);

// ALL ZEROs again

The first printMatrix() produces the correct output; once I unmap the buffer and remap it, segment_data is all 0s again (its initial value). I suspect I'm using an incorrect flag somewhere, but I can't figure out where.

3 Answers:

Answer 0 (score: 2)

query_vector = (char*) clEnqueueMapBuffer(commands, d_a, CL_TRUE, CL_MAP_READ, 0, sizeof(char) * queryVector_size, 0, NULL, NULL, &err);
checkErr(err, "Write A");

segment_data = (char*) clEnqueueMapBuffer(commands, d_b, CL_TRUE, CL_MAP_READ, 0, sizeof(char) * segment_size, 0, NULL, NULL, &err);
checkErr(err, "Write B");

The buffers are mapped with CL_MAP_READ, but you then write into them. Unlike the flags used when creating a buffer, the map flags describe the host's view of the memory rather than the device's, so these buffers should be mapped with the CL_MAP_WRITE flag; otherwise any changes made while mapped will simply be discarded when the buffer is unmapped.
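
For example, a sketch of the fix for the two map calls above (reusing the buffer names, sizes and the checkErr helper from the question) would request host write access instead:

// Map for host *writing*, since the host initialises these buffers next.
query_vector = (char*) clEnqueueMapBuffer(commands, d_a, CL_TRUE, CL_MAP_WRITE, 0, sizeof(char) * queryVector_size, 0, NULL, NULL, &err);
checkErr(err, "Map A for write");

segment_data = (char*) clEnqueueMapBuffer(commands, d_b, CL_TRUE, CL_MAP_WRITE, 0, sizeof(char) * segment_size, 0, NULL, NULL, &err);
checkErr(err, "Map B for write");

By the same reasoning, the result buffer that the host only reads back afterwards would be mapped with CL_MAP_READ rather than CL_MAP_WRITE.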

Answer 1 (score: 1)

From the OpenCL 1.2 specification:

5.4.3 Accessing mapped regions of a memory object

...

If a memory object is currently mapped for reading, the application must ensure that the memory object is unmapped before any enqueued kernels or commands that write to this memory object or any of its associated memory objects (sub-buffer or 1D image buffer objects) or its parent object (if the memory object is a sub-buffer or 1D image buffer object) begin execution; otherwise the behavior is undefined.

So you need to map the results buffer after enqueueing the kernel. Likewise, you need to unmap the input buffers before enqueueing the kernel. The timeline for mapping/unmapping the buffers should look roughly like this (a condensed code sketch follows the list):

Create input buffers
Create output buffers
Map input buffers
Write input data
Unmap input buffers
Enqueue kernel
Map output buffers
Read output data
Unmap output buffers
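
As a rough sketch, that ordering applied to the question's own buffers might look like the following (names are taken from the question; the flags used to create d_c are an assumption, since its creation is not shown, and error checking is omitted for brevity):

// Create input buffers (device read-only) and the output buffer (device write-only).
d_a = clCreateBuffer(context, CL_MEM_READ_ONLY  | CL_MEM_ALLOC_HOST_PTR, sizeof(char) * queryVector_size, NULL, &err);
d_b = clCreateBuffer(context, CL_MEM_READ_ONLY  | CL_MEM_ALLOC_HOST_PTR, sizeof(char) * segment_size, NULL, &err);
d_c = clCreateBuffer(context, CL_MEM_WRITE_ONLY | CL_MEM_ALLOC_HOST_PTR, sizeof(char) * result_size, NULL, &err);

// Map input buffers for host writing, fill them, then unmap before the kernel runs.
query_vector = (char*) clEnqueueMapBuffer(commands, d_a, CL_TRUE, CL_MAP_WRITE, 0, sizeof(char) * queryVector_size, 0, NULL, NULL, &err);
segment_data = (char*) clEnqueueMapBuffer(commands, d_b, CL_TRUE, CL_MAP_WRITE, 0, sizeof(char) * segment_size, 0, NULL, NULL, &err);
/* ... initialise query_vector and segment_data ... */
err = clEnqueueUnmapMemObject(commands, d_a, query_vector, 0, NULL, NULL);
err = clEnqueueUnmapMemObject(commands, d_b, segment_data, 0, NULL, NULL);

// Enqueue the kernel.
err = clEnqueueNDRangeKernel(commands, ko_smat, 2, NULL, globalWorkItems, localWorkItems, 0, NULL, NULL);

// Map the output buffer for host reading, consume the results, then unmap it.
result = (char*) clEnqueueMapBuffer(commands, d_c, CL_TRUE, CL_MAP_READ, 0, sizeof(char) * result_size, 0, NULL, NULL, &err);
printMatrix(result, result_row, result_col);
err = clEnqueueUnmapMemObject(commands, d_c, result, 0, NULL, NULL);

On an in-order command queue, the blocking map (CL_TRUE) after the kernel launch should not return until the kernel has finished writing d_c, so the explicit clFinish from the original code is not strictly needed at that point.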

Answer 2 (score: 0)

Apparently the best way to speed up the code is to use mapped buffers. You can create the buffers with CL_MEM_ALLOC_HOST_PTR, which essentially takes the load off the CPU by letting the transfer happen via DMA.

Here is an example of using mapped buffers:

// pointer to hold the result
int * host_ptr = malloc(size * sizeof(int));

d_mem = clCreateBuffer(context,CL_MEM_READ_WRITE|CL_MEM_ALLOC_HOST_PTR,
                       size*sizeof(cl_int), NULL, &ret);

int * map_ptr = clEnqueueMapBuffer(command_queue,d_mem,CL_TRUE,CL_MAP_WRITE,
                                   0,size*sizeof(int),0,NULL,NULL,&ret);
// initialize data
for (i=0; i<size;i++) {
  map_ptr[i] = i;
}

ret = clEnqueueUnmapMemObject(command_queue,d_mem,map_ptr,0,NULL,NULL); 

//Set OpenCL Kernel Parameters
ret = clSetKernelArg(kernel, 0, sizeof(cl_mem), (void *)&d_mem);

size_t global_work[1]  = { size };
//Execute OpenCL Kernel
ret = clEnqueueNDRangeKernel(command_queue, kernel, 1, NULL, 
                             global_work, NULL, 0, 0, NULL);

map_ptr = clEnqueueMapBuffer(command_queue,d_mem,CL_TRUE,CL_MAP_READ,
                             0,size*sizeof(int),0,NULL,NULL,&ret);
// copy the data to result array 
for (i=0; i<size;i++){
  host_ptr[i] = map_ptr[i];
} 

ret = clEnqueueUnmapMemObject(command_queue,d_mem,map_ptr,0,NULL,NULL);        

// cl finish etc   

Taken from this post.