I want to know how to create an OpenCL test that runs on multiple devices. Suppose I want to create an OpenCL program that computes the expression A*B + C*D; this is my idea.
Please help me, thanks.
Answer (score: 2)
OpenCL is a very explicit API. It requires you to specify a particular device when you create a context, and a particular context when you create a queue. So, quite literally, accomplishing your task is as simple as:

//This is going to be pseudocode; I'm not going to look up the literal syntax for this stuff
//It is going to closely resemble how you'd write this code in C++, though
std::vector<_type> perform_tasks(cl_device_id ab_device, cl_device_id cd_device, cl_device_id n_m_device) {
    cl_context ab_context = clCreateContext(ab_device);
    cl_context cd_context = clCreateContext(cd_device);
    cl_context n_m_context = clCreateContext(n_m_device);
    cl_command_queue ab_queue = clCreateQueue(ab_context, ab_device);
    cl_command_queue cd_queue = clCreateQueue(cd_context, cd_device);
    cl_command_queue n_m_queue = clCreateQueue(n_m_context, n_m_device);
    cl_kernel ab_kernel = get_ab_kernel(ab_context, ab_device);
    cl_kernel cd_kernel = get_cd_kernel(cd_context, cd_device);
    cl_kernel n_m_kernel = get_n_m_kernel(n_m_context, n_m_device);
    set_args_for_ab(ab_kernel);
    set_args_for_cd(cd_kernel);
    set_args_for_n_m(n_m_kernel);
    cl_event events[2];
    clEnqueueKernel(ab_queue, ab_kernel, &events[0]);
    clEnqueueKernel(cd_queue, cd_kernel, &events[1]);
    //Here, I'm assuming that the n_m kernel depends on the results of ab and cd, and thus
    //must be sequenced afterwards.
    clWaitForEvents(2, events);
    copy_ab_and_cd_data_into_n_m_buffers();
    cl_event n_m_event;
    clEnqueueKernel(n_m_queue, n_m_kernel, &n_m_event);
    clWaitForEvents(1, &n_m_event);
    return copy_n_m_data_to_host();
}
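To make the dataflow of that pipeline concrete, here is a host-only C++ model of it (a sketch with illustrative names, not real OpenCL calls): the a*b and c*d products land in separate intermediate buffers, which must then be brought together before the final stage can run, which is exactly what copy_ab_and_cd_data_into_n_m_buffers() stands for.

```cpp
#include <cstddef>
#include <vector>

// Host-only model of the three-device pipeline above. Each loop stands in
// for one device's kernel; _type is arbitrarily chosen as float here.
std::vector<float> three_stage_pipeline(const std::vector<float>& a,
                                        const std::vector<float>& b,
                                        const std::vector<float>& c,
                                        const std::vector<float>& d) {
    std::size_t n = a.size();
    std::vector<float> ab(n), cd(n), out(n);
    for (std::size_t i = 0; i < n; ++i) ab[i] = a[i] * b[i]; // "ab_device"
    for (std::size_t i = 0; i < n; ++i) cd[i] = c[i] * d[i]; // "cd_device"
    // copy_ab_and_cd_data_into_n_m_buffers() would happen here: on real
    // hardware that is a device-to-host-to-device round trip for BOTH
    // intermediate buffers before the third device can start.
    for (std::size_t i = 0; i < n; ++i) out[i] = ab[i] + cd[i]; // "n_m_device"
    return out;
}
```

Note that the intermediates exist only to be copied and summed; that copy is the overhead discussed below.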
But there's a bigger question to address, one your question seems to overlook: why?
What kind of performance gain do you expect from this logic, compared to simply doing something like the following on a single device?
kernel void ab_cd(global _type* a, global _type* b, global _type* c, global _type* d, global _type* output) {
    long id = get_global_id(0);
    output[id] = a[id] * b[id] + c[id] * d[id];
}
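For reference, the fused kernel computes, element by element, output[i] = a[i]*b[i] + c[i]*d[i]. A plain C++ host-side equivalent (float stands in for _type) that you could validate device results against:

```cpp
#include <cstddef>
#include <vector>

// Host reference of the fused ab_cd kernel: one pass, no intermediate
// buffers, no cross-device copies.
std::vector<float> ab_cd_reference(const std::vector<float>& a,
                                   const std::vector<float>& b,
                                   const std::vector<float>& c,
                                   const std::vector<float>& d) {
    std::vector<float> out(a.size());
    for (std::size_t i = 0; i < a.size(); ++i)
        out[i] = a[i] * b[i] + c[i] * d[i];
    return out;
}
```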
With the program logic you've proposed, you incur unavoidable overhead just moving data between the different devices (this happens inside copy_ab_and_cd_data_into_n_m_buffers() in the pseudocode I described). If you're committed to using multiple devices for this kind of program, it's still simpler (and probably more performant!) to write something like this instead:
//Again; using pseudocode. Again, gonna look like C++ code.
cl_event perform_tasks(cl_device_id device, cl_context* context, cl_command_queue* queue, cl_kernel* kernel) {
    *context = clCreateContext(device);
    *queue = clCreateQueue(*context, device);
    *kernel = get_kernel(*context, device);
    cl_event event;
    clEnqueueKernel(*queue, *kernel, &event);
    return event;
}
int main() {
    std::vector<cl_device_id> device_ids = get_device_ids();
    std::vector<_type> results;
    std::vector<cl_context> contexts(device_ids.size());
    std::vector<cl_command_queue> queues(device_ids.size());
    std::vector<cl_kernel> kernels(device_ids.size());
    std::vector<cl_event> events;
    for (size_t i = 0; i < device_ids.size(); i++) {
        events.emplace_back(perform_tasks(device_ids[i], &contexts[i], &queues[i], &kernels[i]));
    }
    clWaitForEvents(events.size(), events.data());
    for (cl_command_queue const& queue : queues) {
        std::vector<_type> result = read_results_from_queue(queue);
        results.insert(results.end(), result.begin(), result.end());
    }
    //results now contains the results of all executions
    return 0;
}
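In that version, every device runs the same fused kernel on its own slice of the data. One detail the pseudocode glosses over is how to split the index range among the devices; a minimal sketch of that partitioning (the helper name split_work is made up for illustration):

```cpp
#include <cstddef>
#include <utility>
#include <vector>

// Split n work-items into num_devices contiguous (offset, count) slices,
// spreading any remainder across the first few devices. Each slice would
// become one device's global work size (and offset) for the fused kernel.
std::vector<std::pair<std::size_t, std::size_t>>
split_work(std::size_t n, std::size_t num_devices) {
    std::vector<std::pair<std::size_t, std::size_t>> slices;
    std::size_t base = n / num_devices;
    std::size_t rem = n % num_devices;
    std::size_t offset = 0;
    for (std::size_t i = 0; i < num_devices; ++i) {
        std::size_t count = base + (i < rem ? 1 : 0);
        slices.emplace_back(offset, count);
        offset += count;
    }
    return slices;
}
```

Because the slices are independent, no device ever needs another device's output, so the cross-device copy from the first design disappears entirely.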
Unless you're working with FPGAs, or with a particularly exotic workload where it's absolutely essential that different devices do different work, you're probably just creating more work for yourself than you need to.