There are many PyOpenCL examples that do arithmetic on vectors of size 4. If I have to multiply 100 integers with another 100 integers in one go using an AMD GPU through PyOpenCL on a Mac, can someone provide and explain the code? Since the maximum vector size can be 16, I would like to know how to make the GPU do this, as it requires processing more than 16 integers in parallel.
I have an AMD FirePro D500 GPU. Does each work item (thread) execute its task independently? If so, there are 24 compute units, each with 255 work items in one dimension and [255, 255, 255] in three dimensions. Does this mean my GPU has 6120 independent work items?
Answer 0 (score: 0)
Here is a short example of element-wise multiplication of two one-dimensional integer arrays. Note that if you only plan to multiply 100 values, this will not be faster than doing it on the CPU, because copying the data to and from the GPU already incurs considerable overhead.
import pyopencl as cl
import numpy as np
#this is compiled by the GPU driver and will be executed on the GPU
kernelsource = """
__kernel void multInt( __global int* res,
__global int* a,
__global int* b){
int i = get_global_id(0);
int N = get_global_size(0); //this is the dimension given as second argument in the kernel execution
res[i] = a[i] * b[i];
}
"""
#pick the first device of the first platform; adjust the indices if this does not select the AMD GPU
device = cl.get_platforms()[0].get_devices()[0]
context = cl.Context([device])
program = cl.Program(context, kernelsource).build()
queue = cl.CommandQueue(context)
#prepare input data as numpy arrays in host memory (i.e. accessible by the CPU)
N = 100
a_local = np.array(range(N)).astype(np.int32)
b_local = (np.ones(N)*10).astype(np.int32)
#prepare result buffer in host memory
res_local = np.zeros(N).astype(np.int32)
#copy input data to GPU-memory
a_buf = cl.Buffer(context, cl.mem_flags.READ_ONLY | cl.mem_flags.COPY_HOST_PTR, hostbuf=a_local)
b_buf = cl.Buffer(context, cl.mem_flags.READ_ONLY | cl.mem_flags.COPY_HOST_PTR, hostbuf=b_local)
#prepare result buffer in GPU-memory
res_buf = cl.Buffer(context, cl.mem_flags.WRITE_ONLY, res_local.nbytes)
#execute the previously compiled kernel on the GPU with a 1D global size of N work items (one per array element)
program.multInt(queue, (N,), None, res_buf, a_buf, b_buf)
#copy the result from GPU-memory to CPU-memory
cl.enqueue_copy(queue, res_local, res_buf)
print("result: {}".format(res_local))
Regarding PyOpenCL documentation: once you understand how GPGPU programming works and the programming concepts of OpenCL, PyOpenCL itself is quite straightforward.
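Regarding the question about compute units and work-item limits: PyOpenCL exposes the OpenCL device-info queries as attributes on the device object, so you can check the numbers for your own card. A minimal sketch (the reported values depend on your hardware and driver):

import pyopencl as cl

#list every OpenCL device and the limits relevant to the question above
for platform in cl.get_platforms():
    for device in platform.get_devices():
        print(device.name)
        print("  max compute units:    {}".format(device.max_compute_units))
        print("  max work-group size:  {}".format(device.max_work_group_size))
        print("  max work-item sizes:  {}".format(device.max_work_item_sizes))
        print("  preferred int vector: {}".format(device.preferred_vector_width_int))

The work-group size only limits how many work items run together in one group on a compute unit; the global size passed at kernel launch ((N,) in the example) can be far larger, and the driver schedules as many groups as needed. So neither the vector width of 16 nor the number of work items per compute unit caps the total problem size.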