How can I use a Python multiprocessing queue to access the GPU (through PyOpenCL)?

Asked: 2015-04-13 18:46:24

Tags: python queue multiprocessing pyopencl

My code takes a long time to run, so I have been investigating Python's multiprocessing library to speed things up. My code also has a few steps that use the GPU via PyOpenCL. The problem is that if I set multiple processes running at once, they all eventually try to use the GPU at the same time, and that often results in one or more of the processes throwing an exception and quitting.

To work around this, I stagger the start of each process so they are less likely to collide with one another:

import multiprocessing
import time

process_list = []
num_procs = 4

# Break the data into chunks so each process gets its own chunk of the data
data_chunks = chunks(data, num_procs)
for chunk in data_chunks:
    if len(chunk) == 0:
        continue
    # Instantiate the process
    p = multiprocessing.Process(target=test, args=(arg1, arg2))
    # Stick the process in a list so that it remains accessible
    process_list.append(p)

# Start the processes, sleeping between starts to stagger them
for j, process in enumerate(process_list, start=1):
    print('\nStarting process %i' % j)
    process.start()
    time.sleep(5)

for process in process_list:
    process.join()

I also wrap the function that calls the GPU in a try/except retry loop, so that if two processes do try to access it at the same time, the one that doesn't get access waits a couple of seconds and tries again:

wait = 2
n = 0
while True:
    try:
        gpu_out = GPU_Obj.GPU_fn(params)
    except Exception:
        print('\nWaiting for GPU memory...')
        time.sleep(wait)
        n += 1
        if n == 5:
            raise Exception('Tried and failed %i times to allocate memory for opencl kernel.' % n)
        continue
    break

This workaround is very clunky, and even though it works most of the time, processes occasionally still throw an exception. I feel like multiprocessing.Queue or something similar should offer a more efficient/elegant solution, but I'm not sure how to integrate it with PyOpenCL for GPU access.
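For reference, a minimal sketch of that queue-based idea: a single dedicated worker process owns the GPU and everything else just enqueues work, so contention never arises. Here gpu_fn, the sentinel protocol, and the sample work items are hypothetical stand-ins for GPU_Obj.GPU_fn and the real data chunks:

import multiprocessing

SENTINEL = None  # enqueued last to tell the worker to shut down

def gpu_fn(task):
    # Hypothetical stand-in for the real PyOpenCL call, e.g. GPU_Obj.GPU_fn(params)
    return task * 2

def gpu_worker(task_queue, result_queue):
    # The only process that ever touches the GPU, so no contention can occur.
    # The PyOpenCL context would be created here, once, inside this process.
    while True:
        task = task_queue.get()
        if task is SENTINEL:
            break
        result_queue.put(gpu_fn(task))

if __name__ == '__main__':
    tasks = multiprocessing.Queue()
    results = multiprocessing.Queue()
    worker = multiprocessing.Process(target=gpu_worker, args=(tasks, results))
    worker.start()

    work_items = [1, 2, 3, 4]  # hypothetical stand-ins for the data chunks
    for item in work_items:
        tasks.put(item)
    tasks.put(SENTINEL)

    gpu_outputs = [results.get() for _ in work_items]
    worker.join()
    print(gpu_outputs)  # -> [2, 4, 6, 8]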

1 Answer:

Answer 0 (score: 4)

It sounds like you could use a multiprocessing.Lock to synchronize access to the GPU:

data_chunks = chunks(data, num_procs)
lock = multiprocessing.Lock()
for chunk in data_chunks:
    if len(chunk) == 0:
        continue
    # Instantiate the process, passing the shared lock along
    p = multiprocessing.Process(target=test, args=(arg1, arg2, lock))
    ...

Then, inside test, where you access the GPU:

with lock:  # Only one process will be allowed in this block at a time.
    gpu_out = GPU_Obj.GPU_fn(params)
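For concreteness, a sketch of how test might look with the lock threaded through it; the function body around the GPU call is an assumption, not code from the question:

def test(arg1, arg2, lock):
    # ... CPU-bound preprocessing runs fully in parallel ...
    with lock:
        # Only one process at a time reaches the PyOpenCL call,
        # so the GPU never sees concurrent allocations
        gpu_out = GPU_Obj.GPU_fn(params)
    # ... CPU-bound postprocessing, again in parallel ...

Note that only the GPU call itself is serialized; the CPU-bound work in each process still runs concurrently, and the retry loop from the question becomes unnecessary.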

Edit:

To use a Pool, you could do it like this:

# At global scope
lock = None

def init(_lock):
    global lock
    lock = _lock

data_chunks = chunks(data, num_procs)
lock = multiprocessing.Lock()
for chunk in data_chunks:
    if len(chunk) == 0:
        continue
    # Instantiate the pool; init() runs once in each worker,
    # storing the inherited lock in a global
    p = multiprocessing.Pool(initializer=init, initargs=(lock,))
    p.apply(test, args=(arg1, arg2))
    ...
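The initializer/initargs step is needed because an ordinary multiprocessing.Lock cannot be pickled and shipped to pool workers through apply's argument list; passing it at pool-creation time lets each worker inherit it, and test then reads it from the global lock.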

Or:

data_chunks = chunks(data, num_procs)
m = multiprocessing.Manager()
lock = m.Lock()
for chunk in data_chunks:
    if len(chunk) == 0:
        continue
    # Instantiate the pool; the Manager lock can be passed as a normal argument
    p = multiprocessing.Pool()
    p.apply(test, args=(arg1, arg2, lock))
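This second version works without an initializer because Manager().Lock() returns a picklable proxy, at the cost of routing every acquire/release through the manager process. One caveat: Pool.apply blocks until the task finishes, so creating a pool inside the loop runs the chunks sequentially. A sketch of a variant that creates the pool once and dispatches all chunks concurrently, reusing the question's hypothetical test, arg1, and arg2:

m = multiprocessing.Manager()
lock = m.Lock()
pool = multiprocessing.Pool()
# Submit every non-empty chunk without blocking, then wait for all of them
async_results = [pool.apply_async(test, args=(arg1, arg2, lock))
                 for chunk in chunks(data, num_procs) if len(chunk) > 0]
pool.close()
pool.join()
outputs = [r.get() for r in async_results]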