I am comparing several Python modules/extensions or approaches for achieving the following:
import numpy as np

def fdtd(input_grid, steps):
    grid = input_grid.copy()
    old_grid = np.zeros_like(input_grid)
    previous_grid = np.zeros_like(input_grid)

    l_x = grid.shape[0]
    l_y = grid.shape[1]

    for i in range(steps):
        np.copyto(previous_grid, old_grid)
        np.copyto(old_grid, grid)

        for x in range(l_x):
            for y in range(l_y):
                grid[x,y] = 0.0
                if 0 < x+1 < l_x:
                    grid[x,y] += old_grid[x+1,y]
                if 0 < x-1 < l_x:
                    grid[x,y] += old_grid[x-1,y]
                if 0 < y+1 < l_y:
                    grid[x,y] += old_grid[x,y+1]
                if 0 < y-1 < l_y:
                    grid[x,y] += old_grid[x,y-1]

                grid[x,y] /= 2.0
                grid[x,y] -= previous_grid[x,y]

    return grid
This function is a very basic implementation of the finite-difference time-domain (FDTD) method. I have already implemented this function in several ways.
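For reference, a minimal way to exercise the function could look like the snippet below. The 1000x1000 grid and 20 steps mirror the benchmark in the Parakeet answer further down; the point-source initialization is just an illustrative assumption.

import numpy as np

initial = np.zeros((1000, 1000))
initial[500, 500] = 1.0           # illustrative point source (assumption)
result = fdtd(initial, steps=20)  # same grid size and step count as the benchmark below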
Now I would like to compare the performance against NumbaPro CUDA. This is my first time writing CUDA code, and I came up with the code below.
from numbapro import cuda, float32, int16
import numpy as np

@cuda.jit(argtypes=(float32[:,:], float32[:,:], float32[:,:], int16, int16, int16))
def kernel(grid, old_grid, previous_grid, steps, l_x, l_y):
    x,y = cuda.grid(2)

    for i in range(steps):
        previous_grid[x,y] = old_grid[x,y]
        old_grid[x,y] = grid[x,y]

    for i in range(steps):
        grid[x,y] = 0.0
        if 0 < x+1 and x+1 < l_x:
            grid[x,y] += old_grid[x+1,y]
        if 0 < x-1 and x-1 < l_x:
            grid[x,y] += old_grid[x-1,y]
        if 0 < y+1 and y+1 < l_x:
            grid[x,y] += old_grid[x,y+1]
        if 0 < y-1 and y-1 < l_x:
            grid[x,y] += old_grid[x,y-1]

        grid[x,y] /= 2.0
        grid[x,y] -= previous_grid[x,y]

def fdtd(input_grid, steps):
    grid = cuda.to_device(input_grid)
    old_grid = cuda.to_device(np.zeros_like(input_grid))
    previous_grid = cuda.to_device(np.zeros_like(input_grid))

    l_x = input_grid.shape[0]
    l_y = input_grid.shape[1]

    kernel[(16,16),(32,8)](grid, old_grid, previous_grid, steps, l_x, l_y)

    return grid.copy_to_host()
Unfortunately, I get the following error:
File ".../fdtd_numbapro.py", line 98, in fdtd
return grid.copy_to_host()
File "/opt/anaconda1anaconda2anaconda3/lib/python2.7/site-packages/numbapro/cudadrv/devicearray.py", line 142, in copy_to_host
File "/opt/anaconda1anaconda2anaconda3/lib/python2.7/site-packages/numbapro/cudadrv/driver.py", line 1702, in device_to_host
File "/opt/anaconda1anaconda2anaconda3/lib/python2.7/site-packages/numbapro/cudadrv/driver.py", line 772, in check_error
numbapro.cudadrv.error.CudaDriverError: CUDA_ERROR_LAUNCH_FAILED
Failed to copy memory D->H
I have also tried grid.to_host(), but neither works. CUDA itself is definitely working on this system with NumbaPro.
Answer 0 (score: 3)
The problem was solved by the user. I am cross-referencing the discussion of this issue on the Anaconda mailing list: https://groups.google.com/a/continuum.io/forum/#!searchin/anaconda/fdtd/anaconda/VgiN4h37UrA/18tAc60EIkcJ
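The actual fix lives only in the linked thread, but two things about the posted code are worth checking in any case: the launch configuration of (16,16) blocks of (32,8) threads always spans 512x128 threads regardless of the array shape, and the kernel has no bounds check, so for arrays smaller than that span some threads index outside the array; also, the argtypes declare float32/int16 while NumPy arrays default to float64. The following is only an illustrative sketch of those two general points (a bounds-guarded kernel fed with dtype-matched data), not the resolution reported on the mailing list:

from numbapro import cuda, float32, int16
import numpy as np

# Minimal bounds-guarded kernel: every thread checks that its (x, y) index
# actually lies inside the array before touching memory.
@cuda.jit(argtypes=(float32[:,:], int16, int16))
def scale(a, l_x, l_y):
    x, y = cuda.grid(2)
    if x >= l_x or y >= l_y:
        return          # threads past the array edge do nothing
    a[x, y] *= 2.0

data = np.arange(12, dtype=np.float32).reshape(3, 4)  # float32 to match argtypes
d_data = cuda.to_device(data)
scale[(1, 1), (32, 8)](d_data, data.shape[0], data.shape[1])
print(d_data.copy_to_host())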
Answer 1 (score: 1)
I made a few small modifications to the original code to get it running in Parakeet:
1) Replaced compound comparisons such as "0 < x+1 < l_x" with two separate comparisons ("0 < x+1 and x+1 < l_x").
2) Replaced np.copyto with explicit indexed assignment (previous_grid[:,:] = old_grid). A sketch of the function with both modifications applied appears after the timings below.

After that, I compared the Parakeet runtimes of the C, OpenMP, and CUDA backends against the original Python time and Numba's autojit on a 1000x1000 grid with 20 steps. Since there is very little exploitable parallelism in the code, the parallel backends actually do worse than the sequential one. This is mostly due to differences in how Parakeet runs its loop optimizations for each backend, along with some extra overhead related to CUDA memory transfers and starting OpenMP thread groups. I am not sure why Numba's autojit is so slow here; I believe it would be faster with type annotations or with NumbaPro.

Parakeet (backend = c) cold: fdtd : 0.5590s
Parakeet (backend = c) warm: fdtd : 0.1260s
Parakeet (backend = openmp) cold: fdtd : 0.4317s
Parakeet (backend = openmp) warm: fdtd : 0.1693s
Parakeet (backend = cuda) cold: fdtd : 2.6357s
Parakeet (backend = cuda) warm: fdtd : 0.2455s
Numba (autojit) cold: 672.3666s
Numba (autojit) warm: 657.8858s
Python: 203.3907s
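For completeness, applying the two modifications above to the original function gives roughly the following. This is my reconstruction of the Parakeet version, not the answerer's exact code, and parakeet.jit is assumed to be the decorator used to compile it.

from parakeet import jit   # Parakeet's JIT decorator (assumed import path)
import numpy as np

@jit
def fdtd(input_grid, steps):
    grid = input_grid.copy()
    old_grid = np.zeros_like(input_grid)
    previous_grid = np.zeros_like(input_grid)
    l_x = grid.shape[0]
    l_y = grid.shape[1]
    for i in range(steps):
        # modification 2: explicit indexed assignment instead of np.copyto
        previous_grid[:, :] = old_grid
        old_grid[:, :] = grid
        for x in range(l_x):
            for y in range(l_y):
                grid[x, y] = 0.0
                # modification 1: compound comparisons split into two checks
                if 0 < x+1 and x+1 < l_x:
                    grid[x, y] += old_grid[x+1, y]
                if 0 < x-1 and x-1 < l_x:
                    grid[x, y] += old_grid[x-1, y]
                if 0 < y+1 and y+1 < l_y:
                    grid[x, y] += old_grid[x, y+1]
                if 0 < y-1 and y-1 < l_y:
                    grid[x, y] += old_grid[x, y-1]
                grid[x, y] /= 2.0
                grid[x, y] -= previous_grid[x, y]
    return grid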