I am trying to run this code, the last part of the CUDA Python example.
The given example works fine, but when I set a larger size, I get this error:
C:\Users\Lichar\Nextcloud\python\Fractal>python3 CUDA_Example.py
Traceback (most recent call last):
File "CUDA_Example.py", line 57, in <module>
d_image.to_host()
File "C:\Python35\lib\site-packages\numba\cuda\cudadrv\devicearray.py", line 217, in to_host
self.copy_to_host(self.__writeback, stream=stream)
File "C:\Python35\lib\site-packages\numba\cuda\cudadrv\devicearray.py", line 200, in copy_to_host
_driver.device_to_host(hostary, self, self.alloc_size, stream=stream)
File "C:\Python35\lib\site-packages\numba\cuda\cudadrv\driver.py", line 1606, in device_to_host
fn(host_pointer(dst), device_pointer(src), size, *varargs)
File "C:\Python35\lib\site-packages\numba\cuda\cudadrv\driver.py", line 288, in safe_cuda_api_call
self._check_error(fname, retcode)
File "C:\Python35\lib\site-packages\numba\cuda\cudadrv\driver.py", line 323, in _check_error
raise CudaAPIError(retcode, msg)
numba.cuda.cudadrv.driver.CudaAPIError: [719] Call to cuMemcpyDtoH results in UNKNOWN_CUDA_ERROR
An 8500 * 8500 image works fine, but a 10000 * 10000 image raises the error. I think the array I'm passing is too "big", but I can't understand why. Here are the details of my GPU:
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v9.0\extras\demo_suite>deviceQuery
deviceQuery Starting...
CUDA Device Query (Runtime API) version (CUDART static linking)
Detected 1 CUDA Capable device(s)
Device 0: "GeForce GTX 1070"
CUDA Driver Version / Runtime Version 9.1 / 9.0
CUDA Capability Major/Minor version number: 6.1
Total amount of global memory: 8192 MBytes (8589934592 bytes)
(15) Multiprocessors, (128) CUDA Cores/MP: 1920 CUDA Cores
GPU Max Clock rate: 1772 MHz (1.77 GHz)
Memory Clock rate: 4004 Mhz
Memory Bus Width: 256-bit
L2 Cache Size: 2097152 bytes
Maximum Texture Dimension Size (x,y,z) 1D=(131072), 2D=(131072, 65536), 3D=(16384, 16384, 16384)
Maximum Layered 1D Texture Size, (num) layers 1D=(32768), 2048 layers
Maximum Layered 2D Texture Size, (num) layers 2D=(32768, 32768), 2048 layers
Total amount of constant memory: 65536 bytes
Total amount of shared memory per block: 49152 bytes
Total number of registers available per block: 65536
Warp size: 32
Maximum number of threads per multiprocessor: 2048
Maximum number of threads per block: 1024
Max dimension size of a thread block (x,y,z): (1024, 1024, 64)
Max dimension size of a grid size (x,y,z): (2147483647, 65535, 65535)
Maximum memory pitch: 2147483647 bytes
Texture alignment: 512 bytes
Concurrent copy and kernel execution: Yes with 2 copy engine(s)
Run time limit on kernels: Yes
Integrated GPU sharing Host Memory: No
Support host page-locked memory mapping: Yes
Alignment requirement for Surfaces: Yes
Device has ECC support: Disabled
CUDA Device Driver Mode (TCC or WDDM): WDDM (Windows Display Driver Model)
Device supports Unified Addressing (UVA): Yes
Device PCI Domain ID / Bus ID / location ID: 0 / 1 / 0
Compute Mode:
< Default (multiple host threads can use ::cudaSetDevice() with device simultaneously) >
deviceQuery, CUDA Driver = CUDART, CUDA Driver Version = 9.1, CUDA Runtime Version = 9.0, NumDevs = 1, Device0 = GeForce GTX 1070
Result = PASS
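For what it's worth, assuming the image array is uint8 as in the example (that dtype is my assumption), a 10000 * 10000 image should only take about 100 MB, far below the 8 GB of global memory:

import numpy as np

image = np.zeros((10000, 10000), dtype=np.uint8)   # dtype assumed from the example
print(image.nbytes)            # 100000000 bytes, roughly 95 MiB
print(image.nbytes / 2**30)    # about 0.09 GiB, versus 8 GiB of global memory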
I have read this question, but I don't understand how to apply it to my case.
How should I properly configure the griddim and blockdim variables? (I guess (128, 8) is my maximum block dim.)
What is causing this error, and how can I avoid it without cutting the image into separate parts?
Thanks.
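For reference, this is roughly how I would expect the launch configuration to be computed for a 10000 * 10000 image, a minimal sketch assuming the mandel_kernel from the example and a (32, 8) block (both assumptions on my part, not my exact script):

import numpy as np
from numba import cuda

width = height = 10000
image = np.zeros((height, width), dtype=np.uint8)

blockdim = (32, 8)                                      # threads per block in (x, y)
griddim = ((width + blockdim[0] - 1) // blockdim[0],    # ceil-divide so the grid
           (height + blockdim[1] - 1) // blockdim[1])   # covers every pixel

d_image = cuda.to_device(image)
# mandel_kernel[griddim, blockdim](-2.0, 1.0, -1.0, 1.0, d_image, 1000)
d_image.copy_to_host(image)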
Update:
If I change the maximum number of iterations from 1000 to 100, there is no longer any error.
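To be clear, the iteration count I mean is the last argument of the kernel launch in the example, roughly like this (the other argument values come from the tutorial example, not my exact script):

mandel_kernel[griddim, blockdim](-2.0, 1.0, -1.0, 1.0, d_image, 100)   # 100 instead of 1000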