How do I run this function on my GPU? (Python)

Time: 2020-04-28 12:19:12

Tags: python optimization numba

I'd like to speed up this program by using my GPU:

import string
from numba.typed import List
from numba import jit, cuda
import time

alp = string.ascii_letters


@jit(target="gpu")  # this decorator is what triggers the TypeError below
def test():
    # print every two-letter combination of ASCII letters
    for i in alp:
        for x in alp:
            print(i + x)

def speedTest():
    start_time = time.time()
    test()
    print(time.time() - start_time)

But every time I get this error:

Traceback (most recent call last):
  File "<pyshell#15>", line 1, in <module>
    speed()
  File "F:/Script Projects#/Tester.py", line 17, in speed
    test()
  File "D:\Python\lib\site-packages\numba\cuda\dispatcher.py", line 40, in __call__
    return self.compiled(*args, **kws)
  File "D:\Python\lib\site-packages\numba\cuda\compiler.py", line 758, in __call__
    kernel = self.specialize(*args)
  File "D:\Python\lib\site-packages\numba\cuda\compiler.py", line 769, in specialize
    kernel = self.compile(argtypes)
  File "D:\Python\lib\site-packages\numba\cuda\compiler.py", line 784, in compile
    kernel = compile_kernel(self.py_func, argtypes,
  File "D:\Python\lib\site-packages\numba\core\compiler_lock.py", line 32, in _acquire_compile_lock
    return func(*args, **kwargs)
TypeError: compile_kernel() got an unexpected keyword argument 'boundscheck'

I want to replace print() with something else later, but for now this is all I need. It works in nopython mode, just not with target="gpu". Thanks for your help!
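
For reference, a minimal sketch of how a Numba CUDA kernel is typically written with the @cuda.jit decorator, operating on a NumPy array rather than Python strings (string concatenation and printing Python strings are not supported inside CUDA kernels). The kernel name add_one and the launch configuration are illustrative only, not part of the original program:

from numba import cuda
import numpy as np

@cuda.jit
def add_one(arr):
    # each thread computes its global index and updates one element
    i = cuda.grid(1)
    if i < arr.size:
        arr[i] += 1.0

data = np.arange(1024, dtype=np.float64)
d_data = cuda.to_device(data)    # copy the input array to the GPU
add_one[4, 256](d_data)          # launch 4 blocks of 256 threads each
result = d_data.copy_to_host()   # copy the result back to the CPU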

0 Answers:

No answers