Numba functions and pickle

Time: 2018-09-29 15:03:59

Tags: python numba

I am having trouble running a recursive numba function in parallel with joblib. When I apply numba's jit to a recursive function and then try to use joblib on it, I get an error (copied at the end).

Can you think of any workaround? The only one I can come up with is rewriting the function without recursion. I would normally report this on GitHub, but I don't know whose issue it is, cloudpickle's or numba's. What do you think?

Thanks!

This code reproduces the problem:

from joblib import Parallel, delayed
from numba import njit, int64

@njit(int64(int64))
def df(n):
    # Recursive "double factorial": n * (n - 2) * (n - 4) * ...
    if n <= 0:
        return 1
    else:
        return n * df(n - 2)

# Pickling df to send it to the worker processes triggers the error below
Parallel(n_jobs=2, verbose=1)(delayed(df)(2) for _ in range(2))
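
Below is a minimal sketch of the recursion-free rewrite mentioned above (an illustrative example, assuming df is meant to compute the double factorial n * (n - 2) * (n - 4) * ...; df_iter is just a placeholder name). Since the compiled function no longer calls itself, cloudpickle should be able to serialize it without hitting the recursion limit:

from joblib import Parallel, delayed
from numba import njit, int64

@njit(int64(int64))
def df_iter(n):
    # Same result as the recursive df, computed with a plain loop,
    # so the compiled dispatcher never references itself.
    result = 1
    while n > 0:
        result *= n
        n -= 2
    return result

# This should pickle and dispatch to the workers without the RecursionError.
Parallel(n_jobs=2, verbose=1)(delayed(df_iter)(2) for _ in range(2))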

Error message (many lines at the start omitted):

  File "/home/.../anaconda3/envs/test_bug/lib/python3.7/pickle.py", line 504, in save                                       
    f(self, obj) # Call unbound method with explicit self                                                                          
  File "/home/.../anaconda3/envs/test_bug/lib/python3.7/pickle.py", line 856, in save_dict                                  
    self._batch_setitems(obj.items())                                                                                              
  File "/home/.../anaconda3/envs/test_bug/lib/python3.7/pickle.py", line 882, in _batch_setitems                            
    save(v)                                                                                                                        
  File "/home/.../anaconda3/envs/test_bug/lib/python3.7/pickle.py", line 524, in save                                       
    rv = reduce(self.proto)                                                                                                        
  File "/home/.../anaconda3/envs/test_bug/lib/python3.7/site-packages/numba/dispatcher.py", line 585, in __reduce__         
    globs = self._compiler.get_globals_for_reduction()                                                                             
  File "/home/.../anaconda3/envs/test_bug/lib/python3.7/site-packages/numba/dispatcher.py", line 89, in get_globals_for_redu
ction                                                                                                                              
    return serialize._get_function_globals_for_reduction(self.py_func)                                                             
  File "/home/.../anaconda3/envs/test_bug/lib/python3.7/site-packages/numba/serialize.py", line 55, in _get_function_globals
_for_reduction                                                                                                                     
    func_id = bytecode.FunctionIdentity.from_function(func)                                                                        
  File "/home/.../anaconda3/envs/test_bug/lib/python3.7/site-packages/numba/bytecode.py", line 291, in from_function        
    func = get_function_object(pyfunc)
RecursionError: maximum recursion depth exceeded 



During handling of the above exception, another exception occurred:                                                                

Traceback (most recent call last):                                                                                                 
  File "/home/.../anaconda3/envs/test_bug/lib/python3.7/site-packages/joblib/externals/loky/backend/queues.py", line 151, in
 _feed                                                                                                                             
    obj, reducers=reducers)                                                                                                        
  File "/home/.../anaconda3/envs/test_bug/lib/python3.7/site-packages/joblib/externals/loky/backend/reduction.py", line 145,
 in dumps                                                                                                                          
    p.dump(obj)                                                                                                                    
  File "/home/.../anaconda3/envs/test_bug/lib/python3.7/site-packages/joblib/parallel.py", line 290, in __getstate__        
    for func, args, kwargs in self.items]                                                                                          
  File "/home/.../anaconda3/envs/test_bug/lib/python3.7/site-packages/joblib/parallel.py", line 290, in <listcomp>          
    for func, args, kwargs in self.items]                                                                                          
  File "/home/.../anaconda3/envs/test_bug/lib/python3.7/site-packages/joblib/parallel.py", line 278, in _wrap_non_picklable_
objects                                                                                                                            
    wrapped_obj = CloudpickledObjectWrapper(obj)                                                                                   
  File "/home/.../anaconda3/envs/test_bug/lib/python3.7/site-packages/joblib/parallel.py", line 208, in __init__            
    self.pickled_obj = dumps(obj)                                                                                                  
  File "/home/.../anaconda3/envs/test_bug/lib/python3.7/site-packages/joblib/externals/cloudpickle/cloudpickle.py", line 918
, in dumps                                                                                                                         
    cp.dump(obj)                                                                                                                   
  File "/home/.../anaconda3/envs/test_bug/lib/python3.7/site-packages/joblib/externals/cloudpickle/cloudpickle.py", line 272
, in dump                                                                                                                          
    raise pickle.PicklingError(msg)                                                                                                
_pickle.PicklingError: Could not pickle object as excessively deep recursion required.                                             
"""                                                                                                                                

The above exception was the direct cause of the following exception:                                                               

Traceback (most recent call last):                                                                                                 
  File "<stdin>", line 1, in <module>                                                                                              
  File "/home/.../anaconda3/envs/test_bug/lib/python3.7/site-packages/joblib/parallel.py", line 996, in __call__            
    self.retrieve()                                                                                                                
  File "/home/.../anaconda3/envs/test_bug/lib/python3.7/site-packages/joblib/parallel.py", line 899, in retrieve            
    self._output.extend(job.get(timeout=self.timeout))                                                                             
  File "/home/.../anaconda3/envs/test_bug/lib/python3.7/site-packages/joblib/_parallel_backends.py", line 517, in wrap_futur
e_result                                                                                                                           
    return future.result(timeout=timeout)                                                                                          
  File "/home/.../anaconda3/envs/test_bug/lib/python3.7/concurrent/futures/_base.py", line 432, in result                   
    return self.__get_result()                                                                                                     
  File "/home/.../anaconda3/envs/test_bug/lib/python3.7/concurrent/futures/_base.py", line 384, in __get_result             
    raise self._exception                                                                                                          
_pickle.PicklingError: Could not pickle the task to send it to the workers

1 answer:

Answer 0 (score: 0)

This is a bug that will be fixed in Numba, https://github.com/numba/numba/issues/3370 (in version 0.41, I guess).
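
As a quick check (not from the original answer), printing the installed Numba version shows whether it is at or past the release the answer expects the fix in:

import numba

# Per the answer above, the fix is expected around Numba 0.41; older
# versions are expected to reproduce the PicklingError shown in the question.
print(numba.__version__)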