Unpicklable parallel Joblib

Time: 2019-05-24 13:54:20

Tags: python parallel-processing multiprocessing

I have a zip file containing many .dat files. To each of them I want to apply a function that outputs two results, and I want to save both results, plus the time the call took, in three lists. Order matters. This is the code that works without parallel computing:

import time
import zipfile

import numpy as np

result_1 = []
result_2 = []
runtimes = []
args_function = 'some args' # Always the same

with zipfile.ZipFile(zip_file, "r") as zip_ref:
    # Process the archive members in a deterministic order.
    for name in sorted(zip_ref.namelist()):
        data = np.loadtxt(zip_ref.open(name))
        start_time = time.time()
        a, b = function(data, args_function)
        runtimes.append(time.time() - start_time)
        result_1.append(a)
        result_2.append(b)

This seemed very awkward to me, so I did this:

from joblib import Parallel, delayed

result_1 = []
result_2 = []
runtimes = []
args_function = 'some args' # Always the same

def compute_paralel(name, zip_ref):
    data = np.loadtxt(zip_ref.open(name))
    start_time = time.time()
    a, b = function(data, args_function)
    runtimes.append(time.time() - start_time)
    result_1.append(a)
    result_2.append(b)

with zipfile.ZipFile(zip_file, "r") as zip_ref:
    Parallel(n_jobs=-1)(delayed(compute_paralel)(name, zip_ref) for name in sorted(zip_ref.namelist()))

But it raises the following error: pickle.PicklingError: Could not pickle the task to send it to the workers. So I'm not really sure how to proceed... any ideas?
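
The error comes from zip_ref: an open zipfile.ZipFile wraps a file handle, which cannot be pickled, and joblib must pickle every argument it ships to a worker process. The worker also appends to module-level lists, which would not propagate back from child processes in any case. Below is a minimal sketch of one possible restructuring, assuming function is defined at module level and zip_file, function and args_function are as above; compute_one is a hypothetical helper name. Each worker receives only picklable arguments (the archive path and a member name), opens its own handle, and returns a tuple; Parallel returns results in input order, so the three lists stay aligned.

from joblib import Parallel, delayed
import time
import zipfile

import numpy as np

def compute_one(zip_file, name, args_function):
    # Only picklable arguments (plain strings) cross the process
    # boundary; each worker opens its own ZipFile handle.
    with zipfile.ZipFile(zip_file, "r") as zip_ref:
        data = np.loadtxt(zip_ref.open(name))
    start_time = time.time()
    a, b = function(data, args_function)
    return a, b, time.time() - start_time

# Collect the member names up front, in the parent process.
with zipfile.ZipFile(zip_file, "r") as zip_ref:
    names = sorted(zip_ref.namelist())

out = Parallel(n_jobs=-1)(
    delayed(compute_one)(zip_file, name, args_function) for name in names)

# Parallel preserves the input order, so transposing the result
# tuples yields the three aligned lists.
result_1, result_2, runtimes = map(list, zip(*out))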

0 Answers:

No answers