How to parallelize scipy fftconvolve with joblib?

Asked: 2019-05-02 16:13:28

Tags: python scipy multiprocessing convolution joblib

So I am filtering large images with scipy's fftconvolve, and I wanted to parallelize the different filterings I am doing on a single image. For the parallelization I wanted to use joblib. However, I am puzzled by 2 results:

  • With the multiprocessing backend, the task is much slower (1.5x slower)
  • With the multithreading backend, the task is faster (25% faster)

I am surprised by these 2 results, as I was sure the convolution is CPU-bound.

Here is the code I used in a jupyter notebook to measure the runtimes:

from joblib import Parallel, delayed
import numpy as np
from scipy.signal import fftconvolve

im_size = (512, 512)
filter_size = tuple(s-1 for s in im_size)
n_filters = 3
image = np.random.rand(*im_size)
filters = [np.random.rand(*filter_size) for i in range(n_filters)]

%%timeit
s = np.sum(
    Parallel(n_jobs=n_filters, backend='multiprocessing')(
        delayed(fftconvolve)(image, f) for f in filters
    )
)

283 ms ± 12.8 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)

%%timeit
s = np.sum(
    Parallel(n_jobs=n_filters, backend='threading')(
        delayed(fftconvolve)(image, f) for f in filters
    )
)

142 ms ± 15.9 ms per loop (mean ± std. dev. of 7 runs, 10 loops each)

%%timeit
s = np.sum([fftconvolve(image, f) for f in filters])

198 ms ± 2.69 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)

I also tried other things, such as storing the image in a memmap or reducing the number of pre-dispatched jobs, but it did not change the results at all.
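For reference, a minimal sketch of those two variations, reusing image, filters, n_filters and im_size from the setup above (the file name image.mmap and the pre_dispatch value are illustrative choices, not from the original post):

import numpy as np
from joblib import Parallel, delayed
from scipy.signal import fftconvolve

# Store the image in a memory-mapped file so that workers read it from
# disk instead of receiving a pickled copy (file name is arbitrary).
image_mm = np.memmap('image.mmap', dtype=np.float64, mode='w+', shape=im_size)
image_mm[:] = image

s = np.sum(
    Parallel(n_jobs=n_filters, backend='multiprocessing',
             pre_dispatch='1 * n_jobs')(  # dispatch fewer jobs up front
        delayed(fftconvolve)(image_mm, f) for f in filters
    )
)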

Why doesn't multiprocessing speed up the computation, when multithreading does?

1 Answer:

Answer 0 (score: 2)

The problem with benchmarking parallel processing is that you have to properly account for the overheads incurred by your code in order to draw the right conclusions. There are 3 sources of overhead when using parallel processing:

  • Spawning the threads or processes: this is done with every call to Parallel, except when you rely on a managed Parallel object (the with context) or when you use the loky backend. See here for more information.

  • Importing modules in the fresh interpreters: for backends that rely on fresh processes (when the start method is not fork), all the modules need to be re-imported, which can cause overhead.

  • Communication between the processes: when using processes (i.e. not with backend=threading), you need to communicate the arrays to each worker. This communication can slow down the computation, especially for short tasks with large inputs such as fftconvolve (see the short sketch after this list).
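To get a feel for that last point, a minimal sketch: the size of the pickled input array gives a lower bound on what must be shipped to a worker for each task.

import pickle

import numpy as np

image = np.random.rand(512, 512)
payload = pickle.dumps(image, protocol=pickle.HIGHEST_PROTOCOL)
# A 512x512 float64 array takes 512*512*8 bytes, i.e. about 2.1 MB,
# and a process-based backend sends it to a worker for every task.
print(f"{len(payload) / 1e6:.2f} MB sent to a worker per task")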

If your goal is to call this function many times, you should modify your benchmark to actually remove the cost of spawning the workers for the Parallel object, either by using a managed Parallel object or by relying on this functionality of backend=loky, and to avoid the overhead due to loading the modules:

from joblib import Parallel, delayed
import numpy as np
from scipy.signal import fftconvolve

from time import time, sleep


def start_processes(im, filter, mode=None, delay=0):
    # Dummy task that mimics the signature of `fftconvolve` but does no
    # work: it is only used to warm up the pool of workers.
    sleep(delay)
    return im if im is not None else 0


def time_parallel(name, parallel, image, filters, n_rep=50):
    print(80*"=" + "\n" + name + "\n" + 80*"=")

    # Time to start the pool of workers and initialize the processes.
    # With this first call, the processes/threads are actually started
    # and further calls will not incur this overhead anymore.
    t0 = time()
    np.sum(parallel(
        delayed(start_processes)(image, f, mode='valid') for f in filters)
    )
    print(f"Pool init overhead: {(time() - t0) / 1e-3:.3f}ms")

    # Time the overhead due to the loading of the scipy module.
    # With this call, the scipy.signal module is loaded in the child
    # processes. This import can take up to 200ms for a fresh interpreter.
    # This overhead is only present for the `loky` backend. For the
    # `multiprocessing` backend, as the processes are started with `fork`,
    # they already have a loaded scipy module. For the `threading` backend
    # and the iterative run, there is no need to re-import the module, so
    # this overhead is non-existent.
    t0 = time()
    np.sum(parallel(
        delayed(fftconvolve)(image, f, mode='valid') for f in filters)
    )
    print(f"Library load overhead: {(time() - t0) / 1e-3:.3f}ms")

    # Average the runtime over multiple runs, once the external overheads
    # have been taken into account.
    times = []
    for _ in range(n_rep):
        t0 = time()
        np.sum(parallel(
            delayed(fftconvolve)(image, f, mode='valid') for f in filters
        ))
        times.append(time() - t0)
    print(f"Runtime without init overhead: {np.mean(times) / 1e-3:.3f}ms"
          f" (+-{np.std(times) / 1e-3:.3f}ms)\n")


# Setup the problem size
im_size = (512, 512)
filter_size = tuple(5 for s in im_size)
n_filters = 3
n_jobs = 3
n_rep = 50

# Generate random data
image = np.random.rand(*im_size)
filters = np.random.rand(n_filters, *filter_size)


# Time the `backend='multiprocessing'`
with Parallel(n_jobs=n_jobs, backend='multiprocessing') as parallel:
    time_parallel("Multiprocessing", parallel, image, filters, n_rep=n_rep)
sleep(.5)

# Time the `backend='threading'`
with Parallel(n_jobs=n_jobs, backend='threading') as parallel:
    time_parallel("Threading", parallel, image, filters, n_rep=n_rep)

sleep(.5)


# Time the `backend='loky'`.
# For this backend, there is no need to rely on a managed `Parallel` object
# as loky reuses the previously created pool by default. We thus mimic the
# creation of a new `Parallel` object for each repetition.
def parallel_loky(it):
    return Parallel(n_jobs=n_jobs)(it)


time_parallel("Loky", parallel_loky, image, filters, n_rep=n_rep)
sleep(.5)


# Time the iterative run.
# We rely on the SequentialBackend of joblib, which is used whenever
# `n_jobs=1`, so that the same timing function can be reused. This should
# not change the computation much.
def parallel_iterative(it):
    return Parallel(n_jobs=1)(it)


time_parallel("Iterative", parallel_iterative, image, filters, n_rep=n_rep)

$ python main.py 
================================================================================
Multiprocessing
================================================================================
Pool init overhead: 12.112ms
Library load overhead: 96.520ms
Runtime without init overhead: 77.548ms (+-16.119ms)

================================================================================
Threading
================================================================================
Pool init overhead: 11.887ms
Library load overhead: 76.858ms
Runtime without init overhead: 31.931ms (+-3.569ms)

================================================================================
Loky
================================================================================
Pool init overhead: 502.369ms
Library load overhead: 245.368ms
Runtime without init overhead: 44.808ms (+-4.074ms)

================================================================================
Iterative
================================================================================
Pool init overhead: 1.048ms
Library load overhead: 92.595ms
Runtime without init overhead: 47.749ms (+-4.081ms)

With this benchmark, you can see that once it is started, the loky backend is actually faster. But if you do not use it multiple times, the overhead is too large.
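As a minimal sketch of that trade-off (timings will vary from machine to machine): with the default loky backend, only the first Parallel call pays the worker startup and import costs, since subsequent calls reuse the same executor.

from time import time

import numpy as np
from joblib import Parallel, delayed
from scipy.signal import fftconvolve

image = np.random.rand(512, 512)
filters = np.random.rand(3, 5, 5)

# The pool of workers created by the first call is reused by the
# following calls, so only iteration 0 should show the spawn and
# import overhead.
for i in range(5):
    t0 = time()
    s = np.sum(Parallel(n_jobs=3)(
        delayed(fftconvolve)(image, f, mode='valid') for f in filters
    ))
    print(f"call {i}: {(time() - t0) * 1e3:.1f}ms")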