Using Dask LocalCluster() in a modular Python codebase

Date: 2020-04-09 13:13:50

Tags: python python-3.x parallel-processing dask dask-distributed

I am trying to use Dask Distributed's LocalCluster to run code in parallel, using all the cores of a single machine.

Consider a sample Python data pipeline with the folder structure below.

sample_dask_program
├── main.py
├── parallel_process_1.py
├── parallel_process_2.py
├── process_1.py
├── process_2.py
└── process_3.py

main.py is the entry point; it runs the pipeline stages sequentially.

For example:

def run_pipeline():
    stage_one_run_util()
    stage_two_run_util()

    ...

    stage_six_run_util()


if __name__ == '__main__':

    ...

    run_pipeline()

parallel_process_1.py and parallel_process_2.py are modules that create a Client() and use futures to achieve parallelism.

with Client() as client:
    # list to store futures after they are submitted
    futures = []

    for item in items:
        future = client.submit(
            ...
        )
        futures.append(future)

    results = client.gather(futures)
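
For reference, a self-contained version of this submit/gather pattern might look like the sketch below; convert_item, run_parallel, and the doubling logic are hypothetical stand-ins for the real per-item computation.

from dask.distributed import Client

def convert_item(item):
    # stand-in for the real per-item work (e.g. a NumPy/Pandas computation)
    return item * 2

def run_parallel(items):
    with Client() as client:
        # submit one task per item and keep the resulting futures
        futures = [client.submit(convert_item, item) for item in items]
        # gather blocks until every future finishes and returns the results
        return client.gather(futures)

if __name__ == '__main__':
    print(run_parallel(range(10)))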

process_1.py, process_2.py and process_3.py are modules that perform simple computations and do not need to run in parallel across all the CPU cores.

Traceback:

  File "/sm/src/calculation/parallel.py", line 140, in convert_qty_to_float
    results = client.gather(futures)
  File "/home/iouser/.local/lib/python3.7/site-packages/distributed/client.py", line 1894, in gather
    asynchronous=asynchronous,
  File "/home/iouser/.local/lib/python3.7/site-packages/distributed/client.py", line 778, in sync
    self.loop, func, *args, callback_timeout=callback_timeout, **kwargs
  File "/home/iouser/.local/lib/python3.7/site-packages/distributed/utils.py", line 348, in sync
    raise exc.with_traceback(tb)
  File "/home/iouser/.local/lib/python3.7/site-packages/distributed/utils.py", line 332, in f
    result[0] = yield future
  File "/home/iouser/.local/lib/python3.7/site-packages/tornado/gen.py", line 735, in run
    value = future.result()
concurrent.futures._base.CancelledError

This is the error thrown by the workers:

distributed.worker - ERROR - failed during get data with tcp://127.0.0.1:33901 -> tcp://127.0.0.1:38821
Traceback (most recent call last):
  File "/home/iouser/.local/lib/python3.7/site-packages/distributed/comm/tcp.py", line 248, in write
    future = stream.write(frame)
  File "/home/iouser/.local/lib/python3.7/site-packages/tornado/iostream.py", line 546, in write
    self._check_closed()
  File "/home/iouser/.local/lib/python3.7/site-packages/tornado/iostream.py", line 1035, in _check_closed
    raise StreamClosedError(real_error=self.error)
tornado.iostream.StreamClosedError: Stream is closed
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
  File "/home/iouser/.local/lib/python3.7/site-packages/distributed/worker.py", line 1248, in get_data
    compressed = await comm.write(msg, serializers=serializers)
  File "/home/iouser/.local/lib/python3.7/site-packages/distributed/comm/tcp.py", line 255, in write
    convert_stream_closed_error(self, e)
  File "/home/iouser/.local/lib/python3.7/site-packages/distributed/comm/tcp.py", line 121, in convert_stream_closed_error
    raise CommClosedError("in %s: %s: %s" % (obj, exc.__class__.__name__, exc))
distributed.comm.core.CommClosedError: in <closed TCP>: BrokenPipeError: [Errno 32] Broken pipe

Since this error occurs abruptly, I am not able to reproduce it locally or come up with a minimal reproducible example.

Is this the correct way to use Dask LocalCluster in a modular Python program?

EDIT

I have observed that these errors come up when the LocalCluster is created with a relatively large number of threads and processes. I am doing computations that use NumPy and Pandas, which is not a good practice as described here.

At times, when the LocalCluster is created with 4 workers and 16 processes, no error gets thrown. When the LocalCluster is created with 8 workers and 40 processes, the error described above gets thrown.

As far as I understand, Dask picks this combination randomly (is this an issue with Dask?), since I am testing on the same AWS Batch instance (with 8 cores / 16 vCPUs).

The issue does not come up when I force the cluster to be created with threads only.

For example:

cluster = LocalCluster(processes=False)
with Client(cluster) as client:
    client.submit(...)
    ...

However, creating the LocalCluster with threads only slows execution down by a factor of 2-3.

So, is the solution to this problem to find the right number of processes/threads that suit the program?
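
One way to take that choice out of Dask's hands is to size the cluster explicitly. The sketch below assumes the 8-core/16-vCPU instance mentioned above; some_function and items are hypothetical placeholders, and the exact worker/thread split is something to tune for the workload.

from dask.distributed import Client, LocalCluster

# Size the cluster explicitly instead of relying on the defaults:
# 4 worker processes x 4 threads each = 16 threads, matching 16 vCPUs.
cluster = LocalCluster(n_workers=4, threads_per_worker=4)

with Client(cluster) as client:
    futures = [client.submit(some_function, item) for item in items]
    results = client.gather(futures)

cluster.close()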

1 Answer:

Answer 0 (score: 1)

It is more common to create a Dask Client once and then run many workloads against it.

with Client() as client:
    stage_one(client)
    stage_two(client)

That said, what you are doing should be fine. If you are able to reproduce the error with a minimal example, that would be useful (but no expectations).
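
Applied to the folder structure in the question, that could mean creating the client once in main.py and passing it down to the stages, roughly as in the sketch below (this assumes the stage utilities are changed to accept a client argument instead of creating their own Client):

# main.py -- sketch: one Client shared by the whole pipeline
from dask.distributed import Client, LocalCluster

def run_pipeline(client):
    stage_one_run_util(client)   # e.g. the logic in parallel_process_1.py
    stage_two_run_util(client)   # e.g. the logic in parallel_process_2.py
    ...
    stage_six_run_util(client)

if __name__ == '__main__':
    cluster = LocalCluster()     # worker/thread counts can be tuned here
    with Client(cluster) as client:
        run_pipeline(client)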