In some cases, a Dask cluster seems to hang on restart.
To simulate this, I wrote the following toy code:
import contextlib2
from distributed import Client, LocalCluster

for i in xrange(100):
    print i
    with contextlib2.ExitStack() as es:
        cluster = LocalCluster(processes=True, n_workers=4)
        client = Client(cluster)
        es.callback(client.close)
        es.callback(cluster.close)
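For context on the cleanup order: ExitStack runs registered callbacks in LIFO order on exit, mirroring how nested `with` blocks unwind. A quick illustration (plain Python 3 `contextlib` here, but `contextlib2` behaves the same way):

```python
from contextlib import ExitStack

order = []

with ExitStack() as es:
    es.callback(lambda: order.append("first registered"))
    es.callback(lambda: order.append("second registered"))

# Callbacks fire in reverse registration order (LIFO),
# so the last-registered callback runs first on exit.
print(order)  # ['second registered', 'first registered']
```

So in the repro above, the callback registered last is the one invoked first when the `with` block exits.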
This code never makes it through the loop; I get this error:
raise_exc_info(self._exc_info)
  File "//anaconda/lib/python2.7/site-packages/tornado/gen.py", line 1141, in run
    yielded = self.gen.throw(*exc_info)
  File "//anaconda/lib/python2.7/site-packages/distributed/deploy/local.py", line 191, in _start
    yield [self._start_worker(**self.worker_kwargs) for i in range(n_workers)]
  File "//anaconda/lib/python2.7/site-packages/tornado/gen.py", line 1133, in run
    value = future.result()
  File "//anaconda/lib/python2.7/site-packages/tornado/concurrent.py", line 269, in result
    raise_exc_info(self._exc_info)
  File "//anaconda/lib/python2.7/site-packages/tornado/gen.py", line 883, in callback
    result_list.append(f.result())
  File "//anaconda/lib/python2.7/site-packages/tornado/concurrent.py", line 269, in result
    raise_exc_info(self._exc_info)
  File "//anaconda/lib/python2.7/site-packages/tornado/gen.py", line 1147, in run
    yielded = self.gen.send(value)
  File "//anaconda/lib/python2.7/site-packages/distributed/deploy/local.py", line 217, in _start_worker
    raise gen.TimeoutError("Worker failed to start")
I'm using dask distributed 1.25.1 and Python 2.7, running on a Mac.
Answer 0 (score: 0):
This is a known issue in Dask: with Python 2.7, the only way to start new worker processes (multiprocessing) is fork,
and fork can lead to deadlocks. For details, see the open ticket: https://github.com/dask/distributed/issues/2446
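The linked ticket points at fork-related deadlocks. As a general illustration only (not a verified fix for distributed 1.25.1, and Python 2.7's multiprocessing offers no alternative start method), Python 3 can launch worker processes with the "spawn" start method, which starts a fresh interpreter per worker instead of forking a process that may already hold locks:

```python
import multiprocessing as mp

def square(x):
    return x * x

def run_pool():
    # "spawn" starts each worker as a fresh interpreter process
    # rather than fork()ing the parent, avoiding deadlocks caused
    # by forking a process whose threads hold locks.
    ctx = mp.get_context("spawn")
    with ctx.Pool(2) as pool:
        return pool.map(square, range(4))

if __name__ == "__main__":
    print(run_pool())  # [0, 1, 4, 9]
```

In short, the deadlock class described in the ticket is a property of fork-based process creation, which is why moving off Python 2.7 (where fork is the only option) is the practical way out.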