Using Dask on EC2 instances raises "Couldn't gather 1 keys..."

Time: 2018-07-16 03:23:14

Tags: amazon-ec2 dask

I launched several EC2 instances, installed dask with conda, and started the scheduler and workers on their respective instances; the scheduler was able to receive connections from the workers. However, after starting a client and gathering a result (e.g. x.result()), this error is raised:


WARNING - Couldn't gather 1 keys, rescheduling - and the connection between the scheduler and the worker is then terminated.

The bug resolved in question 20951278 is almost identical. Unfortunately, it is not clear how to resolve the problem using the new flags.

Here is what my session looks like:

Scheduler - terminal

>>> from dask.distributed import Client
>>> client = Client('<domain-scheduler>:8786')
>>> def inc(x):
...   return x + 1
...
>>> x = client.submit(inc, 10)
>>> x.result()
distributed.client - WARNING - Couldn't gather 1 keys, rescheduling {'inc-17ff1aa09aeed9c364fc31df7522511e': ('tcp://172.30.3.63:38971',)}
^CTraceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/home/ubuntu/anaconda2/envs/dask-env/lib/python2.7/site-packages/distributed/client.py", line 190, in result
    raiseit=False)
  File "/home/ubuntu/anaconda2/envs/dask-env/lib/python2.7/site-packages/distributed/client.py", line 652, in sync
    return sync(self.loop, func, *args, **kwargs)
  File "/home/ubuntu/anaconda2/envs/dask-env/lib/python2.7/site-packages/distributed/utils.py", line 273, in sync
    e.wait(10)
  File "/home/ubuntu/anaconda2/envs/dask-env/lib/python2.7/threading.py", line 614, in wait
    self.__cond.wait(timeout)
  File "/home/ubuntu/anaconda2/envs/dask-env/lib/python2.7/threading.py", line 359, in wait
    _sleep(delay)
KeyboardInterrupt

Scheduler - dask-scheduler

(dask-env) ubuntu@ip-172-30-3-136:~$ dask-scheduler --host <domain-scheduler>:8786 --bokeh-port 8080
distributed.scheduler - INFO - -----------------------------------------------
distributed.scheduler - INFO - Clear task state
distributed.scheduler - INFO -   Scheduler at:   tcp://172.30.3.136:8786
distributed.scheduler - INFO -       bokeh at:         172.30.3.136:8080
distributed.scheduler - INFO - Local Directory:      /tmp/scheduler-TX9nqO
distributed.scheduler - INFO - -----------------------------------------------
distributed.scheduler - INFO - Register tcp://172.30.3.63:38971
distributed.scheduler - INFO - Starting worker compute stream, tcp://172.30.3.63:38971
distributed.core - INFO - Starting established connection
distributed.scheduler - INFO - Receive client connection: Client-b5d903b5-8620-11e8-8a4c-06a866fbd474
distributed.core - INFO - Starting established connection
distributed.scheduler - INFO - Remove worker tcp://172.30.3.63:38971
distributed.core - INFO - Removing comms to tcp://172.30.3.63:38971
distributed.scheduler - INFO - Lost all workers
distributed.scheduler - ERROR - Workers don't have promised key: ['tcp://172.30.3.63:38971'], inc-17ff1aa09aeed9c364fc31df7522511e
None
^Cdistributed.scheduler - INFO - End scheduler at u'tcp://<domain>:8786'

Worker - dask-worker

(dask-env) ubuntu@ip-172-30-3-63:~$ dask-worker --host <domain-worker>:8786 <domain-scheduler>:8786
distributed.nanny - INFO -         Start Nanny at: 'tcp://172.30.3.63:8786'
distributed.worker - INFO -       Start worker at:    tcp://172.30.3.63:38971
distributed.worker - INFO -          Listening to:    tcp://172.30.3.63:38971
distributed.worker - INFO -              bokeh at:           172.30.3.63:8789
distributed.worker - INFO -              nanny at:           172.30.3.63:8786
distributed.worker - INFO - Waiting to connect to: tcp://<domain-schedule>:8786
distributed.worker - INFO - -------------------------------------------------
distributed.worker - INFO -               Threads:                          1
distributed.worker - INFO -                Memory:                    1.04 GB
distributed.worker - INFO -       Local Directory: /home/ubuntu/dask-worker-space/worker-EnKL22
distributed.worker - INFO - -------------------------------------------------
distributed.worker - INFO -         Registered to: tcp://<domain-scheduler>:8786
distributed.worker - INFO - -------------------------------------------------
distributed.core - INFO - Starting established connection
distributed.worker - INFO - Stopping worker at tcp://172.30.3.63:38971
distributed.worker - WARNING - Heartbeat to scheduler failed
distributed.nanny - INFO - Closing Nanny at 'tcp://172.30.3.63:8786'
distributed.dask_worker - INFO - End worker

As you can see, the session dies after running x.result(). I also tried including --listen-address and --contact-address, without success.

2 Answers:

Answer 0 (score: 1)

When I've run into this in the past, it was because the scheduler couldn't reach the workers. If you run curl <domain-worker>:8789 from the scheduler, does it return the bokeh HTML? My guess is that it doesn't, and that you need to change your networking settings in AWS.
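If curl isn't available on the scheduler instance, the same reachability check can be sketched with the standard library. This is not Dask code, just a generic TCP probe; the hostname and port in the usage comment are placeholders for your worker's address and its bokeh port:

```python
import socket

def port_open(host, port, timeout=2.0):
    """Return True if a TCP connection to (host, port) succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Run this from the scheduler instance, e.g.:
# print(port_open('worker.example.com', 8789))  # worker's bokeh port
# print(port_open('worker.example.com', 38971))  # worker's compute port
```

If these return False while the worker is up, the EC2 security group is not allowing inbound traffic on those ports.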

Answer 1 (score: 0)

The solution was to give dask-scheduler and dask-worker specific open ports to use, rather than letting them choose random ports. The commands should look like this:

Scheduler

dask-scheduler --host <domain-scheduler> --port 8786 --bokeh-port <open-port>

Worker

dask-worker --host <domain-worker> <domain-scheduler>:8786 --worker-port 8786

Terminal

client = Client('tcp://<domain-scheduler>:8786')
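Pinning the ports matters because EC2 security groups are allow-rules for specific ports: a worker that binds a random ephemeral port (as happens when no --worker-port is given) won't match any rule you wrote. A minimal standard-library sketch of the difference, not Dask code, assuming port 8786 is free on the machine running it:

```python
import socket

# Letting the OS pick, analogous to dask-worker's default behavior:
s = socket.socket()
s.bind(('127.0.0.1', 0))      # port 0 asks the OS for a random ephemeral port
ephemeral = s.getsockname()[1]
s.close()

# Pinning the port, analogous to passing --port / --worker-port:
s = socket.socket()
s.bind(('127.0.0.1', 8786))   # raises OSError instead if 8786 is already taken
fixed = s.getsockname()[1]
s.close()

print(ephemeral, fixed)       # ephemeral varies between runs; fixed is always 8786
```

With fixed ports, one security-group rule per service (scheduler port, worker port, bokeh ports) is enough to keep scheduler, workers, and client mutually reachable.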