What is the root cause of the distributed.scheduler.KilledWorker exception?

Asked: 2019-07-10 14:14:09

Tags: dask dask-distributed

I am trying to run a Dask job on a YARN cluster. The job reads from and writes to HDFS using the hdfs3 library.
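(For context, a minimal sketch of the kind of hdfs3 access the job performs; the hostname, port, and paths below are placeholders, not values from the actual job:)

from hdfs3 import HDFileSystem

# Connect to the NameNode (host and port are placeholders).
hdfs = HDFileSystem(host='namenode.example.com', port=8020)

# Typical read/write round-trip performed inside the Dask tasks.
with hdfs.open('/data/output.bin', 'wb') as f:
    f.write(b'payload')
with hdfs.open('/data/output.bin', 'rb') as f:
    data = f.read()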

  • When I run it on a cluster without the Kerberos security layer, it works fine.
  • However, on a cluster with the Kerberos security layer, I had to apply the workaround described here to avoid Kerberos-related errors. Running the same job then produces the following error:
  File "/fsstreamdevl/f6_development/acoustics/acoustics_analysis_dask/acoustics_analytics/task_runner/task_runner.py", line 123, in run
    dask.compute(tasks)
  File "/anaconda_env/projects/f6acoustics/dev/dask_yarn_test/lib/python3.7/site-packages/dask/base.py", line 446, in compute
    results = schedule(dsk, keys, **kwargs)
  File "/anaconda_env/projects/f6acoustics/dev/dask_yarn_test/lib/python3.7/site-packages/distributed/client.py", line 2568, in get
    results = self.gather(packed, asynchronous=asynchronous, direct=direct)
  File "/anaconda_env/projects/f6acoustics/dev/dask_yarn_test/lib/python3.7/site-packages/distributed/client.py", line 1822, in gather
    asynchronous=asynchronous,
  File "/anaconda_env/projects/f6acoustics/dev/dask_yarn_test/lib/python3.7/site-packages/distributed/client.py", line 753, in sync
    return sync(self.loop, func, *args, **kwargs)
  File "/anaconda_env/projects/f6acoustics/dev/dask_yarn_test/lib/python3.7/site-packages/distributed/utils.py", line 331, in sync
    six.reraise(*error[0])
  File "/anaconda_env/projects/f6acoustics/dev/dask_yarn_test/lib/python3.7/site-packages/six.py", line 693, in reraise
    raise value
  File "/anaconda_env/projects/f6acoustics/dev/dask_yarn_test/lib/python3.7/site-packages/distributed/utils.py", line 316, in f
    result[0] = yield future
  File "/anaconda_env/projects/f6acoustics/dev/dask_yarn_test/lib/python3.7/site-packages/tornado/gen.py", line 735, in run
    value = future.result()
  File "/anaconda_env/projects/f6acoustics/dev/dask_yarn_test/lib/python3.7/site-packages/tornado/gen.py", line 742, in run
    yielded = self.gen.throw(*exc_info)  # type: ignore
  File "/anaconda_env/projects/f6acoustics/dev/dask_yarn_test/lib/python3.7/site-packages/distributed/client.py", line 1653, in _gather
    six.reraise(type(exception), exception, traceback)
  File "/anaconda_env/projects/f6acoustics/dev/dask_yarn_test/lib/python3.7/site-packages/six.py", line 693, in reraise
    raise value
distributed.scheduler.KilledWorker: ('__call__-6af7aa29-2a09-45f3-a5e2-207c06562672', <Worker 'tcp://10.194.211.132:11927', memory: 0, processing: 1>)
  • Strangely, running the same workaround on the previous cluster, without the Kerberos security layer, I get the same error.

Looking through the YARN application logs, I see the following traceback, but cannot tell what it means.

distributed.nanny - INFO - Closing Nanny at 'tcp://10.194.211.133:17659'
Traceback (most recent call last):
  File "/opt/hadoop/data/05/hadoop/yarn/local/usercache/hdfsf6/appcache/application_1560931326013_171773/container_e47_1560931326013_171773_01_000003/environment/lib/python3.7/multiprocessing/queues.py", line 242, in _feed
    send_bytes(obj)
  File "/opt/hadoop/data/05/hadoop/yarn/local/usercache/hdfsf6/appcache/application_1560931326013_171773/container_e47_1560931326013_171773_01_000003/environment/lib/python3.7/multiprocessing/connection.py", line 200, in send_bytes
    self._send_bytes(m[offset:offset + size])
  File "/opt/hadoop/data/05/hadoop/yarn/local/usercache/hdfsf6/appcache/application_1560931326013_171773/container_e47_1560931326013_171773_01_000003/environment/lib/python3.7/multiprocessing/connection.py", line 404, in _send_bytes
    self._send(header + buf)
  File "/opt/hadoop/data/05/hadoop/yarn/local/usercache/hdfsf6/appcache/application_1560931326013_171773/container_e47_1560931326013_171773_01_000003/environment/lib/python3.7/multiprocessing/connection.py", line 368, in _send
    n = write(self._handle, buf)
BrokenPipeError: [Errno 32] Broken pipe

End of LogType:dask.worker.log

I do not see any explicit out-of-memory messages in the logs. Does anyone know how to diagnose this problem?
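(For reference, one way to pull worker and nanny logs straight from a running cluster is through the Dask client; a minimal sketch, where the scheduler address is a placeholder:)

from distributed import Client

# Connect to the running scheduler (address is a placeholder).
client = Client('tcp://10.194.211.132:8786')

# Fetch recent log lines from every worker; nanny=True returns the
# nanny processes' logs instead, where restart messages usually land.
for addr, lines in client.get_worker_logs(nanny=True).items():
    print(addr)
    for level, message in lines:
        print(' ', level, message)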

1 Answer:

Answer 0 (score: 0)

hdfs3 is no longer actively maintained. There are two main choices for interacting with HDFS:

  • pyarrow's hdfs driver (via the libhdfs JNI library), which requires you to set up the Java and Hadoop dependencies correctly and make them available to the session calling it
  • webhdfs, e.g. via fsspec, which needs no Java libraries and can negotiate with Kerberos if HTTP authentication is enabled on the system (see the sketch after this list).
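A minimal sketch of both options (hostnames, ports, and the Kerberos ticket path are placeholders; kerberos=True in fsspec assumes the requests-kerberos package is installed):

import pyarrow.hdfs
import fsspec

# Option 1: pyarrow's libhdfs driver. Needs JAVA_HOME, HADOOP_HOME and
# the hadoop CLASSPATH set up in the environment of the calling session.
hdfs = pyarrow.hdfs.connect(
    host='namenode.example.com',       # placeholder
    port=8020,
    kerb_ticket='/tmp/krb5cc_12345',   # path to a valid Kerberos ticket cache
)
with hdfs.open('/data/input.csv', 'rb') as f:
    header = f.readline()

# Option 2: webhdfs via fsspec. Pure HTTP, no Java; kerberos=True uses
# SPNEGO/HTTP authentication against the NameNode's web port.
fs = fsspec.filesystem(
    'webhdfs',
    host='namenode.example.com',       # placeholder
    port=50070,
    kerberos=True,
)
with fs.open('/data/input.csv', 'rb') as f:
    header = f.readline()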