pyarrow via spark-submit fails in cluster mode

Asked: 2018-07-05 14:11:47

Tags: pyspark spark-submit cluster-mode

I have a simple PySpark script:

    import pyarrow
    fs = pyarrow.hdfs.connect()

If I run this with spark-submit in "client" mode it works fine, but in "cluster" mode it throws the following error:

Traceback (most recent call last):
  File "t3.py", line 17, in <module>
    fs = pa.hdfs.connect()
  File "/opt/anaconda/3.6/lib/python3.6/site-packages/pyarrow/hdfs.py", line 181, in connect
    kerb_ticket=kerb_ticket, driver=driver)
  File "/opt/anaconda/3.6/lib/python3.6/site-packages/pyarrow/hdfs.py", line 37, in __init__
    self._connect(host, port, user, kerb_ticket, driver)
  File "io-hdfs.pxi", line 99, in pyarrow.lib.HadoopFileSystem._connect
  File "error.pxi", line 79, in pyarrow.lib.check_status
pyarrow.lib.ArrowIOError: HDFS connection failed

All the required Python libraries are installed on every node of the Hadoop cluster. I have verified this by testing the code on each node separately under pyspark.
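Since the libraries themselves are installed everywhere, one plausible difference between client and cluster mode is the driver's environment: in cluster mode the driver runs on an arbitrary worker, which may not have the libhdfs-related variables that the client node has. A minimal diagnostic sketch (the variable list is an assumption based on pyarrow's usual HDFS requirements, not taken from the question):

```python
import os

# Variables pyarrow's HDFS driver typically relies on (assumption):
# JAVA_HOME and HADOOP_HOME to locate the JVM and Hadoop install,
# ARROW_LIBHDFS_DIR to locate libhdfs.so, and CLASSPATH with the
# Hadoop jars (often built via `hadoop classpath --glob`).
REQUIRED = ["JAVA_HOME", "HADOOP_HOME", "ARROW_LIBHDFS_DIR", "CLASSPATH"]

def missing_hdfs_env(env=os.environ):
    """Return the names of HDFS-related variables absent from `env`."""
    return [name for name in REQUIRED if name not in env]

# Example: an empty environment is missing all four.
print(missing_hdfs_env({}))
# -> ['JAVA_HOME', 'HADOOP_HOME', 'ARROW_LIBHDFS_DIR', 'CLASSPATH']
```

Logging the result of `missing_hdfs_env()` at the top of the submitted script, in both client and cluster mode, would show whether the cluster-mode driver is missing something the client node provides.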

But I cannot make it work via spark-submit in cluster mode.

Any ideas?
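For context, if the cause does turn out to be a missing driver-side environment, Spark on YARN can forward variables to the cluster-mode driver via `spark.yarn.appMasterEnv.*`. A configuration sketch only; the native-library path and YARN deployment are assumptions, not details from the question:

```shell
# Hypothetical example: forward libhdfs-related variables to the
# cluster-mode driver (YARN application master). Paths will differ
# per cluster.
spark-submit \
  --master yarn \
  --deploy-mode cluster \
  --conf spark.yarn.appMasterEnv.ARROW_LIBHDFS_DIR=/usr/hdp/current/hadoop-client/lib/native \
  --conf spark.yarn.appMasterEnv.CLASSPATH="$(hadoop classpath --glob)" \
  t3.py
```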

shankar

0 Answers:

There are no answers yet.