Spark submit application to master by hostname

Asked: 2015-05-27 06:16:09

Tags: apache-spark spark-streaming hosts-file

Hi, I'm new to Spark and I'm having trouble submitting an application. I have set up one master node and two slave nodes running Spark, one node running ZooKeeper, and one node running Kafka. I want to launch a modified version of the Kafka wordcount example in Python using Spark Streaming.

To submit the application, all I do is ssh into the Spark master node and run <path to spark home>/bin/spark-submit. If I specify the master by its IP, everything works fine: the application correctly consumes messages from Kafka, and from the Spark UI I can see that it is running correctly on both slaves:

./bin/spark-submit --master spark://<spark master ip>:7077 --jars ./external/spark-streaming-kafka-assembly_2.10-1.3.1.jar ./examples/src/main/python/streaming/kafka_wordcount.py <zookeeper ip>:2181 test

But if I specify the master node by its hostname:

./bin/spark-submit --master spark://spark-master01:7077 --jars ./external/spark-streaming-kafka-assembly_2.10-1.3.1.jar ./examples/src/main/python/streaming/kafka_wordcount.py zookeeper01:2181 test

then it hangs with these logs:

15/05/27 02:01:58 INFO AppClient$ClientActor: Connecting to master akka.tcp://sparkMaster@spark-master01:7077/user/Master...
15/05/27 02:02:18 INFO AppClient$ClientActor: Connecting to master akka.tcp://sparkMaster@spark-master01:7077/user/Master...
15/05/27 02:02:38 INFO AppClient$ClientActor: Connecting to master akka.tcp://sparkMaster@spark-master01:7077/user/Master...
15/05/27 02:02:58 ERROR SparkDeploySchedulerBackend: Application has been killed. Reason: All masters are unresponsive! Giving up.
15/05/27 02:02:58 ERROR TaskSchedulerImpl: Exiting due to error from cluster scheduler: All masters are unresponsive! Giving up.
15/05/27 02:02:58 WARN SparkDeploySchedulerBackend: Application ID is not initialized yet.

My /etc/hosts file looks like this:

<spark master ip> spark-master01
127.0.0.1 localhost

::1 ip6-localhost ip6-loopback
fe00::0 ip6-localnet
ff00::0 ip6-mcastprefix
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters
ff02::3 ip6-allhosts
<spark slave-01 ip> spark-slave01
<spark slave-02 ip> spark-slave02
<kafka01 ip> kafka01
<zookeeper ip> zookeeper01

Update

This is the first part of the netstat -n -a output:

Active Internet connections (servers and established)
Proto Recv-Q Send-Q Local Address           Foreign Address           State
tcp        0      0 0.0.0.0:22              0.0.0.0:*                 LISTEN
tcp        0      0 <spark master ip>:22    <my laptop ip>:60113      ESTABLISHED
tcp        0    260 <spark master ip>:22    <my laptop ip>:60617      ESTABLISHED
tcp6       0      0 :::22                   :::*                      LISTEN
tcp6       0      0 <spark master ip>:7077  :::*                      LISTEN
tcp6       0      0 :::8080                 :::*                      LISTEN
tcp6       0      0 <spark master ip>:6066  :::*                      LISTEN
tcp6       0      0 127.0.0.1:60105         127.0.0.1:44436           TIME_WAIT
tcp6       0      0 <spark master ip>:43874 <spark master ip>:7077    TIME_WAIT
tcp6       0      0 127.0.0.1:51220         127.0.0.1:55029           TIME_WAIT
tcp6       0      0 <spark master ip>:7077  <spark slave 01 ip>:37061 ESTABLISHED
tcp6       0      0 <spark master ip>:7077  <spark slave 02 ip>:47516 ESTABLISHED
tcp6       0      0 127.0.0.1:51220         127.0.0.1:55026           TIME_WAIT

2 Answers:

Answer 0 (score: 1)

You are using the hostname instead of the IP address, so you need to add the hostname mapping to the /etc/hosts file on every node. Then it will work.
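As a minimal sketch (the IPs and hostnames below are the placeholders from the question, not real values), these are the entries every node's /etc/hosts would need so that the driver, the master, and the slaves all resolve each other consistently:

<spark master ip> spark-master01
<spark slave-01 ip> spark-slave01
<spark slave-02 ip> spark-slave02
<kafka01 ip> kafka01
<zookeeper ip> zookeeper01

The important part is that spark-master01 resolves to the same address on every machine, and that it is not mapped only to 127.0.0.1 on the master itself.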

Answer 1 (score: 0)

You can first try ping spark-master01 to see which IP spark-master01 resolves to. Then you can run netstat -n -a to check whether your Spark master port 7077 is correctly bound to the IP of your Spark master node.
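A quick sketch of that check, run from the node where you call spark-submit (hostnames as in the question):

# Which IP does the hostname resolve to on this node?
ping -c 3 spark-master01

# Is the master actually listening on port 7077 at that same address?
netstat -n -a | grep 7077

One more thing worth checking (an assumption, not confirmed by the question): the Spark 1.x standalone master only accepts drivers whose spark:// URL matches the host it was started with, so if the master was started with its IP (for example via SPARK_MASTER_IP in conf/spark-env.sh), connecting with spark://spark-master01:7077 can fail even though the name resolves; restarting the master with the hostname makes the two match.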