Not all workers show up on the Spark web UI

Time: 2018-11-01 09:23:41

Tags: apache-spark logging worker

We have set up a Spark standalone cluster and everything seems fine. However, not all of the workers show up on the Spark web UI.

We start Spark as follows:

[hadoop@master spark-2.0.2-bin-hadoop2.7]$ sbin/start-all.sh
starting org.apache.spark.deploy.master.Master, logging to /home/hadoop/spark-2.0.2-bin-hadoop2.7/logs/spark-hadoop-org.apache.spark.deploy.master.Master-1-master.telegis.out
master.telegis: starting org.apache.spark.deploy.worker.Worker, logging to /home/hadoop/spark-2.0.2-bin-hadoop2.7/logs/spark-hadoop-org.apache.spark.deploy.worker.Worker-1-master.telegis.out
slave1.telegis: starting org.apache.spark.deploy.worker.Worker, logging to /home/hadoop/spark-2.0.2-bin-hadoop2.7/logs/spark-hadoop-org.apache.spark.deploy.worker.Worker-1-slave1.telegis.out
slave2.telegis: starting org.apache.spark.deploy.worker.Worker, logging to /home/hadoop/spark-2.0.2-bin-hadoop2.7/logs/spark-hadoop-org.apache.spark.deploy.worker.Worker-1-slave2.telegis.out
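For reference, which workers start-all.sh launches, and which master address they register with, is controlled by two files under conf/. A minimal sketch, assuming the default ports and the hostnames from the startup output above (the exact values in our cluster may differ):

# conf/slaves — one worker hostname per line
master.telegis
slave1.telegis
slave2.telegis

# conf/spark-env.sh — address the workers use to reach the master
export SPARK_MASTER_HOST=master.telegis
export SPARK_MASTER_PORT=7077        # default RPC port
export SPARK_MASTER_WEBUI_PORT=8080  # default web UI port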

We can find the Worker process on each node with the jps command.
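To double-check which master each worker actually registered with, one can grep the worker log for the registration message; a sketch using the log path from the startup output above:

[hadoop@slave1 ~]$ grep "registered with master" /home/hadoop/spark-2.0.2-bin-hadoop2.7/logs/spark-hadoop-org.apache.spark.deploy.worker.Worker-1-slave1.telegis.out

A healthy worker should log a line like "Successfully registered with master spark://master.telegis:7077".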

Moreover, when we submit a job in YARN cluster mode, only one worker actually does any work.
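For context, the submission looks roughly like this; the class name, jar path, and executor count are placeholders, not our real application:

[hadoop@master spark-2.0.2-bin-hadoop2.7]$ bin/spark-submit \
    --master yarn \
    --deploy-mode cluster \
    --class com.example.MyApp \
    --num-executors 3 \
    /path/to/my-app.jar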

The Spark web UI shows: [screenshot not recovered]

The Hadoop cluster web page shows: [screenshot not recovered]

0 Answers:

No answers yet.