How to run Apache Spark remotely from Eclipse?

Date: 2015-03-04 14:54:09

Tags: java eclipse apache-spark

I have a Spark cluster set up with one master and three workers. I use Vagrant and Docker to start the cluster.

I am trying to submit a Spark job from my local Eclipse that connects to the master and lets me execute it. Here is the Spark conf:

SparkConf conf = new SparkConf().setAppName("Simple Application").setMaster("spark://scale1.docker:7077");
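
For context, a minimal, self-contained sketch of how such a driver program might look around that conf (the class name and the trivial job are assumptions; only the SparkConf line above is taken from my actual code):

import java.util.Arrays;

import org.apache.spark.SparkConf;
import org.apache.spark.api.java.JavaSparkContext;

public class SimpleApplication {
    public static void main(String[] args) {
        // Connect to the standalone master started by Vagrant/Docker.
        SparkConf conf = new SparkConf()
                .setAppName("Simple Application")
                .setMaster("spark://scale1.docker:7077");
        JavaSparkContext sc = new JavaSparkContext(conf);

        // A trivial action, just to verify that executors can run tasks.
        long count = sc.parallelize(Arrays.asList(1, 2, 3, 4)).count();
        System.out.println("count = " + count);

        sc.stop();
    }
}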

When I run my application from Eclipse, I can see one running application on the master's UI. All workers are ALIVE, with 4/4 cores in use and 512 MB allocated to the application.

The Eclipse console just keeps printing the same warning:

15/03/04 15:39:27 INFO BlockManagerMaster: Updated info of block broadcast_1_piece0
15/03/04 15:39:27 INFO SparkContext: Created broadcast 1 from broadcast at DAGScheduler.scala:838
15/03/04 15:39:27 INFO DAGScheduler: Submitting 2 missing tasks from Stage 0 (MappedRDD[2] at mapToPair at CountLines.java:35)
15/03/04 15:39:27 INFO TaskSchedulerImpl: Adding task set 0.0 with 2 tasks
15/03/04 15:39:42 WARN TaskSchedulerImpl: Initial job has not accepted any resources; check your cluster UI to ensure that workers are registered and have sufficient memory
15/03/04 15:39:57 WARN TaskSchedulerImpl: Initial job has not accepted any resources; check your cluster UI to ensure that workers are registered and have sufficient memory
15/03/04 15:40:04 INFO AppClient$ClientActor: Executor updated: app-20150304143926-0001/1 is now EXITED (Command exited with code 1)
15/03/04 15:40:04 INFO SparkDeploySchedulerBackend: Executor app-20150304143926-0001/1 removed: Command exited with code 1
15/03/04 15:40:04 ERROR SparkDeploySchedulerBackend: Asked to remove non-existent executor 1
15/03/04 15:40:04 INFO AppClient$ClientActor: Executor added: app-20150304143926-0001/2 on worker-20150304140319-scale3.docker-55425 (scale3.docker:55425) with 4 cores
15/03/04 15:40:04 INFO SparkDeploySchedulerBackend: Granted executor ID app-20150304143926-0001/2 on hostPort scale3.docker:55425 with 4 cores, 512.0 MB RAM
15/03/04 15:40:04 INFO AppClient$ClientActor: Executor updated: app-20150304143926-0001/2 is now RUNNING
15/03/04 15:40:04 INFO AppClient$ClientActor: Executor updated: app-20150304143926-0001/2 is now LOADING
15/03/04 15:40:04 INFO AppClient$ClientActor: Executor updated: app-20150304143926-0001/0 is now EXITED (Command exited with code 1)
15/03/04 15:40:04 INFO SparkDeploySchedulerBackend: Executor app-20150304143926-0001/0 removed: Command exited with code 1
15/03/04 15:40:04 ERROR SparkDeploySchedulerBackend: Asked to remove non-existent executor 0
15/03/04 15:40:04 INFO AppClient$ClientActor: Executor added: app-20150304143926-0001/3 on worker-20150304140317-scale2.docker-60646 (scale2.docker:60646) with 4 cores
15/03/04 15:40:04 INFO SparkDeploySchedulerBackend: Granted executor ID app-20150304143926-0001/3 on hostPort scale2.docker:60646 with 4 cores, 512.0 MB RAM
15/03/04 15:40:04 INFO AppClient$ClientActor: Executor updated: app-20150304143926-0001/3 is now RUNNING
15/03/04 15:40:04 INFO AppClient$ClientActor: Executor updated: app-20150304143926-0001/3 is now LOADING

Reading the Spark documentation, I found this:

  Because the driver schedules tasks on the cluster, it should be run close to the worker nodes, preferably on the same local area network. If you'd like to send requests to the cluster remotely, it's better to open an RPC to the driver and have it submit operations from nearby than to run a driver far away from the worker nodes.

I think the problem is caused by the driver running locally on my machine.
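
If that is the cause, one commonly suggested mitigation is to tell Spark explicitly which address the executors should use to reach back to the driver, and to ship the application jar to them. A sketch of what I understand that would look like (the IP address and jar path are assumptions; spark.driver.host and setJars are standard Spark configuration):

SparkConf conf = new SparkConf()
        .setAppName("Simple Application")
        .setMaster("spark://scale1.docker:7077")
        // Address of the local machine as seen from the Docker workers
        // (assumption: the host's IP on the Vagrant private network).
        .set("spark.driver.host", "192.168.33.1")
        // Make the application classes available to the executors;
        // the path to the built jar is an assumption.
        .setJars(new String[] { "target/simple-application-0.0.1-SNAPSHOT.jar" });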

I am using Spark 1.2.0.

Is it possible to run the application in Eclipse, with a local driver, and have it submitted to the remote cluster? If so, what do I need to do?

1 answer:

Answer 0 (score: 0)

Remote debugging is quite possible, and it works fine when the following option is used on an edge node:

--driver-java-options -agentlib:jdwp=transport=dt_socket,server=y,suspend=y,address=5005

Debugging Spark Applications: you do not need to export the master or anything else. Here is a sample command:

spark-submit --master yarn-client --class org.hkt.spark.jstest.javascalawordcount.JavaWordCount --driver-java-options -agentlib:jdwp=transport=dt_socket,server=y,suspend=y,address=5005 javascalawordcount-0.0.1-SNAPSHOT.jar
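
With suspend=y the driver JVM waits until a debugger attaches before running the job. So the next step is to create a "Remote Java Application" debug configuration in Eclipse, point it at the edge node's hostname and port 5005, and launch it; the application resumes as soon as the debugger connects.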