Spark "Cannot assign requested address: Service 'Driver' failed after 16 retries"

Date: 2018-10-01 09:53:54

Tags: java apache-spark

I have a Spark cluster with 3 workers (worker-1, worker-2, worker-3), running Spark 2.0.2.

The Spark master is started on worker-1.

I submit my application using the following script:

#!/bin/bash

sparkMaster=spark://worker-1:6066
mainClass=my.package.Main

jar=/path/to/my/jar-with-dependencies.jar

driverPort=7079
blockPort=7082

deployMode=cluster

$SPARK_HOME/bin/spark-submit \
  --conf "spark.driver.port=${driverPort}"\
  --conf "spark.blockManager.port=${blockPort}"\
  --class $mainClass \
  --master $sparkMaster \
  --deploy-mode $deployMode \
  $jar

When my driver is launched on worker-1 (worker + master), everything works fine and my application executes correctly using all the workers.

However, when my driver is launched on another worker (worker-2 or worker-3), it fails with the following error:

Launch Command: "/usr/java/jdk1.8.0_181-amd64/jre/bin/java" "-cp" "/root/spark-2.0.2-bin-hadoop2.7/conf/:/root/spark-2.0.2-bin-hadoop2.7/jars/*" "-Xmx1024M" "-Dspark.submit.deployMode=cluster" "-Dspark.app.name=my.package.Main" "-Dspark.driver.port=7083" "-Dspark.blockManager.port=7082" "-Dspark.master=spark://worker-1:7077" "-Dspark.jars=file:/path/to/my/jar-with-dependencies.jar" "org.apache.spark.deploy.worker.DriverWrapper" "spark://Worker@worker-2:7078" "/data/spark/work/driver-20181001132624-0001/jar-with-dependencies.jar" "my.package.Main"
========================================

org.apache.spark.internal.Logging$class.logWarning(Logging.scala:66) | Service 'Driver' could not bind on port 0. Attempting port 1.
org.apache.spark.internal.Logging$class.logWarning(Logging.scala:66) | Service 'Driver' could not bind on port 0. Attempting port 1.
...
org.apache.spark.internal.Logging$class.logWarning(Logging.scala:66) | Service 'Driver' could not bind on port 0. Attempting port 1.
org.apache.spark.internal.Logging$class.logWarning(Logging.scala:66) | Service 'Driver' could not bind on port 0. Attempting port 1.

Exception in thread "main" java.net.BindException: Cannot assign requested address: Service 'Driver' failed after 16 retries! Consider explicitly setting the appropriate port for the service 'Driver' (for example spark.ui.port for SparkUI) to an available port or increasing spark.port.maxRetries.
        at sun.nio.ch.Net.bind0(Native Method)
        at sun.nio.ch.Net.bind(Net.java:433)
        at sun.nio.ch.Net.bind(Net.java:425)
        at sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:223)
        at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:74)
        at io.netty.channel.socket.nio.NioServerSocketChannel.doBind(NioServerSocketChannel.java:125)
        at io.netty.channel.AbstractChannel$AbstractUnsafe.bind(AbstractChannel.java:485)
        at io.netty.channel.DefaultChannelPipeline$HeadContext.bind(DefaultChannelPipeline.java:1089)
        at io.netty.channel.AbstractChannelHandlerContext.invokeBind(AbstractChannelHandlerContext.java:430)
        at io.netty.channel.AbstractChannelHandlerContext.bind(AbstractChannelHandlerContext.java:415)
        at io.netty.channel.DefaultChannelPipeline.bind(DefaultChannelPipeline.java:903)
        at io.netty.channel.AbstractChannel.bind(AbstractChannel.java:198)
        at io.netty.bootstrap.AbstractBootstrap$2.run(AbstractBootstrap.java:348)
        at io.netty.util.concurrent.SingleThreadEventExecutor.runAllTasks(SingleThreadEventExecutor.java:357)
        at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:357)
        at io.netty.util.concurrent.SingleThreadEventExecutor$2.run(SingleThreadEventExecutor.java:111)
        at java.lang.Thread.run(Thread.java:748)

My 3 workers are configured as follows:

SPARK_LOCAL_IP=worker-[X]
SPARK_LOCAL_DIRS=/data/spark/tmp
SPARK_WORKER_PORT=7078
SPARK_WORKER_DIR=/data/spark/work
SPARK_WORKER_OPTS="-Dspark.worker.cleanup.enabled=true -Dspark.worker.cleanup.appDataTtl=86400 -Dspark.worker.cleanup.interval=1800"

After many attempts to solve this, I tried to force the driver to start on the master's machine by adding the following option to my submit:

  --conf "spark.driver.host=worker-1"

But the driver was still started on a random worker, so this did not solve my problem.
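For reference, the full submit invocation with that extra option looked roughly like this (a sketch built from the script above, not a confirmed fix):

$SPARK_HOME/bin/spark-submit \
  --conf "spark.driver.port=${driverPort}" \
  --conf "spark.blockManager.port=${blockPort}" \
  --conf "spark.driver.host=worker-1" \
  --class $mainClass \
  --master $sparkMaster \
  --deploy-mode $deployMode \
  $jar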

Edit:

When I submit with the spark.driver.host option, the option does not appear in the "Launch Command" log (whereas spark.driver.port does, so I don't understand why this particular option is not picked up).

Edit 2:

I did some deeper testing: I now have only one worker, running on worker-2, and I still submit from worker-1, where my master is running.

When submitting the application, I can see this in the worker log:

2018-10-04 11:27:39,794 | dispatcher-event-loop-6 | INFO | org.apache.spark.internal.Logging$class.logInfo(Logging.scala:54) | Asked to launch driver driver-20181004112739-0003
2018-10-04 11:27:39,833 | DriverRunner for driver-20181004112739-0003 | INFO | org.apache.spark.internal.Logging$class.logInfo(Logging.scala:54) | Copying user jar file:/path/to/myjar-with-depencies.jar to /data/spark/work/driver-20181004112739-0003/myjar-with-depencies.jar
2018-10-04 11:27:39,833 | DriverRunner for driver-20181004112739-0003 | INFO | org.apache.spark.internal.Logging$class.logInfo(Logging.scala:54) | Copying /path/to/myjar-with-depencies.jar to /data/spark/work/driver-20181004112739-0003/myjar-with-depencies.jar
2018-10-04 11:27:40,243 | DriverRunner for driver-20181004112739-0003 | INFO | org.apache.spark.internal.Logging$class.logInfo(Logging.scala:54) | Launch Command: "/usr/java/jdk1.8.0_181-amd64/jre/bin/java" "-cp" "/root/spark-2.0.0-bin-hadoop2.7/conf/:/root/spark-2.0.0-bin-hadoop2.7/jars/*" "-Xmx1024M" "-Dspark.driver.supervise=false" "-Dspark.history.fs.cleaner.interval=12h" "-Dspark.submit.deployMode=cluster" "-Dspark.master=spark://worker-1:7077" "-Dspark.history.fs.cleaner.maxAge=1d" "-Dspark.app.name=my.package.Main" "-Dspark.jars=file:/path/to/myjar-with-depencies.jar" "org.apache.spark.deploy.worker.DriverWrapper" "spark://Worker@worker-2:7078" "/data/spark/work/driver-20181004112739-0003/myjar-with-depencies.jar" "my.package.Main"
2018-10-04 11:27:42,692 | dispatcher-event-loop-8 | WARN | org.apache.spark.internal.Logging$class.logWarning(Logging.scala:66) | Driver driver-20181004112739-0003 exited with failure

And I still get the same error in my driver log. I then tried to manually run the command that the DriverRunner launches:

"/usr/java/jdk1.8.0_181-amd64/jre/bin/java" "-cp" "/root/spark-2.0.0-bin-hadoop2.7/conf/:/root/spark-2.0.0-bin-hadoop2.7/jars/*" "-Xmx1024M" "-Dspark.driver.supervise=false" "-Dspark.history.fs.cleaner.interval=12h" "-Dspark.submit.deployMode=cluster" "-Dspark.master=spark://worker-1:7077" "-Dspark.history.fs.cleaner.maxAge=1d" "-Dspark.app.name=my.package.Main" "-Dspark.jars=file:/path/to/myjar-with-depencies.jar" "org.apache.spark.deploy.worker.DriverWrapper" "spark://Worker@worker-2:7078" "/data/spark/work/driver-20181004112739-0003/myjar-with-depencies.jar" "my.package.Main"

When I do that, the application starts correctly (surprisingly).

What difference between my manual launch and the DriverRunner's launch could cause this bind error?

Notes:

  • I did not modify the DriverRunner command line in any way to make it work.
  • I launched the command manually as root, and my Spark processes also run as root.
  • The behavior is the same on Spark 2.0.0 and Spark 2.0.2.
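One way to narrow down the difference between the two launches (a rough sketch, assuming a Linux /proc filesystem and that the worker JVM matches the pgrep pattern below) is to diff the environment inherited by anything the worker spawns, including the DriverWrapper, against the environment of the shell used for the manual launch:

# PID of the running Worker JVM (the pgrep pattern is an assumption; adjust it to your setup)
workerPid=$(pgrep -f org.apache.spark.deploy.worker.Worker)

# Environment that any process spawned by the worker inherits, one variable per line
tr '\0' '\n' < /proc/${workerPid}/environ | sort > /tmp/worker.env

# Environment of the interactive shell used for the manual launch
env | sort > /tmp/shell.env

diff /tmp/worker.env /tmp/shell.env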

1 Answer:

Answer 0 (score: 0):

So, I'm answering my own question, as I found the cause of this strange behavior.

It happens when I run spark-submit from a machine that has a spark-env.sh file, and more precisely when SPARK_LOCAL_IP is set on that machine.

To avoid this issue, I set up a fourth machine that runs only a Spark master, has no spark-env.sh file, and I run my spark-submit from there.
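A possible alternative, assuming the diagnosis above is right and that it does not disturb a worker running on the same machine (untested on this cluster), would be to simply remove or comment out the SPARK_LOCAL_IP line in conf/spark-env.sh on the machine spark-submit is run from:

# conf/spark-env.sh on the machine spark-submit is run from
# (sketch of the alternative, untested on this cluster)

# SPARK_LOCAL_IP=worker-1       # removed / commented out so spark-submit does not pick it up
SPARK_LOCAL_DIRS=/data/spark/tmp
SPARK_WORKER_PORT=7078
SPARK_WORKER_DIR=/data/spark/work
SPARK_WORKER_OPTS="-Dspark.worker.cleanup.enabled=true -Dspark.worker.cleanup.appDataTtl=86400 -Dspark.worker.cleanup.interval=1800"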