Getting java.net.BindException when trying to start a Spark master on an EC2 node using the public IP

Date: 2015-07-27 17:11:54

Tags: amazon-ec2 apache-spark

I am trying to start a Spark master for a standalone cluster on an EC2 node. The CLI command I am using looks like this:

JAVA_HOME=<location of my JDK install> \
java -cp <spark install dir>/sbin/../conf/:<spark install dir>/lib/spark-assembly-1.4.0-hadoop2.6.0.jar:<spark install dir>/lib/datanucleus-core-3.2.10.jar:<spark install dir>/lib/datanucleus-api-jdo-3.2.6.jar:<spark install dir>/lib/datanucleus-rdbms-3.2.9.jar \
-Xms512m -Xmx512m -XX:MaxPermSize=128m \
org.apache.spark.deploy.master.Master --port 7077 --webui-port 8080 --host 54.xx.xx.xx

Note that I am specifying the --host argument; I want my Spark master to listen on a specific IP address. The host I am specifying (i.e. 54.xx.xx.xx) is the public IP of my EC2 node. I have verified that nothing else is listening on port 7077 and that my EC2 security group has all ports open. I have also double-checked that the public IP is correct.
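
For anyone reproducing this, checks along those lines can be run on the node itself. A minimal sketch, assuming a stock Linux AMI where netstat, ip, and curl are available and the classic EC2 instance-metadata endpoint is reachable:

# confirm nothing is already listening on port 7077
sudo netstat -tlnp | grep 7077

# list the IPv4 addresses actually assigned to the node's interfaces
ip -4 addr show

# confirm the node's public and private IPs via instance metadata
curl http://169.254.169.254/latest/meta-data/public-ipv4
curl http://169.254.169.254/latest/meta-data/local-ipv4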

When I use --host 54.xx.xx.xx, I get the following error message:

15/07/27 17:04:09 ERROR NettyTransport: failed to bind to /54.xx.xx.xx:7093, shutting down Netty transport
Exception in thread "main" java.net.BindException: Failed to bind to: /54.xx.xx.xx:7093: Service 'sparkMaster' failed after 16 retries!
    at org.jboss.netty.bootstrap.ServerBootstrap.bind(ServerBootstrap.java:272)
    at akka.remote.transport.netty.NettyTransport$$anonfun$listen$1.apply(NettyTransport.scala:393)
    at akka.remote.transport.netty.NettyTransport$$anonfun$listen$1.apply(NettyTransport.scala:389)
    at scala.util.Success$$anonfun$map$1.apply(Try.scala:206)
    at scala.util.Try$.apply(Try.scala:161)
    at scala.util.Success.map(Try.scala:206)
    at scala.concurrent.Future$$anonfun$map$1.apply(Future.scala:235)
    at scala.concurrent.Future$$anonfun$map$1.apply(Future.scala:235)
    at scala.concurrent.impl.CallbackRunnable.run(Promise.scala:32)
    at akka.dispatch.BatchingExecutor$Batch$$anonfun$run$1.processBatch$1(BatchingExecutor.scala:67)
    at akka.dispatch.BatchingExecutor$Batch$$anonfun$run$1.apply$mcV$sp(BatchingExecutor.scala:82)
    at akka.dispatch.BatchingExecutor$Batch$$anonfun$run$1.apply(BatchingExecutor.scala:59)
    at akka.dispatch.BatchingExecutor$Batch$$anonfun$run$1.apply(BatchingExecutor.scala:59)
    at scala.concurrent.BlockContext$.withBlockContext(BlockContext.scala:72)
    at akka.dispatch.BatchingExecutor$Batch.run(BatchingExecutor.scala:58)
    at akka.dispatch.TaskInvocation.run(AbstractDispatcher.scala:41)
    at akka.dispatch.ForkJoinExecutorConfigurator$AkkaForkJoinTask.exec(AbstractDispatcher.scala:393)
    at scala.concurrent.forkjoin.ForkJoinTask.doExec(ForkJoinTask.java:260)
    at scala.concurrent.forkjoin.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1339)
    at scala.concurrent.forkjoin.ForkJoinPool.runWorker(ForkJoinPool.java:1979)
    at scala.concurrent.forkjoin.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:107)

This does not happen if I omit the --host argument, nor if I use --host 10.0.xx.xx, where 10.0.xx.xx is my private EC2 IP address.

Why is Spark unable to bind to the public EC2 address?

2 Answers:

Answer 0 (score: 1):

I ran into the same problem on an Oracle Cloud instance. My private IP was something like 10.x.x.2, while my public IP was something like 140.x.x.238. (On such cloud instances the public IP is supplied via NAT and is not assigned to any network interface inside the VM, so a process cannot bind to it directly.)

You can follow these steps:

  1. Check your private IP address.

    Use the ifconfig command to find your network card's address:

ens3: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 9000
        inet 10.x.x.2  netmask 255.255.255.0  broadcast 10.0.0.255
        inet6 fe80::17ff:fe00:7cf9  prefixlen 64  scopeid 0x20<link>
        ether 02:00:17:00:7c:f9  txqueuelen 1000  (Ethernet)
        RX packets 146457  bytes 61901565 (61.9 MB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 142865  bytes 103614447 (103.6 MB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0
  2. Set conf/spark-env.sh (see the annotated sketch after this list):
SPARK_LOCAL_IP=127.0.0.1
SPARK_MASTER_IP=YOUR_HOST_NAME
  3. Change the hosts file (see the example after this list).

    On Ubuntu 18.04, edit /etc/hosts.

    Remove the line like 127.0.1.1 YOUR_HOST_NAME.

    In my case, change 140.x.x.238 YOUR_HOST_NAME to 10.x.x.2 YOUR_HOST_NAME.
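
For step 2, here are the same two spark-env.sh lines with their roles spelled out; a sketch, with the comments paraphrasing what Spark's documentation says these variables do (YOUR_HOST_NAME is the placeholder from above):

# conf/spark-env.sh
SPARK_LOCAL_IP=127.0.0.1        # IP address Spark binds to on this node
SPARK_MASTER_IP=YOUR_HOST_NAME  # address/hostname to bind the standalone master to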
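
For step 3, a sketch of what the relevant /etc/hosts lines look like before and after the edit, using this answer's example addresses (YOUR_HOST_NAME again stands in for the real hostname):

# /etc/hosts -- before
127.0.0.1    localhost
127.0.1.1    YOUR_HOST_NAME   # delete this line
140.x.x.238  YOUR_HOST_NAME   # public IP: change this entry...

# /etc/hosts -- after
127.0.0.1    localhost
10.x.x.2     YOUR_HOST_NAME   # ...to the private IP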

Answer 1 (score: 0):

Try setting the environment variable SPARK_LOCAL_IP=54.xx.xx.xx, as in the sketch below.

Referencing the first SO answer to a similar problem here.
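
A minimal sketch of applying this suggestion, reusing the launch command from the question (whether the public address is bindable at all is exactly what is in question, so this only illustrates the mechanics):

export SPARK_LOCAL_IP=54.xx.xx.xx

# then launch the master as in the question; --host can be dropped so that
# Spark falls back to the address exported above
java -cp <same classpath as in the question> \
  -Xms512m -Xmx512m -XX:MaxPermSize=128m \
  org.apache.spark.deploy.master.Master --port 7077 --webui-port 8080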