Connecting to an Apache Spark master via IP

Time: 2015-11-17 22:11:34

Tags: deployment apache-spark

I am trying to connect to my Apache Spark master from Java, using its IP address instead of the hostname.

This is the code where I create the SparkConf:
return new SparkConf()
    .setAppName(appName)
    .setMaster(master)
    .setJars(jars)
    .set("spark.serializer",
            "org.apache.spark.serializer.KryoSerializer");

I want to pass the master as spark://IP:PORT. Unfortunately, this does not seem to work: it only works with a hostname (e.g. spark://MyMacbook:7077), not with an IP (e.g. spark://127.0.0.1:7077). Is there a way to start the master so that it accepts requests addressed via IP?

I need this because I have a fairly complex setup based on Docker and want to reach the master from outside (initially just for testing purposes).

Edit: I have now checked the master's console, which states:

dropping message [class akka.actor.ActorSelectionMessage] for non-local recipient [Actor[akka.tcp://sparkMaster@192.168.99.100:7077/]] arriving at [akka.tcp://sparkMaster@192.168.99.100:7077] inbound addresses are [akka.tcp://sparkMaster@spark-master:7077]

So we can see that Akka drops the message because it was sent to the IP (192.168.99.100) instead of the hostname (spark-master). But I want to use the IP... Passing -h 192.168.99.100 as a startup parameter to the master also does not work in my case (because of Docker, where 192.168.99.100 is the host machine's IP, not the container's).
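A workaround I am considering (an untested assumption on my side, not something I have confirmed): keep the hostname in the master URL so that it matches Akka's inbound address, and make spark-master resolvable on the client, e.g. via an /etc/hosts entry:

// Hypothetical workaround: requires an /etc/hosts entry on the client such as
//   192.168.99.100  spark-master
// so that the hostname resolves to the reachable IP.
SparkConf conf = new SparkConf()
        .setAppName(appName)
        .setMaster("spark://spark-master:7077");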

Is it possible to define multiple hostnames, or at least to accept all incoming requests?

Edit: The problem is still unsolved, but I found another issue. When I try to start a Spark standalone master and bind it to the public IP (192.168.99.100 in my Docker case), I get the following error:

Exception in thread "main" java.net.BindException: Failed to bind to: /192.168.99.100:7093: Service 'sparkMaster' failed after 16 retries!
at org.jboss.netty.bootstrap.ServerBootstrap.bind(ServerBootstrap.java:272)
at akka.remote.transport.netty.NettyTransport$$anonfun$listen$1.apply(NettyTransport.scala:393)
at akka.remote.transport.netty.NettyTransport$$anonfun$listen$1.apply(NettyTransport.scala:389)
at scala.util.Success$$anonfun$map$1.apply(Try.scala:206)
at scala.util.Try$.apply(Try.scala:161)
at scala.util.Success.map(Try.scala:206)
at scala.concurrent.Future$$anonfun$map$1.apply(Future.scala:235)
at scala.concurrent.Future$$anonfun$map$1.apply(Future.scala:235)
at scala.concurrent.impl.CallbackRunnable.run(Promise.scala:32)
at akka.dispatch.BatchingExecutor$AbstractBatch.processBatch(BatchingExecutor.scala:55)
at akka.dispatch.BatchingExecutor$BlockableBatch$$anonfun$run$1.apply$mcV$sp(BatchingExecutor.scala:91)
at akka.dispatch.BatchingExecutor$BlockableBatch$$anonfun$run$1.apply(BatchingExecutor.scala:91)
at akka.dispatch.BatchingExecutor$BlockableBatch$$anonfun$run$1.apply(BatchingExecutor.scala:91)
at scala.concurrent.BlockContext$.withBlockContext(BlockContext.scala:72)
at akka.dispatch.BatchingExecutor$BlockableBatch.run(BatchingExecutor.scala:90)
at akka.dispatch.TaskInvocation.run(AbstractDispatcher.scala:40)
at akka.dispatch.ForkJoinExecutorConfigurator$AkkaForkJoinTask.exec(AbstractDispatcher.scala:397)
at scala.concurrent.forkjoin.ForkJoinTask.doExec(ForkJoinTask.java:260)
at scala.concurrent.forkjoin.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1339)
at scala.concurrent.forkjoin.ForkJoinPool.runWorker(ForkJoinPool.java:1979)
at scala.concurrent.forkjoin.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:107)
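The bind failure itself can be reproduced without Spark: the JVM refuses to bind a server socket to an address that is not assigned to any local interface, which is exactly the situation inside the container (a minimal sketch, independent of Spark):

import java.net.InetAddress;
import java.net.ServerSocket;

public class BindTest {
    public static void main(String[] args) throws Exception {
        // 192.168.99.100 is the docker-machine host IP; inside the container
        // it is not assigned to any local interface, so this throws
        // java.net.BindException, just like Spark's 'sparkMaster' service.
        ServerSocket socket = new ServerSocket(
                7093, 50, InetAddress.getByName("192.168.99.100"));
        System.out.println("Bound to " + socket.getLocalSocketAddress());
        socket.close();
    }
}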

This problem seems to be related to (or the same as?) this unanswered question by Wayne Song: Getting java.net.BindException when attempting to start Spark master on EC2 node with public IP

0 Answers:

No answers yet