I'm having trouble starting a Spark cluster with one master and one worker. I downloaded and installed Hadoop 2.7.3 and Spark 2.0.0 on Ubuntu 16.04 LTS. I created a conf/slaves file with my slave's IP, and this is my spark-env.sh:
#!/usr/bin/env bash
export SPARK_DIST_CLASSPATH=$(hadoop classpath)
export SPARK_WORKER_CORES=2
export SPARK_MASTER_IP=192.168.1.6
export SPARK_LOCAL_IP=192.168.1.6
export SPARK_YARN_USER_ENV="JAVA_HOME=/usr/lib/jvm/java-8-oracle/jre"
I started the master using start-master.sh and everything is fine. The problems begin when I try to start the worker.
I've tried:
(1) - start-slave.sh spark://192.168.1.6:7077 (from worker)
(2) - start-slaves.sh (from master)
(3) - ./bin/spark-class org.apache.spark.deploy.worker.Worker spark://192.168.1.6:7077 (from worker)
With (1) and (2) the slave apparently starts, but it isn't shown on master:8080. With (3) it throws this exception:
16/08/31 14:17:03 INFO worker.Worker: Connecting to master master:7077...
16/08/31 14:17:03 WARN worker.Worker: Failed to connect to master master:7077
org.apache.spark.SparkException: Exception thrown in awaitResult
at org.apache.spark.rpc.RpcTimeout$$anonfun$1.applyOrElse(RpcTimeout.scala:77)
at org.apache.spark.rpc.RpcTimeout$$anonfun$1.applyOrElse(RpcTimeout.scala:75)
at scala.runtime.AbstractPartialFunction.apply(AbstractPartialFunction.scala:36)
at org.apache.spark.rpc.RpcTimeout$$anonfun$addMessageIfTimeout$1.applyOrElse(RpcTimeout.scala:59)
at org.apache.spark.rpc.RpcTimeout$$anonfun$addMessageIfTimeout$1.applyOrElse(RpcTimeout.scala:59)
at scala.PartialFunction$OrElse.apply(PartialFunction.scala:167)
at org.apache.spark.rpc.RpcTimeout.awaitResult(RpcTimeout.scala:83)
at org.apache.spark.rpc.RpcEnv.setupEndpointRefByURI(RpcEnv.scala:88)
at org.apache.spark.rpc.RpcEnv.setupEndpointRef(RpcEnv.scala:96)
at org.apache.spark.deploy.worker.Worker$$anonfun$org$apache$spark$deploy$worker$Worker$$tryRegisterAllMasters$1$$anon$1.run(Worker.scala:216)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
Caused by: java.io.IOException: Failed to connect to master/192.168.1.6:7077
at org.apache.spark.network.client.TransportClientFactory.createClient(TransportClientFactory.java:228)
at org.apache.spark.network.client.TransportClientFactory.createClient(TransportClientFactory.java:179)
at org.apache.spark.rpc.netty.NettyRpcEnv.createClient(NettyRpcEnv.scala:197)
at org.apache.spark.rpc.netty.Outbox$$anon$1.call(Outbox.scala:191)
at org.apache.spark.rpc.netty.Outbox$$anon$1.call(Outbox.scala:187)
... 4 more
Caused by: java.net.ConnectException: Connection refused: master/192.168.1.6:7077
at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:717)
at io.netty.channel.socket.nio.NioSocketChannel.doFinishConnect(NioSocketChannel.java:224)
at io.netty.channel.nio.AbstractNioChannel$AbstractNioUnsafe.finishConnect(AbstractNioChannel.java:289)
at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:528)
at io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:468)
at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:382)
at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:354)
at io.netty.util.concurrent.SingleThreadEventExecutor$2.run(SingleThreadEventExecutor.java:111)
... 1 more
16/08/31 14:17:40 ERROR worker.Worker: All masters are unresponsive! Giving up.
The master and worker are hosted in VMware VMs on the same Windows 10 host, using a bridged connection.
I've also disabled the firewall.
What can I do?
Thanks in advance.
Answer 0 (score: 1)
In the log:
16/08/31 14:17:03 INFO worker.Worker: Connecting to master master:7077...
you can see that it is trying to connect to master:7077.
Make sure the hostname master resolves to the given IP (192.168.1.6).
You can check the hostname mapping in your /etc/hosts file.
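A quick way to verify the mapping from the worker machine (a sketch, assuming the hostname is literally master, as the log shows):

# show what "master" resolves to; getent consults /etc/hosts before DNS
getent hosts master
# expected output once the mapping is in place:
# 192.168.1.6     master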
Answer 1 (score: 0)
Just to elaborate on this. Since it is looking for the host master, you have two options: either edit the file:
/etc/hosts
# add the following anywhere in the file
192.168.1.6 master
or go to your Spark configuration directory (probably /opt/spark/conf) and edit spark-defaults.conf:
# you may just want to change spark://master:7077 below to spark://192.168.1.6:7077 (or to the actual hostname)
spark.master spark://master:7077
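Whichever option you pick, restart the standalone daemons so the change takes effect, and check from the worker that the master's port is actually reachable; the "Connection refused" in the trace means nothing was accepting connections on 192.168.1.6:7077. A minimal sketch, assuming a default Spark layout with SPARK_HOME set and netcat installed:

# on the master: restart the standalone master
$SPARK_HOME/sbin/stop-master.sh
$SPARK_HOME/sbin/start-master.sh

# from the worker: verify the master port is reachable
nc -zv 192.168.1.6 7077

# then start the worker against the master URL
$SPARK_HOME/sbin/start-slave.sh spark://192.168.1.6:7077

If nc still reports connection refused, check on the master which address the master process actually bound to (for example, ss -tlnp | grep 7077); binding to 127.0.0.1 instead of 192.168.1.6 would produce exactly the error in the trace.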