I am running Spark 0.7.2 in standalone mode, processing ~90 GB of log data (compressed: 19 GB) with 7 workers and 1 separate master, using the following driver program:
System.setProperty("spark.default.parallelism", "32")
val sc = new SparkContext("spark://10.111.1.30:7077", "MRTest", System.getenv("SPARK_HOME"), Seq(System.getenv("NM_JAR_PATH")))
val logData = sc.textFile("hdfs://10.111.1.30:54310/logs/")
val dcxMap = logData.map(line => (line.split("\\|")(0),
line.split("\\|")(9)))
.reduceByKey(_ + " || " + _)
dcxMap.saveAsTextFile("hdfs://10.111.1.30:54310/out")
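(For completeness, an equivalent way to write the same job that splits each line only once — just a sketch with the same paths and the Spark 0.7.x API as above, not necessarily what was actually run:)

// Sketch only: same job as above, splitting each line once per record.
import spark.SparkContext
import spark.SparkContext._   // brings reduceByKey into scope via PairRDDFunctions

System.setProperty("spark.default.parallelism", "32")
val sc = new SparkContext("spark://10.111.1.30:7077", "MRTest",
  System.getenv("SPARK_HOME"), Seq(System.getenv("NM_JAR_PATH")))

val dcxMap = sc.textFile("hdfs://10.111.1.30:54310/logs/")
  .map { line =>
    val fields = line.split("\\|")   // split once and reuse the array
    (fields(0), fields(9))           // key = first field, value = tenth field
  }
  .reduceByKey(_ + " || " + _)       // concatenate all values per key

dcxMap.saveAsTextFile("hdfs://10.111.1.30:54310/out")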
After the ShuffleMapTasks of Stage 1 have completed:
Stage 1 (reduceByKey at DcxMap.scala:31) finished in 111.312 s
it submits Stage 0:
Submitting Stage 0 (MappedRDD[6] at saveAsTextFile at DcxMap.scala:38), which is now runnable
After some serialization work, it prints:
spark.MapOutputTrackerActor - Asked to send map output locations for shuffle 0 to host23
spark.MapOutputTracker - Size of output statuses for shuffle 0 is 2008 bytes
spark.MapOutputTrackerActor - Asked to send map output locations for shuffle 0 to host21
spark.MapOutputTrackerActor - Asked to send map output locations for shuffle 0 to host22
spark.MapOutputTrackerActor - Asked to send map output locations for shuffle 0 to host26
spark.MapOutputTrackerActor - Asked to send map output locations for shuffle 0 to host24
spark.MapOutputTrackerActor - Asked to send map output locations for shuffle 0 to host27
spark.MapOutputTrackerActor - Asked to send map output locations for shuffle 0 to host28
After this, nothing happens at all, and top shows that the workers are all idle now. If I look at the logs on the worker machines, the same thing happens on each of them:
13/06/21 07:32:25 INFO network.SendingConnection: Initiating connection to [host27/127.0.1.1:34288]
13/06/21 07:32:25 INFO network.SendingConnection: Initiating connection to [host27/127.0.1.1:36040]
13/06/21 07:32:25 INFO network.SendingConnection: Initiating connection to [host27/127.0.1.1:50467]
13/06/21 07:32:25 INFO network.SendingConnection: Initiating connection to [host27/127.0.1.1:60833]
13/06/21 07:32:25 INFO network.SendingConnection: Initiating connection to [host27/127.0.1.1:49893]
13/06/21 07:32:25 INFO network.SendingConnection: Initiating connection to [host27/127.0.1.1:39907]
Then, for each of these "Initiating connection" attempts, the same error is thrown on every worker (taking host27's log as an example and showing only the first occurrence of the error):
13/06/21 07:32:25 WARN network.SendingConnection: Error finishing connection to host27/127.0.1.1:49893
java.net.ConnectException: Connection refused
at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:701)
at spark.network.SendingConnection.finishConnect(Connection.scala:221)
at spark.network.ConnectionManager.spark$network$ConnectionManager$$run(ConnectionManager.scala:127)
at spark.network.ConnectionManager$$anon$4.run(ConnectionManager.scala:70)
Why does this happen? The workers seem to be able to communicate with each other; the only problem apparently arises when a worker tries to send messages to itself: in the example above, host27 tries to send 6 messages to itself and fails 6 times, while sending messages to the other workers works fine. Does anyone have an idea?
Edit: Maybe it has something to do with 127.0.1.1 being used instead of 127.0.0.1? Our /etc/hosts looks like this:
127.0.0.1 localhost
127.0.1.1 host27.<ourdomain> host27
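For reference, a quick way to check which address the JVM resolves the local hostname to (a standalone diagnostic sketch, independent of the Spark job):

import java.net.InetAddress

object HostCheck {
  def main(args: Array[String]): Unit = {
    // Resolve this machine's own hostname, the same way a JVM-based service would.
    val local = InetAddress.getLocalHost
    println(local.getHostName + " -> " + local.getHostAddress)
    // With the /etc/hosts above, this tends to pick up the 127.0.1.1 entry,
    // e.g. "host27.<ourdomain> -> 127.0.1.1".
  }
}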
Answer 0 (score: 0):
I found out that the problem is related to this question. However, for me, setting SPARK_LOCAL_IP on the workers did not fix it.
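(For reference, such a setting typically goes into conf/spark-env.sh on each worker; the address below is only a placeholder, not a value from our cluster:)

# conf/spark-env.sh on each worker -- sketch only; use the worker's own LAN IP
export SPARK_LOCAL_IP=10.111.1.27   # hypothetical address for host27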
I had to change /etc/hosts to:
127.0.0.1 localhost
Now it runs smoothly.
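An alternative often suggested for this kind of 127.0.1.1 issue (not what I did, and the address below is only a placeholder) is to map the hostname to the machine's real LAN IP rather than dropping the entry:

127.0.0.1    localhost
10.111.1.27  host27.<ourdomain> host27   # hypothetical LAN IP; adjust per host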