Lost executors when trying to load a graph with Spark/GraphX on a YARN/HDFS cluster

Time: 2016-02-27 09:03:52

Tags: scala apache-spark hdfs yarn spark-graphx

I am trying to run a Spark/GraphX program written in Scala on a YARN cluster backed by HDFS. The cluster has 16 nodes, each with 16 GB of RAM and a 2 TB hard drive. I want to load a 3.29 GB undirected graph (called orkutUndirected.txt) using the edgeListFile function provided by the GraphX library:

import org.apache.spark.SparkContext
import org.apache.spark.SparkContext._
import org.apache.spark.streaming._
import org.apache.spark.streaming.StreamingContext._
import org.apache.spark.SparkConf
import org.apache.spark.graphx._
import org.apache.spark.rdd.RDD
import java.io._
import java.util.Date
import org.apache.spark.storage.StorageLevel
import org.apache.spark.streaming.StreamingContext

import scala.util.control.Breaks._
import scala.math._


object MyApp {

   def main(args: Array[String]): Unit = {

     // Create spark configuration and spark context
     val conf = new SparkConf().setAppName("My App")
     val sc = new SparkContext(conf)
     val edgeFile = "hdfs://master-bigdata:8020/user/sparklab/orkutUndirected.txt" 
      // Load the edges as a graph
     val graph =GraphLoader.edgeListFile(sc,edgeFile,false,1,StorageLevel.MEMORY_AND_DISK,StorageLevel.MEMORY_AND_DISK)
   }
}

I launch the run from the command line with the following spark-submit:

nohup spark-submit --master yarn --executor-memory 7g --num-executors 4 --executor-cores 2 ./target/scala-2.10/myapp_2.10-1.0.jar &

I have tried different --executor-memory sizes, but no luck! After a few minutes I can see the following in nohup.out:

16/02/24 23:45:25 ERROR YarnScheduler: Lost executor 1 on node12-bigdata: Executor heartbeat timed out after 160351 ms
16/02/24 23:45:29 ERROR YarnScheduler: Lost executor 1 on node12-bigdata: remote Rpc client disassociated
16/02/25 00:04:08 ERROR YarnScheduler: Lost executor 3 on node13-bigdata: remote Rpc client disassociated
16/02/25 00:18:05 ERROR YarnScheduler: Lost executor 4 on node06-bigdata: Executor heartbeat timed out after 129723 ms
16/02/25 00:18:07 ERROR YarnScheduler: Lost executor 4 on node06-bigdata: remote Rpc client disassociated
16/02/25 00:21:52 ERROR YarnScheduler: Lost executor 4 on node16-bigdata: remote Rpc client disassociated
16/02/25 00:41:29 ERROR YarnScheduler: Lost executor 1 on node03-bigdata: remote Rpc client disassociated
16/02/25 00:44:52 ERROR YarnScheduler: Lost executor 5 on node16-bigdata: remote Rpc client disassociated
16/02/25 00:44:52 ERROR TaskSetManager: Task 0 in stage 0.0 failed 4 times; aborting job
Exception in thread "main" org.apache.spark.SparkException: Job aborted due to stage failure: 
Task 0 in stage 0.0 failed 4 times, most recent failure: Lost task 0.3 in stage 0.0 (TID 3, node16-bigdata):
ExecutorLostFailure (executor 5 lost)
Driver stacktrace:
at org.apache.spark.scheduler.DAGScheduler.org$apache$spark$scheduler$DAGScheduler$$failJobAndIndependentStages(DAGScheduler.scala:1283)
at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1271)
at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1270)
at scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:47)
at org.apache.spark.scheduler.DAGScheduler.abortStage(DAGScheduler.scala:1270)
at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:697)
at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:697)
at scala.Option.foreach(Option.scala:236)

... ...

Do you have any idea what might be going wrong?

1 Answer:

Answer 0 (score: 0):

Depending on the type of objects you build from the raw text file, 3.9GB of raw data can easily exceed the working memory of this cluster, leading to long GC pauses and lost executors. On top of the overhead of wrapping the data in Java objects, GraphX adds extra overhead above plain RDDs. VisualVM and Ganglia are good tools for debugging these memory-related problems. Also see Tuning Spark for tips on how to keep your graph lean.
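For instance, here is a minimal sketch of what "keeping the graph lean" could look like in the question's code. The switch to Kryo serialization and to the serialized MEMORY_AND_DISK_SER storage levels is my suggestion, not something your current code implies:

import org.apache.spark.{SparkConf, SparkContext}
import org.apache.spark.graphx.GraphLoader
import org.apache.spark.storage.StorageLevel

object MyApp {
  def main(args: Array[String]): Unit = {
    // Kryo produces far more compact serialized objects than Java serialization
    val conf = new SparkConf().setAppName("My App")
      .set("spark.serializer", "org.apache.spark.serializer.KryoSerializer")
    val sc = new SparkContext(conf)

    val edgeFile = "hdfs://master-bigdata:8020/user/sparklab/orkutUndirected.txt"

    // The *_SER levels keep cached partitions serialized in memory and spill the
    // rest to disk, trading some CPU for a much smaller heap footprint
    val graph = GraphLoader.edgeListFile(
      sc, edgeFile,
      canonicalOrientation = false,
      numEdgePartitions = -1, // default: roughly one partition per HDFS block
      edgeStorageLevel = StorageLevel.MEMORY_AND_DISK_SER,
      vertexStorageLevel = StorageLevel.MEMORY_AND_DISK_SER)
  }
}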

Another possibility is that the data is not optimally partitioned, causing some tasks to stall. Look at the stage information in the Spark UI and make sure each task is processing an even share of the data; if the distribution is uneven, repartition the data. I found this Cloudera Blog post on the topic useful.
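If the distribution does turn out to be skewed, a sketch along these lines spreads the load more evenly; the partition count of 64 is an assumption on my part (roughly 2-4 partitions per core available to your executors is a common starting point):

import org.apache.spark.graphx.{GraphLoader, PartitionStrategy}
import org.apache.spark.storage.StorageLevel

// Reuses sc and edgeFile from the question's code. Splitting the edge list into
// more partitions gives each task a smaller, more even slice of the input.
val graph = GraphLoader.edgeListFile(
  sc, edgeFile,
  canonicalOrientation = false,
  numEdgePartitions = 64,
  edgeStorageLevel = StorageLevel.MEMORY_AND_DISK,
  vertexStorageLevel = StorageLevel.MEMORY_AND_DISK)

// Optionally rebalance edges with a 2D partitioning scheme before running any
// graph algorithms; EdgePartition2D bounds how many partitions each vertex is
// replicated to, which helps on large graphs.
val partitioned = graph.partitionBy(PartitionStrategy.EdgePartition2D)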