It gives me ExecutorLostFailure every time I try to run it on this folder
Hi, I am a beginner with Spark. I am trying to run a job on Spark 1.4.1 with 8 slave nodes, each with 11.7 GB of memory and 3.2 GB of disk. I am running the Spark task from one of the slave nodes (out of the 8), so with a storage fraction of 0.7 only about 4.8 GB is available on each node, and I am using Mesos as the cluster manager. This is my configuration:
spark.master mesos://uc1f-bioinfocloud-vamp-m-1:5050
spark.eventLog.enabled true
spark.driver.memory 6g
spark.storage.memoryFraction 0.7
spark.core.connection.ack.wait.timeout 800
spark.akka.frameSize 50
spark.rdd.compress true
I am trying to run the Spark MLlib Naive Bayes algorithm on a folder of about 14 GB of data. (There is no problem when I run the job on a 6 GB folder.) I am reading this folder from Google Storage as an RDD and giving 32 as the partition parameter (I have also tried increasing the number of partitions). Then I use TF to create feature vectors and predict on that basis. But whenever I try to run it on this folder, it gives me ExecutorLostFailure every time. I have tried different configurations but nothing helped. Maybe I am missing something very basic that I can't figure out. Any help or suggestion would be highly valuable.
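A simplified sketch of what the job does (not the exact code; label_of() here is a stand-in for my real labeling step):

from pyspark import SparkContext
from pyspark.mllib.feature import HashingTF
from pyspark.mllib.classification import NaiveBayes
from pyspark.mllib.regression import LabeledPoint

sc = SparkContext(appName="V2ProcessRecords")

# Read the folder from Google Storage as (path, content) pairs, 32 partitions
docs = sc.wholeTextFiles("gs://uc1f-bioinfocloud-vamp-m/literature/xml/P*/*.nxml", 32)

# Build TF feature vectors and train Naive Bayes on them
tf = HashingTF()
training = docs.map(lambda kv: LabeledPoint(label_of(kv[0]),  # label_of: stand-in for the real labeling logic
                                            tf.transform(kv[1].split())))
model = NaiveBayes.train(training)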
The log is:
15/07/21 01:18:20 ERROR TaskSetManager: Task 3 in stage 2.0 failed 4 times; aborting job
15/07/21 01:18:20 INFO TaskSchedulerImpl: Cancelling stage 2
15/07/21 01:18:20 INFO TaskSchedulerImpl: Stage 2 was cancelled
15/07/21 01:18:20 INFO DAGScheduler: ResultStage 2 (collect at /opt/work/V2ProcessRecords.py:213) failed in 28.966 s
15/07/21 01:18:20 INFO DAGScheduler: Executor lost: 20150526-135628-3255597322-5050-1304-S8 (epoch 3)
15/07/21 01:18:20 INFO BlockManagerMasterEndpoint: Trying to remove executor 20150526-135628-3255597322-5050-1304-S8 from BlockManagerMaster.
15/07/21 01:18:20 INFO DAGScheduler: Job 2 failed: collect at /opt/work/V2ProcessRecords.py:213, took 29.013646 s
Traceback (most recent call last):
File "/opt/work/V2ProcessRecords.py", line 213, in <module>
secondPassRDD = firstPassRDD.map(lambda ( name, title, idval, pmcId, pubDate, article, tags , author, ifSigmaCust, wclass): ( str(name), title, idval, pmcId, pubDate, article, tags , author, ifSigmaCust , "Yes" if ("PMC" + pmcId) in rddNIHGrant else ("No") , wclass)).collect()
File "/usr/local/spark/python/lib/pyspark.zip/pyspark/rdd.py", line 745, in collect
File "/usr/local/spark/python/lib/py4j-0.8.2.1-src.zip/py4j/java_gateway.py", line 538, in __call__
File "/usr/local/spark/python/lib/py4j-0.8.2.1-src.zip/py4j/protocol.py", line 300, in get_return_value
py4j.protocol.Py4JJavaError: An error occurred while calling z:org.apache.spark.api.python.PythonRDD.collectAndServe.
: org.apache.spark.SparkException: Job aborted due to stage failure: Task 3 in stage 2.0 failed 4 times, most recent failure: Lost task 3.3 in stage 2.0 (TID 12, vamp-m-2.c.quantum-854.internal): ExecutorLostFailure (executor 20150526-135628-3255597322-5050-1304-S8 lost)
Driver stacktrace:
at org.apache.spark.scheduler.DAGScheduler.org$apache$spark$scheduler$DAGScheduler$$failJobAndIndependentStages(DAGScheduler.scala:1266)
at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1257)
at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1256)
at scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:47)
at org.apache.spark.scheduler.DAGScheduler.abortStage(DAGScheduler.scala:1256)
at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:730)
at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:730)
at scala.Option.foreach(Option.scala:236)
at org.apache.spark.scheduler.DAGScheduler.handleTaskSetFailed(DAGScheduler.scala:730)
at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1450)
at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1411)
at org.apache.spark.util.EventLoop$$anon$1.run(EventLoop.scala:48)
15/07/21 01:18:20 INFO BlockManagerMaster: Removed 20150526-135628-3255597322-5050-1304-S8 successfully in removeExecutor
15/07/21 01:18:20 INFO DAGScheduler: Host added was in lost list earlier:vamp-m-2.c.quantum-854.internal
Jul 21, 2015 1:01:15 AM INFO: parquet.hadoop.ParquetFileReader: Initiating action with parallelism: 5
15/07/21 01:18:20 INFO SparkContext: Invoking stop() from shutdown hook
{"Event":"SparkListenerTaskStart","Stage ID":2,"Stage Attempt ID":0,"Task Info":{"Task ID":11,"Index":6,"Attempt":2,"Launch Time":1437616381852,"Executor ID":"20150526-135628-3255597322-5050-1304-S8","Host":"uc1f-bioinfocloud-vamp-m-2.c.quantum-device-854.internal","Locality":"PROCESS_LOCAL","Speculative":false,"Getting Result Time":0,"Finish Time":0,"Failed":false,"Accumulables":[]}}
{"Event":"SparkListenerExecutorRemoved","Timestamp":1437616389696,"Executor ID":"20150526-135628-3255597322-5050-1304-S8","Removed Reason":"Lost executor"}
{"Event":"SparkListenerTaskEnd","Stage ID":2,"Stage Attempt ID":0,"Task Type":"ResultTask","Task End Reason":{"Reason":"ExecutorLostFailure","Executor ID":"20150526-135628-3255597322-5050-1304-S8"},"Task Info":{"Task ID":11,"Index":6,"Attempt":2,"Launch Time":1437616381852,"Executor ID":"20150526-135628-3255597322-5050-1304-S8","Host":"uc1f-bioinfocloud-vamp-m-2.c.quantum-device-854.internal","Locality":"PROCESS_LOCAL","Speculative":false,"Getting Result Time":0,"Finish Time":1437616389697,"Failed":true,"Accumulables":[]}}
{"Event":"SparkListenerExecutorAdded","Timestamp":1437616389707,"Executor ID":"20150526-135628-3255597322-5050-1304-S8","Executor Info":{"Host":"uc1f-bioinfocloud-vamp-m-2.c.quantum-device-854.internal","Total Cores":1,"Log Urls":{}}}
{"Event":"SparkListenerTaskStart","Stage ID":2,"Stage Attempt ID":0,"Task Info":{"Task ID":12,"Index":6,"Attempt":3,"Launch Time":1437616389702,"Executor ID":"20150526-135628-3255597322-5050-1304-S8","Host":"uc1f-bioinfocloud-vamp-m-2.c.quantum-device-854.internal","Locality":"PROCESS_LOCAL","Speculative":false,"Getting Result Time":0,"Finish Time":0,"Failed":false,"Accumulables":[]}}
{"Event":"SparkListenerExecutorRemoved","Timestamp":1437616397743,"Executor ID":"20150526-135628-3255597322-5050-1304-S8","Removed Reason":"Lost executor"}
{"Event":"SparkListenerTaskEnd","Stage ID":2,"Stage Attempt ID":0,"Task Type":"ResultTask","Task End Reason":{"Reason":"ExecutorLostFailure","Executor ID":"20150526-135628-3255597322-5050-1304-S8"},"Task Info":{"Task ID":12,"Index":6,"Attempt":3,"Launch Time":1437616389702,"Executor ID":"20150526-135628-3255597322-5050-1304-S8","Host":"uc1f-bioinfocloud-vamp-m-2.c.quantum-device-854.internal","Locality":"PROCESS_LOCAL","Speculative":false,"Getting Result Time":0,"Finish Time":1437616397743,"Failed":true,"Accumulables":[]}}
{"Event":"SparkListenerStageCompleted","Stage Info":{"Stage ID":2,"Stage Attempt ID":0,"Stage Name":"collect at /opt/work/V2ProcessRecords.py:215","Number of Tasks":72,"RDD Info":[{"RDD ID":6,"Name":"PythonRDD","Parent IDs":[0],"Storage Level":{"Use Disk":false,"Use Memory":false,"Use ExternalBlockStore":false,"Deserialized":false,"Replication":1},"Number of Partitions":72,"Number of Cached Partitions":0,"Memory Size":0,"ExternalBlockStore Size":0,"Disk Size":0},{"RDD ID":0,"Name":"gs://uc1f-bioinfocloud-vamp-m/literature/xml/P*/*.nxml","Scope":"{\"id\":\"0\",\"name\":\"wholeTextFiles\"}","Parent IDs":[],"Storage Level":{"Use Disk":false,"Use Memory":false,"Use ExternalBlockStore":false,"Deserialized":false,"Replication":1},"Number of Partitions":72,"Number of Cached Partitions":0,"Memory Size":0,"ExternalBlockStore Size":0,"Disk Size":0}],"Parent IDs":[],"Details":"","Submission Time":1437616365566,"Completion Time":1437616397753,"Failure Reason":"Job aborted due to stage failure: Task 6 in stage 2.0 failed 4 times, most recent failure: Lost task 6.3 in stage 2.0 (TID 12, uc1f-bioinfocloud-vamp-m-2.c.quantum-device-854.internal): ExecutorLostFailure (executor 20150526-135628-3255597322-5050-1304-S8 lost)\nDriver stacktrace:","Accumulables":[]}}
{"Event":"SparkListenerJobEnd","Job ID":2,"Completion Time":1437616397755,"Job Result":{"Result":"JobFailed","Exception":{"Message":"Job aborted due to stage failure: Task 6 in stage 2.0 failed 4 times, most recent failure: Lost task 6.3 in stage 2.0 (TID 12, uc1f-bioinfocloud-vamp-m-2.c.quantum-device-854.internal): ExecutorLostFailure (executor 20150526-135628-3255597322-5050-1304-S8 lost)\nDriver stacktrace:","Stack Trace":[{"Declaring Class":"org.apache.spark.scheduler.DAGScheduler","Method Name":"org$apache$spark$scheduler$DAGScheduler$$failJobAndIndependentStages","File Name":"DAGScheduler.scala","Line Number":1266},{"Declaring Class":"org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1","Method Name":"apply","File Name":"DAGScheduler.scala","Line Number":1257},{"Declaring Class":"org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1","Method Name":"apply","File Name":"DAGScheduler.scala","Line Number":1256},{"Declaring Class":"scala.collection.mutable.ResizableArray$class","Method Name":"foreach","File Name":"ResizableArray.scala","Line Number":59},{"Declaring Class":"scala.collection.mutable.ArrayBuffer","Method Name":"foreach","File Name":"ArrayBuffer.scala","Line Number":47},{"Declaring Class":"org.apache.spark.scheduler.DAGScheduler","Method Name":"abortStage","File Name":"DAGScheduler.scala","Line Number":1256},{"Declaring Class":"org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1","Method Name":"apply","File Name":"DAGScheduler.scala","Line Number":730},{"Declaring Class":"org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1","Method Name":"apply","File Name":"DAGScheduler.scala","Line Number":730},{"Declaring Class":"scala.Option","Method Name":"foreach","File Name":"Option.scala","Line Number":236},{"Declaring Class":"org.apache.spark.scheduler.DAGScheduler","Method Name":"handleTaskSetFailed","File Name":"DAGScheduler.scala","Line Number":730},{"Declaring Class":"org.apache.spark.scheduler.DAGSchedulerEventProcessLoop","Method Name":"onReceive","File Name":"DAGScheduler.scala","Line Number":1450},{"Declaring Class":"org.apache.spark.scheduler.DAGSchedulerEventProcessLoop","Method Name":"onReceive","File Name":"DAGScheduler.scala","Line Number":1411},{"Declaring Class":"org.apache.spark.util.EventLoop$$anon$1","Method Name":"run","File Name":"EventLoop.scala","Line Number":48}]}}}
Answer 0 (score: 4)
This error occurs because a task failed more than four times. Try increasing the level of parallelism in your cluster using the following parameter:
--conf "spark.default.parallelism=100"
Set the parallelism value to 2 to 3 times the number of cores available on your cluster. If that doesn't work, try increasing the parallelism exponentially: if your current parallelism doesn't work, multiply it by 2, and so on. Also, I have observed that it helps if your level of parallelism is a prime number, especially if you are using groupByKey.
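As a sketch, the setting can also go in code instead of on the command line (the core count here is an assumption; adjust to your cluster):

from pyspark import SparkConf, SparkContext

# Assuming ~32 cores in total (8 workers x 4 cores -- an assumption),
# 2-3x gives 64-96; 79 is a prime in that range, per the note above.
conf = (SparkConf()
        .setAppName("V2ProcessRecords")
        .set("spark.default.parallelism", "79"))
sc = SparkContext(conf=conf)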
Answer 1 (score: 2)
It is hard to say what the problem is without the logs of the failed executor (rather than the driver's), but most likely it is a memory problem. Try increasing the number of partitions significantly (if your current count is 32, try 200).
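For example (a sketch; input_path stands in for your gs:// folder):

docs = sc.wholeTextFiles(input_path, 200)  # minPartitions raised from 32
# or repartition an RDD you already have:
docs = docs.repartition(200)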
Answer 2 (score: 2)
I ran into this problem, and for me the cause was a very high incidence of one key in a reduceByKey task. This was (I think) causing huge lists to accumulate on one of the executors, which would then throw OOM errors. The solution for me was to filter out the heavily populated keys before performing the reduceByKey, though I appreciate that this may or may not be possible depending on your application. I didn't need all of my data anyway.
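Something along these lines (a sketch: the pairs RDD, the combine function, and the 1,000,000 cutoff are placeholders, not values from my job):

# Count how often each key occurs, then drop the pathological ones
counts = pairs.map(lambda kv: (kv[0], 1)).reduceByKey(lambda a, b: a + b)
hot_keys = set(counts.filter(lambda kc: kc[1] > 1000000).keys().collect())
hot = sc.broadcast(hot_keys)
reduced = (pairs.filter(lambda kv: kv[0] not in hot.value)
                .reduceByKey(combine))  # combine = the original reduce function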
Answer 3 (score: 2)
From my understanding, the most common cause of ExecutorLostFailure is an OOM in an executor.
To resolve an OOM issue, you need to figure out what exactly is causing it. Simply increasing the default parallelism or the executor memory is not a strategic solution.
If you look at what increasing parallelism does, it tries to create more tasks so that each executor works on less and less data. But if your data is skewed, so that the key on which the data is partitioned (for parallelism) carries far more data than the others, simply increasing parallelism will have no effect.
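A quick way to check whether you have that kind of skew (a sketch over a hypothetical pairs RDD):

# The ten most frequent keys; one huge count next to small ones means skew
top10 = (pairs.map(lambda kv: (kv[0], 1))
              .reduceByKey(lambda a, b: a + b)
              .top(10, key=lambda kc: kc[1]))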
Similarly, just increasing the executor memory is a very inefficient way of handling the scenario: if only one executor is failing with ExecutorLostFailure, requesting the increased memory for all executors will make your application need far more memory than it actually requires.