Apache Spark job fails during execution: ArrayIndexOutOfBoundsException

Date: 2015-04-28 16:12:55

Tags: apache-spark

Here is the code:

val path = "C:\\Users\\John\\Downloads\\crimes.csv"
val crimeFile = sc.textFile(path)
val crimerows = crimeFile.map(l=>l.split(",").map(e=>e.trim))
//taking the first row as header
val header = crimerows.first
//filter out the header
val crimes = crimerows.filter(_(0)!=header(0))
//mapping the field to be reduced
val crimetype = crimes.map(l=>(l(5),1))

val stats = crimetype.reduceByKey(_+_)
stats.count

Below is the error I get. I am using Spark 1.2.0 with Scala 2.10.4 (Java HotSpot(TM) Client VM, Java 1.8.0_45). The file is about 1 GB, and the JVM is set to the default heap of 256 MB.

Any help is appreciated. Here is the error:

15/04/28 21:17:47 ERROR Executor: Exception in task 32.0 in stage 11.0 (TID 169)
java.lang.ArrayIndexOutOfBoundsException: 5
at $line13.$read$$iwC$$iwC$$iwC$$iwC$$anonfun$1.apply(<console>:22)
at $line13.$read$$iwC$$iwC$$iwC$$iwC$$anonfun$1.apply(<console>:22)
at scala.collection.Iterator$$anon$11.next(Iterator.scala:328)
at org.apache.spark.util.Utils$.getIteratorSize(Utils.scala:1311)
at org.apache.spark.rdd.RDD$$anonfun$count$1.apply(RDD.scala:910)
at org.apache.spark.rdd.RDD$$anonfun$count$1.apply(RDD.scala:910)
at org.apache.spark.SparkContext$$anonfun$runJob$4.apply(SparkContext.scala:1314)
at org.apache.spark.SparkContext$$anonfun$runJob$4.apply(SparkContext.scala:1314)
at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:61)
at org.apache.spark.scheduler.Task.run(Task.scala:56)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:196)
at java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source)
at java.lang.Thread.run(Unknown Source)
15/04/28 21:17:47 WARN TaskSetManager: Lost task 32.0 in stage 11.0 (TID 169, localhost): java.lang.ArrayIndexOutOfBoundsException: 5
at $line13.$read$$iwC$$iwC$$iwC$$iwC$$anonfun$1.apply(<console>:22)
at $line13.$read$$iwC$$iwC$$iwC$$iwC$$anonfun$1.apply(<console>:22)
at scala.collection.Iterator$$anon$11.next(Iterator.scala:328)
at org.apache.spark.util.Utils$.getIteratorSize(Utils.scala:1311)
at org.apache.spark.rdd.RDD$$anonfun$count$1.apply(RDD.scala:910)
at org.apache.spark.rdd.RDD$$anonfun$count$1.apply(RDD.scala:910)
at org.apache.spark.SparkContext$$anonfun$runJob$4.apply(SparkContext.scala:1314)
at org.apache.spark.SparkContext$$anonfun$runJob$4.apply(SparkContext.scala:1314)
at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:61)
at org.apache.spark.scheduler.Task.run(Task.scala:56)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:196)
at java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source)
at java.lang.Thread.run(Unknown Source)

15/04/28 21:17:47 ERROR TaskSetManager: Task 32 in stage 11.0 failed 1 times; aborting job
15/04/28 21:17:47 INFO TaskSchedulerImpl: Removed TaskSet 11.0, whose tasks have all completed, from pool
15/04/28 21:17:47 INFO TaskSchedulerImpl: Cancelling stage 11
15/04/28 21:17:47 INFO DAGScheduler: Job 9 failed: count at <console>:25, took 17.366680 s
org.apache.spark.SparkException: Job aborted due to stage failure: Task 32 in stage 11.0 failed 1 times, most recent failure: Lost task 32.0 in stage 11.0 (TID 169, localhost): java.lang.ArrayIndexOutOfBoundsException: 5
        at $iwC$$iwC$$iwC$$iwC$$anonfun$1.apply(<console>:22)
        at $iwC$$iwC$$iwC$$iwC$$anonfun$1.apply(<console>:22)
        at scala.collection.Iterator$$anon$11.next(Iterator.scala:328)
        at org.apache.spark.util.Utils$.getIteratorSize(Utils.scala:1311)
        at org.apache.spark.rdd.RDD$$anonfun$count$1.apply(RDD.scala:910)
        at org.apache.spark.rdd.RDD$$anonfun$count$1.apply(RDD.scala:910)
        at org.apache.spark.SparkContext$$anonfun$runJob$4.apply(SparkContext.scala:1314)
        at org.apache.spark.SparkContext$$anonfun$runJob$4.apply(SparkContext.scala:1314)
        at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:61)
        at org.apache.spark.scheduler.Task.run(Task.scala:56)
        at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:196)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source)
        at java.lang.Thread.run(Unknown Source)

Driver stacktrace:
        at org.apache.spark.scheduler.DAGScheduler.org$apache$spark$scheduler$DAGScheduler$$failJobAndIndependentStages(DAGScheduler.scala:1214)
        at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1203)
        at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1202)
        at scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
        at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:47)
        at org.apache.spark.scheduler.DAGScheduler.abortStage(DAGScheduler.scala:1202)
        at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:696)
        at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:696)
        at scala.Option.foreach(Option.scala:236)
        at org.apache.spark.scheduler.DAGScheduler.handleTaskSetFailed(DAGScheduler.scala:696)
        at org.apache.spark.scheduler.DAGSchedulerEventProcessActor$$anonfun$receive$2.applyOrElse(DAGScheduler.scala:1420)
        at akka.actor.Actor$class.aroundReceive(Actor.scala:465)
        at org.apache.spark.scheduler.DAGSchedulerEventProcessActor.aroundReceive(DAGScheduler.scala:1375)
        at akka.actor.ActorCell.receiveMessage(ActorCell.scala:516)
        at akka.actor.ActorCell.invoke(ActorCell.scala:487)
        at akka.dispatch.Mailbox.processMailbox(Mailbox.scala:238)
        at akka.dispatch.Mailbox.run(Mailbox.scala:220)
        at akka.dispatch.ForkJoinExecutorConfigurator$AkkaForkJoinTask.exec(AbstractDispatcher.scala:393)
        at scala.concurrent.forkjoin.ForkJoinTask.doExec(ForkJoinTask.java:260)
        at scala.concurrent.forkjoin.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1339)
        at scala.concurrent.forkjoin.ForkJoinPool.runWorker(ForkJoinPool.java:1979)
        at scala.concurrent.forkjoin.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:107)

3 Answers:

Answer 0 (score: 0)

Here, in val crimetype = crimes.map(l=>(l(5),1)), you expect every crimes array to contain at least 6 elements. Some entries in the file do not satisfy that condition, and you get java.lang.ArrayIndexOutOfBoundsException: 5
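To confirm that short rows are the cause, a quick diagnostic (just a sketch, reusing the crimes RDD defined in the question) can count and sample the rows that have no index 5:

// rows with fewer than 6 fields are the ones that would throw on l(5)
val badRows = crimes.filter(_.length <= 5)
println(badRows.count)
// print a few of them to see what the malformed lines look like
badRows.take(5).foreach(r => println(r.mkString("|")))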

To handle potentially non-conforming data, you need more defensive coding. In this case, if missing values may be ignored (rather than treated as errors), we can do:

val crimetype = crimes.flatMap(l => l.lift(5).map(value => (value, 1)))

As an alternative, filter the RDD to keep only the rows with a valid value:

val crimetype = crimes.filter(l => l.length > 5).map(l=>(l(5),1))

Not that there is only one way of doing simple things:

val crimetype = crimes.collect{ case l if (l.length > 5) => (l(5),1)}

Answer 1 (score: 0)

Java's String.split is practically designed to cause this error: it drops trailing empty strings. "aa,a,".split(",") gives you ["aa", "a"] instead of the expected ["aa", "a", ""]. To keep the trailing empty strings, you need the two-argument overload with a negative limit, "aa,a,".split(",", -1).
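A quick way to see this behavior in the Scala REPL (standard library only, nothing Spark-specific assumed):

// the default limit of 0 removes trailing empty strings
"aa,a,".split(",")      // Array(aa, a)
// a negative limit applies the pattern as often as possible and keeps them
"aa,a,".split(",", -1)  // Array(aa, a, "")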

Answer 2 (score: 0)

It was a data error, as @maasg suggested, so the following code works:

val crimetype = crimes.filter(l => l.length > 5).map(l=>(l(5),1))
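Putting both answers together, a more defensive version of the original pipeline might look like the sketch below (assumptions: split(",", -1) to keep trailing empty fields per answer 1, plus the length guard from answer 0; variable names follow the question):

val crimerows = crimeFile.map(l => l.split(",", -1).map(_.trim))
val header = crimerows.first
// drop the header row and any row too short to have a field at index 5
val crimes = crimerows.filter(l => l(0) != header(0) && l.length > 5)
val crimetype = crimes.map(l => (l(5), 1))
val stats = crimetype.reduceByKey(_ + _)
stats.count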