GraphX java.lang.ArrayIndexOutOfBoundsException: 2

Date: 2018-06-10 22:03:14

Tags: java apache-spark spark-graphx

I have been creating several graphs with GraphX, and when building the last one I run into this runtime error.

I tried printing out the length of each line as I read it, to check whether I was indexing past the end of the array, but judging from this output that does not seem to be the case:

    55915,Alias,israeli wind virtuosi and friends,
    3
    55916,Alias,israel,
    3
18/06/10 23:59:09 INFO TaskSchedulerImpl: Stage 37 was cancelled
18/06/10 23:59:09 INFO Executor: Executor is trying to kill task 0.0 in stage 37.0 (TID 74), reason: Stage cancelled
18/06/10 23:59:09 INFO DAGScheduler: ShuffleMapStage 37 (map at GraphXAnalysis2.scala:227) failed in 5.402 s due to Job aborted due to stage failure: Task 1 in stage 37.0 failed 1 times, most recent failure: Lost task 1.0 in stage 37.0 (TID 75, localhost, executor driver): java.lang.ArrayIndexOutOfBoundsException: 2
    at assignment.GraphXAnalysis2$$anonfun$15.apply(GraphXAnalysis2.scala:236)
    at assignment.GraphXAnalysis2$$anonfun$15.apply(GraphXAnalysis2.scala:228)
    at scala.collection.Iterator$$anon$11.next(Iterator.scala:409)
    at org.apache.spark.shuffle.sort.BypassMergeSortShuffleWriter.write(BypassMergeSortShuffleWriter.java:149)
    at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:96)
    at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:53)
    at org.apache.spark.scheduler.Task.run(Task.scala:109)
    at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:345)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    at java.lang.Thread.run(Thread.java:748)

Driver stacktrace:
18/06/10 23:59:09 INFO DAGScheduler: Job 9 failed: foreach at GraphXAnalysis2.scala:401, took 5.448187 s
Exception in thread "main" org.apache.spark.SparkException: Job aborted due to stage failure: Task 1 in stage 37.0 failed 1 times, most recent failure: Lost task 1.0 in stage 37.0 (TID 75, localhost, executor driver): java.lang.ArrayIndexOutOfBoundsException: 2
    at assignment.GraphXAnalysis2$$anonfun$15.apply(GraphXAnalysis2.scala:236)
    at assignment.GraphXAnalysis2$$anonfun$15.apply(GraphXAnalysis2.scala:228)
    at scala.collection.Iterator$$anon$11.next(Iterator.scala:409)
    at org.apache.spark.shuffle.sort.BypassMergeSortShuffleWriter.write(BypassMergeSortShuffleWriter.java:149)
    at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:96)
    at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:53)
    at org.apache.spark.scheduler.Task.run(Task.scala:109)
    at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:345)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    at java.lang.Thread.run(Thread.java:748)

Driver stacktrace:
    at org.apache.spark.scheduler.DAGScheduler.org$apache$spark$scheduler$DAGScheduler$$failJobAndIndependentStages(DAGScheduler.scala:1599)
    at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1587)
    at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1586)
    at scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
    at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:48)
    at org.apache.spark.scheduler.DAGScheduler.abortStage(DAGScheduler.scala:1586)
    at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:831)
    at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:831)
    at scala.Option.foreach(Option.scala:257)
    at org.apache.spark.scheduler.DAGScheduler.handleTaskSetFailed(DAGScheduler.scala:831)
    at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.doOnReceive(DAGScheduler.scala:1820)
    at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1769)
    at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1758)
    at org.apache.spark.util.EventLoop$$anon$1.run(EventLoop.scala:48)
    at org.apache.spark.scheduler.DAGScheduler.runJob(DAGScheduler.scala:642)
    at org.apache.spark.SparkContext.runJob(SparkContext.scala:2027)
    at org.apache.spark.SparkContext.runJob(SparkContext.scala:2048)
    at org.apache.spark.SparkContext.runJob(SparkContext.scala:2067)
    at org.apache.spark.SparkContext.runJob(SparkContext.scala:2092)
    at org.apache.spark.rdd.RDD$$anonfun$foreach$1.apply(RDD.scala:921)
    at org.apache.spark.rdd.RDD$$anonfun$foreach$1.apply(RDD.scala:919)
    at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
    at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:112)
    at org.apache.spark.rdd.RDD.withScope(RDD.scala:363)
    at org.apache.spark.rdd.RDD.foreach(RDD.scala:919)
    at assignment.GraphXAnalysis2$.main(GraphXAnalysis2.scala:401)
    at assignment.GraphXAnalysis2.main(GraphXAnalysis2.scala)
Caused by: java.lang.ArrayIndexOutOfBoundsException: 2
    at assignment.GraphXAnalysis2$$anonfun$15.apply(GraphXAnalysis2.scala:236)
    at assignment.GraphXAnalysis2$$anonfun$15.apply(GraphXAnalysis2.scala:228)
    at scala.collection.Iterator$$anon$11.next(Iterator.scala:409)
    at org.apache.spark.shuffle.sort.BypassMergeSortShuffleWriter.write(BypassMergeSortShuffleWriter.java:149)
    at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:96)
    at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:53)
    at org.apache.spark.scheduler.Task.run(Task.scala:109)
    at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:345)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    at java.lang.Thread.run(Thread.java:748)
18/06/10 23:59:09 INFO Executor: Executor killed task 0.0 in stage 37.0 (TID 74), reason: Stage cancelled
18/06/10 23:59:09 WARN TaskSetManager: Lost task 0.0 in stage 37.0 (TID 74, localhost, executor driver): TaskKilled (Stage cancelled)
18/06/10 23:59:09 INFO TaskSchedulerImpl: Removed TaskSet 37.0, whose tasks have all completed, from pool 
18/06/10 23:59:09 INFO SparkContext: Invoking stop() from shutdown hook
18/06/10 23:59:09 INFO SparkUI: Stopped Spark web UI at http://10.0.2.15:4040
18/06/10 23:59:10 INFO MapOutputTrackerMasterEndpoint: MapOutputTrackerMasterEndpoint stopped!
18/06/10 23:59:10 INFO MemoryStore: MemoryStore cleared
18/06/10 23:59:10 INFO BlockManager: BlockManager stopped
18/06/10 23:59:10 INFO BlockManagerMaster: BlockManagerMaster stopped
18/06/10 23:59:10 INFO OutputCommitCoordinator$OutputCommitCoordinatorEndpoint: OutputCommitCoordinator stopped!
18/06/10 23:59:10 INFO SparkContext: Successfully stopped SparkContext
18/06/10 23:59:10 INFO ShutdownHookManager: Shutdown hook called
18/06/10 23:59:10 INFO ShutdownHookManager: Deleting directory /tmp/spark-57502587-0123-4c72-8651-d2ae99a0f3cd

This is the code I use to create the vertices:

    import org.apache.spark.graphx.VertexId
    import org.apache.spark.rdd.RDD

    val ArtistAlias: RDD[(VertexId, VertexProperty)] = sc.textFile(vertexArtistAlias).map {
      line =>
        val row = line.split(",")
        val id = row(0).toLong
        val vertexType = row(1)
        println(line)
        println(row.length)
        val prop = vertexType match {
          case "Artist" => ArtistProperty(vertexType, row(2), row(3)) // needs at least 4 fields
          case "Alias"  => AliasProperty(vertexType, row(2))          // needs at least 3 fields
        }
        (id, prop)
    }
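
One behavior worth ruling out here (a guess from the stack trace, not a confirmed diagnosis): `line.split(",")` delegates to Java's `String.split(regex)`, which discards trailing empty strings, so a hypothetical line such as `55917,Alias,,` splits into only two fields, and `row(2)` then throws exactly this `ArrayIndexOutOfBoundsException: 2`. The failing row may also live in a different partition than the rows whose lengths were printed; the log shows task 1.0 failing while task 0.0 was merely cancelled. Below is a minimal defensive sketch of the same parse, reusing the question's own names (`vertexArtistAlias`, `VertexProperty`, `ArtistProperty`, `AliasProperty`); the `split(",", -1)` call and the field-count guards are assumptions about the data, not the asker's code:

    // Sketch only: keep trailing empty fields and skip rows that are
    // still too short, instead of indexing past the end of the array.
    val ArtistAliasSafe: RDD[(VertexId, VertexProperty)] =
      sc.textFile(vertexArtistAlias).flatMap { line =>
        val row = line.split(",", -1) // limit -1 keeps trailing "" fields
        row match {
          case Array(id, "Artist", f2, f3, _*) =>
            Some((id.toLong, ArtistProperty("Artist", f2, f3)))
          case Array(id, "Alias", f2, _*) =>
            Some((id.toLong, AliasProperty("Alias", f2)))
          case _ =>
            println(s"Skipping malformed line (${row.length} fields): $line")
            None
        }
      }

If the "Skipping malformed line" message never fires, the row lengths really are fine and the problem lies elsewhere; either way, the guard localizes the offending line instead of killing the stage.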

0 Answers:

No answers yet.