Split in Scala throws an ArrayIndexOutOfBoundsException

Asked: 2019-05-13 12:52:50

Tags: scala

I am using the split function in Scala, and it throws an array index out of bounds exception. I tried working with the array directly, but it did not help. I have a function like the one below:

def commonDate(line: String) = {
  val fields = line.split(",")   // split the CSV record into its columns
  val dates = fields(12)         // the date column, e.g. "20-Sep-06"
  println(dates)
  val date = dates.split("-")    // split the date into day, month, year

  date(0)                        // return the day part
}

The first println output shows: 20-Sep-06, 6-Nov-96, 22-Sep-06, 14-Dec-96, ...

The function returns the values 20, 6, 22, 14, ... However, if I try to return date(1) instead, I get the following exception:

19/05/13 13:48:21 ERROR TaskSetManager: Task 0 in stage 6.0 failed 1 times; aborting job
Exception in thread "main" org.apache.spark.SparkException: Job aborted due to stage failure: Task 0 in stage 6.0 failed 1 times, most recent failure: Lost task 0.0 in stage 6.0 (TID 12, localhost, executor driver): java.lang.ArrayIndexOutOfBoundsException: 1
    at com.sundogsoftware.spark.SachinOdi$.commonDate(SachinOdi.scala:35)
    at com.sundogsoftware.spark.SachinOdi$$anonfun$7.apply(SachinOdi.scala:73)
    at com.sundogsoftware.spark.SachinOdi$$anonfun$7.apply(SachinOdi.scala:73)
    at scala.collection.Iterator$$anon$11.next(Iterator.scala:410)
    at scala.collection.Iterator$$anon$11.next(Iterator.scala:410)
    at scala.collection.Iterator$$anon$11.next(Iterator.scala:410)
    at org.apache.spark.util.collection.ExternalSorter.insertAll(ExternalSorter.scala:193)
    at org.apache.spark.shuffle.sort.SortShuffleWriter.write(SortShuffleWriter.scala:62)
    at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:99)
    at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:55)
    at org.apache.spark.scheduler.Task.run(Task.scala:121)
    at org.apache.spark.executor.Executor$TaskRunner$$anonfun$10.apply(Executor.scala:403)
    at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1360)
    at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:409)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    at java.lang.Thread.run(Thread.java:748)

Driver stacktrace:
    at org.apache.spark.scheduler.DAGScheduler.org$apache$spark$scheduler$DAGScheduler$$failJobAndIndependentStages(DAGScheduler.scala:1889)
    at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1877)
    at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1876)
    at scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
    at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:48)
    at org.apache.spark.scheduler.DAGScheduler.abortStage(DAGScheduler.scala:1876)
    at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:926)
    at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:926)
    at scala.Option.foreach(Option.scala:257)
    at org.apache.spark.scheduler.DAGScheduler.handleTaskSetFailed(DAGScheduler.scala:926)
    at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.doOnReceive(DAGScheduler.scala:2110)
    at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:2059)
    at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:2048)
    at org.apache.spark.util.EventLoop$$anon$1.run(EventLoop.scala:49)
    at org.apache.spark.scheduler.DAGScheduler.runJob(DAGScheduler.scala:737)
    at org.apache.spark.SparkContext.runJob(SparkContext.scala:2061)
    at org.apache.spark.SparkContext.runJob(SparkContext.scala:2082)
    at org.apache.spark.SparkContext.runJob(SparkContext.scala:2101)
    at org.apache.spark.SparkContext.runJob(SparkContext.scala:2126)
    at org.apache.spark.rdd.RDD$$anonfun$collect$1.apply(RDD.scala:945)
    at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
    at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:112)
    at org.apache.spark.rdd.RDD.withScope(RDD.scala:363)
    at org.apache.spark.rdd.RDD.collect(RDD.scala:944)
    at org.apache.spark.rdd.PairRDDFunctions$$anonfun$countByKey$1.apply(PairRDDFunctions.scala:370)
    at org.apache.spark.rdd.PairRDDFunctions$$anonfun$countByKey$1.apply(PairRDDFunctions.scala:370)
    at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
    at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:112)
    at org.apache.spark.rdd.RDD.withScope(RDD.scala:363)
    at org.apache.spark.rdd.PairRDDFunctions.countByKey(PairRDDFunctions.scala:369)
    at org.apache.spark.rdd.RDD$$anonfun$countByValue$1.apply(RDD.scala:1214)
    at org.apache.spark.rdd.RDD$$anonfun$countByValue$1.apply(RDD.scala:1214)
    at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
    at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:112)
    at org.apache.spark.rdd.RDD.withScope(RDD.scala:363)
    at org.apache.spark.rdd.RDD.countByValue(RDD.scala:1213)
    at com.sundogsoftware.spark.SachinOdi$.main(SachinOdi.scala:75)
    at com.sundogsoftware.spark.SachinOdi.main(SachinOdi.scala)
Caused by: java.lang.ArrayIndexOutOfBoundsException: 1
    at com.sundogsoftware.spark.SachinOdi$.commonDate(SachinOdi.scala:35)
    at com.sundogsoftware.spark.SachinOdi$$anonfun$7.apply(SachinOdi.scala:73)
    at com.sundogsoftware.spark.SachinOdi$$anonfun$7.apply(SachinOdi.scala:73)
    at scala.collection.Iterator$$anon$11.next(Iterator.scala:410)
    at scala.collection.Iterator$$anon$11.next(Iterator.scala:410)
    at scala.collection.Iterator$$anon$11.next(Iterator.scala:410)
    at org.apache.spark.util.collection.ExternalSorter.insertAll(ExternalSorter.scala:193)
    at org.apache.spark.shuffle.sort.SortShuffleWriter.write(SortShuffleWriter.scala:62)
    at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:99)
    at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:55)
    at org.apache.spark.scheduler.Task.run(Task.scala:121)
    at org.apache.spark.executor.Executor$TaskRunner$$anonfun$10.apply(Executor.scala:403)
    at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1360)
    at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:409)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    at java.lang.Thread.run(Thread.java:748)
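
(As an aside, one way to confirm which records trigger the index 1 failure is to look for rows whose 13th field contains no "-" at all. The sketch below is only illustrative: the RDD name lines is an assumption, since the rest of the job is not shown here.)

// Hypothetical debugging snippet: list rows where dates.split("-") would yield
// a single element, so that date(1) throws ArrayIndexOutOfBoundsException: 1.
// `lines` is an assumed RDD[String] holding the raw CSV input.
val suspectRows = lines
  .map(_.split(","))
  .filter(fields => fields.length <= 12 || !fields(12).contains("-"))
suspectRows.take(10).foreach(row => println(row.mkString(",")))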

My goal is to extract 20-Sep, 6-Nov, 22-Sep, 14-Dec, and so on. So I figured I could split on "-" and then return date(0) + date(1).

1 Answer:

Answer 0 (score: 0)

Here is one approach:

val line = "20-Sep-06 6-Nov-96 22-Sep-06 14-Dec-96"

// Split on spaces, then keep only the first two "-"-separated parts of each date.
val result = line.split(" ").map(_.split("-").take(2).mkString(","))

result.foreach(println)

Output:

20,Sep
6,Nov
22,Sep
14,Dec

This works even if some strings are malformed; e.g. "20-Sep 6 22-Sep-06 14-Dec-96" would return:

20,Sep
6
22,Sep
14,Dec
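
Folded back into the original function, the same idea might look like the sketch below. This is a minimal, untested adaptation: the column index 12 and the function name come from the question, and the rest is an assumption about how the job uses it.

// Defensive version of commonDate: never indexes past the end of a split result.
def commonDate(line: String): String = {
  val fields = line.split(",")
  // Guard against short rows as well as dates without a "-" separator.
  val dates = if (fields.length > 12) fields(12) else ""
  dates.split("-").take(2).mkString("-")   // e.g. "20-Sep-06" -> "20-Sep", "6" -> "6"
}

Because take(2) simply returns whatever elements are available, returning both the day and the month no longer throws when a row is missing the month part.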