NullPointerException in Scala Spark

Date: 2018-06-25 20:19:23

Tags: scala apache-spark nullpointerexception

Basically, I have a list of files and want to use Spark to process those files into the desired data format. I start with a function that processes each file:

def extractInfo(args: (File, String)) = {
    val inputFile = args._1
    val outputFolder = args._2
    val fw = myOutputFile
    ......
    val data = sc.textFile(inputFile.getAbsolutePath())
    val content = data.filter(...).map(...).reduceByKey(...).map(...).map(...)
    content.collect().foreach(line => fw.write(line + "\n"))
}
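For reference, the elided filter/map/reduceByKey pipeline above can be sketched as a plain, self-contained Scala version that runs without Spark. This is only an illustration: the word-count-style aggregation, the output-file naming, and the name `extractInfoLocal` are assumptions, since the original `filter(...)`/`map(...)` steps are elided.

```scala
import java.io.{File, FileWriter, PrintWriter}
import scala.io.Source

// Hypothetical stand-in for the elided pipeline: count word occurrences
// per input file and write "word<TAB>count" lines next to the output folder.
def extractInfoLocal(args: (File, String)): Unit = {
  val (inputFile, outputFolder) = args
  val out = new File(outputFolder, inputFile.getName + ".out")
  val fw  = new PrintWriter(new FileWriter(out))
  try {
    val src = Source.fromFile(inputFile)
    try {
      val counts = src.getLines()
        .flatMap(_.split("\\s+"))
        .filter(_.nonEmpty)               // analogue of filter(...)
        .toSeq
        .groupBy(identity)                // analogue of reduceByKey(...)
        .map { case (w, ws) => s"$w\t${ws.size}" }
      counts.foreach(line => fw.write(line + "\n"))
    } finally src.close()
  } finally fw.close()
}
```

Each call does all of its work locally, which is roughly what each `extractInfo` call does on the driver when `fileList.foreach` is used.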

Then I use another function to generate the file list: val fileList = getListOfFiles(inputFolder)

Then I can do

fileList.foreach(x => extractInfo((x,outputFolder)))

and it works with no problem at all.

But I want to parallelize it:

val op = sc.parallelize(fileList).map(x => extractInfo((x, outputFolder)))
op.collect()

Then I got a NullPointerException:

    ERROR Executor: 91 - Exception in task 16.0 in stage 238.0 (TID 499)
    java.lang.NullPointerException
        at $line22.$read$$iw$$iw$$iw$$iw$$iw$$iw$$iw$$iw$$iw$$iw$$iw$$iw.extractInfo(<console>:43)
        at $line30.$read$$iw$$iw$$iw$$iw$$iw$$iw$$iw$$iw$$iw$$iw$$iw$$iw$$anonfun$1.apply(<console>:37)
        at $line30.$read$$iw$$iw$$iw$$iw$$iw$$iw$$iw$$iw$$iw$$iw$$iw$$iw$$anonfun$1.apply(<console>:37)
        at scala.collection.Iterator$$anon$11.next(Iterator.scala:409)
        at scala.collection.Iterator$class.foreach(Iterator.scala:893)
        at scala.collection.AbstractIterator.foreach(Iterator.scala:1336)
        at scala.collection.generic.Growable$class.$plus$plus$eq(Growable.scala:59)
        at scala.collection.mutable.ArrayBuffer.$plus$plus$eq(ArrayBuffer.scala:104)
        at scala.collection.mutable.ArrayBuffer.$plus$plus$eq(ArrayBuffer.scala:48)
        at scala.collection.TraversableOnce$class.to(TraversableOnce.scala:310)
        at scala.collection.AbstractIterator.to(Iterator.scala:1336)
        at scala.collection.TraversableOnce$class.toBuffer(TraversableOnce.scala:302)
        at scala.collection.AbstractIterator.toBuffer(Iterator.scala:1336)
        at scala.collection.TraversableOnce$class.toArray(TraversableOnce.scala:289)
        at scala.collection.AbstractIterator.toArray(Iterator.scala:1336)
        at org.apache.spark.rdd.RDD$$anonfun$collect$1$$anonfun$12.apply(RDD.scala:939)
        at org.apache.spark.rdd.RDD$$anonfun$collect$1$$anonfun$12.apply(RDD.scala:939)
        at org.apache.spark.SparkContext$$anonfun$runJob$5.apply(SparkContext.scala:2074)
        at org.apache.spark.SparkContext$$anonfun$runJob$5.apply(SparkContext.scala:2074)
        at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:87)
        at org.apache.spark.scheduler.Task.run(Task.scala:109)
        at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:345)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
        at java.lang.Thread.run(Thread.java:748)

Is this because I am using the same sc in different places? Please help.
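For context, here is a minimal sketch (an illustration added for clarity, not part of the original post) of the likely failure mode, assuming the NPE comes from SparkContext only existing on the driver: sc is not serializable, so the reference a closure sees when it runs on an executor is null, and any call through it throws. The `Ctx` class below is a hypothetical stand-in for SparkContext, so this runs without Spark.

```scala
// Hypothetical stand-in for SparkContext: only the driver holds a real one.
class Ctx { def textFile(path: String): List[String] = List(path) }

val driverCtx: Ctx   = new Ctx   // what the driver has
val executorCtx: Ctx = null      // what a shipped closure sees remotely

def extractInfo(ctx: Ctx, path: String): List[String] =
  ctx.textFile(path)             // NullPointerException when ctx is null

val ok = extractInfo(driverCtx, "f.txt")
val failed =
  try { extractInfo(executorCtx, "f.txt"); false }
  catch { case _: NullPointerException => true }
```

If that is indeed the cause, one common workaround is to keep the loop on the driver and parallelize there instead, e.g. `fileList.par.foreach(x => extractInfo((x, outputFolder)))` with Scala parallel collections (built in on Scala 2.12; on 2.13 `.par` needs the scala-parallel-collections module), so every `sc.textFile` call still happens on the driver while files are processed concurrently.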

0 Answers:

There are no answers yet.