Spark: how to union a List<RDD> into a single RDD

Asked: 2015-05-25 09:38:44

Tags: scala intellij-idea apache-spark

I am new to Spark and the Scala language, and I want to union all the RDDs in a List (List<RDD> to RDD):

    val data = for (item <- paths) yield {
        val ad_data_path = item._1
        val ad_data = SparkCommon.sc.textFile(ad_data_path).map {
            line => {
                val ad_data = new AdData(line)
                (ad_data.ad_id, ad_data)
            }
        }.distinct()
    }
    val ret = SparkCommon.sc.parallelize(data).reduce(_ ++ _)

I run the code in IntelliJ and always get the following error:

java.lang.NullPointerException
at org.apache.spark.rdd.RDD.<init>(RDD.scala:125)
at org.apache.spark.rdd.UnionRDD.<init>(UnionRDD.scala:59)
at org.apache.spark.rdd.RDD.union(RDD.scala:438)
at org.apache.spark.rdd.RDD.$plus$plus(RDD.scala:444)
at data.GenerateData$$anonfun$load_data$1.apply(GenerateData.scala:99)
at data.GenerateData$$anonfun$load_data$1.apply(GenerateData.scala:99)
at scala.collection.TraversableOnce$$anonfun$reduceLeft$1.apply(TraversableOnce.scala:177)
at scala.collection.TraversableOnce$$anonfun$reduceLeft$1.apply(TraversableOnce.scala:172)
at scala.collection.Iterator$class.foreach(Iterator.scala:727)
at org.apache.spark.InterruptibleIterator.foreach(InterruptibleIterator.scala:28)
at scala.collection.TraversableOnce$class.reduceLeft(TraversableOnce.scala:172)
at org.apache.spark.InterruptibleIterator.reduceLeft(InterruptibleIterator.scala:28)
at org.apache.spark.rdd.RDD$$anonfun$18.apply(RDD.scala:847)
at org.apache.spark.rdd.RDD$$anonfun$18.apply(RDD.scala:845)
at org.apache.spark.SparkContext$$anonfun$26.apply(SparkContext.scala:1157)
at org.apache.spark.SparkContext$$anonfun$26.apply(SparkContext.scala:1157)
at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:62)
at org.apache.spark.scheduler.Task.run(Task.scala:54)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:177)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)

Does anyone have an idea what causes the error? Thanks in advance :)

2 Answers:

Answer 0 (score: 19)

This is probably the reason:

val listA = 1 to 10
for (i <- listA; if i % 2 == 0) yield { i }

will return Vector(2, 4, 6, 8, 10), while

for (i <- listA; if i % 2 == 0) yield { val c = i }

will return Vector((), (), (), (), ()), because the last expression in the yield block is an assignment, which evaluates to Unit.

This is exactly what is happening in your case: you initialize ad_data inside the yield block, but you never return it from the yield.
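Putting that fix together, a corrected version of the asker's loop might look like the following (a sketch only; SparkCommon, AdData, and paths come from the question and are assumed to exist). The yield block now ends with the RDD itself, and the resulting RDDs are unioned on the driver rather than passed to parallelize, since RDDs cannot be elements of another RDD:

```scala
val data = for (item <- paths) yield {
    val ad_data_path = item._1
    SparkCommon.sc.textFile(ad_data_path).map { line =>
        val ad_data = new AdData(line)
        (ad_data.ad_id, ad_data)
    }.distinct()  // the RDD is now the last expression, so it is yielded
}
// Union the RDDs on the driver instead of parallelizing a List of RDDs:
val ret = data.reduce(_ union _)
```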

As for your actual question, i.e. turning a List[RDD] into an RDD:

Here is the solution:

val listA = sc.parallelize(1 to 10)
val listB = sc.parallelize(10 to 1 by -1)

Create a List of the 2 RDDs:

val listC = List(listA, listB)

Convert the List[RDD] to an RDD:

val listD = listC.reduce(_ union _)
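As a quick sanity check (assuming the listA and listB defined above):

```scala
listD.count()             // 20: union keeps duplicates, all elements of both RDDs
listD.distinct().count()  // 10: listB holds the same values as listA, reversed
```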

Hope this answers your question.

Answer 1 (score: 0)

Another simple way to convert from a List of RDDs to an RDD: SparkContext has two overloaded union methods, one that takes a first RDD plus the rest as varargs, and one that takes a Seq of RDDs:

    union(first: RDD[T], rest: RDD[T]*)
    union(rdds: Seq[RDD[T]])
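A minimal sketch of the Seq overload (assuming an existing SparkContext named sc). Unlike chaining reduce(_ union _), which nests one UnionRDD inside another for each element of the list, a single sc.union call builds one flat UnionRDD over all inputs:

```scala
val rdds = List(sc.parallelize(1 to 5),
                sc.parallelize(6 to 10),
                sc.parallelize(11 to 15))

// One call flattens the whole list into a single RDD:
val combined = sc.union(rdds)
// combined.collect() would contain the elements 1 through 15
```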