Spark: creating an RDD from Iterable-type values

Time: 2018-09-04 20:53:05

Tags: apache-spark

I have an RDD of (Int, Iterable[String]). How can I make an RDD from the iterable part?

scala> val salgrp=salname.groupByKey
salgrp: org.apache.spark.rdd.RDD[(Int, Iterable[String])] = ShuffledRDD[11] at groupByKey at <console>:41
scala> salgrp.collect
18/09/04 20:51:06 INFO DAGScheduler: Job 0 finished: collect at <console>:44, took 1.723661 s
res0: Array[(Int, Iterable[String])] = Array((50000,CompactBuffer(Bhupesh, Tejas, Dinesh, Lokesh)), (10000,CompactBuffer(Sheela, Kumar, Venkat)), (45000,CompactBuffer(Pavan, Ratan, Amit)))

1 Answer:

Answer 0 (score: 0)

You can use the "flatMap" function:

val data = List((1, List("one", "two", "three")))   // mirrors the (key, Iterable) shape from the question
val rdd = sparkContext.parallelize(data)            // sparkContext is the SparkContext (sc in spark-shell)
rdd.flatMap(v => v._2).foreach(println)             // flatten each value collection into individual elements

Output:

one
two
three
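
Applied to the salgrp RDD from the question, a minimal sketch (assuming the same spark-shell session where salgrp is the RDD[(Int, Iterable[String])] shown above), the same flatMap call flattens each CompactBuffer into individual names and gives an RDD[String]:

val names = salgrp.flatMap { case (_, xs) => xs }   // RDD[String] containing just the names
names.collect().foreach(println)                    // prints Bhupesh, Tejas, Dinesh, ... in partition order

An equivalent spelling is salgrp.values.flatMap(identity), which first drops the Int keys and then flattens the remaining iterables.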