I am trying to print the contents of an RDD of type RDD[(String,List[(String,String)])]:
val sc = new SparkContext(conf)
val splitted = rdd.map(line => line.split(","))
val processed = splitted.map(x=>(x(1),List((x(0),x(2),x(3),x(4)))))
val grouped = processed.reduceByKey((x,y) => (x ++ y))
System.out.println(grouped)
However, instead of the contents, this is all I see:
ShuffledRDD[4] at reduceByKey at Consumer.scala:88
Update:
Contents of the TXT file:
100001082016,230,111,1,1
100001082016,121,111,1,1
100001082016,110,111,1,1
Update 2 (the whole code):
import org.apache.spark.{SparkConf, SparkContext}

class Consumer() {
  def run() = {
    val conf = new SparkConf()
      .setAppName("TEST")
      .setMaster("local[*]")
    val sc = new SparkContext(conf)
    val rdd = sc.textFile("file:///usr/test/myfile.txt")
    val splitted = rdd.map(line => line.split(","))
    val processed = splitted.map(x => (x(1), List((x(0), x(2), x(3), x(4)))))
    val grouped = processed.reduceByKey((x, y) => x ++ y)
    System.out.println(grouped)
  }
}
Answer (score: 3):
There is no problem here:
scala> val rdd = sc.parallelize(Seq("100001082016,230,111,1,1","100001082016,121,111,1,1","100001082016,110,111,1,1"))
// rdd: org.apache.spark.rdd.RDD[String] = ParallelCollectionRDD[0] at parallelize at <console>:27
scala> val splitted = rdd.map(line => line.split(","))
// splitted: org.apache.spark.rdd.RDD[Array[String]] = MapPartitionsRDD[1] at map at <console>:29
scala> val processed = splitted.map(x=>(x(1),List((x(0),x(2),x(3),x(4)))))
// processed: org.apache.spark.rdd.RDD[(String, List[(String, String, String, String)])] = MapPartitionsRDD[2] at map at <console>:31
scala> val grouped = processed.reduceByKey((x,y) => (x ++ y))
// grouped: org.apache.spark.rdd.RDD[(String, List[(String, String, String, String)])] = ShuffledRDD[3] at reduceByKey at <console>:33
scala> grouped.collect().foreach(println)
// (121,List((100001082016,111,1,1)))
// (110,List((100001082016,111,1,1)))
// (230,List((100001082016,111,1,1)))
The following is not an error. It works exactly as intended, but you have to understand the language well enough to know what to expect:
scala> System.out.println(grouped)
// ShuffledRDD[3] at reduceByKey at <console>:33
Edit: To be clear, if you want to print a collection, you need to collect it to the driver and use the collection's mkString method to convert it into whatever format you want.
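For example, here is a minimal sketch building on the grouped RDD from the session above. The arrow-and-bracket format is just an illustration; pick whatever separators you like, and note that the order of the collected elements is not guaranteed across runs:

scala> grouped.collect().map { case (k, v) => s"$k -> ${v.mkString("[", ", ", "]")}" }.foreach(println)
// 121 -> [(100001082016,111,1,1)]
// 110 -> [(100001082016,111,1,1)]
// 230 -> [(100001082016,111,1,1)]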