Given a dataset with multiple rows:
0,1,2
7,8,9
18,19,5
how can I produce the following result in Spark?
Array(Array(Array(0),Array(1),Array(2)), Array(Array(7),Array(8),Array(9)), Array(Array(18),Array(19),Array(5)))
Answer 0 (score: 1)
If what you mean in Spark is an RDD[Array[Array[Int]]], which is the equivalent of Array[Array[Array[Int]]] in plain Scala, then you can do the following.
Assume you have a text file (/home/test.csv) containing:
0,1,2
7,8,9
18,19,5
Then you can do:
scala> val data = sc.textFile("/home/test.csv")
data: org.apache.spark.rdd.RDD[String] = /home/test.csv MapPartitionsRDD[4] at textFile at <console>:24
scala> val array = data.map(line => line.split(",").map(x => Array(x.toInt)))
array: org.apache.spark.rdd.RDD[Array[Array[Int]]] = MapPartitionsRDD[5] at map at <console>:26
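For reference, collecting this RDD on the driver (fine for a toy file of this size) should reproduce the structure asked for in the question; the res identifier below is from a hypothetical REPL session:
scala> array.collect()
res0: Array[Array[Array[Int]]] = Array(Array(Array(0), Array(1), Array(2)), Array(Array(7), Array(8), Array(9)), Array(Array(18), Array(19), Array(5)))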
You can go one step further and build an RDD[Array[Array[Array[Int]]]], meaning that each value in the RDD is exactly the type you want. To do that, read the file with wholeTextFiles, which returns each file as a Tuple2 of (filename, contents of the file):
scala> val data = sc.wholeTextFiles("/home/test.csv")
data: org.apache.spark.rdd.RDD[(String, String)] = /home/test.csv MapPartitionsRDD[3] at wholeTextFiles at <console>:24
scala> val array = data.map(t2 => t2._2.split("\n").map(line => line.split(",").map(x => Array(x.toInt))))
array: org.apache.spark.rdd.RDD[Array[Array[Array[Int]]]] = MapPartitionsRDD[4] at map at <console>:26
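Since wholeTextFiles yields one record per file, the single value in this RDD already holds the full nested structure. As a sketch (again with a hypothetical res identifier), taking the first element should give:
scala> array.first()
res1: Array[Array[Array[Int]]] = Array(Array(Array(0), Array(1), Array(2)), Array(Array(7), Array(8), Array(9)), Array(Array(18), Array(19), Array(5)))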