Spark version 2.0.2.6 and Scala version 2.11.11.
I have the following CSV file:
sno name number
1 hello 1
1 hello 2
2 hai 12
2 hai 22
2 hai 32
3 how 43
3 how 44
3 how 45
3 how 46
4 are 33
4 are 34
4 are 45
4 are 44
4 are 43
I want the output to be:
sno name number
1 hello [1,2]
2 hai [12,22,32]
3 how [43,44,45,46]
4 are [33,34,44,45,43]
The order of the elements within each list does not matter.
A solution using either a DataFrame or an RDD, whichever is appropriate, would be fine.
Thanks, Tom
Answer 0 (score: 3)
import org.apache.spark.sql.functions._
scala> df.groupBy("sno", "name").agg(collect_list("number").alias("number")).sort("sno").show()
+---+-----+--------------------+
|sno| name| number|
+---+-----+--------------------+
| 1|hello| [1, 2]|
| 2| hai| [12, 22, 32]|
| 3| how| [43, 44, 45, 46]|
| 4| are|[33, 34, 45, 44, 43]|
+---+-----+--------------------+
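For completeness, here is a minimal, self-contained sketch of the same approach, assuming the input file is space-delimited with a header row; the path data.csv and the local master setting are placeholders for illustration, not part of the original question:

import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions._

object GroupNumbers {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("group-numbers")
      .master("local[*]")          // assumption: local run for illustration only
      .getOrCreate()

    // Assumption: the file is space-delimited with a header row;
    // "data.csv" is a hypothetical path.
    val df = spark.read
      .option("header", "true")
      .option("delimiter", " ")
      .option("inferSchema", "true")
      .csv("data.csv")

    // Group by (sno, name) and collect all numbers of each group into a list.
    val grouped = df
      .groupBy("sno", "name")
      .agg(collect_list("number").alias("number"))
      .sort("sno")

    grouped.show()
    spark.stop()
  }
}

collect_list keeps duplicates and makes no ordering guarantee within each list, which matches the requirement that the order of elements does not matter; if duplicates should be dropped, collect_set could be used instead.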