How do I collect rows into a map using groupBy?

Asked: 2017-01-24 02:59:08

Tags: apache-spark apache-spark-sql

Context

sqlContext.sql(s"""
SELECT
school_name,
name,
age
FROM my_table
""")

Given the table above, I want to group by school name and collect the name and age pairs into a Map[String, Int].

For example (pseudocode):

val df = sqlContext.sql(s"""
SELECT
school_name,
name,
age
FROM my_table
GROUP BY school_name
""")


-------------------------------
school_name | name      | age
-------------------------------
school A    | "michael" | 7
school A    | "emily"   | 5
school B    | "cathy"   | 10
school B    | "shaun"   | 5


df.groupBy("school_name").agg(make_map)

-------------------------------------------
school_name | map
-------------------------------------------
school A    | {"michael": 7, "emily": 5}
school B    | {"cathy": 10, "shaun": 5}
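
For reference, a minimal sketch that builds this sample data locally, assuming a SparkSession named spark (the column names and values are taken from the question):

import spark.implicits._

// Build the example rows from the question as a DataFrame
val df = Seq(
  ("school A", "michael", 7),
  ("school A", "emily", 5),
  ("school B", "cathy", 10),
  ("school B", "shaun", 5)
).toDF("school_name", "name", "age")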

3 Answers:

Answer 0 (score: 14)

The following works for Spark 2.0. You can use the map function, available since version 2.0, to combine the columns into a Map.

import org.apache.spark.sql.functions.{col, collect_list, map}
import spark.implicits._

val df1 = df.groupBy(col("school_name")).agg(collect_list(map($"name", $"age")) as "map")
df1.show(false)

This gives you the following output:

+-----------+------------------------------------+
|school_name|map                                 |
+-----------+------------------------------------+
|school B   |[Map(cathy -> 10), Map(shaun -> 5)] |
|school A   |[Map(michael -> 7), Map(emily -> 5)]|
+-----------+------------------------------------+

Now you can merge these individual maps into a single map using a UDF, as shown below.

import org.apache.spark.sql.functions.udf
val joinMap = udf { values: Seq[Map[String,Int]] => values.flatten.toMap }

val df2 = df1.withColumn("map", joinMap(col("map")))
df2.show(false)

This gives the desired output as a Map[String, Int]:

+-----------+-----------------------------+
|school_name|map                          |
+-----------+-----------------------------+
|school B   |Map(cathy -> 10, shaun -> 5) |
|school A   |Map(michael -> 7, emily -> 5)|
+-----------+-----------------------------+

If you want to convert the column values to a JSON string instead, Spark 2.1.0 introduced the to_json function.

import org.apache.spark.sql.functions.{struct, to_json}

val df3 = df2.withColumn("map", to_json(struct($"map")))
df3.show(false)

The to_json function returns the following output:

+-----------+-------------------------------+
|school_name|map                            |
+-----------+-------------------------------+
|school B   |{"map":{"cathy":10,"shaun":5}} |
|school A   |{"map":{"michael":7,"emily":5}}|
+-----------+-------------------------------+
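
As a side note, on Spark 2.4+ the collect_list plus UDF combination above can be replaced by the built-in map_from_entries function over collected structs. A sketch under that assumption, reusing the df from the question:

import org.apache.spark.sql.functions.{collect_list, map_from_entries, struct}
import spark.implicits._

// Collect (name, age) pairs per school and build the map in a single aggregation
val dfEntries = df.groupBy("school_name")
  .agg(map_from_entries(collect_list(struct($"name", $"age"))) as "map")
dfEntries.show(false)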

Answer 1 (score: 6)

Starting with Spark 2.4, you can use the map_from_arrays function to achieve this.

import org.apache.spark.sql.functions.{collect_list, map_from_arrays}
import spark.implicits._

val df = spark.sql(s"""
    SELECT *
    FROM VALUES ('s1','a',1),('s1','b',2),('s2','a',1)
    AS (school, name, age)
""")

val df2 = df.groupBy("school").agg(map_from_arrays(collect_list($"name"), collect_list($"age")).as("map"))

df.show()
df2.show()

This prints the input followed by the aggregated result:

+------+----+---+
|school|name|age|
+------+----+---+
|    s1|   a|  1|
|    s1|   b|  2|
|    s2|   a|  1|
+------+----+---+

+------+----------------+
|school|             map|
+------+----------------+
|    s2|        [a -> 1]|
|    s1|[a -> 1, b -> 2]|
+------+----------------+
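
Since map_from_arrays and collect_list are also exposed in Spark SQL, the same aggregation can be written as a single SQL statement; a sketch using the same inline table:

val dfSql = spark.sql(s"""
    SELECT school,
           map_from_arrays(collect_list(name), collect_list(age)) AS map
    FROM VALUES ('s1','a',1),('s1','b',2),('s2','a',1)
    AS (school, name, age)
    GROUP BY school
""")
dfSql.show()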

Answer 2 (score: -2)

This concatenates age and name into a single string per row, then collects the distinct strings per school:

df.select($"school_name", concat_ws(":", $"age", $"name").as("new_col"))
  .groupBy($"school_name")
  .agg(collect_set($"new_col"))
  .show
+-----------+--------------------+                                              
|school_name|collect_set(new_col)|
+-----------+--------------------+
|   school B| [5:shaun, 10:cathy]|
|   school A|[7:michael, 5:emily]|
+-----------+--------------------+
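
Note that this yields an array of "age:name" strings rather than an actual MapType column. If a real Map[String, Int] is still needed, one option (a sketch, assuming the age:name encoding above) is to parse the strings back out with a UDF:

import org.apache.spark.sql.functions.{collect_set, concat_ws, udf}
import spark.implicits._

// Hypothetical helper that turns "age:name" strings back into a Map[String, Int]
val parsePairs = udf { pairs: Seq[String] =>
  pairs.map { p =>
    val Array(age, name) = p.split(":", 2)
    name -> age.toInt
  }.toMap
}

val dfParsed = df
  .select($"school_name", concat_ws(":", $"age", $"name").as("new_col"))
  .groupBy($"school_name")
  .agg(parsePairs(collect_set($"new_col")).as("map"))
dfParsed.show(false)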