Scala Spark - performing multiple levels of groupBy with an array as input

Date: 2019-01-16 16:50:29

Tags: scala apache-spark apache-spark-sql

In my Scala program I am working on merging the results of multiple levels of groupBy. The dataset I am working with is large; as a small sample, I have a data frame like this:

+---+---+----+-----+-----+
|  F|  L| Loy|Email|State|
+---+---+----+-----+-----+
| f1| l1|loy1| null|   s1|
| f1| l1|loy1|   e1|   s1|
| f2| l2|loy2|   e2|   s2|
| f2| l2|loy2|   e3| null|
| f1| l1|null|   e1|   s3|
+---+---+----+-----+-----+

For the first level of grouping, I use the following script to get results grouped on identical (F, L, Loy) columns:

df.groupBy("F", "L", "Loy").agg(collect_set($"Email").alias("Email"), collect_set($"State").alias("State")).show

The result looks like this:

+---+---+----+--------+-----+
|  F|  L| Loy|   Email|State|
+---+---+----+--------+-----+
| f1| l1|null|    [e1]| [s3]|
| f2| l2|loy2|[e2, e3]| [s2]|
| f1| l1|loy1|    [e1]| [s1]|
+---+---+----+--------+-----+

The problem I am trying to solve is how to perform a second-level groupBy on (F, L, Email), where F and L are strings while the Email column is an Array[String]. This groupBy should return a result like the following:

+---+---+----+--------+---------+
|  F|  L| Loy|   Email|    State|
+---+---+----+--------+---------+
| f1| l1|loy1|    [e1]| [s3, s1]|
| f2| l2|loy2|[e2, e3]|     [s2]|
+---+---+----+--------+---------+

The main goal is to reduce the number of entries as much as possible by applying groupBy at different levels. I am still quite new to Scala, so any help would be appreciated :)

1 Answer:

Answer 0 (score: 0):

Just use concat_ws() with an empty separator: it collapses each single-element State array into a plain string, and collect_set then gathers those strings back into an array for you. Check this out.

scala> val df = Seq( ("f1","l1","loy1",null,"s1"),("f1","l1","loy1","e1","s1"),("f2","l2","loy2","e2","s2"),("f2","l2","loy2","e3",null),("f1","l1",null,"e1","s3")).toDF("F","L","loy","email","state")
df: org.apache.spark.sql.DataFrame = [F: string, L: string ... 3 more fields]

scala> df.show(false)
+---+---+----+-----+-----+
|F  |L  |loy |email|state|
+---+---+----+-----+-----+
|f1 |l1 |loy1|null |s1   |
|f1 |l1 |loy1|e1   |s1   |
|f2 |l2 |loy2|e2   |s2   |
|f2 |l2 |loy2|e3   |null |
|f1 |l1 |null|e1   |s3   |
+---+---+----+-----+-----+


scala> val df2 = df.groupBy("F", "L", "Loy").agg(collect_set($"Email").alias("Email"), collect_set($"State").alias("State"))
df2: org.apache.spark.sql.DataFrame = [F: string, L: string ... 3 more fields]

scala> df2.show(false)
+---+---+----+--------+-----+
|F  |L  |Loy |Email   |State|
+---+---+----+--------+-----+
|f1 |l1 |null|[e1]    |[s3] |
|f2 |l2 |loy2|[e2, e3]|[s2] |
|f1 |l1 |loy1|[e1]    |[s1] |
+---+---+----+--------+-----+


scala> df2.groupBy("F","L","email").agg(max('loy).as("loy"),collect_set(concat_ws("",'state)).as("state")).show
+---+---+--------+----+--------+
|  F|  L|   email| loy|   state|
+---+---+--------+----+--------+
| f2| l2|[e2, e3]|loy2|    [s2]|
| f1| l1|    [e1]|loy1|[s3, s1]|
+---+---+--------+----+--------+
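The concat_ws("") trick works here because every first-level State array holds at most one element, so joining with an empty separator loses nothing. A hedged alternative sketch that also handles multi-element arrays: collect the arrays and flatten them. Note this assumes Spark 2.4+, where the flatten and array_distinct functions were added; the column names match the example above.

```scala
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions._

// Sketch, assuming Spark 2.4+ (flatten and array_distinct were added in 2.4).
val spark = SparkSession.builder.master("local[*]").appName("groupByLevels").getOrCreate()
import spark.implicits._

val df = Seq(
  ("f1", "l1", "loy1", null, "s1"), ("f1", "l1", "loy1", "e1", "s1"),
  ("f2", "l2", "loy2", "e2", "s2"), ("f2", "l2", "loy2", "e3", null),
  ("f1", "l1", null, "e1", "s3")
).toDF("F", "L", "Loy", "Email", "State")

// First level: group on (F, L, Loy), collecting Email and State into arrays.
val df2 = df.groupBy("F", "L", "Loy")
  .agg(collect_set($"Email").as("Email"), collect_set($"State").as("State"))

// Second level: group on (F, L, Email). State is now array<string>, so
// collect_list yields array<array<string>> -- flatten it, then deduplicate.
df2.groupBy("F", "L", "Email")
  .agg(max($"Loy").as("Loy"),
       array_distinct(flatten(collect_list($"State"))).as("State"))
  .show
```

Unlike the concat_ws version, this stays correct even if a first-level group ever collects more than one State value into its array.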

