Apache Spark groupBy combining types and subtypes

Time: 2018-05-03 02:40:52

Tags: scala apache-spark pyspark apache-spark-sql pyspark-sql

I have this dataset in Spark:

// Assumes a SparkSession in scope as `spark` (e.g. in spark-shell),
// needed for toDF and the $-column syntax below
import spark.implicits._

val sales = Seq(
  ("Warsaw", 2016, "facebook", "share", 100),
  ("Warsaw", 2017, "facebook", "like", 200),
  ("Boston", 2015, "twitter", "share", 50),
  ("Boston", 2016, "facebook", "share", 150),
  ("Toronto", 2017, "twitter", "like", 50)
).toDF("city", "year", "media", "action", "amount")

I can group this by city and media like this:

val groupByCityAndMedia = sales
  .groupBy("city", "media")
  .count()
groupByCityAndMedia.show()

+-------+--------+-----+
|   city|   media|count|
+-------+--------+-----+
| Boston|facebook|    1|
| Boston| twitter|    1|
|Toronto| twitter|    1|
| Warsaw|facebook|    2|
+-------+--------+-----+

But how can I combine media and action into a single column, so that the expected output is:

+-------+--------+-----+
| Boston|facebook|    1|
| Boston|   share|    2|
| Boston| twitter|    1|
|Toronto| twitter|    1|
|Toronto|    like|    1|
| Warsaw|facebook|    2|
| Warsaw|   share|    1|
| Warsaw|    like|    1|
+-------+--------+-----+

1 Answer:

Answer 0 (score: 1):

Combine the media and action columns into a single array column, explode it (which yields one row per array element, so each row is counted once under its media value and once under its action value), then groupBy and count:

import org.apache.spark.sql.functions.{array, explode}

sales.select(
  $"city", explode(array($"media", $"action")).as("mediaAction")
).groupBy("city", "mediaAction").count().show()

+-------+-----------+-----+
|   city|mediaAction|count|
+-------+-----------+-----+
| Boston|      share|    2|
| Boston|   facebook|    1|
| Warsaw|      share|    1|
| Boston|    twitter|    1|
| Warsaw|       like|    1|
|Toronto|    twitter|    1|
|Toronto|       like|    1|
| Warsaw|   facebook|    2|
+-------+-----------+-----+
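
Note that the row order of a groupBy result is not guaranteed. If you want the rows sorted as in the expected output above, one optional tweak (not part of the answer as given) is to append an orderBy before show:

sales.select(
  $"city", explode(array($"media", $"action")).as("mediaAction")
).groupBy("city", "mediaAction").count()
  .orderBy("city", "mediaAction")
  .show()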

Or, assuming media and action are disjoint (the two columns have no values in common):

sales.groupBy("city", "media").count().union(
    sales.groupBy("city", "action").count()
).show
+-------+--------+-----+
|   city|   media|count|
+-------+--------+-----+
| Boston|facebook|    1|
| Boston| twitter|    1|
|Toronto| twitter|    1|
| Warsaw|facebook|    2|
| Boston|   share|    2|
| Warsaw|   share|    1|
| Warsaw|    like|    1|
|Toronto|    like|    1|
+-------+--------+-----+
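
The disjointness assumption matters: if media and action could share a value (say both contained "share"), the union would emit two separate rows for the same (city, value) pair, one from each branch. A minimal sketch of how to merge such duplicates by re-aggregating after the union (union keeps the column names of the first DataFrame, so the combined column is still labelled media here):

import org.apache.spark.sql.functions.sum

sales.groupBy("city", "media").count().union(
  sales.groupBy("city", "action").count()
).groupBy("city", "media")          // collapse duplicate (city, value) pairs
  .agg(sum("count").as("count"))    // add the counts from both branches
  .show()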