How to aggregate an array of structs in Spark using group by

Asked: 2019-09-02 12:48:46

Tags: scala apache-spark user-defined-functions distribution

I am using Spark 2.1. I have a DataFrame with the following schema:

scala> df.printSchema

|-- id: integer (nullable = true)
|-- sum: integer (nullable = true)
|-- distribution: array (nullable = true)
|    |-- element: struct (containsNull = true)
|    |    |-- lower: integer (nullable = true)
|    |    |-- upper: integer (nullable = true)
|    |    |-- count: integer (nullable = true)

I want to aggregate:

  • grouping by the "id" column
  • the sum of the "sum" column, and the sum of "count" inside "distribution", grouped by "lower" and "upper"

Here I cannot simply explode the DataFrame, because that would duplicate rows and I could no longer sum the "sum" column. One possibility is to aggregate the sum and the distribution separately and then join on "id", but a user-defined function would be simpler.
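For illustration, here is a rough sketch of that join-based alternative (the names sums, dists and joined are only for this example; the column names are the ones from the schema above):

import org.apache.spark.sql.functions._

// aggregate "sum" and the exploded distribution separately, then join on "id"
val sums = df.groupBy("id").agg(sum("sum").as("sum"))

val dists = df
  .select(col("id"), explode(col("distribution")).as("d"))
  .select(
    col("id"),
    col("d.lower").as("lower"),
    col("d.upper").as("upper"),
    col("d.count").as("count"))
  .groupBy("id", "lower", "upper")
  .agg(sum("count").as("count"))
  .groupBy("id")
  .agg(collect_list(struct(col("lower"), col("upper"), col("count"))).as("distribution"))

val joined = sums.join(dists, Seq("id"))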

As input, I have:

scala> df.show(false)

+---+---+------------------------------------------------------------+
|id |sum|distribution                                                |
+---+---+------------------------------------------------------------+
|1  |1  |[[0,1,2]]                                                   |
|1  |1  |[[1,2,5]]                                                   |
|1  |7  |[[0,1,1], [1,2,6]]                                          |
|1  |7  |[[0,1,5], [1,2,1], [2,3,1]]                                 |
|2  |1  |[[0,1,1]]                                                   |
|2  |2  |[[0,1,1], [1,2,1]]                                          |
|2  |1  |[[0,1,1]]                                                   |
|2  |1  |[[2,3,1]]                                                   |
|2  |1  |[[0,1,1]]                                                   |
|2  |4  |[[0,1,1], [1,2,1], [2,3,1], [3,4,1]]                        |
+---+---+------------------------------------------------------------+

Expected output:

+---+---+------------------------------------------------------------+
|id |sum|distribution                                                |
+---+---+------------------------------------------------------------+
|1  |16 |[[0,1,8], [1,2,12], [2,3,1]]                                |
|2  |10 |[[0,1,5], [1,2,2], [2,3,3], [3,4,1]]                        |
+---+---+------------------------------------------------------------+
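For reference, a minimal sketch that builds this sample input in a Spark shell (the case class Bucket is introduced only for this example; spark is the usual SparkSession):

import spark.implicits._

// each bucket of the distribution is a (lower, upper, count) struct
case class Bucket(lower: Int, upper: Int, count: Int)

val df = Seq(
  (1, 1, Seq(Bucket(0, 1, 2))),
  (1, 1, Seq(Bucket(1, 2, 5))),
  (1, 7, Seq(Bucket(0, 1, 1), Bucket(1, 2, 6))),
  (1, 7, Seq(Bucket(0, 1, 5), Bucket(1, 2, 1), Bucket(2, 3, 1))),
  (2, 1, Seq(Bucket(0, 1, 1))),
  (2, 2, Seq(Bucket(0, 1, 1), Bucket(1, 2, 1))),
  (2, 1, Seq(Bucket(0, 1, 1))),
  (2, 1, Seq(Bucket(2, 3, 1))),
  (2, 1, Seq(Bucket(0, 1, 1))),
  (2, 4, Seq(Bucket(0, 1, 1), Bucket(1, 2, 1), Bucket(2, 3, 1), Bucket(3, 4, 1)))
).toDF("id", "sum", "distribution")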

1 Answer:

Answer 0 (score: 0)

You can use the following UDF:

import org.apache.spark.sql.expressions.UserDefinedFunction
import org.apache.spark.sql.{Column, DataFrame, Row, SparkSession}
import org.apache.spark.sql.functions._
import org.apache.spark.sql.types.{ArrayType, IntegerType, StructField, StructType}


// Schema of the UDF result: an array of (lower, upper, count) structs,
// matching the element type of the input "distribution" column
val schema: ArrayType = ArrayType(StructType(Seq(
  StructField("lower", IntegerType, false),
  StructField("upper", IntegerType, false),
  StructField("count", IntegerType, false)
)))

// UDF: flatten the collected arrays of buckets, group them by (lower, upper),
// sum the counts and sort by the bucket bounds so the output order is deterministic
val customAggregation = udf((xs: Seq[Seq[Row]]) =>
  xs.flatten
    .map(row => (
      row.getAs[Int]("lower"),
      row.getAs[Int]("upper"),
      row.getAs[Int]("count")
    ))
    .groupBy(x => (x._1, x._2))
    .mapValues(_.map(_._3).sum)
    .toSeq
    .map { case ((lower, upper), count) => (lower, upper, count) }
    .sortBy(x => (x._1, x._2)),
  schema
)


val dfOutput: DataFrame = df_input
  .groupBy("id")
  .agg(sum("sum"), collect_list("distribution"))
  .toDF("id", "sum", "distribution")
  // the UDF receives the collected Seq[Seq[Row]] and flattens it internally
  .withColumn("distribution_agg", customAggregation(col("distribution")))

The result will be:

scala> dfOutput.select("id", "sum", "distribution_agg").show(false)
+---+---+------------------------------------------------------------+
|id |sum|distribution_agg                                            |
+---+---+------------------------------------------------------------+
|1  |16 |[[0,1,8], [1,2,12], [2,3,1]]                                |
|2  |10 |[[0,1,5], [1,2,2], [2,3,3], [3,4,1]]                        |
+---+---+------------------------------------------------------------+
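
If you want the result to have exactly the shape of the expected output (a single "distribution" column), you can drop the intermediate collected column and rename the aggregated one, for example:

// keep only the merged distribution, under the original column name
val finalDf = dfOutput
  .drop("distribution")
  .withColumnRenamed("distribution_agg", "distribution")

finalDf.show(false)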