Spark GroupBy aggregate functions

Posted: 2017-08-24 22:15:55

标签: apache-spark spark-dataframe aggregate-functions apache-spark-dataset

case class Step(Id: Long,
                stepNum: Long,
                stepId: Int,
                stepTime: java.sql.Timestamp)

I have a Dataset[Step] and I want to perform a groupBy operation on the "Id" column. My output should look like Dataset[(Long, List[Step])]. How can I do this?

If the variable "inquiryStepMap" is of type Dataset[Step], we can do this with the RDD API:

val inquiryStepGrouped: RDD[(Long, Iterable[Step])] = inquiryStepMap.rdd.groupBy(x => x.Id)
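
This gives an RDD rather than a Dataset, though. As a minimal sketch of how one could get from there to the desired Dataset[(Long, List[Step])] (assuming a SparkSession named `spark` with its implicits in scope, as in spark-shell; `asDataset` is a name introduced here for illustration):

import spark.implicits._   // assumed SparkSession named `spark`
import org.apache.spark.sql.Dataset

// Iterable has no Encoder, so each group must be materialized as a List
// before the RDD can be converted back into a Dataset.
val asDataset: Dataset[(Long, List[Step])] =
  inquiryStepGrouped
    .map { case (id, steps) => (id, steps.toList) }
    .toDS()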

1 Answer:

Answer 0 (score: 2)

It sounds like you need groupByKey.

Sample:

import java.sql.Timestamp

// Timestamp.valueOf avoids the deprecated Timestamp(year, month, ...) constructor,
// whose year argument is an offset from 1900 (the original code silently produced the year 3917).
val t = Timestamp.valueOf("2017-05-01 00:00:00")
val ds = Seq(Step(1L, 21L, 1, t), Step(1L, 20L, 2, t), Step(2L, 10L, 3, t)).toDS()

groupByKey then mapGroups:

ds.groupByKey(_.Id).mapGroups((Id, Vals) => (Id, Vals.toList))
// res18: org.apache.spark.sql.Dataset[(Long, List[Step])] = [_1: bigint, _2: array<struct<Id:bigint,stepNum:bigint,stepId:int,stepTime:timestamp>>]

The result looks like this:

ds.groupByKey(_.Id).mapGroups((Id, Vals) => (Id, Vals.toList)).show()
+---+--------------------+
| _1|                  _2|
+---+--------------------+
|  1|[[1,21,1,2017-05-...|
|  2|[[2,10,3,2017-05-...|
+---+--------------------+
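
For comparison, a sketch (not part of the original answer) of an untyped alternative: the DataFrame API can produce a similar shape with collect_list, at the cost of losing the Step type unless you convert back afterwards.

import org.apache.spark.sql.functions.{col, collect_list, struct}

// Group on the Id column and collect each group's rows into an array of structs.
// The result is a DataFrame with schema (Id: bigint, steps: array<struct<...>>).
val grouped = ds
  .groupBy(col("Id"))
  .agg(collect_list(struct(col("Id"), col("stepNum"), col("stepId"), col("stepTime"))).as("steps"))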