Apache Spark GroupBy / Aggregate

Date: 2018-03-28 14:02:24

Tags: java apache-spark spark-dataframe

New to Spark and using Java. How do I perform the equivalent of the SQL below:

select id, sum(thistransaction::decimal), date_part('month', transactiondisplaydate::timestamp) as month from history group by id, month

The dataset looks like this:

ID, Spend, DateTime
468429,13.3,2017-09-01T11:43:16.999
520003,84.34,2017-09-01T11:46:49.999
520003,46.69,2017-09-01T11:24:34.999
520003,82.1,2017-09-01T11:45:19.999
468429,20.0,2017-09-01T11:40:14.999
468429,20.0,2017-09-01T11:38:16.999
520003,26.57,2017-09-01T12:46:25.999
468429,20.0,2017-09-01T12:25:04.999
468429,20.0,2017-09-01T12:25:04.999
520003,20.25,2017-09-01T12:24:51.999

The desired result is the average weekly spend per customer.

This is as much a question about which approach to use as about the basic mechanics of getting at the data.
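
For reference, SQL of this shape can be run almost verbatim in Spark by registering the data as a temporary view. A minimal Java sketch, assuming a SparkSession named spark, a DataFrame df holding the sample above, and the column names ID, Spend, DateTime:

import org.apache.spark.sql.Dataset;
import org.apache.spark.sql.Row;

// Expose the DataFrame under a view name so the original SQL shape carries over
df.createOrReplaceTempView("history");

Dataset<Row> monthly = spark.sql(
    "SELECT ID, SUM(CAST(Spend AS DECIMAL(10,2))) AS spend_sum, "
    + "MONTH(CAST(DateTime AS TIMESTAMP)) AS month "
    + "FROM history GROUP BY ID, MONTH(CAST(DateTime AS TIMESTAMP))");
monthly.show();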

2 answers:

Answer 0 (score: 0)

This should work:

import static org.apache.spark.sql.functions.*;

// The month is characters 6-7 of the ISO-8601 string (substr is 1-based)
df = df.withColumn("Month", col("DateTime").substr(6, 2));
df = df.groupBy(col("ID"), col("Month")).agg(sum("Spend"));
df.show();

which produces:

+------+-----+----------+
|    ID|Month|sum(Spend)|
+------+-----+----------+
|520003|   09|    259.95|
|468429|   09|      93.3|
+------+-----+----------+
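
Since the question actually asks for a weekly average rather than a monthly sum, here is a hedged extension of the same Java approach using the built-in weekofyear and avg functions (to_timestamp assumes Spark 2.2+):

import static org.apache.spark.sql.functions.*;

// Parse the ISO string into a timestamp, derive the calendar week, then average
df = df.withColumn("Week", weekofyear(to_timestamp(col("DateTime"))))
       .groupBy(col("ID"), col("Week"))
       .agg(avg("Spend").alias("avg_spend"));
df.show();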

Answer 1 (score: 0)

Here's another version, in Scala...

// Sample data (in a standalone app, import spark.implicits._ for toDF)
val df = Seq(
(468429,13.3,"2017-09-01T11:43:16.999"),
(520003,84.34,"2017-09-01T11:46:49.999"),
(520003,46.69,"2017-09-01T11:24:34.999"),
(520003,82.1,"2017-09-01T11:45:19.999"),
(468429,20.0,"2017-09-01T11:40:14.999"),
(468429,20.0,"2017-09-12T11:38:16.999"),
(520003,26.57,"2017-09-22T12:46:25.999"),
(468429,20.0,"2017-09-01T12:25:04.999"),
(468429,20.0,"2017-09-17T12:25:04.999"),
(520003,20.25,"2017-09-01T12:24:51.999")
).toDF("id","spend","datetime")

import org.apache.spark.sql.functions._
// "M" = month of year, "W" = week of month (SimpleDateFormat patterns)
val df2 = df.select('id,'datetime,date_format($"datetime", "M").name("month"),
                        date_format($"datetime", "W").name("week"),'spend)

// Monthly

df2.groupBy('id,'month).agg(sum('spend).as("spend_sum"),avg('spend).as("spend_avg")).
select('id,'month,'spend_sum,'spend_avg).show()

+------+-----+---------+------------------+
|    id|month|spend_sum|         spend_avg|
+------+-----+---------+------------------+
|520003|    9|   259.95|51.989999999999995|
|468429|    9|     93.3|             18.66|
+------+-----+---------+------------------+

// Weekly

df2.groupBy('id,'month,'week).agg(sum('spend).as("spend_sum"),avg('spend).as("spend_avg")).
select('id,'month,'week,'spend_sum,'spend_avg).show()

+------+-----+----+---------+------------------+
|    id|month|week|spend_sum|         spend_avg|
+------+-----+----+---------+------------------+
|520003|    9|   4|    26.57|             26.57|
|468429|    9|   3|     20.0|              20.0|
|520003|    9|   1|   233.38|            58.345|
|468429|    9|   4|     20.0|              20.0|
|468429|    9|   1|     53.3|17.766666666666666|
+------+-----+----+---------+------------------+
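
Note that date_format(..., "W") gives the week of the month (hence September splitting into weeks 1, 3, and 4 above); for calendar weeks that run across month boundaries, weekofyear('datetime) is the usual alternative.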