I am working with a Hive table using Spark/Scala. The table contains transaction data for each member, and I need to get the latest record per member. I accomplished this with the code below and it runs successfully, but its performance is not acceptable.
Is there another way to improve the performance of this code? I found some approaches using spark-sql, but I would prefer Spark DataFrames or Datasets.
The example below reproduces my code and data.
import org.apache.spark.sql.Row
import org.apache.spark.sql.functions.max
import org.apache.spark.sql.types.{StringType, StructField, StructType, TimestampType}

val mamberData = Seq(
Row("1234", "CX", java.sql.Timestamp.valueOf("2018-09-09 00:00:00")),
Row("1234", "CX", java.sql.Timestamp.valueOf("2018-03-02 00:00:00")),
Row("5678", "NY", java.sql.Timestamp.valueOf("2019-01-01 00:00:00")),
Row("5678", "NY", java.sql.Timestamp.valueOf("2018-01-01 00:00:00")),
Row("7088", "SF", java.sql.Timestamp.valueOf("2018-09-01 00:00:00"))
)
val MemberDataSchema = List(
StructField("member_id", StringType, nullable = true),
StructField("member_state", StringType, nullable = true),
StructField("activation_date", TimestampType, nullable = true)
)
import spark.implicits._
val memberDF = spark.createDataFrame(
spark.sparkContext.parallelize(mamberData),
StructType(MemberDataSchema)
)
val memberDfMaxDate = memberDF
  .groupBy('member_id)
  .agg(max('activation_date).as("activation_date"))

val memberDFMaxOnly = memberDF.join(memberDfMaxDate, Seq("member_id", "activation_date"))
The output is as follows:
+---------+------------+-------------------+
|member_id|member_state|activation_date |
+---------+------------+-------------------+
|1234 |CX |2018-09-09 00:00:00|
|1234 |CX |2018-03-02 00:00:00|
|5678 |NY |2019-01-01 00:00:00|
|5678 |NY |2018-01-01 00:00:00|
|7088 |SF |2018-09-01 00:00:00|
+---------+------------+-------------------+
+---------+-------------------+------------+
|member_id| activation_date|member_state|
+---------+-------------------+------------+
| 7088|2018-09-01 00:00:00| SF|
| 1234|2018-09-09 00:00:00| CX|
| 5678|2019-01-01 00:00:00| NY|
+---------+-------------------+------------+
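Before looking at alternatives, it can help to inspect the physical plan of the join-based version; the short sketch below only assumes the memberDFMaxOnly defined above:

// Print the physical plan of the groupBy + join approach (sketch):
// it shows the aggregation followed by the join, which is typically where the extra cost comes from.
memberDFMaxOnly.explain()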
Answer 0 (score: 1)
There are several techniques you could use, such as ranking (window functions) or the Dataset API. I prefer reduceGroups because it is functional in style and easy to reason about.
import org.apache.spark.sql.Dataset

case class MemberDetails(member_id: String, member_state: String, activation_date: java.sql.Timestamp)

val dataDS: Dataset[MemberDetails] = spark.createDataFrame(
    spark.sparkContext.parallelize(mamberData),
    StructType(MemberDataSchema)
  )
  .as[MemberDetails]
  .groupByKey(_.member_id)
  .reduceGroups((r1, r2) => if (r1.activation_date.after(r2.activation_date)) r1 else r2)
  .map { case (_, row) => row }
dataDS.show(truncate = false)
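As a quick sanity check (a sketch, assuming both memberDFMaxOnly from the question and dataDS above are in scope), the two approaches should return the same rows:

// Compare the reduceGroups result with the join-based result from the question (sketch).
val diff = memberDFMaxOnly
  .select($"member_id", $"member_state", $"activation_date")
  .except(dataDS.toDF("member_id", "member_state", "activation_date"))

assert(diff.count() == 0, "the two approaches should agree")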
Answer 1 (score: 1)
The DataFrame groupBy you are already using is about as efficient as it gets (more efficient than window functions, thanks to partial aggregation). However, you can avoid the join entirely by using struct in the aggregation clause:
import org.apache.spark.sql.functions.{max, struct}

val memberDfMaxOnly = memberDF
  .groupBy('member_id)
  .agg(max(struct('activation_date, 'member_state)).as("row_selection"))
  .select(
    $"member_id",
    $"row_selection.activation_date",
    $"row_selection.member_state"
  )
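One detail worth calling out: max on a struct compares its fields from left to right, so activation_date has to be the first field in the struct, otherwise member_state would drive the comparison. The nested columns can also be expanded in one step, as in this small variation of the code above:

// Same aggregation, expanding the struct with a star instead of
// listing each nested field explicitly (sketch).
val memberDfMaxOnlyFlat = memberDF
  .groupBy('member_id)
  .agg(max(struct('activation_date, 'member_state)).as("row_selection"))
  .select($"member_id", $"row_selection.*")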
Answer 2 (score: 0)
Use window functions to assign a rank and keep only the top-ranked row within each group.
import org.apache.spark.sql.expressions.Window
import org.apache.spark.sql.functions.rank

// Partition by member_id, order by activation_date descending
val byMemberId = Window.partitionBy($"member_id").orderBy($"activation_date".desc)

// Apply the window function and keep only the top-ranked row per member
val memberDFMaxOnly = memberDF
  .select($"*", rank().over(byMemberId).as("rank"))
  .where($"rank" === 1)
  .drop("rank")

// View the results
memberDFMaxOnly.show()
+---------+------------+-------------------+
|member_id|member_state| activation_date|
+---------+------------+-------------------+
| 1234| CX|2018-09-09 00:00:00|
| 5678| NY|2019-01-01 00:00:00|
| 7088| SF|2018-09-01 00:00:00|
+---------+------------+-------------------+
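Note that rank keeps every row that ties for the maximum activation_date; if exactly one row per member is required regardless of ties, row_number (which breaks ties arbitrarily) can be swapped in, as in this sketch:

import org.apache.spark.sql.functions.row_number

// Same window, but row_number guarantees a single row per member_id (sketch).
val memberDFMaxSingle = memberDF
  .withColumn("rn", row_number().over(byMemberId))
  .where($"rn" === 1)
  .drop("rn")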