I am using window functions with Spark 2.1.0. Below is a working example -
import org.apache.spark.sql.expressions.Window
import org.apache.spark.sql.functions._
import spark.implicits._
val df = Seq(
// Below is the REJOIN case, where the next start date of the updated record is after the previous record's end date
(1,173777,"7777","2018-12-18","2019-01-04"),
(1,173777,"7778","2019-01-05","2019-01-10"),
(1,173777,"7779","2019-02-01",null),
(2,173788,"6666","2004-09-16","2006-03-18"),
(2,173788,"6668","2018-12-18",null),
(3,173799,"1111","2002-09-16","2003-03-18"),
(3,173799,"1112","2007-09-16","2008-03-18"),
(4,173711,"9566","2009-09-16","2011-03-18"),
(4,173711,"9555","2007-09-16","2008-03-18"),
// Below is the UPDATED case, where either all records in the window have a null end_date or no record's next start date is after the previous record's end date, unlike above
(5,1737111,"1111","2016-09-26",null),
(5,1737111,"1112","2017-09-26",null),
(5,1737111,"1113","2018-09-26",null),
(6,1737444,"3334","2004-09-16","2019-02-15"),
(6,1737444,"3333","2018-12-18","2019-01-31"),
(7,1737555,"5555","2009-01-01","2011-02-18"),
(7,1737555,"5556","2008-12-18","2019-01-31"),
(7,1737555,"5557","2019-01-01","2019-02-15"),
(7,1737555,"5558","2019-02-14","2020-02-18"),
(7,1737555,"5559","2010-01-01","2026-02-18"))
.toDF("id","user_id", "record_modify_ts", "start_time", "end_time")
val partitionWindowR = Window.partitionBy($"id", $"user_id").orderBy($"record_modify_ts".asc)
I have calculated the next value of the start date as below -
val nextStart = lead($"start_time", 1).over(partitionWindowR)
I am trying to calculate status as another column using the code below -
df.withColumn("next_start", nextStart)
.withColumn("status", when(to_date($"next_start") >= to_date($"end_time"), lit("REJOIN")).otherwise(lit("UPDATE"))).show
For the last record in each window, next_start comes back as null, and comparing against that null is giving me trouble when assigning the record's status. What I want is an additional status column at the end.
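Here is a minimal sketch of what I believe is going on with that null (the tiny DataFrame is for illustration only, reusing the imports above): a comparison against a null next_start evaluates to null, when() treats that as false, and otherwise() assigns UPDATE.
// Illustration only: comparing a date against a null next_start yields null,
// so when() never matches and otherwise() fires.
Seq(("2019-02-01", null.asInstanceOf[String])).toDF("end_time", "next_start")
  .withColumn("status", when(to_date($"next_start") >= to_date($"end_time"), lit("REJOIN")).otherwise(lit("UPDATE")))
  .show()
// +----------+----------+------+
// |  end_time|next_start|status|
// +----------+----------+------+
// |2019-02-01|      null|UPDATE|
// +----------+----------+------+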
Below is the expected output -
/*
+---+-------+----------------+----------+----------+----------+-------+
| id|user_id|record_modify_ts|start_time| end_time|next_start| status|
+---+-------+----------------+----------+----------+----------+-------+
| 4| 173711| 9555|2007-09-16|2008-03-18|2009-09-16| REJOIN|
| 4| 173711| 9566|2009-09-16|2011-03-18| null| REJOIN| <--
| 3| 173799| 1111|2002-09-16|2003-03-18|2007-09-16| REJOIN|
| 3| 173799| 1112|2007-09-16|2008-03-18| null| REJOIN| <--
| 15|1737111| 1111|2016-09-26| null|2017-09-26|UPDATED|
| 15|1737111| 1112|2017-09-26| null|2018-09-26|UPDATED|
| 15|1737111| 1113|2018-09-26| null| null|UPDATED|
| 16|1737444| 3333|2018-12-18|2019-01-31|2004-09-16|UPDATED|
| 16|1737444| 3334|2004-09-16|2019-02-15| null|UPDATED|
| 17|1737555| 5555|2009-01-01|2011-02-18|2008-12-18|UPDATED|
| 17|1737555| 5556|2008-12-18|2019-01-31|2019-01-01|UPDATED|
| 17|1737555| 5557|2019-01-01|2019-02-15|2019-02-14|UPDATED|
| 17|1737555| 5558|2019-02-14|2020-02-18|2010-01-01|UPDATED|
| 17|1737555| 5559|2010-01-01|2026-02-18| null|UPDATED|
| 1| 173777| 7777|2018-12-18|2019-01-04|2019-01-05| REJOIN|
| 1| 173777| 7778|2019-01-05|2019-01-10|2019-02-01| REJOIN|
| 1| 173777| 7779|2019-02-01| null| null| REJOIN| <--
| 2| 173788| 6666|2004-09-16|2006-03-18|2018-12-18| REJOIN|
| 2| 173788| 6668|2018-12-18| null| null| REJOIN| <--
+---+-------+----------------+----------+----------+----------+-------+
*/
Any help would be greatly appreciated. Thanks.
PS: The other columns in the o/p may not be correct. I only really care about status, nothing else.
Answer 0 (score: 1)
This should solve your problem. You just need to handle the null case, because when next_start is null the otherwise() branch will always execute.
val df = Seq((1,173777,"7777","2018-12-18","2019-01-04"),(1,173777,"7778","2019-01-05","2019-01-10"),(1,173777,"7779","2019-02-01",null),(2,173788,"6666","2004-09-16","2006-03-18"),(2,173788,"6668","2018-12-18",null),(3,173799,"1111","2002-09-16","2003-03-18"),(3,173799,"1112","2007-09-16","2008-03-18"),(4,173711,"9566","2009-09-16","2011-03-18"),(4,173711,"9555","2007-09-16","2008-03-18"),(5,1737111,"1111","2016-09-26",null),(5,1737111,"1112","2017-09-26",null),(5,1737111,"1113","2018-09-26",null),(6,1737444,"3334","2004-09-16","2019-02-15"),(6,1737444,"3333","2018-12-18","2019-01-31"),(7,1737555,"5555","2009-01-01","2011-02-18"),(7,1737555,"5556","2008-12-18","2019-01-31"),(7,1737555,"5557","2019-01-01","2019-02-15"),(7,1737555,"5558","2019-02-14","2020-02-18"),(7,1737555,"5559","2010-01-01","2026-02-18")).toDF("id","user_id", "record_modify_ts", "start_time", "end_time")
val partitionWindowR = Window.partitionBy($"id", $"user_id").orderBy($"record_modify_ts".asc)
val nextStart = lead($"start_time", 1).over(partitionWindowR)
val df_t = df.withColumn("next_start", nextStart)
  .withColumn("status", when(to_date($"next_start") >= to_date($"end_time"), lit("REJOIN")).otherwise(lit("UPDATE")))
df_t.show()
Output:
+---+-------+----------------+----------+----------+----------+------+
| id|user_id|record_modify_ts|start_time| end_time|next_start|status|
+---+-------+----------------+----------+----------+----------+------+
| 4| 173711| 9555|2007-09-16|2008-03-18|2009-09-16|REJOIN|
| 4| 173711| 9566|2009-09-16|2011-03-18| null|UPDATE|
| 6|1737444| 3333|2018-12-18|2019-01-31|2004-09-16|UPDATE|
| 6|1737444| 3334|2004-09-16|2019-02-15| null|UPDATE|
| 3| 173799| 1111|2002-09-16|2003-03-18|2007-09-16|REJOIN|
| 3| 173799| 1112|2007-09-16|2008-03-18| null|UPDATE|
| 5|1737111| 1111|2016-09-26| null|2017-09-26|UPDATE|
| 5|1737111| 1112|2017-09-26| null|2018-09-26|UPDATE|
| 5|1737111| 1113|2018-09-26| null| null|UPDATE|
| 1| 173777| 7777|2018-12-18|2019-01-04|2019-01-05|REJOIN|
| 1| 173777| 7778|2019-01-05|2019-01-10|2019-02-01|REJOIN|
| 1| 173777| 7779|2019-02-01| null| null|UPDATE|
| 7|1737555| 5555|2009-01-01|2011-02-18|2008-12-18|UPDATE|
| 7|1737555| 5556|2008-12-18|2019-01-31|2019-01-01|UPDATE|
| 7|1737555| 5557|2019-01-01|2019-02-15|2019-02-14|UPDATE|
| 7|1737555| 5558|2019-02-14|2020-02-18|2010-01-01|UPDATE|
| 7|1737555| 5559|2010-01-01|2026-02-18| null|UPDATE|
| 2| 173788| 6666|2004-09-16|2006-03-18|2018-12-18|REJOIN|
| 2| 173788| 6668|2018-12-18| null| null|UPDATE|
+---+-------+----------------+----------+----------+----------+------+
df_t.withColumn("status_updated",first($"status").over(partitionWindowR)).show()
Corrected output:
+---+-------+----------------+----------+----------+----------+------+--------------+
| id|user_id|record_modify_ts|start_time| end_time|next_start|status|status_updated|
+---+-------+----------------+----------+----------+----------+------+--------------+
| 4| 173711| 9555|2007-09-16|2008-03-18|2009-09-16|REJOIN| REJOIN|
| 4| 173711| 9566|2009-09-16|2011-03-18| null|UPDATE| REJOIN|
| 6|1737444| 3333|2018-12-18|2019-01-31|2004-09-16|UPDATE| UPDATE|
| 6|1737444| 3334|2004-09-16|2019-02-15| null|UPDATE| UPDATE|
| 3| 173799| 1111|2002-09-16|2003-03-18|2007-09-16|REJOIN| REJOIN|
| 3| 173799| 1112|2007-09-16|2008-03-18| null|UPDATE| REJOIN|
| 5|1737111| 1111|2016-09-26| null|2017-09-26|UPDATE| UPDATE|
| 5|1737111| 1112|2017-09-26| null|2018-09-26|UPDATE| UPDATE|
| 5|1737111| 1113|2018-09-26| null| null|UPDATE| UPDATE|
| 1| 173777| 7777|2018-12-18|2019-01-04|2019-01-05|REJOIN| REJOIN|
| 1| 173777| 7778|2019-01-05|2019-01-10|2019-02-01|REJOIN| REJOIN|
| 1| 173777| 7779|2019-02-01| null| null|UPDATE| REJOIN|
| 7|1737555| 5555|2009-01-01|2011-02-18|2008-12-18|UPDATE| UPDATE|
| 7|1737555| 5556|2008-12-18|2019-01-31|2019-01-01|UPDATE| UPDATE|
| 7|1737555| 5557|2019-01-01|2019-02-15|2019-02-14|UPDATE| UPDATE|
| 7|1737555| 5558|2019-02-14|2020-02-18|2010-01-01|UPDATE| UPDATE|
| 7|1737555| 5559|2010-01-01|2026-02-18| null|UPDATE| UPDATE|
| 2| 173788| 6666|2004-09-16|2006-03-18|2018-12-18|REJOIN| REJOIN|
| 2| 173788| 6668|2018-12-18| null| null|UPDATE| REJOIN|
+---+-------+----------------+----------+----------+----------+------+--------------+
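Alternatively, here is a sketch that computes the final status in a single pass without relying on the first row of the window: flag each row, then propagate the flag across the whole partition (an unordered window defaults to an unbounded frame). It reuses df and partitionWindowR from above; the fullWindow and rejoin_flag names are just illustrative.
// Unordered window: the default frame spans the whole (id, user_id) partition
val fullWindow = Window.partitionBy($"id", $"user_id")

df.withColumn("next_start", lead($"start_time", 1).over(partitionWindowR))
  // 1 when the next record starts on or after this record's end date, else 0
  .withColumn("rejoin_flag", when(to_date($"next_start") >= to_date($"end_time"), 1).otherwise(0))
  // if any row in the group is flagged, mark the whole group as REJOIN
  .withColumn("status", when(max($"rejoin_flag").over(fullWindow) === 1, lit("REJOIN")).otherwise(lit("UPDATED")))
  .drop("rejoin_flag")
  .show()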