Spark SQL using a window - collect data from rows after the current row based on a column condition

Date: 2018-11-29 17:43:05

Tags: scala apache-spark

I have a Spark DataFrame (in Scala) that looks like this:

+---------+-------------+------+---------+------------+
|  user_id|      item_id|  mood|     time|sessionBegin|
+---------+-------------+------+---------+------------+
|        1|            A| Happy|        0|           0|
|        1|            B| Happy|        1|           0|
|        1|            C| Happy|        3|           0|
|        1|            D| Happy|        5|           0|
|        1|            C| Happy|        6|           0|
|        1|            D|   Sad|        6|           0|
|        1|            C|   Sad|       10|           0|
|        1|            A| Happy|       28|           0|
|        1|            E| Happy|       35|           0|
|        1|            E|   Sad|       60|           0|
|        2|            F| Happy|        6|           6|
|        2|            E| Happy|       17|           6|
|        2|            D| Happy|       20|           6|
|        2|            D|   Sad|       21|           6|
|        2|            E| Happy|       27|           6|
|        2|            G| Happy|       37|           6|
|        2|            H| Happy|       39|           6|
|        2|            G|   Sad|       45|           6|
+---------+-------------+------+---------+------------+

I have defined a window over the columns (user_id, sessionBegin), ordered by time:

import org.apache.spark.sql.expressions.Window

val window = Window.partitionBy("user_id","sessionBegin").orderBy("time")

Now I want to add a column result that does the following:

1) Check whether the mood is Happy; if so, collect all item_id from the rows after the current row where mood = Sad. Otherwise, if the mood is Sad, put an empty array.

2) This must be computed over the window I specified above. (For example, this DataFrame has two windows: the first is (user_id = 1, sessionBegin = 0) and the second is (user_id = 2, sessionBegin = 6).)

So the resulting DF would be (for example, the first row (A, Happy, time 0) is followed within user 1's session by the Sad rows D, C and E, hence [D,C,E]):

+---------+-------------+------+---------+------------+---------+
|  user_id|      item_id|  mood|     time|sessionBegin|   result|
+---------+-------------+------+---------+------------+---------+
|        1|            A| Happy|        0|           0|  [D,C,E]|
|        1|            B| Happy|        1|           0|  [D,C,E]|
|        1|            C| Happy|        3|           0|  [D,C,E]|
|        1|            D| Happy|        5|           0|  [D,C,E]|
|        1|            C| Happy|        6|           0|  [D,C,E]|
|        1|            D|   Sad|        6|           0|       []|
|        1|            C|   Sad|       10|           0|       []|
|        1|            A| Happy|       28|           0|      [E]|
|        1|            E| Happy|       35|           0|      [E]|
|        1|            E|   Sad|       60|           0|       []|
|        2|            F| Happy|        6|           6|    [D,G]|
|        2|            E| Happy|       17|           6|    [D,G]|
|        2|            D| Happy|       20|           6|    [D,G]|
|        2|            D|   Sad|       21|           6|       []|
|        2|            E| Happy|       27|           6|      [G]|
|        2|            G| Happy|       37|           6|      [G]|
|        2|            H| Happy|       39|           6|      [G]|
|        2|            G|   Sad|       45|           6|       []|
+---------+-------------+------+---------+------------+---------+

I tried the approach of using collect_set over the window, but couldn't figure out two things:

  1. How do I consider only the rows that come after the current row?
  2. For all rows where mood=Happy, how do I collect the set of item_id only from the rows where mood=Sad (e.g., with when..otherwise)?

Can anyone point out how to approach this?

1 Answer:

Answer 0 (score: 1)

I was not able to get a frame between the row after the current one and the end of the partition, so I used a frame from the current row to unbounded following and then removed the first element of the array with a udf. I've used a mix of spark.sql, a udf and DataFrame operations - check this out:

// Sample data; time arrives as a string and is cast to int so that
// ordering by time is numeric rather than lexicographic
val df = Seq(
  (1,"A","Happy","0","0"), (1,"B","Happy","1","0"), (1,"C","Happy","3","0"),
  (1,"D","Happy","5","0"), (1,"C","Happy","6","0"), (1,"D","Sad","6","0"),
  (1,"C","Sad","10","0"), (1,"A","Happy","28","0"), (1,"E","Happy","35","0"),
  (1,"E","Sad","60","0"), (2,"F","Happy","6","6"), (2,"E","Happy","17","6"),
  (2,"D","Happy","20","6"), (2,"D","Sad","21","6"), (2,"E","Happy","27","6"),
  (2,"G","Happy","37","6"), (2,"H","Happy","39","6"), (2,"G","Sad","45","6")
).toDF("user_id","item_id","mood","time","sessionBegin")
val df2 = df.withColumn("time", 'time.cast("int"))
df2.createOrReplaceTempView("user")

// ' ' marks the Happy rows so that the current row can be dropped from the
// collected array afterwards; sessionBegin is included in the partition to
// match the window defined in the question
val df3 = spark.sql(
  """
    select user_id, item_id, mood, time, sessionBegin,
    case when mood='Happy' then
      collect_list(case when mood='Happy' then ' ' when mood='Sad' then item_id end)
        over (partition by user_id, sessionBegin order by time
              rows between current row and unbounded following)
    when mood='Sad' then array()
    end as result from user
  """)
// Drop the first element (the current row's own ' ' marker), remove the
// remaining markers, and de-duplicate while preserving order
def sliceResult(x: Seq[String]): Seq[String] =
  x.drop(1).filter(_ != " ").distinct

val udf_sliceResult = udf(sliceResult _)
df3.withColumn("result1", udf_sliceResult('result)).show(false)

Result:

+-------+-------+-----+----+------------+------------------------------+---------+
|user_id|item_id|mood |time|sessionBegin|result                        |result1  |
+-------+-------+-----+----+------------+------------------------------+---------+
|1      |A      |Happy|0   |0           |[ ,  ,  ,  ,  , D, C,  ,  , E]|[D, C, E]|
|1      |B      |Happy|1   |0           |[ ,  ,  ,  , D, C,  ,  , E]   |[D, C, E]|
|1      |C      |Happy|3   |0           |[ ,  ,  , D, C,  ,  , E]      |[D, C, E]|
|1      |D      |Happy|5   |0           |[ ,  , D, C,  ,  , E]         |[D, C, E]|
|1      |C      |Happy|6   |0           |[ , D, C,  ,  , E]            |[D, C, E]|
|1      |D      |Sad  |6   |0           |[]                            |[]       |
|1      |C      |Sad  |10  |0           |[]                            |[]       |
|1      |A      |Happy|28  |0           |[ ,  , E]                     |[E]      |
|1      |E      |Happy|35  |0           |[ , E]                        |[E]      |
|1      |E      |Sad  |60  |0           |[]                            |[]       |
|2      |F      |Happy|6   |6           |[ ,  ,  , D,  ,  ,  , G]      |[D, G]   |
|2      |E      |Happy|17  |6           |[ ,  , D,  ,  ,  , G]         |[D, G]   |
|2      |D      |Happy|20  |6           |[ , D,  ,  ,  , G]            |[D, G]   |
|2      |D      |Sad  |21  |6           |[]                            |[]       |
|2      |E      |Happy|27  |6           |[ ,  ,  , G]                  |[G]      |
|2      |G      |Happy|37  |6           |[ ,  , G]                     |[G]      |
|2      |H      |Happy|39  |6           |[ , G]                        |[G]      |
|2      |G      |Sad  |45  |6           |[]                            |[]       |
+-------+-------+-----+----+------------+------------------------------+---------+
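
As an aside, the DataFrame API the question started from can express the "rows after the current row" frame directly via rowsBetween, which sidesteps both the ' ' placeholder and the drop-first step. Below is a minimal sketch, assuming Spark 2.2+ (for Window.unboundedFollowing and typedLit) and spark.implicits._ in scope; w and dfApi are illustrative names:

import org.apache.spark.sql.expressions.Window
import org.apache.spark.sql.functions.{when, collect_list, typedLit}

// The frame starts one row *after* the current row, so the current row never
// enters its own list and no first-element cleanup is needed
val w = Window
  .partitionBy("user_id", "sessionBegin")
  .orderBy("time")
  .rowsBetween(1, Window.unboundedFollowing)

val dfApi = df2.withColumn(
  "result",
  when($"mood" === "Happy",
    // the inner when yields null for Happy rows, and collect_list skips
    // nulls, so only the item_id of the Sad rows is collected
    collect_list(when($"mood" === "Sad", $"item_id")).over(w))
    .otherwise(typedLit(Seq.empty[String]))  // Sad rows get an empty array
)

Note that collect_list keeps duplicates; on Spark 2.4+ the result could be wrapped in array_distinct if de-duplication is needed (the sample data happens to contain no duplicate Sad item_ids per session).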

EDIT1:

As the OP pointed out, the ' ' can be replaced with null, and then df3 itself is the final result. This way the udf() is avoided:

scala> :paste
// Entering paste mode (ctrl-D to finish)

val df3 = spark.sql(
  """
    select user_id, item_id, mood, time, sessionBegin,
    case when mood='Happy' then
      collect_list(case when mood='Happy' then null when mood='Sad' then item_id end)
        over (partition by user_id, sessionBegin order by time
              rows between current row and unbounded following)
    when mood='Sad' then array()
    end as result from user
  """)

// Exiting paste mode, now interpreting.

df3: org.apache.spark.sql.DataFrame = [user_id: int, item_id: string ... 4 more fields]

scala> df3.show(false)
+-------+-------+-----+----+------------+---------+
|user_id|item_id|mood |time|sessionBegin|result   |
+-------+-------+-----+----+------------+---------+
|1      |A      |Happy|0   |0           |[D, C, E]|
|1      |B      |Happy|1   |0           |[D, C, E]|
|1      |C      |Happy|3   |0           |[D, C, E]|
|1      |D      |Happy|5   |0           |[D, C, E]|
|1      |C      |Happy|6   |0           |[D, C, E]|
|1      |D      |Sad  |6   |0           |[]       |
|1      |C      |Sad  |10  |0           |[]       |
|1      |A      |Happy|28  |0           |[E]      |
|1      |E      |Happy|35  |0           |[E]      |
|1      |E      |Sad  |60  |0           |[]       |
|2      |F      |Happy|6   |6           |[D, G]   |
|2      |E      |Happy|17  |6           |[D, G]   |
|2      |D      |Happy|20  |6           |[D, G]   |
|2      |D      |Sad  |21  |6           |[]       |
|2      |E      |Happy|27  |6           |[G]      |
|2      |G      |Happy|37  |6           |[G]      |
|2      |H      |Happy|39  |6           |[G]      |
|2      |G      |Sad  |45  |6           |[]       |
+-------+-------+-----+----+------------+---------+


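Finally, depending on the Spark version, the SQL parser itself may accept a frame that begins at the next row, in which case even the null marker from EDIT1 becomes unnecessary. A hedged sketch against the same user view (df4 is an illustrative name; untested on the exact Spark version used above):

// With "1 following" the frame excludes the current row, so Happy rows need
// no marker value at all: only the following Sad rows' item_id are collected
val df4 = spark.sql(
  """
    select user_id, item_id, mood, time, sessionBegin,
    case when mood='Happy' then
      collect_list(case when mood='Sad' then item_id end)
        over (partition by user_id, sessionBegin order by time
              rows between 1 following and unbounded following)
    when mood='Sad' then array()
    end as result from user
  """)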