PySpark - selecting rows where a column has non-consecutive values after grouping

Asked: 2019-01-10 11:15:37

Tags: apache-spark dataframe group-by pyspark

I have a dataframe in the following format:

|user_id| action | day |
------------------------
| d25as | AB     | 2   |
| d25as | AB     | 3   |
| d25as | AB     | 5   |
| m3562 | AB     | 1   |
| m3562 | AB     | 7   |
| m3562 | AB     | 9   |
| ha42a | AB     | 3   |
| ha42a | AB     | 4   |
| ha42a | AB     | 5   |

I want to filter out the users whose days are all consecutive, keeping only users that appear on at least one non-consecutive day. The resulting dataframe should be:

|user_id| action | day |
------------------------
| d25as | AB     | 2   |
| d25as | AB     | 3   |
| d25as | AB     | 5   |
| m3562 | AB     | 1   |
| m3562 | AB     | 7   |
| m3562 | AB     | 9   |

Here the last user has been removed, because all of his days are consecutive. Does anyone know how to do this in Spark?

2 Answers:

Answer 0 (score: 1)

Read the comments in between; the code should be self-explanatory.

from pyspark.sql.functions import udf, collect_list, explode, col
# Creating the DataFrame
values = [('d25as','AB',2),('d25as','AB',3),('d25as','AB',5),
          ('m3562','AB',1),('m3562','AB',7),('m3562','AB',9),
          ('ha42a','AB',3),('ha42a','AB',4),('ha42a','AB',5)]
df = sqlContext.createDataFrame(values,['user_id','action','day'])
df.show() 
+-------+------+---+
|user_id|action|day|
+-------+------+---+
|  d25as|    AB|  2|
|  d25as|    AB|  3|
|  d25as|    AB|  5|
|  m3562|    AB|  1|
|  m3562|    AB|  7|
|  m3562|    AB|  9|
|  ha42a|    AB|  3|
|  ha42a|    AB|  4|
|  ha42a|    AB|  5|
+-------+------+---+

# Grouping together the days in one list.
df = df.groupby(['user_id','action']).agg(collect_list('day'))
df.show()
+-------+------+-----------------+
|user_id|action|collect_list(day)|
+-------+------+-----------------+
|  ha42a|    AB|        [3, 4, 5]|
|  m3562|    AB|        [1, 7, 9]|
|  d25as|    AB|        [2, 3, 5]|
+-------+------+-----------------+

# Creating a UDF to check if the days are consecutive or not. Only keep False ones.
check_consecutive = udf(lambda row: sorted(row) == list(range(min(row), max(row)+1)))
df = df.withColumn('consecutive',check_consecutive(col('collect_list(day)')))\
      .where(col('consecutive')==False)
df.show()
+-------+------+-----------------+-----------+
|user_id|action|collect_list(day)|consecutive|
+-------+------+-----------------+-----------+
|  m3562|    AB|        [1, 7, 9]|      false|
|  d25as|    AB|        [2, 3, 5]|      false|
+-------+------+-----------------+-----------+
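
(An optional refinement, not from the original answer: declaring the UDF's return type explicitly makes the filter above compare real booleans instead of relying on the default string return type.)

from pyspark.sql.types import BooleanType
# Same check as above, returning a proper boolean column (assumed refinement).
check_consecutive = udf(lambda row: sorted(row) == list(range(min(row), max(row) + 1)),
                        BooleanType())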

# Finally, exploding the DataFrame from above to get the result.
df = df.withColumn("day", explode(col('collect_list(day)')))\
       .drop('consecutive','collect_list(day)')
df.show()
+-------+------+---+
|user_id|action|day|
+-------+------+---+
|  m3562|    AB|  1|
|  m3562|    AB|  7|
|  m3562|    AB|  9|
|  d25as|    AB|  2|
|  d25as|    AB|  3|
|  d25as|    AB|  5|
+-------+------+---+
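
A UDF-free variant of the same idea is also possible. The sketch below is my own addition (it assumes Spark 2.4+ for sequence(); df0 and the intermediate column names are illustrative): a user's days are consecutive exactly when the sorted list of days equals the unbroken range from min(day) to max(day).

from pyspark.sql import functions as F

# Rebuild the original DataFrame from the values list defined above.
df0 = sqlContext.createDataFrame(values, ['user_id', 'action', 'day'])

grouped = (df0.groupBy('user_id', 'action')
              .agg(F.sort_array(F.collect_list('day')).alias('days'),
                   F.min('day').alias('lo'),
                   F.max('day').alias('hi')))

# Keep only the groups whose sorted days differ from the unbroken range lo..hi,
# then explode back to one row per day.
result = (grouped
          .where(F.col('days') != F.sequence(F.col('lo'), F.col('hi')))
          .select('user_id', 'action', F.explode('days').alias('day')))
result.show()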

Answer 1 (score: 1)

This uses Spark SQL window functions and no UDFs. The DataFrame is built in Scala, but the SQL part will be the same in Python. Check it out:

val df = Seq(("d25as","AB",2),("d25as","AB",3),("d25as","AB",5),("m3562","AB",1),("m3562","AB",7),("m3562","AB",9),("ha42a","AB",3),("ha42a","AB",4),("ha42a","AB",5)).toDF("user_id","action","day")
df.createOrReplaceTempView("qubix")
spark.sql(
  """ with t1( select user_id, action, day, row_number() over(partition by user_id order by day)-day diff from qubix),
           t2( select user_id, action, day, collect_set(diff) over(partition by user_id) diff2 from t1)
                select user_id, action, day from t2 where size(diff2) > 1
  """).show(false)

Result:

+-------+------+---+
|user_id|action|day|
+-------+------+---+
|d25as  |AB    |2  |
|d25as  |AB    |3  |
|d25as  |AB    |5  |
|m3562  |AB    |1  |
|m3562  |AB    |7  |
|m3562  |AB    |9  |
+-------+------+---+
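
To see why filtering on size(diff2) > 1 keeps exactly the users with a gap: within one user, row_number() - day stays constant over a run of consecutive days, so collect_set(diff) ends up with a single element only when all of that user's days are consecutive. A small plain-Python illustration using values from the sample data:

days_ha42a = [3, 4, 5]   # fully consecutive -> dropped
days_d25as = [2, 3, 5]   # contains a gap    -> kept
print({rn - d for rn, d in enumerate(sorted(days_ha42a), start=1)})  # {-2}      -> size 1
print({rn - d for rn, d in enumerate(sorted(days_d25as), start=1)})  # {-1, -2}  -> size 2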

PySpark version:

>>> from pyspark.sql.functions import  *
>>> values = [('d25as','AB',2),('d25as','AB',3),('d25as','AB',5),
...           ('m3562','AB',1),('m3562','AB',7),('m3562','AB',9),
...           ('ha42a','AB',3),('ha42a','AB',4),('ha42a','AB',5)]
>>> df = spark.createDataFrame(values,['user_id','action','day'])
>>> df.show()
+-------+------+---+
|user_id|action|day|
+-------+------+---+
|  d25as|    AB|  2|
|  d25as|    AB|  3|
|  d25as|    AB|  5|
|  m3562|    AB|  1|
|  m3562|    AB|  7|
|  m3562|    AB|  9|
|  ha42a|    AB|  3|
|  ha42a|    AB|  4|
|  ha42a|    AB|  5|
+-------+------+---+

>>> df.createOrReplaceTempView("qubix")
>>> spark.sql(
...   """ with t1( select user_id, action, day, row_number() over(partition by user_id order by day)-day diff from qubix),
...            t2( select user_id, action, day, collect_set(diff) over(partition by user_id) diff2 from t1)
...                 select user_id, action, day from t2 where size(diff2) > 1
...   """).show()
+-------+------+---+
|user_id|action|day|
+-------+------+---+
|  d25as|    AB|  2|
|  d25as|    AB|  3|
|  d25as|    AB|  5|
|  m3562|    AB|  1|
|  m3562|    AB|  7|
|  m3562|    AB|  9|
+-------+------+---+

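For completeness, here is a sketch of the same window logic written with the DataFrame API instead of SQL (my own translation, not from the original answer; it assumes df is the DataFrame created above):

from pyspark.sql import functions as F, Window

w_ordered = Window.partitionBy('user_id').orderBy('day')
w_user = Window.partitionBy('user_id')

# diff = row_number() - day is constant within a consecutive run of days;
# users whose diff set has more than one value have at least one gap.
result = (df.withColumn('diff', F.row_number().over(w_ordered) - F.col('day'))
            .withColumn('diff2', F.collect_set('diff').over(w_user))
            .where(F.size('diff2') > 1)
            .select('user_id', 'action', 'day'))
result.show()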