How to iterate over each row of a DataFrame/RDD per group in PySpark

Time: 2017-01-30 06:07:53

Tags: apache-spark pyspark apache-spark-sql

I want to set a column's value based on that column's value in the previous row of the group. The updated value is then used for the next row.

I have the following dataframe:

id | start_date | sort_date  | A | B |
--------------------------------------
1 | 1/1/2017 | 31-01-2015 | 1 | 0 | 
1 | 1/1/2017 | 28-02-2015 | 0 | 0 | 
1 | 1/1/2017 | 31-03-2015 | 1 | 0 | 
1 | 1/1/2017 | 30-04-2015 | 1 | 0 | 
1 | 1/1/2017 | 31-05-2015 | 1 | 0 | 
1 | 1/1/2017 | 30-06-2015 | 1 | 0 | 
1 | 1/1/2017 | 31-07-2015 | 1 | 0 | 
1 | 1/1/2017 | 31-08-2015 | 1 | 0 | 
1 | 1/1/2017 | 30-09-2015 | 0 | 0 | 
2 | 1/1/2017 | 31-10-2015 | 1 | 0 | 
2 | 1/1/2017 | 30-11-2015 | 0 | 0 | 
2 | 1/1/2017 | 31-12-2015 | 1 | 0 | 
2 | 1/1/2017 | 31-01-2016 | 1 | 0 | 
2 | 1/1/2017 | 28-02-2016 | 1 | 0 | 
2 | 1/1/2017 | 31-03-2016 | 1 | 0 | 
2 | 1/1/2017 | 30-04-2016 | 1 | 0 | 
2 | 1/1/2017 | 31-05-2016 | 1 | 0 | 
2 | 1/1/2017 | 30-06-2016 | 0 | 0 | 

Output:

id | start_date | sort_date  | A | B | C
------------------------------------------
1 | 1/1/2017 | 31-01-2015 | 1 | 0 | 1
1 | 1/1/2017 | 28-02-2015 | 0 | 0 | 0
1 | 1/1/2017 | 31-03-2015 | 1 | 0 | 1
1 | 1/1/2017 | 30-04-2015 | 1 | 0 | 2
1 | 1/1/2017 | 31-05-2015 | 1 | 0 | 3
1 | 1/1/2017 | 30-06-2015 | 1 | 0 | 4
1 | 1/1/2017 | 31-07-2015 | 1 | 0 | 5
1 | 1/1/2017 | 31-08-2015 | 1 | 0 | 6
1 | 1/1/2017 | 30-09-2015 | 0 | 0 | 0
2 | 1/1/2017 | 31-10-2015 | 1 | 0 | 1
2 | 1/1/2017 | 30-11-2015 | 0 | 0 | 0
2 | 1/1/2017 | 31-12-2015 | 1 | 0 | 1
2 | 1/1/2017 | 31-01-2016 | 1 | 0 | 2
2 | 1/1/2017 | 28-02-2016 | 1 | 0 | 3
2 | 1/1/2017 | 31-03-2016 | 1 | 0 | 4
2 | 1/1/2017 | 30-04-2016 | 1 | 0 | 5
2 | 1/1/2017 | 31-05-2016 | 1 | 0 | 6
2 | 1/1/2017 | 30-06-2016 | 0 | 0 | 0

The group is id and date.

Column C is derived from columns A and B.

If A == 1 and B == 0, then C is the previous row's C + 1. There are some other conditions as well, but this is the part I am struggling with.
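To make the rule concrete, here is a minimal sketch of the recurrence for a single group (plain Scala rather than Spark; rows are assumed already ordered by sort_date, and the reset-to-0 behaviour is read off the sample output above):

import scala.collection.immutable.Seq

// (A, B) pairs of one group, in sort_date order
val ab = Seq((1, 0), (0, 0), (1, 0), (1, 0))

// C starts at 0; A == 1 && B == 0 adds 1 to the previous C, anything else resets it
val c = ab.scanLeft(0) { case (prev, (a, b)) =>
  if (a == 1 && b == 0) prev + 1 else 0
}.tail

println(c) // List(1, 0, 1, 2)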

Assume we have a column sort_date in the dataframe.

I tried the following query, without success (prev is an alias defined in the same SELECT, so it cannot be referenced there, and lag(A) only yields the previous row's A, not the running C):

SELECT
  id,
  date,
  sort_date,
  lag(A) OVER (PARTITION BY id, date ORDER BY sort_date) AS prev,
  CASE
    WHEN A = 1 AND B = 0 THEN 1
    WHEN A = 1 AND B > 0 THEN prev + 1
    ELSE 0
  END AS A
FROM
  Table

This is what I did for the UDAF:

val myFunc = new MyUDAF
val w = Window.partitionBy(col("ID"), col("START_DATE")).orderBy(col("SORT_DATE"))
val result = df.withColumn("C",
  myFunc(col("START_DATE"), col("X"), col("Y"), col("A"), col("B")).over(w))

P.S.: I am using Spark 1.6.

1 Answer:

Answer 0 (score: 3)

First, define a window:

import org.apache.spark.sql.expressions.Window
val winspec = Window.partitionBy("id", "start_date").orderBy("sort_date")
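One detail worth calling out (this is Spark's documented default behaviour, not something the original answer states): because the window has an orderBy and no explicit frame, it defaults to a running frame from the start of the partition up to the current row, which is what makes a per-row running C possible. The same window with the frame written out explicitly (in Spark 1.6, Long.MinValue stands for an unbounded preceding bound):

// Equivalent window with the default running frame made explicit: each row
// aggregates every partition row from the start up to (and including) itself.
val winspecExplicit = Window
  .partitionBy("id", "start_date")
  .orderBy("sort_date")
  .rowsBetween(Long.MinValue, 0)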

Next, create a UDAF that receives A and B and basically computes C by starting at 0, incrementing by 1 whenever the condition (A = 1, B = 0) holds, and resetting to 0 otherwise. To learn how to write a UDAF, see the examples here, here and here.

EDIT: Here is a sample implementation of the UDAF (it has not actually been tested, so there may be typos):

import org.apache.spark.sql.Row
import org.apache.spark.sql.expressions.{MutableAggregationBuffer, UserDefinedAggregateFunction}
import org.apache.spark.sql.types._

class myFunc() extends UserDefinedAggregateFunction {

  // Input Data Type Schema
  def inputSchema: StructType = StructType(Array(StructField("A", IntegerType), StructField("B", IntegerType)))

  // Intermediate Schema
  def bufferSchema = StructType(Array(StructField("C", IntegerType)))

  // Returned Data Type .
  def dataType: DataType = IntegerType

  // Self-explaining
  def deterministic = true

  // This function is called whenever key changes
  def initialize(buffer: MutableAggregationBuffer) = {
    buffer(0) = 0 // start the running count C at 0
  }

  // Iterate over each entry of a group
  def update(buffer: MutableAggregationBuffer, input: Row) = {
    buffer(0) = if (input.getInt(0) == 1 && input.getInt(1) == 0) buffer.getInt(0) + 1 else 0
  }

  // Merge two partial aggregates - doesn't really matter because the window will make sure the buffer remains in a
  // single partition
  def merge(buffer1: MutableAggregationBuffer, buffer2: Row) = {
    buffer1(0) = buffer1.getInt(0) + buffer2.getInt(0)
  }

  // Called after all the entries are exhausted.
  def evaluate(buffer: Row) = {
    buffer.getInt(0)
  }

}

Finally, apply it to your dataframe. Let's assume you named the UDAF myFunc:

val f = new myFunc()
// The $"..." syntax requires: import sqlContext.implicits._ (Spark 1.6)
val newDF = df.withColumn("newC", f($"A", $"B").over(winspec))
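Since the question is tagged pyspark and Spark 1.6 does not support defining UDAFs in Python, an RDD-level alternative may also be useful. The following is an untested sketch (in Scala, to match the answer; the same groupBy/sort/scan shape ports to PySpark with groupBy and flatMap). It assumes id, A and B are integers and that each group fits in memory:

import org.apache.spark.sql.Row

// Untested sketch: compute the running C per (id, start_date) group at the RDD level.
// sort_date is compared as a plain string here; dd-MM-yyyy does not sort
// lexicographically, so parse it to a real date first in practice.
val withC = df.rdd
  .groupBy(r => (r.getAs[Int]("id"), r.getAs[String]("start_date")))
  .flatMap { case (_, rows) =>
    val sorted = rows.toSeq.sortBy(_.getAs[String]("sort_date"))
    // scanLeft threads the previous C through the group: A == 1 && B == 0
    // increments it, anything else resets it to 0.
    sorted.scanLeft((Option.empty[Row], 0)) { case ((_, prevC), row) =>
      val c = if (row.getAs[Int]("A") == 1 && row.getAs[Int]("B") == 0) prevC + 1 else 0
      (Some(row), c)
    }.tail.collect { case (Some(row), c) => Row.fromSeq(row.toSeq :+ c) }
  }

To get back to a DataFrame, build a schema with the extra C field appended and call sqlContext.createDataFrame(withC, schema).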