How to iterate over rows and compare a column value in one row with the same column in the next row in Scala?

Asked: 2017-01-25 12:07:09

Tags: scala hadoop apache-spark

I am new to Scala and need some straightforward help.

I have an M * N Spark SQL DataFrame, shown below. I need to compare each column value in a row with the corresponding column value in the next row.

Something like A1 to A2, A1 to A3, and so on up to row N; similarly B1 to B2, B1 to B3.

Could someone please guide me on how to compare rows in Spark SQL?

ID  COLUMN1 COLUMN2
1   A1  B1
2   A2  B2
3   A3  B3

Thanks in advance, Santhosh

1 Answer:

Answer 0 (Score: 0)

If I understand the question correctly - you want to compare (using some function) each value with the value of the same column in the previous record. You can do this with the lag window function:

import org.apache.spark.sql.expressions.Window
import org.apache.spark.sql.Column
import org.apache.spark.sql.functions._
import spark.implicits._

// some data...
val df = Seq(
  (1, "A1", "B1"),
  (2, "A2", "B2"),
  (3, "A3", "B3")
).toDF("ID","COL1", "COL2")

// some made-up comparisons - fill in whatever you want...
def compareCol1(curr: Column, prev: Column): Column = curr > prev
def compareCol2(curr: Column, prev: Column): Column = concat(curr, prev)

// creating window - ordered by ID; note there is no partitionBy, so Spark
// moves all rows into a single partition to apply the window function
val window = Window.orderBy("ID")

// using the window with lag function to compare to previous value in each column
df.withColumn("COL1-comparison", compareCol1($"COL1", lag("COL1", 1).over(window)))
  .withColumn("COL2-comparison", compareCol2($"COL2", lag("COL2", 1).over(window)))
  .show()

// +---+----+----+---------------+---------------+
// | ID|COL1|COL2|COL1-comparison|COL2-comparison|
// +---+----+----+---------------+---------------+
// |  1|  A1|  B1|           null|           null|
// |  2|  A2|  B2|           true|           B2B1|
// |  3|  A3|  B3|           true|           B3B2|
// +---+----+----+---------------+---------------+
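
If what you actually need is to compare each row with the next row (as the question is phrased) rather than the previous one, lead is the mirror of lag - it looks one row ahead within the same window. A minimal sketch reusing the df, window and comparison functions defined above; the "-vs-next" column names are just illustrative:

// lead(col, 1) returns the value of col from the following row,
// so here the second argument passed to the comparison is the next row's
// value, and the last row (not the first) ends up with null
df.withColumn("COL1-vs-next", compareCol1($"COL1", lead("COL1", 1).over(window)))
  .withColumn("COL2-vs-next", compareCol2($"COL2", lead("COL2", 1).over(window)))
  .show()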