Apache Spark - Scala API - aggregating by a sequentially increasing key

Date: 2018-08-10 08:11:12

Tags: scala apache-spark

I have a dataframe that looks like the one shown below.

I need to aggregate the strings by concatenating them based on TotalN and a sequentially increasing ID (N). The problem is that there is no unique ID I can group on for each aggregate. So I need to do something like: for each row, look at TotalN, loop through the next N rows concatenating them, then reset.

val df = sc.parallelize(Seq(
  (3,1,"A"),(3,2,"B"),(3,3,"C"),
  (2,1,"D"),(2,2,"E"),
  (3,1,"F"),(3,2,"G"),(3,3,"G"),
  (2,1,"X"),(2,2,"X")
)).toDF("TotalN", "N", "String")

+------+---+------+
|TotalN|  N|String|
+------+---+------+
|     3|  1|     A|
|     3|  2|     B|
|     3|  3|     C|
|     2|  1|     D|
|     2|  2|     E|
|     3|  1|     F|
|     3|  2|     G|
|     3|  3|     G|
|     2|  1|     X|
|     2|  2|     X|
+------+---+------+
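(For the sample above, the desired result would presumably be the concatenated groups "ABC", "DE", "FGG" and "XX".)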

Any pointers are much appreciated.

Using Spark 2.3.1 and the Scala API.

2 Answers:

Answer 0 (score: 2)

Try this:

val df = spark.sparkContext.parallelize(Seq(
  (3, 1, "A"), (3, 2, "B"), (3, 3, "C"),
  (2, 1, "D"), (2, 2, "E"),
  (3, 1, "F"), (3, 2, "G"), (3, 3, "G"),
  (2, 1, "X"), (2, 2, "X")
)).toDF("TotalN", "N", "String")


df.createOrReplaceTempView("data")

val sqlDF = spark.sql(
  """
    | SELECT TotalN, N, String, ROW_NUMBER() OVER (ORDER BY TotalN) AS rowNum
    | FROM data
  """.stripMargin)

import org.apache.spark.sql.functions.collect_list
import spark.implicits._

// N minus the global row number stays constant within each run of
// N = 1..TotalN, so it works as a grouping key.
sqlDF.withColumn("key", $"N" - $"rowNum")
  .groupBy("key")
  .agg(collect_list('String).as("texts"))
  .show()
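The question ultimately asks for concatenated strings rather than lists. As a follow-up sketch (not part of the original answer), each group can be sorted by N and joined with concat_ws; the alias "text" is just illustrative:

import org.apache.spark.sql.functions.{collect_list, concat_ws, sort_array, struct}

// Collect (N, String) pairs so each group can be sorted by N before joining,
// since collect_list alone does not guarantee element order.
sqlDF.withColumn("key", $"N" - $"rowNum")
  .groupBy("key")
  .agg(concat_ws("", sort_array(collect_list(struct($"N", $"String"))).getField("String")).as("text"))
  .show()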

Answer 1 (score: 0)

The solution is to compute a grouping variable with the row_number window function, which can then be used in a later groupBy.

import org.apache.spark.sql.expressions.Window
import org.apache.spark.sql.functions.row_number
import spark.implicits._

// Note: a window with no partitionBy pulls all rows into a single partition
// (Spark logs a warning); that is acceptable for small data like this.
val w = Window.orderBy("TotalN")

// N minus the row number is constant within each run of N = 1..TotalN.
df.withColumn("GeneratedID", $"N" - row_number().over(w)).show

+------+---+------+-----------+
|TotalN|  N|String|GeneratedID|
+------+---+------+-----------+
|     2|  1|     D|          0|
|     2|  2|     E|          0|
|     2|  1|     X|         -2|
|     2|  2|     X|         -2|
|     3|  1|     A|         -4|
|     3|  2|     B|         -4|
|     3|  3|     C|         -4|
|     3|  1|     F|         -7|
|     3|  2|     G|         -7|
|     3|  3|     G|         -7|
+------+---+------+-----------+
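
The GeneratedID column can then be used as the grouping key. A minimal sketch of that final step, assuming the df and w defined above (element order inside collect_list is not guaranteed in general; see the sort_array variant under Answer 0 if order matters):

import org.apache.spark.sql.functions.{collect_list, row_number}

df.withColumn("GeneratedID", $"N" - row_number().over(w))
  .groupBy("GeneratedID")
  .agg(collect_list($"String").as("texts"))
  .show()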