Sorting an array of structs in a Spark DataFrame

Date: 2017-11-27 09:37:36

Tags: scala apache-spark dataframe

Consider the following DataFrame:

case class ArrayElement(id: Long, value: Double)

val df = Seq(
  Seq(
    ArrayElement(1L, -2.0), ArrayElement(2L, 1.0), ArrayElement(0L, 0.0)
  )
).toDF("arr")

df.printSchema

root
 |-- arr: array (nullable = true)
 |    |-- element: struct (containsNull = true)
 |    |    |-- id: long (nullable = false)
 |    |    |-- value: double (nullable = false)

Apart from using a UDF, is there a way to sort arr by value?

I have looked at org.apache.spark.sql.functions.sort_array. What does this method actually do when the array elements are complex? Does it sort by the first field (i.e. id)?


1 Answer:

Answer 0 (score: 3)

The Spark functions documentation says: "Sorts the input array for the given column in ascending order, according to the natural ordering of the array elements."

Before I explain, let's look at a few examples of sort_array.

+----------------------------+----------------------------+
|arr                         |sorted                      |
+----------------------------+----------------------------+
|[[1,-2.0], [2,1.0], [0,0.0]]|[[0,0.0], [1,-2.0], [2,1.0]]|
+----------------------------+----------------------------+

+----------------------------+----------------------------+
|arr                         |sorted                      |
+----------------------------+----------------------------+
|[[0,-2.0], [2,1.0], [0,0.0]]|[[0,-2.0], [0,0.0], [2,1.0]]|
+----------------------------+----------------------------+

+-----------------------------+-----------------------------+
|arr                          |sorted                       |
+-----------------------------+-----------------------------+
|[[0,-2.0], [2,1.0], [-1,0.0]]|[[-1,0.0], [0,-2.0], [2,1.0]]|
+-----------------------------+-----------------------------+

So for each struct element in the designated array column, sort_array compares the first field, then (on ties) the second field, and so on.
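This field-by-field comparison can be illustrated without Spark at all: Scala's default ordering for tuples behaves the same way, comparing the first component and breaking ties on the second, just like the (id, value) structs above.

```scala
// Plain-Scala analogue of the struct ordering sort_array uses:
// Ordering[(Long, Double)] compares _1 first, then _2 on ties.
val arr = Seq((1L, -2.0), (2L, 1.0), (0L, 0.0))
val sorted = arr.sorted
// sorted == Seq((0L, 0.0), (1L, -2.0), (2L, 1.0))
```

The result matches the first example table above: the element with id 0 comes first even though its value (0.0) is larger than -2.0.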

I hope that makes it clear.
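Coming back to the original question (sorting by value rather than id) without a UDF: one option, assuming Spark 2.4+ for the SQL transform higher-order function, is to rebuild each struct with value as the first field, so that the natural ordering used by sort_array sorts on value. This is a sketch, not a tested recipe for every Spark version; the column name arr_by_value is illustrative.

```scala
import org.apache.spark.sql.functions.{expr, sort_array}

// Sketch, assuming Spark 2.4+: transform() reorders the struct fields so
// that `value` comes first; sort_array then sorts by value, breaking ties on id.
val byValue = df.select(
  sort_array(
    expr("transform(arr, x -> struct(x.value as value, x.id as id))")
  ).as("arr_by_value")
)
```

On earlier Spark versions, a UDF (or sorting after collecting) remains the straightforward route.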