I have the following sample DataFrame:
Input:
accountNumber assetValue
A100 1000
A100 500
B100 600
B100 200
Output:
accountNumber assetValue rank
A100 1000 1
A100 500 2
B100 600 1
B100 200 2
Now my question is: how do I add this rank column to the DataFrame, ranked within each account number? I am also open to doing it outside the DataFrame, since I am not expecting a large number of rows.
I am using Spark version 1.5 with SQLContext, so I cannot use Window functions.
Answer 0 (score: 7)
You can use the row_number function with a Window expression to specify the partition and order columns:
import org.apache.spark.sql.expressions.Window
import org.apache.spark.sql.functions.row_number
val df = Seq(("A100", 1000), ("A100", 500), ("B100", 600), ("B100", 200)).toDF("accountNumber", "assetValue")
df.withColumn("rank", row_number().over(
  Window.partitionBy($"accountNumber").orderBy($"assetValue".desc)
)).show
+-------------+----------+----+
|accountNumber|assetValue|rank|
+-------------+----------+----+
| A100| 1000| 1|
| A100| 500| 2|
| B100| 600| 1|
| B100| 200| 2|
+-------------+----------+----+
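Since the question rules out Window functions (in Spark 1.5 they additionally require a HiveContext rather than a plain SQLContext), a window-free alternative is to rank within each group through the RDD API. A minimal sketch, assuming the `df` defined above, `import sqlContext.implicits._` in scope, and that each account's rows fit in memory on a single executor (groupByKey collects a whole group together):

```scala
// Group rows by account, sort each group's asset values descending,
// and attach a 1-based rank with zipWithIndex.
val ranked = df.rdd
  .map(r => (r.getString(0), r.getInt(1)))     // (accountNumber, assetValue)
  .groupByKey()
  .flatMap { case (account, values) =>
    values.toSeq.sortBy(-_).zipWithIndex.map { // descending sort, 0-based index
      case (value, idx) => (account, value, idx + 1)
    }
  }
  .toDF("accountNumber", "assetValue", "rank")
```

This avoids window functions entirely, at the cost of shuffling each account's rows to one place; that matches the questioner's remark that the row count is small.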
Answer 1 (score: 6)
Using raw SQL (note that RANK() leaves gaps after ties while row_number() does not; the two agree here because no account has duplicate asset values):
val df = sc.parallelize(Seq(
  ("A100", 1000), ("A100", 500), ("B100", 600), ("B100", 200)
)).toDF("accountNumber", "assetValue")
df.registerTempTable("df")
sqlContext.sql("SELECT accountNumber,assetValue, RANK() OVER (partition by accountNumber ORDER BY assetValue desc) AS rank FROM df").show
+-------------+----------+----+
|accountNumber|assetValue|rank|
+-------------+----------+----+
| A100| 1000| 1|
| A100| 500| 2|
| B100| 600| 1|
| B100| 200| 2|
+-------------+----------+----+