PySpark - Get the row index from a UDF

Date: 2017-12-21 08:36:45

Tags: dataframe pyspark user-defined-functions row-number

I have a DataFrame and I need to get the row number/index of a given row. I want to add a new column that contains the Letter together with the row number/index, e.g. "A - 1", "B - 2".

#sample data
a = sqlContext.createDataFrame([("A", 20), ("B", 30), ("D", 80)], ["Letter", "distances"])

with the output:

+------+---------+
|Letter|distances|
+------+---------+
|     A|       20|
|     B|       30|
|     D|       80|
+------+---------+

I would like the new output to look like this:

+------+---------+-----+
|Letter|distances|index|
+------+---------+-----+
|     A|       20|A - 1|
|     B|       30|B - 2|
|     D|       80|D - 3|
+------+---------+-----+

Here is the function I have been working on:

def cate(letter):
    return letter + " - " + #index
a.withColumn("index", cate(a["Letter"])).show()

2 Answers:

Answer 0 (score: 4):

Since you want to achieve the result using (only) a UDF, try the following:

from pyspark.sql.functions import udf, monotonically_increasing_id
from pyspark.sql.types import StringType

# sample data
a = sqlContext.createDataFrame([("A", 20), ("B", 30), ("D", 80)], ["Letter", "distances"])

# UDF that joins the letter and an index value into a single string
def cate(letter, idx):
    return letter + " - " + str(idx)
cate_udf = udf(cate, StringType())

# add a unique (but not necessarily consecutive) id per row, pass it to the UDF, then drop it
a = a.withColumn("temp_index", monotonically_increasing_id())
a = a.\
    withColumn("index", cate_udf(a.Letter, a.temp_index)).\
    drop("temp_index")
a.show()

The output is:

+------+---------+--------------+
|Letter|distances|         index|
+------+---------+--------------+
|     A|       20|         A - 0|
|     B|       30|B - 8589934592|
|     D|       80|D - 8589934593|
+------+---------+--------------+
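Note that the indices are not consecutive: monotonically_increasing_id only guarantees unique, increasing values, and the gaps depend on how the data is partitioned. If you need 1, 2, 3, ... as in the desired output, one option (a sketch of a variation on this answer, not part of it) is to rank the rows by the temporary id with row_number over a Window and feed that rank to the same UDF:

from pyspark.sql.functions import udf, monotonically_increasing_id, row_number
from pyspark.sql.types import StringType
from pyspark.sql.window import Window

# sample data
a = sqlContext.createDataFrame([("A", 20), ("B", 30), ("D", 80)], ["Letter", "distances"])

def cate(letter, idx):
    return letter + " - " + str(idx)
cate_udf = udf(cate, StringType())

# tag rows with an increasing id, then rank by that id to get consecutive numbers 1, 2, 3, ...
# note: a Window with no partitionBy pulls all rows into a single partition
w = Window.orderBy("temp_index")
a = a.\
    withColumn("temp_index", monotonically_increasing_id()).\
    withColumn("row_num", row_number().over(w)).\
    withColumn("index", cate_udf("Letter", "row_num")).\
    drop("temp_index").\
    drop("row_num")
a.show()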

Answer 1 (score: -1):

This should work:

df = spark.createDataFrame([("A", 20), ("B", 30), ("D", 80)],["Letter", "distances"])
df.createOrReplaceTempView("df")

spark.sql("select concat(Letter,' - ',row_number() over (order by Letter)) as num, * from df").show()

+-----+------+---------+
|  num|Letter|distances|
+-----+------+---------+
|A - 1|     A|       20|
|B - 2|     B|       30|
|D - 3|     D|       80|
+-----+------+---------+
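
For reference, here is a sketch of the same row_number approach written with the DataFrame API instead of SQL (assuming ordering by Letter is what you want; the variable and column names follow the query above):

from pyspark.sql.functions import concat_ws, col, row_number
from pyspark.sql.window import Window

# rank the rows by Letter, then build the "Letter - n" string
w = Window.orderBy("Letter")
df.withColumn("num", concat_ws(" - ", col("Letter"), row_number().over(w).cast("string"))).\
    show()

As with the variation above, ordering over the whole DataFrame moves all rows into a single partition, so this is fine for small data but not something to rely on for large tables.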