Group by and create a new column in a PySpark dataframe

Time: 2019-08-20 06:12:32

Tags: python pyspark

I have a PySpark dataframe like this,

+----------+--------+
|id_       | p      |
+----------+--------+
|  1       | A      |
|  1       | B      |
|  1       | B      |
|  1       | A      |
|  1       | A      |
|  1       | B      |
|  2       | C      |
|  2       | C      |
|  2       | C      |
|  2       | A      |
|  2       | A      |
|  2       | C      |
+----------+--------+

I want to create another column for each group of id_. The column is currently computed with the following pandas code, which increments a counter every time the value of p changes from one row to the next within an id_ group,

sample.groupby(by=['id_'], group_keys=False).apply(lambda grp : grp['p'].ne(grp['p'].shift()).cumsum())
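For reference, this is what that snippet does end to end on the sample data (the DataFrame construction below is only a reconstruction of the sample shown above, for illustration):

import pandas as pd

# reconstruction of the sample data shown above
sample = pd.DataFrame({
    'id_': [1, 1, 1, 1, 1, 1, 2, 2, 2, 2, 2, 2],
    'p':   ['A', 'B', 'B', 'A', 'A', 'B', 'C', 'C', 'C', 'A', 'A', 'C'],
})

# for each id_ group: flag the rows where p differs from the previous row,
# then take a running sum of those flags
new_col = sample.groupby(by=['id_'], group_keys=False).apply(
    lambda grp: grp['p'].ne(grp['p'].shift()).cumsum()
)
print(new_col.tolist())  # [1, 2, 2, 3, 3, 4, 1, 1, 1, 2, 2, 3]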

How can I do this on a PySpark dataframe?

Currently, I am doing this with the help of a pandas UDF, which runs very slowly (a sketch of that approach is included at the end of the question).

What are the alternatives?

The expected column would be like this,

1
2
2
3
3
4
1
1
1
2
2
3
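The pandas UDF I am using is roughly along these lines (a minimal sketch only; the function name and result schema below are illustrative, not the exact code):

from pyspark.sql.functions import pandas_udf, PandasUDFType

# the result schema has to describe every column returned by the function;
# the types of id_ and p should match the source dataframe
@pandas_udf("id_ string, p string, new_column long", PandasUDFType.GROUPED_MAP)
def number_runs(pdf):
    # same logic as the pandas one-liner above, applied to one id_ group at a time
    pdf['new_column'] = pdf['p'].ne(pdf['p'].shift()).cumsum()
    return pdf

result = df.groupby('id_').apply(number_runs)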

1 Answer:

Answer 0: (score: 3)

You can use a combination of a udf and window functions to get the result:

# required imports
from pyspark.sql.window import Window
import pyspark.sql.functions as F
from pyspark.sql.types import IntegerType

# define a window, which we will use to calculate lag values
w = Window().partitionBy().orderBy(F.col('id_'))

# define user defined function (udf) to perform calculation on each row
def f(lag_val, current_val):
    if lag_val != current_val:
        return 1
    return 0

# register the udf so we can use it with our dataframe
func_udf = F.udf(f, IntegerType())

# read csv file
df = spark.read.csv('/path/to/file.csv', header=True)

# create a new column by applying the udf to the lagged and current values of p,
# then apply a second window function to calculate the cumulative sum per id_
df.withColumn(
    "new_column", func_udf(F.lag("p").over(w), df['p'])
).withColumn(
    'cumsum',
    F.sum('new_column').over(
        w.partitionBy(F.col('id_')).rowsBetween(Window.unboundedPreceding, 0)
    )
).show()

+---+---+----------+------+
|id_|  p|new_column|cumsum|
+---+---+----------+------+
|  1|  A|         1|     1|
|  1|  B|         1|     2|
|  1|  B|         0|     2|
|  1|  A|         1|     3|
|  1|  A|         0|     3|
|  1|  B|         1|     4|
|  2|  C|         1|     1|
|  2|  C|         0|     1|
|  2|  C|         0|     1|
|  2|  A|         1|     2|
|  2|  A|         0|     2|
|  2|  C|         1|     3|
+---+---+----------+------+

# where:
#   w.partitionBy : partitions the window by the id_ column
#   w.rowsBetween : specifies the frame boundaries of the window
#   ref: https://spark.apache.org/docs/2.2.1/api/java/org/apache/spark/sql/expressions/Window.html#rowsBetween-long-long-
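If the python udf itself becomes the bottleneck, you can also express the flag with built-in column functions only, which keeps the whole computation inside Spark. A minimal sketch, assuming the original row order within each id_ group has to be preserved (monotonically_increasing_id is used below only as a stand-in for a real ordering column):

from pyspark.sql.window import Window
import pyspark.sql.functions as F

# stand-in ordering column so the original row order inside each group is kept
ordered = df.withColumn('_order', F.monotonically_increasing_id())

w2 = Window.partitionBy('id_').orderBy('_order')

result = (
    ordered
    # 1 whenever p differs from the previous row in the group; the first row of a
    # group has no previous value, so the comparison is null and falls through to 1
    .withColumn('change', F.when(F.lag('p').over(w2) == F.col('p'), 0).otherwise(1))
    # running sum of the change flags gives the per-group counter
    .withColumn('cumsum', F.sum('change').over(w2.rowsBetween(Window.unboundedPreceding, 0)))
    .drop('_order', 'change')
)
result.show()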