How can I apply a window function in pyspark instead of grouping the data and aggregating per group?

Asked: 2017-09-13 20:00:27

Tags: apache-spark pyspark spark-dataframe rdd

I have a complicated windowing operation that I need help with in pyspark.

I have some data grouped by src and dest, and for each group I need to do the following:

- select only the rows whose socket2 value does not appear as a socket1 value in any row of that group (considering all rows of the group)
- after applying that filter, sum up the amounts field

amounts     src    dest    socket1   socket2
10          1      2       A         B
11          1      2       B         C
12          1      2       C         D
510         1      2       C         D
550         1      2       B         C
500         1      2       A         B
80          1      3       A         B

I want to aggregate it in the following way: 510 + 12 = 522 for src = 1 and dest = 2, while 80 is the only record for src = 1 and dest = 3:

amounts     src    dest
522         1      2
80          1      3

I borrowed the sample data from here: How to write Pyspark UDAF on multiple columns?

1 Answer:

Answer 0: (score: 3)

You can split the dataframe into two dataframes, one for socket1 and one for socket2, and then use a leftanti join instead of filtering (works for Spark >= 2.0).

First, let's create the dataframe:

df = spark.createDataFrame(
    sc.parallelize([
        [10,1,2,"A","B"],
        [11,1,2,"B","C"],
        [12,1,2,"C","D"],
        [510,1,2,"C","D"],
        [550,1,2,"B","C"],
        [500,1,2,"A","B"],
        [80,1,3,"A","B"]
    ]), 
    ["amounts","src","dest","socket1","socket2"]
)

Now split the dataframe:

Spark >= 2.0

df1 = df.withColumnRenamed("socket1", "socket").drop("socket2")
df2 = df.withColumnRenamed("socket2", "socket").drop("socket1")
res = df2.join(df1, ["src", "dest", "socket"], "leftanti")
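
The leftanti join keeps exactly the rows of df2 whose (src, dest, socket) key has no match in df1, i.e. the rows whose socket2 value never appears as a socket1 value within the same src/dest group. With the sample data this leaves the rows with amounts 12, 510 and 80, which you can confirm with a quick sanity check (not part of the original answer):

res.show()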

Spark 1.6

df1 = df.withColumnRenamed("socket1", "socket").drop("socket2").withColumnRenamed("amounts", "amounts1")
df2 = df.withColumnRenamed("socket2", "socket").drop("socket1")
res = df2.join(df1.alias("df1"), ["src", "dest", "socket"], "left").filter("amounts1 IS NULL").drop("amounts1")
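
In the Spark 1.6 version, amounts is renamed to amounts1 in df1 so the two amount columns can be told apart after the join: a plain left join leaves amounts1 NULL exactly for the rows of df2 that have no match, so filtering on amounts1 IS NULL and dropping the helper column emulates the leftanti join. If you prefer column expressions to a SQL string for the filter, an equivalent formulation (a sketch, not from the original answer) is:

import pyspark.sql.functions as psf

# keep only the rows of df2 that found no matching socket1 row in df1
res = df2.join(df1, ["src", "dest", "socket"], "left") \
    .filter(psf.col("amounts1").isNull()) \
    .drop("amounts1")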

Finally, aggregate:

import pyspark.sql.functions as psf
res.groupBy("src", "dest").agg(
    psf.sum("amounts").alias("amounts")
).show()

    +---+----+-------+
    |src|dest|amounts|
    +---+----+-------+
    |  1|   3|     80|
    |  1|   2|    522|
    +---+----+-------+
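
For completeness, since the question title asks about window functions: the same result can be obtained without the self-join by collecting the set of socket1 values per (src, dest) group over a window and filtering on it. This is a minimal sketch, not part of the accepted answer, assuming Spark >= 2.0 (where collect_set is supported as a window function); the helper column name socket1_set is just an illustrative choice:

from pyspark.sql import Window
import pyspark.sql.functions as psf

# one window per (src, dest) group
w = Window.partitionBy("src", "dest")

# tag each row with the set of socket1 values seen in its group,
# keep only rows whose socket2 is not in that set, then sum per group
res_w = (
    df.withColumn("socket1_set", psf.collect_set("socket1").over(w))
      .filter(psf.expr("NOT array_contains(socket1_set, socket2)"))
      .groupBy("src", "dest")
      .agg(psf.sum("amounts").alias("amounts"))
)
res_w.show()

With the sample data this should give the same two rows as above (522 for src = 1, dest = 2 and 80 for src = 1, dest = 3).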