I have a complex windowing operation that I need help with in PySpark.
I have some data grouped by src and dest, and for each group I need to:
- keep only the rows whose socket2 value does not appear as a socket1 value in any row of that group
- after applying that filter, sum over the amounts field (a plain-Python sketch of this follows the expected output below)
amounts  src  dest  socket1  socket2
10       1    2     A        B
11       1    2     B        C
12       1    2     C        D
510      1    2     C        D
550      1    2     B        C
500      1    2     A        B
80       1    3     A        B
I want to aggregate it in the following way:
510 + 12 = 522 for src = 1 and dest = 2, while 80 is for src = 1 and dest = 3
amounts  src  dest
522      1    2
80       1    3
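To make the rule concrete, here is a plain-Python sketch of the logic I am after (illustration only; I need this done in PySpark):
from collections import defaultdict

rows = [
    (10, 1, 2, "A", "B"), (11, 1, 2, "B", "C"), (12, 1, 2, "C", "D"),
    (510, 1, 2, "C", "D"), (550, 1, 2, "B", "C"), (500, 1, 2, "A", "B"),
    (80, 1, 3, "A", "B"),
]

# socket1 values seen in each (src, dest) group
socket1_by_group = defaultdict(set)
for amounts, src, dest, s1, s2 in rows:
    socket1_by_group[(src, dest)].add(s1)

# keep rows whose socket2 never appears as a socket1 in the same group, then sum
totals = defaultdict(int)
for amounts, src, dest, s1, s2 in rows:
    if s2 not in socket1_by_group[(src, dest)]:
        totals[(src, dest)] += amounts

print(dict(totals))  # {(1, 2): 522, (1, 3): 80}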
I borrowed the sample data from here: How to write Pyspark UDAF on multiple columns?
Answer (score: 3)
You can split your dataframe into two dataframes, one for socket1 and one for socket2, then use a leftanti join instead of a filter (available in spark >= 2.0).
First, let's create the dataframe:
df = spark.createDataFrame(
    sc.parallelize([
        [10, 1, 2, "A", "B"],
        [11, 1, 2, "B", "C"],
        [12, 1, 2, "C", "D"],
        [510, 1, 2, "C", "D"],
        [550, 1, 2, "B", "C"],
        [500, 1, 2, "A", "B"],
        [80, 1, 3, "A", "B"]
    ]),
    ["amounts", "src", "dest", "socket1", "socket2"]
)
Now split the dataframe:
Spark >= 2.0
# df1 carries the socket1 values per (src, dest) group; df2 keys each row by its socket2 value
df1 = df.withColumnRenamed("socket1", "socket").drop("socket2")
df2 = df.withColumnRenamed("socket2", "socket").drop("socket1")
# leftanti keeps only the df2 rows whose socket has no match in df1 for the same group
res = df2.join(df1, ["src", "dest", "socket"], "leftanti")
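As a quick sanity check, res should now hold only the rows whose socket2 value never appears as a socket1 within the same (src, dest) group; a small illustrative peek (not required for the solution):
# expected to keep amounts 12 and 510 (src=1, dest=2, socket2=D) and 80 (src=1, dest=3, socket2=B)
res.select("amounts", "src", "dest", "socket").orderBy("amounts").show()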
Spark 1.6
# rename amounts on the socket1 side so the joined column can be tested for NULL below
df1 = df.withColumnRenamed("socket1", "socket").drop("socket2").withColumnRenamed("amounts", "amounts1")
df2 = df.withColumnRenamed("socket2", "socket").drop("socket1")
# a plain left join followed by an IS NULL filter emulates leftanti on Spark 1.6
res = df2.join(df1.alias("df1"), ["src", "dest", "socket"], "left").filter("amounts1 IS NULL").drop("amounts1")
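To see why the NULL filter works here: after the plain left join, every df2 row whose socket2 matched a socket1 in its group carries a non-NULL amounts1, so only the non-matching rows survive the filter. A small illustrative peek before filtering (the joined variable is just for inspection):
joined = df2.join(df1.alias("df1"), ["src", "dest", "socket"], "left")
# rows with a non-NULL amounts1 are the ones the IS NULL filter drops
joined.select("amounts", "src", "dest", "socket", "amounts1").show()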
Finally, aggregate:
import pyspark.sql.functions as psf
res.groupBy("src", "dest").agg(
    psf.sum("amounts").alias("amounts")
).show()
+---+----+-------+
|src|dest|amounts|
+---+----+-------+
| 1| 3| 80|
| 1| 2| 522|
+---+----+-------+