Summing values based on a condition in PySpark

Date: 2020-06-29 13:31:12

Tags: apache-spark hadoop pyspark apache-spark-sql

I'm new to Spark and I need some help with summing values.

 +--------------------+--------------------+-----+
|              amount|    transaction_code|Total|
+--------------------+--------------------+-----+
|[10, 20, 30, 40, ...|[buy, buy, sell, ...|210.0|
+--------------------+--------------------+-----+

I need to add a new column to this dataframe that sums only the entries in amount whose corresponding entry in transaction_code is 'buy'. For example, I would add 10 and 20 together, because their transaction_code is 'buy'.

I already know how to sum the whole array; below is the code I wrote.

df2extract = df2extract.select(
    'amount',
    'transaction_code',
    F.expr('AGGREGATE(amount, cast(0 as float), (acc, x) -> acc + x)').alias('Total')
)
df2extract.show()

I found that we can use an if expression, but I can't figure out how to initialize it or how to keep track of the amounts. Please help me with this. Thanks a lot!

1 answer:

Answer 0 (score: 3)

You can use arrays_zip together with filter:

    from pyspark.sql import SparkSession
    from pyspark.sql import functions as F

    spark = SparkSession.builder \
        .appName('SO')\
        .getOrCreate()

    sc= spark.sparkContext

    df = sc.parallelize([
        ([10, 20, 30, 40], ["buy", "buy", "sell"])]).toDF(["amount", "transaction_code"])

    df.show()

    # +----------------+----------------+
    # |          amount|transaction_code|
    # +----------------+----------------+
    # |[10, 20, 30, 40]|[buy, buy, sell]|
    # +----------------+----------------+

    # Zip the two arrays into an array of (amount, transaction_code) structs
    df1 = df.withColumn("zip", F.arrays_zip(F.col('amount'), F.col('transaction_code')))

    # Keep only the struct entries whose transaction_code is 'buy'
    df2 = df1.withColumn("buy_filter", F.expr('''filter(zip, x -> x.transaction_code == 'buy')'''))

    # Pull the amounts back out of the filtered structs
    df3 = df2.select("amount", "transaction_code", F.col("buy_filter.amount").alias("buy_values"))

    # Sum the remaining 'buy' amounts
    df3.select("amount", "transaction_code", F.expr('AGGREGATE(buy_values, cast(0 as float), (acc, x) -> acc + x)').alias('total')).show()

    # +----------------+----------------+-----+
    # |          amount|transaction_code|total|
    # +----------------+----------------+-----+
    # |[10, 20, 30, 40]|[buy, buy, sell]| 30.0|
    # +----------------+----------------+-----+
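
As a follow-up, the "if" the question asks about can also live directly inside AGGREGATE: zip the two arrays and add an amount to the accumulator only when its transaction_code is 'buy'. This is a minimal sketch rather than the answer's exact method, and it assumes Spark 2.4+ (where arrays_zip and the higher-order SQL functions are available) and the same column names as the example above:

    # Hedged sketch: conditional accumulation inside AGGREGATE (assumes Spark 2.4+).
    # Zips amount/transaction_code into structs and adds x.amount only for 'buy' entries.
    df.withColumn(
        "total",
        F.expr("""
            AGGREGATE(
                arrays_zip(amount, transaction_code),
                cast(0 as float),
                (acc, x) -> acc + IF(x.transaction_code = 'buy', x.amount, 0)
            )
        """)
    ).show()
    # Expected to print the same 30.0 total as above.

This avoids the intermediate zip, buy_filter and buy_values columns, at the cost of a slightly denser expression.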