Is it possible to do 2 aggregations with a single groupBy in pyspark?

Asked: 2019-06-21 23:06:23

Tags: pyspark aggregation

What I'd like to know is whether pyspark allows the following. Assume this df:

+------+----+-----+-------+
| model|year|price|mileage|
+------+----+-----+-------+
|Galaxy|2017|27841|  17529|
|Galaxy|2017|29395|  11892|
|Novato|2018|35644|  22876|
|Novato|2018| 8765|  54817|
+------+----+-----+-------+


df.groupBy('model', 'year')\
  .agg({'price':'sum'})\
  .agg({'mileage':'sum'})\
  .withColumnRenamed('sum(price)', 'total_prices')\
  .withColumnRenamed('sum(mileage)', 'total_miles')

which I hope would produce:

+------+----+-----+-------+------------+-----------+
| model|year|price|mileage|total_prices|total_miles|
+------+----+-----+-------+------------+-----------+
|Galaxy|2017|27841|  17529|       57236|      29421|
|Galaxy|2017|29395|  11892|       57236|      29421|
|Novato|2018|35644|  22876|       44409|      77693|
|Novato|2018| 8765|  54817|       44409|      77693|
+------+----+-----+-------+------------+-----------+
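For reference, a single .agg call does accept several aggregations at once; a minimal sketch (note this collapses each group to one row, so it does not produce the widened table above):

import pyspark.sql.functions as F

# One groupBy, two aggregations in one .agg call --
# returns one row per (model, year) group:
df.groupBy('model', 'year').agg(
    F.sum('price').alias('total_prices'),
    F.sum('mileage').alias('total_miles'),
).show()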

2 Answers:

Answer 0 (score: 0):

With a pandas UDF you can compute any number of aggregations:

import pyspark.sql.functions as F
from pyspark.sql.types import IntegerType, StructType, StructField, StringType
import pandas as pd

# Output schema for the grouped-map UDF: the original columns plus
# the two per-group totals that the UDF appends.
agg_schema = StructType(
    [StructField("model", StringType(), True),
     StructField("year", IntegerType(), True),
     StructField("price", IntegerType(), True),
     StructField("mileage", IntegerType(), True),
     StructField("total_prices", IntegerType(), True),
     StructField("total_miles", IntegerType(), True)
     ]
)

# Grouped-map pandas UDF: each (model, year) group arrives as a pandas
# DataFrame and is returned with the group totals added to every row.
@F.pandas_udf(agg_schema, F.PandasUDFType.GROUPED_MAP)
def agg(pdf):
    total_prices = pdf['price'].sum()
    total_miles = pdf['mileage'].sum()
    pdf['total_prices'] = total_prices
    pdf['total_miles'] = total_miles
    return pdf

df = spark.createDataFrame(
    [('Galaxy', 2017, 27841, 17529),
     ('Galaxy', 2017, 29395, 11892),
     ('Novato', 2018, 35644, 22876),
     ('Novato', 2018, 8765,  54817)],
    ['model','year','price','mileage']
)
df.groupBy('model','year').apply(agg).show()

This produces:

+------+----+-----+-------+------------+-----------+
| model|year|price|mileage|total_prices|total_miles|
+------+----+-----+-------+------------+-----------+
|Galaxy|2017|27841|  17529|       57236|      29421|
|Galaxy|2017|29395|  11892|       57236|      29421|
|Novato|2018|35644|  22876|       44409|      77693|
|Novato|2018| 8765|  54817|       44409|      77693|
+------+----+-----+-------+------------+-----------+
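On Spark 3.0 or later (an assumption about your environment; there the GROUPED_MAP pandas_udf plus .apply combination is deprecated), the same grouped-map logic can be written with applyInPandas. A minimal sketch reusing agg_schema and df from above, with add_totals as an illustrative name:

def add_totals(pdf):
    # Same logic as the UDF above: append the per-group sums to every row.
    pdf['total_prices'] = pdf['price'].sum()
    pdf['total_miles'] = pdf['mileage'].sum()
    return pdf

df.groupBy('model', 'year').applyInPandas(add_totals, schema=agg_schema).show()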

Answer 1 (score: 0):

You are not actually looking for a groupby here but for a window function or a join, because you want to extend your rows with aggregated values.

Window:

from pyspark.sql import functions as F
from pyspark.sql import Window

df = spark.createDataFrame(
    [('Galaxy', 2017, 27841, 17529),
     ('Galaxy', 2017, 29395, 11892),
     ('Novato', 2018, 35644, 22876),
     ('Novato', 2018, 8765,  54817)],
    ['model','year','price','mileage']
)

# Window covering all rows of each (model, year) group
w = Window.partitionBy('model', 'year')

df = df.withColumn('total_prices', F.sum('price').over(w))
df = df.withColumn('total_miles', F.sum('mileage').over(w))
df.show()
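Equivalently, both totals can be added in one select over the same window w (a sketch, starting again from the original df):

df = df.select(
    '*',
    F.sum('price').over(w).alias('total_prices'),
    F.sum('mileage').over(w).alias('total_miles'),
)
df.show()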

Join:

from pyspark.sql import functions as F

df = spark.createDataFrame(
    [('Galaxy', 2017, 27841, 17529),
     ('Galaxy', 2017, 29395, 11892),
     ('Novato', 2018, 35644, 22876),
     ('Novato', 2018, 8765,  54817)],
    ['model','year','price','mileage']
)

totals = df.groupby('model', 'year').agg(
    F.sum('price').alias('total_prices'),
    F.sum('mileage').alias('total_miles'),
)
df = df.join(totals, ['model', 'year'])
df.show()

Output:

+------+----+-----+-------+------------+-----------+
| model|year|price|mileage|total_prices|total_miles|
+------+----+-----+-------+------------+-----------+
|Galaxy|2017|27841|  17529|       57236|      29421|
|Galaxy|2017|29395|  11892|       57236|      29421|
|Novato|2018|35644|  22876|       44409|      77693|
|Novato|2018| 8765|  54817|       44409|      77693|
+------+----+-----+-------+------------+-----------+