Renaming pivoted and aggregated columns in a PySpark DataFrame

Time: 2016-06-15 13:28:08

Tags: python apache-spark pyspark apache-spark-sql

Given a DataFrame like the following:

from pyspark.sql.functions import avg, first

rdd = sc.parallelize(
    [
        (0, "A", 223,"201603", "PORT"), 
        (0, "A", 22,"201602", "PORT"), 
        (0, "A", 422,"201601", "DOCK"), 
        (1,"B", 3213,"201602", "DOCK"), 
        (1,"B", 3213,"201601", "PORT"), 
        (2,"C", 2321,"201601", "DOCK")
    ]
)
df_data = sqlContext.createDataFrame(rdd, ["id","type", "cost", "date", "ship"])

df_data.show()

Then I pivot it:

df_data.groupby(df_data.id, df_data.type).pivot("date").agg(avg("cost"), first("ship")).show()

+---+----+----------------+--------------------+----------------+--------------------+----------------+--------------------+
| id|type|201601_avg(cost)|201601_first(ship)()|201602_avg(cost)|201602_first(ship)()|201603_avg(cost)|201603_first(ship)()|
+---+----+----------------+--------------------+----------------+--------------------+----------------+--------------------+
|  2|   C|          2321.0|                DOCK|            null|                null|            null|                null|
|  0|   A|           422.0|                DOCK|            22.0|                PORT|           223.0|                PORT|
|  1|   B|          3213.0|                PORT|          3213.0|                DOCK|            null|                null|
+---+----+----------------+--------------------+----------------+--------------------+----------------+--------------------+

But this produces very complicated names for the columns. Applying alias on the aggregations usually works, but because of the pivot, in this case the names are even worse:

+---+----+--------------------------------------------------------------+------------------------------------------------------------------+--------------------------------------------------------------+------------------------------------------------------------------+--------------------------------------------------------------+------------------------------------------------------------------+
| id|type|201601_(avg(cost),mode=Complete,isDistinct=false) AS cost#1619|201601_(first(ship)(),mode=Complete,isDistinct=false) AS ship#1620|201602_(avg(cost),mode=Complete,isDistinct=false) AS cost#1619|201602_(first(ship)(),mode=Complete,isDistinct=false) AS ship#1620|201603_(avg(cost),mode=Complete,isDistinct=false) AS cost#1619|201603_(first(ship)(),mode=Complete,isDistinct=false) AS ship#1620|
+---+----+--------------------------------------------------------------+------------------------------------------------------------------+--------------------------------------------------------------+------------------------------------------------------------------+--------------------------------------------------------------+------------------------------------------------------------------+
|  2|   C|                                                        2321.0|                                                              DOCK|                                                          null|                                                              null|                                                          null|                                                              null|
|  0|   A|                                                         422.0|                                                              DOCK|                                                          22.0|                                                              PORT|                                                         223.0|                                                              PORT|
|  1|   B|                                                        3213.0|                                                              PORT|                                                        3213.0|                                                              DOCK|                                                          null|                                                              null|
+---+----+--------------------------------------------------------------+------------------------------------------------------------------+--------------------------------------------------------------+------------------------------------------------------------------+--------------------------------------------------------------+------------------------------------------------------------------+ 

Is there a way to rename the column names dynamically on the pivot and the aggregation?

5 Answers:

Answer 0 (score: 5)

A simple regular expression should do the trick:

import re

def clean_names(df):
    # Matches names like `201601_avg(cost)` or `201601_first(ship)()`, capturing
    # the pivot value (\1), the aggregate function (\2) and the column name (\3)
    p = re.compile(r"^(\w+?)_([a-z]+)\((\w+)\)(?:\(\))?")
    return df.toDF(*[p.sub(r"\1_\3", c) for c in df.columns])

pivoted = df_data.groupby(...).pivot(...).agg(...)

clean_names(pivoted).printSchema()
## root
##  |-- id: long (nullable = true)
##  |-- type: string (nullable = true)
##  |-- 201601_cost: double (nullable = true)
##  |-- 201601_ship: string (nullable = true)
##  |-- 201602_cost: double (nullable = true)
##  |-- 201602_ship: string (nullable = true)
##  |-- 201603_cost: double (nullable = true)
##  |-- 201603_ship: string (nullable = true)

If you want to keep the function name, change the substitution pattern to, e.g., \1_\2_\3.
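
For example, checking both replacement patterns on a sample name in plain Python:

p = re.compile(r"^(\w+?)_([a-z]+)\((\w+)\)(?:\(\))?")
p.sub(r"\1_\3", "201601_first(ship)()")     ## '201601_ship'
p.sub(r"\1_\2_\3", "201601_first(ship)()")  ## '201601_first_ship'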

Answer 1 (score: 4)

An easy way is to use alias after the aggregate function. I start with the df_data Spark DataFrame you created.

df_data.groupby(df_data.id, df_data.type).pivot("date") \
    .agg(avg("cost").alias("avg_cost"), first("ship").alias("first_ship")) \
    .show()

The column names will be of the form "original_column_name_aliased_column_name". In your case, original_column_name is 201601, aliased_column_name is avg_cost, so the column name is 201601_avg_cost (joined by an underscore "_").
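
To see the naming rule in action, you can inspect the columns of the pivoted frame (a quick check based on the snippet above; pivot values come out sorted):

pivoted = df_data.groupby(df_data.id, df_data.type).pivot("date") \
    .agg(avg("cost").alias("avg_cost"), first("ship").alias("first_ship"))

pivoted.columns
## ['id', 'type', '201601_avg_cost', '201601_first_ship',
##  '201602_avg_cost', '201602_first_ship',
##  '201603_avg_cost', '201603_first_ship']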

Answer 2 (score: 0)

You can alias the aggregations directly:

pivoted = df_data \
    .groupby(df_data.id, df_data.type) \
    .pivot("date") \
    .agg(
       avg('cost').alias('cost'),
       first("ship").alias('ship')
    )

pivoted.printSchema()
## root
##  |-- id: long (nullable = true)
##  |-- type: string (nullable = true)
##  |-- 201601_cost: double (nullable = true)
##  |-- 201601_ship: string (nullable = true)
##  |-- 201602_cost: double (nullable = true)
##  |-- 201602_ship: string (nullable = true)
##  |-- 201603_cost: double (nullable = true)
##  |-- 201603_ship: string (nullable = true)

Answer 3 (score: 0)

I wrote a simple and quick function to do this. Enjoy! :)

# This function renames a pivot table's ugly default column names
def rename_pivot_cols(rename_df, remove_agg):
    """Change a Spark pivot table's default ugly column names with ease.
        Option 1: remove_agg = True:  `2_sum(sum_amt)` --> `sum_amt_2`
        Option 2: remove_agg = False: `2_sum(sum_amt)` --> `sum_sum_amt_2`
    Note: assumes single-character pivot values, since only `column[:1]`
    is kept as the suffix.
    """
    for column in rename_df.columns:
        if remove_agg:
            # Keep only the inner column name, e.g. `sum_amt` from `2_sum(sum_amt)`
            start_index = column.find('(')
            end_index = column.find(')')
            if start_index > 0 and end_index > 0:
                rename_df = rename_df.withColumnRenamed(
                    column, column[start_index + 1:end_index] + '_' + column[:1])
        elif '(' in column:
            # Keep the aggregate function name too, e.g. `sum_sum_amt`;
            # the extra check leaves grouping columns such as `id` untouched
            new_column = column.replace('(', '_').replace(')', '')
            rename_df = rename_df.withColumnRenamed(
                column, new_column[2:] + '_' + new_column[:1])
    return rename_df
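
A minimal usage sketch (pivoted_df is a hypothetical pivoted DataFrame with columns such as 2_sum(sum_amt); remember the function only keeps column[:1], so pivot values should be single characters):

renamed = rename_pivot_cols(pivoted_df, remove_agg=True)
renamed.printSchema()  # aggregate columns now look like `sum_amt_2`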

Answer 4 (score: 0)

A modified version of zero323's answer that works for Spark 2.4:

import re

def clean_names(df):
    # Also captures a second argument (e.g. `, false`) and an optional
    # `: type` suffix, as in the names shown below
    p = re.compile(r"^(\w+?)_([a-z]+)\((\w+)(,\s\w+)\)(:\s\w+)?")
    return df.toDF(*[p.sub(r"\1_\3", c) for c in df.columns])

The current column names look like 0_first(is_flashsale, false): int.
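
Checking the pattern against such a name in plain Python (a quick sanity check):

p = re.compile(r"^(\w+?)_([a-z]+)\((\w+)(,\s\w+)\)(:\s\w+)?")
p.sub(r"\1_\3", "0_first(is_flashsale, false): int")
## '0_is_flashsale'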
