Renaming nested fields in a Spark DataFrame

Date: 2017-03-24 16:41:18

Tags: python apache-spark dataframe pyspark rename

I have a DataFrame df in Spark:

 |-- array_field: array (nullable = true)
 |    |-- element: struct (containsNull = true)
 |    |    |-- a: string (nullable = true)
 |    |    |-- b: long (nullable = true)
 |    |    |-- c: long (nullable = true)

How do I rename the field array_field.a to array_field.a_renamed?

[Update]:

.withColumnRenamed() does not work with nested fields, so I tried this hacky and unsafe method:

# First alter the schema:
schema = df.schema
schema['array_field'].dataType.elementType['a'].name = 'a_renamed'

ind = schema['array_field'].dataType.elementType.names.index('a')
schema['array_field'].dataType.elementType.names[ind] = 'a_renamed'

# Then set dataframe's schema with altered schema
df._schema = schema

I know that setting a private attribute is not good practice, but I don't know any other way to set the schema for df.

I think I am on the right track, but df.printSchema() still shows the old name for array_field.a, even though df.schema == schema is True.

4 answers:

Answer 0 (score: 6)

Python

It is not possible to modify a single nested field. You have to recreate the whole structure. In this particular case the simplest solution is to use cast.

First a bunch of imports:

from collections import namedtuple
from pyspark.sql.functions import col
from pyspark.sql.types import (
    ArrayType, LongType, StringType, StructField, StructType)

and some example data:

Record = namedtuple("Record", ["a", "b", "c"])

df = sc.parallelize([([Record("foo", 1, 3)], )]).toDF(["array_field"])

Let's confirm that the schema is the same as in your case:

df.printSchema()
root
 |-- array_field: array (nullable = true)
 |    |-- element: struct (containsNull = true)
 |    |    |-- a: string (nullable = true)
 |    |    |-- b: long (nullable = true)
 |    |    |-- c: long (nullable = true)

You can define the new schema, for example, as a string:

str_schema = "array<struct<a_renamed:string,b:bigint,c:bigint>>"

df.select(col("array_field").cast(str_schema)).printSchema()
root
 |-- array_field: array (nullable = true)
 |    |-- element: struct (containsNull = true)
 |    |    |-- a_renamed: string (nullable = true)
 |    |    |-- b: long (nullable = true)
 |    |    |-- c: long (nullable = true)

or as a DataType:

struct_schema = ArrayType(StructType([
    StructField("a_renamed", StringType()),
    StructField("b", LongType()),
    StructField("c", LongType())
]))

df.select(col("array_field").cast(struct_schema)).printSchema()
root
 |-- array_field: array (nullable = true)
 |    |-- element: struct (containsNull = true)
 |    |    |-- a_renamed: string (nullable = true)
 |    |    |-- b: long (nullable = true)
 |    |    |-- c: long (nullable = true)
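
If the DataFrame has other columns you want to keep, the same cast can be applied in place with withColumn instead of select (a small sketch reusing the struct_schema defined above):

# Replace only array_field; all other columns are left untouched
df.withColumn("array_field", col("array_field").cast(struct_schema)).printSchema()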

Scala

The same techniques can be used in Scala:

case class Record(a: String, b: Long, c: Long)

val df = Seq(Tuple1(Seq(Record("foo", 1, 3)))).toDF("array_field")

val strSchema = "array<struct<a_renamed:string,b:bigint,c:bigint>>"

df.select($"array_field".cast(strSchema))

import org.apache.spark.sql.types._

val structSchema = ArrayType(StructType(Seq(
    StructField("a_renamed", StringType),
    StructField("b", LongType),
    StructField("c", LongType)
)))

df.select($"array_field".cast(structSchema))

Possible improvements

If you use an expressive data-manipulation or JSON-processing library, it can be easier to dump the data types to a dict or a JSON string and take it from there (for example with Python / toolz):

from toolz.curried import pipe, update_in, map
from operator import attrgetter

# Rename the field to "a_renamed" if its name is "a"
rename_field = update_in(
    keys=["name"], func=lambda x: "a_renamed" if x == "a" else x)

updated_schema = pipe(
    # Get the schema of the field as a dict
    df.schema["array_field"].jsonValue(),
    # Rename the matching fields
    update_in(
        keys=["type", "elementType", "fields"],
        func=lambda x: pipe(x, map(rename_field), list)),
    # Load the schema back from the dict
    StructField.fromJson,
    # Get the data type
    attrgetter("dataType"))

df.select(col("array_field").cast(updated_schema)).printSchema()

Answer 1 (score: 1)

You can recurse over the DataFrame's schema to create a new schema with the required changes.

A schema in PySpark is a StructType, which holds a list of StructFields, and each StructField can hold either a primitive type or another StructType.

This means we can decide whether to recurse based on whether the type is a StructType or not.

Below is an annotated sample implementation that shows how you can implement the above idea.

# Some imports
from copy import copy
from pyspark.sql import DataFrame
from pyspark.sql.types import ArrayType, DataType, StructField, StructType

# We take a dataframe and return a new one with required changes
def cleanDataFrame(df: DataFrame) -> DataFrame:
    # Returns a new sanitized field name (this function can be anything really)
    def sanitizeFieldName(s: str) -> str:
        return s.replace("-", "_").replace("&", "_").replace("\"", "_")\
            .replace("[", "_").replace("]", "_").replace(".", "_")

    # We call this on all fields to create a copy and to perform any changes we might
    # want to do to the field.
    def sanitizeField(field: StructField) -> StructField:
        field = copy(field)
        field.name = sanitizeFieldName(field.name)
        # We recursively call cleanSchema on all types
        field.dataType = cleanSchema(field.dataType)
        return field

    def cleanSchema(dataType: DataType) -> DataType:
        dataType = copy(dataType)
        # If the type is a StructType we need to recurse otherwise we can return since
        # we've reached the leaf node
        if isinstance(dataType, StructType):
            # We call our sanitizer for all top level fields
            dataType.fields = [sanitizeField(f) for f in dataType.fields]
        elif isinstance(dataType, ArrayType):
            dataType.elementType = cleanSchema(dataType.elementType)
        return dataType

    # Now that we have the new schema, we can create a new DataFrame
    # using the old frame's RDD as the data and the new schema as its schema
    return spark.createDataFrame(df.rdd, cleanSchema(df.schema))
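
A quick usage sketch (assuming a SparkSession available as spark, which the function above already relies on):

df_clean = cleanDataFrame(df)
df_clean.printSchema()  # any -, &, ", [, ], . in nested field names are now _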

Answer 2 (score: 1)

I found a much simpler method than the one provided by @zero323, thanks to @MaxPY:

PySpark 2.4:

from pyspark.sql.types import StructField

# Get the schema from the dataframe df
schema = df.schema

# Override `fields` with a list of new StructFields, equal to the previous ones except for the names
schema.fields = list(map(lambda field:
                         StructField(field.name + "_renamed", field.dataType), schema.fields))

# Override `names` as well, using the same mechanism
schema.names = list(map(lambda name: name + "_renamed", schema.names))

Now df.schema will print all the updated names.
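
Note that this only mutates the Python-side schema object; to get a DataFrame that actually carries the new names, one option is to rebuild it from the old frame's RDD (a sketch, assuming a SparkSession named spark):

df_renamed = spark.createDataFrame(df.rdd, schema)
df_renamed.printSchema()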

Answer 3 (score: 0)

Another, simpler solution, if it works for you like it worked for me, is to flatten the structure and then rename:

Using Scala:

val df_flat = df.selectExpr("array_field.*")

Now the rename works:

val df_renamed = df_flat.withColumnRenamed("a", "a_renamed")

Of course this is only useful if you don't need the hierarchy (although I suppose it could be recreated again if needed).
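
For reference, a rough PySpark equivalent of the same flatten-then-rename idea (a sketch; star expansion only works on struct columns, so an array of structs has to be exploded first):

from pyspark.sql.functions import explode

df_flat = df.select(explode("array_field").alias("elem")).select("elem.*")
df_renamed = df_flat.withColumnRenamed("a", "a_renamed")
df_renamed.printSchema()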