How to iterate over a schema in Spark?

Asked: 2018-07-17 06:32:13

Tags: scala apache-spark

I want to traverse a schema in Spark. Using df.schema gives a nested list of StructType and StructField objects.

The root elements can be indexed like this:

IN: val temp = df.schema

IN: temp(0)
OUT: StructField(A,StringType,true)

IN: temp(3)
OUT: StructField(D,StructType(StructField(D1,StructType(StructField(D11,StringType,true), StructField(D12,StringType,true), StructField(D13,StringType,true)),true), StructField(D2,StringType,true), StructField(D3,StringType,true)),true)

When I try to access a nested StructType, the following happens:

IN: val temp1 = temp(3).dataType

IN: temp1(0)
OUT:
Name: Unknown Error
Message: <console>:38: error: org.apache.spark.sql.types.DataType does not take parameters
       temp1(0)
            ^
StackTrace: 

What I don't understand is that temp and temp1 are both of class StructType, yet temp can be indexed while temp1 cannot:

IN: temp.getClass
OUT: class org.apache.spark.sql.types.StructType

IN: temp1.getClass
OUT: class org.apache.spark.sql.types.StructType
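
The mismatch is between static and runtime types: getClass reports the runtime class, but temp(3).dataType is statically typed as DataType, which has no apply method. A minimal sketch of the workaround, casting back to StructType (assuming the temp from above, with temp2 as a new name for the cast value):

IN: import org.apache.spark.sql.types.StructType

IN: val temp2 = temp(3).dataType.asInstanceOf[StructType]  // cast DataType -> StructType

IN: temp2(0)
OUT: StructField(D1,StructType(StructField(D11,StringType,true), StructField(D12,StringType,true), StructField(D13,StringType,true)),true)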

I also tried dtypes, but ran into a similar problem when trying to access nested elements:

IN: df.dtypes(3)(0)
OUT:
Name: Unknown Error
Message: <console>:36: error: (String, String) does not take parameters
       df.dtypes(3)(0)
                   ^
StackTrace: 
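
The error message is the hint here: df.dtypes returns an Array[(String, String)], so each element is a name/type-string tuple, accessed with ._1 and ._2 rather than apply. A quick sketch against the same DataFrame:

IN: df.dtypes(3)._1  // the tuple's first slot is the column name
OUT: D

The second slot, df.dtypes(3)._2, holds the type rendered as a plain string, so it cannot be walked as a StructType; for traversal, df.schema is the better starting point.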

So, how can I traverse the schema without knowing the subfields in advance?

2 Answers:

Answer 0 (score: 1):

Well, if you want a list of all the nested columns, you can write a recursive function like the one below.

Given:

  import org.apache.spark.sql.Row
  import org.apache.spark.sql.types._

  val schema = StructType(
    StructField("name", StringType) ::
      StructField("nameSecond", StringType) ::
      StructField("nameDouble", StringType) ::
      StructField("someStruct", StructType(
        StructField("insideS", StringType) ::
          StructField("insideD", StructType(
            StructField("inside1", StringType) :: Nil
          )) ::
          Nil
      )) ::
      Nil
  )
  // session is an existing SparkSession
  val rdd = session.sparkContext.emptyRDD[Row]
  val df = session.createDataFrame(rdd, schema)

  df.printSchema()

Which produces:

root
 |-- name: string (nullable = true)
 |-- nameSecond: string (nullable = true)
 |-- nameDouble: string (nullable = true)
 |-- someStruct: struct (nullable = true)
 |    |-- insideS: string (nullable = true)
 |    |-- insideD: struct (nullable = true)
 |    |    |-- inside1: string (nullable = true)

If you want a list of the full names of the columns, you can write something like this:

import org.apache.spark.sql.types.{StructField, StructType}

def fullFlattenSchema(schema: StructType): Seq[String] = {
  def helper(schema: StructType, prefix: String): Seq[String] = {
    // prepend the parent's dotted path, if any
    val fullName: String => String = name => if (prefix.isEmpty) name else s"$prefix.$name"
    schema.fields.flatMap {
      case StructField(name, inner: StructType, _, _) =>
        fullName(name) +: helper(inner, fullName(name))
      case StructField(name, _, _, _) => Seq(fullName(name))
    }
  }

  helper(schema, "")
}

Which returns:

ArraySeq(name, nameSecond, nameDouble, someStruct, someStruct.insideS, someStruct.insideD, someStruct.insideD.inside1)
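
A usage sketch (assuming the df built above): pass the DataFrame's schema in directly, and the dotted names that come back can address nested leaves:

val allNames = fullFlattenSchema(df.schema)
// dotted full names select nested fields directly
df.select("someStruct.insideD.inside1")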

Answer 1 (score: 1):

In a Spark SQL schema there are a few complex data types you have to worry about when recursing through it: StructType, ArrayType, and MapType. Writing a function that fully traverses a schema containing maps of structs and arrays of maps gets quite involved.

To recurse through most of the schemas I've come across, I only needed to consider StructType and ArrayType.

Given a schema like the following:

    root
     |-- name: string (nullable = true)
     |-- nameSecond: long (nullable = true)
     |-- acctRep: string (nullable = true)
     |-- nameDouble: array (nullable = true)
     |    |-- element: struct (containsNull = true)
     |    |    |-- insideK: string (nullable = true)
     |    |    |-- insideS: string (nullable = true)
     |    |    |-- insideD: long (nullable = true)
     |-- inside1: long (nullable = true)

I would use a recursive function like this:

    import org.apache.spark.sql.types._

    def collectAllFieldNames(schema: StructType): List[String] =
        schema.fields.toList.flatMap {  // .toList so flatMap builds a List, matching the return type
            case StructField(name, structType: StructType, _, _) => name :: collectAllFieldNames(structType)
            case StructField(name, ArrayType(structType: StructType, _), _, _) => name :: collectAllFieldNames(structType)
            case StructField(name, _, _, _) => name :: Nil
        }

Giving the result:

    List(name, nameSecond, acctRep, nameDouble, insideK, insideS, insideD, inside1)
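
For schemas that do contain maps, a possible extension (a sketch, not part of the original answer) adds a MapType case that descends into struct-valued map values:

    // Sketch: also descend into structs stored as map values.
    // Assumes map keys are primitive; deeper combinations would need more cases.
    def collectAllFieldNamesWithMaps(schema: StructType): List[String] =
        schema.fields.toList.flatMap { field =>
            field.name :: (field.dataType match {
                case s: StructType                => collectAllFieldNamesWithMaps(s)
                case ArrayType(s: StructType, _)  => collectAllFieldNamesWithMaps(s)
                case MapType(_, s: StructType, _) => collectAllFieldNamesWithMaps(s)
                case _                            => Nil
            })
        }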