Create a Spark DataFrame from a nested array of struct elements?

Asked: 2019-03-28 13:04:05

Tags: scala apache-spark apache-spark-sql

I have read a JSON file into Spark. The file has the following structure:

   root
    |-- engagement: struct (nullable = true)
    |    |-- engagementItems: array (nullable = true)
    |    |    |-- element: struct (containsNull = true)
    |    |    |    |-- availabilityEngagement: struct (nullable = true)
    |    |    |    |    |-- dimapraUnit: struct (nullable = true)
    |    |    |    |    |    |-- code: string (nullable = true)
    |    |    |    |    |    |-- constrained: boolean (nullable = true)
    |    |    |    |    |    |-- id: long (nullable = true)
    |    |    |    |    |    |-- label: string (nullable = true)
    |    |    |    |    |    |-- ranking: long (nullable = true)
    |    |    |    |    |    |-- type: string (nullable = true)
    |    |    |    |    |    |-- version: long (nullable = true)
    |    |    |    |    |    |-- visible: boolean (nullable = true)

I created a recursive function to flatten the schema's nested StructType columns:

import org.apache.spark.sql.Column
import org.apache.spark.sql.functions.col
import org.apache.spark.sql.types.StructType

def flattenSchema(schema: StructType, prefix: String = null): Array[Column] = {
  schema.fields.flatMap(f => {
    val colName = if (prefix == null) f.name else prefix + "." + f.name

    f.dataType match {
      case st: StructType => flattenSchema(st, colName)
      case _ => Array(col(colName).alias(colName))
    }
  })
}

val newDF = SIWINSDF.select(flattenSchema(SIWINSDF.schema): _*)

val secondDF = newDF.toDF(newDF.columns.map(_.replace(".", "_")): _*)

How can I flatten an ArrayType that contains a nested StructType, such as engagementItems: array (nullable = true)?

Any help is appreciated.

1 Answer:

Answer 0 (score: 0)

The problem here is that you need to handle the ArrayType case in the pattern match and then cast its element type to StructType. You can use a Scala runtime cast for that.

First, I generated the following scenario (by the way, including something like this in your question would be very helpful, since it makes the problem much easier to reproduce):

  case class DimapraUnit(code: String, constrained: Boolean, id: Long, label: String, ranking: Long, _type: String, version: Long, visible: Boolean)
  case class AvailabilityEngagement(dimapraUnit: DimapraUnit)
  case class Element(availabilityEngagement: AvailabilityEngagement)
  case class Engagement(engagementItems: Array[Element])
  case class root(engagement: Engagement)
  def getSchema(): StructType = {
    import org.apache.spark.sql.types._
    import org.apache.spark.sql.catalyst.ScalaReflection
    val schema = ScalaReflection.schemaFor[root].dataType.asInstanceOf[StructType]

    schema.printTreeString()
    schema
  }
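
The case classes above can also be used to build a small DataFrame with this exact schema, which makes the whole example self-contained. This is a minimal sketch: it assumes a local SparkSession, and the sample row values are made up.

    import org.apache.spark.sql.SparkSession

    val spark = SparkSession.builder()
      .master("local[*]")
      .appName("flatten-nested-array")
      .getOrCreate()

    // A single made-up row matching the nested structure
    val sample = Seq(
      root(Engagement(Array(
        Element(AvailabilityEngagement(
          DimapraUnit("C1", true, 1L, "unit A", 10L, "basic", 1L, true)))))))

    val SIWINSDF = spark.createDataFrame(sample)
    SIWINSDF.printSchema()
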

This prints:

root
 |-- engagement: struct (nullable = true)
 |    |-- engagementItems: array (nullable = true)
 |    |    |-- element: struct (containsNull = true)
 |    |    |    |-- availabilityEngagement: struct (nullable = true)
 |    |    |    |    |-- dimapraUnit: struct (nullable = true)
 |    |    |    |    |    |-- code: string (nullable = true)
 |    |    |    |    |    |-- constrained: boolean (nullable = false)
 |    |    |    |    |    |-- id: long (nullable = false)
 |    |    |    |    |    |-- label: string (nullable = true)
 |    |    |    |    |    |-- ranking: long (nullable = false)
 |    |    |    |    |    |-- _type: string (nullable = true)
 |    |    |    |    |    |-- version: long (nullable = false)
 |    |    |    |    |    |-- visible: boolean (nullable = false)

Then I modified your function by adding an extra case for ArrayType, casting its element type to StructType with asInstanceOf:

  import org.apache.spark.sql.Column
  import org.apache.spark.sql.types.{ArrayType, StructType}

  def flattenSchema(schema: StructType, prefix: String = null): Array[Column] = {
    schema.fields.flatMap(f => {
      val colName = if (prefix == null) f.name else prefix + "." + f.name

      f.dataType match {
        case st: StructType => flattenSchema(st, colName)
        case at: ArrayType =>
          // Cast the array's element type to StructType and recurse into it
          val st = at.elementType.asInstanceOf[StructType]
          flattenSchema(st, colName)
        case _ => Array(new Column(colName).alias(colName))
      }
    })
  }

And finally the result:

val s = getSchema()
val res = flattenSchema(s)

res.foreach(println(_))

Output:

engagement.engagementItems.availabilityEngagement.dimapraUnit.code AS `engagement.engagementItems.availabilityEngagement.dimapraUnit.code`
engagement.engagementItems.availabilityEngagement.dimapraUnit.constrained AS `engagement.engagementItems.availabilityEngagement.dimapraUnit.constrained`
engagement.engagementItems.availabilityEngagement.dimapraUnit.id AS `engagement.engagementItems.availabilityEngagement.dimapraUnit.id`
engagement.engagementItems.availabilityEngagement.dimapraUnit.label AS `engagement.engagementItems.availabilityEngagement.dimapraUnit.label`
engagement.engagementItems.availabilityEngagement.dimapraUnit.ranking AS `engagement.engagementItems.availabilityEngagement.dimapraUnit.ranking`
engagement.engagementItems.availabilityEngagement.dimapraUnit._type AS `engagement.engagementItems.availabilityEngagement.dimapraUnit._type`
engagement.engagementItems.availabilityEngagement.dimapraUnit.version AS `engagement.engagementItems.availabilityEngagement.dimapraUnit.version`
engagement.engagementItems.availabilityEngagement.dimapraUnit.visible AS `engagement.engagementItems.availabilityEngagement.dimapraUnit.visible`
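
One caveat: because engagementItems is an ArrayType, selecting these dotted paths on the actual DataFrame yields array-typed columns (one array of values per row), not scalar fields. If the goal is one flattened row per array element, the array can be exploded first and the struct fields selected from the exploded element. This is a sketch, not part of the answer above; SIWINSDF stands for the DataFrame from the question:

    import org.apache.spark.sql.functions.{col, explode}

    // One row per element of engagementItems; the element's struct fields
    // can then be addressed with ordinary dotted paths.
    val exploded = SIWINSDF
      .select(explode(col("engagement.engagementItems")).alias("item"))

    val flat = exploded.select(
      col("item.availabilityEngagement.dimapraUnit.code").alias("code"),
      col("item.availabilityEngagement.dimapraUnit.id").alias("id"),
      col("item.availabilityEngagement.dimapraUnit.label").alias("label"))
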