How to select a subset of fields from an array column in Spark?

Asked: 2016-04-07 12:34:37

Tags: scala apache-spark dataframe apache-spark-sql

Suppose I have a DataFrame like the following:

case class SubClass(id: String, size: Int, useless: String)
case class MotherClass(subClasss: Array[SubClass])
val df = sqlContext.createDataFrame(List(
      MotherClass(Array(
        SubClass("1",1,"thisIsUseless"),
        SubClass("2",2,"thisIsUseless"),
        SubClass("3",3,"thisIsUseless")
      )),
      MotherClass(Array(
        SubClass("4",4,"thisIsUseless"),
        SubClass("5",5,"thisIsUseless")
      ))
    ))

The schema is:

root
 |-- subClasss: array (nullable = true)
 |    |-- element: struct (containsNull = true)
 |    |    |-- id: string (nullable = true)
 |    |    |-- size: integer (nullable = false)
 |    |    |-- useless: string (nullable = true)

I'm looking for a way to select only a subset of the fields, `id` and `size`, of the array column `subClasss`, while keeping the nested array structure. The resulting schema would be:

root
 |-- subClasss: array (nullable = true)
 |    |-- element: struct (containsNull = true)
 |    |    |-- id: string (nullable = true)
 |    |    |-- size: integer (nullable = false)

I tried doing

df.select("subClasss.id", "subClasss.size")

but this splits the array `subClasss` into two separate arrays:

root
 |-- id: array (nullable = true)
 |    |-- element: string (containsNull = true)
 |-- size: array (nullable = true)
 |    |-- element: integer (containsNull = true)

Is there a way to keep the original structure and just eliminate the `useless` field?

Thanks for your time.

1 answer:

Answer 0 (score: 4)

Spark >= 2.4

You can use `arrays_zip` together with `cast`:
import org.apache.spark.sql.functions.arrays_zip

df.select(arrays_zip(
  $"subClasss.id", $"subClasss.size"
).cast("array<struct<id:string,size:int>>"))

The `cast` is required to rename the nested fields; without it, Spark uses automatically generated names `0`, `1`, ..., `n`.
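As a side note (not part of the original answer), Spark 2.4 also added higher-order functions in Spark SQL, so the same result can be sketched with `transform`, rebuilding each array element as a struct containing only the wanted fields. This assumes the `df` from the question is in scope:

```scala
// Sketch: keep only id and size in each element of the array column,
// using the SQL higher-order function transform (Spark >= 2.4).
// Assumes the DataFrame `df` from the question is in scope.
df.selectExpr(
  "transform(subClasss, x -> struct(x.id as id, x.size as size)) as subClasss"
)
```

Because the struct fields are named explicitly inside `struct(...)`, no extra `cast` is needed to fix the field names.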

Spark < 2.4

You can use a UDF like this:

import org.apache.spark.sql.Row
import org.apache.spark.sql.functions.udf

case class Record(id: String, size: Int)

val dropUseless = udf((xs: Seq[Row]) =>  xs.map{
  case Row(id: String, size: Int, _) => Record(id, size)
})

df.select(dropUseless($"subClasss"))
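To check that the `useless` field is really gone, a quick usage sketch (assuming the `df` and `dropUseless` defined above are in scope) is to print the schema of the result:

```scala
// Apply the UDF, keep the original column name, and inspect the schema;
// the `useless` field should no longer appear in the element struct.
// Assumes `df` and `dropUseless` from the snippets above are in scope.
df.select(dropUseless($"subClasss").as("subClasss")).printSchema()
```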