How to use functions.explode to flatten elements in a DataFrame

Date: 2017-10-12 17:28:43

Tags: scala apache-spark

I have written this code:

case class RawPanda(id: Long, zip: String, pt: String, happy: Boolean, attributes: Array[Double])
case class PandaPlace(name: String, pandas: Array[RawPanda])

import org.apache.spark.SparkConf
import org.apache.spark.sql.{DataFrame, Row, SparkSession, functions}

object TestSparkDataFrame extends App {

  System.setProperty("hadoop.home.dir", "E:\\Programmation\\Libraries\\hadoop")
  val conf = new SparkConf().setAppName("TestSparkDataFrame").set("spark.driver.memory","4g").setMaster("local[*]")
  val session = SparkSession.builder().config(conf).getOrCreate()

  import session.implicits._

  def createAndPrintSchemaRawPanda(session:SparkSession):DataFrame = {
    val newPanda = RawPanda(1,"M1B 5K7", "giant", true, Array(0.1, 0.1))
    val pandaPlace = PandaPlace("torronto", Array(newPanda))
    val df = session.createDataFrame(Seq(pandaPlace))
    df
  }
  val df2 = createAndPrintSchemaRawPanda(session)
  df2.show

+--------+--------------------+
|    name|              pandas|
+--------+--------------------+
|torronto|[[1,M1B 5K7,giant...|
+--------+--------------------+


  val pandaInfo2 = df2.explode(df2("pandas")) {
    case Row(pandas: Seq[Row]) =>
      pandas.map {
        case Row(
          id: Long,
          zip: String,
          pt: String,
          happy: Boolean,
          attrs: Seq[Double]) => RawPanda(id, zip, pt, happy, attrs.toArray)
      }
  }

  pandaInfo2.show

+--------+--------------------+---+-------+-----+-----+----------+
|    name|              pandas| id|    zip|   pt|happy|attributes|
+--------+--------------------+---+-------+-----+-----+----------+
|torronto|[[1,M1B 5K7,giant...|  1|M1B 5K7|giant| true|[0.1, 0.1]|
+--------+--------------------+---+-------+-----+-----+----------+

My problem is that the explode function I am using here is deprecated, so I would like to recompute the pandaInfo2 DataFrame using the approach advised in the deprecation warning:

use flatMap() or select() with functions.explode() instead

But when I do:

 val pandaInfo = df2.select(functions.explode(df2("pandas")))

I do not get the result I want. I don't know how to proceed with flatMap or functions.explode.

How can I use flatMap or functions.explode to get the result I want (the one in pandaInfo2)?

I have seen this post and this other one, but neither of them helped me.

1 Answer:

Answer 0 (score: 3):

Calling explode with the select function returns a DataFrame where the pandas array is "exploded" into individual records; then, if you want to "flatten" the resulting single RawPanda per record, you can select the individual columns using a dot-separated "route":
val pandaInfo2 = df2.select($"name", explode($"pandas") as "pandas")
  .select($"name", $"pandas",
    $"pandas.id" as "id",
    $"pandas.zip" as "zip",
    $"pandas.pt" as "pt",
    $"pandas.happy" as "happy",
    $"pandas.attributes" as "attributes"
  )

A more concise version of exactly the same operation would be:

import org.apache.spark.sql.Encoders // going to use this to "encode" case class into schema
val pandaColumns = Encoders.product[RawPanda].schema.fields.map(_.name)

val pandaInfo3 = df2.select($"name", explode($"pandas") as "pandas")
  .select(Seq($"name", $"pandas") ++ pandaColumns.map(f => $"pandas.$f" as f): _*)
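Since the question also asks about flatMap, here is one way to do it with the typed Dataset API instead of select/explode. This is a minimal sketch, not from the original answer: it assumes the RawPanda and PandaPlace case classes and the session.implicits._ import from the question are in scope, and the FlatPanda case class and pandaInfo4 name are hypothetical helpers introduced here for illustration.

```scala
// Hypothetical case class for the flattened rows (not in the original post)
case class FlatPanda(name: String, id: Long, zip: String, pt: String,
                     happy: Boolean, attributes: Array[Double])

// View the DataFrame as a typed Dataset[PandaPlace], then emit one
// FlatPanda row per element of each place's pandas array.
val pandaInfo4 = df2.as[PandaPlace].flatMap { place =>
  place.pandas.map(p =>
    FlatPanda(place.name, p.id, p.zip, p.pt, p.happy, p.attributes))
}
```

Note that, unlike the select/explode version, this drops the original pandas struct column from the output; if you want to keep it, the select-based approach above is the more natural fit.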