How to convert a Spark DataFrame into a nested DataFrame

Asked: 2017-01-12 15:22:19

Tags: scala apache-spark apache-spark-sql

I have a DataFrame with 6 columns:

df.printSchema
root
 |-- d1: string (nullable = true)
 |-- d2: string (nullable = true)
 |-- d3: string (nullable = true)
 |-- m1: string (nullable = true)
 |-- m2: string (nullable = true)
 |-- m3: string (nullable = true)

For reasons of my own, I want to transform it into:

root
 |-- d1: string (nullable = true)
 |-- d2: string (nullable = true)
 |-- d3: string (nullable = true)
 |-- metric: struct (nullable = false)
 |    |-- m1: string (nullable = true)
 |    |-- m2: string (nullable = true)
 |    |-- m3: string (nullable = true)

I've spent a few hours on this but can't figure it out. Here is what I've tried so far:

case class Metric(m1: String, m2: String, m3: String)
case class Dimension(d1: String, d2: String, d3: String, metric: Metric)

scala> df.map(row => Dimension(row.getAs[String]("d1"),
     |   row.getAs[String]("d2"),
     |   row.getAs[String]("d3"),
     |   Metric(row.getAs[String]("m1"),
     |       row.getAs[String]("m2"),
     |       row.getAs[String]("m3"))))
res48: org.apache.spark.rdd.RDD[Dimension] = MapPartitionsRDD[32] at map at <console>:46

scala> df.map(row => Dimension(row.getAs[String]("d1"),
     |   row.getAs[String]("d2"),
     |   row.getAs[String]("d3"),
     |   Metric(row.getAs[String]("m1"),
     |       row.getAs[String]("m2"),
     |       row.getAs[String]("m3")))).collect().foreach(println)
WARN scheduler.TaskSetManager: Lost task 0.0 in stage 2.0 (TID 220, hostname): java.lang.ClassNotFoundException: $line55.$read$$iwC$$iwC$Dimension

scala> df.map(row => Dimension(row.getAs[String]("d1"),
     |   row.getAs[String]("d2"),
     |   row.getAs[String]("d3"),
     |   Metric(row.getAs[String]("m1"),
     |       row.getAs[String]("m2"),
     |       row.getAs[String]("m3")))).toDF
res50: org.apache.spark.sql.DataFrame = [d1: string, d2: string, d3: string, metric: struct<m1:string,m2:string,m3:string>]

scala> df.map(row => Dimension(row.getAs[String]("d1"),
     |   row.getAs[String]("d2"),
     |   row.getAs[String]("d3"),
     |   Metric(row.getAs[String]("m1"),
     |       row.getAs[String]("m2"),
     |       row.getAs[String]("m3")))).toDF.select("d1").show()
ERROR scheduler.LiveListenerBus: SparkListenerBus has already stopped! Dropping event SparkListenerSQLExecutionStart(1,show at <console>:51,org.apache.spark.sql.DataFrame.show(DataFrame.scala:319)

Please help. Thanks.

1 Answer:

Answer 0 (score: 2):

Required imports:

import org.apache.spark.sql.functions.struct

// Spark 2.x
val spark: SparkSession = ???
import spark.implicits._

// Spark 1.x: use a SQLContext instead
// val sqlContext: SQLContext = ???
// import sqlContext.implicits._

A simple select:

df.select($"d1", $"d2", $"d3", struct($"m1", $"m2", $"m3").alias("metrics"))

In Spark 2.x, follow the select with:

.as[Dimension] 

if you want a statically typed Dataset[Dimension] instead of a DataFrame.
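
Continuing the sketch above, a possible end-to-end version of the typed variant (it assumes Spark 2.x and reuses the sample df; defining the case classes up front in spark-shell may also sidestep the ClassNotFoundException seen in the question):

import org.apache.spark.sql.Dataset

case class Metric(m1: String, m2: String, m3: String)
case class Dimension(d1: String, d2: String, d3: String, metric: Metric)

val ds: Dataset[Dimension] = df
  .select($"d1", $"d2", $"d3", struct($"m1", $"m2", $"m3").alias("metric"))
  .as[Dimension]

// Nested fields are addressable with dot notation:
ds.select($"metric.m1").show()

Note that the alias has to match the case-class field name (metric) for .as[Dimension] to resolve the nested column.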