Spark Scala - parse JSON from a DataFrame column and return an RDD with the columns

Asked: 2017-02-06 12:18:14

Tags: json scala apache-spark

I have a Spark Scala DataFrame with the following schema:

df.printSchema()

 |-- stock._id: string (nullable = true)
 |-- stock.value: string (nullable = true)

The second column of the DataFrame is nested JSON:

[ { ""warehouse"" : ""Type1"" , ""amount"" : ""0.0"" }, { ""warehouse"" : ""Type1"" , ""amount"" : ""25.0"" }]

I need to produce an RDD that keeps the two existing columns and adds the columns from the JSON:

_id, value, warehouse, amount

I tried to do this with a custom function, but I am struggling to apply the function to the DataFrame and get the result I need.

import org.json4s._
import org.json4s.jackson.JsonMethods._

// Walk the parsed JSON array and collect every (warehouse, amount) pair.
// The amounts are strings in the sample payload, so they match JString.
def extractWarehouses(value: String): List[(String, String)] = {
  val json = parse(value)
  for {
    JObject(fields) <- json
    JField("warehouse", JString(warehouse)) <- fields
    JField("amount", JString(amount)) <- fields
  } yield (warehouse, amount)
}
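
Roughly, what I am trying to do is apply it along these lines (just a sketch; warehouseRDD and the column positions are my assumptions, with _id and value as the first two columns of each row):

val warehouseRDD = df.rdd.flatMap { row =>
  val id = row.getString(0)
  val value = row.getString(1)
  // Pair every (warehouse, amount) parsed from the JSON with the row's
  // _id and original value: (_id, value, warehouse, amount)
  extractWarehouses(value).map { case (warehouse, amount) =>
    (id, value, warehouse, amount)
  }
}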

1 Answer:

Answer 0 (score: 1)

As you said, value is a JSON array holding a list of JSON objects, so you need to explode it and pull the individual attributes out as columns, like this:

import org.apache.spark.sql.functions.{col, explode}

// explode() turns each element of the array into its own row; the struct
// fields can then be selected with dot notation.
val flattenedDF = df.select(col("_id"), explode(df("value")).as("value"))
val result = flattenedDF.select("_id", "value.warehouse", "value.amount")
result.printSchema()
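
Note that explode() applies as written only if value is already an array-of-structs column (for example, data loaded via spark.read.json). The schema above shows value as a plain string, so in that case the JSON has to be parsed first. A minimal sketch, assuming Spark 2.2+ (where from_json accepts an ArrayType) and string-typed fields as in the sample payload; stockSchema is an illustrative name:

import org.apache.spark.sql.functions.{col, explode, from_json}
import org.apache.spark.sql.types._

// Schema of the JSON array held in the string column; both fields are
// strings, matching the sample payload above.
val stockSchema = ArrayType(StructType(Seq(
  StructField("warehouse", StringType),
  StructField("amount", StringType)
)))

// Parse the string into an array of structs, then explode and flatten
// exactly as in the answer above.
val parsed = df.withColumn("value", from_json(col("value"), stockSchema))
val flat = parsed
  .select(col("_id"), explode(col("value")).as("value"))
  .select("_id", "value.warehouse", "value.amount")
flat.printSchema()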