I have a Spark Scala RDD that looks like this:
df.printSchema()
|-- stock._id: string (nullable = true)
|-- stock.value: string (nullable = true)
The second column of the RDD is nested JSON:
[ { ""warehouse"" : ""Type1"" , ""amount"" : ""0.0"" }, { ""warehouse"" : ""Type1"" , ""amount"" : ""25.0"" }]
I need to generate an RDD that contains the two existing columns plus the columns from the JSON, like:
_id, value, warehouse, amount
I tried to do this with a custom function, but I'm finding it hard to apply the function to the RDD and get the desired result:
import org.json4s._
import org.json4s.jackson.JsonMethods._

// Parse the JSON array and collect (warehouse, amount) pairs.
// Note: the amounts are strings in the sample JSON above, so JString is
// matched here; a JDouble pattern would never match, and the function
// needs an "=" (and a result type) or the yield result is discarded.
def extractWarehouses(value: String): List[(String, String)] = {
  val json = parse(value)
  for {
    JObject(fields) <- json
    JField("warehouse", JString(warehouse)) <- fields
    JField("amount", JString(amount)) <- fields
  } yield (warehouse, amount)
}
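For reference, a minimal sketch of exercising the function on the sample JSON from above (the sample value is just that string pasted into a Scala literal, not new data):

val sample = """[ { "warehouse" : "Type1", "amount" : "0.0" }, { "warehouse" : "Type1", "amount" : "25.0" } ]"""

// Should print: List((Type1,0.0), (Type1,25.0))
println(extractWarehouses(sample))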
Answer 0 (score: 1)
As you say, value is a JSON array containing a list of JSON objects, so you need to explode it and bring the individual attributes up as columns, like this:
import org.apache.spark.sql.functions.{col, explode}

// Explode the array so each (warehouse, amount) object gets its own row,
// then promote the struct fields to top-level columns.
val flattenedDF = df.select(col("_id"), explode(df("value")).as("value"))
val result = flattenedDF.select("_id", "value.warehouse", "value.amount")
result.printSchema()
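One caveat: printSchema() in the question shows value as a string, and explode only works on array (or map) columns. If value really is a raw JSON string, a sketch like the following (assuming Spark 2.2+, where from_json accepts an ArrayType; the schema below simply mirrors the sample JSON, with amount kept as a string) would parse it first:

import org.apache.spark.sql.functions.{col, explode, from_json}
import org.apache.spark.sql.types._

// Element schema mirroring the sample JSON; amount is a string there.
val valueSchema = ArrayType(StructType(Seq(
  StructField("warehouse", StringType),
  StructField("amount", StringType)
)))

// Parse the JSON string into an array of structs, explode to one row
// per object, then lift the struct fields into top-level columns.
val parsed = df.select(col("_id"), explode(from_json(col("value"), valueSchema)).as("value"))
val result = parsed.select("_id", "value.warehouse", "value.amount")
result.printSchema()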