How to convert nested JSON to a Map object in Scala

Asked: 2019-07-15 17:41:25

Tags: json scala apache-spark

I have the following JSON objects:

{
    "user_id": "123",
    "data": {
        "city": "New York"
    },
    "timestamp": "1563188698.31",
    "session_id": "6a793439-6535-4162-b333-647a6761636b"
}
{
    "user_id": "123",
    "data": {
        "name": "some_name",
        "age": "23",
        "occupation": "teacher"
    },
    "timestamp": "1563188698.31",
    "session_id": "6a793439-6535-4162-b333-647a6761636b"
}

I am reading the file into a DataFrame with val df = sqlContext.read.json("json").

This merges all the data attributes into a single struct, producing the following schema:

root
 |-- data: struct (nullable = true)
 |    |-- age: string (nullable = true)
 |    |-- city: string (nullable = true)
 |    |-- name: string (nullable = true)
 |    |-- occupation: string (nullable = true)
 |-- session_id: string (nullable = true)
 |-- timestamp: string (nullable = true)
 |-- user_id: string (nullable = true)

Is it possible to convert the data field to a Map[String, String] data type, so that each row contains only the attributes present in its original JSON record?

3 Answers:

Answer 0 (score: 2)

Yes, you can achieve this by deriving a Map[String, String] from the JSON data, as shown below:

import org.apache.spark.sql.types.{MapType, StringType}
import org.apache.spark.sql.functions.{to_json, from_json}
import spark.implicits._ // needed for .toDS and the $"..." column syntax

val jsonStr = """{
    "user_id": "123",
    "data": {
        "name": "some_name",
        "age": "23",
        "occupation": "teacher"
    },
    "timestamp": "1563188698.31",
    "session_id": "6a793439-6535-4162-b333-647a6761636b"
}"""

val df = spark.read.json(Seq(jsonStr).toDS)

val mappingSchema = MapType(StringType, StringType)

df.select(from_json(to_json($"data"), mappingSchema).as("map_data")).show(false)

//Output
// +-----------------------------------------------------+
// |map_data                                             |
// +-----------------------------------------------------+
// |[age -> 23, name -> some_name, occupation -> teacher]|
// +-----------------------------------------------------+

First we serialize the contents of the data field back to a JSON string with to_json($"data"), then parse that string into a Map with from_json(to_json($"data"), mappingSchema).
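Once a row of the resulting map_data column is collected to the driver, its value behaves like an ordinary Scala Map[String, String]. A minimal plain-Scala sketch of that semantics (the literal map below mirrors the sample output row; it is not produced by Spark here):

```scala
// Mirrors the row shown above: only keys present in the original
// JSON record appear in the map.
val mapData: Map[String, String] =
  Map("age" -> "23", "name" -> "some_name", "occupation" -> "teacher")

// Present keys return Some(value); absent keys return None rather
// than a null column, which is the point of using MapType here.
val age  = mapData.get("age")   // Some("23")
val city = mapData.get("city")  // None: this record had no "city" field
```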

Answer 1 (score: 0)

If you intend to convert the JSON to Parquet, the following may work:

 sqlContext.read.json("json").write.option("mode", "overwrite").parquet("/path/to/parquet/file")

Answer 2 (score: 0)

I'm not sure what you mean by converting it to a map of (String, String), but see whether the following helps:

val dataDF = spark.read.option("multiline","true").json("madhu/user.json").select("data").toDF

dataDF
  .withColumn("age", $"data"("age"))
  .withColumn("city", $"data"("city"))
  .withColumn("name", $"data"("name"))
  .withColumn("occupation", $"data"("occupation"))
  .drop("data")
  .show
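Note that this approach produces one fixed column per field, with null for any field a given record lacks, which differs from the Map[String, String] the question asks for, where absent fields simply have no key. A plain-Scala sketch of the difference, using values from the first sample record (which only has "city"):

```scala
// The first sample record as fixed columns: every missing field
// surfaces as an empty value (null in Spark, None here).
val asColumns: Map[String, Option[String]] = Map(
  "age"        -> None,
  "city"       -> Some("New York"),
  "name"       -> None,
  "occupation" -> None
)

// The same record as a Map[String, String]: absent fields are
// simply not present at all.
val asMap: Map[String, String] = Map("city" -> "New York")
```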