How do I access sub-entities in a JSON file?

Date: 2017-06-29 01:25:52

Tags: scala apache-spark apache-spark-sql

I have a JSON file that looks like this:

{
  "employeeDetails":{
    "name": "xxxx",
    "num":"415"
  },
  "work":[
    {
      "monthYear":"01/2007",
      "workdate":"1|2|3|....|31",
      "workhours":"8|8|8....|8"
    },
    {
      "monthYear":"02/2007",
      "workdate":"1|2|3|....|31",
      "workhours":"8|8|8....|8"
    }
  ]
}

I need to get the work dates and work hours from this JSON data.

I tried this:

import org.apache.spark.sql.SparkSession

object JSON2 {
  def main (args: Array[String]) {
    val spark =
      SparkSession.builder()
        .appName("SQL-JSON")
        .master("local[4]")
        .getOrCreate()

    import spark.implicits._

    val employees = spark.read.json("sample.json")
    employees.printSchema()
    employees.select("employeeDetails").show()
  }
}

I get an exception like this:

Exception in thread "main" org.apache.spark.sql.AnalysisException: cannot resolve '`employeeDetails`' given input columns: [_corrupt_record];;
'Project ['employeeDetails]
+- Relation[_corrupt_record#0] json

I'm new to Spark.

1 Answer:

Answer (6 votes):

given input columns: [_corrupt_record];;

The reason is that Spark supports JSON files in which "each line must contain a separate, self-contained valid JSON object."

Quoting JSON Datasets:

Note that the file that is offered as a json file is not a typical JSON file. Each line must contain a separate, self-contained valid JSON object. For more information, please see JSON Lines text format, also called newline-delimited JSON. As a consequence, a regular multi-line JSON file will most often fail.
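For comparison, a JSON Lines version of this data would keep each self-contained record on a single line. As a purely illustrative sketch, a hypothetical employee.jsonl with the question's document collapsed onto one line could be read by a plain spark.read.json("employee.jsonl") call, with no extra options:

{"employeeDetails":{"name":"xxxx","num":"415"},"work":[{"monthYear":"01/2007","workdate":"1|2|3|....|31","workhours":"8|8|8....|8"},{"monthYear":"02/2007","workdate":"1|2|3|....|31","workhours":"8|8|8....|8"}]}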

When a JSON file is incorrect for Spark, it stores the raw text under _corrupt_record (you can change the column name with the columnNameOfCorruptRecord option; see the sketch after the schema below).

scala> spark.read.json("employee.json").printSchema
root
 |-- _corrupt_record: string (nullable = true)
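A minimal sketch of that option (the column name bad_record is arbitrary, used here only for illustration):

// Malformed input then surfaces under "bad_record" instead of _corrupt_record.
val raw = spark.read
  .option("columnNameOfCorruptRecord", "bad_record")
  .json("employee.json")
raw.printSchema()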

Your file is incorrect not only because it is multi-line JSON, but also because jq (a lightweight and flexible command-line JSON processor) says so:

$ cat incorrect.json
{
  "employeeDetails":{
    "name": "xxxx",
    "num:"415"
  }
  "work":[
  {
    "monthYear":"01/2007"
    "workdate":"1|2|3|....|31",
    "workhours":"8|8|8....|8"
  },
  {
    "monthYear":"02/2007"
    "workdate":"1|2|3|....|31",
    "workhours":"8|8|8....|8"
  }
  ],
}
$ cat incorrect.json | jq
parse error: Expected separator between values at line 4, column 14

Once you have fixed the JSON file, use the following trick to load the multi-line JSON file (wholeTextFiles reads each file as a single (path, content) record, so the whole multi-line document reaches the JSON parser as one string).

scala> spark.version
res5: String = 2.1.1

scala> val employees = spark.read.json(sc.wholeTextFiles("employee.json").values)
scala> employees.printSchema
root
 |-- employeeDetails: struct (nullable = true)
 |    |-- name: string (nullable = true)
 |    |-- num: string (nullable = true)
 |-- work: array (nullable = true)
 |    |-- element: struct (containsNull = true)
 |    |    |-- monthYear: string (nullable = true)
 |    |    |-- workdate: string (nullable = true)
 |    |    |-- workhours: string (nullable = true)

scala> employees.select("employeeDetails").show()
+---------------+
|employeeDetails|
+---------------+
|     [xxxx,415]|
+---------------+

Spark >= 2.2

As of Spark 2.2 (released quite recently and strongly recommended), you should use the multiLine option, which was added in SPARK-20980 Rename the option wholeFile to multiLine for JSON and CSV.

scala> spark.version
res0: String = 2.2.0

scala> spark.read.option("multiLine", true).json("employee.json").printSchema
root
 |-- employeeDetails: struct (nullable = true)
 |    |-- name: string (nullable = true)
 |    |-- num: string (nullable = true)
 |-- work: array (nullable = true)
 |    |-- element: struct (containsNull = true)
 |    |    |-- monthYear: string (nullable = true)
 |    |    |-- workdate: string (nullable = true)
 |    |    |-- workhours: string (nullable = true)
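
With the file loaded either way, the work dates and hours the question asks for sit inside the work array. A minimal sketch of pulling them out (assuming the Spark 2.2+ multiLine load shown above; explode turns the array into one row per month):

import org.apache.spark.sql.functions.{col, explode}

val employees = spark.read.option("multiLine", true).json("employee.json")

// One row per element of the work array, then select the nested fields.
employees
  .select(explode(col("work")).as("w"))
  .select(col("w.monthYear"), col("w.workdate"), col("w.workhours"))
  .show(false)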