How to read a JSON file with a specific format using Spark Scala?

Date: 2015-07-29 13:40:47

Tags: json scala apache-spark

I am trying to read a JSON file that looks like this:

[ 
{"IFAM":"EQR","KTM":1430006400000,"COL":21,"DATA":[{"MLrate":"30","Nrout":"0","up":null,"Crate":"2"} 
,{"MLrate":"31","Nrout":"0","up":null,"Crate":"2"} 
,{"MLrate":"30","Nrout":"5","up":null,"Crate":"2"} 
,{"MLrate":"34","Nrout":"0","up":null,"Crate":"4"} 
,{"MLrate":"33","Nrout":"0","up":null,"Crate":"2"} 
,{"MLrate":"30","Nrout":"8","up":null,"Crate":"2"} 
]} 
,{"IFAM":"EQR","KTM":1430006400000,"COL":22,"DATA":[{"MLrate":"30","Nrout":"0","up":null,"Crate":"2"} 
,{"MLrate":"30","Nrout":"0","up":null,"Crate":"0"} 
,{"MLrate":"35","Nrout":"1","up":null,"Crate":"5"} 
,{"MLrate":"30","Nrout":"6","up":null,"Crate":"2"} 
,{"MLrate":"30","Nrout":"0","up":null,"Crate":"2"} 
,{"MLrate":"38","Nrout":"8","up":null,"Crate":"1"} 
]} 
,...
] 

I have tried the command:

    val df = sqlContext.read.json("namefile") 
    df.show() 

But it does not work: my columns are not recognized...

2 answers:

Answer 0 (score: 6)

If you want to use read.json, you need one JSON document per line. If your file contains a valid JSON array of documents, it simply won't work as expected. Taking your sample data, the input file should be formatted like this:

{"IFAM":"EQR","KTM":1430006400000,"COL":21,"DATA":[{"MLrate":"30","Nrout":"0","up":null,"Crate":"2"}, {"MLrate":"31","Nrout":"0","up":null,"Crate":"2"}, {"MLrate":"30","Nrout":"5","up":null,"Crate":"2"} ,{"MLrate":"34","Nrout":"0","up":null,"Crate":"4"} ,{"MLrate":"33","Nrout":"0","up":null,"Crate":"2"} ,{"MLrate":"30","Nrout":"8","up":null,"Crate":"2"} ]}
{"IFAM":"EQR","KTM":1430006400000,"COL":22,"DATA":[{"MLrate":"30","Nrout":"0","up":null,"Crate":"2"} ,{"MLrate":"30","Nrout":"0","up":null,"Crate":"0"} ,{"MLrate":"35","Nrout":"1","up":null,"Crate":"5"} ,{"MLrate":"30","Nrout":"6","up":null,"Crate":"2"} ,{"MLrate":"30","Nrout":"0","up":null,"Crate":"2"} ,{"MLrate":"38","Nrout":"8","up":null,"Crate":"1"} ]}

If you use read.json on the structure above, you will see that it parses as expected:

scala> sqlContext.read.json("namefile").printSchema
root
 |-- COL: long (nullable = true)
 |-- DATA: array (nullable = true)
 |    |-- element: struct (containsNull = true)
 |    |    |-- Crate: string (nullable = true)
 |    |    |-- MLrate: string (nullable = true)
 |    |    |-- Nrout: string (nullable = true)
 |    |    |-- up: string (nullable = true)
 |-- IFAM: string (nullable = true)
 |-- KTM: long (nullable = true)
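As an aside not covered in the original answer: if reformatting the file is not an option, newer Spark releases (2.2 and later) can read a file containing a single multi-line JSON array directly via the `multiLine` reader option, where each element of the top-level array becomes one row. A minimal sketch; the temp-file setup just stands in for the question's "namefile", and the sample is abbreviated to two records:

```scala
import java.nio.file.Files
import org.apache.spark.sql.SparkSession

val spark = SparkSession.builder().master("local[1]").appName("json-array").getOrCreate()

// Write an abbreviated version of the question's array-style JSON to a temp
// file, standing in for "namefile" from the question.
val sample =
  """[{"IFAM":"EQR","KTM":1430006400000,"COL":21,"DATA":[{"MLrate":"30","Nrout":"0","up":null,"Crate":"2"}]},
    |{"IFAM":"EQR","KTM":1430006400000,"COL":22,"DATA":[{"MLrate":"35","Nrout":"1","up":null,"Crate":"5"}]}]""".stripMargin
val path = Files.createTempFile("sample", ".json")
Files.write(path, sample.getBytes("UTF-8"))

// multiLine tells Spark to parse the whole file as one JSON value instead of
// expecting one document per line; array elements become rows.
val df = spark.read.option("multiLine", true).json(path.toString)
df.printSchema()
df.show()
```

This avoids any preprocessing, at the cost of losing the per-line parallelism that the JSON Lines layout gives the reader.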

Answer 1 (score: 0)

If you do not want to reformat the JSON file (one document per line), you can build a schema with StructType and MapType and pass it to the Spark SQL reader:

import org.apache.spark.sql.DataFrame 
import org.apache.spark.sql.functions._ 
import org.apache.spark.sql.types._

// Convenience function for turning JSON strings into DataFrames.
// Assumes a SparkSession `spark` and a SparkContext `sc` are in scope,
// as in spark-shell.
def jsonToDataFrame(json: String, schema: StructType = null): DataFrame = {
  val reader = spark.read
  Option(schema).foreach(reader.schema)
  reader.json(sc.parallelize(Array(json)))
}

// Using a struct
val schema = new StructType().add("a", new StructType().add("b", IntegerType))

// call the function, passing the sample JSON data and the schema as parameters
val json_df = jsonToDataFrame("""
   {
     "a": {
        "b": 1
     }
   } """, schema)

// now you can access your json fields
val b_value = json_df.select("a.b")
b_value.show()
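Applied to the records from the question, a hand-written schema could look like the sketch below (the types are guessed from the sample and from the inferred schema in the first answer, e.g. the DATA fields stay strings). It assumes Spark 2.2+, where the reader accepts a Dataset[String]:

```scala
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.types._

val spark = SparkSession.builder().master("local[1]").appName("json-schema").getOrCreate()
import spark.implicits._

// Schema of one DATA array element; the numeric-looking fields are kept as
// strings, matching the inferred schema shown in the first answer.
val dataElement = new StructType()
  .add("Crate", StringType)
  .add("MLrate", StringType)
  .add("Nrout", StringType)
  .add("up", StringType)

// Schema of one top-level record.
val recordSchema = new StructType()
  .add("COL", LongType)
  .add("DATA", ArrayType(dataElement))
  .add("IFAM", StringType)
  .add("KTM", LongType)

val json = """{"IFAM":"EQR","KTM":1430006400000,"COL":21,"DATA":[{"MLrate":"30","Nrout":"0","up":null,"Crate":"2"}]}"""
val record_df = spark.read.schema(recordSchema).json(Seq(json).toDS)

// Selecting a struct field through the array yields an array column.
record_df.select("COL", "DATA.MLrate").show()
```

With an explicit schema, Spark skips schema inference entirely, which also makes the read faster on large inputs.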

For more examples and details, see this reference: https://docs.databricks.com/spark/latest/spark-sql/complex-types.html#transform-complex-data-types-scala