Using Spark

Date: 2017-02-03 06:19:36

Tags: json apache-spark pyspark apache-spark-sql spark-dataframe

I'm new to Spark. I'm trying to parse the JSON file below using Spark SQL, but it isn't working. Can someone help me solve this problem?

Input JSON:

[{"num":"1234","Projections":[{"Transactions":[{"14:45":0,"15:00":0}]}]}]

Expected output:

1234 14:45 0\n
1234 15:00 0

I tried the following code, but it did not work:

val sqlContext = new SQLContext(sc)
val df = sqlContext.read.json("hdfs:/user/aswin/test.json").toDF();
val sql_output = sqlContext.sql("SELECT num, Projections.Transactions FROM df group by Projections.TotalTransactions ")
sql_output.collect.foreach(println)

Output:

[01532,WrappedArray(WrappedArray([0,0]))]

1 Answer:

Answer 0 (score: 2)

Spark infers your {"14:45":0,"15:00":0} map as a struct, so the only way to read the data correctly is probably to specify the schema manually:

>>> from pyspark.sql.types import *
>>> schema = StructType([
...     StructField('num', StringType()),
...     StructField('Projections', ArrayType(StructType([
...         StructField('Transactions', ArrayType(MapType(StringType(), IntegerType())))
...     ])))
... ])

Then you can query this temporary table with multiple explodes to get the result:

>>> sqlContext.read.json('sample.json', schema=schema).registerTempTable('df')
>>> sqlContext.sql("select num, explode(col) from (select explode(col.Transactions), num from (select explode(Projections), num from df))").show()
+----+-----+-----+
| num|  key|value|
+----+-----+-----+
|1234|14:45|    0|
|1234|15:00|    0|
+----+-----+-----+
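For reference, the flattening that the nested-explode query performs can be sketched in plain Python (no Spark needed): walk the same record → Projections → Transactions nesting and emit one (num, key, value) row per map entry. This is only an illustration of the expected result, not the Spark execution itself.

```python
import json

# Same sample record as in the question.
raw = '[{"num":"1234","Projections":[{"Transactions":[{"14:45":0,"15:00":0}]}]}]'

rows = []
for record in json.loads(raw):
    # explode(Projections): one row per array element
    for projection in record["Projections"]:
        # explode(col.Transactions): one row per inner array element
        for txn_map in projection["Transactions"]:
            # explode(col) on a map: one row per key/value pair
            for time_key, value in txn_map.items():
                rows.append((record["num"], time_key, value))

for num, key, value in rows:
    print(num, key, value)
# 1234 14:45 0
# 1234 15:00 0
```

Each `for` loop mirrors one `explode` in the SQL above, which is why three nested explodes are needed before the map's keys and values become columns.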