Parsing an RDD containing JSON data

Date: 2017-12-10 17:37:00

Tags: apache-spark pyspark pyspark-sql

I have a JSON file with the following data:

{"year":"2016","category":"physics","laureates":[{"id":"928","firstname":"David J.","surname":"Thouless","motivation":"\"for theoretical discoveries of topological phase transitions and topological phases of matter\"","share":"2"},{"id":"929","firstname":"F. Duncan M.","surname":"Haldane","motivation":"\"for theoretical discoveries of topological phase transitions and topological phases of matter\"","share":"4"},{"id":"930","firstname":"J. Michael","surname":"Kosterlitz","motivation":"\"for theoretical discoveries of topological phase transitions and topological phases of matter\"","share":"4"}]}
{"year":"2016","category":"chemistry","laureates":[{"id":"931","firstname":"Jean-Pierre","surname":"Sauvage","motivation":"\"for the design and synthesis of molecular machines\"","share":"3"},{"id":"932","firstname":"Sir J. Fraser","surname":"Stoddart","motivation":"\"for the design and synthesis of molecular machines\"","share":"3"},{"id":"933","firstname":"Bernard L.","surname":"Feringa","motivation":"\"for the design and synthesis of molecular machines\"","share":"3"}]}

I need to return an RDD of key-value pairs, where the category is the key and the list of Nobel laureates' surnames is the value. How can I do this using transformations?

For the given dataset, it should be:

"physics"-"Thouless","Haldane","Kosterlitz"
"chemistry"-"Sauvage","Stoddart","Feringa"

1 Answer:

Answer 0 (score: 2)

Are you restricted to RDDs only? If you can use DataFrames, loading is very simple: you get a schema, explode the nested field, group, and then convert to an RDD. Here is one way you can do it.

Load the JSON into a DataFrame, where you can also confirm your schema:

>>> nobelDF = spark.read.json('/user/cloudera/nobel.json')
>>> nobelDF.printSchema()
root
 |-- category: string (nullable = true)
 |-- laureates: array (nullable = true)
 |    |-- element: struct (containsNull = true)
 |    |    |-- firstname: string (nullable = true)
 |    |    |-- id: string (nullable = true)
 |    |    |-- motivation: string (nullable = true)
 |    |    |-- share: string (nullable = true)
 |    |    |-- surname: string (nullable = true)
 |-- year: string (nullable = true)

Now you can explode the nested array and then convert to an RDD that can be grouped:

from pyspark.sql.functions import explode
nobelRDD = nobelDF.select('category', explode('laureates.surname')).rdd

Just an FYI, the exploded DataFrame looks like this:

+---------+----------+
| category|       col|
+---------+----------+
|  physics|  Thouless|
|  physics|   Haldane|
|  physics|Kosterlitz|
|chemistry|   Sauvage|
|chemistry|  Stoddart|
|chemistry|   Feringa|
+---------+----------+

Now group by category:

from pyspark.sql.functions import explode, collect_list
nobelRDD = nobelDF.select('category', explode('laureates.surname')).groupBy('category').agg(collect_list('col').alias('sn')).rdd
nobelRDD.collect()

Now you have an RDD in the shape you need, although each element is still a Row object (I added newlines to show the complete rows):

>>> for n in nobelRDD.collect():
...     print n
...
Row(category=u'chemistry', sn=[u'Sauvage', u'Stoddart', u'Feringa'])
Row(category=u'physics', sn=[u'Thouless', u'Haldane', u'Kosterlitz'])

But it is a simple map to get tuples (I added newlines to show the complete rows):

>>> nobelRDD.map(lambda x: (x[0],x[1])).collect()
[(u'chemistry', [u'Sauvage', u'Stoddart', u'Feringa']), 
 (u'physics', [u'Thouless', u'Haldane', u'Kosterlitz'])]
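If you really are restricted to RDD transformations, one common pattern is to read the file with `sc.textFile` and parse each line with `json.loads`, since the file is line-delimited JSON. This is a sketch, not the answerer's method: the function below is the per-line parser you would pass to `map`, shown here in plain Python so its logic is easy to verify (`to_pair` is a hypothetical helper name).

```python
import json

# Parse one line of line-delimited JSON into a (category, [surnames]) pair.
# This is the function you would pass to rdd.map().
def to_pair(line):
    rec = json.loads(line)
    return (rec['category'], [p['surname'] for p in rec['laureates']])

# With Spark this would look like (assuming sc is your SparkContext):
#   pairs = sc.textFile('/user/cloudera/nobel.json').map(to_pair)
#   merged = pairs.reduceByKey(lambda a, b: a + b)  # merge surname lists per category
#   merged.collect()

# Quick local check on one record from the question's data:
sample = ('{"year":"2016","category":"physics","laureates":'
          '[{"surname":"Thouless"},{"surname":"Haldane"},{"surname":"Kosterlitz"}]}')
print(to_pair(sample))  # ('physics', ['Thouless', 'Haldane', 'Kosterlitz'])
```

Because each category appears on a single line here, `reduceByKey` is only needed if a category can span multiple lines; otherwise the `map` alone already yields the requested pairs.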