How to group by and add in Spark?

Asked: 2016-08-19 02:16:42

Tags: python apache-spark pyspark distributed-computing rdd

I have an RDD like this:

{"key1" : "fruit" , "key2" : "US" , "key3" : "1" }

{"key1" : "fruit" , "key2" : "US" , "key3" : "2" }

{"key1" : "vegetable" , "key2" : "US" , "key3" : "1" }

{"key1" : "fruit" , "key2" : "Japan" , "key3" : "3" }

{"key1" : "vegetable" , "key2" : "Japan" , "key3" : "3" }

My goal is to group first by key1, then by key2, and finally add up key3.

I expect a final result like:

key1          key2      key3
"fruit"     , "US"    , 3
"vegetable" , "US"    , 1
"fruit"     , "Japan" , 3
"vegetable" , "Japan" , 3

My code looks like this:

rdd_arm = rdd_arm.map(lambda x: x[1])

rdd_arm contains the key:value records shown above.

I don't know what to do next. Could anyone help me?

2 Answers:

Answer 0: (score: 2)

I solved it myself.

I had to build a composite key out of the multiple keys, then add the values up (converting key3 to an int so the addition is numeric rather than string concatenation):

rdd_arm.map(lambda x: (x["key1"] + ", " + x["key2"], int(x["key3"]))).reduceByKey(lambda a, b: a + b)

The following question was helpful:

How to group by multiple keys in spark?
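
A tuple also works as the composite key and saves you from splitting the concatenated string apart again later. A minimal sketch of that variant, assuming the records are dicts as in the question:

pairs = rdd_arm.map(lambda x: ((x["key1"], x["key2"]), int(x["key3"])))  # ((key1, key2), key3)
summed = pairs.reduceByKey(lambda a, b: a + b)                           # sum key3 per composite key
result = summed.map(lambda kv: (kv[0][0], kv[0][1], kv[1]))              # flatten to (key1, key2, key3) rows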

Answer 1: (score: 1)

Let's create your RDD:

In [1]: rdd_arm = sc.parallelize([
   ...:     {"key1": "fruit", "key2": "US", "key3": "1"},
   ...:     {"key1": "fruit", "key2": "US", "key3": "2"},
   ...:     {"key1": "vegetable", "key2": "US", "key3": "1"},
   ...:     {"key1": "fruit", "key2": "Japan", "key3": "3"},
   ...:     {"key1": "vegetable", "key2": "Japan", "key3": "3"},
   ...: ])
In [2]: rdd_arm.collect()
Out[2]: 
[{'key1': 'fruit', 'key2': 'US', 'key3': '1'},
 {'key1': 'fruit', 'key2': 'US', 'key3': '2'},
 {'key1': 'vegetable', 'key2': 'US', 'key3': '1'},
 {'key1': 'fruit', 'key2': 'Japan', 'key3': '3'},
 {'key1': 'vegetable', 'key2': 'Japan', 'key3': '3'}]

First, you have to create a new key, which will be the (key1, key2) pair. Its value will be key3, so you want to do something like this:

In [3]: new_rdd = rdd_arm.map(lambda x: (x['key1'] + ", " + x['key2'], x['key3']))

In [4]: new_rdd.collect()
Out[4]: 
[('fruit, US', '1'),
 ('fruit, US', '2'),
 ('vegetable, US', '1'),
 ('fruit, Japan', '3'),
 ('vegetable, Japan', '3')]

Then we want to add up the values of duplicate keys. Since key3 is stored as a string, convert the values to integers with mapValues() first (reduceByKey() never calls the function for keys that appear only once, so converting inside the lambda would leave those values as strings), then call reduceByKey(), like this:

In [5]: new_rdd = new_rdd.mapValues(int).reduceByKey(lambda a, b: a + b)

In [6]: new_rdd.collect()
Out[6]: 
[('fruit, US', 3),
 ('fruit, Japan', 3),
 ('vegetable, US', 1),
 ('vegetable, Japan', 3)]

And we're done!

Of course, this could also be a one-liner:

new_rdd = rdd_arm.map(lambda x: (x['key1'] + ", " + x['key2'], int(x['key3']))).reduceByKey(lambda a, b: a + b)
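
If you would rather stay in the DataFrame API, the same aggregation can be expressed with groupBy(). A minimal sketch, assuming an existing SparkSession named spark:

from pyspark.sql import functions as F

# Build a DataFrame with the question's rows, key3 already as an integer.
df = spark.createDataFrame(
    [("fruit", "US", 1), ("fruit", "US", 2), ("vegetable", "US", 1),
     ("fruit", "Japan", 3), ("vegetable", "Japan", 3)],
    ["key1", "key2", "key3"],
)
# Group by both keys and sum key3.
df.groupBy("key1", "key2").agg(F.sum("key3").alias("key3")).show()

This keeps key1 and key2 as separate columns, which matches the table in the question exactly.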