Merging multiple dictionaries into one dictionary in a PySpark RDD

Asked: 2019-06-19 10:47:00

Tags: python pyspark rdd

I have a DataFrame that looks like this:

from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("test").enableHiveSupport().getOrCreate()
data = [(1, 2, 0.1, 0.3), (1, 2, 0.1, 0.3), (1, 3, 0.1, 0.3), (1, 3, 0.1, 0.3),
        (11, 12, 0.1, 0.3), (11, 12, 0.1, 0.3), (11, 13, 0.1, 0.3), (11, 13, 0.1, 0.3)]

trajectory_df = spark.createDataFrame(data, schema=['grid_id', 'rider_id', 'lng', 'lat'])
trajectory_df.show()

+-------+--------+---+---+
|grid_id|rider_id|lng|lat|
+-------+--------+---+---+
|      1|       2|0.1|0.3|
|      1|       2|0.1|0.3|
|      1|       3|0.1|0.3|
|      1|       3|0.1|0.3|
|     11|      12|0.1|0.3|
|     11|      12|0.1|0.3|
|     11|      13|0.1|0.3|
|     11|      13|0.1|0.3|
+-------+--------+---+---+

I want to merge the rows belonging to the same grid into a dictionary, where rider_id is the key and the list of [lng, lat] pairs is the value.

The result I expect looks like this:

[(1, {3:[[0.1, 0.3], [0.1, 0.3]],2:[[0.1, 0.3], [0.1, 0.3]]}),
 (11,{13:[[0.1, 0.3], [0.1, 0.3]],12:[[0.1, 0.3], [0.1, 0.3]]})]

I can group by grid_id and rider_id using groupByKey():

def trans_point(row):
    return ((row.grid_id, row.rider_id), [row.lng, row.lat])
trajectory_df = trajectory_df.rdd.map(trans_point).groupByKey().mapValues(list)
print(trajectory_df.take(10))

[((1, 3), [[0.1, 0.3], [0.1, 0.3]]), ((11, 13), [[0.1, 0.3], [0.1, 0.3]]), ((1, 2), [[0.1, 0.3], [0.1, 0.3]]), ((11, 12), [[0.1, 0.3], [0.1, 0.3]])]

But when I try to merge the dictionaries, I can't get the result:

trajectory_df = trajectory_df.map(lambda x:(x[0][0],{x[0][1]:x[1]})).reduceByKey(lambda x,y:x.update(y))
print(trajectory_df.take(10))
[(1, None), (11, None)]

For certain reasons, I need this done at the RDD level. How can I achieve it? Thanks in advance.

1 Answer:

Answer 0 (score: 1)

dict.update is working exactly as documented: it modifies the dictionary in place and returns None. From the Python documentation:

Update the dictionary with the key/value pairs from other, overwriting existing keys. Return None.
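The failure mode is easy to reproduce in plain Python, without Spark (a minimal illustration of the point above):

```python
# dict.update mutates the receiver in place and returns None, so using
# it as a reduce function propagates None after the first merge.
x = {2: [[0.1, 0.3]]}
y = {3: [[0.1, 0.3]]}

result = x.update(y)
print(result)  # None -- this is what reduceByKey ends up returning
print(x)       # {2: [[0.1, 0.3]], 3: [[0.1, 0.3]]} -- x itself was mutated
```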

You need to write your own reduce function that combines two dictionaries and returns the result. We can borrow one from @Aaron Hall's answer to How to merge two dictionaries in a single expression?:

def merge_two_dicts(x, y):
    """From https://stackoverflow.com/a/26853961/5858851"""
    z = x.copy()   # start with x's keys and values
    z.update(y)    # modifies z with y's keys and values & returns None
    return z

trajectory_df = trajectory_df.map(lambda x:(x[0][0],{x[0][1]:x[1]}))\
    .reduceByKey(merge_two_dicts)

print(trajectory_df.collect())
#[(1, {2: [[0.1, 0.3], [0.1, 0.3]], 3: [[0.1, 0.3], [0.1, 0.3]]}),
# (11, {12: [[0.1, 0.3], [0.1, 0.3]], 13: [[0.1, 0.3], [0.1, 0.3]]})]
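On Python 3.5+, dict unpacking gives an equivalent merge without a helper function. A small sketch in plain Python (the Spark call at the end is shown only as a comment):

```python
# {**x, **y} builds a NEW dict containing the keys of both arguments
# (y's values win on collisions), so unlike dict.update it can be
# passed directly to reduceByKey.
merge = lambda x, y: {**x, **y}

a = {2: [[0.1, 0.3], [0.1, 0.3]]}
b = {3: [[0.1, 0.3], [0.1, 0.3]]}
print(merge(a, b))
# {2: [[0.1, 0.3], [0.1, 0.3]], 3: [[0.1, 0.3], [0.1, 0.3]]}
print(a)  # a is unchanged: {2: [[0.1, 0.3], [0.1, 0.3]]}

# With Spark this would read:
# trajectory_df.map(lambda x: (x[0][0], {x[0][1]: x[1]}))\
#     .reduceByKey(lambda x, y: {**x, **y})
```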