Emitting multiple pairs in a map operation

Asked: 2015-02-27 07:01:56

Tags: apache-spark pyspark

Say I have a series of call records in the format:

[CallingUser, ReceivingUser, Duration]

I want to know the total time a given user has spent on the phone, i.e. the sum of Duration over every record where that user appears as either CallingUser or ReceivingUser.

In effect, for each record I want to create two pairs: (CallingUser, Duration) and (ReceivingUser, Duration).

What is the most efficient way to do this? I could union two RDDs together, but I'm not sure whether that is a good approach:

# Sample data:
callData = sc.parallelize([["User1", "User2", 2], ["User1", "User3", 4], ["User2", "User1", 8]])

calls = callData.map(lambda record: (record[0], record[2]))

# The potentially inefficient second pass in question (RDDs are combined with union()):
calls = calls.union(callData.map(lambda record: (record[1], record[2])))

reduce = calls.reduceByKey(lambda a, b: a + b)

2 Answers:

Answer 0 (score: 11):

You want flatMap. If you write a function that returns the list [(record[0], record[2]), (record[1], record[2])], you can flatMap it!
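A minimal sketch of that idea, assuming the callData RDD from the question (the helper name emit_pairs is just for illustration):

def emit_pairs(record):
    # One input record yields two (user, duration) pairs.
    return [(record[0], record[2]), (record[1], record[2])]

pairs = callData.flatMap(emit_pairs)
totals = pairs.reduceByKey(lambda a, b: a + b)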

Answer 1 (score: 8):

Use flatMap(); it is useful for taking a single input and producing multiple mapped outputs. Complete code:

callData = sc.parallelize([["User1", "User2", 2], ["User1", "User3", 4], ["User2", "User1", 8]])

calls = callData.flatMap(lambda record: [(record[0], record[2]), (record[1], record[2])])
print(calls.collect())
# prints [('User1', 2), ('User2', 2), ('User1', 4), ('User3', 4), ('User2', 8), ('User1', 8)]

reduce = calls.reduceByKey(lambda a, b: a + b)
print(reduce.collect())
# prints [('User2', 10), ('User3', 4), ('User1', 14)]
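As a usage note (a sketch, not part of the original answer): reduceByKey() combines values on the map side before the shuffle, so it is generally preferable to groupByKey() followed by a sum for this kind of aggregation. Once the per-user totals exist, a single user's total can be retrieved as follows:

# Fetch one user's total directly from the reduced RDD.
print(reduce.lookup("User1"))   # prints [14]

# Or collect the (small) result into a local dict for repeated lookups.
totals = reduce.collectAsMap()
print(totals.get("User1"))      # prints 14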