PYSPARK: sorting with reduceByKey

Date: 2017-01-27 21:29:42

Tags: apache-spark pyspark

I have an RDD that looks like this:

dataSource = sc.parallelize([("user1", (3, "blue")), ("user1", (4, "black")), ("user2", (5, "white")), ("user2", (3, "black")), ("user2", (6, "red")), ("user1", (1, "red"))])

I want to use reduceByKey to find the top 2 colors for each user, so the output would be an RDD like:

sc.parallelize([("user1", ["black", "blue"]), ("user2", ["red", "white"])])

So I need to reduce by key, then sort each key's values, which are (number, color) pairs, by the number, and return the top n colors.

I don't want to use groupBy. If there is anything other than groupBy that works together with reduceByKey, that would be great :)

1 Answer:

Answer 0 (score: 1):

You can use a heap queue, for example. Required imports:

import heapq
from functools import partial
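
As a quick stand-alone illustration (not part of the solution itself), heappushpop pushes a new item and then pops the smallest one, so a heap seeded with n sentinels always keeps the n largest items seen so far:

import heapq

heap = [(float("-inf"), None), (float("-inf"), None)]  # mirrors zero_value(2)
for item in [(3, "blue"), (4, "black"), (1, "red")]:
    # Each call pushes item and removes the smallest element,
    # so only the 2 largest (count, color) pairs survive.
    heapq.heappushpop(heap, item)

print(heapq.nlargest(2, heap))  # [(4, 'black'), (3, 'blue')]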

Helper functions:

def zero_value(n):
    """Initialize a queue. If n is large,
    it could be more efficient to track the number of elements
    on the heap as a (cnt, heap) pair and switch between heappush
    and heappushpop once we exceed n. I leave this as an exercise
    for the reader (one possible sketch follows after these helpers)."""
    return [(float("-inf"), None) for _ in range(n)]

def seq_func(acc, x):
    # Push x onto the fixed-size heap and drop the smallest element.
    heapq.heappushpop(acc, x)
    return acc

def merge_func(acc1, acc2, n):
    # Combine two partial accumulators and keep the n largest pairs.
    return heapq.nlargest(n, heapq.merge(acc1, acc2))

def finalize(kvs):
    # Drop the sentinel entries and keep only the colors.
    return [v for (k, v) in kvs if k != float("-inf")]
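
For completeness, the optimization hinted at in zero_value's docstring could look roughly like the sketch below. The counted_* names are made up here, and the accumulator is assumed to be a (count, heap) pair instead of a pre-filled heap:

import heapq

def counted_zero_value():
    # Hypothetical accumulator: (number of real elements, heap).
    return (0, [])

def counted_seq_func(acc, x, n):
    cnt, heap = acc
    if cnt < n:
        heapq.heappush(heap, x)    # heap not full yet: just push
        return (cnt + 1, heap)
    heapq.heappushpop(heap, x)     # heap full: push x and drop the smallest
    return (cnt, heap)

def counted_merge_func(acc1, acc2, n):
    # Keep the n largest pairs across both partial results.
    merged = heapq.nlargest(n, acc1[1] + acc2[1])
    return (len(merged), merged)

def counted_finalize(acc):
    return [color for (count, color) in acc[1]]

It would be wired up the same way as below, e.g. rdd.aggregateByKey(counted_zero_value(), partial(counted_seq_func, n=2), partial(counted_merge_func, n=2)).mapValues(counted_finalize).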

Data:

rdd = sc.parallelize([
    ("user1", (3, "blue")), ("user1", (4, "black")),
    ("user2", (5, "white")), ("user2", (3, "black")),
    ("user2", (6, "red")), ("user1", (1, "red"))])

Solution:

(rdd
    .aggregateByKey(zero_value(2), seq_func, partial(merge_func, n=2))
    .mapValues(finalize)
    .collect())

Result:

[('user2', ['red', 'white']), ('user1', ['black', 'blue'])]
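
If you really want to stay with reduceByKey as asked, one possible sketch (not part of the solution above, and ignoring how ties should be broken) wraps each value in a list and merges lists with heapq.nlargest:

import heapq

n = 2
top_colors = (rdd
    .mapValues(lambda v: [v])                            # each value becomes a one-element list
    .reduceByKey(lambda a, b: heapq.nlargest(n, a + b))  # keep only the n largest pairs per merge
    .mapValues(lambda pairs: [color for (count, color) in pairs]))

top_colors.collect()
# e.g. [('user2', ['red', 'white']), ('user1', ['black', 'blue'])]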