Spark - sort a DStream by key and limit it to 5 values

Time: 2016-10-02 16:29:39

Tags: apache-spark pyspark spark-streaming rdd

I am starting to learn Spark, and I wrote a PySpark streaming program that reads stock data (symbol, volume) from port 3333.

Sample data streamed over the socket:
"AAC",111113
"ABT",7451020
"ABBV",7325429
"ADPT",318617
"AET",1839122
"ALR",372777
"AGN",4170581
"ABC",3001798
"ANTM",1968246

I want to display the top 5 symbols by volume, so I used a mapper that reads each line, splits it on the comma, and reverses the fields.

from pyspark import SparkContext
from pyspark.streaming import StreamingContext

sc = SparkContext("local[2]", "NetworkWordCount")
ssc = StreamingContext(sc, 5)  # 5 second batch interval

lines = ssc.socketTextStream("localhost", 3333)
stocks = lines.map(lambda line: sorted(line.split(','), reverse=True))
stocks.pprint()

Here is the output of stocks.pprint():
[u'111113', u'"AAC"']
[u'7451020', u'"ABT"']
[u'7325429', u'"ABBV"']
[u'318617', u'"ADPT"']
[u'1839122', u'"AET"']
[u'372777', u'"ALR"']
[u'4170581', u'"AGN"']
[u'3001798', u'"ABC"']
[u'1968246', u'"ANTM"']

I have considered the following function to display the stock symbols, but I am not sure how to sort the stocks by key (volume) and then limit the function to display only the first 5 values.

def processStocks(stock):
    for st in stock.collect():
        print st[1]

stocks.foreachRDD(processStocks)

1 Answer:

Answer 0 (score: 6)

Since a stream represents an infinite sequence, all you can do is sort each batch. First, you have to parse the data correctly:

# queueStream stands in for the socket source so the example is reproducible
lines = ssc.queueStream([sc.parallelize([
    "AAC,111113", "ABT,7451020", "ABBV,7325429","ADPT,318617",
    "AET,1839122", "ALR,372777", "AGN,4170581", "ABC,3001798", 
    "ANTM,1968246"
])])

def parse(line):
    # Yield a (symbol, volume) pair; malformed lines raise ValueError and are skipped.
    try:
        k, v = line.split(",")
        yield (k, int(v))
    except ValueError:
        pass

parsed = lines.flatMap(parse)
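
A quick sanity check on a plain RDD (using made-up rows, two of them malformed) shows that the generator-plus-flatMap combination silently drops anything that does not parse:

sc.parallelize(["AAC,111113", "garbage", "AET,not_a_number"]).flatMap(parse).collect()
# [('AAC', 111113)]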

Next, sort each batch:

sorted_ = parsed.transform(
    lambda rdd: rdd.sortBy(lambda x: x[1], ascending=False))

Finally, you can pprint the top elements:

sorted_.pprint(5)
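
As with any Spark Streaming job, nothing is computed until the context is started, so the driver should end with:

ssc.start()
ssc.awaitTermination()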

If everything went fine, you should get output like this:

-------------------------------------------                         
Time: 2016-10-02 14:52:30
-------------------------------------------
('ABT', 7451020)
('ABBV', 7325429)
('AGN', 4170581)
('ABC', 3001798)
('ANTM', 1968246)
...

Depending on the size of a batch, a full sort can be quite expensive. In this case you can take just the top elements and parallelize them:

sorted_ = parsed.transform(
    lambda rdd: rdd.ctx.parallelize(rdd.top(5, key=lambda x: x[1])))
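
Here top collects only the five largest pairs to the driver, and parallelize (via rdd.ctx, the owning SparkContext) turns them back into an RDD, so the result is still a DStream.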

or even combine the values locally with combineByKey, so only the running top-5 lists are shuffled:

from operator import itemgetter
import heapq

key = itemgetter(1)  # rank records by volume

def create_combiner(key=lambda x: x):
    # Start a per-partition accumulator: a list of (rank-key, record) pairs.
    def _(x):
        return [(key(x), x)]
    return _

def merge_value(n=5, key=lambda x: x):
    # Push a new record onto the accumulator and keep only the n largest entries.
    def _(acc, x):
        heapq.heappush(acc, (key(x), x))
        return heapq.nlargest(n, acc) if len(acc) > n else acc
    return _

def merge_combiners(n=5):
    # Merge two partial accumulators, again keeping only the n largest entries.
    def _(acc1, acc2):
        merged = list(heapq.merge(acc1, acc2))
        return heapq.nlargest(n, merged) if len(merged) > n else merged
    return _

(parsed
    .map(lambda x: (None, x))        # single dummy key so all records meet
    .combineByKey(
        create_combiner(key=key), merge_value(key=key), merge_combiners())
    .flatMap(lambda x: x[1]))        # unwrap the final top-n list
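
Note that the elements coming out of the flatMap still carry the ranking key, i.e. each one is a (volume, (symbol, volume)) pair. A sketch of how you might strip the key and print plain records (same pipeline, with a final map and pprint appended):

top5 = (parsed
    .map(lambda x: (None, x))
    .combineByKey(
        create_combiner(key=key), merge_value(key=key), merge_combiners())
    .flatMap(lambda x: x[1]))

top5.map(itemgetter(1)).pprint(5)  # drop the ranking key, keep (symbol, volume)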