% wordcount in Spark Streaming (Python)

Date: 2016-12-12 15:39:44

Tags: python apache-spark pyspark spark-streaming word-count

In the following example, I receive a sequence of words from Kafka:

('cat')
('dog')
('rat')
('dog')

My goal is to compute the historical percentage of each word. I will have two RDDs: one with the historical word counts and another with the total of all words:

values = KafkaUtils.createDirectStream(ssc, [topic], {"metadata.broker.list": brokers})


def updatefunc(new_value, last_value):
    if last_value is None:
        last_value = 0
    # new_value is the list of counts seen for this key in the current batch
    return sum(new_value, last_value)


words = values.map(lambda x: (x, 1)).reduceByKey(lambda a, b: a + b)

historic = words.updateStateByKey(updatefunc).\
    transform(lambda rdd: rdd.sortBy(lambda (x, v): x))

totalNo = words.map(lambda x: x[1]).reduce(lambda a, b: a + b).\
    map(lambda x: ('totalsum', x)).updateStateByKey(updatefunc).map(lambda x: x[1])
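The per-key update logic above can be checked without a Spark cluster. Below is a minimal plain-Python sketch of what `reduceByKey` plus `updateStateByKey` do across batches, assuming each Kafka message carries a single word as in the example; the `state` dict and the batch lists are illustrative stand-ins, not Spark API:

```python
from collections import Counter

state = {}  # stands in for the per-key state kept by updateStateByKey

def updatefunc(new_value, last_value):
    if last_value is None:
        last_value = 0
    # sum(iterable, start): add the batch's counts onto the running total
    return sum(new_value, last_value)

for batch in (['cat', 'dog'], ['rat', 'dog']):
    counts = Counter(batch)  # same result as map(lambda x: (x, 1)).reduceByKey(add)
    for word, n in counts.items():
        state[word] = updatefunc([n], state.get(word))

print(sorted(state.items()))  # [('cat', 1), ('dog', 2), ('rat', 1)]
```

After the two batches the state holds the historic counts for the four words received from Kafka in the question.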

Now I am trying to compute ((historic value of each key) / totalNo) * 100 to obtain the percentage of each word:

solution=historic.map(lambda x: x[0],x[1]*100/totalNo)

But I get this error:

 It appears that you are attempting to reference SparkContext from a broadcast variable, action, or transformation. SparkContext can only be used on the driver, not in code that it run on workers. For more information, see SPARK-5063

How can I pin down the value of totalNo so that I can use it in another RDD?

1 Answer:

Answer 0 (score: 0)

In the end, this approach works as well:

from operator import add

words = KafkaUtils.createDirectStream(ssc, topics=['test'], kafkaParams={'bootstrap.servers': 'localhost:9092'})\
    .map(lambda x: x[1]).flatMap(lambda x: list(x))

# x is the list of new per-batch counts for a key, y the previous state (or None)
historic = words.map(lambda x: (x, 1)).updateStateByKey(lambda x, y: sum(x) + (y or 0))

def func(rdd):
    if not rdd.isEmpty():
        # the total is computed with an action inside transform(), once per batch,
        # so no other DStream/RDD is referenced from inside a transformation (SPARK-5063)
        totalNo = rdd.map(lambda x: x[1]).reduce(add)
        rdd = rdd.map(lambda x: (x[0], x[1] / totalNo))
    return rdd

solution = historic.transform(func)

solution.pprint()
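The normalization inside `func` can also be checked outside Spark. A plain-Python sketch of the per-batch division, with the `* 100` the question asked for; the `pairs` list is a hypothetical stand-in for the (word, count) tuples held by the historic RDD:

```python
# pairs stands in for the (word, count) tuples of one batch of the historic RDD
pairs = [('cat', 1), ('dog', 2), ('rat', 1)]

totalNo = sum(count for _, count in pairs)  # rdd.map(lambda x: x[1]).reduce(add)

# 100.0 forces float division (relevant under Python 2, where 1 / 4 == 0)
solution = [(word, 100.0 * count / totalNo) for word, count in pairs]

print(solution)  # [('cat', 25.0), ('dog', 50.0), ('rat', 25.0)]
```

Note that `func` above divides without the factor of 100, so it yields fractions rather than percentages; multiply by 100 inside the `map` if percentages are wanted.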

Is this what you wanted?