Apache Spark Python: groupByKey or reduceByKey or combineByKey

Asked: 2015-09-25 05:20:03

Tags: python apache-spark pyspark

I am trying to process a 3 GB file. The file consists of many lines, where groups of n lines share a specific key, and each key appears at a fixed position within the line.

Here is an example of the file structure:

abc123Key1asdas
abc124Key1asdas
abc126Key1asasd
abcw23Key2asdad
asdfsaKey2asdsa
....
.....
.....
abcasdKeynasdas
asfssdfKeynasda
asdaasdKeynsdfa

The structure I want to end up with is:

((Key1, (abc123Key1asdas, abc124Key1asdas, abc126Key1asasd)), (Key2, (abcw23Key2asdad, asdfsaKey2asdsa)), ..., (Keyn, (abcasdKeynasdas, asfssdfKeynasda, asdaasdKeynsdfa)))

I am trying something like this:

lines = sc.textFile(fileName)
counts = lines.flatMap(lambda line: line.split('\n')).map(lambda line: (line[10:21],line))
output = counts.combineByKey().collect()

Can anyone help me achieve this?

1 Answer:

Answer 0 (score: 2)

Just replace combineByKey() with groupByKey() and you are done. Since groupByKey() returns an iterable of values per key, apply mapValues(list) to materialize each group as a list.

Sample code:

data = sc.parallelize(['abc123Key1asdas', 'abc123Key1asdas', 'abc123Key1asdas', 'abcw23Key2asdad', 'abcw23Key2asdad', 'abcasdKeynasdas', 'asfssdKeynasda', 'asdaasKeynsdfa'])
# the key occupies characters 6-9 of each line, e.g. line[6:10] == 'Key1'
data.map(lambda line: (line[6:10], line)).groupByKey().mapValues(list).collect()

[('Key1', ['abc123Key1asdas', 'abc123Key1asdas', 'abc123Key1asdas']), ('Key2', ['abcw23Key2asdad', 'abcw23Key2asdad']), ('Keyn', ['abcasdKeynasdas', 'asfssdKeynasda', 'asdaasKeynsdfa'])]
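
The question also mentions reduceByKey and combineByKey. Both can produce the same grouping; here is a minimal sketch, assuming the same 4-character key slice as above (note that combineByKey requires three combiner functions, which is why calling it with no arguments fails):

pairs = data.map(lambda line: (line[6:10], line))

# combineByKey builds each group from three explicit functions
pairs.combineByKey(
    lambda v: [v],               # createCombiner: start a new list for a key
    lambda acc, v: acc + [v],    # mergeValue: append a value within a partition
    lambda a, b: a + b           # mergeCombiners: concatenate lists across partitions
).collect()

# reduceByKey works too if each value is first wrapped in a list
pairs.mapValues(lambda v: [v]).reduceByKey(lambda a, b: a + b).collect()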

More information: http://spark.apache.org/docs/latest/api/python/pyspark.html?highlight=groupbykey#pyspark.RDD.groupByKey