pyspark: writing to a file after aggregating with reduceByKey

Date: 2017-05-28 17:21:32

Tags: pyspark apache-spark-sql

My code looks like this:

sc = SparkContext("local", "App Name")
eventRDD = sc.textFile("file:///home/cloudera/Desktop/python/event16.csv")
outRDDExt = eventRDD.filter(lambda s: "Topic" in s).map(lambda s: s.split('|'))
outRDDExt2 = outRDDExt.keyBy(lambda x: (x[1],x[2][:-19]))
outRDDExt3 = outRDDExt2.mapValues(lambda x: 1)
outRDDExt4 = outRDDExt3.reduceByKey(lambda x,y: x + y)
outRDDExt4.saveAsTextFile("file:///home/cloudera/Desktop/python/outDir1")

The current output file looks like this: ((u'Topic', u'2017/05/08'), 15)

What I want in my file is:

u'Topic',u'2017/05/08',15

How do I get the output above (i.e. strip the tuples etc. from the current output)?

1 answer:

Answer 0: (score: 0)

You can manually unpack the tuple and join all of its elements into a string:

outRDDExt4 \
    .map(lambda row: ",".join([row[0][0], row[0][1], str(row[1])])) \
    .saveAsTextFile("file:///home/cloudera/Desktop/python/outDir1")