Combining data from multiple RDDs using Java

Date: 2018-05-10 18:39:24

Tags: java csv apache-spark rdd

I have 3 CSV files as shown below. I am trying to create RDDs from them and combine those RDDs into a single final output that I can apply filters to. I don't know where to start with this. Any suggestions?

JavaRDD<String> file1 = sc.textFile("D:\\tmp\\file1.csv");
JavaRDD<String> file2 = sc.textFile("D:\\tmp\\file2.csv");
JavaRDD<String> file3 = sc.textFile("D:\\tmp\\file3.csv");

JavaRDD<String> combRDD = file1.union(file2).union(file3); // doesn't give the expected output

CSV file1

"user","source_ip","action","type"
"abc","10.0.0.1","login","ONE"
"xyz","10.0.1.1","login","ONE"
"abc","10.0.0.1","playing","ONE"
"def","10.1.0.1","login","ONE"

CSV file2

"user","url","type"
"abc","/test","TWO"
"xyz","/wonder","TWO"

CSV file3

"user","total_time","type","status"
"abc","5min","THREE","true"
"xyz","2min","THREE","fail"

Final expected output

"user","source_ip","action","type","url","total_time","status"
"abc","10.0.0.1","login","ONE","","",""
"xyz","10.0.1.1","login","ONE","","",""
"abc","10.0.0.1","playing","ONE","","",""
"def","10.1.0.1","login","ONE","","",""
"abc","","","TWO","/test","",""
"xyz","","","TWO","/wonder","",""
"abc","","","THREE","","5min","true"
"xyz","","","THREE","","2min","fail"

Each CSV file is generated daily in the same format, so I would like to read them all from a specific folder using *.csv to build the RDDs.

1 Answer:

Answer 0 (score: 0)

Assuming the SparkSession object is spark:

spark.read().option("header", "true").csv("file1.csv").join(
  spark.read().option("header", "true").csv("file2.csv"), "user"
).join(
  spark.read().option("header", "true").csv("file3.csv"), "user"
).write().csv("some_output");
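Note that `join(..., "user")` is an inner join: it keeps only users present in every joined file and places each file's columns side by side in one row per user, rather than stacking rows as in the expected output above. A minimal sketch of that inner-join semantics in plain Java (no Spark; class and method names are hypothetical, and duplicate keys are omitted for brevity, whereas Spark emits one row per matching pair):

```java
import java.util.*;

public class UserJoin {
    // Inner join of two tables keyed by "user": each value list holds
    // the row's remaining (non-key) columns.
    static List<List<String>> joinOnUser(Map<String, List<String>> left,
                                         Map<String, List<String>> right) {
        List<List<String>> joined = new ArrayList<>();
        for (Map.Entry<String, List<String>> e : left.entrySet()) {
            List<String> match = right.get(e.getKey());
            if (match == null) continue; // inner join drops unmatched users
            List<String> row = new ArrayList<>();
            row.add(e.getKey());          // the shared "user" key
            row.addAll(e.getValue());     // left-side columns
            row.addAll(match);            // right-side columns
            joined.add(row);
        }
        return joined;
    }

    public static void main(String[] args) {
        Map<String, List<String>> file1 = Map.of(
            "abc", List.of("10.0.0.1", "login"),
            "def", List.of("10.1.0.1", "login"));
        Map<String, List<String>> file2 = Map.of(
            "abc", List.of("/test"));
        // "def" has no match in file2, so it is dropped.
        System.out.println(joinOnUser(file1, file2));
    }
}
```

If the stacked layout from the question is what you need, a schema-aligned union (padding missing columns) fits better than a join; in the DataFrame API this can be done with `unionByName` after adding the missing columns to each DataFrame.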