Spark Scala: joining RDDs between two datasets

Time: 2017-12-18 22:05:01

Tags: scala apache-spark join rdd

Suppose I have two datasets as follows:

Dataset 1:

id, name, score
1, Bill, 200
2, Bew, 23
3, Amy, 44
4, Ramond, 68

Dataset 2:

id,message
1, i love Bill
2, i hate Bill
3, Bew go go !
4, Amy is the best
5, Ramond is the worst
6, Bill go go
7, Bill i love ya
8, Ramond is Bad
9, Amy is great

I want to join the two datasets above and, using the names from dataset1, count how many times each name appears in the messages of dataset2. The result should be:

Bill, 4
Ramond, 2 
..
..

I managed to join the two together, but I don't know how to count how many times each person appears.

Any suggestions would be greatly appreciated.

EDIT: my join code:

val rdd  = sc.textFile("dataset1")
val rdd2 = sc.textFile("dataset2")

// key both datasets by their id column
val rddPair1 = rdd.map { x =>
  val data = x.split(",")
  (data(0), data(1))
}
val rddPair2 = rdd2.map { x =>
  val data = x.split(",")
  (data(0), data(1))
}

rddPair1.join(rddPair2).collect().foreach { f =>
  println(f._1 + " " + f._2._1 + " " + f._2._2)
}
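
Note that this joins on the id columns, which do not actually relate the names to the messages. For reference, here is a minimal RDD-only sketch of the counting itself, assuming the file layout shown above (header lines skipped):

// Pair every name with every message, keep pairs where the name occurs
// in the message, then count per name.
val names = sc.textFile("dataset1")
  .filter(!_.startsWith("id"))       // drop the header line
  .map(_.split(",")(1).trim)         // keep the name column

val messages = sc.textFile("dataset2")
  .filter(!_.startsWith("id"))
  .map(_.split(",")(1).trim)         // keep the message column

val counts = names.cartesian(messages)
  .filter { case (name, msg) => msg.contains(name) }
  .map { case (name, _) => (name, 1) }
  .reduceByKey(_ + _)

counts.collect().foreach { case (name, n) => println(s"$name, $n") }

The cartesian compares every name against every message, which is expensive on large data; this is one reason the dataframe approach in the answer below is simpler.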

1 Answer:

Answer 0 (score: 2):

Implementing the solution you want with RDDs would be complicated. Use dataframes instead.

The first step is to read the two files you have into dataframes, as shown below:

val df1 = sqlContext.read.format("com.databricks.spark.csv")
  .option("header", true)
  .load("dataset1")
val df2 = sqlContext.read.format("com.databricks.spark.csv")
  .option("header", true)
  .load("dataset2")

You should then have:

df1
+---+------+-----+
|id |name  |score|
+---+------+-----+
|1  |Bill  |200  |
|2  |Bew   |23   |
|3  |Amy   |44   |
|4  |Ramond|68   |
+---+------+-----+

df2
+---+-------------------+
|id |message            |
+---+-------------------+
|1  |i love Bill        |
|2  |i hate Bill        |
|3  |Bew go go !        |
|4  |Amy is the best    |
|5  |Ramond is the worst|
|6  |Bill go go         |
|7  |Bill i love ya     |
|8  |Ramond is Bad      |
|9  |Amy is great       |
+---+-------------------+

A join, a groupBy, and a count should give you the output you want:

df1.join(df2, df2("message").contains(df1("name")), "left")
  .groupBy("name")
  .count()
  .show(false)
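
The join condition df2("message").contains(df1("name")) attaches every message to each name it contains, so grouping by name counts the messages mentioning that name.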

The final output will be:

+------+-----+
|name  |count|
+------+-----+
|Ramond|2    |
|Bill  |4    |
|Amy   |2    |
|Bew   |1    |
+------+-----+
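
One thing to keep in mind: groupBy("name").count() counts rows, and with a left join a name that matches no message still produces one row (with a null message), so it would be reported with a count of 1 rather than 0. Counting the non-null messages instead avoids that; a sketch:

import org.apache.spark.sql.functions.count

// count(column) ignores nulls, so unmatched names are reported as 0.
df1.join(df2, df2("message").contains(df1("name")), "left")
  .groupBy("name")
  .agg(count(df2("message")).as("count"))
  .show(false)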