scala - Joining Spark RDDs by a function of the key

Asked: 2018-06-08 23:56:55

Tags: scala apache-spark join rdd

I'm running Apache Spark 2.11 with Scala. Is there a way to join two RDDs on a function of the key?

Specifically, if I have an RDD [(K,V1),(K-x,V2),(K+x,V3)], I want to produce an RDD [(K,(V1,V2)),(K-x,(V2)),(K+x,(V1,V3))], where the join function is f(K) = K-x.
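
For concreteness, a minimal sketch of the input and the desired output, assuming hypothetical values x = 5 and K = 100:

// Hypothetical input: x = 5, K = 100
val input = sc.parallelize(Seq((100, "V1"), (95, "V2"), (105, "V3")))
// Desired output:
//   (100, (V1, V2))  -- key 100 joined with f(100) = 95
//   (95,  (V2))      -- f(95) = 90 matches nothing
//   (105, (V1, V3))  -- key 105 joined with f(105) = 100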

3 Answers:

Answer 0 (score: 0)

I'm not sure about your input and output, but I hope the following examples help.

Example 1

import org.apache.spark.sql.functions._ 
import sqlContext.implicits._



val df1 = Seq(("foo", "bar","too","aaa"), ("bar", "bar","aaa","foo"), ("aaa", "bbb","ccc","ddd")).toDF("k1","v1","v2","v3")


val df2 = Seq(("aaa", "bbb","ddd"), ("www", "eee","rrr"), ("jjj", "rrr","www")).toDF("k1","v1","v2")

// Optionally add a surrogate id column before joining, e.g.:
// df1.withColumn("id", monotonically_increasing_id())
// df2.withColumn("id", monotonically_increasing_id())

df1.show()

df2.show()

val df3 = df2.join(df1, Seq("k1"), "outer")
// You can use whichever join type fits your requirements: outer, inner, left, right, etc.


df3.show()

Result:

+---+---+---+---+
| k1| v1| v2| v3|
+---+---+---+---+
|foo|bar|too|aaa|
|bar|bar|aaa|foo|
|aaa|bbb|ccc|ddd|
+---+---+---+---+

+---+---+---+
| k1| v1| v2|
+---+---+---+
|aaa|bbb|ddd|
|www|eee|rrr|
|jjj|rrr|www|
+---+---+---+

+---+----+----+----+----+----+
| k1|  v1|  v2|  v1|  v2|  v3|
+---+----+----+----+----+----+
|jjj| rrr| www|null|null|null|
|aaa| bbb| ddd| bbb| ccc| ddd|
|bar|null|null| bar| aaa| foo|
|foo|null|null| bar| too| aaa|
|www| eee| rrr|null|null|null|
+---+----+----+----+----+----+

import org.apache.spark.sql.functions._
import sqlContext.implicits._
df1: org.apache.spark.sql.DataFrame = [k1: string, v1: string ... 2 more fields]
df2: org.apache.spark.sql.DataFrame = [k1: string, v1: string ... 1 more field]
df3: org.apache.spark.sql.DataFrame = [k1: string, v1: string ... 4 more fields]
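
Note that the outer join above leaves duplicate column names (v1 and v2 occur once per side). One way to disambiguate is to alias the overlapping columns before joining; a sketch with hypothetical alias names:

// Rename overlapping columns before the join (aliases are illustrative)
val left  = df2.select($"k1", $"v1".as("left_v1"), $"v2".as("left_v2"))
val right = df1.select($"k1", $"v1".as("right_v1"), $"v2".as("right_v2"), $"v3")
val df3Named = left.join(right, Seq("k1"), "outer")
df3Named.show()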

Example 2:

import org.apache.spark.sql.functions._ 
import sqlContext.implicits._



val df12 = sc.parallelize(Seq(("1001","vaquar"),("2001","khan1"))).toDF("Key" ,"Value")
val df22 = sc.parallelize(Seq(("1001","Noman"),("2001","khan2"))).toDF("Key" ,"Value")

df12.show()
df22.show()

val df33 = df22.join(df12, Seq("Key"), "left_outer")

df33.show()

Result:

+----+------+
| Key| Value|
+----+------+
|1001|vaquar|
|2001| khan1|
+----+------+

+----+-----+
| Key|Value|
+----+-----+
|1001|Noman|
|2001|khan2|
+----+-----+

+----+-----+------+
| Key|Value| Value|
+----+-----+------+
|2001|khan2| khan1|
|1001|Noman|vaquar|
+----+-----+------+

import org.apache.spark.sql.functions._
import sqlContext.implicits._
df12: org.apache.spark.sql.DataFrame = [Key: string, Value: string]
df22: org.apache.spark.sql.DataFrame = [Key: string, Value: string]
df33: org.apache.spark.sql.DataFrame = [Key: string, Value: string ... 1 more field]
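
Since the question asks about RDDs, the joined DataFrame can be converted back into a pair RDD if needed; a minimal sketch, relying on df33's column order (Key, Value, Value):

// Turn the joined DataFrame back into an RDD of (key, (value1, value2))
val joinedRdd = df33.rdd.map(row => (row.getString(0), (row.getString(1), row.getString(2))))
joinedRdd.collect.foreach(println)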

Answer 1 (score: 0)

Here is an example:

// Just to simulate a functional join on the key
val appendZero = (id: String) => id + "0"

val rdd1 = sc.parallelize(Seq(("100","Tom"),("200","Rick")))
val rdd2 = sc.parallelize(Seq(("1000","phone1000"),("2000","phone2000")))
val rdd3 = sc.parallelize(Seq(("1000","addr1000"),("2000","addr2000")))

rdd1.map { case (k, v) => (appendZero(k), v) }.join(rdd2).join(rdd3).map {
  case (k, ((v1, v2), v3)) => (k, (v1, v2), (k, v2), (k, v1, v3))
}.collect.foreach(println)

Output:

(2000,(Rick,phone2000),(2000,phone2000),(2000,Rick,addr2000))
(1000,(Tom,phone1000),(1000,phone1000),(1000,Tom,addr1000)) 
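
The same pattern carries over to the question's integer keys with f(k) = k - x; a hypothetical adaptation of the appendZero trick (data is made up):

// Remap the left RDD's keys through f before joining
val x = 5
val rddA = sc.parallelize(Seq((100, "V1"), (95, "V2")))
val rddB = sc.parallelize(Seq((95, "W")))
rddA.map { case (k, v) => (k - x, v) }.join(rddB).collect.foreach(println)
// prints: (95,(V1,W))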

Answer 2 (score: 0)

If I understand your requirement correctly, this can be achieved with leftOuterJoin on the inverse of the (invertible) function _ - x, as in the following example:

val x = 5
val f: (Int) => Int = _ - x
val fInverse: (Int) => Int = _ + x

val rdd = sc.parallelize(Seq(
  (100, "V1"),
  (100 - x, "V2"),
  (100 + x, "V3")
))

rdd.
  leftOuterJoin(rdd.map{ case (k, v) => (fInverse(k), v) }).
  map{ case(k, (u, v)) => (k, (u, v.getOrElse("")))}.
  collect
// res1: Array[(Int, (String, String))] = Array((105,(V3,V1)), (100,(V1,V2)), (95,(V2,"")))
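
If you also want rows for keys that appear only on the remapped side (here fInverse(105) = 110 has no counterpart among the original keys), a fullOuterJoin variant works; a sketch under the same definitions as above:

rdd.
  fullOuterJoin(rdd.map{ case (k, v) => (fInverse(k), v) }).
  map{ case (k, (u, v)) => (k, (u.getOrElse(""), v.getOrElse("")))}.
  collect
// expected (order may vary): (105,(V3,V1)), (100,(V1,V2)), (95,(V2,"")), (110,("",V3))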