Full outer join in RDD Scala Spark

Date: 2019-06-05 04:22:26

Tags: scala apache-spark rdd

I have the two files below:
file1

0000003 杉山______ 26 F
0000005 崎村______ 50 F
0000007 梶川______ 42 F

file2

0000005 82 79 16 21 80
0000001 46 39 8 5 21
0000004 58 71 20 10 6
0000009 60 89 33 18 6
0000003 30 50 71 36 30
0000007 50 2 33 15 62

Now, I want to join the rows that have the same value in field 1.
I want something like this:

0000005 崎村______ 50 F 82 79 16 21 80
0000003 杉山______ 26 F 30 50 71 36 30
0000007 梶川______ 42 F 50 2 33 15 62

2 Answers:

Answer 0 (score: 0)

You can use a DataFrame join instead of an RDD join; it is easier. You can refer to my sample code below; hope it helps. I assume your data is in the same format shown above. If it is CSV or any other format, you can skip Step-2 and update Step-1 according to the data format (see the sketch right after this paragraph). If you need the output in RDD format, you can use Step-5; otherwise you can ignore it, as mentioned in the comments in the code snippet.
I modified the data (e.g. A______, B______, C______) for readability.
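As a minimal sketch of that Step-1 variant (assuming a space-delimited file and Spark 2.x; the name df1Direct and the column names are just illustrative), the csv reader can split the fields directly, making Step-2 unnecessary:

//Reading the space-delimited file with the csv reader instead of the text reader
val df1Direct = spark.read
  .option("delimiter", " ")                // one space separates the fields
  .csv("<path of file1>")
  .toDF("col1", "col2", "col3", "col4")    // name the columns as in Step-2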

//Step1: Loading file1 and file2 into corresponding DataFrames in text format

import org.apache.spark.sql.functions.{col, split}
import spark.implicits._ // enables the $"..." column syntax used below

val df1 = spark.read.format("text").load("<path of file1>")
val df2 = spark.read.format("text").load("<path of file2>")

//Step2: Splitting the single column "value" into multiple columns, including the join key

val file1 = df1
  .withColumn("col1", split($"value", " ")(0))
  .withColumn("col2", split($"value", " ")(1))
  .withColumn("col3", split($"value", " ")(2))
  .withColumn("col4", split($"value", " ")(3))
  .select("col1", "col2", "col3", "col4")

/* 
+-------+-------+----+----+                                                     
|col1   |col2   |col3|col4|
+-------+-------+----+----+
|0000003|A______|26  |F   |
|0000005|B______|50  |F   |
|0000007|C______|42  |F   |
+-------+-------+----+----+

*/

val file2 = df2
  .withColumn("col1", split($"value", " ")(0))
  .withColumn("col2", split($"value", " ")(1))
  .withColumn("col3", split($"value", " ")(2))
  .withColumn("col4", split($"value", " ")(3))
  .withColumn("col5", split($"value", " ")(4))
  .withColumn("col6", split($"value", " ")(5))
  .select("col1", "col2", "col3", "col4", "col5", "col6")

/*
+-------+----+----+----+----+----+
|col1   |col2|col3|col4|col5|col6|
+-------+----+----+----+----+----+
|0000005|82  |79  |16  |21  |80  |
|0000001|46  |39  |8   |5   |21  |
|0000004|58  |71  |20  |10  |6   |
|0000009|60  |89  |33  |18  |6   |
|0000003|30  |50  |71  |36  |30  |
|0000007|50  |2   |33  |15  |62  |
+-------+----+----+----+----+----+

*/

//Step3: alias each DataFrame so its columns can be referenced unambiguously, which improves readability

val file01 = file1.as("f1")
val file02 = file2.as("f2")

//Step4: Joining the files on the key column
file01.join(file02,col("f1.col1") === col("f2.col1"))

/*
+-------+-------+----+----+-------+----+----+----+----+----+                    
|col1   |col2   |col3|col4|col1   |col2|col3|col4|col5|col6|
+-------+-------+----+----+-------+----+----+----+----+----+
|0000005|B______|50  |F   |0000005|82  |79  |16  |21  |80  |
|0000003|A______|26  |F   |0000003|30  |50  |71  |36  |30  |
|0000007|C______|42  |F   |0000007|50  |2   |33  |15  |62  |
+-------+-------+----+----+-------+----+----+----+----+----+
*/
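The result above keeps both copies of the key column. As a sketch of a standard-API alternative (not part of the original answer), joining on the column name with a Seq keeps only a single col1, which is closer to the desired output:

//"using"-style join: the shared key column appears only once in the result
file1.join(file2, Seq("col1"))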

// Step5: if you want the data in RDD format, then you can use the below command

file01.join(file02,col("f1.col1") === col("f2.col1")).rdd.collect

/* 
Array[org.apache.spark.sql.Row] = Array([0000005,B______,50,F,0000005,82,79,16,21,80], [0000003,A______,26,F,0000003,30,50,71,36,30], [0000007,C______,42,F,0000007,50,2,33,15,62])
*/
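If you want each record as a single space-separated line, like the desired output, each Row can be flattened; a small sketch using Row.mkString:

file01.join(file02, col("f1.col1") === col("f2.col1"))
  .rdd
  .map(_.mkString(" "))   // join all fields of the Row with spaces
  .collect.foreach(println)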

Answer 1 (score: 0)

I found the solution; here is my code:

// logData1 and logData2 are assumed to be RDD[String]s, e.g. loaded via sc.textFile
val rddPair1 = logData1.map { x =>
  val data = x.split(" ")
  // key = first field; value = the remaining fields re-joined with spaces
  (data(0), data.drop(1).mkString(" "))
}

val rddPair2 = logData2.map { x =>
  val data = x.split(" ")
  (data(0), data.drop(1).mkString(" "))
}

// inner join on the key, then print "key value1 value2"
rddPair1.join(rddPair2).collect().foreach { f =>
  println(f._1 + " " + f._2._1 + " " + f._2._2)
}

Result:

0000003 杉山______ 26 F 30 50 71 36 30
0000005 崎村______ 50 F 82 79 16 21 80
0000007 梶川______ 42 F 50 2 33 15 62
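
Both code paths above are inner joins, which is what the sample output shows. Since the title asks about a full outer join, here is a minimal sketch of that variant using the standard RDD API (the "-" placeholder is just illustrative): fullOuterJoin keeps unmatched keys from both sides, wrapping each side's value in an Option.

rddPair1.fullOuterJoin(rddPair2).collect().foreach { case (key, (left, right)) =>
  // unmatched keys such as 0000001 appear with None on the missing side
  println(key + " " + left.getOrElse("-") + " " + right.getOrElse("-"))
}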