How can I diff two tables using Spark SQL?

Date: 2016-12-06 05:07:41

Tags: mysql sql-server apache-spark-sql

I need to use Spark SQL to find the differences between two tables. I found a SQL Server answer that does this:

(SELECT *
 FROM   table1
 EXCEPT
 SELECT *
 FROM   table2)
UNION ALL
(SELECT *
 FROM   table2
 EXCEPT
 SELECT *
 FROM   table1) 

I hope someone can show me how to express this SQL Server query in Spark SQL. (Don't worry about specific columns, just use *.)

1 Answer:

Answer 0 (score: 2)

You can do it like this:

scala> val df1=sc.parallelize(Seq((1,2),(3,4))).toDF("a","b")
df1: org.apache.spark.sql.DataFrame = [a: int, b: int]

scala> val df2=sc.parallelize(Seq((1,2),(5,6))).toDF("a","b")
df2: org.apache.spark.sql.DataFrame = [a: int, b: int]

scala> df1.createTempView("table1")

scala> df2.createTempView("table2")

scala> spark.sql("select * from table1 EXCEPT select * from table2").show
+---+---+                                                                       
|  a|  b|
+---+---+
|  3|  4|
+---+---+


scala> spark.sql("(select * from table2 EXCEPT select * from table1) UNION ALL (select * from table1 EXCEPT select * from table2)").show
+---+---+                                                                       
|  a|  b|
+---+---+
|  5|  6|
|  3|  4|
+---+---+
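The same symmetric difference can also be written with the DataFrame API instead of a SQL string, using the `except` and `union` methods that Spark DataFrames provide. A minimal sketch, reusing `df1` and `df2` from the transcript above:

```scala
// Symmetric difference via the DataFrame API:
// rows present in df1 but not in df2, plus rows present in df2 but not in df1.
val onlyInDf1 = df1.except(df2)   // (3, 4)
val onlyInDf2 = df2.except(df1)   // (5, 6)
val diff = onlyInDf1.union(onlyInDf2)
diff.show()
```

Note that `except` deduplicates like SQL's EXCEPT, so this matches the query's semantics row-for-row.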

Note: in your case, you would create the DataFrames from JDBC calls, then register the temp views and run the same query.
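A sketch of that JDBC setup, using Spark's `DataFrameReader.jdbc`; the connection URL, credentials, and table names here are placeholders, not taken from the question:

```scala
import java.util.Properties

// Hypothetical connection details -- replace with your own.
val jdbcUrl = "jdbc:mysql://host:3306/mydb"
val props = new Properties()
props.setProperty("user", "username")       // placeholder
props.setProperty("password", "password")   // placeholder

// Load both tables over JDBC into DataFrames.
val df1 = spark.read.jdbc(jdbcUrl, "table1", props)
val df2 = spark.read.jdbc(jdbcUrl, "table2", props)

// Register temp views, then run the same EXCEPT / UNION ALL query.
df1.createOrReplaceTempView("table1")
df2.createOrReplaceTempView("table2")

spark.sql(
  "(SELECT * FROM table1 EXCEPT SELECT * FROM table2) " +
  "UNION ALL " +
  "(SELECT * FROM table2 EXCEPT SELECT * FROM table1)"
).show()
```

`createOrReplaceTempView` is used here instead of `createTempView` so the snippet can be re-run in the same session without a "view already exists" error.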