Spark: create a temporary table by running a SQL query over temporary tables

Date: 2018-06-20 13:46:20

Tags: scala apache-spark jenkins jdbc

I am using Spark, and I would like to know: how can I create a temporary table named C by running a SQL query over tables A and B?

sqlContext
   .read.json(file_name_A)
   .createOrReplaceTempView("A")

sqlContext
   .read.json(file_name_B)
   .createOrReplaceTempView("B")

val tableQuery = "(SELECT A.id, B.name FROM A INNER JOIN B ON A.id = B.fk_id) C"

sqlContext.read
   .format(SQLUtils.FORMAT_JDBC)
   .options(SQLUtils.CONFIG())
   .option("dbtable", tableQuery)
   .load()

2 Answers:

Answer 0: (score: 3)

You need to register the query result as a temporary view. Note that the "(...) C" subquery-with-alias form only works inside the JDBC "dbtable" option; when running the query through sqlContext.sql, use the plain SELECT:

val result = sqlContext.sql("SELECT A.id, B.name FROM A INNER JOIN B ON A.id = B.fk_id")
result.createOrReplaceTempView("dbtable")
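
The registered view can then be picked up again as a DataFrame (a minimal sketch; this also provides the df that the JDBC write below assumes):

// Look up the temp view as a DataFrame; this is the `df` written out over JDBC below
val df = sqlContext.table("dbtable")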

For permanent storage, you can write the data to an external database table over JDBC:

val prop = new java.util.Properties
prop.setProperty("driver", "com.mysql.jdbc.Driver")
prop.setProperty("user", "vaquar")
prop.setProperty("password", "khan") 

//jdbc mysql url - destination database is named "temp"
val url = "jdbc:mysql://localhost:3306/temp"

//destination database table 
val table = "sample_data_table"

//write data from spark dataframe to database
df.write.mode("append").jdbc(url, table, prop)
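
To verify the write, the table can be read back over the same JDBC connection (a sketch reusing the url, table, and prop defined above):

// Read the table back from MySQL as a quick sanity check
val readBack = sqlContext.read.jdbc(url, table, prop)
readBack.show()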

https://docs.databricks.com/spark/latest/data-sources/sql-databases.html

http://spark.apache.org/docs/latest/sql-programming-guide.html#saving-to-persistent-tables

Answer 1: (score: 1)

sqlContext.read.json(file_name_A).createOrReplaceTempView("A")
sqlContext.read.json(file_name_B).createOrReplaceTempView("B")
// the "(...) C" wrapper from the JDBC dbtable option is dropped; sqlContext.sql expects a plain SELECT
val tableQuery = "SELECT A.id, B.name FROM A INNER JOIN B ON A.id = B.fk_id"
sqlContext.sql(tableQuery).createOrReplaceTempView("C")

Try the code above; it will work.
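
Once C is registered, it behaves like any other temporary view; a minimal usage sketch:

sqlContext.sql("SELECT * FROM C").show()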