Does the Spark 2.x release break SQL join syntax?

Date: 2018-01-08 20:28:59

Tags: sql apache-spark apache-spark-sql

When I submit a complex join SQL query, I usually give one or both operands a shorter name to make my intent clear. For example, the following two queries:

SELECT *
FROM transactions
JOIN accounts ON transactions.cardnumber=accounts.cardnumber

SELECT *
FROM transactions AS left
JOIN accounts ON left.cardnumber=accounts.cardnumber

should have the same effect.

I tested both queries on Spark 1.6.3 and both worked. But after moving to Spark 2.2.1, the second query raises the following error:

org.apache.spark.sql.AnalysisException: cannot resolve '`left.cardnumber`' given input columns: [name, sku, sin, accountnumber, purchase_date, sin, cardnumber, purchase_date, cardnumber, amount, sku, name, amount]; line 4 pos 17;
'Project [*]
+- 'Join LeftOuter, ('left.cardnumber = cardnumber#77)
   :- SubqueryAlias AS
   :  +- SubqueryAlias transactions
   :     +- SerializeFromObject [staticinvoke(class org.apache.spark.unsafe.types.UTF8String, StringType, fromString, assertnotnull(assertnotnull(input[0, com.schedule1.datapassports.spark.TestBeans$Transaction, true])).cardnumber, true) AS cardnumber#53, staticinvoke(class org.apache.spark.unsafe.types.UTF8String, StringType, fromString, assertnotnull(assertnotnull(input[0, com.schedule1.datapassports.spark.TestBeans$Transaction, true])).name, true) AS name#54, assertnotnull(assertnotnull(input[0, com.schedule1.datapassports.spark.TestBeans$Transaction, true])).amount AS amount#55, staticinvoke(class org.apache.spark.unsafe.types.UTF8String, StringType, fromString, assertnotnull(assertnotnull(input[0, com.schedule1.datapassports.spark.TestBeans$Transaction, true])).purchase_date, true) AS purchase_date#56, staticinvoke(class org.apache.spark.unsafe.types.UTF8String, StringType, fromString, assertnotnull(assertnotnull(input[0, com.schedule1.datapassports.spark.TestBeans$Transaction, true])).sin, true) AS sin#57, staticinvoke(class org.apache.spark.unsafe.types.UTF8String, StringType, fromString, assertnotnull(assertnotnull(input[0, com.schedule1.datapassports.spark.TestBeans$Transaction, true])).sku, true) AS sku#58]
   :        +- ExternalRDD [obj#52]
   +- SubqueryAlias accounts
      +- SerializeFromObject [staticinvoke(class org.apache.spark.unsafe.types.UTF8String, StringType, fromString, assertnotnull(assertnotnull(input[0, com.schedule1.datapassports.spark.TestBeans$Account, true])).accountnumber, true) AS accountnumber#76, staticinvoke(class org.apache.spark.unsafe.types.UTF8String, StringType, fromString, assertnotnull(assertnotnull(input[0, com.schedule1.datapassports.spark.TestBeans$Account, true])).cardnumber, true) AS cardnumber#77, staticinvoke(class org.apache.spark.unsafe.types.UTF8String, StringType, fromString, assertnotnull(assertnotnull(input[0, com.schedule1.datapassports.spark.TestBeans$Account, true])).name, true) AS name#78, assertnotnull(assertnotnull(input[0, com.schedule1.datapassports.spark.TestBeans$Account, true])).amount AS amount#79, staticinvoke(class org.apache.spark.unsafe.types.UTF8String, StringType, fromString, assertnotnull(assertnotnull(input[0, com.schedule1.datapassports.spark.TestBeans$Account, true])).purchase_date, true) AS purchase_date#80, staticinvoke(class org.apache.spark.unsafe.types.UTF8String, StringType, fromString, assertnotnull(assertnotnull(input[0, com.schedule1.datapassports.spark.TestBeans$Account, true])).sin, true) AS sin#81, staticinvoke(class org.apache.spark.unsafe.types.UTF8String, StringType, fromString, assertnotnull(assertnotnull(input[0, com.schedule1.datapassports.spark.TestBeans$Account, true])).sku, true) AS sku#82]
         +- ExternalRDD [obj#75]

    at org.apache.spark.sql.catalyst.analysis.package$AnalysisErrorAt.failAnalysis(package.scala:42)
    at org.apache.spark.sql.catalyst.analysis.CheckAnalysis$$anonfun$checkAnalysis$1$$anonfun$apply$2.applyOrElse(CheckAnalysis.scala:88)
    at org.apache.spark.sql.catalyst.analysis.CheckAnalysis$$anonfun$checkAnalysis$1$$anonfun$apply$2.applyOrElse(CheckAnalysis.scala:85)
    at org.apache.spark.sql.catalyst.trees.TreeNode$$anonfun$transformUp$1.apply(TreeNode.scala:289)
    at org.apache.spark.sql.catalyst.trees.TreeNode$$anonfun$transformUp$1.apply(TreeNode.scala:289)
    at org.apache.spark.sql.catalyst.trees.CurrentOrigin$.withOrigin(TreeNode.scala:70)
    at org.apache.spark.sql.catalyst.trees.TreeNode.transformUp(TreeNode.scala:288)
    at org.apache.spark.sql.catalyst.trees.TreeNode$$anonfun$3.apply(TreeNode.scala:286)
    at org.apache.spark.sql.catalyst.trees.TreeNode$$anonfun$3.apply(TreeNode.scala:286)
    at org.apache.spark.sql.catalyst.trees.TreeNode$$anonfun$4.apply(TreeNode.scala:306)
    at org.apache.spark.sql.catalyst.trees.TreeNode.mapProductIterator(TreeNode.scala:187)
    at org.apache.spark.sql.catalyst.trees.TreeNode.mapChildren(TreeNode.scala:304)
    at org.apache.spark.sql.catalyst.trees.TreeNode.transformUp(TreeNode.scala:286)
    at org.apache.spark.sql.catalyst.plans.QueryPlan$$anonfun$transformExpressionsUp$1.apply(QueryPlan.scala:268)
    at org.apache.spark.sql.catalyst.plans.QueryPlan$$anonfun$transformExpressionsUp$1.apply(QueryPlan.scala:268)
    at org.apache.spark.sql.catalyst.plans.QueryPlan.transformExpression$1(QueryPlan.scala:279)
    at org.apache.spark.sql.catalyst.plans.QueryPlan.org$apache$spark$sql$catalyst$plans$QueryPlan$$recursiveTransform$1(QueryPlan.scala:289)
    at org.apache.spark.sql.catalyst.plans.QueryPlan.org$apache$spark$sql$catalyst$plans$QueryPlan$$recursiveTransform$1(QueryPlan.scala:290)
    at org.apache.spark.sql.catalyst.plans.QueryPlan$$anonfun$6.apply(QueryPlan.scala:298)
    at org.apache.spark.sql.catalyst.trees.TreeNode.mapProductIterator(TreeNode.scala:187)
    at org.apache.spark.sql.catalyst.plans.QueryPlan.mapExpressions(QueryPlan.scala:298)
    at org.apache.spark.sql.catalyst.plans.QueryPlan.transformExpressionsUp(QueryPlan.scala:268)
    at org.apache.spark.sql.catalyst.analysis.CheckAnalysis$$anonfun$checkAnalysis$1.apply(CheckAnalysis.scala:85)
    at org.apache.spark.sql.catalyst.analysis.CheckAnalysis$$anonfun$checkAnalysis$1.apply(CheckAnalysis.scala:78)
    at org.apache.spark.sql.catalyst.trees.TreeNode.foreachUp(TreeNode.scala:127)
    at org.apache.spark.sql.catalyst.trees.TreeNode$$anonfun$foreachUp$1.apply(TreeNode.scala:126)
    at org.apache.spark.sql.catalyst.trees.TreeNode$$anonfun$foreachUp$1.apply(TreeNode.scala:126)
    at scala.collection.immutable.List.foreach(List.scala:392)
    at org.apache.spark.sql.catalyst.trees.TreeNode.foreachUp(TreeNode.scala:126)
    at org.apache.spark.sql.catalyst.analysis.CheckAnalysis$class.checkAnalysis(CheckAnalysis.scala:78)
    at org.apache.spark.sql.catalyst.analysis.Analyzer.checkAnalysis(Analyzer.scala:91)
    at org.apache.spark.sql.execution.QueryExecution.assertAnalyzed(QueryExecution.scala:52)
    at org.apache.spark.sql.Dataset$.ofRows(Dataset.scala:67)
    at org.apache.spark.sql.SparkSession.sql(SparkSession.scala:632)
    at org.apache.spark.sql.SQLContext.sql(SQLContext.scala:691)

What causes this failure, and how can I fix it?

1 Answer:

Answer 0 (score: 2):

The problem is that you used a reserved keyword (LEFT) as the alias, so the query is interpreted as:

SELECT * 
FROM transactions AS ``
LEFT JOIN accounts ON left.cardnumber = accounts.cardnumber

with an empty alias. It is effectively the same as the following query:

SELECT * 
FROM transactions AS ``
LEFT JOIN accounts ON ``.cardnumber = accounts.cardnumber

which, while not exactly equivalent, works just fine. This is standard SQL behavior, not a bug.
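
This is not from the original answer, but a minimal sketch of how the reinterpretation can be reproduced in a spark-shell session; the empty single-column temp views below are hypothetical stand-ins for the real tables:

import org.apache.spark.sql.AnalysisException
import spark.implicits._  // already in scope in spark-shell

// Hypothetical stand-ins for the real tables.
Seq[Int]().toDF("cardnumber").createOrReplaceTempView("transactions")
Seq[Int]().toDF("cardnumber").createOrReplaceTempView("accounts")

// Without the alias the query analyzes fine.
spark.sql("""SELECT *
             FROM transactions
             JOIN accounts ON transactions.cardnumber = accounts.cardnumber""")

// With the alias, the parser reads "left" as the start of LEFT JOIN,
// so the qualifier left.cardnumber can no longer be resolved.
try {
  spark.sql("""SELECT *
               FROM transactions AS left
               JOIN accounts ON left.cardnumber = accounts.cardnumber""")
} catch {
  case e: AnalysisException => println(e.getMessage)
}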

Just pick a different name and everything will work:

import spark.implicits._  // already in scope in spark-shell

Seq[Int]().toDF("cardnumber").createOrReplaceTempView("accounts")
Seq[Int]().toDF("cardnumber").createOrReplaceTempView("transactions")

spark.sql("""SELECT *
             FROM transactions AS l
             JOIN accounts AS r
             ON l.cardnumber = r.cardnumber""")
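
The original answer stays in plain SQL; as an additional sketch, the same join can be written with the DataFrame API, where alias names are ordinary strings handed to alias() and cannot collide with SQL keywords (this assumes the two temp views registered above):

import org.apache.spark.sql.functions.col

// alias() takes a plain string, so there is no keyword ambiguity here.
val l = spark.table("transactions").alias("l")
val r = spark.table("accounts").alias("r")

val joined = l.join(r, col("l.cardnumber") === col("r.cardnumber"))
joined.explain()  // or joined.show() once the views contain data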

Quoting the alias works as well:

spark.sql("""SELECT *
             FROM transactions AS `left`
             JOIN accounts AS r
             ON left.cardnumber = r.cardnumber""")
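
Note that quoting only affects how the identifier is parsed in the alias position; presumably the same collision would occur with the other join keywords (RIGHT, FULL, CROSS) used as bare aliases, so picking a non-keyword name is usually the simpler fix.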