SQL syntax error when joining DataFrames in Spark 2.0.1

Asked: 2016-10-18 16:24:29

Tags: scala apache-spark spark-dataframe

Has anyone else run into this problem, and do you have ideas on how to solve it?

I have been trying to update my code to use Spark 2.0.1 and Scala 2.11. Everything worked happily in Spark 1.6.0 with Scala 2.10. I have a simple DataFrame-to-DataFrame inner join that returns an error. The data comes from AWS RDS Aurora. Note that the foo DataFrame below actually has 92 columns, not the two I show; the problem persists even with only two columns.

Relevant information:

DataFrame 1 with schema

foo.show()

+--------------------+------+
|      Transaction ID|   BIN|
+--------------------+------+
|               bbBW0|134769|
|               CyX50|173622|
+--------------------+------+

foo.printSchema()

root
|-- Transaction ID: string (nullable = true)
|-- BIN: string (nullable = true)

DataFrame 2 with schema

bar.show()

+--------------------+-----------------+-------------------+
|              TranId|       Amount_USD|     Currency_Alpha|
+--------------------+-----------------+-------------------+
|               bbBW0|            10.99|                USD|
|               CyX50|           438.53|                USD|
+--------------------+-----------------+-------------------+

bar.printSchema()

root
|-- TranId: string (nullable = true)
|-- Amount_USD: string (nullable = true)
|-- Currency_Alpha: string (nullable = true)

Join with explain

Join the DataFrames:

val asdf = foo.join(bar, foo("Transaction ID") === bar("TranId"))
foo.join(bar, foo("Transaction ID") === bar("TranId")).explain()

== Physical Plan ==
*BroadcastHashJoin [Transaction ID#0], [TranId#202], Inner, BuildRight
:- *Scan JDBCRelation((SELECT

        ...
        I REMOVED A BUNCH OF LINES FROM THIS PRINT OUT
        ...

      ) as x) [Transaction ID#0,BIN#8] PushedFilters: [IsNotNull(Transaction ID)], ReadSchema: struct<Transaction ID:string,BIN:string>
+- BroadcastExchange HashedRelationBroadcastMode(List(input[0, string, false]))
   +- *Filter isnotnull(TranId#202)
      +- InMemoryTableScan [TranId#202, Amount_USD#203, Currency_Alpha#204], [isnotnull(TranId#202)]
         :  +- InMemoryRelation [TranId#202, Amount_USD#203, Currency_Alpha#204], true, 10000, StorageLevel(disk, memory, deserialized, 1 replicas)
         :     :  +- Scan ExistingRDD[TranId#202,Amount_USD#203,Currency_Alpha#204]

The error I now get is:

16/10/18 11:36:50 ERROR Executor: Exception in task 0.0 in stage 6.0 (TID 6)
java.sql.SQLSyntaxErrorException: You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near 'ID IS NOT NULL)' at line 54

The full stack trace can be seen here: http://pastebin.com/C9bg2HFt

Nowhere in my code, nor in the JDBC query that pulls data from the database, do I have `ID IS NOT NULL)`. I spent a lot of time googling and found a Spark commit that adds a null filter to the join's query plan. Here is the commit: https://git1-us-west.apache.org/repos/asf?p=spark.git;a=commit;h=ef770031
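Based on the plan above, the failure appears to come from the pushed-down filter `IsNotNull(Transaction ID)`: when Spark renders it into the JDBC query, the space-containing column name is not quoted, so MySQL receives something like `Transaction ID IS NOT NULL)` and chokes at `ID IS NOT NULL)`. A minimal sketch of one possible workaround (assuming dropping the space from the column name is acceptable; `fooSafe` and `joined` are hypothetical names, not from the original post):

```scala
// Sketch, not the asker's code: rename the space-containing column right
// after the JDBC read, before any join, so the IsNotNull filter that Spark
// pushes down to MySQL targets a name that needs no identifier quoting.
val fooSafe = foo.withColumnRenamed("Transaction ID", "TransactionId")

val joined = fooSafe.join(bar, fooSafe("TransactionId") === bar("TranId"))
```

The rename affects only the DataFrame's schema, not the underlying Aurora table.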

1 answer:

Answer 0 (score: 0):

Curious whether you have tried the following:

val dfRenamed = bar.withColumnRenamed("TranId", "Transaction ID")
val newDF = foo.join(dfRenamed, Seq("Transaction ID"), "inner")
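Note that this rename pulls bar's key onto the space-containing name. If the error really stems from the unquoted `Transaction ID` in the pushed-down filter, it may be safer to rename foo's column instead, so the filter Spark pushes to MySQL refers to a space-free identifier. A hedged variant (hypothetical names, not from the original answer):

```scala
// Variant sketch: rename foo's column to match bar's, avoiding the space
// entirely so the pushed-down `TranId IS NOT NULL` is valid MySQL.
val fooRenamed = foo.withColumnRenamed("Transaction ID", "TranId")
val newDF = fooRenamed.join(bar, Seq("TranId"), "inner")
```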