java.sql.SQLException: Unrecognized SQL type -102 when connecting to an Oracle database from Apache Spark

Date: 2019-06-16 14:33:58

Tags: sql oracle scala apache-spark jdbc

I am trying to load a remote Oracle database table into the Apache Spark shell.

This is how I launch the Spark shell:

./spark-shell --driver-class-path ../jars/ojdbc6.jar --jars ../jars/ojdbc6.jar --master local

I then get a Scala prompt, where I try to load the Oracle database table as shown below (I am using a custom JDBC URL):

val jdbcDF = spark.read
  .format("jdbc")
  .option("url", "jdbc:oracle:thin:@(DESCRIPTION=(ADDRESS_LIST=(ADDRESS=(PROTOCOL=WHATEVER)(HOST=myDummyHost.com)(PORT=xxxx)))(CONNECT_DATA=(SERVICE_NAME=dummy)(INSTANCE_NAME=dummyKaMummy)(UR=A)(SERVER=DEDICATED)))")
  .option("dbtable", "THE_DUMMY_TABLE")
  .option("user", "DUMMY_USER")
  .option("password", "DUMMYPASSWORD")
  .option("driver", "oracle.jdbc.driver.OracleDriver")
  .load()

(I have replaced my employer's data with dummy values.)

Then I get this error:

java.sql.SQLException: Unrecognized SQL type -102
  at org.apache.spark.sql.execution.datasources.jdbc.JdbcUtils$.org$apache$spark$sql$execution$datasources$jdbc$JdbcUtils$$getCatalystType(JdbcUtils.scala:246)
  at org.apache.spark.sql.execution.datasources.jdbc.JdbcUtils$$anonfun$8.apply(JdbcUtils.scala:316)
  at org.apache.spark.sql.execution.datasources.jdbc.JdbcUtils$$anonfun$8.apply(JdbcUtils.scala:316)
  at scala.Option.getOrElse(Option.scala:121)
  at org.apache.spark.sql.execution.datasources.jdbc.JdbcUtils$.getSchema(JdbcUtils.scala:315)
  at org.apache.spark.sql.execution.datasources.jdbc.JDBCRDD$.resolveTable(JDBCRDD.scala:63)
  at org.apache.spark.sql.execution.datasources.jdbc.JDBCRelation$.getSchema(JDBCRelation.scala:210)
  at org.apache.spark.sql.execution.datasources.jdbc.JdbcRelationProvider.createRelation(JdbcRelationProvider.scala:35)
  at org.apache.spark.sql.execution.datasources.DataSource.resolveRelation(DataSource.scala:318)
  at org.apache.spark.sql.DataFrameReader.loadV1Source(DataFrameReader.scala:223)
  at org.apache.spark.sql.DataFrameReader.load(DataFrameReader.scala:211)
  at org.apache.spark.sql.DataFrameReader.load(DataFrameReader.scala:167)
  ... 49 elided

I checked whether the quoting was the problem, but it is not.

Can somebody save my life, please?

1 Answer:

Answer 0 (score: 1)

The problem is an incompatible field in the database. If you cannot modify the database but still want to read from it, the solution is to ignore the specific column (in my case it was a field of type geography). With the help of How to select specific columns through Spark JDBC?, here is a solution in pyspark (the Scala solution would be similar):

df = spark.read.jdbc(url=connectionString, table="(select colName from Table) as CompatibleTable", properties=properties)
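
Since the question itself is in Scala, here is a minimal Scala sketch of the same workaround, reusing the dummy names from the question; connectionString stands for the same custom JDBC URL used above, and colName is a placeholder for whichever columns are actually compatible. Spark puts the dbtable value into the FROM clause, so a parenthesized subquery with an alias works. Note that Oracle does not accept the AS keyword for table aliases, so the alias is given bare:

val jdbcDF = spark.read
  .format("jdbc")
  .option("url", connectionString) // the same custom JDBC URL as above
  // Select only the compatible columns inside a derived table;
  // the unsupported column is simply never seen by Spark.
  .option("dbtable", "(SELECT colName FROM THE_DUMMY_TABLE) CompatibleTable")
  .option("user", "DUMMY_USER")
  .option("password", "DUMMYPASSWORD")
  .option("driver", "oracle.jdbc.driver.OracleDriver")
  .load()

To find which column carries the unsupported type, a small plain-JDBC sketch like the following can print each column's JDBC type code (placeholder credentials again; -102 is an Oracle-specific code outside java.sql.Types, commonly Oracle's TIMESTAMP WITH LOCAL TIME ZONE):

import java.sql.DriverManager

val conn = DriverManager.getConnection(connectionString, "DUMMY_USER", "DUMMYPASSWORD")
try {
  // DatabaseMetaData.getColumns returns one row per column, including its JDBC type code
  val cols = conn.getMetaData.getColumns(null, null, "THE_DUMMY_TABLE", null)
  while (cols.next()) {
    println(s"${cols.getString("COLUMN_NAME")} -> ${cols.getInt("DATA_TYPE")}")
  }
} finally conn.close()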