java.sql.SQLFeatureNotSupportedException: [Cloudera][JDBC](10220) Driver does not support this optional feature

Date: 2018-08-27 06:29:15

Tags: scala apache-spark jdbc hive

I have pasted below a small snippet of my code that tries to load data into a Hive table from a Linux box, where it fails with the error shown at the bottom. When I run the same code from a Windows machine, the data is loaded into the Hive table successfully, and I am using the same versions on both platforms.

ServerUrl=jdbc:hive2://hiveWeb.xxx.com:10000/xxxxx;principal=hive/hiveWeb.xxx.com@internal.imsxcnkm.com;SSL=1;mapred.job.queue.name=co9l;AuthMech=3;user=xxxxx;password=xxxx

  import org.apache.spark.sql.execution.datasources.jdbc.{JDBCOptions, JdbcUtils}

  // Connection and write options for the Cloudera Hive JDBC driver
  val jdbcOptions: JDBCOptions = new JDBCOptions(Map(
    "url" -> s"$serverUrl",
    "dbtable" -> "hiveTable",
    "driver" -> "com.cloudera.hive.jdbc41.HS2Driver",
    "batchSize" -> "10000",
    "SSLTrustStore" -> "/usr/java/jdk1.8.0_144/jre/lib/security/jssecaxx",
    "format" -> "parquet"
  ))

  // Write the DataFrame to the Hive table via Spark's internal JDBC utilities
  JdbcUtils.saveTable(dataFrame, serverUrl, sourceTableName, jdbcOptions)
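
For context, JdbcUtils.saveTable is Spark's internal JDBC writer: for each partition it prepares an INSERT statement and binds every column value, and for a NULL field it calls PreparedStatement.setNull with the column's JDBC type. The sketch below (simplified and hypothetical, not Spark's actual source) illustrates that binding step; the setNull call is exactly where the Cloudera driver throws, as SPreparedStatement.setNull in the stack trace below confirms.

    import java.sql.PreparedStatement

    // Illustrative only: roughly how Spark binds one row of a partition.
    // A NULL value becomes stmt.setNull(...), which this driver rejects
    // with SQLFeatureNotSupportedException (10220).
    def bindRow(stmt: PreparedStatement, values: Seq[Any], jdbcTypes: Seq[Int]): Unit =
      values.zipWithIndex.foreach { case (v, i) =>
        if (v == null) stmt.setNull(i + 1, jdbcTypes(i)) // throws here on this driver
        else stmt.setObject(i + 1, v)
      }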

build.sbt:

  .settings(libraryDependencies ++= Seq(
    "org.apache.spark" %% "spark-sql" % "2.1.0" % "provided",
    "org.apache.spark" %% "spark-core" % "2.1.0" % "provided",
    //"org.apache.spark" %% "spark-sql" % "2.1.0",
    //"org.apache.spark" %% "spark-core" % "2.1.0",
    "org.apache.hive" % "hive-jdbc" % "1.1.0",
    "org.apache.spark" %% "spark-hivecontext-compatibility" % "2.0.0-preview",
    "com.ClouderaHiveJDBC41" % "ClouderaHiveJDBC41" % "2.5.17.1047",
    "org.apache.hadoop" % "hadoop-client" % "1.1.0" % "provided"
  ))
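
One thing to note about these dependencies: the Cloudera Hive JDBC driver is not published to Maven Central, so a coordinate like "com.ClouderaHiveJDBC41" % "ClouderaHiveJDBC41" % "2.5.17.1047" only resolves if the jar was installed into a local or corporate repository. If that is not the case, a common alternative is to reference the downloaded jar as an unmanaged dependency (the lib/ path below is a placeholder, not a known location):

    // build.sbt sketch: pick up the Cloudera driver jar directly.
    // "lib/HiveJDBC41.jar" is an assumed path; adjust to where the jar lives.
    unmanagedJars in Compile += file("lib/HiveJDBC41.jar")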

Error:

java.sql.SQLFeatureNotSupportedException: [Cloudera][JDBC](10220) Driver does not support this optional feature.
        at com.cloudera.hiveserver2.exceptions.ExceptionConverter.toSQLException(Unknown Source)
        at com.cloudera.hiveserver2.jdbc.common.SPreparedStatement.checkTypeSupported(Unknown Source)
        at com.cloudera.hiveserver2.jdbc.common.SPreparedStatement.setNull(Unknown Source)
        at org.apache.spark.sql.execution.datasources.jdbc.JdbcUtils$.savePartition(JdbcUtils.scala:583)
        at org.apache.spark.sql.execution.datasources.jdbc.JdbcUtils$$anonfun$saveTable$1.apply(JdbcUtils.scala:670)
        at org.apache.spark.sql.execution.datasources.jdbc.JdbcUtils$$anonfun$saveTable$1.apply(JdbcUtils.scala:670)
        at org.apache.spark.rdd.RDD$$anonfun$foreachPartition$1$$anonfun$apply$29.apply(RDD.scala:925)
        at org.apache.spark.rdd.RDD$$anonfun$foreachPartition$1$$anonfun$apply$29.apply(RDD.scala:925)
        at org.apache.spark.SparkContext$$anonfun$runJob$5.apply(SparkContext.scala:1944)
        at org.apache.spark.SparkContext$$anonfun$runJob$5.apply(SparkContext.scala:1944)
        at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:87)
        at org.apache.spark.scheduler.Task.run(Task.scala:99)
        at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:282)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
        at java.lang.Thread.run(Thread.java:748)
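
Reading the trace: the failure happens in savePartition (JdbcUtils.scala:583) the moment Spark tries to bind a NULL parameter via setNull, which the Cloudera HiveServer2 driver reports as an unsupported optional feature (10220). A possible workaround, assuming the data can tolerate sentinel defaults in place of NULLs, is to fill nulls with df.na.fill before the JDBC write so setNull is never invoked:

    import org.apache.spark.sql.DataFrame

    // Workaround sketch (assumes sentinel defaults are acceptable):
    // replacing NULLs up front means Spark never calls setNull on this driver.
    def withoutNulls(df: DataFrame): DataFrame =
      df.na.fill("") // string columns: empty string instead of NULL
        .na.fill(0)  // numeric columns: zero instead of NULL

    JdbcUtils.saveTable(withoutNulls(dataFrame), serverUrl, sourceTableName, jdbcOptions)

Another route worth considering, since the job already runs inside Spark, is to bypass JDBC entirely and write through a Hive-enabled SparkSession (e.g. dataFrame.write.mode("append").insertInto("hiveTable")), which avoids the driver's PreparedStatement limitations altogether.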

0 Answers:

No answers yet.