java.sql.SQLNonTransientConnectionException: [Cloudera][JDBC](10060) Connection has been closed

Date: 2018-09-04 13:09:11

Tags: apache-spark jdbc hive

I am trying to load data from a remote server into a Hive table, sending 20 requests to the Hive table at a time, and I am hitting the error "[Cloudera] JDBC Connection has been closed." Some of the requests are processed successfully and their data is loaded into the Hive table, while the remaining requests fail with this error.

Connection string

ServerUrl=jdbc:hive2://hhhjkh.ghcprp.com:10000/hjhjhkhjkhj;principal=hive/hhhjkh.ghcprp.com@internal.cvbglobal.com;SSL=1;mapred.job.queue.name=la9;AuthMech=3;user=xxxxxxx;password=xxxxxxx;SocketTimeout=0
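The Spark code is not shown, but from the stack trace below (DataFrameWriter.jdbc invoked from LoadData.loadDFToDB) the load presumably looks roughly like the following sketch. The object name, the dummy data, the table names, and the parallel submission of 20 writes are assumptions made for illustration, and the Cloudera Hive JDBC driver JAR is assumed to be on the classpath.

    import java.util.Properties

    import scala.concurrent.ExecutionContext.Implicits.global
    import scala.concurrent.duration.Duration
    import scala.concurrent.{Await, Future}

    import org.apache.spark.sql.{DataFrame, SaveMode, SparkSession}

    object LoadSketch {
      // Same options as the connection string above (credentials redacted);
      // the Cloudera Hive JDBC driver is assumed to be on the classpath.
      val serverUrl: String =
        "jdbc:hive2://hhhjkh.ghcprp.com:10000/hjhjhkhjkhj;" +
          "principal=hive/hhhjkh.ghcprp.com@internal.cvbglobal.com;" +
          "SSL=1;mapred.job.queue.name=la9;AuthMech=3;" +
          "user=xxxxxxx;password=xxxxxxx;SocketTimeout=0"

      // Hypothetical stand-in for the loadDFToDB seen in the stack trace:
      // writes one DataFrame to a table through the JDBC driver.
      def loadDFToDB(df: DataFrame, table: String): Unit =
        df.write
          .mode(SaveMode.Append)
          .jdbc(serverUrl, table, new Properties())

      def main(args: Array[String]): Unit = {
        val spark = SparkSession.builder().appName("LoadSketch").getOrCreate()

        // "20 requests at a time": 20 independent writes submitted concurrently,
        // matching the pattern described in the question (dummy data and
        // hypothetical table names, for illustration only).
        val batches: Seq[(DataFrame, String)] =
          (1 to 20).map(i => (spark.range(1000).toDF("id"), s"sandbox.load_test_$i"))

        val writes = batches.map { case (df, table) => Future(loadDFToDB(df, table)) }
        Await.result(Future.sequence(writes), Duration.Inf)
        spark.stop()
      }
    }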

Error

java.sql.SQLNonTransientConnectionException: [Cloudera][JDBC](10060) Connection has been closed.
    at com.cloudera.hiveserver2.exceptions.ExceptionConverter.toSQLException(Unknown Source)
    at com.cloudera.hiveserver2.jdbc.common.SConnection.closeConnection(Unknown Source)
    at com.cloudera.hiveserver2.jdbc.common.SConnection.abortInternal(Unknown Source)
    at com.cloudera.hiveserver2.jdbc.common.SConnection.close(Unknown Source)
    at org.apache.spark.sql.execution.datasources.jdbc.JdbcRelationProvider.createRelation(JdbcRelationProvider.scala:99)
    at org.apache.spark.sql.execution.datasources.SaveIntoDataSourceCommand.run(SaveIntoDataSourceCommand.scala:46)
    at org.apache.spark.sql.execution.command.ExecutedCommandExec.sideEffectResult$lzycompute(commands.scala:70)
    at org.apache.spark.sql.execution.command.ExecutedCommandExec.sideEffectResult(commands.scala:68)
    at org.apache.spark.sql.execution.command.ExecutedCommandExec.doExecute(commands.scala:86)
    at org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$1.apply(SparkPlan.scala:131)
    at org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$1.apply(SparkPlan.scala:127)
    at org.apache.spark.sql.execution.SparkPlan$$anonfun$executeQuery$1.apply(SparkPlan.scala:155)
    at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
    at org.apache.spark.sql.execution.SparkPlan.executeQuery(SparkPlan.scala:152)
    at org.apache.spark.sql.execution.SparkPlan.execute(SparkPlan.scala:127)
    at org.apache.spark.sql.execution.QueryExecution.toRdd$lzycompute(QueryExecution.scala:80)
    at org.apache.spark.sql.execution.QueryExecution.toRdd(QueryExecution.scala:80)
    at org.apache.spark.sql.DataFrameWriter$$anonfun$runCommand$1.apply(DataFrameWriter.scala:654)
    at org.apache.spark.sql.DataFrameWriter$$anonfun$runCommand$1.apply(DataFrameWriter.scala:654)
    at org.apache.spark.sql.execution.SQLExecution$.withNewExecutionId(SQLExecution.scala:77)
    at org.apache.spark.sql.DataFrameWriter.runCommand(DataFrameWriter.scala:654)
    at org.apache.spark.sql.DataFrameWriter.saveToV1Source(DataFrameWriter.scala:273)
    at org.apache.spark.sql.DataFrameWriter.save(DataFrameWriter.scala:267)
    at org.apache.spark.sql.DataFrameWriter.jdbc(DataFrameWriter.scala:499)
    at com.rxcorp.canada_taunus.LoadData.loadDFToDB(LoadData.scala:193)
    at com.rxcorp.canada_taunus.CanadaTaunus$.main(CanadaTaunus.scala:155)
    at com.rxcorp.canada_taunus.CanadaTaunus.main(CanadaTaunus.scala)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at org.apache.spark.deploy.JavaMainApplication.start(SparkApplication.scala:52)
    at org.apache.spark.deploy.SparkSubmit$.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:879)
    at org.apache.spark.deploy.SparkSubmit$.doRunMain$1(SparkSubmit.scala:197)
    at org.apache.spark.deploy.SparkSubmit$.submit(SparkSubmit.scala:227)
    at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:136)
    at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)

0 Answers:

No answers yet