JDBC driver not found - submitting from Spark to YARN

Asked: 2015-10-12 20:45:22

Tags: apache-spark yarn apache-spark-sql

I'm trying to read all rows from a DB table and write them into another, empty target table. When I issue the following command on the master node, it works as expected:

$./bin/spark-submit --class cs.TestJob_publisherstarget --driver-class-path ./lib/mysql-connector-java-5.1.35-bin.jar --jars ./lib/mysql-connector-java-5.1.35-bin.jar,./lib/univocity-parsers-1.5.6.jar,./lib/commons-csv-1.1.1-SNAPSHOT.jar ./lib/uber-ski-spark-job-0.0.1-SNAPSHOT.jar

(where uber-ski-spark-job-0.0.1-SNAPSHOT.jar is the packaged jar in the ../spark/lib folder and cs.TestJob_publisherstarget is the main class)

The command above works perfectly: the code reads all rows from the table in MySQL and dumps them into the target table, using the JDBC driver given in the --jars option.
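For context, a minimal sketch of what such a job could look like (an assumption on my part, based on the Spark 1.3-style SQLContext.load call visible in the stack trace below; the table names "publishers" and "publishers_target" are placeholders):

import org.apache.spark.{SparkConf, SparkContext}
import org.apache.spark.sql.SQLContext

object TestJobSketch {
  def main(args: Array[String]): Unit = {
    val sc = new SparkContext(new SparkConf().setAppName("publishers-copy"))
    val sqlContext = new SQLContext(sc)
    val url = "jdbc:mysql://localhost:3306/pubs?user=root&password=root"

    // Read the whole source table over JDBC (this is the SQLContext.load
    // call that appears in the stack trace further down)
    val df = sqlContext.load("jdbc", Map("url" -> url, "dbtable" -> "publishers"))

    // Append every row into the empty target table
    df.insertIntoJDBC(url, "publishers_target", false)
  }
}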

Here is the problem:

When I submit the same job to YARN, everything else being identical, it fails with an exception saying the driver cannot be found:

$ ./bin/spark-submit --verbose --class cs.TestJob_publisherstarget --master yarn-cluster --driver-class-path ./lib/mysql-connector-java-5.1.35-bin.jar --jars ./lib/mysql-connector-java-5.1.35-bin.jar ./lib/uber-ski-spark-job-0.0.1-SNAPSHOT.jar

Exception in the YARN console:

Error: application failed with exception
org.apache.spark.SparkException: Application finished with failed status
    at org.apache.spark.deploy.yarn.Client.run(Client.scala:625)
    at org.apache.spark.deploy.yarn.Client$.main(Client.scala:650)
    at org.apache.spark.deploy.yarn.Client.main(Client.scala)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:606)
    at org.apache.spark.deploy.SparkSubmit$.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:577)
    at org.apache.spark.deploy.SparkSubmit$.doRunMain$1(SparkSubmit.scala:174)
    at org.apache.spark.deploy.SparkSubmit$.submit(SparkSubmit.scala:197)
    at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:112)
    at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)

Exception in the logs:

15/10/12 20:38:59 ERROR yarn.ApplicationMaster: User class threw exception: No suitable driver found for jdbc:mysql://localhost:3306/pubs?user=root&password=root
java.sql.SQLException: No suitable driver found for jdbc:mysql://localhost:3306/pubs?user=root&password=root
    at java.sql.DriverManager.getConnection(DriverManager.java:596)
    at java.sql.DriverManager.getConnection(DriverManager.java:187)
    at org.apache.spark.sql.jdbc.JDBCRDD$.resolveTable(JDBCRDD.scala:96)
    at org.apache.spark.sql.jdbc.JDBCRelation.<init>(JDBCRelation.scala:133)
    at org.apache.spark.sql.jdbc.DefaultSource.createRelation(JDBCRelation.scala:121)
    at org.apache.spark.sql.sources.ResolvedDataSource$.apply(ddl.scala:219)
    at org.apache.spark.sql.SQLContext.load(SQLContext.scala:697)
    at com.cambridgesemantics.application.sdi.compiler.spark.DataSource.getDataFrame(DataSource.scala:20)
    at cs.TestJob_publisherstarget$.main(TestJob_publisherstarget.scala:29)
    at cs.TestJob_publisherstarget.main(TestJob_publisherstarget.scala)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:606)
    at org.apache.spark.deploy.yarn.ApplicationMaster$$anon$2.run(ApplicationMaster.scala:484)
15/10/12 20:38:59 INFO yarn.ApplicationMaster: Final app status: FAILED, exitCode: 15, (reason: User class threw exception: No suitable driver found for jdbc:mysql://localhost:3306/pubs?user=root&password=root)

The question, anyway: where am I supposed to put the JDBC driver jar? I copied it to the lib folder of every child node, still no luck!

2 Answers:

Answer 0 (score: 0):

I ran into the same problem: it worked in local mode but not in yarn-client.

I added this to spark-submit:

--conf "spark.executor.extraClassPath=/path/to/mysql-connector-java-5.1.34.jar

and it worked for me.
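For the yarn-cluster case in the question, a full command might look like the sketch below (an assumption, not a verified fix: the driver side also needs spark.driver.extraClassPath, since --driver-class-path only affects the local launcher in yarn-cluster mode; the bare file name works in the conf values because jars shipped via --jars land in each container's working directory):

$ ./bin/spark-submit --class cs.TestJob_publisherstarget \
    --master yarn-cluster \
    --jars ./lib/mysql-connector-java-5.1.35-bin.jar \
    --conf "spark.driver.extraClassPath=mysql-connector-java-5.1.35-bin.jar" \
    --conf "spark.executor.extraClassPath=mysql-connector-java-5.1.35-bin.jar" \
    ./lib/uber-ski-spark-job-0.0.1-SNAPSHOT.jar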

Answer 1 (score: 0):

For Spark 1.6, I had a problem storing a DataFrame to Oracle using org.apache.spark.sql.execution.datasources.jdbc.JdbcUtils.saveTable.

In yarn-cluster mode I put these options in the submit script:

--conf "spark.driver.extraClassPath=$HOME/jdbc-11.2.0.3.0.jar" \
--conf "spark.executor.extraClassPath=$HOME/jdbc-11.2.0.3.0.jar" \

I also had to put Class.forName("..") before saving the rows, as below:

try {
    // Make sure the Oracle JDBC driver is registered before saving
    Class.forName("oracle.jdbc.OracleDriver");
    org.apache.spark.sql.execution.datasources.jdbc.JdbcUtils.saveTable(ds, url, "RD_SPARK_DTL_INCL_HY", p);
} catch (Exception e) {
    e.printStackTrace();
}

Of course you have to copy the lib to every node. Not pretty, but it works. Hopefully someone comes up with a better solution later.

I do strongly recommend using this API - it's very convenient and fast.
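A possible alternative to the Class.forName workaround (a sketch only, assuming Spark 1.4+ where DataFrameWriter.jdbc and the "driver" connection property are available; the credentials are placeholders) is to pass the driver class in the connection properties, so Spark loads and registers it on the executors itself:

import java.util.Properties
import org.apache.spark.sql.SaveMode

val props = new Properties()
props.put("user", "scott")                        // placeholder credentials
props.put("password", "tiger")
props.put("driver", "oracle.jdbc.OracleDriver")   // Spark loads this class before opening connections

// ds is the DataFrame from the snippet above
ds.write.mode(SaveMode.Append).jdbc(url, "RD_SPARK_DTL_INCL_HY", props)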