Spark cannot see Hive databases other than default

Date: 2018-02-22 16:45:45

Tags: hadoop apache-spark hive

I am trying to query Hive tables from Spark 2.2.1 by creating a HiveContext. It turns out that Spark works (whether I submit my job via spark-submit or run it in the pyspark shell, the effect is the same), but it can only see the default database in Hive and cannot see any other databases. This problem seems to have been around for a while, and all the suggestions are about tweaking Spark parameters such as --deploy-mode and --master and explicitly passing the hive-site.xml file to Spark.
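
For reference, a minimal sketch of the kind of script being submitted (hypothetical; the actual myscript.py is not shown in the post). In Spark 2.x, a SparkSession with Hive support supersedes HiveContext, and the symptom shows up when listing databases:

    from pyspark.sql import SparkSession

    # Hive-enabled session; in Spark 2.x this replaces the old HiveContext.
    spark = (SparkSession.builder
             .appName("hive-visibility-check")
             .enableHiveSupport()
             .getOrCreate())

    # With the problem described above, this shows only the `default` database.
    spark.sql("SHOW DATABASES").show()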

After reading everything I could find on this problem, I changed my spark-submit command to the following:

/bin/spark-submit --driver-class-path /opt/sqljdbc_6.0/sqljdbc_6.0/enu/jre8/sqljdbc42.jar --deploy-mode cluster --files /usr/hdp/current/spark2-client/conf/hive-site.xml --master yarn /home/konstantin/myscript.py

(The --driver-class-path parameter is used to query an MSSQL database from the script and is unrelated to this problem.)
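
For context only, since the poster says it is unrelated: a sketch of how such a JDBC read typically looks in PySpark. The server, database, table, and credentials below are hypothetical; only the driver jar path comes from the post.

    # Hypothetical MSSQL connection; com.microsoft.sqlserver.jdbc.SQLServerDriver
    # is the driver class shipped in sqljdbc42.jar.
    df = (spark.read.format("jdbc")
          .option("url", "jdbc:sqlserver://mssql-host:1433;databaseName=mydb")
          .option("dbtable", "dbo.some_table")
          .option("user", "spark_user")
          .option("password", "...")
          .option("driver", "com.microsoft.sqlserver.jdbc.SQLServerDriver")
          .load())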

After running this command, I get the following error:

18/02/22 19:23:44 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
18/02/22 19:23:45 WARN shortcircuit.DomainSocketFactory: The short-circuit local reads feature cannot be used because libhadoop cannot be loaded.
Exception in thread "main" java.lang.NoClassDefFoundError: com/sun/jersey/api/client/config/ClientConfig
    at org.apache.hadoop.yarn.client.api.TimelineClient.createTimelineClient(TimelineClient.java:55)
    at org.apache.hadoop.yarn.client.api.impl.YarnClientImpl.createTimelineClient(YarnClientImpl.java:181)
    at org.apache.hadoop.yarn.client.api.impl.YarnClientImpl.serviceInit(YarnClientImpl.java:168)
    at org.apache.hadoop.service.AbstractService.init(AbstractService.java:163)
    at org.apache.spark.deploy.yarn.Client.submitApplication(Client.scala:152)
    at org.apache.spark.deploy.yarn.Client.run(Client.scala:1109)
    at org.apache.spark.deploy.yarn.Client$.main(Client.scala:1168)
    at org.apache.spark.deploy.yarn.Client.main(Client.scala)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at org.apache.spark.deploy.SparkSubmit$.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:775)
    at org.apache.spark.deploy.SparkSubmit$.doRunMain$1(SparkSubmit.scala:180)
    at org.apache.spark.deploy.SparkSubmit$.submit(SparkSubmit.scala:205)
    at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:119)
    at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
Caused by: java.lang.ClassNotFoundException: com.sun.jersey.api.client.config.ClientConfig
    at java.net.URLClassLoader.findClass(URLClassLoader.java:381)
    at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
    at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:335)
    at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
    ... 17 more

Process finished with exit code 0

Following a suggestion I found here, I downloaded jersey-bundle-1.17.1.jar, placed it on the local file system, and passed it to spark-submit with the --jars option:

/bin/spark-submit --driver-class-path /opt/sqljdbc_6.0/sqljdbc_6.0/enu/jre8/sqljdbc42.jar --jars /home/konstantin/jersey-bundle-1.17.1.jar --deploy-mode cluster --files /usr/hdp/current/spark2-client/conf/hive-site.xml --master yarn /home/konstantin/myscript.py

This had no effect; I still get the same NoClassDefFoundError as above. So I cannot even evaluate the older solutions to my original problem (Spark not seeing Hive databases), because I am stuck on this error.
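
For what it's worth, the trace above is thrown in the submitting JVM (org.apache.spark.deploy.yarn.Client) while it initializes the YARN timeline client, before --jars takes effect, which would explain why adding the jar changes nothing. A workaround often suggested for exactly this trace on HDP-style clusters (an assumption on my part, not something confirmed in this thread) is to disable the timeline-service integration at submit time:

/bin/spark-submit --conf spark.hadoop.yarn.timeline-service.enabled=false --driver-class-path /opt/sqljdbc_6.0/sqljdbc_6.0/enu/jre8/sqljdbc42.jar --deploy-mode cluster --files /usr/hdp/current/spark2-client/conf/hive-site.xml --master yarn /home/konstantin/myscript.py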

Any suggestions are welcome.

1 Answer:

Answer 0 (score: 0)

Check the yarn logs for the setting of the spark.hive.warehouse property. If it is nil, then your hive-site.xml was not distributed correctly.
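
A quick way to check this from inside the job itself (a sketch; note that the standard property names are spark.sql.warehouse.dir on the Spark side and hive.metastore.warehouse.dir in hive-site.xml, which may be what "spark.hive.warehouse" refers to here):

    # If hive-site.xml was not picked up, the catalog falls back to "in-memory"
    # and only the `default` database is visible.
    print(spark.conf.get("spark.sql.catalogImplementation", "in-memory"))
    print(spark.conf.get("spark.sql.warehouse.dir", "not set"))
    print([db.name for db in spark.catalog.listDatabases()])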

The problem is most likely caused by hive-site.xml. Check in the Spark UI Environment tab whether the file was distributed correctly.
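
The same information shown in the Environment tab can also be dumped from code, for example (a sketch using the standard PySpark API):

    from pprint import pprint

    # Mirrors the Spark UI Environment tab; e.g. spark.yarn.dist.files should
    # list hive-site.xml if --files shipped it with the application.
    pprint(sorted(spark.sparkContext.getConf().getAll()))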