HiveServer2 cannot run SQL on Spark

Date: 2015-11-29 14:17:40

Tags: apache-spark hive yarn

These are my versions: Hive: 1.2, Hadoop: CDH 5.3, Spark: 1.4.1

I got Hive on Spark working with the Hive CLI client, but after I started HiveServer2 and tried to run SQL through Beeline, it failed.
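For reference, the Beeline session boils down to a HiveServer2 JDBC call. Below is a minimal sketch of the equivalent call; the host, port 10000, user, and table name are assumptions for illustration, not values taken from the cluster above.

// Minimal sketch of the Beeline-equivalent JDBC call against HiveServer2.
// Host, port 10000, user "root", and "some_table" are assumed placeholders.
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class HiveServer2Check {
    public static void main(String[] args) throws Exception {
        // Hive JDBC driver shipped with apache-hive-1.2.1-bin
        Class.forName("org.apache.hive.jdbc.HiveDriver");
        try (Connection conn = DriverManager.getConnection(
                "jdbc:hive2://hd-master-001:10000/default", "root", "");
             Statement stmt = conn.createStatement()) {
            // An aggregation query forces a job through the configured
            // execution engine (Spark here), exercising the failing path.
            try (ResultSet rs = stmt.executeQuery("SELECT count(*) FROM some_table")) {
                while (rs.next()) {
                    System.out.println(rs.getLong(1));
                }
            }
        }
    }
}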

The error is:

2015-11-29 21:49:42,786 INFO  [stderr-redir-1]: client.SparkClientImpl (SparkClientImpl.java:run(569)) - 15/11/29 21:49:42 INFO spark.SparkContext: Added JAR file:/root/cdh/apache-hive-1.2.1-bin/lib/hive-exec-1.2.1.jar at http://10.96.30.51:10318/jars/hive-exec-1.2.1.jar with timestamp 1448804982784
2015-11-29 21:49:43,336 INFO  [stderr-redir-1]: client.SparkClientImpl (SparkClientImpl.java:run(569)) - 15/11/29 21:49:43 INFO client.ConfiguredRMFailoverProxyProvider: Failing over to rm297
2015-11-29 21:49:43,356 INFO  [stderr-redir-1]: client.SparkClientImpl (SparkClientImpl.java:run(569)) - 15/11/29 21:49:43 INFO retry.RetryInvocationHandler: Exception while invoking getClusterMetrics of class ApplicationClientProtocolPBClientImpl over rm297 after 1 fail over attempts. Trying to fail over immediately.
2015-11-29 21:49:43,357 INFO  [stderr-redir-1]: client.SparkClientImpl (SparkClientImpl.java:run(569)) - 15/11/29 21:49:43 INFO client.ConfiguredRMFailoverProxyProvider: Failing over to rm280
2015-11-29 21:49:43,359 INFO  [stderr-redir-1]: client.SparkClientImpl (SparkClientImpl.java:run(569)) - 15/11/29 21:49:43 INFO retry.RetryInvocationHandler: Exception while invoking getClusterMetrics of class ApplicationClientProtocolPBClientImpl over rm280 after 2 fail over attempts. Trying to fail over after sleeping for 477ms.
2015-11-29 21:49:43,359 INFO  [stderr-redir-1]: client.SparkClientImpl (SparkClientImpl.java:run(569)) - java.net.ConnectException: Call From hd-master-001/10.96.30.51 to hd-master-001:8032 failed on connection exception: java.net.ConnectException: Connection refused; For more details see:  http://wiki.apache.org/hadoop/ConnectionRefused
2015-11-29 21:49:43,359 INFO  [stderr-redir-1]: client.SparkClientImpl (SparkClientImpl.java:run(569)) -    at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)

My YARN state is that hd-master-002 is the active ResourceManager and hd-master-001 is the standby. Port 8032 on hd-master-001 is not open, so of course a connection error occurs when the client tries to connect to port 8032 on hd-master-001.

But why is it trying to connect to the standby ResourceManager? If I use the Hive CLI client with Hive on Spark, everything is fine.
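The log shows ConfiguredRMFailoverProxyProvider cycling between rm297 and rm280 while issuing getClusterMetrics. Below is a small diagnostic sketch (not part of the original setup) that issues the same RPC with an explicit ResourceManager HA configuration; the rm IDs come from the log, while the host-to-ID mapping and addresses are assumptions.

// Diagnostic sketch: call getClusterMetrics, the same RPC that fails in the
// log, with ResourceManager HA explicitly configured. The rm IDs (rm280,
// rm297) are taken from the log; the host-to-ID mapping below is an assumption.
import org.apache.hadoop.yarn.api.records.YarnClusterMetrics;
import org.apache.hadoop.yarn.client.api.YarnClient;
import org.apache.hadoop.yarn.conf.YarnConfiguration;

public class RmFailoverCheck {
    public static void main(String[] args) throws Exception {
        YarnConfiguration conf = new YarnConfiguration();
        conf.setBoolean(YarnConfiguration.RM_HA_ENABLED, true);
        conf.set(YarnConfiguration.RM_HA_IDS, "rm280,rm297");
        // Assumed mapping: rm280 -> hd-master-001 (standby), rm297 -> hd-master-002 (active)
        conf.set("yarn.resourcemanager.address.rm280", "hd-master-001:8032");
        conf.set("yarn.resourcemanager.address.rm297", "hd-master-002:8032");

        YarnClient client = YarnClient.createYarnClient();
        client.init(conf);
        client.start();
        try {
            // With a correct HA configuration, the failover proxy provider
            // should settle on the active ResourceManager and this call succeeds.
            YarnClusterMetrics metrics = client.getYarnClusterMetrics();
            System.out.println("NodeManagers: " + metrics.getNumNodeManagers());
        } finally {
            client.stop();
        }
    }
}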

PS: I did not rebuild the Spark assembly jar without Hive; I only removed the 'org.apache.hive' and 'org.apache.hadoop.hive' classes from the built assembly jar. But I don't think that is the problem, because Hive on Spark works for me with the Hive CLI client.

0 Answers:

There are no answers yet.