Hive on Spark > YARN mode > Spark configuration > what value to give spark.master

Time: 2015-10-05 18:12:14

Tags: configuration apache-spark hive

I am trying out HiveQL with my own custom SerDe (it works fine with plain Hive), and I followed the instructions at https://cwiki.apache.org/confluence/display/Hive/Hive+on+Spark%3A+Getting+Started

However, I am quite confused by this part: "Start the Spark cluster (both standalone and Spark on YARN are supported)." As I understand it, we only need to start a Spark cluster if Spark runs in standalone mode; since I intend to run Spark on YARN, do I need to start a Spark cluster at all? What I did was simply start Hadoop YARN, and because I really did not know what value to give the spark.master property, I did not set it at all. Probably because of that, I get the following error messages when running a Hive query that uses my own SerDe:

2015-10-05 20:42:07,184 INFO  [main]: status.SparkJobMonitor (RemoteSparkJobMonitor.java:startMonitor(67)) - Job hasn't been submitted after 61s. Aborting it.

2015-10-05 20:42:07,184 ERROR [main]: status.SparkJobMonitor (SessionState.java:printError(960)) - Status: SENT
2015-10-05 20:42:07,184 INFO  [main]: log.PerfLogger (PerfLogger.java:PerfLogEnd(148)) - </PERFLOG method=SparkRunJob start=1444066866174 end=1444066927184 duration=61010 from=org.apache.hadoop.hive.ql.exec.spark.status.SparkJobMonitor>
2015-10-05 20:42:07,300 ERROR [main]: ql.Driver (SessionState.java:printError(960)) - FAILED: Execution Error, return code 2 from org.apache.hadoop.hive.ql.exec.spark.SparkTask
2015-10-05 20:42:07,300 INFO  [main]: log.PerfLogger (PerfLogger.java:PerfLogEnd(148)) - </PERFLOG method=Driver.execute start=1444066848958 end=1444066927300 duration=78342 from=org.apache.hadoop.hive.ql.Driver>

...

and finally the following exception:

2015-10-05 20:42:16,658 INFO  [stderr-redir-1]: client.SparkClientImpl (SparkClientImpl.java:run(569)) - 15/10/05 20:42:16 INFO yarn.Client: Application report for application_1444066615793_0001 (state: ACCEPTED)
2015-10-05 20:42:17,337 WARN  [main]: client.SparkClientImpl (SparkClientImpl.java:stop(154)) - Timed out shutting down remote driver, interrupting...
2015-10-05 20:42:17,337 WARN  [Driver]: client.SparkClientImpl (SparkClientImpl.java:run(430)) - Waiting thread interrupted, killing child process.
2015-10-05 20:42:17,345 WARN  [stderr-redir-1]: client.SparkClientImpl (SparkClientImpl.java:run(572)) - Error in redirector thread.
java.io.IOException: Stream closed
    at java.io.BufferedInputStream.getBufIfOpen(BufferedInputStream.java:162)
    at java.io.BufferedInputStream.read1(BufferedInputStream.java:272)
    at java.io.BufferedInputStream.read(BufferedInputStream.java:334)
    at sun.nio.cs.StreamDecoder.readBytes(StreamDecoder.java:283)
    at sun.nio.cs.StreamDecoder.implRead(StreamDecoder.java:325)
    at sun.nio.cs.StreamDecoder.read(StreamDecoder.java:177)
    at java.io.InputStreamReader.read(InputStreamReader.java:184)
    at java.io.BufferedReader.fill(BufferedReader.java:154)
    at java.io.BufferedReader.readLine(BufferedReader.java:317)
    at java.io.BufferedReader.readLine(BufferedReader.java:382)
    at org.apache.hive.spark.client.SparkClientImpl$Redirector.run(SparkClientImpl.java:568)
    at java.lang.Thread.run(Thread.java:745)

2015-10-05 20:42:17,371 INFO  [Thread-15]: session.SparkSessionManagerImpl (SparkSessionManagerImpl.java:shutdown(146)) - Closing the session manager.

I sincerely hope someone can offer some suggestions. Many thanks!

2 Answers:

Answer 0 (score: 2)

According to the official Spark on YARN documentation, your master will basically be one of the following (see the spark-submit sketch after the list):

  • yarn-cluster: if you are submitting jobs to Spark on the cluster, OR
  • yarn-client: if you want to instantiate the SparkContext locally
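For illustration, the same two modes appear when launching a plain Spark 1.x application with spark-submit; the class and jar names below are made-up placeholders:

    spark-submit --master yarn-cluster --class com.example.MyApp my-app.jar   # driver runs inside the YARN cluster
    spark-submit --master yarn-client  --class com.example.MyApp my-app.jar   # driver (SparkContext) runs on the local machine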

Do not forget to make the configuration files (core-site.xml, hdfs-site.xml, yarn-site.xml, mapred-site.xml, hive-site.xml, etc.) visible to Spark via HADOOP_CONF_DIR and YARN_CONF_DIR. You can set these variables in <spark_home>/conf/spark-env.sh, as in the sketch below.
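A minimal spark-env.sh sketch, assuming the configuration files live under /etc/hadoop/conf (a placeholder path; adjust it to your installation):

    # <spark_home>/conf/spark-env.sh
    # Point Spark at the directory holding core-site.xml, yarn-site.xml, etc.
    export HADOOP_CONF_DIR=/etc/hadoop/conf
    export YARN_CONF_DIR=/etc/hadoop/conf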

Answer 1 (score: 1)

Please try set spark.master=yarn-client;
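For example, you can issue it at the Hive CLI before running the query (this sketch assumes the execution engine has already been switched to Spark):

    hive> set hive.execution.engine=spark;
    hive> set spark.master=yarn-client;

Alternatively, a sketch of how to persist the same setting in hive-site.xml:

    <property>
      <name>spark.master</name>
      <value>yarn-client</value>
    </property>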