FAILED: Execution Error, return code 2 from org.apache.hadoop.hive.ql.exec.spark.SparkTask

Asked: 2016-01-10 16:01:44

Tags: hadoop apache-spark hive hiveql

I am running:

  • apache-hive-1.2.1-bin
  • hadoop-2.7.1
  • spark-1.5.1-bin-hadoop2.6

I was able to configure Hive on Spark, but when I try to execute a query, it fails with the following error:

hive> SELECT COUNT(*) AS rcount, yom From service GROUP BY yom;
Query ID = hduser_20160110105649_4c90528a-76ba-4127-8849-54f2152be817
Total jobs = 1
Launching Job 1 out of 1
In order to change the average load for a reducer (in bytes):
  set hive.exec.reducers.bytes.per.reducer=<number>
In order to limit the maximum number of reducers:
  set hive.exec.reducers.max=<number>
In order to set a constant number of reducers:
  set mapreduce.job.reduces=<number>
Starting Spark Job = b9cbbd47-f41f-48b5-98c3-efcaa145390e
Status: SENT
FAILED: Execution Error, return code 2 from org.apache.hadoop.hive.ql.exec.spark.SparkTask

How can I fix this problem?
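
For context, Hive on Spark is typically enabled with settings along the lines below (a sketch following the Hive on Spark getting-started documentation; the exact values used in this setup are not shown in the post):

hive> set hive.execution.engine=spark;
hive> set spark.master=yarn-client;
hive> set spark.executor.memory=512m;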

2 Answers:

Answer 0: (score: 0)

I had the same problem, but I had not configured YARN because some jobs were already running. I am not sure whether this is the solution to your problem.

yarn.resourcemanager.scheduler.class=org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FairScheduler

Have you configured YARN as described in the documentation?
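
For reference, a minimal yarn-site.xml snippet applying the property above would look like this (a sketch; the file location and any other scheduler settings depend on your installation):

<property>
    <name>yarn.resourcemanager.scheduler.class</name>
    <value>org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FairScheduler</value>
</property>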

Answer 1: (score: -1)

In yarn-site.xml:

<property>
    <name>yarn.resourcemanager.scheduler.class</name>
    <value>org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler</value>
</property>
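
Note that after editing yarn-site.xml, the ResourceManager must be restarted for a scheduler change to take effect. With a standard Hadoop 2.7 layout, that can be done roughly as follows (assuming $HADOOP_HOME points at your install):

$HADOOP_HOME/sbin/stop-yarn.sh
$HADOOP_HOME/sbin/start-yarn.sh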