I have changed the Hive execution engine to Spark. I get an exception whenever I run any DML/DDL statement.
hive> select count(*) from tablename;
Query ID = jibi_john_20160602153012_6ec1da36-dcb3-4f2f-a855-3b68be118b36
Total jobs = 1
Launching Job 1 out of 1
In order to change the average load for a reducer (in bytes):
set hive.exec.reducers.bytes.per.reducer=<number>
In order to limit the maximum number of reducers:
set hive.exec.reducers.max=<number>
In order to set a constant number of reducers:
set mapreduce.job.reduces=<number>
Failed to execute spark task, with exception 'org.apache.hadoop.hive.ql.metadata.HiveException(Failed to create spark client.)'
FAILED: Execution Error, return code 1 from org.apache.hadoop.hive.ql.exec.spark.SparkTask
Answer 0 (score: 1)
One possible cause is that you are hitting a timeout before YARN allocates the ApplicationMaster. You can extend this timeout by setting hive.spark.client.server.connect.timeout; the default value is 90000 ms.
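For example, a minimal sketch at the Hive prompt (the 180000 ms value is only an illustration; pick a value suited to how long your YARN queue typically takes to grant the ApplicationMaster):

hive> -- raise the Hive-on-Spark client timeout for this session, then retry the query
hive> set hive.spark.client.server.connect.timeout=180000;
hive> select count(*) from tablename;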
Answer 1 (score: 0)
It may be due to a memory problem. Try setting the YARN container memory and maximum allocation to values larger than the Spark executor memory plus overhead:

yarn.scheduler.maximum-allocation-mb
yarn.nodemanager.resource.memory-mb
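A sketch of the corresponding yarn-site.xml entries (the 8192 MB figures are placeholder assumptions; size them above spark.executor.memory + spark.yarn.executor.memoryOverhead for your cluster):

<property>
  <!-- example value: total memory YARN may allocate to containers on each node -->
  <name>yarn.nodemanager.resource.memory-mb</name>
  <value>8192</value>
</property>
<property>
  <!-- example value: largest single container; must exceed executor memory + overhead -->
  <name>yarn.scheduler.maximum-allocation-mb</name>
  <value>8192</value>
</property>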