How to run Hive with the Spark execution engine (Apache Hive 2.1.1 and Apache Spark 2.2.0)

Date: 2018-01-05 07:18:39

Tags: hive apache-spark-sql

We have switched the Hive execution engine from MapReduce to Spark and are trying to run queries in the Hive shell through the Beeline JDBC client.
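For reference, enabling Hive on Spark comes down to a handful of session (or hive-site.xml) properties; the values below are an illustrative sketch, not our exact configuration:

    -- in a beeline/Hive session, or as properties in hive-site.xml
    set hive.execution.engine=spark;  -- run queries on Spark instead of MapReduce
    set spark.master=yarn;            -- cluster manager Hive submits Spark jobs to
    set spark.executor.memory=2g;     -- example resource setting, adjust to the cluster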

We can run simple queries (e.g. select * from table), since those require no data processing, but when we try to run a query with an aggregate function (e.g. select count(*) from table) we get the following error:

Query ID = hadoop_20180105123047_5bcd0d7a-78bd-4b66-b5fb-fc430726c2a9
Total jobs = 1
Launching Job 1 out of 1
In order to change the average load for a reducer (in bytes):
  set hive.exec.reducers.bytes.per.reducer=<number>
In order to limit the maximum number of reducers:
  set hive.exec.reducers.max=<number>
In order to set a constant number of reducers:
  set mapreduce.job.reduces=<number>
Failed to execute spark task, with exception 'org.apache.hadoop.hive.ql.metadata.HiveException(Failed to create spark client.)'
FAILED: Execution Error, return code 1 from org.apache.hadoop.hive.ql.exec.spark.SparkTask

What is the problem?

1 answer:

Answer 0 (score: 0)

The first query works because it does not need to run any MR or Spark job; HS2 or the Hive client simply reads the data directly. The second query does need to run an MR or Spark job. This is the key thing to remember when testing or troubleshooting a cluster.
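A quick way to confirm this from beeline (assuming any existing table, here just called table):

    -- answered by a direct read; no MR/Spark job is launched
    select * from table limit 10;
    -- forces a real job, so it actually exercises the execution engine
    select count(*) from table;

If the first query succeeds but the second fails, the problem lies in the Hive-to-Spark integration rather than in the metastore or data access.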

Can you run a Spark job on its own, outside of Hive?
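One way to check, assuming a typical YARN deployment (adjust SPARK_HOME, the master, and the examples jar path to your install):

    # submit Spark's bundled SparkPi example directly, bypassing Hive entirely
    $SPARK_HOME/bin/spark-submit \
      --class org.apache.spark.examples.SparkPi \
      --master yarn \
      $SPARK_HOME/examples/jars/spark-examples_2.11-2.2.0.jar 10

If this also fails, fix the Spark/YARN setup first. If it succeeds, the failure is in the Hive-Spark client handshake; with Hive 2.1.1 and Spark 2.2.0, a version mismatch is a common culprit, since each Hive release is built and tested against a specific Spark version.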