How do you configure the Hive CLI when using the Spark execution engine?

Date: 2017-10-31 01:07:56

Tags: apache-spark hive

I have set hive.execution.engine to spark and am also using a Spark-enabled queue. Spark SQL can access the Hive tables, and so can beeline from a directly connected cluster machine.
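For reference, one quick way to confirm the tables are reachable outside the Hive CLI is a one-off beeline query; the host name and port below are placeholders, not values from this post:

# hypothetical HiveServer2 endpoint -- substitute your own host and port
beeline -u "jdbc:hive2://hs2-host.example.com:10000/default" -e "SHOW TABLES;"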

But the Hive CLI seems to need additional steps. So far the following has been done:

** Copied the Scala libraries into the $HIVE_HOME/libs directory (otherwise we get a ClassNotFoundException); a sketch of both steps follows after the set statements below.

** At the start of the Hive script (or in .hiverc) run the following:
set hive.execution.engine=spark;
set mapred.job.queue.name=root.spark.sbg.hos;
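A minimal sketch of those two steps, assuming a default tarball layout; the Scala jar version and paths are illustrative, not taken from the post:

# version and location of the Scala jar are assumptions -- match your Spark build
cp "$SPARK_HOME/jars/scala-library-2.11.8.jar" "$HIVE_HOME/lib/"

# put the two settings in ~/.hiverc so every Hive CLI session picks them up
cat >> ~/.hiverc <<'EOF'
set hive.execution.engine=spark;
set mapred.job.queue.name=root.spark.sbg.hos;
EOF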

But now the following error occurs: Failed to create spark client.

SLF4J: Actual binding is of type [org.apache.logging.slf4j.Log4jLoggerFactory]

Logging initialized using configuration in jar:file:/usr/local/Cellar/hive/2.1.1/libexec/lib/hive-common-2.1.1.jar!/hive-log4j2.properties Async: true
hive (default)> insert into sb.test2 values (1,'ab');
Query ID = sboesch_20171030175629_dc310c9a-519e-4f84-a632-f3a44f1df8c3
Total jobs = 3
Launching Job 1 out of 3
In order to change the average load for a reducer (in bytes):
  set hive.exec.reducers.bytes.per.reducer=<number>
In order to limit the maximum number of reducers:
  set hive.exec.reducers.max=<number>
In order to set a constant number of reducers:
  set mapreduce.job.reduces=<number>
Failed to execute spark task, with exception 'org.apache.hadoop.hive.ql.metadata.HiveException(Failed to create spark client.)'
FAILED: Execution Error, return code 1 from org.apache.hadoop.hive.ql.exec.spark.SparkTask
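"Failed to create spark client" generally means Hive could not launch or reach a Spark application at all, so it is worth confirming that the Spark-side properties from the Hive-on-Spark setup guide are visible to the CLI session. A hedged example with placeholder values (adjust for your cluster):

# illustrative values only -- these mirror the properties the Hive-on-Spark
# getting-started guide asks for
hive --hiveconf hive.execution.engine=spark \
     --hiveconf spark.master=yarn \
     --hiveconf spark.eventLog.enabled=true \
     --hiveconf spark.eventLog.dir=hdfs:///tmp/spark-events \
     --hiveconf spark.executor.memory=2g \
     --hiveconf spark.serializer=org.apache.spark.serializer.KryoSerializer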

Has anyone managed to connect the Hive CLI to the Spark back end? I am connecting through vanilla Hive (not Cloudera, Hortonworks, or MapR).

1 Answer:

Answer 0 (score: 1)

You have to start the Hive Metastore Server separately in order to access Hive tables through Spark.

Try hive --service metastore in a new terminal; you should get a response like Starting Hive Metastore Server.
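A sketch of that step, plus the property that points a client session at the running service; the host and port below are the usual defaults and may differ on your cluster:

# run the metastore as a long-lived service (Thrift default port is 9083)
nohup hive --service metastore > /tmp/metastore.log 2>&1 &

# point a Hive CLI session at it explicitly if hive-site.xml does not already
hive --hiveconf hive.metastore.uris=thrift://localhost:9083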

hive-site.xml:

`<configuration>
<property>
<name>javax.jdo.option.ConnectionDriverName</name>
<value>com.mysql.jdbc.Driver</value>   
</property>

<property>
<name>javax.jdo.option.ConnectionUserName</name>
<value>**mysql metastore username**</value>   
</property>

<property>
<name>javax.jdo.option.ConnectionPassword</name>
<value>**mysql metastore DB password**</value>   
</property>

<property>
<name>hive.querylog.location</name>
<value>/tmp/hivequerylogs/${user.name}</value>    
</property>

<property>
<name>hive.aux.jars.path</name>
<value>file:///usr/local/hive/apache-hive-2.1.1-bin/lib/hive-hbase-handler-2.1.1.jar,file:///usr/local/hive/apache-hive-2.1.1-bin/lib/zookeeper-3.4.6.jar</value>
<description>A comma separated list (with no spaces) of the jar files required for Hive-HBase integration</description>
</property>

<property>
<name>hive.support.concurrency</name>
<value>false</value>   
</property>

<property>
<name>hive.server2.enable.doAs</name>
<value>true</value>    
</property>

<property>
<name>hive.server2.authentication</name>
<value>PAM</value>    
</property>

 <property>
<name>hive.server2.custom.authentication.class</name>
<value>org.apache.hive.service.auth.PamAuthenticationProvider</value>  
</property>

<property>
<name>hive.server2.authentication.pam.services</name>
<value>sshd,sudo</value>    
</property>

<property>
<name>hive.stats.dbclass</name>
<value>jdbc:mysql</value>    
</property>

<property>
<name>hive.stats.jdbcdriver</name>
<value>com.mysql.jdbc.Driver</value>
</property>

<property>
<name>hive.session.history.enabled</name>
<value>true</value>
</property>  

<property>
 <name>hive.metastore.schema.verification</name>
 <value>false</value>    
</property>

 <property>
 <name>hive.optimize.sort.dynamic.partition</name>
 <value>false</value>    
 </property>

 <property>
   <name>hive.optimize.insert.dest.volume</name>
   <value>false</value>
 </property>

 <property>
 <name>hive.exec.scratchdir</name>
 <value>/tmp/hive/${user.name}</value>
 <description>HDFS root scratch dir for Hive jobs which gets created with write all (733) permission. For each connecting user, an HDFS scratch dir: ${hive.exec.scratchdir}/&lt;username&gt; is created, with ${hive.scratch.dir.permission}.</description>
 </property>   

  <property>
  <name>datanucleus.fixedDatastore</name>
  <value>true</value>
  <description/>
  </property>

<property>
<name>hive.metastore.warehouse.dir</name>
<value>/user/hive/warehouse</value>
<description>location of default database for the warehouse</description>
</property>

<property>
<name>datanucleus.autoCreateSchema</name>
<value>false</value>
<description>creates necessary schema on a startup if one doesn't exist. set this to false, after creating it once</description>
 </property>

 <property>
 <name>datanucleus.schema.autoCreateAll</name>
 <value>true</value>
 </property>

<property>
<name>datanucleus.schema.validateConstraints</name>
<value>true</value>
</property>

  <property>
  <name>datanucleus.schema.validateColumns</name>
  <value>true</value>
  </property>

  <property>
    <name>datanucleus.schema.validateTables</name>
  <value>true</value>
  </property>
</configuration>`
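As a follow-up to the answer: once the metastore service is running, Spark usually finds it by reading the same hive-site.xml. A minimal sketch, assuming a plain tarball install (paths are assumptions):

# let Spark read the same metastore configuration as Hive
cp "$HIVE_HOME/conf/hive-site.xml" "$SPARK_HOME/conf/"

# sanity check from the Spark side
spark-sql -e "SHOW DATABASES;"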