How do I configure Spark SQL to access the Hive Metastore?

Date: 2015-06-30 17:08:06

Tags: hive cloudera apache-spark-sql

I am trying to use Spark SQL to read a table from the Hive Metastore, but Spark fails with a table-not-found error. I suspect that Spark SQL is creating a brand-new, empty Metastore of its own.

I submit the Spark job with this command:

spark-submit --class etl.EIServerSpark --driver-class-path '/opt/cloudera/parcels/CDH/lib/hive/lib/*' --driver-java-options '-Dspark.executor.extraClassPath=/opt/cloudera/parcels/CDH/lib/hive/lib/*' --jars $HIVE_CLASSPATH --files /etc/hive/conf/hive-site.xml,/etc/hadoop/conf/yarn-site.xml --master yarn-client /root/etl.jar

Here is the error:

2015-06-30 17:50:51,563 INFO  [main] util.Utils (Logging.scala:logInfo(59)) - Copying /etc/hive/conf/hive-site.xml to /tmp/spark-568de027-8b66-40fa-97a4-2ec50614f486/hive-site.xml
2015-06-30 17:50:51,568 INFO  [main] spark.SparkContext (Logging.scala:logInfo(59)) - Added file file:/etc/hive/conf/hive-site.xml at http://10.136.149.126:43349/files/hive-site.xml with timestamp 1435683051561
2015-06-30 17:50:51,568 INFO  [main] util.Utils (Logging.scala:logInfo(59)) - Copying /etc/hadoop/conf/yarn-site.xml to /tmp/spark-568de027-8b66-40fa-97a4-2ec50614f486/yarn-site.xml
2015-06-30 17:50:51,570 INFO  [main] spark.SparkContext (Logging.scala:logInfo(59)) - Added file file:/etc/hadoop/conf/yarn-site.xml at http://10.136.149.126:43349/files/yarn-site.xml with timestamp 1435683051568
2015-06-30 17:50:51,637 INFO  [sparkDriver-akka.actor.default-dispatcher-5] util.AkkaUtils (Logging.scala:logInfo(59)) - Connecting to HeartbeatReceiver: akka.tcp://sparkDriver@gateway.edp.hadoop:52818/user/HeartbeatReceiver
2015-06-30 17:50:51,756 INFO  [main] netty.NettyBlockTransferService (Logging.scala:logInfo(59)) - Server created on 40198
2015-06-30 17:50:51,757 INFO  [main] storage.BlockManagerMaster (Logging.scala:logInfo(59)) - Trying to register BlockManager
2015-06-30 17:50:51,759 INFO  [sparkDriver-akka.actor.default-dispatcher-2] storage.BlockManagerMasterActor (Logging.scala:logInfo(59)) - Registering block manager localhost:40198 with 265.4 MB RAM, BlockManagerId(<driver>, localhost, 40198)
2015-06-30 17:50:51,761 INFO  [main] storage.BlockManagerMaster (Logging.scala:logInfo(59)) - Registered BlockManager
2015-06-30 17:50:52,840 INFO  [main] parse.ParseDriver (ParseDriver.java:parse(185)) - Parsing command: SELECT id, name FROM eiserver.eismpt
2015-06-30 17:50:53,141 INFO  [main] parse.ParseDriver (ParseDriver.java:parse(206)) - Parse Completed
2015-06-30 17:50:54,041 INFO  [main] metastore.HiveMetaStore (HiveMetaStore.java:newRawStore(502)) - 0: Opening raw store with implemenation class:org.apache.hadoop.hive.metastore.ObjectStore
2015-06-30 17:50:54,064 INFO  [main] metastore.ObjectStore (ObjectStore.java:initialize(247)) - ObjectStore, initialize called
2015-06-30 17:50:54,227 WARN  [main] DataNucleus.General (Log4JLogger.java:warn(96)) - Plugin (Bundle) "org.datanucleus.store.rdbms" is already registered. Ensure you dont have multiple JAR versions of the same plugin in the classpath. The URL "file:/opt/cloudera/parcels/CDH-5.3.2-1.cdh5.3.2.p0.10/lib/hive/lib/datanucleus-rdbms-3.2.9.jar" is already registered, and you are trying to register an identical plugin located at URL "file:/opt/cloudera/parcels/CDH-5.3.2-1.cdh5.3.2.p0.10/jars/datanucleus-rdbms-3.2.9.jar."
2015-06-30 17:50:54,268 WARN  [main] DataNucleus.General (Log4JLogger.java:warn(96)) - Plugin (Bundle) "org.datanucleus.api.jdo" is already registered. Ensure you dont have multiple JAR versions of the same plugin in the classpath. The URL "file:/opt/cloudera/parcels/CDH-5.3.2-1.cdh5.3.2.p0.10/lib/hive/lib/datanucleus-api-jdo-3.2.6.jar" is already registered, and you are trying to register an identical plugin located at URL "file:/opt/cloudera/parcels/CDH-5.3.2-1.cdh5.3.2.p0.10/jars/datanucleus-api-jdo-3.2.6.jar."
2015-06-30 17:50:54,274 WARN  [main] DataNucleus.General (Log4JLogger.java:warn(96)) - Plugin (Bundle) "org.datanucleus" is already registered. Ensure you dont have multiple JAR versions of the same plugin in the classpath. The URL "file:/opt/cloudera/parcels/CDH-5.3.2-1.cdh5.3.2.p0.10/lib/hive/lib/datanucleus-core-3.2.10.jar" is already registered, and you are trying to register an identical plugin located at URL "file:/opt/cloudera/parcels/CDH-5.3.2-1.cdh5.3.2.p0.10/jars/datanucleus-core-3.2.10.jar."
2015-06-30 17:50:54,314 INFO  [main] DataNucleus.Persistence (Log4JLogger.java:info(77)) - Property datanucleus.cache.level2 unknown - will be ignored
2015-06-30 17:50:54,315 INFO  [main] DataNucleus.Persistence (Log4JLogger.java:info(77)) - Property hive.metastore.integral.jdo.pushdown unknown - will be ignored
2015-06-30 17:50:56,109 INFO  [main] metastore.ObjectStore (ObjectStore.java:getPMF(318)) - Setting MetaStore object pin classes with hive.metastore.cache.pinobjtypes="Table,StorageDescriptor,SerDeInfo,Partition,Database,Type,FieldSchema,Order"
2015-06-30 17:50:56,170 INFO  [main] metastore.MetaStoreDirectSql (MetaStoreDirectSql.java:<init>(110)) - MySQL check failed, assuming we are not on mysql: Lexical error at line 1, column 5.  Encountered: "@" (64), after : "".
2015-06-30 17:50:57,315 INFO  [main] DataNucleus.Datastore (Log4JLogger.java:info(77)) - The class "org.apache.hadoop.hive.metastore.model.MFieldSchema" is tagged as "embedded-only" so does not have its own datastore table.
2015-06-30 17:50:57,316 INFO  [main] DataNucleus.Datastore (Log4JLogger.java:info(77)) - The class "org.apache.hadoop.hive.metastore.model.MOrder" is tagged as "embedded-only" so does not have its own datastore table.
2015-06-30 17:50:57,688 INFO  [main] DataNucleus.Datastore (Log4JLogger.java:info(77)) - The class "org.apache.hadoop.hive.metastore.model.MFieldSchema" is tagged as "embedded-only" so does not have its own datastore table.
2015-06-30 17:50:57,688 INFO  [main] DataNucleus.Datastore (Log4JLogger.java:info(77)) - The class "org.apache.hadoop.hive.metastore.model.MOrder" is tagged as "embedded-only" so does not have its own datastore table.
2015-06-30 17:50:57,842 INFO  [main] DataNucleus.Query (Log4JLogger.java:info(77)) - Reading in results for query "org.datanucleus.store.rdbms.query.SQLQuery@0" since the connection used is closing
2015-06-30 17:50:57,844 INFO  [main] metastore.ObjectStore (ObjectStore.java:setConf(230)) - Initialized ObjectStore
2015-06-30 17:50:58,113 INFO  [main] metastore.HiveMetaStore (HiveMetaStore.java:createDefaultRoles(560)) - Added admin role in metastore
2015-06-30 17:50:58,115 INFO  [main] metastore.HiveMetaStore (HiveMetaStore.java:createDefaultRoles(569)) - Added public role in metastore
2015-06-30 17:50:58,198 INFO  [main] metastore.HiveMetaStore (HiveMetaStore.java:addAdminUsers(597)) - No user is added in admin role, since config is empty
2015-06-30 17:50:58,376 INFO  [main] session.SessionState (SessionState.java:start(383)) - No Tez session required at this point. hive.execution.engine=mr.
2015-06-30 17:50:58,525 INFO  [main] metastore.HiveMetaStore (HiveMetaStore.java:logInfo(632)) - 0: get_table : db=eiserver tbl=eismpt
2015-06-30 17:50:58,525 INFO  [main] HiveMetaStore.audit (HiveMetaStore.java:logAuditEvent(314)) - ugi=root     ip=unknown-ip-addr      cmd=get_table : db=eiserver tbl=eismpt
2015-06-30 17:50:58,567 ERROR [main] metadata.Hive (Hive.java:getTable(1003)) - NoSuchObjectException(message:eiserver.eismpt table not found)
        at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.get_table(HiveMetaStore.java:1569)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)

How do I configure Spark SQL to access a Hive Metastore deployed on Postgres? I am using CDH 5.3.2.

Thanks

3 Answers:

Answer 0 (score: 1)

Configure Spark to use the Hive Metastore thriftserver:

Edit $SPARK_HOME/conf/hive-site.xml to remove the direct connection information and add this property:

<configuration>
  <property>
    <name>hive.metastore.uris</name>
    <!-- Make sure to replace this with your hive-metastore service's thrift URL -->
    <value>thrift://localhost:9083</value>
    <description>URI for client to contact metastore server</description>
  </property>
</configuration>
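Alternatively, the same property can be set programmatically on the Hive context. A minimal sketch, assuming a Spark 1.x HiveContext in the REPL (where sc already exists) and a metastore thrift service at localhost:9083 (replace with your own URI):

import org.apache.spark.sql.hive.HiveContext

// Point the Hive context at the metastore thrift service.
// The URI below is a placeholder for your own metastore host and port.
val hiveContext = new HiveContext(sc)
hiveContext.setConf("hive.metastore.uris", "thrift://localhost:9083")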

If there is no hive-site.xml in $SPARK_HOME/conf, you need to copy your hive-site.xml file into Spark's conf directory so that Spark can connect to the Hive Metastore. After logging in as root, run the following command:

cp  /usr/lib/hive/conf/hive-site.xml    /usr/lib/spark/conf/

Create a Hive context

At the scala> REPL prompt, type the following:

import org.apache.spark.sql.hive.HiveContext
val hiveContext = new HiveContext(sc)

Create a Hive table

hiveContext.sql("CREATE TABLE IF NOT EXISTS TestTable (key INT, value STRING)")

List the Hive tables

hiveContext.sql("SHOW TABLES").collect().foreach(println)
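With the metastore reachable, the table from the question should now resolve as well; for example (database and table names taken from the question's log):

hiveContext.sql("SELECT id, name FROM eiserver.eismpt").collect().foreach(println)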

Test the configuration (optional)

  • Stop the Spark SQL thriftserver with cd $SPARK_HOME; sbin/stop-thriftserver.sh
  • Start the Hive Metastore thriftserver with cd; ./start-thriftserver.sh
  • Check the logs at $HIVE_HOME/logs/metastore.out for any errors. The Spark SQL thriftserver will not start until it can connect to this server successfully, so it must be running.
  • Start the Spark SQL thriftserver with cd $SPARK_HOME; sbin/start-thriftserver.sh and check the log file indicated in the returned line.
  • You should see lines like these:
16/12/29 20:22:19 INFO metastore: Trying to connect to metastore with URI thrift://localhost:9083
16/12/29 20:22:19 INFO metastore: Connected to metastore.

Run $SPARK_HOME/bin/beeline -u 'jdbc:hive2://localhost:10000/' and try the !tables command to make sure you can list the metadata.
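The same check can also be scripted. A minimal Scala sketch, assuming the hive-jdbc driver jar is on the classpath and the thriftserver is listening on the default port 10000:

import java.sql.DriverManager

// Register the Hive JDBC driver (requires the hive-jdbc jar on the classpath).
Class.forName("org.apache.hive.jdbc.HiveDriver")
val conn = DriverManager.getConnection("jdbc:hive2://localhost:10000/", "", "")
try {
  // Equivalent of beeline's !tables: list what the metastore exposes.
  val rs = conn.createStatement().executeQuery("SHOW TABLES")
  while (rs.next()) println(rs.getString(1)) // one table name per row
} finally {
  conn.close()
}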

Answer 1 (score: 0)

The doc says to put spark.sql.hive.metastore.sharedPrefixes = org.postgresql in the configuration file. Have you tried this?
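For reference, a sketch of where the setting could go (org.postgresql is the package prefix of the Postgres JDBC driver):

# In $SPARK_HOME/conf/spark-defaults.conf:
spark.sql.hive.metastore.sharedPrefixes  org.postgresql

# Or passed per job on spark-submit:
spark-submit --conf spark.sql.hive.metastore.sharedPrefixes=org.postgresql ...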

Answer 2 (score: 0)

Make sure the $HIVE_HOME/conf/hive-site.xml configuration points to the full path of the metastore:

<property>
  <name>javax.jdo.option.ConnectionURL</name>
  <value>jdbc:derby:;databaseName=/home/hive/metastore_db;create=true</value>
  <description>JDBC connect string for a JDBC metastore</description>
</property>
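Since the question's metastore is deployed on Postgres rather than Derby, the equivalent properties would look roughly like this (a sketch: host, port, and database name are placeholders, and the Postgres JDBC jar must be available to the metastore):

<property>
  <name>javax.jdo.option.ConnectionURL</name>
  <value>jdbc:postgresql://metastore-host:5432/metastore</value>
  <description>JDBC connect string for a JDBC metastore</description>
</property>
<property>
  <name>javax.jdo.option.ConnectionDriverName</name>
  <value>org.postgresql.Driver</value>
  <description>JDBC driver class name for the metastore database</description>
</property>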

Place the hive-site.xml file in $SPARK_HOME/conf to point SparkR at the same metastore as Hive.

Hope this solves your problem.