I am using a CDH 5.13.0 environment. Whenever I try to run a hive command, it shows this error:
FAILED: SemanticException org.apache.hadoop.hive.ql.metadata.HiveException: java.lang.RuntimeException: Unable to instantiate org.apache.hadoop.hive.ql.metadata.SessionHiveMetaStoreClient
I checked hive-metastore.log, and it shows:
2018-05-02 06:15:53,225 ERROR [main]: Datastore.Schema (Log4JLogger.java:error(125)) - Failed initialising database. Unable to open a test connection to the given database. JDBC url = jdbc:derby:;databaseName=metastore_db;create=true, username = APP. Terminating connection pool (set lazyInit to true if you expect to start your database after your app). Original Exception: ------
java.sql.SQLException: Failed to create database 'metastore_db', see the next exception for details.
    at org.apache.derby.impl.jdbc.SQLExceptionFactory.getSQLException(Unknown Source)
    at org.apache.derby.impl.jdbc.SQLExceptionFactory.getSQLException(Unknown Source)
    at org.apache.derby.impl.jdbc.Util.seeNextException(Unknown Source)
    at org.apache.derby.impl.jdbc.EmbedConnection.createDatabase(Unknown Source)
    at org.apache.derby.impl.jdbc.EmbedConnection.<init>(Unknown Source)
    at org.apache.derby.jdbc.InternalDriver.getNewEmbedConnection(Unknown Source)
    at org.apache.derby.jdbc.InternalDriver.connect(Unknown Source)
    at org.apache.derby.jdbc.InternalDriver.connect(Unknown Source)
    at org.apache.derby.jdbc.AutoloadedDriver.connect(Unknown Source)
    at java.sql.DriverManager.getConnection(DriverManager.java:571)
    at java.sql.DriverManager.getConnection(DriverManager.java:187)
    at com.jolbox.bonecp.BoneCP.obtainRawInternalConnection(BoneCP.java:361)
    at com.jolbox.bonecp.BoneCP.<init>(BoneCP.java:416)
    at com.jolbox.bonecp.BoneCPDataSource.getConnection(BoneCPDataSource.java:120)
    at org.datanucleus.store.rdbms.ConnectionFactoryImpl$ManagedConnectionImpl.getConnection(ConnectionFactoryImpl.java:501)
    at org.datanucleus.store.rdbms.RDBMSStoreManager.<init>(RDBMSStoreManager.java:298)
    at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
    at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
    at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
    at java.lang.reflect.Constructor.newInstance(Constructor.java:526)
    at org.datanucleus.plugin.NonManagedPluginRegistry.createExecutableExtension(NonManagedPluginRegistry.java:631)
    at org.datanucleus.plugin.PluginManager.createExecutableExtension(PluginManager.java:301)
    at org.datanucleus.NucleusContext.createStoreManagerForProperties(NucleusContext.java:1187)
    at org.datanucleus.NucleusContext.initialise(NucleusContext.java:356)
    at org.datanucleus.api.jdo.JDOPersistenceManagerFactory.freezeConfiguration(JDOPersistenceManagerFactory.java:775)
    at org.datanucleus.api.jdo.JDOPersistenceManagerFactory.createPersistenceManagerFactory(JDOPersistenceManagerFactory.java:333)
    at org.datanucleus.api.jdo.JDOPersistenceManagerFactory.getPersistenceManagerFactory(JDOPersistenceManagerFactory.java:202)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:606)
    at javax.jdo.JDOHelper$16.run(JDOHelper.java:1965)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.jdo.JDOHelper.invoke(JDOHelper.java:1960)
    at javax.jdo.JDOHelper.invokeGetPersistenceManagerFactoryOnImplementation(JDOHelper.java:1166)
    at javax.jdo.JDOHelper.getPersistenceManagerFactory(JDOHelper.java:808)
    at javax.jdo.JDOHelper.getPersistenceManagerFactory(JDOHelper.java:701)
    at org.apache.hadoop.hive.metastore.ObjectStore.getPMF(ObjectStore.java:418)
    at org.apache.hadoop.hive.metastore.ObjectStore.getPersistenceManager(ObjectStore.java:447)
    at org.apache.hadoop.hive.metastore.ObjectStore.initialize(ObjectStore.java:342)
    at org.apache.hadoop.hive.metastore.ObjectStore.setConf(ObjectStore.java:298)
    at org.apache.hadoop.util.ReflectionUtils.setConf(ReflectionUtils.java:73)
    at org.apache.hadoop.util.ReflectionUtils.newInstance(ReflectionUtils.java:133)
    at org.apache.hadoop.hive.metastore.RawStoreProxy.<init>(RawStoreProxy.java:60)
    at org.apache.hadoop.hive.metastore.RawStoreProxy.getProxy(RawStoreProxy.java:69)
    at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.newRawStore(HiveMetaStore.java:682)
    at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.getMS(HiveMetaStore.java:660)
    at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.createDefaultDB(HiveMetaStore.java:709)
    at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.init(HiveMetaStore.java:508)
    at org.apache.hadoop.hive.metastore.RetryingHMSHandler.<init>(RetryingHMSHandler.java:78)
    at org.apache.hadoop.hive.metastore.RetryingHMSHandler.getProxy(RetryingHMSHandler.java:84)
    at org.apache.hadoop.hive.metastore.HiveMetaStore.newRetryingHMSHandler(HiveMetaStore.java:6474)
    at org.apache.hadoop.hive.metastore.HiveMetaStore.newRetryingHMSHandler(HiveMetaStore.java:6469)
    at org.apache.hadoop.hive.metastore.HiveMetaStore.startMetaStore(HiveMetaStore.java:6719)
    at org.apache.hadoop.hive.metastore.HiveMetaStore.main(HiveMetaStore.java:6646)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:606)
    at org.apache.hadoop.util.RunJar.run(RunJar.java:221)
    at org.apache.hadoop.util.RunJar.main(RunJar.java:136)
Caused by: ERROR XJ041: Failed to create database 'metastore_db', see the next exception for details.
    at org.apache.derby.iapi.error.StandardException.newException(Unknown Source)
    at org.apache.derby.impl.jdbc.SQLExceptionFactory.wrapArgsForTransportAcrossDRDA(Unknown Source)
    ... 61 more
Caused by: ERROR XBM0H: Directory /metastore_db cannot be created.
    at org.apache.derby.iapi.error.StandardException.newException(Unknown Source)
    at org.apache.derby.iapi.error.StandardException.newException(Unknown Source)
    at org.apache.derby.impl.services.monitor.StorageFactoryService$10.run(Unknown Source)
    at java.security.AccessController.doPrivileged(Native Method)
    at org.apache.derby.impl.services.monitor.StorageFactoryService.createServiceRoot(Unknown Source)
    at org.apache.derby.impl.services.monitor.BaseMonitor.bootService(Unknown Source)
    at org.apache.derby.impl.services.monitor.BaseMonitor.createPersistentService(Unknown Source)
    at org.apache.derby.impl.services.monitor.FileMonitor.createPersistentService(Unknown Source)
    at org.apache.derby.iapi.services.monitor.Monitor.createPersistentService(Unknown Source)
org.datanucleus.exceptions.NucleusDataStoreException: Unable to open a test connection to the given database. JDBC url = jdbc:derby:;databaseName=metastore_db;create=true, username = APP. Terminating connection pool (set lazyInit to true if you expect to start your database after your app). Original Exception: ------
java.sql.SQLException: Failed to create database 'metastore_db', see the next exception for details.
    at org.apache.derby.impl.jdbc.SQLExceptionFactory.getSQLException(Unknown Source)
    at org.apache.derby.impl.jdbc.SQLExceptionFactory.getSQLException(Unknown Source)
    at org.apache.derby.impl.jdbc.Util.seeNextException(Unknown Source)
    at org.apache.derby.impl.jdbc.EmbedConnection.createDatabase(Unknown Source)
    at org.apache.derby.impl.jdbc.EmbedConnection.<init>(Unknown Source)
    at org.apache.derby.jdbc.InternalDriver.getNewEmbedConnection(Unknown Source)
    at org.apache.derby.jdbc.InternalDriver.connect(Unknown Source)
    at org.apache.derby.jdbc.InternalDriver.connect(Unknown Source)
    at org.apache.derby.jdbc.AutoloadedDriver.connect(Unknown Source)
    at java.sql.DriverManager.getConnection(DriverManager.java:571)
    at java.sql.DriverManager.getConnection(DriverManager.java:187)
    at com.jolbox.bonecp.BoneCP.obtainRawInternalConnection(BoneCP.java:361)
    at com.jolbox.bonecp.BoneCP.<init>(BoneCP.java:416)
    at com.jolbox.bonecp.BoneCPDataSource.getConnection(BoneCPDataSource.java:120)
    at org.datanucleus.store.rdbms.ConnectionFactoryImpl$ManagedConnectionImpl.getConnection(ConnectionFactoryImpl.java:501)
    at org.datanucleus.store.rdbms.RDBMSStoreManager.<init>(RDBMSStoreManager.java:298)
    at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
    at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
    at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
    at java.lang.reflect.Constructor.newInstance(Constructor.java:526)
    at org.datanucleus.plugin.NonManagedPluginRegistry.createExecutableExtension(NonManagedPluginRegistry.java:631)
    at org.datanucleus.plugin.PluginManager.createExecutableExtension(PluginManager.java:301)
    at org.datanucleus.NucleusContext.createStoreManagerForProperties(NucleusContext.java:1187)
    at org.datanucleus.NucleusContext.initialise(NucleusContext.java:356)
    at org.datanucleus.api.jdo.JDOPersistenceManagerFactory.freezeConfiguration(JDOPersistenceManagerFactory.java:775)
    at org.datanucleus.api.jdo.JDOPersistenceManagerFactory.createPersistenceManagerFactory(JDOPersistenceManagerFactory.java:333)
    at org.datanucleus.api.jdo.JDOPersistenceManagerFactory.getPersistenceManagerFactory(JDOPersistenceManagerFactory.java:202)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:606)
    at javax.jdo.JDOHelper$16.run(JDOHelper.java:1965)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.jdo.JDOHelper.invoke(JDOHelper.java:1960)
    at javax.jdo.JDOHelper.invokeGetPersistenceManagerFactoryOnImplementation(JDOHelper.java:1166)
    at javax.jdo.JDOHelper.getPersistenceManagerFactory(JDOHelper.java:808)
    at javax.jdo.JDOHelper.getPersistenceManagerFactory(JDOHelper.java:701)
    at org.apache.hadoop.hive.metastore.ObjectStore.getPMF(ObjectStore.java:418)
    at org.apache.hadoop.hive.metastore.ObjectStore.getPersistenceManager(ObjectStore.java:447)
    at org.apache.hadoop.hive.metastore.ObjectStore.initialize(ObjectStore.java:342)
    at org.apache.hadoop.hive.metastore.ObjectStore.setConf(ObjectStore.java:298)
    at org.apache.hadoop.util.ReflectionUtils.setConf(ReflectionUtils.java:73)
    at org.apache.hadoop.util.ReflectionUtils.newInstance(ReflectionUtils.java:133)
    at org.apache.hadoop.hive.metastore.RawStoreProxy.<init>(RawStoreProxy.java:60)
    at org.apache.hadoop.hive.metastore.RawStoreProxy.getProxy(RawStoreProxy.java:69)
    at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.newRawStore(HiveMetaStore.java:682)
    at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.getMS(HiveMetaStore.java:660)
    at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.createDefaultDB(HiveMetaStore.java:709)
    at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.init(HiveMetaStore.java:508)
    at org.apache.hadoop.hive.metastore.RetryingHMSHandler.<init>(RetryingHMSHandler.java:78)
    at org.apache.hadoop.hive.metastore.RetryingHMSHandler.getProxy(RetryingHMSHandler.java:84)
    at org.apache.hadoop.hive.metastore.HiveMetaStore.newRetryingHMSHandler(HiveMetaStore.java:6474)
    at org.apache.hadoop.hive.metastore.HiveMetaStore.newRetryingHMSHandler(HiveMetaStore.java:6469)
    at org.apache.hadoop.hive.metastore.HiveMetaStore.startMetaStore(HiveMetaStore.java:6719)
    at org.apache.hadoop.hive.metastore.HiveMetaStore.main(HiveMetaStore.java:6646)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:606)
    at org.apache.hadoop.util.RunJar.run(RunJar.java:221)
    at org.apache.hadoop.util.RunJar.main(RunJar.java:136)
Caused by: ERROR XJ041: Failed to create database 'metastore_db', see the next exception for details.
    at org.apache.derby.iapi.error.StandardException.newException(Unknown Source)
    at org.apache.derby.impl.jdbc.SQLExceptionFactory.wrapArgsForTransportAcrossDRDA(Unknown Source)
    ... 61 more
Caused by: ERROR XBM0H: Directory /metastore_db cannot be created.
    at org.apache.derby.iapi.error.StandardException.newException(Unknown Source)
    at org.apache.derby.iapi.error.StandardException.newException(Unknown Source)
    at org.apache.derby.impl.services.monitor.StorageFactoryService$10.run(Unknown Source)
    at java.security.AccessController.doPrivileged(Native Method)
    at org.apache.derby.impl.services.monitor.StorageFactoryService.createServiceRoot(Unknown Source)
    at org.apache.derby.impl.services.monitor.BaseMonitor.bootService(Unknown Source)
    at org.apache.derby.impl.services.monitor.BaseMonitor.createPersistentService(Unknown Source)
    at org.apache.derby.impl.services.monitor.FileMonitor.createPersistentService(Unknown Source)
    at org.apache.derby.iapi.services.monitor.Monitor.createPersistentService(Unknown Source)
I do not know what to do about this. The Hive/Metastore server status shows as down, even though I did not shut it down.
Answer (score 2):
Cause: This describes what caused the problem. In this case, the out-of-memory (OOM) error occurs because the server does not have enough memory to start the HiveMetaStore service.
RCA
The HiveMetaStore service is configured in the client's hive-site.xml, but the service is not running.
For example:
/etc/gphd/hive-0.11.0_gphd_2_1_0_0/conf/hive-site.xml
<property>
  <name>hive.metastore.uris</name>
  <value>thrift://hdw1.viadea.com:9083</value>
</property>
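As a quick sanity check (a suggestion, not part of the original answer), you can ask the Hive client which metastore URI it actually picked up; the Hive CLI `set` command prints the effective value of a property:

# Print the effective value of hive.metastore.uris as seen by the client.
hive -e "set hive.metastore.uris;"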
However, on hdw1:
-bash-4.1$ service hive-metastore status
hive-metastore dead but pid file exists
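Another way to see the same thing is to check whether anything is listening on the Thrift port configured above. This is a rough sketch; hdw1.viadea.com and port 9083 are taken from the example hive-site.xml, so substitute your own host and port:

# On the metastore host: is any process bound to the metastore port?
netstat -tlnp | grep 9083

# From a client machine: can the port be reached at all?
nc -zv hdw1.viadea.com 9083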
Looking at hive-metastore.log, you can see that the service failed to start because of an out-of-memory (OOM) error, as in the following example:
14/04/07 16:43:13 WARN conf.HiveConf: DEPRECATED: Configuration property hive.metastore.local no longer has any effect. Make sure to provide a valid value for hive.metastore.uris if you are connecting to a remote metastore.
WARNING: org.apache.hadoop.metrics.jvm.EventCounter is deprecated. Please use org.apache.hadoop.log.metrics.EventCounter in all the log4j.properties files.
Library initialization failed - unable to allocate file descriptor table - out of memory
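To confirm that the host really is short on memory before resizing it, a couple of generic Linux checks can help. This is only a sketch; the commands are not CDH-specific and their output will vary by system:

# How much physical memory and swap are currently free?
free -m

# Did the kernel OOM killer fire recently?
dmesg | grep -i "out of memory"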
Procedure
To resolve this issue, follow these steps:
1. Find out why the HiveMetaStore service did not start. In this case, increase the server's physical memory.
2. Then start the HiveMetaStore service manually as the root user:
service hive-metastore start
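Once it is up, it is worth verifying the fix end to end. The log path below is an assumption (it varies by distribution and installation), so adjust it to wherever your hive-metastore.log lives:

# Confirm the service now reports as running.
service hive-metastore status

# Watch the metastore log for a clean startup (log path is an assumption).
tail -n 50 /var/log/hive/hive-metastore.log

# Finally, run a trivial query from a client to confirm the original error is gone.
hive -e "show databases;"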