HBase Master fails in a cluster setup (Hadoop, HBase, and ZooKeeper)

Date: 2016-06-02 10:07:04

Tags: hadoop hbase apache-zookeeper

I am running a single-node cluster with Hadoop, HBase, and ZooKeeper. The HMaster fails to start with the error below. Can anyone help me?

2016-06-02 14:51:56,770 INFO  [master/localhost/127.0.0.1:60000] zookeeper.ZooKeeper: Initiating client connection, connectString=localhost:2181 sessionTimeout=90000 watcher=hconnection-0x290556860x0, quorum=localhost:2181, baseZNode=/hbase
2016-06-02 14:51:56,776 INFO  [master/localhost/127.0.0.1:60000-SendThread(localhost:2181)] zookeeper.ClientCnxn: Opening socket connection to server localhost/127.0.0.1:2181. Will not attempt to authenticate using SASL (unknown error)
2016-06-02 14:51:56,781 INFO  [master/localhost/127.0.0.1:60000-SendThread(localhost:2181)] zookeeper.ClientCnxn: Socket connection established to localhost/127.0.0.1:2181, initiating session
2016-06-02 14:51:56,804 INFO  [master/localhost/127.0.0.1:60000-SendThread(localhost:2181)] zookeeper.ClientCnxn: Session establishment complete on server localhost/127.0.0.1:2181, sessionid = 0x155106b3fcb0002, negotiated timeout = 40000
2016-06-02 14:51:56,815 INFO  [master/localhost/127.0.0.1:60000] client.ZooKeeperRegistry: ClusterId read in ZooKeeper is null
2016-06-02 14:51:56,905 FATAL [localhost:60000.activeMasterManager] master.HMaster: Failed to become active master
java.lang.IllegalStateException
    at com.google.common.base.Preconditions.checkState(Preconditions.java:133)
    at org.apache.hadoop.ipc.Client.setCallIdAndRetryCount(Client.java:118)
    at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:99)
    at com.sun.proxy.$Proxy17.setSafeMode(Unknown Source)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:279)
    at com.sun.proxy.$Proxy18.setSafeMode(Unknown Source)
    at org.apache.hadoop.hdfs.DFSClient.setSafeMode(DFSClient.java:2419)
    at org.apache.hadoop.hdfs.DistributedFileSystem.setSafeMode(DistributedFileSystem.java:1036)
    at org.apache.hadoop.hdfs.DistributedFileSystem.setSafeMode(DistributedFileSystem.java:1020)
    at org.apache.hadoop.hbase.util.FSUtils.isInSafeMode(FSUtils.java:525)
    at org.apache.hadoop.hbase.util.FSUtils.waitOnSafeMode(FSUtils.java:971)
    at org.apache.hadoop.hbase.master.MasterFileSystem.checkRootDir(MasterFileSystem.java:424)
    at org.apache.hadoop.hbase.master.MasterFileSystem.createInitialFileSystemLayout(MasterFileSystem.java:153)
    at org.apache.hadoop.hbase.master.MasterFileSystem.<init>(MasterFileSystem.java:128)
    at org.apache.hadoop.hbase.master.HMaster.finishActiveMasterInitialization(HMaster.java:638)
    at org.apache.hadoop.hbase.master.HMaster.access$500(HMaster.java:184)
    at org.apache.hadoop.hbase.master.HMaster$1.run(HMaster.java:1729)
    at java.lang.Thread.run(Thread.java:745)

In addition, the ZooKeeper log shows an INFO entry reporting an error:

2016-06-02 14:52:00,080 [myid:] - INFO  [ProcessThread(sid:0 cport:2181)::PrepRequestProcessor@651] - Got user-level KeeperException when processing sessionid:0x155106b3fcb0000 type:delete cxid:0x1f zxid:0x13 txntype:-1 reqpath:n/a Error Path:/hbase/rs/localhost,60000,1464859314301 Error:KeeperErrorCode = NoNode for /hbase/rs/localhost,60000,1464859314301

Can anyone point out what the actual problem is?
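Not part of the original post, but two quick checks one might run against a setup like this (a diagnostic sketch, assuming the `hdfs` CLI is on the PATH and `HBASE_HOME` points at the HBase install; the fallback path is hypothetical). The stack trace shows the master blocking in `FSUtils.waitOnSafeMode`, and the `IllegalStateException` raised inside `Client.setCallIdAndRetryCount` is often associated with mismatched Hadoop jars on the HBase classpath:

```shell
# 1. Check whether HDFS is stuck in safe mode, since the master waits
#    on safe mode (FSUtils.waitOnSafeMode) before failing.
hdfs dfsadmin -safemode get || echo "hdfs CLI not available"

# 2. List the hadoop-* jars bundled under HBase's lib directory so their
#    versions can be compared against the running Hadoop installation;
#    a version mismatch here is a common source of this exception.
ls "${HBASE_HOME:-/usr/local/hbase}"/lib/hadoop-*.jar 2>/dev/null \
  || echo "no hadoop jars found under ${HBASE_HOME:-/usr/local/hbase}/lib"
```

If the jar versions under `lib/` differ from the installed Hadoop version, replacing them with the cluster's own jars is one avenue worth investigating.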

hbase-site.xml:

<configuration>
<property>
  <name>hbase.cluster.distributed</name>
  <value>true</value>
</property>
<property>
  <name>hbase.rootdir</name>
  <value>hdfs://localhost:9000/hbase</value>
</property>
<property>
  <name>hbase.regionserver.wal.codec</name>
  <value>org.apache.hadoop.hbase.regionserver.wal.IndexedWALEditCodec</value>
</property>
<!--<property>
  <name>hbase.master.loadbalancer.class</name>
  <value>org.apache.phoenix.hbase.index.balancer.IndexLoadBalancer</value>
</property>-->
<!--<property>
  <name>hbase.coprocessor.master.classes</name>
  <value>org.apache.phoenix.hbase.index.master.IndexMasterObserver</value>
</property>-->
<property>
  <name>hbase.master.port</name>
  <value>60000</value>
</property>
<property>
  <name>phoenix.functions.allowUserDefinedFunctions</name>
  <value>true</value>
</property>

<property>
  <name>hbase.master.info.port</name>
  <value>60010</value>
</property>
<property>
  <name>hbase.dynamic.jars.dir</name>
  <value>hdfs://localhost:9000/hbase/tmpjars/</value>
</property>
<!--<property>
  <name>hbase.coprocessor.region.classes</name>
  <value>org.apache.hadoop.hbase.coprocessor.MyCoprocessor</value>
</property>-->
<property>
  <name>hbase.regionserver.executor.openregion.threads</name>
  <value>100</value>
</property>
<!--<property>
  <name>phoenix.groupby.maxCacheSize</name>
  <value>2096</value>
</property>-->
<!--<property>
  <name>phoenix.trace.frequency</name>
  <value>never</value>
</property>-->
<property>
  <name>phoenix.query.spoolThresholdBytes</name>
  <value>31457280</value>
</property>
<property>
  <name>hbase.region.server.rpc.scheduler.factory.class</name>
  <value>org.apache.hadoop.hbase.ipc.PhoenixRpcSchedulerFactory</value>
  <description>Factory to create the Phoenix RPC Scheduler that uses separate queues for index and metadata updates</description>
</property>
<property>
  <name>hbase.rpc.controllerfactory.class</name>
  <value>org.apache.hadoop.hbase.ipc.controller.ServerRpcControllerFactory</value>
  <description>Factory to create the Phoenix RPC Scheduler that uses separate queues for index and metadata updates</description>
</property>
<!--<property>
  <name>phoenix.trace.statsTableName</name>
  <value>SYSTEM.TRACING_STATS</value>
</property>-->
<property>
  <name>phoenix.table.use.stats.timestamp</name>
  <value>false</value>
</property>
<!--<property>
  <name>phoenix.stats.guidepost.width</name>
  <value>20</value>
</property>
<property>
  <name>phoenix.stats.guidepost.width</name>
  <value>true</value>
</property>-->
<!--<property>
  <name>fs.hdfs.impl</name>
  <value>org.apache.hadoop.hdfs.DistributedFileSystem</value>
</property>-->
<property>
  <name>hbase.coprocessor.regionserver.classes</name>
  <value>org.apache.hadoop.hbase.regionserver.LocalIndexMerger</value>
</property>
<property>
<name>hbase.rpc.timeout</name>
<value>300000</value>
</property>
<property>
<name>hbase.client.scanner.timeout.period</name>
<value>300000</value>
</property>
</configuration>
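One thing to note about the configuration above (my observation, not from the original post): it sets `hbase.cluster.distributed` to `true` but never sets `hbase.zookeeper.quorum`, so HBase falls back to the `localhost:2181` default seen in the log. On a single-node setup it may help to pin the quorum explicitly; a sketch of the two properties, assuming ZooKeeper runs locally on the default port:

```xml
<!-- Sketch (not in the original config): pin the ZooKeeper quorum
     explicitly instead of relying on the localhost default. -->
<property>
  <name>hbase.zookeeper.quorum</name>
  <value>localhost</value>
</property>
<property>
  <name>hbase.zookeeper.property.clientPort</name>
  <value>2181</value>
</property>
```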

Please help me resolve this issue.

0 Answers:

There are no answers yet.