I have two Linux machines: a master machine (192.168.8.174) and a slave machine (192.168.8.173). I have successfully installed and configured Hadoop 2.6.0 in fully distributed mode, and Hadoop itself works fine. I then installed and configured HBase 1.0. When I start HBase, the running daemons are:
master machine: HMaster, HQuorumPeer, HRegionServer
slave machine: HQuorumPeer, HRegionServer
But when I create a table (for example: create 'test','cf'), the HBase log file shows the following error:
2015-03-19 16:46:04,930 INFO [master/master/192.168.8.174:16020-SendThread(192.168.8.173:2181)] zookeeper.ClientCnxn: Opening socket connection to server 192.168.8.173/192.168.8.173:2181. Will not attempt to authenticate using SASL (unknown error)
2015-03-19 16:46:04,952 INFO [master/master/192.168.8.174:16020-SendThread(192.168.8.173:2181)] zookeeper.ClientCnxn: Socket connection established to 192.168.8.173/192.168.8.173:2181, initiating session
2015-03-19 16:46:04,963 INFO [master/master/192.168.8.174:16020-SendThread(192.168.8.173:2181)] zookeeper.ClientCnxn: Session establishment complete on server 192.168.8.173/192.168.8.173:2181, sessionid = 0x14c3135d05c0001, negotiated timeout = 90000
2015-03-19 16:46:04,964 INFO [master/master/192.168.8.174:16020] client.ZooKeeperRegistry: ClusterId read in ZooKeeper is null
2015-03-19 16:46:04,992 FATAL [master:16020.activeMasterManager] master.HMaster: Failed to become active master
java.net.ConnectException: Call From master/192.168.8.174 to master:54310 failed on connection exception: java.net.ConnectException: Connection refused; For more details see: http://wiki.apache.org/hadoop/ConnectionRefused
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:526)
at org.apache.hadoop.net.NetUtils.wrapWithMessage(NetUtils.java:783)
at org.apache.hadoop.net.NetUtils.wrapException(NetUtils.java:730)
at org.apache.hadoop.ipc.Client.call(Client.java:1415)
at org.apache.hadoop.ipc.Client.call(Client.java:1364)
at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:206)
at com.sun.proxy.$Proxy15.setSafeMode(Unknown Source)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:187)
at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:102)
at com.sun.proxy.$Proxy15.setSafeMode(Unknown Source)
at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.setSafeMode(ClientNamenodeProtocolTranslatorPB.java:602)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:279)
at com.sun.proxy.$Proxy16.setSafeMode(Unknown Source)
at org.apache.hadoop.hdfs.DFSClient.setSafeMode(DFSClient.java:2264)
at org.apache.hadoop.hdfs.DistributedFileSystem.setSafeMode(DistributedFileSystem.java:986)
at org.apache.hadoop.hdfs.DistributedFileSystem.setSafeMode(DistributedFileSystem.java:970)
at org.apache.hadoop.hbase.util.FSUtils.isInSafeMode(FSUtils.java:447)
at org.apache.hadoop.hbase.util.FSUtils.waitOnSafeMode(FSUtils.java:894)
at org.apache.hadoop.hbase.master.MasterFileSystem.checkRootDir(MasterFileSystem.java:416)
at org.apache.hadoop.hbase.master.MasterFileSystem.createInitialFileSystemLayout(MasterFileSystem.java:145)
at org.apache.hadoop.hbase.master.MasterFileSystem.<init>(MasterFileSystem.java:125)
at org.apache.hadoop.hbase.master.HMaster.finishActiveMasterInitialization(HMaster.java:591)
at org.apache.hadoop.hbase.master.HMaster.access$500(HMaster.java:165)
at org.apache.hadoop.hbase.master.HMaster$1.run(HMaster.java:1425)
at java.lang.Thread.run(Thread.java:745)
Caused by: java.net.ConnectException: Connection refused
at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:739)
at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206)
at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:529)
at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:493)
at org.apache.hadoop.ipc.Client$Connection.setupConnection(Client.java:606)
at org.apache.hadoop.ipc.Client$Connection.setupIOstreams(Client.java:700)
at org.apache.hadoop.ipc.Client$Connection.access$2800(Client.java:367)
at org.apache.hadoop.ipc.Client.getConnection(Client.java:1463)
at org.apache.hadoop.ipc.Client.call(Client.java:1382)
... 29 more
2015-03-19 16:46:05,002 FATAL [master:16020.activeMasterManager] master.HMaster: Unhandled exception. Starting shutdown.
java.net.ConnectException: Call From master/192.168.8.174 to master:54310 failed on connection exception: java.net.ConnectException: Connection refused; For more details see: http://wiki.apache.org/hadoop/ConnectionRefused
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:526)
at org.apache.hadoop.net.NetUtils.wrapWithMessage(NetUtils.java:783)
at org.apache.hadoop.net.NetUtils.wrapException(NetUtils.java:730)
at org.apache.hadoop.ipc.Client.call(Client.java:1415)
at org.apache.hadoop.ipc.Client.call(Client.java:1364)
at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:206)
at com.sun.proxy.$Proxy15.setSafeMode(Unknown Source)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:187)
at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:102)
at com.sun.proxy.$Proxy15.setSafeMode(Unknown Source)
at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.setSafeMode(ClientNamenodeProtocolTranslatorPB.java:602)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:279)
at com.sun.proxy.$Proxy16.setSafeMode(Unknown Source)
at org.apache.hadoop.hdfs.DFSClient.setSafeMode(DFSClient.java:2264)
at org.apache.hadoop.hdfs.DistributedFileSystem.setSafeMode(DistributedFileSystem.java:986)
at org.apache.hadoop.hdfs.DistributedFileSystem.setSafeMode(DistributedFileSystem.java:970)
at org.apache.hadoop.hbase.util.FSUtils.isInSafeMode(FSUtils.java:447)
at org.apache.hadoop.hbase.util.FSUtils.waitOnSafeMode(FSUtils.java:894)
at org.apache.hadoop.hbase.master.MasterFileSystem.checkRootDir(MasterFileSystem.java:416)
at org.apache.hadoop.hbase.master.MasterFileSystem.createInitialFileSystemLayout(MasterFileSystem.java:145)
at org.apache.hadoop.hbase.master.MasterFileSystem.<init>(MasterFileSystem.java:125)
at org.apache.hadoop.hbase.master.HMaster.finishActiveMasterInitialization(HMaster.java:591)
at org.apache.hadoop.hbase.master.HMaster.access$500(HMaster.java:165)
at org.apache.hadoop.hbase.master.HMaster$1.run(HMaster.java:1425)
at java.lang.Thread.run(Thread.java:745)
Caused by: java.net.ConnectException: Connection refused
at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:739)
at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206)
at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:529)
at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:493)
at org.apache.hadoop.ipc.Client$Connection.setupConnection(Client.java:606)
at org.apache.hadoop.ipc.Client$Connection.setupIOstreams(Client.java:700)
at org.apache.hadoop.ipc.Client$Connection.access$2800(Client.java:367)
at org.apache.hadoop.ipc.Client.getConnection(Client.java:1463)
at org.apache.hadoop.ipc.Client.call(Client.java:1382)
... 29 more
2015-03-19 16:46:05,002 INFO [master:16020.activeMasterManager] regionserver.HRegionServer: STOPPED: Unhandled exception. Starting shutdown.
2015-03-19 16:46:08,046 INFO [master/master/192.168.8.174:16020] ipc.RpcServer: Stopping server on 16020
2015-03-19 16:46:08,046 INFO [RpcServer.listener,port=16020] ipc.RpcServer: RpcServer.listener,port=16020: stopping
2015-03-19 16:46:08,047 INFO [RpcServer.responder] ipc.RpcServer: RpcServer.responder: stopped
2015-03-19 16:46:08,047 INFO [RpcServer.responder] ipc.RpcServer: RpcServer.responder: stopping
2015-03-19 16:46:08,049 INFO [master/master/192.168.8.174:16020] regionserver.HRegionServer: Stopping infoServer
2015-03-19 16:46:08,089 INFO [master/master/192.168.8.174:16020] mortbay.log: Stopped SelectChannelConnector@0.0.0.0:16030
2015-03-19 16:46:08,191 INFO [master/master/192.168.8.174:16020] regionserver.HRegionServer: stopping server master,16020,1426754759593
2015-03-19 16:46:08,191 INFO [master/master/192.168.8.174:16020] client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x14c3135d05c0001
2015-03-19 16:46:08,241 INFO [master/master/192.168.8.174:16020-EventThread] zookeeper.ClientCnxn: EventThread shut down
2015-03-19 16:46:08,242 INFO [master/master/192.168.8.174:16020] zookeeper.ZooKeeper: Session: 0x14c3135d05c0001 closed
2015-03-19 16:46:08,244 INFO [master/master/192.168.8.174:16020] regionserver.HRegionServer: stopping server master,16020,1426754759593; all regions closed.
So I cannot figure out what the problem is.
My configuration files are:
Master machine - hbase-site.xml
<configuration>
<property>
<name>hbase.rootdir</name>
<value>hdfs://192.168.8.174:54310/hbase</value>
</property>
<property>
<name>hbase.cluster.distributed</name>
<value>true</value>
</property>
<property>
<name>hbase.zookeeper.property.dataDir</name>
<value>hdfs://192.168.8.174:9002/zookeeper</value>
</property>
<property>
<name>hbase.zookeeper.quorum</name>
<value>192.168.8.174,192.168.8.173</value>
</property>
<property>
<name>hbase.zookeeper.property.clientPort</name>
<value>2181</value>
</property>
</configuration>
Slave machine - hbase-site.xml
<configuration>
<property>
<name>hbase.rootdir</name>
<value>hdfs://192.168.8.174:54310/hbase</value>
</property>
<property>
<name>hbase.cluster.distributed</name>
<value>true</value>
</property>
</configuration>
I have also set HBASE_MANAGES_ZK in hbase-env.sh.
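Since an HQuorumPeer process runs on both nodes, HBase is presumably managing the ZooKeeper quorum itself; the usual hbase-env.sh line for that (shown as an assumed example, the exact value is not quoted above) is:

# hbase-env.sh: let HBase start and stop the bundled ZooKeeper quorum
export HBASE_MANAGES_ZK=true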
Answer 0 (score: 4)
I had previously gotten ERROR: Can't get master address from ZooKeeper; znode data == null. In my case it was the configured value of zookeeper.znode.parent. The value on the server was /hbase, but I could only connect when the client side was set to /hbase-unsecure. That value had to be edited in the zoo.cfg file on the server for the client to be able to connect.
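If you want to check or pin that setting on the HBase side as well, a minimal hbase-site.xml property for it (using the default /hbase parent as an assumed example) looks like:

<property>
  <!-- Parent znode under which HBase keeps its state in ZooKeeper;
       must match what the client expects (e.g. /hbase vs /hbase-unsecure) -->
  <name>zookeeper.znode.parent</name>
  <value>/hbase</value>
</property>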
Answer 1 (score: 0)
Judging from the log messages, you may have a name resolution problem.
I would make sure your IP addresses resolve to the same hostnames correctly in both the forward and reverse directions; this is a common problem with HBase. In particular, I would check your /etc/hosts file and make sure the name master is not tied to the IP address 192.168.8.174. If it is, then you need to use the correct names instead of IP addresses in your configuration. Also make sure the name mappings are identical on every machine in the cluster. There are tools that can run this check for you, for example:
https://github.com/sujee/hadoop-dns-checker
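As an illustration of a consistent mapping, an /etc/hosts along these lines on both machines (the hostname slave is an assumption; master is the name that appears in your logs) keeps forward and reverse lookups in agreement:

127.0.0.1       localhost
# do not map the cluster hostnames to 127.0.0.1 / 127.0.1.1
192.168.8.174   master
192.168.8.173   slave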
Update: it looks like your hbase.zookeeper.property.dataDir setting may be incorrect. You currently have it pointing at an HDFS URL, but I believe it should be a local directory path. See here for an example.
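A sketch of that property with a local path instead (the directory itself is an assumed example; use any local path that exists on each quorum host):

<property>
  <name>hbase.zookeeper.property.dataDir</name>
  <!-- local filesystem directory on each ZooKeeper host, not an hdfs:// URL -->
  <value>/home/hadoop/zookeeper-data</value>
</property>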
I would also verify that you can talk to ZooKeeper from the command line using hbase zkcli.
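For example, a quick sanity check run on the master machine might look like this; if the quorum is reachable, the HBase parent znode should show up:

# open the ZooKeeper CLI bundled with HBase (connects to the configured quorum)
hbase zkcli
# at the zkcli prompt, list the top-level znodes; /hbase should be present
ls /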