HBase connection problem and unable to create table

Date: 2014-12-24 11:02:29

Tags: hadoop configuration hbase apache-zookeeper nosql

I am running a multi-node cluster with hadoop-1.0.3 (on both nodes), HBase-0.94.2 (on both nodes), and zookeeper-3.4.6 (on the master only).

Master: 192.168.0.1, Slave: 192.168.0.2

HBase is not running properly: I get errors whenever I try to create a table in HBase, and of course I cannot reach the HBase status UI at http://master:60010 either. Please help!

Here are all my configuration files:

(hadoop conf) core-site.xml (identical on master and slave):

<configuration>
  <property>
    <name>fs.default.name</name>
    <value>hdfs://localhost:54310</value>
  </property>
</configuration>

(hbase conf) hbase-site.xml:

<configuration>
  <property>
    <name>hbase.rootdir</name>
    <value>hdfs://master:54310/hbase</value>
  </property>
  <property>
    <name>hbase.cluster.distributed</name>
    <value>true</value>
  </property>
  <property>
    <name>hbase.zookeeper.quorum</name>
    <value>master,slave</value>
  </property>
  <property>
    <name>hbase.zookeeper.property.clientPort</name>
    <value>2222</value>
  </property>
  <property>
    <name>hbase.zookeeper.property.dataDir</name>
    <value>/usr/local/hadoop/zookeeper</value>
  </property>
</configuration>

/etc/hosts (on both nodes):

192.168.0.1 master
192.168.0.2 slave

regionservers:

master
slave

Here is the log file: hbase-hduser-regionserver-master.log

2014-12-24 02:12:13,190 WARN org.apache.zookeeper.ClientCnxn: Session 0x0 for server null, unexpected error, closing socket connection and attempting reconnect
java.net.NoRouteToHostException: No route to host
    at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
    at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:739)
    at org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:286)
    at org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1035)
2014-12-24 02:12:14,002 INFO org.apache.zookeeper.ClientCnxn: Opening socket connection to server master/192.168.0.1:2181
2014-12-24 02:12:14,003 INFO org.apache.zookeeper.client.ZooKeeperSaslClient: Client will not SASL-authenticate because the default JAAS configuration section 'Client' could not be found. If you are not using SASL, you may ignore this. On the other hand, if you expected SASL to work, please fix your JAAS configuration.
2014-12-24 02:12:14,004 INFO org.apache.zookeeper.ClientCnxn: Socket connection established to master/192.168.0.1:2181, initiating session
2014-12-24 02:12:14,005 INFO org.apache.zookeeper.ClientCnxn: Unable to read additional data from server sessionid 0x0, likely server has closed socket, closing socket connection and attempting reconnect
2014-12-24 02:12:14,675 INFO org.apache.hadoop.ipc.HBaseServer: Stopping server on 60020
2014-12-24 02:12:14,676 FATAL org.apache.hadoop.hbase.regionserver.HRegionServer: ABORTING region server master,60020,1419415915643: Initialization of RS failed.  Hence aborting RS.
java.io.IOException: Received the shutdown message while waiting.
    at org.apache.hadoop.hbase.regionserver.HRegionServer.blockAndCheckIfStopped(HRegionServer.java:623)
    at org.apache.hadoop.hbase.regionserver.HRegionServer.initializeZooKeeper(HRegionServer.java:598)
    at org.apache.hadoop.hbase.regionserver.HRegionServer.preRegistrationInitialization(HRegionServer.java:560)
    at org.apache.hadoop.hbase.regionserver.HRegionServer.run(HRegionServer.java:669)
    at java.lang.Thread.run(Thread.java:745)
2014-12-24 02:12:14,676 FATAL org.apache.hadoop.hbase.regionserver.HRegionServer: RegionServer abort: loaded coprocessors are: []
2014-12-24 02:12:14,676 INFO org.apache.hadoop.hbase.regionserver.HRegionServer: STOPPED: Initialization of RS failed.  Hence aborting RS.
2014-12-24 02:12:14,683 INFO org.apache.hadoop.hbase.regionserver.HRegionServer: Registered RegionServer MXBean
2014-12-24 02:12:14,689 INFO org.apache.hadoop.hbase.regionserver.ShutdownHook: Shutdown hook starting; hbase.shutdown.hook=true; fsShutdownHook=Thread[Thread-5,5,main]
2014-12-24 02:12:14,689 INFO org.apache.hadoop.hbase.regionserver.HRegionServer: STOPPED: Shutdown hook
2014-12-24 02:12:14,690 INFO org.apache.hadoop.hbase.regionserver.ShutdownHook: Starting fs shutdown hook thread.
2014-12-24 02:12:14,691 INFO org.apache.hadoop.hbase.regionserver.ShutdownHook: Shutdown hook finished.
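The repeated `NoRouteToHostException` above usually points to a firewall or routing problem rather than an HBase bug, and the log also shows the client trying port 2181 even though hbase-site.xml sets `hbase.zookeeper.property.clientPort` to 2222. A few diagnostic commands worth running (a sketch only; the hostnames come from the question, and the iptables/service commands assume a RHEL/CentOS-style system):

```shell
# Basic reachability from the slave to the master:
ping -c 1 master

# Is ZooKeeper answering on the configured port? hbase-site.xml says 2222,
# but the region server log shows attempts on the default 2181.
echo ruok | nc master 2222   # ZooKeeper replies "imok" if it listens here
echo ruok | nc master 2181   # the port the log shows HBase actually trying

# NoRouteToHostException is typically a firewall REJECT, not a missing route.
# On the master (RHEL/CentOS-style commands - an assumption):
sudo iptables -L -n          # look for REJECT/DROP rules covering 2181/2222
sudo service iptables stop   # temporarily disable to confirm the diagnosis
```

If disabling the firewall makes the region server connect, re-enable it with explicit ACCEPT rules for the ZooKeeper and HBase ports instead of leaving it off.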

1 Answer:

Answer 0 (score: 0):

I think you should use master instead of localhost in your core-site.xml file.

Add the slave node's hostname to the slaves file in the Hadoop conf directory.
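Concretely, the worker list lives in the conf directory; a minimal sketch, assuming `HADOOP_HOME=/usr/local/hadoop` (the question does not state the install path):

```shell
# Write both hostnames into the slaves file so start-all.sh launches
# a DataNode/TaskTracker on each node (the path is an assumption):
HADOOP_CONF="${HADOOP_CONF:-/usr/local/hadoop/conf}"
printf 'master\nslave\n' > "$HADOOP_CONF/slaves"
cat "$HADOOP_CONF/slaves"
```

Listing the master there too is optional; it just means the master also runs a DataNode and TaskTracker, which matches the regionservers file shown in the question.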

The core-site.xml file on both the master and slave nodes should look like this:

<configuration>
  <property>
    <name>fs.default.name</name>
    <value>hdfs://master:54310</value>
  </property>
</configuration>

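Both nodes must see identical configuration files; one way to keep them in sync is to copy the corrected files from the master to the slave (a sketch only; the install paths /usr/local/hadoop and /usr/local/hbase are assumptions, not stated in the question):

```shell
# Push the corrected configs from the master to the slave:
scp /usr/local/hadoop/conf/core-site.xml slave:/usr/local/hadoop/conf/
scp /usr/local/hbase/conf/hbase-site.xml slave:/usr/local/hbase/conf/
scp /usr/local/hbase/conf/regionservers  slave:/usr/local/hbase/conf/
```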
If you have the correct ZooKeeper settings in both files, then both the master and slave hostnames should be listed in the regionservers file on both nodes.
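Once the configs agree, restart the services in order (ZooKeeper first, then HDFS, then HBase) and re-test. A sketch using the stock scripts shipped with these versions (the HOME variables are assumptions about your install):

```shell
# Stop everything, then bring services back up bottom-to-top:
$HBASE_HOME/bin/stop-hbase.sh
$HADOOP_HOME/bin/stop-all.sh
$ZOOKEEPER_HOME/bin/zkServer.sh restart   # standalone ZooKeeper on the master
$HADOOP_HOME/bin/start-all.sh
$HBASE_HOME/bin/start-hbase.sh

# Verify: table creation should now work, and http://master:60010 should load.
echo "create 't1', 'cf1'" | $HBASE_HOME/bin/hbase shell
```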