Error connecting to local HBase

Date: 2013-05-13 13:36:44

Tags: exception hadoop hbase master

I am trying to connect to HBase, installed on my local system (Hortonworks 1.1.1.16), from a small Java program that executes the following call:

HBaseAdmin.checkHBaseAvailable(conf);
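
For context, this is roughly the shape of the test program; a minimal sketch, where the TestHBase class name matches the stack trace below but the explicit ZooKeeper settings are assumptions rather than the exact original code:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.HBaseAdmin;

public class TestHBase {
    public static void main(String[] args) throws Exception {
        // Picks up hbase-site.xml from the classpath if it is present.
        Configuration conf = HBaseConfiguration.create();
        // Explicit settings for a standalone setup (assumed values; adjust the host as needed).
        conf.set("hbase.zookeeper.quorum", "example.com");
        conf.set("hbase.zookeeper.property.clientPort", "2181");

        // Throws an exception (e.g. MasterNotRunningException) if the cluster is unreachable.
        HBaseAdmin.checkHBaseAvailable(conf);
        System.out.println("HBase is available");
    }
}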

It is worth mentioning that connecting to HBase from the command line with the hbase command works without any problem.

The contents of the hosts file are as follows (example.com stands for the actual hostname):

127.0.0.1 localhost example.com

HBase is configured to run in standalone mode:

hbase.cluster.distributed=false

When the program is executed, the following exception is thrown:

13/05/13 15:18:29 INFO zookeeper.ZooKeeper: Client environment:zookeeper.version=3.4.5-1392090, built on 09/30/2012 17:52 GMT
13/05/13 15:18:29 INFO zookeeper.ZooKeeper: Client environment:host.name=localhost
13/05/13 15:18:29 INFO zookeeper.ZooKeeper: Client environment:java.version=1.7.0_19
13/05/13 15:18:29 INFO zookeeper.ZooKeeper: Client environment:java.vendor=Oracle Corporation
13/05/13 15:18:29 INFO zookeeper.ZooKeeper: Client environment:java.home=/usr/lib/jvm/java-1.7.0-openjdk-1.7.0.19.x86_64/jre
13/05/13 15:18:29 INFO zookeeper.ZooKeeper: Client environment:java.class.path=[...]
13/05/13 15:18:29 INFO zookeeper.ZooKeeper: Client environment:java.library.path=/usr/java/packages/lib/amd64:/usr/lib64:/lib64:/lib:/usr/lib
13/05/13 15:18:29 INFO zookeeper.ZooKeeper: Client environment:java.io.tmpdir=/tmp
13/05/13 15:18:29 INFO zookeeper.ZooKeeper: Client environment:java.compiler=<NA>
13/05/13 15:18:29 INFO zookeeper.ZooKeeper: Client environment:os.name=Linux
13/05/13 15:18:29 INFO zookeeper.ZooKeeper: Client environment:os.arch=amd64
13/05/13 15:18:29 INFO zookeeper.ZooKeeper: Client environment:os.version=2.6.32-358.2.1.el6.x86_64
13/05/13 15:18:29 INFO zookeeper.ZooKeeper: Client environment:user.name=root
13/05/13 15:18:29 INFO zookeeper.ZooKeeper: Client environment:user.home=/root
13/05/13 15:18:29 INFO zookeeper.ZooKeeper: Client environment:user.dir=/root/git/project
13/05/13 15:18:29 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=example.com:2181 sessionTimeout=60000 watcher=hconnection-0x678e4593
13/05/13 15:18:29 INFO zookeeper.RecoverableZooKeeper: The identifier of this process is hconnection-0x678e4593
13/05/13 15:18:29 INFO zookeeper.ClientCnxn: Opening socket connection to server localhost/127.0.0.1:2181. Will not attempt to authenticate using SASL (unknown error)
13/05/13 15:18:29 INFO zookeeper.ClientCnxn: Socket connection established to localhost/127.0.0.1:2181, initiating session
13/05/13 15:18:29 INFO zookeeper.ClientCnxn: Session establishment complete on server localhost/127.0.0.1:2181, sessionid = 0x13e9d6851af0046, negotiated timeout = 40000
13/05/13 15:18:29 INFO client.HConnectionManager$HConnectionImplementation: ClusterId is cccadf06-f6bf-492e-8a39-e8beac521ce6
13/05/13 15:18:29 INFO client.HConnectionManager$HConnectionImplementation: getMaster attempt 1 of 1 failed; no more retrying.
com.google.protobuf.ServiceException: java.io.IOException: Broken pipe
    at org.apache.hadoop.hbase.ipc.ProtobufRpcClientEngine$Invoker.invoke(ProtobufRpcClientEngine.java:149)
    at com.sun.proxy.$Proxy5.isMasterRunning(Unknown Source)
    at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.createMasterInterface(HConnectionManager.java:732)
    at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.createMasterWithRetries(HConnectionManager.java:764)
    at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.getKeepAliveMasterProtocol(HConnectionManager.java:1724)
    at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.getKeepAliveMasterMonitor(HConnectionManager.java:1757)
    at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.isMasterRunning(HConnectionManager.java:837)
    at org.apache.hadoop.hbase.client.HBaseAdmin.checkHBaseAvailable(HBaseAdmin.java:2010)
    at TestHBase.main(TestHBase.java:37)
Caused by: java.io.IOException: Broken pipe
    at sun.nio.ch.FileDispatcherImpl.write0(Native Method)
    at sun.nio.ch.SocketDispatcher.write(SocketDispatcher.java:47)
    at sun.nio.ch.IOUtil.writeFromNativeBuffer(IOUtil.java:94)
    at sun.nio.ch.IOUtil.write(IOUtil.java:65)
    at sun.nio.ch.SocketChannelImpl.write(SocketChannelImpl.java:450)
    at org.apache.hadoop.net.SocketOutputStream$Writer.performIO(SocketOutputStream.java:55)
    at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:142)
    at org.apache.hadoop.net.SocketOutputStream.write(SocketOutputStream.java:146)
    at org.apache.hadoop.net.SocketOutputStream.write(SocketOutputStream.java:107)
    at java.io.BufferedOutputStream.flushBuffer(BufferedOutputStream.java:82)
    at java.io.BufferedOutputStream.flush(BufferedOutputStream.java:140)
    at java.io.DataOutputStream.flush(DataOutputStream.java:123)
    at org.apache.hadoop.hbase.ipc.HBaseClient$Connection.writeConnectionHeader(HBaseClient.java:896)
    at org.apache.hadoop.hbase.ipc.HBaseClient$Connection.setupIOstreams(HBaseClient.java:847)
    at org.apache.hadoop.hbase.ipc.HBaseClient.getConnection(HBaseClient.java:1414)
    at org.apache.hadoop.hbase.ipc.HBaseClient.call(HBaseClient.java:1299)
    at org.apache.hadoop.hbase.ipc.ProtobufRpcClientEngine$Invoker.invoke(ProtobufRpcClientEngine.java:131)
    ... 8 more
13/05/13 15:18:29 INFO client.HConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x13e9d6851af0046
13/05/13 15:18:29 INFO zookeeper.ZooKeeper: Session: 0x13e9d6851af0046 closed
13/05/13 15:18:29 INFO zookeeper.ClientCnxn: EventThread shut down
org.apache.hadoop.hbase.exceptions.MasterNotRunningException: com.google.protobuf.ServiceException: java.io.IOException: Broken pipe
    at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.createMasterWithRetries(HConnectionManager.java:793)
    at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.getKeepAliveMasterProtocol(HConnectionManager.java:1724)
    at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.getKeepAliveMasterMonitor(HConnectionManager.java:1757)
    at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.isMasterRunning(HConnectionManager.java:837)
    at org.apache.hadoop.hbase.client.HBaseAdmin.checkHBaseAvailable(HBaseAdmin.java:2010)
    at TestHBase.main(TestHBase.java:37)
Caused by: com.google.protobuf.ServiceException: java.io.IOException: Broken pipe
    at org.apache.hadoop.hbase.ipc.ProtobufRpcClientEngine$Invoker.invoke(ProtobufRpcClientEngine.java:149)
    at com.sun.proxy.$Proxy5.isMasterRunning(Unknown Source)
    at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.createMasterInterface(HConnectionManager.java:732)
    at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.createMasterWithRetries(HConnectionManager.java:764)
    ... 5 more
Caused by: java.io.IOException: Broken pipe
    at sun.nio.ch.FileDispatcherImpl.write0(Native Method)
    at sun.nio.ch.SocketDispatcher.write(SocketDispatcher.java:47)
    at sun.nio.ch.IOUtil.writeFromNativeBuffer(IOUtil.java:94)
    at sun.nio.ch.IOUtil.write(IOUtil.java:65)
    at sun.nio.ch.SocketChannelImpl.write(SocketChannelImpl.java:450)
    at org.apache.hadoop.net.SocketOutputStream$Writer.performIO(SocketOutputStream.java:55)
    at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:142)
    at org.apache.hadoop.net.SocketOutputStream.write(SocketOutputStream.java:146)
    at org.apache.hadoop.net.SocketOutputStream.write(SocketOutputStream.java:107)
    at java.io.BufferedOutputStream.flushBuffer(BufferedOutputStream.java:82)
    at java.io.BufferedOutputStream.flush(BufferedOutputStream.java:140)
    at java.io.DataOutputStream.flush(DataOutputStream.java:123)
    at org.apache.hadoop.hbase.ipc.HBaseClient$Connection.writeConnectionHeader(HBaseClient.java:896)
    at org.apache.hadoop.hbase.ipc.HBaseClient$Connection.setupIOstreams(HBaseClient.java:847)
    at org.apache.hadoop.hbase.ipc.HBaseClient.getConnection(HBaseClient.java:1414)
    at org.apache.hadoop.hbase.ipc.HBaseClient.call(HBaseClient.java:1299)
    at org.apache.hadoop.hbase.ipc.ProtobufRpcClientEngine$Invoker.invoke(ProtobufRpcClientEngine.java:131)
    ... 8 more

The trace gives some clues about what may actually be happening: the connection to ZooKeeper seems to be established, but something goes wrong when the client tries to reach the master.

Although I have spent several hours trying to find a solution on Google, nobody seems to be getting this exact exception. In particular, it differs from most of the cases reported on the internet in two ways:

  • Everyone else seems to get the error getMaster attempt 0 of 1 failed rather than getMaster attempt 1 of 1 failed. I don't know whether this matters, but I find it somewhat odd.

  • I could not find anyone else getting a Broken pipe error.

By the way, as far as I can tell from the Hortonworks management console, the master is actually running.

This is the output when looking at the latest logs:

2013-05-13 15:30:07,192 WARN org.apache.hadoop.ipc.HBaseServer: Incorrect header or version mismatch from 127.0.0.1:40788 got version 0 expected version 3

Since it is a warning rather than an error, I don't know whether it is related to the actual problem. The port is different on every execution.

Thank you very much for your time; I hope you can shed some light on this issue.

Regards

2 Answers:

Answer 0 (Score: 1)

We finally found the problem and fixed it. It turned out to be a dependency issue. We were using hbase-0.95.0 and hbase-client-0.95.0. Using hbase-0.94.7 or hbase-0.94.9 seems to work.

However, in some circumstances there were still problems even with those versions of the HBase library. In particular, some issues appeared when running inside an application server (JBoss AS7). In the end, everything seems to be resolved by removing the hbase-client-0.95.0 dependency and replacing it with hadoop-core-1.1.2, since some of the required classes are not included in the hbase libraries.
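
For reference, a sketch of what the resulting Maven dependencies could look like; the coordinates below are the usual ones from Maven Central and are an assumption, since the answer does not show the actual build file:

<!-- Assumed coordinates for the working combination described above -->
<dependency>
  <groupId>org.apache.hbase</groupId>
  <artifactId>hbase</artifactId>
  <version>0.94.9</version>
</dependency>
<dependency>
  <groupId>org.apache.hadoop</groupId>
  <artifactId>hadoop-core</artifactId>
  <version>1.1.2</version>
</dependency>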

Regards.

Answer 1 (Score: 0)

I would suggest first checking with netstat -n -a whether your HBase Master / RegionServer ports are actually bound. I have seen situations where the HBase Master IPC was bound only to the external IP (that was Cloudera CDH) and could not be reached via 127.0.0.1. This looks like the most likely case for you; note that in this situation the hbase shell still works.
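
As a concrete check (a sketch; 60000 and 2181 are the usual defaults for the Master RPC and ZooKeeper client ports in this HBase generation, your setup may differ):

netstat -n -a | grep 60000    # HBase Master RPC port: which address is it listening on?
netstat -n -a | grep 2181     # ZooKeeper client port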

Another possible cause is a previous cluster crash that left some HDFS data corrupted. In that case HBase does not actually start but keeps waiting for HDFS to leave safe mode. It does not look like your situation, but if it were, you could manually force HDFS out of safe mode from the console and then run fsck, followed by the analogous procedure for HBase.
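
If that were the case, the commands for a Hadoop 1.x / HBase 0.94 setup would look roughly like this (a sketch; exact scripts and flags can vary by distribution):

hadoop dfsadmin -safemode leave    # force HDFS out of safe mode
hadoop fsck /                      # check HDFS for corrupted blocks
hbase hbck                         # analogous consistency check for HBase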