Apache Phoenix cannot connect to HBase

Posted: 2016-07-21 05:11:26

Tags: hbase phoenix

I'm a new Phoenix user and have probably missed something simple and obvious.

  • HBase is up:

    21:44:23/sprue $ ps -ef | grep HMaster

    501 55936 55922 0 9:50PM ttys014 0:18.12 /Library/Java/JavaVirtualMachines/jdk1.8.0_71.jdk/Contents/Home/bin/java -Dproc_master -XX:OnOutOfMemoryError="kill -9 %p" -Djava.net.preferIPv4Stack=true .. -Dhbase.security.logger=INFO,RFAS org.apache.hadoop.hbase.master.HMaster start

  • We can reach it with the hbase shell and query its contents:

    hbase(main):010:0> scan 't1'

    ROW                 COLUMN+CELL
     r1                 column=f1:c1, timestamp=1469077174795, value=val1
    1 row(s) in 0.0370 seconds

Now I have copied the phoenix 4.4.6 jar into the $HBASE_HOME/lib directory, restarted HBase, and tried to connect via sqlline.py:

$sqlline.py mellyrn.local:2181

Setting property: [incremental, false]
Setting property: [isolation, TRANSACTION_READ_COMMITTED]
issuing: !connect jdbc:phoenix:mellyrn.local:2181 none none org.apache.phoenix.jdbc.PhoenixDriver
Connecting to jdbc:phoenix:mellyrn.local:2181
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/shared/phoenix-4.7.0-HBase-1.1-bin/phoenix-4.7.0-HBase-1.1-client.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/usr/local/Cellar/hadoop/2.6.0/libexec/share/hadoop/common/lib/slf4j-log4j12-1.7.5.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
16/07/20 22:03:03 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Error: org.apache.hadoop.hbase.DoNotRetryIOException: Class org.apache.phoenix.coprocessor.MetaDataEndpointImpl cannot be loaded Set hbase.table.sanity.checks to false at conf or table descriptor if you want to bypass sanity checks
    at org.apache.hadoop.hbase.master.HMaster.warnOrThrowExceptionForFailure(HMaster.java:1603)
    at org.apache.hadoop.hbase.master.HMaster.sanityCheckTableDescriptor(HMaster.java:1535)
    at org.apache.hadoop.hbase.master.HMaster.createTable(HMaster.java:1452)
    at org.apache.hadoop.hbase.master.MasterRpcServices.createTable(MasterRpcServices.java:429)
    at org.apache.hadoop.hbase.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java:52195)
    at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2127)
    at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:107)
    at org.apache.hadoop.hbase.ipc.RpcExecutor.consumerLoop(RpcExecutor.java:133)
    at org.apache.hadoop.hbase.ipc.RpcExecutor$1.run(RpcExecutor.java:108)
    at java.lang.Thread.run(Thread.java:745) (state=08000,code=101)
org.apache.phoenix.except

..

Caused by: org.apache.hadoop.hbase.DoNotRetryIOException: 
org.apache.hadoop.hbase.DoNotRetryIOException: Class 
org.apache.phoenix.coprocessor.MetaDataEndpointImpl cannot be loaded Set 
hbase.table.sanity.checks to false at conf or table descriptor if you want to 
bypass sanity checks

So, any hints on getting Phoenix up and running would be helpful.

2 Answers:

Answer 0 (score: 2)

This exception is thrown when the HBase master cannot load the Phoenix server jar. Even though the Phoenix installation instructions only mention restarting the region servers, that is not enough: copy the Phoenix server jar to the HBase master and any backup masters, just as you did for the region servers, then restart all of them.
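A minimal sketch of that deployment step. The hostnames (master1, backup-master1, rs1, rs2) and the DRY_RUN helper are hypothetical, and passwordless ssh between the nodes is assumed; adjust for your cluster:

```shell
# Hypothetical helper: copy a jar into every HBase node's lib directory.
# With DRY_RUN=1 it only prints the scp commands instead of running them.
deploy_jar() {
  local jar=$1; shift
  for host in "$@"; do
    if [ "${DRY_RUN:-0}" = "1" ]; then
      echo "scp $jar $host:\$HBASE_HOME/lib/"
    else
      scp "$jar" "$host:$HBASE_HOME/lib/"
    fi
  done
}

# Masters and backup masters need the jar just like the region servers:
DRY_RUN=1 deploy_jar phoenix-4.7.0-HBase-1.1-server.jar master1 backup-master1 rs1 rs2
# then restart everything:
# $HBASE_HOME/bin/stop-hbase.sh && $HBASE_HOME/bin/start-hbase.sh
```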

Answer 1 (score: 1)

Check $HBASE_HOME/lib and $HBASE_HOME/conf/hbase-site.xml on the HMaster node.
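For completeness: the error message itself points at the sanity-check switch. Disabling it in $HBASE_HOME/conf/hbase-site.xml only hides the symptom (the unloadable coprocessor class) rather than fixing it, but it shows which file the check reads:

```xml
<!-- Workaround only: bypasses the table sanity check that raised the
     DoNotRetryIOException; the real fix is getting the Phoenix server
     jar onto the HMaster's classpath. -->
<property>
  <name>hbase.table.sanity.checks</name>
  <value>false</value>
</property>
```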

When Phoenix starts up, it creates 4 system tables:

SYSTEM.CATALOG
SYSTEM.FUNCTION
SYSTEM.SEQUENCE
SYSTEM.STATS

SYSTEM.CATALOG and SYSTEM.FUNCTION are declared with the coprocessor org.apache.phoenix.coprocessor.MetaDataEndpointImpl, but it seems your HMaster cannot load it.
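One quick way to confirm that diagnosis is to check whether the class is actually inside a jar under the master's $HBASE_HOME/lib. A small sketch, assuming the JDK's `jar` tool (or `unzip -l`) is available on the HMaster host:

```shell
# has_coprocessor reads a jar file listing on stdin (e.g. from `jar tf` or
# `unzip -l`) and reports whether Phoenix's MetaDataEndpointImpl class is there.
has_coprocessor() {
  if grep -q 'org/apache/phoenix/coprocessor/MetaDataEndpointImpl\.class'; then
    echo present
  else
    echo missing
  fi
}

# Typical usage on the HMaster node:
# jar tf "$HBASE_HOME"/lib/phoenix-*-server.jar | has_coprocessor
```

If it prints `missing` (or the glob matches no jar at all), the master simply does not have the Phoenix server jar, which matches the exception above.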