Error: Unable to get the list of tables from a remote HBase database?

Date: 2015-10-01 10:41:47

Tags: java hadoop hbase

I have added an entry mapping the hostname quickstart.cloudera to its IP address, and I use this hostname in the hbase-site.xml file, which I pasted into my Eclipse project:

    HBaseConfiguration hc = new HBaseConfiguration(new Configuration());
    hc.set("hbase.master", "quickstart.cloudera:60000");
    hc.set("hbase.zookeeper.quorum", "quickstart.cloudera");
    hc.set("hbase.zookeeper.property.clientPort", "2181");
    HBaseAdmin admin = new HBaseAdmin(hc);
    HTableDescriptor[] tableDescriptor = admin.listTables();
    for (int i = 0; i < tableDescriptor.length; i++) {
        System.out.println(tableDescriptor[i].getNameAsString());
    }

The same code works fine when connecting to HBase on my local system, but when I run it against the remote cluster I get the following errors:

    15/10/01 15:30:55 INFO zookeeper.ClientCnxn: Opening socket connection to server quickstart.cloudera/192.168.0.106:2181. Will not attempt to authenticate using SASL (unknown error)
    15/10/01 15:30:55 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /192.168.0.105:62868, server: quickstart.cloudera/192.168.0.106:2181
    15/10/01 15:30:55 INFO zookeeper.ClientCnxn: Session establishment complete on server quickstart.cloudera/192.168.0.106:2181, sessionid = 0x150220e6706002c, negotiated timeout = 60000
    15/10/01 15:30:55 WARN util.DynamicClassLoader: Failed to identify the fs of dir hdfs://192.168.0.106:8020/hbase/lib, ignored
    java.io.IOException: No FileSystem for scheme: hdfs
    at org.apache.hadoop.fs.FileSystem.getFileSystemClass(FileSystem.java:2138)
    at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:2145)
    at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:80)
    at org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:2184)
    at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:2166)
    at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:302)
    at org.apache.hadoop.fs.Path.getFileSystem(Path.java:194)
    at org.apache.hadoop.hbase.util.DynamicClassLoader.<init>(DynamicClassLoader.java:104)
    at org.apache.hadoop.hbase.protobuf.ProtobufUtil.<clinit>(ProtobufUtil.java:242)
    at org.apache.hadoop.hbase.ClusterId.parseFrom(ClusterId.java:64)
    at org.apache.hadoop.hbase.zookeeper.ZKClusterId.readClusterIdZNode(ZKClusterId.java:75)
    at org.apache.hadoop.hbase.client.ZooKeeperRegistry.getClusterId(ZooKeeperRegistry.java:86)
    at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.retrieveClusterId(ConnectionManager.java:850)
    at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.<init>(ConnectionManager.java:635)
    at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
    at sun.reflect.NativeConstructorAccessorImpl.newInstance(Unknown Source)
    at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(Unknown Source)
    at java.lang.reflect.Constructor.newInstance(Unknown Source)
    at org.apache.hadoop.hbase.client.ConnectionFactory.createConnection(ConnectionFactory.java:238)
    at org.apache.hadoop.hbase.client.ConnectionManager.createConnection(ConnectionManager.java:414)
    at org.apache.hadoop.hbase.client.ConnectionManager.createConnection(ConnectionManager.java:407)
    at org.apache.hadoop.hbase.client.ConnectionManager.getConnectionInternal(ConnectionManager.java:285)
    at org.apache.hadoop.hbase.client.HBaseAdmin.<init>(HBaseAdmin.java:207)
    at HbaseList.main(HbaseList.java:22)
    15/10/01 15:31:53 INFO client.RpcRetryingCaller: Call exception, tries=10, retries=35, started=57299 ms ago, cancelled=false, msg=
    15/10/01 15:32:14 INFO client.RpcRetryingCaller: Call exception, tries=11, retries=35, started=78658 ms ago, cancelled=false, msg=
    15/10/01 15:32:35 INFO client.RpcRetryingCaller: Call exception, tries=12, retries=35, started=99756 ms ago, cancelled=false, msg=

My output:


1 Answer:

Answer 0 (score: 0)

Try setting these configuration properties explicitly, so that the client knows which FileSystem implementation to use for the hdfs:// and file:// schemes:

// Replace host and port with your NameNode's host and port
// (in the question above that would be "quickstart.cloudera" and 8020).
Configuration conf = new Configuration();
conf.set("fs.defaultFS", "hdfs://" + host + ":" + port);
// Bind the schemes to concrete implementations to avoid
// "java.io.IOException: No FileSystem for scheme: hdfs"
conf.set("fs.hdfs.impl",
        org.apache.hadoop.hdfs.DistributedFileSystem.class.getName());
conf.set("fs.file.impl",
        org.apache.hadoop.fs.LocalFileSystem.class.getName());

Configuration hc = HBaseConfiguration.create(conf);
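The "No FileSystem for scheme: hdfs" error can also occur when the HDFS client classes are simply missing from the classpath, or when a fat-jar build overwrites the `META-INF/services/org.apache.hadoop.fs.FileSystem` service file that registers the hdfs scheme. As a sketch (the version number here is an assumption; match it to your cluster's Hadoop release), ensuring the HDFS client jar is on the classpath in a Maven project looks like this:

```xml
<!-- Sketch only: pick the version that matches your cluster's Hadoop release -->
<dependency>
    <groupId>org.apache.hadoop</groupId>
    <artifactId>hadoop-hdfs</artifactId>
    <version>2.6.0</version>
</dependency>
```

With the jar present, its service file registers `DistributedFileSystem` for the hdfs scheme automatically, and the explicit `fs.hdfs.impl` setting above becomes a belt-and-braces safeguard rather than a requirement.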