Using a Mapper / Reducer with HBase

Asked: 2015-07-04 16:42:39

Tags: java hadoop hbase

I am trying to write a Mapper/Reducer for HBase, so I added my jar. But after adding the jar file to the lib directory, I can no longer start HBase. I want to debug what went wrong. How can I change the log level, and would that help? Here is the exception:

java.lang.RuntimeException: Failed construction of Master: class org.apache.hadoop.hbase.master.HMasterCommandLine$LocalHMaster
        at org.apache.hadoop.hbase.util.JVMClusterUtil.createMasterThread(JVMClusterUtil.java:143)
        at org.apache.hadoop.hbase.LocalHBaseCluster.addMaster(LocalHBaseCluster.java:217)
        at org.apache.hadoop.hbase.LocalHBaseCluster.<init>(LocalHBaseCluster.java:153)
        at org.apache.hadoop.hbase.master.HMasterCommandLine.startMaster(HMasterCommandLine.java:224)
        at org.apache.hadoop.hbase.master.HMasterCommandLine.run(HMasterCommandLine.java:139)
        at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
        at org.apache.hadoop.hbase.util.ServerCommandLine.doMain(ServerCommandLine.java:126)
        at org.apache.hadoop.hbase.master.HMaster.main(HMaster.java:2290)
Caused by: java.lang.NoSuchMethodError: org.apache.hadoop.ipc.RPC.getProtocolProxy(Ljava/lang/Class;JLjava/net/InetSocketAddress;Lorg/apache/hadoop/security/UserGroupInformation;Lorg/apache/hadoop/conf/Configuration;Ljavax/net/SocketFactory;ILorg/apache/hadoop/io/retry/RetryPolicy;Ljava/util/concurrent/atomic/AtomicBoolean;)Lorg/apache/hadoop/ipc/ProtocolProxy;
        at org.apache.hadoop.hdfs.NameNodeProxies.createNNProxyWithClientProtocol(NameNodeProxies.java:420)
        at org.apache.hadoop.hdfs.NameNodeProxies.createNonHAProxy(NameNodeProxies.java:316)
        at org.apache.hadoop.hdfs.NameNodeProxies.createProxy(NameNodeProxies.java:178)
        at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:665)
        at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:601)
        at org.apache.hadoop.hdfs.DistributedFileSystem.initialize(DistributedFileSystem.java:148)
        at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:2591)
        at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:89)
        at org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:2625)
        at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:2607)
        at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:368)
        at org.apache.hadoop.fs.Path.getFileSystem(Path.java:296)
        at org.apache.hadoop.hbase.util.FSUtils.getRootDir(FSUtils.java:1004)
        at org.apache.hadoop.hbase.regionserver.HRegionServer.<init>(HRegionServer.java:562)
        at org.apache.hadoop.hbase.master.HMaster.<init>(HMaster.java:364)
        at org.apache.hadoop.hbase.master.HMasterCommandLine$LocalHMaster.<init>(HMasterCommandLine.java:307)
        at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
        at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
        at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
        at java.lang.reflect.Constructor.newInstance(Constructor.java:408)
        at org.apache.hadoop.hbase.util.JVMClusterUtil.createMasterThread(JVMClusterUtil.java:139)
        ... 7 more
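On the log-level part of the question: HBase configures its logging through the log4j properties file in its conf directory, so the verbosity of individual packages can be raised there. A minimal sketch (exact logger names are the standard package prefixes; verify against the file bundled with your HBase version):

```
# In $HBASE_HOME/conf/log4j.properties — turn up HBase and RPC logging.
log4j.logger.org.apache.hadoop.hbase=DEBUG
log4j.logger.org.apache.hadoop.ipc=DEBUG
```

Note that more verbose logging would not change the failure above: a NoSuchMethodError is thrown by the JVM when a method that existed at compile time is missing at runtime, which almost always points to mismatched jar versions on the classpath rather than a configuration problem.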

1 Answer:

Answer 0 (score: 0)

So it looks like the error was caused by the Hadoop libraries bundled in HBase's lib directory (hadoop-*-2.5.1) not matching my actual Hadoop installation (hadoop-*-2.6.0). My jar was looking for a method that does not exist in the older Hadoop libraries, which is why it failed. this answer made me realize the problem. After I copied all the hadoop-*-2.6.0 jars into the lib directory, HBase started as expected. The same fix is mentioned in the HBase-Hadoop compatibility documentation.
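The jar swap described above can be sketched as a small shell procedure. The script below simulates the layout with temporary directories and placeholder jar files so it is self-contained; the real paths ($HBASE_HOME/lib and the jar locations under your Hadoop installation, typically share/hadoop/*) are assumptions you must adapt:

```shell
#!/bin/sh
# Sketch: replace the Hadoop jars bundled with HBase with the ones
# matching the installed Hadoop version. Paths here are simulated.
HBASE_LIB=$(mktemp -d)/hbase/lib
HADOOP_JARS=$(mktemp -d)/hadoop/jars
mkdir -p "$HBASE_LIB" "$HADOOP_JARS"

# Simulate the mismatch: HBase ships 2.5.1, the cluster runs 2.6.0.
touch "$HBASE_LIB/hadoop-common-2.5.1.jar" "$HBASE_LIB/hadoop-hdfs-2.5.1.jar"
touch "$HADOOP_JARS/hadoop-common-2.6.0.jar" "$HADOOP_JARS/hadoop-hdfs-2.6.0.jar"

# 1. Remove the bundled hadoop-*-2.5.1 jars from HBase's lib directory.
rm -f "$HBASE_LIB"/hadoop-*-2.5.1.jar

# 2. Copy in the jars matching the installed Hadoop version.
cp "$HADOOP_JARS"/hadoop-*-2.6.0.jar "$HBASE_LIB"/

# Show the result: only 2.6.0 jars remain.
ls "$HBASE_LIB"
```

Restart HBase afterwards so the new classpath takes effect. The same principle applies in reverse: every Hadoop jar in lib must come from one consistent Hadoop release.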