NoClassDefFoundError when running HBase, and no errors in ZooKeeper

Time: 2019-04-29 08:52:20

Tags: hadoop hbase config

I have created a standalone Hadoop cluster using this tutorial. Then, following this tutorial, I installed HBase on top of Hadoop.

I start Hadoop with:

cd /usr/local/hadoop/sbin/
./start-all.sh
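
(A side note, not part of my original steps: on recent Hadoop releases start-all.sh is deprecated and just delegates to the per-component scripts, so the equivalent would roughly be:)

cd /usr/local/hadoop/sbin/
./start-dfs.sh
./start-yarn.sh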

I start HBase with:

cd /usr/local/hbase/bin
./start-hbase.sh

Then, when I run jps, I get:

3761 Jps
835 NameNode
966 DataNode
3480 HMaster
3608 HRegionServer
1465 ResourceManager
1610 NodeManager
3418 HQuorumPeer
1150 SecondaryNameNode

But after a while it shows:

1779 SecondaryNameNode
1557 DataNode
2870 HQuorumPeer
2200 NodeManager
2061 ResourceManager
3246 Jps
1423 NameNode

So that is a strong indication that something went wrong. I then checked the ZooKeeper log in /usr/local/hbase/logs/hbase-hduser-zookeeper-stal.log, which shows:

2019-04-29 07:54:45,677 INFO  [main] server.ZooKeeperServer: Server environment:java.io.tmpdir=/tmp
2019-04-29 07:54:45,677 INFO  [main] server.ZooKeeperServer: Server environment:java.compiler=<NA>
2019-04-29 07:54:45,677 INFO  [main] server.ZooKeeperServer: Server environment:os.name=Linux
2019-04-29 07:54:45,678 INFO  [main] server.ZooKeeperServer: Server environment:os.arch=amd64
2019-04-29 07:54:45,678 INFO  [main] server.ZooKeeperServer: Server environment:os.version=4.15.0-47-generic
2019-04-29 07:54:45,678 INFO  [main] server.ZooKeeperServer: Server environment:user.name=hduser
2019-04-29 07:54:45,678 INFO  [main] server.ZooKeeperServer: Server environment:user.home=/home/hduser
2019-04-29 07:54:45,678 INFO  [main] server.ZooKeeperServer: Server environment:user.dir=/home/hduser
2019-04-29 07:54:45,782 INFO  [main] server.ZooKeeperServer: tickTime set to 3000
2019-04-29 07:54:45,782 INFO  [main] server.ZooKeeperServer: minSessionTimeout set to -1
2019-04-29 07:54:45,782 INFO  [main] server.ZooKeeperServer: maxSessionTimeout set to 90000
2019-04-29 07:54:46,780 INFO  [main] server.NIOServerCnxnFactory: binding to port 0.0.0.0/0.0.0.0:2181

That does not seem to contain any errors.
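
(As an extra sanity check, not from the logs above: ZooKeeper's four-letter-word commands can confirm that the HQuorumPeer on port 2181 is actually serving requests. This is a sketch assuming nc is installed and the four-letter-word commands are not disabled in this ZooKeeper build.)

echo ruok | nc localhost 2181   # a healthy server replies: imok
echo stat | nc localhost 2181   # prints version, connected clients and the server mode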

So I checked the HBase master log at /usr/local/hbase/logs/hbase-hduser-master-stal.log for errors, and got:

2019-04-29 07:55:11,513 ERROR [main] master.HMasterCommandLine: Master exiting
java.lang.RuntimeException: Failed construction of Master: class org.apache.hadoop.hbase.master.HMaster.
        at org.apache.hadoop.hbase.master.HMaster.constructMaster(HMaster.java:3100)
        at org.apache.hadoop.hbase.master.HMasterCommandLine.startMaster(HMasterCommandLine.java:236)
        at org.apache.hadoop.hbase.master.HMasterCommandLine.run(HMasterCommandLine.java:140)
        at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
        at org.apache.hadoop.hbase.util.ServerCommandLine.doMain(ServerCommandLine.java:149)
        at org.apache.hadoop.hbase.master.HMaster.main(HMaster.java:3111)
Caused by: java.lang.NoClassDefFoundError: org/apache/htrace/SamplerBuilder
        at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:644)
        at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:628)
        at org.apache.hadoop.hdfs.DistributedFileSystem.initialize(DistributedFileSystem.java:149)
        at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:2667)
        at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:93)
        at org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:2701)
        at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:2683)
        at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:372)
        at org.apache.hadoop.fs.Path.getFileSystem(Path.java:295)
        at org.apache.hadoop.hbase.util.CommonFSUtils.getRootDir(CommonFSUtils.java:362)
        at org.apache.hadoop.hbase.util.CommonFSUtils.isValidWALRootDir(CommonFSUtils.java:411)
        at org.apache.hadoop.hbase.util.CommonFSUtils.getWALRootDir(CommonFSUtils.java:387)
        at org.apache.hadoop.hbase.regionserver.HRegionServer.initializeFileSystem(HRegionServer.java:704)
        at org.apache.hadoop.hbase.regionserver.HRegionServer.<init>(HRegionServer.java:613)
        at org.apache.hadoop.hbase.master.HMaster.<init>(HMaster.java:489)
        at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
        at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
        at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
        at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
        at org.apache.hadoop.hbase.master.HMaster.constructMaster(HMaster.java:3093)
        ... 5 more
Caused by: java.lang.ClassNotFoundException: org.apache.htrace.SamplerBuilder
        at java.net.URLClassLoader.findClass(URLClassLoader.java:382)
        at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
        at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:349)
        at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
        ... 25 more
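
(One check that is not in the post but narrows this down: search the HBase jar directories for the class the stack trace says is missing. A sketch, assuming unzip is available:)

for jar in /usr/local/hbase/lib/*.jar /usr/local/hbase/lib/client-facing-thirdparty/*.jar; do
  # print every jar that actually bundles org.apache.htrace.SamplerBuilder
  unzip -l "$jar" 2>/dev/null | grep -q 'org/apache/htrace/SamplerBuilder.class' && echo "$jar"
done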

A similar question was answered as follows:

The HBase 2.1.0 release uses HTrace, an incubating Apache Foundation project.

There is a folder for third-party libraries inside the HBase lib folder, client-facing-thirdparty. You need to copy htrace-core-3.1.0-incubating.jar from there into the HBase lib directory. (see reference)

There is another solution on the Cloudera Community that changes the configuration instead of manually adding the library.
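
In command form, that manual-copy suggestion amounts to roughly the following (a sketch; it assumes the 3.1.0 jar actually ships under client-facing-thirdparty, which, as shown further below, is not the case on my install):

cp /usr/local/hbase/lib/client-facing-thirdparty/htrace-core-3.1.0-incubating.jar /usr/local/hbase/lib/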

The first solution consists of:

HMaster refuses to start due to the following error:

java.lang.RuntimeException: Failed construction of Master: class org.apache.hadoop.hbase.master.HMaster
Caused by: java.lang.ClassNotFoundException: org.apache.htrace.SamplerBuilder

This is because in HBase 2.0 there are two different versions of htrace-core-x.x.x-incubating.jar:

cd /usr/local/hbase/lib/client-facing-thirdparty/
htrace-core-3.1.0-incubating.jar
htrace-core-4.2.0-incubating.jar

Currently, only version 3.1.0 has the required class SamplerBuilder. We need to remove version 4.2.0:

mv htrace-core-4.2.0-incubating.jar htrace-core-4.2.0-incubating.jar.bak

However, when I cd to /usr/local/hbase/lib/client-facing-thirdparty and do ls -a, I get:

.   audience-annotations-0.5.0.jar  findbugs-annotations-1.3.9-1.jar   log4j-1.2.17.jar      slf4j-log4j12-1.7.25.jar
..  commons-logging-1.2.jar         htrace-core4-4.2.0-incubating.jar  slf4j-api-1.7.25.jar

As you can see, there is only one htrace file, not two. So I downloaded htrace-3.1.0 from here, copied it to /usr/local/hbase/lib/client-facing-thirdparty, and renamed htrace-core4-4.2.0-incubating.jar to htrace-core4-4.2.0-incubating.jar.bak. Then I restarted Hadoop and HBase. Still no change; jps now does not show HMaster or HRegionServer at all.
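
Roughly, the commands I ran for that were (the path the jar was downloaded to is just a placeholder here):

cd /usr/local/hbase/lib/client-facing-thirdparty
# put the downloaded 3.1.0 jar next to the existing one
cp /path/to/downloaded/htrace-core-3.1.0-incubating.jar .
# park the 4.x jar out of the way
mv htrace-core4-4.2.0-incubating.jar htrace-core4-4.2.0-incubating.jar.bak
# then restart HBase
/usr/local/hbase/bin/stop-hbase.sh
/usr/local/hbase/bin/start-hbase.sh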


My HBase configuration file (hbase-site.xml):

<configuration>
  <property>
    <name>hbase.unsafe.stream.capability.enforce</name>
    <value>false</value>
  </property>
  <property>
    <name>zookeeper.znode.parent</name>
    <value>/hbase</value>
  </property>
  <property>
    <name>hbase.rootdir</name>
    <value>hdfs://localhost:9000/user/hduser/hbase</value>
  </property>
  <property>
    <name>hbase.cluster.distributed</name>
    <value>true</value>
  </property>
  <property>
    <name>hbase.zookeeper.quorum</name>
    <value>localhost</value>
  </property>
  <property>
    <name>hbase.master</name>
    <value>localhost:60010</value>
  </property>
  <property>
    <name>hbase.zookeeper.property.clientPort</name>
    <value>2181</value>
  </property>
  <property>
    <name>hbase.zookeeper.property.dataDir</name>
    <value>hdfs://localhost:9000/user/hduser/zookeeper</value>
  </property>
  <property>
    <name>hbase.tmp.dir</name>
    <value>/hbase/tmp</value>
    <description>Temporary directory on the local filesystem.</description>
  </property>
</configuration>
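
Since hbase.rootdir points at HDFS, one quick check (mine, not from any of the tutorials) is whether the NameNode at localhost:9000 actually serves that path, assuming the Hadoop binaries are on the PATH:

hdfs dfs -ls hdfs://localhost:9000/user/hduser/hbase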

And hbase-env.sh is as follows:

export JAVA_HOME=/usr/lib/jvm/java-8-openjdk-amd64
export HBASE_REGIONSERVERS=/usr/local/hbase/conf/regionservers
export HBASE_MANAGES_ZK=true
export HBASE_PID_DIR=/var/hbase/pids
export HBASE_OPTS="$HBASE_OPTS -XX:+UseConcMarkSweepGC"

So, what should I do now? Any help is appreciated.

0 Answers:

No answers yet.