HBase cannot create its directory in HDFS

Asked: 2017-07-01 17:06:55

Tags: hadoop hbase

I am following this tutorial to install HBase on Hadoop, but I have run into a problem.

Everything went well until the last step:

  

HBase creates its directory in HDFS. To see the created directory, browse to the Hadoop bin and type the following command.

$ ./bin/hadoop fs -ls /hbase

If everything goes well, it will give you the following output:

Found 7 items
drwxr-xr-x   - hbase users          0 2014-06-25 18:58 /hbase/.tmp
...

But when I run this command, I get /hbase: No such file or directory.

Here is my configuration.

Hadoop configuration

core-site.xml

<configuration>
    <property>
        <name>fs.defaultFS</name>
        <value>hdfs://localhost:9000</value>
    </property>
</configuration>

hdfs-site.xml

<configuration>
   <property>
      <name>dfs.replication</name>
      <value>1</value>
   </property>

   <property>
      <name>dfs.name.dir</name>
      <value>file:///home/marc/hadoopinfra/hdfs/namenode</value>
   </property>

   <property>
      <name>dfs.data.dir</name>
      <value>file:///home/marc/hadoopinfra/hdfs/datanode</value>
   </property>
</configuration>

mapred-site.xml

<configuration>
    <property>
        <name>mapreduce.framework.name</name>
        <value>yarn</value>
    </property>
</configuration>

yarn-site.xml

<configuration>
    <property>
        <name>yarn.nodemanager.aux-services</name>
        <value>mapreduce_shuffle</value>
    </property>
    <property>
        <name>yarn.nodemanager.env-whitelist</name>
        <value>JAVA_HOME,HADOOP_COMMON_HOME,HADOOP_HDFS_HOME,HADOOP_CONF_DIR,CLASSPATH_PREPEND_DISTCACHE,HADOOP_YARN_HOME,HADOOP_MAPRED_HOME</value>
    </property>
</configuration>

HBase configuration

hbase-site.xml

<configuration>
   <property>
      <name>hbase.rootdir</name>
      <value>hdfs://localhost:8030/hbase</value>
   </property>
   <property>
      <name>hbase.zookeeper.property.dataDir</name>
      <value>/home/marc/zookeeper</value>
   </property>
   <property>
      <name>hbase.cluster.distributed</name>
      <value>true</value>
   </property>
</configuration>

I can browse to http://localhost:50070 and http://localhost:8088/cluster.

How can I fix this?

Edit

Following Saurabh Suman's answer, I created the /hbase folder, but it is still empty.

In hbase-marc-master-marc-pc.log, I see the following exception. Is it related?

2017-07-01 20:31:59,349 FATAL [marc-pc:16000.activeMasterManager] master.HMaster: Failed to become active master
org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.security.AccessControlException): SIMPLE authentication is not enabled.  Available:[TOKEN]
    at org.apache.hadoop.ipc.Client.call(Client.java:1411)
    at org.apache.hadoop.ipc.Client.call(Client.java:1364)
    at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:206)
    at com.sun.proxy.$Proxy15.setSafeMode(Unknown Source)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:187)
    at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:102)
    at com.sun.proxy.$Proxy15.setSafeMode(Unknown Source)
    at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.setSafeMode(ClientNamenodeProtocolTranslatorPB.java:602)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:279)
    at com.sun.proxy.$Proxy16.setSafeMode(Unknown Source)
    at org.apache.hadoop.hdfs.DFSClient.setSafeMode(DFSClient.java:2264)
    at org.apache.hadoop.hdfs.DistributedFileSystem.setSafeMode(DistributedFileSystem.java:986)
    at org.apache.hadoop.hdfs.DistributedFileSystem.setSafeMode(DistributedFileSystem.java:970)
    at org.apache.hadoop.hbase.util.FSUtils.isInSafeMode(FSUtils.java:525)
    at org.apache.hadoop.hbase.util.FSUtils.waitOnSafeMode(FSUtils.java:971)
    at org.apache.hadoop.hbase.master.MasterFileSystem.checkRootDir(MasterFileSystem.java:429)
    at org.apache.hadoop.hbase.master.MasterFileSystem.createInitialFileSystemLayout(MasterFileSystem.java:153)
    at org.apache.hadoop.hbase.master.MasterFileSystem.<init>(MasterFileSystem.java:128)
    at org.apache.hadoop.hbase.master.HMaster.finishActiveMasterInitialization(HMaster.java:693)
    at org.apache.hadoop.hbase.master.HMaster.access$600(HMaster.java:189)
    at org.apache.hadoop.hbase.master.HMaster$2.run(HMaster.java:1803)
    at java.lang.Thread.run(Thread.java:748)
2017-07-01 20:31:59,351 FATAL [marc-pc:16000.activeMasterManager] master.HMaster: Unhandled exception. Starting shutdown.
org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.security.AccessControlException): SIMPLE authentication is not enabled.  Available:[TOKEN]
    at org.apache.hadoop.ipc.Client.call(Client.java:1411)
    at org.apache.hadoop.ipc.Client.call(Client.java:1364)
    at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:206)
    at com.sun.proxy.$Proxy15.setSafeMode(Unknown Source)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:187)
    at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:102)
    at com.sun.proxy.$Proxy15.setSafeMode(Unknown Source)
    at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.setSafeMode(ClientNamenodeProtocolTranslatorPB.java:602)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:279)
    at com.sun.proxy.$Proxy16.setSafeMode(Unknown Source)
    at org.apache.hadoop.hdfs.DFSClient.setSafeMode(DFSClient.java:2264)
    at org.apache.hadoop.hdfs.DistributedFileSystem.setSafeMode(DistributedFileSystem.java:986)
    at org.apache.hadoop.hdfs.DistributedFileSystem.setSafeMode(DistributedFileSystem.java:970)
    at org.apache.hadoop.hbase.util.FSUtils.isInSafeMode(FSUtils.java:525)
    at org.apache.hadoop.hbase.util.FSUtils.waitOnSafeMode(FSUtils.java:971)
    at org.apache.hadoop.hbase.master.MasterFileSystem.checkRootDir(MasterFileSystem.java:429)
    at org.apache.hadoop.hbase.master.MasterFileSystem.createInitialFileSystemLayout(MasterFileSystem.java:153)
    at org.apache.hadoop.hbase.master.MasterFileSystem.<init>(MasterFileSystem.java:128)
    at org.apache.hadoop.hbase.master.HMaster.finishActiveMasterInitialization(HMaster.java:693)
    at org.apache.hadoop.hbase.master.HMaster.access$600(HMaster.java:189)
    at org.apache.hadoop.hbase.master.HMaster$2.run(HMaster.java:1803)
    at java.lang.Thread.run(Thread.java:748)

2 Answers:

Answer 0 (score: 2)

The log shows that HBase ran into a problem while trying to become the active master, so it started shutting down.

My assumption is that HBase never managed to start up properly, so it never got to create the /hbase directory itself. That is also why the /hbase directory you created manually is still empty.
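To check this, you can verify whether the HMaster process stays up and look for fatal errors in the master log. A minimal sketch, assuming HBase is installed under /usr/local/Hbase as in the setup below (adjust the path and log file name to your installation):

# Is the HMaster process still running?
jps | grep HMaster

# Search the master log for FATAL errors (the file name contains your user and hostname)
grep FATAL /usr/local/Hbase/logs/hbase-*-master-*.log | tail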

I reproduced your error on a virtual machine and fixed it with the modified setup below.

Operating system: CentOS Linux release 7.2.1511

Virtualization software: Vagrant and VirtualBox

Java:

java -version
openjdk version "1.8.0_131"
OpenJDK Runtime Environment (build 1.8.0_131-b12)
OpenJDK 64-Bit Server VM (build 25.131-b12, mixed mode)

core-site.xml (HDFS)

<configuration>
   <property>
      <name>fs.default.name</name>
      <value>hdfs://localhost:8020</value>
   </property>
</configuration>

hbase-site.xml (HBase)

<configuration>
   <property>
      <name>hbase.rootdir</name>
      <value>file:/home/hadoop/HBase/HFiles</value>
   </property>

   <property>
      <name>hbase.zookeeper.property.dataDir</name>
      <value>/home/hadoop/zookeeper</value>
   </property>
   <property>
      <name>hbase.cluster.distributed</name>
      <value>true</value>
   </property>
   <property>
      <name>hbase.rootdir</name>
      <value>hdfs://localhost:8020/hbase</value>
   </property>
</configuration>
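Two things are worth pointing out here. First, hbase.rootdir appears twice in this file; as far as I know, the last definition wins, so the effective value is hdfs://localhost:8020/hbase, which matches the host and port of fs.default.name exactly. Second, your configuration pairs port 9000 in core-site.xml with port 8030 in hbase.rootdir; 8030 is the default port of the YARN resource scheduler, not the NameNode, which would fit the authentication error in your master log. As a quick check (assuming the hdfs command is on your PATH), you can print the filesystem address the cluster is actually configured with and make sure hbase.rootdir uses the same one:

hdfs getconf -confKey fs.defaultFS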

Directory ownership and permission adjustments

sudo su   # become the root user
cd /usr/local/

# Give the hadoop user ownership of the Hadoop installation
chown -R hadoop:root hadoop
chmod -R 755 hadoop

# Give the hadoop user ownership of the HBase installation
chown -R hadoop:root Hbase
chmod -R 755 Hbase
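You can verify the resulting ownership and permissions afterwards, for example:

ls -ld /usr/local/hadoop /usr/local/Hbase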

Result

After starting HBase with this setup, it created the /hbase directory automatically and populated it:

[hadoop@localhost conf]$ hdfs dfs -ls /hbase
Found 7 items
drwxr-xr-x   - hadoop supergroup          0 2017-07-03 14:26 /hbase/.tmp
drwxr-xr-x   - hadoop supergroup          0 2017-07-03 14:26 /hbase/MasterProcWALs
drwxr-xr-x   - hadoop supergroup          0 2017-07-03 14:26 /hbase/WALs
drwxr-xr-x   - hadoop supergroup          0 2017-07-03 14:26 /hbase/data
-rw-r--r--   1 hadoop supergroup         42 2017-07-03 14:26 /hbase/hbase.id
-rw-r--r--   1 hadoop supergroup          7 2017-07-03 14:26 /hbase/hbase.version
drwxr-xr-x   - hadoop supergroup          0 2017-07-03 14:26 /hbase/oldWALs

Answer 1 (score: 1)

The configuration files only cover what can be set up automatically; whatever cannot create itself you need to create by hand. So, create the directory in HDFS manually:

hdfs dfs -mkdir /hbase
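If HBase runs as a different user than the one who created the directory, that user may also need ownership of it. A hedged sketch, assuming the HBase daemons run as a user named hbase (substitute whichever user actually starts HBase):

# "hbase" is a hypothetical user name; grant the HBase service user ownership of its root directory
hdfs dfs -chown hbase:hbase /hbase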