Connection refused in the HBase shell when connecting HBase to HDFS

Asked: 2015-01-19 18:48:42

Tags: hadoop hbase hdfs hadoop2 cloudera-cdh

I am trying to connect HBase to HDFS. I have the HDFS NameNode running (bin/hdfs namenode) and the DataNode running (bin/hdfs datanode). I can also start HBase (sudo ./bin/start-hbase.sh) and the local region servers (sudo ./bin/local-regionservers.sh start 1 2). But when I try to execute a command from the HBase shell, I get the following error:

cis655stu@cis655stu-VirtualBox:/teaching/14f-cis655/proj-dtracing/hbase/hbase-0.99.0-SNAPSHOT$ ./bin/hbase shell
HBase Shell; enter 'help<RETURN>' for list of supported commands.
Type "exit<RETURN>" to leave the HBase Shell
Version 0.99.0-SNAPSHOT, rUnknown, Sat Aug  9 08:59:57 EDT 2014

hbase(main):001:0> list
TABLE                                                                                                    
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/teaching/14f-cis655/proj-dtracing/hbase/hbase-0.99.0-SNAPSHOT/lib/slf4j-log4j12-1.7.5.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/teaching/14f-cis655/proj-dtracing/hadoop-2.6.0/share/hadoop/common/lib/slf4j-log4j12-1.7.5.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
2015-01-19 13:33:07,179 WARN  [main] util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable

ERROR: Connection refused

Here is some help for this command:
List all tables in hbase. Optional regular expression parameter could
be used to filter the output. Examples:

  hbase> list
  hbase> list 'abc.*'
  hbase> list 'ns:abc.*'
  hbase> list 'ns:.*'

Here are the configuration files for HBase and Hadoop:

hbase-site.xml

<configuration>
<property>
<name>hbase.rootdir</name>
<value>hdfs://localhost:9000/hbase</value>
</property>

    <!--for pseudo-distributed execution-->
    <property>
      <name>hbase.cluster.distributed</name>
      <value>true</value>
    </property>
    <property>
      <name>hbase.master.wait.on.regionservers.mintostart</name>
      <value>1</value>
    </property>
      <property>
        <name>hbase.zookeeper.property.dataDir</name>
        <value>/teaching/14f-cis655/tmp/zk-deploy</value>
      </property>

    <!--for enabling collection of traces
    -->
    <property>
      <name>hbase.trace.spanreceiver.classes</name>
      <value>org.htrace.impl.LocalFileSpanReceiver</value>
    </property>
    <property>
      <name>hbase.local-file-span-receiver.path</name>
      <value>/teaching/14f-cis655/tmp/server-htrace.out</value>
    </property>
    </configuration>

hdfs-site.xml

<configuration>
<property>
   <name>dfs.replication</name>
   <value>1</value>
 </property>
 <property>
   <name>dfs.namenode.name.dir</name>
   <value>file:/teaching/14f-cis655/proj-dtracing/hadoop-2.6.0/yarn/yarn_data/hdfs/namenode</value>
 </property>
 <property>
   <name>dfs.datanode.data.dir</name>
   <value>file:/teaching/14f-cis655/proj-dtracing/hadoop-2.6.0/yarn/yarn_data/hdfs/datanode</value>
 </property>
 <property>
    <name>hadoop.trace.spanreceiver.classes</name>
    <value>org.htrace.impl.LocalFileSpanReceiver</value>
  </property>
  <property>
    <name>hadoop.local-file-span-receiver.path</name>
    <value>/teaching/14f-cis655/proj-dtracing/hadoop-2.6.0/logs/htrace.out</value>
  </property>
</configuration>

core-site.xml

<configuration>
<property>
   <name>fs.default.name</name>
   <value>hdfs://localhost:9000</value>
</property>
</configuration>

1 Answer:

Answer 0 (score: 3)

Please check whether you can reach HDFS from the shell:

  $ hdfs dfs -ls /hbase

Also make sure that you have all of the environment variables set in your hdfs-env.sh file:

HADOOP_CONF_LIB_NATIVE_DIR="/hadoop/lib/native"
HADOOP_OPTS="-Djava.library.path=/hadoop/lib"
HADOOP_HOME=/hadoop
YARN_HOME=/hadoop
HBASE_HOME=/hbase
HADOOP_HDFS_HOME=/hadoop
HBASE_MANAGES_ZK=true

Are you running Hadoop and HBase under the same OS user? If you use separate users, please check whether the HBase user is allowed to access HDFS.
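For example, a minimal check from the shell, assuming the HBase processes run as an OS user named hbase (substitute whichever user actually runs HBase on your machine):

  # Verify that the HBase user can list and write under the HBase root directory on HDFS.
  # The user name "hbase" below is only an assumption for this sketch.
  $ sudo -u hbase hdfs dfs -ls /hbase
  $ sudo -u hbase hdfs dfs -touchz /hbase/.permission-test
  $ sudo -u hbase hdfs dfs -rm /hbase/.permission-test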

Make sure you have copies of (or symlinks to) the hdfs-site.xml and core-site.xml files in your ${HBASE_HOME}/conf directory.
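For instance, a quick way to do this is with symlinks (a sketch only, assuming the Hadoop configuration lives under the default etc/hadoop directory of the Hadoop install shown in the question):

  # Link the Hadoop client configuration into HBase's conf directory so the HBase
  # processes resolve hdfs://localhost:9000 the same way the Hadoop tools do.
  $ ln -s /teaching/14f-cis655/proj-dtracing/hadoop-2.6.0/etc/hadoop/core-site.xml ${HBASE_HOME}/conf/core-site.xml
  $ ln -s /teaching/14f-cis655/proj-dtracing/hadoop-2.6.0/etc/hadoop/hdfs-site.xml ${HBASE_HOME}/conf/hdfs-site.xml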

Also note that the fs.default.name option is deprecated for YARN (though it must still work); you should consider using fs.defaultFS instead.
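For example, the core-site.xml entry would then read as follows (the value is the same one from your current config; only the property name changes):

  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://localhost:9000</value>
  </property>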

Do you use ZooKeeper? I ask because you have specified the hbase.zookeeper.property.dataDir option, but hbase.zookeeper.quorum and other important options are missing. Please read http://hbase.apache.org/book.html#zookeeper for more information.
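As a rough sketch, a minimal ZooKeeper section for a pseudo-distributed, single-node hbase-site.xml might look like the following; localhost and the default client port 2181 are assumptions for illustration, not values taken from your setup:

  <!-- Illustrative single-node ZooKeeper settings; adjust the hosts and port to your deployment. -->
  <property>
    <name>hbase.zookeeper.quorum</name>
    <value>localhost</value>
  </property>
  <property>
    <name>hbase.zookeeper.property.clientPort</name>
    <value>2181</value>
  </property>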

Please add the following options to hdfs-site.xml to make HBase work correctly (replace the $HBASE_USER variable with the system user that runs HBase):

<property>
  <name>hadoop.proxyuser.$HBASE_USER.groups</name>
  <value>*</value>
</property>
<property>
  <name>hadoop.proxyuser.$HBASE_USER.hosts</name>
  <value>*</value>
</property>
<property>
  <name>dfs.support.append</name>
  <value>true</value>
</property>