Hadoop connection refused with a high-availability NameNode configuration on Docker

Time: 2018-06-19 20:55:52

Tags: docker hadoop

I am currently configuring Hadoop NameNode HA with the Quorum Journal Manager, using three JournalNodes. Here is my configuration. hdfs-site.xml:

<configuration>

<property>
  <name>dfs.nameservices</name>
  <value>cluster</value>
  <description>
    Comma-separated list of nameservices.
  </description>
</property>

<property>
  <name>dfs.ha.namenodes.cluster</name>
  <value>nn1,nn2</value>
  <description>
  </description>
</property>
<property>
  <name>dfs.namenode.rpc-address.cluster.nn1</name>
  <value>namenode1:8020</value>
  <description>
  </description>
</property>

<property>
  <name>dfs.namenode.rpc-address.cluster.nn2</name>
  <value>namenode2:8020</value>
  <description>
  </description>
</property>

<property>
  <name>dfs.namenode.http-address.cluster.nn1</name>
  <value>namenode1:50070</value>
  <description>
  </description>
</property>

<property>
  <name>dfs.namenode.http-address.cluster.nn2</name>
  <value>namenode2:50070</value>
  <description>
  </description>
</property>

<property>
  <name>dfs.namenode.shared.edits.dir</name>
  <value>qjournal://journalnode1:8485;journalnode2:8485;journalnode3:8485/cluster</value>
</property>

<property>
  <name>dfs.client.failover.proxy.provider</name>
  <value></value>
  <description>
    The prefix (plus a required nameservice ID) for the class name of the
    configured Failover proxy provider for the host.  For more detailed
    information, please consult the "Configuration Details" section of
    the HDFS High Availability documentation.
  </description>
</property>

<property>
  <name>dfs.ha.fencing.methods</name>
  <value>shell(/bin/true)</value>
</property>

<property>
  <name>dfs.namenode.name.dir</name>
  <value>/hadoop_data/namenode</value>
  <description>NameNode directory for storing namespace and transaction logs</description>
</property>

<property>
  <name>dfs.datanode.data.dir</name>
  <value>/hadoop_data/datanode</value>
  <description>DataNode directory for storing data blocks</description>
</property>

<property>
  <name>dfs.replication</name>
  <value>1</value>
  <description>How many replicas of data blocks should exist in HDFS</description>
</property>

<property>
  <name>dfs.journalnode.edits.dir</name>
  <value>/hadoop_data/journalnode/</value>
  <description>
    The directory where the journal edit files are stored.
  </description>
</property>
</configuration>
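For comparison, the standard HDFS HA setup configures the failover proxy provider per nameservice, with the property name carrying the nameservice ID. With the nameservice `cluster` from this configuration, that would look like:

```xml
<property>
  <name>dfs.client.failover.proxy.provider.cluster</name>
  <value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value>
</property>
```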

And core-site.xml:

<configuration>
    <property>
        <name>fs.defaultFS</name>
        <value>hdfs://cluster</value>
    </property>
</configuration>

I am following this guide: https://docs.hortonworks.com/HDPDocuments/HDP2/HDP-2.6.2/bk_hadoop-high-availability/content/ha-nn-deploy-nn-cluster.html. The NameNodes, DataNodes, JournalNodes, and the YARN ResourceManager each run in separate containers on a single local Docker network. Unlike the guide, I do not want to set up ZooKeeper; I only want to run the startup commands for the two NameNodes. When I try to run:

hdfs namenode -bootstrapStandby

I get a ConnectionRefused error for the second NameNode's port, namenode2:8020. These are my docker run commands:

docker run -d --net hadoop --net-alias namenode1 --name namenode1 -h namenode1 -p 50070:50070 "nn"
docker run -d --net hadoop --net-alias namenode2 --name namenode2 -h namenode1 -p 50070:50070 "nn"
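To narrow down the ConnectionRefused, a quick TCP reachability probe can be run from inside either container (a bash sketch; the `check_port` helper name is mine, and the hostnames come from the docker run commands above):

```shell
#!/usr/bin/env bash
# Probe a TCP port using bash's built-in /dev/tcp pseudo-device.
# check_port is a hypothetical helper, not part of Hadoop or Docker.
check_port() {
  local host=$1 port=$2
  if timeout 2 bash -c "echo > /dev/tcp/${host}/${port}" 2>/dev/null; then
    echo "${host}:${port} reachable"
  else
    echo "${host}:${port} refused or unreachable"
  fi
}

# e.g. run inside the namenode1 container:
check_port namenode2 8020
check_port namenode2 50070
```

If the hostname resolves but nothing is listening on 8020, the refusal is coming from the NameNode process not being up rather than from Docker networking.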

Do I need to add the nameservice string "cluster" to the hostnames?
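For context, my understanding of the startup order the guide expects before `-bootstrapStandby` is run (the container placement is mine; `hadoop-daemon.sh` is the Hadoop 2.x daemon script used by HDP 2.6):

```shell
# On each JournalNode container:
hadoop-daemon.sh start journalnode

# On namenode1: format and start the active NameNode first
hdfs namenode -format
hadoop-daemon.sh start namenode

# On namenode2: bootstrapStandby copies the namespace from the
# running active NameNode, so namenode1 must already be up
hdfs namenode -bootstrapStandby
hadoop-daemon.sh start namenode
```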

0 Answers:

There are no answers yet.