hdfs zkfc -formatZK error

Date: 2017-03-30 12:04:53

Tags: hadoop apache-zookeeper

I have a cluster consisting of three nodes:

hadoop-master (namenode) 192.168.4.128
hadoop-slave-1 (secondary namenode) 192.168.4.111
hadoop-slave-3 (datanode) 192.168.4.106
On hadoop-master, the jps command shows:

15799 JournalNode
15929 Jps
14978 QuorumPeerMain

But when I execute the command hdfs zkfc –formatZK on the namenode, I get this error:

17/03/30 07:33:09 INFO zookeeper.ZooKeeper: Session: 0x15b1ecb76480000 closed
17/03/30 07:33:09 FATAL tools.DFSZKFailoverController: Got a fatal error, exiting now
org.apache.hadoop.HadoopIllegalArgumentException: Bad argument: –formatZK
        at org.apache.hadoop.ha.ZKFailoverController.badArg(ZKFailoverController.java:251)
        at org.apache.hadoop.ha.ZKFailoverController.doRun(ZKFailoverController.java:214)
        at org.apache.hadoop.ha.ZKFailoverController.access$000(ZKFailoverController.java:61)
        at org.apache.hadoop.ha.ZKFailoverController$1.run(ZKFailoverController.java:172)
        at org.apache.hadoop.ha.ZKFailoverController$1.run(ZKFailoverController.java:168)
        at org.apache.hadoop.security.SecurityUtil.doAsLoginUserOrFatal(SecurityUtil.java:415)
        at org.apache.hadoop.ha.ZKFailoverController.run(ZKFailoverController.java:168)
        at org.apache.hadoop.hdfs.tools.DFSZKFailoverController.main(DFSZKFailoverController.java:181)
17/03/30 07:33:09 WARN ha.ActiveStandbyElector: Ignoring stale result from old client with sessionId 0x15b1ecb76480000
17/03/30 07:33:09 INFO zookeeper.ClientCnxn: EventThread shut down
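
Note that the stack trace reports the bad argument as –formatZK with a typographic en-dash rather than an ASCII hyphen, and ZKFC only recognizes the hyphen form. A minimal sketch of the two invocations, assuming the Hadoop bin directory is on PATH:

    # Recognized: the flag starts with an ASCII hyphen (-)
    hdfs zkfc -formatZK

    # Not recognized: an en-dash (–), often introduced by copy-pasting
    # from a formatted document, is passed through as a literal argument
    # and triggers "HadoopIllegalArgumentException: Bad argument: –formatZK"
    # hdfs zkfc –formatZK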

My zoo.cfg:

initLimit=10
syncLimit=5
dataDir=/usr/local/zookeeper/data/
clientPort=2181
DataLogDir=/usr/local/log/
server.1=hadoop-master:2888:3888
server.2=hadoop-slave-1:2889:3889
server.3=hadoop-slave-2:2890:3890
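
Two things may be worth checking here: zoo.cfg keys are matched case-sensitively, so the data-log key is normally spelled dataLogDir rather than DataLogDir, and hdfs zkfc -formatZK needs the ZooKeeper ensemble itself to be healthy. A quick sketch for probing the quorum, assuming zkServer.sh and nc are available on each host:

    # Ask each ZooKeeper server for its role (leader/follower):
    zkServer.sh status

    # Or probe each node with the "ruok" four-letter command;
    # a healthy server answers "imok":
    echo ruok | nc hadoop-master 2181
    echo ruok | nc hadoop-slave-1 2181
    echo ruok | nc hadoop-slave-2 2181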

My slaves file is:

hadoop-slave-1
hadoop-slave-2
hadoop-master

My core-site.xml:

<property>
    <name>dfs.tmp.dir</name>
    <value>/opt/hadoop/data15</value>
</property>
<property>
    <name>fs.default.name</name>
    <value>hdfs://hadoop-master:8020</value>
</property>
<property>
    <name>dfs.permissions</name>
    <value>false</value>
</property>
<property>
    <name>dfs.journalnode.edits.dir</name>
    <value>/usr/local/journal/node/local/data</value>
</property>
<property>
    <name>fs.defaultFS</name>
    <value>hdfs://mycluster</value>
</property>
<property>
    <name>hadoop.tmp.dir</name>
    <value>/tmp</value>
</property>
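
This file sets both the deprecated fs.default.name (hdfs://hadoop-master:8020) and its replacement fs.defaultFS (hdfs://mycluster) to different values; for an HA cluster the effective default filesystem should resolve to the nameservice, and which of the two keys wins can be surprising. It is easy to check with the standard hdfs getconf utility:

    # Print the default filesystem as this installation resolves it;
    # for HA with automatic failover it should be hdfs://mycluster:
    hdfs getconf -confKey fs.defaultFS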

My hdfs-site.xml:

<property>
    <name>dfs.replication</name>
    <value>2</value>
</property>
<property>
    <name>dfs.name.dir</name>
    <value>/opt/hadoop/data16</value>
    <final>true</final>
</property>
<property>
    <name>dfs.data.dir</name>
    <value>/opt/hadoop/data17</value>
    <final>true</final>
</property>
<property>
    <name>dfs.webhdfs.enabled</name>
    <value>true</value>
</property>
<property>
    <name>dfs.namenode.secondary.http-address</name>
    <value>hadoop-slave-1:50090</value>
</property>
<property>
    <name>dfs.nameservices</name>
    <value>mycluster</value>
    <final>true</final>
</property>
<property>
    <name>dfs.ha.namenodes.mycluster</name>
    <value>hadoop-master,hadoop-slave-1</value>
    <final>true</final>
</property>
<property>
    <name>dfs.namenode.rpc-address.mycluster.hadoop-master</name>
    <value>hadoop-master:8020</value>
</property>
<property>
    <name>dfs.namenode.rpc-address.mycluster.hadoop-slave-1</name>
    <value>hadoop-slave-1:8020</value>
</property>
<property>
    <name>dfs.namenode.http-address.mycluster.hadoop-master</name>
    <value>hadoop-master:50070</value>
</property>
<property>
    <name>dfs.namenode.http-address.mycluster.hadoop-slave-1</name>
    <value>hadoop-slave-1:50070</value>
</property>
<property>
    <name>dfs.namenode.shared.edits.dir</name>
    <value>qjournal://hadoop-master:8485;hadoop-slave-2:8485;hadoop-slave-1:8485/mycluster</value>
</property>
<property>
    <name>dfs.ha.automatic-failover.enabled</name>
    <value>true</value>
</property>
<property>
    <name>ha.zookeeper.quorum</name>
    <value>hadoop-master:2181,hadoop-slave-1:2181,hadoop-slave-2:2181</value>
</property>
<property>
    <name>dfs.ha.fencing.methods</name>
    <value>sshfence</value>
</property>
<property>
    <name>dfs.ha.fencing.ssh.private-key-files</name>
    <value>root/.ssh/id_rsa</value>
</property>
<property>
    <name>dfs.ha.fencing.ssh.connect-timeout</name>
    <value>3000</value>
</property>
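
Two details here may deserve a second look: dfs.ha.fencing.ssh.private-key-files is set to the relative path root/.ssh/id_rsa (typically this is the absolute path /root/.ssh/id_rsa), and the HA keys should resolve consistently. A quick sketch for verifying both, again using hdfs getconf:

    # Confirm the nameservice and its NameNodes as Hadoop sees them:
    hdfs getconf -confKey dfs.nameservices
    hdfs getconf -confKey dfs.ha.namenodes.mycluster

    # The fencing key is configured as "root/.ssh/id_rsa" (relative);
    # check whether the absolute path was intended:
    ls -l /root/.ssh/id_rsa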

Before running stop-dfs.sh on the namenode (hadoop-master), I had already applied hdfs zkfc –formatZK on all nodes.

Is there anything wrong in my configuration? And is it necessary to issue hdfs namenode -format before executing hdfs zkfc –formatZK?
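
For context, a sketch of the commonly documented first-time bring-up order for HDFS HA with automatic failover (Hadoop 2.x command names; exact paths and which daemons run on which host depend on the installation):

    # 1. Start the JournalNodes on every JN host:
    hadoop-daemon.sh start journalnode

    # 2. Format and start the first NameNode (here hadoop-master):
    hdfs namenode -format
    hadoop-daemon.sh start namenode

    # 3. On the standby NameNode (here hadoop-slave-1), copy its
    #    metadata from the active one:
    hdfs namenode -bootstrapStandby

    # 4. Initialize the failover znode in ZooKeeper once, from either
    #    NameNode -- note the plain ASCII hyphen in -formatZK:
    hdfs zkfc -formatZK

    # 5. Start the whole HDFS layer (NameNodes, DataNodes, ZKFCs):
    start-dfs.sh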

0 Answers:

No answers yet.