Unable to form an HA Hadoop cluster on a multi-homed network

Time: 2015-04-27 17:53:48

Tags: java hadoop hdfs apache-zookeeper high-availability

I am trying to form a multi-node HA cluster using the following machine IPs:

Active NN - 172.16.105.---

Standby NN - 172.16.105.---

DataNode DN - 192.168.---

With the above configuration the cluster cannot be created; formatting the NameNode throws the following exception:

 15/04/27 16:15:18 INFO namenode.NNConf: Maximum size of an xattr: 16384
 15/04/27 16:15:18 FATAL namenode.NameNode: Exception in namenode join
 java.lang.IllegalArgumentException: Unable to construct journal, qjournal://ActiveNamnode:8485;StandbyNamenod:8485;Datanode:8485/mycluster
         at org.apache.hadoop.hdfs.server.namenode.FSEditLog.createJournal(FSEditLog.java:1555)
         at org.apache.hadoop.hdfs.server.namenode.FSEditLog.initJournals(FSEditLog.java:267)
         at org.apache.hadoop.hdfs.server.namenode.FSEditLog.initJournalsForWrite(FSEditLog.java:233)
         at org.apache.hadoop.hdfs.server.namenode.NameNode.format(NameNode.java:920)
         at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1354)
         at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1473)
 Caused by: java.lang.reflect.InvocationTargetException
         at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
         at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
         at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
         at java.lang.reflect.Constructor.newInstance(Constructor.java:526)
         at org.apache.hadoop.hdfs.server.namenode.FSEditLog.createJournal(FSEditLog.java:1553)
         ... 5 more
 Caused by: java.lang.NullPointerException
         at org.apache.hadoop.hdfs.qjournal.client.IPCLoggerChannelMetrics.getName(IPCLoggerChannelMetrics.java:107)
         at org.apache.hadoop.hdfs.qjournal.client.IPCLoggerChannelMetrics.create(IPCLoggerChannelMetrics.java:91)
         at org.apache.hadoop.hdfs.qjournal.client.IPCLoggerChannel.<init>(IPCLoggerChannel.java:166)
         at org.apache.hadoop.hdfs.qjournal.client.IPCLoggerChannel$1.createLogger(IPCLoggerChannel.java:146)
         at org.apache.hadoop.hdfs.qjournal.client.QuorumJournalManager.createLoggers(QuorumJournalManager.java:367)
         at org.apache.hadoop.hdfs.qjournal.client.QuorumJournalManager.createLoggers(QuorumJournalManager.java:149)
         at org.apache.hadoop.hdfs.qjournal.client.QuorumJournalManager.<init>(QuorumJournalManager.java:116)
         at org.apache.hadoop.hdfs.qjournal.client.QuorumJournalManager.<init>(QuorumJournalManager.java:105)
         ... 10 more
 15/04/27 16:15:18 INFO util.ExitUtil: Exiting with status 1
 15/04/27 16:15:18 INFO namenode.NameNode: SHUTDOWN_MSG:
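
The NullPointerException comes out of IPCLoggerChannelMetrics.getName, which in Hadoop 2.x builds a metrics name from the resolved IP of each journal-node address, so the failure pattern is consistent with one of the hostnames in the qjournal URI not resolving from the node running the format. Below is a minimal diagnostic sketch (a hypothetical helper, not part of Hadoop) that checks whether each journal host resolves; the host names are copied from the qjournal URI in the log and should be adjusted to the real ones:

 import java.net.InetSocketAddress;

 // Hypothetical diagnostic: verify that every journal-node hostname from
 // dfs.namenode.shared.edits.dir resolves on this machine. An unresolved
 // address leaves InetSocketAddress.getAddress() null, which matches the
 // NPE seen in IPCLoggerChannelMetrics.getName above.
 public class CheckJournalHosts {
     public static void main(String[] args) {
         // Hosts as they appear in the qjournal URI in the log.
         String[] hosts = {"ActiveNamnode", "StandbyNamenod", "Datanode"};
         for (String host : hosts) {
             InetSocketAddress addr = new InetSocketAddress(host, 8485);
             if (addr.isUnresolved() || addr.getAddress() == null) {
                 System.out.println(host + " does NOT resolve from this node");
             } else {
                 System.out.println(host + " -> " + addr.getAddress().getHostAddress());
             }
         }
     }
 }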

core-site.xml

<configuration>
 <property>
 <name>ha.zookeeper.quorum</name>
 <value>activenamenode:2181,standbynamenode:2181,slave:2181</value>
 </property>
 <property>
 <name>fs.defaultFS</name>
 <value>hdfs://myccluster</value>
 </property>
 </configuration>

hdfs-site.xml

<configuration>
 <property>
 <name>dfs.datanode.data.dir</name>
 <value>file:/C:/sample/myccluster/meta/Metadata/data/dfs/datanode</value>
 </property>
 <property>
 <name>dfs.namenode.name.dir</name>
 <value>file:/sample/myccluster/meta/Metadata/data/dfs/namenode</value>
 </property>
 <property>
 <name>dfs.nameservices</name>
 <value>myccluster</value>
 <final>true</final>
 </property>
 <property>
 <name>dfs.ha.namenodes.myccluster</name>
 <value>nn1,nn2</value>
 </property>
 <property>
 <name>dfs.ha.namenode.id</name>
 <value>nn1</value>
 </property>
 <property>
 <name>dfs.namenode.rpc-address.myccluster.nn1</name>
 <value>0.0.0.0:9000</value>
 </property>
 <property>
 <name>dfs.namenode.rpc-address.myccluster.nn2</name>
 <value>standbynamenode:9000</value>
 </property>
 <property>
 <name>dfs.namenode.http-address.myccluster.nn1</name>
 <value>0.0.0.0:50070</value>
 </property>
 <property>
 <name>dfs.namenode.http-address.myccluster.nn2</name>
 <value>standbynamenode:50070</value>
 </property>
 <property>
 <name>dfs.namenode.shared.edits.dir</name>
 <value>qjournal://activenamenode:8485;standbynamenode:8485;slave:8485/myccluster</value>
 </property>
 <property>
 <name>dfs.journalnode.edits.dir</name>
 <value>C:\sample\myccluster\meta\Metadata\data\dfs\journal\NamenodeLogs3</value>
 </property>
 <property>
 <name>dfs.ha.automatic-failover.enabled.myccluster</name>
 <value>true</value>
 </property>
 <property>
 <name>dfs.client.failover.proxy.provider.myccluster</name>
 <value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value>
 </property>
 <property>
 <name>dfs.replication</name>
 <value>3</value>
 </property>
 <property>
 <name>dfs.permissions</name>
 <value>false</value>
 </property>
 <property>
 <name>dfs.webhdfs.enabled</name>
 <value>true</value>
 </property>
 <property>
 <name>dfs.ha.fencing.methods</name>
 <value>shell(C:\sample\myccluster\meta\SDK\hadoop\bin\fencing.bat)</value>
 </property>
 <property>
 <name>dfs.hosts.exclude</name>
 <value>/sample/myccluster/meta/Metadata/exclude</value>
 </property>
 </configuration>
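
For reference, the logical name myccluster is resolved entirely on the client side through dfs.ha.namenodes.myccluster and the configured failover proxy provider, so the values above must agree with fs.defaultFS in core-site.xml. A minimal client sketch, assuming the cluster is up and these *-site.xml files are on the classpath, showing that clients address the HA nameservice by its logical URI rather than any physical NameNode host:

 import java.net.URI;

 import org.apache.hadoop.conf.Configuration;
 import org.apache.hadoop.fs.FileStatus;
 import org.apache.hadoop.fs.FileSystem;
 import org.apache.hadoop.fs.Path;

 // Minimal sketch: connect through the logical nameservice "myccluster".
 // The HDFS client selects the active NameNode via
 // dfs.client.failover.proxy.provider.myccluster (ConfiguredFailoverProxyProvider),
 // so no physical NameNode address appears in client code.
 public class ListRoot {
     public static void main(String[] args) throws Exception {
         Configuration conf = new Configuration(); // loads core-site.xml / hdfs-site.xml from the classpath
         FileSystem fs = FileSystem.get(URI.create("hdfs://myccluster"), conf);
         for (FileStatus status : fs.listStatus(new Path("/"))) {
             System.out.println(status.getPath());
         }
         fs.close();
     }
 }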

mapred-site.xml

<configuration>
    <property>
     <name>mapreduce.framework.name</name>
     <value>yarn</value>
    </property>
    <property>
     <name>mapreduce.jobhistory.address</name>
     <value>0.0.0.0:10020</value>
    </property>
    <property>
     <name>mapreduce.jobhistory.webapp.address</name>
     <value>0.0.0.0:19888</value>
    </property>
    </configuration>

1 Answer:

Answer 0 (score: 2)