Hadoop 2.9.0: incompatible cluster IDs

Date: 2018-02-19 00:05:38

Tags: hadoop, hadoop2

I am trying to run Hadoop 2.9.0 in pseudo-distributed mode following this tutorial. The datanode does not start, and its log file shows an error about incompatible clusterIDs between the namenode and the datanode. I found several StackOverflow answers on this problem, such as this and this. All of them suggest changing the clusterID in the VERSION files to the value shown in the log file (I am pasting the relevant part of the log file below). However, when I read the VERSION files (see the comparison sketch after the listings below), I found that the clusterID is the same in all three of them, and completely different from either of the IDs shown in the error log. Any idea what is going on? If you need any additional information, please leave a comment.

Here are the contents of the error log file:

2018-02-17 21:58:00,465 INFO org.apache.hadoop.hdfs.server.common.Storage: Using 1 threads to upgrade data directories (dfs.datanode.parallel.volumes.load.threads.num=1, dataDirs=1)
2018-02-17 21:58:00,499 INFO org.apache.hadoop.hdfs.server.common.Storage: Lock on /tmp/hadoop-uname/dfs/data/in_use.lock acquired by nodename 24965@mname.host.edu
2018-02-17 21:58:00,503 WARN org.apache.hadoop.hdfs.server.common.Storage: Failed to add storage directory [DISK]file:/tmp/hadoop-uname/dfs/data/
java.io.IOException: Incompatible clusterIDs in /tmp/hadoop-uname/dfs/data: namenode clusterID = CID-08ce647c-0922-4da6-accb-15620161d0b0; datanode clusterID = CID-130d222c-d2cf-4509-bde4-e58637bf9b0c
        at org.apache.hadoop.hdfs.server.datanode.DataStorage.doTransition(DataStorage.java:760)
        at org.apache.hadoop.hdfs.server.datanode.DataStorage.loadStorageDirectory(DataStorage.java:293)
        at org.apache.hadoop.hdfs.server.datanode.DataStorage.loadDataStorage(DataStorage.java:409)
        at org.apache.hadoop.hdfs.server.datanode.DataStorage.addStorageLocations(DataStorage.java:388)
        at org.apache.hadoop.hdfs.server.datanode.DataStorage.recoverTransitionRead(DataStorage.java:556)
        at org.apache.hadoop.hdfs.server.datanode.DataNode.initStorage(DataNode.java:1649)
        at org.apache.hadoop.hdfs.server.datanode.DataNode.initBlockPool(DataNode.java:1610)
        at org.apache.hadoop.hdfs.server.datanode.BPOfferService.verifyAndSetNamespaceInfo(BPOfferService.java:374)
        at org.apache.hadoop.hdfs.server.datanode.BPServiceActor.connectToNNAndHandshake(BPServiceActor.java:280)
        at org.apache.hadoop.hdfs.server.datanode.BPServiceActor.run(BPServiceActor.java:816)
        at java.lang.Thread.run(Thread.java:748)
2018-02-17 21:58:00,510 ERROR org.apache.hadoop.hdfs.server.datanode.DataNode: Initialization failed for Block pool <registering> (Datanode Uuid c8b193d5-50e2-4983-8831-a5ce4820e58f) service to localhost/127.0.0.1:9000. Exiting. 
java.io.IOException: All specified directories have failed to load.
        at org.apache.hadoop.hdfs.server.datanode.DataStorage.recoverTransitionRead(DataStorage.java:557)
        at org.apache.hadoop.hdfs.server.datanode.DataNode.initStorage(DataNode.java:1649)
        at org.apache.hadoop.hdfs.server.datanode.DataNode.initBlockPool(DataNode.java:1610)
        at org.apache.hadoop.hdfs.server.datanode.BPOfferService.verifyAndSetNamespaceInfo(BPOfferService.java:374)
        at org.apache.hadoop.hdfs.server.datanode.BPServiceActor.connectToNNAndHandshake(BPServiceActor.java:280)
        at org.apache.hadoop.hdfs.server.datanode.BPServiceActor.run(BPServiceActor.java:816)
        at java.lang.Thread.run(Thread.java:748)
2018-02-17 21:58:00,510 WARN org.apache.hadoop.hdfs.server.datanode.DataNode: Ending block pool service for: Block pool <registering> (Datanode Uuid c8b193d5-50e2-4983-8831-a5ce4820e58f) service to localhost/127.0.0.1:9000

Here are the contents of /tmp/hadoop-uname/dfs/data/current/VERSION:
#Sun Feb 18 18:07:28 2018
storageID=DS-c1c0868c-4ce0-45bf-8e76-4223d46587b1
clusterID=CID-0b9a5a3e-dbf0-4a36-aab1-f72e4a1e6993
cTime=0
datanodeUuid=ad38f7c2-f4ac-466f-8330-f06488df73f8
storageType=DATA_NODE
layoutVersion=-57

Here are the contents of /tmp/hadoop-uname/dfs/name/current/VERSION:
#Sun Feb 18 18:04:19 2018
namespaceID=1735122419
clusterID=CID-0b9a5a3e-dbf0-4a36-aab1-f72e4a1e6993
cTime=1518995059388
storageType=NAME_NODE
blockpoolID=BP-1715794989-127.0.1.1-1518995059388
layoutVersion=-63

Here are the contents of /tmp/hadoop-uname/dfs/namesecondary/current/VERSION:
#Sun Feb 18 17:46:57 2018
namespaceID=1735122419
clusterID=CID-0b9a5a3e-dbf0-4a36-aab1-f72e4a1e6993
cTime=1518992971549
storageType=NAME_NODE
blockpoolID=BP-2107146081-127.0.1.1-1518992971549
layoutVersion=-63
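
For reference, here is a minimal sketch of how the clusterID values can be compared across the three VERSION files and against the two IDs reported in the datanode log. The paths and CID strings are copied from the output above; this is just an illustrative check, not part of my setup:

```python
# Minimal sketch: read clusterID from each VERSION file and compare them
# with the namenode/datanode clusterIDs reported in the datanode log.
# Paths and CID strings are copied from the listings above.

version_files = [
    "/tmp/hadoop-uname/dfs/data/current/VERSION",
    "/tmp/hadoop-uname/dfs/name/current/VERSION",
    "/tmp/hadoop-uname/dfs/namesecondary/current/VERSION",
]

# clusterIDs reported in the "Incompatible clusterIDs" log line
log_namenode_cid = "CID-08ce647c-0922-4da6-accb-15620161d0b0"
log_datanode_cid = "CID-130d222c-d2cf-4509-bde4-e58637bf9b0c"

def read_cluster_id(path):
    """Return the clusterID value from a Hadoop VERSION properties file."""
    with open(path) as f:
        for line in f:
            if line.startswith("clusterID="):
                return line.strip().split("=", 1)[1]
    return None

for path in version_files:
    cid = read_cluster_id(path)
    print(f"{path}: {cid}")
    print("  matches log namenode CID:", cid == log_namenode_cid)
    print("  matches log datanode CID:", cid == log_datanode_cid)
```

In my case all three files print the same CID-0b9a5a3e-... value, and it matches neither of the IDs from the log, which is exactly what confuses me.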

This is what etc/hdfs-site.xml looks like:

<?xml version="1.0" encoding="UTF-8"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<!--
  Licensed under the Apache License, Version 2.0 (the "License");
  you may not use this file except in compliance with the License.
  You may obtain a copy of the License at

    http://www.apache.org/licenses/LICENSE-2.0

  Unless required by applicable law or agreed to in writing, software
  distributed under the License is distributed on an "AS IS" BASIS,
  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
  See the License for the specific language governing permissions and
  limitations under the License. See accompanying LICENSE file.
-->

<!-- Put site-specific property overrides in this file. -->

<configuration>
    <property>
        <name>dfs.replication</name>
        <value>1</value>
    </property>
</configuration>
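
If it helps, here is a small illustrative sketch that parses this hdfs-site.xml and reports which storage-directory properties are explicitly configured. The property names dfs.namenode.name.dir and dfs.datanode.data.dir are the standard HDFS settings from hdfs-default.xml, not something set in my file (which, as shown above, only sets dfs.replication), and the file path used below is an assumption:

```python
# Illustrative sketch: parse hdfs-site.xml and report which storage
# directory properties are explicitly configured. The file path is an
# assumption; adjust it to wherever your Hadoop config lives.
import xml.etree.ElementTree as ET

HDFS_SITE = "etc/hadoop/hdfs-site.xml"  # assumed location relative to the Hadoop install dir

# Standard HDFS property names (from hdfs-default.xml); not set in my file above
storage_props = ["dfs.namenode.name.dir", "dfs.datanode.data.dir"]

tree = ET.parse(HDFS_SITE)
configured = {
    prop.findtext("name"): prop.findtext("value")
    for prop in tree.getroot().findall("property")
}

print("Explicitly configured properties:", configured)
for name in storage_props:
    if name not in configured:
        # Falls back to file://${hadoop.tmp.dir}/dfs/{name,data}, and
        # hadoop.tmp.dir itself defaults to /tmp/hadoop-${user.name}
        print(f"{name} not set -> using default under /tmp/hadoop-<user>")
```

Since neither storage directory is set, everything defaults to locations under /tmp/hadoop-uname, which matches the paths in the log and in the VERSION listings above.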

0 Answers:

No answers