HDFS fsck reports the filesystem under '/' as CORRUPT

Time: 2017-02-14 09:39:09

Tags: hadoop hdfs

I have installed an open-source Hadoop 2.7.3 cluster (2 masters + 3 slaves) on AWS EC2 instances. I am using the cluster to integrate it with Kafka Connect.

The cluster setup was completed last month, and the Kafka Connect setup was finished over the last two weeks. Since then we have been able to land Kafka topic records on HDFS and run all kinds of operations on them.

Since yesterday afternoon I have been getting the errors below. When I copy a new file from local to the cluster, it can be read at first, but after a while it again starts throwing a similar IOException:

17/02/14 07:57:55 INFO hdfs.DFSClient: No node available for BP-1831277630-10.16.37.124-1484306078618:blk_1073793876_55013 file=/test/inputdata/derby.log
17/02/14 07:57:55 INFO hdfs.DFSClient: Could not obtain BP-1831277630-10.16.37.124-1484306078618:blk_1073793876_55013 from any node: java.io.IOException: No live nodes contain block BP-1831277630-10.16.37.124-1484306078618:blk_1073793876_55013 after checking nodes = [], ignoredNodes = null No live nodes contain current block Block locations: Dead nodes: . Will get new block locations from namenode and retry...
17/02/14 07:57:55 WARN hdfs.DFSClient: DFS chooseDataNode: got # 1 IOException, will wait for 499.3472970548959 msec.
17/02/14 07:57:55 INFO hdfs.DFSClient: No node available for BP-1831277630-10.16.37.124-1484306078618:blk_1073793876_55013 file=/test/inputdata/derby.log
17/02/14 07:57:55 INFO hdfs.DFSClient: Could not obtain BP-1831277630-10.16.37.124-1484306078618:blk_1073793876_55013 from any node: java.io.IOException: No live nodes contain block BP-1831277630-10.16.37.124-1484306078618:blk_1073793876_55013 after checking nodes = [], ignoredNodes = null No live nodes contain current block Block locations: Dead nodes: . Will get new block locations from namenode and retry...
17/02/14 07:57:55 WARN hdfs.DFSClient: DFS chooseDataNode: got # 2 IOException, will wait for 4988.873277172643 msec.
17/02/14 07:58:00 INFO hdfs.DFSClient: No node available for BP-1831277630-10.16.37.124-1484306078618:blk_1073793876_55013 file=/test/inputdata/derby.log
17/02/14 07:58:00 INFO hdfs.DFSClient: Could not obtain BP-1831277630-10.16.37.124-1484306078618:blk_1073793876_55013 from any node: java.io.IOException: No live nodes contain block BP-1831277630-10.16.37.124-1484306078618:blk_1073793876_55013 after checking nodes = [], ignoredNodes = null No live nodes contain current block Block locations: Dead nodes: . Will get new block locations from namenode and retry...
17/02/14 07:58:00 WARN hdfs.DFSClient: DFS chooseDataNode: got # 3 IOException, will wait for 8598.311122824263 msec.
17/02/14 07:58:09 WARN hdfs.DFSClient: Could not obtain block: BP-1831277630-10.16.37.124-1484306078618:blk_1073793876_55013 file=/test/inputdata/derby.log No live nodes contain current block Block locations: Dead nodes: . Throwing a BlockMissingException
17/02/14 07:58:09 WARN hdfs.DFSClient: Could not obtain block: BP-1831277630-10.16.37.124-1484306078618:blk_1073793876_55013 file=/test/inputdata/derby.log No live nodes contain current block Block locations: Dead nodes: . Throwing a BlockMissingException
17/02/14 07:58:09 WARN hdfs.DFSClient: DFS Read
org.apache.hadoop.hdfs.BlockMissingException: Could not obtain block: BP-1831277630-10.16.37.124-1484306078618:blk_1073793876_55013 file=/test/inputdata/derby.log
        at org.apache.hadoop.hdfs.DFSInputStream.chooseDataNode(DFSInputStream.java:983)
        at org.apache.hadoop.hdfs.DFSInputStream.blockSeekTo(DFSInputStream.java:642)
        at org.apache.hadoop.hdfs.DFSInputStream.readWithStrategy(DFSInputStream.java:882)
        at org.apache.hadoop.hdfs.DFSInputStream.read(DFSInputStream.java:934)
        at java.io.DataInputStream.read(DataInputStream.java:100)
        at org.apache.hadoop.io.IOUtils.copyBytes(IOUtils.java:85)
        at org.apache.hadoop.io.IOUtils.copyBytes(IOUtils.java:59)
        at org.apache.hadoop.io.IOUtils.copyBytes(IOUtils.java:119)
        at org.apache.hadoop.fs.shell.Display$Cat.printToStdout(Display.java:107)
        at org.apache.hadoop.fs.shell.Display$Cat.processPath(Display.java:102)
        at org.apache.hadoop.fs.shell.Command.processPaths(Command.java:317)
        at org.apache.hadoop.fs.shell.Command.processPathArgument(Command.java:289)
        at org.apache.hadoop.fs.shell.Command.processArgument(Command.java:271)
        at org.apache.hadoop.fs.shell.Command.processArguments(Command.java:255)
        at org.apache.hadoop.fs.shell.Command.processRawArguments(Command.java:201)
        at org.apache.hadoop.fs.shell.Command.run(Command.java:165)
        at org.apache.hadoop.fs.FsShell.run(FsShell.java:287)
        at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
        at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:84)
        at org.apache.hadoop.fs.FsShell.main(FsShell.java:340)
cat: Could not obtain block: BP-1831277630-10.16.37.124-1484306078618:blk_1073793876_55013 file=/test/inputdata/derby.log

When I run hdfs fsck /, I get:

Total size:    667782677 B
 Total dirs:    406
 Total files:   44485
 Total symlinks:                0
 Total blocks (validated):      43767 (avg. block size 15257 B)
  ********************************
  UNDER MIN REPL'D BLOCKS:      43766 (99.99772 %)
  dfs.namenode.replication.min: 1
  CORRUPT FILES:        43766
  MISSING BLOCKS:       43766
  MISSING SIZE:         667781648 B
  CORRUPT BLOCKS:       43766
  ********************************
 Minimally replicated blocks:   1 (0.0022848265 %)
 Over-replicated blocks:        0 (0.0 %)
 Under-replicated blocks:       0 (0.0 %)
 Mis-replicated blocks:         0 (0.0 %)
 Default replication factor:    3
 Average block replication:     6.8544796E-5
 Corrupt blocks:                43766
 Missing replicas:              0 (0.0 %)
 Number of data-nodes:          3
 Number of racks:               1
FSCK ended at Tue Feb 14 07:59:10 UTC 2017 in 932 milliseconds


The filesystem under path '/' is CORRUPT

This means that all of my files have somehow become corrupt.

I want to recover my HDFS and fix its corrupt state. I would also like to understand how this problem occurred so suddenly, and how I can prevent it in the future.

1 Answer:

Answer 0: (score: 1)

The entire filesystem being marked corrupt (43,766 blocks) is most likely the result of the dfs.datanode.data.dir directories being deleted outright, or of their value being changed in hdfs-site.xml. Whenever either of these is done, make sure the NameNode is also formatted and restarted.
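
As a quick sanity check (a minimal sketch, assuming a standard Hadoop 2.7.x install with the hdfs command on the PATH; the data-directory path below is a hypothetical example), you can confirm which data directories each DataNode is configured with and whether they still hold block data:

# Print the effective value of dfs.datanode.data.dir on a node
hdfs getconf -confKey dfs.datanode.data.dir

# On each DataNode, check that the directory still contains the block pool;
# the BP-... directory should match the block pool ID seen in the client errors
ls /path/to/dfs/data/current/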

If it is not, the NameNode still holds the block information and expects those blocks to be available on the DataNode(s). The scenario posted in the question matches this exactly.
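
To confirm that state (a minimal check; both commands are stock Hadoop 2.7.x tools), look at whether the NameNode sees live DataNodes that report far fewer blocks than it expects:

# Summary of live/dead DataNodes and per-node block counts as seen by the NameNode
hdfs dfsadmin -report

# List the files whose blocks the NameNode can no longer locate on any live node
hdfs fsck / -list-corruptfileblocks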

If the data directories were deleted, the blocks cannot be recovered. If only the value of the dfs.datanode.data.dir property in hdfs-site.xml was changed and the NameNode has not been formatted, restoring the previous value in hdfs-site.xml will help.
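
If the blocks really are gone, the remaining option is to clean up the dangling metadata so the namespace returns to a healthy state (a sketch using the standard fsck options in Hadoop 2.7.x; note that -delete permanently removes the affected file entries rather than recovering any data):

# Move what remains of the corrupt files to /lost+found
hdfs fsck / -move

# Or permanently drop the corrupt file entries from the namespace
# (this removes the files; it does not recover any data)
hdfs fsck / -delete

# fsck should then report the filesystem under '/' as HEALTHY again
hdfs fsck /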