HDFS blocks question

Asked: 2015-09-25 14:04:49

Tags: hadoop hdfs cloudera

When I run the fsck command, it shows 68 total blocks (avg. block size 286572 B). How can I have only 68 blocks?

I recently installed CDH 5 (Hadoop 2.6.0).

-

[hdfs@cluster1 ~]$ hdfs fsck /

Connecting to namenode via http://cluster1.abc:50070
FSCK started by hdfs (auth:SIMPLE) from /192.168.101.241 for path / at Fri Sep 25 09:51:56 EDT 2015
....................................................................Status: HEALTHY
 Total size: 19486905 B
 Total dirs: 569
 Total files: 68
 Total symlinks: 0
 Total blocks (validated): 68 (avg. block size 286572 B)
 Minimally replicated blocks: 68 (100.0 %)
 Over-replicated blocks: 0 (0.0 %)
 Under-replicated blocks: 0 (0.0 %)
 Mis-replicated blocks: 0 (0.0 %)
 Default replication factor: 3
 Average block replication: 1.9411764
 Corrupt blocks: 0
 Missing replicas: 0 (0.0 %)
 Number of data-nodes: 3
 Number of racks: 1
 FSCK ended at Fri Sep 25 09:51:56 EDT 2015 in 41 milliseconds


The filesystem under path '/' is HEALTHY
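
Note that the average block size reported above is simply Total size divided by Total blocks: 19486905 B / 68 ≈ 286572 B.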

-

Here is what I get when I run the hdfs dfsadmin -report command:

[hdfs@cluster1 ~]$ hdfs dfsadmin -report

Configured Capacity: 5715220577895 (5.20 TB)
Present Capacity: 5439327449088 (4.95 TB)
DFS Remaining: 5439303270400 (4.95 TB)
DFS Used: 24178688 (23.06 MB)
DFS Used%: 0.00%
Under replicated blocks: 0
Blocks with corrupt replicas: 0
Missing blocks: 0
Missing blocks (with replication factor 1): 504

-

Also, my Hive queries are not launching MapReduce jobs. Could the issue above be the reason?

Any suggestions?

Thanks!

1 Answer:

Answer 0 (score: 0)

A block is a chunk of data distributed across the nodes of the filesystem. The default block size in Hadoop 2.x is 128 MB, so, for example, a 200 MB file is actually stored as two blocks: one of 128 MB and one of 72 MB.
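
To check this yourself (the file path below is just a placeholder), you can print the configured block size and inspect how a specific file is split into blocks using the standard HDFS tools:

[hdfs@cluster1 ~]$ hdfs getconf -confKey dfs.blocksize
[hdfs@cluster1 ~]$ hdfs fsck /path/to/file -files -blocks -locations

The first command prints dfs.blocksize in bytes (134217728, i.e. 128 MB, is the Hadoop 2.x default); the second lists each block of the given file along with the datanodes holding its replicas.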

So don't worry about the blocks; the framework takes care of them. As the fsck report shows, you have 68 files in HDFS, and since each file is smaller than the block size, each one occupies exactly one block, hence 68 blocks.
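
As a quick sanity check: your files average about 286572 B each (19486905 B / 68), far below the 128 MB block size, so one block per file is exactly what you should expect.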