Mapping HDFS under-replicated blocks to files

Time: 2018-07-27 18:36:00

Tags: hadoop hdfs

HDFS reports that roughly 600K blocks on the cluster are under-replicated because of a rack failure. Before HDFS finishes re-replicating them, is there a way to know which files would be affected if those blocks were lost? I cannot run 'fsck /' because the cluster is very large.

2 answers:

Answer 0 (score: 2):

The Namenode UI lists the missing blocks, and JMX lists the corrupt/missing blocks. For under-replicated blocks, the UI and JMX only show the count.
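For example, that count is exposed through the NameNode's JMX HTTP servlet; a minimal sketch, assuming the standard FSNamesystem bean and the same host/port placeholders used in the examples below:

# Query the NameNode's JMX servlet for the FSNamesystem bean;
# the JSON reply includes counters such as UnderReplicatedBlocks, MissingBlocks and CorruptBlocks
curl -s "http://<NN_HOST>:<HTTP_PORT>/jmx?qry=Hadoop:service=NameNode,name=FSNamesystem"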

There are two ways to see which blocks/files are under-replicated: fsck or the WebHDFS API.

Using the WebHDFS REST API

curl -i  "http://<HOST>:<PORT>/webhdfs/v1/<PATH>?op=LISTSTATUS"

This returns a response containing a FileStatuses JSON object. Parse the JSON object and filter for the files whose replication is lower than the configured value (a sketch of such a filter follows the sample response below).

Here is a sample response returned from the NN:

curl -i "http://<NN_HOST>:<HTTP_PORT>/webhdfs/v1/<PATH_OF_DIRECTORY>?op=LISTSTATUS"
HTTP/1.1 200 OK
Cache-Control: no-cache
Content-Type: application/json
Transfer-Encoding: chunked
Server: Jetty(6.1.26.hwx)

{"FileStatuses":{"FileStatus":[
{"accessTime":1489059994224,"blockSize":134217728,"childrenNum":0,"fileId":209158298,"group":"hdfs","length":0,"modificationTime":1489059994227,"owner":"XXX","pathSuffix":"_SUCCESS","permission":"644","replication":3,"storagePolicy":0,"type":"FILE"},
{"accessTime":1489059969939,"blockSize":134217728,"childrenNum":0,"fileId":209158053,"group":"hdfs","length":0,"modificationTime":1489059986846,"owner":"XXX","pathSuffix":"part-m-00000","permission":"644","replication":3,"storagePolicy":0,"type":"FILE"},
{"accessTime":1489059982614,"blockSize":134217728,"childrenNum":0,"fileId":209158225,"group":"hdfs","length":0,"modificationTime":1489059993497,"owner":"XXX","pathSuffix":"part-m-00001","permission":"644","replication":3,"storagePolicy":0,"type":"FILE"},
{"accessTime":1489059977524,"blockSize":134217728,"childrenNum":0,"fileId":209158188,"group":"hdfs","length":0,"modificationTime":1489059983034,"owner":"XXX","pathSuffix":"part-m-00002","permission":"644","replication":3,"storagePolicy":0,"type":"FILE"}]}}

If the directory contains many more files, you can also list them iteratively using ?op=LISTSTATUS_BATCH&startAfter=<CHILD>.
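A minimal sketch of that iteration, again assuming jq and the DirectoryListing/remainingEntries response layout documented for LISTSTATUS_BATCH:

BASE="http://<NN_HOST>:<HTTP_PORT>/webhdfs/v1/<PATH_OF_DIRECTORY>"
LAST=""
while : ; do
  if [ -z "$LAST" ]; then
    RESP=$(curl -s "${BASE}?op=LISTSTATUS_BATCH")
  else
    RESP=$(curl -s "${BASE}?op=LISTSTATUS_BATCH&startAfter=${LAST}")
  fi
  # Print the entries in this batch (the replication filter shown above can be applied here as well)
  echo "$RESP" | jq -r '.DirectoryListing.partialListing.FileStatuses.FileStatus[].pathSuffix'
  # Stop when the NameNode reports no remaining entries, otherwise continue after the last child returned
  [ "$(echo "$RESP" | jq '.DirectoryListing.remainingEntries')" -eq 0 ] && break
  LAST=$(echo "$RESP" | jq -r '.DirectoryListing.partialListing.FileStatuses.FileStatus[-1].pathSuffix')
done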

Reference: https://hadoop.apache.org/docs/r3.1.0/hadoop-project-dist/hadoop-hdfs/WebHDFS.html#Iteratively_List_a_Directory

Answer 1 (score: 0):

There is a better solution.

Just run

PinsData

and the block file paths, together with all the under-replication metadata and everything else, will be written to a file that you can inspect directly.
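A minimal sketch of dumping that information to a file, assuming a plain hdfs fsck run over the affected subtree rather than over '/'; the output path is a hypothetical example, and the grep pattern assumes the usual wording of the fsck report:

# Save the per-file block report for a subtree, then inspect it offline
hdfs fsck <PATH_OF_DIRECTORY> -files -blocks -locations > /tmp/fsck_report.txt
grep -i "Under replicated" /tmp/fsck_report.txt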

To me this seems like the better option.