How to *really* reclaim disk space from HDFS

Date: 2015-08-30 00:14:22

Tags: hadoop hdfs

The first step is

hdfs dfs -rmr <path>

which moves the files into /.Trash rather than deleting them. (`-rmr` is deprecated; the modern form is `hdfs dfs -rm -r`.)

The next step is

hdfs dfs -expunge

It is not clear what this actually does -- afterwards we still see:

$ hdfs dfs -du -h
279.4 G  .Trash

So... how do we make .Trash go *poof* once and for all?

2 Answers:

Answer 0 (score: 3)

Try the -skipTrash option when deleting. This removes the data permanently.
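A sketch of how the command is assembled (the path here is hypothetical, and note the camel-case spelling `-skipTrash`):

```shell
# /user/stack/big-dataset is a hypothetical path. -skipTrash bypasses
# .Trash entirely, so the space is reclaimed as soon as the NameNode
# processes the delete -- there is no expunge step to wait for.
path=/user/stack/big-dataset
cmd="hdfs dfs -rm -r -skipTrash $path"
echo "$cmd"
```

The trade-off is that there is no undo: data deleted with -skipTrash cannot be restored from .Trash.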

Answer 1 (score: 2)

expunge appears to schedule trash cleanup rather than perform it immediately:

hdfs dfs -expunge
15/08/30 19:34:32 INFO fs.TrashPolicyDefault: Namenode trash configuration: Deletion interval = 360 minutes, Emptier interval = 0 minutes.
15/08/30 19:34:32 INFO fs.TrashPolicyDefault: Created trash checkpoint: /user/stack/.Trash/150830193432

Note that a trash checkpoint was created. The Emptier interval = 0 is a little worrying -- so when does the data actually get deleted?
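My reading of TrashPolicyDefault (an assumption on my part, not verified against the source for this Hadoop version) is that an emptier interval of 0, or one larger than the deletion interval, falls back to the deletion interval -- so the emptier still runs, just only every `fs.trash.interval` minutes. Sketched with the values from the log line above:

```shell
# Assumed fallback logic: an emptier interval of 0 (or one greater than
# the deletion interval) is replaced by the deletion interval itself.
deletion_interval=360   # minutes, from "Deletion interval = 360 minutes"
emptier_interval=0      # minutes, from "Emptier interval = 0 minutes"
if [ "$emptier_interval" -le 0 ] || [ "$emptier_interval" -gt "$deletion_interval" ]; then
  emptier_interval=$deletion_interval
fi
echo "emptier runs every $emptier_interval minutes"
echo "a checkpoint becomes eligible for deletion after $deletion_interval minutes"
```

Under that assumption, the checkpoint created above would be removed roughly six hours later, on the next emptier pass.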

Here are the core-site.xml settings (credit to this SO answer, How To Automate Hadoop Trash Cleanup, for pointing them out):

https://github.com/cloudera/hadoop-common/blob/ca2ff489eb805da4700fb15fa49e539f1c195b89/src/java/core-default.xml#L216-L225

<property>
  <name>fs.trash.interval</name>
  <value>0</value>
  <description>Number of minutes after which the checkpoint
  gets deleted.
  If zero, the trash feature is disabled.
  </description>
</property>

<property>
  <name>fs.trash.checkpoint.interval</name>
  <value>0</value>
  <description>Number of minutes between trash checkpoints.
  Should be smaller or equal to fs.trash.interval.
  Every time the checkpointer runs it creates a new checkpoint 
  out of current and removes checkpoints created more than 
  fs.trash.interval minutes ago.
  </description>
</property>
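For comparison, here is what a core-site.xml fragment that actually enables trash might look like -- the values are illustrative examples, not recommendations: a one-day retention with hourly checkpoints.

```xml
<property>
  <name>fs.trash.interval</name>
  <value>1440</value><!-- keep deleted files in .Trash for 24 hours -->
</property>

<property>
  <name>fs.trash.checkpoint.interval</name>
  <value>60</value><!-- roll .Trash/Current into a new checkpoint hourly -->
</property>
```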

I am still investigating what setting these values to zero actually does in practice. It seems inconsistent with the Trash feature even being enabled..