I'd like a small script, run daily via cron, to free up space on my Linux box whenever the cache directory grows too large. Much as I enjoy bash scripting, I'm no expert, so I need some help from you Linux gurus.
This is basically the logic (pseudocode):
if ( drive space left < 5GB )
{
    change directory to '/home/user/lotsa_cache_files/'
    if ( current working directory == '/home/user/lotsa_cache_files/' )
    {
        delete files in /home/user/lotsa_cache_files/
    }
}
I plan to get the remaining drive space from the '/dev/sda5' line of a df command, which returns values like this:
Filesystem     1K-blocks      Used Available Use% Mounted on
/dev/sda5      225981844 202987200  11330252  95% /
So a bit of regex may be needed to extract the '11330252' from the returned output.
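No regex is strictly required: since df's columns are whitespace-separated, awk can pull the field out directly. A minimal sketch (it uses `/` rather than `/dev/sda5` so it runs on any machine; `-P` forces POSIX single-line output so the field numbers are stable):

```shell
#!/bin/sh
# Print the "Available" column (in 1K blocks) for the filesystem holding /.
# NR==2 selects the data line, $4 is the Available field.
avail_kb=$(df -P -k / | awk 'NR==2 {print $4}')
echo "$avail_kb"
```

The value printed is a plain integer in kB, ready for an arithmetic comparison.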
The 'if ( current working directory == /home/user/lotsa_cache_files/ )' part is just a defense mechanism born of my inner paranoia. I want to be sure I'm really inside '/home/user/lotsa_cache_files/' before I go ahead with the delete command, which could be destructive if for some reason the current working directory isn't what I expect.
Deleting the files will be done with the following command instead of the usual rm -f:
find . -name "*" -print | xargs rm
This is because of Linux's inherent inability to 'rm' the contents of a directory when it contains too many files (a plain 'rm *' overflows the shell's argument list), as I've learned in the past.
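The `rm *` failure comes from the kernel's argument-length cap (ARG_MAX), which `find | xargs` sidesteps by batching the names. A whitespace-safe variant of the command above (a sketch, not the poster's exact command; `-print0`/`-0` are GNU/BSD extensions, not POSIX) looks like this:

```shell
#!/bin/sh
# Create a throwaway directory for the demo (hypothetical; substitute your
# real cache directory, e.g. /home/user/lotsa_cache_files/).
CACHEDIR=$(mktemp -d)
touch "$CACHEDIR/a file with spaces" "$CACHEDIR/plain_file"

# -print0 / -0 separate names with NUL bytes, so whitespace in filenames
# cannot split an argument; xargs batches the names so ARG_MAX is never hit.
find "$CACHEDIR" -type f -print0 | xargs -0 rm -f

find "$CACHEDIR" -type f | wc -l   # prints 0: everything was removed
rmdir "$CACHEDIR"
```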
Answer 0: (score: 13)
Another proposal (comments inline):
FILESYSTEM=/dev/sda1 # or whatever filesystem to monitor
CAPACITY=95          # delete if FS usage is over 95%
CACHEDIR=/home/user/lotsa_cache_files/
# Proceed if filesystem usage is over the value of CAPACITY (using df's POSIX syntax).
# Using [ instead of [[ for better error handling.
if [ $(df -P $FILESYSTEM | awk '{ gsub("%",""); capacity = $5 }; END { print capacity }') -gt $CAPACITY ]
then
    # Let's do some secure removal (if $CACHEDIR is empty or is not a
    # directory, find will exit with an error, which is quite safe against misruns):
    find "$CACHEDIR" -maxdepth 1 -type f -exec rm -f {} \;
    # Drop "-maxdepth 1 -type f" if you want a recursive removal of files and dirs:
    # find "$CACHEDIR" -exec rm -f {} \;
fi
Call the script from crontab to run the cleanup on a schedule.
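For the crontab call, an entry like the following (a sketch; the script path is hypothetical) runs the cleanup every night at 03:00:

```
# m h dom mon dow  command
0 3 * * * /home/user/bin/prune_cache.sh
```

Add it with `crontab -e`, and make sure the script is executable (`chmod +x`).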
Answer 1: (score: 7)
I would do it like this:
# get the available space left on the device
size=$(df -k /dev/sda5 | tail -1 | awk '{print $4}')
# check if the available space is smaller than 5GB (5000000kB)
if (($size<5000000)); then
# find all files under /home/user/lotsa_cache_files and delete them
find /home/user/lotsa_cache_files -name "*" -delete
fi
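The comparison above can be isolated into a small function so the threshold logic is easy to test on its own (a sketch; the function name and arguments are hypothetical, not from the answer):

```shell
#!/bin/sh
# Return success (exit 0) when the available space, in kB, is below the limit.
should_clean() {
    avail_kb=$1
    limit_kb=$2
    [ "$avail_kb" -lt "$limit_kb" ]
}

should_clean 4000000 5000000 && echo "clean"   # under 5GB: prints "clean"
should_clean 6000000 5000000 || echo "skip"    # not under: prints "skip"
```

Note that the answer uses 5000000 kB as "5GB", i.e. the decimal approximation; the binary value would be 5242880 kB.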
Answer 2: (score: 4)
Here's a script I use to delete old files in a directory to free up space...
#!/bin/bash
#
# prune_dir - prune directory by deleting files if we are low on space
#
DIR=$1
CAPACITY_LIMIT=$2
if [ "$DIR" == "" ]
then
    echo "ERROR: directory not specified"
    exit 1
fi
if ! cd "$DIR"
then
    echo "ERROR: unable to chdir to directory '$DIR'"
    exit 2
fi
if [ "$CAPACITY_LIMIT" == "" ]
then
    CAPACITY_LIMIT=95 # default limit
fi
CAPACITY=$(df -k . | awk '{gsub("%",""); capacity=$5}; END {print capacity}')
if [ "$CAPACITY" -gt "$CAPACITY_LIMIT" ]
then
    #
    # Get the list of files, oldest first.
    # Delete the oldest files until we are
    # below the limit. Only delete regular
    # files; ignore directories.
    #
    ls -rt | while read -r FILE
    do
        if [ -f "$FILE" ]
        then
            if rm -f "$FILE"
            then
                echo "Deleted $FILE"
                CAPACITY=$(df -k . | awk '{gsub("%",""); capacity=$5}; END {print capacity}')
                if [ "$CAPACITY" -le "$CAPACITY_LIMIT" ]
                then
                    # we're below the limit, so stop deleting
                    exit
                fi
            fi
        fi
    done
fi
Answer 3: (score: 3)
To detect the filesystem usage, I use:
df -k $FILESYSTEM | tail -1 | awk '{print $5}'
It gives me the usage percentage of the filesystem directly, so I don't need to compute it :)
If you use bash, you can use the pushd/popd operations to change directory and be sure you actually got there.
pushd '/home/user/lotsa_cache_files/'
# do the stuff
popd
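To make the "be sure you got there" check explicit, bail out when pushd fails, before deleting anything (a sketch: pushd/popd are bash builtins, and the demo directory here is created just for illustration rather than using the poster's path):

```shell
#!/bin/bash
# Create a throwaway directory for the demo.
DIR=$(mktemp -d)

# pushd returns non-zero if the change fails; exit before any rm runs.
pushd "$DIR" > /dev/null || { echo "cannot cd to $DIR" >&2; exit 2; }
# ... delete files here ...
popd > /dev/null

rmdir "$DIR"
```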
Answer 4: (score: -2)
Here's what I do:
while IFS= read -r f; do rm -rf "$f"; done < movies-to-delete.txt