Matching a pathname in a double-zero-byte-separated input file

Date: 2013-07-22 07:42:11

Tags: perl shell awk string-matching pathname

I am improving a script listing duplicated files that I wrote last year (if you follow the link, see the second script).

The records in the duplicated.log output are separated by a zero byte instead of a newline (\n). For example:

$> tr '\0' '\n' < duplicated.log
         12      dir1/index.htm
         12      dir2/index.htm
         12      dir3/index.htm
         12      dir4/index.htm
         12      dir5/index.htm

         32      dir6/video.m4v
         32      dir7/video.m4v

(In this example, the five files dir1/index.htm, ... and dir5/index.htm have the same md5sum, and their size is 12 bytes. The other two files dir6/video.m4v and dir7/video.m4v have the same md5sum, and their content size (du) is 32 bytes.)

Because each line ends with a zero byte (\0) instead of a newline (\n), an empty line is represented by two successive zero bytes (\0\0).

I use the zero byte as the line separator because a pathname may contain newline characters.
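
For reference, this NUL-separated listing is produced with find's -printf, as in the script at the end of this question; a minimal sketch:

find . -type f -printf '%11s %P\0'    #11-char right-aligned size, a space, the relative pathname, NUL-terminated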

But doing so, I face this issue:
how do I 'grep' duplicated.log for all the copies of a given file?
(e.g. how do I retrieve the duplicates of dir1/index.htm?)

I need:

$> ./youranswer.sh  "dir1/index.htm"  < duplicated.log | tr '\0' '\n'
         12      dir1/index.htm 
         12      dir2/index.htm 
         12      dir3/index.htm 
         12      dir4/index.htm 
         12      dir5/index.htm 
$> ./youranswer.sh  "dir4/index.htm"  < duplicated.log | tr '\0' '\n'
         12      dir1/index.htm 
         12      dir2/index.htm 
         12      dir3/index.htm 
         12      dir4/index.htm 
         12      dir5/index.htm 
$> ./youranswer.sh  "dir7/video.m4v"  < duplicated.log | tr '\0' '\n'
         32      dir6/video.m4v 
         32      dir7/video.m4v 

I was thinking about something like:

awk 'BEGIN { RS="\0\0" } #input record separator is double zero byte 
     /filepath/ { print $0 }' duplicated.log  

...but filepath may contain slashes / and many other characters (quotes, newlines, ...).

I may have to use perl to handle this case...
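
Here is a minimal sketch of what I have in mind: perl's record separator $/ is set to \0\0, and index() matches the path literally, so regex metacharacters in the name cannot misfire:

#!/usr/bin/env perl
# sketch: print every \0\0-separated record containing the given pathname
use strict;
use warnings;

my $path = shift or die "usage: $0 pathname < duplicated.log\n";
local $/ = "\0\0";              # input record separator: double zero byte
while (my $rec = <STDIN>) {
    chomp $rec;                 # drop the trailing \0\0, if any
    print $rec, "\0\0" if index($rec, $path) >= 0;  # literal match, no regex
}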

I am open to any suggestions, questions, or other ideas...

2 Answers:

Answer 0 (score: 1)

You are almost there: use the match operator ~

awk -v RS='\0\0' -v pattern="dir1/index.htm" '$0~pattern' duplicated.log
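
Note that ~ performs a regex match, so a pathname containing metacharacters (dots, brackets, ...) could match the wrong records. An untested literal variant replaces the regex with index():

awk -v RS='\0\0' -v pattern="dir1/index.htm" 'index($0, pattern)' duplicated.log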

Answer 1 (score: 0)

I have just realized that I can search using the md5sum instead of the pathname, because in the new version of my script I keep the md5sum information.

This is the new format I am now using:

$> tr '\0' '\n' < duplicated.log
     12      89e8a208e5f06c65e6448ddeb40ad879 dir1/index.htm 
     12      89e8a208e5f06c65e6448ddeb40ad879 dir2/index.htm 
     12      89e8a208e5f06c65e6448ddeb40ad879 dir3/index.htm 
     12      89e8a208e5f06c65e6448ddeb40ad879 dir4/index.htm 
     12      89e8a208e5f06c65e6448ddeb40ad879 dir5/index.htm 

     32      fc191f86efabfca83a94d33aad2f87b4 dir6/video.m4v 
     32      fc191f86efabfca83a94d33aad2f87b4 dir7/video.m4v

Both gawk and nawk give the wanted result:

$> awk 'BEGIN { RS="\0\0" } 
   /89e8a208e5f06c65e6448ddeb40ad879/ { print $0 }' duplicated.log | 
   tr '\0' '\n'
     12      89e8a208e5f06c65e6448ddeb40ad879 dir1/index.htm 
     12      89e8a208e5f06c65e6448ddeb40ad879 dir2/index.htm 
     12      89e8a208e5f06c65e6448ddeb40ad879 dir3/index.htm 
     12      89e8a208e5f06c65e6448ddeb40ad879 dir4/index.htm 
     12      89e8a208e5f06c65e6448ddeb40ad879 dir5/index.htm 
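
A small variant in case the matched group must stay in the original NUL format, for example to feed it back into the same tools; this assumes gawk:

awk 'BEGIN { RS = ORS = "\0\0" }   #keep the double-zero-byte separator on output
     /89e8a208e5f06c65e6448ddeb40ad879/' duplicated.log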

But I am still open to your answers :-)
(the current answer is just a workaround)
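
In the meantime, here is a sketch of the two-pass lookup I have in mind: resolve the given path to its md5sum, then print the whole group. It assumes the fixed-column layout above (pathname starting at column 48, as in the script below) and gawk for the NUL record separators:

#!/bin/bash
# sketch: ./lookup.sh "dir1/index.htm"    (reads duplicated.log)
path="$1"
md5=$(gawk -v p="$path" '
        BEGIN { RS = "\0" }                        # one NUL-terminated line per record
        { n = substr($0, 48); sub(/ +$/, "", n) }  # pathname starts at column 48
        n == p { print $2; exit }                  # md5sum is the second field
      ' duplicated.log)
[ -n "$md5" ] && gawk -v m="$md5" 'BEGIN { RS = "\0\0" } index($0, m)' duplicated.log | tr '\0' '\n'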


For the curious, below is the new (awful) script under construction...

#!/bin/bash

fifo=$(mktemp -u) 
fif2=$(mktemp -u)
dups=$(mktemp -u)
dirs=$(mktemp -u)
menu=$(mktemp -u)
numb=$(mktemp -u)
list=$(mktemp -u)

mkfifo $fifo $fif2


# run processing in background
find . -type f -printf '%11s %P\0' |  #print size and filename
tee $fifo |                           #write in fifo for dialog progressbox
grep -vzZ '^          0 ' |           #ignore empty files
LC_ALL=C sort -z |                    #sort by size
uniq -Dzw11 |                         #keep files having same size
while IFS= read -r -d '' line
do                                    #for each file compute md5sum
  echo -en "${line:0:11}" "\t" $(md5sum "${line:12}") "\0"
                                      #file size + md5sum + file name, null-terminated instead of '\n'
done |                                #keep the duplicates (same md5sum)
tee $fif2 |
uniq -zs12 -w46 --all-repeated=separate |  #skip the size field (12 chars), compare the md5 field, separate duplicate groups by empty records
tee $dups  |
#xargs -d '\n' du -sb 2<&- |          #retrieve size of each file
gawk '
function tgmkb(size) { 
  if(size<1024) return int(size)    ; size/=1024; 
  if(size<1024) return int(size) "K"; size/=1024;
  if(size<1024) return int(size) "M"; size/=1024;
  if(size<1024) return int(size) "G"; size/=1024;
                return int(size) "T"; }
function dirname (path)
      { if(sub(/\/[^\/]*$/, "", path)) return path; else return "."; }
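# tgmkb() renders a byte count human-readable (e.g. 123456789 -> "117M");
# dirname() strips the last path component, like dirname(1)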
BEGIN { RS=ORS="\0" }
!/^$/ { sz=substr($0,1,11); name=substr($0,48); dir=dirname(name); sizes[dir]+=sz; files[dir]++ }
END   { for(dir in sizes) print tgmkb(sizes[dir]) "\t(" files[dir] "\tfiles)\t" dir }' |
LC_ALL=C sort -zrshk1 > $dirs &       #sort directories by total size (human-numeric, reversed, stable)
pid=$!


tr '\0' '\n' <$fifo |
dialog --title "Collecting files having same size..."    --no-shadow --no-lines --progressbox $(tput lines) $(tput cols)


tr '\0' '\n' <$fif2 |
dialog --title "Computing MD5 sum" --no-shadow --no-lines --progressbox $(tput lines) $(tput cols)


wait $pid
DUPLICATES=$( grep -zac -v '^$' $dups) #total number of files concerned
UNIQUES=$(    grep -zac    '^$' $dups) #number of files, if all redundant are removed
DIRECTORIES=$(grep -zac     .   $dirs) #number of directories concerned
lins=$(tput lines)
cols=$(tput cols)
cat > $menu <<EOF
--no-shadow 
--no-lines 
--hline "After selection of the directory, you will choose the redundant files you want to remove"
--menu  "There are $DUPLICATES duplicated files within $DIRECTORIES directories.\nThese duplicated files represent $UNIQUES unique files.\nChoose directory to proceed redundant file removal:"
$lins 
$cols
$DIRECTORIES
EOF
tr '\n"' "_'" < $dirs |               #neutralize newlines and double quotes that would break the menu file
gawk 'BEGIN { RS="\0" } { print FNR " \"" $0 "\" " }' >> $menu

dialog --file $menu 2> $numb
[[ $? -eq 1 ]] && exit
set -x
dir=$( grep -zam"$(< $numb)" . $dirs | tac -s'\0' | grep -zam1 . | cut -f4- ) #Nth record of $dirs: the directory is the 4th tab-separated field
md5=$( grep -zam"$(< $numb)" . $dirs | tac -s'\0' | grep -zam1 . | cut -f2  ) #intended to pick the md5, but field 2 of $dirs holds the file count ($dirs stores no md5 yet)

grep -zao "$dir/[^/]*$" "$dups" | 
while IFS= read -r -d '' line
do
  file="${line:47}"
  awk 'BEGIN { RS="\0\0" } '"/$md5/"' { print $0 }' "$dups" >> $list
done

echo -e "
fifo $fifo \t dups $dups \t menu $menu
fif2 $fif2 \t dirs $dirs \t numb $numb \t list $list"

#rm -f $fifo $fif2 $dups $dirs $menu $numb