Optimizing shell and awk scripts

Date: 2011-02-17 00:35:20

Tags: shell optimization awk

I'm using a combination of a shell script, an awk script, and the find command to perform multiple text substitutions across hundreds of files. The files range in size from a few hundred bytes to 20 kB.

I'm looking for a way to speed this script up.

I'm using Cygwin.

I posted this question on Super User earlier, but I think this forum is a better fit.

The shell script -

#!/bin/bash

if [ $# -eq 0 ]; then
  echo "Argument expected"
  exit 1
fi

while [ $# -ge 1 ]
do
  if [ ! -f "$1" ]; then
    echo "No such file as $1"
    exit 1
  fi

  awk -f ~/scripts/parse.awk "$1" > "${1}.$$"

  if [ $? -ne 0 ]; then
    echo "Something went wrong with the script"
    rm "${1}.$$"
    exit 1
  fi

  mv "${1}.$$" "$1"
  shift
done

The awk script (simplified) -

#! /usr/bin/awk -f

/HHH.Web/{
    if ( index($0,"Email") == 0)  {
        sub(/HHH.Web/,"HHH.Web.Email");
    }
    printf("%s\r\n",$0); 
    next;
}

The command line -

find .  -type f  | xargs ~/scripts/run_parser.sh
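One caveat with this pipeline: xargs splits its input on whitespace, so file names containing spaces break it. A minimal sketch of a null-delimited variant (using the -print0/-0 extensions, which Cygwin's GNU find and xargs support):

```shell
#!/bin/sh
# Demonstrate that a null-delimited find | xargs pipeline delivers
# file names containing spaces intact, instead of splitting them.
dir=$(mktemp -d)
touch "$dir/a file.txt" "$dir/plain.txt"
# Count the names the pipeline delivers; both files survive the space.
count=$(find "$dir" -type f -print0 | xargs -0 -n1 printf '%s\n' | wc -l)
echo "$count"
rm -rf "$dir"
```

With that in place the original invocation becomes find . -type f -print0 | xargs -0 ~/scripts/run_parser.sh.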

3 Answers:

Answer 0 (score: 2)

find .  -type f | while read -r file
do
  awk '/HHH.Web/ && !/Email/ {
     sub(/HHH.Web/,"HHH.Web.Email");
     printf("%s\r\n",$0); 
     next;
  }
  ' "$file" > "${file}.$$" && mv "${file}.$$" "$file"
done

If you know which files you want to process, you can add the -iname option.
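For instance (a sketch, restricted to .htm files, and with a pass-through rule added so that lines the main rule does not match still reach the output):

```shell
#!/bin/sh
# Restrict the pass to .htm files; -iname matches case-insensitively.
# The trailing { print } rule passes non-matching lines through unchanged.
parse_htm_files() {
  find "$1" -type f -iname '*.htm' | while read -r file
  do
    awk '/HHH.Web/ && !/Email/ {
           sub(/HHH.Web/, "HHH.Web.Email")
           printf("%s\r\n", $0)
           next
         }
         { print }' "$file" > "${file}.$$" && mv "${file}.$$" "$file"
  done
}
```

Here the function takes the directory to scan as its argument; parse_htm_files . reproduces the loop above for .htm files only.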

Answer 1 (score: 2)

On Cygwin, the most important thing is to avoid fork()/exec() wherever possible. By design, Windows is not built to juggle many processes the way Linux is: it has no native fork(), and copy-on-write is broken. So when writing scripts, try to do as much as possible from a single process.

In this case, we want awk, and only one awk. Avoid xargs at all costs. Another thing: if you have to scan many files, the disk cache on Windows is a joke. Instead of visiting every file, a better approach is to let grep find only the files that actually match, so you would have

grep -rl "some-pattern-perhaps-HHH.Web-or-so" "/dir/to/where/you/have/millions/of/files/" | awk -f ~/scripts/parse.awk

Inside ~/scripts/parse.awk, you have to open and close() the files from within awk to keep things fast. Avoid system() wherever you can.
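A minimal, self-contained sketch of that pre-filtering step (assuming, as the script below does, that the awk program treats each input line as a file path, which is why grep needs -l to emit file names rather than matched lines):

```shell
#!/bin/sh
# grep -rl prints each matching file name once; a single awk process
# then opens and closes the listed files itself via getline, so no
# extra processes are forked per file.
count_pattern_lines() {
  grep -rl 'HHH\.Web' "$1" | awk '{
    file = $0; n = 0
    while ((getline line < file) > 0)   # open the file from inside awk
      if (line ~ /HHH\.Web/) n++
    close(file)                         # close it, or we run out of FDs
    printf("%s: %d\n", file, n)
  }'
}
```

For example, count_pattern_lines . reports, per matching file, how many lines contain the pattern; a real parse.awk would rewrite those lines instead of counting them.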

#!/usr/bin/awk -f
BEGIN{
    id=PROCINFO["pid"];
}
# int staticlibs_codesize_grep( option, regexp, filepath, returnArray, returnArray_linenum )
# small code size
# Code size is chosen over speed. Search may be slow on large files
# "-n" option supported
function staticlibs_codesize_grep(o, re, p, B, C, this, r, v, c){
 if(c=o~"-n")C[0]=0;B[0]=0;while((getline r<p)>0){if(o~"-o"){while(match(r,re)){
 B[B[0]+=1]=substr(r,RSTART,RLENGTH);r=substr(r,RSTART+RLENGTH);if(c)C[C[0]+=1]=c;}
 }else{if(!match(r,re)!=!(o~"-v")){B[B[0]+=1]=r;if(c)C[C[0]+=1]=c;}}c++}return B[0]}
# Total: 293 bytes, code size: > 276 bytes, dependencies: 0 bytes

{
    file = $0;
    outfile = $0"."id; # Whatever.
    # If you have multiple replacements, or multiline replacements,
    # be careful about the order in which you replace. Writing a k-map for an efficient condition branch is a must.
    # Also, try to unroll the loop.

    # The unrolling can be anything; this is a trade of code size for speed.
    # Here is an example of an unrolled loop:
    # instead of having while((getline r<file)>0){if(file~html){print "foo";}else{print "bar";};};
    # we have moved the condition outside of the while() loop.
    if(file~".htm$"){
        while((getline r<file)>0){
            # Try to perform the minimum replacement required for the given file.
            # Try to avoid branching via if(){}else{} when you are inside a loop.
            # Keep it minimal and small.
            print "foo" > outfile;
        }
    }else{
        while((getline r<file)>0){
            # Here, as an example, we have unrolled the loop into two: one for htm files, one for other files.
            print "bar" > outfile;
            # if a condition is required, match() is better
            if(match(r,"some-pattern-you-want-to-match")){
                # do whatever complex replacement you want. We reuse the RSTART,RLENGTH from match()
                before_match = substr(r,1,RSTART-1);
                matched_data = substr(r,RSTART,RLENGTH);
                after_match  = substr(r,RSTART+RLENGTH);
                # if you want further matches, like grep -o, extracting only the match
                a=r;
                while(match(a,re)){
                    B[B[0]+=1]=substr(a,RSTART,RLENGTH);
                    a=substr(a,RSTART+RLENGTH);
                }
                # The above stores multiple matches from a single line into B
            }
            # If you want to perform even further complex matches. try the grep() option.
            # staticlibs_codesize_grep() handles the -o, -n, -v options. It should satisfy most daily needs.
            # for a grep-like output, use printf("%4s\t\b:%s\n", returnArray_linenum[index] , returnArray[index] );

            # Example of multiple matches, against data that may or may not have been replaced by the previous condition.
            if(match(r,"another-pattern-you-want-to-match")){
                # whatever
                # if you decide that replacing is not good, you can abort
                if(for_whatever_reason_we_want_to_abort){
                    break;
                }
            }
            # notice that we always need to output a line.
            print r > outfile;
        }
    }
    # If we forget to close the files, we will run out of FDs
    close(file);
    close(outfile);
    # now we could move the file; however, I would not do it here.
    # The reason is that system() is a very heavy operation, and second, our replacement may be incomplete, by human error.
    # system("mv \""outfile"\" \""file"\" ")
    # I would advise writing to another file instead, for a later move by bash or any other shell with a builtin mv command.
    # NOTE[*1]
    # NOTE[*1]
    print "mv \""outfile"\" \""file"\" " > "files.to.update.list";
}
END{
    # Assuming we are all good, we should have a log file that records what has been modified
    close("files.to.update.list");
}

# Now, when all is ready, meaning you have checked the result and it is what you want, run
#   source "files.to.update.list"
# inside a terminal, or
#   cat "files.to.update.list" | bash
# and you are done.
# NOTE[*1]: if you have file names containing \x27 in them, the escaping with \x22 is incomplete.
# Always check "files.to.update.list" for \x27 to avoid problems.
# Perhaps
#   grep -v -- "`echo -ne "\x27"`" "files.to.update.list" > "files.to.update.list.safe"
# then
#   grep -- "`echo -ne "\x27"`" "files.to.update.list" > "files.to.update.list.unsafe"
# may be a good idea.
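That safety check can be wrapped up as a small shell step (a sketch; the list-file convention follows the script above):

```shell
#!/bin/sh
# Split the generated mv list into a safe half (no single quotes in any
# path, so the \x22 escaping holds) and an unsafe half for manual
# review, then execute only the safe half.
run_update_list() {
  list="$1"
  grep -v "'" "$list" > "$list.safe" || true
  grep "'" "$list" > "$list.unsafe" || true
  sh "$list.safe"
}
```

Calling run_update_list files.to.update.list then performs the deferred renames in one shell process, leaving files.to.update.list.unsafe for inspection.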

Answer 2 (score: 1)

You are spawning a new awk process for every file. I think it should be possible to have something like

find . -type f | xargs ./awk_script.awk

with awk_script.awk checking which file it is working on (a common practice, as far as I know). It may also be possible to do the mv ${f}.$$ $f step there too, but you could do that as a separate pass from bash.
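A sketch of that single-process approach (assuming the same HHH.Web substitution as in the question; each input file is rewritten to a .new neighbour, and the renames are done in a separate shell pass):

```shell
#!/bin/sh
# One awk process handles every file that find produces: FNR resets to 1
# at each new input file, so that is where the output file is switched.
rewrite_all() {
  find "$1" -type f ! -name '*.new' | xargs awk '
    FNR == 1 { if (out) close(out); out = FILENAME ".new" }
    /HHH\.Web/ && index($0, "Email") == 0 { sub(/HHH\.Web/, "HHH.Web.Email") }
    { print > out }'
  # second pass: the deferred renames, done from the shell
  find "$1" -type f -name '*.new' | while read -r f
  do
    mv "$f" "${f%.new}"
  done
}
```

rewrite_all . then forks awk once for the whole tree instead of once per file, which is the point of the answer.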

Hope this helps.