Unix performance improvement - possibly using AWK

Date: 2019-08-28 23:02:08

Tags: unix awk

I have two files: File1.txt (6 pipe-delimited columns) and File2.txt (2 pipe-delimited columns).

File1.txt

NEW|abcd|1234|10000000|Hello|New_value|
NEW|abcd|1234|20000000|Hello|New_value|
NEW|xyzq|5678|30000000|myname|New_Value|

File2.txt

10000000|10000001>10000002>10000003>10000004
19000000|10000000>10000001>10000002>10000003>10000004
17000000|10000099>10000000>10000001>10000002>10000003>10000004
20000000|10000001>10000002>10000003>10000004>30000000
29000000|20000000>10000001>10000002>10000003>10000004

The goal is: for each row in File1.txt, I have to take the 4th column and search for that value in File2.txt. For every File2.txt line where a match is found, I have to pick up that line, but only its first column.

This can produce more records in the target file than there are in File1.txt. The output should look like the following (the 123 in the last column comes from a fixed variable):

NEW|abcd|1234|10000000|Hello|New_value|123    (row 1, col 4 of File1.txt matches row 1 of File2.txt)
NEW|abcd|1234|19000000|Hello|New_value|123    (row 1, col 4 of File1.txt matches row 2 of File2.txt)
NEW|abcd|1234|17000000|Hello|New_value|123    (row 1, col 4 of File1.txt matches row 3 of File2.txt)
NEW|abcd|1234|20000000|Hello|New_value|123    (row 2, col 4 of File1.txt matches row 4 of File2.txt)
NEW|abcd|1234|29000000|Hello|New_value|123    (row 2, col 4 of File1.txt matches row 5 of File2.txt)
NEW|xyzq|5678|20000000|myname|New_Value|123   (row 3, col 4 of File1.txt matches row 4 of File2.txt)

I can write a solution like the one below, and it does give me the correct output. But when File1.txt and File2.txt each have around 150K rows, it takes about 21 minutes, and the final target file ends up with more than 10 million rows.

VAL1=123

for ROW in `cat File1.txt`
do
  Fld1=`echo $ROW | cut -d'|' -f'1-3'`
  Fld2=`echo $ROW | cut -d'|' -f4`
  Fld3=`echo $ROW | cut -d'|' -f'5-6'`

  grep -i $Fld2 File2.txt | cut -d'|' -f1  > File3.txt
  sed 's/^/'$Fld1'|/g' File3.txt | sed 's/$/|'${Fld3}'|'${VAL1}'/g' >> Target.txt

done 
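For context on where the time goes: the loop above spawns six or more processes (echo, cut, grep, sed) per File1 row. Even staying with a shell loop, a sketch like the one below (my rewrite, not part of the original question) reads the fields once with read and uses fixed-string grep; it recreates the sample inputs so it can be run standalone:

```shell
# Recreate the sample inputs from the question so the sketch is standalone.
cat > File1.txt <<'EOF'
NEW|abcd|1234|10000000|Hello|New_value|
NEW|abcd|1234|20000000|Hello|New_value|
NEW|xyzq|5678|30000000|myname|New_Value|
EOF
cat > File2.txt <<'EOF'
10000000|10000001>10000002>10000003>10000004
19000000|10000000>10000001>10000002>10000003>10000004
17000000|10000099>10000000>10000001>10000002>10000003>10000004
20000000|10000001>10000002>10000003>10000004>30000000
29000000|20000000>10000001>10000002>10000003>10000004
EOF

VAL1=123
: > Target.txt
# Split each File1 row into fields once with read, instead of three
# echo|cut pipelines per row; `_` swallows the empty field created by
# the trailing pipe.
while IFS='|' read -r c1 c2 c3 c4 c5 c6 _; do
  # -F matches the key as a fixed string (no regex), -- guards odd keys.
  grep -F -- "$c4" File2.txt | cut -d'|' -f1 |
  while IFS= read -r key; do
    printf '%s|%s|%s|%s|%s|%s|%s\n' "$c1" "$c2" "$c3" "$key" "$c5" "$c6" "$VAL1"
  done >> Target.txt
done < File1.txt
```

This still rescans File2.txt once per File1 row, so it stays quadratic overall; the awk answers below avoid the rescan entirely, which is where the real speedup comes from.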

But my question is: can this solution be optimized? Can it be rewritten with AWK or anything else to do this faster?

3 Answers:

Answer 0 (score: 1)

I'm pretty sure this will be faster (because using the implicit loop in a single awk or sed process is generally much faster than invoking those tools over and over in a shell loop), but you'll have to try it and let us know:

EDIT: this version should address the problem of duplicates in the output.

$ cat a.awk
# First pass (File2.txt, FS='[|>]'): for every field on the line, append
# the line's first column to that field's entry in array a.
NR == FNR {
    for (i=1; i<=NF; ++i) {
        if ($i in a)
            a[$i] = a[$i] "," $1
        else
            a[$i] = $1
    }
    next
}

# Second pass (File1.txt, FS='|'): look up column 4 and print one row per
# collected key, using seen to suppress duplicates within the row.
$4 in a {
    split(a[$4], b, ",")
    for (i in b) {
        if (!(b[i] in seen)) {
            print $1, $2, $3, b[i], $5, $6, new_value
            seen[b[i]]    # a bare array reference creates the key
        }
    }
    delete seen
}

The output contains the desired rows, though in a different order:

$ awk -v new_value=123 -v OFS="|" -f a.awk FS='[|>]' file2.txt FS='|' file1.txt 
NEW|abcd|1234|19000000|Hello|New_value|123
NEW|abcd|1234|17000000|Hello|New_value|123
NEW|abcd|1234|10000000|Hello|New_value|123
NEW|abcd|1234|29000000|Hello|New_value|123
NEW|abcd|1234|20000000|Hello|New_value|123
NEW|xyzq|5678|20000000|myname|New_Value|123
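The row order is nondeterministic because of the for (i in b) loop, so one quick way to confirm the result matches the question's expected output is to compare sorted streams. The snippet below just recreates the inputs and the script above and diffs against the six expected rows:

```shell
# Recreate the sample inputs and the a.awk program from this answer.
cat > file1.txt <<'EOF'
NEW|abcd|1234|10000000|Hello|New_value|
NEW|abcd|1234|20000000|Hello|New_value|
NEW|xyzq|5678|30000000|myname|New_Value|
EOF
cat > file2.txt <<'EOF'
10000000|10000001>10000002>10000003>10000004
19000000|10000000>10000001>10000002>10000003>10000004
17000000|10000099>10000000>10000001>10000002>10000003>10000004
20000000|10000001>10000002>10000003>10000004>30000000
29000000|20000000>10000001>10000002>10000003>10000004
EOF
cat > a.awk <<'EOF'
NR == FNR {
    for (i=1; i<=NF; ++i) {
        if ($i in a)
            a[$i] = a[$i] "," $1
        else
            a[$i] = $1
    }
    next
}
$4 in a {
    split(a[$4], b, ",")
    for (i in b) {
        if (!(b[i] in seen)) {
            print $1, $2, $3, b[i], $5, $6, new_value
            seen[b[i]]
        }
    }
    delete seen
}
EOF

awk -v new_value=123 -v OFS='|' -f a.awk FS='[|>]' file2.txt FS='|' file1.txt |
  sort > actual.txt
sort > expected.txt <<'EOF'
NEW|abcd|1234|10000000|Hello|New_value|123
NEW|abcd|1234|17000000|Hello|New_value|123
NEW|abcd|1234|19000000|Hello|New_value|123
NEW|abcd|1234|20000000|Hello|New_value|123
NEW|abcd|1234|29000000|Hello|New_value|123
NEW|xyzq|5678|20000000|myname|New_Value|123
EOF
diff expected.txt actual.txt && echo 'outputs match'
```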

Answer 1 (score: 0)

My guess is that your performance drop comes from grep and the two seds repeatedly reading the files into memory. If you can hold the contents of File2 in memory (or even in a temporary SQLite DB), that should speed things up. You would then process File1 line by line and do a simple lookup against the File2 keys.

While the script runs, it can be helpful to run htop or some other activity monitor to track RAM and CPU usage.
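To make the SQLite idea above concrete, here is one possible sketch (the table names, intermediate file names, and column names are all made up, and it assumes the sqlite3 CLI is installed): explode File2 into token/first-column pairs, load both files, and let an indexed join do the lookups.

```shell
# This sketch needs the sqlite3 CLI; bail out quietly if it is missing.
command -v sqlite3 >/dev/null 2>&1 || { echo 'sqlite3 not available'; exit 0; }

cat > File1.txt <<'EOF'
NEW|abcd|1234|10000000|Hello|New_value|
NEW|abcd|1234|20000000|Hello|New_value|
NEW|xyzq|5678|30000000|myname|New_Value|
EOF
cat > File2.txt <<'EOF'
10000000|10000001>10000002>10000003>10000004
19000000|10000000>10000001>10000002>10000003>10000004
17000000|10000099>10000000>10000001>10000002>10000003>10000004
20000000|10000001>10000002>10000003>10000004>30000000
29000000|20000000>10000001>10000002>10000003>10000004
EOF

# Strip the trailing pipe so File1 imports as exactly 6 columns.
sed 's/|$//' File1.txt > f1.txt
# Explode every File2 line into token|first_column pairs.
awk -F'[|>]' '{for (i=1; i<=NF; i++) print $i "|" $1}' File2.txt > f2pairs.txt

sqlite3 :memory: > Target.txt <<'SQL'
.mode list
.separator |
CREATE TABLE f1(c1,c2,c3,c4,c5,c6);
CREATE TABLE f2(token,key);
.import f1.txt f1
.import f2pairs.txt f2
CREATE INDEX f2_token ON f2(token);
SELECT f1.c1, f1.c2, f1.c3, f2.key, f1.c5, f1.c6, '123'
  FROM f1 JOIN (SELECT DISTINCT token, key FROM f2) f2
    ON f2.token = f1.c4;
SQL

sort Target.txt
```

Note that this matches whole tokens by equality rather than substrings, whereas the grep in the question matches the key anywhere in the line; for keys like these the result is the same, but the semantics differ.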

Answer 2 (score: 0)

A slightly more optimized GNU awk script:

awk 'NR==FNR{a[$4]=$0;next}
     {
        for(i=1; i<=NF; i++){
          if($i in a) 
            print gensub("[^|]+\\|",$1 "|",4,a[$i])
        }
     }' FS='|' file1 FS='[|>]' file2

The first block fills array a with the lines of file1, keyed by their 4th column.

The second block loops over every field of each file2 line and, whenever a field matches a key in the array, prints the stored file1 line with its 4th column replaced by file2's first field.

The printed string is modified with awk's gensub function (GNU awk only), whose third argument restricts the substitution to just the 4th occurrence of the pattern.
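A tiny standalone illustration of that gensub behavior (gensub is a GNU awk extension, hence the explicit gawk; the snippet skips itself if gawk is not installed):

```shell
# gensub's third argument selects WHICH occurrence to replace:
# here only the 4th "field-plus-pipe" match is rewritten.
command -v gawk >/dev/null 2>&1 || { echo 'gawk not available'; exit 0; }
echo 'NEW|abcd|1234|10000000|Hello|New_value|' |
  gawk '{ print gensub(/[^|]+\|/, "XXX|", 4) }'
# with gawk installed, prints: NEW|abcd|1234|XXX|Hello|New_value|
```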