Sed / Awk: how to find and delete two lines if the pattern in the first line is duplicated; bash

Time: 2019-05-14 12:58:30

Tags: awk sed grep

I am working with text files, each holding thousands of records. Each record consists of two lines: a header starting with ">", followed by a line containing a long string of "-AGTCNR" characters. The header holds 10 fields separated by "|", and its first field is a unique identifier for each record, e.g. ">KEN096-15"; records that share the same identifier are considered duplicates. A simplified example looks like this:

>ACML500-12|Lep|gs-NA|sp-NA|subsp-NA|co-Buru|site-NA|lat_-2  
----TAAGATTTTGACTTCTTCCCCCATCATCAAGAAGAATTGT-------  
>ACRJP458-10|Lep|gs-NA|sp-NA|subsp-NA|co-Buru|site-NA|lat_N  
-----------TCCCTTTAATACTAGGAGCCCCTGACATAGCCTTTCCTAAATAAT-----  
>ASILO303-17|Dip|gs-Par|sp-Par vid|subsp-NA|co  
-------TAAGATTCTGATTACTCCCCCCCTCTCTAACTCTTCTTCTTCTATAGTAGATG  
>ASILO326-17|Dip|gs-Goe|sp-Goe par|subsp-NA|c  
TAAGATTTTGATTATTACCCCCTTCATTAACCAGGAACAGGATGA---------------  
>CLT100-09|Lep|gs-Col|sp-Col elg|subsp-NA|co-Buru  
AACATTATATTTGGAATTT-------GATCAGGAATAGTCGGAACTTCTCTGAA------  
>PMANL2431-12|Lep|gs-NA|sp-NA|subsp-NA|co-Buru|site-NA|lat_  
----ATGCCTATTATAATTGGAGGATTTGGAAAACCTTTAATATT----CCGAAT  
>STBOD057-09|Lep|gs-NA|sp-NA|subsp-NA|co-Buru|site-NA|lat_N  
ATCTAATATTGCACATAGAGGAACCTCNGTATTTTTTCTCTCCATCT------TTAG  
>TBBUT582-11|Lep|gs-NA|sp-NA|subsp-NA|co-Buru|site-NA|lat_N  
-----CCCCCTCATTAACATTACTAAGTTGAAAATGGAGCAGGAACAGGATGA  
>TBBUT583-11|Lep|gs-NA|sp-NA|subsp-NA|co-Buru|site-NA|lat_N  
TAAGATTTTGACTCATTAA----------------AATGGAGCAGGAACAGGATGA  
>AFBTB001-09|Col|gs-NA|sp-NA|subsp-NA|co-Ethi|site-NA|lat_N  
TAAGCTCCATCC-------------TAGAAAGAGGGG---------GGGTGA  
>PMANL2431-12|Lep|gs-NA|sp-NA|subsp-NA|co-Buru|site-NA|lat_  
----ATGCCTATTAGGAAATTGATTAGTACCTTTAATATT----CCGAAT---  
>AFBTB003-09|Col|gs-NA|sp-NA|subsp-NA|co-Ethi|site-NA|lat_N  
TAAGATTTTGACTTCTGC------CATGAGAAAGA-------------AGGGTGA  
>AFBTB002-09|Cole|gs-NA|sp-NA|subsp-NA|co-Ethi|site-NA|lat_N  
-------TCTTCTGCTCAT-------GGGGCAGGAACAGGG----------TGA  
>ACRJP458-10|Lep|gs-NA|sp-NA|subsp-NA|co-Buru|site-NA|lat_N  
-----------TCCCTTTAATACTAGGAGCCCCTTTCCT----TAAATAAT-----  

Now I am trying to remove the duplicates, e.g. the repeated records "ACRJP458-10" and "PMANL2431-12". Using a bash script, I extracted the unique identifiers and stored the duplicated ones in the variable "$duplicate_headers". Currently, I am trying to find any duplicate instance of each two-line record and delete it, as follows:

for i in "$@"
do
    unset duplicate_headers
    duplicate_headers=`grep ">" "$i" | awk 'BEGIN { FS="|" }; { print $1 "\n"; }' | sort | uniq -d`
    for header in `echo -e "${duplicate_headers}"`
    do
        sed -i "/^.*\b${header}\b.*$/,+1 2d" $i
        #sed -i "s/^.*\b${header}\b.*$//,+1 2g" $i
        #sed -i "/^.*\b${header}\b.*$/{$!N; s/.*//2g; }" $i
    done
done
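As a side note on why the sed approach is hard to get right: the GNU-sed-only address form `/re/,+1d` does delete a matching header together with the following sequence line, but it does so for every match, so the first (wanted) copy of a duplicated record is lost as well. A minimal reproduction, using a hypothetical three-record file `demo.fa`:

```shell
# Hypothetical demo file: ACRJP458-10 appears twice.
cat > demo.fa <<'EOF'
>ACRJP458-10|Lep
AAAA
>CLT100-09|Lep
CCCC
>ACRJP458-10|Lep
TTTT
EOF
# GNU sed: delete each matching header line plus the next line.
# Both copies of ACRJP458-10 are removed, not just the second one.
sed '/^>ACRJP458-10|/,+1d' demo.fa
# prints only the >CLT100-09 record
```

This is why an approach that remembers which identifiers have already been seen (as in the answer below) is needed, rather than a pattern-delete.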

The final result (bearing in mind that there are thousands of records) would look like this:

>ACML500-12|Lep|gs-NA|sp-NA|subsp-NA|co-Buru|site-NA|lat_-2  
----TAAGATTTTGACTTCTTCCCCCATCATCAAGAAGAATTGT-------  
>ACRJP458-10|Lep|gs-NA|sp-NA|subsp-NA|co-Buru|site-NA|lat_N  
-----------TCCCTTTAATACTAGGAGCCCCTGACATAGCCTTTCCTAAATAAT-----  
>ASILO303-17|Dip|gs-Par|sp-Par vid|subsp-NA|co  
-------TAAGATTCTGATTACTCCCCCCCTCTCTAACTCTTCTTCTTCTATAGTAGATG  
>ASILO326-17|Dip|gs-Goe|sp-Goe par|subsp-NA|c  
TAAGATTTTGATTATTACCCCCTTCATTAACCAGGAACAGGATGA---------------  
>CLT100-09|Lep|gs-Col|sp-Col elg|subsp-NA|co-Buru  
AACATTATATTTGGAATTT-------GATCAGGAATAGTCGGAACTTCTCTGAA------  
>PMANL2431-12|Lep|gs-NA|sp-NA|subsp-NA|co-Buru|site-NA|lat_  
----ATGCCTATTATAATTGGAGGATTTGGAAAACCTTTAATATT----CCGAAT  
>STBOD057-09|Lep|gs-NA|sp-NA|subsp-NA|co-Buru|site-NA|lat_N  
ATCTAATATTGCACATAGAGGAACCTCNGTATTTTTTCTCTCCATCT------TTAG  
>TBBUT582-11|Lep|gs-NA|sp-NA|subsp-NA|co-Buru|site-NA|lat_N  
-----CCCCCTCATTAACATTACTAAGTTGAAAATGGAGCAGGAACAGGATGA  
>TBBUT583-11|Lep|gs-NA|sp-NA|subsp-NA|co-Buru|site-NA|lat_N  
TAAGATTTTGACTCATTAA----------------AATGGAGCAGGAACAGGATGA  
>AFBTB001-09|Col|gs-NA|sp-NA|subsp-NA|co-Ethi|site-NA|lat_N  
TAAGCTCCATCC-------------TAGAAAGAGGGG---------GGGTGA  
>AFBTB003-09|Col|gs-NA|sp-NA|subsp-NA|co-Ethi|site-NA|lat_N  
TAAGATTTTGACTTCTGC------CATGAGAAAGA-------------AGGGTGA  
>AFBTB002-09|Cole|gs-NA|sp-NA|subsp-NA|co-Ethi|site-NA|lat_N  
-------TCTTCTGCTCAT-------GGGGCAGGAACAGGG----------TGA

1 Answer:

Answer 0 (score: 3):

$ awk -F'[|]' 'NR%2{f=seen[$1]++} !f' file
>ACML500-12|Lep|gs-NA|sp-NA|subsp-NA|co-Buru|site-NA|lat_-2
----TAAGATTTTGACTTCTTCCCCCATCATCAAGAAGAATTGT-------
>ACRJP458-10|Lep|gs-NA|sp-NA|subsp-NA|co-Buru|site-NA|lat_N
-----------TCCCTTTAATACTAGGAGCCCCTGACATAGCCTTTCCTAAATAAT-----
>ASILO303-17|Dip|gs-Par|sp-Par vid|subsp-NA|co
-------TAAGATTCTGATTACTCCCCCCCTCTCTAACTCTTCTTCTTCTATAGTAGATG
>ASILO326-17|Dip|gs-Goe|sp-Goe par|subsp-NA|c
TAAGATTTTGATTATTACCCCCTTCATTAACCAGGAACAGGATGA---------------
>CLT100-09|Lep|gs-Col|sp-Col elg|subsp-NA|co-Buru
AACATTATATTTGGAATTT-------GATCAGGAATAGTCGGAACTTCTCTGAA------
>PMANL2431-12|Lep|gs-NA|sp-NA|subsp-NA|co-Buru|site-NA|lat_
----ATGCCTATTATAATTGGAGGATTTGGAAAACCTTTAATATT----CCGAAT
>STBOD057-09|Lep|gs-NA|sp-NA|subsp-NA|co-Buru|site-NA|lat_N
ATCTAATATTGCACATAGAGGAACCTCNGTATTTTTTCTCTCCATCT------TTAG
>TBBUT582-11|Lep|gs-NA|sp-NA|subsp-NA|co-Buru|site-NA|lat_N
-----CCCCCTCATTAACATTACTAAGTTGAAAATGGAGCAGGAACAGGATGA
>TBBUT583-11|Lep|gs-NA|sp-NA|subsp-NA|co-Buru|site-NA|lat_N
TAAGATTTTGACTCATTAA----------------AATGGAGCAGGAACAGGATGA
>AFBTB001-09|Col|gs-NA|sp-NA|subsp-NA|co-Ethi|site-NA|lat_N
TAAGCTCCATCC-------------TAGAAAGAGGGG---------GGGTGA
>AFBTB003-09|Col|gs-NA|sp-NA|subsp-NA|co-Ethi|site-NA|lat_N
TAAGATTTTGACTTCTGC------CATGAGAAAGA-------------AGGGTGA
>AFBTB002-09|Cole|gs-NA|sp-NA|subsp-NA|co-Ethi|site-NA|lat_N
-------TCTTCTGCTCAT-------GGGGCAGGAACAGGG----------TGA
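The one-liner relies on every record being exactly two lines. On each odd-numbered line (the header, where `NR%2` is 1), `seen[$1]++` returns the identifier's previous count, so `f` is 0 the first time an identifier appears and nonzero afterwards; the bare pattern `!f` then prints the header and, because `f` is left unchanged on the following even line, the sequence line too. A small self-contained check with hypothetical identifiers:

```shell
# Three two-line records; ">A" appears twice, so its second record is dropped.
printf '%s\n' '>A|x' 'SEQ1' '>B|x' 'SEQ2' '>A|x' 'SEQ3' |
awk -F'[|]' 'NR%2{f=seen[$1]++} !f'
# prints the >A and >B records once each
```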

To run it on several files at once and remove duplicates across all of them:

awk -F'[|]' 'FNR%2{f=seen[$1]++} !f' *

or to remove duplicates within each file only:

awk -F'[|]' 'FNR==1{delete seen} FNR%2{f=seen[$1]++} !f' *
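The per-file variant uses `FNR` (the line number within the current file) instead of `NR`, and `FNR==1{delete seen}` clears the lookup table whenever a new file starts (`delete array` on a whole array is supported by gawk and most modern awks). A hypothetical two-file check: an identifier repeated across files is kept in both, while a repeat inside one file is dropped.

```shell
# f1.fa repeats >A internally; f2.fa reuses >A from f1.fa.
printf '>A|x\nS1\n>A|x\nS2\n' > f1.fa
printf '>A|x\nS3\n' > f2.fa
# seen[] is reset at the start of each file, so only the
# within-file repeat (S2's record) is removed.
awk -F'[|]' 'FNR==1{delete seen} FNR%2{f=seen[$1]++} !f' f1.fa f2.fa
```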