How to eliminate duplicate records from a delimited file on Linux (myfile_I.out: application/octet-stream; charset=binary)

Date: 2019-06-19 12:43:47

Tags: linux sorting ksh

I am trying to load data from a Linux file (which contains duplicates; the data was unloaded from a source table) into a table.

     My Linux file properties:
     $ file -bi myfile_I.out
     application/octet-stream; charset=binary

Before loading the data into the table, I need to remove the duplicates from the Linux file.

My approach to removing the duplicates:


  1. Unload the data from the source table into a temp file (TempEX.out)
  2. Run sort -u on the TempEX.out file to remove the duplicates, and load the final unique data records into myfile_I.out
  3. Finally, load the myfile_I.out data into target_table

I am facing an issue at step 2: I cannot remove all of the duplicates from the TempEX.out file.

    #------------------------------------------------------------------#
    #- Delete the duplicates from TempEX.out, write the unique data ---#
    #- to myfile_I.out -------------------------------------------------#

    echo -e "Eliminate the duplicates from the ${FILE_PATH}/TempEX.out file" >> ${LOG}

    sort -u ${FILE_PATH}/TempEX.out > ${DEST_PATH}/myfile_I.out

    echo -e "Unique records successfully written into ${DEST_PATH}/myfile_I.out" >> ${LOG}

    # Count the records in the deduplicated file
    count=0
    while read
    do
        ((count=count+1))
    done < ${DEST_PATH}/myfile_I.out
    echo -e "Total No of unique records in ${DEST_PATH}/myfile_I.out: ${count}\n" >> ${LOG}
    #------------------------------------------------------------------#
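
(Side note: the counting loop above is equivalent to a single wc -l call:)

    # Reading from stdin makes wc -l print just the number, with no filename
    count=$(wc -l < ${DEST_PATH}/myfile_I.out)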

Actual results:

   Counts:

    $ wc -l TempEX.out myfile_I.out
    196466 TempEX.out   --> # file contains duplicate records
    196460 myfile_I.out --> # unique records after my approach (sort -u)
    392926 total

I ran a few sort commands to see the duplicates present in each file. Duplicate records (on the key fields) in the TempEX.out file:

    $ cut -d'^X' -f1,6,10 TempEX.out|sort|uniq -d|wc -l
    5

Duplicate records (on the key fields) in the myfile_I.out file:

    $ cut -d'^X' -f1,6,10 myfile_I.out|sort|uniq -d|wc -l
    1

Records in the TempEX.out file that have duplicates (on the primary_key):

    $ cut -d'^X' -f1,6,10 TempEX.out|sort|uniq -d|cat
    701234567      412345678        19
    701234568      412345677        18
    709875641      412345859        17
    701234569      425984031        21
    701234570      409845216        20

Records in the myfile_I.out file that have duplicates (on the primary_key):

    $ cut -d'^X' -f1,6,10 myfile_I.out|sort|uniq -d|cat
    709875641      412345859        17

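(This is the crux of the problem: sort -u compares entire lines, so it only removed the 6 byte-for-byte duplicate lines (196466 - 196460 = 6); records that share the key fields but differ in any other column all survive. A quick illustration, with TAB standing in for the real delimiter:)

    $ printf 'A\tB\tC1\nA\tB\tC2\n' | sort -u | wc -l
    2
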
Expected results: eliminate the duplicates from the TempEX.out file and load the unique data into myfile_I.out.

    sort -u TempEX.out > myfile_I.out    # can't resolve the issue

Can we do something like this (keyed on the primary key)?

    sort -u -f1,6,10 TempEX.out > myfile_I.out
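
(For reference, sort has no -f option for selecting fields; keys are chosen with -k and the field delimiter with -t. A minimal sketch, assuming GNU sort and that the delimiter really is the control character 0x18, i.e. ^X:)

    # -t sets the field delimiter (ksh/bash $'\x18' expands to the ^X byte);
    # each -kN,N restricts a sort key to field N; with keys present, -u keeps
    # only the first line of every run that compares equal on those keys
    sort -t $'\x18' -u -k1,1 -k6,6 -k10,10 TempEX.out > myfile_I.out

Note that which of the colliding records survives is then arbitrary, which is exactly why the answer below diverts such records to a separate file for review instead of silently dropping them.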

1 Answer:

Answer 0 (score: 0):

Here is a small script that might help. It does not modify the original file with the new data; instead it creates a new file to load (I always prefer to keep the original around in case something goes wrong). It validates on the primary key, but it also makes sure that, where a primary key is duplicated, the other columns are identical. The rationale: even though you did not mention it, existing data could have been modified, or there could be input errors in the system. Either way, the script sends those lines to a separate file for the user to review.

As noted in the script's comments as well: make sure that no field in any column contains blank spaces.

#!/bin/ksh

TIMESTAMP=$(date +"%Y%m%d%H%M")

#No sense to do anything if the files are not readable.
if [[ ! -r $1 || ! -r $2 ]]; then
    print "ERROR - You must provide 2 parameters : 1 = path/filename of DB content 2 = path/filename of New Data"
    exit 1
fi

#Declaring 2 associative arrays
typeset -A TableDB
typeset -A DataToAdd

#Opening the different files: 3 and 4 for reading, 5 and 6 for writing.
#File handlers :
# 3 for the data from the DB,
# 4 for the new data to add,
# 5 to write the new data to load (unique and new),
# 6 to write the data in problem (same primary key but with different values)
exec 3<$1
exec 4<$2
exec 5>Data2Load_${TIMESTAMP}.txt
exec 6>Data2Verify_${TIMESTAMP}.txt

#Loading the 2 arrays with their data.
#Here it is assumed that no field in any column contains blank spaces.
#Working with only 3 columns as in the example.
#The two files are read in separate loops: a single "read -u3 && read -u4"
#loop would stop at the end of the shorter file and silently drop the rest.
while read -u3 a b c; do
        TableDB[$a]=( $a $b $c )
done
while read -u4 d e f; do
        DataToAdd[$d]=( $d $e $f )
done

#Checking for duplicates and writing only the new lines to load, without the lines in possible error
for i in ${!DataToAdd[@]}; do
        if [[ -z ${TableDB[$i]} ]]; then
                print -u5 "${DataToAdd[$i][0]} ${DataToAdd[$i][1]} ${DataToAdd[$i][2]}"
        elif [[ ${DataToAdd[$i][1]} != ${TableDB[$i][1]} || ${DataToAdd[$i][2]} != ${TableDB[$i][2]} ]]; then
                print -u6 "${DataToAdd[$i][0]} ${DataToAdd[$i][1]} ${DataToAdd[$i][2]}"
        fi
done

#closing the different files
exec 3>&-
exec 4>&-
exec 5>&-
exec 6>&-
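
(A usage sketch, with hypothetical file names: dedup.ksh for the script itself and CurrentDB.out for the unloaded current table content:)

    # $1 = existing DB content, $2 = new data to load;
    # produces Data2Load_<timestamp>.txt and Data2Verify_<timestamp>.txt
    ksh dedup.ksh CurrentDB.out TempEX.out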

Hope it helps!