Find and delete duplicates in a CSV file

Date: 2019-04-06 17:38:24

Tags: python csv awk duplicates

I have a large CSV file (1.8 GB) with three columns. Each row contains two strings and a numeric value. The problem is that some rows are duplicates with the first two values swapped. Example:

Col1,Col2,Col3
ABC,DEF,123
ABC,EFG,454
DEF,ABC,123

The desired output would look like this:

Col1,Col2,Col3
ABC,DEF,123
ABC,EFG,454

Because the third row of the input contains the same information as the first row.

EDIT:

The data basically looks like the example above: the first two columns are strings and the third column is numeric, with about 40 million rows in total.


4 answers:

Answer 0 (score: 4):

Could you manage with awk?

$ awk -F, '++seen[$3]==1' file

Output:

Col1,Col2,Col3
ABC,DEF,123
ABC,EFG,454

Explanation:

$ awk -F, '      # set comma as field delimiter
++seen[$3]==1    # count instances of the third field to hash, printing only first
' file

Update:

$ awk -F, '++seen[($1<$2?$1 FS $2:$2 FS $1)]==1' file

Output:

Col1,Col2,Col3
ABC,DEF,123
ABC,EFG,454

It hashes every encountered combination of the first and second fields so that "ABC,DEF" == "DEF,ABC", counts them, and prints only the first occurrence. ($1<$2?$1 FS $2:$2 FS $1): if the first field is less than the second, hash 1st,2nd, otherwise hash 2nd,1st.
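
For reference, the same canonical-key idea can be written in plain Python (a rough sketch, not part of the original answer; the placeholder file name and comma-separated layout are assumptions):

# Sketch: keep only the first row for each unordered pair of the first two fields.
seen = set()
with open("file") as f:                           # same placeholder file name as above
    for line in f:
        c1, c2, _ = line.rstrip("\n").split(",", 2)
        key = (c1, c2) if c1 < c2 else (c2, c1)   # mirrors ($1<$2 ? $1 FS $2 : $2 FS $1)
        if key not in seen:
            seen.add(key)
            print(line, end="")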

Answer 1 (score: 2):

From the problem description, the requirement for keeping a row is that its first and second fields, concatenated in either order, should be unique. If so, the awk below would do:

awk -F, '{seen[$1,$2]++;seen[$2,$1]++}seen[$1,$2]==1 && seen[$2,$1]==1' filename

Sample input:

Col1,Col2,Col3
ABC,DEF,123
ABC,EFG,454
DEF,ABC,123
GHI,ABC,123
DEF,ABC,123
ABC,GHI,123
DEF,GHI,123

Sample output:

Col1,Col2,Col3
ABC,DEF,123
ABC,EFG,454
GHI,ABC,123
DEF,GHI,123

Answer 2 (score: 0):

If you want to use the csv library itself:

You can use DictReader and DictWriter.

import csv

def main():
    """Read csv file, delete duplicates and write it."""
    with open('test.csv', 'r', newline='') as inputfile:
        with open('testout.csv', 'w', newline='') as outputfile:
            duplicatereader = csv.DictReader(inputfile, delimiter=',')
            uniquewrite = csv.DictWriter(outputfile, fieldnames=['address', 'floor', 'date', 'price'], delimiter=',')
            uniquewrite.writeheader()
            keysread = []
            for row in duplicatereader:
                key = (row['date'], row['price'])
                if key not in keysread:
                    print(row)
                    keysread.append(key)
                    uniquewrite.writerow(row)

if __name__ == '__main__':
    main()
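
The field names above ('address', 'floor', 'date', 'price') come from a different dataset than the one in the question. A sketch adapted to the question's Col1/Col2/Col3 layout, keying on the unordered pair of the first two columns and using a set for O(1) lookups, might look like this (the file names are assumptions):

import csv

def dedupe_swapped():
    """Keep only the first row for each unordered (Col1, Col2) pair."""
    with open('test.csv', 'r', newline='') as inputfile, \
         open('testout.csv', 'w', newline='') as outputfile:
        reader = csv.DictReader(inputfile)
        writer = csv.DictWriter(outputfile, fieldnames=['Col1', 'Col2', 'Col3'])
        writer.writeheader()
        seen = set()
        for row in reader:
            key = frozenset((row['Col1'], row['Col2']))   # order-insensitive key
            if key not in seen:
                seen.add(key)
                writer.writerow(row)

if __name__ == '__main__':
    dedupe_swapped()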

Answer 3 (score: 0):

Note: this answer was written before the OP changed the question's tags.

If you don't care about the order of the elements, you could do:

with open("in.csv", "r") as file:
    lines = set()
    for line in file:
        lines.add(frozenset(line.strip("\n").split(",")))

with open("out.csv", "w") as file:
    for line in lines:
        file.write(",".join(line)+"\n")

Output:

Col2,COL1,Col3
EFG,454,ABC
DEF,123,ABC

Note that you may want to treat the first line (the headers) in a special way so it does not lose its order.
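
For example, a minimal way to keep the header intact (a sketch building on the snippet above, with the same assumed file names):

with open("in.csv", "r") as file:
    header = next(file)                       # read the header line verbatim
    lines = set()
    for line in file:
        lines.add(frozenset(line.strip("\n").split(",")))

with open("out.csv", "w") as file:
    file.write(header)                        # write the header back first, unchanged
    for line in lines:
        file.write(",".join(line) + "\n")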

But if the order matters, you could use the code from Maintaining the order of the elements in a frozen set:

from itertools import filterfalse

def unique_everseen(iterable, key=None):
    seen = set()
    seen_add = seen.add
    if key is None:
        for element in filterfalse(seen.__contains__, iterable):
            seen_add(element)
            yield element
    else:
        for element in iterable:
            k = key(element)
            if k not in seen:
                seen_add(k)
                yield element        

with open("in.csv", "r") as file:
    lines = []
    for line in file:
        lines.append(line.strip("\n").split(","))

with open("out.csv", "w") as file:
    for line in unique_everseen(lines, key=frozenset):
        file.write(",".join(line)+"\n")

Output:

Col1,Col2,Col3
ABC,DEF,123
ABC,EFG,454

The OP commented that neither of these two snippets seems to work on huge files (1.8 GB). I think the most likely reason is that both of them keep the whole file in a list in RAM, and a 1.8 GB file may take up all the available memory.

To work around that, I made a few more attempts. Sadly, I must say they are all extremely slow compared with the first one. The first snippets sacrifice RAM consumption for speed, while the following ones sacrifice speed, CPU and hard drive for lower RAM consumption (instead of holding the whole file in RAM, they take less than 50 MB).

Since all of these examples make heavy use of the hard drive, it is advisable to put the input and output files on different hard drives.

My first attempt at using less RAM was with the shelve module:

import shelve, os
with shelve.open("tmp") as db:
    with open("in.csv", "r") as file:
        for line in file:
            l = line.strip("\n").split(",")
            l.sort()
            db[",".join(l)] = l

    with open("out.csv", "w") as file:
        for v in db.values():
            file.write(",".join(v)+"\n")

os.remove("temp.bak")
os.remove("temp.dat")
os.remove("temp.dir")

Sadly, this code takes hundreds of times longer than the first two snippets that use RAM.

Another attempt was:

with open("in.csv", "r") as fileRead:
    # total = sum(1 for _ in fileRead)
    # fileRead.seek(0)
    # i = 0
    with open("out.csv", "w") as _:
        pass
    with open("out.csv", "r+") as fileWrite:
        for lineRead in fileRead:
            # i += 1
            line = lineRead.strip("\n").split(",")
            lineSet = set(line)
            write = True
            fileWrite.seek(0)
            for lineWrite in fileWrite:
                if lineSet == set(lineWrite.strip("\n").split(",")):
                    write = False
            if write:
                fileWrite.write(",".join(line)+"\n")
            # if i / total * 100 % 1 == 0: print(f"{i / total * 100}% ({i} / {total})")

This is slightly faster, but not by much.

If your computer has several cores, you could try multiprocessing:

from multiprocessing import Process, Queue, cpu_count
from os import remove

def slave(number, qIn, qOut):
    name = f"slave-{number}.csv"
    with open(name, "w") as file:
        pass
    with open(name, "r+") as file:
        while True:
            if not qIn.empty():
                get = qIn.get()
                if get == False:
                    qOut.put(name)
                    break
                else:
                    write = True
                    file.seek(0)                    
                    for line in file:
                        if set(line.strip("\n").split(",")) == get[1]:
                            write = False
                            break
                    if write:
                        file.write(get[0])

def master():
    qIn = Queue(1)
    qOut = Queue()
    slaves = cpu_count()
    slavesList = []

    for n in range(slaves):
        slavesList.append(Process(target=slave, daemon=True, args=(n, qIn, qOut)))
    for s in slavesList:
        s.start()

    with open("in.csv", "r") as file:
        for line in file:
            lineSet = set(line.strip("\n").split(","))
            qIn.put((line, lineSet))
        for _ in range(slaves):
            qIn.put(False)

    for s in slavesList:
        s.join()

    slavesList = []

    with open(qOut.get(), "r+") as fileMaster:
        for x in range(slaves-1):
            file = qOut.get()
            with open(file, "r") as fileSlave:
                for lineSlave in fileSlave:
                    lineSet = set(lineSlave.strip("\n").split(","))
                    write = True
                    fileMaster.seek(0)
                    for lineMaster in fileMaster:
                        if set(lineMaster.strip("\n").split(",")) == lineSet:
                            write = False
                            break
                    if write:
                        fileMaster.write(lineSlave)

            slavesList.append(Process(target=remove, daemon=True, args=(file,)))
            slavesList[-1].start()

    for s in slavesList:
        s.join()
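
Note that, as posted, nothing actually invokes master(); with multiprocessing it should also run under a main guard, so something along these lines is presumably intended:

if __name__ == "__main__":
    master()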

As you can see, I have the disappointing task of telling you that both of my attempts are really slow. I hope you find a better approach; otherwise it will take hours or even days to process 1.8 GB of data (the actual time will mostly depend on the number of repeated values, which shortens it).

A new attempt: instead of storing every piece in files, this one keeps the active chunk in memory and writes it out to a file so the chunks can be processed faster. Afterwards, the chunks have to be re-read using one of the methods above:

lines = set()
maxLines = 1000  # number of lines kept in RAM at the same time; higher numbers are faster but require more RAM
perfect = True
with open("in.csv", "r") as fileRead:
    total = sum(1 for _ in fileRead)
    fileRead.seek(0)
    i = 0
    with open("tmp.csv", "w") as fileWrite:
        for line in fileRead:
            if len(lines) < maxLines:
                lines.add(frozenset(line.strip("\n").split(",")))
                i += 1
                if i / total * 100 % 1 == 0: print(f"Reading {i / total * 100}% ({i} / {total})")
            else:
                perfect = False
                # flush the current chunk to disk
                j = 0
                for storedLine in lines:
                    j += 1
                    fileWrite.write(",".join(storedLine) + "\n")
                    if j / len(lines) * 100 % 1 == 0: print(f"Storing {j / len(lines) * 100}% ({j} / {len(lines)})")
                # start a new chunk and keep the line that triggered the flush
                lines = {frozenset(line.strip("\n").split(","))}
                i += 1

if not perfect:
    use_one_of_the_above_methods()  # remember to read tmp.csv and not in.csv

This might improve the speed. You can change maxLines to whatever you like; keep in mind that the higher the number, the greater the speed (I am not sure whether really big numbers do the opposite), but also the higher the RAM consumption.