Python: removing dupes from a large text file

Asked: 2015-08-17 10:44:36

Tags: python

I need my code to remove duplicate lines from a file; at the moment it just outputs the same file again. Can anyone see how to fix this? The for loop isn't running the way I'd hoped.

#!/usr/bin/python
import os
import sys

#Reading Input file
f = open(sys.argv[1]).readlines()

#printing no of lines in the input file
print "Total lines in the input file",len(f)

#temporary dictionary to store the unique records/rows
temp = {}

#counter to count unique items
count = 0

for i in range(0,9057,1):
    if i not in temp: #if row is not there in dictionary i.e it is unique so store it into a dictionary
        temp[f[i]] = 1;
        count += 1
    else:   #if exact row is there then print duplicate record and dont store that
        print "Duplicate Records",f[i]
        continue;

#once all the records are read print how many unique records are there
#you can print all unique records by printing temp
print "Unique records",count,len(temp)

#f = open("C://Python27//Vendor Heat Map Test 31072015.csv", 'w')
#print f
#f.close()
nf = open("C://Python34//Unique_Data.csv", "w")
for data in temp.keys():
    nf.write(data)
nf.close()


# Written by Gary O'Neill
# Date 03-08-15

2 Answers:

Answer 0 (score: 3)

Here is a better way to do what you want:

infile_path = 'infile.csv'
outfile_path = 'outfile.csv'

written_lines = set()

with open(infile_path, 'r') as infile, open(outfile_path, 'w') as outfile:
    for line in infile:
        if line not in written_lines:
            outfile.write(line)
            written_lines.add(line)
        else:
            print "Duplicate record: {}".format(line)

print "{} unique records".format(len(written_lines))

This reads one line at a time, so it works even for large files that don't fit in memory. It's true that if the lines are mostly unique, written_lines will eventually grow large, but that's still better than holding nearly two copies of every line in memory.
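
If the growth of written_lines is a real concern (i.e. the input is mostly unique lines), a common variation is to store a fixed-size hash digest of each line instead of the line itself. This is a sketch that goes beyond the original answer, and it accepts a vanishingly small risk of an MD5 collision silently dropping a unique line:

import hashlib

infile_path = 'infile.csv'
outfile_path = 'outfile.csv'

seen_digests = set()

with open(infile_path, 'r') as infile, open(outfile_path, 'w') as outfile:
    for line in infile:
        # keep only the 16-byte MD5 digest of each line, so memory
        # grows by roughly 16 bytes per unique line regardless of length
        digest = hashlib.md5(line).digest()
        if digest not in seen_digests:
            outfile.write(line)
            seen_digests.add(digest)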

Answer 1 (score: 1)

You should test whether f[i], rather than i, is in temp. Change this line:

 if i not in temp:

to:

 if f[i] not in temp:
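
For illustration (this sketch goes beyond the original answer), here is the corrected loop in full, also replacing the hard-coded 9057 with len(f) so it works for any input size:

for i in range(len(f)):
    if f[i] not in temp:  # row not seen before: record it as unique
        temp[f[i]] = 1
        count += 1
    else:  # exact row already stored: report the duplicate
        print "Duplicate Records", f[i]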