Counting the number of unique lines in a huge CSV file

Time: 2019-05-16 08:00:32

Tags: python

I have a very large CSV file (around 5-6 GB) sitting in Hive. Is there a way to count the number of unique lines present in the file?

I have no clue how to approach this.

I need to compare the output with another Hive table that has similar content but only unique values, so essentially I need to find the number of distinct lines.
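
For context, the naive approach sketched below would answer this directly, but it keeps every distinct line in memory, which may not be feasible for a 5-6 GB file (the file path here is just a placeholder):

# Naive version: the set must hold every distinct line in RAM,
# which can be far too much for a 5-6 GB file.
with open('input_file.txt') as fp:  # placeholder path
    print(len(set(fp)))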

1 Answer:

Answer 0: (score: 2)

The logic below is based on hashing. Instead of keeping whole lines in memory, it keeps only the hash of each line, which keeps the memory footprint small, and then compares hashes. Identical strings always produce identical hashes, but in rare cases two different strings can collide on the same hash, so for lines whose hashes repeat, the actual lines are read back and compared as strings. This approach works for large files as well.

from collections import Counter

input_file = r'input_file.txt'

# Main idea:
# - If two lines hash differently, they are definitely different.
# - If two lines hash identically, they are probably the same, but could
#   be a collision, so those lines are re-read and compared as strings.
# All hashing happens within a single run, so Python's per-process hash
# randomization does not affect correctness.


def count_with_index(values):
    '''
    Returns a dict mapping each value to (count, [indexes where it occurs]).
    '''
    result = {}
    for i, v in enumerate(values):
        count, indexes = result.get(v, (0, []))
        indexes.append(i)  # append in place instead of copying the list
        result[v] = (count + 1, indexes)
    return result


def get_lines(fp, line_numbers):
    '''Yields the lines of fp whose 0-based line numbers are in line_numbers.'''
    line_numbers = set(line_numbers)  # set gives O(1) membership tests
    return (v for i, v in enumerate(fp) if i in line_numbers)


# First pass: hash every line of the file
with open(input_file) as fp:
    counter = count_with_index(map(hash, fp))

# Hashes seen exactly once belong to definitely-unique lines
sum_of_unique_hash = sum(c for _, (c, _) in counter.items() if c == 1)

# Hashes seen more than once: duplicate lines or (rarely) collisions
non_unique_hash = ((h, v) for h, (c, v) in counter.items() if c != 1)

total_sum = sum_of_unique_hash

# Second pass: for each repeated hash, re-read just those lines and count
# the distinct actual strings. One hash is handled at a time, so memory
# consumption stays low.
for h, v in non_unique_hash:
    with open(input_file) as fp:
        line_counter = Counter(get_lines(fp, v))
    total_sum += len(line_counter)

print('Total number of unique lines is:', total_sum)
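
A quick way to sanity-check the logic is to run the script against a tiny hand-made input (the file contents below are made up for illustration):

# Write a tiny test input first: 'a' occurs twice, 'b' and 'c' once,
# so there are 3 distinct lines in total.
with open('input_file.txt', 'w') as f:
    f.write('a\nb\na\nc\n')

# Running the script above on this file should print:
# Total number of unique lines is: 3

The point of the two passes is that only the hashes and, for the rare repeated hashes, their line numbers are ever held in memory at once; the full set of distinct lines never is.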