Finding the common words between two large unstructured text files

Time: 2016-02-10 10:46:15

Tags: data-processing bigdata

I have two large unstructured text files that cannot fit into memory. I want to find the words they have in common.

What is the most efficient way to do this, in both time and space?

Thanks

1 answer:

Answer 0: (score: 1)

I used these two files:

pi_poem

Now I will a rhyme construct
By chosen words the young instruct
I do not like green eggs and ham
I do not like them Sam I am

pi_prose

The thing I like best about pi is the magic it does with circles.
Even young kids can have fun with the simple integer approximations.

The code is simple. The first loop reads the first file line by line and adds each word to a lexicon set. The second loop reads the second file; every word it finds in the first file's lexicon goes into a set of common words.

Does this do what you need? You will need to adapt it to handle punctuation (a rough sketch of that follows the output below), and you will probably want to remove the extra print once it is working.

lexicon = set()
with open("pi_poem", 'r') as text:
    # Stream the file one line at a time rather than loading it all at once
    for line in text:
        for word in line.split():
            lexicon.add(word)
print lexicon

common = set()
with open("pi_prose", 'r') as text:
    for line in text:
        for word in line.split():
            # Keep only the words already seen in the first file
            if word in lexicon:
                common.add(word)

print common

Output:

set(['and', 'am', 'instruct', 'ham', 'chosen', 'young', 'construct', 'Now', 'By', 'do', 'them', 'I', 'eggs', 'rhyme', 'words', 'not', 'a', 'like', 'Sam', 'will', 'green', 'the'])
set(['I', 'the', 'like', 'young'])
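
Regarding the punctuation caveat above: with a plain split(), a word such as "circles." keeps its trailing period and will not match the bare word "circles". One possible adjustment, offered only as a sketch (the helper name words and the use of string.punctuation are my own choices, not part of the original answer), is to strip leading and trailing punctuation before adding or checking each word:

import string

def words(path):
    # Yield punctuation-stripped words from a file, one line at a time
    with open(path, 'r') as text:
        for line in text:
            for word in line.split():
                cleaned = word.strip(string.punctuation)
                if cleaned:
                    yield cleaned

lexicon = set(words("pi_poem"))
common = set(w for w in words("pi_prose") if w in lexicon)
print(common)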