How to quickly get the set of words in a corpus (using NLTK)?

Time: 2015-03-26 06:27:08

Tags: python text nlp counter nltk

I want to use NLTK to quickly build a word lookup table for a corpus. Here is what I am doing:

  1. Read the raw text: file = open("corpus", "r").read().decode('utf-8')
  2. Get all the tokens: a = nltk.word_tokenize(file)
  3. Get the unique tokens with set(a) and convert the result back to a list.

Is this the right way to do this? (A minimal sketch of these steps is shown below.)
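Put together, a minimal sketch of those steps (the filename "corpus" is taken from the description above; the .decode('utf-8') call is Python 2 style, in Python 3 you would open the file with encoding='utf-8' instead):

import nltk

# Read the raw text (Python 2 style, as described above).
text = open("corpus", "r").read().decode('utf-8')

# Tokenize the whole text.
a = nltk.word_tokenize(text)

# Keep only the unique tokens, then turn them back into a list.
uniq = list(set(a))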

1 Answer:

Answer 0 (score: 2)

Try:

import time
from collections import Counter

from nltk import FreqDist
from nltk.corpus import brown
from nltk import word_tokenize

def time_uniq(maxchar):
    # Let's just take the first `maxchar` characters of the Brown corpus.
    words = brown.raw()[:maxchar]

    # Time to tokenize.
    start = time.time()
    words = word_tokenize(words)
    print(time.time() - start)

    # Using collections.Counter.
    start = time.time()
    x = Counter(words)
    uniq_words = x.keys()
    print(time.time() - start)

    # Using nltk.FreqDist.
    start = time.time()
    fd = FreqDist(words)
    uniq_words = fd.keys()
    print(time.time() - start)

    # If you don't need frequency info, use set().
    start = time.time()
    uniq_words = set(words)
    print(time.time() - start)
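Each call prints four timings: tokenization, collections.Counter, nltk.FreqDist and set(). The three blocks of numbers below presumably come from calling time_uniq on increasingly large slices of the Brown corpus; the exact sizes are not given, so the values here are only an assumption:

# Hypothetical driver: run the benchmark at increasing corpus sizes.
for maxchar in [10000, 100000, 1000000]:
    time_uniq(maxchar)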

[OUT]:

~$ python test.py 
0.0413908958435
0.000495910644531
0.000432968139648
9.3936920166e-05

0.10734796524
0.00458407402039
0.00439405441284
0.00084400177002

1.12890005112
0.0492491722107
0.0490930080414
0.0100378990173

To load your own corpus file (assuming the file is small enough to fit in RAM):

from collections import Counter
from nltk import FreqDist, word_tokenize

with open('myfile.txt', 'r') as fin:
    text = fin.read()   # read the file once; repeated fin.read() calls would return ''

# Using Counter.
x = Counter(word_tokenize(text))
uniq = x.keys()

# Using FreqDist.
fd = FreqDist(word_tokenize(text))
uniq = fd.keys()

# Using set.
uniq = set(word_tokenize(text))

If the file is too large to fit in RAM, you may want to process it one line at a time:

from collections import Counter
from nltk import FreqDist, word_tokenize

# Using Counter.
x = Counter()
with open('myfile.txt', 'r') as fin:
    for line in fin:
        x.update(word_tokenize(line))
uniq = x.keys()

# Using set.
x = set()
with open('myfile.txt', 'r') as fin:
    for line in fin:
        x.update(word_tokenize(line))
uniq = x
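If you also need frequency information while streaming line by line, FreqDist can be filled incrementally in the same way (in NLTK 3, FreqDist is a Counter subclass, so its update() accepts an iterable of tokens); a sketch under that assumption:

from nltk import FreqDist, word_tokenize

# Using FreqDist, updated one line at a time.
fd = FreqDist()
with open('myfile.txt', 'r') as fin:
    for line in fin:
        fd.update(word_tokenize(line))
uniq = fd.keys()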