NLTK - statistics on a large corpus are very slow

Date: 2018-12-02 17:43:07

Tags: python performance nlp nltk tagged-corpus

I want to look at basic statistics about my corpus, such as word/sentence counts, distributions, and so on. I have a tokens_corpus_reader_ready.txt that contains 137,000 lines of example sentences in the following format:


Zur/APPRART Zeit/NN kostenlos/ADJD aber/KON auch/ADV nur/ADV 11/CARD kW./NN
Zur/APPRART Zeit/NN anscheinend/ADJD kostenlos/ADJD ./$.
...
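
As an aside (this snippet is not part of the original question), each token in the file is a word/TAG pair in the standard format that TaggedCorpusReader reads; NLTK's nltk.tag.str2tuple shows how a single pair is split:

import nltk

# Split one word/TAG token from the sample lines above;
# str2tuple splits on the last '/' in the token.
print(nltk.tag.str2tuple('Zur/APPRART'))  # ('Zur', 'APPRART')
print(nltk.tag.str2tuple('kW./NN'))       # ('kW.', 'NN')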

I also have a TaggedCorpusReader subclass with a describe() method:

import time
import nltk
from nltk.corpus.reader import TaggedCorpusReader

class CSCorpusReader(TaggedCorpusReader):
    def __init__(self):
        # raw_corpus_path is defined elsewhere in the project
        TaggedCorpusReader.__init__(self, raw_corpus_path, 'tokens_corpus_reader_ready.txt')

    def describe(self):
        """
        Performs a single pass of the corpus and
        returns a dictionary with a variety of metrics
        concerning the state of the corpus.

        modified method from https://github.com/foxbook/atap/blob/master/snippets/ch03/reader.py
        """
        started = time.time()

        # Structures to perform counting.
        counts = nltk.FreqDist()
        tokens = nltk.FreqDist()

        # Perform single pass over paragraphs, tokenize and count
        for sent in self.sents():
            print(time.time())
            counts['sents'] += 1

            for word in self.words():
                counts['words'] += 1
                tokens[word] += 1

        return {
            'sents':  counts['sents'],
            'words':  counts['words'],
            'vocab':  len(tokens),
            'lexdiv': float(counts['words']) / float(len(tokens)),
            'secs':   time.time() - started,
        }

If I run the describe method like this in IPython:

>> corpus = CSCorpusReader()
>> print(corpus.describe())

there is a delay of roughly 7 seconds between each sentence:


1543770777.502544
1543770784.383989
1543770792.2057862
1543770798.992075
1543770805.819034
1543770812.599932
...

If I run the same thing with only a few sentences in tokens_corpus_reader_ready.txt, the output times are perfectly reasonable:


1543771884.739753
1543771884.74035
1543771884.7408729
1543771884.7413561
{'sents': 4, 'words': 212, 'vocab': 42, 'lexdiv': 5.0476190476190474, 'secs': 0.002869129180908203}

Where does this behavior come from, and how can I fix it?

Edit 1

By operating on lists instead of accessing the corpus reader on every iteration, the time per sentence dropped to about 3 seconds, which is still far too long:

    sents = list(self.sents())
    words = list(self.words())

    # Perform single pass over paragraphs, tokenize and count
    for sent in sents:
        print(time.time())
        counts['sents'] += 1

        for word in words:
            counts['words'] += 1
            tokens[word] += 1
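
For a sense of scale, here is a rough estimate of the work done by the nested loop above (the average sentence length is not given in the question; 15 words per sentence is an assumed figure for illustration only):

# Back-of-the-envelope cost of the nested loop above.
# 137,000 sentences come from the question; the average sentence
# length is a hypothetical assumption for illustration.
n_sents = 137_000
avg_words_per_sent = 15                      # assumption, not from the question
total_words = n_sents * avg_words_per_sent   # ~2 million words
inner_iterations = n_sents * total_words     # ~2.8e11 iterations
print(f"{inner_iterations:,} inner-loop iterations")

Even with the corpus materialized as lists, the inner loop still walks every word of the corpus once per sentence, which is why each sentence still takes seconds.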

1 Answer:

Answer 0 (score: 1)

Here is your problem: for every sentence, you read through the entire corpus with the words() method. Since the whole corpus is re-read once per sentence, the running time grows quadratically with the size of the corpus. No wonder it takes ages.

for sent in self.sents():
    print(time.time())
    counts['sents'] += 1

    for word in self.words():
        counts['words'] += 1
        tokens[word] += 1

In fact, a sentence is already tokenized into words, so this is what you meant:

for sent in self.sents():
    print(time.time())
    counts['sents'] += 1

    for word in sent:
        counts['words'] += 1
        tokens[word] += 1
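
For completeness, a sketch of the full describe() with that one-line fix applied (everything else unchanged from the question's code; time and nltk imported as before):

def describe(self):
    """Single pass over the corpus, returning counting metrics."""
    started = time.time()

    counts = nltk.FreqDist()
    tokens = nltk.FreqDist()

    for sent in self.sents():
        counts['sents'] += 1
        # The sentence is already a list of words; no second
        # pass over the corpus via self.words() is needed.
        for word in sent:
            counts['words'] += 1
            tokens[word] += 1

    return {
        'sents':  counts['sents'],
        'words':  counts['words'],
        'vocab':  len(tokens),
        'lexdiv': float(counts['words']) / float(len(tokens)),
        'secs':   time.time() - started,
    }

With this change the method makes a genuinely single pass, so its running time is linear in the number of tokens rather than quadratic, and the whole corpus should be processed in seconds rather than hours.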