NLTK: finding a size-2k context for a word

Asked: 2014-03-01 18:01:57

Tags: python nlp nltk collocation

I have a corpus and I have a word. For each occurrence of that word in the corpus, I want to get a list containing the k words before and the k words after the word. I can do this algorithmically (see below), but I wondered whether NLTK provides some functionality for this that I have missed?

def sized_context(word_index, window_radius, corpus):
    """ Returns a list containing the window_size amount of words to the left
    and to the right of word_index, not including the word at word_index.
    """

    max_length = len(corpus)

    left_border = max(word_index - window_radius, 0)
    right_border = min(word_index + 1 + window_radius, max_length)

    return corpus[left_border:word_index] + corpus[word_index+1: right_border]
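For illustration, here is a self-contained sketch of the same windowing logic applied to a small hand-made token list (the corpus and indices below are invented for the demo):

```python
def sized_context(word_index, window_radius, corpus):
    # Same slicing logic as above, repeated here for a self-contained demo.
    left = max(word_index - window_radius, 0)
    right = min(word_index + 1 + window_radius, len(corpus))
    return corpus[left:word_index] + corpus[word_index + 1:right]

corpus = ['the', 'quick', 'brown', 'fox', 'jumps', 'over', 'the', 'lazy', 'dog']
print(sized_context(3, 2, corpus))  # ['quick', 'brown', 'jumps', 'over']
print(sized_context(0, 2, corpus))  # window clipped at the left edge
```

Note that the window is silently truncated at the corpus boundaries, so occurrences near the start or end return fewer than 2k words.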

2 answers:

Answer 0 (score: 6)

If you want to use nltk's functionality, you can use nltk's ConcordanceIndex. To base the width of the display on the number of words rather than the number of characters (the latter being the default for ConcordanceIndex.print_concordance), you can simply create a subclass of ConcordanceIndex with something like this:

from nltk import ConcordanceIndex

class ConcordanceIndex2(ConcordanceIndex):
    def create_concordance(self, word, token_width=13):
        "Returns a list of contexts for @word with a context <= @token_width"
        half_width = token_width // 2
        contexts = []
        for i, token in enumerate(self._tokens):
            if token == word:
                start = i - half_width if i >= half_width else 0
                context = self._tokens[start:i + half_width + 1]
                contexts.append(context)
        return contexts

Then you can get results like this:

>>> from nltk.tokenize import wordpunct_tokenize
>>> my_corpus = 'The gerenuk fled frantically across the vast valley, whereas the giraffe merely turned indignantly and clumsily loped away from the valley into the nearby ravine.'  # my corpus
>>> tokens = wordpunct_tokenize(my_corpus)
>>> c = ConcordanceIndex2(tokens)
>>> c.create_concordance('valley')  # returns a list of lists, since words may occur more than once in a corpus
[['gerenuk', 'fled', 'frantically', 'across', 'the', 'vast', 'valley', ',', 'whereas', 'the', 'giraffe', 'merely', 'turned'], ['and', 'clumsily', 'loped', 'away', 'from', 'the', 'valley', 'into', 'the', 'nearby', 'ravine', '.']]

The create_concordance method I created above is based on nltk's ConcordanceIndex.print_concordance method, which works like this:

>>> c = ConcordanceIndex(tokens)
>>> c.print_concordance('valley')
Displaying 2 of 2 matches:
                                  valley , whereas the giraffe merely turn
 and clumsily loped away from the valley into the nearby ravine .

Answer 1 (score: 3)

The simplest nltk-ish way to do this is with nltk.ngrams():

import nltk

words = nltk.corpus.brown.words()
k = 5
for ngram in nltk.ngrams(words, 2*k+1, pad_left=True, pad_right=True,
                         left_pad_symbol=" ", right_pad_symbol=" "):
    if ngram[k].lower() == "settle":   # ngram[k] is the center word of a 2k+1 window
        print(" ".join(ngram))

pad_left and pad_right ensure that all words get looked at. This is important if you don't let your windows span sentence boundaries (hence: lots of edge cases).

If you want to ignore punctuation when counting the window size, you can strip it before scanning:

import re
import nltk

words = (w for w in nltk.corpus.brown.words() if re.search(r"\w", w))
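The padded sliding-window idea can also be written in plain Python, which makes the role of k explicit. This is a sketch mirroring the nltk.ngrams call above on a hand-tokenized sentence (the helper name padded_windows and the sample sentence are made up for the demo):

```python
def padded_windows(words, k, pad=" "):
    # Emulate nltk.ngrams(words, 2*k+1, pad_left=True, pad_right=True, ...):
    # pad k symbols on each side, then slide a window of width 2k+1
    # so that every word appears once at the center position (index k).
    padded = [pad] * k + list(words) + [pad] * k
    for i in range(len(words)):
        yield padded[i:i + 2 * k + 1]

words = "we will settle this here and now".split()
for window in padded_windows(words, 2):
    if window[2].lower() == "settle":
        print(" ".join(window))  # prints: we will settle this here
```

Near the edges of the token list the window contains the pad symbol, just as with pad_left/pad_right in nltk.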