Extracting noun phrases with NLTK in Python

Date: 2016-07-05 02:56:19

Tags: python, nltk

I am new to Python and NLTK. I have adapted the code from https://gist.github.com/alexbowe/879414 into the version given below so that it can run over many documents/text chunks, but I am getting the following error:

 Traceback (most recent call last):
   File "E:/NLP/PythonProgrames/NPExtractor/AdvanceMain.py", line 16, in <module>
     result = np_extractor.extract()
   File "E:\NLP\PythonProgrames\NPExtractor\NPExtractorAdvanced.py", line 67, in extract
     for term in terms:
   File "E:\NLP\PythonProgrames\NPExtractor\NPExtractorAdvanced.py", line 60, in get_terms
     for leaf in self.leaves(tree):
 TypeError: leaves() takes 1 positional argument but 2 were given

Can anyone help me fix this? I have to extract noun phrases from millions of product reviews. I used the Stanford NLP toolkit with Java, but it was very slow, so I thought NLTK in Python would work better. If there is a better solution, please recommend that as well.

import nltk
from nltk.corpus import stopwords
stopwords = stopwords.words('english')
grammar = r"""
 NBAR:
    {<NN.*|JJ>*<NN.*>}  # Nouns and Adjectives, terminated with Nouns
 NP:
    {<NBAR>}
    {<NBAR><IN><NBAR>}  # Above, connected with in/of/etc...
"""
lemmatizer = nltk.WordNetLemmatizer()
stemmer = nltk.stem.porter.PorterStemmer()

class NounPhraseExtractor(object):

    def __init__(self, sentence):
        self.sentence = sentence

    def execute(self):
        # Taken from Su Nam Kim Paper...
        chunker = nltk.RegexpParser(grammar)
        #toks = nltk.regexp_tokenize(text, sentence_re)
        # #postoks = nltk.tag.pos_tag(toks)
        toks = nltk.word_tokenize(self.sentence)
        postoks = nltk.tag.pos_tag(toks)
        tree = chunker.parse(postoks)
        return tree

    def leaves(tree):
        """Finds NP (nounphrase) leaf nodes of a chunk tree."""
        for subtree in tree.subtrees(filter=lambda t: t.label() == 'NP'):
            yield subtree.leaves()

    def normalise(word):
        """Normalises words to lowercase and stems and lemmatizes it."""
        word = word.lower()
        word = stemmer.stem_word(word)
        word = lemmatizer.lemmatize(word)
        return word

    def acceptable_word(word):
        """Checks conditions for acceptable word: length, stopword."""
        accepted = bool(2 <= len(word) <= 40
                    and word.lower() not in stopwords)
        return accepted

    def get_terms(self,tree):
        for leaf in self.leaves(tree):
            term = [self.normalise(w) for w, t in leaf if self.acceptable_word(w)]
        yield term

    def extract(self):
        terms = self.get_terms(self.execute())
        matches = []
        for term in terms:
            for word in term:
                matches.append(word)
        return matches

1 Answer:

Answer 0 (score: 4):

You need to either:

  • decorate each of normalise, acceptable_word, and leaves with @staticmethod, or
  • add a self parameter as the first argument of those methods.

You are calling self.leaves, which passes self as an implicit first argument to the leaves method (but your method only accepts a single argument). Making these methods static, or adding the self parameter, will fix the problem.

(You will run into the same problem later when you call self.acceptable_word and self.normalise.)
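For reference, here is a minimal sketch of what those fixes could look like for the three helpers named in the question; the rest of the class is assumed to stay exactly as posted. The switch from stemmer.stem_word(word) to stemmer.stem(word) is an extra assumption on my part: recent NLTK releases expose stem() on PorterStemmer rather than stem_word().

import nltk
from nltk.corpus import stopwords

stopwords = stopwords.words('english')
lemmatizer = nltk.WordNetLemmatizer()
stemmer = nltk.stem.porter.PorterStemmer()

class NounPhraseExtractor(object):

    # Option 1: mark the method static, so self.leaves(tree)
    # no longer injects self as an extra argument.
    @staticmethod
    def leaves(tree):
        """Finds NP (noun phrase) leaf nodes of a chunk tree."""
        for subtree in tree.subtrees(filter=lambda t: t.label() == 'NP'):
            yield subtree.leaves()

    # Option 2: accept self explicitly as the first parameter.
    def normalise(self, word):
        """Normalises a word to lowercase, then stems and lemmatizes it."""
        word = word.lower()
        word = stemmer.stem(word)   # stem_word() is not available in newer NLTK
        return lemmatizer.lemmatize(word)

    def acceptable_word(self, word):
        """Checks length and stopword conditions for an acceptable word."""
        return 2 <= len(word) <= 40 and word.lower() not in stopwords

With either option applied consistently to all three helpers, calling extract() on an instance should no longer raise the TypeError.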

You can read about Python's static methods in the docs, or, perhaps easier to follow, on this external site.