Tuple has no attribute 'isdigit'

Asked: 2015-12-04 20:54:08

Tags: python nltk tokenize

I need to do some text processing with the NLTK module, and I am getting this error: AttributeError: 'tuple' object has no attribute 'isdigit'

Does anyone know how to fix this error?

import nltk

with open ("SHORT-LIST.txt", "r",encoding='utf8') as myfile:
    text =  (myfile.read().replace('\n', ''))

#text = "program managment is complicated issue for human workers"

# Used when tokenizing words
sentence_re = r'''(?x)      # set flag to allow verbose regexps
      ([A-Z])(\.[A-Z])+\.?  # abbreviations, e.g. U.S.A.
    | \w+(-\w+)*            # words with optional internal hyphens
    | \$?\d+(\.\d+)?%?      # currency and percentages, e.g. $12.40, 82%
    | \.\.\.                # ellipsis
    | [][.,;"'?():-_`]      # these are separate tokens
'''

lemmatizer = nltk.WordNetLemmatizer()
stemmer = nltk.stem.porter.PorterStemmer()


grammar = r"""
    NBAR:
        {<NN.*|JJ>*<NN.*>}  # Nouns and Adjectives, terminated with Nouns

    NP:
        {<NBAR>}
        {<NBAR><IN><NBAR>}  # Above, connected with in/of/etc...
"""
chunker = nltk.RegexpParser(grammar)

tok = nltk.regexp_tokenize(text, sentence_re)

postoks = nltk.tag.pos_tag(tok)

#print (postoks)

tree = chunker.parse(postoks)

from nltk.corpus import stopwords
stopwords = stopwords.words('english')


def leaves(tree):
    """Finds NP (nounphrase) leaf nodes of a chunk tree."""
    for subtree in tree.subtrees(filter = lambda t: t.label()=='NP'):
        yield subtree.leaves()

def normalise(word):
    """Normalises words to lowercase and stems and lemmatizes it."""
    word = word.lower()
    word = stemmer.stem_word(word)
    word = lemmatizer.lemmatize(word)
    return word

def acceptable_word(word):
    """Checks conditions for acceptable word: length, stopword."""
    accepted = bool(2 <= len(word) <= 40
        and word.lower() not in stopwords)
    return accepted


def get_terms(tree):
    for leaf in leaves(tree):
        term = [ normalise(w) for w,t in leaf if acceptable_word(w) ]
        yield term

terms = get_terms(tree)


with open("results.txt", "w+") as logfile:
    for term in terms: 
        for word in term:
            result = word
            logfile.write("%s\n" % str(word))
#           print (word),
#       (print)

logfile.close() 

3 Answers:

Answer 0 (score: 5)

Another, simpler approach is to change this part:

tok = nltk.regexp_tokenize(text, sentence_re)
postoks = nltk.tag.pos_tag(tok)

and replace it with NLTK's standard word tokenizer:

toks = nltk.word_tokenize(text)
postoks = nltk.tag.pos_tag(toks)

In theory, there should not be much difference in performance or results.
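As a quick sanity check (this sketch reuses the sample sentence from the question and assumes a recent NLTK with the punkt and averaged_perceptron_tagger data downloaded), word_tokenize yields plain strings, so pos_tag no longer receives tuples:

import nltk
# nltk.download('punkt')                        # one-time download for word_tokenize
# nltk.download('averaged_perceptron_tagger')   # one-time download for pos_tag
toks = nltk.word_tokenize("program managment is complicated issue for human workers")
print(toks[:4])                   # ['program', 'managment', 'is', 'complicated'] -- plain strings
postoks = nltk.tag.pos_tag(toks)  # works, since each token is a str rather than a tuple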

Answer 1 (score: 3)

The default tagger in NLTK 3.1 is the Perceptron tagger, and 3.1 is the latest release right now. All of my nltk.regexp_tokenize calls stopped working correctly, and all of my nltk.pos_tag calls started raising the error above.

My current workaround is to go back to the previous version, NLTK 3.0.1, which makes them work correctly. I am not sure whether this is a bug in the current NLTK release.

Installation instructions for NLTK 3.0.4 on Ubuntu. Run the following steps from your home directory (or any other directory):

$ wget https://github.com/nltk/nltk/archive/3.0.4.tar.gz
$ tar -xvzf 3.0.4.tar.gz 
$ cd nltk-3.0.4
$ sudo python3.4 setup.py install
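To confirm which version is actually active after the downgrade (a minimal check; NLTK exposes its version in the standard __version__ attribute):

import nltk
print(nltk.__version__)   # expected to print 3.0.4 after the steps above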

Answer 2 (score: 3)

For later versions of NLTK, a change to the regular expression fixes this problem. I found the solution at https://gist.github.com/alexbowe/879414#gistcomment-1704727

-

Keep the parentheses that group the expression, but change every group to a non-capturing one.

sentence_re = r'(?:(?:[A-Z])(?:\.[A-Z])+\.?)|(?:\w+(?:-\w+)*)|(?:\$?\d+(?:\.\d+)?%?)|(?:\.\.\.|)(?:[][.,;"\'?():-_`])'
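Why this helps (a small illustration that is not part of the quoted solution): in recent NLTK versions regexp_tokenize is essentially re.findall under the hood, and re.findall returns tuples instead of strings whenever the pattern contains capturing groups; those tuples are what pos_tag later fails on with the 'isdigit' error.

import re
text = "well-known issue"
# capturing groups: findall returns tuples, which pos_tag cannot handle
print(re.findall(r'(\w+)(-\w+)*', text))      # [('well', '-known'), ('issue', '')]
# non-capturing groups: findall returns plain strings
print(re.findall(r'(?:\w+)(?:-\w+)*', text))  # ['well-known', 'issue']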

-