I'm attempting document classification, as described in NLTK Chapter 6, and I'm having trouble removing stopwords. When I add
all_words = (w for w in all_words if w not in nltk.corpus.stopwords.words('english'))
它返回
Traceback (most recent call last):
File "fiction.py", line 8, in <module>
word_features = all_words.keys()[:100]
AttributeError: 'generator' object has no attribute 'keys'
My guess is that the stopword code changed the type of the object bound to 'all_words', rendering its .keys() method unusable. How can I remove the stopwords before using the keys functionality, without changing the type? Full code below:
import nltk
from nltk.corpus import PlaintextCorpusReader
corpus_root = './nltk_data/corpora/fiction'
fiction = PlaintextCorpusReader(corpus_root, '.*')
all_words=nltk.FreqDist(w.lower() for w in fiction.words())
all_words = (w for w in all_words if w not in nltk.corpus.stopwords.words('english'))
word_features = all_words.keys()[:100]
def document_features(document): # [_document-classify-extractor]
    document_words = set(document) # [_document-classify-set]
    features = {}
    for word in word_features:
        features['contains(%s)' % word] = (word in document_words)
    return features

print document_features(fiction.words('fic/11.txt'))
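The guess about the type change is right, and the error can be reproduced in isolation, without NLTK: a generator expression always produces a plain generator object, which has no `.keys()`, no matter what dict-like object it iterates over.

```python
# Minimal reproduction of the AttributeError: wrapping a dict in a
# generator expression yields a generator, not another dict, so .keys()
# is no longer available on the result.
d = {'a': 1, 'b': 2}
g = (k for k in d if k != 'a')

print(type(g).__name__)    # generator
print(hasattr(g, 'keys'))  # False
```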
Answer (score: 4)
I would handle this by keeping the stopwords out of the FreqDist instance in the first place:
all_words=nltk.FreqDist(w.lower() for w in fiction.words() if w.lower() not in nltk.corpus.stopwords.words('english'))
Depending on the size of your corpus, I think you might get a performance boost by building a set of the stopwords before doing that:
stopword_set = frozenset(nltk.corpus.stopwords.words('english'))
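The reason this helps: calling `stopwords.words('english')` inside the generator re-evaluates the whole stopword list on every membership test, and list lookup is O(n), while frozenset lookup is O(1). A sketch of the combined filter, using small stand-in word lists so it runs without the NLTK data download:

```python
# Stand-in for nltk.corpus.stopwords.words('english'); built once,
# up front, instead of on every membership test.
stopwords_list = ['the', 'a', 'of', 'and', 'in']
stopword_set = frozenset(stopwords_list)

# Stand-in for fiction.words(); filter with the same lowercasing
# and membership test as in the answer above.
words = ['The', 'Mayor', 'of', 'Casterbridge', 'and', 'the', 'sea']
filtered = [w.lower() for w in words if w.lower() not in stopword_set]

print(filtered)  # ['mayor', 'casterbridge', 'sea']
```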
If that doesn't suit your situation, it looks like you can take advantage of the fact that FreqDist inherits from dict:
for stopword in nltk.corpus.stopwords.words('english'):
    if stopword in all_words:
        del all_words[stopword]
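The `if stopword in all_words` guard matters because `del` on a missing key raises KeyError. A runnable sketch of the same pattern, using `collections.Counter` as a stand-in (modern FreqDist inherits from Counter, which inherits from dict) and a small hardcoded stopword list, so it runs without NLTK installed:

```python
from collections import Counter

# Stand-in for the FreqDist built from the corpus.
all_words = Counter(['the', 'whale', 'the', 'sea', 'of', 'whale'])

# Stand-in for nltk.corpus.stopwords.words('english').
stopwords = ['the', 'of', 'and']

for stopword in stopwords:
    if stopword in all_words:    # guard: 'and' is not a key here
        del all_words[stopword]

print(sorted(all_words))  # ['sea', 'whale']
```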