Why doesn't this work? Stop words in CountVectorizer

Asked: 2017-01-17 16:10:21

Tags: python scikit-learn nlp

I am using CountVectorizer to tokenize text, and I want to add my own stop words. Why doesn't this work? The word "de" should not appear in the printed output below.

from sklearn.feature_extraction.text import CountVectorizer

vectorizer = CountVectorizer(ngram_range=(1,1),stop_words=frozenset([u'de']))
word_tokenizer = vectorizer.build_tokenizer()
print (word_tokenizer(u'Isto é um teste de qualquer coisa.'))

[u'Isto', u'um', u'teste', u'de', u'qualquer', u'coisa']

1 Answer:

Answer 0 (score: 2):

from sklearn.feature_extraction.text import CountVectorizer

vectorizer = CountVectorizer(ngram_range=(1,1), stop_words=frozenset([u'de']))
word_tokenizer = vectorizer.build_tokenizer()
# vocabulary_ is only populated after fitting
vectorizer.fit([u'Isto é um teste de qualquer coisa.'])

In [7]: vectorizer.vocabulary_
Out[7]: {u'coisa': 0, u'isto': 1, u'qualquer': 2, u'teste': 3, u'um': 4}

You can see that u'de' is not in the fitted vocabulary...

The build_tokenizer method only splits your string into tokens; stop word removal happens at a later stage of the pipeline.

From the CountVectorizer source code:
def build_tokenizer(self):
    """Return a function that splits a string into a sequence of tokens"""
    if self.tokenizer is not None:
        return self.tokenizer
    token_pattern = re.compile(self.token_pattern)
    return lambda doc: token_pattern.findall(doc)
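
If you just want the tokens with stop words already removed, and without fitting on a corpus, build_analyzer returns the full preprocessing chain (lowercasing, tokenization, stop word filtering and n-gram generation). A minimal sketch:

from sklearn.feature_extraction.text import CountVectorizer

vectorizer = CountVectorizer(ngram_range=(1,1), stop_words=frozenset([u'de']))
# build_analyzer, unlike build_tokenizer, applies the stop word list
analyzer = vectorizer.build_analyzer()
print(analyzer(u'Isto é um teste de qualquer coisa.'))
# ['isto', 'um', 'teste', 'qualquer', 'coisa']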

A solution to your problem could be:

vectorizer = CountVectorizer(ngram_range=(1,1), stop_words=frozenset([u'de']))
sentence = [u'Isto é um teste de qualquer coisa.']
# fit_transform learns the vocabulary (with stop words removed) and counts the terms
tokenized = vectorizer.fit_transform(sentence)
# inverse_transform maps the nonzero counts back to their terms
result = vectorizer.inverse_transform(tokenized)

In [12]: result
Out[12]: 
[array([u'isto', u'um', u'teste', u'qualquer', u'coisa'], 
       dtype='<U8')]
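
Note that inverse_transform recovers which terms occur in each document, not necessarily their original order or multiplicity. If you also need the counts themselves, the fitted matrix can be inspected directly alongside the vocabulary; a minimal sketch:

from sklearn.feature_extraction.text import CountVectorizer

vectorizer = CountVectorizer(ngram_range=(1,1), stop_words=frozenset([u'de']))
X = vectorizer.fit_transform([u'Isto é um teste de qualquer coisa.'])

# the learned vocabulary never contains the stop word 'de'
print(sorted(vectorizer.vocabulary_))
# [u'coisa', u'isto', u'qualquer', u'teste', u'um']
print(X.toarray())
# [[1 1 1 1 1]]  -- one occurrence of each kept token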