When I use SpaCy to identify stop words, it doesn't work when I load the en_core_web_lg model, but it does work when I load en_core_web_sm. Is this a bug, or am I doing something wrong?
import spacy

nlp = spacy.load('en_core_web_lg')
doc = nlp(u'The cat ran over the hill and to my lap')
for word in doc:
    print(f' {word} | {word.is_stop}')
Result:
The | False
cat | False
ran | False
over | False
the | False
hill | False
and | False
to | False
my | False
lap | False
However, when I change this line to use the en_core_web_sm model, I get a different result:
nlp = spacy.load('en_core_web_sm')
The | False
cat | False
ran | False
over | True
the | True
hill | False
and | True
to | True
my | True
lap | False
Answer 0 (score: 2)
The problem you are running into is a documented bug. The suggested workaround is as follows:
import spacy
from spacy.lang.en.stop_words import STOP_WORDS

nlp = spacy.load('en_core_web_lg')

# Set the is_stop flag on the lowercase, capitalized, and upper-case
# forms of every stop word in the model's vocabulary
for word in STOP_WORDS:
    for w in (word, word[0].capitalize(), word.upper()):
        lex = nlp.vocab[w]
        lex.is_stop = True

doc = nlp(u'The cat ran over the hill and to my lap')
for word in doc:
    print('{} | {}'.format(word, word.is_stop))
Output:
The | False
cat | False
ran | False
over | True
the | True
hill | False
and | True
to | True
my | True
lap | False
Answer 1 (score: 0)
Try from spacy.lang.en.stop_words import STOP_WORDS; you can then explicitly check whether each word is in that set:
from spacy.lang.en.stop_words import STOP_WORDS
import spacy

nlp = spacy.load('en_core_web_lg')
doc = nlp(u'The cat ran over the hill and to my lap')
for word in doc:
    # Have to convert Token type to String, otherwise types won't match
    print(f' {word} | {str(word) in STOP_WORDS}')
This outputs the following:
The | False
cat | False
ran | False
over | True
the | True
hill | False
and | True
to | True
my | True
lap | False
It looks like a bug to me. However, this approach also gives you the flexibility to add words to the STOP_WORDS set if you need to, as in the sketch below.
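For example, here is a minimal sketch (my own addition, not from the original answer) that adds "hill" as a custom stop word. Adding it to STOP_WORDS only affects the membership check used above; the assumption is that you would also set the flag on the vocab entry, as in the first answer's workaround, if you want word.is_stop to reflect the change:

from spacy.lang.en.stop_words import STOP_WORDS
import spacy

nlp = spacy.load('en_core_web_lg')

# STOP_WORDS is a plain Python set, so new entries can simply be added
STOP_WORDS.add("hill")

# Also flag the lexeme in the vocab so word.is_stop picks up the change,
# mirroring the workaround from the first answer
nlp.vocab["hill"].is_stop = True

doc = nlp(u'The cat ran over the hill and to my lap')
for word in doc:
    print(f' {word} | {word.is_stop} | {str(word) in STOP_WORDS}')

With this, "hill" should be reported as a stop word by both checks.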