I'm new to spaCy and I want to use its lemmatizer, but I don't know how to use it: given a string of words, it should return the base form of each word.
Example:
Thanks.
Answer 0 (score: 30)
The previous answer was convoluted and couldn't be edited, so here is a more conventional one.
# make sure you downloaded the English model with "python -m spacy download en"
import spacy

nlp = spacy.load('en')
doc = nlp(u"Apples and oranges are similar. Boots and hippos aren't.")
for token in doc:
    print(token, token.lemma, token.lemma_)
Output:
Apples 6617 apples
and 512 and
oranges 7024 orange
are 536 be
similar 1447 similar
. 453 .
Boots 4622 boot
and 512 and
hippos 98365 hippo
are 536 be
n't 538 not
. 453 .
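The two columns in this output differ only in representation: `token.lemma` is an integer ID in spaCy's string store, while `token.lemma_` is the string that ID maps to. A minimal pure-Python sketch of such an interning store (an illustration of the concept, not spaCy's actual implementation):

```python
class StringStore:
    """Toy interning store: maps each distinct string to a stable integer ID."""
    def __init__(self):
        self._ids = {}      # string -> id
        self._strings = []  # id -> string

    def add(self, s):
        # Re-adding an existing string returns its original ID.
        if s not in self._ids:
            self._ids[s] = len(self._strings)
            self._strings.append(s)
        return self._ids[s]

    def __getitem__(self, i):
        return self._strings[i]

store = StringStore()
lemma_id = store.add("be")          # integer, like token.lemma
print(lemma_id, store[lemma_id])    # pair, like (token.lemma, token.lemma_)
```

This is why the same word always shows the same number above: "and" is 512 both times because the ID is looked up in one shared store.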
Answer 1 (score: 11)
Code:
import os
from spacy.en import English, LOCAL_DATA_DIR  # legacy import path (spaCy 0.x/1.x)

data_dir = os.environ.get('SPACY_DATA', LOCAL_DATA_DIR)
nlp = English(data_dir=data_dir)
doc3 = nlp(u"this is spacy lemmatize testing. programming books are more better than others")
for token in doc3:
    print(token, token.lemma, token.lemma_)
Output:
this 496 this
is 488 be
spacy 173779 spacy
lemmatize 1510965 lemmatize
testing 2900 testing
. 419 .
programming 3408 programming
books 1011 book
are 488 be
more 529 more
better 615 better
than 555 than
others 871 others
Example reference: here
Answer 2 (score: 8)
If you only want to use the lemmatizer on its own, you can do it as follows:
from spacy.lemmatizer import Lemmatizer
from spacy.lang.en import LEMMA_INDEX, LEMMA_EXC, LEMMA_RULES
lemmatizer = Lemmatizer(LEMMA_INDEX, LEMMA_EXC, LEMMA_RULES)
lemmas = lemmatizer(u'ducks', u'NOUN')
print(lemmas)
Output:
['duck']
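Under the hood, a rule-based lemmatizer like the one above consults an exceptions table for irregular forms first, then falls back to POS-specific suffix rules. A minimal sketch of that lookup order, with hypothetical tables (not spaCy's real `LEMMA_EXC`/`LEMMA_RULES` data):

```python
# Hypothetical exception and suffix-rule tables for nouns.
NOUN_EXC = {"children": "child", "geese": "goose"}
NOUN_RULES = [("ies", "y"), ("es", ""), ("s", "")]  # (old suffix, replacement)

def lemmatize_noun(word):
    word = word.lower()
    if word in NOUN_EXC:            # irregular forms win outright
        return NOUN_EXC[word]
    for old, new in NOUN_RULES:     # first matching suffix rule applies
        if word.endswith(old):
            return word[:-len(old)] + new
    return word                     # no rule matched: treat as already a lemma

print(lemmatize_noun("ducks"))      # → duck
print(lemmatize_noun("children"))   # → child
```

This is also why the POS tag (`u'NOUN'` above) matters: the same surface form can lemmatize differently depending on which rule table is consulted.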
Answer 3 (score: 2)
I am using spaCy version 2.x:
import spacy
nlp = spacy.load('en_core_web_sm', disable=['parser', 'ner'])
doc = nlp('did displaying words')
print(" ".join([token.lemma_ for token in doc]))
Output:
do display word
Hope it helps :)
Answer 4 (score: -1)
I used:
import spacy
import en_core_web_sm  # the model package itself must be installed and imported

nlp = en_core_web_sm.load()
doc = nlp("did displaying words")
print(" ".join([token.lemma_ for token in doc]))
>>> do display word
But at first it gave the error OSError: [E050] Can't find model 'en_core_web_sm'. It doesn't seem to be a shortcut link, a Python package or a valid path to a data directory.
To get rid of the error, I installed the model with:
pip3 install https://github.com/explosion/spacy-models/releases/download/en_core_web_sm-2.2.0/en_core_web_sm-2.2.0.tar.gz