I am lemmatizing transcripts from the TED dataset and noticed something strange: not all words are being lemmatized. For example,
selected -> select
which is correct. However,
involved !-> involve
and horsing !-> horse
unless I explicitly pass the 'v' (verb) POS attribute.
In the Python interpreter I get the correct output, but not in my code:
>>> from nltk.stem import WordNetLemmatizer
>>> from nltk.corpus import wordnet
>>> lem = WordNetLemmatizer()
>>> lem.lemmatize('involved','v')
u'involve'
>>> lem.lemmatize('horsing','v')
u'horse'
The relevant part of the code is:
for l in LDA_Row[0].split('+'):
    w = str(l.split('*')[1])
    word = lmtzr.lemmatize(w)
    wordv = lmtzr.lemmatize(w, 'v')
    print wordv, word
    # if word is not wordv:
    #     print word, wordv
The full code is here.
What is going wrong?
Answer (score: 5)
The lemmatizer requires the correct POS tag to work accurately. If you use the default settings of WordNetLemmatizer.lemmatize(), the default tag is noun; see https://github.com/nltk/nltk/blob/develop/nltk/stem/wordnet.py#L39
To resolve the issue, always POS-tag your data before lemmatizing, e.g.:
>>> from nltk.stem import WordNetLemmatizer
>>> from nltk import pos_tag, word_tokenize
>>> wnl = WordNetLemmatizer()
>>> sent = 'This is a foo bar sentence'
>>> pos_tag(word_tokenize(sent))
[('This', 'DT'), ('is', 'VBZ'), ('a', 'DT'), ('foo', 'NN'), ('bar', 'NN'), ('sentence', 'NN')]
>>> for word, tag in pos_tag(word_tokenize(sent)):
... wntag = tag[0].lower()
... wntag = wntag if wntag in ['a', 'r', 'n', 'v'] else None
... if not wntag:
... lemma = word
... else:
... lemma = wnl.lemmatize(word, wntag)
... print lemma
...
This
be
a
foo
bar
sentence
Note the 'is -> be', i.e.:
>>> wnl.lemmatize('is')
'is'
>>> wnl.lemmatize('is', 'v')
u'be'
And to answer the question with the words from your example:
>>> sent = 'These sentences involves some horsing around'
>>> for word, tag in pos_tag(word_tokenize(sent)):
... wntag = tag[0].lower()
... wntag = wntag if wntag in ['a', 'r', 'n', 'v'] else None
... lemma = wnl.lemmatize(word, wntag) if wntag else word
... print lemma
...
These
sentence
involve
some
horse
around
Note that the WordNetLemmatizer has some quirks.
Also, NLTK's default POS tagger is undergoing some major changes to improve accuracy.
For an out-of-the-box/off-the-shelf lemmatizer solution, you can take a look at https://github.com/alvations/pywsd, including how I added some try-excepts to catch words that are not in WordNet; see https://github.com/alvations/pywsd/blob/master/pywsd/utils.py#L66