When tokenizing, combine singular and plural nouns (and verb/adverb forms) in NLTK frequency counts

Date: 2015-08-06 06:03:17

Tags: python nltk

I want to count frequencies, but I want to combine the singular and plural forms of nouns, as well as verbs and their adverb forms. Please excuse the poor sentence. For example: "That aggressive person walk by the house over there, one of many houses aggressively"

Tokenizing and counting frequencies:

import nltk
from nltk.tokenize import RegexpTokenizer

test = "That aggressive person walk by the house over there, one of many houses aggressively"
tokenizer = RegexpTokenizer(r'\w+')  # keep word characters only, dropping punctuation
tokens = tokenizer.tokenize(test)
fdist = nltk.FreqDist(tokens)
common = fdist.most_common(100)

Output: [('houses', 1), ('aggressively', 1), ('by', 1), ('That', 1), ('house', 1), ('over', 1), ('there', 1), ('walk', 1), ('person', 1), ('many', 1), ('of', 1), ('aggressive', 1), ('one', 1), ('the', 1)]

I would like house and houses to be counted together as ('house/houses', 2), and aggressive and aggressively as ('aggressive/aggressively', 2). Is this possible? If not, how can I get output that looks like that?

1 Answer:

Answer 0 (score: 3)

You need to lemmatize.

NLTK includes a WordNet-based lemmatizer:

import nltk

tokenizer = nltk.tokenize.RegexpTokenizer(r'\w+')
lemmatizer = nltk.stem.WordNetLemmatizer()  # requires the 'wordnet' corpus: nltk.download('wordnet')
test = "That aggressive person walk by the house over there, one of many houses aggressively"
tokens = tokenizer.tokenize(test)
lemmas = [lemmatizer.lemmatize(t) for t in tokens]  # defaults to noun lemmas
fdist = nltk.FreqDist(lemmas)
common = fdist.most_common(100)

This results in:

[('house', 2),
 ('aggressively', 1),
 ('by', 1),
 ('That', 1),
 ('over', 1),
 ('there', 1),
 ('walk', 1),
 ('person', 1),
 ('many', 1),
 ('of', 1),
 ('aggressive', 1),
 ('one', 1),
 ('the', 1)]

However, aggressive and aggressively will not be merged by the WordNet lemmatizer. There are other lemmatizers that might do what you want. But first, you may want to consider stemming:

stemmer = nltk.stem.PorterStemmer()
stems = [stemmer.stem(t) for t in tokens]
nltk.FreqDist(stems).most_common()

This gives you:

[(u'aggress', 2),
 (u'hous', 2),
 (u'there', 1),
 (u'That', 1),
 (u'of', 1),
 (u'over', 1),
 (u'walk', 1),
 (u'person', 1),
 (u'mani', 1),
 (u'the', 1),
 (u'one', 1),
 (u'by', 1)]

Now the counts look pretty good! However, you may be annoyed that the stems don't necessarily look like real words...
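One workaround for the unreadable stems (a sketch, not part of the original answer): record which surface forms map to each stem, then label each group with its most frequent original spelling:

```python
from collections import Counter, defaultdict

import nltk

stemmer = nltk.stem.PorterStemmer()
tokens = ["aggressive", "aggressively", "house", "houses", "walk"]

# group the original spellings under their shared stem
surface_forms = defaultdict(Counter)
for t in tokens:
    surface_forms[stemmer.stem(t)][t] += 1

# relabel each stem group with its most common surface form
readable = {forms.most_common(1)[0][0]: sum(forms.values())
            for forms in surface_forms.values()}
print(readable)  # {'aggressive': 2, 'house': 2, 'walk': 1}
```

On ties, Counter.most_common keeps insertion order, so the first spelling seen wins; you could instead join all forms with '/' to get output like ('house/houses', 2), as the question asked.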