I tried regular expressions, but I get hundreds of unrelated tokens. I'm only interested in the "play" stem. Here is the code I'm working with:
import nltk, re
from nltk import stem

# Read the lyrics file and tokenize on whitespace
f = open('tupac_original.txt', 'rU')
text = f.read()
text1 = text.split()
tup = nltk.Text(text1)

# Lowercase, keep alphabetic tokens only, drop English stopwords
lowtup = [w.lower() for w in tup if w.isalpha()]
tupclean = [w for w in lowtup if w not in nltk.corpus.stopwords.words('english')]

# Strip suffixes with a regex stemmer
tupstem = stem.RegexpStemmer('az$|as$|a$')
[tupstem.stem(i) for i in tupclean]
The result of the above is:
['like', 'ed', 'young', 'black', 'like'...]
I'm trying to clean up a .txt file (lowercase everything, remove stopwords, etc.), normalize multiple spellings of a word into one, and do a frequency dist/count. I know how to do the FreqDist, but any suggestions as to where I'm going wrong with the stemming?
Answer (score: 12)
There are several well-known stemmers pre-built in NLTK; see http://nltk.org/api/nltk.stem.html and the example below.
>>> from nltk import stem
>>> porter = stem.porter.PorterStemmer()
>>> lancaster = stem.lancaster.LancasterStemmer()
>>> snowball = stem.snowball.EnglishStemmer()
>>> tokens = ['player', 'playa', 'playas', 'pleyaz']
>>> [porter.stem(i) for i in tokens]
['player', 'playa', 'playa', 'pleyaz']
>>> [lancaster.stem(i) for i in tokens]
['play', 'play', 'playa', 'pleyaz']
>>> [snowball.stem(i) for i in tokens]
[u'player', u'playa', u'playa', u'pleyaz']
But what you probably need is some sort of regex stemmer:
>>> from nltk import stem
>>> rxstem = stem.RegexpStemmer('er$|a$|as$|az$')
>>> [rxstem.stem(i) for i in tokens]
['play', 'play', 'play', 'pley']