Creating word forms with Python

Posted: 2014-07-25 09:43:26

Tags: python nlp nltk textblob

How can I get the different forms of a word using Python? I want to create a list like the following:

Work=['Work','Working','Works']
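
For context, a minimal sketch (not part of the original question) showing that NLTK's PorterStemmer reduces all of these surface forms to a single stem, which is the usual way to treat them as one word:

from nltk.stem.porter import PorterStemmer

stemmer = PorterStemmer()
# All three forms reduce to the same stem, 'work'
print([stemmer.stem(w.lower()) for w in ['Work', 'Working', 'Works']])
# ['work', 'work', 'work']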

My code:

# Python 2; excerpt from a function -- `html`, `words` and `n` come from elsewhere in my program.
import re
import itertools

import nltk
from nltk.stem.porter import PorterStemmer
from textblob import TextBlob

raw = nltk.clean_html(html)                             # strip HTML tags
cleaned = re.sub(r'& ?(ld|rd)quo ?[;\]]', '\"', raw)    # repair broken &ldquo;/&rdquo; entities
tokens = nltk.wordpunct_tokenize(cleaned)
stemmer = PorterStemmer()
t = [stemmer.stem(t) if t in Words else t for t in tokens]  # note: `Words` is only built a few lines below
text = nltk.Text(t)
word = words(n)
Words = [stemmer.stem(e) for e in word]                 # stems of the words I am searching for
find = ' '.join(str(e) for e in Words)
search_words = set(find.split(' '))
sents = ' '.join([s.lower() for s in text])
blob = TextBlob(sents.decode('ascii', 'ignore'))
matches = [map(str, blob.sentences[i-1:i+2])     # from the previous sentence to the one after next
           for i, s in enumerate(blob.sentences) # i is the index, s is the sentence
           if search_words & set(s.words)]
# return list(itertools.chain(' '.join(str(y).replace('& rdquo', '').replace('& rsquo', '') for y in matches)))
return list(itertools.chain(*matches))

1 Answer:

Answer 0: (score: 0)

This is a bit tricky. I try to look at the stemmed form in the text and then map it back to the word list. I also changed everything to lowercase, because the tokenization did not map exactly otherwise. Here is the updated code:

# Same imports and external names (`html`, `words`, `n`) as in the question.
raw = nltk.clean_html(html)
cleaned = re.sub(r'& ?(ld|rd)quo ?[;\]]', '\"', raw)
tokens = nltk.wordpunct_tokenize(cleaned)
lower = [w.lower() for w in tokens]                     # new: lowercase the tokens first
stemmer = PorterStemmer()
t = [stemmer.stem(t) if t in Words else t for t in lower]   # stem the lowercased tokens that match
text = nltk.Text(t)
word = words(n)
Words = [stemmer.stem(e) for e in word]                 # stems of the search words
find = ' '.join(str(e) for e in Words)
search_words = set(find.split(' '))
sents = ' '.join([s.lower() for s in text])
blob = TextBlob(sents.decode('ascii', 'ignore'))
matches = [map(str, blob.sentences[i-1:i+2])     # from the previous sentence to the one after next
           for i, s in enumerate(blob.sentences) # i is the index, s is the sentence
           if search_words & set(s.words)]
# return list(itertools.chain(' '.join(str(y).replace('& rdquo', '').replace('& rsquo', '') for y in matches)))
return list(itertools.chain(*matches))
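
As an aside, the same stemming idea can be turned around to produce the word-form lists the question asked for. Here is a minimal sketch (not from the original answer) that groups the tokens of a text by their Porter stem; `tokens` is assumed to be the token list built above:

from collections import defaultdict
from nltk.stem.porter import PorterStemmer

stemmer = PorterStemmer()
forms = defaultdict(set)
for tok in tokens:
    forms[stemmer.stem(tok.lower())].add(tok)   # group each surface form under its stem

# forms['work'] could now contain, e.g., {'Work', 'Working', 'Works'}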