I have some text:
s="Imageclassificationmethodscan beroughlydividedinto two broad families of approaches:"
I want to parse it into words. I took a quick look at enchant and nltk, but didn't see anything immediately useful. If I had time to invest in this, I'd look into writing a dynamic program, using enchant to check whether a candidate string is an English word. I would have thought something like this already existed online. Am I wrong?
Answer 0 (score: 9)
Use Biopython (pip install biopython):
from Bio import trie
import string

def get_trie(dictfile='/usr/share/dict/american-english'):
    # Load the system dictionary into a trie keyed by word.
    tr = trie.trie()
    with open(dictfile) as f:
        for line in f:
            word = line.rstrip()
            try:
                word = word.encode(encoding='ascii', errors='ignore')
                tr[word] = len(word)
                assert tr.has_key(word), "Missing %s" % word
            except UnicodeDecodeError:
                pass
    return tr

def get_trie_word(tr, s):
    # Greedily match the longest dictionary prefix of s.
    for end in reversed(range(len(s))):
        word = s[:end + 1]
        if tr.has_key(word):
            return word, s[end + 1:]
    return None, s

def main(s):
    tr = get_trie()
    while s:
        word, s = get_trie_word(tr, s)
        print word

if __name__ == '__main__':
    s = "Imageclassificationmethodscan beroughlydividedinto two broad families of approaches:"
    s = s.strip(string.punctuation)
    s = s.replace(" ", '')
    s = s.lower()
    main(s)
>>> if __name__ == '__main__':
...     s = "Imageclassificationmethodscan beroughlydividedinto two broad families of approaches:"
...     s = s.strip(string.punctuation)
...     s = s.replace(" ", '')
...     s = s.lower()
...     main(s)
...
image
classification
methods
can
be
roughly
divided
into
two
broad
families
of
approaches
There are degenerate cases in English where this greedy approach won't work. You'd need to use backtracking to handle those, but this should get you started.
>>> main("expertsexchange")
experts
exchange
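The backtracking just mentioned can be sketched without Biopython, using a plain Python set as the dictionary. This is a minimal illustration, not the answer's code: the `segment` helper and the tiny hard-coded word set are assumptions for the example; in practice you'd load /usr/share/dict/words.

```python
def segment(s, words):
    """Recursively split s into dictionary words, backtracking on dead ends.

    Tries longer prefixes first, so greedy matches are preferred, but falls
    back to shorter prefixes when the remainder cannot be segmented.
    Returns a list of words, or None if no segmentation exists.
    """
    if not s:
        return []
    for end in range(len(s), 0, -1):  # longest prefix first
        prefix = s[:end]
        if prefix in words:
            rest = segment(s[end:], words)
            if rest is not None:
                return [prefix] + rest
    return None  # no prefix worked: caller must backtrack

# Tiny illustrative dictionary.
words = {"the", "they", "youth", "experts", "exchange"}
print(segment("expertsexchange", words))  # -> ['experts', 'exchange']
print(segment("theyouth", words))         # -> ['the', 'youth']
```

The second call shows the backtracking: the greedy match "they" leaves the unsegmentable remainder "outh", so the search falls back to "the" + "youth".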
Answer 1 (score: 1)
This is a problem that comes up frequently in Asian NLP. If you have a dictionary, then you can use this http://code.google.com/p/mini-segmenter/ (disclaimer: I wrote it, hope you don't mind).
Note that the search space can be very large, since a word in alphabetic English is certainly longer in characters than a Chinese/Japanese syllable.
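One way to tame that search space is to memoize the recursion (the dynamic program the question alludes to), so each suffix of the input is segmented at most once. A minimal sketch assuming a plain word set; the `make_segmenter` helper and the tiny dictionary are illustrative, not part of mini-segmenter:

```python
from functools import lru_cache

def make_segmenter(words):
    """Return a memoized segmenter over the given word set."""
    @lru_cache(maxsize=None)
    def seg(s):
        # Returns a tuple of words covering s, or None if impossible.
        if not s:
            return ()
        for end in range(len(s), 0, -1):  # longest prefix first
            if s[:end] in words:
                rest = seg(s[end:])  # cached: each suffix solved once
                if rest is not None:
                    return (s[:end],) + rest
        return None
    return seg

seg = make_segmenter({"image", "classification", "methods"})
print(seg("imageclassificationmethods"))  # -> ('image', 'classification', 'methods')
```

Without the cache, a pathological input can force exponentially many re-segmentations of the same suffix; with it, the work is bounded by the number of distinct suffixes.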