How to split a paragraph into sentences in Python

Asked: 2012-02-28 00:06:15

Tags: python regex text-segmentation

I need to split paragraphs into sentences in Python. Is there a ready-made package for this, or should I try to use regular expressions here?

2 answers:

Answer 0 (score: 30)

The nltk.tokenize module is designed for exactly this and handles the edge cases. For example:

>>> from nltk import tokenize
>>> p = "Good morning Dr. Adams. The patient is waiting for you in room number 3."
>>> tokenize.sent_tokenize(p)
['Good morning Dr. Adams.', 'The patient is waiting for you in room number 3.']
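
Note: depending on how NLTK was installed, you may first need to download the Punkt sentence-tokenizer models that sent_tokenize relies on (a one-time step; the data package name here reflects common NLTK usage and may differ for your version):

>>> import nltk
>>> nltk.download('punkt')   # fetches the Punkt models used by sent_tokenize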

Answer 1 (score: 0)

Here is how I get the first n sentences:

import itertools

def get_first_n_sentence(text, n):
    endsentence = ".?!"
    # groupby over the characters alternates between runs of ordinary text
    # and runs of sentence-ending punctuation
    groups = itertools.groupby(text, lambda ch: ch in endsentence)
    first_n_sentences = ''
    previous = ''
    for number, (is_punct, chars) in enumerate(groups):
        chunk = ''.join(chars).replace('\n', ' ')
        if is_punct:
            first_n_sentences += previous + chunk  # reattach punctuation to the text before it
        previous = chunk
        if number >= 2*n:  # each sentence yields two groups: text, then punctuation
            break
    return first_n_sentences

Reference: http://www.daniweb.com/software-development/python/threads/303844
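
A quick sanity check of the function above (the example text reuses the paragraph from the first answer; it is not from the referenced thread):

>>> p = "Good morning Dr. Adams. The patient is waiting for you in room number 3."
>>> get_first_n_sentence(p, 2)
'Good morning Dr. Adams.'

Because this character-level split treats every "." as a sentence boundary, the period in "Dr." already ends the first "sentence", so asking for two sentences stops after "Adams." The nltk.tokenize approach in the first answer handles such abbreviations.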