Add a spaCy tokenizer exception: don't split '>>'

Asked: 2018-09-08 15:36:18

Tags: nlp tokenize spacy

I'm trying to add an exception so that '>>' is recognized as an indicator that a new sentence begins. For example,

import spacy

nlp = spacy.load('en_core_web_sm')
doc = nlp(u'>> We should. >>No.')

for sent in doc.sents:
    print (sent)

It prints:

>> We should.
>
>
No.
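The splitting happens because the default English tokenizer treats each '>' as its own token, and the parser then opens a new sentence at some of them. A quick way to see this (a small diagnostic sketch, not part of the original post, assuming the same en_core_web_sm model) is to print each token together with its sentence-start flag:

import spacy

nlp = spacy.load('en_core_web_sm')
doc = nlp(u'>> We should. >>No.')

# Show how the text is tokenized and where sentence starts are placed.
# Each '>' comes out as a separate token, which is why the sentence
# iterator above yields '>' on lines of its own.
for token in doc:
    print(token.text, token.is_sent_start)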

However, I want it to print:

>> We should.
>> No. 

Thanks for your time!

1 Answer:

Answer 0 (score: 1)

You need to create a custom component. The code examples include a custom sentence segmentation example. The documentation describes that example as:

Example of adding a pipeline component to disallow sentence boundaries before certain tokens.

Code (adapting the example to your needs):

import spacy


def prevent_sentence_boundaries(doc):
    # Clear the sentence-start flag on every token that must not begin
    # a sentence, before the parser assigns sentence boundaries.
    for token in doc:
        if not can_be_sentence_start(token):
            token.is_sent_start = False
    return doc


def can_be_sentence_start(token):
    # A token that directly follows a '>' may not start a new sentence,
    # so '>>' stays attached to the sentence it introduces.
    if token.i > 0 and token.nbor(-1).text == '>':
        return False
    return True

nlp = spacy.load('en_core_web_sm')
# spaCy v2 API: pass the component function directly, before the parser.
nlp.add_pipe(prevent_sentence_boundaries, before='parser')

raw_text = u'>> We should. >> No.'
doc = nlp(raw_text)
sentences = [sent.string.strip() for sent in doc.sents]
for sentence in sentences:
    print(sentence)

Output

>> We should.
>> No.
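Note that the snippet above uses the spaCy v2 API that was current when this was written: the component function is passed directly to nlp.add_pipe, and sentences are read via Span.string. On spaCy v3 the same idea would look roughly like this (a sketch assuming v3's @Language.component registration and Span.text, not part of the original answer):

import spacy
from spacy.language import Language


@Language.component("prevent_sentence_boundaries")
def prevent_sentence_boundaries(doc):
    # Suppress a sentence start on any token that directly follows '>',
    # so '>>' is not broken off into one-token "sentences".
    for token in doc:
        if token.i > 0 and token.nbor(-1).text == '>':
            token.is_sent_start = False
    return doc


nlp = spacy.load('en_core_web_sm')
# In v3, components are added by their registered string name.
nlp.add_pipe("prevent_sentence_boundaries", before='parser')

doc = nlp('>> We should. >> No.')
for sent in doc.sents:
    print(sent.text.strip())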