How to customize spaCy's tokenizer so it does not split phrases described by a regular expression

Time: 2019-05-04 15:27:15

Tags: nlp spacy

For example, I want the tokenizer to tokenize 'New York' as ['New York'] rather than the default ['New', 'York'].

The documentation suggests adding regular expressions when creating a custom tokenizer.

So I did the following:

import re
import spacy
from spacy.tokenizer import Tokenizer

target = re.compile(r'New York')

def custom_tokenizer(nlp):
    # reuse spaCy's default prefix/suffix/infix rules
    dflt_prefix = nlp.Defaults.prefixes
    dflt_suffix = nlp.Defaults.suffixes
    dflt_infix = nlp.Defaults.infixes

    prefix_re = spacy.util.compile_prefix_regex(dflt_prefix).search
    suffix_re = spacy.util.compile_suffix_regex(dflt_suffix).search
    infix_re = spacy.util.compile_infix_regex(dflt_infix).finditer

    # pass the target regex as token_match, hoping it keeps 'New York' intact
    return Tokenizer(nlp.vocab, prefix_search=prefix_re,
                                suffix_search=suffix_re,
                                infix_finditer=infix_re,
                                token_match=target.match)

nlp = spacy.load("en_core_web_sm")
nlp.tokenizer = custom_tokenizer(nlp)
doc = nlp(u"New York")
print([t.text for t in doc])

I used the defaults so that the normal behavior carries on, except when the function target (the argument passed to the token_match parameter) returns true.

But I still get ['New', 'York']. Any help is appreciated.

1 Answer:

Answer 0 (score: 0)

spaCy's tokenizer first splits the text on whitespace and only then applies token_match to each whitespace-separated piece, so a pattern that contains a space never gets a chance to match the whole phrase. Instead, use the PhraseMatcher component to identify the phrases you want to treat as single tokens, use the doc.retokenize context manager to merge the tokens of each matched phrase into one token, and finally wrap the whole thing in a custom pipeline component that you add to your language model.

import spacy
from spacy.lang.en import English
from spacy.matcher import PhraseMatcher
from spacy.tokens import Doc

class MatchRetokenizeComponent:
  def __init__(self, nlp, terms):
    self.terms = terms
    self.matcher = PhraseMatcher(nlp.vocab)
    # build one Doc pattern per phrase and register them all under a single key
    patterns = [nlp.make_doc(text) for text in terms]
    self.matcher.add("TerminologyList", None, *patterns)
    Doc.set_extension("phrase_matches", getter=self.matcher, force=True)  # you should probably set force=False

  def __call__(self, doc):
    matches = self.matcher(doc)
    # merge each matched span into a single token, using the span text as its lemma
    with doc.retokenize() as retokenizer:
        for match_id, start, end in matches:
            retokenizer.merge(doc[start:end], attrs={"LEMMA": str(doc[start:end])})
    return doc

terms = ["Barack Obama", "Angela Merkel", "Washington, D.C."]

nlp = English()
retokenizer = MatchRetokenizeComponent(nlp, terms) 
nlp.add_pipe(retokenizer, name='merge_phrases', last=True)

doc = nlp("German Chancellor Angela Merkel and US President Barack Obama "
          "converse in the Oval Office inside the White House in Washington, D.C.")

[tok for tok in doc]

#[German,
# Chancellor,
# Angela Merkel,
# and,
# US,
# President,
# Barack Obama,
# converse,
# in,
# the,
# Oval,
# Office,
# inside,
# the,
# White,
# House,
# in,
# Washington, D.C.]
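Because the merge sets each new token's LEMMA to the text of the merged span, the multi-word tokens also carry a sensible lemma. A quick sanity check (a minimal sketch, assuming the doc from the example above is still in scope):

# the merged spans now behave like ordinary tokens and expose the lemma set during the merge
merged = [tok for tok in doc if " " in tok.text]
for tok in merged:
    print(tok.text, "->", tok.lemma_)

# Angela Merkel -> Angela Merkel
# Barack Obama -> Barack Obama
# Washington, D.C. -> Washington, D.C.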

Edit: the approach above will actually raise an error if you end up trying to merge overlapping spans returned by the PhraseMatcher. If that is a problem, you may be better off using the newer EntityRuler, which tries to keep the longest contiguous match. Working from the entities it produces also lets us simplify the custom pipeline component a bit:

from spacy.pipeline import EntityRuler

class EntityRetokenizeComponent:
  def __init__(self, nlp):
    pass

  def __call__(self, doc):
    # merge every entity span into a single token, using the span text as its lemma
    with doc.retokenize() as retokenizer:
        for ent in doc.ents:
            retokenizer.merge(doc[ent.start:ent.end], attrs={"LEMMA": str(doc[ent.start:ent.end])})
    return doc


nlp = English()

ruler = EntityRuler(nlp)

# I don't care about the entity label, so I'm just going to call everything an "ORG"
ruler.add_patterns([{"label": "ORG", "pattern": term} for term in terms])
nlp.add_pipe(ruler) 

retokenizer = EntityRetokenizeComponent(nlp)
nlp.add_pipe(retokenizer, name='merge_phrases')
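Running this second pipeline over the same sentence should yield the same merged tokens as the PhraseMatcher version (a minimal sketch, reusing the terms list and the example text from above):

doc = nlp("German Chancellor Angela Merkel and US President Barack Obama "
          "converse in the Oval Office inside the White House in Washington, D.C.")

print([tok.text for tok in doc])
# 'Angela Merkel', 'Barack Obama' and 'Washington, D.C.' again come out as single tokens.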