I'm trying to merge noun chunks into single tokens in a sentence and then get the POS tag of each token in the merged doc. However, for each merged span I seem to get the POS tag of the first token in the span (usually DET or ADJ) instead of NOUN.
Here is the code:
import spacy

def noun_chunk_retokenizer(doc):
    with doc.retokenize() as retokenizer:
        for chunk in doc.noun_chunks:
            retokenizer.merge(chunk)
    return doc

nlp = spacy.load('en_core_web_sm')
nlp.add_pipe(noun_chunk_retokenizer)

query = "when is the tennis match happening?"
[(c.text, c.pos_) for c in nlp(query)]
This is the result I get:
[('when', 'ADV'),
('is', 'VERB'),
('the tennis match', 'DET'),
('happening', 'VERB'),
('?', 'PUNCT')]
But I would expect 'the tennis match' to be tagged as 'NOUN', which is how it seems to work in the displaCy demo at https://explosion.ai/demos/displacy.
It feels like there should be a "standard" way to do this, but I'm not sure what it is.
Answer 0 (score: 1)
You should use the built-in merge_noun_chunks component.
See the Pipeline Functions documentation:
Merge noun chunks into a single token. Also available via the string name "merge_noun_chunks". After initialization, the component is typically added to the processing pipeline using nlp.add_pipe.
Example usage with the string name:
import spacy
nlp = spacy.load('en_core_web_sm')
nlp.add_pipe(nlp.create_pipe('merge_noun_chunks'))
query = "when is the tennis match happening?"
[(c.text,c.pos_) for c in nlp(query)]
Output:
[('when', 'ADV'),
('is', 'VERB'),
('the tennis match', 'NOUN'),
('happening', 'VERB'),
('?', 'PUNCT')]
As for how this is done in the source code, see the file /spaCy/blob/master/spacy/pipeline/functions.py in the spaCy GitHub repo (the function starts around line 7):
def merge_noun_chunks(doc):
    """Merge noun chunks into a single token.

    doc (Doc): The Doc object.
    RETURNS (Doc): The Doc object with merged noun chunks.

    DOCS: https://spacy.io/api/pipeline-functions#merge_noun_chunks
    """
    if not doc.is_parsed:
        return doc
    with doc.retokenize() as retokenizer:
        for np in doc.noun_chunks:
            attrs = {"tag": np.root.tag, "dep": np.root.dep}
            retokenizer.merge(np, attrs=attrs)
    return doc
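For completeness, here is a minimal sketch of how the custom component from the question could be fixed along the same lines, by copying attributes from each chunk's root token when merging. This is an assumption-based illustration (assuming spaCy v2.x; the explicit "pos" attribute is added here for clarity and is not part of the built-in function):

import spacy

def noun_chunk_retokenizer(doc):
    with doc.retokenize() as retokenizer:
        for chunk in doc.noun_chunks:
            # Without attrs, the merged token keeps the first token's POS (e.g. DET).
            # Copying the root token's attributes makes the chunk behave like a noun.
            attrs = {"pos": chunk.root.pos, "tag": chunk.root.tag, "dep": chunk.root.dep}
            retokenizer.merge(chunk, attrs=attrs)
    return doc

nlp = spacy.load('en_core_web_sm')
nlp.add_pipe(noun_chunk_retokenizer)
print([(t.text, t.pos_) for t in nlp("when is the tennis match happening?")])
# Expected: [('when', 'ADV'), ('is', 'VERB'), ('the tennis match', 'NOUN'),
#            ('happening', 'VERB'), ('?', 'PUNCT')]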