Finding named entities in tokenized sentences with spaCy v2.0

Date: 2017-11-26 18:41:21

Tags: python nlp spacy

I am trying to:

  • tokenize sentences from a text
  • compute the named entities for each word in the sentence

Here is what I have done so far:

import spacy

nlp = spacy.load('en')
sentence = "Germany and U.S.A are popular countries. I am going to gym tonight"
sentence = nlp(sentence)

tokenized_sentences = []
for sent in sentence.sents:
    tokenized_sentences.append(sent)

for s in tokenized_sentences:
    labels = [ent.label_ for ent in s.ents]
    entities = [ent.text for ent in s.ents]

Error:

    labels = [ent.label_ for ent in s.ents]
    AttributeError: 'spacy.tokens.span.Span' object has no attribute 'ents'

Is there any other way to find the named entities of a tokenized sentence?

Thanks in advance

1 Answer:

Answer 0 (score: 1)

Note that you only have two entities here: U.S.A and Germany.

Simple version:

sentence = nlp("Germany and U.S.A are popular countries. I am going to gym tonight")
for ent in sentence.ents:
    print(ent.text, ent.label_)
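
With the `en` model this iterates over the entities of the whole Doc and typically prints Germany and U.S.A with a GPE label, though the exact labels depend on the model version.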

What I think you are trying to do:

sentence = nlp("Germany and U.S.A are popular countries. I am going to gym tonight")
for sent in sentence.sents:
    tmp = nlp(str(sent))
    for ent in tmp.ents:
        print(ent.text, ent.label_)
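
Since the original goal was to count the named entities for each sentence, here is a minimal sketch building on the approach above (assuming spaCy v2 and the `en` model; the `collections.Counter` grouping is my own addition, not part of the original answer):

import spacy
from collections import Counter

nlp = spacy.load('en')
doc = nlp("Germany and U.S.A are popular countries. I am going to gym tonight")

for sent in doc.sents:
    # Re-parse the sentence text so it becomes its own Doc with its own .ents
    sent_doc = nlp(sent.text)
    # Count how many entities of each label appear in this sentence
    label_counts = Counter(ent.label_ for ent in sent_doc.ents)
    print(sent.text, dict(label_counts))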