How do I extract a GPE (location) using NLTK ne_chunk?

Date: 2018-02-07 09:46:01

Tags: python geolocation nlp nltk named-entity-recognition

I am trying to write code that uses the OpenWeatherMap API together with NLTK named entity recognition to check the weather for a specific location. However, I cannot find a way to pass the entity tagged as GPE (the location, Chicago in this example) into my API request. Please help me with the syntax. My code is given below.

Thanks for your help.

import nltk
from nltk import load_parser
import requests
from nltk import word_tokenize
from nltk.corpus import stopwords

sentence = "What is the weather in Chicago today? "
tokens = word_tokenize(sentence)

stop_words = set(stopwords.words('english'))

clean_tokens = [w for w in tokens if w not in stop_words]

tagged = nltk.pos_tag(clean_tokens)

print(nltk.ne_chunk(tagged))

2 Answers:

Answer 0 (score: 1):

GPE is a Tree label that the pre-trained ne_chunk model assigns to named-entity subtrees.

>>> from nltk import word_tokenize, pos_tag, ne_chunk
>>> sent = "What is the weather in Chicago today?"
>>> ne_chunk(pos_tag(word_tokenize(sent)))
Tree('S', [('What', 'WP'), ('is', 'VBZ'), ('the', 'DT'), ('weather', 'NN'), ('in', 'IN'), Tree('GPE', [('Chicago', 'NNP')]), ('today', 'NN'), ('?', '.')])

To traverse the tree, see How to Traverse an NLTK Tree object?

You may also be looking for NLTK Named Entity recognition to a Python list,

with a slight modification:
from nltk import word_tokenize, pos_tag, ne_chunk
from nltk import Tree

def get_continuous_chunks(text, label):
    chunked = ne_chunk(pos_tag(word_tokenize(text)))
    continuous_chunk = []
    current_chunk = []

    for subtree in chunked:
        if type(subtree) == Tree and subtree.label() == label:
            # collect the tokens of a subtree carrying the requested label
            current_chunk.append(" ".join([token for token, pos in subtree.leaves()]))
        elif current_chunk:
            # a non-matching token ends the current chunk
            named_entity = " ".join(current_chunk)
            if named_entity not in continuous_chunk:
                continuous_chunk.append(named_entity)
            current_chunk = []

    # flush a chunk that runs to the end of the sentence
    if current_chunk:
        named_entity = " ".join(current_chunk)
        if named_entity not in continuous_chunk:
            continuous_chunk.append(named_entity)

    return continuous_chunk

[OUT]:

>>> sent = "What is the weather in New York today?"
>>> get_continuous_chunks(sent, 'GPE')
['New York']

>>> sent = "What is the weather in New York and Chicago today?"
>>> get_continuous_chunks(sent, 'GPE')
['New York', 'Chicago']
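To connect this back to the original question, the list returned by get_continuous_chunks can then be fed into the OpenWeatherMap request. A minimal sketch (the endpoint and parameter names follow OpenWeatherMap's current-weather API, and YOUR_API_KEY is a placeholder you must replace with your own key):

import requests

sentence = "What is the weather in Chicago today?"
locations = get_continuous_chunks(sentence, 'GPE')

if locations:
    # query the current-weather endpoint for the first GPE found in the sentence
    response = requests.get(
        "http://api.openweathermap.org/data/2.5/weather",
        params={"q": locations[0], "appid": "YOUR_API_KEY"}
    )
    print(response.json())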

Answer 1 (score: 0):

Here is the solution I would suggest for your case:

Step 1. Word tokenization, POS tagging, and named entity recognition. The code is as follows:

    import nltk
    from nltk import word_tokenize

    Xstring = "What is the weather in New York and Chicago today?"

    tokenized_doc = word_tokenize(Xstring)
    tagged_sentences = nltk.pos_tag(tokenized_doc)
    NE = nltk.ne_chunk(tagged_sentences)
    NE.draw()   # opens a Tkinter window showing the chunked tree
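If no display is available, NE.draw() cannot open its window; printing the tree shows the same structure in text form. For the example sentence it looks roughly like the following (assuming the default NLTK chunker tags both cities as GPE, as the output in Step 3 confirms):

    print(NE)
    # (S
    #   What/WP
    #   is/VBZ
    #   the/DT
    #   weather/NN
    #   in/IN
    #   (GPE New/NNP York/NNP)
    #   and/CC
    #   (GPE Chicago/NNP)
    #   today/NN
    #   ?/.)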

Step 2. After named entity recognition (done above), extract all the named entities:

    named_entities = []
    for tagged_tree in NE:
        print(tagged_tree)
        if hasattr(tagged_tree, 'label'):   # Tree subtrees have a label; plain (word, tag) tuples do not
            entity_name = ' '.join(c[0] for c in tagged_tree.leaves())  # join the words of the entity
            entity_type = tagged_tree.label()  # get the NE category, e.g. GPE
            named_entities.append((entity_name, entity_type))

    print(named_entities)  # all entities will be printed, check at your end once

Step 3. Now extract only the GPE tags:

    for tag in named_entities:
        # print(tag[1])
        if tag[1] == 'GPE':   # specify whichever tag is required
            print(tag)

Here is my output:

  ('New York', 'GPE')
  ('Chicago', 'GPE')
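As a side note, Steps 2 and 3 can be collapsed into a single pass over the chunked tree. This compact variant (not part of the original answer, but the same logic) yields the same pairs:

    gpe_entities = [(' '.join(tok for tok, pos in subtree.leaves()), subtree.label())
                    for subtree in NE
                    if hasattr(subtree, 'label') and subtree.label() == 'GPE']
    print(gpe_entities)   # [('New York', 'GPE'), ('Chicago', 'GPE')]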