This is a follow-up to my question. I am using nltk to parse out persons, organizations, and their relationships. Using this example, I was able to create lists of persons and organizations; however, I am getting an error in the nltk.sem.extract_rels command:
AttributeError: 'Tree' object has no attribute 'text'
Here is the complete code:
import nltk
import re
#billgatesbio from http://www.reuters.com/finance/stocks/officerProfile?symbol=MSFT.O&officerId=28066
with open('billgatesbio.txt', 'r') as f:
    sample = f.read()
sentences = nltk.sent_tokenize(sample)
tokenized_sentences = [nltk.word_tokenize(sentence) for sentence in sentences]
tagged_sentences = [nltk.pos_tag(sentence) for sentence in tokenized_sentences]
chunked_sentences = nltk.batch_ne_chunk(tagged_sentences)
# tried plain ne_chunk instead of batch_ne_chunk as given in the book
#chunked_sentences = [nltk.ne_chunk(sentence) for sentence in tagged_sentences]
# pattern to find <person> served as <title> in <org>
IN = re.compile(r'.+\s+as\s+')
for doc in chunked_sentences:
    for rel in nltk.sem.extract_rels('ORG', 'PERSON', doc, corpus='ieer', pattern=IN):
        print nltk.sem.show_raw_rtuple(rel)
This example is very similar to the one given in the book, but that example uses prepared "parsed docs", which appear out of nowhere; I don't know where to find that object type. I have also searched through the git repository. Any help is appreciated.
My ultimate goal is to extract persons, organizations, and titles (with dates) for some companies, and then to create network maps of the persons and organizations.
Answer 0 (score: 5)
It looks like to be a "parsed doc", an object needs to have a headline member and a text member, both of which are lists of tokens, where some of the tokens are marked up as trees. For example, this (hacky) example works:
import nltk
import re
IN = re.compile (r'.*\bin\b(?!\b.+ing)')
class doc():
    pass

doc.headline = ['foo']
doc.text = [nltk.Tree('ORGANIZATION', ['WHYY']), 'in', nltk.Tree('LOCATION', ['Philadelphia']), '.', 'Ms.', nltk.Tree('PERSON', ['Gross']), ',']

for rel in nltk.sem.extract_rels('ORG', 'LOC', doc, corpus='ieer', pattern=IN):
    print nltk.sem.relextract.show_raw_rtuple(rel)
When run, it gives the output:
[ORG: 'WHYY'] 'in' [LOC: 'Philadelphia']
Obviously you wouldn't code it like this, but it provides a working example of the data format expected by extract_rels; you just have to determine how to do your preprocessing steps to get your data into that format.
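The "parsed doc" shape described above can be sketched without nltk at all: the only requirements are a headline member and a text member, where named entities appear as tree nodes. The Tree and ParsedDoc classes below are minimal stand-ins written for illustration (with nltk installed, nltk.Tree plays the Tree role); they are not part of any nltk API.

```python
class Tree:
    """Minimal stand-in for nltk.Tree: a node label plus leaf tokens."""
    def __init__(self, label, leaves):
        self._label = label
        self._leaves = leaves

    def label(self):
        return self._label

    def leaves(self):
        return self._leaves


class ParsedDoc:
    """Hypothetical wrapper supplying the two members extract_rels reads."""
    def __init__(self, headline, text):
        self.headline = headline  # list of tokens
        self.text = text          # list of tokens; NEs wrapped as Trees


doc = ParsedDoc(
    headline=['foo'],
    text=[Tree('ORGANIZATION', ['WHYY']), 'in',
          Tree('LOCATION', ['Philadelphia']), '.'],
)

# Collect the entity labels, mimicking what a relation extractor scans for.
entities = [t.label() for t in doc.text if isinstance(t, Tree)]
print(entities)  # ['ORGANIZATION', 'LOCATION']
```

The preprocessing task then reduces to wrapping each chunked sentence (or whole document) in an object of this shape before handing it to extract_rels.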
Answer 1 (score: 5)
Here is the source code of the nltk.sem.extract_rels function:
def extract_rels(subjclass, objclass, doc, corpus='ace', pattern=None, window=10):
    """
    Filter the output of ``semi_rel2reldict`` according to specified NE classes and a filler pattern.

    The parameters ``subjclass`` and ``objclass`` can be used to restrict the
    Named Entities to particular types (any of 'LOCATION', 'ORGANIZATION',
    'PERSON', 'DURATION', 'DATE', 'CARDINAL', 'PERCENT', 'MONEY', 'MEASURE').

    :param subjclass: the class of the subject Named Entity.
    :type subjclass: str
    :param objclass: the class of the object Named Entity.
    :type objclass: str
    :param doc: input document
    :type doc: ieer document or a list of chunk trees
    :param corpus: name of the corpus to take as input; possible values are
        'ieer' and 'conll2002'
    :type corpus: str
    :param pattern: a regular expression for filtering the fillers of
        retrieved triples.
    :type pattern: SRE_Pattern
    :param window: filters out fillers which exceed this threshold
    :type window: int
    :return: see ``mk_reldicts``
    :rtype: list(defaultdict)
    """
    ....
So if you pass the corpus parameter as 'ieer', the nltk.sem.extract_rels function expects the doc argument to be an IEERDocument object. You should instead pass corpus as 'ace', or not pass it at all (it defaults to 'ace'); in that case it expects a list of chunk trees, which is what you have. I modified the code as follows:
import nltk
import re
from nltk.sem import extract_rels,rtuple
#billgatesbio from http://www.reuters.com/finance/stocks/officerProfile?symbol=MSFT.O&officerId=28066
with open('billgatesbio.txt', 'r') as f:
    sample = f.read().decode('utf-8')
sentences = nltk.sent_tokenize(sample)
tokenized_sentences = [nltk.word_tokenize(sentence) for sentence in sentences]
tagged_sentences = [nltk.pos_tag(sentence) for sentence in tokenized_sentences]
# here i changed reg ex and below i exchanged subj and obj classes' places
OF = re.compile(r'.*\bof\b.*')
for i, sent in enumerate(tagged_sentences):
    sent = nltk.ne_chunk(sent)  # ne_chunk method expects one tagged sentence
    rels = extract_rels('PER', 'ORG', sent, corpus='ace', pattern=OF, window=7)  # extract_rels method expects one chunked sentence
    for rel in rels:
        print('{0:<5}{1}'.format(i, rtuple(rel)))
It gives the result:
[PER: u'Chairman/NNP'] u'and/CC Chief/NNP Executive/NNP Officer/NNP of/IN the/DT' [ORG: u'Company/NNP']
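Toward the network-map goal stated in the question, relations in this shape can be accumulated into a simple person-to-organization edge list. A minimal sketch follows; the rels list here is hard-coded sample data standing in for extract_rels output, and the 'subjtext'/'objtext' keys mirror the reldict fields in nltk.sem.relextract (an assumption worth checking against your nltk version):

```python
from collections import defaultdict

# Hard-coded stand-ins for the reldicts returned by extract_rels;
# in nltk these are defaultdicts carrying (among others) 'subjtext'
# and 'objtext' entries with word/POS strings.
rels = [
    {'subjtext': 'Chairman/NNP', 'objtext': 'Company/NNP'},
    {'subjtext': 'Gates/NNP', 'objtext': 'Microsoft/NNP'},
]

# Build an adjacency map: person -> set of organizations.
edges = defaultdict(set)
for rel in rels:
    person = rel['subjtext'].split('/')[0]  # strip the POS tag
    org = rel['objtext'].split('/')[0]
    edges[person].add(org)

print(dict(edges))  # {'Chairman': {'Company'}, 'Gates': {'Microsoft'}}
```

An edge list of this form can then be fed into a graph library for visualization.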
Answer 2 (score: 0)
This is an nltk version issue. Your code should work in nltk 2.x, but for nltk 3 you should code it like this:
IN = re.compile(r'.*\bin\b(?!\b.+ing)')
for doc in nltk.corpus.ieer.parsed_docs('NYT_19980315'):
    for rel in nltk.sem.relextract.extract_rels('ORG', 'LOC', doc, corpus='ieer', pattern=IN):
        print(nltk.sem.relextract.rtuple(rel))