How to get the domain of a word using WordNet in Python?

Date: 2014-02-20 08:42:01

Tags: python nltk wordnet

How can I find the domain of a word using the nltk Python module and WordNet?

Suppose I have words like (transaction, demand draft, cheque, passbook), and the domain of all these words is "BANK". How can we get this in Python using nltk and WordNet?

I am trying to do it through the hypernym and hyponym relations:

For example:

>>> from nltk.corpus import wordnet as wn
>>> sports = wn.synset('sport.n.01')
>>> sports.hyponyms()
[Synset('judo.n.01'), Synset('athletic_game.n.01'), Synset('spectator_sport.n.01'), Synset('contact_sport.n.01'), Synset('cycling.n.01'), Synset('funambulism.n.01'), Synset('water_sport.n.01'), Synset('riding.n.01'), Synset('gymnastics.n.01'), Synset('sledding.n.01'), Synset('skating.n.01'), Synset('skiing.n.01'), Synset('outdoor_sport.n.01'), Synset('rowing.n.01'), Synset('track_and_field.n.01'), Synset('archery.n.01'), Synset('team_sport.n.01'), Synset('rock_climbing.n.01'), Synset('racing.n.01'), Synset('blood_sport.n.01')]

>>> bark = wn.synset('bark.n.02')
>>> bark.hypernyms()
[Synset('noise.n.01')]
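
Walking the whole hypernym chain (for instance with NLTK's Synset.closure) still only yields increasingly generic ancestors, not a domain label such as "BANK". A small sketch of that traversal, using bank.n.02 (the financial-institution sense in WordNet 3.0) as an example:

from nltk.corpus import wordnet as wn

# closure() is part of NLTK's Synset API: it walks a relation transitively.
# The chain ends in very generic synsets (group, entity, ...), which is why
# hypernyms alone do not give a domain label like "BANK".
bank = wn.synset('bank.n.02')   # the "depository financial institution" sense
for ancestor in bank.closure(lambda s: s.hypernyms()):
    print(ancestor)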

3 Answers:

Answer 0 (score: 11)

There is no explicit domain information in the Princeton WordNet or in NLTK's WordNet API.

I would suggest getting a copy of the WordNet Domains resource and then linking your synsets through its domains; see http://wndomains.fbk.eu/

After you register and complete the download, you will find a wn-domains-3.2-20070223 text file. It is a tab-delimited file: the first column is an offset-PartOfSpeech identifier and the second column contains the space-separated domain labels, e.g.

00584282-v  military pedagogy
00584395-v  military school university
00584526-v  animals pedagogy
00584634-v  pedagogy
00584743-v  school university
00585097-v  school university
00585271-v  pedagogy
00585495-v  pedagogy
00585683-v  psychological_features

Then use the following script to access the domains of the synsets:

from collections import defaultdict
from nltk.corpus import wordnet as wn

# Loading the Wordnet domains.
domain2synsets = defaultdict(list)
synset2domains = defaultdict(list)
for i in open('wn-domains-3.2-20070223', 'r'):
    ssid, doms = i.strip().split('\t')
    doms = doms.split()
    synset2domains[ssid] = doms
    for d in doms:
        domain2synsets[d].append(ssid)

# Gets domains given synset.
for ss in wn.all_synsets():
    ssid = str(ss.offset()).zfill(8) + "-" + ss.pos()
    if synset2domains[ssid]: # not all synsets are in WordNet Domains.
        print(ss, ssid, synset2domains[ssid])

# Gets synsets given domain.
for dom in sorted(domain2synsets):
    print(dom, domain2synsets[dom][:3])
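
With the two mappings loaded, looking up the candidate domains of a plain word is just a matter of checking each of its synsets. A minimal sketch (the helper name word2domains is made up here; also note that the offsets in the WordNet Domains file refer to WordNet 2.0, while NLTK ships WordNet 3.0, so some lookups will legitimately come back empty):

def word2domains(word, pos=None):
    # Hypothetical helper: collect the WordNet Domains labels attached to
    # any synset of `word`. Synsets whose offset key is not in the mapping
    # (e.g. because of the 2.0 vs 3.0 offset mismatch) contribute nothing.
    domains = set()
    for ss in wn.synsets(word, pos=pos):
        ssid = str(ss.offset()).zfill(8) + "-" + ss.pos()
        domains.update(synset2domains.get(ssid, []))
    return domains

print(word2domains('check', pos='n'))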

Also look into wn-affect, which is very useful for disambiguating sentiment words within the WordNet Domains resource.


Update: NLTK v3.0 ships with the Open Multilingual WordNet (http://compling.hss.ntu.edu.sg/omw/), and since the French synsets share the same offset IDs, you can simply use WND as a cross-lingual resource. The French lemma names can be accessed like this:

# Gets domains given synset.
for ss in wn.all_synsets():
    ssid = str(ss.offset()).zfill(8) + "-" + ss.pos()
    if synset2domains[ssid]: # not all synsets are in WordNet Domains.
        print(ss, ss.lemma_names('fre'), ssid, synset2domains[ssid])

Note that more recent versions of NLTK turned synset attributes into "getter" functions: Synset.offset -> Synset.offset() (the snippets above already use the newer form).
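
The language codes accepted by lemma_names() have also changed across NLTK/OMW releases (e.g. 'fre' vs. the ISO 639-3 code 'fra'), so it is worth checking what your installation actually exposes. A small sketch:

from nltk.corpus import wordnet as wn

# langs() is part of NLTK's WordNet corpus reader; it lists the language
# codes available through the Open Multilingual WordNet data you installed.
print(sorted(wn.langs()))

# Then use whichever French code appears in that list, e.g.:
# print(wn.synset('dog.n.01').lemma_names('fra'))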

Answer 1 (score: 1)

As @alvas suggested, you can use WordNetDomains. You have to download both WordNet2.0 (in its current state, WordNetDomains does not support WordNet3.0, the default WordNet version used by NLTK) and WordNetDomains.

  • WordNet2.0 can be downloaded from here
  • WordNetDomains can be downloaded from here (after obtaining the license).

I created a very simple Python API that loads both resources in Python3.x and offers some common routines you might need (such as getting the set of domains linked to a given term or to a given synset, etc.). The data loading for WordNetDomains comes from @alvas's answer.

This is what it looks like (with most comments omitted):

from collections import defaultdict
from nltk.corpus import WordNetCorpusReader
from os.path import exists


class WordNetDomains:
    def __init__(self, wordnet_home):
        #This class assumes you have downloaded WordNet2.0 and WordNetDomains and that they are on the same data home.
        assert exists(f'{wordnet_home}/WordNet-2.0'), f'error: missing WordNet-2.0 in {wordnet_home}'
        assert exists(f'{wordnet_home}/wn-domains-3.2'), f'error: missing WordNetDomains in {wordnet_home}'

        # load WordNet2.0
        self.wn = WordNetCorpusReader(f'{wordnet_home}/WordNet-2.0/dict', 'WordNet-2.0/dict')

        # load WordNetDomains (based on https://stackoverflow.com/a/21904027/8759307)
        self.domain2synsets = defaultdict(list)
        self.synset2domains = defaultdict(list)
        for i in open(f'{wordnet_home}/wn-domains-3.2/wn-domains-3.2-20070223', 'r'):
            ssid, doms = i.strip().split('\t')
            doms = doms.split()
            self.synset2domains[ssid] = doms
            for d in doms:
                self.domain2synsets[d].append(ssid)

    def get_domains(self, word, pos=None):
        word_synsets = self.wn.synsets(word, pos=pos)
        domains = []
        for synset in word_synsets:
            domains.extend(self.get_domains_from_synset(synset))
        return set(domains)

    def get_domains_from_synset(self, synset):
        return self.synset2domains.get(self._askey_from_synset(synset), set())

    def get_synsets(self, domain):
        return [self._synset_from_key(key) for key in self.domain2synsets.get(domain, [])]

    def get_all_domains(self):
        return set(self.domain2synsets.keys())

    def _synset_from_key(self, key):
        offset, pos = key.split('-')
        return self.wn.synset_from_pos_and_offset(pos, int(offset))

    def _askey_from_synset(self, synset):
        return self._askey_from_offset_pos(synset.offset(), synset.pos())

    def _askey_from_offset_pos(self, offset, pos):
        return str(offset).zfill(8) + "-" + pos
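
A minimal usage sketch (the wordnet_home path below is just a placeholder for wherever you unpacked WordNet-2.0/ and wn-domains-3.2/; 'banking' is one of the WordNet Domains labels):

# Usage sketch -- 'my_wordnet_data' is a placeholder directory containing
# both WordNet-2.0/ and wn-domains-3.2/.
wnd = WordNetDomains(wordnet_home='my_wordnet_data')

print(wnd.get_domains('check', pos='n'))     # candidate domains for the noun "check"
print(wnd.get_synsets('banking')[:5])        # a few synsets tagged with the "banking" domain
print(sorted(wnd.get_all_domains())[:10])    # a sample of all domain labels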

Answer 2 (score: 0)

I think you can also use the spacy library; see the code below:

Code taken from the official spacy-wordnet page: https://pypi.org/project/spacy-wordnet/

import spacy

from spacy_wordnet.wordnet_annotator import WordnetAnnotator 

# Load a spaCy model (supported models are "es" and "en")
nlp = spacy.load('en')
nlp.add_pipe(WordnetAnnotator(nlp.lang), after='tagger')
token = nlp('prices')[0]

# The wordnet object links the spaCy token with the NLTK WordNet interface,
# giving access to synsets and lemmas
token._.wordnet.synsets()
token._.wordnet.lemmas()

# And automatically tags with WordNet domains
token._.wordnet.wordnet_domains()

# Imagine we want to enrich the following sentence with synonyms
sentence = nlp('I want to withdraw 5,000 euros')

# spaCy WordNet lets you find synonyms by domain of interest,
# for example economy
economy_domains = ['finance', 'banking']
enriched_sentence = []

# For each token in the sentence
for token in sentence:
    # We get those synsets within the desired domains
    synsets = token._.wordnet.wordnet_synsets_for_domain(economy_domains)
    if synsets:
        lemmas_for_synset = []
        for s in synsets:
            # If we found a synset in the economy domains
            # we get the variants and add them to the enriched sentence
            lemmas_for_synset.extend(s.lemma_names())
        enriched_sentence.append('({})'.format('|'.join(set(lemmas_for_synset))))
    else:
        enriched_sentence.append(token.text)

# Let's see our enriched sentence
print(' '.join(enriched_sentence))
# >> I (need|want|require) to (draw|withdraw|draw_off|take_out) 5,000 euros
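
If you want to run this yourself, a rough setup sketch follows (package names as published on PyPI; the exact spaCy model name and the add_pipe syntax depend on your spaCy version, and the snippet above uses the spaCy 2.x style):

# Rough setup sketch -- run the pip/spacy commands in a shell first:
#   pip install spacy spacy-wordnet
#   python -m spacy download en_core_web_sm   # newer spaCy releases use full model names instead of 'en'
import nltk
nltk.download('wordnet')    # spacy-wordnet relies on NLTK's WordNet data
nltk.download('omw-1.4')    # may also be required by recent NLTK releases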