Match trigrams, bigrams and unigrams to a text; if a unigram or bigram is a substring of an already matched trigram, pass; Python

Date: 2011-11-29 00:30:36

Tags: python nlp text-processing

main_text is a list containing sentences that have already been part-of-speech tagged:

 main_text = [[('the', 'DT'), ('mad', 'JJ'), ('hatter', 'NN'), ('likes', 'VB'),
               ('tea', 'NN'), ('and', 'CC'), ('hats', 'NN')], [('the', 'DT'), ('red', 'JJ'),
               ('queen', 'NN'), ('hates', 'VB'), ('alice', 'NN')]]

ngrams_to_match is a list of lists containing part-of-speech tagged trigrams:

 ngrams_to_match = [[('likes','VB'),('tea','NN'), ('and','CC')],
                    [('the', 'DT'), ('mad', 'JJ'), ('hatter', 'NN')],
                    [('hates', 'DT'), ('alice', 'JJ'), ('but', 'CC') ],
                    [('and', 'CC'), ('the', 'DT'), ('rabbit', 'NN')]]

(a) For each sentence in main_text, first check whether a complete trigram from ngrams_to_match matches. If a trigram matches, return the matching trigram and the sentence.

(b) Then check whether the first tuple of each trigram (a unigram) or its first two tuples (a bigram) match in main_text.

(c) If the unigram or bigram forms a substring of an already matched trigram, return nothing. Otherwise, return the bigram or unigram match and the sentence.
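
Condition (a) on its own amounts to sliding a window over each sentence and comparing it against every candidate trigram. A minimal sketch, assuming whole (word, tag) tuples are compared and using a hypothetical helper name find_trigram_matches:

 def find_trigram_matches(sentence, trigrams):
     matches = []
     for trigram in trigrams:
         for i in range(len(sentence) - len(trigram) + 1):
             if sentence[i:i + len(trigram)] == list(trigram):  # the window equals the candidate
                 matches.append((trigram, i))                   # keep the match and its position
     return matches

Conditions (b) and (c) then reduce to running the same check on trigram[:2] and trigram[:1] and dropping any hit whose position falls inside an already matched trigram.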

The output should be:

 trigram_match = [('the', 'DT'), ('mad', 'JJ'), ('hatter', 'NN')], sentence[0]
 trigram_match = [('likes','VB'),('tea','NN'), ('and','CC')], sentence[0]
 bigram_match = [('hates', 'DT'), ('alice', 'JJ')], sentence[1]

Condition (b) gives us the bigram_match.

The wrong output would be:

 trigram_match = [('the', 'DT'), ('mad', 'JJ'), ('hatter', 'NN')], sentence[0]
 bigram_match =  [('the', 'DT'), ('mad', 'JJ')] #*bad by condition c
 unigram_match = [('the', 'DT')] #*bad by condition c
 trigram_match = [('likes','VB'),('tea','NN'), ('and','CC')], sentence[0]
 bigram_match = [('likes','VB'),('tea','NN')] #*bad by condition c
 unigram_match = [('likes', 'VB')] #*bad by condition c

And so on.

The following very ugly code works for this toy example, but I'm wondering whether anyone has a leaner approach.

 for ngram in ngrams_to_match:
     for sentence in main_text:
         for tup in sentence:

             # we can't be sure that our part-of-speech tagger will
             # tag an ngram word and a main_text word the same way, so
             # we match the word in the tuple, not the whole tuple

             if ngram[0][0] == tup[0]:                 # if the word in the first ngram tuple matches...
                 unigram_index = sentence.index(tup)   # ...then this is our index
                 unigram = sentence[unigram_index][0]  # save it as a unigram
                 trigram = ()                          # reset, so the substring checks below are safe

                 try:
                     if sentence[unigram_index+1][0] == ngram[1][0]:
                         if sentence[unigram_index+2][0] == ngram[2][0]:  # match a trigram
                             trigram = (sentence[unigram_index][0], ngram[1][0], ngram[2][0])  # save the match
                             print 'heres the trigram-->', sentence, '\n', 'trigram--->', trigram
                 except IndexError:
                     pass

                 try:
                     if sentence[unigram_index+1][0] == ngram[1][0]:        # get bigram match
                         bigram = (sentence[unigram_index][0], ngram[1][0]) # save the match
                         if bigram[0] in trigram and bigram[1] in trigram:  # no substring matches
                             pass
                         else:
                             print 'heres a sentence-->', sentence, '\n', 'bigram--->', bigram
                         if unigram in bigram or unigram in trigram:        # no substring matches
                             pass
                         else:
                             print unigram
                 except IndexError:
                     pass

1 Answer:

Answer 0 (score: 0):

I've had a go at this using generators. I found a few gaps in your spec, so I've made some assumptions.

"If the unigram or bigram forms a substring of an already matched trigram, return nothing." - It's a bit ambiguous whether each gram here refers to the search element or the matched element. It's making me start to hate the word n-gram (which I had never heard of before last week).

Play around with what gets added to the found set to change which search elements are excluded.

# assumptions:
# - [('hates','DT'),('alice','JJ'),('but','CC')] is typoed and should be:
#   [('hates','VB'),('alice','NN'),('but','CC')]
# - matches can't overlap, matched elements are excluded from further checking
# - bigrams precede unigrams

main_text = [
  [('the','DT'),('mad','JJ'),('hatter','NN'),('likes','VB'),('tea','NN'),('and','CC'),('hats','NN')],
  [('the','DT'),('red','JJ'),('queen','NN'),('hates','VB'),('alice','NN')]
]
ngrams_to_match = [
  [('likes','VB'),('tea','NN'),('and','CC')],
  [('the','DT'),('mad','JJ'),('hatter','NN')],
  [('hates','VB'),('alice','NN'),('but','CC')],
  [('and','CC'),('the','DT'),('rabbit','NN')]
]

def slice_generator(sentence,size=3):
  """
  Generate slices through the sentence in decreasing sized windows. If True is sent to the
  generator, the elements from the previous window will be excluded from future slices.
  """
  sent = list(sentence)
  for c in range(size,0,-1):
    for i in range(len(sent)):
      slice = tuple(sent[i:i+c])
      if all(x is not None for x in slice) and len(slice) == c:
        used = yield slice
        if used:
          sent[i:i+c] = [None] * c  # blank out the matched window so later slices skip it

def gram_search(text,matches):
  # every trigram to search for, together with its bigram and unigram prefixes
  tri_bi_uni = set(tuple(x) for x in matches) | set(tuple(x[:2]) for x in matches) | set(tuple(x[:1]) for x in matches)
  found = set()
  for i, sentence in enumerate(text):
    gen = slice_generator(sentence)
    send = None
    try:
      while True:
        row = gen.send(send)  # next window; sending True tells the generator the previous window matched
        if row in tri_bi_uni - found:
          send = True
          # remember the shorter prefixes of this match so they are excluded later;
          # play with what goes into found here to change which search elements get suppressed
          found |= set(tuple(row[:x]) for x in range(1,len(row)))
          print "%s_gram_match, sentence[%s] = %r" % (len(row),i,row)
        else:
          send = False
    except StopIteration:
      pass

gram_search(main_text,ngrams_to_match)

Yields:

3_gram_match, sentence[0] = (('the', 'DT'), ('mad', 'JJ'), ('hatter', 'NN'))
3_gram_match, sentence[0] = (('likes', 'VB'), ('tea', 'NN'), ('and', 'CC'))
2_gram_match, sentence[1] = (('hates', 'VB'), ('alice', 'NN'))
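
The one piece of machinery above that may be unfamiliar is gen.send(...): the value passed to send() becomes the result of the yield expression inside slice_generator, which is how gram_search tells the generator to blank out a window it has just matched. A tiny standalone sketch of that pattern, using a made-up countdown generator that is not part of the answer's code:

# Hypothetical demo of the send()/yield feedback used by slice_generator.
def countdown(n):
    while n > 0:
        skip = yield n         # the value passed to send() lands here
        n -= 3 if skip else 1  # the consumer's feedback steers what comes next

gen = countdown(10)
print gen.send(None)   # start the generator            -> 10
print gen.send(True)   # "that one matched, jump ahead" -> 7
print gen.send(False)  # "no match, just step"          -> 6

slice_generator is the same idea, with the None-blanking as its reaction to a True sent back from gram_search.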