I need to match tokens that occur multiple times in a document, and to get the value and position of each matched token.

For non-Unicode text I use the regex r"\b(?=\w)" + re.escape(word) + r"\b(?!\w)" together with finditer, and it works fine.
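A minimal, self-contained check of the non-Unicode case (the sample sentence and search word here are assumptions for illustration):

```python
import re

# Hypothetical sample text and search word, just for illustration
document = "These are oranges and apples and pears"
word = "and"
# \b-based pattern: match "and" only as a whole word
rgx = re.compile(r"\b(?=\w)" + re.escape(word) + r"\b(?!\w)")
for m in rgx.finditer(document):
    print(m.group(), m.start(), m.end() - 1)  # value, begin, inclusive end
```

Both occurrences of "and" are reported, each at its own offset.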
For Unicode text I have to fall back on a word-boundary workaround like u"(\s|^)%s(\s|$)" % word. This works in most cases, but not when the same word occurs twice in a row (e.g. कहते कहते).
Here is the code that reproduces the problem.
import re
import json

# an input document of sentences
document = "These are oranges and apples and and pears, but not pinapples\nThese are oranges and apples and pears, but not pinapples"
# uncomment to test UNICODE
document = "तुम मुझे दोस्त कहते कहते हो"

sentences = []  # sentences
seen = {}  # map if a token has been seen already!

# split into sentences
lines = document.splitlines()
for index, line in enumerate(lines):
    print("Line:%d %s" % (index, line))
    # split tokens that are words
    # LP: (for Simon ;P) we do not care about punct at all!
    rgx = re.compile(r"([\w][\w']*\w)")
    tokens = rgx.findall(line)
    # uncomment to test UNICODE
    tokens = ["तुम", "मुझे", "दोस्त", "कहते", "कहते", "हो"]
    print("Tokens:", tokens)
    sentence = {}  # a sentence
    items = []  # word tokens
    # for each token word
    for index_word, word in enumerate(tokens):
        # uncomment to test UNICODE
        my_regex = u"(\s|^)%s(\s|$)" % word
        #my_regex = r"\b(?=\w)" + re.escape(word) + r"\b(?!\w)"
        r = re.compile(my_regex, flags=re.I | re.X | re.UNICODE)
        item = {}
        # for each matched token in the sentence
        for m in r.finditer(document):
            token = m.group()
            characterOffsetBegin = m.start()
            characterOffsetEnd = characterOffsetBegin + len(m.group()) - 1  # LP: start from 0
            print("word:%s characterOffsetBegin:%d characterOffsetEnd:%d" % (token, characterOffsetBegin, characterOffsetEnd))
            found = -1
            if word in seen:
                found = seen[word]
            if characterOffsetBegin > found:
                # store where this word has last been seen
                seen[word] = characterOffsetBegin
                item['index'] = index_word + 1  # word index starts from 1
                item['word'] = token
                item['characterOffsetBegin'] = characterOffsetBegin
                item['characterOffsetEnd'] = characterOffsetEnd
                items.append(item)
                break
    sentence['text'] = line
    sentence['tokens'] = items
    sentences.append(sentence)

print(json.dumps(sentences, indent=4, sort_keys=True))

print("------ testing ------")
text = ''
for sentence in sentences:
    for token in sentence['tokens']:
        # LP: we get the token from a slice of the original text
        text = text + document[token['characterOffsetBegin']:token['characterOffsetEnd'] + 1] + " "
    text = text + '\n'
print(text)
In particular, for the token कहते I get the same match twice instead of the next occurrence:
word: कहते characterOffsetBegin:20 characterOffsetEnd:25
word: कहते characterOffsetBegin:20 characterOffsetEnd:25
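The failure is easy to isolate: with the space-consuming pattern, the first match eats the separator space, so the engine, resuming after it, can no longer satisfy (\s|^) in front of the second occurrence. A minimal sketch:

```python
import re

document = "तुम मुझे दोस्त कहते कहते हो"
word = "कहते"
# The question's pattern: the groups consume the surrounding whitespace
consuming = re.compile(r"(\s|^)%s(\s|$)" % word)
matches = [(m.start(), m.end()) for m in consuming.finditer(document)]
print(len(matches))  # only one match, although the word occurs twice
```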
Answer 0 (score: 1)
For non-Unicode text you can use a better regex, such as
my_regex = r"(?<!\w){}(?!\w)".format(re.escape(word))
It will also work when word starts or ends with a non-word character. The (?<!\w) negative lookbehind fails the match if there is a word character immediately to the left of the current position, and the (?!\w) negative lookahead fails the match if there is a word character immediately to the right of the current position.
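To see why the lookarounds are preferable to \b for such tokens, compare them on a word that ends with a non-word character (the "C++" sample is an assumption for illustration):

```python
import re

text = "I code in C++ and C# daily"
word = "C++"
# \b after the final "+" needs a word char on the other side, so it fails
boundary = re.compile(r"\b" + re.escape(word) + r"\b")
# lookarounds only require that no word char is adjacent
lookaround = re.compile(r"(?<!\w)" + re.escape(word) + r"(?!\w)")
print(boundary.findall(text))    # no match
print(lookaround.findall(text))  # finds "C++"
```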
The second problem, with the Unicode-text regex, is that the second group consumes the whitespace after a word, so that whitespace cannot take part in the subsequent match. Lookarounds come in handy here, too:
my_regex = r"(?<!\S){}(?!\S)".format(re.escape(word))
See this Python demo online.
The (?<!\S) negative lookbehind fails the match if there is a non-whitespace character immediately to the left of the current position, and the (?!\S) negative lookahead fails the match if there is a non-whitespace character immediately to the right of the current position.
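Applied to the Hindi document from the question, the whitespace lookarounds find both occurrences of कहते at distinct offsets, since no whitespace is consumed between matches:

```python
import re

document = "तुम मुझे दोस्त कहते कहते हो"
word = "कहते"
# Zero-width checks: the separator space stays available for the next match
rgx = re.compile(r"(?<!\S)" + re.escape(word) + r"(?!\S)")
spans = [(m.start(), m.group()) for m in rgx.finditer(document)]
print(spans)  # two matches at different positions
```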