How can a simple tokenizer be rewritten to use regular expressions?

Asked: 2012-08-16 18:52:46

Tags: python regex rewrite tokenize lexer

This is an optimized version of a tokenizer that was written first, and it works quite well. A secondary tokenizer could parse this function's output to create more specifically classified tokens.

def tokenize(source):
    return (token for token in (token.strip() for line
            in source.replace('\r\n', '\n').replace('\r', '\n').split('\n')
            for token in line.split('#', 1)[0].split(';')) if token)
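
For example, a secondary pass over those flat tokens might look something like the sketch below; the classify name and the two token kinds are made up for illustration, not part of the code above:

import re

ASSIGNMENT = re.compile(r'^(\w+)\s*=\s*(.+)$')

def classify(tokens):
    # Label each flat statement as an assignment or a bare expression.
    for token in tokens:
        match = ASSIGNMENT.match(token)
        if match:
            yield 'assign', match.group(1), match.group(2)
        else:
            yield 'expr', token

for kind, *rest in classify(tokenize('a = 1 + 2; a')):
    print(kind, rest)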

My question is this: how can this be written just as simply using the re module? Below is my attempt, which does not work.

import re

def tokenize2(string):
    search = re.compile(r'^(.+?)(?:;(.+?))*?(?:#.+)?$', re.MULTILINE)
    for match in search.finditer(string):
        for item in match.groups():
            yield item
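
As far as I can tell, the attempt trips over two re behaviours: a repeated capturing group such as (?:;(.+?))*? keeps only the text of its last repetition, so middle statements on a line are lost, and a group that never participates in the match comes back from groups() as None. A quick check of both, using the same pattern as above:

>>> import re
>>> pattern = r'^(.+?)(?:;(.+?))*?(?:#.+)?$'
>>> re.match(pattern, 'a = 1; b = 2; c = 3').groups()
('a = 1', ' c = 3')
>>> re.match(pattern, 'e = (6 + 7) * 8').groups()
('e = (6 + 7) * 8', None)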

Edit: This is the type of output I am looking for from the tokenizer. Parsing the text should be easy.

>>> def tokenize(source):
    return (token for token in (token.strip() for line
            in source.replace('\r\n', '\n').replace('\r', '\n').split('\n')
            for token in line.split('#', 1)[0].split(';')) if token)

>>> for token in tokenize('''\
a = 1 + 2; b = a - 3 # create zero in b
c = b * 4; d = 5 / c # trigger div error

e = (6 + 7) * 8
# try a boolean operation
f = 0 and 1 or 2
a; b; c; e; f'''):
    print(repr(token))


'a = 1 + 2'
'b = a - 3'
'c = b * 4'
'd = 5 / c'
'e = (6 + 7) * 8'
'f = 0 and 1 or 2'
'a'
'b'
'c'
'e'
'f'
>>> 

2 Answers:

Answer 0 (score: 1)

I may be way off here, but:

>>> import re
>>> def tokenize(source):
...     search = re.compile(r'^(.+?)(?:;(.+?))*?(?:#.+)?$', re.MULTILINE)
...     return (token.strip() for line in source.split('\n') if search.match(line)
...                   for token in line.split('#', 1)[0].split(';') if token)
... 
>>> 
>>> 
>>> for token in tokenize('''\
... a = 1 + 2; b = a - 3 # create zero in b
... c = b * 4; d = 5 / c # trigger div error
... 
... e = (6 + 7) * 8
... # try a boolean operation
... f = 0 and 1 or 2
... a; b; c; e; f'''):
...     print(repr(token))
... 
'a = 1 + 2'
'b = a - 3'
'c = b * 4'
'd = 5 / c'
'e = (6 + 7) * 8'
'f = 0 and 1 or 2'
'a'
'b'
'c'
'e'
'f'
>>> 

Where applicable, I would keep the re.compile outside the def's scope.
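
In other words, something like the sketch below (my rearrangement of the same code; note that re also keeps an internal cache of compiled patterns, so this mainly buys clarity and skips the cache lookup on each call):

import re

# Compiled once at import time instead of on every call.
SEARCH = re.compile(r'^(.+?)(?:;(.+?))*?(?:#.+)?$', re.MULTILINE)

def tokenize(source):
    return (token.strip() for line in source.split('\n') if SEARCH.match(line)
                  for token in line.split('#', 1)[0].split(';') if token)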

Answer 1 (score: 1)

Here is one based on your tokenize2 function:

import re

def tokenize2(source):
    search = re.compile(r'([^;#\n]+)[;\n]?(?:#.+)?', re.MULTILINE)
    for match in search.finditer(source):
        for item in match.groups():
            yield item

>>> for token in tokenize2('''\
... a = 1 + 2; b = a - 3 # create zero in b
... c = b * 4; d = 5 / c # trigger div error
... 
... e = (6 + 7) * 8
... # try a boolean operation
... f = 0 and 1 or 2
... a; b; c; e; f'''):
...     print(repr(token))
... 
'a = 1 + 2'
' b = a - 3 '
'c = b * 4'
' d = 5 / c '
'e = (6 + 7) * 8'
'f = 0 and 1 or 2'
'a'
' b'
' c'
' e'
' f'
>>>
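
The leading spaces in that output come straight from the captures. If stripped tokens are needed to match the question's expected output, one small tweak (my variation, not part of the original) is to strip each capture and drop anything left empty:

import re

def tokenize2(source):
    # [^;#\n]+ captures a run of characters up to a separator, comment
    # marker, or newline; strip each capture and skip blank leftovers.
    search = re.compile(r'([^;#\n]+)[;\n]?(?:#.+)?')
    for match in search.finditer(source):
        token = match.group(1).strip()
        if token:
            yield token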