This is my first attempt at using pyparsing, and I'm having a hard time setting it up. I want to use pyparsing to parse lexc files. The lexc format is used to declare lexicons that are compiled into finite-state transducers.
Special characters:
: divides 'upper' and 'lower' sides of a 'data' declaration
; terminates entry
# reserved LEXICON name. end-of-word or final state
' ' (space) universal delimiter
! introduces comment to the end of the line
< introduces xfst-style regex
> closes xfst-style regex
% escape character: %: %; %# % %! %< %> %%
There are multiple levels to parse. In general, anything from an unescaped ! to the end of a line is a comment. This can be handled separately at every level.
At the document level, there are three distinct sections:
Multichar_Symbols Optional one-time declaration
LEXICON Usually many of these
END Anything after this is ignored
At the Multichar_Symbols level, anything separated by whitespace is a declaration. This section ends at the first LEXICON declaration.
Multichar_Symbols the+first-one thesecond_one
third_one ! comment that this one is special
+Pl ! plural
At the LEXICON level, the name of the LEXICON is declared as:
LEXICON the_name ! whitespace delimited
After the name declaration, a LEXICON is made up of entries of the form data continuation ;. Semicolons terminate entries. The data is optional.
At the data level, there are three possible forms:
upper:lower
simple (exploded so that upper and lower are both simple, i.e. simple:simple)
<xfst-style regex>
Examples:
! # is a reserved continuation that means "end of word".
dog+Pl:dogs # ; ! upper:lower continuation ;
cat # ; ! automatically exploded to "cat:cat # ;" by interpreter
Num ; ! no data, only a continuation to LEXICON named "Num"
<[1|2|3]+> # ; ! xfst-style regex enclosed in <>
Everything after END is ignored.
A complete lexc file might look like this:
! Comments begin with !
! Multichar_Symbols (separated by whitespace, terminated by first declared LEXICON)
Multichar_Symbols +A +N +V ! +A is adjectives, +N is nouns, +V is verbs
+Adv ! This one is for adverbs
+Punc ! punctuation
! +Cmpar ! This is broken for now, so I commented it out.
! The bulk of lexc is made of up LEXICONs, which contain entries that point to
! other LEXICONs. "Root" is a reserved lexicon name, and the start state.
! "#" is also a reserved lexicon name, and the end state.
LEXICON Root ! Root is a reserved lexicon name, if it is not declared, then the first LEXICON is assumed to be the root
big Adj ; ! This
bigly Adv ; ! Not sure if this is a real word...
dog Noun ;
cat Noun ;
crow Noun ;
crow Verb ;
Num ; ! This continuation class generates numbers using xfst-style regex
! NB all the following are reserved characters
sour% cream Noun ; ! escaped space
%: Punctuation ; ! escaped :
%; Punctuation ; ! escaped ;
%# Punctuation ; ! escaped #
%! Punctuation ; ! escaped !
%% Punctuation ; ! escaped %
%< Punctuation ; ! escaped <
%> Punctuation ; ! escaped >
%:%:%::%: # ; ! Should map ::: to :
LEXICON Adj
+A: # ; ! # is a reserved lexicon name which means end-of-word (final state).
! +Cmpar:er # ; ! Broken, so I commented it out.
LEXICON Adv
+Adv: # ;
LEXICON Noun
+N+Sg: # ;
+N+Pl:s # ;
LEXICON Num
<[0|1|2|3|4|5|6|7|8|9]> Num ; ! This is an xfst regular expression and a cyclic continuation
# ; ! After the first cycle, this makes sense, but as it is, this is bad.
LEXICON Verb
+V+Inf: # ;
+V+Pres:s # ;
LEXICON Punctuation
+Punc: # ;
END
This text is ignored because it is after END
So there are multiple distinct levels to parse. What is the best way to set this up in pyparsing? Are there any examples of a layered language like this that I could use as a model?
Answer 0 (score: 1)
The strategy when using pyparsing is to break the parsing problem up into small pieces, and then combine them into larger pieces.
Start with your first high-level definition of the structure:
Multichar_Symbols Optional one-time declaration
LEXICON Usually many of these
END Anything after this is ignored
Your final overall parser will look like this:
parser = (Optional(multichar_symbols_section)('multichar_symbols')
          + Group(OneOrMore(lexicon_section))('lexicons')
          + END)
The names in parentheses following each part give us labels, so that you can easily access the different parts of the parsed results.
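For example, once a full parse succeeds, those labels let you pull each piece out by name. A minimal sketch (lexc_text is a hypothetical string holding an entire lexc file; it is not defined in this answer):

result = parser.parseString(lexc_text)
print(result.lexicons)           # the Group of all LEXICON sections
print(result.multichar_symbols)  # the optional Multichar_Symbols part, if present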
Getting into deeper detail, let's look at how to define the parser for lexicon_section.
First define the punctuation and special keywords:
COLON,SEMI = map(Suppress, ":;")
HASH = Literal('#')
LEXICON, END = map(Keyword, "LEXICON END".split())
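As a quick aside (my illustration, not part of the original answer): Suppress matches text but drops it from the results, while Literal keeps it, which is why the '#' end-of-word marker survives into the parsed output:

# COLON and SEMI are suppressed; only the HASH token is kept
print((COLON + HASH + SEMI).parseString(": # ;"))  # -> ['#']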
Your identifiers and values can contain '%'-escaped characters, so we need to build them up from pieces:
# use regex and Combine to handle % escapes
escaped_char = Regex(r'%.').setParseAction(lambda t: t[0][1])
ident_lit_part = Word(printables, excludeChars=':%;')
xfst_regex = Regex(r'<.*?>')
ident = Combine(OneOrMore(escaped_char | ident_lit_part | xfst_regex))
value_expr = ident()
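A few informal checks of the escape handling (these test lines are mine, not from the answer):

print(ident.parseString("sour% cream"))  # -> ['sour cream'], escaped space preserved
print(ident.parseString("%:%:%:"))       # -> [':::'], escaped colons unescaped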
With these pieces, we can now define an individual lexicon declaration:
# handle the following lexicon declarations:
#   name ;
#   name:value ;
#   name value ;
#   name value # ;
lexicon_decl = Group(ident("name")
                     + Optional(Optional(COLON)
                                + value_expr("value")
                                + Optional(HASH)('hash'))
                     + SEMI)
This part is a little messy, since it turns out that value can come back as a string, as a results structure (a pyparsing ParseResults), or may even be missing entirely. We can use a parse action to normalize all of these forms into a single string form.
# use a parse action to normalize the parsed values
def fixup_value(tokens):
    if 'value' in tokens[0]:
        # pyparsing makes this a nested element, just take zero'th value
        if isinstance(tokens[0].value, ParseResults):
            tokens[0]['value'] = tokens[0].value[0]
    else:
        # no value was given, expand 'name' as if parsed 'name:name'
        tokens[0]['value'] = tokens[0].name
lexicon_decl.setParseAction(fixup_value)
Now the value will be cleaned up at parse time, so no extra code is needed after calling parseString.
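As a quick sanity check of the different declaration forms (my test lines, not from the answer), the normalized name/value pairs come out directly:

for test in ("big Adj ;", "Num ;", "dog+Pl:dogs # ;"):
    decl = lexicon_decl.parseString(test)[0]
    print(decl.name, '->', decl.value, decl.hash or '')
# big -> Adj
# Num -> Num        (no value given, so name is reused)
# dog+Pl -> dogs #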
We are finally ready to define a whole LEXICON section:
# TBD - make name optional, define as 'Root'
lexicon_section = Group(LEXICON
                        + ident("name")
                        + ZeroOrMore(lexicon_decl, stopOn=LEXICON | END)("declarations"))
One last bit of housekeeping: we need to ignore comments. We can call ignore on the topmost parser expression, and comments will be skipped throughout the entire parser:
# ignore comments anywhere in our parser
comment = '!' + Optional(restOfLine)
parser.ignore(comment)
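Because ignore propagates down into the contained expressions, even the smaller pieces will now skip comments. For instance (my example, not from the answer):

print(lexicon_decl.parseString("cat Noun ; ! a trailing comment")[0])
# -> ['cat', 'Noun']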
Here is all of the code in a single copy-pasteable section:
import pyparsing as pp

# define punctuation and special words
COLON, SEMI = map(pp.Suppress, ":;")
HASH = pp.Literal('#')
LEXICON, END = map(pp.Keyword, "LEXICON END".split())

# use regex and Combine to handle % escapes
escaped_char = pp.Regex(r'%.').setParseAction(lambda t: t[0][1])
ident_lit_part = pp.Word(pp.printables, excludeChars=':%;')
xfst_regex = pp.Regex(r'<.*?>')
ident = pp.Combine(pp.OneOrMore(escaped_char | ident_lit_part | xfst_regex))
value_expr = ident()

# handle the following lexicon declarations:
#   name ;
#   name:value ;
#   name value ;
#   name value # ;
lexicon_decl = pp.Group(ident("name")
                        + pp.Optional(pp.Optional(COLON)
                                      + value_expr("value")
                                      + pp.Optional(HASH)('hash'))
                        + SEMI)

# use a parse action to normalize the parsed values
def fixup_value(tokens):
    if 'value' in tokens[0]:
        # pyparsing makes this a nested element, just take zero'th value
        if isinstance(tokens[0].value, pp.ParseResults):
            tokens[0]['value'] = tokens[0].value[0]
    else:
        # no value was given, expand 'name' as if parsed 'name:name'
        tokens[0]['value'] = tokens[0].name
lexicon_decl.setParseAction(fixup_value)

# define a whole LEXICON section
# TBD - make name optional, define as 'Root'
lexicon_section = pp.Group(LEXICON
                           + ident("name")
                           + pp.ZeroOrMore(lexicon_decl, stopOn=LEXICON | END)("declarations"))

# this part still TBD - just put in a placeholder for now
multichar_symbols_section = pp.empty()

# tie it all together
parser = (pp.Optional(multichar_symbols_section)('multichar_symbols')
          + pp.Group(pp.OneOrMore(lexicon_section))('lexicons')
          + END)

# ignore comments anywhere in our parser
comment = '!' + pp.Optional(pp.restOfLine)
parser.ignore(comment)
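The multichar_symbols_section above is only a placeholder. As a rough sketch of how it might be filled in (my guess at a definition, following the question's description that the section runs from the Multichar_Symbols keyword to the first LEXICON; not part of the original answer):

# sketch only: whitespace-separated declarations, stopping at the first LEXICON
MULTICHAR_SYMBOLS = pp.Keyword("Multichar_Symbols")
multichar_symbols_section = (pp.Suppress(MULTICHAR_SYMBOLS)
                             + pp.ZeroOrMore(ident, stopOn=LEXICON))

# rebuild the overall parser so it uses the real definition
parser = (pp.Optional(multichar_symbols_section)('multichar_symbols')
          + pp.Group(pp.OneOrMore(lexicon_section))('lexicons')
          + END)
parser.ignore(comment)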
Parsing the posted "Root" sample (loaded into a string named lexicon_sample), we can view the results using dump():
result = lexicon_section.parseString(lexicon_sample)[0]
print(result.dump())
giving:
['LEXICON', 'Root', ['big', 'Adj'], ['bigly', 'Adv'], ['dog', 'Noun'], ['cat', 'Noun'], ['crow', 'Noun'], ['crow', 'Verb'], ['Num'], ['sour cream', 'Noun'], [':', 'Punctuation'], [';', 'Punctuation'], ['#', 'Punctuation'], ['!', 'Punctuation'], ['%', 'Punctuation'], ['<', 'Punctuation'], ['>', 'Punctuation'], [':::', ':', '#']]
- declarations: [['big', 'Adj'], ['bigly', 'Adv'], ['dog', 'Noun'], ['cat', 'Noun'], ['crow', 'Noun'], ['crow', 'Verb'], ['Num'], ['sour cream', 'Noun'], [':', 'Punctuation'], [';', 'Punctuation'], ['#', 'Punctuation'], ['!', 'Punctuation'], ['%', 'Punctuation'], ['<', 'Punctuation'], ['>', 'Punctuation'], [':::', ':', '#']]
  [0]:
    ['big', 'Adj']
    - name: 'big'
    - value: 'Adj'
  [1]:
    ['bigly', 'Adv']
    - name: 'bigly'
    - value: 'Adv'
  [2]:
    ['dog', 'Noun']
    - name: 'dog'
    - value: 'Noun'
  ...
  [13]:
    ['<', 'Punctuation']
    - name: '<'
    - value: 'Punctuation'
  [14]:
    ['>', 'Punctuation']
    - name: '>'
    - value: 'Punctuation'
  [15]:
    [':::', ':', '#']
    - hash: '#'
    - name: ':::'
    - value: ':'
- name: 'Root'
This code shows how to iterate over the parts of the section and get at the named parts:
# try out a lexicon against the posted sample
result = lexicon_section.parseString(lexicon_sample)[0]
print(result.dump())
print('Name:', result.name)
print('\nDeclarations')
for decl in result.declarations:
print("{name} -> {value}".format_map(decl), "(END)" if decl.hash else '')
giving:
Name: Root
Declarations
big -> Adj
bigly -> Adv
dog -> Noun
cat -> Noun
crow -> Noun
crow -> Verb
Num -> Num
sour cream -> Noun
: -> Punctuation
; -> Punctuation
# -> Punctuation
! -> Punctuation
% -> Punctuation
< -> Punctuation
> -> Punctuation
::: -> : (END)
Hopefully this is enough for you to take it from here.