Parsing potential function parameters with parsimonious

Date: 2019-05-06 19:37:49

Tags: python regex python-3.x parsing

This question was originally asked on Code Review. Asking it here again for suggestions.


Background

A force field is a set of functions and parameters used to calculate the potential energy of a complex system. I have some text files containing force-field parameter data. The text files are divided into sections, and every section follows the same format:

  • A section header enclosed in square brackets
  • On the next line, the word indices: followed by a list of integers
  • Then one or more lines of parameters associated with that section

Here is a sample file showing the format.

############################################
# Comments begin with '#'
############################################

[lj_pairs] # Section 1
    indices:    0 2
#  ID      eps    sigma
    1       2.344   1.234   5
    2       4.423   5.313   5
    3       1.573   6.321   5
    4       1.921   11.93   5

[bonds]
indices:    0 1
    2   4.234e-03   11.2
    6   -0.134545   5.7

The goal is to parse such files and store all the information in a dict.
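For the sample above, the structure I am after looks roughly like this (FFSections is a small container dataclass defined in the code further down):

{
    "lj_pairs": FFSections(use_idx=[0, 2],
                           coeffs={1: [2.344, 1.234, 5.0],
                                   2: [4.423, 5.313, 5.0],
                                   3: [1.573, 6.321, 5.0],
                                   4: [1.921, 11.93, 5.0]}),
    "bonds": FFSections(use_idx=[0, 1],
                        coeffs={2: [4.234e-03, 11.2],
                                6: [-0.134545, 5.7]}),
}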


Currently, I have the following code to do the job:

""" Force-field data reader """

import re
from dataclasses import dataclass, field
from typing import Dict, Iterable, List, Optional, TextIO, Tuple, Union


def ff_reader(fname: Union[str, TextIO]) -> Dict[str, "FFSections"]:
    """ Reads data from a force-field file """

    try:
        if isinstance(fname, str):
            fh = open(fname, mode="r")
            own = True
        else:
            fh = iter(fname)
            own = False
    except TypeError:
        raise ValueError("fname must be a string or a file handle")

    # All the possible section headers
    keywords = ("lj_pairs", "bonds")  # etc... Long list of possible sections
                                      # Removed for brevity
    re_sections = re.compile(r"^\[(%s)\]$" % "|".join(keywords))
    ff_data = _strip_comments(fh)
    # Dict that will hold all the data, pre-populated with empty FFSections.
    final_ff_data = {key: FFSections() for key in keywords}

    # Get first section header
    for line in ff_data:
        match = re.match(re_sections, line)
        if match:
            section = match.group(1)
            in_section_for_first_time = True
            break
        else:
            raise FFReaderError("A valid section header must be the first line in file")
    else:
        raise FFReaderError("No force-field sections exist")

    # Read the rest of the file
    for line in ff_data:

        match = re.match(re_sections, line)

        # If we've encountered a section header, the next line must be an index list.
        if in_section_for_first_time:
            if line.split()[0] != "indices:":
                raise FFReaderError(f"Missing index list for section: {section}")
            idx = _validate_indices(line)
            final_ff_data[section].use_idx = idx
            in_section_for_first_time = False
            in_params_for_first_time = True
            continue

        if match and in_params_for_first_time:
            raise FFReaderError(
                f"Section {section} is missing parameters. "
                "Sections must contain at least one set of coefficients."
            )

        if match:  # new section header; the previous section already has parameters
            section = match.group(1)
            in_section_for_first_time = True
            continue

        params = _validate_params(line)
        final_ff_data[section].coeffs.update([params])
        in_params_for_first_time = False

    # Close the file if we opened it
    if own:
        fh.close()

    for section in final_ff_data.values():
        # coeff must exist if use_idx does
        if section.use_idx is not None:
            assert section.coeffs

    return final_ff_data

def _strip_comments(
    instream: TextIO, comments: Union[str, Iterable[str], None] = "#"
) -> Iterable[str]:
    """ Strip comments from a text IO stream """

    if comments is not None:
        if isinstance(comments, str):
            comments = [comments]
        comments_re = re.compile("|".join(map(re.escape, comments)))
    else:
        # Never-matching pattern: lines are only stripped of surrounding whitespace
        comments_re = re.compile(r"(?!)")
    try:
        for raw_line in instream.readlines():
            line = comments_re.split(raw_line, maxsplit=1)[0].strip()
            if line != "":
                yield line
    except AttributeError:
        raise TypeError("instream must be a `TextIO` stream") from None


@dataclass(eq=False)
class FFSections:
    """
    FFSections(coeffs,use_idx)

    Container for forcefield information
    """

    coeffs: Dict[int, List[float]] = field(default_factory=dict)
    use_idx: Optional[List[int]] = None


class FFReaderError(Exception):
    """ Incorrect or badly formatted force-Field data """

    def __init__(self, message: str, badline: Optional[str] = None) -> None:
        if badline:
            message = f"{message}\nError parsing --> ({badline})"
        super().__init__(message)


def _validate_indices(line: str) -> List[int]:
    """
    Check if given line contains only a whitespace separated
    list of integers
    """
    # Everything after "indices:" should be whitespace-separated integers
    split = line.split("indices:")[1].split()
    if not (split and all(s.isdecimal() for s in split)):
        raise FFReaderError(
            "Indices should be integers separated by whitespace", line
        )
    return [int(x) for x in split]


def _validate_params(line: str) -> Tuple[int, List[float]]:
    """
    Check if the given line is a valid parameter line: an integer
    followed by one or more floats separated by whitespace
    """
    split = line.split()
    id_ = split[0]
    coeffs = split[1:]
    if not id_.isdecimal():
        raise FFReaderError("Invalid params", line)
    try:
        coeffs = [float(x) for x in coeffs]
    except (TypeError, ValueError):
        raise FFReaderError("Invalid params", line) from None
    return (int(id_), coeffs)
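For completeness, this is how I use it (assuming the sample file above is saved as, say, ff_sample.txt; the name is only for illustration). Passing an open file handle instead of a filename works as well:

ff_data = ff_reader("ff_sample.txt")   # "ff_sample.txt" = the sample file above (name just for illustration)
print(ff_data["bonds"].use_idx)        # [0, 1]
print(ff_data["bonds"].coeffs[2])      # [0.004234, 11.2]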

Question

This seems like a lot of code for a simple task. How could a parsing library like parsimonious (or something similar) be used to simplify parsing such files?

1 answer:

Answer 0 (score: 0)

As mentioned in another answer, you can use a parsing library like parsimonious in combination with a NodeVisitor class:

from parsimonious.grammar import Grammar
from parsimonious.nodes import NodeVisitor

data = """
############################################
# Comments begin with '#'
############################################

[lj_pairs] # Section 1
    indices:    0 2
    #  ID      eps    sigma
    1       2.344   1.234   5
    2       4.423   5.313   5
    3       1.573   6.321   5
    4       1.921   11.93   5

[bonds]
indices:    0 1
    2   4.234e-03   11.2
    6   -0.134545   5.7
"""

grammar = Grammar(
    r"""
    expr        = (entry / garbage)+
    entry       = section garbage indices (valueline / garbage)*
    section     = lpar word rpar

    indices     = ws? "indices:" values+
    garbage     = ((comment / hs)* newline?)*

    word        = ~"\w+"

    values      = number+
    valueline   = values newline?

    number      = hs? ~"[-.e\d]+" hs?

    lpar        = "["
    rpar        = "]"

    comment     = ~"#.+"
    ws          = ~"\s*"
    hs          = ~"[\t\ ]*"

    newline     = ~"[\r\n]"
    """
)

tree = grammar.parse(data)

class DataVisitor(NodeVisitor):
    def visit_number(self, node, visited_children):
        """ Returns integer and float values. """
        _, value, _ = visited_children
        try:
            number = int(value.text)
        except ValueError:
            number = float(value.text)
        return number

    def visit_section(self, node, visited_children):
        """ Returns the section as text. """
        _, section, _ = visited_children
        return section.text

    def visit_indices(self, node, visited_children):
        """ Returns the index numbers. """
        *_, values = visited_children
        return values[0]

    def visit_valueline(self, node, visited_children):
        """ Returns every value from one line. """
        values, _ = visited_children
        return values

    def visit_entry(self, node, visited_children):
        """ Returns one entry (section, indices, values). """
        section, _, indices, lst = visited_children
        values = [item[0] for item in lst if item[0]]

        return (section, {'indices': indices, 'values': values})

    def visit_expr(self, node, visited_children):
        """ Returns the whole structure as a dict. """
        return dict([item[0] for item in visited_children if item[0]])

    def visit_garbage(self, node, visited_children):
        """ You know what this does. """
        return None

    def generic_visit(self, node, visited_children):
        """ Returns the visited children (if any) or the node itself. """
        return visited_children or node

d = DataVisitor()
result = d.visit(tree)
print(result)

This yields

{
 'lj_pairs': {'indices': [0, 2], 'values': [[1, 2.344, 1.234, 5], [2, 4.423, 5.313, 5], [3, 1.573, 6.321, 5], [4, 1.921, 11.93, 5]]}, 
 'bonds': {'indices': [0, 1], 'values': [[2, 0.004234, 11.2], [6, -0.134545, 5.7]]}
}
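If you prefer the FFSections layout from your question (coefficients keyed by their leading integer ID), the result can be reshaped afterwards. A rough sketch, assuming the FFSections dataclass from your code is in scope:

# Reshape into the FFSections container from the question (assumes FFSections is defined/importable)
converted = {
    name: FFSections(
        use_idx=entry["indices"],
        coeffs={int(row[0]): [float(x) for x in row[1:]] for row in entry["values"]},
    )
    for name, entry in result.items()
}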


Explanation

Your original data file can be seen as a DSL, a domain-specific language. So we need a grammar that describes what your format looks like. A common approach here is to start by formulating small building blocks, e.g. whitespace or "words".


In parsimonious we have several options; one is to specify a regular expression (introduced by a ~):

ws          = ~"\s*"

Here, ws stands for \s*, i.e. zero or more whitespace characters.


Another possibility is to spell a part out literally, e.g.

lpar        = "["


The last (and most powerful) possibility is to combine these smaller parts into bigger ones, e.g.

section     = lpar word rpar

which translates to [word_characters_HERE123] or similar constructs.


On top of that, the usual alternation (/) and quantifiers apply, e.g. * (zero or more, greedy), + (one or more, greedy) and ? (zero or one, greedy), which can be put after any expression we come up with.
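To see these building blocks in isolation, here is a minimal sketch (a tiny demo grammar, not the full one above) that combines literals, regexes and the + quantifier, and simply prints the resulting parse tree:

from parsimonious.grammar import Grammar

# Tiny demo grammar: a bracketed section name followed by one or more numbers
mini = Grammar(r"""
    header  = section ws numbers
    section = lpar word rpar
    numbers = (number ws?)+
    word    = ~"\w+"
    number  = ~"[-.e\d]+"
    lpar    = "["
    rpar    = "]"
    ws      = ~"\s*"
""")

print(mini.parse("[bonds] 0 1"))  # prints the parse tree for this little input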

If everything works out and the grammar fits the data we have, the whole input gets parsed into a tree structure, an abstract syntax tree (AST). To do something useful with this structure (e.g. make a nice dict out of it), we need to feed it to a NodeVisitor class. This is the counterpart of the grammar we formed earlier: a method visit_* is called for every matching leaf. That is, the method visit_section(...) will be called on every section leaf with the appropriate visited_children.

Let's make this clearer. The function

    def visit_section(self, node, visited_children):
        """ Returns the section as text. """
        _, section, _ = visited_children
        return section.text

will be called for the section part of the grammar (section = lpar word rpar), so the leaf section has these three children. We are not interested in [ or ] but in the section text itself, so we do a bit of unpacking and return section.text.

We need to do this for every node/leaf we defined earlier. By default, the first definition (expr in our case) together with the corresponding visit_expr(...) will be the output of the NodeVisitor class; all the other nodes are children (grandchildren, great-grandchildren, and so on).
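To round this off, here is the same tiny demo grammar again (repeated so the snippet runs on its own), this time with a stripped-down visitor that only implements the visit_* methods for the leaves it cares about, plus generic_visit:

from parsimonious.grammar import Grammar
from parsimonious.nodes import NodeVisitor

# Same tiny demo grammar as in the earlier sketch
mini = Grammar(r"""
    header  = section ws numbers
    section = lpar word rpar
    numbers = (number ws?)+
    word    = ~"\w+"
    number  = ~"[-.e\d]+"
    lpar    = "["
    rpar    = "]"
    ws      = ~"\s*"
""")

class MiniVisitor(NodeVisitor):
    def visit_header(self, node, visited_children):
        # header = section ws numbers -> keep the section name and the numbers
        section, _, numbers = visited_children
        return section, numbers

    def visit_section(self, node, visited_children):
        # section = lpar word rpar -> only the text of the middle child matters
        _, word, _ = visited_children
        return word.text

    def visit_numbers(self, node, visited_children):
        # numbers = (number ws?)+ -> take the number out of every (number, ws?) pair
        return [pair[0] for pair in visited_children]

    def visit_number(self, node, visited_children):
        return float(node.text)

    def generic_visit(self, node, visited_children):
        return visited_children or node

print(MiniVisitor().visit(mini.parse("[bonds] 0 1")))  # ('bonds', [0.0, 1.0])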