Reading pickles from another pipeline in Beam?

Time: 2018-01-15 22:44:18

Tags: google-cloud-dataflow apache-beam apache-beam-io

I'm running batch pipelines in Google Cloud Dataflow. In one pipeline I need to read objects that another pipeline previously wrote. Pickle / dill seemed to be the easiest way to serialize the objects.

The writing works fine, producing a number of files, each containing one pickled object. When I download a file manually, I can unpickle it. Writing code: beam.io.WriteToText('gs://{}', coder=coders.DillCoder())

But every read breaks with one of the errors below. Reading code: beam.io.ReadFromText('gs://{}*', coder=coders.DillCoder())
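
For context, a minimal sketch of how those two snippets sit in pipelines (not from the original post; the bucket path and the sample data are placeholders):

import apache_beam as beam
from apache_beam.coders import coders

# Writing pipeline: one pickled object per record (works fine)
with beam.Pipeline() as p:
    (p
     | 'make_objects' >> beam.Create([{'a': 1}, {'b': 2}])  # hypothetical sample data
     | 'write' >> beam.io.WriteToText('gs://my-bucket/pickles/out',
                                      coder=coders.DillCoder()))

# Reading pipeline: this is the part that breaks
with beam.Pipeline() as p:
    (p
     | 'read' >> beam.io.ReadFromText('gs://my-bucket/pickles/out*',
                                      coder=coders.DillCoder())
     | 'use' >> beam.Map(lambda obj: obj))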

Either...

  File "/usr/local/lib/python2.7/dist-packages/dill/dill.py", line 266, in load
    obj = pik.load()
  File "/usr/lib/python2.7/pickle.py", line 858, in load
    dispatch[key](self)
KeyError: '\x90'

...or...

  File "/usr/local/lib/python2.7/dist-packages/dill/dill.py", line 423, in find_class
    return StockUnpickler.find_class(self, module, name)
  File "/usr/lib/python2.7/pickle.py", line 1124, in find_class
    __import__(module)
ImportError: No module named measur

(the object's class does sit on a path containing measure, though I'm not sure why the last character gets dropped)

I've tried using the default coder, BytesCoder, and pickling & unpickling as a custom task in the pipeline.

My working hypothesis is that the reader splits files line by line, and so treats a single pickle (which contains newlines) as multiple objects. If so, is there a way to avoid that?
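
That hypothesis is plausible, since a pickled object can indeed contain raw newline bytes; a quick illustration (not from the original post):

import pickle

raw = pickle.dumps(10, protocol=2)   # '\x80\x02K\n.'
# BININT1 ('K') is followed by the byte 0x0a, i.e. a literal newline
print('\n' in raw)                    # True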

I could try building a reader myself, but I'm hesitant since this seems like a well-solved problem (e.g. Beam must already have a format to move objects from one pipeline stage to another).

Tangentially related: How to read blob (pickle) files from GCS in a Google Cloud DataFlow job?

Thanks!

2 answers:

Answer 0: (score: 1)

ReadFromText is designed to read newline-delimited records from text files, so it is not a good fit for your use case. Implementing FileBasedSource is not a good solution either, since it is designed for reading large files that contain many records (and usually splits those files into shards for parallel processing). So in your case, the current best solution for the Python SDK is to implement the source yourself. This can be as simple as a ParDo that reads the files and produces a PCollection of records. If your ParDo produces a large number of records, consider adding an apache_beam.transforms.util.Reshuffle step after it, which lets runners parallelize the subsequent steps better. For the Java SDK, FileIO already provides transforms that make this easier.
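
A rough sketch of that suggestion (not from the original answer; untested, and the bucket path and step names are placeholders):

import pickle

import apache_beam as beam
from apache_beam.io.filesystems import FileSystems
from apache_beam.transforms.util import Reshuffle


class ReadPickle(beam.DoFn):
    """Opens each matched file and emits the single pickled object inside."""
    def process(self, file_metadata):
        with FileSystems.open(file_metadata.path) as f:
            yield pickle.load(f)


with beam.Pipeline() as p:
    objects = (
        p
        | 'list_files' >> beam.Create(
            FileSystems.match(['gs://my-bucket/pickles/*'])[0].metadata_list)
        | 'read_pickles' >> beam.ParDo(ReadPickle())
        # helps the runner parallelize the steps that come next
        | 'reshuffle' >> Reshuffle())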

Answer 1: (score: 0)

Encoding as string_escape escapes the newlines, so the only newlines Beam sees are those between pickles:

from apache_beam.coders import coder_impl
from apache_beam.coders.coders import DillCoder, maybe_dill_dumps, maybe_dill_loads


class DillMultiCoder(DillCoder):
    """
    Coder that allows multi-line pickles to be read.

    After an object is pickled, the bytes are encoded with `string_escape`
    (`unicode_escape` on Python 3), so newline characters (`\n`) never appear
    in the encoded string.

    Previously, the presence of newline characters confused the Dataflow
    reader, as it couldn't discriminate between a new object and a newline
    within a pickle string.
    """

    def _create_impl(self):
        return coder_impl.CallbackCoderImpl(
            maybe_dill_multi_dumps, maybe_dill_multi_loads)


def maybe_dill_multi_dumps(o):
    # in Py3 this needs to be `unicode_escape`
    return maybe_dill_dumps(o).encode('string_escape')


def maybe_dill_multi_loads(o):
    # in Py3 this needs to be `unicode_escape`
    return maybe_dill_loads(o.decode('string_escape'))

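A quick Python 2 illustration of why the escaping works (not part of the original answer):

import pickle

raw = pickle.dumps(10)                 # protocol 0 pickle: 'I10\n.'
escaped = raw.encode('string_escape')  # 'I10\\n.'
print('\n' in raw)       # True  - a raw newline inside the pickle
print('\n' in escaped)   # False - now just the two characters '\' and 'n'
print(escaped.decode('string_escape') == raw)  # True - round-trips cleanly
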
For large pickles I also needed to set the buffer size to 8MB - with the previous buffer size (8kB), a 120MB file churned through 2 days of CPU time:

from apache_beam.io.filesystem import CompressionTypes
from apache_beam.io.textio import ReadFromText, _TextSource


class ReadFromTextPickle(ReadFromText):
    """
    Same as ReadFromText, but with a really big buffer. With the standard 8KB
    buffer, large files get read in a loop and never finish.

    Also uses DillMultiCoder by default.
    """

    def __init__(
            self,
            file_pattern=None,
            min_bundle_size=0,
            compression_type=CompressionTypes.AUTO,
            strip_trailing_newlines=True,
            coder=DillMultiCoder(),
            validate=True,
            skip_header_lines=0,
            **kwargs):
        # needs commenting out, not sure why
        # super(ReadFromTextPickle, self).__init__(**kwargs)
        self._source = _TextSource(
            file_pattern,
            min_bundle_size,
            compression_type,
            strip_trailing_newlines=strip_trailing_newlines,
            coder=coder,
            validate=validate,
            skip_header_lines=skip_header_lines,
            buffer_size=8000000)
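
Usage is then the same as the stock transform; a hedged sketch (the bucket path is a placeholder):

pickles = pipeline | 'read' >> ReadFromTextPickle('gs://my-bucket/pickles/*')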

Another approach would be to implement a PickleFileSource inheriting from FileBasedSource and call pickle.load on the file - each call yields a new object. But there's a bunch of complication around offset_range_tracker that looked like more lift than strictly necessary.
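
For completeness, a rough sketch of that alternative, sidestepping the range-tracker complications by marking the source as unsplittable (untested; the class and pattern are made up for illustration):

import pickle

from apache_beam.io import Read, filebasedsource


class PickleFileSource(filebasedsource.FileBasedSource):
    """Reads whole files and yields one record per pickle found in each file."""

    def __init__(self, file_pattern):
        # splittable=False: each file is read in one go, so we never have to
        # reason about byte offsets inside a pickle stream
        super(PickleFileSource, self).__init__(file_pattern, splittable=False)

    def read_records(self, file_name, offset_range_tracker):
        with self.open_file(file_name) as f:
            while True:
                try:
                    yield pickle.load(f)
                except EOFError:
                    return


# pickles = pipeline | 'read' >> Read(PickleFileSource('gs://my-bucket/pickles/*'))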