apache_beam[gcp] - side inputs for ParDo

Asked: 2018-07-16 20:14:29

Tags: google-cloud-platform google-cloud-dataflow apache-beam app-engine-flexible

I can't figure out the correct way to add a side input to a ParDo function using apache_beam[gcp] version 2.4.0.

My pipeline is:

pipeline
     | "Load" >> ReadFromText("query.txt") 
     | "Count Words" >> CountWordsTransform()

class CountWordsTransform(beam.PTransform):
    def expand(self, p_collection):
        anotherPipleline = beam.Pipeline(runner="DataflowRunner", argv=[
            "--staging_location", ("%s/staging" % gcs_path),
            "--temp_location", ("%s/temp" % gcs_path),
            "--output", ("%s/output" % gcs_path),
            "--setup_file", "./setup.py"
        ])
        value2 = anotherPipleline | 'create2' >> Create([("a", 1), ("b", 2), ("c", 3)])
        return (p_collection
                | "Split" >> (beam.ParDo(FindWords(), beam.pvalue.AsDict(value2))))

The FindWords() class is defined as:

class FindWords(beam.DoFn):
    def process(self, element, values):
        import re as regex
        return regex.findall(r"[A-Za-z\']+", element)

I get the following error:

'NoneType' object has no attribute 'parts'

1 answer:

Answer 0 (score: 2):

You are creating a separate pipeline inside your composite transform to build the side input. This will cause problems, because collections should not be shared across different pipelines.

Instead, you can create the side input in the same pipeline and pass it to the transform as a parameter.

For example:

values = pipeline | "Get pcol for side input" >> beam.Create([("a", 1), ("b", 2), ("c", 3)])

(pipeline
    | "Load" >> beam.io.ReadFromText('gs://bucket/words.txt')
    | "Count Words" >> CountWordsTransform(values))

class CountWordsTransform(beam.PTransform):

    def __init__(self, values):
        self.values = values

    def expand(self, p_collection):
        return p_collection | "Split" >> (beam.ParDo(FindWords(), beam.pvalue.AsDict(self.values)))

Tested this on 2.4.0.
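
For completeness, here is a minimal, self-contained sketch of the same idea running on the default DirectRunner. The in-memory input created with beam.Create and the way FindWords looks words up in the side-input dict are illustrative assumptions, not part of the original question:

import re
import apache_beam as beam

class FindWords(beam.DoFn):
    def process(self, element, values):
        # 'values' is the Python dict materialized from the AsDict side input,
        # e.g. {"a": 1, "b": 2, "c": 3}.
        for word in re.findall(r"[A-Za-z']+", element):
            if word in values:  # illustrative use of the side input
                yield (word, values[word])

class CountWordsTransform(beam.PTransform):
    def __init__(self, values):
        self.values = values

    def expand(self, p_collection):
        return p_collection | "Split" >> beam.ParDo(
            FindWords(), beam.pvalue.AsDict(self.values))

with beam.Pipeline() as pipeline:  # DirectRunner by default
    values = pipeline | "Side input" >> beam.Create([("a", 1), ("b", 2), ("c", 3)])
    (pipeline
     | "Load" >> beam.Create(["a b c d"])  # stand-in for ReadFromText
     | "Count Words" >> CountWordsTransform(values)
     | "Print" >> beam.Map(print))

The key point is that both the main input and the side input are root PCollections of the same pipeline object, so Dataflow can materialize the side input for the ParDo instead of failing on a collection that belongs to a different pipeline.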