How do I merge two PCollections (of varying size/data) on a shared "key" (street) via side input?

Asked: 2019-04-02 02:37:29

Tags: python merge google-cloud-dataflow pipeline apache-beam

I have two PCollections: one that pulls information from Pub/Sub, and one that pulls data from a CSV file. After various transforms in each pipeline, I would like to merge the two on the common key "STREET" that they share. I am passing the second PCollection in as a side input. However, I get an error when trying to run.

I first tried to use CoGroupByKey, but I kept getting errors about differences in the data types within the PCollections. I tried restructuring the outputs, and setting the PCollection's attributes via __setattr__ to force the types to be equal, but it reported "mixed values" regardless. After further research, it seemed better to use a side input, especially given the difference in data size between the elements. Even with the side input, though, I still can't get past the current error:

```
from_runner_api raise ValueError('No producer for %s' % id)
ValueError: No producer for ref_PCollection_PCollection_6
```
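(As an aside, CoGroupByKey in the Python SDK expects each input to be a PCollection of (key, value) tuples with a consistent key type; "mixed values" errors often come from keys that differ in type or formatting between the two collections. A minimal, hypothetical sketch of a key-normalization step that could be applied with beam.Map before grouping — the helper name is my own:)

```python
def normalize_key(record):
    # Force a consistent key format (stripped, uppercase string) so the
    # street names from both sources compare equal when grouped.
    street, value = record
    return (street.strip().upper(), value)

print(normalize_key(('Western Ave ', {'segment_id': '1'})))
print(normalize_key(('WESTERN AVE', 6)))
```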

My application logic is as follows:

```python
import apache_beam as beam
from apache_beam.pvalue import AsDict
from apache_beam.transforms.combiners import Count

def merge_accidents(element, pcoll):
    print(element)
    print(pcoll)
    # ...some code that will append to the existing data...

accident_pl = beam.Pipeline()
accident_data = (accident_pl
                 | 'Read' >> beam.io.ReadFromText('/modified_Excel_Crashes_Chicago.csv')
                 | 'Map Accidents' >> beam.ParDo(AccidentstoDict())
                 | 'Count Accidents' >> Count.PerKey())

chi_traf_pl = beam.Pipeline(options=pipeline_options)
chi_traffic = (chi_traf_pl
               | 'ReadPubSub' >> beam.io.ReadFromPubSub(subscription=subscription_name, with_attributes=True)
               | 'GeoEnrich&Trim' >> beam.Map(loc_trim_enhance)
               | 'TimeDelayEnrich' >> beam.Map(timedelay)
               | 'TrafficRatingEnrich' >> beam.Map(traffic_rating)
               | 'MergeAccidents' >> beam.Map(merge_accidents, pcoll=AsDict(accident_data))
               | 'Temp Write' >> beam.io.WriteToText('testtime', file_name_suffix='.txt'))

accident_pl.run()
chi_result = chi_traf_pl.run()
chi_result.wait_until_finish()
```

**PColl 1:**

```
[{'segment_id': '1', 'street': 'Western Ave', 'direction': 'EB', 'length': '0.5', 'cur_traffic': '24', 'county': 'Cook County', 'neighborhood': 'West Elsdon', 'zip_code': '60629', 'evnt_timestamp': '2019-04-01 20:50:20.0', 'traffic_rating': 'Heavy', 'time_delay': '0.15'}]
```

**PColl 2:**

```
('MILWAUKEE AVE', 1)
('CENTRAL AVE', 2)
('WESTERN AVE', 6)
```

**Expected:**

```
[{'segment_id': '1', 'street': 'Western Ave', 'direction': 'EB', 'length': '0.5', 'cur_traffic': '24', 'county': 'Cook County', 'neighborhood': 'West Elsdon', 'zip_code': '60629', 'evnt_timestamp': '2019-04-01 20:50:20.0', 'traffic_rating': 'Heavy', 'time_delay': '0.15', 'accident_count': '6'}]
```

**Actual Results:**

```
from_runner_api raise ValueError('No producer for %s' % id)
ValueError: No producer for ref_PCollection_PCollection_6
```
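Note the key-case mismatch between the two samples: the streaming records carry 'Western Ave' while the accident counts are keyed by 'WESTERN AVE'. A hedged sketch of what the merge step might look like once the side input arrives as a dict (the uppercase lookup and the default of 0 are my assumptions, based on the expected output above):

```python
def merge_accidents(element, accidents):
    # 'element' is one enriched traffic record; 'accidents' is the
    # side-input dict of {STREET: count}. Uppercase the street name to
    # match the side input's keys, defaulting to 0 when absent.
    merged = dict(element)
    merged['accident_count'] = str(accidents.get(element['street'].upper(), 0))
    return merged

counts = {'MILWAUKEE AVE': 1, 'CENTRAL AVE': 2, 'WESTERN AVE': 6}
record = {'segment_id': '1', 'street': 'Western Ave'}
print(merge_accidents(record, counts))  # adds 'accident_count': '6'
```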

1 Answer:

Answer 0 (score: 0)

So I figured out the problem. After digging through pipeline.py and the unittest sources for side inputs, I realized there is a check against the Pipeline object that the PCollection was created from.

I'm new to this, so I originally thought you needed to create two separate Pipeline objects (streaming vs. batch) so that I could pass different options to each, i.e. streaming: True. That said, this turns out not to be necessary.

Once I merged them into a single Pipeline object as shown below, the error went away and I was able to pass the side input into the function:

```python
import apache_beam as beam
from apache_beam import pvalue

pipeline = beam.Pipeline(options=pipeline_options)
accident_data = (pipeline
                 | 'Read' >> beam.io.ReadFromText('modified_Excel_Crashes_Chicago.csv')
                 | 'Map Accidents' >> beam.ParDo(AccidentstoDict())
                 | 'Count Accidents' >> Count.PerKey())

chi_traffic = (pipeline
               | 'ReadPubSub' >> beam.io.ReadFromPubSub(subscription=subscription_name, with_attributes=True)
               | 'GeoEnrich&Trim' >> beam.Map(loc_trim_enhance)
               | 'TimeDelayEnrich' >> beam.Map(timedelay)
               | 'TrafficRatingEnrich' >> beam.Map(traffic_rating)
               | 'MergeAccidents' >> beam.Map(merge_accidents, pcoll=pvalue.AsDict(accident_data))
               | 'Temp Write' >> beam.io.WriteToText('testtime',
                                                     file_name_suffix='.txt'))

chi_result = pipeline.run()
chi_result.wait_until_finish()
```