Inserting from PubsubIO into BigQuery using Dataflow

Date: 2018-06-22 14:58:17

Tags: python-2.7 google-cloud-platform google-bigquery google-cloud-dataflow apache-beam-io

I am getting the error below when inserting messages from PubsubIO into BigQuery.

How can I insert records from Pub/Sub into BQ? Can a PCollection be converted to a list, or is there another way?


AttributeError: 'PCollection' object has no attribute 'split'

Here is my code:

def create_record(columns):
    # Note: record_ids here refers to the global PCollection, not a single
    # element -- calling .split() on it is what raises the AttributeError.
    col_value = record_ids.split('|')
    col_name = columns.split(",")
    schema_dict = {}
    for i in range(len(col_name)):
        schema_dict[col_name[i]] = col_value[i]
    return schema_dict

schema = 'tungsten_opcode:STRING,tungsten_seqno:INTEGER'
columns = "tungsten_opcode,tungsten_seqno"
lines = (p | 'Read PubSub' >> beam.io.ReadStringsFromPubSub(INPUT_TOPIC)
           | beam.WindowInto(window.FixedWindows(15)))
record_ids = lines | 'Split' >> (
    beam.FlatMap(split_fn).with_output_types(unicode))
records = record_ids | 'CreateRecords' >> beam.Map(create_record(columns))
records | 'BqInsert' >> beam.io.WriteToBigQuery(
    OUTPUT,
    schema=schema,
    create_disposition=beam.io.BigQueryDisposition.CREATE_IF_NEEDED,
    write_disposition=beam.io.BigQueryDisposition.WRITE_APPEND)

1 Answer:

Answer 0 (score: 2)

This needs to be done as a transform; you cannot access the data inside a PCollection directly.

Write a DoFn class that takes the schema as a side input, performs the split transform on each record, and builds a dict from the column names and values. For example:

class CreateRecord(beam.DoFn):
  def process(self, element, schema):
    cols = element.split(',')
    # Keep only the field name from each 'name:TYPE' pair in the schema string.
    header = map(lambda x: x.split(':')[0], schema.split(','))
    return [dict(zip(header, cols))]
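The core of `process` is just a split-and-zip. A plain-Python sketch of that per-element logic (no Beam needed; the sample record `'I,42'` is hypothetical, and the list comprehension stands in for Python 2's `map`):

```python
schema = 'tungsten_opcode:STRING,tungsten_seqno:INTEGER'
element = 'I,42'  # hypothetical comma-delimited record

cols = element.split(',')
# Strip the ':TYPE' suffix from each schema field to get the column names.
header = [field.split(':')[0] for field in schema.split(',')]
row = dict(zip(header, cols))
print(row)  # {'tungsten_opcode': 'I', 'tungsten_seqno': '42'}
```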

Apply the transform like this:

schema = 'tungsten_opcode:STRING,tungsten_seqno:INTEGER'
records = record_ids | 'CreateRecords' >> beam.ParDo(CreateRecord(), schema)
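One possible refinement, not part of the original answer: the DoFn above leaves every value as a string, while the schema declares `tungsten_seqno` as INTEGER. A small helper (names `typed_row` and `CASTS` are mine, and the sample record `'I,42'` is hypothetical) could coerce each value to its declared BigQuery type before the rows reach `WriteToBigQuery`:

```python
# Map BigQuery type names to Python constructors; extend as needed.
CASTS = {'STRING': str, 'INTEGER': int, 'FLOAT': float}

def typed_row(element, schema):
    """Split a comma-delimited record and cast each value per the schema."""
    fields = [f.split(':') for f in schema.split(',')]  # [[name, type], ...]
    values = element.split(',')
    return {name: CASTS.get(ftype, str)(value)
            for (name, ftype), value in zip(fields, values)}

schema = 'tungsten_opcode:STRING,tungsten_seqno:INTEGER'
print(typed_row('I,42', schema))  # {'tungsten_opcode': 'I', 'tungsten_seqno': 42}
```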