How to read a BigQuery table from Python pipeline code in GCP Dataflow

Time: 2018-01-22 16:28:09

Tags: python google-cloud-dataflow gcp

Can someone share the syntax for reading/writing a BigQuery table in a GCP Dataflow pipeline written in Python?

2 answers:

Answer 0 (score: 4)

Running on Dataflow

First, construct a Pipeline with the following options so that it runs on GCP Dataflow:

import apache_beam as beam

options = {'project': <project>,
           'runner': 'DataflowRunner',
           'region': <region>,
           'setup_file': <setup.py file>}
pipeline_options = beam.pipeline.PipelineOptions(flags=[], **options)
pipeline = beam.Pipeline(options=pipeline_options)

Reading from BigQuery

Define a BigQuerySource with your query and use beam.io.Read to read the data from BQ:

BQ_source = beam.io.BigQuerySource(query = <query>)
BQ_data = pipeline | beam.io.Read(BQ_source)

Writing to BigQuery

There are two options for writing to BigQuery:

  • Using a BigQuerySink with beam.io.Write

    BQ_sink = beam.io.BigQuerySink(<table>, dataset=<dataset>, project=<project>)
    BQ_data | beam.io.Write(BQ_sink)
    
  • Using beam.io.WriteToBigQuery

    BQ_data | beam.io.WriteToBigQuery(<table>, dataset=<dataset>, project=<project>)
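Putting the snippets above together, a minimal end-to-end sketch might look like the following. The function and argument names here are illustrative, not from the answer; actually running it requires GCP credentials, a Dataflow-enabled project, and placeholder values filled in, so the apache_beam import is deferred into the function to keep the module importable without Beam installed:

```python
def run(project, region, query, output_table):
    """Sketch: read rows from BigQuery with a query and write them back
    to another table, running on the Dataflow service."""
    import apache_beam as beam  # deferred so the module loads without Beam

    options = beam.pipeline.PipelineOptions(
        flags=[],
        project=project,
        runner='DataflowRunner',
        region=region)

    # The `with` block runs the pipeline and waits for it to finish.
    with beam.Pipeline(options=options) as p:
        (p
         | 'ReadFromBQ' >> beam.io.Read(beam.io.BigQuerySource(query=query))
         | 'WriteToBQ' >> beam.io.WriteToBigQuery(output_table))


if __name__ == '__main__':
    # Hypothetical values -- replace with your own project/region/table.
    run('<project>', '<region>', 'SELECT 1 AS x', '<project>:<dataset>.<table>')
```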
    

Answer 1 (score: 0)

Reading from BigQuery

rows = (p | 'ReadFromBQ' >> beam.io.Read(beam.io.BigQuerySource(query=QUERY, use_standard_sql=True)))

Writing to BigQuery

rows | 'writeToBQ' >> beam.io.Write(
    beam.io.BigQuerySink(
        '{}:{}.{}'.format(PROJECT, BQ_DATASET_ID, BQ_TEST),
        schema='CONVERSATION:STRING, LEAD_ID:INTEGER',
        create_disposition=beam.io.BigQueryDisposition.CREATE_IF_NEEDED,
        write_disposition=beam.io.BigQueryDisposition.WRITE_TRUNCATE))
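The sink above uses BigQuery's fully qualified `project:dataset.table` naming and a comma-separated `NAME:TYPE` schema string. A small pure-Python sketch of composing both (helper names and values are hypothetical, not part of the Beam API):

```python
def bq_table_spec(project, dataset, table):
    """Compose the fully qualified 'project:dataset.table' name
    that BigQuerySink expects (hypothetical helper)."""
    return '{}:{}.{}'.format(project, dataset, table)


def bq_schema(fields):
    """Compose a comma-separated 'NAME:TYPE' schema string from
    (name, type) pairs (hypothetical helper)."""
    return ', '.join('{}:{}'.format(name, ftype) for name, ftype in fields)


spec = bq_table_spec('my-project', 'my_dataset', 'my_table')
schema = bq_schema([('CONVERSATION', 'STRING'), ('LEAD_ID', 'INTEGER')])
```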