Export nested BigQuery data to Cloud Storage using Python

Date: 2019-03-27 09:53:44

Tags: python google-bigquery

I am trying to export BigQuery data to Cloud Storage, but I get the error "400 Operation cannot be performed on a nested schema. Field: event_params".

Here is my code:

import os
from google.cloud import bigquery

# Credentials must be set before the client is created.
os.environ["GOOGLE_APPLICATION_CREDENTIALS"] = "/Users/Nitin/Desktop/big_query_test/soy-serty-897-ed73.json"
client = bigquery.Client()
bucket_name = "soy-serty-897.appspot.com"
project = "soy-serty-897"
dataset_id = "analytics_157738"
table_id = "events_20190326"

destination_uri = 'gs://{}/{}'.format(bucket_name, 'basket.csv')
dataset_ref = client.dataset(dataset_id, project=project)
table_ref = dataset_ref.table(table_id)

extract_job = client.extract_table(
    table_ref,
    destination_uri,
    # Location must match that of the source table.
    location='US')  # API request
extract_job.result()  # Waits for job to complete.

print('Exported {}:{}.{} to {}'.format(
    project, dataset_id, table_id, destination_uri))
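
For reference, the 400 error comes from RECORD (nested) columns such as event_params, which CSV output cannot represent. A minimal sketch to list the nested fields, reusing the client and table_ref above:

table = client.get_table(table_ref)  # API request
for field in table.schema:
    if field.field_type == 'RECORD':
        print('Nested field:', field.name)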

2 Answers:

Answer 0 (score: 0)

I can't test it right now, but this may work:

from google.cloud import bigquery as bq

ejc = bq.ExtractJobConfig()
ejc.destination_format = 'NEWLINE_DELIMITED_JSON'

# client, table_ref and destination_uri are as in the question;
# a .json extension suits the output better than basket.csv.
extract_job = client.extract_table(
    table_ref,
    destination_uri,
    # Location must match that of the source table.
    location='US',
    job_config=ejc)  # API request
extract_job.result()  # Waits for the job to complete.

The idea is to use JSON instead of CSV so that nested data is supported.
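
A note on consuming the output: each line of a NEWLINE_DELIMITED_JSON export is one row, with nested fields rendered as JSON objects or arrays. A minimal sketch, assuming a local copy of the exported file named basket.json (hypothetical name):

import json

# Each line is one row; nested fields such as event_params
# come through as JSON arrays/objects.
with open('basket.json') as f:  # hypothetical local copy of the export
    for line in f:
        row = json.loads(line)
        print(row.get('event_params'))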

Answer 1 (score: 0)

The BigQuery export limitations mention that nested and repeated data is not supported for CSV exports, so try exporting to Avro or JSON instead:

from google.cloud import bigquery

client = bigquery.Client()
bucket_name = 'your_bucket'
project = 'bigquery-public-data'
dataset_id = 'samples'
table_id = 'shakespeare'

destination_uri = 'gs://{}/{}'.format(bucket_name, '<your_file>')
dataset_ref = client.dataset(dataset_id, project=project)
table_ref = dataset_ref.table(table_id)

configuration = bigquery.job.ExtractJobConfig()
configuration.destination_format = 'NEWLINE_DELIMITED_JSON'
# For Avro instead:
# configuration.destination_format = 'AVRO'

extract_job = client.extract_table(
    table_ref,
    destination_uri,
    job_config=configuration,
    location='US')
extract_job.result()  # Waits for the job to complete.
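
Two more ExtractJobConfig settings can matter here: exports larger than 1 GB must use a wildcard destination URI so BigQuery can shard the output, and JSON (or CSV) output can be gzip-compressed. A minimal sketch building on the snippet above:

configuration = bigquery.job.ExtractJobConfig()
configuration.destination_format = 'NEWLINE_DELIMITED_JSON'
configuration.compression = 'GZIP'  # supported for JSON and CSV, not Avro

# BigQuery replaces the * with a zero-padded shard number;
# a wildcard is required for exports larger than 1 GB.
sharded_uri = 'gs://{}/export-*.json.gz'.format(bucket_name)

extract_job = client.extract_table(
    table_ref,
    sharded_uri,
    job_config=configuration,
    location='US')
extract_job.result()  # Waits for the job to complete.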

Hope it helps.