We use Google BigQuery via the Python API. How can I create a table (a new one, or overwrite an old one) from query results? I went through the query documentation, but I didn't find it helpful.
We want to emulate:
"SELECT ... INTO ..." from ANSI SQL.
Answer 0 (score: 16)
You can do this by specifying a destination table in the query. You need to use the Jobs.insert API rather than the Jobs.query call, you should specify writeDisposition=WRITE_APPEND, and you should fill out the destination table.
Here is what the configuration would look like if you were using the raw API. If you're using Python, the Python client should provide accessors for these same fields:
"configuration": {
  "query": {
    "query": "select count(*) from foo.bar",
    "destinationTable": {
      "projectId": "my_project",
      "datasetId": "my_dataset",
      "tableId": "my_table"
    },
    "createDisposition": "CREATE_IF_NEEDED",
    "writeDisposition": "WRITE_APPEND"
  }
}
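A minimal Python sketch of submitting that configuration through the raw Jobs.insert API. The project, dataset, and table names are placeholders, and the googleapiclient call is commented out because it requires credentials:

```python
# The raw-API configuration above, expressed as a Python dict that can be
# passed to Jobs.insert (e.g. via google-api-python-client). Project,
# dataset, and table names are placeholders.
body = {
    "configuration": {
        "query": {
            "query": "select count(*) from foo.bar",
            "destinationTable": {
                "projectId": "my_project",
                "datasetId": "my_dataset",
                "tableId": "my_table",
            },
            # Create the destination table if it does not exist yet.
            "createDisposition": "CREATE_IF_NEEDED",
            # Append the query results to whatever the table already holds.
            "writeDisposition": "WRITE_APPEND",
        }
    }
}

# With credentials configured, something like this would submit the job:
# from googleapiclient.discovery import build
# service = build("bigquery", "v2")
# service.jobs().insert(projectId="my_project", body=body).execute()
print(body["configuration"]["query"]["writeDisposition"])
```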
Answer 1 (score: 14)
The accepted answer is correct, but it does not provide Python code to perform the task. Here is an example, refactored from a small custom client class I just wrote. It does not handle exceptions, and the hard-coded query should be customized to do something more interesting than SELECT *...
from google.cloud import bigquery


class Client(object):
    def __init__(self, origin_project, origin_dataset, origin_table,
                 destination_dataset, destination_table):
        """
        A Client that performs a hardcoded SELECT and INSERTS the results in a
        user-specified location.

        All init args are strings. Note that the destination project is the
        default project from your Google Cloud configuration.
        """
        self.project = origin_project
        self.dataset = origin_dataset
        self.table = origin_table
        self.dest_dataset = destination_dataset
        self.dest_table_name = destination_table
        self.client = bigquery.Client()

    def run(self):
        query = ("SELECT * FROM `{project}.{dataset}.{table}`;".format(
            project=self.project, dataset=self.dataset, table=self.table))

        job_config = bigquery.QueryJobConfig()

        # Set configuration.query.destinationTable
        destination_dataset = self.client.dataset(self.dest_dataset)
        destination_table = destination_dataset.table(self.dest_table_name)
        job_config.destination = destination_table

        # Set configuration.query.createDisposition
        job_config.create_disposition = 'CREATE_IF_NEEDED'

        # Set configuration.query.writeDisposition
        job_config.write_disposition = 'WRITE_APPEND'

        # Start the query
        job = self.client.query(query, job_config=job_config)

        # Wait for the query to finish
        job.result()
Answer 2 (score: 0)
To create a table from query results in Google BigQuery, the steps below are illustrated, assuming you are using a Jupyter Notebook with Python 3:
Create a new dataset on BQ: my_dataset
from google.cloud import bigquery

# Create a BigQuery service object.
bigquery_client = bigquery.Client()
dataset_id = 'my_dataset'
# Create a DatasetReference using a chosen dataset ID.
dataset_ref = bigquery_client.dataset(dataset_id)
# Construct a full Dataset object to send to the API.
dataset = bigquery.Dataset(dataset_ref)
# Specify the geographic location where the new dataset will reside.
# Remember this should be the same location as the source dataset
# from which we are getting data to run the query.
dataset.location = 'US'
# Send the dataset to the API for creation.
# Raises google.api_core.exceptions.AlreadyExists if the Dataset
# already exists within the project.
dataset = bigquery_client.create_dataset(dataset)  # API request
print('Dataset {} created.'.format(dataset.dataset_id))
Run a query on BQ using Python:
There are two ways to do it:
I am using a public dataset here: bigquery-public-data:hacker_news, table id: comments, to run the query on.
DestinationTableName='table_id1'  # Enter the new table name you want to give
!bq query --allow_large_results --destination_table=project_id:my_dataset.$DestinationTableName 'SELECT * FROM [bigquery-public-data:hacker_news.comments]'
This query will allow large query results if required.
DestinationTableName='table_id2'  # Enter the new table name you want to give
!bq query --destination_table=project_id:my_dataset.$DestinationTableName 'SELECT * FROM [bigquery-public-data:hacker_news.comments] LIMIT 100'
This will work for queries whose results do not exceed the limits mentioned in the Google BQ documentation.