Google BigQuery incomplete query replies on odd attempts

Asked: 2013-07-19 17:17:58

Tags: python google-bigquery google-api-python-client

When querying BigQuery through the Python API with:

service.jobs().getQueryResults

we have found that the first attempt works fine: all expected results are included in the response. However, if the query is run a second time shortly after the first (roughly within 5 minutes), only a small subset of the results (in powers of 2) is returned almost instantly, with no errors.
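For reference, a minimal sketch of how such a call is typically issued with google-api-python-client (the `service` object comes from `discovery.build('bigquery', 'v2', ...)`; the wrapper function name and its parameters are assumptions for illustration, not part of the original code):

```python
def get_query_results(service, project_id, job_id,
                      max_results=None, page_token=None):
    """Issue a jobs().getQueryResults request (hypothetical helper)."""
    params = {'projectId': project_id, 'jobId': job_id}
    if max_results is not None:
        params['maxResults'] = max_results  # cap rows returned per response
    if page_token is not None:
        params['pageToken'] = page_token    # resume from a previous page
    return service.jobs().getQueryResults(**params).execute()
```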

See our complete code here: https://github.com/sean-schaefer/pandas/blob/master/pandas/io/gbq.py

Any ideas what could cause this?

1 Answer:

Answer 0 (score: 1)

It looks like the problem is that we return different default numbers of rows for query() and getQueryResults(). So depending on whether your query completed quickly (and you therefore didn't have to use getQueryResults()), you would get more or fewer rows.

I've filed a bug, and we should have a fix soon.

The workaround (and a good idea overall) is to set maxResults on both the query and the getQueryResults calls. If you want a large number of rows, you may want to page through the results using the returned page token.
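The paging pattern described above can be sketched as follows. Here `fetch_page` stands in for a real call to jobs().getQueryResults; its name and `(rows, next_page_token)` return shape are assumptions for this sketch, not an actual BigQuery API:

```python
def fetch_all_rows(fetch_page):
    """Accumulate rows across pages until no pageToken is returned.

    fetch_page(page_token) should return (rows, next_page_token); a real
    implementation would wrap jobs().getQueryResults(maxResults=...,
    pageToken=page_token).execute() and pull 'rows' and 'pageToken'
    out of the response dict.
    """
    rows = []
    page_token = None
    while True:
        page_rows, page_token = fetch_page(page_token)
        rows.extend(page_rows)
        if not page_token:  # no token means this was the last page
            break
    return rows
```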

Below is an example that reads one page of results from a completed query job. It will be included in the next release of bq.py:

class _JobTableReader(_TableReader):
  """A TableReader that reads from a completed job."""

  def __init__(self, local_apiclient, project_id, job_id):
    self.job_id = job_id
    self.project_id = project_id
    self._apiclient = local_apiclient

  def ReadSchemaAndRows(self, max_rows=None):
    """Read at most max_rows rows from a table and the schema.

    Args:
      max_rows: maximum number of rows to return.

    Raises:
      BigqueryInterfaceError: when bigquery returns something unexpected.

    Returns:
      A tuple where the first item is the list of fields and the
      second item a list of rows.
    """
    page_token = None
    rows = []
    schema = {}
    max_rows = max_rows if max_rows is not None else sys.maxint
    while len(rows) < max_rows:
      (more_rows, page_token, total_rows, current_schema) = self._ReadOnePage(
          max_rows=max_rows - len(rows),
          page_token=page_token)
      if not schema and current_schema:
        schema = current_schema.get('fields', {})

      max_rows = min(max_rows, total_rows)
      for row in more_rows:
        rows.append([entry.get('v', '') for entry in row.get('f', [])])
      if not page_token and len(rows) != max_rows:
        raise BigqueryInterfaceError(
            'PageToken missing for %r' % (self,))
      if not more_rows and len(rows) != max_rows:
        raise BigqueryInterfaceError(
            'Not enough rows returned by server for %r' % (self,))
    return (schema, rows)

  def _ReadOnePage(self, max_rows, page_token=None):
    data = self._apiclient.jobs().getQueryResults(
        maxResults=max_rows,
        pageToken=page_token,
        # Sets the timeout to 0 because we assume the table is already ready.
        timeoutMs=0,
        projectId=self.project_id,
        jobId=self.job_id).execute()
    if not data['jobComplete']:
      raise BigqueryError('Job %s is not done' % (self,))
    page_token = data.get('pageToken', None)
    total_rows = int(data['totalRows'])
    schema = data.get('schema', None)
    rows = data.get('rows', [])
    return (rows, page_token, total_rows, schema)