Dataflow fails to get a reference to the BigQuery table when using too many cores or more than one machine

Asked: 2018-11-25 15:15:25

Tags: python google-bigquery google-cloud-dataflow apache-beam

My streaming Dataflow pipeline is supposed to read analytics hits from Pub/Sub and write them to BigQuery. When I use too many machines, or machines that are too big, it raises a rate-limit error while fetching a reference to the table (more precisely, while executing _get_or_create_table).

The rate limit being hit appears to be one of these: 100 API requests per second per user, or 300 concurrent API requests per user.

It doesn't block the pipeline (rows do get written past a certain point), but I have the feeling that it blocks some threads and keeps me from taking full advantage of parallelization. Switching from one machine with 4 CPUs to 5 machines with 8 CPUs each did not improve latency (in fact, it got worse).

How can I avoid this error while having a large number of machines write to BQ?

Here are the logs from the Dataflow monitoring interface. They appear periodically after the pipeline starts:

...
File "/usr/local/lib/python2.7/dist-packages/apache_beam/io/gcp/bigquery.py", line 1087, in get_or_create_table
    found_table = self._get_table(project_id, dataset_id, table_id)
  File "/usr/local/lib/python2.7/dist-packages/apache_beam/utils/retry.py", line 184, in wrapper
    return fun(*args, **kwargs)
  File "/usr/local/lib/python2.7/dist-packages/apache_beam/io/gcp/bigquery.py", line 925, in _get_table
    response = self.client.tables.Get(request)
  File "/usr/local/lib/python2.7/dist-packages/apache_beam/io/gcp/internal/clients/bigquery/bigquery_v2_client.py", line 611, in Get
    config, request, global_params=global_params)
  File "/usr/local/lib/python2.7/dist-packages/apitools/base/py/base_api.py", line 722, in _RunMethod
    return self.ProcessHttpResponse(method_config, http_response, request)
  File "/usr/local/lib/python2.7/dist-packages/apitools/base/py/base_api.py", line 728, in ProcessHttpResponse
    self.__ProcessHttpResponse(method_config, http_response, request))
  File "/usr/local/lib/python2.7/dist-packages/apitools/base/py/base_api.py", line 599, in __ProcessHttpResponse
    http_response, method_config=method_config, request=request)

HttpForbiddenError: HttpError accessing <https://www.googleapis.com/bigquery/v2/projects/<project_id>/datasets/<dataset_id>/tables/<table_id>?alt=json>: response: <{'status': '403', 'content-length': '577', 'x-xss-protection': '1; mode=block', 'x-content-type-options': 'nosniff', 'transfer-encoding': 'chunked', 'expires': 'Sun, 25 Nov 2018 14:36:24 GMT', 'vary': 'Origin, X-Origin', 'server': 'GSE', '-content-encoding': 'gzip', 'cache-control': 'private, max-age=0', 'date': 'Sun, 25 Nov 2018 14:36:24 GMT', 'x-frame-options': 'SAMEORIGIN', 'content-type': 'application/json; charset=UTF-8'}>, content <{
 "error": {
  "errors": [
   {
    "domain": "global",
    "reason": "rateLimitExceeded",
    "message": "Exceeded rate limits: Your user_method exceeded quota for api requests per user per method. For more information, see https://cloud.google.com/bigquery/troubleshooting-errors",
    "locationType": "other",
    "location": "helix_api.method_request"
   }
  ],
  "code": 403,
  "message": "Exceeded rate limits: Your user_method exceeded quota for api requests per user per method. For more information, see https://cloud.google.com/bigquery/troubleshooting-errors"

Here is the pipeline's code. I stripped out almost everything from it to check whether this still happens:

import apache_beam as beam

# options and args are built elsewhere; see the sketch after this snippet.
p = beam.Pipeline(options=options)

msgs = p | 'Read' >> beam.io.gcp.pubsub.ReadFromPubSub(
    topic='projects/{project}/topics/{topic}'.format(
        project=args.project, topic=args.hits_topic),
    id_label='hit_id',
    timestamp_attribute='time')

lines = msgs | beam.Map(lambda x: {'content': x})

(lines
    | 'WriteToBQ' >> beam.io.gcp.bigquery.WriteToBigQuery(args.table,
                                                          dataset=args.dataset,
                                                          project=args.project))

1 Answer:

Answer 0 (score: 1):

Try upgrading to the latest apache_beam library (2.12.0 at the time of writing). https://github.com/apache/beam/commit/932e802279a2daa0ff7797a8fc81e952a4e4f252 introduced caching of the table reference, which avoids triggering the rate limit that you can run into with older versions of the library.
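
Since Dataflow workers run whatever SDK version is packaged with the job, the upgrade has to be reflected in the job's dependencies. Below is a minimal sketch of pinning it through a setup.py shipped alongside the pipeline; the package name is a placeholder, and apache-beam[gcp] is the Beam SDK distribution with the GCP extras:

# setup.py -- a minimal sketch; name and version are placeholders.
import setuptools

setuptools.setup(
    name='my-dataflow-pipeline',
    version='0.1.0',
    packages=setuptools.find_packages(),
    install_requires=[
        # The answer above points at 2.12.0, so pin at least that version
        # to pick up the table-caching commit it references.
        'apache-beam[gcp]>=2.12.0',
    ],
)

The job can then be launched with the --setup_file pipeline option pointing at this file, so that the workers install the same pinned SDK.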