Google BigQuery - Rate limit exceeded

Date: 2019-04-25 07:29:00

Tags: google-bigquery

When trying to insert data into Google BigQuery, I get the following error:

table.write: Rate limit exceeded: Too many table update operations for this table. For more information, see https://cloud.google.com/bigquery/troubleshooting-errors (error code: rateLimitExceeded)

According to the documentation, I may be exceeding one of several documented limits.

How can I tell which limit my application is exceeding?

I have already explored other solutions on the web, but none of them worked.

4 Answers:

Answer 0: (score: 1)

One thing you can check is the Quotas page (Navigation menu -> IAM & Admin -> Quotas), where under Service you can select just the BigQuery API to see whether any BQ API quota has been reached. If none has, you have most likely hit the "Daily destination table update limit - 1,000 updates per table per day".

Answer 1: (score: 0)

You are hitting the table update limit. This means you are submitting many operations that modify table storage (inserts, updates, or deletes). Keep in mind that this also includes load jobs, DML statements, and queries with a destination table. Since the quota is replenished periodically, you will have to wait a few minutes before retrying, but keep an eye on the table update quota so you don't get this error again.
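
If the failing operation can simply be retried after the quota replenishes, a minimal retry sketch along these lines may help. It assumes Node.js with the @google-cloud/bigquery client; the wait times are arbitrary, and the shape of the error payload is an assumption based on the error message above:

const {BigQuery} = require('@google-cloud/bigquery');
const bigquery = new BigQuery();

const sleep = (ms) => new Promise((resolve) => setTimeout(resolve, ms));

async function runWithRetry(sql, maxAttempts = 5) {
  for (let attempt = 1; attempt <= maxAttempts; attempt += 1) {
    try {
      const [job] = await bigquery.createQueryJob({query: sql});
      const [rows] = await job.getQueryResults();
      return rows;
    } catch (err) {
      // The error in this question reports reason 'rateLimitExceeded';
      // only retry in that case, and give up after maxAttempts.
      const rateLimited = (err.errors || []).some((e) => e.reason === 'rateLimitExceeded');
      if (!rateLimited || attempt === maxAttempts) throw err;
      await sleep(attempt * 60 * 1000); // wait 1, 2, 3... minutes between attempts
    }
  }
}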

If you are inserting rows across many separate operations rather than a few, consider using Streaming Inserts instead, as sketched below.
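
A minimal streaming-insert sketch with the same client; my_dataset, my_table, and the row fields are placeholders:

const {BigQuery} = require('@google-cloud/bigquery');
const bigquery = new BigQuery();

async function streamRows() {
  // Each call can carry many rows; batching rows per call keeps the
  // request count low.
  const rows = [
    {bucket_location: 'ASIA-EAST1'},
    {bucket_location: 'ASIA-NORTHEAST2'},
  ];
  // insert() goes through the streaming API (tabledata.insertAll), which
  // has its own quota and is not counted as a table update operation.
  await bigquery.dataset('my_dataset').table('my_table').insert(rows);
}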

Answer 2: (score: 0)

Let me reproduce the error with a real case I got from a teammate:

# create the table
CREATE TABLE temp.bucket_locations
AS 
SELECT 'ASIA-EAST1' bucket_location
UNION ALL SELECT 'ASIA-NORTHEAST2' bucket_location;

# update several times
UPDATE temp.bucket_locations
 SET bucket_location = "US"
 WHERE UPPER(bucket_location) LIKE "US%";
UPDATE temp.bucket_locations
 SET bucket_location = "TW"
 WHERE UPPER(bucket_location) LIKE "ASIA-EAST1%";
UPDATE temp.bucket_locations
 SET bucket_location = "JP"
 WHERE UPPER(bucket_location) LIKE "ASIA-NORTHEAST1%";
UPDATE temp.bucket_locations
 SET bucket_location = "HK"
 WHERE UPPER(bucket_location) LIKE "ASIA-EAST2%";
UPDATE temp.bucket_locations
 SET bucket_location = "JP"
 WHERE UPPER(bucket_location) LIKE "ASIA-NORTHEAST2%";
UPDATE temp.bucket_locations
 SET bucket_location = "KR"
 WHERE UPPER(bucket_location) LIKE "ASIA-NORTHEAST3%";
UPDATE temp.bucket_locations
 SET bucket_location = "IN"
 WHERE UPPER(bucket_location) LIKE "ASIA-SOUTH1%";
UPDATE temp.bucket_locations
 SET bucket_location = "SG"
 WHERE UPPER(bucket_location) LIKE "ASIA-SOUTHEAST1%";
UPDATE temp.bucket_locations
 SET bucket_location = "AU"
 WHERE UPPER(bucket_location) LIKE "AUSTRALIA%";
UPDATE temp.bucket_locations
 SET bucket_location = "FI"
 WHERE UPPER(bucket_location) LIKE "EUROPE-NORTH1%";
UPDATE temp.bucket_locations
 SET bucket_location = "BE"
 WHERE UPPER(bucket_location) LIKE "EUROPE-WEST1%";
UPDATE temp.bucket_locations
 SET bucket_location = "GB"
 WHERE UPPER(bucket_location) LIKE "EUROPE-WEST2%";
UPDATE temp.bucket_locations
 SET bucket_location = "DE"
 WHERE UPPER(bucket_location) LIKE "EUROPE-WEST3%";
UPDATE temp.bucket_locations
 SET bucket_location = "NL"
 WHERE UPPER(bucket_location) LIKE "EUROPE-WEST4%";
UPDATE temp.bucket_locations
 SET bucket_location = "CH"
 WHERE UPPER(bucket_location) LIKE "EUROPE-WEST6%";
UPDATE temp.bucket_locations
 SET bucket_location = "CA"
 WHERE UPPER(bucket_location) LIKE "NORTHAMERICA%";
UPDATE temp.bucket_locations
 SET bucket_location = "BR"
 WHERE UPPER(bucket_location) LIKE "SOUTHAMERICA%";

Rate limit exceeded: Too many table update operations for this table

The solution in this case is to avoid issuing so many UPDATEs. Instead, we can combine all the mappings and run just one, so that the whole remapping counts as a single table update against the quota:

CREATE TEMP TABLE `mappings`
AS
SELECT *
FROM UNNEST(
  [STRUCT('US' AS abbr, 'US%' AS long),
   ('TW', 'ASIA-EAST1%'),
   ('JP', 'ASIA-NORTHEAST2%')
   # add mappings
  ]);

UPDATE temp.bucket_locations
 SET bucket_location = abbr
 FROM mappings
 WHERE UPPER(bucket_location) LIKE long;

Answer 3: (score: 0)

As far as a solution goes: use await bigquery.createJob(jobConfig); instead of await bigquery.createQueryJob(jobConfig); the former runs as a batch job, while the latter is an interactive query job.

Queries run in batch mode do not count toward the BigQuery API limits.

From the GCP documentation:

By default, BigQuery runs interactive query jobs, which means that the query is executed as soon as possible. Interactive queries count toward your concurrent rate limit and your daily limit.

Batch queries don't count toward your concurrent rate limit.

I was running a MERGE query for deduplication, and switching to batch resolved the error. I did not notice any significant difference in processing time.
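
For reference, a minimal sketch of what that can look like with the Node.js client. The jobConfig shape mirrors the REST job resource; the MERGE statement, dataset, and table names here are hypothetical stand-ins for the deduplication query:

const {BigQuery} = require('@google-cloud/bigquery');
const bigquery = new BigQuery();

async function runBatchMerge() {
  const jobConfig = {
    configuration: {
      query: {
        // Placeholder dedup query: copy rows from staging that are not
        // already in target.
        query: `
          MERGE my_dataset.target t
          USING my_dataset.staging s
          ON t.id = s.id
          WHEN NOT MATCHED THEN INSERT ROW`,
        useLegacySql: false,
        priority: 'BATCH', // queued as a batch job instead of interactive
      },
    },
  };
  const [job] = await bigquery.createJob(jobConfig);
  await job.getQueryResults(); // resolves once the batch job has finished
}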