Using to_gbq in pandas to update Google BigQuery and getting GenericGBQException

Asked: 2018-01-10 15:59:14

Tags: python pandas google-bigquery

While trying to update a Google BigQuery table using to_gbq, I got the following response:

GenericGBQException: Reason: 400 Error while reading data, error message: JSON table encountered too many errors, giving up. Rows: 1; errors: 1.

My code:

gbq.to_gbq(mini_df,'Name-of-Table','Project-id',chunksize=10000,reauth=False,if_exists='append',private_key=None)

My mini_df dataframe looks like this:

date        request_number  name  feature_name  value_name  value
2018-01-10  1               1     "a"           "b"         0.309457
2018-01-10  1               1     "c"           "d"         0.273748

When I run to_gbq and the table does not yet exist in BigQuery, I can see that the table gets created with the following schema:

date STRING NULLABLE
request_number STRING NULLABLE
name STRING NULLABLE
feature_name STRING NULLABLE
value_name STRING NULLABLE
value FLOAT NULLABLE

What am I doing wrong? How can I solve this?

P.S. Here is the rest of the exception:

BadRequest                                Traceback (most recent call last)
~/anaconda3/envs/env/lib/python3.6/site-packages/pandas_gbq/gbq.py in load_data(self, dataframe, dataset_id, table_id, chunksize)
    589                         destination_table,
--> 590                         job_config=job_config).result()
    591                 except self.http_error as ex:

~/anaconda3/envs/env/lib/python3.6/site-packages/google/cloud/bigquery/job.py in result(self, timeout)
    527         # TODO: modify PollingFuture so it can pass a retry argument to done().
--> 528         return super(_AsyncJob, self).result(timeout=timeout)
    529 

~/anaconda3/envs/env/lib/python3.6/site-packages/google/api_core/future/polling.py in result(self, timeout)
    110             # Pylint doesn't recognize that this is valid in this case.
--> 111             raise self._exception
    112 

BadRequest: 400 Error while reading data, error message: JSON table encountered too many errors, giving up. Rows: 1; errors: 1.

During handling of the above exception, another exception occurred:

GenericGBQException                       Traceback (most recent call last)
<ipython-input-28-195df93249b6> in <module>()
----> 1 gbq.to_gbq(mini_df,'Name-of-Table','Project-id',chunksize=10000,reauth=False,if_exists='append',private_key=None)

~/anaconda3/envs/env/lib/python3.6/site-packages/pandas/io/gbq.py in to_gbq(dataframe, destination_table, project_id, chunksize, verbose, reauth, if_exists, private_key)
    106                       chunksize=chunksize,
    107                       verbose=verbose, reauth=reauth,
--> 108                       if_exists=if_exists, private_key=private_key)

~/anaconda3/envs/env/lib/python3.6/site-packages/pandas_gbq/gbq.py in to_gbq(dataframe, destination_table, project_id, chunksize, verbose, reauth, if_exists, private_key, auth_local_webserver)
    987         table.create(table_id, table_schema)
    988 
--> 989     connector.load_data(dataframe, dataset_id, table_id, chunksize)
    990 
    991 

~/anaconda3/envs/env/lib/python3.6/site-packages/pandas_gbq/gbq.py in load_data(self, dataframe, dataset_id, table_id, chunksize)
    590                         job_config=job_config).result()
    591                 except self.http_error as ex:
--> 592                     self.process_http_error(ex)
    593 
    594                 rows = []

~/anaconda3/envs/env/lib/python3.6/site-packages/pandas_gbq/gbq.py in process_http_error(ex)
    454         # <https://cloud.google.com/bigquery/troubleshooting-errors>`__
    455 
--> 456         raise GenericGBQException("Reason: {0}".format(ex))
    457 
    458     def run_query(self, query, **kwargs):

GenericGBQException: Reason: 400 Error while reading data, error message: JSON table encountered too many errors, giving up. Rows: 1; errors: 1.

1 Answer:

Answer (score: 3):

I had the same problem.

In my case it came down to the dataframe's data type being object.

I had three columns, externalId, mappingId and info. For none of these fields did I set a data type; I let pandas do its magic.

Pandas decided to set all three column data types to object. The problem is that, internally, the to_gbq component uses the to_json component. For some reason, if a field is of type object but holds only numeric values, this output omits the quotes around the field's data.

So Google BigQuery expected this:

{"externalId": "12345", "mappingId":"abc123", "info":"blerb"}

but got this:

{"externalId": 12345, "mappingId":"abc123", "info":"blerb"}

Since the field is mapped as STRING in Google BigQuery, the import process failed.
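This is easy to reproduce locally. A minimal sketch, assuming a DataFrame shaped like the one described above (the column names come from the answer; the values are made up):

import pandas as pd

# Hypothetical reproduction: a numeric value stored in an object column.
df = pd.DataFrame({
    "externalId": [12345],
    "mappingId": ["abc123"],
    "info": ["blerb"],
})
df["externalId"] = df["externalId"].astype(object)

print(df.dtypes)
# externalId    object
# mappingId     object
# info          object
# dtype: object

# to_json serializes the int value without quotes, so a BigQuery
# column declared as STRING rejects the row.
print(df.to_json(orient="records"))
# [{"externalId":12345,"mappingId":"abc123","info":"blerb"}]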

Two solutions came to mind.

Solution 1 - Change the column's data type

A simple type conversion helped with this issue. I also had to change the data type in BigQuery to INTEGER.

df['externalId'] = df['externalId'].astype('int')

In this case, BigQuery can work with the unquoted fields, as the JSON standard allows.
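As a quick sanity check with the hypothetical DataFrame from the sketch above, the cast changes the dtype while the JSON stays unquoted, which now matches the INTEGER column:

df["externalId"] = df["externalId"].astype("int")

print(df["externalId"].dtype)  # int64 (exact width is platform dependent)
print(df.to_json(orient="records"))
# Same unquoted output as before, but it now matches an INTEGER column:
# [{"externalId":12345,"mappingId":"abc123","info":"blerb"}]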

Solution 2 - Make sure that string fields are strings

Again, this is about setting the data type. But since we set it explicitly to a string, the export with to_json prints out a quoted field and everything works.

df['externalId'] = df['externalId'].astype('str')
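Continuing the hypothetical sketch from above, after applying this cast to_json prints the quotes that the STRING column in BigQuery expects:

print(df["externalId"].dtype)  # object (pandas stores strings as object)
print(df.to_json(orient="records"))
# [{"externalId":"12345","mappingId":"abc123","info":"blerb"}]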