Load a JSON file from the bq command line

Asked: 2012-09-27 13:50:51

Tags: json google-bigquery

Is it possible to load data from a JSON file (and not just CSV) using the BigQuery command-line tool? I can load a simple JSON file through the GUI, but the command line assumes CSV, and I don't see any documentation on how to specify JSON.

Here is the simple JSON file I am using:

{ "col": "value" }

with the schema: col:STRING
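For reference, bq's JSON loader expects one complete JSON object per line (newline-delimited JSON), not a pretty-printed document. A small sketch that writes a file of that shape for the schema above and sanity-checks it locally (the python3 step is only a local check, not part of bq):

```shell
# One JSON object per line, matching the schema col:STRING.
printf '%s\n' '{"col": "value"}' > data.json

# Local sanity check: every line must parse as standalone JSON.
python3 -c '
import json
for line in open("data.json"):
    json.loads(line)
print("ok")
'
```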

3 answers:

Answer 0: (score: 6)

Starting with version 2.0.12, bq allows uploading newline-delimited JSON files. Here is a sample command that does the job:

bq load --source_format NEWLINE_DELIMITED_JSON datasetName.tableName data.json schema.json

As mentioned above, "bq help load" will give you all the details.

Answer 1: (score: 1)

1) Yes, you can.

2) The documentation is here. Go to step 3: Upload the table in the docs.

3) You have to tell bq that you are uploading a JSON file rather than a CSV by using the --source_format flag.

4) The complete command structure is

bq load [--source_format=NEWLINE_DELIMITED_JSON] [--project_id=your_project_id] destination_data_set.destination_table data_source_uri table_schema

bq load --source_format=NEWLINE_DELIMITED_JSON --project_id=my_project_bq dataset_name.bq_table_name gs://bucket_name/json_file_name.json path_to_schema_in_your_machine

5) You can find the other bq load variants via

bq help load
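If your data starts out as an ordinary JSON array rather than the newline-delimited records bq expects, it has to be flattened first. A minimal conversion sketch (the file names records.json and records.ndjson are illustrative, and python3 is used here only as a local converter):

```shell
# An ordinary JSON array file, which bq load would reject as-is.
cat > records.json <<'EOF'
[{"col": "a"}, {"col": "b"}]
EOF

# Flatten the array into one JSON object per line.
python3 -c '
import json
with open("records.json") as f:
    rows = json.load(f)
with open("records.ndjson", "w") as out:
    for row in rows:
        out.write(json.dumps(row) + "\n")
'

# Count the resulting records.
python3 -c 'print(sum(1 for _ in open("records.ndjson")))'
```

The resulting records.ndjson is in the format the --source_format=NEWLINE_DELIMITED_JSON flag expects.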

Answer 2: (score: 0)

It does not support loading data in JSON format. Here is the documentation for the load command from "bq help load", as of the latest bq version, 2.0.9:

USAGE: bq [--global_flags] <command> [--command_flags] [args]


load     Perform a load operation of source into destination_table.

     Usage:
     load <destination_table> <source> [<schema>]

     The <destination_table> is the fully-qualified table name of table to create, or append to if the table already exists.

     The <source> argument can be a path to a single local file, or a comma-separated list of URIs.

     The <schema> argument should be either the name of a JSON file or a text schema. This schema should be omitted if the table already has one.

     In the case that the schema is provided in text form, it should be a comma-separated list of entries of the form name[:type], where type will default
     to string if not specified.

     In the case that <schema> is a filename, it should contain a single array object, each entry of which should be an object with properties 'name',
     'type', and (optionally) 'mode'. See the online documentation for more detail:
     https://code.google.com/apis/bigquery/docs/uploading.html#createtable

     Note: the case of a single-entry schema with no type specified is
     ambiguous; one can use name:string to force interpretation as a
     text schema.

     Examples:
     bq load ds.new_tbl ./info.csv ./info_schema.json
     bq load ds.new_tbl gs://mybucket/info.csv ./info_schema.json
     bq load ds.small gs://mybucket/small.csv name:integer,value:string
     bq load ds.small gs://mybucket/small.csv field1,field2,field3

     Arguments:
     destination_table: Destination table name.
     source: Name of local file to import, or a comma-separated list of
     URI paths to data to import.
     schema: Either a text schema or JSON file, as above.

     Flags for load:

/usr/local/bin/bq:
  --[no]allow_quoted_newlines: Whether to allow quoted newlines in CSV import data.
  -E,--encoding: <UTF-8|ISO-8859-1>: The character encoding used by the input file. Options include:
    ISO-8859-1 (also known as Latin-1)
    UTF-8
  -F,--field_delimiter: The character that indicates the boundary between columns in the input file. "\t" and "tab" are accepted names for tab.
  --max_bad_records: Maximum number of bad records allowed before the entire job fails.
    (default: '0')
    (an integer)
  --[no]replace: If true erase existing contents before loading new data.
    (default: 'false')
  --schema: Either a filename or a comma-separated list of fields in the form name[:type].
  --skip_leading_rows: The number of rows at the beginning of the source file to skip.
    (an integer)

gflags:
  --flagfile: Insert flag definitions from the given file into the command line.
    (default: '')
  --undefok: comma-separated list of flag names that it is okay to specify on the command line even if the program does not define a flag with that name.
    IMPORTANT: flags in this list that have arguments MUST use the --flag=value format.
    (default: '')
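Putting the schema description above into practice: a minimal schema file of the shape the help text describes (a single JSON array whose entries have 'name', 'type', and optionally 'mode') could look like the following. The field name is illustrative, and the python3 step is only a local validity check:

```shell
# A schema file in the documented shape: one array of field objects.
cat > schema.json <<'EOF'
[
  {"name": "col", "type": "STRING", "mode": "NULLABLE"}
]
EOF

# Local check: the file parses and is a JSON array of objects.
python3 -c '
import json
schema = json.load(open("schema.json"))
assert isinstance(schema, list)
print(schema[0]["name"])
'
```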