I've been following Google's documentation and using the following code to load data into BigQuery with Ruby:
# project_id = "Your Google Cloud project ID"
# dataset_id = "ID of the dataset containing table"
# table_id = "ID of the table to import file data into"
# local_file_path = "Path to local file to import into BigQuery table"
require "google/cloud/bigquery"
bigquery = Google::Cloud::Bigquery.new project: project_id
dataset = bigquery.dataset dataset_id
table = dataset.table table_id
puts "Importing data from file: #{local_file_path}"
load_job = table.load_job local_file_path
puts "Waiting for load job to complete: #{load_job.job_id}"
load_job.wait_until_done!
puts "Data imported"
From: https://cloud.google.com/bigquery/docs/loading-data-local
Everything works, but my CSV file needs the first two rows skipped.
I've read through more of the documentation and found the skip_leading_rows instance method, but I'm not sure how to use it: https://googleapis.github.io/google-cloud-ruby/docs/google-cloud-bigquery/latest/Google/Cloud/Bigquery/LoadJob.html
Any suggestions would be appreciated. Thanks!
Answer 0 (score: 1)
Based on the documentation, I would expect this to work:
puts "Importing data from file: #{local_file_path}"
load_job = table.load_job(local_file_path, skip_leading: 1)
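Putting it together with the rest of your snippet, a minimal sketch would look like this (assuming project_id, dataset_id, table_id and local_file_path are already set as in your question):

require "google/cloud/bigquery"

bigquery = Google::Cloud::Bigquery.new project: project_id
dataset = bigquery.dataset dataset_id
table = dataset.table table_id

puts "Importing data from file: #{local_file_path}"
# skip_leading tells BigQuery how many leading CSV rows to ignore.
load_job = table.load_job local_file_path, skip_leading: 2

puts "Waiting for load job to complete: #{load_job.job_id}"
load_job.wait_until_done!
puts "Data imported"

After the job finishes, the skip_leading_rows reader on the LoadJob (the method you linked to) should report the value you set.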