I'm trying to import a local CSV file into BigQuery using the Java API, but I haven't been able to get it to work. If you can spot the mistake in the code below, please let me know...
TableSchema schema = new TableSchema();
ArrayList<TableFieldSchema> fields = new ArrayList<TableFieldSchema>();
fields.add(new TableFieldSchema().setName("nn").setType("String"));
fields.add(new TableFieldSchema().setName("gg").setType("String"));
fields.add(new TableFieldSchema().setName("uu").setType("String"));
schema.setFields(fields);
TableReference destTable = new TableReference();
destTable.setProjectId(projectId);
destTable.setDatasetId(datasetId);
destTable.setTableId("testUploads_fromJava");
FileContent content = new FileContent("application/octet-stream", new File(csv));
Job job = new Job();
JobConfiguration config = new JobConfiguration();
JobConfigurationLoad configLoad = new JobConfigurationLoad();
configLoad.setSchema(schema);
configLoad.setDestinationTable(destTable);
config.setLoad(configLoad);
job.setConfiguration(config);
Insert insert = bigquery.jobs().insert(projectId, job, content);
insert.setProjectId(projectId);
JobReference jobRef = insert.execute().getJobReference();
The error occurs at "JobReference jobRef = insert.execute().getJobReference();". Here is the stack trace.
java.lang.NullPointerException
at java.net.URI$Parser.parse(URI.java:3004)
at java.net.URI.<init>(URI.java:577)
at com.google.api.client.http.GenericUrl.<init>(GenericUrl.java:100)
at com.google.api.client.googleapis.media.MediaHttpUploader.upload(MediaHttpUploader.java:269)
at com.google.api.client.googleapis.services.AbstractGoogleClientRequest.executeUnparsed(AbstractGoogleClientRequest.java:408)
at com.google.api.client.googleapis.services.AbstractGoogleClientRequest.executeUnparsed(AbstractGoogleClientRequest.java:328)
at com.google.api.client.googleapis.services.AbstractGoogleClientRequest.execute(AbstractGoogleClientRequest.java:449)
at bigquery.GettingBigQueryResult.loadLocalCSVtoBQ(GettingBigQueryResult.java:117)
at main.GetBQData.main(GetBQData.java:70)
Thanks.
Answer (score: 1)
@greeness Thanks for the suggestion. My mistake was in how I set up the schema; I changed the code so that the schema definition is loaded from a JSON file instead. The working code is below.
// Build the table schema by parsing the field definitions from schema.json
TableSchema schema = new TableSchema();
schema.setFields(new ArrayList<TableFieldSchema>());
JacksonFactory JACKSON = new JacksonFactory();
JACKSON.createJsonParser(new FileInputStream("schema.json"))
    .parseArrayAndClose(schema.getFields(), TableFieldSchema.class, null);
schema.setFactory(JACKSON);
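// For reference (not part of the original post): parseArrayAndClose expects
// schema.json to contain a bare JSON array of field objects. An example that
// would match the fields from the question above:
//   [{"name": "nn", "type": "STRING"},
//    {"name": "gg", "type": "STRING"},
//    {"name": "uu", "type": "STRING"}]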
TableReference destTable = new TableReference();
destTable.setProjectId(projectId);
destTable.setDatasetId(datasetId);
destTable.setTableId(tableId);
// Wrap the local CSV file as the media content for the upload
FileContent content = new FileContent("application/octet-stream", new File(csv));
Job job = new Job();
JobConfiguration config = new JobConfiguration();
JobConfigurationLoad configLoad = new JobConfigurationLoad();
configLoad.setSchema(schema);
configLoad.setDestinationTable(destTable);
configLoad.setEncoding("UTF-8");
configLoad.setCreateDisposition("CREATE_IF_NEEDED");
config.setLoad(configLoad);
job.setConfiguration(config);
// Start the load job; the CSV is sent as the request's media upload
Insert insert = bigquery.jobs().insert(projectId, job, content);
insert.setProjectId(projectId);
JobReference jobRef = insert.execute().getJobReference();
String jobId = jobRef.getJobId();
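A load job runs asynchronously, so the jobId at the end is mainly useful for polling until BigQuery finishes the load. Here is a minimal sketch of how that could look, assuming the same bigquery client and projectId as above (the helper name waitForJob is mine, not from the original post):

// Poll a load job until BigQuery reports it as DONE, then surface any error.
static void waitForJob(Bigquery bigquery, String projectId, String jobId)
        throws IOException, InterruptedException {
    while (true) {
        Job polled = bigquery.jobs().get(projectId, jobId).execute();
        if ("DONE".equals(polled.getStatus().getState())) {
            if (polled.getStatus().getErrorResult() != null) {
                throw new IOException("Load failed: "
                        + polled.getStatus().getErrorResult().getMessage());
            }
            return; // load completed successfully
        }
        Thread.sleep(1000); // back off briefly before polling again
    }
}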
Thanks.