I have an Apache Beam project that uses the Google Dataflow runner to process a large amount of data stored in BigQuery. The pipeline reads one main table and uses three different side inputs. For every row in the input dataset we compute a "label", which produces five different output streams. The main BigQuery table we read is 60 GB, and the three side inputs are 2 GB, 51 GB and 110 GB respectively. They are all converted into a PCollectionView<Map<String, Iterable<TableRow>>>.
Finally, these five streams are merged and written back to BigQuery.
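For illustration, this is roughly how one of those side-input views can be built (a minimal sketch; the table spec "project:dataset.side_table" and the key column "key_column" are placeholders, not names from the actual project):

import com.google.api.services.bigquery.model.TableRow;
import java.util.Map;
import org.apache.beam.sdk.Pipeline;
import org.apache.beam.sdk.io.gcp.bigquery.BigQueryIO;
import org.apache.beam.sdk.transforms.MapElements;
import org.apache.beam.sdk.transforms.View;
import org.apache.beam.sdk.values.KV;
import org.apache.beam.sdk.values.PCollectionView;
import org.apache.beam.sdk.values.TypeDescriptor;
import org.apache.beam.sdk.values.TypeDescriptors;

public class SideInputSketch {
    // Reads a BigQuery table, keys each row by a join column and exposes the
    // result as a Map<String, Iterable<TableRow>> side-input view.
    static PCollectionView<Map<String, Iterable<TableRow>>> buildSideView(final Pipeline pipeline) {
        return pipeline
            .apply("ReadSideTable", BigQueryIO.readTableRows().from("project:dataset.side_table"))
            .apply("KeyByJoinColumn", MapElements
                .into(TypeDescriptors.kvs(TypeDescriptors.strings(), TypeDescriptor.of(TableRow.class)))
                .via((TableRow row) -> KV.of((String) row.get("key_column"), row)))
            .apply("AsMultimapView", View.<String, TableRow>asMultimap());
    }
}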
When I run this job on a subset of the data (1 million rows) it works as expected, but when I run it on the full dataset (177 million rows) the job fails with the following error: Failed to copy Column partitioned table to Column partitioned meta table: not supported
What does this error mean, and how can I fix it? Thanks!
Full stack trace:
java.lang.RuntimeException: Failed to create copy job with id prefix beam_load_poisrschellenberger0810134033c63e44ed_e7cf725c5321409b96a4f20e7ec234bc_3d9288a5ff3a24b9eb8b1ec9c621e7dc_00000, reached max retries: 3, last failed copy job: {
"configuration" : {
"copy" : {
"createDisposition" : "CREATE_IF_NEEDED",
"destinationTable" : {
"datasetId" : "KPI",
"projectId" : "bolcom-stg-kpi-logistics-f6c",
"tableId" : "some_table_v1$20180811"
},
"sourceTables" : [ {
"datasetId" : "KPI",
"projectId" : "bolcom-stg-kpi-logistics-f6c",
"tableId" : "beam_load_poisrschellenberger0810134033c63e44ed_e7cf725c5321409b96a4f20e7ec234bc_3d9288a5ff3a24b9eb8b1ec9c621e7dc_00002_00000"
}, {
"datasetId" : "KPI",
"projectId" : "bolcom-stg-kpi-logistics-f6c",
"tableId" : "beam_load_poisrschellenberger0810134033c63e44ed_e7cf725c5321409b96a4f20e7ec234bc_3d9288a5ff3a24b9eb8b1ec9c621e7dc_00001_00000"
}, {
"datasetId" : "KPI",
"projectId" : "bolcom-stg-kpi-logistics-f6c",
"tableId" : "beam_load_poisrschellenberger0810134033c63e44ed_e7cf725c5321409b96a4f20e7ec234bc_3d9288a5ff3a24b9eb8b1ec9c621e7dc_00004_00000"
}, {
"datasetId" : "KPI",
"projectId" : "bolcom-stg-kpi-logistics-f6c",
"tableId" : "beam_load_poisrschellenberger0810134033c63e44ed_e7cf725c5321409b96a4f20e7ec234bc_3d9288a5ff3a24b9eb8b1ec9c621e7dc_00003_00000"
} ],
"writeDisposition" : "WRITE_APPEND"
}
},
"etag" : "\"HbYIGVDrlNbv2nDGLHCFlwJG0rI/oNgxlMGidSDy59VClvLIlEu08aU\"",
"id" : "bolcom-stg-kpi-logistics-f6c:EU.beam_load_poisrschellenberger0810134033c63e44ed_e7cf725c5321409b96a4f20e7ec234bc_3d9288a5ff3a24b9eb8b1ec9c621e7dc_00000-2",
"jobReference" : {
"jobId" : "beam_load_poisrschellenberger0810134033c63e44ed_e7cf725c5321409b96a4f20e7ec234bc_3d9288a5ff3a24b9eb8b1ec9c621e7dc_00000-2",
"location" : "EU",
"projectId" : "bolcom-stg-kpi-logistics-f6c"
},
"kind" : "bigquery#job",
"selfLink" : "https://www.googleapis.com/bigquery/v2/projects/bolcom-stg-kpi-logistics-f6c/jobs/beam_load_poisrschellenberger0810134033c63e44ed_e7cf725c5321409b96a4f20e7ec234bc_3d9288a5ff3a24b9eb8b1ec9c621e7dc_00000-2?location=EU",
"statistics" : {
"creationTime" : "1533957446953",
"endTime" : "1533957447111",
"startTime" : "1533957447111"
},
"status" : {
"errorResult" : {
"message" : "Failed to copy Column partitioned table to Column partitioned meta table: not supported.",
"reason" : "invalid"
},
"errors" : [ {
"message" : "Failed to copy Column partitioned table to Column partitioned meta table: not supported.",
"reason" : "invalid"
} ],
"state" : "DONE"
},
"user_email" : "595758839781-compute@developer.gserviceaccount.com"
}.
at org.apache.beam.sdk.io.gcp.bigquery.WriteRename.copy(WriteRename.java:166)
at org.apache.beam.sdk.io.gcp.bigquery.WriteRename.writeRename(WriteRename.java:107)
at org.apache.beam.sdk.io.gcp.bigquery.WriteRename.processElement(WriteRename.java:80)
The table being written to is created as follows:
private static void write(final PCollection<TableRow> data) {
    // Write to BigQuery, routing each row to its daily partition via GetPartitionFromTableRowFn.
    data.apply(BigQueryIO.writeTableRows()
        .to(new GetPartitionFromTableRowFn("table_name"))
        .withSchema(getOutputSchema())
        .withCreateDisposition(BigQueryIO.Write.CreateDisposition.CREATE_IF_NEEDED)
        .withWriteDisposition(BigQueryIO.Write.WriteDisposition.WRITE_APPEND));
}
private static TableSchema getOutputSchema() {
    // Output schema; COLUMN_PARTITION_DATE is the DATE column used for partitioning.
    final List<TableFieldSchema> fields = new ArrayList<>();
    fields.add(new TableFieldSchema().setName(ORDER_LINE_REFERENCE).setType("INTEGER"));
    fields.add(new TableFieldSchema().setName(COLUMN_LABEL).setType("STRING"));
    fields.add(new TableFieldSchema().setName(COLUMN_INSERTION_DATETIME).setType("TIMESTAMP"));
    fields.add(new TableFieldSchema().setName(COLUMN_PARTITION_DATE).setType("DATE"));
    return new TableSchema().setFields(fields);
}
using the following SerializableFunction:
public class GetPartitionFromTableRowFn implements SerializableFunction<ValueInSingleWindow<TableRow>, TableDestination> {

    private final String tableDestination;

    public GetPartitionFromTableRowFn(final String tableDestination) {
        this.tableDestination = tableDestination;
    }

    @Override
    public TableDestination apply(final ValueInSingleWindow<TableRow> element) {
        final TableDestination tableDestination;
        if (null != element.getValue()) {
            // DAY partitioning on a DATE column, i.e. a column-partitioned destination.
            final TimePartitioning timePartitioning = new TimePartitioning().setType("DAY");
            timePartitioning.setField(Constants.COLUMN_PARTITION_DATE);
            final String formattedDate = element.getValue().get(Constants.COLUMN_PARTITION_DATE).toString().replaceAll("-", "");
            // Append the partition decorator, e.g. output$20180801.
            final String tableName = String.format("%s$%s", this.tableDestination, formattedDate);
            tableDestination = new TableDestination(tableName, null, timePartitioning);
        } else {
            tableDestination = new TableDestination(this.tableDestination, null);
        }
        return tableDestination;
    }
}
Answer (score: 1):
1) You are trying to write to a column-partitioned table using a partition decorator in the table suffix, some_table_v1$20180811. This syntax only works on ingestion-time partitioned tables.
Since, according to the error message, your table is partitioned by a column, this operation is not supported. To modify column-based partitions you have to run UPDATE or MERGE statements, and a single job is limited to changing 1000 partitions. Alternatively, drop the column-based partitioning and use ingestion-time partitioned tables only.
Note that BigQuery supports two kinds of partitioned tables: tables partitioned by ingestion time, and tables partitioned by a DATE or TIMESTAMP column.
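For illustration, a rough sketch of the second option from 1), assuming the destination table is (re)created as an ingestion-time partitioned table: the TimePartitioning then carries no field, so the $YYYYMMDD decorator from the question becomes valid. The column name "partition_date" stands in for the question's Constants.COLUMN_PARTITION_DATE; this is a sketch under those assumptions, not a tested drop-in replacement:

import com.google.api.services.bigquery.model.TableRow;
import com.google.api.services.bigquery.model.TimePartitioning;
import org.apache.beam.sdk.io.gcp.bigquery.TableDestination;
import org.apache.beam.sdk.transforms.SerializableFunction;
import org.apache.beam.sdk.values.ValueInSingleWindow;

public class GetIngestionTimePartitionFn
        implements SerializableFunction<ValueInSingleWindow<TableRow>, TableDestination> {

    private final String tableDestination;

    public GetIngestionTimePartitionFn(final String tableDestination) {
        this.tableDestination = tableDestination;
    }

    @Override
    public TableDestination apply(final ValueInSingleWindow<TableRow> element) {
        if (element.getValue() == null) {
            return new TableDestination(this.tableDestination, null);
        }
        // DAY partitioning without setField(...): the destination is created as an
        // ingestion-time partitioned table, for which the $YYYYMMDD decorator is allowed.
        final TimePartitioning timePartitioning = new TimePartitioning().setType("DAY");
        final String formattedDate =
                element.getValue().get("partition_date").toString().replaceAll("-", "");
        return new TableDestination(
                String.format("%s$%s", this.tableDestination, formattedDate), null, timePartitioning);
    }
}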
2) If that is not the case, you need to check your source tables again:
When copying multiple partitioned tables, note the following: if you copy multiple source tables into a partitioned table in the same job, the source tables cannot contain a mixture of partitioned and non-partitioned tables; and if all source tables are partitioned, their partition specifications must match the partition specification of the destination table.
P.S. For more details, please post your table definitions.