How to use 'flatten' correctly in Dataflow

Asked: 2015-05-13 02:56:41

Tags: google-cloud-dataflow

Our pipeline looks like this:

GCS (gzip-compressed files) -> ParDo -> BigQuery

I want to use 'flatten' to read multiple files from GCS as the input to my pipeline, but it keeps failing with this error:

Workflow failed. Causes: (5001e5764f46ac2c): BigQuery creation of import job for table "Impressions_05_2015_denormalized_test" in dataset "CPT_XXXX" in project "gdfp-XXXX" failed. Causes: (5001e5764f46a1cf): Error:
 Message: Load configuration must specify at least one source URI
 HTTP Code: 400

Code:

PCollection<String> file1 = pipeline.apply(
        TextIO.Read.from("gs://<bucket_name_removed>/NetworkActiveViews_232503_20140918_21.gz")
                .withCompressionType(TextIO.CompressionType.GZIP));
PCollection<String> file2 = pipeline.apply(
        TextIO.Read.from("gs://<bucket_name_removed>/NetworkActiveViews_232503_20140918_22.gz")
                .withCompressionType(TextIO.CompressionType.GZIP));
PCollectionList<String> allFiles = PCollectionList.of(file1).and(file2);
PCollection<String> inputRead = allFiles.apply(Flatten.<String>pCollections());

inputRead.apply(ParDo.of(transformation)
                .named(String.format("%s-CPT-transform", type))
                .withSideInputs(views))
        .apply(Write.to(getOutputTable(type))
                .withCreateDisposition(CREATE_IF_NEEDED)
                .withWriteDisposition(WRITE_APPEND)
                .withSchema(schema)
                .named(String.format("%s-BQ-write", type)));

Example job ID: 2015-05-12_19_54_06-10158770219525037626

What am I doing wrong?

2 answers:

Answer 0: (score: 2)

Instead of the proposed hack, which is actually quite crude for our case, I write an empty row from the finishBundle() method instead. This writes one empty row per bundle, but we can live with that until the fix is rolled out. Setting an "id" on the row makes it easy to filter these rows out later.

Also, this workaround/hack is simpler to implement:

@Override
public void finishBundle(Context c) throws Exception {
   TableRow workaroundRow = new TableRow();
   workaroundRow.set("id", "workaround_row");
   c.output(workaroundRow); //Workaround to http://goo.gl/CpBxEf
}
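For context, a minimal sketch of what a full DoFn might look like with this workaround in place. Only the finishBundle() part comes from the snippet above; the processElement() body is a hypothetical placeholder, not the actual transform from the question:

// Sketch of a DoFn with the per-bundle workaround row. The conversion logic in
// processElement() is a hypothetical placeholder.
static class LineToTableRowFn extends DoFn<String, TableRow> {
    @Override
    public void processElement(ProcessContext c) {
        // Hypothetical conversion of an input line to a BigQuery row.
        c.output(new TableRow().set("raw_line", c.element()));
    }

    @Override
    public void finishBundle(Context c) throws Exception {
        // Emit one tagged empty row per bundle so the BigQuery load job never
        // sees zero rows; filter on id = 'workaround_row' when querying.
        TableRow workaroundRow = new TableRow();
        workaroundRow.set("id", "workaround_row");
        c.output(workaroundRow); // Workaround to http://goo.gl/CpBxEf
    }
}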

Answer 1: (score: 1)

We have a bug when writing an empty PCollection to BigQuery. I've identified the culprit -- we'll fix it as soon as possible and follow up here.

If you need to be able to produce a result PCollection that may be empty, and you can tolerate a single empty row being added to the BigQuery table when the result is empty, you can work around this temporarily with the following hack:

// Temporary hack around a temporary bug writing empty PCollections to BigQuery by
// creating a single empty row if a PCollection<TableRow> is empty.
static class AddEmptyRowIfEmpty
        extends PTransform<PCollection<TableRow>, PCollection<TableRow>> {

    @Override
    public PCollection<TableRow> apply(PCollection<TableRow> maybeEmpty) {

        // Build a PCollection that contains no elements if 'maybeEmpty' has elements, or
        // exactly one empty TableRow if 'maybeEmpty' is empty.
        final PCollectionView<Iterable<TableRow>> maybeEmptyView = maybeEmpty.apply(
                View.<TableRow>asIterable());
        PCollection<TableRow> singleRowIfMaybeEmptyIsEmpty =
                maybeEmpty.getPipeline()
                    .apply(Create.of((Void) null)).setCoder(VoidCoder.of())
                    .apply(ParDo.of(
                        new DoFn<Void, TableRow>() {
                          @Override
                          public void processElement(ProcessContext c) {
                            Iterator<TableRow> rows = c.sideInput(maybeEmptyView).iterator();
                            if (!rows.hasNext()) {
                              c.output(new TableRow());
                            }
                          }
                        }).withSideInputs(maybeEmptyView));

        // Return a PCollection with at least one element.
        return PCollectionList.of(singleRowIfMaybeEmptyIsEmpty).and(maybeEmpty)
                .apply(Flatten.<TableRow>pCollections());

    }
}

// Then in your pipeline:
...
.apply(new AddEmptyRowIfEmpty())
.apply(BigQueryIO.Write.to(...))
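
Applied to the pipeline from the question, the transform goes right before the BigQuery sink. A rough sketch, assuming the ParDo produces a PCollection<TableRow> (the identifiers transformation, type, views, schema, and getOutputTable() are taken from the question's code):

// Sketch: slot AddEmptyRowIfEmpty between the transform and the BigQuery write.
PCollection<TableRow> rows = inputRead.apply(
        ParDo.of(transformation)
                .named(String.format("%s-CPT-transform", type))
                .withSideInputs(views));

rows.apply(new AddEmptyRowIfEmpty())
        .apply(BigQueryIO.Write.to(getOutputTable(type))
                .withCreateDisposition(BigQueryIO.Write.CreateDisposition.CREATE_IF_NEEDED)
                .withWriteDisposition(BigQueryIO.Write.WriteDisposition.WRITE_APPEND)
                .withSchema(schema)
                .named(String.format("%s-BQ-write", type)));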