How do I apply a transform to all elements in a window of an unbounded Apache Beam pipeline before the window is output?

Asked: 2017-12-13 00:17:18

Tags: google-cloud-dataflow apache-beam

I'm writing a Dataflow pipeline that reads from Google Pub/Sub and writes the data to Google Cloud Storage:

    pipeline.apply(marketData)
        .apply(ParDo.of(new PubsubMessageToByteArray()))
        .apply(ParDo.of(new ByteArrayToString()))
        .apply(ParDo.of(new StringToMarketData()))
        .apply(ParDo.of(new AddTimestamps()))
        .apply(Window.<MarketData>into(FixedWindows.of(Duration.standardMinutes(options.getMinutesPerWindow())))
                .withAllowedLateness(Duration.standardSeconds(options.getAllowedSecondLateness()))
                .accumulatingFiredPanes())
        .apply(ParDo.of(new MarketDataToCsv()))
        .apply("Write File(s)", TextIO
                .write()
                .to(options.getOutputDirectory())
                .withWindowedWrites()
                .withNumShards(1)
                .withFilenamePolicy(new WindowedFilenamePolicy(outputBaseDirectory))
                .withHeader(csvHeader));

    pipeline.run().waitUntilFinish();

I'd like to deduplicate and sort the elements in each window before outputting the results. This differs from a typical PTransform in that I want the transform to execute once the window closes.

The Pub/Sub topic will contain duplicates, because multiple workers produce the same messages when one worker fails. How do I remove all duplicates within a window before writing? I see that a RemoveDuplicates class existed in Beam version 0.2, but it doesn't exist in the current version.

I understand that under the hood, Beam parallelizes PTransforms across workers. But because this pipeline writes with withNumShards(1), only one worker will write the final result. That means it should in theory be possible to have that worker apply a deduplication transform before writing.

The Beam Python SDK still has a RemoveDuplicates method, so I could reproduce that logic in Java, but why would it have been removed unless there's a better way? I imagine the implementation would be a deduplicating ParDo executed after some window trigger.
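
For reference, the Python SDK's RemoveDuplicates is essentially "key each element by itself, group, then keep the keys." Here is a minimal sketch of that logic in Java; it is hypothetical, windowed stands for the windowed PCollection<MarketData>, and it assumes MarketData has a deterministic coder, since the elements themselves become GroupByKey keys:

    // Sketch only: a Java translation of the Python RemoveDuplicates logic.
    // Assumes MarketData's coder is deterministic, because the elements
    // are used here as GroupByKey keys.
    PCollection<MarketData> deduped = windowed
        .apply("KeyByElement", WithKeys.of((MarketData md) -> md)
                .withKeyType(TypeDescriptor.of(MarketData.class)))
        .apply("GroupByElement", GroupByKey.<MarketData, MarketData>create())
        .apply("KeepOnePerKey", Keys.<MarketData>create());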

Edit: GroupByKey and SortValues look like they'll do what I need. I'm now trying to use them.
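
For completeness, a rough sketch of that approach using the sorter extension (beam-sdks-java-extensions-sorter) might look like the following. The single constant key and the use of getEventTime() as the secondary sort key are assumptions; note that SortValues orders values by the encoded bytes of the secondary key:

    // Sketch only: group everything in a window under one constant key,
    // then sort each group by event time with the sorter extension.
    PCollection<KV<String, Iterable<KV<Long, MarketData>>>> sorted = windowed
        .apply("KeyForSorting", MapElements
                .into(TypeDescriptors.kvs(TypeDescriptors.strings(),
                        TypeDescriptors.kvs(TypeDescriptors.longs(),
                                TypeDescriptor.of(MarketData.class))))
                .via(md -> KV.of("all", KV.of(md.getEventTime(), md))))
        .apply(GroupByKey.<String, KV<Long, MarketData>>create())
        // SortValues compares the *encoded* secondary keys, so the Long
        // coder must preserve numeric order bytewise (e.g. BigEndianLongCoder
        // for nonnegative timestamps).
        .apply(SortValues.<String, Long, MarketData>create(
                BufferedExternalSorter.options()));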

1 Answer:

Answer 0 (score: 3):

Here's the answer for the deduplication part:

    .apply(Distinct
            // MarketData::key produces a String. Use withRepresentativeValueFn()
            // because Apache Beam compares elements by their serialized bytes,
            // which can cause two equal objects to be treated as not equal. See
            // org/apache/beam/sdk/transforms/Distinct.java for details.
            .withRepresentativeValueFn(MarketData::key)
            .withRepresentativeType(TypeDescriptor.of(String.class)))
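
(For context: Distinct is the Beam 2.x rename of the RemoveDuplicates transform mentioned in the question, and because it is built on GroupByKey it deduplicates within each window, which is the behavior wanted here.)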

And here's a solution that deduplicates and sorts the elements (in case sorting is also needed):

    public static class DedupAndSortByTime extends
            Combine.CombineFn<MarketData, TreeSet<MarketData>, List<MarketData>> {

        @Override
        public TreeSet<MarketData> createAccumulator() {
            // A TreeSet keeps elements sorted and drops duplicates: two
            // elements with the same event time and orderbook type compare
            // as equal, so only the first one is kept.
            return new TreeSet<>(Comparator
                    .comparingLong(MarketData::getEventTime)
                    .thenComparing(MarketData::getOrderbookType));
        }

        @Override
        public TreeSet<MarketData> addInput(TreeSet<MarketData> accum, MarketData input) {
            accum.add(input);
            return accum;
        }

        @Override
        public TreeSet<MarketData> mergeAccumulators(Iterable<TreeSet<MarketData>> accums) {
            TreeSet<MarketData> merged = createAccumulator();
            for (TreeSet<MarketData> accum : accums) {
                merged.addAll(accum);
            }
            return merged;
        }

        @Override
        public List<MarketData> extractOutput(TreeSet<MarketData> accum) {
            // Emit the deduplicated elements in sorted order (Guava's Lists).
            return Lists.newArrayList(accum);
        }
    }
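
One caveat: Combine accumulators have to be encodable by some coder. If coder inference falls back to SerializableCoder for the TreeSet, the comparator built in createAccumulator must be serializable too, or encoding the accumulator may fail at runtime; overriding CombineFn.getAccumulatorCoder is the explicit way to control this.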

So the updated pipeline is:

    // Pipeline
    pipeline.apply(marketData)
        .apply(ParDo.of(new MarketDataDoFns.PubsubMessageToByteArray()))
        .apply(ParDo.of(new MarketDataDoFns.ByteArrayToString()))
        .apply(ParDo.of(new MarketDataDoFns.StringToMarketDataAggregate()))
        .apply(ParDo.of(new MarketDataDoFns.DenormalizeMarketDataAggregate()))
        .apply(ParDo.of(new MarketDataDoFns.AddTimestamps()))
        .apply(Window.<MarketData>into(FixedWindows.of(Duration.standardMinutes(options.getMinutesPerWindow())))
                .withAllowedLateness(Duration.standardSeconds(options.getAllowedSecondLateness()))
                .accumulatingFiredPanes())
        .apply(Combine.globally(new MarketDataCombineFn.DedupAndSortByTime()).withoutDefaults())
        .apply(ParDo.of(new MarketDataDoFns.MarketDataToCsv()))
        .apply("Write File(s)", TextIO
                .write()
                // This doesn't set the output directory as expected. 
                // "/output" gets stripped and I don't know why,
                // so "/output" has to be added to the directory path 
                // within the FilenamePolicy.
                .to(options.getOutputDirectory())
                .withWindowedWrites()
                .withNumShards(1)
                .withFilenamePolicy(new MarketDataFilenamePolicy.WindowedFilenamePolicy(outputBaseDirectory))
                .withHeader(csvHeader));

    pipeline.run().waitUntilFinish();