Applying keyed state on top of co-grouped streams

Date: 2019-07-23 10:32:56

Tags: apache-kafka apache-beam

  • I have two Kafka message sources.
  • I am trying to do a word count and merge the counts from the two streams.
  • I created 1-minute windows for both data streams and applied CoGroupByKey; from a DoFn I emit <Key, Value> pairs of (word, count).
  • On top of this CoGroupByKey step I apply a stateful ParDo.

  • Say I get (Test,2) from stream 1 and (Test,3) from stream 2 within the same window; then in the CoGroupByKey step I merge them into (Test,5). But if they do not fall in the same window, I emit (Test,2) and (Test,3) separately.

  • Now I apply state to merge those elements.

  • So in the end I should get (Test,5), but I am not getting the expected result: all the elements from stream 1 go to one partition and the elements from stream 2 to another, which is why I get this result instead:

(Test,2)
(Test,3) 
// word count stream from kafka topic 1
PCollection<KV<String,Long>> stream1 = ... 

// word count stream from kafka topic 2
PCollection<KV<String,Long>> stream2 = ... 

PCollection<KV<String,Long>> windowed1 = 
  stream1.apply(
    Window
      .<KV<String,Long>>into(FixedWindows.of(Duration.millis(60000)))
      .triggering(Repeatedly.forever(AfterPane.elementCountAtLeast(1)))
      .withAllowedLateness(Duration.millis(1000))
      .discardingFiredPanes());

PCollection<KV<String,Long>> windowed2 = 
  stream2.apply(
    Window
      .<KV<String,Long>>into(FixedWindows.of(Duration.millis(60000)))
      .triggering(Repeatedly.forever(AfterPane.elementCountAtLeast(1)))
      .withAllowedLateness(Duration.millis(1000))
      .discardingFiredPanes());

final TupleTag<Long> count1Tag = new TupleTag<>();
final TupleTag<Long> count2Tag = new TupleTag<>();

// Merge collection values into a CoGbkResult collection.
PCollection<KV<String, CoGbkResult>> joinedStream =
    KeyedPCollectionTuple.of(count1Tag, windowed1).and(count2Tag, windowed2)
      .apply(CoGroupByKey.<String>create());

// Apply a stateful operation after the CoGroupByKey.

PCollection<KV<String,Long>> finalCountStream =
  joinedStream.apply(ParDo.of(
    new DoFn<KV<String, CoGbkResult>, KV<String,Long>>() {

      // State ids must be compile-time constant strings.
      @StateId("countState")
      private final StateSpec<MapState<String, Long>> mapState =
          StateSpecs.map();

      @ProcessElement
      public void processElement(
        ProcessContext processContext,
        @StateId("countState") MapState<String, Long> state) {

          KV<String, CoGbkResult> element = processContext.element();
          // Pull the counts contributed by each input stream via its tag.
          Iterable<Long> counts1 = element.getValue().getAll(count1Tag);
          Iterable<Long> counts2 = element.getValue().getAll(count2Tag);
          Long sumAmount =
              StreamSupport
                .stream(
                    Iterables.concat(counts1, counts2).spliterator(), false)
                .collect(Collectors.summingLong(n -> n));

          System.out.println(element.getKey()+"::"+sumAmount);
          //  processContext.output(element.getKey()+"::"+sumAmount);

          // Read the running total once; the state cell is null on first use.
          Long currCount = state.get(element.getKey()).read();
          Long newCount = (currCount == null ? 0L : currCount) + sumAmount;
          state.put(element.getKey(),newCount);
          processContext.output(KV.of(element.getKey(),newCount));
        }
      }));

finalCountStream
    .apply("finalState", ParDo.of(new DoFn<KV<String,Long>, String>() {

      @StateId("finalState")
      private final StateSpec<MapState<String, Long>> mapState =
        StateSpecs.map();

      @ProcessElement
      public void processElement(
        ProcessContext c,
        @StateId("finalState") MapState<String, Long> state) {

          KV<String,Long> e = c.element();
          Long currCount = state.get(e.getKey()).read();
          Long newCount = (currCount == null ? 0L : currCount) + e.getValue();
          state.put(e.getKey(),newCount);
          c.output(e.getKey()+":"+newCount);
        }

      }))
    .apply(KafkaIO.<Void, String>write()
                  .withBootstrapServers("localhost:9092")
                  .withTopic("test")
                  .withValueSerializer(StringSerializer.class)
                  .values());

3 Answers:

Answer 0 (score: 0)

Flattening the two word-count streams into a single collection and applying one stateful ParDo merges the counts for each key:

 PipelineOptions options = PipelineOptionsFactory.create();
    options.as(FlinkPipelineOptions.class)
            .setRunner(FlinkRunner.class);

    Pipeline p = Pipeline.create(options);


    // KafkaWordCount is the asker's own helper that builds a word-count
    // stream from the given bootstrap server and topic.
    PCollection<KV<String,Long>> stream1 = new KafkaWordCount("localhost:9092","test1")
            .build(p);

    PCollection<KV<String,Long>> stream2 = new KafkaWordCount("localhost:9092","test2")
            .build(p);


    PCollectionList<KV<String, Long>> pcs = PCollectionList.of(stream1).and(stream2);
    PCollection<KV<String, Long>> merged = pcs.apply(Flatten.<KV<String, Long>>pCollections());

    merged.apply("finalState", ParDo.of(new DoFn<KV<String,Long>, String>() {

        @StateId("myState")
        private final StateSpec<MapState<String, Long>> mapState = StateSpecs.map();

        @ProcessElement
        public void processElement(ProcessContext c, @StateId("myState") MapState<String, Long> state) {

            KV<String,Long> e = c.element();
            System.out.println("Thread ID :"+ Thread.currentThread().getId());
            Long currCount = state.get(e.getKey()).read();
            Long newCount = (currCount == null ? 0L : currCount) + e.getValue();
            state.put(e.getKey(),newCount);
            c.output(e.getKey()+":"+newCount);
        }

    })).apply(KafkaIO.<Void, String>write()
            .withBootstrapServers("localhost:9092")
            .withTopic("test")
            .withValueSerializer(StringSerializer.class)
            .values()
    );

    p.run().waitUntilFinish();

Answer 1 (score: 0)

You have set up both streams with the trigger Repeatedly.forever(AfterPane.elementCountAtLeast(1)) and discardingFiredPanes(). This causes the CoGroupByKey to output as soon as possible after each input element and then reset its state each time, so the normal behavior is for it to basically pass each input straight through.

Let me explain in more detail. CoGroupByKey executes as follows (a conceptual sketch follows the list):

  • All of the elements in stream1 and stream2 are tagged as you specified. So each (key, value1) in stream1 effectively becomes (key, (count1, value1)), and each (key, value2) in stream2 becomes (key, (count2, value2)).
  • These tagged collections are flattened together. Now there is one collection with elements like (key, (count1, value1)) and (key, (count2, value2)).
  • The combined collection goes through a regular GroupByKey. This is where triggering happens. With the default trigger, you get (key, [(count1, value1), (count2, value2), ...]) with all of the values for a key grouped together. But with your trigger, you will often get (key, [(count1, value1)]) and (key, [(count2, value2)]) separately, because each grouping fires output immediately.
  • The output of the GroupByKey is just wrapped in the CoGbkResult API. In many runners this is simply a filtered view of the grouped iterable.
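
To make this concrete, here is a minimal conceptual sketch of that expansion, not how CoGroupByKey is actually implemented: it reuses windowed1 and windowed2 from the question and stands in for the tuple tags with the string labels "count1" and "count2".

    // Tag each value with a label recording which stream it came from.
    PCollection<KV<String, KV<String, Long>>> tagged1 = windowed1.apply(
        "TagStream1",
        MapElements
            .into(TypeDescriptors.kvs(
                TypeDescriptors.strings(),
                TypeDescriptors.kvs(TypeDescriptors.strings(), TypeDescriptors.longs())))
            .via(kv -> KV.of(kv.getKey(), KV.of("count1", kv.getValue()))));

    PCollection<KV<String, KV<String, Long>>> tagged2 = windowed2.apply(
        "TagStream2",
        MapElements
            .into(TypeDescriptors.kvs(
                TypeDescriptors.strings(),
                TypeDescriptors.kvs(TypeDescriptors.strings(), TypeDescriptors.longs())))
            .via(kv -> KV.of(kv.getKey(), KV.of("count2", kv.getValue()))));

    // Flatten the tagged collections and group by key. Triggering happens in
    // this GroupByKey, so with an element-count trigger a pane will often
    // contain values from only one of the streams.
    PCollection<KV<String, Iterable<KV<String, Long>>>> grouped =
        PCollectionList.of(tagged1).and(tagged2)
            .apply(Flatten.<KV<String, KV<String, Long>>>pCollections())
            .apply(GroupByKey.<String, KV<String, Long>>create());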

Of course, triggers are nondeterministic, and runners are also allowed to use different implementations of CoGroupByKey. But the behavior you are seeing is expected. You probably do not want to use a trigger like that or discarding mode; otherwise you need to do further grouping downstream.
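
For example, a sketch of windowing without the eager trigger, reusing the window size and allowed lateness from the question: with the default watermark trigger, each window emits a single pane per key containing the values from both streams, so the CoGroupByKey groups them the way the question expects.

    PCollection<KV<String, Long>> windowed1 =
        stream1.apply(
            Window.<KV<String, Long>>into(FixedWindows.of(Duration.standardMinutes(1)))
                // Fire once per window, when the watermark passes its end,
                // rather than after every element.
                .triggering(DefaultTrigger.of())
                .withAllowedLateness(Duration.millis(1000))
                .accumulatingFiredPanes());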

In general, doing a join with CoGBK is going to require some work downstream, until Beam supports retractions.

Answer 2 (score: 0)

Alternatively, you can use a Flatten + Combine approach, which should give you simpler code:

   PCollection<KV<String, Long>> pc1 = ...;
   PCollection<KV<String, Long>> pc2 = ...;
   PCollectionList<KV<String, Long>> pcs = PCollectionList.of(pc1).and(pc2);
   PCollection<KV<String, Long>> merged = pcs.apply(Flatten.<KV<String, Long>>pCollections());
   merged.apply(window...).apply(Combine.perKey(Sum.ofLongs()));
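
A fuller sketch of this suggestion, assuming the 1-minute fixed windows from the question, with stream1 and stream2 standing in for the two word-count streams:

    // Merge the two word-count streams, window the merged collection once,
    // and let Combine.perKey sum the counts for each word per window.
    PCollection<KV<String, Long>> totals =
        PCollectionList.of(stream1).and(stream2)
            .apply(Flatten.<KV<String, Long>>pCollections())
            .apply(Window.<KV<String, Long>>into(FixedWindows.of(Duration.standardMinutes(1))))
            .apply(Combine.perKey(Sum.ofLongs()));

With the default trigger this emits one (word, total) per window covering the elements from both streams, which avoids the separate per-stream panes seen earlier.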