Apache Beam combine grouped values

Asked: 2017-05-26 18:21:12

Tags: java google-cloud-platform google-cloud-dataflow apache-beam dataflowtask

I'm trying to find a way to reorder my Kafka messages and send the ordered messages to a new topic using Apache Beam with Google Dataflow.

I have a Kafka publisher that sends String messages in the following format: {system_timestamp}-{event_name}?{parameters}

For example:

1494002667893-client.message?chatName=1c&messageBody=hello
1494002656558-chat.started?chatName=1c&chatParticipants=3

What I want to do is reorder the events based on the {system-timestamp} part of the message, within a 5-second window, because our publisher doesn't guarantee that messages will be sent in order of the {system-timestamp} value.

I wrote a mock sorter function that sorts events received from Kafka (via the KafkaIO source):

static class SortEventsFunc extends DoFn<KV<String, Iterable<String>>, KV<String, Iterable<String>>> {

    @ProcessElement
    public void processElement(ProcessContext c) {
        KV<String, Iterable<String>> element = c.element();

        // debug output: print the grouped element's key and its values
        System.out.println();
        System.out.print("key: " + element.getKey() + ";");

        // copy the grouped values into a list so they can be sorted
        List<String> list = new ArrayList<>();
        for (String val : element.getValue()) {
            System.out.print("value: " + val);
            list.add(val);
        }
        // natural (lexicographic) String order sorts by the
        // {system_timestamp} prefix of each message
        Collections.sort(list, Comparator.naturalOrder());
        c.output(KV.of(element.getKey(), list));
    }
}
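One caveat: Comparator.naturalOrder() compares whole Strings lexicographically, which matches timestamp order only while every {system_timestamp} prefix has the same digit count (current 13-digit epoch millis do). A minimal alternative sketch for the Collections.sort line, parsing the prefix numerically (the byTimestamp name is mine):

// Assumption: every message starts with "{system_timestamp}-", so the
// numeric prefix before the first '-' can be parsed and compared as a long.
Comparator<String> byTimestamp = Comparator.comparingLong(
        msg -> Long.parseLong(msg.substring(0, msg.indexOf('-'))));
Collections.sort(list, byTimestamp);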

public static void main(String[] args) {
    PipelineOptions options = PipelineOptionsFactory.create();

    DirectOptions directOptions = options.as(DirectOptions.class);
    directOptions.setRunner(DirectRunner.class);

    // Create the Pipeline object with the options we defined above.
    Pipeline pipeline = Pipeline.create(options);
    pipeline
        // read from Kafka
        .apply(KafkaIO.<String,String>read()
            .withBootstrapServers("localhost:9092")
            .withTopics(new ArrayList<>((Arrays.asList("events"))))
            .withKeyDeserializer(StringDeserializer.class)
            .withValueDeserializer(StringDeserializer.class)
            .withoutMetadata())
        // apply window
        .apply(Window.<KV<String,String>>into(
                FixedWindows.of(Duration.standardSeconds(5L))))
        // group by key before sorting
        .apply(GroupByKey.<String, String>create()) // returns PCollection<KV<String, Iterable<String>>>
        // sort events
        .apply(ParDo.of(new SortEventsFunc()))
        // combine the KV<String, Iterable<String>> input into the KV<String, String> format KafkaIO accepts
        .apply(Combine.perKey()) // TODO: somehow convert KV<String, Iterable<String>> to KV<String, String>
        // write ordered events to Kafka
        .apply(KafkaIO.<String, String>write()
                .withBootstrapServers("localhost:9092")
                .withTopic("events-sorted")
                .withKeySerializer(StringSerializer.class)
                .withValueSerializer(StringSerializer.class)
            );
    pipeline.run();
}

So I grouped the messages with the GroupByKey.<String, String>create() transform, and after sorting the events I need to convert them somehow from KV<String, Iterable<String>> into the KV<String, String> or KV<Void, String> values that KafkaIO accepts. What I want to do is ignore the key created by the grouping transform and simply pass each value as a separate message to the KafkaIO writer.

I explored the Combine#perKey transform, but it accepts a SerializableFunction that can only combine all the values into a single String (with some separator), so I would end up passing only one value, as one concatenated string, to the KafkaIO writer instead of every value that was read by KafkaIO#read().
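For illustration only, a sketch of that approach, assuming events is the windowed PCollection<KV<String, String>> read from Kafka and "|" is an arbitrary separator of mine:

// Combine.perKey with a SerializableFunction<Iterable<V>, V> folds ALL
// values for a key into a single output, e.g. one "|"-joined string,
// so only one Kafka record per key would be written, not one per message.
PCollection<KV<String, String>> joined = events.apply(
        Combine.<String, String>perKey(
                (Iterable<String> values) -> String.join("|", values)));

That is why Combine#perKey cannot produce one output message per input value here.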

1 Answer:

Answer 0 (score: 1):

It's actually quite simple! The trick here is that you can call c.output any number of times inside the @ProcessElement method.

So in your case, just define a DoFn<KV<String, Iterable<String>>, KV<String, String>>, iterate over the c.element().getValue() collection, and call c.output for each of its elements.
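A minimal sketch of such a fan-out DoFn, assuming the grouping key should stay as the Kafka record key (the FanOutValuesFn name is mine):

static class FanOutValuesFn extends DoFn<KV<String, Iterable<String>>, KV<String, String>> {

    @ProcessElement
    public void processElement(ProcessContext c) {
        String key = c.element().getKey();
        // c.output may be called any number of times per input element,
        // so each grouped value becomes its own output record
        for (String value : c.element().getValue()) {
            c.output(KV.of(key, value));
        }
    }
}

In the pipeline above, this would replace the Combine.perKey() placeholder, e.g. .apply(ParDo.of(new FanOutValuesFn())), so every sorted value reaches the KafkaIO writer as a separate message.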