This is a summary of the question.
Suppose I have multiple source streams that all apply the same set of predicates. I want to set up branch streams so that records satisfying a given predicate, regardless of which source stream they came from, are processed by the same branch stream. As shown in the diagram below, each branch stream acts like a generic processor that transforms incoming records.
The following code block does not work as intended, because it creates a separate set of branch streams for each source stream.
StreamsBuilder builder = new StreamsBuilder();
KStream<String, String> source1 = builder.stream("x");
KStream<String, String> source2 = builder.stream("y");

Predicate<String, String>[] branchPredicates = new Predicate[forkCount];
for (int i = 0; i < forkCount; ++i) {
    final int idx = i;
    branchPredicates[i] = (key, value) -> key.hashCode() % forkCount == idx;
}

// Each call to branch() returns its own array of forkCount sub-streams,
// so the combined list ends up with branchPredicates.length * 2 entries.
List<KStream<String, String>> forkStreams = Arrays.asList(source1, source2)
    .stream()
    .map(srcStream -> srcStream.branch(branchPredicates))
    .flatMap(Arrays::stream)
    .collect(Collectors.toList());
Sorry, I am mostly a Scala developer :)
In the example above, forkStreams.size() == branchPredicates.length * 2, and in general it grows with the number of source streams. Is there a trick in Kafka Streams that would let me keep a one-to-one relationship between the predicates and the fork streams?
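Conceptually, what I am after is a single shared set of fork streams fed by all the sources. A rough sketch of that shape using KStream#merge (sketch only; I am not certain this is the right way to combine it with branch()):

// Sketch only: merge the sources first, then branch once, so there is
// exactly one fork stream per predicate regardless of the number of sources.
// Assumes KStream#merge (Kafka Streams 1.0+) and the branchPredicates array above.
KStream<String, String> merged = source1.merge(source2);
KStream<String, String>[] forks = merged.branch(branchPredicates);
// forks.length == branchPredicates.length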
Update 11/27/2018: I was able to make some progress.
However, as the following code block shows, ALL of the fork streams live on the same thread. What I would like is to put each fork stream on a different thread to improve CPU utilization.
StreamsBuilder builder = new StreamsBuilder();
KStream<String, String> source = builder.stream(Arrays.asList("a", "b", "c"));

// Create workers.
// Need one predicate per branch.
int totalPredicates = Integer
    .parseInt(props.getProperty(WORKER_PROCESSOR_COUNT));
Predicate<String, String>[] predicates = new Predicate[totalPredicates];
IntStream
    .range(0, totalPredicates)
    .forEach(i -> {
        predicates[i] = (key, value) ->
            key.hashCode() % totalPredicates == i;
    });

List<KStream<String, String>> forkStreams = Arrays.asList(source.branch(predicates));

// Hack - dump the number of messages processed per 2-second window
forkStreams
    .forEach(fork -> {
        KStream<Windowed<String>, Long> tbl =
            fork.transformValues(new SourceTopicValueTransformerSupplier())
                .selectKey((key, value) -> "foobar")
                .groupByKey()
                .windowedBy(TimeWindows.of(2000L))
                .count()
                .toStream();
        tbl.foreach((key, count) -> {
            String fromTo = String.format("%d-%d",
                key.window().start(),
                key.window().end());
            System.out.printf("(Thread %d, Index %d) %s - %s: %d\n",
                Thread.currentThread().getId(),
                forkStreams.indexOf(fork),
                fromTo, key.key(), count);
        });
    });
Here is a snippet of the output:
<snip>
(Thread 13, Index 1) 1542132126000-1542132128000 - foobar: 2870
(Thread 13, Index 1) 1542132024000-1542132026000 - foobar: 2955
(Thread 13, Index 1) 1542132106000-1542132108000 - foobar: 1914
(Thread 13, Index 1) 1542132054000-1542132056000 - foobar: 546
<snip>
(Thread 13, Index 2) 1542132070000-1542132072000 - foobar: 524
(Thread 13, Index 2) 1542132012000-1542132014000 - foobar: 2491
(Thread 13, Index 2) 1542132042000-1542132044000 - foobar: 261
(Thread 13, Index 2) 1542132022000-1542132024000 - foobar: 2823
<snip>
(Thread 13, Index 3) 1542132088000-1542132090000 - foobar: 2170
(Thread 13, Index 3) 1542132010000-1542132012000 - foobar: 2962
(Thread 13, Index 3) 1542132008000-1542132010000 - foobar: 2847
(Thread 13, Index 3) 1542132022000-1542132024000 - foobar: 2797
<snip>
(Thread 13, Index 4) 1542132046000-1542132048000 - foobar: 2846
(Thread 13, Index 4) 1542132096000-1542132098000 - foobar: 3216
(Thread 13, Index 4) 1542132108000-1542132110000 - foobar: 2696
(Thread 13, Index 4) 1542132010000-1542132012000 - foobar: 2881
<snip>
Any suggestions on how to place each fork stream on a different thread would be appreciated.
Answer 0 (score: 0)
The 11/27/2018 update answers the original question. That said, the solution did not work for me, because I want each fork stream to run as a separate thread. Calling stream.branch() creates multiple child streams within the same thread space, so all records of a partition are processed in the same thread space.
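(For context: Kafka Streams parallelizes by topic partitions and stream threads, not by branches. Raising num.stream.threads spreads the partition tasks across more threads, but all branches of a given task still run on that task's thread. A minimal config sketch, with placeholder values:)

// Threads are assigned per task (per group of partitions), not per branch.
// Application id, broker address, and thread count below are placeholders.
Properties props = new Properties();
props.put(StreamsConfig.APPLICATION_ID_CONFIG, "fork-demo");
props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
props.put(StreamsConfig.NUM_STREAM_THREADS_CONFIG, 4);
KafkaStreams streams = new KafkaStreams(builder.build(), props);
streams.start();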
To get sub-partition processing, I ended up combining the Kafka client (consumer) API with Java threads and concurrent queues.
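Not my exact code, but a minimal sketch of that idea, assuming String keys and values and kafka-clients 2.0+ for poll(Duration); the topic name, group id, worker count, and queue size are placeholders, and offset management and shutdown are omitted:

import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;

import java.time.Duration;
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;
import java.util.Properties;
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

public class SubPartitionProcessor {
    public static void main(String[] args) {
        int workerCount = 4; // placeholder: one queue + one thread per "fork"

        // One bounded queue and one worker thread per fork, so each fork
        // processes its records on its own thread.
        List<BlockingQueue<ConsumerRecord<String, String>>> queues = new ArrayList<>();
        for (int i = 0; i < workerCount; i++) {
            BlockingQueue<ConsumerRecord<String, String>> queue = new ArrayBlockingQueue<>(10_000);
            queues.add(queue);
            Thread worker = new Thread(() -> {
                try {
                    while (true) {
                        ConsumerRecord<String, String> rec = queue.take();
                        // Per-fork processing goes here, on this worker's own thread.
                    }
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                }
            }, "fork-worker-" + i);
            worker.setDaemon(true);
            worker.start();
        }

        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092"); // placeholder
        props.put("group.id", "sub-partition-demo");      // placeholder
        props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(Collections.singletonList("input-topic")); // placeholder
            while (true) {
                ConsumerRecords<String, String> records = consumer.poll(Duration.ofMillis(100));
                for (ConsumerRecord<String, String> rec : records) {
                    // Same idea as the branch predicates: route by key hash so that
                    // records with the same key always land on the same worker.
                    String key = rec.key();
                    int idx = Math.floorMod(key == null ? 0 : key.hashCode(), workerCount);
                    queues.get(idx).put(rec);
                }
            }
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
    }
}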