I defined a custom state store for a custom Transformer (see below), based on this example:
https://github.com/apache/kafka/blob/trunk/streams/examples/src/main/java/org/apache/kafka/streams/examples/wordcount/WordCountProcessorDemo.java
I get the exception below. I'm not sure why the internal topic "test_01-HOUSE-changelog" was created with a single partition and a single replica instead of the 2 partitions of the source topic "test". What am I missing here?
public class KafkaStream {
    public static void main(String[] args) {
        StateStoreSupplier houseStore = Stores.create("HOUSE")
                .withKeys(Serdes.String())
                .withValues(houseSerde)
                .persistent()
                .build();

        KStreamBuilder kstreamBuilder = new KStreamBuilder();
        kstreamBuilder.addStateStore(houseStore);
        // ...
        KStream<String, String> testStream = kstreamBuilder.stream(Serdes.String(), Serdes.String(), "test");
        testStream.transform(HouseDetail::new, houseStore.name());
        // ...
    }
}
class HouseDetail implements Transformer<String, String, KeyValue<String, House>> {

    private KeyValueStore<String, House> usageStore;

    @SuppressWarnings("unchecked")
    @Override
    public void init(ProcessorContext context) {
        this.usageStore = (KeyValueStore<String, House>) context.getStateStore("HOUSE");
    }
    // ...
}
[2018-05-14 23:38:09,391] ERROR stream-thread [StreamThread-1] Failed to create an active task 0_1: (org.apache.kafka.streams.processor.internals.StreamThread:666)
org.apache.kafka.streams.errors.StreamsException: task [0_1] Store HOUSE's change log (test_01-HOUSE-changelog) does not contain partition 1
at org.apache.kafka.streams.processor.internals.ProcessorStateManager.register(ProcessorStateManager.java:185)
at org.apache.kafka.streams.processor.internals.ProcessorContextImpl.register(ProcessorContextImpl.java:123)
at org.apache.kafka.streams.state.internals.RocksDBStore.init(RocksDBStore.java:169)
at org.apache.kafka.streams.state.internals.MeteredKeyValueStore.init(MeteredKeyValueStore.java:85)
at org.apache.kafka.streams.processor.internals.AbstractTask.initializeStateStores(AbstractTask.java:81)
at org.apache.kafka.streams.processor.internals.StreamTask.<init>(StreamTask.java:119)
at org.apache.kafka.streams.processor.internals.StreamThread.createStreamTask(StreamThread.java:633)
at org.apache.kafka.streams.processor.internals.StreamThread.addStreamTasks(StreamThread.java:660)
at org.apache.kafka.streams.processor.internals.StreamThread.access$100(StreamThread.java:69)
at org.apache.kafka.streams.processor.internals.StreamThread$1.onPartitionsAssigned(StreamThread.java:124)
at org.apache.kafka.clients.consumer.internals.ConsumerCoordinator.onJoinComplete(ConsumerCoordinator.java:228)
at org.apache.kafka.clients.consumer.internals.AbstractCoordinator.joinGroupIfNeeded(AbstractCoordinator.java:313)
at org.apache.kafka.clients.consumer.internals.AbstractCoordinator.ensureActiveGroup(AbstractCoordinator.java:277)
at org.apache.kafka.clients.consumer.internals.ConsumerCoordinator.poll(ConsumerCoordinator.java:259)
at org.apache.kafka.clients.consumer.KafkaConsumer.pollOnce(KafkaConsumer.java:1013)
at org.apache.kafka.clients.consumer.KafkaConsumer.poll(KafkaConsumer.java:979)
at org.apache.kafka.streams.processor.internals.StreamThread.runLoop(StreamThread.java:407)
at org.apache.kafka.streams.processor.internals.StreamThread.run(StreamThread.java:242)
The exception above occurs even after disabling automatic topic creation. Topic details:
$ ./kafka-topics.sh --zookeeper localhost:2181 --topic test --describe
Topic:test PartitionCount:2 ReplicationFactor:3 Configs:
Topic: test Partition: 0 Leader: 1001 Replicas: 1001,1002,1003 Isr: 1002,1001,1003
Topic: test Partition: 1 Leader: 1002 Replicas: 1002,1003,1001 Isr: 1002,1001,1003
$ ./kafka-topics.sh --zookeeper localhost:2181 --topic test_01-HOUSE-changelog --describe
Topic:test_01-HOUSE-changelog PartitionCount:1 ReplicationFactor:1 Configs:
Topic: test_01-HOUSE-changelog Partition: 0 Leader: 1001 Replicas: 1001 Isr: 1001
Answer 0 (score: 1)
Kafka Streams will not automatically change the number of partitions if the topic already exists with one partition. From the information you provided it is not clear why the topic was created with a single partition in the first place. One possibility is that your input topic had only one partition when you first started the application, and you added the second partition to the input topic afterwards.
You need to clean up your application using the application reset tool as described in the docs (note that this is a two-step process): https://docs.confluent.io/current/streams/developer-guide/app-reset-tool.html
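For reference, here is a minimal sketch of that two-step reset, assuming the application.id is test_01 (inferred from the changelog topic name test_01-HOUSE-changelog) and default settings; exact option names can differ slightly between Kafka versions:

$ ./kafka-streams-application-reset.sh --application-id test_01 --input-topics test --bootstrap-servers localhost:9092

The second step is a local cleanup on every application instance, either by calling KafkaStreams#cleanUp() before start() or by deleting that instance's local state directory (state.dir, /tmp/kafka-streams by default):

$ rm -rf /tmp/kafka-streams/test_01

After the reset, restarting the application should recreate test_01-HOUSE-changelog with the same number of partitions as the input topic.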