NullPointerException in RocksDB when joining two Kafka KTables

Date: 2016-07-17 20:18:27

Tags: apache-kafka apache-kafka-streams

I am trying to join two KTables using the Kafka Streams DSL as follows:

KTable<String, String> source = builder.table("stream-source");
KTable<String, String> target = builder.table("stream-target");
source.join(target, new ValueJoiner<String, String, String>() {
    public String apply(String value1, String value2) {
        return value1 + ":" + value2;
    }
});
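The ValueJoiner passed to `join` is a pure function of the two table values, so its logic can be checked in isolation without a running Kafka cluster. A minimal sketch (the class name `JoinLogic` and the sample values are hypothetical, not from the question):

```java
import java.util.function.BiFunction;

public class JoinLogic {
    // Same logic as the ValueJoiner in the question: concatenate the
    // value from the source table and the value from the target table
    // with a ":" separator.
    static final BiFunction<String, String, String> JOINER =
        (v1, v2) -> v1 + ":" + v2;

    public static void main(String[] args) {
        // prints "alice:alice-target"
        System.out.println(JOINER.apply("alice", "alice-target"));
    }
}
```

Since Java 8, the same joiner can be passed to `KTable.join` as the lambda `(v1, v2) -> v1 + ":" + v2` instead of an anonymous `ValueJoiner` class.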

I made sure that neither the keys nor the values are null:

Producer<String, String> producer = new KafkaProducer<String, String>(props);
for(int i = 0; i < PERSONS_SOURCE.length; i++) {
    producer.send(new ProducerRecord<String, String>("stream-source", Long.toString(i + 1L), PERSONS_SOURCE[i]));
}
for(int i = 0; i < PERSONS_TARGET.length; i++) {
    producer.send(new ProducerRecord<String, String>("stream-target", Long.toString(i + 1L), PERSONS_TARGET[i]));
}
producer.close();

However, the application reports a NullPointerException related to partitions in the RocksDB layer.


    [2016-07-17 21:58:04,682] ERROR User provided listener org.apache.kafka.streams.processor.internals.StreamThread$1 for group streams-persons-2 failed on partition assignment (org.apache.kafka.clients.consumer.internals.ConsumerCoordinator)
    java.lang.NullPointerException
        at org.rocksdb.RocksDB.put(RocksDB.java:432)
        at org.apache.kafka.streams.state.internals.RocksDBStore.putInternal(RocksDBStore.java:299)
        at org.apache.kafka.streams.state.internals.RocksDBStore.access$200(RocksDBStore.java:62)
        at org.apache.kafka.streams.state.internals.RocksDBStore$3.restore(RocksDBStore.java:206)
        at org.apache.kafka.streams.processor.internals.ProcessorStateManager.restoreActiveState(ProcessorStateManager.java:245)
        at org.apache.kafka.streams.processor.internals.ProcessorStateManager.register(ProcessorStateManager.java:210)
        at org.apache.kafka.streams.processor.internals.ProcessorContextImpl.register(ProcessorContextImpl.java:116)
        at org.apache.kafka.streams.state.internals.RocksDBStore.init(RocksDBStore.java:202)

1 Answer:

Answer 0: (score: 1)

Found that the problem was caused by the topics being created implicitly by the application code rather than with the command:

kafka-topics --create --topic stream-a --replication-factor 1 --partitions 1

It seems the join needs the partition information of the topics in order to work.
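Applied to the topics in the question, that would mean pre-creating both input topics with the same partition count before starting the Streams application. A sketch, assuming the `kafka-topics` CLI shipped with Kafka at the time (which also required a `--zookeeper` connection argument; the `localhost:2181` address below is an assumption):

```shell
# Create both source topics with matching partition counts so the
# KTable-KTable join sees co-partitioned inputs.
kafka-topics --create --zookeeper localhost:2181 \
  --topic stream-source --replication-factor 1 --partitions 1
kafka-topics --create --zookeeper localhost:2181 \
  --topic stream-target --replication-factor 1 --partitions 1
```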