How do I add a custom StateStore to a Kafka Streams DSL processor?

Asked: 2016-10-24 14:59:27

Tags: apache-kafka-streams

For one of my Kafka Streams applications, I need to use features of both the DSL and the Processor API. My streaming application flow is

source -> selectKey -> filter -> aggregate (on a window) -> sink

After aggregating, I need to send a SINGLE aggregated message to the sink. So I define my topology as follows:

KStreamBuilder builder = new KStreamBuilder();
KStream<String, String> source = builder.stream(source_stream);
source.selectKey(new MyKeyValueMapper())
      .filterNot((k,v) -> k.equals("UnknownGroup"))
      .process(() -> new MyProcessor());

I define a custom StateStore and register it with my processor, as shown below:

public class MyProcessor implements Processor<String, String> {

    private ProcessorContext context = null;
    Serde<HashMapStore> invSerde = Serdes.serdeFrom(invJsonSerializer, invJsonDeserializer);


    KeyValueStore<String, HashMapStore> invStore = (KeyValueStore) Stores.create("invStore")
        .withKeys(Serdes.String())
        .withValues(invSerde)
        .persistent()
        .build()
        .get();

    public MyProcessor() {
    }

    @Override
    public void init(ProcessorContext context) {
        this.context = context;
        this.context.register(invStore, false, null); // register the store
        this.context.schedule(10 * 60 * 1000L);
    }

    @Override
    public void process(String partitionKey, String message) {
        try {
            MessageModel smb = new MessageModel(message);
            HashMapStore oldStore = invStore.get(partitionKey);
            if (oldStore == null) {
                oldStore = new HashMapStore();
            }
            oldStore.addSmb(smb);
            invStore.put(partitionKey, oldStore);
        } catch (Exception e) {
           e.printStackTrace();
        }
    }

    @Override
    public void punctuate(long timestamp) {
       // processes all the messages in the state store and sends single aggregate message
    }


    @Override
    public void close() {
        invStore.close();
    }
}

When I run the application, I get a java.lang.NullPointerException:

Exception in thread "StreamThread-18" java.lang.NullPointerException
    at org.apache.kafka.streams.state.internals.MeteredKeyValueStore.flush(MeteredKeyValueStore.java:167)
    at org.apache.kafka.streams.processor.internals.ProcessorStateManager.flush(ProcessorStateManager.java:332)
    at org.apache.kafka.streams.processor.internals.StreamTask.commit(StreamTask.java:252)
    at org.apache.kafka.streams.processor.internals.StreamThread.commitOne(StreamThread.java:446)
    at org.apache.kafka.streams.processor.internals.StreamThread.commitAll(StreamThread.java:434)
    at org.apache.kafka.streams.processor.internals.StreamThread.maybeCommit(StreamThread.java:422)
    at org.apache.kafka.streams.processor.internals.StreamThread.runLoop(StreamThread.java:340)
    at org.apache.kafka.streams.processor.internals.StreamThread.run(StreamThread.java:218)

Any idea what is going wrong here?

1 Answer:

Answer (score: 11):

You need to register the store outside of your processor, using StreamsBuilder (or KStreamBuilder in older releases). First create the store, then register it with StreamsBuilder (KStreamBuilder), and provide the store name when adding the processor to connect the processor and the store.

StreamsBuilder builder = new StreamsBuilder();

// create store
StoreBuilder storeBuilder = Stores.keyValueStoreBuilder(
    Stores.persistentKeyValueStore("invStore"),
    Serdes.String(),
    invSerde);
// register store
builder.addStateStore(storeBuilder);

KStream<String, String> source = builder.stream(source_stream);
source.selectKey(new MyKeyValueMapper())
        .filterNot((k,v) -> k.equals("UnknownGroup"))
        .process(() -> new MyProcessor(), "invStore"); // connect store to processor by providing store name


// older API:

KStreamBuilder builder = new KStreamBuilder();

// create store
StateStoreSupplier storeSupplier = Stores.create("invStore")
    .withKeys(Serdes.String())
    .withValues(invSerde)
    .persistent()
    .build();
// register store
builder.addStateStore(storeSupplier);

KStream<String, String> source = builder.stream(source_stream);
source.selectKey(new MyKeyValueMapper())
        .filterNot((k,v) -> k.equals("UnknownGroup"))
        .process(() -> new MyProcessor(), "invStore"); // connect store to processor by providing store name
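
With the store registered on the builder, the processor should no longer create, register, or close a store itself; it simply looks the store up by name in init() via ProcessorContext.getStateStore(). Below is a minimal sketch of the adjusted MyProcessor (reusing the HashMapStore type and the "invStore" name from the question; the aggregation details are omitted):

public class MyProcessor implements Processor<String, String> {

    private ProcessorContext context;
    private KeyValueStore<String, HashMapStore> invStore;

    @Override
    public void init(ProcessorContext context) {
        this.context = context;
        // look up the store that was registered via builder.addStateStore(...);
        // no manual context.register(...) call is needed
        this.invStore = (KeyValueStore<String, HashMapStore>) context.getStateStore("invStore");
        this.context.schedule(10 * 60 * 1000L);
    }

    @Override
    public void process(String partitionKey, String message) {
        // same aggregation logic as before, writing into invStore
    }

    @Override
    public void punctuate(long timestamp) {
        // process all messages in the state store and send a single aggregate message
    }

    @Override
    public void close() {
        // do not close the store here; Kafka Streams manages the store's lifecycle
    }
}

Once the store is attached to the topology this way, Kafka Streams creates, initializes, flushes, and closes it for you, which is why the manual register()/close() calls in the original processor are unnecessary.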