Irregular NPE in Kafka Streams DSL when fetching data from the state store

Time: 2018-02-20 12:11:06

Tags: apache-kafka apache-kafka-streams

I am using the Kafka Streams DSL API, version 1.0.0, and I am getting an irregular NullPointerException while fetching data from the state store. The stack trace also points to the null check that fetches data from the state store by key. I am using the transform method (via a TransformerSupplier) to process the values. The NPE does not occur when Kafka is started after a cleanup, but on subsequent runs with newly created input and output topics and a new state store, the NPE occurs while fetching each unique key from the store.

import org.apache.kafka.streams.KeyValue;
import org.apache.kafka.streams.kstream.Transformer;
import org.apache.kafka.streams.kstream.TransformerSupplier;
import org.apache.kafka.streams.processor.ProcessorContext;
import org.apache.kafka.streams.state.KeyValueStore;

public class POTransformerSupplier implements TransformerSupplier<String, ConsolidatedPO, KeyValue<String, ConsolidatedPO>> {

    private final String store;

    public POTransformerSupplier(String store) {
        this.store = store;
    }

    @Override
    public Transformer<String, ConsolidatedPO, KeyValue<String, ConsolidatedPO>> get() {
        return new Transformer<String, ConsolidatedPO, KeyValue<String, ConsolidatedPO>>() {

            private ProcessorContext context;

            @Override
            public void init(ProcessorContext context) {
                this.context = context;
            }

            @Override
            public KeyValue<String, ConsolidatedPO> transform(String key, ConsolidatedPO value) {
                try {
                    @SuppressWarnings("unchecked")
                    KeyValueStore<String, ConsolidatedPO> state =
                            (KeyValueStore<String, ConsolidatedPO>) context.getStateStore(store);
                    ConsolidatedPO lastPo = new ConsolidatedPO();
                    // The stack trace below points at this get() call
                    if (state != null && state.get(key) != null) {
                        lastPo = state.get(key);
                    }
                    // processing lastPo here
                } catch (Exception e) {
                    e.printStackTrace();
                }
                return null;
            }

            @Override
            public KeyValue<String, ConsolidatedPO> punctuate(long timestamp) {
                return null;
            }

            @Override
            public void close() {
            }
        };
    }
}

This is where I call my transform method:

StoreBuilder<KeyValueStore<String, ConsolidatedPO>> wordCountsStore = Stores.keyValueStoreBuilder(
        Stores.persistentKeyValueStore(store),
        Serdes.String(),
        consPOSerde);

StreamsBuilder builder = new StreamsBuilder();
builder.addStateStore(wordCountsStore);

KStream<String, ConsolidatedPO> textLines = builder.stream(inputTopic);
KStream<String, ConsolidatedPO> transformed =
        textLines.transform(new POTransformerSupplier(store), store);
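
consPOSerde and the startup boilerplate are omitted above; a rough sketch continuing the snippet, assuming a Jackson-backed Serde built around the ConsolidatedPODeserializer from the stack trace (the serializer class name, application id, and broker address are placeholders, not code from the post), would be:

// (needs java.util.Properties, org.apache.kafka.common.serialization.*, org.apache.kafka.streams.*)

// Hypothetical value Serde; ConsolidatedPOSerializer is an assumed counterpart to the
// ConsolidatedPODeserializer that appears in the stack trace below.
Serde<ConsolidatedPO> consPOSerde = Serdes.serdeFrom(
        new ConsolidatedPOSerializer(),
        new ConsolidatedPODeserializer());

// Standard Kafka Streams 1.0.0 startup; values are placeholders.
Properties props = new Properties();
props.put(StreamsConfig.APPLICATION_ID_CONFIG, "po-transformer-app");
props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");

KafkaStreams streams = new KafkaStreams(builder.build(), props);
streams.start();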

The NPE mostly shows up at startup, when more than 10,000 unique keys are fed in, but with no specific pattern.

java.lang.NullPointerException
        at com.fasterxml.jackson.core.JsonFactory.createParser(JsonFactory.java:864)
        at com.fasterxml.jackson.databind.ObjectMapper.readValue(ObjectMapper.java:3079)
        at transformPO.Serdes.ConsolidatedPODeserializer.deserialize(ConsolidatedPODeserializer.java:31)
        at transformPO.Serdes.ConsolidatedPODeserializer.deserialize(ConsolidatedPODeserializer.java:1)
        at org.apache.kafka.streams.state.StateSerdes.valueFrom(StateSerdes.java:158)
        at org.apache.kafka.streams.state.internals.MeteredKeyValueBytesStore$1.outerValue(MeteredKeyValueBytesStore.java:83)
        at org.apache.kafka.streams.state.internals.MeteredKeyValueBytesStore$1.outerValue(MeteredKeyValueBytesStore.java:57)
        at org.apache.kafka.streams.state.internals.InnerMeteredKeyValueStore.get(InnerMeteredKeyValueStore.java:184)
        at org.apache.kafka.streams.state.internals.MeteredKeyValueBytesStore.get(MeteredKeyValueBytesStore.java:116)
        at transformPO.POTransformerSupplier$1.transform(POTransformerSupplier.java:82)
        at transformPO.POTransformerSupplier$1.transform(POTransformerSupplier.java:1)
        at org.apache.kafka.streams.kstream.internals.KStreamTransform$KStreamTransformProcessor.process(KStreamTransform.java:56)
        at org.apache.kafka.streams.processor.internals.ProcessorNode$1.run(ProcessorNode.java:46)
        at org.apache.kafka.streams.processor.internals.StreamsMetricsImpl.measureLatencyNs(StreamsMetricsImpl.java:208)
        at org.apache.kafka.streams.processor.internals.ProcessorNode.process(ProcessorNode.java:124)
        at org.apache.kafka.streams.processor.internals.ProcessorContextImpl.forward(ProcessorContextImpl.java:85)
        at org.apache.kafka.streams.processor.internals.SourceNode.process(SourceNode.java:80)
        at org.apache.kafka.streams.processor.internals.StreamTask.process(StreamTask.java:216)
        at org.apache.kafka.streams.processor.internals.AssignedTasks.process(AssignedTasks.java:403)
        at org.apache.kafka.streams.processor.internals.TaskManager.process(TaskManager.java:317)
        at org.apache.kafka.streams.processor.internals.StreamThread.processAndMaybeCommit(StreamThread.java:942)
        at org.apache.kafka.streams.processor.internals.StreamThread.runOnce(StreamThread.java:822)
        at org.apache.kafka.streams.processor.internals.StreamThread.runLoop(StreamThread.java:774)
        at org.apache.kafka.streams.processor.internals.StreamThread.run(StreamThread.java:744)
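
Judging from the top frames, the NPE is raised inside ConsolidatedPODeserializer: JsonFactory.createParser throws when it is handed a null byte array, which is most likely the raw value the store returns for a key it does not yet contain. The Kafka Deserializer contract explicitly allows the byte array to be null. The deserializer itself is not shown above; a minimal null-guarded sketch, assuming it is a plain Jackson ObjectMapper wrapper, would be:

import java.util.Map;
import org.apache.kafka.common.serialization.Deserializer;
import com.fasterxml.jackson.databind.ObjectMapper;

public class ConsolidatedPODeserializer implements Deserializer<ConsolidatedPO> {

    private final ObjectMapper mapper = new ObjectMapper();

    @Override
    public void configure(Map<String, ?> configs, boolean isKey) {
        // nothing to configure
    }

    @Override
    public ConsolidatedPO deserialize(String topic, byte[] data) {
        // A key that is absent from the store arrives here as null bytes;
        // returning null instead of handing them to Jackson avoids the NPE.
        if (data == null) {
            return null;
        }
        try {
            return mapper.readValue(data, ConsolidatedPO.class);
        } catch (Exception e) {
            throw new RuntimeException("Failed to deserialize ConsolidatedPO", e);
        }
    }

    @Override
    public void close() {
        // no resources to release
    }
}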

0 Answers:

There are no answers yet.