KafkaSpout tuple replay throws a NullPointerException

Asked: 2017-03-31 13:09:02

Tags: apache-kafka apache-storm bigdata

I am using Storm 1.0.1 and Kafka 0.10.0.0 with storm-kafka-client 1.0.3.

Please find my spout configuration code below.

kafkaConsumerProps.put(KafkaSpoutConfig.Consumer.KEY_DESERIALIZER, "org.apache.kafka.common.serialization.ByteArrayDeserializer");
kafkaConsumerProps.put(KafkaSpoutConfig.Consumer.VALUE_DESERIALIZER, "org.apache.kafka.common.serialization.ByteArrayDeserializer");

KafkaSpoutStreams kafkaSpoutStreams = new KafkaSpoutStreamsNamedTopics.Builder(new Fields(fieldNames), topics)
        .build();

KafkaSpoutRetryService retryService = new KafkaSpoutRetryExponentialBackoff(TimeInterval.microSeconds(500),
        TimeInterval.milliSeconds(2), Integer.MAX_VALUE, TimeInterval.seconds(10));

KafkaSpoutTuplesBuilder tuplesBuilder = new KafkaSpoutTuplesBuilderNamedTopics.Builder(new TestTupleBuilder(topics))
        .build();

KafkaSpoutConfig kafkaSpoutConfig = new KafkaSpoutConfig.Builder<String, String>(kafkaConsumerProps, kafkaSpoutStreams, tuplesBuilder, retryService)
        .setOffsetCommitPeriodMs(10_000)
        .setFirstPollOffsetStrategy(LATEST)
        .setMaxRetries(5)
        .setMaxUncommittedOffsets(250)
        .build();

When I fail a tuple, it is not replayed; instead the spout throws the error below. Please let me know why it throws a NullPointerException.

53501 [Thread-359-test-spout-executor[295 295]] ERROR o.a.s.util - Async loop died!
java.lang.NullPointerException
    at org.apache.storm.kafka.spout.KafkaSpout.doSeekRetriableTopicPartitions(KafkaSpout.java:260) ~[storm-kafka-client-1.0.3.jar:1.0.3]
    at org.apache.storm.kafka.spout.KafkaSpout.pollKafkaBroker(KafkaSpout.java:248) ~[storm-kafka-client-1.0.3.jar:1.0.3]
    at org.apache.storm.kafka.spout.KafkaSpout.nextTuple(KafkaSpout.java:203) ~[storm-kafka-client-1.0.3.jar:1.0.3]
    at org.apache.storm.daemon.executor$fn__7885$fn__7900$fn__7931.invoke(executor.clj:645) ~[storm-core-1.0.1.jar:1.0.1]
    at org.apache.storm.util$async_loop$fn__625.invoke(util.clj:484) [storm-core-1.0.1.jar:1.0.1]
    at clojure.lang.AFn.run(AFn.java:22) [clojure-1.8.0.jar:?]
    at java.lang.Thread.run(Thread.java:745) [?:1.8.0_102]
53501 [Thread-359-test-spout-executor[295 295]] ERROR o.a.s.d.executor - 
java.lang.NullPointerException
    at org.apache.storm.kafka.spout.KafkaSpout.doSeekRetriableTopicPartitions(KafkaSpout.java:260) ~[storm-kafka-client-1.0.3.jar:1.0.3]
    at org.apache.storm.kafka.spout.KafkaSpout.pollKafkaBroker(KafkaSpout.java:248) ~[storm-kafka-client-1.0.3.jar:1.0.3]
    at org.apache.storm.kafka.spout.KafkaSpout.nextTuple(KafkaSpout.java:203) ~[storm-kafka-client-1.0.3.jar:1.0.3]
    at org.apache.storm.daemon.executor$fn__7885$fn__7900$fn__7931.invoke(executor.clj:645) ~[storm-core-1.0.1.jar:1.0.1]
    at org.apache.storm.util$async_loop$fn__625.invoke(util.clj:484) [storm-core-1.0.1.jar:1.0.1]
    at clojure.lang.AFn.run(AFn.java:22) [clojure-1.8.0.jar:?]
    at java.lang.Thread.run(Thread.java:745) [?:1.8.0_102]
53527 [Thread-359-test-spout-executor[295 295]] ERROR o.a.s.util - Halting process: ("Worker died")
java.lang.RuntimeException: ("Worker died")
    at org.apache.storm.util$exit_process_BANG_.doInvoke(util.clj:341) [storm-core-1.0.1.jar:1.0.1]
    at clojure.lang.RestFn.invoke(RestFn.java:423) [clojure-1.8.0.jar:?]
    at org.apache.storm.daemon.worker$fn__8554$fn__8555.invoke(worker.clj:761) [storm-core-1.0.1.jar:1.0.1]
    at org.apache.storm.daemon.executor$mk_executor_data$fn__7773$fn__7774.invoke(executor.clj:271) [storm-core-1.0.1.jar:1.0.1]
    at org.apache.storm.util$async_loop$fn__625.invoke(util.clj:494) [storm-core-1.0.1.jar:1.0.1]
    at clojure.lang.AFn.run(AFn.java:22) [clojure-1.8.0.jar:?]
    at java.lang.Thread.run(Thread.java:745) [?:1.8.0_102]

Please find the complete spout consumer configuration below:

{key.deserializer=org.apache.kafka.common.serialization.ByteArrayDeserializer, value.deserializer=org.apache.kafka.common.serialization.ByteArrayDeserializer, group.id=test-group, ssl.keystore.location=C:/test.jks, bootstrap.servers=localhost:1000, auto.commit.interval.ms=1000, security.protocol=SSL, enable.auto.commit=true, ssl.truststore.location=C:/test1.jks, ssl.keystore.password=pass123, ssl.key.password=pass123, ssl.truststore.password=pass123, session.timeout.ms=30000, auto.offset.reset=latest}

2 Answers:

Answer 0 (score: 0)

Storm 1.0.1 shipped a beta-quality storm-kafka-client. A number of issues have been fixed since then, and a more stable version is available in the Storm 1.1 release, which works with Kafka 0.10 onwards. In your topology you can depend on storm-kafka-client version 1.1 together with the matching kafka-clients dependency. You do not need to upgrade the Storm cluster itself.
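For reference, the dependency change suggested above might look like the following pom.xml fragment. The exact version strings are assumptions (check Maven Central for the release that matches your setup), not something stated in the answer:

```xml
<!-- Newer spout implementation; independent of the Storm cluster version -->
<dependency>
  <groupId>org.apache.storm</groupId>
  <artifactId>storm-kafka-client</artifactId>
  <version>1.1.0</version>
</dependency>
<!-- Kafka client matching the 0.10.0.0 broker from the question -->
<dependency>
  <groupId>org.apache.kafka</groupId>
  <artifactId>kafka-clients</artifactId>
  <version>0.10.0.0</version>
</dependency>
```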

Answer 1 (score: 0)

I had enable.auto.commit=true; setting it to false solved my problem.
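A minimal sketch of that change, using plain string property keys so it stands alone (the helper class and method names here are illustrative, not from the question; only the `enable.auto.commit=false` entry is the actual fix):

```java
import java.util.Properties;

public class SpoutConsumerProps {
    // Builds the Kafka consumer properties for the spout. Auto-commit is
    // disabled so the spout manages offset commits itself (via its
    // configured commit period), which lets failed tuples be replayed.
    static Properties consumerProps() {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:1000");
        props.put("group.id", "test-group");
        props.put("key.deserializer", "org.apache.kafka.common.serialization.ByteArrayDeserializer");
        props.put("value.deserializer", "org.apache.kafka.common.serialization.ByteArrayDeserializer");
        // The fix from this answer: was "true" in the original config.
        props.put("enable.auto.commit", "false");
        return props;
    }

    public static void main(String[] args) {
        System.out.println(consumerProps().getProperty("enable.auto.commit"));
    }
}
```

With auto-commit on, the consumer commits offsets on its own schedule, so the spout's bookkeeping for retriable partitions can disagree with what Kafka has committed; letting the spout own the commits avoids that conflict.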