Is there any Kafka configuration that limits the number of messages a Kafka Consumer can read?

Date: 2018-01-15 10:41:56

Tags: apache-kafka apache-storm kafka-consumer-api

My problem is that my Kafka consumer always hangs after receiving roughly 10,000 messages from Kafka. When I restart the consumer, it starts reading again and then hangs once more after about 10,000 messages. Even when I use only 1 of all the partitions, the Kafka Consumer still stops reading after 10,000 messages.

P/S: If I use a KafkaSpout to read the messages from Kafka instead, the KafkaSpout also stops emitting after roughly 30,000 messages.

Here is my code:

    // Old high-level (ZooKeeper-based) consumer API; offsets are auto-committed every second
    Properties props = new Properties();
    props.put("group.id", "Tornado");
    props.put("zookeeper.connect", TwitterPropertiesLoader.getInstance().getZookeeperServer());
    props.put("zookeeper.connection.timeout.ms", "200000");
    props.put("auto.offset.reset", "smallest");
    props.put("auto.commit.enable", "true");
    props.put("auto.commit.interval.ms", "1000");

    ConsumerConfig consumerConfig = new ConsumerConfig(props);
    final ConsumerConnector consumer = Consumer.createJavaConsumerConnector(consumerConfig);

    // One stream (consumer thread) for the topic
    Map<String, Integer> topicCountMap = new HashMap<String, Integer>();
    topicCountMap.put(TwitterConstant.Kafka.TWITTER_STREAMING_TOPIC, 1);

    Map<String, List<KafkaStream<byte[], byte[]>>> consumerMap = consumer.createMessageStreams(topicCountMap);
    List<KafkaStream<byte[], byte[]>> streams = consumerMap.get(TwitterConstant.Kafka.TWITTER_STREAMING_TOPIC);

    final KafkaStream<byte[], byte[]> stream0 = streams.get(0);
    logger.info("Client ID=" + stream0.clientId());

    // Blocking iteration over the stream: this is where the consumer hangs
    for (MessageAndMetadata<byte[], byte[]> message : stream0) {
        try {
            String messageReceived = new String(message.message(), "UTF-8");
            logger.info("partition = " + message.partition() + ", offset=" + message.offset() + " => " + messageReceived);
            //consumer.commitOffsets(true);
            writeMessageToDatabase(messageReceived);
        } catch (UnsupportedEncodingException e) {
            e.printStackTrace();
        }
    }
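One way to narrow this down is to check whether the loop is stuck inside writeMessageToDatabase or whether the stream itself has stopped delivering messages: the old high-level consumer supports a consumer.timeout.ms setting that makes the stream iterator throw a ConsumerTimeoutException instead of blocking forever. Below is a minimal diagnostic sketch, not part of the original question; it assumes the same 0.8.x high-level consumer API as above (ConsumerIterator and ConsumerTimeoutException come from the kafka.consumer package), and it reuses the same props, topic constant, and logger:

    // Diagnostic sketch (assumption: same 0.8.x high-level consumer API as above).
    // consumer.timeout.ms must be set BEFORE new ConsumerConfig(props) is created;
    // with it, the stream iterator throws instead of blocking forever.
    props.put("consumer.timeout.ms", "10000");

    ConsumerConfig diagConfig = new ConsumerConfig(props);
    ConsumerConnector diagConsumer = Consumer.createJavaConsumerConnector(diagConfig);
    Map<String, Integer> topicCount = new HashMap<String, Integer>();
    topicCount.put(TwitterConstant.Kafka.TWITTER_STREAMING_TOPIC, 1);
    KafkaStream<byte[], byte[]> stream =
            diagConsumer.createMessageStreams(topicCount)
                        .get(TwitterConstant.Kafka.TWITTER_STREAMING_TOPIC).get(0);

    ConsumerIterator<byte[], byte[]> it = stream.iterator();
    try {
        while (it.hasNext()) {                      // blocks at most 10s per call
            MessageAndMetadata<byte[], byte[]> m = it.next();
            logger.info("partition=" + m.partition() + ", offset=" + m.offset());
        }
    } catch (ConsumerTimeoutException e) {
        // No message arrived within 10s: the stream itself stopped delivering,
        // so the hang is not caused by writeMessageToDatabase().
        logger.info("No messages received for 10s");
    } finally {
        diagConsumer.shutdown();                    // deregisters the consumer from ZooKeeper
    }

If the timeout fires at the same ~10,000-message mark, the problem is on the fetch/rebalance side rather than in the message-processing code.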

EDIT: Here is the log file. After 10,000 messages there seems to be something like a consumer rebalance (not sure), but the KafkaStream cannot continue reading messages afterwards:

[screenshot of the consumer log around the rebalance]

0 Answers:

There are no answers yet.