Does the ConsumerConnector class's commitOffsets method hang in an infinite loop?

Asked: 2015-05-06 14:27:23

Tags: apache-kafka kafka-consumer-api

I have a Kafka consumer whose skeleton looks like this:

    private static ConsumerConfig createConsumerConfig(String connStr)
    {
        Properties externalConsumerProperties = ConsumerProperties.getConsumerProperties();

        Properties internalConsumerProperties = new Properties();

        // Properties below must not be changeable externally
        internalConsumerProperties.put("zookeeper.connect", connStr);
        internalConsumerProperties.put("group.id", "sm-publisher");
        internalConsumerProperties.put("zookeeper.session.timeout.ms", "400");
        internalConsumerProperties.put("zookeeper.sync.time.ms", "200");
        internalConsumerProperties.put("auto.commit.enable", "false");
        internalConsumerProperties.put("consumer.timeout.ms", "15000");
        internalConsumerProperties.put("auto.offset.reset", "smallest");

        Properties props = new Properties();

        if (externalConsumerProperties != null)
        {
            props.putAll(externalConsumerProperties);
        }

        props.putAll(internalConsumerProperties);

        return new ConsumerConfig(props);
    }

    public static void main(String[] args)
    {
        ConsumerConnector connector =
            kafka.consumer.Consumer.createJavaConsumerConnector(createConsumerConfig(connectionString));
        KafkaStream<byte[], byte[]> kafkaStream = createStream(connector);
        ConsumerIterator<byte[], byte[]> it = kafkaStream.iterator();
        while (it.hasNext())
        {
            // process message
            // ...............
            connector.commitOffsets();
        }
    }

This consumer works fine under normal conditions. But if the connection to the ZooKeeper server drops after a message has been fetched and before the offset is committed, the call to commitOffsets gets stuck in an infinite loop. Is there any way to break out of it?
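One general-purpose workaround (not specific to Kafka, and only a sketch) is to run the potentially blocking commit on a worker thread and bound it with a timeout, so the consume loop itself can never hang forever. The wrapper below is self-contained; in the real consumer you would pass `() -> connector.commitOffsets()` as the task instead of the placeholder used in `main`. Note that cancelling the future only interrupts the worker thread; whether the stuck commit actually unwinds depends on whether the client code responds to interruption.

```java
import java.util.concurrent.ExecutionException;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.TimeoutException;

public class CommitWithTimeout {
    // Run a potentially blocking task (e.g. connector.commitOffsets())
    // on a worker thread; give up after timeoutMs so the caller's loop
    // is never blocked indefinitely. Returns true if the task finished
    // in time, false if it timed out or failed.
    public static boolean runWithTimeout(Runnable task, long timeoutMs) {
        ExecutorService executor = Executors.newSingleThreadExecutor();
        try {
            Future<?> future = executor.submit(task);
            future.get(timeoutMs, TimeUnit.MILLISECONDS);
            return true;
        } catch (TimeoutException e) {
            return false;                      // commit is stuck; give up
        } catch (InterruptedException | ExecutionException e) {
            return false;
        } finally {
            executor.shutdownNow();            // interrupt the worker thread
        }
    }

    public static void main(String[] args) {
        // Placeholder task; a real consumer would use
        // () -> connector.commitOffsets() here.
        boolean ok = runWithTimeout(() -> {}, 1000);
        System.out.println(ok ? "committed" : "commit timed out");
    }
}
```

On a timeout the caller can then decide whether to retry the commit, rebuild the connector, or shut down, instead of spinning inside commitOffsets.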

0 Answers:

No answers