Kafka Consumer: committing manually based on a condition

Asked: 2017-07-24 04:03:02

Tags: apache-kafka kafka-consumer-api spring-kafka

I want my @KafkaListener consumer to commit only when a certain condition is met. Say a topic receives the following data from a producer: "Message0" at offset [0], "Message1" at offset [1].

These are received by the consumer and committed with the help of acknowledgement.acknowledge().

Then the following messages arrive on the topic:

"Message2" at offset [2], "Message3" at offset [3]

The running consumer receives this data. This time the condition fails, so these offsets are not committed.

Since "Message2" and "Message3" were never committed, I expect them to be redelivered to some consumer from the same consumer group, even as new data keeps arriving on the topic. But that does not happen: the consumer only receives new messages.

When I restart my consumer, I do get Message2 and Message3. But I want this to happen while the consumer is running.

The code is below. KafkaConsumerConfig class:


@Configuration
@EnableKafka
public class KafkaConsumerConfig {
    @Bean
    KafkaListenerContainerFactory<ConcurrentMessageListenerContainer<String, String>> kafkaListenerContainerFactory() {
        ConcurrentKafkaListenerContainerFactory<String, String> factory = new ConcurrentKafkaListenerContainerFactory<>();
        factory.setConsumerFactory(consumerFactory());
        factory.setConcurrency(3);
        factory.setBatchListener(true);
        factory.getContainerProperties().setAckMode(AbstractMessageListenerContainer.AckMode.MANUAL_IMMEDIATE);
        factory.getContainerProperties().setSyncCommits(true);
        return factory;
    }

    @Bean
    public ConsumerFactory<String, String> consumerFactory() {
        return new DefaultKafkaConsumerFactory<>(consumerConfigs());
    }

    @Bean
    public Map<String, Object> consumerConfigs() {
        Map<String, Object> propsMap = new HashMap<>();
        propsMap.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        propsMap.put(ConsumerConfig.ENABLE_AUTO_COMMIT_CONFIG, false);
        propsMap.put(ConsumerConfig.AUTO_COMMIT_INTERVAL_MS_CONFIG, "100");
        propsMap.put(ConsumerConfig.SESSION_TIMEOUT_MS_CONFIG, "15000");
        propsMap.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);
        propsMap.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);
        propsMap.put(ConsumerConfig.GROUP_ID_CONFIG, "group1");
        propsMap.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "latest");
        propsMap.put(ConsumerConfig.MAX_POLL_RECORDS_CONFIG,"1");
        return propsMap;
    }

    @Bean
    public Listener listener() {
        return new Listener();
    }
}

Listener class:
public class Listener {
    public CountDownLatch countDownLatch0 = new CountDownLatch(3);
    private Logger LOGGER = LoggerFactory.getLogger(Listener.class);
    static int count0 =0;


    @KafkaListener(topics = "abcdefghi", group = "group1", containerFactory = "kafkaListenerContainerFactory")
    public void listenPartition0(String data, @Header(KafkaHeaders.RECEIVED_PARTITION_ID) List<Integer> partitions,
                                 @Header(KafkaHeaders.OFFSET) List<Long> offsets, Acknowledgment acknowledgment) throws InterruptedException {
        count0 = count0 + 1;
        LOGGER.info("start consumer 0");

        LOGGER.info("received message via consumer 0='{}' with partition-offset='{}'", data, partitions + "-" + offsets);
        if (count0%2 ==0)
            acknowledgment.acknowledge();
        LOGGER.info("end of consumer 0");
    }
}

How can I achieve the result I want?

1 Answer:

Answer 0 (score: 0):

That behavior is correct. The offset is just a number that the consumer instance tracks in its own memory; the consumer's fetch position keeps advancing with every poll regardless of commits. A committed offset only matters when a consumer newly arrives at the same partition for the group, which is why it works as you expect when you restart the application or when a rebalance happens for the group.
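To make this concrete, here is a small plain-Java sketch (no Kafka dependency; the class and method names are made up for illustration) modeling how a consumer's in-memory position advances independently of the committed offset, and why the committed offset only takes effect on restart or rebalance:

```java
// Illustrative model only: PartitionState is a hypothetical name, not a Kafka class.
class PartitionState {
    private long position = 0;   // in-memory fetch position; advances on every poll
    private long committed = 0;  // last committed offset; only consulted on restart/rebalance

    // Each "poll" returns the next record's offset and advances the position,
    // whether or not the previous record was committed.
    long poll() {
        return position++;
    }

    // Committing offset N stores N+1, the next offset to consume (Kafka convention).
    void commit(long offset) {
        committed = offset + 1;
    }

    // A restart (or rebalance) is the only point where the position
    // is reset from the committed offset.
    void restart() {
        position = committed;
    }
}
```

Walking through the question's scenario with this model: Message0 and Message1 are polled and committed, Message2 and Message3 are polled but not committed, yet the position still moves past them; only after `restart()` does the consumer see Message2 again.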

To make it behave the way you want, consider implementing ConsumerSeekAware in your listener and calling ConsumerSeekCallback.seek() for the offset you want to re-consume on the next poll cycle.

http://docs.spring.io/spring-kafka/docs/2.0.0.M2/reference/html/_reference.html#seek

public class Listener implements ConsumerSeekAware {

    private final ThreadLocal<ConsumerSeekCallback> seekCallBack = new ThreadLocal<>();

    @Override
    public void registerSeekCallback(ConsumerSeekCallback callback) {
        this.seekCallBack.set(callback);
    }

    @KafkaListener()
    public void listen(...) {
        this.seekCallBack.get().seek(topic, partition, 0);
    }

}
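One detail the sketch above leaves open is where to seek back to: ConsumerSeekAware only hands you the callback, so your own code has to remember the earliest delivered-but-unacknowledged offset per partition. A minimal plain-Java sketch of that bookkeeping (the SeekTracker class and its method names are hypothetical, not part of Spring Kafka):

```java
import java.util.HashMap;
import java.util.Map;

// Tracks, per partition, the earliest offset that was delivered but not
// acknowledged, so the listener knows where to seek back to.
class SeekTracker {
    private final Map<Integer, Long> earliestUnacked = new HashMap<>();

    // Record a delivered-but-unacknowledged offset; keep the smallest per partition.
    void recordUnacked(int partition, long offset) {
        earliestUnacked.merge(partition, offset, Math::min);
    }

    // A successful acknowledgment clears the pending state for the partition.
    void acked(int partition) {
        earliestUnacked.remove(partition);
    }

    // Offset to seek to, or null if nothing is pending for the partition.
    Long seekTarget(int partition) {
        return earliestUnacked.get(partition);
    }
}
```

In the listener, when the condition fails you would call recordUnacked(partition, offset) and then seekCallBack.get().seek(topic, partition, tracker.seekTarget(partition)); when the condition passes, call acknowledgment.acknowledge() and acked(partition).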