Spring Kafka, manual commit in different threads with multiple acknowledgments?

Time: 2021-08-01 13:37:50

Tags: multithreading spring-boot asynchronous spring-kafka

I am trying to acknowledge kafka messages consumed via a batchListener in a separate thread, using @Async on the called method.

    @KafkaListener(topics = "${topic.name}", containerFactory = "kafkaListenerContainerFactoryBatch", id = "${kafkaconsumerprefix}")
    public void consume(List<ConsumerRecord<String, String>> records, Acknowledgment ack) {
        records.forEach(record -> asynchttpCaller.posttoHttpsURL(record, ack));
    }

My async code is below, where KafkaConsumerException extends BatchListenerFailedException:

    @Async
    public void posttoHttpsURL(ConsumerRecord<String, String> record, Acknowledgment ack) {
        try {
            // post to https
            ack.acknowledge();
        }
        catch (Exception ex) {
            throw new KafkaConsumerException("Exception occurred in sending via HTTPS", record);
        }
    }

with the following configuration:

    @Bean
    public Map<String, Object> consumerConfigs() {
        Map<String, Object> props = new HashMap<>();
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, bootstrapServers);
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);
        props.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "earliest");
        // must be false: the container rejects auto commit for manual ack modes
        props.put(ConsumerConfig.ENABLE_AUTO_COMMIT_CONFIG, false);
        props.put(ConsumerConfig.ISOLATION_LEVEL_CONFIG, "read_committed");
        props.put(ConsumerConfig.HEARTBEAT_INTERVAL_MS_CONFIG, 10000);
        props.put(ConsumerConfig.SESSION_TIMEOUT_MS_CONFIG, 30000);
        props.put(ConsumerConfig.GROUP_ID_CONFIG, groupId);
        props.put(ConsumerConfig.MAX_POLL_RECORDS_CONFIG, maxpollRecords);
        return props;
    }

    @Bean
    public ConsumerFactory<Object, Object> consumerFactory() {
        return new DefaultKafkaConsumerFactory<>(consumerConfigs());
    }



    /**
     * Batch Listener
     */
    @Bean
    @Primary
    public ConcurrentKafkaListenerContainerFactory<Object, Object> kafkaListenerContainerFactoryBatch(
            ConcurrentKafkaListenerContainerFactoryConfigurer configurer,
            ConsumerFactory<Object, Object> kafkaConsumerFactory,
            KafkaOperations<?, ?> template) {

        ConcurrentKafkaListenerContainerFactory<Object, Object> factory =
                new ConcurrentKafkaListenerContainerFactory<>();
        configurer.configure(factory, consumerFactory());
        factory.setBatchListener(true);
        factory.getContainerProperties().setAckMode(AckMode.MANUAL);

        DeadLetterPublishingRecoverer recoverer = new DeadLetterPublishingRecoverer(template);
        ExponentialBackOff fbo = new ExponentialBackOff();
        fbo.setMaxElapsedTime(maxElapsedTime);
        fbo.setInitialInterval(initialInterval);
        fbo.setMultiplier(multiplier);
        RecoveringBatchErrorHandler errorHandler = new RecoveringBatchErrorHandler(recoverer, fbo);
        factory.setBatchErrorHandler(errorHandler);
        factory.setConcurrency(setConcurrency);
        return factory;
    }

With AckMode MANUAL_IMMEDIATE, ack.acknowledge() acknowledges every record in the batch, while with AckMode MANUAL the batch is acknowledged only when all records succeed. My scenario is this: within the same batch, some http calls succeed and some time out. If a failed message has a higher offset than a successful one, even the successful messages are not acknowledged and end up duplicated.

I do not understand why BatchListenerFailedException always fails the whole batch, even though I specifically pass the record that went wrong.

Any suggestions on how to achieve this?

1 answer:

Answer 0 (score: 0):

You should not process asynchronously, because the offsets could be committed out of order.
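
One way to keep the commits in order is to make the HTTP call on the listener thread itself. Here is a minimal sketch, assuming a synchronous httpCaller in place of asynchttpCaller (a hypothetical name, i.e. the same call without @Async). It uses Acknowledgment.nack(index, sleep), which batch listeners can call since Spring Kafka 2.3 when a manual ack mode is configured, to commit the offsets of the records before the first failure and redeliver the rest:

    @KafkaListener(topics = "${topic.name}", containerFactory = "kafkaListenerContainerFactoryBatch")
    public void consume(List<ConsumerRecord<String, String>> records, Acknowledgment ack) {
        for (int i = 0; i < records.size(); i++) {
            try {
                httpCaller.posttoHttpsURL(records.get(i)); // blocking call on the listener thread
            }
            catch (Exception ex) {
                // commits the offsets of records 0..i-1 and re-seeks the partitions,
                // so record i and the rest of the batch are redelivered after 1 second
                ack.nack(i, 1000);
                return;
            }
        }
        ack.acknowledge(); // the whole batch succeeded
    }

Note that nack(), like acknowledge() in manual modes, must be called on the consumer thread, which is exactly why the @Async approach cannot work.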

BatchListenerFailedException only works when it is thrown on the listener thread.
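
So instead of nack(), the synchronous loop can delegate failure handling to the RecoveringBatchErrorHandler already configured in the question: throw BatchListenerFailedException with the failing index (or record) from the listener thread, and the error handler commits the offsets before the failure, retries per the ExponentialBackOff, and finally publishes the failed record to the dead-letter topic via the DeadLetterPublishingRecoverer. A sketch, again using the hypothetical synchronous httpCaller:

    @KafkaListener(topics = "${topic.name}", containerFactory = "kafkaListenerContainerFactoryBatch")
    public void consume(List<ConsumerRecord<String, String>> records, Acknowledgment ack) {
        for (int i = 0; i < records.size(); i++) {
            try {
                httpCaller.posttoHttpsURL(records.get(i)); // no @Async: exceptions reach the container
            }
            catch (Exception ex) {
                // tells the error handler exactly which record in the batch failed
                throw new BatchListenerFailedException("Exception occurred in sending via HTTPS", ex, i);
            }
        }
        ack.acknowledge(); // whole batch succeeded
    }

Throwing the question's KafkaConsumerException works the same way, since it extends BatchListenerFailedException, as long as it is thrown on the listener thread rather than inside an @Async method.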