Spring Kafka Transaction - Duplicate messages published to the topic

Date: 2018-01-10 21:50:29

Tags: apache-kafka spring-transactions spring-kafka

We are trying to implement transactions for our producer. In our use case we receive messages from MQ and publish them to Kafka. When a failure occurs, we need to roll back the message published to Kafka and not send the acknowledgement back to MQ.

When using transactions, we are seeing duplicate messages in the Kafka topic.

@Bean("producerConfig")
public Properties producerConfig() {
    LOGGER.info("Creating Dev Producer Configs");
    Properties configs = new Properties();
    configs.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
    configs.put(ProducerConfig.ACKS_CONFIG, "all");
    configs.put(ProducerConfig.RETRIES_CONFIG, 1);
    configs.put(ProducerConfig.MAX_IN_FLIGHT_REQUESTS_PER_CONNECTION, 1);
    configs.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class);
    configs.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class);
    configs.put(ProducerConfig.ENABLE_IDEMPOTENCE_CONFIG, true);
    return configs;
}

@Bean
public ProducerFactory<String, String> producerFactory() {
    DefaultKafkaProducerFactory<String, String> producerFactory = new DefaultKafkaProducerFactory<>(new HashMap<String, Object>((Map) producerConfig()));
    producerFactory.setTransactionIdPrefix("spring-kafka-transaction");
    return producerFactory;
}

@Bean
public KafkaTemplate<String, String> kafkaTemplate() {
    KafkaTemplate<String, String> kafkaTemplate = new KafkaTemplate<>(producerFactory());
    kafkaTemplate.setDefaultTopic(topic);
    return kafkaTemplate;
}


@Bean
KafkaTransactionManager<String,String> kafkaTransactionManager(){
    KafkaTransactionManager<String, String> transactionManager = new KafkaTransactionManager<>(producerFactory());
    return transactionManager;
}

Listener method

@Component
public class WMQListener implements MessageListener {

    @Autowired
    KafkaTemplate<String, String> kafkaTemplate;

    @Override
    @Transactional
    public void onMessage(Message message) {
        String onHandXmlStr = null;
        try {
            if (message instanceof TextMessage) {
                TextMessage textMessage = (TextMessage) message;
                onHandXmlStr = textMessage.getText();
            }
            LOGGER.debug("Message Received from WMQ :: " + onHandXmlStr);
            Msg msg = JaxbUtil.convertStringToMsg(onHandXmlStr);
            List<String> onHandList = DCMUtil.convertMsgToList(msg);

            ListenableFuture<SendResult<String, String>> send = kafkaTemplate.sendDefault(onHandList.get(0));
            send.addCallback(new ListenableFutureCallback<SendResult<String, String>>() {
                @Override
                public void onFailure(Throwable ex) {
                    ex.printStackTrace();
                }

                @Override
                public void onSuccess(SendResult<String, String> result) {
                    System.out.println(result);
                }
            });
            message.acknowledge();
        }
        catch (Exception e) {
            // do not acknowledge; rethrow so the Kafka transaction is rolled back
            throw new RuntimeException(e);
        }
    }

}
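For comparison, here is a minimal sketch (an illustration, not code from the question) of the same send done through KafkaTemplate.executeInTransaction(), which scopes the producer transaction explicitly instead of relying on @Transactional interception; it assumes the kafkaTemplate bean and default topic configured above.

// Minimal sketch, assuming the kafkaTemplate bean and default topic shown above.
// The callback runs inside a Kafka producer transaction, so an exception thrown
// from it aborts the transaction and acknowledge() is never reached.
String payload = ((TextMessage) message).getText();
kafkaTemplate.executeInTransaction(operations -> operations.sendDefault(payload));
message.acknowledge();

In both variants the Kafka transaction commits only if the method completes normally, so a failure leaves the MQ message unacknowledged for redelivery.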

1 Answer:

Answer 0 (score: 2)

  

"However, I would like to know why the offset increased by two."

Since a Kafka topic is a linear log (per partition), a rolled-back message still takes up a slot in the log (a guess).

Consider this ...

  • p1.send(tx) (offset 23)
  • p2.send(tx) (offset 24)
  • p1.rollback
  • p2.commit
  • p1.resend(tx) (offset 25)
  • p1.commit

My guess is that the record from p1 at offset 23 is simply marked as rolled back and is not delivered to consumers (unless the consumer is using read_uncommitted isolation).
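To make that concrete, here is a sketch of a plain consumer whose isolation.level decides whether such an aborted record is ever delivered (this is an illustration, not part of the original answer; the topic name and bootstrap server are placeholders):

// Sketch: read_committed filters out records from aborted transactions;
// read_uncommitted delivers them as well.
Properties props = new Properties();
props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
props.put(ConsumerConfig.GROUP_ID_CONFIG, "offset-inspector");
props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);
props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);
props.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "earliest");
props.put(ConsumerConfig.ISOLATION_LEVEL_CONFIG, "read_committed"); // or "read_uncommitted"

try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
    consumer.subscribe(Collections.singletonList("so48196671"));
    for (ConsumerRecord<String, String> record : consumer.poll(5_000)) {
        System.out.println(record.value() + " @ " + record.offset());
    }
}

With read_committed the consumer simply never sees the aborted record; the gap in the offsets is the only trace of it.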

EDIT

I see no difference in the offsets with and without transactions, using this code:


@SpringBootApplication
@EnableTransactionManagement
public class So48196671Application {

    public static void main(String[] args) throws Exception {
        ConfigurableApplicationContext ctx = SpringApplication.run(So48196671Application.class, args);
        Thread.sleep(15_000);
        ctx.close();
        System.exit(0);
    }

    @Bean
    public ApplicationRunner runner(Foo foo) {
        return args -> foo.send("bar");
    }

    @Bean
    public KafkaTransactionManager<String, String> transactionManager(ProducerFactory<String, String> pf) {
        return new KafkaTransactionManager<>(pf);
    }

    @KafkaListener(id = "baz", topics = "so48196671")
    public void listen(String in, @Header(KafkaHeaders.OFFSET) long offset) {
        System.out.println(in + " @ " + offset);
    }

    @Component
    public static class Foo {

        @Autowired
        KafkaTemplate<String, String> template;

        @Transactional
        public void send(String out) throws Exception {
            ListenableFuture<SendResult<String, String>> sent = template.send("so48196671", out);
            SendResult<String, String> sendResult = sent.get();
            System.out.println(out + " sent to " + sendResult.getRecordMetadata().offset());
            Thread.sleep(5_000);
        }

    }

}

The run above produces:

bar sent to 17
bar @ 17

But yes, in the failure case extra slots are used ...

@SpringBootApplication
@EnableTransactionManagement
public class So48196671Application {

    public static void main(String[] args) throws Exception {
        ConfigurableApplicationContext ctx = SpringApplication.run(So48196671Application.class, args);
        Thread.sleep(15_000);
        ctx.close();
        System.exit(0);
    }

    @Bean
    public ApplicationRunner runner(Foo foo) {
        return args -> {
            try {
                foo.send("bar");
            }
            catch (Exception e) {
                //
            }
            foo.send("bar");
        };
    }

    @Bean
    public KafkaTransactionManager<String, String> transactionManager(ProducerFactory<String, String> pf) {
        return new KafkaTransactionManager<>(pf);
    }

    @KafkaListener(id = "baz", topics = "so48196671")
    public void listen(String in, @Header(KafkaHeaders.OFFSET) long offset) {
        System.out.println(in + " @ " + offset);
    }

    @Component
    public static class Foo {

        private boolean fail = true;

        @Autowired
        KafkaTemplate<String, String> template;

        @Transactional
        public void send(String out) throws Exception {
            ListenableFuture<SendResult<String, String>> sent = template.send("so48196671", out);
            SendResult<String, String> sendResult = sent.get();
            System.out.println(out + " sent to " + sendResult.getRecordMetadata().offset());
            if (fail) {
                fail = false;
                throw new RuntimeException();
            }
        }

    }

}

and on the next run ...

bar sent to 25
bar sent to 27
bar @ 27

If I remove the exception, on the next run I get

bar sent to 29
bar sent to 31
bar @ 31

So yes, it appears the transaction itself takes up a slot in the log: presumably offset 25 holds the aborted record, 26 its abort marker, 27 the re-sent record and 28 the commit marker, which is why the next run starts at 29 (and why 30 is skipped between 29 and 31).

So it isn't really a duplicate message - you can read about this in the Kafka transactional messaging design documents (KIP-98).
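One way to observe that extra slot directly (a sketch under the same assumptions as the consumer above, not from the original answer): compare the offset of the last record returned by poll() with the partition's end offset; with a transactional producer the end offset sits two past the last data record, because the commit marker occupies the slot in between.

// Sketch: lastRecord is assumed to be the last ConsumerRecord returned by poll().
TopicPartition tp = new TopicPartition("so48196671", 0);
long endOffset = consumer.endOffsets(Collections.singleton(tp)).get(tp);
// e.g. "last data offset 27, end offset 29" - offset 28 is the (invisible) commit marker
System.out.println("last data offset " + lastRecord.offset() + ", end offset " + endOffset);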