spring-kafka - How to read one topic from the very beginning while only tailing another topic?

Date: 2016-10-31 23:23:20

Tags: spring-boot spring-kafka

I am writing a spring-kafka application in which I need to read from two topics, test1 and test2:

public class Receiver {

    private static final Logger LOGGER = LoggerFactory
            .getLogger(Receiver.class);

    @KafkaListener(id = "bar", topicPartitions =
{ @TopicPartition(topic = "test1", partitions = { "0" }),
  @TopicPartition(topic = "test2", partitions = { "0" })})
    public void receiveMessage(String message) {
        LOGGER.info("received message='{}'", message);
    }
}

My configuration is as follows:

@Configuration
@EnableKafka
public class ReceiverConfig {

    @Value("${kafka.bootstrap.servers}")
    private String bootstrapServers;

    @Bean
    public Map<String, Object> consumerConfigs() {
        Map<String, Object> props = new HashMap<>();
        // list of host:port pairs used for establishing the initial connections
        // to the Kafka cluster
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG,
                bootstrapServers);
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG,
                IntegerDeserializer.class);
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG,
                StringDeserializer.class);
        // consumer groups allow a pool of processes to divide the work of
        // consuming and processing records
        props.put(ConsumerConfig.GROUP_ID_CONFIG, "test1");
        props.put(ConsumerConfig.GROUP_ID_CONFIG, "test2");

        return props;
    }

    @Bean
    public ConsumerFactory<Integer, String> consumerFactory() {
        return new DefaultKafkaConsumerFactory<>(consumerConfigs());
    }

    @Bean
    public ConcurrentKafkaListenerContainerFactory<Integer, String> kafkaListenerContainerFactory() {
        ConcurrentKafkaListenerContainerFactory<Integer, String> factory = new ConcurrentKafkaListenerContainerFactory<>();
        factory.setConsumerFactory(consumerFactory());

        return factory;
    }

    @Bean
    public Receiver receiver() {
        return new Receiver();
    }
}

I need to be able to read only the latest messages from "test1", while reading all messages from the very beginning of "test2". I am only interested in the "test2" messages at application startup, but "test1" messages need to be read continuously for as long as the application is running.

Is there a way to configure this kind of behavior?

2 Answers:

Answer 0 (Score: 5)

Here is an approach that worked for me:


@KafkaListener(id = "receiver-api", topicPartitions = {
        @TopicPartition(topic = "schema.topic",
                partitionOffsets = @PartitionOffset(partition = "0", initialOffset = "0")),
        @TopicPartition(topic = "data.topic", partitions = { "0" }) })
public void receiveMessage(String message) {
    try {
        JSONObject incomingJsonObject = new JSONObject(message);

        if (!incomingJsonObject.isNull("data")) {
            handleSchemaMessage(incomingJsonObject);
        } else {
            handleDataMessage(incomingJsonObject);
        }
    } catch (Exception e) {
        e.printStackTrace();
    }
}

Using the "partitionOffsets" attribute of @TopicPartition was the key to always being able to read a specific topic from the beginning, while "tailing" the other topic as usual.
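
For reference, a minimal sketch of how the same idea could map onto the test1/test2 topics from the question (the listener id, topic names, and single partition "0" are taken from the question; the rest is illustrative):

@KafkaListener(id = "bar", topicPartitions = {
        // test2: pin the initial offset to 0 so all existing messages are replayed at startup
        @TopicPartition(topic = "test2",
                partitionOffsets = @PartitionOffset(partition = "0", initialOffset = "0")),
        // test1: no explicit offset, so it is simply "tailed" as before
        @TopicPartition(topic = "test1", partitions = { "0" }) })
public void receiveMessage(String message) {
    LOGGER.info("received message='{}'", message);
}

Note that whether "test1" starts from the newest records on the very first run still depends on the group's committed offsets and the auto.offset.reset consumer property (which defaults to latest).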

Answer 1 (Score: 5)

I have also been struggling with this and would like to propose a somewhat more general solution.

While your solution works, it requires you to hard-code the partitions. Alternatively, you can have the class with the @KafkaListener annotation implement the ConsumerSeekAware interface.

This gives you three methods that can be used to seek to a specific offset. One of them is called as soon as the partitions are assigned; it could look like this:

@Override
public void onPartitionsAssigned(Map<TopicPartition, Long> assignments, ConsumerSeekCallback callback) {
    assignments.keySet().stream()
        .filter(partition -> "MYTOPIC".equals(partition.topic()))
        .forEach(partition -> callback.seekToBeginning("MYTOPIC", partition.partition()));
}

This way, when you later decide to add more partitions to the topic, you do not need to touch any code :)
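
To tie this back to the question, here is a minimal, self-contained sketch (not taken from either answer) of what a listener along these lines might look like. It reuses the Receiver class and the test1/test2 topic names from the question; everything else is illustrative. Only test2 is rewound to the beginning when partitions are assigned, while test1 is consumed as usual:

import java.util.Map;

import org.apache.kafka.common.TopicPartition;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.kafka.annotation.KafkaListener;
import org.springframework.kafka.listener.ConsumerSeekAware;

public class Receiver implements ConsumerSeekAware {

    private static final Logger LOGGER = LoggerFactory.getLogger(Receiver.class);

    // One listener tails both topics; the seek logic below only rewinds test2.
    @KafkaListener(id = "bar", topics = { "test1", "test2" })
    public void receiveMessage(String message) {
        LOGGER.info("received message='{}'", message);
    }

    @Override
    public void registerSeekCallback(ConsumerSeekCallback callback) {
        // Not needed here; we only seek when partitions are assigned.
    }

    @Override
    public void onPartitionsAssigned(Map<TopicPartition, Long> assignments, ConsumerSeekCallback callback) {
        // Rewind every assigned partition of test2 to the beginning;
        // test1 partitions are left alone and consumed as usual.
        assignments.keySet().stream()
                .filter(partition -> "test2".equals(partition.topic()))
                .forEach(partition -> callback.seekToBeginning("test2", partition.partition()));
    }

    @Override
    public void onIdleContainer(Map<TopicPartition, Long> assignments, ConsumerSeekCallback callback) {
        // No action needed when the container goes idle.
    }
}

Because no partition is pinned in the annotation, the rewind automatically covers any partitions that are added to test2 later.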

Hope this helps someone.