How to write a Kafka consumer without using an infinite loop during deserialization?

Asked: 2019-09-13 13:26:31

Tags: java apache-kafka kafka-consumer-api

How can I write a Kafka consumer in Java without polling in an infinite loop?

I created a Kafka consumer using this link as a reference. There, in the record-processing function, a while(true) loop polls for new events. If I use this in my project, I can't do anything else while it runs. Is there a way to fetch new events without this infinite loop?

public static void main(String[] str) throws InterruptedException {
    System.out.println("Starting AtMostOnceConsumer ...");
    execute();
}
private static void execute() throws InterruptedException {
    KafkaConsumer<String, Event> consumer = createConsumer();
    // Subscribe to all partitions in the topic. 'assign' could be used here
    // instead of 'subscribe' to subscribe to a specific partition.
    consumer.subscribe(Arrays.asList("topic"));
    processRecords(consumer);
}
private static KafkaConsumer<String, Event> createConsumer() {
    Properties props = new Properties();
    String consumeGroup = "group_id";
    props.put("group.id", consumeGroup);
    props.put("org.slf4j.simpleLogger.defaultLogLevel", "INFO");
    props.put("client.id", "clientId");
    props.put("security.protocol", "SASL_SSL");

    props.put(CommonClientConfigs.BOOTSTRAP_SERVERS_CONFIG, "servers");
    props.put(CommonClientConfigs.SECURITY_PROTOCOL_CONFIG, "SASL_SSL");
    props.put(SaslConfigs.SASL_MECHANISM, "PLAIN");
    props.put(SaslConfigs.SASL_JAAS_CONFIG, "org.apache.kafka.common.security.plain.PlainLoginModule required username=\"username\" password=\"password\";");
    props.put("enable.auto.commit", "true");
    // Auto commit interval, kafka would commit offset at this interval.
    props.put("auto.commit.interval.ms", "101");
    // max.partition.fetch.bytes caps how much data is fetched per partition
    // in each poll; it is a byte limit, not a record count.
    props.put("max.partition.fetch.bytes", "135");
    // Set this if you want to always read from beginning.
    // props.put("auto.offset.reset", "earliest");
    props.put("heartbeat.interval.ms", "3000");
    props.put("session.timeout.ms", "6001");
    props.put("schema.registry.url", "https://avroregistry.octanner.io");
    props.put("key.deserializer",
            "io.confluent.kafka.serializers.KafkaAvroDeserializer");
    props.put("value.deserializer",
            "io.confluent.kafka.serializers.KafkaAvroDeserializer");
    return new KafkaConsumer<String, Event>(props);
}
private static void processRecords(KafkaConsumer<String, Event> consumer) throws InterruptedException {
    while (true) {
        ConsumerRecords<String, Event> records = consumer.poll(TimeUnit.MINUTES.toMillis(1));
        long lastOffset = 0;
        for (ConsumerRecord<String, Event> record : records) {
            System.out.printf("offset = %d, value = %s%n", record.offset(), record.value());
            lastOffset = record.offset();
        }
        System.out.println("lastOffset read: " + lastOffset);
        process();
    }
}
private static void process() throws InterruptedException {
    // create some delay to simulate processing of the message.
    Thread.sleep(TimeUnit.MINUTES.toMillis(1));
}

Can someone help me modify this so that I can avoid the while(true) loop and just listen for incoming events?

3 Answers:

Answer 0 (score: 1)

You can use @KafkaListener (https://docs.spring.io/spring-kafka/api/org/springframework/kafka/annotation/KafkaListener.html). However, it will also poll in an infinite loop, because that is how Kafka is designed: it is not a queue but an event bus that stores records for some period of time. There is no mechanism by which it notifies consumers.
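A minimal Spring Kafka listener might look like the sketch below. It requires the spring-kafka dependency and a configured consumer factory; the class name, topic, and group id are placeholders, not anything from the question. Spring runs the poll loop internally and invokes the annotated method once per record.

```java
// Hypothetical listener class; "topic" and "group_id" are placeholders.
@Component
public class EventListener {

    // Spring's container owns the poll loop and calls this per record.
    @KafkaListener(topics = "topic", groupId = "group_id")
    public void onEvent(ConsumerRecord<String, String> record) {
        System.out.println("offset = " + record.offset() + ", value = " + record.value());
    }
}
```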

Poll on another thread, and have a graceful way to exit the loop.
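That pattern can be sketched in plain Java. Here a BlockingQueue stands in for the Kafka consumer (this is not the Kafka API), and the `running` flag plays roughly the role that `consumer.wakeup()` plays in the real client: the loop still exists, but it runs on a background thread and has a clean exit condition.

```java
import java.util.List;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.CopyOnWriteArrayList;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicBoolean;

public class BackgroundPoller {
    private final BlockingQueue<String> source;        // stand-in for KafkaConsumer.poll()
    private final AtomicBoolean running = new AtomicBoolean(true);
    private final ExecutorService executor = Executors.newSingleThreadExecutor();
    private final List<String> processed = new CopyOnWriteArrayList<>();

    BackgroundPoller(BlockingQueue<String> source) { this.source = source; }

    void start() {
        executor.submit(() -> {
            // graceful-exit condition instead of while(true)
            while (running.get()) {
                try {
                    String record = source.poll(100, TimeUnit.MILLISECONDS);
                    if (record != null) processed.add(record);
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                    break;
                }
            }
        });
    }

    void stop() throws InterruptedException {
        running.set(false);                            // analogous to consumer.wakeup()
        executor.shutdown();
        executor.awaitTermination(5, TimeUnit.SECONDS);
    }

    List<String> processed() { return processed; }

    public static void main(String[] args) throws InterruptedException {
        BlockingQueue<String> q = new LinkedBlockingQueue<>(List.of("e1", "e2"));
        BackgroundPoller poller = new BackgroundPoller(q);
        poller.start();
        // the main thread is free to do other work while the poller runs
        Thread.sleep(300);
        poller.stop();
        System.out.println(poller.processed());        // [e1, e2]
    }
}
```

The main thread stays free; shutdown is a flag flip plus `awaitTermination`, rather than killing a thread stuck in `while(true)`.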

Answer 1 (score: 1)

You can try something like this:

import java.time.Duration;
import java.util.Arrays;
import java.util.Properties;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.errors.WakeupException;
import org.apache.kafka.common.serialization.StringDeserializer;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public class ConsumerDemoWithThread {
    private final Logger logger = LoggerFactory.getLogger(ConsumerDemoWithThread.class.getName());
    private final String bootstrapServers = "127.0.0.1:9092";
    private final String groupId = "my-first-application";
    private final String topic = "first-topic";

    private final KafkaConsumer<String, String> consumer = createConsumer(bootstrapServers, groupId, topic);

    private void pollForRecords() {
        ExecutorService executor = Executors.newSingleThreadExecutor();
        executor.submit(this::processRecords);
    }

    private KafkaConsumer<String, String> createConsumer(String bootstrapServers, String groupId, String topic) {
        Properties properties = new Properties();
        properties.setProperty(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, bootstrapServers);
        properties.setProperty(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
        properties.setProperty(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
        properties.setProperty(ConsumerConfig.GROUP_ID_CONFIG, groupId);
        properties.setProperty(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "earliest");
        // create consumer
        KafkaConsumer<String, String> consumer = new KafkaConsumer<>(properties);
        // subscribe consumer to our topic(s)
        consumer.subscribe(Arrays.asList(topic));
        return consumer;
    }

    private void processRecords() {
        try {
            while (true) {
                ConsumerRecords<String, String> records = consumer.poll(Duration.ofMillis(100));
                for (ConsumerRecord<String, String> record : records) {
                    logger.info("Key: " + record.key() + ", Value: " + record.value());
                    logger.info("Partition: " + record.partition() + ", Offset: " + record.offset());
                }
            }
        } catch (WakeupException e) {
            // thrown by poll() after consumer.wakeup() is called from another thread
            logger.info("Received shutdown signal!");
        } finally {
            consumer.close();
        }
    }

    public static void main(String[] args) {
        ConsumerDemoWithThread consumerDemoWithThread = new ConsumerDemoWithThread();
        consumerDemoWithThread.pollForRecords();
    }
}

Basically, as Joachim mentioned, the whole poll-and-process logic needs to be delegated to a thread.

Answer 2 (score: 1)

If you want your code to do several things at once, you need a background thread.

To make this easier, you can use a higher-level Kafka library such as Spring (already answered above), Vert.x, or Smallrye.

In Vert.x, for example, you first create a KafkaConsumer, then assign a handler and subscribe to your topic.
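That handler-based style can be illustrated in plain Java. The sketch below is not the Vert.x API (a real Vert.x consumer is created with `KafkaConsumer.create(vertx, config)` and fed by a broker); it only mimics the shape: you register a handler and subscribe, and the "infinite" poll loop still exists, but it is hidden on a library-owned thread.

```java
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.TimeUnit;
import java.util.function.Consumer;

public class HandlerConsumer {
    private final BlockingQueue<String> records = new LinkedBlockingQueue<>();
    private volatile Consumer<String> handler = r -> {};   // no-op until one is registered
    private final ExecutorService loop = Executors.newSingleThreadExecutor();
    private volatile boolean closed = false;

    public HandlerConsumer() {
        // The poll loop lives on this library-owned thread, invisible to callers.
        loop.submit(() -> {
            while (!closed) {
                try {
                    String r = records.poll(100, TimeUnit.MILLISECONDS);
                    if (r != null) handler.accept(r);
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                    return;
                }
            }
        });
    }

    public void handler(Consumer<String> h) { this.handler = h; }  // like Vert.x consumer.handler(...)
    public void deliver(String record) { records.add(record); }    // stands in for the broker
    public void close() { closed = true; loop.shutdown(); }

    public static void main(String[] args) throws InterruptedException {
        HandlerConsumer consumer = new HandlerConsumer();
        CountDownLatch latch = new CountDownLatch(2);
        consumer.handler(r -> { System.out.println("got " + r); latch.countDown(); });
        consumer.deliver("event-1");
        consumer.deliver("event-2");
        latch.await(2, TimeUnit.SECONDS);
        consumer.close();
    }
}
```

The calling code never writes a loop at all; it just registers a callback, which is exactly what the higher-level libraries give you on top of the raw client.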
