Attempt to heart beat failed since the group is rebalancing, try to re-join group

Time: 2017-08-16 06:10:43

Tags: apache-kafka kafka-consumer-api

In my project I use @KafkaListener to configure the Kafka containerFactory and topic.

Topic name:

public static final String CONNECT_DEVICE_MESSAGE_TOPIC = "connectDeviceMessageTopic";

Topic listener:

@KafkaListener(containerFactory = "receiveKafkaListenerContainerFactory", topics = KafkaQueueName.CONNECT_DEVICE_MESSAGE_TOPIC)
public void onMessageListener(MessageTemplate message){
}

Kafka Config:

package me.hekr.bot.parse.core.kafka;

import lombok.extern.slf4j.Slf4j;
import me.hekr.bot.utils.IpUtil;
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.common.serialization.StringDeserializer;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.beans.factory.annotation.Value;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.kafka.annotation.EnableKafka;
import org.springframework.kafka.config.ConcurrentKafkaListenerContainerFactory;
import org.springframework.kafka.config.KafkaListenerContainerFactory;
import org.springframework.kafka.core.*;
import org.springframework.kafka.listener.ConcurrentMessageListenerContainer;
import org.springframework.kafka.support.converter.StringJsonMessageConverter;

import java.util.HashMap;
import java.util.Map;

/**
 * Created by Neon Wang on 2016/10/20.
 */
@EnableKafka
@Configuration
@Slf4j
public class KafkaConfig {
    @Value("${bot.kafka.servers}")
    private String servers;

    /*********************** Producer Config ***************************/
    private ProducerFactory<String, String> producerFactory() {
        return new DefaultKafkaProducerFactory<>(producerConfigs());
    }

    private Map<String, Object> producerConfigs() {
        return new CustomHashMap().put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, servers)
                    .put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, "org.apache.kafka.common.serialization.StringSerializer")
                    .put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, "org.apache.kafka.common.serialization.StringSerializer")
                    .put(ProducerConfig.REQUEST_TIMEOUT_MS_CONFIG, 1000 * 2);
    }

    @Bean
    public KafkaTemplate<String, String> kafkaTemplate() {
        return new KafkaTemplate<>(producerFactory());
    }

    /*********************** Consumer Config ***************************/
    private Map<String, Object> consumerProps() {
        return new CustomHashMap()
                .put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, servers)
                .put(ConsumerConfig.ENABLE_AUTO_COMMIT_CONFIG, true)
                .put(ConsumerConfig.GROUP_ID_CONFIG, "parseReceiveMessageFormConnection")
                .put(ConsumerConfig.AUTO_COMMIT_INTERVAL_MS_CONFIG, "100")
                .put(ConsumerConfig.SESSION_TIMEOUT_MS_CONFIG, "15000")
                .put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class)
                .put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);
    }

    @Bean
    KafkaListenerContainerFactory<ConcurrentMessageListenerContainer<String, String>>
    receiveKafkaListenerContainerFactory() {

        ConcurrentKafkaListenerContainerFactory<String, String> factory =
                new ConcurrentKafkaListenerContainerFactory<>();
        factory.setConsumerFactory(consumerFactory());
        factory.setConcurrency(3);
        factory.setMessageConverter(new StringJsonMessageConverter());
        factory.getContainerProperties().setPollTimeout(3000L);
        return factory;
    }

    private ConsumerFactory<String, String> consumerFactory() {
        return new DefaultKafkaConsumerFactory<>(consumerProps());
    }

    class CustomHashMap extends HashMap<String, Object> {

        CustomHashMap(){
            super();
        }

        @Override
        public CustomHashMap put(String key, Object value) {
            super.put(key, value);
            return this;
        }
    }
}

After starting the project the Kafka configuration is loaded successfully, but I noticed that the information for each topic is logged three times. Is that normal?
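Possibly related: receiveKafkaListenerContainerFactory() calls setConcurrency(3), so Spring Kafka starts three consumer threads in the same group, and each thread logs its own join and assignment lines, which would match output repeating three times. Below is a minimal sketch for checking how many partitions the topic actually has; the broker address is an assumption, and partitionsFor() only reads metadata without joining the group.

import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.serialization.StringDeserializer;

import java.util.Properties;

public class PartitionCountCheck {
    public static void main(String[] args) {
        Properties props = new Properties();
        // Assumed broker address; replace with the real bot.kafka.servers value.
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            // partitionsFor() only fetches topic metadata; it does not join the consumer group.
            int partitions = consumer.partitionsFor("connectDeviceMessageTopic").size();
            System.out.println("connectDeviceMessageTopic has " + partitions + " partition(s)");
        }
    }
}

With setConcurrency(3) and fewer than three partitions, the extra consumers sit idle but still join the group and log their own lines.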

When the first message was consumed, this was printed:
2017-08-16T11:12:43.633+0800 INFO  [org.springframework.kafka.KafkaListenerEndpointContainer#0-2-kafka-consumer-1] o.a.k.c.c.i.AbstractCoordinator.handle:623 - Attempt to heart beat failed since the group is rebalancing, try to re-join group.

Then the first message was received again and again, and the result was the same every time!

After three times I did not receive any more messages. I don't know why; can anyone help me?
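From what I understand (an assumption on my part, not something confirmed in the question), the client versions that log this message from AbstractCoordinator only send heartbeats from poll(), so if handling a batch takes longer than session.timeout.ms (15000 in the config above) the broker drops the consumer, the group rebalances, and the same records are redelivered. A hedged sketch of the consumer properties usually involved, with illustrative values only (max.poll.records requires a 0.10+ client):

import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.common.serialization.StringDeserializer;

import java.util.HashMap;
import java.util.Map;

// Illustrative values only; not a confirmed fix for the problem described above.
public class TunedConsumerProps {
    static Map<String, Object> tunedConsumerProps(String servers) {
        Map<String, Object> props = new HashMap<>();
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, servers);
        props.put(ConsumerConfig.GROUP_ID_CONFIG, "parseReceiveMessageFormConnection");
        props.put(ConsumerConfig.ENABLE_AUTO_COMMIT_CONFIG, true);
        props.put(ConsumerConfig.AUTO_COMMIT_INTERVAL_MS_CONFIG, "100");
        // More headroom before the broker considers the consumer dead.
        props.put(ConsumerConfig.SESSION_TIMEOUT_MS_CONFIG, "30000");
        // Cap how many records one poll() hands to the listener (available from 0.10.0).
        props.put(ConsumerConfig.MAX_POLL_RECORDS_CONFIG, "50");
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);
        return props;
    }
}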

The group ID has already been changed twice; in a previous version the localhost IP was used:

IpUtil.getLocalhostAddress().replace(".", "")
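IpUtil is not shown in the question; presumably it does something like the sketch below (my assumption, not the actual implementation). With the local IP baked into the group id, every host ends up in its own consumer group, so each instance receives every message instead of sharing the partitions of one group.

import java.net.InetAddress;
import java.net.UnknownHostException;

public final class IpUtil {
    private IpUtil() {
    }

    // Presumed behaviour: resolve the local address, e.g. "192.168.1.10",
    // which the caller then strips to "192168110" via replace(".", "").
    public static String getLocalhostAddress() {
        try {
            return InetAddress.getLocalHost().getHostAddress();
        } catch (UnknownHostException e) {
            throw new IllegalStateException("Unable to resolve local host address", e);
        }
    }
}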

0 Answers:

No answers yet.