How do I create a Kafka consumer with "li-apache-kafka-clients" in Spring Boot 2.1.9?

Asked: 2019-10-09 15:24:36

Tags: java spring spring-boot apache-kafka kafka-consumer-api

I have a scenario where I need to move a number of PDF files, currently stored as BLOB columns in Oracle 12c, into Azure Blob Storage. The first approach was a Kafka connector that moves the files once a day, but we started getting errors because the messages are larger than 1 MB. After researching online I found li-apache-kafka-clients; I could find how to create a producer with it, but not a consumer.

I am using the following technologies:

  • Java 11 JDK
  • Spring Boot 2.1.9.RELEASE
  • Kafka 2.0.1
  • li-apache-kafka-clients 0.0.16

The library looks great, but there are no examples of how to create the producer and the consumer in Spring Boot.
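The producer side at least has a plain (non-Spring) example in the project README. A minimal sketch along those lines, assuming the README's large-message config keys ("large.message.enabled", "max.message.segment.bytes", "segment.serializer") and a hypothetical broker and topic, is:

import java.util.Properties;
import org.apache.kafka.clients.producer.ProducerRecord;
import com.linkedin.kafka.clients.largemessage.DefaultSegmentSerializer;
import com.linkedin.kafka.clients.producer.LiKafkaProducer;
import com.linkedin.kafka.clients.producer.LiKafkaProducerImpl;

public class LargeMessageProducerSketch {

    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.setProperty("bootstrap.servers", "localhost:9092");   // hypothetical broker
        // Large message support: values bigger than one segment are split by the producer.
        props.setProperty("large.message.enabled", "true");
        props.setProperty("max.message.segment.bytes", Integer.toString(800 * 1024));
        props.setProperty("segment.serializer", DefaultSegmentSerializer.class.getName());
        props.setProperty("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        props.setProperty("value.serializer", "org.apache.kafka.common.serialization.ByteArraySerializer");

        byte[] pdfBytes = new byte[5 * 1024 * 1024];                 // stand-in for a PDF blob
        try (LiKafkaProducer<String, byte[]> producer = new LiKafkaProducerImpl<>(props)) {
            producer.send(new ProducerRecord<>("documents", "doc-1", pdfBytes));
        }
    }
}

The consumer side is where I am stuck.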

Reading the library's code, I created the following ConsumerConfiguration:

@EnableKafka
@Configuration
public class LiConsumerConfiguration {

    @Value("${spring.kafka.consumer.bootstrap-servers}")
    private String bootstrapServers;

    @Value("${spring.kafka.consumer.group-id}")
    private String groupId;

    @Bean
    public CustomKafkaConsumerFactory liKafkaConsumerFactory() {
        Map<String, Object> props = new HashMap<>();
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, bootstrapServers);
        props.put(ConsumerConfig.GROUP_ID_CONFIG, groupId);
        props.put(ConsumerConfig.INTERCEPTOR_CLASSES_CONFIG,
                "io.confluent.monitoring.clients.interceptor.MonitoringConsumerInterceptor");

        props.put(LiKafkaConsumerConfig.MESSAGE_ASSEMBLER_BUFFER_CAPACITY_CONFIG, "32000000");
        props.put(LiKafkaConsumerConfig.MESSAGE_ASSEMBLER_EXPIRATION_OFFSET_GAP_CONFIG, "1000");
        props.put(LiKafkaConsumerConfig.MAX_TRACKED_MESSAGES_PER_PARTITION_CONFIG, "500");
        props.put(LiKafkaConsumerConfig.EXCEPTION_ON_MESSAGE_DROPPED_CONFIG, "false");
        props.put(LiKafkaConsumerConfig.SEGMENT_DESERIALIZER_CLASS_CONFIG, ByteArrayDeserializer.class.getName());
        props.put(LiKafkaConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, ByteArrayDeserializer.class.getName());
        props.put(LiKafkaConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, ByteArrayDeserializer.class.getName());

        return new CustomKafkaConsumerFactory<>(props);
    }

    @Bean
    public ConcurrentKafkaListenerContainerFactory<byte[], byte[]> kafkaListenerContainerFactory() {
        ConcurrentKafkaListenerContainerFactory<byte[], byte[]> factory = new ConcurrentKafkaListenerContainerFactory<>();
        factory.setConsumerFactory(liKafkaConsumerFactory());
        return factory;
    }
}
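These beans rely on the following application.properties entries (the values here are just examples):

spring.kafka.consumer.bootstrap-servers=localhost:9092
spring.kafka.consumer.group-id=documents-consumer
kafka.topic.documents.name=documents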

This is the CustomKafkaConsumerFactory I created:

public class CustomKafkaConsumerFactory<K, V> extends DefaultKafkaConsumerFactory<K, V> {

    private final Map<String, Object> configs;

    public CustomKafkaConsumerFactory(Map<String, Object> configs) {
        super(configs);
        this.configs = configs;
    }

    @Override
    public Consumer<K, V> createConsumer() {
        return new LiKafkaConsumerImpl<>(configs);
    }
}
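While digging through DefaultKafkaConsumerFactory I noticed that the listener container does not call the no-argument createConsumer(); it resolves consumers through the parameterized overloads on ConsumerFactory. So maybe the factory needs to cover those as well. A sketch, assuming the three-argument overload from the spring-kafka 2.2 ConsumerFactory interface and a hypothetical class name:

import java.util.HashMap;
import java.util.Map;
import org.apache.kafka.clients.consumer.Consumer;
import org.springframework.kafka.core.DefaultKafkaConsumerFactory;
import com.linkedin.kafka.clients.consumer.LiKafkaConsumerImpl;

// Hypothetical variant of the factory above; the assumption is that the listener
// container obtains its consumers through the parameterized createConsumer overload.
public class LiKafkaConsumerFactorySketch<K, V> extends DefaultKafkaConsumerFactory<K, V> {

    private final Map<String, Object> configs;

    public LiKafkaConsumerFactorySketch(Map<String, Object> configs) {
        super(configs);
        this.configs = new HashMap<>(configs);
    }

    @Override
    public Consumer<K, V> createConsumer(String groupId, String clientIdPrefix, String clientIdSuffix) {
        Map<String, Object> consumerConfigs = new HashMap<>(configs);
        if (groupId != null) {
            consumerConfigs.put("group.id", groupId); // the container may supply its own group id
        }
        // Hand every consumer request to the li-apache-kafka-clients implementation,
        // which re-assembles large message segments inside poll().
        return new LiKafkaConsumerImpl<>(consumerConfigs);
    }
}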

And this is the consumer I created:

@Service
public class DocumentsConsumerImpl {

    private static final Logger LOGGER = LoggerFactory.getLogger(DocumentsConsumerImpl.class);

    @Value("${spring.kafka.producer.bootstrap-servers}")
    private String bootstrapServers;

    @Value("${spring.kafka.consumer.group-id}")
    private String groupId;

    @Value("${kafka.topic.documents.name}")
    private String topicName;

    List<ConsumerRecord<byte[], byte[]>> recordList;

    public DocumentsConsumerImpl() {
        this.recordList = new ArrayList<>();
    }

    @KafkaListener(topics = "${kafka.topic.documents.name}")
    public void listen(ConsumerRecord<byte[], byte[]> consumerRecord) {
        LOGGER.info("[DOCUMENT] Processing large message response...");
        recordList.add(consumerRecord);
        processRecords();
    }

    private void processRecords() {
        ConsumerRecordsProcessor<byte[], byte[]> consumerRecordsProcessor = createConsumerRecordsProcessor();
        ConsumerRecords<byte[], byte[]> records = createConsumerRecords();
        consumerRecordsProcessor.process(records);
    }

    private ConsumerRecords<byte[], byte[]> createConsumerRecords() {
        Map<TopicPartition, List<ConsumerRecord<byte[], byte[]>>> recordsMap = new HashMap<>();
        recordsMap.put(new TopicPartition(topicName, 0), recordList);
        ConsumerRecords<byte[], byte[]> records = new ConsumerRecords<>(recordsMap);
        return records;
    }

    private ConsumerRecordsProcessor<byte[], byte[]> createConsumerRecordsProcessor() {
        Deserializer<byte[]> byteArrayDeserializer = new ByteArrayDeserializer();
        Deserializer<LargeMessageSegment> segmentDeserializer = new DefaultSegmentDeserializer();
        MessageAssembler assembler = new MessageAssemblerImpl(1100000, 100, false, segmentDeserializer);
        DeliveredMessageOffsetTracker deliveredMessageOffsetTracker = new DeliveredMessageOffsetTracker(4);
        return new ConsumerRecordsProcessor<>(assembler, byteArrayDeserializer, byteArrayDeserializer,
                deliveredMessageOffsetTracker, null);
    }
}

As you can see, the consumer is not being created correctly. The main problem is that the consumer does not assemble the message segments properly.
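From reading the library, its own consumer seems meant to be used like a plain KafkaConsumer, with segment re-assembly happening inside poll() rather than through a manually built ConsumerRecordsProcessor. A rough standalone (non-Spring) sketch of that, with hypothetical broker, group and topic names:

import java.util.Collections;
import java.util.HashMap;
import java.util.Map;
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.common.serialization.ByteArrayDeserializer;
import com.linkedin.kafka.clients.consumer.LiKafkaConsumer;
import com.linkedin.kafka.clients.consumer.LiKafkaConsumerImpl;

public class StandaloneLargeMessageConsumerSketch {

    public static void main(String[] args) throws Exception {
        Map<String, Object> props = new HashMap<>();
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // hypothetical broker
        props.put(ConsumerConfig.GROUP_ID_CONFIG, "documents-consumer");
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, ByteArrayDeserializer.class.getName());
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, ByteArrayDeserializer.class.getName());

        try (LiKafkaConsumer<byte[], byte[]> consumer = new LiKafkaConsumerImpl<>(props)) {
            consumer.subscribe(Collections.singleton("documents"));   // hypothetical topic
            while (true) {
                // poll() should return records whose large-message segments were already re-assembled
                ConsumerRecords<byte[], byte[]> records = consumer.poll(1000);
                for (ConsumerRecord<byte[], byte[]> record : records) {
                    System.out.println("Received " + record.value().length + " bytes");
                }
                consumer.commitSync();
            }
        }
    }
}

But I do not see how to plug that into a @KafkaListener.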

I created the topic with 3 partitions, and the replication factor I have is 2, which seems to be another problem, because I end up with 2 copies of the same message.
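For reference, the topic was created roughly like this (a sketch using the plain AdminClient; the broker address and topic name are placeholders):

import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.AdminClientConfig;
import org.apache.kafka.clients.admin.NewTopic;

public class CreateDocumentsTopicSketch {

    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // hypothetical broker

        try (AdminClient admin = AdminClient.create(props)) {
            // 3 partitions, replication factor 2 -- the replicas are kept by the brokers for
            // fault tolerance; a consumer group still sees each message once per group.
            NewTopic topic = new NewTopic("documents", 3, (short) 2);
            admin.createTopics(Collections.singleton(topic)).all().get();
        }
    }
}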

I would really appreciate it if someone could help with a basic consumer using li-apache-kafka-clients, or at least with how to use ConsumerRecordsProcessor correctly.

Thanks

0 answers