Reading a Kafka topic from a specific offset in Apache Camel

Time: 2018-07-20 21:55:04

Tags: java apache-kafka apache-camel

I have read all of the Camel Kafka documentation, and the only approach I found is this one from Git, using a route builder:
    public void configure() throws Exception {
        from("kafka:" + TOPIC
                + "?groupId=A"
                + "&autoOffsetReset=earliest"   // Ask to start from the beginning if we have unknown offset
                + "&consumersCount=2"           // We have 2 partitions, we want 1 consumer per partition
                + "&offsetRepository=#offset")  // Keep the offset in our repository
            .to("mock:result");
    }

But the customer requires me to use Spring, so my Kafka endpoint is:

    <!-- DEFINE KAFKA TOPICS AS ENDPOINTS -->
    <endpoint id="tagBlink" uri="kafka:10.0.0.165:9092">
        <property key="topic" value="tagBlink"/>
        <property key="brokers" value="10.0.0.165:9092"/>
        <property key="offsetRepository" value="100"/>
    </endpoint>

But I get this exception:

    Could not find a suitable setter for property: offsetRepository as there isn't a setter method with same type: java.lang.String nor type conversion possible: No type converter available to convert from type: java.lang.String to the required type: org.apache.camel.spi.StateRepository with value 100

Is this possible with my current configuration? How can I resume from a specific offset?

2 answers:

Answer 0: (score 0)

After some time I managed to solve this. To do so, I followed the Spring bean creation process and checked the FileStateRepository documentation: it needs a File, so I created a File bean and passed it in as a constructor-arg. After that I added init-method="doStart"; this method loads the file if it exists, otherwise it creates it.

     <endpoint id="event" uri="kafka:localhost:9092">
        <property key="topic" value="eventTopic4"/>
        <property key="brokers" value="localhost:9092"/>
        <property key="autoOffsetReset" value="earliest"/>
        <property key="offsetRepository" value="#myRepo2"/>
    </endpoint>

    <bean id="myFileOfMyRepo" class="java.io.File">
        <constructor-arg type="java.lang.String" value="C:\repoDat\repo.dat"/>
    </bean>

    <bean id="myRepo2" class="org.apache.camel.impl.FileStateRepository " factory-method="fileStateRepository" init-method="doStart">
        <constructor-arg ref="myFileOfMyRepo"/>
    </bean>
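
For anyone who prefers Java-based Spring configuration over XML, a roughly equivalent setup might look like the sketch below. The bean name myRepo2 and the file path are taken from the XML above; the explicit start() call is my assumption about the simplest way to get the same load-or-create behaviour as init-method="doStart".

    import java.io.File;

    import org.apache.camel.impl.FileStateRepository;
    import org.springframework.context.annotation.Bean;
    import org.springframework.context.annotation.Configuration;

    @Configuration
    public class OffsetRepositoryConfig {

        // Same repository as the <bean id="myRepo2"> definition above: created via the
        // static factory method and started so the backing file is loaded (or created)
        // before the Kafka consumer asks it for stored offsets.
        @Bean(name = "myRepo2")
        public FileStateRepository myRepo2() throws Exception {
            FileStateRepository repository =
                    FileStateRepository.fileStateRepository(new File("C:\\repoDat\\repo.dat"));
            repository.start();
            return repository;
        }
    }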

After that, I looked at the code of Camel's KafkaConsumer on Git:

    String offsetState = offsetRepository.getState(serializeOffsetKey(topicPartition));
    if (offsetState != null && !offsetState.isEmpty()) {
        // The state contains the last read offset, so we need to seek to the next one
        long offset = deserializeOffsetValue(offsetState) + 1;
        log.debug("Resuming partition {} from offset {} from state", topicPartition.partition(), offset);
        consumer.seek(topicPartition, offset);
    }

With this I managed to start reading from the last committed offset. I wish the Camel documentation included this extra step for Kafka.
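
A follow-up to the snippet above: because the consumer seeks to the stored value plus one, you can in principle make it resume from any offset you like by pre-seeding the repository before the route starts. The sketch below is based on my reading of that code; in particular the "topic/partition" key format (here "eventTopic4/0") is an assumption taken from serializeOffsetKey, so verify it against your Camel version before relying on it.

    import java.io.File;

    import org.apache.camel.impl.FileStateRepository;

    public class SeedOffsetRepository {
        public static void main(String[] args) throws Exception {
            // Open (or create) the same state file the route uses
            FileStateRepository repository =
                    FileStateRepository.fileStateRepository(new File("C:\\repoDat\\repo.dat"));
            repository.start();

            // Assumed key format "topic/partition" (see serializeOffsetKey in KafkaConsumer).
            // Storing offset 100 makes the consumer seek to 100 + 1 = 101 on startup.
            repository.setState("eventTopic4/0", String.valueOf(100));

            repository.stop();
        }
    }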

Answer 1: (score -1)

The important word is "repository", not "offset": it is not an integer value, but a reference to a bean that specifies where the offsets are persisted.

A (non-Spring) example:

// Create the repository in which the Kafka offsets will be persisted
FileStateRepository repository = FileStateRepository.fileStateRepository(new File("/path/to/repo.dat"));

// Bind this repository into the Camel registry
JndiRegistry registry = new JndiRegistry();
registry.bind("offsetRepo", repository);

// Configure the camel context
DefaultCamelContext camelContext = new DefaultCamelContext(registry);
camelContext.addRoutes(new RouteBuilder() {
    @Override
    public void configure() throws Exception {
        from("kafka:" + TOPIC + "?brokers=localhost:{{kafkaPort}}" +
                     "&groupId=A" +                            //
                     "&autoOffsetReset=earliest" +             // Ask to start from the beginning if we have unknown offset
                     "&offsetRepository=#offsetRepo")          // Keep the offsets in the previously configured repository
                .to("mock:result");
    }
});
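
To make the example actually run, the context still needs to be started (and eventually stopped). A minimal continuation, reusing the camelContext variable from the snippet above:

    camelContext.start();   // the route begins consuming; offsets are persisted to /path/to/repo.dat
    Thread.sleep(10_000);   // let it run for a while (illustrative only)
    camelContext.stop();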