How to manually control offset commits with camel-kafka?

Time: 2017-08-29 19:42:51

Tags: apache-kafka apache-camel

I'm using the camel-kafka component and it's not clear to me what happens with offset commits. As shown below, I'm aggregating records, and for my use case I think it only makes sense to commit the offsets after the records have been saved to SFTP.

Is it possible to manually control when the commit is performed?

private static class MyRouteBuilder extends RouteBuilder {

    @Override
    public void configure() throws Exception {

        from("kafka:{{mh.topic}}?" + getKafkaConfigString())
        .unmarshal().string()
        .aggregate(constant(true), new MyAggregationStrategy())
            .completionSize(1000)
            .completionTimeout(1000)
        .setHeader("CamelFileName").constant("transactions-" + (new Date()).getTime())
        .to("sftp://" + getSftpConfigString())

        // how to commit offset only after saving messages to SFTP?

        ;
    }

    private final class MyAggregationStrategy implements AggregationStrategy {
        @Override
        public Exchange aggregate(Exchange oldExchange, Exchange newExchange) {
            if (oldExchange == null) {
                return newExchange;
            }
            String oldBody = oldExchange.getIn().getBody(String.class); 
            String newBody = newExchange.getIn().getBody(String.class);
            String body = oldBody + newBody;
            oldExchange.getIn().setBody(body);
            return oldExchange;
        }
    }
}

private static String getSftpConfigString() {
        return "{{sftp.hostname}}/{{sftp.dir}}?"
                + "username={{sftp.username}}"
                + "&password={{sftp.password}}"
                + "&tempPrefix=.temp."
                + "&fileExist=Append"
                ;
}

private static String getKafkaConfigString() {
        return "brokers={{mh.brokers}}" 
            + "&saslMechanism={{mh.saslMechanism}}"  
            + "&securityProtocol={{mh.securityProtocol}}"
            + "&sslProtocol={{mh.sslProtocol}}"
            + "&sslEnabledProtocols={{mh.sslEnabledProtocols}}" 
            + "&sslEndpointAlgorithm={{mh.sslEndpointAlgorithm}}"
            + "&saslJaasConfig={{mh.saslJaasConfig}}" 
            + "&groupId={{mh.groupId}}"
            ;
}

3 Answers:

Answer 0: (score: 2)

No, you can't. Kafka performs an auto-commit in the background every X seconds (you can configure this interval).

camel-kafka has no manual-commit support. It is also not possible here because the aggregator is decoupled from the Kafka consumer, and it is the consumer that performs the commit.
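The auto-commit interval mentioned above maps to the camel-kafka endpoint option `autoCommitIntervalMs` (default 5000 ms). A minimal plain-Java sketch, mirroring the question's `getKafkaConfigString()` helper — the class name and the 10-second value are illustrative, and the `{{...}}` property placeholders are assumed to resolve as in the question:

```java
public class KafkaCommitIntervalConfig {

    // Hypothetical variant of the question's getKafkaConfigString():
    // autoCommitIntervalMs controls how often (in ms) the consumer's
    // background auto-commit runs (camel-kafka default: 5000).
    public static String getKafkaConfigString() {
        return "brokers={{mh.brokers}}"
            + "&groupId={{mh.groupId}}"
            + "&autoCommitEnable=true"
            + "&autoCommitIntervalMs=10000"; // auto-commit every 10 seconds
    }

    public static void main(String[] args) {
        System.out.println(getKafkaConfigString());
    }
}
```

This only tunes *when* the background commit happens; it does not tie the commit to the SFTP write the question asks about.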

Answer 1: (score: 0)

I believe that with a change in the latest Camel release (2.22.0) (the doc), you should be able to do this.

// Endpoint configuration: &autoCommitEnable=false&allowManualCommit=true
public void process(Exchange exchange) {
    KafkaManualCommit manual = exchange.getIn().getHeader(KafkaConstants.MANUAL_COMMIT, KafkaManualCommit.class);
    manual.commitSync();
}
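For context, a sketch of how such a processor could be wired at the end of the question's route (assuming Camel 2.22+ with `autoCommitEnable=false&allowManualCommit=true` on the endpoint; `getKafkaConfigString()`, `getSftpConfigString()` and `MyAggregationStrategy` are the question's own helpers). Note that answer 0's caveat still applies: after the aggregator, the exchange only carries the consumer's `KafkaManualCommit` handle if the aggregation strategy preserves that header — hence the null check:

```java
from("kafka:{{mh.topic}}?" + getKafkaConfigString()
        + "&autoCommitEnable=false&allowManualCommit=true")
    .unmarshal().string()
    .aggregate(constant(true), new MyAggregationStrategy())
        .completionSize(1000)
        .completionTimeout(1000)
    .to("sftp://" + getSftpConfigString())
    // commit only after the SFTP write has succeeded
    .process(exchange -> {
        KafkaManualCommit manual = exchange.getIn()
                .getHeader(KafkaConstants.MANUAL_COMMIT, KafkaManualCommit.class);
        if (manual != null) { // header may be lost across the aggregator
            manual.commitSync();
        }
    });
```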

Answer 2: (score: 0)

You can even control manual offset commits in a multithreaded route (e.g. one using an aggregator) by using an offset repository (Camel Documentation):

@Override
public void configure() throws Exception {
      // The route
      from(kafkaEndpoint())
            .routeId(ROUTE_ID)
            // Some processors...
            // Commit kafka offset
            .process(MyRoute::commitKafka)
            // Continue or not...
            .to(someEndpoint());
}

private String kafkaEndpoint() {
    return new StringBuilder("kafka:")
            .append(kafkaConfiguration.getTopicName())
            .append("?brokers=")
            .append(kafkaConfiguration.getBootstrapServers())
            .append("&groupId=")
            .append(kafkaConfiguration.getGroupId())
            .append("&clientId=")
            .append(kafkaConfiguration.getClientId())
            .append("&autoCommitEnable=")
            .append(false)
            .append("&allowManualCommit=")
            .append(true)
            .append("&autoOffsetReset=")
            .append("earliest")
            .append("&offsetRepository=")
            .append("#fileStore")
            .toString();
}

@Bean(name = "fileStore", initMethod = "start", destroyMethod = "stop")
public FileStateRepository fileStore() {
    // @Bean methods must not be private, or Spring cannot invoke them through the configuration proxy
    FileStateRepository fileStateRepository =
            FileStateRepository.fileStateRepository(new File(kafkaConfiguration.getOffsetFilePath()));
    fileStateRepository.setMaxFileStoreSize(10485760); // 10MB max

    return fileStateRepository;
}

private static void commitKafka(Exchange exchange) {
    KafkaManualCommit manual = exchange.getIn().getHeader(KafkaConstants.MANUAL_COMMIT, KafkaManualCommit.class);
    manual.commitSync();
}