Migrating from spring-integration-kafka 1.x to 2.x

Asked: 2016-08-30 01:00:52

Tags: java spring-integration apache-kafka

I recently inherited an application that uses spring-integration-kafka 1.x to produce messages to a remote Kafka cluster. My task is to migrate the application to spring-integration-kafka 2.0 so it can use the Kafka 0.9.x producer. However, since spring-integration-kafka 2.0 is a complete rewrite, there doesn't seem to be a clean mapping between the 1.x and 2.0 XML configurations. For example, here is a working 1.x XML file:

<?xml version="1.0" encoding="UTF-8"?>
<beans ...>

<import resource="load-properties-context.xml"/>

<bean id="kafkaOutboundTaskExecutor" class="org.springframework.scheduling.concurrent.ThreadPoolTaskExecutor">
    <property name="corePoolSize" value="10"/>
    <property name="maxPoolSize" value="100"/>
    <property name="threadNamePrefix" value="Kafka Outbound Message Handler "/>
    <property name="waitForTasksToCompleteOnShutdown" value="true"/>
</bean>

<int-kafka:outbound-channel-adapter kafka-producer-context-ref="kafkaProducerContext"
                                    auto-startup="true"
                                    channel="kafkaOutboundChannel"
                                    topic-expression="headers.topic">
    <int:poller fixed-delay="1000" time-unit="MILLISECONDS" receive-timeout="0"
                task-executor="kafkaOutboundTaskExecutor"/>
</int-kafka:outbound-channel-adapter>

<bean id="producerProperties" class="org.springframework.beans.factory.config.PropertiesFactoryBean">
    <property name="properties">
        <props>
            <prop key="message.send.max.retries">5</prop>
            <prop key="secure">false</prop>
        </props>
    </property>
</bean>

<bean id="kafkaKeySerializer" class="kafka.serializer.StringEncoder">
    <constructor-arg><null/></constructor-arg>
</bean>

<int-kafka:producer-context id="kafkaProducerContext" producer-properties="producerProperties">
    <int-kafka:producer-configurations>
        <int-kafka:producer-configuration broker-list="${kafka.brokerConnect}"
                                          topic=".*com\.example\..*"
                                          compression-codec="default"
                                          async="true"
                                          batch-num-messages="200"
                                          key-encoder="kafkaKeySerializer"
                                          key-class-type="java.lang.String"/>
    </int-kafka:producer-configurations>
</int-kafka:producer-context>

</beans>

However, the 2.0 XML I put together gives me timeout exceptions when I try to send messages to the same Kafka cluster:

<?xml version="1.0" encoding="UTF-8"?>
<beans ...>

<import resource="load-properties-context.xml"/>

<bean id="kafkaOutboundTaskExecutor" class="org.springframework.scheduling.concurrent.ThreadPoolTaskExecutor">
    <property name="corePoolSize" value="10"/>
    <property name="maxPoolSize" value="100"/>
    <property name="threadNamePrefix" value="Kafka Outbound Message Handler "/>
    <property name="waitForTasksToCompleteOnShutdown" value="true"/>
</bean>

<int-kafka:outbound-channel-adapter id="kafkaOutboundChannelAdapter"
                                    kafka-template="template"
                                    auto-startup="true"
                                    channel="kafkaOutboundChannel"
                                    topic-expression="headers.topic">
    <int:poller fixed-delay="1000" time-unit="MILLISECONDS" receive-timeout="0"
                task-executor="kafkaOutboundTaskExecutor"/>
</int-kafka:outbound-channel-adapter>

<bean id="template" class="org.springframework.kafka.core.KafkaTemplate">
    <constructor-arg>
        <bean class="org.springframework.kafka.core.DefaultKafkaProducerFactory">
            <constructor-arg>
                <map>
                    <entry key="bootstrap.servers" value="${kafka.brokerConnect}" />
                    <entry key="key.serializer" value="org.apache.kafka.common.serialization.StringSerializer" />
                    <entry key="value.serializer" value="org.apache.kafka.common.serialization.StringSerializer" />
                    <entry key="batch.size" value="200" />
                </map>
            </constructor-arg>
        </bean>
    </constructor-arg>
</bean>
</beans>
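
For reference, here is my reading of what that XML wires up, written out as plain Java (a minimal sketch, assuming spring-kafka's DefaultKafkaProducerFactory/KafkaTemplate constructors; the class name and the topic are placeholders of mine):

import java.util.HashMap;
import java.util.Map;

import org.springframework.kafka.core.DefaultKafkaProducerFactory;
import org.springframework.kafka.core.KafkaTemplate;

public class KafkaTemplateSketch {

    public static KafkaTemplate<String, String> buildTemplate(String brokerConnect) {
        // The same producer properties as the <map> in the XML above.
        Map<String, Object> props = new HashMap<>();
        props.put("bootstrap.servers", brokerConnect); // resolved from ${kafka.brokerConnect}
        props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        props.put("batch.size", 200);

        return new KafkaTemplate<>(new DefaultKafkaProducerFactory<String, String>(props));
    }

    public static void main(String[] args) {
        KafkaTemplate<String, String> template = buildTemplate("<hostname>:9092");
        // The outbound adapter resolves the topic from the 'topic' header;
        // a direct send through the template looks like this:
        template.send("com.example.placeholder", "test payload");
    }
}

One thing I noticed while writing this out: 1.x's batch-num-messages counted messages, but the 0.9 producer's batch.size is in bytes, so carrying 200 straight over is probably far too small. I kept it here only to mirror the XML.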

I'm no Spring Integration expert, and I'm stuck on this. Can anyone give me some pointers here? Thanks in advance!

Edit: Here are the log entries with the exception:

Aug 29, 2016 10:24:12 PM org.apache.kafka.clients.NetworkClient$DefaultMetadataUpdater maybeUpdate
FINE: Initialize connection to node -1 for sending metadata request
Aug 29, 2016 10:24:12 PM org.apache.kafka.clients.NetworkClient initiateConnect
FINE: Initiating connection to node -1 at <hostname>:6667.
Aug 29, 2016 10:24:12 PM org.apache.kafka.common.metrics.Metrics sensor
FINE: Added sensor with name node--1.bytes-sent
Aug 29, 2016 10:24:12 PM org.apache.kafka.common.metrics.Metrics sensor
FINE: Added sensor with name node--1.bytes-received
Aug 29, 2016 10:24:12 PM org.apache.kafka.common.metrics.Metrics sensor
FINE: Added sensor with name node--1.latency
Aug 29, 2016 10:24:12 PM org.apache.kafka.clients.NetworkClient handleConnections
FINE: Completed connection to node -1
Aug 29, 2016 10:24:12 PM org.apache.kafka.clients.NetworkClient$DefaultMetadataUpdater maybeUpdate
FINE: Sending metadata request ClientRequest(expectResponse=true, callback=null, request=RequestSend(header={api_key=3,api_version=0,correlation_id=0,client_id=producer-1}, body={topics=[<topic>]}), isInitiatedByNetworkClient, createdTimeMs=1472534652275, sendTimeMs=0) to node -1
Aug 29, 2016 10:24:12 PM org.apache.kafka.common.network.Selector poll
FINE: Connection with <hostname>/<host-ip> disconnected
java.io.EOFException
    at org.apache.kafka.common.network.NetworkReceive.readFromReadableChannel(NetworkReceive.java:99)
    at org.apache.kafka.common.network.NetworkReceive.readFrom(NetworkReceive.java:71)
    at org.apache.kafka.common.network.KafkaChannel.receive(KafkaChannel.java:153)
    at org.apache.kafka.common.network.KafkaChannel.read(KafkaChannel.java:134)
    at org.apache.kafka.common.network.Selector.poll(Selector.java:286)
    at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:256)
    at org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:216)
    at org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:128)
    at java.lang.Thread.run(Thread.java:745)

Aug 29, 2016 10:24:12 PM org.apache.kafka.clients.NetworkClient handleDisconnections
FINE: Node -1 disconnected.
Aug 29, 2016 10:24:12 PM org.apache.kafka.clients.NetworkClient$DefaultMetadataUpdater maybeUpdate
FINE: Give up sending metadata request since no node is available

Producer configuration:

    compression.type = none
    metric.reporters = []
    metadata.max.age.ms = 300000
    metadata.fetch.timeout.ms = 60000
    reconnect.backoff.ms = 50
    sasl.kerberos.ticket.renew.window.factor = 0.8
    bootstrap.servers = [<hostname>:6667]
    retry.backoff.ms = 100
    sasl.kerberos.kinit.cmd = /usr/bin/kinit
    buffer.memory = 33554432
    timeout.ms = 30000
    key.serializer = class org.apache.kafka.common.serialization.StringSerializer
    sasl.kerberos.service.name = null
    sasl.kerberos.ticket.renew.jitter = 0.05
    ssl.keystore.type = JKS
    ssl.trustmanager.algorithm = PKIX
    block.on.buffer.full = false
    ssl.key.password = null
    max.block.ms = 60000
    sasl.kerberos.min.time.before.relogin = 60000
    connections.max.idle.ms = 540000
    ssl.truststore.password = null
    max.in.flight.requests.per.connection = 5
    metrics.num.samples = 2
    client.id =
    ssl.endpoint.identification.algorithm = null
    ssl.protocol = TLS
    request.timeout.ms = 30000
    ssl.provider = null
    ssl.enabled.protocols = [TLSv1.2, TLSv1.1, TLSv1]
    acks = 1
    batch.size = 16384
    ssl.keystore.location = null
    receive.buffer.bytes = 32768
    ssl.cipher.suites = null
    ssl.truststore.type = JKS
    security.protocol = PLAINTEXT
    retries = 0
    max.request.size = 1048576
    value.serializer = class org.apache.kafka.common.serialization.StringSerializer
    ssl.truststore.location = null
    ssl.keystore.password = null
    ssl.keymanager.algorithm = SunX509
    metrics.sample.window.ms = 30000
    partitioner.class = class org.apache.kafka.clients.producer.internals.DefaultPartitioner
    send.buffer.bytes = 131072
    linger.ms = 0

Edit 2

A colleague pointed out that I was using a VIP for the bootstrap.servers entry, so I changed it to an actual host on port 9092. That got rid of the error above, and now I see:

FINE: Completed connection to node -1
Aug 30, 2016 2:27:53 PM org.apache.kafka.clients.NetworkClient$DefaultMetadataUpdater maybeUpdate
FINE: Sending metadata request ClientRequest(expectResponse=true, callback=null, request=RequestSend(header={api_key=3,api_version=0,correlation_id=0,client_id=producer-1}, body={topics=[<topic>]}), isInitiatedByNetworkClient, createdTimeMs=1472592473678, sendTimeMs=0) to node -1
Aug 30, 2016 2:27:53 PM org.apache.kafka.clients.producer.internals.Sender run
SEVERE: Uncaught error in kafka producer I/O thread:
org.apache.kafka.common.protocol.types.SchemaException: Error reading field 'brokers': Error reading field 'host': java.nio.BufferUnderflowException
    at org.apache.kafka.common.protocol.types.Schema.read(Schema.java:71)
    at org.apache.kafka.clients.NetworkClient.handleCompletedReceives(NetworkClient.java:439)
    at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:265)
    at org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:216)
    at org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:128)
    at java.lang.Thread.run(Thread.java:745)

And finally:

FINE: Exception occurred during message send:
org.apache.kafka.common.errors.TimeoutException: Failed to update metadata after 60000 ms.

Aug 30, 2016 2:28:53 PM org.springframework.kafka.support.LoggingProducerListener onError
SEVERE: Exception thrown when sending a message with key='null' and payload='{44, 65, 86, 71, 55, 78, 110, 108, 99, 72, 78, 100, 107, 52, 116, 95, 122, 110, 50, 74, 66, 110, 81,...' to topic <topic>:
org.apache.kafka.common.errors.TimeoutException: Failed to update metadata after 60000 ms.

At least the producer is trying to talk to Kafka now...
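
To take Spring out of the picture while I dig further, my next step is to try a bare 0.9 KafkaProducer with the same settings (a minimal sketch of mine; the topic is a placeholder, and the blocking get() is just there to surface a metadata failure quickly):

import java.util.Properties;
import java.util.concurrent.TimeUnit;

import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.clients.producer.RecordMetadata;

public class BareProducerCheck {

    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put("bootstrap.servers", "<hostname>:9092");
        props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");

        try (KafkaProducer<String, String> producer = new KafkaProducer<String, String>(props)) {
            // Block on the future so a metadata failure shows up here
            // instead of only in the background I/O thread's log.
            RecordMetadata md = producer
                    .send(new ProducerRecord<String, String>("com.example.placeholder", "connectivity check"))
                    .get(30, TimeUnit.SECONDS);
            System.out.printf("sent to %s-%d@%d%n", md.topic(), md.partition(), md.offset());
        }
    }
}

If this fails the same way, at least I'll know the problem is between the 0.9 client and the cluster rather than in the Spring wiring.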

0 answers:

No answers yet.