Unable to use KafkaTemplate

Posted: 2018-10-25 09:25:05

Tags: spring apache-kafka spring-kafka

We have an existing Spring MVC application (SAP Hybris) in which we want to integrate Kafka using KafkaTemplate. I have configured the Kafka template in XML as follows:

    <bean id="kafkaTemplate" class="org.springframework.kafka.core.KafkaTemplate">
        <constructor-arg ref="producerFactory"/>
    </bean>

    <bean id="producerFactory" class="org.springframework.kafka.core.DefaultKafkaProducerFactory">
        <constructor-arg>
            <map>
                <entry key="bootstrap.servers" value-type="java.lang.String" value="${spring.kafka.bootstrap-servers}" />
                <entry key="key.serializer" value-type="java.lang.Class" value="org.apache.kafka.common.serialization.StringSerializer" />
                <entry key="value.serializer" value-type="java.lang.Class" value="org.apache.kafka.common.serialization.StringSerializer" />
            </map>
        </constructor-arg>
    </bean>

spring.kafka.bootstrap-servers is configured as localhost:9092. Note that I cannot use Spring Boot or annotation-based configuration, only XML-based configuration.

Here is my sample controller code:

    @RequestMapping(method = RequestMethod.GET)
    public String doRegister(final Model model) throws CMSItemNotFoundException
    {
        String message = "Dummy Message: " + Math.random();
        String topic = "Dummy_Topic";
        kafkaTemplate.send(topic, message);
        return "pages/register";
    }
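
Since KafkaTemplate.send() is asynchronous, any failure is only reported through the producer listener rather than thrown from the controller. Below is a minimal sketch, assuming spring-kafka 1.x/2.x and @Resource injection of the kafkaTemplate bean defined above (the class and method names are illustrative), of how the outcome can also be checked at the call site:

    import javax.annotation.Resource;

    import org.springframework.kafka.core.KafkaTemplate;
    import org.springframework.kafka.support.SendResult;
    import org.springframework.util.concurrent.ListenableFuture;
    import org.springframework.util.concurrent.ListenableFutureCallback;

    public class KafkaSendExample
    {
        // Injected from the "kafkaTemplate" bean defined in the XML above
        @Resource(name = "kafkaTemplate")
        private KafkaTemplate<String, String> kafkaTemplate;

        public void sendAndLogOutcome(final String topic, final String message)
        {
            final ListenableFuture<SendResult<String, String>> future = kafkaTemplate.send(topic, message);
            future.addCallback(new ListenableFutureCallback<SendResult<String, String>>()
            {
                @Override
                public void onSuccess(final SendResult<String, String> result)
                {
                    // Partition and offset of the acknowledged record
                    System.out.println("Sent to " + result.getRecordMetadata());
                }

                @Override
                public void onFailure(final Throwable ex)
                {
                    // Any send failure (e.g. a metadata timeout) surfaces here
                    ex.printStackTrace();
                }
            });
        }
    }

Alternatively, calling get() on the returned future makes the send synchronous, which is often simpler while debugging.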

I can send and receive messages from the command-line clients with my local Kafka setup, but when I try to send a message from the Spring MVC application, I get the following error:

    ERROR [hybrisHTTP16] [LoggingProducerListener] Exception thrown when sending a message with key='null' and payload='Dummy Message: 0.03670242785185063' to topic Dummy_Topic:
    org.apache.kafka.common.errors.TimeoutException: Failed to update metadata after 60000 ms.
    ERROR [hybrisHTTP16] [LoggingProducerListener] Exception thrown when sending a message with key='null' and payload='Dummy Message: 0.03670242785185063' to topic Dummy_Topic:
    org.apache.kafka.common.errors.TimeoutException: Failed to update metadata after 60000 ms.

On the command-line client listening to this topic, I don't receive any messages.
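
To separate the Spring wiring from broker connectivity, a bare KafkaProducer built with the same bootstrap server can be run once from a standalone main() and once inside the application; if it also times out, the problem is reachability or listener configuration rather than the XML above. A minimal sketch, assuming localhost:9092 and the Dummy_Topic from the question (the lower max.block.ms is only there to fail fast):

    import java.util.Properties;
    import java.util.concurrent.TimeUnit;

    import org.apache.kafka.clients.producer.KafkaProducer;
    import org.apache.kafka.clients.producer.ProducerConfig;
    import org.apache.kafka.clients.producer.ProducerRecord;
    import org.apache.kafka.common.serialization.StringSerializer;

    public class PlainProducerCheck
    {
        public static void main(final String[] args) throws Exception
        {
            final Properties props = new Properties();
            props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
            props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
            props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
            // Fail fast instead of blocking for the default 60000 ms while fetching metadata
            props.put(ProducerConfig.MAX_BLOCK_MS_CONFIG, "10000");

            try (KafkaProducer<String, String> producer = new KafkaProducer<>(props))
            {
                // get() forces the send to complete so any TimeoutException is thrown here
                producer.send(new ProducerRecord<>("Dummy_Topic", "connectivity check"))
                        .get(15, TimeUnit.SECONDS);
                System.out.println("Send succeeded");
            }
        }
    }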

Here is my Kafka producer configuration:

    acks = 1
    batch.size = 16384
    bootstrap.servers = [localhost:9092]
    buffer.memory = 33554432
    client.id =
    compression.type = none
    connections.max.idle.ms = 540000
    enable.idempotence = false
    interceptor.classes = null
    key.serializer = class org.apache.kafka.common.serialization.StringSerializer
    linger.ms = 0
    max.block.ms = 60000
    max.in.flight.requests.per.connection = 5
    max.request.size = 1048576
    metadata.max.age.ms = 300000
    metric.reporters = []
    metrics.num.samples = 2
    metrics.recording.level = INFO
    metrics.sample.window.ms = 30000
    partitioner.class = class org.apache.kafka.clients.producer.internals.DefaultPartitioner
    receive.buffer.bytes = 32768
    reconnect.backoff.max.ms = 1000
    reconnect.backoff.ms = 50
    request.timeout.ms = 30000
    retries = 0
    retry.backoff.ms = 100
    sasl.jaas.config = null
    sasl.kerberos.kinit.cmd = /usr/bin/kinit
    sasl.kerberos.min.time.before.relogin = 60000
    sasl.kerberos.service.name = null
    sasl.kerberos.ticket.renew.jitter = 0.05
    sasl.kerberos.ticket.renew.window.factor = 0.8
    sasl.mechanism = GSSAPI
    security.protocol = PLAINTEXT
    send.buffer.bytes = 131072
    ssl.cipher.suites = null
    ssl.enabled.protocols = [TLSv1.2, TLSv1.1, TLSv1]
    ssl.endpoint.identification.algorithm = null
    ssl.key.password = null
    ssl.keymanager.algorithm = SunX509
    ssl.keystore.location = null
    ssl.keystore.password = null
    ssl.keystore.type = JKS
    ssl.protocol = TLS
    ssl.provider = null
    ssl.secure.random.implementation = null
    ssl.trustmanager.algorithm = PKIX
    ssl.truststore.location = null
    ssl.truststore.password = null
    ssl.truststore.type = JKS
    transaction.timeout.ms = 60000
    transactional.id = null
    value.serializer = class org.apache.kafka.common.serialization.StringSerializer
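
For reference, the 60000 ms in the exception corresponds to max.block.ms above: send() blocks up to that long waiting for topic metadata before failing, so the underlying problem is that the producer never receives metadata from the broker. A minimal sketch of the equivalent programmatic producer-factory setup with a lower max.block.ms, useful only to make the failure show up faster while debugging (the 10000 ms value is an arbitrary example):

    import java.util.HashMap;
    import java.util.Map;

    import org.apache.kafka.clients.producer.ProducerConfig;
    import org.apache.kafka.common.serialization.StringSerializer;
    import org.springframework.kafka.core.DefaultKafkaProducerFactory;
    import org.springframework.kafka.core.KafkaTemplate;

    public class ProducerFactoryConfigExample
    {
        public static KafkaTemplate<String, String> kafkaTemplate()
        {
            final Map<String, Object> config = new HashMap<>();
            config.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
            config.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class);
            config.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class);
            // Lower max.block.ms so a metadata failure is reported after 10 s instead of 60 s
            config.put(ProducerConfig.MAX_BLOCK_MS_CONFIG, 10000);

            return new KafkaTemplate<>(new DefaultKafkaProducerFactory<String, String>(config));
        }
    }

The same property could also be added as another entry in the producerFactory map in the XML above.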

++++++ EDIT +++++++ My server.properties:

    broker.id=0
    listeners=PLAINTEXT://:9092
    host.name=localhost
    advertised.host.name= localhost
    num.network.threads=3
    num.io.threads=8
    socket.send.buffer.bytes=102400
    socket.receive.buffer.bytes=102400
    socket.request.max.bytes=104857600
    log.dirs=/tmp/kafka-logs
    log.dir=D:/kafka_2.11-0.9.0.0/kafka_2.11-0.9.0.0/data
    num.partitions=1
    num.recovery.threads.per.data.dir=1
    log.retention.hours=168
    log.segment.bytes=1073741824
    log.retention.check.interval.ms=300000
    log.cleaner.enable=false
    zookeeper.connect=localhost:2181
    zookeeper.connection.timeout.ms=6000

+++++ END EDIT ++++
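
Given listeners=PLAINTEXT://:9092 and advertised.host.name=localhost above, one quick sanity check is whether the application's JVM can open a plain TCP connection to the broker at all. A minimal sketch with the host and port hard-coded to the values from the question:

    import java.io.IOException;
    import java.net.InetSocketAddress;
    import java.net.Socket;

    public class BrokerReachabilityCheck
    {
        public static void main(final String[] args)
        {
            try (Socket socket = new Socket())
            {
                // 5 second connect timeout; adjust as needed
                socket.connect(new InetSocketAddress("localhost", 9092), 5000);
                System.out.println("TCP connection to localhost:9092 succeeded");
            }
            catch (final IOException e)
            {
                System.out.println("Cannot reach localhost:9092: " + e.getMessage());
            }
        }
    }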

I can't figure out why this doesn't work from my application when it works fine with the command-line Kafka producer. Please help me with this.

Thanks.

0 answers:

There are no answers