I am integrating my application with spring-kafka (not spring-integration-kafka). Here is the Spring documentation for the project: http://docs.spring.io/spring-kafka/docs/1.0.1.RELEASE/reference/htmlsingle
My producer works fine, but the consumer is not consuming any messages. Any pointers?
Here is my configuration:
@EnableKafka
public class MyConfig {

    @Value("${kafka.broker.list}") // List of servers server:port,
    private String kafkaBrokerList;

    @Bean
    public KafkaListenerContainerFactory<ConcurrentMessageListenerContainer<Integer, Message>> kafkaListenerContainerFactory() {
        ConcurrentKafkaListenerContainerFactory<Integer, Message> factory = new ConcurrentKafkaListenerContainerFactory<>();
        factory.setConsumerFactory(consumerFactory());
        factory.setConcurrency(12);
        factory.getContainerProperties().setPollTimeout(3000);
        factory.getContainerProperties().setIdleEventInterval(60000L);
        factory.setAutoStartup(Boolean.TRUE);
        factory.setMessageConverter(new StringJsonMessageConverter());
        return factory;
    }

    @Bean
    public ConsumerFactory<Integer, Message> consumerFactory() {
        JsonDeserializer<Message> messageJsonDeserializer = new JsonDeserializer<>(Message.class);
        return new DefaultKafkaConsumerFactory<>(consumerConfigs(), new IntegerDeserializer(), messageJsonDeserializer);
    }

    @Bean
    public Map<String, Object> consumerConfigs() {
        Map<String, Object> props = new HashMap<>();
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, kafkaBrokerList);
        props.put(ConsumerConfig.ENABLE_AUTO_COMMIT_CONFIG, true);
        props.put(ConsumerConfig.AUTO_COMMIT_INTERVAL_MS_CONFIG, 10000);
        props.put(ConsumerConfig.SESSION_TIMEOUT_MS_CONFIG, 15000);
        props.put(ConsumerConfig.CONNECTIONS_MAX_IDLE_MS_CONFIG, 60000);
        props.put(ConsumerConfig.RETRY_BACKOFF_MS_CONFIG, 10000);
        return props;
    }

    @KafkaListener(topics = "myTopic", containerFactory = "kafkaListenerContainerFactory")
    public void listen(@Payload Message message) {
        System.out.println(message);
    }
}
**EDIT with more information**
Thanks Gary for the reply. I don't see any exceptions in the logs. Also, I tried a KafkaTemplate with a similar configuration and I was able to publish messages to the topic, but for the consumer, no luck. I am changing the code to use String instead of my Message object. Here is part of the log:
2016-07-11 09:31:43.314 INFO [RMI TCP Connection(2)-127.0.0.1] o.a.k.c.c.ConsumerConfig [AbstractConfig.java:165] ConsumerConfig values:
metric.reporters = []
metadata.max.age.ms = 300000
value.deserializer = class org.apache.kafka.common.serialization.StringDeserializer
group.id =
partition.assignment.strategy = [org.apache.kafka.clients.consumer.RangeAssignor]
reconnect.backoff.ms = 50
sasl.kerberos.ticket.renew.window.factor = 0.8
max.partition.fetch.bytes = 1048576
bootstrap.servers = [app1.qa:9092, app1.qa:9093, app2.qa:9092, app2.qa:9093, app3.qa:9092, app3.qa:9093]
retry.backoff.ms = 10000
sasl.kerberos.kinit.cmd = /usr/bin/kinit
sasl.kerberos.service.name = null
sasl.kerberos.ticket.renew.jitter = 0.05
ssl.keystore.type = JKS
ssl.trustmanager.algorithm = PKIX
enable.auto.commit = true
ssl.key.password = null
fetch.max.wait.ms = 500
sasl.kerberos.min.time.before.relogin = 60000
connections.max.idle.ms = 60000
ssl.truststore.password = null
session.timeout.ms = 15000
metrics.num.samples = 2
client.id =
ssl.endpoint.identification.algorithm = null
key.deserializer = class org.apache.kafka.common.serialization.IntegerDeserializer
ssl.protocol = TLS
check.crcs = true
request.timeout.ms = 40000
ssl.provider = null
ssl.enabled.protocols = [TLSv1.2, TLSv1.1, TLSv1]
ssl.keystore.location = null
heartbeat.interval.ms = 3000
auto.commit.interval.ms = 10000
receive.buffer.bytes = 32768
ssl.cipher.suites = null
ssl.truststore.type = JKS
security.protocol = PLAINTEXT
ssl.truststore.location = null
ssl.keystore.password = null
ssl.keymanager.algorithm = SunX509
metrics.sample.window.ms = 30000
fetch.min.bytes = 1
send.buffer.bytes = 131072
auto.offset.reset = latest
I also see the following in the logs:
2016-07-11 09:31:53.515 INFO [org.springframework.kafka.KafkaListenerEndpointContainer#0-0-kafka-consumer-10] o.s.k.l.KafkaMessageListenerContainer [AbstractMessageListenerContainer.java:224] partitions revoked:[]
2016-07-11 09:31:53.515 INFO [org.springframework.kafka.KafkaListenerEndpointContainer#0-0-kafka-consumer-11] o.s.k.l.KafkaMessageListenerContainer [AbstractMessageListenerContainer.java:224] partitions revoked:[]
2016-07-11 09:31:53.516 INFO [org.springframework.kafka.KafkaListenerEndpointContainer#0-0-kafka-consumer-3] o.s.k.l.KafkaMessageListenerContainer [AbstractMessageListenerContainer.java:224] partitions revoked:[]
2016-07-11 09:31:53.516 INFO [org.springframework.kafka.KafkaListenerEndpointContainer#0-0-kafka-consumer-12] o.s.k.l.KafkaMessageListenerContainer [AbstractMessageListenerContainer.java:224] partitions revoked:[]
2016-07-11 09:31:53.578 INFO [org.springframework.kafka.KafkaListenerEndpointContainer#0-0-kafka-consumer-8] o.a.k.c.c.i.AbstractCoordinator [AbstractCoordinator.java:529] Marking the coordinator 2147483639 dead.
2016-07-11 09:31:53.578 INFO [org.springframework.kafka.KafkaListenerEndpointContainer#0-0-kafka-consumer-3] o.a.k.c.c.i.AbstractCoordinator [AbstractCoordinator.java:529] Marking the coordinator 2147483639 dead.
2016-07-11 09:31:53.578 INFO [org.springframework.kafka.KafkaListenerEndpointContainer#0-0-kafka-consumer-10] o.a.k.c.c.i.AbstractCoordinator [AbstractCoordinator.java:529] Marking the coordinator 2147483639 dead.
2016-07-11 09:31:53.578 INFO [org.springframework.kafka.KafkaListenerEndpointContainer#0-0-kafka-consumer-12] o.a.k.c.c.i.AbstractCoordinator [AbstractCoordinator.java:529] Marking the coordinator 2147483639 dead.
Answer 0 (score: 2)
The documentation referenced above says:

> Although the Serializer/Deserializer API is quite simple and flexible from the low-level Kafka Consumer and Producer perspective, it is not enough at the Messaging level, where KafkaTemplate and @KafkaListener are present. To ease the conversion to/from org.springframework.messaging.Message, Spring for Apache Kafka provides a MessageConverter abstraction with the MessagingMessageConverter implementation and its StringJsonMessageConverter customization.
In your case, however, you are combining a MessageConverter:

factory.setMessageConverter(new StringJsonMessageConverter());

with a custom Deserializer:

JsonDeserializer<Message> messageJsonDeserializer = new JsonDeserializer<>(Message.class);

The simplest fix should be to use a StringDeserializer instead:
https://kafka.apache.org/090/javadoc/org/apache/kafka/common/serialization/StringDeserializer.html
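For illustration, a minimal sketch of the consumer factory with that change applied (assuming the spring-kafka 1.0.x API shown in the question; the StringJsonMessageConverter configured on the container factory then performs the JSON-to-Message conversion at the listener level):

```java
@Bean
public ConsumerFactory<Integer, String> consumerFactory() {
    // Hand the raw JSON to the listener infrastructure as a String;
    // the StringJsonMessageConverter set on the container factory
    // converts it to the @Payload type (Message) for the @KafkaListener.
    return new DefaultKafkaConsumerFactory<>(consumerConfigs(),
            new IntegerDeserializer(), new StringDeserializer());
}
```

Note that the value generic of the factory (and of the matching ConcurrentKafkaListenerContainerFactory) then becomes String instead of Message.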
Regarding the log message given above, Marking the coordinator XXX dead., the error is not related to the spring-kafka project; it means the problem lies in your Kafka configuration. In my case we hit such issues when a Kafka node could not reach ZooKeeper. To troubleshoot, I suggest you install Kafka and ZooKeeper locally and make sure you can produce and consume messages with kafka-console-producer and kafka-console-consumer, for example:
https://www.cloudera.com/documentation/kafka/latest/topics/kafka_command_line.html
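As a rough sketch of that smoke test (assuming a local single-node setup on the default ports and the Kafka 0.9-era scripts, where the console consumer still connects through ZooKeeper; the topic name `test` is just a placeholder):

```shell
# create a test topic on the local broker
kafka-topics.sh --create --zookeeper localhost:2181 \
  --replication-factor 1 --partitions 1 --topic test

# produce a few messages (type them on stdin, Ctrl-C to exit)
kafka-console-producer.sh --broker-list localhost:9092 --topic test

# in another terminal, consume everything from the beginning
kafka-console-consumer.sh --zookeeper localhost:2181 \
  --topic test --from-beginning
```

If the console consumer sees the messages, the broker side is healthy and the problem is in the application configuration.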
Then, as the next step, you can check your sample spring-kafka application against the same local installation.