When I try to consume messages from a Kafka queue in Spark, I get the error partition 0 does not have a leader. By contrast, for some reason I can write to the same topic without any problem.
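For reference, the consumer is wired up roughly like the sketch below, using the spark-streaming-kafka 0.8 direct-stream API (a minimal sketch; the app name, batch interval, and broker addresses are placeholders rather than my exact code):

import kafka.serializer.StringDecoder
import org.apache.spark.SparkConf
import org.apache.spark.streaming.{Seconds, StreamingContext}
import org.apache.spark.streaming.kafka.KafkaUtils

object TestTopicConsumer {
  def main(args: Array[String]): Unit = {
    val conf = new SparkConf().setAppName("test-topic-consumer")
    val ssc  = new StreamingContext(conf, Seconds(5))

    // All three brokers are listed, the same ones passed to GetOffsetShell below.
    val kafkaParams = Map[String, String](
      "metadata.broker.list" -> "XXX.XX.XX.XXX:9092,XXX.XX.XX.XXX:9092,XXX.XX.XX.XXX:9092")
    val topics = Set("test-topic")

    val stream = KafkaUtils.createDirectStream[String, String, StringDecoder, StringDecoder](
      ssc, kafkaParams, topics)

    // This is where the "partition 0 does not have a leader" error surfaces.
    stream.map(_._2).print()

    ssc.start()
    ssc.awaitTermination()
  }
}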
I did some tests from the console:
/usr/bin$ kafka-run-class kafka.tools.GetOffsetShell --broker-list "XXX.XX.XX.XXX:9092,XXX.XX.XX.XXX:9092,XXX.XX.XX.XXX:9092" --topic "test-topic" --time -1
WARN Fetching topic metadata with correlation id 0 for topics [Set(test-topic)] from broker [BrokerEndPoint(0,XXX.XX.XX.XXX,9092)] failed (kafka.client.ClientUtils$)
java.nio.channels.ClosedChannelException
at kafka.network.BlockingChannel.send(BlockingChannel.scala:110)
at kafka.producer.SyncProducer.liftedTree1$1(SyncProducer.scala:80)
at kafka.producer.SyncProducer.kafka$producer$SyncProducer$$doSend(SyncProducer.scala:79)
at kafka.producer.SyncProducer.send(SyncProducer.scala:124)
at kafka.client.ClientUtils$.fetchTopicMetadata(ClientUtils.scala:59)
at kafka.client.ClientUtils$.fetchTopicMetadata(ClientUtils.scala:94)
at kafka.tools.GetOffsetShell$.main(GetOffsetShell.scala:79)
at kafka.tools.GetOffsetShell.main(GetOffsetShell.scala)
test-topic:2:1096
test-topic:1:1028
Error: partition 0 does not have a leader. Skip getting offsets
So partition 0 does not have a leader, right? But how can I configure Spark to read from the topic even when one of the brokers is down?
Or does it have something to do with how the topic was created?
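For completeness, the topic was created with something along the lines below (I'm reconstructing the flags from the describe output in the update, so they may not be exact):

/usr/bin/kafka-topics --create --zookeeper XXX.XX.XX.XXX:2181 --replication-factor 2 --partitions 3 --topic test-topic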
UPDATE
/usr/bin/kafka-topics --describe --zookeeper XXX.XX.XX.XXX:2181 --topic test-topic
Topic:test-topic PartitionCount:3 ReplicationFactor:2 Configs:
Topic: test-topic Partition: 0 Leader: 3 Replicas: 3,1 Isr: 3,1
Topic: test-topic Partition: 1 Leader: 1 Replicas: 1,2 Isr: 2,1
Topic: test-topic Partition: 2 Leader: 2 Replicas: 2,3 Isr: 2,3