Kafka partition leaders not updated after brokers are removed

Date: 2017-07-25 18:25:26

Tags: apache-kafka apache-zookeeper devops marathon

I have a Kafka cluster managed by Marathon/Mesos with three brokers on version 0.10.2.1. The Docker image is based on wurstmeister/kafka-docker. Broker IDs are assigned automatically and sequentially at startup (broker.id=-1), and leaders are rebalanced automatically (auto.leader.rebalance.enable=true). The clients are version 0.8.2.1.
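
For reference, a minimal sketch of what that configuration looks like in server.properties (the property names are stock Kafka settings; the listener and zookeeper.connect values are assumed from the commands below):

# broker.id=-1 makes the broker take the next id from ZooKeeper's sequence (hence ids like 1104-1107)
broker.id=-1
# the controller periodically moves leadership back to the preferred (first) replica
auto.leader.rebalance.enable=true
listeners=PLAINTEXT://:9092
zookeeper.connect=zookeeper.example.com:2181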

Broker registrations in ZooKeeper:

➜ zkCli -server zookeeper.example.com:2181 ls /brokers/ids
[1106, 1105, 1104]

➜ zkCli -server zookeeper.example.com:2181 get /brokers/ids/1104
{"listener_security_protocol_map":{"PLAINTEXT":"PLAINTEXT"},
"endpoints":["PLAINTEXT://host1.mesos-slave.example.com:9092"],
"jmx_port":9999,"host":"host1.mesos-slave.example.com",
"timestamp":"1500987386409",
"port":9092,"version":4}

➜ zkCli -server zookeeper.example.com:2181 get /brokers/ids/1105
{"listener_security_protocol_map":{"PLAINTEXT":"PLAINTEXT"},
"endpoints":["PLAINTEXT://host2.mesos-slave.example.com:9092"],
"jmx_port":9999,"host":"host2.mesos-slave.example.com",
"timestamp":"1500987390304",
"port":9092,"version":4}

➜ zkCli -server zookeeper.example.com:2181 get /brokers/ids/1106
{"listener_security_protocol_map":{"PLAINTEXT":"PLAINTEXT"},
"endpoints":["PLAINTEXT://host3.mesos-slave.example.com:9092"],
"jmx_port":9999,"host":"host3.mesos-slave.example.com",
"timestamp":"1500987390447","port":9092,"version":4}

➜ bin/kafka-topics.sh --zookeeper zookeeper.example.com:2181 --create --topic test-topic --partitions 2 --replication-factor 2
Created topic "test-topic".

➜ bin/kafka-topics.sh --zookeeper zookeeper.example.com:2181 --describe --topic test-topic
Topic:test-topic    PartitionCount:2        ReplicationFactor:2     Configs:
        Topic: test-topic  Partition: 0    Leader: 1106    Replicas: 1106,1104     Isr: 1106
        Topic: test-topic  Partition: 1    Leader: 1105    Replicas: 1104,1105     Isr: 1105

A consumer can consume whatever the producer is outputting:

➜ /opt/kafka_2.10-0.8.2.1 bin/kafka-console-producer.sh --broker-list 10.0.1.3:9092,10.0.1.1:9092 --topic test-topic
[2017-07-25 12:57:17,760] WARN Property topic is not valid (kafka.utils.VerifiableProperties)
hello 1
hello 2
hello 3
...

➜ /opt/kafka_2.10-0.8.2.1 bin/kafka-console-consumer.sh --zookeeper zookeeper.example.com:2181 --topic test-topic --from-beginning
hello 1
hello 2
hello 3
...

Then brokers 1104 and 1105 (host1 and host2) go down and another one, 1107 (on host1), comes online, done manually through the Marathon UI:

➜ zkCli -server zookeeper.example.com:2181 ls /brokers/ids
[1107, 1106]

➜ zkCli -server zookeeper.example.com:2181 get /brokers/ids/1107
{"listener_security_protocol_map":{"PLAINTEXT":"PLAINTEXT"},
"endpoints":["PLAINTEXT://host1.mesos-slave.example.com:9092"],
"jmx_port":9999,"host":"host1.mesos-slave.example.com",
"timestamp":"1500991298225","port":9092,"version":4}

The consumer still receives messages from the producer, but the topic description looks outdated:

Topic:test-topic    PartitionCount:2        ReplicationFactor:2     Configs:
        Topic: test-topic  Partition: 0    Leader: 1106    Replicas: 1106,1104     Isr: 1106
        Topic: test-topic  Partition: 1    Leader: 1105    Replicas: 1104,1105     Isr: 1105

I tried to rebalance with kafka-reassign-partitions.sh and kafka-preferred-replica-election.sh:

➜ cat all_partitions.json
{
  "version":1,
  "partitions":[
    {"topic":"test-topic","partition":0,"replicas":[1106,1107]},
    {"topic":"test-topic","partition":1,"replicas":[1107,1106]}
  ]
}

➜ bin/kafka-reassign-partitions.sh --zookeeper zookeeper.example.com:2181 --reassignment-json-file all_partitions.json --execute

➜ bin/kafka-reassign-partitions.sh --zookeeper zookeeper.example.com:2181 --reassignment-json-file all_partitions.json --verify

Status of partition reassignment:
Reassignment of partition [test-topic,0] completed successfully
Reassignment of partition [test-topic,1] is still in progress
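
One way to see why partition 1 is stuck is to look at the pending-reassignment znode, which the controller only removes once the new replicas have caught up (a sketch using the same zkCli session as above):

➜ zkCli -server zookeeper.example.com:2181 get /admin/reassign_partitions

Since the current leader 1105 is gone, the new replicas have nothing to fetch from, which would explain why the reassignment never completes.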

➜ cat all_leaders.json
{
  "partitions":[
    {"topic": "test-topic", "partition": 0},
    {"topic": "test-topic", "partition": 1}
  ]
}

➜ bin/kafka-preferred-replica-election.sh --zookeeper zookeeper.example.com:2181 --path-to-json-file all_leaders.json
Created preferred replica election path with {"version":1,"partitions":[{"topic":"test-topic","partition":0},{"topic":"test-topic","partition":1}]}
Successfully started preferred replica election for partitions Set([test-topic,0], [test-topic,1])
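
Note that preferred replica election only moves leadership between replicas that are already in the ISR, so with 1105 recorded as the only in-sync replica for partition 1 there is nothing for it to elect. Whether the election is still pending can be checked in ZooKeeper (a sketch; the znode is deleted once the controller has processed it):

➜ zkCli -server zookeeper.example.com:2181 get /admin/preferred_replica_election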

The leader of partition 1 is still 1105, which doesn't make any sense:

➜ bin/kafka-topics.sh --zookeeper zookeeper.example.com:2181 --describe --topic test-topic

Topic:test-topic    PartitionCount:2        ReplicationFactor:2     Configs:
        Topic: test-topic   Partition: 0    Leader: 1106    Replicas: 1106,1107     Isr: 1106,1107
        Topic: test-topic   Partition: 1    Leader: 1105    Replicas: 1107,1106,1104,1105   Isr: 1105

Why does partition 1 still think the leader is 1105, even though host2 is no longer alive?
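
The leader and ISR shown by kafka-topics.sh are read from the per-partition state znode, which can be inspected directly (a sketch using the same zkCli session as above):

➜ zkCli -server zookeeper.example.com:2181 get /brokers/topics/test-topic/partitions/1/state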

1 answer:

Answer 0 (score: 0)

I faced a similar issue with Apache Kafka 2.11. I had a cluster of 3 brokers and a topic with partitions = 2 and replication factor = 1, so the topic's partitions were spread across 2 of the brokers. While messages were being produced, I manually shut down one of the brokers on which one of the partitions lived. Even after quite a long time, the leader of that partition kept showing as -1, i.e. the partition was not moved to the third live and running broker. I had auto.leader.rebalance.enable set on all brokers. In addition, the producer client kept trying to produce to the partition on the shut-down broker and kept failing.
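
With replication factor 1 there is no other copy of that partition, so no other broker can take over leadership and the partition stays offline until its original broker comes back. A quick way to spot partitions in this state (a sketch; adjust the ZooKeeper connection string as needed) is:

➜ bin/kafka-topics.sh --zookeeper zookeeper.example.com:2181 --describe --unavailable-partitions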