I have the following requirement: Kafka needs to listen on two interfaces, an external one and an internal one. All other components in the system will connect to Kafka on the internal interface. At install time the internal IPs are not reachable from the other hosts; some configuration, which we have no control over, is needed afterwards to make them reachable. So assume that when Kafka comes up, the internal IPs on the nodes cannot reach each other.
Scenario: I have two nodes in the cluster:
node1 (external IP: 10.10.10.4, internal IP: 5.5.5.4)
node2 (external IP: 10.10.10.5, internal IP: 5.5.5.5)
At install time, 10.10.10.4 can ping 10.10.10.5 and vice versa, but 5.5.5.4 cannot reach 5.5.5.5. Only after the Kafka installation is done does someone apply the configuration that makes the internal IPs reachable, so we cannot make them reachable before installing Kafka.
The requirement now is that the Kafka brokers exchange inter-broker (replication) traffic over the 10.10.10.x interfaces, so that the cluster can form, while clients send messages over the 5.5.5.x interfaces.
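For context, here is a minimal sketch of the kind of per-broker split I think this requirement describes, using node2's IPs as an example and assuming `inter.broker.listener.name` is available (it is in Kafka 1.1.0); this is only how I understand the setup should look, not a verified fix:

```properties
# node2 sketch (external IP 10.10.10.5, internal IP 5.5.5.5)
listener.security.protocol.map=USERS:PLAINTEXT,REPLICATION:PLAINTEXT
listeners=USERS://0.0.0.0:9092,REPLICATION://0.0.0.0:9093
# Advertise the client listener on the internal IP, but the
# replication listener on the external IP that is reachable at install time
advertised.listeners=USERS://5.5.5.5:9092,REPLICATION://10.10.10.5:9093
# Brokers contact each other via the REPLICATION listener
inter.broker.listener.name=REPLICATION
```

node1 would mirror this with 5.5.5.4 and 10.10.10.4.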
What I tried is the following:
listeners=USERS://0.0.0.0:9092,REPLICATION://0.0.0.0:9093
advertised.listeners=USERS://5.5.5.5:9092,REPLICATION://5.5.5.5:9093
where 5.5.5.5 is the internal IP address. But with this, when I restart Kafka I see the following logs:
{"log":"[2020-06-23 19:05:34,923] INFO Creating /brokers/ids/2 (is it secure? false) (kafka.zk.KafkaZkClient)\n","stream":"stdout","time":"2020-06-23T19:05:34.923403973Z"}
{"log":"[2020-06-23 19:05:34,925] INFO Result of znode creation at /brokers/ids/2 is: OK (kafka.zk.KafkaZkClient)\n","stream":"stdout","time":"2020-06-23T19:05:34.925237419Z"}
{"log":"[2020-06-23 19:05:34,926] INFO Registered broker 2 at path /brokers/ids/2 with addresses: ArrayBuffer(EndPoint(5.5.5.5,9092,ListenerName(USERS),PLAINTEXT), EndPoint(5.5.5.5,9093,ListenerName(REPLICATION),PLAINTEXT)) (kafka.zk.KafkaZkClient)\n","stream":"stdout","time":"2020-06-23T19:05:34.926127438Z"}
.....
{"log":"[2020-06-23 19:05:35,078] INFO Kafka version : 1.1.0 (org.apache.kafka.common.utils.AppInfoParser)\n","stream":"stdout","time":"2020-06-23T19:05:35.078444509Z"}
{"log":"[2020-06-23 19:05:35,078] INFO Kafka commitId : fdcf75ea326b8e07 (org.apache.kafka.common.utils.AppInfoParser)\n","stream":"stdout","time":"2020-06-23T19:05:35.078471358Z"}
{"log":"[2020-06-23 19:05:35,079] INFO [KafkaServer id=2] started (kafka.server.KafkaServer)\n","stream":"stdout","time":"2020-06-23T19:05:35.079436798Z"}
{"log":"[2020-06-23 19:05:35,136] ERROR [KafkaApi-2] Number of alive brokers '0' does not meet the required replication factor '2' for the offsets topic (configured via 'offsets.topic.replication.factor'). This error can be ignored if the cluster is starting up and not all brokers are up yet. (kafka.server.KafkaApis)\n","stream":"stdout","time":"2020-06-23T19:05:35.136792119Z"}
This message then keeps repeating:
{"log":"[2020-06-23 19:05:35,166] ERROR [KafkaApi-2] Number of alive brokers '0' does not meet the required replication factor '2' for the offsets topic (configured via 'offsets.topic.replication.factor'). This error can be ignored if the cluster is starting up and not all brokers are up yet. (kafka.server.KafkaApis)\n","stream":"stdout","time":"2020-06-23T19:05:35.166895344Z"}
Is there any way to achieve this?
Regards, -M-