Cannot access a Kafka service outside the Docker image

Time: 2017-12-18 02:00:44

Tags: docker apache-kafka

I created a Kafka Docker image based on CentOS. I run both ZooKeeper and the Kafka server in the same image.

I can see that the services are up and running inside the Docker container. I tested Kafka with the kafka-console-producer.sh and kafka-console-consumer.sh scripts that ship with Kafka. The required ports are also exposed:

PORTS
0.0.0.0:2182->2182/tcp, 22/tcp, 0.0.0.0:9093->9093/tcp

The following configuration is set in Kafka's server.properties:

listeners=PLAINTEXT://0.0.0.0:9093
zookeeper.connect=localhost:2182

I created a topic inside the Docker container.

From an external machine (on the same network) I can reach the Kafka service on the host running my Docker image using telnet:

telnet 9093
Trying …
Connected to .
Escape character is '^]'.

telnet 2182
Trying …
Connected to .
Escape character is '^]'.
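
The telnet checks above only prove that the two ports accept TCP connections. The same probe can be scripted, for example with a short Python helper (a sketch; the host name passed in is whatever address you would give to telnet):

```python
import socket


def port_open(host: str, port: int, timeout: float = 3.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        # create_connection handles name resolution and closes on error
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False
```

For example, `port_open("my-docker-host", 9093)` and `port_open("my-docker-host", 2182)` would mirror the two telnet sessions above. Note that, as the rest of this question shows, a successful port probe does not guarantee that a Kafka client can produce to the broker.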

However, writing data to the Kafka topic fails with TimeoutExceptions:

2017-12-17 21:30:51 DEBUG NetworkClient:195 - [Producer clientId=KafkaExampleProducer] Using older server API v0 to send API_VERSIONS {} with correlation id 1 to node -1 
2017-12-17 21:30:51 DEBUG NetworkClient:189 - [Producer clientId=KafkaExampleProducer] Recorded API versions for node -1: (Produce(0): 0 to 2 [usable: 2], Fetch(1): 0 to 2 [usable: 2], ListOffsets(2): 0 [usable: 0], Metadata(3): 0 to 1 [usable: 1], LeaderAndIsr(4): 0 [usable: 0], StopReplica(5): 0 [usable: 0], UpdateMetadata(6): 0 to 2 [usable: 2], ControlledShutdown(7): 1 [usable: 1], OffsetCommit(8): 0 to 2 [usable: 2], OffsetFetch(9): 0 to 1 [usable: 1], FindCoordinator(10): 0 [usable: 0], JoinGroup(11): 0 [usable: 0], Heartbeat(12): 0 [usable: 0], LeaveGroup(13): 0 [usable: 0], SyncGroup(14): 0 [usable: 0], DescribeGroups(15): 0 [usable: 0], ListGroups(16): 0 [usable: 0], SaslHandshake(17): 0 [usable: 0], ApiVersions(18): 0 [usable: 0], CreateTopics(19): UNSUPPORTED, DeleteTopics(20): UNSUPPORTED, DeleteRecords(21): UNSUPPORTED, InitProducerId(22): UNSUPPORTED, OffsetForLeaderEpoch(23): UNSUPPORTED, AddPartitionsToTxn(24): UNSUPPORTED, AddOffsetsToTxn(25): UNSUPPORTED, EndTxn(26): UNSUPPORTED, WriteTxnMarkers(27): UNSUPPORTED, TxnOffsetCommit(28): UNSUPPORTED, DescribeAcls(29): UNSUPPORTED, CreateAcls(30): UNSUPPORTED, DeleteAcls(31): UNSUPPORTED, DescribeConfigs(32): UNSUPPORTED, AlterConfigs(33): UNSUPPORTED, AlterReplicaLogDirs(34): UNSUPPORTED, DescribeLogDirs(35): UNSUPPORTED, SaslAuthenticate(36): UNSUPPORTED, CreatePartitions(37): UNSUPPORTED)
2017-12-17 21:30:51 DEBUG NetworkClient:189 - [Producer clientId=KafkaExampleProducer] Sending metadata request (type=MetadataRequest, topics=sifs.email.in) to node <IP>:9093 (id: -1 rack: null) 
2017-12-17 21:30:51 DEBUG NetworkClient:195 - [Producer clientId=KafkaExampleProducer] Using older server API v1 to send METADATA {topics=[sifs.email.in]} with correlation id 2 to node -1 
2017-12-17 21:30:52 DEBUG Metadata:270 - Updated cluster metadata version 2 to Cluster(id = null, nodes = [0.0.0.0:9093 (id: 0 rack: null)], partitions = [Partition(topic = sifs.email.in, partition = 0, leader = 0, replicas = [0], isr = [0], offlineReplicas = [])]) 
2017-12-17 21:30:52 DEBUG NetworkClient:183 - [Producer clientId=KafkaExampleProducer] Initiating connection to node 0.0.0.0:9093 (id: 0 rack: null)
org.apache.kafka.common.errors.TimeoutException: Expiring 50 record(s) for sifs.email.in-0: 55017 ms has passed since batch creation plus linger time
(the TimeoutException line above is repeated six times in the original output)
2017-12-17 21:31:47 INFO  KafkaProducer:341 - [Producer clientId=KafkaExampleProducer] Closing the Kafka producer with timeoutMillis = 9223372036854775807 ms. 
2017-12-17 21:31:47 DEBUG Sender:177 - [Producer clientId=KafkaExampleProducer] Beginning shutdown of Kafka producer I/O thread, sending remaining records.

Please let me know how I can write data to the Kafka topic from an external machine.

1 Answer:

Answer 0: (score: 0)

These steps solved my problem with connecting from outside the Docker container:

  1. Deploy the Docker container (Kafka 0.11 with ZooKeeper):

https://stackoverflow.com/a/51071716/2493852

  2. Test the connection from inside and outside the container:

https://stackoverflow.com/a/51071629/2493852
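
For reference, the key symptom is visible in the producer log above: after the metadata request, the client initiates a connection to node 0.0.0.0:9093. With listeners=PLAINTEXT://0.0.0.0:9093 and no advertised.listeners set, the broker tells clients to connect back on 0.0.0.0, which is not routable from an external machine. A sketch of a server.properties that separates the bind address from the advertised one (replace <host-ip>, a placeholder, with the Docker host's address that external clients can actually reach):

```properties
# Bind on all interfaces inside the container
listeners=PLAINTEXT://0.0.0.0:9093
# Address returned to clients in metadata responses; must be reachable
# from outside the container (e.g. the Docker host's IP or DNS name)
advertised.listeners=PLAINTEXT://<host-ip>:9093
zookeeper.connect=localhost:2182
```

This matches the approach in the linked answers: the bind address stays 0.0.0.0 so the container accepts traffic from the published port, while the advertised address is one that external producers and consumers can resolve and connect to.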