I have a Hyperledger Fabric network set up on AWS, with the different components running on different EC2 machines. I can move everything around wherever I want, except for the Kafka brokers and the orderers. They seem to need to sit on the same machine.
If I spin up my ZooKeeper nodes, Kafka brokers, and orderers all on one EC2 machine, everything works fine. So I tried spinning up ZooKeeper and the Kafka brokers on one EC2 instance, giving them time to settle down, and then spinning up my orderers on a different EC2 machine.
When the orderers start up, they do seem to connect to the brokers initially, and the broker logs show the creation of a new topic, partitions, and so on for testchainid (the dummy channel that each orderer creates on startup):
[2018-05-09 01:48:10,978] INFO Topic creation {"version":1,"partitions":{"0":[2,3,0]}} (kafka.admin.AdminUtils$)
[2018-05-09 01:48:10,988] INFO [KafkaApi-0] Auto creation of topic testchainid with 1 partitions and replication factor 3 is successful (kafka.server.KafkaApis)
[2018-05-09 01:48:10,999] INFO Topic creation {"version":1,"partitions":{"0":[2,3,0]}} (kafka.admin.AdminUtils$)
[2018-05-09 01:48:11,059] INFO Replica loaded for partition testchainid-0 with initial high watermark 0 (kafka.cluster.Replica)
[2018-05-09 01:48:11,062] INFO Replica loaded for partition testchainid-0 with initial high watermark 0 (kafka.cluster.Replica)
[2018-05-09 01:48:11,152] INFO Loading producer state from offset 0 for partition testchainid-0 with message format version 2 (kafka.log.Log)
[2018-05-09 01:48:11,159] INFO Completed load of log testchainid-0 with 1 log segments, log start offset 0 and log end offset 0 in 49 ms (kafka.log.Log)
[2018-05-09 01:48:11,164] INFO Created log for partition [testchainid,0] in /tmp/kafka-logs with properties {compression.type -> producer, message.format.version -> 1.0-IV0, file.delete.delay.ms -> 60000, max.message.bytes -> 103809024, min.compaction.lag.ms -> 0, message.timestamp.type -> CreateTime, min.insync.replicas -> 2, segment.jitter.ms -> 0, preallocate -> false, min.cleanable.dirty.ratio -> 0.5, index.interval.bytes -> 4096, unclean.leader.election.enable -> false, retention.bytes -> -1, delete.retention.ms -> 86400000, cleanup.policy -> [delete], flush.ms -> 9223372036854775807, segment.ms -> 604800000, segment.bytes -> 1073741824, retention.ms -> -1, message.timestamp.difference.max.ms -> 9223372036854775807, segment.index.bytes -> 10485760, flush.messages -> 9223372036854775807}. (kafka.log.LogManager)
[2018-05-09 01:48:11,165] INFO [Partition testchainid-0 broker=0] No checkpointed highwatermark is found for partition testchainid-0 (kafka.cluster.Partition)
[2018-05-09 01:48:11,165] INFO Replica loaded for partition testchainid-0 with initial high watermark 0 (kafka.cluster.Replica)
[2018-05-09 01:48:11,167] INFO [ReplicaFetcherManager on broker 0] Removed fetcher for partitions testchainid-0 (kafka.server.ReplicaFetcherManager)
[2018-05-09 01:48:11,241] INFO [ReplicaFetcher replicaId=0, leaderId=2, fetcherId=0] Starting (kafka.server.ReplicaFetcherThread)
[2018-05-09 01:48:11,242] INFO [ReplicaFetcherManager on broker 0] Added fetcher for partitions List([testchainid-0, initOffset 0 to broker BrokerEndPoint(2,0fb5eecc47b8,9092)] ) (kafka.server.ReplicaFetcherManager)
[2018-05-09 01:48:11,306] INFO [ReplicaFetcher replicaId=0, leaderId=2, fetcherId=0] Based on follower's leader epoch, leader replied with an offset 0 >= the follower's log end offset 0 in testchainid-0. No truncation needed. (kafka.server.ReplicaFetcherThread)
[2018-05-09 01:48:11,311] INFO Truncating testchainid-0 to 0 has no effect as the largest offset in the log is -1. (kafka.log.Log)
But then they don't seem to be able to stay connected.
Here is the log dump from my orderer when it tries to connect to the broker:
2018-05-09 01:49:12.483 UTC [orderer/consensus/kafka] try -> DEBU 0f9 [channel: testchainid] Attempting to post the CONNECT message...
2018-05-09 01:49:12.483 UTC [orderer/consensus/kafka/sarama] newHighWatermark -> DEBU 0fa producer/leader/testchainid/0 state change to [retrying-1]
2018-05-09 01:49:12.483 UTC [orderer/consensus/kafka/sarama] newHighWatermark -> DEBU 0fb producer/leader/testchainid/0 abandoning broker 2
2018-05-09 01:49:12.483 UTC [orderer/consensus/kafka/sarama] shutdown -> DEBU 0fc producer/broker/2 shut down
2018-05-09 01:49:12.583 UTC [orderer/consensus/kafka/sarama] tryRefreshMetadata -> DEBU 0fd client/metadata fetching metadata for [testchainid] from broker kafka0.hyperfabric.xyz:9092
2018-05-09 01:49:12.778 UTC [orderer/consensus/kafka/sarama] Validate -> DEBU 0fe ClientID is the default of 'sarama', you should consider setting it to something application-specific.
2018-05-09 01:49:12.779 UTC [orderer/consensus/kafka/sarama] run -> DEBU 0ff producer/broker/2 starting up
2018-05-09 01:49:12.779 UTC [orderer/consensus/kafka/sarama] run -> DEBU 100 producer/broker/2 state change to [open] on testchainid/0
2018-05-09 01:49:12.779 UTC [orderer/consensus/kafka/sarama] dispatch -> DEBU 101 producer/leader/testchainid/0 selected broker 2
2018-05-09 01:49:12.779 UTC [orderer/consensus/kafka/sarama] flushRetryBuffers -> DEBU 102 producer/leader/testchainid/0 state change to [flushing-1]
2018-05-09 01:49:12.779 UTC [orderer/consensus/kafka/sarama] flushRetryBuffers -> DEBU 103 producer/leader/testchainid/0 state change to [normal]
2018-05-09 01:49:12.790 UTC [orderer/consensus/kafka/sarama] func1 -> DEBU 104 Failed to connect to broker 0fb5eecc47b8:9092: dial tcp: lookup 0fb5eecc47b8 on 127.0.0.11:53: no such host
2018-05-09 01:49:12.790 UTC [orderer/consensus/kafka/sarama] handleError -> DEBU 105 producer/broker/2 state change to [closing] because dial tcp: lookup 0fb5eecc47b8 on 127.0.0.11:53: no such host
2018-05-09 01:49:12.790 UTC [orderer/consensus/kafka/sarama] newHighWatermark -> DEBU 106 producer/leader/testchainid/0 state change to [retrying-2]
2018-05-09 01:49:12.790 UTC [orderer/consensus/kafka/sarama] newHighWatermark -> DEBU 107 producer/leader/testchainid/0 abandoning broker 2
2018-05-09 01:49:12.790 UTC [orderer/consensus/kafka/sarama] shutdown -> DEBU 108 producer/broker/2 shut down
2018-05-09 01:49:12.890 UTC [orderer/consensus/kafka/sarama] tryRefreshMetadata -> DEBU 109 client/metadata fetching metadata for [testchainid] from broker kafka0.hyperfabric.xyz:9092
2018-05-09 01:49:13.086 UTC [orderer/consensus/kafka/sarama] Validate -> DEBU 10a ClientID is the default of 'sarama', you should consider setting it to something application-specific.
2018-05-09 01:49:13.086 UTC [orderer/consensus/kafka/sarama] run -> DEBU 10b producer/broker/2 starting up
2018-05-09 01:49:13.086 UTC [orderer/consensus/kafka/sarama] run -> DEBU 10c producer/broker/2 state change to [open] on testchainid/0
2018-05-09 01:49:13.086 UTC [orderer/consensus/kafka/sarama] dispatch -> DEBU 10d producer/leader/testchainid/0 selected broker 2
2018-05-09 01:49:13.086 UTC [orderer/consensus/kafka/sarama] flushRetryBuffers -> DEBU 10e producer/leader/testchainid/0 state change to [flushing-2]
2018-05-09 01:49:13.086 UTC [orderer/consensus/kafka/sarama] flushRetryBuffers -> DEBU 10f producer/leader/testchainid/0 state change to [normal]
2018-05-09 01:49:13.091 UTC [orderer/consensus/kafka/sarama] func1 -> DEBU 110 Failed to connect to broker 0fb5eecc47b8:9092: dial tcp: lookup 0fb5eecc47b8 on 127.0.0.11:53: no such host
2018-05-09 01:49:13.092 UTC [orderer/consensus/kafka/sarama] handleError -> DEBU 111 producer/broker/2 state change to [closing] because dial tcp: lookup 0fb5eecc47b8 on 127.0.0.11:53: no such host
2018-05-09 01:49:13.092 UTC [orderer/consensus/kafka/sarama] newHighWatermark -> DEBU 112 producer/leader/testchainid/0 state change to [retrying-3]
2018-05-09 01:49:13.092 UTC [orderer/consensus/kafka/sarama] newHighWatermark -> DEBU 113 producer/leader/testchainid/0 abandoning broker 2
2018-05-09 01:49:13.092 UTC [orderer/consensus/kafka/sarama] shutdown -> DEBU 114 producer/broker/2 shut down
2018-05-09 01:49:13.192 UTC [orderer/consensus/kafka/sarama] tryRefreshMetadata -> DEBU 115 client/metadata fetching metadata for [testchainid] from broker kafka0.hyperfabric.xyz:9092
2018-05-09 01:49:13.387 UTC [orderer/consensus/kafka/sarama] Validate -> DEBU 116 ClientID is the default of 'sarama', you should consider setting it to something application-specific.
2018-05-09 01:49:13.388 UTC [orderer/consensus/kafka/sarama] run -> DEBU 117 producer/broker/2 starting up
2018-05-09 01:49:13.388 UTC [orderer/consensus/kafka/sarama] run -> DEBU 118 producer/broker/2 state change to [open] on testchainid/0
2018-05-09 01:49:13.388 UTC [orderer/consensus/kafka/sarama] dispatch -> DEBU 119 producer/leader/testchainid/0 selected broker 2
2018-05-09 01:49:13.388 UTC [orderer/consensus/kafka/sarama] flushRetryBuffers -> DEBU 11a producer/leader/testchainid/0 state change to [flushing-3]
2018-05-09 01:49:13.388 UTC [orderer/consensus/kafka/sarama] flushRetryBuffers -> DEBU 11b producer/leader/testchainid/0 state change to [normal]
2018-05-09 01:49:13.427 UTC [orderer/consensus/kafka/sarama] func1 -> DEBU 11c Failed to connect to broker 0fb5eecc47b8:9092: dial tcp: lookup 0fb5eecc47b8 on 127.0.0.11:53: no such host
2018-05-09 01:49:13.427 UTC [orderer/consensus/kafka/sarama] handleError -> DEBU 11d producer/broker/2 state change to [closing] because dial tcp: lookup 0fb5eecc47b8 on 127.0.0.11:53: no such host
2018-05-09 01:49:13.427 UTC [orderer/consensus/kafka] try -> DEBU 11e [channel: testchainid] Need to retry because process failed = dial tcp: lookup 0fb5eecc47b8 on 127.0.0.11:53: no such host
I also noticed that the logs mention the IP and port 127.0.0.11:53. I did a quick search and it seems to be related to Docker Swarm? Do I have to use Docker Swarm to make this work? I would rather not, since nothing else so far has needed it; everything else just connects via IP address.
I don't know what exactly the problem is or where to start looking for an answer. Any help would be greatly appreciated.
- UPDATE -
I did some experimenting, and I was also able to move ZooKeeper onto another machine. So now it is just the brokers and the orderers that won't talk to each other unless they are on the same machine.
I have a feeling I haven't specified the Kafka broker address somewhere, and that is why it falls back to 127.0.0.11:53 instead of the real address. So far I have only specified the addresses in the configtx file, which is used to create the genesis block. Does it need to be set somewhere else? How does the orderer know where the Kafka brokers are?
My configuration:
Kafka:
# Brokers: A list of Kafka brokers to which the orderer connects
# NOTE: Use IP:port notation
Brokers:
- kafka0.hyperfabric.xyz:9092
- kafka1.hyperfabric.xyz:10092
- kafka2.hyperfabric.xyz:11092
- kafka3.hyperfabric.xyz:12092
Answer (score: 1):
The way Kafka works, a client connects to one broker out of an initial list (in the Fabric case, this list is provided via the channel configuration). That broker then replies with a second list describing the other brokers that exist, which of them lead which partitions, and so on. The client then uses this second list to pick the right broker to connect to.
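To make this concrete with values from your own logs (an illustrative data sketch only, not a real configuration file; the bootstrap/advertised keys are just labels):
# List 1 -- the bootstrap list the orderer reads from the channel config
# (the Kafka.Brokers section shown in the question). These names resolve fine:
bootstrap:
  - kafka0.hyperfabric.xyz:9092
# List 2 -- the addresses the brokers advertise in the metadata response.
# With no advertised hostname configured, each broker reports its own
# container ID (see BrokerEndPoint(2,0fb5eecc47b8,9092) in the broker log),
# which only Docker's embedded DNS (127.0.0.11:53) on the broker's own host
# can resolve -- hence the "no such host" errors on the orderer side:
advertised:
  - 0fb5eecc47b8:9092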
What you are seeing is this second list reporting hostnames that cannot be resolved, because the brokers run in containers on another machine. To fix this, you can make the Kafka brokers advertise a hostname different from the one the container reports locally. This is the "advertised.host.name" setting from the Kafka documentation; note that you can also override the advertised port. For the Fabric-provided images, you can set the KAFKA_ADVERTISED_PORT and KAFKA_ADVERTISED_HOST_NAME environment variables to apply this override.
Set it to a name that is resolvable externally, and your problem should be fixed.
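For example, here is a minimal docker-compose-style sketch of one broker with the override in place. The service layout, ZooKeeper address, and port mapping are assumptions for illustration; the two KAFKA_ADVERTISED_* variables are the actual fix, and kafka0.hyperfabric.xyz must resolve to the broker's EC2 host from the orderer's machine:
kafka0:
  image: hyperledger/fabric-kafka   # Fabric-provided Kafka image
  environment:
    - KAFKA_BROKER_ID=0
    # Hypothetical ZooKeeper address; substitute your own:
    - KAFKA_ZOOKEEPER_CONNECT=zookeeper0.hyperfabric.xyz:2181
    # The fix: advertise an externally resolvable address instead of the
    # container ID, so the metadata response works across machines:
    - KAFKA_ADVERTISED_HOST_NAME=kafka0.hyperfabric.xyz
    - KAFKA_ADVERTISED_PORT=9092
  ports:
    - "9092:9092"   # expose the broker port on the EC2 host
Repeat this for each broker with its own hostname and advertised port (kafka1 at 10092, and so on), matching the Brokers list in your configtx.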