Kafka reports invalid receive size on connections from the Hyperledger Fabric orderer

Posted: 2019-04-25 14:22:14

Tags: apache-kafka hyperledger-fabric

I am setting up a new cluster for Hyperledger Fabric on EKS. The cluster has 4 Kafka nodes, 3 ZooKeeper nodes, 4 peers, 3 orderers, and 1 CA. All containers come up individually, and the Kafka/ZooKeeper backend is stable. I can SSH into any Kafka or ZooKeeper node and verify connectivity to every other node, create topics, publish messages, and so on. Kafka is reachable via telnet from all orderers.

When I try to create a channel, I get the following error from the orderer:

2019-04-25 13:34:17.660 UTC [orderer.common.broadcast] ProcessMessage -> WARN 025 [channel: channel1] Rejecting broadcast of message from 192.168.94.15:53598 with SERVICE_UNAVAILABLE: rejected by Consenter: backing Kafka cluster has not completed booting; try again later
2019-04-25 13:34:17.660 UTC [comm.grpc.server] 1 -> INFO 026 streaming call completed grpc.service=orderer.AtomicBroadcast grpc.method=Broadcast grpc.peer_address=192.168.94.15:53598 grpc.code=OK grpc.call_duration=14.805833ms
2019-04-25 13:34:17.661 UTC [common.deliver] Handle -> WARN 027 Error reading from 192.168.94.15:53596: rpc error: code = Canceled desc = context canceled
2019-04-25 13:34:17.661 UTC [comm.grpc.server] 1 -> INFO 028 streaming call completed grpc.service=orderer.AtomicBroadcast grpc.method=Deliver grpc.peer_address=192.168.94.15:53596 error="rpc error: code = Canceled desc = context canceled" grpc.code=Canceled grpc.call_duration=24.987468ms

The Kafka leader reports the following error:

[2019-04-25 14:07:09,453] WARN [SocketServer brokerId=2] Unexpected error from /192.168.89.200; closing connection (org.apache.kafka.common.network.Selector)
org.apache.kafka.common.network.InvalidReceiveException: Invalid receive (size = 369295617 larger than 104857600)
        at org.apache.kafka.common.network.NetworkReceive.readFromReadableChannel(NetworkReceive.java:132)
        at org.apache.kafka.common.network.NetworkReceive.readFrom(NetworkReceive.java:93)
        at org.apache.kafka.common.network.KafkaChannel.receive(KafkaChannel.java:231)
        at org.apache.kafka.common.network.KafkaChannel.read(KafkaChannel.java:192)
        at org.apache.kafka.common.network.Selector.attemptRead(Selector.java:528)
        at org.apache.kafka.common.network.Selector.pollSelectionKeys(Selector.java:469)
        at org.apache.kafka.common.network.Selector.poll(Selector.java:398)
        at kafka.network.Processor.poll(SocketServer.scala:535)
        at kafka.network.Processor.run(SocketServer.scala:452)
        at java.lang.Thread.run(Thread.java:748)
[2019-04-25 14:13:53,917] INFO [GroupMetadataManager brokerId=2] Removed 0 expired offsets in 0 milliseconds. (kafka.coordinator.group.GroupMetadataManager)

1 Answer:

Answer 0 (score: 2)

The error indicates that the broker received a message larger than the maximum allowed size, which defaults to ~100 MB (104857600 bytes). Try increasing the following property in the server.properties file so it can accommodate larger receives (at least 369295617 bytes in this case):

# Set to 500MB
socket.request.max.bytes=500000000

Then restart the Kafka cluster.
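For reference, a minimal sketch of applying the change on a plain server.properties-based broker. The path `/opt/kafka/config/server.properties` and the StatefulSet name `kafka` are assumptions; on EKS you would more likely edit the broker's ConfigMap or environment and roll the pods:

```shell
# Hypothetical path; adjust for your deployment.
# Append the raised limit to each broker's config:
echo 'socket.request.max.bytes=500000000' >> /opt/kafka/config/server.properties

# Verify the setting is present:
grep 'socket.request.max.bytes' /opt/kafka/config/server.properties

# Roll the brokers so the change takes effect
# (example for a Kubernetes StatefulSet named "kafka"):
kubectl rollout restart statefulset/kafka
```

Note that `socket.request.max.bytes` is a static broker setting, so a restart is required for it to take effect.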

If that does not work for you, my guess is that you are trying to connect to a non-SSL listener. So you should verify that the broker's SSL listener is on port 9092 (or the corresponding port, in case you are not using the default one). The following should do the trick:

listeners=SSL://:9092
advertised.listeners=SSL://:9092
inter.broker.listener.name=SSL