Apache Ignite on Kubernetes with TcpDiscoverySharedFsIpFinder: the cluster seems to fall apart

Time: 2019-07-01 23:00:31

Tags: java kubernetes ignite

I am using Apache Ignite .NET v2.7 in a Kubernetes environment. I use TcpDiscoverySharedFsIpFinder as the node discovery mechanism in the cluster.
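In Java terms, the discovery setup looks roughly like the sketch below (my actual configuration is in Ignite.NET and is not shown here; the shared path /mnt/ignite-discovery is only a placeholder, not my real mount point):

import org.apache.ignite.Ignition;
import org.apache.ignite.configuration.IgniteConfiguration;
import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi;
import org.apache.ignite.spi.discovery.tcp.ipfinder.sharedfs.TcpDiscoverySharedFsIpFinder;

public class SharedFsDiscoveryStartup {
    public static void main(String[] args) {
        // IP finder that registers node addresses in a shared directory,
        // e.g. a volume mounted into every pod. The path is a placeholder.
        TcpDiscoverySharedFsIpFinder ipFinder = new TcpDiscoverySharedFsIpFinder();
        ipFinder.setPath("/mnt/ignite-discovery");

        TcpDiscoverySpi discoverySpi = new TcpDiscoverySpi();
        discoverySpi.setIpFinder(ipFinder);

        IgniteConfiguration cfg = new IgniteConfiguration();
        cfg.setDiscoverySpi(discoverySpi);

        Ignition.start(cfg);
    }
}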

I have noticed strange behavior in the running cluster. The cluster starts successfully and works fine for a few hours. Then one node goes offline, and every other node writes a log similar to this:

[20:03:44] Topology snapshot [ver=45, locNode=fd32d5d7, servers=3, clients=0, state=ACTIVE, CPUs=3, offheap=4.7GB, heap=1.5GB]
[20:03:44] Topology snapshot [ver=46, locNode=fd32d5d7, servers=2, clients=0, state=ACTIVE, CPUs=2, offheap=3.1GB, heap=1.0GB]
[20:03:44] Coordinator changed [prev=TcpDiscoveryNode [id=c954042e-5756-4fed-b82a-b8b1d79889ce, addrs=[10.0.0.28, 127.0.0.1], sockAddrs=[/10.0.0.28:47500, /127.0.0.1:47500], discPort=47500, order=36, intOrder=21, lastExchangeTime=1562009450041, loc=false, ver=2.7.0#20181130-sha1:256ae401, isClient=false], cur=TcpDiscoveryNode [id=293902ba-b28d-4a44-8d5f-9cad23a9d7c4, addrs=[10.0.0.11, 127.0.0.1], sockAddrs=[/127.0.0.1:47500, /10.0.0.11:47500], discPort=47500, order=37, intOrder=22, lastExchangeTime=1562009450061, loc=false, ver=2.7.0#20181130-sha1:256ae401, isClient=false]]
Jul 01, 2019 8:03:44 PM org.apache.ignite.logger.java.JavaLogger error
SEVERE: Failed to send message to remote node [node=TcpDiscoveryNode [id=c954042e-5756-4fed-b82a-b8b1d79889ce, addrs=[10.0.0.28, 127.0.0.1], sockAddrs=[/10.0.0.28:47500, /127.0.0.1:47500], discPort=47500, order=36, intOrder=21, lastExchangeTime=1562009450041, loc=false, ver=2.7.0#20181130-sha1:256ae401, isClient=false], msg=GridIoMessage [plc=2, topic=TOPIC_CACHE, topicOrd=8, ordered=false, timeout=0, skipOnTimeout=false, msg=GridDhtPartitionsSingleMessage [parts={-2100569601=GridDhtPartitionMap [moving=0, top=AffinityTopologyVersion [topVer=44, minorTopVer=1], updateSeq=107, size=100]}, partCntrs={-2100569601=CachePartitionPartialCountersMap {22=(0,32), 44=(0,31), 59=(0,31), 64=(0,35), 66=(0,31), 72=(0,31), 78=(0,35), 91=(0,35)}}, partsSizes={-2100569601={64=2, 66=2, 22=2, 72=2, 59=2, 91=2, 44=2, 78=2}}, partHistCntrs=null, err=null, client=false, compress=true, finishMsg=null, activeQryTrackers=null, super=GridDhtPartitionsAbstractMessage [exchId=GridDhtPartitionExchangeId [topVer=AffinityTopologyVersion [topVer=45, minorTopVer=0], discoEvt=DiscoveryEvent [evtNode=TcpDiscoveryNode [id=f27d46f4-0700-4f54-b4b2-2c156152c49a, addrs=[10.0.0.42, 127.0.0.1], sockAddrs=[/127.0.0.1:47500, /10.0.0.42:47500], discPort=47500, order=42, intOrder=25, lastExchangeTime=1562009450061, loc=false, ver=2.7.0#20181130-sha1:256ae401, isClient=false], topVer=45, nodeId8=fd32d5d7, msg=Node failed: TcpDiscoveryNode [id=f27d46f4-0700-4f54-b4b2-2c156152c49a, addrs=[10.0.0.42, 127.0.0.1], sockAddrs=[/127.0.0.1:47500, /10.0.0.42:47500], discPort=47500, order=42, intOrder=25, lastExchangeTime=1562009450061, loc=false, ver=2.7.0#20181130-sha1:256ae401, isClient=false], type=NODE_FAILED, tstamp=1562011424092], nodeId=f27d46f4, evt=NODE_FAILED], lastVer=GridCacheVersion [topVer=173444804, order=1562009448132, nodeOrder=44], super=GridCacheMessage [msgId=69, depInfo=null, err=null, skipPrepare=false]]]]]
class org.apache.ignite.internal.cluster.ClusterTopologyCheckedException: Failed to send message (node left topology): TcpDiscoveryNode [id=c954042e-5756-4fed-b82a-b8b1d79889ce, addrs=[10.0.0.28, 127.0.0.1], sockAddrs=[/10.0.0.28:47500, /127.0.0.1:47500], discPort=47500, order=36, intOrder=21, lastExchangeTime=1562009450041, loc=false, ver=2.7.0#20181130-sha1:256ae401, isClient=false]
        at org.apache.ignite.spi.communication.tcp.TcpCommunicationSpi.createTcpClient(TcpCommunicationSpi.java:3270)
        at org.apache.ignite.spi.communication.tcp.TcpCommunicationSpi.createNioClient(TcpCommunicationSpi.java:2987)
        at org.apache.ignite.spi.communication.tcp.TcpCommunicationSpi.reserveClient(TcpCommunicationSpi.java:2870)
        at org.apache.ignite.spi.communication.tcp.TcpCommunicationSpi.sendMessage0(TcpCommunicationSpi.java:2713)
        at org.apache.ignite.spi.communication.tcp.TcpCommunicationSpi.sendMessage(TcpCommunicationSpi.java:2672)
        at org.apache.ignite.internal.managers.communication.GridIoManager.send(GridIoManager.java:1656)
        at org.apache.ignite.internal.managers.communication.GridIoManager.sendToGridTopic(GridIoManager.java:1731)
        at org.apache.ignite.internal.processors.cache.GridCacheIoManager.send(GridCacheIoManager.java:1170)
        at org.apache.ignite.internal.processors.cache.distributed.dht.preloader.GridDhtPartitionsExchangeFuture.sendLocalPartitions(GridDhtPartitionsExchangeFuture.java:1880)
        at org.apache.ignite.internal.processors.cache.distributed.dht.preloader.GridDhtPartitionsExchangeFuture.sendPartitions(GridDhtPartitionsExchangeFuture.java:2011)
        at org.apache.ignite.internal.processors.cache.distributed.dht.preloader.GridDhtPartitionsExchangeFuture.distributedExchange(GridDhtPartitionsExchangeFuture.java:1501)
        at org.apache.ignite.internal.processors.cache.distributed.dht.preloader.GridDhtPartitionsExchangeFuture.init(GridDhtPartitionsExchangeFuture.java:806)
        at org.apache.ignite.internal.processors.cache.GridCachePartitionExchangeManager$ExchangeWorker.body0(GridCachePartitionExchangeManager.java:2667)
        at org.apache.ignite.internal.processors.cache.GridCachePartitionExchangeManager$ExchangeWorker.body(GridCachePartitionExchangeManager.java:2539)
        at org.apache.ignite.internal.util.worker.GridWorker.run(GridWorker.java:120)
        at java.lang.Thread.run(Thread.java:748)

[22:25:17] Topology snapshot [ver=47, locNode=fd32d5d7, servers=1, clients=0, state=ACTIVE, CPUs=1, offheap=1.6GB, heap=0.5GB]
[22:25:17] Coordinator changed [prev=TcpDiscoveryNode [id=293902ba-b28d-4a44-8d5f-9cad23a9d7c4, addrs=[10.0.0.11, 127.0.0.1], sockAddrs=[/127.0.0.1:47500, /10.0.0.11:47500], discPort=47500, order=37, intOrder=22, lastExchangeTime=1562009450061, loc=false, ver=2.7.0#20181130-sha1:256ae401, isClient=false], cur=TcpDiscoveryNode [id=fd32d5d7-720f-4c85-925e-01a845992df9, addrs=[10.0.0.60, 127.0.0.1], sockAddrs=[product-service-deployment-76bdb6fffb-bvjx9/10.0.0.60:47500, /127.0.0.1:47500], discPort=47500, order=44, intOrder=26, lastExchangeTime=1562019906752, loc=true, ver=2.7.0#20181130-sha1:256ae401, isClient=false]]
[22:28:29] Joining node doesn't have encryption data [node=adc204a0-3cc7-45da-b512-dd69b9a23674]
[22:28:30] Topology snapshot [ver=48, locNode=fd32d5d7, servers=2, clients=0, state=ACTIVE, CPUs=2, offheap=3.1GB, heap=1.0GB]
[22:31:42] Topology snapshot [ver=49, locNode=fd32d5d7, servers=1, clients=0, state=ACTIVE, CPUs=1, offheap=1.6GB, heap=0.5GB]

As you can see, the number of servers in the cluster steadily decreases until only one server is left (Topology snapshot [..servers=1..] on each node). If I read the logs correctly, the cluster falls apart into a set of separate, independent nodes, each of which represents its own cluster. I should stress that all the other nodes (except the crashed one) are up and running.

My guess is that the failed node may have been the cluster leader, and when it died the cluster could not elect a new leader and broke up into a number of independent nodes.

Could you comment on this? Is my guess correct? Could you tell me what I should check to diagnose and fix this problem? Thanks!

1 Answer:

Answer 0: (score: 1)

Node segmentation usually means there were long pauses: GC pauses, I/O pauses, or network pauses.

You can try increasing failureDetectionTimeout and see whether the problem goes away. Alternatively, you can try to get rid of the pauses themselves.
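As a rough illustration (a minimal Java sketch, not your exact configuration; 30 seconds is only an example value, and a corresponding FailureDetectionTimeout property should also be available on the Ignite.NET IgniteConfiguration):

import org.apache.ignite.Ignition;
import org.apache.ignite.configuration.IgniteConfiguration;

public class RaisedTimeoutStartup {
    public static void main(String[] args) {
        IgniteConfiguration cfg = new IgniteConfiguration();

        // The default failure detection timeout is 10 seconds; raising it lets
        // the cluster tolerate longer GC/IO/network pauses before a node is
        // considered failed. 30 seconds here is just an illustrative value.
        cfg.setFailureDetectionTimeout(30_000);

        Ignition.start(cfg);
    }
}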