Ignite node cannot join cluster, waits indefinitely for coordinator response

Asked: 2019-09-06 11:02:16

Tags: ignite

I am running two server nodes (A and B) of Apache Ignite 2.7.0, using TcpDiscoveryJdbcIpFinder for discovery.

When I start B as the first node and then start A, everything works fine.

However, when I start A as the first node and then start B, node B gets stuck indefinitely trying to join the cluster.

When I check the logs, I can see that node B joined the cluster and the partition exchange started:

  2019-09-05 10:59:51,850 | disco-event-worker-#39 | INFO  | org.apache.ignite.internal.managers.discovery.GridDiscoveryManager | Added new node to topology: TcpDiscoveryNode [id=686bdf14-201c-43f3-8617-05c7e51224ea, addrs=[10.49.95.44], sockAddrs=[some2.domain/10.49.95.44:47500], discPort=47500, order=2, intOrder=2, lastExchangeTime=1567673970663, loc=false, ver=2.7.0#20181130-sha1:256ae401, isClient=false]
  2019-09-05 10:59:51,850 | disco-event-worker-#39 | DEBUG | org.apache.ignite.internal.managers.discovery.GridDiscoveryManager |
  >>> +----------------+
  >>> Topology snapshot.
  >>> +----------------+
  >>> Ignite instance name: default
  >>> Number of server nodes: 2
  >>> Number of client nodes: 0
  >>> Topology version: 2
  >>> Local: F5DBEC80-D22F-4977-A534-A0E9425A77BB, [some.domain/10.49.94.205], 1, Windows Server 2012 R2 amd64 6.3, admBruegel, Java(TM) SE Runtime Environment 1.8.0_202-b08
  >>> Remote: 686BDF14-201C-43F3-8617-05C7E51224EA, [some2.domain/10.49.95.44], 2, Windows Server 2016 amd64 10.0, admBruegel, Java(TM) SE Runtime Environment 1.8.0_202-b08
  >>> Total number of CPUs: 4
  >>> Total heap size: 32.0GB
  >>> Total offheap size: 4.9GB

After a while, node A receives a NODE_FAILED event for node B, even though node B is still running and waiting to complete the join process:

  2019-09-05 10:59:51,881 | disco-event-worker-#39 | WARN  | org.apache.ignite.internal.managers.discovery.GridDiscoveryManager | Node FAILED: TcpDiscoveryNode [id=686bdf14-201c-43f3-8617-05c7e51224ea, addrs=[10.49.95.44], sockAddrs=[some2.domain/10.49.95.44:47500], discPort=47500, order=2, intOrder=2, lastExchangeTime=1567673970663, loc=false, ver=2.7.0#20181130-sha1:256ae401, isClient=false]
  2019-09-05 10:59:51,881 | disco-event-worker-#39 | DEBUG | org.apache.ignite.internal.managers.discovery.GridDiscoveryManager |
  >>> +----------------+
  >>> Topology snapshot.
  >>> +----------------+
  >>> Ignite instance name: default
  >>> Number of server nodes: 1
  >>> Number of client nodes: 0
  >>> Topology version: 3
  >>> Local: F5DBEC80-D22F-4977-A534-A0E9425A77BB, [some.domain/10.49.94.205], 1, Windows Server 2012 R2 amd64 6.3, admBruegel, Java(TM) SE Runtime Environment 1.8.0_202-b08
  >>> Total number of CPUs: 2
  >>> Total heap size: 16.0GB
  >>> Total offheap size: 2.5GB
  2019-09-05 10:59:51,881 | disco-net-seg-chk-worker-#38 | DEBUG | org.apache.ignite.internal.managers.discovery.GridDiscoveryManager | Segment has been checked [requested=true, valid=true]
  2019-09-05 10:59:51,881 | disco-event-worker-#39 | DEBUG | org.apache.ignite.internal.managers.deployment.GridDeploymentPerVersionStore | Processing node departure event: DiscoveryEvent [evtNode=TcpDiscoveryNode [id=686bdf14-201c-43f3-8617-05c7e51224ea, addrs=[10.49.95.44], sockAddrs=[some2.domain/10.49.95.44:47500], discPort=47500, order=2, intOrder=2, lastExchangeTime=1567673970663, loc=false, ver=2.7.0#20181130-sha1:256ae401, isClient=false], topVer=3, nodeId8=f5dbec80, msg=Node failed: TcpDiscoveryNode [id=686bdf14-201c-43f3-8617-05c7e51224ea, addrs=[10.49.95.44], sockAddrs=[some2.domain/10.49.95.44:47500], discPort=47500, order=2, intOrder=2, lastExchangeTime=1567673970663, loc=false, ver=2.7.0#20181130-sha1:256ae401, isClient=false], type=NODE_FAILED, tstamp=1567673991881]
  2019-09-05 10:59:51,881 | disco-event-worker-#39 | DEBUG | org.apache.ignite.internal.processors.cache.GridCacheMvccManager | Processing node left [nodeId=686bdf14-201c-43f3-8617-05c7e51224ea]
  2019-09-05 10:59:51,897 | disco-event-worker-#39 | DEBUG | org.apache.ignite.internal.processors.cache.GridCacheDeploymentManager | Processing node departure: 686bdf14-201c-43f3-8617-05c7e51224ea
  2019-09-05 10:59:51,897 | disco-event-worker-#39 | DEBUG | org.apache.ignite.internal.managers.deployment.GridDeploymentLocalStore | Deployment meta for local deployment: GridDeploymentMetadata [depMode=SHARED, alias=org.apache.ignite.internal.processors.cache.distributed.dht.preloader.latch.ExchangeLatchManager$$Lambda$153/1190953783, clsName=org.apache.ignite.internal.processors.cache.distributed.dht.preloader.latch.ExchangeLatchManager, userVer=null, sndNodeId=f5dbec80-d22f-4977-a534-a0e9425a77bb, clsLdrId=null, clsLdr=null, participants=null, parentLdr=null, record=true, nodeFilter=null, seqNum=n/a]
  2019-09-05 10:59:51,897 | disco-event-worker-#39 | DEBUG | org.apache.ignite.spi.deployment.local.LocalDeploymentSpi | Registering [ldrRsrcs={ParallelWebappClassLoader

Meanwhile, node B keeps logging "Join request message has been sent (waiting for coordinator response)" and waits forever.

I increased networkTimeout and failureDetectionTimeout on the IgniteConfiguration:

 <property name="failureDetectionTimeout" value="120000"/>
 <property name="networkTimeout" value="120000"/> 

and set networkTimeout and joinTimeout on the DiscoverySpi:

 <property name="networkTimeout" value="120000"/>
 <property name="joinTimeout" value="90000"/>
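For reference, putting those properties together, the discovery part of my configuration looks roughly like this (a sketch, not my exact file; the `dataSource` bean reference is an assumption, since the JDBC IP finder needs one):

```xml
<bean class="org.apache.ignite.configuration.IgniteConfiguration">
    <property name="failureDetectionTimeout" value="120000"/>
    <property name="networkTimeout" value="120000"/>
    <property name="discoverySpi">
        <bean class="org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi">
            <property name="networkTimeout" value="120000"/>
            <property name="joinTimeout" value="90000"/>
            <property name="ipFinder">
                <bean class="org.apache.ignite.spi.discovery.tcp.ipfinder.jdbc.TcpDiscoveryJdbcIpFinder">
                    <!-- "dataSource" is a placeholder bean id for the shared database. -->
                    <property name="dataSource" ref="dataSource"/>
                </bean>
            </property>
        </bean>
    </property>
</bean>
```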

Still, the problem persists.

Both nodes can ping each other, and there is no firewall between them, so no ports are blocked. These are the logs of the two nodes.

I cannot figure out why this happens. The same setup runs fine on other servers.

1 Answer:

Answer 0 (score: 3):

Node A can probably reach node B via the discovery port (47500) but not via the communication port (47100).
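If communication connectivity is the suspect, it can help to pin the communication port explicitly so you know exactly what to test between the hosts. A minimal sketch (47100 is Ignite's default communication port; the values here are illustrative, not taken from the question):

```xml
<property name="communicationSpi">
    <bean class="org.apache.ignite.spi.communication.tcp.TcpCommunicationSpi">
        <!-- Node A must be able to open a TCP connection to node B on this port. -->
        <property name="localPort" value="47100"/>
        <property name="localPortRange" value="10"/>
    </bean>
</property>
```

With the port pinned, a plain TCP connection attempt from node A to node B on 47100 (e.g. with telnet) will tell you whether communication traffic gets through even when ping does.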

There may also be something on these nodes slowing down the initial exchange. For example, if node B cannot resolve one of node A's addresses, the initial exchange can stall (check your DNS settings, etc.).
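A quick way to test the DNS hypothesis is to try resolving, from the stuck node, every hostname that appears in the other node's addrs/sockAddrs in the logs. A minimal sketch (the class and hostname list are illustrative; substitute the names from your own topology output, e.g. some2.domain):

```java
import java.net.InetAddress;
import java.net.UnknownHostException;

public class DnsCheck {
    // Returns true if this JVM can resolve the hostname, the same lookup
    // Ignite performs when it processes another node's address list.
    static boolean resolves(String host) {
        try {
            InetAddress.getByName(host);
            return true;
        } catch (UnknownHostException e) {
            return false;
        }
    }

    public static void main(String[] args) {
        // Replace with the hostnames from the other node's sockAddrs.
        String[] hosts = args.length > 0 ? args : new String[] {"localhost"};
        for (String host : hosts) {
            System.out.println(host + " resolves: " + resolves(host));
        }
    }
}
```

Run it on node B with node A's hostnames as arguments; any name that prints `false` is a lookup that would also hang up the exchange.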