We run a 6-node Cassandra cluster across two AWS regions (ap-southeast-1 and ap-southeast-2).
After running happily for several months, the cluster was restarted to fix a hung repair, and now each group of nodes thinks the other group is down.
Cluster Information:
Name: MegaportGlobal
Snitch: org.apache.cassandra.locator.DynamicEndpointSnitch
Partitioner: org.apache.cassandra.dht.Murmur3Partitioner
Schema versions:
220727fa-88d2-366f-9473-777e32744c37: [10.5.13.117, 10.5.12.245, 10.5.13.93]
UNREACHABLE: [10.4.0.112, 10.4.0.169, 10.4.2.186]
Cluster Information:
Name: MegaportGlobal
Snitch: org.apache.cassandra.locator.DynamicEndpointSnitch
Partitioner: org.apache.cassandra.dht.Murmur3Partitioner
Schema versions:
3932d237-b907-3ef8-95bc-4276dc7f32e6: [10.4.0.112, 10.4.0.169, 10.4.2.186]
UNREACHABLE: [10.5.13.117, 10.5.12.245, 10.5.13.93]
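The split view above is the "Cluster Information" output as seen from each region; a quick way to compare what every node believes is to collect it from each host in turn. A minimal sketch (host list taken from this cluster; the ssh line is left commented since it only works against the live hosts):

```shell
# Collect each node's view of the cluster and compare schema versions.
for h in 10.4.0.112 10.4.0.169 10.4.2.186 10.5.13.117 10.5.12.245 10.5.13.93; do
  echo "=== $h ==="
  # ssh "$h" nodetool describecluster   # uncomment to run against the live cluster
done
```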
From Sydney, 'nodetool status' reports that most of the Singapore nodes are down:
Datacenter: ap-southeast-2
==========================
Status=Up/Down
|/ State=Normal/Leaving/Joining/Moving
-- Address Load Tokens Owns Host ID Rack
UN 10.4.0.112 9.04 GB 256 ? b9c19de4-4939-4112-bf07-d136d8a57b57 2a
UN 10.4.0.169 9.34 GB 256 ? 2d7c3ac4-ae94-43d6-9afe-7d421c06b951 2a
UN 10.4.2.186 10.72 GB 256 ? 4dc8b155-8f9a-4532-86ec-d958ac207f40 2b
Datacenter: ap-southeast-1
==========================
Status=Up/Down
|/ State=Normal/Leaving/Joining/Moving
-- Address Load Tokens Owns Host ID Rack
UN 10.5.13.117 9.45 GB 256 ? 324ee189-3e72-465f-987f-cbc9f7bf740b 1a
DN 10.5.12.245 10.25 GB 256 ? bee281c9-715b-4134-a033-00479a390f1e 1b
DN 10.5.13.93 12.29 GB 256 ? a8262244-91bb-458f-9603-f8c8fe455924 1a
But from Singapore, all of the Sydney nodes are reported as down:
Datacenter: ap-southeast-2
==========================
Status=Up/Down
|/ State=Normal/Leaving/Joining/Moving
-- Address Load Tokens Owns Host ID Rack
DN 10.4.0.112 8.91 GB 256 ? b9c19de4-4939-4112-bf07-d136d8a57b57 2a
DN 10.4.0.169 ? 256 ? 2d7c3ac4-ae94-43d6-9afe-7d421c06b951 2a
DN 10.4.2.186 ? 256 ? 4dc8b155-8f9a-4532-86ec-d958ac207f40 2b
Datacenter: ap-southeast-1
==========================
Status=Up/Down
|/ State=Normal/Leaving/Joining/Moving
-- Address Load Tokens Owns Host ID Rack
UN 10.5.13.117 9.45 GB 256 ? 324ee189-3e72-465f-987f-cbc9f7bf740b 1a
UN 10.5.12.245 10.25 GB 256 ? bee281c9-715b-4134-a033-00479a390f1e 1b
UN 10.5.13.93 12.29 GB 256 ? a8262244-91bb-458f-9603-f8c8fe455924 1a
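When eyeballing these listings gets tedious, the DN rows can be tallied directly from saved `nodetool status` output. A small sketch, using a few sample lines inlined via a here-doc (on a live node you would pipe `nodetool status` in instead):

```shell
# Count rows whose status column is DN (Down/Normal) in nodetool status output.
awk '/^DN/ { n++ } END { print n+0, "down" }' <<'EOF'
UN 10.5.13.117 9.45 GB 256 ? 324ee189-3e72-465f-987f-cbc9f7bf740b 1a
DN 10.5.12.245 10.25 GB 256 ? bee281c9-715b-4134-a033-00479a390f1e 1b
DN 10.5.13.93 12.29 GB 256 ? a8262244-91bb-458f-9603-f8c8fe455924 1a
EOF
```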
Even more confusingly, 'nodetool gossipinfo' executed in Sydney reports every node's status as NORMAL:
/10.5.13.117
generation:1440735653
heartbeat:724504
SEVERITY:0.0
DC:ap-southeast-1
LOAD:1.0149565738E10
SCHEMA:7bf335ee-61ae-36c6-a902-c70d785ec7a3
RACK:1a
STATUS:NORMAL,-1059943672916788858
RELEASE_VERSION:2.1.6
NET_VERSION:8
RPC_ADDRESS:10.5.13.117
INTERNAL_IP:10.5.13.117
HOST_ID:324ee189-3e72-465f-987f-cbc9f7bf740b
/10.5.12.245
generation:1440734497
heartbeat:728014
SEVERITY:0.0
DC:ap-southeast-1
LOAD:1.100647505E10
SCHEMA:7bf335ee-61ae-36c6-a902-c70d785ec7a3
RACK:1b
STATUS:NORMAL,-1029869455226513030
RELEASE_VERSION:2.1.6
NET_VERSION:8
RPC_ADDRESS:10.5.12.245
INTERNAL_IP:10.5.12.245
HOST_ID:bee281c9-715b-4134-a033-00479a390f1e
/10.4.0.112
generation:1440973751
heartbeat:4135
SEVERITY:0.0
DC:ap-southeast-2
LOAD:9.70297176E9
SCHEMA:7bf335ee-61ae-36c6-a902-c70d785ec7a3
RACK:2a
RELEASE_VERSION:2.1.6
STATUS:NORMAL,-1016623069114845926
NET_VERSION:8
RPC_ADDRESS:10.4.0.112
INTERNAL_IP:10.4.0.112
HOST_ID:b9c19de4-4939-4112-bf07-d136d8a57b57
/10.5.13.93
generation:1440734532
heartbeat:727909
SEVERITY:0.0
DC:ap-southeast-1
LOAD:1.3197536002E10
SCHEMA:7bf335ee-61ae-36c6-a902-c70d785ec7a3
RACK:1a
STATUS:NORMAL,-1021689296016263011
RELEASE_VERSION:2.1.6
NET_VERSION:8
RPC_ADDRESS:10.5.13.93
INTERNAL_IP:10.5.13.93
HOST_ID:a8262244-91bb-458f-9603-f8c8fe455924
/10.4.0.169
generation:1440974511
heartbeat:1832
SEVERITY:0.0
DC:ap-southeast-2
LOAD:1.0023502338E10
SCHEMA:7bf335ee-61ae-36c6-a902-c70d785ec7a3
RACK:2a
RELEASE_VERSION:2.1.6
STATUS:NORMAL,-1004223692762353764
NET_VERSION:8
RPC_ADDRESS:10.4.0.169
INTERNAL_IP:10.4.0.169
HOST_ID:2d7c3ac4-ae94-43d6-9afe-7d421c06b951
/10.4.2.186
generation:1440734382
heartbeat:730171
SEVERITY:0.0
DC:ap-southeast-2
LOAD:1.1507595081E10
SCHEMA:7bf335ee-61ae-36c6-a902-c70d785ec7a3
RACK:2b
STATUS:NORMAL,-10099894685483463
RELEASE_VERSION:2.1.6
NET_VERSION:8
RPC_ADDRESS:10.4.2.186
INTERNAL_IP:10.4.2.186
HOST_ID:4dc8b155-8f9a-4532-86ec-d958ac207f40
The same command executed in Singapore does not include a STATUS for any of the Sydney nodes:
/10.5.12.245
generation:1440974710
heartbeat:1372
SEVERITY:0.0
LOAD:1.100835806E10
RPC_ADDRESS:10.5.12.245
NET_VERSION:8
SCHEMA:7bf335ee-61ae-36c6-a902-c70d785ec7a3
RELEASE_VERSION:2.1.6
STATUS:NORMAL,-1029869455226513030
DC:ap-southeast-1
RACK:1b
INTERNAL_IP:10.5.12.245
HOST_ID:bee281c9-715b-4134-a033-00479a390f1e
/10.5.13.117
generation:1440974648
heartbeat:1561
SEVERITY:0.0
LOAD:1.0149992022E10
RPC_ADDRESS:10.5.13.117
NET_VERSION:8
SCHEMA:7bf335ee-61ae-36c6-a902-c70d785ec7a3
RELEASE_VERSION:2.1.6
STATUS:NORMAL,-1059943672916788858
DC:ap-southeast-1
RACK:1a
HOST_ID:324ee189-3e72-465f-987f-cbc9f7bf740b
INTERNAL_IP:10.5.13.117
/10.4.0.112
generation:1440735420
heartbeat:23
SEVERITY:0.0
LOAD:9.570546197E9
RPC_ADDRESS:10.4.0.112
NET_VERSION:8
SCHEMA:7bf335ee-61ae-36c6-a902-c70d785ec7a3
RELEASE_VERSION:2.1.6
DC:ap-southeast-2
RACK:2a
INTERNAL_IP:10.4.0.112
HOST_ID:b9c19de4-4939-4112-bf07-d136d8a57b57
/10.5.13.93
generation:1440734532
heartbeat:729862
SEVERITY:0.0
LOAD:1.3197536002E10
RPC_ADDRESS:10.5.13.93
NET_VERSION:8
SCHEMA:7bf335ee-61ae-36c6-a902-c70d785ec7a3
RELEASE_VERSION:2.1.6
STATUS:NORMAL,-1021689296016263011
DC:ap-southeast-1
RACK:1a
INTERNAL_IP:10.5.13.93
HOST_ID:a8262244-91bb-458f-9603-f8c8fe455924
/10.4.0.169
generation:1440974511
heartbeat:15
SEVERITY:0.5076141953468323
RPC_ADDRESS:10.4.0.169
NET_VERSION:8
SCHEMA:7bf335ee-61ae-36c6-a902-c70d785ec7a3
RELEASE_VERSION:2.1.6
DC:ap-southeast-2
RACK:2a
INTERNAL_IP:10.4.0.169
HOST_ID:2d7c3ac4-ae94-43d6-9afe-7d421c06b951
/10.4.2.186
generation:1440734382
heartbeat:15
SEVERITY:0.0
RPC_ADDRESS:10.4.2.186
NET_VERSION:8
SCHEMA:7bf335ee-61ae-36c6-a902-c70d785ec7a3
RELEASE_VERSION:2.1.6
DC:ap-southeast-2
RACK:2b
INTERNAL_IP:10.4.2.186
HOST_ID:4dc8b155-8f9a-4532-86ec-d958ac207f40
During a restart, each node can see the remote DC for a short while:
INFO [GossipStage:1] 2015-08-31 10:53:07,638 OutboundTcpConnection.java:97 - OutboundTcpConnection using coalescing strategy DISABLED
INFO [HANDSHAKE-/10.4.2.186] 2015-08-31 10:53:08,267 OutboundTcpConnection.java:485 - Handshaking version with /10.4.2.186
INFO [HANDSHAKE-/10.4.0.169] 2015-08-31 10:53:08,287 OutboundTcpConnection.java:485 - Handshaking version with /10.4.0.169
INFO [HANDSHAKE-/10.5.12.245] 2015-08-31 10:53:08,391 OutboundTcpConnection.java:485 - Handshaking version with /10.5.12.245
INFO [HANDSHAKE-/10.5.13.93] 2015-08-31 10:53:08,498 OutboundTcpConnection.java:485 - Handshaking version with /10.5.13.93
INFO [GossipStage:1] 2015-08-31 10:53:08,537 Gossiper.java:987 - Node /10.5.12.245 has restarted, now UP
INFO [HANDSHAKE-/10.5.13.117] 2015-08-31 10:53:08,537 OutboundTcpConnection.java:485 - Handshaking version with /10.5.13.117
INFO [GossipStage:1] 2015-08-31 10:53:08,656 StorageService.java:1642 - Node /10.5.12.245 state jump to normal
INFO [GossipStage:1] 2015-08-31 10:53:08,820 Gossiper.java:987 - Node /10.5.13.117 has restarted, now UP
INFO [GossipStage:1] 2015-08-31 10:53:08,852 Gossiper.java:987 - Node /10.5.13.93 has restarted, now UP
INFO [SharedPool-Worker-33] 2015-08-31 10:53:08,907 Gossiper.java:954 - InetAddress /10.5.12.245 is now UP
INFO [GossipStage:1] 2015-08-31 10:53:08,947 StorageService.java:1642 - Node /10.5.13.93 state jump to normal
INFO [GossipStage:1] 2015-08-31 10:53:09,007 Gossiper.java:987 - Node /10.4.0.169 has restarted, now UP
WARN [GossipTasks:1] 2015-08-31 10:53:09,123 FailureDetector.java:251 - Not marking nodes down due to local pause of 7948322997 > 5000000000
INFO [GossipStage:1] 2015-08-31 10:53:09,192 StorageService.java:1642 - Node /10.4.0.169 state jump to normal
INFO [HANDSHAKE-/10.5.12.245] 2015-08-31 10:53:09,199 OutboundTcpConnection.java:485 - Handshaking version with /10.5.12.245
INFO [GossipStage:1] 2015-08-31 10:53:09,203 Gossiper.java:987 - Node /10.4.2.186 has restarted, now UP
INFO [GossipStage:1] 2015-08-31 10:53:09,206 StorageService.java:1642 - Node /10.4.2.186 state jump to normal
INFO [SharedPool-Worker-34] 2015-08-31 10:53:09,215 Gossiper.java:954 - InetAddress /10.5.13.93 is now UP
INFO [SharedPool-Worker-33] 2015-08-31 10:53:09,259 Gossiper.java:954 - InetAddress /10.5.13.117 is now UP
INFO [SharedPool-Worker-33] 2015-08-31 10:53:09,259 Gossiper.java:954 - InetAddress /10.4.0.169 is now UP
INFO [SharedPool-Worker-33] 2015-08-31 10:53:09,259 Gossiper.java:954 - InetAddress /10.4.2.186 is now UP
INFO [GossipStage:1] 2015-08-31 10:53:09,296 StorageService.java:1642 - Node /10.4.0.169 state jump to normal
INFO [GossipStage:1] 2015-08-31 10:53:09,491 StorageService.java:1642 - Node /10.5.12.245 state jump to normal
INFO [HANDSHAKE-/10.5.13.117] 2015-08-31 10:53:09,509 OutboundTcpConnection.java:485 - Handshaking version with /10.5.13.117
INFO [GossipStage:1] 2015-08-31 10:53:09,511 StorageService.java:1642 - Node /10.5.13.93 state jump to normal
INFO [HANDSHAKE-/10.5.13.93] 2015-08-31 10:53:09,538 OutboundTcpConnection.java:485 - Handshaking version with /10.5.13.93
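One line worth noting in the log above is the FailureDetector warning "local pause of 7948322997 > 5000000000". Those figures are in nanoseconds; converting makes the magnitude obvious (roughly an 8-second local pause against a 5-second tolerance):

```shell
# Convert the logged pause and threshold from nanoseconds to seconds.
awk 'BEGIN { printf "pause %.1f s, threshold %.1f s\n", 7948322997 / 1e9, 5000000000 / 1e9 }'
```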
Then, without any errors, the nodes are marked down:
INFO [GossipTasks:1] 2015-08-31 10:53:34,410 Gossiper.java:968 - InetAddress /10.5.13.117 is now DOWN
INFO [GossipTasks:1] 2015-08-31 10:53:34,411 Gossiper.java:968 - InetAddress /10.5.12.245 is now DOWN
INFO [GossipTasks:1] 2015-08-31 10:53:34,411 Gossiper.java:968 - InetAddress /10.5.13.93 is now DOWN
We have tried restarting multiple times, but the behaviour remains the same.
EDIT:
It looks related to the gossip protocol... turning on extra debugging shows the PHI values steadily increasing:
TRACE [GossipTasks:1] 2015-08-31 16:46:44,706 FailureDetector.java:262 - PHI for /10.4.0.112 : 2.9395029255
TRACE [GossipTasks:1] 2015-08-31 16:46:45,727 FailureDetector.java:262 - PHI for /10.4.0.112 : 3.449690761
TRACE [GossipTasks:1] 2015-08-31 16:46:46,728 FailureDetector.java:262 - PHI for /10.4.0.112 : 3.95049114
TRACE [GossipTasks:1] 2015-08-31 16:46:47,730 FailureDetector.java:262 - PHI for /10.4.0.112 : 4.451317456
TRACE [GossipTasks:1] 2015-08-31 16:46:48,732 FailureDetector.java:262 - PHI for /10.4.0.112 : 4.952114357
TRACE [GossipTasks:1] 2015-08-31 16:46:49,733 FailureDetector.java:262 - PHI for /10.4.0.112 : 5.4529339645
TRACE [GossipTasks:1] 2015-08-31 16:46:50,735 FailureDetector.java:262 - PHI for /10.4.0.112 : 5.953951289
TRACE [GossipTasks:1] 2015-08-31 16:46:51,737 FailureDetector.java:262 - PHI for /10.4.0.112 : 6.4547808165
TRACE [GossipTasks:1] 2015-08-31 16:46:52,738 FailureDetector.java:262 - PHI for /10.4.0.112 : 6.955600038
TRACE [GossipTasks:1] 2015-08-31 16:46:53,740 FailureDetector.java:262 - PHI for /10.4.0.112 : 7.456422601
TRACE [GossipTasks:1] 2015-08-31 16:46:54,742 FailureDetector.java:262 - PHI for /10.4.0.112 : 7.957303284
TRACE [GossipTasks:1] 2015-08-31 16:46:55,751 FailureDetector.java:262 - PHI for /10.4.0.112 : 8.461658576
TRACE [GossipTasks:1] 2015-08-31 16:46:56,755 FailureDetector.java:262 - PHI for /10.4.0.112 : 8.9636610545
TRACE [GossipTasks:1] 2015-08-31 16:46:57,763 FailureDetector.java:262 - PHI for /10.4.0.112 : 9.4676926445
After a restart, the PHI value climbs steadily until it exceeds the failure-detector threshold and the node is marked DOWN.
Any suggestions on how to proceed?
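For anyone wanting to reproduce this TRACE output: the failure-detector log level can be raised at runtime with `nodetool setlogginglevel` (available in Cassandra 2.1+), no restart needed. The sketch below only echoes the commands so it is safe to run anywhere; drop the `echo` on a live node:

```shell
# Raise the FailureDetector logger to TRACE at runtime, then revert.
CLASS="org.apache.cassandra.gms.FailureDetector"
echo "nodetool setlogginglevel $CLASS TRACE"   # enable the PHI trace lines
echo "nodetool setlogginglevel $CLASS INFO"    # revert when finished
```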
Answer 0 (score: 2)
For laggy networks, raise the phi failure-detection threshold to 12 or 15. This is often necessary on AWS, especially across regions.
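The setting lives in cassandra.yaml as `phi_convict_threshold` (default 8); a node restart is required for a change to take effect. A minimal fragment:

```yaml
# cassandra.yaml - accrual failure detector sensitivity.
# Default is 8; 12-15 tolerates the latency spikes common on
# cross-region AWS links at the cost of slower failure detection.
phi_convict_threshold: 12
```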
Answer 1 (score: 2)
The problem turned out to be the AWS network link, specifically the network MTU. Due to a subtle problem in our routing configuration, the data path between the Sydney and Singapore AWS regions had become asymmetric.
The lesson I want to take from this: if the symptoms show up between DCs but not within a DC, it is most likely the network, and things like MTU matter even when ping and telnet look fine.
Thanks to Jeff and Stefan for their input. If you ever find yourself in Brisbane, I'll buy you a beer!
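A plain ping passes because small packets fit any MTU; to catch this class of problem you have to ping with the Don't Fragment bit set and a full-size payload. A sketch using Linux `ping` syntax (the target address is one of this cluster's remote nodes; the ping itself is commented since it only works on the live network):

```shell
# Probe the path MTU: 1472 = 1500-byte Ethernet MTU minus
# 20 bytes of IP header and 8 bytes of ICMP header.
MTU=1500
PAYLOAD=$((MTU - 28))
echo "probing with ${PAYLOAD}-byte payload"
# ping -M do -s "$PAYLOAD" -c 3 10.4.0.112   # -M do sets Don't Fragment (Linux)
```

If full-size probes fail while a bare ping succeeds, the path MTU is smaller than the interface MTU and large gossip/stream packets are being silently dropped.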