I have a two-node cluster on AWS. Everything was fine until yesterday.
Today, when I run nodetool status, I hit a problem and get the output below: Node1 thinks Node2 is down, and vice versa.
From ip2:
ip2$ nodetool status
Datacenter: DC1
===============
Status=Up/Down
|/ State=Normal/Leaving/Joining/Moving
--  Address  Load      Tokens  Owns  Host ID                               Rack
DN  <ip1>    ?         256     ?     27c91f95-4b58-492b-a16e-d9b99867a505  r1
Datacenter: datacenter1
=======================
Status=Up/Down
|/ State=Normal/Leaving/Joining/Moving
--  Address  Load      Tokens  Owns  Host ID                               Rack
UN  <ip2>    9.11 GiB  256     ?     e628324d-34dd-4c9c-a53d-99abfacb54af  rack1
Note: Non-system keyspaces don't have the same replication settings, effective ownership information is meaningless
From ip1:
ip1$ nodetool status
Datacenter: DC1
===============
Status=Up/Down
|/ State=Normal/Leaving/Joining/Moving
--  Address  Load      Tokens  Owns  Host ID                               Rack
DN  <ip2>    ?         256     ?     e628324d-34dd-4c9c-a53d-99abfacb54af  r1
Datacenter: datacenter1
=======================
Status=Up/Down
|/ State=Normal/Leaving/Joining/Moving
--  Address  Load      Tokens  Owns  Host ID                               Rack
UN  <ip1>    9.14 GiB  256     ?     27c91f95-4b58-492b-a16e-d9b99867a505  rack1
Note: Non-system keyspaces don't have the same replication settings, effective ownership information is meaningless
Going by the last line, there seems to be some problem with the replication settings, but I can't figure out what it is. Please advise.
The Cassandra logs also show the following:
WARN [OptionalTasks:1] 2017-08-08 15:33:37,223 CassandraRoleManager.java:344 - CassandraRoleManager skipped default role setup: some nodes were not ready
INFO [OptionalTasks:1] 2017-08-08 15:33:37,223 CassandraRoleManager.java:383 - Setup task failed with error, rescheduling
INFO [HANDSHAKE-/172.15.14.106] 2017-08-08 15:33:37,340 OutboundTcpConnection.java:515 - Handshaking version with /172.15.14.106
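For reference, a minimal sketch of how the per-keyspace replication settings that the note complains about can be inspected from cqlsh (my_keyspace is just a placeholder name; system_schema is the Cassandra 3.x schema keyspace, and on older versions DESCRIBE KEYSPACE shows the same information):

-- list the replication class and datacenter names configured for every keyspace
SELECT keyspace_name, replication FROM system_schema.keyspaces;

-- or show the full definition of a single keyspace (placeholder name)
DESCRIBE KEYSPACE my_keyspace;

With NetworkTopologyStrategy, the datacenter names in those replication maps have to match the datacenter names that nodetool status reports for the nodes.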