Does a Neo4j cluster require at least 3 nodes?

Asked: 2013-11-06 10:56:44

Tags: neo4j cluster-computing high-availability

I'm experimenting with a Neo4j high-availability cluster. The documentation states that a cluster needs at least 3 nodes, or 2 nodes plus an arbiter, but I'd like to know what the implications are of running with only 2 nodes.
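For reference, a minimal 2-instance-plus-arbiter layout might look like the sketch below. This is an assumption based on the Neo4j 1.9-era HA settings (`ha.server_id`, `ha.initial_hosts`, `ha.cluster_server`, `ha.server`); the hosts and ports are placeholders matching the trace further down, not a verified configuration.

```properties
# neo4j.properties on instance 1
# (instance 2 is identical except ha.server_id=2 and its own ports)
ha.server_id=1
# All three cluster members, including the arbiter on :5003 (assumed port)
ha.initial_hosts=127.0.0.1:5001,127.0.0.1:5002,127.0.0.1:5003
ha.cluster_server=127.0.0.1:5001
ha.server=127.0.0.1:6363
```

The arbiter is a third cluster member that participates in master elections but stores no data, so it lets a 2-instance cluster reach a 3-member quorum cheaply. In the 1.9-era distribution it was started with its own script (`bin/neo4j-arbiter start`).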

If I set up a 3-node cluster and remove a node, I have no problem adding data. Likewise, if I set up a cluster with only 2 nodes, I can still add data with no apparent loss of functionality. What limitations should I expect to run into? For example, the following trace shows a slave starting up in a 2-node cluster. Data can be added to the master without any problem, and can be queried.

2013-11-06 10:34:50.403+0000 INFO  [Cluster] Attempting to join cluster of [127.0.0.1:5001, 127.0.0.1:5002]
2013-11-06 10:34:54.473+0000 INFO  [Cluster] Joined cluster:Name:neo4j.ha Nodes:{1=cluster://127.0.0.1:5001, 2=cluster://127.0.0.1:5002} Roles:{coordinator=1}
2013-11-06 10:34:54.477+0000 INFO  [Cluster] Instance 2 (this server) joined the cluster
2013-11-06 10:34:54.512+0000 INFO  [Cluster] Instance 1 was elected as coordinator
2013-11-06 10:34:54.530+0000 INFO  [Cluster] Instance 1 is available as master at ha://localhost:6363?serverId=1
2013-11-06 10:34:54.531+0000 INFO  [Cluster] Instance 1 is available as backup at backup://localhost:6366
2013-11-06 10:34:54.537+0000 INFO  [Cluster] ServerId 2, moving to slave for master ha://localhost:6363?serverId=1
2013-11-06 10:34:54.564+0000 INFO  [Cluster] Checking store consistency with master
2013-11-06 10:34:54.620+0000 INFO  [Cluster] The store does not represent the same database as master. Will remove and fetch a new one from master
2013-11-06 10:34:54.646+0000 INFO  [Cluster] ServerId 2, moving to slave for master ha://localhost:6363?serverId=1
2013-11-06 10:34:54.658+0000 INFO  [Cluster] Copying store from master
2013-11-06 10:34:54.687+0000 INFO  [Cluster] Copying index/lucene-store.db
2013-11-06 10:34:54.688+0000 INFO  [Cluster] Copied index/lucene-store.db
2013-11-06 10:34:54.688+0000 INFO  [Cluster] Copying neostore.nodestore.db
2013-11-06 10:34:54.689+0000 INFO  [Cluster] Copied neostore.nodestore.db
2013-11-06 10:34:54.689+0000 INFO  [Cluster] Copying neostore.propertystore.db
2013-11-06 10:34:54.689+0000 INFO  [Cluster] Copied neostore.propertystore.db
2013-11-06 10:34:54.689+0000 INFO  [Cluster] Copying neostore.propertystore.db.arrays
2013-11-06 10:34:54.690+0000 INFO  [Cluster] Copied neostore.propertystore.db.arrays
2013-11-06 10:34:54.690+0000 INFO  [Cluster] Copying neostore.propertystore.db.index
2013-11-06 10:34:54.690+0000 INFO  [Cluster] Copied neostore.propertystore.db.index
2013-11-06 10:34:54.690+0000 INFO  [Cluster] Copying neostore.propertystore.db.index.keys
2013-11-06 10:34:54.691+0000 INFO  [Cluster] Copied neostore.propertystore.db.index.keys
2013-11-06 10:34:54.691+0000 INFO  [Cluster] Copying neostore.propertystore.db.strings
2013-11-06 10:34:54.691+0000 INFO  [Cluster] Copied neostore.propertystore.db.strings
2013-11-06 10:34:54.691+0000 INFO  [Cluster] Copying neostore.relationshipstore.db
2013-11-06 10:34:54.692+0000 INFO  [Cluster] Copied neostore.relationshipstore.db
2013-11-06 10:34:54.692+0000 INFO  [Cluster] Copying neostore.relationshiptypestore.db
2013-11-06 10:34:54.692+0000 INFO  [Cluster] Copied neostore.relationshiptypestore.db
2013-11-06 10:34:54.692+0000 INFO  [Cluster] Copying neostore.relationshiptypestore.db.names
2013-11-06 10:34:54.693+0000 INFO  [Cluster] Copied neostore.relationshiptypestore.db.names
2013-11-06 10:34:54.693+0000 INFO  [Cluster] Copying nioneo_logical.log.v0
2013-11-06 10:34:54.693+0000 INFO  [Cluster] Copied nioneo_logical.log.v0
2013-11-06 10:34:54.693+0000 INFO  [Cluster] Copying neostore
2013-11-06 10:34:54.694+0000 INFO  [Cluster] Copied neostore
2013-11-06 10:34:54.694+0000 INFO  [Cluster] Done, copied 12 files
2013-11-06 10:34:55.101+0000 INFO  [Cluster] Finished copying store from master
2013-11-06 10:34:55.117+0000 INFO  [Cluster] Checking store consistency with master
2013-11-06 10:34:55.123+0000 INFO  [Cluster] Store is consistent
2013-11-06 10:34:55.124+0000 INFO  [Cluster] Catching up with master
2013-11-06 10:34:55.125+0000 INFO  [Cluster] Now consistent with master
2013-11-06 10:34:55.172+0000 INFO  [Cluster] ServerId 2, successfully moved to slave for master ha://localhost:6363?serverId=1
2013-11-06 10:34:55.207+0000 INFO  [Cluster] Instance 2 (this server) is available as slave at ha://localhost:6364?serverId=2
2013-11-06 10:34:55.261+0000 INFO  [API] Successfully started database
2013-11-06 10:34:55.265+0000 INFO  [Cluster] Database available for write transactions
2013-11-06 10:34:55.318+0000 INFO  [API] Starting HTTP on port :8574 with 40 threads available
2013-11-06 10:34:55.614+0000 INFO  [API] Enabling HTTPS on port :8575
2013-11-06 10:34:56.256+0000 INFO  [API] Mounted REST API at: /db/manage/
2013-11-06 10:34:56.261+0000 INFO  [API] Mounted discovery module at [/]
2013-11-06 10:34:56.341+0000 INFO  [API] Loaded server plugin "CypherPlugin"
2013-11-06 10:34:56.344+0000 INFO  [API] Loaded server plugin "GremlinPlugin"
2013-11-06 10:34:56.347+0000 INFO  [API] Mounted REST API at [/db/data/]
2013-11-06 10:34:56.355+0000 INFO  [API] Mounted management API at [/db/manage/]
2013-11-06 10:34:56.435+0000 INFO  [API] Mounted webadmin at [/webadmin]
2013-11-06 10:34:56.477+0000 INFO  [API] Mounting static content at [/webadmin] from [webadmin-html]
2013-11-06 10:34:57.923+0000 INFO  [API] Remote interface ready and available at [http://localhost:8574/]
2013-11-06 10:35:52.829+0000 INFO  [API] Available console sessions: SHELL: class org.neo4j.server.webadmin.console.ShellSessionCreator
CYPHER: class org.neo4j.server.webadmin.console.CypherSessionCreator
GREMLIN: class org.neo4j.server.webadmin.console.GremlinSessionCreator

Thanks

2 Answers:

Answer 0 (score: 0)

There is no impact on the functionality of the Neo4j server.

But as far as high availability is concerned, it is better to have more than 2 servers in the cluster.

Answer 1 (score: 0)

If there is a network failure between the two nodes, and both are still running but cannot see each other, each will promote itself to master (a split-brain situation).

When the network recovers, this can cause problems as the cluster re-forms, since two divergent masters must be reconciled.

Adding a 3rd node ensures that only one of the 3 nodes can become master.
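The reason a 3rd member fixes this is simple quorum arithmetic: a partition side can only safely elect a master if it still sees a strict majority of the configured cluster. A minimal sketch (not Neo4j code) of that rule:

```python
# Why an odd-sized cluster avoids split brain: a side of a network
# partition may only elect a master if it holds a strict majority
# (quorum) of the configured cluster membership.

def has_quorum(visible_members: int, cluster_size: int) -> bool:
    """True if this partition side can safely elect a master."""
    return visible_members > cluster_size // 2

# 2-node cluster splits 1/1: under quorum rules neither side may elect
# a master; without them, both promote themselves (split brain).
print(has_quorum(1, 2))  # False

# 3-node cluster splits 2/1: exactly one side keeps quorum.
print(has_quorum(2, 3))  # True
print(has_quorum(1, 3))  # False
```

This is also why a 2-node cluster plus an arbiter works: the arbiter's vote gives one partition side a 2-of-3 majority without holding any data.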