I have two dedicated ES machines (2.2.0). The two machines have identical specs: each runs Windows Server 2012 R2 with 128 GB of RAM. For ES, I plan to run two nodes on each machine to form the cluster.
I am looking at elasticsearch.yml to figure out how to configure each node so that they form a cluster.
The two machines, on the same network, have the following server names and IP addresses:
SRC01, 172.21.0.21
SRC02, 172.21.0.22
Looking at elasticsearch.yml, I am not sure what to set. I believe I need the correct values in the Network and Discovery sections:
# ---------------------------------- Network -----------------------------------
#
# Set the bind address to a specific IP (IPv4 or IPv6):
#
# network.host: 192.168.0.1
#
# Set a custom port for HTTP:
#
# http.port: 9200
#
# --------------------------------- Discovery ----------------------------------
#
# Elasticsearch nodes will find each other via unicast, by default.
#
# Pass an initial list of hosts to perform discovery when new node is started:
# The default list of hosts is ["127.0.0.1", "[::1]"]
#
# discovery.zen.ping.unicast.hosts: ["host1", "host2"]
#
# Prevent the "split brain" by configuring the majority of nodes (total number of nodes / 2 + 1):
#
# discovery.zen.minimum_master_nodes: 3
#
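The split-brain comment above uses the quorum formula (total number of master-eligible nodes / 2 + 1). As a quick sanity check of that arithmetic for a four-node cluster like the one in this question, a minimal sketch (the function name is mine):

```python
def minimum_master_nodes(master_eligible: int) -> int:
    """Quorum of master-eligible nodes: floor(n / 2) + 1."""
    return master_eligible // 2 + 1

# Four master-eligible nodes (two per machine) need a quorum of 3:
print(minimum_master_nodes(4))  # -> 3
```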
I searched the web and SO hoping to find a complete configuration example to start from, but could not find one.
Any input or pointers are greatly appreciated.
Update
With Val's help, here is my minimal elasticsearch.yml, after testing, for the four nodes (two per machine):
#----------SRC01, node 1---------
cluster.name: elastic
node.name: elastic_src01_1
network.host: 172.21.0.21
discovery.zen.ping.unicast.hosts: ["172.21.0.21","172.21.0.22"]
#----------SRC01, node 2---------
cluster.name: elastic
node.name: elastic_src01_2
network.host: 172.21.0.21
discovery.zen.ping.unicast.hosts: ["172.21.0.21","172.21.0.22"]
#----------SRC02, node 1---------
cluster.name: elastic
node.name: elastic_src02_1
network.host: 172.21.0.22
discovery.zen.ping.unicast.hosts: ["172.21.0.21","172.21.0.22"]
#----------SRC02, node 2---------
cluster.name: elastic
node.name: elastic_src02_2
network.host: 172.21.0.22
discovery.zen.ping.unicast.hosts: ["172.21.0.21","172.21.0.22"]
Here is the problem I ran into:
Log excerpt:
[2016-02-28 12:38:33,155][INFO ][node ] [elastic_src01_2] version[2.2.0], pid[4620], build[8ff36d1/2016-01-27T13:32:39Z]
[2016-02-28 12:38:33,155][INFO ][node ] [elastic_src01_2] initializing ...
[2016-02-28 12:38:33,546][INFO ][plugins ] [elastic_src01_2] modules [lang-expression, lang-groovy], plugins [], sites []
[2016-02-28 12:38:33,562][INFO ][env ] [elastic_src01_2] using [1] data paths, mounts [[Data (E:)]], net usable_space [241.7gb], net total_space [249.9gb], spins? [unknown], types [NTFS]
[2016-02-28 12:38:33,562][INFO ][env ] [elastic_src01_2] heap size [1.9gb], compressed ordinary object pointers [true]
[2016-02-28 12:38:35,077][INFO ][node ] [elastic_src01_2] initialized
[2016-02-28 12:38:35,077][INFO ][node ] [elastic_src01_2] starting ...
[2016-02-28 12:38:35,218][INFO ][transport ] [elastic_src01_2] publish_address {172.21.0.21:9302}, bound_addresses {172.21.0.21:9302}
[2016-02-28 12:38:35,218][INFO ][discovery ] [elastic_src01_2] elastic/N8r-gD9WQSSvAYMOlJzmIg
[2016-02-28 12:38:39,796][INFO ][cluster.service ] [elastic_src01_2] detected_master {elastic_src01_1}{UWGAo0BKTQm2f650nyDKYg}{172.21.0.21}{172.21.0.21:9300}, added {{elastic_src01_1}{UWGAo0BKTQm2f650nyDKYg}{172.21.0.21}{172.21.0.21:9300},{elastic_src01_1}{qNDQjkmsRjiIVjZ88JsX4g}{172.21.0.21}{172.21.0.21:9301},}, reason: zen-disco-receive(from master [{elastic_src01_1}{UWGAo0BKTQm2f650nyDKYg}{172.21.0.21}{172.21.0.21:9300}])
[2016-02-28 12:38:39,843][INFO ][http ] [elastic_src01_2] publish_address {172.21.0.21:9202}, bound_addresses {172.21.0.21:9202}
[2016-02-28 12:38:39,843][INFO ][node ] [elastic_src01_2] started
However, when I start node 1 on the SRC02 machine, I do not see a detected_master message. Here is what ES produced:
[2016-02-28 12:22:52,256][INFO ][node ] [elastic_src02_1] version[2.2.0], pid[6432], build[8ff36d1/2016-01-27T13:32:39Z]
[2016-02-28 12:22:52,256][INFO ][node ] [elastic_src02_1] initializing ...
[2016-02-28 12:22:52,662][INFO ][plugins ] [elastic_src02_1] modules [lang-expression, lang-groovy], plugins [], sites []
[2016-02-28 12:22:52,693][INFO ][env ] [elastic_src02_1] using [1] data paths, mounts [[Data (E:)]], net usable_space [241.6gb], net total_space [249.8gb], spins? [unknown], types [NTFS]
[2016-02-28 12:22:52,693][INFO ][env ] [elastic_src02_1] heap size [910.5mb], compressed ordinary object pointers [true]
[2016-02-28 12:22:54,193][INFO ][node ] [elastic_src02_1] initialized
[2016-02-28 12:22:54,193][INFO ][node ] [elastic_src02_1] starting ...
[2016-02-28 12:22:54,334][INFO ][transport ] [elastic_src02_1] publish_address {172.21.0.22:9300}, bound_addresses {172.21.0.22:9300}
[2016-02-28 12:22:54,334][INFO ][discovery ] [elastic_src02_1] elastic/SNvuAfnxQV-RW430zLF6Vg
[2016-02-28 12:22:58,912][INFO ][cluster.service ] [elastic_src02_1] new_master {elastic_src02_1}{SNvuAfnxQV-RW430zLF6Vg}{172.21.0.22}{172.21.0.22:9300}, reason: zen-disco-join(elected_as_master, [0] joins received)
[2016-02-28 12:22:58,943][INFO ][gateway ] [elastic_src02_1] recovered [0] indices into cluster_state
[2016-02-28 12:22:58,959][INFO ][http ] [elastic_src02_1] publish_address {172.21.0.22:9200}, bound_addresses {172.21.0.22:9200}
[2016-02-28 12:22:58,959][INFO ][node ] [elastic_src02_1] started
Did the nodes on the SRC02 machine actually form a cluster with the nodes on the SRC01 machine?
Also, when I add
discovery.zen.minimum_master_nodes: 3
to the elasticsearch.yml files of nodes elastic_src01_1 and elastic_src01_2 and then start the second node elastic_src01_2 on machine SRC01, I do not see detected_master in the messages ES generates.
Does this mean that elastic_src01_1 and elastic_src01_2 do not form a cluster?
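One quick way to answer this kind of question is to ask any node which nodes are in its cluster via the `_cat/nodes` API; if the four nodes formed a single cluster, each of them should list all four. A minimal sketch using only Python's standard library (the helper name is mine; any of the four HTTP addresses would do):

```python
import urllib.request

def cat_nodes(host: str, port: int = 9200, timeout: float = 5.0) -> str:
    """Fetch the _cat/nodes listing from the node at host:port."""
    url = f"http://{host}:{port}/_cat/nodes?v"
    with urllib.request.urlopen(url, timeout=timeout) as resp:
        return resp.read().decode("utf-8")

# Against a running cluster (uncomment to run):
# print(cat_nodes("172.21.0.21"))
```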
Thanks for your help!
Update 2
The SRC01 and SRC02 machines can see each other. Here is the ping result from SRC02 to SRC01:
C:\Users\Administrator>ping 172.21.0.21
Pinging 172.21.0.21 with 32 bytes of data:
Reply from 172.21.0.21: bytes=32 time<1ms TTL=128
Reply from 172.21.0.21: bytes=32 time<1ms TTL=128
Reply from 172.21.0.21: bytes=32 time<1ms TTL=128
Reply from 172.21.0.21: bytes=32 time<1ms TTL=128
Update 3
The problem is solved. The reason my setup did not work was that the servers' firewalls were blocking communication on ports 9300/9200.
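For anyone hitting the same symptom: a successful ICMP ping says nothing about TCP ports, so before touching the ES config it is worth checking that the transport port (9300) and HTTP port (9200) are reachable from the other machine. A minimal sketch (the function name is mine):

```python
import socket

def tcp_port_open(host: str, port: int, timeout: float = 3.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Run from SRC02 against SRC01 (and vice versa), e.g.:
# print(tcp_port_open("172.21.0.21", 9300), tcp_port_open("172.21.0.21", 9200))
```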
Answer (score: 6)
Basically, you just need to configure the network settings so that all nodes can see each other on the network. In addition, since you are running two nodes on the same machine and still want high availability, you want to prevent a primary shard and its replicas from ending up on the same physical machine.
Finally, since you have four nodes in total in your cluster, you need to prevent split brain situations, and so you also need to set discovery.zen.minimum_master_nodes.
Nodes 1/2 on SRC01:
# cluster name
cluster.name: Name_of_your_cluster
# Give each node a different name (optional but good practice if you don't know Marvel characters)
node.name: SRC01_Node1/2
# The IP that this node will bind to and publish
network.host: 172.21.0.21
# The IP of the other nodes
discovery.zen.ping.unicast.hosts: ["172.21.0.22"]
# prevent split brain
discovery.zen.minimum_master_nodes: 3
# to prevent primary/replica shards to be on the same physical host
# see why at http://stackoverflow.com/questions/35677741/proper-value-of-es-heap-size-for-a-dedicated-machine-with-two-nodes-in-a-cluster
cluster.routing.allocation.same_shard.host: true
# prevent memory swapping
bootstrap.mlockall: true
Nodes 1/2 on SRC02:
# cluster name
cluster.name: Name_of_your_cluster
# Give each node a different name (optional but good practice if you don't know Marvel characters)
node.name: SRC02_Node1/2
# The IP that this node will bind to and publish
network.host: 172.21.0.22
# The IP of the other nodes
discovery.zen.ping.unicast.hosts: ["172.21.0.21"]
# prevent split brain
discovery.zen.minimum_master_nodes: 3
# to prevent primary/replica shards to be on the same physical host
# see why at http://stackoverflow.com/questions/35677741/proper-value-of-es-heap-size-for-a-dedicated-machine-with-two-nodes-in-a-cluster
cluster.routing.allocation.same_shard.host: true
# prevent memory swapping
bootstrap.mlockall: true