My environment is two physical machines, each running Docker via docker-compose.
I want to create an Elasticsearch cluster of two Docker containers, one on each machine.
The two containers cannot connect to each other. Any ideas?
The Docker image in use is elasticsearch:5.4.2.
docker-compose.yml
version: '2'
services:
  elasticsearch:
    image: es:542
    hostname: es2
    container_name: es2
    user: elasticsearch
    ports:
      - 9200:9200
      - 9300:9300
    environment:
      - ES_JAVA_OPTS=-Xms1g -Xmx1g
    command: /usr/share/elasticsearch/bin/elasticsearch
elasticsearch.yml
http.host: 0.0.0.0
transport.host: 0.0.0.0
discovery.zen.minimum_master_nodes: 2
cluster.name: prod_es_cluster
node.name: prod_es_node1
node.master: true
node.data: true
discovery.zen.ping_timeout: 10s
network.host: 0.0.0.0
network.bind_host: 0.0.0.0
network.publish_host: 0.0.0.0
transport.tcp.port: 9300
discovery.zen.ping.unicast.hosts: ["127.0.0.1", "[::1]"]
And the logs:
ES1
[2017-11-09T05:56:10,552] [INFO] [o.e.t.TransportService] [prod_es_node1] publish_address {172.24.0.2:9300}, bound_addresses {[::]:9300}
[2017-11-09T05:56:10,558] [INFO] [o.e.b.BootstrapChecks] [prod_es_node1] bound or publishing to a non-loopback or non-link-local address, enforcing bootstrap checks
[2017-11-09T05:56:40,576] [WARN] [o.e.n.Node] [prod_es_node1] timed out while waiting for initial discovery state - timeout: 30s
[2017-11-09T05:56:40,584] [INFO] [o.e.h.n.Netty4HttpServerTransport] [prod_es_node1] publish_address {172.24.0.2:9200}, bound_addresses {[::]:9200}
[2017-11-09T05:56:40,587] [INFO] [o.e.n.Node] [prod_es_node1] started
ES2
[2017-11-09T09:37:20,084] [WARN] [o.e.d.z.ZenDiscovery] [prod_es_node2] failed to connect to master [{prod_es_node1}{BxKzhOnJTUC50cYTz_HmfA}{zqtU07jfQJOrmB9AYL01Ig}{172.24.0.2}{172.24.0.2:9300}], retrying...
org.elasticsearch.transport.ConnectTransportException: [prod_es_node1][172.24.0.2:9300] connect_timeout[30s]
    at org.elasticsearch.transport.netty4.Netty4Transport.connectToChannels(Netty4Transport.java:361) ~[?:?]
    at org.elasticsearch.transport.TcpTransport.openConnection(TcpTransport.java:549) ~[elasticsearch-5.4.2.jar:5.4.2]
    at org.elasticsearch.transport.TcpTransport.connectToNode(TcpTransport.java:473) ~[elasticsearch-5.4.2.jar:5.4.2]
    at org.elasticsearch.transport.TransportService.connectToNode(TransportService.java:315) ~[elasticsearch-5.4.2.jar:5.4.2]
    at org.elasticsearch.transport.TransportService.connectToNode(TransportService.java:302) ~[elasticsearch-5.4.2.jar:5.4.2]
    at org.elasticsearch.discovery.zen.ZenDiscovery.joinElectedMaster(ZenDiscovery.java:468) [elasticsearch-5.4.2.jar:5.4.2]
    at org.elasticsearch.discovery.zen.ZenDiscovery.innerJoinCluster(ZenDiscovery.java:420) [elasticsearch-5.4.2.jar:5.4.2]
    at org.elasticsearch.discovery.zen.ZenDiscovery.access$4100(ZenDiscovery.java:83) [elasticsearch-5.4.2.jar:5.4.2]
    at org.elasticsearch.discovery.zen.ZenDiscovery$JoinThreadControl$1.run(ZenDiscovery.java:1197) [elasticsearch-5.4.2.jar:5.4.2]
    at org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingRunnable.run(ThreadContext.java:569) [elasticsearch-5.4.2.jar:5.4.2]
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) [?:1.8.0_131]
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) [?:1.8.0_131]
    at java.lang.Thread.run(Thread.java:748) [?:1.8.0_131]
Caused by: io.netty.channel.AbstractChannel$AnnotatedConnectException: Connection refused: 172.24.0.2/172.24.0.2:9300
    at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) ~[?:?]
Answer 0 (score 0):
Now I use openvswitch and pipework to build the cluster network.
OVS setup:
#ES1
sudo brctl addbr br0
sudo ip link set dev br0 up
sudo ovs-vsctl add-br ovs0
sudo ovs-vsctl set bridge ovs0 stp_enable=true
sudo ovs-vsctl add-port ovs0 br0
sudo ovs-vsctl add-port ovs0 gre0 -- set interface gre0 type=gre options:remote_ip=10.251.34.50
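The second host presumably needs the mirror-image bridge and GRE endpoint. That side is not shown above; a sketch, where remote_ip is a placeholder for the first host's address:
#ES2 (assumed mirror of the ES1 commands)
sudo brctl addbr br0
sudo ip link set dev br0 up
sudo ovs-vsctl add-br ovs0
sudo ovs-vsctl set bridge ovs0 stp_enable=true
sudo ovs-vsctl add-port ovs0 br0
sudo ovs-vsctl add-port ovs0 gre0 -- set interface gre0 type=gre options:remote_ip=<ES1_HOST_IP>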
pipework setup:
sudo pipework-master/pipework br0 -i eth1 es1 172.28.0.2/8
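On the second host, a matching pipework command presumably attaches the es2 container to its br0 with an address in the same /8; the 172.18.0.2 address below is taken from the ping output and cluster logs further down:
#ES2 (assumed)
sudo pipework-master/pipework br0 -i eth1 es2 172.18.0.2/8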
Docker container network info:
elasticsearch@es1:/usr/share/elasticsearch$ ip a
521: eth1@if522: mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 4e:06:91:91:1f:3e brd ff:ff:ff:ff:ff:ff
    inet 172.28.0.2/8 brd 172.255.255.255 scope global eth1
       valid_lft forever preferred_lft forever
After both Docker containers are set up this way, the cluster works:
elasticsearch@es1:/usr/share/elasticsearch$ ping 172.18.0.2
PING 172.18.0.2 (172.18.0.2): 56 data bytes
64 bytes from 172.18.0.2: icmp_seq=0 ttl=64 time=1.222 ms
64 bytes from 172.18.0.2: icmp_seq=1 ttl=64 time=0.339 ms
64 bytes from 172.18.0.2: icmp_seq=2 ttl=64 time=0.547 ms
64 bytes from 172.18.0.2: icmp_seq=3 ttl=64 time=0.303 ms
64 bytes from 172.18.0.2: icmp_seq=4 ttl=64 time=0.333 ms
64 bytes from 172.18.0.2: icmp_seq=5 ttl=64 time=0.362 ms
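With the containers able to reach each other over the bridge, discovery also has to point at the peer addresses instead of loopback. The exact files are not shown here, but each node's elasticsearch.yml presumably lists both container IPs (addresses taken from the logs below), roughly:
# assumed discovery settings after the network change
network.publish_host: 172.28.0.2        # 172.18.0.2 on prod_es_node2
discovery.zen.ping.unicast.hosts: ["172.28.0.2", "172.18.0.2"]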
[2017-11-17T02:06:21,205][INFO ][o.e.n.Node ] [prod_es_node1] started
[2017-11-17T02:06:21,223][INFO ][o.e.c.s.ClusterService ] [prod_es_node1] new_master {prod_es_node1}{IoG7C5BYQ26AYsggMuJo2A}{AL3oD0itR1CgibIr-x-8Sg}{172.28.0.2}{172.28.0.2:9300}, reason: zen-disco-elected-as-master ([0] nodes joined)
[2017-11-17T02:06:21,254][INFO ][o.e.g.GatewayService ] [prod_es_node1] recovered [0] indices into cluster_state
[2017-11-17T02:06:29,205][INFO ][o.e.c.s.ClusterService ] [prod_es_node1] added {{prod_es_node2}{ZA7CwKyBQS2gm-OamVSz2g}{U6WQyzwoQCiDkwBvmUxoQw}{172.18.0.2}{172.18.0.2:9300},}, reason: zen-disco-node-join[{prod_es_node2}{ZA7CwKyBQS2gm-OamVSz2g}{U6WQyzwoQCiDkwBvmUxoQw}{172.18.0.2}{172.18.0.2:9300}]
curl localhost:9200/_cluster/health?pretty
{
"cluster_name" : "prod_es_cluster",
"status" : "green",
"timed_out" : false,
"number_of_nodes" : 2,
"number_of_data_nodes" : 2,
"active_primary_shards" : 0,
"active_shards" : 0,
"relocating_shards" : 0,
"initializing_shards" : 0,
"unassigned_shards" : 0,
"delayed_unassigned_shards" : 0,
"number_of_pending_tasks" : 0,
"number_of_in_flight_fetch" : 0,
"task_max_waiting_in_queue_millis" : 0,
"active_shards_percent_as_number" : 100.0
}