Clustering Dockerized Elasticsearch across multiple Docker hosts

Date: 2019-08-06 08:43:29

Tags: docker elasticsearch docker-compose

I am trying to form an Elasticsearch cluster with Docker Compose. I have two Elasticsearch Docker containers, each deployed on a different Docker host.

docker version: 18.06.3-ce
elasticsearch: 6.5.2

docker-compose.yml for docker-container-1

 services:
   elasticsearch:
     restart: always
     hostname: elasticsearch
     image: docker-elk/elasticsearch:1.0.0
     build:
       context: elasticsearch
       dockerfile: Dockerfile
     environment:
       discovery.type: zen
     ports:
       - 9200:9200
       - 9300:9300
     env_file:
       - ./elasticsearch/elasticsearch.env
     volumes:
       - elasticsearch_data:/usr/share/elasticsearch/data

docker-compose.yml for docker-container-2

 services:
   elasticsearch:
     restart: always
     hostname: elasticsearch
     image: docker-elk/elasticsearch:1.0.0
     build:
       context: elasticsearch
       dockerfile: Dockerfile
     environment:
       discovery.type: zen
     ports:
       - 9200:9200
       - 9300:9300
     env_file:
       - ./elasticsearch/elasticsearch.env
     volumes:
       - elasticsearch_data:/usr/share/elasticsearch/data
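
Both compose files mount a named volume `elasticsearch_data`, which is not declared in the snippets above. As a minimal sketch (the top-level section is assumed here, not taken from the original files, and the driver choice is an assumption), each file would also need something like:

 volumes:
   elasticsearch_data:
     driver: local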

elasticsearch.yml on elasticsearch-docker-container-1 (Docker-Host 1)

 xpack.security.enabled: true
 cluster.name: es-cluster
 node.name: es1
 network.host: 0.0.0.0
 node.master: true
 node.data: true
 transport.tcp.port: 9300
 path.data: /usr/share/elasticsearch/data
 path.logs: /usr/share/elasticsearch/logs
 discovery.zen.minimum_master_nodes: 2
 gateway.recover_after_nodes: 1
 discovery.zen.ping.unicast.hosts: ["host1:9300", "host2:9300","host1:9200", "host2:9200"]
 network.publish_host: host1

elasticsearch.yml on elasticsearch-docker-container-2 (Docker-Host 2)

 xpack.security.enabled: true
 cluster.name: es-cluster
 node.name: es2
 network.host: 0.0.0.0
 node.master: true
 node.data: true
 transport.tcp.port: 9300
 path.data: /usr/share/elasticsearch/data
 path.logs: /usr/share/elasticsearch/logs
 discovery.zen.minimum_master_nodes: 2
 gateway.recover_after_nodes: 1
 discovery.zen.ping.unicast.hosts: ["host1:9300", "host2:9300","host1:9200", "host2:9200"]
 network.publish_host: host2
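
For the two nodes to find each other, `host1` and `host2` in `discovery.zen.ping.unicast.hosts` and `network.publish_host` must resolve, from inside each container, to the routable address of the corresponding Docker host, and port 9300 must be reachable between the hosts. One way to pin this down is to add `extra_hosts` entries to each service in docker-compose.yml; this is only a sketch, and the IP addresses below are placeholders rather than values from the original setup:

 services:
   elasticsearch:
     extra_hosts:
       # map the hostnames used in elasticsearch.yml to the Docker hosts' IPs
       - "host1:192.0.2.10"   # example IP of Docker-Host 1
       - "host2:192.0.2.11"   # example IP of Docker-Host 2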

Below is the result of GET /_cluster/health?pretty, which shows that there is only one node.

 {
   "cluster_name" : "dps_geocluster",
   "status" : "yellow",
   "timed_out" : false,
   "number_of_nodes" : 1,
   "number_of_data_nodes" : 1,
   "active_primary_shards" : 33,
   "active_shards" : 33,
   "relocating_shards" : 0,
   "initializing_shards" : 0,
   "unassigned_shards" : 30,
   "delayed_unassigned_shards" : 0,
   "number_of_pending_tasks" : 0,
   "number_of_in_flight_fetch" : 0,
   "task_max_waiting_in_queue_millis" : 0,
   "active_shards_percent_as_number" : 52.38095238095239
 }

According to the documentation below, at least three Elasticsearch nodes are required: https://www.elastic.co/guide/en/elasticsearch/reference/6.5/modules-node.html

Should each Elasticsearch container be on a different Docker host?

1 Answer:

Answer 0 (score: 0)

The log line below shows what was causing the error. After increasing vm.max_map_count to 262144 with sysctl (see the commands after the log line), the cluster forms correctly.

elasticsearch_1  | [1]: max virtual memory areas vm.max_map_count [65530] is too low, increase to at least [262144]
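
A minimal way to apply the fix, run on each Docker host rather than inside the container (vm.max_map_count is a kernel setting that containers share with the host):

 # increase the mmap count limit required by Elasticsearch (takes effect immediately)
 sudo sysctl -w vm.max_map_count=262144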

The number of nodes is now 2.

{
  "cluster_name" : "es-cluster",
  "status" : "yellow",
  "timed_out" : false,
  "number_of_nodes" : 2,
  "number_of_data_nodes" : 2,
  "active_primary_shards" : 35,
  "active_shards" : 37,
  "relocating_shards" : 0,
  "initializing_shards" : 2,
  "unassigned_shards" : 31,
  "delayed_unassigned_shards" : 0,
  "number_of_pending_tasks" : 0,
  "number_of_in_flight_fetch" : 0,
  "task_max_waiting_in_queue_millis" : 0,
  "active_shards_percent_as_number" : 52.85714285714286
}
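
Note that the `sysctl -w` change above is lost on reboot; to make it permanent it can also be written to a sysctl configuration file on each Docker host, for example:

 # persist the setting across reboots
 echo "vm.max_map_count=262144" | sudo tee -a /etc/sysctl.conf
 sudo sysctl -p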