Elasticsearch in docker stack / swarm

Date: 2019-04-03 16:46:50

Tags: docker elasticsearch docker-swarm

I have two nodes:

[ra@speechanalytics-test ~]$ docker node ls
ID                            HOSTNAME                  STATUS              AVAILABILITY        MANAGER STATUS      ENGINE VERSION
mlwwmkdlzbv0zlapqe1veq3uq     speechanalytics-preprod   Ready               Active                                  18.09.3
se717p88485s22s715rdir9x2 *   speechanalytics-test      Ready               Active              Leader              18.09.3

I am trying to run an Elasticsearch container in a stack. Here is my docker-compose.yml file:

version: '3.4'
services:
  elastic:
    image: docker.elastic.co/elasticsearch/elasticsearch:6.7.0
    environment:
      - cluster.name=single-node
      - bootstrap.memory_lock=true
      - "ES_JAVA_OPTS=-Xms512m -Xmx512m"
    ulimits:
      memlock:
        soft: -1
        hard: -1
    volumes:
      - esdata:/usr/share/elasticsearch/data
    deploy:
      placement:
        constraints:
          - node.hostname==speechanalytics-preprod

volumes:
  esdata:
    driver: local

After starting the stack with

docker stack deploy preprod -c docker-compose.yml

the containers crash within 20 seconds.
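The restart loop is also visible in the task list (exact output omitted here), for example:

docker service ps --no-trunc preprod_elastic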

docker service logs preprod_elastic 
...
   | OpenJDK 64-Bit Server VM warning: Option UseConcMarkSweepGC was deprecated in version 9.0 and will likely be removed in a future release.
   | OpenJDK 64-Bit Server VM warning: UseAVX=2 is not supported on this CPU, setting it to UseAVX=0
   | [2019-04-03T16:41:30,044][WARN ][o.e.b.JNANatives         ] [unknown] Unable to lock JVM Memory: error=12, reason=Cannot allocate memory
   | [2019-04-03T16:41:30,049][WARN ][o.e.b.JNANatives         ] [unknown] This can result in part of the JVM being swapped out.
   | [2019-04-03T16:41:30,049][WARN ][o.e.b.JNANatives         ] [unknown] Increase RLIMIT_MEMLOCK, soft limit: 16777216, hard limit: 16777216
   | [2019-04-03T16:41:30,050][WARN ][o.e.b.JNANatives         ] [unknown] These can be adjusted by modifying /etc/security/limits.conf, for example:
   | OpenJDK 64-Bit Server VM warning: Option UseConcMarkSweepGC was deprecated in version 9.0 and will likely be removed in a future release.
   |     # allow user 'elasticsearch' mlockall
   | OpenJDK 64-Bit Server VM warning: UseAVX=2 is not supported on this CPU, setting it to UseAVX=0
   |     elasticsearch soft memlock unlimited
   | [2019-04-03T16:41:02,949][WARN ][o.e.b.JNANatives         ] [unknown] Unable to lock JVM Memory: error=12, reason=Cannot allocate memory
   |     elasticsearch hard memlock unlimited
   | [2019-04-03T16:41:02,954][WARN ][o.e.b.JNANatives         ] [unknown] This can result in part of the JVM being swapped out.
   | [2019-04-03T16:41:30,050][WARN ][o.e.b.JNANatives         ] [unknown] If you are logged in interactively, you will have to re-login for the new limits to take effect.
   | [2019-04-03T16:41:02,954][WARN ][o.e.b.JNANatives         ] [unknown] Increase RLIMIT_MEMLOCK, soft limit: 16777216, hard limit: 16777216

On both nodes I have:

ra@speechanalytics-preprod:~$ sysctl vm.max_map_count
vm.max_map_count = 262144

Any ideas how to fix this?

1 Answer:

Answer 0 (score: 3):

The memory lock error you are seeing from Elasticsearch is a common issue that is not unique to Docker; it occurs whenever Elasticsearch is told to lock its memory but is unable to do so. You can avoid the error by removing the following environment variable from the docker-compose.yml file:

- bootstrap.memory_lock=true
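For reference, the environment section would then look roughly like this (the ulimits: block can also be dropped, since docker stack deploy ignores it anyway):

services:
  elastic:
    # ... rest of the service definition unchanged ...
    environment:
      - cluster.name=single-node
      - "ES_JAVA_OPTS=-Xms512m -Xmx512m"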

Memlock can be used with Docker Swarm mode, but with some caveats.

Not all options that work with docker-compose (Docker Compose) also work with docker stack deploy (Docker Swarm mode), and vice versa, even though both share the docker-compose YAML syntax. One such option is ulimits:, which, when used with docker stack deploy, is ignored with a warning message like this one:

Ignoring unsupported options: ulimits

My guess is that with your docker-compose.yml file, Elasticsearch runs fine with docker-compose up, but fails with docker stack deploy.

In Docker Swarm mode, by default, the Elasticsearch instance as you have defined it will run into the memory lock issue. Setting ulimits for docker swarm services is not yet officially supported. There are, however, a few ways to work around the problem.

If the host is Ubuntu, unlimited memlock can be enabled across the Docker service (see here and here). This can be done with the following commands:

echo -e "[Service]\nLimitMEMLOCK=infinity" | SYSTEMD_EDITOR=tee systemctl edit docker.service
systemctl daemon-reload
systemctl restart docker
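Assuming the override was applied on the node that will run Elasticsearch, it can be verified before redeploying the stack, for example (the ubuntu image below is used purely for illustration):

# the docker unit should now report LimitMEMLOCK=infinity
systemctl show docker -p LimitMEMLOCK
# containers inherit the daemon's limits by default, so this should print "unlimited"
docker run --rm ubuntu bash -c 'ulimit -l'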

However, setting memlock to infinity is not without its drawbacks, as spelled out by Elastic themselves here.

Based on my testing, this solution works on Docker 18.06, but not on 18.09. Given the inconsistency and the possibility of Elasticsearch failing to start, the better option would be to not use memlock with Elasticsearch when deploying on Swarm. Instead, you can opt for any of the other methods mentioned in the Elasticsearch Docs to achieve similar results.
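For example, one of the alternatives described in the Elasticsearch docs is to reduce or disable swapping on the host itself instead of locking memory inside the container; a rough sketch on an Ubuntu host might look like this:

# option 1: turn off swap entirely (until the next reboot)
sudo swapoff -a
# option 2: keep swap enabled but tell the kernel to avoid it
sudo sysctl -w vm.swappiness=1
echo 'vm.swappiness=1' | sudo tee /etc/sysctl.d/99-elasticsearch.conf   # persist the setting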