Copy data from an elastic.co cloud cluster to a local Docker Elasticsearch service

Date: 2018-03-30 09:46:04

Tags: docker elasticsearch

My goal is to move data from an Elasticsearch cluster hosted in the elastic.co cloud to a local Elasticsearch service defined in a docker-compose.yml file.

This SO question discusses how to add nodes to a cluster defined on the same machine. From the documentation:

  When you run a second node on the same machine, it will automatically discover and join the cluster as long as it has the same cluster.name as the first node. However, for nodes running on different machines to join the same cluster, you need to configure a list of unicast hosts the nodes can contact to join the cluster. For more information, see the documentation on preferring unicast over multicast.

The elasticsearch service defined in the docker-compose.yml file is:

  elasticsearch1:
    image: docker.elastic.co/elasticsearch/elasticsearch:6.1.1
    ports:
      - "9200:9200"
      - "9300:9300"
    mem_limit: 1g
    environment:
      - cluster.name=97ec5e0ea90e0016e26f078f767b4ea4
      # - bootstrap.memory_lock=true
      - node.name=ec2-001
      # - discovery.zen.ping.multicast.enabled=false # with elastic
      - "ES_JAVA_OPTS=-Xms512m -Xmx512m"
      - "discovery.zen.ping.unicast.hosts=ip.of.elastic.co.cluster"

Here are the logs from the Docker image:

Attaching to dockersicnamecheck_elasticsearch1_1
elasticsearch1_1  | [2018-03-30T09:39:42,601][INFO ][o.e.n.Node               ] [ec2-001] initializing ...
elasticsearch1_1  | [2018-03-30T09:39:42,711][INFO ][o.e.e.NodeEnvironment    ] [ec2-001] using [1] data paths, mounts [[/ (overlay)]], net usable_space [12.8gb], net total_space [19.3gb], types [overlay]
elasticsearch1_1  | [2018-03-30T09:39:42,713][INFO ][o.e.e.NodeEnvironment    ] [ec2-001] heap size [495.3mb], compressed ordinary object pointers [true]
elasticsearch1_1  | [2018-03-30T09:39:42,716][INFO ][o.e.n.Node               ] [ec2-001] node name [ec2-001], node ID [btVubyQGQWCvek5bNwIMBg]
elasticsearch1_1  | [2018-03-30T09:39:42,717][INFO ][o.e.n.Node               ] [ec2-001] version[6.1.1], pid[1], build[bd92e7f/2017-12-17T20:23:25.338Z], OS[Linux/4.4.0-1052-aws/amd64], JVM[Oracle Corporation/OpenJDK 64-Bit Server VM/1.8.0_151/25.151-b12]
elasticsearch1_1  | [2018-03-30T09:39:42,717][INFO ][o.e.n.Node               ] [ec2-001] JVM arguments [-Xms1g, -Xmx1g, -XX:+UseConcMarkSweepGC, -XX:CMSInitiatingOccupancyFraction=75, -XX:+UseCMSInitiatingOccupancyOnly, -XX:+AlwaysPreTouch, -Xss1m, -Djava.awt.headless=true, -Dfile.encoding=UTF-8, -Djna.nosys=true, -XX:-OmitStackTraceInFastThrow, -Dio.netty.noUnsafe=true, -Dio.netty.noKeySetOptimization=true, -Dio.netty.recycler.maxCapacityPerThread=0, -Dlog4j.shutdownHookEnabled=false, -Dlog4j2.disable.jmx=true, -XX:+HeapDumpOnOutOfMemoryError, -Des.cgroups.hierarchy.override=/, -Xms512m, -Xmx512m, -Des.path.home=/usr/share/elasticsearch, -Des.path.conf=/usr/share/elasticsearch/config]
elasticsearch1_1  | [2018-03-30T09:39:44,926][INFO ][o.e.p.PluginsService     ] [ec2-001] loaded module [aggs-matrix-stats]
elasticsearch1_1  | [2018-03-30T09:39:44,927][INFO ][o.e.p.PluginsService     ] [ec2-001] loaded module [analysis-common]
elasticsearch1_1  | [2018-03-30T09:39:44,927][INFO ][o.e.p.PluginsService     ] [ec2-001] loaded module [ingest-common]
elasticsearch1_1  | [2018-03-30T09:39:44,927][INFO ][o.e.p.PluginsService     ] [ec2-001] loaded module [lang-expression]
elasticsearch1_1  | [2018-03-30T09:39:44,932][INFO ][o.e.p.PluginsService     ] [ec2-001] loaded module [lang-mustache]
elasticsearch1_1  | [2018-03-30T09:39:44,932][INFO ][o.e.p.PluginsService     ] [ec2-001] loaded module [lang-painless]
elasticsearch1_1  | [2018-03-30T09:39:44,932][INFO ][o.e.p.PluginsService     ] [ec2-001] loaded module [mapper-extras]
elasticsearch1_1  | [2018-03-30T09:39:44,932][INFO ][o.e.p.PluginsService     ] [ec2-001] loaded module [parent-join]
elasticsearch1_1  | [2018-03-30T09:39:44,932][INFO ][o.e.p.PluginsService     ] [ec2-001] loaded module [percolator]
elasticsearch1_1  | [2018-03-30T09:39:44,932][INFO ][o.e.p.PluginsService     ] [ec2-001] loaded module [reindex]
elasticsearch1_1  | [2018-03-30T09:39:44,932][INFO ][o.e.p.PluginsService     ] [ec2-001] loaded module [repository-url]
elasticsearch1_1  | [2018-03-30T09:39:44,932][INFO ][o.e.p.PluginsService     ] [ec2-001] loaded module [transport-netty4]
elasticsearch1_1  | [2018-03-30T09:39:44,932][INFO ][o.e.p.PluginsService     ] [ec2-001] loaded module [tribe]
elasticsearch1_1  | [2018-03-30T09:39:44,933][INFO ][o.e.p.PluginsService     ] [ec2-001] loaded plugin [ingest-geoip]
elasticsearch1_1  | [2018-03-30T09:39:44,933][INFO ][o.e.p.PluginsService     ] [ec2-001] loaded plugin [ingest-user-agent]
elasticsearch1_1  | [2018-03-30T09:39:44,933][INFO ][o.e.p.PluginsService     ] [ec2-001] loaded plugin [x-pack]
elasticsearch1_1  | [2018-03-30T09:39:48,918][INFO ][o.e.x.m.j.p.l.CppLogMessageHandler] [controller/84] [Main.cc@128] controller (64 bit): Version 6.1.1 (Build c508cf991ee61c) Copyright (c) 2017 Elasticsearch BV
elasticsearch1_1  | [2018-03-30T09:39:49,453][INFO ][o.e.d.DiscoveryModule    ] [ec2-001] using discovery type [zen]
elasticsearch1_1  | [2018-03-30T09:39:50,366][INFO ][o.e.n.Node               ] [ec2-001] initialized
elasticsearch1_1  | [2018-03-30T09:39:50,366][INFO ][o.e.n.Node               ] [ec2-001] starting ...
elasticsearch1_1  | [2018-03-30T09:39:50,553][INFO ][o.e.t.TransportService   ] [ec2-001] publish_address {172.18.0.1:9300}, bound_addresses {[::]:9300}
elasticsearch1_1  | [2018-03-30T09:39:50,596][INFO ][o.e.b.BootstrapChecks    ] [ec2-001] bound or publishing to a non-loopback or non-link-local address, enforcing bootstrap checks
elasticsearch1_1  | [2018-03-30T09:39:53,713][INFO ][o.e.c.s.MasterService    ] [ec2-001] zen-disco-elected-as-master ([0] nodes joined), reason: new_master {ec2-001}{btVubyQGQWCvek5bNwIMBg}{YumJMzVvQAScwJO8kLp5nA}{172.18.0.1}{172.18.0.1:9300}{ml.machine_memory=1073741824, ml.max_open_jobs=20, ml.enabled=true}
elasticsearch1_1  | [2018-03-30T09:39:53,722][INFO ][o.e.c.s.ClusterApplierService] [ec2-001] new_master {ec2-001}{btVubyQGQWCvek5bNwIMBg}{YumJMzVvQAScwJO8kLp5nA}{172.18.0.1}{172.18.0.1:9300}{ml.machine_memory=1073741824, ml.max_open_jobs=20, ml.enabled=true}, reason: apply cluster state (from master [master {ec2-001}{btVubyQGQWCvek5bNwIMBg}{YumJMzVvQAScwJO8kLp5nA}{172.18.0.1}{172.18.0.1:9300}{ml.machine_memory=1073741824, ml.max_open_jobs=20, ml.enabled=true} committed version [1] source [zen-disco-elected-as-master ([0] nodes joined)]])
elasticsearch1_1  | [2018-03-30T09:39:53,749][INFO ][o.e.x.s.t.n.SecurityNetty4HttpServerTransport] [ec2-001] publish_address {172.18.0.1:9200}, bound_addresses {[::]:9200}
elasticsearch1_1  | [2018-03-30T09:39:53,750][INFO ][o.e.n.Node               ] [ec2-001] started
elasticsearch1_1  | [2018-03-30T09:39:53,847][INFO ][o.e.g.GatewayService     ] [ec2-001] recovered [0] indices into cluster_state
elasticsearch1_1  | [2018-03-30T09:39:54,453][INFO ][o.e.l.LicenseService     ] [ec2-001] license [7da059ea-edbb-4ec5-a199-cc9374b546d2] mode [basic] - valid
elasticsearch1_1  | [2018-03-30T09:40:00,503][INFO ][o.e.c.m.MetaDataCreateIndexService] [ec2-001] [.monitoring-es-6-2018.03.30] creating index, cause [auto(bulk api)], templates [.monitoring-es], shards [1]/[1], mappings [doc]

The local Docker node does not see the elastic.co cloud node. Any suggestions?

Related: Elasticsearch in docker container cluster

1 answer:

Answer 0 (score: 1):

You should not do that. Splitting a cluster across multiple data centers is not supported, and especially not across a cloud service. It would also mean the cloud cluster has to reach your local machine, which I doubt you expose directly on the internet.

But back to the actual need: you want to export the data available in the cloud to your local machine.

You can start the local cluster exactly as you did, but let it run as a cluster of its own. Then, using the reindex from remote API, you can read from the remote (cloud) cluster and reindex the documents locally, as sketched below.
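Here is a minimal sketch of such a reindex-from-remote call against the local node; the host, credentials and index names are placeholders and do not come from the question. Note that the local node must first whitelist the remote endpoint through the reindex.remote.whitelist setting, for example as an extra environment entry in the docker-compose.yml.

# Sketch: copy one index from the elastic.co cloud cluster into the local node.
# Host, credentials and index name below are placeholders - replace with your own.
curl -X POST "http://localhost:9200/_reindex" -H 'Content-Type: application/json' -d'
{
  "source": {
    "remote": {
      "host": "https://your-cluster-id.region.aws.found.io:9243",
      "username": "elastic",
      "password": "your-password"
    },
    "index": "my-index"
  },
  "dest": {
    "index": "my-index"
  }
}
'

Repeat the request for each index you want to copy; the local cluster keeps running on its own and never needs to join the cloud cluster.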

In my opinion, that is much easier.