ELK Unassigned shards docker swarm

Date: 2018-02-20 14:07:24

Tags: docker elasticsearch logstash docker-swarm

I am not an ELK expert. I have a 2-node docker swarm cluster in which I want to deploy the ELK stack.

Here is my docker-compose.yml:

version: '3.4'

services:

  elk:
    image: docker.elastic.co/elasticsearch/elasticsearch-basic:6.2.1
    volumes:
      - ./elk/config/elasticsearch.yml:/usr/share/elasticsearch/config/elasticsearch.yml:ro
      - ./elk/data:/usr/share/elasticsearch/data
    ports:
      - "9200:9200"
      - "9300:9300"
    environment:
      ES_JAVA_OPTS: "-Xms256m -Xmx256m"
      ELASTIC_PASSWORD: changeme
    networks:
      - net
    deploy:
      mode: replicated
      replicas: 1

  logstash:
    image: docker.elastic.co/logstash/logstash:6.2.1
    volumes:
      - ./logstash/config/logstash.yml:/usr/share/logstash/config/logstash.yml:ro
      - ./logstash/pipeline:/usr/share/logstash/pipeline:ro
    ports:
      - "5000:5000"
      - "51415:51415"
    environment:
      LS_JAVA_OPTS: "-Xmx256m -Xms256m"
    networks:
      - net
    deploy:
      mode: replicated
      replicas: 1

  kibana:
    image: docker.elastic.co/kibana/kibana:6.2.1
    volumes:
      - ./kibana/config/kibana.yml:/usr/share/kibana/config/kibana.yml:ro
    ports:
      - "5601:5601"
    networks:
      - net
    deploy:
      mode: replicated
      replicas: 1

  logspout:
    image: gliderlabs/logspout:v3.2.4
    volumes:
      - '/var/run/docker.sock:/tmp/docker.sock'
    deploy:
      mode: global
    environment:
      SYSLOG_FORMAT: "rfc3164"
    command: 'syslog://logstash:51415'
    networks:
      - net

  apm-server:
    image: docker.elastic.co/apm/apm-server:6.2.0
    ports:
      - "8200:8200"
    volumes:
      - ./apmserver/apm-server.yml:/usr/share/apm-server/apm-server.yml
    networks:
      - net
    deploy:
      mode: replicated
      replicas: 1

networks:
  net:

Basically, I want to forward all docker container logs to logstash, which I do with logspout. Since only the ELK stack itself is running in the docker swarm, the logs logspout forwards to logstash are just those of the ELK stack containers.
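(The contents of ./logstash/pipeline are not shown here. For context, a minimal pipeline matching the ports published in the compose file could look like the sketch below; the elasticsearch credentials and the tcp/json input are assumptions, not my actual config.)

input {
  # logspout ships container logs as RFC3164 syslog to logstash:51415
  syslog {
    port => 51415
  }
  # assumed: the published port 5000 carries JSON lines from other sources
  tcp {
    port => 5000
    codec => json
  }
}

output {
  # "elk" is the elasticsearch service name on the overlay network
  elasticsearch {
    hosts => ["elk:9200"]
    user => "elastic"
    password => "changeme"
  }
}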

It runs fine for a few hours, after which I get an exception: org.elasticsearch.action.UnavailableShardsException primary shard is not active Timeout

Output of GET _cat/shards?h=index,shard,prirep,state,unassigned.reason:

.kibana                           0 p STARTED    
.triggered_watches                0 p STARTED    
.monitoring-logstash-6-2018.02.16 0 p UNASSIGNED ALLOCATION_FAILED
.monitoring-kibana-6-2018.02.19   0 p UNASSIGNED ALLOCATION_FAILED
.monitoring-es-6-2018.02.18       0 p UNASSIGNED ALLOCATION_FAILED
.watches                          0 p UNASSIGNED ALLOCATION_FAILED
.monitoring-logstash-6-2018.02.20 0 p UNASSIGNED ALLOCATION_FAILED
.monitoring-logstash-6-2018.02.17 0 p UNASSIGNED ALLOCATION_FAILED
.monitoring-es-6-2018.02.17       0 p UNASSIGNED ALLOCATION_FAILED
.watcher-history-7-2018.02.16     0 p UNASSIGNED ALLOCATION_FAILED
.monitoring-kibana-6-2018.02.20   0 p UNASSIGNED ALLOCATION_FAILED
.monitoring-es-6-2018.02.16       0 p UNASSIGNED ALLOCATION_FAILED
.monitoring-logstash-6-2018.02.19 0 p UNASSIGNED ALLOCATION_FAILED
.monitoring-es-6-2018.02.19       0 p UNASSIGNED ALLOCATION_FAILED
logstash-2018.02.16               0 p UNASSIGNED ALLOCATION_FAILED
.monitoring-kibana-6-2018.02.16   0 p STARTED    
.monitoring-logstash-6-2018.02.18 0 p UNASSIGNED ALLOCATION_FAILED
.monitoring-alerts-6              0 p STARTED    
.monitoring-kibana-6-2018.02.18   0 p UNASSIGNED ALLOCATION_FAILED
apm-6.2.0-2018.02.16              0 p STARTED    
.monitoring-kibana-6-2018.02.17   0 p UNASSIGNED ALLOCATION_FAILED
.monitoring-es-6-2018.02.20       0 p UNASSIGNED ALLOCATION_FAILED
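For shards stuck in ALLOCATION_FAILED, the allocation explain API reports the exact reason a shard cannot be placed. A diagnostic sketch, using one of the failed indices from the listing above:

GET _cluster/allocation/explain
{
  "index": "logstash-2018.02.16",
  "shard": 0,
  "primary": true
}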

Output of GET _template/logstash?pretty:
{
  "logstash": {
    "order": 0,
    "index_patterns": [
      "logstash-*"
    ],
    "settings": {
      "index": {
        "number_of_shards": "1",
        "number_of_replicas": "0"
      }
    },
   .....
  ......

Output of GET _cluster/health:
{
  "cluster_name": "test-cluster",
  "status": "red",
  "timed_out": false,
  "number_of_nodes": 1,
  "number_of_data_nodes": 1,
  "active_primary_shards": 5,
  "active_shards": 5,
  "relocating_shards": 0,
  "initializing_shards": 0,
  "unassigned_shards": 17,
  "delayed_unassigned_shards": 0,
  "number_of_pending_tasks": 0,
  "number_of_in_flight_fetch": 0,
  "task_max_waiting_in_queue_millis": 0,
  "active_shards_percent_as_number": 22.727272727272727
}
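Note that once a shard has failed to allocate index.allocation.max_retries times (5 by default), Elasticsearch stops retrying on its own, which is why the cluster stays red. After the underlying cause is resolved, allocation can be retried manually with a standard API call:

POST /_cluster/reroute?retry_failed=true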

elasticsearch.yml:

---
## Default Elasticsearch configuration from elasticsearch-docker.
## from https://github.com/elastic/elasticsearch-docker/blob/master/build/elasticsearch/elasticsearch.yml
#
cluster.name: "docker-cluster"
network.host: 0.0.0.0

# minimum_master_nodes need to be explicitly set when bound on a public IP
# set to 1 to allow single node clusters
# Details: https://github.com/elastic/elasticsearch/pull/17288
discovery.zen.minimum_master_nodes: 1

## Use single node discovery in order to disable production mode and avoid bootstrap checks
## see https://www.elastic.co/guide/en/elasticsearch/reference/current/bootstrap-checks.html
#
discovery.type: single-node

What can I do to fix this? Thanks.

2 Answers:

Answer 0 (score: 0)

I have the same problem as you. I think logspout is not doing its job, because it is not able to stay up. When I run the following, I see that the container keeps exiting:

docker ps -a

I suppose there is a configuration that would keep this container up and, therefore, forward the logs of all containers to logstash.
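(A guess at such a configuration, not a confirmed fix: a swarm restart_policy would at least bring the logspout task back up automatically whenever it exits. The delay value is an arbitrary choice.)

  logspout:
    image: gliderlabs/logspout:v3.2.4
    deploy:
      mode: global
      restart_policy:
        condition: any
        delay: 5s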

Answer 1 (score: 0)

The problem was that I was using glusterFS to synchronize data across all the nodes of the cluster: Elk on Docker Swarm and glusterFS crash
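In other words, keeping /usr/share/elasticsearch/data on a replicated filesystem appears to be what produced the ALLOCATION_FAILED shards. One workaround, assuming the data can live on a single node, is to pin the elk service to one host and use a local named volume instead; a sketch, with a placeholder hostname:

  elk:
    volumes:
      - esdata:/usr/share/elasticsearch/data
    deploy:
      placement:
        constraints:
          # replace with the actual hostname of the chosen swarm node
          - node.hostname == <es-node-hostname>

volumes:
  esdata: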