Why won't my fluentd work with Elasticsearch?

Asked: 2016-11-30 09:03:00

Tags: logging elasticsearch docker fluentd

I'm trying to collect Docker logs with fluentd and Elasticsearch. Here is the log output from starting fluentd:

2016-11-30 16:29:34 +0800 [info]: starting fluentd-0.12.19
2016-11-30 16:29:34 +0800 [info]: gem 'fluent-mixin-config-placeholders' version '0.3.0'
2016-11-30 16:29:34 +0800 [info]: gem 'fluent-mixin-plaintextformatter' version '0.2.6'
2016-11-30 16:29:34 +0800 [info]: gem 'fluent-plugin-elasticsearch' version '1.7.0'
2016-11-30 16:29:34 +0800 [info]: gem 'fluent-plugin-mongo' version '0.7.11'
2016-11-30 16:29:34 +0800 [info]: gem 'fluent-plugin-rewrite-tag-filter' version '1.5.3'
2016-11-30 16:29:34 +0800 [info]: gem 'fluent-plugin-s3' version '0.6.4'
2016-11-30 16:29:34 +0800 [info]: gem 'fluent-plugin-scribe' version '0.10.14'
2016-11-30 16:29:34 +0800 [info]: gem 'fluent-plugin-secure-forward' version '0.4.3'
2016-11-30 16:29:34 +0800 [info]: gem 'fluent-plugin-td' version '0.10.28'
2016-11-30 16:29:34 +0800 [info]: gem 'fluent-plugin-td-monitoring' version '0.2.1'
2016-11-30 16:29:34 +0800 [info]: gem 'fluent-plugin-webhdfs' version '0.4.1'
2016-11-30 16:29:34 +0800 [info]: gem 'fluentd' version '0.12.19'
2016-11-30 16:29:34 +0800 [info]: adding match pattern="td.*.*" type="tdlog"
2016-11-30 16:29:34 +0800 [info]: adding match pattern="debug.**" type="stdout"
2016-11-30 16:29:34 +0800 [info]: adding match pattern="docker.**" type="stdout"
2016-11-30 16:29:34 +0800 [info]: adding match pattern="*.**" type="copy"
2016-11-30 16:29:35 +0800 [info]: adding source type="forward"
2016-11-30 16:29:35 +0800 [info]: adding source type="http"
2016-11-30 16:29:35 +0800 [info]: adding source type="debug_agent"
2016-11-30 16:29:35 +0800 [info]: using configuration file: <ROOT>

  <match td.*.*>  
    type tdlog  
    apikey xxxxxx  
    auto_create_table  
    buffer_type file  
    buffer_path /var/log/td-agent/buffer/td  
    <secondary>  
      type file  
      path /var/log/td-agent/failed_records
      buffer_path /var/log/td-agent/failed_records.*
    </secondary>
  </match>
  <match debug.**>
    type stdout
  </match>
  <match docker.**>
    type stdout
  </match>
  <match *.**>
    type copy
    <store>
      @type elasticsearch
      host localhost
      port 9200
      include_tag_key true
      tag_key log_name
      logstash_format true
      flush_interval 1s
    </store>
  </match>
  <source>
    type forward
  </source>
  <source>
    type http
    port 8888
  </source>
  <source>
    type debug_agent
    bind 127.0.0.1
    port 24230
  </source>
</ROOT>

2016-11-30 16:29:35 +0800 [info]: listening fluent socket on 0.0.0.0:24224  
2016-11-30 16:29:35 +0800 [info]: listening dRuby uri="druby://127.0.0.1:24230" object="Engine"  
2016-11-30 16:29:38 +0800 docker.40271db2b565: {"log":"1:M 30 Nov 08:29:38.065 # User requested shutdown...","container_id":"40271db2b565d52fa0ab54bde2b0fa4b61e4ca033fca2b7edcf54c1a93443c19","container_name":"/tender_banach","source":"stdout"}

I'm using Elasticsearch's default configuration. After I start it, it keeps logging messages like this:

[2016-11-30T16:49:32,154][WARN ][o.e.c.r.a.DiskThresholdMonitor] [I_hB3Vd] high disk watermark [90%] exceeded on [I_hB3VdfQ3q3hBeP5skTBQ][I_hB3Vd][/Users/it/Desktop/elasticsearch-5.0.1/data/nodes/0] free: 10gb[8.9%], shards will be relocated away from this node

My Elasticsearch version is 5.0.1; the fluentd and fluent-plugin-elasticsearch versions are shown above. I'm running Mac OS 10.11.6. I've tried every method I could find online. Can anyone help?

2 Answers:

Answer 0 (score: 0)

I forgot to mention that my Elasticsearch version is 5.0.1; my fluentd and fluent-plugin-elasticsearch versions are shown above, and I'm using Mac OS 10.11.6.

Answer 1 (score: 0)

It looks like fluentd is starting up just fine. I'd say there is a problem with your Elasticsearch (specifically the data nodes, if you have multiple nodes). The error says as much: it looks like a disk-space problem on one of your nodes.
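That warning means disk usage on the node's data path has crossed Elasticsearch's 90% high watermark, so the node tries to move shards away and may refuse new allocations, which can break indexing from fluentd. The real fix is to free space under the data path. As a temporary workaround, you can raise the watermarks through the cluster settings API. This is only a sketch, assuming Elasticsearch is reachable at `localhost:9200` as in your fluentd config; the setting names below are the standard disk-allocation settings in the 5.x line:

```json
{
  "transient": {
    "cluster.routing.allocation.disk.watermark.low": "95%",
    "cluster.routing.allocation.disk.watermark.high": "97%"
  }
}
```

You would apply it with something like `curl -XPUT 'localhost:9200/_cluster/settings' -d @settings.json`. Note that `transient` settings are lost on a full cluster restart, so treat this as a stopgap while you clear disk space, not a permanent fix.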